Simplyblock targets ‘complex’ Ceph with software-defined NVMe
German startup Simplyblock aims to deliver low-cost, high-performance flash and NVMe-over-TCP storage for service provider customers, and has Ceph deployments in its sights
IOPS of 66.9 million and latency of 6.2µs on an eight-node software-defined hyper-converged cluster, compared with 0.77 million IOPS and 3.35ms latency from Ceph on the same configuration. That’s the claim of Simplyblock, a German startup of the same name that will target private cloud hosting providers.
“Our intent is to supply storage systems for providers of cloud services that don’t have the means to develop their own technology,” said Rob Pankow, CEO of Simplyblock. “Most often, that will mean local hosts that are chosen because customers want human-scale relationships.”
Storage is possibly the most complex layer in IT infrastructure, and in smaller hosts generally takes the form of disk and flash arrays usually bought from the main datacentre hardware suppliers.
“The most technophile hosting providers will often deploy a Ceph open source storage system on generic servers,” said Pankow. “But the majority of solutions we see deployed are flash arrays from Pure Storage or NetApp. These are excessively expensive products, but we offer an alternative that will cost 10 times less.”
But is such software-defined storage suited to all enterprises? Simplyblock doesn’t have the size to rival the high-level functionality offered by datacentre storage providers right now. Its aim is to target those teams that currently deploy Ceph, which is notoriously complex.
A containerised architecture optimised for NVMe
LeMagIT – French sister site to Computer Weekly – met Simplyblock on an IT Press Tour event in Berlin earlier this year. During that encounter, it wasn’t possible to determine precisely whether or not Simplyblock’s software-defined storage was a customised version of Ceph, optimised in some key areas.
“Ceph is an open source system that can do anything on all types of hardware from the moment you get it configured,” said Michael Schmidt, technical director at the startup. “Simplyblock builds on this with code optimised for NVMe SSD and for storage nodes connected to servers via NVMe-over-TCP.”
Schmidt added that Simplyblock uses open source Linux NVMe and NVMe-over-TCP drivers. However, optimised 64-bit code and a new algorithm that provides redundancy between storage nodes using erasure coding are key to reaching millions of IOPS per x86 core in the cluster.
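Simplyblock has not published the algorithm itself. As a rough illustration of the principle — spreading redundancy across nodes so a failed node’s shard can be rebuilt from the survivors — here is a minimal single-parity sketch in Python (XOR parity, as in RAID-5; the function names and choice of scheme are illustrative assumptions, not Simplyblock’s code):

```python
def encode(block: bytes, k: int) -> list[bytes]:
    """Split `block` into k equal data shards plus one XOR parity shard."""
    if len(block) % k:
        block += b"\x00" * (k - len(block) % k)  # pad to a multiple of k
    size = len(block) // k
    shards = [bytearray(block[i * size:(i + 1) * size]) for i in range(k)]
    parity = bytearray(size)
    for shard in shards:                 # parity byte = XOR of the data shards
        for i, b in enumerate(shard):
            parity[i] ^= b
    return [bytes(s) for s in shards] + [bytes(parity)]

def recover(shards: list, lost: int) -> bytes:
    """Rebuild the shard at index `lost` by XOR-ing all surviving shards."""
    size = len(next(s for s in shards if s is not None))
    out = bytearray(size)
    for idx, shard in enumerate(shards):
        if idx == lost:
            continue
        for i, b in enumerate(shard):
            out[i] ^= b
    return bytes(out)

# Spread one block across 4 "nodes" plus parity, then lose node 2 and rebuild.
pieces = encode(b"simplyblock-cluster", 4)
rebuilt = recover(pieces[:2] + [None] + pieces[3:], lost=2)
assert rebuilt == pieces[2]
```

A single XOR parity shard tolerates one lost node; production erasure-coding schemes (such as Reed-Solomon) extend the same idea to survive multiple simultaneous failures.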
“We can put up to 255 nodes in our clusters, each contributing its x86 cores to erasure coding and block sharing, and each with internal SSD storage capacity. Our clusters are accessible in block mode to tens of thousands of VMs simultaneously.”
According to documentation supplied by Simplyblock, each application server is equipped with a driver to communicate in block mode with the cluster. Or, more precisely, with one of the nodes in the cluster, each being chosen in turn.
At the heart of each cluster node, Simplyblock’s software-defined storage is containerised. One container manages erasure coding and block sharing, another manages I/O on internal drives, and a Management Domain container indexes logical volumes in the cluster. More precisely, the Management Domain container indexes the other containers present in the cluster and shares the load between them, with communication via API.
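The Management Domain’s role — indexing the worker containers and spreading requests across them, with client drivers addressing nodes in turn — can be pictured as a simple round-robin dispatcher. The sketch below is a hypothetical illustration of that pattern (names and structure are assumed, not taken from Simplyblock’s implementation):

```python
from itertools import cycle

class ManagementDomain:
    """Hypothetical sketch: index I/O containers and share load round-robin."""

    def __init__(self):
        self._containers = []
        self._rr = None

    def register(self, container_id: str):
        # A new I/O container announces itself to the Management Domain.
        self._containers.append(container_id)
        self._rr = cycle(self._containers)  # rebuild the round-robin iterator

    def route(self, volume: str) -> str:
        # Pick the next container in turn to serve this volume's I/O.
        return f"{volume} -> {next(self._rr)}"

md = ManagementDomain()
for cid in ("io-node-1", "io-node-2", "io-node-3"):
    md.register(cid)

print(md.route("vol-42"))  # vol-42 -> io-node-1
print(md.route("vol-42"))  # vol-42 -> io-node-2
```

Real dispatchers would weigh node load and health rather than rotate blindly, but round-robin captures the “each node chosen in turn” behaviour the documentation describes.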
Simplyblock said a cluster can run with just two nodes, but recommends deploying at least five, all with NVMe SSDs, to get the best performance compared with Ceph. The kind of performance figures mentioned at the beginning of this article were gained from an eight-node cluster, each equipped with two Intel Xeon 6126 (12 cores each) at 2.6 GHz, 512GB of RAM, 10 NVMe SSDs of 7.68TB and two 100Gbps Ethernet connections.
Tomorrow, the cloud, data reduction, and file and object access
Simplyblock is a startup that’s only really just starting up. “We don’t yet have any customers, and we are in a testing phase with several German and Austrian hosting companies,” said Pankow.
And it has a number of technical additions in the pipeline. Between now and the end of the year, it will get the ability to interface with Simplyblock virtual clusters in the AWS cloud or on OpenStack infrastructure. It’s possible that will extend to Azure and GCP between now and the start of 2024, too.
“We have already tested the hosted version of our system on AWS EC2 VMs,” said Pankow. “It is interesting to compare the performance of such a deployment with AWS EBS. We obtain the same performance as the top-of-the-range EBS io2 Block Express service [usually aimed at high-performance database workloads], which is 1,000 IOPS per GB and 4GBps of throughput per volume. An entry-level EBS service only offers 50 IOPS per GB and 1GBps.”
Also, towards the end of 2024, Simplyblock will add data reduction technology expected to shrink stored data by a factor of three. At the start of next year, file and object storage modes are planned, with those run in new containers. These additions will come alongside a new CSI driver to allow these to be used in conjunction with a Kubernetes cluster.
Finally, at the end of 2024, Simplyblock plans to implement snapshots that can enable rapid restores of the most recently healthy datasets in case of cyber attack.