Hyper-converged infrastructure 2018 technology and market survey

We look at the latest in hyper-converged infrastructure – server and storage hardware nodes that can be built into scale-out clusters – and the players in the market

Hyper-converged infrastructure (HCI) has been around for a number of years. HCI systems consolidate the traditionally separate functions of compute (server) and storage into a single scale-out hardware platform.

In this article, we review what hyper-converged infrastructure means today, the suppliers that sell HCI and where the technology is headed.

HCI systems are predicated on merging the separate physical components of server and storage into a single hardware appliance. Suppliers sell complete appliances, or users can build their own from software and hardware components readily available on the market.

The main benefit of implementing hyper-converged infrastructure is the cost saving that derives from a simpler operational infrastructure.

The integration of storage features into the server platform, typically through scale-out file systems, allows the management of LUNs and volumes to be eliminated, or at least hidden from the administrator. As a result, HCI can be operated by IT generalists, rather than needing the separate teams traditionally found in many IT organisations.

HCI implementations are typically scale-out, based on deployment of multiple servers or nodes in a cluster. Storage resources are distributed across the nodes to provide resilience against the failure of any component or node.
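
To make the resilience point concrete, here is a minimal sketch, in Python, of how a distributed storage layer might place replicas of each data block on distinct nodes. The node names and replication factor are invented for illustration; real products use more sophisticated placement that accounts for fault domains and free capacity.

```python
# Minimal sketch: place each data block's replicas on distinct nodes so
# that the failure of any single node leaves a surviving copy.
NODES = ["node1", "node2", "node3", "node4"]    # hypothetical cluster
REPLICATION_FACTOR = 2                          # assumed policy

def place_replicas(block_id: int) -> list[str]:
    """Return the nodes holding copies of one data block."""
    start = block_id % len(NODES)               # spread primaries round-robin
    return [NODES[(start + i) % len(NODES)] for i in range(REPLICATION_FACTOR)]

for block in range(6):
    print(f"block {block} -> {place_replicas(block)}")
```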

Distributing storage provides other advantages. Data can sit closer to compute than with a storage area network, making it possible to benefit from faster storage technologies such as NVMe and NVDIMM.

The scale-out nature of HCI also provides financial advantages, as clusters can generally be built out in increments of a single node at a time. IT departments can buy nearer to the time the hardware is needed, rather than buying up-front and under-utilising equipment. As a new node is added to a cluster, resources are automatically rebalanced, so little additional work is needed other than rack, stack and connect to the network.
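
The automatic rebalancing can be illustrated with rendezvous (highest-random-weight) hashing, one common placement technique in the consistent-hashing family; individual HCI products use their own proprietary algorithms. In this hypothetical sketch, adding a fourth node to a three-node cluster moves only about a quarter of the data blocks:

```python
import hashlib

def owner(key: str, nodes: list[str]) -> str:
    # Rendezvous hashing: each block is owned by the node scoring the
    # highest hash for that block, so placement needs no central table.
    return max(nodes, key=lambda n: hashlib.md5(f"{n}:{key}".encode()).hexdigest())

before = ["node1", "node2", "node3"]
after = before + ["node4"]                      # new node joins the cluster

keys = [f"block-{i}" for i in range(10_000)]
moved = sum(owner(k, before) != owner(k, after) for k in keys)
print(f"{moved / len(keys):.0%} of blocks move")  # roughly 25% with 4 nodes
```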

Shared core

Most HCI implementations have what is known as a “shared core” design. This means storage and compute (virtual machines) compete for the same processors and memory. In general, this could be seen as a benefit because it reduces wasted resources.

However, in the light of the recent Spectre/Meltdown vulnerabilities, I/O-intensive applications (such as storage) will see a significant upswing in processor utilisation once systems are patched. This could mean users having to buy more equipment simply to run the same workloads. Storage array suppliers claim that “closed” arrays don’t need patching and so won’t suffer this performance degradation.
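
A back-of-the-envelope calculation shows the shared-core effect. All the figures below (core counts, the storage layer’s reservation and the patch overhead) are illustrative assumptions, not vendor numbers:

```python
# Shared-core sizing sketch: storage and VMs compete for the same CPUs,
# and a patch that inflates I/O-heavy CPU usage shrinks the VM share.
cores_per_node = 40          # assumed dual-socket node
storage_cores = 8            # assumed storage-layer reservation
patch_overhead = 0.25        # assumed extra CPU cost on I/O-heavy work

vm_cores_before = cores_per_node - storage_cores
vm_cores_after = cores_per_node - storage_cores * (1 + patch_overhead)

print(f"cores left for VMs: {vm_cores_before} before, {vm_cores_after:.0f} after patching")
# A cluster sized close to full utilisation before the patch may need
# extra nodes afterwards just to run the same workloads.
```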

But running servers and storage separately still has advantages for some customers. Storage resources can be shared with non-HCI platforms. And traditional processor-intensive functions such as data deduplication and compression can be offloaded to dedicated equipment, rather than being handled by the hypervisor.

With the introduction of NVMe-based flash storage, however, the latency of the storage and storage networking software stack is starting to become more of an issue. Startups are beginning to develop products that could be classed as HCI 2.0, which disaggregate the capacity and performance aspects of storage while continuing to exploit scale-out features. This allows such systems to make full use of the throughput and latency capabilities of NVMe.

NetApp has introduced an HCI platform based on SolidFire and an architecture that reverts to separating storage and compute, scaling each separately in a generic server platform. Other suppliers have started to introduce either software or appliances that deliver the benefits of NVMe performance in a scalable architecture that can be used as HCI.

HCI supplier roundup

Cisco Systems acquired Springpath in August 2017 and has used its technology in the HyperFlex series of hyper-converged platforms. HyperFlex is based on Cisco UCS and comes in three families: hybrid nodes, all-flash nodes and ROBO/edge nodes. Fifth-generation platforms offer up to 3TB of DRAM and dual Intel Xeon processors per node. HX220c M5 systems deliver 9.6TB of SAS HDD (hybrid) or 30.4TB of SSD (all-flash), while the HX240c M5 provides 27.6TB of HDD with 1.6TB of SSD cache (hybrid) or 87.4TB of SSD (all-flash). ROBO/edge models use lower network port speeds, whereas the hybrid and all-flash models are configured for 40Gb Ethernet. All systems support vSphere 6.0 and 6.5.

Dell EMC and VMware offer a range of technology based on VMware Virtual SAN, in five product families: G Series (general purpose), E Series (entry level/ROBO), V Series (VDI-optimised), P Series (performance-optimised) and S Series (storage-dense). Appliances are based on Dell’s 14th-generation PowerEdge servers, with the E Series on 1U hardware, while V, P and S systems use 2U servers. Nodes scale from a single four-core processor with 96GB of DRAM to 56 cores (dual CPU) and 1,536GB of DRAM. Storage scales from 400GB to 1.6TB of SSD cache, with either 1.2TB to 48TB of HDD or 1.92TB to 76.8TB of SSD. All models start at a minimum of three nodes and scale to a maximum of 64 nodes, based on the requirements and limitations of Virtual SAN and vSphere.

NetApp has designed an HCI platform that allows storage and compute to be scaled separately, although each node type sits within the same chassis. A minimum configuration consists of two 2U chassis, with two compute and four storage nodes, leaving two expansion slots. The four-node storage configuration is based on SolidFire scale-out all-flash storage and is available in three configurations: the H300S (small) deploys 6x 480GB SSDs for an effective capacity of 5.5TB to 11TB; the H500S (medium) has 6x 960GB drives (11TB to 22TB effective); and the H700S (large) uses 6x 1.92TB SSDs (22TB to 44TB effective). There are three compute module types: the H300E (small) with 2x Intel E5-2620 v4 and 384GB of DRAM, the H500E (medium) with 2x Intel E5-2650 v4 and 512GB of DRAM, and the H700E (large) with 2x Intel E5-2695 v4 and 768GB of DRAM. Currently, the platform supports only VMware vSphere, but other hypervisors could be offered in the future.

Nutanix is seen as the leader in HCI, having brought its first products to market in 2011. The company floated on Nasdaq in September 2016 and continues to evolve its offerings into a platform for private cloud. The Nutanix hardware products span four families (NX-1000, NX-3000, NX-6000, NX-8000), starting with the entry-level NX-1155-G5 with dual Intel Broadwell E5-2620 v4 processors, 64GB of DRAM and a hybrid (1.92TB SSD, up to 60TB HDD) or all-flash (23TB SSD) storage configuration. At the high end, the NX-8150-G5 has top-specification dual Intel Broadwell E5-2699 v4 processors, 1.5TB of DRAM and hybrid (7.68TB SSD, 40TB HDD) or all-flash (46TB SSD) configurations. Customers can select from such a large range of configuration options that almost any node specification is possible. Nutanix has developed a proprietary hypervisor called AHV, based on Linux KVM, allowing customers to choose either AHV or VMware vSphere as the hypervisor.

Pivot3 entered the market even earlier than Nutanix, but with a different focus at the time (video surveillance). Today, Pivot3 offers a hardware platform (Acuity) and a software product (vSTAC). Acuity X-Series is offered in four node configurations, from the entry-level X5-2000 (dual Intel E5-2695 v4, up to 768GB of DRAM, 48TB HDD) to the X5-6500 (dual Intel E5-2695 v4, up to 768GB of DRAM, 1.6TB NVMe SSD, 30.7TB SSD). Models X5-2500 and X5-6500 are “flash accelerated”, using flash both as a storage tier and as a cache. Acuity supports the VMware vSphere hypervisor.

Scale Computing has grown steadily, initially focusing on the SMB market and gradually moving the value proposition of its HC3 platform higher by introducing all-flash and larger-capacity nodes. The HC3 series now has four product families (HC1000, HC2000, HC4000 and HC5000). These scale from the base model HC1100 (single Intel E5-2603 v4, 64GB DRAM, 4TB HDD) to the HC5150D (dual Intel E5-2620 v4, 128GB DRAM, 36TB HDD, 2.88TB SSD). There is also an all-flash model (HC1150DF) with dual Intel E5-2620 v4, 128GB DRAM and 38.4TB SSD. HC3 systems run the HyperCore hypervisor (based on KVM) for virtualisation and a proprietary file system called Scribe, which has allowed Scale to offer competitively priced entry-level models for SMB customers.

SimpliVity was acquired by HPE in January 2017, and the platform has since been added to HPE’s integrated systems portfolio. The OmniStack software that drives the SimpliVity platform is essentially a distributed file system that integrates with the vSphere hypervisor. An accelerator card with a dedicated FPGA provides hardware-speed deduplication of data as it enters the platform. The HPE SimpliVity 380 has three configuration options: Small Enterprise all-flash (dual Intel Xeon Broadwell E5-2600 v4 series, up to 1,467GB of DRAM and 12TB of SSD); Medium Enterprise all-flash (dual Intel Xeon Broadwell E5-2600 v4 series, up to 1,428GB of DRAM and 17.1TB of SSD); and Large Enterprise all-flash (dual Intel Xeon Broadwell E5-2600 v4 series, up to 1,422GB of DRAM and 23TB of SSD). Systems are scale-out, and nodes can be mixed in a single configuration or spread over geographic locations.
