Five ways the hyper-converged infrastructure market is changing
We look at changes in the hyper-converged infrastructure market as some suppliers cool on it while others go software-only, disaggregate HCI nodes, provide as-a-service options and look to containers
Hyper-converged infrastructure (HCI) promises to simplify IT by combining storage, compute and, usually, a virtualisation environment in a single system or appliance.
This one-box approach takes the flexibility of virtualisation and networked storage and condenses it into a single package. The result, so its supporters argue, is a flexible and high-performance system suited to smaller businesses, branch offices or edge applications.
And, increasingly, suppliers are looking to deliver hyper-converged through software and, especially, software-defined storage.
But the market itself is changing. In part, this is in response to customer demand and the growing emphasis on cloud-based and as-a-service-style consumption during the pandemic, and this is likely to continue.
And just as some suppliers have decided that HCI is not viable for them, others have entered the market, especially with software-defined offerings. HCI is now firmly established as an on-premises option, especially in deployments where ease of management is important.
Computer Weekly has looked at some of the key trends in the hyper-converged market.
HCI market: Some vendors pull back, more software products
Industry analyst Emergen Research valued the HCI market at $7.34bn in 2020 and predicts compound annual growth of 26.8%. Drivers include demand for backup and recovery, and the need to improve application performance.
According to Naveen Chhabra, a senior analyst at Forrester, HCI is being used for VDI (desktop virtualisation), databases, analytical workloads and VM farms.
Although hyper-converged is primarily used in the datacentre, the technology is becoming more versatile because suppliers have improved the controls available to IT administrators, says Chhabra. Nonetheless, HCI is best suited to workloads that make use of horizontal scaling (adding more nodes) rather than vertical scaling (adding more CPU, storage, memory, and so on).
Perhaps surprisingly, given the overall market interest, some suppliers have pulled back from hyper-converged. NetApp has dropped (direct) support for HCI, and some analysts point out that other suppliers are paying less attention to hyper-converged than they were.
Nutanix remains the best-known HCI supplier, but others with a strong presence include Cisco, VMware, Dell EMC, Microsoft – through Azure – and Huawei.
And software-only options are gaining ground. Nutanix now focuses more on selling hardware-agnostic software than its own appliances. Scale Computing, StarWind and Pivot3 are also suppliers to watch, while StorMagic, with its software-defined virtual SAN, is often associated with hyper-converged infrastructure projects.
Is HCI ‘disaggregating’?
The original selling point for hyper-converged was that the key components of a system are integrated. This allows IT teams to deploy systems quickly and reduce management overheads. This integration is a large part of the appeal of HCI, especially for branch office or edge locations.
However, HCI scales better horizontally than vertically, so vertical scaling – for large-scale transactional databases, for example – is not its strong suit.
“If your app needs vertical scaling, don’t think of hyper-converged,” says Forrester’s Chhabra. Although there are exceptions, such as running SAP Hana on HCI, monolithic systems are for the most part less suited to HCI, because hyper-converged infrastructure cannot usually scale its component resources independently.
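To illustrate the point, here is a toy capacity model in Python. The node specification is hypothetical rather than any supplier’s real product, but it shows how HCI’s fixed per-node ratio of compute to storage strands capacity when a workload is heavy on one resource:

```python
import math

# Hypothetical HCI node: compute and storage always arrive together
NODE_CPU_CORES = 32
NODE_STORAGE_TB = 20

def nodes_required(cpu_cores: float, storage_tb: float) -> int:
    """Nodes needed to satisfy both requirements simultaneously."""
    by_cpu = math.ceil(cpu_cores / NODE_CPU_CORES)
    by_storage = math.ceil(storage_tb / NODE_STORAGE_TB)
    return max(by_cpu, by_storage)

# A storage-heavy workload: modest compute, lots of capacity
n = nodes_required(cpu_cores=64, storage_tb=400)
print(f"nodes required: {n}")                    # 20
print(f"idle cores: {n * NODE_CPU_CORES - 64}")  # 576
```

Sizing for 400TB forces the purchase of 20 nodes and 640 cores, of which only 64 are needed. Decoupling storage from compute avoids that stranded capacity.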
This is prompting suppliers to “disaggregate”, or split, the components of HCI so they are easier to scale. VMware, for example, allows users to share storage across HCI clusters through its HCI Mesh system, and Dell VxRail users can connect to external Dell storage sub-systems through the underlying vSAN infrastructure.
HCI: Always a good fit at the edge?
HCI answers some of the problems organisations encounter when deploying technology in small office, branch office or remote locations. These might lack dedicated IT teams, or even dedicated data rooms for larger and more complex equipment.
By the same token, hyper-converged should lend itself to edge applications, especially where it is delivered in a robust, appliance form factor. Cutting out the need for separate storage, compute and networking hardware also reduces power consumption and the need for cooling.
Also, using a single supplier means there are fewer moving parts. It might be overstating it to say there is less to go wrong – hyper-converged systems can be complex – but IT departments should be able to control all their systems from a single management tool.
But, as Forrester’s Chhabra cautions, there is no single industry definition of edge. Suppliers are looking at a wide range of use cases, and those that claim they can serve all of them are best avoided.
“For some [users of HCI], it is a retail or branch location, or a telco base station, or it’s a setup in the hinterlands,” he says. “The question here is, can all HCI deployments be made to fit into this large variety of deployments? The answer is no.”
CIOs should start by looking at the use case for hyper-converged in their edge environment, then see which suppliers have the best offering for the workload and the setup.
HCI as a service
HCI as a service is being promoted by suppliers such as Dell EMC with its VxRail, which is, in turn, part of the Dell Apex product line. Cisco offers a service option with HyperFlex, and HPE GreenLake customers can use Nutanix Era or Microsoft Azure Stack HCI.
IT teams can also buy Azure Stack HCI directly from Microsoft. Azure Stack HCI comprises Hyper-V for compute, Storage Spaces Direct for storage and a software-defined networking module. HPE offers SimpliVity, which it says is HCI optimised for edge, VDI and “general virtualisation” workloads, and the technology is available on demand.
The growth of hyper-converged as a service is perhaps driven more by suppliers seeing an opportunity to provide resources on a subscription basis than by technical developments.
At one level, it makes sense for organisations buying infrastructure as a service (IaaS) to purchase HCI in the same way. If the workload lends itself to horizontal rather than vertical scaling, then scaling out cloud infrastructure on a node-by-node basis should reduce management overheads. It can also facilitate replication of on-premises HCI workloads to the cloud.
Against this, CIOs need to assess whether HCI fits the workloads they are looking to move to IaaS, or the public cloud more broadly. One benefit of the cloud is being able to buy compute and storage resources separately, and scale them up or down as required.
HCI as a service removes some of this inherent flexibility. But suppliers are investing in as-a-service delivery, which should make it easier to fine-tune hyper-converged instances for different workloads, combined with the option of paying for HCI through opex rather than capex.
HCI and containers
Support for containers is one area where HCI is clearly developing rapidly.
Established hyper-converged suppliers such as Cisco, Nutanix and VMware support containerised workloads, as does IBM, whose Spectrum Fusion works with Red Hat’s OpenShift distribution of Kubernetes. Nutanix is also working with Red Hat.
An ability to support containers and hypervisors extends the usefulness of HCI. As more cloud-based – or cloud-native – applications are developed for containers rather than VMs, this support is increasingly important to buyers.
HCI was not designed for containers, but suppliers are adapting hyper-converged nodes to support the flexibility demanded by environments such as Kubernetes. Google Anthos, for example, is a hybrid cloud platform that runs on NetApp technology, and Kubernetes applications are available through the Google Cloud Platform marketplace.
As more enterprise applications move to containers, expect HCI suppliers to follow suit.