
SAN vs NAS vs hyper-converged for virtual machine storage

The pros and cons of file and block access storage arrays vs the new breed of all-in-one server/storage architecture for storing virtual machine systems and data


Virtualisation is a cornerstone of today’s enterprise IT architecture, but virtual machines pose a number of performance issues, despite the benefits they bring in cost and efficiency.

Storage is a particular weakness in virtualisation. Virtualisation-optimised storage technologies have been slower to develop, despite the rise of high-performance hardware, such as flash.

Newer technologies, such as hyper-converged infrastructure (HCI), simplify IT deployments and provide an alternative for enterprises running VMs. But, they have downsides too, especially where businesses need to scale up storage capacity. And they demand new skills.

So, what are the pros and cons of using SAN, NAS and hyper-converged for virtual machine storage?

With so much now running on virtual machines, it is increasingly hard to separate the needs of virtualised workloads from the conventional aspects of their deployment. They do, however, require storage for the operating system, applications and their data. In addition, virtual systems need storage for the hypervisor and, critically, system management data.

As Scott Sinclair, senior analyst at ESG, points out, the individual capacity demands of any particular VM are “typically not overwhelming”. Virtual machines are designed to use as little resource as possible. But, issues arise as virtualised environments grow and IT managers push up server utilisation rates, putting more virtual machines onto each physical host.

Increased server utilisation – and 80% or more is possible – is a key benefit of virtualisation. But, this also causes demands on infrastructure to scale significantly, which in turn impacts performance and leads to issues for end users.

Hypervisors with a large number of active virtual machines generate a large volume of IOPS. And, because virtual machines share physical resources, these IOPS are random, leading to the so-called I/O blender effect. In a virtual environment, it is harder to optimise hardware and system resources: the fine tuning possible with a dedicated server, operating system and application is lost.
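The effect is easy to see in miniature. The Python sketch below is purely illustrative – the VM count, queue behaviour and block ranges are assumptions, not a model of any particular hypervisor – but it shows how a handful of VMs, each reading sequentially, present the array with a near-random stream of requests:

```python
import random

# Sketch of the "I/O blender": each VM issues perfectly sequential
# block reads, but the hypervisor interleaves them, so the array
# sees a near-random pattern. VM count, queue draining and block
# ranges are illustrative assumptions.

def vm_stream(vm_id, start_block, length):
    """One VM's sequential read requests, as (vm, block) pairs."""
    return [(vm_id, start_block + i) for i in range(length)]

# Four VMs, each reading its own distant region of the array.
streams = [vm_stream(vm, vm * 100_000, 8) for vm in range(4)]

# Model the hypervisor servicing whichever VM is runnable next by
# randomly draining the per-VM queues.
blended = []
while any(streams):
    queue = random.choice([s for s in streams if s])
    blended.append(queue.pop(0))

for vm_id, block in blended:
    print(f"VM{vm_id} -> block {block}")
# Each VM's requests stay in order, but consecutive requests at the
# array jump between regions 100,000 blocks apart: random I/O.
```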

To compensate, VMs need high-performance, low-latency storage that can handle random I/O. Increasingly, enterprises use flash to keep up with virtualised systems’ performance demands.

In addition, virtual environments require the same availability, data resiliency and protection features as conventional servers. So, quality of service (QoS) should be factored in too, to manage conflicting demands between critical and less critical applications, and potential spikes in demand.
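How storage QoS caps a noisy neighbour can be sketched with a simple token bucket. The Python below is a hedged illustration – the class, rates and VM names are invented for this example, not any vendor’s QoS API:

```python
import time

class IopsLimiter:
    """Token-bucket sketch of per-VM storage QoS. The class, rates
    and VM names are invented for illustration, not any vendor's
    QoS feature."""

    def __init__(self, iops_limit):
        self.rate = iops_limit           # tokens refilled per second
        self.tokens = float(iops_limit)  # allow up to 1s of burst
        self.last = time.monotonic()

    def allow_io(self):
        """Admit one I/O if a token is available, else signal queuing."""
        now = time.monotonic()
        self.tokens = min(self.rate,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                     # request must queue or retry

# A critical database VM gets headroom; a batch job is capped so a
# spike in its demand cannot starve the neighbour.
critical_db = IopsLimiter(iops_limit=5000)
batch_job = IopsLimiter(iops_limit=500)
```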

A further issue is that conventional SAN storage – the bedrock of most enterprise systems – is not optimised for virtual environments. This introduces its own set of bottlenecks, in I/O and also across the network.

NAS, SAN and beyond the network

With virtual machines so widely used in the enterprise, organisations have turned to a wide range of storage technologies to store their data.

Internal server storage run alongside NAS and SAN systems. But, organisations have also moved towards storage within hyper-converged infrastructure (HCI) architectures, and cloud storage, in public and private forms.

Storage area networks (SANs) are based on Fibre Channel or iSCSI and are designed with predictable I/O requirements in mind. SANs work by “fooling” the operating system or application into treating networked storage as dedicated direct-attached storage. The SAN serves up blocks to the application, but the need to navigate the hypervisor and VM creates extra steps and randomises I/O.

These issues come to the fore with I/O-intensive applications, such as SQL databases.

This does not mean, however, that NAS systems necessarily perform better. Network-attached storage has its own file system on the array, potentially reducing overheads within the virtual environment. This suits some use cases, such as archiving, handling large files such as video, or virtualised desktop storage.
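The practical difference between the two access models can be sketched in a few lines of Python. The device path and NFS mount point below are assumptions for illustration only: with a SAN the host reads raw blocks at byte offsets and the file system lives host-side; with NAS the array owns the file system and serves named files:

```python
# Hedged sketch of block vs file access. The device path and NFS
# mount point are assumptions for illustration only.

# Block access (SAN): the host sees a raw LUN over iSCSI or Fibre
# Channel and reads fixed-size blocks at byte offsets; the file
# system lives on the host or hypervisor side.
BLOCK_SIZE = 4096
with open("/dev/sdb", "rb") as lun:        # assumed LUN device path
    lun.seek(100 * BLOCK_SIZE)             # jump straight to block 100
    block = lun.read(BLOCK_SIZE)

# File access (NAS): the array owns the file system, and the host
# requests whole files by name over NFS or SMB.
with open("/mnt/nfs/vmstore/disk.vmdk", "rb") as f:  # assumed mount
    header = f.read(512)
```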

NAS storage is generally less performance-optimised than SAN, with the majority of all-flash arrays tending to operate on SAN protocols. So, most virtualised infrastructures tend to run on SANs, in large part because that’s the storage the enterprise already owns.

NAS systems often have the advantage that they support multiple protocols. This has the benefit of simplicity: organisations only need to deploy one technology. But, whilst NAS works well for smaller deployments, such as branch offices, and can even support large data volumes through scale-out NAS, the compute and I/O-intensive nature of virtualisation makes NAS less suited to business-critical virtual servers.

Hyper-converged: Back to the future

Hyper-converged infrastructure (HCI) offers another route. HCI brings compute, storage and the hypervisor together into one system to eliminate some of the bottlenecks of conventional architecture.

HCI could even be seen as a step backwards: it removes bottlenecks by moving away from the SAN towards a version of direct-attached storage. It’s a move made easier by the growth of storage-dense servers and high-performance flash drives.

Whilst HCI brings performance benefits for some environments, particularly by reducing network overheads, its biggest advantage comes through simpler IT management.

Scale up, scale out

Hyper-converged infrastructure, though, is not always a perfect solution for virtual environments. HCI brings storage closer to the VM, but potential limitations on a system’s ability to scale out can be an issue as data storage needs grow.

HCI simplifies technology deployment, but that very simplification – with compute, storage and networking in one box – can be a downside too. HCI is popular where IT management is an issue, such as remote and branch offices, or where a high degree of automation is required, such as in the public cloud.

But the close ties between the three elements can quickly lead to inefficiency when it comes to scaling a system. Unless compute and storage demands grow in step – and this is unlikely in the real world – enterprises can end up buying more storage or more compute capacity than they need.
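A rough, back-of-the-envelope sketch shows why. The node specifications and workload figures below are assumptions chosen for illustration, not real product specs:

```python
import math

# Back-of-the-envelope sketch of HCI scaling: capacity is added in
# whole nodes that bundle compute and storage together. All figures
# are illustrative assumptions.
NODE_CORES = 32            # compute per HCI node
NODE_STORAGE_TB = 20       # usable storage per HCI node

needed_cores = 64          # what the workload actually requires
needed_storage_tb = 200

# Buy enough nodes to satisfy whichever resource binds first.
nodes = max(math.ceil(needed_cores / NODE_CORES),
            math.ceil(needed_storage_tb / NODE_STORAGE_TB))

print(f"Nodes required:  {nodes}")                              # 10 (storage-bound)
print(f"Cores purchased: {nodes * NODE_CORES}")                 # 320
print(f"Cores stranded:  {nodes * NODE_CORES - needed_cores}")  # 256
# With a SAN, the storage tier could grow independently and the
# compute tier could stay at just two servers.
```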

As ESG’s Sinclair points out, this can work where the organisation’s priority is ease of management. But if cost, or even storage utilisation, is the priority, then a SAN will be the more appropriate choice.

A further issue is that some workloads perform less well under HCI than on standard virtualised infrastructure. This is due to the “flattened stack” effect, caused in part by the in-line features HCI adds, such as compression, data deduplication or even encryption.

Examples include SQL databases and other I/O-intensive workloads. Here, separate, optimised compute, storage and networking components will perform better. For less I/O-intensive operations, such as desktop virtualisation, HCI works well.

Separate components should also be more flexible when it comes to upgrading as new technologies – such as NVMe – become available. They also reduce vendor lock-in, as by no means all HCI vendors support all hypervisors.

Storage choices

Choosing the best storage infrastructure for virtualisation means finding that difficult balance between performance, cost and manageability.

A SAN, optimised with flash-based storage and an efficient network, will give the best performance. As SANs are the most common enterprise storage architecture, a SAN-based approach allows for incremental investment, even if the management overhead is greater.

NAS storage, for its part, lends itself to multi-protocol support and offers simplicity. Performance, though, will fall short of the needs of very I/O-intensive virtual machines.

HCI, meanwhile, is a promising architecture. Its single-box approach simplifies management and lends itself to scale-out growth, which is one reason it has gained ground in cloud deployments. But scaling up HCI can be expensive as internal storage hits capacity limits, and I/O-intensive workloads can run slower on HCI than on separate, optimised SANs and compute pools.

For some enterprises, the benefits clearly outweigh the costs. But HCI remains a largely proprietary architecture, and vendor lock-in is a potential downside CIOs must also consider.
