VMware storage: SAN configuration basics

Start here for VMware storage basics – the fundamentals of configuring SAN storage for VMware, including VMFS, VMDKs, connectivity options and VMware storage features.

VMware storage entails more than simply mapping a logical unit number (LUN) to a physical server. VMware’s vSphere enables system administrators to create multiple virtual servers on a single physical server chassis.

The underlying hypervisor, vSphere ESXi, can use both internal and external storage devices for guest virtual machines. In this article we will discuss the basics of using storage area network (SAN) storage on vSphere and the factors administrators should consider when planning a shared SAN storage deployment.

VMware storage: SAN basics

vSphere supports internally connected disks, including JBODs, hardware RAID arrays, solid-state disks and PCIe SSD cards. The big drawback of these forms of storage is that they are directly connected to, and associated with, a single server.

SAN storage, however, provides a shared, highly available and resilient storage platform that can scale to a multi-server deployment. In addition, as we will discuss, storage array makers have now added specific vSphere support to their products, providing increased performance and scalability over local storage deployments.

It is possible to use both NAS and SAN-based storage products with vSphere, but in this article we will consider only SAN, or block-based, devices, which use the iSCSI, Fibre Channel and Fibre Channel over Ethernet (FCoE) protocols.

VMware file system and datastores

An important architectural feature of vSphere block storage is the use of the VMware File System (VMFS). In the same way that a traditional server formats block devices with a file system, vSphere formats block LUNs with VMFS to store virtual machines.

The vSphere unit of storage is known as a datastore and can comprise one or more LUNs concatenated together. In many instances, vSphere deployments have a 1:1 relationship between the LUN and the datastore, but this is not a configuration restriction.
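To make the LUN-to-datastore relationship concrete, the short Python sketch below uses the open source pyVmomi bindings for the vSphere API to list each VMFS datastore and the LUN extents behind it. It is a minimal illustration only: the vCenter address and credentials are placeholders, and certificate handling varies between pyVmomi versions.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details; adjust certificate handling for your environment
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Walk every datastore in the inventory and report the extents backing each VMFS volume
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    if isinstance(ds.info, vim.host.VmfsDatastoreInfo):
        extents = [e.diskName for e in ds.info.vmfs.extent]
        print("%s (VMFS %s): %d extent(s): %s" % (
            ds.summary.name, ds.info.vmfs.version, len(extents), ", ".join(extents)))
view.DestroyView()
Disconnect(si)

In many environments the output will show a single extent per datastore, reflecting the common 1:1 LUN-to-datastore layout described above.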

Over the generations of vSphere, VMFS has been updated and improved, and the current ESXi 5.1 release uses VMFS version 5. Improvements have been made to scalability and performance, enabling a single datastore to host many virtual machines.

Within the datastore, a virtual machine’s disks are stored as virtual machine disk (VMDK) files. vSphere also allows a virtual machine to connect directly to a LUN without VMFS formatting; these are known as raw device mapping (RDM) devices.
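The difference between a VMDK and an RDM shows up in a virtual machine’s disk backing information. The sketch below, again using pyVmomi and assuming a connection established as in the previous example (the si and content objects), classifies one VM’s disks; the VM name is a placeholder.

from pyVmomi import vim

def describe_vm_disks(content, vm_name="app-server-01"):
    # Find the VM by name (placeholder name; any inventory search method will do)
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == vm_name)
    view.DestroyView()

    # Report whether each virtual disk is backed by a VMDK file or a raw device mapping
    for dev in vm.config.hardware.device:
        if not isinstance(dev, vim.vm.device.VirtualDisk):
            continue
        backing = dev.backing
        if isinstance(backing, vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo):
            print("%s: RDM (%s) -> %s" % (dev.deviceInfo.label,
                                          backing.compatibilityMode, backing.deviceName))
        elif isinstance(backing, vim.vm.device.VirtualDisk.FlatVer2BackingInfo):
            print("%s: VMDK file %s" % (dev.deviceInfo.label, backing.fileName))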


VMware-SAN connectivity

vSphere supports Fibre Channel, FCoE and iSCSI block storage protocols.

Fibre Channel protocols provide a multi-path, highly resilient infrastructure, but require additional expense for dedicated storage networking equipment, such as Fibre Channel switches and host bus adapters (HBAs).

By contrast, iSCSI provides a cheaper option for shared storage, as network cards are typically much cheaper than Fibre Channel HBAs and converged network adapters (CNAs), but there are some drawbacks. 

Until the latest versions of vSphere, multi-pathing was more difficult to configure, but this situation has improved. In addition, connection speeds for iSCSI are currently limited to 1Gbps and 10Gbps Ethernet, and anything slower is not really worth considering. Finally, security for iSCSI devices can be more complex to administer, as the features are more basic and not suited to highly scalable environments.
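Whichever protocol is chosen, it is worth confirming that each LUN really does have multiple working paths. The sketch below, assuming a pyVmomi connection as in the first example, lists Fibre Channel and iSCSI adapters on each host and counts the active paths to each LUN.

from pyVmomi import vim

def report_paths(content):
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        storage = host.config.storageDevice
        # Fibre Channel HBAs and iSCSI adapters presented by this host
        hbas = [h.device for h in storage.hostBusAdapter
                if isinstance(h, (vim.host.FibreChannelHba, vim.host.InternetScsiHba))]
        print("%s: storage adapters: %s" % (host.name, ", ".join(hbas) or "none"))
        # Count total and active paths for every LUN the host can see
        for lun in storage.multipathInfo.lun:
            active = sum(1 for p in lun.path if p.pathState == "active")
            print("  %s: %d path(s), %d active" % (lun.id, len(lun.path), active))
    view.DestroyView()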

Maximums and configuration limits

VMware imposes a number of configuration maximums on block storage. These apply to both iSCSI and Fibre Channel (unless indicated otherwise) and include:

  • LUNs per ESXi host – 256
  • Maximum volume size – 64TB
  • Maximum file size – 2TB minus 512 bytes

These limits seem quite high and are unlikely to be reached by most users, but in large-scale shared deployments the number of LUNs may be an issue, making it essential to plan the number and type of datastores to be used within a vSphere infrastructure.
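A quick way to see how close each host is to the per-host LUN limit is to count its SCSI disk devices through the API. The sketch below, again assuming a pyVmomi connection as in the first example, reports every host’s LUN count against the 256 figure quoted above.

from pyVmomi import vim

LUN_LIMIT = 256  # per-host limit for the vSphere release discussed in this article

def check_lun_counts(content):
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        # Only SCSI disks count towards the LUN limit; ignore other device types
        disks = [d for d in host.config.storageDevice.scsiLun
                 if isinstance(d, vim.host.ScsiDisk)]
        print("%s: %d LUNs (%.0f%% of the limit)" %
              (host.name, len(disks), 100.0 * len(disks) / LUN_LIMIT))
    view.DestroyView()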

Hypervisor features

The vSphere hypervisor contains a number of features for managing external storage.

Storage vMotion enables a virtual machine to be moved between datastores while the virtual machine (VM) is in use. This can be a great feature for rebalancing workloads or migrating from older hardware.
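Storage vMotion can also be driven through the API. The sketch below, with placeholder VM and datastore names and assuming a pyVmomi connection as in the first example, relocates a VM’s files to another datastore using RelocateVM_Task; the move runs while the VM remains powered on.

from pyVim.task import WaitForTask
from pyVmomi import vim

def storage_vmotion(content, vm_name="app-server-01", target_ds="datastore-fast01"):
    def find(vimtype, name):
        # Simple name-based inventory lookup
        view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
        try:
            return next(obj for obj in view.view if obj.name == name)
        finally:
            view.DestroyView()

    vm = find(vim.VirtualMachine, vm_name)
    target = find(vim.Datastore, target_ds)

    # Changing only the datastore (no host change) makes this a pure Storage vMotion
    spec = vim.vm.RelocateSpec(datastore=target)
    WaitForTask(vm.RelocateVM_Task(spec=spec))
    print("Relocated %s to %s" % (vm.name, target.name))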

Storage DRS (SDRS) provides the foundation for policy-based storage. Creation of new VMs can be based on service-based policies such as IOPS and capacity. In addition, once VMs are deployed and in use, SDRS can be used to ensure capacity and performance are load-balanced across multiple similar datastores.
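SDRS balances across a datastore cluster, which the API represents as a StoragePod object. The sketch below, assuming a pyVmomi connection as in the first example, reports capacity and utilisation for each datastore in every cluster, a useful check that SDRS has headroom to balance into.

from pyVmomi import vim

def report_datastore_clusters(content):
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.StoragePod], True)
    for pod in view.view:
        print("Datastore cluster %s" % pod.summary.name)
        # Each child of a StoragePod is a member datastore
        for ds in pod.childEntity:
            s = ds.summary
            used_pct = 100.0 * (s.capacity - s.freeSpace) / s.capacity
            print("  %s: %.1f TB capacity, %.0f%% used" %
                  (s.name, s.capacity / 1024.0 ** 4, used_pct))
    view.DestroyView()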

Storage features

Storage suppliers have worked hard to add features that directly support vSphere deployments.

vStorage APIs for Array Integration (VAAI) is a set of additional SCSI commands, introduced in ESXi 4.1, that enable some of the heavy lifting of virtual machine creation and management to be offloaded to the storage array.

The features are enabled by vSphere “primitives” which map directly to the new SCSI commands. These include Atomic Test and Set, which enables better granularity on file locking within VMFS; Full Copy, which offloads data copying and cloning to the array; and Block Zeroing, which offloads the zeroing out of VMFS files in thin provisioned environments. VAAI has since been expanded to include SCSI UNMAP, which allows the hypervisor to direct the storage array to free up released resources in thin provisioned environments.
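Whether the array is presenting hardware acceleration to a host can be checked per device. The sketch below, assuming a pyVmomi connection as in the first example, reads the vStorageSupport flag reported for each SCSI disk; the exact values returned (typically vStorageSupported, vStorageUnsupported or vStorageUnknown) may vary by ESXi release.

from pyVmomi import vim

def report_vaai_support(content):
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        for lun in host.config.storageDevice.scsiLun:
            if isinstance(lun, vim.host.ScsiDisk):
                # vStorageSupport reflects the device's hardware acceleration status
                print("%s %s: %s" % (host.name, lun.canonicalName, lun.vStorageSupport))
    view.DestroyView()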

vStorage APIs for Storage Awareness (VASA) is another set of APIs that allow vSphere to obtain more information about the underlying storage resources within an array. This includes characteristics such as RAID levels and whether thin provisioning and data deduplication are implemented. VASA works in conjunction with vSphere’s Profile Driven Storage feature and Storage DRS to enable policy-based placement of data and the hypervisor-aware migration of data to the correct tier of storage, based on application performance requirements.

Key steps in implementing SAN storage

Storage administrators should consider the following steps when implementing SAN storage:

  1. Choice of supplier and feature support
    Most, but not all, storage suppliers support advanced vSphere features such as VAAI and VASA. If these are likely to be of use, review product offerings carefully.
  2. HBA support and dedicated iSCSI connections
    If administrators are planning to implement Fibre Channel, then HBAs must be on the VMware Hardware Compatibility List (HCL). The number of HBAs per server will depend on the anticipated workload, with a minimum of two for hardware redundancy. For iSCSI, dedicated network interface cards (NICs) should be used for the data network, again with multiple NICs for redundancy.
  3. Datastore size
    Datastores should be created as large as the storage product’s limits allow, especially where thin provisioning is available. This reduces the amount of data movement users need to perform in future; a simple capacity report, such as the sketch after this list, can help monitor utilisation.
  4. Datastore type
    Datastores are currently the lowest level of granularity for virtual machine performance, so administrators should plan datastores to match workload types. For example, test and development data could be placed on lower-performing storage than that used for high-performance production workloads. As datastores map to LUNs, administrators should also create separate datastores where LUNs are protected using array-based replication.
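The capacity check referred to in step 3 can be as simple as the sketch below, which flags datastores above an arbitrary utilisation threshold. It again assumes a pyVmomi connection as in the first example; the 80% figure is only an illustration.

from pyVmomi import vim

def flag_full_datastores(content, threshold_pct=80.0):
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        s = ds.summary
        if not s.capacity:
            continue  # skip inaccessible datastores that report zero capacity
        used_pct = 100.0 * (s.capacity - s.freeSpace) / s.capacity
        if used_pct >= threshold_pct:
            print("%s: %.0f%% used (%.1f TB capacity)" %
                  (s.name, used_pct, s.capacity / 1024.0 ** 4))
    view.DestroyView()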

VMware and storage futures

VMware has already outlined the evolution of vSphere block storage, with demonstrations of virtual volumes (vVOLs). Today, a virtual machine comprises multiple files that sit on a physically attached LUN (or LUNs) mapped as a datastore. vVOLs abstract these files into a per-VM container, the vVOL, with the aim of enabling a quality of service (QoS) that applies to that virtual machine alone. Today, QoS can only be applied to the entire datastore, which can result in data migration simply to ensure a virtual machine receives the level of service it requires.

As well as VMware, other suppliers have developed platforms that cater specifically for VMware. Tintri is a good example, although it uses file-based NFS access rather than block protocols. The Tintri VMstore platform understands the file types that make up a virtual machine and so can ensure that quality of service, performance tracking and the use of flash within the product are accurately targeted at virtual machine level.

