A guide to scale-out NAS: The specialists and startups
Specialist suppliers and startups in scale-out NAS, a technology that’s gone mainstream for its superior ability to scale capacity and performance
In our recent feature on scale-out NAS, we looked at products from the six biggest storage suppliers.
Here we examine the smaller scale-out NAS suppliers. Some offer systems for small and medium-sized enterprises (SMEs), but others are tailored for specialised applications such as high-performance computing (HPC) and virtualisation. All include high-availability features such as redundant drives and other components, along with a range of enterprise-level features such as data deduplication and storage tiering.
Irrespective of organisation size, scale-out NAS makes sense in a world where data volumes are increasing at unpredictable rates, and where paying upfront for large volumes of storage is increasingly viewed as uneconomical. Not only does that tie up capital, it tends to create silos of storage, which in turn increases the amount of data management required – something few businesses, especially those without an extensive IT department, undertake willingly.
Instead, scale-out architectures allow an organisation to buy what it needs when it needs it, so spreading the financial burden. As a result, research firm ESG predicts that by 2015, scale-out NAS will account for 80% of the NAS market by revenue and 75% by capacity. Like most fields of human activity, storage buying has its trends and fashions, and the time for scale-out NAS is now.
DDN ExaScaler and GridScaler
DataDirect Networks (DDN) aims its products at sectors that create large volumes of data, such as energy, life sciences, cloud and web, and financial services.
The company offers two routes to scale-out NAS: ExaScaler, which is aimed at HPC applications and runs the parallel, open-source Lustre File System; and GridScaler for enterprises, which runs IBM’s General Parallel File System (GPFS).
GridScaler scales performance by adding file-serving nodes and grows capacity by adding storage appliances, supporting up to 10,000 NAS clients. It supports policy-based data tiering while retaining a single namespace. Up to 200 servers can be included in a single cluster, with throughput for a single Linux client of up to 4Gbps over InfiniBand and 700Mbps over 10GbE.
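To make the idea of policy-based tiering concrete, here is a minimal Python sketch of the kind of rule such a system might apply. Note that GPFS actually expresses these rules in its own SQL-like policy language; the 30-day threshold and the mount point below are invented for illustration.

```python
import os
import time

# Hypothetical tiering rule: files untouched for 30 days become candidates
# to move from the performance tier to the capacity tier.
AGE_THRESHOLD = 30 * 24 * 3600  # 30 days in seconds

def select_for_migration(root, now=None):
    """Yield paths whose last access time is older than the threshold."""
    now = now or time.time()
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if now - os.stat(path).st_atime > AGE_THRESHOLD:
                yield path  # candidate for the capacity tier

# Example: list migration candidates under a (hypothetical) mount point
for path in select_for_migration("/mnt/gridscaler"):
    print("migrate:", path)
```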
ExaScaler can support up to 20,000 clients and up to 400 gateway nodes, with throughput for a single Linux client of up to 3.5Gbps over InfiniBand and 700Mbps over 10GbE.
Other than that, the differences are mainly in connectivity. GridScaler offers client access over the Common Internet File System (CIFS) and Network File System (NFS) protocols, while ExaScaler uses 10GbE and remote direct memory access (RDMA)-enabled InfiniBand. Both systems use the same storage appliances at the back end.
Storage configurations start with a single SFA12K appliance, which can scale up to 1,680 SATA, SAS and flash drives with a maximum capacity of 10PB when using 20 enclosures in two 48U racks.
The SFA12K range consists of three appliances: the 12K-20 and 12K-40, which connect to the server over 10GbE or InfiniBand; and the server-less 12K-20E, which packages the server into the box, so reducing server-to-storage latency. Data protection features include automated storage tiering, snapshots, mirrored volumes and asynchronous replication.
Gridstore
Gridstore offers its systems as storage for Hyper-V environments. It aims to reduce the problem of high volumes of random input/output (I/O) generated by hypervisors running multiple virtual machines (VMs), which most storage systems handle poorly.
Gridstore says its systems use virtualisation to re-establish a one-to-one relationship between a VM and its underlying storage, and to manage storage functionality on a per-VM basis rather than per logical unit number (LUN).
Capacities start at 4TB per 1U node with a three-node cluster, and can be expanded to 48TB per node, allowing scalability up to 12PB as nodes are added. The storage systems use erasure coding for data protection. A write-back cache on a PCIe card with over 500GB of flash memory boosts throughput.
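Gridstore has not published the details of its erasure coding scheme, but the principle is easy to demonstrate. The toy Python sketch below uses simple XOR parity, which survives the loss of any one fragment; production systems typically use more general codes, such as Reed-Solomon, that tolerate multiple simultaneous failures.

```python
from functools import reduce

def xor(a, b):
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(fragments):
    """Append one XOR parity fragment to a list of data fragments."""
    return fragments + [reduce(xor, fragments)]

def rebuild(fragments, lost_index):
    """Recover the lost fragment by XOR-ing all survivors."""
    survivors = [f for i, f in enumerate(fragments)
                 if i != lost_index and f is not None]
    return reduce(xor, survivors)

data = [b"node0data", b"node1data", b"node2data"]
stored = encode(data)        # 3 data fragments + 1 parity, one per node
stored[1] = None             # simulate losing a node
print(rebuild(stored, 1))    # b'node1data'
```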
Gridstore systems are designed for two use cases: the H-Class caters for those that need high throughput, while the C-Class is aimed at those that need more capacity. All offer four 1GbE or two 10GbE ports as options. The GS-H2100-12 provides 12TB of SATA disk plus PCIe flash storage, while the capacity nodes – the GS-C2000-04 and GS-2100-12 – provide 4TB and 12TB respectively using SATA disks only, and connect using dual 1GbE ports as standard.
The software supports VM snapshots and live migration of VMs and their associated storage. Other features include VM replication, thin provisioning and data deduplication, plus the ability to prioritise traffic flows for each VM. Appliances are managed and controlled by a Gridstore vController VM.
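To illustrate the difference between per-VM and per-LUN management, here is a hypothetical Python sketch of the sort of per-VM policy record such a system might keep. The field names and values are invented for illustration, not Gridstore's API.

```python
from dataclasses import dataclass

# Hypothetical per-VM storage policy: each VM carries its own settings,
# rather than inheriting those of a LUN shared with many other VMs.
@dataclass
class VMStoragePolicy:
    vm_name: str
    priority: int           # traffic-flow priority for this VM's I/O
    replicas: int           # number of replicas to keep
    thin_provisioned: bool
    dedupe: bool

policies = {
    "sql-prod": VMStoragePolicy("sql-prod", priority=1, replicas=2,
                                thin_provisioned=False, dedupe=False),
    "file-srv": VMStoragePolicy("file-srv", priority=3, replicas=1,
                                thin_provisioned=True, dedupe=True),
}

# A scheduler could service I/O queues in per-VM priority order:
for name in sorted(policies, key=lambda n: policies[n].priority):
    print(name, "->", policies[name])
```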
Oracle ZFS Storage Appliances
Oracle's ZFS Storage Appliances came to the company with its 2010 acquisition of Sun Microsystems. The appliances use a combination of mechanical and flash storage, with DRAM and flash caching to boost performance. They connect using 1GbE, 10GbE and, optionally, 8Gbps and 16Gbps Fibre Channel or InfiniBand. Services include compression, data deduplication, cloning and replication. The systems can be accessed by clients using NFS, CIFS, HTTP, WebDAV and FTP.
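Multiprotocol access means a client needs no special software for basic file retrieval. As a simple illustration, the Python snippet below fetches a file from a share over plain HTTP; the hostname and path are hypothetical examples, not Oracle defaults.

```python
from urllib.request import urlopen

# Fetch a file from an HTTP-exported share; host and path are invented.
with urlopen("http://zfs-appliance.example.com/shares/export/report.csv") as resp:
    data = resp.read()
print(len(data), "bytes fetched over HTTP")
```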
Storage consists of two controller appliances, the ZS3-2 and the ZS3-4, to which disk shelves can be connected. The ZS3-2 scales from 6TB to 1.5PB and allows up to 16 disk shelves to be attached, each holding 20 or 24 disks. It comes with eight 10GbE ports as standard and a maximum port count of 32. The ZS3-4 allows up to 36 disks per shelf and so scales to 3.5PB, and includes eight 1GbE ports as standard and a maximum port count of 40.
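As a back-of-envelope check on the quoted ZS3-2 maximum, assuming 4TB drives (the drive size is an assumption, not taken from Oracle's spec sheet):

```python
# 16 shelves x 24 disks x 4TB drives ~= the quoted 1.5PB raw maximum
shelves = 16
disks_per_shelf = 24
drive_tb = 4  # assumed drive size
raw_tb = shelves * disks_per_shelf * drive_tb
print(f"{raw_tb}TB raw, i.e. about {raw_tb / 1000:.1f}PB")  # 1536TB, ~1.5PB
```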
Management tools include DTrace Analytics, which provides fine-grained visibility into disk activity and usage. As you might expect, tools for integration with Oracle databases are also available, including Snap Management for database backup management, a database compression tool and Intelligent Storage Protocol, which provides metadata to help improve storage efficiency.
Overland SnapScale
The SnapScale series consists of clustered NAS running the company's RAINcloud operating system (OS) for storage clusters, and is aimed at medium-sized businesses. Capacity can be boosted by adding hard drives or nodes to the cluster, which offers a single namespace and support for file- and block-level access. Protocols include CIFS, NFS and HTTP over 1GbE or 10GbE per node. Features include replication, compression and encryption. Maximum capacity is 512PB.
Each 2U SnapScale X2 unit can house up to 12 SAS drives of up to 4TB each, providing up to 24TB per node with a minimum drive count of four, while the 4U X4 unit scales up to 72TB from its 36-drive maximum.
Within a cluster, files can be distributed and data striped across nodes for improved throughput. The systems provide high availability through redundancy and will fail over in the event of a drive or node failure.
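The following Python sketch shows the round-robin placement idea behind striping data across nodes; the node names and 64KB stripe unit are assumptions for illustration, not SnapScale internals.

```python
# Stripe a file's blocks round-robin across cluster nodes, so that reads
# and writes can proceed against several nodes in parallel.
NODES = ["node-a", "node-b", "node-c", "node-d"]
BLOCK = 64 * 1024  # 64KB stripe unit (assumed)

def stripe(data, nodes=NODES, block=BLOCK):
    """Return a placement map: node -> list of (offset, chunk)."""
    placement = {n: [] for n in nodes}
    for i in range(0, len(data), block):
        node = nodes[(i // block) % len(nodes)]
        placement[node].append((i, data[i:i + block]))
    return placement

layout = stripe(b"x" * (5 * BLOCK))
for node, chunks in layout.items():
    print(node, [off for off, _ in chunks])  # offsets held on each node
```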
Panasas ActiveStor
Panasas targets ActiveStor at energy, finance, government, life sciences, manufacturing, media and university research, and claims its system combines the benefits of flash performance with SATA economy.
ActiveStor runs the company's PanFS parallel file system, and delivers linear scalability from its blade architecture via out-of-band metadata processing and parallel processing of its triple-parity RAID 6+ reads and writes. Maximum per-system throughput is 150Gbps. It uses a combination of director and storage blades to allow users to strike the required balance between performance and capacity.
The ActiveStor 14 scales to 8.12PB, with per-shelf capacity of 80TB of SATA disk and 1.2TB of flash, providing throughput of 1.6Gbps and 13,550 IOPS per shelf. At the top end, the ActiveStor 16 scales to 12.24PB and provides a claimed system throughput of more than 1.3 million IOPS, or more than 13,550 IOPS per shelf. Capacity per shelf is 120TB of SATA disk and 2.4TB of SSD.
Connectivity is provided by two 10GbE or eight 1GbE ports per shelf over CIFS, NFS or Panasas DirectFlow, a parallel protocol that provides access for Linux clients.
Quantum StorNext Q-Series
The Q-Series is aimed at high-performance, big data workflows in sectors such as healthcare and life sciences, science and engineering, and media, with its 2U QXS range – one of two product lines within the series – focused on scalability.
The QXS-1200 and QXS-2400 clusters deliver maximum capacities of 384TB and 230.4TB from 96 and 192 drives respectively. Each QXS-1200 unit houses up to 12 7,200rpm 4TB NL-SAS drives and is designed to provide economical capacity, while the QXS-2400 provides higher performance from up to 24 10,000rpm 1.2TB SAS drives per unit. The systems connect over sixteen 16Gbps Fibre Channel ports.
Scalability is provided by 2U expansion units, up to seven of which can be attached per base system; these have the same capacities and drive types as the base system. Client systems supported include Windows, Mac and Linux.
Scale Computing HC3 Virtualisation Platform
The HC3 is aimed at virtualisation consolidation in medium-sized organisations with small IT departments. The range consists of three converged server-and-storage systems that include a licensed hypervisor, and scales from a single 6TB unit to an eight-unit cluster providing 28.8TB in a single namespace, managed from a single pane of glass. Guest operating systems officially supported by the hypervisor-in-a-box systems include RHEL/CentOS, SUSE Linux Enterprise and most recent versions of Windows.
The range starts with the HC1000, which offers a maximum of 8TB from four 2TB drives, accessible over two 1GbE or 10GbE ports. The HC2000 increases the CPU core count by 50%, from four to six, and provides up to 4.8TB from four 1.2TB 10,000rpm SAS drives; 15,000rpm 600GB drives can be specified as an alternative. Its port count is identical to that of the HC1000.
The top-end HC4000 doubles the number of CPU cores and houses eight 10,000rpm 1.2TB drives for a maximum capacity of 9.6TB. It provides a pair of 10GbE ports only. Nodes of different sizes can be accommodated in a single cluster.