
DDN revamp adds SFA14K plus hyper-converged and all-flash options

DataDirect Networks upgrades core HPC storage arrays to create SFA14K, and adds hyper-converged SFA14KE plus all-flash IME14K front end to smooth I/O bottlenecks

HPC storage specialist DataDirect Networks (DDN) has revamped its SFA storage array to create the SFA14K, added a hyper-converged variant, the SFA14KE, and introduced an all-flash rapid-access tier, the IME14K.

The SFA line, which includes the existing SFA12K, is a large-capacity array series aimed at high-performance computing (HPC) and cloud use cases that can house flash, plus SAS and SATA spinning disk.

The SFA14K can be configured as block storage or can operate the open-source Lustre or IBM GPFS parallel file systems. Currently, its connectivity options are InfiniBand or 100 Gigabit Ethernet (100GbE), but 32Gbps Fibre Channel (32Gbps FC) will be available from the second quarter of 2016.

The 14K platform on which the new variants are based has up to 48 NVMe PCIe slots plus capacity for 24 SAS drives. From one 4U node, DDN claims up to six million input/output operations per second (IOPS) and 60Gbps of throughput.

The SFA14KE provides hyper-converged capability: it is an SFA14K that can embed optional hypervisors from Microsoft, VMware or Citrix, the open-source cloud environment OpenStack, or Hadoop as a bare-metal implementation.

The IME14K – IME stands for Infinite Memory Engine – adds a new, all-flash layer to the DDN ecosystem. It is effectively IME software running in a controller on top of 14K hardware populated with flash, with up to 48 SAS-connected or 48 NVMe drives.

Read more about HPC storage

  • We survey the key suppliers in HPC storage, where huge amounts of IOPS, clustering, parallel file systems and custom silicon provide storage for massive number-crunching operations.
  • 100,000 Genomes Project rejects building open-source parallel file system on x86 servers and opts for EMC Isilon clustered NAS deployed by Capita S3.

The aim, said DDN marketing director Mike King, is to “provide predictable performance for bursty workloads using parallel file systems, and allowing compute to run at full spec”.

The IME aims to smooth out problems with I/O that occur when apps are not written for parallel file systems, said King.

These include cases where the storage system has to handle large numbers of small, random blocks, where I/O sizes are not aligned, and where performance bottlenecks arise from POSIX file locking as many small files are protected during access.
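As an illustration of that access pattern only – this is not DDN code, and the record size, file count and file names are invented for the example – the Python sketch below writes small, deliberately unaligned records to many small files, taking a POSIX advisory lock around every write. At scale, the per-write lock round-trips and misaligned record sizes produce exactly the kind of bottleneck described above.

```python
import fcntl
import os
import random

# Illustration of the access pattern described above, not DDN software.
# Record size and file count are invented for the example.
RECORD_SIZE = 1024 + 37   # deliberately not a multiple of a 4 KiB block size
NUM_FILES = 1000          # many small files, each locked on every access


def write_record(path, payload):
    """Append one small record under a POSIX advisory lock (fcntl)."""
    with open(path, "ab") as f:
        fcntl.lockf(f, fcntl.LOCK_EX)      # lock taken for every small write...
        try:
            f.write(payload)
        finally:
            fcntl.lockf(f, fcntl.LOCK_UN)  # ...and released straight after


if __name__ == "__main__":
    os.makedirs("smallfiles", exist_ok=True)
    payload = os.urandom(RECORD_SIZE)
    # Visit the files in random order, so the backing store sees small,
    # unaligned, non-sequential writes plus a lock round-trip per record.
    for i in random.sample(range(NUM_FILES), NUM_FILES):
        write_record("smallfiles/rec_%04d.dat" % i, payload)
```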

IME aims to smooth out these issues through a combination of added flash-based performance and caching, plus software intelligence that makes random I/O more sequential and better aligned with the disks' preferred block sizes.
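The general technique can be pictured as a write-coalescing buffer: small, out-of-order writes are absorbed in a fast tier and later drained to the backing store as fewer, larger writes in ascending offset order. The Python below is a minimal sketch of that idea only – it is not DDN's IME implementation, and the class and file names are invented – using an in-memory dictionary to stand in for the flash layer.

```python
import os

# Minimal sketch of a write-coalescing buffer standing in for a flash-backed
# burst layer. Not DDN's IME implementation; names are invented.


class WriteCoalescer:
    def __init__(self, backing_path):
        self.backing_path = backing_path
        self.pending = {}  # offset -> bytes, absorbed in arrival order

    def write(self, offset, data):
        """Absorb a small, possibly random write without touching the backing store."""
        self.pending[offset] = data

    def flush(self):
        """Drain the buffer as fewer, larger writes in ascending offset order."""
        # Merge writes that are exactly contiguous into single segments.
        segments = []
        for offset in sorted(self.pending):
            data = self.pending[offset]
            if segments and offset == segments[-1][0] + len(segments[-1][1]):
                segments[-1] = (segments[-1][0], segments[-1][1] + data)
            else:
                segments.append((offset, data))
        mode = "r+b" if os.path.exists(self.backing_path) else "w+b"
        with open(self.backing_path, mode) as f:
            for start, buf in segments:  # sequential, larger writes
                f.seek(start)
                f.write(buf)
        self.pending.clear()


if __name__ == "__main__":
    buf = WriteCoalescer("coalesced.dat")
    for offset in (8192, 0, 4096, 12288):  # small writes arriving out of order
        buf.write(offset, b"x" * 4096)
    buf.flush()  # lands on disk as one 16 KiB sequential write
```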
