
Panasas set for Mach.2 throughput with multi-actuator HDDs

Scale-out NAS maker claims to be the first enterprise array maker to use multi-actuator HDDs, which were designed for hyperscaler architectures and double throughput

Panasas will allow customers to use multi-actuator HDDs from January when it launches the capability in the new version of its PanFS file system software.

Multi-actuator HDD support in PanFS version 10 will make the company the first enterprise storage vendor to deploy the drives in its products, according to Panasas CEO Ken Claffey.

Multi-actuator HDDs have hitherto, according to Claffey, only been supplied to hyperscalers for their bespoke hardware architectures.

HDDs with multiple actuators, where more than one read/write head arm operates independently, were designed for the hyperscalers and lack the dual ports used for redundant high availability (HA) in enterprise HDDs. Hyperscaler hardware architecture – based on massive numbers of networked server-storage nodes – doesn’t use HA because it relies on erasure coding instead.

Seagate Mach.2 HDDs – the multi-actuator drives Panasas uses – deliver roughly double the performance, primarily in terms of throughput, an increase Claffey put at nearly 2x. “Close to 500MBps per drive,” he said.

Talking about the origins of multi-actuator HDDs, Claffey added: “Hyperscalers needed certain levels of IOPS per TB and the solution was to split the drive virtually into two. For us, they give throughput of 1.8GBps from a storage node compared to 1GBps from one with enterprise HDDs.”
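The quoted figures can be sanity-checked with some back-of-envelope arithmetic. This is an illustrative sketch only: the ~250MBps single-actuator baseline is an assumption inferred from the “nearly 2x” claim, while the per-node figures come from the article.

```python
# Illustrative arithmetic behind the quoted throughput figures.
per_drive_single = 0.25  # GBps, assumed typical single-actuator HDD
per_drive_mach2 = 0.50   # GBps, "close to 500MBps per drive" (Claffey)
drive_speedup = per_drive_mach2 / per_drive_single

node_enterprise = 1.0    # GBps per storage node with enterprise HDDs (article)
node_mach2 = 1.8         # GBps per storage node with Mach.2 drives (article)
node_speedup = node_mach2 / node_enterprise

print(f"per-drive speedup: {drive_speedup}x, per-node speedup: {node_speedup}x")
```

The per-node gain (1.8x) is a little below the per-drive gain (2x), which is consistent with other components of the node becoming the bottleneck once drive throughput doubles.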

Panasas has taken advantage of multi-actuator drives where others haven’t by building that ability into its OS. The difficulty is that an OS ordinarily sees each multi-actuator drive as two separate drives, so making them suitable for an enterprise storage environment requires changes that allow the OS to handle failures properly.
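The failure-handling problem can be sketched in a few lines. This is not Panasas code – the class and function names are hypothetical – but it shows the core idea: the two logical drives (LUNs) exposed by a dual-actuator HDD share one physical spindle, so data placement must treat sibling LUNs as a single failure domain, never putting redundant copies on both.

```python
class DualActuatorDrive:
    """One physical Mach.2-style drive exposing two logical LUNs."""
    def __init__(self, serial):
        self.serial = serial
        self.luns = [f"{serial}-lun0", f"{serial}-lun1"]

def failure_domains(drives):
    """Map each logical LUN back to its physical drive, so placement
    logic never puts redundant copies on sibling LUNs."""
    return {lun: drive.serial for drive in drives for lun in drive.luns}

drives = [DualActuatorDrive("hdd0"), DualActuatorDrive("hdd1")]
domains = failure_domains(drives)

# Sibling LUNs share one failure domain: if the spindle fails, both go.
assert domains["hdd0-lun0"] == domains["hdd0-lun1"]
# LUNs on different physical drives are independent failure domains.
assert domains["hdd0-lun0"] != domains["hdd1-lun0"]
```

An OS that treats the two LUNs as unrelated drives could mirror data across them and lose both copies in a single mechanical failure – which is why enterprise use needs the OS-level awareness described above.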

Panasas storage nodes with Mach.2 drives will GA in Q1 of 2024.

But why is Panasas so keen on HDDs when flash appears to be set to take over the datacentre? Claffey sees things differently.

He said Panasas is agnostic about media type, but that among the deployments it sees, a 90/10 split between HDD and flash is typical. That gives a minimum amount of flash for metadata and the like, with hyperscaler-type HDDs providing the bulk of capacity.

“The biggest companies in the world use a 90/10 architecture with different classes of media, and 90% is on HDD,” he said. “We’re quite a long way from the all-flash datacentre. The reality is on price per TB there is at least a 5x gap between the lowest-cost flash and HDD. We are media agnostic but see the hyperscaler solution as optimal for customers in most cases now.”
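The economics behind the 90/10 argument can be worked through with the article's own "at least 5x" figure. This is a back-of-envelope sketch using illustrative units (HDD cost per TB normalised to 1), not vendor pricing.

```python
def blended_cost_per_tb(hdd_fraction, hdd_price=1.0, flash_multiple=5.0):
    """Blended cost per TB for a mixed HDD/flash deployment.
    hdd_price is normalised to 1 unit; flash_multiple reflects the
    article's "at least 5x" price gap between flash and HDD."""
    flash_fraction = 1.0 - hdd_fraction
    return hdd_fraction * hdd_price + flash_fraction * hdd_price * flash_multiple

hybrid = blended_cost_per_tb(0.90)    # 90% HDD / 10% flash
all_flash = blended_cost_per_tb(0.0)  # 100% flash

print(f"hybrid: {hybrid}, all-flash: {all_flash}")
# 1.4 units vs 5.0 units: the 90/10 mix is ~3.6x cheaper per TB.
```

At a 5x price gap, a 90/10 split costs roughly 1.4 units per TB against 5.0 for all-flash, which illustrates why Claffey argues the hyperscaler-style hybrid remains the optimal choice for bulk capacity today.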

The CEO said Panasas had made significant investments in R&D to allow its enterprise customers to take advantage of such architectures: “Customers say to us they want the same ease of use and resilience that they’re used to but with improving economics. We’ve made a significant investment in R&D and added 50% in terms of resources.”

Claffey wouldn’t be specific with numbers, but said it was in the “tens of people”, adding: “The roadmap is for incremental software features and for a broader ecosystem of hardware we support.”

Claffey said Panasas wants to align with the hyperscalers in terms of architecture and bring that to the world of HPC.

PanFS 10 also adds S3 compatibility, allowing customers to move AI/HPC workloads developed in the cloud to on-prem datacentres.
