
Flash storage in the cloud from the big cloud providers

We survey the big three cloud providers – AWS, Azure and Google – and find a range of flash storage options, mostly block storage, with a choice of performance levels


Flash storage is increasingly becoming the standard for enterprise deployments as prices drop and device capacities increase.

This also goes for storage on public cloud platforms, which have solid-state-based storage offerings that boost performance and throughput for applications that need these capabilities.

We look at what flash is on offer as cloud storage, and how it can be accessed and used.

Flash storage as a service

Of the major storage protocols available today, object storage in the cloud typically comes with no specifications around throughput and latency, and no transparency on how the technology is implemented.

File-based services offer some performance choices (typically two, as discussed in the supplier roundup below), although these could be flash-accelerated rather than all-SSD solutions.

Block storage is the area where we see hardware-based differentiation, with services built on either solid-state drives (SSDs) or hard drives. Suppliers specifically reference high performance based on SSDs as a separate tier of service.

Flash in the cloud: Implementation specifics

Block storage can only be connected to virtual instances or virtual machines (VMs), either as a boot volume or as a secondary disk to hold application files. This type of storage is aimed at applications that need low latency, high performance and block-level read/write access, such as online transaction processing (OLTP) databases.

The characteristics of block storage are not consistent across the three main suppliers. Amazon Web Services (AWS) and Google Cloud Platform (GCP) allow volume size to be set by the user, whereas Microsoft Azure offers fixed increments. Configuration maximums are roughly similar, at 60,000 to 80,000 IOPS (input/output operations per second) and 1,500 to 2,000MBps of throughput per instance.

However, suppliers scale performance in different ways: Azure and AWS scale performance with provisioned capacity, while GCP scales it with the vCPU count of the connected instance. This could affect CIO choices, as each model can lead to over-provisioning of either capacity or compute.
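
To illustrate the over-provisioning risk, here is a minimal Python sketch assuming the capacity-scaled model at the gp2-style rate of three IOPS per gigabyte (covered in the supplier roundup below):

```python
import math

def gb_needed_for_iops(target_iops: int, iops_per_gb: int = 3) -> int:
    # Capacity-scaled model (AWS gp2-style, three IOPS per gigabyte):
    # hitting an IOPS target may force you to buy capacity you don't need.
    return math.ceil(target_iops / iops_per_gb)

print(gb_needed_for_iops(9_000))  # 3,000GB provisioned just to reach 9,000 IOPS
```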

None of the suppliers offer specific latency metrics, other than to claim single-digit millisecond numbers. Even then, performance is not 100% guaranteed. For example, AWS’s General Purpose SSD (gp2) flash volumes are only guaranteed to deliver performance 99% of the time.

To achieve guaranteed, very low latency, users would have to choose virtual instances with direct-attached flash. AWS and Google both offer this kind of solution.

Cloud flash use cases

As already mentioned, typical use cases will be those that require low-latency block-based I/O. As well as traditional applications, this could include analytics, data warehousing and machine learning (ML) or artificial intelligence (AI) solutions.

The performance offered by storage-optimised instances and local flash storage has been used by storage suppliers to port their solutions to the cloud.

WekaIO Matrix is available on AWS using local SSD. Elastifile Cloud File System can be run on Google Cloud Platform. NetApp offers Cloud Volumes Ontap that runs on an Elastic Compute Cloud (EC2) instance using Elastic Block Store (EBS) SSD storage. All three of these solutions take block storage and provide resilient file solutions.

Building blocks

The idea of using block storage in the cloud as the building block for other solutions is likely to expand further. What’s not yet clear is whether the cloud providers themselves would want to compete and offer more complex services or let storage software suppliers provide this capability.


Supplier roundup

Amazon Web Services (AWS)

AWS offers block (EBS), file (EFS) and object (S3) storage. Of these, only EBS, the Elastic Block Store, explicitly uses flash storage. EFS (Elastic File System) has two performance modes – General Purpose and Max I/O – but these don't appear to use flash, as the higher-performance mode actually increases latency.

EBS has two SSD-based storage options – Provisioned IOPS SSD (io1) and General Purpose SSD (gp2) – and two HDD-based options, all of which can only be connected to EC2 virtual instances and aren't accessible from outside AWS. Getting the full benefit of SSD performance requires EBS-optimised EC2 instances, where application and storage network traffic are physically isolated rather than sharing an interface.
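
As an illustration, the following Python sketch uses AWS's boto3 SDK to create an io1 volume and attach it to an existing instance; the region, availability zone, instance ID and device name are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Create a 500GB Provisioned IOPS SSD (io1) volume. io1 allows up to
# 50 IOPS per provisioned gigabyte, so 500GB supports up to 25,000 IOPS.
volume = ec2.create_volume(
    AvailabilityZone="eu-west-1a",
    Size=500,           # capacity in GB
    VolumeType="io1",
    Iops=25000,         # provisioned IOPS, charged separately
)

# Wait until the volume is ready, then attach it to an (ideally
# EBS-optimised) instance as a secondary disk.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",
    Device="/dev/sdf",
)
```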

Provisioned IOPS SSD (io1) is the high-performance option. Volumes can scale from 4GB to 16TB, with up to 32,000 IOPS per volume and 500MBps of throughput. A single EC2 instance can support a maximum of 80,000 IOPS and 1,750MBps throughput.

General Purpose SSD (gp2) is aimed at mainstream workloads. Volumes can scale from 1GB to 16TB, with up to 10,000 IOPS and 160MBps of throughput per volume, and a maximum of 80,000 IOPS and 1,750MBps of throughput per EC2 instance.

The difference between io1 and gp2 is I/O density. io1 targets 50 IOPS per gigabyte, compared with three IOPS per gigabyte for gp2. io1 also provides the ability to guarantee IOPS through "provisioned IOPS", where additional performance is a chargeable option. In contrast, gp2 provides a limited burst capability to meet spikes in throughput, funded by accumulated I/O credits.
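
These density figures translate into simple arithmetic. A minimal Python sketch, using only the per-gigabyte rates and per-volume caps quoted above:

```python
def io1_max_iops(size_gb: int) -> int:
    # io1: 50 IOPS per provisioned gigabyte, capped at 32,000 per volume.
    return min(50 * size_gb, 32_000)

def gp2_baseline_iops(size_gb: int) -> int:
    # gp2: three IOPS per gigabyte, capped at 10,000 per volume; bursts
    # above the baseline draw on accumulated I/O credits.
    return min(3 * size_gb, 10_000)

print(io1_max_iops(640))       # 32,000 - already at the per-volume cap
print(gp2_baseline_iops(640))  # 1,920
```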

AWS doesn’t quote specific latency metrics, other than to claim both offerings deliver “single-digit” millisecond responses.

AWS also offers locally attached flash called SSD Instance Store Volumes. These are directly connected to the host running certain storage-optimised EC2 instances and can be standard SCSI or NVMe devices. Performance depends on the instance size, with random read performance ranging from 100,000 to 3.3 million IOPS.

Microsoft Azure

Microsoft Azure offers file, block, object and scalable storage options (Data Lake and Archive). SSD-based storage is offered through its block-based "Disk" storage, and can only be used when connected to an Azure virtual machine.

Disk storage is delivered out of Blob storage, essentially a large pool of storage that serves as the backing store for object, file and block usages. Confusingly, Microsoft has chosen to use the term “block Blobs” to describe file storage, while typical block I/O volumes are implemented using page Blobs. A page is essentially a 512-byte block.
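
To make the page model concrete, here is a minimal sketch using the azure-storage-blob Python SDK (the v12-style API is assumed; the connection string, container and blob names are placeholders). Page blob sizes and writes must align to 512-byte pages:

```python
from azure.storage.blob import BlobClient

blob = BlobClient.from_connection_string(
    "<connection-string>", container_name="disks", blob_name="demo.vhd"
)

# Page blobs are sized and written in 512-byte pages.
blob.create_page_blob(size=1024 * 1024)                # must be a multiple of 512
blob.upload_page(b"\x00" * 512, offset=0, length=512)  # offset/length too
```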

Premium SSD Managed Disks are available in eight fixed sizes, doubling in capacity from 32GB to 4TB. I/O performance scales from 120 IOPS to 7,500 IOPS per disk, with throughput scaling from 25MBps to 250MBps. A single virtual machine can access multiple disks, with a maximum of 256TB of capacity, 80,000 IOPS and 2,000MBps of throughput.
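
As a sketch of how such a disk is provisioned, assuming a recent version of Microsoft's azure-mgmt-compute Python SDK; the subscription ID, resource group and region are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Create an empty 1TB Premium SSD managed disk.
poller = client.disks.begin_create_or_update(
    "my-resource-group",
    "premium-data-disk",
    {
        "location": "westeurope",
        "sku": {"name": "Premium_LRS"},
        "disk_size_gb": 1024,
        "creation_data": {"create_option": "Empty"},
    },
)
disk = poller.result()
print(disk.provisioning_state)
```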

Microsoft has recently introduced a new cheaper tier of SSD storage in preview. Standard SSD Managed Disks have six capacity sizes, from 128GB to 4TB, each step doubling in capacity. IOPS per disk are fixed at 500, with throughput also fixed at 60MBps. This tier is intended as a cheaper option for test and development environments or entry-level production applications.

Google Cloud Platform (GCP)

GCP offers three main storage options: Cloud Storage (object), Persistent Disk (block) and Cloud Filestore (file). Filestore has two performance levels – Standard and Premium – although Google doesn't disclose the internal technology that delivers the service.

Persistent Disk is available in two options – Zonal and Regional – with volumes of up to 64TB. SSD Persistent Disk offers up to 30 read or write IOPS per gigabyte of capacity, supporting from 15,000 to 60,000 read IOPS and 15,000 to 30,000 write IOPS per virtual instance. IOPS performance depends on the number of vCPUs on the instance to which the disk connects. Throughput is 0.48MBps per gigabyte of capacity, with from 240MBps to 1,200MBps per virtual instance.
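
A minimal Python sketch of the per-volume limits, using only the per-gigabyte figures and per-instance ceilings quoted above:

```python
def ssd_pd_limits(size_gb: int) -> dict:
    # Up to 30 read or write IOPS and 0.48MBps of throughput per gigabyte,
    # subject to per-instance ceilings that also depend on the attached
    # instance's vCPU count.
    return {
        "read_iops": min(30 * size_gb, 60_000),
        "write_iops": min(30 * size_gb, 30_000),
        "throughput_mbps": min(0.48 * size_gb, 1_200),
    }

print(ssd_pd_limits(500))
# {'read_iops': 15000, 'write_iops': 15000, 'throughput_mbps': 240.0}
```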

GCP also offers local SSDs that connect directly to the host server running a virtual instance. These are available as either SCSI or NVMe devices and vastly increase performance. SCSI devices can sustain 266 read IOPS or 187 write IOPS per gigabyte, with up to 400,000 read and 280,000 write IOPS per instance. Throughput per instance is 1,560MBps (read) and 1,090MBps (write).

Local NVMe SSDs offer higher performance still, with 453 read IOPS and 240 write IOPS per gigabyte, and up to 680,000 read IOPS and 360,000 write IOPS per instance. Throughput is up to 2,560MBps (read) and 1,400MBps (write). Local SSDs are deployed in a fixed size of 375GB, and using them can create resiliency challenges, as these disks are not replicated like Persistent Disks.
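
Because capacity comes in whole 375GB devices, sizing is also simple arithmetic. A minimal Python sketch, using the NVMe figures quoted above:

```python
import math

DEVICE_GB = 375  # local SSDs come only in fixed 375GB devices

def devices_for_capacity(target_gb: int) -> int:
    # Capacity is added in whole devices.
    return math.ceil(target_gb / DEVICE_GB)

def nvme_read_iops(devices: int) -> int:
    # 453 read IOPS per gigabyte, capped at 680,000 read IOPS per instance.
    return min(devices * DEVICE_GB * 453, 680_000)

print(devices_for_capacity(1_000))  # 3 devices (1,125GB raw)
print(nvme_read_iops(3))            # 509,625 read IOPS
```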
