Briefing: Cloud storage performance metrics
We look at some of the key storage performance metrics that determine what your applications will get from cloud storage, plus cloud essentials that are beyond simple measurement
About 50% of business data is now stored in the cloud – and the volume stored using cloud technologies is higher still when private and hybrid clouds are factored in.
Cloud storage is flexible and potentially cost-effective. Organisations can pick from the hyperscalers – Amazon Web Services, Google’s GCP and Microsoft Azure – as well as local or more specialist cloud providers.
But how do we measure the performance of cloud storage services? When storage is on-premise, there are numerous well-established metrics that allow us to keep track of its performance. In the cloud, things can be less clear.
That is partly because, when it comes to cloud storage, choice brings complexity. Cloud storage comes in a range of formats, capacities and performance levels, including file, block and object storage, hard drive-based systems, VM storage, NVMe and other SSDs, and even tape, as well as technology that works on a “cloud-like” basis on-premise.
This can make comparing and monitoring cloud storage instances harder than for on-premise storage. As well as conventional storage performance metrics, such as IOPS and throughput, IT professionals specifying cloud systems need to account for criteria such as cost, service availability, and even security.
Conventional storage metrics
Conventional storage metrics also apply in the cloud. But they can be rather harder to unpick.
Enterprise storage systems have two main “speed” measurements: throughput and IOPS. Throughput is the data transfer rate to and from storage media, measured in bytes per second; IOPS measures the number of reads and writes – input/output (I/O) operations – per second.
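As a rough illustration, the two measures are linked by the I/O block size a workload uses: throughput is approximately IOPS multiplied by the size of each operation. The short Python sketch below uses assumed figures purely to show the relationship.

# Rough illustration of the throughput/IOPS relationship.
# The IOPS figure and block size are assumptions, not benchmark results.
iops = 10_000            # hypothetical sustained I/O operations per second
block_size_kib = 64      # hypothetical size of each I/O, in KiB

throughput_mib_s = iops * block_size_kib / 1024
print(f"~{throughput_mib_s:.0f} MiB/s at {iops:,} IOPS with {block_size_kib} KiB I/O")
# ~625 MiB/s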
In these measurements, hardware manufacturers distinguish between read speeds and write speeds, with read speeds usually faster.
Hard disk, SSD and array manufacturers also distinguish between sequential and random reads or writes.
These metrics are affected by such things as the movement of read/write heads over disk platters, and by the need to erase existing data on flash storage. Random read-and-write performance is usually the best guide to real-world performance.
Hard-drive manufacturers quote revolutions per minute (rpm) figures for spinning disks, typically 7,200rpm for mainstream storage, 10,000rpm or 15,000rpm for higher-grade enterprise systems, and 5,400rpm for lower-performance hardware. These measures do not apply to solid-state storage, however.
Broadly, the higher the IOPS, the better the system performs. Spinning disk drives usually fall in the 50 to 200 IOPS range.
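A back-of-envelope calculation shows why spinning disks sit in that range: each random I/O has to wait for the head to seek and for the platter to rotate. The seek time below is an assumption for the sake of the example.

# Simplified estimate of random IOPS for a 7,200rpm disk.
# Average seek time is assumed; real drives and arrays will differ.
rpm = 7200
avg_seek_ms = 8.5                          # assumed average seek time
avg_rotation_ms = (60_000 / rpm) / 2       # half a revolution on average, ~4.2ms

service_time_ms = avg_seek_ms + avg_rotation_ms
print(f"~{1000 / service_time_ms:.0f} random IOPS")   # roughly 79 IOPS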
Solid-state systems are significantly faster. On paper, a high-performance flash drive can reach 25,000 IOPS or even higher. Real-world performance differences will be smaller, however, once storage controller, network and other overheads such as the use of RAID and cache memory are factored in.
Latency is the third key performance measure. It describes how quickly each I/O request is carried out: typically 10ms to 20ms for an HDD-based system, and a few milliseconds or less for SSDs. Latency is often the most important metric in determining whether storage can support an application.
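The three measures are connected. As a simplified sketch (Little's law, with assumed figures), the IOPS a volume can sustain is bounded by how many I/Os an application keeps in flight divided by the latency of each one.

# Simplified queueing view: sustained IOPS <= outstanding I/Os / per-I/O latency.
# Queue depth and latency below are assumptions for illustration.
queue_depth = 32          # hypothetical I/Os kept in flight by the application
latency_s = 0.001         # hypothetical 1ms per I/O

max_iops = queue_depth / latency_s
print(f"~{max_iops:.0f} IOPS ceiling at {latency_s * 1000:.0f}ms latency")   # ~32,000 IOPS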
Cloud metrics
But translating conventional storage metrics into the cloud is rarely straightforward.
Usually, buyers of cloud storage will not know exactly how their capacity is provisioned. The exact mix of flash, spinning disk and even tape or optical media is down to the cloud provider, and depends on its service levels.
Most large-scale cloud providers operate a blend of storage hardware, caching and load-balancing technologies, making raw hardware performance data less useful. Cloud providers also offer different storage formats – mostly block, file and object – making performance measurements even harder to compare.
Measures will also vary with the types of storage an organisation buys because the hyperscalers now offer several tiers of storage, based on performance and price.
Then there are service-focused offerings, such as backup and recovery, and archiving, which have their own metrics, such as recovery time objective (RTO) or retrieval times.
The easiest area for comparisons, at least between the large cloud providers, is block storage.
Google’s Cloud Platform, for example, lists maximum sustained IOPS, and maximum sustained throughput (in MBps) for its block storage. This is further broken down into read and write IOPS, and throughput per GB of data and per instance. But as Google states: “Persistent disk IOPS and throughput performance depends on disk size, instance vCPU count, and I/O block size, among other factors.”
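In practice, that means provisioned performance scales with the size of the disk until an instance-level ceiling is reached. The sketch below illustrates the mechanism; the per-GB rate and the instance cap are placeholder values, not Google's published figures.

# Illustration of per-GB performance scaling for provisioned block storage.
# Rates and caps are placeholders, not any provider's published figures.
disk_size_gb = 600
read_iops_per_gb = 30           # assumed per-GB rate
per_instance_cap = 15_000       # assumed instance-level ceiling

provisioned_read_iops = min(disk_size_gb * read_iops_per_gb, per_instance_cap)
print(f"{provisioned_read_iops:,} read IOPS")   # 15,000, capped by the instance limit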
Google also lists a useful comparison of its infrastructure performance against a 7,200rpm physical drive.
Microsoft publishes guidance aimed at IT users that want to monitor its Blob (object) storage, which serves as a useful primer on storage performance measurement in the Azure world.
AWS has similar guidance based around its Elastic Block Store (EBS) offering. Again, this can guide buyers through the various storage tiers, from high-performance SSDs to disk-based cold storage.
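As a sketch of what that monitoring looks like in practice, the Python snippet below pulls an hour of read-operation counts for a single EBS volume from Amazon CloudWatch using boto3 and converts them to average IOPS. The volume ID is a placeholder, and the five-minute period is simply a choice for the example.

# Minimal sketch: read-operation counts for one EBS volume, via CloudWatch.
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EBS",
    MetricName="VolumeReadOps",
    Dimensions=[{"Name": "VolumeId", "Value": "vol-0123456789abcdef0"}],  # placeholder ID
    StartTime=end - timedelta(hours=1),
    EndTime=end,
    Period=300,                    # five-minute buckets
    Statistics=["Sum"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], f"~{point['Sum'] / 300:.0f} average read IOPS")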
Cost, service availability… and other useful measures
As cloud storage is a pay-as-you-use service, cost is always a key measurement.
Again, all the main cloud providers have tiers based on cost and performance. AWS, for example, has gp2 and gp3 general-purpose SSD volumes, io1 and io2 performance-optimised volumes, and st1 throughput-focused HDD volumes, aimed at “large, sequential workloads”. Buyers will want to compile their own cost and performance analysis in order to make like-for-like comparisons.
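One way to start that analysis is a simple script that normalises each tier to a monthly cost for the capacity and performance required. The prices and baseline IOPS figures below are placeholders to show the shape of the comparison, not AWS list prices.

# Sketch of a like-for-like tier comparison. All figures are placeholders.
tiers = {
    "gp3": {"usd_per_gb_month": 0.08, "baseline_iops": 3000},
    "io2": {"usd_per_gb_month": 0.125, "baseline_iops": 0},    # IOPS bought separately
    "st1": {"usd_per_gb_month": 0.045, "baseline_iops": 500},
}

size_gb = 1000
for name, tier in tiers.items():
    monthly_cost = size_gb * tier["usd_per_gb_month"]
    print(f"{name}: ${monthly_cost:,.2f}/month for {size_gb} GB, "
          f"{tier['baseline_iops']:,} baseline IOPS")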
But there is more to cloud storage metrics than cost and performance. The cost per GB or instance needs to be considered alongside other fees, including data ingress and especially data egress, or retrieval, costs. Some very cheap long-term storage offerings can become very expensive when it comes to retrieving data.
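A quick worked example shows the effect. The storage and egress prices below are illustrative assumptions, not any provider's actual rates, but the pattern, a modest monthly bill dwarfed by a single large restore, is what buyers should model.

# Illustrative only: retrieval fees can dominate the bill for cheap archive tiers.
archived_tb = 50
storage_usd_per_gb_month = 0.004      # assumed archive-tier storage price
egress_usd_per_gb = 0.09              # assumed retrieval/egress price

gb = archived_tb * 1024
print(f"Storage: ${gb * storage_usd_per_gb_month:,.0f}/month")    # ~$205/month
print(f"One full restore: ${gb * egress_usd_per_gb:,.0f}")        # ~$4,608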
A further measure is usable capacity: how much of the purchased storage is actually available to the client application, and at what point utilisation starts to affect real-world performance. Again, this might differ from figures for on-premise technology.
CIOs will also want to look at service availability. Storage component and sub-system reliability is traditionally measured in mean time between failures (MTBF) or, for SSDs, the newer terabytes written (TBW) endurance rating.
But for large-scale cloud provision, availability is a more common and useful measure. Cloud providers are increasingly using datacentre or telecoms-style availability or uptime measures, with “five nines” (99.999% uptime) often the best and most expensive SLA.
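Converting those “nines” into permitted downtime makes the differences concrete. The arithmetic below ignores leap years and scheduled maintenance windows.

# What availability SLAs mean in allowed downtime per year (ignoring leap years).
minutes_per_year = 365 * 24 * 60
for label, availability in [("three nines", 0.999), ("four nines", 0.9999), ("five nines", 0.99999)]:
    downtime = minutes_per_year * (1 - availability)
    print(f"{label}: ~{downtime:.1f} minutes of downtime per year")
# three nines: ~525.6, four nines: ~52.6, five nines: ~5.3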
Even then, these metrics are not the only factors to take into account. Buyers of cloud storage will also need to consider geographical location, redundancy, data protection and compliance, security, and even the cloud provider’s financial robustness.
Although these are not performance measures in the conventional sense, if a provider falls short on any of them, it could be a barrier to using its service at all.