Cloud storage on-site hardware: AWS, Azure, Google Cloud
We look at the big three cloud vendors’ on-site offerings: AWS’s Outposts, Storage Gateway and Snow hardware, Azure’s Stack appliances and Arc software, and Google Cloud’s software-defined Anthos
Cloud computing has transformed IT. But cloud and on-premises infrastructure are being seen less as either/or and increasingly as complementary.
There are good reasons to keep infrastructure on-site, especially when it comes to storage. These include regulatory constraints, minimising latency, architectural compatibility, and a desire to make use of existing investments.
There are good reasons, too, for investing in the cloud. Scalability, resilience, and a pay-as-you-use model are attractive, even if on a per-gigabyte basis the cloud is not always cheaper.
More businesses are looking at hybrid cloud infrastructure and hybrid cloud storage as a way to achieve the best of both worlds.
Often, hybrid cloud storage is implemented by storage vendors making their offerings cloud-compatible, with the cloud accessible as a storage tier. Now, though, traffic is starting to move in the opposite direction: the big three cloud vendors – AWS, Azure and Google Cloud Platform – have extended their technology to customer premises.
Adopting cloud vendor technology on-site lets IT teams apply the same interfaces, management tools and provisioning to local and cloud resources alike. This covers compute as well as storage, primarily through virtual machines or containers.
The cloud vendors’ aim is to make local IT resources more efficient and easier to run.
Local performance
Analyst firm IDC says cloud vendors are driven by the need to provide “local performance at any location without adding application complexity” in addition to “cloud anywhere”.
Cloud vendors are moving into the on-premises hardware market to address some of the performance issues inherent in cloud services, such as latency and, for storage, read and write performance.
Use cases for cloud vendor hardware include any applications where performance is critical, or data needs to be stored locally.
This extends from enterprise applications to analytics that involve sensitive information. Connecting these applications to a cloud storage pool should also make it easier to archive older data and to deal with spikes in demand.
The limitation is that the workload needs to run in a VM for AWS and Azure, or in a Kubernetes container for Google Cloud Platform. There is, as yet, no bare metal support.
The big three: The approaches
Buyers also need to navigate the three vendors’ quite different approaches. Azure, for example, has two distinct ways to deliver on-premises services.
Azure and GCP support multiple hardware vendors, and also multiple clouds.
AWS sees its Outposts hybrid cloud as a single-vendor stack. Google recently announced support for AWS, but not yet Azure, through Anthos.
Azure Arc will also let users deploy onto AWS or Google infrastructure and manage it through Arc.
These features, though, are still new, so storage managers might want to prioritise working with a single cloud vendor, and using familiar cloud tools, over multiple clouds – at least for now.
AWS
AWS provides three on-premises options:
- Outposts
- Storage Gateway
- Snow
Outposts gives users who need low-latency services or local storage access to AWS infrastructure on their own premises. The hardware and tools are, AWS says, identical to those of its cloud offerings.
Outposts hardware is connected to the customer’s nearest AWS region to make management across on-premises hardware and cloud as seamless as possible. IT teams can order Outposts hardware from their AWS console. Amazon plans to add a VMware-compatible version of Outposts this year.
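To illustrate how Outposts capacity surfaces through the same AWS interfaces used for in-region resources, here is a minimal sketch using boto3, the AWS SDK for Python. It assumes credentials are already configured; the region name is illustrative and the exact response fields may vary by SDK version.

```python
# Minimal sketch: list the Outposts registered to an account and the EC2
# instance types each can host, via boto3 (assumes configured credentials).
import boto3

outposts = boto3.client("outposts", region_name="eu-west-1")  # region is illustrative

# Outposts racks appear alongside regional resources in the same APIs and console
for outpost in outposts.list_outposts().get("Outposts", []):
    print(outpost.get("Name"), outpost.get("OutpostArn"), outpost.get("AvailabilityZone"))

    # On-premises capacity is expressed as the same instance types used in-region
    types = outposts.get_outpost_instance_types(OutpostId=outpost["OutpostId"])
    for item in types.get("InstanceTypes", []):
        print("  ", item.get("InstanceType"))
```

The point is that the on-premises racks are managed with the same API calls, credentials and tooling as any other AWS resource.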
Storage Gateway connects on-premises hardware to the cloud to create a hybrid storage pool. Amazon says this helps customers cut costs by giving access to AWS storage for applications such as backup and archiving.
AWS has three types of gateway: tape, file and volume. These can run in a virtual machine or on Amazon’s Storage Gateway hardware appliance, which is based on a Dell EMC PowerEdge R640XL with two 10-core Intel Xeon processors, 128GB of RAM and 5TB of SSD storage.
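As a rough illustration of how such a hybrid storage pool is administered through the same AWS APIs, the sketch below uses boto3 to enumerate deployed gateways and the file shares they expose to local clients. Credentials are assumed to be configured and the region is a placeholder.

```python
# Minimal sketch: list Storage Gateways by type (file, volume or tape),
# then the file shares exposed to on-premises applications.
import boto3

sgw = boto3.client("storagegateway", region_name="eu-west-1")  # region is a placeholder

for gw in sgw.list_gateways().get("Gateways", []):
    print(gw.get("GatewayName"), gw.get("GatewayType"), gw.get("GatewayARN"))

# File gateways present S3-backed storage as NFS/SMB shares to local applications
for share in sgw.list_file_shares().get("FileShareInfoList", []):
    print(share.get("FileShareType"), share.get("FileShareARN"))
```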
AWS Snow consists primarily of appliances used to transfer data into the AWS cloud. These range from the Snowball edge computing and data transfer device, with either 80TB or 42TB of block storage, to the 100PB truck-mounted Snowmobile. Users do not buy Snow devices outright; the cost is included in setup fees for the relevant AWS services.
Azure
Azure provides two ways to combine cloud and on-premises resources:
- Stack
- Arc
Stack has three components: Stack Edge, Stack HCI and Stack Hub. Stack runs on validated Azure hardware.
Stack Edge is for edge computing applications, including machine learning and IoT. It provides data transfer from the edge to Azure’s cloud.
Stack HCI is Microsoft’s hyper-converged architecture, which supports virtualised applications and their storage. Microsoft also positions it as a platform for on-premises architecture modernisation.
Stack Hub allows IT teams to use Azure tools to run Azure applications on premises, and also to manage on-site storage where data sovereignty or regulatory requirements demand it.
Azure Arc extends Azure management tools (the Azure Resource Manager) to Windows and Linux servers, and Kubernetes. Arc supports multi-cloud, on-premises and edge computing deployments. Arc allows businesses to store data locally but still manage it through Azure tools.
Arc does not require specific hardware. It will work with validated Stack systems as well as legacy equipment.
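As a sketch of what managing local machines through Azure tools looks like in practice, the snippet below lists Arc-enabled servers through the standard Azure Resource Manager API using the Azure SDK for Python. The subscription ID is a placeholder, and it assumes the servers have already been connected to Arc.

```python
# Minimal sketch: Arc-enabled servers show up as ordinary Azure resources
# under the Microsoft.HybridCompute resource provider, so they can be
# queried with the same Resource Manager API as cloud resources.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

credential = DefaultAzureCredential()
client = ResourceManagementClient(credential, "<subscription-id>")  # placeholder

arc_servers = client.resources.list(
    filter="resourceType eq 'Microsoft.HybridCompute/machines'"
)
for machine in arc_servers:
    print(machine.name, machine.location, machine.tags)
```

Because the machines are registered as Azure resources, policies, tags and role-based access control apply to them in the same way as to cloud-hosted resources.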
Anthos
Google Cloud Platform’s hybrid cloud architecture is called Anthos and is based around the Kubernetes container orchestration platform. This allows containers to run on on-premises hardware or in Google’s cloud infrastructure.
Google does not provide its own on-premises hardware to customers, either for compute or storage. Instead, Anthos uses third-party technology.
For storage, Google announced its Anthos Ready Storage and Anthos Ready Platform partners earlier this year. Storage vendors include Dell EMC, HPE, NetApp, Portworx, Pure Storage and Robin.io.
These vendors, Google says, all use the Container Storage Interface (CSI) to provide persistent storage.
Although not specific to storage, Anthos Ready Platform vendors include Atos, Cisco, Dell EMC, HPE, Intel, Lenovo, NetApp, and Nutanix. These vendors have validated Anthos on their stacks.
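To show how CSI-based persistent storage is consumed on an Anthos cluster, here is a minimal sketch using the official Kubernetes Python client. The StorageClass name "partner-csi-fast" is hypothetical; in practice it would come from whichever partner CSI driver is installed on the cluster.

```python
# Minimal sketch: request persistent storage from an Anthos (Kubernetes)
# cluster via a CSI-backed StorageClass, using the Kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()  # uses the Anthos cluster's kubeconfig context

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="analytics-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="partner-csi-fast",  # hypothetical CSI-backed class
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```

The same claim works whether the CSI driver is backed by an on-premises array or by cloud storage, which is what makes the container-centric approach portable.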
Summary: In-house or cloud?
For CIOs, the big three cloud providers’ on-premises systems offer an alternative way to streamline storage (and compute) management through use of cloud tools.
This should also provide better integration between local storage arrays and cloud storage pools. Potentially this can cut costs, improve resilience and allow for rapid access to cloud capacity.
There are some limitations, however. Not all workloads will benefit from this integration; that will depend on storage throughput and application latency requirements, and the gains from streamlined infrastructure management will depend on application compatibility with the cloud vendors’ offerings. A business with a large percentage of its storage outside the cloud might not reap the full benefits.
IT teams also need to negotiate the vendors’ different approaches. AWS has the most tightly controlled offering, Azure is the most flexible, and Google is an attractive option for containerised workloads.
But it’s early days, and the big three are sure to invest more in making their platforms appeal to enterprises that want to streamline their storage.
Analyst view: Scott Sinclair, senior analyst at ESG
What is driving the trend for cloud companies to get into the hardware business, directly or indirectly?
Sinclair: “This is being driven by their customers. Business and IT teams value a lot of what the cloud offers, but often have some workloads they wish to remain on premises. Instead of maintaining two separate environments, organisations want to standardise and they want consistency both on- and off-premises. The ability to deploy technology from the public cloud providers on premises helps provide that.”
What are the pros and cons for customers?
Sinclair: “The pros and cons vary based on the solution, the app, and what it is being compared to. A top consideration, however, should be what environment is your organisation more familiar with? The idea is to leverage commonality to provide greater simplicity and reduce the personnel burden of managing different environments. Your organisation’s existing skill set should be considered in that evaluation process.”
What variants on the theme are there? Why are the vendors taking different paths?
Sinclair: “It’s still early, and I expect each offering to expand its capabilities over time. Still some early variations do exist, with each variation offering possible insight on where each vendor likely perceives its relative opportunity in the market. Google Anthos, for example, supports a variety of heterogeneous infrastructure offerings both on- and off-premises. AWS Outposts, however, focused on delivering an on-premises version of its off premises technology which many IT organisations are already familiar with. And for Microsoft Azure Stack, users can choose from a variety of hardware options often from partners.”