Cloud bursting: What it is and how to do it
Cloud bursting offers the ability to rapidly scale compute and/or storage from on-premise to public cloud capacity. But what are the key pitfalls, and how can they be avoided?
The public cloud has quickly established itself as an easy and frictionless way to build out IT infrastructure.
If you already have on-premise systems, at some point there will be a desire to integrate them with off-premise offerings.
One way to do this is through cloud bursting – but exactly what is it, and what does it mean to “burst to the cloud”?
The term cloud bursting isn’t new – it has been discussed in enterprise IT for perhaps the past 10 years.
Bursting to the cloud means expanding on-premise workloads by moving some (or all) of that activity to the public cloud. Generally, this is done to cope with rapid workload growth, such as peak demand.
It’s also possible to use cloud bursting to aid workload migrations, where applications move partially or wholly to the cloud to relieve the load on on-premise kit during upgrades or replacements.
The “on-demand” model of cloud bursting provides the ability to cater for spikes or peaks in workload demand without having to retain lots of unused and expensive equipment onsite.
Website traffic
If peaks in website traffic, for example, are only seen three or four times a year, it makes sense to manage these requirements with on-demand infrastructure that is paid for only during the peaks.
When demand diminishes, cloud resources can be switched off. This represents a huge saving compared with having equipment that is either rarely used or active all the time, consuming space and power.
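As a hedged sketch of what switching burst capacity on and off can look like in practice, the Python below uses the AWS Auto Scaling API (via boto3) to raise capacity ahead of a known peak and release it once demand diminishes. The group name and instance counts are assumptions for illustration, not a prescribed setup.

```python
# A minimal sketch, assuming an existing AWS Auto Scaling group named
# "burst-pool" that fronts the peak-period workload. Numbers are illustrative.
import boto3

autoscaling = boto3.client("autoscaling")

def scale_for_peak(active: bool) -> None:
    """Raise capacity for a known peak, or release it afterwards."""
    desired = 20 if active else 0  # pay for burst instances only during the peak
    autoscaling.set_desired_capacity(
        AutoScalingGroupName="burst-pool",  # hypothetical group name
        DesiredCapacity=desired,
        HonorCooldown=False,
    )

scale_for_peak(True)   # ahead of a predicted peak, such as a seasonal sale
scale_for_peak(False)  # once demand diminishes, switch the resources off
```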
A third scenario is to use cloud bursting to avoid on-premise datacentre expansion.
Imagine a scenario where growth in compute demand would require building or expanding on-premise capabilities. It may make sense to move some of this workload to the public cloud to defer the capital spend.
This is not strictly a cloud bursting scenario because, by definition, bursting implies that workload is moved to the cloud for a temporary period and eventually brought back on-premise. However, it could serve as a temporary measure while an existing datacentre is upgraded.
The myth of cloud bursting
While cloud bursting seems like a great idea, in reality the process is quite difficult.
Many applications simply aren’t designed to be distributed across two or more compute environments at the same time because they are generally “monolithic” in nature.
Think of systems built on top of a large relational database, for example. Taking these to the cloud would mean moving the entire application. Even where the tiers of an application could be separated – web tier from application logic and database – the latency the cloud introduces between these layers could make cloud bursting a challenge.
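A back-of-the-envelope calculation illustrates the point for a “chatty” page render that issues its database queries sequentially. The figures below are illustrative assumptions, not measurements.

```python
# Illustrative arithmetic only: why splitting application tiers across
# sites hurts a chatty application. All figures are assumptions.
QUERIES_PER_PAGE = 50   # sequential database calls per page render
LAN_RTT_MS = 0.5        # typical same-datacentre round trip
WAN_RTT_MS = 30.0       # plausible on-premise-to-cloud round trip

lan_wait = QUERIES_PER_PAGE * LAN_RTT_MS   # 25 ms of network wait
wan_wait = QUERIES_PER_PAGE * WAN_RTT_MS   # 1,500 ms of network wait

print(f"Same site: {lan_wait:.0f} ms; split tiers: {wan_wait:.0f} ms")
```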
So, although many organisations may talk about cloud bursting, few implement the process in a truly dynamic fashion. In reality, many cloud bursting projects focus on moving entire applications or application groups into the public cloud on a semi-permanent basis.
Cloud bursting and storage
How is data storage affected in cloud bursting scenarios?
First of all, storage plays a big part in enabling applications to be moved to and from public cloud. The process of bursting an application to the public cloud is generally based on either moving the application and data together or moving the data to another application instance already in place.
As an example, most applications today are packaged as virtual machines (VMs). Suppliers such as Velostrata (acquired by Google), Zerto and Racemi all provide capabilities to move entire VMs into the cloud.
The cloud providers have their own solutions for this too. Some of these tools are focused on moving the entire VM in a one-time process. However, Velostrata, for example, provides the capability to move only active data and bring updates to the VM back on-premise in a truly dynamic fashion.
This capability highlights one of the major issues with this kind of migration: keeping applications and data in sync.
Moving an entire virtual machine (or groups of VMs) across the network is expensive and time-consuming. This is especially true when moving virtual machines back on-premise, because hyper-scale cloud providers charge for data egress, making application and data repatriation less palatable.
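A rough calculation shows the scale of the problem. The per-gigabyte egress rate below is an assumption for illustration only; actual prices vary by provider and volume.

```python
# Illustrative egress arithmetic; the rate is an assumption, not a quote.
EGRESS_PER_GB = 0.09  # assumed hyper-scale egress price in $/GB

def repatriation_cost(vm_size_gb: float, vm_count: int) -> float:
    """Data-transfer cost of moving a group of VMs back on-premise."""
    return vm_size_gb * vm_count * EGRESS_PER_GB

# Bringing fifty 200 GB virtual machines back on-premise:
print(f"${repatriation_cost(200, 50):,.2f}")  # $900.00 in egress alone
```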
There’s also the time aspect to consider. Generally, applications have to be unavailable when moving to/from the public cloud, and that can be a problem. Extended outages aren’t popular with users and need to be mitigated as much as possible.
Storage-focused cloud bursting
How about just moving the data to the public cloud?
Simply using public cloud as an extension of on-premise storage has been around for some time. Backup suppliers, as well as primary and secondary storage suppliers, all provide the capability to push data to the public cloud as a form of archive.
This is good from the perspective of controlling costs for inactive data, but what about active applications?
A few things need to be considered to make active storage cloud bursting practical.
The first is having a consistent view of the data. This means managing the metadata associated with the data. For block storage, this requires tracking and accessing the latest version of any individual block. For files and object stores, this means knowing the most current version of a file or object.
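One way to picture this metadata is as a map from each block, file or object to its latest version and location. The Python below is a minimal illustrative sketch, not any particular supplier’s implementation.

```python
# A minimal sketch of version-tracking metadata; structure is illustrative.
from dataclasses import dataclass

@dataclass
class ItemVersion:
    version: int    # monotonically increasing per item
    location: str   # which endpoint holds the latest copy

metadata: dict[str, ItemVersion] = {}

def record_write(key: str, location: str) -> None:
    """Every data update bumps the item's version in the metadata map."""
    current = metadata.get(key)
    metadata[key] = ItemVersion(current.version + 1 if current else 1, location)

record_write("block-0x7f3a", "on-premise")  # version 1 lives on-premise
record_write("block-0x7f3a", "cloud")       # version 2 now lives in the cloud
```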
Metadata consistency is a challenge, because all data updates generate a metadata change, whether this is information on a new file or updates to an existing one. These changes have to be distributed across all endpoints for the data as quickly and efficiently as possible. This leads us to another issue with metadata management – locking.
To ensure that two locations don’t attempt to update the same content at the same time, one or the other gains a lock on the data and the rest have to wait.
This locking process can introduce significant – and unacceptable – latency. The alternative is to dispense with locking and either make one copy read-only or, as seen with object stores, use “last writer wins”, where the most recent update becomes the current copy of the data.
“Last writer wins” is an acceptable solution for storage platforms like object stores, but totally impractical for block-based storage solutions, where data consistency is determined by ensuring every read and write is accurately reflected in time-series order.
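As a rough illustration, “last writer wins” can be sketched in a few lines: each write carries a timestamp, and a later write silently replaces an earlier one. The store and keys below are hypothetical.

```python
# A minimal last-writer-wins sketch; keys and timestamps are illustrative.
store: dict[str, tuple[float, bytes]] = {}

def put(key: str, value: bytes, ts: float) -> None:
    """Keep a write only if it is newer than the copy already held."""
    existing = store.get(key)
    if existing is None or ts > existing[0]:
        store[key] = (ts, value)  # the later writer silently wins

# Two sites update the same object; the later timestamp becomes current
# and the earlier update is lost - acceptable for objects, not for blocks.
put("report.pdf", b"on-premise edit", ts=100.0)
put("report.pdf", b"cloud edit", ts=105.0)
assert store["report.pdf"][1] == b"cloud edit"
```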
Data protection
One final consideration in building a distributed storage and application architecture is to understand how to recover from failure.
What happens if an on-premise server fails? What happens if the cloud provider has an outage?
When data sits in multiple places, it can be hard to know where the last consistent copy of data exists if one of those platforms goes down. Failure scenarios need to be well understood to avoid data loss.
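One way to reason about such failures is to have every endpoint report a version counter plus whether its copy was cleanly written, then pick the newest clean copy. The sketch below is illustrative and assumes each platform can report that state.

```python
# A hedged sketch of post-outage recovery; the reported state is assumed.
replicas = {
    "on-premise": {"version": 41, "consistent": True},
    "cloud":      {"version": 42, "consistent": False},  # torn mid-write
}

def last_consistent_copy(replicas: dict) -> str:
    """Pick the newest copy that was cleanly written."""
    clean = {name: r for name, r in replicas.items() if r["consistent"]}
    if not clean:
        raise RuntimeError("no consistent copy survives - data loss")
    return max(clean, key=lambda name: clean[name]["version"])

print(last_consistent_copy(replicas))  # "on-premise": newer cloud copy is torn
```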
Cloud bursting storage solutions
How are suppliers tackling storage cloud bursting?
The main cloud providers identified the requirement at an early stage. Amazon Web Services (AWS) has a storage gateway product that deploys as a virtual machine in the on-premise datacentre and is exposed to local applications as an iSCSI logical unit number (LUN). Data is archived back to AWS and can be accessed remotely there. The AWS Storage Gateway now caters for file and virtual tape formats.
Microsoft acquired StorSimple some years ago to provide similar iSCSI capabilities to the AWS Storage Gateway. More recently, the company acquired Avere Systems for its vFXT technology that allows on-premise file systems to be extended to the public cloud.
Storage suppliers including NetApp (Data Fabric), Scality (with Zenko), Elastifile (CloudTier) and Cloudian (HyperFile/HyperStore) provide the ability to span on-premise and public cloud and to move data on demand. There are many more examples of similar solutions across the industry.
Looking forward
In the future, we will see applications rewritten specifically to be distributed across multiple public clouds and on-premise locations. In this scenario, cloud bursting will be an intrinsic feature of the design.
In the meantime, storage suppliers are moving us closer to a real-time distributed data ecosystem, albeit with proprietary solutions.