The ins and outs of cloud portability
Public cloud portability comes in many forms – the secret is to research and plan appropriately
Cloud providers build their offerings differently. It’s a classic “where standards lack, innovation and lock-in rule” situation.
Lock-in isn’t necessarily evil; if costs remain stable and the added value is so clear and continuous that the user doesn’t want to move, then lock-in isn’t a problem.
However, if value declines, more cost-effective alternatives emerge, or the dynamics of the supplier relationship change, then the ability to switch to another provider becomes more important.
Switching providers today is certainly achievable for basic workloads and is a prerequisite competence for secondary providers to serve customers moving away from major players.
The true lock-in points are the templates and the higher-level services, such as database, abstraction and automation, queuing, and monitoring, that make your workloads more meaningful.
Fewer choices
Choosing value and innovation can, ultimately, lead to fewer choices later. These services and inconsistencies make portability a major work in progress today. The portability challenge also includes more aggressive movement patterns, such as bursting and brokering cloud services.
Portability can be categorised in two ways. First, there is one-time movement. This involves moving a cloud-based application from one provider or environment to an alternative, with no intention of moving that workload back. The second form of portability is where frequent movement is required. This is where it is necessary to move a cloud-based application rapidly, based on real-time arbitrage of cloud prices or your current infrastructure usage, either between two providers or between deployment environments of a single provider.
One-time movement is common
Portability from one provider to another is a common demand of early cloud adopters and is very achievable with current tooling and template conversions. The difficulty is mapping out a complex application that leverages services specific to a single cloud provider. Heavy public cloud users entangled in various services and templates explain that their lock-in is real. However, many are satisfied because pricing is consistent and value from the products is high. When, or if, these circumstances change, demand for movement from one provider to another will be higher.
The current state of one-time movement has a number of problems. First is template inconsistency. Each cloud provider uses its own format and tooling to create application and infrastructure templates, many of which have different versions for their own public and private offerings. Moving a workload requires converting the template of one provider into the equivalent template in your new cloud environment. Advisory services and supplier-provided tooling can help transpose basic cloud services between formats; however, more complex applications paired with services often lose a significant amount of their value during the conversion. Standards efforts, such as the Distributed Management Task Force’s (DMTF’s) Open Virtualization Format (OVF) and OASIS’s TOSCA, look to standardise the packaging format of virtual machine (VM)-based software to enable future compatibility. Each provides standardisation that may one day enable a more portable future, but suppliers have no consistent template to follow for repeatable portability.
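To make the template-conversion problem concrete, here is a minimal sketch showing the same virtual machine described in two hypothetical provider formats and a naive conversion between them. All field names and size mappings are illustrative, not any provider's real schema:

```python
# A sketch of the template-conversion problem. "Provider A" and
# "Provider B" formats below are invented for illustration only.

source_template = {  # hypothetical "Provider A" format
    "Resources": {
        "WebServer": {
            "Type": "VirtualMachine",
            "Size": "medium",
            "Image": "ubuntu-22.04",
        }
    }
}

def convert(template: dict) -> dict:
    """Naively transpose Provider A resources into a Provider B layout.

    Basic compute fields map cleanly; anything provider-specific
    (managed databases, queues, functions) has no equivalent target
    and is silently dropped - which is where conversions lose value.
    """
    size_map = {"small": "B1", "medium": "B2", "large": "B4"}
    converted = {"resources": []}
    for name, res in template["Resources"].items():
        if res["Type"] == "VirtualMachine":
            converted["resources"].append({
                "name": name,
                "kind": "vm",
                "sku": size_map[res["Size"]],
                "os_image": res["Image"],
            })
        # Non-basic resources fall through here: nothing to map them to.
    return converted

print(convert(source_template))
```

The dropped-resource branch is the crux: the richer a template's use of provider-specific services, the less of it survives conversion.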
The second problem area is services and ecosystem inconsistency. Basic storage and compute products are easy to map and convert to a new cloud service. Network configuration can be time-intensive and largely manual, but this is often done with supplier support. The services and ecosystem outside the basic project present the real issue. Services and ecosystem players are valuable. They enhance the solution and vastly reduce the developer time required to create even a basic equivalent solution. Each service is unique to the provider, making its use dependent on your use of that provider. However, most users don’t have deep enough knowledge of specific public cloud providers to identify these inflection points, the differences between specific services. Leading-edge enterprises not only understand these inflection points but map the value to the total cost of moving off the solution to determine whether it is a worthy trade-off.
Immature container support across cloud providers is another common issue. The concept of containers is not new. Recent momentum behind operating system-level solutions, such as Docker, has developed from their ease of use, applicability to anything running on a Linux operating system, and timeliness in addressing application portability. Containers are stateless ways to package applications to establish abstraction from underlying environments. And although this enables portability, it also presents challenges. Workloads need context to meet requirements and follow policy. Other supporting portability efforts will help containers remain stateless as an entity and still provide the context necessary for enterprise workloads. Containers will likely have a place in your cloud portability story, but security remains a work in progress and we are yet to see the establishment of the long-term container players.
Expect to see plenty of churn in this market. So far, usage is primarily limited to test phases of the development cycle and for packaging legacy monolithic applications for easier development around these workloads.
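One common way to keep a container image stateless while still giving the workload the context it needs is to inject environment-specific settings at run time rather than baking them into the image. A minimal sketch, with illustrative variable names:

```python
import os

# The image itself carries no environment-specific state; each
# deployment target (on-premise, any public cloud) supplies context
# at run time via environment variables, so the same artefact can
# move between environments unchanged.

def load_context() -> dict:
    """Read deployment context injected by the orchestrator.

    The variable names here (DEPLOY_REGION, DATABASE_URL, LOG_LEVEL)
    are examples, not a standard.
    """
    return {
        "region": os.environ.get("DEPLOY_REGION", "unknown"),
        "db_url": os.environ.get("DATABASE_URL", ""),
        "log_level": os.environ.get("LOG_LEVEL", "INFO"),
    }

ctx = load_context()
print(f"starting in {ctx['region']} with log level {ctx['log_level']}")
```

The pattern is what lets the container stay portable: policy and environment live outside the image, supplied by whichever platform runs it.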
The final migration challenge is the trap of application stickiness. While no cloud provider wants you to leave, if the provider focuses primarily on infrastructure, you have an easier opportunity to leave that service. The better cloud providers lock you in at the application level. If your applications are using any of the myriad application services from Amazon Web Services (AWS), such as Lambda or Simple Notification Service, migration from AWS to another cloud platform is very unlikely. You are stuck there.
The same applies to other cloud platforms. Again, this isn’t necessarily bad, but do remain in lockstep with your application developers’ use of such services. This can make or break any migration or portability decision.
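One common mitigation for application-level stickiness is to put a thin abstraction between application code and provider-specific services, so that lock-in is confined to one adapter per provider rather than spread across every call site. A sketch of the idea, with stubbed adapters standing in for real SDK calls (the class and method names are illustrative, not any SDK's actual API):

```python
from abc import ABC, abstractmethod

class NotificationService(ABC):
    """Provider-neutral interface the application codes against."""
    @abstractmethod
    def publish(self, topic: str, message: str) -> None: ...

class AwsSnsAdapter(NotificationService):
    """Would wrap the AWS-specific SDK; stubbed here for illustration."""
    def __init__(self):
        self.sent = []  # stand-in for a real SNS client
    def publish(self, topic: str, message: str) -> None:
        self.sent.append((topic, message))

class OtherCloudAdapter(NotificationService):
    """Switching providers means writing one new adapter,
    not rewriting every call site in the application."""
    def __init__(self):
        self.sent = []
    def publish(self, topic: str, message: str) -> None:
        self.sent.append((topic, message))

def notify_customers(svc: NotificationService) -> None:
    # Application logic depends only on the interface,
    # never on a provider's SDK directly.
    svc.publish("orders", "your order has shipped")
```

The trade-off is real: the abstraction costs developer time up front and can hide provider features you paid for, which is exactly the value-versus-exit-cost calculation the article describes.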
Frequent movement still some way off
It is fun to think about the possibilities of bursting and brokering, but countless barriers stand in the way of enterprise customers. Dynamic porting of workloads is an interesting concept, but not yet an agenda item. Brokering refers to dynamic relocation of cloud workloads based on the lowest-cost platform at that point in time, whereas bursting looks to optimise the cost and performance of an application at any given point in time. For average use, an enterprise can pay for persistent usage in its own VM environment and use public cloud resources for additional capacity at times of peak demand.
Brokering is only for initial deployment. In 2011, the idea of dynamically sourcing and brokering cloud services based on real-time changes in cost and performance was the future vision of cloud’s pay-as-you-go pricing strategy – and it remains a vision.
The first tools are only now emerging, and the use cases are limited, especially because costs for public clouds simply don’t vary enough to drive significant brokerage demand. Serving up real-time information for basic test scenarios and samples can support your own cloud adviser responsibilities for initial deployments, but will not offer support for porting already-provisioned workloads. The restrictions that limit one-time migration also apply today: brokering tools help with strategic right-sourcing, but not with portability.
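The decision at the heart of brokering is easy to express, which is why tooling is emerging; the barriers lie in everything around it. A sketch with made-up hourly prices shows why small price differences rarely justify moving an already-provisioned workload once a migration penalty is counted:

```python
# Illustrative only: the providers and prices below are invented, and
# real brokers would also weigh egress fees and service compatibility.

hourly_prices = {
    "provider_a": 0.096,
    "provider_b": 0.089,
    "provider_c": 0.104,
}

def cheapest_provider(prices: dict, migration_cost_per_hour: float = 0.0,
                      current: str = "") -> str:
    """Pick the lowest effective cost, charging movers a migration penalty."""
    def effective(item):
        name, price = item
        penalty = 0.0 if name == current else migration_cost_per_hour
        return price + penalty
    return min(prices.items(), key=effective)[0]

# For an initial deployment, the raw cheapest option wins:
print(cheapest_provider(hourly_prices))  # provider_b

# Once a workload is running, even a modest migration cost keeps it
# where it is - echoing the article's point that prices do not vary
# enough to drive brokerage of provisioned workloads.
print(cheapest_provider(hourly_prices, migration_cost_per_hour=0.02,
                        current="provider_a"))  # provider_a
```

This is the "strategic right-sourcing" case: the tool is useful at deployment time, when the migration penalty is zero, and far less so afterwards.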
This article is based on Forrester’s The state of cloud migration, portability and interoperability, Q4 2017 by Lauren E Nelson and Charles Betz.