
Cloud repatriation: How to do it successfully

The keys to reverse migration success include workload selection, preparation of on-premise infrastructure, and future-proofing the decision to come back in-house

Cloud repatriation – sometimes called “reverse migration” – is something any organisation that uses cloud storage should consider.

It’s the process of moving workloads and data back from public cloud infrastructure to on-premise hardware. This could be to a business-owned datacentre, a colocation site or other shared facilities.

Organisations might choose to repatriate because of application performance, data security, regulations or, more often, cost. Firms will have their own cost-benefit analysis around when to stay with the cloud, or when to move back on-premise, but they also need a plan to make sure any repatriation project is a success.

There are no hard-and-fast rules about datasets that benefit most from moving back to on-premise storage.

That said, it’s possible to identify data where doing so makes sense. Broadly, repatriation might be the best option where data is sensitive, time-sensitive or expensive to store in the cloud.

Sensitive data includes regulated information, customer personal data, or where issues of data sovereignty or other regulations put geographical limits on where it can be stored. Governments, too, will have additional restrictions on data that can be stored in the cloud, especially for anything that affects national security.

Time-sensitive data includes information that users need to access as rapidly as possible – think financial trading feeds – or where the application is sensitive to latency. This is a common issue in manufacturing and some areas of R&D, but latency can impact day-to-day business applications, and even technologies such as AI. If an organisation wants complete control over data flows, then it is likely to opt for its own network and storage, not the cloud.

The cost factor

Cost, too, is always a factor. Here, it is more a question of how data is used, rather than what it is. It makes a lot of sense to store a long-term archive or backup volume in the cloud, but the calculation changes when organisations want to access the data more frequently. That could be, for example, when using historical data in business intelligence applications or to train AI models. Then, cloud provider egress fees – a charge levied to move data out of the cloud – can mount up.
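To see how that arithmetic tips, consider a rough break-even sketch in Python. Every price below is an illustrative assumption, not any provider’s actual tariff, but the shape of the result holds: once a large share of the dataset leaves the provider’s network each month, egress fees dwarf the storage bill.

```python
# Rough cloud-vs-on-premise break-even sketch. All prices are assumed,
# illustrative figures -- real tariffs vary by provider, region and tier.

TB = 1024  # GB per TB (binary approximation)

cloud_storage_per_gb_month = 0.023   # assumed "hot" object storage price, USD
egress_per_gb = 0.09                 # assumed egress fee, USD
onprem_cost_per_gb_month = 0.015     # assumed amortised hardware + power + staff

def monthly_cost_cloud(stored_gb, egress_gb):
    """Cloud cost: storage plus whatever leaves the provider's network."""
    return stored_gb * cloud_storage_per_gb_month + egress_gb * egress_per_gb

def monthly_cost_onprem(stored_gb):
    """On-premise cost: flat amortised rate, no egress charge."""
    return stored_gb * onprem_cost_per_gb_month

stored = 50 * TB  # a 50 TB dataset, expressed in GB
for reads_per_month in (0.1, 1.0, 3.0):  # fraction of the dataset read out monthly
    egress = stored * reads_per_month
    print(f"reads={reads_per_month:4.1f}x  "
          f"cloud=${monthly_cost_cloud(stored, egress):10,.0f}  "
          f"on-prem=${monthly_cost_onprem(stored):10,.0f}")
```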

This is one area where the balance between the cloud and on-premise changes over time.

A small test and development server with minimal storage will be cost-effective in the cloud, but might be less so if used in production. Carefully calculated cloud storage budgets can also be upended if business users decide that data in “cold storage” is going to be accessed regularly after all.

“There’s been two-way movement of data and applications for a long time,” says Tony Lock, distinguished analyst at Freeform Dynamics. “It’s basically a fact of life. People move some things to the cloud because it makes sense, and then after a period of time, the way they’re using that information changes or their needs of it change, or something else triggers them to modify things, and they move it back.”

How do you prepare private infrastructure for cloud repatriation?

Organisations that want to move data back to their own IT infrastructure, such as a datacentre or colo facility, need to do the groundwork.

First, they must ensure they have the physical storage capacity for the data being moved. This needs to be planned. Some suppliers have long lead times for new arrays, or even upgrades such as new disks or solid-state modules.
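A back-of-envelope calculation is a sensible starting point. The sketch below is purely illustrative; the replication factor, growth rate and headroom figures are assumptions to be replaced with your own.

```python
# Back-of-envelope raw capacity estimate for repatriated data.
# All default values are placeholder assumptions, not recommendations.

def required_raw_capacity_tb(dataset_tb: float,
                             replication_factor: float = 2.0,  # e.g. mirrored arrays
                             annual_growth: float = 0.25,      # assumed 25% yearly growth
                             years_of_runway: float = 3.0,     # planning horizon
                             headroom: float = 0.20) -> float: # free-space buffer
    """Raw capacity needed to hold the dataset over the planning horizon."""
    grown = dataset_tb * (1 + annual_growth) ** years_of_runway
    return grown * replication_factor / (1 - headroom)

print(f"{required_raw_capacity_tb(50):.0f} TB raw for a 50 TB dataset")
```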

Then there is networking capacity, and physical infrastructure in the datacentre such as rack space, power and cooling. A large repatriation project might be a prompt to reorganise the datacentre, perhaps by moving to newer equipment that can pack more storage into a single rack or that consumes less power.

Then there are the people needed to support the migration and subsequent day-to-day operations.

Are there enough staff to provision and manage a larger system? Do they have the security and privacy skills needed to handle sensitive data? Do they have the technical know-how to handle mission-critical, latency-sensitive applications? These are key questions in a context where many organisations have reduced their IT teams as they have outsourced to cloud providers.

Enterprises that have grown up in the cloud era might not have the in-house expertise at all. Building up a team can take as long as, if not longer than, building up infrastructure, and its cost is easily overlooked while it is wrapped up in cloud service provider fees.

How do you future-proof data and infrastructure when you repatriate from the cloud?

A key question here is how to ensure you can reverse the process if you need to. Chief information officers (CIOs) will likely want to make sure that if they do move data and applications back from the cloud, they don’t miss out on the future benefits of cloud-native applications. In other words, you don’t want to move off the cloud only to be locked into a local offering forever.

Whether an organisation can stay ready to exploit the full benefits of cloud-native applications will largely depend on its infrastructure.

Use of Kubernetes and other container-based applications on-premise is one way to ensure applications and data are hardware agnostic and easy to port, including to cloud.
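As a minimal illustration, assuming the official Kubernetes Python client and placeholder storage class names, the same volume claim can be created on an on-premise cluster or a cloud one, with only the storage class string changing per environment.

```python
# A sketch using the official Kubernetes Python client: the same claim
# works on-premise or in the cloud, with only the storage class name
# varying per environment. Class names below are assumed examples.

from kubernetes import client, config

config.load_kube_config()  # reads the current kubeconfig context

def make_claim(storage_class: str) -> client.V1PersistentVolumeClaim:
    """Build a 100Gi volume claim bound to whichever class the cluster offers."""
    return client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="app-data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            storage_class_name=storage_class,  # the only environment-specific line
            resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
        ),
    )

pvc = make_claim("ceph-block")   # hypothetical on-premise, Ceph-backed class
# pvc = make_claim("gp3")        # hypothetical cloud, EBS-backed class

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc)
```

Keeping environment-specific detail down to a single parameter in this way is what makes containerised workloads comparatively easy to move in either direction.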


At the same time, hyperscale cloud suppliers have made it easier to migrate data, and they provide management tools that can control both local and cloud storage.

Nonetheless, the process is rarely simple. “There’s not easy portability back on-premise, unless you want to use the cloud in a very suboptimal, highly commoditised way,” warns Lydia Leong, distinguished vice-president analyst at Gartner. “One of the interesting characteristics of organisations who have repatriated is they use the cloud solely as a hosting platform. It was a way to get servers on-demand relatively cheaply.”

Repatriation can be harder still for firms that use software-as-a-service (SaaS) applications to run business processes.

“In many cases, there are no good on-premise equivalents to solutions you buy in the cloud,” says Leong. “In many markets, the most advanced enterprise vendor is now an SaaS vendor.”

She says firms should ensure they have the contractual right to repatriate data. Meanwhile, repatriating workloads from SaaS depends heavily on having the physical infrastructure and a suitable application to run locally.

Lastly, CIOs also need to consider whether they can still make use of the cloud for temporary capacity or bursting. Again, this is an area where cloud-native applications and innovations such as object storage and global file systems will help.
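As a sketch of that idea, assuming an S3-compatible on-premise store such as MinIO at a hypothetical internal endpoint, the same boto3 code can point at local object storage for day-to-day work and at AWS S3 when burst capacity is needed.

```python
# Why S3-compatible object storage eases the return trip: the same boto3
# client code talks to an assumed on-premise MinIO endpoint or to AWS S3,
# switched by a single endpoint URL. Endpoint and credentials are placeholders.

import boto3

def make_client(on_premise: bool):
    if on_premise:
        # Hypothetical local S3-compatible endpoint and credentials.
        return boto3.client(
            "s3",
            endpoint_url="http://minio.internal:9000",
            aws_access_key_id="LOCAL_KEY",
            aws_secret_access_key="LOCAL_SECRET",
        )
    return boto3.client("s3")  # falls back to AWS S3 for burst capacity

s3 = make_client(on_premise=True)
s3.upload_file("report.csv", "analytics", "reports/report.csv")
```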

As Freeform’s Lock notes, successful repatriation projects will keep a path open to the cloud to support future operations, such as entering a new market where the firm has no datacentre. There, the cloud makes sense, even if the longer-term plan is to bring data back in-house.
