
Preparing for enterprise-class containerisation

As more workloads are deployed in containers, IT teams will need to assess how to manage container sprawl, reduce cloud bills and support databases

Despite the option to move essentially ephemeral computing resources and data between public, private and hybrid clouds, many organisations still default to deploying unmodified monolithic applications in virtual machines (VMs) running on public cloud infrastructure.

However, it is more efficient to break down an application into functional blocks, each of which runs in its own container. The Computer Weekly Developer’s Network (CWDN) asked industry experts about the modern trends, dynamics and challenges facing organisations as they migrate to the micro-engineering software world of containerisation.

Unlike VMs, containers share the underlying operating system (OS) and kernel, which means a single OS environment can support multiple containers. Put simply, containers can be seen as virtualisation at the process (or application) level, rather than at the OS level.

The ephemeral computing resources in question include core processing power, memory, data storage and input/output (I/O) provisioning, plus the newer functions and services layered on top, such as calls to big data analytics engines, artificial intelligence (AI) and various forms of automation.

Although the move to containers provides more modular composability, the trade-off is a more complex interconnected set of computing resources that need to be managed, maintained and orchestrated. Despite the popularisation of Kubernetes and the entire ecosystem of so-called “observability” technologies, knowing the health, function and wider state of every deployed container concurrently is not always straightforward.
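One common way teams make container state visible is to have each container expose a simple health endpoint that the orchestrator can probe. The Python sketch below shows the idea; the /healthz path and port 8080 are illustrative conventions, not anything mandated by Kubernetes.

```python
# Minimal health endpoint a container can expose so an orchestrator
# (for example, a Kubernetes liveness or readiness probe) can check it.
# The /healthz path and port 8080 are illustrative choices, not requirements.
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # In a real service this would run alongside the application's own port.
    HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```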

Migrating to containers

“The question I am often asked is how best to migrate applications from a VM environment to containers,” says Lei Zhang, tech lead and engineering manager of Alibaba’s cloud-native application management system, Alibaba Cloud Intelligence. “Every customer is trying to build a Kubernetes environment, and the ways to do it can seem complex. However, there is a range of methods, tools and best practice available for them to use.”

Zhang recommends that the first thing organisations looking to containerise their VM stack should do is create a clear migration plan. This involves breaking the migration into steps, beginning with the most stable applications, for example their website, and leaving the more complex applications until the container stack is more mature. 
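Such a plan can be as simple as ranking candidate applications and migrating them in waves. A minimal sketch of the idea follows; the applications and scoring fields are hypothetical.

```python
# Sketch of a migration plan: rank candidate applications so the most
# stable, least complex ones are containerised first. The apps and
# scores are hypothetical examples.
candidates = [
    {"app": "corporate website", "complexity": 1, "stability": 5},
    {"app": "internal CRM",      "complexity": 3, "stability": 3},
    {"app": "billing engine",    "complexity": 5, "stability": 2},
]

# Low complexity and high stability go first; complex systems wait
# until the container stack is more mature.
plan = sorted(candidates, key=lambda a: (a["complexity"], -a["stability"]))

for wave, app in enumerate(plan, start=1):
    print(f"Wave {wave}: {app['app']}")
```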

According to Lewis Marshall, technology evangelist at Appvia, the mitigation of risk alone is a huge benefit that makes the decision to containerise legacy systems easier. “Using inherently immutable containers with your legacy systems is an opportunity to remove the bad habits, processes and operational practices that exist with systems that have to be upgraded in place, and are therefore non-immutable,” he says.

Rehost, refactor or rebuild

By breaking the containerisation process into manageable pieces sorted by complexity, you can begin to prioritise quick wins and create a longer-term strategy, says Jiani Zhang, president of the alliance and industrial business unit at Persistent Systems. Here are three steps she suggests IT decision-makers should consider when looking at containerisation:

  • Rehosting: Look to apply the simplest containerisation technique possible to get quick wins early. Rehosting, otherwise known as the lift-and-shift method, is the easiest way to containerise your legacy application and move it to the cloud. Rehosting can dramatically increase return on investment in a short time. Not all applications can be rehosted, but the earlier you start, the longer you can enjoy the benefits while you spend time on the more difficult tasks.
  • Refactoring: Refactoring is certainly more time-consuming than rehosting, but by isolating individual pieces of legacy applications into containerised microservices, you can get the benefits of moving the most important aspects of the application without having to refactor the entire codebase. From a time and effort standpoint, it often makes sense to move only the most important components, rather than the entire application. One practical example is refactoring a legacy application’s storage mechanism, such as the logs or user files, as sketched after this list. This will allow you to run the application in the container without losing any data, but also without moving everything into the container.
  • Rebuild: Sometimes you have to cut your losses and rebuild an application that has passed its shelf life. Although this is time-consuming, these are often the most expensive and least productive applications running on your system, and the work can pay off in the long run.
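A minimal illustration of the storage refactoring Zhang mentions, assuming a legacy application that writes its logs to a local file: redirecting them to standard output lets the container runtime collect and retain them even as immutable containers come and go. The file path and log line are hypothetical.

```python
import logging
import sys

# Legacy habit: the application writes its logs to a local file on the
# server, which is lost when an immutable container is replaced.
# logging.basicConfig(filename="/var/log/legacy-app/app.log")

# Refactored habit: write logs to stdout and let the container runtime
# (and whatever log pipeline sits behind it) collect and retain them.
logging.basicConfig(
    stream=sys.stdout,
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

logging.info("Order service started")  # hypothetical log line
```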

In Marshall’s experience, containers have the capacity to increase security while decreasing operating and maintenance costs. For instance, some legacy systems have a lot of manual operational activities, which makes any sort of update incredibly labour-intensive and fraught with risk.

Marshall recommends that IT administrators try to ensure that the cost of operating legacy systems trends downwards, towards zero. “If your system is a cost sink while adding limited business value, then updating or upgrading it should become your priority,” he says. “If your system is dependent on a few individuals who regularly put in lots of overtime to ‘keep the lights on’, that should be a huge red flag.

“It is also worth remembering that as a system ages, it generally becomes more expensive to maintain and the security and instability risks rise.”

Challenges of containerisation

The immutable nature of container-based services, which can be deleted and redeployed when a new update is available, highlights the flexibility and scale they present. But, as Bola Rotibi, research director at CCS Insight, pointed out in a recent Computer Weekly article, while containers may come and go, there will be critical data that must remain accessible and with relevant controls applied.

She says: “For the growing number of developers embracing the container model, physical computer storage facilities can no longer be someone else’s concern. Developers will need to become involved in provisioning storage assets with containers. Being adept with modern data storage as well as the physical storage layer is vital to data-driven organisations.”
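In Kubernetes, that involvement typically means developers requesting storage declaratively and letting the cluster bind the request to physical or cloud volumes. A sketch using the official kubernetes Python client, assuming a cluster with a default storage class; the claim name and size are hypothetical.

```python
from kubernetes import client, config

# Assumes kubectl credentials are available locally; code running
# inside a cluster would call config.load_incluster_config() instead.
config.load_kube_config()

# A PersistentVolumeClaim asks the cluster for storage declaratively;
# the storage class (here, the cluster's default) decides what
# physical or cloud volume actually backs it.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),  # hypothetical name
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```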

Douglas Fallstrom, vice-president of product and operations at Hammerspace, says applications need to be aware of the infrastructure and where data is located. This, he warns, adds to the overall complexity of containerisation and means applications must be reconfigured whenever something changes. Moreover, the very idea of infrastructure-bound data storage sits uneasily with the philosophy of cloud-native workloads.

“Just as compute has gone serverless to simplify orchestration, we need data to go storageless so that applications can access their data without knowing anything about the infrastructure running underneath,” he says.

“When we talk about storageless data, what we are really saying is that data management should be self-served from any site or any cloud and let automation optimise the serving and protection of data without putting a call into IT.”
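Storageless data, as Fallstrom describes it, is a product-level ambition, but the application-side principle can be sketched simply: code asks for data by name, and the location is supplied by the environment, so redeploying against a different site or cloud requires no code change. The variable and file names below are hypothetical illustrations, not Hammerspace’s API.

```python
import os
from pathlib import Path

# The application knows the name of its data, not the infrastructure
# behind it. DATA_ROOT (a hypothetical variable) might resolve to a
# local disk in development, an NFS mount on-premise, or a cloud
# volume in production; the orchestration layer decides, not the code.
data_root = Path(os.environ.get("DATA_ROOT", "/data"))

def read_report(name: str) -> str:
    return (data_root / name).read_text()

if __name__ == "__main__":
    print(read_report("daily-summary.txt"))  # hypothetical file
```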

From a data management perspective, databases are generally not built to run in a cloud-native architecture. According to Jim Walker, vice-president of product marketing at Cockroach Labs, management of a legacy database on modern infrastructure such as Kubernetes is very difficult. He says many organisations choose to run their databases alongside the scale-out environment provided by Kubernetes.

“This often creates a bottleneck, or worse, a single point of failure for the application,” he adds. “Running a NoSQL database on Kubernetes is better aligned, but you will still experience transactional consistency issues.”

Without addressing this issue with the database, Walker believes that software developers building cloud-native applications only get a fraction of the value offered by containers and orchestration. “We’ve seen great momentum in Kubernetes adoption, but it was originally designed for stateless workloads,” he says. “Adoption has been held back as a result. The real push to adoption will occur as we build out data-intensive workloads on Kubernetes.”

Management considerations

Beyond the challenges of taking a cloud-native approach to legacy IT modernisation, containers also offer IT departments a way to rethink their software development pipeline. More and more companies are adopting containers, as well as Kubernetes, to manage their implementations, says Sergey Pronin, product owner at open source database company Percona.

“Containers work well in the software development pipeline and make delivery easier,” he says. “After a while, containerised applications move into production, Kubernetes takes care of the management side and everyone is happy.”

Thanks to Kubernetes, applications can be programmatically scaled up and down to handle peaks in usage by dynamically handling processor, memory, network and storage requirements, he adds.
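Kubernetes’ Horizontal Pod Autoscaler is the usual mechanism for this. The sketch below sets one up with the official kubernetes Python client; the deployment name, namespace, replica bounds and CPU target are illustrative assumptions.

```python
from kubernetes import client, config

config.load_kube_config()  # assumes local kubectl credentials

# Scale the (hypothetical) "web" deployment between 2 and 10 replicas,
# targeting an average CPU utilisation of 70%.
hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```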

However, while the software engineering teams have done their bit by setting up auto-scalers in Kubernetes to make applications more available and resilient, Pronin warns that IT departments may find their cloud bills starting to snowball.

For example, an AWS Elastic Block Store (EBS) user will pay for 10TB of provisioned EBS volumes even if only 1TB is actually used. This can lead to sky-high cloud costs. “Each container will have its starting resource requirements reserved, so overestimating how much you are likely to need can add a substantial amount to your bill over time,” says Pronin.
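The gap Pronin describes is easy to put numbers on. A back-of-envelope sketch follows; the per-gigabyte price is an assumed illustrative figure, not a quoted AWS rate.

```python
# Rough cost of over-provisioned block storage. EBS is billed on
# provisioned capacity, not on what is actually written.
PRICE_PER_GB_MONTH = 0.08  # assumed illustrative rate, not a quoted AWS price

provisioned_gb = 10_000  # 10TB of provisioned EBS volumes
used_gb = 1_000          # only 1TB actually used

monthly_cost = provisioned_gb * PRICE_PER_GB_MONTH
wasted_cost = (provisioned_gb - used_gb) * PRICE_PER_GB_MONTH

print(f"Monthly bill: ${monthly_cost:,.0f}")
print(f"Paid for unused capacity: ${wasted_cost:,.0f} "
      f"({wasted_cost / monthly_cost:.0%} of the bill)")
```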

As IT departments migrate more workloads into containers and put them into production, they will eventually need to manage multiple clusters of containers. This makes it important for IT departments to track container usage and spend levels in order to get a better picture of where the money is going.
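A starting point for that tracking, assuming a cluster running the metrics-server add-on, is the Kubernetes metrics API, which reports live CPU and memory use per pod.

```python
from kubernetes import client, config

config.load_kube_config()  # assumes local kubectl credentials

# The metrics API (served by the metrics-server add-on) exposes live
# per-pod usage as a custom resource rather than a typed client call.
metrics = client.CustomObjectsApi().list_cluster_custom_object(
    group="metrics.k8s.io", version="v1beta1", plural="pods"
)

for pod in metrics["items"]:
    name = pod["metadata"]["name"]
    for container in pod["containers"]:
        usage = container["usage"]  # e.g. {"cpu": "12m", "memory": "48Mi"}
        print(name, container["name"], usage["cpu"], usage["memory"])
```

Figures like these, aggregated over time and across clusters and joined with billing data, give the per-workload view of where the money is going.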
