Containerisation in the enterprise - Appvia: Immutable truths to unplug the ‘cost sink’
As businesses continue to modernise their server estate and move towards cloud-native architectures, the elephant in the room is the monolithic core business application that cannot easily be rehosted without significant risk and disruption.
These days, it is often more efficient to deploy an application in a container than in a virtual machine, because containers share the host’s kernel and so start faster and consume fewer resources. Computer Weekly examines the trends, dynamics and challenges faced by organisations migrating to the micro-engineered world of software containerisation.
As all good software architects know, a container is a ‘logical’ computing environment in which a guest application runs abstracted away from the underlying host system’s software environment and infrastructure resources.
So, what do enterprises need to think about when it comes to architecting, developing, deploying and maintaining software containers?
This post is written by Lewis Marshall in his role as technology evangelist at Appvia, the company behind Appvia Kore, a cloud-native management platform that gives teams access to public cloud resources while automating industry best practices and mitigating security risks.
Marshall writes as follows…
When companies are considering migrating to the world of containerisation, the elephant in the room is often old legacy systems. Before embarking on any major migration project, companies need to ask themselves whether their current systems can run in containers, whether doing so will save money and… simply put, whether it is actually worth doing.
In nearly every instance, containers are an option, both from a technology-purist standpoint and as a practical, pragmatic possibility. They are, after all, just modern deployment units and the lowest common denominator for packaging workloads. Even when you’re using Heroku or Function-as-a-Service (FaaS), your providers are running your workloads in containers, often with Kubernetes, regardless of how far up the stack you are abstracted.
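To make the ‘deployment unit’ point concrete, here is a minimal sketch of running a packaged workload as a container. It assumes a local Docker daemon and the Python docker SDK; the image and command are illustrative placeholders rather than anything specific to a given estate.

```python
# Minimal sketch: a container as a plain deployment unit.
# Assumes a local Docker daemon and the Python "docker" SDK (pip install docker).
import docker

# Connect to the local container runtime using environment defaults,
# exactly as a CI runner, a laptop or a cluster node agent would.
client = docker.from_env()

# Run a throwaway workload from a packaged image. Any application packaged
# the same way is launched identically, wherever the runtime happens to be.
output = client.containers.run(
    image="alpine:3.19",                         # illustrative packaged workload
    command=["echo", "hello from a container"],
    remove=True,                                 # discard the container on exit
)

print(output.decode().strip())
```

The same image that runs here would run unchanged on a PaaS or FaaS provider’s underlying container platform, which is the lowest-common-denominator argument above.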
Whether moving to containers saves money and is worth doing is a much more complex question to answer. Containers have the capacity to increase security while decreasing operating and maintenance costs.
The mitigation of risk alone is a huge benefit that seemingly makes the decision easy. Beyond that, using inherently immutable containers with your legacy systems is an opportunity to remove the bad habits, processes and operational practices that grow up around systems that have to be upgraded in place (and are therefore non-immutable).
It’s important to note that some legacy systems rely on a lot of manual operational activity, which makes any sort of update incredibly labour-intensive and fraught with risk.
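As a contrast to those manual, in-place upgrades, the sketch below illustrates the immutable ‘replace, don’t patch’ pattern described above. It again assumes a local Docker daemon and the Python docker SDK; the registry, image tag and service name are hypothetical placeholders.

```python
# Sketch of an immutable replacement: never patch a running container,
# just start a new one from a freshly built image.
# Assumes a local Docker daemon and the Python "docker" SDK.
import docker
from docker.errors import NotFound

client = docker.from_env()

SERVICE = "legacy-billing"                      # hypothetical service name
REPO = "registry.example.com/legacy-billing"    # hypothetical image repository
NEW_TAG = "2024-06-01"                          # tag produced by the latest build

# Pull the new immutable image; nothing on the running system is edited by hand.
client.images.pull(REPO, tag=NEW_TAG)

# Remove the currently running container, if there is one.
try:
    old = client.containers.get(SERVICE)
    old.stop()
    old.remove()
except NotFound:
    pass

# Start a replacement from the new image. Rolling back is simply re-running
# this step with the previous tag.
client.containers.run(f"{REPO}:{NEW_TAG}", name=SERVICE, detach=True)
```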
Unplugging the ‘cost sink’
The costs and benefits of this replacement strategy need to be understood, so the best approach is to take a step back and conduct an audit of what each system costs to run and the value it delivers.
Top of mind should be the principle that the cost of operation should trend towards zero. If your system is a cost sink while adding limited business value, then updating or upgrading it should become your priority. If your system is dependent on a few individuals who regularly put in lots of overtime to ‘keep the lights on’, that should be a huge red flag. It’s also worth remembering that as a system ages it generally becomes more expensive to maintain, and the security and instability risks rise.
Of course, embarking on an expansive upgrade or update of your infrastructure is likely to mean that the overall migration project suddenly becomes more time-consuming and expensive.
Nevertheless, when you factor in the wider business gains and the fact that modern systems are much more straightforward and effective to migrate to containers, the benefits will far outweigh the costs.
Companies need to open their eyes to the bad practices that have built up around these legacy systems and look to what’s going to reduce risk, maximise efficiency and propel them forward.