Containerisation in the enterprise - D2iQ: The container complexity conundrum
As businesses continue to modernise their server estate and move towards cloud-native architectures, the elephant in the room is the monolithic core business application that cannot easily be rehosted without significant risk and disruption.
These days, it is often more efficient to deploy an application in a container than in a virtual machine. Computer Weekly examines the trends, dynamics and challenges faced by organisations migrating to the micro-engineered world of software containerisation.
As all good software architects know, a container is a ‘logical’ computing environment in which a guest application runs abstracted away from the underlying host system’s hardware and software infrastructure resources.
So, what do enterprises need to think about when it comes to architecting, developing, deploying and maintaining software containers?
This post is written by Tobi Knaup, CEO and co-founder of D2iQ — the company is known for its technology that offers DevOps teams cloud-native application management software, services and training.
Knaup writes as follows…
Okay then, 2020 saw the promise of Kubernetes reach an all-time high, with organisations using container orchestration to accelerate digital transformation strategies while tackling the challenges of scaling and managing clusters. Now the enthusiasm for Kubernetes is shifting from unbridled excitement to frustration as complexity stalls and even derails some production deployments. While it is projected that production projects using Kubernetes will rise 61% in the next two years, nearly all organisations (94%) run into challenges, most often during the development phase.
Organisations are also facing challenges when scaling their cloud architectures. With multiple Kubernetes clusters comes the management of the various point-solutions required to handle security, operations and development. This complexity has a tangible cost on organisations as time, resources and money are poured into the individual management of the containers and clusters. Aside from the initial investment in containers, tooling and additional services, the overhead needed to maintain these resources increases, as does the amount of effort required for cluster and container management.
The need for central governance
With no central governance across organisations, DevOps teams are spread thin. Enterprises without centralised governance or visibility across the clusters deployed throughout the organisation simply do not have the resources to manage them effectively.
Within the stack, compliance, regulatory and intellectual property requirements govern where application resources can run, consuming much-needed support and time.
As a result, for example, security operations are unable to ensure proper versioning for vulnerability management. Organisations need to centrally govern clusters and associated workloads to ensure consistency, security and performance, and to enforce proper configuration and policy management across the entire footprint.
The need for automated workflows
DevOps enables agility and faster iteration on applications. Organisations that move to a DevOps model often go from updating applications every few months to doing it on a daily basis. Embracing DevOps not only means that the traditional responsibilities of developers and operators merge, but also that the lines between infrastructure and the applications running on top often become blurry. This means that the infrastructure needs to evolve at the same rate as the applications it is supporting, potentially multiple times per day. Constant change to infrastructure requires automation, reproducibility, as well as easy and fast error recovery in order to quickly roll back faulty changes.
GitOps, in particular, will take centre stage to combat some of these challenges, leading to increased automation, accelerated delivery schedules and improved application quality. GitOps is highly effective for cloud-native journeys, as it helps to streamline workflows and keeps developer teams progressing toward their goals.
GitOps consists of declarative descriptions, stored in a Git repository, of the state desired in the production environment, together with an automated and continuous process that makes the production environment match that desired state. It is similar in some ways to the controller pattern in Kubernetes, one of the key reasons for its resiliency and scalability. Using Git to store the desired state of the system has the added benefit of simplifying knowledge sharing and auditing, and provides a fast and easy way to roll back infrastructure changes.
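The reconciliation loop at the heart of GitOps can be sketched in a few lines. This is an illustrative toy, not any real tool's implementation: the function and the state dictionaries below are hypothetical stand-ins for what tools such as Flux or Argo CD do against a Kubernetes cluster.

```python
def reconcile(desired: dict, actual: dict) -> dict:
    """Return the changes needed to make `actual` match `desired`.

    `desired` models the declarative state stored in Git; `actual` models
    what is currently running in production. A value of None marks a
    resource to prune.
    """
    changes = {}
    for resource, spec in desired.items():
        if actual.get(resource) != spec:
            changes[resource] = spec  # create or update drifted resources
    for resource in actual:
        if resource not in desired:
            changes[resource] = None  # prune resources removed from Git
    return changes

# Example: the repository declares two deployments; the cluster has drifted.
desired_state = {"web": {"replicas": 3}, "api": {"replicas": 2}}
actual_state = {"web": {"replicas": 1}, "old-job": {"replicas": 1}}

print(reconcile(desired_state, actual_state))
# {'web': {'replicas': 3}, 'api': {'replicas': 2}, 'old-job': None}
```

Running this loop continuously is what keeps production converged on the repository, and rolling back is simply reverting a Git commit so the loop converges on the previous state.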
Developers and business leaders have to join forces to combat Kubernetes complexity, implement streamlined workflows to achieve business agility while maintaining high levels of application quality, and create new solutions for effectively scaling Kubernetes deployments.