Can DevOps bring together speed, self-service & security?
This is a guest post for the Computer Weekly Developer Network written by David Moss in his role as regional director for western Europe at Avi Networks.
Moss writes…
One dictionary definition of the word conjoined is ‘being brought together for a common purpose’. In the all-too-often disparate worlds of software development and operations, being conjoined can remain an aspiration.
Among the reasons for this is the question of whether speed, self-service and security can ever be conjoined for those on the Dev side… and those in Ops.
For developers under pressure to deploy or re-architect applications for the cloud (or multiple clouds), the question of speed, in particular, is vital. Containers and Kubernetes are available… and now the race is on to exploit the scalability and elasticity of the cloud. In the Dev world this is not happening for its own sake; it is happening because it must.
However, DevOps and the cloud are already conjoined.
In many ways it was the Dev teams that led the charge to the cloud. Frustrated with hanging around waiting for resources to be allocated, they decided to self-serve. Now we’re at the point where, when a new app needs a test environment, nobody expects to raise a ticket and wait weeks – the mood music on the Dev side says ‘let’s get this thing out there and get going.’
It is a world where speed and self-service are the new normal and cloud adoption is king – and developers say this is driven by the need to compete.
Security-first Operations
For those on the operations side, security and self-service are red flags and pressure points.
Ops is not designed for speed [Ed – perhaps rather more for comfort eh?].
It is designed around security and stability, around making sure the business is safe and protected. Ops is not willing to risk the business in order to grow the business. As a department and a practice, Ops needs to make sure that everything is running the way it should, with high availability.
The schism that we’re now seeing in enterprises is the tension between speed on the one hand and stability and security on the other. It is a friction between the art of the possible and protecting the assets under management, maintaining quality standards, and ensuring that the app is secure and available when needed.
On the Ops side of the equation there is also legacy and heritage infrastructure that was not designed for a world where speed matters most. Things continue to run even though they were never built to work with containers or to scale in the cloud in the way developers want.
So now we need to find a way to bridge speed with the enterprise-grade features that businesses need to operate safely.
Multiple views
This can be viewed through three different lenses which are separate, but connected: the application perspective, the infrastructure perspective and the application services perspective – application services being the technologies that connect applications and infrastructure together. In application terms, multiple types of applications – including those that are cloud-native – are now being deployed. Some are going to be on bare metal or VMs in the datacentre. Others are going to be on VMs or containers in different clouds.
That transitions into the infrastructure story.
Where infrastructure was traditionally provided through ticketing systems, the question becomes one of automation. The answer is not to declare open season and give everyone the ability to spend what they want; rather, it is about using automation to provide what’s needed without wasting resources, while getting new applications moving as quickly as possible.
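As a rough sketch of that idea, a self-service request can be checked against quotas in code rather than by a ticket queue – the request is approved instantly if it fits the guardrails, and rejected with a clear reason if it does not. The environment names, quota values and return shape below are all hypothetical, for illustration only.

```python
# Hypothetical sketch: self-service provisioning with automated guardrails,
# instead of a manual ticket queue. All names and limits are illustrative.

QUOTAS = {
    "test": {"max_vms": 2, "ttl_hours": 48},    # test environments expire
    "prod": {"max_vms": 10, "ttl_hours": None},  # production has no expiry
}

def provision(env: str, vms: int) -> dict:
    """Approve or reject a request automatically against per-environment quotas."""
    quota = QUOTAS.get(env)
    if quota is None:
        raise ValueError(f"unknown environment: {env}")
    if vms > quota["max_vms"]:
        raise ValueError(f"{env} requests are capped at {quota['max_vms']} VMs")
    # A real pipeline would now call a cloud or IaC API to create the resources;
    # here we just return the approved allocation.
    return {"env": env, "vms": vms, "expires_in_hours": quota["ttl_hours"]}
```

The point is not the specific checks but where they live: the policy is enforced in the automation itself, so developers get speed and self-service while Ops keeps control of spend and sprawl.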
The third aspect is understanding where services are needed and to what level.
This is about connecting the dots between applications and infrastructure. For example, certain services may not be needed for a test or sandbox environment, but once an application moves into production a whole set of services will be needed, such as monitoring, firewalls, security and load balancing.
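One way to picture that is as a mapping from environment to service set, so the right services attach automatically as an app is promoted. The stage and service names here are hypothetical – real pipelines will define their own.

```python
# Illustrative only: which application services attach at each stage.
# Stage and service names are assumptions, not a specific product's terms.

SERVICES_BY_STAGE = {
    "sandbox": set(),                                        # nothing needed yet
    "test": {"load_balancing"},                              # minimal services
    "production": {"load_balancing", "firewall",
                   "monitoring", "security_policy"},          # the full set
}

def required_services(stage: str) -> set:
    """Return the services an application picks up at a given stage."""
    return SERVICES_BY_STAGE[stage]
```

A sandbox gets nothing, while promotion to production pulls in the whole enterprise-grade set without anyone raising a ticket.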
Multiple self-service clouds
The infrastructure and services story is also one of multiple clouds and provisioning. Many application services, like firewalls or load balancing, have traditionally been delivered as discrete appliances, which took time and effort to provision.
But in the cloud world, these services are offered as software that can be deployed, scaled up, and scaled down in minutes – meaning developers can say what they need and use it for as long as they require, through self-service. Modern application services are effectively disposable infrastructure, as opposed to high-maintenance, appliance-based infrastructure.
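The lifecycle difference can be sketched in a few lines: a software service is created on demand, scaled in software, and thrown away when no longer needed, with no appliance left to maintain. The class and method names below are purely illustrative and do not represent any particular product’s API.

```python
# Hypothetical sketch of a "disposable" software load balancer lifecycle,
# versus a long-lived hardware appliance. Names are illustrative only.

class SoftwareLoadBalancer:
    def __init__(self, app: str):
        self.app = app
        self.instances = 1       # starts small, provisioned in minutes
        self.running = True

    def scale(self, n: int) -> None:
        self.instances = n       # scale out/in in software, no new hardware

    def dispose(self) -> None:
        self.running = False     # torn down when done; nothing to maintain

lb = SoftwareLoadBalancer("checkout")
lb.scale(4)      # handle a traffic spike
lb.scale(1)      # scale back down when it passes
lb.dispose()     # developer is finished; release the resources
```

Contrast that with an appliance, where the same lifecycle would involve procurement, racking and a decommissioning project at the end.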
Many of these application services are offered by cloud providers within their respective environments. Additionally, there are cross-cloud providers who can offer these services consistently across multiple clouds, ensuring speed and self-service regardless of environment.
Not all clouds are equal
This is the reality of today’s development and operations. CIOs are not going to simply land and expand on a single cloud. Companies are using multiple clouds in order to exploit their best features. Not all clouds are equal and there will be many outside factors which dictate which cloud must be used – for example, geographic considerations, workload features, particular APIs or migration needs.
But using multiple clouds should not mean having to rethink the application or the infrastructure. To realise the true benefits of multi-cloud availability, CIOs must be able to move workloads between different clouds. That requires network management in which the same features and standards are available, irrespective of where the workload sits.
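In software terms, that consistency usually means putting one interface in front of each cloud’s own machinery, so moving a workload does not mean rewriting its networking. The backends and method names below are hypothetical stand-ins, not real provider APIs.

```python
# Sketch of the idea: one interface for an application service across clouds.
# Backend classes and return values are illustrative assumptions only.

from abc import ABC, abstractmethod

class LoadBalancerBackend(ABC):
    """Common interface, whichever cloud is underneath."""
    @abstractmethod
    def create(self, app: str) -> str: ...

class AwsBackend(LoadBalancerBackend):
    def create(self, app: str) -> str:
        return f"aws-lb-{app}"    # would call the AWS API in reality

class AzureBackend(LoadBalancerBackend):
    def create(self, app: str) -> str:
        return f"azure-lb-{app}"  # would call the Azure API in reality

def deploy(app: str, backend: LoadBalancerBackend) -> str:
    """Same call, same features, irrespective of where the workload sits."""
    return backend.create(app)
```

Swapping the backend moves the workload; the application-facing call does not change.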
We’ve come a long way, and in today’s conjoined world of DevOps it is therefore possible to achieve the goal of speed and self-service without sacrificing security or stability. Application developers and IT Ops teams can, indeed, have their cake and eat it too.