Service Mesh: what is it, where are we... and where are we going?
This is a contributed post for the Computer Weekly Developer Network written by Ranga Rajagopalan in his capacity as CTO and co-founder of Avi Networks.
Avi Networks is known for its Intelligent Web Application Firewall (iWAF) technology.
The firm offers a software-only product that operates as a centrally managed fabric across datacentres, private clouds and public clouds.
Rajagopalan writes…
Cloud-native applications – shorthand these days for containers and microservices – have a lot going for them, but most [or at least many] of the benefits come from their ability to accelerate dev and test processes, reducing the time it takes to bring applications online, fix bugs, add new features and so on.
Move those applications out of the dev & test sandbox and into the wider production world, however, and cloud-native applications [often] introduce new issues in terms of scalability, security and management… potentially wiping out those benefits altogether.
The solution is a service mesh [A service mesh is a configurable infrastructure layer for a microservices application that can work alongside an orchestration tool such as Kubernetes], but it’s not a simple product you can just bolt on to make cloud-native applications production ready.
It’s more a framework, which can be used to connect cloud-native components to the services they need… and one which can be delivered in a variety of ways.
A matter of scale
Scalability sits at the heart of the problems posed by cloud-native technologies, which work by breaking applications down into much smaller parts (microservices), each wrapped in its own lightweight and very portable virtual environment (container).
So, whereas a conventional web application might span a handful of virtual machines, a cloud-native app can comprise a collection of hundreds or even thousands of microservices, each in its own container running anywhere across a hybrid cloud infrastructure.
On the plus side, containers can be turned on and off, patched, updated and moved around very rapidly and without impacting on the availability of the application as a whole.
Each, however, also needs to find and communicate both with its companions and with shared load balancing, management, security and other application services. That is far from straightforward given the sheer number of containers involved and their potentially high turnover rates.
Handled through traditional means, this need to communicate adds too much weight to cloud-native apps and would be a nightmare to manage at scale. Hence the development of the service mesh: a dedicated infrastructure layer for handling service-to-service requests, effectively joining up the dots for cloud-native apps by providing a centrally managed service ecosystem ready for containers to plug into and do their work.
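To make this concrete, a service mesh such as Istio moves routing decisions out of application code and into declarative configuration. The sketch below shows an Istio VirtualService splitting traffic between two versions of a microservice; the service name (`reviews`) and subset names are hypothetical, and in practice a matching DestinationRule would define the `v1` and `v2` subsets:

```yaml
# Hypothetical Istio VirtualService: shift a small share of traffic to a
# new version of a "reviews" microservice without touching application code.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews            # the in-mesh service this routing rule applies to
  http:
    - route:
        - destination:
            host: reviews
            subset: v1   # 90% of requests stay on the stable version
          weight: 90
        - destination:
            host: reviews
            subset: v2   # 10% canary traffic goes to the new version
          weight: 10
```

Because the rule lives in the mesh rather than in the containers themselves, it survives pods being turned on, off and moved around, which is exactly the churn described above.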
Project Istio, open source
Despite the relative immaturity of this market, there’s a lot going on to put this all into practice, both by vendors in the services space (particularly application delivery, traffic and security management solutions) and the big-name cloud providers. This has led to the development of a number of proprietary service mesh ‘products’.
But of [perhaps] greater interest is Istio, an open source initiative originally led by Google, IBM and Lyft, but now with an ever-growing list of other well-known names contributing to and supporting its development, including Cisco, Pivotal, Red Hat and VMware.
Istio is now almost synonymous with service mesh, just as Kubernetes is with container orchestration. Not surprisingly, Istio’s initial implementations are very much bound to Kubernetes and cloud-native application architecture. The promise of service mesh is alive today within a Kubernetes cluster, but the value will grow exponentially when a service mesh can be applied to all applications across clusters and clouds.
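That binding to Kubernetes shows in how workloads join the mesh: Istio can be told to inject its Envoy sidecar proxy into every pod in a namespace simply by labelling that namespace. A minimal sketch, with a hypothetical namespace name:

```yaml
# Labelling a Kubernetes namespace so Istio automatically injects its
# Envoy sidecar proxy into every pod scheduled there.
apiVersion: v1
kind: Namespace
metadata:
  name: demo-apps        # hypothetical namespace name
  labels:
    istio-injection: enabled
```

Applications deployed into the namespace then participate in the mesh with no changes to their own images or manifests.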
Where next for service mesh?
As mentioned above, more and more companies are joining the service mesh conversation under the Istio banner. This is the type of support that helped Kubernetes evolve from a project to the de facto container orchestration solution in just a few short years.
The rich combination of established technology giants and [perhaps more] innovative startups will continue to foster and develop the Istio service mesh to include more features and support more applications. By extending Istio with other proven technologies, you can enable cross-cluster and even cross-cloud communication, opening the door to applying the value of service mesh to existing applications in the datacentre.
The promise of granular application services delivered close to the application is readily applicable to traditional applications running on virtual machines or bare metal infrastructure. With the disaggregation of application components made possible by the containerised microservices architecture, this mode of service delivery becomes a necessity, and it will eventually become ubiquitous across application types, extending beyond containerised workloads.
Companies looking to adopt cloud-native technologies will, almost certainly, need a service mesh of some description and the smart money is on Istio being part of that solution. Whatever format it takes, however, the chosen solution needs to deliver value that fits with new and existing applications.