Kubernetes at 10: CRDs at core of extensible, modular storage in K8s
We talk to VMware engineer Xing Yang, who has watched Kubernetes storage evolve from the early days, as its modular, extensible origins translated into Operators for storage and backup
Kubernetes is 10! Mid-2024 sees the 10th birthday of the market-leading container orchestration platform.
Xing Yang, cloud-native storage tech lead at VMware by Broadcom, started working on storage in Kubernetes in 2017 on projects based around custom resource definitions (CRDs), which allow the orchestration platform to work around an extensible core.
She went on to see Kubernetes achieve market leadership among container orchestration platforms and to work on the Container Storage Interface (CSI) and Kubernetes Operators, which are based on CRDs and which bring storage and data protection functionality while retaining Kubernetes’ core characteristics.
We mark the first decade of Kubernetes with a series of interviews with engineers who helped develop Kubernetes and tackle challenges in storage and data protection – including the use of Kubernetes Operators – as we look forward to a future characterised by artificial intelligence (AI) workloads.
What was the market like when Kubernetes first launched?
Xing Yang: When Kubernetes first launched, the container orchestration market was still emerging. Docker had only recently been introduced and had become a popular tool for building container images. Kubernetes made it easy to deploy those Docker images across distributed systems, which made it a popular choice and helped it evolve into the de facto container orchestration system of today.
How did you get involved in this area?
Yang: I started by contributing to the VolumeSnapshot project in Kubernetes SIG Storage in 2017, working closely with Jing Xu from Google. We initially tried to introduce the VolumeSnapshot API and controller into Kubernetes core code base, but it was rejected by SIG Architecture.
They asked us to use CRDs instead, the reasoning being that Kubernetes should be kept truly modular, extensible and maintainable, with a minimal core. So, we implemented the VolumeSnapshot feature out-of-tree under Kubernetes CSI. It became the first SIG Storage core feature implemented as CRDs. We told our story in a keynote presentation at KubeCon China in 2019: CRDs, No Longer 2nd Class Thing!
We worked with other community members to move the VolumeSnapshot feature from Alpha to Beta, and finally made it generally available in the Kubernetes 1.20 release. I became a maintainer in Kubernetes SIG Storage.
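As a rough illustration of the CRD-based approach, a snapshot request today looks like any other Kubernetes object. The class and claim names below are placeholders:

```yaml
# Request a snapshot of an existing PVC via the out-of-tree
# VolumeSnapshot CRD (API group snapshot.storage.k8s.io),
# generally available since Kubernetes 1.20.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: data-snapshot
spec:
  volumeSnapshotClassName: csi-snapclass   # placeholder snapshot class
  source:
    persistentVolumeClaimName: data-pvc    # existing PVC to snapshot
```

Because this lives outside the Kubernetes core, the snapshot controller and CRDs are installed separately alongside a CSI driver.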
How did you realise Kubernetes was in the leading position in the market?
Yang: Kubernetes was initially introduced by Google in June 2014, then donated to the Linux Foundation, where it became the seed project of the Cloud Native Computing Foundation (CNCF).
Other leading public cloud providers, AWS and Microsoft Azure, started to offer Kubernetes distributions on their clouds in 2017 and made them generally available in 2018. When the leading cloud providers had Kubernetes distributions in their clouds, I realised Kubernetes was gaining momentum and had achieved enterprise adoption.
When you looked at Kubernetes, how did you approach data and storage?
Yang: When Kubernetes was first introduced, it was meant for stateless workloads only. At that time, container applications were regarded as ephemeral and stateless and therefore did not need to persist data.
But that changed drastically. Stateful workloads started to run in Kubernetes. Persistent volume claims (PVCs), persistent volumes and storage classes were introduced to provision data volumes for applications running in Kubernetes, and the StatefulSet workload API was introduced to run stateful workloads. More and more stateful workloads run in Kubernetes today.
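These pieces fit together in a StatefulSet, which can request a persistent volume per pod through volumeClaimTemplates. The image and storage class names below are illustrative placeholders:

```yaml
# Minimal sketch of a stateful workload with per-pod persistent storage.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: postgres:16              # example database image
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:                 # one PVC per pod, dynamically provisioned
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: fast-ssd        # placeholder StorageClass
      resources:
        requests:
          storage: 10Gi
```

Each replica gets its own volume, and the volume follows the pod identity across restarts and rescheduling.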
What issues first came up around data and storage with Kubernetes for you?
Yang: When I started to get involved in Kubernetes, CSI had just been introduced. It aimed to define common interfaces so a storage vendor could write one plugin and have it work across a range of orchestration systems, which at the time included Docker, Mesos, Kubernetes and Cloud Foundry.
The initial set of CSI interfaces was very basic: create, delete, attach, detach, mount and unmount volumes. However, to support stateful workloads, more advanced functionality was needed. For example, volume snapshots, cloning, volume expansion and topology were not supported in CSI at the beginning.
What had to change?
Yang: More advanced functionalities were needed for CSI to support stateful workloads that run in Kubernetes more effectively.
Volume snapshots were introduced in CSI to allow persistent volumes to be snapshotted and used to restore data if data loss or data corruption happens. Volume cloning was also added to CSI; it can be used to copy the data stored in a persistent volume into a new volume.
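Both restore and clone are expressed through the dataSource field of a PVC. A sketch, with placeholder names for the storage class and the source objects:

```yaml
# Restore: create a new PVC from an existing VolumeSnapshot.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc
spec:
  storageClassName: csi-sc            # placeholder CSI StorageClass
  dataSource:
    name: data-snapshot               # existing VolumeSnapshot object
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
# Clone: create a new PVC directly from another PVC.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cloned-pvc
spec:
  storageClassName: csi-sc
  dataSource:
    name: data-pvc                    # source PVC to clone
    kind: PersistentVolumeClaim       # core API group, so no apiGroup field
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
```

In both cases the CSI driver does the data copy; the user only declares where the new volume’s contents should come from.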
CSI topology is also a very important feature for distributed database workloads. It allows Kubernetes to do intelligent scheduling so the volume is dynamically provisioned at the best place to run the pod. So, you can deploy and scale the workloads across failure domains to provide high availability and fault tolerance.
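Topology-aware provisioning is typically switched on in the StorageClass. The driver name and zone values below are placeholders:

```yaml
# StorageClass that delays volume binding until the pod is scheduled,
# so the volume is provisioned in the same failure domain as the pod.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware
provisioner: csi.example.com              # placeholder CSI driver name
volumeBindingMode: WaitForFirstConsumer   # let the scheduler pick first
allowedTopologies:                        # optionally restrict to zones
- matchLabelExpressions:
  - key: topology.kubernetes.io/zone
    values: ["zone-a", "zone-b"]
```

With WaitForFirstConsumer, the scheduler picks a node first, and the volume is then created where that pod can reach it.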
CSI volume expansion is another important feature for stateful workloads. It allows you to expand the volume to a larger size if your application needs more space to write data.
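Expansion is opt-in on the StorageClass; after that, growing a volume is just a matter of raising the PVC’s request. Names are placeholders:

```yaml
# StorageClass that permits resizing of volumes it provisions.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable
provisioner: csi.example.com      # placeholder CSI driver name
allowVolumeExpansion: true        # required for PVC resize to be accepted
---
# Then grow an existing PVC by editing its storage request.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: expandable
  resources:
    requests:
      storage: 20Gi               # raised from an earlier, smaller request
```

The CSI driver then resizes the backing volume, and for most filesystems the expansion completes online without restarting the workload.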
There’s also the CSI Capacity Tracking feature that allows the Kubernetes scheduler to take capacity into account during scheduling.
There are also gaps in support for data protection in Kubernetes. There are some basic building blocks, such as volume snapshots, that can be used for backup and restore, but more is needed to protect stateful workloads in case of a disaster. We formed a Data Protection Working Group (WG) at the beginning of 2020 that aims to promote data protection support in Kubernetes.
How did you get involved around Kubernetes Operators?
Yang: As more advanced storage features have been made available, Kubernetes has become a more mature platform to provide storage for stateful workloads, with databases one of the most important types of workloads.
As a co-chair of CNCF TAG Storage, I had the opportunity to collaborate with the Data on Kubernetes Community on a white paper about running databases in Kubernetes. As discussed in the white paper, Operators are one of the most important patterns used when running data in Kubernetes.
What happened around operators that made them a success for data and storage?
Yang: Operators leverage CRDs, which are flexible and extensible. Many traditional databases were not originally designed for Kubernetes, but with Operators, complex business logic can be encapsulated beneath these CRDs. For users, it is easy to request a database cluster by defining a custom resource (CR). Operator control logic relies on Kubernetes’ declarative nature: it reconciles the actual state of the database with the desired state defined in the CR, continuously trying to bridge the gap and keep the database running.
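From the user’s side, requesting a cluster can be as simple as the sketch below. The API group, kind and fields here are invented for illustration; real Operators each define their own schema:

```yaml
# Hypothetical custom resource for a database Operator.
# Group, kind and fields are placeholders, not a real Operator's API.
apiVersion: databases.example.com/v1
kind: PostgresCluster
metadata:
  name: orders-db
spec:
  replicas: 3              # desired state: a three-node cluster
  version: "16"
  storage:
    size: 100Gi
  backup:
    schedule: "0 2 * * *"  # the Operator reconciles backups too
```

The Operator’s controller watches objects of this kind and continuously drives the running database towards the declared spec, recreating failed replicas, applying upgrades and running scheduled backups.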
Operators help automate Day Two operations such as backup and restore, migration and upgrades. They make it easier to port applications across clouds or support hybrid clouds. Also, CNCF has a rich ecosystem with lots of tools available. For example, Prometheus for monitoring, cert-manager for certificate management, Fluentd for log processing, Argo CD for declarative continuous delivery, and many more. Operators can use these third-party tools to enhance the capabilities of database clusters that run in Kubernetes.
How did this support more cloud-native approaches? What were the consequences?
Yang: In a cloud-native environment, a Kubernetes pod that runs as part of a database application may get killed because it runs out of CPU or memory, or get restarted because a Kubernetes node goes down. Ephemeral storage is tightly coupled with a pod’s lifecycle, so if you use local storage, the data disappears with the pod. If you use external storage, there is a different issue: added latency.
Operators can help mitigate these issues by providing high availability, allowing applications to run in a distributed fashion, automating the deployment, providing monitoring, managing the lifecycle of the databases, and allowing databases to run properly in a Kubernetes environment.
Kubernetes is now 10. How do you think about it today?
Yang: A lot has happened in the 10 years since Kubernetes’ birth. Lots of features have been built into Kubernetes to support data workloads, and Kubernetes is getting more mature. Kubernetes has declarative APIs. It is flexible and extensible, and it provides a way to abstract the underlying infrastructure. Operators have been a trump card for extending Kubernetes use cases, and they are the key that allows databases to run in Kubernetes.
What problems still exist around Kubernetes when it comes to data and storage?
Yang: Day Two operations are still a challenge when running data on Kubernetes, but this can be mitigated by using Operators. Kubernetes is also complex: it takes a long time to ramp up, it takes a lot of effort to manage data workloads on Kubernetes, and it is complicated to integrate with existing environments.
And for Operators, a lack of standardisation is still a problem. Also, running stateful workloads in a multi-cluster environment is still a challenge because Kubernetes was initially designed to work in a single cluster.
Any other anecdotes or information to share?
Yang: Kubernetes has come a long way since its birth 10 years ago. The future is bright for Kubernetes in the next decade and beyond.
Read more about Kubernetes and storage
- Kubernetes at 10: The long road to mastery of persistent storage. Jan Safranek of Red Hat saw containers emerge as an exciting new way of deploying applications, but it took several additions to make it the mature enterprise platform it is today.
- Kubernetes at 10: From stateless also-ran to ‘platform to build platforms’. Sergey Pronin of Percona was all for ‘Kubernetes for stateless’ at a time when it was hard to provision storage and day-two services – then Operators and Stateful Sets came along.