How container applications are shaping storage management
Persistent storage, backup and recovery were not on the minds of those behind container applications, but that is starting to change
Containers and microservices are increasingly being adopted by companies in the Asia-Pacific region to speed up application development and become more agile. That has implications for storage, with containers beginning to shape the way it is deployed and managed.
“Two to three years ago, when containers were just getting adopted, many companies built applications that were stateless, which meant they were providing you with data, but nothing was actually saved,” said Vishal Ghariwala, regional product management director at Red Hat Asia-Pacific.
But as containerised applications became more complex, it soon became necessary to save data and the state of an application on persistent storage, so that users could carry on from where they left off for tasks such as filling in online forms.
Persistent storage, though, was not on the minds of those behind early container technology, with the storage needs of container workloads – in the case of Kubernetes – reliant on volume plugins, according to Sanjay K Deshmukh, VMware’s vice-president and managing director for Southeast Asia and Korea.
“These had several setbacks, such as volume plugin development being tightly dependent on Kubernetes releases, potential crashes in critical Kubernetes components due to bugs in volume plugins, and the reliance on the Kubernetes community for testing and maintaining all volume plugins,” said Deshmukh.
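That coupling is easiest to see in the older in-tree volume plugin model, where a vendor-specific volume definition sat directly in the pod specification and the driver code shipped inside Kubernetes itself. The sketch below, using the Python Kubernetes client, is illustrative only; the volume ID and pod names are placeholders, not anything from the deployments described here.

```python
# Sketch of the legacy in-tree volume plugin model: cloud-specific fields are
# baked into the core pod spec, so driver fixes had to wait for a Kubernetes
# release -- the coupling Deshmukh describes.
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "legacy-volume-demo"},
    "spec": {
        "containers": [{
            "name": "app",
            "image": "nginx",
            "volumeMounts": [{"name": "data", "mountPath": "/data"}],
        }],
        "volumes": [{
            "name": "data",
            # In-tree plugin: vendor-specific volume type in the core API.
            # The volume ID below is a placeholder.
            "awsElasticBlockStore": {"volumeID": "vol-0123456789abcdef0",
                                     "fsType": "ext4"},
        }],
    },
}

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```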
Instead, the focus was on speed and agility, so developers could spin up applications on the fly, along with compute and storage resources that would be freed up when an application ceased to exist.
“Today, developers want the flexibility to deploy their applications in any type of environment – and they want storage to follow the applications,” said Rahul Vijayan, senior principal product manager at Red Hat Asia-Pacific. “The question is, how do you build distributed containerised applications that require persistent storage?”
The fact that developers can now provision storage on their own is markedly different from the way storage has traditionally been set up. “Previously, a storage administrator would provision and attach storage to an application,” said Vijayan. “If more storage was needed after the application was deployed, a request had to be made, so the DevOps concept wasn’t there.”
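In Kubernetes, that self-service model typically takes the form of a PersistentVolumeClaim: the developer declares how much storage the application needs and which class of storage it should come from, and the cluster provisions a matching volume dynamically. A minimal sketch using the Python Kubernetes client follows; the claim and storage class names are placeholders.

```python
# Sketch: a developer provisions storage on demand by creating a
# PersistentVolumeClaim, rather than raising a request with a storage admin.
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "orders-db-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "fast-ssd",          # placeholder class name
        "resources": {"requests": {"storage": "20Gi"}},
    },
}

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc)
```

Because the claim lives in the application's own manifests, the storage request travels with the application wherever it is deployed.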
Nor were traditional storage platforms designed to meet the requirements of today's application development practices. An organisation could have thousands of microservices running across its environments, but traditional platforms often cannot scale to meet the needs of such highly distributed workloads.
“Storage performance is often unpredictable and does not scale as fast as applications,” said Eugene Yeo, chief operating officer of regional telco MyRepublic, noting that the demand for higher data throughput and lower latency is a challenge when running high-density Kubernetes clusters. “Scaling out storage requires planning ahead and purchasing additional hardware,” he added.
The solution, said Deshmukh, is to move from traditional storage processes towards software-defined storage (SDS), which places data and storage within containers. “Through this, enterprises can migrate data – all or some of it at a time – across platforms seamlessly,” he said. “When the container disappears, enterprises can still access the data associated with the application.”
Indonesia’s Bank BTPN did just that, tapping Red Hat’s OpenShift Container Storage to get more out of its Pure Storage flash arrays, which offer block storage but lack the file storage needed by the bank’s containerised applications.
“They have applications that run across 10 containers that cannot access the same block,” said Vijayan. “So, they deployed our software as a virtual machine to consume their Pure Storage arrays, along with block, file and object storage, complete with data replication across different sites.”
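The detail about multiple containers needing the same data maps to Kubernetes access modes: a block volume is typically ReadWriteOnce (attached to one node at a time), whereas a file-backed volume can be mounted ReadWriteMany and shared. The following is a hedged sketch only, assuming a shared file storage class exposed by the SDS layer; the names are placeholders, not necessarily how Bank BTPN's deployment is configured.

```python
# Sketch: a ReadWriteMany claim backed by a shared file system, so several
# application pods can mount the same data -- something a raw block volume
# cannot normally provide.
from kubernetes import client, config

config.load_kube_config()

shared_pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "shared-app-data"},
    "spec": {
        "accessModes": ["ReadWriteMany"],        # mountable by many pods
        "storageClassName": "sds-filesystem",    # placeholder SDS file class
        "resources": {"requests": {"storage": "100Gi"}},
    },
}

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=shared_pvc)
```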
Maintain storage persistence with SDS
Software-defined storage, a datacentre storage architecture that separates the management and provisioning of storage from the underlying hardware, can help enterprises maintain persistent data storage within their containers.
“Through SDS, enterprises can automate their storage management needs through frameworks such as Kubernetes,” said VMware’s Deshmukh. “They can automatically scale storage up and down and eliminate over-provisioning. In an SDS model, the data plane – which is responsible for storing persistent data – should be virtualised and provide a convenient abstraction for applications.”
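One concrete form of that automation is online volume expansion: where the storage class allows it, growing a volume is a matter of patching the claim, with no ticket to a storage team and no up-front over-provisioning. A sketch follows, assuming a storage class created with allowVolumeExpansion enabled and a driver that supports resizing; the claim name refers to the hypothetical example above.

```python
# Sketch: scale an existing volume up by patching its PersistentVolumeClaim.
# Requires a StorageClass with allowVolumeExpansion: true and a driver that
# supports expansion.
from kubernetes import client, config

config.load_kube_config()

client.CoreV1Api().patch_namespaced_persistent_volume_claim(
    name="orders-db-data",       # the hypothetical claim created earlier
    namespace="default",
    body={"spec": {"resources": {"requests": {"storage": "40Gi"}}}},
)
```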
With the growing adoption of containers, the developer community has created the Container Storage Interface (CSI) to provide a standardised API (application programming interface) for container orchestration platforms to “talk” to storage plugins. Storage suppliers such as Dell EMC now offer CSI-compatible drivers to enable their storage systems to interface natively with containers and Kubernetes.
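In practice, wiring a CSI driver into a cluster surfaces as a StorageClass whose provisioner field names the driver, after which Kubernetes talks to the array through the standard CSI API rather than in-tree code. The sketch below uses a placeholder provisioner name; real driver names depend on the storage supplier.

```python
# Sketch: a StorageClass that delegates provisioning to a CSI driver. The
# provisioner string is a placeholder; each storage supplier ships its own.
from kubernetes import client, config

config.load_kube_config()

storage_class = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "array-block"},
    "provisioner": "csi.example-vendor.com",   # placeholder CSI driver name
    "allowVolumeExpansion": True,
    "parameters": {"fsType": "ext4"},
}

client.StorageV1Api().create_storage_class(body=storage_class)
```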
“One of our customers in Southeast Asia was building mobile and web-based applications in a public cloud service on a container-based system,” said Matthew Zwolenski, vice-president of presales at Dell Technologies Asia-Pacific and Japan. “As their core application moved to production, they decided to move the application in-house onto a platform they could govern and control.
“We helped them build a Kubernetes-based architecture on our VxRail platform and they were able to seamlessly port their application to this new architecture. They are deploying our ECS object store and Isilon file system to store large files, as well as massive datasets such as images, and separating the container-based data applications from their data.”
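ECS supports S3-compatible access, so moving large files into an object store like the one described typically means pointing a standard S3 client at the on-premise endpoint. The sketch below uses boto3; the endpoint, bucket, keys and credentials are all placeholders rather than details of this customer's environment.

```python
# Sketch: upload a large file to an S3-compatible object store (such as an
# on-premise ECS endpoint). All names and credentials are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://ecs.example.internal:9021",  # placeholder endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

s3.upload_file("scan-batch-001.tar", "imaging-datasets", "raw/scan-batch-001.tar")
```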
But one problem remains. Because backup and recovery were not a priority for early cloud-native applications, there are no frameworks, at least for now, to help automate backup and recovery processes for containerised applications, according to Red Hat’s Vijayan.
“A lot of that work is being done manually, but now that stateful and stateless applications are being developed, the Kubernetes community is thinking of having things like snapshots,” he said. “We are also working with the community to build those functionalities into container storage. Some of that exists, but it’s not really done in an automated, orchestrated manner.”
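The snapshot capability Vijayan refers to is exposed in Kubernetes through the VolumeSnapshot resource, which lets a point-in-time copy of a claim be requested declaratively, provided the CSI driver and snapshot controller support it. A hedged sketch follows; the snapshot class and claim names are placeholders, and older clusters may expose the API as v1beta1 rather than v1.

```python
# Sketch: request a point-in-time snapshot of a PersistentVolumeClaim via the
# VolumeSnapshot custom resource. Names below are placeholders.
from kubernetes import client, config

config.load_kube_config()

snapshot = {
    "apiVersion": "snapshot.storage.k8s.io/v1",
    "kind": "VolumeSnapshot",
    "metadata": {"name": "orders-db-snap-001"},
    "spec": {
        "volumeSnapshotClassName": "csi-snapclass",   # placeholder class
        "source": {"persistentVolumeClaimName": "orders-db-data"},
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="snapshot.storage.k8s.io",
    version="v1",                 # may be v1beta1 on older clusters
    namespace="default",
    plural="volumesnapshots",
    body=snapshot,
)
```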