Deep dive: How Pure Fusion plans to implement storage classes
We spoke to Pure Storage founder John ‘Coz’ Colgrove about how Pure Storage’s Fusion control plane will make array capacity available to applications via pre-defined storage classes
Flash storage pioneer Pure Storage recently announced it will upgrade its Fusion control plane to make storage capacity in its arrays available via storage classes, which allow capacity to be easily provisioned to any application that needs it.
That potentially brings lots of advantages as it makes storage simpler and more easily consumable by multiple applications across numerous hosts and in varied environments.
Kubernetes has long used the concept of storage classes to provide persistent storage with defined profiles to applications. But that’s in containerised environments that Kubernetes controls from top to bottom.
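For readers less familiar with that pattern, here is a minimal sketch of the Kubernetes version, written as Python dictionaries that mirror the usual YAML manifests; the class name, provisioner string and sizes are illustrative placeholders rather than anything Pure-specific.

```python
# Minimal sketch of the Kubernetes storage-class pattern, expressed as Python
# dicts that mirror the usual YAML manifests. Names and parameters here are
# illustrative placeholders only.

# An administrator defines a storage class once: a named profile that hides
# which backend, tuning and features sit behind it.
storage_class = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "class-a"},
    "provisioner": "example.csi.vendor.com",    # placeholder CSI driver name
    "parameters": {"performanceTier": "high"},  # backend-specific attributes
    "reclaimPolicy": "Delete",
}

# An application then claims capacity against the class name, without knowing
# or caring which array or disk actually satisfies the claim.
persistent_volume_claim = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "db-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "class-a",
        "resources": {"requests": {"storage": "10Gi"}},
    },
}

print(storage_class["metadata"]["name"],
      persistent_volume_claim["spec"]["storageClassName"])
```

The point is the split of roles: someone defines the class once, and applications claim against the name without knowing what backs it.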
So, how will storage classes work for Pure’s Fusion control plane in heterogeneous datacentre environments?
At Pure’s Accelerate event in Las Vegas in mid-June, we caught up with Pure Storage founder and chief visionary officer John “Coz” Colgrove to ask him.
How would you summarise the new functionality announced for Fusion?
John “Coz” Colgrove: The goal of Fusion is to let you run your fleet of arrays as a service offering, to let you define a small number of storage classes and so gain much greater consistency in your environment.
A typical customer buys some arrays. Next year, they buy some more arrays and then more the year after that. Maybe the next year, they decommission a few of the earlier ones and they buy some more.
They have this fleet they’ve purchased at a whole bunch of different times. They have different classes of array, maybe different generations of the same class. So, they have this fleet that is heterogeneous, and they have trouble managing it to gain uniform utilisation.
Because, of course, the performance levels and capacities of these things are different. This one has three features that this other one doesn’t. This one has two different features. And so, you have this mess of 27 different types of storage in your datacentre.
Now you are attempting to make a change. For example, if I want to implement this new security feature, planning it and doing it is a lot harder because all those different types of storage mean you have to figure out how to apply it.
What Fusion should let you do is have a small number of people that define storage classes with attributes. Then you can have your users do it via API provisioning, or you can still have the IT administrators do it by the user opening a ticket saying, “I need 10TB”, and they figure it out.
Instead of them having to go to all these arrays and figure out which one has capacity, which one has enough performance, which one has the right kind of tuning, Fusion will figure all that out and classify them, so: here’s my class A storage, here’s my class B storage, here’s my class C storage.
And I would tell every organisation, you really shouldn’t have more than three or maybe four classes. That way, you can reason about and move your datacentre services consistently. All of that should then allow people to get far greater utilisation out of their environment.
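As a rough illustration of that kind of small catalogue (a hypothetical model only, not Pure’s actual schema or API), three classes and a provisioning request might be sketched like this in Python; every attribute name below is a made-up placeholder.

```python
# Hypothetical sketch of a small storage-class catalogue, as Coz describes:
# a handful of named profiles that everything in the fleet maps onto.
# Attribute names are illustrative, not Pure's actual schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class StorageClass:
    name: str
    performance_tier: str       # e.g. "high", "standard", "capacity"
    snapshot_every_hours: int   # protection cadence attached to the class
    replicated: bool

CATALOGUE = {
    "A": StorageClass("A", "high", snapshot_every_hours=8, replicated=True),
    "B": StorageClass("B", "standard", snapshot_every_hours=24, replicated=True),
    "C": StorageClass("C", "capacity", snapshot_every_hours=24, replicated=False),
}

# A user (or an API call) asks only for "10TB of class A"; the control plane
# decides which array in the fleet actually backs it.
request = {"storage_class": "A", "size_tb": 10}
print(CATALOGUE[request["storage_class"]])
```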
Because another problem you have is you buy these arrays and, for example, the finance department owns an array and has their applications on it. And if they don’t fill it up, they don’t want anybody else on it, right? You have to get rid of that. The cloud doesn’t do that. It makes no sense.
We see all these customers that have 20, 30, 50, 100 arrays and they’re 40%, 50%, 30% utilised. I want to start by getting those up to 70% or 80%, and that’s what Fusion wants to do.
When we started the Fusion project, we went after greenfield deployments. It was the easiest thing to do and to not have to worry about legacy environments. That was a mistake. What the new Fusion does is it goes after the brownfield deployments. People that already have fleets can now go and deploy Fusion, and in essence absorb their current configuration and start deploying new things using Fusion.
The next phase is then going to enable Fusion to rebalance across all those arrays seamlessly. Fusion will start recommending load balancing things.
You saw some of that in [Pure’s AI] Copilot. It’s a tech preview right now this summer, but when it’s released, it will go with Fusion. It will help you get these fleet insights. It’ll help non-expert users get deep insights about the fleet and make much more strategic decisions.
What engineering obstacles has Pure had to overcome to get Fusion to work in brownfield environments?
Coz: The biggest thing was that [with greenfield] it’s easy to say every volume in there, every object, has been configured with Fusion, so Fusion knows all about it.
The central Fusion database can have all this knowledge and say it all conforms to one of these defined storage classes. The hard part with brownfield was saying we’re going to take all the existing configs, and we’re going to intake that and turn it into a Fusion configuration.
Fusion has a notion of things like a site, very similar to some of the cloud concepts. But there’s no site information in the existing configuration.
You had to make decisions such as, “We’re going to import everything in the existing configuration and we’ll pull it into this default workspace.”
Another concept Fusion has is the idea that you could have a workspace that you’re controlling with your classes of storage, and I could have one that I’m controlling with mine. So, it had to take all the existing configuration, pull it in, and create defaults for all of this stuff. All the concepts that weren’t there before.
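A hypothetical sketch of what that intake amounts to, with made-up field names rather than Fusion’s real data model: legacy objects carry no site or workspace information, so the import has to supply defaults.

```python
# Hypothetical sketch of the brownfield intake step: existing array
# configurations carry no site/workspace metadata, so the import supplies
# defaults. Field names are illustrative, not Fusion's actual schema.

DEFAULT_SITE = "default-site"
DEFAULT_WORKSPACE = "default-workspace"

def absorb_existing_config(existing_volumes):
    """Pull legacy volume records into a Fusion-style model, filling in the
    concepts (site, workspace, storage class) that did not exist before."""
    imported = []
    for vol in existing_volumes:
        imported.append({
            "name": vol["name"],
            "array": vol["array"],
            "site": vol.get("site", DEFAULT_SITE),             # missing pre-Fusion
            "workspace": vol.get("workspace", DEFAULT_WORKSPACE),
            "storage_class": vol.get("storage_class"),         # classified later
        })
    return imported

legacy = [{"name": "finance-db-01", "array": "array-07"}]
print(absorb_existing_config(legacy))
```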
In Kubernetes, you have persistent volumes and you have storage classes, and applications make claims on storage classes and persistent volumes. It’s easy to see how that works in Kubernetes, because Kubernetes is the broker that handles those relationships. What’s the analogue of that in Fusion, or is there one? How does it work?
Coz: No, there is very much an analogue, and that’s what I was talking about with the sites and the workspaces and things like that.
The cloud, even before Kubernetes, brought concepts like a site, or a region I guess you’d call it, and an availability zone within a region, into the way people operate. Kubernetes has a similar set of concepts, with some slight differences, but we’ve adopted the same things in Fusion: sites, availability zones, and that sort of thing.
Because what we’re trying to do is create a cloud-like service offering that you can run on-prem, or in fact, that you can run on-prem and extend to your Pure Cloud Block Store and cloud resources.
Does Fusion provide the equivalent of a persistent volume claim at the application level in a similar way?
Coz: The persistent volume claim is more of a container thing, so that would be more Portworx. But the analogy is normally that when you have a file system or a volume, you have it exported under a set of export rules in the case of a file system, or connected to a host over Fibre Channel, iSCSI, NVMe and so on, but you have this host connection.
You can argue that host connection is the equivalent of a persistent reservation or persistent relationship. Because every time you power off your datacentre and restart it, it comes back.
Mapping to what happens with Kubernetes, if you’ve got a complex environment with a lot of legacy applications, how do those applications make their claim to storage classes?
Coz: I’d say you still use Portworx or something like it to manage your container-based applications.
And non-container-based applications?
Coz: The non-container-based applications have, for example, a database that consists of five volumes. There’d be a persistent connection set up in the array configuration to connect these volumes to this host.
One of the things that Fusion tends to do that would be different – going back to what I was saying around balancing things – is that to get the best utilisation out of the fleet, you shouldn’t take a high-performance app and put it all on one array.
You should have that high-performance app distributed across a group of arrays, and you should have lower-performance applications also distributed across those arrays, so you get even usage from a busyness perspective and even usage from a capacity perspective.
In the old days, pre-Fusion, if you wanted to create a new database, this is what you’d do. You’d say, “I need a log volume and a scratch volume and three data volumes”, for example, and you’d normally figure out an array that had capacity and performance available, create those five volumes on the array – and maybe not create them all perfectly consistently with every other instance of that application – and then set up a connection from that array to the hosts that are going to run against this database.
In the Fusion world, you’re going to say, “I want these five volumes for this application.” You can have a template for the application. Or you’re going to say, “I want these five volumes, and I want four of them to be class A, and this scratch volume I want to be class C because I don’t care about backing it up the same way.”
Fusion will go and potentially put those on five different arrays, because Fusion will go find the space in your fleet and then instantiate the volume and instantiate the connection. And so it’ll let you do it in a way that gets much better utilisation of your fleet and gives you much greater consistency. I think those are the two biggest benefits.
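To make the contrast concrete, here is a hypothetical sketch of that template-driven request and a naive placement pass over a small fleet; the names and the greedy placement rule are illustrative assumptions, not Fusion’s actual logic.

```python
# Hypothetical sketch of template-driven provisioning: the request names
# volumes and classes, and a placement pass spreads them across whichever
# arrays in the fleet have headroom. Purely illustrative, not Fusion's logic.

DB_TEMPLATE = [
    {"name": "log",     "size_tb": 1, "storage_class": "A"},
    {"name": "data-1",  "size_tb": 4, "storage_class": "A"},
    {"name": "data-2",  "size_tb": 4, "storage_class": "A"},
    {"name": "data-3",  "size_tb": 4, "storage_class": "A"},
    {"name": "scratch", "size_tb": 2, "storage_class": "C"},
]

fleet = [
    {"array": "array-01", "classes": {"A"},      "free_tb": 20},
    {"array": "array-02", "classes": {"A", "C"}, "free_tb": 35},
    {"array": "array-03", "classes": {"C"},      "free_tb": 50},
]

def place(volumes, fleet):
    """Greedy placement: each volume goes to the eligible array with the most
    free capacity, so requests spread out instead of piling onto one box."""
    plan = {}
    for vol in volumes:
        eligible = [a for a in fleet
                    if vol["storage_class"] in a["classes"]
                    and a["free_tb"] >= vol["size_tb"]]
        target = max(eligible, key=lambda a: a["free_tb"])
        target["free_tb"] -= vol["size_tb"]
        plan[vol["name"]] = target["array"]
    return plan

print(place(DB_TEMPLATE, fleet))
```

The shape of the workflow is the point: the request speaks in templates and classes, and the control plane decides where the volumes and connections actually land.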
I go back to whenever you want to make a change and you say, “I need to do better protection against ransomware”, or, “I need to do better on my backup”. When you can reason about your environment because it is simpler, you make a better decision.
And then, of course, you can implement it by just saying, “I want to change the definition of class A storage to be backed up every four hours instead of every eight hours”, and go and get it done across the whole fleet. That’s a power people don’t have today.
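As a final hypothetical sketch, in the same spirit as the examples above and again not Pure’s API: change the class definition once, and derive the per-volume updates that would be pushed out across the fleet.

```python
# Hypothetical sketch of a fleet-wide policy change: tighten class A's backup
# cadence from every eight hours to every four, then apply it to everything
# tagged as class A. Illustrative only.

catalogue = {"A": {"snapshot_every_hours": 8}, "B": {"snapshot_every_hours": 24}}
volumes = [
    {"name": "finance-db-01", "storage_class": "A"},
    {"name": "web-logs",      "storage_class": "B"},
]

def update_class(catalogue, volumes, class_name, **changes):
    """Change the class definition once, then return the per-volume policy
    updates the control plane would roll out across the fleet."""
    catalogue[class_name].update(changes)
    return [(v["name"], dict(catalogue[class_name]))
            for v in volumes if v["storage_class"] == class_name]

print(update_class(catalogue, volumes, "A", snapshot_every_hours=4))
```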
Read more on Pure Storage
- Pure deepens Fusion as reorientation to storage for AI continues. Pure Storage launches Fusion ‘storage classes’ across its arrays as a pool of storage for AI-centric workloads, plus AI Copilot and Evergreen//One for AI storage-as-a-service.
- Pure’s storage as a service: We can offer what others can’t. All-flash storage vendor Pure makes bold claims about subscription pricing for storage, stating that the competition can’t offer what it can because its arrays are built for non-disruptive upgrades.