Storage analytics: How AI helps storage management
We look at storage analytics products that measure key metrics in storage hardware, single-vendor and multi-vendor tools, and how AI-based storage management and AIOps are emerging
IT infrastructure spend is expected to decline as a result of the coronavirus pandemic, but that will likely be combined with continued growth in the volume of data, in part driven by greater levels of remote working.
All of this, and the need to get more from smaller budgets, will put pressure on organisations to streamline how they manage their IT. Increasingly, they are turning to automated monitoring for storage management as part of this.
Storage hardware monitoring is now well established, with tools that gather data on volume usage and equipment performance, as well as environmental readings such as temperature and power consumption, and component-level data from drives. The more advanced tools can make recommendations to optimise performance and utilisation.
“This data is then leveraged by the integrated intelligence of systems to provide insights back to the admin to offer recommendations on how to resolve issues or improve system optimisation, or to provide predictive insights on potential issues before they occur,” says Scott Sinclair, an analyst at ESG.
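To make those component-level readings concrete, here is a minimal Python sketch of the kind of polling such tools do, using smartmontools. It assumes smartctl 7 or later (which added JSON output) is installed, and the exact fields returned vary by drive type and firmware, so treat the key names as indicative rather than definitive.

```python
import json
import subprocess

def drive_health(device: str) -> dict:
    """Poll one drive's SMART readings via smartctl's JSON output."""
    # smartctl's exit code encodes status bits, so a non-zero
    # return is not treated as a hard failure here
    result = subprocess.run(
        ["smartctl", "--all", "--json", device],
        capture_output=True, text=True, check=False,
    )
    data = json.loads(result.stdout)
    # Field names as reported by smartmontools 7.x; they can vary
    # by drive type (ATA vs NVMe) and firmware
    return {
        "device": device,
        "healthy": data.get("smart_status", {}).get("passed"),
        "temperature_c": data.get("temperature", {}).get("current"),
        "power_on_hours": data.get("power_on_time", {}).get("hours"),
    }

if __name__ == "__main__":
    print(drive_health("/dev/sda"))
```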
Over time, analysts expect storage analytics to move from monitoring and early fault detection to autonomous operations.
Hardware vendor tools versus multi-supplier monitoring
Most – but not all – storage analytics tools come from the hardware suppliers, giving IT managers a useful snapshot of the health of their arrays or storage subsystems. But this only gives a limited picture of a complete system or stack, especially when more than one vendor is involved.
“Most storage managers will use the default tools that come with the equipment,” says Andy Buss, at analyst house IDC. “Enterprises have the desire to run centralised tools, but often end up not doing so. They revert to what comes with their equipment.”
More multi-supplier storage monitoring and analysis tools are coming onto the market, partly spurred by a need for firms to manage hybrid environments and partly by the growing use of standardised application programming interfaces (APIs) – especially the representational state transfer (REST) API – for storage.
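As a sketch of what a lowest-common-denominator, multi-supplier poll looks like over REST, consider the Python below. The endpoints and JSON field names are invented for illustration – every supplier’s REST schema differs – so only the overall shape matters.

```python
import requests

# Hypothetical endpoints: real arrays expose vendor-specific REST schemas
ARRAYS = {
    "array-a": "https://array-a.example.com/api/v1/metrics/capacity",
    "array-b": "https://array-b.example.com/api/v1/metrics/capacity",
}

def poll_capacity(session: requests.Session) -> list[dict]:
    """Gather a common capacity view across dissimilar arrays."""
    rows = []
    for name, url in ARRAYS.items():
        resp = session.get(url, timeout=10)
        resp.raise_for_status()
        body = resp.json()
        rows.append({
            "array": name,
            "used_tb": body["used_tb"],    # assumed field name
            "total_tb": body["total_tb"],  # assumed field name
            "pct_used": 100 * body["used_tb"] / body["total_tb"],
        })
    return rows

if __name__ == "__main__":
    with requests.Session() as session:
        for row in poll_capacity(session):
            print(row)
```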
But using multi-supplier tools involves a trade-off between system-wide visibility and detailed feedback on system performance. Supplier-independent tools do not, as yet, gather sufficient data to optimise every manufacturer’s equipment fully.
“A lot of the new management tools are more multi-vendor capable,” says Buss. “You get a base level of functionality. You don’t get all the bells and whistles, but you do have more control over your entire infrastructure.”
There is growing appetite among IT teams, he says, for a “single pane of glass” to manage infrastructure, including storage. Such systems do come at an additional cost, and companies have been reluctant to pay for them.
But this is changing, in part due to their experience of operating platforms such as Azure Stack and Amazon Web Services (AWS) Outposts, which come with sophisticated management built in.
Predictive tools and AIOps
Storage and system analysis tools are also becoming smarter. Hardware and management tool suppliers are turning to advanced analytics, and even artificial intelligence (AI) and deep learning, to improve system performance.
This can involve moving datasets to the most cost-effective storage tier, moving files away from a sub-system that shows signs it might fail, or consolidating data to maximise the balance between capacity utilisation and performance.
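A toy version of the tiering decision illustrates the idea. The thresholds and tier names below are assumptions made for the example; real systems learn placement from observed access patterns rather than fixed cut-offs.

```python
from dataclasses import dataclass

# Illustrative cut-offs only; intelligent tools derive these dynamically
HOT_ACCESSES_PER_DAY = 100.0
COLD_ACCESSES_PER_DAY = 1.0

@dataclass
class Dataset:
    name: str
    accesses_per_day: float
    current_tier: str  # "nvme", "ssd" or "object"

def recommend_tier(ds: Dataset) -> str:
    """Map observed access frequency to a cost-effective tier."""
    if ds.accesses_per_day >= HOT_ACCESSES_PER_DAY:
        return "nvme"
    if ds.accesses_per_day <= COLD_ACCESSES_PER_DAY:
        return "object"
    return "ssd"

for ds in (Dataset("archive-logs", 0.2, "ssd"), Dataset("oltp-db", 450, "ssd")):
    target = recommend_tier(ds)
    if target != ds.current_tier:
        print(f"move {ds.name}: {ds.current_tier} -> {target}")
```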
Companies are turning to these tools to cope with larger and more complex environments, including those that mix cloud and on-premise resources. AI and machine learning (ML) are increasingly seen as one way – sometimes the only way – to cope with that complexity and deliver IT performance to the business.
According to recent research by analyst firm ESG, some 23% of companies see AI and ML for system management as a top priority for datacentre modernisation.
“These intelligence features are vital for any IT environment of any significant size,” says ESG’s Scott Sinclair. “Organisations do not have the excess people to spend time continuously optimising every environment as workload usage evolves or to diagnose complex component failures.” Instead, they are turning to systems to do so.
Gartner has called this “AIOps”, which stands for “artificial intelligence for IT operations”.
Gartner predicts that by the end of 2025, 40% of new deployments of infrastructure products, including storage and hyper-converged systems, will be AIOps-enabled, up from less than 10% in 2020.
The new tools proactively analyse capacity and performance, predict potential issues that could disrupt data services, and provide actionable advice to resolve Level 1 problems, all of which enhances storage utilisation efficiency.
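Capacity forecasting is the easiest of these predictions to sketch. The example below simply fits a straight line to daily usage samples to estimate when a pool will fill; production tools use far richer, workload-aware models, so this only shows the underlying idea.

```python
import numpy as np

def days_until_full(used_tb: list[float], total_tb: float) -> float | None:
    """Extrapolate linear growth from daily samples to pool exhaustion."""
    days = np.arange(len(used_tb))
    slope, intercept = np.polyfit(days, used_tb, 1)
    if slope <= 0:
        return None  # flat or shrinking usage: nothing to forecast
    current = intercept + slope * days[-1]
    return (total_tb - current) / slope

# A week of daily used-capacity samples (TB) on a 100 TB pool
samples = [62.0, 62.4, 63.1, 63.5, 64.2, 64.8, 65.5]
print(f"{days_until_full(samples, 100.0):.0f} days to full")
```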
Julia Palmer, Gartner vice-president, says: “Storage tools have always delivered on some metrics for capacity and performance, but it was not good enough as it required someone with storage expertise to constantly monitor it. AIOps tools, however, look for anomalies, patterns of consumption and performance trends and correlate it with normal behaviour of the specific customer system and other systems supported by the vendor.”
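A deliberately minimal form of the anomaly detection Palmer describes is to compare each new sample against the system’s own recent baseline. The rolling z-score below captures that “normal behaviour” idea in a few lines; vendor AIOps models go much further, correlating many metrics across fleets of systems.

```python
import numpy as np

def latency_anomalies(latency_ms: np.ndarray, window: int = 60,
                      threshold: float = 4.0) -> np.ndarray:
    """Flag samples that deviate sharply from the recent baseline."""
    flags = np.zeros(len(latency_ms), dtype=bool)
    for i in range(window, len(latency_ms)):
        baseline = latency_ms[i - window:i]
        mu, sigma = baseline.mean(), baseline.std()
        if sigma > 0 and abs(latency_ms[i] - mu) > threshold * sigma:
            flags[i] = True
    return flags

rng = np.random.default_rng(0)
series = rng.normal(2.0, 0.1, 300)  # ~2ms of steady array latency
series[250] = 9.0                   # injected spike
print(np.where(latency_anomalies(series))[0])  # expect index 250 flagged
```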
What’s available?
By bringing together richer data sources and a degree of artificial intelligence, some suppliers claim significant improvements in system performance and availability.
HPE’s InfoSight, for example, is viewed as one of the most advanced. It monitors 100,000 systems and between 30 million and 70 million sensors worldwide for maintenance and performance issues.
InfoSight claims to detect and fix 86% of potential problems without the need for human intervention. As many as 54% of problems picked up by InfoSight are “outside” storage and somewhere else in the stack, the company says. For now, InfoSight only works with HPE’s Nimble and 3PAR systems.
Virtana – formerly Virtual Instruments – is also a leader in system performance management, with a focus on hybrid architectures as well as private cloud.
There are other suppliers in the market too. IBM’s cloud-based Storage Insights, NetApp’s Active IQ and Hitachi Vantara’s tools are some of the best known.
Microsoft also has extensive monitoring capabilities in Azure, via REST APIs, as well as through Windows Server, where that platform is used to run storage-dense server hardware. VMware, meanwhile, has its own toolsets for virtualised environments, through its intelligent optimisation tools for vSAN.
Together these applications give CIOs powerful tools to monitor and optimise their environments, as well as prevent failures.
Smarter systems, smarter storage
The industry is, however, still at an early stage on its journey to intelligent, device-independent storage management.
Areas of growth are likely to include more support for hybrid environments, more granular management of the different types of flash storage on the market, and potentially support for capacity-based pricing. Tools such as Virtana and vSAN can already take account of data ingress and egress costs.
“The predictive capabilities in some of these systems are really amazing,” says ESG’s Sinclair. “Intelligent systems typically offer recommendations to optimise system performance or capacity, while some even offer the option to self-optimise.”
“In a similar fashion, these systems can also often automatically diagnose issues and recommend actions. It’s difficult to make claims on predictive capabilities, because the results can change based on the environment. But some vendors rely on these capabilities to make higher-level claims, such as guaranteed 100% availability.”
Whether such claims can be substantiated remains to be seen. It will be easier for suppliers to hit reliability targets in largely monolithic, single-vendor environments than in more complex systems.
And reliability and performance metrics will vary depending on the workload and applications. A system that prioritises short-term performance over reliability might not be giving the business what it needs.
As a result, storage analytics will continue to work alongside human analysts, suggests IDC’s Andy Buss. “Systems need to be sustainably reliable,” he says. “For the technology to be accepted, it needs to work as an assistant. Few companies will turn over their IT or storage infrastructure entirely to AI.”
Read more about storage analytics
- Five predictive storage analytics features you’ll want to watch for. Predictive storage analytics tools are becoming standard equipment in the enterprise. Get to know the features you’ll need, how they work and their benefits.
- AI and predictive analytics are included in many storage offerings. See how this functionality helps automate storage, which features are useful and which products have them.