Navigating the data analytics and AI landscape
Analysts at the Gartner Data and Analytics Summit in Sydney call for organisations to focus on business outcomes, extend data literacy to AI literacy and foster human-AI collaboration to benefit from the AI era
Companies in the S&P 1200 that view data and analytics as strategic outperform their peers 80% of the time, according to Gartner research vice-president and distinguished analyst Mark Beyer during the opening keynote at the Gartner Data and Analytics Summit in Sydney.
Gartner distinguished vice-president analyst Rita Sallam added that organisations in the 75th percentile for data and analytics maturity show a 30% improvement in financial performance over those at the 25th percentile.
While senior management generally understands the importance of culture and strategy to data and analytics adoption, they tend to undervalue governance and the management of the data and analytics function, according to Beyer.
Part of the problem is that those on the business side tend to view governance in terms of control and see it as something that acts as a brake on progress. But, as Sallam explained, it is about improving execution, in part by enabling the use of data to improve business performance.
One way this can be done is through the implementation and use of data products. Key characteristics of data products are that they are findable, ready for consumption, up to date, and governed for appropriate use, Beyer said.
Those responsible for data and analytics practices should be prepared to shout their successes – including defensible estimates of their value – from the rooftops, Sallam said.
But it is important to use the right metrics for value, she warned. Rather than considering the return on investment, it is better to look at the business value delivered by projects: “Link technical outcomes to business outcomes in terms the business can understand,” such as the effect on executive bonuses, she suggested.
GenAI opportunities
When it comes to generative AI (GenAI), Beyer explained that Gartner sees the opportunities as falling into four categories: front office, back office, products and services, and core capabilities. Furthermore, it can be used to deliver incremental improvements at one extreme or game-changing disruption at the other. The latter is a high-risk approach, he warned, but it also offers big potential rewards.
Another way of categorising the opportunities is “defend, extend or upend”. Gartner vice-president analyst Luke Ellery explained that “defend” projects are characterised by incrementalism, marginal gains and micro innovations. Such projects typically cost between $20,000 and $100,000.
“Extend” projects are intended to grow some combination of market size, reach, revenue or profitability. Finally, an “upend” initiative looks to change the players in the game or change the competition – but at $5m to $100m, “it’s so expensive and it’s such high risk”, Ellery said.
Cost estimates can blow out by 500% or even 1,000%, Ellery warned. Uncertainties include the possibility that no more datacentres will be allowed in a particular geography, the cost of data preparation, the actual cost of using GenAI application programming interfaces (APIs), and users’ habits, such as iterating towards a final answer, which can lead to exponentially higher costs.
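As a rough illustration of how usage habits drive API spend (the per-token price, token counts and user figures below are invented assumptions for the sketch, not Gartner's numbers), even a simple linear model shows iteration multiplying monthly costs:

```python
# Illustrative GenAI API cost model. All figures are assumptions chosen for
# the example -- real prices and token counts vary by provider and workload.

def monthly_api_cost(users, queries_per_day, iterations_per_query,
                     tokens_per_call=1_500, price_per_1k_tokens=0.01,
                     working_days=22):
    """Estimate monthly API spend when each query takes several refinement rounds."""
    calls = users * queries_per_day * iterations_per_query * working_days
    return calls * tokens_per_call / 1_000 * price_per_1k_tokens

# One-shot answers vs. users iterating five times towards a final answer:
baseline = monthly_api_cost(users=500, queries_per_day=10, iterations_per_query=1)
iterative = monthly_api_cost(users=500, queries_per_day=10, iterations_per_query=5)
print(baseline, iterative)  # spend scales directly with the iteration count
```

In this toy model the cost grows linearly with iterations; in practice longer conversation histories also inflate tokens per call, which is what pushes real bills towards the blowouts Ellery describes.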
But there’s more to risk than simply cost. Among the issues raised by Ellery were the quality and ownership of data, the need to monitor and tune a model’s performance, and the absence of standard contract terms relating to the provision and use of GenAI models.
Fortunately, there are some tools that can help monitor AI-related risks and costs. ActiveFence, Arthur, TruEra, and Fiddler were among those mentioned by Ellery. Gartner clients can also use the company’s AI and GenAI cost calculator, which takes into consideration more than 100 detailed cost items and helps compare the available options.
Also, a formal and centralised financial operations (FinOps) practice can help flatten the cost curve associated with GenAI, Ellery suggested.
The flip side of cost is usually thought to be benefit. But Ellery notes that benefits do not automatically deliver value. For example, international law firm Ashurst compared the performance of human and AI-assisted human lawyers, and while the latter were faster, there was no net productivity gain once the cost of the AI was factored in.
The people factor
The successful adoption of AI requires “purpose-driven humans”, Sallam said, adding that this can be achieved by reorganising the operating model, extending data literacy to AI literacy, and establishing new leadership paradigms.
Den Hamer called for organisations to ensure their people are AI ready, which means investing in AI literacy for business and technical employees from the C-suite down. And it should include teaching them how to avoid AI-driven errors.
Peter Krensky, Gartner senior director analyst, pointed to two things about AI that everybody should know: “GIGO (garbage in, garbage out) still applies, and large language models sometimes ‘hallucinate’ so you need to spot that happening and manage the resulting situation.”
Beyond that, different jobs require different AI skills and skill levels. For example, an executive doesn’t need to know anything about AI engineering but should have a strong grasp of getting value from AI.
Consequently, organisations will need to identify the various personas and deliver targeted training for these different groups, but Krensky thinks many will “massively underperform”.
Pointing to the synergy that can be unleashed by combining human intelligence and knowledge with AI, den Hamer recommends investing in data management tools so data can be AI-ready, supporting business analysts with pattern discovery and communication capabilities, adopting self-learning AI systems for wider decision automation, and investing in capabilities such as natural language processing to extend the analytics user base.
Trust is another important consideration. Employees need to be able to trust what GenAI systems tell them, that privacy issues are being properly addressed, and that they are being told the truth about the implications for their ongoing employment.
Processes should also be redesigned, and role expectations set to reflect human-AI collaboration, not AI substitution, den Hamer said. End users should be authorised to make decisions that are supported by analytics, but continued oversight is needed to avoid self-serving analyses.
Innovation remains important, so employees should be allowed time to concentrate on side projects that are of potentially high impact, but those projects must be aligned to business outcomes, he added.
AI fatigue
Krensky predicts that AI fatigue will become the top issue in 2025, and cautioned that GenAI is not good for everything. Referencing the Gartner hype cycle for GenAI, he said the technology is still at the peak of inflated expectations, and as yet there is nothing in the trough of disillusionment, let alone the plateau of productivity. “Vendors have a tendency to exaggerate,” he warned.
“Eventually we will stop talking about AI all the time, but whether that is because we take it for granted, because it does not live up to the hype or because ‘intelligence’ becomes redefined as ‘augmented intelligence’ remains to be seen,” he said.
Organisations will also face challenges with scaling AI, as that requires a combination of technology, data, operations, organisation, skills, and governance and risk management, he noted.
Near-term steps on the AI journey include the development of AI agents using various technologies that can be composed into more sophisticated systems.
For example, an expert agent might be built on a combination of a large language model, a large action model, and a causal model. The expert might draw on a planner agent combining predictive machine learning with optimisation algorithms, while being kept in check by a rules-based agent that provides safeguards.
The point is to use AI technologies that suit particular aspects of the overall task to be performed, in part to make AI more autonomous, adaptive and reliable.
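The composition described above — an expert agent drawing on a planner while a rules-based agent provides safeguards — can be sketched, very loosely, as plain objects. The agent internals here are stubs standing in for the underlying models, not a real implementation:

```python
# Minimal sketch of the agent-composition pattern: an expert agent delegates
# planning to a planner agent and filters every action through a rules-based
# safeguard. The internals are stubs, not real language/action/causal models.

from dataclasses import dataclass

@dataclass
class PlannerAgent:
    """Stands in for predictive machine learning plus optimisation."""
    def plan(self, goal: str) -> list[str]:
        return [f"step 1 for {goal}", f"step 2 for {goal}"]

@dataclass
class RulesAgent:
    """Rules-based safeguard that vetoes disallowed actions."""
    banned: tuple = ("delete all data",)
    def allows(self, action: str) -> bool:
        return not any(phrase in action for phrase in self.banned)

@dataclass
class ExpertAgent:
    """Would wrap a large language/action model; here it just orchestrates."""
    planner: PlannerAgent
    guard: RulesAgent
    def act(self, goal: str) -> list[str]:
        # Only actions that pass the safeguard are carried out.
        return [a for a in self.planner.plan(goal) if self.guard.allows(a)]

expert = ExpertAgent(PlannerAgent(), RulesAgent())
print(expert.act("archive old records"))
```

The design point is the separation of concerns: each sub-agent uses the technique best suited to its part of the task, and the guardrail agent stays simple and auditable even as the other components grow more capable.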
Another step is the use of simulations and other types of synthetic data to train models. This has at least three advantages: there isn’t always enough real-world data for a particular project, real data can be biased, and synthetic data carries no privacy implications.
Finally, AI regulation is going to increase. The European Union AI Act came into force during the week of the conference, and will lead the way for other jurisdictions, Krensky predicted, so now is the time to start thinking about risk categories. The act identifies four categories of risk: unacceptable (banned), high (regulated), limited (transparency obligations) and minimal (no obligations).
Other related actions suggested by Gartner include putting appropriate guardrails, tools and training into place; setting up a cross-functional AI council within the organisation; and fostering an understanding that responsible AI is a philosophy, not a checklist.
Read more about AI in APAC
- Some 500 customer service officers at Singapore’s DBS Bank will soon be able to tap a GenAI-powered virtual assistant to improve workflows and better serve customers.
- Snowflake’s regional leader Sanjay Deshmukh outlines how the company is helping customers to tackle the security, skills and cost challenges of AI implementations.
- Malaysian startup Aerodyne is running its drone platform on AWS to expand its footprint globally and support a variety of use cases, from agriculture seeding to cellular tower maintenance.
- The Australian government is experimenting with AI use cases in a safe environment while it figures out ways to harness the technology to benefit citizens and businesses.