AI is now more than just a science project
AI-for-AI’s sake is giving way to an enterprise AI model that demands skilled, value-driven results built on technology that can adapt at minimal cost
As far as hype levels go, no technology has approached the heights of artificial intelligence (AI).
The technology’s potential to automate, optimise, and extract previously unimaginable insights has struck a chord across all levels of Australian enterprise. From the C-level to the water cooler, the buzz has been palpable.
Now, 18 months after ChatGPT kickstarted the hype cycle, the awe has given way to a more clear-eyed understanding of what implementing the technology actually involves.
Initial Australian forays into AI could be described as ‘science projects’. For the most part, technology teams were tinkering and toying with the technology as they sought to figure out what AI means for them and how it could fit into their business.
From my conversations with local businesses, these early experiments have involved AI projects being spun up in separate environments. This has allowed teams to build up their skills and discover where additional talent would be required.
Whereas senior leaders were initially enthusiastic and optimistic about these ‘science projects’, a recent shift has seen decision makers seek more well-defined business cases and evidence of how AI will deliver value for the enterprise.
AI-for-AI’s sake is over. Enterprise AI is now the focus.
As businesses look to move from experimentation to execution, there are several hurdles to overcome. The three most prominent for enterprise AI are ensuring the necessary skills are in place, convincing the board of the real-world business value, and ensuring the infrastructure can provide the necessary computing power.
Syncing up skills
A recent survey of 650 IT leaders found that all of them – 100% – expect to require additional AI skills over the next 12 months.
According to the Australian Federal Government’s Digital Skills Organisation, the high demand for technical talent is likely to result in a shortage of 370,000 digitally-skilled workers by 2026.
This AI skills gap means organisations are extremely unlikely to develop their own large language models (LLMs). The Nutanix State of enterprise AI report found 90% of enterprises were planning to leverage pre-existing LLMs due to the shortage of necessary skills.
The two skills in most demand were ‘generative AI and prompt engineering’ and ‘data science and data analytics’ – identified by 45% and 44% of organisations respectively.
This shows that while organisations have given up the dream of creating their own LLMs, significant work is still required to deploy, operate, and support these new technologies.
For an enterprise LLM to deliver on its potential, it must have access to an organisation’s data. For many, this means enterprise data needs to be extracted from unwieldy legacy infrastructure and applications that were never designed for AI.
This will require significant investment – investment that requires the blessing of the board.
Bringing the board on board
Despite what the hype might suggest, enterprise generative AI and LLMs are not something you simply download. In an enterprise setting, strict governance and privacy regulations mean AI needs to be implemented in a closed environment, so confidential data is not leaked or used to train models that serve external organisations.
As noted above, an enterprise LLM delivers value only when it can access the organisation’s data, so that its insights are based on a complete understanding of the business. Any blind spots will call its accuracy into question.
Supplying access to this data, however, is easier said than done.
Particularly where an enterprise still relies on legacy systems – which are typically the most business-critical – data must be rationalised and applications modernised before AI is implemented.
This is a critical first step. Any effort in convincing the board to implement AI must first include a conversation around modernising infrastructure. Without this, AI’s potential business benefits will be severely hamstrung as the model will be making decisions on an incomplete data set.
Intelligent infrastructure
Another recent report, the Nutanix Enterprise cloud index, found that while 90% of organisations across the region recognised AI as a priority, one third said their current IT infrastructure was unsuited to running such applications.
Over the next year, organisations are expected to invest heavily in their IT infrastructure. In fact, 84% of organisations are planning to modernise their IT to better support AI.
A key part of this investment will be the adoption of hybrid multicloud environments – the combination of edge, private, and public clouds. Already, one in five organisations in Asia-Pacific are running a hybrid multicloud model, and a further two in five plan to deploy one in the next one to three years. This infrastructure, with its flexibility, scalability, and cost-effectiveness, provides an ideal foundation for AI implementation.
Further, it bridges the gap between an enterprise’s legacy applications still running on-premises and the cloud-native and SaaS (software-as-a-service) services it has already deployed.
Modern hybrid multicloud environments are also highly automated. This removes much of the manual management and maintenance that legacy environments require, enabling an enterprise to take the highly skilled engineers who have been ‘keeping the lights on’ and reskill them to make the promise of AI a reality.
Like any technology, AI can deliver incredible benefits if implemented correctly. AI-for-AI’s sake is giving way to an enterprise AI model that demands skilled, value-driven results built on technology that can adapt at minimal cost.
Michael Alp is managing director of Nutanix Australia and New Zealand