
Beyond IBM Watson: A Computer Weekly Downtime Upload podcast

Listen to this podcast

We speak to IBM software's product management lead about making AI work in the enterprise

While it is relatively easy to demonstrate business value from a pilot artificial intelligence (AI) project that produces positive outcomes, it is far harder to scale such projects into production. Kareem Yusuf, senior vice president, product management and growth at IBM's software group, urges IT and business leaders to assess the return on investment (ROI), especially given that costs, measured in the volume of queries or tokens sent to an inference engine's application programming interface (API), can quickly eat into the monetary value the project is likely to generate.

Looking at a straightforward example, he says: “If you're going to use AI within enterprise business processes, I think [a simple use case] is in decision support, making sense of more data than we can consume and giving us the right insights and direction or triggering work to be executed.”

Among the concerns Yusuf has heard from IBM customers looking to deploy AI in such a scenario is the protection and quality of data. He says IT and business leaders want to ensure they can protect their own data. They also need assurances that what is running is not generating hallucinations or other false flags. "These are the key things that really bother many customers," he says, adding: "When you think about it, we need to be able to know that we can trust the system."

This also means IT and business leaders need to consider how the AI system has effectively been built and how it integrates with other enterprise IT. 

Then there are the return on investment calculations. Yusuf urges organisations to consider the scalability of an AI project, which, he notes, has a direct impact on the ROI. "The true cost of many of these systems are becoming apparent." As an example, a pilot AI project may perform a quick call to an external service, and this works as expected in the pilot setup. "When you do this at scale," he says, there are usage costs measured in token and inference charges. "Does the pilot truly scale in an enterprise-wide context?" he asks.

As Yusuf notes, these are questions IT and business leaders need to consider if they want a return on investment aligned with the potential gains in efficiency or productivity demonstrated in the AI pilot deployment. He says: "Can I trust the tech I'm building on and can I deploy it at scale with quality, so that I know that I can actually drive my business against it?" The conversation, he adds, needs to be focused on the use cases the organisation believes will be able to drive value.
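The scaling effect Yusuf describes can be made concrete with some simple arithmetic. The sketch below is purely illustrative: the function name, query volumes and per-token price are all hypothetical assumptions, not IBM's figures or any vendor's pricing, but they show how per-query token charges that look trivial in a pilot compound at enterprise scale.

```python
# Illustrative sketch of token-based inference costs at scale.
# All figures are hypothetical assumptions, not real vendor pricing.

def monthly_inference_cost(queries_per_day: int,
                           tokens_per_query: int,
                           price_per_1k_tokens: float,
                           days: int = 30) -> float:
    """Estimate monthly API spend from token usage."""
    total_tokens = queries_per_day * tokens_per_query * days
    return total_tokens / 1000 * price_per_1k_tokens

# A pilot handling 100 queries a day looks cheap...
pilot = monthly_inference_cost(100, 2_000, 0.01)

# ...but the same workload rolled out enterprise-wide, at
# 500,000 queries a day, is a very different conversation.
production = monthly_inference_cost(500_000, 2_000, 0.01)

print(f"Pilot:      ${pilot:,.2f} per month")       # ~$60/month
print(f"Production: ${production:,.2f} per month")  # ~$300,000/month
```

The cost grows linearly with query volume, so a 5,000-fold jump in usage produces a 5,000-fold jump in spend, which is exactly why Yusuf argues the ROI question has to be asked against the production workload, not the pilot.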

Among the areas of enterprise AI that IBM has identified as a business opportunity is where customers can take their data and, as he puts it, "easily mash their data into the large language model". This, he says, allows them to take advantage of large language models (LLMs) in a way that enables protection and control of a business's enterprise data. IBM has developed a project called InstructLab, which provides the tools to create and merge changes to LLMs without having to retrain the model from scratch. It is available in the open source community along with IBM Granite, a foundation AI model for enterprise datasets.

The company recently unveiled Granite 3.0, which, it says, represents a shift from general-purpose large language models to smaller, lower-cost, high-performance models designed for business use. Granite 3.0, according to IBM, is up to 24 times cheaper than other models.