
Red Hat eyes AI workloads in platform moves

Open source juggernaut rolls out offerings to make it easier to fine-tune large language models, among other moves to ease deployment of artificial intelligence workloads

Red Hat is doubling down on artificial intelligence (AI) through a slew of product offerings and enhancements that will make it easier for organisations to fine-tune AI models and deploy AI applications.

In his keynote at Red Hat Summit in Denver today, Red Hat CEO Matt Hicks said that while organisations have made much progress fine-tuning generative AI (GenAI) models through open source platforms such as Hugging Face, their work cannot be combined with that of others to improve the models.

“Open source has always thrived with a very broad pool of contributors willing to contribute their knowledge, but the barriers to fine-tuning Mistral or [Meta’s] Llama without a background in data science have been too high,” he added.

Hicks said Red Hat is looking to address that challenge by open sourcing InstructLab, a technology from IBM that’s touted to make it easier for anyone, not just data scientists, to train and contribute to large language models (LLMs).

InstructLab is based on a technique developed by IBM Research that uses taxonomy-guided synthetic data generation and a novel multi-phase tuning framework. This approach makes AI model development more open and accessible by relying less on expensive human annotations and proprietary models.

With the technique, dubbed Large-scale Alignment for chatBots (LAB), LLMs can be improved by specifying skills and knowledge in a taxonomy, then generating synthetic data from that information to fine-tune the models.
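To make the idea concrete, here is a minimal, hypothetical sketch of taxonomy-guided synthetic data generation. It is not the actual InstructLab implementation: the `TaxonomyNode` structure, the stubbed `teacher_model` function and the variation scheme are all illustrative assumptions. The point is the shape of the technique: a taxonomy node carries a few human-written seed examples, and a teacher model expands them into many synthetic training pairs.

```python
# Toy sketch of taxonomy-guided synthetic data generation (illustrative
# only, not InstructLab's real code). Each taxonomy node holds a handful
# of human-written seed Q&A examples; a "teacher" model -- stubbed here
# as a template function -- expands them into synthetic training pairs.

from dataclasses import dataclass, field


@dataclass
class TaxonomyNode:
    path: str  # e.g. "skills/writing/summarisation" (hypothetical layout)
    seed_examples: list = field(default_factory=list)  # (question, answer) pairs


def teacher_model(question: str) -> str:
    """Stand-in for a real teacher LLM that would answer each variation."""
    return f"Synthetic answer to: {question}"


def generate_synthetic_pairs(node: TaxonomyNode, n_per_seed: int = 3) -> list:
    """Expand each seed example into n_per_seed synthetic Q&A pairs."""
    pairs = []
    for question, _answer in node.seed_examples:
        for i in range(n_per_seed):
            variant = f"{question} (variation {i + 1})"
            pairs.append((variant, teacher_model(variant)))
    return pairs


node = TaxonomyNode(
    path="skills/writing/summarisation",
    seed_examples=[("Summarise this quarterly report.", "The report shows ...")],
)
synthetic = generate_synthetic_pairs(node)
print(len(synthetic))  # one seed example expanded into 3 synthetic pairs
```

In the real system, the generated pairs would then feed a multi-phase tuning run; the sketch stops at data generation, which is where the "relying less on expensive human annotations" claim comes from: a few seed examples fan out into much larger training sets.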

Hicks claimed that with InstructLab and the use of synthetic data, organisations can fine-tune LLMs and obtain results with significantly lower amounts of training data and “the ability to teach smaller models the skills relevant to your use case”.

“Training costs are lower, inference costs are lower and deployment options expand. These all in turn create options for how you might want to use [AI] in your business … with the choice to keep your data private and maintain it as your intellectual property,” he added.


InstructLab is part of RHEL (Red Hat Enterprise Linux) AI, a foundation model platform that lets organisations develop, test and deploy GenAI models. RHEL AI also includes the open source-licensed Granite LLM family from IBM Research and can be packaged as RHEL images for individual server deployments across a hybrid cloud environment.

RHEL AI is also included in OpenShift AI, Red Hat’s hybrid machine learning operations platform for running models at scale across distributed cluster environments.

Red Hat’s AI moves were set in motion last year when the company deepened its platform capabilities with OpenShift AI to address the needs of organisations that are set to add more AI workloads into the mix of applications that run on OpenShift.

The latest AI updates by Red Hat underscore the company’s resolve to be the platform of choice for organisations to build and run AI applications, at a time when open source is fuelling the AI revolution.

“We feel fortunate to be experiencing the convergence of AI and open source,” said Hicks, noting that every organisation will chart its own path when it comes to AI adoption.

“But wherever you are today, I can assure you that this is truly an opportunity for you to create an AI that knows about your business and builds on your internal experience. This is a journey that many of you have taken with us over the years and it has made open source what it is – the world’s best innovation model for software,” he added.
