Red Hat OpenShift AI expands predictive & generative AI 

This blog post is part of our Essential Guide: Red Hat Summit 2024 news and conference guide

If there are three "-isation"-heavy terms that describe the current shape of functional cloud computing deployments, they might be compartmentalisation, individualisation and customisation, all in the face of systems of virtualisation. Okay, so that's four, but who's counting? We could add containerisation too if we wanted to speak directly to the breadth of Kubernetes orchestration practice under way today.

From on-premises datacentres to multiple public clouds to the edge, Red Hat OpenShift AI has picked individualisation as its descriptor of choice, helping enterprises match their computing resources to the intelligent workload boom now under way as a result of the deployment of Artificial Intelligence (AI).

The company has now announced advances in Red Hat OpenShift AI, an open hybrid artificial intelligence (AI) and machine learning (ML) platform built on Red Hat OpenShift and designed to enable users to create and deliver AI-enabled applications at scale across hybrid clouds.

Red Hat’s AI vision

Red Hat says its vision for AI is to bring customer choice to the world of intelligent workloads, from the underlying hardware to the services and tools used to build on the platform. This choice matters because it gives businesses the capacity to layer AI into daily business operations through an adaptable open source platform that supports both predictive and generative models, with or without the use of cloud environments.

We know that organisations in all industry verticals face many challenges when moving AI models from experimentation into production. It is never easy to shoulder the burden of increased hardware costs, wider data privacy concerns and individuals, teams or departments who are reluctant to share their data with SaaS-based models. Generative AI (gen-AI) is changing rapidly, and many organisations are struggling to establish a reliable core AI platform that can run on-premises or in the cloud.

AI is not if, it’s when

“Bringing AI into the enterprise is no longer an ‘if,’ it’s a matter of ‘when.’ Enterprises need a more reliable, consistent and flexible AI platform that can increase productivity, drive revenue and fuel market differentiation. Red Hat’s answer for the demands of enterprise AI at scale is Red Hat OpenShift AI, making it possible for IT leaders to deploy intelligent applications anywhere across the hybrid cloud while growing and fine-tuning operations and models as needed to support the realities of production applications and services,” said Ashesh Badani, chief product officer and senior vice president, Red Hat.

Badani suggests that Red Hat's AI strategy enables flexibility across the hybrid cloud, gives customers the ability to enhance pre-trained or curated foundation models with their own data and offers the freedom to enable a variety of hardware and software accelerators. Red Hat OpenShift AI's new and enhanced features deliver on these needs through access to the latest AI/ML innovations and support from an expansive AI-centric partner ecosystem.

The latest version of the platform delivers technology functions including model serving at the compute edge, which extends the deployment of AI models to remote locations using single-node OpenShift. It provides inferencing capabilities in resource-constrained environments with intermittent or air-gapped network access. This technology preview feature gives organisations a scalable, consistent operational experience from core to cloud to edge and includes out-of-the-box observability.

Also here we find enhanced model serving, with the ability to use multiple model servers to support both predictive and gen-AI models, including support for KServe, a Kubernetes custom resource definition that orchestrates serving for all types of models.
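To make the KServe mention concrete: on a KServe-enabled cluster, a model is exposed by declaring an InferenceService custom resource, which KServe reconciles into a serving deployment and endpoint. The sketch below is a minimal, assumed example — the resource name, model format and storage URI are illustrative placeholders, not anything specific to Red Hat OpenShift AI.

```yaml
# Minimal KServe InferenceService (illustrative; name and URI are placeholders)
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: sklearn-iris                 # placeholder model name
spec:
  predictor:
    model:
      modelFormat:
        name: sklearn                # KServe selects a matching serving runtime
      storageUri: "gs://example-bucket/models/sklearn/iris"  # placeholder location
```

The same custom resource definition covers both predictive and generative model types; the `modelFormat` simply points KServe at a different model server runtime.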

Let’s also make note of distributed workloads with Ray (the open source unified compute framework used to scale AI and Python workloads — from reinforcement learning to deep learning to tuning) using CodeFlare and KubeRay, which uses multiple cluster nodes for faster, more efficient data processing and model training.

As stated, Ray is a framework for accelerating AI workloads and KubeRay helps manage these workloads on Kubernetes. CodeFlare is central to Red Hat OpenShift AI’s distributed workload capabilities, providing a user-friendly framework that helps simplify task orchestration and monitoring. The central queuing and management capabilities enable optimal node utilisation and allow the allocation of resources, such as GPUs, to the right users and workloads.
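As a rough illustration of how KubeRay expresses a Ray cluster as Kubernetes resources, the manifest below sketches a head node plus a worker group that can scale across multiple cluster nodes. It is a minimal, assumed example (image tag, group name and replica counts are placeholders), not the configuration Red Hat OpenShift AI itself generates.

```yaml
# Minimal KubeRay RayCluster (illustrative placeholders throughout)
apiVersion: ray.io/v1
kind: RayCluster
metadata:
  name: example-raycluster            # placeholder cluster name
spec:
  headGroupSpec:
    rayStartParams:
      dashboard-host: "0.0.0.0"
    template:
      spec:
        containers:
        - name: ray-head
          image: rayproject/ray:2.9.0 # placeholder image tag
  workerGroupSpecs:
  - groupName: workers                # placeholder group name
    replicas: 2
    minReplicas: 1
    maxReplicas: 4                    # workers spread across cluster nodes
    rayStartParams: {}
    template:
      spec:
        containers:
        - name: ray-worker
          image: rayproject/ray:2.9.0
```

CodeFlare sits above this layer, submitting and queuing workloads against such clusters so that nodes and accelerators such as GPUs are allocated to the right users and jobs.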

According to IDC, to successfully exploit AI, enterprises will need to modernise many existing applications and data environments, break down barriers between existing systems and storage platforms, improve infrastructure sustainability and carefully choose where to deploy different workloads across cloud, datacentre and edge locations. 

Red Hat says this shows that AI platforms must provide flexibility in order to support enterprises as they progress through their AI adoption journey and their needs and resources evolve.