The ephemeral stack - LGN: Composable datacentres for ‘edge’ AI

This is a guest post for the Computer Weekly Developer Network written by Daniel Warner in his capacity as CEO and co-founder of LGN — a company that develops artificial perception technology used to balance and automatically connect an AI system to a sensor array.

Warner writes as follows…

Driven by clear ROI and growing consumer demand for intelligent devices, edge AI will rapidly proliferate across the economy over the next few years.

Edge AI capability will engulf entire industries, from healthcare to agriculture, a paradigm shift likely to drive explosive growth in edge AI deployments. The International Data Corporation predicts an 800% increase in the number of AI applications operating at the edge by 2024, compared with today.

However, while increased demand for edge AI is certain, actually deploying edge AI at scale is one of today's most critical business challenges. Paradoxically, while edge AI promises to take computing back from the cloud, deploying devices at scale is a challenge that the cloud, and its underlying datacentres, remains vital to solving.

Even though edge devices promise to move inference away from the cloud, monitoring and training them will largely remain beyond the capability of the devices themselves. Regardless of advances in device hardware, making sense of the near-infinite complexity of the real world is a computing problem that requires vast resources available only in datacentre environments. As a result, one of the most vital factors in making edge AI work is having the datacentre infrastructure to back up deployments.
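To make that division of labour concrete, here is a minimal sketch of the pattern: inference stays on the device, while raw inputs and predictions are batched back to datacentre infrastructure for monitoring and training. The endpoint URL, payload shape and function names are illustrative assumptions, not any real device API.

```python
# Sketch only: inference runs on-device; telemetry is shipped to the
# datacentre, where monitoring and retraining happen. All names here
# are hypothetical.
import json
import time
import urllib.request

TELEMETRY_ENDPOINT = "https://datacentre.example.com/telemetry"  # hypothetical

def read_sensor():
    # Stand-in for a real sensor read (camera frame, lidar sweep, etc.)
    return {"signal": 0.42}

def run_local_inference(sample):
    # Stand-in for an on-device model, e.g. a quantised network executed
    # by an embedded runtime; only the prediction is needed locally.
    return {"label": "ok", "confidence": 0.97}

def ship_telemetry(batch):
    # The heavy lifting (drift monitoring, retraining) happens off-device,
    # so the edge side only buffers and forwards.
    request = urllib.request.Request(
        TELEMETRY_ENDPOINT,
        data=json.dumps(batch).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)

buffer = []
while True:
    sample = read_sensor()
    prediction = run_local_inference(sample)
    buffer.append({"input": sample, "output": prediction, "ts": time.time()})
    if len(buffer) >= 64:  # batch uploads to respect constrained uplinks
        ship_telemetry(buffer)
        buffer.clear()
```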

Composing edge datacentres 

Unfortunately, while edge AI will see demand for datacentre compute skyrocket, dealing with the strain of receiving, processing and returning data from vast numbers of independent devices is something most datacentres are ill-equipped to handle.

Providing training and monitoring for at-scale deployments of on-device AI is an unfamiliar task for datacentres typically set up for more predictable web or mobile workloads. Unlike data for AI that runs in the cloud, edge AI data returns from real-world sensors in a near-limitless variety of input types and quality levels.
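As a rough illustration of that variety, assuming invented payload shapes and field names, a datacentre-side pipeline has to reconcile readings like these into a single schema before it can monitor or retrain against them:

```python
# Illustrative only: the payload shapes and field names are assumptions.
# Real-world sensors report the "same" information in incompatible ways,
# so edge telemetry must be normalised before centralised training.
from dataclasses import dataclass
from typing import Any

@dataclass
class Reading:
    device_id: str
    kind: str       # e.g. "thermometer", "camera"
    value: Any
    quality: float  # 0.0-1.0, sensor-reported confidence in the reading

def normalise(raw: dict) -> Reading:
    # Two hypothetical firmware formats from the same fleet.
    if "temp_c" in raw:
        return Reading(raw["id"], "thermometer", raw["temp_c"], raw.get("q", 1.0))
    if "frame" in raw:
        return Reading(raw["id"], "camera", raw["frame"], raw.get("blur", 1.0))
    raise ValueError(f"unrecognised payload keys: {sorted(raw)}")

print(normalise({"id": "dev-1", "temp_c": 21.5, "q": 0.9}))
print(normalise({"id": "dev-2", "frame": b"\x00\x01", "blur": 0.7}))
```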

With IoT devices estimated to generate 73 zettabytes (ZB) of data annually by 2025 (almost twice as much data as currently exists on the entire internet), the scale of this challenge cannot be overstated. In response, an ideal but unrealistic solution would be to build a whole new array of datacentres tooled explicitly for training and monitoring edge AI. A far more cost-effective alternative is to reuse what's already available by making today's datacentres composable.

On one level, composing datacentres for edge AI presents a hardware challenge. However, as well as requiring physical infrastructure to be reconfigurable, composable centres will require the same software infrastructure that currently runs within datacentres to function across different orchestrations and environments. Ultimately, the same ecosystem of tools needs to be capable of running applications both on the cloud and at the edge.
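One way to picture that requirement, as a sketch rather than any particular product's design, is a single application that takes its environment from configuration, so the identical code runs under a cloud orchestrator or on a constrained edge node. The variable name and storage paths below are assumptions for illustration.

```python
# Minimal sketch of "one stack, two environments".
import os

ENVIRONMENT = os.environ.get("DEPLOY_ENV", "cloud")  # "cloud" or "edge"

def storage_target() -> str:
    # The same code path picks datacentre object storage or a small local
    # buffer, so the tool composes into either orchestration unchanged.
    if ENVIRONMENT == "edge":
        return "/var/lib/app/buffer"       # constrained local disk
    return "s3://training-data/incoming"   # hypothetical bucket

print(f"running in {ENVIRONMENT}, writing to {storage_target()}")
```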

Achieving genuine composability

To create an edge AI-ready composable datacentre, existing tool stacks will need to be stretched to accommodate new data types and running environments. For example, monitoring tools like Prometheus and Grafana will need to be capable of deploying to both cloud and edge environments. Similarly, event streaming platforms like Kafka will have to cope with a vastly increased variety of data as they carry it from the edge to the cloud. Achieving this kind of capability ultimately means extending Kubernetes to the edge.
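As a small example of what that stretching looks like in practice, the snippet below uses the standard Prometheus Python client to expose model metrics; the instrumentation is identical whether the process runs on an edge node or in the cloud, with only the scrape configuration differing. The metric names are illustrative assumptions, not an established standard.

```python
# Sketch: identical Prometheus instrumentation on edge and cloud nodes.
import random
import time
from prometheus_client import Counter, Gauge, start_http_server

INFERENCE_LATENCY = Gauge(
    "edge_model_inference_latency_seconds",
    "Latency of the most recent on-device inference",
)
PREDICTIONS = Counter(
    "edge_model_predictions_total",
    "Predictions served by this device since start-up",
)

start_http_server(9100)  # Prometheus scrapes this port, wherever the node is

while True:
    start = time.monotonic()
    time.sleep(random.uniform(0.01, 0.05))  # stand-in for model execution
    INFERENCE_LATENCY.set(time.monotonic() - start)
    PREDICTIONS.inc()
```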

Warner: Composable datacentres are a theoretical solution (for edge AI) but making them work requires a framework designed for this kind of functionality.

However, the standard Kubernetes deployments that work in cloud environments are often a poor fit for what happens at the edge, and resource constraints on edge hardware can make a full Kubernetes stack impractical. Solving this challenge, and bridging the gap between the edge and the cloud, is what LGN's Neuroform framework sets out to do. Neuroform is an open-source, cloud-native framework for orchestrating fleet-scale edge AI. Spanning the gamut of edge AI operation, from deployment to monitoring and from storage to orchestration, Neuroform creates a framework for integrating cloud-based Kubernetes into edge environments.
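Neuroform's internals aren't reproduced here, but the general idea of one control plane addressing both environments can be sketched with the official Kubernetes Python client. The "node-role/edge" label below is an assumed convention for this example, not something taken from the framework.

```python
# Generic sketch using the official Kubernetes Python client; the edge
# label is a hypothetical convention, not Neuroform's API.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

# Once edge nodes join the same control plane as cloud nodes, standard
# tooling can enumerate and target both.
edge_nodes = v1.list_node(label_selector="node-role/edge=true")
for node in edge_nodes.items:
    conditions = {c.type: c.status for c in node.status.conditions}
    print(node.metadata.name, "Ready:", conditions.get("Ready"))
```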

Joining edge to cloud

As leveraging edge AI becomes a business-critical goal for organisations, merging cloud capabilities with the requirements of machine learning that happens at the edge will become a vital task for datacentre providers. Composable datacentres provide a theoretical solution, but making them work in reality requires a framework designed for this kind of functionality.

Neuroform is that framework: a modular ecosystem optimised for fleet-scale machine learning model deployment without sacrificing visibility.