
CXL in the datacentre: Boosting memory for hungry workloads

We look at CXL, how it revolutionises connectivity between memory components, potentially saves energy, and adds oomph to heavily memory-dependent workloads such as AI analytics

Compute Express Link, otherwise known as CXL, is set to revolutionise the datacentre. So, what is it and what are the benefits? 

Memory management is key to datacentres making the best use of the memory they have, especially as they pivot towards increasingly data-intensive workloads, such as data analytics, machine learning and artificial intelligence (AI).

These tasks demand significant memory and processing capacity, and can consume massive amounts of power. Storage-class memory also struggles to meet the increasing demand to share data across multiple processing nodes.

However, Compute Express Link (CXL) is set to revolutionise how datacentres and server farms operate, potentially reducing energy usage and other costs.

What is CXL?

CXL allows PCIe-based components, such as graphics processing units (GPUs), dynamic random access memory (DRAM) and solid-state drives (SSDs), to operate as direct peers to the central processing unit (CPU). This helps to close the growing gap between processor speed and memory bandwidth, allowing components to work together efficiently.

Until recently, any system that included multiple types of processor (CPU, GPU, and so on) had to divide its memory into separate spaces (system memory, GPU memory, and so on). Any data sharing between components was conducted as an I/O transfer, which is generally a slower way to update memory than using the memory channel.

CXL enables pools of memory to be created for working datasets, offering far greater capacity than was previously possible. Rather than each processor relying on its own cache, shared memory pools are made available through CXL, which streamlines frequent data sharing and communication between processors. In this way, CXL has the potential to fundamentally shift the network architecture of datacentres by offering a low-latency, scalable interconnect technology.
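On Linux, for example, a CXL-attached memory expander typically shows up as a CPU-less NUMA node, so pooled capacity can be reached with ordinary memory allocation calls rather than block I/O. The minimal sketch below illustrates the idea using libnuma; the node number (1) is an assumption and will differ between systems.

/* Minimal sketch: place a working set on a CXL-backed NUMA node.
 * Assumes the CXL memory expander is exposed as NUMA node 1 (this
 * varies by system) and that libnuma is installed (link with -lnuma). */
#include <numa.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return EXIT_FAILURE;
    }

    int cxl_node = 1;                      /* assumed CXL-backed node */
    size_t size = 1UL << 30;               /* 1 GiB working set */

    long long free_bytes = 0;
    long long node_bytes = numa_node_size64(cxl_node, &free_bytes);
    printf("Node %d: %lld bytes total, %lld free\n",
           cxl_node, node_bytes, free_bytes);

    /* Allocate the buffer directly on the CXL node rather than local DRAM. */
    char *buf = numa_alloc_onnode(size, cxl_node);
    if (buf == NULL) {
        fprintf(stderr, "Allocation on node %d failed\n", cxl_node);
        return EXIT_FAILURE;
    }

    memset(buf, 0, size);                  /* touch pages so they are placed */
    /* ... run the memory-hungry workload against buf ... */

    numa_free(buf, size);
    return EXIT_SUCCESS;
}

The key point is that the CXL-attached capacity is used through ordinary load and store instructions, so existing NUMA-aware software needs little or no change.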

Since its launch in 2019, CXL has evolved to offer greater functionality. In August 2022, CXL 3.0 was released, bringing increased scalability and optimised system-level flows, as well as peer-to-peer communication and resource sharing across a variety of compute domains.

The CXL Consortium

Rather than being developed by a single company as a proprietary system, CXL has been developed collaboratively by a group of organisations called the CXL Consortium.

This open industry standards group originally comprised only four organisations, but now has more than 300 members, including IBM, Intel and AMD, with more companies still joining.

As CXL is a joint development project, no company has sole ownership of the underlying technology.

The benefits of open standards

Developing CXL as an open standard means it is accessible and usable by anyone. A common prerequisite of open standards is an accompanying open licence, which provides for future development and expansion. Although CXL has been developed solely by the CXL Consortium, that development has been conducted openly and transparently.

Because CXL has been developed as an open standard, it avoids lock-in to proprietary technologies and is not bound to any one device or manufacturer. CXL therefore allows the products of multiple suppliers to be connected directly to processors in CPU, GPU and DPU forms, removing the need for proprietary memory and storage-class memory (SCM).

The open nature of CXL means it can become a universal interconnect for memory. Its device-agnostic approach makes it flexible and future-proof. This is especially attractive for datacentre managers, as it expands their purchasing and scaling options.

Ideal use cases

For memory-hungry applications, CXL offers large memory availability at a comparatively low cost. It also enables the system to dynamically determine which applications should get a performance boost and which should not.
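One way this plays out in practice is memory tiering: hot pages stay in local DRAM, while colder pages are demoted to the larger, slightly slower CXL tier. The kernel can do this automatically, but the sketch below shows the idea explicitly using libnuma page migration; the node numbers (0 for local DRAM, 1 for the CXL tier) and the list of "cold" pages are illustrative assumptions only.

/* Minimal sketch: demote a set of "cold" pages from local DRAM (node 0)
 * to a CXL-backed node (node 1) using libnuma page migration.
 * Node numbers are assumptions; real tiering is usually driven by
 * access statistics or left to the kernel. Link with -lnuma. */
#include <numa.h>
#include <numaif.h>   /* MPOL_MF_MOVE */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define NPAGES 4

int main(void)
{
    if (numa_available() < 0)
        return EXIT_FAILURE;

    long page_size = sysconf(_SC_PAGESIZE);
    int cxl_node = 1;                        /* assumed CXL-backed node */

    /* Allocate a few pages in local DRAM (node 0) and touch them. */
    char *buf = numa_alloc_onnode(NPAGES * page_size, 0);
    if (buf == NULL)
        return EXIT_FAILURE;
    memset(buf, 0, NPAGES * page_size);

    /* Build the per-page migration request. */
    void *pages[NPAGES];
    int nodes[NPAGES];
    int status[NPAGES];
    for (int i = 0; i < NPAGES; i++) {
        pages[i] = buf + (size_t)i * page_size;
        nodes[i] = cxl_node;                 /* demote to the CXL tier */
    }

    /* pid 0 means "this process"; status reports where each page landed. */
    if (numa_move_pages(0, NPAGES, pages, nodes, status, MPOL_MF_MOVE) < 0)
        perror("numa_move_pages");

    for (int i = 0; i < NPAGES; i++)
        printf("page %d now on node %d\n", i, status[i]);

    numa_free(buf, NPAGES * page_size);
    return EXIT_SUCCESS;
}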

Large-scale datacentres and server farms that require rapid scaling are the organisations most likely to benefit from CXL technology, as these are the companies that need the greatest processing power. Smaller datacentres, server farms and other organisations processing significant amounts of data may also find CXL advantageous, but the benefits will be less pronounced.

One company using CXL is Numascale, which became a contributor to the CXL Consortium in 2021, bringing its expertise in cache-coherent shared memory interconnect technology.

CXL can reduce server power consumption, so electricity costs will be lower. And because servers will require less cooling, the environmental impact is also lessened.

CXL technology potentially offers massive benefits to server farms and organisations that process large amounts of data. Although there will be an initial outlay to install CXL across a datacentre, improved efficiency and reduced overheads will soon become apparent.

Read more on CXL

  • Four key need-to-knows about CXL. Compute Express Link will pool multiple types of memory and so allow much higher memory capacities and the possibility of rapidly composable infrastructure to meet the needs of varied workloads.
  • How CXL 3.0 technology will affect enterprise storage. Understand CXL 3.0 technology before its impacts on storage take serious effect. While the specification improves on previous generations, it could also demand more from storage.
