Nebulon upgrades SPU to Medusa2 with Nvidia DPU hardware
Nebulon aims to replace HCI with its OS on an offload card that it claims reduces server resource use by 25%. Medusa2 uses Nvidia DPU hardware and its own cloud control plane.
A card in your servers that can cut server resource usage by 25%. That’s the equivalent of buying three servers instead of four. And all done by offloading networking, storage and data services to a DPU-based PCIe card.
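The arithmetic behind "three instead of four" is simple enough to check. Here is a minimal sketch that takes the 25% figure as the vendor's claim rather than a measurement; the function name is purely illustrative:

```python
# Back-of-envelope check of the "three servers instead of four" claim,
# assuming networking, storage and data services would otherwise consume
# ~25% of each server's resources (the vendor's figure, not a measurement).
import math

def servers_needed(app_servers: int, infra_overhead: float) -> int:
    """Servers required when each one gives only (1 - infra_overhead)
    of its capacity to the actual application workload."""
    return math.ceil(app_servers / (1 - infra_overhead))

print(servers_needed(3, 0.25))  # without offload: 4 servers
print(servers_needed(3, 0.0))   # with the services offloaded to the card: 3
```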
That’s the promise from Nebulon, which sees its Services Processing Unit (SPU) revamped as the Medusa2 with new hardware based on an Nvidia DPU and Nebulon’s upgraded nebOS operating system.
The Medusa2 is specifiable in servers from OEM partners and is targeted at customers that want to transition from existing three-tier infrastructures, or are looking at edge and artificial intelligence (AI) deployments.
Core to the claimed benefits of the Medusa2 SPU is that it offloads CPU-hungry networking, storage and data services, including ransomware detection, to a discrete hardware component and away from the server.
The Medusa2 is based on an Nvidia BlueField-3 DPU with Arm processor cores, and each card has 48GB of DDR5 memory. It connects to servers via PCIe Gen5 slots and provides up to 200Gbps throughput, plus NVMe, SAS and SATA storage connectivity.
Craig Nunes, chief operating officer at Nebulon, spelled out the benefits of offloading functionality in VMware environments.
“If you’re using vSAN and you turn on compression, dedupe, or encryption, straight away you’ll experience a 20% overhead on server CPUs,” said Nunes. “And with NSX, you’ll get a similar overhead on networking.
“Similarly, with hyper-converged infrastructure [HCI], you’ll lose one server for every four just to run the HCI.”
Nunes said what Nebulon offers is a replacement for HCI, with storage, networking and data services moved off to its cards. But it’s not a DPU, he said.
“DPUs are well deployed across the hyperscalers, but have been slow to gain wider adoption,” said Nunes. “That’s because they’re hard to deploy with the existing customer software stack. Nebulon has built the software stack that can, and we call it the DPU stack. We’re still branded as an SPU, but built on underlying hardware that is the BlueField-3 DPU.”
What Nebulon offers is essentially a hardware offload card that takes processing overhead away from the server CPU. It integrates with VMware vSphere, Microsoft Windows Server/Hyper-V and Linux/KVM environments, and provides Kubernetes CSI drivers for containerised workloads.
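For containerised workloads, consuming the offloaded storage would look like any other CSI-backed volume. A minimal sketch using the open-source kubernetes Python client, where the StorageClass name "nebulon-spu" is a hypothetical stand-in for whatever class Nebulon's CSI driver actually registers:

```python
# Sketch: request a volume from a hypothetical Nebulon-backed StorageClass
# via the standard Kubernetes API. Requires the "kubernetes" Python package
# and a configured kubeconfig; the class name "nebulon-spu" is assumed.
from kubernetes import client, config


def request_offloaded_volume(namespace: str = "default") -> None:
    """Create a PVC that the CSI driver would satisfy from the DPU card."""
    config.load_kube_config()  # use the local kubeconfig context

    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="app-data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            storage_class_name="nebulon-spu",  # hypothetical class name
            resources=client.V1ResourceRequirements(
                requests={"storage": "100Gi"}
            ),
        ),
    )
    client.CoreV1Api().create_namespaced_persistent_volume_claim(
        namespace=namespace, body=pvc
    )


if __name__ == "__main__":
    request_offloaded_volume()
```

The point is that the offload stays transparent to the application, which simply requests a volume as usual while the card handles the data services behind it.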
Via its cloud-based control plane, Nebulon carries out monitoring, maintenance and software updates across the deployed fleet of cards.
Within the Nebulon software stack is a so-called secure enclave that keeps potential ransomware threats in the application and OS domain separate from Nebulon ON cloud traffic.
Nebulon’s three key target markets are:
- Customers that want to transition from existing three-tier configurations to modern infrastructure. No server software is needed, just the card, which puts Nebulon in direct competition with HCI suppliers.
- Edge deployments, where the reduced hardware footprint, cloud-based management and built-in security suit remote sites.
- AI infrastructure, where Nunes said servers with Nebulon components are well suited to the builds and teardowns typical of transient workloads.