Supermicro Unveils New AI Systems With Cutting-Edge NVIDIA Blackwell GPUs
Supermicro, a Total IT Solution Provider, has unveiled its latest Artificial Intelligence (AI) systems, designed to meet the demands of large-scale generative AI projects. These systems integrate NVIDIA's newest data center products, including the NVIDIA GB200 Grace Blackwell Superchip and the NVIDIA B200 and B100 Tensor Core GPUs. Supermicro's existing NVIDIA HGX H100/H200 8-GPU systems are also being made compatible with the new NVIDIA HGX B100 8-GPU and B200 configurations, streamlining delivery.
Supermicro is also expanding its portfolio with new offerings, notably the 4U NVIDIA HGX B200 8-GPU liquid-cooled system and the NVIDIA GB200 NVL72, a complete rack-level solution with 72 NVIDIA Blackwell GPUs. Built on the proven HGX and MGX system architectures, these systems are optimized for NVIDIA Blackwell GPUs. Supermicro's direct-to-chip liquid cooling supports a higher thermal design power, allowing the GPUs to run at full performance for AI training and real-time AI inference.

Supermicro's upcoming lineup will support the NVIDIA Blackwell B200 and B100 Tensor Core GPUs and will be validated for the latest NVIDIA AI Enterprise software. Configurations include NVIDIA HGX B100 8-GPU and HGX B200 8-GPU systems, SuperBlade with up to 20 B100 GPUs, and 2U Hyper with up to 3 B100 GPUs. Designed for training large foundation AI models, the NVIDIA HGX B200 8-GPU and HGX B100 8-GPU systems will support both NVIDIA Quantum-2 InfiniBand and NVIDIA Spectrum-X Ethernet, high-speed interconnects that can deliver training results three times faster than the previous generation.
In addition, Supermicro will launch new MGX systems powered by the NVIDIA GB200 Grace Blackwell Superchip. Tailored for the most demanding Large Language Model (LLM) inference workloads, these systems are expected to deliver speed-ups of up to 30 times over the NVIDIA HGX H100.
Furthermore, Supermicro will support the upcoming NVIDIA Quantum-X800 InfiniBand and Spectrum-X800 Ethernet platforms, providing the networking performance essential for AI infrastructure.
Supermicro plans to showcase these GPU systems at NVIDIA's upcoming GTC 2024 event, giving industry professionals and enthusiasts an in-depth look at its solutions for a broad spectrum of AI applications.