Super X AI Technology Limited announced the launch of its latest flagship product, the SuperX XN9160-B300 AI Server. Powered by NVIDIA’s Blackwell GPU (B300), the XN9160-B300 is designed to meet the growing demand for scalable, high-performance computing across AI training, machine learning (ML), and high-performance computing (HPC) workloads. Engineered for extreme performance, the system integrates advanced networking capabilities, scalable architecture, and energy-efficient design to support mission-critical data center environments.
The SuperX XN9160-B300 AI Server is purpose-built to accelerate large-scale distributed AI training and AI inference workloads, providing extreme GPU performance for intensive, high-demand applications. Optimized for GPU-accelerated tasks, it excels in foundation model training and inference, including reinforcement learning (RL), distillation techniques, and multimodal AI models, while also delivering high performance for HPC workloads such as climate modeling, drug discovery, seismic analysis, and insurance risk modeling.
Designed for enterprise-scale AI and HPC environments, the XN9160-B300 combines supercomputer-level performance with energy-efficient, scalable architecture, offering mission-critical capabilities in a compact, data-center-ready form factor.
The launch of the SuperX XN9160-B300 AI server marks a significant milestone in SuperX’s AI infrastructure roadmap, delivering powerful GPU instances and compute capabilities to accelerate global AI innovation.
XN9160-B300 AI Server
The SuperX XN9160-B300 AI Server delivers extreme AI compute performance in an 8U chassis, featuring Intel Xeon 6 processors, 8 NVIDIA Blackwell B300 GPUs, up to 32 DDR5 DIMMs, and high-speed networking with up to 8 × 800 Gb/s InfiniBand.
High GPU Power and Memory
The XN9160-B300 is built as a highly scalable AI node, featuring the NVIDIA HGX B300 module housing 8 NVIDIA Blackwell B300 GPUs. This configuration provides the peak performance of the Blackwell generation, specifically designed for next-era AI workloads.
Crucially, the server delivers a massive 2,304 GB of unified HBM3E memory across its 8 GPUs (288 GB per GPU). This colossal memory pool is essential for eliminating memory offloading, supporting larger model residence, and managing the expansive Key/Value caches required for high-concurrency, long-context Generative AI and Large Language Models.
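To illustrate why KV-cache capacity matters at this scale, the sketch below estimates Key/Value cache size from standard transformer dimensions. The model parameters (80 layers, 8 grouped-query KV heads, head dimension 128, bf16 precision) are illustrative assumptions for a 70B-class model, not figures from the announcement; the formula itself is the standard per-token KV accounting.

```python
def kv_cache_bytes(num_layers: int, num_kv_heads: int, head_dim: int,
                   seq_len: int, dtype_bytes: int = 2) -> int:
    """Bytes of KV cache for one sequence: 2 tensors (K and V) per layer,
    each of shape [num_kv_heads, seq_len, head_dim]."""
    return 2 * num_layers * num_kv_heads * head_dim * dtype_bytes * seq_len

# Hypothetical 70B-class model (illustrative figures, not vendor specs):
# 80 layers, 8 grouped-query KV heads, head_dim 128, bf16 (2 bytes/element)
per_seq = kv_cache_bytes(80, 8, 128, seq_len=128 * 1024)
print(f"KV cache per 128K-token sequence: {per_seq / 2**30:.1f} GiB")

# Rough upper bound on concurrent long-context sequences, treating the
# server's 2,304 GB of HBM3E as GiB and ignoring model weights/activations.
hbm_total = 2304 * 2**30
print(f"Sequences that fit (cache only): {hbm_total // per_seq}")
```

Under these assumptions a single 128K-token sequence consumes about 40 GiB of cache, so a large unified memory pool directly determines how many long-context requests can be served concurrently without offloading.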
SOURCE: PRNewswire