Compal Unveils GPU Server with NVIDIA MGX for AI & HPC

Compal Electronics unveiled three new server platforms—SX420-2A, SX220-1N, and SX224-2A. All are built on the NVIDIA MGX architecture and designed to deliver powerful performance for enterprise-level AI, HPC, and high-load computing applications.

Discover Compal’s flagship servers, the SX420-2A and SX224-2A, powered by the industry-leading NVIDIA RTX PRO™ 6000 Blackwell Server Edition

  • The SX420-2A 4U AI Server, making its debut, is engineered on the NVIDIA MGX architecture and optimized for demanding AI-HPC applications. It features a flexible rack design that supports both EIA 19" and ORV3 21" configurations, and it can be configured with up to 8 RTX PRO 6000 Blackwell GPUs—significantly boosting data center compute performance and resource utilization.
  • The SX224-2A 2U AI Server integrates the NVIDIA MGX architecture with an AMD x86 platform, offering highly flexible configuration options. It is engineered for future compatibility and optimized for diverse computing workloads—whether AI-HPC or AI-Graphics—allowing for tailored performance adjustments.

The NVIDIA RTX PRO™ 6000 Blackwell Server Edition GPU delivers top-tier performance: equipped with 96GB of ultra-fast GDDR7 memory and a passively cooled thermal design, it ensures stable operation under extreme loads, providing revolutionary acceleration for agentic and physical AI as well as scientific computing, graphics, and video applications. Combined with this exceptional GPU performance, the SX420-2A and SX224-2A rank among the most competitive solutions in the industry.

SX220-1N: Scalable AI/HPC and Future-Proof Flexible Computing Platforms

  • The SX220-1N 2U AI Server is designed for large-scale AI and HPC applications, featuring the NVIDIA GH200 Grace Hopper™ Superchip and employing NVIDIA NVLink-C2C technology to deliver a coherent memory pool. This enables faster memory speeds and massive bandwidth to tackle large-scale computational tasks.

SOURCE: PRNewswire