WEKA, the AI-native data platform company, previewed the industry’s first high-performance storage solution for the NVIDIA Grace™ CPU Superchip. The solution runs on a powerful new storage server from Supermicro, powered by WEKA® Data Platform software and the Arm® Neoverse™ V2 cores of the NVIDIA Grace CPU Superchip, and uses NVIDIA ConnectX-7 and NVIDIA BlueField-3 networking to accelerate enterprise AI workloads with unmatched performance density and energy efficiency.
Fueling Next-Generation AI Innovation
Today’s AI and high-performance computing (HPC) workloads demand blazing-fast data access, but most data centers face increasing space and power constraints.
NVIDIA Grace integrates the performance of a flagship x86-64 two-socket workstation or server platform into a single module. Grace CPU Superchips are powered by 144 high-performance Arm Neoverse V2 cores, delivering 2x the energy efficiency of traditional x86 servers. NVIDIA ConnectX-7 NICs and BlueField-3 SuperNICs feature purpose-built RDMA/RoCE acceleration and deliver high-throughput, low-latency network connectivity at speeds of up to 400Gb/s. Running the WEKA Data Platform’s revolutionary zero-copy software architecture on the Supermicro Petascale storage server minimizes I/O bottlenecks and reduces AI pipeline latency, improving GPU utilization and accelerating AI model training and inference. The result is dramatically faster time to first token, discovery, and insight, with lower power consumption and associated costs.
The main benefits of the solution are:
- Extreme Speed and Scalability for Enterprise AI: The NVIDIA Grace CPU Superchip, featuring 144 powerful Arm® Neoverse™ V2 cores connected by a high-performance, custom-designed NVIDIA Scalable Coherency Fabric, delivers the performance of a dual-socket x86 CPU server with half the power. NVIDIA ConnectX-7 NICs and NVIDIA BlueField-3 SuperNICs provide high-performance networking, essential for enterprise AI workloads. Combined with the WEKA Data Platform’s AI-native architecture, which accelerates time to first token by up to 10x, the solution enables optimal performance in AI data pipelines at virtually any scale.
- Optimal Resource Utilization: The powerful WEKA Data Platform, combined with the LPDDR5X memory architecture of the Grace CPUs, enables up to 1TB/s memory bandwidth and seamless data flow, eliminating bottlenecks. By integrating WEKA’s distributed architecture and kernel bypass technology, organizations can achieve faster AI model training, shorter epoch times, and higher inference speeds, making it the ideal solution to efficiently scale AI workloads.
- Exceptional Power and Space Efficiency: The WEKA Data Platform delivers 10-50x increased GPU stack efficiency to seamlessly handle large-scale AI and HPC workloads. With data copy reduction and cloud elasticity, the WEKA Platform can reduce the data infrastructure footprint by 4-7x and cut carbon emissions, avoiding up to 260 tons of CO2e per petabyte stored per year and reducing energy costs by 10x. With the Grace CPU Superchip’s doubled energy efficiency compared to leading x86 servers, customers can do more with less, achieving sustainability goals while improving AI performance.
“AI is transforming the way businesses around the world innovate, create, and operate, but the surge in its adoption has dramatically increased data center energy consumption, which is expected to double by 2026, according to the International Energy Agency,” said Nilesh Patel, chief product officer at WEKA. “WEKA is excited to collaborate with NVIDIA, Arm, and Supermicro to develop high-performance, energy-efficient solutions for the next generation of data centers that power enterprise AI and high-performance workloads while accelerating the processing of massive data and reducing time to actionable insights.”
“WEKA has teamed up with Supermicro to develop a high-performance storage solution that seamlessly integrates with the NVIDIA Grace CPU Superchip to improve the efficiency of large-scale, data-intensive AI workloads. The solution provides fast data access while reducing power consumption, enabling data-driven organizations to significantly boost their AI infrastructure,” said Ivan Goldwasser, director of data center CPUs at NVIDIA.
“Supermicro’s upcoming ARS-121L-NE316R Petascale storage server is the first storage-optimized server to leverage the NVIDIA Grace CPU Superchip,” said Patrick Chiu, senior director of Storage Product Management at Supermicro. “The system design features 16 high-performance Gen5 E3.S NVMe SSD bays along with three PCIe Gen 5 networking slots, which support up to two NVIDIA ConnectX-7 or BlueField-3 SuperNIC network adapters and one OCP 3.0 network adapter. The system is ideal for high-performance storage workloads such as AI, data analytics, and hyperscale cloud applications. Our collaboration with NVIDIA and WEKA has resulted in a data platform that enables customers to make their data centers more energy efficient while adding new AI processing capabilities.”
“AI innovation requires a new approach to silicon and system design that balances performance with energy efficiency. Arm is proud to collaborate with NVIDIA, WEKA, and Supermicro to deliver a high-performance enterprise AI solution that offers exceptional value and uncompromising energy efficiency,” said David Lecomber, director of HPC at Arm.
SOURCE: PR Newswire