Meta Platforms and NVIDIA have announced a multiyear, multigenerational strategic partnership to build out Meta’s next-generation AI infrastructure. The buildout will deploy NVIDIA CPUs, networking technologies and millions of Blackwell and Rubin GPUs to support training, inference and core services across hyperscale data centers, with the goal of significantly improving performance per watt and operational efficiency while meeting growing AI workload demands. The collaboration spans on-premises and cloud environments, with deep hardware-software co-design across CPUs, GPUs, networking and software stacks. It also includes adoption of NVIDIA Spectrum-X Ethernet for predictable, low-latency networking and NVIDIA Confidential Computing to enable privacy-focused AI features, such as those powering WhatsApp.
“No one deploys AI at Meta’s scale – integrating frontier research with industrial-scale infrastructure to power the world’s largest personalization and recommendation systems for billions of users,” said Jensen Huang, founder and CEO of NVIDIA. “Through deep codesign across CPUs, GPUs, networking and software, we are bringing the full NVIDIA platform to Meta’s researchers and engineers as they build the foundation for the next AI frontier.”

“We’re excited to expand our partnership with NVIDIA to build leading-edge clusters using their Vera Rubin platform to deliver personal superintelligence to everyone in the world,” said Mark Zuckerberg, founder and CEO of Meta, underscoring the long-term shared vision for foundational AI infrastructure.






















