Meta and AMD Partner on Long-Term AI Infrastructure Agreement

Meta Platforms has announced a multi-year strategic agreement with Advanced Micro Devices (AMD) to support its next phase of AI expansion, securing up to 6 gigawatts of AMD Instinct GPU capacity to power large-scale AI workloads.

The partnership is aimed at strengthening Meta’s compute backbone as the company accelerates development of advanced AI systems and its vision of delivering “personal superintelligence” at global scale. The collaboration extends beyond chip supply, aligning both firms’ roadmaps across silicon design, systems architecture, and software integration to enable faster deployment of optimized AI infrastructure.

AMD will provide Instinct GPUs, EPYC CPUs, and rack-scale AI systems, supporting one of the industry’s largest planned AI compute rollouts. Initial deployments are expected to begin in the second half of 2026, built on Meta’s Helios rack-scale architecture, which was developed in collaboration with AMD and introduced at the Open Compute Project Global Summit.

AMD Chair and CEO Lisa Su said the expanded collaboration positions AMD at the center of global AI infrastructure growth, emphasizing the companies’ joint focus on delivering high-performance, energy-efficient compute systems tailored to Meta’s large-scale workloads.

Meta CEO Mark Zuckerberg described the agreement as a key step in diversifying the company’s compute ecosystem, noting that AMD is expected to remain a long-term partner as Meta scales inference and training capacity.

The deal forms part of Meta’s broader Meta Compute initiative, which combines third-party hardware partnerships with the company’s in-house Meta Training and Inference Accelerator (MTIA) silicon program. By adopting a portfolio-based infrastructure strategy, Meta aims to build a more resilient and flexible AI stack capable of supporting rapid model growth and global deployment.

The agreement underscores intensifying competition among hyperscalers to secure advanced semiconductor capacity as demand for AI training and inference hardware continues to surge worldwide.

SOURCE: Meta