CoreWeave, the AI Hyperscaler™, announced that Cohere, IBM and Mistral AI are the first customers to gain access to NVIDIA GB200 NVL72 rack-scale systems and CoreWeave’s full stack of cloud services – a combination designed to advance AI model development and deployment.
AI innovators across enterprises and other organizations now have access to advanced networking and NVIDIA Grace Blackwell Superchips purpose-built for reasoning and agentic AI, underscoring CoreWeave’s consistent record of being among the first to market with advanced AI cloud solutions.
“CoreWeave is built to move faster – and time and again, we’ve proven it by being first to operationalize the most advanced systems at scale,” said Michael Intrator, Co-Founder and Chief Executive Officer of CoreWeave. “Today is a testament to our engineering prowess and velocity, as well as our relentless focus on enabling the next generation of AI. We are thrilled to see visionary companies already achieving new breakthroughs on our platform. By delivering the most advanced compute resources at scale, CoreWeave empowers enterprise and AI lab customers to innovate faster and deploy AI solutions that were once out of reach.”
“Enterprises and organizations around the world are racing to turn reasoning models into agentic AI applications that will transform the way people work and play,” said Ian Buck, vice president of Hyperscale and HPC at NVIDIA. “CoreWeave’s rapid deployment of NVIDIA GB200 systems delivers the AI infrastructure and software that are making AI factories a reality.”
SOURCE: PRNewswire