CoreWeave, Inc., The Essential Cloud for AI™, announced an expansion of its purpose-built AI cloud platform at NVIDIA’s GTC conference. The expansion brings NVIDIA HGX B300 to the CoreWeave Cloud, unlocking a new generation of performance for AI workloads, alongside new Weights & Biases capabilities that streamline reinforcement learning (RL) and agent development workflows.
As the industry shifts from large-scale training to iterative RL, infrastructure requirements are rapidly evolving. By combining the latest hardware with its AI-optimized cloud services and advanced development workflows, CoreWeave is closing the gap between training and production to support the next generation of self-improving agents and physical AI workloads.
“The next phase of AI is being defined by how efficiently AI systems can run and scale in production,” said Michael Intrator, CEO, co-founder, and chairman at CoreWeave. “By pairing the massive compute power of NVIDIA’s latest hardware with CoreWeave’s cloud services, we’re enabling enterprises to build and refine autonomous agents faster and more reliably than ever before. This expansion reinforces our position as the essential partner for any organization navigating the complexities of frontier-scale AI.”
“The era of AI is shifting from training models to operating agents at scale,” said Jensen Huang, founder and CEO, NVIDIA. “CoreWeave is a world-class new generation AI-Native cloud. We are thrilled to partner with them to build out NVIDIA computing infrastructure to power the world’s AI.”
Purpose-Built for Agentic AI: NVIDIA HGX B300
CoreWeave Cloud unlocks NVIDIA HGX B300 performance at scale, marking a major step forward for frontier and agentic workloads. With the general availability of NVIDIA HGX B300, part of the NVIDIA Blackwell Ultra platform, customers will have access to:
- Performance Leap: NVIDIA HGX B300 is designed for AI reasoning and inference with enhanced compute and increased memory.
- Massive Memory: Featuring 2.1 TB of HBM3e memory—a 50% increase over HGX B200 instances—enabling teams to run long-context inference with low latency and train models with 100B+ parameters on a single node.
- Unprecedented Bandwidth: Next-generation NVIDIA Quantum-X800 XDR InfiniBand support doubles node-to-node bandwidth, eliminating interconnect bottlenecks.
- Liquid-Cooled Reliability: Every HGX B300 server on CoreWeave Cloud uses state-of-the-art liquid cooling to eliminate thermal throttling and sustain peak performance.
“We’ve already run production workloads with CoreWeave on NVIDIA HGX B200, and that experience built real confidence in their ability to operate at scale,” said Aman Sanger, Co-Founder of Cursor, an AI-powered code editor. “We’re focused on collaborating with companies who deliver predictable performance, operational reliability, and ongoing support as our requirements evolve. As we move toward HGX B300, that proven operating model gives us confidence to focus on building more capable AI code generation systems rather than infrastructure risk.”
CoreWeave also expects to be among the first cloud providers to deploy the NVIDIA Vera Rubin NVL72 platform and NVIDIA Vera CPU rack in production in the second half of 2026. This expansion will further support large-scale inference, reasoning, and the most demanding agentic AI applications.
SOURCE: Businesswire