Building on their longstanding partnership, Google, Alphabet and NVIDIA announced new initiatives to advance AI, democratize access to AI tools, speed the development of agentic and physical AI, and transform industries including healthcare, manufacturing, and energy.
Engineers and researchers throughout Google and Alphabet are working closely with technical teams at NVIDIA to use AI and simulation to develop robots with grasping skills, reimagine drug discovery, and optimize energy grids. Employing the NVIDIA Omniverse™, NVIDIA Cosmos™ and NVIDIA Isaac™ platforms, teams from Google DeepMind, Isomorphic Labs, Intrinsic, and X’s moonshot Tapestry will discuss milestones from their respective collaborations at the GTC global AI conference.
And to power research and AI production efforts for its customers, Google Cloud will be among the first to adopt the NVIDIA GB300 and NVIDIA RTX PRO 6000 Blackwell Server Edition, announced today at GTC.
NVIDIA will be the first industry partner to adopt SynthID, a Google DeepMind AI technology that embeds digital watermarks directly into AI-generated images, audio, text or video.
“I’m proud of our ongoing and deep partnership with NVIDIA, which spans the early days of Android and our cutting-edge AI collaborations across Alphabet,” said Sundar Pichai, CEO, Google and Alphabet. “I’m really excited about the next phase of our partnership as we work together on agentic AI, robotics, and bringing the benefits of AI to more people around the world.”
“Alphabet and NVIDIA have a longstanding partnership that extends from building AI infrastructure and software to advancing the use of AI in the largest industries,” said Jensen Huang, founder and CEO of NVIDIA. “It’s a great joy to see Alphabet and NVIDIA researchers and engineers collaborate to solve incredible challenges, from drug discovery to robotics.”
Developing Responsible AI and Open Models
Google DeepMind and NVIDIA are working to build trust in generative AI through content transparency. NVIDIA will be the first external user of SynthID, which helps preserve the integrity of outputs from NVIDIA Cosmos world foundation models, available on build.nvidia.com, by safeguarding against misinformation and misattribution, all without compromising video quality.
Google DeepMind and NVIDIA also partnered to optimize Gemma, Google’s family of lightweight open models, to run on NVIDIA GPUs. The recent launch of Gemma 3 marks a significant leap forward for open innovation, and NVIDIA has played a key role in making it even more accessible for developers. Supercharged by the NVIDIA AI platform, Gemma is available as a highly optimized NVIDIA NIM microservice, harnessing the power of NVIDIA TensorRT-LLM for exceptional inference performance.
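For developers, a NIM microservice such as the optimized Gemma endpoint on build.nvidia.com is typically consumed through an OpenAI-compatible API. The sketch below illustrates that pattern; the base URL, model identifier and NVIDIA_API_KEY environment variable are assumptions drawn from the general NIM convention rather than details confirmed in this announcement.

```python
# Minimal sketch of calling a Gemma NIM microservice through its
# OpenAI-compatible endpoint. The base URL, model id, and environment
# variable below are illustrative assumptions; check build.nvidia.com
# for the exact values available to your account.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed NIM API endpoint
    api_key=os.environ["NVIDIA_API_KEY"],            # assumed env var holding your key
)

response = client.chat.completions.create(
    model="google/gemma-3-27b-it",  # hypothetical model id; use one listed in the catalog
    messages=[{"role": "user", "content": "In one sentence, what is TensorRT-LLM?"}],
    max_tokens=128,
    temperature=0.2,
)
print(response.choices[0].message.content)
```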
In addition, this deep engineering collaboration will extend to optimizing Gemini-based workloads on NVIDIA GPUs via Vertex AI. The NVIDIA Cosmos models are also now listed in the Vertex AI Model Garden.
The Age of Intelligent Robots
Intrinsic is an Alphabet company focused on making intelligently adaptive AI for robotics usable for manufacturers across industries. Today, the majority of the world’s installed industrial robots are manually programmed in expensive, time-consuming ways.
Intrinsic Flowstate is a web-based digital twin and developer environment for building and deploying production-grade AI solutions. Partnering with NVIDIA, the teams have built deeper developer workflows enabling NVIDIA’s Isaac Manipulator foundation models (FMs) to be used in both simulated and real robotic workcells, with just a few clicks. Leveraging foundation models for robotics will significantly reduce application development time and improve flexibility, with AI that can adapt effortlessly. At GTC, Intrinsic will also share an early USD streaming connection between Intrinsic Flowstate and NVIDIA Omniverse – enabling real-time visualization of robot workcells across platforms.
NVIDIA and Google DeepMind are also announcing a collaboration with Disney Research to develop “Newton,” an open-source physics engine accelerated by NVIDIA Warp and compatible with MuJoCo. Now powered by MuJoCo Warp, Newton will accelerate robotics machine learning workloads by more than 70x compared to MuJoCo’s existing GPU-accelerated simulator, MJX.
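Newton itself has not shipped code here, but NVIDIA Warp, the GPU framework it builds on, is already available as a Python library. The hedged sketch below shows the kind of kernel Warp compiles to the GPU, a toy semi-implicit Euler step for a set of particles; it is a generic Warp example, not Newton’s or MuJoCo Warp’s actual API, and the sizes and constants are illustrative.

```python
# Toy illustration of an NVIDIA Warp kernel, the style of GPU code that
# Newton is described as building on. This is a generic particle-integration
# example, not Newton or MuJoCo Warp code; sizes and constants are arbitrary.
import warp as wp

wp.init()

@wp.kernel
def integrate(x: wp.array(dtype=wp.vec3),
              v: wp.array(dtype=wp.vec3),
              gravity: wp.vec3,
              dt: float):
    tid = wp.tid()                      # one thread per particle
    v[tid] = v[tid] + gravity * dt      # update velocity
    x[tid] = x[tid] + v[tid] * dt       # semi-implicit Euler position update

n = 1024
x = wp.zeros(n, dtype=wp.vec3, device="cuda")
v = wp.zeros(n, dtype=wp.vec3, device="cuda")

wp.launch(integrate, dim=n,
          inputs=[x, v, wp.vec3(0.0, -9.8, 0.0), 1.0 / 60.0],
          device="cuda")
wp.synchronize()
print(x.numpy()[:3])   # first few particle positions after one step
```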
Applying AI Innovation to Real-World Challenges
Isomorphic Labs, founded by Google DeepMind CEO Demis Hassabis, is reimagining drug discovery with AI. It’s built a state-of-the-art drug design engine housed on Google Cloud with NVIDIA GPUs to enable the scale and performance needed to continue developing groundbreaking AI models that can help advance human health.
Tapestry, X’s moonshot for the electric grid, is building AI-powered products for a greener and more reliable future grid. Tapestry and NVIDIA are researching methods for increasing the speed and accuracy of electric grid simulations.
This joint exploration will focus on the challenges of integrating new energy sources and expanding grid capacity to meet the growing demands of data centers and AI, while ensuring grid stability. Tapestry and NVIDIA will evaluate potential solutions, including using AI to optimize the interconnection process, with the goal of enhancing the planning and modernization of energy infrastructure for a more sustainable future.
The Next Generation of AI-Optimized Infrastructure
Building on its commitment to provide customers with the most advanced AI infrastructure, Google Cloud will be one of the first companies to offer instances powered by the latest NVIDIA Blackwell GPUs: the NVIDIA RTX PRO 6000 Blackwell Server Edition and NVIDIA GB300.
With last month’s preview launches of its A4 and A4X virtual machines, Google Cloud became the first cloud provider to offer both NVIDIA B200- and GB200-based instances. Now, A4 is generally available — with A4X coming soon — so customers can take advantage of Blackwell’s powerful performance with the added benefits of Google Cloud’s AI Hypercomputer.
Google Cloud and NVIDIA have worked together to optimize popular open-source frameworks such as JAX, a Python library for high-performance machine learning, and MaxText to run efficiently on NVIDIA GPUs at scale. MaxText, an advanced framework for scaling large models across massive GPU clusters, uses optimizations co-developed with NVIDIA to enable efficient training on tens of thousands of GPUs.
These efforts, alongside enhancements to Google’s XLA compiler and OpenXLA, enable deeper integration with NVIDIA AI software and tools, including the recently announced NVIDIA reasoning model family, which allows for more efficient utilization of NVIDIA-accelerated systems and can be run on Google Cloud and Google Kubernetes Engine (GKE).
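As a concrete, hedged illustration of the JAX-on-NVIDIA-GPU pattern described above (not code taken from MaxText or the XLA work itself), the sketch below shards a toy batch across whatever GPUs are visible and lets XLA compile the computation; the mesh axis name, shapes and the assumption that the batch divides evenly across devices are all illustrative.

```python
# Illustrative sketch of running a jit-compiled JAX computation sharded across
# the local NVIDIA GPUs; shapes, the "data" axis name, and the even-divisibility
# assumption are made up for the example. MaxText builds a full training loop
# on top of primitives like these.
import jax
import jax.numpy as jnp
from jax.experimental import mesh_utils
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

devices = mesh_utils.create_device_mesh((jax.device_count(),))
mesh = Mesh(devices, axis_names=("data",))
batch_sharding = NamedSharding(mesh, P("data"))   # shard the leading (batch) axis
replicated = NamedSharding(mesh, P())             # keep weights on every GPU

@jax.jit
def forward(x, w):
    # XLA compiles this into fused GPU kernels and inserts any needed collectives.
    return jnp.tanh(x @ w)

batch, features, hidden = 8 * jax.device_count(), 512, 512
x = jax.device_put(jnp.ones((batch, features)), batch_sharding)
w = jax.device_put(jnp.ones((features, hidden)), replicated)

y = forward(x, w)
print(y.sharding)   # output stays sharded along the batch axis
```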
Source: Google Cloud