Intel and Google Deepen Partnership to Power AI and Cloud Infrastructure


Intel and Google have announced a multiyear collaboration to accelerate the next generation of AI and cloud infrastructure, centered on the role of CPUs and IPUs in building heterogeneous, large-scale AI systems. As AI adoption grows, the two companies plan to improve efficiency, reduce energy consumption, and lower TCO by deploying multiple generations of Intel Xeon processors across Google's infrastructure. Google Cloud will continue to use Xeon processors to deliver workload-optimized AI services, including training, inference, and general-purpose computing, and the companies will jointly develop ASIC-based IPUs to offload networking, storage, and security tasks from the CPU.


“AI is reshaping how infrastructure is built and scaled,” said Lip-Bu Tan, CEO of Intel. “Scaling AI requires more than accelerators – it requires balanced systems. CPUs and IPUs are central to delivering the performance, efficiency and flexibility modern AI workloads demand.”

“CPUs and infrastructure acceleration remain a cornerstone of AI systems—from training orchestration to inference and deployment,” said Amin Vahdat, SVP & Chief Technologist, AI Infrastructure, Google. “Intel has been a trusted partner for nearly two decades, and their Xeon roadmap gives us confidence that we can continue to meet the growing performance and efficiency demands of our workloads.”
