IBM announced at GTC 2026 an expanded collaboration with NVIDIA to help enterprises operationalize AI at scale. Advancing efforts across GPU-native data analytics, intelligent document processing, on-premises and regulated infrastructure deployments, cloud, and consulting, the collaboration aims to give enterprises the data foundation, infrastructure, and expertise to move AI from pilot to production.
Enterprises are making significant investments in AI, but too many remain stuck between experimentation and production at scale. The barriers are consistent: data is fragmented and difficult to access; infrastructure wasn’t built for advanced AI workloads; AI deployments don’t support the compliance and residency requirements of regulated industries; and many organizations still need the guided expertise to implement and deploy the technologies. Today’s announcements from IBM and NVIDIA are designed to close these gaps.
“In the next wave of enterprise AI, the model layer will rely on the data, infrastructure, and orchestration layers – and on businesses that can bring all three together,” said Arvind Krishna, Chairman and CEO, IBM. “Our partnership with NVIDIA goes to the heart of that challenge. Together, we’re giving enterprises the solutions they need to stop experimenting with AI and start running on it.”
“IBM pioneered enterprise computing and data processing six decades ago – and today they are redefining it for the AI era,” said Jensen Huang, founder and CEO of NVIDIA. “Data is the ground truth that gives AI context and meaning. Together with IBM, we are bringing CUDA GPU acceleration directly into the data layer – turning analytics and document processing from bottlenecks into real-time intelligence engines.”
Accelerating Structured Data Analytics with GPU-Native Computing
IBM and NVIDIA are collaborating on an open-source integration to increase performance and reduce the cost of extracting intelligence from massive enterprise datasets. IBM watsonx.data's SQL engine, Presto, is accelerated by NVIDIA cuDF to enable faster query execution on large datasets.
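To give a sense of what GPU-native dataframe analytics looks like at the API level, here is a minimal sketch. It assumes cuDF is installed on a machine with an NVIDIA GPU; because cuDF mirrors the pandas API, the same code falls back to pandas on CPU-only machines. The toy data and column names are illustrative, not from the Nestlé deployment.

```python
# Hedged sketch: cuDF exposes a pandas-compatible dataframe API, so the
# same aggregation code can run GPU-accelerated (cuDF) or on CPU (pandas).
try:
    import cudf as xdf  # GPU-accelerated dataframes (assumes cuDF + NVIDIA GPU)
except ImportError:
    import pandas as xdf  # CPU fallback with the same API surface

# Illustrative order data (hypothetical columns, not the real data mart schema)
orders = xdf.DataFrame({
    "country": ["CH", "US", "CH", "DE"],
    "invoice_total": [120.0, 75.5, 210.0, 99.9],
})

# Aggregate invoice totals per country -- the scan-and-group workload
# that cuDF executes on the GPU instead of the CPU.
per_country = orders.groupby("country")["invoice_total"].sum()
print(per_country)
```

The design point of the integration is exactly this API compatibility: the analytics logic stays the same, and the acceleration happens underneath it in the engine.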
To validate in production, IBM and NVIDIA applied GPU-accelerated watsonx.data to Nestlé’s Order-to-Cash data mart. The data mart tracks every order, fulfillment, delivery, and invoice across 186 countries and processes terabytes across 44 tables. Nestlé was ideal for this proof of concept because of its strong digital backbone. With globally unified data models, a consolidated data foundation, and a single source of truth across markets, Nestlé already had timely, accurate, and trusted data at scale — the right foundation to put GPU-accelerated analytics to the test in a real production environment.
On CPUs, a single refresh previously took Nestlé 15 minutes and ran only a handful of times a day. Nestlé reports that with NVIDIA's software and GPUs, the IBM watsonx.data Presto engine reduced query runtime to three minutes – achieving 83% cost savings and an overall 30X price-performance improvement.
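The reported figures are mutually consistent: a 15-minute refresh dropping to three minutes is a 5X speedup, and an 83% cost saving means the GPU run costs about 1/5.9 of the CPU run. Multiplying the two gives roughly 30X price-performance, which is how such a figure is conventionally derived. A quick back-of-the-envelope check, using only the numbers reported above:

```python
# Sanity-check the reported Nestlé figures (all values from the article).
cpu_minutes = 15.0
gpu_minutes = 3.0
speedup = cpu_minutes / gpu_minutes            # 15 min -> 3 min = 5X faster

cost_savings = 0.83                            # reported 83% cost savings
cost_ratio = 1.0 / (1.0 - cost_savings)        # GPU run is ~5.9X cheaper

# Price-performance combines "how much faster" with "how much cheaper".
price_performance = speedup * cost_ratio       # ~29.4, consistent with "30X"
print(f"{speedup:.1f}X faster, {cost_ratio:.1f}X cheaper, "
      f"{price_performance:.1f}X price-performance")
```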
“For a company that serves billions, data underpins decision making across our global operations,” said Chris Wright, Chief Information and Digital Officer of Nestlé. “Working with IBM and NVIDIA, a targeted proof of concept has demonstrated the ability to refresh global operations data in a few minutes and at reduced cost. Our focus now is on turning this capability into tangible business impact – further improving decision speed in areas such as manufacturing and warehousing, and scaling these capabilities across our enterprise.”
SOURCE: IBM