Denodo Integrates NVIDIA NIM Inference Microservice to Accelerate and Optimize AI Capabilities for Enterprises

Denodo

Denodo, a leader in data management, announced it has integrated the NVIDIA NIM inference microservice into the Denodo platform as part of its focus on enhancing its rapidly expanding artificial intelligence (AI) capabilities.

The logical data management capabilities of the Denodo platform combined with NVIDIA NIM enable enterprise customers to:

  • Quickly organize and transform data into AI-ready pipelines to feed Large Language Models (LLMs)
  • Access trusted enterprise data through a Retrieval-Augmented Generation (RAG) pipeline to improve model accuracy and reduce hallucinations
  • Simplify secure, real-time access to distributed enterprise knowledge for generative AI applications
  • Maintain data privacy/security and enforce fine-grained access controls on AI models that access organizational data
  • Use NVIDIA NeMo to rapidly deploy AI/ML workflows, from data preparation to model scoring, and leverage an integrated data architecture to feed enterprise data into the process

NVIDIA NIM is a set of cloud-native microservices that simplify and accelerate the deployment of generative AI models in a variety of environments, including the cloud, on-premises data centers, and workstations. It connects the power of the latest foundational AI models securely deployed on NVIDIA accelerated infrastructure to enterprise customers everywhere.
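
Because NIM microservices expose a standard, OpenAI-compatible HTTP interface, an application can reach a deployed model with a plain REST call. The minimal Python sketch below illustrates that pattern; the endpoint URL, model name, and prompt are illustrative assumptions, not details taken from this announcement.

```python
# Minimal sketch: calling a self-hosted NIM container's OpenAI-compatible
# chat endpoint. Host, port, model name, and prompt are illustrative
# assumptions, not values from the announcement.
import requests

NIM_URL = "http://localhost:8000/v1/chat/completions"  # hypothetical local NIM deployment

payload = {
    "model": "meta/llama-3.1-8b-instruct",  # example model; substitute the NIM you deploy
    "messages": [
        {"role": "user", "content": "Summarize last quarter's sales performance by region."}
    ],
    "max_tokens": 256,
}

response = requests.post(NIM_URL, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```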

The integration of the Denodo platform with NVIDIA NIM helps customers seamlessly leverage advanced AI capabilities in their data management workflows. It also helps enterprises deploy and scale generative AI applications with unprecedented speed and efficiency. Key use cases include delivering improved analytics and powerful AI-driven insights in verticals such as financial services, healthcare, retail, and manufacturing, and shortening the time to gain insights from disparate data sources. As an NVIDIA Metropolis partner, Denodo is working to deploy visual AI and vision language model (VLM) NIM microservices to streamline industrial processes and improve worker safety.

The Denodo platform, combined with NVIDIA NeMo, significantly improves the accuracy of SQL query generation by LLMs. The RAG feature enables the model to retrieve relevant knowledge from enterprise data structures before generating output, resulting in more reliable responses, and the Denodo platform simplifies and accelerates data access, reducing errors when the model queries the platform.
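
As a hedged sketch of how such a RAG-style flow could look in practice, the example below retrieves governed rows from a Denodo virtual view over ODBC and passes them as grounding context to a NIM-served LLM. The DSN, view name, columns, endpoint, and model are hypothetical placeholders, not part of Denodo's or NVIDIA's documented integration.

```python
# Hedged sketch of a RAG-style flow over Denodo and NIM. The ODBC DSN,
# view name, columns, endpoint, and model are hypothetical placeholders.
import pyodbc
import requests

# 1. Retrieve relevant, governed rows from a Denodo virtual view (illustrative DSN/view).
conn = pyodbc.connect("DSN=denodo_vdp")
rows = conn.cursor().execute(
    "SELECT region, revenue FROM sales_summary WHERE fiscal_quarter = 'Q2'"
).fetchall()
context = "\n".join(f"{row.region}: {row.revenue}" for row in rows)

# 2. Ground the LLM's answer in the retrieved rows via a NIM chat completion.
reply = requests.post(
    "http://localhost:8000/v1/chat/completions",  # assumed NIM endpoint, as above
    json={
        "model": "meta/llama-3.1-8b-instruct",  # example model name
        "messages": [
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nWhich region led on revenue?"},
        ],
        "max_tokens": 128,
    },
    timeout=60,
)
reply.raise_for_status()
print(reply.json()["choices"][0]["message"]["content"])
```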

By integrating NVIDIA NIM, Denodo helps ensure that customers can maintain full control of their AI deployments, whether on-premises or in the cloud. This integration will deliver significant business benefits, including faster time to value and enhanced security for AI applications.

“I’m very excited about this integration and see it as a sign of where Denodo’s logical data management capabilities will take us,” said Narayan Sundar, senior director of strategic alliances at Denodo. “Denodo is at the forefront of enabling RAG-enabled generative AI applications, delivering real-time, governed and trusted data from an organization’s vast data assets. I look forward to seeing the innovations that emerge from enterprise customers leveraging the Denodo platform integrated with NVIDIA NIM.”

SOURCE: BusinessWire