deepset, a leader in enterprise AI orchestration, announced its Custom AI Agent Solution Architecture, which enables enterprises to deploy AI agents securely and efficiently across cloud and on-premises environments. The solution combines deepset’s LLM orchestration capabilities and the Haystack open source framework with NVIDIA AI Enterprise software to help organizations build and manage AI agents while maintaining data security and operational control over AI workflows. Organizations such as Airbus, OakNorth, and The Economist use the deepset AI Platform and Haystack to power custom AI applications tailored to their operational needs.
deepset’s solution leverages NVIDIA AI Enterprise, including Triton Inference Server for optimized performance, NVIDIA NIM microservices such as the NeMo Retriever text embedding NIM and NeMo Retriever text reranking NIM, and NVIDIA NeMo to support enterprise-grade security and compliance requirements. Whether deployed on-premises or via the deepset AI Platform in the cloud, organizations can deliver and scale AI agents with data sovereignty.
“Enterprises need AI solutions that don’t just process information but do so with trust, security, and adaptability,” said Milos Rusic, CEO of deepset. “With NVIDIA AI Enterprise software, we’re delivering a performance-optimized AI agent architecture that gives businesses full control over their AI workflows, whether in the cloud or on their own infrastructure.”
Key Features of deepset’s Custom AI Agent Solution Architecture:
- Advanced LLM Orchestration: The deepset AI Platform ensures businesses can build compound AI systems using the models of their choice for their unique workflows and agentic applications. The underlying Haystack open source framework provides the building blocks for rapid customization and high extensibility.
- Infrastructure Sovereignty and Flexibility:
- Cloud: Integrated NVIDIA AI Enterprise capabilities available to all deepset AI Platform subscribers
- On-Premises: Organizations can deploy deepset with NVIDIA through their infrastructure partners
- Performance Optimization with NVIDIA AI Enterprise: deepset AI Platform utilizes NVIDIA Triton Inference Server and NVIDIA NIM for efficient AI operations. The platform also includes specialized components for NeMo Retriever embedding models for enhanced retrieval and search.
- Enterprise-Grade NeMo Guardrails: An integration with NVIDIA NeMo Guardrails provides safety, compliance, and domain control for responsible AI deployment.
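The compound-pipeline pattern described above, retrieval feeding a prompt that feeds a generator, can be sketched in plain Python. This is an illustrative stand-in for the style of component orchestration Haystack popularized, not deepset’s or Haystack’s actual API; every class and method name here is hypothetical:

```python
# Illustrative sketch of a compound AI pipeline in the orchestration style
# described above. All classes are hypothetical stand-ins, not the real
# Haystack API; the Generator stub stands in for an LLM/NIM endpoint call.

class Retriever:
    """Returns documents relevant to a query (stubbed with a fixed corpus)."""
    def __init__(self, corpus):
        self.corpus = corpus

    def run(self, query):
        # Naive keyword match stands in for embedding-based retrieval/reranking.
        words = query.lower().split()
        return [doc for doc in self.corpus
                if any(w in doc.lower() for w in words)]

class PromptBuilder:
    """Assembles retrieved context and the query into a single prompt."""
    def run(self, query, documents):
        context = "\n".join(documents)
        return f"Context:\n{context}\n\nQuestion: {query}"

class Generator:
    """Stand-in for an LLM call; reports how much context it was grounded in."""
    def run(self, prompt):
        n_lines = prompt.count("\n") + 1
        return f"[answer grounded in {n_lines} prompt lines]"

class Pipeline:
    """Runs components in order, passing each stage's output to the next."""
    def __init__(self, retriever, builder, generator):
        self.retriever = retriever
        self.builder = builder
        self.generator = generator

    def run(self, query):
        docs = self.retriever.run(query)
        prompt = self.builder.run(query, docs)
        return self.generator.run(prompt)

corpus = [
    "Haystack orchestrates LLM pipelines.",
    "NIM serves optimized inference.",
]
pipe = Pipeline(Retriever(corpus), PromptBuilder(), Generator())
print(pipe.run("How does Haystack orchestrate pipelines?"))
```

In the real platform, each stage would be a production component (for example, a NeMo Retriever embedding NIM for retrieval and a Triton-served model for generation), but the orchestration shape stays the same: independent components composed into one governed workflow.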
“Enterprise AI needs more than just models—it requires robust orchestration, infrastructure and governance,” said Amanda Saunders, director, Generative AI Software at NVIDIA. “With NVIDIA AI Enterprise software integrated with the deepset Custom AI Agent Solution Architecture, businesses can build and deploy AI agents that are optimized with NVIDIA AI.”
From LLMs to AI Agents – Enabling the Next Generation of AI Workflows
As organizations transition from LLMs to fully operational AI agents, deepset’s AI orchestration platform and framework ensure trust, efficiency, and enterprise readiness at every stage. Cloud users of the deepset AI Platform gain immediate access to the integrated capabilities, while organizations requiring on-premises deployment can deploy through NVIDIA, deepset, and their infrastructure provider of choice.
“At YPulse, delivering timely, reliable insights to our users is critical,” said Dan Coates, President at YPulse. “The integration of NVIDIA AI Enterprise into the deepset AI Platform has enhanced our operational capabilities with faster deployments and optimized performance. With NVIDIA NIM microservices, we’ve improved inference throughput, scaling usage seamlessly even during periods of high demand. These advancements have not only improved system efficiency but enabled us to deliver greater value to our customers — providing more comprehensive youth insights at faster response times to help them drive better business decisions.”
SOURCE: Businesswire