MongoDB, Inc. announced a suite of transformative capabilities designed to simplify the complexity of deploying AI agents in production. By consolidating real-time data, vector search, long-term memory, and high-performance embedding models into a single, unified platform, MongoDB is eliminating the need for enterprises to stitch together disparate systems to scale AI.
Enhancing Retrieval Accuracy for the Agentic Era
Enterprises are increasingly moving toward autonomous agents, yet the effectiveness of these agents is limited by the quality of the data layer. To address this, MongoDB introduced Automated Voyage AI Embeddings in MongoDB Vector Search (now in public preview). This feature automatically generates and updates embeddings as data is written, ensuring agents always have access to real-time, mathematically precise context.
Utilizing Voyage AI models, which currently rank #1 on the Retrieval Embedding Benchmark (RTEB), MongoDB allows developers to deploy semantic search in minutes rather than weeks.
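For context, Atlas Vector Search queries are expressed as a `$vectorSearch` aggregation stage; with automated embeddings, the query vector would be produced from raw text by the configured Voyage AI model rather than by a separate pipeline. The sketch below builds such a stage as plain data; the index, field, and collection names are hypothetical, not taken from the announcement.

```javascript
// Minimal sketch of an Atlas Vector Search pipeline. The index name
// ("default_vector_index") and embedding field ("embedding") are
// placeholder assumptions for illustration.
function buildVectorSearchPipeline(queryVector, limit = 5) {
  return [
    {
      $vectorSearch: {
        index: "default_vector_index", // assumed index name
        path: "embedding",             // assumed field holding the vector
        queryVector,                   // e.g. generated by a Voyage AI model
        numCandidates: limit * 20,     // oversample candidates for recall
        limit,
      },
    },
    // Return only what the agent needs, plus the similarity score.
    { $project: { _id: 0, title: 1, score: { $meta: "vectorSearchScore" } } },
  ];
}

const pipeline = buildVectorSearchPipeline([0.1, 0.2, 0.3], 3);

// Usage against a live deployment would look like:
// const hits = await db.collection("docs")
//   .aggregate(buildVectorSearchPipeline(vec)).toArray();
```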
“The hardest part of running agents in production isn’t the model. It’s the data layer underneath it,” said CJ Desai, President and Chief Executive Officer of MongoDB. “To trust an agent at scale, it has to retrieve the right context, hold memory across sessions, and operate at machine speed, wherever the enterprise needs it. That’s why AI-native companies like ElevenLabs build voice agents on MongoDB, and why institutions like Lloyds Banking Group trust it for mission-critical workloads.”
Persistent Memory for Scalable Intelligence
To function reliably, agents require the ability to learn from past interactions. The new LangGraph.js Long-Term Memory Store (now generally available) provides JavaScript and TypeScript developers with persistent, cross-conversation memory directly within MongoDB Atlas. This removes the “plumbing” of syncing external databases to maintain agent state.
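LangGraph's memory stores organize records by a namespace tuple plus a key, with `put`/`get` operations, so a fact saved in one conversation can be recalled in a later one. The class below is a toy in-memory stand-in used only to show that interface shape; it is not the MongoDB-backed store itself, and the namespace values are invented.

```javascript
// Toy stand-in illustrating the namespace/key/value memory-store
// interface LangGraph.js uses. A real long-term store would persist
// these records in MongoDB Atlas instead of a Map.
class ToyMemoryStore {
  constructor() {
    this.data = new Map();
  }
  _flatKey(namespace, key) {
    return [...namespace, key].join("/");
  }
  async put(namespace, key, value) {
    this.data.set(this._flatKey(namespace, key), value);
  }
  async get(namespace, key) {
    return this.data.get(this._flatKey(namespace, key)) ?? null;
  }
}

async function demo() {
  const store = new ToyMemoryStore();
  // Session 1: the agent records a user preference...
  await store.put(["user-123", "preferences"], "tone", { style: "concise" });
  // Session 2 (a later conversation): the agent recalls it.
  return store.get(["user-123", "preferences"], "tone");
}
```

With a persistent store behind this interface, the agent's memory survives process restarts and is shared across sessions without any external sync job.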
“When AI tools and agents produce a wrong answer, the instinct is to blame the model,” said Pablo Stern, Chief Product Officer, AI and Emerging Products at MongoDB. “But the data platform is what enables the agent with the right context and memory to act correctly. With MongoDB, we’ve made this easy. Developers no longer have to build and maintain data infrastructure, wire up embeddings, or manage syncing between systems. They can focus on business outcomes rather than the plumbing.”
Performance Under Pressure: MongoDB 8.3
To support the high-throughput requirements of Fortune 500 organizations like Adobe, MongoDB also announced the general availability of MongoDB 8.3. This latest version delivers significant performance gains without requiring application code changes:
- 45% increase in read throughput.
- 35% increase in write performance.
- 15% more ACID transactions and 30% faster complex operations.
By moving common data transformations directly into the database, MongoDB 8.3 reduces the need for external pipelines, allowing for sub-100ms retrieval and sub-second context updates essential for real-time AI.
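The announcement does not list which transformations were moved in-database, but the general pattern is to express filtering and aggregation as a server-side pipeline instead of exporting raw documents to an external ETL job. A generic illustration, with hypothetical field names:

```javascript
// Generic in-database transformation: filter and aggregate documents
// server-side with an aggregation pipeline, so no external pipeline has
// to ship raw rows around before an agent can read the result.
// Field names (status, userId) are hypothetical.
const summarizeByUser = [
  { $match: { status: "active" } },                      // filter in place
  { $group: { _id: "$userId", sessions: { $sum: 1 } } }, // aggregate in place
  { $sort: { sessions: -1 } },                           // rank busiest users
];

// Usage against a live deployment:
// const summary = await db.collection("events")
//   .aggregate(summarizeByUser).toArray();
```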
“The requirements of enterprises running AI at scale are what we build for. MongoDB 8.3 makes agent workloads faster and cheaper to run on infrastructure customers already have. We’ve also moved common data transformations into the database itself, so teams no longer have to maintain external pipelines just to feed their agents. Production AI doesn’t wait, and neither do we,” said Ben Cefalo, Chief Product Officer, Core Products at MongoDB.