OpenRouter raises $40 million to scale up multi-model inference for enterprise


OpenRouter, the unified interface for large-language-model (LLM) inference, announced that it has closed a combined Seed and Series A financing of $40 million led by Andreessen Horowitz and Menlo Ventures, with participation from Sequoia and prominent industry angels. The investment will accelerate product development, bring new types of models to the platform, and expand enterprise support as OpenRouter becomes the default backbone for organizations that rely on multiple AI models.

“Inference is the fastest-growing cost for forward-looking companies, and it’s often coming from four or more different models. The sophisticated companies have run into these problems already and built some sort of in-house gateway. But they’re realizing that making LLMs ‘just work’ isn’t an easy problem. They’re ripping out home-grown solutions and bringing in OpenRouter so they can focus on their domain-specific problems, not on LLM integration,” said Alex Atallah, co-founder and CEO of OpenRouter. “This round lets us keep shipping at the speed developers expect while delivering the uptime, privacy, and IT guarantees that enterprises demand.”

Momentum Highlights

  • Rapid growth to $100m+: Annual run-rate inference spend on OpenRouter grew from $10m in October 2024 to more than $100m as of May 2025.
  • Developers are flocking: More than one million developers have used OpenRouter’s API since the platform launched two years ago.
  • Organizational trust: A global footprint, with customers that range from early-stage startups to large multinationals—all routing mission-critical traffic through OpenRouter.
  • Ecosystem investment: Integrated with Microsoft VS Code, Zapier, Cloudflare, Make.com, n8n, PostHog, and more.
  • Deep partnerships with AI labs: OpenRouter recently collaborated with OpenAI on the stealth launch of its GPT-4.1 model, giving customers early access to a frontier model and generating valuable real-world usage data for OpenAI.


Why Companies Are Choosing OpenRouter

OpenRouter’s Enterprise offering delivers the controls and assurances required by larger organizations:

  • Zero-logging by default with the ability to route to providers with data policies that work for your company.
  • Automatic multi-cloud failover across 50+ providers for best-in-class uptime.
  • Edge-deployed routing with roughly 25 ms of added latency, serving billions of requests and trillions of tokens every week.
  • Unified billing, reporting, and management. Real-time spend management, plus bring-your-own-capacity that blends customers’ inference capacity with OpenRouter’s burst pool.
  • A single API with standardized token accounting across providers. Whether the priority is tool calling, caching, performance, or price, OpenRouter normalizes providers and models behind a drop-in compatible API so businesses can focus on their product, not on LLM integrations (a minimal request sketch follows this list).
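
Because OpenRouter exposes an OpenAI-compatible chat completions endpoint, switching or mixing models typically comes down to changing a model identifier. The sketch below illustrates a single request with a fallback list; the model IDs and the optional "models" field are illustrative assumptions and should be checked against OpenRouter's current documentation.

```python
# Minimal sketch of calling OpenRouter's OpenAI-compatible chat completions
# endpoint. Model IDs and the "models" fallback list below are illustrative;
# consult OpenRouter's docs for current identifiers and routing options.
import os
import requests

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

payload = {
    # Primary model, addressed as "<provider>/<model>" (hypothetical ID).
    "model": "openai/gpt-4.1",
    # Assumed fallback list: if the primary model or its provider is
    # unavailable, routing can fall through to these alternatives.
    "models": ["anthropic/claude-3.5-sonnet", "meta-llama/llama-3.1-70b-instruct"],
    "messages": [
        {"role": "user", "content": "Summarize our Q2 infrastructure spend."}
    ],
}

response = requests.post(
    OPENROUTER_URL,
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
data = response.json()

# The response follows the familiar chat completions schema, so the same
# parsing code works regardless of which upstream provider served the call.
print(data["choices"][0]["message"]["content"])
# Standardized token accounting appears in the usual "usage" block.
print(data.get("usage"))
```

Because responses share one schema across providers, spend reporting and token accounting can be aggregated without per-provider parsing.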

Whether an organization is experimenting at $500/month or running a global product consuming millions of dollars of inference, OpenRouter can provide the uptime, selection, and failover that companies need.

Source: GlobeNewswire