Wallarm Unveils Agentic AI Protection to Secure AI Agents from Attacks

Wallarm

Wallarm, a leading provider of API security, announced the release of Agentic AI Protection, a breakthrough capability designed to secure AI agents from emerging attack vectors, such as prompt injection, jailbreaks, system prompt retrieval, and agent logic abuse. The new feature extends Wallarm’s API Security Platform to actively monitor, analyze, and block attacks against AI agents.

AI agents – increasingly integrated into customer service, development workflows, and business automation – bring new capabilities but also introduce new risks. In Wallarm’s research, 25% of the security issues reported in Agentic AI GitHub repositories remain unfixed, while others take years to resolve. These agents interact via APIs and are susceptible to attacks embedded in seemingly benign user input. Wallarm’s Agentic AI Protection inspects both incoming queries and outgoing responses, applying behavioral and semantic analysis to identify suspicious patterns before they can compromise the agents themselves or the systems to which they connect.

“AI agents have quickly become essential to modern digital infrastructure, but their attack surface is poorly understood and rapidly evolving,” said Ivan Novikov, CEO and Co-founder of Wallarm. “Agentic AI Protection is our answer to this new security frontier. It provides an always-on defense layer that detects and stops attacks before they impact your business.”

Also Read: IonQ Announces $22M Deal with EPB Establishing Chattanooga, Tennessee as the First Quantum Computing & Networking Hub in the U.S.

Key capabilities of Agentic AI Protection include:

  • Automated discovery of AI APIs
  • AI-powered analysis of interactions with AI agents
  • Detection of multiple attacks, such as prompt injection and jailbreak attempts
  • Blocking of system prompt leaks and agent manipulation
  • Native integration with existing Wallarm deployments

Wallarm will showcase Agentic AI Protection at the RSA Conference 2025 in San Francisco, booth S-3125 at the Moscone Center, where attendees can see live demonstrations of the feature protecting AI agents from adversarial input and logic exploitation.

Source: PRNewswire