New Platform Discovers 89% of Enterprise AI Use Is Invisible to IT Teams as Lanai Launches Edge-Based AI Observability Agent

Lanai

Lanai announced its breakthrough edge-based AI Observability Agent, the first platform to run AI detection models directly on enterprise devices rather than routing sensitive conversations through centralized infrastructure. The launch introduces AI Interaction Discovery, which solves what traditional network monitoring and static “approved AI lists” cannot: prompt-level visibility into employee GenAI interactions across any application, whether embedded, native, or newly released, without sending data outside company boundaries.

While companies invest $500–$2,000 per employee on AI tools in the biggest tech spending spree since cloud adoption, new research from Lanai’s early deployments reveals that 89% of actual AI usage is completely invisible to IT teams.

Across industries and professions, employees are using a range of AI tools, including Claude, ChatGPT, and coding tools like Cursor and Codeium, while also unknowingly feeding sensitive data into AI features embedded within approved applications like Salesforce Einstein, Microsoft Copilot, Adobe Firefly, Slack AI, HubSpot AI, Notion AI, and Figma AI. This creates a fundamental tension between risk and productivity that traditional security tools can’t untangle.

In one striking example, an information security team confident they had “locked everything down” discovered 27 unauthorized AI tools in use within the first four days of Lanai deployment. This discovery gap aligns with broader industry findings showing 50% of office workers use shadow AI and up to 40% of enterprise IT spending goes untracked.


Lanai’s unique approach reveals not just what AI tools employees use, but the specific context that determines whether those interactions drive business value or create compliance violations.

“CEOs want companies to be AI-first. But leadership teams, especially CISOs and CIOs, are being asked to manage and secure something they can’t see,” said Lexi Reese, CEO of Lanai. “And the truth is, Shadow AI isn’t a threat; it’s your productivity pipeline, and it needs governance, not shutdown. Traditional tools might catch someone visiting ChatGPT.com, but they can’t tell you whether that employee had a casual conversation or shared company trade secrets.”

Reese added, “The question isn’t ‘How do we control AI?’ It is ‘How do we secure and scale what’s working while cutting off what’s dangerous?’ Lanai turns AI governance from a brake into an accelerator.”

By deploying lightweight AI models directly on employee devices, Lanai delivers dynamic detection across any application without static lists, and with real-time prompt analysis for sensitive data and workflow insights. Deployment takes less than 24 hours via standard MDM systems, with no infrastructure changes required.

Beyond discovering AI usage, Lanai’s browser-level detection reveals the critical context that determines whether employee innovation becomes business value or regulatory violation. This is essential for highly regulated industries like healthcare, financial services, insurance, legal, and government contracting, where compliant versus non-compliant AI use can mean the difference between competitive advantage and catastrophic penalties.

Examples of “Shadow AI” usage that Lanai has encountered through conversations with CIOs and CISOs include:

Insurance company: Network tools only registered Salesforce usage, missing that the sales team had discovered Einstein’s predictive capabilities and uploaded ZIP code demographic data to improve upselling. Conversion rates jumped 35%, but the team unknowingly violated state insurance regulations around discriminatory pricing. Lanai would flag this interaction, surfacing both the business value and the regulatory risk in real time.

Technology firm preparing for IPO: The company’s AI security tool showed “ChatGPT – Approved,” but missed that an analyst was using a personal ChatGPT Plus account to analyze confidential revenue projections and competitive intelligence under deadline pressure. The static list couldn’t distinguish between approved enterprise and personal accounts. Lanai would detect the actual content flowing through personal AI accounts, revealing SEC violation risks that application-level monitoring failed to see.

Healthcare system: Security teams recognized that doctors were using Epic’s clinical decision support, but missed that emergency physicians had begun entering patient symptoms into the embedded AI to accelerate diagnoses during busy shifts. While the approach improved patient throughput, it also violated HIPAA by using AI models not covered under their business associate agreements. Lanai distinguishes between compliant clinical AI use and violations within the same approved platform.

“We’re essentially moving AI observability from the network to the edge,” explained Steve Herrod, co-founder of Lanai and former VMware CTO. “It’s like the shift from monitoring server rooms to having telemetry inside every virtual machine. Traditional approaches see network traffic or ping static lists that do not update dynamically; we see the actual prompt interactions and where real risks and value live.”

Source: PRNewswire