Anthropic has introduced Claude Code Security, a new AI capability in its Claude Code offering that aims to change how development and security teams identify and fix software vulnerabilities by analyzing code the way a human expert would. Now available as a research preview for Enterprise and Team users, with priority access for open-source maintainers, Claude Code Security scans codebases for nuanced vulnerabilities that are difficult to catch with traditional static analysis and recommends fixes for human review. Each finding is put through a multi-step verification process to eliminate false positives and rank it by severity.
Anthropic reports that its team, using Claude Opus 4.6, its recently enhanced AI model, has already identified more than 500 serious vulnerabilities in long-running open-source projects, demonstrating the potential of semantic AI analysis to raise the industry’s security baseline. Anthropic positions Claude Code Security as a way to give defenders an edge as attackers increasingly adopt AI-driven tools and methods, helping organizations proactively reduce risk while keeping developers in control of fixes and approvals.