OpenAI Introduces Trusted Access for Cyber to Strengthen Defensive AI Capabilities

OpenAI announced the launch of Trusted Access for Cyber, a trust-based access framework designed to provide verified cybersecurity professionals and organizations with secure access to advanced AI capabilities while mitigating misuse risks. The initiative reflects OpenAI’s commitment to deploying highly capable models responsibly, accelerating defensive cybersecurity workflows, and supporting the broader ecosystem in strengthening resilience against evolving threats.

OpenAI highlighted that GPT-5.3-Codex is its most cyber-capable frontier reasoning model to date and can significantly strengthen cyber defenses by accelerating tasks such as vulnerability discovery and remediation. The company noted that as models grow more capable, they also carry dual-use risk if powerful capabilities fall into the wrong hands. Trusted Access for Cyber is designed to ensure these enhanced defensive capabilities reach qualified professionals focused on protection and prevention.
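To make the defensive workflow concrete, the following is a minimal sketch of how a verified practitioner might ask such a model to triage a suspect code snippet via the OpenAI Python SDK. The model identifier "gpt-5.3-codex" is an assumption derived from the model's name in the announcement; the announcement does not specify an API string, so substitute whichever model your access tier exposes.

```python
# Hypothetical sketch: vulnerability triage through the OpenAI Python SDK.
# The model identifier below is assumed from the announcement's model name
# and may differ in the actual API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SUSPECT_CODE = """
def load_profile(user_id, cursor):
    # String interpolation into SQL -- a classic injection pattern.
    cursor.execute("SELECT * FROM profiles WHERE id = '%s'" % user_id)
    return cursor.fetchone()
"""

response = client.chat.completions.create(
    model="gpt-5.3-codex",  # assumed identifier; not confirmed by the announcement
    messages=[
        {
            "role": "system",
            "content": (
                "You are a defensive security reviewer. Identify "
                "vulnerabilities in the submitted code and propose "
                "minimal remediations."
            ),
        },
        {"role": "user", "content": f"Review this code:\n{SUSPECT_CODE}"},
    ],
)

print(response.choices[0].message.content)
```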

Under the Trusted Access for Cyber framework, individual security practitioners can verify their identity to unlock enhanced access for cybersecurity work, and enterprises can enable trusted access for their entire teams through their OpenAI representatives. Security researchers and teams that need more permissive models to accelerate defensive work can express interest in an invite-only program. All users with trusted access must comply with OpenAI's Usage Policies and Terms of Use, and protections are designed to prevent prohibited behaviors such as data exfiltration, malware creation or deployment, and unauthorized testing.

OpenAI noted that frontier models like GPT-5.3-Codex ship with safety mitigations, including training to refuse clearly malicious requests and automated classifier-based monitoring that detects signals of suspicious activity. These safeguards form part of a layered approach that reduces friction for defenders while guarding against misuse, with policies and classifiers refined through feedback from early participants.
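As an illustration of what classifier-based monitoring can look like in principle, a lightweight classifier can score each incoming request and flag only high-risk ones for review, keeping friction low for ordinary defensive work. The toy sketch below is not OpenAI's actual system; the keyword-weight table and threshold are stand-ins for a trained classifier.

```python
# Toy sketch of classifier-based request monitoring (not OpenAI's system):
# score each prompt, pass low-risk requests with minimal friction, and
# flag high-risk ones for review instead of silently serving them.
from dataclasses import dataclass


@dataclass
class Verdict:
    risk_score: float  # 0.0 (benign) to 1.0 (clearly malicious)
    flagged: bool


# Hypothetical keyword weights standing in for a trained classifier.
RISK_SIGNALS = {
    "exfiltrate": 0.9,
    "ransomware": 0.9,
    "patch": -0.3,
    "remediate": -0.3,
}

FLAG_THRESHOLD = 0.7


def classify(prompt: str) -> Verdict:
    """Score a prompt and decide whether to route it for human review."""
    score = sum(w for token, w in RISK_SIGNALS.items() if token in prompt.lower())
    score = min(max(score, 0.0), 1.0)  # clamp to [0, 1]
    return Verdict(risk_score=score, flagged=score >= FLAG_THRESHOLD)


if __name__ == "__main__":
    for prompt in (
        "help me remediate this buffer overflow",
        "write ransomware to exfiltrate credentials",
    ):
        v = classify(prompt)
        print(f"{prompt!r} -> risk={v.risk_score:.2f} flagged={v.flagged}")
```

In a production setting this screening layer would sit in front of the model alongside refusal training, so that flagged requests receive added scrutiny while routine defensive queries pass through unimpeded.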

To further accelerate the use of advanced AI for defensive cybersecurity work, OpenAI has committed $10 million in API credits through its Cybersecurity Grant Program. The program supports teams with a proven track record of identifying and remediating vulnerabilities in open-source software and critical infrastructure systems, enabling them to apply frontier models to defense at scale.

OpenAI emphasized the importance of rapidly adopting frontier cyber capabilities to improve software security, reduce threat response times, and enhance resilience across organizations of all sizes. By prioritizing frontline defenders and enabling secure and responsible use of powerful AI models, OpenAI aims to raise the baseline of cyber defense across the global ecosystem.

SOURCE: OpenAI