Codacy Launches Free AI Coding Risk Assessment to Benchmark AI Security and Compliance Posture


Codacy, a leading platform for end-to-end AppSec and Code Quality automation, has launched the AI Coding Risk Assessment, a self-assessment survey that helps engineering teams measure and benchmark the security posture of their AI-assisted development workflows built on tools like GitHub Copilot, Cursor, and Claude.

The initiative comes as organizations struggle to reconcile the speed of generative AI with the complex risks of untrusted, machine-generated code and growing regulatory scrutiny worldwide. The 24-question survey aims to provide the first comprehensive, anonymous data set on how teams are mitigating these risks, covering:

  • Policy and Governance;
  • Security and Risk Management;
  • Culture and Training.


Unlike generalized “state of” reports, the resulting data is personalized immediately: every respondent receives a tailored industry benchmark showing exactly how their company’s practices compare with others in their industry, along with an AI Governance and Security checklist for addressing gaps.

“After speaking with leading AI industry figures, including the teams behind Microsoft’s Copilot, Lovable and Windsurf, we observed a need for a unified, data-backed resource,” said Jaime Jorge, CEO and Co-founder of Codacy. “That’s why we created this benchmark. It helps companies identify where they stand, compare themselves to the market, and take concrete, actionable steps to leverage AI at scale.”

Source: PRWeb