Mindgard, the market-leading AI cybersecurity platform, has launched Mindgard’s AI Security Labs, a free online tool that lets engineers red team AI systems – including those built on large language models (LLMs), such as ChatGPT – by evaluating their cyber risk. As well as de-risking a wide range of AI deployment scenarios, the tool marks a major advance in the cyber threat educational resources available to engineers.
As enterprises rapidly develop or adopt AI to gain competitive advantage, they are exposed to new attack vectors that conventional security tools cannot address. Even using so-called foundation models, such as those underlying ChatGPT, carries risk, as until now no automated processes have been available to test the possible impact of attacks.
Mindgard’s AI Security Labs lifts the lid on the exposure to ML attacks faced by model developers and user organisations alike. These risks currently go largely undetected because identifying them is complex and the specialised skills needed are scarce. Current AI penetration tests – if they happen at all – require months of programming and testing by hard-to-find, highly expensive teams, and any subsequent change to the AI stack, model or underlying data necessitates a completely new test. As a result, senior management is often completely unaware of the likely impact of any disruption.
Mindgard’s free AI Security Labs automates the threat discovery process, providing repeatable AI security testing and reliable risk assessment in minutes rather than months. It allows engineers to select from a range of attacks against popular AI models, datasets and frameworks to assess potential vulnerabilities. The results provide insight into the current “art of the possible” in AI attacks and the likelihood of evasion, IP theft, data leakage and model copying threats.
“Most organisations are flying blind when deploying AI, with no way to perform red teaming against emerging cyber risks,” said Dr. Peter Garraghan, CEO/CTO of Mindgard and Professor at Lancaster University. “Until now, there has been nowhere for technical teams to learn about the real threats to AI security. We created this free tool to empower engineers on the front lines of AI adoption with the education and capabilities to properly evaluate the attack surface.”
After the phenomenally fast adoption of LLMs following the launch of ChatGPT 3.5, a range of possible attacks on AI systems has begun to emerge. Data poisoning has been observed, with chatbots manipulated into swearing or producing anomalous results. Data extraction is another threat, in which an LLM reveals, for example, the sensitive data on which it was trained. And copying entire AI/ML models is increasingly common: Mindgard’s AI security researchers demonstrated this by copying ChatGPT 3.5 in two days at a cost of just $50.
“Established cybersecurity tools are ineffective against AI’s new threat landscape,” Garraghan added. “Our free offering bridges that gap by putting test capabilities into engineers’ hands so they can properly secure AI before deployment.”
Mindgard’s AI Security Labs is available via a simple online sign-up; no payment or credit card information is required. Key benefits of Mindgard’s free AI cyber risk tool include:
- AI red teaming across more than 170 unique attack scenarios
- Assessment of cyber risk of leading LLMs such as Mistral
- Demonstration of jailbreaking, data leakage, evasion, and model copying attacks
- Easy selection of AI models, datasets and frameworks to be used in the AI attack scenario
- Detailed reports on AI cyber risk and attack success rates
As well as offering immediate sign-up from its website, Mindgard plans to make the solution available on the Azure Marketplace in the coming months, with Google Cloud Platform (GCP) and Amazon Web Services (AWS) to follow.
SOURCE: Mindgard