HiddenLayer, the leading security provider for artificial intelligence (AI) models and assets, announced that Microsoft Azure AI is a new user of its Model Scanner. The Model Scanner will scan third-party and open-source models in the model collection curated by Azure AI, verifying that they are free from cybersecurity vulnerabilities, malware, and other signs of tampering.
“We strongly advocate for the parallel acceleration of AI innovation and security solutions,” said Chris Sestito, CEO and Co-founder of HiddenLayer. “With the integration of our Model Scanner into the Azure AI catalog, we’re dedicated to establishing a secure avenue for the broad adoption of AI technologies.”
Open-source models are favored for their affordability and flexibility, but they can be susceptible to malicious exploitation. By validating that open-source models have been scanned by Model Scanner, Azure AI can help security teams streamline AI deployment processes and empower development teams to fine-tune or deploy open models safely and with greater confidence.
“We see a need for proactive security solutions that allow developers to deploy AI models safely, and feel confident fine-tuning these models with their own proprietary data,” said Sarah Bird, Chief Product Officer of Responsible AI at Microsoft. “Integrating HiddenLayer into our model onboarding process is the validation that our customers need as they drive competitive differentiation with AI.”
HiddenLayer Model Scanner recognizes all major machine learning model formats and frameworks and analyzes their structure, layers, tensors, functions, and modules to identify suspicious or malicious code, vulnerabilities, and integrity issues. Key capabilities enabled by HiddenLayer in the Azure AI model catalog include:
- Malware Analysis: Scans AI models for embedded malicious code that could serve as an infection vector and launchpad for malware
- Vulnerability Assessment: Scans for common vulnerabilities and exposures (CVEs) and zero-day vulnerabilities targeting AI models
- Backdoor Detection: Scans model functionality for evidence of supply chain attacks and backdoors, such as arbitrary code execution and network calls
- Model Integrity: Analyzes an AI model’s layers, components, and tensors to detect tampering or corruption
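The backdoor-detection capability above hinges on determining what a serialized model would do when loaded, without actually loading it. For Python pickle-based model formats, one well-known static technique is walking the pickle's opcode stream and flagging imports of code-execution primitives. The sketch below is purely illustrative, assuming a pickle payload; the function name and the list of suspicious imports are our own stand-ins, not HiddenLayer's actual implementation or API:

```python
import pickle
import pickletools

# Illustrative (not exhaustive) list of imports whose presence in a
# pickle stream usually signals embedded code execution.
SUSPICIOUS_GLOBALS = {
    ("os", "system"),
    ("posix", "system"),
    ("builtins", "eval"),
    ("builtins", "exec"),
    ("subprocess", "Popen"),
}

def find_suspicious_globals(data: bytes) -> list:
    """Statically walk a pickle's opcodes (never deserializing it)
    and report any suspicious module/name imports it would perform."""
    hits = []
    strings = []  # recent string pushes, later consumed by STACK_GLOBAL
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL":  # protocol 0 style: "module name"
            module, name = arg.split(" ", 1)
            if (module, name) in SUSPICIOUS_GLOBALS:
                hits.append((module, name))
        elif opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            strings.append(arg)
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            module, name = strings[-2], strings[-1]
            if (module, name) in SUSPICIOUS_GLOBALS:
                hits.append((module, name))
    return hits

class Evil:
    """Demo object whose pickle smuggles in a call to eval()."""
    def __reduce__(self):
        return (eval, ("21 * 2",))

malicious = pickle.dumps(Evil())
benign = pickle.dumps({"weights": [0.1, 0.2]})
```

Scanning `malicious` flags the `("builtins", "eval")` import, while `benign` comes back clean. A production scanner goes much further, as the capability list notes: covering non-pickle formats, model structure, and known CVEs rather than a fixed denylist.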
SOURCE: PR Newswire