AI security startup Mindgard has raised $8 million in funding and appointed a new head of product and vice president of marketing.
With many AI products launching without proper security guarantees, organizations remain vulnerable to risks such as LLM prompt injections and jailbreaks, which exploit the probabilistic and opaque nature of AI systems and manifest only at runtime. Securing against these risks, inherent in AI models and toolchains, requires a fundamentally new approach.
Spun out from Lancaster University, Mindgard's Dynamic Application Security Testing for AI (DAST-AI) solution identifies and resolves AI-specific vulnerabilities that can only be detected at runtime. For organizations deploying AI or establishing guardrails, continuous security testing is essential to gain visibility into risks across the AI lifecycle.
“All software has security risks, and AI is no exception,” said Dr. Peter Garraghan, CEO of Mindgard and professor at Lancaster University.
“The challenge is that the way these risks manifest within AI is fundamentally different from other software.
Mindgard was created to tackle this challenge, drawing on 10 years of AI security research. We are proud to lead the way in creating a safer and more secure future for AI.”
Mindgard's solution integrates with existing automation and helps security teams, developers, AI red teamers, and pen testers secure AI without disrupting established workflows.
.406 Ventures led the funding, with participation from Atlantic Bridge, Willowtree Investments, and existing investors IQ Capital and Lakestar. The new executives are Dave Ganly, former product director at Twilio, and Fergal Glynn, who most recently served as CMO at Next DLP (acquired by Fortinet). The funding will support the company's product development and launches as Mindgard works to establish a leading presence in Boston and expand into the North American market.
According to Greg Dracon, partner at .406 Ventures, the rapid adoption of AI is creating new and complex security risks that cannot be addressed with traditional tools.
“Mindgard's approach, born from the clear challenge of securing AI, provides security teams and developers with the tools they need to deliver secure AI systems.”
Lead image: Mindgard.