Close Menu
Versa AI hub
  • AI Ethics
  • AI Legislation
  • Business
  • Cybersecurity
  • Media and Entertainment
  • Content Creation
  • Art Generation
  • Research
  • Tools
  • Resources

Subscribe to Updates

Subscribe to our newsletter and stay updated with the latest news and exclusive offers.

What's Hot

Add Benchmaxxer Repellent to Open ASR Leaderboard

May 6, 2026

Agentic AI governance is now a product. Is your company ready?

May 6, 2026

Introducing HCompany’s HoloTab. Your AI browser companion.

May 5, 2026
Facebook X (Twitter) Instagram
Versa AI hubVersa AI hub
Wednesday, May 6
Facebook X (Twitter) Instagram
Login
  • AI Ethics
  • AI Legislation
  • Business
  • Cybersecurity
  • Media and Entertainment
  • Content Creation
  • Art Generation
  • Research
  • Tools
  • Resources
Versa AI hub
Home»Cybersecurity»British startup Mindgard tackles AI security risks – Next Unicorn
Cybersecurity

British startup Mindgard tackles AI security risks – Next Unicorn

By December 20, 2024Updated:February 13, 2025No Comments3 Mins Read
Share Facebook Twitter Pinterest LinkedIn Tumblr Reddit Telegram Email
Share
Facebook Twitter LinkedIn Pinterest Email

British startup Mindguard, a spin-off from Lancaster University, has emerged to tackle the growing risks associated with artificial intelligence. As AI becomes increasingly integral to business operations, businesses face a critical balancing act. Deploy AI effectively and reap its benefits, or risk exposing yourself and your customers to significant vulnerabilities. Mindgard aims to provide solutions to these challenges by focusing on the security threats specific to AI systems.

Professor Peter Garraghan, CEO and CTO of Mindgard, highlighted the dual nature of AI risk. “AI is still software, so all of the cyber risks you’ve probably heard about also apply to AI,” he explained. However, the opaque and unpredictable behavior of neural networks introduces an additional layer of complexity and requires a customized approach to security. To address this, Mindgard has developed the Dynamic Application Security Testing for AI (DAST-AI) platform to detect vulnerabilities at runtime. This automated system uses threat libraries to simulate attacks and helps organizations assess the robustness of AI systems, including image classifiers and large-scale language models (LLMs).

The technology behind Mindgard is rooted in Garraghan’s academic expertise in AI security. His early predictions about threats to natural language processing (NLP) and image models proved prescient as such risks materialized in real-world applications. MindGuard’s links with Lancaster University continue to strengthen its capabilities, and the startup will own the intellectual property produced by 18 PhD researchers for years to come. “No other company in the world has a contract like this,” Garrahan said, emphasizing the strategic advantage.

Though deeply tied to research, Mindgard operates as a for-profit organization and offers its solutions as a SaaS platform. Targeted customers range from enterprises and traditional cybersecurity companies to AI startups aiming to demonstrate risk prevention to customers. Co-founder Steve Street will lead the company’s business operations as COO and CRO, while Fergal Glynn, newly appointed vice president of marketing, will lead the company’s U.S. expansion efforts from Boston.

MindGuard’s growth has been fueled by strong investor support. After raising £3m in a seed round in 2023, the company has launched a new round led by Boston-based .406 Ventures, with participation from Atlantic Bridge, WillowTree Investments and existing investors IQ Capital and Lakestar. Secured $8 million in funding. The funding will support team expansion, product development and entry into the US market, while R&D and engineering will remain in London.

Despite being a small company with 15 employees and plans to grow to 25 by the end of next year, Mindgard is positioning itself for broader adoption of AI technology and the security challenges that come with it. Masu. “We founded this company to do good in the world,” Gallagan said. “The good thing here is that people can trust AI and use it safely.”

Featured image courtesy of TechCrunch

author avatar
See Full Bio
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
Previous ArticleAI and the evolution of legal human resources functions
Next Article Why government agencies need to be proactive about protecting data from AI threats

Related Posts

Cybersecurity

Uttar Pradesh Govt will use AI, monitor social media and implement strict security for the RO/ARO exam on July 27th

July 21, 2025
Cybersecurity

Reolink Elite Floodlight Camera has AI search without subscription

July 21, 2025
Cybersecurity

A new era of learning

July 21, 2025
Add A Comment

Comments are closed.

Top Posts

DeepInfra on Hug Face Inference Provider 🔥

May 2, 20267 Views

Soulgen revolutionizes the creation of NSFW content

May 11, 20256 Views

Per-token AI fees coming to GitHub Copilot

May 3, 20265 Views
Stay In Touch
  • YouTube
  • TikTok
  • Twitter
  • Instagram
  • Threads
Latest Reviews

Subscribe to Updates

Subscribe to our newsletter and stay updated with the latest news and exclusive offers.

Most Popular

DeepInfra on Hug Face Inference Provider 🔥

May 2, 20267 Views

Soulgen revolutionizes the creation of NSFW content

May 11, 20256 Views

Per-token AI fees coming to GitHub Copilot

May 3, 20265 Views
Don't Miss

Add Benchmaxxer Repellent to Open ASR Leaderboard

May 6, 2026

Agentic AI governance is now a product. Is your company ready?

May 6, 2026

Introducing HCompany’s HoloTab. Your AI browser companion.

May 5, 2026
Service Area
X (Twitter) Instagram YouTube TikTok Threads RSS
  • About Us
  • Contact Us
  • Privacy Policy
  • Terms and Conditions
  • Disclaimer
© 2026 Versa AI Hub. All Rights Reserved.

Type above and press Enter to search. Press Esc to cancel.

Sign In or Register

Welcome Back!

Login to your account below.

Lost password?