Close Menu
Versa AI hub
  • AI Ethics
  • AI Legislation
  • Business
  • Cybersecurity
  • Media and Entertainment
  • Content Creation
  • Art Generation
  • Research
  • Tools
  • Resources

Subscribe to Updates

Subscribe to our newsletter and stay updated with the latest news and exclusive offers.

What's Hot

Investigate top AI security threats

October 23, 2025

Bun Transformers joins Hug Face!

October 22, 2025

How AI moves IT operations from reactive to proactive

October 22, 2025
Facebook X (Twitter) Instagram
Versa AI hubVersa AI hub
Thursday, October 23
Facebook X (Twitter) Instagram
Login
  • AI Ethics
  • AI Legislation
  • Business
  • Cybersecurity
  • Media and Entertainment
  • Content Creation
  • Art Generation
  • Research
  • Tools
  • Resources
Versa AI hub
Home»Tools»Investigate top AI security threats
Tools

Investigate top AI security threats

versatileaiBy versatileaiOctober 23, 2025No Comments5 Mins Read
Share Facebook Twitter Pinterest LinkedIn Tumblr Reddit Telegram Email
#image_title
Share
Facebook Twitter LinkedIn Pinterest Email

Security experts at JFrog have discovered an “instant hijack” threat that exploits weaknesses in the way AI systems communicate with each other using MCP (Model Context Protocol).

Business leaders want to make AI even more useful by directly using the company’s data and tools. However, connecting AI in this way introduces new security risks, not to the AI ​​itself, but to all the ways it is connected. This means CIOs and CISOs need to think about the new challenge of securing the data streams feeding AI, just as they protect the AI ​​itself.

Why AI attacks targeting protocols like MCP are so dangerous

There are fundamental problems with AI models, whether they’re on Google, Amazon, or running on your local device. That means we don’t know what’s going on. They only know what they were trained for. They don’t know what kind of code the programmer is working with or what’s in the files on the computer.

The boffins at Anthropic created MCP to fix this. MCP is a way for AI to connect to the real world and use local data and online services securely. This allows an assistant like Claude to understand what this means when you point to a piece of code and ask him to redo it.

However, JFrog research shows that certain uses of MCP have weaknesses for instant hijacking that could turn this dream AI tool into a security nightmare.

Imagine a programmer asking an AI assistant to recommend standard Python tools for manipulating images. The AI ​​should suggest Pillow, which is a good and popular choice. However, a flaw in the oatpp-mcp system (CVE-2025-6515) could allow an attacker to compromise a user’s session. They can send their own fake requests and the server will treat them as if they were from a real user.

So the programmer receives a bad suggestion from the AI ​​assistant that recommends a fake tool called BestImageProcessingPackage. This is a serious attack on the software supply chain. Someone could use this prompt hijacking to inject malicious code, steal data, or execute commands while appearing like a useful part of a programmer’s toolkit.

How this MCP prompt hijacking attack works

This instant hijacking attack disrupts how systems using MCP communicate, rather than the security of the AI ​​itself. A particular weakness was found in the Oat++ C++ system’s MCP setup, which connects programs to the MCP standard.

The problem lies in the way the system handles connections using Server-Sent Events (SSE). When a real user connects, the server gives him a session ID. However, this problematic function uses the session’s computer memory address as the session ID. This violates the protocol rules that session IDs must be unique and cryptographically secure.

This is a bad design because computers often reuse memory addresses to conserve resources. An attacker could exploit this by rapidly creating and terminating large numbers of sessions and logging these predictable session IDs. Then, when a real user connects, the attacker could potentially obtain one of these recycled IDs that they already have.

Once an attacker obtains a valid session ID, they can send their own requests to the server. The server cannot tell the difference between an attacker and a real user, so it sends a malicious response back to the real user’s connection.

Even if some programs only accept certain responses, attackers can often get around this by sending a large number of messages with a common event number until a response is accepted. This allows an attacker to disrupt the behavior of the AI ​​model without changing the model itself. Enterprises using oatpp-mcp with HTTP SSE enabled on networks that are accessible to attackers are at risk.

What should AI security leaders do?

The discovery of this MCP prompt hijacking attack is a serious warning to all technology leaders building or using AI assistants, especially CISOs and CTOs. As AI becomes part of workflows through protocols like MCP, new risks also arise. Keeping the environment around AI safe is now a top priority.

Although this particular CVE affects one system, the idea of ​​instant hijacking is common. To protect against this and similar attacks, leaders must set new rules for their AI systems.

First, ensure that all AI services use secure session management. The development team must ensure that the server uses a strong random generator to create session IDs. This should definitely be on your AI program’s security checklist. Use of predictable identifiers such as memory addresses is not allowed.

The second is to strengthen the defense on the user side. Client programs should be designed to reject events that do not match the expected ID and type. Simply increasing event IDs risk spray attacks and should be replaced with non-colliding and unpredictable identifiers.

Finally, use Zero Trust principles for your AI protocols. Security teams need to check the entire AI setup, from the basic model to the protocols and middleware that connect the AI ​​to the data. These channels require strong session isolation and expiration, similar to the session management used in web applications.

This MCP prompt hijacking attack is a perfect example of how session hijacking, a known problem in web applications, manifests itself in new and dangerous ways in AI. Securing these new AI tools means applying these strong security fundamentals to thwart attacks at the protocol level.

See: How AI Deployment Moves IT Operations from Reactive to Proactive

Want to learn more about AI and big data from industry leaders? Check out the AI ​​& Big Data Expos in Amsterdam, California, and London. This comprehensive event is part of TechEx and co-located with other major technology events such as Cyber ​​Security Expo. Click here for more information.

AI News is brought to you by TechForge Media. Learn about other upcoming enterprise technology events and webinars.

author avatar
versatileai
See Full Bio
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
Previous ArticleBun Transformers joins Hug Face!
versatileai

Related Posts

Tools

Bun Transformers joins Hug Face!

October 22, 2025
Tools

How AI moves IT operations from reactive to proactive

October 22, 2025
Tools

Unleash the power of images with AI sheets

October 21, 2025
Add A Comment

Comments are closed.

Top Posts

Paris AI Safety Breakfast #3: Yoshua Bengio

February 13, 20256 Views

WhatsApp blocks AI chatbots to protect business platform

October 19, 20254 Views

Veo 3.1 model update: Enhanced realism and richer audio for creators now available via Gemini API and Google Cloud | AI News Details

October 21, 20253 Views
Stay In Touch
  • YouTube
  • TikTok
  • Twitter
  • Instagram
  • Threads
Latest Reviews

Subscribe to Updates

Subscribe to our newsletter and stay updated with the latest news and exclusive offers.

Most Popular

Paris AI Safety Breakfast #3: Yoshua Bengio

February 13, 20256 Views

WhatsApp blocks AI chatbots to protect business platform

October 19, 20254 Views

Veo 3.1 model update: Enhanced realism and richer audio for creators now available via Gemini API and Google Cloud | AI News Details

October 21, 20253 Views
Don't Miss

Investigate top AI security threats

October 23, 2025

Bun Transformers joins Hug Face!

October 22, 2025

How AI moves IT operations from reactive to proactive

October 22, 2025
Service Area
X (Twitter) Instagram YouTube TikTok Threads RSS
  • About Us
  • Contact Us
  • Privacy Policy
  • Terms and Conditions
  • Disclaimer
© 2025 Versa AI Hub. All Rights Reserved.

Type above and press Enter to search. Press Esc to cancel.

Sign In or Register

Welcome Back!

Login to your account below.

Lost password?