Security experts at JFrog have discovered an “instant hijack” threat that exploits weaknesses in the way AI systems communicate with each other using MCP (Model Context Protocol).
Business leaders want to make AI even more useful by directly using the company’s data and tools. However, connecting AI in this way introduces new security risks, not to the AI itself, but to all the ways it is connected. This means CIOs and CISOs need to think about the new challenge of securing the data streams feeding AI, just as they protect the AI itself.
Why AI attacks targeting protocols like MCP are so dangerous
There are fundamental problems with AI models, whether they’re on Google, Amazon, or running on your local device. That means we don’t know what’s going on. They only know what they were trained for. They don’t know what kind of code the programmer is working with or what’s in the files on the computer.
The boffins at Anthropic created MCP to fix this. MCP is a way for AI to connect to the real world and use local data and online services securely. This allows an assistant like Claude to understand what this means when you point to a piece of code and ask him to redo it.
However, JFrog research shows that certain uses of MCP have weaknesses for instant hijacking that could turn this dream AI tool into a security nightmare.
Imagine a programmer asking an AI assistant to recommend standard Python tools for manipulating images. The AI should suggest Pillow, which is a good and popular choice. However, a flaw in the oatpp-mcp system (CVE-2025-6515) could allow an attacker to compromise a user’s session. They can send their own fake requests and the server will treat them as if they were from a real user.
So the programmer receives a bad suggestion from the AI assistant that recommends a fake tool called BestImageProcessingPackage. This is a serious attack on the software supply chain. Someone could use this prompt hijacking to inject malicious code, steal data, or execute commands while appearing like a useful part of a programmer’s toolkit.
How this MCP prompt hijacking attack works
This instant hijacking attack disrupts how systems using MCP communicate, rather than the security of the AI itself. A particular weakness was found in the Oat++ C++ system’s MCP setup, which connects programs to the MCP standard.
The problem lies in the way the system handles connections using Server-Sent Events (SSE). When a real user connects, the server gives him a session ID. However, this problematic function uses the session’s computer memory address as the session ID. This violates the protocol rules that session IDs must be unique and cryptographically secure.
This is a bad design because computers often reuse memory addresses to conserve resources. An attacker could exploit this by rapidly creating and terminating large numbers of sessions and logging these predictable session IDs. Then, when a real user connects, the attacker could potentially obtain one of these recycled IDs that they already have.
Once an attacker obtains a valid session ID, they can send their own requests to the server. The server cannot tell the difference between an attacker and a real user, so it sends a malicious response back to the real user’s connection.
Even if some programs only accept certain responses, attackers can often get around this by sending a large number of messages with a common event number until a response is accepted. This allows an attacker to disrupt the behavior of the AI model without changing the model itself. Enterprises using oatpp-mcp with HTTP SSE enabled on networks that are accessible to attackers are at risk.
What should AI security leaders do?
The discovery of this MCP prompt hijacking attack is a serious warning to all technology leaders building or using AI assistants, especially CISOs and CTOs. As AI becomes part of workflows through protocols like MCP, new risks also arise. Keeping the environment around AI safe is now a top priority.
Although this particular CVE affects one system, the idea of instant hijacking is common. To protect against this and similar attacks, leaders must set new rules for their AI systems.
First, ensure that all AI services use secure session management. The development team must ensure that the server uses a strong random generator to create session IDs. This should definitely be on your AI program’s security checklist. Use of predictable identifiers such as memory addresses is not allowed.
The second is to strengthen the defense on the user side. Client programs should be designed to reject events that do not match the expected ID and type. Simply increasing event IDs risk spray attacks and should be replaced with non-colliding and unpredictable identifiers.
Finally, use Zero Trust principles for your AI protocols. Security teams need to check the entire AI setup, from the basic model to the protocols and middleware that connect the AI to the data. These channels require strong session isolation and expiration, similar to the session management used in web applications.
This MCP prompt hijacking attack is a perfect example of how session hijacking, a known problem in web applications, manifests itself in new and dangerous ways in AI. Securing these new AI tools means applying these strong security fundamentals to thwart attacks at the protocol level.
See: How AI Deployment Moves IT Operations from Reactive to Proactive
Want to learn more about AI and big data from industry leaders? Check out the AI & Big Data Expos in Amsterdam, California, and London. This comprehensive event is part of TechEx and co-located with other major technology events such as Cyber Security Expo. Click here for more information.
AI News is brought to you by TechForge Media. Learn about other upcoming enterprise technology events and webinars.