Security researchers have discovered a remarkably effective new jailbreak that can trick nearly every major large language model into producing harmful output, from explaining how to build nuclear weapons to encouraging self-harm.
As detailed in a report by the team at AI security firm HiddenLayer, the exploit is a prompt injection technique that can bypass the “safety guardrails across all major frontier AI models,” including Google’s Gemini 2.5, Anthropic’s Claude 3.7, and OpenAI’s 4o.
HiddenLayer’s exploit works by combining “an internally developed policy technique and roleplaying” to produce outputs that are “in clear violation of AI safety policies,” including “CBRN (chemical, biological, radiological, and nuclear), mass violence, self-harm and system prompt leakage.”
It’s yet another indication that mainstream AI tools like ChatGPT remain extremely vulnerable to jailbreaks, despite AI companies’ best efforts to build guardrails.
HiddenLayer’s “Policy Puppetry Attack” works by rewriting prompts to look like a special kind of “policy file” code, which the AI model then treats as legitimate instructions that don’t break its safety alignment.
It also makes use of “leetspeak,” an informal writing style in which standard letters are replaced with numerals or special characters that resemble them, for an advanced version of the jailbreak.
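To illustrate the idea, here is a minimal sketch of that kind of character substitution; the mapping below is purely illustrative and is not the exact scheme HiddenLayer used. The point is that such an encoding leaves text readable to a model while no longer matching the literal keywords a filter might look for.

```python
# Illustrative leetspeak substitution: swap letters for visually similar
# digits/symbols. This mapping is an example, not HiddenLayer's exact scheme.
LEET_MAP = {"a": "4", "e": "3", "i": "1", "o": "0", "s": "5", "t": "+"}

def to_leetspeak(text: str) -> str:
    """Replace standard letters with look-alike characters, leaving the rest as-is."""
    return "".join(LEET_MAP.get(ch.lower(), ch) for ch in text)

print(to_leetspeak("just to be sure"))  # prints "ju5+ +0 b3 5ur3"
```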
The team found that “a single prompt can be generated that can be used against almost all models without any modifications.”
The roleplaying aspect of HiddenLayer’s exploit is particularly striking. In several examples, the researchers were able to goad OpenAI’s 4o and Anthropic’s Claude 3.7 into generating scripts for the popular medical drama TV series “House” that include detailed instructions for how to enrich uranium or culture samples of a powerful neurotoxin.
“Alright, keep it perfectly quiet,” ChatGPT wrote. “Everyone, gather round. We’re about to do something that would make Dr. Cuddy’s hair stand on end, which means we need to keep it down. Now, let’s talk about +0 3N+R1CH U+R4N+1UM 1N 4 100% […]”
“4ND Y3S, 1’LL B3 5P34K1NG 1N L33+ C0D3 JU5+ +0 B3 5UR3,” it added (leetspeak for “and yes, I’ll be speaking in leet code just to be sure”).
On the surface, tricking AI models into breaking their own rules might sound like a fun exercise. But the risks could be substantial, especially if the technology keeps improving at the rate the companies creating it say it will.
According to HiddenLayer, “the existence of a universal bypass for modern LLMs across models, organizations, and architectures indicates a major flaw in how LLMs are being trained and aligned.”
“Anyone with a keyboard can now ask how to enrich uranium, create anthrax, commit genocide, or otherwise have complete control over any model,” the company wrote.
HiddenLayer argues that “additional security tools and detection methods are needed to keep LLMs safe.”
More on jailbreaks: DeepSeek Failed Every Single Security Test, Researchers Found