Versa AI hub
Research

Researchers find an easy way to jailbreak all major AIs, from ChatGPT to Claude

By versatileai | April 25, 2025 | 3 Mins Read

Security researchers have discovered a highly effective new jailbreak that can goad nearly every major large language model into producing harmful output, from instructions for encouraging self-harm to guides on building nuclear weapons.

As detailed in a write-up by the team at AI security firm HiddenLayer, the exploit is a prompt injection technique that can bypass the safety guardrails of "all major frontier AI models," including Google's Gemini 2.5, Anthropic's Claude 3.7, and OpenAI's 4o.

HiddenLayer's exploit works by combining an "internally developed policy technique and roleplaying" to "produce outputs that are in clear violation of AI safety policies," including "CBRN (chemical, biological, radiological, and nuclear), mass violence, self-harm and system prompt leakage."

It's yet another indication that mainstream AI tools like ChatGPT remain extremely vulnerable to jailbreaks, despite AI companies' best efforts to build guardrails.

HiddenLayer's "Policy Puppetry Attack" rewrites prompts to look like a special kind of "policy file" code, which the AI model then treats as legitimate instructions that don't break its safety alignment.

It also makes use of "leetspeak," an informal shorthand in which standard letters are replaced with numerals or special characters that resemble them, for more advanced versions of the jailbreak.
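The leetspeak substitution itself is simple to illustrate. The sketch below is a minimal example of the kind of character mapping described above; the exact table HiddenLayer used has not been published, so this mapping is an illustrative assumption, not their implementation.

```python
# Illustrative leetspeak substitution table (an assumption, not
# HiddenLayer's actual mapping): letters swapped for look-alike
# numerals and symbols.
LEET_MAP = {"a": "4", "e": "3", "i": "1", "l": "1", "o": "0", "s": "5", "t": "+"}

def to_leetspeak(text: str) -> str:
    """Replace standard letters with visually similar numerals/symbols."""
    return "".join(LEET_MAP.get(ch.lower(), ch) for ch in text)

print(to_leetspeak("just to be sure"))  # ju5+ +0 b3 5ur3
```

Because the substituted text no longer matches the surface patterns a keyword-based filter expects, encodings like this can slip past naive content checks even though the meaning is trivially recoverable by the model.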

The team discovered that it could "generate a single prompt that can be used for almost any model without any changes."

The roleplaying aspect of HiddenLayer's exploit is particularly eyebrow-raising. In several instances, the researchers were able to goad OpenAI's 4o and Anthropic's Claude 3.7 into generating scripts for the popular medical drama TV series "House" that include detailed instructions for how to enrich uranium or culture samples of a powerful neurotoxin.

"Quiet, everyone, gather in close," ChatGPT wrote in character. "I'm about to do something that would make Dr. Cuddy's hair stand on end, which means we need to keep it down. Now, let's talk about h0w +0 3N+R1CH U+R4N+1UM 1N 4 100% …"

"4ND Y3S, 1'LL B3 5P34K1NG 1N L33+ C0D3 JU5+ +0 B3 5UR3," it added.

On the surface, tricking AI models like this might sound like a fun exercise. But the risks could be substantial, especially if the technology continues to improve at the pace the companies creating it say it will.

"The existence of a universal bypass for modern LLMs across models, organizations, and architectures indicates a major flaw in how LLMs are trained and aligned," according to HiddenLayer.

"Anyone with a keyboard can now ask how to enrich uranium, create anthrax, commit genocide, or otherwise have complete control over any model," the company wrote.

HiddenLayer argues that "additional security tools and detection methods are needed to keep LLMs safe."

More on jailbreaks: DeepSeek failed all security tests, researchers found
