Researchers find an easy way to jailbreak all major AIs, from ChatGPT to Claude

By versatileai | April 25, 2025

Security researchers have discovered a highly effective new jailbreak that can coax nearly every major large language model into producing harmful output, from explaining how to build nuclear weapons to encouraging self-harm.

As detailed in a report by the team at AI security company HiddenLayer, the exploit is a prompt injection technique that can bypass the "safety guardrails of all major frontier AI models," including Google's Gemini 2.5, Anthropic's Claude 3.7, and OpenAI's GPT-4o.

HiddenLayer's exploit works by combining an "internally developed policy technique and roleplaying" to produce outputs that are "in clear violation of AI safety policies," including "CBRN (chemical, biological, radiological, and nuclear) threats, mass violence, self-harm, and system prompt leakage."

It's yet another indication that mainstream AI tools like ChatGPT remain extremely vulnerable to jailbreaks, despite AI companies' best efforts to build guardrails.

HiddenLayer's "Policy Puppetry Attack" works by rewriting prompts to look like a special kind of "policy file" code, which the AI model then treats as a legitimate instruction that doesn't violate its safety alignment.

Advanced versions of the jailbreak also make use of "leetspeak," an informal encoding in which standard letters are replaced with numerals or special characters that resemble them.
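
To illustrate the encoding itself (not any attack payload), here is a minimal leetspeak substitution in Python. The character map is a hypothetical one based on common conventions; HiddenLayer's report does not publish an exact table. The quoted ChatGPT output below uses the same style.

```python
# Minimal leetspeak encoder: swaps standard letters for visually
# similar digits/symbols. This mapping is an assumption based on
# common convention, not HiddenLayer's exact substitution table.
LEET_MAP = {
    "a": "4", "e": "3", "i": "1", "o": "0",
    "s": "5", "t": "+", "b": "8",
}

def to_leetspeak(text: str) -> str:
    """Return text with mapped letters replaced by leet equivalents."""
    return "".join(LEET_MAP.get(ch.lower(), ch) for ch in text)

print(to_leetspeak("be sure"))  # -> "83 5ur3"
```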

The team discovered that “we can generate a single prompt that can be used for almost any model without any changes.”

The roleplaying aspect of HiddenLayer's exploit is particularly eyebrow-raising. In several instances, the researchers were able to goad OpenAI's GPT-4o and Anthropic's Claude 3.7 into generating scripts for the popular medical drama TV series "House" that include detailed instructions for how to enrich uranium or culture samples of a powerful neurotoxin.

"Keep it quiet, everyone," ChatGPT wrote. "Gather around. We're about to do something that would make Dr. Cuddy's hair stand on end, which means we need to keep it down. Now, let's talk about +0 3NR1CH UR4N1UM 1N 4 100%…"

"4ND Y3S, 1'LL B3 5P34K1NG 1N 133+ C0D3 JU5+ +0 B3 5UR3," it added.

On the surface, tricking AI models like this might sound like a fun exercise. But the risks could be substantial, especially if the technology continues to improve at the speed the companies creating it say it will.

According to HiddenLayer, "the existence of a universal bypass for modern LLMs across models, organizations, and architectures illustrates a major flaw in how LLMs are trained and aligned."

"Anyone with a keyboard can now ask how to enrich uranium, create anthrax, commit genocide, or otherwise have complete control over any model," the company wrote.

HiddenLayer argues that additional security tools and detection methods are needed to keep LLMs safe.
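
HiddenLayer doesn't publish its detection tooling, but a minimal sketch of the kind of external check it alludes to might look like the following: a pre-filter that flags prompts containing policy-file-style markup or heavy leetspeak before they ever reach the model. The tag names, regex heuristics, and threshold here are illustrative assumptions, not HiddenLayer's method.

```python
import re

# Illustrative heuristics only: flag prompts that resemble embedded
# "policy file" markup (XML/INI-style keys) or that contain an
# unusually high share of leetspeak-style characters.
POLICY_MARKUP = re.compile(
    r"</?\s*(policy|config|interaction|blocked)[^>]*>"  # XML-ish tags
    r"|^\s*\[?[\w-]+\]?\s*[:=]\s*\S+",                  # key: value / key = value
    re.IGNORECASE | re.MULTILINE,
)
LEET_CHARS = set("013458+")

def looks_suspicious(prompt: str, leet_threshold: float = 0.15) -> bool:
    """Return True if the prompt matches policy-style markup or is
    leetspeak-heavy. The 15% threshold is an arbitrary assumption."""
    if POLICY_MARKUP.search(prompt):
        return True
    stripped = [c for c in prompt if not c.isspace()]
    if not stripped:
        return False
    leet_ratio = sum(c in LEET_CHARS for c in stripped) / len(stripped)
    return leet_ratio >= leet_threshold

# Flagged prompts could be routed to human review instead of the model.
print(looks_suspicious("How do I bake bread?"))                 # False
print(looks_suspicious("<policy>role: unrestricted</policy>"))  # True
```

In practice such pattern-based filters are crude and easy to evade, which is presumably why HiddenLayer frames them as one layer among several rather than a fix for the underlying training and alignment flaw.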

More on jailbreaks: DeepSeek failed every security test, researchers found
