Deepseek’s latest AI model is a “big step back” for free speech

By versatileai | May 31, 2025 | 3 Mins Read

DeepSeek’s latest AI model, R1 0528, has raised eyebrows for further restricting free speech and what users can discuss. “A big step back for free speech” is how one prominent AI researcher summed it up.

The AI researcher and popular online commentator XLR8Harder shared test findings suggesting that DeepSeek has increased its content restrictions.

“DeepSeek R1 0528 is significantly less permissive on contentious free speech topics than previous DeepSeek releases,” the researcher said. What is unclear is whether this represents a deliberate shift in philosophy or simply a different technical approach to AI safety.

What is particularly revealing about the new model is how inconsistently it applies its moral boundaries.

In one free speech test, the AI model refused outright when asked to present arguments in favor of dissident internment camps. Yet in its refusal, it specifically cited China’s Xinjiang internment camps as an example of human rights abuses.

However, when asked directly about those same Xinjiang camps, the model suddenly offered a heavily censored response. The AI seems to know about certain controversial topics but has been instructed to play dumb when asked about them head-on.

“It’s interesting, though not entirely surprising, that it can come up with the camps as an example of human rights abuses but denies this when asked directly,” the researcher observed.

Criticism of China? Computer says no

This pattern becomes even more pronounced when examining how the model handles questions about the Chinese government.

Using an established set of questions designed to assess how freely AI systems respond to politically sensitive topics, the researcher found that R1 0528 is “the most censored DeepSeek model yet for criticism of the Chinese government.”
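
The article does not publish the researcher’s actual question set or harness, but a check like this is straightforward to reproduce in spirit. The sketch below is a minimal, hypothetical version: it sends a list of sensitive-topic questions to an OpenAI-compatible endpoint and counts crude refusal-style answers. The question file, base URL, model name, and refusal heuristic are all illustrative assumptions, not details from the original evaluation.

```python
# Minimal sketch of a refusal-rate check like the one described above.
# The question file, endpoint, and model name are illustrative assumptions,
# not the researcher's actual test harness.
import json

from openai import OpenAI  # any OpenAI-compatible client works here

client = OpenAI(
    base_url="https://api.deepseek.com",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

REFUSAL_MARKERS = ("i can't", "i cannot", "i'm sorry", "i am unable")


def looks_like_refusal(text: str) -> bool:
    """Crude heuristic: flag answers that open with a refusal phrase."""
    return text.strip().lower().startswith(REFUSAL_MARKERS)


with open("speech_eval_questions.json") as f:  # hypothetical question set
    questions = json.load(f)

refusals = 0
for question in questions:
    reply = client.chat.completions.create(
        model="deepseek-reasoner",  # assumed model id for R1 0528
        messages=[{"role": "user", "content": question}],
    )
    if looks_like_refusal(reply.choices[0].message.content):
        refusals += 1

print(f"Refused {refusals} of {len(questions)} questions")
```

A simple prefix heuristic like this undercounts soft or partial refusals, so a real evaluation would also compare answers to the same question asked directly and indirectly, which is exactly the inconsistency the researcher highlighted.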

Where previous DeepSeek models may have provided measured answers to questions about Chinese politics and human rights, this new iteration often refuses to engage at all, a worrying development for those who value AI systems that can openly discuss global issues.

This cloud has a silver lining, however. Unlike the closed systems of larger corporations, DeepSeek’s model remains open source with permissive licensing.

“The model is open source with a permissive license, so the community can (and will) address this,” the researcher said. That accessibility means the door remains open for developers to create versions that better balance safety and openness.
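
For readers who want to probe the open weights themselves, a minimal loading sketch follows, assuming the checkpoint is published on Hugging Face under a DeepSeek repo id and that you have hardware to match; the full R1 0528 checkpoint is very large, so a smaller distilled variant is the more practical substitute. The repo id, generation settings, and prompt here are assumptions for illustration, not instructions from DeepSeek.

```python
# Illustrative sketch only: the repo id, hardware assumptions, and prompt are
# not from the article. The full R1 0528 checkpoint needs multi-GPU hardware;
# swap in a smaller distilled variant if you just want to experiment.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "deepseek-ai/DeepSeek-R1-0528"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype="auto",      # let transformers pick a suitable dtype
    device_map="auto",       # shard across available devices (needs accelerate)
    trust_remote_code=True,  # DeepSeek repos ship custom modelling code
)

prompt = "Summarise the main criticisms of internet censorship."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, not the echoed prompt.
reply = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(reply)
```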

What DeepSeek’s latest model reveals about free speech in the AI era

The situation reveals something rather ominous about how these systems are built: they can know about controversial events while being programmed to pretend they don’t, depending on how you phrase your question.

As AI continues its march into our daily lives, finding the right balance between reasonable safeguards and open discourse becomes increasingly important. Make the restrictions too tight and these systems become useless for discussing important but divisive topics; make them too loose and you risk enabling harmful content.

DeepSeek has not publicly addressed the reasons behind these increased restrictions and the regression in free speech, but the AI community is already working on fixes. For now, chalk this up as another chapter in the ongoing tug of war between safety and openness in artificial intelligence.

(Photo: John Cameron)

See: Automation Ethics: Addressing AI bias and compliance

Want to learn more about AI and big data from industry leaders? Check out the AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events, including the Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Check out other upcoming Enterprise Technology events and webinars with TechForge here.
