Deepseek’s latest AI model, R1 0528, raised the eyebrows for further regression of free speech and what users can discuss. “The Great Retreat for Free Speech” is how one prominent AI researcher summed it up
The popular online commentator for AI researchers and popular AI researchers, XLR8Harder, shares research findings suggesting that DeepSeek is increasing content restrictions.
“Deepseek R1 0528 is significantly less tolerant on the more controversial topic of freedom of speech than previous Deepseek releases,” the researchers said. What’s unclear is whether this represents a deliberate change in philosophy or simply a different technical approach to AI safety.
What is particularly appealing about the new model is how consistently they do not apply moral boundaries.
One free speech test rejected the AI model completely when asked to present arguments in favor of opposition concentration camps. However, in its refusal, it specifically mentioned China’s New Jiang anti-disruption camp as an example of human rights abuses.
However, when I asked directly about these same new jiang camps, the model suddenly offered a highly censored response. This AI seems to know about certain controversial topics, but is instructed to play stupidly when asked in person.
“It’s not entirely surprising that camps can be conceived as an example of human rights abuses, but it’s not entirely surprising that they would denial when asked in person,” the researchers observed.
China’s criticism? Computer says no
This pattern becomes even more pronounced when examining the processing of the Chinese government question model.
Using an established set of questions designed to assess freedom of speech in AI responses to politically sensitive topics, the researchers found that R1 0528 was “the most censored deepshek model ever due to criticism of the Chinese government.”
If the previous Deepseek model may have provided measured answers to questions about Chinese politics and human rights issues, this new iteration is often a worrying development for those who value AI systems that can openly discuss global issues.
However, this cloud has a silver lining. Unlike large corporations’ closure systems, Deepseek’s model remains open source with acceptable licenses.
“This model is open source with an acceptable license, so the community can (and can) deal with this,” the researchers said. The accessibility means that the door remains open for developers to create versions that balance safety and openness.
What Deepseek’s latest model shows freedom of speech in the AI era
The situation reveals something very ominous about how these systems are built. They can learn about controversial events, as they are programmed to pretend not to, depending on how you rephrase your questions.
As AI continues its march into our daily lives, it becomes increasingly important to find the right balance between rational protection and open discourse. The limitations are too strong and these systems are useless to discuss important but divisive topics. You are too generous and risk enabling harmful content.
Deepseek has not publicly addressed the reasons behind these increased restrictions and the return of freedom of speech, but the AI community is already working on fixes. For now, choke this up as another chapter in the ongoing tug of war between the safety and openness of artificial intelligence.
(Photo: John Cameron)
See: Automation Ethics: Addressing AI bias and compliance
Want to learn more about AI and big data from industry leaders? Check out the AI & Big Data Expo in Amsterdam, California and London. The comprehensive event will be held in collaboration with other major events, including the Intelligent Automation Conference, Blockx, Digital Transformation Week, and Cyber Security & Cloud Expo.
Check out other upcoming Enterprise Technology events and webinars with TechForge here.