China has introduced strict regulations requiring mandatory labeling of AI-generated content. The move is part of the Chinese government's plan to tackle growing concerns over misinformation, fraud and copyright violations, as reported by the South China Morning Post.
Users must declare when content is generated by AI, and service providers must retain records of such content for at least six months. Tampering with or deleting AI labels is strictly prohibited, and violations will incur penalties.
These measures are part of China's broader effort to tighten control of the digital space. The Cyberspace Administration of China (CAC) has made AI regulation a major focus of its 2025 "Qinglang" (clear and bright) campaign.
The campaign targets the spread of AI-driven misinformation, manipulation and misuse. It also goes after the "internet water army": social media influencers and accounts paid to sway public opinion.
Other goals include monitoring short-video platforms, suppressing deceptive influencer marketing, and protecting minors online. In particular, the rapid rise of domestic AI models such as DeepSeek, Alibaba's Qwen and Manus has strengthened support for mandatory labeling.
Globally, countries are taking similar steps. The EU's AI Act requires labeling of AI-generated content, while the US and the UK are working on laws focused on transparency and compliance.
However, experts warn that labeling alone may not be enough. Regulating real-time AI applications such as live streams and voice calls remains difficult, watermarks and metadata can be easily modified or deleted, and inconsistent detection methods across platforms complicate enforcement.
Indian AI regulations
Although India does not yet have a dedicated AI law, it has launched key frameworks such as the National Strategy for Artificial Intelligence (2018), the Principles for Responsible AI (2021), and Operationalizing Principles for Responsible AI (2021). These aim to guide ethical, transparent and accountable AI development.