Updated: October 23, 2025 9:19 PM IST
Draft regulations for AI-generated content are first step towards eliminating deepfakes and synthetic media
The just-released draft regulations on artificial intelligence (AI)-generated content are India's first legal attempt to address deepfakes and synthetic media. The proposed amendments to the IT rules would require AI tools and social media platforms to label manipulated content, in response to concerns about election interference, misinformation and impersonation. The framework gets several things right. It provides legal clarity by defining "synthetically generated information" for the first time in Indian law, subjecting AI-generated material to existing takedown obligations, and stipulating specific labelling requirements: labels on images and videos must cover at least 10% of the display area, and audio must be identified within the first 10% of playback. Enforcement rests on two sets of actors: the companies that build such tools and the platforms that host user-generated content. Platforms must require users to declare whether uploaded content is synthetically generated, deploy automated detection systems to verify those declarations, and remove content flagged through complaint redress mechanisms.
These are important first steps. Deepfakes are being used to create sexually suggestive videos of celebrities, to perpetrate fraud (the Union finance minister was recently targeted), and to sidestep platform rules against explicit content. Tools like Sora and DALL-E evolve almost every quarter, producing increasingly convincing, entirely fictional images and video clips. Celebrities, aware that their faces, voices and mannerisms can easily be copied without their consent, are asking courts for injunctions against unauthorized use of their "likenesses."
But this first step needs stronger follow-through. The draft's definition, "all synthetically generated information", appears to cover AI-generated text such as ChatGPT output, yet it offers no guidance on how such content is to be labelled or fingerprinted. The draft is also silent on how the Centre plans to deal with media produced by underground tools, whose checks against nudity and gore are easily bypassed. These gaps may require separate legislation, an approach increasingly being adopted in other jurisdictions.
The consultation period provides an opportunity to address these gaps. Further steps to limit the damage caused by text-based misinformation must be considered, while avoiding heavy-handed approaches that stifle AI adoption and innovation. India is right to be proactive, but an area as complex as AI demands precision. The harder task ahead lies in building institutions of national digital literacy and media trust, the only real antidote to a slide into alternative realities where facts are increasingly contested. The draft lays the foundation; the rest of the legislative structure must now be planned.

