The government on Wednesday proposed changes to the IT rules that would require clear labeling of AI-generated content and strengthen the accountability of major platforms such as Facebook and YouTube, which would have to verify and flag synthetic information to limit user harm from deepfakes and misinformation.
“All we’re asking for is labeling the content… We have to label it to say whether certain content is synthetically generated or not. We’re not saying don’t post it or don’t do this or that. It doesn’t matter what you’re creating, it just says it’s synthetically generated. So once it says synthetically generated, people can decide whether it’s good or bad or whatever,” Krishnan said.
He said India’s approach to artificial intelligence (AI) adoption will prioritize innovation first, followed by regulation only where necessary, adding that the responsibility for implementing new labeling requirements will be shared by users, AI service providers and social media platforms.
Krishnan noted that providers of the computer resources and software used to create synthetic content will need to attach labels that are prominently visible and cannot be removed.
Enforcement measures, he said, apply only to illegal content and "apply to all content, not just AI content."
The proposed amendments to the IT Rules would provide a clear legal basis for labeling, traceability and accountability in relation to synthetically generated information.
Apart from clearly defining synthetically generated information, the draft amendments, which are open for stakeholder comments until November 6, 2025, require labeling, visibility and metadata embedding for synthetically generated or modified information so that such content can be distinguished from real media.
The stricter rules would strengthen the accountability of significant social media intermediaries (those with more than 5 million registered users) in verifying and flagging synthetic information through reasonable and appropriate technical means.

