India considers mandatory labeling of AI content, probes growing deepfake threat

By versatileai | October 24, 2025

Labeling of AI content in India: In a bid to stem the “increasing misuse of synthetically generated information, including deepfakes”, the Centre has proposed draft rules that would make it mandatory to label artificial intelligence (AI)-generated content on social media platforms such as YouTube and Instagram. Platforms will be required to ask users to declare whether uploaded content is “synthetically generated information”.

According to the draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, platforms that allow the creation of AI content will be required to prominently label such content or embed persistent unique metadata or identifiers. For visual content, the label must cover at least 10 percent of the total surface area; for audio content, the label must cover the first 10 percent of the total playback time.
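The 10 percent requirement is concrete enough to compute directly. A minimal sketch of how a platform might derive the minimum label size and duration from the draft rule (function names are illustrative, not from the rules):

```python
def min_label_area_px(width_px: int, height_px: int) -> int:
    # Visual content: label must cover at least 10 percent of the
    # total surface area. Integer ceiling division avoids float error.
    return (width_px * height_px + 9) // 10

def label_window_s(total_playback_s: float) -> float:
    # Audio content: label must occupy the first 10 percent
    # of the total playback time.
    return total_playback_s * 0.10
```

For a 1920x1080 video frame, for example, the label would need to cover at least 207,360 pixels, and a five-minute audio clip would carry the label for its first 30 seconds.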

Deepfakes are digitally altered videos, typically used to spread false information. In the Indian context, the issue first surfaced in 2023, when a deepfake video of actor Rashmika Mandanna entering an elevator went viral on social media. Soon after that incident, Prime Minister Narendra Modi called deepfakes a new “crisis.”

What India is proposing


According to the draft amendments, social media platforms will have to: let users declare whether uploaded content is synthetically generated; implement “reasonable and appropriate technical measures”, including automated tools or other suitable mechanisms, to verify the accuracy of such declarations; and, where the declaration or technical verification confirms that the content is synthetically generated, ensure that this fact is clearly and conspicuously displayed with an appropriate label or notice.
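The three obligations above amount to a simple moderation step at upload time. A hedged sketch, in which the detector is a hypothetical stand-in for whatever “reasonable and appropriate technical measures” a platform actually adopts:

```python
from typing import Callable

def process_upload(content: bytes,
                   user_declared_synthetic: bool,
                   detector: Callable[[bytes], bool]) -> dict:
    # Obligation 1: the user's declaration is collected at upload.
    # Obligation 2: a technical measure (hypothetical automated
    # detector) verifies the accuracy of that declaration.
    verified_synthetic = detector(content)
    is_synthetic = user_declared_synthetic or verified_synthetic
    # Obligation 3: confirmed synthetic content must carry a clear,
    # conspicuous label or notice when displayed.
    return {
        "content": content,
        "label": "Synthetically generated information" if is_synthetic else None,
    }
```

Note that the label applies if either the declaration or the detector flags the content, which is why the draft ties safe-harbour protection to verifying declarations rather than merely collecting them.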

Failure to do so may result in the platform losing the legal immunity it enjoys from third-party content. This means that the platform’s responsibility extends to verifying the correctness of user declarations and taking reasonable and appropriate technical measures to ensure that synthetically generated information is not published without such declarations or labels.

The draft amendments introduce a new clause that defines synthetically generated information as “information that is artificially or algorithmically created, produced, altered or modified using computer resources in a manner that reasonably appears to be genuine or truthful.”

Some form of labeling is already happening online

Companies like Meta and Google have already introduced some form of AI labeling on their platforms, asking creators when uploading content whether it was created using AI. For example, on Instagram, Meta applies an “AI Information” label to content modified or created using AI, but enforcement remains spotty as some AI content on the platform does not appear to have a label.


Last year, Meta said that, with AI-generated content proliferating across the internet, it was working with other companies in the industry to develop common standards for identifying such content through forums such as the Partnership on AI (PAI). The company said it was also building tools that can identify invisible markers at scale, allowing it to label images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock.

YouTube, for its part, adds an “altered or synthetic content” label to videos created using AI, along with a description of how the video was made. This offers insight into the origin of the content and whether it has been meaningfully altered using AI.

As of now, however, most of these measures are reactive: the label often appears only after a video has come to the platform’s attention, in cases where the creator did not declare that the content was created using AI.

India’s amendments go a step further, requiring companies to put appropriate technical measures in place to proactively verify AI content on their platforms, rather than acting only after content is flagged.


Indian entertainment industry fights deepfakes

The debate over the pitfalls of AI-generated deepfakes has taken the entertainment world by storm, with several prominent actors, including Amitabh Bachchan, Aishwarya Rai Bachchan, Akshay Kumar and Hrithik Roshan, filing lawsuits to protect their “moral rights” amid the proliferation of AI-generated videos that appropriate their faces, voices and likenesses.

India’s protections for moral rights are relatively weak compared to other jurisdictions. Experts say moral rights are not explicitly recognized in India and are safeguarded only through a patchwork of other laws that may protect them indirectly.

This point was particularly highlighted when the makers of the popular film ‘Raanjhanaa’ used AI to change the ending of the film without the consent of the director and actors, much to their dismay.

How are other countries tackling deepfakes?

Under the European Union’s AI Act, AI providers must mark synthetic speech, images, video, and text in a machine-readable way so that it can be detected as artificially generated. Deployers of AI systems that create deepfakes, or that generate text published to inform the public on matters of public interest, must also disclose when the material is artificially generated or modified.


China also introduced AI labeling rules last month, requiring content providers to display clear labels identifying material created by artificial intelligence. Visible AI symbols are required for chatbots, AI writing, synthetic speech, face swapping, and immersive scene editing. For other AI-generated content, hidden tags such as watermarks are sufficient. Platforms must also act as watchdogs: if AI-generated content is detected or suspected, they should alert users and apply their own labels.
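As a rough sketch of that two-tier scheme (the category names here are my own shorthand, not the regulation’s taxonomy):

```python
# Formats described as requiring a visible AI symbol.
VISIBLE_LABEL_FORMATS = {
    "chatbot", "ai_writing", "synthetic_speech",
    "face_swap", "immersive_scene_editing",
}

def required_label(content_format: str) -> str:
    # Other AI-generated content may carry only a hidden tag,
    # such as a watermark.
    if content_format in VISIBLE_LABEL_FORMATS:
        return "visible symbol"
    return "hidden tag (e.g. watermark)"
```

The design choice worth noting is the split: conspicuous labeling is reserved for formats most likely to deceive (voices, faces, conversational agents), while machine-readable marking applies more broadly.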

Denmark has taken a fundamentally different approach. The country is proposing a bill aimed at protecting citizens from deepfakes by giving them copyright over their own likenesses. If passed, the law would allow anyone to request the deletion of digitally altered photos and videos created without their consent.
