From deepfakes to disclosure: insights from three global case studies of PAI frameworks

March 19, 2025

While many people use AI for productivity and creativity, the rapid advancement of AI is also accelerating its real potential for harm. AI-generated content, including audio and video deepfakes, has been used to spread misinformation in elections, manipulate public perceptions of candidates, and undermine trust in democratic processes. Attacks on vulnerable groups such as women, through the creation and spread of deepfakes and other intimate, nonconsensual imagery, have shaken communities and left organizations scrambling to mitigate future harm.

To reduce the spread of misleading AI-generated content, organizations have begun to deploy transparency measures. Recently, policymakers in China and Spain announced efforts to require labels on AI-generated content distributed online. Governments and organizations are taking steps in the right direction to regulate AI-generated content, but more comprehensive action on a global scale is urgently needed. PAI is working to bring together organizations across civil society, industry, government, and academia to develop comprehensive guidelines that build public trust in AI, protect users, and promote audience understanding of synthetic content.

Governments and organizations are taking steps in the right direction to regulate AI-generated content, but more comprehensive action on a global scale is urgently needed.

PAI’s Responsible Practices for Synthetic Media: A Framework for Collective Action provides timely, normative guidance for the creation, distribution, and use of synthetic media. The framework helps AI tool builders, as well as synthetic content creators and distributors, coordinate best practices that advance the responsible use of synthetic media and protect users. It is supported by 18 organizations, each of which submits a case study examining how the framework applies in the real world.

As we approach the conclusion of the case study collection in its current format, we are excited to publish the final round of case studies, from Google and from civil society organizations Meedan and Code for Africa. These three case studies explore how synthetic media affects elections and political content, how misleading gendered content can be curbed, and how transparency signals can help users make informed decisions about content.

Code for Africa explores the impact of synthetic content on elections

In May 2024, a few weeks before the South African general election, a political party’s use of generative AI tools sparked controversy when it distributed a video showing the South African flag burning. The video was AI-generated, but the lack of disclosure drew outrage from voters and prompted the South African president to call the video offensive.

The burden of interpreting generative AI content should be placed on the organizations building, creating, and distributing the content, not on viewers themselves.

In its case study, Code for Africa argues for full disclosure of all AI-generated or AI-edited content, increased training for newsroom staff on how to use generative AI tools, journalistic policies updated to account for advances in AI, and greater engagement with audiences on journalistic standards. In particular, it emphasizes that the burden of interpreting generative AI content should be placed on the organizations building, creating, and distributing the content, not on viewers themselves.

Although these recommendations could not have prevented the video’s creation and dissemination, the case study highlights the importance of direct disclosure, as recommended by our framework. Direct disclosure by the video’s creators might have eased the public backlash and some of the subsequent fallout. With direct disclosure such as labeling, viewers could have distinguished between fact and AI-generated media and kept their focus on the key messages.

Read the case study

Google’s approach to direct disclosure

Understanding the importance of user feedback when implementing direct disclosure mechanisms, Google conducted research to identify which mechanisms are most effective and useful for users. The findings informed Google’s approach to direct disclosure.

Key considerations included how prominent labels should be; implied truth effects (if some content is labeled as AI-generated, people may believe that unlabeled content must be authentic); the liar’s dividend (the ability to dismiss authentic content as fake due to the prevalence of synthetic content); and common misconceptions about direct disclosure.

These takeaways helped Google develop disclosure solutions for three surfaces: YouTube, Search, and Google Ads. Google found that disclosure should provide context beyond “AI or not” to support audiences’ understanding of the content. An AI disclosure is only one data point that helps users judge the reliability of content, alongside other signals such as “What is the source of this content?”, “How old is it?”, and “Where else does this content appear?”

Disclosures must provide context that goes beyond “AI or not” to support audiences’ understanding of the content.

Additionally, Google recommends further research to better understand user needs, media literacy levels, and how disclosures are understood and what impact they have. By better understanding how users interpret direct disclosures and make decisions about content, platforms can implement scalable and effective disclosure mechanisms that support synthetic content transparency and help audiences understand what they are seeing.

These recommendations are in line with how direct disclosure is defined in the framework: viewer- or listener-facing disclosure that includes, but is not limited to, content labels, context notes, watermarks, and disclaimers. They also align with the framework’s three key principles of transparency, consent, and disclosure.

Read the case study

Meedan identifies harmful synthetic content in South Asia

Check is an open source platform created by Meedan that connects users with journalists and civil society organizations on closed messaging platforms such as WhatsApp. Through Check, users can help identify and expose malicious synthetic content. By using Check in a research project with local partners, Meedan was able to identify synthetic components in misleading gendered content in South Asia.

In its case study, Meedan recommends that platforms create localized escalation channels that improve content monitoring and screening and take into account a wide range of contexts and regions. Once implemented, these methods would help platforms reduce the spread of malicious content shared among “majority world” communities (Meedan’s preferred term for the global south) and better support local efforts to address it.

The use of direct disclosure may help researchers identify synthetic content faster.

The framework recommends that creators disclose synthetic content directly. In this example, direct disclosure could have helped researchers identify synthetic content faster. The case study highlights not only the need for direct disclosure but also the importance of considering localized contexts when attempting to mitigate harm.

Read the case study

What’s next?

Developing comprehensive global regulations and best practices requires the support of organizations across a variety of fields, including industry, academia, civil society, and government. The iterative case reporting process between PAI and supporter organizations shows how supporters in these areas can achieve real-world change.

The transparency and willingness of these organizations to provide insight into their efforts to govern synthetic media responsibly is a step in the right direction. Our March 2024 analysis recognized the importance of voluntary frameworks for AI governance. We aim to draw further insights from these case studies into how policy and technology decisions are made, building a body of evidence about how AI policies are implemented in practice and furthering consensus on best practices as synthetic media policies evolve.

These case studies span a variety of impact areas and explore different mitigation strategies. This work from our supporters contributes to improving the framework, advancing future synthetic media governance, and clarifying how its guidance can best be implemented by builders, creators, and distributors.

Over the next few months, we will incorporate lessons learned from these case studies into the framework, ensuring that its guidance keeps pace with shifts in the AI field. We will also publish an analysis of key takeaways, open questions, and future directions, and hold public programming addressing some of these themes. To stay up to date with this work, sign up for our newsletter.
