While many people use AI for productivity and creativity, the rapid advancement of AI is also accelerating its real potential for harm. AI-generated content, including audio and video deepfakes, has been used to spread misinformation in elections, manipulate public perceptions of candidates, and undermine trust in democratic processes. Attacks on vulnerable groups, particularly women, through the creation and spread of deepfakes and other nonconsensual intimate imagery have shaken communities, leaving organizations scrambling to mitigate future harm.
To reduce the spread of misleading AI-generated content, organizations have begun to deploy transparency measures. Recently, policymakers in China and Spain announced efforts to require labels for AI-generated content distributed online. Governments and organizations are taking steps in the right direction to regulate AI-generated content, but more comprehensive action on a global scale is urgently needed. PAI is working to bring together organizations across civil society, industry, government, and academia to develop comprehensive guidelines that build public trust in AI, protect users, and promote audience understanding of synthetic content.
Governments and organizations are taking steps in the right direction to regulate AI-generated content, but more comprehensive action on a global scale is urgently needed.
PAI’s Responsible Practices for Synthetic Media: A Framework of Collective Action provides timely, normative guidance for the creation, distribution, and use of synthetic media. The Framework helps AI tool builders, as well as synthetic content creators and distributors, coordinate on best practices that advance the responsible use of synthetic media and protect users. It is supported by 18 organizations, each of which submitted a case study examining how the Framework applies in the real world.
As the case study collection draws to a close in its current format, we are excited to publish the final round of case studies, from Google and from civil society organizations Meedan and Code for Africa. These three case studies explore how synthetic media affects elections and political content, how misleading, gendered content can be curbed, and how transparency signals can help users make informed decisions about content.
Code for Africa explores the impact of synthetic content on elections
In May 2024, a few weeks before the South African general election, a political party’s use of generative AI tools sparked controversy when it distributed a video showing the South African flag burning. The video was AI-generated, but the lack of disclosure drew outrage from voters and a statement from the South African president calling the video offensive.
The burden of interpreting generative AI content should be placed on those building, creating, and distributing the content, not on viewers themselves.
In its case study, Code for Africa argues for full disclosure of all AI-generated or AI-edited content, increased training for newsroom staff on how to use generative AI tools, updated journalistic policies that account for advances in AI, and greater transparency with audiences about journalistic standards. In particular, it emphasizes that the burden of interpreting generative AI content should be placed on those building, creating, and distributing the content, not on viewers themselves.
Although these recommendations would not have prevented the video’s creation and dissemination, the case study highlights the importance of direct disclosure, as recommended by our Framework. Direct disclosure by the video’s creators might have eased the public backlash and some of the subsequent fallout. With direct disclosure mechanisms such as labeling, viewers could have distinguished between fact and AI-generated media and kept their focus on the key message.
Read the case study
Google’s approach to direct disclosure
Understanding the importance of user feedback when implementing direct disclosure mechanisms, Google conducted research to identify which mechanisms are most effective and useful for users. Among the findings that informed Google’s approach to direct disclosure: how prominent labels should be; the implied truth effect (if some content is labeled as AI-generated, people may believe that unlabeled content must be authentic); the liar’s dividend (the ability to dismiss authentic content as fake because synthetic content is so prevalent); and common misconceptions about direct disclosure.
These takeaways helped Google develop disclosure solutions for three surfaces: YouTube, Search, and Google Ads. Google found that disclosures should offer context beyond “AI or not” to support audiences’ understanding of the content. An AI disclosure provides only one data point for users to judge the reliability of content, alongside other signals such as “What is the source of this content?”, “How old is it?”, and “Where else does this content appear?”
Disclosures must offer context beyond “AI or not” to support audiences’ understanding of the content.
Additionally, Google recommends further research to better understand user needs, media literacy levels, and the comprehension and impact of disclosures. By better understanding how users interpret direct disclosures and make decisions about content, platforms can implement scalable and effective disclosure mechanisms that support synthetic-content transparency and audience understanding.
These recommendations align with how direct disclosure is defined in the Framework: it is viewer- or listener-facing and “includes, but is not limited to, content labels, context notes, watermarks, and disclaimers.” They also align with the Framework’s three key principles of transparency, consent, and disclosure.
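To make that definition concrete, here is a minimal, purely illustrative sketch of what a machine-readable direct-disclosure label attached to media metadata might look like. This is not part of the Framework and not a real standard (production systems would use a provenance specification such as C2PA); every field name here is a hypothetical example.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class DirectDisclosure:
    """Hypothetical viewer-facing disclosure record (illustrative only)."""
    label: str         # short viewer-facing label, e.g. "AI-generated"
    method: str        # how the content was made or altered
    context_note: str  # context beyond "AI or not"

def attach_disclosure(media_metadata: dict, disclosure: DirectDisclosure) -> dict:
    """Return a copy of the media metadata with the disclosure embedded."""
    enriched = dict(media_metadata)
    enriched["direct_disclosure"] = asdict(disclosure)
    return enriched

# Example: label a fully synthetic political video before distribution.
meta = attach_disclosure(
    {"title": "Campaign clip", "duration_s": 42},
    DirectDisclosure(
        label="AI-generated",
        method="fully synthetic video",
        context_note="Created as political commentary; not real footage.",
    ),
)
print(json.dumps(meta, indent=2))
```

The point of the sketch is only that a disclosure travels with the content as structured data, so a platform surface (a player, a search result, an ad slot) can render it as a label or context note rather than relying on viewers to guess.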
Read the case study
Meedan identifies harmful synthetic content in South Asia
Check is an open source platform created by Meedan that connects users with journalists and civil society organizations on closed messaging platforms such as WhatsApp. Through Check, users can help identify and expose malicious synthetic content. By using Check in a research project with local partners, Meedan was able to identify the presence of synthetic media in misleading, gendered content in South Asia.
In its case study, Meedan recommends that platforms create localized escalation channels, allowing them to improve content monitoring and review while accounting for a wide range of contexts and regions. Once implemented, these measures would help platforms reduce the spread of malicious content shared among “majority world” communities (Meedan’s preferred term for the Global South) and better support local efforts to address it.
The use of direct disclosure may help researchers identify synthetic content faster.
The Framework recommends that creators disclose synthetic content directly. In this example, the use of direct disclosure could have helped researchers identify synthetic content faster. The case study highlights not only the need for direct disclosure, but also the importance of considering localized contexts when attempting to mitigate harm.
Read the case study
What’s next?
Developing comprehensive global regulations and best practices requires the support of organizations across a variety of fields, including industry, academia, civil society, and government. The iterative case-reporting process between PAI and its supporter organizations shows how supporters in these areas can achieve real-world change.
The transparency and willingness of these organizations to provide insight into their efforts to govern synthetic media responsibly is a step in the right direction. Our March 2024 analysis recognized the importance of voluntary frameworks for AI governance. We aim to draw further insights from these case studies into how policy and technical decisions are made, building a body of evidence on real-world AI policy implementation and furthering consensus on best practices as synthetic media policy evolves.
These case studies span a variety of impact areas and explore different mitigation strategies. This work from our supporters contributes to refining the Framework, advancing future synthetic media governance, and clarifying how its guidance can best be implemented by builders, creators, and distributors.
Over the next few months, we will incorporate lessons learned from these case studies into the Framework, ensuring that its guidance keeps pace with shifts in the AI field. We will also host public programming addressing some of these themes and publish an analysis of key takeaways, open questions, and future directions for the field. To stay up to date on this work, sign up for our newsletter.