AI is making it easier to manipulate and generate media, creating challenges to truth and trust online. In response, policymakers and AI practitioners are understandably calling for greater audience transparency regarding AI-generated content.
But, as Leibovitz discussed in a recent article, what does an audience deserve to know? And how, and by whom, should that information be communicated to support truth and trust online? Should it be communicated at all?
Launched in February 2023, PAI’s Synthetic Media Framework is supported by 18 leading organizations involved in creating, distributing, and building synthetic media. Each supporting organization committed to publishing a detailed case study exploring how the Framework works in practice. We released the first 10 case studies in March 2024, focusing on the broad themes of transparency, consent, and harmful/responsible use cases, along with a case study and accompanying analysis drafted by PAI.
A new series of case studies, to be released later this year, will focus on direct disclosure: how viewers are told, through labels or other visible signals, that content has been created or modified with AI. The takeaways below reflect a moment in time in a rapidly evolving field, and the detailed case studies add texture and nuance to the following themes:
1. People do not perceive AI labels as merely technical, neutral signals.
Many media consumers interpret AI labels, which are intended to neutrally describe how content was produced, as prescriptive signals indicating whether content is “true” or “false.”
2. Most edits made with AI tools do not materially change the meaning or context of content.
One organization reported that the majority of AI-edited content posted to its platform did not involve material alterations; rather, most of it consisted of cosmetic or artistic edits that did not meaningfully change what the content conveys.
3. Creators often do not know that they are using AI to create content.
One social media platform reported that many creators were unaware they had made AI edits that prompted the platform to label their content. This makes opt-in labeling, in which users are asked to tag their own content as AI-edited, especially difficult for platforms to rely on if they want to disclose all AI-edited content consistently.
4. Direct disclosure must reveal how content has been materially altered.
Knowing that an image was edited or created with AI matters most when the media has been fundamentally or substantively altered in a way that could mislead viewers. For example, an image of an astronaut walking on the moon without a helmet could mislead audiences into thinking humans can breathe in space. By contrast, enhancing an image of the night sky to make stars clearer does not substantially change the content, and may even make the image more faithful to reality.
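To make the distinction concrete, here is a minimal, illustrative Python sketch of how a platform might separate material alterations, which warrant a disclosure describing what changed, from cosmetic ones. The edit categories and function names are hypothetical assumptions for the sake of the example, not drawn from PAI guidance or any platform’s implementation.

```python
from dataclasses import dataclass

# Hypothetical edit categories; what counts as a "material" edit is still an
# open question (see "What happens next" below). COSMETIC_EDITS is shown only
# as the contrasting category.
MATERIAL_EDITS = {
    "object_added_or_removed",   # e.g., removing the astronaut's helmet
    "face_or_voice_swapped",
    "scene_fully_synthesized",
}
COSMETIC_EDITS = {
    "color_correction",
    "sharpening",                # e.g., making stars in a sky photo clearer
    "noise_reduction",
    "crop_or_resize",
}

@dataclass
class Edit:
    kind: str         # one of the categories above
    description: str  # human-readable summary of what changed

def disclosure_text(edits: list[Edit]) -> str | None:
    """Return a viewer-facing note describing *how* the content was materially
    altered, or None when only cosmetic edits were made."""
    material = [e for e in edits if e.kind in MATERIAL_EDITS]
    if not material:
        return None  # cosmetic-only edits: no materiality-based label
    return "Altered with AI: " + "; ".join(e.description for e in material)

# The astronaut example above would be labeled; the star-enhancement one would not.
print(disclosure_text([Edit("object_added_or_removed", "helmet removed from astronaut")]))
print(disclosure_text([Edit("sharpening", "stars sharpened")]))
```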
5. There is no “one size fits all” method for directly disclosing content.
A study by one social media platform found that users expect more transparency for content that is fully synthetic, photorealistic, depicts current events, or shows people doing or saying things they did not do or say. Users of another platform suggested that AI-edited content and AI-generated content should be labeled entirely differently.
6. Direct disclosure visuals should become more standardized across platforms.
Organizations use a variety of icons to indicate and explain AI disclosures and content authenticity. This patchwork of direct disclosure approaches leaves users without strong mental models of what constitutes authentic media or how it is described, making it easier for bad actors to mislead audiences. A common visual language for disclosure is essential, and it requires coordination not only on how the presence of provenance information is expressed, but also on how to avoid overconfidence in uncertain technical signals.
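As a sketch of what shared-vocabulary coordination could look like at the data level, the Python snippet below maps platform-specific label strings onto a common set of disclosure categories. The categories, platform names, and label strings are all invented for illustration; no such cross-platform standard exists today.

```python
from enum import Enum

class Disclosure(Enum):
    """A hypothetical shared vocabulary of disclosure categories."""
    AI_GENERATED = "Created with AI"
    AI_EDITED_MATERIAL = "Materially altered with AI"
    AI_EDITED_COSMETIC = "Retouched with AI"
    UNMAPPED = "AI disclosure present, category not standardized"

# Each platform keeps its own wording and icons but maps them onto the shared
# categories, helping audiences build a single mental model across services.
PLATFORM_LABEL_MAP = {
    ("platform_a", "AI badge"): Disclosure.AI_GENERATED,
    ("platform_b", "AI info panel"): Disclosure.AI_EDITED_MATERIAL,
    ("platform_c", "Retouched tag"): Disclosure.AI_EDITED_COSMETIC,
}

def normalize(platform: str, label: str) -> Disclosure:
    """Map a platform-specific label to the shared vocabulary, falling back to
    a generic category rather than guessing or overstating certainty."""
    return PLATFORM_LABEL_MAP.get((platform, label), Disclosure.UNMAPPED)

print(normalize("platform_a", "AI badge").value)        # Created with AI
print(normalize("platform_x", "Custom sticker").value)  # not yet standardized
```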
7. Social media platforms have erred on the side of over-labeling.
Some social media platforms were understandably concerned about AI being used to enhance or manipulate election-related information, especially given the historic number of elections in 2024. As a result, some platforms decided it was better to apply too many AI labels than too few, while recognizing the limitations of how labels are applied and how audiences understand them (discussed in Themes #1-5).
8. Social media platforms rely on signals from builders to apply direct disclosure.
Technology platforms are often the first place users encounter synthetic media, so they play an outsized role in disclosing directly to users how the content they interact with was made. To do so, however, they often depend on accurate indirect disclosure signals, such as provenance metadata or watermarks, passed along from upstream AI tool developers and builders.
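The sketch below, again purely illustrative, shows how a platform-side labeling decision might combine an indirect signal passed along by an upstream tool with a creator’s self-disclosure. The field names, thresholds, and label wording are assumptions made for the example, not an actual metadata standard or platform API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UpstreamSignal:
    """Hypothetical indirect-disclosure signal attached by an upstream AI tool,
    e.g., provenance metadata or a watermark-detection result."""
    tool_name: str
    generative_ai_used: bool
    confidence: float  # how reliable the platform judges the signal to be

def decide_label(signal: Optional[UpstreamSignal],
                 creator_declared_ai: bool) -> Optional[str]:
    """Decide whether (and how) to apply a direct-disclosure label.

    Creator self-disclosure alone is unreliable (Theme #3), and upstream
    signals can be missing or stripped (Theme #9), so both are considered.
    """
    if creator_declared_ai:
        return "Creator disclosed: made or edited with AI"
    if signal is not None and signal.generative_ai_used:
        if signal.confidence >= 0.9:
            return f"AI use indicated by {signal.tool_name} metadata"
        # Uncertain signals should not be presented as certainty (Theme #6).
        return "This content may have been created or edited with AI"
    return None  # no signal reaches the platform: nothing to label

print(decide_label(UpstreamSignal("image_tool_x", True, 0.97), False))
print(decide_label(None, True))
print(decide_label(None, False))
```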
9. Direct disclosure does not completely stop malicious activity.
Direct disclosure is most useful to good-faith actors. Even when synthetic media model builders implement disclosure mechanisms, malicious actors seeking to create harmful material, such as AI-generated child sexual abuse material (CSAM), may fine-tune or otherwise modify models to evade or strip those mechanisms. Additional mitigations should therefore be pursued, including efforts to remove CSAM from AI model training datasets.
10. User education needs to be better resourced and coordinated.
User education on AI and media literacy is widely cited as necessary for media transparency, yet too few resources are devoted to developing, implementing, and coordinating it across sectors, particularly through institutions already trusted in civic life. Industry, media, and civil society organizations all have a role to play in educating the public about what disclosures do and do not mean (for example, disclosing a piece of content’s provenance or edit history does not necessarily verify its accuracy).
What happens next
In the coming months, we will publish the full-length case studies featuring the evidence that informed these themes. Future programmatic efforts at PAI will specifically address these challenges around AI transparency by creating:
Guidance on what counts as a “material” or “fundamental” AI edit that justifies disclosure.
Coordinated media and AI literacy education campaigns with cross-sector stakeholders to complement indirect and direct disclosure.
Policy and practitioner recommendations on how to implement and adopt indirect and direct disclosure.
Updates and clarifications to the Synthetic Media Framework itself, including potential adaptations for agentic AI.
To keep up to date with our work in this field, please subscribe to our newsletter.