SAN FRANCISCO – November 19, 2024 – The Partnership on AI (PAI), a nonprofit multi-stakeholder organization dedicated to responsible AI, today announced a new initiative focused on mitigating synthetic media risks through implementation of its Synthetic Media Framework, publishing five new case studies on the topic. Featuring contributors from Meta, Microsoft, Truepic, Thorn, and the Stanford Institute for Human-Centered AI, the case studies explore an underexamined area of synthetic media governance known as direct disclosure: the methods and labels used to communicate to audiences that content has been modified or created with AI.
“Together, we must build a media ecosystem where transparency and truth go hand in hand, and that extends to how disclosure itself is communicated,” said Claire Leibowicz, Head of AI and Media Integrity at the Partnership on AI. “Ensuring transparency and strengthening digital integrity is critical, and will become even more important in the coming years as AI touches every way people create, distribute, and interact with media.”
Each of the five case studies illustrates a different combination of the technical and humanistic considerations involved in disclosing content responsibly and avoiding harmful uses of synthetic media.
“As synthetic media becomes more sophisticated and ubiquitous, transparency around its creation is key to building and maintaining public trust,” said Rebecca Finlay, CEO of the Partnership on AI. “By sharing real-world insights, we hope to help others navigate the complexities of synthetic media and contribute to a more trustworthy digital environment.”
Policy recommendations
Based on this research, PAI urges policymakers and practitioners to adopt the following five recommendations to promote transparency, trust, and informed decision-making for media consumers in the age of AI:
Better define what counts as a significant use of AI based on multi-stakeholder input and user research.
Support rich, descriptive context around content, beyond whether the media is AI-generated or edited.
Standardize what is disclosed about content, and the visual signals for doing so.
Resource and coordinate user education efforts on AI and information.
Accompany direct disclosure policies with backend mitigations.
PAI’s Synthetic Media Framework launched in February 2023 and has institutional support from 18 organizations. In March 2024, the group published 10 detailed case studies focused on transparency, consent, and harmful/responsible use. This latest collection, centered on direct disclosure, complements that earlier body of research. To learn more about PAI’s Responsible Practices for Synthetic Media: A Framework for Collective Action and to read the full case studies, please visit: https://syntheticmedia.partnershiponai.org/
Supporting quotes
“The aspiration to provide transparency into the origins and history of online content is becoming a reality with the adoption of the C2PA media provenance standard, which takes an ecosystem-wide approach spanning tool builders, content creators, and distribution platforms. We are supported by the Partnership on AI’s (PAI) Responsible Practices for Synthetic Media framework, which recognizes how companies can implement the C2PA standard. We are pleased to share what we have learned so far as an update on our experience.”
– Eric Horvitz, Chief Scientific Officer, Microsoft
“The misuse of generative AI models to create child sexual abuse material (AIG-CSAM) is one of the most pressing technical and policy issues facing the synthetic media space. PAI’s Framework outlines practices that are necessary but not sufficient to address the harms of AIG-CSAM, and we are grateful that it has highlighted the complexity of this issue and provided an opportunity for stakeholders to suggest ways to respond.”
– Riana Pfefferkorn, Policy Researcher, Stanford Institute for Human-Centered AI
“At Truepic, we believe the provenance of digital media is foundational to increasing trust and transparency in online content. Participating in PAI’s portfolio of case studies allowed us to demonstrate how deploying C2PA-compliant technology can have real-world impact, from protecting cultural heritage in conflict zones to increasing accountability in high-stakes scenarios. We applaud PAI for driving meaningful collaboration and progress in synthetic media governance, and we are proud to contribute to these collective efforts to build a more authentic and resilient information ecosystem.”
– Mounir Ibrahim, Chief Communications Officer and Head of Public Affairs, Truepic
“The Partnership on AI serves as a leader in the ecosystem, building common understanding and guidance for developing and deploying AI responsibly in ways that lead to concrete action. As a mission-driven organization dedicated to defending children from sexual abuse, Thorn is pleased to support PAI’s efforts with a case study highlighting some of the concrete actions needed to prevent generative AI from facilitating child sexual abuse. We all need to work together to make an impact, and we are pleased to collaborate with PAI in that effort.”
– Dr. Rebecca Portnoff, Vice President of Data Science, Thorn
About the Partnership on AI
The Partnership on AI (PAI) is a non-profit organization that brings together diverse stakeholders from academia, civil society, industry, and media to create solutions ensuring that artificial intelligence (AI) delivers positive outcomes for people and society. PAI develops tools, recommendations, and other resources by soliciting input and sharing insights and perspectives from within and beyond the AI community. These insights are synthesized into practical guidance used to accelerate the adoption of responsible AI practices, inform public policy, and advance public understanding of AI. For more information, please visit www.partnershiponai.org.
Media contact
Holly Grisky
pai@finnpartners.com