In 2024, PAI continued its work to promote truth and transparency across the digital media and information ecosystem, in a year when AI-manipulated media was expected to play a role in elections taking place around the world.
Earlier in the year, PAI launched a cross-disciplinary community of practice, creating space for shared learning on the various approaches to the challenges posed by the use of AI tools in elections.
Demonstrating the application of PAI’s Synthetic Media Framework, PAI published 16 detailed case studies from AI development companies such as Adobe and OpenAI, media organizations such as the CBC and the BBC, platforms such as Meta and Microsoft, and civil society organizations such as Thorn and WITNESS. The case studies are a requirement for Framework supporters, and through them we investigated how best practices for the responsible development, creation, and sharing of AI-generated media can be applied to real use cases.
The first set of case studies from Framework supporters, and the accompanying analyses, focused on transparency, consent, and harmful versus responsible use cases. The second set focused on less-explored areas of synthetic media governance, including direct disclosure: how to tell viewers that content has been modified or created with AI, through labels and other signals. Drawing on insights from these cases, PAI developed policy recommendations. If best practices for responsible synthetic media, such as disclosure, are not implemented alongside safety recommendations for open-source model builders, synthetic media can lead to real-world harm, such as the manipulation of democratic and political processes.