2024 was a historic year for global elections, with around 4 billion eligible voters casting votes across 72 countries. It was also a historic year for AI-generated content, which had a significant presence in elections around the world. Synthetic media, or media generated by AI (visual, auditory, or multimodal content generated or modified via artificial intelligence), can affect elections by disrupting voting procedures, distorting candidates’ narratives, and enabling the spread of harmful content. Widespread access to improved AI applications has increased both the quality and the volume of synthetic content in circulation, amplifying harm and mistrust.
Looking ahead to global elections in 2025 and beyond, it is important to recognize that one of the major harms of generative AI in the 2024 elections was the creation of deepnude images of female candidates. This type of content is not only harmful to the individuals depicted, but can also have a chilling effect on women’s political participation in future elections. The AI and Elections Community of Practice (COP) surfaced important insights like these, along with practical findings that can help inform policymakers and platforms seeking to protect future elections in the AI era.
As stakeholders and actors worked to anticipate, understand, and address the potential risks of generative AI during elections, the COP gave Partnership on AI (PAI) stakeholders a venue to present ongoing efforts, receive feedback from their peers, and discuss the difficult questions and trade-offs involved in deploying this technology. In the final three meetings of the eight-part series, PAI was joined by the Center for Democracy & Technology (CDT), the Collaboration on International ICT Policy for East and Southern Africa (CIPESA), and Digital Action to discuss the spread of election information, AI regulation in Africa, and AI’s impact on democracy worldwide.
Investigating the spread of election information with the Center for Democracy & Technology (CDT)
The Center for Democracy & Technology has worked for 30 years to advance civil rights and civil liberties in the digital age, including nearly a decade of research and policy work on trust, security, and accessibility in American elections. At the sixth meeting of the series, CDT presented two of its recently published research reports on the confluence of democracy, AI, and elections. The first report explores how chatbots from companies such as OpenAI, Anthropic, Mistral AI, and Meta handle election-related questions, particularly for voters with disabilities. The report found that 61% of responses from the tested chatbots were inadequate; in one case, a chatbot provided information citing non-existent laws. A quarter of responses could have prevented or discouraged voters from voting, raising concerns about the reliability of chatbots as sources of important election information. The second report examined political ads across social media platforms and how policy changes at seven major tech companies over the past four years have affected US elections. To improve how generative AI tools are leveraged in the context of elections, whether in chatbots or political ads, organizations should invest in user safety research, implement evaluation thresholds for deployment, and be fully transparent about product limitations.
AI regulation trends and democracy in Africa with CIPESA
A think tank, the Collaboration on International ICT Policy for East and Southern Africa focuses on technology policies and practices as they intersect with society, human rights, and livelihoods. At the seventh meeting of the series, CIPESA outlined its work on AI regulation and trends in Africa, touching on topics such as national and regional AI strategies, elections, and harmful content. As AI use continues to grow in Africa, most AI regulation across the continent focuses on the ethical use of AI and its impact on human rights, and lacks specific guidance on AI’s impact on elections. Case studies show that AI is undermining the integrity of elections on the continent and distorting public perceptions, given that many people have limited skills for identifying misleading content and verifying facts. A June 2024 report by Clemson University’s Media Forensics Hub found that 464 accounts had used large language models (LLMs) during elections in early 2024 to send over 650,000 messages attacking government critics. South Africa’s 2024 general election saw similar misuse of AI, with AI-generated content exploiting racial and xenophobic biases to target politicians and stir up voter sentiment. Examples include a deepfake depicting Donald Trump supporting the uMkhonto weSizwe (MK) party, and a manipulated 2009 video of rapper Eminem appearing to endorse the Economic Freedom Fighters (EFF). The discussion highlighted the need to keep a focus on AI as it advances locally, with particular attention to mitigating the challenges AI poses in election contexts. AI tools lower the barriers to entry for anyone seeking to disrupt elections, whether individuals, parties, or ruling governments.
As the use of AI tools grows in Africa, countries need to implement stronger regulation of AI in elections (without suppressing expression) and take steps to ensure that country-specific efforts form part of a broader regional strategy.
Catalyzing global change in AI and democracy with Digital Action
Digital Action is a nonprofit organization that mobilizes civil society organizations, activists, and funders around the world to counter digital threats and take joint action. At the eighth and final meeting of the PAI AI and Elections series, Digital Action shared an overview of the organization’s year-long Year of Democracy campaign. The discussion focused on protecting elections and the rights and freedoms of citizens around the world, and explored how social media content influenced elections. A main focus of Digital Action’s 2024 work was supporting the Global Coalition for Tech Justice, which called on big tech companies to make fully and fairly resourced efforts to protect the 2024 elections through specific, measurable demands. Many had expected to see high-profile examples of generative AI swaying election outcomes around the world; instead, its impacts took the form of corrosion of political campaigns, harm to individual candidates and communities, and broader damage to trust and future political participation. Elections in many countries, including Pakistan, Indonesia, India, South Africa, and Brazil, were influenced by AI-generated content shared on social media. In Brazil, deepnudes depicting two female politicians appeared on a social media platform and an adult content website in the lead-up to the 2024 local elections. One politician took legal action, but the slow pace of the court process and the lack of proactive steps by social media platforms prevented a timely remedy. To mitigate future harm, Digital Action has called on each major technology company to establish and publish a fully and equitably resourced action plan for every country holding elections. In doing so, tech companies can better protect groups, such as female politicians, who are often at heightened risk during election periods.
What’s next
PAI’s AI and Elections COP series concluded after eight convenings featuring presentations from industry, media, and civil society. Throughout the year, presenters offered participants diverse perspectives and real-world examples of how generative AI has influenced global elections and how platforms are working to combat harm from synthetic content.
Some of the key takeaways from the series:
Downballot candidates and female politicians are more vulnerable to the negative impacts of generative AI in elections. Although there were several attempts to influence national elections using generative AI (more on this can be read in PAI’s case studies), downballot candidates were often more vulnerable to harm than nationally recognized figures. In many cases, local candidates with few resources were unable to effectively counter harmful content. Deepfakes have also been shown to discourage the participation of female politicians in several general elections.

Platforms need to dedicate more resources to localizing the enforcement of their generative AI policies. Platforms protect users from harmful synthetic content by being transparent about the use of generative AI in election ads, providing resources to election officials to tackle election-related security challenges, and employing many of the disclosure mechanisms recommended by PAI’s Synthetic Media Framework. However, they lack language support and localized enforcement of those policies, which requires in-country cooperation with local governments, civil society organizations, and community organizations representing marginalized groups such as people with disabilities and women. As a result, generative AI has been used to cause real-world harm before being addressed.

Globally, more consistent regional strategies are needed to regulate the use of generative AI in elections in ways that balance free expression and safety. In the United States, the lack of federal law on the use of generative AI in elections has produced a patchwork of individual efforts by states and industry organizations, resulting in a fractured approach to keeping users safe without a cohesive overall strategy. In Africa, countries’ attempts to regulate AI vary widely. Some countries, such as Rwanda, Kenya, and Senegal, have adopted AI strategies that emphasize infrastructure and economic development but fail to address how to mitigate the risks generative AI poses to free and fair elections. Governments around the world have launched initiatives to catch up, but they must work with organizations at both the industry and state levels to implement best practices and lessons learned. These government efforts cannot exist in a vacuum: regulations must contribute to broader global governance efforts to regulate the use of generative AI in elections, ensuring both safety and the protection of free expression.
The AI and Elections Community of Practice has concluded, but we continue to move our work forward on the responsible development, creation, and sharing of synthetic media. To stay up to date on our work in this space, sign up for our newsletter.