Jordi Calvet-Bademunt is a senior research fellow at The Future of Free Speech and a visiting scholar at Vanderbilt University.
As the US and the EU shape their AI frameworks, they should consider lessons from recent experience. The fear-driven narrative surrounding AI in the latest elections, in which AI-generated content ultimately had limited impact, should warn policymakers against rushing into laws that could unintentionally undermine democratic values. Policymakers drafting future US action plans, state legislatures, and the authorities enforcing the EU AI Act should avoid blanket bans on political deepfakes and refrain from obliging AI models to conform to arbitrary values. Instead, the focus should be on promoting AI literacy and transparency, such as making data accessible to researchers.
Throughout 2023 and 2024, prominent media outlets expressed concern about the potential impact of AI on elections. In April 2024, the Washington Post warned readers: “AI deepfakes threaten to upend global elections. No one can stop them.” The Associated Press voiced similar concerns, warning that AI could supercharge disinformation and disrupt the EU elections. Many other reputable organizations echoed these warnings, which have circulated for years. Researchers have found that news consumption appears to be linked to increased voter concern about the impact of AI on elections.
Public concern mirrored the media warnings. In the US, a Pew survey last September found that 57% of adults across the political divide were highly concerned about AI-driven misinformation about the elections. Similarly, 40% of European voters feared the misuse of AI during elections. A vice president of the European Commission vividly described AI deepfakes of politicians as an “atomic bomb” capable of changing the course of voters’ preferences.
Several incidents involving AI-generated content did occur. Up to 20,000 voters in New Hampshire received robocalls featuring an AI-generated voice mimicking President Biden that misleadingly urged them not to vote. Former President Donald Trump shared AI-generated images depicting pop star Taylor Swift endorsing him, prompting Swift to respond on social media to correct the misinformation.
However, research suggests that the fear-driven narrative about AI in 2024 was not backed by the evidence. The Alan Turing Institute found no meaningful evidence that AI had altered the outcomes of the UK, French, European, or US elections. Similarly, Princeton’s Sayash Kapoor and Arvind Narayanan analyzed all 78 instances in the WIRED AI Elections Project and concluded that the feared “wave” of AI-driven misinformation never materialized. Half of the AI-generated content they analyzed was not even deceptive, and the deceptive content mostly reached audiences already inclined to believe it.
This does not mean that AI-generated misinformation had no effect at all. While it did not visibly sway voting behavior, it may have reinforced existing divisions. Furthermore, these conclusions may not apply equally in all settings, particularly in local elections and different national contexts, and may need updating as the technology evolves. Data shortages and limited transparency also pose significant challenges for assessing AI’s impact. Still, there is a consensus that the fears expressed in 2024 were considerably exaggerated.
Researchers also found that AI is not uniquely suited to spreading misinformation. Traditional methods, such as Photoshop and conventional video editing software, are inexpensive, widely accessible, and similarly effective. Importantly, AI’s limited impact on the elections cannot be attributed to AI-specific laws: the EU AI Act was not yet in force, and many US states, as well as the federal government, had no relevant regulations at the time.
Additionally, traditional, non-AI disinformation played a key role, such as false statements by political figures and well-worn conspiracy theories. Recent analyses of Slovakia’s 2023 parliamentary elections illustrate this. In that case, a deepfake audio clip went viral just before the vote, appearing to capture opposition leader Michal Šimečka discussing how to rig the election. Although quickly debunked, the clip initially raised serious concerns about its potential impact. Yet the alarm surrounding the incident overlooked a broader set of social factors, including distrust of institutions, pro-Russian sentiment, and the role of politicians in amplifying disinformation, all of which complicate any analysis of the deepfake’s effect. This example highlights the importance of addressing these underlying social factors when confronting misinformation.
By September 2024, 19 US states had enacted laws specifically targeting the use of AI in political campaigns, with several others considering similar measures. As of March 2025, three states (California, Minnesota, and Texas) had banned the creation or distribution of election-related deepfakes under certain circumstances, and three more (Maryland, Massachusetts, and New York) were considering similar legislation. A federal judge in California blocked one such law on free speech grounds, criticizing it for acting “as a hammer instead of a scalpel.” Minnesota’s similar law is currently facing judicial scrutiny.
Advocates of freedom of expression warn of the risks these laws pose. Minnesota’s law, for example, criminalizes spreading election-related deepfakes made with the intent to harm a candidate or influence the outcome. These laws often lack exceptions for satire and parody, two powerful tools for speech that criticizes those in power. The political reaction to a parody deepfake of Kamala Harris shows how governments could use such laws to suppress lawful expression. Importantly, these risks are not confined to one side of the political spectrum.
In Europe, the EU finalized the AI Act, proposed in 2021, which requires AI-generated content to be watermarked and labeled. However, the main concern with the AI Act lies in its broad obligation for powerful AI models to mitigate systemic risks, including ambiguous standards such as limiting negative effects on “society as a whole.” As explained in a previous article in Tech Policy Press, this is a problematically vague concept that risks suppressing legitimate speech.
For example, it could be used to restrict content critical of a government or content supporting one side of the Israeli-Palestinian conflict. Similar provisions in the EU’s Digital Services Act have raised comparable concerns. The final version of the EU Code of Practice that will guide enforcement is still being drafted, and it is important to ensure that enforcement of the AI Act remains committed to protecting freedom of expression.
China offers a dystopian vision of the worst-case scenario, in which a government weaponizes AI regulation. Chinese authorities review AI models for alignment with “core socialist values,” which inevitably leads to censorship of content that diverges from the official Communist Party narrative, as is evident in AI platforms like DeepSeek.
It is important to remember that the fundamental right to freedom of expression protects the right to seek, receive, and impart information through any medium, including AI. This protection applies not only to ideas and information considered welcome or harmless, but also to those that offend, shock, or disturb. It is essential to maintaining the pluralism, tolerance, and broadmindedness that a democratic society requires.
Guided by the available evidence, future US AI action plans should refrain from promoting bans on political deepfakes. Similarly, state-level laws with comparable provisions should be repealed or revised. Less restrictive measures, such as labeling and watermarking, may offer alternatives, though they too can raise First Amendment concerns. Furthermore, their effectiveness is questionable, since malicious actors can circumvent these safeguards.
In the EU, the European Commission must ensure that enforcement of the AI Act robustly protects freedom of expression. The obligation to mitigate systemic risks should not be interpreted as requiring models to conform to a particular perspective; it should leave space for controversial or dissenting content. This principle should be clearly articulated in the Code of Practice.
More broadly, structural solutions are needed. First, policymakers and businesses need to ensure that researchers have access to high-quality, reliable data so they can conduct more comprehensive research into the impact of AI-generated content. Several stakeholders have highlighted the constraints created by currently limited access to data, including for research into how AI affects particular communities, such as women. Policymakers cannot respond effectively without a clear understanding of the landscape, its risks, and its opportunities. In this regard, transparency provisions such as those in the EU’s Digital Services Act are welcome steps.
Equally important is promoting AI and media literacy. Rather than paralyzing the public with fear, educational campaigns should empower individuals to make informed judgments. Based on existing research, the Alan Turing Institute advocates establishing digital literacy and critical thinking programs, making them mandatory in primary and secondary schools and promoting them among adults. UNESCO has made similar recommendations.
Governments, businesses, and civil society organizations need to work together to equip the public with the skills to engage critically with content. Non-restrictive measures to combat disinformation, such as centralized and decentralized fact-checking, can also help users make informed decisions; the effectiveness and potential complementarity of these two approaches remain subject to ongoing debate and should be carefully considered. Importantly, we should not expect measures aimed at AI to resolve deeper issues that go beyond technology, such as political polarization, misleading claims by politicians and the media, or voter disenfranchisement.
Finally, it is important to remember that existing legal tools, such as defamation and fraud laws, remain available and can be used where necessary.
Ultimately, effective regulation must be evidence-based and clearly drafted. Otherwise, policymakers risk undermining freedom of expression, creativity, and satire, which are essential elements of healthy democratic discourse.