This is part of our AI and the 2024 Election series.
Advances in artificial intelligence (AI) are having a major impact across America, with the technology being incorporated into nearly every aspect of daily life, from healthcare to banking. Elections are no exception, and AI is already bringing both risks and opportunities to election management, cybersecurity, and the information environment.
Although the potential impacts of AI on elections are wide-ranging, policymakers have focused almost exclusively on harms to the information environment. The rush to pass new laws and regulations in 2024 stemmed from widespread public concern that AI would accelerate the creation and spread of “deepfakes” and other forms of misinformation about candidates, election officials, and the voting process.
While Congress and federal regulators have tried unsuccessfully to crack down on the use of AI in federal election communications, state-level efforts have been more effective. As shown in the map below, 20 states currently have laws targeting the use of AI to generate deceptive election content, and 15 of those laws were enacted in 2024 alone. The details vary by state, but the most common approach combines disclosure requirements with civil penalties.
Legislation by type and penalty related to AI and elections (2019-2024)
Note: Map is current as of November 22, 2024
Disclosure has emerged as the dominant approach, while prohibition faces constitutional challenges
Requiring disclosure when AI is used to generate or modify election communications is the most common regulatory approach among states, accounting for 18 of the 20 that have enacted such laws. Disclosure provides transparency about the use of AI so the public can consume content with the full understanding that audio, video, or images have been manipulated and do not depict reality. This is a familiar concept: the Federal Election Commission (FEC) and many states already require campaigns to include various disclosures in certain political ads.
Texas and Minnesota have outright banned the use of AI-generated deepfakes in election-related matters, but these restrictions are on shaky legal ground following a recent California court ruling. The law in question, AB 2839, sought to move the state’s regulation of AI in elections from disclosure requirements to a general ban (with exceptions for parody and satire). The bill was approved in September as an emergency measure and went into effect immediately after Gov. Gavin Newsom signed it, but a federal judge quickly blocked the law as an unconstitutional restriction on speech. Although the final outcome of the lawsuit is not yet known, the concerns outlined in the judge’s order suggest that the prohibition approach imposes significant restrictions on political speech that would likely violate the First Amendment.
States prioritize enforcement through civil penalties
One important policy decision for states that choose to regulate the use of AI in election communications is whether to treat violations as criminal or civil offenses. This distinction has important implications for who can initiate legal proceedings and the types of penalties that can be imposed. Of the 20 states that currently regulate the use of AI, 13 rely solely on civil penalties, five use both civil and criminal penalties, and two use only criminal penalties.
In practice, civil enforcement typically means that the target of an AI-generated deepfake can seek a court-ordered injunction to prevent further distribution of the manipulated media. Alabama, Hawaii, Minnesota, Mississippi, and Oregon also allow certain government officials, including the attorney general, secretary of state, and local county attorneys, to seek injunctions, and content creators may be subject to additional fines. Criminal prosecution, by contrast, can be initiated only by the government, with penalties ranging from fines to imprisonment.
Narrowing provisions help prevent regulation of benign AI uses
Artificial intelligence is increasingly integrated into everyday life, and regulation that targets the technology rather than its application can have unintended consequences. Accordingly, every state law regulating the use of AI in election communications includes provisions that narrow its application in some way, such as requiring that the communication be “materially deceptive,” that it be intended to harm a candidate or influence the outcome of an election, or that it be distributed within a set window before the election, commonly 60 or 90 days.
These provisions aim to prevent over-regulation of benign uses of AI, such as altering or enhancing video or images in ways that do not affect the underlying message, or uses that lack any intent to deceive voters or harm political candidates. California’s and Minnesota’s bans have already come under legal scrutiny for violating free speech rights, and these narrowing provisions could help disclosure laws withstand future legal challenges. So far, however, no court has ruled on the question.
Support for regulation is broad and bipartisan
Both Republican- and Democratic-controlled legislatures have supported regulating AI in elections, and bills have often been approved by wide margins. Of the 20 states that have enacted AI regulations, 10 were under Democratic control, eight were under Republican control, and two, Arizona and Wisconsin, had Republican-controlled legislatures that worked with Democratic governors to approve the bills. In 13 of these states, the legislation passed with support from more than 90% of legislators.
Although the totals are roughly even between the parties, Democrats, who control fewer state governments than Republicans, approved AI restrictions at a much higher rate. This is not surprising, since support for such regulation is generally stronger among Democrats, though majorities of Americans on both sides of the aisle favor it.
Louisiana was the only Republican-controlled state where the governor vetoed AI regulations approved by the state legislature. In his veto message, Gov. Jeff Landry cited concerns that the law would limit political speech and violate the First Amendment.
Conclusion
In the lead-up to the 2024 election, a significant number of states took bipartisan action to regulate the use of AI in election communications. Given the constitutional concerns surrounding outright bans, these laws primarily adopted disclosure requirements to increase transparency when AI is used in deceptive election communications. This level of policy change at the state level stands in contrast to the relative inaction in Washington, D.C., but more on that in the next post in this series.