Heading into the 2024 election in the United States, artificial intelligence (AI) was widely seen as a threat to the political process: the technology can generate highly realistic deepfakes capable of deceiving voters and undermining trust in democracy. Part 1 of this series addressed state policy responses to these concerns, including the significant increase in state laws regulating the use of AI in election communications.
Part 2 reviews how the same policy debate played out in Washington, DC, as Congress and various federal regulators took up the issue. Although the details of the proposed laws and regulations varied, they all closely tracked at least one of the two general approaches taken by the states: disclosure requirements and prohibitions.
Efforts to advance federal regulation of the use of AI in elections have not resulted in meaningful policy changes, but federal agencies such as the Federal Election Commission (FEC) and the Federal Communications Commission (FCC) have clarified how existing rules apply to AI. Meanwhile, the Election Assistance Commission (EAC) and the Cybersecurity and Infrastructure Security Agency (CISA) helped local officials plan for and adapt to the impact of AI on election administration.
Bills creating bans and disclosure requirements stall in Congress
The 118th Congress saw a flurry of AI bills introduced, including more than a dozen specifically dedicated to the use of AI in federal elections. Most failed to gain traction, but the Senate Rules Committee approved two bills introduced by Sen. Amy Klobuchar (D-Minn.) in May: one takes a prohibitive approach, and the other imposes disclosure requirements.
S.2770 takes the prohibitive approach, targeting election deepfakes by banning the use of AI to generate "substantially deceptive" election communications and enforcing that ban through injunctions and monetary damages. Meanwhile, S.3875 would require disclosure of AI-generated content used in political ads and authorize the FEC to impose fines for violations. Although similar in substance to state laws, Congress's authority over most election policy is limited to federal elections, meaning these restrictions would apply only to presidential or U.S. congressional campaigns.
In May 2024, both bills were approved by the Senate Rules Committee on a party-line vote, with Democrats in support and Republicans opposed, due in part to concerns about the impact on free speech. Although neither has advanced to the floor since being approved, both technically remain pending until the 118th Congress ends in the coming weeks.
Amid pressure to address AI, FEC chooses technology-neutral approach to regulation
The FEC is an independent federal agency that enforces federal campaign finance law. In May 2023, the agency began considering whether to extend its existing ban on fraudulent campaign misrepresentation to AI-generated content.
After more than a year of consideration and more than 2,000 public comments, the FEC declined to issue new rules. Instead, it approved an interpretive rule explaining its view that the underlying federal law prohibiting fraudulent misrepresentation is "technology-neutral" and therefore does not require regulation targeting any particular technology (in this case, AI). In other words, the fraud itself is what matters, regardless of the technology used to commit it.
FCC enters AI battle early, aims to further expand regulatory scope
The FCC is an independent federal agency responsible for regulating interstate and international communications by radio, television, wire, satellite, and cable. Traditionally, the FCC's role in election policy has been limited to specific issues related to campaign advertising. This year, however, the agency also tackled issues at the intersection of AI and elections.
The first case involving the FCC was also one of the earliest high-profile examples of "deepfakes" affecting the 2024 presidential campaign. In January, some Democratic primary voters in New Hampshire received robocalls featuring an AI-generated voice imitating President Joe Biden that encouraged them to skip the primary and instead "save" their votes for the general election. The deception was quickly identified by the media, and the damage was mitigated. As a result of its investigation, the FCC found that the use of AI-generated voices in robocalls is prohibited under existing federal law and fined the robocall's creator $6 million. The FCC also began a rulemaking in August to further tighten regulations governing the use of AI-generated robocalls and robotexts.
Additionally, the FCC announced in July that it would begin the process of establishing disclosure requirements for AI used in campaign ads aired on FCC-regulated television and radio stations. This announcement prompted the FEC chairman to write a letter to his FCC counterpart outlining concerns with the proposed rule and arguing that it exceeds the FCC's jurisdiction. The regulatory outcomes on both AI-generated robocalls and robotexts and the use of AI in campaign advertising are still pending.
Elsewhere in Washington, EAC and CISA are helping election workers adapt to AI
The EAC and CISA are two additional federal agencies that contributed to the federal response to AI in elections, though neither created new rules or regulations. Instead, they provided resources and guidance to help local election officials adapt to a new reality in which AI can affect election administration.
In February, the EAC, an independent federal agency tasked with assisting election officials and helping Americans participate in the voting process, issued a decision allowing election officials to use existing federal funding streams to combat AI-generated misinformation about the election process. Authorized by the Help America Vote Act in the early 2000s, these security grants have traditionally been used to replace voting equipment, implement auditing systems, improve cybersecurity, conduct cybersecurity training, and more generally strengthen the security of federal elections. By updating its policy guidance, the EAC gave local officials the flexibility to experiment with different approaches to combating election-related misinformation through public education, generating best practices that can be shared across the country over time.
Along these same lines, Congress considered legislation that would require the EAC to develop voluntary guidelines on the use of AI in election administration and preparation, including guidelines for addressing AI-generated misinformation. The bill, S.3897, would also require the EAC to produce an after-action report on how AI actually affected the 2024 election.
The bill passed the Rules Committee in May alongside the aforementioned prohibition and disclosure bills and was the only one of the three to receive Republican support. However, S.3897 suffered the same fate as the other two and was never considered on the Senate floor. Nevertheless, voluntary guidelines offer a useful approach: they provide support to local officials while leaving room for bottom-up innovation.
Finally, through CISA, the federal government has an ongoing role in protecting the security of America's infrastructure, including election infrastructure, from both traditional and AI-enhanced cyber threats. Throughout 2024, CISA provided local election offices with guidance on AI security best practices, offered cybersecurity services such as vulnerability scans and in-person assessments, and monitored attempts by foreign governments to interfere with elections using AI and more traditional techniques.
Conclusion
Overall, there was a high level of interest across the federal government in addressing the use of AI in elections, but that interest did not lead to significant policy changes. Federal agencies did, however, provide guidance on how existing regulations apply in the context of AI. This federal response stands in contrast to the more substantial policy changes in the states described in Part 1. In Part 3 of this series, we explore the differences between state and federal policy choices and the factors that may explain why AI had less of an impact on the election than initially feared.