As artificial intelligence rapidly changes election strategies, state legislatures are moving to regulate its use in elections. This week, the Massachusetts House of Representatives unanimously passed a bill requiring clear disclosure labels on AI-generated political ads, and a proposal to criminalize deepfakes is drawing constitutional scrutiny in Maryland.
The push for this bill highlights the growing national concern about how to balance election integrity with First Amendment protections in the age of AI.
Massachusetts unanimously passes AI disclosure bill
On February 11, the Massachusetts House of Representatives approved a bill 157-0 that would require political ads that use “synthetic media” generated by artificial intelligence to include clear disclosure that they “contain AI-generated content.”
This disclosure must appear at the beginning and end of any audio or video political ad and must remain visible or audible throughout the duration of the AI-incorporated content.
Rep. Daniel Hunt (D-Dorchester), chairman of the House Election Law Committee, said AI is no longer theoretical. “If you watch the Super Bowl, you see how pervasive artificial intelligence is. It’s in our everyday lives. Voters have a right to know that what they’re seeing is real,” Hunt said.
The bill now heads to the Massachusetts Senate for further consideration.
What the Massachusetts AI political advertising bill would do
The proposed bill would:
- Require paid political communications that use AI-generated content to clearly disclose its use
- Apply to ads intended to influence votes for or against a candidate or ballot question
- Impose a $1,000 fine for violations
The measure follows similar efforts across the country. In 2024, the New Hampshire General Court (the state legislature) passed AI regulations after a fake robocall impersonating then-President Joe Biden urged voters not to participate in the state's presidential primary.
Massachusetts lawmakers say their goal is transparency, not censorship.
Another election bill targets false communications
In addition to the AI disclosure measure, the Massachusetts House of Representatives also passed another bill aimed at preventing deceptive election tactics.
The proposal would prohibit candidates and political groups from distributing misleading communications within 90 days of an election. This includes content intended to:
- Harm a candidate’s reputation by falsely portraying them
- Mislead voters about election dates or voting procedures
The bill, which passed 154-3, would allow victims to sue. It does not apply to news reporting, satire, or parody.
Meanwhile, the Massachusetts Senate has introduced another bill seeking greater transparency in campaign finance.
Maryland’s deepfake bill raises First Amendment concerns
While Massachusetts is focused on disclosure, Maryland lawmakers are considering a more aggressive approach.
House Bill 145, filed in the Maryland House of Delegates, seeks to criminalize certain election-related deepfakes generated by AI. Violations could result in a $5,000 fine and up to five years in prison.
However, the proposal has drawn criticism from the Reason Foundation, which argues it could violate free speech rights.
In testimony submitted to the Maryland House Government, Labor and Elections Committee, technology policy fellow Richard Schill warned that the bill relies on “subjectively defined terms” such as election-related deepfakes. He argued that giving the state the power to determine intentions could chill constitutionally protected political speech.
Disclosure and criminalization: two regulatory models
Critics of the Maryland bill are proposing a disclosure-based framework similar to Utah’s House Bill 329, which focuses on requiring campaigns, candidates, PACs, and political committees to disclose the use of AI in paid advertising.
Rather than policing everyday online expression, this approach limits regulation to formal campaign structures.
Legal analysts note that courts have traditionally provided strong protections for political speech, making criminal penalties for vaguely defined deepfake content legally vulnerable.
Why AI political advertising is a growing concern
Artificial intelligence tools enable campaigns to:
- Create highly realistic voice clones
- Produce convincing manipulated video footage
- Generate persuasive political messages at scale
As the 2026 midterm elections approach, lawmakers are scrambling to stop AI from undermining public trust.
The debate reflects a broader question facing states across the country: Should governments prioritize transparency or criminal penalties when regulating AI in politics?
Massachusetts appears to support disclosure and voter awareness. Maryland’s proposal tests the limits of criminal enforcement.
The results could shape how AI-powered election campaigns evolve across the country.
FAQ
What did Massachusetts pass regarding AI political advertising?
The Massachusetts House of Representatives has passed a bill that would require political ads that use AI-generated content to include a disclosure that they “contain AI-generated content.”
Will the Massachusetts bill ban AI in political ads?
No. It does not ban AI; it only requires disclosure so that voters know when AI-generated content is being used.
What would be considered “synthetic media” under this bill?
Synthetic media includes audio and video generated or manipulated by AI used in paid political advertising.
What are the penalties for violating Massachusetts’ AI advertising rules?
Violators could be fined $1,000.
What is Maryland House Bill 145?
HB 145 is a bill that would criminalize certain election-related deepfakes, with penalties including fines of up to $5,000 and possible jail time.
Why is Maryland’s deepfake bill so controversial?
Critics argue that allowing states to determine the intentions behind AI-generated political content could violate First Amendment protections.
How is Utah’s approach different?
Rather than broadly criminalizing AI content, Utah’s model focuses on disclosure requirements for official campaign participants.
Why are states now regulating AI in elections?
Advances in AI technology have made it easier to create realistic fake audio and videos that can potentially mislead voters.
Could these laws face legal challenges?
Yes. Critics argue that Maryland’s proposal in particular may not survive constitutional review in court.