As Congress debates the sweeping budget reconciliation bill, provisions buried deep in the legislation would render many state-level AI regulations, including those that directly affect radio stations, virtually unenforceable.
The provisions, included in both the House and Senate versions of the bill, impose a 10-year moratorium that prevents states from enforcing laws that “limit, restrict or regulate” artificial intelligence. While states could technically continue to pass AI-related laws, they would be unable to enforce them, effectively handing regulatory oversight to federal agencies and private industry.
For radio stations already navigating rising challenges from AI-generated deepfakes, synthetic voice cloning, political ad manipulation, and content theft, the potential loss of state-level protections adds another layer of risk. As broadcasters move to address these growing concerns, they find themselves increasingly at the intersection of AI technology and public trust.
Tennessee Attorney General Jonathan Skrmetti and Washington Attorney General Nick Brown joined Sens. Maria Cantwell (D-WA) and Marsha Blackburn (R-TN) to warn of the potential consequences and publicly oppose the moratorium. “We want America to dominate AI. We want our enemies to not move ahead of us, but we need to ensure that American consumers are not left behind in the process,” Skrmetti said.
Many states have already advanced legislation designed to directly address AI threats to the media. Blackburn pointed to Tennessee’s ELVIS Act, which criminalizes fraudulent AI audio in music and broadcasts, as an example of the type of protection now at risk.
In New York, broadcasters are required to provide audible disclosures when AI-generated content is used in political ads. Other states, including California, Texas, Minnesota, New Jersey, Idaho, Indiana, New Mexico, Utah, Wisconsin and Washington, have passed deepfake laws that place liability on broadcasters. Oregon goes further, requiring clear disclosure of AI use in all campaign communications.
If the moratorium becomes law, these state-level rules (designed specifically to protect broadcasters, news organizations and local media) could go dormant for the next decade, creating new legal uncertainty for stations heading into a high-stakes election season.
NAB supports broadcasters’ efforts to combat misleading AI content, but warns that an inconsistent or overly broad patchwork of state rules could be unfairly burdensome. The association has urged lawmakers to take a balanced approach that protects consumers without imposing unworkable compliance standards on local media operators.
This debate comes amid growing public concern over media trust. According to a 2024 Reuters Institute report, 72% of Americans say they are increasingly worried about distinguishing real from fake content online, up three percentage points from the previous year, highlighting growing public anxiety about AI’s role in shaping news and information.