Senate Commerce Republicans have kept a decade-long moratorium on state AI laws in the latest version of President Donald Trump’s massive budget package, and a growing number of lawmakers and civil society groups are warning that its broad language could put consumer protections on the chopping block.
Republicans who support the provision, which the House cleared as part of the “One Big Beautiful Bill” package, say it will spare AI companies from navigating a complex patchwork of state regulations. But opponents warn that if it survives a Senate vote and the chamber’s procedural rules, it could exempt AI companies from state legal guardrails for years to come, with no promise of a federal standard to take their place.
“What this moratorium does is prevent every state in the country from having basic regulations to protect workers and to protect consumers,” Rep. Ro Khanna (D-CA), whose district includes part of Silicon Valley, said in an interview. He warns that, as written, the language included in the budget reconciliation package passed by the House could preempt state laws that seek to regulate social media companies, prevent algorithmic rent discrimination, and limit AI deepfakes that could mislead consumers and voters. “It’s essentially giving companies free rein to develop AI however they want, and to deploy automated decision-making without protections for consumers, workers, and children.”
“One thing that’s pretty certain is that it goes even further than AI.”
The boundaries of what the moratorium could cover are unclear, and opponents say that’s the point. “The language prohibiting regulation of automated decision-making is so broad that you can’t say with 100 percent certainty which state laws would get swept up,” says Jonathan Walter, senior policy advisor at the Leadership Conference on Civil and Human Rights. “But one thing I feel pretty certain about, and where there’s at least some consensus, is that it goes even further than AI.”
This could include the accuracy standards and independent testing required for facial recognition models in states like Colorado and Washington, he says. An analysis by the nonprofit AI advocacy group Americans for Responsible Innovation (ARI) found that social media-focused laws, such as New York’s Stop Addictive Feeds Exploitation (SAFE) for Kids Act, could also be unintentionally nullified by the provision. Travis Hall, director for state engagement at the Center for Democracy & Technology, said in a statement that the House text “blocks basic consumer protection laws from applying to AI systems.” Even states’ restrictions on their own governments’ use of AI could be blocked.
The new Senate language adds wrinkles of its own. The provision is no longer an outright ban; instead, it conditions broadband infrastructure funding on states complying with the familiar 10-year pause. Unlike the House version, the Senate version also covers state criminal laws.
Supporters of the AI moratorium have argued that it wouldn’t apply to as many laws as critics claim, but J.B. Branch, Big Tech accountability advocate at Public Citizen, says “any Big Tech lawyer worth their salt will argue that it applies, the way it’s written.”
Khanna says some of his colleagues may not be fully aware of the provision’s scope. “I don’t think they’ve thought through how broad the moratorium is and how much it would hinder the ability to protect consumers and children against automation,” he says. Days after it passed the House, even Trump ally Rep. Marjorie Taylor Greene (R-GA) said she would have voted against the OBBB had she realized the AI moratorium was tucked into the massive bill text.
California’s SB 1047 has become a poster child for what industry players cast as overly aggressive state laws. The bill, which sought to place safety guardrails on large AI models, was vetoed by Democratic Gov. Gavin Newsom following a fierce pressure campaign by OpenAI and others. Companies like OpenAI, whose CEO Sam Altman once advocated for industry regulation, have more recently focused on clearing away rules they say could keep them from competing with China in the AI race.
“What we’re really doing with this moratorium is creating a Wild West.”
Khanna acknowledges that there are “some inadequate state regulations” and that keeping the US ahead of China in the AI race should be a priority. “But the approach to that should be for us to create good federal regulation,” he says. Given the pace and unpredictability of AI innovation, Branch says, handcuffing states from trying to protect their citizens against harms no one can yet predict “is just reckless.” And with state laws off the table for 10 years, Khanna says, Congress would feel no pressure to pass a law of its own. “What we’re really doing with this moratorium is creating a Wild West,” he says.
Before the Senate Commerce text was released, dozens of House Democrats signed a letter urging Senate leaders to strip the AI provision, warning that its sweeping definition of AI is broad enough to cover virtually any computer processing.
More than 250 state lawmakers representing every state also urged Congress to drop the provision. “As AI technology develops at a rapid pace, state and local governments can respond more nimbly than Congress and federal agencies,” they wrote. “Blocking this democratic dialogue at the state level would freeze policy innovation in developing AI governance best practices at a time when experimentation is essential.”
Khanna warns that missing the boat on AI regulation carries higher stakes than past internet policy fights like net neutrality. “This isn’t just affecting the structure of the internet,” he says. “It’s going to affect people’s jobs. It’s going to affect the role algorithms can play on social media. It’s going to affect every part of our lives, and it could let a few people who control it benefit without any accountability to the American public.”