The Trump administration's tax bill, also known as the "big, beautiful bill," which passed the Senate on Tuesday, previously contained a provision that would have prevented states from enforcing their own AI laws for five years and withheld up to $500 million in AI infrastructure funding from noncompliant states.
On Tuesday, during the "vote-a-rama" that began Monday to pass Trump's tax bill before the July 4 holiday, the Senate voted 99-1 to remove the proposed moratorium on states' ability to regulate AI. The vote came days after senators revised the original proposal, shortening the enforcement ban from 10 years to five and adding an exemption for state laws targeting unfair or deceptive practices and child sexual abuse material (CSAM).
Also: OpenAI wants to trade access to AI models for less regulation
An early version of the provision also made $42 billion in broadband internet funding contingent on compliance with the 10-year ban. The revised version held only $500 million in AI funds hostage if a state defied it.
The proposed moratorium
If passed, the provision would have barred states from enforcing their AI laws for five years while leaving AI funds in limbo. It would not have struck down existing laws outright: laws already passed by states would have remained on the books, but they would have been effectively useless, since enforcing them would have put a state's AI funds on the line.
In practice, this would have created imbalances across the country. Some states have thorough laws but would lack the funds to move AI forward safely, while others have no regulations but plenty of funds to stay in the race.
Also: What "OpenAI for Government" means for US AI policy
"State and local governments should have the right to protect their residents from harmful technologies and hold companies accountable," said Jonathan Walter, senior policy advisor at The Leadership Conference's Center for Civil Rights and Technology.
Many opponents of the ban celebrated the news on Tuesday, including Adam Billen, vice president of public policy at a Washington, D.C.-based responsible AI organization.
"It was 40 state AGs, 14 governors, 260 state legislators from all 50 states, multiple coalition letters we gathered from 140-plus organizations, thousands of calls and emails from parents and constituents, and, at the end, some key congressional champions. We killed it almost entirely," he said in a LinkedIn post. "Even the main sponsors of the provision ultimately voted to strip it out."
Federal AI policy remains unknown
The administration is scheduled to announce its AI policy on July 22. In the meantime, the country has been flying virtually blind, prompting several states to introduce their own AI bills. Even under the Biden administration, which took several steps to regulate AI, states had already begun introducing AI laws as the technology evolved rapidly into the unknown.
Walter added that the ambiguity of the provision's language could have blocked oversight of automation that isn't even AI, such as "insurance algorithms, autonomous vehicle systems, and models that determine how much residents pay for utilities."
"The main issue here is that there are already real, concrete harms from AI. This provision takes the brakes away from the states without replacing them with anything," said Ballew, CEO of AI agent provider Conveyor and a former Pentagon regulatory attorney.
With federal regulation still a big question mark, preventing states from enforcing their own AI policies would have opened the door for AI companies to accelerate without checks or balances.
President Trump's second term so far has not suggested that AI safety is a federal regulatory priority. Since January, the Trump administration has rolled back the safety initiatives and testing partnerships introduced by the Biden administration, shrunk and renamed the US AI Safety Institute as the US Center for AI Standards and Innovation, and cut funding for AI research.
Also: AI leaders need to be keenly aware of regulatory, geopolitical, and interpersonal concerns
"Even if President Trump meets his own deadline for a comprehensive AI policy, it is unlikely to seriously address the harms from faulty and discriminatory AI systems," Walter said. AI systems used in applications such as HR technology, hiring, and mortgage rate determination have been shown to act with bias against marginalized groups, including racial bias.
Why states want their own AI regulations
Naturally, AI companies have expressed a preference for federal regulation over individual state laws, since keeping models and products compliant with a single standard is easier than navigating a patchwork of laws. But in some cases, even with a federal foundation in place, states may still need to set their own AI regulations.
"The differences between states regarding AI regulation reflect the different approaches states take to fundamental issues such as employment law, consumer protection law, privacy law, and civil rights," notes Ballew. "AI regulations need to be incorporated into these existing legal schemes."
Also: Anthropic's new AI models for classified information are already in use by the US government
He added that a "diversity of regulatory schemes" is prudent, since state and local officials are closest to the people affected by these laws.
Previous proposal withheld internet funding
Broadband Equity, Access, and Deployment (BEAD) is a $42 billion program run by the National Telecommunications and Information Administration (NTIA) that helps states build infrastructure and expand high-speed internet access. Before it was revised, the Senate provision put all of that money, plus $500 million in new funds, on the line.