Late last night, House Republicans introduced new language into the budget reconciliation bill that would make the lives of millions of Americans much harder by reducing access to Medicaid and forcing them to pay higher fees when they seek health care. While much attention will be paid to those cuts, the bill is also packed with new language that attempts to stop states from enacting any regulation of artificial intelligence at all.
“…no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act,” says the text of the bill, which will be considered by the House at the budget reconciliation markup on May 13.
The bill's language, both in how it defines AI and other “automated systems” and in what counts as “regulation,” is broad enough to cover relatively new generative AI tools as well as technologies that have existed for far longer. In theory, it would make many existing and proposed state laws aimed at protecting people from AI systems, and informing them about those systems, unenforceable.
For example, last year California passed a law requiring health care providers to disclose when they use generative AI to communicate clinical information to patients. In 2021, New York passed the first law in the U.S. requiring employers to conduct bias audits of AI tools used in employment decisions. And California passed a law, set to take effect in 2026, that will require developers of generative AI models to publish detailed documentation on their websites about the data they used to develop those models.
In theory, none of these states would be able to enforce those laws if Republicans pass the budget reconciliation bill with this language intact.
The AI industry has been cozying up to Trump since before he took office, and his administration is intertwined with AI executives, from DOGE’s Elon Musk to David Sacks as AI czar and Marc Andreessen as an adviser. Trump has returned the favor by revoking the Biden-era executive order aimed at mitigating the risks of AI. Preventing states from charting their own paths on the issue, and from trying to protect people from these systems, is one of the most radical positions Republicans have taken on it yet.