OpenAI Inc.’s policy proposal for President Donald Trump’s AI Action Plan raises a pointed issue that will spark the most important AI policy debate of the year. The company urged the federal government to protect AI developers by preempting state AI laws that it says risk bogging down innovation and undermining American leadership in AI.
Currently, more than 780 AI bills are pending in state legislatures, making AI governance a booming area of state lawmaking. Federal preemption could quickly wipe them away.
Federal preemption of state AI laws is, at bottom, a debate about national security, economic control, and federalism. If the US smothers its AI industry under a fragmented patchwork of restrictive state laws, it risks ceding industry leadership to China.
Though bold, OpenAI’s proposal for federal preemption is unsurprising in both content and timing. It follows a January executive order calling for the creation of an AI Action Plan focused on sustaining US leadership in AI development and reducing regulatory burdens on the private sector.
The White House Office of Science and Technology Policy issued a request for input on policies that should be incorporated into the plan. OpenAI submitted several recommendations, most notably federal preemption of state AI laws.
In exchange for voluntary data sharing with the federal government, OpenAI asked that the private sector receive “relief from the 781 and counting proposed AI-related bills already introduced in US states this year.”
Federal preemption has been on Congress’s radar since last year. In December, the bipartisan House Task Force on Artificial Intelligence issued a comprehensive report of AI-related policy findings. “[P]reemption of state AI laws under federal law is a tool that may be used by Congress,” the report said.
Concerns about a fragmented state AI regulatory framework are well-founded from a practical standpoint. No comprehensive federal regulation currently governs the development or use of AI, and states have begun to fill the regulatory void with a variety of AI policies.
In May 2024, Colorado became the first state to pass a comprehensive AI bill, regulating developers and deployers of “high-risk AI” systems. The Virginia legislature passed a similar AI bill last month, and a Texas representative recently introduced the country’s most restrictive comprehensive AI bill. Meanwhile, California has enacted 18 new AI laws that took effect this year, focusing, like many other states, on domain-specific AI regulation.
The patchwork of state AI regulation is already taking shape. Beyond the hundreds of separate state bills, AI companies (and perhaps companies using AI) could face 50 different sets of AI safety standards, reporting requirements, and enforcement agencies. This drives up compliance costs and chills investment and growth.
State AI regulation is also concentrated in California, home to major AI companies such as OpenAI, Google LLC, Meta Platforms Inc., and Anthropic PBC, which positions the state to effectively set AI policy for the rest of the country. And California has already shown an appetite for sweeping AI regulation, as seen in SB 1047, which passed both legislative chambers before Gov. Gavin Newsom (D) vetoed it.
That bill would have imposed liability on developers of large AI models based on vague and undeveloped testing standards. Underscoring the disconnect between state and federal priorities, eight congressional Democrats wrote to Newsom urging him to reject the bill.
States are not equipped to regulate rapidly evolving, complex technologies, particularly where those technologies intersect with national security and foreign policy.
Given AI’s widespread adoption, its integration into society, and its critical role in the economy and national security, it resembles key infrastructure such as the power grid and the internet. The federal government cannot allow it to be hamstrung by a fragmented patchwork of state AI regulations.
But the decision to preempt state AI law is just the first step. The bigger challenge lies in drafting workable legislation.
The federal government’s preemption power derives from Article VI, Clause 2 of the Constitution, known as the Supremacy Clause. To preempt state law, Congress must pass legislation providing for federal preemption. Any such legislation would face at least two drafting challenges.
First, the term “AI” has no universal or consistent definition. A definition that is too broad could unintentionally preempt regulations covering traditional technologies, while one that is too narrow could fail to reach regulations targeting other uses of AI.
Second, the scope of federal preemption will be just as contested as the threshold question of whether to preempt at all. For example, there will be debate over whether all state AI laws should be preempted, or only those affecting certain aspects, such as model development, the application layer, or end-user interactions.
One possible approach is a more targeted form of preemption, focused on state regulations governing the training, deployment, and testing of frontier AI models. Under this framework, the federal government could establish dedicated standards and regulations for frontier models (or simply bar states from regulating them absent federal standards), while states would retain authority over the application layer and user interactions. In practice, however, even these categories can be difficult to define precisely.
AI development requires both rapid progress and long-term investment, and state-level uncertainty risks hampering US progress.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.
Author information
Oliver Roberts is an adjunct professor at Washington University in St. Louis School of Law, co-head of Holtzman Vogel’s AI practice group, and founder and CEO of Wickard.ai.
Write for Us: Author Guidelines