Think back to May of this year. Congress was in the midst of a debate over a major budget bill. Tucked among its many seismic provisions, Sen. Ted Cruz dropped a ticking technology policy time bomb: a 10-year moratorium on states' ability to regulate artificial intelligence.
For many, this was alarming. A few giant AI companies appear to be swallowing our economy whole. Their energy needs crowd out households', their data demands override creators' copyrights, and their products are blamed for mass unemployment and new kinds of mental illness. At a time when Congress seems unable to pass meaningful consumer protections or market regulations, why block the only actors that seem capable of doing so: states like California that have already enacted consumer protections and other AI regulations, and states like Massachusetts that are actively debating them? Seventeen Republican governors wrote a letter criticizing the idea, and it was ultimately stripped from the bill in a rare bipartisan and nearly unanimous vote.
Now the idea is back. Before Thanksgiving, House Republican leaders proposed including it in the annual defense spending bill. Then a leaked draft document outlined the Trump administration's intent to enforce a ban on state regulation through executive authority. A flood of opposition, including from some Republican state leaders, kept the idea at bay for weeks, but on Monday President Trump posted on social media that the promised executive order will indeed be issued soon. It would put a growing cohort of states at risk, including California and New York as well as Republican strongholds such as Utah and Texas.
The motivations behind this proposal are clear: conservative ideology, cash, and China.
The intellectual argument in favor of a moratorium is framed around freedom: letting states regulate AI would create a patchwork of rules that would be difficult for AI companies to comply with, slowing the pace of innovation needed to win the national-security-critical AI arms race with China. AI companies and their investors have been actively promoting this theory for years, backing it with ever more outrageous sums of lobbying dollars. It's a useful argument, one that can help them not only shed regulatory constraints but also (they hope) win federal relief and energy subsidies.
Citizens should view this debate from their own perspective, not Big Tech's. Preventing states from regulating AI means those companies can get whatever they want from Washington, while your state representatives are left powerless to represent your interests. Which freedom matters more to you: the freedom of a few near-monopolists to profit from AI, or the freedom of you and your neighbors to demand protection from its misuse?
There is a partisan element here as much as an ideological one. Vice President JD Vance has claimed that preemptive federal action is needed to prevent "progressive" states from controlling the future of AI. It's a sign of creeping polarization, with Democrats decrying the monopoly power, bias, and other harms associated with corporate AI, and Republicans reflexively taking the opposite side. It doesn't help that some of those involved have a direct financial stake in the AI supply chain.
But this need not be a partisan wedge issue. Both Democrats and Republicans have strong reasons to support state-level AI legislation; everyone shares an interest in protecting consumers from harms caused by Big Tech. Republican Sen. Marsha Blackburn, who led the charge to kill Cruz's original AI moratorium proposal, explained: "This provision could allow Big Tech companies to continue to exploit kids, creators, and conservatives. We cannot block states from enacting laws that protect their citizens." More recently, Florida Gov. Ron DeSantis has said he wants to regulate AI in his state.
The oft-repeated complaint about the difficulty of complying with a patchwork of state regulations rings hollow. Nearly every other consumer industry, from automobiles and children's toys to food and pharmaceuticals, deals with local regulations that provide effective consumer protection. The AI industry includes some of the most valuable companies in the world, and those companies have already demonstrated the ability to comply with varied regulations around the globe, including the European Union's AI and data privacy rules, which are significantly more onerous than anything U.S. states have adopted. If state regulatory power can't be brought to bear on the AI industry, what industry could it possibly apply to?
And here, the regulatory superpower of states is not their size or might but their speed and locality. We need "laboratories of democracy" to experiment with different kinds of regulation, tailored to the specific needs and interests of their voters and able to evolve in response to the concerns they raise, especially in a consequential and rapidly changing area like AI.
We should also embrace regulation's ability to drive innovation rather than limit it. Regulations don't stop companies from developing better products or making more profit; they channel that innovation in directions that protect the public interest. Drug safety regulations don't prevent pharmaceutical companies from inventing medicines; they force them to invent medicines that are safe and effective. States can direct private innovation to serve the public.
Most important, regulation is needed to guard against the most dangerous effect of today's AI: the concentration of power in trillion-dollar AI companies and the power-amplifying technologies they are building. In our new book, Rewiring Democracy, we outline the specific ways the use of AI in governance disrupts existing balances of power, and how those applications can be steered toward a more just balance. After years of nearly complete Congressional inaction on AI, even as the technology has captured the world's attention, it has become clear that the only effective policy tool we have against this concentration of power is the states.
Rather than blocking states from regulating AI, the federal government should help them drive AI innovation. If proponents of a moratorium worry that the private sector won't deliver the AI they believe the country needs to compete in the new global economy, then governments should help create AI innovation that serves the public and solves the problems that matter most to people. Following the lead of countries such as Switzerland, France, and Singapore, the United States could invest in developing and deploying AI models designed as public goods: transparent, open, and useful for administrative and governance tasks.
Perhaps you don't trust the federal government to build or operate AI tools in the public interest? Neither do we. States are a much better place for this innovation to happen: they are closer to the people, responsible for providing most government services, better aligned with local political sentiment, and they enjoy greater trust. They are where we can test, iterate, compare, and contrast regulatory approaches that may ultimately inform better federal policy. And while the cost of training and operating performant AI tools such as large language models is falling rapidly, the federal government could still play a valuable role by funding under-resourced states to lead this kind of innovation.
Nathan E. Sanders and Bruce Schneier are the authors of Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship.