US President Donald Trump has signed an executive order establishing a national policy framework for AI. The measure asserts federal primacy in technology policy and creates a task force expressly charged with challenging state laws that regulate emerging technologies. Because an executive order lacks the binding force of enacted legislation, the move has sparked significant debate over regulatory jurisdiction and the future rules governing the technology sector.
The Trump administration has argued that the step is necessary to maintain global technological competitiveness against countries such as China, identifying regulatory fragmentation at the state level as an impediment to efficient investment and agile AI development nationwide. At the signing ceremony, President Trump pointed to the inefficiencies of a decentralized system, saying, “If you had to get 50 different approvals from 50 different states, you can forget about it.” The remark underscores the intent to consolidate oversight under a single federal authority and foster a more predictable business environment for Silicon Valley technology companies.
The executive order follows a failed legislative attempt. A 10-year moratorium on state AI laws, proposed earlier this year as part of the Republicans’ One Big Beautiful Bill Act, failed to win bipartisan support and was stripped from the bill by a 99-1 Senate vote. The order revives that agenda through executive action, which cannot itself preempt state law but sets directive policy positions and asserts federal authority.
The executive order has been interpreted to support the interests of technology companies developing AI. Silicon Valley lobbying groups have consistently argued that the proliferation of state regulatory frameworks introduces undue bureaucratic complexity. They argue that such complexity undermines U.S. companies’ ability to innovate and lead the global AI race. This order is intended to ease the compliance burden on these companies by reducing fragmented regulatory oversight.
The situation is complicated by the absence of any comprehensive federal proposal to mitigate the social, environmental, and political risks inherent in AI. The executive action stands in contrast to stricter state rules already enacted or under consideration. California, for example, requires companies to publicly disclose safety tests for new AI models; Colorado has introduced requirements to assess, and guard against, the risk of algorithmic discrimination in hiring. These state frameworks directly address AI’s potential harms to society, the very concerns the federal approach seeks to sideline.
To implement the federal government’s goal of primacy, the executive order issues specific instructions to government agencies. A central provision directs the Department of Justice (DOJ) to establish an “AI Litigation Task Force,” whose sole responsibility is to deploy federal legal mechanisms against state and local jurisdictions to challenge their AI regulations in court.
Additionally, the order calls for a review of existing state laws that could force AI models to “alter their truthful output.” The provision takes direct aim at laws imposing requirements such as transparency, bias mitigation, and output controls, which the administration and its industry allies see as obstacles. Legal action is expected to target states with nascent or advanced regulatory frameworks; California and Colorado are likely targets given their laws on safety-test disclosure and discrimination risk assessment, respectively.
U.S. state leaders and civil rights groups quickly criticized the decision, arguing that the executive order concentrates regulatory power in favor of technology companies and, by their analysis, leaves the public more vulnerable. Teri Olle of Economic Security California Action, which co-sponsored the state’s AI safety law this year, said the effort is “just another chapter in his strategy to hand over control of one of the most innovative technologies of our time to the CEOs of big tech companies.”
The administration also framed unified federal regulation as a mechanism to prevent certain ideological biases from infiltrating generative AI, a recurring concern in some quarters. President Trump asserted that state intervention would produce technologically flawed development, warning that under a patchwork of state regulations the technology “will be destroyed in the early stages!”, the Guardian reports.
The White House’s approach is consistent with a strategy of intensifying AI competition with China, aiming to secure America’s advantage in advanced capabilities. Within this strategic framing, concerns raised by rights groups and researchers about the environmental costs of AI development, the potential for a financial bubble, and the spread of misinformation have been largely ignored.
The executive order’s operational structure gives an influential role to the Special Advisor on AI and Crypto, a position held by billionaire venture capitalist and technology advocate David Sacks. Sacks has been directed to consult with the Litigation Task Force in determining which state laws to challenge, an arrangement that highlights the coordination and information exchange between the federal political sphere and Big Tech corporate interests. Sacha Haworth, executive director of the Tech Oversight Project, called the order “bad policy.” “The Trump-Sacks executive order proves that the White House listens only to the CEOs of the powerful big tech companies that fund its banquet halls, not the ordinary people it pretends to serve,” Haworth said, according to the Guardian.