On Wednesday, the Trump administration published its AI Action Plan, a 28-page document outlining proposed policies for everything from building data centers to how government agencies will use AI. As expected, the plan emphasizes deregulation, speed, and global dominance, while largely avoiding many of the conflicts plaguing the AI space, including debates over copyright, environmental protections, and safety testing requirements.
Also: How the Trump administration changed AI: A timeline
"The US must do more than promote AI within its own borders," the plan states. "The US must also drive the adoption of American AI systems, computing hardware, and standards around the world."
Here are the main takeaways from the plan and how they could shape the future of AI, both nationally and internationally.
AI upskilling over worker protections
Companies inside and outside the tech industry are increasingly offering AI courses to workers to help mitigate the technology's impact on their jobs. In a section entitled "Empowering American Workers in the Age of AI," the AI Action Plan continues this trend, proposing several initiatives that build on the April 2025 AI Education Executive Order.
Specifically, the plan proposes that the Department of Labor (DOL), the Department of Education (ED), the National Science Foundation (NSF), and the Department of Commerce set aside funding for retraining programs and study AI's impact on the job market.
Also: Microsoft is saving millions with AI and laying off thousands - where do we go from here?
The plan also proposes tax incentives for employers that provide skill development and literacy programs. "In applicable circumstances, this could enable employers to offer tax-free reimbursement for AI-related training and support broader private sector investment in AI skill development," the plan clarifies.
Nowhere does the document propose regulating employers or protecting workers whose jobs could be replaced by AI. By going all in on upskilling without adjusting labor laws to the realities of AI, the Trump administration is putting the onus on workers to keep up. It is unclear how upskilling alone will prevent displacement.
Government AI models could be censored
Several figures within the Trump administration, including the president and AI czar David Sacks, have accused popular AI models from Google, Anthropic, and OpenAI of being "woke," or of overweighting liberal values. The AI Action Plan codifies that suspicion by proposing to remove "references to misinformation, diversity, equity, and inclusion (DEI), and climate change" from the NIST AI Risk Management Framework (AI RMF).
(Disclosure: Ziff Davis, the parent company of ZDNET, filed a lawsuit against OpenAI in April 2025, alleging it infringed Ziff Davis' copyrights in training and operating its AI systems.)
Released in January 2023, the AI RMF, like MIT's risk repository, is a public-private implementation resource aimed at improving "the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems." It currently contains no references to misinformation or climate change, but it does recommend that organizations implementing new AI systems consider workforce DEI initiatives.
Also: How do these proposed standards aim to tame our AI Wild West?
The AI Action Plan's proposal to remove these references is broadly worded, but it effectively turns any model the government uses into a censored one.
In something of a logical contradiction with its defense of free speech, the same section states that the newly renamed Center for AI Standards and Innovation (CAISI), formerly the US AI Safety Institute, will "conduct research into and, as appropriate, publish evaluations of frontier models from the People's Republic of China for alignment with Chinese Communist Party talking points and censorship."
"We must ensure that free speech flourishes in the era of AI and that AI procured by the federal government objectively reflects truth rather than social engineering agendas," the plan says.
The threat to state laws could return
Earlier this summer, Congress proposed a 10-year moratorium on state AI laws, which companies including OpenAI had publicly advocated for. It was tucked into Trump's "Big Beautiful Bill" tax package, but the ban was stripped out at the last minute before the bill passed.
However, one section of the AI Action Plan suggests that state AI laws will remain under the microscope as federal policy develops.
The plan states that, consistent with applicable law, the administration will "work with federal agencies that have AI-related discretionary funding programs to ensure, consistent with applicable law, that they consider a state's AI regulatory climate when making funding decisions and limit funding if the state's AI regulatory regimes may hinder the effectiveness of that funding or award."
The language does not specify what kinds of regulations will be scrutinized, but given the Trump administration's stance on AI safety, bias, accountability, and other protective efforts, it is fair to assume that states seeking to regulate AI along those lines will be the primary targets. The RAISE Act, which recently proposed safety and transparency requirements for developers in New York, comes to mind.
"The federal government should not allow AI-related federal funding to be directed toward states with burdensome AI regulations that waste these funds, but should also not interfere with states' rights to pass prudent laws that are not unduly restrictive to innovation," the plan continues, leaving much open to interpretation.
For many, state AI laws remain important. "Absent congressional action, states must be allowed to advance rules that protect consumers," Grace Gedye, a policy analyst covering AI issues at Consumer Reports, told ZDNET.
Fast-tracking infrastructure, at any cost
The plan lays out several initiatives to accelerate permitting for data center construction, a priority the administration has already signaled through Project Stargate and its recent data center and energy investments in Pennsylvania.
"We need to build and maintain vast AI infrastructure and the energy to power it. To do that, we will continue to reject radical climate dogma and bureaucratic red tape," the plan says. The government will "expedite environmental permitting by streamlining or reducing regulations promulgated under the Clean Air Act, the Comprehensive Environmental Response, Compensation, and Liability Act, and other relevant laws."
Given the environmental impact of data centers at scale, this naturally raises ecological concerns. However, some are optimistic that growth will drive energy efficiency efforts.
Also: How much energy does AI really use? The answer is surprising - and a little complicated
"As AI continues to scale, so does the demand for critical natural resources like energy and water," the SVP and chief sustainability officer at Ecolab, a sustainability solutions company, told ZDNET. "By designing and deploying AI with efficiency in mind, companies can optimize resource use while meeting demand. The companies that lead and win in the AI era will be those that prioritize business performance while optimizing their water and energy usage."
Whether that will happen remains uncertain, especially given the active, negative impact data center pollution is already having on some communities today.
Remaining Biden-era protections could still be removed
When Trump overturned Biden's AI executive order in January, many of the order's directives had already been baked into individual agencies and therefore survived. However, the plan indicates the government will comb through existing regulations to remove those Biden-era relics.
The plan proposes that the Office of Management and Budget (OMB) investigate "current federal regulations that hinder AI innovation and adoption, and work with relevant federal agencies to take appropriate action." OMB would go on to "identify, revise, or repeal regulations, rules, memoranda, administrative orders, guidance documents, policy statements, and interagency agreements that unnecessarily hinder AI development or deployment."
Also: The surprising AI skills disconnect - and how to fix it
The plan also intends to ensure that all Federal Trade Commission (FTC) investigations "commenced under the previous administration" are reviewed "to ensure that they do not advance theories of liability that unduly burden AI innovation."
"That language could give AI developers free rein to create harmful products without regard for the consequences," the Consumer Reports analyst told ZDNET. "Many AI products offer real benefits to consumers, but many also pose real threats, such as intimate deepfake image generators, therapy chatbots, and voice cloning services."
Honorable mentions