Key points
- President-elect Trump has appointed venture capitalist David Sacks as the White House AI and cryptocurrency czar.
- The Trump administration is likely to take a light-touch regulatory approach to AI development and deployment, and may repeal some or all of President Biden's executive orders on AI, as promised in the Republican platform.
- Omnibus federal AI legislation is unlikely, so individual states will likely continue to implement their own state-specific regulations.
- Despite criticizing the CHIPS and Science Act during the campaign, the Trump administration is not expected to seek to repeal or significantly change the Act.
The development and deployment of artificial intelligence (AI) systems will be the most important technological advancement in the coming years. But AI regulation received little attention during the presidential campaign, even though implementing AI is a top priority for most corporate executives and boards of directors.
Overall, we expect the Trump administration to take light regulatory action regarding AI. However, as we discuss below, individual states may continue to step into the void and enact their own AI laws.
Appointment of an AI czar
President-elect Donald Trump has named David Sacks, a venture capitalist and early PayPal executive, as the White House's AI and cryptocurrency czar. Given Sacks' background as a venture capitalist, many expect him to bring a pro-innovation, pro-startup approach to the AI sector, including when it comes to regulation. This could mesh well with President-elect Trump's policies: in the announcement of Sacks' appointment, Trump said Sacks would steer the government away from "big tech bias and censorship."
In the announcement, the president-elect also said that Sacks would "work hard on the legal framework to ensure the crypto industry has the clarity it has been seeking," but made no comparable statement regarding AI regulation. It is too early to tell whether this was simply a nod to the crypto industry or a deliberate signal that the administration will not pursue a comparable legal framework for AI. (See "Cryptocurrencies stand to benefit from new regulators and a receptive Congress.")
The future of Biden's AI executive order
In October 2023, President Joe Biden issued a sweeping Executive Order on AI (AI Order), which his administration promoted as a means of establishing safety and security standards for AI while protecting privacy, promoting civil rights, and fostering innovation. However, the bulk of the AI Order was a set of instructions to various federal agencies requiring them to study and produce reports on the impact of AI and, in some cases, issue guidance on safe AI implementation.
The AI Order also invokes the Defense Production Act (DPA) to require companies developing AI models that pose significant risks to national security, national economic security, or national public health and safety to notify the federal government when training such models and to share the results of all red-team safety tests (i.e., tests conducted in a controlled environment to discover flaws and vulnerabilities in AI systems).
The Republican platform vowed to "repeal Joe Biden's dangerous executive orders that stifle AI innovation and impose radical leftist ideas on the development of this technology," adding that "Republicans support AI development rooted in free speech and human flourishing."
It remains to be seen which parts of the AI Order President-elect Trump will repeal, especially since some aspects, such as the national security guidelines, have bipartisan support. However, Republicans have criticized the requirements placed on AI developers through the DPA as too prescriptive and anti-innovation, so those requirements could be targets for repeal.
A November 2020 memo issued by the Office of Management and Budget (OMB) at the end of President-elect Trump's first term also indicates that the incoming Trump administration is likely to choose a looser regulatory approach. The memo, "Guidance for the Regulation of Artificial Intelligence Applications," takes an innovation-friendly approach to AI, stating that federal agencies "must avoid" regulatory or nonregulatory actions that unnecessarily impede innovation and growth in AI.
We also expect the Trump administration to focus less on issues of bias and discrimination related to AI. As just one example, OMB guidance implementing the AI Order directed agencies to establish safeguards that consider AI's impact on factors contributing to algorithmic discrimination and disparate impact, particularly when evaluating AI, and to ensure that AI advances equity, dignity, and fairness.
The Trump administration may conclude that such requirements do not need to be included in agency AI safeguards.
More generally, certain AI-related enforcement priorities are likely to be scaled back at the federal level, including oversight by the Federal Trade Commission (FTC). During the Biden administration, the FTC targeted deceptive and unfair uses of AI in several enforcement actions, and in many of them applied an "algorithmic disgorgement" remedy, which forces the deletion of algorithms developed using illegally collected data. (See our January 2, 2024 client alert, "FTC Proposed Order Proposes Blueprint for AI Deployment.")
AI legislation
Given that there is no consensus, even within each party, on what omnibus federal AI legislation should look like, such a bill is unlikely to be enacted in the near future.
However, several narrower AI bills currently pending in Congress have bipartisan support and could provide an early preview of how the next four years will play out:
- The AI Advancement and Reliability Act (H.R. 9497) and the Future of Artificial Intelligence Innovation Act (S. 4178), which would authorize the creation of the AI Safety Institute, a group within the National Institute of Standards and Technology (NIST) that evaluates, tests, and develops guidelines for AI models.
- The CREATE AI Act (H.R. 5077; S. 2714), which would make permanent the National Science Foundation's National AI Research Resource pilot program, currently scheduled to run until January 2026, which provides tools for AI research.
All of these bills have passed out of committee, but it remains to be seen whether the Trump administration will support any of them or seek amendments.
Role of the states
In the absence of comprehensive federal laws and regulations regarding AI, we expect states to take a more active role in developing state-specific AI regulations. These could range from comprehensive laws like Colorado's AI law to targeted laws like Tennessee's law protecting against deepfakes.
It is also possible that states will take a more aggressive approach to regulating the use of automated decision-making technology. This may include laws regarding:
- Tests that AI developers must perform before releasing a particular model.
- Disclosures developers must make regarding the safety of their models.
- A duty for those deploying AI models to inform users that they are interacting with an AI model.
- Limitations on how AI can be used.
Elon Musk’s role
One of the wild cards in evaluating AI policy under the second Trump administration is Elon Musk's role in shaping it. Musk has emerged as a trusted advisor to President-elect Trump and has been tapped to help guide a planned advisory body, the Department of Government Efficiency. He has long expressed concerns about the unchecked power of AI and supported a California bill that would have imposed various obligations on developers of advanced AI. That bill was vetoed by Gov. Gavin Newsom.
But Musk, who also founded his own AI company, xAI, whose chatbot has been faulted for lacking guardrails against disinformation and hate speech, has himself criticized other AI companies for their liberal bias. The AI policy views that Musk conveys to Trump could therefore help shape the Trump administration's stance on AI.
(See also “Increased investment in AI requires financial sponsors to address unique risks”)