The rules and regulations surrounding AI can seem as bewildering as the wild hallucinations spewed out by large language models.
Roughly 100 state laws and proposed regulations have emerged over the past several years to fill the void left by the lack of federal standards. President Trump signed an executive order this month that aims to simplify things by asserting federal oversight and rolling back the patchwork of state rules.
However, the federal framework for AI is still under development. And officials say Trump’s executive order will almost certainly face legal challenges.
For companies looking for guidance and predictability when developing their AI plans, 2026 seems unlikely to bring peace of mind.
“This means additional uncertainty for my clients, both AI innovators and Fortune 500 companies looking to implement AI,” says Danny Tobey, head of the Americas AI and data analytics practice at law firm DLA Piper.
The lack of clarity stems largely from tensions between the federal government and state governors and legislatures over who controls how AI companies develop their technology, which is rapidly reshaping jobs, changing how companies hire, and raising public concerns about privacy and consumer protection. A further complication for any attempt to regulate AI is the technology’s outsized influence on the country’s stock market: many of the largest companies by market capitalization, including Nvidia, Alphabet, Microsoft, Amazon, and Meta Platforms, have seen their valuations soar on investor enthusiasm as AI adoption accelerates.
Mel Walker, data and AI practice leader at accounting firm CohnReznick, says the federal government has moved slowly to develop AI guidelines, though some of that hesitation likely reflects how quickly the technology is evolving. Meanwhile, the scattered patchwork of state-level oversight, led by states such as California and Colorado, has been difficult for companies to track.
“We need to make sure it doesn’t become too onerous and cumbersome for management to comply with, or it will completely stifle innovation,” Walker says.
She says conversations between government and private-sector officials have noticeably increased since AI startup Anthropic revealed last month that it had thwarted a major AI-driven cyberattack, possibly by a Chinese state-backed group. Anthropic CEO Dario Amodei called for stronger regulation of AI. “I think the nature of what happened with Anthropic created a lot of excitement, a sense of urgency, in the space,” Walker says. “This case will continue to make headlines until the United States decides what to do about regulation.”
States that have been active in regulating AI include New York, which requires employers to disclose the role of AI in layoffs, and California, where a law signed in September by Governor Gavin Newsom requires some AI developers to disclose safety protocols and provides protections for potential AI whistleblowers.
“We know there are a lot of great technology companies in California,” said Wende Knapp, employment and labor practice leader at law firm Woods Oviatt Gilman. “I think you’ll see (AI) continue to be heavily monitored, and I think other states will follow suit from a data privacy standpoint.”
Some governors have signaled that they do not intend to cede AI oversight to the executive branch. “Executive orders cannot and do not preempt state legislative action,” Florida Gov. Ron DeSantis wrote on X. Utah Gov. Spencer Cox told NPR last month that he is “very concerned about any intrusion by the federal government into states’ ability to regulate AI.”
Regulation is in flux not only in the U.S. but also in Europe, where officials are reportedly considering several changes that would weaken the AI Act passed last year.
As business leaders await more clarity from regulators, most chief information officers and chief technology officers rely on two frameworks to guide their AI policies. The ISO 42001 standard guides international companies seeking to comply with the European Union’s AI rules, while domestically focused companies tend to rely on the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework.
“All the CIOs and CTOs I talk to at these large public companies are basically using either NIST or ISO 42001 as their baseline framework,” says Bhavesh Vadani, a partner at CohnReznick. “That way, you may be able to meet most if not all of the states’ requirements.”
“Despite the uproar over AI regulation, smart companies are still building safety, transparency, and trust into their AI by design, because regardless of whether there are AI-specific laws, there will always be ways for people to use these systems to cause harm,” Tobey said.
Burkhard Boeckem, chief technology officer at industrial technology company Hexagon, is pushing for stricter boundaries and regulations to oversee “physical AI,” including the Stockholm-based company’s own efforts to develop humanoid robots. “Physical AI needs to be held to a higher standard, because if something goes wrong, you can see the real-world impact,” Boeckem says.
For most of the technology it develops, Hexagon designs to the most stringent regulatory requirements so a single product can be sold anywhere in the world. But with AI, where the technology is evolving rapidly, Hexagon may make an exception and develop broader AI capabilities for the U.S. market than what is allowed in Europe.
“Ultimately, I can only imagine that a piecemeal approach like this will slow down the industry as a whole,” Boeckem says.

