As the U.S. government prepares to implement restrictions on state artificial intelligence laws under President Donald Trump’s recent executive order, Michael Kratsios, the White House director of science and technology policy, suggested to U.S. lawmakers there are scenarios in which sector-specific AI regulation is both viable and necessary.
Kratsios offered few details when asked about specific elements of the new order, such as how “onerous” state AI provisions would be defined, during an appearance before the U.S. House of Representatives’ Committee on Science, Space and Technology’s Research and Technology Subcommittee on January 14. He instead encouraged Congress to work with the administration on AI, deferring answers to Trump administration colleagues and previous guidance, while saying little about the specific role lawmakers should play in enacting the federal legislation the executive order calls for.
“We want to create a regulatory environment that provides a level of clarity and a level of understanding for all innovators, and the most important part of that is disseminating and working toward a use-case, sector-specific approach to AI regulation,” he said.
“Creating one-size-fits-all regulations for AI is not the best way to deal with all these new AI technologies,” Kratsios continued. “For example, those developing AI-based medical diagnostics should continue to be regulated by the FDA. Those developing drones should continue to be regulated by the FAA.”
Kratsios said the National Institute of Standards and Technology has a strong role to play in setting standards for trustworthy AI, adding there are areas where lawmakers and the administration can provide clarity.
His first appearance before Congress since the executive order was signed offered a window into the administration’s views on AI issues heading into 2026.
Scope of presidential order
There has been little movement on the federal side since Trump signed the order, which directs agency action to limit the impact of state AI laws. The Justice Department’s AI Litigation Task Force, charged with challenging state AI laws in court, was established within the order’s 30-day deadline. Other deliverables under the order are due 90 days after signing.
At the subcommittee hearing, lawmakers from both parties explored next steps in the wake of the order. Rep. Jay Obernolte, R-Calif., believes both states and the federal government have a role in regulating AI, but said the federal government should first step up, establish its own role and help states understand theirs.
“I think what everyone believes is that there should be federal lanes and state lanes, and that the federal government first needs to work on defining what, under Article I of the Constitution, constitutes interstate commerce, and where those preemptive guardrails are,” he said.
Kratsios said he remains opposed to state-level regulation because it could hurt small developers unable to meet a patchwork of compliance requirements. He reiterated that the order does not target “legitimate” state actions related to child safety, AI computing and data infrastructure, and state government procurement.
But Kratsios deferred when Rep. Don Beyer, D-Virginia, asked what authority his office would have in determining how states may govern AI or which state laws qualify as burdensome. He said most of that work would fall to the Department of Commerce.
“It’s a process that has to be determined,” he said of how burdensome state laws would be defined.
Kratsios also reiterated the White House’s desire for a national framework and encouraged lawmakers to work with groups like the AI Education Task Force.
NIST’s role, AI standards
Kratsios expressed support for the mission of NIST and its Center for AI Standards and Innovation, formerly the AI Safety Institute, saying that creating standards people can trust is “absolutely critical.” But he stopped short of saying whether the center should be codified through Obernolte’s forthcoming bill.
NIST’s role remains uncertain as Congress debates how much funding the agency should receive after it lost staff early last year. The administration proposed cutting NIST’s funding in its latest spending request, but appropriators voted in early January to increase it.
Rep. Suhas Subramanyam, D-Virginia, said NIST lost 400 employees last year and asked how Kratsios reconciled those cuts with the agency’s importance. He also asked what role government should play in mitigating AI risks.
Kratsios said he was not familiar with those cuts but said the agency has a “very important role” in setting advanced metrics for model evaluation that can be used across industries.
“We want to trust them, whether it’s a medical model or something else, so that when they’re used by everyday Americans, they can rest in the knowledge that it’s been tested and evaluated,” he said.
Kratsios also said NIST should be “depoliticized,” a goal the Trump administration set in its AI Action Plan by directing the removal of references to bias and discrimination from the agency’s internationally referenced AI Risk Management Framework.
“Injecting political rhetoric into their research devalues and corrupts the broad work that NIST is trying to do across many important scientific fields,” Kratsios said.
How to deal with AI abuse
Lawmakers also sought insight into how the administration views the misuse of AI, pointing to the federal government’s recently announced partnership involving Grok, the AI chatbot from X that has frequently been in the spotlight.
The chatbot has come under fire for producing explicit, nonconsensual deepfakes; X has said the tool will no longer be able to do so after international regulators opened investigations. The U.S. military recently announced a partnership with Grok to expand its use of AI.
Kratsios deferred questions about the contracts and the April 2025 guidance document on governmentwide procurement to the U.S. General Services Administration.
He said the Trump administration is committed to protecting children’s safety and privacy online, but that “the misuse of AI tools requires accountability for harmful or inappropriate uses, not necessarily blanket restrictions on the use and development of the technology.” Federal employees found to have misused AI products will be held accountable, he said.
Caitlin Andrews is a staff writer at IAPP.

