In one of the first major steps toward discussing broad federal regulation of artificial intelligence, members of the House Subcommittee on Commerce, Manufacturing and Trade met on May 21 to discuss the United States' position in the global AI race.
"We are here today to determine how Congress can help America grow this key industry, which is critical to America's competitiveness and jobs, without losing the race to write the global AI rulebook," said Rep. Gus Bilirakis, a Republican from Florida and chairman of the Innovation, Data, and Commerce Subcommittee.
During the two-and-a-half-hour hearing, subcommittee members discussed the European Union's groundbreaking AI law that took effect last year, the expansion of state laws on AI, how to maintain American leadership in AI, and a proposed moratorium on those state laws.
Federal guidelines or regulations on AI technology drew bipartisan support in the last Congress, and the Bipartisan House Task Force on Artificial Intelligence released its research and findings in December. However, many Republicans who supported those efforts in the past have argued that a moratorium on state laws would give Congress time to pass a unified federal set of guidelines.
Rep. Jay Obernolte, a Republican from California, said the more than 1,000 AI-related state laws introduced this year have created urgency to develop federal guidelines. The states now have "creative agency" over AI regulation, he said.

"The states got out ahead of this. They feel creative ownership of their frameworks, and that is what's keeping us from doing this now," Obernolte said. "This is an object lesson in why the moratorium is needed to keep that from happening."
Critics of the moratorium questioned why laws at the state level would prevent federal guidelines from being created.
Rep. Kim Schrier, a Democrat from Washington, said stripping states of their ability to legislate AI without a federal framework in place would be a "great gift" from Republicans to Big Tech. The proposed moratorium on state AI laws would pause pending legislation and override existing laws.
"This pattern of gifts and giveaways to Big Tech by the Trump administration, with the cooperation of Republicans in Congress, is hurting American consumers," she said. "Instead, we should learn from the work our state and local counterparts are doing now to achieve well-thought-out, robust laws, and give American businesses the framework and resources they need to succeed while protecting consumers."
House members opposed to AI regulation often cited the lack of regulation as one of the reasons the US currently leads the global AI market. The US ranks first, testified Marc Bhargava, a director at global venture capital firm General Catalyst, but China is close behind in its computing power and its AI models.
Sean Heather, senior vice president for international regulatory affairs and antitrust at the Chamber of Commerce, testified that laws closely mirroring the European Union's AI Act, which took effect last summer, could knock the United States out of its top position. The EU's AI Act is comprehensive and places regulatory responsibility on AI developers to mitigate the risk of harm caused by their systems. Developers must also provide technical documentation and training summaries.
The EU AI Act is one factor in why Europe is not a stronger player in AI, Bhargava said, but it is not the only one. He said the US has a history of investing in science and innovation and of being friendly to tech startups and immigrant founders: 46% of Fortune 500 companies in 2024 were founded by immigrants, as were 65% of top AI companies. Europe has not pursued these business-friendly policies, Bhargava said.
"The reason we're ahead today is our startups. We have to think about how we keep giving them that edge, and giving them that edge means giving them guidelines, not necessarily a patchwork of state regulatory frameworks or overregulation," Bhargava said. "We need to come up with the right balance."
Currently, American AI companies self-govern, meaning they test their models for some of the societal and cybersecurity risks that many lawmakers want written into law. Most investors also follow their own vetting strategies, Bhargava said. General Catalyst evaluates a model's outputs as well as its data sets and training methods, and it asks AI companies to identify potential downstream implications that could come from the model.
Bhargava and several committee members said they fear that overly strong regulations, which would place an EU-style regulatory burden on developers, could crush the next great tech startup before it can gain footing.
But altogether, the lack of laws leaves Americans in a dangerous place, said Rep. Kathy Castor, a Democrat from Florida. She cited concerns about interactions between minors and unregulated AI, such as a 14-year-old in her state who took his own life after forming a close relationship with a chatbot, and another 14-year-old who engaged in sexual conversations with a Meta chatbot.

"What the heck is Congress doing?" Castor said. "What are you doing taking the cops off the beat while the states act to protect us?"
Amba Kak, co-executive director of the AI Now Institute, which studies the social implications of AI, said she was skeptical of allowing the industry to self-govern or to grow unchecked. During the hearing, she said, members asserted that existing agencies or general regulations would protect Americans from the harms of AI.

"But if that were true, then we wouldn't be seeing the reckless spread of AI applications like these that exploit children," she said.
While Congress is in the early stages of considering a federal framework, Bhargava said states have passed their existing AI laws with the "best intentions" in mind.

"People want to protect consumers. They want to create a framework," he said. "And it's partly because the federal government hasn't stepped up with a framework of its own that regulation has been left to the states."
Bhargava "strongly" encouraged committee members to work together on a bipartisan framework that incorporates the findings of last year's Bipartisan House Task Force.

"I really think it would be best if we could turn this into policy and enact it at the federal level rather than leave it to the states," Bhargava said. "It would be of the greatest benefit to the startups we represent."