States across the country are weighing how to balance artificial intelligence regulation with innovation. Virginia is no exception.
In 2025, around 1,000 artificial intelligence-related bills were introduced across the US, 29 of them in Virginia alone. One proposed consumer protection bill, HB 2094, focused on creating requirements for the development, deployment and use of “high-risk artificial intelligence systems.” That classification covers cases where the technology is used in contexts such as financial and healthcare services, where legal benchmarks help protect users from bias and algorithmic discrimination.
Gov. Glenn Youngkin vetoed the bill this spring, saying it would hinder small businesses and startups and curb innovation.
This was the second time Del. Michelle Maldonado (D-District 20) had advanced the legislation, and she anticipated a veto from the governor.
Lawmakers saw the mental health fallout from minimal social media regulation, she said, and want to “respond to it aggressively” this time.
“What I’m proposing is that we don’t wait to see how it evolves,” Maldonado told Technical.ly. “We would have been engaged in a very thoughtful conversation from the start.”
While Colorado’s measure still faces delays and revisions, Virginia would have become the second state to adopt comprehensive AI rules. Maldonado plans to introduce the bill a third time next year.
“When you have a really complicated bill, and it’s the first bill of its kind, it often takes multiple attempts before it becomes law,” she said. “I don’t think this bill is an exception.”
Avoiding an expensive legal patchwork
Maldonado, along with around 200 other lawmakers, is part of the Multi-State AI Policymakers Working Group.
Members of the group have introduced bills similar to Virginia’s across the country. In nearby Maryland, one passed its first reading during the most recent legislative session.
We don’t want to step on our ability to be innovative and creative, but we want to make sure we’re protecting people along with our businesses.
Del. Michelle Maldonado
This interstate collaboration makes sense to Nate Lindfors, policy director for Engine, a policy-focused entrepreneurship nonprofit. But he sees room for lawmakers to tweak the bill.
Lindfors, whose employer supported Youngkin’s veto, likens the issue to what businesses already face navigating the country’s 19 comprehensive state-level data privacy laws.
“The legal patchwork is really expensive for startups,” Lindfors told Technical.ly.
According to an analysis by the Chamber of Progress, a tech industry coalition, completing the impact assessments and other documentation required by HB 2094 could cost a total of $290 million.
“That number alone will price out the small innovators looking to work in the state,” said Brianna January, the organization’s director of state and local government relations for the Northeast. “It’s an ironic, unintended consequence, given that we’re trying to reduce all sorts of potential bias in AI.”
Plus, robust documentation does not necessarily reduce the risk of harm, according to Gillian Hadfield, a computer science professor with an appointment at Johns Hopkins’ School of Government and Policy.
She calls this trend of conflating recordkeeping with harm mitigation “lawyers’ disease,” and she sees it in regulatory approaches like the EU AI Act.
“We don’t have much evidence of how well it works,” Hadfield told Technical.ly.
Entrepreneurs are looking for sector-by-sector approaches
Maldonado’s bill took a horizontal approach, applying across industries wherever AI is used for high-risk activities and consequential decisions. She viewed it as a “narrower” approach than other AI laws because it applies only to highly protected areas.
She also doesn’t expect lawmakers to dictate exactly how every sector should implement regulations.
“I don’t think anybody wants us as lawmakers telling every sector how to do its job well,” she explained. “I think the job of the legislature is to establish a framework and guardrails. We don’t want to step on our ability to be innovative and creative, but we want to make sure we’re protecting people along with our businesses.”
But because each industry uses these platforms differently, Chamber of Progress’ January believes AI regulation calls for a more nuanced approach.
“It comes down to the variety of uses. Different uses mean different potential biases to focus on, rather than one big omnibus approach,” she said.
Todd O’Boyle, senior director of technology policy for the Chamber of Progress, said consumer protections already exist for many of these “high-risk” systems, such as housing. Virginia’s fair housing law makes it illegal to discriminate against tenants based on protected classes, and AI hasn’t changed that, he said.
O’Boyle did say, however, that he would support closing any potential AI-related loopholes in those existing laws.
Taylor Barkley agrees. The public policy director at the Abundance Institute, a think tank identified in 2024 as funded by the Koch brothers that promotes light-touch AI regulation, believes a technology as broad as AI can’t be regulated across every sector at once.
It comes down to the variety of uses. Different uses mean different potential biases to focus on, rather than one big omnibus approach.
Brianna January, Chamber of Progress
“It would be like regulating how people use spreadsheets. People use spreadsheets for all kinds of different applications and use cases,” Barkley told Technical.ly. “If there’s a law that prohibits discrimination via a spreadsheet, that’s a good thing. But if you go beyond that, how does that innovation travel?”
Instead, Barkley is a fan of consumer-focused legislation introduced through a narrower lens. For example, a bill just signed into law in Utah, home to a robust AI learning laboratory, addresses liability requirements and technology disclosures for mental health AI chatbots.
Future AI regulations
As AI is constantly changing, laws and regulations need to be flexible, according to Hadfield at Johns Hopkins University.
“AI is going to change the way we do everything,” she said.
She advocates for governments to use multistakeholder regulatory bodies, and she helped craft a proposed bill in California that would require them. That means a joint public-private approach to determining what regulatory mechanisms should look like, how to get there and the best way to achieve outcomes, including testing AI models.
Participation wouldn’t be required under the bill, but parties that do take part would get a safe harbor if injuries or damages occur.
“We want to see more state laws that get creative about new approaches,” Hadfield said. “A measured approach, the right approach to regulation, rather than: Hey, we’ve done all this paperwork to show you we take the standards seriously.”