Virginia Governor Glenn Youngkin vetoed a bill this week that would have regulated “high-risk” artificial intelligence systems. HB 2094, which passed narrowly through the state legislature, would have imposed regulatory measures similar to those established by last year’s Colorado AI Act. At the same time, the Colorado AI Impact Task Force has raised concerns about Colorado’s law, and Texas recently revised its proposed Texas Responsible AI Governance Act.
The Virginia bill, like the Colorado law, would have imposed various obligations on businesses that develop or deploy high-risk AI systems affecting consequential decisions about individuals in areas such as employment, lending, healthcare, housing, and insurance. These obligations included conducting impact assessments, maintaining detailed technical documentation, adopting risk management protocols, and providing individuals with the opportunity to appeal adverse decisions made by AI systems. Companies would also have been required to implement protections against algorithmic discrimination. Youngkin, like Colorado Governor Jared Polis, was concerned that HB 2094 would hamper the AI industry and Virginia’s economic growth. He also said that existing laws governing discrimination, privacy, data use, and torts can be used to protect the public from potential AI-related harms. Polis ultimately signed the Colorado law; Youngkin did not sign HB 2094.
Although Polis signed the Colorado law last year, he urged lawmakers in a signing statement to evaluate the AI Act and provide additional clarity and revisions. Last month, the task force issued a report with its recommendations. The task force identified potential areas where the law could be clarified or improved, dividing them into four categories: (1) changes on which there is consensus; (2) issues where consensus requires additional time and stakeholder engagement; (3) issues where consensus depends on resolving multiple interconnected questions; and (4) issues on which there is “firm disagreement.” The first category consists of a handful of relatively minor changes. The second includes, for example, clarifying the definition of “consequential decision,” which determines which AI tools are subject to the law. The third includes, for example, the definition of “algorithmic discrimination” and the obligations developers and deployers should have to prevent it. The fourth includes, for example, whether to provide an opportunity to cure instances of non-compliance.
Texas, like Colorado and Virginia, is considering legislation that addresses high-risk AI systems that are a “substantial factor” in consequential decisions about people’s lives. The bill was recently revised to remove the concept of algorithmic discrimination; as currently drafted, it instead bans AI systems developed or deployed with the “intent to discriminate.” It was also revised to state explicitly that disparate impact alone is not sufficient to show an intent to discriminate. The proposed Texas law now resembles Utah’s AI law (which took effect on May 1, 2024) in requiring disclosure when an individual is interacting with an AI system (though only for government agencies), and it prohibits the intentional development of an AI system to incite “harm or criminality.” The bill was filed on March 14 and, at the time of this writing, was pending in committee in the Texas House.
Putting it into practice: The veto of HB 2094 highlights the complicated path to comprehensive AI regulation at the state level. We anticipate continued activity at the state level, and it may be some time before we see a consensus approach to AI governance. As a reminder, AI laws focusing on various aspects of AI are already on the books in New York (likeness and employment), California (several different topics), Illinois (employment), and Tennessee (likeness), set to take effect at various times between 2024 and 2026, with similar bills pending in at least 17 states.