There is a commonly held belief that legal and ethical regulation cannot keep up with the pace of technology. The conviction is so widespread that it has been given a colloquial name: the "pacing problem."
The term and concept were coined nearly 20 years ago, but the recent proliferation of generative artificial intelligence has renewed the question of whether lawmakers can enact laws appropriate to such complex emerging technologies.
That’s a fair question. ChatGPT was released at the end of November 2022, yet relatively few AI regulations have passed at the state or federal level. And as Virginia Gov. Glenn Youngkin (R) demonstrated last month with his veto of HB 2094, even bills that appear all but certain to become law can fall short.
Despite considerable lingering uncertainty, there are many parallels between how data privacy laws took shape five years ago and how AI laws are developing today. We can therefore look to the evolution of data privacy laws to understand what to expect from future AI regulation. In doing so, organizations can make informed decisions about an approach to AI that is adaptable and can "keep pace" with future compliance requirements.
Today’s AI regulations resemble the first ripples of the data privacy wave
Currently, AI regulations are relatively sparse, but compared to where data privacy regulations stood five years ago, that’s not surprising. At the start of 2020, 14 states had introduced new privacy bills, but only two states had enacted new privacy laws: one quite narrow, and the other California’s Consumer Privacy Act, or "CCPA." In 2021, Colorado and Virginia enacted new laws that were similar to, though not as expansive as, the CCPA. In 2022, two more states enacted laws along the lines of Colorado’s and Virginia’s. Seven states then enacted laws in 2023, and eight more followed in 2024, all of which aligned broadly with one another.
As these states passed new data privacy laws over the years, a consensus gradually developed around the scope and approach of the regulatory requirements. The bills pending in 14 states in 2025 largely follow this consensus, as do the bills in four states that have passed one legislative chamber.
Applying what we know about the evolution of data privacy laws to the current state of AI regulation suggests the same pattern may play out. Many states will either introduce bills that fail at first or pass narrow ones. But as a few states push comprehensive laws through, a consensus will begin to form, giving more and more states momentum to enact laws of their own.
This process is already under way with AI. In 2024, 18 states introduced broad new AI regulations, and three of them enacted narrower bills. At the time of this writing, more than 20 states have introduced AI bills in the current session. Some have already failed, and the rest have yet to be enacted, but Colorado has passed the only omnibus-style law (explained below), and Virginia was expected to become the second state to enact a similar law until Youngkin’s veto; that enactment would have created momentum for more laws to follow.
However, there are some notable differences between how data privacy laws developed and how AI regulations are developing.
First, while the data privacy laws enacted over the past five years were fundamentally new in many respects, bearing little resemblance to existing laws, many states are now trying to fold AI regulation into existing legal frameworks. Governor Youngkin even cited these existing laws in his veto of the Virginia bill: "There are many laws currently in place that protect consumers and place responsibilities on companies relating to discriminatory practices, privacy, data use, libel, and more," he said.
Many states may continue to pass amendments or narrow laws as short-term approaches to AI issues until better understanding and consensus can develop. For example, Utah (despite its short legislative session) has passed several bills requiring disclosure of the use of generative AI services and chatbots in occupations that require a state-granted license. Currently, most state bills and laws can be grouped into a few categories:
- Consumer protection where AI is used for profiling and automated decision-making.
- The use of AI in hiring and employment contexts.
- Deceptive media, or "deepfakes," further subcategorized by the types of individuals depicted (e.g., public figures and minors) and the activities involved (e.g., election-related or sexually explicit content).
- Forming AI task forces or groups dedicated to understanding the impact of AI.
Second, in contrast to data privacy laws, which developed more or less organically, AI policymakers have actively sought to organize nationwide to develop a more harmonized approach to AI regulation. However, one of the most active groups, the Multistate AI Policymakers Working Group, a bipartisan assembly of more than 45 state legislators convened by the Future of Privacy Forum, was curtailed after the forum came under pressure over its support and withdrew. As a result, the momentum the states were building toward a consistent nationwide approach to regulating AI has effectively stalled.