Editor’s Note: This is part 2 of a two-part series. You can read part 1 here.
“Early Adopters” may set the tone of future laws
The scope and approach of AI regulations remain largely up in the air, but as with data privacy, the first few major laws that pass will almost certainly serve as reference points for additional laws.
Looking back at the first ten data privacy laws enacted by US states, each successive law borrowed heavily from its Colorado and Virginia predecessors. The CCPA may have been one of the first, but it was considered too burdensome for businesses, as it took clear inspiration from the European Union’s General Data Protection Regulation, and other states were not comfortable following suit.
Once the Colorado and Virginia laws were enacted, other states had models for introducing laws that could attract enough support to pass (albeit with variations unique to each state), and a wave of new laws quickly followed. Although some provisions of the Colorado and Virginia laws were relatively unfamiliar at the time, such as rules regarding the use of “dark patterns” and the requirement to honor a universal opt-out mechanism for consent, many of these aspects became part and parcel of the state laws that followed. Many states continue to update their laws to incorporate rules inspired by other states; Minnesota, for example, passed its Consumer Data Privacy Act only last year but has since introduced revisions with strong similarities to Washington’s My Health My Data Act.
Similarly, AI regulation is developing through a combination of amendments to existing laws and new, narrowly tailored laws, creating a kind of “feedback loop.” Rather than face the challenge of garnering the widespread (and often tenuous) support needed to pass comprehensive laws, states are moving quickly to expand existing laws and to pass narrower AI-specific laws.
As an example of an expansion of existing law, the California Privacy Protection Agency board this month moved closer to finalizing its rulemaking package on automated decision-making technologies, cybersecurity audits, and risk assessments mandated by the California Privacy Rights Act of 2020.
That rulemaking process has been so prolonged that much of its original scope was overtaken by developments in AI, resulting in a rulemaking package quite different from what was originally expected.
As for AI-specific regulation, Colorado’s Artificial Intelligence Act (which will take effect on February 1, 2026) was the first omnibus-style state law, and despite facing the same criticism that doomed Virginia’s AI bill, it managed to pass. Other states have already modeled some of their proposed laws on Colorado’s, with many bills regulating “high-risk systems” and aiming to prevent “algorithmic discrimination,” in other words, ensuring that AI systems do not produce discriminatory outcomes when used to make “consequential decisions” that have a significant impact on areas such as employment, finances, or insurance.
These AI bills also follow Colorado in distinguishing between developers (creators) and deployers (users) of AI systems, with each subject to its own obligations. However, as with data privacy laws, states are not in full agreement on every aspect of regulation or how far it should go. One notable area of debate is whether developers or deployers should be responsible for monitoring algorithmic harms. Colorado’s AI law requires developers to remain accountable for known or foreseeable risks within their AI systems, including a requirement to report to both the state attorney general and known deployers within 90 days of discovering or learning that algorithmic discrimination has occurred. Virginia’s bill also had reporting requirements, but they were not as broad as Colorado’s. At this point, most bills lean toward placing accountability on deployers of AI systems, although almost half of the proposed bills place accountability on developers as well, either jointly or separately.
While other states could ultimately advance more comprehensive legislation, as Colorado and Virginia did, a key difference between AI regulation and data privacy regulation is the sheer number and variety of considerations involved in AI. Data privacy laws center on the same basic principles and issues, including governance, notification, consent, individual rights, third-party management and data sharing, data security, and retention. With AI, however, even the types of systems within scope are inconsistent. Some bills regulate high-risk systems used for automated decision-making (similar to the Colorado and Virginia laws), while others broadly cover all AI systems; still others narrowly target generative AI systems.
Nevertheless, states are likely to look to Colorado and Utah for inspiration and consensus (along with California and Virginia, these states tend to be at the forefront of technology policy development), even when new laws are not comprehensive. For example, Utah’s 2024 Artificial Intelligence Policy Act is narrower than Colorado’s AI Act, but it includes a unique section establishing an “AI Learning Lab Program” that allows parties interested in using AI to apply to the state to live-test AI technology, effectively creating a sandbox environment with temporary exemptions from regulatory fines. Utah is striving to balance promoting innovation and fostering ongoing dialogue between businesses and policymakers with ensuring reasonable consumer protection. Other states will likely monitor the success of this program closely.
Ultimately, many states have bills in committee that cover familiar issues (some of which have already been discussed here), but there are also significant variations in accountability requirements, such as: establishing governance structures and documentation for AI programs; performing risk or impact assessments, both before deploying an AI system and periodically thereafter; providing notifications regarding AI usage; and reporting when negative impacts of AI occur.
A comprehensive approach to AI regulation is unlikely
Despite demands for federal regulation, Congress is unlikely to pass comprehensive laws, opting instead for much narrower or sector-specific measures. Over the past five years, every omnibus federal data privacy bill has withered on the vine, and the same is already happening with AI.
Congress has introduced over 100 bills related to AI, but with the exception of a few outliers, it is unlikely that many of these will become law. As at the state level, the issues addressed by these federal bills vary: some focus on transparency and accountability, while others focus on consumer protection. Some target specific industries (marketing, genetics, healthcare, education). Some focus on national defense. Others deal more broadly with research and innovation practices. In addition to the difficulty of fitting such a wide range of issues into a single law or set of laws, many of the same obstacles that have stalled federal data privacy legislation stand in the way of future federal AI laws.
At the international level, while many countries modeled their data privacy laws on the European Union’s GDPR, the same has not happened with AI laws. This is probably because most jurisdictions broadly prioritize innovation, whereas AI laws to date have focused on preventing AI risks and harms. Governor Youngkin expressed this sentiment in his veto message, arguing that the role of government in safeguarding AI practices should be to enable innovators to create and grow, not to stifle progress or place undue burdens on the Commonwealth’s many business owners. Similar to developments among US states, many countries are adopting AI governance frameworks and amending existing laws rather than passing comprehensive new laws. These amendments cover a wide range of issues, including consumer protection, cybersecurity and national defense, banking and finance, data privacy, healthcare (biometrics and genetics), and intellectual property. Another similarity is that many of these countries, particularly in Europe, have established task forces and working groups to codify national approaches to AI governance, defining national strategies, principles, ethics, and guidance.
What can we learn?
AI is novel and uniquely complex in many ways, but the development of AI regulation is likely to follow much of what we have already seen with data privacy. Progress so far may seem slow, but the pace of new laws and regulations could pick up considerably within the next year or so.
Furthermore, many of the core principles of data privacy already fit the purposes of AI, including accountability and monitoring, impact assessments, transparency and notification, choice and consent, the ability to exercise individual rights and challenge decisions, and protection from harm. There are still differences and uncertainties, but organizations that recognize this and create AI governance programs, policies, processes, and frameworks for their AI operations will be well positioned to “keep pace” with whatever the future brings.