In the absence of a comprehensive federal artificial intelligence (AI) law, regulatory activity shifted to the states in 2024, creating a patchwork of laws that will shape the AI landscape. Utah, Colorado, and California have emerged as pioneers, each enacting a distinct approach to managing the development and deployment of AI systems. These state laws are more than a stopgap: they set precedents that are likely to influence the trajectory of national AI regulation in 2025 and beyond.
In this advisory, “AI Developer” refers to those who create AI tools and systems, and “AI Adopter” refers to those who implement AI tools and systems. However, the specific definitions of these terms may vary depending on the various laws discussed below.
Summary of state laws
1. Utah Artificial Intelligence Policy Act (UAIP)
Effective date: May 1, 2024
Primary focus: Transparency and consumer protection in the use of generative AI
Scope: Entities using generative AI with consumers in Utah
Key requirements:
- Establishes disclosure requirements: regulated professions must "conspicuously" disclose their use of generative AI at the outset of a customer interaction, and other entities must disclose their use of generative AI if asked by a consumer
- Prohibits relying on the use of generative AI as a defense to consumer protection violations
- Creates the Office of Artificial Intelligence Policy and the AI Learning Laboratory Program; Learning Lab participants may enter a "regulatory mitigation agreement" with the Office, receiving up to 24 months of regulatory relief while developing and testing innovative AI technologies
2. Colorado Artificial Intelligence Act (CAIA)
Effective date: February 1, 2026
Primary focus: Regulating high-risk AI systems and algorithmic discrimination
Scope: Developers and adopters of high-risk AI systems in Colorado
Key requirements:
- Applies to "high-risk" AI systems that make "consequential decisions" in sectors such as education, employment, and healthcare
- Developer obligations: provide detailed documentation to adopters; disclose their high-risk AI systems on their websites; notify the Attorney General within 90 days of discovering algorithmic discrimination
- Adopter obligations: notify consumers of high-risk AI use before a decision is made; provide reasons for adverse decisions and allow appeals; conduct impact assessments and annual reviews; establish a risk management policy
Note that this law may be amended before its effective date; Governor Polis expressed concern about its potential to stifle innovation when he signed it.
3. California AI Transparency Act
Effective date: January 1, 2026
Primary focus: Transparency and detection of AI-generated content
Scope: Generative AI providers with over 1 million monthly users in California
Key requirements:
- Provide free AI detection tools for image, video, or audio content
- Offer a "manifest" (visible) disclosure identifying AI-generated content
- Include "latent" disclosures in the metadata of AI-generated content
- Ensure that licensees maintain these disclosure capabilities
4. California AB 2013
Effective date: January 1, 2026
Primary focus: AI training data transparency
Scope: Developers of generative AI systems made available to Californians
Key requirements: Disclose training dataset information on the developer's website, including:
- Data sources and owners
- Number and types of data points
- Whether the datasets include copyrighted data
- Whether the datasets include personal information
- Time period of data collection
- Whether synthetic data generation was used
Emerging trends in AI governance
This recent wave of AI regulation, enacted in 2024, exhibits several important trends, including greater transparency, risk-based frameworks, and increased emphasis on consumer protection. These trends are shaping the regulatory environment for the development and deployment of AI, creating both challenges and opportunities for companies operating in this space.
Emphasis on transparency: All three states prioritize disclosure and transparency, whether through customer-facing disclosures (Utah), comprehensive documentation requirements (Colorado), or content labeling and dataset disclosure requirements (California). To stay ahead of the curve, AI developers and adopters should implement clear, user-friendly disclosures about their use of AI, especially for generative AI interactions.

Risk-based approach: Colorado's law in particular signals a shift toward risk-based regulation, imposing stricter requirements on "high-risk" AI systems. This approach aligns with international trends such as the EU AI Act. Where possible, AI developers should assess the risk level of their AI systems early in the design process and build appropriate risk management capabilities into their tools. Developers and adopters also need to monitor risks continuously and take appropriate risk management measures when new risks are identified.

Emphasis on consumer protection: The Utah, Colorado, and California laws all prioritize consumer protection in AI regulation. This trend reflects growing concern about the potential harms of AI to individuals, including algorithmic discrimination and privacy violations. AI developers and adopters must ensure that their systems are designed and implemented to deliberately protect consumer rights and interests.

Regulatory fragmentation: As more states introduce their own AI regulations, businesses will face the challenge of complying with an ever-growing patchwork of laws. Requirements such as disclosure obligations and risk assessments vary by state, which complicates the work of developers and adopters operating across multiple jurisdictions.
To stay ahead of the curve, AI stakeholders need to invest in adaptable compliance frameworks that can be tailored to the differing legal standards of each state, ensuring seamless compliance while maintaining operational efficiency.
As states take the lead in developing AI regulations, businesses must adapt to a rapidly evolving regulatory landscape. New legislation emphasizing transparency, consumer protection, and risk management is shaping the future direction of AI governance. By proactively anticipating and addressing these trends in 2025, AI developers and adopters can position themselves as industry leaders and gain a competitive advantage while driving responsible innovation.