Your go-to guide:
- California has enacted multiple AI laws, including new statutes on AI safety and whistleblower protections, transparency and watermarking of training data, anti-discrimination rules for HR uses of automated decision systems (ADS), and a ban on anti-competitive "common pricing algorithms," among others.
- New York has passed several AI-related bills, including the RAISE Act, social media warning-label requirements, synthetic performer disclosure, expanded rights of publicity, and broader government oversight of automated decision-making tools; New York City separately enforces Local Law 144's bias-audit requirements.
- Colorado has delayed implementation of its comprehensive AI law until June 30, 2026. Its requirements remain in effect under state law but may face federal scrutiny going forward.
- The EU AI Act timeline is likely to change as the European Commission considers a one-year deferral of high-risk system obligations amid industry pressure and preparedness concerns.
- AI companions and therapy tools face new state regulation, with Illinois, New York, Utah, and California introducing disclosure requirements, crisis-response protocols, limits on AI in therapeutic contexts, protections for minors, and strict controls on interactions.
As 2025 draws to a close, AI regulation continues to accelerate around the world. U.S. states and jurisdictions such as the European Union have been particularly active, creating a complex compliance landscape for companies and organizations leveraging AI. We've compiled five key trends to watch in 2026.
1. California AI Regulation: Safety, Transparency, HR Oversight, and Algorithmic Pricing
California enacted a number of AI regulations this year. In the coming years, companies involved in AI will be subject to new laws regarding AI safety, transparency of AI training data, oversight of AI in employment, and use of AI in pricing algorithms.
- AI Safety Act (effective January 1, 2026):1 Protects employees from retaliation for reporting AI-related risks or serious safety concerns to authorities, including whistleblowing related to certain frontier AI models. Also establishes the CalCompute public AI cloud consortium under the Government Operations Agency to advance AI development and deployment.
- AI Training Data and Transparency Acts (effective January 1, 2026):2 Require covered providers to publicly disclose a summary of the training data used in generative AI systems, including sources, data types, IP and personal information, processing details, and relevant dates. Covered providers must watermark and disclose AI-generated content, provide AI detection tools, and ensure that third-party licensees maintain these disclosure capabilities. Large platforms must label machine-readable provenance data and may not make changes that disable disclosure capabilities.
- HR and Automated Decision Systems (ADS) Regulations:3 Prohibit discriminatory effects on protected groups, limit the use of ADS in background checks, require accommodations, hold employers accountable for their vendors' ADS, and require four-year retention of ADS data.
- Common Pricing Algorithm Prohibition (effective January 1, 2026):4 Increases antitrust exposure by prohibiting the use or distribution of AI-driven "common pricing algorithms" to adjust or enforce pricing, and lowers pleading standards for civil suits under the Cartwright Act.
2. New York AI Regulation: Algorithmic Oversight and HR Innovation
New York has passed multiple AI bills that establish standards in several areas. New York City has already implemented Local Law 144, which requires employers using automated employment decision tools to conduct bias audits and disclose their use in hiring decisions. At the state level, several bills passed in the 2025 legislative session and are currently awaiting Governor Hochul's action.
- The Responsible AI Safety and Education ("RAISE") Act:5 Targets developers of frontier models trained at high compute cost; mandates safety policies and risk-mitigation frameworks and prohibits deployment of certain models. Penalties reach $10 million for a first violation and up to $30 million for repeat violations.
- Anti-Addiction Social Media Labels:6 Requires platforms that use "infinite scroll" or similar designs deemed "addictive" to display warning labels, with fines for noncompliance.
- Synthetic Performer Disclosure:7 Requires disclosure in commercial advertisements that use AI-generated digital actors (synthetic performers) and imposes penalties for violations.
- Expanded Rights of Publicity:8 Strengthens consent requirements for use of a deceased person's voice and likeness, including in AI-generated and synthetic media.
- LOADinG Act Expansion:9 Expands oversight of automated decision-making in government, requiring state and local agencies and educational institutions to publicly inventory AI tools, implement disclosures, and increase transparency.
3. Colorado AI Act Paused
Colorado has delayed implementation of its AI law from February 1, 2026, to June 30, 2026.10 The Colorado AI Act11 establishes requirements for developers and deployers of certain "high-risk" artificial intelligence systems, including obligations related to risk management, disclosure, and mitigation of algorithmic discrimination.
Meanwhile, a recent federal executive order on a national AI policy framework signals a commitment to establishing a uniform national approach, directs federal agencies to challenge state AI laws they view as unduly burdensome or inconsistent with that framework, and expresses an intent to pursue preemption, all of which could affect how Colorado's law is viewed going forward. See GT Alert.
For now, Colorado's requirements remain in effect and valid under state law. But the executive order signals a federal stance that could challenge state AI regulations and invite future federal preemption claims and Justice Department litigation.
4. EU AI Act Update: Key Delays Under Discussion Amid Industry Pressure
The EU AI Act formally entered into force in August 2024, but the European Commission is now reportedly preparing to delay implementation of its most onerous provisions by up to a year. The proposed delay targets the high-risk AI system obligations, currently scheduled to apply in August 2027, and comes amid pressure from U.S. tech companies, member states, and other stakeholders. The delay would be part of a broader "digital simplification" package that also relaxes other technology regulations. Some EU officials have floated "emergency suspension" mechanisms, arguing that technical standards and guidance are not mature enough to support compliance. But critics warn that rolling back the rules could undermine the law's protections and credibility.
5. AI Companions and Therapists: New Legal and Ethical Frontiers
States are moving rapidly to regulate AI companions and therapy chatbots, focusing on the safety, disclosure, and limits of AI-powered emotional or clinical support. Effective August 4, 2025, Illinois' Wellness and Oversight for Psychological Resources Act12 prohibits unlicensed or "unregulated" AI systems from providing psychotherapy and limits how licensed professionals may use AI: only for limited supportive functions, only with written disclosure and consent, and never to make treatment decisions, interact directly with clients therapeutically, detect emotions, or develop treatment plans without professional review. Exemptions apply to religious counseling, peer support, and public self-help resources.
If enacted, New York's AI Companion Models Act13 would require AI companion models to include crisis-response protocols for self-harm, harm to others, and financial exploitation, as well as clear notice that the system is not human and information about crisis-response service providers.
Utah's Artificial Intelligence Policy Act14 (as amended), which took effect May 1, 2024, requires certain covered licensed professionals to conspicuously disclose when users are interacting with generative AI, with more stringent, mandatory disclosures for "high-risk" interactions involving sensitive data or significant personal decisions. Disclosures must be made verbally for oral communications and electronically for written exchanges, and the law explicitly prevents companies from avoiding liability by blaming the AI itself.
States are also focused on protecting minors and high-risk interactions. If enacted, California's LEAD for Kids Act15 would prohibit using children's data to train or fine-tune AI without proper consent and would require developers and deployers to prevent unintended use by, and harm to, children. It also includes whistleblower protections.
Conclusion
These trends are likely to continue in 2026: momentum behind comprehensive AI legislation may slow, while the most active states pass targeted bills regulating specific uses deemed high-risk.
1 CA SB 53 (AI Safety Act).
2 CA AB 2013 (Generative Artificial Intelligence Training Data Transparency Act). CA SB 942 (California AI Transparency Act) and CA SB 853 (supplementing the AI Transparency Act) become effective January 1, 2027.
3 Cal. Code Regs. tit. 2, §§ 11008–11079, as amended.
4 CA AB 325.
5 New York S.6953-B/A.6453-B.
6 New York S.4505/A.5346.
7 New York S.8420-A/A.8887-B.
8 New York S.8391/A.8882.
9 New York S.7599-C/A.8295-D.
10 CO SB-004. Amends SB 24-205.
11 SB 24-205.
12 IL HB 1806.
13 New York AB 6767.
14 UT SB 149, as amended by SB 226.
15 CA AB 1064.

