Colorado’s groundbreaking AI law continues to raise compliance challenges and policy concerns for employers and the broader business community, as highlighted in a recent report from the state’s AI task force. The February report follows the governor’s lead, suggesting legislative changes may be necessary before the law takes effect to address lingering ambiguity, compliance burdens, and other stakeholder concerns. Employers should understand the law, the governor’s concerns, and the task force’s findings as the clock ticks toward the effective date in February 2026.
A summary of Colorado’s groundbreaking AI law
Last year, Colorado enacted Senate Bill 24-205, the first law to regulate the use of artificial intelligence in high-risk decision-making. When it takes effect on February 1, 2026, the law will place new obligations on developers and deployers of AI systems that affect “consequential decisions,” such as employment, lending, housing, and healthcare decisions. It establishes a duty to avoid algorithmic discrimination and requires companies to ensure that AI systems do not produce biased results.
The law requires impact assessments of AI systems that fall within its scope. Developers must provide detailed documentation to deployers, and deployers must notify consumers when AI is used to make a consequential decision. The law also grants consumers the right to appeal decisions made by AI, and in some cases businesses must allow consumers to correct inaccurate data.
Additionally, SB 24-205 exempts small businesses with fewer than 50 employees and imposes a direct reporting requirement to the Colorado Attorney General. The law does not focus on intent; it regulates the outcomes of AI-driven decisions.
Gov. Polis’s reservations
When signing SB 24-205 into law, Gov. Jared Polis expressed concern about its potential impact on innovation and competitiveness. In a signing statement issued on May 17, 2024, Polis acknowledged the importance of preventing AI-driven discrimination but warned that the law’s broad regulatory framework could curb Colorado’s technological progress.
Specifically, he encouraged lawmakers to refine key definitions (such as “algorithmic discrimination” and “consequential decisions”) and to reconsider the law’s complex compliance structure before its February 2026 effective date. Polis also raised concerns about the law contributing to a patchwork of state regulations and urged federal action to provide a more uniform AI regulatory framework across the country.
The AI task force reports its findings
The Artificial Intelligence Impact Task Force, established to research and recommend potential legislative amendments, issued a report in February 2025 highlighting areas for revision. The report categorized the proposed changes into four groups.
Issues with apparent consensus for change
The report noted that most stakeholders agreed some minor clarifications are needed, including refinements to the notification and documentation requirements.
Issues where consensus could be reached with more discussion
There is broad agreement that certain provisions of the law need further refinement, but the precise approach to amending them remains under discussion. These topics are particularly important to employers because they affect how businesses must comply with the law’s requirements and build AI governance programs.
One important area of ongoing discussion is the definition of “consequential decisions,” which determines which AI-driven business processes fall within the scope of the law. Employers would prefer greater clarity to ensure that hiring, promotion, termination, and other HR functions are handled consistently with their legal obligations. Without a clear definition, businesses could face increased compliance uncertainty and litigation risk.

Another issue under negotiation is the scope of exemptions for certain companies. The law currently exempts small businesses with fewer than 50 employees, but some stakeholders argue that the exemption threshold should be revised or expanded. These stakeholders want to avoid disproportionate compliance burdens on midsized businesses that may lack the resources to conduct comprehensive AI impact assessments.

Furthermore, the timing and scope of impact assessments remain a concern. The law requires AI deployers to conduct regular impact assessments, but stakeholders continue to debate when these assessments are required, what triggers them, and what documentation they must include. Employers deploying AI in HR, finance, and customer service should track these discussions to anticipate their compliance obligations and operational impacts.
Interconnected issues requiring broader compromises
Some of the most complex challenges in the revision of SB 24-205 arise from the interconnected nature of its provisions. Adjusting one section often has ripple effects on other sections, making compromises more difficult. For employers, these issues are particularly relevant as they shape the legal risks, compliance obligations and operational realities of using AI in business processes.
One major area requiring broader compromise is the definition of “algorithmic discrimination.” The current definition has been criticized as too vague, making it difficult for businesses to determine whether their AI systems are compliant. Employers weighing in want a clearer and more workable definition to ensure that AI tools do not inadvertently trigger violations. Another complex issue involves risk management requirements for AI deployers. The law requires deployers of high-risk AI systems to implement robust risk management programs, but stakeholders disagree about the required documentation and level of oversight. Furthermore, discussion continues within the task force regarding companies’ reporting obligations to the Attorney General (AG). Some industry representatives argue that the current reporting requirements are too broad and could expose trade secrets, while public interest groups argue that transparency is needed to prevent AI discrimination.
Deeply divisive issues
Some of the most controversial aspects of SB 24-205 remain deeply divisive among stakeholders, making legislative consensus challenging. Employers should pay particular attention to these issues, as their resolution (or lack thereof) can significantly affect compliance requirements, enforcement risks, and overall business operations.
One hotly debated topic is whether a company should have a right to cure violations before enforcement action. While some industry representatives argue that businesses should have an opportunity to correct violations before facing penalties, public interest groups argue that such a provision could undermine the law’s deterrent effect. Another contested issue is the scope of trade secret protection under the law. Companies are concerned that mandatory AI disclosures could force them to reveal proprietary information, but advocates argue that transparency is necessary to prevent algorithmic discrimination. The role of the Attorney General’s office in enforcing the law is also contested: some propose expanding the AG’s monitoring and investigative capabilities, while others want to limit the AG’s discretion to reduce regulatory uncertainty. Other controversial topics include amending consumers’ rights to appeal AI-driven decisions, potential changes to the small business exemption, and whether to delay the law’s implementation to give businesses more time to prepare.
What’s next?
With more than a year before SB 24-205 takes effect, legislative changes appear likely, but the content of those changes remains uncertain. The report does not propose specific legislative amendments, but it strongly encourages policymakers to continue their discussions and address these concerns before the law comes into effect. The task force’s findings show that further refinement is needed to balance consumer protection with business feasibility.
What employers should do
To prepare for the law’s enforcement, businesses should:

Assess AI use – Determine whether AI systems used in employment, lending, housing, or other regulated areas fall under SB 24-205.

Perform an AI risk assessment – Evaluate AI-driven decisions for bias; doing so can help mitigate risk even before mandatory compliance begins.

Review contracts with AI vendors – Ensure your AI developers provide the documentation needed for compliance.

Stay informed – Follow legislative developments and task force updates to anticipate potential changes.

Develop a compliance plan – Prepare for potential consumer notification and appeal obligations by improving internal processes now.