On May 5, 2025, the Colorado Senate rejected Senate Bill (SB) 318, an attempt by state lawmakers to amend Colorado’s landmark Artificial Intelligence (AI) Act, known as SB 205. Among other changes, SB 318 would have removed SB 205’s impact assessment and reporting requirements, as well as its requirement that AI developers and deployers use “reasonable care” to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. As a result of the failed fix, SB 205 is expected to take effect on February 1, 2026, as planned.
In this article, we examine the driving forces behind SB 318, the reasons for its failure, the implications of that failure, and how developers and deployers can prepare for SB 205.
In Depth
SB 318: Why was it introduced, and what did it say?
SB 205 created a first-of-its-kind legislative framework for preventing algorithmic discrimination by high-risk AI systems, and several other states have leveraged this framework in their own AI bills. However, after SB 205 was enacted, many stakeholders, including Colorado Gov. Jared Polis, criticized the law as overly broad and potentially stifling to innovation. In response, the Colorado legislature created a task force to recommend amendments to SB 205, which led to the introduction of SB 318.
SB 318 would have significantly reduced the burdens SB 205 places on developers and deployers, shifting the law in a more business-friendly direction. SB 318 also would have delayed the law’s effective date until 2027, and certain small businesses would not have had to comply until 2029.
Why did SB 318 fail, and what does that mean?
At a hearing of the Colorado Senate Business, Labor, and Technology Committee on May 5, 2025, Sen. Rodriguez explained that he introduced SB 318 in an attempt to address criticism of SB 205, despite a lack of stakeholder buy-in on the proposal. Committee members heard from stakeholders both for and against SB 318. Although there was no consensus that SB 318 adequately addressed the key concerns about SB 205, stakeholders generally agreed that SB 205’s definitions of “consequential decision” and “high risk” were unworkable and that its notification requirements related to AI deployment were too intrusive. In light of this criticism and lack of consensus, Sen. Rodriguez decided to kill SB 318.
The Colorado legislative session ends on May 7, 2025, and the legislature will not reconvene until January 2026. As a result, compliance with SB 205 will be required by February 1, 2026, absent a fast-moving amendment bill introduced at the beginning of the 2026 legislative session.
How can developers and deployers prepare for SB 205?
In the wake of SB 318’s failure, developers and deployers must develop plans to achieve compliance with SB 205 by February 1, 2026. Developers and deployers should pay particular attention to the following key requirements:
Developers and deployers should first assess whether they fall within the scope of SB 205. The most burdensome requirements of SB 205 apply to entities that develop or deploy AI systems considered “high risk” under the law. A high-risk AI system is one that makes, or is a substantial factor in making, a “consequential decision,” including decisions concerning healthcare services. SB 205 contains many exceptions, including one for HIPAA-covered entities that provide AI-generated healthcare recommendations requiring the provider to take action to implement them. However, even this exception has limits, as it does not apply to AI systems considered “high risk.” Other notable health-related exceptions include high-risk AI systems that comply with standards set forth by the Office of the National Coordinator for Health Information Technology (so long as those standards are at least as strict as SB 205’s), or that have been approved, authorized, certified, cleared, developed, or granted by the US Food and Drug Administration.
SB 205 requires developers and deployers of high-risk AI systems to use reasonable care to protect consumers from “reasonably foreseeable risks” of algorithmic discrimination arising from the intended and contracted uses of those systems. Developers and deployers can create a rebuttable presumption of compliance with this requirement by providing notifications to deployers or consumers, as applicable, that include detailed information about the AI system and plans to address the risks of algorithmic discrimination associated with it. Before February 1, 2026, developers and deployers should compile, review, and revise the disclosures these documents require under SB 205.
Deployers of high-risk AI systems establish a rebuttable presumption of compliance only if they implement a detailed risk management policy and program to mitigate and prevent known or reasonably foreseeable risks of algorithmic discrimination. Deployers should review their current policies and risk management programs for conformity with SB 205’s specific requirements.
Although less burdensome, some of SB 205’s provisions require compliance by developers and deployers of all AI systems, not just high-risk ones. SB 205 requires a deployer of an AI system that interacts with consumers to disclose that use to each consumer at the time of interaction, unless it would be obvious to a reasonable person that they are interacting with AI. Before SB 205’s effective date, deployers should work with developers to build appropriate notifications into AI systems that interact with consumers.