The views expressed are those of the author and do not necessarily reflect the views of ASPA as an organization.
Tanya Settles
September 19, 2025
In 2024, Colorado took a bold step forward by passing the Artificial Intelligence Consumer Protection Act, colloquially known as the Colorado Artificial Intelligence Act (CAIA). It is the first comprehensive state law in the US designed to regulate high-risk artificial intelligence (AI) systems. The law requires developers and deployers to use “reasonable care” to avoid algorithmic discrimination, perform impact assessments, disclose AI use to consumers and others, and provide a means for human review of adverse decisions. These provisions aim to increase fairness and transparency in areas where AI has a major impact on people’s lives. CAIA’s implementation was originally set for February 2026 but was delayed to June 2026 by the AI Sunshine Act. The extension gives local governments a short window of opportunity to prepare for full implementation of, and compliance with, the law.
Preparing guardrails for trust
Colorado counties and municipalities are included as potential deployers and developers under the law. Local governments are often the first point of contact for community members in a wide range of circumstances, and for some, that contact could be their first encounter with AI-augmented decision-making. A global study on public trust in AI by KPMG and the University of Queensland found that over 60% of people are wary of trusting AI systems, and that about a third of respondents lacked confidence in government and commercial organizations to develop, use and govern AI. Most people expect and welcome AI regulation, but regulatory progress has been slow. Legislatures in all 50 states considered some form of AI-related legislation in 2025, but most efforts failed. This highlights how unprepared current laws and policy structures are: they lag behind the rapid development of AI and behind the work of earning public trust in such systems.
At the local government level, these risks are concrete. Without guardrails, community members may never know that AI tools were a substantial factor in decisions about their case.
Colorado’s transparency requirements – advance notice, plain-language explanations, the right to correction – are important. These safeguards ensure that people who need services are not left in the dark about how decisions are made, but transparency alone does not solve the deeper issues.
Current framework gaps
While CAIA is a good start, key gaps remain in the regulations that still need to be addressed as the legislature continues to debate improvements and local governments continue to prepare for compliance.
Scope ambiguity: The act rests on terms such as “consequential decisions,” “reasonable care,” and “substantial factor,” but none is precisely defined, and the definitions do not reflect how AI actually works. This ambiguity creates exposure to inconsistent application and litigation risk.
Narrow focus on discrimination: By emphasizing algorithmic discrimination against protected classes, the law adopts a critical civil rights framework, but other harms, such as data entry failures and poorly calibrated risk models, are excluded. Such harms can fall outside the scope of CAIA’s protections.
Limited enforcement routes: Enforcement rests primarily with the Colorado Attorney General. Individuals facing adverse decisions may not be able to pursue claims under the law except through broader consumer protection statutes. For those harmed by flawed local government AI, relief depends on the attorney general’s enforcement priorities.
Uneven application across governments: In the months since CAIA was enacted, other states have taken up AI regulation, while the US federal government is pursuing a different strategy focused on removing “burdensome regulations.” Rather than aligning into a national AI framework, this may create a patchwork of inconsistent application, as some have suggested.
The Paradox of Trust
Colorado’s AI law is framed as a consumer protection measure aimed at making algorithms identifiable and preventing harm. Although the law does not explicitly refer to “trust,” its requirements for public disclosure, human-in-the-loop protections, risk assessments and the right to correct the record are mechanisms that build public trust in AI-supported decisions.
The paradox is that the law may have the opposite effect if state and local governments lack the financial and staffing capacity to implement these protections meaningfully. Community members may see disclosure forms, appeal rights and transparency statements, yet find the process hollow and performative. Reviews of appeals can be underfunded, and explanations can be vague and ambiguous. Instead of strengthening confidence, the gap between promise and delivery risks deepening public skepticism about both AI and the governments that use it. In an age of budget shortfalls and uncertain federal funding, the biggest danger of Colorado’s AI law is not overregulation, but incomplete implementation that undermines the very trust the law is designed to promote.
Author: Tanya Settles is the CEO of Paradigm Public Affairs, LLC. Tanya’s areas of work include building relationships between local governments and communities, restorative justice, and the strategy and assessment of policies and programs. You can reach Tanya at (email protected). The opinions in this column are the author’s alone.


