One of the strange things about AI is that the same technology can be regulated differently depending on how it is used. In the workplace, the privacy rights a candidate might assert against AI tools can evaporate. One moment you are a "consumer" with clear privacy rights; the next you are an "applicant" or "employee," and those rights shift or disappear. Colorado and California show how this erosion happens.
Let's start with Colorado. The state's privacy law draws the "consumer" boundary mostly at the door of the human resources department: employees, applicants, and people acting in a commercial context fall outside the definition, so opting out of profiling under the Colorado Privacy Act does not reach into the interview room. Colorado's AI law, meanwhile, treats employment and opportunity decisions as "consequential," but it does not tie them to the privacy law's opt-out rights. Opting out under the Colorado Privacy Act has no effect on how an employer's AI reaches a hiring decision.
Colorado's AI law still tells HR teams how to behave. It requires that people receive clear notice explaining the purpose and role of the system before a decision is made. After an adverse outcome, employers must give applicants details, including the principal reasons for the decision, a route to correct inaccurate data, and an appeal that includes human review where feasible. The law imposes a duty of reasonable care to prevent algorithmic discrimination and requires a risk management program with impact assessments at deployment, annually, and after significant changes. The result is that any AI tool used in hiring must come with core rights of notice, reasons, correction, and appeal.
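To make that bundle of obligations concrete, here is a minimal sketch of how a hiring system might model the notice-and-appeal package for a single candidate. It is an illustration, not legal advice, and every class and field name is a hypothetical assumption rather than statutory language.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class PreDecisionNotice:
    """Notice delivered before the AI system is used on a candidate."""
    candidate_id: str
    system_purpose: str   # plain-language purpose of the AI system
    system_role: str      # how the system figures into the decision
    delivered_on: date


@dataclass
class AdverseDecisionPackage:
    """Details owed to a candidate after an adverse outcome."""
    candidate_id: str
    principal_reasons: list[str]   # why the decision went against them
    correction_route: str          # how to fix inaccurate personal data
    appeal_route: str              # how to appeal the outcome
    human_review_available: bool = True
    issued_on: date = field(default_factory=date.today)


def is_complete(pkg: AdverseDecisionPackage) -> bool:
    """Check that the package covers reasons, correction, and appeal."""
    return bool(pkg.principal_reasons and pkg.correction_route and pkg.appeal_route)
```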
When building for this regulation, think like a railroad operator laying dedicated HR track. Map privacy-law controller/processor roles onto the AI law's developer/deployer duties so it is clear who does what. Demand model documentation, test results, and incident reports from vendors. Deliver clear notices to applicants, even though no opt-out has to be offered. Record the reasons you give, the corrections you make, and the objections you hear. Align the program to industry standards such as NIST's AI RMF to support the duty of care when processing candidate applications.
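A sketch of the "who does what" mapping and the vendor artifacts worth demanding follows. The role names come from the statutes, but the pairing is a simplification (a vendor can wear more than one hat), and the checklist items are illustrative assumptions, not a canonical list.

```python
# Simplified pairing of privacy-law roles with AI-law roles for the same actor.
ROLE_MAP = {
    "controller": "deployer",   # the employer deciding why and how the tool is used
    "processor": "developer",   # the vendor building and supplying the model
}

# Illustrative checklist of artifacts to demand from a vendor.
VENDOR_ARTIFACTS = [
    "model documentation (intended use, known limitations)",
    "bias / adverse-impact test results",
    "incident and known-risk reports",
    "data categories used in training and inference",
]


def missing_artifacts(received: set[str]) -> list[str]:
    """Return the checklist items a vendor has not yet supplied."""
    return [item for item in VENDOR_ARTIFACTS if item not in received]
```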
California takes a different path. The state's privacy regulator treats automated systems that make hiring and independent-contractor decisions as making "significant decisions." That triggers pre-use notices for automated decision-making technology (ADMT), opt-outs for those decisions, and response timelines for access requests and appeals. If someone opts out, you must stop using ADMT for that person within 15 business days. California's civil rights regulations separately cover automated tools used in hiring, promotion, and other human resources activities, with employer obligations focused on discrimination risk, testing, and documentation.
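As a back-of-the-envelope illustration of the 15-business-day window, here is a small helper that computes the compliance deadline from the date an opt-out request arrives. It counts weekdays only and ignores holidays, which a real implementation would need to handle; the function name and example dates are assumptions for this sketch.

```python
from datetime import date, timedelta


def optout_deadline(received: date, business_days: int = 15) -> date:
    """Date by which ADMT use must stop for a person who opted out."""
    current = received
    remaining = business_days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:   # Monday=0 .. Friday=4; skip weekends
            remaining -= 1
    return current


# Example: an opt-out received on Monday, March 3, 2025.
print(optout_deadline(date(2025, 3, 3)))  # -> 2025-03-24
```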
In Colorado, the sequence is notice, reasons, correction, and appeal. California runs two regimes. First, the ADMT rules kick in when a significant employment decision is made or substantially determined by automation; that means notice before use, an opt-out, access and explanation, and a risk assessment. Second, a parallel civil rights framework polices discrimination within the same hiring flow. For the opt-out, think in terms of its contours: a robust human-appeal alternative, and specific admissions, acceptance, hiring, and assignment decisions that carry their own guardrails. Design your workflows and notices to fit those contours.
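One way to design for both regimes at once is to derive the obligation set from jurisdiction and decision type rather than hard-coding it per tool. The sketch below is a simplified illustration; the obligation strings and the jurisdiction codes are assumptions for this example, not statutory language.

```python
def obligations(jurisdiction: str, automated_significant_decision: bool) -> list[str]:
    """Rough obligation set for an AI-assisted hiring decision (illustrative only)."""
    if not automated_significant_decision:
        return ["general anti-discrimination duties"]
    if jurisdiction == "CO":
        return [
            "pre-use notice (purpose and role of the system)",
            "adverse-decision statement of reasons",
            "route to correct inaccurate data",
            "appeal with human review where feasible",
            "impact assessment / risk management program",
        ]
    if jurisdiction == "CA":
        return [
            "pre-use ADMT notice",
            "opt-out (or qualifying human-appeal alternative)",
            "access and explanation on request",
            "risk assessment",
            "civil-rights testing and documentation",
        ]
    return ["check local law"]


for j in ("CO", "CA"):
    print(j, obligations(j, True))
```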
Zooming out, the hardest part isn't the rules themselves; it's the labels. People's rights come and go as their labels change, not as their actual risk changes. You start as a consumer, you become an applicant, and suddenly different statutes switch on and off. That creates gaps for people and headaches for compliance teams.
Workarounds appear, too. Feed the model pseudonymized data and the argument becomes that the privacy rules don't apply, even though the decision is still being made about a real person. Transparency shrinks and the tools for correcting errors dry up. Then there is "human review theater": a perfunctory click by someone who lacks the authority to change the outcome, performed to check a box rather than to fix the decision.
Design your deployments to avoid these pitfalls. Set a plain-English rights baseline that follows individuals end to end, regardless of label. Map every legal term in your notices and records to that baseline so people know what to expect and auditors can see the through-line. Give reviewers real authority and measure override rates so that "human involvement" means something. Log status changes from consumer to applicant to employee and beyond, as in the sketch below.
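Here is a minimal sketch of those last two points: logging label transitions for one person and measuring how often human reviewers actually override the model. The class, field, and key names are illustrative assumptions; a consistently near-zero override rate is one signal of "human review theater."

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class PersonRecord:
    person_id: str
    status_log: list[tuple[datetime, str]] = field(default_factory=list)

    def change_status(self, new_status: str) -> None:
        """Record consumer -> applicant -> employee transitions with timestamps."""
        self.status_log.append((datetime.now(timezone.utc), new_status))


def override_rate(reviews: list[dict]) -> float:
    """Share of human reviews that changed the model's recommendation."""
    if not reviews:
        return 0.0
    overridden = sum(1 for r in reviews if r["human_decision"] != r["model_decision"])
    return overridden / len(reviews)


# Example usage
p = PersonRecord("cand-001")
p.change_status("consumer")
p.change_status("applicant")
print(override_rate([
    {"model_decision": "reject", "human_decision": "reject"},
    {"model_decision": "reject", "human_decision": "advance"},
]))  # -> 0.5
```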
For lawmakers and regulators, two durable paths beat today's patchwork. The first is to build sector-specific AI rules that expressly displace conflicting privacy obligations, so HR teams don't have to play regulatory Twister. The second is to harmonize definitions so that individuals keep the same fundamental rights in automated decision-making no matter where they sit on the organizational chart. Either route gives workers and employers a more stable map. That may be the only way to keep the on-ramp open and the guardrails real.

