In the absence of federal regulation, several states have passed or are considering legislation aimed at reducing the risk of algorithmic discrimination resulting from the use of AI systems by employers. This insight summarizes state and local AI laws that impact employers and notable pending actions.
“Algorithmic discrimination” refers to the use of artificial intelligence (AI) systems in a way that results in unfair treatment of, or disadvantage to, an individual based on a protected characteristic (e.g., age, color, disability, ethnicity, gender, national origin, race, religion, or veteran status). It is well known that AI systems can produce discriminatory results, whether because the system was trained on flawed or unrepresentative data or because the system finds and reproduces patterns of human discrimination within its training data. Such discrimination is particularly problematic for employers who use AI systems to make hiring decisions.
Although President Biden issued executive orders on the development and use of AI, there is no comprehensive federal law regulating the use of AI systems, particularly in the context of preventing algorithmic discrimination in employment decisions. Accordingly, several states have passed or are considering legislation aimed at reducing the risk of algorithmic discrimination resulting from employers’ use of AI systems. Each of these laws and proposed bills imposes similar obligations on employers that use AI systems or automated decision-making tools (ADTs) when making employment decisions.1
Generally, these laws and proposed bills impose a duty of reasonable care on employers to assess and mitigate the risks of algorithmic discrimination posed by the use of AI systems. They also impose significant affirmative notice requirements, including direct notification to individuals subject to decisions made by AI systems. In some cases, individuals would have the opportunity to correct data entered into AI systems and to appeal adverse decisions that result, likely to human review. Many of the bills also include impact- or risk-assessment requirements designed to check for bias against protected groups. The enacted laws and proposed legislation are discussed in more detail below.
Enacted laws
Colorado: Senate Bill 24-205
The Colorado Artificial Intelligence Act takes effect on February 1, 2026, and adopts a risk-based approach to AI regulation similar to the European Union’s AI Act. The law applies to Colorado businesses that use AI systems to make, or as a substantial factor in making, employment decisions. It aims to regulate private-sector use of AI systems and imposes a duty of reasonable care on Colorado employers. If a party doing business in Colorado deploys or makes available an AI system that is intended to interact with consumers, the law requires that party to ensure the system discloses to each consumer that they are interacting with an AI system. The Colorado Attorney General is responsible for enforcing the law and has the authority to promulgate regulations to implement and enforce its requirements, including standards for risk management policies, disclosure notices, and impact assessments. Penalties may include fines and injunctive relief. The Colorado AI law does not include a private right of action.
Illinois: House Bill 3773
HB 3773 amends the Illinois Human Rights Act to protect employees from discrimination arising from the use of AI in employment-related decisions and to require transparency regarding that use. Under HB 3773, an employer may not use AI that has the effect of subjecting an employee to discrimination on the basis of a protected class in recruitment, hiring, promotion, discharge, discipline, or the terms, privileges, or conditions of employment. The law also prohibits employers from using zip codes as a proxy for a protected class. Employers in Illinois must notify employees when they use AI to make, or to assist in making, employment-related decisions. HB 3773 applies to any person employing one or more employees in Illinois and takes effect on January 1, 2026.
New York City: Local Law 144 (LL 144)
Effective July 5, 2023, LL 144 prohibits employers and employment agencies from using automated employment decision tools (AEDTs) unless they confirm that a bias audit has been conducted and provide the required notices. The law covers only AEDTs that are used to substantially assist or replace discretionary decision-making for employment decisions.

Notification requirement: Employers must give notice that an AEDT will be used, and the notice must include information on how to request a reasonable accommodation. If the candidate resides in New York City, the employer must provide the required notice at least 10 business days before the AEDT is used, along with a description of the “job qualifications and characteristics” that the AEDT will evaluate.

Bias audit: Before using an AEDT, employers must have the tool audited for bias against protected groups (race/ethnicity and gender), and the results of the audit must be made publicly available. Audits must be performed at least annually by an independent third party.

LL 144 applies to all employers and employment agencies that use an AEDT “within the city,” meaning that (1) the job location is, at least part-time, a New York City office; (2) the job is fully remote but associated with a New York City office; or (3) the employment agency using the AEDT is located in New York City. Penalties for violations include a $500 fine for a first violation and up to $1,500 for each subsequent violation.
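For illustration only, the sketch below shows the kind of selection-rate impact-ratio arithmetic that LL 144 bias audits commonly report under the implementing rules. The category names and counts are hypothetical, and the 0.8 flag threshold is borrowed from the EEOC’s “four-fifths” rule of thumb, which LL 144 itself does not mandate.

```python
# Illustrative sketch only: selection-rate impact ratios of the kind a
# LL 144 bias audit reports. All data are hypothetical, and the 0.8
# threshold is the EEOC four-fifths rule of thumb, not an LL 144 rule.

# Hypothetical counts of candidates assessed and selected per category.
applicants = {"category_a": 400, "category_b": 250, "category_c": 150}
selected   = {"category_a": 120, "category_b": 60,  "category_c": 30}

# Selection rate = selected / assessed, per category.
rates = {g: selected[g] / applicants[g] for g in applicants}

# Impact ratio = category's rate / highest category's rate.
best = max(rates.values())
impact = {g: r / best for g, r in rates.items()}

for g in sorted(impact):
    flag = "  <-- below 0.8 (illustrative threshold)" if impact[g] < 0.8 else ""
    print(f"{g}: selection rate={rates[g]:.3f}, impact ratio={impact[g]:.3f}{flag}")
```

Here category_c would be flagged (0.20 / 0.30 = 0.667), illustrating the kind of disparity a published audit would surface.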
Pending bills and regulations
California Privacy Protection Agency (CPPA)
In November 2024, the California Privacy Protection Agency (CPPA) released draft regulations, proposed under the California Consumer Privacy Act (CCPA), regarding the use of AI and automated decision-making technology (ADMT). The California Court of Appeal has held that the CPPA’s regulations can take effect immediately once they are finalized. The public comment period was recently extended to February 19, 2025, and the CPPA plans to hold a public hearing the same day to allow direct comment. As with the CCPA generally, the draft regulations apply to commercial organizations doing business in California that meet at least one of the following criteria:
- The business’s total annual revenue exceeds US$25 million.
- The business buys, sells, or shares the personal data of more than 100,000 California residents.
- The business derives at least half of its annual revenue from selling California residents’ data.

The draft rules apply only to the use of AI and ADMT in making “significant decisions.” The draft regulations have three key requirements:
- Issue a pre-use notice to consumers before using covered ADMT;
- Provide consumers a way to opt out of ADMT; and
- Explain how the business’s use of ADMT affects consumers.
California Civil Rights Council
The California Civil Rights Department (CRD) is responsible for enforcing the state’s anti-discrimination laws. As part of those efforts, the Civil Rights Council, a branch of the CRD, develops and issues regulations implementing state civil rights law. Under the proposed rules, employers that use AI in recruiting and hiring could not use screening, ranking, or prioritization systems that filter applicants based on religious creed, disability, or medical condition unless those factors are job-related. Notably, the proposed rules would treat vendors that provide AI systems to an employer as agents of the employer and/or as employment agencies. The rules would also prohibit employers from using AI during the interview process. In addition, covered employers and businesses would be required to preserve employment records, including data generated by automated decision-making systems and AI training data, for at least four years, and to conduct anti-bias testing of their ADTs.
Texas: 88(R) HB 1709
If passed, the Texas Responsible AI Governance Act would establish obligations for developers, deployers, and distributors of “high-risk AI systems.” The proposal takes a risk-based approach to AI regulation similar to the European Union’s AI Act. “High-risk” systems include those used for critical decisions in areas such as employment, health care, financial services, and criminal justice. Key provisions include mandatory risk assessments, record-keeping requirements, and transparency measures. The bill outlines steep penalties for violations (fines of up to $100,000) and proposes a regulatory “sandbox” to allow innovation while testing compliance with the law. The bill would require developers and deployers of high-risk AI systems to conduct detailed impact assessments evaluating the risks of algorithmic discrimination, cybersecurity vulnerabilities, and transparency measures. Distributors would have to ensure that their AI systems meet compliance standards before the systems enter the market.
The incoming Trump administration’s impact on AI regulation is likely to be minimal, as most efforts to regulate AI are occurring at the state level, and AI laws will therefore continue to expand at the state and local level. Democratic-led states may pursue stronger AI regulation to counter developments at the federal level. Employers that are using or considering using AI to make employment decisions would do well to stay informed of the relevant laws. With a patchwork of new laws emerging at the state and local level, employers should also prioritize transparency measures and proactive auditing as achievable goals for managing the risk of bias inherent in AI tools.