The California Civil Rights Council, which promulgates regulations implementing California's civil rights laws, has announced new regulations addressing artificial intelligence ("AI") in the workplace. The new rules (available here) take effect on October 1, 2025 and amend the existing regulations under the Fair Employment and Housing Act ("FEHA"). This latest rulemaking continues California's trend of policing AI in the workplace, as previously reported here.
According to the Civil Rights Council, these regulations address "automated-decision systems," which may rely on algorithms and artificial intelligence and are increasingly being used in employment settings to make consequential decisions regarding job applicants and employees, including hiring and promotion decisions. Such "automated-decision systems" are defined as computational processes that make a decision, or facilitate human decision making, regarding an employment benefit, and that may be derived from and/or use artificial intelligence, machine learning, algorithms, statistics, and/or other data processing techniques.
These regulations seek to clarify how California's existing antidiscrimination employment law (i.e., FEHA) applies in the AI context. Among other changes, the regulations:
- Broadly define an employer's "agent," which qualifies as an "employer" under FEHA, to include, for example, companies hired to recruit and screen applicants.
- Require employers to preserve automated-decision system data (such as data reflecting employment decisions or outcomes, and data provided by individual applicants or employees) for at least four years.
- Confirm that an automated-decision system assessment that includes tests, questions, or puzzles or games eliciting information about a disability could constitute an unlawful medical inquiry.
- Specify that it is unlawful for an employer or other covered entity (such as an agent) to use an automated-decision system or selection criteria that discriminate against an applicant or employee, or a class of applicants or employees, on a FEHA-protected basis, such as sex, race, or disability.
- Provide that an employer's anti-bias testing (or lack thereof), its response to the results of such testing, and other similar proactive efforts to avoid unlawful discrimination are "relevant" to the employer's defense against such claims.
Thus, with limited exceptions (such as the "agent" definition and the new recordkeeping requirements), the regulations primarily declare that existing law applies to new technologies. In other words, these regulations make clear that "the AI did it" is not a defense when a disparate impact on a protected class is alleged.
We will continue to monitor how California applies its antidiscrimination laws to the use of AI in employment decisions.