BlogLine

AI in employment decisions: New compliance considerations for California employers

5/15/26

AI; artificial intelligence; generative AI; computers

By: Torre DiGiovanni

Artificial intelligence has been part of the employment landscape for several years, particularly in recruiting, screening, and evaluation processes. What many employers are only now beginning to appreciate, however, is the extent to which regulators expect those tools to be governed by existing anti‑discrimination principles, with real consequences for how automated systems are used in practice.

Amendments to California employment regulations governing Automated‑Decision Systems (“ADS”), which have now been in effect for several months, reflect the Civil Rights Department’s clear intent to prevent employers from relying on artificial intelligence as a shield against liability.

California’s ADS regulations are now in effect

Effective October 1, 2025, Title 2 of the California Code of Regulations adopted various additions and amendments concerning the use of Automated‑Decision Systems in the realm of employment law. These regulations impose FEHA liability upon an employer who discriminates against an employee or applicant through the use of ADS in recruitment, screening, hiring, advancement, or other employment‑related decisions.

As employers continue to expand their reliance on AI‑assisted decision‑making, these rules now operate as an active compliance standard, not a future consideration.

What is an automated‑decision system?

An Automated‑Decision System is defined as:

“A computational process that makes a decision or facilitates human decision making regarding an employment benefit, as defined in section 11008(i) of these regulations. An Automated‑Decision System may be derived from and/or use artificial intelligence, machine‑learning, algorithms, statistics, and/or other data‑processing techniques.” (2 C.C.R. § 11008.1(a)).

This definition is intentionally broad and captures both tools that independently make employment decisions and those that merely guide or influence human decision‑makers.

AI is now embedded in FEHA’s definition of an “agent”

The regulatory amendments incorporate ADS concepts into several foundational definitions. Most notably, the definition of an “agent” now includes:

“Any person acting on behalf of an employer, directly or indirectly, to exercise a function traditionally exercised by the employer or any other FEHA‑regulated activity…including when such activities and decisions are conducted in whole or in part through the use of an automated‑decision system.” (2 C.C.R. § 11008(b)).

As a result, decisions made with the assistance of AI are treated as decisions made by the employer itself for purposes of FEHA liability.

Automated tools cannot do what humans are prohibited from doing

The amendments make clear that employers may not use ADS to accomplish practices that would otherwise be unlawful if performed by a human. The regulations expressly prohibit, among other things:

  • Engaging in pre‑employment activities or inquiries, including those conducted through ADS, that discriminate, express preferences, or classify individuals on a prohibited basis;
  • Using ADS or selection criteria (including qualification standards, employment tests, or proxies) that discriminate against applicants on a protected basis;
  • Using ADS to measure skill, dexterity, reaction time, or other abilities in a manner that discriminates against individuals with protected disabilities;
  • Using ADS to analyze tone of voice, facial expressions, or other physical characteristics or behaviors that may discriminate based on race, national origin, gender, disability, or other protected traits; and
  • Assisting, inciting, or coercing unlawful employment discrimination, including where such conduct occurs in whole or in part through the use of ADS.

These provisions underscore that human oversight alone does not insulate employers when discriminatory outcomes result from automated tools.

Why this matters now

While these regulations have been in place since October, their practical implications are only now coming into sharper focus as AI‑assisted tools move from pilot programs to regular use. The amendments leave little ambiguity: artificial intelligence is not a neutral intermediary, but rather a mechanism through which employers act.

Employers should therefore evaluate how ADS is being used across the employment lifecycle and ensure that such systems comply with FEHA to the same extent as any human decision‑maker.

In sum, artificial intelligence is now treated as an extension of the employer, and when it discriminates, liability follows.

For more information on this topic, contact Torre DiGiovanni at torre.digiovanni@fmglaw.com or your local FMG attorney.

Information conveyed herein should not be construed as legal advice or represent any specific or binding policy or procedure of any organization. Information provided is for educational purposes only. These materials are written in a general format and not intended to be advice applicable to any specific circumstance. Legal opinions may vary when based on subtle factual distinctions. All rights reserved. No part of this presentation may be reproduced, published or posted without the written permission of Freeman Mathis & Gary, LLP.
