The EEOC’s Shift Away from Disparate Impact Liability
Title VII of the Civil Rights Act, the Age Discrimination in Employment Act, and the Americans with Disabilities Act prohibit employers from implementing facially neutral procedures that unintentionally discriminate against individuals based on their protected categories.
The Equal Employment Opportunity Commission (EEOC) is the federal agency tasked with investigating claims of unintentional discrimination, a theory known as disparate impact.
According to an internal memorandum obtained by Bloomberg Law, the EEOC plans to close all pending disparate impact discrimination charges by the end of September 2025. Once these charges are closed, the EEOC is expected to issue right-to-sue letters allowing claimants to file their cases in federal court. Charges that involve claims of both disparate impact and disparate treatment are likely to remain with the EEOC in the normal course.
The EEOC’s posture comes months after President Donald J. Trump issued an Executive Order characterizing the disparate impact theory of discrimination as “wholly inconsistent with the Constitution” and a threat to “the commitment to merit and equality of opportunity that forms the foundation of the American Dream.” Accordingly, the Order directs the EEOC and other federal agencies to deprioritize enforcement of statutes and regulations related to disparate impact liability and to reexamine all pending investigations and suits relying on disparate impact. We have previously covered this Order in depth.
Disparate Impact Theory’s Role in Workplace AI Tools
To date, employers using workplace AI have focused on whether an AI tool’s output unintentionally discriminates against individuals based on a protected category. If, for instance, a workplace AI developer trains its tool on biased data, the tool may disproportionately and unintentionally subject applicants and employees to employment decisions based on their race, gender, age, disability status, or other protected categories. When a workplace AI tool relies on protected categories to generate outputs, it may engage in “algorithmic discrimination,” often defined as the use of an AI system that results in a violation of any applicable federal, state, or local discrimination law. Employers may be liable when they use AI that algorithmically discriminates, even if the discrimination is unintentional.
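As a simplified illustration (not legal advice, and not a substitute for a formal validation study), the sketch below applies the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures, under which a selection rate for one group that is less than 80% of the rate for the highest-selected group is generally regarded as evidence of adverse impact. The group names and counts are hypothetical, and a real audit of an AI screening tool would involve far more than this arithmetic.

```python
# Hypothetical four-fifths rule check on an AI screening tool's outcomes.
# Group labels and counts below are illustrative only.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who received a favorable outcome."""
    return selected / applicants

def adverse_impact_ratios(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Each group's selection rate divided by the highest group's rate."""
    rates = {name: selection_rate(s, a) for name, (s, a) in groups.items()}
    highest = max(rates.values())
    return {name: rate / highest for name, rate in rates.items()}

# (selected, total applicants) per group -- hypothetical screening outcomes.
outcomes = {"group_a": (48, 100), "group_b": (30, 100)}

for name, ratio in adverse_impact_ratios(outcomes).items():
    flag = "potential adverse impact" if ratio < 0.8 else "within four-fifths guideline"
    print(f"{name}: impact ratio {ratio:.2f} ({flag})")
```

In this hypothetical, group_b’s selection rate is 62.5% of group_a’s, below the four-fifths threshold, which is the kind of statistical disparity a disparate impact claim would point to.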
The case of Mobley v. Workday, currently pending in the U.S. District Court for the Northern District of California, serves as a reminder that AI tools used to make employment decisions could be evaluated under a disparate impact theory if there is a plausible inference that an AI algorithm relies on protected characteristics.
Employer Takeaways
While the EEOC may cease investigating unintentional discrimination, civil plaintiffs may still file a charge with the EEOC, receive a right-to-sue letter, file a complaint in court, and potentially prevail on disparate impact claims against employers. Employers therefore remain exposed to liability for unintentional discrimination when a plaintiff successfully challenges a discriminatory employment practice in federal court.
Further, the EEOC’s action does not affect disparate impact liability under numerous state and local laws. Indeed, several current and pending laws expressly require employers using AI in employment-related decision-making to conduct disparate impact analyses to ensure that such systems do not produce disparate outcomes. As we have previously discussed, states and local jurisdictions will likely play a leading role in shaping the AI regulatory landscape for the foreseeable future. Employers must still comply with applicable state and local laws that prohibit the use of AI and automated employment decision-making tools that unintentionally discriminate.
If you have questions about the use or implementation of AI at your workplace, please contact the authors of this blog or your Epstein Becker & Green, P.C. attorney.