The EEOC’s Shift Away from Disparate Impact Liability

Title VII of the Civil Rights Act, the Age Discrimination in Employment Act, and the Americans with Disabilities Act prohibit employers from implementing facially neutral procedures that unintentionally discriminate against individuals based on their protected categories.

The Equal Employment Opportunity Commission (EEOC) is the federal agency tasked with investigating claims of unintentional discrimination, called disparate impact.

According to an internal memorandum obtained by Bloomberg Law, the EEOC plans to close all pending disparate impact discrimination charges by the end of September 2025. Once these charges are closed, the EEOC is expected to issue right-to-sue letters allowing claimants to file their cases in federal court. Charges that involve claims of both disparate impact and disparate treatment are likely to remain with the EEOC in the normal course.

The EEOC’s posture comes months after President Donald J. Trump issued an Executive Order characterizing the disparate impact theory of discrimination as “wholly inconsistent with the Constitution” and a threat to “the commitment to merit and equality of opportunity that forms the foundation of the American Dream.” Accordingly, the Order directs the EEOC and other federal agencies to deprioritize enforcement of statutes and regulations related to disparate impact liability and to reexamine all pending investigations and suits relying on disparate impact. We have previously covered this Order in depth.

Disparate Impact Theory’s Role in Workplace AI Tools

To date, employers using workplace AI have focused on whether the AI tool's output unintentionally discriminates against individuals based on their protected category. If, for instance, a workplace AI developer trains its tool on biased data, the tool may disproportionately and unintentionally subject applicants and/or employees to employment decisions based on their race, gender, age, disability status, or other protected categories. When a workplace AI tool relies on protected categories to generate outputs, it may have engaged in "algorithmic discrimination," often defined as the use of an AI system that results in a violation of any applicable federal, state, or local discrimination law. Employers may be liable when they use AI that algorithmically discriminates, even if the discrimination is unintentional.

The case of Mobley v. Workday, currently pending in the U.S. District Court for the Northern District of California, serves as a reminder that AI tools used to make employment decisions could be evaluated under a disparate impact theory if there is a plausible inference that an AI algorithm relies on protected characteristics.

Employer Takeaways

While the EEOC may cease investigating unintentional discrimination, civil plaintiffs may still file a charge with the EEOC, receive a right-to-sue letter, file a complaint in court, and potentially prevail on disparate impact claims against employers. Therefore, employers may still be liable for unintentional discrimination when a plaintiff successfully challenges discriminatory employment practices in federal court.

Further, any EEOC action does not affect disparate impact liability under numerous state and local laws. Indeed, several current and pending laws expressly require employers using AI in employment-related decision-making to conduct disparate impact analyses to ensure that such systems do not result in disparate outcomes. As we have previously discussed, states and local jurisdictions will likely play a leading role in shaping the AI regulatory landscape for the foreseeable future. Employers must still comply with applicable state and local laws that prohibit employers' use of AI and automated employment decision-making tools that unintentionally discriminate.
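One common starting point for the disparate impact analyses referenced above is the "four-fifths rule" from the EEOC's Uniform Guidelines on Employee Selection Procedures: if the selection rate for any group is less than 80% of the rate for the highest-selected group, the practice may warrant further review. The sketch below illustrates that arithmetic only; the group names and applicant counts are hypothetical, and an actual audit would involve counsel and appropriate statistical testing.

```python
# Minimal sketch of a four-fifths ("80%") adverse impact check.
# All group names and counts below are hypothetical illustration data.

def selection_rate(selected, applicants):
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

def impact_ratio(rates):
    """Lowest group selection rate divided by the highest.
    Under the four-fifths rule of thumb, a ratio below 0.8
    may indicate disparate impact warranting further review."""
    return min(rates.values()) / max(rates.values())

# Hypothetical outcomes from an AI resume-screening tool
outcomes = {
    "group_a": {"applicants": 200, "selected": 60},  # 30% selected
    "group_b": {"applicants": 150, "selected": 30},  # 20% selected
}

rates = {
    group: selection_rate(o["selected"], o["applicants"])
    for group, o in outcomes.items()
}
ratio = impact_ratio(rates)

print(f"impact ratio = {ratio:.2f}")  # 0.20 / 0.30 ≈ 0.67
print("flag for review" if ratio < 0.8 else "passes 4/5 screen")
```

With these illustrative numbers, the ratio (about 0.67) falls below the 0.8 threshold, which is the kind of result that several of the state and local AI laws discussed above would expect an employer to investigate further.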

If you have questions about the use or implementation of AI at your workplace, please contact the authors of this blog or your Epstein Becker & Green, P.C. attorney.
