The U.S. Equal Employment Opportunity Commission (EEOC) is set to close all pending charges related to unintentional workplace discrimination by the end of September 2025. The policy shift targets cases brought under the "disparate impact" theory, in which a facially neutral policy or practice disproportionately harms a protected group, regardless of the employer's intent.
Once these cases are closed, the federal agency will issue right-to-sue letters, allowing individuals to pursue their claims directly in federal court. The change signals a significant pivot in federal enforcement strategy but does not eliminate employer liability, especially as state and local governments increase their scrutiny of automated hiring tools.
Key Takeaways
- The EEOC will conclude investigations into all pending disparate impact discrimination cases by September 30, 2025.
- Disparate impact refers to facially neutral employment practices that disproportionately harm protected groups, regardless of intent.
- Claimants will receive right-to-sue letters, shifting the burden of litigation from the EEOC to private individuals.
- This change does not affect employer liability under federal, state, or local laws; companies can still be sued.
- The use of AI in hiring remains a major concern, with a growing number of states and cities mandating bias audits to prevent algorithmic discrimination.
Understanding the EEOC's Policy Shift
The EEOC is the federal body responsible for enforcing laws against workplace discrimination. These laws, including Title VII of the Civil Rights Act, the Age Discrimination in Employment Act (ADEA), and the Americans with Disabilities Act (ADA), protect individuals based on race, gender, age, disability, and other categories.
Discrimination claims generally fall into two types. Disparate treatment is intentional discrimination, such as refusing to hire someone because of their age. Disparate impact, by contrast, requires no intent: it occurs when a seemingly fair policy has a disproportionately negative effect on a protected group.
Background on the Policy Change
This move by the EEOC follows an April 2025 Executive Order issued by President Donald J. Trump, "Restoring Equality of Opportunity and Meritocracy." The order directed federal agencies to review and curtail enforcement actions based on the disparate impact theory.
The Executive Order described the disparate impact theory as “wholly inconsistent with the Constitution” and a threat to “the commitment to merit and equality of opportunity that forms the foundation of the American Dream.”
As a result, federal agencies were instructed to deprioritize investigations relying on this legal framework, leading to the current EEOC plan to close its backlog of such cases.
Impact on Employers and the Rise of AI in Hiring
While the EEOC is stepping back from investigating these specific claims, the legal risk for employers has not disappeared. In fact, it may become more complex due to the growing use of artificial intelligence (AI) and automated decision-making tools in the workplace.
These systems are often used to screen resumes, assess candidates, and make other employment-related decisions. If an AI tool is trained on biased historical data, it can perpetuate and amplify that bias, leading to what is known as algorithmic discrimination.
The Challenge of Algorithmic Discrimination
For example, if an AI hiring tool is trained on the resumes of a company's past successful employees, who were predominantly from one demographic, it might learn to penalize applicants from other backgrounds. This would create a disparate impact, even though the employer did not intend to discriminate.
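The mechanism is straightforward to demonstrate. The sketch below is a deliberately simplified illustration, not any vendor's actual system: a "training set" of past hires' resumes skews toward one group, and a facially neutral word-frequency scorer ends up rewarding resumes that resemble that group. All resumes, keywords, and group labels are hypothetical.

```python
# Minimal illustration of how skewed training data yields a biased screener.
# All resumes, keywords, and group labels below are hypothetical.
from collections import Counter

# Historical "successful employee" resumes, drawn mostly from one group.
# Note that the model never sees a group label -- only resume text.
history = [
    "lacrosse captain finance club stanford",
    "rowing team finance internship stanford",
    "lacrosse finance analyst stanford",
    "retail supervisor finance certificate",
]

# "Training": count how often each word appeared among past hires.
word_weights = Counter(word for resume in history for word in resume.split())

def score(resume: str) -> int:
    """Score a resume by its overlap with words common among past hires."""
    return sum(word_weights[word] for word in resume.split())

applicants = {
    "applicant_1 (majority-group profile)": "lacrosse finance club stanford",
    "applicant_2 (different background)": "community college retail supervisor math tutor",
}

for name, resume in applicants.items():
    print(f"{name}: score={score(resume)}")

# The scoring rule is facially neutral (pure word frequency), yet it
# reproduces whatever demographics happened to dominate the training
# resumes -- the essence of algorithmic disparate impact.
```

Stripping explicit demographic fields does not solve the problem, because proxy features such as school names, hobbies, or zip codes can carry the same signal.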
Employers are often legally responsible for the outcomes produced by the AI tools they use, regardless of whether the bias was intentional. This places a significant burden on companies to vet their technology vendors and conduct regular audits of their automated systems.
Legal Precedent: Mobley v. Workday
The ongoing case of Mobley v. Workday, filed in the U.S. District Court for the Northern District of California, highlights this very issue. The lawsuit alleges that an AI-powered screening tool disproportionately filtered out applicants who were Black, over 40, or had disabilities, creating a plausible claim of disparate impact. This case serves as a critical reminder that AI systems in employment are subject to scrutiny under discrimination laws.
The Evolving Legal Landscape for Employers
The EEOC's decision to close its disparate impact caseload means that the primary battleground for these claims will shift from federal agency investigations to federal courts and state-level enforcement. After receiving a right-to-sue letter from the EEOC, an individual can file a lawsuit against their employer.
If a plaintiff can successfully demonstrate in court that an employment practice caused a disparate impact, the employer can still be held liable for damages. Therefore, companies cannot afford to ignore the potential for unintentional discrimination in their policies and systems.
State and Local Laws Fill the Gap
Crucially, the EEOC's federal policy change has no bearing on state and local anti-discrimination laws. Many states and cities have their own robust protections against disparate impact, and some are specifically targeting AI in employment.
Several jurisdictions have already enacted or are considering laws that require employers to:
- Conduct independent bias audits of their automated employment decision tools (a simplified sketch of the core audit calculation follows this list).
- Notify candidates when AI is being used in the hiring process.
- Provide transparency about the factors the AI system considers.
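At the heart of most of these audits is a simple comparison of selection rates across groups. The sketch below is a minimal, hypothetical version of that calculation using the "four-fifths" (80%) benchmark from the EEOC's Uniform Guidelines on Employee Selection Procedures; the group names and counts are invented, and a real audit would follow the precise definitions in the applicable statute.

```python
# Simplified disparate impact check using the "four-fifths" (80%) rule.
# Group names and counts are hypothetical placeholders.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Convert {group: (advanced, screened)} counts into selection rates."""
    return {group: advanced / screened
            for group, (advanced, screened) in outcomes.items()}

def four_fifths_check(outcomes: dict[str, tuple[int, int]]) -> None:
    """Flag any group whose rate falls below 80% of the highest group's rate."""
    rates = selection_rates(outcomes)
    benchmark = max(rates.values())  # the most-selected group's rate
    for group, rate in rates.items():
        impact_ratio = rate / benchmark
        status = "POTENTIAL ADVERSE IMPACT" if impact_ratio < 0.8 else "ok"
        print(f"{group}: rate={rate:.1%} impact_ratio={impact_ratio:.2f} {status}")

# Hypothetical screening outcomes: (candidates advanced, candidates screened).
four_fifths_check({
    "group_a": (60, 100),
    "group_b": (30, 100),
})
```

New York City's Local Law 144, for example, requires annual bias audits of automated employment decision tools that report impact ratios along broadly similar lines, though with its own precise definitions.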
This patchwork of state and local regulations means that businesses, particularly those operating in multiple locations, must navigate a complex and fragmented legal environment. Compliance with federal law alone is no longer sufficient.
What This Means for the Future of Work
The EEOC's shift away from disparate impact investigations places more responsibility on employers to proactively ensure their practices are fair. With the rapid adoption of AI, this responsibility has become more critical than ever.
Companies are advised to work with legal counsel to understand their obligations under federal, state, and local laws. Performing regular disparate impact analyses, especially for automated tools, is becoming a necessary step to mitigate legal risk.
While the federal government's enforcement posture may have changed, the fundamental legal principle remains: employment practices must be fair in outcome, not just in intent. As technology continues to evolve, ensuring equitable outcomes will remain a central challenge for employers across the country.