Benjamin Shippen is a managing director at consulting firm BRG who focuses on economic modeling and statistical analysis in labor and employment.
Artificial intelligence is transforming how organizations recruit talent, but it is also drawing increased scrutiny from regulators and plaintiffs’ attorneys.
The U.S. Equal Employment Opportunity Commission and courts are beginning to examine whether AI-driven hiring tools unintentionally discriminate against protected groups, and a recent case, Mobley v. Workday, may become a defining moment.
In that lawsuit, a plaintiff alleged that Workday’s applicant-screening algorithms disproportionately exclude workers over 40, and a California court has conditionally certified a collective action against the company. If successful, the case could shift liability from individual employers to the vendors that build and operate AI tools.
Mobley may encourage plaintiffs to challenge AI-driven applicant screening at the company level. If the case succeeds against Workday, it will likely set a precedent for claims against companies that use AI tools. Employers should get ahead of this by analyzing their applicant flow processes now for potential adverse impact based on age, gender and race or ethnicity.
The new adverse impact landscape
AI has entered the applicant flow process at nearly every stage, from screening for minimum qualifications to ranking resumes, analyzing video interviews and scoring candidates. For large employers managing tens or hundreds of thousands of applications, these systems are invaluable for efficiency. Yet they also introduce new and complex risks.
AI models can inadvertently reproduce or amplify existing biases in the data they are trained on. What was once a linear, human-controlled process of screening, interviewing and selecting candidates is now a web of automated decisions that may obscure where bias occurs. That makes adverse impact harder to detect and, for employers, potentially more costly to defend.
Companies are discovering that integrating AI into their hiring workflows requires careful design, monitoring and legal oversight. In the current climate, it's critical to test each step in the applicant flow for potential disparate impact, especially the steps that rely on AI.
The process
To ensure compliance and fairness, organizations must understand how AI is influencing each step of their hiring process. Employers can model how AI-driven decisions affect applicant outcomes and apply selection analyses such as logistic regression or Fisher's exact tests to determine whether AI-generated scores or rankings produce disparate impact based on age, gender and race or ethnicity.
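As a minimal illustration of the Fisher's exact test approach, the sketch below compares pass-through rates at a single AI screening step for applicants 40 and over versus those under 40. The counts are hypothetical placeholders, not data from any real system.

```python
# Minimal sketch: Fisher's exact test on pass-through counts at one AI screening step.
# The counts below are hypothetical; substitute real applicant-flow data.
from scipy.stats import fisher_exact

# 2x2 contingency table: rows = applicants 40+ vs. under 40,
# columns = advanced by the AI screen vs. not advanced.
table = [
    [120, 880],  # applicants 40 and over
    [260, 740],  # applicants under 40
]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p-value = {p_value:.4f}")

# A small p-value indicates the gap in pass-through rates is unlikely to be
# chance alone and warrants a closer look at that screening step.
```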
Consider two examples:
- AI scoring of video interviews. If a model assigns numeric or letter grades that recommend who advances, an economist should test whether protected groups systematically receive lower scores, even after controlling for qualifications (see the regression sketch after this list).
- AI candidate retrieval tools. When algorithms identify and encourage certain past applicants to reapply, they may unintentionally favor specific demographics. Testing for disparate outcomes at this “invitation” step is now essential.
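For the first example, a regression along the following lines can test whether a protected group is less likely to advance once qualifications are accounted for. This is a minimal sketch: the file name, column names and controls are hypothetical, and the actual specification should reflect how the tool is applied in practice.

```python
# Minimal sketch: logistic regression of the AI advance/no-advance outcome on a
# protected-class indicator, controlling for qualifications. The file and column
# names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("video_interview_outcomes.csv")  # hypothetical applicant-level extract

# advanced: 1 if the AI recommended advancing the applicant, 0 otherwise
# over_40: 1 if the applicant is 40 or older, 0 otherwise
model = smf.logit(
    "advanced ~ over_40 + years_experience + C(education_level) + prior_role_match",
    data=df,
).fit()

print(model.summary())
# A negative, statistically significant coefficient on over_40 would suggest older
# applicants are less likely to advance even after accounting for qualifications.
```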
In both cases, an understanding of how the AI tool is applied is critical. Without insight into the mechanics of each automated decision, statistical analyses may be misspecified, leading to false assurances or false alarms about bias.
What employers should do now
As legal scrutiny intensifies, employers cannot treat AI tools as a black box. They should:
- Map the applicant flow. Identify every point where AI is making or influencing a decision.
- Collaborate early. Consider engaging labor economists and counsel to test for disparate impact before problems escalate.
- Document the process. Keep records of model design, validation and ongoing bias monitoring.
- Monitor continuously. Even a well-calibrated model can drift as data or hiring practices evolve (a monitoring sketch follows this list).
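As one way to operationalize ongoing monitoring, the sketch below tracks each group's selection rate by quarter and flags quarters where a group's rate falls below four-fifths of the highest group's rate, the initial indicator of adverse impact under the EEOC's Uniform Guidelines. The file and column names are hypothetical placeholders.

```python
# Minimal monitoring sketch: quarterly selection rates by group and a four-fifths-rule
# screen. File and column names are hypothetical placeholders.
import pandas as pd

log = pd.read_csv("ai_screening_log.csv", parse_dates=["decision_date"])

# Selection rate (share advanced) per demographic group, per calendar quarter.
rates = (
    log.assign(quarter=log["decision_date"].dt.to_period("Q"))
       .groupby(["quarter", "group"])["advanced"]
       .mean()
       .unstack("group")
)

# Impact ratio: each group's rate relative to the highest-rate group that quarter.
impact_ratios = rates.div(rates.max(axis=1), axis=0)

# Flag group-quarters below the 0.8 benchmark for follow-up analysis.
flags = impact_ratios[impact_ratios.lt(0.8)].dropna(how="all")
print(flags)
```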
The bottom line
The Mobley case shows that AI risk in hiring is not theoretical. Employers adopting these tools should move quickly to ensure their systems are explainable, monitored and statistically tested.