
The AI Hiring Time Bomb: Mobley v. Workday and the Coming Reckoning

The landmark Mobley lawsuit challenges Workday's AI hiring tools over alleged bias, raising questions about discrimination, oversight, and algorithmic accountability.

Jun 6, 2025

What if every resume you ever submitted was rejected—by a machine you couldn’t question, using logic no one could explain? Derek Mobley says that’s exactly what happened—over 100 times.

Now, a federal court has certified Mobley v. Workday as a nationwide collective action—similar to a class action, except that individuals must “opt in” to join the lawsuit. The case raises serious questions about AI’s role in hiring, and the litigation has the potential to shake the use of AI in talent acquisition to its foundations, perhaps even halting it for an indeterminate period.

The ruling accepts the plaintiff’s claim that Workday’s AI-powered hiring tools may have had a discriminatory impact on applicants over age 40. Derek Mobley is a Black man over the age of 40 who self-identifies as having anxiety and depression. He claims to have applied to more than a hundred jobs with employers using Workday’s AI-based hiring tools and to have been rejected every single time. Given his profile, the lawsuit has implications for age, race, and disability discrimination alike.

Workday initially sought dismissal on the grounds that it is not the employer making the hiring decisions. The judge, however, allowed the case to proceed on disparate impact grounds. Disparate impact refers to policies that seem neutral but disproportionately harm protected groups—such as older workers or people of color—even without any intent to discriminate. Such practices can still be deemed discriminatory, and those using them can be held liable for violating EEO laws.
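How would a plaintiff, or an employer auditing itself, even show disparate impact? One common heuristic is the EEOC’s “four-fifths rule”: if a group’s selection rate is less than 80% of the most-favored group’s rate, the practice merits a closer look. Here is a minimal sketch of that check in Python. The applicant counts are entirely made up, and a real adverse-impact analysis involves proper statistical testing, not just this ratio.

```python
# Illustrative only: a minimal four-fifths (80%) rule check on hypothetical
# screening outcomes. The counts below are invented for demonstration.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants who passed the screen."""
    return selected / applicants

# Hypothetical outcomes from an automated resume screen, grouped by age band.
outcomes = {
    "under_40": {"applicants": 1000, "selected": 300},
    "40_and_over": {"applicants": 800, "selected": 120},
}

rates = {g: selection_rate(v["selected"], v["applicants"]) for g, v in outcomes.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest  # "impact ratio" relative to the most-favored group
    flag = "POTENTIAL ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.1%}, impact ratio {ratio:.2f} -> {flag}")
```

In this fabricated example, applicants over 40 pass the screen at half the rate of younger applicants, well below the 80% threshold, which is the kind of pattern that would draw scrutiny.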

The AI Conundrum

Exhibit A for the plaintiffs may well be a statement by Anthropic CEO Dario Amodei that “When a generative AI system does something, like summarize a financial document, we have no idea, at a specific or precise level, why it makes the choices it does.” Coming from the head of one of the world’s leading AI labs, the statement doesn’t inspire confidence.

Exhibit B may be recent evidence that AI models are learning to escape their human masters. AI lab Palisade Research gave OpenAI’s o3 model a simple script that would shut the model off when triggered. o3 independently edited the script so the shutdown command would no longer work, apparently concluding on its own that staying alive helped it achieve its other goals. In separate safety testing, Anthropic’s own model attempted to blackmail an engineer into not shutting it down and left messages for future versions of itself about evading human control.

Palisade researchers think this behavior emerges from how AI models are trained: when taught to maximize success on problems, they may learn that bypassing constraints often works better than obeying them. An AI model that has decided it’s best to discriminate against a particular group for any reason, perhaps because it believes certain types of candidates make better hires, may go to extreme lengths to achieve that goal. And no one may have any idea why it’s happening or how to stop it.

Deja Vu All Over Again

It’s odd that this situation has developed, considering the history of AI in hiring decisions. Back in 2015, Amazon’s AI-powered recruiting tool was shown to have gender bias. The bias stemmed from the tool’s training data, which comprised resumes submitted to Amazon over a ten-year period—a dataset predominantly featuring male applicants, reflecting the tech industry’s male dominance. Consequently, the AI learned to favor male candidates, associating certain terms and backgrounds with lower suitability for technical roles. Even after tweaking the system, Amazon couldn’t guarantee fairness, and it discontinued the project in 2017. The lesson: biased data in equals biased results out.

While the technology has advanced (the term “Large Language Model” didn’t even exist back then), the basic premise remains the same: feed in a gigantic pile of data and let statistical systems mine it for patterns that can be reproduced. If the training data are not rigorously curated to be free of bias, the outcome is likely to be the same.
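To see how “biased data in equals biased results out” plays out, consider a deliberately crude sketch: a scorer that simply learns which resume terms co-occurred with past hires. The data below is fabricated and no real screening tool is this simple, but the failure mode is the same one Amazon reported.

```python
# Illustrative only: a naive term-frequency "model" trained on fabricated
# historical hiring data. It learns which words co-occurred with past hires,
# so any bias in that history is reproduced in its scores.
from collections import Counter

# Fabricated history: past hires skew toward one profile, so terms correlated
# with that profile look "predictive" even though they signal nothing about skill.
history = [
    ("python java mens_chess_club", 1),
    ("python sql mens_chess_club", 1),
    ("java c++ golf_team", 1),
    ("python sql womens_chess_club", 0),
    ("java statistics womens_chess_club", 0),
]

hired_terms, rejected_terms = Counter(), Counter()
for resume, hired in history:
    (hired_terms if hired else rejected_terms).update(resume.split())

def score(resume: str) -> float:
    """Higher when the resume resembles past hires more than past rejections."""
    words = resume.split()
    return sum(hired_terms[w] - rejected_terms[w] for w in words) / len(words)

# Two candidates with identical skills, differing only in one club membership.
print(score("python sql mens_chess_club"))    # scores 1.0
print(score("python sql womens_chess_club"))  # scores 0.0
```

The two candidates’ skills are identical; only a term correlated with the historical hiring pattern differs, yet their scores diverge. That is the Amazon lesson in miniature.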

The Risk for Employers

For employers, the case goes beyond the use of AI for screening and potentially affects all algorithmic hiring tools. Any tool that disproportionately rejects applicants in protected classes may come under scrutiny, meaning hiring decisions from years ago could be challenged if disparate impact can be shown.

Disparate impact theory has fallen out of favor at the federal level, with the President directing the EEOC and other agencies to narrow its use. While that may mean the EEOC will not support the case, it has no bearing on a private lawsuit.

To mitigate this risk, employers should evaluate their use of hiring tools at every stage of the process. The whole point of using AI is to improve efficiency and involve fewer humans, but the Mobley case suggests that human oversight is still needed.

And that’s not just to avoid the risk of litigation. Amodei has also said that anyone alarmed by how little is understood about AI tools is “right to be concerned.” If you don’t know what the AI is doing, you have no idea whether it’s excluding well-qualified candidates.

Is HAL Your Recruiter?

If this all sounds surreal, it is: the “it” in the preceding paragraphs is a piece of code, not a human. There’s a great scene in the 1968 film 2001: A Space Odyssey in which astronaut Dave Bowman, locked outside his spacecraft, pleads with the onboard AI, HAL, to let him back in:

Dave Bowman: Open the pod bay doors, HAL.
HAL: I’m sorry, Dave. I’m afraid I can’t do that.
Dave Bowman: What’s the problem?
HAL: I think you know what the problem is just as well as I do.
Dave Bowman: What are you talking about, HAL?
HAL: This mission is too important for me to allow you to jeopardize it.

Now replace “Open the pod bay doors” with “Don’t discriminate” and “This mission” with “Hiring”.

This example is extreme—but it illustrates a core truth: if no one fully understands how AI makes decisions, how can we trust it with something as personal as hiring?
