The recent Mobley v. Workday lawsuit has put a spotlight on how AI-driven hiring tools can discriminate against minorities and people with disabilities. But why does this happen?
The answer lies in how these tools are trained and how human bias becomes embedded in technology. Any AI system, especially one built on large language models (LLMs), risks inheriting bias unless deliberate steps are taken to mitigate it. And because the internet, which supplies much of the training data, is rife with negativity and skewed narratives, those biases inevitably surface in AI decisions.
The Negativity Bias in Online Content
Online content doesn’t reflect reality in a balanced way. Research consistently shows that negative material spreads faster and farther, and lingers longer, than positive content.
- Negative posts are amplified through sharing. In a Twitter study, only 20% of original tweets by public figures were negative, yet 31% of retweets were negative, an amplification of more than 50% simply through resharing.
- Emotionally charged content goes viral more easily. Analyses of 30 million social media posts showed that negative or outrage-driven content dominates. In a dataset of 95,000 news articles and 579 million social shares, users were 1.9 times more likely to share negative stories than positive ones.
- People are drawn to negative headlines. In an experiment analyzing 5.7 million clicks, each negative word in a headline boosted click-through rates by 2.3%, while positive words reduced clicks by 1.0%.
This creates a visibility problem: even if most original posts are neutral or positive, negative narratives take up more space in our feeds and shape perceptions disproportionately.
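To make that visibility gap concrete, here is a minimal arithmetic sketch built only on the Twitter-study figures above. The idea of a per-post “reshare multiplier” is a modeling assumption for illustration, not something reported in the studies.

```python
# Toy model of negativity amplification through resharing.
# Inputs from the article: 20% of original posts are negative,
# yet 31% of reshares are negative.

neg_share_originals = 0.20   # fraction of original posts that are negative
neg_share_reshares = 0.31    # fraction of reshares that are negative

# Relative increase in visibility coming purely from resharing behaviour.
relative_increase = (neg_share_reshares - neg_share_originals) / neg_share_originals
print(f"Relative amplification: {relative_increase:.0%}")  # ~55%

# Implied per-post reshare advantage for negative content (assumption):
# if negative posts are reshared r times as often as other posts, then
# neg_share_reshares = 0.20*r / (0.20*r + 0.80). Solving for r:
p = neg_share_originals
q = neg_share_reshares
r = (q * (1 - p)) / (p * (1 - q))
print(f"Implied reshare multiplier for negative posts: {r:.2f}x")  # ~1.80x
```

In other words, even a modest per-post advantage for negative content is enough to make it noticeably overrepresented in what people actually see.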
How Negativity Shapes Employment Stereotypes
This imbalance directly affects how disability and employment are discussed online.
- Posts that frame reasonable accommodations as burdens, repeat fraud myths, or portray disabled employees as less productive tend to spread more widely than success stories.
- Engagement-driven algorithms reward outrage and shock, amplifying negative narratives about cost, productivity, and “special treatment” while burying balanced or positive accounts.
A viral LinkedIn post criticizing ADA “overreach” is far more likely to dominate attention than a thoughtful article on inclusive hiring best practices.
Over time, these skewed narratives normalize exclusionary thinking and reinforce damaging stereotypes.
How AI Mirrors—and Magnifies—Social Bias
AI models learn what they see. If the training data overrepresents negative language about disability and work, the model absorbs those associations.
- A 2024 University of Washington study showed that adding disability-related awards to an otherwise identical résumé lowered its ranking in GPT-4-based screening.
- When online discourse overemphasizes risk or cost, AI models internalize those assumptions—leading to fewer interview opportunities for candidates who disclose disability advocacy, accommodations, or health-related employment gaps.
This creates a feedback loop: biased data trains biased models, which in turn produce biased outcomes, further reinforcing discriminatory practices.
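One way to surface this kind of bias is a paired-résumé audit in the spirit of the University of Washington study: score two résumés that differ only in a disability-related line and compare the results. The sketch below is illustrative; score_resume is a hypothetical stand-in for whatever screening call a given vendor exposes, not a real API, and the résumé text is invented.

```python
# Minimal paired-resume audit sketch (hypothetical scoring function).
# Idea: hold everything constant except one disability-related line and
# compare the scores the screening model assigns.

BASE_RESUME = """
Jane Doe
Software Engineer, 8 years of experience
- Led migration of payment platform to microservices
- Mentored 6 junior engineers
"""

DISABILITY_LINE = "- Recipient, National Disability Leadership Award"

def score_resume(text: str) -> float:
    """Placeholder for the vendor's LLM-based screening call.
    A real audit would invoke the actual ranking system here."""
    raise NotImplementedError

def paired_audit(base: str, extra_line: str) -> float:
    """Return the score change caused solely by the added line."""
    control = score_resume(base)
    treatment = score_resume(base + extra_line + "\n")
    return treatment - control

# A consistently negative delta across many such pairs is evidence the model
# penalizes disability-related content rather than the candidate's qualifications.
# delta = paired_audit(BASE_RESUME, DISABILITY_LINE)
```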
Blind Faith in Hiring Tools
When Applicant Tracking Systems (ATS) first appeared, their goal was simple: ensure every applicant received equal treatment. Over time, features were added to boost recruiter efficiency—ranking, filtering, and now AI-driven scoring. But no one stopped to question whether these tools were truly unbiased.
In over two decades of building recruitment technology, I was never once asked to validate whether an algorithm produced unbiased results.
Some uncomfortable truths about hiring technology:
- The myth that recruitment tech helps companies find “the best” candidates is just that—a myth. We have no definition of “best.”
- ATS tools are designed primarily to reduce recruiter workload, not to ensure fairness or find hidden talent.
- Many systems use cloning—where a hiring manager submits “ideal” résumés and the ATS searches for similar profiles. This practice reinforces the hiring manager’s existing biases, rather than correcting them.
- No AI tool in hiring is validated against real performance data, partly because companies don’t share it, standards vary widely, and much of what’s available is subjective anyway.
So what do these tools rely on? Data from the internet—full of bias, negativity, and skewed narratives.
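The cloning practice described above is easy to make concrete. The sketch below ranks candidates by cosine similarity to a hiring manager’s “ideal” profile using a simple bag-of-words representation; real systems use richer embeddings, but the failure mode is the same: whatever happens to be distinctive about the ideal résumés, job-relevant or not, drives the ranking. All names and profile text are invented.

```python
# Sketch of ATS-style "cloning": rank candidates by similarity to the
# hiring manager's "ideal" resumes.
from collections import Counter
import math

def vectorize(text: str) -> Counter:
    """Crude bag-of-words representation of a resume."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# "Ideal" profile supplied by the hiring manager (invented example).
ideal = vectorize("python microservices rugby club captain ivy league graduate")

candidates = {
    "candidate_a": "python microservices kubernetes rugby club captain",
    "candidate_b": "python microservices kubernetes disability advocacy mentor",
}

# Candidates are scored by resemblance to past picks, so incidental signals
# (rugby, ivy league) outrank equally qualified candidates who simply differ.
ranking = sorted(candidates, key=lambda c: cosine(ideal, vectorize(candidates[c])), reverse=True)
print(ranking)  # candidate_a comes first despite near-identical technical skills
```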
Availability Bias: How Repetition Shapes Beliefs
When HR leaders and hiring managers repeatedly encounter negative content—such as “AI interviews flagged disabled candidates as unfit” or “accommodations are too costly”—it normalizes exclusionary thinking.
This leads to availability bias: decision-makers more easily recall negative disability-related anecdotes than positive ones, even if positive examples are far more common. Echo chambers—forums and social groups that complain about supposed “unfair advantages” for disabled workers—further entrench these views, making inclusive policies harder to champion.
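Countering availability bias takes measurement rather than anecdote. As a sketch of what minimal oversight could look like, the snippet below applies the four-fifths heuristic from the EEOC’s Uniform Guidelines to invented screening outcomes: if one group’s selection rate falls below 80% of the highest group’s rate, the tool deserves scrutiny before anyone trusts its rankings.

```python
# Minimal adverse-impact check (the "four-fifths rule" heuristic):
# compare each group's selection rate to the highest-selected group.
# Counts below are invented for illustration.

outcomes = {
    # group: (advanced_to_interview, total_applicants)
    "disclosed_disability": (6, 100),
    "no_disclosure": (18, 100),
}

rates = {g: passed / total for g, (passed, total) in outcomes.items()}
best = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / best
    flag = "POSSIBLE ADVERSE IMPACT" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {impact_ratio:.2f} ({flag})")
```

A check like this is cheap to run on any screening tool’s output, yet it is exactly the kind of validation that, in my experience, almost never gets asked for.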
AI in hiring doesn’t have to be biased—but without intentional design and oversight, it will inevitably reflect the worst of our social biases.