Sep 22, 2020
This article is part of a series called The Legal Lounge.

You see a vacancy for your dream job. You apply through the ATS, and then you get an invite to interview on a platform that uses artificial intelligence to screen candidates based on the videos they submit. What do you do?

Consider what happens if you’re Black, blind, or have a speech impediment. Last week, one Black applicant crowdsourced advice on how to respond to employers who wanted to video-interview her with software that uses AI to determine employability.

“As a dark-skinned black woman, I feel like I’ve already been filtered out,” she tweeted. “Should I just respond with ‘No thanks’?”

The responses should stop all recruiters in their tracks, especially those using or considering using AI. Despite claims by some, data scientists continue to find that AI does not eliminate bias. (Please rely on data scientists not affiliated with HR tech companies.)

Listen. I’m going to be frank with you. If you’re using AI to screen candidates, to actually analyze whether a particular candidate should move on to the next round, you’re using biased technology that could have a discriminatory effect on applicants. Period. Full stop. 

Now, I know what you’re thinking, “But Kate, ABC vendor and XYZ expert told me that AI can actually reduce bias!”

Yep, those vendors and experts have some real dazzling bullshit to sell you. They’ll show you stats on how their AI exhibits less bias than humans, as if that proves their algorithms make better hiring decisions than people do.

But you know what people can do? Challenge each other, identify their biases, and actively work against them when processes are designed to combat them. AI can’t do that.

AI does what we train it to do, based on the data we feed it and what it learns along the way. If that data skews more white or more male, then the AI will learn to produce results that are more white or more male.
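To make that concrete, here’s a minimal sketch of the feedback loop. Everything in it is hypothetical: synthetic candidates, made-up numbers, and scikit-learn standing in for whatever model a vendor actually ships.

```python
# Hypothetical demo: a model trained on skewed hiring history learns the skew.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic candidates: one genuine skill signal, one demographic flag.
skill = rng.normal(size=n)
group_a = rng.integers(0, 2, size=n)  # 1 = group A, 0 = group B

# Biased history: past recruiters favored group A regardless of skill.
hired = (skill + 1.5 * group_a + rng.normal(scale=0.5, size=n)) > 1.0

# Train on that history; the model picks up the demographic shortcut.
X = np.column_stack([skill, group_a])
model = LogisticRegression().fit(X, hired)

for flag, name in ((1, "A"), (0, "B")):
    rate = model.predict(X[group_a == flag]).mean()
    print(f"predicted hire rate, group {name}: {rate:.0%}")
# Both groups have the same skill distribution, yet group A's predicted
# hire rate dwarfs group B's. The model didn't remove the bias; it learned it.
```

And no, dropping the demographic column doesn’t fix it, because proxies like zip code, school, and word choice leak the same signal.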

If the data we feed AI doesn’t account for deafness, Bell’s palsy, or facial scar tissue, then the AI could screen out individuals with those conditions.

If the developers who created the AI are all women over the age of 50, they’re programming the AI based on their own life experience, possibly overlooking the experiences of a young Latina.

If, when the AI was created, we didn’t account for every gender, race, religion, disability, sexual orientation, age, and so on, and then adjust for change and innovation over time, we’re going to get biased results.
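If you want to know whether you’re getting those biased results, one place to start is the four-fifths rule from the EEOC’s Uniform Guidelines: compare each group’s selection rate to the most-selected group’s rate. Here’s a rough sketch with invented counts; a real adverse-impact analysis needs your counsel and a statistician.

```python
# Sketch of a four-fifths (80%) rule check on selection rates.
# All counts below are invented for illustration.
applicants = {"group A": 400, "group B": 200, "group C": 150}
selected = {"group A": 100, "group B": 25, "group C": 20}

rates = {g: selected[g] / applicants[g] for g in applicants}
top_rate = max(rates.values())

for group, rate in rates.items():
    ratio = rate / top_rate
    status = "possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: selected {rate:.0%}, ratio {ratio:.2f} -> {status}")
```

Failing the 80% threshold doesn’t prove discrimination, but it’s exactly the kind of result that invites scrutiny of whatever tool produced it.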

The truth hurts, I know. But the truth is that AI contains and even amplifies bias. The science is clear. Yet, this fact doesn’t stop recruiting technology vendors and experts from proselytizing that their math is better than your brain. It’s not.

There are a million great ways to use AI in recruiting: automating busy work, blasting postings out to as many job boards as possible, identifying pay discrepancies. But AI isn’t ready to compete with our ability to modify our behavior once we know and acknowledge that our biases are having a negative effect on applicants.
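On that last use, a pay-discrepancy check doesn’t need anything fancy. Here’s a bare-bones sketch; the records and the 5% threshold are hypothetical.

```python
# Hypothetical pay-gap flag: compare median pay across groups within a role.
from statistics import median

payroll = [
    ("engineer", "women", 98_000), ("engineer", "women", 101_000),
    ("engineer", "men", 115_000), ("engineer", "men", 112_000),
    ("recruiter", "women", 72_000), ("recruiter", "men", 74_000),
]

# Bucket salaries by role, then by group.
by_role: dict[str, dict[str, list[int]]] = {}
for role, group, pay in payroll:
    by_role.setdefault(role, {}).setdefault(group, []).append(pay)

for role, groups in by_role.items():
    medians = {g: median(p) for g, p in groups.items()}
    low, high = min(medians.values()), max(medians.values())
    if low / high < 0.95:  # flag gaps wider than 5% for human review
        gap = 1 - low / high
        print(f"{role}: {gap:.0%} median pay gap -> review {medians}")
```

The point isn’t the code. It’s that this flags numbers for a human to investigate, instead of quietly scoring humans.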

It’s possible that someday AI will be able to self-regulate bias, but today is not that day. It won’t be that day for a while. In the meantime, let’s make better use of our time and resources to actually combat bias in each other.

Click here for more Legal Lounge columns.
