Unless your house is located underneath a rock, you have probably noticed a lot of recent media conversation about “algorithmic bias.” But what does that term really mean, and how does it affect hiring and employee selection?
Algorithms are just rules that solve a problem or calculate something. They are not exactly a new idea — the earliest known examples were found on Babylonian clay tablets dating to 1600 BC. So why, 3,600 years later, are they suddenly getting so much bad press?
In short, because they have only recently become absurdly powerful. AI and Big Data have joined forces to figure out all manner of new things (e.g., how to drive a car without a human, how to recognize faces, how to recommend Netflix movies to me so I never get off the couch ever again).
But with all of this newfound power to explain the world around us, they have begun to ferret out the deep and insidious nature of bias — and it is increasingly clear that bias pervades human decisions at a fundamental level. All of us are biased in a thousand ways, and our brains process data using myriad, usually subconscious, heuristics. Since AI is usually trained with human data, and humans code it, our biases are replicated inside the AI’s black-box brain and potentially propagated at a massive scale.
So, should we cancel algorithms?
Consider this: Algorithms, being incapable of malice (so far), can only do what we program them to do. What we need to cancel is bad programming. But can “good” programming create AI that is completely unbiased?
Well, not really. Bias is so inherent to human activity that expecting even well-developed algorithms to be free of bias is unrealistic. But we are still obligated to identify and minimize bias as exhaustively as we (humanly) can.
What we need, on top of well- and carefully developed algorithms, is better bias monitoring. We need fail-safe systems that can continuously evaluate how AI technology is working, notice when it is generating biased output, and then offer a way to fix or control it.
We already do that in a variety of ways, some better than others. In hiring, the best way is for companies to collect EEO information for every candidate — that is, demographic data on subgroup membership — and then build checks into their algorithms to ensure that selection rates are comparable across those groups.
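One concrete statistical check of this kind (not named in the text, but widely used alongside EEO data) is the EEOC's "four-fifths rule," which flags a group whose selection rate falls below 80% of the highest group's rate. Here is a minimal sketch; the group names and counts are hypothetical illustration data, not real hiring figures.

```python
# Sketch of an adverse-impact check based on the four-fifths rule:
# flag any group whose selection rate is below `threshold` (default 80%)
# of the highest-selected group's rate.

def selection_rates(outcomes):
    """outcomes maps group -> (hired, applied); returns selection rate per group."""
    return {group: hired / applied for group, (hired, applied) in outcomes.items()}

def adverse_impact_flags(outcomes, threshold=0.8):
    """Return True for each group that fails the four-fifths check."""
    rates = selection_rates(outcomes)
    top_rate = max(rates.values())
    return {group: rate / top_rate < threshold for group, rate in rates.items()}

# Hypothetical example: group_b's rate (0.30) is only 60% of group_a's (0.50).
outcomes = {"group_a": (50, 100), "group_b": (30, 100)}
print(adverse_impact_flags(outcomes))  # {'group_a': False, 'group_b': True}
```

A monitoring system would run a check like this continuously over fresh hiring outcomes, raising an alert (rather than just printing) whenever a group is flagged.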
Except, this solution has two problems: First, EEO information is currently voluntary for candidates to provide because historically it might have been used to discriminate. Second, many companies do not want to store this information because it could be used by investigators to prove they are engaging in biased hiring.
For fair AI to flourish, we need solutions to these problems, too. While in the past, subgroup data could have been used to discriminate or incriminate, the reality is we need that data now for exactly the opposite reason — to ensure that your hiring process does neither of those things.
Our enhanced understanding of data, and of how it all interacts and relates, is what reveals bias in places we never realized it existed. So don't think of AI as causing bias so much as exposing it. Yes, there are instances of AI making errors based on the data it's given, but I would guess those errors happened because the algorithms were not sufficiently studied and monitored, not because of malicious intent.
Ultimately, we need to consider bias from multiple perspectives. Algorithms will continue to predict in biased ways and, as they grow stronger and ingest more data, they will surface bias in places previously assumed to be safe. The answer, though, is not to stop using algorithms. It is to remain ever-vigilant and remove bias from algorithms whenever we find it.
So remember three things: (1) Algorithms are neither good nor bad, (2) AI has the ability to uncover all sorts of previously hidden biases, and (3) we humans either have to monitor our algorithms ourselves for the bias they uncover or build processes that can do it for us.