Any powerful tool can be used for good and bad. For instance, hammers can be incredibly helpful, but they can also be used to wreak havoc. The same goes for kitchen knives, baseball bats, juicy bacon double cheeseburgers, and even grand pianos (they can fall on people when being moved by crane out of NYC walk-ups).
Similarly, AI has also led to some havoc: biased results, new heights of marketing lies, personal privacy concerns, and fears that humans may be rendered useless in the driver’s seat (we will be eventually). But as Sundar Pichai, the CEO of Alphabet, says, “AI is probably the most important thing humanity has ever worked on. [It] is…more profound than electricity or fire.”
The reality is that, as with any powerful technology, AI can be used for both good and evil. But oftentimes it is incompetence, not evil, that leads to bad AI outcomes.
What are we in HR to do? On the one hand, there are seemingly powerful new AI capabilities that can add value to our work. But on the other hand, there is the famous story of Microsoft’s chatbot that trained itself on Twitter messages and became horribly racist in less than 24 hours. And more recently, poorly trained facial recognition systems have led to the wrongful arrest of innocent people.
While early and poorly developed AI may lead to biased outcomes, AI is actually the solution to the problem of human bias.
What Even Is AI?
Before we dive deeper into why AI is the solution to our human bias issues, let’s define what we mean by AI. The term itself is so broad that its widespread use often renders it meaningless. While one could argue that the earliest calculators performed basic forms of automated reasoning, today AI refers to a broad category of advanced analytics, the most transformational and promising of which is deep learning.
Deep learning powers the most exciting AI advances, including voice assistants that understand what you say and cars that can (almost) drive themselves. These solutions were not possible even a decade ago. The ability of algorithms to analyze and understand data in the world around us has become truly revolutionary.
AI’s Benefits in the Hiring Process
In the hiring process, AI can be applied in numerous ways to benefit both employers and candidates:
- Tests and simulations, including pre-hire assessments, can better predict job performance and retention through the use of more sophisticated and powerful AI-based algorithms. This can also result in shorter tests, which candidates and hiring managers alike can agree is an excellent outcome.
- Resumes, interviews and other unstructured sources of data can be parsed and analyzed scientifically, which can lead to more standardized, fair, efficient and faster hiring. In the past, these types of data sources could not be processed automatically, and would often be insufficiently reviewed or missed entirely by overworked recruiters.
- Overall efficiency can be increased substantially by reducing overlap in distinct steps of the hiring process, enabling smarter automatic scheduling and increasing the accuracy of automatic candidate communication tools like chatbots.
- To the extent that a hiring system knows a person’s protected class status, algorithms can continuously monitor how each class is doing and issue red flags or other alerts when they detect problems. In this manner, AI can help us identify and correct any sources of bias.
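To make the monitoring idea in the last bullet concrete, here is a minimal sketch of one common check: the EEOC's "four-fifths rule," which flags a group whose selection rate falls below 80% of the highest group's rate. The function name, data shape, and threshold default are illustrative assumptions, not any vendor's actual API.

```python
from collections import Counter

def adverse_impact_flags(records, threshold=0.8):
    """Illustrative four-fifths-rule check (hypothetical helper).

    `records` is a list of (group, was_selected) pairs. Returns a dict
    of groups whose selection rate, relative to the best-performing
    group's rate, falls below `threshold`.
    """
    applied = Counter()
    selected = Counter()
    for group, was_selected in records:
        applied[group] += 1
        if was_selected:
            selected[group] += 1
    # Selection rate per group, then compare each to the highest rate.
    rates = {g: selected[g] / applied[g] for g in applied}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()
            if rate / best < threshold}

# Example: group B's selection rate (20%) is half of group A's (40%),
# well below the 0.8 cutoff, so B is flagged with its impact ratio.
records = [("A", True)] * 4 + [("A", False)] * 6 + \
          [("B", True)] * 2 + [("B", False)] * 8
print(adverse_impact_flags(records))  # {'B': 0.5}
```

A production system would run a check like this continuously over each stage of the funnel and alert a human reviewer, rather than acting on the flag automatically.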
The above benefits of AI are all happening now. There are other AI features on the horizon that are being actively developed, including making better sense of large, complex, messy sets of data.
Is AI Ethical?
While AI is creating a lot of benefits in the hiring process, it can also be misused, whether for nefarious purposes or through half-baked features. As a result, it’s imperative that organizations take a strong stance on how they use AI in hiring. AI should only be used when:
- It is clearly beneficial for individuals, not just organizations. This includes using AI to reduce bias, enable better job matching, and create better candidate experiences.
- Its uses are transparent. We have to be able to understand what AI techniques are doing when they parse data and make predictions. If we cannot, then we should not implement them.
- AI-based marketing claims are verifiable. Vendors must be able to produce data and results to prove their claims. If they cannot, then it is safe to assume that they do not have any.
AI research should also be shared with the larger community in appropriate research channels. We need to work together to advance the technology. Of course, vendors will not want to publish their secret sauce, but knowledge can be shared without revealing every single detail of their IP.
Further, only data expressly and consciously provided by candidates should be used in AI algorithms. This means that you should:
- Not use facial recognition in any way. It is invasive, unproven in its predictive accuracy, and loaded with bias. We can use the transcribed words spoken in interview responses in our AI algorithms, but not the video itself, nor the candidate’s speaking tone or style.
- Not scrape the web for social media and other information about candidates. There is no body of academic research yet that shows predictive value in this type of information, and it is also invasive.
Ironically, the invasive AI uses outlined above do not even produce verifiable gains in predictive accuracy or fairness, so there is no reason to offer them other than to fuel marketing hype. Of course, research is constantly advancing on these fronts, and we do not know what the future holds.
If we as a civilization do not harness AI, it will eventually harness us. The way we control AI is by responsibly developing it with ethical guardrails. Wishing it away because it is complex or confusing will only lead to less scrupulous parties developing it, likely without fail-safe measures to do so ethically and effectively.