The Limits of AI: What It Can and Cannot Do for Recruiting (Today)

May 24, 2018

AI is making big strides into recruiting technology, with startups and established vendors investing heavily in products for a range of uses including interview scheduling, sourcing, and assessments. While it’s certain that AI will disrupt recruiting by automating a lot of what recruiters do, the technology is barely beyond infancy. This limits what it can deliver — at least for now.

Most products that use AI have one defining characteristic: they can do only one narrowly defined task, using specific data to produce a response. For example, which candidate is likely to respond to a solicitation for a job? Or, which applicant is likely to be a high performer? Even products that perform tasks that appear less well defined take this approach. Interview scheduling means finding a match between the interviewer's and the candidate's availability; it is a single task completed by interpreting natural-language responses. When trained on large amounts of data, the algorithms get better at completing the task or producing the response.
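
To make the "one narrow task" pattern concrete, here is a minimal sketch: a classifier trained on historical outreach data to predict whether a candidate will respond. The feature names and data are hypothetical, purely for illustration; they are not drawn from any real product.

```python
# A minimal sketch of a single-task model: predict whether a candidate
# responds to outreach. Features and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [years_experience, skills_match_score, days_since_last_job_change]
X_train = np.array([
    [2, 0.8, 400],
    [10, 0.3, 90],
    [5, 0.9, 30],
    [7, 0.6, 700],
])
y_train = np.array([1, 0, 1, 0])  # 1 = responded to outreach, 0 = did not

model = LogisticRegression().fit(X_train, y_train)

# The model answers exactly one question about one kind of input, nothing more.
candidate = np.array([[4, 0.7, 120]])
print(model.predict_proba(candidate)[0, 1])  # estimated probability of a response
```

More data of the same shape makes the model better at this one question; it does not make the model capable of answering any other.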

More precisely, what AI is doing in most cases is pattern recognition: identifying trends, commonalities, and traits in data at a scale beyond the capability of any individual or even group. An example is the facial recognition technology developed by ZIFF. The product captures data on subtle behaviors like eye movement, smiling, and other expressions and uses it to predict how well a candidate would perform in a job.
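
The scale is the point. A sketch of what that kind of pattern hunting looks like: scan thousands of candidate records for the single behavioral signal most correlated with an outcome, something no human reviewer could do by eye. The signals here are synthetic stand-ins for measurements like the eye-movement and expression data described above.

```python
# Pattern recognition at machine scale: find the one feature (out of 200)
# that actually correlates with performance across 10,000 records.
# All data is synthetic; feature 42 is planted as the real pattern.
import numpy as np

rng = np.random.default_rng(2)
n_candidates, n_features = 10_000, 200
signals = rng.normal(0, 1, (n_candidates, n_features))
performance = 0.6 * signals[:, 42] + rng.normal(0, 1, n_candidates)

correlations = np.array([np.corrcoef(signals[:, j], performance)[0, 1]
                         for j in range(n_features)])
print("strongest signal:", correlations.argmax(),
      "r =", correlations.max().round(2))
```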

What AI Cannot Do

But AI has its limitations, stemming from the fact that no AI product is actually "intelligent." It may be able to perform a task better than any human, but it has no context for what it is doing. AI products cannot make value judgments. An example of this was Microsoft's AI-based chatbot that started using foul and extremely racist language. This was not because of anything the developers did, but because the chatbot was trained on the language of those it interacted with. Since AI products are trained rather than programmed, the training dataset shapes what they produce, much like a child imitating the behavior of its parents without knowing right from wrong.
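
A toy illustration of "trained rather than programmed": a bigram text generator that can only reproduce patterns present in whatever text it is fed. Feed it polite text and it sounds polite; feed it abusive text and it sounds abusive. Nothing in the code itself encodes either behavior, which is the essence of what happened to the chatbot.

```python
# The training data IS the behavior: nothing in this code specifies
# what the generated text will sound like.
import random
from collections import defaultdict

def train_bigrams(text):
    """Record, for every word, the words observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def generate(model, start, length=8):
    """Emit text by repeatedly sampling a learned follower word."""
    word, output = start, [start]
    for _ in range(length):
        if word not in model:
            break
        word = random.choice(model[word])
        output.append(word)
    return " ".join(output)

corpus = "thanks for your message we will be in touch soon"
model = train_bigrams(corpus)
print(generate(model, "thanks"))
```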

It’s too early to know whether any technology employed for recruiting produces biased results, but it may well happen if the technology is trained on a dataset that reflects past biases. There are examples from other fields, such as a product that predicts future criminal behavior and thereby influences sentencing. An evaluation of its outcomes suggests a propensity for bias, which has real-world consequences for those whose lives are affected. The same could be true for any AI-driven product that predicts on-the-job performance if the training dataset is not carefully chosen.
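
A hedged sketch of how this happens mechanically. The data below is synthetic and deliberately rigged: past hiring favored group A at a lower skill bar, and a model fit to those outcomes reproduces the preference, rating two equally skilled candidates differently.

```python
# Historical bias leaking into a model: synthetic data where group A was
# hired at a lower skill threshold than group B.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)       # 0 = group A, 1 = group B
skill = rng.normal(50, 10, n)       # an otherwise fair signal
# Biased labels: group A's hiring bar was 45, group B's was 60.
hired = (skill > np.where(group == 0, 45, 60)).astype(int)

X = np.column_stack([group, skill])
model = LogisticRegression(max_iter=1000).fit(X, hired)

# Two equally skilled candidates from different groups:
a, b = model.predict_proba([[0, 55], [1, 55]])[:, 1]
print(f"group A: {a:.2f}, group B: {b:.2f}")  # the model rates A higher
```

The model did exactly what it was asked to do: learn the historical pattern. The bias is in the labels, not the algorithm.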

There are also more mundane issues with AI. An interview-scheduling application may not recognize that a highly qualified candidate who expresses urgency in finding a time to interview may be receiving other offers and should be prioritized above others. Nor has the application any way of knowing whether one candidate is more qualified than the others and should be shown preference. This is not due to a lack of programming, but because the application lacks the overall context for the interview.
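
A minimal scheduler sketch makes the gap visible: the task, as defined, is just finding the earliest overlapping slot. Urgency and qualifications never enter the logic because they are not part of the task definition. The slot values are hypothetical.

```python
# A scheduler reduced to its task definition: intersect two availability
# lists and return the earliest common slot. Nothing about the candidate
# (urgency, qualifications, competing offers) appears anywhere.
def first_common_slot(interviewer_slots, candidate_slots):
    common = sorted(set(interviewer_slots) & set(candidate_slots))
    return common[0] if common else None

interviewer = ["2018-05-28T10:00", "2018-05-28T14:00", "2018-05-29T09:00"]
candidate = ["2018-05-28T14:00", "2018-05-29T09:00"]

print(first_common_slot(interviewer, candidate))  # 2018-05-28T14:00
```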

Another limitation of AI systems is the explainability problem. The technology can be a black box: it is hard to explain how a particular decision was reached. A product may produce good results, but a lack of explainability limits its credibility in the eyes of those who rely on it to make decisions. Having to take it on faith that the product works does not inspire confidence. New laws, such as the European Union's GDPR, could stop the use of any AI product for recruiting where this problem exists. The law specifically includes a right to explanation: individuals have the right to be given an explanation for decisions that significantly affect them. So a person denied a job because the process included recommendations from AI could insist on an explanation of how the algorithm produced the decision.
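
One hedged sketch of what an explanation might look like: for a simple linear model, each feature's contribution to a decision can be read off directly from the coefficients, which is precisely what a deep black-box model does not offer. The feature names here are hypothetical, and reading coefficients this way is only one possible approach to explanation.

```python
# For a linear model, a per-feature "explanation" falls out of the math:
# each coefficient times the applicant's value is that feature's pull on
# the decision (intercept omitted for simplicity).
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["years_experience", "skills_match", "assessment_score"]
X = np.array([[3, 0.4, 55], [8, 0.9, 80], [1, 0.2, 40], [6, 0.7, 75]])
y = np.array([0, 1, 0, 1])  # 1 = advanced to interview

model = LogisticRegression().fit(X, y)

applicant = np.array([2, 0.3, 50])
contributions = model.coef_[0] * applicant
for name, value in zip(features, contributions):
    print(f"{name}: {value:+.2f}")
```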

AI models also cannot transfer their learning; that is, they cannot apply what they learned in one set of circumstances to another. Unlike a recruiter, who can use the skills and experience developed filling jobs in one industry or field to fill jobs in others, whatever an AI model has learned for a given task applies to that specific task only. Adapting the model to something even slightly different means training it all over again. A model that predicts which chemical engineers perform well in the energy industry will be useless at predicting which ones perform well in the food industry.
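
A sketch of that failure, using synthetic data: a model is fit to one "industry," where one feature drives performance, then evaluated unchanged on another, where a different feature drives it. Accuracy collapses to roughly chance.

```python
# The transfer problem: a model fit on domain A, evaluated as-is on
# domain B, where the same features relate to performance differently.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_domain(weight, n=500):
    """Synthetic domain: performance driven by a domain-specific weight."""
    X = rng.normal(0, 1, (n, 2))
    y = (X @ weight + rng.normal(0, 0.3, n) > 0).astype(int)
    return X, y

X_energy, y_energy = make_domain(np.array([2.0, 0.0]))  # feature 0 matters
X_food, y_food = make_domain(np.array([0.0, 2.0]))      # feature 1 matters

model = LogisticRegression().fit(X_energy, y_energy)
print("energy accuracy:", model.score(X_energy, y_energy))  # high
print("food accuracy:  ", model.score(X_food, y_food))      # near chance
```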

The Future of AI

What is true today for AI may not hold tomorrow. As approaches for developing AI advance, many of the limitations described above may disappear. Techniques such as Reinforcement Learning may reduce the dependence on curated training data by having a model learn the right decisions through trial and error, so the effects of bias from a dataset could become a non-issue. Transfer Learning could help take what a model has learned and make it available to other tasks. For now, AI applications can augment what a recruiter does; they're more IA (Intelligence Augmentation) than AI. We don't know what tomorrow will bring, but whatever it is will be more disruptive than anything we have today.