
ChatGPT Falls Short During Candidate Assessment

Feb 1, 2023

Research shows that when new hires don’t pan out, 89% of the time, the causes are attitudinal, not technical. In other words, candidates generally don’t fail because they lack the technical skills to perform the job. Instead, they may lack coachability or emotional intelligence, or perhaps they have the wrong temperament to fit the company’s culture.

For recruiters and hiring managers, that means assessing candidates requires an eye for subtlety and nuance; discerning a candidate’s coachability is not as straightforward as assessing their coding skills.

Which brings us to ChatGPT, essentially a conversational chatbot. This latest AI is frighteningly smart, but it’s not quite ready to pick up on the nuance embedded in candidates’ responses. We recently used ChatGPT to assess real candidate responses, and as you’ll see, a good deal of nuance gets missed.

In a recent report on behavioral interview questions, we found that the following question was highly effective in differentiating high- and low-performing candidates: “Could you tell me about a time when you were asked to do something you didn’t know how to do?”

We gave ChatGPT that interview question and asked it to assess an actual answer from a real-life candidate. Here’s the answer we used:

Oh, that’s easy. I was always learning on the fly at my last job mostly because we didn’t have a lot of processes in place. Customers were always complaining that our billing system was seriously flawed, and they were right. 

I remember one time when a customer had a billing issue and my boss told me to check with my co-worker, Sam, to see if he had any knowledge of the case. But Sam had nothing. In situations like this, I will take it upon myself to track back into the customer’s records for clues or I will call the customer to see if something was missed. In this case, I really had to go on the hunt to find information that would help fix the customer’s billing issue. I find that there’s usually an answer if you look hard enough. 

What do you think of that candidate’s response? Was it specific? Did they answer the interview question? Here’s what ChatGPT thought of the candidate’s response:

The candidate answered the question and provided a specific example of a time when they were asked to do something they didn’t know how to do. They also explained their approach to handling the situation and their determination to find a solution. They demonstrated the ability to think critically and problem-solve. The candidate gave a good answer to the question.

On the face of it, ChatGPT is not wrong. The study “Words That Cost You The Interview” discovered that first-person pronouns and past-tense verbs are often powerful indicators that candidates are providing direct and specific answers to interview questions. This candidate used far more first-person than second- or third-person pronouns, and many more past-tense than present- or future-tense verbs.
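To make that pronoun-and-tense signal concrete, here is a minimal sketch of the kind of surface-level counting an automated screen might do. It is purely illustrative and not the study’s actual scoring method; the function name directness_signals, the word lists, and the regex patterns are our own assumptions.

```python
# Purely illustrative sketch (not the study's scoring method): crude counts
# of pronoun person and tense markers in a candidate's answer.
import re

FIRST_PERSON = {"i", "me", "my", "mine", "we", "us", "our", "ours"}
OTHER_PERSON = {"you", "your", "yours", "he", "him", "his", "she", "her",
                "hers", "they", "them", "their", "theirs"}

# Crude tense markers: regular "-ed" verbs plus a few common irregular past
# forms; modal/auxiliary words like "will" as a hint of hypothetical phrasing.
PAST_HINTS = re.compile(r"\b(\w+ed|was|were|had|did|told|said|went|took|found)\b", re.I)
HYPOTHETICAL_HINTS = re.compile(r"\b(will|would|can|am|is|are)\b", re.I)

def directness_signals(answer: str) -> dict:
    """Return rough counts that hint at how direct and specific an answer is."""
    words = re.findall(r"[a-z']+", answer.lower())
    return {
        "first_person": sum(w in FIRST_PERSON for w in words),
        "other_person": sum(w in OTHER_PERSON for w in words),
        "past_markers": len(PAST_HINTS.findall(answer)),
        "hypothetical_markers": len(HYPOTHETICAL_HINTS.findall(answer)),
    }

if __name__ == "__main__":
    sample = ("I remember one time when a customer had a billing issue. "
              "In situations like this, I will call the customer to see "
              "if something was missed.")
    print(directness_signals(sample))
```

Run on the candidate’s answer above, a counter like this would find plenty of first-person pronouns and past-tense markers, which is exactly the surface signal ChatGPT appears to reward, and exactly why a human still needs to notice where the answer drifts into hypothetical “I will…” language.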

The candidate was asked to tell about a time when they were asked to do something they didn’t know how to do, and the candidate responded by saying, “I remember one time when a customer had a billing issue…” The person has clearly recalled a specific event in response to our question.

But let’s go a bit deeper. After the candidate explained that their co-worker didn’t have an easy answer to the customer’s billing issue, the individual shifted their language, providing hypothetical rather than specific solutions. When they say, “In situations like this, I will take it upon myself to track back into the customer’s records for clues or I will call the customer to see if something was missed,” they’re not explaining what they actually did, only what they would hypothetically do.

The candidate does say, “In this case, I really had to go on the hunt to find information that would help fix the customer’s billing issue,” but they fail to offer any specifics about what going on the hunt entailed or what solutions they devised.

On a superficial first reading, the candidate answered our interview question. But when we dig a bit deeper, it becomes clear that the candidate didn’t really tell us much of anything. Sure, there was a time when they didn’t know how to do something, but we didn’t learn anything about specific steps they took to overcome that challenge (or even if they overcame it).

The point here is that for all the potentially transformative power of artificial intelligence like ChatGPT, there’s still a lot of the talent acquisition process that requires human skill and discernment. Maybe AI will get there eventually, but it’s not there yet.
