
Promises and Pitfalls of ChatGPT

The AI tech has the potential to change the landscape of recruiting — for better and for worse.

May 3, 2023

Talent acquisition has been an enthusiastic early adopter of technology. That’s especially true of AI. At the 2019 HR Technology Conference, I was stunned by the number of “AI for recruiting” vendors with gigantic booths.

While I might be naturally skeptical, I’m certainly no Luddite. AI has tremendous potential to transform one of the more challenging jobs out there: finding and selecting the right talent. Right now, generative text tools like ChatGPT are all the rage. What could go right? What could go wrong? How can organizations avoid making a potentially damaging decision?

The Promise of ChatGPT in Recruiting

Great recruiters are great communicators, but they generally aren’t crafting bespoke content that takes hours to write. Instead, they must be high-volume, high-quality communicators.

So you can see why a tool that can naturally understand and respond to people is so attractive. Assuming it’s any good, it could help with the mountains of communication recruiters must manage, from outbound and inbound candidate messages to coordinating a connection between a hiring manager and a candidate. Maybe it could even help create job descriptions and advertising.
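To make that concrete, here is a minimal sketch of what AI-assisted outreach drafting could look like, assuming the OpenAI Python client; the candidate record and role below are hypothetical placeholders, and the point is that the model produces a draft for a recruiter to review and edit, not a message that goes out automatically.

```python
# A minimal sketch of AI-assisted outreach drafting (assumes the OpenAI Python client).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical candidate record and role; in practice these would come from your ATS.
candidate = {"name": "Jordan", "current_title": "Data Analyst", "skills": ["SQL", "Python"]}
role = {"title": "Senior Analytics Engineer", "team": "Revenue Operations"}

prompt = (
    f"Draft a short, friendly outreach email to {candidate['name']}, "
    f"a {candidate['current_title']} skilled in {', '.join(candidate['skills'])}, "
    f"about our {role['title']} opening on the {role['team']} team. "
    "Keep it under 120 words and end with a clear call to action."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)

# The draft goes to a recruiter for review and editing; a human still owns the message.
print(response.choices[0].message.content)
```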

In theory, this kind of automation could also streamline some of the more tedious tasks in talent acquisition. While GPT is good at reading information like resumes, there’s still little research showing it can reliably do tasks like force-ranking the candidates you should consider.

The Potential Pitfalls

You can’t deny there are potential benefits, even though I still see advantages in using actual people for all of the tasks above. But there are real challenges.

Any recruiting data you might use to train a GPT companion has to be immaculate. Vendors that have worked in the trenches on this challenge already know the problem: There’s a lot of bad data in these systems, and fixing it is nearly impossible. There’s no way to fake the outcomes you want.

While bias is the major and obvious concern, there’s another: a model doesn’t know what it doesn’t know. You know that feeling in the back of your head when you’re missing a key piece of information but can’t pinpoint what it is? AI is more likely to simply grade on the information it has and never disclose the gap in its reasoning.

Just like with generative writing or art, there’s limited evidence that people want to be led through a very human, vulnerable hiring process by a machine. While research shows that candidates are comfortable interacting with supportive chatbots, they may not be so keen on learning that a chatbot decided to prioritize another resume over their own.

Is Generative AI the Real Problem?

While I can point to all the ways this approach is problematic, there’s another reality to face.

Not every recruiter is a great one. Some are poor communicators. Have you ever read a bad job description or advertisement? I’ve read thousands! Ever had a recruiter ghost you, even as a fellow recruiter? Of course you have.

Oftentimes, it’s not a capability problem but a training gap. This is ultimately why organizations can even consider using generative AI to replace certain recruiting tasks: if recruiters were consistently doing a great job and adding value to the hiring process, you wouldn’t think of replacing them.

Organizations that take a lazy approach to equipping and managing talent acquisition will be the first to rush to maximize the use of these technologies, and they might even see better results, because they never gave their people a real chance in the first place.

A Human-First Approach to Recruiting with Generative AI

If you’re doing hiring right (or you want to do it right), it’s crucial to take a human-first approach when integrating generative AI (or any other AI technology) into your recruiting process. Here’s how organizations can strike the right balance between AI and human involvement:

Be transparent with candidates. The research on chatbots for support is clear: Let people know a machine is doing some of the work but that humans are available as well. Ultimately, you should let candidates know how you’ll be using AI-assisted tools, just as you would communicate other aspects of the hiring process.

Use AI as a tool, not a replacement. You can’t replace people in the hiring process, so keep that in mind as you’re investigating solutions. Most recruiting teams welcome technology that helps them do their work better rather than technology that feels like it’s competing for their jobs.

Continuously monitor and evaluate AI performance. There’s no doubt that generative AI is here to stay and that it will improve, but organizations have an obligation to monitor its performance. This is especially critical early on, particularly from a legal perspective, to ensure your hiring process is fair and effective.
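As one illustration of what that monitoring could look like, here is a minimal sketch in plain Python, using hypothetical outcome data, that compares selection rates across groups for an AI-assisted screening step and flags anything below the commonly cited four-fifths threshold for a closer look; it is a starting point for an audit, not legal guidance.

```python
# A minimal sketch of monitoring an AI-assisted screening step for adverse impact.
# The outcome data below is hypothetical; the "four-fifths rule" is a common rule
# of thumb for flagging potential adverse impact, not a substitute for legal review.
from collections import defaultdict

# Each record: (group label, whether the AI-assisted step advanced the candidate)
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

advanced = defaultdict(int)
total = defaultdict(int)
for group, was_advanced in outcomes:
    total[group] += 1
    advanced[group] += int(was_advanced)

selection_rates = {g: advanced[g] / total[g] for g in total}
highest_rate = max(selection_rates.values())

for group, rate in sorted(selection_rates.items()):
    ratio = rate / highest_rate
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, ratio to highest {ratio:.2f} [{flag}]")
```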

It’s an exciting time to be in talent acquisition, and everybody is looking to make use of these innovative new tools. But in the rush to be first or to grab the shiny new thing, remember to keep your perspective on the hiring process for candidates, hiring managers, and your fellow recruiters.

It’s ultimately your call to find the right balance of technology and human touch for your individual organization. By focusing on keeping people at the center of our processes, we can continue to create innovative experiences for all.
