
Recruiting Is Rife With Weak Business Cases (and That’s OK)

May 17, 2022

What does it mean to say that a given recruiting practice is good for business? Imagine your CFO (it’s always someone in finance squinting at you, right?) is questioning whether the new tech you want to use to improve candidate experience is “worth it.”

Now, you might explain why a positive candidate experience is important. You might even talk about how the new solution will likely impact candidate experience. You may present numbers around time to fill, apply rates, dropout rates, etc. That’s all nice, but your CFO doesn’t care about any of that — at least not enough. The numbers he’s really interested in are the ones with dollar signs before them. 

In other words, you’ve got to make a business case, one that is financially measurable. It’s not enough to say that something is good for the bottom line. You’ve got to prove it. 

But that’s not what this article is about. This isn’t some crappy “9 Ways to Make a Business Case” story. Rather, it’s about whether a business case should be required in the first place.

More specifically, which efforts should demand a business case? Which shouldn’t? And what happens if you can’t make a strong financial case for something that is obviously — obviously! — good for the business?

The Business Case for Paperclips

The answers to the questions above can be found in a graph.

They can’t, actually, be found in a graph. But wouldn’t it be great if that were the case? Who doesn’t love a nice visual depiction of metrics? 

Thing is, that’s often not possible, for even your CFO knows (though he’ll never admit it) that a lot of data is soft. Figures are fungible. They come with limitations. More importantly, they come with various interpretations. 

This underscores the point that it’s not really numbers that matter most. It’s how we talk about them. So let’s talk about them when it comes to your business case for office supplies.

Remember when your C-suite gathered to pore over the report that PwC conducted for you about the ROI of purchasing paperclips for your workplace?

The sarcasm in that question is real, but so is the principle underlying it: At what point, for what exactly, do you do a cost-benefit analysis? Is there a line? Where is it?

I once worked at a company where the CFO grilled me over the tip I gave a cab driver. He explained that 15% was the standard. I violated company policy by giving a bit more. If that sounds ridiculous, it is. But at least the company was drawing a financial line. Was it one driven by some sort of research? I doubt it, but nonetheless, a business decision, if not a case, was made.

And that’s often what happens. Semi-arbitrary thresholds will serve as guides for how money is spent in an organization. In an article I wrote years ago for The Conference Board Review about this topic, I said: 

“You can slap a price tag on anything. Training programs cost this; IT equipment costs that. But knowing the price of everything and the value of nothing risks stumbling to a point of no returns, where a marketing campaign that costs $1 million might be worth no more than $1. To mull over whether something will actually merit its cost, you must consider how —  if — you’ll eventually evaluate financial results.”

And so the question reverts to: Can we subject everything to a business case?

Chaos Around Causation

In that same article, David Larcker, the James Irvin Miller Professor of Accounting at Stanford’s Graduate School of Business, told me, “Accounting is really great at telling you if you made money, but it’s not so great at saying: Here’s the procedure or process that made you that money.”

For example, can you prove that spending X amount on an employer branding campaign yielded higher profits or greater savings? You might be able to demonstrate a more robust interest from job-seekers, but again, that’s not a true business case. 

Association, correlation, and causation are not the same. Often, there are just too many variables involved, particularly in the hiring process, and especially over the long term. Isolating actions and expenses can prove…unprovable.

This is something worth keeping in mind in many instances, including when tech vendors make their presentations. When they present all sorts of numbers, it’s sometimes like reading Spanish when you don’t know Spanish. You can read the words, but you won’t comprehend what they mean.

Larcker further pointed out that fewer than 30% of companies had developed models that could make causal connections to long-term economic performance. He also cited a study he conducted of a telecommunications company that was aiming for 100% customer satisfaction. However, the organization never tried to uncover whether customer satisfaction correlated with profits.

Actually, Larcker’s research showed that customers who were 100% satisfied spent no more money than those who were only 80% satisfied.

And yet, who among us wouldn’t argue for better customer service? And what modern TA professional wouldn’t push for a better candidate experience?

Thing is, you don’t invest in candidate experience; you invest in training or technology or some other aspect that will hopefully raise candidate experience. And then you’re left with the same question: What was the ROI? The “R” being financial. 

The answer will be messy. 

At the same time, Larcker also cautioned against what he termed “measurement disintegration,” which is when “an overabundance of marginal, insignificant, or irrelevant assessments dilutes the effect of the measurement process.”

As The Conference Board Review article pointed out:

“A leading home-finance company that Larcker [along with a colleague] studied suffered paralysis by analysis after instituting an ‘executive dashboard’ that eventually ballooned to tracking nearly 300 measures. Larcker also points to a bank that adopted multiple accounting and nonfinancial measures. As a result, the time per quarter that area directors began spending on evaluation jumped from less than one day to six days. Eventually, the company reverted to fewer, money-based measures.”

So Now What?

Perhaps all this explains why a study some years back by the consultancy ESI International showed that fewer than half of surveyed executives track the impact, financial or otherwise, of their training and learning programs. The most common reasons included lack of resources and confusion about what to measure.

The study found something else, too. You’ll want to lean in for this nugget: Almost 20% of managers who said they don’t measure business impact admitted it’s because they were worried about the outcomes.

What, then, do you measure?

The answer is simple but unsatisfying. You measure what you can. You measure candidate engagement, you track diversity hiring, you monitor time to fill, and you watch all sorts of ratios throughout your hiring process. The inability to create accurate financial measurements is no excuse to get lax about other metrics. The reality is that some correlations are obvious, like the value of candidate experience to the business, in the same way that office supplies are necessary.

The best leaders will always recognize that work is filled with intangibles. When you are in the people business especially, which recruitment certainly is, most of your efforts will be laden with work that can’t be translated into line items on a P&L statement.

I summed it up best years back: “[I]f business were as simple as surrendering all the work to numbers, we wouldn’t need managers. And so, the real link between an activity and revenue is not nonfinancial performance but people.” More specifically, it boils down to you. 

Good instinct, good experience, good judgment. That’s all part of being a solid recruiting professional. As I said, “[Y]ou shouldn’t hurl figures around when making a business case or evaluating results as if the digits tell the whole story — because, as mentioned before, numbers do not speak. Nor do they make decisions. You do. There’s a certain illogic of blaming poor data when initiatives fail and accepting lavish praise when they succeed. In the end, the responsibility lies with you.

“If this still seems like insufficient guidance, you can always hire consultants to help the process along — if you can make the business case for them.”

This article is based on The Conference Board Review‘s “The Case Against the Business Case” by Vadim Liberman.
