Is Personality the Ultimate Solution to Hiring?

There is a host of personality tests on the market, all claiming to be helpful in making hiring decisions. But before hiring and recruiting professionals commit to using personality tests, they need to understand the difference between “causation” and “correlation.” In other words, does a “good” personality score really indicate “good” performance? Causation means that one thing causes another to happen. Spit on Superman’s cape and you can expect to be punished. The stimulus causes the punishment. Correlation is different. Correlation means that two things tend to occur at the same time, but one does not “cause” the other. Blue hair and Grandmas are correlated (i.e., co-related), but having blue hair does not cause someone to be a Grandma. This is a very important distinction to keep in mind (as we will see later) when using a personality test to hire someone.

Hiring

Why do we give applicants tests? To quickly predict future performance! That is, instead of hiring someone and then waiting around a few years, we give applicants a short test, examine the score, and conclude, “A-ha! This score means our applicant will be our next _____!” Test results, however, are accurate only if the test content is associated with job performance, meaning that it has a causal link to on-the-job behavior. For example, it’s “causal” when employees with bad problem-solving skills make bad decisions. It’s “causal” for bad decisions to create unnecessary expense. Therefore, common sense tells us that when problem solving is important to job performance, a quickie test of problem-solving ability can lead to more profits and fewer mistakes. A problem-solving test will simulate problem-solving on the job, and when properly validated, will show that low test scores predict low job performance. But what about personality tests? What do they have to do with job performance? (Note: I use the term “personality test” to mean any kind of self-descriptive test of style, temperament, or interest.) Week after week, I see people using scores from personality-type tests to predict job success. When I ask them about causation, they look at me as if I just arrived from another planet. Then, after a long pause, I usually get three kinds of responses:

  1. “Our managers like it. That’s good enough for us!”
  2. “We gave it to our top producers and use their scores as our answer key.”
  3. “Huh?”

Sigh! Where is Superman when you really need him? We will only examine responses along the lines of #2 above. #1 and #3 buyers deserve what they get.

What’s a Top Producer?

This is one of those situations where common sense fails us. The term “top producers” is an attractive one, but top producers are seldom alike. For one thing, what is the definition of the word “top”? Most popular? Most sales? Most repeat customers? Biggest projects? Best attendance? Best looking? Lowest golf handicap? People are generally good at some things and poor at others. Lumping all “top producers” into the same category is like putting mixed fruit into a blender and pressing the “liquefy” button. All the individual differences disappear into mush. Therefore, the first step in using personality to predict job performance is to avoid the blender approach and carefully define the term “top” — for example: “most friendly,” “most service-oriented,” “best closer,” “best cross-dresser” (sorry, make that “cross-seller”). Remember, the more explicit the performance criteria, the more accurate the study. Some readers may now be thinking, “This is tough. How can I possibly classify top producers into similar groups?” Congratulations! They have moved one step closer to hiring enlightenment.

The Problem With Averages

Okay, we have minimized the blender approach and properly classified producers into a variety of “top” and “bottom” groups. Now we can give all top and bottom producers the personality test, average each group’s scores, and compare the differences. We are done, right? Not by a long shot. Averaging individual scores is like having one foot in a roaring hot fire and the other in freezing ice water and concluding that, on average, the temperature is comfortable. “Average” comparisons might seem like a good idea, but they tend to conceal extremes. Here’s an example. Take two score ranges for adaptability: the high group had a range of 50 to 90, with an average of 70; the low group had a range of 40 to 80, with an average of 60. There’s a difference! We can use it to predict individual performance! Right? Wrong. Yes, there is a difference between the groups. But what we care about is hiring individuals, and there is an inconvenient score “overlap” that needs explanation. Why do some individuals in the “low group” actually score higher than some in the “high group,” and vice versa? We discovered a correlational group difference in adaptability, but we cannot conclude that adaptability causes performance.
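The overlap problem is easy to see with a few numbers. Here is a minimal sketch (all scores below are invented for illustration, not real test data):

```python
# Illustration: two groups can have different AVERAGE scores
# while their individual scores overlap heavily.
high_group = [50, 60, 70, 80, 90]   # "top" producers' adaptability scores (invented)
low_group = [40, 50, 60, 70, 80]    # "bottom" producers' scores (invented)

avg_high = sum(high_group) / len(high_group)  # 70.0
avg_low = sum(low_group) / len(low_group)     # 60.0
print(avg_high, avg_low)  # the averages differ: 70.0 vs. 60.0

# But the ranges overlap: most "low" producers score at or above
# the bottom of the "high" group's range.
overlap = [s for s in low_group if s >= min(high_group)]
print(overlap)  # [50, 60, 70, 80]
```

A hiring cutoff based on the group averages would reject some strong performers and accept some weak ones, which is exactly the overlap problem described above.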

The Right Way and the Wrong Way

When we “shotgun” a personality test to employees (i.e., give everyone the test and look for patterns), we run the risk of finding garbage relationships. That is, we might find high scores in “adaptability” are common among high performers, but that might occur entirely by chance. Could shotgun studies lead to hiring mistakes? Remember the blue-haired Grandma example? Shotgun personality results showed that blue hair and Grandmas were correlated, so if we started hiring only applicants with blue hair, would we get the Grandmas we needed? In other words, will blue hair cause someone to become a Grandma? No. If we made blue hair a hiring criterion, some hires would be Grandmas, but some would be punks, some Goths, and some men. Blue hair was a bad hiring criterion because it did not “cause” Grandmas. Likewise, although “adaptability” might be common among high performers, will hiring applicants based on high adaptability scores lead to skilled employees? How do we discover whether high “adaptability” actually leads to high performance? It’s a multiple-step process.

  1. We start with a clean slate.
  2. We interview jobholders, managers, and visionaries, asking them all kinds of questions about success and failure, listening carefully for key behaviors that sound like “adaptability.”
  3. We carefully separate existing employees based on some form of “adaptability” rating.
  4. We give both high and low performers a valid and reliable personality test (i.e., one specifically designed to measure on-the-job adaptability).
  5. We statistically compare scores on the adaptability test with adaptability performance ratings.
  6. If high adaptability scores correlate with high performance and low scores with low performance, then we can feel confident our adaptability test will predict job performance.
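Step 5 above is, at heart, a correlation calculation. A minimal sketch in Python, using only the standard library (the scores and ratings below are invented for illustration):

```python
from statistics import mean, pstdev

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of numbers."""
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (pstdev(xs) * pstdev(ys))

# Invented data: adaptability test scores vs. on-the-job
# adaptability performance ratings for six employees.
test_scores = [42, 55, 61, 70, 78, 88]
perf_ratings = [2.5, 2.0, 3.5, 3.0, 4.5, 4.0]

r = pearson_r(test_scores, perf_ratings)
print(round(r, 2))  # roughly 0.79 for this invented data
```

A strong positive correlation, confirmed against explicit performance ratings rather than a vague “top producer” label, is what gives us confidence the test predicts the behavior we actually care about.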

The Research

What’s a good article without research to back it up? The research shows:

  • Personality is only a good predictive factor when applicants already have the skills for the job. Think of skills and personality as two sides of a coin: One side represents the “hard skills” brought to the job (“can do”), the other, personality, represents how skills are applied (“will do”).
  • There are only a few personality traits associated with job performance. The challenge is to find and confirm them.
  • When done right, and assuming the employee has the right “hard skills,” having the right personality for the job can affect productivity by at least two to one.
