Sure, it’s easy to say engineering, legal, IT, or actuarial jobs require technical degrees. People in these professions need a substantial amount of education to practice their trade. But we all know from watching folks in these professions that it takes more than a sheepskin to be successful. Sometimes, it takes certain personality factors to make a good job fit.
Job performance is a two-sided coin. On one side are all the hard skills required for the job; on the other, all the social factors. For the purposes of this article, we'll assume personality sits on the social-factor side.
The important things to ask are, “How do we decide which personality factors are important?” and “How do we measure them?”
A pre-screen test (i.e., interviews, tests, resume review, application form) is supposed to determine whether the applicant is qualified. In a perfect world, good scores predict good performance, and bad scores predict bad performance (assuming we hired applicants with bad scores).
Well, there’s a right way and a wrong way to link scores to future job performance. Unfortunately, the wrong way is the norm.
The wrong way comes in two varieties:
- Someone likes XYZ test, gives it to every applicant, and plays amateur shrink.
- Someone gives a one-size-fits-all test to current employees and correlates scores with job performance.
Both varieties are riddled with mistakes. I call them Wrong Way 1 (WW1) and Wrong Way 2 (WW2).
WW1 is common among most organizations I know. WW2 is common among folks who may have taken a class in statistics but skipped the class in measuring human performance. Either way, both WW1 and WW2 turn away good applicants and hire bad ones. Of course, you don't have to take my word for it. Just do a controlled study.
What is This Thing Called Performance?
If we want to use a pre-hire test, common sense says to first determine what factor we want to measure. Second, choose an accurate and trustworthy test that measures this factor. Third, make sure the test predicts job performance. Sound simple? Read on.
Some people think they can measure “performance” by looking at performance appraisals or supervisor ratings. However, we all know most performance ratings are primarily personal opinion shaped by friendships, power, ambition, or sucking up (sucking up is a technical term for getting other people to think you are actually competent at what you do).
As a result, most performance review data means:
- The employee was unskilled at shifting blame for his or her mistakes and now must wait for the rater to get Alzheimer's.
- The employee is highly skilled at shifting blame and charismatic, and thus is well-liked by the boss.
- The employee is skilled at corporate camouflage.
In general, any data that can only be evaluated at the end of a long performance period is error-prone. Sales volume, for example, is fuzzy because it’s a function of persistence, fact-finding, learning, strategizing, presentations, adapting to buyer personalities, economic conditions, market conditions, and so forth.
To use an analogy, if you want to measure the quality of grapes in a fruit salad, you cannot put all the fruits into a blender, press mix, and come back six months later with your grape-o-meter.
Performance data should be easy to see and easy to measure.
Big Nets Often Contain Big Holes
There are some folks who think they can give a multi-factor personality test to employees, and then examine the results to identify correlations. Sorry. No cigar for them, either.
Let’s assume for a moment these folks got the performance thing right. Giving one great-big test to everyone and looking for results is a major mistake, primarily because correlation is not causation. In other words, just because two things are statistically associated does not mean one causes the other.
Consider the famous correlation of rising skirt hemlines with rises in the stock market. If one actually caused the other, today’s hemlines would probably be located somewhere above the waistline, causing most male traders to become so distracted a market crash would be inevitable.
Statistics is deaf, dumb, and blind. It can show how two factors move in relation to each other; only a human being can decide whether one factor causes the other. For example, organization or teamwork factors may correlate with performance, but if organization and teamwork don’t actually cause performance, then hiring people based on these scores will reject qualified candidates and hire unqualified ones.
How Many Factors?
The DISC is a widely used tool. Just check off a few dozen adjectives and you can get a mini-novel. It looks good, but is the DISC really as comprehensive or accurate as it looks? You decide.
The DISC is based on a two-factor theory that is almost 80 years old. It states that personality is a function of being either active or passive in a friendly or unfriendly environment. Does this sound sufficiently comprehensive to define your job?
I suppose hiring everyone who described themselves as mini-Napoleons would produce an employee base of toy soldiers who continually fought for power. However, would that be productive?
I once visited a company where the HR guru only hired people who excelled at teamwork. After a few years, it had 300 employees who would not schedule a meeting unless everyone could attend, make a decision unless everyone agreed, and not confront a production problem because it might hurt someone’s feelings.
Did they get the results they expected? No. They got the results they measured.
University research shows it takes about nine to 10 factors to predict job performance: six or seven based on job fit and three based on job attitude. It is probably a good idea to check with your test vendor to see whether the test you are using was developed from this research. If not, it probably won’t predict the performance or attitudes you need.
As I said before, statistics is deaf, dumb, and blind. It measures associations, but only under the right conditions. For example, data sets smaller than 25 people are filled with too much individual information and not enough group information.
That is, comparing the characteristics of 15 high producers with 15 low producers is probably going to contain a substantial amount of error.
The bottom line? Trustworthy statistics needs large numbers of people to be accurate.
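The instability of small samples is easy to demonstrate. Here is a minimal Python sketch (the sample sizes and trial count are illustrative choices of mine, not figures from the article) that repeatedly computes the correlation between two variables that are, by construction, completely unrelated:

```python
import random
import statistics

def sample_correlation(n, seed):
    """Pearson correlation of two independent (truly unrelated) variables."""
    rng = random.Random(seed)
    x = [rng.gauss(0, 1) for _ in range(n)]
    y = [rng.gauss(0, 1) for _ in range(n)]
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sx, sy = statistics.stdev(x), statistics.stdev(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    return cov / (sx * sy)

def correlation_spread(n, trials=1000):
    """Largest absolute correlation seen across many samples where the
    true correlation is exactly zero."""
    return max(abs(sample_correlation(n, seed)) for seed in range(trials))

# Pure noise, measured in a small group versus a large one.
small = correlation_spread(15)   # e.g., two groups of 15 producers
large = correlation_spread(500)  # a sample big enough to trust
print(f"max |r| from noise, n=15:  {small:.2f}")
print(f"max |r| from noise, n=500: {large:.2f}")
```

With only 15 people per group, pure noise routinely produces correlations large enough to look "meaningful"; with a few hundred people, it does not.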
What about the size of the correlation? Should we shout and celebrate a correlation of .30? Does it mean we have a 30% relationship? No. It means we have a 9% relationship. The technical details would cause John Wayne to weep, but a correlation has to be squared to learn how much of the performance variance it explains. People who do not understand statistics are easy to fool.
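The squaring step is simple arithmetic, sketched here in Python:

```python
def variance_explained(r):
    """Fraction of performance variance explained by a predictor whose
    correlation with performance is r (the coefficient of determination)."""
    return r * r

for r in (0.30, 0.50, 0.70):
    print(f"r = {r:.2f}  ->  explains {variance_explained(r):.0%} of performance")
# A .30 correlation explains 9% of the differences in performance, not 30%.
```

Even a correlation of .50, which few pre-hire tests reach, explains only a quarter of the differences in performance.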
Personality can be a very important thing to measure pre-hire because it can provide considerable insight into job performance. However, before you can trust a personality test to predict performance in your job, you have to clearly define what you want to predict; identify the personality factor that causes it; find a test developed specifically to measure that factor; either conduct your own study or transport someone else’s work; and finally, realize that personality is only one part of the puzzle.
Otherwise, if your organization is still using WW1 or WW2, it is probably turning away good employees and hiring bad ones.