There was an interesting story in the July 19 edition of Time about Dodge & Cox, the San Francisco-based mutual fund company. Here’s the opening paragraph. You might want to take the same approach to assessing your candidates that they use when investing in stocks:
The money managers at Dodge & Cox have heard the adage that a camel is a horse designed by committee. They politely disagree. Their horse, you see, keeps winning. Each of the firm’s four mutual funds has from nine to 18 portfolio managers, and everyone gets equal say in which stocks and bonds to buy and sell. “The investment business is permeated with the lore of the individual. We think that’s a bad way to manage money,” says CEO John Gunn, one of many decision makers. “There are a zillion independent variables, and it’s very hard for one person to think about them all.”
If you’ve ever lost a good candidate because just one person on the interviewing team (usually the weakest interviewer) didn’t like the person for some superficial, narrow-minded, or easily correctable reason, you know how much effort and how many resources the hiring team just wasted. Redoing searches until you find the right “compromise” candidate usually results in a wrong hire or an average hire.
Aside from the time wasted, your company just let the best person get away for a bad reason. This situation is preventable, but it requires minor adjustments to how you conduct the assessment process. Of course, it’s not easy, but it is possible. Why is it not easy? Consider the following 11 reasons:
- It’s hard to change human nature.
- Too many interviewers judge a candidate within the first few minutes.
- Interviewers ask candidates they like softball questions.
- Interviewers ask candidates they don’t like hardball questions.
- Interviewers tend to use behavioral interviewing more frequently when they don’t like someone.
- Candidates, even the best, are a bit nervous during the first part of the interview. Nervous candidates tend to avoid direct eye contact, appear less confident, and give shorter answers to questions.
- Candidates who are passive tend to be less prepared during the first interview and can appear uninterested. This can turn off managers.
- More hiring mistakes are made in the first 30 minutes of an interview than at any other time, due to biases, incorrect perceptions, and lack of training.
- It’s easier to justify a “no” vote on a candidate than a “yes” vote. Worse, one no vote can offset two yes votes, giving a no vote more power.
- Unprepared interviewers tend to vote no more than yes, so we reward the unprepared interviewer with more influence and power.
- Few companies and fewer interviewers know how to convert their interviewing results into a formal assessment, other than yes, no, or maybe.
Rather than trying to change these things directly, it’s better to systematize them out of your hiring process. To start, take away full yes/no voting rights from everyone on the interviewing team, including the hiring manager. Instead, leave the full yes/no decision to the collective wisdom of the team. This requires a more formal and deliberative type of assessment process.
I’ve been using one in my search practice for the past 15 years with excellent results. You might find it useful. It’s based on this downloadable 10-Factor Candidate Assessment scorecard. The scorecard lists 10 strong predictors of on-the-job success in combination with a 1-5 rating scale. I suggest to my clients that they delay making quick judgments about a candidate and instead conduct a group evaluation based on these 10 factors:
- Technical Competency to Do the Work
- Motivation to Do the Work
- Team Skills with Comparable Groups
- Job-related Problem-Solving Ability
- The Consistent Achievement of Comparable Results
- Planning and Executing Comparable Work
- Environment and Cultural Fit
- Trend of Performance Over Time
- Character and Values
- Overall Potential
An important aspect of this scorecard is that each factor is measured against real job needs. To complete the assessment correctly, everyone on the hiring team needs to know what those needs are. Here are some articles on performance profiles you might want to read on how to determine them.
By describing what the person needs to accomplish, not the qualifications the person needs to have, interviewers can more easily assess the candidate’s past performance against required future performance. This alone helps interviewers stop making snap judgments.
The 1-5 rating scale and detailed guidance on how to rank a person help even more in preventing superficial assessments. For example, to earn a Level 4 ranking for Technical Competency, the candidate would need to demonstrate that he consistently does more than required for job success, and/or does it consistently better, and/or does the required technical work consistently faster. A Level 4 ranking also requires the person to have self-managed his work, to have demonstrated the ability to learn new technical skills quickly, and to have a track record of training others. This type of specific guidance forces interviewers to justify their rankings with facts and evidence, not intuition or gut feelings. It neutralizes those who conduct superficial interviews and those who are unprepared.
What we’ve also discovered using this form is that it’s quite easy to determine whether someone is Unqualified (Level 1) or a Super Star (Level 5). However, it’s quite difficult to discern the differences between a Level 2, 3, or 4 during a one-hour interview.
On this 1-5 ranking scale, a Level 3 person is a great hire. This is a person who can meet all job objectives with minimal supervision and is able to take on bigger projects or get promoted quickly. A Level 4 is also a great hire. I explain to hiring managers that this is a person who will probably be their peer in a few years. A Level 5 hire will probably be their boss in a few years. The downside to Level 4 and Level 5 hires is that they require bigger jobs faster and will leave if they don’t get them. This definition of the 1-5 rating system helps managers get over the idea that hiring a Level 3 is a bad decision.
Implementing this type of evidence-based approach to assessing competency virtually eliminates these two common hiring mistakes:
- Hiring a Level 2 person, assuming the person is a Level 3 or Level 4.
- Not hiring a Level 3 or Level 4 person, assuming the person is a Level 2.
Level 2s are competent but not motivated to do the work. These people are frequently hired if they “perform” well during the interview. Unfortunately, the mistake is only uncovered after the person starts, as time is wasted pushing the person to produce average work. Worse, these people are difficult to fire.
The problem can be easily prevented if interviewers share factual information before voting yes or no as described above. By formally delaying the rush to judgment, this same sharing of evidence process can eliminate the problem of not hiring the strong Level 3 or Level 4 who isn’t a great interviewer.
It’s pretty obvious that if interviewers share the information they gathered during their individual sessions with the candidate in some formal and deliberative way, assessment accuracy will increase. And if the team goes out of its way to make sure no Level 2s are hired, not only will more bad hires be prevented, but good people will get a much more balanced and fair assessment.
Rather than giving everyone a full yes/no vote, assign each interviewer a subset of the 10 factors to evaluate; three or four are appropriate. If you have enough interviewers, you can overlap some of the factors. Then instruct everyone to use the interview to gather detailed evidence for each assigned factor.
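For teams that track this in a spreadsheet or a simple script, the mechanics of dividing up the factors and pooling the ratings can be sketched in a few lines. This is only an illustration; the function names, team members, and scores below are hypothetical, not part of the scorecard itself:

```python
from collections import defaultdict

# The 10 factors from the scorecard, abbreviated.
FACTORS = [
    "Technical Competency", "Motivation", "Team Skills",
    "Problem-Solving", "Comparable Results", "Planning and Executing",
    "Cultural Fit", "Performance Trend", "Character and Values",
    "Overall Potential",
]

def assign_factors(interviewers, factors, per_person=3):
    """Round-robin a subset of factors to each interviewer so that,
    with enough interviewers, some factors get covered twice."""
    assignments = {name: [] for name in interviewers}
    idx = 0
    for name in interviewers:
        for _ in range(per_person):
            assignments[name].append(factors[idx % len(factors)])
            idx += 1
    return assignments

def pool_ratings(ratings):
    """Average the 1-5 ratings gathered by all interviewers.
    `ratings` is a list of (factor, score) tuples."""
    by_factor = defaultdict(list)
    for factor, score in ratings:
        by_factor[factor].append(score)
    return {f: sum(s) / len(s) for f, s in by_factor.items()}

# Hypothetical interviewing team and ratings.
team = assign_factors(["Ann", "Raj", "Mei"], FACTORS, per_person=4)
scores = pool_ratings([
    ("Motivation", 4), ("Motivation", 3), ("Cultural Fit", 5),
])
```

The point of the sketch is the shape of the process: no single interviewer rates all 10 factors, and the final score per factor comes from pooled evidence rather than a lone yes/no vote.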
This type of evidence-based assessment process doesn’t take any more time; it just takes a little organization and training. The benefits, however, are huge. Just as with stocks and bonds, it’s impossible for one person to accurately assess all of the factors required for a complete and accurate evaluation. The logical conclusion is to make assessment a team effort. Just make it about collecting accurate evidence, not adding up a bunch of poorly considered yes and no votes.