There are literally hundreds if not thousands of products in the marketplace promising solutions to managing applicant flow. But, how do you know which ones to use, and, which ones to trust? Let’s start with the trust part.
I’ve found that many people think they can author a screening test by sitting down, making a list of questions, and adding up the answers. Sorry: that approach is likely to deliver unstable results, have nothing to do with job performance (either overall or specific), and will probably deliver false information. Professionals always start with a good theory of job performance.
For example, consider something like job fit. This theory predicts that, among people of equal skills, people who like their job will outperform those who do not. A weak theory, on the other hand, would be that handwriting predicts job performance. Good theories of job performance are always based on expert proof, not personal opinion. Weak theories hide behind claims that, while they don’t actually predict job performance, can “help” make a hiring decision. Can you spell verbal-shell-game? Ask yourself, how can a test “help” improve the quality of hire if it does not have anything to do with job performance? Hmmm?
In general you want to look for tests that predict elements of job fit, job skills, and job attitude. Job fit tells you whether the person will like the job or not. Job skills tell you whether the person has the right skills to do the job. And, job attitude tells you whether the person wants to do a good job. Remember, though, the actual elements of fit, skills, and attitude will change from one job to another.
Once a developer has a good theory, a test needs to be developed and proven. This requires a lot of trial and error, edits, re-edits, internal analysis, and finally multiple studies showing test scores predict some aspect of job performance. Do you really want to make $10,000 decisions without having proof a test works? If you make it a habit to believe unsupported claims, I have a deal for you. First, give your candidate a cup of tea, save both the cup and leaves, pack them carefully, and send them to me with a cashier’s check for $1,000 (to cover shipping and handling). I’ll send you a free report that won’t actually predict job performance but will help you make a better hiring decision.
OK. Now that you have tossed out 90% of the bogus tests, here are two strategies for handling a large applicant flow and reducing early turnover. In later articles I’ll talk about improving organizational bench strength and improving sales.
Some organizations are fortunate enough to have a long queue of applicants. Attempting to reduce the applicant flow, they often purchase resume analysis software. While this kind of software may reduce labor, would someone please tell me how analyzing thousands of documents filled with misleading information can produce anything better than dozens of documents filled with misleading information? Organizations need tools that actually screen for job-related performance.
Screening applicants is a good place to use web applications such as realistic job previews and smart application blanks. Realistic job previews are gut-honest descriptions (as opposed to sweet-sounding invitations) of what the job is like. They can be video, audio, or written, but always should include things the employee will encounter such as work rules, organizational climate, and job-related competencies. Their purpose is to let people know what they will be selected for and what the job will be like. If the applicant does not like the RJP, they can opt-out early in the process; about 5% usually do.
I once worked with an organization that used an automated phone screen. The only problem was applicants were hanging up before finishing. We implemented a brief RJP so candidates could listen to descriptions of different jobs and apply for the ones they liked (or at least, did not dislike). Completed application calls immediately became the norm. The biggest problem with developing an RJP is managers who say, “OMG! We can’t tell them that!” My reply is, “So, you would rather surprise them?” RJPs are very effective ways to reduce early turnover and screen out blatantly disinterested applicants, but not as good as our next tool: smart application blanks.
Smart application blanks are a specific type of test that contains items known (i.e., proven) to predict an aspect of job performance such as early turnover, job-fit, or job-attitude. They work on the premise that “highly successful” employees have specific things in common … and, this next part is critical … things that unsuccessful employees do not. SAB’s usually produce a three-box test: 1) probability the candidate fits the good profile; 2) probability the candidate fits the bad profile; or, 3) not really sure. Don’t get tricked into the idea of testing a candidate against a group average. That is bad science.
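The three-box idea can be sketched in a few lines of code. Everything here is invented for illustration: the item names, weights, and cutoffs are hypothetical, where a real SAB would derive them from validation studies against actual performance data.

```python
# Hypothetical sketch of a smart application blank's "three-box" scoring.
# Item names, weights, and cutoffs are invented for illustration only; a
# real SAB derives them from validation studies, not guesswork.

GOOD_PROFILE = {"stayed_2_years_last_job": 2, "referred_by_employee": 1}
BAD_PROFILE = {"more_than_3_jobs_last_year": 2, "no_relevant_experience": 1}

def score_applicant(answers: dict) -> str:
    """Return 'good fit', 'poor fit', or 'not sure' for one applicant."""
    good = sum(w for item, w in GOOD_PROFILE.items() if answers.get(item))
    bad = sum(w for item, w in BAD_PROFILE.items() if answers.get(item))
    if good >= 2 and bad == 0:
        return "good fit"      # box 1: matches the successful profile
    if bad >= 2 and good == 0:
        return "poor fit"      # box 2: matches the unsuccessful profile
    return "not sure"          # box 3: no clear match either way

print(score_applicant({"stayed_2_years_last_job": True,
                       "referred_by_employee": True}))  # good fit
```

Note that the third box is doing real work: rather than forcing every applicant onto one side of a group average, the sketch only commits when the evidence points clearly one way.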
Now, let’s look deeper at what’s performance and what’s not; what’s different about employees compared to applicants; and, what kinds of things an SAB should measure.
Defining performance is slippery. Supervisor ratings are usually filled with personal bias; performance appraisal forms are usually one-size-fits-none; and completing a task usually involves combinations of many different skills applied at many different times. Accuracy depends on using performance numbers that are clear and as close to the employee as possible.
There is another problem when we use employees as our base. It’s called restriction of range. Think of RR as determining golf-skill differences between the best and the worst members of the PGA. In other words, the difference between the worst and the best PGA golfer is considerably less than the difference between the worst and the best player at an open tryout. We have to be very careful when we have restriction of range. Otherwise, we risk finding differences that are untrue and misleading (not a good thing if we plan to use the results for hiring).
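A small simulation makes the effect concrete. The numbers below are made up (a trait score and a correlated performance score), but the mechanism is general: once you keep only the top slice of the pool, the trait–performance correlation shrinks sharply, even though the underlying relationship has not changed.

```python
import random

random.seed(42)

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Simulate a trait score and a related performance score for 5,000 applicants.
trait = [random.gauss(100, 15) for _ in range(5000)]
perf = [t + random.gauss(0, 15) for t in trait]  # true correlation is ~0.7

full_r = pearson(trait, perf)

# Now restrict range to the "hired" group: the top quarter on the trait.
cutoff = sorted(trait)[int(0.75 * len(trait))]
hired = [(t, p) for t, p in zip(trait, perf) if t >= cutoff]
restricted_r = pearson([t for t, _ in hired], [p for _, p in hired])

print(f"full-range r: {full_r:.2f}, restricted r: {restricted_r:.2f}")
```

This is why a study done only on current employees can understate, or even hide, a real relationship between a trait and performance.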
Once we have determined clear performance standards and identified critical job elements, our next task is to develop a magic formula that ties them all together. There is seldom a straight-line relationship between fit-skills-attitude and performance. Sometimes they have different weights, sometimes they combine together, and sometimes the formula changes depending on what we want to predict. This is where I usually drag out the trusty old artificial intelligence engine.
Artificial intelligence analysis basically requires a very fast, very powerful computer. It first gives every element a value of 1, tests a basic formula using performance as the goal, and calculates the error between its predicted value and the actual value. Then it re-combines elements and re-tests the equation a bazillion times. The AI engine stops when it finds the best fit (i.e., least error) between the prediction and the performance measure. Since AI is a force-fit process, accuracy depends on doing a considerable amount of homework, specifically, having clear relationships between fit-skill-attitude and performance, having plenty of subjects in the study, and knowing when to quit.
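The loop described above can be sketched as a toy search. This is a minimal stand-in, not the actual engine: the data points are invented, and a crude random search over linear weights substitutes for whatever re-combination method a real engine uses. The shape is the same, though: start with every element weighted 1, try many re-combinations, keep the one with the least error.

```python
import random

random.seed(1)

# Toy data: (fit, skills, attitude) scores and a performance number for six
# employees. Values are invented; a real study uses validated measures.
data = [((7, 8, 6), 74), ((5, 9, 4), 62), ((8, 6, 9), 80),
        ((4, 5, 5), 50), ((9, 9, 8), 90), ((6, 7, 7), 68)]

def error(weights):
    """Sum of squared errors for a weighted combination of the elements."""
    return sum((sum(w * x for w, x in zip(weights, xs)) - y) ** 2
               for xs, y in data)

# Start with every element weighted 1, then try many random re-combinations,
# keeping whichever weights give the least error (the "best fit").
best, best_err = (1.0, 1.0, 1.0), error((1.0, 1.0, 1.0))
for _ in range(100_000):
    trial = tuple(random.uniform(0, 5) for _ in range(3))
    e = error(trial)
    if e < best_err:
        best, best_err = trial, e

print("best weights:", [round(w, 2) for w in best],
      "error:", round(best_err, 1))
```

The "knowing when to quit" caveat matters here: with only six data points and unlimited re-combinations, a search like this will happily force-fit noise, which is exactly why plenty of subjects and a holdout check are part of the homework.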
The final product is an innocent-looking short test that accurately predicts things like turnover, new accounts, customer service ratings, and so forth. Naturally, when you have a large pool and the luxury of picking and choosing applicants, this kind of data can be very helpful. For example, if your goal is reducing turnover, picking among applicants with an 80% predicted retention score will yield better results than picking applicants with a 60% predicted retention score.
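The arithmetic behind that last claim is back-of-the-envelope but worth writing down. Assuming (for illustration only) that predicted retention scores translate directly into retention rates, and borrowing the $10,000-per-bad-decision figure from earlier:

```python
# Back-of-envelope arithmetic: hiring 100 people from a pool with an 80%
# predicted retention score vs. one with a 60% score. Assumes, purely for
# illustration, that the predicted score equals the actual retention rate.
hires = 100
cost_per_turnover = 10_000  # the "$10,000 decision" figure cited earlier

retained_high = hires * 0.80   # ~80 still on the job
retained_low = hires * 0.60    # ~60 still on the job
extra = retained_high - retained_low
savings = extra * cost_per_turnover

print(f"extra retained hires: {extra:.0f}, avoided cost: ${savings:,.0f}")
```

Even under these rough assumptions, the gap compounds quickly at high hiring volumes, which is why this kind of screening pays off most for organizations with large applicant flow.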
In conclusion, every method used to separate qualified from unqualified applicants is a test. And every kind of testing contains patches of quicksand. Garden-variety interviews might satisfy an inner need to get to know the applicant, but validated tests and RJP’s actually deliver results you can measure. Reducing turnover and increasing individual job performance using RJP’s and SAB’s may be the single most effective organizational intervention you can make.