
The Need for Speed: Validation Strategies of the Future

Jan 8, 2003

I firmly believe that 2003 will be a breakout year in terms of the number of businesses that adopt online screening and assessment tools as part of their web-based hiring initiatives. This makes perfect sense because, as I have said many times, the infrastructure of the Internet and the changes it has brought to the hiring process offer some amazing possibilities. In fact, the marriage of Internet technology and assessment tools has fundamentally altered the employee selection process, and it will continue to do so.

A big part of the evolution of new technology-driven employee selection practices involves continual challenges to the established rules for “best practices” in this area. The advent of new technology often forces changes in the status quo and creates the need to “push the envelope” in order to facilitate change. I like to view this type of change from an evolutionary perspective: new ideas and systems that can show value and are proven to be legally sound will be developed and continue to evolve, while those that cannot will perish.

The strategies used to “validate” the tests and assessments used for selecting employees offer an excellent example of this evolutionary process at work. Validation is an essential part of employee selection. Using an assessment without solid validation evidence is like skating on thin ice, from both a legal and a financial standpoint. The use of technology to create new models for employee selection has required some changes to traditional validation models. While some of these models will provide the DNA on which the systems of the future are based, many others may represent evolutionary dead ends.

What Is Validation?

The terms “validity” and “validation” can seem confusing to those who have not spent a good deal of time studying employee selection techniques. Don’t worry, though; there is no need to be intimidated, because these terms really aren’t that complicated. Simply put, “validity” means nothing more than the assurance that a test or assessment is measuring what it is supposed to be measuring. Another way of looking at validity is to view it as the “accuracy” of a test. “Validation” is simply the process (or processes) used to establish proof that a test is accurately measuring what it is supposed to measure. When it comes to employee selection, validation is critical for two major reasons:

  1. It is an important part of establishing the legal defensibility of a test or assessment.
  2. The ROI provided by any test or assessment is limited by its validity: the more appropriate and accurate the test, the better the chance of making good hiring decisions, and thus the bigger the impact on your bottom line (see the sketch after this list).
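
To make the second point a bit more concrete, industrial psychologists often estimate the dollar value of a selection procedure with a utility formula in which validity enters directly; the Brogden-Cronbach-Gleser model is the classic example. The sketch below is a rough illustration of that calculation, and every number in it is invented.

```python
# Rough Brogden-Cronbach-Gleser utility estimate (all figures are invented).
def utility_gain(n_hired, tenure_years, validity, sd_performance_dollars,
                 mean_std_score_of_hires, n_applicants, cost_per_applicant):
    """Estimated dollar gain from using a selection procedure with a given validity."""
    benefit = (n_hired * tenure_years * validity *
               sd_performance_dollars * mean_std_score_of_hires)
    cost = n_applicants * cost_per_applicant
    return benefit - cost

# Example: 50 hires staying 2 years, a validity of .30, a $10,000 standard
# deviation of job performance in dollars, hires averaging 0.8 SD above the
# applicant mean, and 500 applicants tested at $25 each.
print(utility_gain(50, 2, 0.30, 10_000, 0.8, 500, 25))  # 227500.0
```

Notice that the estimated benefit scales directly with the validity coefficient: cut the validity in half and the payoff side of the equation is cut in half as well.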

Major Types of Validation

While the basic definition of the term “validation” is relatively simple, its operationalization can get a bit more complicated. First and foremost, understanding validation requires recognizing that all validation methods are really just different ways of proving the same point: that an assessment is both appropriate and accurate. This means that validation should be viewed as a process in which multiple methods or strategies are used to provide convergent evidence supporting the appropriateness of an assessment measure or test. Unfortunately, situational constraints often make it difficult to use multiple validation methods and may instead limit validation efforts to one specific strategy. For this reason, most traditional validation approaches rely on one of the following three strategies (there are others, but they aren’t directly relevant to this article):

  • Content validation. The content validation process is all about documentation, that is, providing evidence to support the fact that the assessment you are using measures job-related factors. Ensuring content validation requires that the questions that make up an assessment be good measures of the construct that the assessment is designed to measure. Content validation also means using a job analysis to document that the constructs measured by a test or assessment are important for job success.
  • Criterion-related validation. This is a process in which a statistical link is established between candidates’ scores on an assessment measure and the job performance of those hired using that measure. This is the only type of validation that provides an actual statistic summarizing the strength of the relationship between an assessment and job performance (this number is known as a validity coefficient; a small worked example follows this list). The ability to provide such a statistic is very useful, because it documents the exact strength of the relationship between an assessment and job performance and provides a good estimate of the economic value of the assessment in terms of ROI. Because it is data driven, this type of validation provides the foundation for many of the new breed of Internet-based validation models.
  • Validity generalization (i.e., “transporting” validity). This is a relatively new strategy, but it is quickly becoming one of the most popular because it is the quickest and easiest of the traditional validation strategies. Validity generalization is based on a statistical process that looks at the overall trend across many combined studies in order to determine the “universal truth” about the relationship between two variables (the second sketch after this list shows the basic pooling idea). If this analysis reveals that a certain relationship is present across all situations (e.g., that a certain personality characteristic predicts performance in sales jobs no matter what the job or the industry), then its validity is said to be “generalizable.” This means that one is able to use this evidence to support the validity of such a test without first conducting a criterion-related validation study.
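
To make the idea of a validity coefficient concrete, here is a minimal sketch that correlates assessment scores with later job-performance ratings for a handful of hypothetical hires. The data are invented; the calculation itself is simply a Pearson correlation.

```python
import numpy as np

# Hypothetical data: assessment scores at hire and later performance ratings.
assessment_scores = np.array([72, 85, 60, 90, 78, 65, 88, 70])
performance_ratings = np.array([3.1, 4.2, 2.8, 4.5, 3.6, 3.0, 4.0, 3.3])

# The validity coefficient is the correlation between the two.
validity = np.corrcoef(assessment_scores, performance_ratings)[0, 1]
print(f"Validity coefficient: {validity:.2f}")
```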

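Validity generalization, in turn, rests on meta-analysis: validity coefficients from many prior studies are pooled, typically weighted by sample size, to see whether the relationship holds up across settings. The snippet below shows only that bare pooling step with invented study results; real meta-analyses also correct for statistical artifacts such as range restriction and criterion unreliability.

```python
# Hypothetical validity coefficients (r) and sample sizes (n) from prior studies
# of the same relationship, e.g., a personality scale predicting sales performance.
studies = [(0.25, 120), (0.31, 240), (0.18, 80), (0.29, 150), (0.35, 300)]

# Sample-size-weighted mean validity: the core of a bare-bones meta-analysis.
total_n = sum(n for _, n in studies)
mean_validity = sum(r * n for r, n in studies) / total_n
print(f"Pooled validity across {len(studies)} studies: {mean_validity:.2f}")
```
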
The acceptance of validity generalization arguments has led to support for the idea of “transporting” validity. This simply means that as long as one can show evidence of a test’s validity for predicting success in one specific situation, all one has to do is prove that a new situation is similar, and the test’s validity can be transported to that new situation.

The Role of the Internet in the Evolution of Validation Models

While the Internet has brought about many changes in the employee selection process, one of the most significant has been its impact on validation methods. The driving force behind these changes has been the need to develop ways to speed up the process of selecting the right assessments for a specific job. The three validation processes discussed in the previous section require organizations to invest a great deal of time and effort in validation. While this investment is often needed to provide rock-solid legal defensibility, it is incompatible with the “need for speed” that has been created by Internet-based systems. More innovative assessment companies have recognized that the key to future success lies in using technology to reconcile the competing priorities of proper validation and quick ramp-up time. These companies have responded by creating products and systems that use the power of the data made available by the Internet to develop new validation models. It is often difficult to tell which of the following models represent the dawn of a new era and which represent nothing more than ineffective ways of cutting corners:

  • Real-time criterion-related validation. A constant stream of data is used to continually update criterion-related validation statistics (a simplified sketch of such a running update follows this list). This supports real-time staffing, in which constant monitoring and tweaking of parameters is possible. This is very different from past systems, in which such studies were undertaken on a yearly basis at best. These systems are important because the more data we obtain about such relationships, the closer we come to developing true taxonomies of job performance.
  • Neural networks. These are a complex blend of I/O psychology and artificial intelligence, representing a kind of hyper-criterion-related validation in which data relationships are identified and artificially intelligent neural networks are trained to recognize them. The trained networks are then used to provide information on which applicants are best suited for a particular job (a toy illustration follows this list).
  • Item-level validities. The collection of large amounts of criterion-related validation data has allowed some companies to build systems in which each item (or small cluster of items) carries its own validation information. This allows vendors to rapidly build customized assessments made up of items whose performance is already known, and it plays a central role in their ability to provide flexible systems that can be ramped up quickly (see the item-bank sketch after this list).
  • “Off the shelf” test menus. Pre-made tests are nothing new, but some companies have created online systems through which clients can choose the assessments they want to use from a drop-down menu. These systems are often built on the principles of content validity and transporting validity. But it is important to understand that the ultimate responsibility for using a test correctly falls on your shoulders, not those of the test vendor. This means that the consumer should not use such a system unless they have taken the time to document all critical job requirements.
  • The “one size fits all” model. This is a relatively new idea that is being used by many test vendors. It involves having high performers take an assessment and then using their scores as a profile for success against which all applicants are screened (a bare-bones version of this comparison appears after this list). While this method can be effective and legally defensible, it is important to understand that measuring high performers alone does not always tell the whole story. It is often important to look at low performers as well in order to understand the negative side of performance. This method also places little emphasis on job analysis and is therefore often hard to integrate into competency models.
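
One way to picture real-time criterion-related validation is as a validity coefficient maintained from running totals and updated every time a new performance data point arrives, rather than recomputed once a year. The class below is a simplified sketch of that idea with invented numbers, not a description of any vendor’s actual system.

```python
import math

class RunningValidity:
    """Incrementally update the correlation between assessment score (x)
    and job performance (y) as each new data point streams in."""

    def __init__(self):
        self.n = self.sx = self.sy = self.sxx = self.syy = self.sxy = 0.0

    def add(self, x, y):
        self.n += 1
        self.sx += x
        self.sy += y
        self.sxx += x * x
        self.syy += y * y
        self.sxy += x * y

    def validity(self):
        if self.n < 2:
            return None  # not enough data yet
        num = self.n * self.sxy - self.sx * self.sy
        den = math.sqrt((self.n * self.sxx - self.sx ** 2) *
                        (self.n * self.syy - self.sy ** 2))
        return num / den if den else None

tracker = RunningValidity()
for score, rating in [(72, 3.1), (85, 4.2), (60, 2.8), (90, 4.5)]:
    tracker.add(score, rating)
    print(tracker.validity())  # the estimate sharpens as data accumulate
```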
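
The neural-network approach can be illustrated, very roughly, by training a small network on the assessment scores of past hires labeled as successful or unsuccessful and then scoring new applicants. The sketch below uses an off-the-shelf classifier purely for brevity; the data are invented, and the setup bears no resemblance to any particular commercial system.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical training data: each row holds a past hire's scores on three
# assessment scales; the label marks whether that hire succeeded on the job.
X_train = np.array([[80, 60, 75], [55, 40, 50], [90, 70, 85],
                    [45, 55, 40], [70, 65, 72], [50, 35, 45]])
y_train = np.array([1, 0, 1, 0, 1, 0])

# Train a small neural network to recognize the pattern of a successful hire.
model = MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

# Score two new applicants: estimated probability of being in the "successful" class.
applicants = np.array([[85, 68, 80], [48, 42, 44]])
print(model.predict_proba(applicants)[:, 1])
```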

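Item-level validity data make it possible to assemble a custom assessment on the fly by pulling the items whose relationship to performance is best established for the constructs a job analysis calls for. The snippet below sketches that assembly step with an invented item bank; real systems would also balance content coverage, fairness, and item exposure.

```python
# Hypothetical item bank: each item carries the construct it taps and a validity
# estimate accumulated from prior criterion-related data.
item_bank = [
    {"id": "A1", "construct": "conscientiousness", "validity": 0.28},
    {"id": "A2", "construct": "conscientiousness", "validity": 0.21},
    {"id": "B1", "construct": "sales_orientation", "validity": 0.33},
    {"id": "B2", "construct": "sales_orientation", "validity": 0.19},
    {"id": "C1", "construct": "teamwork", "validity": 0.25},
]

def build_assessment(constructs, items_per_construct=1):
    """Pick the highest-validity items for each construct the job analysis flags."""
    chosen = []
    for construct in constructs:
        candidates = [i for i in item_bank if i["construct"] == construct]
        candidates.sort(key=lambda i: i["validity"], reverse=True)
        chosen.extend(candidates[:items_per_construct])
    return chosen

# A sales job whose analysis flags conscientiousness and sales orientation.
print(build_assessment(["conscientiousness", "sales_orientation"]))
```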
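
Finally, the “one size fits all” profile approach boils down to comparing each applicant’s scores with the average profile of current high performers. The minimal sketch below, again with invented numbers, shows that comparison; note that nothing in it reflects low performers or a job analysis, which is exactly the limitation described above.

```python
import numpy as np

# Hypothetical assessment profiles of current high performers (one row per person).
high_performers = np.array([[82, 75, 68], [78, 80, 72], [85, 77, 70]])
target_profile = high_performers.mean(axis=0)  # the "success profile"

def profile_fit(applicant_scores):
    """Smaller distance from the success profile means a closer fit."""
    return float(np.linalg.norm(np.asarray(applicant_scores) - target_profile))

# Rank two applicants by how closely they match the profile.
applicants = {"applicant_1": [80, 76, 69], "applicant_2": [60, 55, 50]}
for name, scores in sorted(applicants.items(), key=lambda kv: profile_fit(kv[1])):
    print(name, round(profile_fit(scores), 1))
```
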
Some Tips for Choosing a Validation Method

While the intricacies of validation may be difficult to understand, following a few basic principles will help to ensure that you are making sound choices when it comes to validation.

  1. Always try to use multiple validation methods. It is fine to rely on one strategy to get you started, but the more additional evidence you can collect to support the validity of a test, the better. For instance, you might establish the content-related validity of a test by ensuring that it was properly constructed and by mapping what it measures onto the results of a job analysis. You can then use transportability to choose an assessment battery, deliver it, and follow up with a criterion-related study to get a precise estimate of how well the test is performing. The “backup” provided by this philosophy will help you retain peace of mind when trying out some of the new validation models I have been discussing.
  2. Always do a job analysis first. It is important for you to know and understand your job inside and out before you go shopping for a test. This will provide a foundation for legal defensibility no matter what validation strategy is used.
  3. Have an outside expert verify that you are making the right choices. Validation can be a complex issue and vendors often have blinders on when it comes to the best validation model for a specific situation. Making sure you make the right choices often requires an objective opinion.

Want to learn more about traditional validation models? Here are two good sources of information:
