It is not a stretch to say that the validation of pre-employment assessment tools is both one of the most important and one of the most overlooked aspects of any legitimate pre-employment assessment program.
Validation is a best practice that can provide both critical information about the ROI of an assessment and the documentation required to support its legal defensibility. Unfortunately, proper validation is not the norm when it comes to the use of assessments. While many companies use assessments that have been validated in the past or that satisfy some of the requirements for test validity, conducting the validation work required to fully satisfy best practices and gain an understanding of ROI is often not even on the radar screen.
When it comes to validation, my experience shows that the biggest stumbling block is a lack of understanding of just what validation is and why it is so important. While the concept of validation definitely has its complexities, it can be boiled down to a few simple concepts which are discussed below.
Webster’s online dictionary defines the word “valid” as:
1 a : well-grounded or justifiable : being at once relevant and meaningful <a valid theory> b : logically correct <a valid argument> <valid inference>
2 : appropriate to the end in view : effective <every craft has its own valid methods>
These definitions definitely hold true when it comes to employment testing. Ask an I/O psychologist and he or she will tell you that validation simply means the act of establishing two key things: 1) That anything used to make employment decisions is job related, and 2) That the assessment actually measures what it is supposed to measure (i.e., that the test is “accurate”).
There are a variety of ways to document the job-relatedness and accuracy of a test as a decision-making tool; however, a working understanding of validation should focus on two general types of validation: content validation and criterion-related validation.
Content validation: Quite simply, content validation involves documenting the personal characteristics (i.e., experience, education, knowledge, skills, abilities, values, etc.) required to perform the job.
At a minimum, claiming a selection measure is content-valid requires aligning test content with job requirements so that the job-relatedness of the test can be documented. This means that the job or jobs in question must be carefully evaluated and that input from subject matter experts (incumbents and supervisors) must be used to build a full understanding of what successful job performance requires.
The process used to establish the job-relatedness of test content is known as “job analysis.” Once information about job performance and related characteristics has been documented via job analysis, selection measures can be mapped to job requirements. For instance, if job analysis shows that a job requires fast and accurate typing, then a typing test used to hire applicants for that job can be supported as content-valid based on its relation to documented job performance requirements.
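To make the mapping concrete, the job-analysis step can be pictured as a simple lookup from documented job requirements to the characteristics each test measures. This is only an illustrative sketch, not a substitute for a real job analysis; the role, requirement names, and test names below are all invented:

```python
# Hypothetical sketch: checking whether a test's content maps back to
# requirements documented by job analysis. All names are invented.

# Requirements surfaced by job analysis for a data-entry role.
job_requirements = {"fast_accurate_typing", "attention_to_detail", "basic_pc_skills"}

# What each available test actually measures (illustrative only).
test_coverage = {
    "typing_test": {"fast_accurate_typing"},
    "proofreading_test": {"attention_to_detail"},
    "personality_inventory": {"teamwork", "extraversion"},
}

def content_valid(test_name):
    # A test has content-validity support for this job only if everything
    # it measures is a documented requirement of the job.
    return test_coverage[test_name] <= job_requirements

for name in test_coverage:
    print(name, "job-related:", content_valid(name))
```

Under these made-up mappings, the typing and proofreading tests align with documented requirements, while the personality inventory measures characteristics the job analysis never surfaced and so lacks content-validity support for this role.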
Content validation can satisfy EEOC requirements for claiming a test is valid (provided that proper procedures were followed). However, settling for only content validation is selling yourself way short. The real value proposition when it comes to validation lies in the evaluation of the ROI provided by a selection measure. This information can only be provided by criterion-related validation.
Criterion-related validation: Whenever possible, it is desirable to statistically evaluate the relationship between selection measures and valued business outcomes. This type of validation, known as “criterion-related validation,” involves a statistical study that provides hard evidence of the relationship between scores on pre-employment assessments and business outcomes related to job performance. The resulting statistical evidence provides a clear understanding of the ROI delivered by the testing process and thus helps document its value. Criterion-related validation also supports the legal defensibility of an assessment because it demonstrates the assessment’s accuracy as a decision-making tool.
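At its core, the statistic in a criterion-related validity study is usually a correlation (a “validity coefficient”) between assessment scores and a job-performance criterion. The sketch below shows the basic arithmetic; the scores and performance ratings are entirely made up for demonstration:

```python
# Hypothetical criterion-related validity study: correlate assessment
# scores with a later job-performance criterion. All data invented.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Assessment scores for ten hires and their later performance ratings.
test_scores  = [55, 62, 70, 48, 81, 90, 66, 73, 58, 85]
perf_ratings = [3.1, 3.4, 3.9, 2.8, 4.2, 4.6, 3.5, 4.0, 3.0, 4.4]

validity = pearson_r(test_scores, perf_ratings)
print(f"validity coefficient r = {validity:.2f}")
```

A coefficient near zero would mean the test tells you little about later performance; a sizable positive coefficient is the “hard evidence” of job-relatedness described above, and it is what lets you estimate the ROI of hiring with the test versus without it. A real study would also report sample size and statistical significance.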
While criterion-related validation may seem mysterious, it has much in common with two better-known methods used to find value within business processes: Six Sigma and business intelligence. Both require examining data to clarify the relationships between various process components. The resulting information can be used to streamline business processes and uncover meaningful relationships between various streams of data. Creating a feedback loop using criterion-related validation is really no different.
In an ideal world it is best to have both content and criterion-related validity evidence. Documenting content validity is a minimum requirement for any pre-employment selection measure; however, content validation alone cannot provide any evidence of the ROI associated with a test or selection measure. Adding statistical validation bolsters the legal defensibility of an assessment and provides insight into ROI. Unfortunately, most companies do not perform criterion-related validation, for a variety of reasons.
What does all this mean to staffing professionals?
As with anything else, it may take a bit of extra time and resources to do things right, but the extra effort will provide value and peace of mind.