As we gear up for the upcoming NFL draft this weekend, teams (and their fans) are studying, analyzing, prognosticating, and deciding which talent they should hire to help them achieve their goals.
In my preparations for the draft I recently ran across a really great article by Field Yates about the role of cognitive testing as one of the many pieces of predictive data used to help teams make player personnel decisions.
As a rabid fan of both testing and football and a self-proclaimed expert in each, I feel we can all learn a lot from the situation discussed in this article.
The article centers on the controversy around Morris Claiborne, a cornerback projected to be a high first-round pick who did very poorly on the Wonderlic exam (a cognitive ability test that all draft entrants are required to complete). The author suggests that Claiborne’s seemingly impossibly low score may actually be the result of a learning disability that will likely not hamper his on-field performance, and he uses this story as a lead-in to discuss the use of tests as surrogate predictors of on-field performance.
I really enjoyed the author’s take on the importance of testing as a predictor of performance.
Just like a near-perfect score doesn’t equate to guaranteed success, a far-from-perfect score does not signal impending failure.
The point is — and this is what has been lost in the recent Claiborne headlines — the Wonderlic exam will always be a part of the draft equation, at least until a better metric is derived to replace it.
The premium each organization places on a particular Wonderlic score will inevitably vary; consensus is a rarity in personnel evaluation.
But what will always remain true is that every available tool to measure a player’s ability — the Wonderlic, 40-yard dash, bench press, and most importantly his film — is a piece of the draft puzzle.
I could not agree more with the author’s view of testing as one piece of a bigger picture, one whose value depends on the situation and the goals of the people responsible for making decisions. In my own work with testing and assessment, I tend to recommend a model built on collecting a variety of data points. Each taps into something different that matters for success. Some can weed applicants out at key points in the hiring process; collectively, they can be “added up” at the end of the process to provide the data needed to make an informed final decision between candidates.
Here are a few more thoughts about the parallels between the article’s main points about predicting success in sports and my own insights around predicting success in the workplace.