Use Anti-DISC to Become a Better Person and Make Better Assessments

by
Lou Adler
Dec 6, 2012, 12:09 am ET

Warning: do not use this slick all-purpose assessment to screen people out. However, it’s useful for becoming a better interviewer and for screening people in.

DISC and all its variants (Calipers, Myers-Briggs, Predictive Index, etc.) should never be used to pre-screen people. At best, and if they’re not faked, these “tests” only predict preferences, certainly not competencies. At worst, they prevent diversity by eliminating the chance to see and hire people who can achieve great results but use a style different from the expected one. (Note: Use these types of style indicators only after you’ve narrowed the selection to 3-4 people who you’ve determined can meet the performance objectives required for success.)

Despite this predictive limitation (one that those who use or sell these tests will no doubt dispute), the DISC style preferences are quite helpful for understanding how people communicate, make business and hiring decisions, and interact on the job.

To determine your dominant DISC style, look at the descriptions of the four styles in the graphic and select the one that best describes you. Then, to validate this, answer these two questions:

Question 1: Are you impatient or not? If you’re very impatient and would rather make decisions with no information, put yourself on the far right of the diagram. If you’re still trying to figure out your answer to this question, put yourself on the far left. Everyone else can put themselves somewhere in between.

Question 2: Are you more into results or relationships? People who are more focused on the success of the project and less sensitive to the needs of the people involved fall somewhere in the top half of the grid. Those who are more concerned with the people involved fall in the bottom half.

Your answers to these simple questions categorize you into one of these four dominant styles:

Directors (impatient and results): these are people who are driven and results-oriented. They are dominant, frank, make quick hiring decisions based more on intuition than facts, and at times can be perceived as heavy-handed or overbearing.

Influencers (impatient and people): these people are typically extroverted, friendly, and persuasive, possessing the classic salesperson persona. They quickly decide whom to hire based on first impressions.

Supporters (patient and people): these people are the consensus builders — HR people, diplomats, and counselors. When hiring they look for people who “fit” with the organization and are team players.

Controllers (patient and results): these people are the classic analyzers and techies. They tend to focus on experience and technical expertise when making hiring decisions.
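
To make the mapping concrete, here is a minimal sketch of the two-question sort described above, treating each answer as a simple yes/no rather than the sliding scale the questions allow. The function and its names are purely illustrative and not part of any DISC instrument.

    def disc_style(impatient: bool, results_focused: bool) -> str:
        """Map the two self-assessment questions to a dominant DISC quadrant.

        impatient       -- Question 1: quick, low-information decision maker (right side of the grid)
        results_focused -- Question 2: more focused on results than on relationships (top half of the grid)
        """
        if impatient and results_focused:
            return "Director"      # impatient + results
        if impatient:
            return "Influencer"    # impatient + people
        if results_focused:
            return "Controller"    # patient + results
        return "Supporter"         # patient + people


    # Example: a patient, results-oriented interviewer lands in the Controller quadrant.
    print(disc_style(impatient=False, results_focused=True))  # -> Controller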

Generally speaking, people are more comfortable with those who are similar to them. This includes their own dominant DISC style and the two adjacent styles. They tend to have the most conflict with their anti-DISC style: their diagonal opposites. However, by forcing themselves to adopt this opposite style, they can improve interviewing accuracy as well as better understand some of the causes of their interpersonal disagreements. Here’s how:

  • Directors need to become more like Supporters, slowing down long enough to hear everyone’s viewpoint (especially those who disagree with them) and using evidence rather than intuition and gut feelings before deciding.
  • Influencers need to become more like Controllers, getting evidence of the candidate’s actual performance and ability rather than overvaluing first impressions and personality.
  • Supporters need to become more like Directors, judging the person more on the results achieved and not just whether the person fits the culture and is a team player.
  • Controllers need to become more like Influencers, determining if the person can work with a variety of different people, not just assessing their technical competency.

DISC has its good and bad points. Since you can figure out your DISC style in a few minutes, and even the not-so-clever can fake it, caution is urged in how it is used. It should never be used for screening, for the reasons above, but it has value from a communications and self-development standpoint.

From a hiring standpoint, DISC can be used to make better assessment decisions on two fronts. For one, the interviewer can become more open-minded and objective by collecting information using the best techniques of each style. For another, the interviewer can observe during the interview how candidates modify their styles depending on the circumstances.

Some people are more flexible and others more rigid. When used as part of fact-finding this way, a DISC style assessment can help the interviewer better understand a candidate’s flexibility, cultural fit, and ability to work with and manage others. For something so simple, this has great value as long as it’s used properly.

  1. Margot Nash

    Well put, Lou. There are only a few personality characteristics that have been validated as predictors of job performance across the board, namely conscientiousness and a few other, more specific factors. The best predictor of job performance is a structured interview that relates to the competencies necessary for your specific job. However, I completely agree with this article: personality testing is often a fantastic way to learn how you can best motivate and manage your new (and existing) employees.

  2. Richard Melrose

    Self-report personality tests (e.g., DISC, Predictive Index, etc.) lack predictive validity for job performance and job learning, in addition to being easy to fake. ( http://bit.ly/Hy9uJG )

    Tests of general mental ability (GMA, or g) show a predictive validity of just over .5 for job performance and job learning; when combined with integrity testing, the predictive validity rises to roughly .65. Job Analysis should establish the desired range of GMA to match the information-processing demands of the job.

    Structured interviews, work samples, job knowledge and job simulations add further predictive validity.

    To hire well and simultaneously maintain legal/regulatory compliance, employers should take the time to learn about predictive validity and the performance-related payoffs available from improving employee selection procedures. The seminal meta-analytical work of Schmidt and Hunter provides an excellent starting point ( http://bit.ly/QK64aW ).

    Very few companies have any better investments to make than improving their employee selection procedures; and the formula for the corresponding ROI calculation has been well established.
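
    (The comment above does not spell out which ROI formula it means; the one most commonly cited in the selection-utility literature is the Brogden-Cronbach-Gleser model. The sketch below is illustrative only, and every input number is an assumption rather than a figure from this discussion.)

        def selection_utility(n_hired, tenure_years, validity, sd_y,
                              mean_z_hired, n_applicants, cost_per_applicant):
            """Brogden-Cronbach-Gleser utility estimate for a selection procedure.

            validity     -- criterion validity of the procedure (e.g., roughly .5 for GMA tests)
            sd_y         -- dollar value of one standard deviation of job performance
            mean_z_hired -- average standardized predictor score of those hired;
                            higher when the selection ratio lets you be choosier
            """
            gain = n_hired * tenure_years * validity * sd_y * mean_z_hired
            cost = n_applicants * cost_per_applicant
            return gain - cost

        # Purely illustrative inputs (assumptions, not data from the comment):
        print(selection_utility(n_hired=20, tenure_years=3, validity=0.51, sd_y=15000,
                                mean_z_hired=1.0, n_applicants=200, cost_per_applicant=100))
        # -> 439000.0

    Under this model the payoff scales directly with the validity coefficient, which is one reason moving from a weak predictor to a stronger one (or adding a work sample) changes the economics so much.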

  3. Martin Snyder

    Lou, I think the questions have to be asked in a hiring context only, because people exhibit a wide range of all the identified speeds and tendencies in various aspects of life. The screens can pick out extreme personalities across domains, but that’s a whole other discussion.

  4. Keith Halperin

    Thanks, Lou and Margot. However…..

    Interviews Are the 3rd (Really 9th) Best Way to Select People
    http://thestaffingadvisor.wordpress.com/2009/10/28/interviews-are-the-5th-or-6th-best-way-to-select-people/ (This is an advertorial, but it links to an article at the bottom, which in turn links to A DEFINITIVE STUDY:
    Schmidt, F.L. & Hunter, J.E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124, 262–274.)

    IF YOU WANT TO SKIP THE DETAILS BELOW, HERE’S THE SUMMARY:

    “Schmidt and Hunter point out that three combinations of methods that were the most powerful predictors of job performance were GENERAL MENTAL ABILITY PLUS A WORK SAMPLE TEST (in other words, hiring someone smart and seeing if they could do the work), GMA PLUS AN INTEGRITY TEST, and GMA PLUS A STRUCTURED INTERVIEW (but note that unstructured interviews, the way they are usually done, are weaker)”.
    ………………….

    Here’s the advertorial:
    Skilled researchers pored through 85 years of scientific literature to identify which employee selection methods were the best predictors of job performance. 85 years of research, distilled down into one set of findings.

    So of the 19 methods studied, which ones were the best?

    Structured interviews came in 3rd.
    The far more common unstructured interviews came in a dismal 9th – see: “So You Think You Can Interview?”
    Reference checks came in 13th.
    Years of job experience came in 14th.
    Years of education came in 16th.

    So … correct me if I’m wrong here, but that list covers just about all the methods most employers use when making a hiring decision.

    OK, so this research goes a long way toward explaining why there are so many hiring mistakes, but I bet it leaves you wondering just what those researchers found to be the best predictors of job performance…

    The best predictors of job performance were being smart (General Mental Ability – such as IQ) and doing well on work sample tests (see: “Talking About Work vs. Doing Work In the Interview”). Actually, employers who used a combination of two good methods improved their hiring accuracy even further.

    So, in 85 years of research, one finding is crystal clear:

    Most traditional methods of selecting employees are terrible at predicting job performance.

    But the fun really begins when you evaluate the entire recruiting and hiring cycle in light of these findings:

    You reduce your chances of making a good hiring decision when you emphasize (the nearly irrelevant) years of experience in your job description, and employment advertising. That (arbitrarily) limits who you will even consider in your pool of candidates.
    Then, when you dip into that already limited pool of candidates to select people for an interview, you further reduce your chances of making a good hiring decision when you rely on the resumes alone in selecting who to interview. Just what, exactly, can you learn from a resume beyond education and years of work experience? Less than you think, yet surveys show that years of experience is one of the most common factors executives use in evaluating candidates.
    So, before you have even had your first interview, before you have spoken one word to your potential future employee – your entire recruiting and hiring sequence conspired against you by using two of the least reliable indicators of actual job performance to select who you will speak with. And then of course, most managers compound the error by “winging it” with an unstructured interview. Hey, if that’s the combination of hiring methods you are using, maybe you should save the trouble and just rely on handwriting analysis instead (it was ranked 18th).

    ===============================================================

    Here’s the article:

    Selecting Talent: The Upshot from 85 Years of Research
    http://bobsutton.typepad.com/my_weblog/2009/10/selecting-talent-the-upshot-from-85-years-of-research.html

    I recently wrote about how the “talent wars” are likely to be returning soon in the U.S. (and indeed, there are signs they have already returned in places like China and Singapore), and how companies that have treated people well during the downturn will have an advantage in keeping the best people, while those that have not had damn well better change their ways or they will face the prospect of their best people running for the exits, combined with an inability to attract new ones. A related question has to do with the problem of determining who the best people might be: what does the best evidence say about the best way to pick new people?

    It is always dangerous to say there is one definitive paper or study on any subject, but in this case there is a candidate, a paper I have blogged about before when taking on graphology (handwriting analysis), that just might qualify. It was published by Frank Schmidt and the late John Hunter in Psychological Bulletin in 1998. These two very skilled researchers analyzed the pattern of relationships observed in peer-reviewed journals during the prior 85 years to identify which employee selection methods were the best and worst predictors of job performance. They used a method called “meta-analysis” to do this, which they helped to develop and spread. The advantage of this method, in the hands of skilled researchers like Schmidt and Hunter, is that it reveals the overall patterns in the weight of evidence, rather than the particular quirks of any single study.

    The upshot of this research is that work sample tests (e.g., seeing if people can actually do key elements of a job, such as whether a secretary can type or a programmer can write code), general mental ability (IQ and related tests), and structured interviews had the highest validity of all methods examined (Arun, thanks for the corrections). As Arun also suggests, Schmidt and Hunter point out that three combinations of methods that were the most powerful predictors of job performance were GMA plus a work sample test (in other words, hiring someone smart and seeing if they could do the work), GMA plus an integrity test, and GMA plus a structured interview (but note that unstructured interviews, the way they are usually done, are weaker).

    Note that this information about combinations is probably more important than the pure rank ordering, as it shows what blend of methods works best, but here is also the list of the 19 predictors examined, rank ordered by validity coefficient, an indicator of how strongly each individual method is linked to performance:

    1. Work sample tests (.54)

    2. GMA (“general mental ability”) tests (.51)

    3. Employment interviews — structured (.51)

    4. Peer ratings (.49)

    5. Job knowledge tests (.48) Tests to assess how much employees know about specific aspects of the job.

    6. T & E behavioral consistency method (.45) “Based on the principle that past behavior is the best predictor of future behavior. In practice, the method involves describing previous accomplishments gained through work, training, or other experience (e.g., school, community service, hobbies) and matching those accomplishments to the competencies required by the job.” In other words, a method where past achievements thought to be important to behavior on the job are weighted and scored.

    7. Job tryout procedure (.44) Where employees go through a trial period of doing the entire job.

    8. Integrity tests (.41) Designed to assess honesty … I don’t like them but they do appear to work.

    9. Employment interviews — unstructured (.38)

    10. Assessment centers (.37)

    11. Biographical data measures (.35)

    12. Conscientiousness tests (.31) Essentially do people follow through on their promises, do what they say, and work doggedly and reliably to finish their work.

    13. Reference checks (.26)

    14. Years of job experience (.18)

    15. T & E point method (.11)

    16. Years of education (.10)

    17. Interests (.10)

    18. Graphology (.02), i.e., handwriting analysis.

    19. Age (-.01)

    Certainly, this rank-ordering does not apply in every setting. It is also important to recall that there is a lot of controversy about IQ, with many researchers now arguing that it is more malleable than previously thought. But I find it interesting to see what doesn’t work very well — years of education and age in particular. And note that unstructured interviews, although of some value, are not an especially powerful method, despite their widespread use. Interviews are strange in that people have excessive confidence in them, especially in their own abilities to pick winners and losers — when in fact the real explanation is that most of us have poor and extremely self-serving memories.

    Many of these methods are described in more detail here by the Society for Industrial and Organizational Psychology. Also note that I am not proposing that any boss or company just mindlessly apply this rank ordering, but I think it is useful to see the research.

    The reference for this article is:

    Schmidt, F.L. & Hunter, J.E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124, 262–274.

    ……………………………….

    Cheers,

    Keith

  5. Carol Schultz

    Lou: You are right on point and preaching to the choir here. I think the net net is that companies are always looking for a faster, easier way to do things, and product vendors want them to think their “tool/solution” is a panacea. As long as companies perpetually look for a simple tool to fix things rather than taking a comprehensive approach, nothing will change.

    I meet one on one with my clients and their teams. This takes more time, but is highly effective. The only time I use profiling tests is as you suggest above: once the candidate pile is narrowed down to a few finalists. I feel they can help provide additional insight into candidates and allow the interviewer to ask better and more qualifying questions.

  6. Richard Melrose

    @Keith It’s nice to have some company for a change – i.e. citing Schmidt and Hunter and urging recruiters/hiring managers to consider upgrading their employee selection procedures.

    A recent whitepaper from i4cp (Institute for Corporate Productivity) described the state of enterprise talent management as “dismally uniform and uniformly dismal”; and it all starts with sourcing and selection.

    In hourly, entry-level jobs, e.g. call center agents and retail sales associates, an employee selection procedure upgrade can readily double the pretax profit per new employee. In sales positions, it can add millions per new salesperson to the bottom line, over the employee lifecycle.

    The higher the average employee utility and the greater the variation in performance for the particular job, the more an employee selection process upgrade is worth. When employers “do the math”, they are usually astounded by how little the upgrade costs compared to the annually recurring employee productivity benefits. A greater-than-10x short-term ROI is easily achieved.

    Savvy employers take some of their winnings to further improve their sourcing and selection practices and/or provide additional compensation (including signing bonuses) to the top talent that they can now pick, systematically.

  7. Keith Halperin

    @Richard: Thank you. Despite my clear biases (some of those shortly), I try to base my recruiting opinions on proven (or at least not disproven or completely made-up) methods.

    1) Has anything modified or challenged the validity of Schmidt & Hunter’s (S & H’s) work? Are there other peer-reviewed studies that come to different conclusions?

    2) If “no” to the above, who has worked to implement recruiting best practices based on S & H’s work?

    3) If S & H is considered valid, why is it so little known/discussed in companies? (It’s certainly not a big secret.) Shouldn’t sr. recruiting heads of major companies be actively pursuing and promoting S & H-validated processes? Why don’t we see/hear about major conferences and meetings designed specifically to work out the details of how to do this? Why aren’t recruiting “thought leaders” using S & H as the basis for their work? Why don’t we see S & H (if either/both are still active) writing HERE?

    My conclusion (my biases now): there are too many founders, CXOs, and sr. executives who would be made to look very arrogant, stupid, or foolish by insisting on their current dysfunctional practices, and too many recruiting heads who may or may not know better, but would certainly lose out by pointing out that the prejudices and misconceptions of the above-mentioned folks have cost their companies millions or billions of dollars… Besides, there’s far too much money invested in having companies do things badly and in coming up with/selling clever-sounding solutions. As I often say: “I fear that the hype will continue as long as there are slick hucksters with high-level connections ready to sell the latest recruiting snake oil or ‘magic bullet’ to desperate and not-yet-insolvent recruiters and their superiors who fail to recognize that in most cases they are futilely ‘rearranging the deck chairs on the Titanic’ of their companies’ ill-conceived, overblown, grossly dysfunctional hiring practices.”

    Happy Friday, Everybody!

    Keith “Ever Wordy” Halperin

    keithsrj@sbcglobal.net

  8. Richard Melrose

    @Keith

    (1) No, to the contrary, the S&H meta-analytical work provided bedrock that the Industrial/Organizational Psychology community has built upon.

    Schmidt and Hunter’s findings are referenced by the SHRM Foundation (Elaine D. Pulakos, Ph.D.) in its “Selection Assessment Methods – A guide to implementing formal assessments to build a high-quality workforce” ( http://bit.ly/wzkFQf ). The U.S. Office of Personnel Management (OPM) similarly references the S&H findings in its Assessment Decision Guide ( http://bit.ly/yQhh4s ). Both SHRM and OPM acknowledge both the upside opportunity (+$) and the downside cost (-$ + risk) of the “do-nothing” alternative.

    (2) Few companies have worked to implement best-practice selection processes that take full advantage of Schmidt and Hunter’s work. For that matter, very few companies comply (or even try to comply) with the Uniform Guidelines on Employee Selection Procedures, which define best practices and which have been the U.S. regulatory reference since 1978. See page 30 of the SHRM document.

    (3) Many good questions in your #3, Keith. There are lots of “elephants” (large issues that everyone is acutely aware of, but nobody wants to talk about) in the boardrooms and C-suites of major corporations. By and large, business enterprises stink at strategy, innovation, and continuous improvement, as well as talent management. They understand the “what for” of these meta-disciplines but lack the “how to”, even though, as with Schmidt and Hunter’s roadmap, all the requisite “how to” knowledge has been proven up and well codified. As Peter Drucker wrote: “What you have to do and the way you have to do it is incredibly simple. Whether you are willing to do it, that’s another matter.”

    I have personally gotten reactions like this from smart, knowledgeable people: “Your [best practice employee selection process] proposal is very interesting, but considerably more comprehensive than we envisioned.” Meanwhile, the do-nothing alternative is costing the company upwards of $10K PER HOUR, 24/7/365, in just ONE high-volume customer-facing job family, with more than 10,000 positions. This firm uses a self-report personality test, which has little or no predictive validity for job performance or training uptake. They conduct unstructured interviews. They let the ATS drive workflow rather than concerning themselves with the widely varying quality of applicants in the pipeline. When demand exceeds supply (because of high turnover), they lower their standards to put butts in the seats, fueling more future churn.

    John E. “Jack” Hunter passed away in 2002. Frank L. Schmidt is a Senior Scientist at The Gallup Organization ( http://bit.ly/TK1zjL ) and a professor at the business school of the University of Iowa.

    In ere.net articles, Dr. Wendell Williams has put the selection process waste at 20-50% of payroll ( http://bit.ly/VsVd4v ), depending on job complexity (a Schmidt, Hunter and Judiesch contribution that connects the higher information-processing demands of a job to greater variation in job performance).

    So, upgrading a salesperson selection process delivers a bigger per-hire payoff than upgrading a filing clerk selection process.

    Most business leaders seem content with the elephants in their boardroom. Most apparently hope that like them, all of their mediocre competitors will take the do-nothing approach.

    Interestingly, selection process upgrades can be “experienced” at rather small scale, at very low cost, and with no long-term commitment. Interested recruiters, hiring managers, and/or business leaders could “experiment” in ways that allow them to build financial models demonstrating the full-scale value, based upon company-specific inputs and opportunities. Every employer should expect >10x short-term ROI.

  9. Keith Halperin

    Thanks, Richard, for all this information. I really appreciate it. ISTM that if there is strong ROI in implementing S & H- and UGESP-based hiring practices and yet they’re largely ignored, then there must be a serious perceived downside for the people running the current (largely) dysfunctional practices. Does it just come down to the GAFI (Greed, Arrogance, Fear, Ignorance/Incompetence) Principles again, or is there something more complex operating here?

    Cheers,
    Keith keithsrj@sbcglobal.net

  10. Richard Melrose

    Keith, I guess I’d leave out the Greed, but I can certainly attest to Arrogance, Fear, Ignorance, and Incompetence in the C-suite, among people managers, and especially among those in HR who are supposed to be helping their organizations select, deploy, manage, engage, and retain top talent.

    It really surprises me, too. I should think that enlightened self-interest would have all of the aforementioned leaders, professionals, and managers wanting to hire well rather than manage hard.

    Moreover, as I mentioned above, employers can learn (validate) experientially, at small scale and low cost, on the job(s) of their choice. And even that modest learning initiative will earn >10x short-term ROI.

  11. Lou Adler

    Keith – recognize that H&S only evaluated the available statistical info about interviewing that had been published, most of it pretty antiquated. They didn’t consider all the other interviewing techniques that have been shown to be effective. In addition, they only looked at the effectiveness of combinations of two different techniques that they had data for, not all of the possible combinations.

    So using their info as the holy grail is an adequate starting point, but a terrible finishing point. For example, they did not take into account the additive effects of the behavioral consistency approach in combination with a GMA test, a performance-based interview, a performance-based job analysis, a structured situational fit competency model, an evidence-based debriefing process, and an organized panel interview, to name just a few of the things missing in the process. At just one search firm we’ve worked with, using this combination on 1,500 placements and tracking pre- and post-hire performance over 20 years, interviewing predictability was closer to 95% as measured by fallouts after one year, and 85% if measured against managers’ expectations of performance vs. predicted.

    Also ignored by H&S was the idea that the interview and assessment process used for promoting someone known from within into a new role is highly predictive. The process above captures this same approach when used for hiring an unknown person. The key is to focus on real job needs, the performance track record of the person doing comparable work in comparable cultures and achieving comparable results, and the person’s process of success and possession of the achiever pattern (which indicates that the person has been in the top 25% of their peer group) in comparable jobs.

    The point of this blathering is that there are plenty of ways to skin a cat, when you’re in the business of skinning cats.

  12. Keith Halperin

    Thanks, Lou. This is very interesting. ISTM that while there may be a variety of ways to hire well, it is clear that there are also a number of things that should NOT be used (they are the least effective/worst predictors), and these are largely the things that ARE used, particularly 9, 14, 16, and 19:

    9. Employment interviews — unstructured (.38)

    10. Assessment centers (.37)

    11. Biographical data measures (.35)

    12. Conscientiousness tests (.31) Essentially do people follow through on their promises, do what they say, and work doggedly and reliably to finish their work.

    13. Reference checks (.26)

    14. Years of job experience (.18)

    15. T & E point method (.11)

    16. Years of education (.10)

    17. Interests (.10)

    18. Graphology (.02), i.e., handwriting analysis.

    19. Age (-.01)

    …………………………………………

    It also seems that a combination of the most effective methods works best. How long would it take to give an applicant these multiple evaluations? I’m not sure, but I bet it wouldn’t take as long as the multiple rounds of unstructured interviews some employers of choice use, which are demoralizing to candidates, logistically difficult to schedule/coordinate, expensive in lost time, AND ineffective at making the best hiring choices.

    Cheers,

    Keith

  13. Lou Adler

    @keith – 100% agree with you on what not to use! (This is a first)

  14. Keith Halperin

    Thanks, Lou.

    Happy Hanukkah,
    Keith
