Q. So, you are an ENFZ (Extraverting, Intuiting, Feeling, Organizing) personality type. Just what does that have to do with job performance?

A. Absolutely nothing! Your personality type might be nice to know, but it has almost nothing to do with job skills. In fact, research shows the relationship between personality and skill is somewhere between 2% and 8%. Pretty poor numbers! Knowing the average profile of a group of managers still tells you nothing about their skills.

Q. I'm confused! How does personality affect performance?

A. Job performance consists of two major factors: task skills and contextual skills. Task skills are the "hard" abilities you bring to the job – things like intelligence, technical knowledge, and customer service skill. Contextual skills refer to how you use those abilities in a social context, i.e., whether you treat your coworkers like "pond sludge." When you combine these two factors you get 1) job skills and 2) job "schmoozing" – which yield, of course, four combinations. They are high skills – high schmoozing (this is good); low skills – low schmoozing (this is bad); high skills – low schmoozing (he's good, but unfortunately, everyone hates him); and low skills – high schmoozing (let's promote him!).

Q. So I can use personality tests, then?

A. Not so fast. There is a BIG difference between personality in the training class and personality on the job. Training profiles are developed to teach people how to communicate better. This is very helpful, but not always related to job performance. Do not use these tests for selection. General personality profiles were developed to explain the human condition. They are very broad and not very job related. Do not use these tests for selection, either. Clinical personality profiles were developed to diagnose clinical illness. If you are not a psychologist and your employees do not work with weapons or fissionable material, do not use these for selection at all.
As a matter of fact, if you use any of these tests for selection, not only will you be doing your organization a major disservice, you'll eventually trigger a legal challenge.

Q. OK, Smart Guy, what kind of personality test can I use?

A. Well, this question has two parts. First, you need to find a personality test that was designed for selection. It won't have any "foo-foo" items, it will be job related, and it will not use a most-least format. The second part of the answer is that it must be validated against job performance.

Q. What's "foo-foo"?

A. A foo-foo item is any item that is not directly related to the job. Asking someone about sexual preferences may be fine if you are a psychologist treating him or her for gender-identity issues, but it is none of your business in a hiring context. Asking irrelevant or personally invasive questions is unprofessional and can cost you big time! Mr. Soroka collected millions from Target Stores because they thought it was a good idea to give him the MMPI test. Foo-foo items have nothing to do with the job.

Q. What's job related?

A. Test developers who have done their homework know that the closer the item is to job practices, the more accurate it will be. For example, asking "I find it easy to generate new ideas" will not be as predictive as asking "I find it easy to generate new ideas on the job." You should carefully examine a personality test to see that each item is directly related to some aspect of job performance. If not, the scores may be misleading.

Q. What's wrong with most-least format?

A. This gets a little more complicated. A "most-least" scoring format is called "ipsative" scoring. It is a nice way to compare personal descriptors to get a personal score that represents personal preferences. (Did you notice my emphasis on the word "personal"?) This kind of test is fine for training because it provides some insight into comparative preferences. Unfortunately, it doesn't work for selection.
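To see the arithmetic behind that, here is a toy sketch in Python. The traits, names, and counts are invented purely for illustration; the point is the property they demonstrate:

```python
# Toy illustration of ipsative ("most-least") scoring. The traits and
# numbers below are hypothetical. In a forced-choice format, every
# "most" pick awarded to one trait is a pick denied to another, so all
# candidates end up with the same total score.

# Each candidate distributes 10 forced-choice picks across four traits.
candidates = {
    "Alice": {"analytical": 6, "social": 2, "driver": 1, "amiable": 1},
    "Bob":   {"analytical": 1, "social": 5, "driver": 3, "amiable": 1},
    "Carol": {"analytical": 3, "social": 3, "driver": 2, "amiable": 2},
}

for name, scores in candidates.items():
    print(name, sum(scores.values()))  # every candidate totals 10

# Because the totals are fixed, a high score on one scale forces a low
# score on another. The scales describe relative preference *within*
# one person; they cannot all rise with performance *across* people,
# which is what validation requires.
```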
An ipsative design does not give you the right kind of data needed for validation. For example, consider the difference between these questions: "Which describes you more, Analytical or Social?" versus "Rate your preference for analytical problem solving on the job: 1 = hate it, 2 = prefer not to, 3 = so-so, 4 = like it, 5 = love it." Strength of response is one of the requirements for validation; that is, the higher the score, the higher the performance.

Q. What kind of test do I need then?

A. Anyone who has read a couple hundred research studies knows there are ten factors consistently associated with job fit and job performance. When converted from "lingua academica" into terms that normal people understand, they include:
- Problem Solving – Preference for solving complicated problems
- Idea Generation/Innovation – Preference for creativity
- Rule Following – Preference for rules and regulations
- Inflexibility – Preference for stability
- Self Centeredness – Tendency to be self-absorbed
- Teamwork – Preference for working with others
- Expressiveness – Tendency to be socially outgoing
- Impulsiveness – Tendency to be impulsive
- Perfectionism – Tendency toward perfectionism
- Attitude Toward Work – Concerns about quality

Oh, yes. And if you are using a test for selection, you just might want to include a "lie" scale that indicates faking.

Q. That's easy!

A. We're not finished yet… these are only test factors. People are more than a static set of factors; they use different factors depending on what the job requires from one moment to the next – and you have to offer proof that test scores relate to performance. That means you have to do a little homework. First, make a list of important job behaviors that you want to measure. Next, ask current jobholders to complete a test that measures the ten personality factors. Then hold a meeting with managers to rate current jobholders' performance on each job behavior (be careful, managers "fib"). Finally, compare jobholders' answers with managers' ratings to discover which patterns are associated with high, medium, and low performance.

Q. What's with the pattern thing?

A. Personality is very sensitive to job and activity. Consider this. A validation study was conducted to predict the item "Work Closely With Others." In the sales position, Creativity, Self Centeredness, Teamwork, and Attitude Toward Work scores predicted this behavior. In the manager's position, totally different scores predicted this item, i.e., Problem Solving, Expressiveness, Flexibility, and Impulsiveness. Furthermore, there was a mix of positive and negative weightings. You just never know which factors are predictive until you study the data.

Q. But why not just measure the high producers?

A. Basically, this is a way for unskilled test vendors to "sneak" around the validation problem. Unfortunately, it's at your expense. Low scores on a selection test should indicate low performance, just as high scores should indicate high performance. Averaging the scores of high producers is like putting fruit in a blender – individual differences mix together until you cannot tell a peach from a nectarine.
If a test vendor suggests studying only your high producers, roll up a newspaper and rap him across the bridge of the nose, saying, "Bad vendor! Bad!"

Q. Is it legal?

A. A better question would be, "Compared with what you are doing now? Get real!" This kind of system conforms to EEOC guidelines for establishing job requirements, business necessity, criterion validation, face validity, scientific processes, documentation, equality, fairness, and administrator training. Does yours?

Q. Can the test be faked?

A. I suppose anything could be faked, but in order to "fake" a test like this, the candidate would have to know which item was associated with each factor, the weight of the factor, the factor pattern for the position, and the factor pattern for the organization. Very unlikely!

Q. Does it predict knowledge, skills, and abilities?

A. No, it predicts a person's willingness to use his or her knowledge, skills, and abilities – and, to a large degree, performance appraisal ratings.

Q. Will one study work for all jobs?

A. No, different jobs will have different patterns. We suggest "clustering" jobs with similar characteristics into "families" such as sales, front-line management, customer service, phone rep, etc. All members of the same family would participate in the same study.

Q. How many people do we need for a validation study?

A. More is better. One hundred to two hundred people in the same family is a "workable" number. Below 100, the data becomes less reliable. If you have fewer than 100 people in a family, we recommend looking at fewer and broader performance areas such as teamwork or quality attitude.

Q. How can I justify the investment?

A. A better question is, "How can you justify the expense of turnover and low performance?" Tests like this can pay for themselves by reducing turnover by just one person!

Q. That's a lot of work.

A. What's your point?