You might have read about something called 360-degree feedback. Depending on who you read, it gets good, bad, or ugly reviews. People generally agree that performance feedback is a good thing, so what goes wrong? How can feedback from multiple raters possibly be a bad thing? Why do organizations generally toss it out after a few tries? After the initial shock and awe, why does it usually die on the vine? The reasons are quite simple.
- 360 is silly … Think about it. If you want to get information about your performance, why send a questionnaire to people who don’t know you, don’t care, or don’t see your performance. Isn’t it better to ask customers about customer behavior, subordinates about subordinate behavior, peers about peer behavior, and so forth? You will get better data using a few narrow surveys designed for targeted audiences than one great big one-size-fits-all survey.
- Too much, too fast … Ask anyone who ever received a 360 report and they will probably tell you the amount of data was so overwhelming it took a half-day workshop to explain. Changing behavior is difficult. We might want to change ourselves, but everyone else wants us to stay the same. In addition, have you ever tried to develop more than 1-2 behaviors at a time? Keep it simple.
- Irrelevant … Why ask someone to rate things that have nothing to do with job performance? If being inquisitive is not important to job performance, then leave it off the survey. The surest way to confuse employees is to ask questions and give feedback about silly items. Base survey items exclusively on job specifics.
- Guesstimates … This part is really irresponsible survey design. Some surveys suggest not rating items that don’t apply to the job or about which the rater is unsure. So far, so good. But what about rater subjectivity? Do we statistically analyze the data for agreement between raters? Calculate meaningless averages? Asking raters to score behaviors they cannot frequently see or hear fills the data with error.
- Reward and punish … Now, here’s a great way to torpedo feedback. As soon as people learn feedback is used to either punish or reward, the system is toast. Keep feedback on a developmental level. Reward and punish based on job performance. Treat feedback as the road to performance … rate the two separately. If your organization cannot separate them, your 360 will either be dead in the water or encourage vicious infighting.
- Organizational bias … External consultants are not experts in any single organization. On the other hand, experience across multiple organizations allows them to recognize strengths, weaknesses, and differences. The trend I see most is “no one here is below average” or, just as silly, “everyone here is seriously lacking.” This is management craziness. In every organization, people range from the top of the food chain to the bottom. Useful feedback is honest, objective, and free of organizational dogma.
- Not job related … What do you think happens when people are asked a lot of questions unrelated to someone’s job performance? Right. You get a lot of answers unrelated to doing the job. The result encourages confusion, frustration, and a bunch of ticked-off people. This is a great outcome if you planned to waste someone’s money and time answering surveys. If you want to avoid that problem, either do a formal job analysis to identify specific competencies, or at least talk to the subordinate and his/her manager to get some ideas of what to survey.
- Poor planning … It’s simple. Never ask a question if you are unprepared for the answer. Know ahead of time what developmental plans apply to each survey item. Don’t plan? Don’t ask. And, don’t think recommending a list of books and resources will suffice. It takes an unusual person to engage in self-development. It’s much easier to bury bad news in the bottom drawer and hope it goes away.
- Crummy items … Asking questions about someone’s cognitive ability is sure to yield worthless results. Are you asking about intelligence? Accommodation to external factors? Based on what alternatives? Who is best positioned to evaluate the quality? Get the picture? As a general rule, you cannot go wrong designing items following the S.M.A.R.T. goal-setting principles. If it’s not specific, measurable, attainable, realistic, and time-bound, then it’s D.U.M.B.
- No involvement … I know this is a radical idea, but management is more than a title. It’s a responsibility that involves guiding, coaching, and developing subordinates. Any 360 should be a joint activity between coach and subject … again using the S.M.A.R.T. principles. Your feedback program will be short-lived if the people who benefit most are not involved.
- Uncoordinated … Which do you think works better? An organization-wide initiative to accomplish a single goal, or one encouraging everyone to do their own thing? I suggest choosing some kind of common goal such as teamwork, better problem-solving, initiative, creativity, goal setting, and so forth. It really does not make any difference, just as long as everyone is in it together. Within the umbrella item (e.g., teamwork), each individual employee gets to choose the job-specific teamwork elements making the greatest difference to him/her. Group involvement is a great way to encourage group development, and it makes life much easier for the training department.
Let’s summarize: common sense and best practices require shared group support; manager and subordinate working together; isolating a specific area that affects a specific audience; developing a few critical S.M.A.R.T. items; gathering honest and unbiased feedback; using the data to support planned development activities; developing the skills; and following up with surveys to check for progress. Too much work? Management would resist it? You always have a fallback position.
Buy a great big survey with fuzzy generic items that apply to everyone; deliver it up, down, and sideways; summarize the results in a huge report; send subordinates back home with a page of developmental resources; and misuse the results in performance reviews. The first year it will be hailed as a great idea. The second year it will be on life support. In three years it will be dead; you will have wasted tens of thousands of dollars, managers and subordinates will be thoroughly irritated with you, and your professional reputation will take another hit.