article by Dr. John Sullivan & Master Burnett

When staffing leaders are polled about the key issues facing the staffing function, one of the top items on the list in recent years has consistently been developing the ability to prove performance and value through metrics. It seems that before you can progress down the long, perilous road to becoming a business partner (we prefer “business leader,” but for many HR practitioners that is out of the realm of possibility), you must first prove to other business leaders that you do something of value, and that you do it well. The challenge presented by senior leaders is not a complicated one, but for some reason it seems to be one that ends in perennial failure. What follows is guidance on how to succeed, based on countless conversations with practitioners who have gone down the road and evaluations of what has worked in the past.

Make Your Metrics Useful

Making metrics useful seems to be where most people get off track. A metric should be telling, regardless of whether it is intended to report performance or diagnose the performance of a process. Unfortunately, many metrics in use today simply don’t tell you anything of value. To correct this problem, designers of metrics need to start by pre-identifying what questions need to be answered and what data is needed to answer them. The type of data needed will always center around quantity/volume, quality, time, money, and satisfaction. Any credible answer will involve producing data or proof that draws on one or more of these measurement elements. For example, consider the following questions and the data types needed to adequately answer them:
- Does focusing on lowering our cost-per-hire decrease our probability of hiring quality candidates? Metrics needed: cost per hire (money), quality of hire (quality)
- Does our current recruiting process negatively impact our ability to hire top performers? Metrics needed: applicant/candidate satisfaction by recruiting process stage (satisfaction), number of applicants/candidates per recruiting process stage (quantity), indicator of applicant/candidate quality (quality)
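To make the first question concrete, the two metrics can be paired and checked for co-movement. Below is a minimal sketch in Python; the quarterly figures, the 1-5 quality scale, and the function names are all invented for illustration, not real benchmarks or a prescribed method:

```python
# Hypothetical sketch: pairing cost-per-hire (money) with quality-of-hire
# (quality) to ask "does driving cost down hurt quality?" All numbers invented.

def cost_per_hire(total_recruiting_costs, hires):
    """One-sentence formula: total recruiting costs divided by number of hires."""
    return total_recruiting_costs / hires

# Quarterly snapshots: (cost per hire in dollars, average quality-of-hire score, 1-5)
quarters = [(4200, 4.1), (3800, 4.0), (3100, 3.4), (2700, 3.1)]

def pearson(pairs):
    """Plain Pearson correlation: do the two measures move together?"""
    n = len(pairs)
    xs = [p[0] for p in pairs]
    ys = [p[1] for p in pairs]
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(quarters)
print(f"Correlation between cost per hire and quality of hire: {r:.2f}")
# A strongly positive r here would suggest that, in this invented data,
# pushing cost down has tracked with lower quality of hire.
```

A simple correlation like this is deliberately basic: it answers the question in one number a manager can read without a statistics lesson.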
Developing metrics that answer tough questions about the impact of a process or decision provides information that is useful. Making your metrics useful is key to developing support for them.

Make Your Metrics Easy

Another area where people fail when it comes to metrics is making them far more complex than they need to be. Few managers are statisticians, and few have the time to read and understand a report whose methodology must be explained over several pages before the data presented can be interpreted. Everything about your metrics initiative must be easy, including:
- How the raw data is collected. If it isn’t based on data already collected, whatever instrument you use to collect new data must be minimally invasive on people’s time, easy to understand, and capable of providing consistent data that can be used time and time again.
- The formulas used to manipulate the data. A key to understanding the value of any measure is knowing how it was calculated. Using formulas that are overly complex is tantamount to asking others either to ignore you or to pepper you with questions you most likely could not answer to their satisfaction.
- How your findings are reported. Again, the concept here is saving managers time and giving them something they know how to use. However you report your findings, the answers cannot be hidden in charts, graphs, or tables. If the answer isn’t immediately apparent, you need to make it so!
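As an illustration of keeping the formula and the report equally easy, here is a hedged sketch of a metric whose calculation can be stated in one sentence. The requisition numbers are invented:

```python
# Illustrative sketch of an "easy" metric: average time to fill.
# One-sentence formula: total days requisitions stayed open, divided by
# the number of requisitions filled. All figures below are invented.

days_open = [32, 45, 28, 51, 39]  # days each filled requisition stayed open

average_time_to_fill = sum(days_open) / len(days_open)

# Report the answer in plain language rather than burying it in a chart:
print(f"Average time to fill: {average_time_to_fill:.0f} days "
      f"across {len(days_open)} hires")
```

A manager can check this calculation on the back of an envelope, which is exactly the point: the easier the formula, the fewer questions stand between the data and the decision.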
Complexity kills the sustainability of a metrics initiative. If you overbuild it, you risk:
- Having whoever you task with producing your metrics procrastinate until the project never gets completed.
- Having managers disregard your initiative because they don’t have time to try to understand it.
- Running out of resources, or exhausting the willingness of others to carry the initiative.
Keep Your Metrics Relevant

This area of guidance is closely related to making metrics useful. Many staffing professionals who understand and produce metrics continue to push them out long after their relevance has passed. Metrics used to diagnose a problem with a process are just that: diagnostic metrics meant to be applied for a short period of time. Continuing to track and report on them after an issue has been resolved would be like your doctor taking your vital signs every day of your life: it wouldn’t hurt you, but it isn’t necessary.

Another key aspect of keeping your metrics relevant is reporting them on a timeline that demonstrates variance. The reason behind this is audience attention. We humans tend to tune out distractions that have become predictable, such as the hum of the lawnmower, the sound of the sea, or the metrics that never seem to change.

Pretest Your Metrics

The last element of guidance is a simple one. Never trust that your perception will equate to that of your target audience. Always pretest what you intend to deliver, and make corrections when necessary.

While these four areas of guidance may seem simple, they are the key to developing a metrics initiative capable of demonstrating both value and performance. Metrics are not complicated; in fact, most rely on basic math skills mastered in third grade. If you step back and frame your efforts around answering key questions, and do so in a way that makes sense, people will start to pay attention.