It’s Not Quantitative Just Because Numbers Are Involved
I had an interesting argument with some friends last night over “quantitative” measures of performance in education. One argued that standardized tests are worthless and suggested that the public sector is intrinsically worse at ranking and evaluating talent and success than the private sector. Everyone with experience in the private sector immediately jumped on him.
Large companies face an extremely difficult task: they need to compare thousands of employees in dissimilar roles along a uniform scale. What they tend to do is have managers and peers issue ratings – normally on a numerical scale – which are then weighted and combined for each employee, putting people who have worked on very different projects onto a unified scale. This massively simplifies decisions like compensation and retention, because they can be made along the single axis of the “employee score” rather than having HR make individualized decisions by interviewing every employee’s managers, peers, and subordinates. That might yield better decisions, but it would take a huge amount of time and manpower.
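The scheme described above is mechanically simple, which is part of its appeal. A minimal sketch in Python, with made-up weights, names, and rating data (real systems vary widely in how ratings are collected and weighted):

```python
# Illustrative sketch of the rating scheme described above: each employee
# receives numeric ratings from managers and peers, which are weighted and
# averaged into a single comparable score. All weights and data here are
# invented for illustration, not drawn from any real company's system.

def combined_score(manager_ratings, peer_ratings,
                   manager_weight=0.6, peer_weight=0.4):
    """Weighted combination of average manager and peer ratings
    (a 1-5 rating scale is assumed)."""
    avg_manager = sum(manager_ratings) / len(manager_ratings)
    avg_peer = sum(peer_ratings) / len(peer_ratings)
    return manager_weight * avg_manager + peer_weight * avg_peer

# Two employees in entirely different roles land on one comparable axis:
alice = combined_score([4, 5], [4, 4, 5])   # hypothetical engineer
bob = combined_score([3, 4], [5, 5, 4])     # hypothetical salesperson
print(alice, bob)
```

The single number is what makes downstream decisions cheap – compensation bands and retention lists can be cut along it directly – which is exactly why the quality of the inputs matters so much.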
Despite having numbers, these scores are not a quantitative measure of performance. After all, the scores that each employee receives are just their co-workers deciding what they think about them and translating this into a number in a semi-arbitrary fashion. It’s a garbage number – one that doesn’t encode a significant amount of information about the object it is describing. And as they say, “garbage in, garbage out” – comparing weighted averages of these numbers against each other doesn’t yield a great deal of information!
Rankings yield some information – a 90th-percentile employee is almost surely more valuable than a 10th-percentile employee. But two 90th-percentile employees could have wildly different real values to the organization – one might be an innovative and hard-working powerhouse and the other might be an incredibly charismatic and likeable dolt who has a gift for making himself look good.
The epistemological problems of school testing are substantially less severe, since there is an objective basis for comparison – a mandated body of knowledge. Every 5th-grader is supposed to know certain things (e.g., vocabulary lists, certain mathematical operations) and every 10th-grader is supposed to know more. Many of these tests are poorly designed, but that’s an execution problem rather than an epistemological one. Rather than comparing inherently dissimilar objects (employee quality in a diversified organization), standardized tests are intended to test inherently similar objects: students’ command of a pre-defined body of material.
The question of ranking employees is much more interesting, since developing scales to compare dissimilar objects is both epistemologically troublesome and badly needed. The need is clear – large organizations cannot afford the HR overhead to evaluate each employee in their full context. The trouble is a bit more subtle. The simplest answer for what these numbers encode is “performance”, but that isn’t a satisfying answer. Past performance is only useful to an organization insofar as it predicts future performance. So are employee rankings intended to predict next year’s performance? What is performance anyway, when it doesn’t have quantitative measures attached to it (e.g., sales)? Are rankings intended to predict future value to the organization? Probably not, because the highest-ranked employees are the most likely to leave for greener pastures and so have extremely low expected future value to the organization. Employee ranking isn’t really a problem of encoding information – the question of what information employers are trying to measure is deceptively complex, and the stated reasons are incoherent when examined.
I would suggest that in practice, rankings are a method of employee conditioning rather than an attempt to impart meaningful information to employers. The numbers are just window dressing.