There are all sorts of reasons for managers to dread performance reviews. For one, it’s a lot of work. In preparation you need to ask people to send peer feedback, write peer feedback for others, and write your own self-reflection.
Then comes the real fun part: the judgment, the writing, the calibration with other managers, and then the delivery. At this point, you think you’re done, but no — throughout the remainder of the year, you need to deal with the aftermath.
And while struggling through all of this, you ask yourself: is it all worth it?
Common practice still says: yes, this is how we do it. You can write books attempting to get out of this all you want, but you shall do those performance reviews.
One of the tech granddads of management theory, Andy Grove, in his famous “High Output Management”, in chapter 13 (every manager’s lucky number) explains the fundamental purpose of the performance review (emphasis mine):
There is one that is more important than any of the others: it is to improve the subordinate’s performance. The review is usually dedicated to two things: first, the skill level of the subordinate, to determine what skills are missing and to find ways to remedy that lack; and second, to intensify the subordinate’s motivation in order to get him on a higher performance curve for the same skill level.
The review process also represents the most formal type of institutionalized leadership. It is the only time a manager is mandated to act as judge and jury: we managers are required by the organization that employs us to make a judgment regarding a fellow worker and then to deliver that judgment to him, face to face.
That’s the theory, but does it work? That’s probably a hard question to answer in general, so let’s limit ourselves somewhat to our field of knowledge work. The story may be different for e.g., factory workers. Or not. I have no idea, but I doubt there’s one silver bullet answer.
Gallup is a famous workplace consultancy firm. This is what they found in their article “More Harm Than Good: The Truth About Performance Reviews” (spoiler alert: spoiler in the title):
According to Gallup, only 14% of employees strongly agree their performance reviews inspire them to improve.
In other words, if performance reviews were a drug, they would not meet FDA approval for efficacy.
And it costs organizations a lot of money -- as much as $2.4 million to $35 million a year in lost working hours for an organization of 10,000 employees to take part in performance evaluations -- with very little to show for it.
While this could be a drop-the-mic moment, things are never black-or-white, and I also don’t believe we’re at the stage where it’s fully clear that our best path forward is to simply drop performance reviews altogether. Although, supposedly, more and more employers are doing exactly that.
Let’s not invoke a revolution here just yet. What I will argue for instead is a smaller step: to drop just one aspect of it.
That one aspect is the judgment mentioned by Grove because I believe communicating it severely limits the potential to achieve the primary goal of “intensifying the subordinate’s motivation.” Why? Because judgment is an act of violence. And people generally don’t respond constructively to violence.
Here are some standard elements of a performance review:
1. Areas of strength
2. Areas for growth (and progress made on them)
3. Performance rating (low, mid, high — usually worded using more flowery language)
While I have my reservations about our ability (our as in: the manager and peers) to accurately identify areas of strength and growth — these elements at least have the potential to be helpful, encouraging and motivating.
As to (1): It’s motivating to hear our strengths acknowledged (at least if they somewhat match our self image).
As to (2): If somebody is interested in developing in a specific direction, and if it’s hard for them to figure out whether they’re making progress, it is also valuable to learn about gaps that remain. If that happens during a performance review, why not? Note my two emphasized ifs there.
However, what I would really challenge is that last part (3), the judgmental part: the performance rating.
Oh boy. That’s the mood killer.
From my experience, unless the performance rating is precisely what the person under review expects, a rating is actively harmful.
If the rating is precisely what they expect — our best-case scenario — communicating it will have no significant effect. Phew.
If the rating is higher, it will likely have a short-term positive effect.
“Oh!” the person thinks, “I must be better than I thought, that’s a surprise, but… cool!” This then sets expectations for the next cycle: last time the rating was higher than anticipated, so why not again? If it is, we’re back to our neutral “as expected” case, but at some point the rating will be lower than expected. An even worse scenario is where a person actually felt they had been mailing it in, but this is not reflected in their rating. This will only encourage them to continue mailing it in down the line.
If the rating is lower than expected, demotivation hits. All the other well-crafted sections of the performance review around strength areas, and various constructive ways of getting better — all noise.
Let me repeat what I said in the last chapter one more time:
Whenever there is negative judgment in your communication, people will fail to hear anything else.
You may think this is silly, you may think that people should get over themselves. But ultimately, as we’ll discuss later in this book — it doesn’t matter what you say, it only matters what people hear. At least if your genuine goal is to “improve the subordinate’s performance.” Demotivated people do not perform better.
Your hunch may be to get creative in structuring your performance review — to shuffle the order a bit. I found it doesn’t help. I’ve tried putting the rating at the beginning. I’ve tried putting it at the end. It doesn’t matter. Judgment is served. The rest becomes noise.
“But Zef, the performance rating shouldn’t come as a surprise, there should be an ongoing flow of feedback, right?”
Sure, sure. Even if, hypothetically, we manage to keep up this stream of feedback (“still not according to expectations”, “still not”, “nope”), that leaves the best-case scenario of “no surprise, no effect.”
While many of us may believe or hope that we’ve been clear with our feedback, are we sure? I’ve found that, more often than one might think, the person under review has quiet hopes their rating will be higher than what they technically should be expecting. Managing expectations is hard. The person under review may be an optimist. Having optimists on your team is good. Being an optimist about to receive a performance rating is a bad place to be to sustain that optimism.
You may think all these effects are overblown, but I’ve seen that the feeling of being underappreciated — the most common result of a lower-than-expected performance rating — leads to anything from a temporary loss of motivation and productivity, to a complete loss of trust for years, to people simply quitting — likely not on the spot, but in time.
While well-intentioned, we’re playing with fire here.
I say: drop ‘em.
No more performance ratings.
What would we lose?
“Well, people need to know where they stand.”
In case of significant underperformance this is indeed necessary: somebody doesn’t put in the work. Not engaged. Not committed. But I don’t believe yearly or half-yearly performance reviews are the appropriate moment for such feedback. If somebody is clearly underperforming, this should be signaled way earlier, probably alongside some performance improvement plan of sorts.
On the other end, if a person is significantly over-performing, this should lead to new opportunities throughout the year: new projects, new challenges, new chances to grow, new roles. Not a one- or two-word rating in a performance review.
One justification for performance ratings has been that they’re required to be able to promote people and assign bonuses.
In a previous chapter “No More Incentives” I’ve already covered bonuses, so let’s move on to promotions.
Pure seniority-based promotions (e.g., junior to mid to senior) usually don’t really rely on a performance rating. Most companies have multi-dimensional competency matrices (significantly more detailed than a 3- or 5-level performance rating) that can be used to judge whether seniority promotions apply. Yes, this requires judgment (and I’m no great fan of competency matrices either — see the chapter “No More Boxes”), but since people don’t need to be assessed for promotion every cycle, at least we can dial down the number of times we communicate our (negative) judgment a bit.
Promotions in the higher regions (management roles, staff engineer roles) are likely opportunity-based — there’s a limited number of such roles available, and people are ideally selected on being the right fit for that particular role in that particular scope. I’m not convinced that looking at performance ratings over time is valuable input to decide on who gets those roles without a bunch more context. And probably that context, without the rating of whoever happened to be their manager at the time, is more valuable than the rating itself.
But More Pragmatically…
What if we cannot convince our organizations that we need to drop these ratings?
Perhaps there’s a cheap escape route here, because the argument has been that judgmental feedback is ineffective, and feedback implies communication. What if we judge… but don’t communicate?
In HR, there is a concept of the 9 Box Grid, which suggests rating people along two dimensions: performance and potential. Generally, the performance dimension is communicated to the person in question, whereas the potential dimension often is not:
The benefit of communicating that someone is at ‘full potential’ rather than at ‘low potential’ is that the former is less discouraging. We do want people to have a growth mindset and associate extra effort with improvements in performance so there is some tact required from the manager when it comes to communicating this. For this reason, some companies decide not to communicate this potential score to employees.
So, obvious idea: perhaps we could not communicate the performance rating either, if we insist on having such ratings?
Honestly, while technically an option, I don’t like it much. I like transparency; hiding such information will likely cause anxiety and mistrust, especially when people know this data is tracked, somewhere. Meh.
So if we cannot drop ratings altogether, what then? Honestly, I’m at a loss. The best we can do is attempt to make the ratings as fair as we possibly can, which is an expensive and tiring process. It usually involves managers meeting and “calibrating.” That is: explaining to each other what criteria were used for specific ratings and challenging each other on it. It’s hard. It’s painful. If ratings need to exist, this needs to be part of it. “Close your eyes, and think of England” as the British say.
Then, as we communicate the ratings to our people with as much dismissive language about their importance as we can, let’s cross our fingers and hope we’ll get out of it alive.