I’ve long held the belief that observation-driven performance reviews do a disproportionate disservice to designers. The model of managerial assessment, multi-disciplinary calibration and stack ranking performed at departmental level leads invariably to worse outcomes for designers than it does for other disciplines, because we are outnumbered by people who work in ways which are fundamentally different, and fundamentally incomparable. In this post we’ll discuss why that is, and outline some starting points for an alternative approach.

Iceberging

We’ve spent years working to demystify design, to open up the work beyond the artefacts and bring stakeholders along with the discovery process, the real heart of what design is. That hasn’t happened with performance evaluation. My sense is that there is still the same myopic focus on artefacts and outputs, and that as a single part of a larger machine designers are being judged on the superficial part of their work, despite having minimal control over the life that work leads once it is in the hands of the engineering, product or data science disciplines.

Sharpshooting

In lieu of significant exposure to designers and their day-to-day work, it’s often the case that peers from outside of design will be asked to draw conclusions about performance levels on top of an absurdly thin set of data points. They may have worked with a designer for a few hours across a quarter, or, more likely, members of their own team will have done so, and so all insight is at best second-hand. It’s unfair for organisations to put people in this position, but so long as they are expected to hold a view, or to dissent from one, they will always have to do so using the limited information at their disposal. We should stop asking this of them.

All opinions are equal, some are even worse than that

When design managers take performance reviews into cross-disciplinary calibration and review sessions they are always outnumbered, and often trying to explain both a designer’s work and their performance at the same time. Whilst for the other, larger disciplines in those rooms there is typically strength in numbers, consensus and context sufficient for plenty of well-qualified input, there is rarely the same for design. In lieu of well-informed opinions specific to design, participants will often fall back on the way they think about performance in their own disciplines: engineering managers will default to thinking about outputs, product managers will default to thinking about stakeholder management and business performance. Non-technical peers will want to look at measurable outputs linked to high-level business objectives, as they might for a sales organisation or an operational unit. None of these things are bad per se, but they become so when used in isolation from the broad impact of design across these and many other areas.

We as design managers and leaders typically play into these knowledge gaps by trying to conform to those expectations in the way we describe performance. We will link design work to business results, highlight good examples of stakeholder management, talk about the output of design work. This might help get us through a single performance period, and might yield what feel like good results for our teams, but all it does in the longer term is cement the idea that design is best measured in output, and is interchangeable with every other discipline in terms of expectations. This is clearly a bad idea for the long-term health of a design organisation.

Logbook-based performance reviews

A thought percolating in my mind as I last went through a performance cycle was that designers would be better served by a much less amorphous process, whereby the work and the assessment of the work happen not in isolation from each other, but as part of the same integrated process.

That might look like a logbook of projects, each one with success criteria agreed and defined upfront, a set of key stakeholders and their in-the-moment feedback, and the artefacts of the work. It sounds like a clumsy and unsophisticated way of assessing performance, but it has some key advantages: the expectations are set in stone on a per-project basis, the feedback is logged when it is fresh and limited to the specific expertise of the people giving it, and the work itself is included as delivered, not as it exists whittled down by a subsequent lack of capacity, ambition, etc.
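As a rough sketch, a single entry in that logbook might capture something like the shape below. The field names are illustrative assumptions on my part, not a finished template, and the structure matters far more than the notation.

```typescript
// Illustrative sketch only: field names and shapes are assumptions,
// not a prescribed template.
interface LogbookEntry {
  project: string;             // name or short description of the project
  period: string;              // e.g. "2024-Q2"
  successCriteria: string[];   // agreed and defined upfront, before work begins
  stakeholders: Stakeholder[]; // the people positioned to give informed feedback
  feedback: FeedbackNote[];    // logged when it is fresh
  artefacts: string[];         // links to the work as delivered
}

interface Stakeholder {
  name: string;
  discipline: string;          // e.g. "engineering", "product", "research"
}

interface FeedbackNote {
  from: string;                // which stakeholder gave it
  date: string;                // when it was logged
  scope: string;               // limited to the giver's specific expertise
  note: string;
}
```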

Design is a field role: we are out in the real world, speaking to people and figuring out how to make a mark. We are not meant to be heads-down writing code, nor in meeting rooms debating with stakeholders, and yet these are the kinds of expectations designers are often measured against.

Logbook-based performance assessment acknowledges this, and allows us to design a performance review process which feels less arbitrary, less prone to the kinds of fallacies outlined above, and perhaps less anxiety-inducing for all involved.

In an upcoming post I will share a template for this type of performance artefact, and I’ll welcome your feedback on it.
