By John Westgarth. Originally published on LinkedIn.
" I have a confession to make: As an Agile Coach I was over-engineering for 'impact' metrics. Here's why that doesn't work, and what I've decided to do instead."
Food Agility (where I work) invests in academic research to see how digital technology can be used to solve agricultural and food challenges. This involves collaboration between people from markedly different backgrounds - academic research, government, startups, big tech and large farming organisations. Alignment between individual, project and organisational goals is essential for collaborative project teams to succeed in their mission. This is where Objectives + Key Results (OKRs) come into play.
OKRs are a management approach developed by Andy Grove at Intel and popularised by John Doerr at Google, designed to give autonomy to high-performing teams while ensuring that all teams align to deliver on the overarching organisational goal. Under the OKR system of goal setting, the organisation sets the Objectives it wants to achieve and nominates the Key Results it wants the team to deliver - and then gives the team responsibility for deciding “how” those Key Results will be achieved. Importantly, the organisation decides on the metrics used to determine the success of each Key Result.
For example - a company might set an Objective of ensuring that all of its customers are able to easily share data with other actors in a supply chain. It wants the team to hit the Key Result of “50 farmers onboarded to the data sharing platform”. Then it is up to the team to determine the best way to go about this - for example, who the early adopters might be, what data sharing policies need to be in place, or what tech build would need to exist to make this happen.
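To make that division of responsibility concrete, here is a minimal sketch in Python (my own illustration, not part of any Food Agility tooling): the organisation owns the Objective, the Key Result and its success metric, while the “how” is left entirely to the team.

```python
from dataclasses import dataclass, field

@dataclass
class KeyResult:
    description: str        # the measurable outcome the organisation nominates
    target: float           # the metric that defines success (set by the organisation)
    achieved: float = 0.0   # updated by the team as the work progresses

@dataclass
class Objective:
    statement: str
    key_results: list[KeyResult] = field(default_factory=list)

# The example from the paragraph above. Note there is no "how" field:
# the plan for reaching the target belongs to the team, not the structure.
data_sharing = Objective(
    statement="All customers can easily share data with other actors in a supply chain",
    key_results=[
        KeyResult("Farmers onboarded to the data sharing platform", target=50),
    ],
)
```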
So far, so good. What is there to change your mind on?
Food Agility always encourages the teams we invest in to focus on the impact that their work will have on end-users. We sit with teams and work through not just what they want to build or research (also known as the Output), but also what a successful project would mean for a farmer, processor or exporter. We discuss how they will measure this impact and how they will ensure it happens during the life of the project, rather than at an unspecified ‘later’ date. We know that Australia is underperforming at the commercial adoption of research outputs, which is why we work with teams that take full responsibility for both the “build” and the “delivery” of the project vision.
One way we do this is to work with teams to write impact metrics into their project OKRs. This is what I have changed my mind on.
How do you write impact into OKRs?
To write impact into the project OKRs we use the Design Thinking structure of Desirability, Viability and Feasibility (more here). We want teams to measure who is using the tech (desirability), whether the project is creating a financially sustainable output (viability) and whether the research can deliver a technically significant insight (feasibility). To make this a reality, project teams assigned an Objective to each of the three areas and then wrote 1-2 Key Results that measured the success of that Objective.
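Laid out as data, a project’s OKR set under this approach looked roughly like the sketch below. It is illustrative only - the Key Results and targets are placeholders borrowed from examples elsewhere in this article, not numbers from a real project.

```python
# Illustrative only: one Objective per Design Thinking lens, each with 1-2
# Key Results written as (description, target) pairs. Targets are placeholders.
project_okrs = {
    "Desirability": {
        "objective": "Farmers actively use the data sharing platform",
        "key_results": [("Farmers onboarded to the platform", 50)],
    },
    "Viability": {
        "objective": "The output is financially sustainable",
        "key_results": [("Cost of delivering a unique report ($)", 100)],
    },
    "Feasibility": {
        "objective": "The research delivers a technically significant insight",
        "key_results": [("Harvest-time prediction accuracy (%)", 90)],
    },
}
```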
But there are problems with this approach:
- It can take years to see the true impact of a successful project. Teams were choosing metrics that measured impact - but these were ‘over the horizon’ metrics that weren’t giving the team the valuable feedback it needed to know whether the early stages of the project were on track.
- Teams found it hard to identify user adoption metrics for a project whose tech build wouldn’t be properly defined until later in the project.
- Key Results lacked baseline data - projects were aiming for a 5% increase in efficiency - but a 5% increase from what baseline? And figuring out that baseline would require a research project of its own.
- Teams saw the metrics as descriptions of the project created during project formation - aspirational statements used when pitching for funding - rather than live numbers to hold themselves accountable to.
- Because the metrics weren’t moving each quarter, and in many cases were out of the team’s control (e.g. economic impact or sales metrics), they weren’t motivating and - depending on the team - could be seen as an administrative chore.
- Finally, teams struggled to set goals at the beginning of a quarter and then review their achievements at the end of it. Teams largely focused on the activities they had done rather than on how they went against the metrics - possibly because the metrics weren’t relevant or motivating to their weekly activity.
And this is why I have changed my mind on OKRs.
As an Agile Coach, I was over-engineering for “impact” metrics. I focused too heavily on helping teams dig into the usability of their project outputs and the economic or commercial impact that would flow from that. The original aim was to ensure that the team was aligned on its goal and could use a language (numbers) that all partners could understand. What I missed was that poorly calibrated metrics can create anti-patterns: they can make teams feel like failures even when they are grinding out great work, they can undermine valuable project management tools such as dashboards and retrospectives, and ultimately they leave the team confused and reporting against numbers they can’t control. At their worst, OKRs can feel like an authoritarian reporting structure, when really they should be about team empowerment and autonomy.
So where to from here?
I’m seeing a pattern emerge from our portfolio of 15-20 projects.
- North Star Metrics: Each project tends to have one Key Result that is more valuable than anything else it is tracking. It is the number you are proud to tell your customers and sponsors about, and it tends to be a scientific proof point that the technical solution works. For example - the accuracy and reliability of a statistical model that predicts harvest times or yield amounts. We call this the North Star Metric.
- Community: All of our projects test their research outputs by putting minimum viable products (MVPs) in front of customers and gauging the feedback. Key Results in this space can include the size of the early adopter community. This number represents the quality of your feedback loop: how many customers are trialling your outputs.
- Value delivery: Many projects focus on combining or processing data sets to deliver new insights to users. In these cases it is important to track how many times those reports are delivered. Then you can start tracking how long it takes to deliver a unique report, or how the cost of delivery falls over time.
My suggestions on how to nail OKRs for collaborative research teams
- Communicate in numbers: when you start a meeting, start with the Key Results. Focus on the numbers - not the activity. Numbers give your activity a framework that will help others understand how your work is progressing. Describing the activities you undertook to make those numbers move is interesting colour to the discussion, but secondary to the numbers themselves.
- Key Results have two parts: a denominator and a numerator - that is, what you wanted to achieve, and what you did achieve. Make sure both parts of the Key Result are up to date and useful for the team. The gap between the two numbers is the starting point for any conversation about how the project is tracking (see the short sketch after this list).
- Find your North Star: find the one number that gives the best description of the project. Focus your energy on baselining and tracking that number. Know it back to front and start every discussion by referencing that number. Doing this will focus your stakeholders and remind them of the project goal.
- Make sure your team can influence the numbers it chooses: there is no point in having numbers that are outside the direct control of the project team. These are demotivating and cloud your story. Find ones that describe your team and own them.
- Create a mix of activity and impact metrics: it is ok to have a humble mix of metrics that describe your activities right now, coupled with metrics that describe the impact you will have. But make sure you communicate how your quarterly activity is building towards your impact metrics.
- Start each quarter by setting numeric goals: this sounds obvious, but due to the cyclical nature of Agile sprints it is possible for individual meetings such as the Showcase, Team Retro (end of sprint) and Planning (start of sprint) to lose their focus and merge together. Make sure you invest the time at the start of each quarter to set new goals. Include it as standard practice in your team meetings.
- Re-calibrate OKRs: if the metrics you have chosen are not working for you, then find others that do. Just because you set metrics during the project setup phase doesn’t mean you have to stick with them for the life of the project. The main thing is that the metrics are motivating to the team and drive it towards the agreed project goal. Your stakeholders and investors are always interested to hear what you have learned that inspired you to adopt new metrics.
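One last note on the denominator/numerator point above - the arithmetic behind a two-part Key Result is deliberately simple, something like this small sketch (the 32-of-50 figures are made up for illustration):

```python
def key_result_progress(achieved: float, target: float) -> tuple[float, float]:
    """Return (progress ratio, remaining gap) for a two-part Key Result."""
    if target == 0:
        raise ValueError("A Key Result needs a non-zero target")
    return achieved / target, target - achieved

# Hypothetical numbers: 32 of the 50 farmers onboarded so far.
progress, gap = key_result_progress(achieved=32, target=50)
print(f"{progress:.0%} of target, {gap:g} to go")  # -> 64% of target, 18 to go
```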
You can find Food Agility’s OKR template here, or contact me on Twitter (@jwestgarth).