Negative metrics – why you shouldn’t focus on them

AUTHOR’S NOTE: This post is inspired by several keynotes and subsequent conversations at Agile NZ Conference 2014.

What is a negative metric? A metric is something you measure to get an idea of how something is performing. A negative metric is a performance indicator that tells you when something is going badly, but not necessarily when that same something is going well.

Take velocity, for example. If velocity is low, or fluctuates between iterations, that is usually a sign something isn’t going well. However, if velocity looks normal, there is no guarantee the team is delivering value. We know they are working, but that is all we know.

If we want to be sure they are delivering value, we have to measure it directly. Measuring delivered value is much harder than measuring velocity: it requires feedback from the customer telling the Product Owner whether what the team is building meets their needs.

Velocity, then, is useful when the Product Owner uses it to gauge what the team is capable of. It will certainly aid release scheduling, but it is no guarantee that product development is on the right track.
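
To make the release-scheduling use concrete, here is a minimal sketch of the usual arithmetic. The numbers are hypothetical, purely for illustration:

```python
import math

# Illustrative figures only: points completed in the last four iterations,
# and points left in the release backlog.
recent_velocities = [21, 18, 23, 20]
remaining_points = 120

avg_velocity = sum(recent_velocities) / len(recent_velocities)   # 20.5
iterations_left = math.ceil(remaining_points / avg_velocity)     # 6

print(f"Average velocity: {avg_velocity} points/iteration")
print(f"Iterations to release: {iterations_left}")
```

This forecasts *when* the backlog will be done, nothing more; it says nothing about whether the backlog is the right one.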

Another reality of metrics like velocity is that the very act of focusing on them can decrease the performance you desire. If velocity is used as a stick to beat the team with (the usual line from the business being something like “your velocity isn’t high enough and you need to raise it so we can go faster”) then, over time, velocity becomes less meaningful. Why? Because the team will raise their velocity, but it won’t necessarily reflect an increase in output. They will merely inflate their estimates: a story point will shift from representing 2-3 ideal days’ worth of work to 1-2 ideal days’ worth.

The result is the same value being delivered as before, but at a higher velocity. In my experience this isn’t done deliberately by the team; a light is being shone on them, and they change their behaviour in an effort to please.
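
The estimate-inflation effect is easy to show with a quick back-of-the-envelope calculation (all numbers hypothetical):

```python
# Same actual output per iteration, before and after re-estimation.
work_in_ideal_days = 30

days_per_point_before = 2.5   # a point used to mean ~2-3 ideal days of work
days_per_point_after = 1.5    # after pressure, a point means ~1-2 ideal days

velocity_before = work_in_ideal_days / days_per_point_before   # 12 points
velocity_after = work_in_ideal_days / days_per_point_after     # 20 points

# Velocity went up; delivered output did not.
assert velocity_after > velocity_before
```

The stakeholders see velocity rise from 12 to 20 and conclude the team is “going faster” while nothing about the output has changed.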

Other examples of negative metrics in software product development are:

  • Lines of code written – shows that your developers are writing code, but says nothing about the usefulness or quality of that code. Focus on this metric and you will certainly get an increase in lines of code written, if nothing else.
  • Test code coverage – the percentage of code exercised by tests. Shows that tests run the code, but offers no guarantee that those tests are effective at preventing bugs.
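
A contrived example makes the coverage point vivid. The first test below achieves full coverage of the (hypothetical) function yet would pass even if the implementation were completely wrong, because it asserts nothing:

```python
def apply_discount(price, percent):
    """Hypothetical example function: apply a percentage discount."""
    return price - price * percent / 100

def test_apply_discount():
    # Executes every line of apply_discount: 100% coverage, zero checking.
    apply_discount(100, 10)

def test_apply_discount_properly():
    # A useful test pins down behaviour, not just execution.
    assert apply_discount(100, 10) == 90.0
```

Both tests report identical coverage; only the second one would catch a bug.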

Examples of metrics that you could focus on:

  • Value delivered – is the customer getting what they want/need?
  • The flow of value – how much and how often is value delivered?
  • Number and severity of bugs released into the wild – one of the only true measures of the fit-for-purpose quality that the customer and the business should care about. The further down the value stream bugs are found, the more expensive they are to fix, which is why agile methods advocate testing from the get-go.
  • Happiness of your team(s) – happy teams are working in a way that is sustainable. Standards and quality will be high. The team will have a high sense of satisfaction.

In short, negative metrics have their place, but used unwisely they can have unintended side effects and actually de-optimise the very performance you’re looking to gain.