Value Velocity: A Better Productivity Metric?
December 17, 2007
One of my more popular essays is The Productivity Metric, in which I argue that all of the common ways of measuring productivity--SLOC, feature points, velocity*--have significant flaws. The best approach, I argue, is to measure the team's business impact by looking at ROI, IRR, or some other business value metric.
*Please note that I'm specifically talking about productivity. Velocity is a great tool for estimating and planning and I'm not trying to change that. It's just not a good measure of productivity.
While I still like this approach, it also has flaws. It can be really hard to measure value. Some types of work that are valuable don't lead directly to improved sales. Also, some of the causes of improved value are unrelated to the team's work. If software sales go up after a big sales push, was the development team really responsible? Similarly, if half the sales staff quit and sales plummet, should the development team be blamed?
The problem with measuring business value is separating out the effects of the team's work from all of the other things going on in the organization. Although this is usually a strength of the metric, sometimes the team's contribution gets lost in the noise.
Another Way
Lately, I've run across several teams who are experimenting with another approach to measuring productivity. It has the advantage of looking at the team's effort alone. It has the disadvantage of being a relative metric: you can't compare productivity across teams; you can only tell if a specific team's productivity is increasing or decreasing.
Here's the trick: rather than asking your business experts to measure business value after delivery (difficult!), have them estimate it beforehand. Every story (or feature--keep reading) gets an estimate before it's scheduled. At the end of each iteration, add up the value estimates for the stories you completed in that iteration. This is your "value velocity." It's like traditional velocity, except it's based on your customers' estimates of value rather than your programmers' estimates of cost.
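To make the bookkeeping concrete, here's a minimal sketch in Python. The story names, fields, and numbers are invented for illustration; the only real inputs are your customers' value estimates. Each completed story contributes its estimate, and the iteration's value velocity is just the sum.

```python
from dataclasses import dataclass

@dataclass
class Story:
    name: str
    value_estimate: int   # customer-assigned value points (hypothetical scale)
    done: bool            # completed this iteration?

def value_velocity(stories):
    """Sum the value estimates of the stories completed this iteration."""
    return sum(s.value_estimate for s in stories if s.done)

# A hypothetical iteration's stories
iteration = [
    Story("one-click reorder", 3, done=True),
    Story("saved shipping addresses", 2, done=True),
    Story("gift wrapping option", 1, done=False),   # not finished; doesn't count
]
print(value_velocity(iteration))   # 5 value points this iteration
```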
As with story-point cost estimates, the actual values of the estimates aren't important as long as a "2" story is twice as valuable as a "1" story. And rather than reflecting the hours programmers work, as cost velocity does, value velocity actually reflects productivity. Remember, productivity equals output/time. Value estimates are a much better indication of output than cost estimates are.
Value velocity isn't perfect. Although it reflects productivity, it doesn't measure it. As with cost estimates, the quality of your estimates affects your result. Your estimates don't need to be accurate, or even expressed in real-world numbers, but they do need to be fairly consistent, so that a "2" really is twice as valuable as a "1". To keep them consistent, you probably need a stable group of people, such as on-site customers, creating these estimates.
Still, a team with consistent value estimates could graph their value velocity over time and see how their productivity changes. That would let them experiment with new techniques ("Let's switch pairs every 90 minutes! Now once a week!") and see how each one affects productivity. If balanced with actual measures of value and some sort of defect counting, this could be a powerful tool.
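Here's a sketch of what that comparison might look like, with one value-velocity figure per iteration. The history and the iteration at which the team changed its pairing schedule are made up for illustration.

```python
# Hypothetical value-velocity history, one number per iteration.
# Suppose the team changed its pairing schedule starting at iteration 6.
history = [21, 24, 22, 25, 23, 27, 29, 28, 31, 30]
before, after = history[:5], history[5:]

def average(values):
    return sum(values) / len(values)

print(f"before change: {average(before):.1f} value points per iteration")
print(f"after change:  {average(after):.1f} value points per iteration")
```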
Unanswered Questions
I do see some challenges. First, stories don't always have value on their own. New products, especially, often require multiple stories to create one feature with recognizable value. One solution would be to just estimate and score features rather than stories. This is nice because it focuses the team's attention on creating shippable features, but it could create hills and valleys in your value velocity, since a feature's entire value would land in the iteration that finishes it. Another option would be to pro-rate each feature's estimate across all of the stories required to deliver it.
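A quick sketch of the pro-rating idea, with a hypothetical feature and made-up numbers: divide the feature's value estimate evenly across the stories that deliver it, so each story carries a share of the value instead of the whole amount landing at once.

```python
# Hypothetical feature worth 8 value points, delivered by three stories.
feature_value = 8
stories_in_feature = ["data model", "import screen", "validation rules"]

# Each story carries an equal share of the feature's value.
share = round(feature_value / len(stories_in_feature), 2)
prorated = {story: share for story in stories_in_feature}
print(prorated)
# {'data model': 2.67, 'import screen': 2.67, 'validation rules': 2.67}
```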
Second, some types of stories don't provide value in the traditional way. What's the value of fixing a nasty crash bug? The entire value of the software? Or nothing at all? What about the value of fit and finish, such as adjusting color schemes and making sure all your pixels line up neatly? Will these stories' value be so low that they are no longer scheduled? One fit and finish story may not have value, but there is often value in having a polished presentation.
Third, value velocity is just as vulnerable to gaming as cost velocity is... perhaps more so. I'm not sure how to prevent this.
The jury's still out on the best way to address these challenges. Despite the uncertainties, I think value velocity has a lot of potential. I'm looking forward to learning more about how this approach works in practice. Give it a try and let me know how it works for you.