Rhythmic experimentation defines Scrum. A good Sprint experiment seeks to improve important metrics, such as increasing velocity or decreasing bug count. Some managers claim consistent velocity is important. Percent velocity deviation, σ(V)/E(V), is a reasonable metric for comparing teams’ consistency. However, software companies usually look for innovation and profitability. Staid, old companies recreating boring stuff can get very consistent velocity. When innovating teams are asked for consistent velocity, some may game the metric by padding estimates, or stop innovating. Therefore, use consistency only as one of a collection of metrics that describe a team’s behavior, never as a target.
Client: Our company is looking into how best to represent velocity variance across a collection of Scrum Teams. Do you think that relative standard deviation is the best representation, or would you recommend another way to show velocity variance?
Dan Greening: If you are interested in velocity variance, you’ll want to use the percent standard deviation of velocity. Standard deviation is the square root of the variance, and percent standard deviation is (standard deviation) / (mean), the σ(V)/E(V) above. Think of standard deviation as, roughly, the average difference between a sprint’s velocity and the mean sprint velocity. Since story-point scales vary from team to team, you want this standard deviation expressed as a percentage of the team’s mean; that way you can compare velocity deviation between teams.
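To make the arithmetic concrete, here is a minimal sketch in Python of percent velocity deviation, σ(V)/E(V), computed for two hypothetical teams. The team names and sprint velocities are invented for illustration.

```python
# Percent velocity deviation, sigma(V) / E(V), for hypothetical teams.
# Velocities are illustrative story-point totals per sprint.
from statistics import mean, pstdev

sprint_velocities = {
    "Team A": [21, 23, 20, 22, 24],   # fairly consistent
    "Team B": [13, 30, 8, 25, 18],    # highly variable
}

for team, velocities in sprint_velocities.items():
    avg = mean(velocities)                 # E(V): mean sprint velocity
    sigma = pstdev(velocities)             # sigma(V): standard deviation
    percent_deviation = 100 * sigma / avg  # scale-free, so comparable across teams
    print(f"{team}: mean = {avg:.1f} points, deviation = {percent_deviation:.0f}%")
```

Because the result is expressed as a percentage of each team’s own mean, it doesn’t matter that the two teams use different story-point scales.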
However, when managers have asked teams to “reduce the velocity standard deviation,” the outcome has been bad. Velocity deviation measures estimation accuracy, assuming the team doesn’t game the metric (which is easy to do, even innocently). There are many reasons estimation can be inaccurate. One, of course, is an inexperienced or undisciplined team. But another reason might be a team trying a new process such as pair programming. Or the team may be innovating. Or the team might be learning new skills. These “good” things, which all help our client succeed in the market, will decrease estimation accuracy and increase velocity deviation.
So, as with virtually all metrics, we don’t want to give teams a goal to “reduce the velocity deviation,” because it becomes a perverse incentive for behaviors we don’t want. Here’s what we’ve seen when “consistent velocity” is rewarded: sandbagging (padding the estimates), lack of innovation, no process changes, and so on. I’ve seen teams that actually thought their managers wanted them to pad estimates so they could achieve their commitments, and so they did.
Velocity deviation is valuable as an indicator, and along with a collection of other metrics it can help spark a conversation with team members. It can be helpful in finding teams that need help. Incentivizing it is the part I’m warning against. Even the common perception at some of our clients that “consistent velocity is good” seems like the wrong thing to worry about.
Why would we even care about estimation accuracy? The main reason might be that we want better forecasting. However, most clients have a much bigger problem: only roughly half the Scrum teams I’ve coached have a Backlog Forecast Horizon larger than zero. In other words, the rest have no estimated backlog to which we could apply the team’s average velocity, so good estimation accuracy wouldn’t help their forecasting anyway. Those teams have nothing to apply a consistent velocity to, other than helping the team decide how much to take into its Sprint. But if that’s the only thing left, I’d say, “Hey team, how about just focusing on the top stuff on the backlog?”
Backlog Forecast Horizon is an important metric. As product owners seek to increase their Backlog Forecast Horizon, their teams get a better vision of what they are building. They may come up with seemingly “easy” ways to increase the forecast horizon, such as getting the team to estimate big, chunky stories. The funny thing is that this form of “gaming” actually improves the team’s longer-term perspective. More power to them!
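For readers who want a concrete picture, here is a minimal sketch in Python of one way to read the Backlog Forecast Horizon: the number of sprints of estimated, refined backlog ahead of the team at its average velocity. This interpretation, the item estimates, and the velocity figures are assumptions for illustration, not a definition from this article.

```python
# One assumed reading of Backlog Forecast Horizon: sprints of estimated,
# refined backlog ahead at the team's average velocity. All numbers are
# illustrative.
from statistics import mean

recent_velocities = [21, 23, 20, 22, 24]      # story points completed per sprint
estimated_backlog = [8, 5, 13, 3, 8, 20, 13]  # story-point estimates on refined items

average_velocity = mean(recent_velocities)
horizon_in_sprints = sum(estimated_backlog) / average_velocity

print(f"Backlog Forecast Horizon ~ {horizon_in_sprints:.1f} sprints")
# A horizon of zero (an empty estimated backlog) means there is nothing to
# forecast, no matter how consistent the velocity is.
```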
I hope this sheds some light on the velocity deviation metric, the forecast horizon metric, and their nuances.
If you have questions, send us an email for private reply: [email protected].