We can forecast even when no historical data exists, if we use our experience and judgment. In Part 1 of our probabilistic forecasting series we looked at how uncertainty is presented; in Part 2 we looked at how uncertainty is calculated. Both of those parts presumed historical data was available.
We recommend adopting these practices to get good estimates from experts:
- Estimate as a group to uncover risks that may expand the range of uncertainty (use Planning Poker or other anchor-bias-reducing mechanisms to help expose differences).
- Estimate using a range, not a single value.
- Coach experts to estimate using ranges to combat their particular optimistic and pessimistic biases.
Range estimates must be wide enough that everyone in the group feels that the real value is within the range, as in “95 times out of 100 this task should take between 5 and 35 days.”
People can learn to be good estimators, even though most people perform estimation poorly when faced with uncertainty (see “Risk Intelligence: How to Live with Uncertainty” by Dylan Evans and “How to Measure Anything” by Douglas Hubbard). Both authors found that practicing picking a range that most likely contains the actual value for known problems (the wingspan of a Boeing 747, or the miles between New York and London, for example), then giving experts feedback on the actual answer, increased estimation accuracy. Practice helps resolve personal pessimistic and optimistic biases.
When estimating how long IT work will take, teams should provide a lower and an upper bound. When a project of sequential stories needs forecasting, it’s simple: the project forecast range runs from the sum of the lower bounds to the sum of the upper bounds. However, few large projects involve completing stories strictly in sequence. If you have multiple teams, people working in parallel, or complex dependencies, a simple sum doesn’t work (not to mention the unlikely luck of every piece of work landing at its lower bound or its upper bound). Most projects need a more powerful technique for accurate forecasting.
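For the simple sequential case, the arithmetic is just two sums. A minimal sketch, using hypothetical story estimates (the story ranges below are invented for illustration):

```python
# Hypothetical range estimates (in days) for three sequential stories,
# each expressed as a (lower_bound, upper_bound) pair.
estimates = [(5, 35), (3, 10), (8, 20)]

# Sequential project: best case is the sum of lower bounds,
# worst case is the sum of upper bounds.
lower = sum(low for low, _ in estimates)    # 5 + 3 + 8  = 16
upper = sum(high for _, high in estimates)  # 35 + 10 + 20 = 65

print(f"Project forecast: {lower} to {upper} days")
```

The width of that result (16 to 65 days) also shows why the simple sum is of limited use: the range is wide, and it says nothing about how likely any point within it is.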
Monte Carlo simulation can responsibly forecast complex projects, even if the only data you have is expert opinion. When Monte Carlo simulation is performed properly, we can propagate the uncertainty of the different components into a responsible project forecast. For example, a statement like “We have an 85% chance of finishing on or before 7th August 2014” is mathematically supportable.
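The idea can be sketched in a few lines: sample a plausible duration from each story’s range many times, sum each sample into a project total, and read the desired confidence level off the sorted totals. This is a simplified sketch, not a full tool; the story ranges are invented, and drawing uniformly within each range is the simplest assumption when only a lower and upper bound are known (a later part of this series looks at more realistic distributions within the range).

```python
import random

# Hypothetical expert range estimates (in days) for five stories,
# each a (lower_bound, upper_bound) pair covering ~95% of outcomes.
story_ranges = [(5, 35), (3, 10), (8, 20), (2, 6), (10, 25)]

def simulate_once(ranges):
    # One possible project outcome: draw a duration for each story
    # (uniform within its range -- a simplifying assumption) and sum them.
    return sum(random.uniform(low, high) for low, high in ranges)

def forecast(ranges, trials=10_000, confidence=0.85):
    # Run many simulated projects, sort the totals, and take the
    # value below which `confidence` of the outcomes fall.
    totals = sorted(simulate_once(ranges) for _ in range(trials))
    return totals[int(confidence * trials)]

print(f"85% of simulated outcomes finish within {forecast(story_ranges):.1f} days")
```

Converting that day count into a calendar date against team availability gives a statement of the “on or before” form above. Note that the result always lies between the sum of the lower bounds and the sum of the upper bounds, but is far narrower than that naive range at any useful confidence level.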
In the next part of our probabilistic forecasting series, we will look at the likelihood of values within a range, how that can help narrow our forecast risk, and why work-estimate ranges follow predictable patterns that help us be more certain.