Strict Predictability of Cost and Schedule
Learn how strict predictability in terms of cost/schedule is supported by Agile.
We'll cover the following
- Predictability support: Story point assignment
- Predictability support: Velocity computation
- Predictability support: Small stories
- Predictability support: Up-front population, refinement, and estimation of the product backlog
- Predictability support: Short iterations
- Predictability support: Release burndown
- Predictability support: Accounting for variability in velocity
If you need strict predictability of cost and schedule for an exact, fixed feature set, that predictability becomes achievable only after the exact feature set has been defined, which usually happens 10–30% of the way into a release cycle.
These key Agile practices support strict predictability:
- Story point assignment
- Velocity computation
- Small stories
- Up-front population, estimation, and refinement of the product backlog
- Short iterations
- Release burndown
- Accounting for variability in velocity
If you don’t need strict predictability of cost and schedule, you might skip ahead to the next lesson, “Strict Predictability of Feature Set.” However, some of the concepts in this lesson are referenced in later lessons, so you might at least skim the headings before moving on.
Predictability support: Story point assignment
Direct estimation of effort is subject to issues of both bias and subjectivity (McConnell 2006). Bias is the intentional adjustment of an estimate in a desired direction. Subjectivity is the unintentional adjustment of an estimate due to wishful thinking or insufficient skill in estimation. The history of software development is that estimates are nearly always optimistic, producing a systemic tendency for individuals and teams to underestimate.
Story points are useful in part because they are not subject to bias. Instead of estimating effort directly, teams use story points to assign relative sizes to work items. People will often have a conversion factor in mind from hours to story points when they assign story points, but errors in those conversion factors don’t undermine the estimate because of the way that story points are used. Story points are used to compute velocity, which is calculated empirically, based on actual performance. A team might be optimistic in thinking, “We can complete 100 story points this sprint.” At the end of the sprint when they’ve completed 50 story points rather than 100, their velocity is 50, not 100, and that’s the number that is used for future planning.
Predictability support: Velocity computation
The most common use of velocity is for sprint planning, one sprint at a time. An equally valuable use of velocity is to support predictability. If a team has been working at a sustainable pace and completing 50 story points per sprint for the past 3 sprints (an average velocity of 50), the team can use its average velocity to predict when it will deliver the planned feature set.
Suppose your company is planning a release that’s 12 months out and consists of 1,200 story points. A 12-month schedule allows 26 biweekly sprints. The team works for 8 weeks (4 sprints) and sees an average velocity of 50 story points per sprint. At that time it’s valid to predict that the team will require 1,200/50 = 24 sprints to complete the planned work. The team can likely deliver that feature set in its one-year timeframe.
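The arithmetic above can be sketched in a few lines (the numbers are the illustrative ones from the text):

```python
# Forecast sprints needed from observed average velocity.
import math

total_story_points = 1200   # planned release scope
average_velocity = 50       # empirical, from the first 4 sprints
sprints_available = 26      # biweekly sprints in a 12-month release

sprints_needed = math.ceil(total_story_points / average_velocity)
print(sprints_needed)                       # 24
print(sprints_needed <= sprints_available)  # True: the plan fits
```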
There are a few ifs, ands, or buts on this statement. The stories that are used to calibrate the team’s velocity need to be 100% complete—they must fully meet a rigorous Definition of Done. Also, the team can’t be accumulating technical debt that it will need to pay off later in the release cycle, because that will drag down its velocity in later sprints. The projection of velocity needs to account for vacation and holiday schedules. Plans need to account for any work still needed after Definition of Done, such as User Acceptance Test, System Test, and so on. The velocity must also account for the sprint-to-sprint variability shown by the team (more on that later). But compared to traditional Sequential project estimation, the ability of a team to produce an empirically based calibration of its productivity early into a release cycle—and to use that to predict a completion date—is a powerful capability.
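One of the adjustments above, accounting for vacation and holiday schedules, can be sketched as a capacity correction; the 5% loss figure here is purely an assumption for illustration:

```python
# Hedged sketch: stretch a raw sprint forecast to cover known capacity
# losses such as vacations and holidays (5% loss is an assumed figure).
raw_sprints_needed = 24
capacity_loss = 0.05  # fraction of team capacity lost to time off

adjusted_sprints = raw_sprints_needed / (1 - capacity_loss)
print(round(adjusted_sprints, 1))  # 25.3
```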
Predictability support: Small stories
As discussed in the “More Effective Agile Measurement” chapter, keeping stories small supports measurement of progress on Agile projects.
Predictability support: Up-front population, refinement, and estimation of the product backlog
A team that needs strict predictability will need to populate the product backlog with the release’s full set of stories up front, that is, adopt a more Sequential approach to populating the backlog.
They don’t need to refine the stories in as much detail as they would in a full Sequential approach. They need to refine them enough to be able to assign story points to each backlog item, which is more than they would refine them up front in a typical Agile approach. Then they actually assign story points to each backlog item, which is known as “story pointing the backlog.”
It’s difficult to elaborate every single story into enough detail to support meaningful story point assignments on the 1–13 scale early in a project. I’ll provide suggestions about how to address that issue later in this chapter.
Predictability support: Short iterations
As discussed in the “More Effective Agile Measurement” chapter, the shorter your iterations are, the more quickly you develop productivity data that can be used to forecast a team’s progress.
Predictability support: Release burndown
Monitoring ongoing progress against the team’s initial prediction is handled organically, in the normal flow of work. The team uses a release burndown to track the number of story points completed each sprint. If the team’s velocity begins to change from its initial average of 50, it can inform stakeholders and adjust plans accordingly.
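The tracking described above can be sketched as a simple burndown calculation; the sprint results below are hypothetical:

```python
# Release burndown sketch: remaining story points after each sprint,
# plus a forecast of sprints left at the current average velocity.
total_scope = 1200
completed_per_sprint = [50, 48, 53, 49]  # hypothetical sprint results

remaining = total_scope
for sprint, done in enumerate(completed_per_sprint, start=1):
    remaining -= done
    avg_velocity = sum(completed_per_sprint[:sprint]) / sprint
    forecast = remaining / avg_velocity
    print(f"Sprint {sprint}: {remaining} points left, "
          f"~{forecast:.1f} sprints to go at current velocity")
```

If the running average drifts away from the original calibration of 50, the forecast shifts with it, which is exactly the early warning stakeholders need.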
Predictability support: Accounting for variability in velocity
Any team’s velocity will show variability from sprint to sprint. A team that averages 50 story points per sprint might actually have completed sprints of 42, 51, 53, and 54 story points. Using the team’s average velocity to predict a long-range outcome therefore carries some variability, and with it some risk.
The team with those four sprints showed a sample standard deviation of 5.5 story points vs. its average of 50. You can calculate a confidence interval based on the number of sprints completed to estimate risk to the team’s ultimate, whole-project velocity. And you can update that as the team completes more sprints and gains more experience.
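The calculation above can be sketched as follows, using the four sprint values from the text; the t-value for a rough 90% interval with three degrees of freedom is an assumption chosen for illustration:

```python
# Sketch: sample standard deviation of velocity and a rough confidence
# interval for the team's long-run average velocity.
import math
from statistics import mean, stdev

velocities = [42, 51, 53, 54]  # four completed sprints
avg = mean(velocities)         # 50.0
sd = stdev(velocities)         # sample standard deviation, ~5.5

# Rough 90% interval using t ~= 2.35 for n-1 = 3 degrees of freedom.
n = len(velocities)
margin = 2.35 * sd / math.sqrt(n)
low, high = avg - margin, avg + margin
print(f"long-run velocity likely between {low:.1f} and {high:.1f} points/sprint")
```

Rerunning this after each completed sprint narrows the interval as the team accumulates more data.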
The illustration below shows an example of using initial velocity and a confidence interval to project potential low and high velocities: