The first law of development dictates the basic relationship between burn down rate, iteration time, and the number of story points being processed.
I start this post with the fourth “law of development physics”: increasing variability in the development process always increases iteration time and reduces burn down rates. In a simplified model, this modifies the first law to approximately:
Iteration Time = Story Points in Progress / Burn Down Rate + n·σ

where σ is the standard deviation of the iteration time (an absolute measure) and n normalizes the variability relative to the best-case performance at a given story point level:

n = (Story Points in Progress − 1) / (Story Points in Progress × Burn Down Rate)
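As a minimal sketch, the fourth law is easy to compute directly. The function and sample numbers below are my own illustration, not from the post:

```python
def iteration_time(points_in_progress: float, burn_down_rate: float, sigma: float) -> float:
    """Fourth-law approximation: best-case time plus a variability penalty of n * sigma."""
    best_case = points_in_progress / burn_down_rate
    n = (points_in_progress - 1) / (points_in_progress * burn_down_rate)
    return best_case + n * sigma

# Illustrative numbers: 20 story points in progress at 2 points/day, sigma = 1 day.
# Best case is 10 days; variability adds n * sigma = 0.475 days.
print(iteration_time(20, 2.0, 1.0))  # -> 10.475
```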
The effect of variability, then, is to push iteration time above its best-case value and to decrease the burn down rate of otherwise equivalent processes. Using the first law scenario, the impacts are easily in the range shown below:
Also, the magnitude of variability relative to the best-case iteration time greatly impacts the resulting iteration times from the process:
And simulation shows that variability itself often grows as the number of story points in a process increases, making the impact even more pronounced in large iterations:
Variability is inevitable. Understanding, controlling, and striving to eliminate variability is essential to maintaining a healthy development process. This is the focus of Six Sigma initiatives: when a process operates at a Six Sigma level of control, it produces fewer than 3.4 defects per million opportunities. So, let’s define the causes of variability and assess ways to limit the impact of each. Variability in a development process generally comes from:
- Natural Variability
- Road Blocks
- Re-work or Quality
- Setup
- Availability
- Bottlenecks
These are discussed separately below.
Natural Variability – this is a catchall bucket for variability due to normal human behavior. It includes inherent differences in the nature of completing similar tasks, differences in skills and capabilities, differences in the time it takes to problem solve, and other random impacts. I include here both variability between different people doing similar activities and variability between different activities for a single person.
Probability theory shows that breaking work into smaller independent elements reduces the impact of natural variability. Without all the math, consider that a single task that averages 1 hour is far more likely to stretch to 100 minutes (about 15% probability*) than 10 smaller 6-minute tasks with the same distribution are to grow by the same percentage (about 0.1% probability). By breaking a big job into smaller pieces, we reduce the overall variability by orders of magnitude and make the development process more predictable.
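To make this concrete, here is a minimal Monte Carlo sketch (my own, not from the post) using the Erlang-2 distribution mentioned in the footnote. I read the 10-task case as their combined time exceeding the same 100 minutes; the exact percentages depend on that interpretation:

```python
import random

def erlang2(mean: float) -> float:
    # Erlang-2 task time: the sum of two exponentials, each with mean mean/2.
    # Per the footnote, this distribution has a squared coefficient of variation of 1/2.
    return random.expovariate(2 / mean) + random.expovariate(2 / mean)

TRIALS = 100_000
one_big = sum(erlang2(60) > 100 for _ in range(TRIALS)) / TRIALS
ten_small = sum(sum(erlang2(6) for _ in range(10)) > 100 for _ in range(TRIALS)) / TRIALS
print(f"P(one 60-minute task runs past 100 minutes)         ~ {one_big:.3f}")
print(f"P(ten 6-minute tasks together run past 100 minutes) ~ {ten_small:.4f}")
```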
Iterative agile development, then, by its nature helps reduce natural variability by forcing larger projects to be delivered in smaller pieces. Other ways to reduce it include skills training, modular design, and the use of standards and templates.
Road Blocks – this category is the equivalent of machine breakdowns in manufacturing and can have a large impact on process times. The development process can stop because tools break (a computer crashes), but more often it is blocked because critical problems and issues cannot be quickly resolved. Unlike natural variability, whose impact tends to average out the longer one works, road blocks can grow large and out of control quickly.
It is important to note that a few big road blocks can have a greater impact on project burn down rates than the equivalent downtime spread across many smaller ones. A two-day road block yields the same availability as eight two-hour road blocks; however, the longer downtime in the first scenario has downstream impacts that are far greater. Other activities dependent on the delayed work are likely to be stopped by the one large road block, whereas multiple shorter road blocks typically allow other story point work to continue in downstream development.
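A toy model (my own numbers, not the post’s) shows why: downstream work is starved only by the portion of an upstream outage that outlasts the buffer of ready work between stages:

```python
BUFFER_HOURS = 3.0  # hypothetical hours of ready work queued between stages

def downstream_idle(outage_hours: list[float]) -> float:
    # Each outage starves downstream work only for the time it outlasts the buffer.
    return sum(max(0.0, outage - BUFFER_HOURS) for outage in outage_hours)

print(downstream_idle([16.0]))     # one two-day (16 work-hour) road block -> 13.0 idle hours
print(downstream_idle([2.0] * 8))  # eight two-hour road blocks -> 0.0 idle hours
```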
So, identify road blocks early – preferably while they are still only risks – and implement mitigation approaches to keep them small. Don’t just track risks; actively manage them and implement strategies that eliminate them while alternatives are still available. The format of daily stand-up meetings also helps limit road blocks by providing a daily forum where one key focus is openly identifying them so that timely resolutions can be applied. Close collaboration likewise limits variability from this cause by surfacing issues early.
Re-work or Quality – To deliver quality, we look to the customer to understand what is required and we look to the development process to ensure it is delivered. For the purpose of this discussion, let’s focus on quality as a measure of delivering features without the need for rework or fixes. The critical thing to note here is that the impact of rework is highly non-linear in the fraction of work that is reworked. With no rework, the impact is zero; with a rework fraction p, each story point requires on average 1/(1 − p) passes, so as we approach 100% rework the effort grows to infinity and nothing can be completed. For our fourth law equation, the standard deviation of burn down rate can be significantly reduced as the rework percentage decreases (from 50% to 33%) in the example below:
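The non-linearity follows from a simple model: if each story point fails and must be reworked with probability p, the number of passes it needs is geometric. A quick sketch (my own, using the rework fractions from the example):

```python
def expected_passes(p: float) -> float:
    # Mean of a geometric number of passes: 1 / (1 - p).
    return 1 / (1 - p)

def passes_sd(p: float) -> float:
    # Standard deviation of that geometric count: sqrt(p) / (1 - p).
    return p ** 0.5 / (1 - p)

for p in (0.50, 0.33):
    print(f"rework {p:.0%}: mean passes {expected_passes(p):.2f}, sd {passes_sd(p):.2f}")
# rework 50%: mean passes 2.00, sd 1.41
# rework 33%: mean passes 1.49, sd 0.86
```

Cutting rework from 50% to 33% shrinks both the average effort per story point and, more sharply, its standard deviation, which is exactly the variability the fourth law penalizes.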
The implication is that errors should be eliminated at the source. Thus slogans like “build in quality” and “zero defects”. Often the focus in agile is on mistake proofing the development and build activities using techniques like test driven development (TDD) and pair programming. These help to build quality code, but used alone, they lose connection with the voice of the customer.
Quality and mistake proofing need to spread across all activities of development: requirements, design, development, test, and deployment. Thus, we start and end with the voice of the customer in agile development. The customer gets the first word in iteration planning and the last word at closeout and retrospective meetings. Customers typically only know whether you’ve met their requirements when they see the results, so create user stories to catalog requirements, grow them throughout the development process, and validate them as soon as possible by releasing product to customers.

In product design, the concept of design for manufacturability ensures that product designs can be built in a quality manner. In software development, where product manufacturing is the creation of new software, the concept translates to design for extensibility: using modular design, design patterns, coding standards, and comments to simplify maintaining, extending, and refactoring code.

Quality testing begins with defining acceptance criteria before the build process and ends with user agreement that the criteria have been met. Acceptance and unit tests are automated whenever possible, and the test base is grown throughout each iteration. xUnit tools, page emulators, and other test harnesses allow developers to literally “build in” testing, and therefore quality. Continuous integration, automated deployment, and packaging product in each iteration limit variability in release activities.
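As one small illustration of “building in” quality, here is what an automated acceptance test might look like with pytest; the feature and acceptance criterion below are hypothetical, not from the post:

```python
# test_checkout.py -- run with pytest. The feature and the acceptance criterion
# below are hypothetical examples, not taken from the post.

def apply_discount(total: float, code: str) -> float:
    """Apply a 10% discount for the SAVE10 code; otherwise leave the total unchanged."""
    return round(total * 0.9, 2) if code == "SAVE10" else total

def test_discount_meets_acceptance_criterion():
    # Criterion agreed with the customer before the build:
    # a $100.00 cart with SAVE10 totals $90.00.
    assert apply_discount(100.00, "SAVE10") == 90.00

def test_invalid_code_leaves_total_unchanged():
    assert apply_discount(100.00, "BOGUS") == 100.00
```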
Setup – this is delay or added work resulting from a team or an individual changing focus; it is sometimes known as ramp-up. There is obvious setup at the beginning of a project, when new people are brought on and new tools are configured, but setup is often overlooked within other development activities: delays also occur any time focus changes – between iterations, when switching between unrelated activities, when working on multiple projects, at handoffs, and when tasks are fragmented. Each of these requires time to physically and mentally prepare for the new focus. There may be different tools and programming languages to use, different requirements to understand, different standards to apply, and other adjustments to make before the new task can start.
The mitigation approach here perhaps differs from that used in manufacturing. Flexible people, like machinery with short setups, are essential to agile’s success: they shift focus faster and maintain better morale. But flexibility alone has limited impact on maximizing burn down rates, and all people have a limited ability to flex. To maximize burn down, first focus on keeping people as fully dedicated to one project or initiative at a time as possible. Plan each iteration based on priorities at the start, then do not change those priorities within the iteration. Maximize the span of control each person has in the development process so that story point features are delivered with few handoffs. If multiple people are needed to deliver a feature, they should collaborate closely from the start to ensure understanding prior to any handoffs. Finally, keep standards as similar between projects as possible so that transitioning, when needed, is less difficult.
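A toy model of the cost (assumed numbers and switching rule, not the post’s) makes the dedication argument concrete:

```python
def productive_hours(week_hours: float, projects: int, setup_hours_per_switch: float) -> float:
    # Assume each extra project forces two context switches per week (away and back),
    # and each switch costs a fixed ramp-up. All numbers here are illustrative.
    switches = max(0, projects - 1) * 2
    return week_hours - switches * setup_hours_per_switch

for projects in (1, 2, 3):
    print(f"{projects} project(s): {productive_hours(40, projects, 1.5):.0f} productive hours/week")
# 1 project(s): 40,  2 project(s): 37,  3 project(s): 34
```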
Availability – variability in availability can be caused by vacation, sickness, inability to concentrate, interruptions, and by the timing and percentage of a person’s allocation to a project. The impact is similar to that of road blocks, and the mitigation strategies should be fairly obvious. Of note here: we should strive to maximize the team’s or organization’s availability, not each individual’s. So this should not be taken as justification for skipping daily stand-ups or impromptu collaboration meetings in the name of minimizing interruptions.
Bottlenecks – these are typically caused by a specialty skill that is in short supply or over-allocated. Approvals also often fall into this category in development projects. The effect is to reduce the team’s burn down rate to the capacity of the bottleneck. Cross-training, collaboration, and self-directed teams are proactive ways to mitigate the impact.
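A one-line calculation captures the impact: the team’s effective burn down rate is the minimum of its stage capacities. The stages and numbers below are hypothetical:

```python
# Hypothetical per-iteration capacities in story points; approval is the bottleneck.
stage_capacity = {"requirements": 30, "development": 25, "approval": 8, "test": 20}
bottleneck = min(stage_capacity, key=stage_capacity.get)
print(f"Effective burn down rate: {stage_capacity[bottleneck]} points/iteration "
      f"(limited by {bottleneck})")
```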
Credit: Again, this post is developed from Wallace Hopp and Mark Spearman’s great book Factory Physics: Foundations of Manufacturing Management, McGraw-Hill/Irwin, 1996.
* For finish rates in a process, an Erlang-2 distribution better approximates the variability than a normal distribution. This distribution always has a coefficient of variation squared equal to 1/2.