Review

An amazing book. I can’t help thinking that this book contains many of the ingredients of the secret sauce that would make my organisation work much more effectively and successfully. The author himself states that it is lacking in practical implementation detail—this is appropriate, as the concepts apply to many different situations—but I would dearly love to read how people have gone about it.

Stats
Dates: 10 February 2014 – 24 March 2014
Time spent reading: 8 hours, 30 minutes
Highlights: 142
Comments: 17
App used: Readmill
Highlights

It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.

They believe that they should always strive to make actual performance conform to the original plan. They assume that the benefit of correcting a deviation from the plan will always exceed the cost of doing so. This places completely unwarranted trust in the original plan, and it blocks companies from exploiting emergent opportunities. Such behavior makes no economic sense.

We live in an uncertain world. We must recognize that our original plan was based on noisy data, viewed from a long time-horizon.

blindly insisting on conformance to the original plan destroys economic value.

Few developers realize that queues are the single most important cause of poor product development performance.

inventory is financially invisible in product development. We do not carry partially completed designs as assets on our balance sheet; we expense R&D costs as they are incurred. If we ask the chief financial officer how much inventory we have in product development, the answer will be, “Zero.”

Once we quantify the cost of delay, we become aware of the cost of queues. Once we recognize the cost of queues, we are motivated to measure and manage them. Without a cost of delay, queues appear to be free and therefore, unworthy of attention.

But how do we prevent all these small review meetings from driving up overhead? We conduct these review meetings on a regular time-based cadence. Every Wednesday afternoon at 1:00 pm, we review all the drawings completed in the last week. There is no need for a meeting announcement and no need to coordinate schedules. Meetings that are synchronized to a regular and predictable cadence have very low set-up costs. They contribute very little excess overhead.

The current orthodoxy focuses on planning and managing timelines, instead of the more powerful approach of managing queues.

I mention this to encourage you to look beyond the manufacturing domain for approaches to control flow.

In general, it is best to delay the project with a low cost of delay. This suggests that we should not prioritize on the basis of project profitability, but rather on how this profitability is affected by delay. Of course, this can only be done when we know the cost of delay, information that 85 percent of developers do not have.

Andrew Doran

Interesting. Don’t prioritise projects based on ROI; use ‘cost of delay’.

TOC deserves respect because it has been an extraordinarily useful tool for making people aware of queues. However, the time has come to go a bit deeper. The issue is not queues, it is the economics of queues. Rather than saying queues are universally bad, we must treat them as having a quantifiable cost. This allows us to compare both the benefits and the costs of an intervention to reduce queue size.

WIP constraints exploit the direct relationship between cycle time and inventory, which is known as Little’s Formula.
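Little’s Formula can be stated in two lines of code. A minimal sketch with hypothetical numbers (30 items in process, 5 completed per week):

```python
# Little's Formula: average cycle time = average WIP / average throughput.
# The numbers below are hypothetical, for illustration only.
wip = 30          # items currently in process
throughput = 5    # items completed per week

cycle_time = wip / throughput
print(cycle_time)  # 6.0 — each item spends about 6 weeks in the system

# A WIP constraint that halves inventory halves expected cycle time:
print((wip / 2) / throughput)  # 3.0
```

This is the direct relationship the highlight refers to: constrain WIP and cycle time follows.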

Perhaps the most obvious structural feature of this book is that it is organized into 175 principles.

None of us have time to read everything we would like to read.

Our primary goal in product development is to make good economic choices.

We simply have no business trading money for cycle time if we do not know the economic value of cycle time.

Knowing this delay cost enables us to decide what we are willing to pay to meet this milestone.

reducing risk is so centrally important to product development that it is indispensable for us to quantify its economic impact.

making activities more efficient is much less important than eliminating inactivity. In product development, our greatest waste is not unproductive engineers, but work products sitting idle in process queues.

Today, no competent manufacturer believes that high utilization rates will optimize manufacturing.

After almost 30 years of analyzing product development trade-offs, I am struck by the frequency with which U-curve optimizations occur. Such U-curves are common in multivariable problems.

U-curve optimizations do not require precise answers. U-curves have flat bottoms, so missing the exact optimum costs very little. For example, in Figure 2-2, a 10 percent error in optimum batch size results in a 2 to 3 percent increase in total cost. This insensitivity has a very important practical implication. We do not need highly accurate information to improve our economic decisions.
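The flat bottom is easy to demonstrate with a classic economic-order-quantity cost function: holding cost grows with batch size while transaction cost shrinks with it. The parameters below are hypothetical, and the exact penalty differs from the book’s Figure 2-2, but the insensitivity near the optimum is the point:

```python
import math

# Hypothetical U-curve parameters.
holding_per_item = 2.0    # cost per item held, per period
transaction_cost = 100.0  # fixed cost per batch
demand = 400.0            # items per period

def total_cost(batch_size):
    # Holding cost rises with batch size; transaction cost falls with it.
    return (holding_per_item * batch_size / 2
            + transaction_cost * demand / batch_size)

# Classic EOQ optimum for this cost function.
optimum = math.sqrt(2 * transaction_cost * demand / holding_per_item)
penalty = total_cost(1.1 * optimum) / total_cost(optimum) - 1
print(f"optimum batch size: {optimum:.0f}")
print(f"cost penalty for a 10% sizing error: {penalty:.2%}")  # well under 1% here
```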

Do not let fear of inaccuracy prevent you from creating economic frameworks.

Originally, we believed a feature was important to 50 percent of the customers and it would take 2 weeks of work. As time passed, we discovered it was important to 1 percent of the customers and would require 2 months of work. This means the economics of the decision have changed by a factor of 200.
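The factor of 200 decomposes into a 50-fold drop in value times a roughly 4-fold rise in cost (taking 2 months as about 8 weeks, an assumption since the quote does not convert the units):

```python
# Reconstructing the quoted factor of 200.
value_shift = 0.50 / 0.01  # feature reach fell from 50% to 1% of customers
effort_shift = 8 / 2       # effort grew from 2 weeks to ~8 weeks
print(value_shift * effort_shift)  # 200.0
```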

To blindly conform to the original plan when it no longer represents the best economic choice is the act of a fool.

we must explicitly measure, and shorten, the time it takes to make a decision. It follows that if most opportunities and obstacles are first visible to people at the lowest level of the organization, then this level should be able to make these decisions.

One organization does this by setting limits on the authority of engineers to buy cycle time. Every engineer is permitted to buy up to 4 weeks of schedule improvement, at a cost of no higher than $500 per week. Their manager has higher authority limits, and the director has even more authority.

Control without participation is control without decision-making delays.

Decision rules make it practical to drive good economic decision making very deep into the organization.

The general principle is that we should make each decision at the point where further delay no longer increases the expected economic outcome.

Projects may achieve 95 percent of their performance objectives quite quickly, but they may struggle to attain the last 5 percent. Should they fully consume their allotted schedule or opt to deliver 95 percent of the value without further delay? The economic question is whether that last 5 percent is truly worth the expense and cycle time that will be consumed. Again, the decision should be made based on marginal economics.

Money we have already spent is a “sunk cost” and should not enter into an economic choice. We should make the choice on marginal economics.

Low-cost activities that remove a lot of risk should occur before high-cost activities that remove very little risk.

Whenever I am told that management makes decisions too slowly, I ask to see the specific proposal that went to management. Invariably, I discover that the cost and benefit of the decision are either unquantified or poorly quantified.

This way of labeling a queue is known as Kendall notation.

Queueing theory originated in 1909 with a paper by a mathematician named Agner Krarup Erlang at the Copenhagen Telephone Company.

developers assume that their cycle times will be faster when resources are fully utilized. In reality, as we shall see later, high levels of capacity utilization are actually a primary cause of long cycle time.

Inventory in product development is both physically and financially invisible. But just because this inventory is invisible doesn’t mean it doesn’t exist. Product development inventory is observable through its effects: increased cycle time, delayed feedback, constantly shifting priorities, and status reporting. Unfortunately, all of these effects hurt economic performance.

Queues delay feedback, and delayed feedback leads to higher costs.

queues delay feedback and raise overhead, even when they are not on the critical path.

In practice, although capacity utilization lies at the heart of queue behavior, we rarely measure it directly. As you shall see throughout this book, we find it more practical to measure factors like queue size and WIP.

if two jobs take the same amount of time, it is better to service the one with the highest delay cost first. If two jobs have the same cost of delay, it is better to process the shortest job first.
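These two rules combine into weighted shortest job first: sequence by cost of delay divided by job duration, highest ratio first. A sketch with hypothetical job data:

```python
# Weighted shortest job first (WSJF): sort by cost of delay / duration,
# highest ratio first. The job data is hypothetical.
jobs = [
    {"name": "A", "cost_of_delay": 10, "duration": 5},  # ratio 2.0
    {"name": "B", "cost_of_delay": 3,  "duration": 1},  # ratio 3.0
    {"name": "C", "cost_of_delay": 8,  "duration": 4},  # ratio 2.0
]

ordered = sorted(jobs,
                 key=lambda j: j["cost_of_delay"] / j["duration"],
                 reverse=True)
print([j["name"] for j in ordered])  # ['B', 'A', 'C'] — the short, urgent job goes first
```

Note that B jumps ahead despite having the lowest absolute cost of delay, because it ties up so little capacity.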

One of the most useful tools for managing queues is the cumulative flow diagram (CFD), depicted in Figure 3-9.

Andrew Doran

Reminds me of a diagram we use in waterfall testing phases showing ‘cumulative defects opened’ and ‘cumulative defects closed’. The distance between the two lines is equal to the number of open defects at that point in time.
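That defect chart is exactly a cumulative flow diagram: the vertical gap between the cumulative-opened and cumulative-closed lines is the queue at that point in time. A sketch with hypothetical weekly counts:

```python
from itertools import accumulate

# Hypothetical weekly counts of defects opened and closed.
opened = [5, 8, 6, 4, 3]
closed = [2, 4, 7, 6, 5]

cum_opened = list(accumulate(opened))  # the CFD's arrival line
cum_closed = list(accumulate(closed))  # the CFD's departure line

# Queue size = vertical distance between the two lines each week.
queue = [o - c for o, c in zip(cum_opened, cum_closed)]
print(queue)  # [3, 7, 6, 4, 2]
```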

tasks are actually not either on or off the critical path; instead, they have probabilities of getting on the critical path. The cost of delay for a task that is not on the critical path is the cost of delay for the project times the probability that the task will get on the critical path.

Andrew Doran

This is genius.

There is another domain of applied statistics called random processes, which are sequences of random variables.

In fact, as Feller points out, there is only a 50 percent probability that the cumulative total will ever cross the zero axis during the last 500 flips. Although the zero value is the most probable value, the variance of the cumulative total grows larger and larger with time.

Andrew Doran

Fascinating.
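Feller’s observation is easy to check with a short simulation: the cumulative total of fair coin flips hovers around zero on average, yet its variance grows roughly in proportion to the number of flips. A sketch:

```python
import random

random.seed(42)

def final_total(n_flips):
    # Cumulative total of fair +1/-1 coin flips after n_flips.
    return sum(random.choice((-1, 1)) for _ in range(n_flips))

def variance_of_total(n_flips, trials=2000):
    totals = [final_total(n_flips) for _ in range(trials)]
    mean = sum(totals) / trials
    return sum((t - mean) ** 2 for t in totals) / trials

var_100 = variance_of_total(100)
var_1000 = variance_of_total(1000)
# Zero remains the most probable value, but the spread keeps widening:
# variance scales roughly linearly with the number of flips.
print(round(var_100), round(var_1000))
```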

Because queues randomly spin out of control, they can drift into a sustained high-queue state that causes grave economic damage. Because they have no memory of how they got there, they will not be gracious enough to quickly drift back to a low-queue state.

Testing is probably the single most common critical-path queue in most R&D organizations. It is also one of the most dangerous, because it is on the critical path near the end of the development process.

almost any specialist can become a queue. This occurs because specialists are scarce resources and they are typically managed for efficiency. Furthermore, the pressure for efficiency leads to late involvement, which can raise costs even more.

We cannot add value without adding variability, but we can add variability without adding value.

There is probably no aspect of product development that is more misunderstood than variability.

Andrew Doran

And so far in this book I haven’t yet grasped what is meant by it in this context. I am hoping for enlightenment!

the amount of variability is actually less important than the economic cost of variability.

We cannot make good economic choices if we only pay attention to probabilities.

An investment makes economic sense when its expected value exceeds its expected cost.

When variability increases, options are more valuable.

Andrew Doran

I’ve not seen the Black-Scholes option pricing model explained so clearly as it is in this section.

Variability is only desirable when it increases economic value. This occurs when the positive tail of the probability distribution extends far enough into a high payoff region to overcome the cost of the negative tail.

Stock options have limited downside and unlimited upside. Under such conditions, we would like to maximize variability. Development projects have unlimited downside and limited upside. Downside has no limit because we can invest unlimited time and expense when we fall short on performance. Upside is limited because once we have sufficient performance to create customer preference, adding more performance produces little incremental profit.

Repeating the same failures is waste, because it generates no new information. Only new failures generate information.

Product developers should clearly distinguish exploratory testing, which should be optimized for information generation, and validation testing, which should be optimized for high success rates.

Thus, we can reduce overall variability by pooling variation from multiple uncorrelated sources.

Andrew Doran

Is this the same thing as a financial portfolio diversification effect?

Andrew Doran

I read on a couple of pages. Yes it is!

let’s say we have nine project teams and nine manufacturing engineers. We could assign a single manufacturing engineer to each team. This would be a good approach if the demand for manufacturing engineering had very low variability. However, if the variability were high, it would be better to operate the nine manufacturing engineers as a single resource pool, combining the variable demand of all nine projects. This would reduce the variability by roughly a factor of three, leading to smaller queues and better response times.
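The “factor of three” is the square-root pooling effect: nine uncorrelated demand streams, pooled, have √9 = 3 times less relative variability, which is the same mathematics as portfolio diversification. A simulation sketch with hypothetical demand numbers:

```python
import math
import random

random.seed(7)

# Nine uncorrelated, identically distributed weekly demand streams (hypothetical).
n_teams, n_weeks = 9, 5000
streams = [[random.gauss(10, 4) for _ in range(n_weeks)]
           for _ in range(n_teams)]

def rel_variability(xs):
    # Coefficient of variation: standard deviation relative to the mean.
    mean = sum(xs) / len(xs)
    std = math.sqrt(sum((x - mean) ** 2 for x in xs) / len(xs))
    return std / mean

single = rel_variability(streams[0])
pooled = rel_variability([sum(week) for week in zip(*streams)])
print(round(single / pooled, 1))  # close to 3.0: pooling nine streams cuts variability ~3x
```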

When our task list becomes very granular, the noise in each estimate is very high compared to the signal. Granular estimates produce good estimates of aggregate scope, but we should never schedule tasks at this level of detail. Instead, it makes more sense to aggregate many small tasks and to schedule them as a group. Aggregation pools variability and improves the signal-to-noise ratio of our estimates.

Forecasting 2 years ahead is not twice as hard as forecasting 1 year ahead; it can be 10 times harder.

The most powerful way to reduce variability in forecasts is to shorten our planning horizons.

As always, we should make decisions on the basis of overall economics, rather than maximizing reuse, which is another proxy variable.

A buffer converts uncertain earliness to certain lateness.

Improving iteration speed is particularly attractive because it is usually cheaper to improve than iteration success rate. In most product development processes, iteration time is dominated by queue time, so our emphasis on reducing queues creates enormous payoff.

When we permit a project to enter the pipeline, it begins accruing cost. Our investment in the project is perishable, and it is affected by the time it takes to pass through the pipeline. If we invest in market research only to have the product crawl forward slowly for 2 years, our market research will be 2 years out of date. We are better off holding this new product in a ready queue, than letting it enter our pipeline.

It is usually not productive to work on old bugs, since they may die of old age or irrelevance before we get around to fixing them. Instead, it is better to hold this inventory in a ready queue, delaying investment until we can allocate serious resources and quickly fix the bug.

batch size reduction is one of the cheapest, simplest, and most powerful ways to reduce variability and queues.

Even worse, increasing WIP increases the need for status reports.

They assume that the long periods of uninterrupted work, created by large batches, improve the efficiency of an individual engineer. In fact, this efficiency gain is local to the engineer, and it often comes at the expense of destroying important feedback loops and lowering overall efficiency.

Give a programmer feedback on their code 24 hours after they wrote it and they will probably remember exactly what they were doing. Give them identical feedback 90 days after they wrote the code and they may barely recognize their own work.

Our problems grow even bigger when a large project attains the status of the project that cannot afford to fail. Under such conditions, management will almost automatically support anything that appears to help the “golden” project.

Figure 5-3 shows the complex web of benefits that arise from the simple change of reducing batch size.

Andrew Doran

Good diagram.

huge improvements in software testing batch size are virtually always triggered by automation of testing. If it takes 24 hours to set up a test, we cannot test every 24 hours. We must invest in reducing transaction cost to enable the use of smaller batches.

Dispersed teams tend to use large batch asynchronous communications. Even when they use small batch e-mail communication, this communication is asynchronous with long delays between a question and a response. This is small batch size deprived of many of its benefits.

B21: The Batch Size First Principle: Reduce batch size before you attack bottlenecks.

One of the most dangerous of all batch size problems is the tendency to pack more innovation in a single project than is truly necessary.

Whereas typical corporate practice emphasizes controlling risk through careful analysis, venture capitalists excel at using funding batch size to control risk.

the practice of working in one phase at a time is so devoid of common sense that engineers seldom follow it,

We should never group all requirements into one large batch because this lets the slowest requirement hold up all of the work.

Consider replacing the project postmortem with a periodic physical exam!

the real art of managing queues is not about monitoring them and setting limits, it lies in what we do when we reach the limits.

It is logical to purge jobs of low value from the queue whenever there is a surplus of high-value jobs. This ensures that we will continue to generate open slots for the other high-value jobs that are constantly arriving.

Many companies have difficulty killing projects once they have started. If they used the sunk cost principle, Principle E17, they would recognize that they should only consider the incremental investment to finish the project compared to its return.

Zombie projects destroy flow. Kill the zombies!

Which project activities are most suited for part-time resources? Those that are most prone to expensive congestion. These are high-variability tasks on the critical path.

W12: The Principle of T-Shaped Resources: Develop people who are deep in one area and broad in many.

a high cost-of-delay job requiring very little resource should go ahead of a low cost-of-delay job requiring a lot of resource, no matter how long the low cost-of-delay job has waited in queue.

once we recognize that queue size determines cycle time, we can easily use WIP constraints to control cycle time. This requires a basic shift in mind-set, a mind-set change that is very difficult for companies that believe a full pipeline is a good thing.

Anyone can be captain in a calm sea.

Figure 7-2 shows how highway throughput behaves as a function of speed. The product of density and speed generates a parabolic curve for highway throughput. This occurs because density decreases linearly with speed. At higher speeds, drivers follow other vehicles at a greater distance to allow sufficient reaction time. This parabolic curve for throughput, originally observed by Bruce Greenshields in 1934, has interesting implications. It shows that throughput is low at both extremes: maximum density and maximum speed. If we wish to maximize throughput, we must operate at the top of the parabola.

Andrew Doran

Very interesting!
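Greenshields’ model makes the parabola concrete: density falls linearly with speed, so throughput, the product of the two, peaks at half the free-flow speed. A sketch with hypothetical road parameters:

```python
# Greenshields' model: density(v) = k_max * (1 - v / v_max), linear in speed,
# so throughput = density * speed is a parabola in speed.
# Parameters are hypothetical.
k_max = 200.0  # vehicles per mile at a standstill
v_max = 60.0   # free-flow speed, mph

def throughput(v):
    return k_max * (1 - v / v_max) * v  # vehicles per hour

best = max(range(0, 61), key=throughput)
print(best, throughput(best))  # 30 3000.0 — the top of the parabola, at half of v_max
```

Both extremes yield zero throughput: at maximum density nothing moves, and at maximum speed the road is nearly empty.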

Readers who have already grown fond of U-curves may wish to turn this figure, or themselves, upside down.

Today, every project manager is already making deliberate choices to pay or not pay premiums for faster service from external resources. We simply have to extend these behaviors to congested internal resources.

Shorter, predictable, and frequent meetings give us all the advantages of small batch size that we discussed in Chapter 5. With more frequent opportunities for coordination and feedback, we simply can’t get very far off track. In contrast, when we hold lengthy meetings at unpredictable times and at infrequent intervals, we lose urgency very quickly.

There is an alternative: scheduling reviews at regular time intervals. For example, Project A may be reviewed on the first Monday of every month. Whatever is available to be reviewed at this date is reviewed. The strength of this approach is that review dates can be predicted months in advance. Everyone knows precisely when the review will occur. If the program experiences a problem, the review occurs anyway. In contrast, with a scope-based approach, programs with slippage are reviewed less frequently than programs that are on time.

For many years, Hewlett-Packard had morning and afternoon coffee breaks. Engineers would emerge from their cubicles at the same time every day as a coffee cart rolled through the development lab. This enabled informal information exchange between teams. If an engineer was working on a tough power-supply design issue immediately before the break, and he saw another engineer that was an expert on power supplies, the conversation would turn to power supplies. The coffee break cross-pollinated knowledge across teams.

Andrew Doran

Great idea; however, I think the take-up in our office would be minimal.

One company adopted an asynchronous software-supported process to handle engineering changes.

Andrew Doran

I've seen this many times. Seldom is it better than getting together in a room with everyone for a group approval.

some companies prioritize jobs based on return on investment (ROI). They assume a high ROI project should have priority over one with a lower ROI. In fact, we should be more interested in how ROI is affected by sequence. The total profit of a high ROI project may be less sensitive to a schedule delay than that of a low ROI. In such a case, the low ROI project should go first. Overall portfolio ROI adjusted for delay cost is more important than individual project ROI.

Andrew Doran

Intro to weighted shortest job first (WSJF). Don't use ROI.

Many companies make the mistake of ranking their projects in priority order and telling the entire organization to support projects based on this ranking. This can lead to an unimportant task on a high-priority project displacing a high-priority task on a lower priority project.

As a general rule, any inexpensive step that eliminates a lot of risk should occur early.

This network-centric view has clear implications for development process design. It implies that it is incorrect to standardize the top-level process map; this is precisely where we need flexibility. Instead, we should standardize the modules, or nodes, that make up the development network. With standardized modules, we can easily select the specific modules that add value, and we can sequence them in the correct economic sequence.

Andrew Doran

Therefore don't have an SDLC?

Product development networks also experience congestion. They also need flexible routing. For example, when we start our project, we may plan to use our low-cost internal testing facility to test our design. However, as we approach the test date, we may discover that this node is congested. What can we do? If we have developed an alternate route, such as an external test facility, congestion at the internal facility can make this the new optimal route.

We need to consciously develop alternate routes around likely points of congestion.

Flexibility is not simply a frame of mind that enables us to react to emerging circumstances. Flexibility is the result of advance choices and planning.

We can also improve the response time of backup resources by investing in keeping them informed on the program. Invite them to team meetings. Send them copies of meeting minutes and action item lists. A small investment in keeping up with the program leads to much faster response times.

A little rudder early is better than a lot of rudder late.

It is surprisingly easy to base control on economic impact. We simply identify the magnitude of economic deviation that is worth controlling and determine the amount of change in each proxy variable that will create this economic impact.

Consider a typical prioritization, “The highest priority on this program is schedule, followed by unit cost.” This prioritization implies that the smallest deviation in schedule is more important than the largest deviation in unit cost. This makes no economic sense. Project performance parameters almost never have priorities; they have transfer functions.

companies often establish control set points based on absolute changes in proxy variables, instead of basing them on the economic impact of these changes.

FF4: The Principle of Balanced Set Points: Set tripwires at points of equal economic impact.

Andrew Doran

I love this one. Something that could easily be applied across a portfolio of projects.

Our control system must enable us to exploit unexpected opportunities by increasing the deviation from the original plan whenever this creates economic value.

It is common that we must invest in creating a superior development environment in order to extract the smaller signals that come with fast feedback.

agility: the ability to quickly change direction while traveling at a high speed.

consider what happens when only management knows the logic behind decisions. Engineers are told, “This seemingly bizarre decision is actually best for the company due to mysterious and sophisticated reasons that are too esoteric to articulate.” This develops a culture of magical decision making, where everybody feels they can make great decisions unencumbered by either analysis or facts.

“The time to prepare for bad weather is before bad weather is upon you.”

If you ever have an opportunity to colocate a team, do it. If you don’t, fight hard to get it, and you will be glad you did.

Although this book has a strong economic focus, I personally do not believe that money is the scarcest or most valuable resource. The scarcest resource is always time. Our organization decodes what is important to us by how we spend our time. Allocating personal time to a program or project is the single most effective way to communicate its importance. This is as true in business as it is in raising a child.

I believe that the key to economic success is making good economic choices with the freshest possible information. When this information changes, the correct choice may also change.

Figure 8-6 summarizes the metrics that follow logically from the principles of this book.

We can measure queue size in terms of the number of items in queue, which is easy, or by trying to estimate the amount of work in queue, which is difficult. I normally recommend that you start with the easy metric, the number of items in queue. This is astonishingly effective even though most people assume it could not be.

Problems that are not resolved within a certain time are moved to a higher level. This works because most small problems turn into medium-size problems on the way to becoming big problems.

One of the best slogans I have encountered to describe the desired state came from a team that said, “If you are going to worry about it, I won’t.”

The continuous peer-to-peer communications of a colocated team is far more effective at responding to uncertainty than a centralized project management organization.

In product development, we can change direction more quickly when we have a small team of highly skilled people instead of a large team. We can change direction more quickly when we have a product with a streamlined feature set, instead of one that is bloated with minor features. We can change direction more quickly when we have reserve capacity in our resources. It is very difficult to apply an extra burst of effort when people are already working 100 hours per week.

Everything that can be forecast reasonably well in advance is carefully planned.

In product development, our opposing forces are the market and technical risks that must be overcome during the project. To dispose of these risks, projects must do more than assess them. Projects must apply meaningful effort to judge how severe the risks are and whether they can be overcome.

An imperfect decision executed rapidly is far better than a perfect decision that is implemented late.

We trust someone when we feel we can predict their behavior under difficult conditions.

Contrast the development organization that introduces one big product every 5 years, and the one that introduces one product each year five times in a row. Small batch sizes build trust, and trust enables the decentralized control that enables us to work with small batch sizes.

This book is still missing one chapter. It is the chapter that you will be writing, the one on implementation.

Andrew Doran

Wouldn’t it be great if everyone who tried to implement the concepts in this book wrote about it, and all those write-ups were collected in the same place? I would read all of them!