
You Can’t Understand Traffic Flow by Just Counting Cars

If you counted the number of cars on a stretch of road every morning at the same time, and found that the number was the same every day, would you conclude that none of the cars are moving?  Probably not.  Yet that is exactly what we do when we make inferences about the health of a process by looking only at the process's inventory over time.  To understand how well a process is functioning, we must consider all three legs of the process measurement stool: Average Inventory, Throughput Rate and Cost.  The goal is always to increase throughput while decreasing inventory and cost.

I often see people make the mistake of looking at month-over-month counts of process inventory (e.g. the number of in-process work orders, expense reports, recruiting orders, sales orders, etc.) and drawing conclusions from the change in count.  To carry the traffic analogy forward, to understand flow efficiency we need to know how many cars are entering and exiting the stretch of road and the average time between entry and exit.  These additional measures paint a picture of the flow of traffic (throughput), not just a static snapshot.
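As a concrete illustration, here is a minimal Python sketch of how these flow measures might be derived from entry and exit timestamps. The records, dates and field names are hypothetical stand-ins for whatever your ticketing or order system captures; the point is that throughput and time-in-process come from entries and exits over a period, not from a single count.

```python
from datetime import datetime, timedelta

# Hypothetical work items, each with an entry and exit timestamp.
# In practice these would come from your ticketing or order system.
items = [
    {"entered": datetime(2024, 3, 1, 9, 0), "exited": datetime(2024, 3, 3, 17, 0)},
    {"entered": datetime(2024, 3, 2, 10, 0), "exited": datetime(2024, 3, 5, 12, 0)},
    {"entered": datetime(2024, 3, 4, 8, 0), "exited": datetime(2024, 3, 6, 9, 0)},
]

window_start = datetime(2024, 3, 1)
window_end = datetime(2024, 3, 8)
window_days = (window_end - window_start).total_seconds() / 86400

# Throughput: units that exited the process during the window, per day.
exits = [i for i in items if window_start <= i["exited"] < window_end]
throughput_per_day = len(exits) / window_days

# Average time in process (entry to exit) for the units that completed.
avg_days_in_process = sum(
    (i["exited"] - i["entered"]).total_seconds() for i in exits
) / len(exits) / 86400

# Average inventory: sample the count of in-process units once per day.
inventory_samples = []
day = window_start
while day < window_end:
    in_process = sum(1 for i in items if i["entered"] <= day < i["exited"])
    inventory_samples.append(in_process)
    day += timedelta(days=1)
avg_inventory = sum(inventory_samples) / len(inventory_samples)

print(f"Throughput: {throughput_per_day:.2f} units/day")
print(f"Average time in process: {avg_days_in_process:.1f} days")
print(f"Average inventory: {avg_inventory:.1f} units")
```

Nothing here is specific to traffic or to any particular tool; the same three numbers can be computed for any process that records when a unit of work arrives and when it leaves.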

To really understand the efficiency of this stretch of road, there is one more measure to consider: Cost.  Capturing the cost to maintain the road gets at the heart of the matter: the cost to move a car from one end of the road to the other.

With these three measures in place, we can watch this stretch of road for variation in performance.  A significant change in any of the three measures triggers research to determine the cause.  Consider this scenario:

The number of cars on the road (inventory) decreases, the number of cars exiting the road (throughput) decreases, and maintenance costs increase.  While a decrease in inventory is good, it comes at the expense of decreased throughput and increased cost.  After investigating the cause of these changes, we find that significant roadwork is being performed to build an additional lane.  This work closed one of the three existing lanes.  The lane closure reduced the capacity of the road, meaning fewer cars can be on the road at any one time.  The reduced capacity also meant that cars were closer together and drivers reduced their speed, which compounded into a significant reduction in the number of cars exiting the road.  The work increased current costs and, because there would now be another lane to maintain, increased future maintenance costs as well.

Can you think of similar lane closures in your business processes?  What about when a key team member resigns?  Or the disruption caused by a change to a new tool or process?  What about organizational changes?  These are examples of temporary lane closures that impact business process health.  Armed with the right measures of Average Inventory, Throughput Rate and Cost, you have an early warning system that will alert you to problems sooner and improve your ability to assess the impact of changes on the overall health of your processes.

See also Three Process Metrics that Matter and Theory of Constraints

 


Three Process Metrics That Matter

Measuring process performance is the cornerstone of continual improvement. This may seem simple and obvious, but like so many things, while it may be simple, it is often not easy. Sometimes it is difficult to capture the data necessary to maintain valuable measurements, but I find this is not usually the problem. The more common problem is measuring too much of the wrong things. The technology that supports and enables business processes provides almost any data we want. Like Sherlock Holmes we think, "Data! Data! Data! I can't make bricks without clay." But, unlike fiction, more data often creates doubt and obscurity instead of leading us straight to the villain. Only three critical metrics matter when gauging the health of any process. All other metrics are valuable only as further forensics after a problem is detected.

So, why not measure everything we can? I mean, we capture all this data; isn't it a waste if we don't use it? Consider Segal's law: "a person with two watches is never sure of the time." Segal's law is a caution against the pitfalls of using too much information in decision making. If a person is wearing two watches, there are three potential states of those watches:

  1. Both watches are accurate and showing the same time. In this case, the wearer is confident because there is validation between instrumentation.
  2. One or both watches are inaccurate and displaying different times. In this case, the wearer is doubtful of the correct time because instrumentation is conflicting.
  3. Both watches are displaying the same inaccurate time. In this case, the wearer is just as confident as if both watches were working, putting trust in validated inaccuracy. And because this state is always possible, even matching watches cast doubt on the first condition.

So there is never a case where the wearer of two watches is truly confident of the correct time, and the focus shifts to keeping the instrumentation in sync. One well-maintained, high-quality metric is far more useful than any combination of lower-quality and often conflicting measurements.

The Theory of Constraints tells us there are only three key metrics to worry about when measuring the health of any given process: inventory, throughput and cost.

Inventory is all of the units in the process at any given time. Units are what the process acts on to produce an output. Examples of business process units include a sales order, trouble ticket, invoice, expense report or change order.

Throughput is the rate (expressed as number of units per time period) at which a process produces output that is usable by all downstream processes. The last part is critical because it answers the common question, "What about quality metrics?" Quality is embedded in the throughput metric because only outputs that meet the quality standards of all downstream processes are counted as throughput. If you apply this rule ruthlessly, it is not uncommon to find processes that, when first measured, have a throughput of zero because every output requires some level of rework, misses service levels or contains defects that are corrected by a downstream process.

Costs include all the money used to produce process throughput. Actual costs of a single process can be very difficult to capture; most of the time you're looking at a single process acted on part-time by many people. Don't get too wrapped up in this. If you can't get to actual dollars, find a viable surrogate. In business processes, this is usually some variation of a human capital measurement.
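To show how the three definitions fit together, here is a minimal Python sketch that computes inventory, quality-filtered throughput and a cost surrogate for one process over a reporting period. The work-item records and field names (for example "accepted_downstream" and "person_hours") are illustrative assumptions, not prescribed data structures.

```python
# Hypothetical records for one process over one reporting period.
# "accepted_downstream" is True only when the output needed no rework and
# met the quality bar of every downstream consumer.
work_items = [
    {"id": "WO-101", "completed": True,  "accepted_downstream": True,  "person_hours": 6.0},
    {"id": "WO-102", "completed": True,  "accepted_downstream": False, "person_hours": 9.5},
    {"id": "WO-103", "completed": False, "accepted_downstream": False, "person_hours": 2.0},
    {"id": "WO-104", "completed": True,  "accepted_downstream": True,  "person_hours": 5.5},
]

period_days = 30

# Inventory: units still in the process at the end of the period.
inventory = sum(1 for w in work_items if not w["completed"])

# Throughput: only outputs usable by all downstream processes count.
throughput = sum(1 for w in work_items if w["accepted_downstream"]) / period_days

# Cost: person-hours as a surrogate when actual dollars are hard to capture.
cost = sum(w["person_hours"] for w in work_items)

print(f"Inventory: {inventory} units in process")
print(f"Throughput: {throughput:.2f} usable units/day")
print(f"Cost (surrogate): {cost:.1f} person-hours")
```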

When used together, these three metrics paint a valuable picture of the health of a single process. The objective is to reduce inventory and control or reduce costs while increasing throughput. Statistically significant variations in the patterns of these three measures direct us to specific problems and areas requiring attention. Additionally, since each process is measured in a similar way, we can make better decisions about where to focus our improvement efforts. Given the choice between working on one of two different processes, we can compare relative changes in inventory, throughput and cost. Assuming both processes are valid candidates for improvement efforts, the one with the greatest deviation should get the most focus (all other things being equal).
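A small sketch of that comparison, with made-up baseline and current figures for two hypothetical processes: whichever process shows the largest relative deviation across the three metrics is the first candidate for attention.

```python
# Hypothetical baselines and current values for two candidate processes.
processes = {
    "expense_reports": {"baseline": {"inventory": 40, "throughput": 12, "cost": 300},
                        "current":  {"inventory": 55, "throughput": 9,  "cost": 340}},
    "sales_orders":    {"baseline": {"inventory": 20, "throughput": 30, "cost": 500},
                        "current":  {"inventory": 22, "throughput": 28, "cost": 510}},
}

def max_relative_deviation(p):
    """Largest percentage change across the three metrics vs. baseline."""
    return max(
        abs(p["current"][m] - p["baseline"][m]) / p["baseline"][m]
        for m in ("inventory", "throughput", "cost")
    )

# The process deviating most from its own baseline gets the focus.
focus = max(processes, key=lambda name: max_relative_deviation(processes[name]))
print(f"Focus improvement effort on: {focus}")
```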

Think of process measurement like a trip to the doctor. The first measures are simple indicators of health like weight, temperature and blood pressure. The results of these basic measures may drive further analysis, but you'd never get wheeled in for a CAT scan or a full blood panel as a first step.

Don't get seduced by the "more is better" mantra when it comes to process measurement. Reports that show a wall of data are rarely effective at driving action. Look for the key measures that best represent inventory, throughput and cost, and use those as your bellwether for action. Additional data may be required to select and direct specific actions. Use supporting data for forensic purposes, not to measure top-level process health. Start simple, and use deeper data to analyze once a problem is detected.