Measuring process performance is the cornerstone of continual improvement. This may seem simple and obvious, but like so many things, while it may be simple, it is often not easy. Sometimes it is difficult to capture the data necessary to maintain valuable measurements, but I find this is not usually the problem. The more common problem is measuring too much of the wrong things. The technology that supports and enables business processes provides almost any data we want. Like Sherlock Holmes, we think, “Data, data, data; I cannot make bricks without clay.” But, unlike fiction, more data often creates doubt and obscurity instead of leading us straight to the villain. There are only three critical metrics that matter when gauging the health of any process. All other metrics are valuable only as further forensics after a problem is detected.
So, why not measure everything we can? I mean, we capture all this data; isn’t it a waste if we don’t use it? Consider Segal’s law: “a person with two watches is never sure of the time.” Segal’s law is a caution against the pitfalls of using too much information in decision making. If a person is wearing two watches, there are three potential states of those watches:
- Both watches are accurate and showing the same time. In this case, the wearer is confident because there is validation between instrumentation.
- One or both watches are inaccurate and displaying different times. In this case, the wearer is doubtful of the correct time because instrumentation is conflicting.
- Both watches are displaying the same inaccurate time. In this case, the wearer is just as confident of the correct time as if both watches were working, putting trust in validated inaccuracy. And because this state exists, agreement between the watches can never rule it out, which casts doubt on the first condition as well.
So there is never a case where the wearer of two watches is truly confident of the correct time. Worse, the focus shifts to keeping the instrumentation in sync. One well maintained, high quality metric is far more useful than any combination of lower quality and often conflicting measurements.
The Theory of Constraints tells us there are only three key metrics to worry about when measuring the health of any given process: inventory, throughput, and cost.
Inventory is all of the units in the process at any given time. Units are what the process acts on to produce an output. Examples of business process units include a sales order, trouble ticket, invoice, expense report, or change order.
Throughput is the rate (expressed as units per time period) at which a process produces output that is usable by all downstream processes. The last part of this is critical because it answers the common question “What about quality metrics?” Quality is embedded in the throughput metric because only outputs that meet quality standards for all downstream processes are counted as throughput. If you apply this rule ruthlessly, it is not uncommon to find processes that, when first measured, have a throughput of zero because every output requires some level of rework, does not meet service levels, or contains defects that are corrected by a downstream process.
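As a sketch, the quality-gated throughput rule above might look like the following (the record type, field names, and numbers are illustrative assumptions, not from the text):

```python
from dataclasses import dataclass

# Hypothetical output record; the fields here are illustrative only.
@dataclass
class Output:
    unit_id: str
    meets_downstream_quality: bool  # usable by ALL downstream processes?

def throughput(outputs: list[Output], period_days: float) -> float:
    """Throughput = quality-gated units per unit of time.

    Only outputs usable by every downstream process count; anything
    needing rework or later correction is excluded from the rate.
    """
    good = sum(1 for o in outputs if o.meets_downstream_quality)
    return good / period_days

# A week in which every output needed some rework: throughput is zero,
# even though three units technically "shipped."
week = [Output("A", False), Output("B", False), Output("C", False)]
print(throughput(week, period_days=7))  # 0.0
```

Applied this ruthlessly, the zero-throughput result above is exactly the surprise the paragraph describes: output was produced, but none of it counts.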
Costs include all the money used to produce process throughput. The actual costs of a single process can be very difficult to capture; most of the time, for instance, a single process is acted on part time by many people. Don’t get too wrapped up in this. If you can’t get to actual dollars, find a viable surrogate. In business processes, this is usually some variation of a human capital measurement.
When used together, these three metrics paint a valuable picture of the health of a single process. The objective is to reduce inventory and control or reduce costs while increasing throughput. Statistically relevant variations in the patterns of these three measures direct us to specific problems and areas requiring attention. Additionally, since each process is measured in a similar way, we can now make better decisions about where to focus our improvement efforts. Given the choice between working on only one of two different processes, we can compare relative changes in inventory, throughput, and cost. Assuming both processes are valid candidates for improvement efforts, the one with the greatest deviation should get the most focus (all other things being equal).
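One minimal way to sketch that comparison: score each process by the sum of its unfavorable relative changes (rising inventory, falling throughput, rising cost) and focus on the worst scorer. The scoring rule and all the figures below are hypothetical, for illustration only:

```python
def relative_change(previous: float, current: float) -> float:
    return (current - previous) / previous

def health_deviation(prev: dict, curr: dict) -> float:
    """Sum of unfavorable relative changes in inventory, throughput, and cost.

    Rising inventory, falling throughput, and rising cost all count against
    a process; favorable moves count as zero. (An assumed scoring rule.)
    """
    bad = 0.0
    bad += max(0.0, relative_change(prev["inventory"], curr["inventory"]))
    bad += max(0.0, -relative_change(prev["throughput"], curr["throughput"]))
    bad += max(0.0, relative_change(prev["cost"], curr["cost"]))
    return bad

# Two candidate processes, measured the same way over the same period.
orders = health_deviation(
    {"inventory": 100, "throughput": 50, "cost": 10_000},
    {"inventory": 140, "throughput": 45, "cost": 10_500},
)
tickets = health_deviation(
    {"inventory": 200, "throughput": 80, "cost": 8_000},
    {"inventory": 210, "throughput": 78, "cost": 8_100},
)
# The process with the larger deviation gets the improvement focus.
print("orders" if orders > tickets else "tickets")  # orders
```

Because every process is scored on the same three measures, the comparison stays apples-to-apples even across very different kinds of work.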
Think of process measurement like a trip to the doctor. The first measures are simple indicators of health like weight, temperature, and blood pressure. The results of these basic measures may drive further analysis, but you’d never get wheeled in for a CAT scan or a full-panel blood workup as a first step.
Don’t get seduced by the “more is better” mantra when it comes to process measurement. Reports that show a wall of data are rarely effective at driving action. Look for the key measures that best represent inventory, throughput, and cost, and use those as your bellwether for action. Additional data may be required to select and direct specific actions. Use supporting data for forensic purposes, not to measure top-level process health. Start simple and use deeper-level data to analyze once a problem is detected.