Three Process Metrics That Matter

Measuring process performance is the cornerstone of continual improvement. This may seem simple and obvious, but like so many things, while it may be simple, it is often not easy. Sometimes it is difficult to capture the data necessary to maintain valuable measurements, but I find this is not usually the problem. The more common problem is measuring too much of the wrong things. The technology that supports and enables business processes can provide almost any data we want. Like Sherlock Holmes we think, “Data, data, data; I cannot make bricks without clay.” But, unlike in fiction, more data often creates doubt and obscurity instead of leading us straight to the villain. Only three metrics matter when gauging the health of any process. All other metrics are valuable only as further forensics after a problem is detected.

So, why not measure everything we can? After all, we capture all this data; isn’t it a waste if we don’t use it? Consider Segal’s law: “a person with two watches is never sure of the time.” Segal’s law is a cautionary postulation against the pitfalls of using too much information in decision making. If a person is wearing two watches, there are three potential states of those watches:

  1. Both watches are accurate and showing the same time. In this case, the wearer is confident because there is validation between instrumentation.
  2. One or both watches are inaccurate and displaying different times. In this case, the wearer is doubtful of the correct time because instrumentation is conflicting.
  3. Both watches are displaying the same inaccurate time. In this case, the wearer is just as confident as if both watches were working, putting trust in validated inaccuracy. And if two watches showing the same time can still be wrong, that casts doubt on the first state as well.

So there is never a case where the wearer of two watches is truly confident of the correct time, and the focus shifts to keeping the instrumentation in sync. One well-maintained, high-quality metric is far more useful than any combination of lower-quality and often conflicting measurements.

The Theory of Constraints tells us there are only three key metrics to worry about when measuring the health of any given process: inventory, throughput, and cost.

Inventory is all of the units in the process at any given time. Units are what the process acts on to produce an output. Examples of business process units include a sales order, trouble ticket, invoice, expense report, or change order.

Throughput is the rate (expressed as units per time period) at which a process produces output that is usable by all downstream processes. The last part is critical because it answers the common question, “What about quality metrics?” Quality is embedded in the throughput metric because only outputs that meet the quality standards of all downstream processes are counted as throughput. If you apply this rule ruthlessly, it is not uncommon to find processes that, when first measured, have a throughput of zero because every output requires some level of rework, misses service levels, or contains defects that are corrected by a downstream process.

Cost includes all the money used to produce process throughput. The actual cost of a single process can be very difficult to capture; most of the time you’re looking at a single process worked on part-time by many people. Don’t get too wrapped up in this. If you can’t get to actual dollars, find a viable surrogate. In business processes, this is usually some variation of a human capital measurement.
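
To make the three measures concrete, here is a minimal sketch, in Python, of how they might be computed from a simple log of process units. The WorkItem record, the reporting window, and the labor-hours cost surrogate are illustrative assumptions on my part, not anything prescribed by the Theory of Constraints.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class WorkItem:
    """A single unit moving through the process (e.g., an expense report)."""
    item_id: str
    entered: datetime                      # when the unit entered the process
    completed: Optional[datetime] = None   # when it produced output, if it has
    accepted_downstream: bool = False      # True only if every downstream process can use it as-is
    labor_hours: float = 0.0               # surrogate for cost when actual dollars aren't available

def process_health(items: list[WorkItem], window: timedelta, hourly_rate: float) -> dict:
    """Compute the three Theory of Constraints metrics over a reporting window."""
    now = datetime.now()
    start = now - window

    # Inventory: units still in the process right now.
    inventory = sum(1 for i in items if i.completed is None)

    # Throughput: units per window that produced output usable by all downstream processes.
    # Note the quality rule: output that needs rework or correction does not count.
    throughput = sum(
        1 for i in items
        if i.completed is not None and i.completed >= start and i.accepted_downstream
    )

    # Cost: here a human-capital surrogate (labor hours * rate) spent in the window.
    cost = sum(i.labor_hours for i in items if i.entered >= start) * hourly_rate

    return {"inventory": inventory, "throughput": throughput, "cost": cost}
```

Note how the quality rule falls out of the accepted_downstream flag: if nothing leaving the process is usable downstream as-is, throughput really is zero.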

When used together, these three metrics not only paint a valuable picture of the health of a single process, they also let us compare processes against one another. The objective is to reduce inventory and control or reduce cost while increasing throughput. Statistically relevant variations in the patterns of these three measures direct us to specific problems and areas requiring attention. And since each process is measured the same way, we can make better decisions about where to focus our improvement efforts. Given the choice between working on one of two different processes, we can compare their relative changes in inventory, throughput, and cost. Assuming both are valid candidates for improvement, the one with the greatest deviation should get the most focus (all other things being equal).

Think of process measurement like a trip to the doctor. The first measures are simple indicators of health like weight, temperature, and blood pressure. The results of these basic measures may drive further analysis, but you’d never get wheeled in for a CAT scan or a full-panel blood workup as a first step.

Don’t get seduced by the “more is better” mantra when it comes to process measurement. Reports that show a wall of data are rarely effective at driving action. Look for the key measures that best represent inventory, throughput, and cost, and use those as your bellwether for action. Additional data may be required to select and direct specific actions, so use supporting data for forensic purposes, not to measure top-level process health. Start simple, and dig into the deeper data only once a problem is detected.

Applying the Lessons of Jonah

The Goal is often touted as one of the most important business books of all time, yet many of its key principles are not widely applied in business process management. In The Goal, Eliyahu Goldratt describes his Theory of Constraints through a fable in which the protagonist, a manufacturing plant manager named Alex, is guided through the recovery of his plant by his old physics professor turned business consultant, Jonah. Jonah appears throughout the story, teaching Alex in Socratic style to solve the underlying problems of his plant’s operations. In the end, of course, the plant is saved and Alex is promoted.

The underlying principles of the Theory of Constraints are key to analyzing any process. But it often takes a bit of a leap to apply lessons from manufacturing processes to the less tangible world of transaction-based business processes. In manufacturing, the process is visual. Inventory is physical. Throughput can be counted in widgets produced and sold. In business processes, these things can be difficult to identify, see, and count. I believe herein lies the rub for Jonah (and Lean, for that matter).

With Jonah’s guidance, Alex comes to realize that the management metrics he uses to run daily operations are all wrong. He discovers that the goal is not to improve efficiency, increase resource utilization, or control costs, which are some of his current measures. The goal of any business is to MAKE MONEY…period. Therefore, all metrics must be framed in terms of their contribution to that overall goal for the company. Alex’s metrics are also focused on optimizing individual parts of the system, the sub-processes. Metrics that encourage optimizing sub-processes can be detrimental to the performance of the larger system, so measurement must be focused on the performance of the system as a whole.

I struggled with two foundational problems when thinking about how to apply the Theory of Constraints in my work. First, I’m almost never working on a production process that directly affects sales. For example, how does one connect the process of approving and paying employee expense reports to The Goal of making money? Second, Jonah focuses Alex on optimizing the whole system above individual processes, but I’m never working at the “plant” level, always at the sub-system process level. I’m not sure what Jonah would say about these two challenges, but here’s how I wrap my brain around them.

I redefine the goal in the context of the process I’m focused on, treating the process itself as a stand-alone business. Take the expense report process as an example. If the expense report process were a business, that business would presumably be paid for accurately reimbursing employees for expenses. As a business, then, accurately reimbursing employees is what makes money, so that becomes the goal.

By putting the goal in context this way, you address both challenges. First, you can now show impact on the ‘sales’ of the process. Second, you can now work at a system level, since the scope of the system becomes any process or sub-process that supports the activity of accurately reimbursing employees (the goal). Maybe this is painfully obvious to most, but it helps me define what I’m working on at my level.

Waste of Extra Processing in Business

The most common waste I find in business processes is the waste of extra processing. Extra processing, remember, is the act of touching things more than once. In manufacturing processes, overproduction may be the greatest of all waste sins, but in business processes, extra processing is rampant and right under our noses every day. The good news is that it’s not terribly difficult to find. The bad news is that it can be difficult, if not impossible, to eliminate. But put first things first: learn to see extra processing and recognize it for what it is, waste.

Flow Diagrams

The first thing I look for in a workflow diagram is loop backs: the diamond shapes that indicate decisions, with labels like “Report Complete?”, “Amount in Range?”, or “ID Correct?”. If YES, the process proceeds, but if NO, the process loops back to a previous step. Dig in here; there is extra processing going on. Look upstream in the process to find where changes can be made to prevent the loop back (NO) condition. How far back does the loop go? If the loop back spans multiple previous steps, requiring those steps to be repeated, look for ways to move the decision point further upstream.
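
If your process maps live in a tool you can export from, this kind of inspection can even be scripted. Here is a minimal sketch in Python using a made-up expense-report workflow; the step names, the edge list, and the assumption that the intended flow follows step order are all illustrative, not taken from any particular BPM tool.

```python
# Hypothetical workflow: each step maps to the steps it can hand off to.
# The list order reflects the intended forward flow of the process.
steps = ["submit", "check_amount", "manager_review", "pay"]
edges = {
    "submit": ["check_amount"],
    "check_amount": ["manager_review", "submit"],  # NO branch loops back one step
    "manager_review": ["pay", "submit"],           # rejection loops back two steps
    "pay": [],
}

order = {name: i for i, name in enumerate(steps)}

# A loop back is any edge pointing to an earlier step; its span is how many
# steps get repeated when the NO condition fires.
loop_backs = [
    (src, dst, order[src] - order[dst])
    for src, targets in edges.items()
    for dst in targets
    if order[dst] < order[src]
]

for src, dst, span in sorted(loop_backs, key=lambda lb: -lb[2]):
    print(f"loop back {src} -> {dst}: repeats {span} step(s); candidate for moving the check upstream")
```

The longest spans surface first, which lines up with the advice above: the farther back a loop reaches, the more valuable it is to move the decision point upstream.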

Key Words

Look for words in process descriptions that indicate the act of “checking”: words like review, approve, authorize, check, test, governance, oversight, and audit all hint at wasteful activities. That is not to say they can be eliminated, but they are indicators of process areas that should receive scrutiny because they are non-value-add. The people involved in these activities are a gold mine for finding waste. They can tell you why things are rejected, fail tests, or don’t get approved. They hold the keys to developing business rules that can be applied upstream to reduce the failure rate and, therefore, reduce extra processing.

Remember that the act of checking and the act of correcting (rework) are both the waste of extra processing. Applying business rules upstream in the process reduces both the burden of checking and the waste of rework. A simple example is an automatic approval for expense reports under a certain amount; this reduces the number of reports that are reviewed and the number that will require revision and resubmittal. Look around for the waste of extra processing. You’ll soon begin to trip over it.
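
To show what applying a business rule upstream might look like in practice, here is a small illustrative sketch in Python. The threshold, the missing-receipt rule, and the report structure are all hypothetical; the real rules would come from the people doing the checking.

```python
AUTO_APPROVE_LIMIT = 75.00  # hypothetical policy threshold, not a recommendation

def route_expense_report(report: dict) -> str:
    """Apply upstream business rules so fewer reports need a human check."""
    # Rule 1: small reports skip manual review entirely, shrinking the checking burden.
    if report["total"] <= AUTO_APPROVE_LIMIT:
        return "auto-approved"
    # Rule 2: catch a common rejection reason before a reviewer ever sees the report,
    # so it is fixed once, upstream, instead of looping back later as rework.
    if any(line.get("receipt") is None for line in report["lines"]):
        return "returned: missing receipt"
    return "manual review"

# Example: a small clean report, and a larger report missing a receipt.
reports = [
    {"total": 42.10, "lines": [{"receipt": "r1.pdf"}]},
    {"total": 310.00, "lines": [{"receipt": None}]},
]
print([route_expense_report(r) for r in reports])
# ['auto-approved', 'returned: missing receipt']
```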


Everyone Should Learn BPMN

Ok, ok…maybe not everyone, but if you need to convey the nuances of a complex business process with precision and brevity, Business Process Model and Notation (BPMN) is worth a look. Here’s why I love BPMN and use it almost exclusively to capture even the simplest of processes in an elegant drawing.

BPMN is Compact

You can say a lot in a very small space.  Take a look at this example…

[Figure: BPMN Compare (the same process drawn in BPMN and as a standard flowchart)]

Both drawings depict the same process.

  1. Do some manual step repeatedly until finished or until 30 minutes has elapsed
  2. Then run a script to complete the next step

The BPMN drawing on the left depicts, in a single shape, that the step is manual (the hand icon), that it is to be repeated (the recursive arrow), and that there is a time limit (the clock icon on the shape border). The standard drawing on the right uses three shapes to convey the same instructions. This may not seem like a significant difference in this example, but over the course of a process with hundreds of steps and multiple conditional branches, this conservation of space is critical to developing a clear vision of the workflow.

BPMN is Precise

There is little room for ambiguity. The rules for BPMN are specific and, while the specification offers alternative representations, within each construct the rules are clear. Where a standard drawing relies on the author’s ability to describe a process step, BPMN uses symbols. Symbols are universal, intuitive, and language independent. Once you learn just a few basic symbols and rules, it is far easier to see a process at a glance without having to read through a lot of descriptive text. Here is a brief legend of some of the most commonly used BPMN symbols.

[Figure: BPMN Legend (commonly used BPMN symbols)]

Learn More

BPMN.org – The Object Management Group (OMG) controls the BPMN standard.  This is the authoritative site.

“Real Life BPMN” by Jakob Freund and Bernd Rucker describes the standard and principles in detail and gives excellent advice on demystifying BPMN for everyday use.