Falsehood

When I build supply chain analysis teams and review the kinds of people I like to hire or bring into process work, the background that interests me most is a scientific one. These people have been trained to look at problems and explain them using a wonderful method that is hard to argue with: the scientific method. Fundamental to it is the creation of hypotheses, or untested explanations, about an idea, phenomenon, or event. In the process world, where we're trying to improve end-to-end processes through process analysis, this takes the form of hypotheses about why a process performs the way it does, or about how to change a process to make it work better. The method then tries to prove or disprove the hypothesis, but through a tricky technique: it doesn't try to prove that something is true. It tries to show that the opposite is false.

What separates people with a fair amount of scientific training (or training in engineering, economics, or related areas) is that they know, at the root of what they do, that they can never fully explain the way a process performs. But using the concept of 'falsifiability', they can draw firm conclusions from certain kinds of 'negative tests' on processes. For example, I might state that "X has no influence on process A." If I find a single case of X influencing process A, I know my statement can be rejected as false.

People without this type of training feel very comfortable making positive assertions. For example, I might state that "Process A performs the way it does because of X." I find a dozen cases to support the statement, perhaps find a statistical correlation, then rest satisfied that the statement is true. Unfortunately, I may have missed another dozen cases which show that X has no bearing on Process A, or that Y can also be shown to drive how Process A performs. The distinction between these two approaches to process analysis is critical to a successful supply chain improvement program.
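To make the contrast concrete, here is a minimal sketch of what a 'negative test' looks like in practice. It is illustrative only: the cycle-time numbers are invented, and the permutation test is simply one convenient way to check the negative claim, not a method taken from any particular project. Instead of collecting cases that confirm "X drives process A," we state the claim "X has no influence on process A" and ask whether the data lets us reject it.

```python
import random
import statistics

# Hypothetical cycle-time observations (hours) for process A, split by
# whether factor X was present. All numbers are invented for illustration.
with_x    = [42, 39, 45, 47, 41, 44, 46, 40]
without_x = [36, 35, 38, 34, 37, 33, 36, 35]

def permutation_test(a, b, trials=10_000, seed=1):
    """Test the negative claim 'X has no influence on process A'.

    If the claim were true, the labels would be interchangeable, so we
    shuffle them and count how often a random relabeling produces a mean
    difference at least as large as the one actually observed.
    """
    rng = random.Random(seed)
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = list(a) + list(b)
    extreme = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:len(a)]) - statistics.mean(pooled[len(a):]))
        if diff >= observed:
            extreme += 1
    return extreme / trials

p_value = permutation_test(with_x, without_x)
print(f"p-value under 'X has no influence': {p_value:.4f}")
if p_value < 0.05:
    print("The negative claim is falsified: X does influence process A.")
else:
    print("Failed to falsify the claim -- which is not the same as proving it true.")
```

The asymmetry is the point: a small p-value kills the negative claim outright, while a large one only means we haven't found the counterexample yet.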

An example from my past involved combining several plants in a manufacturing environment: instead of material being manufactured in one plant, then configured and shipped from a second (with a two-day in-transit lag), material would be manufactured, configured, and shipped in a single plant. The hypothesis was that combining the activities in one plant would lower total inventory (by removing the in-transit inventory) and reduce the total manufacturing cycle time. With simple math, one could remove the two-day lag and the in-transit inventory, then calculate the reduction in cash-to-cash cycle time, the working capital improvement, the cost-of-capital savings, and so on. This is a pure 'positive' statement, with no attempt to check whether it might be false for the processes in question. Math, in this case, 'proved' that everything would work wonderfully.
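The 'simple math' really is simple. A back-of-the-envelope sketch shows how the case would have been argued; every figure below is an assumption made up for illustration, not data from the actual program.

```python
# Back-of-the-envelope case for combining the plants.
# All figures are illustrative assumptions, not data from the project.
daily_interplant_value = 500_000   # $ of material moving between the plants per day
transit_days           = 2         # in-transit lag eliminated by combining plants
cost_of_capital        = 0.10      # assumed annual cost of capital

in_transit_inventory  = daily_interplant_value * transit_days
annual_capital_saving = in_transit_inventory * cost_of_capital

print(f"Working capital freed:         ${in_transit_inventory:,.0f}")
print(f"Annual cost-of-capital saving: ${annual_capital_saving:,.0f}")
print(f"Cash-to-cash cycle time reduced by {transit_days} days")
```

Nothing in this arithmetic is wrong; the problem is that it only restates the claim, it doesn't test it.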

However, things are not always as they seem. When the process team looked at the program, we asked a simple question: will it work? That is, is there a chance it might fail? Is there a chance the hypothesis was false? How do you test this? In this case, we used a software tool called Gensym and built a model of the supply chain processes using the SCOR framework. We built both the before and after models, seeded the system with data from the actual environment to get the simulation parameters right, and then ran a simulation.

Surprise!

In our simulation, it turned out that work-in-progress (WIP) inventory and average order cycle time both increased in the combined plant. It was quite a shock; as a hardened supply chain type, I couldn't believe that what would be "completely obvious" to anyone who had worked with such systems had turned out to be false. What was wrong?

It turns out that the material planning processes, which were unchanged in the future plant environment, had a 'hidden' delay in planning which closely matched the in-transit lag for shipping. The result was that planning in the new combined environment would position materials based on a several-day-old signal, buy all sorts of wrong materials, and, worse, partially allocate materials to orders in such a way that the orders were neither complete nor ready to ship. In the simulation, inventory went up and fewer orders got shipped.
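The mechanism is easier to see in a toy model. The sketch below is purely illustrative, not the Gensym/SCOR model we actually built, and every parameter in it is an assumption: it plans a daily material buy for two products using a demand mix that is several days old, and lets orders ship only when matching material is on hand, so a stale planning signal leaves a standing pool of unshippable orders.

```python
import random

def average_wip(planning_lag_days, days=365, seed=7):
    """Toy model of mix-sensitive planning driven by a stale demand signal.

    Two products share a fixed daily material budget. Planning splits the
    budget using the demand mix it observed `planning_lag_days` ago; each
    day's orders ship only if material for their product is on hand,
    otherwise they wait as WIP. Illustrative only.
    """
    rng = random.Random(seed)
    daily_units = 100                   # units bought (and demanded) per day
    mix_history = [0.5]                 # share of product 'A' in demand, day by day
    on_hand = {"A": 0, "B": 0}
    waiting = {"A": 0, "B": 0}          # unshipped orders (WIP)
    wip_samples = []

    for _ in range(days):
        # Today's true demand mix drifts a little each day.
        mix = min(0.9, max(0.1, mix_history[-1] + rng.uniform(-0.05, 0.05)))
        mix_history.append(mix)

        # Planning buys material using a demand mix that is several days old.
        seen_mix = mix_history[max(0, len(mix_history) - 1 - planning_lag_days)]
        buy_a = round(daily_units * seen_mix)
        on_hand["A"] += buy_a
        on_hand["B"] += daily_units - buy_a

        # Today's orders arrive and ship only where matching material exists.
        demand_a = round(daily_units * mix)
        waiting["A"] += demand_a
        waiting["B"] += daily_units - demand_a
        for product in ("A", "B"):
            shipped = min(waiting[product], on_hand[product])
            waiting[product] -= shipped
            on_hand[product] -= shipped

        wip_samples.append(waiting["A"] + waiting["B"])

    return sum(wip_samples) / len(wip_samples)

for lag in (0, 3):
    print(f"planning signal {lag} days old -> "
          f"average unshipped orders: {average_wip(lag):.1f}")
```

With a fresh signal the toy ships everything; give planning a three-day-old view of demand and a permanent pool of partially covered orders appears. That is essentially what the real simulation showed, only with far messier allocation rules.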

When we went back to senior leadership with the findings, they were, as you can imagine, somewhat skeptical. But by running and re-running the simulation to answer any and all of their questions, it finally became clear that the planned setup wouldn't work: they were embarking on a dangerous campaign to close plants and change manufacturing that would be expensive and leave them worse off than when they began. They accepted the conclusions, and we set about finding changes to the processes that would, in almost all cases, create the results they needed.

With a corrected hypothesis, which included substantial changes to material planning, positioning, and order-allocation processes, the system was shown to work well in simulation. But they understood the caveat we left them with: there may still be a reason it won't work, but to the best of our efforts, we can no longer find a reason why the decision is incorrect. It boils down to this: you can't prove absolutely that something is true, but you can prove that something is false.

I can't tell you how many times I've encountered the positive "X causes Y" hypothesis in process analysis, or, even worse, "we will do X and see what happens."

The takeaway: when you look at solutions to supply chain issues, you naturally generate hypotheses about what might solve an issue. Avoid generating untestable hypotheses. And when you test a process hypothesis, don't set out to show that it is true; try to show that it is false.