Blink Analysis

Some time ago I reviewed a fairly large SCM program covering an entire line of business across multiple geographies, with many activities and substantial depth of process capture. The intent of the program was to standardize processes to a single blueprint across the company, and they had made fairly dramatic progress on this in a short amount of time (as usual, with frameworks including SCOR). However, the team felt uneasy: this phase of the program didn’t seem to have enough ‘heft’. They wanted to run additional phases of analysis on the system to verify their thinking and feeling about the most productive approach to simplifying it. In other words, though they completely understood what to do, they felt they needed much more ‘visible’ analysis to support their position. Were they heading for ‘analysis paralysis’?

In his recent book Blink, Malcolm Gladwell cites numerous instances where professionals who had absorbed a great deal of data in their field were able to examine a situation, seemingly instantaneously, and come to a correct analysis that eluded diligent analysts who had pursued a detailed, fact-based review of the same situation. In one case it was a forgery sold to the Getty Museum. In another, it was determining which doctors would get sued for malpractice. And further: couples who would get divorced, ‘speed dating’ outcomes, and even war-game simulation outcomes.

This linked beautifully to a situation I had in Tokyo. I was reviewing a merger program which had used SCOR frameworks to map the as-is and to-be execution structures, but almost all the material was presented in a Japanese-language adaptation of SCOR. I was able to understand the business flow, and the transformation, purely from the symbolic positioning of the SCOR elements. But the business wanted to take the occasion further: beyond the obvious advantages of simplification they had already achieved, was there anything else in the process they could optimize?

Well: no data, only a model, no detailed process information I could read, no best-practice information in my native language – really, only the pattern of what was going on. But looking at it, I noticed several things: no pattern for collaborative planning with their suppliers and customers, some still-redundant process elements, and a lack of interconnects between plants. So I showed how a collaborative planning process between themselves, their suppliers, and their customers could be set up; how the financial incentives to participate could be structured; and how the benefits would flow to design and sales to modify pricing, giving them a significant edge in a commodity-based business.

Of course, the details were not there – the specific benefits, the implementation specifics, and so forth – but the principle was interesting. In a few minutes, without knowing anything except the SCOR pattern of the business they were operating, I could help them begin to understand some key next areas of supply-chain process to investigate for the cost advantages of the next stage of their business integration. A “Blink” analysis.

Back to my original team’s problem. I showed them a matrix of possible types of analysis which my teams have used in the past:

Multivoting, Common Cause Variation, Operational Definition, Value Stream, C&E Matrix, Visual Control, ANOVA, Impact & Effort, Kappa, Critical Path, Observation, Time Series, Traveler Check Sheet, Customer Interview, Stratified Data, Product & Service Grids, Chi-square, Simulation, Range / Standard Deviation / Variance, TPM, Special Cause Variation, Frequency Plots, 5 Whys, CVSM, Degrees of Freedom, Pugh Matrix, Central Tendency (Mean, Median, Mode), Material Flow, Value-Add (VA), Control Charts, Location Check Sheet, Six-σ, Scatter Plot, Complexity Analysis, Design of Experiments, FMEA, Box Plots, Internal Benchmark, NVA, ImR Chart, Stable Population Sampling, Gross Disconnect, Hypothesis Test, PCE Destruction, Lead Time, Flowchart, Frequency Chart / Histogram, Cross-Industry Benchmark, Time Value, p-Chart, Gage R&R, T-test, BIC Benchmark, Lean Quick Fix, Non-Normal Distribution, Phase-Gate, Affinity Diagrams, Inferential Statistics, Stratification, Capability Analysis, Fishbone, Work Cell Analysis, Multiple Regression, Cost Estimation, Discrimination, Capacity Constraint, BNVA, Xbar-R / Xbar-S Charts, Measurement System Analysis, Workflow, Error Analysis (Type I, Type II, Power, p-Value), What-if, NVA Cost, KPOV, Normal Distribution, Risk, Takt Time, Bias, Correlation, Industry Benchmark, Process Cycle Efficiency, WIP, TIP, Central Limits, Mistake-Proofing, Brainstorm, Descriptive Statistics, Measurement Selection, Np-Chart / C-Chart / u-Chart, Pareto, Process Balancing, Regression, Solution Selection Matrix, Stability, Time Trap.

First, of course, the team blanched (instead of blinking) – this is an awful lot of analysis. I pointed out that, with very few exceptions, for the 900+ processes they were looking at for simplification there would be a fairly obvious winner to choose as the standard. The formal analysis to support this would be along the lines of ‘this is the cheapest’ or ‘this has the best capabilities’ (cost analysis; capability benchmark). Going through the formal exercise of capability-benchmarking everything wouldn’t add anything in particular, since the gaps were usually quite glaring. SCOR had already partitioned the problem into areas of similar processes and pre-supplied the key industry metrics to examine.
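To make that triage concrete, here is a minimal sketch in Python – purely illustrative, not anything the program actually built; the candidate names, scores, and threshold are hypothetical. It checks for an ‘obvious winner’ on capability and cost, and lets everything else fall through to the deeper questions discussed in the next paragraph:

```python
# Illustrative sketch only: triaging candidate process variants for a single
# blueprint. All names, scores, and the threshold below are hypothetical.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    cost: float          # relative operating cost of the process variant
    capability: float    # benchmark score, higher is better

def pick_standard(candidates: list[Candidate], gap_threshold: float = 0.2):
    """Return the obvious winner, or None if the gap is not glaring."""
    ranked = sorted(candidates, key=lambda c: c.capability, reverse=True)
    best, runner_up = ranked[0], ranked[1]
    glaring_gap = (best.capability - runner_up.capability) > gap_threshold
    if glaring_gap and best.cost <= runner_up.cost:
        return best          # clearly more capable and no more expensive
    return None              # no obvious winner: weigh cost/risk/complexity of change

variants = [
    Candidate("Region A order-to-cash", cost=1.0, capability=0.9),
    Candidate("Region B order-to-cash", cost=1.2, capability=0.6),
]
winner = pick_standard(variants)
print(winner.name if winner else "needs deeper review")
```

The point of the sketch is simply that when the gap is glaring, the ‘analysis’ is a one-line comparison; only the ties deserve further work.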

Where there was not an obvious winner, they probably needed to dig down and look at some other elements: what is the cost of making the change to the blueprint; what is the risk of making the change; how complex are the two similar processes (choose the less complex one, please). What they did not need was very, very detailed analysis of process performance. Except for some very specific, lower-level issues, most of the work in Business Process Management is done very effectively without intricate numeric methods. So what’s a team to do for ‘heft’? Avoid it. Here’s my advice to these teams for the future:

Trust your first instincts (Blink) in SCM. If a process looks strange, it probably is, and could be improved. If it looks hopelessly complex, it probably is – simplify it. If it looks like a bottleneck, it probably is – eliminate it. Do you really need Chi-square tests or Value-Destruction Grids? Yes, definitely, if you need very fine precision in decision-making. No, if you don’t.

Business people aren’t that impressed by highly detailed numeric analysis of business performance. In fact, they may be confused or misled about what decisions they need to make. I recall one instance of working with an executive on a critical Logistics problem at Compaq. He had been handed a 600-page report by a regional team. His single comment: if it’s that big, it must be fake. Whew! Keep the analysis simple.

Spending a long time on analysis creates another type of problem: by the time you finish, the thing being analyzed has changed. The more time you spend, the greater the error. Get your information out quickly, get a debate going, drive change. Be useful now.

Sponsors and stakeholders are generally comfortable with 80% solutions. They can move forward with decisions quickly (the more senior they are, generally, the more comfortable they are navigating with incomplete analysis), their experience serves as a guide, and the team looks more rational than one chasing 100%-perfect, nitty-gritty detail.

Trust the “Blink” when you do SCM analysis.

This entry was posted in Supply Chain and Supply Chain Projects by Joseph Francis.

About Joseph Francis

Joseph Francis is a former Managing Director of PCG with over 20 years of experience in Supply Chain and IT Management. He served as Executive Director of the Supply Chain Council, is co-designer of OpenReference's emerging global standard in supply chain strategic business management, and is recognized worldwide as an expert in supply chain operations management.