Keep On Keeping On

Someone told me a bad joke this morning: “How is Winnie the Pooh like Jack the Ripper?”… They both share the same middle name. In Six Sigma’s “DMAIC” or “DMADV” there are five distinct phases of work; in the method my company uses, “SCADD”, there are five distinct phases of work; in SCOR there are five distinct phases; in many methodologies there are 4, 5, or 6 phases of work… and they all somewhat share the same ‘middle’ name: analysis. I’ve written here before about not getting locked into ‘analysis paralysis’, by focusing on designing an analysis plan to answer a question – usually a question about the root cause of business performance that doesn’t meet a requirement. However, finding a root-cause issue is not the same as solving the problem, and that is the focus of my interest today.

To be valuable, Supply Chain re-engineering should be about solving business issues – generally performance or capability issues. Merely identifying an issue, while providing great input for decisions, investments, and change, doesn’t automatically solve it. In fact, there can be many, many ways to resolve an issue once its cause is identified – and you’re not going to do all of them. I’ve had situations where, for a relatively simple process issue, clients have been faced with 30-40 candidate solutions as the result of root-cause analysis. Normally we would hope to have 5-10 prioritized changes to perform to solve the problem, not 30-40. All too often I’ve seen teams say “Voila” at the end of analysis, hand over the results, and start the next project. But it’s not over…

What’s going on? We lost the “DD”, or the “DV”, or the “IC” part of the method. Once you’ve found some possible solutions to a definite root cause, you are going to have to design changes to processes. These designs take each solution and look at three types of change. One group will involve structural changes – connecting processes in different ways, deleting processes or groups of processes, possibly adding completely new processes. Large structural changes are usually the most complex to put into place, while simple ones – two processes connected through an email or report – are ‘low-hanging fruit’. Another group of designs looks at ‘rule’ changes – running a process cycle more frequently, different policies or parameters for a process, different triggers for performing the process. Rule changes are generally the simplest, though changing a rule like re-planning inventory levels more frequently may require substantial IT heft to implement. Lastly, designs will look at changing business practices within processes. These may be as complex as automating processes with a large ERP system (in the case of Supply Chain), or as simple as cross-training to provide more workforce flexibility and planning accuracy.
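To make the three groups concrete, here is a minimal sketch – my own illustration, not part of SCADD, DMAIC, or SCOR, and with hypothetical solution descriptions – of how a team might tag candidate solutions by change type before designing them:

```python
from enum import Enum

class ChangeType(Enum):
    STRUCTURAL = "structural"  # reconnect, delete, or add processes
    RULE = "rule"              # cycle frequency, policies, parameters, triggers
    PRACTICE = "practice"      # how work is performed inside a process

# Hypothetical solutions coming out of a root-cause analysis, tagged by change type.
solutions = [
    ("Feed the forecast review into S&OP via a weekly report", ChangeType.STRUCTURAL),
    ("Re-plan safety stock weekly instead of quarterly", ChangeType.RULE),
    ("Cross-train planners across product families", ChangeType.PRACTICE),
]

for description, kind in solutions:
    print(f"[{kind.value}] {description}")
```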

Each solution will generate a design along these lines, and each design is generally then validated to see how much of an effect it will have on the targeted performance. Finally, design change scope is assessed – major changes spanning weeks and months, or multiple organizations, versus low-hanging-fruit changes which can be done in hours or with a few people. The designs usually then have to be sorted and compared, and finally the business process owners select the programs they want implemented from the validated base. This is a far cry from “here’s a batch of solutions”!
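As a rough illustration of the sorting-and-comparing step, here is a small sketch – the names and figures are mine, not from any client work – that ranks validated designs by impact per unit of implementation effort, so low-hanging fruit surfaces ahead of the mega changes:

```python
from dataclasses import dataclass

@dataclass
class Design:
    name: str
    impact: float  # validated effect on the targeted metric, e.g. % improvement
    effort: float  # rough scope, e.g. person-weeks to implement

def rank_designs(designs: list[Design]) -> list[Design]:
    """Sort designs by impact per unit of effort, highest first."""
    return sorted(designs, key=lambda d: d.impact / d.effort, reverse=True)

candidates = [
    Design("Connect two processes via an emailed report", impact=2.0, effort=0.5),
    Design("Change the re-planning trigger", impact=5.0, effort=3.0),
    Design("Restructure the order-to-delivery flow", impact=12.0, effort=26.0),
]

for d in rank_designs(candidates):
    print(f"{d.name}: {d.impact / d.effort:.1f} impact points per person-week")
```

In practice process owners weigh far more than a single ratio – risk, dependencies, who owns the process – but a simple ranking like this is often enough to shrink 30-40 candidates to a short list worth debating.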

Lastly, once all the designs have been validated and selected, teams generally make one or more final passes to examine the impact of the changes. Projects are defined against each accepted design. A portfolio of all existing projects is generated, and then compared with the new designs. ROI is usually assessed against each project, then the portfolio is rebalanced for high ROI impact, and budget constraints determine which projects do not get done. Change management is launched, and finally the impacts of the changes are tracked against the original processes. Hopefully the program is closed formally at a certain point. All of this is many days and weeks down the line from “here’s a bunch of analytic findings!”
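The rebalancing arithmetic can be sketched the same way. The greedy pass below – purely illustrative, with made-up project names, costs, and ROI figures – keeps the highest-ROI projects that still fit the budget; real portfolio reviews also weigh dependencies, risk, and timing:

```python
def rebalance_portfolio(projects, budget):
    """Keep the highest-ROI projects that fit within the budget (greedy pass)."""
    selected, spent = [], 0.0
    for name, cost, roi in sorted(projects, key=lambda p: p[2], reverse=True):
        if spent + cost <= budget:
            selected.append(name)
            spent += cost
    return selected, spent

# Hypothetical project portfolio: (name, cost, ROI multiple).
portfolio = [
    ("Inventory re-planning rule change", 40_000, 3.5),
    ("ERP workflow automation", 250_000, 1.8),
    ("Planner cross-training program", 60_000, 2.4),
]

chosen, spent = rebalance_portfolio(portfolio, budget=150_000)
print(f"Funded: {chosen}, total spend {spent:,.0f}")
```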

I know many of you are thinking “we know all this”. But sometimes the obvious eludes everyone. Also, the temptation to present a block of analytic findings as a fait accompli for process improvement is overwhelming – especially, I find, in new Centers of Expertise (COEs) which need to quickly produce results – any results!

I want to point to several things that help ensure COE teams exercise due diligence in completing the method and really delivering business performance. First, a process ‘steering’ body should look at projects, especially staffing plans for projects, to ensure that people do not drift off a project which is incomplete – frozen at analysis – and start new programs. With competing priorities – starting new projects, immediately implementing findings, scoping new work, planning resources – it is crucial that a steering body take responsibility for sorting out conflicting demands on process organizations.

Second, COE teams need to establish business plans which measure their performance in terms of real change accomplished – a tally of benefits – in addition to the usual tracking of how many processes were captured or how many programs were executed. In annual review cycles of COE teams, the question sometimes comes up – “what would happen if the team went away?” – and a tracking of programs and benefits usually heads that question off altogether.

Third, process business partners – sponsors and stakeholders – should demand at least two levels of design for completed process programs: “Level 2”, or the major entities, work, and material flows for the program, including a high-level characterization of changes; and “Level 3 & 4”, or detailed process-flow designs for structural changes, as well as detailed workflow diagrams for practice changes within individual processes.

Analysis is a critical competency, but future-state process design is too. They work in tandem. Teams which accentuate one over the other, or halt too early in programs, risk failing at both, leaving programs with no defined results.