People want to be sure. When you are dealing with health and illness, wellness and disease, you want to get things right.
Lean is founded on direct observation of processes. In many instances, you can look at a process directly. In other situations, the process is spread over time and space, and you may not be able to see everything at once. Even when you can undertake personal observations, you may need to supplement them with electronic data – for example, when the team involved wants to look at aspects of flow, and that needs information from electronic data sets.
Often staff believe there is no data available. Sometimes this proves to be correct, but more often there turn out to be avalanches of data stored on multi-terabyte servers, generally available for the asking – assuming you can locate the right person, which usually takes no more than a phone call and a polite query, accompanied by the right set of permissions.
Even when people can locate data and begin to collect significant volumes of information, it can often be very difficult to turn this into action. Often the problems are reasonably well understood from personal observation even before moving to electronic data. Sometimes evidence can push you in a completely new direction by challenging what you believed to be true, but more often it refines what you thought and makes variation more apparent.
So, why do people get stuck? Common reasons include:
- Lack of clarity on the question when they ask for data, resulting in too much information.
- Losing sight of what they wanted the information for in the first place.
- Mistaking the information needed for academic research for the information needed for improvement.
- Active attempts to delay a project – less common, but it can happen.
- An overwhelming anxiety that the information they have is not good enough to allow them to make decisions.
I’ll write about the ‘lost in data’ options in a future post. In this post, I will focus on the final point – a feeling that it just isn’t good enough.
This is a fundamental human concern. People worry about their information not being good enough when buying a £5 pair of earbuds: search engines and review sites thrive on this worry. So it’s understandable that it can feel almost overwhelming in decisions about health and social care services.
There are several issues here, not least that no one ever has perfect information. Even Systematic Reviews and Randomised Controlled Trials, generally revered as the ultimate arbiters of correctness (in health care at least), don’t give you perfect information. Systematic reviews are only as good as the underlying trials they analyse, which may not have been conducted in populations similar to yours – and there may be few or even no relevant trials in the first place. Randomised Controlled Trials can get it wrong, whether by chance, design flaws or inadvertent bias – which is one of the reasons that systematic reviews exist. And, often, the kind of question you grapple with in improvement work does not lend itself to a Randomised Controlled Trial in the first place.
For me, there are two misunderstandings here. Firstly, if you prototype and conduct PDSA cycles, then QI work tends to be self-correcting: you find out very quickly that the change is not having the effect you intended. That pushes you back to thinking about why the change did not have the effect you predicted. Was it something in the execution or context? Is it possible that the underlying theory is incorrect? Either way, you go again, and gather more information on whether or not it has an effect. You don’t just implement an entire change without knowing if it works or not, which is often people’s fear, based on their experience of traditional planning cycles and project implementation.
Secondly, nothing in this world is perfect. You can’t have perfect knowledge, perfect implementation, perfect theory. Sure, you can strive for Dr Deming’s profound knowledge, but you’re not going to reach perfection. You can never know everything. So, if that is your benchmark, you end up doing nothing, and asserting that you need just one more analysis, one more observation before you can do anything at all.
There’s a branch of psychological theory called Object Relations Theory. You can read about the detail if you are interested. The relevance here is the idea of being ‘good enough’. People often blame parents, particularly mothers, for children’s problems. This is very unfair, but that doesn’t seem to make it happen any less. This can drive people to impossible lengths to try to deliver the ‘perfect’ childhood.
Donald Winnicott was a British paediatrician who trained as a psychoanalyst. His work is worth reading, but the focus here is on his idea of the ‘good enough’ mother. There is a discussion of the idea and its relevance to health care at this link. Winnicott believed that parents – and mothers in particular – did not have to be ‘perfect’; they just had to be ‘good enough’. Winnicott wasn’t talking about settling for second best: he was making the point that being perfect – if that means doing everything – doesn’t help people learn to cope.
Part of the ethos of Lean is to create a system in which staff can identify problems, undertake root cause analysis, generate options for improvement, prioritise them, and implement and test their preferred improvement options. People learn what is important to managers and coaches by what they do, as well as by what they say. If you demonstrate to people that they can’t do anything until they know everything, then you are modelling a logic trap. If they internalise this learning, they will get nothing done.
Better to focus on the idea of ‘good enough information’: enough to let you form a tentative theory, develop countermeasures and test them out. If they don’t work, you will know more than you knew at the start, and can build on this in future iterations of your change process.
Don’t get caught in the analysis trap. Some analysis is essential – too much can cause change paralysis.