Health care staff in general, and doctors in particular, like randomised controlled trials (RCTs). I discussed evaluation of Lean in a previous post, but it comes up often in health care, and it is worth spending more time on it.
Randomised controlled trials are very good ways of reducing bias in an evaluation. In a classic RCT, you identify people with a particular condition, and randomly allocate them to receive, for example, treatment as usual, or the new treatment you hope will be better. In a blinded trial, the patient does not know if they are in the ‘new intervention’ group or not. In a double blind trial, the health care professional does not know either – you can even have triple blind trials, where the person undertaking the analysis does not know which group is which until the very end of the study.
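The allocation step described above can be sketched in a few lines of Python. This is a toy illustration of simple (unstratified) randomisation only; real trials typically use block or stratified schemes to keep the arms balanced, and the arm names here are made up for the example.

```python
import random

def allocate(patient_ids, arms=("treatment_as_usual", "new_treatment"), seed=None):
    """Randomly allocate each patient to one study arm.

    Toy sketch of simple randomisation: each patient is assigned to an
    arm independently and with equal probability. Real trials usually
    add blocking or stratification to balance arm sizes.
    """
    rng = random.Random(seed)
    return {pid: rng.choice(arms) for pid in patient_ids}

# Allocate ten (hypothetical) patients; a fixed seed makes the
# allocation reproducible for audit.
groups = allocate(range(10), seed=1)
```

In a blinded trial, of course, this mapping would be held by the trial office, not shown to the patient or clinician.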
The intention of this is to reduce some types of bias, and it works very well for a lot of questions. In some cases, a modified design is needed. For example, if the new treatment is surgery and the old treatment a drug intervention, then it is not possible to blind the patient to the study. Randomising patients is one thing, but in some cases you want to compare services, areas or even schools. This often requires more complex designs, such as cluster trials. For example, if you want to compare an intervention across wards, and you want the wards far apart so that one ward in a hospital cannot influence another, you might use hospitals as your sampling unit: randomly select hospitals, and then randomly select a ward – say, a medical ward – within each selected hospital. Some designs can be very complex, using combinations of sampling methods.
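The two-stage selection just described – hospitals first, then a ward within each – can be sketched as follows. The hospital and ward names are invented for illustration; a real cluster design would also weight the analysis for cluster size and within-cluster correlation.

```python
import random

def cluster_sample(hospitals, n_hospitals, seed=None):
    """Two-stage cluster sample.

    Stage 1: randomly select n_hospitals hospitals (the clusters).
    Stage 2: randomly select one ward within each chosen hospital.
    Returns a mapping of hospital -> selected ward.
    """
    rng = random.Random(seed)
    chosen = rng.sample(sorted(hospitals), n_hospitals)
    return {h: rng.choice(hospitals[h]) for h in chosen}

# Hypothetical sampling frame of hospitals and their medical wards.
frame = {
    "Hospital A": ["Medical Ward 1", "Medical Ward 2"],
    "Hospital B": ["Medical Ward 1"],
    "Hospital C": ["Medical Ward 1", "Medical Ward 2", "Medical Ward 3"],
}
sampled = cluster_sample(frame, n_hospitals=2, seed=42)
```

The sampled wards, not individual patients, would then be randomised to receive the intervention or not.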
If you want to evaluate Lean, and you want to use an RCT methodology, there are quite a lot of challenges. For example, the wards, units or hospitals cannot be blind to the intervention – they are going to know if they are doing it or not. You might try to take partial account of this by, for example, offering some other training so that both wards do get some attention.
This is moving into the region of evaluation of complex interventions: Lean is more complicated than administering a medicine to a group of patients. You are offering teaching and training; management attention; feedback and so on. In this type of evaluation, it is common to produce a theory of change, or a logic model, and to structure the intervention around that. An example of a design using this structure can be found at this link.
In theory, there’s no reason why similar designs could not be used to evaluate Lean in health care settings. Joosten and colleagues touched on some of this in a 2009 paper, available here. As with some other health care discussions of Lean, I was not convinced by some of the description. Joosten and colleagues describe Lean as being used ‘in a highly prescriptive way, limited to the application of shop floor tools’. They envisage American authors as then identifying how to use these tools outside a production line. As I discussed in the Deming post, I don’t believe this to be correct: Lean was used by Japanese companies in sophisticated ways, including as part of an overall management system.
This does create something of a straw man, but they go on to provide an interesting and relevant account of Lean principles in health care. There is a particularly interesting discussion of socio-technical aspects of Lean – the interaction of culture and Lean ‘technology’. Again, I do not feel this is really a new Western finding – Taiichi Ohno gives examples in his book that demonstrate that he well understood the importance of organisational culture. The authors are correct, however, to point out that culture is important.
This takes us back to the evaluation of Lean. Joosten and colleagues point out that most published work on Lean is in the form of case studies, whether from a ward, a service or a whole hospital. Case studies are notoriously liable to publication bias: few people submit case reports on things that did not work, and when they do, journals often reject them. This means that most case reports are positive, and can give a distorted view of the strength of the evidence.
So, do we need RCTs of Lean? Probably not, in the classic sense of an RCT. Methods for the evaluation of complex interventions do offer possible options, and these designs can involve randomisation. Studies like this are difficult, and the methods challenging. Is Lean important enough to be worth the bother? I think so. My strong impression from projects in which I have been involved is that Lean has produced beneficial changes. This can be demonstrated in metrics from particular projects, but this would not be enough to convince all sceptics. It is also possible that complex intervention evaluations could bring new insights, for example on what type of preparation for Lean projects works best in a particular situation, or what characteristics of a project help to sustain it. Lean practitioners need to be sensitive to professional concerns about ‘the wrong type of evidence’. Few of us are in a position to undertake complex and lengthy evaluations, but if the opportunity comes along, we should be willing to participate in them, both to help generate evidence, and because we may gain new insights into Lean implementation.
Photo courtesy of pakorn at freedigitalphotos.net