Evidence-informed Quality Improvement
Martin Marshall, Professor of Healthcare Improvement, UCL; GP, London Borough of Newham
In the last two decades systems-based quality improvement has moved from being something of an amateur sport to becoming a discipline in its own right, underpinned by a philosophy and group of values, a rigorous set of methods and an evidence base supported by the emerging science of improvement. The research evidence describing how best to organise and deliver care is less definitive than that relating to clinical practice, reflecting the nature of the social sciences on which it is based. It does, however, provide useful guidance to practitioners who want to ensure that their efforts to improve their work are as effective as possible.
What does the evidence tell us? First, there is a wide choice of improvement methods to choose from, some of which are designed to be used by individual health professionals or small teams, some by whole organisations, and some by policy makers and system leaders.
Approaches to improving the quality of care
When subjected to objective empirical evaluation (which not infrequently reveals more modest benefits than people might expect from their own personal experience), we find that no one approach is very much more effective than any other, and that their impact is often variable and inconsistent. This is a consequence of the simple but important fact that effective improvement requires not just a good intervention but also effective implementation in a receptive context - and the science of how best to bring together these three elements, intervention, implementation and context, is in its infancy.
Second, we know from systematic reviews undertaken by the Cochrane Collaboration that combining different approaches to improvement is usually more effective than using any one approach in isolation. So, audit on its own often has little impact, but making the audit results publicly available in a way that encourages people to make judgements about relative differences in quality can be very powerful. So too can combining guidelines with financial incentives, in the way that was done for the Quality and Outcomes Framework. The most effective improvement efforts are those that combine approaches that facilitate internal motivation (such as peer review) with those that play on external motivations (such as financial rewards or sanctions) - the carrot and stick approach. We also know from the research evidence that it would be unwise for health professionals to dismiss the so-called 'governmental' approaches, such as regulation, competition and target setting - they can all be effective ways of achieving change.
This leads to the third lesson from the research evidence: using specific interventions to improve quality is no different from using a drug to treat a disease - all interventions have side effects. So, putting comparative performance information in the public domain is highly likely to result in gaming behaviours, or sometimes even fraud. GPs will not need to be told that using financial incentives to change behaviour risks practitioners focusing most of their attention on what is being measured (such as blood pressure levels) at the expense of elements of quality that are less measurable (such as kindness). Even apparently benign approaches to improvement, such as the popular Plan-Do-Study-Act cycles, have opportunity costs for those who use them. Overall, it is probably true that the more effective the intervention, the more likely it is to have side effects.
And finally, there is a growing body of research evidence demonstrating just how difficult it is to embed, spread and sustain improvements. We are all guilty of 'doing projects', satisfying ourselves that we can make a difference but then moving on to the next project and losing the hard-won gains. Only by careful planning from the start, incorporating a sophisticated array of interventions to maintain change and making a strong commitment to continuous learning, can we begin to see long-term benefits from our investments in improvement work.