The College of Medicine is committed to emphasising the use of evidence based approaches and treatments.
That does not mean – as it so often does in the outside world – that we are limited to pharmacology or even biomedical science. Those are necessary but not sufficient. Alongside them we include psychosocial sciences such as public health, economics and psychology.
Medicine deals with patients, not lab rats. How people think and behave is as relevant as the efficacy of pharmaceutical treatments or surgical techniques. We need science capable of exploring complex therapeutic and community interventions, and of establishing both clinical and cost effectiveness. Nor should we sneer at the placebo effect. Clinical trials must, of course, control for it: how else would we distinguish the effect of the treatment from the effect of the body’s own ability to heal itself? But in clinical medicine that very mechanism should be encouraged, not discarded. Every good doctor knows that. It is a phenomenon crying out for good research.
Evidence changes. The European Medicines Agency advised in September 2010 that Avandia – used to treat Type 2 diabetes – should be withdrawn because of growing evidence linking it with heart problems. The risks outweigh the benefits, the Agency said. Yet when it was first licensed ten years ago, the evidence suggested it was safe for all except those already suffering heart failure.
The following month saw reports that reboxetine, prescribed for the acute treatment of major depression, may be less effective than had been believed, and potentially harmful.
Some years ago, selective serotonin reuptake inhibitors (SSRIs) were hailed as the final answer to treating depression. Whether your view is that depression is nothing more than a biochemical imbalance in the brain or that it is largely psychological in origin, we now know that SSRIs – far from healing depression – have been linked to suicides in some children and young people. Then, the evidence and the guidance were to prescribe SSRIs to this group. Now the evidence and the guidance say the opposite.
There has long been an effective and safe alternative to SSRIs. St John’s Wort is a herbal medicine that has been well researched and demonstrated to be effective in treating mild to moderate depression. As far as we have been able to learn, there have never been any reports linking it to deaths. In Germany and elsewhere in Europe, doctors regularly prescribe it. But not in the UK. Why is it not recommended here? Is the answer prejudice or simply a blinkered approach from the traditional medical science community?
In a world where we rely on evidence based medicine, we cannot afford to be blinkered about its imperfections. Science must be objective and neutral if it is to have any real and lasting value.
There is growing concern that the financial interests of pharmaceutical companies may distort evidence, particularly through the failure to publish negative trial results. Other critics argue there is no doubt that conflicts of interest exist: biased reporting and poor-quality design in industry-funded trials are leading to “bad evidence”. One review found that trials sponsored by drug companies were more than four times as likely to show results favouring the companies’ products as those sponsored by other organisations.
If that is right, it means doctors are making treatment decisions based on poor evidence: they may be prescribing medications that are unnecessarily more costly than alternatives, less effective, or more likely to result in adverse effects. This is so serious an issue – both for patient safety and for the public purse – that it deserves urgent investigation, perhaps by NICE, with strategies that take patient safety into account.
Forging a new gold standard
Nor is the concept of the randomised controlled trial as the ‘gold standard’ without its critics. Sir Michael Rawlins, Chair of NICE, and Dame Carol Black, former RCP President and now NHS National Director for Health and Work, have independently argued that, valuable though the RCT is, it does not necessarily answer all the questions we need to ask in order to achieve effective evidence based practice. In primary care, for instance, there is often very little RCT evidence on which to base clinical decisions. Complexity theory and complexity thinking have emerged in medicine in recent years largely because reductionist science has failed to address real problems in real people.
RCTs are certainly the best method of establishing the efficacy of a single new drug. However, they tell us little about how the clinical effectiveness of one treatment compares with another, or how we should manage complex long-term conditions with multiple and sometimes non-drug interventions. Nor do they tell us about drug interactions – a serious issue when so many patients have complex problems and co-morbidities.
Society needs science that can be applied to real-life questions in a real-life context. Will treatment X or treatment Y be more effective for the individual patient in front of the doctor at this moment? What sort of evidence do we need to answer questions like that? What sort of evidence do we need to compare different treatments and establish cost as well as clinical effectiveness? That last question is of growing urgency in view of the current financial situation. Watch this space.