A network for students interested in evidence-based health care

All fair comparisons and outcomes should be reported

Posted on 2nd February 2018 by

Tutorials and Fundamentals

This is the twenty-third blog in a series of 36 based on a list of ‘Key Concepts’ developed by an Informed Health Choices project team. Each blog explains one Key Concept that we need to understand to be able to assess treatment claims.


Importance of treatments being compared fairly

The importance of basing healthcare decisions on sound evidence has never been clearer than it is today. However, if the evidence is to be relied on, trials evaluating healthcare treatments must involve fair comparisons. This means, for instance, making sure that neither treatment group is advantaged in any way (e.g. by being younger and healthier than the comparison group), and ensuring that participants are similar on important characteristics by allocating them to groups at random [2]. Failure to ensure that trials are fair can introduce a host of biases that undermine the credibility of their results.

Reporting biases

Reporting bias refers to the tendency to selectively report studies or outcomes; typically, those that find favourable results and not others. It is prevalent in healthcare research and has been a problem for some time [4]. The James Lind Library also provides a useful resource regarding this issue: Recognising, investigating and dealing with incomplete and biased reporting of clinical research: from Francis Bacon to the World Health Organisation.

There are various types of reporting bias. For example, statistically significant ‘positive’ results are more likely to be published (‘publication bias’), to be published more rapidly (‘time lag bias’), and to be published in high-impact journals (‘location bias’). Reporting bias can occur within a study too: researchers may selectively report some outcomes and not others, depending on the nature and direction of the results (‘outcome reporting bias’). You can read more about the different types of reporting bias here.

Many clinical trials are now of good quality, and biases arising from unfair comparisons are less likely than they were in the past. However, even when trials are based on fair comparisons, if some studies or outcomes go unreported – especially those that yielded null or unexpected results for a treatment – the overall evidence base is distorted.

In the 1980s, a group of drugs developed to control cardiac arrhythmias following a heart attack was widely popular. Although there was evidence that these drugs reduced cardiac arrhythmias, it was assumed that, because cardiac arrhythmias increase the risk of death after a heart attack, the drugs would in turn reduce the risk of death following a heart attack. Unfortunately, there was no evidence that they did. In fact, the opposite was true: these drugs caused many deaths in the 1980s. It was later found that trials revealing the lethal effects of these drugs had been hidden from view because efforts to publish them were unsuccessful (Therapeutic fashion and publication bias: the case of anti-arrhythmic drugs in heart attack) [1].

The consequences of reporting bias in healthcare can be far-reaching. Particularly in cases where a single study is used to inform healthcare decisions, failure to publish or report null findings can have serious consequences.

Relevance for systematic reviews

Let’s consider reporting bias in the context of systematic reviews. Reviews of fair comparisons should be systematic. Systematic reviews aim to provide unbiased assessments of the effects of treatments. Because they attempt to reduce bias, they should be relied on more than other forms of review (such as narrative reviews) or single studies, which are more prone to systematic errors (biases) and random errors (the play of chance) [1]. However, systematic reviews are limited by the quality of the studies available for review. Hence, if reporting bias is present, a systematic review of a particular treatment may over-estimate its effectiveness and/or underplay its adverse effects.

Consequences of reporting bias

Given that healthcare decisions and future research depend on what is published, the various types of reporting bias can have serious consequences. Patients who receive treatments based on incomplete and biased evidence may be harmed, and some may die.

Reporting bias is therefore an ethical as well as a scientific problem [1].

Implications

When evaluating a systematic review, ask yourself: “Have the authors attempted to locate relevant unreported evidence?”

It is also important for authors of systematic reviews to attempt to engage with unreported research [4].

A combination of top-down and bottom-up solutions can help to address the problem. Fortunately, measures such as making it mandatory to pre-register study protocols and to publish results whatever they show can help. Trials Tracker is also an excellent tool: it exposes those who have not shared their trial results and helps to identify studies that should have been published.


Learning resources which further explain why all fair comparisons and outcomes should be reported

Read the rest of the blogs in the series here

References (pdf)

Testing Treatments



Benjamin Kwapong

I am currently a master's student (MRes Psychology) at the University of Manchester. I hope to be a clinical psychologist, so I am interested in evidence-based treatments to inform my practice once I qualify. I am particularly interested in dementia research (pharmacological and non-pharmacological), so I want to increase my understanding of how evidence for interventions comes about and how best to understand it. View more posts from Benjamin
