People should not know which treatment they get
Posted on 30th October 2017 by Kenneth McLean
This is the seventeenth blog in a series of 34 blogs based on a list of ‘Key Concepts’ developed by an Informed Health Choices project team. Each blog will explain one Key Concept that we need to understand to be able to assess treatment claims.
Whenever a patient receives treatment for their condition and later experiences an improvement, people often assume that this is due to the most obvious ‘cause’ – the treatment itself. While this could well be true, we need to exclude other possibilities before we can be confident in this conclusion.
For example, the condition could have resolved on its own; the doctor-patient interaction could have had its own benefit; the patient may have behaved differently as a result of their treatment; or their expectations could have influenced how their condition was perceived. This automatic assumption that an observed ‘effect’ is due to a particular ‘treatment’ has been a major driving force behind the adoption of many medical treatments which have no real benefit.
We need fair tests to ensure that patients’ own opinions and expectations of treatment do not influence the results. One of the best ways to do that is to keep participants in the dark (“blinded”) about which treatment they have been allocated in a study.
Example 1: Anton Mesmer and ‘animal magnetism’
One of the earliest examples of blinding participants to a treatment was in 1784, when Anton Mesmer’s fantastical claims regarding the effects of ‘animal magnetism’ were scrutinised (1). People exposed to ‘magnetised’ objects were reported to be cured of various illnesses. Some even experienced ‘mesmeric crises’ which could feature shrieking, crying, fainting, and convulsions. Eventually, the practice drew the attention of King Louis XVI of France, who appointed a commission to investigate the phenomenon scientifically.
The Commission found that participants only experienced these crises when told they were being exposed to ‘animal magnetism’, whether this was true or not – “imagination without magnetism produces convulsions and magnetism without imagination produces nothing”. Therefore, it was concluded that “imagination, imitation, and touch were the true causes” of the effects observed.
While such a dramatic impact might be rare in the context of a clinical trial, it illustrates just how powerful belief can be in shaping the response to treatment. This effect has been demonstrated many times in many different contexts (2). Therefore, wherever possible, participants should be blinded to ensure that there can be a fair comparison made between two treatments.
Example 2: Homeopathy
One of the most common methods of blinding patients involves giving tablets, identical in appearance, to all groups being compared. Participants in the treatment group receive tablets containing the medication being investigated, while those in the control group receive an inert sugar pill (aka a ‘placebo’) (2). Any effect experienced by patients in the control group, good or bad, can therefore be attributed to something other than the medication itself, such as their perceptions and expectations (the placebo effect). As such, placebos have become a desirable component of clinical trials where the therapeutic benefit of a potential treatment is uncertain.
One use of these placebos has been to investigate whether homeopathic remedies work. These remedies are increasingly used by the general public (3), and beneficial ‘effects’ have been reported in patients treated with them. However, when compared with blinded placebo controls, no differences have been detected (4).
Example 3: Sham procedures
Effective blinding of patients to the treatments they have been allocated is not always as simple as giving both groups identical pills: consider studies involving a medication with an obvious side-effect (e.g. a change in urine colour), or studies comparing surgical and non-surgical interventions. In these cases it may still be possible to blind patients to the treatment received, but this often requires researchers to be more creative.
One such study was performed in patients who were due to receive surgical treatment for osteoarthritis of the knee (6). Those in the control arm underwent a ‘sham’ procedure in which the standard procedure was mimicked, but without the step thought to be of actual benefit (either lavage or debridement). This included giving anaesthesia, making incisions in the skin over the knee, and simulating the sounds of the joint space being washed out.
Interestingly, when the intervention and control groups were compared, no difference in patient-reported knee pain or function was detected between those who had real knee debridement or lavage and those who had the sham procedure.
Since the sham procedures involved some degree of intervention, questions have been raised over the conflict between providing the best quality research and the ethical principle of “first, do no harm” (7).
Without blinding patients, studies are more likely to find positive results, particularly when the outcome measures used are subjective, like symptoms (6). This highlights the need to be cautious about relying on the results of treatment comparisons if the participants knew which treatment they were receiving.
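To get a feel for why this happens, here is a minimal, purely illustrative simulation (the bias size, sample size, and score scale are assumptions for illustration, not figures from any real trial). Both arms have an identical true effect; the only difference is that unblinded participants who know they received the “real” treatment report slightly better subjective scores. The unblinded comparison then shows an apparent benefit that does not exist.

```python
import random

random.seed(42)

N = 500  # patients per arm (assumed)

def true_improvement():
    # True underlying improvement is identical in both arms: no real effect.
    return random.gauss(0.0, 1.0)

# Assumed expectation bias: unblinded patients who believe they received
# the active treatment report subjective scores higher by this amount.
EXPECTATION_BIAS = 0.4

# Unblinded trial: only the treatment arm knows it got the "real" treatment.
unblinded_treatment = [true_improvement() + EXPECTATION_BIAS for _ in range(N)]
unblinded_control = [true_improvement() for _ in range(N)]

# Blinded trial: neither arm knows its allocation, so any expectation
# effect applies equally to both arms and cancels out in the comparison.
blinded_treatment = [true_improvement() + EXPECTATION_BIAS for _ in range(N)]
blinded_control = [true_improvement() + EXPECTATION_BIAS for _ in range(N)]

def mean(xs):
    return sum(xs) / len(xs)

unblinded_diff = mean(unblinded_treatment) - mean(unblinded_control)
blinded_diff = mean(blinded_treatment) - mean(blinded_control)

print(f"Apparent benefit, unblinded trial: {unblinded_diff:.2f}")
print(f"Apparent benefit, blinded trial:   {blinded_diff:.2f}")
```

Running this, the unblinded comparison reports an apparent benefit of roughly the size of the expectation bias, while the blinded comparison hovers around zero, even though the treatment does nothing in either case. This is the statistical shadow of the effect Mesmer’s commissioners observed: imagination alone can produce a measurable difference when participants know what they are getting.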
In cases where blinding patients is deemed unfeasible, it is important that researchers consider other ways to minimise the effect this could have on the results, for example by blinding the observers who measure the outcomes, even if the patients themselves know their allocation.