Anecdotes are unreliable evidence
Posted on 16th June 2017 by John Castle
This is the second in a series of 36 blogs based on a list of ‘Key Concepts’ developed by an Informed Health Choices project team. Each blog will explain one Key Concept that we need to understand to be able to assess treatment claims.
Individuals can react differently to the same treatment…
Every single person on Earth is unique. Because of this, when one person receives a treatment or an intervention, the interaction between their body and the treatment may be beneficial or harmful, while the same treatment may interact with someone else's body quite differently.
This is the basis for why personal experience and anecdotes are unreliable forms of evidence. If we test a treatment on a single person, we cannot be certain that its effects were not due to chance variation in that individual's body. This is why we need to test treatments in large numbers of people: the more people a treatment is tested in, the less likely it is that individual variation is affecting the results.
Also, people tend to equate coincidences with cause and effect…
An effect that occurs in a single person after a treatment may not have been caused by the treatment at all. (This will be discussed further in a later blog in this series, on Key Concept 1.3: ‘association is not the same as causation’.)
For example, a person may develop a cold. Colds last, on average, 7 days. Despite evidence that antibiotics are likely to be ineffective against a cold, the person may start taking antibiotics on the 5th or 6th day of their illness, and the cold disappears around the 7th day. We know it is unlikely the person got better because of the antibiotics: colds are viral illnesses, and antibiotics act on bacteria, not viruses. However, the person may believe that the antibiotics were the reason the cold disappeared. In reality, they were likely to get better anyway, without the medication.
This also goes in the opposite direction. Imagine a person starts taking a drug that research has shown is not commonly associated with adverse effects. If, by coincidence, they develop a migraine on the first day of the medication, they may assume the drug caused the migraine and stop taking it.
It’s not just patients who fall foul of this mistake. There are many examples where doctors prescribe a drug that has not been licensed for a particular condition simply because they have seen it working in other patients. Doing so completely ignores the potential harms of the treatments, which are unknown without proper tests (See Key Concept 1.1 ‘treatments can harm’).
The example of Diethylstilbestrol (DES)
DES became popular in the early 1950s. It was thought to improve a malfunction of the placenta that was believed to cause miscarriages and stillbirths. Those who used it were encouraged by anecdotal reports of women with previous miscarriages and stillbirths who, after DES treatment, gave birth to a surviving child.
For example, one British obstetrician, consulted by a pregnant woman who had had two stillborn babies, prescribed the drug from early pregnancy onwards. The pregnancy ended with the birth of a liveborn baby. Reasoning that the woman’s ‘natural’ capacity for successful childbearing may have improved over this time, the obstetrician withheld DES during the woman’s fourth pregnancy; the baby died in the womb from ‘placental insufficiency’.
So, during the woman’s fifth and sixth pregnancies, the obstetrician and the woman were in no doubt that DES should be given again, and the pregnancies both ended with liveborn babies. Both the obstetrician and the woman concluded that DES was a useful drug.
Unfortunately, this conclusion was based on anecdote and was never shown to be correct in fair tests conducted with larger samples. Over the same period of time that the woman was receiving care, unbiased studies with many more participants were actually being conducted and reported, and they found no evidence that DES was beneficial.
Twenty years later, evidence of harmful side-effects began to emerge when the mother of a young woman with a rare cancer of the vagina made a very important observation. The mother had been prescribed DES during pregnancy, and she suggested that her daughter’s cancer might have been caused by the drug. This time the observation was correct. But, more importantly, it was shown to be correct through systematic research involving large samples of participants, rather than one or two individuals.
Now, to summarize: I’m not saying that a large number of corroborating anecdotes cannot herald something significant about a treatment. Indeed, the whole system for reporting a drug’s side effects in the UK depends on gathering patient anecdotes. But what I am saying is that, to be sure about the safety and efficacy of a treatment, it should be rigorously evaluated in fair comparisons. Without these, we have no way of knowing whether we’re giving (or being given) something that could kill a person, or something that could save their life.
Some content republished from Testing Treatments with permission.