
Too Much Medicine: some reflections

Posted on 19th May 2017 by Sasha Lawson-Frost


The concept of ‘too much medicine’ reflects a growing concern in the medical community about the over-testing, over-diagnosis and over-treatment of various pathologies. In the past few decades, the number of people diagnosed with diseases or risk factors like high blood pressure, cancer, and asthma has dramatically increased, and hence the number of people being treated for these problems has also increased [1][2][3].

At first glance, this trend may seem encouraging – early diagnosis of cancers can save lives, the identification of risk factors can prevent serious illnesses, and ultimately if more illnesses are identified, more can be treated or cured. This picture, however, is too simplistic. Over-identification of illnesses can lead to wasted resources and cause psychological and physical harms to patients. Welch summarises the problem neatly in his book on over-diagnosis: “The biggest problem is that over-diagnosis triggers overtreatment, and all of our treatments carry some harm” [4].

In order to address this problem of too much medicine, we need to identify what exactly is causing it and how we ought to respond to those causes. One initiative to explore how and why we are faced with this problem was Oxford University’s recent conference ‘Too Much Medicine: Exploring the relevance of philosophy of medicine to medical research and practice’ (April 2017). What follows is a brief summary of a couple of the main themes that came out of this conference.

Overestimation of Risk

One clear factor in the over-diagnosis of medical problems is overestimating the risk that a patient has a certain pathology. New technologies and screening techniques can be incredibly helpful in identifying early markers of disease. However, the availability of such technologies can do harm as well as good. An obvious example is screening for cancers: undergoing these screenings can be very distressing for patients, and the treatments that follow often come with serious side-effects. Even setting aside economic questions about the efficient use of resources, we therefore have good reason to want to avoid the unnecessary diagnosis, screening and treatment of pathologies like this.

In cancer testing specifically, this problem arises in part from “incidentalomas” – abnormalities and tumours discovered by accident during tests or scans for other problems. Many incidentalomas either do not grow at all, or grow so slowly that the patient will die long before the cancer would affect them. For example, thyroid nodules found this way turn out to be harmful less than 1% of the time [5]. Thus, in the majority of cases, subjecting individuals to treatment would be unnecessary.

However, there is perhaps a deeper issue at play here than just over-screening. The availability of more information needn’t be a significant problem if we know how to handle the statistics available to us. For instance, if we know a thyroid nodule is unlikely to be cancerous, we can simply choose not to diagnose or treat it. This doesn’t seem to be happening, however. In simple terms, we’re not very good at distinguishing between an abnormality that is likely to be harmful and one that isn’t, which again means many individuals are treated unnecessarily. So more and more people are being diagnosed with problems like this whilst mortality rates remain stable.

The Base Rate Fallacy

One further factor that might be at play here is the base rate fallacy. This fallacy occurs when we take the result of a particular test (e.g. an ultrasound) at face value, without giving due consideration to the prevalence of the disease in the first place. For instance, say a particular screening technique has a low chance of producing a false-positive result – say 0.5%. If a patient tests positive, we might be tempted to conclude that they very likely have the problem in question, since the test is reliable. This is not necessarily the case, however. If the prevalence of the problem in the general population (or some relevant demographic) is only 0.1%, then the patient’s result is actually more likely to be a false positive than a true positive: out of 100,000 people screened, only around 100 actually have the disease, while roughly 500 of the remaining 99,900 healthy people would still test positive.
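To make the arithmetic concrete, here is a minimal sketch of the calculation in Python. The function name is mine, and it assumes (for illustration only – the example above gives just the false-positive rate and the prevalence) a perfectly sensitive test, i.e. one that catches every true case:

```python
# Bayes' theorem applied to the screening example above.
# Assumption (not from the example): 100% sensitivity, so every
# person who truly has the disease tests positive.

def positive_predictive_value(prevalence, false_positive_rate, sensitivity=1.0):
    """Probability that a positive test result is a true positive."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * false_positive_rate
    return true_positives / (true_positives + false_positives)

# Figures from the example: 0.1% prevalence, 0.5% false-positive rate.
ppv = positive_predictive_value(prevalence=0.001, false_positive_rate=0.005)
print(f"P(disease | positive test) = {ppv:.1%}")  # prints ~16.7%
```

So even with a test that produces a false positive only once in every two hundred screens, fewer than one in five positive results reflects real disease – the low base rate dominates the result.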

Too Much Medicine: References

[1] Akker L, van Luijn K, Verheij T (2016). Overdiagnosis of asthma in children in primary care: a retrospective analysis. British Journal of General Practice.

[2] Marcus P M, Prorok P C, Miller A B, DeVoto E J, Kramer B S (2015). Conceptualizing Overdiagnosis in Cancer Screening. Journal of the National Cancer Institute.

[3] Martin S A, Boucher M, Wright J M, Saini V (2014). Mild hypertension in people at low risk. British Medical Journal.

[4] Welch H G, Schwartz L, Woloshin S (2012). Overdiagnosed: Making People Sick in the Pursuit of Health. Beacon Press.

[5] Singh S, Singh A, Khanna A K (2012). Thyroid Incidentaloma. Indian Journal of Surgical Oncology.


Sasha Lawson-Frost

I'm a Philosophy Student at UCL. I'm currently doing some research on the philosophy of Evidence-Based Healthcare (EBH), particularly looking at how EBH deals with issues relating to patient well-being. Twitter: @sashalfrost
