False hope in big brands: why you shouldn’t rely too much on a journal’s name
Posted on 22nd July 2016 by Tran Quang Hung
Nature, The Lancet, New England Journal of Medicine (NEJM), The Journal of the American Medical Association (JAMA), British Medical Journal (BMJ).
Needless to say, they are big brands!
And big brands create great products. At least according to common thinking.
So, often, when you see a citation from one of these well-known journals, you may immediately jump to (and trust!) the conclusion, without even bothering to access the original article.
Just like a man arriving at a famous restaurant may not even bother to step through the door before posting a status update on Facebook: “It is exceptional here. Yummyyyy. I love all of the dishes.”
Hold on a second. Conclusions can be misleading, don’t you think?
You might rely too much on big brands because you trust that they have highly rigorous peer-review systems: a high-class filter that can tell good articles from bad ones, so that only quality papers get published.
You might consider the impact factor as proof of that. You might assume that the higher the impact factor, the more reliable the source. For example, the most recent (2015) impact factor for NEJM is 59.558, the highest among general medical journals. So you can cast doubt on the conclusion of any article from any journal, except for NEJM, right?
I’m sorry, the answer is no.
Let’s find out why.
Impact factor rankings can be misleading
The impact factor (IF) for a given year (e.g. 2015) is the number of citations received in that year by items the journal published in the previous two years (2013 and 2014), divided by the total number of citable articles the journal published in those two years.
So a journal with a 2015 impact factor of 6 means that, on average, each article it published in 2013 and 2014 was cited 6 times during 2015.
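As a quick illustration, here is the arithmetic as a minimal Python sketch. Every number in it is made up, chosen only so the result matches the example above.

```python
# Impact-factor arithmetic for a hypothetical journal.
# All counts below are invented, purely to illustrate the formula.

citations_received_2015 = {2013: 1440, 2014: 1560}  # citations in 2015 to each year's articles
citable_items_published = {2013: 230, 2014: 270}    # citable articles published in each year

impact_factor_2015 = sum(citations_received_2015.values()) / sum(citable_items_published.values())
print(f"2015 impact factor: {impact_factor_2015:.1f}")  # (1440 + 1560) / (230 + 270) = 6.0
```

Notice that nothing in this calculation measures whether the cited articles were correct; it only counts how often they were cited.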
The IF is claimed to be a symbol of quality, so people often cite it to showcase the reliability of a journal.
However, IF is an indicator of influence, NOT trustworthiness.
Many widely cited clinical studies are later refuted [1].
Ioannidis examined 49 highly cited original clinical-research studies, 45 of which claimed that the intervention was effective. However, approximately one third of those studies were later contradicted or found to have reported stronger effects than subsequent research showed [2].
So if you want your results to be known globally, by all means go ahead: send your drafts to the big brands and collect as many citations as possible.
But when you run into a paper from a big-brand journal, appraise it as carefully as you would any other article.
Articles in big-brand journals are not flawless
At present, no one can say with certainty that peer review guarantees the quality of biomedical research.
It can be a bit of a faith-based process.
And it turns out that peer reviewers can overlook errors.
At the BMJ, Richard Smith and colleagues took a 600-word study that they were about to publish and deliberately inserted 8 errors. Then they sent the paper to about 300 reviewers. The median number of errors spotted was 2, and 20% of the reviewers did not spot any [3].
Sometimes, studies published in medical journals are not only scientifically poor but have also done great damage.
The most famous example is the Lancet paper suggesting that the MMR (measles, mumps, rubella) vaccine caused autism: the result was a drop-off in the number of children vaccinated, epidemics of measles, and more than a decade of fruitless argument [4].
The paper was later retracted from The Lancet.
So, what should you do?
The thing is, it makes no sense to ask whether a source in itself is reliable.
Instead, try to decide whether a given paper is reliable; whether a conclusion is worth following.
Find the original link. Read the entire original paper. Carefully. From head to toe.
The real test lies in attentive scrutiny of the methods section. This process is called critical appraisal.
There are some good books for learning critical appraisal, such as How to Read a Paper and The Doctor's Guide to Critical Appraisal. The NHS even provides critical appraisals of some newly released papers.
It can feel hard. Very hard. I know.
But it’s worth the effort. Undoubtedly.
REFERENCES
1. Hughes S. Many widely cited clinical-research studies are later refuted. Medscape. July 12, 2005. Accessed July 15, 2016.
2. Ioannidis JPA. Contradicted and initially stronger effects in highly cited clinical research. JAMA. 2005;294(2):218-228.
3. Smith R. Classical peer review: an empty gun. Breast Cancer Res. 2010;12(Suppl 4):S13.
4. Wakefield AJ, et al. Ileal-lymphoid-nodular hyperplasia, non-specific colitis and pervasive developmental disorder in children. Lancet. 1998;351(9103):637-641.