3 Questions you should be able to answer with “yes!” before you read a medical paper.
Quicklinks for the busy:
Question 1: Is the study different enough from already performed studies?
Question 2: Does the study design make sense?
Question 3: Did the investigators describe randomization, blinding and power in detail?
Last Advent, my fiancée bought an “Escape the Room” Advent calendar. In it, we found 24 puzzles, one for each December day before Christmas, and an entertaining story about a thief, Sherlock Holmes and the “Koh-i-noor” diamond. Each day before work, we walked the dog, poured a cup of tea or coffee, and assumed the persona of said thief to steal the diamond.
To me, science and medical communication are a lot like “Escape the Room”. You start with a puzzle – a question, a hypothesis, an assumption – and you systematically collect evidence to solve it. Of course, real-world research is a wee bit more complicated than solving a puzzle in a defined environment. Also, real-world research takes a wee bit longer than one cup of coffee.
Our job as medical researchers is not only to communicate effectively, but also to research effectively and find the strongest evidence possible.
The following questions should help you decide, in a couple of minutes, if a publication is worth reading. Hopefully, this will save you plenty of hours, which you can then use for more productive activities. Like reading my blog.
Question 1: Is the study different enough from already performed studies?
There is no reason to test – or read about – anything that somebody else has already proven and published. But since proving something without a shadow of a doubt is impossible – except in mathematics – it stands to reason that scientists evaluate and publish slight variations of similar research questions. When done well, these studies contribute to our existing knowledge by updating the likelihood that a particular hypothesis is true or false. So, it’s perfectly fine to publish studies that seem “unoriginal”.
However, since we don’t want to waste our time with studies that don’t add value, check if one of the following applies:
- The study ran longer or enrolled more patients than existing studies.
- The methods are more rigorous than the methods in existing studies.
- The numerical results add significantly to a meta-analysis.
- The population is different (age, sex, ethnic groups) from existing studies.
When I can, I embark on my research adventure with systematic reviews and meta-analyses, because they usually provide the strongest evidence about a certain hypothesis. The Cochrane Collaboration is famous for its systematic reviews.
Question 2: Does the study design make sense?
There are excellent reasons why randomized controlled trials, or RCTs, are the best choice for certain hypotheses, especially when we want to check if a drug really works. But occasionally RCTs are unnecessary, impractical or even inappropriate.
Here is a list of research questions and the preferred study designs:
- Is an intervention effective? Randomized controlled trial
- Is a diagnostic test valid? Cross-sectional survey
- Can a test be applied to a large population to pick up a disease at an early stage? Cross-sectional survey
- What is the prognosis for someone with a certain disease? Longitudinal survey
- Is a harmful agent related to the development of a disease? Case-control or cohort study
You don’t need to shred the publication immediately if the chosen study design seems off. However, you should be extra vigilant about the proposed evidence and potential biases.
Question 3: Did the investigators describe randomization, blinding and power in detail?
Nothing weakens a study like inappropriate randomization, sloppy blinding or too few participants to detect an effect. So, before you jump to the “Results” section of a publication, scroll to “Methods” and look for the descriptions of randomization, blinding, and power.
The more detailed the description, the stronger the data usually is. Ideally, it looks like this:
Randomization – “Patients were then randomized […] according to a computer-generated random sequence using an interactive voice response system.”
Blinding – “Investigators, patients, and sponsor personnel were masked to assignment.”
Power – “The study was designed with 90% power to show superiority […] at the 26-week primary end point with an SD of 1.3%, a one-sided α of 0.025, and a noninferiority margin of 0.40%. This corresponds to 280 patients per active treatment arm and 140 for placebo, with an assumed dropout rate of 11%.”
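To get a feel for how the numbers in such a power statement hang together, here is a minimal, illustrative sketch in Python. It uses the generic textbook formula for comparing two means with a 2:1 allocation ratio and a dropout inflation; the function name, the exact formula and the output are my own assumptions for illustration, not the quoted trial’s actual calculation, so the result only lands in the same ballpark as the 280/140 figures.

```python
# Illustrative only: a standard normal-approximation sample-size formula for
# comparing two means. The quoted trial's exact numbers depend on design
# details not shown in the excerpt, so this will not reproduce 280/140.
from scipy.stats import norm

def n_per_group(sd, margin, alpha=0.025, power=0.90, ratio=2.0, dropout=0.11):
    """Approximate enrolled group sizes (active, placebo).

    sd      -- assumed standard deviation of the outcome (e.g. 1.3 %-points)
    margin  -- difference the trial must detect or exclude (e.g. 0.40 %-points)
    alpha   -- one-sided significance level
    power   -- desired power (1 - beta)
    ratio   -- allocation ratio, active : placebo (e.g. 2:1)
    dropout -- expected dropout rate, used to inflate the enrolled numbers
    """
    z_alpha = norm.ppf(1 - alpha)
    z_beta = norm.ppf(power)
    # placebo-group size from the variance of the difference in means
    n_placebo = (z_alpha + z_beta) ** 2 * sd ** 2 * (1 + ratio) / (ratio * margin ** 2)
    n_active = ratio * n_placebo
    inflate = 1 / (1 - dropout)  # account for expected dropouts
    return round(n_active * inflate), round(n_placebo * inflate)

print(n_per_group(sd=1.3, margin=0.40))  # -> (374, 187): same order of magnitude
```

The point is not the exact arithmetic but the logic: a well-reported power section gives you every ingredient (SD, margin, alpha, power, allocation, dropout) you would need to redo the calculation yourself.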
Answering these questions before you read a medical publication will remove a lot of the “litter-ature” (see what I did there?) and save you hours – or at least minutes.
Reference:
Greenhalgh T. How to read a paper: the basics of evidence-based medicine and healthcare. 6th ed. United Kingdom: John Wiley & Sons Ltd; 2019.
Do you have comments, doubts or questions for me?
… I’m looking forward to hearing from you.
See you soon, best wishes