How to Avoid Being Duped by Sham Science
Say you see an article about how chocolate can prevent aging. You want to figure out what's actually going on, so you pull up the original study. Here's how you properly read it.
So you want to read a study?
First of all, congratulations. Most people don’t make it past the media headlines, which are often written with clicks, rather than precise truths, in mind. Reading the original research paper can give you a detailed account of a new discovery and put the results in the proper context. A news story from a major publication should, in theory, do the same thing—and there are plenty of examples of great science journalism these days—but even the best reporting likely won't go into the same detail as the scientists themselves.
Trouble is, peer-reviewed studies are dense, nuanced, and altogether tough to navigate. As a freelance science journalist for outlets like Science and Nautilus, I’ve had the chance to report on everything from the sex lives of vampire squid to police shootings to robotic exoskeletons. There’s no one correct way to read a research paper, but along the way I've developed a guide to the most important things to pay attention to while reading a study. Here are some of the habits and techniques that have helped me.
Before You Begin
First, understand that science is an incremental process: every study is built on previous work, and most advance our knowledge of a subject by a very small amount. Landmark discoveries like the DNA double helix or the Higgs boson are rare and usually receive coverage from major media outlets—and maybe a Nobel Prize. Most science has an extremely narrow focus: one protein, one gene, one drug molecule in one group of test animals. The average study is only a single piece of a huge, incomplete puzzle. It can be an invaluable piece, but people, including journalists, get into trouble by overgeneralizing the results, taking them out of context, or extrapolating the findings to situations the scientists never meant to address.
Also, before you start reading, take a moment to Google the journal the study appears in. Academics love to talk about “impact factors,” or the relative prestige of various journals, but tons of great science is published in smaller, lesser-known journals, and even giants like Science and Nature are not immune to publishing junk occasionally. You want to make sure you’re not reading from a pay-to-publish journal that will take money from anyone and print anything, regardless of its quality and without peer review. Peer review is the backbone of the scientific process; it is vital, even if imperfect. Any journal that skimps on this should not be trusted.
To give you an idea of which publications to avoid, here's a list of “potential, possible, or probable predatory scholarly open-access publishers.”
Finally, Google the authors. What are their qualifications? What have they done in the past? Where do they work?
Where to Start
Begin with the article’s abstract—the one-paragraph introductory CliffsNotes version of the study. A good one will be written in plain language and should roughly explain what the researchers wanted to know, the experiment they conducted, and the gist of their results. Because it’s supposed to be a short summary, the abstract probably won't make much of an attempt to put the results into context or explain the limits of the experiment—it prioritizes clarity and conciseness over nuance and the quantification of risk or uncertainty. So, while reading the abstract first is helpful, it is impossible to write a thorough and measured article based solely on the abstract of a paper, though lazy journalists will sometimes try.
Following the abstract, many journals will have a “background” section, which helps contextualize the study within the existing body of research. This section is useful in figuring out how important the new finding is. For example, you might see a news article trumpeting a huge breakthrough in cancer treatment, but in reality scientists have merely refined an existing cancer treatment and made it marginally safer or more effective. The background will help temper the hype.
Materials and Methods
This section is usually pretty dense, and if you’re a non-expert, don't expect to understand every sentence. But it often contains vital information that will help you get an idea of the significance and particularities of the results.
You’ll want to keep an eye out for particularly important information like sample size and the species of the animal model. Sample size in particular can help you decide how much trust to put in the results. Studies on a dozen test subjects (rats, ferrets, people, rocks, cell cultures, celestial events) can be interesting and help guide future research, but such a small sample increases the chances of both false positives and false negatives. A small sample size does not inherently mean a study is bad, but readers should be skeptical of any claims about what the results mean for the world at large. A study on how gut microbes influence weight gain in a population of 15 Americans, for instance, may be useful as a research tool, but a resulting article that claims your gut microbes are making you fat should be viewed as dubious at best and irresponsible at worst.
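To see why small samples are shaky, here's a quick simulation sketch in Python. All the numbers (the treatment's effect, the individual variation, the sample sizes) are invented for illustration: it runs a thousand imaginary studies of a treatment with a real but modest effect and shows how much the estimates swing when each study has only a dozen subjects.

```python
import random
import statistics

random.seed(42)

def simulated_study(n, true_effect=2.0):
    """One imaginary study: measure the average response in n subjects
    whose true average benefit is +2 units, with lots of individual
    variation (standard deviation of 5 units)."""
    return statistics.mean(random.gauss(true_effect, 5.0) for _ in range(n))

# Repeat each "study" 1,000 times and see how much the estimated
# effect bounces around at each sample size.
for n in (12, 200):
    estimates = [simulated_study(n) for _ in range(1000)]
    spread = statistics.stdev(estimates)
    wrong_sign = sum(e < 0 for e in estimates) / len(estimates)
    print(f"n={n:3d}: estimates vary by about +/-{spread:.2f}; "
          f"{wrong_sign:.1%} of studies even get the direction wrong")
```

With 12 subjects, a real effect frequently looks bigger, smaller, or even reversed compared to the truth; with 200, the estimates cluster tightly around it. That's the intuition behind distrusting sweeping claims from tiny samples.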
The average study is one piece in a huge, incomplete puzzle.
You’ll also want to pay close attention to the models used in the research. It seems obvious that a lab mouse is not the perfect equivalent to a human, but it’s easy to get excited when a new drug, diet, or operation looks promising in mice. Animal models are testing grounds, not proving grounds; they inform researchers about what might work in humans.
Looking at the type of experiment the scientists performed is also useful. Randomized controlled trials, which randomly assign subjects to groups that either do or do not receive the treatment in question, are often considered the gold standard. However, these experiments are expensive and come with ethical challenges—you can’t assign people to treatment groups when the treatment is starvation or limb loss, for instance. Every study design has its own strengths and weaknesses, and determining which design is right for the job is something researchers and critical readers must evaluate on a case-by-case basis.
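Random assignment, the heart of a randomized controlled trial, is conceptually just a shuffle and a split. Here's a minimal Python sketch with hypothetical subject IDs (everything here is made up for illustration):

```python
import random

random.seed(7)

# Hypothetical subject IDs for a small imaginary trial.
subjects = [f"subject_{i:02d}" for i in range(1, 21)]

# Shuffle, then split down the middle, so neither group differs
# from the other in any systematic way.
random.shuffle(subjects)
treatment, control = subjects[:10], subjects[10:]
print("treatment group:", treatment)
print("control group:  ", control)
```

Because chance alone decides who gets the treatment, any pre-existing differences between subjects get spread roughly evenly across both groups, which is what lets researchers attribute a difference in outcomes to the treatment itself.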
Results
This is the part of the paper you’ll want to pay the most attention to, as it lets you know what happened in the experiment. Look at what the scientists say they found, but also look at how they qualify and express their results via statistical expressions like confidence intervals and p-values, which tell you how easily the observed result could have arisen by chance alone. For example, a p-value of .05 indicates that if the factor being investigated actually had no effect and the researchers repeated the experiment 100 times, they’d expect to see a result as extreme as the one they observed only about 5 of those times. Generally speaking, p-values need to be less than or equal to .05 to be considered statistically significant, and the lower, the better.
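The logic of a p-value is easiest to see in a toy simulation. This Python sketch uses an invented coin-flip scenario: suppose a study observed 60 heads in 100 flips and wants to know whether the coin is biased. The p-value asks how often a perfectly fair coin would do at least that well by luck.

```python
import random

random.seed(0)

observed_heads = 60   # the (hypothetical) result the study reported
trials = 100_000      # simulated repeats with a genuinely fair coin

# Count how often 100 flips of a fair coin produce a result at least
# as extreme as the observed one.
extreme = sum(
    1 for _ in range(trials)
    if sum(random.random() < 0.5 for _ in range(100)) >= observed_heads
)
p_value = extreme / trials
print(f"p is roughly {p_value:.3f}")  # a fair coin does this only a few percent of the time
```

A small p-value doesn't prove the coin is biased; it says that chance alone is an unlikely, though not impossible, explanation for what was observed.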
The confidence interval tells us what range of values is consistent with the data at a given level of certainty. For instance, a study might report that a drug increased cycling performance by 10 meters per second (m/s) with a 95 percent confidence interval of +/-2 m/s. That doesn’t mean the researchers are positive the improvement is exactly 10 m/s. It means that if the study were repeated many times, about 95 percent of intervals calculated this way would contain the true improvement, and that values between 8 m/s and 12 m/s are all reasonably consistent with what they observed.
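What "95 percent confidence" actually promises can also be checked by simulation. This Python sketch (all numbers invented for illustration) builds a thousand confidence intervals from noisy measurements of a known true effect and counts how many of them capture it:

```python
import random
import statistics

random.seed(1)

TRUE_EFFECT = 10.0  # the "real" improvement our imaginary drug provides

def study_interval(n=30):
    """Run one simulated study of n subjects and return a rough 95%
    confidence interval for the average effect (mean +/- 1.96 standard
    errors, a common normal-approximation shortcut)."""
    data = [random.gauss(TRUE_EFFECT, 5.0) for _ in range(n)]
    mean = statistics.mean(data)
    se = statistics.stdev(data) / n ** 0.5
    return mean - 1.96 * se, mean + 1.96 * se

# Build 1,000 intervals; roughly 95 percent of them should contain
# the true effect -- that's the actual guarantee of the method.
hits = sum(lo <= TRUE_EFFECT <= hi
           for lo, hi in (study_interval() for _ in range(1000)))
print(f"{hits} of 1000 intervals contained the true value")
```

Any single published interval either contains the true value or it doesn't; the "95 percent" describes how reliably the procedure works over many studies, not a probability about one particular result.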
While neither p-values nor confidence intervals are perfect, they give us a measure of how sure we can be in a study's results and are relatively easy to understand. Sure things do not exist in science; there’s always a chance that an observation is a fluke. Good science quantifies that risk and minimizes it wherever possible.
Discussion and Conclusion
Most scientists will use the discussion and conclusion sections, often combined into one, to explain how they interpret the results. Pay close attention to this section, and be on the lookout for information on the limits of the study. Most scientists are terrified of sensationalism and are thus measured and cautious in presenting the ramifications of their results. Take a similar approach. If you’ve already read the abstract and results, you should have a pretty good handle on what happened. Use these sections to confirm that you’re interpreting everything properly.
Figures, Graphs, and Charts
Don’t skip these. Often whole experiments or sections of experiments can be summed up visually. The graphics will give you another chance to wrap your brain around what the researchers discovered—some people even look at them first, after reading the abstract. Confidence intervals can be represented graphically in the form of error bars.
These figures and charts will typically be sprinkled throughout the paper at relevant points, though some publications put them at the end. A helpful rule of thumb: if you can’t describe to a friend what is going on in the figures, there’s a good chance you don’t quite understand the study yet.
Conflicts of Interest
All reputable journals require scientists to disclose any potential conflicts of interest in their work. Check for this every time; you can usually find the disclosure statement at the end of a study, but the exact location can vary. If a researcher does admit to a potential conflict—like receiving money or equipment from a stakeholder—it does not immediately invalidate the study, but it is cause for a more critical examination of the work. Consider how the soda industry funneled lots of money into research on the health impacts of sugar: many of those studies have been found to demonstrate bias, even though they were conducted by “independent experts” at reputable universities.
There’s no section in a scientific paper that allows other researchers in the field to make comments, so this is an area where journalism can add real value. If you’re reading a news article about a new discovery, look for a comment from at least one outside expert. Many times, that person will simply confirm that they don't see any issues with the study. Other times, they'll point out a flaw or concern that would be invisible to the uninitiated, or help put the results in context. All of this is extremely valuable.