Good science vs. bad science: remember the basics

What has been the real driver of violent crime in America? Not unemployment, or guns, or wealth disparities, or lack of access to education. According to a fascinating new Mother Jones article, it’s exposure to lead.

The piece builds a case around this thesis: “Gasoline lead is responsible for a good share of the rise and fall of violent crime over the past half century.” This isn’t totally crazy, since we know that lead is a destructive neurotoxin. But any skeptic out there would immediately wonder about the evidence behind such an encompassing claim, mainly because it rests mostly on population-level observational studies, which look at links between lead exposure in the environment and crime rates. As Dr. David Juurlink, a physician and researcher at the Sunnybrook Health Sciences Centre in Toronto, told Science-ish, while lead could be the missing element in violent crime, “Many, many other factors also could, particularly in concert. Perhaps lead is one contributing factor, but it’s an abuse of the basic tenets of epidemiology to ascribe so much of the blame to lead.”

In other words, he added, a correlation between the two phenomena does not equal cause and effect.

Almost as soon as the article was published, debunkers were out in full force, picking apart the evidence behind the story. Even the author of the article, Kevin Drum, followed up with a blog post outlining the different types of evidence in the piece and the need for further research. But many a reader may have left the article thinking the lead theory is more than just a theory—that it’s the little-known cause of criminal activity.

And it’s not the last, or even most offensive, article that will contain an incredible thesis about new science. So how do you avoid falling prey to bad science in health stories? This is a question Science-ish gets asked a lot, most recently related to an article published in Slate about Dr. Oz’s dubious use of medical evidence. Understanding evidence-based medicine and statistics takes years of practice and study, but the good news is there are a few very basic red flags to keep in mind when you’re reading about the health sciences.

1. Is the sample representative?

The population that’s studied is not always generalizable to you or your patients, despite how it’s framed by the journalist writing about the study. For example, while animal studies are important, and have added a lot to our understanding of treatments and disease, they are only a starting point. As this fine paper (PDF) on writing health stories points out, “Findings in animals might suggest health effects in human beings, but they seldom prove them.” So any extrapolation on effects in people should be read with a critical gaze.

Dr. Victor Montori, evidence-based medicine guru at the Mayo Clinic in Minnesota, told Science-ish the next question to ask is: Were the right humans involved? “A study on healthy volunteers is not the same as a study on sick people. A study on people with a single disease is not the same as a study on people with multiple morbidities.” As well, the sample population might not be representative in other ways. The experiment may have been done in adults, so results may not apply to children, or it may have been done in men, and results don’t apply to women. “Participants in the study become important. Are they the same as the ones inferences are being applied to?” asked Dr. Montori. For these reasons, you always want to ask who was studied.

2. How would this study square in the real world?

The next thing you want to ask is whether the treatment or exposure the study looked at is consistent with the way it would exist in the real world. Dr. Montori used the example of studying whether drinking soda gives people gas. “If the study exposed people to 29 cans of Coke drunk within a minute, and they got gas, that’s different than if someone drinks them over a year.” In studies of diet, weight-loss results may be staggering over four weeks but that may not be long enough to know how the diet will play out in real life. In studies of pharmaceuticals, drugs are often compared with placebo to establish that they work, but that’s not necessarily a fair comparison when there are other similar treatments available, and clinicians and patients want to know whether the new drug is better than the best-available drug.

3. Who funded the research?

Thinking about the vested interests behind the science is a key way to distinguish good science from bad. You always want to, as Deep Throat told Bob Woodward, follow the money. Doing so will help you root out checkbook science, secret spokespeople, and industry-influenced policies. For example, many news outlets reported on a Lancet study that showed that Weight Watchers works. There was little made of the fact that the study was sponsored by the weight-loss company, and that its design was flawed. (The key problem: If you compare people on a rigorous weight-loss program to folks who get standard care from their doctors, it seems intuitive that the program group will win. That's not really a fair comparison.) Studies of pharmaceutical research have, time and again, demonstrated what this new systematic review on the subject sums up: "that industry-sponsored drug and device studies are more often favourable to the sponsor's products than non-industry-sponsored drug and device studies."

4. Was the report based on an experiment or observational science?

The reason skeptics immediately attacked the lead story in Mother Jones was that the hypothesis partly hinged on observational science, not controlled experiments. The key difference between the two is that controlled experiments involve exposing one group to an intervention (a pill or treatment) and another group to a placebo, and then seeing whether the two fare differently. If the two groups were randomly assigned to the treatment or the placebo, and they have different outcomes, you can be reasonably confident the outcomes were the result of exposure to that treatment. This is because outside variables the scientists don't control for (confounding factors) would be equally distributed between the two groups.

For this reason, researchers generally place more confidence in experiments—especially randomized ones—than in observational studies. In the latter, researchers can never control for all confounding factors because no intervention is introduced; they are simply looking for links between an exposure that's already occurring and a certain outcome. This isn't to say observational science is bad. It's not. It can even tell us more about how something plays out in the real world than a controlled experiment. It's just important to be aware of the limitations of different types of studies and the claims that can be made about them.
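To make the confounding point concrete, here is a minimal simulation sketch, written in Python with made-up numbers purely for illustration (nothing here comes from the Mother Jones data): a hidden confounder drives both who gets "exposed" and the outcome, so the observational comparison shows a difference even though the exposure has no effect at all, while coin-flip randomization makes that spurious difference disappear.

```python
# Illustration only: invented numbers, no real data.
# The "exposure" has zero true effect; a hidden confounder drives both
# exposure and outcome. Compare an observational contrast with a randomized one.
import math
import random

random.seed(1)
N = 100_000

def mean_difference(randomized):
    treated, control = [], []
    for _ in range(N):
        confounder = random.gauss(0, 1)               # unmeasured background factor
        if randomized:
            exposed = random.random() < 0.5            # coin-flip assignment
        else:
            # Observational world: the confounder makes exposure more likely
            exposed = random.random() < 1 / (1 + math.exp(-confounder))
        outcome = confounder + random.gauss(0, 1)      # outcome ignores exposure entirely
        (treated if exposed else control).append(outcome)
    return sum(treated) / len(treated) - sum(control) / len(control)

print("observational difference:", round(mean_difference(randomized=False), 2))  # clearly positive: spurious
print("randomized difference:   ", round(mean_difference(randomized=True), 2))   # near zero: the true (null) effect
```

The particular numbers don't matter; the point is that randomization balances the confounder across the two groups, something no amount of after-the-fact adjustment can fully guarantee for an observational study.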

5. How big is the study?

Larger studies involving more people at different sites are generally better than smaller studies, especially when it comes to changing clinical practice or altering your behaviour. That’s because big studies are usually more representative of the broader population and less influenced by extreme cases than small studies. As Steven Hoffman, assistant professor at McMaster University, put it: “Results from a randomized controlled trial of 10 conveniently chosen people, for example, are very likely to be influenced by extreme cases and outliers, whereas a randomly selected representative sample of 10,000 people is not.”
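To illustrate Hoffman's point with a toy simulation (again in Python, with invented numbers rather than any real trial), repeatedly drawing samples of 10 versus 10,000 from the same population shows how much more a small sample's estimate is thrown off by outliers:

```python
# Illustration only: how sample size affects the stability of an estimate.
import random
import statistics

random.seed(7)

def sample_mean(n):
    # Most values cluster around 100, but roughly 2% are extreme outliers.
    values = [500 if random.random() < 0.02 else random.gauss(100, 15)
              for _ in range(n)]
    return statistics.mean(values)

for n in (10, 10_000):
    estimates = [sample_mean(n) for _ in range(1_000)]
    print(f"n={n:>6}: estimates range from {min(estimates):.1f} to {max(estimates):.1f}")

# With n=10, the estimate swings wildly depending on whether an outlier
# happens to be drawn; with n=10,000 the estimates cluster tightly together.
```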

6. What about the other evidence?

Our function in media is to bring you the news. But reporting on new, single studies actually isn't the best way to tell you about what's happening in science. In fact, this kind of reporting is often unrepresentative and misleading. A single study will not change clinical practice or thinking about health. Many studies, in different contexts, using different methods, on different populations, will.

Dr. Montori gave this example: “Imagine one of my colleagues, sitting here, playing with a data set, finds an interesting association, and decides to write an abstract. The Mayo Clinic puts out a press release, and the abstract gets important coverage in the New York Times.” His imaginary colleague is interviewed about his contribution to science, but the reader has no idea about the other similar papers that may have had different outcomes that didn’t get any coverage. They don’t know about the other, perhaps less sexy, studies the researcher undertook with different results that were never published. “The reader is looking only at a select set of research that makes a headline and that’s not a good way of finding out what’s going on,” he added. For this reason, evidence-based medicine practitioners like Dr. Montori, and evidence-based blogs like Science-ish, advocate for seeking out syntheses of evidence on clinical questions. They minimize bias, they sum up all the research on a subject, and they will get you closer to the truth than any single study can.
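As a rough sketch of what such a synthesis does with the numbers, here is a toy fixed-effect meta-analysis in Python, using hypothetical effect estimates and standard errors that stand in for real studies: each study is weighted by its precision, so one small, eye-catching result can't dominate the pooled answer.

```python
# Toy fixed-effect meta-analysis with hypothetical study results.
# Each entry: (name, effect estimate, standard error). Invented numbers.
import math

studies = [
    ("Study A", 0.30, 0.20),   # small, imprecise, headline-grabbing result
    ("Study B", 0.10, 0.08),
    ("Study C", 0.05, 0.05),   # large, precise study
]

weights = [1 / se ** 2 for _, _, se in studies]          # inverse-variance weights
pooled = sum(w * est for (_, est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect: {pooled:.2f} (95% CI {pooled - 1.96 * pooled_se:.2f} "
      f"to {pooled + 1.96 * pooled_se:.2f})")

# The precise studies carry most of the weight, so the pooled estimate sits
# far closer to them than to the small study's dramatic result.
```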


This article, written by the Medical Post's Julia Belluz, first appeared online at CanadianHealthcareNetwork.ca.


2Ascribe Inc. is a medical transcription services agency located in Toronto, Ontario Canada, providing medical transcription services to physicians, clinics and other healthcare providers across Canada and the US.  As a service to our clients and the healthcare industry, 2Ascribe offers articles of interest to physicians and other healthcare professionals, medical transcriptionists and office staff, as well as of general interest.  Additional articles may be found at https://www.2ascribe.com.
