We have a tendency to believe that science is black and white.
Scientific fact has a short half-life of ~45 years.
One single flawed paper can convince the masses to abandon healthy lifestyle practices.
Journalists craft the findings into sensational headlines, “informing” the public of the “dangerous” health fad they mistakenly fell for. The new paradigm permeates society. The entire body of literature? Ignored.
The damage is done.
Successful real-world results, out the window. All due to a slight error in the research article’s abstract, leaving room for journalists to draw their own conclusions.
Avoiding this trap is easier than it may seem.
Whether you’re a casual health optimizer, a passionate biohacker, or a hardcore researcher, learning the art of interpreting and understanding research is one of the most critical skills you’ll develop.
To cut through misinformation, take charge of your well-being, and live optimally.
Rather than blindly trusting some health and wellness expert (myself included), I always recommend that YOU do some research before switching to a new diet, taking some miracle supplement, or trying a fringe modality that can drastically impact your health.
Often, this means reading some science.
I wouldn’t call myself a scientist by any means, but I read thousands of papers every year. I can tell a lot about new research in seconds. Deep understanding requires subject matter expertise. This article isn’t about that. Rather, I use these preliminary criteria to evaluate scientific research.
Straight to the Source
Independent media publishers often make headlines out of the “findings” of new research, summarized for the layperson. Then some expert rubber-stamps a quote solidifying that this one new study shatters all previous scientific paradigms.
I’ve lost track of how often I’ve seen this exact process unfold. Usually, the journalist came to their conclusions by simply skimming the abstract. More careful investigation (sometimes just reading the conclusion) highlights different findings.
So whenever you read a controversial headline, find and read the study it references. The article itself should contain a link. If not, you can search:
“[topic name] new study 2023“
Once you find the study, if you can only see an abstract, look for a link on the page that includes “doi”. That takes you to the original paper, which sometimes shows more study details than PubMed.
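A quick note on how those DOI links work: any DOI resolves through the public https://doi.org/ redirect service to the publisher’s page for the paper. Here’s a minimal Python sketch that normalizes a DOI into its resolver URL (the DOI shown is a made-up placeholder, not a real paper):

```python
def doi_url(doi: str) -> str:
    """Build the canonical resolver URL for a DOI string.

    Any DOI (e.g. "10.1000/xyz123") resolves through the
    https://doi.org/ service to the publisher's page for the paper.
    """
    doi = doi.strip()
    # Papers often cite DOIs with a prefix like "doi:" or a full URL;
    # normalize those common forms before building the link.
    for prefix in ("https://doi.org/", "http://doi.org/", "doi:"):
        if doi.lower().startswith(prefix):
            doi = doi[len(prefix):]
            break
    return "https://doi.org/" + doi

# Made-up placeholder DOI for illustration:
print(doi_url("doi:10.1000/example.123"))  # https://doi.org/10.1000/example.123
```

Paste the resulting link into your browser and you’ll land on the original paper, wherever it’s hosted.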
Research Study Interpretation
A few days after the release of groundbreaking research, communities around the internet begin dissecting the study. Rather than reinventing the wheel, take a peek through the relevant comments.
Internet posters will comment on things like:
- Glaring methodology mistakes
- Incomplete results
- Inability to replicate
- Statistical errors
- Biases and “data massaging”
You can just search for “[article name] study interpretation”.
Third-party study interpretation won’t tell you everything, but it can illuminate worthless research.
Scientific rigor varies tremendously from one journal to another.
The most prestigious journals prioritize quality papers. Others care more about collecting fees and furthering extremist ideologies. Unfortunately, many official-sounding journals fall into the latter camp.
For health research, the following are considered the best publications:
- PLOS One
- The Lancet
Yes, other journals do have credible research. And some bad stuff gets through the above too.
But other journals warrant extra skepticism.
A quick Google search for fraudulent scientific papers turns up all kinds of results. In one sting operation, for example, John Bohannon submitted a completely fake paper to 304 open-access journals, and more than half accepted it.
It’s no guarantee, but science published in prestigious journals undergoes more thorough evaluation.
Properly designed and executed trials cost a small fortune.
The scientific community has a dirty secret.
Overwhelming evidence suggests that sponsorship strongly influences the outcome of the research. Even when the researchers believe they acted without bias.
Intuitively, this checks out. If someone’s paycheck (or subsequent research contract) depends on an outcome, bias creeps in automatically. Dr. Dawson Church’s work explains these mechanics in depth. The question isn’t if, but how much.
Whether deliberate or unintentional, the savvy reader must always question who benefits and/or gets paid for the outcome.
For example, a company trying to bring a new product to market may overstate the benefits or downplay (even ignore) the side effects.
What you want to see under the Conflicts of Interest section is this:
“The authors declare no conflict of interest.”
This open disclosure is a recent policy change. Research done years ago may not state these conflicts so plainly; you might have to dig through the article to find them.
Pay close attention to anything else declared in that section. Of course, we cannot completely dismiss research with conflicts of interest.
But I certainly don’t make major changes to my own routines as a result of one conflicted study.
Science is a web. New research builds on the axioms and discoveries of previous papers.
Scroll to the bottom of any published study, and you’ll see dozens of references. The more commonly a work is referenced, the higher its scientific value (and usually credibility).
How often a paper gets cited is its citation count. (The related “impact factor” technically describes the journal, not the paper: the average number of citations its recent articles receive. Both serve as rough proxies for influence.)
In PubMed’s right-hand pane, clicking “Cited by” takes you to the other articles referencing the current one. This can be useful for finding related research and seeing the dates and journals publishing relevant articles.
If a study has very few citations, it’s either new (check the published date), not very credible, or an obscure topic.
Sadly, precisely zero studies will perfectly apply to you.
As Roger Williams explains in his book Biochemical Individuality, science reports the average result of the population studied. Yet often, not a single study participant is actually “average”.
We’re all physiologically unique.
Most research investigates one of two things:
- Drugs, molecules, and active ingredients
- Subpopulations with health conditions
Things effective under very specific conditions do not necessarily extrapolate to others.
For example, metformin alternatives work very differently in diabetics versus highly insulin-sensitive folks. Just as strict keto and carnivore might drastically improve blood sugar issues but aren’t tolerable for those with gallbladder or kidney issues.
Human trials are extremely expensive, lengthy, and often hard to get approved.
So researchers resort to mice, rats, and other animals. Or even sterile lab cultures (in vitro).
The more distant the studied population from your own biological circumstances, the less likely you’ll experience the same results.
In your quest, you’ll come across all kinds of studies.
Epidemiological studies of hundreds of thousands or even millions of people.
N=1 case studies.
And everything in between.
While every anecdote can be useful, larger studies are generally more conclusive and statistically powerful.
Though for certain subjects like nootropics, you won’t always find well-controlled studies with hundreds or thousands of subjects.
Nonetheless, be cautious of conclusions made from observing tiny subsets of the population.
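To see why tiny samples deserve that caution, here’s a small simulation, a sketch using an entirely made-up intervention with a hypothetical “true” response rate of 60%. Small trials scatter widely around the truth; large ones converge on it:

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

TRUE_RATE = 0.60  # hypothetical "true" response rate of a made-up intervention

def observed_rate(n: int) -> float:
    """Simulate one trial with n subjects; return the observed response rate."""
    return sum(random.random() < TRUE_RATE for _ in range(n)) / n

# Re-run many small (n=10) and large (n=1000) trials of the same intervention.
small = [observed_rate(10) for _ in range(200)]
large = [observed_rate(1000) for _ in range(200)]

print(f"n=10 trials:   observed rates ranged {min(small):.2f} to {max(small):.2f}")
print(f"n=1000 trials: observed rates ranged {min(large):.2f} to {max(large):.2f}")
print(f"spread (stdev) at n=10:   {statistics.stdev(small):.3f}")
print(f"spread (stdev) at n=1000: {statistics.stdev(large):.3f}")
```

Any single 10-person trial can easily report a 30% or 90% response rate for the exact same intervention, which is why one small study should rarely change your routine.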
Emotions & Morals
Good science attempts objectivity.
Yes, all humans are emotional beings who use logic to justify their conclusions.
But research should focus on experimental facts.
When I skim over scientific writing, I look for moralizing statements (which don’t belong).
Proper research doesn’t tell you what to think or lay down emotional arguments. Rather, it supplies facts and allows you to make up your mind.
Emotionally charged papers (“this is good/evil”, “everyone should/shouldn’t”, “if only they…”, etc.) are a definite warning sign about the integrity of the research as a whole.
Logical Abstract & Conclusion
The rest of this list takes a little more work but still doesn’t require hours of reading.
The abstract is an overview of the entire purpose of the scientific article. It usually outlines the rationale, the importance, the basic background of the experiment, and any findings.
Sometimes, you can also preview their conclusion.
You should be able to follow their entire process without major questions or reading any conflicting statements. Did they jump to any unwarranted conclusions?
I’ve seen instances where the wording of the abstract appears to intentionally overstate or understate the findings. If the freely available information seems illogical, the study may have serious flaws.
Authors & Institute
If you do a quick web search of the lead author and their institute, what comes up?
Researchers usually publish multiple papers and represent legitimate facilities.
It takes a little more time to do a simple background check, but I want to know more about the team behind the research before making large lifestyle decisions.
This one won’t always turn up gold, but can help you identify whether the team publishing the piece is well known and respected. Or the opposite.
When the stakes are high, make sure that the authors and affiliated institution are reputable.
For those willing to dig into the methodology itself, you can sometimes find massive red flags that destroy the credibility of the study.
Such as with Ancel Keys’ infamous work vilifying meat and dietary fat. His secret to brainwashing the world to fear an ancestral dietary staple?
He deleted the data that didn’t fit his desired outcome.
That meant handpicking the countries he studied, narrowing 22 down to a select six. Only through further research did the world learn the truth.
We can learn from this and apply some basic data questioning:
- Why was the particular outcome (dependent variable) chosen? Is it a good representation of what we’re investigating?
- Are the factors described likely causing the outcome? Or is it a spurious, random correlation? Or even worse, is the causation backward?
- How did they choose the data to study?
- How easily could they have massaged the data to nicely fit their hypothesis?
- Would sensible additional data ruin the findings?
Basically, approach the data and methodology with suspicion. Competent researchers should lay out exactly why they chose their data and any other relevant considerations.
Keep your eyes peeled for (and question) erratic data that seems chosen to promote a narrative.
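The kind of cherry-picking described above is easy to demonstrate. Below is a toy sketch with entirely made-up numbers: twelve hypothetical countries constructed so that the full dataset shows zero correlation between two variables, while a handpicked subset shows a perfect one.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Twelve hypothetical countries: (fat-intake score, heart-death score).
# The full set is constructed so the two variables are uncorrelated overall.
countries = [(1, 1), (2, 2), (3, 3), (4, 4), (5, 5), (6, 6),
             (1, 6), (2, 5), (3, 4), (4, 3), (5, 2), (6, 1)]

xs = [c[0] for c in countries]
ys = [c[1] for c in countries]
print(f"all 12 countries: r = {pearson(xs, ys):+.2f}")  # +0.00

# Keep only the six countries that happen to fit an upward trend...
picked = countries[:6]
px = [c[0] for c in picked]
py = [c[1] for c in picked]
print(f"handpicked 6:     r = {pearson(px, py):+.2f}")  # +1.00
```

Same underlying data, opposite conclusions, depending purely on which points the researcher kept. That’s why the “how did they choose the data?” question matters so much.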
By definition, no scientific study is the “end-all, be-all” on a subject.
Theories constantly get created, revised, and discarded.
Whenever I come across new research, I compare it to the corpus. I ask myself a few important questions:
- How does this fit into existing knowledge?
- Does the paper contain nuances that may explain a potentially different outcome?
- Is this evidence more compelling than previous works?
- What’s the potential harm if I test this for myself?
When the current article contradicts everything else, I look for the authors’ recognition and theories on why. If they don’t address or speculate on the conflict, I’m wary.
Often, new research doesn’t refute entire subjects, but rather small nuances.
Compare the current research to the existing literature before making any decisions.
The last two factors, however, are among the most important.
Understanding the quality of research can be tricky without relevant background.
There are several things I like to consider when examining research:
- Target variables — is the study focusing on particular biomarkers (like cholesterol), or clinical outcomes (heart attack)?
- Adequate controls — does the methodology control for the proper environment, diet, stress, and other factors that may heavily influence results?
- Real significance — what kind of impact will the results actually have?
You’ll really want to consider the implications of what the study actually found. For example, a new molecule may lower cholesterol (as shown by our hypothetical paper), but did it decrease cardiovascular disease? If not, why? Do I care more about having low cholesterol or low CVD?
Another popular sleight of hand is relative versus absolute risk/benefit.
We care about the absolute: how much a treatment improves outcomes overall. The relative stat, however, artificially inflates the numbers. Following the previous cholesterol example, a study compared a control group to a medication. The placebo group showed an all-cause mortality of 9%, versus 6.7% for the treatment group. Researchers used the relative risk from this data to conclude a “28% reduction in risk of coronary-related death” for those treated with cholesterol medication.
Statistics can make the outcome appear greater than reality, often muddying the risk-to-reward.
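Plugging the mortality figures above into a quick back-of-the-envelope script shows how far the two framings diverge. (Note: these all-cause numbers work out to roughly a 26% relative reduction; the quoted 28% applied to coronary-related deaths specifically. The “number needed to treat” line is a standard extra metric, not from the original example.)

```python
placebo_rate = 0.090  # all-cause mortality in the placebo group (9%)
treated_rate = 0.067  # all-cause mortality in the treatment group (6.7%)

# Absolute risk reduction: the real-world difference a patient experiences.
arr = placebo_rate - treated_rate

# Relative risk reduction: the same difference, expressed against the placebo rate.
rrr = arr / placebo_rate

# Number needed to treat: how many patients must be treated for one to benefit.
nnt = 1 / arr

print(f"Absolute risk reduction: {arr:.1%}")  # 2.3%
print(f"Relative risk reduction: {rrr:.1%}")  # 25.6%
print(f"Number needed to treat:  {nnt:.0f}")  # 43
```

A headline built on the relative number sounds roughly ten times more impressive than the 2.3-percentage-point absolute improvement a patient actually experiences.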
Usually, critics will quickly find flaws and post their comments online. But if you don’t want to wait, you can send the paper over to your nerdy biohacker friends for deeper examination.
Science has a serious plague.
Known as the replication crisis:
“The largest gap was in papers published in Nature/Science: non-replicable papers were cited 300 times more than replicable ones… In psychology, only 39 percent of the 100 experiments successfully replicated” — A new replication crisis: Research that is less likely to be true is cited more
Even the most respected journals publish highly problematic scientific articles.
Unreproducible experiments simply aren’t valid and should be discarded.
When the same exact procedure results in drastically different results, this indicates major issues, often with data quality (or “massaging”).
Without reproducibility, we cannot attribute the outcomes to the studied variables. If I know research cannot be replicated, I completely disregard it.
Simple Steps to Read & Understand Science Without Wasting Your Day
Every day, a deluge of new information gets distributed across the internet.
Scientific understanding evolves rapidly.
Proponents of virtually anything can point to peer-reviewed studies published in credible sounding journals.
Facts that are laughably obvious today were ridiculed in the recent past.
Blindly trusting new science leads to disaster. Trust and science don’t belong together.
If a theory is not falsifiable, it’s not real science.
The best we can currently do is show that outcomes were more than chance. Repeatably, by anyone who follows the specified methodology.
Although it’s imperfect, I spend a large chunk of time reviewing new research articles. I don’t have expertise with most of the subjects, so I use easy evaluation heuristics and rules.
Some of these include:
- Searching for existing critiques
- Checking the credibility of the journal and study authors
- Checking whether they’ve published other relevant works
- Counting how often the work has been cited
- Finding who funded the research and any conflict of interest
- Skimming the abstract and conclusion for logical, unemotional flow
- Comparing new work to the previous body of evidence
- Scrutinizing the subjects, sample size, type of trial, and data collected
If it was well thought out and properly conducted, it should be reproducible. Follow-up work will confirm its validity.
You can learn a great deal without spending your entire afternoon reading a single paper. Of course, that will give you a deeper understanding, but much of what I come across isn’t sound enough to deserve hours of thorough analysis.
Remember, science is never black or white, but rather a gradient.
“Science is the belief in the ignorance of experts” — Richard Feynman
Even today, countless unexplainable phenomena surround us.
Take the Big Bang: we base entire scientific models around it but know very little about its origin.
Every study approximates some facet of general truth. But that doesn’t mean that it’ll apply to you. Which is why you must learn to read the subtle cues of your body and brain. Then no doctors, experts, or health practitioners can possibly know your biology better than you.
I hope this is helpful when you’re evaluating scientific research articles. And I’m sure there are tons of other important criteria that I’m missing. What factors do you look for?