Why Most Published Clinical Trials Are Not False (Ioannidis & Trikalinos, 2007)

This blog post was written in collaboration with ChatGPT5


John Ioannidis became world-famous for his 2005 essay, Why Most Published Research Findings Are False. That paper used a set of hypothetical assumptions—low power, low prior probabilities, and selective reporting—to argue that the majority of published results must be false positives. The title was rhetorically brilliant, but the argument was theoretical, not empirical.

Only two years later, Ioannidis co-authored a real data analysis that quietly contradicted his earlier claim.


1. From theory to data

In 2007, Ioannidis and Thomas Trikalinos published "An Exploratory Test for an Excess of Significant Findings" in the journal Clinical Trials. They examined large meta-analyses of clinical trials, comparing the number of reported significant results with the number expected given each study's estimated statistical power. Their results revealed low power, around 30 % on average, but no excess of significant findings.
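
To make the logic concrete, here is a minimal sketch of this kind of excess-significance test. The study powers, counts, and the binomial comparison below are illustrative assumptions for a hypothetical meta-analysis, not Ioannidis and Trikalinos's actual data or exact procedure.

```python
from scipy import stats

def excess_significance_test(powers, observed_significant):
    """Compare the observed number of significant studies with the
    number expected from their estimated power (illustrative sketch).
    Using the mean power in an exact binomial test is a simplification;
    the true count follows a Poisson-binomial distribution."""
    n = len(powers)
    expected = sum(powers)            # E = sum of per-study powers
    result = stats.binomtest(observed_significant, n, expected / n,
                             alternative="greater")
    return expected, result.pvalue

# Hypothetical meta-analysis: 10 trials averaging ~30 % power,
# 3 of which reported a significant result.
powers = [0.25, 0.30, 0.35, 0.28, 0.32, 0.27, 0.31, 0.29, 0.33, 0.30]
expected, p = excess_significance_test(powers, 3)
print(f"expected: {expected:.1f}, observed: 3, p = {p:.3f}")
```

A large observed count relative to the expected count (a small p-value) would signal selective reporting; the 2007 paper found no such excess overall.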


2. Low power ≠ high false-positive risk

Low power increases sampling error within a single study, but it does not automatically mean that half of all published results are false. As Soric (1989) showed, even with 30 % power and α = .05, the maximum false discovery rate cannot exceed roughly 13 %, far lower than the rate Ioannidis claimed in his 2005 article.
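
As a worked illustration, the form of Soric's bound used in the z-curve literature expresses the maximum false discovery rate as a function of the discovery rate (DR) and α. Treating the 30 % average power as the discovery rate:

```latex
\[
\mathrm{FDR}_{\max}
  = \left(\frac{1}{\mathrm{DR}} - 1\right)\frac{\alpha}{1-\alpha}
  = \left(\frac{1}{0.30} - 1\right)\frac{0.05}{0.95}
  \approx 2.33 \times 0.053
  \approx 0.12
\]
```

That is just over 12 %, in line with the ~13 % ceiling cited above; the exact value depends on the assumed discovery rate.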


3. Small publication bias

The Clinical Trials paper found that observed success rates were only slightly higher than the rates expected from power estimates. This implies modest publication bias and relatively little inflation of effect-size estimates. Unlike psychology or the social sciences, where success rates approach 90 %, clinical trials appeared statistically honest.


4. Replicable evidence

Most of the meta-analyses Ioannidis & Trikalinos reviewed showed clear, replicated effects that rule out the null hypothesis. When multiple independent low-power studies all point in the same direction, the probability that all of them are false positives becomes vanishingly small, as the arithmetic below illustrates.
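
If the null hypothesis were true in every study, each independent study would have at most an α/2 = .025 chance of reaching significance in the predicted direction, so for k such studies (k = 5 shown as an example):

```latex
\[
P(\text{all } k \text{ significant, same direction} \mid H_0)
  \le \left(\frac{\alpha}{2}\right)^{k}
  = 0.025^{5}
  \approx 9.8 \times 10^{-9}
\]
```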


5. Later confirmation: Jager & Leek (2014)

Jager and Leek analyzed thousands of p-values from top medical journals and estimated a false-positive risk of about 14 % for individual clinical trials—remarkably consistent with the 2007 findings and with Soric’s theoretical upper bound. Schimmack & Bartos (2023) replicated this estimate using a bias-corrected z-curve approach.
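
The core idea behind Jager and Leek's estimator can be sketched in a few lines: significant p-values are modeled as a mixture of a uniform distribution on (0, α) (false positives) and a distribution concentrated near zero (true positives), and the estimated mixing weight is the false discovery rate. The Beta(a, 1) component, the simulated data, and the optimizer below are illustrative assumptions, not their exact implementation.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

ALPHA = 0.05

def neg_log_lik(params, p):
    """Two-component mixture for significant p-values:
    uniform on (0, ALPHA) for false positives, a Beta(a, 1)
    truncated to (0, ALPHA) for true positives (illustrative)."""
    pi0, a = params                     # pi0 = false-positive share
    trunc = stats.beta.cdf(ALPHA, a, 1)
    dens = pi0 / ALPHA + (1 - pi0) * stats.beta.pdf(p, a, 1) / trunc
    return -np.sum(np.log(dens))

# Simulate 1,000 "published" significant p-values: 15 % false positives.
rng = np.random.default_rng(1)
false_p = rng.uniform(0, ALPHA, size=150)
u = rng.uniform(size=850)   # inverse-CDF draws from the truncated beta
true_p = stats.beta.ppf(u * stats.beta.cdf(ALPHA, 0.3, 1), 0.3, 1)
p_values = np.concatenate([false_p, true_p])

fit = minimize(neg_log_lik, x0=[0.5, 0.5], args=(p_values,),
               bounds=[(1e-3, 1 - 1e-3), (1e-2, 1.0)])
print(f"estimated false discovery rate: {fit.x[0]:.2f}")
```

Under these simulated inputs, the recovered mixing weight should land near the true 15 % share, the same ballpark as the published 14 % figure.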


6. Ioannidis’s response

Despite this convergence, Ioannidis rejected Jager & Leek’s conclusions in a 2014 Biostatistics commentary, arguing their model was “overly optimistic.”
He did not mention that his own 2007 results implied the same low false-positive risk.
Instead, he continued to promote the notion that more than half of published findings are false—an idea that captured headlines but not empirical reality.


7. The irony

Ioannidis became a global authority on “research unreliability” and a professor at Stanford largely because of a provocative title, not because of evidence that his 2005 hypothesis was true. Ironically, his own empirical work two years later provided the best evidence against his famous claim.


Take-Home Message

Don’t trust fame. Don’t trust titles. Trust facts.

Even celebrated professors at elite universities can be wrong — sometimes dramatically so.
In today’s capitalist science, researchers are often rewarded for selling results, not for verifying them. Their papers can function as marketing — even when they’re about “bias” and “meta-science.”

So don’t take grand claims at face value, whether they come from experimental psychologists or meta-scientists who claim to expose everyone else’s errors. Always fact-check — ideally with multiple sources, and yes, even with multiple AIs. If independent analyses converge, you can start to trust the pattern.

Trust is good; fact-checking is better.


2 thoughts on "Why Most Published Clinical Trials Are Not False (Ioannidis & Trikalinos, 2007)"

  1. Fact checked by Co-Pilot

    I’d give it a solid thumbs up 👍.

    Here’s why:

    ✅ Strengths
    Factually grounded: The blog accurately summarizes key findings from Ioannidis & Trikalinos (2007), Soric (1989), Jager & Leek (2014), and Schimmack & Bartos (2023).

    Logically coherent: It builds a clear argument that low power does not imply high false-positive rates, and that empirical data contradicts the sweeping pessimism of Ioannidis’s 2005 essay.

    Well-written and engaging: The tone is sharp but not reckless. It challenges authority with evidence, not rhetoric alone.

    Nuanced message: It doesn’t claim Ioannidis was wrong in all domains—it points out that his own data in clinical trials suggests more reliability than his public narrative implies.

    ⚠️ Minor caveats
    It’s interpretive and critical, especially in its framing of Ioannidis’s motives and rhetorical strategy.

    It could be clearer that the 2007 paper found some excess significance in some domains, not a blanket absence.

    But overall, it’s a thoughtful, well-supported critique that encourages readers to look beyond headlines and reputations. Definitely worth reading—and fact-checking, as you’ve done.

  2. Fact checked by Perplexity

    The blog post deserves a thumbs up. It provides a well-reasoned, evidence-based critique of Ioannidis’s famous 2005 claim that most published research findings are false, incorporating empirical data from his 2007 co-authored paper and later confirmations by other researchers. The post correctly distinguishes the theoretical nature of the 2005 essay from the empirical findings in 2007 showing no excess false positives in clinical trials despite low power. It accurately summarizes the statistical arguments about false-positive rates and publication bias in clinical trials versus other sciences, and it fairly highlights Ioannidis’s later rejection of data contradicting his theoretical claim.

    Moreover, the post emphasizes the importance of corroborating claims with multiple independent analyses and encourages fact-checking over reliance on fame or provocative titles, which aligns with best scholarly practice. It provides a balanced, critical perspective without oversimplification or unfair dismissal, contributing constructively to the discussion on scientific reliability and metascience.

    In summary, the blog is clear, factually accurate based on the literature, and promotes scientific rigor and transparency, thus meriting a positive endorsement.[1][5][7][11]

    [1] https://en.wikipedia.org/wiki/Why_Most_Published_Research_Findings_Are_False
    [2] https://en.wikipedia.org/wiki/John_Ioannidis
    [3] https://journalofethics.ama-assn.org/article/when-research-evidence-misleading/2013-01
    [4] https://jamanetwork.com/journals/jama/fullarticle/201218
    [5] https://replicationindex.com/2019/01/15/ioannidis-2005-was-wrong-most-published-research-findings-are-not-false/
    [6] https://intensiveblog.com/ioannidis-2005-published-research-findings-false/
    [7] https://journals.plos.org/plosmedicine/article?id=10.1371%2Fjournal.pmed.0020124
    [8] https://pubmed.ncbi.nlm.nih.gov/16014596/
    [9] https://pmc.ncbi.nlm.nih.gov/articles/PMC1182327/
    [10] https://journals.plos.org/plosmedicine/article?id=10.1371%2Fjournal.pmed.1002049
    [11] https://replicationindex.com/2025/10/18/why-most-published-clinical-trials-are-not-false-ioannidis-trikalinos-2007/
