Open Science Requires Open Results

The open science movement has made remarkable progress. Data sharing policies are now standard at top journals. Pre-registration is common. Reproducibility checks are increasingly required. In a recent survey, 97% of social scientists supported posting data and code online (Swanson et al., 2020). Open data is the one reform that enjoys near-universal support.

But open data without open results is performative transparency. If researchers share their data but journals refuse to publish reanalyses that challenge the original conclusions, the data sit in a repository like evidence in a courthouse that never opens.

I recently experienced this firsthand. Dai et al. (2023) published a meta-analysis of implicit priming studies in Psychological Bulletin. The article explicitly states that it “carefully followed the Transparency and Openness Promotion Guidelines.” The data are available. I reanalyzed them using z-curve and a step-function selection model.

The reanalysis reveals a more complex and more concerning picture than the original meta-analysis suggested. A step-function selection model still estimates a seemingly robust average effect size for the underlying, unselected population of studies: implicit priming likely produces a real but modest effect. However, a z-curve analysis of the published test statistics tells a different story about the evidence base: the expected replication rate (ERR) is about 49%, the expected discovery rate (EDR) is about 14%, and the false discovery rate could be as high as 69%. Most published studies were too underpowered to provide evidence for the effect, and a substantial proportion of significant results may be false positives. For subliminal priming specifically, the estimated false discovery rate was 100%; none of those significant results can be trusted. The literature points to a real phenomenon but fails to provide credible evidence for it. This changes the interpretation of an entire research literature, from one with heterogeneous but broadly supportive evidence to one in which a plausible effect exists but the published record cannot be trusted to distinguish signal from noise.
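For readers unfamiliar with z-curve's metrics: the EDR estimates the share of all conducted tests, published or not, that would have produced a significant result, and z-curve typically converts it into a worst-case false discovery rate using Soric's bound. The post above does not show that arithmetic, so the sketch below is only an illustration under that assumption; the 69% figure is consistent with plugging in a lower-bound EDR of roughly 7% rather than the 14% point estimate.

```python
# Worst-case false discovery rate implied by an estimated discovery rate (Soric's bound),
# the conversion z-curve typically reports alongside the EDR. Illustrative sketch only;
# the 14% EDR is taken from the reanalysis, the ~7% lower bound is an assumption.
def max_fdr(edr, alpha=0.05):
    """Maximum share of significant results that could be false positives."""
    return (1 / edr - 1) * (alpha / (1 - alpha))

print(f"{max_fdr(0.14):.0%}")  # ~32% from the EDR point estimate of 14%
print(f"{max_fdr(0.07):.0%}")  # ~70% from an assumed lower-bound EDR of about 7%
```

The point is mechanical: when the discovery rate is this low, even a modest shortfall below the point estimate pushes the worst-case false discovery rate toward the two-thirds range reported in the reanalysis.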

I submitted this reanalysis to Psychological Bulletin — the same journal that published the original meta-analysis and that endorsed the transparency framework that made the reanalysis possible. It was rejected on procedural grounds, without evaluation of its scientific merit.

The journal’s own policy explains why. Psychological Bulletin “traditionally publishes commentaries that are requested by the action editor in light of issues that arise during the review.” Unsolicited commentaries are accepted only during the brief window before an article appears in print. After that, requests are “rarely granted.” In other words, corrections are reserved for insiders who saw the paper during closed peer review. Once the article is published — and the open data become available to the wider scientific community — the journal closes the door to the very corrections that open data make possible.

This is not a quirk of one journal or one editor. It is a structural feature of the publication system. The journal promoted transparency. The data are available. The reanalysis was conducted. The system worked exactly as designed — until the last step, where the journal declined to publish what transparency revealed.

The evidence that the publication system resists self-correction is growing. Failed replications do not reduce citations of the original studies (Serra-Garcia & Gneezy, 2021; von Hippel, 2022). Replication and reproduction studies are infrequently cited and do not affect the citation trends of original papers (Ankel-Peters et al., 2023; Coupé & Reed, 2022). Now we can add a third category: reanalyses of open data that reach different conclusions cannot find publication outlets — not because they are wrong, but because the policies that govern commentary and correction were designed for a closed system and have not adapted to the open one.

Meta-analyses are precisely the category of paper where reanalysis matters most. The same data analyzed with different statistical methods or different assumptions about selection can support fundamentally different conclusions. Yet Psychological Bulletin does not even award open science badges for meta-analyses, as if transparency were less relevant when the product is a synthesis.
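To make that concrete, here is a toy simulation, not the Dai et al. data; the effect size, sample size, and number of studies are invented for illustration. It shows how selection for significance inflates a naive meta-analytic average, which is why the assumptions a method makes about selection drive its conclusions.

```python
# Toy illustration of why selection assumptions matter: when only significant results
# are published, a naive average of published effects greatly overstates the true effect.
# All numbers are made up; this is not a reanalysis of any real dataset.
import numpy as np

rng = np.random.default_rng(1)
true_d, n_per_group, n_studies = 0.15, 20, 5000   # small true effect, small samples

se = np.sqrt(2 / n_per_group)                     # rough standard error of Cohen's d
d_obs = rng.normal(true_d, se, size=n_studies)    # observed effects across studies
published = d_obs[d_obs / se > 1.96]              # keep only "significant" results

print(f"true effect:                {true_d:.2f}")
print(f"naive average of published: {published.mean():.2f}")  # roughly 0.7-0.8
```

A selection model tries to undo exactly this distortion, while z-curve asks a different question about the same distribution of test statistics; that is how the two can agree that an effect probably exists while disagreeing about how much the published record can be trusted.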

Open science reformers have focused on making inputs transparent — data, code, pre-registration, materials. This is necessary but not sufficient. The bottleneck was never access to data. It is access to readers. A correction on a blog reaches a fraction of the audience that the original paper reached through a top journal.

If open science is to fulfill its promise, “open” must apply not just to data but to results: open to revision when reanalysis warrants it, and visible to the same audience that read the original. Journals that mandate data sharing should commit to considering reanalyses of the data they helped make available. A journal that publishes a meta-analysis under the Transparency and Openness Promotion Guidelines but refuses to publish a reanalysis of the same data is not practicing open science. It is performing it.

