Category Archives: False Discovery Rate

Personalized P-Values for Social/Personality Psychologists

Last update 2/24/2021
(The latest update included articles published in 2020, which produced some changes in the rankings.)

Introduction

Since Fisher invented null-hypothesis significance testing, researchers have used p < .05 as a statistical criterion to interpret results as discoveries worth discussing (i.e., as evidence that the null-hypothesis is false). Once published, these results are often treated as real findings, even though alpha alone does not control the risk of false discoveries.

Statisticians have warned against exclusive reliance on p < .05, but nearly 100 years after Fisher popularized this approach, it is still the most common way to interpret data. One reason is that many attempts to improve on this practice have failed, and a single statistical result is difficult to interpret in isolation. However, when individual results are interpreted in the context of other results, they become more informative. Based on the distribution of p-values, it is possible to estimate the maximum false discovery rate (Bartoš & Schimmack, 2020; Jager & Leek, 2014). This approach can be applied to the p-values published by individual authors to adjust the significance criterion (alpha) so that the risk of false discoveries stays at a reasonable level, FDR < .05.

Researchers who mainly test true hypotheses with high power have a high discovery rate (many p-values below .05) and a low false discovery rate (FDR < .05). Figure 1 shows an example of a researcher who followed this strategy (for a detailed description of z-curve plots, see Schimmack, 2021).

We see that out of the 317 test-statistics retrieved from his articles, 246 were significant with alpha = .05. This is an observed discovery rate of 78%. We also see that this discovery rate closely matches the estimated discovery rate based on the distribution of the significant p-values, p < .05. The EDR is 79%. With an EDR of 79%, the maximum false discovery rate is only 1%. However, the 95%CI is wide and the lower bound of the CI for the EDR, 27%, allows for 14% false discoveries.
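The maximum false discovery rate reported in these plots follows Soric's (1989) bound, derived in the second post below: FDRmax = (1/EDR - 1) * alpha / (1 - alpha). A minimal R sketch with the numbers just reported reproduces the 1% point estimate and the 14% worst case:

```r
# Soric's maximum false discovery rate, given a discovery rate and alpha
soric_fdr <- function(dr, alpha = .05) {
  (1 / dr - 1) * alpha / (1 - alpha)
}

soric_fdr(.79)  # EDR point estimate of 79%  -> ~0.01 (1%)
soric_fdr(.27)  # lower bound of the 95%CI   -> ~0.14 (14%)
```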

When the ODR matches the EDR, there is no evidence of publication bias. In this case, we can improve the estimates by fitting all p-values, including the non-significant ones. With a tighter CI for the EDR, we see that the 95%CI for the maximum FDR ranges from 1% to 3%. Thus, we can be confident that no more than 5% of the significant results with alpha = .05 are false discoveries. Readers can therefore continue to use alpha = .05 to look for interesting discoveries in Matsumoto’s articles.

Figure 3 shows the results for a different type of researcher who took a risk and studied weak effect sizes with small samples. This produces many non-significant results that are often not published. The selection for significance inflates the observed discovery rate, but the z-curve plot and the comparison with the EDR shows the influence of publication bias. Here the ODR is similar to Figure 1, but the EDR is only 11%. An EDR of 11% translates into a large maximum false discovery rate of 41%. In addition, the 95%CI of the EDR includes 5%, which means the risk of false positives could be as high as 100%. In this case, using alpha = .05 to interpret results as discoveries is very risky. Clearly, p < .05 means something very different when reading an article by David Matsumoto or Shelly Chaiken.

Rather than dismissing all of Chaiken’s results, we can try to lower alpha to reduce the false discovery rate. If we set alpha = .01, the FDR is 15%. If we set alpha = .005, the FDR is 8%. To get the FDR below 5%, we need to set alpha to .001.
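The same logic can be scripted. The following R sketch uses purely hypothetical EDR estimates at each alpha level (not the actual z-curve output for any researcher) to illustrate how alpha is lowered until the maximum FDR drops below 5%:

```r
# Sketch of the alpha adjustment used for the rankings. The EDR values per
# alpha level are hypothetical placeholders; in practice they come from
# re-running z-curve with the stricter significance criterion.
soric_fdr <- function(dr, alpha) (1 / dr - 1) * alpha / (1 - alpha)

alphas <- c(.05, .01, .005, .001)
edrs   <- c(.11, .065, .060, .050)   # hypothetical EDRs at each alpha level
fdrs   <- mapply(soric_fdr, edrs, alphas)
round(fdrs, 2)                       # 0.43, 0.15, 0.08, 0.02 for these inputs
alphas[min(which(fdrs < .05))]       # largest alpha with max FDR below 5%
```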

A uniform criterion of FDR < 5% is applied to all researchers in the rankings below. For some this means no adjustment to the traditional criterion. For others, alpha is lowered to .01, and for a few even lower than that.

The rankings below are based on automatically extracted test-statistics from 40 journals (List of journals). The results should be interpreted with caution and treated as preliminary. They depend on the specific set of journals that were searched, the way results are reported, and many other factors. The data are available (data.drop), and researchers can exclude or add articles and run their own analyses using the z-curve package in R (https://replicationindex.com/2020/01/10/z-curve-2-0/).

I am also happy to receive feedback about coding errors. In addition, I recommend hand-coding articles so that alpha is adjusted on the basis of focal hypothesis tests only. This typically lowers the EDR and increases the FDR. For example, the automated method produced an EDR of 31% for Bargh, whereas hand-coding of focal tests produced an EDR of 12% (Bargh-Audit).

And here are the rankings. The results are fully automated, and I was not able to cover up the fact that I rank only #139 out of 300. In another post, I will explain how researchers can move up in the rankings. Of course, one way to move up is to increase statistical power in future studies. The rankings will be updated again when the 2021 data are available.

Despite their preliminary nature, I am confident that the results provide valuable information. Until now, all p-values below .05 have been treated as if they are equally informative. The rankings here show that this is not the case. While p = .02 can be informative for one researcher, p = .002 may still entail a high false discovery risk for another researcher.

Name | Tests | ODR | EDR | ERR | FDR | Alpha
Robert A. Emmons588885881.05
David Matsumoto3788379851.05
Linda J. Skitka5326875822.05
Jonathan B. Freeman2745975812.05
Virgil Zeigler-Hill5157274812.05
David P. Schmitt2077871772.05
Emily A. Impett5497770762.05
Michael E. McCullough3346969782.05
Kipling D. Williams8437569772.05
John M. Zelenski1567169762.05
Kurt Gray4877969812.05
Hilary B. Bergsieker4396768742.05
Cameron Anderson6527167743.05
Jamil Zaki4307866763.05
Phoebe C. Ellsworth6057465723.05
Jim Sidanius4876965723.05
Benjamin R. Karney3925665733.05
A. Janet Tomiyama767865763.05
Carol D. Ryff2808464763.05
Juliane Degner4356364713.05
Thomas N Bradbury3986163693.05
Steven J. Heine5977863773.05
David M. Amodio5846663703.05
Elaine Fox4727962783.05
Klaus Fiedler14217860723.05
Richard W. Robins2707660704.05
William B. Swann Jr.10707859804.05
Margaret S. Clark5057559774.05
Edward P. Lemay2898759814.05
Patricia G. Devine6067158674.05
B. Keith Payne8797158764.05
Ximena B. Arriaga2846658694.05
Rainer Reisenzein2016557694.05
Jean M. Twenge3817256594.05
Barbara A. Mellers2878056784.05
Joris Lammers7056956694.05
Nicholas Epley15047455724.05
Richard M. Ryan9987852695.05
Edward L. Deci2847952635.05
Ethan Kross6146652675.05
Lee Jussim2268052715.05
Samuel D. Gosling1085851625.05
Jens B. Asendorpf2537451695.05
Roger Giner-Sorolla6638151805.05
Tessa V. West6917151595.05
James J. Gross11047250775.05
Paul Rozin4497850845.05
Shinobu Kitayama9837650715.05
Janice R. Kelly3667550705.05
Sheena S. Iyengar2076350805.05
Paul K. Piff1667750635.05
Mina Cikara3927149805.05
Bertram Gawronski18037248766.01
Edward R. Hirt10428148656.01
Penelope Lockwood4587148706.01
John T. Cacioppo4387647696.01
Matthew D. Lieberman3987247806.01
Daniel M. Wegner6027647656.01
Agneta H. Fischer9527547696.01
Leaf van Boven7117247676.01
Stephanie A. Fryberg2486247666.01
Alice H. Eagly3307546716.01
Rainer Banse4027846726.01
Jeanne L. Tsai12417346676.01
Jennifer S. Lerner1818046616.01
Dacher Keltner12337245646.01
Constantine Sedikides25667145706.01
Andrea L. Meltzer5495245726.01
R. Chris Fraley6427045727.01
Brian A. Nosek8166844817.01
Ursula Hess7747844717.01
S. Alexander Haslam11987243647.01
Charles M. Judd10547643687.01
Mark Schaller5657343617.01
Jason P. Mitchell6007343737.01
Jessica L. Tracy6327443717.01
Mario Mikulincer9018942647.01
Lisa Feldman Barrett6446942707.01
Susan T. Fiske9117842747.01
Bernadette Park9737742647.01
Paul A. M. Van Lange10927042637.01
Wendi L. Gardner7986742637.01
Philip E. Tetlock5497941737.01
Jordan B. Peterson2666041797.01
Michael Inzlicht5666441618.01
Stacey Sinclair3277041578.01
Richard E. Petty27716940648.01
Norbert Schwarz13377240638.01
Wendy Wood4627540628.01
Tiffany A. Ito3498040648.01
Elizabeth Page-Gould4115740668.01
Carol S. Dweck10287039638.01
Marcel Zeelenberg8687639798.01
Christian S. Crandall3627539598.01
Tobias Greitemeyer17377239678.01
Jason E. Plaks5827039678.01
Jerry Suls4137138688.01
Eric D. Knowles3846838648.01
John F. Dovidio20196938629.01
C. Nathan DeWall13367338639.01
Harry T. Reis9986938749.01
Joshua Correll5496138629.01
Abigail A. Scholer5565838629.01
Mahzarin R. Banaji8807337789.01
Antony S. R. Manstead16567237629.01
Kevin N. Ochsner4067937709.01
Fritz Strack6077537569.01
Ayelet Fishbach14167837599.01
Lorne Campbell4336737619.01
Geoff MacDonald4066737679.01
Mark J. Brandt2777037709.01
Craig A. Anderson4677636559.01
Barbara L. Fredrickson2877236619.01
Nyla R. Branscombe12767036659.01
Niall Bolger3766736589.01
D. S. Moskowitz34187436639.01
Duane T. Wegener9807736609.01
Joanne V. Wood10937436609.01
Yaacov Schul4116136649.01
Jeff T. Larsen18174366710.01
Nalini Ambady125662355610.01
John T. Jost79470356110.01
Daphna Oyserman44655355410.01
Samuel L. Gaertner32175356110.01
Michael Harris Bond37873358410.01
Michael D. Robinson138878356610.01
Igor Grossmann20364356610.01
Azim F. Sharif18374356810.01
Eva Walther49382356610.01
C. Miguel Brendl12176356810.01
Emily Balcetis59969356810.01
Diana I. Tamir15662356210.01
Thomas Gilovich119380346910.01
Paula M. Niedenthal52269346110.01
Ozlem Ayduk54962345910.01
Wiebke Bleidorn9963347410.01
Alison Ledgerwood21475345410.01
Kerry Kawakami48768335610.01
Christopher R. Agnew32575337610.01
Jennifer A. Richeson83167335211.01
Malte Friese50161335711.01
Danu Anthony Stinson49477335411.01
Mark Snyder56272326311.01
Robert B. Cialdini37972325611.01
Russell H. Fazio109469326111.01
Eli J. Finkel139262325711.01
Ulrich Schimmack31875326311.01
Margo J. Monteith77376327711.01
E. Ashby Plant83177315111.01
Christopher K. Hsee68975316311.01
Yuen J. Huo13274318011.01
Roy F. Baumeister244269315212.01
John A. Bargh65172315512.01
Tom Pyszczynski94869315412.01
Delroy L. Paulhus12177318212.01
Kathleen D. Vohs94468315112.01
Jamie Arndt131869315012.01
Arthur Aron30765305612.01
Anthony G. Greenwald35772308312.01
Jennifer Crocker51568306712.01
Dale T. Miller52171306412.01
Aaron C. Kay132070305112.01
Lauren J. Human44759307012.01
Steven W. Gangestad19863304113.005
Nicholas O. Rule129468307513.01
Jeff Greenberg135877295413.01
Hazel Rose Markus67476296813.01
Russell Spears228673295513.01
Gordon B. Moskowitz37472295713.01
Richard E. Nisbett31973296913.01
Eliot R. Smith44579297313.01
Boris Egloff27481295813.01
Caryl E. Rusbult21860295413.01
Dirk Wentura83065296413.01
Nir Halevy26268297213.01
Adam D. Galinsky215470284913.01
Jeffry A. Simpson69774285513.01
Yoav Bar-Anan52575287613.01
Roland Neumann25877286713.01
Richard J. Davidson38064285114.01
Eddie Harmon-Jones73873287014.01
Brent W. Roberts56272287714.01
Naomi I. Eisenberger17974287914.01
Sander L. Koole76765285214.01
Shelly L. Gable36464285014.01
Joshua Aronson18385284614.005
Elizabeth W. Dunn39575286414.01
Grainne M. Fitzsimons58568284914.01
Geoffrey J. Leonardelli29068284814.005
Matthew Feinberg29577286914.01
Jan De Houwer197270277214.01
Karl Christoph Klauer80167276514.01
Guido H. E. Gendolla42276274714.005
Jennifer S. Beer8056275414.01
Klaus R. Scherer46783267815.01
Galen V. Bodenhausen58574266115.01
Sonja Lyubomirsky53171265915.01
Claude M. Steele43473264215.005
William G. Graziano53271266615.01
Kristin Laurin64863265115.01
Kerri L. Johnson53276257615.01
Phillip R. Shaver56681257116.01
David Dunning81874257016.01
Laurie A. Rudman48272256816.01
Joel Cooper25772253916.005
Batja Mesquita41671257316.01
Ronald S. Friedman18379254416.005
Steven J. Sherman88874246216.01
Alison L. Chasteen22368246916.01
Shigehiro Oishi110964246117.01
Thomas Mussweiler60470244317.005
Mark W. Baldwin24772244117.005
Jonathan Haidt36876237317.01
Brandon J. Schmeichel65266234517.005
Jeffrey W Sherman99268237117.01
Felicia Pratto41073237518.01
Klaus Rothermund73871237618.01
Bernard A. Nijstad69371235218.005
Roland Imhoff36574237318.01
Jennifer L. Eberhardt20271236218.005
Michael Ross116470226218.005
Marilynn B. Brewer31475226218.005
Dieter Frey153868225818.005
David M. Buss46182228019.01
Wendy Berry Mendes96568224419.005
Yoel Inbar28067227119.01
Sean M. McCrea58473225419.005
Spike W. S. Lee14568226419.005
Joseph P. Forgas88883215919.005
Maya Tamir134280216419.005
Paul W. Eastwick58365216919.005
Elizabeth Levy Paluck3184215520.005
Andrew J. Elliot101881206721.005
Jay J. van Bavel43764207121.005
Tanya L. Chartrand42467203321.001
Geoffrey L. Cohen159068205021.005
David A. Pizarro22771206921.005
Ana Guinote37876204721.005
Kentaro Fujita45869206221.005
William A. Cunningham23876206422.005
Robert S. Wyer87182196322.005
Peter M. Gollwitzer130364195822.005
Gerald L. Clore45674194522.001
Amy J. C. Cuddy17081197222.005
Nilanjana Dasgupta38376195222.005
Travis Proulx17463196222.005
James K. McNulty104756196523.005
Dolores Albarracin52067195623.005
Richard P. Eibach75369194723.001
Kennon M. Sheldon69874186623.005
Wilhelm Hofmann62467186623.005
Ed Diener49864186824.005
Frank D. Fincham73469185924.005
Toni Schmader54669186124.005
Roland Deutsch36578187124.005
Lisa K. Libby41865185424.005
James M. Tyler13087187424.005
Chen-Bo Zhong32768184925.005
Brad J. Bushman89774176225.005
Ara Norenzayan22572176125.005
Benoit Monin63565175625.005
Michel Tuan Pham24686176825.005
E. Tory. Higgins192068175426.001
Timothy D. Wilson79865176326.005
Ap Dijksterhuis75068175426.005
Michael W. Kraus61772175526.005
Carey K. Morewedge63376176526.005
Leandre R. Fabrigar63270176726.005
Joseph Cesario14662174526.001
Simone Schnall27062173126.001
Daniel T. Gilbert72465166527.005
Melissa J. Ferguson116372166927.005
Charles S. Carver15482166428.005
Mark P. Zanna65964164828.001
Sandra L. Murray69760165528.001
Laura A. King39176166829.005
Heejung S. Kim85859165529.001
Gun R. Semin15979156429.005
Nathaniel M Lambert45666155930.001
Shelley E. Taylor43869155431.001
Nira Liberman130475156531.005
Lee Ross34977146331.001
Ziva Kunda21767145631.001
Jon K. Maner104065145232.001
Arie W. Kruglanski122878145833.001
Gabriele Oettingen104761144933.001
Gregory M. Walton58769144433.001
Sarah E. Hill50978135234.001
Fiona Lee22167135834.001
Michael A. Olson34665136335.001
Michael A. Zarate12052133136.001
Daniel M. Oppenheimer19880126037.001
Yaacov Trope127773125738.001
Steven J. Spencer54167124438.001
Deborah A. Prentice8980125738.001
William von Hippel39865124840.001
Oscar Ybarra30563125540.001
Dov Cohen64168114441.001
Ian McGregor40966114041.001
Mark Muraven49652114441.001
Martie G. Haselton18673115443.001
Susan M. Andersen36174114843.001
Shelly Chaiken36074115244.001
Hans Ijzerman2145694651.001

Soric’s Maximum False Discovery Rate

Originally published January 31, 2020
Revised December 27, 2020

Psychologists, social scientists, and medical researchers often conduct empirical studies with the goal of demonstrating an effect (e.g., that a drug is effective). They do so by rejecting the null-hypothesis that there is no effect when a test statistic falls into a region of values that are improbable under the null-hypothesis, p < .05. This is called null-hypothesis significance testing (NHST).

The utility of NHST has been a topic of debate. One of the oldest criticisms of NHST is that the null-hypothesis is likely to be false most of the time (Lykken, 1968). As a result, obtaining a significant result adds little information, while failing to obtain one because studies have low power creates false information and confusion.

This changed in the 2000s, when the opinion emerged that most published significant results are false (Ioannidis, 2005; Simmons, Nelson, & Simonsohn, 2011). In response, there have been some attempts to estimate the actual number of false positive results (Jager & Leek, 2014). However, there has been surprisingly little progress towards this goal.

One problem for empirical tests of the false discovery rate is that the null-hypothesis is an abstraction. Just as it is impossible to say how many points make up the letter X, it is impossible to count how many null-hypotheses are true because the true population effect size is always unknown (Zhao, 2011, JASA).

An article by Soric (1989, JASA) provides a simple solution to this problem. Although this article was influential in stimulating the false-discovery-rate methods used in genome-wide association studies (Benjamini & Hochberg, 1995, with over 40,000 citations), the article itself has garnered fewer than 100 citations. Yet it provides a simple and attractive way to examine how often researchers may be obtaining significant results when the null-hypothesis is true. Rather than trying to estimate the actual false discovery rate, the method estimates the maximum false discovery rate. If a literature has a low maximum false discovery rate, readers can be assured that most significant results are true positives.

The method is simple because researchers do not have to determine whether a specific finding was a true or false positive result. Rather, the maximum false discovery rate can be computed from the actual discovery rate (i.e., the percentage of significant results for all tests).

The logic of Soric’s (1989) approach is illustrated in Table 1.

          NS     SIG   Total
TRUE       0      60      60
FALSE    760      40     800
Total    760     100     860

Table 1

To maximize the false discovery rate, we make the simplifying assumption that all tests of true hypotheses (i.e., the null-hypothesis is false) are conducted with 100% power (i.e., all tests of true hypotheses produce a significant result). In Table 1, this leads to 60 significant results for 60 true hypotheses. The percentage of significant results for false hypotheses (i.e., the null-hypothesis is true) is given by the significance criterion, which is set at the typical level of 5%. This means that for every 20 tests, there are 19 non-significant results and one false positive result. In Table 1 this leads to 40 false positive results for 800 tests.

In this example, the discovery rate is (40 + 60)/860 = 11.6%. Out of these 100 discoveries, 60 are true discoveries and 40 are false discoveries. Thus, the false discovery rate is 40/100 = 40%.

Soric’s (1989) insight makes it easy to examine empirically whether a literature tests many false hypotheses, using a simple formula to compute the maximum false discovery rate from the observed discovery rate; that is, the percentage of significant results. All we need to do is count and use simple math to obtain valuable information about the false discovery rate.
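In formula form, Soric's bound is FDRmax = (1/DR - 1) * alpha / (1 - alpha), where DR is the discovery rate. A short R check reproduces the Table 1 example:

```r
# Soric's maximum FDR computed from the discovery rate (Table 1 example)
soric_fdr <- function(dr, alpha = .05) (1 / dr - 1) * alpha / (1 - alpha)

dr <- 100 / 860    # 100 significant results out of 860 tests (~11.6%)
soric_fdr(dr)      # -> 0.40, the 40% false discovery rate derived above
```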

However, a major problem with Soric’s approach is that the observed discovery rate in a literature may be misleading because journals are more likely to publish significant results than non-significant results. This is known as publication bias or the file-drawer problem (Rosenthal, 1979). In some sciences, publication bias is a big problem. Sterling (1959; also Sterling et al., 1995) found that the observed discovery rate in psychology is over 90%. Rather than suggesting that psychologists hardly ever test false hypotheses, this suggests that publication bias is particularly strong in psychology (Fanelli, 2010). Using these inflated discovery rates to estimate the maximum FDR would severely underestimate the actual risk of false positive results.

Recently, Bartoš and Schimmack (2020) developed a statistical model that can correct for publication bias and produce a bias-corrected estimate of the discovery rate. This is called the expected discovery rate. A comparison of the observed discovery rate (ODR) and the expected discovery rate (EDR) can be used to assess the presence and extent of publication bias. In addition, the EDR can be used to compute Soric’s maximum false discovery rate when publication bias is present and inflates the ODR.

To demonstrate this approach, I use test-statistics from the journal Psychonomic Bulletin and Review. The choice of this journal is motivated by prior meta-psychological investigations of results published in this journal. Gronau, Duizer, Bakker, and Wagenmakers (2017) used a Bayesian Mixture Model to estimate that about 40% of results published in this journal are false positive results. Using Soric’s approach in reverse shows that this estimate implies that cognitive psychologists test very few true hypotheses (Table 3: 72 of the 172 significant results, or 42%, would be false positives). This is close to Dreber, Pfeiffer, Almenberg, Isaksson, Wilson, Chen, Nosek, and Johannesson’s (2015) estimate that only 9% of hypotheses tested in cognitive psychology are true.

          NS     SIG   Total
TRUE       0     100     100
FALSE   1368      72    1440
Total   1368     172    1540

Table 3

These results are implausible because rather different results are obtained when Soric’s method is applied to the results of the Open Science Collaboration (2015) project, which conducted actual replication studies and found that 50% of published significant results in cognitive psychology could be replicated; that is, they produced a significant result again in the replication study. As there was no publication bias in the replication studies, the ODR of 50% can be used to compute the maximum false discovery rate, which is only 5%. This is much lower than the estimate obtained with Gronau et al.’s (2017) mixture model.
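Both directions of this calculation can be checked with the same bound. A minimal R sketch (just the algebra behind Table 3 and the OSC comparison, not anyone's published code):

```r
soric_fdr <- function(dr, alpha = .05) (1 / dr - 1) * alpha / (1 - alpha)

# Forward: the OSC (2015) replication discovery rate of 50% -> max FDR ~5%
soric_fdr(.50)                 # ~0.053

# Reverse: Gronau et al.'s ~40% FDR with 100 true positives (100% power assumed)
fdr <- 72 / 172                # the 42% false discovery rate from Table 3
fp  <- 100 * fdr / (1 - fdr)   # 72 false positives alongside 100 true positives
fp / .05                       # 1440 tests of true null hypotheses (FALSE row)
```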

I used an R-script to automatically extract test-statistics from articles that were published in Psychonomic Bulletin and Review from 2000 to 2010. I limited the analysis to this period because concerns about replicability and false positives might have changed research practices after 2010. The program extracted 13,571 test statistics.
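For readers who want to reproduce this step, the conversion is simple: each extracted test statistic is turned into a two-sided p-value and then into the corresponding absolute z-score, which is what z-curve models. A schematic R sketch with made-up values (the actual extraction script parses the article text; the zcurve() call reflects my understanding of the zcurve package's interface and should be treated as an assumption):

```r
# install.packages("zcurve")   # z-curve package by Bartos & Schimmack
library(zcurve)

# A few illustrative test statistics (not real extracted values)
t_vals <- c(2.3, 1.1, 3.5); t_df  <- c(28, 40, 55)
F_vals <- c(5.6, 9.8);      F_df1 <- c(1, 2); F_df2 <- c(60, 90)

# Convert to two-sided p-values, then to absolute z-scores
p <- c(2 * pt(abs(t_vals), t_df, lower.tail = FALSE),
       pf(F_vals, F_df1, F_df2, lower.tail = FALSE))
z <- qnorm(p / 2, lower.tail = FALSE)

fit <- zcurve(z)   # by default, only significant z-values (z > 1.96) are modeled
summary(fit)       # reports the EDR and ERR estimates with confidence intervals
```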

Figure 1 shows clear evidence of selection bias. The observed discovery rate of 70% is much higher than the estimated discovery rate of 35%, and the 95%CI of the EDR, 25% to 53%, does not include the ODR. As a result, the ODR produces an inflated estimate of the actual discovery rate and cannot be used to compute the maximum false discovery rate.

However, even with the much lower estimated discovery rate of 35%, the maximum false discovery rate is only 10%. Even with the lower bound of the confidence interval for the EDR of 25%, the maximum FDR is only 16%.

Figure 2 shows the results for a replication with test statistics from 2011 to 2019. Although changes in research practices could have produced different results, the results are unchanged. The ODR is 69% vs. 70%; the EDR is 38% vs. 35% and the point estimate of the maximum FDR is 9% vs. 10%. This close replication also implies that research practices in cognitive psychology have not changed over the past decade.

The maximum FDR estimate of 10% thus confirms, with a much larger sample of test statistics, the conclusion based on the replication rate in a small set of actual replication studies (OSC, 2015). The results also show that Gronau et al.’s mixture model produces dramatically inflated estimates of the false discovery rate (see also Brunner & Schimmack, 2019, for a detailed discussion of their flawed model).

In contrast to cognitive psychology, social psychology has seen more replication failures. The OSC project estimated a discovery rate of only 25%. Even this low rate would imply that a maximum of 16% of discoveries in social psychology are false positives. A z-curve analysis of a representative sample of 678 focal tests in social psychology produced an estimated discovery rate of 19% with a 95%CI ranging from 6% to 36% (Schimmack, 2020). The point estimate implies a maximum FDR of 22%, but the lower limit of the confidence interval allows for a maximum FDR of 82%. Thus, social psychology may be a literature where most published results are false. However, the replication crisis in social psychology should not be generalized to other disciplines.

Conclusion

Numerous articles have made claims that false discoveries are rampant (Dreber et al., 2015; Gronau et al., 2017; Ioannidis, 2005; Simmons et al., 2011). However, these articles did not provide empirical data to support this claim. In contrast, empirical studies of the false discovery risk usually show much lower rates of false discoveries (Jager & Leek, 2014), but this finding has been dismissed (Ioannidis, 2014) or ignored (Gronau et al., 2017). Here I used a simpler approach to estimate the maximum false discovery rate and showed that most significant results in cognitive psychology are true discoveries. I hope that this demonstration revives attempts to estimate the science-wise false discovery rate (Jager & Leek, 2014) rather than relying on hypothetical scenarios or models that reflect researchers’ prior beliefs that may not match actual data (Gronau et al., 2017; Ioannidis, 2005).

References

Bartoš, F., & Schimmack, U. (2020, January 10). Z-Curve.2.0: Estimating Replication Rates and Discovery Rates. https://doi.org/10.31234/osf.io/urgtn

Dreber, A., Pfeiffer, T., Almenberg, J., Isaksson, S., Wilson, B., Chen, Y., Nosek, B. A., & Johannesson, M. (2015). Prediction markets in science. Proceedings of the National Academy of Sciences, 112(50), 15343-15347. DOI: 10.1073/pnas.1516179112

Fanelli, D. (2010). “Positive” Results Increase Down the Hierarchy of the Sciences. PLOS ONE, 5(4), e10068. https://doi.org/10.1371/journal.pone.0010068

Gronau, Q. F., Duizer, M., Bakker, M., & Wagenmakers, E.-J. (2017). Bayesian mixture modeling of significant p values: A meta-analytic method to estimate the degree of contamination from H₀. Journal of Experimental Psychology: General, 146(9), 1223–1233. https://doi.org/10.1037/xge0000324

Ioannidis JPA (2005) Why Most Published Research Findings Are False. PLOS Medicine 2(8): e124. https://doi.org/10.1371/journal.pmed.0020124

Ioannidis, J. P. (2014). Why “An estimate of the science-wise false discovery rate and application to the top medical literature” is false. Biostatistics, 15(1), 28-36. DOI: 10.1093/biostatistics/kxt036

Jager, L. R., & Leek, J. T. (2014). An estimate of the science-wise false discovery rate and application to the top medical literature. Biostatistics, 15(1), 1-12. DOI: 10.1093/biostatistics/kxt007

Lykken, D. T. (1968). Statistical significance in psychological research. Psychological Bulletin, 70(3, Pt.1), 151–159. https://doi.org/10.1037/h0026141

Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), 1–8.

Schimmack, U. (2019). The Bayesian Mixture Model is fundamentally flawed. https://replicationindex.com/2019/04/01/the-bayesian-mixture-model-is-fundamentally-flawed/

Schimmack, U. (2020). A meta-psychological perspective on the decade of replication failures in social psychology. Canadian Psychology/Psychologie canadienne, 61(4), 364–376. https://doi.org/10.1037/cap0000246

Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant. Psychological Science, 22(11), 1359-1366. https://doi.org/10.1177/0956797611417632

Soric, B. (1989). Statistical “Discoveries” and Effect-Size Estimation. Journal of the American Statistical Association, 84(406), 608-610. doi:10.2307/2289950

Zhao, Y. (2011). Posterior Probability of Discovery and Expected Rate of Discovery for Multiple Hypothesis Testing and High Throughput Assays. Journal of the American Statistical Association, 106, 984-996, DOI: 10.1198/jasa.2011.tm09737