
Personalized P-Values for Social/Personality Psychologists

Last update 8/25/2021
(expanded to 410 social/personality psychologists; included Dan Ariely)


Since Fisher invented null-hypothesis significance testing, researchers have used p < .05 as a statistical criterion to interpret results as discoveries worthy of discussion (i.e., the null-hypothesis is false). Once published, these results are often treated as real findings, even though alpha does not control the risk that a published discovery is false.

Statisticians have warned against exclusive reliance on p < .05, but nearly 100 years after Fisher popularized this approach, it is still the most common way to interpret data. The main reason is that many attempts to improve on this practice have failed. A single statistical result is difficult to interpret in isolation; when individual results are interpreted in the context of other results, they become more informative. Based on the distribution of p-values, it is possible to estimate the maximum false discovery rate (Bartos & Schimmack, 2020; Jager & Leek, 2014). This approach can be applied to the p-values published by individual authors to adjust alpha so that the risk of false discoveries stays at a reasonable level, FDR < .05.

Researchers who mainly test true hypotheses with high power have a high discovery rate (many p-values below .05) and a low false discovery rate (FDR < .05). Figure 1 shows an example of a researcher who followed this strategy (for a detailed description of z-curve plots, see Schimmack, 2021).

We see that out of the 317 test-statistics retrieved from his articles, 246 were significant with alpha = .05. This is an observed discovery rate of 78%. We also see that this discovery rate closely matches the estimated discovery rate based on the distribution of the significant p-values, p < .05. The EDR is 79%. With an EDR of 79%, the maximum false discovery rate is only 1%. However, the 95%CI is wide and the lower bound of the CI for the EDR, 27%, allows for 14% false discoveries.
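These figures follow from Soric's (1989) formula for the maximum false discovery rate implied by a discovery rate; a minimal sketch in R (the function name is mine) reproduces them up to rounding:

```r
# Soric (1989): maximum false discovery rate implied by a discovery rate,
# assuming all true hypotheses are tested with 100% power (worst case).
soric_max_fdr <- function(dr, alpha = .05) (1 / dr - 1) * alpha / (1 - alpha)

soric_max_fdr(.79)   # ~0.014 -> about 1% at the EDR point estimate
soric_max_fdr(.27)   # ~0.142 -> about 14% at the lower bound of the EDR's CI
```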

When the ODR matches the EDR, there is no evidence of publication bias. In this case, we can improve the estimates by fitting the model to all p-values, including the non-significant ones. With a tighter CI for the EDR, we see that the 95%CI for the maximum FDR ranges from 1% to 3%. Thus, we can be confident that no more than 5% of the significant results with alpha = .05 are false discoveries. Readers can therefore continue to use alpha = .05 to look for interesting discoveries in Matsumoto’s articles.

Figure 3 shows the results for a different type of researcher who took a risk and studied weak effect sizes with small samples. This produces many non-significant results that are often not published. The selection for significance inflates the observed discovery rate, but the z-curve plot and the comparison with the EDR shows the influence of publication bias. Here the ODR is similar to Figure 1, but the EDR is only 11%. An EDR of 11% translates into a large maximum false discovery rate of 41%. In addition, the 95%CI of the EDR includes 5%, which means the risk of false positives could be as high as 100%. In this case, using alpha = .05 to interpret results as discoveries is very risky. Clearly, p < .05 means something very different when reading an article by David Matsumoto or Shelly Chaiken.

Rather than dismissing all of Chaiken’s results, we can try to lower alpha to reduce the false discovery rate. If we set alpha = .01, the FDR is 15%. If we set alpha = .005, the FDR is 8%. To get the FDR below 5%, we need to set alpha to .001.

A uniform criterion of FDR < 5% is applied to all researchers in the rankings below. For some this means no adjustment to the traditional criterion. For others, alpha is lowered to .01, and for a few even lower than that.

The rankings below are based on automatically extracted test-statistics from 40 journals (List of journals). The results should be interpreted with caution and treated as preliminary. They depend on the specific set of journals that were searched, the way results are reported, and many other factors. The data are available (data.drop), and researchers can exclude or add articles and run their own analyses using the z-curve package in R.
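For readers who want to run their own z-curve analysis, here is a minimal sketch (it assumes the CRAN zcurve package; the input file and its p-value column are hypothetical placeholders, and the exact interface may differ across package versions):

```r
# Minimal sketch: fit a z-curve to one author's test results.
# install.packages("zcurve")
library(zcurve)

dat <- read.csv("author_tests.csv")    # hypothetical file with one two-sided p-value per test
z   <- abs(qnorm(dat$p_value / 2))     # convert p-values to absolute z-scores

fit <- zcurve(z)                       # default model: fitted to the significant z-scores
summary(fit)                           # EDR and ERR estimates with bootstrapped confidence intervals
plot(fit)                              # z-curve plot like the figures shown in this post
```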

I am also happy to receive feedback about coding errors. I also recommend hand-coding articles to adjust alpha for focal hypothesis tests. This typically lowers the EDR and increases the FDR. For example, the automated method produced an EDR of 31% for Bargh, whereas hand-coding of focal tests produced an EDR of 12% (Bargh-Audit).

And here are the rankings. The results are fully automated and I was not able to cover up the fact that I placed only #188 out of 400 in the rankings. In another post, I will explain how researchers can move up in the rankings. Of course, one way to move up in the rankings is to increase statistical power in future studies. The rankings will be updated again when the 2021 data are available.

Despite their preliminary nature, I am confident that the results provide valuable information. Until now, all p-values below .05 have been treated as if they are equally informative. The rankings here show that this is not the case. While p = .02 can be informative for one researcher, p = .002 may still entail a high false discovery risk for another researcher.

1Robert A. Emmons538789901.05
2Allison L. Skinner2295981851.05
3David Matsumoto3788379851.05
4Linda J. Skitka5326875822.05
5Jonathan B. Freeman2745975812.05
6Virgil Zeigler-Hill5157274812.05
7Arthur A. Stone3107573812.05
8David P. Schmitt2077871772.05
9Emily A. Impett5497770762.05
10Paula Bressan628270762.05
11Kurt Gray4877969812.05
12Michael E. McCullough3346969782.05
13Kipling D. Williams8437569772.05
14John M. Zelenski1567169762.05
15Elke U. Weber3126968770.05
16Hilary B. Bergsieker4396768742.05
17Cameron Anderson6527167743.05
18Rachael E. Jack2497066803.05
19Jamil Zaki4307866763.05
20A. Janet Tomiyama767865763.05
21Benjamin R. Karney3925665733.05
22Phoebe C. Ellsworth6057465723.05
23Jim Sidanius4876965723.05
24Amelie Mummendey4617065723.05
25Carol D. Ryff2808464763.05
26Juliane Degner4356364713.05
27Steven J. Heine5977863773.05
28David M. Amodio5846663703.05
29Thomas N Bradbury3986163693.05
30Elaine Fox4727962783.05
31Miles Hewstone14277062733.05
32Linda R. Tropp3446561803.05
33Rainer Greifeneder9447561773.05
34Klaus Fiedler19507761743.05
35Jesse Graham3777060763.05
36Richard W. Robins2707660704.05
37Simine Vazire1376660644.05
38On Amir2676759884.05
39Edward P. Lemay2898759814.05
40William B. Swann Jr.10707859804.05
41Margaret S. Clark5057559774.05
42Bernhard Leidner7246459654.05
43B. Keith Payne8797158764.05
44Ximena B. Arriaga2846658694.05
45Joris Lammers7286958694.05
46Patricia G. Devine6067158674.05
47Rainer Reisenzein2016557694.05
48Barbara A. Mellers2878056784.05
49Joris Lammers7056956694.05
50Jean M. Twenge3817256594.05
51Nicholas Epley15047455724.05
52Kaiping Peng5667754754.05
53Krishna Savani6387153695.05
54Leslie Ashburn-Nardo1098052835.05
55Lee Jussim2268052715.05
56Richard M. Ryan9987852695.05
57Ethan Kross6146652675.05
58Edward L. Deci2847952635.05
59Roger Giner-Sorolla6638151805.05
60Bertram F. Malle4227351755.05
61Jens B. Asendorpf2537451695.05
62Samuel D. Gosling1085851625.05
63Tessa V. West6917151595.05
64Paul Rozin4497850845.05
65Joachim I. Krueger4367850815.05
66Sheena S. Iyengar2076350805.05
67James J. Gross11047250775.05
68Mark Rubin3066850755.05
69Pieter Van Dessel5787050755.05
70Shinobu Kitayama9837650715.05
71Matthew J. Hornsey16567450715.05
72Janice R. Kelly3667550705.05
73Antonio L. Freitas2477950645.05
74Paul K. Piff1667750635.05
75Mina Cikara3927149805.05
76Beate Seibt3797249626.01
77Ludwin E. Molina1636949615.05
78Bertram Gawronski18037248766.01
79Penelope Lockwood4587148706.01
80Edward R. Hirt10428148656.01
81Matthew D. Lieberman3987247806.01
82John T. Cacioppo4387647696.01
83Agneta H. Fischer9527547696.01
84Leaf van Boven7117247676.01
85Stephanie A. Fryberg2486247666.01
86Daniel M. Wegner6027647656.01
87Anne E. Wilson7857147646.01
88Rainer Banse4027846726.01
89Alice H. Eagly3307546716.01
90Jeanne L. Tsai12417346676.01
91Jennifer S. Lerner1818046616.01
92Andrea L. Meltzer5495245726.01
93R. Chris Fraley6427045727.01
94Constantine Sedikides25667145706.01
95Paul Slovic3777445706.01
96Dacher Keltner12337245646.01
97Brian A. Nosek8166844817.01
98George Loewenstein7527144727.01
99Ursula Hess7747844717.01
100Jason P. Mitchell6007343737.01
101Jessica L. Tracy6327443717.01
102Charles M. Judd10547643687.01
103S. Alexander Haslam11987243647.01
104Mark Schaller5657343617.01
105Susan T. Fiske9117842747.01
106Lisa Feldman Barrett6446942707.01
107Jolanda Jetten19567342677.01
108Mario Mikulincer9018942647.01
109Bernadette Park9737742647.01
110Paul A. M. Van Lange10927042637.01
111Wendi L. Gardner7986742637.01
112Will M. Gervais1106942597.01
113Jordan B. Peterson2666041797.01
114Philip E. Tetlock5497941737.01
115Amanda B. Diekman4388341707.01
116Daniel H. J. Wigboldus4927641678.01
117Michael Inzlicht6866641638.01
118Naomi Ellemers23887441638.01
119Phillip Atiba Goff2996841627.01
120Stacey Sinclair3277041578.01
121Francesca Gino25217540698.01
122Michael I. Norton11367140698.01
123David J. Hauser1567440688.01
124Elizabeth Page-Gould4115740668.01
125Tiffany A. Ito3498040648.01
126Richard E. Petty27716940648.01
127Tim Wildschut13747340648.01
128Norbert Schwarz13377240638.01
129Veronika Job3627040638.01
130Wendy Wood4627540628.01
131Minah H. Jung1568339838.01
132Marcel Zeelenberg8687639798.01
133Tobias Greitemeyer17377239678.01
134Jason E. Plaks5827039678.01
135Carol S. Dweck10287039638.01
136Christian S. Crandall3627539598.01
137Harry T. Reis9986938749.01
138Vanessa K. Bohns4207738748.01
139Jerry Suls4137138688.01
140Eric D. Knowles3846838648.01
141C. Nathan DeWall13367338639.01
142Clayton R. Critcher6978238639.01
143John F. Dovidio20196938629.01
144Joshua Correll5496138629.01
145Abigail A. Scholer5565838629.01
146Chris Janiszewski1078138589.01
147Herbert Bless5867338579.01
148Mahzarin R. Banaji8807337789.01
149Rolf Reber2806437729.01
150Kevin N. Ochsner4067937709.01
151Mark J. Brandt2777037709.01
152Geoff MacDonald4066737679.01
153Mara Mather10387837679.01
154Antony S. R. Manstead16567237629.01
155Lorne Campbell4336737619.01
156Sanford E. DeVoe2367137619.01
157Ayelet Fishbach14167837599.01
158Fritz Strack6077537569.01
159Jeff T. Larsen18174366710.01
160Nyla R. Branscombe12767036659.01
161Yaacov Schul4116136649.01
162D. S. Moskowitz34187436639.01
163Pablo Brinol13566736629.01
164Todd B. Kashdan3777336619.01
165Barbara L. Fredrickson2877236619.01
166Duane T. Wegener9807736609.01
167Joanne V. Wood10937436609.01
168Niall Bolger3766736589.01
169Craig A. Anderson4677636559.01
170Michael Harris Bond37873358410.01
171Glenn Adams27071357310.01
172Daniel M. Bernstein40473357010.01
173C. Miguel Brendl12176356810.01
174Azim F. Sharif18374356810.01
175Emily Balcetis59969356810.01
176Eva Walther49382356610.01
177Michael D. Robinson138878356610.01
178Igor Grossmann20364356610.01
179Diana I. Tamir15662356210.01
180Samuel L. Gaertner32175356110.01
181John T. Jost79470356110.01
182Eric L. Uhlmann45767356110.01
183Nalini Ambady125662355610.01
184Daphna Oyserman44655355410.01
185Victoria M. Esses29575355310.01
186Linda J. Levine49574347810.01
187Wiebke Bleidorn9963347410.01
188Thomas Gilovich119380346910.01
189Alexander J. Rothman13369346510.01
190Paula M. Niedenthal52269346110.01
191Ozlem Ayduk54962345910.01
192Paul Ekman8870345510.01
193Alison Ledgerwood21475345410.01
194Christopher R. Agnew32575337610.01
195Michelle N. Shiota24260336311.01
196Malte Friese50161335711.01
197Kerry Kawakami48768335610.01
198Danu Anthony Stinson49477335411.01
199Jennifer A. Richeson83167335211.01
200Margo J. Monteith77376327711.01
201Ulrich Schimmack31875326311.01
202Mark Snyder56272326311.01
203Russell H. Fazio109469326111.01
204Eric van Dijk23867326011.01
205Tom Meyvis37777326011.01
206Eli J. Finkel139262325711.01
207Robert B. Cialdini37972325611.01
208Jonathan W. Kunstman43066325311.01
209Delroy L. Paulhus12177318212.01
210Yuen J. Huo13274318011.01
211Gerd Bohner51371317011.01
212Christopher K. Hsee68975316311.01
213Vivian Zayas25171316012.01
214John A. Bargh65172315512.01
215Tom Pyszczynski94869315412.01
216Roy F. Baumeister244269315212.01
217E. Ashby Plant83177315111.01
218Kathleen D. Vohs94468315112.01
219Jamie Arndt131869315012.01
220Anthony G. Greenwald35772308312.01
221Nicholas O. Rule129468307513.01
222Lauren J. Human44759307012.01
223Jennifer Crocker51568306712.01
224Dale T. Miller52171306412.01
225Thomas W. Schubert35370306012.01
226W. Keith Campbell52870305812.01
227Arthur Aron30765305612.01
228Pamela K. Smith14966305212.01
229Aaron C. Kay132070305112.01
230Steven W. Gangestad19863304113.005
231Eliot R. Smith44579297313.01
232Nir Halevy26268297213.01
233E. Allan Lind37082297213.01
234Richard E. Nisbett31973296913.01
235Hazel Rose Markus67476296813.01
236Emanuele Castano44569296513.01
237Dirk Wentura83065296413.01
238Boris Egloff27481295813.01
239Monica Biernat81377295713.01
240Gordon B. Moskowitz37472295713.01
241Russell Spears228673295513.01
242Jeff Greenberg135877295413.01
243Caryl E. Rusbult21860295413.01
244Naomi I. Eisenberger17974287914.01
245Brent W. Roberts56272287714.01
246Yoav Bar-Anan52575287613.01
247Eddie Harmon-Jones73873287014.01
248Matthew Feinberg29577286914.01
249Roland Neumann25877286713.01
250Eugene M. Caruso82275286413.01
251Ulrich Kuehnen82275286413.01
252Elizabeth W. Dunn39575286414.01
253Jeffry A. Simpson69774285513.01
254Sander L. Koole76765285214.01
255Richard J. Davidson38064285114.01
256Shelly L. Gable36464285014.01
257Adam D. Galinsky215470284913.01
258Grainne M. Fitzsimons58568284914.01
259Geoffrey J. Leonardelli29068284814.005
260Joshua Aronson18385284614.005
261Henk Aarts100367284514.005
262Vanessa K. Bohns42276277415.01
263Jan De Houwer197270277214.01
264Dan Ariely60070276914.01
265Charles Stangor18581276815.01
266Karl Christoph Klauer80167276514.01
267Jennifer S. Beer8056275414.01
268Eldar Shafir10778275114.01
269Guido H. E. Gendolla42276274714.005
270Klaus R. Scherer46783267815.01
271William G. Graziano53271266615.01
272Galen V. Bodenhausen58574266115.01
273Sonja Lyubomirsky53071265915.01
274Kai Sassenberg87271265615.01
275Kristin Laurin64863265115.01
276Claude M. Steele43473264215.005
277David G. Rand39270258115.01
278Paul Bloom50272257916.01
279Kerri L. Johnson53276257615.01
280Batja Mesquita41671257316.01
281Rebecca J. Schlegel26167257115.01
282Phillip R. Shaver56681257116.01
283David Dunning81874257016.01
284Laurie A. Rudman48272256816.01
285David A. Lishner10565256316.01
286Mark J. Landau95078254516.005
287Ronald S. Friedman18379254416.005
288Joel Cooper25772253916.005
289Alison L. Chasteen22368246916.01
290Jeff Galak31373246817.01
291Steven J. Sherman88874246216.01
292Shigehiro Oishi110964246117.01
293Thomas Mussweiler60470244317.005
294Mark W. Baldwin24772244117.005
295Evan P. Apfelbaum25662244117.005
296Nurit Shnabel56476237818.01
297Klaus Rothermund73871237618.01
298Felicia Pratto41073237518.01
299Jonathan Haidt36876237317.01
300Roland Imhoff36574237318.01
301Jeffrey W Sherman99268237117.01
302Jennifer L. Eberhardt20271236218.005
303Bernard A. Nijstad69371235218.005
304Brandon J. Schmeichel65266234517.005
305Sam J. Maglio32572234217.005
306David M. Buss46182228019.01
307Yoel Inbar28067227119.01
308Serena Chen86572226719.005
309Spike W. S. Lee14568226419.005
310Marilynn B. Brewer31475226218.005
311Michael Ross116470226218.005
312Dieter Frey153868225818.005
313G. Daniel Lassiter18982225519.01
314Sean M. McCrea58473225419.005
315Wendy Berry Mendes96568224419.005
316Paul W. Eastwick58365216919.005
317Kees van den Bos115084216920.005
318Maya Tamir134280216419.005
319Joseph P. Forgas88883215919.005
320Michaela Wanke36274215919.005
321Dolores Albarracin54066215620.005
322Elizabeth Levy Paluck3184215520.005
323Vanessa LoBue29968207621.01
324Christopher J. Armitage16062207321.005
325Elizabeth A. Phelps68678207221.005
326Jay J. van Bavel43764207121.005
327David A. Pizarro22771206921.005
328Andrew J. Elliot101881206721.005
329William A. Cunningham23876206422.005
330Kentaro Fujita45869206221.005
331Geoffrey L. Cohen159068205021.005
332Ana Guinote37876204721.005
333Tanya L. Chartrand42467203321.001
334Selin Kesebir32866197322.005
335Vincent Y. Yzerbyt141273197322.01
336Amy J. C. Cuddy17081197222.005
337James K. McNulty104756196523.005
338Robert S. Wyer87182196322.005
339Travis Proulx17463196222.005
340Peter M. Gollwitzer130364195822.005
341Nilanjana Dasgupta38376195222.005
342Richard P. Eibach75369194723.001
343Gerald L. Clore45674194522.001
344James M. Tyler13087187424.005
345Roland Deutsch36578187124.005
346Ed Diener49864186824.005
347Kennon M. Sheldon69874186623.005
348Wilhelm Hofmann62467186623.005
349Laura L. Carstensen72377186424.005
350Toni Schmader54669186124.005
351Frank D. Fincham73469185924.005
352David K. Sherman112861185724.005
353Lisa K. Libby41865185424.005
354Chen-Bo Zhong32768184925.005
355Stefan C. Schmukle11462177126.005
356Michel Tuan Pham24686176825.005
357Leandre R. Fabrigar63270176726.005
358Neal J. Roese36864176525.005
359Carey K. Morewedge63376176526.005
360Timothy D. Wilson79865176326.005
361Brad J. Bushman89774176225.005
362Ara Norenzayan22572176125.005
363Benoit Monin63565175625.005
364Michael W. Kraus61772175526.005
365Ad van Knippenberg68372175526.001
366E. Tory. Higgins186868175425.001
367Ap Dijksterhuis75068175426.005
368Joseph Cesario14662174526.001
369Simone Schnall27062173126.001
370Joshua M. Ackerman38053167013.01
371Melissa J. Ferguson116372166927.005
372Laura A. King39176166829.005
373Daniel T. Gilbert72465166527.005
374Charles S. Carver15482166428.005
375Leif D. Nelson40974166428.005
376David DeSteno20183165728.005
377Sandra L. Murray69760165528.001
378Heejung S. Kim85859165529.001
379Mark P. Zanna65964164828.001
380Nira Liberman130475156531.005
381Gun R. Semin15979156429.005
382Tal Eyal43962156229.005
383Nathaniel M Lambert45666155930.001
384Angela L. Duckworth12261155530.005
385Dana R. Carney20060155330.001
386Lee Ross34977146331.001
387Arie W. Kruglanski122878145833.001
388Ziva Kunda21767145631.001
389Shelley E. Taylor42769145231.001
390Jon K. Maner104065145232.001
391Gabriele Oettingen104761144933.001
392Gregory M. Walton58769144433.001
393Michael A. Olson34665136335.001
394Fiona Lee22167135834.001
395Melody M. Chao23757135836.001
396Adam L. Alter31478135436.001
397Sarah E. Hill50978135234.001
398Jaime L. Kurtz9155133837.001
399Michael A. Zarate12052133136.001
400Jennifer K. Bosson65976126440.001
401Daniel M. Oppenheimer19880126037.001
402Deborah A. Prentice8980125738.001
403Yaacov Trope127773125738.001
404Oscar Ybarra30563125540.001
405William von Hippel39865124840.001
406Steven J. Spencer54167124438.001
407Martie G. Haselton18673115443.001
408Shelly Chaiken36074115244.001
409Susan M. Andersen36174114843.001
410Dov Cohen64168114441.001
411Mark Muraven49652114441.001
412Ian McGregor40966114041.001
413Hans Ijzerman2145694651.001
414Linda M. Isbell1156494150.001
415Cheryl J. Wakslak2787383559.001

Ioannidis is Wrong Most of the Time

John P. A. Ioannidis is a rock star in the world of science (wikipedia).

By traditional standards of science, he is one of the most prolific and influential scientists alive. He has published over 1,000 articles that have been cited over 100,000 times.

He is best known for the title of his article “Why most published research findings are false” that has been cited nearly 5,000 times. The irony of this title is that it may also apply to Ioannidis, especially because there is a trade-off between quality and quantity in publishing.

Fact Checking Ioannidis

The title of Ioannidis’s article implies a factual statement: “Most published results ARE false.” However, the actual article does not contain empirical data to support this claim. Rather, Ioannidis presents some hypothetical scenarios that show under what conditions published results MAY BE false.

To produce mostly false findings, a literature has to meet two conditions.

First, it has to test mostly false hypotheses.
Second, it has to test hypotheses in studies with low statistical power, that is, a low probability of producing true positive results.

To give a simple example, imagine a field that tests only 10% true hypotheses with just 20% power. As power determines the percentage of true hypotheses that produce significant results, only 2 out of the 10 true hypotheses will be significant. Meanwhile, the alpha criterion of 5% implies that 5% of the tests of false hypotheses will also produce a significant result; that is, 4.5 of the 90 false hypotheses will yield a significant result. As a result, there will be more than twice as many false positives (4.5 per 100 tests) as true positives (2 per 100 tests).
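The arithmetic of this example can be checked in a few lines of R (variable names are mine):

```r
# Hypothetical field: 10% true hypotheses, 20% power, alpha = .05 (per 100 tests).
power <- .20; alpha <- .05
true_hyp <- 10; false_hyp <- 90

true_pos  <- true_hyp  * power          # 2   true discoveries
false_pos <- false_hyp * alpha          # 4.5 false discoveries
false_pos / (true_pos + false_pos)      # false discovery rate ~ 0.69: most discoveries are false
```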

These relatively simple calculations were well known by 2005 (Soric, 1989). Why, then, did Ioannidis’s article have such a big impact? The answer is that Ioannidis convinced many people that his hypothetical examples are realistic and describe most areas of science.

The year 2020 has shown that Ioannidis’s claim does not apply to all areas of science. With amazing speed, biotech companies were able to produce not just one but several highly effective vaccines. Clearly, some sciences are making real progress. On the other hand, in other areas of science Ioannidis’s claims appear accurate. For example, the literature on single-gene variations as predictors of human behavior has produced mostly false claims, and social psychology has a replication crisis in which only 25% of published results could be replicated (OSC, 2015).

Aside from this sporadic and anecdotal evidence, it remains unclear how many false results are published in science as a whole, because it is impossible to count false positive results directly. Fortunately, it is not necessary to know the actual rate of false positives to test Ioannidis’s prediction that most published results are false positives. All we need to know is the discovery rate of a field (Soric, 1989). The discovery rate makes it possible to quantify the maximum percentage of false positive discoveries. If the maximum false discovery rate is well below 50%, we can reject Ioannidis’s hypothesis that most published results are false.

The empirical problem is that the observed discovery rate in a field may be inflated by publication bias. It is therefore necessary to estimate the amount of publication bias and, if it is present, correct the discovery rate accordingly.

In 2005, Ioannidis and Trikalinos (2005) developed their own test for publication bias, but this test had a number of shortcomings. First, it could be biased in heterogeneous literatures. Second, it required effect sizes to compute power. Third, it only provided information about the presence of publication bias and did not quantify it. Fourth, it did not provide bias-corrected estimates of the true discovery rate.

When the replication crisis became apparent in psychology, I started to develop new bias tests that address these limitations (Bartos & Schimmack, 2020; Brunner & Schimmack, 2020; Schimmack, 2012). The newest tool, called z-curve.2.0 (and yes, there is an app for that), overcomes all of the limitations of Ioannidis’s approach. Most important, it makes it possible to compute a bias-corrected discovery rate, called the expected discovery rate. The expected discovery rate can be used to examine and quantify publication bias by comparing it to the observed discovery rate. Moreover, the expected discovery rate can be used to compute the maximum false discovery rate.

The Data

The data were compiled by Simon Schwab from the Cochrane database, which covers results from thousands of clinical trials. The data are publicly available under a CC-BY Attribution 4.0 International license (“Re-estimating 400,000 treatment effects from intervention studies in the Cochrane Database of Systematic Reviews”; see also van Zwet, Schwab, & Senn, 2020).

Studies often report results for several outcomes. I selected only results for the primary outcome. It is often suggested that researchers switch outcomes to produce significant results. Thus, primary outcomes are the most likely to show evidence of publication bias, while secondary outcomes might even be biased to show more negative results for the same reason. The choice of primary outcomes also ensures that the test statistics are statistically independent because they are based on independent samples.


I first fitted the default model to the data. The default model assumes that publication bias is present and uses only statistically significant results to fit the model. Z-curve.2.0 uses a finite mixture model to approximate the observed distribution of z-scores with a limited number of non-centrality parameters. After finding optimal weights for the components, power can be computed as the weighted average of the implied power of the components (Bartos & Schimmack, 2020). Bootstrapping is used to compute 95% confidence intervals that have been shown to have good coverage in simulation studies (Bartos & Schimmack, 2020).

The main finding with the default model is that the model (grey curve) fits the observed distribution of z-scores very well in the range of significant results. However, z-curve has problems extrapolating from significant results to the distribution of non-significant results. In this case, the model (grey curve) underestimates the number of non-significant results; that is, there are more non-significant results in the data than a selection model would expect, which is the opposite of what publication bias produces. Thus, there is no evidence of publication bias. This is also seen in a comparison of the observed and expected discovery rates: the observed discovery rate of 26% is lower than the expected discovery rate of 38%.

When there is no evidence of publication bias, there is no reason to fit the model only to the significant results. Rather, the model can be fitted to the full distribution of all test statistics. The results are shown in Figure 2.

The key finding for this blog post is that the estimated discovery rate of 27% closely matches the observed discovery rate of 26%. Thus, there is no evidence of publication bias. In this case, simply counting the percentage of significant results provides a valid estimate of the discovery rate in clinical trials. Roughly one-quarter of trials end up with a positive result. The new question is how many of these results might be false positives.

To maximize the rate of false positives, we have to assume that true positives were obtained with maximum power (Soric, 1989). In this scenario, we could get as many as 14% (4 over 27) false positive results.

Even if we use the upper limit of the 95% confidence interval, we only get 19% false positives. Moreover, it is clear that Soric’s (1989) scenario overestimates the false discovery rate because it is unlikely that all tests of true hypotheses have 100% power.

In short, an empirical test of Ioannidis’s hypothesis that most published results in science are false shows that this claim is at best a wild overgeneralization. It is not true for clinical trials in medicine. In fact, the real problem is that many clinical trials may be underpowered to detect clinically relevant effects. This can be seen in the estimated replication rate of 61%, which is the mean power of studies with significant results. This estimate of power includes false positives with 5% power. If we assume that 14% of the significant results are false positives, the conditional power based on a true discovery is estimated to be 70% (.14 * .05 + .86 * .70 = .61).
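This back-calculation can be verified in a line of R (numbers from the paragraph above):

```r
# Solve ERR = fdr * alpha + (1 - fdr) * power for the conditional power of true discoveries.
err <- .61; fdr <- .14; alpha <- .05
(err - fdr * alpha) / (1 - fdr)   # ~0.70
```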

With information about power, we can modify Soric’s worst-case scenario and change power from 100% to 70%. This has only a small influence on the false discovery rate, which decreases to 11% (3 over 27). However, the rate of false negatives increases from 0 to 14% (10 over 74). This also means that there are now three times as many false negatives as false positives (10 vs. 3).
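Writing the bookkeeping out explicitly makes this scenario easy to check (a sketch with the numbers stated above; small deviations from the rounded percentages in the text are due to rounding):

```r
# Modified scenario: power of true tests = .70, discovery rate = .27, alpha = .05.
alpha <- .05; power <- .70; dr <- .27

pi0 <- (power - dr) / (power - alpha)        # share of true null-hypotheses, ~0.66
fdr <- pi0 * alpha / dr                      # false discoveries among significant results, ~0.12
fnr <- (1 - pi0) * (1 - power) / (1 - dr)    # false negatives among non-significant results, ~0.14
c(pi0 = pi0, fdr = fdr, fnr = fnr)
```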

Even this scenario overestimates the power of studies that produced false negative results, because the power of studies with significant results is higher than the power of studies with non-significant results when power is heterogeneous (Brunner & Schimmack, 2020). In the worst-case scenario, the null-hypothesis may rarely be true, and the power of studies with non-significant results could be as low as 14.5%. To explain: if we redid all of the studies, we would expect 61% of the significant studies to produce a significant result again, yielding 16.5% significant results (27% * .61). We also expect the discovery rate to be 27% again. Thus, the remaining 73% of studies have to make up the difference between 27% and 16.5%, which is 10.5%. For 73 studies to produce 10.5 significant results, they need 14.5% power: 27 = 27 * .61 + 73 * .145.
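The same back-calculation in one line of R:

```r
# If the discovery rate (27%) stays constant and significant studies replicate
# at the ERR (61%), the non-significant studies must supply the remainder.
dr <- .27; err <- .61
(dr - dr * err) / (1 - dr)   # ~0.145, i.e., about 14.5% power
```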

In short, while Ioannidis predicted that most published results are false positives, it is much more likely that most published results are false negatives. This problem is of course not new. To draw conclusions about the effectiveness of treatments, medical researchers usually do not rely on a single clinical trial. Rather, results of several studies are combined in a meta-analysis. As long as there is no publication bias, meta-analyses of original studies can boost power and reduce the risk of false negative results. It is therefore encouraging that the present results suggest that there is relatively little publication bias in these studies. Additional analyses for subgroups of studies can be conducted, but are beyond the main point of this blog post.


Ioannidis wrote an influential article that used hypothetical scenarios to predict that most published results are false positives. Although this article is often cited as if it contained evidence to support this claim, it contained no empirical evidence. Surprisingly, there have also been few attempts to test Ioannidis’s claim empirically, probably because nobody knew how to test it. Here I showed a way to test Ioannidis’s claim and presented clear empirical evidence that contradicts this claim in Ioannidis’s own field of science, namely medicine.

The main feature that distinguishes science from fiction is not that science is always right. Rather, science is superior because proper use of the scientific method allows science to correct itself when better data become available. In 2005, Ioannidis had no data and no statistical method to prove his claim. Fifteen years later, we have good data and a scientific method to test his claim. It is time for science to correct itself and to stop making unfounded claims that science is more often wrong than right.

The danger of not trusting science has been on display this year, when millions of Americans ignored good scientific evidence, leading to many unnecessary deaths. So far, 330,000 US Americans are estimated to have died of Covid-19. In Canada, a comparable country, 14,000 people have died so far. To adjust for population, we can compare deaths per million, which is about 1,000 in the USA and 400 in Canada. The unscientific approach to the pandemic in the US may explain some of this discrepancy. Along with the development of vaccines, it is clear that science is not always wrong and can save lives. Ioannidis (2005) made unfounded claims that success stories are the exception rather than the norm. At least in medicine, intervention studies show real successes more often than false ones.

The Covid-19 pandemic also provides another example where Ioannidis used off-the-cuff calculations to make big claims without any evidence. In a popular article titled “A fiasco in the making” he speculated that the Covid-19 virus might be less deadly than the flu and suggested that policies to curb the spread of the virus were irrational.

As the evidence accumulated, it became clear that the Covid-19 virus is claiming many more lives than the flu, despite policies that Ioannidis considered irrational. Scientific estimates suggest that Covid-19 is 5 to 10 times more deadly than the flu (BNN), not less deadly as Ioannidis implied. Once more, Ioannidis’s quick, unempirical claims were contradicted by hard evidence. It is not clear how many of his other 1,000-plus articles are equally questionable.

To conclude, Ioannidis should be the last one to be surprised that several of his claims are wrong. Why should he be better than other scientists? The question is only how he deals with this information. However, for science it is not important whether scientists correct themselves. Science corrects itself by replacing old, false information with better information. One question is what science does with false and misleading information that is highly cited.

If YouTube can remove a video with Ioannidis’s false claims about Covid-19 (WP), maybe PLOS Medicine can retract an article with the false claim that “most published results in science are false”.


The attention-grabbing title is simply misleading because nothing in the article supports the claim. Moreover, actual empirical data contradict the claim, at least in some domains. Most claims in science are not false, and in a world of growing science skepticism, spreading false claims about science may be just as deadly as spreading false claims about Covid-19.

If we learned anything from 2020, it is that science and democracy are not perfect, but a lot better than superstition and demagogy.

I wish you all a happier 2021.

Soric’s Maximum False Discovery Rate

Originally published January 31, 2020
Revised December 27, 2020

Psychologists, social scientists, and medical researchers often conduct empirical studies with the goal of demonstrating an effect (e.g., that a drug is effective). They do so by rejecting the null-hypothesis of no effect when a test statistic falls into a region of improbable values, p < .05. This is called null-hypothesis significance testing (NHST).

The utility of NHST has been a topic of debate. One of the oldest criticisms of NHST is that the null-hypothesis is likely to be false most of the time (Lykken, 1968). As a result, demonstrating a significant result adds little information, while failing to do so because studies have low power creates false information and confusion.

This changed in the 2000s, when the opinion emerged that most published significant results are false (Ioannidis, 2005; Simmons, Nelson, & Simonsohn, 2011). In response, there have been some attempts to estimate the actual number of false positive results (Jager & Leek, 2014). However, there has been surprisingly little progress towards this goal.

One problem for empirical tests of the false discovery rate is that the null-hypothesis is an abstraction. Just as it is impossible to count the number of points that make up the letter X, it is impossible to count null-hypotheses, because the true population effect size is always unknown (Zhao, 2011, JASA).

An article by Soric (1989, JASA) provides a simple solution to this problem. Although this article was influential in stimulating methods for genome-wide association studies (Benjamini & Hochberg, 1995; over 40,000 citations), the article itself has garnered fewer than 100 citations. Yet it provides a simple and attractive way to examine how often researchers may be obtaining significant results when the null-hypothesis is true. Rather than trying to estimate the actual false discovery rate, the method estimates the maximum false discovery rate. If a literature has a low maximum false discovery rate, readers can be assured that most significant results are true positives.

The method is simple because researchers do not have to determine whether a specific finding was a true or false positive result. Rather, the maximum false discovery rate can be computed from the actual discovery rate (i.e., the percentage of significant results for all tests).

The logic of Soric’s (1989) approach is illustrated in Table 1.

Table 1

To maximize the false discovery rate, we make the simplifying assumption that all tests of true hypotheses (i.e., the null-hypothesis is false) are conducted with 100% power (i.e., all tests of true hypotheses produce a significant result). In Table 1, this leads to 60 significant results for 60 true hypotheses. The percentage of significant results for false hypotheses (i.e., the null-hypothesis is true) is given by the significance criterion, which is set at the typical level of 5%. This means that for every 20 tests, there are 19 non-significant results and one false positive result. In Table 1 this leads to 40 false positive results for 800 tests.

In this example, the discovery rate is (40 + 60)/860 = 11.6%. Out of these 100 discoveries, 60 are true discoveries and 40 are false discoveries. Thus, the false discovery rate is 40/100 = 40%.

Soric’s (1989) insight makes it easy to examine empirically whether a literature tests many false hypotheses, using a simple formula to compute the maximum false discovery rate from the observed discovery rate; that is, the percentage of significant results. All we need to do is count and use simple math to obtain valuable information about the false discovery rate.
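Written out as code, Soric’s calculation for Table 1 looks like this (a minimal sketch; the variable names are mine):

```r
# Reproduce Table 1: 60 true hypotheses tested with 100% power and
# 800 tests of false hypotheses at alpha = .05.
sig_true  <- 60 * 1.00                       # 60 true discoveries
sig_false <- 800 * .05                       # 40 false discoveries
dr  <- (sig_true + sig_false) / 860          # discovery rate ~ 0.116
fdr <- sig_false / (sig_true + sig_false)    # false discovery rate = 0.40

# The same maximum FDR follows directly from the discovery rate alone:
(1 / dr - 1) * .05 / (1 - .05)               # ~ 0.40
```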

However, a major problem with Soric’s approach is that the observed discovery rate in a literature may be misleading because journals are more likely to publish significant results than non-significant results. This is known as publication bias or the file-drawer problem (Rosenthal, 1979). In some sciences, publication bias is a big problem. Sterling (1959; also Sterling et al., 1995) found that the observed discovery rate in psychology is over 90%. Rather than suggesting that psychologists never test false hypotheses, this suggests that publication bias is particularly strong in psychology (Fanelli, 2010). Using these inflated discovery rates to estimate the maximum FDR would severely underestimate the actual risk of false positive results.

Recently, Bartoš and Schimmack (2020) developed a statistical model that can correct for publication bias and produce a bias-corrected estimate of the discovery rate. This is called the expected discovery rate. A comparison of the observed discovery rate (ODR) and the expected discovery rate (EDR) can be used to assess the presence and extent of publication bias. In addition, the EDR can be used to compute Soric’s maximum false discovery rate when publication bias is present and inflates the ODR.

To demonstrate this approach, I use test statistics from the journal Psychonomic Bulletin and Review. The choice of this journal is motivated by prior meta-psychological investigations of results published in it. Gronau, Duizer, Bakker, and Wagenmakers (2017) used a Bayesian mixture model to estimate that about 40% of results published in this journal are false positives. Using Soric’s formula in reverse shows that this estimate implies that cognitive psychologists test only 10% true hypotheses (Table 3; 72/172 = 42%). This is close to Dreber, Pfeiffer, Almenberg, Isaksson, Wilson, Chen, Nosek, and Johannesson’s (2015) estimate of only 9% true hypotheses in cognitive psychology.

Table 3

These results are implausible because rather different results are obtained when Soric’s method is applied to the results of the Open Science Collaboration (2015) project, which conducted actual replication studies and found that 50% of published significant results in cognitive psychology could be replicated; that is, produced a significant result again in the replication study. As there was no publication bias in the replication studies, the ODR of 50% can be used to compute the maximum false discovery rate, which is only 5%. This is much lower than the estimate obtained with Gronau et al.’s (2017) mixture model.

I used an R-script to automatically extract test-statistics from articles that were published in Psychonomic Bulletin and Review from 2000 to 2010. I limited the analysis to this period because concerns about replicability and false positives might have changed research practices after 2010. The program extracted 13,571 test statistics.

Figure 1 shows clear evidence of selection bias. The observed discovery rate of 70% is much higher than the estimated discovery rate of 35%, and the 95%CI of the EDR, 25% to 53%, does not include the ODR. As a result, the ODR is an inflated estimate of the actual discovery rate and cannot be used to compute the maximum false discovery rate.

However, even with a much lower estimated discovery rate of 36%, the maximum false discovery rate is only 10%. Even with the lower bound of the confidence interval for the EDR of 25%, the maximum FDR is only 16%.

Figure 2 shows the results for a replication with test statistics from 2011 to 2019. Although changes in research practices could have produced different results, the results are unchanged. The ODR is 69% vs. 70%; the EDR is 38% vs. 35% and the point estimate of the maximum FDR is 9% vs. 10%. This close replication also implies that research practices in cognitive psychology have not changed over the past decade.

The maximum FDR estimate of about 10% confirms the results based on the replication rate in a small set of actual replication studies (OSC, 2015) with a much larger sample of test statistics. The results also show that Gronau et al.’s mixture model produces dramatically inflated estimates of the false discovery rate (see also Brunner & Schimmack, 2019, for a detailed discussion of their flawed model).

In contrast to cognitive psychology, social psychology has seen more replication failures. The OSC project estimated a discovery rate of only 25%. Even this low rate would imply that a maximum of 16% of discoveries in social psychology are false positives. A z-curve analysis of a representative sample of 678 focal tests in social psychology produced an estimated discovery rate of 19% with a 95%CI ranging from 6% to 36% (Schimmack, 2020). The point estimate implies a maximum FDR of 22%, but the lower limit of the confidence interval allows for a maximum FDR of 82%. Thus, social psychology may be a literature where most published results are false. However, the replication crisis in social psychology should not be generalized to other disciplines.


Numerous articles have claimed that false discoveries are rampant (Dreber et al., 2015; Gronau et al., 2017; Ioannidis, 2005; Simmons et al., 2011). However, these articles did not provide empirical data to support their claim. In contrast, empirical studies of the false discovery risk usually show much lower rates of false discoveries (Jager & Leek, 2014), but this finding has been dismissed (Ioannidis, 2014) or ignored (Gronau et al., 2017). Here I used a simpler approach to estimate the maximum false discovery rate and showed that most significant results in cognitive psychology are true discoveries. I hope that this demonstration revives attempts to estimate the science-wise false discovery rate (Jager & Leek, 2014) rather than relying on hypothetical scenarios or models that reflect researchers’ prior beliefs that may not match actual data (Gronau et al., 2017; Ioannidis, 2005).


Bartoš, F., & Schimmack, U. (2020, January 10). Z-Curve.2.0: Estimating Replication Rates and Discovery Rates.

Dreber, A., Pfeiffer, T., Almenberg, J., Isaksson, S., Wilson, B., Chen, Y., Nosek, B. A., & Johannesson, M. (2015). Using prediction markets to estimate the reproducibility of scientific research. Proceedings of the National Academy of Sciences, 112(50), 15343-15347. DOI: 10.1073/pnas.1516179112

Fanelli, D. (2010). “Positive” results increase down the hierarchy of the sciences. PLOS ONE, 5(4), e10068.

Gronau, Q. F., Duizer, M., Bakker, M., & Wagenmakers, E.-J. (2017). Bayesian mixture modeling of significant p values: A meta-analytic method to estimate the degree of contamination from H₀. Journal of Experimental Psychology: General, 146(9), 1223–1233.

Ioannidis, J. P. A. (2005). Why most published research findings are false. PLOS Medicine, 2(8), e124.

Ioannidis, J. P. A. (2014). Why “An estimate of the science-wise false discovery rate and application to the top medical literature” is false. Biostatistics, 15(1), 28-36. DOI: 10.1093/biostatistics/kxt036

Jager, L. R., & Leek, J. T. (2014). An estimate of the science-wise false discovery rate and application to the top medical literature. Biostatistics, 15(1), 1-12. DOI: 10.1093/biostatistics/kxt007

Lykken, D. T. (1968). Statistical significance in psychological research. Psychological Bulletin, 70(3, Pt.1), 151–159.

Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), 1–8.

Schimmack, U. (2019). The Bayesian Mixture Model is fundamentally flawed.

Schimmack, U. (2020). A meta-psychological perspective on the decade of replication failures in social psychology. Canadian Psychology/Psychologie canadienne, 61(4), 364–376.

Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359–1366.

Soric, B. (1989). Statistical “Discoveries” and Effect-Size Estimation. Journal of the American Statistical Association, 84(406), 608-610. doi:10.2307/2289950

Zhao, Y. (2011). Posterior Probability of Discovery and Expected Rate of Discovery for Multiple Hypothesis Testing and High Throughput Assays. Journal of the American Statistical Association, 106, 984-996, DOI: 10.1198/jasa.2011.tm09737