Category Archives: False Discovery Rate

Personalized P-Values for Social/Personality Psychologists

Last update 4/9/2021
(includes 2020, expanded to 353 social/personality psychologists, minor corrections, added rank numbers for easy comparison)

Introduction

Since Fisher invented null-hypothesis significance testing, researchers have used p < .05 as a statistical criterion for interpreting results as discoveries worthy of discussion (i.e., the null-hypothesis is false). Once published, these results are often treated as real findings, even though alpha does not control the risk of false discoveries.

Statisticians have warned against exclusive reliance on p < .05, but nearly 100 years after Fisher popularized this approach, it is still the most common way to interpret data. The main reason is that many attempts to improve on this practice have failed. The core problem is that a single statistical result is difficult to interpret. However, when individual results are interpreted in the context of other results, they become more informative. Based on the distribution of p-values, it is possible to estimate the maximum false discovery rate (Bartos & Schimmack, 2020; Jager & Leek, 2014). This approach can be applied to the p-values published by individual authors to adjust the alpha criterion so that the risk of false discoveries stays at a reasonable level, FDR < .05.

Researchers who mainly test true hypotheses with high power have a high discovery rate (many p-values below .05) and a low false discovery rate (FDR < .05). Figure 1 shows an example of a researcher who followed this strategy (for a detailed description of z-curve plots, see Schimmack, 2021).

We see that out of the 317 test-statistics retrieved from his articles, 246 were significant with alpha = .05. This is an observed discovery rate of 78%. We also see that this discovery rate closely matches the estimated discovery rate based on the distribution of the significant p-values, p < .05. The EDR is 79%. With an EDR of 79%, the maximum false discovery rate is only 1%. However, the 95%CI is wide and the lower bound of the CI for the EDR, 27%, allows for 14% false discoveries.
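The maximum false discovery rate follows directly from the (expected) discovery rate via Soric's (1989) bound, FDRmax = (1/EDR − 1) × α/(1 − α). A minimal Python sketch (an illustration of the formula, not the z-curve package itself) reproduces the numbers above:

```python
def max_fdr(discovery_rate, alpha=0.05):
    """Soric's (1989) upper bound on the false discovery rate."""
    return (1 / discovery_rate - 1) * alpha / (1 - alpha)

# Point estimate of the EDR (79%) vs. the lower bound of its 95% CI (27%)
print(round(max_fdr(0.79), 2))  # 0.01 -> at most ~1% false discoveries
print(round(max_fdr(0.27), 2))  # 0.14 -> up to ~14% false discoveries
```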

When the ODR matches the EDR, there is no evidence of publication bias. In this case, we can improve the estimates by fitting all p-values, including the non-significant ones. With a tighter CI for the EDR, we see that the 95%CI for the maximum FDR ranges from 1% to 3%. Thus, we can be confident that no more than 5% of the significant results with alpha = .05 are false discoveries. Readers can therefore continue to use alpha = .05 to look for interesting discoveries in Matsumoto’s articles.

Figure 3 shows the results for a different type of researcher who took a risk and studied weak effect sizes with small samples. This produces many non-significant results that are often not published. The selection for significance inflates the observed discovery rate, but the z-curve plot and the comparison with the EDR shows the influence of publication bias. Here the ODR is similar to Figure 1, but the EDR is only 11%. An EDR of 11% translates into a large maximum false discovery rate of 41%. In addition, the 95%CI of the EDR includes 5%, which means the risk of false positives could be as high as 100%. In this case, using alpha = .05 to interpret results as discoveries is very risky. Clearly, p < .05 means something very different when reading an article by David Matsumoto or Shelly Chaiken.

Rather than dismissing all of Chaiken’s results, we can try to lower alpha to reduce the false discovery rate. If we set alpha = .01, the FDR is 15%. If we set alpha = .005, the FDR is 8%. To get the FDR below 5%, we need to set alpha to .001.

A uniform criterion of FDR < 5% is applied to all researchers in the rankings below. For some this means no adjustment to the traditional criterion. For others, alpha is lowered to .01, and for a few even lower than that.

The rankings below are based on automatically extracted test-statistics from 40 journals (List of journals). The results should be interpreted with caution and treated as preliminary. They depend on the specific set of journals that were searched, the way results are reported, and many other factors. The data are available (data.drop), and researchers can exclude or add articles and run their own analyses using the z-curve package in R (https://replicationindex.com/2020/01/10/z-curve-2-0/).

I am also happy to receive feedback about coding errors. I also recommend hand-coding articles to adjust alpha for focal hypothesis tests. This typically lowers the EDR and increases the FDR. For example, the automated method produced an EDR of 31 for Bargh, whereas hand-coding of focal tests produced an EDR of 12 (Bargh-Audit).

And here are the rankings. The results are fully automated, and I was not able to cover up the fact that I placed only #170 out of 353 in the rankings. In another post, I will explain how researchers can move up in the rankings. Of course, one way to move up is to increase statistical power in future studies. The rankings will be updated again when the 2021 data are available.

Despite their preliminary nature, I am confident that the results provide valuable information. Until now, all p-values below .05 have been treated as if they are equally informative. The rankings here show that this is not the case. While p = .02 can be informative for one researcher, p = .002 may still entail a high false discovery risk for another researcher.

Rank | Name | Tests | ODR | EDR | ERR | FDR | Alpha
(ODR = observed discovery rate, EDR = expected discovery rate, ERR = expected replication rate, FDR = maximum false discovery rate, all in %; Alpha = adjusted significance criterion.)
1Robert A. Emmons588885881.05
2David Matsumoto3788379851.05
3Linda J. Skitka5326875822.05
4Jonathan B. Freeman2745975812.05
5Virgil Zeigler-Hill5157274812.05
6Arthur A. Stone3107573812.05
7David P. Schmitt2077871772.05
8Emily A. Impett5497770762.05
9Kurt Gray4877969812.05
10Kipling D. Williams8437569772.05
11John M. Zelenski1567169762.05
12Michael E. McCullough3346969782.05
13Hilary B. Bergsieker4396768742.05
14Cameron Anderson6527167743.05
15Jamil Zaki4307866763.05
16Rachel E. Jack2497066803.05
17A. Janet Tomiyama767865763.05
18Phoebe C. Ellsworth6057465723.05
19Jim Sidanius4876965723.05
20Benjamin R. Karney3925665733.05
21Carol D. Ryff2808464763.05
22Juliane Degner4356364713.05
23Steven J. Heine5977863773.05
24David M. Amodio5846663703.05
25Thomas N Bradbury3986163693.05
26Elaine Fox4727962783.05
27Klaus Fiedler19507761743.05
28Linda R. Tropp3446561803.05
29Richard W. Robins2707660704.05
30Simine Vazir1376660644.05
31Edward P. Lemay2898759814.05
32William B. Swann Jr.10707859804.05
33Margaret S. Clark5057559774.05
34Bernhard Leidner7246459654.05
35Patricia G. Devine6067158674.05
36B. Keith Payne8797158764.05
37Ximena B. Arriaga2846658694.05
38Rainer Reisenzein2016557694.05
39Barbara A. Mellers2878056784.05
40Jean M. Twenge3817256594.05
41Joris Lammers7056956694.05
42Nicholas Epley15047455724.05
43Krishna Savani6387153695.05
44Lee Jussim2268052715.05
45Edward L. Deci2847952635.05
46Richard M. Ryan9987852695.05
47Ethan Kross6146652675.05
48Roger Giner-Sorolla6638151805.05
49Jens B. Asendorpf2537451695.05
50Bertram F. Malle4227351755.05
51Tessa V. West6917151595.05
52Samuel D. Gosling1085851625.05
53Stefan Schmukle4367850815.05
54Paul Rozin4497850845.05
55Joachim I. Krueger4367850815.05
56Paul K. Piff1667750635.05
57Shinobu Kitayama9837650715.05
58Janice R. Kelly3667550705.05
59Matthew J. Hornsey16567450715.05
60James J. Gross11047250775.05
61Mark Rubin3066850755.05
62Sheena S. Iyengar2076350805.05
63Antonio L. Freitas2477950645.05
64Mina Cikara3927149805.05
65Ludwin E. Molina1636949615.05
66Edward R. Hirt10428148656.01
67Bertram Gawronski18037248766.01
68Penelope Lockwood4587148706.01
69John T. Cacioppo4387647696.01
70Daniel M. Wegner6027647656.01
71Agneta H. Fischer9527547696.01
72Matthew D. Lieberman3987247806.01
73Leaf van Boven7117247676.01
74Stephanie A. Fryberg2486247666.01
75Jennifer S. Lerner1818046616.01
76Rainer Banse4027846726.01
77Alice H. Eagly3307546716.01
78Jeanne L. Tsai12417346676.01
79Dacher Keltner12337245646.01
80Constantine Sedikides25667145706.01
81Andrea L. Meltzer5495245726.01
82R. Chris Fraley6427045727.01
83Ursula Hess7747844717.01
84Brian A. Nosek8166844817.01
85Charles M. Judd10547643687.01
86Jessica L. Tracy6327443717.01
87Mark Schaller5657343617.01
88Jason P. Mitchell6007343737.01
89S. Alexander Haslam11987243647.01
90Mario Mikulincer9018942647.01
91Susan T. Fiske9117842747.01
92Bernadette Park9737742647.01
93Jolanda Jetten19567342677.01
94Paul A. M. Van Lange10927042637.01
95Lisa Feldman Barrett6446942707.01
96Wendi L. Gardner7986742637.01
97Philip E. Tetlock5497941737.01
98Phillip Atiba Goff2996841627.01
99Jordan B. Peterson2666041797.01
100Amanda B. Diekman4388341707.01
101Stacey Sinclair3277041578.01
102Michael Inzlicht6866641638.01
103Tiffany A. Ito3498040648.01
104Wendy Wood4627540628.01
105Norbert Schwarz13377240638.01
106Richard E. Petty27716940648.01
107Elizabeth Page-Gould4115740668.01
108Tim Wildschut13747340648.01
109Veronika Job3627040638.01
110Marcel Zeelenberg8687639798.01
111Christian S. Crandall3627539598.01
112Tobias Greitemeyer17377239678.01
113Carol S. Dweck10287039638.01
114Jason E. Plaks5827039678.01
115Jerry Suls4137138688.01
116Eric D. Knowles3846838648.01
117C. Nathan DeWall13367338639.01
118John F. Dovidio20196938629.01
119Harry T. Reis9986938749.01
120Joshua Correll5496138629.01
121Abigail A. Scholer5565838629.01
122Clayton R. Critcher6978238639.01
123Kevin N. Ochsner4067937709.01
124Ayelet Fishbach14167837599.01
125Fritz Strack6077537569.01
126Mahzarin R. Banaji8807337789.01
127Antony S. R. Manstead16567237629.01
128Mark J. Brandt2777037709.01
129Lorne Campbell4336737619.01
130Geoff MacDonald4066737679.01
131Sanford E. DeVoe2367137619.01
132Duane T. Wegener9807736609.01
133Craig A. Anderson4677636559.01
134D. S. Moskowitz34187436639.01
135Joanne V. Wood10937436609.01
136Todd B. Kashdan3777336619.01
137Barbara L. Fredrickson2877236619.01
138Nyla R. Branscombe12767036659.01
139Niall Bolger3766736589.01
140Yaacov Schul4116136649.01
141Jeff T. Larsen18174366710.01
142Eva Walther49382356610.01
143Michael D. Robinson138878356610.01
144C. Miguel Brendl12176356810.01
145Samuel L. Gaertner32175356110.01
146Victoria M. Esses29575355310.01
147Azim F. Sharif18374356810.01
148Michael Harris Bond37873358410.01
149Glenn Adams27071357310.01
150John T. Jost79470356110.01
151Emily Balcetis59969356810.01
152Eric L. Uhlmann45767356110.01
153Igor Grossmann20364356610.01
154Nalini Ambady125662355610.01
155Diana I. Tamir15662356210.01
156Daphna Oyserman44655355410.01
157Thomas Gilovich119380346910.01
158Alison Ledgerwood21475345410.01
159Linda J. Levine49574347810.01
160Paula M. Niedenthal52269346110.01
161Wiebke Bleidorn9963347410.01
162Ozlem Ayduk54962345910.01
163Christopher R. Agnew32575337610.01
164Kerry Kawakami48768335610.01
165Danu Anthony Stinson49477335411.01
166Jennifer A. Richeson83167335211.01
167Malte Friese50161335711.01
168Michelle N. Shiota24260336311.01
169Margo J. Monteith77376327711.01
170Ulrich Schimmack31875326311.01
171Mark Snyder56272326311.01
172Robert B. Cialdini37972325611.01
173Russell H. Fazio109469326111.01
174Eric van Dijk23867326011.01
175Eli J. Finkel139262325711.01
176E. Ashby Plant83177315111.01
177Christopher K. Hsee68975316311.01
178Yuen J. Huo13274318011.01
179Delroy L. Paulhus12177318212.01
180John A. Bargh65172315512.01
181Roy F. Baumeister244269315212.01
182Tom Pyszczynski94869315412.01
183Jamie Arndt131869315012.01
184Kathleen D. Vohs94468315112.01
185Vivian Zayas25171316012.01
186Anthony G. Greenwald35772308312.01
187Dale T. Miller52171306412.01
188Aaron C. Kay132070305112.01
189Jennifer Crocker51568306712.01
190Arthur Aron30765305612.01
191Arthur Aron30765305612.01
192Lauren J. Human44759307012.01
193Nicholas O. Rule129468307513.01
194Steven W. Gangestad19863304113.005
195Boris Egloff27481295813.01
196Eliot R. Smith44579297313.01
197Jeff Greenberg135877295413.01
198Monica Biernat81377295713.01
199Hazel Rose Markus67476296813.01
200Russell Spears228673295513.01
201Richard E. Nisbett31973296913.01
202Gordon B. Moskowitz37472295713.01
203Nir Halevy26268297213.01
204Dirk Wentura83065296413.01
205Caryl E. Rusbult21860295413.01
206E. Allan Lind37082297213.01
207Roland Neumann25877286713.01
208Yoav Bar-Anan52575287613.01
209Jeffry A. Simpson69774285513.01
210Adam D. Galinsky215470284913.01
211Joshua Aronson18385284614.005
212Matthew Feinberg29577286914.01
213Elizabeth W. Dunn39575286414.01
214Naomi I. Eisenberger17974287914.01
215Eddie Harmon-Jones73873287014.01
216Brent W. Roberts56272287714.01
217Grainne M. Fitzsimons58568284914.01
218Geoffrey J. Leonardelli29068284814.005
219Sander L. Koole76765285214.01
220Richard J. Davidson38064285114.01
221Shelly L. Gable36464285014.01
222Guido H. E. Gendolla42276274714.005
223Jan De Houwer197270277214.01
224Karl Christoph Klauer80167276514.01
225Jennifer S. Beer8056275414.01
226Vanessa K. Bohns42276277415.01
227Charles Stangor18581276815.01
228Klaus R. Scherer46783267815.01
229Galen V. Bodenhausen58574266115.01
230Claude M. Steele43473264215.005
231Sonja Lyubomirsky53171265915.01
232William G. Graziano53271266615.01
233Kristin Laurin64863265115.01
234Kerri L. Johnson53276257615.01
235Phillip R. Shaver56681257116.01
236Ronald S. Friedman18379254416.005
237Mark J. Landau95078254516.005
238Nurit Shnabel56476257916.01
239David Dunning81874257016.01
240Laurie A. Rudman48272256816.01
241Joel Cooper25772253916.005
242Batja Mesquita41671257316.01
243David A. Lishner10565256316.01
244Steven J. Sherman88874246216.01
245Alison L. Chasteen22368246916.01
246Mark W. Baldwin24772244117.005
247Thomas Mussweiler60470244317.005
248Shigehiro Oishi110964246117.01
249Evan P. Apfelbaum25662244117.005
250Jonathan Haidt36876237317.01
251Jeffrey W Sherman99268237117.01
252Brandon J. Schmeichel65266234517.005
253Sam J. Maglio32572234217.005
254Roland Imhoff36574237318.01
255Felicia Pratto41073237518.01
256Klaus Rothermund73871237618.01
257Bernard A. Nijstad69371235218.005
258Jennifer L. Eberhardt20271236218.005
259Marilynn B. Brewer31475226218.005
260Michael Ross116470226218.005
261Dieter Frey153868225818.005
262David M. Buss46182228019.01
263Sean M. McCrea58473225419.005
264Wendy Berry Mendes96568224419.005
265Spike W. S. Lee14568226419.005
266Yoel Inbar28067227119.01
267Serena Chen86572226719.005
268Joseph P. Forgas88883215919.005
269Maya Tamir134280216419.005
270Paul W. Eastwick58365216919.005
271Elizabeth Levy Paluck3184215520.005
272Kees van den Bos115084216920.005
273Dolores Albarracin54066215620.005
274Andrew J. Elliot101881206721.005
275Ana Guinote37876204721.005
276David A. Pizarro22771206921.005
277Kentaro Fujita45869206221.005
278Geoffrey L. Cohen159068205021.005
279Tanya L. Chartrand42467203321.001
280Jay J. van Bavel43764207121.005
281William A. Cunningham23876206422.005
282Robert S. Wyer87182196322.005
283Amy J. C. Cuddy17081197222.005
284Nilanjana Dasgupta38376195222.005
285Gerald L. Clore45674194522.001
286Peter M. Gollwitzer130364195822.005
287Travis Proulx17463196222.005
288Selin Kesebir32866197322.005
289Richard P. Eibach75369194723.001
290James K. McNulty104756196523.005
291Kennon M. Sheldon69874186623.005
292Wilhelm Hofmann62467186623.005
293James M. Tyler13087187424.005
294Roland Deutsch36578187124.005
295Laura L. Carstensen72377186424.005
296Frank D. Fincham73469185924.005
297Toni Schmader54669186124.005
298Lisa K. Libby41865185424.005
299Ed Diener49864186824.005
300Chen-Bo Zhong32768184925.005
301Michel Tuan Pham24686176825.005
302Brad J. Bushman89774176225.005
303Ara Norenzayan22572176125.005
304E. Tory. Higgins186868175425.001
305Benoit Monin63565175625.005
306Carey K. Morewedge63376176526.005
307Michael W. Kraus61772175526.005
308Leandre R. Fabrigar63270176726.005
309Ap Dijksterhuis75068175426.005
310Timothy D. Wilson79865176326.005
311Joseph Cesario14662174526.001
312Simone Schnall27062173126.001
313Melissa J. Ferguson116372166927.005
314Daniel T. Gilbert72465166527.005
315Charles S. Carver15482166428.005
316Leif D. Nelson40974166428.005
317Mark P. Zanna65964164828.001
318Sandra L. Murray69760165528.001
319Laura A. King39176166829.005
320Heejung S. Kim85859165529.001
321Gun R. Semin15979156429.005
322Tal Eyal43962156229.005
323Nathaniel M Lambert45666155930.001
324Dana R. Carney20060155330.001
325Nira Liberman130475156531.005
326Lee Ross34977146331.001
327Shelley E. Taylor42769145231.001
328Ziva Kunda21767145631.001
329Jon K. Maner104065145232.001
330Arie W. Kruglanski122878145833.001
331Gregory M. Walton58769144433.001
332Gabriele Oettingen104761144933.001
333Sarah E. Hill50978135234.001
334Fiona Lee22167135834.001
335Michael A. Olson34665136335.001
336Michael A. Zarate12052133136.001
337Melody M. Chao23757135836.001
338Jamie L. Kurtz9155133837.001
339Daniel M. Oppenheimer19880126037.001
340Deborah A. Prentice8980125738.001
341Yaacov Trope127773125738.001
342Steven J. Spencer54167124438.001
343William von Hippel39865124840.001
344Oscar Ybarra30563125540.001
345Dov Cohen64168114441.001
346Ian McGregor40966114041.001
347Mark Muraven49652114441.001
348Susan M. Andersen36174114843.001
349Martie G. Haselton18673115443.001
350Shelly Chaiken36074115244.001
351Linda M. Isbell1156494150.001
352Hans Ijzerman2145694651.001
353Cheryl J. Wakslak2787383559.001

Ioannidis is Wrong Most of the Time

John P. A. Ioannidis is a rock star in the world of science (wikipedia).

By traditional standards of science, he is one of the most prolific and influential scientists alive. He has published over 1,000 articles that have been cited over 100,000 times.

He is best known for the title of his article “Why most published research findings are false” that has been cited nearly 5,000 times. The irony of this title is that it may also apply to Ioannidis, especially because there is a trade-off between quality and quantity in publishing.

Fact Checking Ioannidis

The title of Ioannidis’s article implies a factual statement: “Most published results ARE false.” However, the actual article does not contain empirical data to support this claim. Rather, Ioannidis presents some hypothetical scenarios that show under what conditions published results MAY BE false.

To produce mostly false findings, a literature has to meet two conditions.

First, it has to test mostly false hypotheses.
Second, it has to test hypotheses in studies with low statistical power, that is a low probability of producing true positive results.

To give a simple example, imagine a field that tests only 10% true hypotheses with just 20% power. As power determines the percentage of true discoveries, only 2 of the 10 true hypotheses will produce a significant result. Meanwhile, the alpha criterion of 5% implies that 5% of the false hypotheses will also produce a significant result; that is, 4.5 of the 90 false hypotheses will reach significance. As a result, there will be more than twice as many false positives (4.5 per 100 tests) as true positives (2 per 100 tests).
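The arithmetic of this hypothetical field can be checked in a few lines (the numbers are the ones from the paragraph above):

```python
# Hypothetical field: 10 true and 90 false hypotheses per 100 tests,
# 20% power, alpha = .05 (the scenario described above).
n_true, n_false = 10, 90
power, alpha = 0.20, 0.05

true_positives = n_true * power     # 10 * .20 = 2 significant true results
false_positives = n_false * alpha   # 90 * .05 = 4.5 significant false results

# Among the significant results, false positives outnumber true positives:
fdr = false_positives / (true_positives + false_positives)
print(true_positives, false_positives, round(fdr, 2))  # 2.0 4.5 0.69
```

In this scenario, 69% of the significant results are false positives, which is how a literature ends up with mostly false discoveries.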

These relatively simple calculations were well known by 2005 (Soric, 1989). Why, then, did Ioannidis’s article have such a big impact? The answer is that Ioannidis convinced many people that his hypothetical examples are realistic and describe most areas of science.

2020 has shown that Ioannidis’s claim does not apply to all areas of science. With amazing speed, bio-tech companies were able to produce not just one but several successful vaccines with high effectiveness. Clearly, some sciences are making real progress. On the other hand, other areas of science suggest that Ioannidis’s claims were accurate. For example, the whole literature on single-gene variations as predictors of human behavior has produced mostly false claims. Social psychology has a replication crisis in which only 25% of published results could be replicated (OSC, 2015).

Aside from this sporadic and anecdotal evidence, it remains unclear how many false results are published in science as a whole. The reason is that it is impossible to quantify the number of false positive results in science. Fortunately, it is not necessary to know the actual rate of false positives to test Ioannidis’s prediction that most published results are false positives. All we need to know is the discovery rate of a field (Soric, 1989). The discovery rate makes it possible to quantify the maximum percentage of false positive discoveries. If the maximum false discovery rate is well below 50%, we can reject Ioannidis’s hypothesis that most published results are false.

The empirical problem is that the observed discovery rate in a field may be inflated by publication bias. It is therefore necessary to estimate the amount of publication bias and, if bias is present, to correct the discovery rate.

In 2005, Ioannidis and Trikalinos (2005) developed their own test for publication bias, but this test had a number of shortcomings. First, it could be biased in heterogeneous literatures. Second, it required effect sizes to compute power. Third, it only provided information about the presence of publication bias and did not quantify it. Fourth, it did not provide bias-corrected estimates of the true discovery rate.

When the replication crisis became apparent in psychology, I started to develop new bias tests that address these limitations (Bartos & Schimmack, 2020; Brunner & Schimmack, 2020; Schimmack, 2012). The newest tool, called z-curve.2.0 (and yes, there is an app for that), overcomes all of the limitations of Ioannidis’s approach. Most important, it makes it possible to compute a bias-corrected discovery rate, called the expected discovery rate. The expected discovery rate can be used to examine and quantify publication bias by comparing it to the observed discovery rate. Moreover, the expected discovery rate can be used to compute the maximum false discovery rate.

The Data

The data were compiled by Simon Schwab from the Cochrane database (https://www.cochrane.org/), which covers results from thousands of clinical trials. The data are publicly available (https://osf.io/xjv9g/) under a CC-By Attribution 4.0 International license (“Re-estimating 400,000 treatment effects from intervention studies in the Cochrane Database of Systematic Reviews”; see also van Zwet, Schwab, & Senn, 2020).

Studies often report results for several outcomes. I selected only results for the primary outcome. It is often suggested that researchers switch outcomes to produce significant results. Thus, primary outcomes are the most likely to show evidence of publication bias, while secondary outcomes might even be biased to show more negative results for the same reason. The choice of primary outcomes also ensures that the test statistics are statistically independent because they are based on independent samples.

Results

I first fitted the default model to the data. The default model assumes that publication bias is present and only uses statistically significant results to fit the model. Z-curve.2.0 uses a finite mixture model to approximate the observed distribution of z-scores with a limited number of non-centrality parameters. After finding optimal weights for the components, power can be computed as the weighted average of the implied power of the components (Bartos & Schimmack, 2020). Bootstrapping is used to compute 95% confidence intervals that have been shown to have good coverage in simulation studies (Bartos & Schimmack, 2020).

The main finding with the default model is that the model (grey curve) fits the observed distribution of z-scores very well in the range of significant results. However, z-curve has problems extrapolating from significant results to the distribution of non-significant results. In this case, the model (grey curve) underestimates the number of non-significant results. Thus, there is no evidence of publication bias. This can be seen in a comparison of the observed and expected discovery rates: the observed discovery rate of 26% is lower than the expected discovery rate of 38%.

When there is no evidence of publication bias, there is no reason to fit the model only to the significant results. Rather, the model can be fitted to the full distribution of all test statistics. The results are shown in Figure 2.

The key finding for this blog post is that the estimated discovery rate of 27% closely matches the observed discovery rate of 26%. Thus, there is no evidence of publication bias. In this case, simply counting the percentage of significant results provides a valid estimate of the discovery rate in clinical trials. Roughly one-quarter of trials end up with a positive result. The new question is how many of these results might be false positives.

To maximize the rate of false positives, we have to assume that true positives were obtained with maximum power (Soric, 1989). In this scenario, we could get as many as 14% (4 over 27) false positive results.

Even if we use the upper limit of the 95% confidence interval, we only get 19% false positives. Moreover, it is clear that Soric’s (1989) scenario overestimates the false discovery rate because it is unlikely that all tests of true hypotheses have 100% power.

In short, an empirical test of Ioannidis’s hypothesis that most published results in science are false shows that this claim is at best a wild overgeneralization. It is not true for clinical trials in medicine. In fact, the real problem is that many clinical trials may be underpowered to detect clinically relevant effects. This can be seen in the estimated replication rate of 61%, which is the mean power of studies with significant results. This estimate of power includes false positives, which have 5% power. If we assume that 14% of the significant results are false positives, the conditional power based on a true discovery is estimated to be 70% (.14 × .05 + .86 × .70 = .61).

With information about power, we can modify Soric’s worst-case scenario and change power from 100% to 70%. This has only a small influence on the false positive discovery rate, which decreases to 11% (3 over 27). However, the rate of false negatives increases from 0 to 14% (10 over 74). This also means that there are now about three times as many false negatives as false positives (10 over 3).
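This modified scenario can be worked out explicitly. Under the stated assumptions (27% discovery rate, alpha = .05, 70% power for true hypotheses), the share of true hypotheses follows from requiring that false and true positives together reproduce the discovery rate; a sketch:

```python
alpha, power = 0.05, 0.70
discoveries = 27.0  # significant results per 100 tests (27% discovery rate)

# Solve alpha * (100 - T) + power * T = discoveries for T (true hypotheses):
t = (discoveries - 100 * alpha) / (power - alpha)   # ~33.8 true hypotheses
f = 100 - t                                         # ~66.2 false hypotheses

false_positives = f * alpha        # ~3.3 per 100 tests ("3 over 27")
false_negatives = t * (1 - power)  # ~10.2 per 100 tests ("10 over 74")

fdr = false_positives / discoveries          # ~.12
fnr = false_negatives / (100 - discoveries)  # ~.14
```

With rounding, these are the 3-over-27 and 10-over-74 figures above, and false negatives outnumber false positives about three to one.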

Even this scenario overestimates the power of studies that produced false negative results because, when power is heterogeneous, the power of studies with significant results is higher than the power of studies with non-significant results (Brunner & Schimmack, 2020). In the worst case, the null-hypothesis may rarely be true, and the power of studies with non-significant results could be as low as 14.5%. To explain: if we redid all of the studies, we would expect 61% of the significant studies to produce a significant result again, yielding 16.5% significant results. We also expect that the discovery rate will be 27% again. Thus, the remaining 73% of studies have to make up the difference between 27% and 16.5%, which is 10.5%. For 73 studies to produce 10.5 significant results, the studies have to have 14.5% power: 27 = 27 × .61 + 73 × .145.
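The 14.5% figure follows from solving the mixing equation at the end of the paragraph for the power of the non-significant studies; a quick check:

```python
dr, err = 0.27, 0.61  # discovery rate and expected replication rate

# In a hypothetical redo of all studies, significant studies replicate at
# the ERR and the overall discovery rate must be reproduced:
#   dr = dr * err + (1 - dr) * p_nonsig
p_nonsig = dr * (1 - err) / (1 - dr)
print(round(p_nonsig, 3))  # 0.144, i.e. ~14.5% power
```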

In short, while Ioannidis predicted that most published results are false positives, it is much more likely that most published results are false negatives. This problem is of course not new. To make conclusions about effectiveness of treatments, medical researchers usually do not rely on a single clinical trial. Rather results of several studies are combined in a meta-analysis. As long as there is no publication bias, meta-analyses of original studies can boost power and reduce the risk of false negative results. It is therefore encouraging that the present results suggest that there is relatively little publication bias in these studies. Additional analyses for subgroups of studies can be conducted, but are beyond the main point of this blog post.

Conclusion

Ioannidis wrote an influential article that used hypothetical scenarios to make the prediction that most published results are false positives. Although this article is often cited as if it contained evidence to support this claim, the article contained no empirical evidence. Surprisingly, there have also been few attempts to test Ioannidis’s claim empirically. Probably the main reason is that nobody knew how to test it. Here I showed a way to test Ioannidis’s claim, and I presented clear empirical evidence that contradicts this claim in Ioannidis’s own field of science, namely medicine.

The main feature that distinguishes science and fiction is not that science is always right. Rather, science is superior because proper use of the scientific method allows for science to correct itself, when better data become available. In 2005, Ioannidis had no data and no statistical method to prove his claim. Fifteen years later, we have good data and a scientific method to test his claim. It is time for science to correct itself and to stop making unfounded claims that science is more often wrong than right.

The danger of not trusting science has been on display this year, as millions of Americans ignored good scientific evidence, leading to many unnecessary deaths. So far, an estimated 330,000 US Americans have died of Covid-19. In a comparable country, Canada, 14,000 people have died so far. To adjust for population, we can compare the number of deaths per million, which is about 1,000 in the USA and 400 in Canada. The unscientific approach to the pandemic in the US may explain some of this discrepancy. Along with the development of vaccines, it is clear that science is not always wrong and can save lives. Ioannidis (2005) made unfounded claims that success stories are the exception rather than the norm. At least in medicine, intervention studies show real successes more often than false ones.

The Covid-19 pandemic also provides another example where Ioannidis used off-the-cuff calculations to make big claims without any evidence. In a popular article titled “A fiasco in the making” he speculated that the Covid-19 virus might be less deadly than the flu and suggested that policies to curb the spread of the virus were irrational.

As the evidence accumulated, it became clear that the Covid-19 virus is claiming many more lives than the flu, despite policies that Ioannidis considered to be irrational. Scientific estimates suggest that Covid-19 is 5 to 10 times more deadly than the flu (BNN), not less deadly as Ioannidis implied. Once more, Ioannidis’s quick, unempirical claims were contradicted by hard evidence. It is not clear how many of his other 1,000-plus articles are equally questionable.

To conclude, Ioannidis should be the last one to be surprised that several of his claims are wrong. Why should he be better than other scientists? The question is only how he deals with this information. However, for science it is not important whether scientists correct themselves. Science corrects itself by replacing old, false information with better information. One question is what science does with false and misleading information that is highly cited.

If YouTube can remove a video with Ioannidis’s false claims about Covid-19 (WP), maybe PLOS Medicine can retract an article with the false claim that “most published results in science are false”.


The attention-grabbing title is simply misleading because nothing in the article supports the claim. Moreover, actual empirical data contradict the claim, at least in some domains. Most claims in science are not false, and in a world of growing science skepticism, spreading false claims about science may be just as deadly as spreading false claims about Covid-19.

If we learned anything from 2020, it is that science and democracy are not perfect, but a lot better than superstition and demagogy.

I wish you all a happier 2021.

Soric’s Maximum False Discovery Rate

Originally published January 31, 2020
Revised December 27, 2020

Psychologists, social scientists, and medical researchers often conduct empirical studies with the goal of demonstrating an effect (e.g., that a drug is effective). They do so by rejecting the null-hypothesis that there is no effect when a test statistic falls into a region of improbable test-statistics, p < .05. This is called null-hypothesis significance testing (NHST).

The utility of NHST has long been debated. One of the oldest criticisms is that the null-hypothesis is likely to be false most of the time (Lykken, 1968). As a result, demonstrating a significant result adds little information, while failing to do so because studies have low power creates false information and confusion.

This changed in the 2000s, when the opinion emerged that most published significant results are false (Ioannidis, 2005; Simmons, Nelson, & Simonsohn, 2011). In response, there have been some attempts to estimate the actual number of false positive results (Jager & Leek, 2014). However, there has been surprisingly little progress towards this goal.

One problem for empirical tests of the false discovery rate is that the null-hypothesis is an abstraction. Just as it is impossible to count the number of points that make up the letter X, it is impossible to count null-hypotheses because the true population effect size is always unknown (Zhao, 2011, JASA).

An article by Soric (1989, JASA) provides a simple solution to this problem. Although this article was influential in stimulating methods for genome-wide association studies (Benjamini & Hochberg, 1995, over 40,000 citations), the article itself has garnered fewer than 100 citations. Yet, it provides a simple and attractive way to examine how often researchers may be obtaining significant results when the null-hypothesis is true. Rather than trying to estimate the actual false discovery rate, the method estimates the maximum false discovery rate. If a literature has a low maximum false discovery rate, readers can be assured that most significant results are true positives.

The method is simple because researchers do not have to determine whether a specific finding was a true or false positive result. Rather, the maximum false discovery rate can be computed from the actual discovery rate (i.e., the percentage of significant results for all tests).

The logic of Soric’s (1989) approach is illustrated in Table 1.

            NS     SIG    Total
TRUE         0      60       60
FALSE      760      40      800
Total      760     100      860

Table 1

To maximize the false discovery rate, we make the simplifying assumption that all tests of true hypotheses (i.e., the null-hypothesis is false) are conducted with 100% power (i.e., all tests of true hypotheses produce a significant result). In Table 1, this leads to 60 significant results for 60 true hypotheses. The percentage of significant results for false hypotheses (i.e., the null-hypothesis is true) is given by the significance criterion, which is set at the typical level of 5%. This means that for every 20 tests, there are 19 non-significant results and one false positive result. In Table 1 this leads to 40 false positive results for 800 tests.

In this example, the discovery rate is (40 + 60)/860 = 11.6%. Out of these 100 discoveries, 60 are true discoveries and 40 are false discoveries. Thus, the false discovery rate is 40/100 = 40%.

Soric’s (1989) insight makes it easy to examine empirically whether a literature tests many false hypotheses, using a simple formula to compute the maximum false discovery rate from the observed discovery rate; that is, the percentage of significant results. All we need to do is count and use simple math to obtain valuable information about the false discovery rate.
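Soric’s bound can be written as max FDR = (1/DR − 1) × α/(1 − α), where DR is the discovery rate: with 100% power for true hypotheses, all non-significant results come from false hypotheses, so the number of false positives is pinned down by the non-significant count. A minimal sketch of this computation (Python; the function name is mine), reproducing the Table 1 example:

```python
def soric_max_fdr(discovery_rate, alpha=0.05):
    """Soric's (1989) upper bound on the false discovery rate.

    Assumes all true hypotheses are tested with 100% power, so every
    non-significant result comes from a false hypothesis, of which a
    fraction alpha / (1 - alpha) additionally produced false positives.
    """
    return (1 / discovery_rate - 1) * alpha / (1 - alpha)

# Table 1: 100 significant results out of 860 tests -> DR = 11.6%
dr = 100 / 860
print(round(soric_max_fdr(dr), 2))  # -> 0.4, the 40% false discovery rate in Table 1
```

Note that the bound depends only on the discovery rate and alpha; no individual result needs to be classified as true or false.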

However, a major problem with Soric’s approach is that the observed discovery rate in a literature may be misleading because journals are more likely to publish significant results than non-significant ones. This is known as publication bias or the file-drawer problem (Rosenthal, 1979). In some sciences, publication bias is a big problem. Sterling (1959; see also Sterling et al., 1995) found that the observed discovery rate in psychology is over 90%. Rather than suggesting that psychologists never test false hypotheses, this suggests that publication bias is particularly strong in psychology (Fanelli, 2010). Using these inflated discovery rates to estimate the maximum FDR would severely underestimate the actual risk of false positive results.

Recently, Bartoš and Schimmack (2020) developed a statistical model that can correct for publication bias and produce a bias-corrected estimate of the discovery rate. This is called the expected discovery rate. A comparison of the observed discovery rate (ODR) and the expected discovery rate (EDR) can be used to assess the presence and extent of publication bias. In addition, the EDR can be used to compute Soric’s maximum false discovery rate when publication bias is present and inflates the ODR.

To demonstrate this approach, I use test-statistics from the journal Psychonomic Bulletin and Review. The choice of this journal is motivated by prior meta-psychological investigations of results published in this journal. Gronau, Duizer, Bakker, and Wagenmakers (2017) used a Bayesian Mixture Model to estimate that about 40% of results published in this journal are false positive results. Using Soric’s formula in reverse shows that this estimate implies that cognitive psychologists test only about 10% true hypotheses (Table 3; 72/172 = 42%). This is close to Dreber, Pfeiffer, Almenberg, Isaksson, Wilson, Chen, Nosek, and Johannesson’s (2015) estimate of only 9% true hypotheses in cognitive psychology.

            NS     SIG    Total
TRUE         0     100      100
FALSE     1368      72     1440
Total     1368     172     1540

Table 3

These results are implausible because rather different results are obtained when Soric’s method is applied to the results of the Open Science Collaboration (2015) project, which conducted actual replication studies and found that 50% of published significant results could be replicated; that is, they produced a significant result again in the replication study. As there was no publication bias in the replication studies, the ODR of 50% can be used to compute the maximum false discovery rate, which is only 5%. This is much lower than the estimate obtained with Gronau et al.’s (2017) mixture model.

I used an R-script to automatically extract test-statistics from articles that were published in Psychonomic Bulletin and Review from 2000 to 2010. I limited the analysis to this period because concerns about replicability and false positives might have changed research practices after 2010. The program extracted 13,571 test statistics.

Figure 1 shows clear evidence of selection bias. The observed discovery rate of 70% is much higher than the estimated discovery rate of 35%, and the 95%CI of the EDR, 25% to 53%, does not include the ODR. As a result, the ODR produces an inflated estimate of the actual discovery rate and cannot be used to compute the maximum false discovery rate.

However, even with the much lower estimated discovery rate of 35%, the maximum false discovery rate is only 10%. Even at the lower bound of the confidence interval for the EDR, 25%, the maximum FDR is only 16%.
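With publication bias, the same Soric bound is simply applied to the bias-corrected EDR rather than the ODR. A short check of these numbers (a sketch; the function name is mine):

```python
def soric_max_fdr(discovery_rate, alpha=0.05):
    # Soric's (1989) bound: (1/DR - 1) * alpha / (1 - alpha)
    return (1 / discovery_rate - 1) * alpha / (1 - alpha)

# EDR point estimate and lower bound of its 95% confidence interval
print(round(soric_max_fdr(0.35), 2))  # -> 0.1, the ~10% maximum FDR
print(round(soric_max_fdr(0.25), 2))  # -> 0.16, the ~16% at the CI bound
```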

Figure 2 shows the results for a replication with test statistics from 2011 to 2019. Although changes in research practices could have produced different results, the results are unchanged. The ODR is 69% vs. 70%; the EDR is 38% vs. 35% and the point estimate of the maximum FDR is 9% vs. 10%. This close replication also implies that research practices in cognitive psychology have not changed over the past decade.

The maximum FDR estimate of 10% confirms, with a much larger sample of test statistics, the results based on the replication rate in a small set of actual replication studies (OSC, 2015). The results also show that Gronau et al.’s mixture model produces dramatically inflated estimates of the false discovery rate (see also Brunner & Schimmack, 2019, for a detailed discussion of their flawed model).

In contrast to cognitive psychology, social psychology has seen more replication failures. The OSC project estimated a discovery rate of only 25%. Even this low rate would imply that a maximum of 16% of discoveries in social psychology are false positives. A z-curve analysis of a representative sample of 678 focal tests in social psychology produced an estimated discovery rate of 19% with a 95%CI ranging from 6% to 36% (Schimmack, 2020). The point estimate implies a maximum FDR of 22%, but the lower limit of the confidence interval allows for a maximum FDR of 82%. Thus, social psychology may be a literature where most published results are false. However, the replication crisis in social psychology should not be generalized to other disciplines.

Conclusion

Numerous articles have made claims that false discoveries are rampant (Dreber et al., 2015; Gronau et al., 2017; Ioannidis, 2005; Simmons et al., 2011). However, these articles did not provide empirical data to support their claim. In contrast, empirical studies of the false discovery risk usually show much lower rates of false discoveries (Jager & Leek, 2014), but this finding has been dismissed (Ioannidis, 2014) or ignored (Gronau et al., 2017). Here I used a simpler approach to estimate the maximum false discovery rate and showed that most significant results in cognitive psychology are true discoveries. I hope that this demonstration revives attempts to estimate the science-wise false discovery rate (Jager & Leek, 2014) rather than relying on hypothetical scenarios or models that reflect researchers’ prior beliefs that may not match actual data (Gronau et al., 2017; Ioannidis, 2005).

References

Bartoš, F., & Schimmack, U. (2020, January 10). Z-Curve.2.0: Estimating Replication Rates and Discovery Rates. https://doi.org/10.31234/osf.io/urgtn

Dreber, A., Pfeiffer, T., Almenberg, J., Isaksson, S., Wilson, B., Chen, Y., Nosek, B. A., & Johannesson, M. (2015). Using prediction markets to estimate the reproducibility of scientific research. Proceedings of the National Academy of Sciences, 112(50), 15343-15347. https://doi.org/10.1073/pnas.1516179112

Fanelli, D. (2010). “Positive” results increase down the hierarchy of the sciences. PLoS ONE, 5(4), e10068. https://doi.org/10.1371/journal.pone.0010068

Gronau, Q. F., Duizer, M., Bakker, M., & Wagenmakers, E.-J. (2017). Bayesian mixture modeling of significant p values: A meta-analytic method to estimate the degree of contamination from H₀. Journal of Experimental Psychology: General, 146(9), 1223–1233. https://doi.org/10.1037/xge0000324

Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124. https://doi.org/10.1371/journal.pmed.0020124

Ioannidis, J. P. A. (2014). Why “An estimate of the science-wise false discovery rate and application to the top medical literature” is false. Biostatistics, 15(1), 28-36. https://doi.org/10.1093/biostatistics/kxt036

Jager, L. R., & Leek, J. T. (2014). An estimate of the science-wise false discovery rate and application to the top medical literature. Biostatistics, 15(1), 1-12. https://doi.org/10.1093/biostatistics/kxt007

Lykken, D. T. (1968). Statistical significance in psychological research. Psychological Bulletin, 70(3, Pt.1), 151–159. https://doi.org/10.1037/h0026141

Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), 1–8.

Schimmack, U. (2019). The Bayesian Mixture Model is fundamentally flawed. https://replicationindex.com/2019/04/01/the-bayesian-mixture-model-is-fundamentally-flawed/

Schimmack, U. (2020). A meta-psychological perspective on the decade of replication failures in social psychology. Canadian Psychology/Psychologie canadienne, 61(4), 364–376. https://doi.org/10.1037/cap0000246

Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359–1366. https://doi.org/10.1177/0956797611417632

Soric, B. (1989). Statistical “Discoveries” and Effect-Size Estimation. Journal of the American Statistical Association, 84(406), 608-610. doi:10.2307/2289950

Zhao, Y. (2011). Posterior Probability of Discovery and Expected Rate of Discovery for Multiple Hypothesis Testing and High Throughput Assays. Journal of the American Statistical Association, 106, 984-996, DOI: 10.1198/jasa.2011.tm09737