
Personalized P-Values for Social/Personality Psychologists

Last update 8/25/2021
(expanded to 410 social/personality psychologists; included Dan Ariely)

Introduction

Since Fisher invented null-hypothesis significance testing, researchers have used p < .05 as a statistical criterion to interpret results as discoveries worthy of discussion (i.e., the null-hypothesis is false). Once published, these results are often treated as real findings, even though alpha alone does not control the risk of false discoveries.

Statisticians have warned against exclusive reliance on p < .05, but nearly 100 years after Fisher popularized this approach, it is still the most common way to interpret data. The main reason is that many attempts to improve on this practice have failed. The core problem is that a single statistical result is difficult to interpret; individual results become more informative when they are interpreted in the context of other results. Based on the distribution of p-values, it is possible to estimate the maximum false discovery rate (Bartos & Schimmack, 2020; Jager & Leek, 2014). This approach can be applied to the p-values published by individual authors to adjust the alpha criterion so that the risk of false discoveries stays at a reasonable level, FDR < .05.
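The maximum false discovery rate used here follows from the estimated discovery rate via Soric's (1989) bound, which z-curve reports. A minimal sketch (the function name is mine, not part of the z-curve package):

```python
def max_fdr(edr, alpha=0.05):
    """Soric's (1989) upper bound on the false discovery rate,
    given an estimated discovery rate (EDR) and a significance criterion."""
    return (1 / edr - 1) * alpha / (1 - alpha)

# EDR of 79% -> maximum FDR of roughly 1%
print(round(max_fdr(0.79), 3))   # ~0.014
# lower bound of the CI, EDR = 27% -> ~14% false discoveries
print(round(max_fdr(0.27), 2))   # 0.14
# EDR = 5% -> the false discovery risk can be as high as 100%
print(round(max_fdr(0.05), 2))   # 1.0
```

These values reproduce the figures discussed below (79% EDR -> ~1% maximum FDR; a 27% lower bound -> ~14%; an EDR at 5% -> up to 100%).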

Researchers who mainly test true hypotheses with high power have a high discovery rate (many p-values below .05) and a low false discovery rate (FDR < .05). Figure 1 shows an example of a researcher who followed this strategy (for a detailed description of z-curve plots, see Schimmack, 2021).

We see that out of the 317 test-statistics retrieved from his articles, 246 were significant with alpha = .05. This is an observed discovery rate of 78%. We also see that this discovery rate closely matches the estimated discovery rate based on the distribution of the significant p-values, p < .05. The EDR is 79%. With an EDR of 79%, the maximum false discovery rate is only 1%. However, the 95%CI is wide and the lower bound of the CI for the EDR, 27%, allows for 14% false discoveries.

When the ODR matches the EDR, there is no evidence of publication bias. In this case, we can improve the estimates by fitting all p-values, including the non-significant ones. With a tighter CI for the EDR, we see that the 95%CI for the maximum FDR ranges from 1% to 3%. Thus, we can be confident that no more than 5% of the significant results with alpha = .05 are false discoveries. Readers can therefore continue to use alpha = .05 to look for interesting discoveries in Matsumoto’s articles.

Figure 3 shows the results for a different type of researcher who took a risk and studied weak effect sizes with small samples. This produces many non-significant results that are often not published. The selection for significance inflates the observed discovery rate, but the z-curve plot and the comparison with the EDR shows the influence of publication bias. Here the ODR is similar to Figure 1, but the EDR is only 11%. An EDR of 11% translates into a large maximum false discovery rate of 41%. In addition, the 95%CI of the EDR includes 5%, which means the risk of false positives could be as high as 100%. In this case, using alpha = .05 to interpret results as discoveries is very risky. Clearly, p < .05 means something very different when reading an article by David Matsumoto or Shelly Chaiken.

Rather than dismissing all of Chaiken’s results, we can try to lower alpha to reduce the false discovery rate. If we set alpha = .01, the FDR is 15%. If we set alpha = .005, the FDR is 8%. To get the FDR below 5%, we need to set alpha to .001.

A uniform criterion of FDR < 5% is applied to all researchers in the rankings below. For some this means no adjustment to the traditional criterion. For others, alpha is lowered to .01, and for a few even lower than that.

The rankings below are based on automatically extracted test-statistics from 40 journals (List of journals). The results should be interpreted with caution and treated as preliminary. They depend on the specific set of journals that were searched, the way results are being reported, and many other factors. The data are available (data.drop), and researchers can exclude or add articles and run their own analyses using the z-curve package in R (https://replicationindex.com/2020/01/10/z-curve-2-0/).

I am also happy to receive feedback about coding errors. I also recommend hand-coding articles to adjust alpha for focal hypothesis tests. This typically lowers the EDR and increases the FDR. For example, the automated method produced an EDR of 31% for Bargh, whereas hand-coding of focal tests produced an EDR of 12% (Bargh-Audit).

And here are the rankings. The results are fully automated and I was not able to cover up the fact that I placed only #188 out of 400 in the rankings. In another post, I will explain how researchers can move up in the rankings. Of course, one way to move up in the rankings is to increase statistical power in future studies. The rankings will be updated again when the 2021 data are available.

Despite their preliminary nature, I am confident that the results provide valuable information. Until now, all p-values below .05 have been treated as if they were equally informative. The rankings here show that this is not the case: while p = .02 can be informative for one researcher, p = .002 may still entail a high false discovery risk for another.

Rank | Name | Tests | ODR | EDR | ERR | FDR | Alpha
1Robert A. Emmons538789901.05
2Allison L. Skinner2295981851.05
3David Matsumoto3788379851.05
4Linda J. Skitka5326875822.05
5Jonathan B. Freeman2745975812.05
6Virgil Zeigler-Hill5157274812.05
7Arthur A. Stone3107573812.05
8David P. Schmitt2077871772.05
9Emily A. Impett5497770762.05
10Paula Bressan628270762.05
11Kurt Gray4877969812.05
12Michael E. McCullough3346969782.05
13Kipling D. Williams8437569772.05
14John M. Zelenski1567169762.05
15Elke U. Weber3126968770.05
16Hilary B. Bergsieker4396768742.05
17Cameron Anderson6527167743.05
18Rachael E. Jack2497066803.05
19Jamil Zaki4307866763.05
20A. Janet Tomiyama767865763.05
21Benjamin R. Karney3925665733.05
22Phoebe C. Ellsworth6057465723.05
23Jim Sidanius4876965723.05
24Amelie Mummendey4617065723.05
25Carol D. Ryff2808464763.05
26Juliane Degner4356364713.05
27Steven J. Heine5977863773.05
28David M. Amodio5846663703.05
29Thomas N Bradbury3986163693.05
30Elaine Fox4727962783.05
31Miles Hewstone14277062733.05
32Linda R. Tropp3446561803.05
33Rainer Greifeneder9447561773.05
34Klaus Fiedler19507761743.05
35Jesse Graham3777060763.05
36Richard W. Robins2707660704.05
37Simine Vazire1376660644.05
38On Amir2676759884.05
39Edward P. Lemay2898759814.05
40William B. Swann Jr.10707859804.05
41Margaret S. Clark5057559774.05
42Bernhard Leidner7246459654.05
43B. Keith Payne8797158764.05
44Ximena B. Arriaga2846658694.05
45Joris Lammers7286958694.05
46Patricia G. Devine6067158674.05
47Rainer Reisenzein2016557694.05
48Barbara A. Mellers2878056784.05
49Joris Lammers7056956694.05
50Jean M. Twenge3817256594.05
51Nicholas Epley15047455724.05
52Kaiping Peng5667754754.05
53Krishna Savani6387153695.05
54Leslie Ashburn-Nardo1098052835.05
55Lee Jussim2268052715.05
56Richard M. Ryan9987852695.05
57Ethan Kross6146652675.05
58Edward L. Deci2847952635.05
59Roger Giner-Sorolla6638151805.05
60Bertram F. Malle4227351755.05
61Jens B. Asendorpf2537451695.05
62Samuel D. Gosling1085851625.05
63Tessa V. West6917151595.05
64Paul Rozin4497850845.05
65Joachim I. Krueger4367850815.05
66Sheena S. Iyengar2076350805.05
67James J. Gross11047250775.05
68Mark Rubin3066850755.05
69Pieter Van Dessel5787050755.05
70Shinobu Kitayama9837650715.05
71Matthew J. Hornsey16567450715.05
72Janice R. Kelly3667550705.05
73Antonio L. Freitas2477950645.05
74Paul K. Piff1667750635.05
75Mina Cikara3927149805.05
76Beate Seibt3797249626.01
77Ludwin E. Molina1636949615.05
78Bertram Gawronski18037248766.01
79Penelope Lockwood4587148706.01
80Edward R. Hirt10428148656.01
81Matthew D. Lieberman3987247806.01
82John T. Cacioppo4387647696.01
83Agneta H. Fischer9527547696.01
84Leaf van Boven7117247676.01
85Stephanie A. Fryberg2486247666.01
86Daniel M. Wegner6027647656.01
87Anne E. Wilson7857147646.01
88Rainer Banse4027846726.01
89Alice H. Eagly3307546716.01
90Jeanne L. Tsai12417346676.01
91Jennifer S. Lerner1818046616.01
92Andrea L. Meltzer5495245726.01
93R. Chris Fraley6427045727.01
94Constantine Sedikides25667145706.01
95Paul Slovic3777445706.01
96Dacher Keltner12337245646.01
97Brian A. Nosek8166844817.01
98George Loewenstein7527144727.01
99Ursula Hess7747844717.01
100Jason P. Mitchell6007343737.01
101Jessica L. Tracy6327443717.01
102Charles M. Judd10547643687.01
103S. Alexander Haslam11987243647.01
104Mark Schaller5657343617.01
105Susan T. Fiske9117842747.01
106Lisa Feldman Barrett6446942707.01
107Jolanda Jetten19567342677.01
108Mario Mikulincer9018942647.01
109Bernadette Park9737742647.01
110Paul A. M. Van Lange10927042637.01
111Wendi L. Gardner7986742637.01
112Will M. Gervais1106942597.01
113Jordan B. Peterson2666041797.01
114Philip E. Tetlock5497941737.01
115Amanda B. Diekman4388341707.01
116Daniel H. J. Wigboldus4927641678.01
117Michael Inzlicht6866641638.01
118Naomi Ellemers23887441638.01
119Phillip Atiba Goff2996841627.01
120Stacey Sinclair3277041578.01
121Francesca Gino25217540698.01
122Michael I. Norton11367140698.01
123David J. Hauser1567440688.01
124Elizabeth Page-Gould4115740668.01
125Tiffany A. Ito3498040648.01
126Richard E. Petty27716940648.01
127Tim Wildschut13747340648.01
128Norbert Schwarz13377240638.01
129Veronika Job3627040638.01
130Wendy Wood4627540628.01
131Minah H. Jung1568339838.01
132Marcel Zeelenberg8687639798.01
133Tobias Greitemeyer17377239678.01
134Jason E. Plaks5827039678.01
135Carol S. Dweck10287039638.01
136Christian S. Crandall3627539598.01
137Harry T. Reis9986938749.01
138Vanessa K. Bohns4207738748.01
139Jerry Suls4137138688.01
140Eric D. Knowles3846838648.01
141C. Nathan DeWall13367338639.01
142Clayton R. Critcher6978238639.01
143John F. Dovidio20196938629.01
144Joshua Correll5496138629.01
145Abigail A. Scholer5565838629.01
146Chris Janiszewski1078138589.01
147Herbert Bless5867338579.01
148Mahzarin R. Banaji8807337789.01
149Rolf Reber2806437729.01
150Kevin N. Ochsner4067937709.01
151Mark J. Brandt2777037709.01
152Geoff MacDonald4066737679.01
153Mara Mather10387837679.01
154Antony S. R. Manstead16567237629.01
155Lorne Campbell4336737619.01
156Sanford E. DeVoe2367137619.01
157Ayelet Fishbach14167837599.01
158Fritz Strack6077537569.01
159Jeff T. Larsen18174366710.01
160Nyla R. Branscombe12767036659.01
161Yaacov Schul4116136649.01
162D. S. Moskowitz34187436639.01
163Pablo Brinol13566736629.01
164Todd B. Kashdan3777336619.01
165Barbara L. Fredrickson2877236619.01
166Duane T. Wegener9807736609.01
167Joanne V. Wood10937436609.01
168Niall Bolger3766736589.01
169Craig A. Anderson4677636559.01
170Michael Harris Bond37873358410.01
171Glenn Adams27071357310.01
172Daniel M. Bernstein40473357010.01
173C. Miguel Brendl12176356810.01
174Azim F. Sharif18374356810.01
175Emily Balcetis59969356810.01
176Eva Walther49382356610.01
177Michael D. Robinson138878356610.01
178Igor Grossmann20364356610.01
179Diana I. Tamir15662356210.01
180Samuel L. Gaertner32175356110.01
181John T. Jost79470356110.01
182Eric L. Uhlmann45767356110.01
183Nalini Ambady125662355610.01
184Daphna Oyserman44655355410.01
185Victoria M. Esses29575355310.01
186Linda J. Levine49574347810.01
187Wiebke Bleidorn9963347410.01
188Thomas Gilovich119380346910.01
189Alexander J. Rothman13369346510.01
190Paula M. Niedenthal52269346110.01
191Ozlem Ayduk54962345910.01
192Paul Ekman8870345510.01
193Alison Ledgerwood21475345410.01
194Christopher R. Agnew32575337610.01
195Michelle N. Shiota24260336311.01
196Malte Friese50161335711.01
197Kerry Kawakami48768335610.01
198Danu Anthony Stinson49477335411.01
199Jennifer A. Richeson83167335211.01
200Margo J. Monteith77376327711.01
201Ulrich Schimmack31875326311.01
202Mark Snyder56272326311.01
203Russell H. Fazio109469326111.01
204Eric van Dijk23867326011.01
205Tom Meyvis37777326011.01
206Eli J. Finkel139262325711.01
207Robert B. Cialdini37972325611.01
208Jonathan W. Kunstman43066325311.01
209Delroy L. Paulhus12177318212.01
210Yuen J. Huo13274318011.01
211Gerd Bohner51371317011.01
212Christopher K. Hsee68975316311.01
213Vivian Zayas25171316012.01
214John A. Bargh65172315512.01
215Tom Pyszczynski94869315412.01
216Roy F. Baumeister244269315212.01
217E. Ashby Plant83177315111.01
218Kathleen D. Vohs94468315112.01
219Jamie Arndt131869315012.01
220Anthony G. Greenwald35772308312.01
221Nicholas O. Rule129468307513.01
222Lauren J. Human44759307012.01
223Jennifer Crocker51568306712.01
224Dale T. Miller52171306412.01
225Thomas W. Schubert35370306012.01
226W. Keith Campbell52870305812.01
227Arthur Aron30765305612.01
228Pamela K. Smith14966305212.01
229Aaron C. Kay132070305112.01
230Steven W. Gangestad19863304113.005
231Eliot R. Smith44579297313.01
232Nir Halevy26268297213.01
233E. Allan Lind37082297213.01
234Richard E. Nisbett31973296913.01
235Hazel Rose Markus67476296813.01
236Emanuele Castano44569296513.01
237Dirk Wentura83065296413.01
238Boris Egloff27481295813.01
239Monica Biernat81377295713.01
240Gordon B. Moskowitz37472295713.01
241Russell Spears228673295513.01
242Jeff Greenberg135877295413.01
243Caryl E. Rusbult21860295413.01
244Naomi I. Eisenberger17974287914.01
245Brent W. Roberts56272287714.01
246Yoav Bar-Anan52575287613.01
247Eddie Harmon-Jones73873287014.01
248Matthew Feinberg29577286914.01
249Roland Neumann25877286713.01
250Eugene M. Caruso82275286413.01
251Ulrich Kuehnen82275286413.01
252Elizabeth W. Dunn39575286414.01
253Jeffry A. Simpson69774285513.01
254Sander L. Koole76765285214.01
255Richard J. Davidson38064285114.01
256Shelly L. Gable36464285014.01
257Adam D. Galinsky215470284913.01
258Grainne M. Fitzsimons58568284914.01
259Geoffrey J. Leonardelli29068284814.005
260Joshua Aronson18385284614.005
261Henk Aarts100367284514.005
262Vanessa K. Bohns42276277415.01
263Jan De Houwer197270277214.01
264Dan Ariely60070276914.01
265Charles Stangor18581276815.01
266Karl Christoph Klauer80167276514.01
267Jennifer S. Beer8056275414.01
268Eldar Shafir10778275114.01
269Guido H. E. Gendolla42276274714.005
270Klaus R. Scherer46783267815.01
271William G. Graziano53271266615.01
272Galen V. Bodenhausen58574266115.01
273Sonja Lyubomirsky53071265915.01
274Kai Sassenberg87271265615.01
275Kristin Laurin64863265115.01
276Claude M. Steele43473264215.005
277David G. Rand39270258115.01
278Paul Bloom50272257916.01
279Kerri L. Johnson53276257615.01
280Batja Mesquita41671257316.01
281Rebecca J. Schlegel26167257115.01
282Phillip R. Shaver56681257116.01
283David Dunning81874257016.01
284Laurie A. Rudman48272256816.01
285David A. Lishner10565256316.01
286Mark J. Landau95078254516.005
287Ronald S. Friedman18379254416.005
288Joel Cooper25772253916.005
289Alison L. Chasteen22368246916.01
290Jeff Galak31373246817.01
291Steven J. Sherman88874246216.01
292Shigehiro Oishi110964246117.01
293Thomas Mussweiler60470244317.005
294Mark W. Baldwin24772244117.005
295Evan P. Apfelbaum25662244117.005
296Nurit Shnabel56476237818.01
297Klaus Rothermund73871237618.01
298Felicia Pratto41073237518.01
299Jonathan Haidt36876237317.01
300Roland Imhoff36574237318.01
301Jeffrey W Sherman99268237117.01
302Jennifer L. Eberhardt20271236218.005
303Bernard A. Nijstad69371235218.005
304Brandon J. Schmeichel65266234517.005
305Sam J. Maglio32572234217.005
306David M. Buss46182228019.01
307Yoel Inbar28067227119.01
308Serena Chen86572226719.005
309Spike W. S. Lee14568226419.005
310Marilynn B. Brewer31475226218.005
311Michael Ross116470226218.005
312Dieter Frey153868225818.005
313G. Daniel Lassiter18982225519.01
314Sean M. McCrea58473225419.005
315Wendy Berry Mendes96568224419.005
316Paul W. Eastwick58365216919.005
317Kees van den Bos115084216920.005
318Maya Tamir134280216419.005
319Joseph P. Forgas88883215919.005
320Michaela Wanke36274215919.005
321Dolores Albarracin54066215620.005
322Elizabeth Levy Paluck3184215520.005
323Vanessa LoBue29968207621.01
324Christopher J. Armitage16062207321.005
325Elizabeth A. Phelps68678207221.005
326Jay J. van Bavel43764207121.005
327David A. Pizarro22771206921.005
328Andrew J. Elliot101881206721.005
329William A. Cunningham23876206422.005
330Kentaro Fujita45869206221.005
331Geoffrey L. Cohen159068205021.005
332Ana Guinote37876204721.005
333Tanya L. Chartrand42467203321.001
334Selin Kesebir32866197322.005
335Vincent Y. Yzerbyt141273197322.01
336Amy J. C. Cuddy17081197222.005
337James K. McNulty104756196523.005
338Robert S. Wyer87182196322.005
339Travis Proulx17463196222.005
340Peter M. Gollwitzer130364195822.005
341Nilanjana Dasgupta38376195222.005
342Richard P. Eibach75369194723.001
343Gerald L. Clore45674194522.001
344James M. Tyler13087187424.005
345Roland Deutsch36578187124.005
346Ed Diener49864186824.005
347Kennon M. Sheldon69874186623.005
348Wilhelm Hofmann62467186623.005
349Laura L. Carstensen72377186424.005
350Toni Schmader54669186124.005
351Frank D. Fincham73469185924.005
352David K. Sherman112861185724.005
353Lisa K. Libby41865185424.005
354Chen-Bo Zhong32768184925.005
355Stefan C. Schmukle11462177126.005
356Michel Tuan Pham24686176825.005
357Leandre R. Fabrigar63270176726.005
358Neal J. Roese36864176525.005
359Carey K. Morewedge63376176526.005
360Timothy D. Wilson79865176326.005
361Brad J. Bushman89774176225.005
362Ara Norenzayan22572176125.005
363Benoit Monin63565175625.005
364Michael W. Kraus61772175526.005
365Ad van Knippenberg68372175526.001
366E. Tory. Higgins186868175425.001
367Ap Dijksterhuis75068175426.005
368Joseph Cesario14662174526.001
369Simone Schnall27062173126.001
370Joshua M. Ackerman38053167013.01
371Melissa J. Ferguson116372166927.005
372Laura A. King39176166829.005
373Daniel T. Gilbert72465166527.005
374Charles S. Carver15482166428.005
375Leif D. Nelson40974166428.005
376David DeSteno20183165728.005
377Sandra L. Murray69760165528.001
378Heejung S. Kim85859165529.001
379Mark P. Zanna65964164828.001
380Nira Liberman130475156531.005
381Gun R. Semin15979156429.005
382Tal Eyal43962156229.005
383Nathaniel M Lambert45666155930.001
384Angela L. Duckworth12261155530.005
385Dana R. Carney20060155330.001
386Lee Ross34977146331.001
387Arie W. Kruglanski122878145833.001
388Ziva Kunda21767145631.001
389Shelley E. Taylor42769145231.001
390Jon K. Maner104065145232.001
391Gabriele Oettingen104761144933.001
392Gregory M. Walton58769144433.001
393Michael A. Olson34665136335.001
394Fiona Lee22167135834.001
395Melody M. Chao23757135836.001
396Adam L. Alter31478135436.001
397Sarah E. Hill50978135234.001
398Jaime L. Kurtz9155133837.001
399Michael A. Zarate12052133136.001
400Jennifer K. Bosson65976126440.001
401Daniel M. Oppenheimer19880126037.001
402Deborah A. Prentice8980125738.001
403Yaacov Trope127773125738.001
404Oscar Ybarra30563125540.001
405William von Hippel39865124840.001
406Steven J. Spencer54167124438.001
407Martie G. Haselton18673115443.001
408Shelly Chaiken36074115244.001
409Susan M. Andersen36174114843.001
410Dov Cohen64168114441.001
411Mark Muraven49652114441.001
412Ian McGregor40966114041.001
413Hans Ijzerman2145694651.001
414Linda M. Isbell1156494150.001
415Cheryl J. Wakslak2787383559.001

What would Cohen say? A comment on p < .005

Most psychologists are trained in Fisherian statistics, which has become known as Null-Hypothesis Significance Testing (NHST).  NHST compares an observed effect size against a hypothetical effect size. The hypothetical effect size is typically zero; that is, the hypothesis is that there is no effect.  The deviation of the observed effect size from zero relative to the amount of sampling error provides a test statistic (test statistic = effect size / sampling error).  The test statistic can then be compared to a criterion value. The criterion value is typically chosen so that only 5% of test statistics would exceed the criterion value by chance alone.  If the test statistic exceeds this value, the null-hypothesis is rejected in favor of the inference that an effect greater than zero was present.
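The logic of the test statistic can be sketched in a few lines of Python; the function name and the normal approximation for the sampling error of a standardized mean difference (roughly sqrt(4/N) for two equal groups) are my assumptions, not part of the article:

```python
from statistics import NormalDist

def z_test(d, n_total):
    """Two-sided z test for a standardized mean difference d from a
    two-group study with n_total participants (equal cell sizes).
    Normal approximation: sampling error of d is ~ sqrt(4 / N)."""
    se = (4 / n_total) ** 0.5
    z = d / se                      # test statistic = effect size / sampling error
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p

z, p = z_test(0.4, 100)             # d = .4, N = 100 -> z = 2.0
print(round(z, 2), round(p, 3))     # 2.0 0.046 -> just exceeds the p < .05 criterion
```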

One major problem of NHST is that non-significant results are not considered.  To address this limitation, Neyman and Pearson extended Fisherian statistics and introduced the concepts of type-I (alpha) and type-II (beta) errors.  A type-I error occurs when researchers falsely reject a true null-hypothesis; that is, they infer from a significant result that an effect was present when there is actually no effect.  The type-I error rate is fixed by the criterion for significance, which is typically p < .05.  This means that a set of studies cannot produce more than 5% false-positive results.  The maximum of 5% false-positive results would only be reached if all studies tested true null-hypotheses; in this case, we would expect 5% significant results and 95% non-significant results.

The important contribution by Neyman and Pearson was to consider the complementary type-II error.  A type-II error occurs when an effect is present, but a study produces a non-significant result.  In this case, researchers fail to detect a true effect.  The type-II error rate depends on the size of the effect and the amount of sampling error.  If effect sizes are small and sampling error is large, test statistics will often be too small to exceed the criterion value.

Neyman-Pearson statistics was popularized in psychology by Jacob Cohen.  In 1962, Cohen examined effect sizes and sample sizes (as a proxy for sampling error) in the Journal of Abnormal and Social Psychology and concluded that there is a high risk of type-II errors because sample sizes are too small to detect even moderate effect sizes and inadequate to detect small effect sizes.  Over the following decades, methodologists repeatedly pointed out that psychologists often conduct studies with a high risk of failing to provide empirical evidence for real effects (Sedlmeier & Gigerenzer, 1989).

The concern about type-II errors has been largely ignored by empirical psychologists.  One possible reason is that journals had no problem filling volumes with significant results, while rejecting 80% of submissions that also presented significant results.  Apparently, type-II errors were much less common than methodologists feared.

However, in 2011 it became apparent that the high success rate in journals was illusory. Published results were not representative of studies that were conducted. Instead, researchers used questionable research practices or simply did not report studies with non-significant results.  In other words, the type-II error rate was as high as methodologists suspected, but selection of significant results created the impression that nearly all studies were successful in producing significant results.  The influential “False Positive Psychology” article suggested that it is very easy to produce significant results without an actual effect.  This led to the fear that many published results in psychology may be false positive results.

Doubt about the replicability and credibility of published results has led to numerous recommendations for the improvement of psychological science.  One of the most obvious recommendations is to ensure that published results are representative of the studies that are actually being conducted.  Given the high type-II error rates, this would mean that journals would be filled with many non-significant and inconclusive results.  This is not a very attractive solution because it is not clear what the scientific community can learn from an inconclusive result.  A better solution would be to increase the statistical power of studies. Statistical power is simply the complement of the type-II error rate (power = 1 – beta).  As power increases, studies with a true effect have a higher chance of producing a true positive result (e.g., a drug is an effective treatment for a disease). Numerous articles have suggested that researchers should increase power to increase replicability and credibility of published results (e.g., Schimmack, 2012).

In a recent article, a team of 72 authors proposed another solution. They recommended that psychologists should reduce the probability of a type-I error from 5% (1 out of 20 studies) to 0.5% (1 out of 200 studies).  This recommendation is based on the belief that the replication crisis in psychology reflects a large number of type-I errors.  By reducing the alpha criterion, the rate of type-I errors will be reduced from a maximum of 10 out of 200 studies to 1 out of 200 studies.

I believe that this recommendation is misguided because it ignores the consequences of a more stringent significance criterion on type-II errors.  Keeping resources and sampling error constant, reducing the type-I error rate increases the type-II error rate. This is undesirable because the actual type-II error is already large.

For example, a between-subject comparison of two means with a standardized effect size of d = .4 and a sample size of N = 100 (n = 50 per cell) has a 50% risk of a type-II error.  The risk of a type-II error rises to 80%, if alpha is reduced to .005.  It makes no sense to conduct a study with an 80% chance of failure (Tversky & Kahneman, 1971).  Thus, the call for a lower alpha implies that researchers will have to invest more resources to discover true positive results.  Many researchers may simply lack the resources to meet this stringent significance criterion.

My suggestion is exactly opposite to the recommendation of a more stringent criterion.  The main problem for selection bias in journals is that even the existing criterion of p < .05 is too stringent and leads to a high percentage of type-II errors that cannot be published.  This has produced the replication crisis with large file-drawers of studies with p-values greater than .05,  the use of questionable research practices, and publications of inflated effect sizes that cannot be replicated.

To avoid this problem, researchers should use a significance criterion that balances the risk of a type-I and type-II error.  For example, in a between-subject design with an expected effect size of d = .4 and N = 100, researchers should use p < .20 for significance, which reduces the risk of a type-II error to 20%.  In this case, type-I and type-II errors are balanced.  If the study produces a p-value of, say, .15, researchers can publish the result with the conclusion that the study provided evidence for the effect. At the same time, readers are warned that they should not interpret this result as strong evidence for the effect because there is a 20% probability of a type-I error.

Given this positive result, researchers can then follow up their initial study with a larger replication study that allows for stricter type-I error control while holding power constant.   With d = .4, they now need N = 200 participants to have 80% power with alpha = .05.  Even if the second study does not produce a significant result (the probability that two studies with 80% power are both significant is only 64%; Schimmack, 2012), researchers can combine the results of both studies, and with N = 300, the combined studies have 80% power with alpha = .01.
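The power figures in the last few paragraphs can be checked with a normal-approximation sketch (the helper function is mine; exact t-test power differs by a percentage point or two):

```python
from statistics import NormalDist

def power(d, n_total, alpha):
    """Approximate power of a two-sided z test comparing two means
    (equal cells) for a standardized effect size d."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    z_effect = d / (4 / n_total) ** 0.5   # expected test statistic
    return 1 - NormalDist().cdf(z_crit - z_effect)

print(round(power(0.4, 100, 0.05), 2))    # ~0.52: roughly a 50% type-II error risk
print(round(power(0.4, 100, 0.005), 2))   # ~0.21: type-II risk rises to ~80%
print(round(power(0.4, 200, 0.05), 2))    # ~0.81: the larger replication study
print(round(power(0.4, 300, 0.01), 2))    # ~0.81: both studies combined, alpha = .01
```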

The advantage of starting with smaller studies and a higher alpha criterion is that researchers are able to test risky hypotheses with a smaller amount of resources.  In the example, the first study used “only” 100 participants.  In contrast, the proposal to require p < .005 as evidence for an original, risky study implies that researchers need to invest a lot of resources in a study that may provide inconclusive results if it fails to produce a significant result.  A power analysis shows that a sample size of N = 338 participants is needed to have 80% power for an effect size of d = .4 with p < .005 as the criterion for significance.
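Under the same normal approximation, the required sample size can be solved in closed form. This is a hedged sketch (the function is mine), not the exact t-test power analysis, which yields the N = 338 reported above:

```python
from math import ceil
from statistics import NormalDist

def required_n(d, alpha, power):
    """Total N (two equal cells) for a two-sided z test on a
    standardized mean difference d; normal approximation."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    n_per_cell = ceil(2 * ((z_crit + z_power) / d) ** 2)
    return 2 * n_per_cell

print(required_n(0.4, 0.005, 0.80))  # ~334, close to the exact value of 338
print(required_n(0.4, 0.05, 0.80))   # ~198, i.e. close to N = 200
```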

Rather than investing 300 participants into a risky study that may produce a non-significant and uninteresting result (eating green jelly beans does not cure cancer), researchers may be better able and willing to start with 100 participants and to follow up an encouraging result with a larger replication study.  The evidential value that arises from one study with 300 participants or two studies with 100 and 200 participants is the same, but requiring p < .005 from the start discourages risky studies and puts even more pressure on researchers to produce significant results when all of their resources are tied up in a single study.  In contrast, starting with a more liberal alpha reduces the need for questionable research practices and reduces the risk of type-II errors.

In conclusion, it is time to learn Neyman-Pearson statistics and to remember Cohen’s important contribution that many studies in psychology are underpowered.  Low power produces inconclusive results that are not worth publishing.  A study with low power is like a high-jumper who sets the bar too high and fails every time; we learn nothing about the jumper’s ability. Scientists may learn from high-jump contests, where jumpers start with lower, realistic heights and raise the bar as they succeed.  In the same manner, researchers should conduct pilot studies or risky exploratory studies with small samples and a high type-I error probability, and lower the alpha criterion gradually if the results are encouraging, while maintaining a reasonably low type-II error rate.

Evidently, a significant result with alpha = .20 does not provide conclusive evidence for an effect.  However, the arbitrary p < .005 criterion also falls short of demonstrating conclusively that an effect exists.  Journals publish thousands of results a year, and some of these results may be false positives even if the error rate is set at 1 out of 200. Thus, p < .005 is neither defensible as a criterion for a first exploratory study nor conclusive evidence for an effect.  A better criterion for conclusive evidence is that an effect can be replicated across different laboratories with a type-I error probability of less than 1 out of a billion (6 sigma).  This is by no means an unrealistic target.  To achieve this criterion with an effect size of d = .4, a sample size of N = 1,000 is needed.  The combined evidence of 5 labs with N = 200 per lab would be sufficient to produce conclusive evidence for an effect, but only if there is no selection bias.  Thus, the best way to increase the credibility of psychological science is to conduct studies with high power and to minimize selection bias.

This is what I believe Cohen would have said, but even if I am wrong about this, I think it follows from his futile efforts to teach psychologists about type-II errors and statistical power.