Personalized P-Values for Social/Personality Psychologists

Last update: 2/24/2021
(The latest update included articles published in 2020. This produced some changes in the rankings.)


Since Fisher invented null-hypothesis significance testing, researchers have used p < .05 as a statistical criterion to interpret results as discoveries worthy of discussion (i.e., the null-hypothesis is false). Once published, these results are often treated as real findings, even though alpha does not control the risk of false discoveries.

Statisticians have warned against exclusive reliance on p < .05, but nearly 100 years after Fisher popularized this approach, it is still the most common way to interpret data, mainly because attempts to improve on this practice have failed. The core problem is that a single statistical result is difficult to interpret. However, when individual results are interpreted in the context of other results, they become more informative. Based on the distribution of p-values, it is possible to estimate the maximum false discovery rate (Bartos & Schimmack, 2020; Jager & Leek, 2014). This approach can be applied to the p-values published by individual authors, and the alpha criterion can be adjusted to keep their risk of false discoveries at a reasonable level, FDR < .05.

Researchers who mainly test true hypotheses with high power have a high discovery rate (many p-values below .05) and a low false discovery rate (FDR < .05). Figure 1 shows an example of a researcher who followed this strategy (for a detailed description of z-curve plots, see Schimmack, 2021).

We see that out of the 317 test statistics retrieved from his articles, 246 were significant with alpha = .05. This is an observed discovery rate (ODR) of 78%. We also see that this ODR closely matches the estimated discovery rate (EDR) based on the distribution of the significant p-values, p < .05: the EDR is 79%. With an EDR of 79%, the maximum false discovery rate is only 1%. However, the 95%CI is wide, and the lower bound of the CI for the EDR, 27%, allows for up to 14% false discoveries.
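The conversion from an estimated discovery rate into a maximum false discovery rate uses Soric's (1989) upper bound, which assumes that true discoveries were made with maximum power. A minimal sketch of that calculation (the function name is mine; the z-curve package in R reports this value directly):

```python
def max_fdr(edr, alpha=0.05):
    """Soric's upper bound on the false discovery rate.

    edr:   estimated discovery rate (proportion of all conducted tests
           expected to be significant), as a fraction (e.g., 0.79)
    alpha: significance criterion
    """
    return (1.0 / edr - 1.0) * alpha / (1.0 - alpha)

# an EDR of 79% implies a maximum FDR of about 1%
print(round(max_fdr(0.79), 3))
# the lower bound of the 95%CI, EDR = 27%, allows for about 14%
print(round(max_fdr(0.27), 3))
```

With the much lower EDR of 11% in the example further below, the same bound yields roughly 43%, close to the reported 41% (the difference reflects rounding of the EDR).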

When the ODR matches the EDR, there is no evidence of publication bias. In this case, we can improve the estimates by fitting all p-values, including the non-significant ones. With a tighter CI for the EDR, we see that the 95%CI for the maximum FDR ranges from 1% to 3%. Thus, we can be confident that no more than 5% of the significant results with alpha = .05 are false discoveries. Readers can therefore continue to use alpha = .05 to look for interesting discoveries in Matsumoto’s articles.

Figure 3 shows the results for a different type of researcher, one who took risks and studied weak effect sizes with small samples. This approach produces many non-significant results that often remain unpublished. The selection for significance inflates the observed discovery rate, but the z-curve plot and the comparison with the EDR reveal the influence of publication bias. Here the ODR is similar to the one in Figure 1, but the EDR is only 11%. An EDR of 11% translates into a large maximum false discovery rate of 41%. In addition, the 95%CI of the EDR includes 5%, which means the risk of false positives could be as high as 100%. In this case, using alpha = .05 to interpret results as discoveries is very risky. Clearly, p < .05 means something very different when reading an article by David Matsumoto than when reading one by Shelly Chaiken.

Rather than dismissing all of Chaiken’s results, we can try to lower alpha to reduce the false discovery rate. If we set alpha = .01, the FDR is 15%. If we set alpha = .005, the FDR is 8%. To get the FDR below 5%, we need to set alpha to .001.

A uniform criterion of FDR < 5% is applied to all researchers in the rankings below. For some this means no adjustment to the traditional criterion. For others, alpha is lowered to .01, and for a few even lower than that.

The rankings below are based on automatically extracted test statistics from 40 journals (List of journals). The results should be interpreted with caution and treated as preliminary. They depend on the specific set of journals that were searched, the way results are reported, and many other factors. The data are available (data.drop), and researchers can exclude or add articles and run their own analyses using the z-curve package in R.

I am also happy to receive feedback about coding errors. I also recommend hand-coding articles to adjust alpha for focal hypothesis tests. This typically lowers the EDR and increases the FDR. For example, the automated method produced an EDR of 31 for Bargh, whereas hand-coding of focal tests produced an EDR of 12 (Bargh-Audit).

And here are the rankings. The results are fully automated and I was not able to cover up the fact that I placed only #139 out of 300 in the rankings. In another post, I will explain how researchers can move up in the rankings. Of course, one way to move up in the rankings is to increase statistical power in future studies. The rankings will be updated again when the 2021 data are available.

Despite their preliminary nature, I am confident that the results provide valuable information. Until now, all p-values below .05 have been treated as if they are equally informative. The rankings here show that this is not the case. While p = .02 can be informative for one researcher, p = .002 may still entail a high false discovery risk for another researcher.

Name | Tests | ODR | EDR | ERR | FDR | Alpha
(Tests = number of test statistics; ODR = observed discovery rate in %; EDR = estimated discovery rate in %; ERR = expected replication rate in %; FDR = maximum false discovery rate in %; Alpha = adjusted significance criterion)

Robert A. Emmons | 58 | 88 | 85 | 88 | 1 | .05
David Matsumoto | 378 | 83 | 79 | 85 | 1 | .05
Linda J. Skitka | 532 | 68 | 75 | 82 | 2 | .05
Jonathan B. Freeman | 274 | 59 | 75 | 81 | 2 | .05
Virgil Zeigler-Hill | 515 | 72 | 74 | 81 | 2 | .05
David P. Schmitt | 207 | 78 | 71 | 77 | 2 | .05
Emily A. Impett | 549 | 77 | 70 | 76 | 2 | .05
Michael E. McCullough | 334 | 69 | 69 | 78 | 2 | .05
Kipling D. Williams | 843 | 75 | 69 | 77 | 2 | .05
John M. Zelenski | 156 | 71 | 69 | 76 | 2 | .05
Kurt Gray | 487 | 79 | 69 | 81 | 2 | .05
Hilary B. Bergsieker | 439 | 67 | 68 | 74 | 2 | .05
Cameron Anderson | 652 | 71 | 67 | 74 | 3 | .05
Jamil Zaki | 430 | 78 | 66 | 76 | 3 | .05
Phoebe C. Ellsworth | 605 | 74 | 65 | 72 | 3 | .05
Jim Sidanius | 487 | 69 | 65 | 72 | 3 | .05
Benjamin R. Karney | 392 | 56 | 65 | 73 | 3 | .05
A. Janet Tomiyama | 76 | 78 | 65 | 76 | 3 | .05
Carol D. Ryff | 280 | 84 | 64 | 76 | 3 | .05
Juliane Degner | 435 | 63 | 64 | 71 | 3 | .05
Thomas N Bradbury | 398 | 61 | 63 | 69 | 3 | .05
Steven J. Heine | 597 | 78 | 63 | 77 | 3 | .05
David M. Amodio | 584 | 66 | 63 | 70 | 3 | .05
Elaine Fox | 472 | 79 | 62 | 78 | 3 | .05
Klaus Fiedler | 1421 | 78 | 60 | 72 | 3 | .05
Richard W. Robins | 270 | 76 | 60 | 70 | 4 | .05
William B. Swann Jr. | 1070 | 78 | 59 | 80 | 4 | .05
Margaret S. Clark | 505 | 75 | 59 | 77 | 4 | .05
Edward P. Lemay | 289 | 87 | 59 | 81 | 4 | .05
Patricia G. Devine | 606 | 71 | 58 | 67 | 4 | .05
B. Keith Payne | 879 | 71 | 58 | 76 | 4 | .05
Ximena B. Arriaga | 284 | 66 | 58 | 69 | 4 | .05
Rainer Reisenzein | 201 | 65 | 57 | 69 | 4 | .05
Jean M. Twenge | 381 | 72 | 56 | 59 | 4 | .05
Barbara A. Mellers | 287 | 80 | 56 | 78 | 4 | .05
Joris Lammers | 705 | 69 | 56 | 69 | 4 | .05
Nicholas Epley | 1504 | 74 | 55 | 72 | 4 | .05
Richard M. Ryan | 998 | 78 | 52 | 69 | 5 | .05
Edward L. Deci | 284 | 79 | 52 | 63 | 5 | .05
Ethan Kross | 614 | 66 | 52 | 67 | 5 | .05
Lee Jussim | 226 | 80 | 52 | 71 | 5 | .05
Samuel D. Gosling | 108 | 58 | 51 | 62 | 5 | .05
Jens B. Asendorpf | 253 | 74 | 51 | 69 | 5 | .05
Roger Giner-Sorolla | 663 | 81 | 51 | 80 | 5 | .05
Tessa V. West | 691 | 71 | 51 | 59 | 5 | .05
James J. Gross | 1104 | 72 | 50 | 77 | 5 | .05
Paul Rozin | 449 | 78 | 50 | 84 | 5 | .05
Shinobu Kitayama | 983 | 76 | 50 | 71 | 5 | .05
Janice R. Kelly | 366 | 75 | 50 | 70 | 5 | .05
Sheena S. Iyengar | 207 | 63 | 50 | 80 | 5 | .05
Paul K. Piff | 166 | 77 | 50 | 63 | 5 | .05
Mina Cikara | 392 | 71 | 49 | 80 | 5 | .05
Bertram Gawronski | 1803 | 72 | 48 | 76 | 6 | .01
Edward R. Hirt | 1042 | 81 | 48 | 65 | 6 | .01
Penelope Lockwood | 458 | 71 | 48 | 70 | 6 | .01
John T. Cacioppo | 438 | 76 | 47 | 69 | 6 | .01
Matthew D. Lieberman | 398 | 72 | 47 | 80 | 6 | .01
Daniel M. Wegner | 602 | 76 | 47 | 65 | 6 | .01
Agneta H. Fischer | 952 | 75 | 47 | 69 | 6 | .01
Leaf van Boven | 711 | 72 | 47 | 67 | 6 | .01
Stephanie A. Fryberg | 248 | 62 | 47 | 66 | 6 | .01
Alice H. Eagly | 330 | 75 | 46 | 71 | 6 | .01
Rainer Banse | 402 | 78 | 46 | 72 | 6 | .01
Jeanne L. Tsai | 1241 | 73 | 46 | 67 | 6 | .01
Jennifer S. Lerner | 181 | 80 | 46 | 61 | 6 | .01
Dacher Keltner | 1233 | 72 | 45 | 64 | 6 | .01
Constantine Sedikides | 2566 | 71 | 45 | 70 | 6 | .01
Andrea L. Meltzer | 549 | 52 | 45 | 72 | 6 | .01
R. Chris Fraley | 642 | 70 | 45 | 72 | 7 | .01
Brian A. Nosek | 816 | 68 | 44 | 81 | 7 | .01
Ursula Hess | 774 | 78 | 44 | 71 | 7 | .01
S. Alexander Haslam | 1198 | 72 | 43 | 64 | 7 | .01
Charles M. Judd | 1054 | 76 | 43 | 68 | 7 | .01
Mark Schaller | 565 | 73 | 43 | 61 | 7 | .01
Jason P. Mitchell | 600 | 73 | 43 | 73 | 7 | .01
Jessica L. Tracy | 632 | 74 | 43 | 71 | 7 | .01
Mario Mikulincer | 901 | 89 | 42 | 64 | 7 | .01
Lisa Feldman Barrett | 644 | 69 | 42 | 70 | 7 | .01
Susan T. Fiske | 911 | 78 | 42 | 74 | 7 | .01
Bernadette Park | 973 | 77 | 42 | 64 | 7 | .01
Paul A. M. Van Lange | 1092 | 70 | 42 | 63 | 7 | .01
Wendi L. Gardner | 798 | 67 | 42 | 63 | 7 | .01
Philip E. Tetlock | 549 | 79 | 41 | 73 | 7 | .01
Jordan B. Peterson | 266 | 60 | 41 | 79 | 7 | .01
Michael Inzlicht | 566 | 64 | 41 | 61 | 8 | .01
Stacey Sinclair | 327 | 70 | 41 | 57 | 8 | .01
Richard E. Petty | 2771 | 69 | 40 | 64 | 8 | .01
Norbert Schwarz | 1337 | 72 | 40 | 63 | 8 | .01
Wendy Wood | 462 | 75 | 40 | 62 | 8 | .01
Tiffany A. Ito | 349 | 80 | 40 | 64 | 8 | .01
Elizabeth Page-Gould | 411 | 57 | 40 | 66 | 8 | .01
Carol S. Dweck | 1028 | 70 | 39 | 63 | 8 | .01
Marcel Zeelenberg | 868 | 76 | 39 | 79 | 8 | .01
Christian S. Crandall | 362 | 75 | 39 | 59 | 8 | .01
Tobias Greitemeyer | 1737 | 72 | 39 | 67 | 8 | .01
Jason E. Plaks | 582 | 70 | 39 | 67 | 8 | .01
Jerry Suls | 413 | 71 | 38 | 68 | 8 | .01
Eric D. Knowles | 384 | 68 | 38 | 64 | 8 | .01
John F. Dovidio | 2019 | 69 | 38 | 62 | 9 | .01
C. Nathan DeWall | 1336 | 73 | 38 | 63 | 9 | .01
Harry T. Reis | 998 | 69 | 38 | 74 | 9 | .01
Joshua Correll | 549 | 61 | 38 | 62 | 9 | .01
Abigail A. Scholer | 556 | 58 | 38 | 62 | 9 | .01
Mahzarin R. Banaji | 880 | 73 | 37 | 78 | 9 | .01
Antony S. R. Manstead | 1656 | 72 | 37 | 62 | 9 | .01
Kevin N. Ochsner | 406 | 79 | 37 | 70 | 9 | .01
Fritz Strack | 607 | 75 | 37 | 56 | 9 | .01
Ayelet Fishbach | 1416 | 78 | 37 | 59 | 9 | .01
Lorne Campbell | 433 | 67 | 37 | 61 | 9 | .01
Geoff MacDonald | 406 | 67 | 37 | 67 | 9 | .01
Mark J. Brandt | 277 | 70 | 37 | 70 | 9 | .01
Craig A. Anderson | 467 | 76 | 36 | 55 | 9 | .01
Barbara L. Fredrickson | 287 | 72 | 36 | 61 | 9 | .01
Nyla R. Branscombe | 1276 | 70 | 36 | 65 | 9 | .01
Niall Bolger | 376 | 67 | 36 | 58 | 9 | .01
D. S. Moskowitz | 3418 | 74 | 36 | 63 | 9 | .01
Duane T. Wegener | 980 | 77 | 36 | 60 | 9 | .01
Joanne V. Wood | 1093 | 74 | 36 | 60 | 9 | .01
Yaacov Schul | 411 | 61 | 36 | 64 | 9 | .01
Jeff T. Larsen | 181 | 74 | 36 | 67 | 10 | .01
Nalini Ambady | 1256 | 62 | 35 | 56 | 10 | .01
John T. Jost | 794 | 70 | 35 | 61 | 10 | .01
Daphna Oyserman | 446 | 55 | 35 | 54 | 10 | .01
Samuel L. Gaertner | 321 | 75 | 35 | 61 | 10 | .01
Michael Harris Bond | 378 | 73 | 35 | 84 | 10 | .01
Michael D. Robinson | 1388 | 78 | 35 | 66 | 10 | .01
Igor Grossmann | 203 | 64 | 35 | 66 | 10 | .01
Azim F. Sharif | 183 | 74 | 35 | 68 | 10 | .01
Eva Walther | 493 | 82 | 35 | 66 | 10 | .01
C. Miguel Brendl | 121 | 76 | 35 | 68 | 10 | .01
Emily Balcetis | 599 | 69 | 35 | 68 | 10 | .01
Diana I. Tamir | 156 | 62 | 35 | 62 | 10 | .01
Thomas Gilovich | 1193 | 80 | 34 | 69 | 10 | .01
Paula M. Niedenthal | 522 | 69 | 34 | 61 | 10 | .01
Ozlem Ayduk | 549 | 62 | 34 | 59 | 10 | .01
Wiebke Bleidorn | 99 | 63 | 34 | 74 | 10 | .01
Alison Ledgerwood | 214 | 75 | 34 | 54 | 10 | .01
Kerry Kawakami | 487 | 68 | 33 | 56 | 10 | .01
Christopher R. Agnew | 325 | 75 | 33 | 76 | 10 | .01
Jennifer A. Richeson | 831 | 67 | 33 | 52 | 11 | .01
Malte Friese | 501 | 61 | 33 | 57 | 11 | .01
Danu Anthony Stinson | 494 | 77 | 33 | 54 | 11 | .01
Mark Snyder | 562 | 72 | 32 | 63 | 11 | .01
Robert B. Cialdini | 379 | 72 | 32 | 56 | 11 | .01
Russell H. Fazio | 1094 | 69 | 32 | 61 | 11 | .01
Eli J. Finkel | 1392 | 62 | 32 | 57 | 11 | .01
Ulrich Schimmack | 318 | 75 | 32 | 63 | 11 | .01
Margo J. Monteith | 773 | 76 | 32 | 77 | 11 | .01
E. Ashby Plant | 831 | 77 | 31 | 51 | 11 | .01
Christopher K. Hsee | 689 | 75 | 31 | 63 | 11 | .01
Yuen J. Huo | 132 | 74 | 31 | 80 | 11 | .01
Roy F. Baumeister | 2442 | 69 | 31 | 52 | 12 | .01
John A. Bargh | 651 | 72 | 31 | 55 | 12 | .01
Tom Pyszczynski | 948 | 69 | 31 | 54 | 12 | .01
Delroy L. Paulhus | 121 | 77 | 31 | 82 | 12 | .01
Kathleen D. Vohs | 944 | 68 | 31 | 51 | 12 | .01
Jamie Arndt | 1318 | 69 | 31 | 50 | 12 | .01
Arthur Aron | 307 | 65 | 30 | 56 | 12 | .01
Anthony G. Greenwald | 357 | 72 | 30 | 83 | 12 | .01
Jennifer Crocker | 515 | 68 | 30 | 67 | 12 | .01
Dale T. Miller | 521 | 71 | 30 | 64 | 12 | .01
Aaron C. Kay | 1320 | 70 | 30 | 51 | 12 | .01
Lauren J. Human | 447 | 59 | 30 | 70 | 12 | .01
Steven W. Gangestad | 198 | 63 | 30 | 41 | 13 | .005
Nicholas O. Rule | 1294 | 68 | 30 | 75 | 13 | .01
Jeff Greenberg | 1358 | 77 | 29 | 54 | 13 | .01
Hazel Rose Markus | 674 | 76 | 29 | 68 | 13 | .01
Russell Spears | 2286 | 73 | 29 | 55 | 13 | .01
Gordon B. Moskowitz | 374 | 72 | 29 | 57 | 13 | .01
Richard E. Nisbett | 319 | 73 | 29 | 69 | 13 | .01
Eliot R. Smith | 445 | 79 | 29 | 73 | 13 | .01
Boris Egloff | 274 | 81 | 29 | 58 | 13 | .01
Caryl E. Rusbult | 218 | 60 | 29 | 54 | 13 | .01
Dirk Wentura | 830 | 65 | 29 | 64 | 13 | .01
Nir Halevy | 262 | 68 | 29 | 72 | 13 | .01
Adam D. Galinsky | 2154 | 70 | 28 | 49 | 13 | .01
Jeffry A. Simpson | 697 | 74 | 28 | 55 | 13 | .01
Yoav Bar-Anan | 525 | 75 | 28 | 76 | 13 | .01
Roland Neumann | 258 | 77 | 28 | 67 | 13 | .01
Richard J. Davidson | 380 | 64 | 28 | 51 | 14 | .01
Eddie Harmon-Jones | 738 | 73 | 28 | 70 | 14 | .01
Brent W. Roberts | 562 | 72 | 28 | 77 | 14 | .01
Naomi I. Eisenberger | 179 | 74 | 28 | 79 | 14 | .01
Sander L. Koole | 767 | 65 | 28 | 52 | 14 | .01
Shelly L. Gable | 364 | 64 | 28 | 50 | 14 | .01
Joshua Aronson | 183 | 85 | 28 | 46 | 14 | .005
Elizabeth W. Dunn | 395 | 75 | 28 | 64 | 14 | .01
Grainne M. Fitzsimons | 585 | 68 | 28 | 49 | 14 | .01
Geoffrey J. Leonardelli | 290 | 68 | 28 | 48 | 14 | .005
Matthew Feinberg | 295 | 77 | 28 | 69 | 14 | .01
Jan De Houwer | 1972 | 70 | 27 | 72 | 14 | .01
Karl Christoph Klauer | 801 | 67 | 27 | 65 | 14 | .01
Guido H. E. Gendolla | 422 | 76 | 27 | 47 | 14 | .005
Jennifer S. Beer | 80 | 56 | 27 | 54 | 14 | .01
Klaus R. Scherer | 467 | 83 | 26 | 78 | 15 | .01
Galen V. Bodenhausen | 585 | 74 | 26 | 61 | 15 | .01
Sonja Lyubomirsky | 531 | 71 | 26 | 59 | 15 | .01
Claude M. Steele | 434 | 73 | 26 | 42 | 15 | .005
William G. Graziano | 532 | 71 | 26 | 66 | 15 | .01
Kristin Laurin | 648 | 63 | 26 | 51 | 15 | .01
Kerri L. Johnson | 532 | 76 | 25 | 76 | 15 | .01
Phillip R. Shaver | 566 | 81 | 25 | 71 | 16 | .01
David Dunning | 818 | 74 | 25 | 70 | 16 | .01
Laurie A. Rudman | 482 | 72 | 25 | 68 | 16 | .01
Joel Cooper | 257 | 72 | 25 | 39 | 16 | .005
Batja Mesquita | 416 | 71 | 25 | 73 | 16 | .01
Ronald S. Friedman | 183 | 79 | 25 | 44 | 16 | .005
Steven J. Sherman | 888 | 74 | 24 | 62 | 16 | .01
Alison L. Chasteen | 223 | 68 | 24 | 69 | 16 | .01
Shigehiro Oishi | 1109 | 64 | 24 | 61 | 17 | .01
Thomas Mussweiler | 604 | 70 | 24 | 43 | 17 | .005
Mark W. Baldwin | 247 | 72 | 24 | 41 | 17 | .005
Jonathan Haidt | 368 | 76 | 23 | 73 | 17 | .01
Brandon J. Schmeichel | 652 | 66 | 23 | 45 | 17 | .005
Jeffrey W Sherman | 992 | 68 | 23 | 71 | 17 | .01
Felicia Pratto | 410 | 73 | 23 | 75 | 18 | .01
Klaus Rothermund | 738 | 71 | 23 | 76 | 18 | .01
Bernard A. Nijstad | 693 | 71 | 23 | 52 | 18 | .005
Roland Imhoff | 365 | 74 | 23 | 73 | 18 | .01
Jennifer L. Eberhardt | 202 | 71 | 23 | 62 | 18 | .005
Michael Ross | 1164 | 70 | 22 | 62 | 18 | .005
Marilynn B. Brewer | 314 | 75 | 22 | 62 | 18 | .005
Dieter Frey | 1538 | 68 | 22 | 58 | 18 | .005
David M. Buss | 461 | 82 | 22 | 80 | 19 | .01
Wendy Berry Mendes | 965 | 68 | 22 | 44 | 19 | .005
Yoel Inbar | 280 | 67 | 22 | 71 | 19 | .01
Sean M. McCrea | 584 | 73 | 22 | 54 | 19 | .005
Spike W. S. Lee | 145 | 68 | 22 | 64 | 19 | .005
Joseph P. Forgas | 888 | 83 | 21 | 59 | 19 | .005
Maya Tamir | 1342 | 80 | 21 | 64 | 19 | .005
Paul W. Eastwick | 583 | 65 | 21 | 69 | 19 | .005
Elizabeth Levy Paluck | 31 | 84 | 21 | 55 | 20 | .005
Andrew J. Elliot | 1018 | 81 | 20 | 67 | 21 | .005
Jay J. van Bavel | 437 | 64 | 20 | 71 | 21 | .005
Tanya L. Chartrand | 424 | 67 | 20 | 33 | 21 | .001
Geoffrey L. Cohen | 1590 | 68 | 20 | 50 | 21 | .005
David A. Pizarro | 227 | 71 | 20 | 69 | 21 | .005
Ana Guinote | 378 | 76 | 20 | 47 | 21 | .005
Kentaro Fujita | 458 | 69 | 20 | 62 | 21 | .005
William A. Cunningham | 238 | 76 | 20 | 64 | 22 | .005
Robert S. Wyer | 871 | 82 | 19 | 63 | 22 | .005
Peter M. Gollwitzer | 1303 | 64 | 19 | 58 | 22 | .005
Gerald L. Clore | 456 | 74 | 19 | 45 | 22 | .001
Amy J. C. Cuddy | 170 | 81 | 19 | 72 | 22 | .005
Nilanjana Dasgupta | 383 | 76 | 19 | 52 | 22 | .005
Travis Proulx | 174 | 63 | 19 | 62 | 22 | .005
James K. McNulty | 1047 | 56 | 19 | 65 | 23 | .005
Dolores Albarracin | 520 | 67 | 19 | 56 | 23 | .005
Richard P. Eibach | 753 | 69 | 19 | 47 | 23 | .001
Kennon M. Sheldon | 698 | 74 | 18 | 66 | 23 | .005
Wilhelm Hofmann | 624 | 67 | 18 | 66 | 23 | .005
Ed Diener | 498 | 64 | 18 | 68 | 24 | .005
Frank D. Fincham | 734 | 69 | 18 | 59 | 24 | .005
Toni Schmader | 546 | 69 | 18 | 61 | 24 | .005
Roland Deutsch | 365 | 78 | 18 | 71 | 24 | .005
Lisa K. Libby | 418 | 65 | 18 | 54 | 24 | .005
James M. Tyler | 130 | 87 | 18 | 74 | 24 | .005
Chen-Bo Zhong | 327 | 68 | 18 | 49 | 25 | .005
Brad J. Bushman | 897 | 74 | 17 | 62 | 25 | .005
Ara Norenzayan | 225 | 72 | 17 | 61 | 25 | .005
Benoit Monin | 635 | 65 | 17 | 56 | 25 | .005
Michel Tuan Pham | 246 | 86 | 17 | 68 | 25 | .005
E. Tory. Higgins | 1920 | 68 | 17 | 54 | 26 | .001
Timothy D. Wilson | 798 | 65 | 17 | 63 | 26 | .005
Ap Dijksterhuis | 750 | 68 | 17 | 54 | 26 | .005
Michael W. Kraus | 617 | 72 | 17 | 55 | 26 | .005
Carey K. Morewedge | 633 | 76 | 17 | 65 | 26 | .005
Leandre R. Fabrigar | 632 | 70 | 17 | 67 | 26 | .005
Joseph Cesario | 146 | 62 | 17 | 45 | 26 | .001
Simone Schnall | 270 | 62 | 17 | 31 | 26 | .001
Daniel T. Gilbert | 724 | 65 | 16 | 65 | 27 | .005
Melissa J. Ferguson | 1163 | 72 | 16 | 69 | 27 | .005
Charles S. Carver | 154 | 82 | 16 | 64 | 28 | .005
Mark P. Zanna | 659 | 64 | 16 | 48 | 28 | .001
Sandra L. Murray | 697 | 60 | 16 | 55 | 28 | .001
Laura A. King | 391 | 76 | 16 | 68 | 29 | .005
Heejung S. Kim | 858 | 59 | 16 | 55 | 29 | .001
Gun R. Semin | 159 | 79 | 15 | 64 | 29 | .005
Nathaniel M Lambert | 456 | 66 | 15 | 59 | 30 | .001
Shelley E. Taylor | 438 | 69 | 15 | 54 | 31 | .001
Nira Liberman | 1304 | 75 | 15 | 65 | 31 | .005
Lee Ross | 349 | 77 | 14 | 63 | 31 | .001
Ziva Kunda | 217 | 67 | 14 | 56 | 31 | .001
Jon K. Maner | 1040 | 65 | 14 | 52 | 32 | .001
Arie W. Kruglanski | 1228 | 78 | 14 | 58 | 33 | .001
Gabriele Oettingen | 1047 | 61 | 14 | 49 | 33 | .001
Gregory M. Walton | 587 | 69 | 14 | 44 | 33 | .001
Sarah E. Hill | 509 | 78 | 13 | 52 | 34 | .001
Fiona Lee | 221 | 67 | 13 | 58 | 34 | .001
Michael A. Olson | 346 | 65 | 13 | 63 | 35 | .001
Michael A. Zarate | 120 | 52 | 13 | 31 | 36 | .001
Daniel M. Oppenheimer | 198 | 80 | 12 | 60 | 37 | .001
Yaacov Trope | 1277 | 73 | 12 | 57 | 38 | .001
Steven J. Spencer | 541 | 67 | 12 | 44 | 38 | .001
Deborah A. Prentice | 89 | 80 | 12 | 57 | 38 | .001
William von Hippel | 398 | 65 | 12 | 48 | 40 | .001
Oscar Ybarra | 305 | 63 | 12 | 55 | 40 | .001
Dov Cohen | 641 | 68 | 11 | 44 | 41 | .001
Ian McGregor | 409 | 66 | 11 | 40 | 41 | .001
Mark Muraven | 496 | 52 | 11 | 44 | 41 | .001
Martie G. Haselton | 186 | 73 | 11 | 54 | 43 | .001
Susan M. Andersen | 361 | 74 | 11 | 48 | 43 | .001
Shelly Chaiken | 360 | 74 | 11 | 52 | 44 | .001
Hans Ijzerman | 214 | 56 | 9 | 46 | 51 | .001

16 thoughts on “Personalized P-Values for Social/Personality Psychologists”

  1. Only 801 of the listed 1260 effects were actually taken from research that I was involved in (some seem to stem from articles for which I was editor, others are a mystery to me). On the other hand, the majority of my research is missing. It seems preferable to publish data that is actually based on a more or less representative sample of research actually done by the person with whom that data is associated.


    1. Thank you for the comment. They are valuable to improve the informativeness of the z-curve analyses.
      1. Only social/personality journals and general journals like Psych Science were used (I posted a list of the journals).
      I will make clear which journals were used.
      2. I am trying to screen out mentions of names as editor, but the program is not perfect. I will look into this and update accordingly.
      3. I found a way to screen out more articles where your name appeared in footnotes (thank you).
      4. I updated the results and they did improve.
      5. Please check the new results.


      1. Thank you for the quick response. Some of my research is published in psychophysiology or cognitive journals, hence I now understand why so much is missing.


      2. I figure that research practices can vary once physiological measures are taken or in cognitive studies with within-subject designs. I will eventually do similar posts for other areas.


  2. I’m dismayed (and aghast) to see that I’m almost at the bottom of this list. Any advice on how to investigate this further to see where the problem lies?


    1. Thank you for your comment.
      You can download a file called “William von Hippel-rindex.csv”
      It contains all the articles that were used and computes the R-Index based on the z-scores found for that article. The R-Index is a simple way to estimate replicability that works for small sets of test statistics. An R-Index of 50 would suggest that the replicability is about 50%. The EDR would be lower, but is hard to estimate with a small set of test statistics. The file is sorted by the R-Index. Articles with an R-Index below 50 are probably not robust. This is a good way to start diagnosing the problem.
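In code, the R-Index logic can be sketched like this, assuming two-sided z-tests with alpha = .05 (a simplified illustration of "mean observed power minus inflation", not the exact implementation behind the csv file):

```python
import math

def _phi(x):
    # standard normal cumulative distribution function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def r_index(z_scores, z_crit=1.96):
    """R-Index: mean observed power minus inflation.

    Inflation is the observed discovery rate (share of significant
    results) minus mean observed power.
    """
    # observed power: probability that |z| exceeds the criterion
    power = [_phi(z - z_crit) + _phi(-z - z_crit) for z in z_scores]
    mean_power = sum(power) / len(power)
    odr = sum(abs(z) > z_crit for z in z_scores) / len(z_scores)
    inflation = odr - mean_power
    return mean_power - inflation

# all results just significant: high inflation, low R-Index
print(round(r_index([2.0, 2.1, 2.2]), 2))
```

For a set of just-significant z-scores (around 2), mean observed power is close to 50% while the success rate is 100%, so the R-Index drops toward 0, which flags results that are unlikely to replicate.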


      1. Hi Uli, that’s very helpful – thanks!

        But now I’m confused. To start with the worst offenders on my list, I have four papers with an R-Index of 0. I can’t tell what two of them are, as your identifier doesn’t include the article title or authors, but two of them are clear. The first of those two has large samples, reports a wide variety of large and small correlations, and strikes me as highly replicable. Indeed, study 2 (N=466) is a direct replication of study 1 (N=196) with an even larger sample. Study 3 goes in a slightly different direction, but mostly relies on the data from Study 2. The other paper reports large samples (Ns = 200) but small effects. We submitted it with only one study, the editor asked for replication, we ran a direct replication with the same sample size and found the same effect. Those are both in the paper. Since then we’ve tried to replicate it once and have succeeded (that finding isn’t yet published).

        That’s the first issue, and strikes me as the most important. Secondarily, there are at least four or five papers in this list that aren’t my own – perhaps more but it’s hard to tell what some of the papers are – and the resultant list of papers is only about 1/3 of my empirical publications. Thus, setting aside the most important issue above, I don’t have a clear sense of what my actual replicability score would look like with all of my papers.

        All the best, Bill


  3. There are numerous correlations reported in both papers, along with various mediational analyses in one of them, so definitely not a single result.

    With regard to the second issue, the file lists the journal title and year, but that’s it. Sometimes I haven’t published in that journal in that year, so I know it’s not me. Sometimes I have, but in this particular case the only paper I published in that journal in that year has another one of the R = 0 examples, but includes a sample in the millions and a multiverse analysis. There’s no chance that could have a replicability index of 0.


  4. Thanks Uli, very kind of you to offer to run the analysis for me. I’ve created a dropbox folder with all of my empirical articles in it and shared it with you. Let me know if that doesn’t come through. Best, Bill

