Personalized P-Values for Social/Personality Psychologists

Last update 8/25/2021
(expanded to 410 social/personality psychologists; included Dan Ariely)

Introduction

Since Fisher invented null-hypothesis significance testing, researchers have used p < .05 as the statistical criterion for interpreting results as discoveries worthy of discussion (i.e., the null-hypothesis is false). Once published, these results are often treated as real findings, even though alpha does not control the risk of false discoveries.

Statisticians have warned against exclusive reliance on p < .05, but nearly 100 years after Fisher popularized this approach, it is still the most common way to interpret data. The main reason is that many attempts to improve on this practice have failed; a single statistical result is simply difficult to interpret in isolation. However, when individual results are interpreted in the context of other results, they become more informative. Based on the distribution of p-values, it is possible to estimate the maximum false discovery rate (Bartos & Schimmack, 2020; Jager & Leek, 2014). This approach can be applied to the p-values published by individual authors to adjust the significance criterion (alpha) so that the risk of false discoveries stays at a reasonable level, FDR < .05.

Researchers who mainly test true hypotheses with high power have a high discovery rate (many p-values below .05) and a low false discovery rate (FDR < .05). Figure 1 shows an example of a researcher who followed this strategy (for a detailed description of z-curve plots, see Schimmack, 2021).
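z-curve models the distribution of the absolute z-scores implied by reported results. For a two-sided p-value, that conversion is a one-liner; the sketch below (Python, purely for illustration — the actual z-curve package is written in R and also handles t, F, and chi-square statistics) shows the idea:

```python
from statistics import NormalDist


def p_to_z(p: float) -> float:
    """Absolute z-score implied by a two-sided p-value."""
    return NormalDist().inv_cdf(1 - p / 2)


# The conventional criterion p = .05 corresponds to z = 1.96,
# the cut-off that separates "significant" results in a z-curve plot.
print(round(p_to_z(0.05), 2))  # 1.96
```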

We see that out of the 317 test statistics retrieved from his articles, 246 were significant with alpha = .05. This is an observed discovery rate (ODR) of 78%. We also see that this discovery rate closely matches the estimated discovery rate (EDR) based on the distribution of the significant p-values, p < .05. The EDR is 79%. With an EDR of 79%, the maximum false discovery rate is only 1%. However, the 95% CI is wide, and the lower bound of the CI for the EDR, 27%, allows for up to 14% false discoveries.

When the ODR matches the EDR, there is no evidence of publication bias. In this case, we can improve the estimates by fitting all p-values, including the non-significant ones. With a tighter CI for the EDR, the 95% CI for the maximum FDR ranges from 1% to 3%. Thus, we can be confident that no more than 5% of the significant results with alpha = .05 are false discoveries. Readers can therefore continue to use alpha = .05 to look for interesting discoveries in Matsumoto’s articles.

Figure 3 shows the results for a different type of researcher, one who took risks and studied weak effect sizes with small samples. This produces many non-significant results that are often not published. Selection for significance inflates the observed discovery rate, but the z-curve plot and the comparison with the EDR reveal the influence of publication bias. Here the ODR is similar to Figure 1, but the EDR is only 11%. An EDR of 11% translates into a large maximum false discovery rate of 41%. In addition, the 95% CI of the EDR includes 5%, which means the risk of false positives could be as high as 100%. In this case, using alpha = .05 to interpret results as discoveries is very risky. Clearly, p < .05 means something very different when reading an article by David Matsumoto or Shelly Chaiken.
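The translation from EDR to maximum FDR used in these examples is consistent with Soric's upper bound: if only a fraction EDR of all tests would produce significant results, at most (1/EDR − 1) · α/(1 − α) of the significant results can be false positives. A minimal Python sketch (for illustration only; the published estimates come from the full z-curve model, with confidence intervals, so small differences reflect rounding of the EDR):

```python
def max_fdr(edr: float, alpha: float = 0.05) -> float:
    """Soric's upper bound on the false discovery rate, given an
    estimated discovery rate (EDR) and a significance criterion (alpha)."""
    return (1 / edr - 1) * (alpha / (1 - alpha))


# Matsumoto-style profile: EDR = .79 implies a maximum FDR of about 1%.
print(round(max_fdr(0.79), 3))  # 0.014
# Chaiken-style profile: EDR = .11 implies a maximum FDR of roughly 40%.
print(round(max_fdr(0.11), 2))  # 0.43
# If the EDR were as low as alpha itself (5%), the bound reaches 100%.
print(round(max_fdr(0.05), 2))  # 1.0
```

The same formula shows why lowering alpha helps: alpha enters the bound directly, and a stricter criterion also changes which results count as significant (and hence the EDR at that criterion), which is presumably why the adjusted alphas in the rankings below were found by recomputing the risk at each cut-off.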

Rather than dismissing all of Chaiken’s results, we can try to lower alpha to reduce the false discovery rate. If we set alpha = .01, the FDR is 15%. If we set alpha = .005, the FDR is 8%. To get the FDR below 5%, we need to set alpha to .001.

A uniform criterion of FDR < 5% is applied to all researchers in the rankings below. For some this means no adjustment to the traditional criterion. For others, alpha is lowered to .01, and for a few even lower than that.

The rankings below are based on automatically extracted test statistics from 40 journals (List of journals). The results should be interpreted with caution and treated as preliminary. They depend on the specific set of journals that were searched, the way results are reported, and many other factors. The data are available (data.drop), and researchers can exclude or add articles and run their own analyses using the z-curve package in R (https://replicationindex.com/2020/01/10/z-curve-2-0/).

I am also happy to receive feedback about coding errors. I also recommend hand-coding articles to adjust alpha for focal hypothesis tests. This typically lowers the EDR and increases the FDR. For example, the automated method produced an EDR of 31 for Bargh, whereas hand-coding of focal tests produced an EDR of 12 (Bargh-Audit).

And here are the rankings. The results are fully automated and I was not able to cover up the fact that I placed only #188 out of 400 in the rankings. In another post, I will explain how researchers can move up in the rankings. Of course, one way to move up in the rankings is to increase statistical power in future studies. The rankings will be updated again when the 2021 data are available.

Despite their preliminary nature, I am confident that the results provide valuable information. Until now, all p-values below .05 have been treated as if they are equally informative. The rankings here show that this is not the case. While p = .02 can be informative for one researcher, p = .002 may still entail a high false discovery risk for another.

Good science requires not only open and objective reporting of new data; it also requires unbiased review of the literature. However, there are no rules and regulations regarding citations, and many authors cherry-pick citations that are consistent with their claims. Even when studies have failed to replicate, original studies are cited without citing the replication failures. In some cases, authors even cite original articles that have been retracted. Fortunately, it is easy to spot these acts of unscientific behavior. Here I am starting a project to list examples of bad scientific behaviors. Hopefully, more scientists will take the time to hold their colleagues accountable for ethical behavior in citations. They can even do so by posting anonymously on the PubPeer comment site.

Rank | Name | Tests | ODR | EDR | ERR | FDR | Alpha
1Robert A. Emmons538789901.05
2Allison L. Skinner2295981851.05
3David Matsumoto3788379851.05
4Linda J. Skitka5326875822.05
5Todd K. Shackelford3057775822.05
6Jonathan B. Freeman2745975812.05
7Virgil Zeigler-Hill5157274812.05
8Arthur A. Stone3107573812.05
9David P. Schmitt2077871772.05
10Emily A. Impett5497770762.05
11Paula Bressan628270762.05
12Kurt Gray4877969812.05
13Michael E. McCullough3346969782.05
14Kipling D. Williams8437569772.05
15John M. Zelenski1567169762.05
16Elke U. Weber3126968770.05
17Hilary B. Bergsieker4396768742.05
18Cameron Anderson6527167743.05
19Rachael E. Jack2497066803.05
20Jamil Zaki4307866763.05
21A. Janet Tomiyama767865763.05
22Benjamin R. Karney3925665733.05
23Phoebe C. Ellsworth6057465723.05
24Jim Sidanius4876965723.05
25Amelie Mummendey4617065723.05
26Carol D. Ryff2808464763.05
27Juliane Degner4356364713.05
28Steven J. Heine5977863773.05
29David M. Amodio5846663703.05
30Thomas N Bradbury3986163693.05
31Elaine Fox4727962783.05
32Miles Hewstone14277062733.05
33Linda R. Tropp3446561803.05
34Rainer Greifeneder9447561773.05
35Klaus Fiedler19507761743.05
36Jesse Graham3777060763.05
37Richard W. Robins2707660704.05
38Simine Vazire1376660644.05
39On Amir2676759884.05
40Edward P. Lemay2898759814.05
41William B. Swann Jr.10707859804.05
42Margaret S. Clark5057559774.05
43Bernhard Leidner7246459654.05
44B. Keith Payne8797158764.05
45Ximena B. Arriaga2846658694.05
46Joris Lammers7286958694.05
47Patricia G. Devine6067158674.05
48Rainer Reisenzein2016557694.05
49Barbara A. Mellers2878056784.05
50Joris Lammers7056956694.05
51Jean M. Twenge3817256594.05
52Nicholas Epley15047455724.05
53Kaiping Peng5667754754.05
54Krishna Savani6387153695.05
55Leslie Ashburn-Nardo1098052835.05
56Lee Jussim2268052715.05
57Richard M. Ryan9987852695.05
58Ethan Kross6146652675.05
59Edward L. Deci2847952635.05
60Roger Giner-Sorolla6638151805.05
61Bertram F. Malle4227351755.05
62Jens B. Asendorpf2537451695.05
63Samuel D. Gosling1085851625.05
64Tessa V. West6917151595.05
65Paul Rozin4497850845.05
66Joachim I. Krueger4367850815.05
67Sheena S. Iyengar2076350805.05
68James J. Gross11047250775.05
69Mark Rubin3066850755.05
70Pieter Van Dessel5787050755.05
71Shinobu Kitayama9837650715.05
72Matthew J. Hornsey16567450715.05
73Janice R. Kelly3667550705.05
74Antonio L. Freitas2477950645.05
75Paul K. Piff1667750635.05
76Mina Cikara3927149805.05
77Beate Seibt3797249626.01
78Ludwin E. Molina1636949615.05
79Bertram Gawronski18037248766.01
80Penelope Lockwood4587148706.01
81Edward R. Hirt10428148656.01
82Matthew D. Lieberman3987247806.01
83John T. Cacioppo4387647696.01
84Agneta H. Fischer9527547696.01
85Leaf van Boven7117247676.01
86Stephanie A. Fryberg2486247666.01
87Daniel M. Wegner6027647656.01
88Anne E. Wilson7857147646.01
89Rainer Banse4027846726.01
90Alice H. Eagly3307546716.01
91Jeanne L. Tsai12417346676.01
92Jennifer S. Lerner1818046616.01
93Andrea L. Meltzer5495245726.01
94R. Chris Fraley6427045727.01
95Constantine Sedikides25667145706.01
96Paul Slovic3777445706.01
97Dacher Keltner12337245646.01
98Brian A. Nosek8166844817.01
99George Loewenstein7527144727.01
100Ursula Hess7747844717.01
101Jason P. Mitchell6007343737.01
102Jessica L. Tracy6327443717.01
103Charles M. Judd10547643687.01
104S. Alexander Haslam11987243647.01
105Mark Schaller5657343617.01
106Susan T. Fiske9117842747.01
107Lisa Feldman Barrett6446942707.01
108Jolanda Jetten19567342677.01
109Mario Mikulincer9018942647.01
110Bernadette Park9737742647.01
111Paul A. M. Van Lange10927042637.01
112Wendi L. Gardner7986742637.01
113Will M. Gervais1106942597.01
114Jordan B. Peterson2666041797.01
115Philip E. Tetlock5497941737.01
116Amanda B. Diekman4388341707.01
117Daniel H. J. Wigboldus4927641678.01
118Michael Inzlicht6866641638.01
119Naomi Ellemers23887441638.01
120Phillip Atiba Goff2996841627.01
121Stacey Sinclair3277041578.01
122Francesca Gino25217540698.01
123Michael I. Norton11367140698.01
124David J. Hauser1567440688.01
125Elizabeth Page-Gould4115740668.01
126Tiffany A. Ito3498040648.01
127Richard E. Petty27716940648.01
128Tim Wildschut13747340648.01
129Norbert Schwarz13377240638.01
130Veronika Job3627040638.01
131Wendy Wood4627540628.01
132Minah H. Jung1568339838.01
133Marcel Zeelenberg8687639798.01
134Tobias Greitemeyer17377239678.01
135Jason E. Plaks5827039678.01
136Carol S. Dweck10287039638.01
137Christian S. Crandall3627539598.01
138Harry T. Reis9986938749.01
139Vanessa K. Bohns4207738748.01
140Jerry Suls4137138688.01
141Eric D. Knowles3846838648.01
142C. Nathan DeWall13367338639.01
143Clayton R. Critcher6978238639.01
144John F. Dovidio20196938629.01
145Joshua Correll5496138629.01
146Abigail A. Scholer5565838629.01
147Chris Janiszewski1078138589.01
148Herbert Bless5867338579.01
149Mahzarin R. Banaji8807337789.01
150Rolf Reber2806437729.01
151Kevin N. Ochsner4067937709.01
152Mark J. Brandt2777037709.01
153Geoff MacDonald4066737679.01
154Mara Mather10387837679.01
155Antony S. R. Manstead16567237629.01
156Lorne Campbell4336737619.01
157Sanford E. DeVoe2367137619.01
158Ayelet Fishbach14167837599.01
159Fritz Strack6077537569.01
160Jeff T. Larsen18174366710.01
161Nyla R. Branscombe12767036659.01
162Yaacov Schul4116136649.01
163D. S. Moskowitz34187436639.01
164Pablo Brinol13566736629.01
165Todd B. Kashdan3777336619.01
166Barbara L. Fredrickson2877236619.01
167Duane T. Wegener9807736609.01
168Joanne V. Wood10937436609.01
169Daniel A. Effron4846636609.01
170Niall Bolger3766736589.01
171Craig A. Anderson4677636559.01
172Michael Harris Bond37873358410.01
173Glenn Adams27071357310.01
174Daniel M. Bernstein40473357010.01
175C. Miguel Brendl12176356810.01
176Azim F. Sharif18374356810.01
177Emily Balcetis59969356810.01
178Eva Walther49382356610.01
179Michael D. Robinson138878356610.01
180Igor Grossmann20364356610.01
181Diana I. Tamir15662356210.01
182Samuel L. Gaertner32175356110.01
183John T. Jost79470356110.01
184Eric L. Uhlmann45767356110.01
185Nalini Ambady125662355610.01
186Daphna Oyserman44655355410.01
187Victoria M. Esses29575355310.01
188Linda J. Levine49574347810.01
189Wiebke Bleidorn9963347410.01
190Thomas Gilovich119380346910.01
191Alexander J. Rothman13369346510.01
192Francis J. Flynn37872346310.01
193Paula M. Niedenthal52269346110.01
194Ozlem Ayduk54962345910.01
195Paul Ekman8870345510.01
196Alison Ledgerwood21475345410.01
197Christopher R. Agnew32575337610.01
198Michelle N. Shiota24260336311.01
199Malte Friese50161335711.01
200Kerry Kawakami48768335610.01
201Danu Anthony Stinson49477335411.01
202Jennifer A. Richeson83167335211.01
203Margo J. Monteith77376327711.01
204Ulrich Schimmack31875326311.01
205Mark Snyder56272326311.01
206Russell H. Fazio109469326111.01
207Eric van Dijk23867326011.01
208Tom Meyvis37777326011.01
209Eli J. Finkel139262325711.01
210Robert B. Cialdini37972325611.01
211Jonathan W. Kunstman43066325311.01
212Delroy L. Paulhus12177318212.01
213Yuen J. Huo13274318011.01
214Gerd Bohner51371317011.01
215Christopher K. Hsee68975316311.01
216Vivian Zayas25171316012.01
217John A. Bargh65172315512.01
218Tom Pyszczynski94869315412.01
219Roy F. Baumeister244269315212.01
220E. Ashby Plant83177315111.01
221Kathleen D. Vohs94468315112.01
222Jamie Arndt131869315012.01
223Anthony G. Greenwald35772308312.01
224Nicholas O. Rule129468307513.01
225Lauren J. Human44759307012.01
226Jennifer Crocker51568306712.01
227Dale T. Miller52171306412.01
228Thomas W. Schubert35370306012.01
229W. Keith Campbell52870305812.01
230Arthur Aron30765305612.01
231Pamela K. Smith14966305212.01
232Aaron C. Kay132070305112.01
233Steven W. Gangestad19863304113.005
234Eliot R. Smith44579297313.01
235Nir Halevy26268297213.01
236E. Allan Lind37082297213.01
237Richard E. Nisbett31973296913.01
238Hazel Rose Markus67476296813.01
239Emanuele Castano44569296513.01
240Dirk Wentura83065296413.01
241Boris Egloff27481295813.01
242Monica Biernat81377295713.01
243Gordon B. Moskowitz37472295713.01
244Russell Spears228673295513.01
245Jeff Greenberg135877295413.01
246Caryl E. Rusbult21860295413.01
247Naomi I. Eisenberger17974287914.01
248Brent W. Roberts56272287714.01
249Yoav Bar-Anan52575287613.01
250Eddie Harmon-Jones73873287014.01
251Matthew Feinberg29577286914.01
252Roland Neumann25877286713.01
253Eugene M. Caruso82275286413.01
254Ulrich Kuehnen82275286413.01
255Elizabeth W. Dunn39575286414.01
256Jeffry A. Simpson69774285513.01
257Sander L. Koole76765285214.01
258Richard J. Davidson38064285114.01
259Shelly L. Gable36464285014.01
260Adam D. Galinsky215470284913.01
261Grainne M. Fitzsimons58568284914.01
262Geoffrey J. Leonardelli29068284814.005
263Joshua Aronson18385284614.005
264Henk Aarts100367284514.005
265Vanessa K. Bohns42276277415.01
266Jan De Houwer197270277214.01
267Dan Ariely60070276914.01
268Charles Stangor18581276815.01
269Karl Christoph Klauer80167276514.01
270Mario Gollwitzer50058276214.01
271Jennifer S. Beer8056275414.01
272Eldar Shafir10778275114.01
273Guido H. E. Gendolla42276274714.005
274Klaus R. Scherer46783267815.01
275William G. Graziano53271266615.01
276Galen V. Bodenhausen58574266115.01
277Sonja Lyubomirsky53071265915.01
278Kai Sassenberg87271265615.01
279Kristin Laurin64863265115.01
280Claude M. Steele43473264215.005
281David G. Rand39270258115.01
282Paul Bloom50272257916.01
283Kerri L. Johnson53276257615.01
284Batja Mesquita41671257316.01
285Rebecca J. Schlegel26167257115.01
286Phillip R. Shaver56681257116.01
287David Dunning81874257016.01
288Laurie A. Rudman48272256816.01
289David A. Lishner10565256316.01
290Mark J. Landau95078254516.005
291Ronald S. Friedman18379254416.005
292Joel Cooper25772253916.005
293Alison L. Chasteen22368246916.01
294Jeff Galak31373246817.01
295Steven J. Sherman88874246216.01
296Shigehiro Oishi110964246117.01
297Thomas Mussweiler60470244317.005
298Mark W. Baldwin24772244117.005
299Evan P. Apfelbaum25662244117.005
300Nurit Shnabel56476237818.01
301Klaus Rothermund73871237618.01
302Felicia Pratto41073237518.01
303Jonathan Haidt36876237317.01
304Roland Imhoff36574237318.01
305Jeffrey W Sherman99268237117.01
306Jennifer L. Eberhardt20271236218.005
307Bernard A. Nijstad69371235218.005
308Brandon J. Schmeichel65266234517.005
309Sam J. Maglio32572234217.005
310David M. Buss46182228019.01
311Yoel Inbar28067227119.01
312Serena Chen86572226719.005
313Spike W. S. Lee14568226419.005
314Marilynn B. Brewer31475226218.005
315Michael Ross116470226218.005
316Dieter Frey153868225818.005
317G. Daniel Lassiter18982225519.01
318Sean M. McCrea58473225419.005
319Wendy Berry Mendes96568224419.005
320Paul W. Eastwick58365216919.005
321Kees van den Bos115084216920.005
322Maya Tamir134280216419.005
323Joseph P. Forgas88883215919.005
324Michaela Wanke36274215919.005
325Dolores Albarracin54066215620.005
326Elizabeth Levy Paluck3184215520.005
327Vanessa LoBue29968207621.01
328Christopher J. Armitage16062207321.005
329Elizabeth A. Phelps68678207221.005
330Jay J. van Bavel43764207121.005
331David A. Pizarro22771206921.005
332Andrew J. Elliot101881206721.005
333William A. Cunningham23876206422.005
334Kentaro Fujita45869206221.005
335Geoffrey L. Cohen159068205021.005
336Ana Guinote37876204721.005
337Tanya L. Chartrand42467203321.001
338Selin Kesebir32866197322.005
339Vincent Y. Yzerbyt141273197322.01
340Amy J. C. Cuddy17081197222.005
341James K. McNulty104756196523.005
342Robert S. Wyer87182196322.005
343Travis Proulx17463196222.005
344Peter M. Gollwitzer130364195822.005
345Nilanjana Dasgupta38376195222.005
346Richard P. Eibach75369194723.001
347Gerald L. Clore45674194522.001
348James M. Tyler13087187424.005
349Roland Deutsch36578187124.005
350Ed Diener49864186824.005
351Kennon M. Sheldon69874186623.005
352Wilhelm Hofmann62467186623.005
353Laura L. Carstensen72377186424.005
354Toni Schmader54669186124.005
355Frank D. Fincham73469185924.005
356David K. Sherman112861185724.005
357Lisa K. Libby41865185424.005
358Chen-Bo Zhong32768184925.005
359Stefan C. Schmukle11462177126.005
360Michel Tuan Pham24686176825.005
361Leandre R. Fabrigar63270176726.005
362Neal J. Roese36864176525.005
363Carey K. Morewedge63376176526.005
364Timothy D. Wilson79865176326.005
365Brad J. Bushman89774176225.005
366Ara Norenzayan22572176125.005
367Benoit Monin63565175625.005
368Michael W. Kraus61772175526.005
369Ad van Knippenberg68372175526.001
370E. Tory. Higgins186868175425.001
371Ap Dijksterhuis75068175426.005
372Joseph Cesario14662174526.001
373Simone Schnall27062173126.001
374Joshua M. Ackerman38053167013.01
375Melissa J. Ferguson116372166927.005
376Laura A. King39176166829.005
377Daniel T. Gilbert72465166527.005
378Charles S. Carver15482166428.005
379Leif D. Nelson40974166428.005
380David DeSteno20183165728.005
381Sandra L. Murray69760165528.001
382Heejung S. Kim85859165529.001
383Mark P. Zanna65964164828.001
384Nira Liberman130475156531.005
385Gun R. Semin15979156429.005
386Tal Eyal43962156229.005
387Nathaniel M Lambert45666155930.001
388Angela L. Duckworth12261155530.005
389Dana R. Carney20060155330.001
390Lee Ross34977146331.001
391Arie W. Kruglanski122878145833.001
392Ziva Kunda21767145631.001
393Shelley E. Taylor42769145231.001
394Jon K. Maner104065145232.001
395Gabriele Oettingen104761144933.001
396Nicole L. Mead24070144633.01
397Gregory M. Walton58769144433.001
398Michael A. Olson34665136335.001
399Fiona Lee22167135834.001
400Melody M. Chao23757135836.001
401Adam L. Alter31478135436.001
402Sarah E. Hill50978135234.001
403Jaime L. Kurtz9155133837.001
404Michael A. Zarate12052133136.001
405Jennifer K. Bosson65976126440.001
406Daniel M. Oppenheimer19880126037.001
407Deborah A. Prentice8980125738.001
408Yaacov Trope127773125738.001
409Oscar Ybarra30563125540.001
410William von Hippel39865124840.001
411Steven J. Spencer54167124438.001
412Martie G. Haselton18673115443.001
413Shelly Chaiken36074115244.001
414Susan M. Andersen36174114843.001
415Dov Cohen64168114441.001
416Mark Muraven49652114441.001
417Ian McGregor40966114041.001
418Hans Ijzerman2145694651.001
419Linda M. Isbell1156494150.001
420Cheryl J. Wakslak2787383559.001

33 thoughts on “Personalized P-Values for Social/Personality Psychologists”

  1. Only 801 of the listed 1260 effects were actually taken from research that I was involved in (some seem to stem from articles for which I was editor, others are a mystery to me). On the other hand, the majority of my research is missing. It seems preferable to publish data that is actually based on a more or less representative sample of research actually done by the person with whom that data is associated.

    1. Thank you for the comment. They are valuable to improve the informativeness of the z-curve analyses.
1. only social/personality journals and general journals like Psych Science were used (I posted a list of the journals).
      I will make clear which journals were used.
2. I am trying to screen out mentions of names as editor, but the program is not perfect. I will look into this and update accordingly.
      3. I found a way to screen out more articles where your name appeared in footnotes (thank you).
      4. I updated the results and they did improve.
      5. Please check the new results.

      1. Thank you for the quick response. Some of my research is published in psychophysiology or cognitive journals hence I now understand why so much is missing.

2. I suspect that research practices differ when physiological measures are used or in cognitive studies with within-subject designs. I will eventually do similar posts for other areas.

  2. I’m dismayed (and aghast) to see that I’m almost at the bottom of this list. Any advice on how to investigate this further to see where the problem lies?

    1. Thank you for your comment.
      You can download a file called “William von Hippel-rindex.csv”
      It contains all the articles that were used and computes the R-Index based on the z-scores found for that article. The R-Index is a simple way to estimate replicability that works for small sets of test statistics. An R-Index of 50 would suggest that the replicability is about 50%. The EDR would be lower, but is hard to estimate with a small set of test statistics. The file is sorted by the R-Index. Articles with an R-Index below 50 are probably not robust. This is a good way to start diagnosing the problem.
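For readers who want to see what such a file computes, the R-Index can be sketched as median observed power corrected for the inflation produced by selecting for significance (R-Index = median observed power − inflation, where inflation = success rate − median observed power). The Python sketch below is a hypothetical illustration of that formula, not the actual spreadsheet code, and it simplifies by assuming two-sided z-tests:

```python
from statistics import NormalDist, median


def r_index(z_scores, alpha=0.05):
    """Sketch of the R-Index: 2 * median observed power - success rate.

    Observed power is approximated for a two-sided z-test, ignoring
    the negligible probability mass in the opposite tail."""
    crit = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = .05
    powers = [1 - NormalDist().cdf(crit - abs(z)) for z in z_scores]
    mop = median(powers)  # median observed power
    success = sum(abs(z) > crit for z in z_scores) / len(z_scores)
    return 2 * mop - success
```

An article whose z-scores all cluster just above 1.96 has a success rate near 100% but median observed power near 50%, so its R-Index drops toward zero — the pattern described above as probably not robust.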

      1. Hi Uli, that’s very helpful – thanks!

        But now I’m confused. To start with the worst offenders on my list, I have four papers with an R-Index of 0. I can’t tell what two of them are, as your identifier doesn’t include the article title or authors, but two of them are clear. The first of those two has large samples, reports a wide variety of large and small correlations, and strikes me as highly replicable. Indeed, study 2 (N=466) is a direct replication of study 1 (N=196) with an even larger sample. Study 3 goes in a slightly different direction, but mostly relies on the data from Study 2. The other paper reports large samples (Ns = 200) but small effects. We submitted it with only one study, the editor asked for replication, we ran a direct replication with the same sample size and found the same effect. Those are both in the paper. Since then we’ve tried to replicate it once and have succeeded (that finding isn’t yet published).

        That’s the first issue, and strikes me as the most important. Secondarily, there are at least four or five papers in this list that aren’t my own – perhaps more but it’s hard to tell what some of the papers are – and the resultant list of papers is only about 1/3 of my empirical publications. Thus, setting aside the most important issue above, I don’t have a clear sense of what my actual replicability score would look like with all of my papers.

        All the best, Bill

2. Please check the number of results. Many papers with an R-Index of 0 have only one result, which is often just a missing value, meaning no results were found. So you can ignore those.

  3. There are numerous correlations reported in both papers, along with various mediational analyses in one of them, so definitely not a single result.

    With regard to the second issue, the file lists the journal title and year, but that’s it. Sometimes I haven’t published in that journal in that year, so I know it’s not me. Sometimes I have, but in this particular case the only paper I published in that journal in that year has another one of the R = 0 examples, but includes a sample in the millions and a multiverse analysis. There’s no chance that could have a replicability index of 0.

  4. Thanks Uli, very kind of you to offer to run the analysis for me. I’ve created a dropbox folder with all of my empirical articles in it and shared it with you. Let me know if that doesn’t come through. Best, Bill

  5. Hi Uli,

    I am surprised that the work you are analyzing for my index contains only 36 entries (when Web of Science retrieves 155). Two of the entries you list are not mine.

    Please make sure you include all empirical publications for an author, which in my case span a variety of areas, methods, and journals. Excluding the papers that are not mine would also be methodologically sound and make your own work more authoritative before you disseminate it.

    Thanks,
    Dolores

    1. The method uses a sampling approach. It is based on the journals that are tracked in the replicability rankings of 120 psychology journals, although it may expand as the rankings expand. The list of 120 journals includes the major social psychology journals. So, it is possible that your results might differ for other areas, but as this list focusses on social psychology, it also makes sense to focus on these journals.

    2. P.S. We all learned after 2011 that the way we collected and analyzed data was wrong and resulted in inflated estimates of replicability and effect sizes. These results mainly reflect this. How have your research practices changed over the past 10 years in response to the replication crisis?

      1. There are two articles of which I am not an author. They are not too easy to find in the very small sample you have. I have to find your data again to point them out but they are pretty obvious from the authors and should be checked. Thanks

      2. It is not that easy. I would have to open all 36 articles to check the full author list. As you already did the work, it would be nice to share the information with me so that I can remove them.

      3. These two are not mine, Uli:

        entry 26, The impression management of intelligence
        entry 5, US southern and northern differences in perception

        You should add an index of how much of an author’s work you’re tracking. It is likely not a valid representation of the author per se unless you select papers at random from the person’s record.

        The replication crisis was fully established towards the mid-2010s right? so the last 10 years do not reflect the changes in response to it.

      4. I found the problem why those two studies were included and ran the search again. The main results showed a change in the ODR from 67 to 66 percent and in the EDR from 19 to 21 percent.

Not all of your other publications are original studies (e.g., meta-analyses). Others are in years or journals that are not covered. If you send me a folder with pdf files of these articles, I am happy to include them.

      5. Thanks, Uli. I will get you the PDFs. Meta-analyses are still quantitative research, so you should find a way of including them. The statistics are all comparable and power matters too.

6. Meta-analyses are different from original articles in important ways.
  1. Authors have no influence on the quality of the studies they meta-analyze.
  2. The focus is on effect size estimation and not on hypothesis testing.
  Thus, it makes no sense to include them in an investigation of the robustness of original research.

      7. Well, they don’t control the quality but they can select for quality and they definitely have statistical power considerations. In many cases, the focus IS actually on hypothesis testing and testing new hypotheses, and the method does signal an interest in reproducibility for sure.

8. I am not saying meta-analyses are unimportant or cannot be evaluated, but my method cannot do this. This also means that the results here are not an overall evaluation of a researcher, which no single index can provide (including the H-Index).
