Since 2011, it has been an open secret that many published results in psychology journals do not replicate. The replicability of published results is particularly low in social psychology (Open Science Collaboration, 2015).

A key reason for low replicability is that researchers are rewarded for publishing as many articles as possible, without regard for the replicability of the published findings. This incentive structure is maintained by journal editors, review panels of granting agencies, and hiring and promotion committees at universities.

To change the incentive structure, I developed the Replicability Index, a blog that critically examines the replicability, credibility, and integrity of psychological science. In 2016, I created the first replicability rankings of psychology departments (Schimmack, 2016). In response to scientific criticism of these methods, I have improved the selection of articles used in departmental reviews.

1. I am using Web of Science to obtain lists of published articles by individual authors (Schimmack, 2022). This method minimizes the chance that articles by other authors are mistakenly attributed to a researcher in a replicability analysis. It also allows me to classify researchers into areas based on the frequency of publications in specialized journals. Currently, I cannot evaluate neuroscience research, so the rankings are limited to cognitive, social, developmental, clinical, and applied psychologists.

2. I am using departments’ websites to identify researchers who belong to the psychology department. This eliminates articles by researchers in other departments.

3. I am only using tenured, active professors. This eliminates emeritus professors from the evaluation of departments. I am not including assistant professors because the published results might negatively affect their chances of getting tenure. Another reason is that they often do not have enough publications at their current university to produce meaningful results.

Like all empirical research, the present results rely on a number of assumptions and have some limitations. The main limitations are that

(a) only results that were found in an automatic search are included

(b) only results published in 120 journals are included (see list of journals)

(c) published significant results (p < .05) may not be a representative sample of all significant results

(d) point estimates are imprecise and can vary based on sampling error alone.

These limitations do not invalidate the results. Large differences in replicability estimates are likely to predict real differences in the success rates of actual replication studies (Schimmack, 2022).

## Harvard

I used the department website to find core members of the psychology department. I counted 23 professors and 1 associate professor, which makes it one of the smaller departments in North America. Not all researchers conduct quantitative research and report test statistics in their results sections. I limited the analysis to 16 professors and 1 associate professor who had at least 100 significant test statistics.

Figure 1 shows the z-curve for all 13,147 test statistics in articles published by these 17 faculty members. I use this figure to explain how a z-curve analysis provides information about replicability and other useful meta-statistics.
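For readers who want to see what such a plot encodes, here is a minimal, self-contained sketch that draws a z-curve style histogram from simulated data. It is not the published figure and not the z-curve estimation method; the simulated mixture of null and real effects, the 20% publication rate for non-significant results, and all variable names are illustrative assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(1)

# Simulated studies: a mix of true null effects and real effects (mean |z| around 3)
z_null = np.abs(rng.normal(0.0, 1.0, 5000))
z_real = np.abs(rng.normal(3.0, 1.0, 5000))
z_all = np.concatenate([z_null, z_real])

# Selection for significance: keep all significant results, but only 20% of the rest
z_crit = stats.norm.isf(0.05 / 2)                            # ~1.96
published = z_all[(z_all >= z_crit) | (rng.uniform(size=z_all.size) < 0.20)]

# Histogram of absolute z-scores from 0 to 6; values above 6 fall outside the range,
# as in the published figures
plt.hist(published, bins=60, range=(0, 6), color="lightblue", edgecolor="grey")
plt.axvline(z_crit, color="red")                             # p = .05 (solid line)
plt.axvline(stats.norm.isf(0.10 / 2), color="red", ls="--")  # p = .10 (dashed line)
plt.xlabel("absolute z-score")
plt.ylabel("frequency")
plt.title("z-curve style histogram (simulated data)")
plt.show()
```

In this simulation, the steep drop at the solid red line mimics the selection for significance described in point 2 below.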

1. All test statistics are converted into absolute z-scores as a common metric of the strength of evidence (effect size over sampling error) against the null-hypothesis (typically H0 = no effect). A z-curve plot is a histogram of absolute z-scores in the range from 0 to 6. The 1,465 z-scores greater than 6 are not shown because z-scores of this magnitude are extremely unlikely to occur when the null-hypothesis is true (particle physics uses z > 5 for significance). Although they are not shown, they are included in the meta-statistics.

2. Visual inspection of the histogram shows a steep drop in frequencies at z = 1.96 (solid red line), which corresponds to the standard criterion for statistical significance, p = .05 (two-tailed). This drop shows that published results are selected for significance. The dashed red line marks p = .10, which is often used for marginal significance. Thus, even more results are presented as evidence for an effect than the .05 criterion suggests.

3. To quantify the amount of selection bias, z-curve fits a statistical model to the distribution of statistically significant results (z > 1.96). The grey curve shows the predicted values for the observed significant results and the unobserved non-significant results. The full grey curve is not shown to present a clear picture of the observed distribution. The statistically significant results (including z > 6) make up 27% of the total area under the grey curve. This is called the expected discovery rate (EDR) because it estimates the percentage of significant results that researchers actually obtain in their statistical analyses. In comparison, 69% of the published results are significant (including z > 6). This percentage is called the observed discovery rate (ODR), which is the rate of significant results in published journal articles. The difference between a 69% ODR and a 27% EDR provides an estimate of the extent of selection for significance. The difference of ~40 percentage points is fairly large. The upper limit of the 95% confidence interval for the EDR is 38%. Thus, the discrepancy is not just due to sampling error. To put this result in context, it is possible to compare it to the average for 120 psychology journals in 2010 (Schimmack, 2022). The ODR (69% vs. 72%) and the EDR (27% vs. 28%) are similar. This suggests that the research produced by Harvard faculty members is neither more nor less replicable than research produced at other universities.

4. The EDR can be used to estimate the risk that published results are false positives (i.e., a statistically significant result when H0 is true), using Soric’s (1989) formula for the maximum false discovery rate. An EDR of 27% implies that no more than 14% of the significant results are false positives. However, the lower limit of the 95% CI of the EDR, 18%, allows for up to 24% false positives. Most readers are likely to agree that this is too high. One solution to this problem is to lower the conventional criterion for statistical significance (Benjamin et al., 2017). Figure 2 shows that alpha = .005 reduces the point estimate of the FDR to 3%, with an upper limit of the 95% confidence interval of 5%. Thus, without any further information, readers could use this stricter criterion to interpret results published in articles by researchers in the psychology department of Harvard University. The sketch below illustrates these calculations.
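The following sketch reproduces the two calculations in points 3 and 4 that do not require the z-curve model fit itself: the observed discovery rate and Soric’s (1989) maximum false discovery rate for a given EDR. The EDR itself comes from the z-curve mixture model and is taken as given here; the function names are mine.

```python
import numpy as np
from scipy import stats

def observed_discovery_rate(abs_z, alpha=0.05):
    """Proportion of reported results that are significant at the chosen alpha."""
    z_crit = stats.norm.isf(alpha / 2)          # ~1.96 for alpha = .05
    return np.mean(np.asarray(abs_z) >= z_crit)

def soric_max_fdr(edr, alpha=0.05):
    """Soric's (1989) upper bound on the false discovery rate for a given discovery rate."""
    return (1 / edr - 1) * (alpha / (1 - alpha))

# Values reported in the text: EDR = 27%, lower limit of its 95% CI = 18%
print(round(soric_max_fdr(0.27), 2))   # -> 0.14: at most ~14% false positives
print(round(soric_max_fdr(0.18), 2))   # -> 0.24: up to ~24% at the lower CI limit
```

Note that the 3% estimate reported for alpha = .005 is not obtained by plugging the 27% EDR into this formula; the discovery rate is presumably re-estimated relative to the stricter threshold.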

Most of the faculty are cognitive psychologists (k = 7) or clinical psychologists (k = 5). The z-curve for clinical research shows a lower EDR and expected replication rate (ERR), but the confidence intervals are wide and the difference may just reflect sampling error.

Consistent with other comparisons of disciplines, cognitive results have a higher EDR and ERR, but the confidence intervals are too wide to conclude that this difference is statistically significant at Harvard. Thus, the overall results hold largely across areas.

The next analysis examines whether research practices changed in response to the credibility crisis in psychology. I selected articles published since 2016 for this purpose.

The EDR for these more recent articles is higher than the EDR for all years (42% vs. 27%), whereas the ODR (64% vs. 67%) and the ERR (68% vs. 69%) remained essentially unchanged. Thus, selection for significance has decreased, but it is still present in more recent articles.

There is considerable variability across individual researchers, although confidence intervals are often wide due to the smaller number of test statistics. The table below shows the meta-statistics (in percent) of all 17 faculty members who provided results for the departmental z-curve. You can see the z-curve for each individual faculty member by clicking on their name.

| Rank | Name | ARP | ERR | EDR | FDR |
|------|------|-----|-----|-----|-----|
| 1 | Mina Cikara | 69 | 73 | 65 | 3 |
| 2 | Samuel J. Gershman | 69 | 77 | 60 | 4 |
| 3 | George A. Alvarez | 62 | 83 | 41 | 8 |
| 4 | Daniel L. Schacter | 57 | 70 | 44 | 7 |
| 5 | Alfonso Caramazza | 56 | 71 | 41 | 8 |
| 6 | Mahzarin R. Banaji | 56 | 75 | 37 | 9 |
| 7 | Jason P. Mitchell | 55 | 60 | 50 | 5 |
| 8 | Katie A. McLaughlin | 51 | 72 | 30 | 12 |
| 9 | Fiery Cushman | 50 | 77 | 22 | 19 |
| 10 | Elizabeth S. Spelke | 49 | 64 | 33 | 11 |
| 11 | Elizabeth A. Phelps | 46 | 69 | 22 | 18 |
| 12 | Susan E. Carey | 45 | 61 | 28 | 13 |
| 13 | Jesse Snedeker | 45 | 67 | 22 | 18 |
| 14 | Matthew K. Nock | 43 | 57 | 29 | 13 |
| 15 | Jill M. Hooley | 43 | 59 | 27 | 14 |
| 16 | Daniel T. Gilbert | 41 | 64 | 17 | 26 |
| 17 | John R. Weisz | 29 | 43 | 15 | 31 |
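As a rough consistency check on the table, the ARP column appears to be the average of the ERR and EDR columns, and the FDR column matches Soric’s bound applied to the EDR. This reading of the columns is my inference from the values, not something stated in the table itself; the sketch below verifies it for the first row.

```python
def row_stats(err_pct, edr_pct, alpha=0.05):
    """ARP as the mean of ERR and EDR; FDR as Soric's bound on the EDR (all in percent).
    Assumed definitions; they reproduce the values in the table above."""
    arp = (err_pct + edr_pct) / 2
    fdr = 100 * (1 / (edr_pct / 100) - 1) * (alpha / (1 - alpha))
    return round(arp), round(fdr)

# First row of the table: ERR = 73, EDR = 65 -> ARP = 69, FDR = 3
print(row_stats(73, 65))   # (69, 3)
```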