Replicability Rankings of Psychology Departments

Introduction

Since 2011, it has been an open secret that many published results in psychology journals do not replicate. The replicability of published results is particularly low in social psychology (Open Science Collaboration, 2015).

A key reason for low replicability is that researchers are rewarded for publishing as many articles as possible, with little regard for whether the published findings replicate. This incentive structure is maintained by journal editors, review panels of granting agencies, and hiring and promotion committees at universities.

To change the incentive structure, I developed the Replicability Index, a blog that critically examines the replicability, credibility, and integrity of psychological science. In 2016, I created the first replicability rankings of psychology departments (Schimmack, 2016). In response to scientific criticism of these methods, I have improved the process for selecting the articles used in the departmental analyses.

1. I am using Web of Science to obtain lists of published articles from individual authors (Schimmack, 2022). This method minimizes the chance that articles by other authors with the same name are included in a replicability analysis. It also allows me to classify researchers into areas based on the frequency of their publications in specialized journals. Currently, I cannot evaluate neuroscience research, so the rankings are limited to cognitive, social, developmental, clinical, and applied psychologists.

2. I am using departments' websites to identify researchers who belong to the psychology department. This eliminates articles by authors from other departments.

3. I am only using tenured, active professors. This eliminates emeritus professors from the evaluation of departments. I am not including assistant professors because the published results might negatively impact their chances of getting tenure. Another reason is that they often do not have enough publications at their current university to produce meaningful results.

Like all empirical research, the present results rely on a number of assumptions and have some limitations. The main limitations are that
(a) only results that were found in an automatic search are included;
(b) only results published in 120 journals are included (see list of journals);
(c) published significant results (p < .05) may not be a representative sample of all significant results; and
(d) point estimates are imprecise and can vary based on sampling error alone.

These limitations do not invalidate the results. Large differences in replicability estimates are likely to predict real differences in the success rates of actual replication studies (Schimmack, 2022).

Department Rankings

The main results of the replicability analysis are presented in the table below. Detailed analyses of departments and faculty members can be found by clicking on the hyperlink of a university.

The table is sorted by the all-time actual replication prediction (ARP), which is the average of the expected replication rate (ERR) and the expected discovery rate (EDR). It is easy to re-sort the table by other meta-statistics (see the sketch after the table).

The ERR is the expected replication rate that is estimated based on the average power of studies with significant results (p < .05).
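
As a rough illustration, if the true mean z-scores of a set of significant studies were known, the ERR would simply be their average two-sided power. The sketch below is a hypothetical toy example: actual z-curve estimates these means from a mixture model fitted to the observed z-scores, and the values 1.5, 2.2, and 3.0 are made-up inputs.

```python
from scipy.stats import norm

def replication_power(z_true, alpha=0.05):
    # Two-sided power of a replication study whose test statistic is
    # normal with mean z_true (the noncentrality) and unit variance.
    z_crit = norm.ppf(1 - alpha / 2)  # 1.96 for alpha = .05
    return norm.cdf(-z_crit - z_true) + 1 - norm.cdf(z_crit - z_true)

# Hypothetical true mean z-scores of three significant studies.
powers = [replication_power(z) for z in (1.5, 2.2, 3.0)]
print(sum(powers) / len(powers))  # ERR estimate: about 0.59
```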

The EDR is the expected discovery rate that is estimated based on the average power of studies before selection for significance. It is estimated using the distribution of significant p-values converted into z-scores.
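
For readers who want to see the conversion step, the sketch below turns two-sided p-values into absolute z-scores with the standard-normal quantile function; the p-values are hypothetical. Z-curve then models the distribution of the z-scores that exceed the significance threshold (z = 1.96 for p < .05, two-sided).

```python
from scipy.stats import norm

p_values = [0.049, 0.01, 0.001, 0.20]  # hypothetical published p-values
# A two-sided p-value p corresponds to an absolute z-score of
# norm.ppf(1 - p / 2).
z_scores = [norm.ppf(1 - p / 2) for p in p_values]
print([round(z, 2) for z in z_scores])    # [1.97, 2.58, 3.29, 1.28]
significant = [z for z in z_scores if z > 1.96]
print(significant)  # only the significant results enter the z-curve model
```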

Bias is the discrepancy between the observed discovery rate (ODR), that is, the percentage of significant results in published articles, and the expected discovery rate. Bias reflects the selective reporting of significant results.
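
As a concrete, made-up example, the sketch below computes the ODR from hypothetical counts of reported test results and subtracts a hypothetical EDR estimate:

```python
# Hypothetical counts of reported test results in a set of articles.
significant, total = 180, 200
odr = significant / total   # observed discovery rate: 0.90
edr = 0.40                  # hypothetical z-curve estimate of the EDR
bias = odr - edr            # 0.50, i.e., 50 percentage points
print(f"ODR = {odr:.0%}, EDR = {edr:.0%}, bias = {bias:.0%}")
```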

The FDR is the false discovery risk. It is estimated using Soric's formula, which converts the expected discovery rate into an estimate of the maximum percentage of false-positive results under the assumption that true hypotheses are tested with 100% power.
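
Soric's formula is simple enough to verify by hand: FDR = (1/EDR - 1) x alpha/(1 - alpha), with alpha = .05. The sketch below implements it and reproduces one of the table values (the function name soric_fdr is mine):

```python
def soric_fdr(edr, alpha=0.05):
    # Soric's upper bound on the false discovery rate: the maximum
    # share of false positives given a discovery rate of `edr`,
    # assuming all true hypotheses are tested with 100% power.
    return (1 / edr - 1) * alpha / (1 - alpha)

# The all-time EDR of 41% for the University of Michigan implies a
# false discovery risk of about 8%, matching the table below.
print(round(soric_fdr(0.41) * 100))  # 8
```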

For more information about these statistics, please look for tutorials or articles on z-curve on this blog.

All statistics are percentages. Columns labeled "All" are based on all years; columns labeled "5Y" are based on the past five years.

University                      ARP-All  ERR-All  EDR-All  Bias-All  FDR-All  ARP-5Y  ERR-5Y  EDR-5Y  Bias-5Y  FDR-5Y
University of Michigan          55       69       41       31        8        58.5    72      45      27       6
Western University              54.5     70       39       29        8        73.5    77      70      1        2
University of Toronto           54       67       41       28        8        56      69      43      24       7
Princeton University            52.5     65       40       30        8        67.5    74      61      8        3
University of Amsterdam         50.5     66       35       35        10       47      69      25      41       15
Harvard University              48       69       27       40        14       55      68      42      22       7
Yale University                 48       65       31       38        12       55      70      40      31       8
University Texas - Austin       46.5     66       27       44        14       55.5    70      41      24       8
University of British Columbia  44       67       21       47        20       47      65      29      34       13
McGill University               43.5     66       21       57        20       43.5    69      18      57       23
Columbia University             41.5     62       21       49        19       39      61      17      50       26
New York University             41       62       20       50        20       48      70      26      43       15
Stanford University             41       60       22       45        18       58      66      50      15       5
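
For readers who want to re-rank the departments themselves, the sketch below assumes the table has been exported to a CSV file with the same column headers (the file name rankings.csv is hypothetical) and sorts it by the five-year EDR:

```python
import csv

# Hypothetical export of the table above, with the same column headers.
with open("rankings.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Re-rank departments by the five-year expected discovery rate.
rows.sort(key=lambda r: float(r["EDR-5Y"]), reverse=True)
for r in rows[:3]:
    print(r["University"], r["EDR-5Y"])
```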
