2021 Replicability Report for the Psychology Department at the University of Amsterdam

Since 2011, it has been an open secret that many published results in psychology journals do not replicate. The replicability of published results is particularly low in social psychology (Open Science Collaboration, 2015).

A key reason for low replicability is that researchers are rewarded for publishing as many articles as possible, with little regard for the replicability of the published findings. This incentive structure is maintained by journal editors, review panels of granting agencies, and hiring and promotion committees at universities.

To change the incentive structure, I developed the Replicability Index, a blog that critically examines the replicability, credibility, and integrity of psychological science. In 2016, I created the first replicability rankings of psychology departments (Schimmack, 2016). Based on scientific criticisms of these methods, I have improved the selection of articles that are used in departmental reviews.

1. I am using Web of Science to obtain lists of published articles from individual authors (Schimmack, 2022). This method minimizes the chance that articles that do not belong to an author are included in a replicability analysis. It also allows me to classify researchers into areas based on the frequency of publications in specialized journals. Currently, I cannot evaluate neuroscience research. So, the rankings are limited to cognitive, social, developmental, clinical, and applied psychologists.

2. I am using departments’ websites to identify researchers who belong to the psychology department. This eliminates articles by researchers from other departments.

3. I am only using tenured, active professors. This eliminates emeritus professors from the evaluation of departments. I am not including assistant professors because the published results might negatively affect their chances of getting tenure. Another reason is that they often do not have enough publications at their current university to produce meaningful results.

Like all empirical research, the present results rely on a number of assumptions and have some limitations. The main limitations are that
(a) only results that were found in an automatic search are included
(b) only results published in 120 journals are included (see list of journals)
(c) published significant results (p < .05) may not be a representative sample of all significant results
(d) point estimates are imprecise and can vary based on sampling error alone.

These limitations do not invalidate the results. Large differences in replicability estimates are likely to predict real differences in success rates of actual replication studies (Schimmack, 2022).

University of Amsterdam

The University of Amsterdam is the highest-ranked European psychology department (QS Rankings). I used the department website to find core members of the psychology department. I found 48 senior faculty members. Not all researchers conduct quantitative research and report test statistics in their results sections. Therefore, the analysis is limited to the 25 faculty members who had at least 100 test statistics.

A search of the database retrieved 13,529 test statistics. This is the highest number of statistical tests for all departments examined so far (Department Rankings). This high research output partially explains the University of Amsterdam's strong position in prestige rankings.

Figure 1 shows the z-curve plot for these results. I use the Figure to explain how a z-curve analysis provides information about replicability and other useful meta-statistics.

1. All test-statistics are converted into absolute z-scores as a common metric of the strength of evidence (effect size over sampling error) against the null-hypothesis (typically H0 = no effect). A z-curve plot is a histogram of absolute z-scores in the range from 0 to 6. The 2,034 z-scores greater than 6 are not shown because z-scores of this magnitude are extremely unlikely to occur when the null-hypothesis is true (particle physics uses z > 5 for significance). Although they are not shown, they are included in the computation of the meta-statistics.
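As a concrete illustration of this conversion, here is a minimal Python sketch; the helper names are mine and are not part of any z-curve software. Reported test statistics are first turned into two-tailed p-values and then into absolute z-scores, which are binned between 0 and 6 for the plot:

```python
# Minimal sketch: convert reported test statistics into absolute z-scores.
# Helper names are illustrative; they are not the z-curve package API.
import numpy as np
from scipy import stats

def p_to_abs_z(p_two_tailed):
    """Two-tailed p-value -> absolute z-score (strength of evidence)."""
    return stats.norm.isf(p_two_tailed / 2)  # p = .05 -> 1.96, p = .10 -> 1.64

def t_to_abs_z(t, df):
    """t-statistic -> absolute z-score via its two-tailed p-value."""
    p = 2 * stats.t.sf(abs(t), df)
    return p_to_abs_z(p)

# Example: t(40) = 2.50 gives p of about .017 and |z| of about 2.40.
z_scores = np.array([t_to_abs_z(t, df) for t, df in [(2.50, 40), (1.10, 25), (3.80, 60)]])

# A z-curve plot is a histogram of these absolute z-scores between 0 and 6;
# z-scores above 6 are dropped from the plot but kept for the meta-statistics.
counts, edges = np.histogram(z_scores[z_scores <= 6], bins=np.linspace(0, 6, 61))
```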

2. Visual inspection of the histogram shows a drop in frequencies at z = 1.96 (solid red line) that corresponds to the standard criterion for statistical significance, p = .05 (two-tailed). This shows that published results are selected for significance. The dashed red line shows significance for p < .10, which is often used for marginal significance. Thus, there are more results that are presented as significant than the .05 criterion suggests.

3. To quantify the amount of selection bias, z-curve fits a statistical model to the distribution of statistically significant results (z > 1.96). The grey curve shows the predicted values for the observed significant results and the unobserved non-significant results. The statistically significant results (including z > 6) make up 35% of the total area under the grey curve. This is called the expected discovery rate (EDR) because it estimates the percentage of significant results that researchers actually obtain in their statistical analyses. In comparison, 70% of the published results (including z > 6) are significant. This percentage is called the observed discovery rate (ODR), which is the rate of significant results in published journal articles. The difference between a 70% ODR and a 35% EDR provides an estimate of the extent of selection for significance. The difference of ~35 percentage points is large in absolute terms, but relatively small in comparison to other psychology departments. Even the upper limit of the 95% confidence interval for the EDR (46%) remains well below the ODR. Thus, the discrepancy cannot be attributed to sampling error alone. To put this result in context, it is possible to compare it to the average for 120 psychology journals in 2010 (Schimmack, 2022). The ODR (70% vs. 72%) is similar, but the EDR is higher (35% vs. 28%), suggesting less severe selection for significance by the faculty members at the University of Amsterdam who are included in this analysis.
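In formula form, the two rates just described are

\[
\mathrm{ODR} = \frac{\#\{\text{published tests with } p < .05\}}{\#\{\text{all published tests}\}} \approx .70,
\qquad
\mathrm{EDR} = \frac{\text{area under the grey curve for } z > 1.96}{\text{total area under the grey curve}} \approx .35 ,
\]

and the estimate of selection for significance is simply the difference, ODR − EDR, of roughly 35 percentage points.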

4. The z-curve model also estimates the average power of the subset of studies with significant results (p < .05, two-tailed). This estimate is called the expected replication rate (ERR) because it predicts the percentage of significant results that are expected if the same analyses were repeated in exact replication studies with the same sample sizes. The ERR of 66% suggests a fairly high replication rate. The problem is that actual replication rates are lower than the ERR predictions (about 40%; Open Science Collaboration, 2015). The main reason is that it is impossible to conduct exact replication studies and that selection for significance leads to regression to the mean when replication studies are not exact. Thus, the ERR represents a best-case scenario that is unrealistic. In contrast, the EDR represents a worst-case scenario in which selection for significance does not favor more powerful studies and the success rate of replication studies is no different from the success rate of original studies. The EDR of 35% is below the actual replication success rate of about 40%. To predict the success rate of actual replication studies, I use the average of the EDR and ERR, which is called the actual replication prediction (ARP). For the University of Amsterdam, the ARP is (35 + 66)/2 ≈ 51%. This is somewhat higher than the current best estimate of the success rate for actual replication studies based on the Open Science Collaboration project (~40%). Thus, research from the University of Amsterdam is expected to replicate at a higher rate than psychological research in general.
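As a worked equation, using the EDR and ERR estimates reported above (the same averaging also produces the ARP column in the faculty table below):

\[
\mathrm{ARP} = \frac{\mathrm{EDR} + \mathrm{ERR}}{2} = \frac{.35 + .66}{2} \approx .51 .
\]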

5. The EDR can be used to estimate the risk that published results are false positives (i.e., a statistically significant result when H0 is true), using Soric's (1989) formula for the maximum false discovery rate (FDR). An EDR of 35% implies that no more than 10% of the significant results are false positives, but the lower limit of the 95% CI of the EDR, 23%, allows for up to 18% false positive results. One solution to this problem is to lower the conventional criterion for statistical significance (Benjamin et al., 2017). Figure 2 shows that alpha = .005 reduces the point estimate of the FDR to 2%, with an upper limit of the 95% confidence interval of 4%. Thus, without any further information, readers could use this criterion to interpret results published in articles by psychology researchers at the University of Amsterdam.
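For readers who want to check these numbers, here is a minimal sketch of Soric's bound in Python; the function name is mine, not part of any z-curve software, and it takes only the EDR and the significance criterion as inputs:

```python
# Soric's (1989) maximum false discovery rate: an upper bound on the share of
# false positives among significant results, given the expected discovery rate.
def soric_max_fdr(edr, alpha=0.05):
    return (1.0 / edr - 1.0) * alpha / (1.0 - alpha)

print(round(soric_max_fdr(0.35), 3))  # 0.098 -> "no more than 10%" for EDR = 35%
print(round(soric_max_fdr(0.23), 3))  # 0.176 -> ~18% at the lower bound of the EDR's 95% CI
```

The 2% value for alpha = .005 in Figure 2 comes from the z-curve output itself rather than from plugging alpha = .005 into this formula with an unchanged EDR.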

Some researchers have changed their research practices in response to the replication crisis. It is therefore interesting to examine whether the replicability of newer research has improved. It is particularly interesting to examine changes at the University of Amsterdam because Eric-Jan Wagenmakers, a faculty member in the Psychological Methods group, is a prominent advocate of methodological reforms. To examine this question, I performed a z-curve analysis for articles published in the past five years (2016-2021).

The results are disappointing. There is no evidence that research practices have changed in response to concerns about replication failures. The EDR estimate dropped from 35% to 25%, although this is not a statistically significant change. The ERR for 2016-2021 is 69%, similar to the overall ERR. The predicted success rate for actual replication studies therefore decreased from 51% to 47%. This means that the University of Amsterdam moves down in rankings that focus on the past five years because some other departments have improved.

The replication crisis has been most severe in social psychology (Open Science Collaboration, 2015) and was in part triggered by concerns about social psychological research in the Netherlands. I therefore also conducted a z-curve analysis for the 10 faculty members in social psychology. The EDR is lower (24% vs. 35%) than for the whole department, which also implies a lower actual replication rate and a higher false positive risk.

There is variability across individual researchers, although confidence intervals are often wide due to the smaller number of test statistics per researcher. The table below shows the meta-statistics (in percent) of all faculty members who provided results for the departmental z-curve. You can see the z-curve for an individual faculty member by clicking on their name.

| Rank | Name | ARP | ERR | EDR | FDR |
|---|---|---|---|---|---|
| 1 | Jaap M. J. Murre | 77 | 81 | 74 | 2 |
| 2 | Hilde M. Geurts | 73 | 76 | 69 | 2 |
| 3 | Timo Stein | 73 | 76 | 70 | 2 |
| 4 | Hilde M. Huizenga | 68 | 75 | 61 | 3 |
| 5 | Maurits W. van der Molen | 65 | 72 | 57 | 4 |
| 6 | Astrid C. Homan | 62 | 69 | 55 | 4 |
| 7 | Wouter van den Bos | 60 | 74 | 47 | 6 |
| 8 | Frenk van Harreveld | 54 | 64 | 44 | 7 |
| 9 | Gerben A. van Kleef | 53 | 70 | 37 | 9 |
| 10 | K. Richard Ridderinkhof | 53 | 69 | 36 | 9 |
| 11 | Bruno Verschuere | 52 | 72 | 32 | 11 |
| 12 | Maartje E. J. Raijmakers | 51 | 74 | 28 | 13 |
| 13 | Merel Kindt | 48 | 62 | 35 | 10 |
| 14 | Mark Rotteveel | 47 | 59 | 34 | 10 |
| 15 | Sanne de Wit | 47 | 74 | 20 | 22 |
| 16 | Susan M. Bogels | 44 | 63 | 26 | 15 |
| 17 | Matthijs Baas | 44 | 62 | 25 | 16 |
| 18 | Arnoud R. Arntz | 43 | 68 | 17 | 26 |
| 19 | Filip van Opstal | 43 | 65 | 20 | 20 |
| 20 | Suzanne Oosterwijk | 42 | 56 | 29 | 13 |
| 21 | Edwin A. J. van Hooft | 40 | 65 | 15 | 30 |
| 22 | E. J. B. Doosje | 38 | 61 | 15 | 31 |
| 23 | Nils B. Jostmann | 37 | 48 | 26 | 15 |
| 24 | Barbara Nevicka | 37 | 59 | 14 | 33 |
| 25 | Reinout W. Wiers | 36 | 47 | 25 | 15 |
