[This blog post is a draft for an article in a special issue. Comments are welcome.]
The Big Bem: The Universe Implodes
The 2010s started with a bang. Journal clubs were discussing the preprint of Bem’s (2011) article “Feeling the future: Experimental Evidence for Anomalous Retroactive Influences on Cognition and Affect.” Psychologists were confronted with a choice. Either they had to believe in anomalous effects or they had to believe that psychology was an anomalous science. In a discussion group at the University of Toronto, one of my colleagues noted the danger of making the wrong choice only to end up on the wrong side of history. Ten years later, it is possible to look back at Bem’s article with the hindsight of 2020.
It is now clear that Bem used questionable practices to produce false evidence for his outlandish claims (Francis, 2012; Schimmack, 2012, 2018, 2020). Moreover, it has become apparent that these practices were the norm and that many other findings in social psychology cannot be replicated. This realization has led to major changes in the way social psychologists conduct and report their work. The speed and the extent of these changes have been revolutionary. Akin to the cognitive revolution in the 1960s and the affective revolution in the 1980s, the 2010s have witnessed a method revolution that has produced a new field called meta-psychology and two new journals that publish articles addressing methodological problems and improvements: Advances in Methods and Practices in Psychological Science and Meta-Psychology. For researchers who spend most of their time pursuing primary research, it can be confusing to keep up with the rapid developments in meta-psychology that often emerge on blogs, pre-prints, Twitter, and Facebook before they appear in peer-reviewed journals.
In this review article, I present an overview of major developments in meta-psychology that are shaping the future of psychological science. Most of the review focuses on replication failures in experimental social psychology, and the different explanations for these failures. I argue that the use of questionable research practices accounts for many replication failures and point out how social psychologists have responded to this realization. Other disciplines may learn from these lessons and may need to reform their research practices in the coming decade.
Arguably, the most important development in psychology has been the normalization of publishing replication failures. When Bem (2011) published his abnormal results supporting paranormal phenomena, researchers quickly failed to replicate these sensational results. However, they had a hard time publishing their findings. The editor of JPSP at the time, Eliot Smith, did not even send the manuscript out for review. This was probably the last attempt to suppress negative evidence, and it failed for two reasons. First, online-only journals with unlimited journal space like PLoS ONE or Frontiers were more than happy to publish these articles (Ritchie, Wiseman, & French, 2012). Second, the decision to reject the replication studies was made public and created a lot of attention because Bem’s article had attracted so much attention (Aldhous, 2011). This created social pressure, and in 2012 JPSP did publish a major replication failure of Bem’s results (Galak, LeBoeuf, Nelson, & Simmons, 2012).
Over the past decade, new article formats have evolved that make it easier to publish articles that fail to confirm theoretical predictions, such as registered reports (Chambers, 2019) and registered replication reports (APS, 2015). Registered reports are articles that are accepted for publication before the results are known, thus avoiding the problem of publication bias that arises when only confirmatory findings are published. Registered replication reports (RRRs) are registered reports that aim to replicate an original study in a high-powered study with many laboratories. Although registered replication reports can produce significant and non-significant results, they have produced stunning replication failures. These failures are especially stunning because RRRs had a much higher chance of producing a significant result than the original studies with much smaller samples. Thus, the fact that RRRs of ego-depletion (Hagger et al., 2016) and facial feedback (Wagenmakers et al., 2016) produced non-significant results with thousands of participants was surprising, to say the least.
Replication failures of specific studies are important for specific research questions, but they do not answer the crucial meta-psychological question whether these failures are anomalies or symptomatic of a wider problem in psychological science. After all, Bem’s studies were replicated precisely because researchers were skeptical that the results could be replicated. Such targeted replication failures cannot tell us whether there is a replication crisis. Answering this broader question requires a representative sample of studies from the population of results published in psychology journals. Given the diversity of psychology, this is a monumental task.
A first step towards this goal was the Reproducibility Project, which focused on results published in three psychology journals in the year 2008. The journals represented social/personality psychology (JPSP), cognitive psychology (JEP:LMC), and all areas of psychology (Psychological Science). Although all articles published in 2008 were eligible, not all studies were replicated, in part because some studies were very expensive or difficult to replicate. In the end, 97 studies with significant results were replicated as closely as possible. The headline finding was that only 37% of the replication studies produced a statistically significant result.
This finding has been widely cited as evidence that psychology has a replication problem. However, headlines tend to gloss over the fact that results varied as a function of discipline. While the success rate for cognitive psychology was 50%, and even higher for typical within-subject designs with many observations per participant, the success rate was only 25% for social psychology, and even lower for the typical between-subject design that was employed to study ego-depletion, facial feedback, and other prominent effects in social psychology.
These results do not warrant the broad claim that psychology has a replication crisis or that most results published in psychology are false. A more nuanced conclusion is that social psychology has a replication crisis and that methodological factors account for these differences. Disciplines that rely on within-subject designs with many repeated measures or intervention studies with a pre-post design are likely to suffer less than disciplines that compare a single measure across participants.
No Crisis: Experts Can Reliably Produce Effects
After some influential priming results could not be replicated, Daniel Kahneman wrote a letter to John Bargh (Yong, 2012). He suggested that leading priming researchers should conduct a series of replication studies to demonstrate that their original results are replicable. In response, John Bargh and other prominent social psychologists conducted numerous studies that showed the effects are robust. At least, this is what might have happened in an alternative universe. In this universe, however, there have been few attempts to self-replicate original findings. Bartlett asked Bargh why he did not prove his critics wrong by doing the study again (Bartlett, 2013). The answer is not particularly convincing.
“So why not do an actual examination? Set up the same experiments again, with additional safeguards. It wouldn’t be terribly costly. No need for a grant to get undergraduates to unscramble sentences and stroll down a hallway. Bargh says he wouldn’t want to force his graduate students, already worried about their job prospects, to spend time on research that carries a stigma. Also, he is aware that some critics believe he’s been pulling tricks, that he has a “special touch” when it comes to priming, a comment that sounds like a compliment but isn’t. “I don’t think anyone would believe me,” he says”
A few self-replications ended with a replication failure (Elkins-Brown, Saunders, & Inzlicht, 2018). One notable successful self-replication was conducted by Petty and colleagues (Luttrell, Petty, & Xu, 2017). The authors not only replicated the original finding, they also reproduced the non-significant result of the replication study. In addition, they found a significant interaction, indicating that procedural differences made the effect stronger or weaker. This study has been widely celebrated as an exemplary way to respond to replication failures. It also suggests that flaws in replication studies are sometimes responsible for replication failures. However, it is impossible to generalize from this single instance to other replication failures. Thus, it remains unclear how many replication failures were caused by problems with the replication studies.
To conclude, the 2010s have seen a rise in publications of non-significant results that fail to replicate original results and that contradict theoretical predictions. The evidence produced by these studies has demonstrated a replication crisis in social psychology, but not in cognitive psychology. Other areas have been slow to investigate the replicability of their published results.
No Crisis: Decline Effect
The idea that replication failures occur because effects weaken over time was proposed by Jonathan Schooler and popularized in a New Yorker article (Lehrer, 2010). Schooler coined the term decline effect for the observation that effect sizes often decrease over time. Unfortunately, the idea does not work for more mundane behaviors like eating cheesecake. No matter how often you eat cheesecake, it still adds calories and pounds to your weight. However, for more elusive effects like social priming or verbal overshadowing, it does seem to be easier to discover effects than to replicate them (Wegner, 1992), although it is not clear what causes decline effects in social psychology experiments. A team of researchers conducted a registered replication study of Schooler and Engstler-Schooler’s (1990) verbal overshadowing study (Alogna et al., 2014). The results replicated a statistically significant effect, but with smaller effect sizes. Schooler (2014) considered this finding a win-win because his original results had been replicated and the reduced effect size supported the presence of a decline effect. However, the notion of a decline effect is misleading because it merely describes a phenomenon rather than providing an explanation for it. Schooler (2014) offered several possible explanations. One possible explanation was regression to the mean (see next section). A second explanation was that slight changes in experimental procedures can reduce effect sizes (discussed in more detail below). More controversially, Schooler also alludes to the possibility that some paranormal processes may produce a decline effect: “Perhaps, there are some parallels between VO [verbal overshadowing] effects and parapsychology after all, but they reflect genuine unappreciated mechanisms of nature (Schooler, 2011) and not simply the product of publication bias or other artifact” (p. 582). Schooler, however, fails to acknowledge that a mundane explanation for the decline effect is the use of questionable research practices that inflate effect size estimates in original studies. Using statistical tools, Francis (2012) showed that Schooler’s original verbal overshadowing studies showed signs of bias. Thus, there is no need to look for a paranormal explanation of the decline effect in verbal overshadowing. The normal practice of selectively publishing only significant results is sufficient to explain it. In sum, the decline effect is descriptive rather than explanatory, and Schooler’s suggestion that it reflects some paranormal phenomenon is not supported by scientific evidence.
No Crisis: Regression to the Mean is Normal
Regression to the mean has been invoked as one possible explanation for the decline effect (Schooler, 2014; Fiedler, 2015). Fiedler’s argument is that random measurement error in psychological measures is sufficient to produce replication failures. However, random measurement error is neither necessary nor sufficient to produce replication failures. The outcome of a replication study is determined solely by the study’s statistical power, and if the replication study is an exact replication of an original study, both studies have the same amount of random measurement error and the same power (Brunner & Schimmack, 2019). Thus, if the OSC project found 97 significant results in 100 published studies, an honest observed discovery rate of 97% would imply that the studies had 97% power to obtain a significant result. Random measurement error has the same effect on the power of original studies and of replication studies, and therefore the same effect on their outcomes. Therefore, Fiedler’s claim that random measurement error alone explains replication failures is simply wrong and based on a misunderstanding of statistics. Moreover, regression to the mean requires that studies were selected for significance. Schooler (2014) ignores this aspect of regression to the mean when he suggests that regression to the mean is normal and expected. The effect sizes of eating cheesecake do not decrease over time because there is no selection process. In contrast, the effect sizes of social psychological experiments decrease when original articles select significant results and replication studies do not select for significance. Thus, it is not normal for success rates to decrease from 97% to 25%, just like it would not be normal for a basketball player’s free-throw percentage to drop from 97% to 25%. Regression to the mean therefore does not warrant the label of being normal, and this argument cannot be used to claim that there is no replication crisis.
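A small simulation makes this argument concrete. The sketch below (a minimal illustration with made-up values, not the actual OSC data) shows that without selection the replication rate simply equals the power of the studies, whereas selecting studies for significance inflates the published effect sizes and creates a steep drop from a 100% success rate in the published record to a replication rate that equals the true power.

```python
# Minimal sketch of regression to the mean under selection for significance.
# All values (per-group n, true effect size d) are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, d = 20, 0.3                      # per-group sample size and true effect size
se = np.sqrt(2 / n)                 # approximate standard error of d
crit = stats.norm.isf(0.025) * se   # effect size needed for two-sided p < .05

original = rng.normal(d, se, 100_000)      # observed effects in original studies
replication = rng.normal(d, se, 100_000)   # observed effects in exact replications
print(f"replication rate without selection (= power): {(original > crit).mean():.2f}")

sig = original > crit               # publication filter: significant results only
print(f"mean observed d, all studies:      {original.mean():.2f}")
print(f"mean observed d, significant only: {original[sig].mean():.2f} (inflated)")
print(f"replication rate after selection:  {(replication[sig] > crit).mean():.2f}")
```

In this sketch the published record shows a 100% success rate with inflated effect sizes, yet exact replications of the selected studies succeed at a rate equal to the unconditional power, reproducing the pattern of a high observed discovery rate followed by frequent replication failures.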
No Crisis: Exact Replications are Impossible
Heraclitus, an ancient Greek philosopher, observed that you can never step into the same river twice. Similarly, it is impossible to exactly recreate the conditions of a psychological experiment. This trivial observation has been used to argue that replication failures are neither surprising nor problematic, but rather the norm. We should never expect to get the same result from the same paradigm because the actual experiments are never identical, just like a river is always changing (Stroebe & Strack, 2014). This argument has led to a heated debate about the distinction between and the value of direct versus conceptual replication studies (Zwaan, Etz, Lucas, & Donnellan, 2018).
The purpose of direct replication studies is to replicate an original study as closely as possible. Critics argue that direct replication studies are uninformative because there are only two possible outcomes. Either the replication study is successful and nothing new is learned, or the replication study fails and this only shows that the replication study differed from the original study.
This argument ignores the surprising finding that researchers are seemingly able to alter conditions at will and still get the effect in their own laboratories (conceptual replication studies always work), but suddenly even close replications fail to show the effect when the research is preregistered or carried out by other researchers. It is simply not plausible that conceptual replications that intentionally change features of a study are always successful, while direct replications that try to reproduce the original conditions as closely as possible fail.
This argument also ignores the difference between disciplines. Why is there no replication crisis in cognitive psychology, if each experiment is like a new river? And why does eating cheesecake always lead to a weight gain, no matter whether it is chocolate cheesecake, raspberry white-truffle cheesecake, or caramel fudge cheesecake? The reason is that the main features of rivers remain the same. Even if the river is not identical, you still get wet every time you step into it.
To explain the higher replicability of results in cognitive psychology than in social psychology, Van Bavel et al. (2016) proposed that social psychological studies are more difficult to replicate for a number of reasons. They called this property of studies contextual sensitivity. Coding studies for contextual sensitivity showed the predicted negative correlation between contextual sensitivity and replicability. However, Inbar (2016) found that this correlation was no longer significant when discipline was included as a predictor. Thus, the results suggest that social psychological studies are more contextually sensitive and less replicable, but contextual sensitivity did not explain the lower replicability of social psychology.
It is also not clear that contextual sensitivity implies that social psychology does not have a crisis. Replicability is not the only criterion of good science, especially if exact replications are impossible. Findings that can only be replicated when conditions are reproduced exactly lack generalizability, which makes them rather useless for applications and for the construction of broader theories. Take verbal overshadowing as an example. Even a small change in experimental procedures reduced a practically significant effect size of 16% to a no longer meaningful effect size of 4% (Alogna et al., 2014), and neither of these experimental conditions were similar to real-world situations of eyewitness identification. Thus, the practical implications of this phenomenon remain unclear because it depends too much on the specific context. In conclusion, empirical results are only meaningful if researchers have a clear understanding of the conditions that can produce a statistically significant result most of the time (Fisher, 1926). Contextual sensitivity makes it harder to do so. Thus, it is one potential factor that may contribute to the replication crisis in social psychology because social psychologists do not know under which conditions their results can be reproduced. For example, I asked Roy F. Baumeister to specify optimal conditions to replicate ego-depletion. He was unable or unwilling to do so.
No Crisis: The Replication Studies are Flawed
The argument that replication studies are flawed comes in two flavors. One argument is that replication studies are often carried out by young researchers with less experience and expertise. They did their best, but they are just not very good experimenters (Gilbert, King, Pettigrew, & Wilson, 2016). Cunningham and Baumeister (2016) proclaim “Anyone who has served on university thesis committees can attest to the variability in the competence and commitment of new researchers. Nonetheless, a graduate committee may decide to accept weak and unsuccessful replication studies to fulfill degree requirements if the student appears to have learned from the mistakes” (p. 4). There is little evidence to support this claim. In fact, a meta-analysis found no differences in effect sizes between studies carried out by Baumeister’s lab and other labs (Hagger et al., 2010).
The other argument is that replication failures are sexier and more attention grabbing than successful replications. On this view, replication researchers sabotage their studies or data analyses to produce non-significant results (Bryan, Yeager, & O’Brien, 2019; Strack, 2016). These accusations have been made without empirical evidence to support them. For example, Strack (2016) used a positive correlation between sample size and effect size to claim that some labs were motivated to produce non-significant results, presumably by using a smaller sample size. However, a proper bias analysis showed no evidence that there were too few significant results (Schimmack, 2018). Moreover, the overall effect size across all labs was also non-significant.
Inadvertent problems, however, may explain some replication failures. For example, some replication studies reduced statistical power by replicating a study with a smaller sample than the original study (Open Science Collaboration, 2015; Ritchie et al., 2012). In this case, a replication failure could be a false negative (type-II error). Consistent with the logic of meta-analysis, studies with larger sample sizes should be given more weight. Thus, it is problematic to conduct replication studies with smaller samples. At the same time, registered replication reports with thousands of participants should be given more weight than original studies with fewer than 100 participants. Size matters.
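To see how quickly power erodes when replications use smaller samples, consider a minimal power calculation; the effect size and sample sizes are illustrative assumptions, not values from the studies discussed above.

```python
# Power of a two-group comparison as the per-group sample size shrinks.
# The assumed effect size (d = 0.4) is illustrative.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for n in (100, 50, 25):
    power = analysis.power(effect_size=0.4, nobs1=n, ratio=1.0, alpha=0.05)
    print(f"n = {n:3d} per group -> power = {power:.2f}")
```

Under these assumptions, halving the sample twice cuts power from about .80 to about .28, so a non-significant replication result becomes the expected outcome even when the effect is real.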
However, size is not the only factor that matters and researchers disagree about the implications of replication failures. Not surprisingly, authors of the original studies typically recognize some problems with the replication attempts (Baumeister & Vohs, 2016; Strack, 2016). Ideally, researchers would agree ahead of time on a research design that is acceptable to all parties involved. Kahneman (2003) called this model an adversarial collaboration. However, original researchers have either not participated in the planning of a study (Strack, 2016) or withdrawn their approval after the negative results were known (Baumeister & Vohs, 2016). None have acknowledged that their original results were obtained with questionable research practices that make it hard to replicate the results. To make replication studies more meaningful, it would be important that leading researchers agree ahead of time on a research design. Failure to find agreement would itself undermine the value of published research because experts should be able to specify the optimal conditions for producing an effect.
In conclusion, replication failures can occur for a number of reasons, just like significant results in original studies can occur for a number of reasons. Inconsistent results are frustrating because they often require further research. This being said, there is no evidence that low quality of replication studies is the sole or the main cause of replication failures in social psychology.
No Crisis: Replication Failures are Normal
In an opinion piece for the New York Times, Lisa Feldman Barrett, current president of the Association for Psychological Science, commented on the OSC results and claimed that “the failure to replicate is not a cause for alarm; in fact, it is a normal part of how science works” (Barrett, 2015). On the surface, Feldman Barrett makes a valid point. It is true that replication failures are a normal part of science. First, if psychologists conducted studies with 80% power, 1 out of 5 studies would fail to replicate even if everything is going well and all predictions are true. Second, replication failures are expected when researchers test risky hypotheses (e.g., effects of candidate genes on personality) that have a high probability of being false. In this case, a significant result may be a false positive, and replication failures demonstrate that it was a false positive. Thus, honestly reported replication failures play an integral part in normal science, and the success rate of replication studies provides valuable information about the empirical support for a hypothesis. However, a success rate of 25% or less for social psychology is not a sign of normal science. It is this stark discrepancy between the success rates in journals and in honest replication attempts that suggests social psychology is not a normal science. If social psychological theories make risky predictions that are often false, journals should be filled with non-significant results, but they are not (Sterling, 1959; Sterling et al., 1995). This suggests that the problem is not the low success rate in replication studies, but the high success rate in psychology journals.
Crisis: Original Studies Are Not Credible Because They Used NHST
Bem’s anomalous results were published with a commentary by Wagenmakers et al. (2011). This commentary made various points that are discussed in more detail below, but one unique and salient point of Wagenmakers et al.’s comment concerned the use of null-hypothesis significance testing (NHST). In numerous publications, Wagenmakers has argued that NHST is fundamentally flawed (Wagenmakers, 2007). Bem presented 9 results with p-values below .05 as evidence for ESP. Wagenmakers et al. objected to the use of a significance criterion of .05 and argued that this criterion makes it too easy to publish false positive results (see also Benjamin et al., 2016).
Wagenmakers et al. (2011) claimed that this problem can be avoided by using Bayes-Factors. When they used Bayes-Factors with default priors, several of Bem’s studies no longer showed evidence for ESP. Based on these findings they argued that psychologists must change the way they analyze their data. Since then, Wagenmakers has worked tirelessly to promote Bayes-Factors as an alternative to NHST. However, Bayes-Factors depend on the choice of a prior, and the same data can lead to different inferences with different priors.
Bem, Utts, and Johnson (2011) pointed out that Wagenmakers et al.’s (2011) default prior assumed that there is a 50% probability that ESP works in the opposite direction (below chance accuracy) and a 25% probability that effect sizes are greater than d = 1. Only 25% of the prior distribution was allocated to effect sizes in the predicted direction between 0 and 1. This prior makes no sense for research on extrasensory perception processes that are expected to produce small effects.
When Bem et al. (2011) specified a more reasonable prior, Bayes-Factors actually showed more evidence for ESP than NHST. Moreover, the results of individual studies are less important than the combined evidence across studies. A meta-analysis of Bem’s studies shows that even with the default prior, Bayes-Factors reject the null-hypothesis with an odds ratio of one billion to one. Thus, if we trust Bem’s data, Bayes-Factors also suggest that Bem’s results are robust, and it remains unclear why Galak et al. (2012) were unable to replicate Bem’s results.
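The prior dependence of Bayes-Factors is easy to demonstrate with a toy model. The sketch below is not the default t-test model used by Wagenmakers et al. (2011); it is a simpler normal-normal version with illustrative numbers, but it shows the same phenomenon: a just-significant result can favor H1 under a narrow prior and favor H0 under a diffuse prior that, like the default prior criticized by Bem et al. (2011), puts substantial mass on implausibly large effects.

```python
# Toy Bayes-Factor: H0 (effect = 0) vs. H1 (effect ~ Normal(0, tau^2)).
# The marginal likelihood of the observed effect under H1 is Normal(0, se^2 + tau^2).
# d_obs, se, and the prior scales tau are illustrative assumptions.
import numpy as np
from scipy import stats

d_obs, se = 0.20, 0.10               # a just-significant result (z = 2)
for tau in (0.1, 0.5, 1.0):          # narrow to diffuse priors on the effect size
    m1 = stats.norm.pdf(d_obs, loc=0, scale=np.sqrt(se**2 + tau**2))  # H1 marginal
    m0 = stats.norm.pdf(d_obs, loc=0, scale=se)                       # H0 likelihood
    print(f"prior sd = {tau:.1f} -> BF10 = {m1 / m0:.2f}")
```

With these numbers, the same data yield BF10 values of roughly 1.9, 1.3, and 0.7: the evidence flips from favoring H1 to favoring H0 as the prior widens.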
One argument in favor of Bayes-Factors is that NHST is one-sided. Significant results are used to reject the null-hypothesis, but non-significant results cannot be used to affirm the null-hypothesis. As a result, empirical results rarely falsify a theory unless the theory predicts effects in the opposite direction of the population effect. This means psychological theories are never subjected to real tests that they may fail (Popper). This makes non-significant results difficult to publish, which leads to publication bias. The claim is that Bayes-Factors solve this problem because they can provide evidence for the null-hypothesis. However, this claim is false. Bayes-Factors are odds ratios between two alternative hypotheses. Unlike in NHST, these two competing hypotheses do not exhaust the space of possible hypotheses. That is, there is an infinite number of additional hypotheses that are not tested. Thus, even if the data favor the null-hypothesis over the specified alternative, they do not provide absolute support for the null-hypothesis. They merely provide evidence against one specified alternative hypothesis. There may always be another alternative hypothesis that fits the data better than the null-hypothesis. As a result, even Bayes-Factors that strongly favor H0 fail to provide evidence that the true effect size is exactly zero.
The solution to this problem is not new, but it is unfamiliar to many psychologists. To demonstrate the absence of an effect, it is necessary to specify a region of effect sizes around zero and to demonstrate that the population effect size is likely to fall within this region. This can be achieved using NHST (equivalence tests; Lakens, Scheel, & Isager, 2018) or Bayesian statistics (Kruschke & Liddell, 2018). The main reason why psychologists are not familiar with tests that demonstrate the absence of an effect is that typical sample sizes in psychology have too much sampling error to produce precise estimates of effect sizes.
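For concreteness, here is a minimal sketch of the two one-sided tests (TOST) procedure described by Lakens et al. (2018), computed from summary statistics. The equivalence bounds and data are illustrative assumptions.

```python
# Equivalence test (TOST): reject "the effect lies outside [low, upp]".
# mean_diff, se, df, and the bounds are illustrative assumptions.
from scipy import stats

mean_diff, se, df = 0.05, 0.06, 198   # observed difference, its SE, degrees of freedom
low, upp = -0.20, 0.20                # smallest effect sizes of interest

t_low = (mean_diff - low) / se        # test H0: difference <= low
t_upp = (mean_diff - upp) / se        # test H0: difference >= upp
p_low = stats.t.sf(t_low, df)         # one-sided p for the lower bound
p_upp = stats.t.cdf(t_upp, df)        # one-sided p for the upper bound
p_tost = max(p_low, p_upp)            # both tests must reject
print(f"p (equivalence) = {p_tost:.4f}")  # p < .05 -> effect within the bounds
```

If both one-sided tests are significant, the population effect can be declared too small to matter, which is the affirmative claim that a conventional non-significant result cannot support.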
In conclusion, Wagenmakers et al. claimed that NHST contributed to the replication crisis, but there is no evidence that replication failures are caused by the use of the wrong statistical approach. The problem with Bem’s results was not the use of NHST, but the use of questionable research practices to produce illusory evidence (Francis, 2012; Schimmack, 2012, 2016, 2020).
Crisis: Original Studies Report Many False Positives
An influential article by Ioannidis (2005) claimed that most published research findings are false. This eye-catching claim has been cited thousands of times. Few citing authors have bothered to point out that the claim is entirely based on hypothetical scenarios rather than empirical evidence. In psychology, fear that most published results are false positives was stoked by Simmons, Nelson, and Simonsohn’s (2011) “False-Positive Psychology” article, which showed with simulation studies that the aggressive use of questionable research practices can dramatically increase the probability that a study produces a significant result without a real effect. These articles shifted concerns about false negatives (Cohen, 1994) to concerns about false positives.
The problem with this focus on false positive results is that it implies that replication failures reveal false positive results. For example, Nelson, Simmons, and Simonsohn (2018) write “Experimental psychologists spent several decades relying on methods of data collection and analysis that make it too easy to publish false-positive, nonreplicable results. During that time, it was impossible to distinguish between findings that are true and replicable and those that are false and not replicable” (p. 512). However, replication failures do not reveal that original findings were false positives. An alternative explanation is that the replication results are false negatives; that is, the population effect size is not zero, but the replication study had insufficient power to reject the null-hypothesis. The false assumption that replication failures reveal false positive results has created a lot of confusion in the interpretation of replication failures (Maxwell, Lau, & Howard, 2015).
For example, Gilbert et al. (2016) attribute the low replication rate in the reproducibility project to low power of the replication studies. This does not make sense, given that the replication studies had the same or sometimes even larger sample sizes than the original studies. As a result, the replication studies had as much or more power than the original studies. So, how could low power explain the discrepancy between the 97% success rate in original studies and the 25% success rate in replication studies? It cannot.
Gilbert et al.’s (2016) criticism only makes sense if non-significant results in the replication studies are falsely interpreted as evidence that the original results were false positives. To test this interpretation, one could conduct a study with a much larger sample size that is able to detect much smaller effect sizes than the original studies. If such a study produces a statistically significant result, it is possible to conclude that the original study reported a true positive result and that the replication study reported a false negative result. While this is true, it is also true that the original study had insufficient power to produce significant results with the small population effect size that the large replication study revealed. Thus, it remains a mystery how journals can report over 90% significant results with small sample sizes. Moreover, many of the effect sizes that are different from zero may lack practical significance. Thus, the real empirical evidence is provided by the large-scale replication studies, while the original results published in journals provide no credible evidence in themselves.
There have been attempts to estimate the false positive rate in social psychology. One approach is to examine sign changes in replication studies. If 100 true null-hypotheses are tested, 50 studies are expected to show a positive sign and 50 studies are expected to show a negative sign due to random sampling error. If these 100 studies are replicated, this will happen again. Just like two coin flips, we would therefore expect 50 sign matches and 50 sign reversals by chance alone. A lower frequency of sign reversals suggests that sometimes the null-hypothesis was false. Wilson and Wixted found that 25% of social psychological results in the OSC project showed a sign reversal. This would suggest that 50% of the studies tested a true null-hypothesis. Of course, sign reversals are also possible when the effect size is not strictly zero. However, the probability of a sign reversal decreases as effect sizes increase. Thus, it is possible to say that about 50% of the replicated studies had an effect size close to zero. Unfortunately, this estimate is imprecise due to the small sample size.
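A quick simulation (with illustrative values, not the actual OSC data) shows how the probability of a sign reversal shrinks as the true effect grows, which is what makes the observed 25% reversal rate informative about the share of near-null effects.

```python
# Probability that an original study and its replication disagree in sign,
# as a function of the true effect size; d and se are illustrative.
import numpy as np

rng = np.random.default_rng(7)
k = 100_000
for d in (0.0, 0.1, 0.4):                 # true effect sizes
    orig = rng.normal(d, 0.2, k)          # observed effects, original studies
    rep = rng.normal(d, 0.2, k)           # observed effects, replications
    reversal = (np.sign(orig) != np.sign(rep)).mean()
    print(f"true d = {d:.1f}: sign reversals = {reversal:.2f}")
```

Under these assumptions the reversal rate falls from 50% under the null to about 4% for a moderate effect, so an overall reversal rate of 25% is consistent with roughly half of the studies testing (near-)null effects: .50 × .50 = .25.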
Gronau et al. (2017) attempted to estimate the false discovery rate using a statistical model that is fitted to the exact p-values of original studies. They applied this model to three datasets and found FDRs of 34%-46% for cognitive psychology, 40%-60% for social psychology, and 48%-88% for social priming. The problem with these estimates is that they are obtained with a model that limits heterogeneity. Simulation studies show that this dogmatic prior inflates FDR estimates (Schimmack & Brunner, 2019). The 40% FDR for cognitive psychology is particularly implausible because 50% of cognitive studies actually replicated with a significant result and sign reversals were observed in only 10% of studies. It is implausible that cognitive psychologists either test false hypotheses or have nearly 100% power when they test a real effect. It is much more likely that many of the non-significant results are false negatives due to modest power.
Bartoš and Schimmack (2020) developed a statistical model, called z-curve.2.0, that makes it possible to estimate the discovery rate based on the test statistics in published articles. The model fits a finite mixture model to the significant p-values (converted into z-scores) and then projects the model into the range of non-significant results. This makes it possible to compute the expected discovery rate, that is, the percentage of all conducted tests that are expected to produce a significant result. This estimate of the discovery rate can be used to compute the maximum FDR using a simple formula (Soric, 1989). Applying this model to Gronau et al.’s (2017) datasets yields FDRs of 9% (95%CI = 2% to 24%) for cognitive psychology, 26% (4% to 100%) for social psychology, and 61% (19% to 100%) for social priming. The results confirm the general rank ordering, with cognitive psychology being more replicable than social psychology, especially social priming research. Thus, Kahneman was right to direct a letter at Bargh and to describe this line of research as the “poster child for doubts about the integrity of psychological research.” However, the results also make clear that the major problems with social priming research cannot be generalized to all areas of psychology.
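Soric’s (1989) bound is simple enough to compute directly: the maximum FDR implied by a discovery rate DR at significance level alpha is ((1 − DR)/DR) × (alpha/(1 − alpha)). A short sketch with illustrative EDR values (not the estimates reported above):

```python
# Soric's (1989) upper bound on the false discovery rate given a discovery rate.
def soric_max_fdr(edr: float, alpha: float = 0.05) -> float:
    """Maximum FDR implied by an expected discovery rate (EDR)."""
    return (1 - edr) / edr * alpha / (1 - alpha)

for edr in (0.60, 0.40, 0.20, 0.10):      # illustrative EDR values
    print(f"EDR = {edr:.0%} -> max FDR = {soric_max_fdr(edr):.0%}")
```

The bound makes the intuition explicit: as the discovery rate falls, an ever larger share of the significant results could be false positives.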
In conclusion, it is impossible to specify exactly whether an original finding was a false positive result or not. There have been several attempts to estimate the number of false positive results in the literature, but there is no consensus about the proper method to do so. I believe that the distinction between false and true positives is not particularly helpful if the null-hypothesis is specified as a value of exactly zero. An effect size of d = .0001 is not any more meaningful than an effect size of d = .0000. To be meaningful, published results should be replicable given the same sample sizes as used in the original research. Demonstrating a significant result in the same direction in a much larger sample with a much smaller effect size should not be considered a successful replication of a result with a large effect size in a small sample; it is actually an original discovery.
Z-Curve: Quantifying the Crisis
Some psychologists have developed statistical models that can quantify the influence of selection for significance on replicability. Brunner and Schimmack (2019) demonstrated mathematically that mean power predicts the expected replication rate (ERR) if the original studies could be replicated exactly (including the same sample size). The tricky part is to estimate mean power from published test statistics.
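The core result is easy to verify by simulation: condition a heterogeneous set of studies on significance, and the mean power of the selected studies matches the success rate of their exact replications. A minimal sketch (the distribution of true effects is an illustrative assumption, and significance is treated one-sided for simplicity):

```python
# Mean power of significant studies equals the expected replication rate (ERR).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
k = 200_000
z_crit = stats.norm.isf(0.025)            # two-sided .05 criterion
ncp = rng.uniform(0, 4, k)                # heterogeneous true noncentralities
z_orig = rng.normal(ncp, 1)               # observed z-scores, original studies
sig = z_orig > z_crit                     # selection for significance

mean_power = stats.norm.sf(z_crit - ncp[sig]).mean()   # mean true power, selected set
z_rep = rng.normal(ncp[sig], 1)                        # exact replications
print(f"mean power of significant studies: {mean_power:.3f}")
print(f"observed replication rate:         {(z_rep > z_crit).mean():.3f}")
```

Both numbers agree up to simulation error, which is why estimating the mean power of published significant studies is equivalent to estimating their expected replication rate.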
The first such models were p-curve and p-uniform (Simonsohn et al., 2014; van Aert, Wicherts, & van Assen, 2016). However, the focus of these methods was on effect size estimation. The p-curve app also produces estimates of power (Simonsohn, Nelson, & Simmons, 2014). Brunner and Schimmack (2019) compared four methods to estimate the ERR, including p-curve. They found that p-curve overestimated the expected replication rate when studies varied in effect sizes (the p-curve app also overestimates when there is variability only in sample sizes; Brunner, 2018). In contrast, a new method called z-curve performed well across many scenarios, especially when heterogeneity was present.
Bartoš and Schimmack (2020) validated an extended version of z-curve (z-curve.2.0) that provides confidence intervals and estimates of the expected discovery rate, that is, the percentage of significant results among all conducted tests, including the unobserved non-significant ones. Z-curve has already been applied to various datasets of results in social psychology (see the R-Index blog for numerous examples).
The most important dataset was created by Motyl et al. (2017), who coded a representative sample of studies in social psychology journals. The main drawback of this audit of social psychology was that the authors did not have a proper statistical tool to estimate replicability. The closest thing to an estimator of replicability was the R-Index, although the R-Index provides biased estimates, especially when power deviates in either direction from 50%. Fortunately, the estimates were close to 50% (62% for 2003-2004 and 52% for 2013-2014), where this bias is small. The average estimate is slightly above 50%, suggesting that social psychology has a replication crisis, although not as bad as the 25% estimate from the OSC project suggested.
A better way to estimate replicability is to fit z-curve to Motyl et al.’s data. To be included in the z-curve analysis, a study had to (a) use a t-test or F-test, (b) have a valid test-statistic, and (c) not be from the journal Psychological Science. The last criterion was used to focus on social psychology. I also excluded studies with more than 4 experimenter degrees of freedom (e.g., 177 df). This left 678 studies for analysis. The set included 450 between-subject studies, 139 mixed designs, and 67 within-subject designs. The preponderance of between-subject designs is typical of social psychology and one of the reasons for the low power of studies in social psychology.
There are a number of explanations for the discrepancy between the OSC estimate and the z-curve estimate. First of all, the number of studies in the OSC project is small, and sampling error alone could explain some of the difference. Second, the set of studies in the OSC project was not representative and may have favored studies with lower replicability. Third, some actual replication studies may have modified procedures in ways that lowered the chance of obtaining a significant result (e.g., reduced sample size). Fourth, as Stroebe and Strack (2014) pointed out, it is never possible to exactly replicate a study. Thus, z-curve estimates are overly optimistic because they assume exact replications. If there is contextual sensitivity, selection for significance will produce additional regression to the mean, and a better estimate of the actual replication rate is the expected discovery rate (Bartoš & Schimmack, 2020). Z-curve estimated an EDR of 21% (an alternative fitting algorithm produced an even lower estimate of 15%), which is indeed more closely aligned with the success rate in actual replication studies. In combination, the existing evidence suggests that the replicability of social psychological research is somewhere between 20% and 50%, which is clearly unsatisfactory and much lower than the illusory success rates of 90% and more in social psychological journals. Even the success rate of 90% is an underestimate because most of the non-significant results fall in the range of marginal significance (z = 1.65 to z = 1.96) that is often used to claim support for a prediction. Thus, the observed success rate is close to 100%.
Figure 1 also clearly shows that questionable research practices explain the gap between success rates in laboratories and success rates in journals. The z-curve projection of non-significant results shows that a large proportion of non-significant results are expected, but hardly any of these expected studies ever get published. This is reflected in an observed discovery rate of 90% and an expected discovery rate of 21%. The confidence intervals do not overlap, indicating that this discrepancy is highly significant. Given such extreme selection for significance, it is not surprising that published effect sizes are inflated and replication studies fail to reproduce significant results. In conclusion, out of all the explanations for replication failures in psychology, the use of questionable research practices is the main factor. In comparison to the other explanations, it is the only one that is supported by empirical evidence.
Z-curve can also be used to examine the power of subgroups of studies. In the OSC project, studies with a z-score greater than 4 had an 80% chance of being replicated. To achieve an ERR of 80% with Motyl et al.’s data, z-scores have to be greater than 3.5. In contrast, studies with just-significant results (.01 < p < .05) have an ERR of only 28%. This information can be used to reevaluate published results. Studies with p-values between .05 and .01 should not be trusted unless other information suggests otherwise (e.g., a trustworthy meta-analysis). In contrast, results with z-scores greater than 4 can be used to plan new studies. Unfortunately, there are many more questionable results with p-values greater than .01 (42%) than trustworthy results with z > 4 (17%), but at least there are some findings that are likely to replicate even in social psychology.
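For readers who want to translate between the two scales used here, z-curve converts a two-sided p-value into an absolute z-score via the inverse of the standard normal distribution. A short sketch of the reference points mentioned above:

```python
# Converting two-sided p-values to z-scores and back.
from scipy import stats

for p in (0.05, 0.01):
    print(f"p = {p} -> z = {stats.norm.isf(p / 2):.2f}")
for z in (3.5, 4.0):
    print(f"z = {z} -> p = {2 * stats.norm.sf(z):.5f}")
```

Thus the trustworthy region (z > 4) corresponds to p < .0001, far beyond the conventional .05 criterion.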
An Inconvenient Truth
Every crisis is an opportunity to learn and to avoid future mistakes. Lending practices were changed after the financial crisis of the 2000s. Psychologists and other scientists can learn from the replication crisis in social psychology, but only if they are honest and upfront about its real cause. Social psychologists did not use the scientific method properly. Neither Fisher nor Neyman and Pearson, who created NHST, proposed that non-significant results are irrelevant or that only significant results should be published. The problem of selection for significance is evident and has been well known (Rosenthal, 1979). Cohen (1962) warned about low power, but the main concern was large file drawers filled with type-II errors. Nobody could imagine that whole literatures with hundreds of studies are built on nothing but sampling error and selection for significance. Bem’s article and the replication failures of the 2010s showed that the abuse of questionable research practices was much more excessive than anybody was willing to believe.
The key culprit was the conceptual replication study. Social psychologists were aware that it is unethical to suppress replication failures. Bem (2000) advised researchers to use questionable research practices to find significant results in their data: “Go on a fishing expedition for something – anything – interesting,” even if this meant to “err on the side of discovery.” However, even Bem made it clear that “this is not advice to suppress negative results. If your study was genuinely designed to test hypotheses that derive from a formal theory or are of wide general interest for some other reason, then they should remain the focus of your article. The integrity of the scientific enterprise requires the reporting of disconfirming results.”
How then is it possible that Bem himself and other social psychologists never reported disconfirming results? The solution was to never replicate a study exactly and to always vary some feature of it: “Never do a direct replication; that way, if a conceptual replication doesn’t work, you maintain plausible deniability” (anonymous, cited in Spellman, 2015). This is how Morewedge, Gilbert, and Wilson described their research process:
“Let us be clear: We did not run the same study over and over again until it yielded significant results and then report only the study that “worked.” Doing so would be clearly unethical. Instead, like most researchers who are developing new methods, we did some preliminary studies that used different stimuli and different procedures and that showed no interesting effects. Why didn’t these studies show interesting effects? We’ll never know.”
It was only in 2012 that psychologists realized that the fluctuating results of their studies were heavily influenced by sampling error rather than by minor changes in the experimental procedure (as graduate students, we joked that the color of the experimenter’s underwear might influence results). Only a few psychologists have been open about this. In a commendable editorial, Lindsay (2019) talks about his realization that his research practices were suboptimal:
“Early in 2012, Geoff Cumming blew my mind with a talk that led me to realize that I had been conducting underpowered experiments for decades. In some lines of research in my lab, a predicted effect would come booming through in one experiment but melt away in the next. My students and I kept trying to find conditions that yielded consistent statistical significance—tweaking items, instructions, exclusion rules—but we sometimes eventually threw in the towel because results were maddeningly inconsistent.”
Rather than invoking some supernatural decline effect like Schooler, Lindsay realized that his research practices were suboptimal. A first step for social psychologists is to acknowledge their past mistakes and to learn from them. Unfortunately, there has been no collective admission of wrongdoing. Instead, we have seen public displays of denial and anger, and maybe some private experiences of shame and depression. Maybe it is time for acceptance. “Mistakes are a fact of life. It is the response to error that counts” (Nikki Giovanni). So far, the response by social psychologists has been underwhelming. It is time for some leaders to step up.
The Way out of the Crisis
The most obvious solution to the replication crisis is to ban the use of questionable research practices, and to consider their use a violation of research ethics. Kitayama claimed that collating promising small pilot studies into one dataset was an acceptable practice in the past, but no scientific organization has clearly stated that this practice is no longer acceptable. Why should stakeholders trust publications if this is still a tolerated practice?
Professional organizations have made no effort to discuss questionable research practices and to specify which practices are acceptable and which ones are not. Thus, researchers can still use the same practices that Bem used to produce false evidence for extrasensory perception to produce false evidence for their theories.
At present, the enforcement of good practices is left to editors of journals, who can ask pertinent questions during the submission process (Lindsay, 2019). Another solution has been to ask researchers to preregister their studies, which limits researchers’ freedom to go on a fishing expedition. There has been a lot of debate about the value of preregistration and some resistance. Some journal editors introduced badges for preregistration (Roger Giner-Sorolla, JESP), but others did not (Chris Crandall, PSPB) (cf. Open Science Foundation, 2020).
There are also no clear standards about preregistration or about how much researchers are bound by their preregistration. For example, Noah, Schul, and Mayo (2018) preregistered the prediction of an interaction between being observed and a facial feedback manipulation. Although the predicted interaction was not significant, they interpreted the non-significant pattern as confirming their prediction rather than stating that there was no support for their preregistered prediction. These lax standards impede the improvements necessary to make social psychological publications credible.
Finally, preregistration of studies alone will only produce more non-significant results with underpowered designs; it will not increase the replicability of significant results. To increase replicability, social psychologists finally have to conduct power analyses to plan studies that can produce significant results without QRPs. Although higher power is essential to the improvement of research, there are no badges for good a priori power analyses.
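As a concrete illustration of what an a priori power analysis involves, the snippet below solves for the per-group sample size needed in a simple two-group design; the assumed effect size is illustrative.

```python
# A priori power analysis: sample size for 80% power in a two-group design.
# The assumed effect size (d = 0.4) is an illustrative guess, which in practice
# should be a debiased estimate, not a published (inflated) effect size.
from statsmodels.stats.power import TTestIndPower

n = TTestIndPower().solve_power(effect_size=0.4, power=0.80, alpha=0.05)
print(f"required sample size per group: {n:.0f}")  # about 100 per group
```

Note that if the assumed effect size comes from a literature inflated by selection for significance, even a formally correct power analysis will produce underpowered studies.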
To ensure that published results are credible and replicable, I argue that researchers should be rewarded for conducting high-powered studies. Because a priori power analyses are based on estimates of effect sizes, this evaluation should be based on the actual power that studies achieve. This can be estimated using z-curve.
Z-Curve can be used to quantify the expected replication rate of individual researchers. This information can then be used in combination with existing measures of research quality like number of publications, citation counts or the H-Index.
I illustrate the value of doing so with two eminent social psychologists. Roy F. Baumeister is one of the leading social psychologists in terms of traditional impact measures. Currently, Roy Baumeister has an H-Index of 105. During the 2010s, there have been concerns about the research practices used to provide evidence for his theory of glucose-fueled will-power (Carter et al., 2014, 2015; Schimmack, 2012), and a major replication failure (Hagger et al., 2016). A z-curve analysis of Baumeister’s research articles that contribute to his H-Index shows that the expected replication rate is only 22% (Figure 2), which is below the average for social psychology (cf. Figure 1).
Susan T. Fiske has an H-Index of 69, which is impressive, but notably lower than Baumeister’s H-Index. Thus, if we rely on productivity and impact without considering replicability, Baumeister is the more successful social psychologist. However, a z-curve analysis of Fiske’s work shows higher replicability, 59% (Figure 3).
To combine quantity and quality of impact, I propose to weight the H-Index by replicability. This HR-Index would be 23 for Baumeister and 41 for Fiske. This reflects more accurately that Fiske has made a more positive contribution to social psychology than Baumeister because her work is more replicable.
By taking replicability into account, publishing as many discoveries as possible without caring about their truth-value (i.e., “to err on the side of discovery”) is no longer the best strategy to achieve fame and recognition in a field. The HR-Index could also motivate researchers to retract articles that they no longer believe in, which would lower the H-Index but increase the R-Index. For highly problematic papers this could produce a net gain in the HR-Index.
In conclusion, to improve social psychology, and to make it an empirical science, research practices have to change. To do so, it is important to identify good practices and to reward researchers who use good practices. In addition to open-science badges, researchers should be rewarded for publishing studies with good power that can be replicated.
The 2010s have revealed major flaws in the way social psychologists conduct and report their research. Selective publishing of significant results based on studies with low statistical power produced results that are difficult to replicate because published effect sizes are inflated by sampling error. The chance that published results replicate, especially those obtained in between-subject designs with small samples, is estimated to be between 20% and 40%. Meta-analyses do not solve this problem because questionable research practices inflate effect size estimates in meta-analyses. Thus, many theories in social psychology lack empirical support.
A few social psychologists have acknowledged this painful truth. “I want a better tomorrow, I want social psychology to change. But, the only way we can really change is if we reckon with our past, coming clean that we erred; and erred badly” (Inzlicht, 2016). However, the vast majority of social psychologists have responded with defiant silence, denial, or attacks on critics. As a result, a whole decade has been wasted rather than used to confront problems head on. Not a single senior social psychologist has responded to the replication crisis by calling for major reforms and holding researchers accountable for their research practices.
Fortunately, some younger social psychologists are pushing for reforms, but they lack the social power to implement these reforms. This means that progress is slow and uneven. While some social psychologists follow open science practices, others continue to do business as usual. As quick and dirty studies produce statistically significant results much faster, the incentive structure continues to reward bad practices.
It is therefore necessary to reveal and measure the use of good versus bad practices. The R-Index provides this valuable information and should be used to reward researchers who produce replicable results that provide credible scientific evidence and an empirical foundation for theories of human behavior. The R-Index can also be used to evaluate and compare other disciplines in psychology. Demonstrating that scientific results are replicable is of utmost importance to ensure that the general public and paying undergraduate students do not lose trust in psychology.
Aldhous, P. (2011). Journal rejects studies contradicting precognition. New Scientist. https://www.newscientist.com/article/dn20447-journal-rejects-studies-contradicting-precognition/ (retrieved 1/6/2020)
Alogna, V. K., Attaya, M. K., Aucoin, P., Bahnik, S., Birch, S., Birt, A. R., . . . Zwaan, R. A. (2014). Registered replication report: Schooler & Engstler-Schooler (1990). Perspectives on Psychological Science, 9, 556–578.
Barrett, L. F. (2015). Psychology is not in crisis. New York Times. https://www.nytimes.com/2015/09/01/opinion/psychology-is-not-in-crisis.html. (retrieved 1/8/2020)
Bartlett, T. (2013). Power of Suggestion: The amazing influence of unconscious cues is among the most fascinating discoveries of our time—that is, if it’s true. The Chronicle of Higher Education, https://www.chronicle.com/article/Power-of-Suggestion/136907
Bartoš, F. & Schimmack, U. (2020). Z-Curve.2.0: Estimating Replication and Discovery Rates. Manuscript Submitted for Publication.
Bem, D. J. (2000). Writing an empirical article. In R. J. Sternberg (Ed.), Guide to publishing in psychological journals (pp. 3–16). Cambridge, England: Cambridge University Press. doi:10.1017/CBO9780511807862.002
Bem, D. J. (2011). Feeling the future: Experimental evidence for anomalous retroactive influences on cognition and affect. Journal of Personality and Social Psychology, 100, 407–425. doi:10.1037/a0021524
Bem, D. J., Utts, J., & Johnson, W. O. (2011). Must psychologists change the way they analyze their data? Journal of Personality and Social Psychology, 101(4), 716–719. https://doi.org/10.1037/a0024777
Brunner, J. (2018). An even better p-curve. https://replicationindex.com/2018/05/10/an-even-better-p-curve/ (retrieved 1/8/2020)
Brunner, J. & Schimmack, U. (2019). Estimating Population Mean Power Under Conditions of Heterogeneity and Selection for Significance. Meta-Psychology, In Press.
Bryan, C. J., Yeager, D. S., & O’Brien, J. M. (2019). Replicator degrees of freedom allow publication of misleading failures to replicate. Proceedings of the National Academy of Sciences, 116, 25535–25545. https://doi.org/10.1073/pnas.1910951116
Carter, E. C., Kofler, L. M., Forster, D. E., & McCullough, M. E. (2015). A series of meta-analytic tests of the depletion effect: Self-control does not seem to rely on a limited resource. Journal of Experimental Psychology: General, 144(4), 796–815. https://doi.org/10.1037/xge0000083
Carter, E. C., and McCullough, M. E. (2013). Is ego depletion too incredible? Evidence for the overestimation of the depletion effect. Behav. Brain Sci. 36, 683–684. doi: 10.1017/S0140525X13000952
Carter, E. C., & McCullough, M. E. (2014). Publication bias and the limited strength model of self-control: Has the evidence for ego depletion been overestimated? Frontiers in Psychology, 5, Article 823.
Cohen, J. (1962). The statistical power of abnormal-social psychological research: A review. Journal of Abnormal and Social Psychology, 65, 145–153.
Cohen, J. (1994). The earth is round (p < .05). American Psychologist, 49, 997–1003.
Cunningham, M. R., & Baumeister, R. F. (2016). How to make nothing out of something: Analyses of the impact of study sampling and statistical interpretation in misleading meta-analytic conclusions. Frontiers in Psychology, 7, Article 1639.
Elkins-Brown, N., Saunders, B., & Inzlicht, M. (2018). The misattribution of emotions and the error-related negativity: A registered report. Cortex: A Journal Devoted to the Study of the Nervous System and Behavior, 109, 124–140. https://doi.org/10.1016/j.cortex.2018.08.017
Fiedler, K. (2015). Regression to the mean. https://brettbuttliere.wordpress.com/2018/03/10/fiedler-on-the-replicability-project/ (retrieved 1/6/2020)
Fisher, R. A. (1926). The arrangement of field experiments. Journal of the Ministry of Agriculture, 33, 503–513.
Francis, G. (2012). Too good to be true: Publication bias in two prominent studies from experimental psychology. Psychonomic Bulletin & Review, 19, 151–156. doi:10.3758/s13423-012-0227-9
Galak, J., LeBoeuf, R. A., Nelson, L. D., & Simmons, J. P. (2012). Correcting the past: Failures to replicate psi. Journal of Personality and Social Psychology, 103(6), 933–948. https://doi.org/10.1037/a0029709
Gilbert, D. T., King, G., Pettigrew, S., & Wilson, T. D. (2016). Comment on “Estimating the reproducibility of psychological science.” Science, 351(6277), 1037.
Gronau, Q. F., Duizer, M., Bakker, M., & Wagenmakers, E.-J. (2017). Bayesian mixture modeling of significant p values: A meta-analytic method to estimate the degree of contamination from H₀. Journal of Experimental Psychology: General, 146(9), 1223–1233. https://doi.org/10.1037/xge0000324
Hagger, M. S., Wood, C., Stiff, C., & Chatzisarantis, N. L. D. (2010). Ego depletion and the strength model of self-control: A meta-analysis. Psychological Bulletin, 136(4), 495–525. https://doi.org/10.1037/a0019486
Hagger, M. S., Chatzisarantis, N. L. D., Alberts, H., Anggono, C. O., Batailler, C., Birt, A. R., … Zwienenberg, M. (2016). A Multilab Preregistered Replication of the Ego-Depletion Effect. Perspectives on Psychological Science, 11(4), 546–573. https://doi.org/10.1177/1745691616652873
Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124. https://doi.org/10.1371/journal.pmed.0020124
Kahneman, D. (2003). Experiences of collaborative research. American Psychologist, 58(9), 723–730. https://doi.org/10.1037/0003-066X.58.9.723
Kruschke, J. K., & Liddell, T. M. (2018). The Bayesian New Statistics: Hypothesis testing, estimation, meta-analysis, and power analysis from a Bayesian perspective. Psychonomic Bulletin & Review, 25(1), 178–206. https://doi.org/10.3758/s13423-016-1221-4
Kvarven, A., Strømland, E., & Johannesson, M. (2019). Comparing meta-analyses and pre-registered multiple labs replication projects. Preprint. (retrieved 1/6/2020)
Lakens, D., Scheel, A. M., & Isager, P. M. (2018). Equivalence testing for psychological research: A tutorial. Advances in Methods and Practices in Psychological Science, 1(2), 259–269. https://doi.org/10.1177/2515245918770963
Luttrell, A., Petty, R. E., & Xu, M. (2017). Replicating and fixing failed replications: The case of need for cognition and argument quality. Journal of Experimental Social Psychology, 69, 178–183. https://doi.org/10.1016/j.jesp.2016.09.006
Maxwell, S. E., Lau, M. Y., & Howard, G. S. (2015). Is psychology suffering from a replication crisis? What does “failure to replicate” really mean? American Psychologist, 70(6), 487–498. https://doi.org/10.1037/a0039400
Motyl, M., Demos, A. P., Carsel, T. S., Hanson, B. E., Melton, Z. J., Mueller, A. B., Prims, J. P., Sun, J., Washburn, A. N., Wong, K. M., Yantis, C., & Skitka, L. J. (2017). The state of social and personality science: Rotten to the core, not so bad, getting better, or getting worse? Journal of Personality and Social Psychology, 113(1), 34–58. https://doi.org/10.1037/pspa0000084
Nelson, L. D., Simmons, J., & Simonsohn, U. (2018). Psychology’s renaissance. Annual Review of Psychology, 69, 511–534. https://doi.org/10.1146/annurev-psych-122216-011836
Noah, T., Schul, Y., & Mayo, R. (2018). When both the original study and its failed replication are correct: Feeling observed eliminates the facial-feedback effect. Journal of Personality and Social Psychology, 114(5), 657–664. https://doi.org/10.1037/pspa0000121
Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. https://doi.org/10.1126/science.aac4716
Popper, K. R. (1959). The logic of scientific discovery. London, England: Hutchinson.
Ritchie, S. J., Wiseman, R., & French, C. C. (2012). Failing the future: Three unsuccessful attempts to replicate Bem’s “retroactive facilitation of recall” effect. PLoS ONE, 7(3), Article e33423. https://doi.org/10.1371/journal.pone.0033423
Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychological Bulletin, 86(3), 638–641. https://doi.org/10.1037/0033-2909.86.3.638
Schimmack, U. (2012). The ironic effect of significant results on the credibility of multiple-study articles. Psychological Methods, 17(4), 551–566. https://doi.org/10.1037/a0029487
Schimmack, U. (2018). Why the Journal of Personality and Social Psychology Should Retract Article DOI: 10.1037/a0021524 “Feeling the Future: Experimental evidence for anomalous retroactive influences on cognition and affect” by Daryl J. Bem. https://replicationindex.com/2018/01/05/bem-retraction/ (blog post retrieved 1/6/2020)
Schimmack, U. (2018). Fritz Strack asks “Have I done something wrong” https://replicationindex.com/2018/04/29/fritz-strack-response/ (retrieved 1/8/2020)
Schimmack, U. (2020). The Replicability Index is the most powerful tool to detect publication bias in meta-analyses. https://replicationindex.com/2020/01/01/the-replicability-index-is-the-most-powerful-tool-to-detect-publication-bias-in-meta-analyses/ (blog post retrieved 1/6/2020)
Schimmack, U. & Brunner. J. (2019). The Bayesian Mixture Model for p-curves is fundamentally flawed. https://replicationindex.com/2019/04/01/the-bayesian-mixture-model-is-fundamentally-flawed/ (retrieved 1/8/2020)
Schooler, J. W. (2014). Turning the lens of science on itself: Verbal overshadowing, replication, and metascience. Perspectives on Psychological Science, 9(5), 579–584. https://doi.org/10.1177/1745691614547878
Schooler, J. W., & Engstler-Schooler, T. Y. (1990). Verbal overshadowing of visual memories: Some things are better left unsaid. Cognitive Psychology, 22(1), 36–71. https://doi.org/10.1016/0010-0285(90)90003-M
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359–1366. https://doi.org/10.1177/0956797611417632
Simonsohn, U., Nelson, L. D., & Simmons, J. P. (2014). P-curve and effect size: Correcting for publication bias using only significant results. Perspectives on Psychological Science, 9, 666–681.
Simonsohn, U., Nelson, L. D., & Simmons, J. P. (2017). P-curve app 4.06. Retrieved May 30, 2019, from http://www.p-curve.com
Sterling, T. D. (1959). Publication decisions and their possible effects on inferences drawn from tests of significance – or vice versa. Journal of the American Statistical Association, 54, 30–34.
Sterling, T. D., Rosenbaum, W. L., & Weinkam, J. J. (1995). Publication decisions revisited: The effect of the outcome of statistical tests on the decision to publish and vice versa. The American Statistician, 49, 108–112.
Strack, F. (2016). Reflection on the smiling registered replication report. Perspectives on Psychological Science, 11(6), 929–930. https://doi.org/10.1177/1745691616674460
Stroebe, W., & Strack, F. (2014). The alleged crisis and the illusion of exact replication. Perspectives on Psychological Science, 9(1), 59–71. https://doi.org/10.1177/1745691613514450
van Aert, R. C. M., Wicherts, J. M., & van Assen, M. A. L. M. (2016). Conducting meta-analyses based on p values: Reservations and recommendations for applying p-uniform and p-curve. Perspectives on Psychological Science, 11, 713–729.
Wagenmakers, E.-J., Beek, T., Dijkhoff, L., Gronau, Q. F., Acosta, A., Adams, R. B., … Zwaan, R. A. (2016). Registered Replication Report: Strack, Martin, & Stepper (1988). Perspectives on Psychological Science, 11(6), 917–928. https://doi.org/10.1177/1745691616674458
Wagenmakers, E.-J., Wetzels, R., Borsboom, D., & van der Maas, H. L. J. (2011). Why psychologists must change the way they analyze their data: The case of psi: Comment on Bem (2011). Journal of Personality and Social Psychology, 100(3), 426–432.
Wegner, D. M. (1992). The premature demise of the solo experiment. Personality and Social Psychology Bulletin, 18(4), 504–508. https://doi.org/10.1177/0146167292184017
Yong, E. (2012). Nobel laureate challenges psychologists to clean up their act: Social-priming research needs “daisy chain” of replication. Nature. https://www.nature.com/news/nobel-laureate-challenges-psychologists-to-clean-up-their-act-1.11535
Zwaan, R. A., Etz, A., Lucas, R. E., & Donnellan, M. B. (2018). Making replication mainstream. Behavioral and Brain Sciences, 41, Article e120. https://doi.org/10.1017/S0140525X17001972