Denial is Not Going to Fix Social Psychology

In 2015, a large group of psychologists replicated published results from psychology journals. Whereas the original articles, which often contained multiple studies, reported almost exclusively significant results (a 97% success rate), the replication studies of social psychology produced significant results only 25% of the time (Open Science Collaboration, 2015).

Since this embarrassing finding was published, leaders of social psychology have engaged in damage control, using a string of false arguments to suggest that a replication rate of 25% is normal and not a crisis (see Schimmack, 2020, for a review).

One open question about the OSC results is what they actually mean. One explanation is that the original studies reported false positive results. That is, a significant result was reported although the experimental manipulation actually has no effect. The other explanation is that the original studies merely reported inflated effect sizes, but did get the direction of the effect right. As social psychologists do not care about effect sizes, the latter explanation is not a problem for them. Unfortunately, a replication rate of 25% does not tell us how many original results were false positives, but there have been attempts to estimate the false discovery rate in the OSC studies.

Brent M. Wilson, a postdoctoral researcher at UC San Diego, and John T. Wixted, a distinguished professor at the same university, published an article that used sign-changes between original studies and replication studies to estimate the false discovery rate (Wilson & Wixted, 2018). The logic is straightforward. When the null hypothesis is true, sampling error alone makes an effect in one direction (an increase) just as likely as an effect in the other direction (a decrease). Thus, a sign change in a replication study may suggest that the original result was a statistical fluke. Based on this logic, the authors concluded that 49% of the results in social psychology were false positive results.
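To make the sign-change logic concrete, here is a minimal simulation sketch. It is my own illustration, not Wilson and Wixted's estimation code, and the proportion of true nulls and the effect size are assumptions chosen purely for demonstration.

```python
import numpy as np

# Minimal sketch of the sign-change logic; the proportion of true nulls
# and the effect size are illustrative assumptions, not estimates from
# Wilson and Wixted (2018).
rng = np.random.default_rng(seed=1)

n_studies = 100_000   # hypothetical number of original studies
prop_null = 0.50      # assumed share of studies testing a true null
effect_z = 2.0        # assumed expected z-score of real effects

# Observed z-score = true signal + standard normal sampling error.
is_null = rng.random(n_studies) < prop_null
signal = np.where(is_null, 0.0, effect_z)
z_orig = signal + rng.standard_normal(n_studies)
z_rep = signal + rng.standard_normal(n_studies)

# Only originals significant at p < .05 (two-sided) get "published".
published = np.abs(z_orig) > 1.96

# How often does the replication's sign differ from the original's?
flipped = np.sign(z_orig) != np.sign(z_rep)
print(f"sign changes, published false positives: {flipped[published & is_null].mean():.2f}")
print(f"sign changes, published true effects:    {flipped[published & ~is_null].mean():.2f}")
```

Because the sign-change rate is about 50% for published false positives but much lower for published true effects, the observed rate of sign changes in the OSC replications can be inverted to estimate what share of the published results were false positives.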

The implications of this conclusion cannot be overstated. Every other result published in social psychology would be a false positive. Half of the studies in social psychology textbooks would support false claims, unless textbook writers are clairvoyant and can tell true effects from false ones. As if this were not bad enough, the 49% estimate relies on the nil-hypothesis (an effect of exactly zero) to classify a reported result as false. However, effects in the predicted direction that are very small have no practical significance, especially when effect sizes are difficult to estimate because they are sensitive to small changes in experimental procedures. Thus, the implication of Wilson and Wixted’s article is that social psychology has a replication crisis because it is not clear which published results can be replicated with practically meaningful effect sizes. I cited the article accordingly in my review article (Schimmack, 2020).

You may understand my surprise when, two years later, the same authors published another article claiming that most published results are true (Wilson, Harris, & Wixted, 2020).

Although the authors do not suffer from full-blown amnesia and do recall and cite their previous article, they fail to mention their earlier estimate that 49% of published results in social psychology are false positives. Instead, they blur the distinction between cognitive and social psychology, although the estimated false discovery rate was 19% for cognitive psychology, compared to 49% for social psychology.

So, apparently the authors remembered that they published an article on this topic, but they forgot its main argument and conclusions. In fact, in the original article they found a silver lining in the finding that 49% or more of the results in social psychology are false positives. They argued that this shows social psychologists are willing to test risky hypotheses that have a high chance of being false. By contrast, cognitive psychologists should be ashamed of their 81% success rate, which only shows that they make obvious predictions.

Assuming their estimate is correct, it is not good news that only 1 out of 17 hypotheses tested by social psychologists is true (see the back-of-the-envelope check below). The problem is that social psychologists do not simply test a hypothesis and give up when they get a non-significant result. Rather, they run a series of conceptual replication studies with minor variations until a significant result is found. Thus, the chance that false findings get published is rather high, which would explain why published findings are difficult to replicate.
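A back-of-the-envelope check shows how these numbers hang together. The power value of .80 below is my own assumption for illustration, not a figure taken from the article; with it, prior odds of 1:16 and an alpha of .05 reproduce a false discovery rate of roughly 50%, close to the 49% estimate.

```python
# If 1 out of 17 tested hypotheses is true (prior odds of 1:16),
# alpha = .05, and power is assumed to be .80 (my assumption, not a
# figure from Wilson and Wixted, 2018), roughly half of all
# significant results are false positives.
alpha = 0.05     # two-sided significance criterion
power = 0.80     # assumed average power for real effects
p_true = 1 / 17  # share of tested hypotheses that are true

false_positives = alpha * (1 - p_true)  # significant results from true nulls
true_positives = power * p_true         # significant results from real effects
fdr = false_positives / (false_positives + true_positives)
print(f"false discovery rate: {fdr:.2f}")  # 0.50, close to the 49% estimate
```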

In conclusion, Wilson and Wixted published two articles with opposing conclusions. The first article claims that social psychology is a wild chase after effects, in which most experiments test hypotheses that are false (i.e., the null-hypothesis is true). This leads to the publication of many false positive results that fail to replicate in honest replication studies that do not select for significance. Two years later, social psychology is a respectable science that may not be much different from cognitive psychology, and most published results are true, which also implies that most tested hypotheses must be true, because a high proportion of false hypotheses would produce many false positive results in journals.

What caused this flip-flop about the replication crisis is unclear. Maybe the fact that Susan T. Fiske was in charge of publishing the new article in PNAS has something to do with it. Maybe she pressured the authors into saying nice things about social psychology. Maybe they were willing accomplices in whitewashing the embarrassing replication outcome for social psychology. I don’t know and I don’t care. Their new PNAS article is nonsense and ignores other evidence that social psychology has a major replication problem (Schimmack, 2020). Fiske may wish that articles like the PNAS article could hide the fact that social psychologists made a mockery of the scientific method (publish only studies that work, err on the side of discovery, never replicate a study so that you have plausible deniability). I can only hope that young scholars realize that the old practices produced a pile of results with no theoretical or practical meaning, and that they work towards improving scientific practices. The future of social psychology depends on it.

References

Open Science Collaboration (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716.

Schimmack, U. (2020). A meta-psychological perspective on the decade of replication failures in social psychology. Canadian Psychology, in press. https://replicationindex.com/2020/01/05/replication-crisis-review/

Wilson, B. M., & Wixted, J. T. (2018). The prior odds of testing a true effect in cognitive and social psychology. Advances in Methods and Practices in Psychological Science, 1, 186–197.

Wilson, B. M., Harris, C. R., & Wixted, J. T. (2020). Science is not a signal detection problem. Proceedings of the National Academy of Sciences. http://www.pnas.org/cgi/doi/10.1073/pnas.1914237117
