Social psychologists are known for deception. First, they deceived their participants about the purpose of their studies, as in the famous Milgram experiment. Then, they deceived themselves that their studies produce robust and replicable results. After it became apparent that only about 25% of published results in social psychology can be replicated, they are now deceiving readers to maintain the illusion that their field is a science.
The latest blatant attempt at deception is Fabrigar, Wegener, and Petty’s article “A Validity-Based Framework for Understanding Replication in Psychology,” published in PSPB, which is edited by Chris Crandall, who has been defending shoddy practices and questionable results on social media for the past decade.
The authors’ first deception is that they fail to mention the extent of the replication crisis in social psychology. A comprehensive replication attempt found that only 25% of results in social psychology could be replicated (Open Science Collaboration, 2015), and there has been no other representative sample of social psychology studies. Nevertheless, the authors imply that the result was only sometimes less than 50%. This dishonest presentation of the facts has been used by several prominent social psychologists to avoid stating the fact that only a quarter of published results are expected to replicate (cf. Schimmack, 2020a).

Next, the authors note that researchers make different attributions about the causes of replication failures. Some authors assume that the low replication rate shows that original results were produced with questionable research practices that inflate effect sizes and make it unlikely that a replication study will be successful (John et al., 2012). Other researchers defend original studies and blame replication failures on problems with the replication studies. However, the authors fail to mention that there is strong support for the first explanation and very little support for the second explanation (Schimmack, 2020a).

It is unscientific and deceptive to omit relevant data from an article on a topic that can be examined empirically. The argument about whether we can trust original studies is not like an argument about ice cream flavors. Regarding the replication crisis, there is a correct answer, and the empirical data clearly show that questionable research practices were used to present everything as statistically significant, even time-reversed stimulation by erotic stimuli (Bem, 2011; cf. Schimmack, 2018). Why should anybody trust social psychologists if they are not able to admit their mistakes and learn from them?
The deception does not end here. The authors claim that replication failures can be attributed to four potential problems: statistical conclusion validity, internal validity, construct validity, and external validity. This sounds super scientific, but is just bullshit.
Internal validity is about causality, and if an experiment is replicated with another experiment, both studies have internal validity. So, a replication failure cannot be attributed to low internal validity in the replication study.
External validity is the question of whether a laboratory experiment produces results that can be generalized to the real world. An independent criticism of experimental social psychology is that many experiments lack external validity, but this is true for original and replication studies alike. Based on concerns about external validity, social psychology should run fewer experiments, but this has nothing to do with the replication crisis.
Construct validity has to do with the ability of an experimental manipulation to influence the variable of interest (e.g., mood) and with the amount of variance in a measure that reflects the construct that is supposed to be measured (e.g., prejudice). Once more, construct validity is a property of original and replication studies alike. So, construct validity also has nothing to do with the replication crisis. However, construct validity is a problem in social psychology because many measures have not been properly validated (Schimmack, 2020b), a problem that is not unique to social psychology (Schimmack, 2020c).
This leaves only statistical conclusion validity as a viable explanation for replication failures, but the term statistical conclusion validity is rarely used and its meaning is unclear. The authors explain:

In short, statistical conclusion validity boils down to not making a type-I error or a type-II error. However, it is problematic to talk about replication failures in terms of these two errors when the null-hypothesis is specified as an effect size of zero; the nil-hypothesis (Cohen, 1994; Schimmack, 2020a). Let’s use a simple example. Let’s say that some subtle experimental manipulation has a very small effect on participants’ behaviour, d = .05. As we are assuming a non-zero effect size, we know that there is an effect and that the nil-hypothesis is false. Therefore, studies that test this hypothesis can only make a type-II error. Now assume that a researcher conducts a study with N = 30 participants. This study has a probability of 5.2% to produce a significant result with the classic criterion of p < .05 (two-tailed). So, we would expect a non-significant result. However, using a variety of statistical tricks, known as questionable research practices, a researcher can inflate the effect size and increase the chance of obtaining a significant result to 60% or more (Simmons et al., 2011). Thus, it does not require a lot of resources to produce “evidence” for the effect. Now let’s consider a researcher who does attempt to replicate these findings, but without statistical tricks. This researcher is very unlikely (about 1 out of 40 times) to produce a significant result that matches the original result (effect size in the same direction & p < .05). So, this researcher will publish a replication failure.
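To make the arithmetic behind these numbers easy to check, here is a minimal sketch in Python (my own illustration, not from the article under discussion), assuming a two-sample t-test with 15 participants per group (one plausible reading of N = 30), a true effect of d = .05, and alpha = .05 two-tailed. It computes the power of an honest test and the chance of a significant result in the same direction as the true effect.

```python
# Minimal power sketch for the worked example above (assumptions: two-sample
# t-test, d = .05, n = 15 per group, alpha = .05, two-tailed).
import numpy as np
from scipy import stats

d = 0.05           # assumed true standardized effect size
n_per_group = 15   # N = 30 in total
alpha = 0.05

df = 2 * n_per_group - 2
# Noncentrality parameter of the t-statistic when the true effect is d
ncp = d * np.sqrt(n_per_group * n_per_group / (2 * n_per_group))
t_crit = stats.t.ppf(1 - alpha / 2, df)

# Power: probability of a significant result in either direction
power = (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)
# Probability of a significant result in the same direction as the true effect
same_direction = 1 - stats.nct.cdf(t_crit, df, ncp)

print(f"Power of an honest test: {power:.3f}")                           # ~0.05
print(f"Significant and in the right direction: {same_direction:.3f}")   # ~0.03
```

Under these assumptions, the power of an honest study is barely above the significance criterion itself, which is the point of the example: a replication study without statistical tricks is almost guaranteed to produce a non-significant result.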
Based on social psychologists’ logic, the replication failure is the wrong result because it fails to provide evidence against the nil-hypothesis when the nil-hypothesis is false, while the original study showed the correct result. The problem with this warped logic is that the original study used deception to produce evidence against the false nil-hypothesis. It is deceptive to claim that the probability of a type-I error is no more than 5% when questionable research practices were used. This problem is ignored when we focus on the type-II error in the replication study.
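How easily questionable research practices undermine the nominal 5% error rate can be shown with a toy simulation. The sketch below (again my own illustration; the specific practices and numbers are assumptions in the spirit of Simmons et al., 2011, not taken from any of the cited studies) runs experiments under the nil-hypothesis while the researcher peeks at the data at several sample sizes and reports whichever of two correlated dependent variables happens to reach significance.

```python
# Toy simulation of two questionable research practices under the nil-hypothesis
# (true effect = 0): optional stopping (peeking at several sample sizes) and
# reporting the better of two correlated dependent variables.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims = 10_000
alpha = 0.05
peeks = [20, 30, 40, 50]          # per-group sample sizes at which the data are tested
cov = [[1.0, 0.5], [0.5, 1.0]]    # two DVs correlated at r = .5, both with zero true effect

false_positives = 0
for _ in range(n_sims):
    treatment = rng.multivariate_normal([0, 0], cov, size=max(peeks))
    control = rng.multivariate_normal([0, 0], cov, size=max(peeks))
    significant = False
    for n in peeks:
        for dv in (0, 1):  # try both DVs and stop as soon as one "works"
            if stats.ttest_ind(treatment[:n, dv], control[:n, dv]).pvalue < alpha:
                significant = True
                break
        if significant:
            break
    false_positives += significant

print(f"Nominal type-I error rate: {alpha:.2f}")
print(f"Observed rate of 'significant' results: {false_positives / n_sims:.2f}")
```

With just these two practices the observed rate of “significant” results comes out well above the nominal 5%, so reporting p < .05 as if the type-I error risk were 5% is misleading.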
What is fundamentally wrong with experimental social psychology is the idea that falsifying the nil-hypothesis is sufficient to make scientific advances. It is sad that social psychologists in 2020 can still publish an article that maintains this illusion. Using questionable research practices to produce p-values less than .05 in tests of the nil-hypothesis is not a sound scientific method. As long as social psychologists deceive themselves that it is, social psychology is not a science. Defund social psychology until the field cleans up its act.
References
John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, 23(5), 524–532. https://doi.org/10.1177/0956797611430953
Open Science Collaboration (OSC). (2015). Estimating the reproducibility of psychological science. Science, 349, aac4716. http://dx.doi.org/10.1126/science.aac4716
Schimmack, U. (2018). Why the Journal of Personality and Social Psychology should retract article DOI: 10.1037/a0021524 (“Feeling the future: Experimental evidence for anomalous retroactive influences on cognition and affect” by Daryl J. Bem). https://replicationindex.com/2018/01/05/bem-retraction/
Schimmack, U. (2020a). A meta-psychological perspective on the decade of replication failures in social psychology. https://replicationindex.com/2020/01/05/replication-crisis-review/ (also in press in a peer-reviewed journal).
Schimmack, U. (2020b). The Implicit Association Test: A measure in search of a construct. https://replicationindex.com/2019/05/30/iat-pops/ (also a peer-reviewed article in PoPS).
Schimmack, U. (2020c). The validation crisis in psychology. https://replicationindex.com/2019/02/16/the-validation-crisis-in-psychology/ (also in press in Meta-Psychology, a peer-reviewed journal).
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359–1366. http://dx.doi.org/10.1177/0956797611417632
Comments

According to Wikipedia:
“Discrimination is the act of making distinctions between human beings based on the groups, classes, or other categories to which they are perceived to belong. People may discriminate on the basis of … profession… Discrimination especially occurs when individuals or groups are treated “in a way which is worse than the way people are usually treated,” on the basis of their actual or perceived membership in certain groups or social categories.”
Guilty on all counts, right? How about dropping the hatred, cooling down, and distinguishing between individual authors and their arguments, entire disciplines and individual authors, and considering the possibility that some arguments in this world may be a bit more complex than the anti-theoretical focus on replicability foresees…?
You are adding fuel to the fire when you write “anti-theoretical focus on replicability.” If you cannot see that replicability is necessary so that empirical evidence can be used to test, strengthen, or weaken theories, you are part of the problem.
You think that’s bad? Just how replicable and informative is the “interview 13 people and make generalisations based on selected quotes” approach in “qualitative” research, or its even more self-absorbed hip sibling, “Interpretative Phenomenological Analysis”? And that relates to the good stuff in that field.