One of the bigger stories in Psychological (WannaBe) Science was the forced resignation of Klaus Fiedler from his post as editor-in-chief at the prestigious journal “Perspectives on Psychological Science.” In response to his humiliating eviction, Klaus Fiedler declared “I am the victim.”

In an interview, he claimed that his actions that led to the vote of no confidence by the Board of Directors of the Association for Psychological Science (APS) were “completely fair, respectful, and in line with all journal standards.” In contrast, the Board of Directors listed several violations of editorial policies and standards.

The APS board listed the following complaints:

- accept an article criticizing the original article based on three reviews that were also critical of the original article and did not reflect a representative range of views on the topic of the original article;
- invite the three reviewers who reviewed the critique favorably to themselves submit commentaries on the critique;
- accept those commentaries without submitting them to peer review; and,
- inform the author of the original article that his invited reply would also not be sent out for peer review. The EIC then sent that reply to be reviewed by the author of the critical article to solicit further comments.

As bystanders, we have to decide whether these accusations by several board members are accurate or whether these are trumped-up charges that misrepresent the facts and Fiedler is an innocent victim. Even without specific knowledge about this incident and the people involved, bystanders are probably forming an impression about Fiedler and his accusers. First, it is a natural human response to avoid embarrassment after a public humiliation. Thus, Fiedler’s claims of no wrong-doing have to be taken with a grain of salt. On the other hand, APS board members could also have motives to distort the facts, although they are less obvious.

To understand the APS board’s response to Fiedler’s actions, it is necessary to take into account that Fiedler’s questionable editorial decisions affected Steven Roberts, an African American scholar who had published an article about systemic racism in psychology in the same journal under a previous editor (Roberts et al., 2020). Fiedler’s decision to invite three White critical reviewers to submit their criticisms as additional commentaries was perceived by Roberts as racially biased. When he made his concerns public, over 1,000 bystanders agreed and signed an open letter asking for Fiedler’s resignation. In contrast, an opposing open letter received far fewer signatures. While some signatories on both sides have their own biases because they know Fiedler as a friend or foe, most signatories did not know anything about Fiedler but reacted to Roberts’ description of his treatment. Fiedler never denied that this account was an accurate description of events. He merely claims that his actions were “completely fair, respectful, and in line with journal standards.” Yet nobody else has supported Fiedler’s claim that it is entirely fair and acceptable to invite three White-ish reviewers to submit their reviews as commentaries and to accept these commentaries without peer review.

I conducted an informal and unrepresentative poll that confirmed my belief that inviting reviewers to submit a commentary is rare.

What is even more questionable is that all three reviews supported Hommel’s critical commentary on Roberts’ target article. It is not clear why reviews of a commentary needed to be published as additional commentaries if these reviews agreed with Hommel’s commentary. The main point of reviews is to determine whether a submission is suitable for publication. If Hommel’s commentary was so deficient that all three reviewers were able to make additional points that were missing from it, the submission should have been rejected, with or without a chance of resubmission. In short, Fiedler’s actions were highly unusual and questionable, even if they were not racially motivated.

Even if Fiedler thought that his actions were fair and unbiased when he was acting, the response by Roberts, over 1,000 signatories, and the APS board of directors could have made him realize that others viewed his behavior differently and maybe recognize that his actions were not as fair as he assumed. He could even have apologized for his actions, or at least for the harm they caused, however unintentional. Yet he chose to blame others for his resignation: “I am the victim.” I believe that Fiedler is indeed a victim, but not in the way he perceives the situation. Rather than blaming others for his disgraceful resignation, he should blame himself. To support my argument, I will propose a mediation model and provide a case study of Fiedler’s response to criticism as empirical support.

## From Arrogance to Humiliation

A well-known biblical proverb states that arrogance is the cause of humiliation (“Hochmut kommt vor dem Fall,” pride comes before the fall). I am proposing a mediation model of this assumed relationship. Fiedler is very familiar with mediation models (Fiedler, Harris, & Schott, 2018). A mediation model is basically a causal chain. I propose that arrogance may lead to humiliation because it breeds ignorance. Figure 1 shows ignorance as the mediator. That is, arrogance makes it more likely that somebody discounts valid criticism. In turn, individuals may act in ways that are not adaptive or socially acceptable. This leads to either personal harm or damage to a person’s reputation. Arrogance and ignorance will also shape the response to social rejection. Rather than making an internal attribution that elicits feelings of embarrassment, an emotion that repairs social relationships, arrogant and ignorant individuals will make an external attribution (blame) that leads to anger, an emotion that further harms social relationships.
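To make the proposed chain concrete, here is a minimal simulation sketch of a mediation model, with arrogance as predictor, ignorance as mediator, and humiliation as outcome. The variable names, path coefficients (0.5 each), and sample size are purely illustrative assumptions, not estimates from any data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000

# Simulated causal chain with hypothetical path coefficients:
# arrogance -> ignorance (a = 0.5) -> humiliation (b = 0.5)
arrogance = rng.normal(size=n)
ignorance = 0.5 * arrogance + rng.normal(size=n)
humiliation = 0.5 * ignorance + rng.normal(size=n)

# Path a: slope of the mediator regressed on the predictor
a = np.polyfit(arrogance, ignorance, 1)[0]

# Path b: slope of the outcome on the mediator, controlling for the predictor
X = np.column_stack([ignorance, arrogance, np.ones(n)])
b = np.linalg.lstsq(X, humiliation, rcond=None)[0][0]

indirect_effect = a * b  # estimated mediated effect
print(round(indirect_effect, 2))
```

Under these assumptions, the estimated indirect effect (the product of paths a and b) recovers roughly 0.5 × 0.5 = 0.25, the part of the arrogance–humiliation association that runs through ignorance.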

Fiedler’s claim that his actions were fair and that he is the victim makes it clear that he made an external attribution. He blames others, but the real problem is that Fiedler is unable to recognize when he is wrong and criticism is justified. This attributional bias is well known in psychology and is called a self-serving attribution. To enhance their self-esteem, some individuals attribute successes to their own abilities and blame others for their failures. I present a case study of Fiedler’s response to the replication crisis as evidence that his arrogance blinds him to valid criticism.

## Replicability and Regression to the Mean

In 2011, social psychology was faced with emerging evidence that many findings, including fundamental findings like unconscious priming, cannot be replicated. A major replication project found that only 25% of social psychology studies produced a significant result again in an attempt to replicate the original study. These findings have triggered numerous explanations for the low replication rate in social psychology (OSC, 2015; Schimmack, 2020; Wiggins & Christopherson, 2019).

Explanations for the replication crisis in social psychology can be divided into two camps. One camp believes that replication failures reveal major problems with the studies that social psychologists conducted for decades. The other camp argues that replication failures are a normal part of science and that published results can be trusted even if they failed to replicate in recent replication studies. A notable difference between these two camps is that defenders of the credibility of social psychology tend to be established and prominent figures in social psychology. As a result, they also tend to be older, male, and White. However, these surface characteristics are only correlated with views about the replication crisis. The main causal factor is likely eminent social psychologists’ concern about their reputation and legacy. Rather than becoming famous names along with Allport, their names may be used to warn future generations about the dark days when social psychologists invented theories based on unreliable results.

Consistent with the stereotype of old, White, male social psychologists, Fiedler has become an outspoken critic of the replication movement and tried to normalize replication failures. After the credibility of psychology was challenged in news outlets, the board of the German Psychological Society (DGPs) issued a reassuring (whitewashing) statement that tried to assure the public that psychology is a science. The web page has been deleted, but a copy of the statement is preserved here (Stellungnahme). This official statement triggered outrage among some members, and the DGPs created a discussion forum (also deleted now). Fiedler participated in this discussion with the claim that replication failures can be explained by a statistical phenomenon known as regression to the mean. He repeated this argument in an email to a reporter that was shared by Mickey Inzlicht in the International Social Cognition Network group (ISCON) on Facebook. This post elicited many comments that were mostly critical of Fiedler’s attempt to cast doubt on the scientific validity of the replication project. The ISCON post and the comments were deleted (when Mickey left Facebook), but they were preserved in my Google inbox. Here is the post and the most notable comments.

## Michael Inzlicht shares Fiedler’s response to the outcome of the Reproducibility Project that only 25% of significant results in social psychology could be replicated (i.e., produced a p-value below .05).


August 31 at 9:46am

*Klaus Fiedler has granted me permission to share a letter that he wrote to a reporter (Bruce Bowers) in response to the replication project. This letter contains Klaus’s words only and the only part I edited was to remove his phone number. I thought this would be of interest to the group.*

Dear Bruce:

Thanks for your email. You can call me tomorrow but I guess what I have to say is summarized in this email.

Before I try to tell it like it is, I ask you to please attend to my arguments, not just the final evaluations, which may appear unbalanced. So if you want to include my statement in your article, maybe along with my name, I would be happy not to detach my evaluative judgment from the arguments that in my opinion inevitably lead to my critical evaluation.

First of all I want to make it clear that I have been a big fan of properly conducted replication and validation studies for many years – long before the current hype of what one might call a shallow replication research program. Please note also that one of my own studies has been included in the present replication project; the original findings have been borne out more clearly than in the original study. So there is no self-referent motive for me to be overly critical.

However, I have to say that I am more than disappointed by the present report. In my view, such an expensive, time-consuming, and resource-intensive replication study, which can be expected to receive so much attention and to have such a strong impact on the field and on its public image, should live up (at least) to the same standards of scientific scrutiny as the studies that it evaluates. I’m afraid this is not the case, for the following reasons …

The rationale is to plot the effect size of replication results as a function of original results. Such a plot is necessarily subject to regression toward the mean. On a-priori-grounds, to the extent that the reliability of the original results is less than perfect, it can be expected that replication studies regress toward weaker effect sizes. This is very common knowledge. In a scholarly article one would try to compare the obtained effects to what can be expected from regression alone. The rule is simple and straightforward. Multiply the effect size of the original study (as a deviation score) with the reliability of the original test, and you get the expected replication results (in deviation scores) – as expected from regression alone. The informative question is to what extent the obtained results are weaker than the to-be-expected regressive results.

To be sure, the article’s muteness regarding regression is related to the fact that the reliability was not assessed. This is a huge source of weakness. It has been shown (in a nice recent article by Stanley & Spence, 2014, in PPS) that measurement error and sampling error alone will greatly reduce the replicability of empirical results, even when the hypothesis is completely correct. In order not to be fooled by statistical data, it is therefore of utmost importance to control for measurement error and sampling error. This is the lesson we took from Frank Schmidt (2010). It is also very common wisdom.

The failure to assess the reliability of the dependent measures greatly reduces the interpretation of the results. Some studies may use single measures to assess an effect whereas others may use multiple measures and thereby enhance the reliability, according to a principle well-known since Spearman & Brown. Thus, some of the replication failures may simply reflect the naïve reliance on single-item dependent measures. This is of course a weakness of the original studies, but a weakness different from non-replicability of the theoretically important effect. Indeed, contrary to the notion that researchers perfectly exploit their degrees of freedom and always come up with results that overestimate their true effect size, they often make naïve mistakes.

By the way, this failure to control for reliability might explain the apparent replication advantage of cognitive over social psychology. Social psychologists may simply often rely on singular measure, whereas cognitive psychologists use multi-trial designs resulting in much higher reliability.

The failure to consider reliability refers to the dependent measure. A similar failure to systematically include manipulation checks renders the independent variables equivocal. The so-called Duhem-Quine problem refers to the unwarranted assumption that some experimental manipulation can be equated with the theoretical variable. An independent variable can be operationalized in multiple ways. A manipulation that worked a few years ago need not work now, simply because no manipulation provides a plain manipulation of the theoretical variable proper. It is therefore essential to include a manipulation check, to make sure that the very premise of a study is met, namely a successful manipulation of the theoretical variable. Simply running the same operational procedure as years before is not sufficient, logically.

Last but not least, the sampling rule that underlies the selection of the 100 studies strikes me as hard to tolerate. Replication teams could select their studies from the first 20 articles published in a journal in a year (if I correctly understand this sentence). What might have motivated the replication teams’ choices? Could this procedure be sensitive to their attitude towards particular authors or their research? Could they have selected simply studies with a single dependent measure (implying low reliability)? – I do not want to be too suspicious here but, given the costs of the replication project and the human resources, does this sampling procedure represent the kind of high-quality science the whole project is striving for?

Across all replication studies, power is presupposed to be a pure function of the size of participant samples. The notion of a truly representative design in which tasks and stimuli and context conditions and a number of other boundary conditions are taken into account is not even mentioned (cf. Westfall & Judd).

## Comments

**Brent W. Roberts, 10:02am Sep 4**

This comment just killed me “What might have motivated the replication teams’ choices? Could this procedure be sensitive to Their attitude towards Particular authors or Their research?” **Once again, we have an eminent, high powered scientist impugning the integrity of, in this case, close to 300, mostly young researchers.** What a great example to set.

**Daniel Lakens, 12:32pm Sep 4**

**I think the regression to the mean comment just means: if you start from an extreme initial observation, there will be regression to the mean.** **He will agree there is publication bias –** but just argues the reduction in effect sizes is nothing unexpected – we all agree with that, I think. I find his other points less convincing – there is data about researchers’ expectancies about whether a study would replicate. Don’t blabla, look at data. The problem with moderators is not big – original researchers OK’d the studies – if they cannot think of moderators, we cannot be blamed for not including other checks. Finally, it looks like our power was good, if you examine the p-curve. Not in line with the idea we messed up. I wonder why, with all commentaries I’ve seen, no one takes the effort to pre-register their criticisms, and then just look at the studies and data, and let us know how much it really matters?

**Felix Cheung, 2:11pm Sep 4**

I don’t understand why the regression to mean cannot be understood in a more positive light **when the “mean” in regression to the mean refers to the effect sizes of interests. If that’s the case, then regressing to mean would mean that we are providing more accurate estimates of the effect sizes.**

**Joachim Vandekerckhove, 2:15pm Aug 31**

The dismissive “regression to the mean” argument **either simply takes publication bias as given or assumes that all effect sizes are truly zero.** Either of those assumptions make for an interesting message to broadcast, I feel.

**Michael Inzlicht, 2:54pm Aug 31**

I think we all agree with this, Jeff, but as Simine suggested, **if the study in question is a product of all the multifarious biases we’ve discussed and cannot be replicated (in an honest attempt), what basis do we have to change our beliefs at all?** To me the RP–plus lots of other stuff that has come to light in the past few years–make me doubt the evidentiary basis of many findings, and by extension, many theories/models. Theories are based on data…and it turns out that data might not be as solid as we thought.

**Jeff Sherman, 2:58pm Aug 31**

Michael, I don’t disagree. **I think RP–plus was an important endeavor.** I am sympathetic to Klaus’s lament that the operationalizations of the constructs weren’t directly validated in the replications.

**Uli Schimmack, 11:15am Sep 1**

**This is another example that many psychologists are still trying to maintain the illusion that psychology doesn’t have a replicability problem.** A recurrent argument is that human behavior is complex and influenced by many factors that will produce variation in results across seemingly similar studies.

Even if this were true, it would not explain why all original studies find significant effects. If moderators can make effects appear or disappear, there would be an equal number of non-significant results in original and replication studies. If psychologists were really serious about moderating factors, non-significant results would be highly important to understand under what conditions an effect does not occur. The publication of only significant results in psychology (Sterling, 1959) shows that psychologists are not really serious about moderating factors and that moderators are only invoked post-hoc to explain away failed replications of significant results.

**Just like Klaus Fiedler’s illusory regression to the mean, these arguments are hollow and only reveal the motivated biases of their proponents to deny a fundamental problem in the way psychologists collect, analyze, and report their research findings.**

If a 25% replication rate for social psychology is not enough to declare a crisis then psychology is really in a crisis and psychologists provide the best evidence for the validity of Freud’s theory of repression. Has Daniel Kahneman commented on the reproducibility-project results?

**Garriy Shteynberg, 10:33pm Sep 7**

**Again, I agree that there is publication bias** and its importance even **in a world where all H0 are false** (as you show in your last comment). Now, **do you see that in that very world, regression to the mean will still occur?** Also, in the spirit of the dialogue, try to refrain from claiming what others do not know. I am sure you realize that making such truth claims on very little data is at best severely underpowered.

**Uli Schimmack, 10:38pm Sep 7**

Garriy Shteynberg Sorry, but I always said that **regression to the mean occurs when there is selection bias, but without selection bias it will not occur.** That is really the issue here and I am not sure what point you are trying to make. We agree that studies were selected and that low replication rate is a result of this selection and regression to the mean. If you have any other point to make, you have to make it clearer.

**Malte Elson, 3:38am Sep 8**

Garriy Shteynberg would you maybe try me instead? I followed your example of the perfect discipline with great predictions and without publication bias. **What I haven’t figured out is what would cause regression to the mean to only occur in one direction (decreased effect size at replication level). The predictions are equally great at both levels since they are exactly the same. Why would antecedent effect sizes in publications be systematically larger if there was no selection at that level?**

**Marc Halusic, 12:53pm Sep 1**

Even if untold moderators affect the replicability of a study that describes a real effect, it would follow that any researcher who cannot specify the conditions under which an effect will replicate does not understand that effect well enough to interpret it in the discussion section.

**Maxim Milyavsky, 11:16am Sep 3**

**I am not sure whether Klaus meant that regression to the mean by itself can explain the failure of replication or regression to the mean given a selection bias. I think that without selection bias regression to the mean cannot count as an alternative explanation.** **If it could, every subsequent experiment would yield a smaller effect than the previous one, which sounds absurd.** **I assume that Klaus knows that. So, probably he admits that there was a selection bias. Maybe he just wanted to say – it’s nobody’s fault. Nobody played with data, people were just publishing effects that “worked”. Yet, what sounds puzzling to me is that he does not see any problem in this process.**

*– Mickey shared some of the responses with Klaus and posted Klaus’s responses to the comments. Several commentators tried to defend Klaus by stating that he would agree with the claim that selection for significance is necessary to see an overall decrease in effect sizes. However, Klaus Fiedler doubled down on the claim that this is not necessary, even though the implication would be that effect sizes shrink every time a study is replicated, which is “absurd” (Maxim Milyavsky), although even this absurd claim has been made (Schooler, 2011).*
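The disputed point can be checked with a short simulation sketch (the true effect size, sample sizes, and number of studies below are arbitrary assumptions): when originals and replications are simply two independent draws around the same true effect, mean effect sizes do not shrink from one study to the next; a systematic decline appears only once originals are filtered for significance.

```python
import numpy as np

rng = np.random.default_rng(0)
n_studies, n_per_group = 10_000, 20

true_d = 0.4                    # one shared true effect size (assumption)
se = np.sqrt(2 / n_per_group)   # approximate standard error of Cohen's d

original = rng.normal(true_d, se, n_studies)
replication = rng.normal(true_d, se, n_studies)

# Without selection, replications are not systematically weaker.
gap_all = original.mean() - replication.mean()  # close to 0

# With selection for significance (keep only originals with z > 1.96),
# replications of the selected studies regress back toward the true mean.
published = original / se > 1.96
gap_selected = original[published].mean() - replication[published].mean()

print(round(gap_all, 2), round(gap_selected, 2))
```

Under these assumptions the unselected gap hovers around zero, while the gap for the selected studies is substantial, which is the point made in the comments above: selection for significance is needed to turn regression to the mean into an overall decline in effect sizes.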

**Michael Inzlicht, September 2 at 1:08pm**

More from Klaus Fiedler. He has asked me to post a response to a sample of the replies I sent him. Again, this is unedited, directly copying and pasting from a note Klaus sent me. (Also not sure if I should post it here or the other, much longer, conversation).

Having read the echo to my earlier comment on the Nosek report, I got the feeling that I should add some more clarifying remarks.

(1) With respect to my complaints about the complete failure to take regressiveness into account, some folks seem to suggest that this problem can be handled simply by increasing the power of the replication study and that power is a sole function of N, the number of participants. Both beliefs are mistaken. Statistical power is not just a function of N, but also depends on treating stimuli as a random factor (cf. recent papers by Westfall & Judd). Power is 1 minus β, the probability that a theoretical hypothesis, which is true, will be actually borne out in a study. This probability not only depends on N. It also depends on the appropriateness of selected stimuli, task parameters, instructions, boundary conditions etc. Even with 1000 participant per cell, measurement and sampling error can be high, for instance, when a test includes weakly selected items, or not enough items. It is a cardinal mistake to reduce power to N.

(2) The only necessary and sufficient condition for regression (to the mean or toward less pronounced values) is a correlation less than one. This was nicely explained and proven by Furby (1973). We all “learned” that lesson in the first semester, but regression remains a counter-intuitive thing. When you plot effect sizes in the replication studies as a function of effect sizes in the original studies and the correlation between corresponding pairs is < 1, then there will be regression. The replication findings will be weaker than the original ones. One can refrain from assuming that the original findings have been over-estimations. One might represent the data the other way around, plotting the original results as a function of given effects in the replication studies, and one will also see regression. (Note in this connection that Etz’ Bayesian analysis of the replication project also identified quite a few replications that were “too strong”). For a nice illustration of this puzzling phenomenon, you may also want to read the Erev, Wallsten & Budescu (1994) paper, which shows both overconfidence and underconfidence in the same data array.

(3) I’m not saying that regression is easy to understand intuitively (Galton took many years to solve the puzzle). The very fact that people are easily fooled by regression is the reason why controlling for expected regression effects is standard in the kind of research published here. It is almost a prototypical example of what Don Campbell (1996) had in mind when he tried to warn the community from drawing erroneous inferences.

(4) I hope it is needless to repeat that controlling for the reliability of the original studies is essential, because variation in reliability affects the degree of regressiveness. It is particularly important to avoid premature interpretations of seemingly different replication results (e.g., for cognitive and social psychology) that could reflect nothing but unequal reliability.

(5) My critical remark that the replication studies did not include manipulation checks was also met with some spontaneous defensive reactions. Please note that the goal to run so-called “exact” replications (I refrain from discussing this notion here) does not prevent replication researchers from including additional groups supposed to estimate the effectiveness of a manipulation under the current conditions. (Needless to add that a manipulation check must be more than a compliant repetition of the instruction).

(6) Most importantly perhaps, I would like to reinforce my sincere opinion that methodological and ethical norms have to be applied to such an expensive, pretentious and potentially very consequential project even more carefully and strictly than they are applied to ordinary studies. Hardly any one of the 100 target studies could have a similarly strong impact, and call for a similar degree of responsibility, as the present replication project.

Kind regards, Klaus

*This response elicited an even more heated discussion. Unfortunately, only some of these comments were mailed to my inbox. I must have made a very negative comment about Klaus Fiedler that elicited a response by Jeff Sherman, the moderator of the group. Eventually, I was banned from the group and created the Psychological Methods Discussion Group, which became the main group for critical discussion of psychological science.*

**Uli Schimmack, 2:36pm Sep 2**

Jeff Sherman The comparison extends to the official statement (in German) regarding the results of the OSF replication project. It does not mention that publication bias is at least a factor that contributed to the outcome or mention any initiatives to improve the way psychologists conduct their research. It would be ironic if a social psychologist objected to a comparison that is based on general principles of social behavior.

I think I don’t have to mention that the United States of America prides itself on freedom of expression that even allows Nazis to publish their propaganda, which German law does not allow. In contrast, censorship was used by socialist Germany to stay in power. So, please feel free to censor my post and send me into Psychological Methods exile.

**Jeff Sherman, 2:49pm Sep 2**

Uli Schimmack I am not censoring the ideas you wish to express. I am saying that opinions expressed on this page must be expressed respectfully.

Calling this a freedom of speech issue is a red herring. Ironic, too, given that one impact of trolling and bullying is to cause others to self-censor.

I am working on a policy statement. If you find the burden unbearable, you can choose to not participate.

**Uli Schimmack, 2:53pm Sep 2**

Jeff Sherman Klaus is not even part of this. So, how am I bullying him? Plus, I don’t think Klaus is easily intimidated by my comment. And, **as a social psychologist how do you explain that Klaus doubled down when every comment pointed out that he ignores the fact that regression to the mean can only produce a decrease in the average if the original sample was selected to be above the mean**?

*This discussion led to a letter to the DGPs board by Moritz Heene that expressed outrage about the whitewashing of the replication results in their official statement.*

**From:** Moritz Heene
**To:** Andrea Abele-Brehm, Mario Gollwitzer, & Fritz Strack
**Subject:** DGPS-Stellungnahme zu Replikationsprojekt
**Date:** Wed, 02 Sep 2015

[The letter is translated from the German original.]

Dear members of the DGPs board,

First of all, thank you for your effort to make the results of the OSF replication project clearer to the public. In view of this DGPs statement, however, I would like to personally express my disagreement, since as a member of the DGPs I do not see a balanced view expressed in it in any way; on the contrary, I find it very one-sided. Rather, I regard this statement as a euphemistic account of the replication problem in psychology, to put it mildly; I am disappointed by it and had expected more.

My criticisms of your statement:

1. On the argument that 68% of the studies were replicated: The underlying test checks whether the replicated effect lies within the confidence interval around the original effect, i.e., whether the two are significantly different from each other, according to the authors’ logic. Let us generously set aside that this is not a test of the *difference* between the effect sizes, since the confidence interval is placed around the original observed effect, not around the difference. More important is that this is a poor measure of replicability, because the original effects are upward biased (as one can see in the original paper), and let us not forget publication bias (see the density distribution of the p-values in the original paper). Assuming that the original effect sizes are the population effect sizes is truly a heroic assumption, especially given the positive bias of the original effects. Incidentally: an open letter by Klaus Fiedler published on Facebook argues that regression to the mean produced the on-average smaller effect sizes in the OSF project and could explain this effect. This argument may be partly true, but it implies that the original effects were extreme (i.e., biased, because publication was selective), for that is precisely the characteristic of this regression effect: results that were extreme on a first measurement “tend” toward the mean on a second measurement. The fact that the original effects show a clear positive bias is ignored in your statement, or rather not even mentioned.

Incidentally, the 68% replicability argument is also openly criticized in a similar way by the lead author of the study in response to your statement:

In short: singling out precisely this statistic from the OSF study to tell the public that everything in psychology is basically fine is what I regard as “cherry picking” of results.

2. The moderator argument is ultimately untenable because, first, it was tested intensively, particularly in OSF project 3. The result is summarized, among other places, here:

https://hardsci.wordpress.com/2015/09/02/moderator-interpretations-of-the-reproducibility-project/

See, among others:

In Many Labs 1 and Many Labs 3 (which I reviewed here), different labs followed standardized replication protocols for a series of experiments. In principle, different experimenters, different lab settings, and different subject populations could have led to differences between lab sites. But in analyses of heterogeneity across sites, that was not the result. In ML1, some of the very large and obvious effects (like anchoring) varied a bit in just how large they were (from “kinda big” to “holy shit”). Across both projects, more modest effects were quite consistent. Nowhere was there evidence that interesting effects wink in and out of detectability for substantive reasons linked to sample or setting. A longer summary can be found here:

https://hardsci.wordpress.com/2015/03/12/an-open-review-of-many-labs-3-much-to-learn

The authors put the interpretation so well that I’ll quote them at length here [emphasis added]:

A common explanation for the challenges of replicating results across samples and settings is that there are many seen and unseen moderators that qualify the detectability of effects (Cesario, 2014). As such, when differences are observed across study administrations, it is easy to default to the assumption that it must be due to features differing between the samples and settings. Besides time of semester, we tested whether the site of data collection, and the order of administration during the study session moderated the effects. None of these had a substantial impact on any of the investigated effects. This observation is consistent with the first “Many Labs” study (Klein et al., 2014) and is the focus of the second (Klein et al., 2015). The present study provides further evidence against sample and setting differences being a default explanation for variation in replicability. That is not to deny that such variation occurs, just that direct evidence for a given effect is needed to demonstrate that it is a viable explanation.

Second, you write in your statement: “Such findings rather show that psychological processes are often context-dependent and that their generalizability must be investigated further. The replication of an American study may yield different results if it is conducted in Germany or in Italy (or vice versa). Similarly, different characteristics of the sample (gender ratio, age, educational level, etc.) can affect the result. This context dependence is not a sign of a lack of replicability, but rather a sign of the complexity of psychological phenomena and processes.”

No, that is exactly what these new findings do not show, because this is a (post-hoc) interpretation that is *not* supported by the moderators assessed in the new OSF project, since those moderator analyses were not even conducted. Moreover, the postulated context dependence was not found in OSF project #3. What was found as a source of variation between labs was simply sampling variation, just as one must expect in statistics. So I see no empirical basis at all for your claim, although such a basis really should exist in a science that calls itself empirical.

What I clearly miss as a concluding statement in your press release is that psychology (and social psychology in particular) should no longer accept selectively published and underpowered studies in the future. That would have come somewhat closer to the core of the problem.

Kind regards,

Moritz Heene

*Moritz Heene received the following response from one of the DGPs board members.*

**From: Mario Gollwitzer To: Moritz Heene Subject: Re: DGPs statement on the replication project Date: Thu, 03 Sep 2015 10:19:28 +0200**

Dear Moritz,

thank you very much for your email; it is one of many responses to our Monday press release that have reached us, and we think it is very good that it has apparently sparked a discussion within the DGPs membership. We believe this discussion should be conducted openly; we have therefore decided to set up a kind of discussion forum on our DGPs homepage for our press release (and the Science study and the whole replication project). We are currently working on building the site. I would be glad if you took part there as well, gladly with your critical stance toward our press release.

**I can well understand your arguments, and I agree with you that the number “68%” does not reflect a “replication rate.” That was a misleading statement.**

But apart from that, our goal with this press release was to add, or to oppose, something constructive to the negative, partly gloating and destructive reactions of many media outlets to the Science study. We by no means wanted to sugarcoat the results of the study or to spread a message along the lines of “all is well, business as usual”! Rather, we wanted to argue that replication attempts like these offer an opportunity for gaining knowledge, an opportunity that should be used. That is the constructive message that we would like to see represented a bit more strongly in the media.

Unlike you, however, I am convinced that it is entirely possible that the differences between an original study and its replications come about through an (unknown) set of (partly known, partly unknown) moderator variables (and their interactions). “Sampling variation,” too, is nothing but an umbrella term for such moderator effects. Some of these effects are central to gaining knowledge about a psychological phenomenon, others are not. The task is to describe and explain the central effects better. In this I also see a value of replications, especially of conceptual replications.

Apart from that, however, I fully agree with you that one cannot rule out that some of the non-replicable but published effects (not only in social psychology, by the way, but in all disciplines) are false positives, for which there is a range of highly problematic causes (selective publishing, questionable analysis practices, etc.). These matters are of course being hotly debated elsewhere. In our press release, however, we wanted to set this discussion aside for the moment and instead focus specifically on the new Science study.

Thank you again for your email. Reactions like these are an important mirror of our work.

Warm regards, Mario

After the DGPs created a discussion forum, Klaus Fiedler, Moritz Heene and I shared our exchange of views openly on this site. The website is no longer available, but Moritz Heene saved a copy. He also shared our contribution on The Winnower.

**RESPONSE TO FIEDLER’S POST ON THE REPLICATION**

We would like to address the two main arguments in Dr. Fiedler’s post on https://www.dgps.de/index.php?id=2000735

1) that the notably lower average effect sizes in the OSF project are a statistical artifact of regression to the mean,

2) that low reliability contributed to the lower effect sizes in the replication studies.

Response to 1): As noted in Heene’s previous post, Fiedler’s regression-to-the-mean argument (results that were extreme in a first assessment tend to be closer to the mean in a second assessment) implicitly assumes that the original effects were biased; that is, they are extreme estimates of population effect sizes because they were selected for publication. However, Fiedler does not mention the selection of original effects, which leads to a false interpretation of the OSF results in his commentary:

**“(2) The only necessary and sufficient condition for regression (to the mean or toward less pronounced values) is a correlation less than zero. … One can refrain from assuming that the original findings have been over-estimations.” (Fiedler)**

**It is NOT possible to avoid the assumption that original results are inflated estimates because selective publication of results is necessary to account for the notable reduction in observed effect sizes.**

**a) Fiedler is mistaken when he cites Furby (1973) as evidence that regression to the mean can occur without selection. “The only necessary and sufficient condition for regression (to the mean or toward less pronounced values) is a correlation less than zero. This was nicely explained and proven by Furby (1973)” (Fiedler). It is noteworthy that Furby (1973) explicitly builds a selection above or below the population mean into the example: “Now let us choose a certain aggression level at Time 1 (any level other than the mean)”.**

The math behind regression to the mean further illustrates this point. The expected amount of regression to the mean is defined as (1 – r)*(mu – M), where r = correlation between first and second measurement, mu: population mean, and M = mean of the selected group (sample at time 1). For example, if r = .80 (thus, less than 1 as assumed by Fiedler) and the observed mean in the selected group (M) equals the population mean (mu) (e.g., M = .40, mu = .40, and M – mu = .40 – .40 = 0), no regression to the mean will occur because (1 – .80)*(.40-.40) = .20*0 = 0. Consequently, a correlation less than 1 is not a necessary and sufficient condition for regression to the mean. The effect occurs only if the correlation is less than 1 and the sample mean differs from the population mean. [Actually the mean will decrease even if the correlation is 1, but individual scores will maintain their position relative to other scores]
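The role of selection in this formula can be illustrated with a small simulation. This is a sketch under assumed values, not the OSF data: correlated standard-normal pairs, with “selection” crudely modeled as keeping only cases whose first measurement exceeds 1 SD.

```python
import random

random.seed(1)

def simulate(select, n=100_000, r=0.8):
    """Draw n pairs (x, y) of standard-normal scores correlated r.

    If select is True, keep only cases whose first measurement exceeds 1,
    a crude stand-in for selecting extreme (significant) results.
    Returns the means of the first and second measurements in the kept group.
    """
    xs, ys = [], []
    for _ in range(n):
        t = random.gauss(0, 1)                       # shared true-score part
        x = r ** 0.5 * t + (1 - r) ** 0.5 * random.gauss(0, 1)
        y = r ** 0.5 * t + (1 - r) ** 0.5 * random.gauss(0, 1)
        if not select or x > 1:
            xs.append(x)
            ys.append(y)
    return sum(xs) / len(xs), sum(ys) / len(ys)

mx, my = simulate(select=False)   # both means stay near mu = 0: no regression
sx, sy = simulate(select=True)    # first mean is high, second regresses toward 0
```

Without selection the two means agree (both estimate mu), even though r < 1; with selection the second mean drops below the first by roughly (1 − r)*(M − mu), matching the formula above.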

b) The regression to the mean effect can be positive or negative. If M < mu and r < 1, the second observations would be higher than the first observations, and the trend towards the mean would be positive. On the other hand, if M > mu and r < 1, the regression effect is negative. In the OSF-project, the regression effect was negative, because the average effect size in the replication studies was lower than the average effect size in the original studies. This implies that the observed effects in the original studies overestimated the population effect size (M > mu), which is consistent with publication bias (and possibly p-hacking).

Thus, the lower effect sizes in the replication studies can be explained as a result of publication bias and regression to the mean. The OSF results make it possible to estimate how much publication bias inflates observed effect sizes in original studies. **We calculated that for social psychology the average effect size fell from Cohen’s d = .6 to d = .2; that is, the published estimates were inflated by 200%.** It is therefore not surprising that the replication studies produced so few significant results: the increase in sample size did not compensate for the large decrease in effect sizes.
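The power consequence of this inflation can be sketched with a normal-approximation power calculation for a two-sample comparison. The per-group n of 50 is a hypothetical illustration, not a figure from the OSF project.

```python
from math import erf, sqrt

def normal_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def power_two_sample(d, n_per_group, z_crit=1.96):
    """Approximate power of a two-sided two-sample z-test (alpha = .05)
    to detect a standardized mean difference d."""
    ncp = d * sqrt(n_per_group / 2)  # expected value of the z statistic
    return 1 - normal_cdf(z_crit - ncp) + normal_cdf(-z_crit - ncp)

p_inflated = power_two_sample(0.6, 50)  # ~.85 if the published d = .6 were true
p_actual = power_two_sample(0.2, 50)    # ~.17 at the replication estimate d = .2
```

A study planned around the inflated estimate looks well powered, yet at the deflated effect size the same design detects the effect less than one time in five.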

Response to 2):

In a regression analysis, the observed regression coefficient (b) for an observed measure with measurement error is a function of the true relationship (bT) and an inverse function of the amount of measurement error (1 – error = reliability; Rel(X)):

b = bT × Rel(X)

(Interested readers can obtain the mathematical proof from Dr. Heene).

The formula implies that, whenever the reliability of a measure is less than 1, the observed regression coefficient (and other observed effect sizes) is smaller than the true coefficient that could have been obtained with a perfectly reliable measure. As noted by Dr. Fiedler, unreliability of measures reduces the statistical power to obtain a statistically significant result. However, this statistical argument cannot explain the reduction in effect sizes in the replication studies, because unreliability has the same influence on the outcome in the original studies and in the replication studies. In short, the unreliability argument does not provide a valid explanation for the low success rate in the OSF replication project.
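The symmetry of the attenuation argument can be checked with a quick simulation. This is a sketch with assumed values (true slope .5, reliability .7; neither number comes from the studies under discussion):

```python
import random

random.seed(2)

def observed_slope(n, reliability, b_true=0.5):
    """Regress an outcome on a noisy predictor and return the fitted slope.

    The observed predictor adds error to a unit-variance true score so that
    Var(true) / Var(observed) equals the requested reliability.
    """
    err_sd = ((1 - reliability) / reliability) ** 0.5
    xs, ys = [], []
    for _ in range(n):
        t = random.gauss(0, 1)
        xs.append(t + random.gauss(0, err_sd))       # observed predictor
        ys.append(b_true * t + random.gauss(0, 1))   # outcome
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    var = sum((x - mx) ** 2 for x in xs) / n
    return cov / var

b_original = observed_slope(200_000, reliability=0.7)     # "original" study
b_replication = observed_slope(200_000, reliability=0.7)  # same measure again
# Both slopes are attenuated toward b_true * Rel(X) = .5 * .7 = .35, so
# unreliability alone predicts no drop from original to replication.
```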

REFERENCES

Furby, L. (1973). Interpreting regression toward the mean in developmental research. Developmental Psychology, 8(2), 172-179. doi:10.1037/h0034145

*On September 5, Klaus Fiedler emailed me to start a personal discussion over email.*

**From: klaus.fiedler [klaus.fiedler@psychologie.uni-heidelberg.de] Sent: September-05-15 7:17 AM To: Uli Schimmack; kf@psychologie.uni-heidelberg.de Subject: iscon gossip**

Dear Uli … auf Deutsch … lieber Uli,

You may know that I am not registered on Facebook, but others occasionally send me notes from the chat. You are the only one I am writing to briefly. You had written that my comments were wrong and that I therefore no longer deserve any respect.

You are a methodologically motivated and accomplished colleague, and I would therefore be very grateful if you could tell me in what way my points are incorrect. What is wrong:

— that the regression trap exists?

— that a state-of-the-art study of the form Retest = f(Test) must control for regression?

— that regression is a function of reliability?

— that a high participant N alone in no way fixes this problem?

— that a missing manipulation check undermines the central premise that the independent variable was induced at all?

— that a failure to control for measurement + sampling error undermines the interpretation of the results?

Or is the point that scientific scrutiny no longer counts when “young people” are fighting for a “good cause”?

Sorry, the last question drifts a bit into polemics. It was not meant that way. **I really want to know why I am wrong; then I would also be glad to set the record straight.** After all, I did not claim to have empirical data that would illuminate the comparison between cognitive and social psychology (although it is true that this comparison can only be made if one controls for the reliability and the effectiveness of the manipulations). What motivates me is simply the goal that meta-science too (and meta-science in particular) be subject to the same strict standards as the research it evaluates (and often carelessly damages).

As far as social psychology is concerned, you have surely noticed that I am one of its critics as well … Perhaps we can talk about that some time …

Best regards from Heidelberg, Klaus

*I responded to this email and asked him directly to comment on selection bias as a reasonable explanation for the low replicability of social psychology results.*

Dear Klaus Fiedler,

Moritz Heene and I have written a response to your comments posted on the DGPS website, which is waiting for moderation.

I cc Moritz so that he can send you the response (in German), but I will try to answer your question myself.

First, I don’t think it was good that Mickey posted your comments. I think it would have been better to communicate directly with you and have a chance to discuss these issues in an exchange of arguments. It is also unfortunate that I mixed my response to the official DGPs statement with your comments. I see some similarities, but you expressed a personal opinion and did not use the authority of an official position to speak for all psychologists; indeed, many psychologists disagree with the statement, which led to the post-hoc creation of a discussion forum to find out about members’ opinions on this issue.

Now let me answer your question. First, I would like to clarify that we are trying to answer the same question. To me the most important question is why the reproducibility of published results in psychology journals is so low (it is only 8% for social psychology; see my post https://replicationindex.wordpress.com/2015/08/26/predictions-about-replication-success-in-osf-reproducibility-project/ ).

One answer to this question is publication bias. This argument has been made since Sterling (1959). Cohen (1962) estimated the replication rate at 60% based on his analysis of typical effect sizes and sample sizes in the Journal of Abnormal and Social Psychology (now JPSP). The 60% estimate was replicated by Sedlmeier and Gigerenzer (1989). So, with this figure in mind, we could have expected that 60 out of 100 randomly selected results in JPSP would replicate. However, the actual success rate for JPSP is much lower. How can we explain this?

For the past five years I have been working on a better method to estimate post-hoc power, starting with my Schimmack (2012) Psychological Methods paper, followed by publications on my R-Index website. Similar work has been conducted by Simonsohn (p-curve) and Wicherts (p-uniform). The problem with the 60% estimate is that it uses reported effect sizes, which are inflated. After correcting for inflation, the estimated power for the social psychology studies in the OSF project is only 35%. This still does not explain why only 8% were replicated, and I think it is an interesting question how much moderators or mistakes in the replication studies explain this discrepancy. However, a replication rate as low as 35% is entirely predicted from the published results after taking power and publication bias into account.

**In sum, it is well established that selection of significant results distorts the evidence in the published literature and that this creates a discrepancy between the published success rate (95%) and the replication rate (let’s say less than 50%, to be conservative). I would be surprised if you disagreed with my argument that (a) publication bias is present and (b) publication bias at least partially contributes to the low rate of successful replications in the OSF project.**

A few days later, I sent a reminder email.

Dear Klaus Fiedler,

I hope you received my email from Saturday in reply to your email “iscon gossip”. It would be nice if you could confirm that you received it and let me know whether you are planning to respond to it.

Best regards,

Uli Schimmack

*Klaus Fiedler responds without answering my question about the fact that regression to the mean can only explain a decrease in the mean effect sizes if the original values were inflated by selection for significance.*

Hi:

as soon as my time permits, I will have a look. Just a general remark in response to your email: I do not understand which argument applies to my critical evaluation of the Nosek report. What you are telling me in the email does not apply to my critique.

Or do you contest that

- a state-of-the-art study of retest = f(original test) has to tackle the regression beast
- reliability of the dependent measure has to be controlled
- manipulation check is crucial to assess the effective variation of the independent variable
- the sampling of studies was suboptimal

If you disagree, I wonder if there is any common ground in scientific methodology.

I am not sure if I want to contribute to Facebook debates … As you can see, the distance from a scientific argument to personal attacks is so short that I do not believe in the value of such a forum.

Kind regards, Klaus

P.S. If I have a chance to read what you have posted, I may send a reply to the DGPs. By the way, I just sent my comments to Andrea Abele Brehm.

I did not ask her to publicize it. But that’s OK

*As in a chess game, I am pressing my advantage – Klaus Fiedler is clearly alone and wrong with his immaculate regression argument – in a follow-up email.*

Dear Klaus Fiedler,

I am waiting for a longer response from you, but to answer your question: I find it hard to see how my comments are irrelevant, as they challenge direct quotes from your response.

**My main concern is that you appear to neglect the fact that regression to the mean can only occur when selection occurred in the original set of studies.**

**Moritz Heene and I responded to this claim and find that it is invalid. If the original studies were not a selected set of studies, the average observed effect size should be an estimate of the average population effect size, and there would be no reason to expect a dramatic decrease in effect sizes in the OSF replication studies. Let’s just focus on this crucial point.**

**You can either maintain that selection is not necessary and try to explain how regression to the mean can occur without selection or you can concede that selection is necessary and explain how the OSF replication study should have taken selection into account. At a minimum, it would be interesting to hear your response to our quote of Furby (1973) that shows he assumed selection, while you cite Furby as evidence that selection is not necessary.**

Although we may not be able to settle all disputes, we should be able to determine whether Furby assumed selection or not.

Here are my specific responses to your questions.

– a state-of-the-art study of retest = f(original test) has to tackle the regression beast [we can say that it tackled it by examining how much selection contributed to the original results, that is, by seeing how much the means regressed towards a lower mean of population effect sizes]

**Result: there was a lot of selection and a lot of regression.**

– reliability of the dependent measure has to be controlled

**in a project that aims to replicate original studies exactly, reliability is determined by the methods of the original study**

– manipulation check is crucial to assess the effective variation of the independent variable

**Sure, we can question how good the replication studies were, but adding manipulation checks might also introduce concerns that the study is not an exact replication. Nobody is claiming that the replication studies are conclusive, but no study can assure that it was a perfect study.**

– the sampling of studies was suboptimal

**How so? The year was selected at random. Taking the first studies in a year was also random. Moreover, it is possible to examine whether the results are representative of other studies in the same journals, and they are; see my blog.**

**You may decide that my responses are not satisfactory, but I would hope that you answer at least one of my questions: Do you maintain that the OSF-results could have been obtained without selection of results that overestimate the true population effect sizes (a lot)?**

Sincerely,

Uli Schimmack

*Moritz Heene comments.*

Thanks, Uli! Don’t let them get away with tactically ignoring these facts.

BTW, since we share the same scientific rigor, as far as I can see, we could ponder a possible collaboration study. Just an idea. [This led to the statistical examination of Kahneman’s book Thinking, Fast and Slow.]

Regards, Moritz

*Too busy to really think about the possibility that he might have been wrong, Fiedler sends a terse response.*

Klaus Fiedler

Very briefly … in a mad rush this morning: **This is not true. A necessary and sufficient condition for regression is r < 1. So if the correlation between the original results and the replications is less than unity, there will be regression. Draw a scatter plot and you will easily see. An appropriate reference is Furby (1973 or 1974).**

*I try to clarify the issue in another attempt.*

Dear Klaus Fiedler,

The question is what you mean by regression. We are talking about the mean at time 1 and time 2.

Of course, there will be regression of individual scores, but we are interested in the mean effect size in social psychology (which also determines power and percentage of significant results given equal N).

It is simply NOT true that the mean will change systematically unless there is systematic selection of observations.

As regression to the mean is defined by (1 − r)*(mu − M), the formula implies that a selection effect (mu − M ≠ 0) is necessary. Otherwise the whole term becomes 0.

There are three ways to explain mean differences between two sets of exact replication studies:

1. The original set was selected to produce significant results.
2. The replication studies were crappy and failed to reproduce the same conditions.
3. Random sampling error (which can be excluded because the difference in the OSF project is highly significant).

In the case of the OSF replication studies, selection occurred because the published results were selected to be significant from a larger set of results with non-significant results.

If you see another explanation, it would be really helpful if you would elaborate on your theory.

Sincerely,

Uli Schimmack

*Moritz Heene joins the email exchange and makes a clear case that Fiedler’s claims are statistically wrong.*

Dear Klaus Fiedler, dear Uli,

Just to add another clarification:

**Once again, Furby (1973, p.173, see attached file) explicitly mentioned selection: “Now let us choose a certain aggression level at Time 1 (any level other than the mean) and call it x’ “.**

Furthermore, regression to the mean is defined by (1- r)*(mu – M). See Shepard and Finison (1983, p.308, eq. [1]): “The term in square brackets, the product of two factors, is the estimated reduction in BP [blood pressure] due to regression.”

Now let us fix terms:

Definition of necessity and sufficiency

Necessity:

~p –> ~q , with “~” denoting negation

So, if r is *not* smaller than 1, then regression to the mean does *not* occur.

This is true as can be verified by the formula.

Sufficiency:

p –> q

So, if r is smaller than 1, then regression to the mean does occur. This is **not** true, as can be verified by the formula, as explained in our reply on https://www.dgps.de/index.php?id=2000735#c2001225 and in Ulrich’s previous email.

Sincerely,

Moritz Heene
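Heene’s necessity/sufficiency point can be verified mechanically from the definition (function and variable names here are mine, for illustration):

```python
def expected_regression(r, mu, m):
    """Expected regression toward the mean, (1 - r) * (mu - m), for a group
    with observed mean m, population mean mu, and retest correlation r."""
    return (1 - r) * (mu - m)

# r < 1 but no selection (m == mu): no regression, so r < 1 is not sufficient.
no_selection = expected_regression(r=0.8, mu=0.40, m=0.40)

# r < 1 and an inflated sample mean (m > mu): regression is negative,
# i.e., the retest mean is expected to drop by about .08.
with_selection = expected_regression(r=0.8, mu=0.20, m=0.60)
```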

*I sent another email to Klaus to see whether he was going to respond.*

Dear Dr. Fiedler,

May I still expect an answer from you, or should I assume that you have decided not to respond to my inquiry?

Best, Uli Schimmack

*Klaus Fiedler does respond.*

Dear Ullrich:

Yes, I was indeed very, very busy over two weeks, working for the Humboldt foundation, for two conferences where I had to play leading roles, the Leopoldina Academy, and many other urgent jobs. Sorry but this is simply so.

I now received your email reminder to send you my comments to what you and Moritz Heene have written. However, it looks like you have already committed yourself publicly (I was sent this by colleagues who are busy on facebook):

Fiedler was quick to criticize the OSF-project and Brian Nosek for making the mistake to ignore the well-known regression to the mean effect. This silly argument ignores that regression to the mean requires that the initial scores are selected, which is exactly the point of the OSF-replication studies.

Look, this passage shows that there is apparently a deep misunderstanding about the “silly argument”. Let me briefly try to explain once more what my critique of the Science article (not Brian Nosek personally – this is not my style) referred to.

At the statistical level, I was simply presupposing that there is common ground on the premise that regressiveness is ubiquitous; it is not contingent on selected initial scores. Take a scatter plot of 100 bivariate points (jointly distributed in X and Y). If r(X,Y) < 1 (disregarding sign), regressing Y on X will result in a regression slope less than 1. The variance of predicted Y scores will be reduced. I very much hope we all agree that this holds for every correlation, not just those in which X is selected. If you don’t believe it, I can easily demonstrate it with random (i.e., non-selective) vectors x and y.

Across the entire set of data pairs, large values of X will be underestimated in Y, and small values of X will be overestimated. By analogy, large original findings can be expected to be much smaller in the replication. However, when we regress X on Y, we can also expect to see that large Y scores (i.e., strong replication effects) have been weaker in the original. The Bayes factors reported by Alexander Etz in his “Bayesian reproducibility project”, although not explicit about reverse regression, strongly suggest that there are indeed quite a few cases in which replication results have been stronger than the original ones. Etz’ analysis, which nicely illustrates how a much more informative and scientifically better analysis than the one provided by Nosek might look, also reinforces my point that the report published in Science is very weak. By the way, his conclusions are markedly different from Nosek’s, showing that most replication studies were equivocal. The link (that you have certainly found yourself) is provided below.

We have known since Rulon (1941 or so) and even since Galton (1886 or so) that regression is a tricky thing, and here I get to the normative (as opposed to the statistical, tautological) point of my critique, which is based on the recommendations of such people as Don Campbell, Daniel Kahneman & Amos Tversky, Ido Erev, Tom Wallsten & David Budescu and many others, who have made it clear that the interpretation of retesting or replication studies will be premature and often mistaken if one does not take the vicissitudes of regression into account. A very nice historical example is Erev, Wallsten & Budescu’s 1994 Psychological Review article on overconfidence. They make it clear that you find very strong evidence for both overconfidence and underconfidence in the same data array when you regress either accuracy on confidence or confidence on accuracy, respectively. Another wonderful demonstration is Moore and Small’s 2008 Psychological Review analysis of several types of self-serving biases.

So, while my statistical point is analytically true (because regression slope with a single predictor is always < 1; I know there can be suppressor effects with slopes > 1 in multiple regression), my normative point is also well motivated. I wonder if the audience of your Internet allusion to my “silly argument” has a sufficient understanding of the “regression trap” so that, as you write:

Everybody can make up their own mind and decide where they want to stand, but the choices are pretty clear. You can follow Fiedler, Strack, Baumeister, Gilbert, Bargh and continue with business as usual or you can change. History will tell what the right choice will be.

By the way, why do you put me in the same pigeonhole as Fritz, Roy, Dan, and John? The role I am playing is completely different, and it definitely does not aim at business as usual. My comment on the Nosek article is driven by my deep concerns about the lack of scientific scrutiny in such a prominent journal, in which there is apparently no state-of-the-art quality control. A replication project is the canonical case of a scientific interpretation that strongly calls for awareness of the regression trap. That is, the results are only informative if one takes into account what shrinkage of strong effects could be expected by regression alone. Regressiveness imposes an upper limit on the possible replication success, which ought to be considered as a baseline for the presentation of the replication results.

To do that, it is essential to control for reliability. (I know that the reliability of individual scores within a study is not the same as the reliability of the aggregate study results, but they are of course related). I also continue to believe, strongly, that a good replication project ought to control for the successful induction of the independent variable, as evident in a manipulation check (maybe in an extra group), and that the sampling of the 100 studies itself was suboptimal. If Brian Nosek (or others) come up with a convincing interpretation of this replication project, then it is fine. However, the present analysis is definitely not convincing. It is rather a symptom of shallow science.

So, as you can see, the comments that you and Moritz Heene have sent me do not really affect these considerations. And, because there is obviously no common ground between the two of us, not even about the simplest statistical constraints, I have decided not to engage in a public debate with you. I’m afraid hardly anybody in this Facebook cycle will really invest time and work to read the literature necessary to judge the consequences of the regression trap, in order to make an informed judgment. And I do not want to nourish the malicious joy of an audience that apparently likes personal insults and attacks, detached from scientific arguments.

Kind regards, Klaus

P.S. As you can see, I CC this email to myself and to Joachim Krueger, who spontaneously sent me a similar note on the Nosek article and the regression trap.

http://scholarlycommons.law.northwestern.edu/cgi/viewcontent.cgi?article=7482&context=jclc

On 9/18/2015 at 3:21 PM, Ulrich Schimmack wrote:

Dear Dr. Fiedler,

May I still expect a reply from you, or should I assume that you have decided not to respond to my inquiry?

Best, Uli Schimmack

*Klaus Fiedler responds*

Dear Ullrich:

Yes, I was indeed very, very busy over two weeks, working for the Humboldt foundation, for two conferences where I had to play leading roles, the Leopoldina Academy, and many other urgent jobs. Sorry but this is simply so.

I now received your email reminder to send you my comments on what you and Moritz Heene have written. However, it looks like you have already committed yourself publicly (I was sent this by colleagues who are busy on Facebook):

Fiedler was quick to criticize the OSF-project and Brian Nosek for making the mistake to ignore the well-known regression to the mean effect. This silly argument ignores that regression to the mean requires that the initial scores are selected, which is exactly the point of the OSF-replication studies.

Look, this passage shows that there is apparently a deep misunderstanding about the “silly argument”. Let me briefly try to explain once more what my critique of the Science article (not Brian Nosek personally – this is not my style) referred to.

At the statistical level, I was simply presupposing that there is common ground on the premise that regressiveness is ubiquitous; **it is not contingent on selected initial scores.** Take a scatter plot of 100 bi-variate points (jointly distributed in X and Y). If r(X,Y) < 1 (disregarding sign), regressing Y on X will result in a regression slope less than 1. The variance of predicted Y scores will be reduced. I very much hope we all agree that this holds for every correlation, not just those in which X is selected. If you don’t believe it, I can easily demonstrate it with random (i.e., non-selective) vectors x and y.

Across the entire set of data pairs, large values of X will be underestimated in Y, and small values of X will be overestimated. By analogy, large original findings can be expected to be much smaller in the replication. However, when we regress X on Y, we can also expect to see that large Y scores (i.e., strong replication effects) have been weaker in the original. The Bayes factors reported by Alexander Etz in his “Bayesian reproducibility project”, although not explicit about reverse regression, strongly suggest that there are indeed quite a few cases in which replication results have been stronger than the original ones. **Etz’ analysis, which nicely illustrates what a much more informative and scientifically better analysis than the one provided by Nosek might look like, also reinforces my point that the report published in Science is very weak.** By the way, the conclusions are markedly different from Nosek’s, showing that most replication studies were equivocal. The link (that you have certainly found yourself) is provided below.

We have known since Rulon (1941 or so) and even since Galton (1886 or so) that regression is a tricky thing, and here I get to the normative (as opposed to the statistical, tautological) point of my critique, which is based on the recommendation of such people as Don Campbell, Daniel Kahneman & Amos Tversky, Ido Erev, Tom Wallsten & David Budescu and many others, who have made it clear that the interpretation of retesting or replication studies will be premature and often mistaken if one does not take the vicissitudes of regression into account. A very nice historical example is Erev, Wallsten & Budescu’s 1994 Psych. Review article on overconfidence. They make it clear that you can find very strong evidence for both overconfidence and underconfidence in the same data array, when you regress either accuracy on confidence or confidence on accuracy, respectively. Another wonderful demonstration is Moore and Small’s 2008 Psych. Review analysis of several types of self-serving biases.

So, while my statistical point is analytically true (because regression slope with a single predictor is always < 1; I know there can be suppressor effects with slopes > 1 in multiple regression), my normative point is also well motivated. **I wonder if the audience of your Internet allusion to my “silly argument” has a sufficient understanding of the “regression trap” so that, as you write:**

**Everybody can make up their own mind and decide where they want to stand, but the choices are pretty clear. You can follow Fiedler, Strack, Baumeister, Gilbert, Bargh and continue with business as usual or you can change. History will tell what the right choice will be.**

By the way, why do you put me in the same pigeonhole as Fritz, Roy, Dan, and John? The role I am playing is completely different, and it definitely does not aim at business as usual. My very comment on the Nosek article is driven by my deep concerns about the lack of scientific scrutiny in such a prominent journal, in which there is apparently no state-of-the-art quality control. A replication project is the canonical case of a scientific interpretation that strongly calls for awareness of the regression trap. **That is, the results are only informative if one takes into account what shrinkage of strong effects could be expected by regression alone. Regressiveness imposes an upper limit on the possible replication success, which ought to be considered as a baseline for the presentation of the replication results.**

To do that, it is essential to control for reliability. (I know that the reliability of individual scores within a study is not the same as the reliability of the aggregate study results, but they are of course related). I also continue to believe, strongly, that a good replication project ought to control for the successful induction of the independent variable, as evident in a manipulation check (maybe in an extra group), and that the sampling of the 100 studies itself was suboptimal. If Brian Nosek (or others) come up with a convincing interpretation of this replication project, then it is fine. **However, the present analysis is definitely not convincing. It is rather a symptom of shallow science.**

**So, as you can see, the comments that you and Moritz Heene have sent me do not really affect these considerations.** And, because there is obviously no common ground between the two of us, not even about the simplest statistical constraints, I have decided not to engage in a public debate with you. **I’m afraid hardly anybody in this Facebook circle will really invest the time and work to read the literature necessary to judge the consequences of the regression trap** in order to make an informed judgment. And I do not want to nourish the malicious joy of an audience that apparently likes personal insults and attacks, **detached from scientific arguments.**

Kind regards, Klaus

P.S. As you can see, I CC this email to myself and to Joachim Krueger, who spontaneously sent me a similar note on the Nosek article and the regression trap.

*I made another attempt to talk about selection bias and ended up asking pretty much a simple yes/no question, like a prosecutor questioning a hostile witness.*

Dear Klaus,

**I don’t understand why we cannot even agree about the question that regression to the mean is supposed to answer.**

**Moritz Heene and I are talking about the mean difference in effect sizes (the intercept, not the slope, in a regression).** According to the Science article, the effect sizes in the replication studies were, **on average, 50% lower than the effect sizes in the original studies.** My own analysis **for social psychology shows a difference of d = .6 versus d = .2, which suggests that effect sizes published in original articles are inflated by 200%.** **Do you believe that regression to the mean can explain this finding?** Again, this is not a question about the slope, so please try to provide an explanation that can account for mean differences in effect sizes.

Of course, you can just say that we know that a published significant result is inflated by publication bias. After all, power is never 100%, so if you select only significant results for publication, you cannot expect 100% successful replications. The percentage that you can expect is determined by the true power of the set of studies (this has nothing to do with regression to the mean; it is simply power + publication bias). However, the OSF-reproducibility project did take power into account and increased sample sizes to account for the problem. They are also aware that the replication studies will not produce 100% successes if the replication studies were planned with 90% power.

The problem that I see with the OSF-project is that they were naïve to use the observed effect sizes to conduct their power analyses. As these effect sizes were strongly inflated by publication bias, the true power was much lower than they thought it would be. For social psychology, I calculated the true power of the original studies to be only 35%. Increasing sample sizes from 90 to 120 does not make much of a difference with power this low. If your point is simply to say that the replication studies were underpowered to reject the null hypothesis, I agree with you. But the reason for the low power is that reported results in the literature are not credible and strongly influenced by bias. Published effect sizes in social psychology are, on average, 1/3 real and 2/3 bias. Good luck finding the false positive results with evidence like this.

**Do you disagree with any of my arguments about power, publication bias, and the implication that social psychological results lack credibility?**

Best regards,

Uli
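The power argument in this email is easy to verify with a back-of-the-envelope calculation. The sketch below uses a normal approximation to the two-sample significance test; reading 90 and 120 as per-group sample sizes is an assumption on my part, not a detail from the email:

```python
# A rough sanity check of the power argument above. Normal approximation to
# the two-sample test, alpha = .05 two-sided; treating 90 and 120 as
# per-group sample sizes is an illustrative assumption.
from statistics import NormalDist

def approx_power(d, n_per_group, alpha=0.05):
    """Approximate power to detect a true effect size d with n per group."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    noncentrality = d * (n_per_group / 2) ** 0.5
    return NormalDist().cdf(noncentrality - z_crit)

for n in (90, 120):
    print(n, round(approx_power(0.2, n), 2))
# with a true d of .2, power stays in the .25-.35 range either way:
# raising n from 90 to 120 barely helps
```

With a true effect of d = .2, both sample sizes leave power well below 50%, which is exactly the point about replications planned on inflated effect size estimates.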

*Klaus Fiedler’s response continues to evade the topic of selection bias, which undermines the credibility of published results (a replication rate of 25%), but he acknowledges for the first time that regression works in both directions and cannot explain mean changes without selection bias.*

Dear Uli, Moritz and Krueger:

I’m afraid it’s getting very basic now … we are talking about problems which are not really there … very briefly, just for the sake of politeness

First, as already clarified in my letter to Uli yesterday, **nobody will come to doubt that every correlation < 1 will produce regression in both directions.** The scatter plot does not have to be somehow selected. Let’s talk about (or simulate) a bi-variate random sample. Given r < 1, if you plot Y as a function of X (i.e., “given” X values), the regression curve will have a slope < 1, that is, Y values corresponding to high X values will be smaller and Y values corresponding to low X values will be higher. In one word, the variance in Y predictions (in what can be expected in Y) will shrink. If you regress X on Y, the opposite will be the case in the same data set. That’s the truism that I am referring to.

**Of course, regression is always a conditional phenomenon**. Assuming a regression of Y on X: If X is (very) high, the predicted Y analogue is (much) lower. If X is (very) low, the predicted Y analogue is (much) higher. But this conditional IF phrase does not imply any selectivity. The entire sample is drawn randomly. By plotting Y as a function of given X levels (contaminated with error and unreliability), you conditionalize Y values on (too) high or (too) low X values. But this is always the case with regression.

If I correctly understand the point, you simply equate the term “selective” with “conditional on” or “given”. But all this is common sense, or isn’t it? If you believe you have found a mathematical or Monte-Carlo proof that a correlation (in a bivariate distribution) is 1 and there is no regression (in the scatter plot), then you can probably make a very surprising contribution to statistics and numerical mathematics.

Of course, regression is a multiplicative function of unreliability and extremity. So points have to be extreme to be regressive. But I am talking about the entire distribution …

Best, Klaus

… who is now going back to work, sorry.
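Both sides of this exchange can be checked in a few lines. With standardized variables and r < 1, the slope is indeed below 1 in both regression directions (Fiedler’s truism), but the means of X and Y do not differ unless pairs are selected, for example on large initial values, which is the selection-bias point at issue. A minimal stdlib sketch with an illustrative r of .5:

```python
# Simulate 100,000 standardized (x, y) pairs with a true correlation of .5.
import random
from statistics import mean, pstdev

random.seed(1)
n, r = 100_000, 0.5
x = [random.gauss(0, 1) for _ in range(n)]
y = [r * xi + (1 - r**2) ** 0.5 * random.gauss(0, 1) for xi in x]

def slope(a, b):
    """OLS slope of b regressed on a."""
    ma, mb = mean(a), mean(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)
    return cov / pstdev(a) ** 2

# Regression in both directions, no selection required (Fiedler's truism):
print(round(slope(x, y), 2), round(slope(y, x), 2))  # both ~ .5, i.e., < 1

# But regression alone does NOT shrink the mean:
print(round(mean(y) - mean(x), 2))  # ~ 0

# Select "published" pairs the way journals do (only large x), and the
# mean of y drops below the mean of the selected x values:
sel = [(xi, yi) for xi, yi in zip(x, y) if xi > 1.0]
print(mean(yi for _, yi in sel) < mean(xi for xi, _ in sel))  # True
```

In other words, the slope truism and the mean-shrinkage question are different questions, and only selection on the original results can produce the average drop observed in the reproducibility project.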

*At this point, Moritz Heene is willing to let it go. There is really no point in arguing with a dickhead – a slightly wrong translation of the German term “Dickkopf” (bull-headed, stubborn).*

Dear Uli,

Sorry, quickly in German:

Given Fiedler’s email below, I see any further discussion as a “fruitless endeavour”. He does not engage with our – formally correct! – arguments at all, and by now he has arrived at “You are not even worth debating with”.

He also does not think it worth mentioning that he demonstrably miscites Ferby (1973). I am not going to discuss this with him any further, because he simply refuses to see it and therefore no longer even mentions our mathematically correct arguments (tactical ignorance).

One of the big problems of psychology is that its problems are dreadfully basic to refute. For example, the “hidden moderator” argument can still be refuted at the pub with a blood alcohol level of 1.3 per mille. Unfortunately, it keeps reappearing in articles by Strack and Stroebe and others.

*I agreed with him and decided to write a blog post about this fruitless discussion. I didn’t get around to it until now, when the PoPS scandal reminded me of Fiedler’s “I am never wrong” attitude.*

Hello Moritz,

Yes, the discussion is over.

Now I will write a blog post with the emails to show what kind of spiteful (“schadenfeinig” – ? is that really a word) arguments are being used.

Zero respect for Klaus Fiedler.

Best, Uli

*I communicated our decision to end the discussion to Klaus Fiedler in a final email.*

Dear Klaus,

Last email from me to you.

It is sad that you are not even trying to answer my questions about the results of the reproducibility project.

I am also going back to work now, where my work is to save psychology from psychologists like you, who continue to deny that psychology has been facing a crisis for 50 years, make some quick bogus statistical arguments to undermine the credibility of the OSF-reproducibility project, and then go back to work as usual.

History will decide who wins this argument.

Disappointed (which implies that I had expected more from you when I started this attempt at a scientific discussion), Uli

Klaus Fiedler replied with his last email.

Dear Uli:

no, sorry, that is not my intention … and not my position. I would like to share with you my thoughts about reproducibility … and I am not at all happy with the (kernel of truth) of the Nosek report. However, **I believe the problems are quite different from those in focus in the current debate, and from the premature consequences drawn by Nosek, Simonsohn, and others.** You may have noticed that I have published a number of relevant articles, arguing that **what we are lacking is not better statistics and larger subject samples** but a broader and better methodology. Why should we two (including Moritz and Joachim and others) not share our thoughts, and I would also be willing to read your papers. Sure. For the moment, we have been only debating my critique of the Nosek report. My point was that in such a report of replications plotted against originals, **an informed interpretation is not possible unless one takes regression into account:**

- one has to control for reliability as a crucial moderator
- one has to consider manipulation checks
- one has to contemplate the sampling of studies

Our “debate” about 2+2=4 (I agree that’s what it was) does not affect this critique. I do not believe that I am at variance with your mathematical sketch, but **it does not undo the fact that in a distribution of 100 bivariate points, the devil is lurking in the regression trap.**

So please distinguish between the two points: (a) **Nosek’s report does not live up to appropriate standards;** but (b) I am not unwilling to share with you my thoughts about replicability. (By the way, I met Ioannidis some weeks ago, and I never saw as clearly as now that he, like Fanelli, whom I also met, believes that all behavioral science is unreliable and invalid.)

Kind regards, Klaus

## More Gaslighting about the Replication Crisis by Klaus Fiedler

Klaus Fiedler and Norbert Schwarz are both German-born, influential social psychologists. Norbert Schwarz migrated to the United States but continued to collaborate with German social psychologists like Fritz Strack. Klaus Fiedler and Norbert Schwarz have only one peer-reviewed joint publication, titled “Questionable Research Practices Revisited”. This article is a response to John, Loewenstein, & Prelec’s (2012) influential article that coined the term “questionable research practices” (QRPs). In the original article, John et al. (2012) conducted a survey and found that many researchers admitted that they had used QRPs and also considered these practices acceptable (i.e., not a violation of ethical norms about scientific integrity). John et al.’s (2012) results provide a simple explanation for the outcome of the reproducibility project. Researchers use QRPs to get statistically significant results in studies with low statistical power. This leads to an inflation of effect sizes. When these studies are replicated WITHOUT QRPs, effect sizes are closer to the real effect sizes and lower than the inflated estimates in the original studies. As a result, the average effect size shrinks and the percentage of significant results decreases. All of this was clear when Moritz Heene and I debated with Fiedler.
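This selection mechanism is easy to simulate. The parameters below (a true effect of d = .2, n = 20 per group, significance filtering as the only questionable practice) are illustrative choices of mine, not numbers from any of the cited articles:

```python
# Run many low-powered studies of a true d = .2 effect, "publish" only the
# significant ones, then replicate the published record without selection.
import random
from statistics import mean

random.seed(2)

def one_study(true_d=0.2, n=20):
    """Return (observed effect, significant?) for a two-group study."""
    g1 = [random.gauss(true_d, 1) for _ in range(n)]
    g2 = [random.gauss(0, 1) for _ in range(n)]
    d_obs = mean(g1) - mean(g2)        # SDs are 1 by construction
    z = d_obs * (n / 2) ** 0.5
    return d_obs, z > 1.96             # two-sided .05, positive direction

studies = [one_study() for _ in range(20_000)]
published = [d for d, sig in studies if sig]

print(round(mean(published), 2))       # far above the true d of .2
replicated = [one_study()[0] for _ in published]
print(round(mean(replicated), 2))      # back near .2: the shrinkage is bias
```

Selection for significance alone inflates the published average to several times the true effect, and unbiased replications shrink right back to it, which is the pattern the reproducibility project observed.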

Fiedler and Schwarz’s article had one purpose, namely to argue that John et al.’s (2012) article did not provide credible evidence for the use of QRPs. The article does not make any connection between the use of QRPs and the outcome of the reproducibility project.

**“The resulting prevalence estimates are lower by order of magnitudes. We conclude that inflated prevalence estimates, due to problematic interpretation of survey data, can create a descriptive norm (QRP is normal) that can counteract the injunctive norm to minimize QRPs and unwantedly damage the image of behavioral sciences, which are essential to dealing with many societal problems” (Fiedler & Schwarz, 2016, p. 45).**

Indeed, the article has been cited to claim that “questionable research practices” are not always questionable and that **“QRPs may be perfectly acceptable given a suitable context and verifiable justification” (Fiedler & Schwarz, 2016; …)** (Rubin & Donkin, 2022).

To be clear about what this means: Rubin and Donkin claim that it is perfectly acceptable to run multiple studies and publish only those that worked, to drop observations to increase effect sizes, and to switch outcome variables after looking at the results. No student will agree that these practices are scientific or trust results based on such practices. However, Fiedler and other social psychologists want to believe that they did nothing wrong when they engaged in these practices to publish.

## Fiedler triples down on Immaculate Regression

I assumed everybody had moved on from the heated debates in the wake of the reproducibility project, but I was wrong. Only a week ago, I discovered an article by Klaus Fiedler, co-authored with one of his students, that repeats the regression-trap claims in an English-language, peer-reviewed journal under the title “The Regression Trap and Other Pitfalls of Replication Science—Illustrated by the Report of the Open Science Collaboration” (Fiedler & Prager, 2018).

ABSTRACT: **The Open Science Collaboration’s 2015 report suggests that replication effect sizes in psychology are modest. However, closer inspection reveals serious problems.**

A more general aim of our critical note, beyond the evaluation of the OSC report, is to emphasize the need to enhance the methodology of **the current wave of simplistic replication science**.

**Moreover, there is little evidence for an interpretation in terms of insufficient statistical power.**

**Again, it is sufficient to assume a random variable of positive and negative deviations (from the overall mean) in different study domains or ecologies, analogous to deviations of high and low individual IQ scores. One need not attribute such deviations to “biased” or unfair measurement procedures, questionable practices, or researcher expectancies.**

**Yet, when concentrating on a domain with positive deviation scores (like gifted students), it is permissible—though misleading and unfortunate—to refer to a “positive bias” in a technical sense, to denote the domain-specific enhancement.**

**Depending on the selectivity and one- sided distribution of deviation scores in all these domains, domain-specific regression effects can be expected.**

**How about the domain of replication science? Just as psychopathology research produces overall upward regression, such that patients starting in a crisis or a period of severe suffering (typically a necessity for psychiatric diagnoses) are better off in a retest, even without therapy (Campbell, 1996), research on scientific findings must be subject to an opposite, downward regression effect. Unlike patients representing negative deviations from normality, scientific studies published in highly selective journals constitute a domain of positive deviations, of well-done empirical demonstrations that have undergone multiple checks on validity and a very strict review process. In other words, the domain of replication science, major empirical findings, is inherently selective. It represents a selection of the most convincing demonstrations of obtained effect sizes that should exceed most everyday empirical observations. Note once more that the emphasis here is not on invalid effects or outliers but on valid and impressive effects, which are, however, naturally contaminated with overestimation error (cf. Figure 2).**

**The domain-specific overestimation that characterizes all science is by no means caused by publication bias alone.** [!!!! The addition of “alone” here is the first implicit acknowledgement that publication bias contributes to the regression effect !!!!]

**To summarize, it is a moot point to speculate about the reasons for more or less successful replications as long as no evidence is available about the reliability of measures and the effectiveness of manipulations.**

**In the absence of any information about the internal and external validity (Campbell, 1957) of both studies, there is no logical justification to attribute failed replications to the weakness of scientific hypotheses or to engage in speculations about predictors of replication success.**

**A recent simulation study by Stanley and Spence (2014) highlights this point, showing that measurement error and sampling error alone (Schmidt, 2010) can greatly reduce the replication success of empirical tests of correct hypotheses in studies that are not underpowered.**

**Our critical comments on the OSC report highlight the conclusion that the development of such a methodology is sorely needed.**
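For readers who want to see what Fiedler’s recurring reliability point amounts to: it is the classical Spearman attenuation formula, by which unreliability in both measures caps the observable correlation. The reliability values below are purely illustrative:

```python
def attenuated_r(true_r, rel_x, rel_y):
    """Observed correlation implied by a true correlation and the two
    measures' reliabilities (Spearman's attenuation formula)."""
    return true_r * (rel_x * rel_y) ** 0.5

# Even a true r of .5 shows up as about .4 when both measures
# have a reliability of .8:
print(attenuated_r(0.5, 0.8, 0.8))
```

Note, however, that attenuation affects original studies and replications alike, so by itself it cannot explain why replication effect sizes were lower than the original ones.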

## Final Conclusion

Fiedler’s illusory regression account of the replication crisis had been known to me since 2015, but it was not part of the official record. However, his articles with Schwarz in 2016 and Prager in 2018 are part of his official CV. The articles show a clear motivated bias against Open Science and the reforms initiated by social psychologists to fix their science. He was fired because he demonstrated the same arrogant dickheadedness in interactions with a Black scholar. Does this mean he is a racist? No, he also treats White colleagues with the same arrogance, yet when he treated Roberts like this he abused his position as gate-keeper at an influential journal. I think APS made the right decision to fire him, but they were wrong to hire him in the first place. The past editors of PoPS have shown that old White eminent psychologists are unable to navigate the paradigm shift in psychology towards credibility, transparency, and inclusivity. I hope APS will learn a lesson from the reputational damage caused by Fiedler’s actions and search for a better editor who represents the values of contemporary psychologists.

P.S. This blog post is about Klaus Fiedler, the public figure and his role in psychological science. It has nothing to do with the human being.

P.P.S. I also share with Klaus the experience of being forced from an editorial position. I was co-founding editor of Meta-Psychology and made some controversial comments about another journal that led to a negative response. To save the new journal, I resigned. It was for the better, and Rickard Carlsson is doing a much better job alone than we could have done together. It hurt a little, but life goes on. Reputations are not made by a single incident, especially if you can admit to mistakes.