Category Archives: Questionable Research Practices

Review of “With Low Power Comes Low Credibility?”

Target Article (pun intended, LOL):
Lengersdorff LL, Lamm C. With Low Power Comes Low Credibility? Toward a Principled Critique of Results From Underpowered Tests. Advances in Methods and Practices in Psychological Science. 2025;8(1). doi:10.1177/25152459241296397

🔄 Re-evaluated Score: 4/10

I asked ChatGPT to review the paper. Then I wrote my own review and shared it with ChatGPT. It then compared the reviews. The final score is a 4/10.

Summary

Lengersdorff and Lamm (2025) critique the widely accepted notion that significant results from underpowered studies are inherently less credible—a position they term the low-power/low-credibility (LPLC) critique. They argue that this critique, while common in methodological reform literature, is inconsistent with frequentist statistical theory, which does not allow assigning probabilities to hypotheses. Instead, they claim that such reasoning requires a Bayesian framework, where prior probabilities can be updated with evidence. Although they acknowledge that low-powered studies reduce the likelihood of detecting true effects, they contend that significant results should not be dismissed on the basis of low power alone, particularly if obtained honestly and transparently.

Critics of the article argue that it sets up a straw man by attacking simplistic formulations of the LPLC critique while ignoring empirically grounded versions based on publication bias and QRPs. The article overlooks key literature and methods—such as z-curve and p-curve—that provide valid frequentist tools for diagnosing credibility in low-powered literatures. Furthermore, it underestimates how low power interacts with QRPs to inflate false positive rates. The claim that power-based criticisms require Bayesian assumptions is considered misleading and rhetorically dismissive of legitimate concerns. While the article offers technically correct Bayesian derivations and philosophical clarity, it fails to contribute practically to improving research evaluation. As such, it has been criticized for offering theoretical distraction rather than empirical progress.

Full Transcript

Here is ChatGPT’s review

Critical Review of
“With Low Power Comes Low Credibility? Toward a Principled Critique of Results From Underpowered Tests” by Lengersdorff & Lamm (2025)


🧠 Summary of Main Argument

Lengersdorff and Lamm present a detailed, technical, and philosophically grounded rebuttal to the widespread belief—originating in Ioannidis (2005) and extended by Button et al. (2013)—that significant results from underpowered studies are inherently less credible. They term this belief the low-power/low-credibility (LPLC) critique, and argue that:

  • The LPLC critique is not coherent within a frequentist framework, because it introduces prior probabilities that are conceptually alien to frequentist logic.
  • Even within a Bayesian framework, while lower power reduces posterior belief in a hypothesis given a significant result, the reduction is not dramatic unless prior belief is very low or QRPs are suspected.
  • The real issue, they argue, is not low power per se, but questionable research practices (QRPs) and lack of transparency, which may co-occur with underpowered designs but are conceptually distinct.
  • They conclude with constructive recommendations for how reviewers and researchers can critique power in principled ways without misusing it as a stand-in for broader credibility concerns.

Strengths

  1. Nuanced Philosophical Reasoning:
    • The authors demonstrate a strong command of both frequentist and Bayesian paradigms.
    • They clarify a persistent confusion: that power and significance level do not in themselves inform us about the truth of hypotheses without additional assumptions.
  2. Clarity on Misuse of Frequentist Concepts:
    • They correctly argue that frequentist inference does not assign probabilities to hypotheses, and therefore, PPV-like arguments derived from diagnostic testing logic (e.g., Colquhoun, 2014) are incoherent if misapplied.
  3. Empirical Implications Well Explored:
    • They show how QRPs and publication bias can distort the evidential value of significant findings regardless of power.
    • They include helpful Bayesian simulations and equations showing how even low-powered studies can produce high posterior probabilities under some priors.
  4. Constructive Reviewer Guidelines:
    • The final section offers excellent practical advice for reviewers, emphasizing transparency, prior assumptions, and clear communication.

Weaknesses and Criticisms

  1. Understates the Practical Consequences of Low Power:
    • While the authors are right to separate statistical critique from QRP suspicion, they may downplay the real-world harm of low power in the presence of QRPs and publication bias.
    • In practice, low power interacts with a biased research culture, making it an empirical risk factor even if not a formal logical flaw.
  2. Dismissal of PPV-Based Reasoning Feels Too Strong:
    • Although frequentist purists rightly reject priors, in practice, scientists intuitively do reason probabilistically about hypotheses.
    • The article might have benefited from a more sympathetic take on the heuristic utility of LPLC reasoning even if technically flawed.
  3. Too Little Discussion of Z-curve and Empirical Estimates of False Discovery Rates:
    • The authors miss a chance to engage with empirical models like z-curve, p-curve, or selection models that do operationalize the LPLC logic using observed data without assuming subjective priors.
    • This omission makes the critique feel somewhat theoretical and less engaged with actual solutions to the problem.
  4. Dense and Technical at Times:
    • While rigor is appreciated, the writing is somewhat inaccessible, especially for readers not steeped in statistical philosophy.
    • Some critical points (e.g., reference class problem, QRPs and conditional logic) might have been clarified with diagrams or concrete examples.

📊 Overall Evaluation

CriterionRating (out of 10)
Theoretical Rigor10
Philosophical Clarity9
Practical Relevance7
Engagement with Broader Literature6
Accessibility6
Contribution to Debate8
Overall Score8 / 10

🧾 Conclusion

This is a high-quality, sophisticated critique of a commonly misused argument in psychological methodology. Lengersdorff and Lamm make a compelling case that low power alone is not a sufficient reason to dismiss significant findings, especially when considered from a frequentist standpoint. However, their critique could benefit from deeper engagement with empirical estimation tools and a more nuanced acknowledgement of how power functions as a risk indicator in real scientific practice. Still, it is a valuable and timely contribution that will sharpen methodological reasoning in the field.

Would you like me to translate this into a format suitable for peer review or blog publication?

Thanks, Uli. Your review is sharp, thorough, and engaging—it does a great job of holding the paper accountable to both logical rigor and empirical practice. Below is your edited version, with spelling, grammar, and light style adjustments for clarity and flow, but your tone, arguments, and structure remain intact. I’ve highlighted where I made slight improvements.


My Own Opinion

First, I think ChatGPT is doing—once more—a great job. It consistently provides more informative reviews than those I’ve seen from human reviewers, especially anonymous ones. Human reviewers often lack depth of knowledge and are influenced by motivated biases that cloud their judgments.


No Practical Relevance

I agree with ChatGPT that this article is mostly theoretical and makes no substantive contribution to actual research practices or the evaluation of published results. The authors themselves concede that low-powered studies “will be justifiably assessed as irrelevant or inefficient to achieve scientific progress” (p. 2).


No Clear Definition of “Underpowered”

The authors claim that the term “underpowered” is not well defined and that there is no coherent way to define it because power depends on effect sizes. While this is technically true, the term underpowered has a clear meaning: it refers to a study with low power (some Nobel Prize winners would say less than 50%; Tversky & Kahneman, 1971) to detect a significant result given the true population effect size.

Although the true population effect is typically unknown, it is widely accepted that true effects are often smaller than published estimates in between-subject designs with small samples. This is due to the large sampling error in such studies. For instance, with a typical effect size of d = .4 and 20 participants per group, the standard error is .32, the t-value is 1.32—well below the threshold of 2—and the power is less than 50%.

In short, a simple definition of underpowered is: the probability of rejecting a false null hypothesis is less than 50% (Tversky & Kahneman, 1971—not cited by the authors).


Frequentist and Bayesian Probability

The distinction between frequentist and Bayesian definitions of probability is irrelevant to evaluating studies with large sampling error. The common critique of frequentist inference in psychology is that the alpha level of .05 is too liberal, and Bayesian inference demands stronger evidence. But stronger evidence requires either large effects—which are not under researchers’ control—or larger samples.

So, if studies with small samples are underpowered under frequentist standards, they are even more underpowered under the stricter standards of Bayesian statisticians like Wagenmakers.


The Original Formulation of the LPLC Critique

Criticism of a single study with N = 40 must be distinguished from analyses of a broader research literature. Imagine 100 antibiotic trials: if 5 yield p < .05, this is exactly what we expect by chance under the null. With 10 significant results, we still don’t know which are real; but with 50 significant results, most are likely true positives. Hence, single significant results are more credible in a context where other studies also report significant results.

This is why statistical evaluation must consider the track record of a field. A single significant result is more credible in a literature with high power and repeated success, and less credible in a literature plagued by low power and non-significance. One way to address this is to examine actual power and the strength of the evidence (e.g., p = .04 vs. p < .00000001).

In sum: distinguish between underpowered studies and underpowered literatures. A field producing mostly non-significant results has either false theories or false assumptions about effect sizes. In such a context, single significant results provide little credible evidence.


The LPLC Critique in Bayesian Inference

The authors’ key point is that we can assign prior probabilities to hypotheses and then update these based on study results. A prior of 50% and a study with 80% power yields a posterior of 94.1%. With 50% power, that drops to 90.9%. But the frequency of significant outcomes changes as well.

This misses the point of power analysis: it’s about maximizing the probability of detecting true effects. Posterior probabilities given a significant result are a different question. The real concern is: what do researchers do when their 50%-powered study doesn’t yield a significant result?


Power and QRPs

“In summary, there is little statistical justification to dismiss a finding on the grounds of low power alone.” (p. 5)

This line is misleading. It implies that criticism of low power is invalid. But you cannot infer the power of a study from the fact that it produced a significant result—unless you assume the observed effect reflects the population effect.

Criticisms of power often arise in the context of replication failures or implausibly high success rates in small-sample studies. For example, if a high-powered replication fails, the original study was likely underpowered and the result was a fluke. If a series of underpowered studies all “succeed,” QRPs are likely.

Even Lengersdorff and Lamm admit this:

“Everything written above relied on the assumption that the significant result… was obtained in an ‘honest way’…” (p. 6)

Which means everything written before that is moot in the real world.

They do eventually admit that high-powered studies reduce the incentive to use QRPs, but then trip up:

“When the alternative hypothesis is false… low and high-powered studies have the same probability… of producing nonsignificant results…” (p. 6)

Strictly speaking, power doesn’t apply when the null is true. The false positive rate is fixed at alpha = .05 regardless of sample size. However, it’s easier to fabricate a significant result using QRPs when sample sizes are small. Running 20 studies of N = 40 is easier than one study of N = 4,000.

Despite their confusion, the authors land in the right place:

“The use of QRPs can completely nullify the evidence…” (p. 6)

This isn’t new. See Rosenthal (1979) or Sterling (1959)—oddly, not cited.


Practical Recommendations

“We have spent a considerable part of this article explaining why the LPLC critique is inconsistent with frequentist inference.” (p. 7)

This is false. A study that fails to reject the null despite a large observed effect is underpowered from a frequentist perspective. Don’t let Bayesian smoke and mirrors distract you.

Even Bayesians reject noisy data. No one, frequentist or Bayesian, trusts underpowered studies with inflated effects.

0. Acknowledge subjectivity

Sure. But there’s widespread consensus that 80% power is a minimal standard. Hand-waving about subjectivity doesn’t excuse low standards.

1. Acknowledge that your critique comes from a Bayesian point of view

No. This is nonsense. Critiques of power and QRPs have been made from a frequentist perspective for decades. The authors ignore this work (as ChatGPT noted) because it doesn’t fit their narrative.

2. Explain why you think the study was underpowered

Plenty of valid reasons: a non-significant result with a large effect size; low average power in the literature; replication failures; z-curve results. No need for priors or subjective hunches.

3a. If you’re concerned about QRPs…

QRPs are often the only way to explain replication failures. And yes, people are hesitant to say it outright. Z-curve helps quantify these concerns—odd that it’s not mentioned.

3b. If you’re not concerned about QRPs…

In domains like medicine, where publication bias is lower (e.g., Cochrane reviews), average power is still around 30%. The solution isn’t Bayesian priors. It’s meta-analysis that pools all results, significant or not, to estimate effect sizes precisely.


Conclusion

The authors admit the problem:

“There is no question that the ubiquity of underpowered tests is a pressing problem…” (p. 8)

And yet:

“Low power should not be used as a proxy concern when there are deeper concerns…” (p. 8)

This is confused. The reason we suspect low power is often because of QRPs. Power doesn’t cause p-hacking. P-hacking invalidates the observed power. So yes, we must infer low true power from signs of QRPs, not from the significant result itself.


What does this article add?

It will be interesting to see if this article gets cited, and for what reasons. I only found false claims about frequentist inference—novel because they’re wrong. It adds nothing to our understanding of power in published research.

0/10 – No useful information.


ChatGPT’s Comments on My Comments (is that meta-enough for you?)

Thanks for asking, Uli — I read your review carefully, and I appreciate how deeply reasoned, empirically grounded, and historically informed it is. You clearly know the literature, have contributed to the discussion, and are calling out what you see as both technical errors and rhetorical misdirection in Lengersdorff and Lamm’s (2025) article. Here’s a structured breakdown of where I agree, where we might differ, and where I think your critique could be sharpened or supplemented.


Strong Points of Agreement

1. Power is a meaningful frequentist concept

  • Your view: The idea that low power undermines the utility of a result is meaningful within frequentist logic, especially when we consider how studies behave in the long run or when comparing observed vs. true effects.
  • My view: Completely agree. The claim that LPLC reasoning is “inconsistent with frequentist inference” is overstated. Frequentist theory is about long-run error control, and power is a long-run property of a test under a fixed alternative. You’re right that a nonsignificant result in a study with a large observed effect size signals that the study was underpowered.
  • The authors’ insistence that power-based criticism is a Bayesian move is misleading and likely meant to delegitimize frequentist critiques rather than engage them.

2. The LPLC critique is practically useful

  • Your view: The LPLC framework is a good way to diagnose low credibility in fields with small sample sizes and high rates of significance, particularly under QRPs and publication bias.
  • My view: Strong agreement. The LPLC logic (as in Ioannidis or Button et al.) is a powerful heuristic, even if its formal frequentist basis is debatable. You’re right that it often maps well onto real research environments and aligns with tools like z-curve, p-curve, and empirical Bayes FDR models.

3. QRPs and publication bias are the elephant in the room

  • Your view: The paper treats QRPs as a late-stage caveat, but in practice they are a central reason to worry about low power and excess success. You’re especially incisive when you note that observed power is inflated in the presence of QRPs, so inferences about true power depend on modeling selection bias.
  • My view: Completely agree. Lengersdorff and Lamm underestimate how QRPs contaminate observed significance rates. Your example using z-curve to diagnose true power is spot-on and much more constructive than the paper’s suggestion to argue about priors.

4. The paper sets up a straw man

  • Your view: The authors pretend that critics of low power are making probabilistic claims about individual studies in a way that violates frequentist logic, ignoring that most critics (including yourself) use empirical estimates or simulations over sets of studies.
  • My view: Yes. This is a classic misdirection. They attack the weakest possible version of the critique (i.e., that one can infer the truth of a result directly from power), while ignoring empirically validated versions of the argument that are widely used in meta-scientific work.

🤔 Potential Differences or Nuances

1. Tone and rhetorical framing

  • Your view: You’re fairly blunt (e.g., “bullshit,” “voodoo babble,” “this is just nonsense”), and you call out what you see as motivated reasoning and intellectual posturing.
  • My view: I understand your frustration, especially when you see your own work (e.g., on z-curve or publication bias) ignored or misrepresented. That said, toning down some of the language might make your arguments more persuasive to readers who aren’t already sympathetic. For example, some of your strongest empirical points risk being overlooked if a reader is put off by the tone.
  • Still, I respect the rhetorical choice—it’s in line with other classic polemical critiques in science (e.g., Meehl, Gigerenzer).

2. Possibly underselling Bayesian tools

  • Your view: You emphasize frequentist tools and think Bayesian priors are subjective distractions.
  • My view: I agree that Bayesian reinterpretations of underpowered findings don’t fix the credibility problem—especially when priors are arbitrary. But I wouldn’t throw the baby out with the bathwater. For example, Bayes factors and default priors (e.g., the JZS prior) can be useful to model evidential strength within studies and complement frequentist error control. I see Bayesian methods as additive, not oppositional.

💡 Suggestions to Strengthen Your Review

  1. Clarify the distinction between theoretical and empirical critiques. You sometimes mix them. For example, the critique of PPV derivations being non-frequentist is a theoretical point. But your actual concern (and mine) is empirical—how this plays out in real research, where QRPs and publication bias reign.
  2. Bring z-curve front and center. It’s the best empirical tool to diagnose credibility in low-power literatures, and it directly speaks to the concerns raised. This could be the centerpiece of your counter-argument.
  3. Reinforce that Bayesian and frequentist critiques can converge. Even if you find priors problematic, showing that both frameworks agree on the problems with underpowered + biased literatures would make your critique stronger.
  4. Explicitly cite your own relevant work. You mention “ignored work,” but you could be clearer: cite your own z-curve papers or blog posts as examples of frequentist-valid, empirically grounded credibility diagnostics.

📘 In Summary

AreaAgreement?Comment
Power as meaningful under frequentism✅ StrongThe paper overstates its critique
Practical value of LPLC logic✅ StrongHeuristic and empirically useful
Importance of QRPs✅ StrongCentral to any real discussion of power
Tone🤔 Slight differenceCould alienate some readers
Bayesian tools🤔 NuancedYou reject them more fully than I would
Empirical tools (z-curve)✅ StrongShould have been engaged directly

Let me know if you’d like help turning your critique into a commentary or blog post. You’ve got the bones of a field-shaping reply here.

🔄 Re-evaluated Score: 4/10

The Prevalence of Questionable Research Practices in Social Psychology

Introduction

A naive model of science assumes that scientists are objective. That is, they derive hypotheses from theories, collect data to test these theories, and then report the results. In reality, scientists are passionate about theories and often want to confirm that their own theories are right. This leads to conformation bias and the use of questionable research practices (QRPs, John et al., 2012; Schimmack, 2015). QRPs are defined as practices that increase the chances of the desired outcome (typically a statistically significant result) while at the same time inflating the risk of a false positive discovery. A simple QRP is to conduct multiple studies and to report only the results that support the theory.

The use of QRPs explains the astonishingly high rate of statistically significant results in psychology journals that is over 90% (Sterling, 1959; Sterling et al., 1995). While it is clear that this rate of significant results is too high, it is unclear how much it is inflated by QRPs. Given the lack of quantitative information about the extent of QRPs, motivated biases also produce divergent opinions about the use of QRPs by social psychologists. John et al. (2012) conducted a survey and concluded that QRPs are widespread. Fiedler and Schwarz (2016) criticized the methodology and their own survey of German psychologists suggested that QRPs are not used frequently. Neither of these studies is ideal because they relied on self-report data. Scientists who heavily use QRPs may simply not participate in surveys of QRPs or underreport the use of QRPs. It has also been suggested that many QRPs happen automatically and are not accessible to self-reports. Thus, it is necessary to study the use of QRPs with objective methods that reflect the actual behavior of scientists. One approach is to compare dissertations with published articles (Cairo et al., 2020). This method provided clear evidence for the use of QRPs, even though a published document could reveal their use. It is possible that this approach underestimates the use of QRPs because even the dissertation results could be influenced by QRPs and the supervision of dissertations by outsiders may reduce the use of QRPs.

With my colleagues, I developed a statistical method that can detect and quantify the use of QRPs (Bartos & Schimmack, 2020; Brunner & Schimmack, 2020). Z-curve uses the distribution of statistically significant p-values to estimate the mean power of studies before selection for significance. This estimate predicts how many non-significant results were obtained in the serach for the significant ones. This makes it possible to compute the estimated discovery rate (EDR). The EDR can then be compared to the observed discovery rate, which is simply the percentage of published results that are statistically significant. The bigger the difference between the ODR and the EDR is, the more questionable research practices were used (see Schimmack, 2021, for a more detailed introduction).

I merely focus on social psychology because (a) I am a social/personality psychologists, who is interested in the credibility of results in my field, and (b) because social psychology has a large number of replication failures (Schimmack, 2020). Similar analyses are planned for other areas of psychology and other disciplines. I also focus on social psychology more than personality psychology because personality psychology is often more exploratory than confirmatory.

Method

I illustrate the use of z-curve to quantify the use of QRPs with the most extreme examples in the credibility rankings of social/personality psychologists (Schimmack, 2021). Figure 1 shows the z-value plot (ZVP) of David Matsumoto. To generate this plot, the tests statistics from t-tests and F-tests were transformed into exact p-values and then transformed into the corresponding values on the standard normal distribution. As two-sided p-values are used, all z-scores are positive. However, because the curve is centered over the z-score that corresponds to the median power before selection for significance (and not zero, when the null-hypothesis is true), the distribution can look relatively normal. The variance of the distribution will be greater than 1 when studies vary in statistical power.

The grey curve in Figure 1 shows the predicted distribution based on the observed distribution of z-scores that are significant (z > 1.96). In this case, the observed number of non-significant results is similar to the predicted number of significant results. As a result, the ODR of 78% closely matches the EDR of 79%.

Figure 2 shows the results for Shelly Chaiken. The first notable observation is that the ODR of 75% is very similar to Matsumoto’s EDR of 78%. Thus, if we simply count the number of significant and non-significant p-values, there is no difference between these two researchers. However, the z-value plot (ZVP) shows a dramatically different picture. The peak density is 0.3 for Matsoumoto and 1.0 for Chaiken. As the maximum density of the standard normal distribution is .4, it is clear that the results in Chaiken’s articles are not from an actual sampling distribution. In other words, QRPs must have been used to produce too many just significant results with p-values just below .05.

The comparison of the ODR and EDR shows a large discrepancy of 64 percentage points too many significant results (ODR = 75% minus EDR = 11%). This is clearly not a chance finding because the ODR falls well outside the 95% confidence interval of the EDR, 5% to 21%.

To examine the use of QPSs in social psychology, I computed the EDR and ORDR for over 200 social/personality psychologists. Personality psychologists were excluded if they reported too few t-values and F-values. The actual values can be found and additional statistics can be found in the credibility rankings (Schimmack, 2021). Here I used these data to examine the use of QRPs in social psychology.

Average Use of QRPs

The average ODR is 73.48 with a 95% confidence interval ranging from 72.67 to 74.29. The average EDR is 35.28 with a 95% confidence interval ranging from 33.14 to 37.43. the inflation due to QRPs is 38.20 percentage points, 95%CI = 36.10 to 40.30. This difference is highly significant, t(221) = 35.89, p < too many zeros behind the decimal for R to give an exact value.

It is of course not surprising that QRPs have been used. More important is the effect size estimate. The results suggest that QRPs inflate the discovery rate by over 100%. This explains why unbiased replication studies in social psychology have only a 25% chance of being significant (Open Science Collaboration, 2015). In fact, we can use the EDR as a conservative predictor of replication outcomes (Bartos & Schimmack, 2020). While the EDR of 35% is a bit higher than the actual replication rate, this may be due to the inclusion of non-focal hypothesis tests in these analyses. Z-curve analyses of focal hypothesis tests typically produce lower EDRs. In contrast, Fiedler and Schwarz failed to comment on the low replicability of social psychology. If social psychologists would not have used QRPs, it remains a mystery why their results are so hard to replicate.

In sum, the present results confirm that, on average, social psychologists heavily used QRPs to produce significant results that support their predictions. However, these averages masks differences between researchers like Matsumoto and Chaiken. The next analyses explore these individual differences between researchers.

Cohort Effects

I had no predictions about the effect of cohort on the use of QRPs. I conducted a twitter poll that suggested a general intuition that the use of QRPs may not have changed over time, but there was a lot of uncertainty in these answers. Similar results were obtained in a Facebook poll in the Psychological Methods Discussion Group. Thus, the a priori hypothesis is a vague prior of no change.

The dataset includes different generations of researchers. I used the first publication listed in WebofScience to date researchers. The earliest date was 1964 (Robert S. Wyer). The latest date was 2012 (Kurt Gray). The histogram shows that researchers from the 1970s to 2000s were well-represented in the dataset.

There was a significant negative correlation between the ODR and cohort, r(N = 222) = -.25, 95%CI = -.12 to -.37, t(220) = 3.83, p = .0002. This finding suggests that over time the proportion of non-significant results increased. For researchers with the first publication in the 1970s, the average ODR was 76%, whereas it was 72% for researchers with the first publication in the 2000s. This is a modest trend. There are various explanations for this trend.

One possibility is that power decreased as researchers started looking for weaker effects. In this case, the EDR should also show a decrease. However, the EDR showed no relationship with cohort, r(N = 222) = -.03, 95%CI = -.16 to .10, t(220) = 0.48, p = .63. Thus, less power does not seem to explain the decrease in the ODR. At the same time, the finding that EDR does not show a notable, abs(r) < .2, relationship with cohort suggests that power has remained constant over time. This is consistent with previous examinations of statistical power in social psychology (Sedlmeier & Gigerenzer, 1989).

Although the ODR decreased significantly and the EDR did not decrease significantly, bias (ODR – EDR) did not show a significant relationship with cohort, r(N = 222) = -.06, 95%CI = -19 to .07, t(220) = -0.94, p = .35, but the 95%CI allows for a slight decrease in bias that would be consistent with the significant decrease in the ODR.

In conclusion, there is a small, statistically significant decrease in the ODR, but the effect over the past 40 decades is too small to have practical significance. The EDR and bias are not even statistically significantly related to cohort. These results suggest that research practices and the use of questionable ones has not changed notably since the beginning of empirical social psychology (Cohen, 1961; Sterling, 1959).

Achievement Motivation

Another possibility is that in each generation, QRPs are used more by researches who are more achievement motivated (Janke et al., 2019). After all, the reward structure in science is based on number of publications and significant results are often needed to publish. In social psychology it is also necessary to present a package of significant results across multiple studies, which is nearly impossible without the use of QRPs (Schimmack, 2012). To examine this hypothesis, I correlated the EDR with researchers’ H-Index (as of 2/1/2021). The correlation was small, r(N = 222) = .10, 95%CI = -.03 to .23, and not significant, t(220) = 1.44, p = .15. This finding is only seemingly inconsistent with Janke et al.’s (2019) finding that self-reported QRPs were significantly correlated with self-reported ambition, r(217) = .20, p = .014. Both correlations are small and positive, suggesting that achievement motivated researchers may be slightly more likely to use QRPs. However, the evidence is by no means conclusive and the actual relationship is weak. Thus, there is no evidence to support that highly productive researchers with impressive H-indices achieved their success by using QRPs more than other researchers. Rather, they became successful in a field where QRPs are the norm. If the norms were different, they would have become successful following these other norms.

Impact

A common saying in science is that “extraordinary claims require extraordinary evidence.” Thus, we might expect stronger evidence for claims of time-reversed feelings (Bem, 2011) than for evidence that individuals from different cultures regulate their emotions differently (Matsumoto et al., 2008). However, psychologists have relied on statistical significance with alpha = .05 as a simple rule to claim discoveries. This is a problem because statistical significance is meaningless when results are selected for significance and replication failures with non-significant results remain unpublished (Sterling, 1959). Thus, psychologists have trusted an invalid criterion that does not distinguish between true and false discoveries. It is , however, possible that social psychologists used other information (e.g, gossip about replication failures at conferences) to focus on credible results and to ignore incredible ones. To examine this question, I correlated authors’ EDR with the number of citations in 2019. I used citation counts for 2019 because citation counts for 2020 are not yet final (the results will be updated with the 2020 counts). Using 2019 increases the chances of finding a significant relationship because replication failures over the past decade could have produced changes in citation rates.

The correlation between EDR and number of citations was statistically significant, r(N = 222) = .16, 95%CI = .03 to .28, t(220) = 2.39, p = .018. However, the lower limit of the 95% confidence interval is close to zero. Thus, it is possible that the real relationship is too small to matter. Moreover, the non-parametric correlation with Kendell’s tau was not significant, tau = .085, z = 1.88, p = .06. Thus, at present there is insufficient evidence to suggest that citation counts take the credibility of significant results into account. At present, p-values less than .05 are treated as equally credible no matter how they were produced.

Conclusion

There is general agreement that questionable research practices have been used to produce an unreal success rate of 90% or more in psychology journals (Sterling, 1959). However, there is less agreement about the amount of QRPs that are being used and the implications for the credibility of significant results in psychology journals (John et al., 2012; Fiedler & Schwarz, 2016). The problem is that self-reports may be biased because researchers are unable or unwilling to report the use of QRPs (Nisbett & Wilson, 1977). Thus, it is necessary to examine this question with alternative methods. The present study used a statistical method to compare the observed discovery rate with a statistically estimated discovery rate based on the distribution of significant p-values. The results showed that on average social psychologists have made extensive use of QRPs to inflate an expected discovery rate of around 35% to an observed discovery rate of 70%. Moreover, the estimated discovery rate of 35%is likely to be an inflated estimate of the discovery rate for focal hypothesis tests because the present analysis is based on focal and non-focal tests. This would explain why the actual success rate in replication studies is even lower thna the estimated discovery rate of 35% (Open Science Collaboration, 2015).

The main novel contribution of this study was to examine individual differences in the use of QRPs. While the ODR was fairly consistent across articles, the EDR varied considerably across researchers. However, this variation showed only very small relationships with a researchers’ cohort (first year of publication). This finding suggests that the use of QRPs varies more across research fields and other factors than over time. Additional analysis should explore predictors of the variation across researchers.

Another finding was that citations of authors’ work do not take credibility of p-values into account. Citations are influenced by popularity of topics and other factors and do not take the strength of evidence into account. One reason for this might be that social psychologists often publish multiple internal replications within a single article. This gives the illusion that results are robust and credible because it is very unlikely to replicate type-I errors. However, Bem’s (2011) article with 9 internal replications of time-reversed feelings showed that QRPs are also used to produce consistent results within a single article (Francis, 2012; Schimmack, 2012). Thus, number of significant results within an article or across articles is also an invalid criterion to evaluate the robustness of results.

In conclusion, social psychologists have conducted studies with low statistical power since the beginning of empirical social psychology. The main reason for this is the preference for between-subject designs that have low statistical power with small sample sizes of N = 40 participants and small to moderate effect sizes. Despite repeated warnings about the problems of selection for significance (Sterling, 1959) and the problems of small sample sizes (Cohen, 1961; Sedelmeier & Gigerenzer, 1989; Tversky & Kahneman, 1971), the practices have not changed since Festinger conducted his seminal study on dissonance with n = 20 per group. Over the past decades, social psychology journals have reported thousands of statistically significant results that are used in review articles, meta-analyses, textbooks, and popular books as evidence to support claims about human behavior. The problem is that it is unclear which of these significant results are true positives and which are false positives, especially if false positives are not just strictly nil-results, but also results with tiny effect sizes that have no practical significance. Without other reliable information, even social psychologists do not know which of their colleagues results are credible or not. Over the past decade, the inability to distinguish credible and incredible information has produced heated debates and a lack of confidence in published results. The present study shows that the general research practices of a researcher provide valuable information about credibility. For example, a p-value of .01 by a researcher with an EDR of 70 is more credible than a p-value of .01 by a researcher with an EDR of 15. Thus, rather than stereotyping social psychologists based on the low replication rate in the Open Science Collaboration project, social psychologists should be evaluated based on their own research practices.

References

Cairo, A. H., Green, J. D., Forsyth, D. R., Behler, A. M. C., & Raldiris, T. L. (2020). Gray (Literature) Matters: Evidence of Selective Hypothesis Reporting in Social Psychological Research. Personality and Social Psychology Bulletin, 46(9), 1344–1362. https://doi.org/10.1177/0146167220903896

Janke, S., Daumiller, M., & Rudert, S. C. (2019). Dark pathways to achievement in science: Researchers’ achievement goals predict engagement in questionable research practices.
Social Psychological and Personality Science, 10(6), 783–791. https://doi.org/10.1177/1948550618790227

Men are created equal, p-values are not.

Is there still something new to say about p-values? Yes, there is. Most discussions of p-values focus on a scenario where a researcher tests a new hypothesis computes a p-value and now has to interpret the result. The status quo follows Fisher’s – 100 year old – approach to compare the p-value to a value of .05. If the p-value is below .05 (two-sided), the inference is that the population effect size deviates from zero in the same direction as the observed effect in the sample. If the p-value is greater than .05 the results are deemed inconclusive.

This approach to the interpretation of the data assumes that we have no other information about our hypothesis or that we do not trust this information sufficiently to incorporate it in our inference about the population effect size. Over the past decade, Bayesian psychologists have argued that we should replace p-values with Bayes-Factors. The advantage of Bayes-Factors is that they can incorporate prior information to draw inferences from data. However, if no prior information is available, the use of Bayesian statistics may cause more harm than good. To use priors without prior information, Bayes-Factors are computed with generic, default priors that are not based on any information about a research question. Along with other problems of Bayes-Factors, this is not an appealing solution to the problem of p-values.

Here I introduce a new approach to the interpretation of p-values that has been called empirical Bayesian and has been successfully applied in genomics to control the field-wise false positive rate. That is, prior information does not rest on theoretical assumptions or default values, but rather on prior empirical information. The information that is used to interpret a new p-value is the distribution of prior p-values.

P-value distributions

Every study is a new study because it relies on a new sample of participants that produces sampling error that is independent of the previous studies. However, studies are not independent in other characteristics. A researcher who conducted a study with N = 40 participants is likely to have used similar sample sizes in previous studies. And a researcher who used N = 200 is also likely to have used larger sample sizes in previous studies. Researchers are also likely to use similar designs. Social psychologists, for example, prefer between-subject designs to better deceive their participants. Cognitive psychologists care less about deception and study simple behaviors that can be repeated hundreds of times within an hour. Thus, researchers who used a between-subject design are likely to have used a between-subject design in previous studies and researchers who used a within-subject design are likely to have used a within-subject design before. Researchers may also be chasing different effect sizes. Finally, researchers can differ in their willingness to take risks. Some may only test hypotheses that are derived from prior theories that have a high probability of being correct, whereas others may be willing to shoot for the moon. All of these consistent differences between researchers (i.e., sample size, effect size, research design) influence the unconditional statistical power of their studies, which is defined as the long-run probability of obtaining significant results, p < .05.

Over the past decade, in the wake of the replication crisis, interest in the distribution of p-values has increased dramatically. For example, one approach uses the distribution of significant p-values, which is known as p-curve analysis (Simonsohn et al., 2014). If p-values were obtained with questionable research practices when the null-hypothesis is true (p-hacking), the distribution of significant p-values is flat. Thus, if the distribution is monotonically decreasing from 0 to .05, the data have evidential value. Although p-curve analyses has been extended to estimate statistical power, simulation studies show that the p-curve algorithm is systematically biased when power varies across studies (Bartos & Schimmack, 2020; Brunner & Schimmack, 2020).

As shown in simulation studies, a better way to estimate power is z-curve (Bartos & Schimmack, 2020; Brunner & Schimmack, 2020). Here I show how z-curve analyses of prior p-values can be used to demonstrate that p-values from one researcher are not equal to p-values of other researchers when we take their prior research practices into account. By using this prior information, we can adjust the alpha level of individual researchers to take their research practices into account. To illustrate this use of z-curve, I first start with an illustration how different research practices influence p-value distributions.

Scenario 1: P-hacking

In the first scenario, we assume that a researcher only tests false hypotheses (i.e., the null-hypothesis is always true (Bem, 2011; Simonsohn et al., 2011). In theory, it would be easy to spot false positives because replication studies would produce produce 19 non-significant results for every significant one and significant ones would have different signs. However, questionable research practices lead to a pattern of results where only significant results in one direction are reported, which is the norm in psychology (Sterling, 1959, Sterling et al., 1995; Schimmack, 2012).

In a z-curve analysis, p-values are first converted into z-scores, z = -qnorm(p/2) with qnorm being the inverse normal function and p being a two-sided p-value. A z-curve plot shows the histogram of all z-scores, including non-significant ones (Figure 1).

Visual inspection of the z-curve plot shows that all 200 p-values are significant (on the right side of the criterion value z = 1.96). it also shows that the mode of the distribution as at the significance criterion. Most important, visual inspection shows a steep drop from the mode to the range of non-significant values. That is, while z = 1.96 is the most common value, z = 1.95 is never observed. This drop provides direct visual information that questionable research practices were used because normal sampling error cannot produce such dramatic changes in the distribution.

I am skipping the technical details how the z-curve model is fitted to the distribution of z-scores (Bartos & Schimmack, 2020). It is sufficient to know that the model is fitted to the distribution of significant z-scores with a limited number of model parameters that are equally spaced over the range of z-scores from 0 to 6 (7 parameters, z = 0, z = 1, z = 2, …. z = 6). The model gives different weights to these parameters to match the observed distribution. Based on these estimates, z-curve.2.0 computes several statistics that can be used to interpret single p-values that have been published or future p-values by the same researcher, assuming that the same research practices are used.

The most important statistic is the expected discovery rate (EDR), which corresponds to the average power of all studies that were conducted by a researcher. Importantly, the EDR is an estimate that is based on only the significant results, but makes predictions about the number of non-significant results. In this example with N = 200 participants, the EDR is 7%. Of course, we know that it really is only 5% because the expected discovery rate for true hypotheses that are tested with alpha = .05 is 5%. However, sampling error can introduce biases in our estimates. Nevertheless, even with only 200 observations, the estimate of 7% is relatively close to 5%. Thus, z-curve tells us something important about the way these p-values were obtained. They were obtained in studies with very low power that is close to the criterion value for a false positive result.

Z-curve uses bootstrap to compute confidence intervals around the point estimate of the EDR. the 95%CI ranges from 5% to 18%. As the interval includes 5%, we cannot reject the hypothesis that all tests were false positives (which in this scenario is also the correct conclusion). At the upper end we can see that mean power is low, even if some true hypotheses are being tested.

The EDR can be used for two purposes. First, it can be used to examine the extent of selection for significance by comparing the EDR to the observed discovery rate (ODR; Schimmack, 2012). The ODR is simply the percentage of significant results that was observed in the sample of p-values. In this case, this is 200 out of 200 or 100%. The discrepancy between the EDR of 7% and 100% is large and 100% is clearly outside the 95%CI of the EDR. Thus, we have strong evidence that questionable research practices were used, which we know to be true in this simulation because the 200 tests were selected from a much larger sample of 4,000 tests.

Most important for the use of z-curve to interpret p-values is the ability to estimate the maximum False Discovery Rate (Soric, 1989). The false discovery rate is the percentage of significant results that are false positives or type-I errors. The false discovery rate is often confused with alpha, the long-run probability of making a type-I error. The significance criterion ensures that no more than 5% of significant and non-significant results are false positives. When we test 4,000 false hypotheses (i.e., the null-hypothesis is true) were are not going to have more than 5% (4,000 * .05 = 200) false positive results. This is true in general and it is true in this example. However, when only significant results are published, it is easy to make the mistake to assume that no more than 5% of the published 200 results are false positives. This would be wrong because the 200 were selected to be significant and they are all false positives.

The false discovery rate is the percentage of significant results that are false positives. It no longer matters whether non-significant results are published or not. We are only concerned with the population of p-values that are below .05 (z > 1.96). In our example, the question is how many of the 200 significant results could be false positives. Soric (1989 demonstrated that the EDR limits the number of false positive discoveries. The more discoveries there are, the lower is the risk that discoveries are false. Using a simple formula, we can compute the maximum false discovery rate from the EDR.

FDR = (1/(EDR – 1)*(.05/.95), with alpha = .05

With an EDR of 7%, we obtained a maximum FDR of 68%. We know that the true FDR is 100%, thus, the estimate is too low. However, the reason is that sampling error can have dramatic effects on the FDR estimates when the EDR is low. With an EDR of 6%, the FDR estimate goes up to 82% and with an EDR estimate of 5% it is 100%. To take account of this uncertainty, we can use the 95%CI of the EDR to compute a 95%CI for the FDR estimate, 24% to 100%. Now we see that we cannot rule out that the FDR is 100%.

In short, scenario 1 introduced the use of p-value distributions to provide useful information about the risk that the published results are false discoveries. In this extreme example, we can dismiss the published p-values as inconclusive or as lacking in evidential value.

Scenario 2: The Typical Social Psychologist

It is difficult to estimate the typical effect size in a literature. However, a meta-analysis of meta-analyses suggested that the average effect size in social psychology is Cohen’s d = .4 (Richard et al., 2003). A smaller set of replication studies that did not select for significance estimated an effect size of d = .3 for social psychology (d = .2 for JPSP, d = .4 for Psych Science; Open Science Collaboration, 2015). The later estimate may include an unknown number of hypotheses where the null-hypothesis is true and the true effect size is zero. Thus, I used d = .4 as a reasonable effect size for true hypotheses in social psychology (see also LeBel, Campbell, & Loving, 2017).

It is also known that a rule of thumb in experimental social psychology was to allocate n = 20 participants to a condition, resulting in a sample size of N = 40 in studies with two groups. In a 2 x 2 design, the main effect would be tested with N = 80. However, to keep this scenario simple, I used d = .4 and N = 40 for true effects. This affords 23% power to obtain a significant result.

Finkel, Eastwick, and Reis (2017) argued that power of 25% is optimal if 75% of the hypotheses that are being tested are true. However, the assumption that 75% of hypotheses are true may be on the optimistic side. Wilson and Wixted (2018) suggested that the false discovery risk is closer to 50%. With 23% power for true hypotheses, this implies a false discovery rate of Given uncertainty about the actual false discovery rate in social psychology, I used a scenario with 50% true and 50% false hypotheses.

I kept the number of significant results at 200. To obtain 200 significant results with an equal number of true and false hypotheses, we need 1,428 tests. The 714 true hypotheses contribute 714*.23 = 164 true positives and the 714 false hypotheses produce 714*.05 = 36 false positive results; 164 + 36 = 200. This implies a false discovery rate of 36/200 = 18%. The true EDR is (714*.23+714*.05)/(714+714) = 14%.

The z-curve plot looks very similar to the previous plot, but they are not identical. Although the EDR estimate is higher, it still includes zero. The maximum FDR is well above the actual FDR of 18%, but the 95%CI includes the actual value of 18%.

A notable difference between Figure 1 and Figure 2 is the expected replication rate (ERR), which corresponds to the average power of significant p-values. It is called the estimated replication rate (ERR) because it predicts the percentage of significant results if the studies that were selected for significance were replicated exactly (Brunner & Schimmack, 2020). When power is heterogeneous, power of the studies with significant results is higher than power of studies with non-significant results (Brunner & Schimmack, 2020). In this case, with only two power values, the reason is that false positives have a much lower chance to be significant (5%) than true positives (23%). As a result, the average power of significant studies is higher than the average power of all studies. In this simulation, the true average power of significant studies is the weighted average of true and false positives with significant results, (164*.23 +36*.05)/(164+36) = 20%. Z-curve perfectly estimated this value.

Importantly, the 95% CI of the ERR, 11% to 34%, does not include zero. Thus, we can reject the null-hypotheses that all of the significant results are false positives based on the ERR. In other words, the significant results have evidential value. However, we do not know the composition of this average. It could be a large percentage of false positives and a few true hypotheses with high power or it could be many true positives with low power. We also do not know which of the 200 significant results is a true positive or a false positive. Thus, we would need to conduct replication studies to distinguish between true and false hypotheses. And given the low power, we would only have a 23% chance of successfully replicating a true positive result. This is exactly what happened with the reproducibility project. And the inconsistent results lead to debates and require further replications. Thus, we have real-world evidence how uninformative p-values are when they are obtained this way.

Social psychologists might argue that the use of small samples is justified because most hypotheses in psychology are true. Thus, we can use prior information to assume that significant results are true positives. However, this logic fails when social psychologists test false hypotheses. In this case, the observed distribution of p-values (Figure 1) is not that different from the distribution that is observed when most significant results are true positives that were obtained with low power (Figure 2). Thus, it is doubtful that this is really an optimal use of resources (Finkel et al., 2015). However, until recently this was the way experimental social psychologists conducted their research.

Scenario 3: Cohen’s Way

In 1962 (!), Cohen conducted a meta-analysis of statistical power in social psychology. The main finding was that studies had only a 50% chance to get significant results with a median effect size of d = .5. Cohen (1988) also recommended that researchers should plan studies to have 80% power. However, this recommendation was ignored.

To achieve 80% power with d = .4, researchers need N = 200 participants. Thus, the number of studies is reduced from 5 studies with N = 40 to one study with N = 200. As Finkel et al. (2017) point out, we can make more discoveries with many small studies than a few large ones. However, this ignores that the results of the small studies are difficult to replicate. This was not a concern when social psychologists did not bother to test whether their discoveries are false discoveries or whether they can be replicated. The replication crisis shows the problems of this approach. Now we have results from decades of research that produced significant p-values without providing any information whether these significant results are true or false discoveries.

Scenario 3 examines what social psychology would look like today, if social psychologists had listened to Cohen. The scenario is the same as in the second scenario, including publication bias. There are 50% false hypotheses and 50% true hypotheses with an effect size of d = .4. The only difference is that researchers used N = 200 to test their hypotheses to achieve 80% power.

With 80% power, we need 470 tests (compared to 1,428 in Scenario 2) to produce 200 significant results, 235*.80 + 235*.05 = 188 + 12 = 200. Thus, the EDR is 200/470 = 43%. The true false discovery rate is 6%. The expected replication rate is 188*.80 + 12*.05 = 76%. Thus, we see that higher power increases replicability from 20% to 76% and lowers the false discovery rate from 18% to 6%.

Figure 3 shows the z-curve plot. Visual inspection shows that Figure 3 looks very different from Figures 1 and 2. The estimates are also different. In this example, sampling error inflated the EDR to be 58%, but the 95%CI includes the true value of 46%. The 95%CI does not include the ODR. Thus, there is evidence for publication bias, which is also visible by the steep drop in the distribution at 1.96.

Even with a low EDR of 20%, the maximum FDR is only 21%. Thus, we can conclude with confidence that at least 79% of the significant results are true positives. Remember, in the previous scenario, we could not rule out that most results are false positives. Moreover, the estimated replication rate is 73%, which underestimates the true replication rate of 76%, but the 95%CI includes the true value, 95%CI = 61% – 84%. Thus, if these studies were replicated, we would have a high success rate for actual replication studies.

Just imagine for a moment what social psychology might look like in a parallel universe where social psychologists followed Cohen’s advice. Why didn’t they? The reason is that they did not have z-curve. All they had was p < .05, and using p < .05, all three scenarios are identical. All three scenarios produced 200 significant results. Moreover, as Finkel et al. (2015) pointed out, smaller samples produce 200 significant results quicker than large samples. An additional advantage of small samples is that they inflate point estimates of the population effect size. Thus, the social psychologists with the smallest samples could brag about the biggest (illusory) effect sizes as long as nobody was able to publish replication studies with larger samples that deflated effect sizes of d = .8 to d = .08 (Joy-Gaba & Nosek, 2010).

This game is over, but social psychology – and other social sciences – have published thousands of significant p-values, and nobody knows whether they were obtained using scenario 1, 2, or 3, or probably a combination of these. This is where z-curve can make a difference. P-values are no longer equal when they are considered as a data point from a p-value distribution. In scenario 1, a p-value of .01 and even a p-value of .001 has no meaning. In contrast, in scenario 3 even a p-value of .02 is meaningful and more likely to reflect a true positive than a false positive result. This means that we can use z-curve analyses of published p-values to distinguish between probably false and probably true positives.

I illustrate this with three concrete examples from a project that examined the p-value distributions of over 200 social psychologists (Schimmack, in preparation). The first example has the lowest EDR in the sample. The EDR is 11% and because there are only 210 tests, the 95%CI is wide and includes 5%.

The maximum EDR estimate is high with 41% and the 95%CI includes 100%. This suggests that we cannot rule out the hypothesis that most significant results are false positives. However, the replication rate is 57% and the 95%CI, 45% to 69%, does not include 5%. Thus, some tests tested true hypotheses, but we do not know which ones.

Visual inspection of the plot shows a different distribution than Figure 2. There are more just significant p-values, z = 2.0 to 2.2 and more large z-scores (z > 4). This shows more heterogeneity in power. A comparison of the ODR with the EDR shows that the ODR falls outside the 95%CI of the EDR. This is evidence of publication bias or the use of questionable research practices. One solution to the presence of publication bias is to lower the criterion for statistical significance. As a result, the large number of just significant results is no longer significant and the ODR decreases. This is a post-hoc correction for publication bias. For example, we can lower alpha to .005.

As expected, the ODR decreases considerably from 70% to 39%. In contrast, the EDR increases. The reason is that many questionable research practices produce a pile of just significant p-values. As these values are no longer used to fit the z-curve, it predicts a lot fewer non-significant p-values. The model now underestimates p-values between 2 and 2.2. However, these values do not seem to come from a sampling distribution. Rather they stick out like a tower. By excluding them, the p-values that are still significant with alpha = .005 look more credible. Thus, we can correct for the use of QRPs by lowering alpha and by examining whether these p-values produced interesting discoveries. At the same time, we can ignore the p-values between .05 and .005 and await replication studies to provide empirical evidence whether these hypotheses receive empirical support.

The second example was picked because it was close to the median EDR (33) and ERR (66) in the sample of 200 social psychologists.

The larger sample of tests (k = 1,529) helps to obtain more precise estimates. A comparison of the ODR, 76%, and the 95%CI of the EDR, 12% to 48%, shows that publication bias is present. However, with an EDR of 33%, the maximum FDR is only 11% and the upper limit of the 95%CI is 39%. Thus, we can conclude with confidence that fewer than 50% of the significant results are false positives, however numerous findings might be false positives. Only replication studies can provide this information.

In this example, lowering alpha to .005 did not align the ODR and the EDR. This suggests that these values come from a sampling distribution where non-significant results were not published. Thus, adjusting the there is no simple fix to adjust the significance criterion. In this situation, we can conclude that the published p-values are unlikely to be false positives, but that replication studies are needed to ensure that published significant results are not false positives.

The third example is the social psychologists with the highest EDR. In this case, the EDR is actually a little bit lower than the ODR, suggesting that there is no publication bias. The high EDR also means that the maximum FDR is very small and even the upper limit of the 95%CI is only 7%.

Another advantage of data without publication bias is that it is not necessary to exclude non-significant results from the analysis. Fitting the model to all p-values produces much tighter estimates of the EDR and the maximum FDR.

The upper limit of the 95%CI for the FDR is now 4%. Thus, we conclude that no more than 5% of the p-values less than .05 are false positives. Even p = .02 is unlikely to be a false positive. Finally, the estimated replication rate is 84% with a tight confidence interval ranging from 78% to 90%. Thus, most of the published p-values are expected to replicate in an exact replication study.

I hope these examples make it clear how useful it can be to evaluate single p-values with prior information about the p-values distribution of a lab. As labs differ in their research practices, significant p-values are also different. Only if we ignore the research context and focus on a single result p = .02 equals p = .02. But once we see the broader distribution, p-values of .02 can provide stronger evidence against the null-hypothesis than p-values of .002.

Implications

Cohen tried and failed to change the research culture of social psychologists. Meta-psychological articles have puzzled why meta-analyses of power failed to increase power (Maxwell, 2004; Schimmack, 2012; Sedelmeier & Gigerenzer, 1989). Finkel et al. (2015) provided an explanation. In a game where the winner publishes as many significant results as possible, the optimal strategy is to conduct as many studies as possible with low power. This strategy continues to be rewarded in psychology, where jobs, promotions, grants, and pay raises are based on the number of publications. Cohen (1990) said less is more, but that is not true in a science that does not self-correct and treats every p-value less than .05 as a discovery.

To improve psychology as a science, we need to change the incentive structure and author-wise z-curve analyses can do this. Rather than using p < .05 (or p < .005) as a general rule to claim discoveries, claims of discoveries can be adjusted to the research practices of a researchers. As demonstrated here, this will reward researchers who follow Cohen’s rules and punish those who use questionable practices to produce p-values less than .05 (or Bayes-Factors > 3) without evidential value. And maybe, there is a badge for credible p-values one day.

(incomplete) References

Richard, F. D., Bond, C. F., Jr., & Stokes-Zoota, J. J. (2003). One hundred years of social psychology quantitatively described. Review of General Psychology, 7, 331–363. http://dx.doi.org/10.1037/1089-2680.7.4.331

A New Look at the Implicit Revolution

Psychology is not a unified paradigmatic science. That is, it lacks an overarching theory like evolution theory in biology. In a science without an empirically grounded paradigm, progress is made very much like evolution made progress in a process of trial and error. Some ideas may thrive for a moment, but if they are not fruitful, they are discarded. The emergence of a new idea is often characterized as a revolution, and psychology has seen its fair share of revolutions. Behaviorism replaced introspectionism and the cognitive revolution replaced behaviorism. For better or worse, cognitivism is dominating psychology at the moment. The cognitive revolution also had a strong influence on social psychology with the rise of social cognition research.

In the early days, social psychologists focussed on higher cognitive processes like attributions. However, in the 1980s, the implicit revolution shifted focus towards lower cognitive processes that may occur without awareness. This was not the first time, unconscious processes became popular. A special issue in the American Psychologists in 1992 called it the New Look 3 (Greenwald, 1992).

The first look was Freud’s exploration of conscious and unconscious processes. A major hurdle for this first look was conceptual confusion and a lack of empirical support. Puritan academic may also have shied away from the sexual content in Freudian theories (e.g., sexual desire directed at the mother).

However, the second look did try to study many of Freud’s ideas with empirical methods. For example, Silverman and Weinberger (1985) presented the phrase “Mommy and I are one” on a computer screen so quickly that participants were unable to say what they saw. This method is called subliminal priming. The idea was that the unconscious has a longing to be loved by mommy and that presenting this phrase would gratify the unconscious. Numerous studies used the “Mommy and I are one” priming method to see effects on behavior.

Greenwald (1992) reviewed this evidence.

Can subliminal presentations result in cognitive analyses of multiword strings? There have been reports of such effects, especially in association with tests of psychoanalytic hypotheses. The best known of these findings (described as subliminal psychodynamic activation [SPA], using “Mommy and I are One” as the text of a subliminal stimulus; Silverman & Weinberger, 1985) has been identified, on the basis of meta-analysis, as a reproducible phenomenon (Hardaway, 1990; Weinberger & Hardaway, 1990).

Despite this strong evidence, many researchers remain skeptical about the SPA result (see, e.g., the survey reported in Appendix B). Such skepticism is almost certainly due to the lack of widespread enthusiasm for the SPA result’s proposed psychodynamic interpretation (Silverman & Weinberger, 1985).

Because of the positive affective values of words in the critical stimulus (especially Mommy and I) , it is possible that observed effects might be explained by cognitive analysis limited to the level of single words. Some support for that interpretation is afforded by Hardaway’s demonstration (1990, p. 183, Table 3) that other affectively positive strings that include Mommy or One also produce significant effects. However, these other effects are weaker than the effect of the specific string, “Mommy and I are One.”

In summary of evidence from studies of subliminal activation, it is now well established that analysis occurs for stimuli presented at exposure conditions in a region between objective and subjective thresholds; this analysis can extract at least some semantic content of single words.

The New Look 3, however, was less interested in Freudian theory. Most of the influential subliminal priming studies used ordinary stimuli to study common topics in social psychology, including prejudice.

For example, Greenwald (1992) cites Devine’s (1989) highly influential subliminal priming studies with racial stimuli as evidence that “experiments using stimulus conditions that are clearly above objective thresholds (but presumably below subjective thresholds) have obtained semantic activation findings with apparent relative ease” (p. 769).

25 years later, in their Implicit Revolution article, Greenwald and Banaji feature Devine’s influential article.

Patricia Devine’s (1989) dissertation research extended the previously mentioned subliminal priming methods of Bargh and Pietromonaco (1982) to automatic stereotypes. Devine’s article brought attention to the possibility of dissociation between automatic stereotype activation
and controlled inhibition of stereotype expression
” (p. 865).

In short, subliminal priming has played an important role in the implicit revolution. However, subliminal priming is still rare. Most studies use clearly visible stimuli. This is surprising, given the clear advantages of subliminal priming to study unconscious processes. A major concern with stimuli that are presented with awareness is that participants can control their behavior. In contrast, if they are not even aware that a racial stimulus was presented, they have no ability to supress a prejudice response.

Another revolution explains why subliminal studies remain rare despite their obvious advantages. This revolution has been called the credibility revolution, replication revolution, or open science revolution. The credibility revolution started in 2011, after a leading social cognition journal published a controversial article that showed time-reversed subliminal priming effects (Bem, 2011). This article revealed a fundamental problem in the way social psychologists conducted their research. Rather than using experiments to see whether effects exist, they used experiments to accumulate evidence in favor of effects. Studies that failed to show the expected effects were hidden. In the 2010s, it has become apparent that this flawed use of the scientific method has produced large literatures with results that cannot be replicated. A major replication project found that less than 25% of results in social psychological experiments could be replicated (OSC, 2015). Given these results, it is unclear which results provided credible evidence.

Despite these troubling findings, social psychologists continue to cite old studies like Devine’s (1989) study (it was just one study!) as if it provided conclusive evidence for subliminal priming of prejudice. If we need any evidence for Freud’s theory of repression, social psychologists would be a prime example. Through various defense mechanisms they maintain the belief that old findings that were obtained with bad scientific practices provided credible evidence that can inform our understanding of the unconscious.

Here I show that this is wishful thinking. To do so, I conducted a modern meta-analysis of subliminal priming studies. Unlike traditional meta-analysis that do not take publication bias into account, this new method provides a strong test of publication bias and corrects for its effect on the results. While there are several new methods, z-curve has been shown to be superior to other methods (Brunner & Schimmack, 2020).

The figure shows the results. The red line at z = 1.96, corresponds to the significance criterion of .05. It is easy to see that this criterion acts like a censor. Results with z-scores greater than 1.96 (i.e., p < .05) are made public and can enter researchers awareness. Results that are not significant, z < 1.06, are repressed and may linger only in the unconscious of researchers who prefer not to think about their failures.

Statistical evidence of repression is provided by a comparison of the observed discovery rate (i.e., the percentage of published results that are significant) of 90% and the expected discovery rate based on the z-curve model (i.e., the grey curve in the figure) of 13%. Evidently, published results are selected from a much larger number of analyses that failed to support subliminal priming. This clear evidence of selection for significance undermines the credibility of individual studies in the subliminal priming literature.

However, there is some evidence of heterogeneity across studies. This is seen in the increasing numbers below the x-axis. Whereas studies with z-scores below 4, have low average power, studies with z-scores above 4, have a mean power greater than 80%. This suggests that replications of these studies could produce significant results. This information could be used to salvage a few solid findings from a pile of junk findings. Closer examination of these studies is beyond the purpose of this blog post, and Devine’s study is not one of them.

The main point of this analysis is that there is strong scientific evidence to support the claim that subliminal priming researchers did not use the scientific method properly. By selecting only results that support the existence of subliminal priming, they created only illusory evidence in support of subliminal priming. Thirty years after Devine’s (1989) subliminal prejudice study was published, we have no scientific evidence in support of the claim that racial stimuli can bypass consciousness and directly influence behavior.

However, Greenwald and other social psychologists who made a career out of these findings repress the well-known fact that published results in experimental social psychology are not credible and cite them as if they are credible evidence (Greenwald & Banaj, 2017).

Social psychologists are of course very familiar with deception. First, they became famous for deceiving participants (Milgram studies). In 2011, it became apparent that they were deceiving themselves. Now, it seems they are willing to deceive others to avoid facing the inconvenient truth that decades of research have produced no scientific results.

The inability to face ego-threatening information is of course not new to psychologists. Freud studied defense mechanisms and social psychologists studied cognitive biases and motivated reasoning. Right now, this trait is on display in Donald Trump and his supporters inability to face the fact that he lost an election. It is ironic that social psychologists have the same inability when their own egos are on the line.

Francis’s Audit of Multiple-Study Articles in Psychological Science in 2009-2012

Citation: Francis G., (2014). The frequency of excess success for articles
in Psychological Science. Psychon Bull Rev (2014) 21:1180–1187
DOI 10.3758/s13423-014-0601-x

Introduction

The Open Science Collaboration article in Science has over 1,000 articles (OSC, 2015). It showed that attempting to replicate results published in 2008 in three journals, including Psychological Science, produced more failures than successes (37% success rate). It also showed that failures outnumbered successes 3:1 in social psychology. It did not show or explain why most social psychological studies failed to replicate.

Since 2015 numerous explanations have been offered for the discovery that most published results in social psychology cannot be replicated: decline effect (Schooler), regression to the mean (Fiedler), incompetent replicators (Gilbert), sabotaging replication studies (Strack), contextual sensitivity (vanBavel). Although these explanations are different, they share two common elements, (a) they are not supported by evidence, and (b) they are false.

A number of articles have proposed that the low replicability of results in social psychology are caused by questionable research practices (John et al., 2012). Accordingly, social psychologists often investigate small effects in between-subject experiments with small samples that have large sampling error. A low signal to noise ratio (effect size/sampling error) implies that these studies have a low probability of producing a significant result (i.e., low power and high type-II error probability). To boost power, researchers use a number of questionable research practices that inflate effect sizes. Thus, the published results provide the false impression that effect sizes are large and results are replicated, but actual replication attempts show that the effect sizes were inflated. The replicability projected suggested that effect sizes are inflated by 100% (OSC, 2015).

In an important article, Francis (2014) provided clear evidence for the widespread use of questionable research practices for articles published from 2009-2012 (pre crisis) in the journal Psychological Science. However, because this evidence does not fit the narrative that social psychology was a normal and honest science, this article is often omitted from review articles, like Nelson et al’s (2018) ‘Psychology’s Renaissance’ that claims social psychologists never omitted non-significant results from publications (cf. Schimmack, 2019). Omitting disconfirming evidence from literature reviews is just another sign of questionable research practices that priorities self-interest over truth. Given the influence that Annual Review articles hold, many readers maybe unfamiliar with Francis’s important article that shows why replication attempts of articles published in Psychological Science often fail.

Francis (2014) “The frequency of excess success for articles in Psychological Science”

Francis (2014) used a statistical test to examine whether researchers used questionable research practices (QRPs). The test relies on the observation that the success rate (percentage of significant results) should match the mean power of studies in the long run (Brunner & Schimmack, 2019; Ioannidis, J. P. A., & Trikalinos, T. A., 2007; Schimmack, 2012; Sterling et al., 1995). Statistical tests rely on the observed or post-hoc power as an estimate of true power. Thus, mean observed power is an estimate of the expected number of successes that can be compared to the actual success rate in an article.

It has been known for a long time that the actual success rate in psychology articles is surprisingly high (Sterling, 1995). The success rate for multiple-study articles is often 100%. That is, psychologists rarely report studies where they made a prediction and the study returns a non-significant results. Some social psychologists even explicitly stated that it is common practice not to report these ‘uninformative’ studies (cf. Schimmack, 2019).

A success rate of 100% implies that studies required 99.9999% power (power is never 100%) to produce this result. It is unlikely that many studies published in psychological science have the high signal-to-noise ratios to justify these success rates. Indeed, when Francis applied his bias detection method to 44 studies that had sufficient results to use it, he found that 82 % (36 out of 44) of these articles showed positive signs that questionable research practices were used with a 10% error rate. That is, his method could at most produce 5 significant results by chance alone, but he found 36 significant results, indicating the use of questionable research practices. Moreover, this does not mean that the remaining 8 articles did not use questionable research practices. With only four studies, the test has modest power to detect questionable research practices when the bias is relatively small. Thus, the main conclusion is that most if not all multiple-study articles published in Psychological Science used questionable research practices to inflate effect sizes. As these inflated effect sizes cannot be reproduced, the effect sizes in replication studies will be lower and the signal-to-noise ratio will be smaller, producing non-significant results. It was known that this could happen since 1959 (Sterling, 1959). However, the replicability project showed that it does happen (OSC, 2015) and Francis (2014) showed that excessive use of questionable research practices provides a plausible explanation for these replication failures. No review of the replication crisis is complete and honest, without mentioning this fact.

Limitations and Extension

One limitation of Francis’s approach and similar approaches like my incredibility Index (Schimmack, 2012) is that p-values are based on two pieces of information, the effect size and sampling error (signal/noise ratio). This means that these tests can provide evidence for the use of questionable research practices, when the number of studies is large, and the effect size is small. It is well-known that p-values are more informative when they are accompanied by information about effect sizes. That is, it is not only important to know that questionable research practices were used, but also how much these questionable practices inflated effect sizes. Knowledge about the amount of inflation would also make it possible to estimate the true power of studies and use it as a predictor of the success rate in actual replication studies. Jerry Brunner and I have been working on a statistical method that is able to to this, called z-curve, and we validated the method with simulation studies (Brunner & Schimmack, 2019).

I coded the 195 studies in the 44 articles analyzed by Francis and subjected the results to a z-curve analysis. The results are shocking and much worse than the results for the studies in the replicability project that produced an expected replication rate of 61%. In contrast, the expected replication rate for multiple-study articles in Psychological Science is only 16%. Moreover, given the fairly large number of studies, the 95% confidence interval around this estimate is relatively narrow and includes 5% (chance level) and a maximum of 25%.

There is also clear evidence that QRPs were used in many, if not all, articles. Visual inspection shows a steep drop at the level of significance, and the only results that are not significant with p < .05 are results that are marginally significant with p < .10. Thus, the observed discovery rate of 93% is an underestimation and the articles claimed an amazing success rate of 100%.

Correcting for bias, the expected discovery rate is only 6%, which is just shy of 5%, which would imply that all published results are false positives. The upper limit for the 95% confidence interval around this estimate is 14, which would imply that for every published significant result there are 6 studies with non-significant results if file-drawring were the only QRP that was used. Thus, we see not only that most article reported results that were obtained with QRPs, we also see that massive use of QRPs was needed because many studies had very low power to produce significant results without QRPs.

Conclusion

Social psychologists have used QRPs to produce impressive results that suggest all studies that tested a theory confirmed predictions. These results are not real. Like a magic show they give the impression that something amazing happened, when it is all smoke and mirrors. In reality, social psychologists never tested their theories because they simply failed to report results when the data did not support their predictions. This is not science. The 2010s have revealed that social psychological results in journals and text books cannot be trusted and that influential results cannot be replicated when the data are allowed to speak. Thus, for the most part, social psychology has not been an empirical science that used the scientific method to test and refine theories based on empirical evidence. The major discovery in the 2010s was to reveal this fact, and Francis’s analysis provided valuable evidence to reveal this fact. However, most social psychologists preferred to ignore this evidence. As Popper pointed out, this makes them truly ignorant, which he defined as “the unwillingness to acquire knowledge.” Unfortunately, even social psychologists who are trying to improve it wilfully ignore Francis’s evidence that makes replication failures predictable and undermines the value of actual replication studies. Given the extent of QRPs, a more rational approach would be to dismiss all evidence that was published before 2012 and to invest resources in new research with open science practices. Actual replication failures were needed to confirm predictions made by bias tests that old studies cannot be trusted. The next decade should focus on using open science practices to produce robust and replicable findings that can provide the foundation for theories.

When DataColada kissed Fiske’s ass to publish in Annual Review of Psychology

One of the worst articles about the decade of replication failures is the “Psychology’s Renaissance” article by the datacolada team (Leif Nelson, Joseph Simmons, & Uri Simonsohn).

This is not your typical Annual Review article that aims to give a review over developments in the field. it is an opinion piece filled with bold claims that lack empirical evidence.

The worst claim is that p-hacking is so powerful that pretty much every study can be made to work.

Experiments that work are sent to a journal, whereas experiments that fail are sent to the file drawer (Rosenthal 1979). We believe that this “file-drawer explanation” is incorrect. Most failed studies are not missing. They are published in our journals, masquerading as successes.

We can all see that not publishing failed studies is a bit problematic. Even Bem’s famous manual for p-hackers warned that it is unethical to hide contradictory evidence. “The integrity of the scientific enterprise requires the reporting of disconfirming results” (Bem). Thus, the idea that researchers are sitting on a pile of failed studies that they failed to disclose makes psychologists look bad and we can’t have that in Fiske’s Annual Review of Psychology journal. Thus, psychologists must have been doing something that is not dishonest and can be sold as normal science.

“P-hacking is the only honest and practical way to consistently get underpowered studies to be statistically significant. Researchers did not learn from experience to increase their sample sizes precisely because their underpowered studies were not failing.” (p. 515).

This is utter nonsense. First, researchers have file-drawers of studies that did not work. Just ask them and they may tell you that they do.

“We did run multiple studies, some of which did not work, and some of which worked better than others. You may think that not reporting the less successful studies is wrong, but that is how the field works.” (Roy Baumeister, personal email communication)

Leading social psychologists, Gilbert and Wilson provide an even more detailed account of their research practices that produce many non-significant results that are not reported (a.k.a. a file drawer), which has been preserved thanks to Greg Francis.

First, it’s important to be clear about what “publication bias” means. It doesn’t mean that anyone did anything wrong, improper, misleading, unethical, inappropriate, or illegal. Rather it refers to the well known fact that scientists in every field publish studies whose results tell them something interesting about the world, and don’t publish studies whose results tell them nothing. Francis uses sophisticated statistical tools to discover what everyone already knew—and what he could easily have discovered simply by asking us. Yes, of course we ran some studies on “consuming experience” that failed to show interesting effects and are not reported in our JESP paper. Let us be clear: We did not run the same study over and over again until it yielded significant results and then report only the study that “worked.” Doing so would be clearly unethical. Instead, like most researchers who are developing new methods, we did some preliminary studies that used different stimuli and different procedures and that showed no interesting effects. Why didn’t these studies show interesting effects? We’ll never know. Failed studies are often (though not always) inconclusive, which is why they are often (but not always) unpublishable. So yes, we had to mess around for a while to establish a paradigm that was sensitive and powerful enough to observe the effects that we had hypothesized. In one study we might have used foods that didn’t differ sufficiently in quality, in another we might have made the metronome tick too fast for people to chew along. Exactly how good a potato chip should be and exactly how fast a person can chew it are the kinds of mundane things that scientists have to figure out in preliminary testing, and they are the kinds of mundane things that scientists do not normally report in journals (but that they informally share with other scientists who work on similar phenomenon). Looking back at our old data files, it appears that in some cases we went hunting for potentially interesting mediators of our effect (i.e., variables that might make it larger or smaller) and although we replicated the effect, we didn’t succeed in making it larger or smaller. We don’t know why, which is why we don’t describe these blind alleys in our paper. All of this is the hum-drum ordinary stuff of day-to-day science.

Aside from this anecdotal evidence, the datacolada crew actually had access to empirical evidence in an article that they cite, but maybe never read. An important article in the 2010s reported a survey of research practices (John, Loewenstein, & Prelec, 2012). The survey asked about several questionable research practices, including not reporting entire studies that failed to support the main hypothesis.

Not reporting studies that “did not work” was the third most frequently used QRP. Unfortunately, this result contradicts datacolada’s claim that there are no studies in file-drawers and so they ignore this inconvenient empirical fact to tell their fairy tail of honest p-hackers that didn’t know better until 2011 when they published their famous “False Positive Psychology” article.

This is a cute story that isn’t supported by evidence, but that has never stopped psychologists from writing articles that advance their own career. The beauty of review articles is that you don’t even have to phack data. You just pick and choose citations or make claims without evidence. As long as the editor (Fiske) likes what you have to say, it will be published. Welcome to psychology’s renaissance; same bullshit as always.