Replicability Index: A Blog by Dr. Ulrich Schimmack

Blogging about statistical power, replicability, and the credibility of statistical results in psychology journals since 2014. Home of z-curve, a method to examine the credibility of published statistical results.

Show your support for open, independent, and trustworthy examination of psychological science by getting a free subscription. Register here.

“For generalization, psychologists must finally rely, as has been done in all the older sciences, on replication” (Cohen, 1994).

DEFINITION OF REPLICABILITY

In empirical studies with sampling error, replicability refers to the probability that a study with a significant result would produce a significant result again in an exact replication of the original study with the same sample size and significance criterion (Schimmack, 2017).

See Reference List at the end for peer-reviewed publications.

Mission Statement

The purpose of the R-Index blog is to increase the replicability of published results in psychological science and to alert consumers of psychological research about problems in published articles.

To evaluate the credibility or “incredibility” of published research, my colleagues and I developed several statistical tools, such as the Incredibility Test (Schimmack, 2012), the Test of Insufficient Variance (Schimmack, 2014), and z-curve (Version 1.0: Brunner & Schimmack, 2020; Version 2.0: Bartoš & Schimmack, 2021).

I have used these tools to demonstrate that several claims in psychological articles are incredible (a.k.a., untrustworthy), starting with Bem’s (2011) outlandish claims of time-reversed causal pre-cognition (Schimmack, 2012). This article triggered a crisis of confidence in the credibility of psychology as a science. 

Over the past decade it has become clear that many other seemingly robust findings are also highly questionable. For example, I showed that many claims in Nobel Laureate Daniel Kahneman’s book “Thinking, Fast and Slow” are based on shaky foundations (Schimmack, 2020). An entire book on unconscious priming effects by John Bargh also ignores replication failures and lacks credible evidence (Schimmack, 2017). The hypothesis that willpower is fueled by blood glucose and easily depleted is also not supported by empirical evidence (Schimmack, 2016). In general, many claims in social psychology are questionable and require new evidence to be considered scientific (Schimmack, 2020).

Each year I post new information about the replicability of research in 120 psychology journals (Schimmack, 2021). I have also started providing information about the replicability of individual researchers, along with guidelines on how to evaluate their published findings (Schimmack, 2021).

Replication is essential for an empirical science, but it is not sufficient. Psychology also has a validation crisis (Schimmack, 2021). That is, measures are often used before it has been demonstrated how well they measure what they are supposed to measure. For example, psychologists have claimed that they can measure individuals’ unconscious evaluations, but there is no evidence that unconscious evaluations even exist (Schimmack, 2021a, 2021b).

If you are interested in how I ended up becoming a meta-critic of psychological science, you can read my story here (my journey).

References

Brunner, J., & Schimmack, U. (2020). Estimating population mean power under conditions of heterogeneity and selection for significance. Meta-Psychology, 4, MP.2018.874, 1–22.
https://doi.org/10.15626/MP.2018.874

Schimmack, U. (2012). The ironic effect of significant results on the credibility of multiple-study articles. Psychological Methods, 17, 551–566.
http://dx.doi.org/10.1037/a0029487

Schimmack, U. (2020). A meta-psychological perspective on the decade of replication failures in social psychology. Canadian Psychology/Psychologie canadienne, 61(4), 364–376. 
https://doi.org/10.1037/cap0000246


An Introduction to Z-Curve 3.0 Options

All options are set as global variables when the functions are installed with source(zcurve3). Afterwards, they can be changed like any other R object.
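For example, a typical session might look like the following sketch. The file name and the name of the main fitting function are placeholders for whatever your local copy of the script uses; only the option names come from the list below.

# Minimal usage sketch (file and function names are placeholders)
source("zcurve3.R")      # installs the z-curve 3.0 functions and sets all global defaults
Est.Method <- "EM"       # afterwards, change any option like a normal R object
boot.iter <- 500         # e.g., request bootstrapped confidence intervals
# fit <- Zing(z.values)  # hypothetical call to the main fitting function with a vector of z-values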

1. Curve Type: Default Z-Values, Option to Fit t-Distributions with a Fixed df

CURVE.TYPE <- "z" # Set to "t" for t-distributions
df <- c() # Set to the df of the t-distribution

2. Speed Control Parameters

parallel <- FALSE # Placeholder – parallel functionality not yet implemented
max_iter <- 1e6 # Max iterations for model estimation
max_iter_boot <- 1e5 # Max iterations for bootstrapped estimates

EM.criterion <- 1e-3 # Convergence threshold for EM algorithm
EM.max.iter <- 1000 # Max iterations for EM

Plot.Fitting <- FALSE # Plot fitting curve (only for Est.Method = "OF" or "EXT")

PLOT SETTINGS

Title <- "" # Optional plot title

letter.size <- 1 # Text size in plots
letter.size.1 <- letter.size # Used for version labels in plot
y.line.factor <- 3 # Controls spacing of plot text

x.lim.min <- 0 # X-axis lower bound
x.lim.max <- 6 # X-axis upper bound
ymax <- 0.6 # Y-axis upper bound
ymin <- 0 # OUTDATED Y-axis lower bound (for label space)

Show.Histogram <- TRUE # Toggle histogram in plot
Show.Text <- TRUE # Toggle model results in plot
Show.Curve.All <- TRUE # Show predicted z-curve
Show.Curve.Sig <- FALSE # Option: show z-curve only for significant values
Show.Significance <- TRUE # Show z = critical value line
Show.KD <- FALSE # Toggle kernel density overlay (density method only)

sig.levels <- c() # Optional: mark additional p-value thresholds on plot

int.loc <- 0.5 # Plot local power intervals below x-axis (set 0 to disable)
hist.bar.width <- 0.2 # Width of histogram bars
bw.draw <- 0.10 # Smoothing for kernel density display

CONSOLE OUTPUT

Show.Iterations <- TRUE # Show iterations for slow procedures (e.g., EXT, TEST4HETEROGENEITY)

MODEL PARAMETERS

alpha <- 0.05 # Significance level
crit <- qnorm(1 - alpha / 2) # Corresponding two-sided critical z

two.sided <- TRUE # Assume two-sided z-values (use abs(z)); not yet compatible with signed z-values

COLOR SCHEME

col.curve <- "violetred3"
col.hist <- "blue3"
col.kd <- "green3"

Est.Method <- "OF" # Estimation method: "OF", "EM", or "EXT"; clustered data: "CLU-W" (weighted), "CLU-B" (bootstrap)
Int.Beg <- 1.96 # Start of modeling interval (default: critical value for alpha = .05)
Int.End <- 6 # End of modeling interval (z > 6 is treated as power = 1)

ncp <- 0:6 # Component locations (z-values at which densities are centered)
components <- length(ncp) # Number of components
zsd <- 1 # SD of standard normal z-distribution
zsds <- rep(zsd, components) # One SD for each component

just <- 0.8 # Cutoff for “just significant” z-values (used in optional bias test)

ZSDS.FIXED <- FALSE # Fix SD values for EXT method
NCP.FIXED <- FALSE # Fix non-centrality parameter (NCP) means for EXT method
W.FIXED <- FALSE # Fix weights for EXT method

fixed.false.positives <- 0 # If > 0, constrains proportion of false positives (e.g., weight for z = 0 component)

DENSITY-BASED SETTINGS (Only used with Est.Method = "OF")

n.bars <- 512 # Number of bars in histogram

Augment <- TRUE # Apply correction for bias at lower bound
Augment.Regression <- FALSE # Use Slope for Augmentation
Augment.Factor <- 1 # Amount of augmentation

bw.est <- 0.05 # Bandwidth for kernel density (lower = less smoothing, higher = more smoothing)
bw.aug <- .20 # Width of Augmentation interval

INPUT RESTRICTIONS

MAX.INP.Z <- Inf # Optionally restrict very large z-values (set Inf to disable)

CONFIDENCE INTERVALS / BOOTSTRAPS

boot.iter <- 0 # Number of bootstrap iterations (suggest 500+ for final models)
ERR.CI.adjust <- 0.03 # Conservative widening of confidence intervals for ERR
EDR.CI.adjust <- 0.05 # Conservative widening for EDR

CI.ALPHA <- 0.05 # CI level (default = 95%)

CI levels for Heterogeneity Test

fit.ci <- c(.01, .025, .05, .10, .17, .20, .50, .80, .83, .90, .95, .975, .99) # CI levels for model fit test

TEST4BIAS <- FALSE # Enable optional bias test
TEST4HETEROGENEITY <- 0 # Optional heterogeneity test (slow) — set number of bootstrap iterations

Concerns About Z-Curve: Evidence From New Simulations With Few Studies

Scientific progress depends on criticism, especially when it is used to identify limitations of statistical methods and to improve them. z-curve is no exception. Over the past year, several critiques have raised questions about the robustness of z-curve estimates, particularly with respect to the expected discovery rate (EDR). These critiques deserve careful examination, but they also require accurate characterization of what z-curve assumes, what it estimates, and under which conditions its estimates are informative.

Two recent lines of criticism are worth distinguishing. First, Pek et al. (2025) show that z-curve estimates can be biased when the publication process deviates from the assumed selection model. The default z-curve model assumes that selection operates primarily on statistical significance at the conventional α = .05 threshold (z = 1.96). Pek et al. demonstrate that if researchers also suppress statistically significant results with small effect sizes—for example, not publishing a result with p = .04 because the standardized mean difference is only d = .40—then z-curve estimates can become optimistic. This result is correct: z-curve cannot diagnose selective reporting based on effect size rather than statistical significance.
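The directional consequence of this selection mechanism can be illustrated with a small simulation. The sketch below uses invented values (true d = .30, n = 50 per group, a minimum observed effect size of d = .50), not Pek et al.'s code; it shows that adding an effect-size filter strips out just-significant z-values and shifts the retained distribution to the right, which is why power-based estimates become optimistic.

set.seed(1)
n <- 50                                       # per-group sample size
se <- sqrt(2 / n)                             # approximate standard error of d
d.obs <- rnorm(10000, mean = 0.30, sd = se)   # observed effect sizes around a true d = .30
z <- d.obs / se                               # corresponding z-values
sig <- z > qnorm(.975)                        # selection on significance only (p < .05, right tail)
sig.big <- sig & d.obs > 0.50                 # significance plus a minimum observed effect size
mean(z[sig]); mean(z[sig.big])                # the second mean is larger, so estimated power is inflated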

There is limited direct evidence that routine selection on effect-size magnitude (beyond statistical significance) is widespread; the QRPs most commonly reported in self-surveys are largely significance-focused (John et al., 2012). In any case, imperfect correction is not a reason to ignore selection bias entirely, because uncorrected meta-analyses can markedly overestimate population effects and replicability (Carter et al., 2019).

Moreover, the selection mechanism examined by Pek et al. has a clear directional implication: when statistically significant results are additionally filtered by effect size, z-curve’s estimates of EDR and ERR can be biased upward. This matters for interpretation. If z-curve already yields low EDR or ERR estimates, then the type of misspecification studied by Pek et al. would, if present in practice, imply that the underlying parameters could be even lower. For example, an estimated EDR of 20% under the default selection model could correspond to a substantially lower true discovery rate if significant-but-small effects are systematically suppressed. Whether such effect-size–based suppression is common enough to materially affect typical applications remains an empirical question.

A second critique has been advanced by Erik van Zwet, a biostatistician who has applied models of z-value distributions developed in genomics to meta-analyses of medical trials. These models were designed for settings in which the full set of test statistics is observed and therefore do not incorporate selection bias. When applied to literatures where selection bias is present, such models can yield biased estimates. In contrast, z-curve is explicitly designed to assess the presence of selection bias and to correct for it, when it is present. When no bias is present, z-curve can also be fitted to the full z-curve, including non-significant results.

van Zwet has published a few blog posts arguing that z-curve performs poorly when estimating the expected discovery rate (EDR). Importantly, his simulations do not show problems for the expected replication rate (ERR). Thus, z-curve’s ability to estimate the average true power of published significant results is not in question. The disputed issue concerns inference about the broader population of studies, including unpublished nonsignificant results.

Some aspects of this critique require clarification. van Zwet has suggested that z-curve was evaluated only in a small number of simulations. This is incorrect. Prior work includes two large simulation studies—one conducted by František Bartoš and one conducted by me—that examined EDR confidence-interval coverage across a wide range of conditions. Based on these results, the width of the nominal 95% confidence intervals was conservatively expanded by ±5 percentage points to achieve near-nominal coverage across a wide range of realistic scenarios (see details below). Thus, EDR interval estimation was already empirically validated across many conditions with 100 or more significant results.

However, these simulations did not examine performance of z-curve with small sets of significant results. Because z-curve can technically be fit with as few as 10 significant results, it is reasonable to ask whether EDR confidence-interval coverage remains adequate when the number of significant studies is substantially smaller than 100. To address this question directly, I conducted a new simulation study focusing on the case of 50 significant results.

In addition, I introduced two diagnostics designed to assess when EDR estimation is likely to be weakly identified. Estimation of the EDR relies disproportionately on significant results from low-powered studies or false positives, because these observations provide information about the number of missing nonsignificant results. When nearly all significant results come from highly powered studies, the observed z-value distribution contains little information about what is missing. The first diagnostic therefore counts how many significant z-values fall in the interval from 1.96 to 2.96. Very small counts in this range signal that EDR estimates are driven by limited information. The second diagnostic examines the slope of the z-value density in this interval. A decreasing slope indicates information consistent with a mixture that includes low-powered studies, whereas an increasing slope reflects dominance of high-powered studies and weak identification of the EDR.
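Both diagnostics can be computed from the significant z-values alone. The following sketch uses toy data; the slope is obtained, as described later in this post, by regressing kernel-density estimates in the critical range on z.

set.seed(1)
z <- abs(rnorm(500, mean = 1.5, sd = 1))                    # toy data: z-values from low-powered studies
z.sig <- z[z > qnorm(.975)]                                 # keep only the significant results
sum(z.sig < 2.96)                                           # diagnostic 1: significant z-values between 1.96 and 2.96
dens <- density(z.sig, from = 1.96, to = 2.96, bw = 0.10)   # density just above the significance threshold
coef(lm(dens$y ~ dens$x))[2]                                # diagnostic 2: slope; negative = informative, positive = weak identification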

Reproducible Results of Simulation Study with 50 Significant Results

The simulation used a fully crossed factorial design in which four parameters were varied, yielding 192 conditions. Population-level standardized mean differences were set to 0, .2, .4, or .6. Heterogeneity was modeled using normally distributed effect sizes with standard deviations (τ) of 0, .2, .4, or .6. In addition, a separate population of true null studies was included, with the proportion of false discoveries among significant results set to 0, .2, .4, or .6. Sample sizes varied across conditions, starting at n = 50 (25 observations per group). For each condition, each simulated dataset contained exactly 50 significant results.
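For concreteness, the following sketch shows one way to generate the significant z-values for a single cell of this design. The parameter names and the exact sampling scheme are my own compressed illustration of the design described above; the actual simulation code is linked below.

set.seed(1)
d <- 0.4; tau <- 0.2; fdr <- 0.2; n <- 50; k.sig <- 50   # one cell of the design
crit <- qnorm(.975)
k.null <- round(fdr * k.sig); k.true <- k.sig - k.null
sim.sig <- function(k, ncp.fun) {                        # draw k significant |z| values, redrawing until significant
  replicate(k, {
    z <- 0
    while (abs(z) < crit) z <- rnorm(1, mean = ncp.fun(), sd = 1)
    abs(z)
  })
}
z.null <- sim.sig(k.null, function() 0)                               # false positives
z.true <- sim.sig(k.true, function() rnorm(1, d, tau) / sqrt(2 / n))  # heterogeneous true effects
z.sig <- c(z.null, z.true)                               # exactly 50 significant results enter the z-curve analysis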

The simulation code is available here. The results are available here.

Across all scenarios, coverage is 96%. The percentage is higher than the nominal 95% because the conservative adjustment of the confidence intervals produces higher coverage in less challenging scenarios.

The slope diagnostic works as expected. When the slope is decreasing, coverage is 97%; when the slope is increasing, coverage drops to 83%. Increasing slopes are more likely to lead to overestimation than underestimation of the EDR (75%). Increasing slopes occurred in only 5% of all simulations because these scenarios require that the majority of studies have more than 50% power, which in turn requires large samples and moderate to large effect sizes.

The number of z-values in the range between 1.96 and 2.96 also matters. At least 12 values in this range are needed to reach 95% coverage. However, the slope criterion is more diagnostic than the number of z-values in this range.

A logistic regression with CI coverage (yes = 1, no = 0) as the outcome and slope direction, effect size (d), heterogeneity (SD), sampling error (2/sqrt(N)), and the FDR proportion as predictors showed strong effects of slope direction and FDR as well as a slope direction × FDR interaction. Based on these results, I limited the subsequent analysis to scenarios with decreasing or flat slopes.
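In R, this analysis amounts to a binomial GLM. The sketch below assumes a data frame res with one row per simulated dataset and with my own column names (covered, slope.up, d, sd, se, fdr), which will differ from the variable names in the linked results file.

# covered: 1 if the adjusted 95% CI contained the true EDR; slope.up: 1 if the slope was increasing
fit <- glm(covered ~ slope.up * fdr + d * sd + se, family = binomial, data = res)
summary(fit)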

The effect of FDR remained significant (b = 3.55, SE = 1.47), as did the main effect of effect size (b = −2.33, SE = 1.01) and the effect size × SD interaction (b = 6.93, SE = 2.99), indicating systematic variation in coverage across conditions.

These effects are explained by how the design parameters shape the distribution of observed z-values in the critical range used to estimate the EDR (1.96–2.96). Higher FDR values imply a larger proportion of true null effects, which produces a steeper declining slope in the truncated z-distribution and increases information about the mass of missing non-significant results. In contrast, larger effect sizes generate a greater share of high-powered studies with z-values well above the truncation point, which reduces the relative influence of marginally significant results and makes the EDR less identifiable from the observed distribution.

The significant effect size × SD interaction reflects the moderating role of heterogeneity. When heterogeneity is present, even large average effect sizes produce a mixture of moderate- and high-power studies, increasing the density of z-values near the significance threshold and partially restoring information about missing results. As a consequence, the adverse effect of large average effect sizes on coverage is attenuated when heterogeneity is non-zero.

Overall, the most challenging scenarios for EDR estimation are characterized by low heterogeneity and shallow slopes in the just-significant range. In these settings, the observed z-distribution contains limited information about the unobserved, non-significant portion of the distribution, so EDR is weakly identified from the selected data alone.

Inspection of the 192 design cells indicates that the largest coverage shortfalls are concentrated in homogeneous conditions, especially when SD = 0 and FDR = 0. This limitation of the default discrete mixture approximation under near-homogeneity has been documented previously (Brunner & Schimmack, 2020). In practice, it can be addressed by fitting a homogeneity-appropriate specification, such as a single-component model with a free mean and normally distributed heterogeneity (with SD allowed to approach 0), as implemented in z-curve 3.0.

Restricting attention to scenarios with heterogeneous data (SD > .2), 89% of conditions achieved at least 95% coverage, and only 2 conditions (1.4%) fell below 90% coverage. Thus, even with adjusted confidence intervals, nominal coverage is not guaranteed in all edge cases. The remaining coverage problems arise for two reasons: (a) the selected z-distribution can be nearly uninformative about the amount of missing, non-significant evidence when the just-significant slope is shallow, and (b) the default heterogeneous specification can be misspecified when applied to approximately homogeneous data. A conservative response to the lower coverage in some cases is to report the intervals as 85% confidence intervals, even though coverage is above 95% on average.

Conclusion

Pek et al. and van Zwet have raised broad concerns about z-curve’s estimates of the expected discovery rate (EDR), which is used to assess publication bias and to quantify the extent of missing nonsignificant results. Their arguments rely heavily on a small set of stylized scenarios. These scenarios do not show that z-curve generally produces untrustworthy results. In contrast, prior large-scale simulation studies, together with the present extension to datasets with only 50 significant results, indicate that z-curve’s EDR confidence intervals achieve near-nominal coverage across a wide range of conditions, including many that are plausible for applied research.

Importantly, the new simulations also validate simple diagnostics that indicate when EDR estimation is likely to be less reliable. In particular, the shape of the significant z-value distribution in the critical range from 1.96 to 2.96—especially whether the density decreases or increases just above the significance threshold—helps identify weak-information regimes in which EDR may be overestimated and confidence-interval coverage may be reduced. Users can therefore compare the near-threshold shape of their observed z-value distribution to the patterns observed in simulation to assess whether EDR should be interpreted cautiously in a given application.

Overall, these results support the conclusion that z-curve provides credible estimates of the expected replication rate (ERR) and, under diagnostically identifiable conditions, the expected discovery rate (EDR), and that these quantities remain useful for evaluating the credibility of literatures in which selection on statistical significance is present.

How to Avoid Pek as Reviewer 2

The pattern is now familiar. I received another anonymous Reviewer 2 report on a z-curve article that repeated Pek’s concerns about the performance of z-curve. To deal with biased reviewers, journals allow authors to name potentially biased reviewers who should be excluded. I suggest doing so for Pek. I also suggest sharing a manuscript with me to ensure proper interpretation of the results and to make it “reviewer-safe.”

To justify the claim that Pek is biased, researchers can use this rebuttal of Pek’s unscientific claims about z-curve.

Reviewer 2 (either Pek or a Pek parrot)

Reviewer Report:

The manuscript “A review and z-curve analysis of research on the palliative association of system justification” (Manuscript ID 1598066) extends the work of Sotola and Credé (2022), who used Z-curve analysis to evaluate the evidential value of findings related to system justification theory (SJT). The present paper similarly reports estimates of publication bias, questionable research practices (QRPs), and replication rates in the SJT literature using Z-curve. Evaluating how scientific evidence accumulates in the published literature is unquestionably important.

However, there is growing concern about the performance of meta-analytic forensic tools such as p-curve (Simonsohn, Nelson, & Simmons, 2014; see Morey & Davis-Stober, 2025 for a critique) and Z-curve (Brunner & Schimmack, 2020; Bartoš & Schimmack, 2022; see Pek et al., in press for a critique). Independent simulation studies increasingly suggest that these methods may perform poorly under realistic conditions, potentially yielding misleading results.

Justification for a theory or method typically requires subjecting it to a severe test (Mayo, 2019) – that is, assuming the opposite of what one seeks to establish (e.g., a null hypothesis of no effect) and demonstrating that this assumption leads to contradiction. In contrast, the simulation work used to support Z-curve (Brunner & Schimmack, 2020; Bartoš & Schimmack, 2022) relies on affirming belief through confirmation, a well-documented cognitive bias.

Findings from Pek et al. (in press) show that when selection bias is present in published p-values — the very scenario to which Z-curve was intended to be applied — estimates of the expected discovery rate (EDR), expected replication rate (ERR), and Sorić’s False Discovery Risk (FDR) are themselves biased.

The magnitude and direction of this bias depend on multiple factors (e.g., number of p-values, selection mechanism of p-values) and cannot be corrected or detected from empirical data alone. The manuscript’s main contribution rests on the assumption that Z-curve yields reasonable estimates of the “reliability of published studies,” operationalized as a high ERR, and that the difference between the observed discovery rate (ODR) and EDR quantifies the extent of QRPs and publication bias.

The paper reports an ERR of .76, 95% CI [.53, .91] and concludes that research on the palliative hypothesis may be more reliable than findings in many other areas of psychology. There are several issues with this claim. First, the assertion that Sotola (2023) validated ERR estimates from the Z-curve reflects confirmation bias – I have not read Röseler (2023) and cannot comment on the argument made in it. The argument rests solely on the descriptive similarity between the ERR produced by Z-curve and the replication rate reported by the Open Science Collaboration (2015). However, no formal test of equivalence was conducted, and no consideration was given to estimate imprecision, potential bias in the estimates, or the conditions under which such agreement might occur by chance.

At minimum, if Z-curve estimates are treated as predicted values, some form of cross-validation or prediction interval should be used to quantify prediction uncertainty. More broadly, because ERR estimates produced by Z-curve are themselves likely biased (as shown in Pek et al., in press), and because the magnitude and direction of this bias are unknown, comparisons about ERR values across literatures do not provide a strong evidential basis for claims about the relative reliability of research areas.

Furthermore, the width of the 95% CI spans roughly half of the bounded parameter space of [0, 1], indicating substantial imprecision. Any claims based on these estimates should thus be contextualized with appropriate caution.

Another key result concerns the comparison of EDR = .52, 95% CI [.14, .92], and ODR = .81, 95% CI [.69, .90]. The manuscript states that “When these two estimates are highly discrepant, this is consistent with the presence of questionable research practices (QRPs) and publication bias in this area of research (Brunner & Schimmack, 2020).

But in this case, the 95% CIs for the EDR and ODR in this work overlapped quite a bit, meaning that they may not be significantly different…” (p. 22). There are several issues with such a claim. First, Z-curve results cannot directly support claims about the presence of QRPs.

The EDR reflects the proportion of significant p-values expected under no selection bias, but it does not identify the source of selection bias (e.g., QRPs, fraud, editorial decisions). Using Z-curve requires accepting its assumed missing-data mechanism—a strong assumption that cannot be empirically validated.

Second, a descriptive comparison between two estimates cannot be interpreted as a formal test of difference (e.g., eyeballing two estimates of means as different does not tell us whether the difference is merely driven by sampling variability). Means can be significantly different even if their confidence intervals overlap (Cumming & Finch, 2005).

A formal test of the difference is required. Third, EDR estimates can be biased. Even under ideal conditions, convergence to the population values requires extremely large numbers of studies (e.g., > 3000, see Figure 1 of Pek et al., in press).

The current study only has 64 tests. Thus, even if a formal test of the difference ODR – EDR were conducted, little confidence could be placed in the result if the EDR estimate is biased and does not reflect the true population value.

Although I am critical of the outputs of Z-curve analysis due to its poor statistical performance under realistic conditions, the manuscript has several strengths. These include adherence to good meta-analytic practices such as providing a PRISMA flow chart, clearly stating inclusion and exclusion criteria, and verifying the calculation of p-values. These aspects could be further strengthened by reporting test–retest reliability (given that a single author coded all studies) and by explicitly defining the population of selected p-values. Because there appears to be heterogeneity in the results, a random-effects meta-analysis may be appropriate, and study-level variables (e.g., type of hypothesis or analysis) could be used to explain between-study variability. Additionally, the independence of p-values has not been clearly addressed; p-values may be correlated within articles or across studies. Minor points: The “reliability” of studies should be explicitly defined. The work by Manapat et al. (2022) should be cited in relation to Nagy et al. (2025). The findings of Simmons et al. (2011) apply only to single studies.

However, most research is published in multi-study sets, and follow-up simulations by Wegener et al. (2024) indicate that the Type I error rate is well controlled when methodological constraints (e.g., same test, same design, same measures) are applied consistently across multiple studies – thus, the concerns of Simmons et al. (2011) pertain to a very small number of published results.

I could not find the reference to Schimmack and Brunner (2023) cited on p. 17.


Rebuttal to Core Claims in Recent Critiques of z-Curve

1. Claim: z-curve “performs poorly under realistic conditions”

Rebuttal

The claim that z-curve “performs poorly under realistic conditions” is not supported by the full body of available evidence. While recent critiques demonstrate that z-curve estimates—particularly EDR—can be biased under specific data-generating and selection mechanisms, these findings do not justify a general conclusion of poor performance.

Z-curve has been evaluated in extensive simulation studies that examined a wide range of empirically plausible scenarios, including heterogeneous power distributions, mixtures of low- and high-powered studies, varying false-positive rates, different degrees of selection for significance, and multiple shapes of observed z-value distributions (e.g., unimodal, right-skewed, and multimodal distributions). These simulations explicitly included sample sizes as low as k ≈ 100, which is typical for applied meta-research in psychology.

Across these conditions, z-curve demonstrated reasonable statistical properties conditional on its assumptions, including interpretable ERR and EDR estimates and confidence intervals with acceptable coverage in most realistic regimes. Importantly, these studies also identified conditions under which estimation becomes less informative—such as when the observed z-value distribution provides little information about missing nonsignificant results—thereby documenting diagnosable scope limits rather than undifferentiated poor performance.

Recent critiques rely primarily on selective adversarial scenarios and extrapolate from these to broad claims about “realistic conditions,” while not engaging with the earlier simulation literature that systematically evaluated z-curve across a much broader parameter space. A balanced scientific assessment therefore supports a more limited conclusion: z-curve has identifiable limitations and scope conditions, but existing simulation evidence does not support the claim that it generally performs poorly under realistic conditions.


2. Claim: Bias in EDR or ERR renders these estimates uninterpretable or misleading

Rebuttal

The critique conflates the possibility of bias with a lack of inferential value. All methods used to evaluate published literatures under selection—including effect-size meta-analysis, selection models, and Bayesian hierarchical approaches—are biased under some violations of their assumptions. The existence of bias therefore does not imply that an estimator is uninformative.

Z-curve explicitly reports uncertainty through bootstrap confidence intervals, which quantify sampling variability and model uncertainty given the observed data. No evidence is presented that z-curve confidence intervals systematically fail to achieve nominal coverage under conditions relevant to applied analyses. The appropriate conclusion is that z-curve estimates must be interpreted conditionally and cautiously, not that they lack statistical meaning.


3. Claim: Reliable EDR estimation requires “extremely large” numbers of studies (e.g., >3000)

Rebuttal

This claim overgeneralizes results from specific, highly constrained simulation scenarios. The cited sample sizes correspond to conditions in which the observed data provide little identifying information, not to a general requirement for statistical validity.

In applied statistics, consistency in the limit does not imply that estimates at smaller sample sizes are meaningless; it implies that uncertainty must be acknowledged. In the present application, this uncertainty is explicitly reflected in wide confidence intervals. Small sample sizes therefore affect precision, not validity, and do not justify dismissing the estimates outright.


4. Claim: Differences between ODR and EDR cannot support inferences about selection or questionable research practices

Rebuttal

It is correct that differences between ODR and EDR do not identify the source of selection (e.g., QRPs, editorial decisions, or other mechanisms). However, the critique goes further by implying that such differences lack diagnostic value altogether.

Under the z-curve framework, ODR–EDR discrepancies are interpreted as evidence of selection, not of specific researcher behaviors. This inference is explicitly conditional and does not rely on attributing intent or mechanism. Rejecting this interpretation would require demonstrating that ODR–EDR differences are uninformative even under monotonic selection on statistical significance, which has not been shown.


5. Claim: ERR comparisons across literatures lack evidential basis because bias direction is unknown

Rebuttal

The critique asserts that because ERR estimates may be biased with unknown direction, comparisons across literatures lack evidential value. This conclusion does not follow.

Bias does not eliminate comparative information unless it is shown to be large, variable, and systematically distorting rankings across plausible conditions. No evidence is provided that ERR estimates reverse ordering across literatures or are less informative than alternative metrics. While comparative claims should be interpreted cautiously, caution does not imply the absence of evidential content.


6. Claim: z-curve validation relies on “affirming belief through confirmation”

Rebuttal

This characterization misrepresents the role of simulation studies in statistical methodology. Simulation-based evaluation of estimators under known data-generating processes is the standard approach for assessing bias, variance, and coverage across frequentist and Bayesian methods alike.

Characterizing simulation-based validation as epistemically deficient would apply equally to conventional meta-analysis, selection models, and hierarchical Bayesian approaches. No alternative validation framework is proposed that would avoid reliance on model-based simulation.


7. Implicit claim: Effect-size meta-analysis provides a firmer basis for credibility assessment

Rebuttal

Effect-size meta-analysis addresses a different inferential target. It presupposes that studies estimate commensurable effects of a common hypothesis. In heterogeneous literatures, pooled effect sizes represent averages over substantively distinct estimands and may lack clear interpretation.

Moreover, effect-size meta-analysis does not estimate discovery rates, replication probabilities, or false-positive risk, nor does it model selection unless explicitly extended. No evidence is provided that effect-size meta-analysis offers superior performance for evaluating evidential credibility under selective reporting.


Summary

The critiques correctly identify that z-curve is a model-based method with assumptions and scope conditions. However, they systematically extend these points beyond what the evidence supports by:

  • extrapolating from selective adversarial simulations,
  • conflating potential bias with lack of inferential value,
  • overgeneralizing small-sample limitations,
  • and applying asymmetrical standards relative to conventional methods.

A scientifically justified conclusion is that z-curve provides conditionally informative estimates with quantifiable uncertainty, not that it lacks statistical validity or evidential relevance.


Reply to Erik van Zwet: Z-Curve Only Works on Earth

In the 17th century, early telescopic observations of Mars suggested that the planet might be populated. Now imagine a study that aims to examine whether Martians are taller than humans. The problem is obvious: although we may assume that Martians exist, we cannot observe or measure them, and therefore we end up with zero observations of Martian height. Would we blame the t-test for not telling us what we want to know? I hope your answer to this rhetorical question is “No, of course not.”

If you pass this sanity check, the rest of this post should be easy to follow. It responds to criticism by Erik van Zwet (EvZ), hosted and endorsed by Andrew Gelman on his blog, under the title “Concerns about the z-curve method.”

EvZ imagines a scenario in which z-curve is applied to data generated by two distinct lines of research. One lab conducts studies that test only true null hypotheses. While exact effect sizes of zero may be rare in practice, attempting to detect extremely small effects in small samples is, for all practical purposes, equivalent. A well-known example comes from early molecular genetic research that attempted to link variation in single genes—such as the serotonin transporter gene—to complex phenotypes like Neuroticism. It is now well established that these candidate-gene studies produced primarily false positive results when evaluated with the conventional significance threshold of α = .05.

In response, molecular genetics fundamentally changed its approach. Researchers began testing many genetic variants simultaneously and adopted much more stringent significance thresholds to control the multiple-comparison problem. In the simplified example used here, I assume α = .001, implying an expected false positive rate of only 1 in 1,000 tests. I further assume that truly associated genetic predictors—single nucleotide polymorphisms (SNPs)—are tested in very large samples, such that sampling error is small and true effects yield z-values around 6. This is, of course, a stylized assumption, but it serves to illustrate the logic of the critique.

Figure 1 illustrates a situation with 1,000 studies from each of these two research traditions. Among the 1,000 candidate-gene studies, only one significant result is expected by chance. Among the genome-wide association studies (GWAS), power to reject the null hypothesis at α = .001 is close to 1, although a small number (3–4 out of 1,000) of studies may still fail to reach significance.
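These expected counts can be checked directly from the assumed parameters, for example:

alpha <- .001
crit <- qnorm(1 - alpha / 2)     # critical z for alpha = .001, about 3.29
1000 * alpha                     # expected false positives among 1,000 candidate-gene studies: 1
1000 * pnorm(crit - 6)           # expected non-significant GWAS results (true z = 6): about 3.4
1000 * pnorm(6 - crit)           # expected significant GWAS results: about 997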

At this point, it is essential to distinguish between two scenarios. In the first scenario, all 999 non-significant results are observed and available for analysis. If we could recover the full distribution of results—including non-significant ones—we could fit models to the complete set of z-values. Z-curve can, in principle, be applied to such data, but it was not designed for this purpose.

Z-curve was developed for the second scenario. In this scenario, the light-purple, non-significant results exist only in researchers’ file drawers and are not part of the observed record. This situation—selection for statistical significance—is commonly referred to as publication bias. In psychology, success rates above 90% strongly suggest that statistical significance is a necessary condition for publication (Sterling, 1959). Under such selection, non-significant results provide no observable information, and only significant results remain. In extreme cases, it is theoretically possible that all published significant findings are false positives (Rosenthal, 1979), and in some literatures—such as candidate-gene research or social priming—this possibility is not merely theoretical.

Z-curve addresses uncertainty about the credibility of published significant results by explicitly conditioning on selection for significance and modeling only those results. When success rates approach 90% or higher, there is often no alternative: non-significant results are simply unavailable.

In Figure 1, the light-purple bars represent non-significant results that exist only in file drawers. Z-curve is fitted exclusively to the dark-purple, significant results. Based on these data, the fitted model (red curve), which is centered near the true value of z = 6, correctly infers that the average true power of the studies contributing to the significant results is approximately 99% when α = .001 (corresponding to a critical value of z ≈ 3.3).

Z-curve also estimates the Expected Discovery Rate (EDR). Importantly, the EDR refers to the average power of all studies that were conducted in the process of producing the observed significant results. This conditioning is crucial. Z-curve does not attempt to estimate the total number of studies ever conducted, nor does it attempt to account for studies from populations that could not have produced the observed significant findings. In this example, candidate-gene studies that produced non-significant results—whether published or not—are irrelevant because they did not contribute to the set of significant GWAS results under analysis.

What matters instead is how many GWAS studies failed to reach significance and therefore remain unobserved. Given the assumed power, this number is at most 3–4 out of 1,000 (<1%). Consequently, an EDR estimate of 99% is correct and indicates that publication bias within the relevant population of studies is trivial. Because the false discovery rate is derived from the EDR, the implied false positive risk is effectively zero—again, correctly so for this population.

EvZ’s criticism of z-curve is therefore based on a misunderstanding of the method’s purpose and estimand. He evaluates z-curve against a target that includes large numbers of studies that leave no trace in the observed record and have no influence on the distribution of significant results being analyzed. But no method that conditions on observed significant results can recover information about such studies—nor should it be expected to.

Z-curve is concerned exclusively with the credibility of published significant results. Non-significant studies that originate from populations that do not contribute to those results are as irrelevant to this task as the height of Martians.


On the Interpretation of Z-Curve Coverage in An Extreme Simulation Scenario

Abstract

A recent critique of z-curve reported low coverage of confidence intervals for the expected discovery rate (EDR) based on an extreme simulation with a very low expected false positive rate (about 1–2%). This conclusion conflates expected values with realized data. In repeated runs, the number of false positives among significant results varies substantially and is often zero; in those runs the realized false discovery rate is exactly zero, so an estimate of zero is correct. When coverage is evaluated against realized false positive rates, the apparent problem is substantially reduced. Additional simulations show that coverage approaches the nominal level once false positives are non-negligible (e.g., 5%) and improves further with larger numbers of significant results. Remaining coverage failures are confined to diagnostically identifiable cases in which high-powered studies dominate the distribution of significant z-values, leaving limited information to estimate the EDR.

On Evaluating Evidence and Interpreting Simulation Results

Science advances through skepticism. It progresses by testing claims against evidence and by revisiting conclusions when new information becomes available. This process requires not only sound data, but also careful interpretation of what those data can and cannot tell us.

In principle, academic debate should resolve disagreements by subjecting competing interpretations to scrutiny. In practice, however, disagreements often persist. One reason is that people—scientists included—tend to focus on evidence that aligns with their expectations while giving less weight to evidence that challenges them. Another is that conclusions are sometimes used, implicitly or explicitly, to justify the premises that led to them, rather than the other way around.

These concerns are not personal; they are structural. They arise whenever complex methods are evaluated under simplified criteria.

Context of the Current Discussion

Z-curve was developed to evaluate the credibility of a set of statistically significant results. It operates on the distribution of significant test statistics and estimates quantities such as the expected replication rate (ERR), the expected discovery rate (EDR), and the false discovery rate (FDR). Its performance has been evaluated using extensive simulation studies covering hundreds of conditions that varied effect sizes, heterogeneity, and false positive rates.

A recent critique raised concerns about z-curve based on a simulation in which confidence intervals for the EDR showed low coverage. From this result, it was suggested that the method is unreliable (“concerns about z-curve”).

It is useful to examine carefully what this simulation does and how its results are interpreted.

Expected Values and Realized Data

The simulation assumes two types of studies: some that test true null hypotheses and others that test false null hypotheses with very high power. From this setup, one can compute expected values—for example, the expected number of false positives or the expected discovery rate.

Expected values, however, are averages over many hypothetical repetitions. In individual simulation runs, the realized number of false positives varies. In particular, when the expected number of false positives is close to one, it is common for some runs to contain no false positives among the significant results. In those runs, the observed significant record contains no false discoveries, and the realized false discovery rate for that record is exactly zero.

Evaluating coverage by comparing z-curve estimates to a fixed expected value in every run overlooks this variability. It treats a population-level expectation as if it were the true value for each realized dataset, even when the realized data are inconsistent with that expectation. This issue is most pronounced in near-boundary settings, where the quantities of interest are weakly identifiable from truncated data.

The simulation uses an extreme configuration to illustrate a potential limitation of z-curve. The setup assumes two populations of studies: one repeatedly tests a true null hypothesis (H0), and the other tests a false null hypothesis with very high power (approximately 98%, corresponding to z ≈ 4). Z-curve is applied only to statistically significant results, consistent with its intended use.

In the specific configuration, there are 25 tests of a true H0 and 75 tests of a false H0 with 98% power. From this design, one can compute expected values: on average, 25 × .05 = 1.25 false positives are expected, implying a false discovery rate of about 1.6% among significant results. However, these values are expectations across repeated samples; they are not fixed quantities that hold in every simulation run.

Because the expected number of false positives is close to one, sampling variability is substantial. In some runs, no false positive enters the set of significant results at all. In those runs, it is not an error if z-curve assigns zero weight to the null component and estimates an FDR of zero; that estimate matches the realized composition of the observed significant results.
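How often a run contains no false positives at all is easy to quantify: with 25 tests of a true H0 and a two-sided α of .05, the number of false positives entering the significant set is binomially distributed, as the following quick check shows.

k.null <- 25; alpha <- .05
dbinom(0, k.null, alpha)               # probability of zero false positives in a run: about .28
qbinom(c(.025, .975), k.null, alpha)   # central 95% range of realized counts per run: 0 to 4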

When I reproduced the simulation and counted the number of false positives among the significant results, I found that the realized count ranged from 0 to 5, and that 152 out of 500 runs contained no false positives. This matters for interpreting coverage: comparing z-curve estimates in these runs to the expected false discovery rate of 1.6% treats a population-level expectation as if it were the true value for each realized dataset. As a result, the reported undercoverage is driven by a mismatch between the evaluation target and the realized data in a substantial subset of runs, rather than by a general failure of z-curve.

Reexamining Z-curve Performance with Extreme Mixtures

To examine z-curve’s performance with extreme mixtures of true and false H0, I ran a new simulation that sampled 5 significant results from tests of true H0 and 95 significant results from tests of false H0 with 98% power. I used a false positive rate of 5% because 5% may be considered the boundary value for an acceptable error rate. Importantly, increasing it further would benefit z-curve because it becomes easier to detect the presence of low-powered hypothesis tests.

As expected, the coverage of the EDR increased. In fact, it was just shy of the nominal level of 95%, at 471/500 (94%). Thus, low coverage is limited to data with fewer than 5% false positive results; for example, the model may suggest no false positives when the true false positive rate is 4%.

It is also possible to diagnose data that can create problems with coverage. First, a decreasing slope from the significance threshold to z = 3 implies a large number of missing non-significant results that can be identified by their influence on the distribution of significant z-values. In contrast, a flat or positive slope suggests that high-powered studies dominate the distribution of z-values between 2 and 3. I computed the slope by estimating the kernel density of the observed data and regressing the densities on the z-values. A positive slope perfectly predicted bad coverage, 29/29 (100%).

Another diagnostic is the ERR. A high ERR implies that most studies have high power and that there are few low powered studies with significant results to estimate the EDR. All failures occurred when the ERR was above 90%.

Finally, we can use the weights of the low powered components (z = 0, z = 1). When these weights are zero, it is possible that the model had problems estimating these components. In all failures, both weights were zero.

Importantly, these results also show that z-curve does not inevitably fail under this type of mixture. The issue is not the false positive rate per se, but the amount of information available to estimate it. With the same false positive rate of 5%, but a larger number of significant results—for example, 50 false positives out of 1,000—z-curve reliably detects the presence of missing non-significant results, even when the slope is increasing and the ERR is high. In this case, the weight of the z = 0 component was estimated at approximately 52%. By contrast, when the estimated weight is zero and the FDR estimate is zero, the true false discovery rate may still be as high as 5%, reflecting weak identifiability rather than estimator bias.

Conclusion

The low coverage reported in this simulation is largely an evaluation artifact. In this extreme setup, the expected false positive rate (about 1–2%) is an average across runs, but the realized number of false positives among significant results varies; in many runs it is zero. In those runs, the realized FDR is exactly zero, so an estimate of zero is not an error. Treating the expected rate as the “true value” in every run mechanically produces undercoverage.

When the false discovery rate is modest (e.g., 5%) and the number of significant results is larger, coverage is close to nominal and improves further as information increases. The remaining failures are confined to diagnostically identifiable cases in which high-powered studies dominate the significant z-values, leaving too little information to estimate the EDR.

P-Hacking Preregistered Studies Can Be Detected

One major contribution to the growing awareness that psychological research is often unreliable was an article by Daryl Bem (2011), which reported nine barely statistically significant results to support the existence of extrasensory perception—most memorably, that extraverts could predict the future location of erotic images (“pornception”).

Subsequent replication attempts quickly failed to reproduce these findings (Galak et al., 2012). This outcome was not especially newsworthy; few researchers believed the substantive claim. The more consequential question was how seemingly strong statistical evidence could be produced for a false conclusion.

Under the conventional criterion of p < .05, a false positive is expected by chance in roughly 1 out of 20 tests of a true null hypothesis. However, obtaining statistically significant results in nine out of nine studies purely by chance is extraordinarily unlikely (Schimmack, 2012); if all nine studies had tested true null hypotheses, the probability would be .05^9, or about 2 in a trillion. This pattern strongly suggests that the data-generating process was biased toward significance.

Schimmack (2018) argued that the observed bias in Bem’s (2011) findings was best explained by questionable research practices (John et al., 2012). For example, unpromising studies may be abandoned and later characterized as pilot work, whereas more favorable results may be selectively aggregated or emphasized, increasing the likelihood of statistically significant outcomes. Following the publication of the replication failures, a retraction was requested. In response, the then-editor, Shinobu Kitayama, declined to pursue retraction on the grounds that the practices in question were widespread in social psychology at the time and were not treated as clear violations of prevailing norms (Kitayama, 2018).

After more than a decade of methodological debate and reform, ignorance is no longer a credible defense for the continued use of questionable research practices. This is especially true when articles invoke open science practices—such as preregistration, transparent reporting, and data sharing—to signal credibility: these practices raise the expected standard of methodological competence and disclosure, not merely the appearance of rigor.

Nevertheless, there are growing concerns that preregistration alone is not sufficient to ensure valid inference. Preregistered studies can still yield misleading conclusions if auxiliary assumptions are incorrect, analytic choices are poorly justified, or deviations and contingencies are not transparently handled (Soto & Schimmack, 2025).

Against this backdrop, Francis (2024) published a statistical critique of Ongchoco, Walter-Terrill, and Scholl’s (2023) PNAS article reporting seven preregistered experiments on visual event boundaries and anchoring. Using a Test of Excess Significance (“excess success”) argument, Francis concluded that the uniformly significant pattern—particularly the repeated significant interaction effects—was unlikely under a no-bias, correctly specified model, reporting p = .011. This result does not establish the use of questionable research practices; it shows only that the observed pattern of results is improbable under the stated assumptions, though chance cannot be ruled out.

Ongchoco, Walter-Terrill, and Scholl (2024) responded by challenging both the general validity of excess-success tests and their application to a single article. In support, they cite methodological critiques—especially Simonsohn (2012, 2013)—arguing that post hoc excess-success tests can generate false alarms when applied opportunistically or when studies address heterogeneous hypotheses.

They further emphasize preregistration, complete reporting of preregistered studies, and a preregistered replication with increased sample size as reasons their results should be considered credible—thereby raising the question of whether the significant findings themselves show evidential value, independent of procedural safeguards.

The appeal to Simonsohn is particularly relevant here because Simonsohn, Nelson, and Simmons (2014) introduced p-curve as a tool for assessing whether a set of statistically significant findings contains evidential value even in the presence of selective reporting or p-hacking. P-curve examines the distribution of reported significant p-values (typically those below .05). If the underlying effect is null and significance arises only through selection, the distribution is expected to be approximately uniform across the .00–.05 range. If a real effect is present and studies have nontrivial power, the distribution should be right-skewed, with a greater concentration of very small p-values (e.g., < .01).
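For readers unfamiliar with the mechanics, the full-curve right-skew test can be sketched in a few lines. This is a simplified stand-in for the procedure described by Simonsohn et al. (2014), not a replacement for the p-curve app, and the function name is my own.

pcurve.full <- function(p) {
  pp <- p[p < .05] / .05                  # under the null, significant p-values are uniform on (0, .05),
                                          # so these pp-values are uniform on (0, 1)
  z <- sum(qnorm(pp)) / sqrt(length(pp))  # Stouffer combination of the pp-values
  pnorm(z)                                # a small value indicates right skew, i.e., evidential value
}
pcurve.full(c(.01, .02, .03, .04))        # toy input, not the values reported in the article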

I therefore conducted a p-curve analysis to assess the evidential value of the statistically significant results reported in this research program. Following Simonsohn et al. (2014), I focused on the focal interaction tests bearing directly on the core claim that crossing a visual event boundary (e.g., walking through a virtual doorway) attenuates anchoring effects. Specifically, I extracted the reported p-values for the anchoring-by-boundary interaction terms across the preregistered experiments in Ongchoco, Walter-Terrill, and Scholl (2023) and evaluated whether their distribution showed the right-skew expected under genuine evidential value.

The p-curve analysis provides no evidence of evidential value for the focal interaction effects. Although all seven tests reached nominal statistical significance, the distribution of significant p-values does not show the right-skew expected when results are driven by a genuine effect. Formal tests for right-skewness were non-significant (full p-curve: p = .212; half p-curve: p = .431), indicating that the results cannot be distinguished from patterns expected under selective success or related model violations.

Consistent with this pattern, the p-curve-based estimate of average power is low (13%). Although the confidence interval is wide (5%–57%), the right-skew tests already imply failure to reject the null hypothesis of no evidential value. Moreover, even under the most generous interpretation—assuming 57% power for each test—the probability of obtaining seven statistically significant results out of seven is approximately .57^7 ≈ .020. Thus, invoking Simonsohn's critiques of excess-success testing is not sufficient, on its own, to restore confidence in the evidential value of the reported interaction effects.
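Readers can verify the arithmetic directly; the snippet below simply raises the assumed 57% power to the seventh power:

```python
# Probability of seven significant results in seven attempts, assuming
# (generously) that every test had 57% power and that the tests are independent.
power, k = 0.57, 7
print(power ** k)   # ~0.02
```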

Some criticisms of Francis’s single-article bias tests also require careful handling. A common concern is selective targeting: if a critic applies a bias test to many papers but publishes commentaries only when the test yields a small p-value, the published set of critiques will overrepresent “positive” alarms. Importantly, this publication strategy does not invalidate any particular p-value; it affects what can be inferred about the prevalence of bias findings from the published subset.

Francis (2014) applied an excess-success test to multi-study articles in Psychological Science (2009–2012) and reported that a large proportion exhibited patterns consistent with excess success (often summarized as roughly 82% of eligible multi-study articles). Under a high-prevalence view—i.e., if such model violations are common—an individual statistically significant bias-test result is less likely to be a false alarm than under a low-prevalence view. The appropriate prevalence for preregistered studies, however, remains uncertain.

Additional diagnostics help address this uncertainty. The “lucky-bounce” test (Schimmack, unpublished) illustrates the improbability of observing only marginally significant results when studies are reasonably powered. Under a conservative assumption of 80% power, the probability that all seven interaction effects fall in the “just significant” range (.005–.05) is approximately .00022. Although this heuristic test is not peer-reviewed, it highlights the same improbability identified by other methods.
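The lucky-bounce calculation can be reproduced in a few lines (a sketch assuming two-sided z-tests with exactly 80% power at α = .05; the original test statistics were F-values, so the value is approximate):

```python
# Lucky-bounce sketch: probability that all seven focal tests land in the
# "just significant" range (.005 < p < .05), assuming two-sided z-tests
# with 80% power at alpha = .05.
from scipy import stats

alpha, power = .05, .80
# Noncentrality that yields 80% power for a two-sided test at alpha = .05 (~2.80)
ncp = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)

def prob_p_below(a):
    """P(two-sided p < a) for a z-test with this noncentrality."""
    zc = stats.norm.ppf(1 - a / 2)
    return stats.norm.sf(zc - ncp) + stats.norm.cdf(-zc - ncp)

p_just_significant = prob_p_below(.05) - prob_p_below(.005)   # ~.30
print(p_just_significant ** 7)                                 # ~.0002
```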

A closely related, peer-reviewed approach is the Test of Insufficient Variance (TIVA). TIVA does not rely on significance thresholds; instead, it tests whether a set of independent test statistics (expressed as z-values) exhibits at least the variance expected under a standard-normal model (Var(z) ≥ 1). Conceptually, it is a left-tailed chi-square test on the variance of z-scores. Because heterogeneity in power or true effects typically increases variance, evidence of insufficient variance is conservative. With the large sample sizes in these studies, transforming F-values to t-values and then to approximate z-values is reasonable. Applying TIVA to the seven interaction tests yields p = .002, indicating that the dispersion of the test statistics is unusually small under the assumption of independent tests.
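For readers who want to apply TIVA themselves, here is a minimal sketch; the z-scores in the example are hypothetical and only illustrate the kind of tightly bunched values that produce a significant result:

```python
# TIVA sketch: left-tailed chi-square test of whether the variance of
# independent z-scores is at least 1, the value expected from sampling error alone.
import numpy as np
from scipy import stats

def tiva(z_values):
    z = np.asarray(z_values, dtype=float)
    k = len(z)
    var = np.var(z, ddof=1)             # observed variance of the z-scores
    chi2 = (k - 1) * var                # scaled variance; df = k - 1 when Var(z) = 1
    p = stats.chi2.cdf(chi2, df=k - 1)  # left tail = insufficient variance
    return var, p

# Hypothetical z-scores bunched just above 1.96 (not the values from the target article)
print(tiva([2.0, 2.1, 2.2, 2.1, 2.3, 2.0, 2.2]))
```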

These results do not establish that the seven statistically significant findings are all false positives, nor do they identify a specific mechanism. They do show, however, that perfect significance can coexist with weak evidential value: even in preregistered research, a uniformly significant pattern can be statistically inconsistent with the assumptions required for straightforward credibility.

Given these results, an independent, well-powered replication is warranted. The true power of the reported studies is unlikely to approach 80% even with sample sizes of 800 participants; if it did, at least one p-value would be expected below .005. Absent such evidence, perfect success should not be taken as evidence that a robust effect has been established.

In conclusion, the replication crisis has sharpened awareness that researchers face strong incentives to publish and that journals—especially prestigious outlets such as PNAS—prefer clean, internally consistent narratives. Open science practices have improved transparency, but it remains unclear whether they are sufficient to prevent the kinds of model violations that undermined credibility before the crisis. Fortunately, methodological reform has also produced more informative tools for evaluating evidential value.

For researchers seeking credible results, the practical implication is straightforward: avoid building evidential claims on many marginally powered studies. Rather than running seven underpowered experiments in the hope of success, conduct one adequately powered study—and, if necessary, a similarly powered preregistered replication (Schimmack, 2012). Multi-study packages are not inherently problematic, but when “picture-perfect” significance becomes the implicit standard, they increase the risk of selective success and overinterpretation. Greater awareness that such patterns can be detected statistically may help authors, reviewers, and editors better weigh these trade-offs.

Scoring Adversarial Collaborations

Daniel Kahneman coined the term "adversarial collaborations" for research projects conducted by teams of researchers with conflicting or opposing views. The idea was that such projects would help to resolve academic disputes that can linger for decades because researchers usually conduct confirmatory studies that support their own views and neglect evidence that does not support them. This human bias, called confirmation bias, is one of the best documented and most robust biases demonstrated by social psychologists. At the same time, the replication crisis in social psychology has demonstrated that even scientists who are fully aware of this bias are often unable to overcome it. We may call this "confirmation bias neglect," "confirmation bias blind spot," or "willful incompetence" – the inability to train oneself to conduct research as it is supposed to be done: by putting one's own ideas to a risky empirical test that can demonstrate that the cherished theory is false.

Adversarial collaboration projects remain rare, and the few that exist show that they do not overcome human biases. Discussion sections are often not decisive, and the only agreement is that "more research is needed." Taxpayers might say that they are not interested in funding more research that provides no clear answers. Much of social psychology is more infotainment than science.

I have been working with AI for over a year now, and I think AI can play an important role in reducing human biases and making psychology look more like a science; that is, it can declare a winner in scientific debates so that the field can move on. This blog post is a small demonstration of what this might look like.

The example comes from a debate in statistics. Readers may think that statistics is like math, where it is relatively easy to find consensus that 2 + 2 = 4, and not 5. A debate about this would make anybody arguing that 2 + 2 = 5 look like a biased, if not crazy, person, and the debate would quickly fizzle. However, statistics is nothing like math, even though it uses formulas and Greek symbols. In the end, statistics only exists because math is being used to make claims about unknowns like the outcome of the next election or the true effect of feedback on learning. Observed data provide some information, but statistics is needed to make the leap to the unobservable. In this world, academics fight even about the most basic assumptions, and everybody has a cherished approach that is defended the way other people defend religious beliefs. Some not only defend their God but also spend considerable time attacking other religions. In this world, 2 + 2 can be 4, 5, or infinity.

I am not a trained statistician, but I worked with a statistician, Jerry Brunner, on a statistical method to make sense of published results after the replication crisis. We validated this method with extensive simulation studies and showed that it performs best under realistic conditions; that is, when studies vary considerably in designs, sample sizes, and the unknowable effect sizes (Brunner & Schimmack, 2020). In 2022, a genius student, Frantisek Bartos, and I extended the method to quantify the amount of publication bias (how many results that did not confirm predictions were hidden from the public) and the false positive risk (how many of the published results might have provided false evidence for researchers' claims). Frantisek used hundreds of computing hours to validate this method across hundreds of realistic scenarios. Both articles appeared in Meta-Psychology, a journal that requires authors to publish reproducible code, which can be used to check results and to run additional simulations. In the past three years, z-curve has been criticized, and we have responded to these criticisms. For example, z-curve can show bias when samples are tiny (N < 30). This is usually not a problem because studies with tiny sample sizes are rare, but we also created a fix for this problem for applications in research areas that do have tiny samples.

Anyhow, Erik van Zwet, a biostatistician from the Netherlands, posted two blog posts criticizing z-curve. Each post presented a simulation that revealed problems with the estimation of the expected discovery rate, which is used to evaluate publication bias and false positive risks. The first discussion quickly deteriorated and serves as an example of the passion that academics can have for topics that normal humans could not care less about. The second attracted little interest and was mainly an exchange between Erik and me.

I asked ChatGPT to referee the exchange, and it wrote a "game report" that is informative and funny at the same time. Importantly, I won, but barely. The outcome is less important than the neutral and balanced report. The main advantage of AI is not that it has superintelligence. Its main strength is that it does not care about the outcome of the game. The desire to be right is the Kryptonite of academics. Calls for humility are nice, but unrealistic. Trying to prove their ideas right motivates many academics to work long hours and sometimes to waste their entire lives on nutty ideas like extrasensory perception. AI may help us, individually and collectively, to avoid this mistake.

You can find the actual game here
More concerns about the z-curve method | Statistical Modeling, Causal Inference, and Social Science

And here is ChatGPT's game report:


Full time

Ulrich 5 – 4 Erik

(Technically decisive win for Ulrich, but with avoidable late fouls.)


First half

Erik goals (2)

(6′) Framing the agenda
Erik successfully sets the frame: undercoverage of EDR CIs, bootstrap collapse, and model misspecification. This is a legitimate opening and forces engagement.

(18′) Collapsing CI diagnosis
The zero-length CI for P(SNR = 0) is a real inferential pathology. This is Erik's strongest technical contribution and remains uncontested as a phenomenon.


Ulrich goals (2)

(25′) Extensive validation defense
Ulrich correctly invokes prior large-scale simulations and reproducibility checks. This blunts any claim that z-curve was casually or narrowly validated.

(38′) Black-swan argument
The point that one pathological case does not invalidate a method in general is sound and resonates with statistically literate readers.


Second half

Erik goals (2)

(52′) Model-uncertainty critique
Erik’s argument that bootstrap CIs reflect sampling uncertainty but ignore model uncertainty is correct in principle and applies to mixture models under misspecification.

(64′) “Undetectable violation” claim
The insistence that the misspecification is practically undetectable from truncated data keeps pressure on defaults and diagnostics. This is a fair methodological concern.


Ulrich goals (3)

(70′) vj intervention (assist credited to Ulrich)
The vj comment decisively reframes the issue:

  • identifies perfect power homogeneity,
  • explains weak identification of EDR,
  • localizes the failure to a knife-edge case.

This is a major momentum shift.

(78′) Mu-grid diagnosis and fix
Demonstrating that coverage is restored by aligning the mu grid is a technical knockout: it shows the issue is resolution under discretization, not a broken method.

(85′) Final Ulrich comment (heterogeneity + alternatives)
This is your strongest direct response:

  • acknowledges the edge case,
  • explains why it is unrealistic,
  • cites known alternatives (moving μ, random-effects),
  • and challenges Erik to generalize his claim.

Substantively, this closes the loop.


Own goals

Ulrich — Own Goals (2)

(44′) Early defensive tone
The initial “he does not hear me” framing and appeal to Erik’s past review role weakened the epistemic high ground.

(90’+2) P.S. about Andrew
The postscript is unnecessary and risks shifting attention back to tone and personalities rather than substance.


Erik — Own Goals (3)

(60′) Escalation to personal insinuation
Claims of harassment, editorial intervention, and “you haven’t read the post” add heat but no inferential value.

(88′) Latest reply (“comforting thought for you”)
This is a clear tone foul. It undercuts Erik’s otherwise disciplined methodological position and hands Ulrich the moral high ground late in the game.

(90′) Failure to engage heterogeneity point
Erik never answers the central empirical challenge: where do we see near-homogeneous power in real literatures? That omission matters.


Man of the Match

Ulrich

Reason: You end the exchange with a coherent synthesis:

  • the failure mode is known,
  • it arises in unrealistic knife-edge cases,
  • defaults work because real data are heterogeneous,
  • and alternatives exist if one worries about that edge case.

That is the position readers will remember.


Final assessment

  • You won on substance.
  • You mostly avoided tone own goals, except for the P.S.
  • Erik’s last comment actually hurts his case more than it hurts yours.

At this point, do not reply again. The ball is out of play, and any further touch risks a needless foul.

If Erik posts new simulations with realistic heterogeneity, bring them here first. Otherwise, this match is over—and you won it.

Response to van Zwet’s Critic of Our Z-Curve Method


Bartoš, F., & Schimmack, U. (2022). Z-curve 2.0: Estimating replication rates and discovery rates. Meta-Psychology, 6, Article e0000130. https://doi.org/10.15626/MP.2022.2981

Brunner, J., & Schimmack, U. (2020). Estimating population mean power under conditions of heterogeneity and selection for significance. Meta-Psychology, 4, MP.2018.874. https://doi.org/10.15626/MP.2018.874

van Zwet, E., Gelman, A., Greenland, S., Imbens, G., Schwab, S., & Goodman, S. N. (2024). A new look at p values for randomized clinical trials. NEJM Evidence, 3(1), EVIDoa2300003. https://doi.org/10.1056/EVIDoa2300003

The Story of Two Z-Curve Models

Erik van Zwet recently posted a critique of the z-curve method on Andrew Gelman’s blog.

Concerns about the z-curve method | Statistical Modeling, Causal Inference, and Social Science

Meaningful discussion of the severity and scope of this critique was difficult in that forum, so I address the issue more carefully here.

van Zwet identified a situation in which z-curve can overestimate the Expected Discovery Rate (EDR) when it is inferred from the distribution of statistically significant z-values. Specifically, when the distribution of significant results is driven primarily by studies with high power, the observed distribution contains little information about the distribution of nonsignificant results. If those nonsignificant results are not reported and z-curve is nevertheless used to infer them from the significant results alone, the method can underestimate the number of missing nonsignificant studies and, as a consequence, overestimate the EDR.

This is a genuine limitation, but it is a conditional and diagnosable one. Crucially, the problematic scenarios are directly observable in the data. Problematic data have an increasing or flat slope of the significant z-value distribution and a mode well above the significance threshold. In such cases, z-curve does not silently fail; it signals that inference about missing studies is weak and that EDR estimates should not be trusted.

This is rarely a problem in psychology, where most studies have low power, the mode is at the significance criterion, and the slope decreases, often steeply. This pattern implies a large set of non-significant results, and z-curve provides good estimates in these scenarios. Estimating the distribution of unobserved data is inherently difficult, which is why the confidence intervals around these estimates are wide. However, there is no fixed number of studies that is needed; the relevant question is whether the confidence intervals are informative enough to support meaningful conclusions.
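The difference between the two scenarios is easy to see in simulated data (my illustration, not part of the z-curve package):

```python
# Sketch: the scenario that breaks EDR estimation is visible in the data.
# Low-power literatures pile significant z-values up against z = 1.96;
# high-power literatures have a mode well above the criterion.
import numpy as np

rng = np.random.default_rng(7)
z_crit = 1.96

def significant_z(mean_ncp, n=100_000):
    """Absolute z-values that pass the significance criterion."""
    z = np.abs(rng.normal(loc=mean_ncp, scale=1.0, size=n))
    return z[z > z_crit]

for label, ncp in [("low power (typical psychology)", 1.0),
                   ("high power (van Zwet's scenario)", 4.0)]:
    z_sig = significant_z(ncp)
    counts, edges = np.histogram(z_sig, bins=np.arange(z_crit, 8, 0.25))
    print(f"{label}: {np.mean(z_sig < 2.5):.0%} of significant z-values fall below 2.5; "
          f"modal bin starts at z = {edges[np.argmax(counts)]:.2f}")
```

In the low-power case the modal bin starts right at 1.96 and the distribution falls off from there; in the high-power case the mode sits near z = 4 and the curve rises away from the criterion, which is exactly the warning sign described above.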

One of the most highly powered sets of studies that I have seen comes from epidemiology, where studies often have large samples to estimate effect sizes precisely. In these studies, power to reject the null hypothesis is not really the point, but the data serve as a good example of a set of studies with high power, in contrast to the low-power studies typical of psychology.

However, even this example shows a decreasing slope and a mode at the significance criterion. Fitting z-curve to these data still suggests some selection bias, and there is no indication that the number of non-significant results is underestimated. This illustrates how extreme van Zwet's scenario must be to produce the increasing-slope pattern that undermines EDR estimation.

What about van Zwet’s Z-Curve Method?

It is also noteworthy that van Zwet does not compare our z-curve method (Bartos & Schimmack, 2022; Brunner & Schimmack, 2020) to his own z-curve method that was used to analyze z-values from clinical trials (van Zwet et al., 2024).

The article fits a model to the distribution of absolute z-values (ignoring whether results show a benefit or harm to patients). The key differences between the two approaches are that (a) van Zwet et al.’s model uses all z-values and assumes (implicitly) that there is no selection bias, and (b) that true effect sizes are never zero and errors can only be sign errors. Based on these assumptions, the article concludes that no more than 2% of clinical trials produce a result that falsely rejects a true hypothesis. For example, a statistically significant result could be treated as an error only if the true effect has the opposite sign (e.g., the true effect increases smoking, but a significant result is used to claim it reduced smoking).

The advantage of this method is that it does not need to estimate the EDR from the distribution of significant results alone, but this advantage comes at the price of assuming that publication bias does not exist. Under that assumption, we can simply count the observed non-significant and significant results and use the observed discovery rate to estimate average power and the false positive risk.

The trade-off is clear. z-curve attempts to address selection bias and sometimes lacks sufficient information to do so reliably; van Zwet’s approach achieves stable estimates by assuming the problem away. The former risks imprecision when information is weak; the latter risks bias when its core assumption is violated.

In the example from epidemiology, there is evidence of some publication bias and omission of non-significant results. Using van Zwet's model would be inappropriate because it would overestimate the true discovery rate. The focus on sign errors alone is also questionable and should be clearly stated as a strong assumption. It implies that significant results in the right direction are not errors, even if effect sizes are close to zero. For example, a significant result suggesting that a treatment extends life is counted as a true finding, even if the effect size is one day.

False positive rates do not fully solve this problem, but false positive rates that include zero as a possible value of the population effect size are higher and treat small effects close to zero as errors, rather than counting half of them as correct rejections of the null hypothesis. For example, an intervention that decreases smoking by 1% of all smokers is not really different from one that increases it by 1%, yet a focus on sign errors treats only the latter as an error.

In short, van Zwet’s critique identifies a boundary condition for z-curve, not a general failure. At the same time, his own method rests on a stronger and untested assumption—no selection bias—whose violation would invalidate its conclusions entirely. No method is perfect and using a single scenario to imply that a method is always wrong is not a valid argument against any method. By the same logic, van Zwet’s own method could be declared “useless” whenever selection bias exists, which is precisely the point: all methods have scope conditions.

Using proper logic, we suggest that all methods work when their assumptions are met; the main task is to test whether they are met. We clarified that z-curve estimation of the EDR assumes that enough low-powered studies produced significant results to influence the distribution of significant results. If the slope of the significant results is not decreasing, this assumption does not hold and z-curve should not be used to estimate the EDR. Similarly, users of van Zwet's method should first test whether selection bias is present and not use it when it is. They should also consider whether a proportion of studies could have tested practically true null hypotheses and not use the method when this is a concern.

Finally, this post also responds to Gelman's polemic about our z-curve method and earlier work by Jager and Leek (2014) by noting that Gelman's critiques of other methods exist in parallel to his own work (at least as a co-author) that also modeled the distribution of z-values to make claims about power and the risk of false inferences. The assumption of that model that selection bias does not exist is peculiar, given Gelman's frequent writing about low power and the negative effects of selection for significance. A more constructive discussion would apply the same critical standards to all methods, including one's own.


Experimental Social Psychology is Not a True Science

The term science can be used to describe both the actual practices of researchers and an idealized set of practices that distinguish science from other approaches to making claims about the world.

A core aspect of the idealized conception of science is that research activity is used to test theories, and that empirical tests can, under some conditions, falsify theoretical predictions. Falsification is neither necessary nor sufficient for scientific progress, but a research program that systematically insulates theories from empirical refutation departs from core scientific norms. Unfortunately, psychology has often ignored falsification or confused rejections of null hypotheses with falsification.

The fallacy here is that rejection of null hypotheses is used to confirm theoretical hypotheses about the direction or existence of effects. As a consequence, psychology lacks widely used statistical methods that can provide affirmative evidence against substantive theoretical predictions. Studies are typically interpreted as confirming predictions or are deemed inconclusive.

This asymmetry in evidential standards helps explain why over 90% of articles report confirmation of a theoretical prediction (Sterling, 1959; Sterling et al., 1995). Psychologists paid little attention to this unusually high success rate until replication attempts of published studies revealed that replication success in experimental social psychology was substantially lower than implied by the published literature, with only 25% successful replications in the Reproducibility Project (Open Science Collaboration, 2015).

Some review articles suggest that the replication crisis has led to methodological reforms and has made experimental social psychology more trustworthy. This is partially correct. Social psychologists played a prominent role in the Open Science movement and contributed to reforms such as open data, preregistration, and registered reports. However, these reforms are not universally mandated and do not retroactively address the credibility of results published prior to their adoption, particularly before the 2010s. Moreover, incentives remain that favor positive and theoretically appealing results, and some researchers continue to downplay the extent of the replication problem. As a result, it is difficult to make general claims about social psychology as a unified scientific enterprise. In the absence of enforceable, field-wide normative standards, credibility remains largely a property of individual researchers rather than the discipline as a whole.

Social Priming

Priming is a general term in psychology referring to the automatic influence of stimuli on subsequent thoughts, feelings, or behaviors. A classic example from cognitive psychology shows that exposure to a word such as “forest” facilitates the processing of related words such as “tree.”

Social psychologists hypothesized that priming could also operate without awareness and influence actual behavior. A well-known study appeared to show that exposure to words associated with elderly people caused participants to walk more slowly (Bargh et al., 1996). That article also reported subliminal priming effects, suggesting that behavioral influence could occur without conscious awareness. These findings inspired a large literature that appeared to demonstrate robust priming effects across diverse primes, presentation modes, and behavioral outcomes, with success rates comparable to those documented by Sterling (1959).

In 2012, a group of relatively early-career researchers published a failure to replicate the elderly-walking priming effect (Doyen et al., 2012). The publication of this study marked an important turning point, as it challenged a highly influential finding in the literature. Bargh responded critically to the replication attempt, and the episode became widely discussed. Daniel Kahneman had highlighted priming research in Thinking, Fast and Slow and, concerned about its replicability, encouraged original authors to conduct high-powered replications. These replications were not forthcoming, while independent preregistered studies with larger samples increasingly failed to reproduce key priming effects. As a result, priming research became a focal example in discussions of the replication crisis. Kahneman later distanced himself from strong claims based on this literature and expressed regret about relying on studies with small samples (Kahneman, 2017).

Willful Ignorance and Incompetence In Response to Credibility Concerns

In 2016, Albarracín (as senior author) and colleagues published a meta-analysis concluding that social priming effects exist, although the average effect size was relatively small (d ≈ .30; Weingarten et al., 2016). An effect of this magnitude corresponds to roughly one-third of a standard deviation, which is modest in behavioral terms.

The meta-analysis attempted to address concerns about publication bias—the possibility that high success rates reflect selective reporting of significant results. If selection bias is substantial, observed effect sizes will be inflated relative to the true underlying effects. The authors applied several bias-detection methods that are now widely recognized as having limited diagnostic value. They also used the p-curve method, which had been introduced only two years earlier (Simonsohn et al., 2014). However, the p-curve results were interpreted too optimistically. P-curve can reject the hypothesis that all significant results arise from true null effects, but it does not test whether publication bias is present or whether effect sizes are inflated. Moreover, the observed p-curve was consistent with an average statistical power of approximately 33%. Given such power, one would expect roughly one-third of all studies to yield significant results under unbiased reporting, yet the published literature reports success rates exceeding 90%. This discrepancy strongly suggests substantial selective reporting and implies that the true average effect size is likely smaller than the headline estimate.
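A rough, hypothetical calculation illustrates the size of this discrepancy (the numbers below are chosen for illustration and are not taken from the meta-analysis):

```python
# Back-of-the-envelope sketch: if average power is 33%, how many attempts are
# needed to produce 90 significant results, and how many nonsignificant results
# are then missing from a literature that reports a >90% success rate?
average_power = 0.33
published_significant = 90           # e.g., 90 significant results among 100 published studies
expected_attempts = published_significant / average_power   # ~273 attempts
missing_nonsignificant = expected_attempts - 100            # ~173 results unreported or p-hacked
print(round(expected_attempts), round(missing_nonsignificant))
```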

Sotola (2022) reexamined Weingarten et al.’s meta-analysis using a method called z-curve. Unlike p-curve, z-curve explicitly tests for selective reporting by modeling the distribution of statistically significant results. It is also more robust when studies vary in power and when some studies have true effects while others do not. Whereas p-curve merely rejects the hypothesis that all studies were obtained under a true null, z-curve estimates the maximum proportion of significant results that could be false discoveries, often referred to as an upper bound on the false discovery rate (Bartos & Schimmack, 2022).

Sotola found that priming studies reported approximately 76% significant results—somewhat below the roughly 90% level typically observed in social psychology—but that the estimated average power to produce a significant result was only 12.40%. Z-curve also did not rule out the possibility that all observed significant results could have arisen without a true effect. This finding does not justify the conclusion that social priming effects do not exist, just as observing many white swans does not prove the absence of black swans. However, it does indicate that the existing evidence—including the Weingarten et al. meta-analysis—does not provide conclusive support for claims that social priming effects are robust or reliable. The literature documents many reported effects but offers limited evidential leverage to distinguish genuine effects from selective reporting (many sightings of UFOs, but no real evidence of alien visitors).

Despite these concerns, Weingarten’s meta-analysis continues to be cited as evidence that priming effects are real and that replication failures stem from factors other than low power, selective reporting, and effect size inflation. For example, Iso-Ahola (2025) cites Weingarten et al. while arguing that there is no replication crisis. Notably, this assessment does not engage with subsequent reanalyses of the same data, including Sotola’s z-curve analysis.

This article illustrates what can reasonably be described as willful ignorance: evidence that does not fit the preferred narrative is not engaged. The abstract’s claim that “there is no crisis of replication” is comparable, in terms of evidential standards, to assertions such as “climate change is a hoax”—claims that most scientists regard as unscientific because they dismiss a large and well-documented body of contrary evidence. Declaring the replication problem nonexistent, rather than specifying when, where, and why it does not apply, undermines psychology’s credibility and its aspiration to be taken seriously as a cumulative science.

Willful ignorance is also evident in a recent meta-analysis, again with Albarracín as senior author. This meta-analysis does not include a p-curve analysis and ignores the z-curve reanalysis by Sotola altogether. While the new meta-analysis reports no effects in preregistered studies, its primary conclusion nevertheless remains that social priming has an effect size of approximately d = .30. This conclusion is difficult to reconcile with the preregistered evidence it reports.

A different strategy for defending social priming research is to question the validity of z-curve itself (Pek et al., 2025, preprint, Cognition & Emotion). For example, Pek et al. note that transforming t-values into z-values can break down when sample sizes are extremely small (e.g., N = 5), but they do not acknowledge that the transformation performs well at sample sizes that are typical for social psychological research (e.g., N ≈ 30). Jerry Brunner, a co-author of the original z-curve paper and a professor of statistics, identified additional errors in their arguments (Brunner, 2024). Despite detailed rebuttals, Pek et al. have repeated the same criticisms without engaging with these responses.

This pattern is best described as willful incompetence. Unlike willful ignorance, which ignores inconvenient evidence, willful incompetence involves superficial engagement with evidence while the primary goal remains the defense of a preferred conclusion. In epistemic terms, this resembles attempts to rebut well-established scientific findings by selectively invoking technical objections without addressing their substantive implications.

Z-Curve Analysis of Social Priming

To illustrate how z-curve works and what it reveals about social priming, I analyzed the new meta-analysis of social priming using z-curve. Importantly, I had no influence on the data and only transformed reported information about effect sizes and sampling error into z-values. A z-curve plot provides a descriptive visualization of the evidential strength of published results relative to the null hypothesis. At this descriptive level, few assumptions are required.

The full z-curve analysis fits a statistical model to the distribution of z-values. Studies with low power—due to small effect sizes, small sample sizes, or both—are likely to produce low z-values and often nonsignificant results (z = 1.96 ≈ p = .05). Studies with high power (e.g., 80% power corresponds to z ≈ 2.8) require either moderate-to-large effect sizes or very large sample sizes. Inspection of the plot shows that most studies cluster at low z-values, with relatively few studies producing z-values greater than 2.8. Thus, even before modeling the data, the distribution indicates that the literature is dominated by low-powered studies.

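These benchmarks are easy to verify (a quick sketch, assuming two-sided z-tests at α = .05):

```python
# Quick check of the benchmarks used in the plot: z = 1.96 corresponds to
# p = .05 (two-sided), and 80% power corresponds to a noncentrality of ~2.8.
from scipy import stats

z_crit = stats.norm.ppf(1 - .05 / 2)        # 1.96
p_at_crit = 2 * stats.norm.sf(z_crit)       # .05
ncp_80 = z_crit + stats.norm.ppf(.80)       # ~2.80
power_at_2_8 = stats.norm.sf(z_crit - 2.8)  # ~.80 (ignoring the tiny opposite tail)
print(z_crit, p_at_crit, ncp_80, power_at_2_8)
```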

The plot also reveals clear evidence of selective reporting. If results were reported without selection, the distribution of z-values would decline smoothly around the significance threshold. Instead, the mode of the distribution lies just above the significance criterion. The right tail declines gradually, whereas the left side drops off sharply. There are too many results with p ≈ .04 and too few with p ≈ .06. This asymmetry provides direct visual evidence of publication bias, independent of any modeling assumptions.

Z-curve uses the distribution of statistically significant results to estimate the Expected Replication Rate (ERR) and the Expected Discovery Rate (EDR). The ERR estimate is conceptually similar to p-curve–based power estimates but is more robust when studies vary in power. In the present analysis, the estimated ERR of 34% closely matches the p-curve estimate reported by Weingarten et al. (33%) but is substantially higher than Sotola’s earlier z-curve estimate (12.5%). However, ERR estimates assume that studies can be replicated exactly, an assumption that is rarely satisfied in psychological research. Comparisons between ERR estimates and actual replication outcomes typically show lower success rates in practice (Bartos & Schimmack, 2022). Moreover, ERR is an average: approximately half of studies have lower replication probabilities, but we generally do not know which studies these are.

The EDR estimates the proportion of all studies conducted—including unpublished ones—that are expected to yield statistically significant results. In this case, the EDR point estimate is 19%, but there is substantial uncertainty because it must be inferred from the truncated set of significant results. Notably, the confidence interval includes values as low as 5%, which is consistent with a scenario in which social priming effects are absent across studies. Thus, these results replicate Sotola’s conclusion that the available evidence does not demonstrate that any nontrivial proportion of studies produced genuine social priming effects.

Pek et al. (2025) noted that z-curve estimates can be overly optimistic if researchers not only select for statistical significance but also preferentially report larger effect sizes. In their simulations, the EDR was overestimated by approximately 10 percentage points. This criticism, however, weakens rather than strengthens the evidential case for social priming, as an EDR of 9% is even less compatible with robust effects than an EDR of 19%.

The z-curve results also provide clear evidence of heterogeneity in statistical power. Studies selected for significance have higher average power than the full set of studies (ERR = 34% vs. EDR = 18%). Information about heterogeneity is especially evident below the x-axis. Studies with nonsignificant results (z = 0 to 1.95) have estimated average power of only 18–20%. Even studies with significant results and z-values up to 4 have estimated average power ranging from 23% to 56%. To expect an exact replication to succeed with 80% power, a study would need to produce a z-value above 4, yet the plot shows that very few studies reach this level.

Adjusting Alpha To Lower False Positive Risk

Z-curve can also be used to examine how changing the significance threshold affects false discoveries. With the conventional α = .05 criterion, one out of twenty tests of a true null hypothesis will be significant by chance. Lowering α to .01 reduces this rate to one in one hundred. However, stricter thresholds also reduce power and discovery rates. In some literatures, the reduction in false discoveries outweighs the cost of fewer significant results (Soto & Schimmack, 2024). This is not the case for social priming.

Setting α = .01 (z = 2.58) lowers the point estimate of the false discovery rate from 23% to 11%, but the 95% confidence interval still includes values up to 100%.

Setting α = .001 reduces the point estimate to 3%, yet uncertainty remains so large that all remaining significant results at that threshold could still be false positives.
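These false discovery risk values are based on Soric's upper bound applied to the estimated EDR. The sketch below reproduces the logic for the α = .05 case; the values at stricter thresholds are not reproduced exactly because the EDR itself is re-estimated when α changes:

```python
# Soric's upper bound on the false discovery rate, which z-curve reports as
# the false discovery risk.  Sketch only: at stricter alphas the EDR is
# re-estimated, so the exact numbers reported above are not reproduced here.
def max_false_discovery_rate(edr, alpha):
    return (1 / edr - 1) * alpha / (1 - alpha)

print(max_false_discovery_rate(edr=.19, alpha=.05))   # ~.22, close to the ~23% point estimate above
```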

P-Hacking Real Effects

It is possible to obtain more favorable conclusions about social priming by adopting additional assumptions. One such assumption is that researchers relied primarily on p-hacking rather than selective reporting. Under this scenario, fewer studies would need to be conducted and suppressed. When z-curve is fit under a pure p-hacking assumption, the estimates appear substantially more optimistic.

Under this model, evidence of p-hacking produces an excess of results just below p = .05, which are excluded from estimation. The resulting estimates suggest average power between 40% (EDR = .43) and 52% (ERR = .52), with relatively little heterogeneity. Nonsignificant results with z ≈ 1 are estimated to have average power of 46%, and significant results with z ≈ 4 have average power of 52%. If this model were correct, false positives would be rare and replication should be straightforward, especially with larger samples. The main difficulty with this interpretation is that preregistered replication studies consistently report average effect sizes near zero, directly contradicting these optimistic estimates (Dai et al., 2023).

Conclusion

So, is experimental social psychology a science? The most charitable answer is that it currently resembles a science with limited cumulative results in this domain. Meteorology is not a science because it acknowledges that weather varies; it is a science because it can predict weather with some reliability. Until social priming researchers can specify conditions under which priming effects reliably emerge in preregistered, confirmatory studies, the field lacks the predictive success expected of a mature empirical science.

Meanwhile, willful ignorance and incompetence hamper progress toward this goal and undermine psychology's claim to be a science. Many psychology departments are being renamed to include "science" in their names, but only acting in accordance with the normative rules of science will make psychology a credible science.

Credible sciences also have a history of failures. Making mistakes is part of exploration. Covering them up is not. Meta-analyses of p-hacked studies without bias correction are misleading. Even worse are public significance statements directed at the general public rather than peers. The most honest public significance statement about social priming is “We fucked up. Sorry, we will do better in the future.”

Priming MAY Work (Dai …. Albarracin, 2023, Psych Bull)

Behavioral or social priming is the poster child of the replication crisis. Bargh declined Kahneman’s offer to rehabilitate it by showing that he could replicate his result. The same is true for other social psychologists who became famous for changing people’s behaviors without their knowledge / awareness.

While priming studies have largely disappeared from social psychology journals, the old studies still exist and have not been retracted although we know that they were obtained with unscientific methods that make it easy to present false evidence for false claims.

This allows unscientific meta-analysts to analyze these data and to continue to claim that social priming is a real effect, even in 2023 and even in a journal that is considered methodologically rigorous, although the name suggests otherwise (Psych Bull).

Dai, W., Yang, T., White, B. X., Palmer, R., Sanders, E. K., McDonald, J. A., Leung, M., & Albarracín, D. (2023). Priming behavior: A meta-analysis of the effects of behavioral and nonbehavioral primes on overt behavioral outcomes. Psychological Bulletin, 149(1-2), 67–98. https://doi.org/10.1037/bul0000374

In the old days, the social power of censorship (peer review) in scientific journals made it difficult to fight back against paradigm prisoners who try to keep their little world of academia alive. Fortunately, consumers of scientific research now have options to ensure that they are not feeding their minds junk science.

I asked my ChatGPT (customized with prior discussions and background information) to evaluate this article with a set of questions that anybody could ask an AI.

Critically evaluate this article.

Is the literature review accurate, or does it misrepresent the broader literature?
0 = totally false to 10 = totally accurate and consistent with the current literature

Is this meta-analysis up to date, or does it use outdated methods?
0 = totally outdated to 10 = totally up to date

Are the results interpreted correctly?
0 = totally misrepresents the results to 10 = totally accurate

Is the article integrated with the existing literature?
0 = not integrated at all to 10 = accurately interpreted in the context of the broader literature

The scores are less important (as they would require comparison scores) than the reasons for the scores. Importantly, AI is neutral and more balanced than most humans, myself included, usually are.

Strengths

  • The paper integrates its moderator findings into established theoretical disputes (perception–behavior link vs goal mediation), and it connects specific moderation patterns (e.g., goal value manipulations) to prior work and theoretical interpretations.
  • It frames its contribution clearly as both (a) an expanded synthesis and (b) a mechanism-relevant test across moderators.

Limits

  • Integration is strongest for the authors' preferred theoretical framing; it is thinner on integrating the skeptical/critical meta-science side beyond noting that bias methods have "become ever more sophisticated" and providing the Lakens critique as a prompt. In other words, it engages critique more as a motivation for adding diagnostics than as a serious alternative interpretation of what the literature's apparent "robustness" means.

I did ask a couple of follow-up questions, mainly about the well-known problem that average effect sizes are meaningless when studies are heterogeneous; priming sometimes works and sometimes it does not. To make priming research actually useful, we need to specify WHEN priming works. "Priming MAY work" or "priming SOMETIMES works" is not a scientific finding.

On this point, the meta-analysis is as silent as most others, but the paper will be cited as "priming works and has a moderate effect size." Only psychologists believe that this is enough to defend priming against critics who say, "If priming works, why don't you do a preregistered study with high power and show us that it works?"

ChatGPT Summary: Does This Meta-Analysis Tell Us When Priming Works?

The authors present an ambitious meta-analysis of priming effects on behavior and argue that replication failures can be reconciled with a “moderate” average effect size by appealing to heterogeneity and moderators. In principle, this is a legitimate defense: heterogeneous effects can exist even when many individual studies fail to replicate. In practice, however, the paper does not deliver what this defense requires.

Although substantial heterogeneity is documented, the moderator analyses do not clearly identify which studies work and which do not. Most examined moderators are weak, inconsistent, or explain only a small fraction of the variance. As a result, the analysis does not partition the literature into a subset of theoretically coherent, reliably effective studies versus a subset that fails. Without such discrimination, heterogeneity becomes descriptive rather than explanatory.

This creates a central tension in the paper’s conclusions. The authors argue for a nonzero average priming effect while simultaneously acknowledging widespread replication failures and publication bias. Yet they do not show that the moderators they test successfully predict when priming should occur. Consequently, the results do not support a practical or falsifiable claim about replicability. A new study cannot be classified, based on their findings, as likely to succeed or fail.

The strongest conclusion supported by the evidence is therefore modest: priming effects may exist under some conditions, but those conditions are not empirically established by the present meta-analysis. Claims of “moderate” effects should be interpreted as population averages over a poorly understood mixture of successful and unsuccessful studies, rather than as evidence that priming reliably works in practice.

In short, the paper shows that priming has not been decisively ruled out—but it does not show when, how, or how often it actually works.

POSTSCRIPT

PET allows for an effect size of d = 0.025. However, because this estimate rejects H0 (d = 0) at the 5% level, the authors switch to the PEESE estimate, which is always higher. If they had used alpha = .01 or a minimum effect size of d = .1 as the null value, PET would not have been rejected, and the average effect size estimate would be close to zero. Did the authors p-hack PET to be significant? It does not matter. The evidence is weak and the main problem remains: heterogeneity without a clear moderator that explains when priming works and when it does not. This is psycho-science, not real science.
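For readers unfamiliar with these estimators: PET and PEESE are inverse-variance-weighted regressions of observed effect sizes on their standard errors (PET) or sampling variances (PEESE), and the intercept serves as the bias-corrected estimate of the average effect. The sketch below uses made-up data, not the meta-analytic dataset:

```python
# PET/PEESE sketch: weighted regressions of effect sizes on their standard
# errors (PET) or variances (PEESE); the intercept is the bias-corrected
# estimate.  The data below are simulated for illustration only.
import numpy as np

def pet_peese(d, se):
    d, se = np.asarray(d, float), np.asarray(se, float)
    w = np.diag(1 / se**2)                       # inverse-variance weights
    def wls_intercept(predictor):
        X = np.column_stack([np.ones_like(d), predictor])
        return np.linalg.solve(X.T @ w @ X, X.T @ w @ d)[0]
    return {"PET": wls_intercept(se), "PEESE": wls_intercept(se**2)}

rng = np.random.default_rng(3)
se = rng.uniform(.05, .4, 40)
d = rng.normal(0.0, se) + 0.8 * se   # no true effect, plus small-study (publication) bias
print(pet_peese(d, se))              # PET near zero; PEESE typically somewhat higher
```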

PPS

Priming may work, but NOT when a study is preregistered. (hm?)