
Loken and Gelman’s Simulation Is Not a Fair Comparison

“What I’d like to say is that it is OK to criticize a paper, even [if] it isn’t horrible.” (Gelman, 2023; the missing “if” is a typo in the original)

In this spirit, I would like to criticize Loken and Gelman’s confusing article about the interpretation of effect sizes in studies with small samples and selection for significance. They compare random measurement error to a backpack and the outcome of a study to running speed. Common sense suggests that the same individual under identical conditions would run faster without a backpack than with one. Psychometric theory points to the same outcome: random measurement error attenuates population effect sizes, which makes it harder to obtain significance and produces, on average, weaker effect sizes.
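This attenuation can be verified directly. The following minimal R sketch is my own illustration (the variable names and sample size are arbitrary); it uses the same error standard deviation of 0.5 as Loken and Gelman’s simulation below, which implies a reliability of 1/1.25 = .80 for each measure:

set.seed(1)
n <- 1e6                              # large sample to suppress sampling error
r <- .15                              # true correlation
x <- rnorm(n)
y <- r*x + rnorm(n, 0, sqrt(1 - r^2))
x.obs <- x + rnorm(n, 0, .5)          # observed = true + error; reliability = 1/1.25 = .80
y.obs <- y + rnorm(n, 0, .5)
cor(x, y)                             # ~ .15 (true correlation)
cor(x.obs, y.obs)                     # ~ .15 * .80 = .12 (attenuated correlation)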

The key point of Loken and Gelman’s article is to suggest that this intuition fails under some conditions: “Should we assume that if statistical significance is achieved in the presence of measurement error, the associated effects would have been stronger without noise? We caution against the fallacy.”

To support their claim that common sense is a fallacy under certain conditions, they present the results of a simple simulation study. After some concerns about their conclusions were raised, Loken and Gelman shared the actual code of their simulation study. In this blog post, I share the code with annotations and reproduce their results. I also show that their results are based on selecting for significance only for the measure with random measurement error (the backpack) and not for the measure without random measurement error (no backpack). Reversing the selection shows that selection for significance without measurement error produces stronger effect sizes even more often than selection for significance with a backpack. Thus, it is not a fallacy to assume that we would all run faster without a backpack, holding all other factors equal. However, a runner with a heavy backpack and tailwinds might run faster than a runner without a backpack facing strong headwinds. While this is true, the influence of wind on performance makes it difficult to see the influence of the backpack. Under identical conditions, backpacks slow people down and random measurement error attenuates effects.

Loken and Gelman’s presentation of the results may explain why some readers, including us, misinterpreted their results to imply that selection bias and random measurement error may interact in some complex way to produce even more inflated estimates of the true correlation. We added some lines of code to their simulation to compute the average correlations after selection for significance separately for the measure without error and the measure with error. This way, both measures benefit equally from selection bias. The plot also provides more direct evidence about the amount of bias that is introduced by selection bias and random measurement error. In addition, the plot shows the average 95% confidence intervals around the estimated correlation coefficients.


The plot shows that for large samples (N > 1,000), the measure without error always produces the expected true correlation of r = .15, whereas the measure with error always produces the expected attenuated correlation of r = .15 * .80 = .12. As sample sizes get smaller, the effect of selection bias becomes apparent. For the measure without error, the observed effect sizes are now inflated. For the measure with error, the attenuation offsets the inflation, and the two biases cancel each other out to produce more accurate estimates of the true effect size than the measure without error. For sample sizes below N = 400, however, both measures produce inflated estimates, and in really small samples the attenuation effect due to unreliability is overwhelmed by selection bias. Although the difference due to unreliability becomes negligible and approaches zero, random measurement error combined with selection bias never produces stronger estimates than the measure without error. Thus, it remains true that we should expect a measure without random measurement error to produce stronger correlations than a measure with random measurement error. This fundamental principle of psychometrics, however, does not warrant the conclusion that an observed statistically significant correlation in a small sample underestimates the true correlation, because the correlation may have been inflated by selection for significance.
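A simple calculation clarifies why significant results in small samples must be inflated. A correlation is only significant (p < .05, two-tailed) if it exceeds a critical value that depends on sample size, and this critical value exceeds the true correlations of .15 and .12 until samples are fairly large. The helper function below is my own sketch, not part of Loken and Gelman’s code:

# smallest correlation that reaches p < .05 (two-tailed) for a given sample size
r.crit <- function(n, alpha = .05) {
  tcrit <- qt(1 - alpha/2, df = n - 2)
  tcrit / sqrt(tcrit^2 + n - 2)
}
r.crit(50)   # ~ .28: every significant r at N = 50 exceeds the true value of .15
r.crit(400)  # ~ .10: significance no longer forces estimates above .12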

The plot also shows how researchers can avoid misinterpreting inflated effect size estimates in small samples. In small samples, confidence intervals are wide. Figure 2 shows that the confidence interval around inflated effect size estimates in small samples is so wide that it includes the true correlation of r = .15. The width of the confidence interval in small samples makes it clear that the study provides no meaningful information about the size of an effect. This does not mean the results are useless. After all, the results correctly show that the relationship between the variables is positive rather than negative. For the purpose of effect size estimation, it is necessary to conduct meta-analyses that include studies with significant and non-significant results. Furthermore, meta-analyses need to test for the presence of selection bias and correct for it when it is present.
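To illustrate, consider a just-significant correlation of r = .28 in a sample of N = 50. A short sketch using the standard Fisher z approximation (my own addition, not part of the original code) shows that the 95% confidence interval easily covers the true value of r = .15:

# 95% confidence interval for r = .28 at N = 50, based on Fisher's z transformation
r.obs <- .28; n <- 50
z  <- atanh(r.obs)                               # Fisher's z of the observed correlation
se <- 1 / sqrt(n - 3)                            # standard error on the z scale
round(tanh(z + c(-1, 1) * qnorm(.975) * se), 2)  # ~ .00 to .52, includes r = .15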

P.S. If somebody claims that they ran a marathon in 2 hours with a heavy backpack, they may not be lying. They may just not tell you all of the information. We often fill in the blanks and that is where things can go wrong. If the backpack were a jet pack and the person were using it to fly for part of the race, we would no longer be surprised by the amazing feat. Similarly, if somebody tells you that they got a correlation of r = .8 in a sample of N = 8 with a measure that has only 20% reliable variance, you should not be surprised if they tell you that they got this result after picking 1 out of 20 studies, because selection for significance will produce strong correlations in small samples even if there is no correlation at all. Once they tell you that they tried many times to get the one significant result, it is obvious that the next study is unlikely to replicate a significant result.
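The N = 8 example is easy to verify with a few lines of R (my own sketch; the seed is arbitrary). With df = 6, any significant correlation must exceed r = .71, and the most extreme of 20 null studies often clears that bar by chance alone:

# critical correlation for p < .05 (two-tailed) at N = 8
tcrit <- qt(.975, df = 6)
tcrit / sqrt(tcrit^2 + 6)                         # ~ .71

# most extreme correlation out of 20 studies with N = 8 and a true correlation of zero
set.seed(123)
max(replicate(20, abs(cor(rnorm(8), rnorm(8)))))  # often exceeds .70 by chance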

Sometimes You Can Be Faster With a Heavy Backpack

Annotated Original Code

 
### This is the final code used for the simulation studies posted by Andrew Gelman on his blog
 
### Comments are highlighted with my initials #US#
 
# First just the original two plots, high power N = 3000, low power N = 50, true slope = .15
 
r <- .15
sims<-array(0,c(1000,4))
xerror <- 0.5
yerror<-0.5
 
for (i in 1:1000) {
x <- rnorm(50,0,1)
y <- r*x + rnorm(50,0,1) 
 
#US# this is a sloppy way to simulate a correlation of r = .15
#US# The proper code is r*x + rnorm(50,0,1)*sqrt(1-r^2)
#US# However, with the specific value of r = .15, the difference is trivial
#US# (see the added check after the loop below)
#US# However, however, it raises some concerns about expertise
 
xx<-lm(y~x)
sims[i,1]<-summary(xx)$coefficients[2,1]
x<-x + rnorm(50,0,xerror)
y<-y + rnorm(50,0,yerror)
xx<-lm(y~x)
sims[i,2]<-summary(xx)$coefficients[2,1]
 
x <- rnorm(3000,0,1)
y <- r*x + rnorm(3000,0,1)
xx<-lm(y~x)
sims[i,3]<-summary(xx)$coefficients[2,1]
x<-x + rnorm(3000,0,xerror)
y<-y + rnorm(3000,0,yerror)
xx<-lm(y~x)
sims[i,4]<-summary(xx)$coefficients[2,1]
 
}
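#US# Added check (my own sketch, not in the original code): quantify how trivial the
#US# difference between the two ways of simulating the correlation is. With the sloppy
#US# code, sd(y) is sqrt(1 + r^2) = 1.011 instead of 1, so the implied population
#US# correlation is .15/1.011 = .148 instead of .15.
x.check <- rnorm(1e6)
sd(r*x.check + rnorm(1e6))                   # ~ 1.011 (sloppy simulation)
sd(r*x.check + rnorm(1e6)*sqrt(1 - r^2))     # ~ 1.000 (proper simulation)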
 
plot(sims[,2] ~ sims[,1],ylab="Observed with added error",xlab="Ideal Study")
abline(0,1,col="red")
 
plot(sims[,4] ~ sims[,3],ylab="Observed with added error",xlab="Ideal Study")
abline(0,1,col="red")
 
#US# There is no major issue with graphs 1 and 2. 
#US# They merely show that high sampling error produces large uncertainty in the estimates.
#US# The small attenuation effect of r = .15 vs. r = .12 is overwhelmed by sampling error
#US# The real issue is the simulation of selection for significance in the third graph
 
# third graph
 
# run 2000 regressions at points between N = 50 and N = 3050 
 
r <- .15
 
propor <-numeric(31)
powers<-seq(50,3050,100)
 
#US# These lines of code are added to illustrate the biased selection for significance 
propor.reversed.selection <-numeric(31) 
mean.sig.cor.without.error <- numeric(31) # mean correlation for the measure without error when t > 2
mean.sig.cor.with.error <- numeric(31) # mean correlation for the measure with error when t > 2
 
#US# It is sloppy to refer to sample sizes as powers. 
#US# In between subject studies, the power to produce a true positive result
#US# is a function of the population correlation and the sample size
#US# With population correlations fixed at r = .15 or r = .12, sample size is the
#US# only variable that influences power
#US# However, power varies from alpha to 1 and it would be interesting to compare the 
#US# power of studies with r = .15 and r = .12 to produce a significant result.
#US# The claim that "one would always run faster without a backpack" 
#US# could be interpreted as a claim that it is always easier to obtain a 
#US# significant result without measurement error, r = .15, than with measurement error, r = .12
#US# This claim can be tested with Loken and Gelman’s simulation by computing 
#US# the percentage of significant results obtained without and with measurement error
#US# Loken and Gelman do not show this comparison of power.
#US# The reason might be the confusion of sample size with power. 
#US# While sample sizes are held constant, power varies as a function of the population correlations
#US# without, r = .15, and with, r = .12, measurement error. 
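 
#US# Added illustration (my own sketch, not part of the original code): the power
#US# comparison described above, approximated with Fisher's z transformation in base R.
power.r <- function(n, r, alpha = .05) {
  lambda <- atanh(r) * sqrt(n - 3)   # noncentrality on Fisher's z scale
  zcrit  <- qnorm(1 - alpha/2)
  pnorm(lambda - zcrit) + pnorm(-lambda - zcrit)
}
power.r(50, .15)    # ~ .18 power without measurement error (r = .15)
power.r(50, .12)    # ~ .13 power with measurement error (r = .12)
power.r(3000, .15)  # ~ 1.00: at large N both measures are almost always significant
power.r(3000, .12)  # ~ 1.00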
 
xerror<-0.5
yerror<-0.5
 
j = 1
i = 1
 
for (j in 1:31)  {
 
sims<-array(0,c(1000,4))
for (i in 1:1000) {
x <- rnorm(powers[j],0,1)
y <- r*x + rnorm(powers[j],0,1)
#US# the same sloppy simulation of population correlations as before
xx<-lm(y~x)
sims[i,1:2]<-summary(xx)$coefficients[2,1:2]
x<-x + rnorm(powers[j],0,xerror)
y<-y + rnorm(powers[j],0,yerror)
xx<-lm(y~x)
sims[i,3:4]<-summary(xx)$coefficients[2,1:2]
}
 
#US# The code is the same as before, it just adds variation in sample sizes
#US# The crucial aspect to understand figure 3 is the following code that 
#US# compares the results for the paired outcomes without and with measurement error
 
#US# Carlos Ungil (https://ch.linkedin.com/in/ungil) pointed out on Gelman's blog
#US# that there is another sloppy mistake in the simulation code that does not alter the results.
#US# The code compares absolute t-values (coefficient/standard error), while the article
#US# talks about inflated effect size estimates. However, while the sampling error variation
#US# creates some variability, the pattern remains the same.
#US# For the sake of reproducibility I kept the comparison of t-values.
 
# find significant observations (t test > 2) and then check proportion
temp<-sims[abs(sims[,3]/sims[,4])> 2,]
 
#US# the use of t > 2 is sloppy and unnecessary.
#US# summary(lm) gives the exact p-values that could be used to select for significance:
#US# summary(xx)$coefficients[2,4] < .05
#US# However, this does not make a substantial difference 
 
#US# The crucial part of this code is that it uses the outcomes of the simulation 
#US# with random measurement error to select for significance
#US# As outcomes are paired, this means that the code sometimes selects outcomes
#US# in which sampling error produces significance with random measurement error 
#US# but not without measurement error. 
 
propor[j] <- table((abs(temp[,3]/temp[,4])> abs(temp[,1]/temp[,2])))[2]/length(temp[,1])
 
#US# this line is added to compute the mean correlation for significant outcomes 
#US# when measurement error is present.
mean.sig.cor.with.error[j] = mean(temp[,3])
 
#US# Conditioning on significance for one of the two measures is a strange way
#US# to compare outcomes with and without measurement error.
#US# Obviously, the opposite selection bias would favor the measure without error.
#US# This can be shown by computing the same proportion after selecting for significance 
#US# for the measure without error
 
temp<-sims[abs(sims[,1]/sims[,2])> 2,]
# t-value of the measure without error (temp[,1]/temp[,2]) vs. the measure with error (temp[,3]/temp[,4])
propor.reversed.selection[j] <- table((abs(temp[,1]/temp[,2]) > abs(temp[,3]/temp[,4])))[2]/length(temp[,4])
 
#US# this line is added to compute the mean correlation for significant outcomes 
#US# without measurement error. 
mean.sig.cor.without.error[j] = mean(temp[,1])
 
print(j)
 
#US# we can also add comparisons that are more meaningful and avoid conditioning
#US# on only one of the two measures; see the plots of mean correlations below
###
 
}
 
 
#US# the plot code had to be modified slightly to have matching y-axes 
#US# I also added a title 
title = "Original Loken and Gelman Code"
 
plot(powers,propor,type="l",
ylim=c(0,1),main=title,  ### added code
xlab="Sample Size",ylab="Prop where error slope greater",col="blue")
 
#US# text that explains what the plot displays, not shown
#US# text(200,.8,"How often is the correlation higher for the measure with error",pos=4)
#US# text(200,.75,"when pairs of outcomes are selected based on significance of",pos=4) 
#US# text(200,.70,"the measure with error?",pos=4)
 
#US# We can now plot the two outcomes in the same figure 
#US# The original color was blue. I used red for the reversed selection
par(new=TRUE)
plot(powers,propor.reversed.selection,type="l",
ylim=c(0,1), ### added code
xlab="Sample Size",ylab="Prop where error slope greater",col="firebrick2")
 
#US# adding a legend 
legend(1500,.9,legend=c("with backpack only sig. \n shown in article \n ",
"without backpack only sig. \n added by me"),pch=c(15),
pt.cex=2,col=c("blue","firebrick2"))
 
#US# adding a horizontal line at 50%
abline(h=.5,lty=2)
 
 
#US# The following code shows the plot of mean correlations after selection for significance
#US# for the measure with error (blue) and the measure without error (red)
 
title = "Comparison of Correlations after Selection for Significance"
 
plot(powers,mean.sig.cor.with.error,type="l",ylim=c(.1,.4),main=title,
xlab="Sample Size",ylab="Mean Observed Correlation",col="blue")
 
par(new=TRUE)
 
plot(powers,mean.sig.cor.without.error,type="l",ylim=c(.1,.4),main="",
xlab="Sample Size",ylab="Mean Observed Correlation",col="firebrick2")
 
#US# adding a legend 
legend(2000,.4,legend=c("with error",
"without error"),pch=c(15),
pt.cex=2,col=c("blue","firebrick2"))
 
#US# adding a horizontal line at the true correlation of r = .15
abline(h=.15,lty=2)