It is my pleasure to post the first guest post on the R-Index blog. The blog post is written by my colleague and partner in “crime”-detection, Jerry Brunner. I hope we will see many more guest posts by Jerry in the future.
GUEST POST:
Jerry Brunner
Department of Statistical Sciences
University of Toronto
First, my thanks to the mysterious Dr. R for the opportunity to do this guest post. At issue are the estimates of population mean power produced by the online p-curve app. The current version is 4.06, available at http://www.p-curve.com/app4/pcurve4.php. As the p-curve team (Simmons, Nelson, and Simonsohn) observe in their blog post entitled “P-curve handles heterogeneity just fine” at http://datacolada.org/67, the app does well on average as long as there is not too much heterogeneity in power. They show in one of their examples that it can over-estimate mean power when there is substantial heterogeneity.
Heterogeneity in power is produced by heterogeneity in effect size and heterogeneity in sample size. In the simulations reported at http://datacolada.org/67, sample size varies over a fairly narrow range, as one might expect in a meta-analysis of small-sample studies. But what if we wanted to estimate mean power for sets of studies with large heterogeneity in sample size: an entire discipline, say, or a sub-area, a journal, or a psychology department? Sample size would be much more variable.
This post gives an example in which the p-curve app consistently over-estimates population mean power under realistic heterogeneity in sample size. To demonstrate that heterogeneity in sample size alone is a problem for the online p-curve app, population effect size was held constant.
In 2016, Brunner and Schimmack developed an alternative p-curve method (p-curve 2.1), which performs much better than the online app p-curve 4.06. P-curve 2.1 is fully documented and evaluated in Brunner and Schimmack (2018). This is the most recent version of the notorious and often-rejected paper mentioned in https://replicationindex.com/201/03/25/open-discussion-forum. It has been re-written once again and submitted to Meta-Psychology. It will shortly be posted during the open review process, but in the meantime I have put a copy on my website at http://www.utstat.toronto.edu/~brunner/papers/Zcurve6.7.pdf.
P-curve 2.1 is based on Simonsohn, Nelson and Simmons’ (2014) p-curve estimate of effect size. It is designed specifically for the situation where there is heterogeneity in sample size, but just a single fixed effect size. P-curve 2.1 is a simple, almost trivial application of p-curve 2.0. It first uses the p-curve 2.0 method to estimate a common effect size. It then combines that estimated effect size and the observed sample sizes to calculate an estimated power for each significance test in the sample. The sample mean of the estimated power values is the p-curve 2.1 estimate.
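The second stage of p-curve 2.1 can be sketched in a few lines of R. This is my own illustration, not the heteroNpcurveCHI function itself; the function name and the toy sample sizes are made up, and it assumes chi-squared tests whose noncentrality parameter is sample size times the effect size parameter, as in the simulation code later in this post.

```r
# Sketch of p-curve 2.1's second stage (illustrative; the real implementation
# is heteroNpcurveCHI in the linked function collection).
# es:    common effect size, as estimated by p-curve 2.0 in the first stage
# nn:    vector of observed total sample sizes, one per significance test
# dfree: degrees of freedom of the chi-squared tests
pcurve21stage2 = function(es, nn, dfree, alpha = 0.05)
    {
    crit = qchisq(1 - alpha, dfree)                   # Critical value of the test
    estpow = 1 - pchisq(crit, dfree, ncp = nn * es)   # Estimated power of each test
    mean(estpow)                                      # The p-curve 2.1 estimate
    }
# Toy usage with made-up sample sizes:
pcurve21stage2(es = 0.108812, nn = c(40, 80, 200), dfree = 5)
```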
One of the virtues of p-curve is that it allows for publication bias, using only significant test statistics as input. The population mean power being estimated is the mean power of the sub-population of tests that happened to be significant. To compare the performance of p-curve 4.06 to p-curve 2.1, I simulated samples of significant test statistics with a single effect size, and realistic heterogeneity in sample size.
Here’s how I arrived at the “realistic” sample sizes. In another project, Uli Schimmack had harvested a large number of t and F statistics from the journal Psychological Science, from the years 2001-2015. I used N = df + 2 to calculate implied total sample sizes. I then eliminated all sample sizes less than 20 or greater than 500, and randomly sampled 5,000 of the remaining numbers. These 5,000 numbers will be called the “Psychological Science urn.” They are available at http://www.utstat.toronto.edu/~brunner/data/power/PsychScience.urn3.txt, and can be read directly into R with the scan function.
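The df-to-N conversion and screening step is a one-liner. A minimal sketch with toy degrees of freedom (my own made-up values, not the harvested data; the rule N = df + 2 treats every statistic like a two-sample t, using the denominator df for F statistics):

```r
# Implied total sample size from reported degrees of freedom: N = df + 2.
# Toy values, e.g. t(28), t(58), t(123), and one huge web survey t(998).
t.df = c(28, 58, 123, 998)
N = t.df + 2                 # Implied total sample sizes: 30, 60, 125, 1000
N = N[N >= 20 & N <= 500]    # Screen out tiny and enormous studies
N                            # 30, 60, 125
```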
The numbers in the Psychological Science urn are not exactly sample sizes and they are not a true random sample. In particular, truncating the distribution at 500 makes them less heterogeneous than real sample sizes, since web surveys with enormous sample sizes are eliminated. Still, I believe the numbers in the Psychological Science urn may be fairly reflective of the sample sizes in psychology journals. Certainly, they are better than anything I would be able to make up. Figure 1 shows a histogram, which is right skewed as one might expect.
By sampling with replacement from the Psychological Science urn, one could obtain a random sample of sample sizes, similar to sampling without replacement from a very large population of studies. However, that’s not what I did. Selection for significance tends to select larger sample sizes, because tests based on smaller sample sizes have lower power and so are less likely to be significant. The numbers in the Psychological Science urn come from studies that passed the filter of publication bias. It is the distribution of sample size after selection for significance that should match Figure 1.
To take care of this issue, I constructed a distribution of sample size before selection and chose an effect size that yielded (a) population mean power after selection equal to 0.50, and (b) a population distribution of sample size after selection that exactly matched the relative frequencies in the Psychological Science urn. The fixed effect size, in the w metric of Cohen (1988, p. 216), was w = 0.108812. This is roughly Cohen’s “small” value of w = 0.10. If you have done any simulations involving literal selection for significance, you will realize that getting the numbers to come out just right by trial and error would be nearly impossible. I got the job done by using a theoretical result from Brunner and Schimmack (2018). Details are given at the end of this post, after the results.
I based the simulations on k=1,000 significant chi-squared tests with 5 degrees of freedom. This large value of k (the number of studies, or significance tests on which the estimates are based) means that estimates should be very accurate. To calculate the estimates for p-curve 4.06, it was easy enough to get R to write input suitable for pasting into the online app. For p-curve 2.1, I used the function heteroNpcurveCHI, part of a collection developed for the Brunner and Schimmack paper. The code for all the functions is available at http://www.utstat.toronto.edu/~brunner/Rfunctions/estimatR.txt. Within R, the functions can be defined with source("http://www.utstat.toronto.edu/~brunner/Rfunctions/estimatR.txt"). Then to see a list of functions, type functions() at the R prompt.
Recall that population mean power after selection is 0.50. The first time I ran the simulation, the p-curve 4.06 estimate was 0.64, with a 95% confidence interval from 0.61 to 0.66. The p-curve 2.1 estimate was 0.501. Was this a fluke? The results of five more independent runs are given in the table below. Again, the true value of mean power after selection for significance is 0.50.
P-curve 2.1 estimate | P-curve 4.06 estimate | P-curve 4.06 95% confidence interval
0.510 | 0.64 | 0.61 to 0.67
0.497 | 0.62 | 0.59 to 0.65
0.502 | 0.62 | 0.59 to 0.65
0.509 | 0.64 | 0.61 to 0.67
0.487 | 0.61 | 0.57 to 0.64
It is clear that the p-curve 4.06 estimates are consistently too high, while p-curve 2.1 is on the money. One could argue that an error of around twelve percentage points is not too bad (really?), but certainly an error of one percentage point is better. Also, eliminating sample sizes greater than 500 substantially reduced the heterogeneity in sample size. If I had left the huge sample sizes in, the p-curve 4.06 estimates would have been ridiculously high.
Why did p-curve 4.06 fail? The answer is that even with complete homogeneity in effect size, the Psychological Science urn was heterogeneous enough to produce substantial heterogeneity in power. Figure 2 is a histogram of the true (not estimated) power values.
Figure 2 shows that even under homogeneity in effect size, a sample size distribution matching the Psychological Science urn can produce substantial heterogeneity in power, with a mode near one even though the mean is 0.50. In this situation, p-curve 4.06 fails. P-curve 2.1 is clearly preferable, because it specifically allows for heterogeneity in sample size.
Of course p-curve 2.1 does assume homogeneity in effect size. What happens when effect size is heterogeneous too? The paper by Brunner and Schimmack (2018) contains a set of large-scale simulation studies comparing estimates of population mean power from p-curve, p-uniform, maximum likelihood and z-curve, a new method dreamed up by Schimmack. The p-uniform method is based on van Assen, van Aert, and Wicherts (2014), extended to power estimation as in p-curve 2.1. The p-curve method we consider in the paper is p-curve 2.1. It does okay as long as heterogeneity in effect size is modest. Other methods may be better, though. To summarize, maximum likelihood is most accurate when its assumptions about the distribution of effect size are satisfied or approximately satisfied. When effect size is heterogeneous and the assumptions of maximum likelihood are not satisfied, z-curve does best.
I would not presume to tell the p-curve team what to do, but I think they should replace p-curve 4.06 with something like p-curve 2.1. They are free to use my heteroNpcurveCHI and heteroNpcurveF functions if they wish. A reference to Brunner and Schimmack (2018) would be appreciated.
Details about the simulations
Before selection for significance, there is a bivariate distribution of sample size and effect size. This distribution is affected by the selection process, because tests with higher effect size or sample size (or especially, both) are more likely to be significant. The question is, exactly how does selection affect the joint distribution? The answer is in Brunner and Schimmack (2018). This paper is not just a set of simulation studies. It also has a set of “Principles” relating the population distribution of power before selection to its distribution after selection. The principles are actually theorems, but I did not want it to sound too mathematical. Anyway, Principle 6 says that to get the probability of a (sample size, effect size) pair after selection, take the probability before selection, multiply by the power calculated from that pair, and divide by the population mean power before selection.
In the setting we are considering here, there is just a single effect size, so it’s even simpler. The probability of a (sample size, effect size) pair is just the probability of the sample size. Also, we know the probability distribution of sample size after selection. It’s the relative frequencies of the Psychological Science urn. Solving for the probability of sample size before selection yields this rule: the probability of sample size before selection equals the probability of sample size after selection, divided by the power for that sample size, and multiplied by population mean power before selection.
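As a toy numerical check of this inversion (my own numbers, not from the paper): suppose that after selection, the sample sizes 30 and 100 each occur with probability 0.5, with powers 0.2 and 0.8 respectively.

```r
# Toy check of the inversion rule (made-up numbers).
p.after = c(0.5, 0.5)   # Distribution of sample size after selection
pow     = c(0.2, 0.8)   # Power for each sample size
unnorm = p.after/pow                # Proportional to probabilities before selection
meanpow.before = 1/sum(unnorm)      # Population mean power before selection: 0.32
p.before = unnorm * meanpow.before  # Probabilities before selection: 0.8 and 0.2
# Forward check with Principle 6: recover the after-selection distribution.
p.before * pow / meanpow.before     # 0.5 and 0.5, as required
```

So the smaller sample size, with its lower power, must be much more common before selection (probability 0.8) to end up equally common after selection.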
This formula will work for any fixed effect size. That is, for any fixed effect size, there is a probability distribution of sample size before selection that makes the distribution of sample size after selection exactly match the Psychological Science frequencies in Figure 1. Effect size can be anything. So, choose the effect size that makes expected (that is, population mean) power after selection equal to some nice value like 0.50.
Here’s the R code. First, we read the Psychological Science urn and make a table of probabilities.
rm(list=ls())
options(scipen=999) # To avoid scientific notation
source("http://www.utstat.toronto.edu/~brunner/Rfunctions/estimatR.txt"); functions()
PsychScience = scan("http://www.utstat.toronto.edu/~brunner/data/power/PsychScience.urn3.txt")
hist(PsychScience, xlab='Sample size', breaks=100,
     main='Figure 1: The Psychological Science Urn')
# A handier urn, for some purposes
nvals = sort(unique(PsychScience)) # There are 397 rather than 8000 values
nprobs = table(PsychScience)/sum(table(PsychScience))
# sum(nvals*nprobs) = 81.8606 = mean(PsychScience)
For any given effect size, the frequencies from the Psychological Science urn can be used to calculate expected power after selection. Minimizing the (squared) difference between this value and the desired mean power yields the required effect size.
# Minimize this function to find effect size giving desired power
# after selection for significance.
fun = function(es, wantpow, dfreedom)
    {
    alpha = 0.05; cv = qchisq(1-alpha, dfreedom)
    epow = sum( (1-pchisq(cv, df=dfreedom, ncp=nvals*es)) * nprobs )
    # cat("es = ", es, " Expected power = ", epow, "\n")
    (epow-wantpow)^2
    } # End of all the fun
# Find needed effect size for chi-squared with df=5 and desired
# population mean power AFTER selection.
popmeanpower = 0.5 # Change this value if you wish
EffectSize = nlminb(start=0.01, objective=fun, lower=0,
                    df=5, wantpow=popmeanpower)$par
EffectSize # 0.108812
Calculate the probability distribution of sample size before selection.
# The distribution of sample size before selection is proportional to the
# distribution after selection divided by power, term by term.
crit = qchisq(0.95, 5)
powvals = 1 - pchisq(crit, 5, ncp=nvals*EffectSize)
Pn = nprobs/powvals
EG = 1/sum(Pn)
cat("Expected power before selection = ", EG, "\n")
Pn = Pn*EG # Probability distribution of n before selection
Generate test statistics before selection.
nsim = 50000 # Initial number of simulated statistics. This is over-kill. Change the value if you wish.
set.seed(4444) # For repeated simulations, execute the rest of the code repeatedly.
nbefore = sample(nvals, size=nsim, replace=TRUE, prob=Pn)
ncpbefore = nbefore*EffectSize
powbefore = 1 - pchisq(crit, 5, ncp=ncpbefore)
Ybefore = rchisq(nsim, 5, ncp=ncpbefore)
Select for significance.
sigY = Ybefore[Ybefore > crit]
sigN = nbefore[Ybefore > crit]
sigPOW = 1 - pchisq(crit, 5, ncp=sigN*EffectSize)
hist(sigPOW, xlab='Power', breaks=100, freq=F,
     main='Figure 2: Power After Selection for Significance')
Estimate mean power both ways.
# Two estimates of expected power before selection
c( length(sigY)/nsim, mean(powbefore) )
c(popmeanpower, mean(sigPOW)) # Golden
length(sigY)
k = 1000 # Select 1,000 significant results.
Y = sigY[1:k]; n = sigN[1:k]; TruePower = sigPOW[1:k]
# Estimate with p-curve 2.1
heteroNpcurveCHI(Y=Y, dfree=5, nn=n) # 0.5058606 the first time.
# Write out chi-squared statistics for pasting into the online app
for(j in 1:k) cat("chi2(5) =", Y[j], "\n")
References
Brunner, J. and Schimmack, U. (2018). Estimating population mean power under conditions of heterogeneity and selection for significance. Under review. Available at http://www.utstat.toronto.edu/~brunner/papers/Zcurve6.7.pdf.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd Edition), Hillsdale, New Jersey: Erlbaum.
Simonsohn, U., Nelson, L. D., & Simmons, J. P. (2014). P-curve and effect size: correcting for publication bias using only significant results. Perspectives on Psychological Science, 9, 666-681.
van Assen, M. A. L. M., van Aert, R. C. M., & Wicherts, J. M. (2014). Meta-analysis using effect size distributions of only statistically significant studies. Psychological Methods, 20, 293-309.