
Personality, Partnership and Well-Being

Personality psychologists have been successful in promoting the Big Five personality factors as a scientific model of personality. Short scales have been developed that make it possible to include Big Five measures in studies with large nationally representative samples. These data have been used to examine the influence of personality on well-being in married couples (Dyrenforth et al., 2010).

The inclusion of partners’ personality in studies of well-being has produced two findings. First, being married to somebody with a desirable personality (low neuroticism, high extraversion, openness, agreeableness, and conscientiousness) is associated with higher well-being. Second, similarity in personality is not a predictor of higher well-being.

A recent JPSP article mostly replicated these results (van Scheppingen, Chopik, Bleidorn, & Denissen, 2019). “Similar to previous studies using difference scores and profile correlations, results from response surface analyses indicated that personality similarity explained a small amount of variance in well-being as compared with the amount of variance explained by linear actor and partner effects” (e51)

Unfortunately, personality psychologists have made little progress in the measurement of the Big Five and continue to use observed scale scores as if they are nearly perfect measures of personality traits. This practice is problematic because it has been demonstrated in numerous studies that a large portion of the variance in Big Five scale scores is measurement error. Moreover, systematic rating biases have been shown to contaminate Big Five scale scores.

Anusic et al. (2009) showed how at least some of the systematic measurement errors can be removed from Big Five scale scores by means of structural equation modelling. In a structural equation model, the shared variance due to evaluative biases can be modelled with a halo factor, while the residual variance is treated as a more valid measure of the Big Five traits.
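
To make this concrete, here is a minimal sketch of such a halo model in Python using the semopy package (which accepts lavaan-style model syntax). The data file and variable names are placeholders rather than the actual HRS variables, and the full couple model discussed below adds a second, correlated halo factor for the partner as well as paths from specific traits to well-being.

```python
import pandas as pd
import semopy

# Placeholder data: one row per respondent with Big Five scale scores
# (file and column names are assumptions, not the actual HRS variables).
data = pd.read_csv("big_five_self_ratings.csv")  # columns: neu, ext, ope, agr, con

# A single evaluative-bias (halo) factor captures the shared variance among
# the five scale scores; the residual variance of each scale is then treated
# as a more valid measure of the specific trait.
model_desc = """
halo =~ ext + agr + con + ope + neu
"""

model = semopy.Model(model_desc)
model.fit(data)
print(model.inspect())  # loadings (neuroticism should load negatively) and residual variances
```

In the couple model, the same specification is repeated for the partner's ratings, the two halo factors are allowed to correlate, and well-being is regressed on the halo factors and on the residual (trait) variances.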

The availability of partner data makes it possible to examine whether the halo biases of husbands and wives are correlated. It is also possible to see whether a partner's halo bias has a positive effect on well-being. As halo bias in self-ratings is often considered a measure of self-enhancement, it is possible that partners who enhance their own personality have a negative effect on well-being. Alternatively, partners who enhance themselves are also more likely to have a positive perception of their partner (Kim et al., 2012), which could increase well-being. An interesting question is how much a partner's actual personality influences well-being after halo bias is removed from partner's ratings of personality.

It was easy to test these hypotheses with the correlations reported in Table 1 of van Scheppingen et al.'s article, which is based on N = 4,464 couples in the Health and Retirement Study. Because information about standard deviations was not provided, all SDs were set to 1. However, the actual SDs of Big Five traits tend to be similar, so this is a reasonable approximation.

I fitted the Halo-Alpha-Beta model to the data, but, as with other datasets, alpha could not be identified. Instead, a positive correlation between agreeableness and extraversion was present in this Big Five measure, which may reflect secondary loadings that could be modelled with items as indicators. I allowed the two halo factors to be correlated and allowed well-being to be predicted by actor halo and partner halo. I also allowed for spousal similarity on each Big Five dimension. Finally, well-being was predicted by self-neuroticism and partner-neuroticism because neuroticism is the strongest predictor of well-being. This model had acceptable fit, CFI = .981, RMSEA = .038.

Figure 1 shows the model and the standardized parameter estimates.

The main finding is that self-halo is the strongest predictor of self-rated well-being. This finding replicates Kim et al.'s (2012) results. True neuroticism (tna; i.e., variance in neuroticism ratings without halo bias) is the second strongest predictor. The third strongest predictor is partner's true neuroticism, although it explains less than 1% of the variance in well-being. The model also shows a positive correlation between partners' halo factors, r = .32. This is the first demonstration that spouses' halos are positively correlated. More research is needed to examine whether this is a robust finding and what factors contribute to spousal similarity in halo. This correlation has implications for spousal similarity in actual personality traits. After removing shared halo variance, spousal similarity is only notable for openness, r = .19, and neuroticism, r = .13.

The key implication of this model is that actual personality traits, at least those measured with the Big Five, have a relatively small effect on well-being. The only trait with a notable contribution is neuroticism, but partner's neuroticism explains less than 1% of the variance in well-being. An open question is whether the effect of self-halo should be considered a true effect on well-being or whether it simply reflects shared method variance (Schimmack & Kim, in press).

It is well-known that well-being is relatively stable over extended periods of time (Anusic & Schimmack, 2016; Schimmack & Oishi, 2005) and that spouses have similar levels of well-being (Schimmack & Lucas, 2010). The present results suggest that the Big Five personality traits account for only a small portion of the stable variance that is shared between spouses. This finding should stimulate research that looks beyond the Big Five to study well-being in married couples. This blog post shows the utility of structural equation modelling to do so.

The (not so great) Power of Situational Manipulations in the Laboratory

Social psychology is based on the fundamental assumption that brief situational manipulations can have dramatic effects on behavior. This assumption seemed to be justified by sixty years of research that often demonstrated large effects of subtle and sometimes even subliminal manipulations of the situation. However, since 2011 it has become clear that these impressive demonstrations were a sham. Rather than reporting the actual results of studies, social psychologists selectively reported results that were statistically significant, and because they used small samples, effect sizes were inflated dramatically to produce significant results. Thus, selective reporting of results from between-subject experiments with small samples ensured that studies could only provide evidence for the power of the situation.

Most eminent social psychologists who made a name for themselves using this flawed scientific method have been silent and have carefully avoided replicating their cherished findings from the past. In a stance of defiant silence, they pretend that their published results are credible and should be taken seriously.

A younger generation of social psychologists has responded to the criticism of old studies by improving their research practices. The problem for these social psychologists is that subtle manipulations of situations at best have subtle effects on behavior. Thus, the results are no longer very impressive, and even with larger samples, it is difficult to provide robust evidence for them. This is illustrated with an article by Van Dessel, Hughes, and De Houwer (2018) in Psychological Science.

The article has all the features of the new way of doing experimental social psychology. The article received badges for sharing materials, sharing data, and preregistration of hypotheses. The authors also mention an a priori power analysis that assumed a small to medium effect size.

The OSF materials provide further information about power calculations in the Design documents of each study (https://osf.io/wjz3u/). I compiled this information in the Appendix. It shows that the authors did not take attrition due to exclusion criteria into account and that they computed power for one-tailed significance tests. This leads to lower power in post-hoc power analyses with the two-tailed significance tests that are used in the article. The authors also assumed a stronger effect size for Studies 3 and 4, although these studies tested riskier hypotheses (actual behavior, one-day delay between manipulation and measurement of dependent variables). Most importantly, the authors powered studies to have 80% power for each individual hypothesis test, which means that they can at best expect 80% of their multiple hypothesis tests to be significant (Schimmack, 2012).

Indeed, the authors found some non-significant results. For example, the IAT did not show the predicted effect in Study 1. However, Studies 1 and 2 mostly showed the predicted results, but they lack ecological validity because they examined responses to novel, fictional stimuli.

Studies 3 and 4 are more important for understanding actual human behaviors because they examined health behaviors with cookies and carrots as stimuli. The main hypothesis was that a novel Goal-Relevant Avatar-Consequences Task would shift health behaviors, intentions, and attitudes. Table 3 shows means for several conditions and dependent variables that produce 12 hypothesis tests.

The next table shows the t-values for the 12 hypothesis tests. These t-values can be converted into p-values, z-scores, and observed power. The last column records whether the result is significant with alpha = .05.
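
For readers who want to redo these conversions, here is a rough sketch in Python; the example t-value and degrees of freedom at the bottom are hypothetical, not taken from the article.

```python
from scipy.stats import t as t_dist, norm

def t_to_p_z_power(t_value, df, alpha=0.05):
    """Convert a t-value into a two-tailed p-value, the corresponding z-score,
    and post-hoc (observed) power for a two-tailed test at the given alpha."""
    p = 2 * t_dist.sf(abs(t_value), df)                    # two-tailed p-value
    z = norm.isf(p / 2)                                    # p-value expressed as a z-score
    z_crit = norm.isf(alpha / 2)                           # 1.96 for alpha = .05
    power = norm.sf(z_crit - z) + norm.cdf(-z_crit - z)    # observed power
    return p, z, power

# Hypothetical example: t(190) = 2.10 -> p ~ .04, z ~ 2.09, observed power ~ .55
print(t_to_p_z_power(2.10, 190))
```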

The most important result is that median observed power is 64% and matches the success rate of 67%. Thus, the results are credible and there is no evidence to suggest that QRPs were used to produce significant results. However, the consistent estimates also suggest that the studies did not have the 80% power the authors intended based on their a priori assumption that effect sizes would be small to moderate. In fact, the average effect size is d = .29. An a priori power analysis with this effect size shows that n = 188 participants per cell (total N = 376) are needed to achieve 80% power. Thus, all studies were underpowered.
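
The sample-size figures quoted here, and the corresponding figure for the smaller effect size of the follow-up survey discussed below (d = .22), can be reproduced with a standard a priori power analysis for an independent-samples t-test, for example with statsmodels:

```python
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()
# Participants per cell for 80% power, alpha = .05 (two-tailed), independent-samples t-test
print(power_analysis.solve_power(effect_size=0.29, alpha=0.05, power=0.80))  # ~188 per cell
print(power_analysis.solve_power(effect_size=0.22, alpha=0.05, power=0.80))  # ~326 per cell
```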

Power can be improved by combining theoretically equivalent cells. This produces significant results for consumer choice, d = .36, t(571) = 4.21, the explicit attitude measure, d = .24, t(571) = 2.83, and the IAT, d = .33, t(571) = 3.81.

Thus, the results show that the goal-relevant avatar-consequence task can shift momentary behavioral intentions and attitudes. However, it is not clear whether it actually changes behavior. The reason is that Study 4 was underpowered with only 92 participants in each cell and the snack eating effect was just significant, p = .018. This finding first needs to be replicated with an adequate sample.

Study 3 aimed to demonstrate that a brief situational manipulation can have lasting effects. To this end, participants completed a brief survey on the next day. The results are reported in Table 3.

This table allows for 10 hypothesis tests. The results are shown in the next table.

First, I did not include the question about difficulty because it is difficult to say how the situational manipulation should affect it. The item also produced the weakest evidence. The remaining 8 tests showed three significant results. The success rate of 38% is matched by the average observed power, 35%. Thus, once more there is no evidence that QRPs were used to produce significant results. At the same time, the power estimate shows that the study did not have 80% power. One reason is that the average effect size is weaker, d = .22. An a priori power analysis shows that n = 326 participants per cell would be needed to have 80% power. Thus, the actual cell frequencies of n = 99 to 108 were too small to expect consistent results.

The inconsistent results make it difficult to interpret the results. It is possible that the manipulation had a stronger effect on ratings of unhealthy behaviors than on healthy behaviors, but it is also possible that the pattern of means changes in a replication study.

The authors' conclusion, however, highlights statistically significant results as if non-significant results were theoretically irrelevant.

“Compared with a control training, consequence-based approach-avoidance training (but not typical approach-avoidance training) reduced self-reported unhealthy eating behaviors and increased healthy eating intentions 24 hr after training” (p. 1907).

This conclusion is problematic because the pattern of significant results was not predicted a priori and was strongly influenced by random sampling error. Selecting significant results from a larger set of statistical tests creates selection bias, and the results are unlikely to replicate when studies have low power. This does not mean that the conclusions are false. It only means that the results need to be replicated in a study with adequate power (N = 326 x 3 = 978).

Conclusion

Social psychologists have a long tradition of experimental research that aims to change individuals’ behaviors with situational manipulations in laboratories. In 2011, it became apparent that most results in this tradition lack credibility because researchers used small samples with between-subject designs and reported only significant results. As a result, reported effect sizes are vastly inflated and give a wrong impression of the power of situations. In response to this realization some social psychologists have embraced open science practices and report all of their results honestly. Bias tests confirmed that this article reported as many significant results as the power of studies justifies. However, the observed power was lower than the a priori power that researchers assumed they had when they planned their sample sizes. This is particularly problematic for Studies 3 and 4 that aimed to show that results last and influence actual behavior.

My recommendation for social psychologists is to take advantage of better designs (within-subject), conduct fewer studies, and to include real behavioral measures in these studies. The problem for social psychologists is that it is now easy to collect data with online samples, but these studies do not include measures of real behavior. The study of real behavior was done with a student sample, but it only had 92 participants per cell, which is a small sample size to detect the small to moderate effects of brief situational manipulations on actual behavior.

APPENDIX

A Comparison of Scientific Doping Tests

Psychological research is often underpowered; that is, studies have a low probability of producing significant results even if the hypothesis is correct, measures are valid, and manipulations are successful. The problem with underpowered studies is that they have too much sampling error to produce a statistically significant signal-to-noise ratio (i.e., effect size relative to sampling error). The problem of low power was first observed by Cohen in 1962 and has persisted to this day.

Researchers continue to conduct underpowered studies because they have found a number of statistical tricks to increase power. The problem with these tricks is that they produce significant results that are difficult to replicate and that have a much higher risk of being false positives than the claim p < .05 implies. These statistical tricks are known as questionable research practices (QRPs). John et al. (2012) referred to the use of these QRPs as scientific doping.

Since 2011 it has become apparent that many published results cannot be replicated because they were produced with the help of questionable research practices. This has created a crisis of confidence or a replication crisis in psychology.

In response to the replication crisis, I have developed several methods that make it possible to detect the use of QRPs. It is possible to compare these tests to doping tests in sports. The problem with statistical doping tests is that they require more than one sample to detect the use of doping. The more studies are available, the easier it is to detect scientific doping, but often the set of studies is small. Here I examine the performance of several doping tests for a set of six studies.

The Woman in Red: Examining the Effect of Ovulatory Cycle on Women’s Perceptions of and Behaviors Toward Other Women

In 2018, the journal Personality and Social Psychology Bulletin published an article that examined the influence of women's ovulatory cycle on responses to a woman in a red dress. There are many reasons to suspect that there are no meaningful effects in this line of research. First, it has been shown that the seminal studies on red and attractiveness used QRPs to produce significant results (Francis, 2013). Second, research on women's cycle has been difficult to replicate (Peperkoorn, Roberts, & Pollet, 2016).

The article reported six studies that measured women's cycle and manipulated the color of a woman's dress between subjects. The key hypothesis was an attenuated interaction effect: ovulating women should rate the woman in the red dress more negatively than women who were not ovulating. Table 1 shows the results for the first dependent variable that was reported.

Study   Result              p     z      Observed Power   Significant (p < .10)
1       F(1, 62) = 3.23     .08   1.77   .55              yes
2       F(1, 205) = 3.68    .06   1.91   .60              yes
3       F(1, 125) = 0.01    .92   0.10   .06              no
4       F(1, 125) = 3.86    .05   1.95   .62              yes
5       F(1, 188) = 3.17    .08   1.77   .55              yes
6       F(1, 533) = 3.15    .08   1.77   .55              yes

The pattern of results is peculiar because five of the six results are marginally significant; that is, the p-value is greater than .05 but smaller than .10. This is strange because sampling error should produce more variability in p-values across studies. Why would the p-values always be greater than .05 and never be less than .05? It is also not clear why p-values did not decrease when researchers started to increase sample sizes from N = 62 in Study 1 to N = 533 in Study 6. As increasing sample sizes decreases sampling error, we would expect test statistics (ratio of effect size over sampling error) to become stronger and p-values to become smaller. Finally, the observed power of the six studies tends to be around 50%, except for Study 3 with its clearly non-significant result. How is it possible that 5 studies with about a 50% chance to obtain marginally significant results produced marginally significant results in all 5 studies? Thus, a simple glance at the pattern of results raises several red flags about the statistical integrity of the results. However, do doping tests confirm this impression?

Incredibility Index

Without the clearly non-significant result in Study 3, we would have 5 significant results with an average observed power of 57%. The incredibility index simply computes the binomial probability of obtaining 5 significant results in 5 attempts with a 57% probability of doing so (Schimmack, 2012). This probability is 6%. Using the median power (55%) produces essentially the same result. This would suggest that QRPs were used. However, the set of studies does include a non-significant result, which reflects a change in publishing norms. Results like these would not have been reported before the replication crisis. And reporting a non-significant result makes the results more credible (Schimmack, 2012).
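
The incredibility computation is just a binomial probability and is easy to verify, for example:

```python
from scipy.stats import binom

# Probability of obtaining 5 significant results in 5 attempts when the
# average observed power is only 57%
print(binom.pmf(5, 5, 0.57))   # ~ .06
# Using the median observed power of 55% gives essentially the same answer
print(binom.pmf(5, 5, 0.55))   # ~ .05
```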

With the non-significant result, average power is 49% and there are now only 5 out of 6 successes. Although there is still a discrepancy (49% power vs. 83% success rate), the probability of this happening by chance is 17%. Thus, there is no strong evidence that QRPs were used.

The problem here is that the incredibility index has low power to detect doping in small sets of studies, unless all results are significant. Even a single non-significant result makes the observed pattern of results a lot more credible. However, absence of evidence does not mean evidence of absence. It is still possible that QRPs were used, but that the incredibility index failed to detect this.

Test of Insufficient Variance

The test of insufficient variance (TIVA) converts the p-values into z-scores and makes the simplifying assumption that the p-values were obtained from a series of z-tests. This makes it possible to use the standard normal distribution as a model of the sampling error in each study. For a set of independent test statistics sampled from a standard normal distribution, the expected variance of the z-scores is 1. However, if QRPs are used to produce significant results, test statistics cluster just above the significance criterion (which is 1.65 for p < .10, when marginally significant results are present). This clustering can be detected by comparing the observed variance in z-scores to the expected variance of 1, using the chi-square test for the comparison of two variances.

Again, it is instructive to focus first on the set of 5 studies with marginally significant results. The variance of z-scores is very low, Var.Z = 0.008, because p-values are confined to the tight range from .05 to .10. The probability of observing this clustering in five studies is p = .0001 or 1 out of 8,892 times. Thus, we would have strong evidence of scientific doping.
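
TIVA is also easy to reproduce; the sketch below uses the z-scores from the table above (rounded values, so the p-values differ slightly from the ones quoted in the text):

```python
import numpy as np
from scipy.stats import chi2

def tiva(z_scores):
    """Test of Insufficient Variance: compare the observed variance of z-scores
    to the expected variance of 1 with a left-tailed chi-square test."""
    k = len(z_scores)
    var_z = np.var(z_scores, ddof=1)       # observed variance of the z-scores
    stat = (k - 1) * var_z / 1.0           # chi-square statistic with k - 1 df
    p = chi2.cdf(stat, df=k - 1)           # left tail: too little variance
    return var_z, p

# The five (marginally) significant studies only: Var ~ .008, p ~ .0001
print(tiva([1.77, 1.91, 1.95, 1.77, 1.77]))
# All six studies, including the clear non-significant result: Var ~ .51, p ~ .23
print(tiva([1.77, 1.91, 0.10, 1.95, 1.77, 1.77]))
```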

However, when we include the non-significant result, variance increases to Var.Z = 0.507, which is no longer statistically significant in a set of six studies, p = .23. This shows again that a single clearly non-significant result makes the reported results a lot more credible. It also shows that one large outlier makes TIVA insensitive to detecting QRPs, even when they are present.

The Robust Test of Insufficient Variance (formerly known as the Lucky Bounce Test)

The Robust Test of Insufficient Variance (ROTIVA) is less sensitive to outliers than TIVA. It works by creating a region of p-values (or z-scores, or observed powers) that are considered to be lucky. That is, the result is significant, but not highly convincing. A useful area of lucky outcomes is p-values between .05 and .005, which correspond to observed power of 50% to 80%. We might say that studies with 80% power are reasonably powered and produce significant results most of the time. However, studies with 50% power are risky because they produce a significant result only in every other study. Thus, getting a significant result is lucky. With two-sided p-values, the interval ranges from z = 1.96 to 2.8. However, when marginal significance is used, the interval ranges from z = 1.65 to 2.49 with a center at 2.07.

Once the area of lucky outcomes is defined, it is possible to compute the maximum probability of observing a lucky outcome, which is obtained by centering the sampling distribution in the middle of the lucky interval; this maximum probability is 34%.

Thus, the maximum probability of obtaining a lucky significant result in a single study is 34%. This value can be used to compute the probability of obtaining x lucky results in a set of studies using binomial probabilities. With 5 out of 5 studies, the probability is very small, p = .005, but we see that the robust test is not as powerful as TIVA in this situation without outliers. This reverses when we include the outlier. ROTIVA still shows significant evidence of QRPs with 5 out of 6 lucky results, p = .020, when TIVA is no longer significant.
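
A sketch of the ROTIVA computation (the maximum "lucky" probability comes out just below the 34% used above because of rounding in the interval boundaries):

```python
from scipy.stats import norm, binom

# Lucky region when marginal significance (p < .10) is the criterion:
# z between 1.65 and 2.49, centered at 2.07.
lo, hi = 1.65, 2.49
center = (lo + hi) / 2.0

# Maximum probability of a single study landing in the lucky region, obtained by
# centering the sampling distribution (SD = 1) in the middle of the interval.
p_lucky = norm.cdf(hi - center) - norm.cdf(lo - center)
print(round(p_lucky, 2))            # ~ .33

# Binomial probabilities using the 34% value from the text:
print(binom.sf(4, 5, 0.34))         # 5 lucky results in 5 studies: ~ .005
print(binom.sf(4, 6, 0.34))         # at least 5 lucky results in 6 studies: ~ .02
```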

Z-Curve 2.0

Z-curve was developed to estimate the replication rate for a set of studies with significant results (Brunner & Schimmack, 2019). As z-curve only selects significant results, it assumes rather than tests the presence of QRPs. The details of z-curve are too complex to discuss here. It is only important to know that z-curve allows for heterogeneity in power and approximates the distribution of significant p-values, converted into z-scores, with a mixture model of folded standard normal distributions. The model parameters are weights for components with low to high power. Although the model is fitted only to significant results, the weights can also be used to make predictions about the distribution of z-scores in the range of non-significant results. It is then possible to examine whether the predicted number of non-significant results matches the observed number of non-significant results.

To use z-curve for sets of studies with marginally significant results, one only needs to adjust the significance criterion from p = .05 (two-tailed) to p = .10 (two-tailed) or from z = 1.96 to z = 1.65. Figure 2 shows the results, including bootstrapped confidence intervals.

The most relevant statistic for the detection of QRPs is the comparison of the observed discovery rate and the estimated discovery rate. As for the incredibility index, the observed discovery rate is simply the percentage of studies with significant results (5 out of 6). The expected discovery rate is the area under the gray curve that is in the range of significant results with z > 1.65. As can be seen, this area is very small, given the estimated sampling distribution from which significant results were selected. The 95%CI for the observed discovery rate has a lower limit of 54%, while the upper limit for the estimated discovery rate is 15%. Thus, these intervals do not overlap and are very far from each other, which provides strong evidence that QRPs were used.

Conclusion

Before the replication crisis, it was pretty much certain that articles would only report significant results that support hypotheses (Sterling, 1959). This selection of confirmatory evidence was considered an acceptable practice, although it undermines the purpose of significance testing. In the wake of the replication crisis, I developed tests that can examine whether QRPs were used to produce significant results. These tests work well even in small sets of studies as long as all results are significant.

In response to the replication crisis, it has become more acceptable to publish non-significant results. The presence of clearly non-significant results makes a published article more credible, but it doesn’t automatically mean that QRPs were not used. A new deceptive practice would be to include just one non-significant result to avoid detection by scientific doping tests like the incredibility index or TIVA. Here I show that a second generation of doping tests is able to detect QRPs in small sets of studies even when non-significant results are present. This is bad news for p-hackers and good news for science.

I suggest that journal editors and reviewers make use of these tools to ensure that journals publish only credible scientific evidence. Articles like this one should not be published because they do not report credible scientific evidence. Not publishing articles like this is even beneficial for authors because they avoid damage to their reputation when post-publication peer-reviews reveal the use of QRPs that are no longer acceptable.

References

Francis, G. (2013). Publication bias in “Red, Rank, and Romance in Women Viewing Men” by Elliot et al. (2010). Journal of Experimental Psychology: General, 142, 292-296.

John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, 23, 524–532. doi:10.1177/0956797611430953

Peperkoorn, L. S., Roberts, S. C., & Pollet, T. V. (2016). Revisiting the red effect on attractiveness and sexual receptivity: No effect of the color red on human mate preferences. Evolutionary Psychology, 14(4). http://dx.doi.org/10.1177/1474704916673841

Schimmack, U. (2012). The ironic effect of significant results on the credibility of multiple-study articles. Psychological Methods, 17(4), 551-566. https://replicationindex.com/2018/02/18/why-most-multiple-study-articles-are-false-an-introduction-to-the-magic-index/

Schimmack, U. (2015). The Test of Insufficient Variance. https://replicationindex.com/2015/05/13/the-test-of-insufficient-variance-tiva-a-new-tool-for-the-detection-of-questionable-research-practices-2/

Schimmack, U. (2015). The Lucky Bounce Test. https://replicationindex.com/2015/05/27/when-exact-replications-are-too-exact-the-lucky-bounce-test-for-pairs-of-exact-replication-studies/

Hidden Evidence in Racial Bias Research by Cesario and Johnson

In a couple of articles, Cesario and Johnson have claimed that police officers have a racial bias in the use of force with deadly consequences (Cesario, Johnson, & Terrill, 2019; Johnson, Tress, Burkel, Taylor, & Cesario, 2019). Surprisingly, they claim that police officers in the United States are MORE likely to shoot White civilians than Black civilians. And the differences are not small either. According to their PNAS article, “a person fatally shot by police was 6.67 times less likely (OR = 0.15 [0.09, 0.27]) to be Black than White” (p. 15880). In their SPPS article, they write, “The odds were 2.7 times higher for Whites to be killed by police gunfire relative to Blacks given each group’s SRS homicide reports, 2.6 times higher for Whites given each group’s SRS homicide arrests, 2.9 times higher for Whites given each group’s NIBRS homicide reports, 3.9 times higher for Whites given each group’s NIBRS homicide arrests, and 2.5 times higher for Whites given each group’s CDC death by assault data.” Thus, the authors claim that for every Black civilian killed by police, there are 2 to 6 White civilians killed by police under similar circumstances.

The main problem with Cesario and Johnson’s conclusions is that they rest entirely on the assumption that violent crime statistics are a reasonable estimate of the frequency of encounters with police that may result in the fatal use of force.

“One cannot experience a policing outcome without exposure to police, and if exposure rates differ across groups, then the correct benchmark is on those exposure rates” (Cesario, Johnson, & Terrill, 2019, p. 587).

“In the context of police shootings, exposure would be reasonably approximated by rates of criminal involvement for Blacks and Whites; the more group members are involved in criminal activity, the more exposure they have to situations in which police shootings would be likely to occur” (p. 587).

The quotes make it clear that Cesario and Johnson use crime statistics as a proxy for encounters with police that sometimes result in the fatal use of force.

What Cesario and Johnson are not telling their readers is that there are much better statistics to estimate how frequently civilians encounter police. I don’t know why Cesario and Johnson did not use this information or share it with their readers. I only know that they are aware that this information exists because they cite an article that made use of this information in their PNAS article (Tregle, Nix, Alpert, 2019). Although Tregle et al. (2019) use exactly the same benchmarking approach as Cesario and Johnson, the results are not mentioned in the SPPS article.

The Police-Public-Contact Survey

The Bureau of Justice Statistics has collected data from over 100,000 US citizens about encounters with police. The Police-Public Contact Survey has been conducted in 2002, 2005, 2008, 2011, and 2015. Tregle et al. (2019) used the freely available data to create three benchmarks for fatal police shootings.

First, they estimated that there are 2.5 million police-initiated contacts a year with Black civilians and 16.6 million police-initiated contacts a year with White civilians. This is a ratio of 1:6.5, which is slightly larger than the ratio of Black to White citizens in the population (39.9 million vs. 232.9 million), 1:5.8. Thus, there is no evidence that Black civilians have disproportionally more encounters with police than White civilians. Using either one of these benchmarks still suggests that Black civilians are more likely to be shot than White civilians by a ratio of 3:1.

One reason for the proportionally higher rate of police encounters for White civilians is that they drive more than Black civilians, which leads to more traffic stops of Whites. Here the ratio is 2.0 million to 14.0 million, or 1:7. The picture changes for street stops, with a ratio of 0.5 million to 2.6 million, or 1:4.9. But even this ratio still implies that Black civilians are at greater risk of being fatally shot during a street stop, with an odds ratio of 2.55:1.
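
As a rough check, these disparity ratios can be reproduced by combining the benchmark counts quoted above with the 2015 Guardian counts of fatally shot civilians reported later in this post (476 White, 235 Black). The exact counts to plug in are my assumption, so the ratios below should be read as approximations only.

```python
# Benchmarks quoted above (Tregle et al., 2019), paired with the 2015 Guardian
# counts of fatally shot civilians (assumed here: 476 White, 235 Black).
fatal = {"black": 235, "white": 476}

benchmarks = {
    "population":      {"black": 39.9e6, "white": 232.9e6},
    "police contacts": {"black": 2.5e6,  "white": 16.6e6},
    "traffic stops":   {"black": 2.0e6,  "white": 14.0e6},
    "street stops":    {"black": 0.5e6,  "white": 2.6e6},
}

for name, b in benchmarks.items():
    ratio = (fatal["black"] / b["black"]) / (fatal["white"] / b["white"])
    print(f"{name}: Black/White risk ratio ~ {ratio:.1f}")
# population ~ 2.9, police contacts ~ 3.3, traffic stops ~ 3.5, street stops ~ 2.6
```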

It is telling that Cesario and Johnson are aware of an article that came to opposite conclusions based on a different approach to estimating police encounters and do not mention this finding in their article. Apparently, it was more convenient to ignore this inconsistent evidence and to tell their readers that the data consistently show no anti-Black bias. While readers who are not scientists may be shocked by this omission of inconvenient evidence, scientists are all too familiar with this deceptive practice of cherry-picking that is eroding trust in science.

Encounters with Threats and Use of Force

Cesario and Johnson are likely to argue that it is wrong to use police encounters as a benchmark and that violent crime statistics are more appropriate because police officers mostly use force in encounters with violent criminals. However, this is simply an assumption that is not supported by evidence. For example, it is questionable to use homicide statistics because homicide arrests account for only a small portion of incidents of fatal use of force.

A more reasonable benchmark is incidents of non-fatal use of force. The PPCS data make it possible to do so because respondents also report on the nature of the contact with police, including the use of force. It is not even necessary to download and analyze the data because Hyland et al. (2015) already reported on racial disparities in incidents that involved threats or the non-fatal use of force (see Table 2; Table 1 in Hyland et al., 2015).

The crucial statistic is that there are 159,100 encounters with Black civilians and 445,500 encounters with White civilians that involve threat or use of force; a ratio of 1:2.8. Using non-fatal encounters as a benchmark for fatal encounters still results in a greater probability of a Black civilian being killed than a White civilian, although the ratio is now down to a more modest 1.4:1.
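
The same computation with threat-or-force encounters as the benchmark (again using the assumed Guardian counts) reproduces the 1.4:1 figure:

```python
# Threat/use-of-force encounters from Hyland et al. (2015), with the assumed
# 2015 Guardian counts of fatally shot civilians (476 White, 235 Black).
fatal = {"black": 235, "white": 476}
force_encounters = {"black": 159_100, "white": 445_500}

ratio = (fatal["black"] / force_encounters["black"]) / (fatal["white"] / force_encounters["white"])
print(round(ratio, 2))   # ~ 1.4
```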

It is not clear why Cesario and Johnson did not make use of a survey that was designed to measure police encounters when they are trying to estimate racial disparities in police encounters. What is clear is that these data exist and that they lead to a dramatically different conclusion than the surprising results in Cesario and Johnson’s analyses that rely on violent crime statistics to estimate police encounters.

Implications

It is important to keep in mind that the racial disparity in the fatal use of force in the population is 3:1 (Tregle et al., 2019, Table 1). The evidence from the PPCS only helps to shed light on the factors that contribute to this disparity. First, Black civilians are not considerably more likely to have contact with police than White civilians. Thus, it is simply wrong to claim that different rates of contact with police explain racial disparities in fatal use of force. There is also no evidence that Black civilians are disproportionally more likely to be stopped by police while driving. The caveat here is that Whites might drive more and that there could be a racial bias in traffic stops after taking the amount of driving into account. This simply shows how difficult it is to draw conclusions about racial bias based on these kinds of data. However, the data do show that the racial disparity in fatal use of force cannot be attributed to more traffic stops of Black drivers. Even the ratio of street stops is not notably different from the population ratios.

The picture changes when threats and use of force are taken into account. Black civilians are 2.5 times more likely than White civilians to have an encounter that involves threats or use of force (3.5% vs. 1.4%; see Table 2, based on Table 1 in Hyland et al., 2015).

These results shed some light on an important social issue, but these numbers also fail to answer important questions. First of all, they do not answer questions about the reasons why officers use threats and force more often with Black civilians. Sometimes the use of force is justified, and some respondents of the PPCS even admitted that the use of force was justified. However, at other times the use of force is excessive. The number of such incidents in the PPCS is too small to draw firm conclusions about this important question.

Unfortunately, social scientists are under pressure to publish to build their careers, and they are under pressure to present strong conclusions to get their manuscripts accepted. This pressure can lead researchers to make bigger claims than their data justify. This is the case with Cesario and Johnson’s claim that officers have a strong bias to use deadly force more frequently with White civilians than Black civilians. This claim is not supported by strong data. Rather it rests entirely on the use of violent crime statistics to estimate police encounters. Here I show that this approach is questionable and that different results are obtained with other reasonable approaches to estimate racial differences in police encounters.

Unfortunately, Cesario and Johnson are not able to see how offensive their claims are to family members of innocent victims of deadly use of force when they attribute the use of force to violent crime, which implies that the use of force was justified and that victims are all criminals who threatened police with a weapon. Even if the vast majority of cases are justified and fatal use of force was unavoidable, it is well known that this is not always the case. Research on fatal use of force would be less important if police officers never made mistakes in the use of force. Cesario and Johnson receive taxpayer money to fund their research because fatal use of force is sometimes unnecessary and unjustified. It is those cases that require an explanation and interventions that minimize the unnecessary use of force. To use taxpayers’ money to create the false impression that fatal use of force is always justified, and that police officers are more afraid of using force with Black civilians than they are afraid of Black civilians, is not helpful and is offensive to the families of innocent Black victims who are grieving a loved one. The families of Tamir Rice, Atatiana Jefferson, Eric Garner, and Philando Castile, to name a few, deserve better.

Police Officers are not Six Times more Likely to Shoot White Civilians than Black Civilians: A Coding Error in Johnson et al. (2019)

Rickard Carlsson and I submitted a letter to the Proceedings of the National Academy of Sciences. The format allows only 500 words (PDF). Here is the long version of our concerns about Johnson et al.’s PNAS article about racial disparities in police shootings. An interesting question for meta-psychologists is how the authors and reviewers did not catch an error that led to the implausible result that police officers are six times more likely to shoot White civilians than Black civilians when they felt threatened by a civilian.

Police Officers are not Six Times more Likely to Shoot White Civilians than Black Civilians: A Coding Error in Johnson et al. (2019)

Ulrich Schimmack (University of Toronto, Mississauga) and Rickard Carlsson (Linnaeus University)

The National Academy of Sciences (NAS) was founded in 1863 by Abraham Lincoln to provide independent, objective advice to the nation on matters related to science and technology (1). In 1914, NAS established the Proceedings of the National Academy of Sciences (PNAS) to publish scientific findings of high significance. In 2019, Johnson, Tress, Burkel, Taylor, and Cesario published an article on racial disparities in fatal shootings by police officers in PNAS (2). Their publication became the topic of a heated exchange in the Oversight Hearing on Policing Practices in the House Committee on the Judiciary on September 19, 2019. Heather Mac Donald cited the article as evidence that there is no racial disparity in fatal police shootings. Based on the article, she also claimed, “In fact, black civilians are shot less, compared with whites, than their rates of violent crime would predict” (3). Immediately after her testimony, Phillip Atiba Goff challenged her claims and pointed out that the article had been criticized (4). In a rebuttal, Heather Mac Donald cited the authors’ reply, in which Johnson stated that the authors stand by their finding (5). Here we show that the authors’ conclusions are based on a statistical error in their analyses.

The authors relied on the Guardian’s online database about fatal use of force (7). The database covers 1,146 incidents in 2015. One aim of the authors’ research was to examine the influence of officers’ race on the use of force. However, because most officers are White, they found only 12 incidents (N = 12, 5%) in which a Black citizen was fatally shot by a Black officer. This makes it impossible to estimate statistically reliable effects of officers’ race. In addition, the authors examined racial disparities in fatal shootings with regression models that related victims’ race to victims’, officers’, and counties’ characteristics. The results showed that “a person fatally shot by police was 6.67 times less [italics added] likely (OR = 0.15 [0.09, 0.27]) to be Black than White” (p. 15880). This finding would imply that for every case of a fatal use of force with a Black citizen like Eric Garner or Tamir Rice, there should be six similar cases with White citizens. The authors explain this finding with depolicing; that is, officers may be “less likely to fatally shoot Black civilians for fear of public and legal reprisal” (p. 15880). The authors also conducted several additional analyses that are reported in their supplementary materials. However, they claim that their results are robust and “do not depend on which predictors are used” (p. 15881). We show that all of these statements are invalidated by a coding mistake in their statistical model.

Table 1
Racial Disparity in Race of Fatally Shot Civilians

Model   County Predictors               Odds Ratio (Black/White), 95% CI
M1      Homicide rates                  0.31 (0.23, 0.42)
M2      Population rates                2.03 (1.21, 3.41)
M3      Population & homicide rates     0.89 (0.44, 1.80)

The authors did not properly code categorical predictor variables. In a reply, the authors acknowledged this mistake and redid the analyses with proper weighted effect coding of categorical variables. Their new results are reported in Table 1. The corrected results show that the choice of predictor variables does have a strong influence on the conclusions. In a model that only uses homicide rates as a predictor (M1), the intercept still shows a strong anti-White bias, with 3 White civilians being killed for every 1 Black civilian in a county with equal proportions of Black and White citizens. In the second model, with population proportions as the predictor, the data show anti-Black bias. When both predictors are used, the data show parity, but with a wide margin of error that ranges from a ratio of 2 White civilians for 1 Black civilian to 2 Black civilians for 1 White civilian. Thus, after correcting the statistical mistake, the results are no longer consistent, and it is important to examine which of these models should be used to make claims about racial disparities.

We argue that it is necessary to include population proportions in the model.  After all, there are many counties in the dataset with predominantly White populations and no shootings of Black civilians. This is not surprising. For officers to encounter and fatally shoot a Black resident, there have to be Black civilians. To ignore the demographics would be a classic statistical mistake that can lead to false conclusions, such as the famous example that is used to teach the difference between correlation and causation. In this example, it appears as if Christians commit more homicides because homicide rates are positively correlated with the number of churches. This inference is wrong because the correlation between churches and homicides simply reflects the fact that counties with a larger population have more churches and more homicides.  Thus, the model that uses only population ratios as predictor is useful because it tells us whether White or Black people are shot more often than we would expect if race was unrelated to police shootings. Consistent with other studies, including an article by the same authors, we see that Black citizens are shot disproportionally more often than White citizens (8,9).

The next question that a scientific study of police shootings can examine is why racial disparities in police shootings exist. Importantly, answering this question does not make racial disparities disappear. Even if Black citizens are shot more often because they are more often involved in crimes, as the authors claim, there exists a racial disparity. It didn’t disappear, nor does this explanation account for incidents like the deaths of Eric Garner or Tamir Rice. However, the authors’ conclusion that “racial disparity in fatal shootings is explained by non-Whites’ greater exposure to the police through crime” (p. 15881) is invalid for several reasons.

First of all, the corrected results for the model that takes homicide rates and population rates into account no longer provides conclusive evidence about racial disparities. The data still allow for a racial disparity where Black civilians are shot at twice the rate as White civilians.  Moreover, this model ignores the authors’ own finding that victims’ age is a significant predictor of victims’ race.  Parity is obtained for the average age of 37, but the age effect implies that 20-year old victims are significantly more likely to be Black, OR(B/W) = 3.26, 95%CI = 1.26 to 8.43 while 55-year old victims are significantly more likely to be White, OR(B/W) = 0.24, 95%CI = 0.08 to 0.71.  Thus, even when homicide rates are included in the model, the authors’ data are consistent with the public perception that officers are more likely to use force with young Black men than with young White men.

The second problem is that the model does not include other potentially relevant predictor variables, such as poverty rates, and that an analysis across counties is unable to distinguish between actual and spurious predictors because all statistics are highly correlated with counties’ demographics (r > .9).

A third problem is that it is questionable to rely on statistics about homicide victims as a proxy for police encounters. The use of homicide rates implies that most victims of fatal use of force are involved in homicides. However, the incidences in the Guardian database show that many victims were involved in less severe crimes.

Finally, it is still possible that there is racial disparity in unnecessary use of force even if fatal incidences are proportional to violent crimes. If police encounter more Black people in ambiguous situations because Black people are disproportionally more involved in violent crime, they would still accidentally shoot more Black citizens than White citizens. It is therefore important to distinguish between racial bias of officers and racial disparities in fatal incidences of use of force.  Racial bias is only one of several factors that can produce racial disparities in the use of excessive force.

Conclusion

During a hearing on policing practices in the House Committee on the Judiciary, Heather Mac Donald cited Johnson et al.’s (2019) article as evidence that crime accounts for racial disparities in the use of lethal force by police officers and that “black civilians are shot less, compared with whites, than their rates of violent crime would predict.” Our analysis of Johnson et al.’s (2019) article shows that these statements are to a large extent based on a statistical error. Thus, the article cannot be used as evidence to claim that there are no racial disparities in policing or as evidence that police officers are even more reluctant to use excessive force with Black suspects than with White civilians. The only lesson that we can learn from this article is that social scientists make mistakes and that pre-publication peer review alone does not ensure that these mistakes are caught and corrected. It is puzzling that the authors and reviewers did not detect a statistical mistake when the results implied that police officers fatally shoot 6 White suspects for every Black suspect. It was this glaring finding that made us conduct our own analyses and detect the mistake. This shows the importance of post-publication peer review to ensure that scientific information that informs public policy is as objective and informative as it can be.

References

1. National Academy of Sciences. Mission statement. http://www.nasonline.org/about-nas/mission/

2. Johnson, D. J., Tress, T., Burkel, N., Taylor, C., & Cesario, J. (2019). Officer characteristics and racial disparities in fatal officer-involved shootings. Proceedings of the National Academy of Sciences, 116(32), 15877–15882.

3. MacDonald, H. (2019). False testimony. https://www.city-journal.org/police-shootings-racial-bias

4. Knox, D. & Mummolo, J. (2019). Making inferences about racial disparities in police violence. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3431132

5. Johnson, D. J., & Cesario, J. (2019). Reply to Knox and Mummolo: Critique of Johnson et al. (2019). https://psyarxiv.com/dmhpu/

6. Johnson, D. J., & Cesario, J. (2019). Reply to Schimmack: Critique of Johnson et al. (2019).

7. “The counted.” The Guardian. https://www.theguardian.com/us-news/ng-interactive/2015/jun/01/the-counted-police-killings-us-database#

8. Cesario, J., Johnson, D. J., & Terrill, W. (2018). Is there evidence of racial disparity in police use of deadly force? Analyses of officer-involved fatal shootings in 2015–2016. Social Psychological and Personality Science, 10, 586–595.

9. Edwards, F., Lee, H., Esposito, M. (2019). Risk of being killed by police use of force in the United States by age, race-ethnicity, and sex. Proceedings of the National Academy of Sciences, 116(34), 16793-16798. doi: 10.1073/pnas.1821204116


Does PNAS article show there is no racial bias in police shootings?

Politics in the United States is extremely divisive and filled with false claims based on fake facts. Ideally, social scientists would provide some clarity to these toxic debates by informing US citizens and politicians with objective and unbiased facts. However, social scientists often fail to do so for two reasons. First, they often lack the proper data to provide valuable scientific input into these debates. Second, when the data do not provide clear answers, social scientists’ inferences are shaped as much (or more) by their preexisting beliefs as by the data. It is therefore not a surprise that the general public increasingly ignores scientists because they don’t trust them to be objective.

Unfortunately, the cause of police killings in the United States is one of these topics. While a few facts are known and not disputed, these facts do not explain why Black US citizens are killed by police more often than White citizens. While it is plausible that multiple factors contribute to this sad statistic, the debate is shaped by groups who blame a White police force on the one hand and groups who blame Black criminals on the other.

On September 26, the House Committee on the Judiciary held an Oversight Hearing on Policing Practices. In this meeting, an article in the prestigious journal Proceedings of the National Academy of Sciences (PNAS) was referenced by Heather Mac Donald, who works for the conservative think tank Manhattan Institute, as evidence that crime is the single factor that explains racial disparities in police shootings.

The Manhattan Institute posted a transcript of her testimony before the committee. Her claim is clear. She not only claims that crime explains the higher rate of Black citizens being killed; she even claims that taking crime into account shows a bias of the police force to kill disproportionally FEWER Black citizens than White citizens.


Heather Mac Donald is not a social scientist, and nobody should expect her to be an expert in logistic regression. That is the job of scientists: authors, reviewers, and editors. The question is whether they did their job correctly and whether their analyses support the claim that, after taking population ratios and crime rates into account, police officers in the United States are LESS likely to shoot a Black citizen than a White citizen.

The abstract of the article summarizes three findings.

1. As the proportion of Black or Hispanic officers in a FOIS increases, a person shot is more likely to be Black or Hispanic than White.

In plain English, in counties with proportionally more Black citizens, proportionally more Black people are being shot. For example, the proportion of Black people killed in Georgia or Florida is greater than the proportion of Black people killed in Wyoming or Vermont. You do not need a degree in statistics to realize that this tells us only that police cannot shoot Black people if there are no Black people. This result tells us nothing about the reasons why proportionally more Black people than White people are killed in places where Black and White people live.

2. Race-specific county-level violent crime strongly predicts the race of the civilian shot.

Police do not shoot and kill citizens at random. Most, not all, police shootings occur when officers are attacked and they are justified to defend themselves with lethal force. When police officers in Wyoming or Vermont are attacked, it is highly likely that the attacker is White. In Georgia or Florida, the chance that the attacker is Black is higher. Once more this statistical fact does not tell us why Black citizens in Georgia or Florida or other states with a large Black population are killed proportionally more often than White citizens in these states.

3. The key finding that seems to address racial disparities in police killings is that “although we find no overall evidence of anti-Black or anti-Hispanic disparities in fatal shootings, when focusing on different subtypes of shootings (e.g., unarmed shootings or ‘suicide by cop’), data are too uncertain to draw firm conclusions”.

First, it is important to realize that the authors do not state that they have conclusive evidence that there is no racial bias in police shootings. In fact, they clearly state that for shootings of unarmed citizens their data are inconclusive. It is a clear misrepresentation of this article to claim that it provides conclusive evidence that crime is the sole factor that contributes to racial disparity in police shootings. Thus, Heather Mac Donald lied under oath and misrepresented the article.

Second, the abstract misstates the actual findings reported in the article when the authors claim that they “find no overall evidence of anti-Black or anti-Hispanic disparities in fatal shootings”. The problem is that the design of the study is unable to examine this question. To see this, it is necessary to look at the actual statistical analyses more carefully. Instead, the study examines another question: which characteristics of a victim make it more or less likely that the victim is Black or White? For example, an effect of age could show that young Black citizens are proportionally more likely to be killed than young White citizens, while older Black men are proportionally less likely to be shot than older White men. This would provide some interesting insights into the causal factors that lead to police shootings, but it doesn’t change anything about the proportions of Black and White citizens being shot by police.

We can illustrate this using the authors’ own data that they shared (unfortunately, they did not share information about officers to fully reproduce their results). However, they did find a significant effect for age. To make it easier to interpret the effect, I divided victims into those under 30 and those 30 and above. This produces a simple 2 x 2 table.

An inspection of the cell frequencies shows that the group with the highest frequency is older White victims. This is only surprising if we ignore the base rates of these groups in the general population. Older White citizens are more likely to be victims of police shootings because there are more of them in the population. As this analysis does not examine proportions in the population, this information is irrelevant.

It is also not informative that there are about two times more White victims (476) than Black victims (235). Again, we would expect more White victims simply because more US citizens are White.

The meaningful information is provided by the odds of a victim being Black in the two age groups. Here we see that older victims are much less likely to be Black (122/355 ≈ 0.34) than younger victims (113/121 ≈ 0.93). Expressed as proportions, 48% of young victims are Black compared with 26% of older victims, so young victims are 1.89 times more likely to be Black than old victims. This shows that young Black men are disproportionally more likely to be the victims of police shootings than young White men. Consistent with this finding, the article states that “Older civilians were 1.85 times less likely (OR = 0.54 [0.45, 0.66]) to be Black than White”
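For readers who want to check these numbers, the arithmetic can be reproduced directly from the cell counts reported above.

# odds of a victim being Black, by age group
odds.young <- 113 / 121                       # ~0.93
odds.old   <- 122 / 355                       # ~0.34
odds.young / odds.old                         # ~2.7  (odds ratio)
(113 / (113 + 121)) / (122 / (122 + 355))     # ~1.89 (ratio of the proportions of Black victims)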

In Table 2, the age effect remains significant after controlling for many variables, including rates of homicides committed by Black citizens. Thus, the authors found that young Black men are killed by police more frequently than young White men, even when they attempted to statistically control for the fact that young Black men are disproportionally involved in criminal activities. This finding is not surprising to critics who claim that there is a racial bias in the police force that has resulted in deaths of innocent young Black men. It is exactly what one would expect if racial bias plays a role in police shootings.

Although this finding is statistically significant and the authors mention it when they report the results in Table 1, they never comment on it again in their article. This is surprising because it is common practice to highlight statistically significant results and to discuss their theoretical implications. Here, the implications are straightforward. Racial bias does not target all Black citizens equally. Young Black men (only 10 Black and 25 White victims were female) are disproportionally more likely to be shot by police, even after controlling for several other variables.

Thus, while the authors’ attempt to find predictors of victims’ race provides some interesting insights into the characteristics of Black victims, these analyses do not address the question of why Black citizens are more likely to be shot than White citizens. It is therefore unclear how the authors can state “We find no evidence of anti-Black or anti-Hispanic disparities across shootings” (p. 15877) or “When considering all FOIS in 2015, we did not find anti-Black or anti-Hispanic disparity” (p. 15880).

Surely, they are not trying to say that they didn’t find evidence for it because their analysis didn’t examine this question. In fact, their claims are based on the results in Table 3. Based on these results, the authors come to the conclusion that, “controlling for predictors at the civilian, officer, and county levels,” a victim is more than 6 times more likely to be White than Black. This makes absolutely no sense if the authors did, indeed, center continuous variables and effect code nominal variables, as they state.

The whole point of centering and effect coding is to keep the intercept of an analysis interpretable and consistent with the odds in the sample without predictor variables. To use age again as an example, the odds of a victim being Black rather than White are .49. Adding age as a predictor shows us how the odds change within the two age groups, but this does not change the overall odds. However, if we do not center the continuous age variable or do not take the different frequencies of young (234) and old (477) victims into account, the intercept is no longer interpretable as a measure of racial disparities.


To illustrate this, here are the results of several logistic regression analyses with age as a predictor variable.

First, I used raw age as a predictor.
# race: victim's race (outcome, coded so that the model predicts the probability of a Black victim); pc$age: victim's age in years
summary(glm(race ~ pc$age, family = binomial(link = "logit")))

The intercept changes from -.71 to 1.08. As these values are log-odds, we need to transform them to get the odds, which are .49 (235/476) and 2.94. The reason is that the intercept is a prediction of the racial bias at age 0, which would suggest that police officers are three times more likely to kill a Black newborn than a White newborn. This prediction is totally unrealistic because there are, fortunately, very few victims younger than 15 years of age. In short, this analysis changes the intercept, but the results no longer tell us anything about racial disparities in general because the intercept now refers to a very small, and in this case nonexistent, subgroup.

We can avoid this problem by centering or standardizing the predictor variable. Now a value of 0 corresponds to the average age.

age.centered <- pc$age - mean(pc$age)   # 0 now corresponds to the average victim age
summary(glm(race ~ age.centered, family = binomial(link = "logit")))

The age effect remains the same, but now the intercept is close to the log-odds in the total sample [disclaimer: I do not know exactly why it changed from -.71 to -.78; presumably because the logit link is nonlinear, so the model-implied log-odds at the average age need not equal the marginal log-odds in the sample; any other suggestions are welcome].

This is also true when we split age into young (< 30) and old (30 or older) groups.

When the groups are dummy coded (< 30 = 0, 30+ = 1), the intercept changes and now reflects the odds in the younger group coded as zero, in which victims are more likely to be Black than in the sample as a whole.

# dummy coding: 0 = younger group, 1 = older group
summary(glm(race ~ (pc$age > 30), family = binomial(link = "logit")))

However, with effect coding the intercept hardly changes.

# scale() centers (and standardizes) the age dummy, so 0 corresponds to the sample mean of the age groups
summary(glm(race ~ scale(pc$age > 30), family = binomial(link = "logit")))


Thus, it makes no sense that the intercept changed from exp(-.71) = .49 to exp(-1.90) = .15, as reported in Table 3, if the authors did, as they claim, center continuous variables and effect code nominal variables. Something went wrong in their analyses.

Even if this result were correct, the interpretation of this result as a measure of racial disparities is wrong. One factor that is omitted from the analysis is the proportion of White citizens in the counties. It doesn’t take a rocket scientist to realize that counties with a larger White population are more likely to have White victims. The authors do not take this simple fact into account, although they did have a measure of the population size in their data set. We can create a measure of the relative proportions of Black and White citizens in each county and scale it so that a value of 0 corresponds to a county with equal proportions of Black and White citizens, which keeps the intercept interpretable.

When we use this variable as a predictor, the surprising finding that police officers are much more likely to shoot and kill White citizens disappears. The odds of a victim being Black change from 0.49 to exp(-.04) = .96, and the 95%CI includes 1, 95%CI = 0.78 to 1.19.
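One way to implement this is sketched below. The names of the county-level population variables are my assumptions; I do not know how they are labelled in the shared data, and the authors’ exact construction may differ.

# log ratio of Black to White county population; 0 = county with equal proportions (assumed variable names)
PopRatio <- log(pc$pop.black / pc$pop.white)
summary(glm(race ~ PopRatio, family = binomial(link = "logit")))
# with this predictor, the intercept estimates the log-odds of a victim being Black in a county with equal Black and White populations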

This finding may be taken as evidence that there is little racial disparity after taking population proportions into account. However, this ignores the age effect that was found earlier. When age is included as a predictor, we see that young Black men are disproportionally likely to be killed, while the reverse is true for older victims. One reason for this could be that criminals are at a higher risk of being killed. If White criminals are not killed in their youth, they are still likely to be killed at an older age, whereas Black criminals are killed at a younger age, leaving fewer Black criminals who get killed at an older age. Importantly, this argument does not imply that all victims of police shootings are criminals. The bias to kill Black citizens at a younger age also affects innocent Black citizens, as the age effect remained significant after controlling for crime rates.

The racial disparity for young citizens becomes even larger when homicide rates are included using the same approach. I also excluded counties with a ratio greater than 100:1 for population or homicide rates.
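The homicide predictor and the exclusion rule can be sketched in the same way (again with assumed variable names; setting excluded counties to NA lets glm drop them automatically).

# log ratio of homicides committed by Black vs. White citizens (assumed variable names)
HomRatio <- log(pc$hom.black / pc$hom.white)
PopRatio[abs(PopRatio) > log(100)] <- NA   # drop counties with population ratios beyond 100:1
HomRatio[abs(HomRatio) > log(100)] <- NA   # drop counties with homicide ratios beyond 100:1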

# age dummy plus the county-level population and homicide ratio predictors
summary(glm(race ~ (pc$age > 30) + PopRatio + HomRatio, family = binomial(link = "logit")))


The intercept of 0.65 implies that young (< 30) victims of police shootings are about two times more likely to be Black than White when we adjust the risk for the proportion of Black vs. White citizens and for homicides. The significant age effect shows again that this risk switches for older citizens. As we are adjusting for homicide rates, this suggests that older White citizens are at an increased risk of being killed by police. This is an interesting observation, as much of the debate has been about young Black men who were innocent. According to these analyses, there should also be cases of older White men who are innocent victims of police killings. Looking for these cases and creating more awareness about them does not undermine the concerns of the Black Lives Matter movement. Police killings are not a zero-sum game. The goal should be to work towards reducing the loss of Black, Blue (police), White, and all other lives.

Scientific studies can help to do that when authors analyze and interpret the data correctly. Unfortunately, this is not what happened in this case. Fortunately, the authors shared (some of) their data and it was possible to put their analyses under the microscope. The results show that their key conclusions are not supported by their data. First, the claim that disparities lead to the killing of more White than Black or Hispanic citizens by police is simply false. Second, the authors have an unscientific aversion to taking population rates into account. In counties with a mostly White population, crime is mostly committed by White citizens, and police are more likely to encounter and kill White criminals. It is not a mistake to include population rates in statistical analyses; it is a mistake not to do so. Third, the authors ignored a key finding of their own analysis, namely that age is a significant predictor of the race of victims. Consistent with the Black Lives Matter claim, their data show that police disproportionally shoot young Black men. This bias is offset to some extent by the opposite bias in older age groups, presumably because some Black men have already been killed at a younger age, which reduces the at-risk population of Black citizens in this age group.

In conclusion, the published article already failed to show that there is no racial disparity in police shootings, but it was easily misunderstood as providing evidence for this claim. A closer inspection reveals even more problems, which means the article should not be used to support empirical claims about police shootings. Ideally, the article would be retracted. At a minimum, PNAS should publish a notice of concern.

Poverty Explains Racial Bias in Police Shootings

Statistics show that Black US citizens are disproportionally more likely to be killed by police than White US citizens. Cesario, Johnson, and Terrill (2019) estimated that the odds of being killed by police are 2.5 times higher for Black citizens than for White citizens. To my knowledge, no social scientist has disputed this statistical fact.

However, social scientists disagree about the explanation for this finding. Some social scientists argue that racial bias is at least a contributing factor to the disparity in police killings. Others deny that racial bias is a factor and point out that Black citizens are killed in proportion to their involvement in crime.

Cesario et al. write “when adjusting for crime, we find no systematic evidence of anti-Black disparities in fatal shootings, fatal shootings of unarmed citizens, or fatal shootings involving misidentification of harmless objects” (p. 586).

They argue that criminals are more likely to encounter police and that “exposure to police accounts for the racial disparities in fatal shootings observed at the population level” (p. 591).

They also argue that the data are strong enough to rule out racial bias as a contributing factor that influences police shootings in addition to disproportionate involvement in criminal activities.

None of their tests “provided evidence of systematic anti-Black disparity. Moreover, the CDC data (as well as the evidence discussed in Online Supplemental Material #2) provide a very strong test of whether biased policing accounts for these results” (p. 591).

“When considering all fatal shootings, it is clear that systematic anti-Black disparity at the national level is not observed” (p. 591).

The authors also point out that their analyses are not conclusive, but recommend their statistical approach for future investigations of this topic.

“The current research is not the final answer to the question of race and police use of deadly force. Yet it does provide perspective on how one should test for group disparities in behavioral outcomes and on whether claims of anti-Black disparity in fatal police shootings are as certain as often portrayed in the national media” (p. 591).

Here I follow the authors’ advice and use their statistical approach to demonstrate that crime rates do not account for racial disparities in police killings. Instead, poverty is a much more likely cause of racial disparities in police killings.

Imagine a scenario where a cop stops a car on a country road for speeding. In scenario A, the car is a brand new, grey Lincoln, and the driver is neat and wearing a suit. In scenario B, the car is an old van from the 1990s, and the driver is unkempt and wearing an undershirt and dirty jeans. Which of these scenarios is more likely to end with the driver of the vehicle being killed? Importantly, I argue that it doesn’t matter whether the driver is Black, White, or Hispanic. What matters is whether they fit the stereotype of a poor person, who looks more like a potential criminal.

The poverty hypothesis explains the disproportionate rate of police killings of Black people by the fact that Black US citizens are more likely to be poor, because a long history of slavery and discrimination continues to produce racial inequalities in opportunities and wealth. According to this hypothesis, the racial disparities in police killings should shrink or be eliminated when we use poverty rates rather than population proportions as a benchmark for police killings (Cesario et al., 2019).

I obtained poverty rates in the United States from the Kaiser Family Foundation website (KFF).

In absolute numbers, there are more White citizens who are poor than Black citizens. However, proportional to their representation in the population, Black citizens are 2.5 times more likely to be poor than White citizens.

These numbers imply that there are approximately 40 million Black citizens and 180 million White citizens.

Based on Cesario et al.’s (2019) statistics in Table 1, there are on average 255 Black citizens and 526 White citizens killed by police in a given year.

We can now use this information to compute the odds of being killed, the odds of being poor, and the odds of being killed given being poor, assuming that police predominantly kill poor people.

First, we see again that Black citizens are about two times more likely to be killed by police than White citizens (Total OR(B/W) = 2.29). This roughly matches the ratio of the poverty rates of Black and White citizens (.20/.08 = 2.5).

More important, the rate of being killed by police for poor Black citizens, 3.34 per 100,000, is similar to the rate for poor White citizens, 3.64 per 100,000. The ratio is close to 1 and no longer shows a racial bias for Black citizens to be killed more often by police, OR(B/W) = 0.92. In fact, there is a small bias for White citizens to be more likely to be killed. This might be explained by the fact that White US citizens are more likely to own a gun than Black citizens, and owning a gun may increase the chances of a police encounter going wrong (Gramlich, 2018).
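These calculations are easy to reproduce. With the rounded figures given above, the results come out close to, though not exactly equal to, the reported values, which were based on more precise population counts.

# poverty benchmark with the approximate figures from the text
pop.black  <- 40e6;  pop.white  <- 180e6       # approximate population sizes
poor.black <- 0.20 * pop.black                 # 20% poverty rate
poor.white <- 0.08 * pop.white                 # 8% poverty rate
killed.black <- 255; killed.white <- 526       # average police killings per year (Table 1)
(killed.black / pop.black)  / (killed.white / pop.white)    # ~2.2: risk ratio in the total population
(killed.black / poor.black) / (killed.white / poor.white)   # ~0.9: risk ratio among the poor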

The present results are much more likely to account for the racial bias in police killings than Cesario et al.’s (2019) analyses that suggested crime is a key factor. The crime hypothesis makes the unrealistic assumption that only criminals get killed by police. However, it is known that innocent US citizens are sometimes killed by accident in police encounters. It is also not clear how police could avoid such accidents because they cannot always know whether they are encountering a criminal or not. In these situations of uncertainty, police officers may rely on cues that are partially valid indicators such as race or appearance. The present results suggest that cues of poverty play a more important role than race. As a result, poor White citizens are also more likely to be killed than middle-class and well-off citizens.

Cesario et al.’s (2019) analyses also produced some surprising and implausible results. For example, when using reported violent crimes, Black citizens have a higher absolute number of severe crimes (67,534 reported crimes in a year) than White citizens (29,713). Using these numbers as benchmarks for police shootings leads to the conclusion that police officers are about 5 times more likely to kill a White criminal than a Black criminal, OR(B/W) = 0.21.

According to this analysis, police should have killed 1,195 Black criminals, given the fact that they killed 526 White criminals and that there are 2.3 times more Black criminals than White criminals. Thus, the fact that they only killed 255 Black criminals shows that police disproportionally kill White criminals. Cesario et al. (2019) offer no explanation for this finding. They are satisfied with the fact that their analyses show no bias to kill more Black citizens.
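The same arithmetic applies to the crime benchmark, using the figures cited above.

# crime benchmark: police killings relative to reported violent crimes
crimes.black <- 67534; crimes.white <- 29713
killed.black <- 255;   killed.white <- 526
(killed.black / crimes.black) / (killed.white / crimes.white)   # ~0.21: killings per reported crime, Black vs. White
killed.white * (crimes.black / crimes.white)                    # ~1,195: expected Black victims at the White rate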

The reason for the unexplained White-bias in police killings is that it is simply wrong to use crime rates as the determinant of police shootings. Another injustice in the United States is that Black victims of crime are much less likely to receive help from the police than White victims (Washington Post). For example, the Washington Post estimated that every year 2,600 murders go without an arrest of a suspect. It is much more likely that the victim of an unsolved murder is Black (1,860) than White (740), OR(B/W) = 2.5. Thus, one reason why police officers are less likely to kill Black criminals than White criminals is that they are much less likely to arrest Black criminals who murdered a Black citizen. This means that crime rates are a poor benchmark for encounters with the police, because it is more likely that a Black criminal gets killed by another Black criminal than that he is arrested by a White police officer. Innocent, poor Black citizens therefore face two injustices: they are more likely to be mistaken for a criminal and killed by police, and they do not receive help from police when they are the victim of a crime.

Conclusion

I welcome Cesario et al.’s (2019) initiative to examine the causes of racial disparities in police shootings. I also agree with them that we need to use proper benchmarks to understand these racial disparities. However, I disagree with their choice of crime statistics to benchmark police shootings. The use of crime statistics is problematic for several reasons. First, police do not always know whether they encounter a criminal or not and sometimes shoot innocent people. The use of crime statistics doesn’t allow for innocent victims of police shootings and makes it impossible to examine racial bias in the killing of innocent citizens. Second, crime statistics are a poor indicator of police encounters because there exist racial disparities in the investigation of crimes with Black and White victims. I show that poverty is a much better benchmark and that it accounts for racial disparities in police shootings. Using poverty as the benchmark, there is only a relatively small bias for police officers to be more likely to shoot poor White citizens than poor Black citizens, and this bias may be explained by the higher rate of gun ownership among White citizens.

Implications

My new finding that poverty rather than criminality accounts for racial disparities in police shootings has important implications for public policy.

Cesario et al. (2019) suggest that their findings imply that implicit bias training will have little effect on police killings.

This suggests that department-wide attempts at reform through programs such as implicit bias training will have little to no effect on racial disparities in deadly force, insofar as officers continue to be exposed after training to a world in which different racial groups are involved in criminal activity to different degrees (p. 592).

This conclusion is based on their view that police only kill criminals during lawful arrests and that killings of violent criminals are an unavoidable consequence of having to arrest these criminals.

However, the present results lead to a different conclusion. Although some killings by police are unavoidable, others can be avoided because not all victims of police shootings are violent criminals. The new insight is that the bias is not limited to Black people, but also includes poor White people. I see no reason why better training could not reduce the number of killings of poor Americans.

The public debate about police killings also ignores other ways to reduce them. The main reason for the high prevalence of police killings in the United States is the country’s gun laws, and this will not change any time soon. Thus, all citizens of the United States, even those who do not own guns, need to be aware that many US citizens are armed. A police officer who makes 20 traffic stops a day is likely to encounter at least five drivers who own a gun and maybe a couple of drivers who have a gun in their car. Anybody who encounters a police officer needs to understand that the officer has to assume they might be armed. This means citizens need to be trained in how to signal to a police officer that they do not have a gun and pose no threat to the officer’s life. Innocent until proven guilty applies in court, but it does not apply when police encounter citizens: you are a potential suspect until officers can be sure that you are not a threat to them. This is the price US citizens pay for the right to bear arms. Even if you do not exercise this right, it is your right, and you have to pay the price for it. Every year, 50 police officers get killed; every day they take a risk when they put on their uniform to do their job. Help them do their job and make sure that you and they walk away safe and sound from the encounter. It is unfair that poor US citizens have to work harder to convince the police that they are not a threat, but better communication, contact, and training can help to make encounters between police and civilians safer.

In conclusion, my analysis of police shootings shows that racial bias in police shootings is a symptom of a greater bias against poor people. Unlike race, poverty is not genetically determined. Social reforms can reduce poverty and the stigma of poverty, and sensitivity training can be used to avoid the killing of innocent poor people by police.