Hidden Evidence in Racial Bias Research by Cesario and Johnson

In two articles, Cesario and Johnson have claimed that police officers show a racial bias in the use of deadly force (Cesario, Johnson, & Terrill, 2019; Johnson, Tress, Burkel, Taylor, & Cesario, 2019). Surprisingly, they claim that police officers in the United States are MORE likely to shoot White civilians than Black civilians. And the differences are not small either. According to their PNAS article, “a person fatally shot by police was 6.67 times less likely (OR = 0.15 [0.09, 0.27]) to be Black than White” (p. 15880). In their SPPS article, they write: “The odds were 2.7 times higher for Whites to be killed by police gunfire relative to Blacks given each group’s SRS homicide reports, 2.6 times higher for Whites given each group’s SRS homicide arrests, 2.9 times higher for Whites given each group’s NIBRS homicide reports, 3.9 times higher for Whites given each group’s NIBRS homicide arrests, and 2.5 times higher for Whites given each group’s CDC death by assault data.” Thus, the authors claim that for every Black civilian killed by police, there are 2 to 6 White civilians killed by police under similar circumstances.

The main problem with Cesario and Johnson’s conclusion is that it rests entirely on the assumption that violent crime statistics are a reasonable estimate of how frequently civilians encounter police in situations that may result in the fatal use of force. Two quotes from their SPPS article make this assumption explicit:

“One cannot experience a policing outcome without exposure to police, and if exposure rates differ across groups, then the correct benchmark is on those exposure rates.” (Cesario, Johnson, & Terrill, 2019, p. 587).

“In the context of police shootings, exposure would be reasonably approximated by rates of criminal involvement for Blacks and Whites; the more group members are involved in criminal activity, the more exposure they have to situations in which police shootings would be likely to occur” (p. 587).

The quotes make it clear that Cesario and Johnson use crime statistics as a proxy for encounters with police that sometimes result in the fatal use of force.

What Cesario and Johnson are not telling their readers is that there are much better statistics to estimate how frequently civilians encounter police. I don’t know why Cesario and Johnson did not use this information or share it with their readers. I only know that they are aware that this information exists, because they cite an article that made use of it in their PNAS article (Tregle, Nix, & Alpert, 2019). Although Tregle et al. (2019) use exactly the same benchmarking approach as Cesario and Johnson, their results are not mentioned in the SPPS article.

The Police-Public Contact Survey

The Bureau of Justice Statistics has collected data from over 100,000 US citizens about their encounters with police. The Police-Public Contact Survey (PPCS) was conducted in 2002, 2005, 2008, 2011, and 2015. Tregle et al. (2019) used the freely available data to create three benchmarks for fatal police shootings.

First, they estimated that there are 2.5 million police-initiated contacts a year with Black civilians and 16.6 million police-initiated contacts a year with White civilians. This is a ratio of 1:6.5, which is slightly bigger than the ratio for Black and White citizens (39.9 million vs. 232.9 million), 1:5.8. Thus, there is no evidence that Black civilians have disproportionally more encounters with police than White civilians. Using either one of these benchmarks still suggests that Black civilians are more likely to be fatally shot than White civilians, by a ratio of about 3:1.
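To make the arithmetic transparent, here is a minimal R sketch of these benchmark calculations. The contact and population counts are the ones just quoted; the annual fatal-shooting counts (255 Black, 526 White) are taken from Cesario et al.’s (2019) Table 1, which is discussed later in this post.

black_contacts <- 2.5e6; white_contacts <- 16.6e6  # police-initiated contacts per year (PPCS)
black_pop <- 39.9e6; white_pop <- 232.9e6          # population counts
black_fatal <- 255; white_fatal <- 526             # fatal shootings per year (Cesario et al., 2019, Table 1)
white_contacts / black_contacts                    # ~6.6, contact ratio (White:Black)
white_pop / black_pop                              # ~5.8, population ratio (White:Black)
(black_fatal / black_contacts) / (white_fatal / white_contacts)  # ~3.2, odds ratio (Black:White)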

One reason for the proportionally higher rate of police encounters for White civilians is that they drive more than Black civilians, which leads to more traffic stops of White drivers. Here the ratio is 2.0 million to 14.0 million, or 1:7. The picture changes for street stops, with a ratio of 0.5 million to 2.6 million, or 1:4.9. But even this ratio still implies that Black civilians are at a greater risk of being fatally shot during a street stop, with an odds ratio of 2.55:1.
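Continuing the sketch above with the stop counts just quoted (the results differ slightly from the figures in the text because the published counts are rounded):

black_traffic <- 2.0e6; white_traffic <- 14.0e6  # traffic stops per year
black_street  <- 0.5e6; white_street  <- 2.6e6   # street stops per year
(black_fatal / black_traffic) / (white_fatal / white_traffic)  # ~3.4, benchmarked on traffic stops
(black_fatal / black_street) / (white_fatal / white_street)    # ~2.5, benchmarked on street stops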

It is telling that Cesario and Johnson are aware of an article that came to the opposite conclusion based on a different approach to estimating police encounters, yet do not mention this finding in their article. Apparently it was more convenient to ignore this inconsistent evidence and to tell their readers that the data consistently show no anti-Black bias. While readers who are not scientists may be shocked by this omission of inconvenient evidence, scientists are all too familiar with this deceptive practice of cherry picking that is eroding trust in science.

Encounters with Threats and Use of Force

Cesario and Johnson are likely to argue that it is wrong to use police encounters as a benchmark and that violent crime statistics are more appropriate because police officers mostly use force in encounters with violent criminals. However, this is simply an assumption that is not supported by evidence. For example, it is questionable to use homicide statistics, because homicide arrests account for only a small portion of incidents that end in the fatal use of force.

A more reasonable benchmark is incidents of non-fatal use of force. The PPCS data make it possible to construct such a benchmark because respondents also report on the nature of their contact with police, including the use of force. It is not even necessary to download and analyze the data, because Hyland et al. (2015) already reported on racial disparities in incidents that involved threats or non-fatal use of force (see Table 2, reproduced from Table 1 in Hyland et al., 2015).

The crucial statistic is that there are 159,100 encounters with Black civilians and 445,500 encounters with White civilians that involve threats or the use of force; a ratio of 1:2.8. Using non-fatal encounters as a benchmark for fatal encounters still yields a greater probability for a Black civilian than for a White civilian to be killed, although the ratio now drops to 1.4:1.
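Continuing with the same fatal-shooting counts as before, this benchmark calculation looks like this (a sketch; the encounter counts are from Hyland et al.’s Table 1):

black_force <- 159100; white_force <- 445500  # encounters involving threat or use of force
(black_fatal / black_force) / (white_fatal / white_force)  # ~1.4, odds ratio (Black:White)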

It is not clear why Cesario and Johnson did not make use of a survey that was specifically designed to measure police encounters. What is clear is that these data exist and that they lead to a dramatically different conclusion than Cesario and Johnson’s surprising results, which rely on violent crime statistics to estimate police encounters.

Implications

It is important to keep in mind that the racial disparity in the fatal use of force in the population is 3:1 (Tregle et al., 2019, Table 1). The evidence from the PPCS only helps to shed light on the factors that contribute to this disparity. First, Black civilians are not considerably more likely to have contact with police than White civilians. Thus, it is simply wrong to claim that different rates of contact with police explain racial disparities in the fatal use of force. There is also no evidence that Black civilians are disproportionally more likely to be stopped by police while driving. The caveat here is that Whites might drive more and that there could be a racial bias in traffic stops after taking the amount of driving into account. This simply shows how difficult it is to draw conclusions about racial bias from these kinds of data. However, the data do show that the racial disparity in the fatal use of force cannot be attributed to more traffic stops of Black drivers. Even the ratio of street stops is not notably different from the population ratio.

The picture changes when threats and the use of force are added to the picture. Black civilians are 2.5 times more likely than White civilians to have an encounter that involves threats or the use of force (3.5% vs. 1.4%; Table 2, from Table 1 in Hyland et al., 2015).

These results shed some light on an important social issue, but they also fail to answer important questions. First of all, they do not tell us why officers use threats and force more often with Black civilians. Sometimes the use of force is justified, and some respondents to the PPCS even admitted that the use of force against them was justified. At other times, however, the use of force is excessive. The incidence rates in the PPCS are too low to draw firm conclusions about this important question.

Unfortunately, social scientists are under pressure to publish to build their careers, and they are under pressure to present strong conclusions to get their manuscripts accepted. This pressure can lead researchers to make bigger claims than their data justify. This is the case with Cesario and Johnson’s claim that officers have a strong bias to use deadly force more frequently with White civilians than with Black civilians. This claim is not supported by strong data. Rather, it rests entirely on the use of violent crime statistics to estimate police encounters. Here I show that this approach is questionable and that different results are obtained with other reasonable approaches to estimating racial differences in police encounters.

Unfortunately, Cesario and Johnson are unable to see how offensive their claims are to the family members of innocent victims of deadly use of force when they attribute the use of force to violent crime, which implies that the use of force was justified and that the victims were all criminals who threatened police with a weapon. Even if the vast majority of cases are justified and the fatal use of force was unavoidable, it is well known that this is not always the case. Research on the fatal use of force would be less important if police officers never made mistakes in the use of force. Cesario and Johnson receive taxpayer money to fund their research precisely because the fatal use of force is sometimes unnecessary and unjustified. It is those cases that require an explanation and interventions that minimize the unnecessary use of force. To use taxpayers’ money to create the false impression that the fatal use of force is always justified, and that police officers are more afraid of using force with Black civilians than they are afraid of Black civilians, is unhelpful and offensive to the families of innocent Black victims who are grieving a loved one. The families of Tamir Rice, Atatiana Jefferson, Eric Garner, and Philando Castile, to name a few, deserve better.

Police Officers are not Six Times more Likely to Shoot White Civilians than Black Civilians: A Coding Error in Johnson et al. (2019)

Rickard Carlsson and I submitted a letter to the Proceedings of the National Academy of Sciences. The format allows only 500 words (PDF). Here is the long version of our concerns about Johnson et al.’s PNAS article on racial disparities in police shootings. An interesting question for meta-psychologists is how the authors and reviewers failed to catch an error that led to the implausible result that police officers are six times more likely to shoot White civilians than Black civilians when they felt threatened by a civilian.

Police Officers are not Six Times more Likely to Shoot White Civilians than Black Civilians: A Coding Error in Johnson et al. (2019)

Ulrich Schimmack, University of Toronto Mississauga
Rickard Carlsson, Linnaeus University

The National Academy of Sciences (NAS) was founded in 1863 by Abraham Lincoln to provide independent, objective advice to the nation on matters related to science and technology (1).  In 1914, NAS established the Proceedings of the National Academy of Sciences (PNAS) to publish scientific findings of high significance.  In 2019, Johnson, Tress, Burkel, Taylor, and Cesario published an article on racial disparities in fatal shootings by police officers in PNAS (2).  Their publication became the topic of a heated exchange in the Oversight Hearing on Policing Practices in the House Committee on the Judiciary on September 19, 2019. Heather Mac Donald cited the article as evidence that there is no racial disparity in fatal police shootings. Based on the article, she also claimed, “In fact, black civilians are shot less, compared with whites, than their rates of violent crime would predict” (3). Immediately after her testimony, Phillip Atiba Goff challenged her claims and pointed out that the article had been criticized (4). In a rebuttal, Heather Mac Donald cited the authors’ response, in which they stated that they stand by their finding (5).  Here we show that the authors’ conclusions are based on a statistical error in their analyses.

The authors relied on the Guardian’s online database about fatal use of force (7). The database covers 1,146 incidents in 2015.  One aim of the authors’ research was to examine the influence of officers’ race on the use of force. However, because most officers are White, there were only 12 incidents (5%) in which a Black civilian was fatally shot by a Black officer. This makes it impossible to estimate statistically reliable effects of officers’ race.  In addition, the authors examined racial disparities in fatal shootings with regression models that related victims’ race to victims’, officers’, and counties’ characteristics. The results showed that “a person fatally shot by police was 6.67 times less [italics added] likely (OR = 0.15 [0.09, 0.27]) to be Black than White” (p. 15880).  This finding would imply that for every case of fatal use of force against a Black citizen like Eric Garner or Tamir Rice, there should be six similar cases with White citizens.  The authors explain this finding with depolicing; that is, officers may be “less likely to fatally shoot Black civilians for fear of public and legal reprisal” (p. 15880).  The authors also conducted several additional analyses that are reported in their supplementary materials.  However, they claim that their results are robust and “do not depend on which predictors are used” (p. 15881). We show that all of these statements are invalidated by a coding mistake in their statistical model.

Table 1
Racial Disparity in Race of Fatally Shot Civilians

Model   County Predictors             Odds Ratio (Black/White), 95% CI
M1      Homicide Rates                0.31 (0.23, 0.42)
M2      Population Rates              2.03 (1.21, 3.41)
M3      Population & Homicide Rates   0.89 (0.44, 1.80)

The authors did not properly code categorical predictor variables. In a reply, the authors acknowledged this mistake and redid the analyses with proper weighted effect coding of categorical variables. Their new results are reported in Table 1.  The corrected results show that the choice of predictor variables does have a strong influence on the conclusions.  In a model that uses only homicide rates as a predictor (M1), the intercept still shows a strong anti-White bias, with 3 White civilians being killed for every Black civilian in a county with equal proportions of Black and White citizens. In the second model, with population proportions as the predictor (M2), the data show an anti-Black bias. When both predictors are used (M3), the data show parity, but with a wide margin of error that ranges from a ratio of 2 White civilians per Black civilian to 2 Black civilians per White civilian.  Thus, after correcting the statistical mistake, the results are no longer consistent, and it is important to examine which of these models should be used to make claims about racial disparities.
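To read these odds ratios in both directions, it helps to invert them (a trivial R sketch of the figures in Table 1):

1 / 0.31   # M1: ~3.2 White civilians shot for every Black civilian
2.03       # M2: ~2 Black civilians shot for every White civilian
1 / 0.44   # M3, lower CI bound: ~2.3 White civilians per Black civilian
1.80       # M3, upper CI bound: ~1.8 Black civilians per White civilian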

We argue that it is necessary to include population proportions in the model.  After all, there are many counties in the dataset with predominantly White populations and no shootings of Black civilians. This is not surprising. For officers to encounter and fatally shoot a Black resident, there have to be Black civilians. To ignore the demographics would be a classic statistical mistake that can lead to false conclusions. Consider the famous example that is used to teach the difference between correlation and causation: it appears as if Christians commit more homicides because homicide rates are positively correlated with the number of churches. This inference is wrong because the correlation between churches and homicides simply reflects the fact that counties with a larger population have more churches and more homicides.  Thus, the model that uses only population ratios as a predictor is useful because it tells us whether White or Black people are shot more often than we would expect if race were unrelated to police shootings. Consistent with other studies, including an article by the same authors, we see that Black citizens are shot disproportionally more often than White citizens (8, 9).

The next question that a scientific study of police shootings can examine is why there exist racial disparities in police shootings.  Importantly, answering this question does not make racial disparities disappear. Even if Black citizens are shot more often because they are more often involved in crime, as the authors claim, there still exists a racial disparity.  It does not disappear, nor does this explanation account for incidents like the deaths of Eric Garner or Tamir Rice.  However, the authors’ conclusion that “racial disparity in fatal shootings is explained by non-Whites’ greater exposure to the police through crime” (p. 15881) is invalid for several reasons.

First of all, the corrected results for the model that takes homicide rates and population rates into account no longer provide conclusive evidence about racial disparities. The data still allow for a racial disparity in which Black civilians are shot at twice the rate of White civilians.  Moreover, this model ignores the authors’ own finding that victims’ age is a significant predictor of victims’ race.  Parity is obtained at the average age of 37, but the age effect implies that 20-year-old victims are significantly more likely to be Black, OR(B/W) = 3.26, 95%CI = 1.26 to 8.43, while 55-year-old victims are significantly more likely to be White, OR(B/W) = 0.24, 95%CI = 0.08 to 0.71.  Thus, even when homicide rates are included in the model, the authors’ data are consistent with the public perception that officers are more likely to use force with young Black men than with young White men.
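The reported odds ratios imply a roughly log-linear age effect. Here is a minimal R sketch that reconstructs the approximate slope from the two reported values; the numbers will not match the published ones exactly, because those come from a model with additional covariates and are rounded:

b_age <- (log(0.24) - log(3.26)) / (55 - 20)       # implied slope, ~ -0.075 per year of age
or_at_age <- function(age) exp(b_age * (age - 37)) # parity at the average age of 37
or_at_age(20)  # ~3.5 (reported: 3.26)
or_at_age(55)  # ~0.26 (reported: 0.24)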

The second problem is that the model does not include other potentially relevant predictor variables, such as poverty rates, and that an analysis across counties is unable to distinguish between actual and spurious predictors because all statistics are highly correlated with counties’ demographics (r > .9).

A third problem is that it is questionable to rely on homicide statistics as a proxy for police encounters. The use of homicide rates implies that most victims of fatal use of force were involved in homicides. However, the incidents in the Guardian database show that many victims were involved in less severe crimes.

Finally, it is still possible that there is racial disparity in the unnecessary use of force even if fatal incidents are proportional to violent crimes. If police encounter more Black people in ambiguous situations because Black people are disproportionally more involved in violent crime, they would still accidentally shoot more Black citizens than White citizens. It is therefore important to distinguish between racial bias of officers and racial disparities in fatal incidents of use of force.  Racial bias is only one of several factors that can produce racial disparities in the use of excessive force.

Conclusion

During a hearing on policing practices in the House Committee on the Judiciary, Heather Mac Donald cited Johnson et al.’s (2019) article as evidence that crime accounts for racial disparities in the use of lethal force by police officers and that “black civilians are shot less, compared with whites, than their rates of violent crime would predict.” Our analysis of Johnson et al.’s (2019) article shows that these statements are to a large extent based on a statistical error.  Thus, the article cannot be used as evidence to claim that there are no racial disparities in policing or as evidence that police officers are even more reluctant to use excessive force with Black suspects than with White suspects.  The only lesson that we can learn from this article is that social scientists make mistakes and that pre-publication peer review alone does not ensure that these mistakes are caught and corrected. It is puzzling how the authors and reviewers did not detect a statistical mistake when the results implied that police officers fatally shoot 6 White suspects for every Black suspect. It was this glaring finding that made us conduct our own analyses and detect the mistake. This shows the importance of post-publication peer review to ensure that scientific information that informs public policy is as objective and informative as it can be.

References

1. National Academy of Sciences. Mission statement. http://www.nasonline.org/about-nas/mission/

2. Johnson, D. J., Tress, T., Burkel, N., Taylor, C., & Cesario, J. (2019). Officer characteristics and racial disparities in fatal officer-involved shootings. Proceedings of the National Academy of Sciences, 116(32), 15877–15882.

3. MacDonald, H. (2019). False Testimony, https://www.city-journal.org/police-shootings-racial-bias

4. Knox, D. & Mummolo, J. (2019). Making inferences about racial disparities in police violence. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3431132

5. Johnson, D. J., & Cesario, J. (2019). Reply to Knox and Mummolo: Critique of Johnson et al. (2019). https://psyarxiv.com/dmhpu/

6. Johnson, D. J., & Cesario, J. (2019). Reply to Schimmack: Critique of Johnson et al. (2019).

7. “The counted.” The Guardian. https://www.theguardian.com/us-news/ng-interactive/2015/jun/01/the-counted-police-killings-us-database#

8. Cesario, J., Johnson, D. J., & Terrill, W. (2019). Is there evidence of racial disparity in police use of deadly force? Analyses of officer-involved fatal shootings in 2015–2016. Social Psychological and Personality Science, 10(5), 586–595.

9. Edwards, F., Lee, H., & Esposito, M. (2019). Risk of being killed by police use of force in the United States by age, race-ethnicity, and sex. Proceedings of the National Academy of Sciences, 116(34), 16793–16798. doi: 10.1073/pnas.1821204116


Does PNAS article show there is no racial bias in police shootings?

Politics in the United States is extremely divisive and filled with false claims based on fake facts. Ideally, social scientists would bring some clarity to these toxic debates by informing US citizens and politicians with objective and unbiased facts. However, social scientists often fail to do so for two reasons. First, they often lack the proper data to provide valuable scientific input into these debates. Second, when the data do not provide clear answers, social scientists’ inferences are shaped as much (or more) by their preexisting beliefs as by the data. It is therefore not a surprise that the general public increasingly ignores scientists because they don’t trust them to be objective.

Unfortunately, police killings in the United States are one of these topics. While a few facts are known and not disputed, these facts do not explain why Black US citizens are killed by police more often than White citizens. While it is plausible that multiple factors contribute to this sad statistic, the debate is shaped by groups who blame a White police force on the one hand or Black criminals on the other.

On September 19, 2019, the House Committee on the Judiciary held an Oversight Hearing on Policing Practices. In this hearing, an article in the prestigious journal Proceedings of the National Academy of Sciences (PNAS) was referenced by Heather Mac Donald, who works for the conservative think tank the Manhattan Institute, as evidence that crime is the single factor that explains racial disparities in police shootings.

The Manhattan Institute posted a transcript of her testimony before the committee. Her claim is clear. She not only claims that crime explains the higher rate of Black citizens being killed; she even claims that taking crime into account reveals a bias of the police force to kill disproportionally FEWER Black citizens than White citizens.


Heather Mac Donald is not a social scientist, and nobody should expect her to be an expert in logistic regression. This is the job of scientists: authors, reviewers, and editors. The question is whether they did their job correctly and whether their analyses support the claim that, after taking population ratios and crime rates into account, police officers in the United States are LESS likely to shoot a Black citizen than a White citizen.

The abstract of the article summarizes three findings.

1. As the proportion of Black or Hispanic officers in a FOIS increases, a person shot is more likely to be Black or Hispanic than White.

In plain English: in counties with proportionally more Black citizens, proportionally more Black people are being shot. For example, the proportion of Black people killed in Georgia or Florida is greater than the proportion of Black people killed in Wyoming or Vermont. You do not need a degree in statistics to realize that this tells us only that police cannot shoot Black people if there are no Black people. This result tells us nothing about the reasons why proportionally more Black people than White people are killed in places where Black and White people live.

2. Race-specific county-level violent crime strongly predicts the race of the civilian shot.

Police do not shoot and kill citizens at random. Most, though not all, police shootings occur when officers are attacked and are justified in defending themselves with lethal force. When police officers in Wyoming or Vermont are attacked, it is highly likely that the attacker is White. In Georgia or Florida, the chance that the attacker is Black is higher. Once more, this statistical fact does not tell us why Black citizens in Georgia, Florida, or other states with a large Black population are killed proportionally more often than White citizens in these states.

3. The key finding that seems to address racial disparities in police killings is that “although we find no overall evidence of anti-Black or anti-Hispanic disparities in fatal shootings, when focusing on different subtypes of shootings (e.g., unarmed shootings or ‘suicide by cop’), data are too uncertain to draw firm conclusions”.

First, it is important to realize that the authors do not state that they have conclusive evidence that there is no racial bias in police shootings. In fact, they clearly state that for shootings of unarmed citizens their data are inconclusive. It is a clear misrepresentation of this article to claim that it provides conclusive evidence that crime is the sole factor that contributes to racial disparity in police shootings. Thus, Heather Mac Donald lied under oath and misrepresented the article.

Second, the abstract misstates the actual findings reported in the article when the authors claim that they “find no overall evidence of anti-Black or anti-Hispanic disparities in fatal shootings”. The problem is that the design of the study is unable to examine this question. To see this, it is necessary to look at the actual statistical analyses more carefully. The study examines a different question: which characteristics of a victim make it more or less likely that the victim is Black or White. For example, an effect of age could show that young Black citizens are proportionally more likely to be killed than young White citizens, while older Black men are proportionally less likely to be shot than older White men. This would provide some interesting insights into the causal factors that lead to police shootings, but it doesn’t change anything about the proportions of Black and White citizens being shot by police.

We can illustrate this using the authors’ own data, which they shared (unfortunately, they did not share the information about officers that would be needed to fully reproduce their results). They did find a significant effect for age. To make the effect easier to interpret, I divided victims into those under 30 and those 30 and above. This produces a simple 2 x 2 table.

An inspection of the cell frequencies shows that the group with the highest frequency is older White victims. This is only surprising if we ignore the base rates of these groups in the general population. Older White citizens are more likely to be victims of police shootings because there are more of them in the population. As this analysis does not examine proportions in the population, this information is irrelevant.

It is also not informative that there are about two times more White victims (476) than Black victims (235). Again, we would expect more White victims simply because more US citizens are White.

The meaningful information is provided by the odds of a victim being Black rather than White in the two age groups. Here we see that older victims are much less likely to be Black (122/355) than younger victims (113/121). When we compare the proportions of Black victims in the two age groups (48% vs. 26%), we see that young victims are 1.89 times more likely to be Black than old victims. This shows that young Black men are disproportionally more likely to be the victims of police shootings than young White men. Consistent with this finding, the article states that “Older civilians were 1.85 times less likely (OR = 0.54 [0.45, 0.66]) to be Black than White”.
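A minimal R sketch of this 2 x 2 computation, using the cell counts given above:

young <- c(black = 113, white = 121)
old   <- c(black = 122, white = 355)
young["black"] / young["white"]        # odds of being Black among young victims, ~0.93
old["black"] / old["white"]            # odds of being Black among old victims, ~0.34
(113 / sum(young)) / (122 / sum(old))  # ratio of the proportions of Black victims, ~1.89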

In Table 2, the age effect remains significant after controlling for many variables, including rates of homicides committed by Black citizens. Thus, the authors found that young Black citizens are killed more frequently by police than young White citizens, even when they attempted to statistically control for the fact that young Black men are disproportionally involved in criminal activities. This finding is not surprising to critics who claim that there is a racial bias in the police force that has resulted in the deaths of innocent young Black men. It is actually exactly what one would expect if racial bias plays a role in police shootings.

Although this finding is statistically significant, and the authors actually mention it when they report the results in Table 1, they never comment on it again in their article. This is extremely surprising because it is common practice to highlight statistically significant results and to discuss their theoretical implications. Here, the implications are straightforward: racial bias does not target all Black citizens equally. Young Black men (only 10 Black and 25 White victims were female) are disproportionally more likely to be shot by police, even after controlling for several other variables.

Thus, while the authors’ attempt to find predictors of victims’ race provides some interesting insights into the characteristics of Black victims, these analyses do not address the question of why Black citizens are more likely to be shot than White citizens. It is therefore unclear how the authors can state, “We find no evidence of anti-Black or anti-Hispanic disparities across shootings” (p. 15877), or, “When considering all FOIS in 2015, we did not find anti-Black or anti-Hispanic disparity” (p. 15880).

Surely, they are not trying to say that they didn’t find evidence for it because their analysis didn’t examine this question. In fact, their claims are based on the results in Table 3. Based on these results, the authors come to the conclusion that, “controlling for predictors at the civilian, officer, and county levels,” a victim is more than 6 times more likely to be White than Black. This makes absolutely no sense if the authors did, indeed, center continuous variables and effect-code nominal variables, as they state.

The whole point of centering and effect coding is to keep the intercept of an analysis interpretable and consistent with the odds in the sample without predictor variables. To use age again as an example, the odds of a victim being Black are .49. Adding age as a predictor shows us how the odds change within the two age groups, but this does not change the overall odds. However, if we do not center the continuous age variable, or do not take the different frequencies of young (234) and old (477) victims into account, the intercept is no longer interpretable as a measure of racial disparities.


To illustrate this, here are the results of several logistic regression analyses with age as a predictor variable.

First, I used raw age as a predictor.
summary(glm(race ~ pc$age, family = binomial(link = "logit")))  # raw, uncentered age as predictor

The intercept changes from -.71 to 1.08. As these values are log-odds, we need to transform them to get the odds, which are .49 (235/476) and 2.94. The reason is that the intercept is now a prediction of the racial disparity at age 0, which would suggest that police officers are 3 times more likely to kill a Black newborn than a White newborn. This prediction is totally unrealistic because there are fortunately very few victims younger than 15 years of age. In short, this analysis changes the intercept, but the results no longer tell us anything about racial disparities in general, because the intercept refers to a very small, and in this case non-existent, subgroup.

We can avoid this problem by centering or standardizing the predictor variable. Now a value of 0 corresponds to the average age.

age.centered <- pc$age - mean(pc$age)  # center age at the sample mean
summary(glm(race ~ age.centered, family = binomial(link = "logit")))

The age effect remains the same, but now the intercept is proportional to the odds in the total sample [disclaimer: I don’t know why it changed from -.71 to -.78; any suggestions are welcome].

This is also true when we split age into young (< 30) and old (30 or older) groups.

When the groups are dummy coded (< 30 = 0, 30+ = 1), the intercept changes and shows that victims are more likely to be Black in the younger group, which is coded as zero.

summary(glm(race ~ (pc$age > 30), family = binomial(link = "logit")))  # dummy coding: the logical predictor is treated as 0/1

However, with effect coding the intercept hardly changes.

summary(glm(race ~ scale(pc$age > 30), family = binomial(link = "logit")))  # scale() creates a standardized, mean-zero coding
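For readers who want to see the weighting explicitly, here is a sketch of weighted effect coding done by hand; it uses the same pc data frame and race variable as the code above and keeps the predictor mean-zero, like the scale() call:

g <- pc$age > 30
n <- table(g)
x <- ifelse(g, 1, -n["TRUE"] / n["FALSE"])  # weighted effect codes; they sum to zero across the sample
mean(x)                                     # 0 by construction
summary(glm(race ~ x, family = binomial(link = "logit")))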


Thus, it makes no sense that the authors claim to have centered continuous variables and effect-coded nominal variables while the intercept changed from exp(-.71) = .49 to the exp(-1.90) = .15 that they report in Table 3. Something went wrong in their analyses.

Even if this result were correct, the interpretation of this result as a measure of racial disparities would be wrong. One factor that is omitted from the analysis is the proportion of White citizens in the counties. It doesn’t take a rocket scientist to realize that counties with a larger White population are more likely to have White victims. The authors do not take this simple fact into account, although they did have a measure of population size in their data set. We can create a measure of the proportion of Black and White citizens and center the predictor so that the intercept reflects a population with equal proportions of Black and White citizens.
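A sketch of this predictor; the column names black_pop and white_pop are hypothetical placeholders for the county population counts in the shared data:

PopRatio <- log(pc$black_pop / pc$white_pop)  # hypothetical column names; 0 = equal Black and White populations
summary(glm(race ~ PopRatio, family = binomial(link = "logit")))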

When we use this variable as a predictor, the surprising finding that police officers are much more likely to shoot and kill White citizens disappears. The odds ratio changes from 0.49 to exp(-.04) = .96, and the 95%CI includes 1, 95%CI = 0.78 to 1.19.

This finding may be taken as evidence that there is little racial disparity after taking population proportions into account. However, this ignores the age effect that was found earlier. When age is included as a predictor, we now see that young Black men are disproportionally likely to be killed, while the reverse is true for older victims. One reason for this could be that criminals are at a higher risk of being killed. If White criminals are not killed in their youth, they are still likely to be killed at an older age. Because Black criminals are killed at a younger age, there are fewer Black criminals who get killed at an older age. Importantly, this argument does not imply that all victims of police shootings are criminals. The bias to kill Black citizens at a younger age also affects innocent Blacks, as the age effect remained significant after controlling for crime rates.

The racial disparity for young citizens becomes even larger when homicide rates are included using the same approach. I also excluded counties with a ratio greater than 100:1 for population or homicide rates.

summary(glm(race ~ (pc$age > 30) + PopRatio + HomRatio, family = binomial(link = "logit")))  # HomRatio: analogous log ratio for homicide rates (see text)


The intercept of 0.65 implies that young (< 30) victims of police shootings are about two times more likely to be Black than White when we adjust the risk for the proportion of Black vs. White citizens and homicides. The significant age effect shows again that this risk switches for older citizens. As we are adjusting for homicide rates, this suggests that older White citizens are at an increased risk of being killed by police. This is an interesting observation, as much of the debate has been about young Black men who were innocent. According to these analyses, there should also be cases of older White men who are innocent victims of police killings. Looking for examples of these cases and creating more awareness about them does not undermine the concerns of the Black Lives Matter movement. Police killings are not a zero-sum game. The goal should be to work towards reducing the loss of Black, Blue (police), White, and all other lives.

Scientific studies can help to do that when authors analyze and interpret the data correctly. Unfortunately, this is not what happened in this case. Fortunately, the authors shared (some of) their data, and it was possible to put their analyses under the microscope. The results show that their key conclusions are not supported by their data. First, there is no disparity that leads to the killing of more White citizens than Black or Hispanic citizens by police. This claim is simply false. Second, the authors have an unscientific aversion to taking population rates into account. In counties with a mostly White population, crime is mostly committed by White citizens, and police are more likely to encounter and kill White criminals. It is not a mistake to include population rates in statistical analyses. It is a mistake not to do so. Third, the authors ignored a key finding of their own analysis, namely that age is a significant predictor of police shootings. Consistent with the Black Lives Matter claim, their data show that police disproportionally shoot young Black men. This bias is offset to some extent by the opposite bias in older age groups, presumably because Black men have already been killed, which reduces the at-risk population of Black citizens in this age group.

In conclusion, the published article failed to show that there is no racial disparity in police shootings, but it was easily misunderstood as providing evidence for this claim. A closer inspection reveals even more problems, which means the article should not be used to support empirical claims about police shootings. Ideally, the article would be retracted. At a minimum, PNAS should publish a notice of concern.

Poverty Explains Racial Bias in Police Shootings

Statistics show that Black US citizens are disproportionally more likely to be killed by police than White US citizens. Cesario, Johnson, and Terrill (2019) estimated that the odds of being killed by police are 2.5 times higher for Black citizens than for White citizens. To my knowledge, no social scientist has disputed this statistical fact.

However, social scientists disagree about the explanation for this finding. Some argue that racial bias is at least a contributing factor to the disparity in police killings. Others deny that racial bias is a factor and point out that Black citizens are killed in proportion to their involvement in crime.

Cesario et al. write: “when adjusting for crime, we find no systematic evidence of anti-Black disparities in fatal shootings, fatal shootings of unarmed citizens, or fatal shootings involving misidentification of harmless objects” (p. 586).

They argue that criminals are more likely to encounter police and that “exposure to police accounts for the racial disparities in fatal shootings observed at the population level” (p. 591).

They also argue that the data are strong enough to rule out racial bias as a contributing factor that influences police shootings in addition to disproportionate involvement in criminal activities.

None of their tests “provided evidence of systematic anti-Black disparity. Moreover, the CDC data (as well as the evidence discussed in Online Supplemental Material #2) provide a very strong test of whether biased policing accounts for these results” (p. 591).

“When considering all fatal shootings, it is clear that systematic anti-Black disparity at the national level is not observed” (p. 591).

The authors also point out that their analyses are not conclusive, but recommend their statistical approach for future investigations of this topic.

“The current research is not the final answer to the question of race and police use of deadly force. Yet it does provide perspective on how one should test for group disparities in behavioral outcomes and on whether claims of anti-Black disparity in fatal police shootings are as certain as often portrayed in the national media” (p. 591).

Here I follow the authors’ advice and use their statistical approach to demonstrate that crime rates do not account for racial disparities in police killings. Instead, poverty is a much more likely cause of racial disparities in police killings.

Imagine a scenario where a cop stops a car on a country road for speeding. In scenario A, the car is a brand-new grey Lincoln, and the driver is neat and wearing a suit. In scenario B, the car is an old van from the 1990s, and the driver is unkempt and wearing an undershirt and dirty jeans. Which of these scenarios is more likely to end with the driver of the vehicle being killed? Importantly, I argue that it doesn’t matter whether the driver is Black, White, or Hispanic. What matters is that they fit the stereotype of a poor person, who looks more like a potential criminal.

The poverty hypothesis explains the disproportionate rate of police killings of Black people by the fact that Black US citizens are more likely to be poor, because a long history of slavery and discrimination continues to produce racial inequalities in opportunities and wealth. According to this hypothesis, the racial disparities in police killings should shrink or be eliminated when we use poverty rates rather than population proportions as a benchmark for police killings (Cesario et al., 2019).

I obtained poverty rates in the United States from the Kaiser Family Foundation website (KFF).

In absolute numbers, there are more White citizens who are poor than Black citizens. However, proportional to their representation in the population, Black citizens are 2.5 times more likely to be poor than White citizens.

These numbers imply that there are approximately 40 million Black citizens and 180 million White citizens.

Based on Cesario et al.’s (2019) statistics in Table 1, on average 255 Black citizens and 526 White citizens are killed by police in a given year.

We can now use this information to compute the odds of being killed, the odds of being poor, and the odds of being killed given being poor, assuming that police predominantly kill poor people.

First, we see again that Black citizens are about two times more likely to be killed by police than White citizens (Total OR(B/W) = 2.29). This roughly matches the ratio of Black and White poverty rates (.20/.08 = 2.5).

More importantly, the rate of getting killed by police for poor Black citizens, 3.34 out of 100,000, is similar to the rate for poor White citizens, 3.64 out of 100,000. The resulting odds ratio is close to 1 and no longer shows a racial bias for Black citizens to be killed more often by police, OR(B/W) = 0.92. In fact, there is a small bias for White citizens to be more likely to be killed. This might be explained by the fact that White US citizens are more likely to own a gun than Black citizens, and owning a gun may increase the chances of a police encounter going wrong (Gramlich, 2018).
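A minimal R sketch of this poverty benchmark, using the approximate population counts and poverty rates quoted above (small rounding differences from the rates reported in the text are expected):

black_pop <- 40e6;  white_pop <- 180e6  # approximate populations given above
pov_black <- 0.20;  pov_white <- 0.08   # poverty rates (KFF)
black_fatal <- 255; white_fatal <- 526  # killings per year (Cesario et al., 2019, Table 1)
rate_black <- black_fatal / (pov_black * black_pop) * 1e5  # ~3.2 per 100,000 poor Black citizens
rate_white <- white_fatal / (pov_white * white_pop) * 1e5  # ~3.7 per 100,000 poor White citizens
rate_black / rate_white  # ~0.9, close to the reported OR(B/W) = 0.92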

The present results provide a much more plausible account of the racial disparity in police killings than Cesario et al.’s (2019) analyses, which suggested that crime is the key factor. The crime hypothesis makes the unrealistic assumption that only criminals get killed by police. However, it is known that innocent US citizens are sometimes killed by accident in police encounters. It is also not clear how police could avoid such accidents, because they cannot always know whether they are encountering a criminal or not. In these situations of uncertainty, police officers may rely on cues that are partially valid indicators, such as race or appearance. The present results suggest that cues of poverty play a more important role than race. As a result, poor White citizens are also more likely to be killed than middle-class and well-off citizens.

Cesario et al.’s (2019) approach also produces some surprising and implausible results. For example, when using reported violent crimes, Black citizens have a higher absolute number of severe crimes (67,534 reported crimes in a year) than White citizens (29,713). Using these numbers as benchmarks for police shootings leads to the conclusion that police officers are about 5 times more likely to kill a White criminal than a Black criminal, OR(B/W) = 0.21.

According to this analysis, police should have killed 1,195 Black criminals, given that they killed 526 White criminals and that there are 2.3 times more reported violent crimes by Black citizens than by White citizens. Thus, the fact that they only killed 255 Black criminals shows that police disproportionally kill White criminals. Cesario et al. (2019) offer no explanation for this finding. They are satisfied with the fact that their analyses show no bias to kill more Black citizens.
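The counterfactual implied by the crime benchmark can be computed in two lines (a sketch using the counts quoted above):

crime_ratio <- 67534 / 29713  # ~2.3 reported violent crimes by Black vs. White citizens
526 * crime_ratio             # ~1,195 Black citizens the crime benchmark would predict to be killed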

The reason for the unexplained White bias in police killings is that it is simply wrong to use crime rates as the determinant of police shootings. Another injustice in the United States is that Black victims of crime are much less likely to receive help from the police than White victims (Washington Post). For example, the Washington Post estimated that every year 2,600 murders go without an arrest of a suspect. It is much more likely that the victim of an unsolved murder is Black (1,860) than White (740), OR(B/W) = 2.5. Thus, one reason why police officers are less likely to kill Black criminals than White criminals is that they are much less likely to arrest Black criminals who murdered a Black citizen. This means that crime rates are a poor benchmark for encounters with the police, because it is more likely that a Black criminal gets killed by another Black criminal than that he is arrested by a White police officer. This means that innocent, poor Black citizens face two injustices: they are more likely to be mistaken for a criminal and killed by police, and they do not receive help from police when they are the victim of a crime.

Conclusion

I welcome Cesario et al.’s (2019) initiative to examine the causes of racial disparities in police shootings. I also agree with them that we need to use proper benchmarks to understand these racial disparities. However, I disagree with their choice of crime statistics to benchmark police shootings. The use of crime statistics is problematic for several reasons. First, police do not always know whether they are encountering a criminal, and they sometimes shoot innocent people. The use of crime statistics does not allow for innocent victims of police shootings and makes it impossible to examine racial bias in the killing of innocent citizens. Second, crime statistics are a poor indicator of police encounters because there are racial disparities in the investigation of crimes with Black and White victims. I show that poverty is a much better benchmark and that it accounts for racial disparities in police shootings. Using poverty as a benchmark, there is only a relatively small bias for police officers to be more likely to shoot poor White citizens than poor Black citizens, and this bias may be explained by the higher rate of gun ownership among White citizens.

Implications

My new finding that poverty rather than criminality accounts for racial disparities in police shootings has important implications for public policy.

Cesario et al. (2019) suggest that their findings imply that implicit bias training will have little effect on police killings.

“This suggests that department-wide attempts at reform through programs such as implicit bias training will have little to no effect on racial disparities in deadly force, insofar as officers continue to be exposed after training to a world in which different racial groups are involved in criminal activity to different degrees” (p. 592).

This conclusion is based on their view that police only kill criminals during lawful arrests and that killings of violent criminals are an unavoidable consequence of having to arrest them.

However, the present results lead to a different conclusion. Although some killings by police are unavoidable, others can be avoided because not all victims of police shootings are violent criminals. The new insight is that the bias is not limited to Black people, but also extends to poor White people. I see no reason why better training could not reduce the number of killings of poor Americans.

The public debate about police killings also ignores other ways to reduce them. The main reason for the high prevalence of police killings in the United States is the country’s gun laws. This will not change any time soon. Thus, all citizens of the United States, even those who do not own guns, need to be aware that many US citizens are armed. A police officer who makes 20 traffic stops a day is likely to encounter at least five drivers who own a gun, and maybe a couple of drivers who have a gun in their car. Anybody who encounters a police officer needs to understand that the officer has to assume they might have a gun on them. This means citizens need to be trained in how to signal to a police officer that they do not have a gun and pose no other threat to the officer’s life. Innocent until proven guilty applies in court, but it does not apply when police encounter citizens. You are a potential suspect until officers can be sure that you are not a threat to them. This is the price US citizens pay for the right to bear arms. Even if you do not exercise this right, it is your right, and you have to pay the price for it. Every year, 50 police officers get killed. Every day they take a risk when they put on their uniform to do their job. Help them to do their job and make sure that both you and they walk away safe and sound from the encounter. It is unfair that poor US citizens have to work harder to convince the police that they are not a threat to their lives; better communication, contact, and training can help to make encounters between police and civilians better and safer.

In conclusion, my analysis of police shootings shows that racial bias in police shootings is a symptom of a greater bias against poor people. Unlike race, poverty is not genetically determined. Social reforms can reduce poverty and the stigma of poverty, and sensitivity training can be used to avoid the killing of innocent poor people by police.

Police Shootings and Race in the United States

The goal of the social sciences and social psychology is to understand human behavior in the real world. Experimental social psychologists use laboratory experiments to study human behavior. The problem is that some human behaviors cannot be studied in the laboratory for ethical or practical reasons. Police shootings are one of them. In this case, social scientists have to rely on observations of these behaviors in the real world. The problem is that it is much harder to draw causal inferences from these studies than from laboratory experiments.

A team of social psychologists examined whether police shootings in the United States are racially biased (i.e., whether victims of police shootings are disproportionally likely to be Black or Hispanic rather than White). This is an important political issue in the United States. The abstract of their article states their findings.

The abstract starts with a seemingly clear question: “Is there evidence of a Black-White disparity in death by police gunfire in the United States?” However, even this question is not clear, because it is not clear what we mean by disparity. Disparity can mean “a lack of equality,” or “a lack of equality that is unfair” (Cambridge Dictionary).

There is no doubt that Black citizens of the United States are more likely to be killed by police gunfire than White citizens. The authors themselves confirmed this in their analysis. They find that the odds of being killed by police are three times higher for Black citizens than for White citizens.

The statistical relationship implies that race is a contributing causal factor to being killed by police. However, the statistical finding does not tell us why or how race influences police shootings. In psychological research this question is often framed as a search for mediators; that is, intervening variables that are related to race and to police shootings.

In the public debate about race and police shooting, two mediating factors are discussed. One potential mediator is racial bias that makes it more likely for a police officer to kill a Black suspect than a White suspect. Cases like the killing of Tamir Rice or Philando Castile are used as examples of innocent Black citizens being killed under circumstances that may have led to a different outcome if they had been White. Others argue that tragic accidents also happen with White suspects and that these cases are too rare to draw scientific conclusions about racial bias in police shootings.

Another potential mediator is that there is also a disparity between Black and White US citizens in violent crimes. This is the argument put forward by the authors.

When adjusting for crime, we find no systematic evidence of anti-Black disparities in fatal shootings, fatal shootings of unarmed citizens, or fatal shootings involving misidentification of harmless objects.

This statement implies that the authors conducted a mediation analysis, which uses statistical adjustment for a potential mediator to examine whether a mediator explains the relationship between two other variables.

In this case, racial differences in crime rates are the mediator, and the claim is that once we take into account that Black citizens are more involved in crime, and that involvement in crime increases the risk of being killed by police, there are no additional racial disparities. If a potential mediator fully explains the relationship between two variables, we do not need to look for additional factors that may explain the racial disparity in police shootings.

Readers may be forgiven if they interpret the conclusion in the abstract as stating exactly that.

Exposure to police given crime rate differences likely accounts for the higher per capita rate of fatal police shootings for Blacks, at least when analyzing all shootings.

The problem with this article is that the authors do not examine the question that they state in the abstract. Instead, they conduct a number of hypothetical analyses that start with the premise that police officers only kill criminals. They then examine racial bias in police shootings under this assumption.

For example, in Table 1 they report that the NIBRS database recorded 135,068 severe violent crimes by Black suspects and 59,426 severe violent crimes by White suspects in the years 2015 and 2016. In the same years, 475 Black citizens and 1,168 White citizens were killed by police. If we assume that all of the individuals killed by police were suspected of a violent crime recorded in the NIBRS database, we see that White suspects are much more likely to be killed by police (1,168 / 59,426 = 197 out of 10,000) than Black suspects (475 / 135,068 = 35 out of 10,000). The odds ratio is 5.59, which means that for every Black suspect, police kill over 5 White suspects. This is shown in Figure 1 of the article as the most extreme bias against White criminals. However, most other crime statistics also lead to the conclusion that White criminals are more likely to be shot by police than Black criminals.
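The arithmetic behind this hypothetical benchmark in a few lines of R (using the counts just quoted):

black_crimes <- 135068; white_crimes <- 59426  # NIBRS severe violent crimes, 2015-2016
black_fatal  <- 475;    white_fatal  <- 1168   # fatal police shootings, 2015-2016
white_fatal / white_crimes * 1e4  # ~197 per 10,000
black_fatal / black_crimes * 1e4  # ~35 per 10,000
(white_fatal / white_crimes) / (black_fatal / black_crimes)  # ~5.6, odds ratio (White:Black)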

This is a surprising finding, to say the least. While we started with the question of why police officers in the United States are more likely to kill Black citizens than White citizens, we end with the conclusion that police officers only kill criminals and are more likely to kill White criminals than Black criminals. I hope I am not alone in noticing a logical inconsistency. If police do not shoot innocent citizens and they shoot more White criminals than Black criminals, we should see that White US citizens are killed more often by police than Black citizens. But that is not the case. We started our investigation with the question of why Black citizens are killed more often by police than White citizens. The authors’ statistical analysis does not answer this question. Their calculations are purely hypothetical, and their conclusions suggest only that their assumptions are wrong.

The missing piece is information about the contribution of crime to the probability of being killed by police. Without this information, it is simply impossible to examine to what extent racial differences in crime contribute to racial disparities in police shootings. It is therefore also impossible to say anything about other factors, such as racial bias, that may contribute to racial disparities in police shootings. This means that the article makes no empirical contribution to the understanding of racial disparities in police shootings.

The fundamental problem of the article is that the authors think they can simply substitute populations. Rather than examining killings in the population of citizens, on which the statistic is based, they replace it with another population, the population of criminals. But the death counts apply to the population of citizens, not to the population of criminals.

In this article, we approached the question of racial disparities in deadly force by starting with the widely used technique of benchmarking fatal shooting data on population proportions. We questioned the assumptions underlying this analysis and instead proposed a set of more appropriate benchmarks given a more complete understanding of the context of police shootings.

The authors talk about benchmarking and discuss the pros and cons of different benchmarks. However, the notion of a benchmark is misleading. We have a statistic about the number of police killings in the population of the United States. This is not a benchmark; it is a population. In this population, Black citizens are disproportionately more likely to be killed by police. That is a fact. It is also a fact that, in the population of US citizens, more crimes are committed by Black citizens (the reasons for this are another topic that is beyond this criticism of the article). Again, this is not a benchmark; it is a population statistic. The authors now use the incident rates of crime to ask how many Black or White criminals are shot by police. However, the population statistics do not provide that information. We could also use other statistics that lead to different conclusions. For example, White US citizens own disproportionately more guns than Black citizens. If we used gun ownership to “benchmark” police shootings, we would see a bias to shoot more Black gun owners than White gun owners. But we do not really see that in the data, because we have no information about the death rates of gun owners, just as the article provides no information about the death rates of criminals and innocent citizens. Thus, the fundamental flaw of the article is the idea that we can simply take two population statistics and compute conditional probabilities from them. This is simply not possible.
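A small sketch makes the identification problem explicit (all numbers are hypothetical): two population margins are compatible with many different joint distributions, and each joint distribution implies different conditional probabilities.

```python
# Hypothetical margins: a crime statistic and a shooting statistic.
population = 1_000_000
criminals  = 10_000    # margin 1: number of criminals
killed     = 100       # margin 2: number killed by police

# The margins stay fixed, but P(killed | criminal) depends on the
# unknown overlap: how many of those killed were actually criminals.
for overlap in (50, 95, 100):
    p_crim = overlap / criminals
    p_inno = (killed - overlap) / (population - criminals)
    print(f"overlap {overlap:>3}: P(killed|criminal) = {p_crim:.4f}, "
          f"P(killed|innocent) = {p_inno:.6f}")
```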

The authors caution readers that their results are not conclusive: “The current research is not the final answer to the question of race and police use of deadly force.” In fact, the results presented in this article do not even begin to address the question. The data simply provide no information about the causal factors that produce racial inequality in police shootings.

The authors then contradict themselves and reach a strong and false conclusion.

Yet it does provide perspective on how one should test for group disparities in behavioral outcomes and on whether claims of anti-Black disparity in fatal police shootings are as certain as often portrayed in the national media. When considering all fatal shootings, it is clear that systematic anti-Black disparity at the national level is not observed.

They are wrong on two counts. First, their analysis is statistically flawed and leads to internally inconsistent results: police only kill criminals and are more likely to kill White criminals, which cannot explain why we see more Black victims of police shootings. Second, even if their study had shown that there is no evidence of racial inequality, we could not infer that racial biases do not exist. Absence of evidence is not the same as evidence of absence. Cases like the tragic death of Tamir Rice may be rare, and they may be too rare to be picked up in a statistic, but that doesn't mean they should be ignored.

The rest of the discussion section reflects the authors' personal views more than anything that can be learned from the results of this study. For example, the claim that better training will produce no notable improvements is pure speculation and ignores a literature on training in the use of force and its benefits for all citizens. The key to police training in shooting situations is to teach officers to focus on relevant cues (e.g., weapons) and to ignore irrelevant factors such as race. Better training can reduce killings of Black and White citizens.

This suggests that department-wide attempts at reform through programs such as implicit bias training will have little to no effect on racial disparities in deadly force, insofar as officers continue to be exposed after training to a world in which different racial groups are involved in criminal activity.

It is totally misleading to support this claim with trivial intervention studies with students.

This assessment is consistent with other evidence that the effects of such interventions are short lived (e.g., Lai, 2017).

And once more the authors attribute racial differences in police shootings to crime rates, ignoring that the influence of crime rates on shootings is their own assumption, not an empirical finding supported by their statistical analyses.

Note that this analysis does not blame unarmed individuals shot by police for their own behavior. Instead, it highlights the difficulty of eliminating errors under conditions of uncertainty when stereotypes may bias the decision-making process. This difficulty is amplified when the stereotype accurately reflects the conditional probabilities of crime across different racial groups.

As in many articles, the limitation section is not really a limitation section: the authors mention limitations but pretend that they do not undermine their conclusions.

One potential flaw is if discretionary stops by police lead to a higher likelihood of being shot in a way not captured by our crime report data sets. If officers are more likely to stop and frisk a Black citizen, for example, then officers might be more likely to enter into a deadly force situation with Black citizens independent of any actual crime rate differences across races. Online Supplemental Material #5 presents some indirect data relevant to this possibility. Here, we simply note that the number of police shootings that start with truly discretionary stops of citizens who have not violated the law is low (~5%) and probably do not meaningfully impact the analyses.

There are about 1,000 police killings a year in the United States. If 5% of police killings started without any violation of the law, this means that 50 people are killed by mistake every year. This may not be a meaningful number to statisticians for their data analysis, but it is a meaningful number for the victims and their families. In no other Western country are citizens killed by their police in such numbers.

The final conclusion shows that the article lacks any substantial contribution.

Conclusion
At the national level, we find little evidence within these data for systematic anti-Black disparity in fatal police deadly force decisions. We do not discount the role race may play in individual police shootings; yet to draw on bias as the sole reason for population-level disparities is unfounded when considering the benchmarks presented here. We hope this research demonstrates the importance of unpacking the underlying assumptions inherent to using benchmarks to test for outcome disparities.

The authors continue their misguided argument that we should use crime rates rather than population statistics to examine racial bias. Once more, this is nonsense. It is a fact that Black citizens are more likely to be killed by police than White citizens. It is worthwhile to examine which causal factors contribute to this relationship, but the authors' approach cannot answer this question because they lack information about the contribution of crime rates to police shootings.

The statement that their study shows that racial bias of police officers is not the only reason is trivial and misleading. The authors imply that crime rates alone explain the racial disparity and even come to the conclusion that police are more likely to kill White suspects. In reality, crime rates and racial biases are both likely to be factors, but we need proper data to tease those factors apart, and this article does not provide them.

I am sure that the authors truly believe that they made a valuable scientific contribution to an important social issue. However, I also strongly believe that they failed to do so. They start with the question “Is there evidence of a Black-White disparity in death by police gunfire in the United States?” The answer to their question is an unequivocal yes. The relevant statistics are the odds of being killed by police for Black and White US citizens, and these statistics show that Black citizens are at greater risk of being killed by police than White citizens. The next question is why this disparity exists. There will be no simple and easy answer to this question. This article suggests that a simple answer is that Black citizens are more likely to be criminals. This answer is not only too simple; it is also not supported by the authors' statistical analysis.

Scientists are human, and humans make mistakes. So, it is understandable that the authors made some mistakes in their reasoning. However, articles published in scientific journals are vetted by peer review, and the authors thank several scientists for helpful comments. So, several social scientists were unable to see that the statistical analyses are flawed, even though they produced the stunning result that police officers are five times more likely to kill White criminals than Black criminals. Nobody seemed to notice that this doesn't make any sense. I hope that the editor of the journal and the authors carefully examine my criticism of this article and take appropriate steps if my criticism is valid.

I also hope that other social scientists examine this issue and add to the debate. Thanks to the internet, science is now more open, and we can use open discussion to fix mistakes in scientific articles much faster. Maybe the mistake is on my part. Maybe I am not understanding the authors' analyses properly. I am also not a neutral observer living on planet Mars. I am married to an African American woman with an African American daughter, and my son is half South Asian. I care about their safety, and I am concerned about racial bias. Fortunately, I live in Canada, where police kill fewer citizens.

I welcome efforts to tackle these issues using data and the scientific method, but every scientific result needs to be scrutinized even after it has passed peer review. Just because something is published in a peer-reviewed journal doesn't make it true. So, I invite everybody to comment on this article and my response. Together we should be able to figure out whether the authors' statistical approach is valid or not.

Open Communication about the invalidity of the race IAT

In the old days, most scientific communication occurred behind closed doors, where anonymous peer reviews determined the fate of manuscripts. In the old days, rejected manuscripts could not contribute to scientific communication because nobody knew about them.

All of this has changed with the birth of open science. Now authors can share manuscripts on preprint servers, and researchers can discuss the merits of these manuscripts on social media. The benefit of this open scientific communication is that more people can join in and contribute to the conversation.

Yoav Bar-Anan co-authored an article with Brian Nosek titled “Scientific Utopia: I. Opening Scientific Communication.” In this spirit of openness, I would like to have an open scientific communication with Yoav and his co-author Michelangelo Vianello about their 2018 article “A Multi-Method Multi-Trait Test of the Dual-Attitude Perspective.”

I have criticized their model in an in-press article in Perspectives on Psychological Science (Schimmack, 2019). In a commentary, Yoav and Michelangelo argue that their model is “compatible with the logic of an MTMM investigation” (Campbell & Fiske, 1959). They argue that it is important to have multiple traits to identify method variance in a matrix with multiple measures of multiple traits, and they propose that I lost the ability to identify method variance by examining one attitude (i.e., race, self-esteem, political orientation) at a time. They also point out that I did not include all measures, that I used the Modern Racism Scale as an indicator of political orientation, and that I did not provide a reason for these choices. While this is true, Yoav and Michelangelo had access to the data and could have tested whether these choices made any difference. They did not. This is obvious for the Modern Racism Scale, which can be eliminated from the measurement model without any changes in the overall model.

To cut to the chase, the main source of disagreement is the modelling of method variance in the multi-trait-multi-method data set. The issue is clear when we examine the original model published in Bar-Anan and Vianello (2018).

In this model, method variance in IATs and related tasks like the Brief IAT is modelled with the INDIRECT METHOD factor. The model assumes that all of the method variance in implicit measures is shared across attitude domains and across all implicit measures. The only way for this model to allow for different amounts of method variance in different implicit measures is by assigning different loadings to the various methods. Moreover, the loadings provide information about the nature of the shared variance and the amount of method variance in each method. Although this is valuable and important information, the authors never discuss it or its implications.

Many of these loadings are very small. For example, the loadings of the race IAT and the brief race IAT are .11 and .02, respectively. In other words, the correlation between these two measures is inflated by .11 * .02 = .0022 points. This means that the correlation of r = .52 between these two measures is r = .5178 after we remove the influence of method variance.
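The correction follows from the standard path-tracing rule: the part of a correlation due to a shared factor is the product of the two loadings on that factor. A minimal sketch:

```python
# Method-variance inflation under the model in Figure 1.
loading_race_iat  = 0.11   # race IAT on the INDIRECT METHOD factor
loading_brief_iat = 0.02   # brief race IAT on the same factor

inflation = loading_race_iat * loading_brief_iat     # 0.0022
observed_r = 0.52
print(f"corrected r = {observed_r - inflation:.4f}")  # 0.5178
```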

It makes absolutely no sense to accuse me of separating the models when there is no evidence of implicit method variance that is shared across attitudes. The remaining parameter estimates are not affected if a factor with loadings this low is removed from a model.

Here I show that examining one attitude at a time produces exactly the same results as the full model. I focus on the most controversial IAT, the race IAT. After all, there is general agreement that there is little evidence of discriminant validity for political orientation (r = .91 in the Figure above), and there is little evidence for any validity of the self-esteem IAT based on several other investigations of this topic with a multi-method approach (Bosson et al., 2000; Falk et al., 2015).

Model 1 is based on Yoav and Michelangelo's model, which assumes that there is practically no method variance in IAT variants. Thus, we can fit a simple dual-attitude model to the data. In this model, contact is regressed onto the implicit and explicit attitude factors to see the unique contribution of the two factors without making causal assumptions. The model has acceptable fit, CFI = .952, RMSEA = .013.
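Readers who want to reproduce this kind of model can do so with standard SEM software. Below is a rough sketch using the semopy package in Python with lavaan-style syntax; the variable and file names are placeholders, not the actual names in Bar-Anan and Vianello's open data, and the exact specification would have to follow their published model.

```python
import pandas as pd
import semopy

# Placeholder indicator names for the racial-attitude measures.
model_desc = """
implicit =~ race_iat + brief_iat + amp + evaluative_priming
explicit =~ thermometer + preference + speeded_report
contact  ~  implicit + explicit
"""

data = pd.read_csv("bar_anan_vianello_2018.csv")  # hypothetical file
model = semopy.Model(model_desc)
model.fit(data)
print(semopy.calc_stats(model)[["CFI", "RMSEA", "chi2"]])
```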

The correlation between the two factors is .66, compared to r = .69 in the full model in Figure 1. The loading of the race IAT on the implicit factor is .66, compared to .62 in the full model in Figure 1. Thus, as expected given the low loadings on the INDIRECT METHOD factor, the results are no different when the model is fitted only to the measures of racial attitudes.

Model 2 makes the assumption that IAT variants share method variance. Adding the method factor to the model increased model fit, CFI = .973, RMSEA = .010. As the models are nested, it is also possible to compare model fit with a chi-square test. With a difference of five degrees of freedom, chi-square decreased from 167.19 to 112.32. Thus, the model comparison favours the model with a method factor.
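The difference test itself is easy to verify; a quick sketch using scipy:

```python
from scipy.stats import chi2

# Chi-square difference test for the two nested models above.
chi2_without_method, chi2_with_method = 167.19, 112.32
df_diff = 5

delta = chi2_without_method - chi2_with_method   # 54.87
print(f"delta chi2 = {delta:.2f}, p = {chi2.sf(delta, df_diff):.2g}")
```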

The main difference between the models is that the evidence is less supportive of a dual-attitude model and that the amount of valid variance in the race IAT decreases from .66^2 ≈ 43% to .47^2 ≈ 22%.

In sum, the 2018 article made strong claims about the race IAT. These claims were based on a model that implied that there is no systematic measurement error in IAT scores. I showed that this assumption is false and that a model with a method factor for IATs and IAT variants fits the data better than a model without such a factor. It also makes no theoretical sense to postulate that there is no systematic method variance in IATs when several previous studies have demonstrated that attitudes are only one source of variance in IAT scores (Klauer, Voss, Schmitz, & Teige-Mocigemba, 2007).

How is it possible that the race IAT and other IATs are widely used in psychological research and on public websites to provide individuals with false feedback about their hidden attitudes, without any evidence of their validity as individual-difference measures of hidden attitudes that influence behaviour outside of awareness?

The answer is that most of these studies assumed that the IAT is valid rather than testing its validity. Another reason is that psychological research focuses on providing evidence that confirms theories rather than subjecting theories to empirical tests that they may fail. Finally, psychologists ignore effect sizes. As a result, the finding that IAT scores have incremental predictive validity of less than 4% of the variance in a criterion is celebrated as evidence for the validity of IATs, but even this small estimate is based on underpowered studies and may shrink in replication studies (cf. Kurdi et al., 2019).

It is understandable that proponents of the IAT respond with defiant defensiveness to my critique of the IAT. However, I am not the first to question the validity of the IAT; earlier criticisms were simply ignored. At least Banaji and Greenwald recognized in 2013 that they do “not have the luxury of believing that what appears true and valid now will always appear so” (p. xv). It is time to face the facts. It may be painful to accept that the IAT is not what it was promised to be 21 years ago, but that is what the current evidence suggests. There is nothing wrong with my models or their interpretation, and it is time to tell visitors of the Project Implicit website that they should not attach any meaning to their IAT scores. A more productive way to counter my criticism of the IAT would be to conduct a proper validation study with multiple methods and validation criteria that are predicted to be uniquely related to IAT scores in a preregistered study.

References

Bosson, J. K., Swann, W. B., Jr., & Pennebaker, J. W. (2000). Stalking the perfect measure of implicit self-esteem: The blind men and the elephant revisited? Journal of Personality and Social Psychology, 79, 631–643.

Falk, C. F., Heine, S. J., Takemura, K., Zhang, C. X., & Hsu, C. (2015). Are implicit self-esteem measures valid for assessing individual and cultural differences? Journal of Personality, 83, 56–68. doi:10.1111/jopy.12082

Klauer, K. C., Voss, A., Schmitz, F., & Teige-Mocigemba, S. (2007). Process components of the Implicit Association Test: A diffusion-model analysis. Journal of Personality and Social Psychology, 93, 353–368.

Kurdi, B., Seitchik, A. E., Axt, J. R., Carroll, T. J., Karapetyan, A., Kaushik, N., . . . Banaji, M. R. (2019). Relationship between the Implicit Association Test and intergroup behavior: A meta-analysis. American Psychologist, 74, 569–586.

The Diminishing Utility of Replication Studies In Social Psychology

Dorothy Bishop writes on her blog:

“As was evident from my questions after the talk, I was less enthused by the idea of doing a large replication of Daryl Bem’s studies on extra-sensory perception. Zoltán Kekecs and his team have put in a huge amount of work to ensure that this study meets the highest standards of rigour, and it is a model of collaborative planning, ensuring input into the research questions and design from those with very different prior beliefs. I just wondered what the point was. If you want to put in all that time, money and effort, wouldn’t it be better to investigate a hypothesis about something that doesn’t contradict the laws of physics?”


I think she makes a valid and important point. Bem’s (2011) article highlighted everything that was wrong with research practices in social psychology. Other articles in JPSP are equally incredible, but this was ignored because naive readers found the claims more plausible (e.g., blood glucose is the energy for willpower). We now know that none of these published results provide empirical evidence because the results were obtained with questionable research practices (Schimmack, 2014; Schimmack, 2018). It is also clear that these were not isolated incidents: hiding results that do not support a theory was (and still is) a common practice in social psychology (John et al., 2012; Schimmack, 2019).

A large attempt at estimating the replicability of social psychology revealed that only 25% of published significant results could be replicated (OSC). The rate for between-subject experiments was even lower. Thus, the a priori probability (base rate) that a randomly drawn study from social psychology will produce a significant result in a replication attempt is well below 50%. In other words, a replication failure is the more likely outcome.

The low success rate of these replication studies was a shock. However, it is sometimes falsely implied that the low replicability of results in social psychology was not recognized earlier because nobody conducted replication studies. This is simply wrong. In fact, social psychology is one of the disciplines in psychology that required researchers to conduct multiple studies showing the same effect to ensure that a result was not a false positive. Bem had to present nine studies with significant results to publish his crazy claims about extrasensory perception (Schimmack, 2012). Most of the studies that failed to replicate in the OSC replication project were taken from multiple-study articles that reported several successful demonstrations of an effect. Thus, the problem in social psychology was not that nobody conducted replication studies. The problem was that social psychologists only reported the replication studies that were successful.

A proper analysis of the problem also suggests a different solution. If we pretend that nobody did replication studies, it may seem useful to start doing replication studies. However, if social psychologists conducted replication studies but did not report replication failures, the solution is simply to demand that social psychologists report all of their results honestly. This demand is so obvious that undergraduate students are surprised when I tell them that this is not the way social psychologists conduct their research.

In sum, it has become apparent that questionable research practices undermine the credibility of the empirical results in social psychology journals, and that the majority of published results cannot be replicated. Thus, social psychology lacks a solid empirical foundation.

What Next?

Information theory implies that little is gained by conducting actual replication studies in social psychology, because a failure to replicate the original result is the likely and therefore unsurprising outcome. In fact, social psychologists have responded to replication failures by claiming that these studies were poorly conducted and do not invalidate the original claims. Thus, replication studies are costly and have not advanced theory development in social psychology. More replication studies are unlikely to change this.
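The information-theoretic point can be quantified with a back-of-the-envelope calculation; this sketch uses the 25% replication rate from the OSC project as the base rate:

```python
import math

# Shannon surprisal of each outcome of a single replication study,
# given a 25% base rate of successful replication.
p = 0.25
bits_fail    = -math.log2(1 - p)   # ~0.42 bits: the expected outcome
bits_success = -math.log2(p)       # 2.00 bits: the rarer outcome
entropy = p * bits_success + (1 - p) * bits_fail   # ~0.81 bits
print(f"failure: {bits_fail:.2f}, success: {bits_success:.2f}, "
      f"expected: {entropy:.2f} bits")
```

A replication failure, the most likely outcome, carries less than half a bit of information.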

A better solution to the replication crisis in social psychology is to characterize research in social psychology, from Festinger’s classic small-sample, between-subject study in 1957 to research in 2017, as exploratory, hypothesis-generating research. As Bem suggested to his colleagues, this was a period of adventure and exploration where it was ok to “err on the side of discovery” (i.e., publish false positive results, like Bem’s precognition for erotica). Lots of interesting discoveries were made during this period; it is just not clear which of these findings can be replicated and what they tell us about social behavior.

Thus, new studies in social psychology should not try to replicate old studies. For example, nobody should try to replicate Devine’s subliminal priming study with racial primes using computers and software from the 1980s (Devine, 1989). Instead, prominent theoretical predictions should be tested with the best research methods that are currently available. The way forward is not to do more replication studies, but to use open science (a.k.a. honest science) that uses experiments to subject theories to empirical tests that may also falsify them (e.g., subliminal racial stimuli have no influence on behavior). The main shift that is required is to get away from research that can only confirm theories and to allow empirical data to falsify theories.

This was exactly the intent of Danny Kahneman’s letter, in which he challenged social priming researchers to respond to criticism of their work by going into their labs and demonstrating that these effects can be replicated across many labs.

Kahneman made it clear that the onus of replication is on the original researchers who want others to believe their claims. The response to this letter speaks volumes. Not only did social psychologists fail to provide new and credible evidence that their results can be replicated, they also demonstrated defiant denial in the face of replication failures by others. The defiant denial by prominent social psychologists (e.g., Baumeister, 2019) makes it clear that they will not be convinced by empirical evidence, while others who can look at the evidence objectively do not need more evidence to realize that the social psychological literature is a train wreck (Schimmack, 2017; Kahneman, 2017). Thus, I suggest that young social psychologists search the train wreck for survivors, but do not waste their time and resources on replication studies that are likely to fail.

A simple guide through the wreckage of social psychology is to distrust any significant result with a p-value greater than .01 (Schimmack, 2019). Prediction markets also suggest that readers are able to distinguish credible from incredible results (Atlantic). Thus, I recommend building on studies that are credible and staying clear of sexy findings that are unlikely to replicate. As Danny Kahneman pointed out, young social psychologists who work in questionable areas face a dilemma: either they replicate the questionable methods that were used to get the original results, which is increasingly considered unethical, or they end up with results that are not very informative. On the positive side, the replication crisis implies that there are many important topics in social psychology that still need to be studied properly with the scientific method. Addressing these important questions may be the best way to rescue social psychology.