Open Science in Psychology
What is open science? Isn’t open science a tautology, like “new innovation”? If there is open science, what is closed science? The need for open science arises from the fact that many academic practices are unscientific: they benefit academics without advancing science, or even while hurting it. For example, conducting experiments and not reporting the results when they do not show a favorable outcome is a common academic practice that most people would recognize as undermining science. In psychology, this practice is widespread and explains why psychology journals report success rates over 90% (Sterling et al., 1995). Aside from simply not publishing unfavorable results, academics also use a number of questionable statistical practices to turn failures into successes (John et al., 2012). These practices are well known and tolerated among academics, who understand the pressure to publish, while the general public sees only the published outcomes and knows little about the incentives facing individual researchers.
Open science is, at its core, the vision of a utopia in which academic work produces scientific progress and incentive structures reward honest attempts to advance science rather than invalid indicators like publication and citation counts, which can be gamed and can waste millions of dollars without any real progress.
In psychology, Brian Nosek spearheaded the Open Science movement and co-founded the Center for Open Science, which hosts the Open Science Framework (OSF). He also wrote several influential articles to promote Open Science practices in psychology (e.g., Nosek & Bar-Anan, 2012; Nosek, Spies, & Motyl, 2012).
These articles laid out a comprehensive vision to reform unscientific and counterproductive practices and incentive structures in psychology. Key elements focused on (a) aligning incentives so that truth-seeking wins over career advancement, (b) restructuring the unit of research itself from small teams to distributed collaborations, and (c) promoting a culture of transparency, openness to criticism, and willingness to find out that you were wrong.
The Open Science movement has changed psychology in ways that nobody in 2010 could have imagined. Helped by empirical evidence that many results in Brian Nosek’s own field of social psychology could not be replicated (a replication rate of only 25% for social psychology in the Reproducibility Project; Open Science Collaboration, 2015), journals now often demand assurances that results are reported honestly and reward practices that limit researchers’ ability to change hypotheses or results when the original results are disappointing.
However, in other ways, progress has been limited. The main problem is that open admission of mistakes is still rare, and researchers fear that any admission of mistakes harms their reputation. Thus, the incentive structure continues to reward promoting false claims. This problem is exacerbated by psychological mechanisms that have been documented in psychological research for decades and are highly robust. Motivated biases make it easier for people to see mistakes in others’ work than in their own work. The Bible calls this “seeing a splinter in others’ eyes, but missing the beam in one’s own eye.” The Nobel laureate Richard Feynman warned fellow scientists, “The first principle is that you must not fool yourself — and you are the easiest person to fool.”
Motivated Blindness
Ironically, Brian Nosek’s own work on the IAT provides an example of motivated blindness. The knowledge and intelligence that helped him spot the problems in colleagues’ underpowered studies that do not replicate do not help him see the problems in his own work on implicit biases. The Implicit Association Test (IAT) was originally invented by Anthony Greenwald; Brian Nosek helped to promote it as a measure of associations that are sometimes called implicit, automatic, or unconscious. The IAT is a reaction time task, but modern technology made it possible to administer it on a website, hosted by Project Implicit and backed by Harvard University.
The IAT was never validated to the psychometric standards required for individual assessment. In practice, it functions like a distorting mirror — reflecting back what people largely already know about their attitudes, buried under substantial measurement error. If it were presented that way, no one would object, and no one would need a warning. But Project Implicit does not present it that way. Instead, visitors are warned that the test may reveal something undesirable about themselves. That warning only makes sense if the results are trustworthy. A distorting mirror does not come with a warning — it comes with a laugh. By framing the IAT as capable of revealing uncomfortable truths, Project Implicit treats an unvalidated research tool as a diagnostic instrument.
The problem is that even in 2024, Brian Nosek is still unable to openly admit that the IAT does not measure implicit biases (reference) and that his own studies, which convinced him the IAT is valid, were flawed. For example, in one study he claimed that a weak correlation of r = .2 between racial bias on the IAT and self-reported racial attitudes demonstrated convergent validity (reference). This is false. A correlation of r = .5 between self-reported height and self-reported weight does not validate either measure — it simply shows that two different constructs share a common method. Convergent validity requires measuring the same construct with different methods, not different constructs with the same method. When the IAT is compared to other implicit measures, the correlations are equally weak and, more importantly, no higher than the correlations with self-report measures (Schimmack, 2021). The IAT therefore provides no evidence that it reveals something about individuals that they do not already know. If somebody is biased against a particular group, they know it. The IAT does not uncover hidden biases — it merely repackages what people can already report about themselves.
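The shared-method argument can be illustrated with a small simulation (my own sketch, not part of any cited analysis; all loadings are hypothetical). Two self-report measures of distinct constructs correlate partly through the constructs and partly through the method they share, so the correlation between them says little about the validity of either measure:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two distinct latent constructs (e.g., height and weight), correlated r = .4.
height = rng.normal(size=n)
weight = 0.4 * height + np.sqrt(1 - 0.4**2) * rng.normal(size=n)

# A method factor shared by both self-reports (e.g., response style).
method = rng.normal(size=n)

def self_report(construct, loading=0.7, method_loading=0.5):
    # Standardized measure: loading^2 + method_loading^2 + residual variance = 1.
    resid = np.sqrt(1 - loading**2 - method_loading**2)
    return loading * construct + method_loading * method + resid * rng.normal(size=n)

sr_height = self_report(height)
sr_weight = self_report(weight)

# Expected correlation: .7 * .4 * .7 (construct overlap) + .5 * .5 (shared method) = .446
r = np.corrcoef(sr_height, sr_weight)[0, 1]
print(round(r, 2))
```

In this hypothetical setup, roughly half of the observed correlation comes from the shared self-report method rather than from the constructs; only a multimethod design can separate the two sources.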
While Brian Nosek is no longer actively involved in IAT research, he is still associated with Project Implicit and has made no attempt to correct the misinformation about the IAT given to visitors of the website, which even administers mental-health IATs without demonstrated validity. Moreover, his students continue to publish misleading articles that make false claims about the IAT. These articles appear in journals that claim to promote open science but do not allow open criticism of statistical errors in their publications.
The article “On the Relationship Between Indirect Measures of Black Versus White Racial Attitudes and Discriminatory Outcomes: An Adversarial Collaboration Using a Sample of White Americans” by Axt et al. (2026) seems to meet the latest standards of open science. The research team is diverse, with different opinions about the IAT. Hypotheses were preregistered with a clear criterion for claiming validity. Brian Nosek was not a collaborator, but he strongly endorsed the article on social media as a poster child of open science practices.

Yet the paper has a major limitation. It completely ignored the criticism that earlier structural equation modeling studies failed to take shared method variance into account (Schimmack, 2021), and it made the same mistake again. By including two IATs, the published model treated all shared variance between the two IATs as valid variance, ignoring the well-known evidence that IAT scores are also influenced by factors unrelated to the associations being measured. The authors could have avoided this mistake because they inspected modification indices, which show problems with a theoretically specified model. They used these modification indices to adjust the measurement model for the self-ratings, but not for the two IATs.
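The consequence of treating all shared variance between two IATs as valid can be shown with a minimal simulation (my own sketch with hypothetical loadings, not the model actually fitted by Axt et al.). When both IATs share method variance, the correlation between them mixes attitude variance and method variance, and a latent factor defined only by the two IATs cannot tell them apart:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

attitude = rng.normal(size=n)    # the construct both IATs are supposed to measure
iat_method = rng.normal(size=n)  # method variance shared by the two IATs

# Hypothetical loadings: modest valid variance, larger shared method variance.
l_valid, l_method = 0.3, 0.6
resid = np.sqrt(1 - l_valid**2 - l_method**2)
iat1 = l_valid * attitude + l_method * iat_method + resid * rng.normal(size=n)
iat2 = l_valid * attitude + l_method * iat_method + resid * rng.normal(size=n)

# The inter-IAT correlation combines both sources of shared variance.
r_iats = np.corrcoef(iat1, iat2)[0, 1]
print(round(r_iats, 2))      # close to .3^2 + .6^2 = .45
print(round(l_valid**2, 2))  # only .09 of the variance is due to the attitude itself
```

In this setup, a model that treats the full inter-IAT covariance as valid variance overstates the attitude variance by a factor of five; separating the two components requires at least one additional, non-IAT indicator of the attitude.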
This mistake itself is not the main problem. Even a large team of scientists can make mistakes, especially if they are not trained in psychometrics and are working with measurement models. The real problem is that the editor of the journal that published the article is unwilling to correct it (Schimmack, 2026). This decision does not meet Open Science standards of open admission of mistakes, or even engagement with criticism. Open science requires open discussion and responding to scientific criticism. I emailed Dr. Axt on December 2nd about my concerns and my reanalysis of his data, but did not receive a response. This silence highlights how far we still have to go before we reach Brian Nosek’s utopia of open criticism and open admission of mistakes. Marketing the IAT as a “window into the unconscious” (Banaji and Greenwald’s, 2013, words, not mine) was a mistake, but Greenwald, Banaji, and Nosek have yet to admit this openly. Instead, Project Implicit continues to give people invalid feedback, and Harvard does not care. This is not Open Science. This is naked self-interest to preserve a reputation that was earned with the false promise of addressing racial bias in the United States of America.
Why Do I Care?
After cognitive performance tests, the IAT is arguably the most influential psychological test. Implicit bias was a major topic during the 2016 presidential campaign. Hillary Clinton made implicit bias a campaign issue, claiming that many Americans still harbor implicit racial biases. Asked for comment, Greenwald relied on IAT results for the two candidates to “go out on a limb to predict that Clinton’s vote margin on November 8 will exceed the prediction of the final pre-election polls.” The opposite happened. Trump became president and created a new culture that made open expression of racial bias “great again.”
Greenwald’s trust in the IAT was not justified. The IAT had already failed in the 2008 election, which Barack Obama won despite widespread racial prejudice. The IAT did not predict voting behavior, but self-reports did: some people openly admitted to biases that predicted their voting intentions over and above party affiliation (Greenwald et al., 2009).
Hillary Clinton’s endorsement of implicit bias may have cost her votes. The notion of implicit bias is that white people no longer endorse racist ideology, are motivated to avoid racial biases, but are still unconsciously influenced by them. That narrative has not aged well. A decade later, a presidential candidate can stand on a debate stage and say “they’re eating the cats and dogs” to applause, and win. The problem America faces is not hidden bias operating below the threshold of awareness. It is open prejudice, stated plainly, rewarded electorally, and entirely accessible to self-report.
The implicit bias framework misjudged the landscape. It assumed that the social norm against racism was strong and stable, and that the remaining work was to address what operated beneath it. Instead, the norm itself collapsed. Many white Americans are fully aware of their racial biases, are not motivated to change them, and are willing to vote for a candidate who hesitated to distance himself from the KKK. These voters were probably more offended by the suggestion that they are motivated to be unbiased than by the accusation that they have racial biases. Implicit bias training — which cost organizations millions — failed to address the real problem because it was designed for a world in which people wanted to be fair but couldn’t help themselves. That is not the world we live in.
Conclusion
Open science promises to align academic structures, incentives, and practices with the scientific aim of discovering the truth. To do so, science needs to check itself, notice mistakes, and correct them. However, the incentive structure continues to work against this goal. It is telling that Brian Nosek, the most visible proponent of open science in psychology, is unable to follow his own open science principles and admit that his work on the IAT did not produce a valid measure of implicit biases.
One might think that Nosek is in an enviable position to admit past mistakes given his achievements in making psychology more open. He is the Executive Director of the Center for Open Science and has a legacy that does not depend on the IAT. Other psychologists, like John Bargh, built their careers on a single line of research. When social priming failed to replicate, there was little else to fall back on. Walking away from the IAT should be easier by comparison. The fact that Nosek is unable to acknowledge the problems of the IAT makes the power of motivated blindness all the more apparent. It also highlights the most important change that is needed to make psychology a science. We need to normalize failure and see it as the inevitable outcome of exploration. Every failure that is openly acknowledged is a learning opportunity that makes success more likely the next time. Daniel Kahneman is a rare example of a psychologist who admitted mistakes in public and gained in recognition as a result. Maybe we should give Brian Nosek a Nobel Prize for his open science work so that he can admit his mistakes about the IAT.
References
Axt, J. R., Connor, P., Hoogeveen, S., Clark, C. J., Vianello, M., Lahey, J. N., Hahn, A., To, J., Petty, R. E., Costello, T. H., Mitchell, G., Tetlock, P. E., & Uhlmann, E. L. (2026). On the relationship between indirect measures of Black versus White racial attitudes and discriminatory outcomes: An adversarial collaboration using a sample of White Americans. Journal of Personality and Social Psychology. Advance online publication. https://doi.org/10.1037/pspa0000480
Greenwald, A. G., Smith, C. T., Sriram, N., Bar-Anan, Y., & Nosek, B. A. (2009). Implicit race attitudes predicted vote in the 2008 U.S. presidential election. Analyses of Social Issues and Public Policy, 9(1), 241–253.
Nosek, B. A., & Bar-Anan, Y. (2012). Scientific utopia I: Opening scientific communication. Psychological Inquiry, 23(3), 217–243. https://doi.org/10.1080/1047840X.2012.692215
Nosek, B. A., Spies, J. R., & Motyl, M. (2012). Scientific utopia II: Restructuring incentives and practices to promote truth over publishability. Perspectives on Psychological Science, 7(6), 615–631. https://doi.org/10.1177/1745691612459058
Nosek, B. A. (2024, November 8). Highs and lows on the road out of the replication crisis [Interview]. Clearer Thinking with Spencer Greenberg, Episode 235.
Schimmack, U. (2021). The Implicit Association Test: A method in search of a construct. Perspectives on Psychological Science, 16(2), 396–414. https://doi.org/10.1177/1745691619863798
Schimmack, U. (2021). Invalid claims about the validity of Implicit Association Tests by prisoners of the implicit social-cognition paradigm. Perspectives on Psychological Science, 16(2), 435–442. https://doi.org/10.1177/1745691621991860