What Is Science?
What is science? According to ChatGPT, even this most basic concept lacks a clear definition. There is not one science but many sciences that share overlapping features, creating a family resemblance rather than a set of necessary and sufficient conditions. As Laudan (1983) argued, the search for a demarcation criterion between science and non-science is a pseudo-problem.
Attempts to define science in terms of verification, falsifiability, empirical content, prediction, method, progress, or realism have all faced objections. Nevertheless, these concepts remain relevant for distinguishing science from other belief systems.
Even in the absence of strict definitions, concepts can be characterized by prototypes. Psychology distinguishes between descriptive prototypes, which capture typical features (e.g., feathers and flight for birds), and ideal prototypes, which represent standards rather than averages. Nobody is perfectly healthy or happy, but comparison to an ideal allows meaningful evaluation. I argue that science functions in the same way: not as a fixed set of practices, but as an ideal prototype against which actual scientific activity can be evaluated.
An old-fashioned ideal prototype of science describes it as a collective effort to test beliefs and to revise or replace them when new information reveals inconsistencies (Hume, Popper, Peirce, Dewey). A defining feature of this ideal is openness—openness to new evidence and openness to changing beliefs.
The value science places on discovery reflects this openness. Novelty matters because scientific inquiry is oriented toward progressive improvement in understanding. The history of science shows that expanding, revising, and sometimes challenging existing belief systems is a central driver of progress, even when that progress is slow and non-linear.
Paradigms and Confined Openness
The definition of science as an ideal prototype differs from descriptions of actual scientific practice because reality rarely matches ideals. Scientists are human agents embedded in social and institutional contexts, and their behavior is shaped by incentives that can conflict with norms of openness and belief revision. Kuhn’s analysis of paradigms and paradigm shifts illustrates these tensions between epistemic ideals and community dynamics.
A scientific paradigm functions much like a culture: it has foundational beliefs, socialization practices, initiation rituals, and a collective goal of preservation and expansion. Within paradigms, researchers may revise beliefs and pursue novelty, but foundational assumptions are typically treated as off-limits. For example, a foundational assumption in mainstream social psychology is that experiments are the primary source of valuable knowledge, privileging laboratory studies over field research.
This produces what I call confined openness: openness to criticism, replication, and revision within a paradigm, combined with resistance to challenges that target its foundations. A visitor to a scientific conference would see many hallmarks of science on display, yet might not notice that certain questions are never asked.
The Replication Crisis in Social Psychology
In the early 2010s, it became evident that common research practices in experimental social psychology deviated from the ideal of open inquiry in which evidence can genuinely threaten beliefs. A key flashpoint was Bem (2011), which reported evidence for precognition, a phenomenon incompatible with established physical and psychological assumptions. The lesson was not that Bem committed fraud, but that ordinary analytic flexibility combined with selective publication can make implausible claims appear empirically supported (Schimmack, 2012).
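The mechanism is easy to demonstrate. The following sketch is illustrative only, not a reconstruction of Bem's analyses: it simulates a researcher who tests several outcomes in a study where the true effect is zero and declares success whenever any one of them reaches significance. For simplicity the outcomes are assumed to be independent; with five of them, the nominal 5% false-positive rate roughly quadruples.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def flexible_study(n=50, n_outcomes=5, alpha=0.05):
    """One study under the null: several outcomes are tested, and the
    study counts as a 'success' if any outcome reaches p < alpha."""
    for _ in range(n_outcomes):
        a = rng.normal(size=n)  # the two group means are truly equal
        b = rng.normal(size=n)
        if stats.ttest_ind(a, b).pvalue < alpha:
            return True
    return False

n_studies = 10_000
rate = sum(flexible_study() for _ in range(n_studies)) / n_studies
print(f"False-positive rate with 5 outcomes: {rate:.1%}")  # ~23%, not 5%
```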
The core problem was that researchers could accumulate confirmatory evidence without reporting nonconfirmatory outcomes. When null or contradictory results are systematically underreported, the published literature ceases to constrain belief revision. Psychology has long exhibited unusually high rates of statistically significant findings: Sterling (1959) found that roughly 97% of published significance tests rejected the null hypothesis, and later work confirmed this excess of positive results (Motyl et al., 2017).
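A small simulation makes the arithmetic concrete. The numbers below are hypothetical, chosen only to illustrate the logic behind Sterling's observation: suppose half of all studies test a true null and the rest test a modest real effect with about 60% power. If only significant results reach print, the published record is 100% confirmatory regardless of what the underlying literature looks like.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def study_pvalue(effect, n=40):
    """Two-group comparison with n per group; returns the two-sided p value."""
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(effect, 1.0, n)
    return stats.ttest_ind(a, b).pvalue

# Hypothetical literature: half the studies test a null effect (d = 0),
# half test a modest real effect (d = 0.5).
pvalues = [study_pvalue(0.0 if i % 2 == 0 else 0.5) for i in range(5_000)]
significant = np.array(pvalues) < 0.05

print(f"Significant among all studies run:   {significant.mean():.0%}")  # ~33%
published = significant[significant]  # selective publication keeps only successes
print(f"Significant among published studies: {published.mean():.0%}")    # 100%
# The published record is uniformly confirmatory and no longer
# constrains belief revision.
```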
In addition, many journals are organized around specific paradigms and explicitly aim to promote them. Such journals are structurally unlikely to publish work that challenges the paradigm’s core assumptions.
Open Science and Its Limits
In response, researchers advocated reforms under the banner of open science, typically operationalized as procedural transparency and reproducibility: sharing data, materials, and code; preregistration; replication; and safeguards against selective reporting. These reforms improve error detection and accountability within paradigms by making claims easier to audit and by reducing reporting flexibility.
The replication crisis also socialized a new generation of researchers to view credibility as a methodological and institutional problem rather than a matter of personal integrity. However, the open science movement’s focus on single studies and single findings risks deflecting attention from deeper structural sources of closedness. These include incentive systems that reward publishable success, norms that delimit legitimate questions, and paradigm-level assumptions treated as nonnegotiable.
The most fundamental constraint is the trap of paradigmatic research described by Kuhn. Paradigms restrict openness by confining criticism to questions that can be addressed within an accepted framework. In mature sciences, stable theoretical foundations allow paradigmatic research to produce cumulative progress. Psychology, by contrast, lacks a unifying paradigm and is fragmented into numerous micro-paradigms sustained as much by social and institutional commitments as by decisive empirical support. Debates such as the personality–situation controversy illustrate how paradigm boundaries can become sites of identity and norm enforcement rather than objects of open-ended inquiry.
Current incentive structures exacerbate this problem. Science operates as a reputational marketplace in which publications, grants, and visibility are assumed to signal quality. Yet producers and evaluators largely overlap. Reviewers, editors, and panelists are drawn from within paradigms, creating selection pressures favoring work that extends existing frameworks. These dynamics propagate into citation counts, funding decisions, and career advancement, reinforcing paradigmatic stability.
Open science rightly targets misaligned incentives for transparency and reproducibility (Nosek et al., 2015), but it remains focused on improving paradigmatic research rather than evaluating paradigms themselves. As a result, research programs can become highly replicable without producing theoretical progress. For example, it is highly replicable that self-report measures and Implicit Association Test scores correlate weakly. Yet this regularity alone does not resolve whether the discrepancy reflects measurement error or a substantive distinction between conscious and unconscious processes (Schimmack, 2021).
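The underdetermination is easy to see with the classic correction for attenuation. The values below are hypothetical, chosen only for illustration; they are not estimates from any particular dataset.

```python
import math

def disattenuate(r_observed, rel_x, rel_y):
    """Spearman's correction for attenuation: the latent correlation
    implied by an observed correlation and the two measures' reliabilities."""
    return r_observed / math.sqrt(rel_x * rel_y)

# Hypothetical values for illustration, not estimates from any dataset:
r_obs = 0.25            # weak observed self-report/IAT correlation
rel_self_report = 0.80  # assumed reliability of the self-report measure
rel_iat = 0.50          # assumed retest reliability of IAT scores

print(f"Implied latent correlation: "
      f"{disattenuate(r_obs, rel_self_report, rel_iat):.2f}")
# 0.25 / sqrt(0.80 * 0.50) ≈ 0.40: the same weak observed correlation is
# compatible with a much stronger latent relation, so the observed value
# alone cannot separate measurement error from a real dissociation.
```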
After more than a decade of reform, these deeper concerns remain largely unaddressed. Researchers can adopt open practices while leaving foundational assumptions intact. While implausible claims are now harder to publish, the incentive structure still rewards work that stabilizes paradigms rather than subjects them to serious challenge.
Meta-Analysis, Meta-Science, and Meta-Paradigmatic Critique
The term meta has a long history in psychology. Meta-analysis emerged in the 1970s to integrate evidence across studies, and today meta-analyses are highly cited because they summarize large literatures. However, meta-analyses typically aggregate results produced within paradigms rather than evaluating the paradigms themselves. Theoretical questions and under-studied alternatives fall outside their scope, and conclusions are shaped by publication bias and researcher allegiance.
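To see why aggregation cannot substitute for paradigm evaluation, consider the basic fixed-effect estimator that most meta-analyses build on: an inverse-variance weighted average of study-level effects. The sketch below uses made-up effect sizes. Dropping the nonsignificant studies, as publication bias effectively does, inflates the pooled estimate, and nothing in the estimator itself flags the omission.

```python
import numpy as np

def fixed_effect_pool(effects, ses):
    """Inverse-variance weighted average, the basic fixed-effect
    meta-analytic estimator."""
    w = 1.0 / np.asarray(ses) ** 2
    estimate = np.sum(w * np.asarray(effects)) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return estimate, se

# Hypothetical effect sizes (d) and standard errors from six studies;
# the first four are significant, the last two are null results.
effects = [0.45, 0.52, 0.38, 0.61, 0.05, -0.02]
ses = [0.20, 0.22, 0.18, 0.25, 0.15, 0.16]

est_all, se_all = fixed_effect_pool(effects, ses)
est_pub, se_pub = fixed_effect_pool(effects[:4], ses[:4])
print(f"All six studies:       d = {est_all:.2f} (SE = {se_all:.2f})")  # ~0.26
print(f"Significant four only: d = {est_pub:.2f} (SE = {se_pub:.2f})")  # ~0.47
# The estimator faithfully aggregates whatever it is given; it cannot
# detect that the two null results were filtered out before pooling.
```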
Addressing these limitations requires moving beyond meta-analysis to meta-science. Meta-science evaluates research programs rather than producing new findings. Meta-scientists function as knowledgeable consumers who assess whether bodies of research adhere to best practices and whether paradigms remain epistemically productive.
Yet most meta-science operates at a high level of abstraction, focusing on general properties of science rather than sustained critique of specific paradigms. What is needed instead is meta-paradigmatic evaluation: paradigm-specific critique conducted by domain experts who are institutionally independent of the paradigms they evaluate.
Toward Open Meta-Paradigmatic Science
Systematic paradigm evaluation is likely to encounter resistance, just as open science did. Meta-paradigmatic critique may be framed as a threat to academic freedom. But academic freedom has never been absolute. Researchers accept ethics review and other forms of oversight when fundamental norms are at stake. Critical evaluation does not restrict inquiry; it is a constitutive feature of science.
Unlike open science reforms, meta-paradigmatic evaluation requires institutional change. It must be recognized as a legitimate scholarly activity with its own funding, review panels, positions, and journals. While meta-science is itself imperfect and subject to capture, it offers a cost-effective means of preventing the long-term stagnation of research programs.
Existing outlets provide only partial solutions. Journals with low rejection rates reduce gatekeeping but carry high costs and low prestige. Specialized journals such as Meta-Psychology welcome critical evaluation but remain marginal. Journals devoted to meta-science typically operate at an abstract level and do not engage deeply with specific paradigms.
What is missing are field-specific, meta-paradigmatic journals insulated from paradigm capture. Meta-paradigmatic critique requires deep disciplinary expertise combined with institutional independence—an uncommon combination given current training and reward structures.
Conclusion: Science as Utopia
The ideal prototype of science is a utopia: a state that cannot be fully realized but that serves as a regulative aspiration. Open and honest reporting of results should not be a utopia; it is a minimal requirement of scientific practice, and reward structures must support it.
A more demanding utopia requires something further: openness to sustained critical examination of fundamental beliefs. Such beliefs can carry emotional and identity-laden significance for scientists, comparable to religious beliefs for believers. Because humans naturally resist scrutiny of core commitments, openness at this level cannot be left to individual virtue. It must be institutionalized.
Open and independent challenges to scientific paradigms—especially at the level of foundational assumptions—should therefore be understood not as threats to science, but as necessary conditions for its long-term epistemic vitality.
Core References
Bem, D. J. (2011). Feeling the future: Experimental evidence for anomalous retroactive influences on cognition and affect. Journal of Personality and Social Psychology, 100(3), 407–425. https://doi.org/10.1037/a0021524
Kuhn, T. S. (1970). The structure of scientific revolutions (2nd ed.). University of Chicago Press.
Laudan, L. (1983). The demise of the demarcation problem. In R. S. Cohen & L. Laudan (Eds.), Physics, philosophy and psychoanalysis: Essays in honor of Adolf Grünbaum (pp. 111–127). D. Reidel.
Motyl, M., Demos, A. P., Carsel, T. S., Hanson, B. E., Melton, Z. J., Mueller, A. B., Prims, J. P., Sun, J., Washburn, A. N., Wong, K. M., Yantis, C. A., & Skitka, L. J. (2017). The state of social and personality science: Rotten to the core, not so bad, getting better, or getting worse? Journal of Personality and Social Psychology. https://doi.org/10.1037/pspa0000084
Nosek, B. A., Spies, J. R., & Motyl, M. (2012). Scientific utopia: II. Restructuring incentives and practices to promote truth over publishability. Perspectives on Psychological Science, 7(6), 615–631. https://doi.org/10.1177/1745691612459058
Nosek, B. A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S. D., Breckler, S. J., et al. (2015). Promoting an open research culture. Science, 348(6242), 1422–1425. https://doi.org/10.1126/science.aab2374
Schimmack, U. (2012). The ironic effect of significant results on the credibility of multiple-study articles. Psychological Methods, 17(4), 551–566. https://doi.org/10.1037/a0029487
Schimmack, U. (2020). A meta-psychological perspective on the decade of replication failures in social psychology. Canadian Psychology/Psychologie canadienne, 61(4), 364–376. https://doi.org/10.1037/cap0000246
Schimmack, U. (2021). Invalid claims about the validity of implicit association tests by prisoners of the implicit social-cognition paradigm. Perspectives on Psychological Science, 16(2), 435–442. https://doi.org/10.1177/1745691621991860
Sterling, T. D. (1959). Publication decisions and their possible effects on inferences drawn from tests of significance—or vice versa. Journal of the American Statistical Association, 54, 30–34.