Against the “Against” — A Response to Against the Uncritical Adoption of AI in Academia

My prelude (ChatGPT thought it was witty).
Ironically, I worked with ChatGPT on this response to Olivia Guest et al.’s warning against the uncritical use of AI. Of course, we should never use AI uncritically — but the same goes for journal articles, textbooks, or even peer reviews. The real problem with Guest et al.’s piece is that it collapses all AI use into “uncritical adoption.” It does not distinguish between uncritical and critical use. That distinction matters. Used properly, AI can benefit science — by accelerating learning, enhancing equity, and sharpening critical thinking. In fact, the authors themselves might have benefitted from subjecting their own arguments to a critical dialogue with an AI.


A recent position paper, Against the Uncritical Adoption of AI in Academia, argues that universities are rushing headlong into adopting artificial intelligence under the banner of “progress.” The authors warn that bundling chatbots into tools like Microsoft Office normalizes AI use without consent, blurs boundaries of academic integrity, and risks undermining both pedagogy and research quality. Their framework rests on five principles of research integrity: honesty, scrupulousness, transparency, independence, and responsibility. From their perspective, most current AI tools fail these tests because they are opaque, corporate-controlled, environmentally costly, and prone to generating polished but shallow text. In short: uncritical AI use threatens to hollow out the critical and self-reflective fabric of academia.

These are serious concerns, and I share the view that uncritical adoption is a danger. But I want to highlight what the paper omits: examples of critical and productive AI use that enhance, rather than erode, academic standards.

  • Accelerating statistical learning. Many academics in psychology and related fields struggle with quantitative methods. AI gives students and reviewers the chance to query statistical models, check assumptions, and explore methods interactively (see the sketch after this list). This does not deskill; it scaffolds. It allows scholars to learn concepts faster and more deeply than in the past.
  • Probing conflicting ideas. I train students to use AI as an interlocutor: to ask probing questions, compare theoretical perspectives, and practice evaluating arguments. Far from outsourcing thought, this cultivates the critical stance that integrity requires.
  • Accessing difficult texts. Undergraduates often struggle with primary sources written for specialists. AI can summarize, rephrase, and contextualize these texts so students can engage with them meaningfully. They must still return to the original, but the barrier to entry is lowered.
  • Language equity. English dominates global academia, privileging native speakers. As a non-native speaker, I use AI to polish my writing so that reviewers judge my ideas on their scientific merits, not on my fluency. This does not lower standards. It raises them by removing a linguistic bias that has long distorted evaluation.
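
To make the first point concrete, here is a minimal sketch of the kind of assumption-checking a student might work through in dialogue with an AI: fitting an ordinary least squares regression, then testing residual normality and homoscedasticity. The data, variable names, and choice of tests are illustrative assumptions on my part, not a prescribed workflow.

```python
# A toy illustration (hypothetical data and variable names) of the kind of
# assumption-checking a student might work through step by step with an AI.
import numpy as np
import statsmodels.api as sm
from scipy import stats
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(seed=0)
x = rng.normal(size=200)
y = 2.0 * x + rng.normal(scale=1.0, size=200)   # simulated outcome

X = sm.add_constant(x)         # design matrix with an intercept column
model = sm.OLS(y, X).fit()     # ordinary least squares fit
print(model.summary())         # coefficients, R-squared, standard errors

# Normality of residuals (Shapiro-Wilk; null hypothesis: residuals are normal)
w_stat, p_normality = stats.shapiro(model.resid)
print(f"Shapiro-Wilk p = {p_normality:.3f}")

# Constant residual variance (Breusch-Pagan; null: homoscedastic residuals)
lm_stat, lm_p, f_stat, f_p = het_breuschpagan(model.resid, X)
print(f"Breusch-Pagan p = {lm_p:.3f}")
```

The value of such an exchange lies not in the printed output but in being able to ask, at each step, why a given test applies and what a violation would mean for one's conclusions.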

The authors are right: if we simply accept AI outputs as final products, we risk replacing reasoning with rhetoric. But if we use AI critically — with disclosure, scrutiny, and accountability — it can make academics more skillful, more equitable, and more rigorous. Avoiding AI altogether is as counterproductive as insisting students still search “dusty archives” rather than using online databases. The real challenge is not to ban AI, nor to normalize it uncritically, but to teach and model how to use it wisely.

