Fri Dec 5, 2025

AI Is Hollowing Out Higher Education



NIJMEGEN – Modern AI technologies severely impede humans’ ability to learn and retain skills, while also making it nearly impossible for academics and other experts to cultivate and disseminate knowledge.

Many scholars, including us, have highlighted the threat posed by techno-solutionism in education: Rather than expanding our intellectual horizons, these technologies undermine the very conditions that allow us to think for ourselves.

The corporations profiting from AI – including Microsoft, OpenAI, Nvidia, and ASML – have a vested interest in maintaining the current hype. Their oligopolistic control over both hardware and software depends on the exaggerated claim that cognitive labor can be fully outsourced to their models.

In reality, the apparent achievements of these systems rest on the wholesale theft of humans’ intellectual labor. Most notably, large language models (LLMs) have been trained by scraping books and scholarly works without the authors’ or publishers’ consent, fragmenting and remixing them into patchwork plagiarism packaged as human-like responses.

According to the AI industry’s narrative, all human creativity, innovation, and knowledge are essentially automatable, rendering people obsolete. Even advocates of “human-centered AI” assume this to be true.

Yet studies in automation and cognitive science, together with the long history of AI boom-and-bust cycles, demonstrate that sweeping claims of near-total automation are exaggerated, self-defeating, and toxic. Even when automation works, it must operate at a far lower level than these companies suggest if it is to succeed without eroding the skills and agency of human operators.

Ultimately, the collective strategy of AI companies threatens to deskill precisely those people who are essential for society to function.

What value, after all, is there in automating art, thinking, or reading – especially in educational and academic settings? None. On the contrary, automation of knowledge and culture by private companies is a worrying prospect – conjuring dystopian and outright fascistic scenarios.

The deskilling, denigration, and displacement of teachers and scholars have historically been central to fascist takeovers, since educators serve as bulwarks against propaganda, anti-intellectualism, and illiteracy. Today, AI advocates do not merely assume automation is necessary; they aggressively proselytize their faith, thereby paving the way for techno-fascism.

Academics exist, in part, to speak truth to power, which requires their independence from government and corporate influence. The United States today demonstrates the consequences of hollowing out academic freedom, critical thinking, and impartiality, as President Donald Trump’s administration and Big Tech companies collaborate to undermine academic institutions’ ability to sustain the kind of scientific work that exposes AI’s false promises.

Worse still, the techno-fascist assault on universities increasingly comes from within. In recent years, the AI industry has captured university administrations, co-opted faculty unions, and even enlisted individual teachers and researchers to promote its tools to colleagues and students. Far from offering genuine solutions, these technologies exacerbate social injustices and corrode the ecosystem of human knowledge.

In a recent position paper, we challenged the claims advanced by the AI industry. While corporations urge us to embrace – and even celebrate – their imagined, supposedly inevitable tech-driven future, academics and their allies must take a principled stand and defend universities and scholarly institutions by barring toxic, addictive technologies from classrooms.

The industry counts on us forgetting that we have been here before. Universities have long been used to whitewash harmful products, and AI itself has been repeatedly repackaged through various hype cycles. In fact, the term “artificial intelligence,” coined in 1955, has always been more a marketing phrase than a taxonomic classification.

To sustain the latest iteration of the AI con, the industry relies on anthropomorphic sleight of hand – claiming that models “think,” “reason,” and “learn” to suggest cognitive abilities they demonstrably lack and may never develop. This rhetorical trick not only exaggerates the products’ capabilities but also dehumanizes people by falsely humanizing machines, which is why many educators and students have begun to reject AI.

In response, the AI industry has lobbied government agencies to mandate the use of its products, claiming that without them, students will be unprepared for the job market.

What is actually needed is the opposite: scholars with expertise in AI-related fields must have the freedom to critique these technologies, while those in other fields must be able to teach without interference from companies seeking to cash in.

Industry agendas – whether the industry is tobacco, petroleum, pharmaceuticals, or tech – rarely align with human welfare or disinterested research, especially when left unchecked and unregulated. Instead, their interests are maximizing profit and market power. The AI industry is no different, and educators should repudiate its false promises.

This commentary is based on joint work with Marcela Suarez and Barbara Müller.

Olivia Guest is Assistant Professor of Computational Cognitive Science at Radboud University. Iris van Rooij is Professor of Computational Cognitive Science at Radboud University.

Copyright: Project Syndicate, 2025.
www.project-syndicate.org