This is the text of a seminar given at the Goldsmiths Centre for Philosophy and Critical Thought on June 11th 2025.
I would like to thank the Centre for Philosophy and Critical Thought for inviting me to give this seminar. This talk is titled 'The role of the University is to resist AI', and takes as its text Ivan Illich's 'Tools for Conviviality'.
AI's impact on higher education comes primarily from historical forces, not from its claim to be sci-fi tech from the future. Society can't throw up its hands in shock as students outsource their thinking to simulation machines when fifty years of neoliberalism has masticated education into something homogenised, metricised and machinic. Meanwhile, so-called Ed Tech has claimed for decades that learning is informational rather than relational, and therefore ripe for technical disruption.
When Illich refers to tools, he's taking this broader view. As he writes:
"I use the term 'tool' broadly enough to include not only simple hardware such as drills, pots, syringes, brooms, building elements, or motors, and not just large machines like cars or power stations; I also include among tools productive institutions such as factories that produce tangible commodities like corn flakes or electric current, and productive systems for intangible commodities such as those which produce 'education,' 'health,' 'knowledge,' or 'decisions'."
I want to ask the question "What kind of tool is AI?", to help determine whether Illich's ideas can assist us in responding to it.
ai
Contemporary AI is a specific mode of connectionist computation based on neural networks and transformer models. AI is also a tool in Illich's sense: at one and the same time an arrangement of institutions, investments and claims. One benefit of listening to industry podcasts, as I do, is the openness with which the engineers admit that no-one really knows what's going on inside these models.
Let that sink in for a moment: we're in the midst of a giant social experiment that pivots around a technology whose inner workings are unpredictable and opaque.
But there is one thing we can be sure of: the whole show depends on scale. None of the party tricks of latent space representations or next token predictions will work without tons of data and wall-to-wall computation, and if you want to beat the other lot you need more of all of it.
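By way of illustration, here's next-token prediction at toy scale: a bigram counter that simply picks the most frequent continuation. This is my own minimal sketch, nothing like how a production model is built, but it shows the bare mechanic the labs scale up with billions of parameters, and why a tiny corpus yields nothing but parroting.

```python
from collections import Counter, defaultdict

# A toy next-token predictor: bigram counts over an eleven-word "corpus".
# Real models replace the counting with billions of learned parameters,
# but the bare predict-the-next-token mechanic is the same.
corpus = "the university is a tool and the tool is a system".split()

counts: defaultdict[str, Counter] = defaultdict(Counter)
for word, next_word in zip(corpus, corpus[1:]):
    counts[word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the continuation most often seen after `word` in training."""
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "university" (ties break by first occurrence)
print(predict_next("is"))   # -> "a"
```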
This means that AI is actually a giant material infrastructure with huge demands for energy, water and concrete, while the supply chain for specialised computer chips is entangled with geopolitical conflict. It also means that the AI industry will beg, borrow and steal, or basically just steal, all the text, images and audio that it can get its spidery hands on.
A marginal but telling moment for both AI and UK higher education was a recent outburst by former politician and Facebook exec Nick Clegg, complaining that copyright is killing the AI industry. Clegg being, of course, the man who betrayed his promise to scrap student fees in 2010, and is now betraying writers, artists and musicians.
In any case, scale is a core concern for Illich, and in Tools for Conviviality he writes:
"It is possible to identify a natural scale. When an enterprise grows beyond a certain point on this scale, it first frustrates the end for which it was originally designed, and then rapidly becomes a threat to society itself. These scales must be identified and the parameters of human endeavours within which human life remains viable must be explored."
higher education
Generative AI's main impact on higher education has been to cause panic about students cheating, a panic that diverts attention from the already immiserated experience of marketised studenthood. It's also caused increasing alarm about staff cheating, via AI marking and feedback, which again diverts attention from their experience of relentless and ongoing precaritisation.
The hegemonic narrative calls for universities to embrace these tools, both to revitalise pedagogy and because students will supposedly need AI skills in the world of work. A major flaw in this story is that the tools don't actually work, or at least not as claimed.
AI summarisation doesn't summarise; it simulates a summary based on the learned parameters of its model. AI research tools don't research; they shove a pile of searched-up documents into the chatbot's context in the hope that this will trigger relevance. For their part, so-called reasoning models ramp up inference costs while confabulating a chain of thought to cover up their glaring limitations.
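To make that concrete, here is a minimal sketch of the retrieve-and-stuff pattern behind a typical AI "research" tool. Everything here is illustrative (the function names and the crude keyword scorer are my own; real products use vector search); the point is that retrieved text is simply concatenated into the prompt, and nothing guarantees the generated answer is grounded in it.

```python
# Minimal sketch of a retrieval-augmented "research" tool.
# Illustrative only: the pattern, not any particular product.

def retrieve(query: str, documents: list[str], k: int = 3) -> list[str]:
    """Rank documents by crude keyword overlap with the query."""
    terms = set(query.lower().split())
    return sorted(
        documents,
        key=lambda doc: len(terms & set(doc.lower().split())),
        reverse=True,
    )[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Shove the retrieved docs into the chatbot context and hope for relevance."""
    context = "\n\n".join(retrieve(query, documents))
    return f"Using only the sources below, answer: {query}\n\nSOURCES:\n{context}"

# The model then generates an answer conditioned on this prompt; nothing
# in the pipeline checks that the answer actually follows from the sources.
```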
The way this technology works means that generative AI applied to anything is a form of slopification, of turning things into slop. However, where AI is undoubtedly successful is as a shock doctrine, as a way to further precaritise workers and privatise services.
This casts a different light on the way OpenAI, Anthropic and Google are circling higher education, dangling offers of educational LLM programmes that have already signed up the LSE, California State University and the whole of Estonia's high school system. It brings to mind Illich's warning about radical monopolies:
"I speak about radical monopoly when one industrial production process exercises an exclusive control over the satisfaction of a pressing need, and excludes nonindustrial activities from competition. The establishment of radical monopoly happens when people give up their native ability to do what they can for themselves and for each other, in exchange for something 'better' that can be done for them only by a major tool."
critical thought
More specifically, in light of today's seminar, what does this mean for critical thought?
The University of London is already promoting a tool that provides "personalised AI generated feedback in under 2 minutes... including advice on critical thinking". But thinking for yourself is a frictional activity, not a statistical correlation. An AI-mediated essay plan has already missed the point, because it bypasses the student's own capacity to develop and substantiate propositions about the world.
When similar AI was adopted by the LA Times to add journalistic balance to opinion pieces, it rebalanced an article about the KKK by describing the Klan as a product of white Protestant culture that was simply responding to societal changes.
There's already research indicating that students' problem-solving and creativity can decline when they offload cognition to chatbots. Google's recently launched Gemini 2.5 Flash model even has a "thinking budget" feature that allows control over the AI's so-called reasoning levels, and boasts that dialling the reasoning down cuts output token costs roughly sixfold.
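For reference, this is roughly what that control looks like in Google's google-genai Python SDK as documented at the time of writing; treat the exact parameter names as provisional, since they may well change.

```python
# Dialling Gemini 2.5 Flash's "reasoning" down to zero via the thinking
# budget, per Google's google-genai SDK docs (subject to change).
from google import genai
from google.genai import types

client = genai.Client()  # expects an API key in the environment

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Summarise Tools for Conviviality in one paragraph.",
    config=types.GenerateContentConfig(
        # A budget of 0 switches "thinking" off; larger budgets buy more
        # intermediate tokens, and a proportionally larger bill.
        thinking_config=types.ThinkingConfig(thinking_budget=0)
    ),
)
print(response.text)
```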
Moreover, the more these models claim to be safe for education, the more they become machines for metapolitical control. Whatever ketamine-fuelled 3am tweak of the system prompt made Grok insist on discussing white genocide will be applied with far more nuance when done by nice people applying ministry-approved fine-tuning.
Critical thought is not something you can stochastically optimise, and I agree with Hannah Arendt that thoughtlessness is a precondition for fascism.
students
But what about the students? Aren't we doing them a disservice if we don't prepare them for a world of AI?
As soon as they leave university, they're going to be faced with AI-powered recruitment apps that mash up deep learning and psychometrics to predict their future value to the company. In their white-collar jobs they'll use AI to write reports for managers who'll use AI to summarise them, while every chatbot interaction feeds analytics that assess their alignment with corporate goals.
They'll constantly be faced by AI that fails to actually complete the task at hand, despite the CEO's beliefs to the contrary, and will have to work overtime to backfill its failures. If they're stressed or depressed they'll be passed to AI-powered therapy bots optimised for workforce adaptation rather than for getting to the bottom of their distress.
According to a survey of 16-21 year olds by the British Standards Institution, 46% said they would rather be young in a world without the internet altogether. That's the result of two decades of algorithmic toxicity; how long do you think it will take for them to feel the same about AI? And yet universities are falling over themselves to convince faculty and students alike that AI is the only possible future for higher education, while research funders will only fund projects that add AI rather than research into alternatives.
Any university with a focus on graduate employability should question the hype about workplace AI which, in the words of Microsoft's own researchers, can result in the deterioration of cognitive faculties and leave workers atrophied and unprepared. Students already have a sackful of reasons to be disaffected from the world we're bequeathing them; do we really want to find out what happens when we gaslight their doubts about the value of a synthetic education?
As Illich put it in Tools for Conviviality: "When ends become subservient to the tools chosen for their sake, the user first feels frustration and finally either abstains from their use or goes mad".
Or, as the 17 and 18 year olds from state schools rated less well by Ofqual's crappy Covid spreadsheet put it more succinctly: "Fuck the algorithm".
labour government
Whatever we or the students might feel about the role of the university, our political masters are quite clear that the only direction of travel is more AI.
This Labour government is possibly the most AI-pilled in the world, so at least they got their wish to be world-leading at something. The AI Opportunities Action Plan issued in January is a heady mix of nationalist vibes and startup pitch that promises to 10x AI while handing over land and the electricity grid to a rash of data centres in so-called AI Growth Zones.
Labour's single political vision is growth through AI, where scaling tech will somehow stop people voting Reform or burning down immigrant hostels. This is a vision articulated by the Tony Blair Institute in reports titled 'Governing in the Age of AI: A New Model to Transform the State' and 'The Future of Learning: Delivering Tech-Enabled Quality Education for Britain'.
As an aside, since we all need a laugh in the face of this nonsense: their research into how many jobs would be replaced by AI included asking ChatGPT.
It does indeed seem that the chef's kiss in the managerial dismantling of higher education is going to come from the lips of a chatbot. What's just as bad is the way AI is being shoved into other vital services like there's no tomorrow. The combination of the Data (Use and Access) Bill and the Fraud, Error and Recovery Bill is a recipe for repeating Australia's 'robodebt' disaster at scale. It's as if we've learned nothing from the Post Office Horizon IT scandal.
The Department for Work and Pensions is leading the charge in seeking algorithmic ways to optimise the disposability of the disabled, in line with government rhetoric about social burden. Deep learning has historical and epistemological connections to eugenics through its mathematics, its metrics and through concepts like AGI, and we shouldn't be surprised if and when it gets applied in education to weed out 'useless learners'.
It looks increasingly likely that the twinning of the Labour government's fear of Reform UK with its absolute commitment to AI is going to bring about the same fusion of high tech and reactionary politics as we've seen with MAGA and Silicon Valley.
resistance
I'm proposing that the role of the university is to resist AI, that is, to apply rigorous questioning to the idea that AI is inevitable.
This resistance can be based on environmental sustainability when looking at AI's carbon-emitting data centres and their seizure of energy, water and land. It can be based on the defence of creativity when looking at the theft of creative work to train tools that then undermine those professions. It can be based on decolonial commitments when looking at AI's outsourcing of exploitative labour to the global south, and its dumping of data centres in the midst of deprivation.
Resistance is necessary to preserve the role of higher education in developing a tolerant society. For the alternative, we only have to look at the resonances between right-wing narratives and the ambitions of the tech broligarchy: resonances that are anti-worker, anti-democratic, committed to epochal transformation, resentful and supremacist. Resonances which, channelled through a UK version of DOGE, will finish off university autonomy in the name of national growth and ideological alignment.
DOGE has provided a template for complete political and cultural rollback, exploiting AI's brittle affordances to trash any pretence at social contract. What the so-called educational offers from AI companies are actually doing is a form of cyberattack, building in the pathways for the hacker tactic of 'privilege escalation' to be used by future threat actors, especially those from a hostile administration.
This is why our resistance needs to be technopolitical. I'm proposing that higher education look towards thinkers like Ivan Illich for an alternative approach to assessing what kinds of tools are both pedagogical and convivial.
illich
Illich proposed what he called counterfoil research to reverse the kind of obsessive focus on the refinement of thoughtless mechanism so visible in the AI industry. He said that "Counterfoil research has two major tasks: to provide guidelines for detecting the incipient stages of murderous logic in a tool; and to devise tools and tool-systems that optimize the balance of life, thereby maximizing liberty for all."
Illich's purpose in Tools for Conviviality was "to lay down criteria by which the manipulation of people for the sake of their tools can be immediately recognized". We can take advantage of subsequent efforts to define specific starting points, such as the Matrix of Convivial Technologies, which gives any group developing or adopting a technology a structured way to ask questions about key aspects such as relatedness (how does it affect relations between people?) and bio-interaction (how does the tech interact with living organisms and ecologies?).
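As a sketch of what such a structured assessment could look like in practice, here is one possible encoding as a simple worksheet. The relatedness and bio-interaction questions are the ones just quoted, and the other dimension names follow published versions of the matrix, but the layout and code are my own illustrative assumption, not an official schema.

```python
# Illustrative worksheet loosely based on the Matrix of Convivial
# Technologies. The dimension names follow published versions of the
# matrix; the data structure itself is a sketch, not a standard.

MATRIX_QUESTIONS = {
    "relatedness": "How does the tool affect relations between people?",
    "bio-interaction": "How does it interact with living organisms and ecologies?",
    "access": "Who can obtain, afford and understand it?",
    "adaptability": "Can its users repair, modify or repurpose it?",
}

def new_worksheet(tool: str) -> dict:
    """Return an empty assessment of `tool` for a council to fill in."""
    return {
        "tool": tool,
        "dimensions": {
            dimension: {"question": question, "deliberation": ""}
            for dimension, question in MATRIX_QUESTIONS.items()
        },
    }

# A council records deliberation, not a score out of ten.
worksheet = new_worksheet("campus chatbot pilot")
worksheet["dimensions"]["relatedness"]["deliberation"] = (
    "Replaces tutor contact hours with automated feedback..."
)
```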
What we need right now, instead of more soft soap about responsible AI or consultancy hype about future jobs, are institutes that assemble the emerging evidence of AI's actual consequences across different material, social and political dimensions.
To stay relevant as spaces for higher education, universities will need to model the kind of social determination of technology that has been buried since the 1970s: the preemptive examination of tech's value to society. As Illich says: "Counterfoil research must clarify and dramatize the relationship of people to their tools. It ought to hold constantly before the public the resources that are available and the consequences of their use in various ways".
people's councils
While Tools for Conviviality is a general argument that technology should be subject to social determination, and the Matrix of Convivial Technology gives us a set of specific starting points, it's pretty clear that the drive to AI has already advanced from regulatory capture towards institutional and state capture.
In the UK we already have Palantir, a military-intelligence company founded by Peter Thiel that openly espouses cultural supremacy, placed at the heart of the NHS. Its UK exec, the grandson of Oswald Mosley, said of Keir Starmer after meeting him: "You could see in his eyes that he gets it".
Instead of waiting for a liberal rules-based order to magically appear, we need to find other ways to organise to put convivial constraints into practice. I suggest that a workers' or people's council on AI can be constituted in any context to carry out the kinds of technosocial inquiry advocated by Illich. The act of doing so prefigures the very forms of independent thought that are undermined by AI's apparatus, and manifests the kind of careful, contextual and relational approach that is erased by AI's normative scaling.
When people's councils on AI are constituted as staff-student formations they can mitigate the mutual suspicion engendered by AI. The councils are means by which to ask rigorous questions about the conviviality of AI and, as per Illich's broad definition of tools, to ask about the conviviality of universities by applying the same set of criteria to both infrastructures.
They're also an opportunity to form coalitions with allies outside higher education whose work or lived experience relates to programmes of study and is also being undermined by degenerative AI, from the software engineers at DeepMind and Microsoft concerned about the entanglement of AI with genocide, to the health professionals who see funds diverted into shiny AI projects instead of fixing the basics.
It's also clear that AI is pouring into primary and secondary education, both organically thanks to big tech and systemically through government initiatives. We need practical collaboration between educators at all levels to challenge the way AI is flooding the zone, or the students of the future will be fully AI-cooked before they even make it to university.
More optimistically, it's not so hard to imagine a near future in which a course or programme that is vocal about the way it has limited or even eliminated AI gains additional appeal, as an alternative to the current pathway where universities conclude that, thanks to AI, they don't really need most lecturers, and students then conclude that, for similar reasons, they don't really need the universities.
The function of people's councils on AI is also to imagine a future for universities in societies heading for collapse, where the bridgeheads to a desirable future for all aren't correlational computations but campuses and communities.
imagination
I want to conclude by emphasising that the proposition that the role of the university is to resist AI is not simply a defence of pedagogy, but an affirmation of the social importance of imagination.
The technopolitical transformation of which AI is a part isn't simply a matter of market capture, but of a wider nihilism that seizes material and energy resources, driven by an unrelenting will to power and the reformulation of racial supremacy via algorithmically-mediated eugenics.
It's important to talk about resistance as a way to find resonant struggles that can amplify each other. The capacity for resistance draws on the resources of independent thought and critical reflection, the very qualities I've argued are diluted or dissolved by a dependence on AI.
These qualities aren't developed solely or mainly through time at university, and yet it is also true that students have often formed a catalytic part of many social movements. In some sense, and possibly despite itself, the university has been a space for developing collective forms of hope and imagination which are not only in short supply but are actively foreclosed by technogenic patterns of social and psychic ordering.
The role of the university isn't to roll over in the face of tall tales about technological inevitability, but to model the forms of critical pedagogy that underpin the social defence against authoritarianism and that make space to reimagine the other worlds that are still possible.