This talk was given at the conference 'Rethinking the legacy of 1968: Left fields and the quest for common ground' held at The Centre for Cultural Studies Research, University of East London on September 22nd 2018 http://rethinking1968.today/
There's a definite resonance between the agitprop of '68 and social media. Participants in the UCU strike earlier this year, for example, experienced Twitter as a platform for both affective solidarity and practical self-organisation. However, there is a different genealogy that speaks directly to our current condition: that of systems theory and cybernetics. What happens when the struggle in the streets takes place in the smart city of sensors and data? Perhaps the revolution will not be televised, but it will certainly be subject to algorithmic analysis. Let's not forget that 1968 also saw the release of '2001: A Space Odyssey', featuring the AI supercomputer HAL.
While opposition to the Vietnam war was a rallying point for the movements of '68, the war itself was also notable for the application of systems analysis by US Secretary of Defense Robert McNamara, who attempted to make it, in modern parlance, a data-driven war. During the war, the hamlet pacification programme alone produced 90,000 pages of data and reports a month, and the body count metric was published in the daily newspapers. The milieu that helped breed our current algorithmic dilemmas was the contemporaneous swirl of systems theory and cybernetics, ideas about emergent behaviour and experiments with computational reasoning, and the intermingling of military funding with the hippy visions of the Whole Earth Catalog.
The double helix of DARPA and Silicon Valley can be traced through the evolution of the web to the present day, where AI and machine learning are making inroads everywhere, carrying their own narratives of revolutionary disruption; a Ho Chi Minh trail of predictive analytics. They are playing Go better than grandmasters and preparing to drive everyone's car, while the media panics about AI taking our jobs. But this AI is nothing like HAL; it's a form of pattern finding based on mathematical minimisation, like a complex version of fitting a straight line to a set of points. These algorithms find the optimal solution when the input data is both plentiful and messy. Algorithms like backpropagation can find patterns in data that were intractable to analytical description, such as recognising human faces seen at different angles, in shadow and with occlusions. The algorithms of AI crunch the correlations and the results often work uncannily well.
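To make the line-fitting analogy concrete, here is a minimal sketch in Python (data invented for illustration) of the loop underneath this kind of learning: measure the error, take its gradient, nudge the parameters, repeat. Backpropagation is essentially this same loop pushed through many layers of parameters rather than two.

```python
import numpy as np

# Noisy points scattered around a 'true' line y = 2x + 1.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 100)
y = 2 * x + 1 + rng.normal(0, 0.1, 100)

# Learn slope w and intercept b by minimising the mean squared error,
# stepping downhill along the gradient of the loss.
w, b = 0.0, 0.0
for _ in range(2000):
    err = (w * x + b) - y
    w -= 0.1 * 2 * (err * x).mean()  # gradient of the loss wrt w
    b -= 0.1 * 2 * err.mean()        # gradient of the loss wrt b

print(w, b)  # converges towards 2 and 1
```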
But it's still computers doing what computers have been good at since the days of vacuum tubes: performing mathematical calculations more quickly than us. Thanks to algorithms like neural networks, this calculative power can learn to emulate us in ways we would never have guessed at. This learning can be applied to any context that can be boiled down to a set of numbers, such that the features of each example are reduced to a row of numbers between zero and one and labelled with a target outcome. The datasets end up looking pretty much the same whether they hold cancer scans or Netflix viewing figures. There's nothing going on inside except maths; no self-awareness and no assimilation of embodied experience. These machines can develop their own unprogrammed behaviours but utterly lack an understanding of whether what they've learned makes sense. And yet machine learning and AI are becoming the mechanisms of modern reasoning, bringing with them the kind of dualism that the philosophy of '68 was set against: a belief in a hidden layer of reality which is ontologically superior and expressed mathematically.
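As a toy illustration (all values invented), two very different domains reduce to exactly the same kind of object, a matrix of feature rows scaled to [0, 1] plus a vector of target labels, and that is all the learning algorithm ever sees:

```python
import numpy as np

# Hypothetical examples: rows are cases, columns are features in [0, 1],
# and the label array holds the target outcome. Once in this form,
# the training code cannot tell one domain from the other.
scan_features = np.array([[0.12, 0.80, 0.33],
                          [0.95, 0.41, 0.07]])
scan_labels = np.array([0, 1])        # e.g. benign / malignant

viewing_features = np.array([[0.51, 0.02, 0.77],
                             [0.30, 0.66, 0.10]])
viewing_labels = np.array([1, 0])     # e.g. cancelled / kept subscription
```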
The delphic accuracy of AI comes with built-in opacity, because massively parallel calculations can't always be reversed to human reasoning, while at the same time it will happily regurgitate society's prejudices when trained on raw social data. It's also mathematically impossible to design an algorithm that is fair to all groups at the same time. For example, if reoffending base rates vary by ethnicity, a recidivism algorithm like COMPAS will produce different false positive rates for different groups, and more black people will be unfairly refused bail. The wider impact comes from the way the algorithms proliferate social categorisations such as 'troubled family' or 'student likely to underachieve', fractalising social binaries wherever they divide into 'is' and 'is not'. This isn't only a matter of data dividuals misrepresenting our authentic selves but of technologies of the self that, through repetition, produce subjects and act on them. And, as AI analysis starts to overcode MRI scans to force psychosocial symptoms back into the brain, we will even see algorithms play a part in the becoming of our bodies.
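The arithmetic behind that impossibility is short enough to show. For any classifier, the false positive rate, the precision of its 'high risk' flag (PPV), the true positive rate and the group's base rate are tied together by a single identity (Chouldechova, 2017). The sketch below, with illustrative numbers of my own choosing, holds precision and true positive rate equal across two groups; once the base rates differ, the false positive rates are forced apart.

```python
# Identity linking a classifier's rates (Chouldechova, 2017):
#   FPR = p/(1-p) * (1-PPV)/PPV * TPR
# where p is the group's base rate, PPV the precision of the
# 'high risk' flag, and TPR the true positive rate.
def false_positive_rate(base_rate, ppv=0.7, tpr=0.6):
    return base_rate / (1 - base_rate) * (1 - ppv) / ppv * tpr

for p in (0.3, 0.5):
    print(f"base rate {p:.1f} -> false positive rate {false_positive_rate(p):.3f}")
# base rate 0.3 -> false positive rate 0.110
# base rate 0.5 -> false positive rate 0.257
```

The only way to equalise the false positive rates here would be to let the calibration of the flag differ between groups instead; one kind of unfairness can be traded for another, but not eliminated.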
What we call AI, that is, machine learning acting in the world, is actually a political technology in the broadest sense. Yet under the cover of algorithmic claims to objectivity, neutrality and universality, there's an infrastructural switch of allegiance to algorithmic governance. The dialectic that drives AI into the heart of the system is the contradiction of societies that are data rich but subject to austerity. One need only look at the recent announcements about a brave new NHS to see the fervour welcoming this salvation. While the global financial crisis was manufactured, the restructuring is real; algorithms are being enrolled in the refiguring of work and social relations, such that precarious employment depends on satisfying algorithmic demands and the public sphere exists inside a targeted attention economy.
Algorithms and machine learning are coming to act in the way pithily described by Pierre Bourdieu, as structured structures predisposed to function as structuring structures, such that they become absorbed by us as habits, attitudes and pre-reflexive behaviours. In fact, like global warming, AI has become a hyperobject so massive that its totality is not realised in any local manifestation, a higher dimensional entity that adheres to anything it touches, whatever the resistance, and which is perceived by us through its informational imprints. A key imprint of machine learning is its predictive power. Having learned both the gross and subtle elements of a pattern, it can be applied to new data to predict which outcome is most likely, whether that is a purchasing decision or a terrorist attack. This leads ineluctably to the logic of preemption in any social field where data exists, which is every social field, so algorithms are predicting which prisoners should be given parole and which parents are likely to abuse their children.
We should bear in mind that the logic of these analytics is correlation. It's purely pattern matching, not the revelation of a causal mechanism, so enforcing the foreclosure of alternative futures becomes effect without cause. The computational boundaries that classify the input data map outwards as cybernetic exclusions, implementing continuous forms of what Agamben calls states of exception. The internal imperative of all machine learning, which is to optimise the fit of the generated function, is entrained within a process of social and economic optimisation, fusing marketing and military strategies through the unitary activity of targeting.
A society whose synapses have been replaced by neural networks will generally tend to a heightened version of the status quo. Machine learning by itself cannot learn a new system of social patterns, only pump up the existing ones as computationally eternal. Moreover, the weight of those amplified effects will fall on the most data-visible, i.e. the poor and marginalised. The net effect is, as Virginia Eubanks' book title has it, the automation of inequality. But at the very moment when the tech has emerged to fully automate neoliberalism, the wider system has lost its best-of-all-possible-worlds authority, and racist authoritarianism metastasizes across the veneer of democracy. The opacity of algorithmic classifications already has a tendency to evade due process, never mind when the levers of mass correlation are at the disposal of ideologies based on paranoid conspiracy theories. A common core to all forms of fascism is a rebirth of the nation from its present decadence, and a mobilisation to deal with those parts of the population seen as the contamination. The automated identification of anomalies is exactly what machine learning is good at, at the same time as it promotes the kind of thoughtlessness that Arendt identified in Eichmann.
So much for the intensification of authoritarian tendencies by AI. What of resistance? Dissident Google staff forced the company to partially withdraw from Project Maven, which develops drone targeting, and Amazon workers are campaigning against the sale of facial recognition systems to the government. But these workers are the privileged guilds of modern tech; this isn't a return of working class power. In the UK and USA there's a general institutional push for ethical AI (in fact you can't move for initiatives aiming to add ethics to algorithms), but I suspect this is mainly preemptive PR to head off people's growing unease about their coming AI overlords. All the initiatives that want to make AI ethical seem to think it's about adding something, i.e. ethics, instead of revealing the value-ladenness at every level of computation, right down to the mathematics.
Models of radical democratic practice offer a more political response through structures such as people's councils composed of those directly affected, mobilising what Donna Haraway calls situated knowledges through horizontalism and direct democracy. While these are valid modes of resistance, there's also the '68 notion, from groups like the Situationists, that the Spectacle generates the potential for its own supersession. I'd suggest that the self-subverting quality in AI is its latent surrealism. For example, experiments to figure out how image recognition actually works probed the contents of intermediate layers in the neural networks, and by recursively applying filters to these outputs produced hallucinatory images straight out of an acid trip, such as snail-dogs and trees made entirely of eyes. When people deliberately feed AI the wrong kind of data it makes surreal classifications. It's a lot of fun, and can even make art that gets shown in galleries, but, like the Situationist dérive through the Harz region of Germany while blindly following a map of London, it can also be a poetic disorientation that coaxes us out of our habitual categories.
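Those hallucinatory images come out of a loop like the one sketched below, a minimal DeepDream-style example assuming PyTorch and torchvision, a pretrained VGG16 and an 'input.jpg' on disk (the layer index, step size and iteration count are arbitrary choices of mine): run the image partway through the network, then alter the image itself so the chosen layer responds more strongly, and repeat until the patterns the network projects onto the image become visible.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Pretrained convolutional network; we only use its feature layers.
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
LAYER = 20   # an arbitrary mid-level layer to amplify

img = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])(Image.open("input.jpg")).unsqueeze(0).requires_grad_(True)

for _ in range(20):
    x = img
    for i, module in enumerate(model):
        x = module(x)
        if i == LAYER:
            break
    activation = x.norm()      # how strongly the layer responds
    activation.backward()      # gradient of that response wrt the pixels
    with torch.no_grad():
        # Gradient *ascent*: change the image to excite the layer more,
        # amplifying whatever the network already 'sees' in it.
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()
```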
While businesses and bureaucracies apply AI to the most serious contexts to make or save money or, through some miracle of machinic objectivity, to solve society's toughest problems, its liberatory potential is actually ludic. It should be used playfully instead of abused as a form of prophecy. But playfully serious, like the tactics of the Situationists themselves, a disordering of the senses to reveal the possibilities hidden by the dead weight of commodification. Reactivating the demands of the social movements of '68: that work become play, the useful become the good, and life itself become art.
At this point in time, when our futures are being cut off by algorithmic preemption, we need to pursue a political philosophy that was embraced in '68: living the new society through authentic action in the here and now. A counterculture of AI must be based on immediacy. The struggle in the streets must go hand in hand with a détournement of machine learning; one that seeks authentic decentralisation not Uber-ised serfdom, and federated horizontalism not the invisible nudges of algorithmic governance. We want a fun yet anti-fascist AI, so we can say "beneath the backpropagation, the beach!".