GLOBAL

Artificial intelligence, research and internet blah blah
In a 2024 article in The New York Times titled “As China’s internet disappears, ‘We lose parts of our collective memory’”, Li Yuan considers the internet as an archive of much of our recent collective memory. That being said, China’s shared knowledge is quite different from that known to most Google users, which is itself shaped by filter bubbles and algorithmically curated information. Although Chinese people are well aware that their internet is ‘different’, a rather more disturbing detail is that large parts of it seem to be disappearing. Yuan quotes a blogger, He Jiayan, summing it up neatly: “We used to believe that the internet had a memory. But we didn’t realise that this memory is like that of a goldfish.”
The philosopher of technology, Bernard Stiegler, spent many of his waking hours thinking about this problem of exteriorised memory and its relationship to digital technology.
For Stiegler, the problem is not simply about the relationship between exteriorised memory and technology, nor about the effects that tools – including digital tools – have on our lives, though these are of course important. Neither is he worried merely about how these technological tools are used, and sometimes controlled, by powerful individuals and organisations such as Amazon, Google and Facebook, or states like Russia and China.
Our relationship to technology is, for him, about something more fundamental, namely technics. What this means is that Stiegler considers the invention of tools and technologies not only as extensions of our abilities, but as ways of imagining and inventing life by means other than biological life.
Technology and exteriorising memory
As the internet illustrates par excellence, technical inventions allow for individual memory to be exteriorised. Long before the advent of the digital, this took place via rituals, tools, music and art. Once exteriorised, individual memory takes on a social aspect that becomes intergenerationally transmissible via what we know as ‘culture’.
The exteriorisation of memory is thus not a problem per se, though for Stiegler it does raise the question of adjustment between the technical and social milieus: when a technical system is radically transformed, as it has been with the digital, there occurs what he calls a phase of disadjustment, which requires collective processes of meaning-making for stabilisation.
In particular, Stiegler conceives of this in terms of automation. To be sure, automation is not in itself a problem – all societies, individuals and even biological cells deal with sets of automatisms. In some ways, automation is the basis of life.
The problem, then, lies instead with the way in which digital automations short-circuit the intentional functions of the mind. This applies also to pedagogical milieus which, due to digitality, and especially the recent introduction of ChatGPT, are themselves in stages of disadjustment and hyper-automation.
In the same way that the social requires collective intervention, so too does the research environment demand academic narratives that refuse the mindless externalisation of our collective knowledges. For although many have hailed ‘AI’ as revolutionary for research practices, Emily Bender and her fellow researchers rightly argue that there is nothing intelligent about AI.
Large language models (LLMs) like ChatGPT rely on machine learning methods that probabilistically cluster patterns in data, stitching together linguistic chains that have very little, if any, reference to considered meaning, yet are taken to exhibit some level of intelligence.
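The mechanism Bender and her colleagues point to can be made concrete with a deliberately crude sketch in Python. This is a bigram chain, orders of magnitude simpler than any actual LLM and purely illustrative (the corpus and seed here are invented for the example): every continuation is chosen by frequency alone, with no access whatsoever to what the words mean.

```python
import random
from collections import Counter, defaultdict

# A toy corpus standing in for the vast archive of externalised text an LLM is trained on.
corpus = ("we used to believe that the internet had a memory "
          "but the internet forgets and the memory fades").split()

# Record which word follows which: pure pattern frequency, no meaning involved.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

# Stitch together a 'linguistic chain' by repeated probabilistic lookup:
# each next word is drawn in proportion to how often it followed the last one.
rng = random.Random(0)  # fixed seed so the sketch is reproducible
chain = ["the"]
for _ in range(6):
    options = follows[chain[-1]]
    if not options:  # dead end: the pattern store holds no continuation
        break
    words, counts = zip(*options.items())
    chain.append(rng.choices(words, weights=counts, k=1)[0])

print(" ".join(chain))
```

Scaled up from word pairs to billions of parameters, the principle is the same: the model reflects the statistical shape of its training text back at us, which is precisely why it is better described as pattern reproduction than as intelligence.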
Consider that LLMs are trained on our externalised memories and knowledges, implying that in lieu of producing knowledge, they are reflecting our collective consciousness back at us – or at least an algorithmically altered version thereof.
This is not knowledge production; it is a shortcut that bypasses the processes of knowledge production in favour of the end product. So, when a student or researcher turns to ChatGPT to produce their class assignments or research projects, they become accustomed to the algorithmically modulated rendering of knowledge production in place of the authentic experience of a revelatory moment – the ‘Eureka!’ moment.
The three stages of proletarianisation
For Stiegler, this kind of automation of the deliberative functions of the psyche leads to what he calls a generalised proletarianisation.
Stiegler identifies three stages of proletarianisation.
The first, in reference to Marx’s work, describes the externalisation of workers’ knowledge into machines which, while having some benefits, also provokes a loss of work-knowledge (savoir-faire) and its intergenerational transfer through mechanisation. This period is typically described under the rubric of the first industrial revolution.
The second stage, although not entirely replacing the first, occurs with the wide application of mass production, or Taylorism-Fordism. In this instance, the externalisation of memory is more subtle because it does not have the same direct relation between human and machine.
The shift occurs rather at the level of desire, specifically the desire to be a consumer. For Stiegler, consumption is accompanied by the loss of life-knowledge (savoir-vivre), meaning that material and psychological security come to be mediated by purchasing power rather than by collective practices of care.
The third phase is the one we find ourselves in now which, through a mixture of the financialisation of life and the arrival of digitisation, has led to a generalised proletarianisation and the loss of conceptual knowledge (savoirs théoriques) due to the continuous and near-automatic externalisation of memory into devices like cell phones.
What is certain is that we have not even begun to understand the short- and long-term implications of the kinds of cognitive infrastructures by which our collective memory and knowledge production become eclipsed by processes of recursive optimisation and predictive performance.
Given this, it is no wonder that Stiegler describes the processes by which our memories become externalised and automated in terms of a trauma or technological shock. Trauma is, of course, a natural part of life, though it requires care if it is not to become pathological.
But how can we create caring worlding practices – including those related to research and pedagogy more generally – when people far and wide are always already dreaming of the next stages of their self-externalisation? And who can blame them? With so much complexity to navigate and so few shared practices of meaning to ameliorate the sense of an ending, it is no wonder multitudes opt for ‘Netflix and chill’ in onesies instead.
Reparative hermeneutics
Diagnosing and not turning away from the uncomfortable feelings and situations produced by, for example, contemporary digitised life is what the theorist Eve Kosofsky Sedgwick calls a reparative hermeneutics.
This is a practice that replaces mimetic and anticipatory paranoid readings of suspicion – and the overdetermined geographical, historical, political and other relations they feed on – with an emendatory exploration aimed at transformation through the creation of communal resources of care and sustenance.
This is not an end-product option; it is a process that calls on us – as collectives, as researchers and as pedagogues – to undo our own numbness by decolonising the algorithmic unconscious in each of us, beginning by resisting mass media homogenisation and technological overdependence and addiction.
It is a call, moreover, to resist the ‘blah blah’ of the internet: the noise that overshadows what is really happening to our collective memory, which is far, far worse than the disappearance of chunks of the internet in China.
Chantelle Gray is a professor in the school of philosophy, faculty of humanities, and chair of the Institute for Contemporary Ethics at North-West University in South Africa.