The University of Notre Dame’s Fremantle campus is marbled into the city’s West End, which means the latter goes eerily quiet during the long summer break between semesters. Located at the southern end of the port – still a working one, despite various plans to move it – and dominated by Georgian and Victorian architecture, the West End begins to resemble an old photograph of itself, as the students return home and the bustle yields to a slower pace. Yes, the cafes, boutiques and cocktail bars continue to operate across the hiatus, drawing in holidaymakers as they meander towards the Round House – a former jail built on Bentham’s notorious panopticon design and the first permanent building of the Swan River Colony. But otherwise the area feels strangely quiet – not a ghost town, exactly, but in some sense bereaved, denuded of the human presence that brings it to life.
Still, there is a distinctive buzz within the university itself, as tutors and tenured academics return to prepare for the new semester. Like an army that’s received intelligence of a new and frighteningly effective weapon, the teaching staff is all aflutter at the next big thing in artificial intelligence – a development so potentially ruinous to current modes of academic practice that it will make the trade in “contract cheating” look as old-fashioned as crib notes on the palm of your hand. As essay questions are set and course outlines prepared, everyone involved in the transfer of knowledge is acutely aware that the terrain has changed, and everyone is chatting about ChatGPT.
Launched in November 2022, ChatGPT is an AI-powered chatbot that uses deep-learning algorithms to generate responses to natural-language prompts. Trained on a massive database of materials, it can instantly synthesise original content in the form of answers to specific questions, essays on particular subjects, literary parodies, scripts, et cetera, in a way that leaves the Turing test decisively in humanity’s rear-view mirror. That’s to say, there is very little in its responses that would give it away as a non-human actor, as countless journalists have already demonstrated by reproducing snippets of AI-generated content in their articles and challenging readers to spot the difference. (Given the job losses to which ChatGPT and its equivalents are likely to lead, this is perhaps an unwise strategy.) Certainly, there is nothing in the various essays I’ve asked it to write that would raise any eyebrows, other than the program’s stubborn insistence on not confusing “then” with “than” or “alternate” with “alternative”. There really is no getting around it: ChatGPT is brilliant, a game-changer.
Not that the tertiary sector necessarily regards it as a problem. For the moment, opinion on ChatGPT appears to be split between those who deem it a potential tool in research and academic writing, and those (like me) who regard it as a challenge – possibly an existential challenge – to a certain model of education. The former group is undeniably the smaller one, though their opinions have been over-represented in the media recently, perhaps because of the fetish for “balance”. In essence, those opinions boil down to the charge that concerns about ChatGPT are a kind of Luddism, no different from the panics that greeted the printing press, television, the internet or Wikipedia (still lazily maligned in academic circles as a reliable source of unreliable information). As such, the former group is channelling what is sometimes termed the “instrumental” view of technology, which characterises all tools and techniques as fundamentally neutral phenomena, used by humans to achieve their ends, as opposed to phenomena with the power to shape the culture in which those ends are framed. The instrumental view is very much favoured in Silicon Valley, and amounts to a reflexive belief in progress, even a kind of fatalism that strikes me as borderline nihilistic. It also tends to assume implicitly that the human brain is itself a “technology” that can be retooled for greater efficiency. For example, when defenders of new technologies say that their use will free up space for students to dedicate to other tasks, they are reproducing the very conceptual model – call it “the brain-computer model” – that led the adepts of AI to pursue the dream of natural language processing in the first place. They are also sorely mischaracterising the process by which human intelligence is created.
To my mind, this goes to the principal challenge of ChatGPT, which has less to do with accuracy, or bias, or toxic speech, than it does with its potential to inflict yet another defeat on our agency, and thus our capacity for freedom and flourishing. As the author and artist James Bridle has argued, the information revolution has brought about a “new dark age” in which the price of increasingly smart devices is increasingly ignorant human beings. It isn’t only working practices that are disappearing into algorithmic machines; our practical understanding of the world – of how things fit together – is disappearing too, with the result that the world is becoming a “black box”, opaque to its human inhabitants. Despite the tech-bros’ transhumanist dreams, and notwithstanding Elon Musk’s warnings about the existential risks of “strong” AI, few of us are daft enough to think that ChatGPT is actually thinking. The problem is that anyone using it isn’t really thinking either. And since thinking is still a university’s reason for being (even if only semi-officially), that’s a problem that outsoars any narrow concerns about accuracy or plagiarism. It’s a challenge to the university itself, or to the liberal conception of it, and to the society it is assumed to serve.
It follows that the emergence of ChatGPT presents an opportunity to think about the role of technology in formal education more broadly. What is education for, after all? If its aim is to create workers who can use AI, then teaching to ChatGPT makes sense. But if its aim is to create thinking individuals, whose ability to think is bound up with their flourishing, then it might be better to exercise caution. Nor should we stop at ChatGPT. We might extend our re-evaluation to encompass other technologies, some of which have insinuated themselves still further into academia as a consequence of the COVID lockdowns. (I don’t know a single academic who thinks that online classes are as effective as face-to-face ones, or anyone who thinks that such teaching practices are beneficial to student mental health.) We might even think about writing itself, or at least about the prominence given to it in modern educational practice. One common move of the Pollyannas is to invoke the example of Socrates, who regarded writing as deleterious to the memory and a corruption of intellectual enquiry, as a text can neither clarify nor modify its arguments in the way that a human being can. What could be more regressive, they ask, than being against the alphabet? How revealing of the technophobic spirit! But one doesn’t need to go all the way with the gadfly of Athens to see that he was right about some things (writing does cause the memory to atrophy), or that he was asking a pertinent question: what does this technology give us and what does it take away? We Cassandras can be a reactionary bunch, and of course it’s always necessary to ask whether one has mistaken the state of the world for the state of one’s own lower back. Nevertheless, the Pollyanna approach strikes me as naive. 
At the very least, it’s surely worth reflecting that Socrates, as well as originating what we might call the “techno-critical” tradition, was also the progenitor of the dialectical method of reasoning on which the liberal university is founded.
Of course, I’m not suggesting that the modern university should be reconceived as an Athenian agora. But as we move ever deeper into the era of technoscientific capitalism – an era in which it will be possible not only to reproduce human speech but also to reproduce human beings themselves, and not by the conventional method of meeting at the student bar – we need to rediscover our capacity to evaluate emerging technologies in the spirit of what Lewis Mumford called “democratic technics”, by placing human freedom and flourishing at the centre of our deliberations. That this capacity is the very one that ChatGPT would remove, if pressed into service for unscrupulous ends, makes this a matter of urgency. We need to start thinking about thinking machines, on pain of a future society in which we become uncanny to one another – spectres haunting a silicon city, denuded of the flesh-and-blood others without whom we can never be fully human.