The start of term is fast approaching. Parents are starting to worry about packed lunches, uniforms and textbooks. School leavers heading to university are wondering what welcome week will be like. And some professors, especially in the humanities, are anxiously wondering how to handle students who are already more adept with large language models (LLMs) than they are.
They have good reason to be worried. “If the first year of AI college ended with disappointment, the situation has now descended into absurdity,” says Ian Bogost, a professor of film and media studies and of computer science at Washington University in St. Louis. “Teachers struggle to continue teaching while wondering whether it is the students or the computers they are grading. Meanwhile, the race to cheat with AI, and to detect that cheating, continues unabated in the background.”
As expected, the arms race is already intensifying. The Wall Street Journal recently reported that “OpenAI has a method to reliably detect when someone uses ChatGPT to write essays and research papers, but the company has not released it, despite widespread concerns that students are using artificial intelligence to cheat.” This refusal has infuriated those sectors of academia that fondly imagine the “cheating” problem must have a technical solution. Apparently they have not read the Association for Computing Machinery’s statement of principles for the development of systems that detect generative AI content, which reads: “Reliably detecting the output of generative AI systems without embedded watermarks is beyond the current state of the art, and that is unlikely to change within a foreseeable timeframe.” And while digital watermarks are useful, they bring problems of their own.
The LLM is a particularly pressing problem for the humanities because the essay is a critical pedagogical tool in teaching students how to research, think, and write. Perhaps more importantly, the essay also plays a central role in grading. Unfortunately, the LLM threatens to make this venerable pedagogy unviable. And there is no technological solution in sight.
The good news is that the problem is not insurmountable if educators in these fields are willing to rethink and adapt their teaching methods to fit new realities. Alternative pedagogies are available. But it will require two changes of thinking, if not a change of heart.
The first is the recognition that the LLM is, as the eminent Berkeley psychologist Alison Gopnik puts it, a “cultural technology,” like writing, printing, libraries, and internet search. In other words, the LLM is a tool that helps human beings navigate their society and culture. It augments us; it does not replace us.
Second, and perhaps more importantly, the value of writing as a process needs to be reinstated in students’ minds. I think it was E.M. Forster who said that there are two kinds of writers: those who know what they think and write it down, and those who find out what they think by trying to write it. Most of humanity belongs to the latter category. That is why the process of writing is so good for the intellect: it teaches you to construct a coherent line of argument, select relevant evidence, find useful sources and inspiration, and, most importantly, express yourself in clear, readable prose. For many people, none of that comes easily or naturally, which is why students turn to ChatGPT even when they are merely asked to write 500 words introducing themselves to their classmates.
Josh Blake, an American academic who has written sensible books about engaging with AI rather than trying to “integrate” it into the classroom, thinks it is worth making the value of writing as an intellectual activity abundantly clear to students: “If students don’t understand the value of writing as an intellectual process, then naturally they will be tempted to outsource the labour to an LLM. And if writing (or any other task) is really just about the deliverable, why not? If the means to an end don’t matter, why not outsource them?”
Ultimately, the problems that LLMs pose for academia can be solved, but solving them will require new thinking and different approaches to teaching and learning in some fields. The bigger problem is the glacial pace at which universities move. I know this from experience. In October 1995, the American scholar Eli Noam published a remarkably prescient article in Science, “Electronics and the Dim Future of the University.” Between 1998 and 2001, I asked every vice-chancellor and senior university leader I met in the UK what they thought of it. None of them had even heard of it.
Still, things have improved since then: at least now everyone knows about ChatGPT.
What I’m Reading
Online crime
Ed West wrote a fascinating blog post about sentences handed down for online posts during the Southport stabbing riots, highlighting the inconsistencies in the UK justice system.
Bannon’s dharma
An interesting interview in the Boston Review in which the documentarian Errol Morris discusses Steve Bannon’s dangerous “dharma”: his sense of being part of the inevitable unfolding of history.
Online forgetting
A sobering article by Neil Firth in MIT Technology Review about efforts to preserve digital history for future generations in a world of ever-growing data.