As the professor who heads Arizona State University’s writing program prepares for the fall semester, the responsibility is immense: 23,000 students take writing courses under his supervision each year. With AI tools now able to produce high-quality college essays in seconds, instructors’ jobs are even tougher than they were just a few years ago.
Just one week after ChatGPT launched in November 2022, The Atlantic declared that the “college essay is dead.” Two years later, Kyle Jensen has recovered from his grief and is ready to move forward. The tall, affable English professor co-runs a National Endowment for the Humanities–funded project on generative-AI literacy for humanities faculty, and he is incorporating large language models into his English courses at Arizona State. Jensen is part of a new breed of faculty who are willing to embrace generative AI while resisting its temptations. He is a firm believer in the value of traditional writing, but he also believes AI could facilitate education in new ways. In Arizona State’s case, it’s a way to improve access to higher education.
But his vision must overcome the harsh realities of college campuses. The first year of the AI college was marred by students testing the limits of the technology and catching faculty off guard. Cheating was rampant. Tools to identify computer-written essays proved inadequate for the task. Academic-integrity committees found they couldn’t fairly adjudicate uncertain cases: students who used AI for legitimate reasons, or even those who merely consulted grammar-checking software, were deemed cheaters. So faculty asked students not to use AI, or at least to tell them when they did, hoping that would be enough. It wasn’t.
Now that the AI college is in its third year, the problem seems just as intractable. When I asked Jensen how the 150-plus instructors who teach writing classes at Arizona State are preparing for the new semester, he was quick to mention his concerns about cheating. Many instructors, he said, have messaged him about recent cheating cases. The Wall Street Journal had just published an article about an unreleased OpenAI product that can detect AI-generated text; the company’s decision not to release such a tool has left beleaguered faculty members scratching their heads.
ChatGPT arrived at a vulnerable time on college campuses, with faculty still reeling from the coronavirus pandemic. Jensen said the university’s response of relying on honor codes to deter misconduct worked to some extent in 2023 but is no longer enough. At Arizona State and other universities, he said, people are calling for a coherent plan.
In early spring, I spoke with a professor of writing at a Florida school who was so demoralized by student cheating that he had given up and was about to look for a tech job. “I was so devastated,” he told me at the time. “I loved teaching and enjoyed my time in the classroom, but ChatGPT made it all seem pointless.” When I reached him again this month, he told me he’d sent out loads of résumés, but nothing had come of them. As for teaching, things were only getting worse. He said he’d lost the trust of his students, and that generative AI had “almost ruined the integrity of online classes,” which are becoming more common as schools like Arizona State try to expand access. No matter how small the assignment, many students use ChatGPT to complete it. “Students even submit ChatGPT responses to prompts like ‘Describe yourself in 500 words or less,’” he told me.
If the first year of the AI college ended in disappointment, the situation has since descended into absurdity. Teachers struggle to continue teaching while wondering whether they are grading students or computers, and the race to detect AI cheating drags on. Technologists have tried new ways to curb the problem; the Wall Street Journal article describes one of several frameworks. OpenAI has experimented with hiding a digital watermark in its models’ output, which can later be detected to show that a particular piece of text was created by AI. But watermarks can be tampered with, and detectors built to look for them can only check for watermarks created by a particular AI system. This may be why OpenAI has chosen not to expose its watermarking capability publicly; doing so would simply push customers toward services without watermarks.
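To make the idea concrete: in published watermarking research (not necessarily OpenAI’s unreleased method), the generator secretly biases its word choices according to a rule that only a matching detector knows, and the detector later measures that bias. The sketch below is a minimal, hypothetical illustration in that spirit; the hashing rule and the 0.65 threshold are invented for demonstration. It also shows why detectors are system-specific: a detector using a different rule or key would see nothing.

```python
import hashlib

def is_green(prev_word: str, word: str) -> bool:
    # Deterministically assign `word` to a "green" half of the vocabulary,
    # keyed on the previous word. Only a detector with the same rule can check.
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0  # roughly half of all word pairs come out green

def green_fraction(text: str) -> float:
    # Fraction of words that land in the green list, given their predecessor.
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    return hits / (len(words) - 1)

def looks_watermarked(text: str, threshold: float = 0.65) -> bool:
    # Ordinary human text should hover near 0.5; a generator that secretly
    # favors green words pushes the fraction well above that.
    return green_fraction(text) > threshold
```

A watermarking generator would steer its sampling toward green words; paraphrasing the text scrambles the word pairs, which is why the article notes that watermarks can be tampered with.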
Other approaches have been tried. Researchers at Georgia Tech devised a system to compare how students answered a particular essay question before ChatGPT was invented with how they answer it now. A company called PowerNotes has integrated OpenAI’s service into a Google Docs–style editor with change tracking for AI-made edits, allowing instructors to see all of ChatGPT’s additions to a particular document. But such methods have not been proven in the real world or have limited ability to prevent cheating. In a formal statement of principles on generative AI published last fall, the Association for Computing Machinery argued that “reliably detecting the output of a generative AI system without an embedded watermark is beyond the current state of the art and is unlikely to be altered in any foreseeable timeframe.”
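The PowerNotes-style tracking is, at bottom, provenance bookkeeping: log what the AI inserts at the moment of insertion, then check the final document against that log. The product’s internals are not public, so the sketch below is only a guess at the general shape, using Python’s standard library; flag_ai_passages and the 0.85 cutoff are invented for illustration.

```python
import difflib

def flag_ai_passages(final_doc: str, ai_log: list[str], cutoff: float = 0.85) -> list[str]:
    # `ai_log` holds every snippet the AI inserted, recorded at insertion time.
    # Fuzzy matching tolerates light student edits to an AI-written sentence.
    flagged = []
    for sentence in final_doc.split(". "):
        if difflib.get_close_matches(sentence, ai_log, n=1, cutoff=cutoff):
            flagged.append(sentence)
    return flagged
```

An instructor-facing view would then highlight the flagged sentences, much as the article describes ChatGPT’s additions being shown in the document. The obvious limit: the log only sees text inserted through the tool, not text pasted in from elsewhere.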
This inconvenient fact won’t slow the arms race. One of the generative AI providers will probably release a watermarked version. They will also release an expensive service that universities can use to detect the watermark. To justify purchasing that service, universities may institute policies forcing students and faculty to use their chosen generative AI provider in their classes. Enterprising cheaters will devise workarounds, and the cycle will continue.
But giving up doesn’t seem to be an option. If college professors seem fixated on student cheating, it’s because cheating is so prevalent, and it was prevalent even before ChatGPT came along. Historically, surveys have estimated that more than half of high-school and college students have cheated in some way. As of early 2020, nearly a third of undergraduates admitted in a survey to cheating on exams, according to a report from the International Center for Academic Integrity. “I’ve been fighting Chegg and Course Hero for years,” said Hollis Robbins, dean of humanities at the University of Utah, referring to two “homework help” services that were hugely popular until OpenAI upended their businesses. Professors, she said, are still assigning the same paper topics they have for decades: the five senses, or the many themes of Moby-Dick. For a long time, students had to buy matching papers from Chegg or dig them out of dorm files; ChatGPT simply offers another option. Students believe cheating is wrong, but opportunity and circumstance prevail.
Students are not alone: some teachers feel generative AI might solve their problems, and they, too, are using the tool to improve the quality of their classes. Even last year, more than half of K–12 teachers used ChatGPT for course and lesson planning, according to one survey. Another survey, conducted just six months ago, found that more than 70 percent of higher-education teachers who regularly use generative AI do so to give grades and feedback on student work. And the tech industry is giving them the tools to do so: in February, the education publisher Houghton Mifflin Harcourt acquired Writable, a service that uses AI to comment on elementary-school students’ papers.
Jensen acknowledged that even before AI, Arizona State’s cheating-wary writing instructors were overwhelmed; some were teaching five courses with as many as 24 students apiece. (The Conference on College Composition and Communication recommends no more than 20 students per writing course, with 15 ideal, and warns that overburdened instructors can be “overstretched and unable to effectively engage with student writing.”) John Warner, a former college writing instructor and the author of a forthcoming book, More Than Words: How to Think About Writing in the Age of AI, worries that the mere existence of these course loads will encourage teachers and their institutions to use AI for efficiency, even if it means depriving students of better feedback. “If we can prove that teachers can serve more students with a new chatbot tool that gives them roughly the same mediocre feedback they were getting before, then that could be a success,” he told me. In the most farcical version of this setup, students would be incentivized to use AI to generate their assignments, and teachers would respond with AI-generated comments.
Stephen Aguilar, a professor at the University of Southern California who studies how educators use AI, told me that many educators simply want room to experiment. Jensen is one of them. Because ASU’s goal is to expand affordable access to education, he feels that AI doesn’t have to be a compromise. Rather than giving students a way to cheat or faculty an excuse to disengage from teaching, AI might open up possibilities for expression that would never have arisen otherwise. In his words, “a path through the woods.” He told me about an introductory English course in ASU’s Learning Enterprise program, which offers online learners a path to college admission. Students start by reading about AI and studying it as a contemporary phenomenon. They then write about the pieces they read and use AI tools to critique and improve their work. Rather than focusing on the essays themselves, the course concludes with a reflection on the AI-assisted learning process.
Robbins said the University of Utah is taking a similar approach. She showed me the syllabus for a college writing course in which students will use AI to learn “what makes writing compelling.” In addition to reading and writing about AI as a social issue, students will read literary works and use ChatGPT to generate corresponding pieces in the same style and genre. They will then compare the AI-generated works with the human-written ones to look for differences.
But Warner has a simpler idea. Instead of making AI both a subject and a tool in education, he suggests that teachers update how they teach the basics. One reason AI can so easily produce passable college papers is that those papers tend to follow a strict, almost algorithmic format. Writing instructors are in a similar position: because of the sheer amount of work they have to grade, the feedback they give students is also nearly algorithmic. Warner thinks teachers can address both problems by scaling down what they ask for in assignments. Rather than assigning full-length papers that are supposed to stand alone as essays or arguments, he suggests giving students shorter, more specific prompts tied to useful writing concepts: a paragraph of vivid prose, say, or a clear observation about something they saw, or a few lines that translate a personal experience into a general idea. Can students still use AI to complete this kind of work? Sure, but they would have less reason to cheat on a specific assignment that they understand and want to accomplish through their own efforts.
“I long for a world where we stop getting excited about generative AI,” Aguilar told me. If that happens, he believes, we’ll finally understand what it’s good for. In the meantime, deploying more technology to combat AI misbehavior will only prolong the arms race between students and teachers. Universities would be much better off doing something, anything, differently in how they teach and what students learn. It may not be in the nature of these institutions to evolve, but AI’s impact on campus should at least make them consider it. “If you’re a literature professor and you’re still asking for the major themes of Moby-Dick, then shame on you,” Robbins said.
If you buy a book through a link on this page, we receive a commission. Thank you for supporting The Atlantic.