Universities need a more honest approach to ChatGPT and academic dishonesty

By: Daniel Arauz Nuñez

ChatGPT hasn’t made classrooms worse. It has highlighted problems that students, professors, and teaching assistants know all too well.

Last year, the launch of OpenAI’s ChatGPT triggered widespread warnings of the “inevitable” disruption large language models (LLMs) would cause. Academic assignments were perhaps the most straightforward fixation, as students could now generate passable essays on any topic in seconds. Western University is among many Canadian universities that have created “cheating” task forces to gauge how university policy should come down on the use of LLMs, and to advise faculty on how to create assignments that either accommodate “appropriate” uses of ChatGPT or have parameters and requirements that could not be convincingly met with the programs.

But the discourse surrounding this threat of “academic dishonesty” amongst students has largely failed to genuinely empathize with the motivations students might have for using ChatGPT. We still do not have a systematic study of the ways in which students are using these tools. Media coverage and the messaging from some university faculty and administration suggest that this inquiry only matters insofar as it helps us “catch” dishonest undergraduates.

As a Graduate Student Teaching Assistant (GTA) for courses in Media Studies, I have found our role in the ChatGPT panic to be a silent one. Our only instructions from course instructors have been to flag “suspicious” essays from students. Current academic plagiarism detection tools like Turnitin are inaccurate and vague about how they detect LLM-produced text. In practice, this means that if a student writes with greater clarity than on their in-class tests or quizzes, we may ask instructors to take a closer look, but there is rarely sufficient evidence to say anything for certain.

The panic is understandable, but the reality is that for both instructors and GTAs, re-orienting syllabi to emphasize in-person exams and “GPT-proof” assignments often produces more administrative planning and marking. While revisiting the pedagogical goals of assignments has been fruitful in some courses, universities fail to provide adequate policy guidelines, pedagogical guidance, or monetary compensation for the additional work that “ChatGPT vigilance” requires of instructors.

Though I cannot speak for my colleagues, I think it should come as no surprise that we as GTAs have little capacity to read the work of undergraduates with this skeptical and investigative eye. We are already working for poverty wages. We are already pulled between multiple responsibilities, including multiple jobs, to survive. The last thing we can worry about is whether a student, who often shares similar material struggles, might have consulted ChatGPT to produce a 1,000-word essay.

I say all of this as someone who, even in my limited role as a GTA, would love nothing more than to take the time my students require to give them the guidance they need to thrive, and to aid in building a classroom that has the resources and capacity to help students in the humanities and social sciences with their writing. Though I strongly believe in the benefits of using classes as an opportunity to practice this craft that I love so much, most students have their time pulled away from our classes in every direction. Students are working part-time, even full-time, jobs while taking our classes. Students live in expensive and precarious rental housing. For many more, the prospects of a secure career following their studies are uncertain. Moreover, GTAs are increasingly pressured to provide minimal feedback so that we do not exceed the very limited and undercompensated hours allotted to marking. We cannot give students’ essays the close reading and feedback they deserve, and I think students feel these circumstances render their written work disposable, a necessary means to obtaining their marks and moving on.

I say all of this as someone who feels very strongly that the use of LLMs to produce written work is an affront to the craft I love most. We need to reject claims that LLMs will render the craft of writing obsolete, and reject the notion that the future of writing will belong to “prompt engineers”. But we cannot place the blame on students for not valuing academic writing the way that instructors and GTAs hope they might.

But as someone who values the process of writing, I still understand why undergraduate students might sacrifice their own learning and knowingly take on the risk of using ChatGPT to produce their writing assignments. Even within the humanities and social sciences, where written and oral communication of course theory and concepts are the principal modes of assessment, I fear that we repeatedly demonstrate to students that the process of writing, the critical thinking required to effectively put thoughts to words and words to pages, does not matter.

All the joy and pain of the craft of writing, in my view, is found when we are staring at a blank page. It is the greatest challenge, one that I struggle to admit has gotten any easier. But the only thing that has allowed me to continue to produce the work I have is to approach the task at hand with honesty about my own limitations as both a thinker and a writer. It is the reflectiveness that comes out of every essay, article, or journal entry that makes all the deep frustration and love I hold for this medium worthwhile. We cannot let the fixation on catching cheaters obscure the fact that the academy has already produced an environment where students are unlikely to feel attended to and rewarded for their writing efforts. With the perspectives of students, GTAs, and instructors, and meaningful material support from university administration, we have an opportunity to reverse course.

Image c/o: Tim Guow via Unsplash