We’re looking to broaden our understanding of the new generative technologies impacting the work of staff and students across Higher Education, and a large part of this is through reading. This is the second of a series of short posts that aim to highlight a piece that we’ve found in some way thought-provoking, and our response(s) to it. Obviously, things are changing apace, and we want to emphasise that our thinking is also in development, always subject to fluctuation and new influences as we continue to learn – just as it should be!
By Rosemary Pearce
Do those of us in Higher Education need to redefine what we mean by authorship, now that GenAI is becoming intertwined with research and writing processes (and in ways that we might not even realise¹)? If so, what should claiming and clarifying authorship look like? In a Comment article in Nature Machine Intelligence, Porsdam Mann et al. propose an “ethical framework consisting of three criteria for the responsible use of LLMs in Scholarship” that aims to address “fundamental questions about authorship attribution and the nature of original work.” The three criteria (though here we will be referring to only the first) are:
- Human vetting and guaranteeing
- Substantial human contribution
- Acknowledgement and transparency
Reading about the first of the criteria particularly resonated with conversations our team have been having lately about responsibility in written work. The idea that authors need to be able to “vouch for the accuracy and integrity of LLM-assisted work” through thoroughly checking all “claims, arguments, and evidence presented in the text, verifying their accuracy against reliable sources, and correcting any errors or misleading statements” is a way of conceptualising authorship that doesn’t preclude GenAI being involved in nearly every stage of the research and writing process.
Our team always views what we read through a learning and teaching lens, so we’ve been wondering whether this “responsibility” conceptualisation of authorship might soon shape how those of us in Higher Education think about student assessment, especially if Porsdam Mann et al.’s framework, or something similar, is widely adopted in research publishing. Such a change would have implications for academic integrity policies, influencing how institutions approach originality detection and student assessment declarations. These areas are not part of our daily roles within the team, but we have found that discussions of the impacts of GenAI tend to lead us rapidly to “big picture” topics within HE that we haven’t often discussed before!
We’ve also been thinking about whether this understanding of authorship is a potential response to the issue raised by Jiahui Luo (Jess) in her 2024 critical review of GenAI HE policies. Here she argues that these policies frame the challenge GenAI presents as one in which GenAI tools are “a type of external assistance” that undermines the originality of student work. Yet, she says, this conceptualisation of the problem:
fails to acknowledge how the rise of GenAI further complicates the process of producing original work and what it means by originality in a time when knowledge production becomes increasingly distributed, collaborative and mediated by technology.
Now that GenAI features prominently in the set of tools that students have access to for their work, can we in Higher Education hold onto a prescriptive, traditional notion of authorship and originality? Should efforts be refocused on helping students to make informed decisions about how they use digital tools in their work, and to be able to confidently vouch for what they produce? Our team can’t predict what consensus might develop on GenAI with regard to student assessment, but the popularisation of GenAI tools has prompted us, and HE more widely, to think about and perhaps reevaluate how we interact with the fundamental skills and concepts of our sector.
Luo (Jess), J. (2024). A critical review of GenAI policies in higher education assessment: a call to reconsider the “originality” of students’ work. Assessment & Evaluation in Higher Education, 49(5), 651–664. https://doi.org/10.1080/02602938.2024.2309963
Porsdam Mann, S., Vazirani, A. A., Aboy, M., et al. (2024). Guidelines for ethical use and acknowledgement of large language models in academic writing. Nature Machine Intelligence. https://doi.org/10.1038/s42256-024-00922-7
¹ For more on the integration of GenAI tools into the software we’re already using see, for instance, Justus and Janos (2024), “Your AI Policy Is Already Obsolete”, Inside Higher Ed.