Collusion, plagiarism, or contract cheating? How generative AI fits into existing academic integrity policies
Author Identifier (ORCID)
Miriam Sullivan: https://orcid.org/0000-0002-9870-2734
Abstract
Universities are increasingly accepting the use of generative AI tools to aid student learning; however, many still consider unacknowledged use, or the submission of wholly AI-generated content, to be academic misconduct. Many have tried to fit unauthorised and unacknowledged use of AI tools into their pre-existing academic integrity policies and procedures. This raises the question of how misuse of AI should be classified as a form of misconduct: is it plagiarism, collusion, contract cheating, or something new? The classification matters, as most procedures use the form of misconduct to determine the level of seriousness and the penalties applied. In this chapter, I argue that retrofitting existing academic integrity policies does not sufficiently equip universities or students to manage the impacts of generative AI. Instead, the seriousness of misconduct should be determined by considering its impact on student learning.
Keywords
Academic integrity, academic misconduct, digital literacy, fabrication, generative artificial intelligence, policy
Document Type
Book Chapter
Date of Publication
1-1-2025
Publication Title
Handbook of Artificial Intelligence in Higher Education
Publisher
Edward Elgar Publishing
School
Centre for Learning and Teaching
Copyright
Subscription content
First Page
372
Last Page
386
Comments
Sullivan, M. (2025). Collusion, plagiarism, or contract cheating? How generative AI fits into existing academic integrity policies. Handbook of artificial intelligence in higher education (pp. 372–386). Edward Elgar Publishing. https://doi.org/10.4337/9781035338764.00033