Academic Integrity in the Context of AI
Generative AI has intensified concerns about academic integrity, primarily by increasing uncertainty about how student work is produced and how misconduct should be interpreted. This page explains what academic integrity and academic misconduct mean, summarizes evidence on the prevalence of misconduct before and after widespread use of AI tools, and highlights what instructors can do to promote learning, trust, and ethical use of AI.
What Is Academic Integrity?
Academic integrity refers to a commitment to the core values of honesty, trust, fairness, respect, responsibility, and courage. These values enable:
- Consistent and fair evaluation of student work
- Appropriate recognition for student effort and learning
- Credibility and trust in the learning process
Academic integrity is therefore not just about rule compliance, but also about cultivating ethical reasoning and action.
What Is Academic Misconduct?
Academic misconduct refers to any action that violates principles of academic integrity. Âé¶¹Ãâ·Ñ°æÏÂÔØBoulder's Honor Code treats any behavior that could result in an unfair academic advantage as academic misconduct. Examples include:
- Using unauthorized materials or tools (e.g., notes, study aids, generative AI tools)
- Portraying another's work as one's own
- Aiding misconduct, such as by sharing homework with peers or uploading course materials to third-party sites without the instructor's permission
You can review this short primer on Âé¶¹Ãâ·Ñ°æÏÂÔØBoulder's Honor Code to learn more.
Is AI Increasing Academic Misconduct? What the Evidence Says
Academic misconduct is not a new problem. Research across multiple decades shows that misconduct rates have consistently exceeded 45%, sometimes reaching as high as 88%.

Disruptions to assessment practices can temporarily change rates of academic misconduct. Examples include:
- COVID-19-related shifts: The increased adoption of remote or asynchronous exams was associated with reported misconduct rates often exceeding 54%. Nonetheless, these estimates fall within the range of previously reported rates of academic misconduct.
- AI-related shifts: Since the release of ChatGPT, some studies report that unauthorized use of AI has increased fourfold, with up to 45% of students reporting using AI in ways that are explicitly prohibited by course policies.
However, multiple studies indicate that the overall rate of misconduct has remained comparable to levels reported before generative AI tools became widely available.
Why Does Increased AI Use Not Always Translate to Misconduct?
Instructors overestimate how much text is AI-generated
Distinguishing between human-generated and AI-generated text is notoriously difficult. Increased scrutiny and uncertainty around AI use can amplify perceptions of misconduct without corresponding evidence of actual academic misconduct.
Limits of AI detection tools
AI detectors are unreliable and produce false positives. They are also heavily biased against non-native speakers of English. As a result, many suspected cases may be dismissed, and some students may be falsely accused of using AI.
Students rationalize their AI use
Students may underestimate or rationalize their AI use, particularly if they perceive it as socially undesirable. Even when they use AI inappropriately, they are unlikely to report or admit to unauthorized use.
Traditional forms of academic misconduct have declined
Studies have shown that students now appear to rely on AI to complete assessments instead of other forms of academic misconduct, such as plagiarism from other sources, contract cheating, or copying peers' work, which may explain why overall misconduct rates have largely remained the same.
Students' actual use of AI
Although over 80% of students report using AI tools, the majority do so for initial ideation, troubleshooting, or when they get stuck. Thus, despite an increase in the usage of AI tools, students may not be submitting sections or entire academic works of unmodified AI output (ETRA, 2026).
What Does This Mean for Instructors?
The evidence above suggests that AI may be reshaping how students seek support rather than increasing rates of misconduct itself. Research shows that academic misconduct generally declines with increased institutional and instructional support. Moreover, the reasons students engage in misconduct have largely remained unchanged. Given the ubiquity of AI tools, surveillance or prohibition alone is unlikely to work; effective approaches should instead address the underlying reasons students engage in misconduct.
Recommended Resources
- Bertram Gallant, T., & Rettinger, D. A. (2025). The University of Oklahoma Press.
- ETRA (2026). Undergraduate perspectives on AI at Âé¶¹Ãâ·Ñ°æÏÂÔØBoulder. Âé¶¹Ãâ·Ñ°æÏÂÔØBoulder.
- International Center for Academic Integrity.
- Lang, J. M. (2013). Harvard University Press.
- Rettinger, D. A., & Bertram Gallant, T. (2022). Jossey-Bass.
- Student Conduct & Conflict Resolution. Student Honor Code and Code of Conduct. Âé¶¹Ãâ·Ñ°æÏÂÔØBoulder.