The launch of ChatGPT and other artificial intelligence (AI) chatbots has triggered an alarm for many educators, who worry about students using the technology to cheat by passing its writing off as their own. But two Stanford researchers say that concern is misdirected, based on their ongoing research into cheating among U.S. high school students before and after the release of ChatGPT.
“There’s been a ton of media coverage about AI making it easier and more likely for students to cheat,” said Denise Pope, a senior lecturer at Stanford Graduate School of Education (GSE). “But we haven’t seen that bear out in our data so far. And we know from our research that when students do cheat, it’s typically for reasons that have very little to do with their access to technology.”
Pope is a co-founder of Challenge Success, a school reform nonprofit affiliated with the GSE that conducts research into the student experience, including well-being and sense of belonging, academic integrity, and engagement with learning. She is the author of Doing School: How We Are Creating a Generation of Stressed-Out, Materialistic, and Miseducated Students, and coauthor of Overloaded and Underprepared: Strategies for Stronger Schools and Healthy, Successful Kids.
Victor Lee is an associate professor at the GSE whose focus includes researching and designing learning experiences for K-12 data science education and AI literacy. He is the faculty lead for the AI + Education initiative at the Stanford Accelerator for Learning and director of CRAFT (Classroom-Ready Resources about AI for Teaching), a program that provides free resources to help teach AI literacy to high school students.
Here, Lee and Pope discuss the state of cheating in U.S. schools, what research shows about why students cheat, and their recommendations for educators working to address the problem.
What do we know about how much students cheat?
Pope: We know that cheating rates have been high for a long time. At Challenge Success we’ve been running surveys and focus groups at schools for over 15 years, asking students about different aspects of their lives — the amount of sleep they get, homework pressure, extracurricular activities, family expectations, things like that — and also several questions about different forms of cheating.
For years, long before ChatGPT hit the scene, some 60 to 70 percent of students have reported engaging in at least one “cheating” behavior during the previous month. That percentage has stayed about the same or even decreased slightly in our 2023 surveys, when we added questions specific to new AI technologies, like ChatGPT, and how students are using them for school assignments.
Isn’t it possible that they’re lying about cheating?
Pope: Because these surveys are anonymous, students are surprisingly honest — especially when they know we’re doing these surveys to help improve their school experience. We often follow up our surveys with focus groups where the students tell us that those numbers seem accurate. If anything, they’re underreporting the frequency of these behaviors.
Lee: The surveys are also carefully written so they don’t ask, point-blank, “Do you cheat?” They ask about specific actions that are classified as cheating, like whether they have copied material word for word for an assignment in the past month or knowingly looked at someone else’s answer during a test. With AI, most of the fear is that the chatbot will write the paper for the student. But there isn’t evidence of an increase in that.
So AI isn’t changing how often students cheat — just the tools that they’re using?
Lee: The most prudent thing to say right now is that the data suggest, perhaps to the surprise of many people, that AI is not increasing the frequency of cheating. This may change as students become increasingly familiar with the technology, and we’ll continue to study it and see if and how this changes.
But I think it’s important to point out that, in Challenge Success’ most recent survey, students were also asked if and how they felt an AI chatbot like ChatGPT should be allowed for school-related tasks. Many said they thought it should be acceptable for “starter” purposes, like explaining a new concept or generating ideas for a paper. But the vast majority said that using a chatbot to write an entire paper should never be allowed. So this idea that students who’ve never cheated before are going to suddenly run amok and have AI write all of their papers appears unfounded.
But clearly a lot of students are cheating in the first place. Isn’t that a problem?
Pope: There are so many reasons why students cheat. They might be struggling with the material and unable to get the help they need. Maybe they have too much homework and not enough time to do it. Or maybe assignments feel like pointless busywork. Many students tell us they’re overwhelmed by the pressure to achieve — they know cheating is wrong, but they don’t want to let their family down by bringing home a low grade.
We know from our research that cheating is generally a symptom of a deeper, systemic problem. When students feel respected and valued, they’re more likely to engage in learning and act with integrity. They’re less likely to cheat when they feel a sense of belonging and connection at school, and when they find purpose and meaning in their classes. Strategies to help students feel more engaged and valued are likely to be more effective than taking a hard line on AI, especially since we know AI is here to stay and can actually be a great tool to promote deeper engagement with learning.
What would you suggest to school leaders who are concerned about students using AI chatbots?
Pope: Even before ChatGPT, we could never be sure whether kids were getting help from a parent or tutor or another source on their assignments, and this was not considered cheating. Kids in our focus groups are wondering why they can’t use ChatGPT as another resource to help them write their papers — not to write the whole thing word for word, but to get the kind of help a parent or tutor would offer. We need to help students and educators find ways to discuss the ethics of using this technology and when it is and isn’t useful for student learning.
Lee: There’s a lot of fear about students using this technology. Schools have considered investing significant amounts of money in AI-detection software, which studies show can be highly unreliable. Some districts have tried blocking AI chatbots from school wifi and devices, then repealed those bans because they were ineffective.
AI is not going away. Along with addressing the deeper reasons why students cheat, we need to teach students how to understand and think critically about this technology. For starters, at Stanford we’ve begun developing free resources to help teachers bring these topics into the classroom as they relate to different subject areas. We know that teachers don’t have time to introduce a whole new class, but we have been working with them to make sure these activities and lessons fit with what they’re already covering in the time they have available.
I think of AI literacy as being akin to driver’s ed: We’ve got a powerful tool that can be a great asset, but it can also be dangerous. We want students to learn how to use it responsibly.