Preventing Copy and Paste from Exam to AI
Hi all,
I teach 4-5 sections of American Government online, about 250-300 students. The exam average in those classes used to sit in the mid 70s. Since the advent of AI, it has crept up to the mid-to-high 80s.
Students are simply copying the multiple choice questions, pasting them into ChatGPT, and taking the answers. I tried it myself on one of my 100-question exams, where the questions came strictly from my online lectures, and got 88/100.
I have monitored exam logs and made students retake the exam when the logs looked suspicious. I have rewritten exams with tougher language. That only lowered the average by TWO percentage points. Hardly anything.
What strategies have you tried that were successful in limiting copy and paste from your exam into AI? What worked for you? What do you think would work?
Solved!
Are you sure the students are copy/pasting the questions?
I just gave a 50-question statistics final. In the past, very few students would score above 60% (a combination of not knowing the material and not needing a high grade to keep their class grade). This last semester, no one scored below 80%, and only about three or four actually needed to do well. On the online quizzes taken outside of class, they did very well. In class, they had blank looks, and several scored below 40% on Kahoots covering the same material they had just gotten 90% on in a quiz.
For the final, I was afraid they were going to cheat, but AI never crossed my mind. I was afraid the final had been out there for several years and they would cheat that way. I spent the day before the final completely rewriting it. I ended up with enough questions to create 8 versions of the final, so basically there was a version for every 2 students. I used randomization, one question at a time, no backtracking, don't show answers, etc. The questions were brand new, but they were based on the old ones.
After the first three students had taken the final and done exceptionally well, someone in our Teaching & Learning Center crushed my hopes. She said there was an app that lets you drag a rectangle around the question on the screen and get the answer back in a few seconds with about 90% accuracy. You don't need to copy/paste, just select the question and answers. I tried it with a demo account and it was pretty good. It even gave explanations for the answers that were pretty spot-on. It's only $5 a month when purchased for an entire year.
The good news was that it showed up in the quiz logs. After every question, the student would leave the page, come back in a few seconds and then answer the question. My dean wasn't helpful and said that he knew they were cheating and I knew they were cheating but that it would likely go to a grade appeal and I would lose.
But then I discovered something else. The whole class did well, yet only three students showed up in the logs as leaving the quiz and coming back. The rest were still answering the questions quickly and doing well. I assumed they were using some other AI tool. Then my wife explained that they could just take a picture of the screen with their phone and get the answer that way. Nothing shows up in the Canvas logs when that happens.
What I decided was that those three students were using a single device to take the exam and cheat. One only had a phone (no laptop), so they might have been switching over to use a calculator or look something up (they weren't, but you need convincing evidence that students are cheating). If I were to make those three retake the final (my syllabus says some students may have to take the final in a proctored setting), all I would be doing is punishing those who don't have two devices (or didn't think to use two).
I then wrote a program that downloads the quiz log for every student who took the test. I wanted to look at the mean and standard deviation of the time it took to answer each question. A low standard deviation would indicate that every question took about the same amount of time, which shouldn't happen without AI, since some questions are harder than others.
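For anyone who wants to try the same timing analysis, here is a minimal sketch in Python. It is not my exact program, just the general idea: it pulls quiz submission events through the Canvas REST API and computes the mean and standard deviation of the gaps between "question_answered" events per student. The host, token, course/quiz IDs, response keys, and pagination handling are assumptions you should verify against the Canvas Quiz Submission Events API docs before relying on the numbers.

import statistics
from datetime import datetime

import requests

BASE = "https://yourschool.instructure.com/api/v1"  # placeholder Canvas host
TOKEN = "REPLACE_WITH_API_TOKEN"                    # placeholder API token
COURSE_ID = 12345                                   # placeholder course id
QUIZ_ID = 67890                                     # placeholder quiz id
HEADERS = {"Authorization": f"Bearer {TOKEN}"}


def get_json(url, params=None):
    resp = requests.get(url, headers=HEADERS, params=params)
    resp.raise_for_status()
    return resp.json()


def answer_gaps(submission_id):
    """Return the gaps (seconds) between successive answered questions."""
    # Assumed endpoint/keys: Quiz Submission Events API returns
    # {"quiz_submission_events": [{"event_type": ..., "created_at": ...}, ...]}
    data = get_json(
        f"{BASE}/courses/{COURSE_ID}/quizzes/{QUIZ_ID}"
        f"/submissions/{submission_id}/events",
        params={"per_page": 100},
    )
    times = [
        datetime.fromisoformat(e["created_at"].replace("Z", "+00:00"))
        for e in data.get("quiz_submission_events", [])
        if e.get("event_type") == "question_answered"
    ]
    times.sort()
    return [(b - a).total_seconds() for a, b in zip(times, times[1:])]


# Assumed endpoint/keys: quiz submissions list returns {"quiz_submissions": [...]}
subs = get_json(
    f"{BASE}/courses/{COURSE_ID}/quizzes/{QUIZ_ID}/submissions",
    params={"per_page": 100},
)
for sub in subs.get("quiz_submissions", []):
    gaps = answer_gaps(sub["id"])
    if len(gaps) < 2:
        continue
    mean = statistics.mean(gaps)
    stdev = statistics.stdev(gaps)
    # A suspiciously low stdev means every question took about the same
    # time to answer, regardless of difficulty.
    print(f"user {sub['user_id']}: mean {mean:.1f}s, stdev {stdev:.1f}s")

For a real class you would also need to follow the Link headers for pagination and decide on a threshold, but even eyeballing the output makes the unusually consistent students stand out.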
Limiting copy/paste isn't a solution as students don't need to copy/paste to get the answers. A lack of evidence in the quiz logs doesn't mean they didn't cheat.
This could also explain why my trig and calculus students do so well on the homework but so poorly on the exams. There are some hard questions on the homework, yet low C and D students were getting them right on their first attempt.
I haven't been studying AI like others have, so I don't have a solution.
What I came up with is that any unmonitored online assessment is suspect and should not count for much of the grade. You should not administer a high-stakes assessment online unless you have some kind of proctoring or monitoring system that watches what the student is doing.
I'm not advocating that approach myself; I'm old-school and give most of my tests face-to-face. The statistics class was the exception, an artifact of COVID: I had to go online and then switched to a flipped classroom when we came back to class.
Restricting copy/paste isn't much of a deterrent.
If I were to teach that class again, I would completely rework the percentages for the assignment groups. Concepts (online quizzes) used to be worth 45%; that would go down to 10%. I used to think students wouldn't do things that aren't worth points, but most of the class wasn't doing the quizzes on their own anyway. With AI, 45% was just giving away enough points to pass the class without knowing anything.
I would put more emphasis on things done in class or that required customization (students continue to share assignments; it's not just AI). I did some of that already: I had students take a quiz to get data they then had to use in their analyses, and that quiz used randomization so your numbers weren't the same as your friends'. I would do more of that, since I had too many assignments where the whole class was working with the same thing.
I would make sure that the students included screenshots from the statistics package. So far, according to my wife, ChatGPT hasn't been able to generate Minitab or JASP output. It can interpret it, but not generate it.
I would be more emphatic that students must use the language of the course. I word things in ways that are atypical, but I didn't really take off points when students ignored that. I would also grade the Kahoots on knowledge rather than participation.
If I were teaching that class again, I would probably spend time figuring out which questions AI missed and exploit those. But that is futile, since the tools will only get better, so it's a lot of work for temporary relief.
Our leaders say all you have to do is teach students responsible use of AI and it won't be a problem. They are delusional. When it's this easy to cheat, get a passing grade, and not get caught, students are going to do it. I don't know if students even view it as cheating, but that's bigger than just AI.
I'm not going to be able to help with how to use AI effectively. Some instructors are going that way, especially with discussions, requiring students to use ChatGPT and include the transcript as part of the discussion. I know we've gone through technological revolutions before: Google, Wikipedia (not so much for math), and now AI.
Like I said, I haven't been on the forefront of fighting AI. I just wanted to share that restricting copy/paste probably won't help as much as you think. You'll need to do something else.