The beginning of the fall semester brought about the end of St. Joe’s AI detection feature on Turnitin.
Turnitin, a plagiarism detection service, is part of St. Joe’s Canvas learning management system. Last April, Turnitin announced a new AI writing detector, which was integrated into St. Joe’s system and made available to faculty within the Canvas Gradebook.
Turnitin’s FAQ says submitted papers are broken into overlapping segments of roughly five to 10 sentences each “to capture each sentence in context.” The segments are then run through Turnitin’s AI detection model, and each sentence receives a score between 0 and 1 indicating whether it was written by a human (0) or by AI (1).
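As a rough illustration only, and not Turnitin’s actual code, an overlapping-window scoring pipeline like the one the FAQ describes could look something like the sketch below. The score_segment function is a hypothetical stand-in for the proprietary detection model, and the window and stride sizes are assumptions.

```python
# Hypothetical sketch of an overlapping-window, per-sentence scoring pipeline.
# score_segment() stands in for the proprietary model; it returns neutral
# scores here so the example runs end to end.
from collections import defaultdict

def score_segment(sentences):
    """Placeholder model: returns a 0-1 'AI-likelihood' score per sentence."""
    return [0.5 for _ in sentences]  # a real model would run actual inference

def score_document(sentences, window=10, stride=5):
    """Split a document into overlapping windows of sentences, score each
    window, then average the scores each sentence received across every
    window that contained it."""
    totals, counts = defaultdict(float), defaultdict(int)
    for start in range(0, len(sentences), stride):
        segment = sentences[start:start + window]
        for offset, score in enumerate(score_segment(segment)):
            totals[start + offset] += score
            counts[start + offset] += 1
    # Per-sentence score: 0 leans human-written, 1 leans AI-generated.
    per_sentence = [totals[i] / counts[i] for i in range(len(sentences))]
    # Document-level figure: share of sentences scoring above 0.5.
    flagged = sum(1 for s in per_sentence if s > 0.5)
    return per_sentence, (flagged / len(sentences) if sentences else 0.0)
```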
Turnitin claims its tool is 98% accurate in detecting AI writing and that its “efforts have primarily been on ensuring a high accuracy rate accompanied by a less than 1% false positive rate, to ensure that students are not falsely accused of any misconduct,” according to a blog post on its website. In practice, however, false positives have undercut that accuracy claim.
In a June 2023 blog update, Turnitin’s Chief Product Officer Annie Chechitelli said Turnitin’s “document false positive rate – incorrectly identifying fully human-written text as AI-generated within a document – is less than 1% for documents with 20% or more AI writing.” Turnitin’s sentence-level false positive rate is about 4%.
Andy Starr, manager of Academic Systems at St. Joe’s, said Turnitin released the AI detection feature and turned it on automatically. But from the start, its reliability was suspect.
“We found that there’s a lot of false positives, and we started doing some research and some testing on it, and we came out somewhere between 50 and 75% accurate at detecting it,” Starr said. “It was causing more problems than it was solving because there is no real way to say, ‘yes, this was written by artificial intelligence.’”
James Janco ’25, a tutor in the Writing Center, worked with a student in May to rewrite an essay that Turnitin had flagged as AI writing. The student insisted she had not used AI to write the essay, but the professor required her to rewrite it in her own words.
“It was frustrating because she was pretty clear that it was not AI generated and she had to rewrite it for no reason,” Janco said. “It was just such a unique situation. All this technology is so new; we don’t have a standard operating procedure here.”
Janco spent the summer studying AI writing as part of his Summer Scholars project, “Standard, White Artificial–English: An Analysis of Language, Voice, and AI.” He said his research pointed to both inaccuracy and bias in AI detection software.
Janco pointed to an April 2023 study posted on arXiv, Cornell University’s preprint server, which found “detectors consistently misclassify non-native English writing samples as AI-generated, whereas native writing samples are accurately identified.”
“If you’re a professor and you’re concerned about students using AI, there are other ways to look out for it. These detection softwares are not the way to go,” Janco said.
Starr said it is an “arms race” trying to detect AI writing.
“Artificial intelligence is here, and it’s just going to keep getting better and better,” Starr said. “Soon it’s going to be in Office365, if it’s not already. There’s no way we’re gonna be able to stop AI. It’s kind of like the internet, when the internet first happened.”
Starr’s best advice for faculty is to incorporate AI into their teaching, as he believes students will use AI in their work after they graduate.
“One of the things we’re working on for the spring is a workshop for writing-intensive courses on how to use AI, how to incorporate this tool that’s going to be everywhere soon,” Starr said.