UBC has a license for TurnItIn, a platform that compares students’ written work against a database of online sources and prior submissions and provides a “similarity” measure that can support further discussion and assessment of potential plagiarism. It is widely used at UBC and can be a formative tool for students to review and improve their writing.
We recently learned that TurnItIn plans to activate a new and separate feature that attempts to identify text generated by an AI writing tool such as ChatGPT. The feature was initially going to be enabled in the product on April 4, without an option for institutions to deactivate it. In the last week, a number of institutions have requested the option to disable the feature, and TurnItIn has agreed to do so in a limited number of cases.
The LT Hub Leadership group, with the support of the Provosts at both UBC Vancouver and UBC Okanagan, has made the decision not to enable this feature at this time, for the reasons noted below.
- Lack of ability to review and validate the feature: Because of the speed of TurnItIn’s rollout, we have not been able to go through our normal vetting process before a UBC-wide release. As a result, we do not yet know much about the functional limitations, drawbacks, and risks of using this feature. That lack of vetting is acutely felt in this case, since many generative AI tools such as ChatGPT are relatively new, and AI detection tools are newer still.
- Timing of the release: TurnItIn’s plan to release on April 4, with little advance notice, leaves us no chance to prepare the UBC community before the rollout, including developing guidelines for use and support resources. In addition, having significant new functionality appear in the tool during an academic term, when users are not expecting it and have not prepared themselves or their students for it, is very challenging.
- Results not available to students: The report produced by the AI detection feature will be accessible only to instructors; students will not be able to view the results. With TurnItIn’s existing functionality, students can see the “similarity score” of their work against the database unless instructors prohibit it; with this new functionality, there is no way for students to access the results.
- Testing for accuracy is in early stages: TurnItIn states that its detector identifies AI-written text reliably, with few false positives and few missed detections, but this claim has not been independently evaluated. TurnItIn claims 98% accuracy; however, that figure was arrived at by testing against its own training set of AI-written versus human-written texts, and the company has not provided many details about that training set. As far as we can tell, it has also tested only in a lab setting and has not yet tested the detector on a large set of data.
- No way to double-check the results: Most plagiarism detection tools show instructors both the source material and the flagged portions of a submission, which helps support an assessment of whether plagiarism occurred. With AI-written text, however, the source material simply does not exist. Instead, users are shown passages suspected of being AI-written, with no way to verify whether the detector is correct. Because the tool is not 100% accurate and its results cannot be double-checked, it is impossible to tell whether a flagged passage is a false positive. This limitation means that over-reliance on such tools for academic integrity purposes can be problematic.
- Testing for potential bias in the detector is in early stages: TurnItIn has stated that it tried to address this concern by including in its training dataset works by “statistically under-represented groups like second-language learners, English users from non-majority-English countries, students at historically black colleges and universities, and less common subject areas such as anthropology, geology, sociology, and others.” Without further information about the training dataset, the training process, or any testing for bias, we cannot know the degree to which the tool may be more likely to flag certain kinds of writing as AI-generated than others.
- Ability to keep up with rapidly evolving generative AI is unknown: TurnItIn’s detector has been trained to detect AI-generated work from the GPT-3 and GPT-3.5 language models, but OpenAI has now released GPT-4. Much like the race between anti-virus companies and hackers, there will be a race between AI writing tools and detectors, and it is not yet clear to what degree such detectors will be able to keep up.
We are pausing to allow time to review this new feature from TurnItIn. The pause also provides a chance for broader discussions at the institution about the capabilities, limits, and risks of AI detection tools (this one and others) and their value for academic integrity purposes. We aim to make a decision over the summer, in time for the start of the fall term.
We will be planning for such broader discussions in the near future and will let you know of opportunities to participate.
If you have any questions, you can reach out to Stephen Michaud in the Learning Technology Hub: stephen.michaud@ubc.ca
Sincerely,
Simon Bates, Vice-Provost and Associate Vice-President Teaching and Learning, UBC Vancouver
Heather Berringer, Associate Provost, Academic Operations and Services, UBC Okanagan