Recently, I shared an update on what we’ve learned since we launched the Preview for AI writing detection. In the same article, I highlighted that when it comes to AI writing detection metrics, there is a difference between sentence- and document-level metrics.
Our document false positive rate (incorrectly identifying fully human-written text as AI-generated within a document) is less than 1% for documents with 20% or more AI writing.
Our sentence-level false positive rate is around 4%, meaning there is a 4% likelihood that a specific sentence highlighted as AI-written was actually written by a human. These misclassifications are more common in documents that contain a mix of human- and AI-written content, particularly at the transitions between the two.
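To make the sentence-level metric concrete, here is a hypothetical illustration of how such a rate could be computed from labeled sentences. This is a sketch for intuition only, not Turnitin's actual implementation; the function name and data are invented for the example.

```python
def sentence_false_positive_rate(is_human, flagged_as_ai):
    """Share of genuinely human-written sentences that a detector flags as AI-written.

    is_human: list of booleans, True if the sentence is actually human-written.
    flagged_as_ai: list of booleans, True if the detector flagged it as AI-written.
    """
    # Keep only the detector's verdicts on the human-written sentences.
    human_verdicts = [flag for human, flag in zip(is_human, flagged_as_ai) if human]
    if not human_verdicts:
        return 0.0
    # False positives are human sentences incorrectly flagged as AI-written.
    return sum(human_verdicts) / len(human_verdicts)

# 25 human-written sentences, 1 incorrectly flagged -> 4% false positive rate
is_human = [True] * 25
flagged_as_ai = [True] + [False] * 24
print(sentence_false_positive_rate(is_human, flagged_as_ai))  # 0.04
```

At a 4% rate, roughly 1 in 25 highlighted human-written sentences would be a false alarm, which is why a single flagged sentence on its own should prompt review rather than a conclusion.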
As explained in my earlier article, these misclassified sentences correlate with their proximity to actual AI writing in the document: 54% of the time, they are located directly next to genuinely AI-written text.
Watch this short video in which David Adamson, an AI scientist at Turnitin and a former high school teacher, explains more about sentence-level false positives.
While we cannot eliminate the risk of false positives entirely, given the nature of AI writing and analysis, we believe that being transparent about what our metrics mean and how to apply them enables instructors to use the data meaningfully.
Here are a few tips that can help you further understand how to use our AI detection metrics:
For more on false positives, read our previous blog, "Understanding false positives within our AI writing detection capabilities."