
Similarity in the Classroom

Part 2: The "Similarity or Plagiarism?" debate series

Patti West-Smith
20-year education veteran; Senior Director of Customer Engagement


In our first post in this series, we attempted to truly tease out the distinctions between plagiarism and similarity while squarely placing Turnitin in the similarity arena. If you didn’t read that post, it’s worth a visit, but here is the TL;DR: similarity is not the same as plagiarism, and Turnitin does not detect plagiarism.

With that being said, millions of users around the world are interacting with Turnitin’s Similarity Report, so it’s important to understand what it is and how to leverage its power to meet instructional goals. We know that there are many questions about exactly how similarity and the Similarity Report impact classroom practice; in this post, we’ll get into the “nuts and bolts” of that by going through some of the most frequently asked questions from educators.

Questions about classroom implications

What is a good/bad similarity score?

This is the ONE question that every person working at or with Turnitin has been asked repeatedly, and about which there is still a great deal of debate. From our perspective, though, the answer is quite simple: there is no good or bad similarity score, no magic number. The score absolutely must be interpreted in context. A score of 0% is inherently neither good nor bad, nor is 30% or even 80%. There are contextual details that an EDUCATOR (not the software) must apply to that number in order to make an informed decision about exactly what is happening in the work submitted by the student and what the appropriate next steps should be.

Some of the contextual details:

  • Length of the assignment
  • Specific requirements of the assignment (for example, have you required X number of sources to be cited?)
  • Writing genre (HINT: Some genres actually lend themselves to a higher/lower level of similarity)
  • Developmental level of the writers
  • Students’ level of mastery in integrating research/evidence, such as summarizing, quoting, and paraphrasing
  • Opportunities for feedback and revision
  • Students’ comfort level with proper citation
How should educators determine “an” acceptable score?

It’s complicated, but the very first step should be to consider the factors outlined above. Each of those elements impacts the determination.

Once educators have considered those elements, there are a few additional important tips:

  • Set a RANGE, not a single “cut score” or “threshold.” Looking at those factors above, educators will quickly realize that these are not issues that are easy to quantify exactly. There will be some subjectivity, and a range is far better able to respond to a “gray area” than a single line in the sand.
  • Establish the range per assignment. Those considerations should also quickly tell educators that they can’t pick one universal number, or even one universal range, because the contextual details will vary from assignment to assignment. The genre might change, which could shift the requirements; over time, students should have more practice, which will impact expectations for proficiency; and so on. Do yourself a favor and determine the range on a case-by-case basis. A non-negotiable, decontextualized number is going to create many problems, not the least of which is undermining students’ confidence in the fairness of the measure.
  • Consider the concept of “expected similarity,” especially as it relates to those assignment-specific contextual details. Expected similarity is the level of similarity that should occur, based on factors such as the writing genre, the specific demands of the prompt/assignment, and the length. It is also impacted by the contextual factors we discussed at the beginning of this post. For example, if students have had very little instruction in or practice with effectively integrating evidence into their writing, yet the assignment requires them to do so, the level of expected similarity must rise.
  • Check into the norms of the institution and work to align. It doesn’t do students any favors if one set of expectations is far out of sync with the rest of the school. There may be a policy in place that must be considered, or it may be necessary to evaluate how expectations compare with those of previous OR future instructors. TIP: If the institutional policy in place sets a firm threshold, consider talking to the decision-makers and sharing some of our tips here to see whether a fairer, more appropriate policy can be put in place while still holding students to high standards regarding academic integrity.
What should educators do when the similarity score comes back too high?

Here, again, there is not one single answer, but there are some steps that should be taken, though not necessarily in any specific order:

  • Talk to the student. An important source of data is the student writers themselves. Try to determine why they think the score is so high; consider asking about their process. We’ve developed some resources that can be helpful in these situations. Check out our “Approaching a student about questionable work” guide, along with the “Discussion starters for tough conversations.”
  • Step back and consider all of those contextual details we have talked about before and determine if any one of them may be influencing the situation.
  • If there still seems to be a problem, consider the question of intentionality; perhaps this is a case of a skill deficit. If so, try to determine where the weakness is so that an instructional path can be taken.
  • If the situation is still unclear, consider utilizing Turnitin tools such as our Flags feature, which can help to uncover intentional acts.
  • Finally, regardless of the outcome of the investigation, take some time to ask whether anything could be done differently next time. In talking to educators around the world, we often hear that they wish they had taken some proactive steps that might have changed the outcome. One simple step they mention frequently is not waiting until the final product is due to check in on progress; establishing checkpoints along the way helps students manage their time and can often avoid some of the situations that result in problematic behavior. (PRO TIP: We created a resource to help you do THAT too!)
When should educators refer a student for discipline related to an academic integrity violation?

This is not a decision we would ever attempt to make for educators. As with so many elements of teaching and learning, there are far too many variables and far too much information we just don’t have for us to ever attempt to replace the expert judgment of an educator in the moment.

There are, however, a few factors to consider here that may be helpful:

  • Past behavior - Has this kind of thing occurred before? What steps have already been taken?
  • Instruction - Is there a solid base of explicit instruction around plagiarism, academic integrity, citation, paraphrasing, etc., upon which the student should have been able to rely, so that they knew better and had the skills to make a different choice?
  • Intentionality - For many educators, the degree of intention makes a big difference in how they view and respond to the situation.

PRO TIP: Check out this whitepaper that dives into how to use an incident of plagiarism as the quintessential “teachable moment.”

What can educators do to prevent plagiarism BEFORE it happens?

Many of the previous discussions have touched on this subject, and like many aspects of academic integrity, it involves quite a few nuances. However, we have a FULL set of materials to help educators tackle this challenge. Our Disrupting Plagiarism pack includes a webinar, an educator guide, lessons, slide decks, activities, posters, student-facing resources, and more. These resources can help institutions or individual educators establish a culture of academic integrity in their classrooms and provide direct instruction that ensures students truly understand what they can and cannot do.

The single most important idea to remember from this series about the Similarity Report and all of Turnitin’s products is that they are not meant to be used in isolation, without context. Instead, the score returned is a data point, one that is PART of a bigger picture, full of nuances and variables that can only be truly understood by applying educators’ expertise and experience. The Similarity Report, while helpful and robust, has its limitations. And because we know that similarity is NOT plagiarism, we also know that educators can and should use the Similarity Report and other Turnitin tools wisely to augment, not replace, their own judgment to help students on their learning journeys.