Since the release of ChatGPT in late November 2022, seemingly every educational journal, publication, and pundit, as well as the mainstream media, has weighed in on what artificial intelligence (AI) writing assistants mean for the future of education.
Like so many of you, we here at Turnitin have been following every development. My team, Teaching and Learning Innovations (TLI), is composed entirely of veteran educators, and the commentary we've seen is manifold. Some prognosticators have predicted the death of the essay, while others have described a world in which no one knows where humans end and machines begin. In some versions of the narrative, we should all be terrified; in others, we should embrace an inevitable, glorious future. The truth is that we don't know which of these (or both? all? none?) will occur, no matter how confidently anyone states their opinion.
What we do know is that educators will have to make real-time decisions about how AI writing assistants and AI-generated text will live within the classroom. Educators will have to "figure it out" without knowing the future and without having all the answers, which is intimidating, even scary. Educators have been here before, though, remaining constant (and relevant!) through wave after wave of change, and they will weather this storm as well. Educators will hold onto sound, research-based best practices and clock countless hours learning the ins and outs of our new reality, testing what works and what doesn't. Educators will look to each other to collaborate, seek feedback and guidance, and even console one another when something goes wrong. Educators will evolve and adapt, and Turnitin is prepared to be a partner, helping to face the threats, the responsibilities, and the promise of this moment.
It’s important to go into this discussion acknowledging and confronting the challenges while still keeping our eyes open to the possibilities. In and of itself, AI is neither good nor bad; like most technology, it will come down to what we do with it. Instructionally, that will include what we teach students to do with it.
Let’s first address the worrisome possibilities.
At any point in time, educators need to know what students know, understand, and can do, and institutions need to certify the validity of the credentials they award to their students. Part of that paradigm rests on assessments: informal, formal, authentic, performance, portfolio, formative, summative, big, small, high stakes, low stakes: all of the above. So the question many educators are asking is how they can continue to TRUST their assessments in the face of AI. If AI is good enough to produce something that could pass as a student's work, how will they ever really know what is the student's work and what was done by a machine?
Turnitin has long been a leader in the world of academic integrity, and educators around the globe are asking us daily whether we have a tool that will help them renew their confidence in the integrity of their assessments and student work. The answer is that Turnitin Originality, an in-market product that investigates the authenticity of student work, can detect some forms of AI-assisted writing and report on indicators of contract cheating today. Additionally, we are looking toward tomorrow, advancing quickly and incorporating our latest AI writing detection capabilities, including those that recognize ChatGPT writing, into our products for educator use in 2023. Check out this sneak preview of where our AI Innovation Lab currently stands with its fast-evolving AI detection capabilities!
And while the impact of this developing technology may be the topic of the moment, it is not a new one for Turnitin. For the last two and a half years, we have been researching and developing technology that recognizes the signature of AI-assisted writing. However, the answer to this threat will not lie in a single magic bullet. Instead, it will require a full suite of solutions working together to truly meet the moment, and in addition to Turnitin Originality, some of those tools are already in the hands of educators today.
In fact, many of the strategies institutions and educators have been using for years to build a culture of academic integrity are just as applicable to our new circumstances. Tried and true methods around policy, communication, awareness-building, relationship-building, instruction, and formative feedback that have worked to address threats like intentional plagiarism and contract cheating are still going to help now.
Beyond that, there are also ways to make assignments and assessments less vulnerable to the misuse of AI-generated text tools. Take a look at our Guide for Approaching AI-generated Text in Your Classrooms for a set of 11 concrete recommendations that every institution and educator around the world can begin to leverage immediately.
Despite all the uncertainty, educators once again carry responsibility in this moment. In fact, there are multiple tracks of responsibility, which may even seem to conflict at times. On the one hand, we have a responsibility to help students learn how to operate ethically and morally. Some are already suggesting this should mean that all AI writing assistants are strictly forbidden. Seemingly diametrically opposed to that is the need to help students understand HOW to use these tools, because writing in the world WILL look different now. It isn't coming in the future; it's here already.
If anyone is unconvinced, take a moment to visit YouTube, where there are already videos about using AI writing assistants to mass-produce content for marketing or influencing campaigns. Look at any number of articles that now include a note that a portion of the text was generated by AI. When change like this comes, educators cannot afford to ignore it; when they do, students will fill the gap with their own understanding, for better or for worse.
Those two streams of responsibility would be challenging enough, but there's a third we cannot ignore. This one, for educators, is perhaps the most important: learning. Writing is a tool for thinking and learning. So, for those folks out there saying that teachers should simply stop assigning writing and do "something else," educators will have to do the hard work of helping them understand that writing isn't simply a way for us to assess or for students to demonstrate their knowledge, understanding, and skills. No, writing is a critical way that people make meaning from new ideas, input, and data. That means we have a responsibility to see to it that writing isn't displaced from instructional practice; that would remove one of the key ways that humans learn and grow. In an article in Nature, Stokel-Walker (2022) quotes Sandra Wachter's warning: "If students start to use ChatGPT, they will be outsourcing not only their writing, but also their thinking."
We can't know yet what the full potential of this new technology is, but already we see educators discussing the possibilities they can envision. Whether it is overcoming blank-page paralysis or combating the impacts of learning disabilities, teachers are already full of ideas for how they might use AI writing assistants in their classrooms. The veteran educators at Turnitin are hearing people discuss using AI to generate exemplars that act as models for students, or to create raw material for those fantastic revision workshops we all love. We're hearing about AI writing tools as support for language learning and general brainstorming. Undoubtedly, the ideas will continue to pour in. Educators SHOULD experiment, take chances, and even fail sometimes. Instructors should take a look at our collection of AI and Teacher Success resources to develop a well-researched response to student usage of these tools. Through it all, educators should talk to each other, students, and all stakeholders.
What we educators must NOT do is close our minds to the promise. We cannot allow our fear of the unknown to push us to turn our backs on the potential. I'm not suggesting that we should rush in and use AI writing assistants at every turn, without guardrails. We will need to be cautious, and we'll need to know that we can identify the use of these tools when they are not acceptable. When we do have those safeguards in place, though, it will be time for us to turn our attention to the new possibilities the technology may present.
In some ways, meeting that fear and securing the guardrails first speaks to Maslow's hierarchy of needs: before educators can begin to explore application and innovation with the technology, they will need to know that academic integrity is "safe." At Turnitin, we see this thread running through our discussions with advocates and users around the world. The most common question we are hearing is whether we will have tools to help address this shift. Rest assured: we have a dedicated team of artificial intelligence experts working around the clock to refine and expand our existing capabilities to better support educators facing AI writing in the classroom over the next few months.
As you may have read in one of our previous blog posts, we are also in the process of developing an actionable user experience that can guide educators to identify and evaluate work with traces of AI-assisted writing. Essentially, we are going to go as fast as we can for as long as it takes. However, educators don't have to wait for a technology tool to help them address the threat.
Much of the fear and anxiety educators are feeling stems from a sense of being unprepared and powerless. While it may be true that there is still a great deal to learn about what AI-generated text will mean for education and the world, it is not true that educators have no power in this moment. In fact, this is a moment made for educators. Who better to help shape how students think about and understand these tools in their lives and their learning? Who better to fill the void with information and analysis? Who better to guide discussions and new understandings? Who better to help students begin to use new tools in ethical and innovative ways?
This has always been the work of educators, and it always will be.
Stokel-Walker, C. (2022, December). AI bot ChatGPT writes smart essays: Should professors worry? Nature. https://www.nature.com/articles/d41586-022-04397-7