In 2016, the Washington Post used an artificial intelligence reporter, Heliograf, to write some 300 reports from the Rio Olympics. In 2017, the newspaper used it to write some 850 other reports, which together drew more than 500,000 clicks.
As impressive as Heliograf may be, its main skill is taking data-driven topics, such as high school football games or election results, and crafting short blurbs about them. In general, these are stories that the newspaper would not or could not have assigned to a human reporter but nonetheless wanted to include in its publication.
Still, if an AI writer is good enough to draft articles for a major newspaper, clearly it’s just a matter of time before students are able to use AI authors to help them complete assignments the same way some turn to essay mills or other ghost authors now.
But while the day may come when a student uses an essay bot to crank out a paper for class, that day is likely still a long way off.
The first reason is that there is a major difference between writing blurbs about data-oriented news stories and offering meaningful commentary on a nuanced subject. For example, right now an AI might be able to write a summary of a book but would struggle to compare symbolism in that work to the text of another.
The second reason is that Heliograf only scratches the surface of what AI authoring can do and, while it can write reasonably competent blurbs and even tweet them out, it still has significant limitations.
But that’s not to say AI isn’t an issue to watch in 2018. It very much is, just not one that will take the form of AI ghostwriters helping students, at least not yet.
We’ve already seen the use of Wolfram|Alpha, an AI-driven tool for answering questions, to cheat in mathematics classes. Automatic paraphrasing tools, which attempt to rewrite unoriginal text, have also been growing in popularity (even if they too have quality issues).
As with other industries, AI isn’t going to hit the classroom like a tidal wave, with students suddenly turning to robot ghostwriters to cheat. Instead, it’s going to trickle in, likely on the back of legitimate tools.
For example, writers of all stripes have long used automated spelling and grammar checking. AI can help supplement human editing even further, suggesting rewrites and addressing increasingly complex issues. However, it’s possible that these tools will grow more involved and shoulder more of the burden, further blurring the identity of the true author.
In that regard, AI writing has a great deal in common with driverless cars. The streets haven’t become miraculously overrun with driverless cars, but we’ve started with cars that automatically brake or stay within a lane on their own. Human drivers are still necessary, for now, but as computers take over more and more driving tasks, the question of “Who is really driving the car?” takes shape.
But the fact that we aren’t overrun with AI authors yet makes this a critical time in the conversation about AI. Now is the time to open the dialogue about whether, when, and how we want to allow students to use AI. What are the ethical boundaries in using AI to complete an assignment? How do we detect when a student has gotten help from an AI?
These aren’t simple questions but, if we start the conversation now, we likely have enough time to weigh them before the storm is truly upon us.
This won’t be the year AI overruns the classroom, but it should be the year we start talking and thinking about it. Otherwise, it may be too late to get ahead of what may be one of the most important technological shifts in the classroom in the last century.
This post was contributed by Jonathan Bailey, a foremost expert in plagiarism. He has spent over 16 years fighting plagiarism professionally and currently blogs on Plagiarism Today, where he raises awareness about the importance of digital literacy and the societal effects of plagiarism.