A lot of well-deserved concern is spreading across academia over students’ use of ChatGPT to write their class essays and do their homework. OpenAI’s program is so sophisticated that it can compose original material that consistently fools experts — and teachers — into believing it was written by a human. It can even be instructed to write at a certain level of human competency, such as mimicking a fourth-grader’s syntax and vocabulary, or to write in Shakespearean English or the slang used by a 1950s gangster.
ChatGPT is fascinating, fun and dangerous all at the same time. It takes plagiarism to an entirely new level because students can ask the program to write essays for them, and since the text is entirely original, it renders useless the software that teachers have come to rely upon to determine when students have put their name on work that isn’t theirs.
Until now, if a teacher or professor suspected a student’s submission was partially or fully plagiarized, all the instructor had to do was copy and paste it into an online service that would instantly identify sections that had been lifted from other published works.
ChatGPT can cull billions of data points from around the globe, including items in other languages, and synthesize them into something unique. It can write song lyrics, construct computer programs, and churn out literature of varying lengths depending on whatever a human user wants. It can even come up with jokes, albeit lame ones.
Or the program can just chat and pass the time talking about the weather. But there are subject limits. It won’t take the bait if asked to take a stand on a controversial political issue or, say, whether the world would be better off if the Nazis had won World War II.
For a lot of people, the novelty quickly wears off, and they move on to other websites. Those who have an ongoing practical use for it — students being among them — tend to stick around.
Have teachers finally been outmaneuvered? Hardly.
We asked ChatGPT to write an essay, then pasted the same text back into the program along with this question: Did you write this? Like George Washington, the program couldn’t tell a lie. “Yes, I wrote this,” it responded. It can even identify when portions of its own material have been mixed with other people’s work.
So, in addition to being a writing program, it also appears to be its own plagiarism detector. And the program can trip itself up with dumb mistakes.
Worried teachers who suspect that a fundamental, irreversible shift has occurred in student ethics need only learn how to copy and paste to hold their students accountable. The next big challenge for ChatGPT: Can it write something so original that even it can’t detect the text’s origins?
This piece originated with the St. Louis Post-Dispatch Editorial Board.