An academic paper on AI plagiarism had a ‘plot twist’ — ChatGPT wrote it

A research paper by three academics in England had a timely and straightforward thesis: Generative artificial intelligence programs like ChatGPT could open up new opportunities for higher education but also pose worrying challenges, especially when it comes to cheating and plagiarism.

The authors tried to prove their point like they would in any academic paper, with paragraphs of analysis and citations of previous research pointing to the growing potency of programs like ChatGPT. But the professors saved their best argument for last.

“As the alert reader may already have guessed, everything up to this point in the paper was written directly by ChatGPT,” the paper read after its conclusion.

Professors Debby Cotton and Peter Cotton, who are married, along with lecturer Reuben Shipway, submitted the unconventional paper with a “plot twist” to illustrate their thesis and demonstrate ChatGPT’s abilities to colleagues still unfamiliar with the technology, they said in an interview with The Washington Post.

“What we wanted to do was have that surprise element,” said Peter Cotton, “… reinforcing the message that, actually, this software is already very, very powerful.”

The stunt worked. The three scholars published their paper in an education journal, Innovations in Education and Teaching International, on March 13, after briefing the journal’s editors and receiving permission to submit an AI-written piece. In their feedback, three peer reviewers reported that they initially believed the paper had been written by a person, realizing otherwise only when they reached the reveal, the professors said.

Peter Cotton and Shipway, who teach in the University of Plymouth’s School of Biological and Marine Sciences, thought of the idea for the paper in December as ChatGPT first became available to the public, they said. While grading essays about bioluminescence, Cotton opened up the chatbot and, out of curiosity, fed it the same assignment he’d given his undergraduates. The result was poor, but Shipway remembers being surprised by a pun in its conclusion — “new research is shedding light on bioluminescence,” ChatGPT had written.

“I thought, ‘Wow, that’s quite sophisticated,’” Shipway said. “That must have been a student.”

Shipway proposed they co-author a paper about the issues ChatGPT could pose in academia and write it using the program. They recruited Debby Cotton, a professor of higher education at Plymouth Marjon University. Over the course of a few days, the three fed various prompts into ChatGPT to generate sections of a paper that could pass a peer review.

“Produce several witty and intelligent titles for an academic research paper on the challenges universities face in ChatGPT and plagiarism,” they prompted the chatbot. “Write an original academic paper, with references, describing the implications of GPT-3 for assessment in higher education.”

They pasted ChatGPT’s responses directly into their manuscript. The only edits the professors made were adding subheads for the paper’s different sections and some references. Those changes spoke to the program’s limitations in producing text predictively, Debby Cotton said: ChatGPT was able to add references to a research paper when asked but ended up generating convincing but fabricated citations using the names of commonly cited experts in a field.

“I went and checked the references and went, ‘Oh, no,’” Debby Cotton said. “That’s formatted exactly like a reference, and that is a real person who writes in this field, but that doesn’t exist, that paper doesn’t exist.”

The three authors contacted editors at Innovations in Education and Teaching International before submitting their paper to confirm that the journal was comfortable with publishing their unusual experiment. They and an editor at the journal both told The Post that the paper wasn’t an actual example of academic dishonesty — the authors flagged their use of ChatGPT after the paper’s conclusion and, in a postscript section titled “Discussion,” described how they generated the paper.

The peer reviewers who read the paper and reported being initially fooled wrote in their feedback that they “celebrated the work because it proves its point,” Gina Wisker, the journal’s editor, wrote to The Post in an email.

Shipway, Peter Cotton and Debby Cotton feel they have proved their point as well. The paper has received thousands of views and downloads and is being cited weekly, Shipway said.

Their concern now is what comes next. Generative AI is evolving quickly — GPT-4, the successor to the program that powered ChatGPT when the three authors wrote their paper, is the new talk of the town — and will probably outpace universities’ attempts to craft policies to address it, they said.

“In a year’s time, I suspect they’re going to be far more sophisticated,” Peter Cotton said. “… By that stage, universities will just about be deciding that they’ve rubber-stamped the first changes to regulations, which will already be even more out of date than our paper.”

The three speculated that university departments and educators might have to decide individually how to deal with generative AI in their instruction. Shipway and Peter Cotton said that much of the work in their field of marine biology, such as fieldwork, can’t be faked with a machine. Debby Cotton said that ChatGPT could help automate the less engaging parts of academia, like grading. But she worried about its effect on students’ writing ability. The paper the three published was able to fool peer reviewers, but the writing was stale, she noted.

“ChatGPT just has this tone, and yes, you can ask it to write in different ways, but each individual should develop their own voice,” Debby Cotton said. “That’s a part of learning to write.”

In the paper’s acknowledgments section, the three authors described a few further limitations of ChatGPT. At one point, they considered adding the program as a co-author, they wrote.

But “on considering the requirements of a co-author on the journal website, ChatGPT came up short in a number of areas,” they added, noting that ChatGPT could not review the article nor agree to its submission to a journal. “… Perhaps more crucially, it cannot take responsibility and be accountable for the contents of the article.”

Source link: https://www.washingtonpost.com/nation/2023/03/23/chatgpt-paper-plagiarism-university/
