These students cheat with an AI (and it works)

If an artificial intelligence can generate convincing text on a given subject, why not have it write essays? That is precisely the reasoning of a student who testified under the Reddit pseudonym innovate_rye and decided to have their homework done by GPT-3, the AI developed by OpenAI.

Soon the end of homework?

They explained, for instance, that their university asked them to name five good and five bad uses of biotechnology. They simply put the question to the text generator. The results speak for themselves: assignments that used to take two hours of preparation now take only 20 minutes, and the grades are excellent.

While institutions rely on anti-plagiarism algorithms to fight cheating, AI-generated text goes completely unnoticed. To better understand why, our colleagues at Vice spoke with George Veletsianos, holder of the Canada Research Chair in Innovative Learning and Technology and associate professor at Royal Roads University.

According to him, this makes perfect sense: text generated by the AI is genuinely original and does not match any previous work. He therefore doubts that this kind of output can be detected by plagiarism-checking tools.

Nevertheless, using this kind of artificial intelligence for homework inevitably raises ethical questions, and for the time being OpenAI has not responded to our colleagues' article. As for the students, they are eagerly awaiting the release of GPT-4, an even more sophisticated AI that should produce better texts.

Inconclusive experiments in journalism

Remember that this is far from the first time we have covered this type of system. A media outlet specializing in video games, for instance, had fun asking a GPT-2-type generator to write a review.

The result was not very conclusive, and our colleagues noted: "The AI cannot rate a video game. Passages where the machine learning model was asked to draw on experience and emotion turned out better than expected, but on each occasion the model could be seen taking a 'fake it till you make it' approach. It introduced wrong details, sometimes even completely different project names and developers."

For its part, the British daily The Guardian tried to get GPT-3 to write an op-ed. Attractive on paper, the experiment was criticized by some AI specialists.

Daniel Leufer, for example, described the attempt as an "absolute joke." "It would actually have been interesting to see the eight essays the system produced, but editing them together in this way only adds hype and misinforms people who won't read the details," he added.
