Even OpenAI has given up trying to detect ChatGPT plagiarism

OpenAI, the creator of the wildly popular artificial intelligence (AI) chatbot ChatGPT, has shut down the tool it developed to detect content created by AI rather than humans. The tool, dubbed AI Classifier, was retired just six months after its launch due to its “low rate of accuracy,” OpenAI said.

Since ChatGPT and rival services skyrocketed in popularity, there has been concerted pushback from groups concerned about the consequences of unchecked AI usage. Educators, in particular, have been troubled by the potential for students to use ChatGPT to write their essays and assignments, then pass the work off as their own.

[Image: A laptop screen shows the home page for ChatGPT, OpenAI's artificial intelligence chatbot. Rolf van Root / Unsplash]

OpenAI’s AI Classifier was an attempt to allay the fears of these and other groups. The idea was that it could determine whether a piece of text was written by a human or by an AI chatbot, giving people a tool both to assess students fairly and to combat disinformation.

Yet even from the start, OpenAI did not seem to have much confidence in its own tool. In the blog post announcing the tool, OpenAI admitted that its “classifier is not fully reliable,” noting that it correctly identified AI-written text from a “challenge set” just 26% of the time.

The decision to drop the tool was made with little fanfare, and OpenAI has not published a dedicated announcement on its website. Instead, the company updated the blog post that introduced the AI Classifier, stating that “the AI classifier is no longer available due to its low rate of accuracy.”

The update continued: “We are working to incorporate feedback and are currently researching more effective provenance techniques for text, and have made a commitment to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated.”

Better tools are needed

[Image: A person typing on a laptop that is showing the ChatGPT generative AI website. Matheus Bertelli / Pexels]

The AI Classifier is not the only tool developed to detect AI-crafted content; rivals like GPTZero exist and will continue to operate despite OpenAI’s decision.

Past attempts to identify AI writing have backfired in spectacular fashion. In May 2023, for instance, a professor mistakenly flunked their entire class after asking ChatGPT to determine whether their students’ papers were plagiarized. Needless to say, ChatGPT got it badly wrong, and so did the professor.

It’s cause for concern when even OpenAI admits it can’t reliably detect writing produced by its own chatbot. The admission comes at a time of increasing anxiety about the destructive potential of AI chatbots and calls for a temporary pause on development in the field. If AI has as much of an impact as some people are predicting, the world is going to need stronger tools than OpenAI’s failed AI Classifier.
