Is there a way to tell if a text was created using Artificial Intelligence-assisted writing tools? Can these tools reliably tell whether a person or a machine wrote a text? With the growth of AI content production platforms and technologies like ChatGPT, we are seeing far more generative content, and it is becoming increasingly difficult to discern which pieces were written by humans.
If you are a writer, you may see this as one more competitor to contend with. And you would be right: AI-generated content is already good, and it will only improve.
AI Everywhere
Organisations have begun to invest in methods to recognise AI-generated content. OpenAI is developing a watermarking mechanism to help distinguish human from AI output, but no production-ready technique is yet available.
AI is generating blog posts, high school essays, research papers, product descriptions for e-commerce, and even pieces of code. Further, as AI gets better at imitating the way humans write, it raises the question of whether it will even be possible to reliably distinguish text written by humans from text written by machines.
However, what is the point? Why does it matter if an AI can create an article that looks and feels identical to one authored by a human? First, it is becoming harder to trust what we read. In a world where anybody may say anything, distinguishing between reality and fiction is crucial.
With the rise of AI-generated material, the legitimacy of popular websites, blogs, and scholarly literature will be at risk if readers cannot be certain of the source behind the content. According to Google's Webmaster Guidelines, content generated automatically using AI writing tools is viewed negatively by its search engine.
So how can you confirm that the content you are reading is authentic?
Yin And Yang - AI vs AI?
Yes, there are AI content detection technologies, such as HuggingFace's OpenAI GPT-2 Detector and GLTR, developed by a small team of AI researchers at MIT and Harvard. They take the text you provide as input and return an estimate of the likelihood that it was generated by artificial intelligence. Notably, the GPT-2 detector can often flag GPT-3 writings as well.
There is therefore no reason why Google cannot develop and deploy AI-based automated content detection systems of its own. It appears to be only a matter of time before Google and other search engines begin to penalise and demote websites employing AI-generated content on a massive scale.
With these tools, you can get a sense of whether a human or a computer wrote a text. However, both are supplied free as demonstrations rather than as finished products for end users, so their results are only indicative: they can produce both false positives and false negatives.
Even so, it is unsettling to watch them discern human-written language from AI-written material almost instantly. Commercial options also exist: OriginalityAI, for example, offers AI content detection and plagiarism checking for serious content producers. Let's take a quick look at how the two free tools identify AI-generated material.
HuggingFace's GPT-2 Output Detector application is based on OpenAI's openly available source code. It examines the text you provide and returns the percentage likelihood that it is Real (human-written) or Fake (AI-generated).
Giant Language Model Test Room, or GLTR, is comparable to a forensic instrument that examines a body of text. Each word is coloured green, yellow, red, or violet. These hues indicate how highly a language model ranks that word as a prediction given the preceding context. So, for example, if most of your text is displayed in green, it was likely generated using a language model. And if your writing contains a healthy variety of colours, a human probably authored it.
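The bucketing behind those colours can be sketched in a few lines. This is a simplified illustration, not GLTR's actual code: the `rank` values are assumed to come from a language model (such as GPT-2) scoring each word against its predictions for that position, and the bucket thresholds shown here (top 10, top 100, top 1,000) follow GLTR's published colour scheme.

```python
def colour_for_rank(rank):
    """Map a model-assigned prediction rank to a GLTR-style colour bucket."""
    if rank <= 10:
        return "green"    # word was among the model's top-10 guesses
    if rank <= 100:
        return "yellow"
    if rank <= 1000:
        return "red"
    return "violet"       # very unlikely under the model: a human-like choice


def colour_text(ranks):
    """Colour a sequence of per-word ranks, one bucket per word."""
    return [colour_for_rank(r) for r in ranks]


# A run that is almost all green suggests machine-generated text;
# a mix of colours suggests a human author.
machine_like = colour_text([1, 2, 1, 4, 3, 7])
human_like = colour_text([1, 250, 15, 4000, 90, 2])
```

The intuition is that language models tend to pick their own most probable words, while humans routinely reach for lower-ranked, less predictable choices.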
Why AI Detection Tools?
Many variations of AI detection tools will be necessary to enforce prohibitions on AI-generated text and code, such as the one recently announced by Stack Overflow after it was swamped by volunteers using ChatGPT to answer coding questions posted on the site.
ChatGPT can confidently recite solutions to software problems, but it is not failsafe. Incorrect code can result in buggy, defective software, which is costly and potentially chaotic to repair. The whole purpose of a website like Stack Overflow is defeated if people depend on ChatGPT to "help" those facing genuine difficulties.
In reality, however, policing AI-generated content is exceedingly challenging, and such prohibitions are probably close to impossible to enforce.
Turnitin Ups Its Game
Turnitin claims to have developed software to detect whether a student has used an artificial intelligence chatbot, such as ChatGPT, in their work. In response to the proliferation of assisted writing software, the leading plagiarism detection service has established an AI Innovation Lab to determine whether an essay was composed with an AI writing tool.
Annie Chechitelli, Chief Product Officer of Turnitin, stated, "Our model has been trained on academic writing from a comprehensive database, as opposed to merely public information." As a result, Turnitin could grow increasingly adept at detecting possible academic dishonesty in student papers.
Research into detecting AI-generated text is progressing on various fronts, in both industry and academia. Common strategies involve assessing several aspects of the text, such as its readability, the frequency of specific phrases, punctuation usage, or sentence-length patterns. Soon, Google, Bing, and other search engines could start marking content as plagiarised or AI-generated.
The safest course, therefore, is to stick to human writing workflows and deliver high-quality work, so as not to sully one's own reputation. Of course, these AI detection tools still need a great deal more work before they can identify AI-authored articles foolproof.
Hopefully, the evolution of search engines to detect AI-generated content will also improve the web in other ways, making it easier for sceptical readers to smoke out fake news and misinformation.
Nivash Jeevanandam writes stories about the AI landscape in India and around the world, with a focus on the long-term impact on individuals and society. Views are personal and do not represent the stand of this publication.