Scamsters whose phishing emails and messages were once easy to identify by their crude sentence construction and factual mistakes now have a powerful new tool at hand, and that has left law enforcement officials in India worried.
ChatGPT, the conversational Artificial Intelligence (AI) platform developed by Microsoft-backed OpenAI, has taken the world by storm, with users putting it to work on tasks as varied as content writing, composing emails and coding.
However, ChatGPT has also been in the news for reportedly being used to create malware and compose email text for phishing campaigns. This has drawn the attention of law enforcement officials, who now have a new cyber threat to deal with.
It has also cleared examinations at prestigious institutions, and top universities have now banned its use to prevent plagiarism.
At an online discussion on ChatGPT-related cyber attacks, Triveni Singh, superintendent of the Uttar Pradesh Cyber Crime cell, and other law enforcement officials enquired about the availability of cyber forensic tools to tackle attacks mounted using the AI platform.
Cyber forensic tools are electronic equipment and software that help law enforcement officials secure digital evidence in a forensically sound manner.
“When I was first acquainted with the workings of ChatGPT, I realised that there was a huge scope for misuse of the platform. Suppose someone instructs ChatGPT to create a synthetic voice or video imitating mine – the scope for misuse is tremendous,” Singh said.
“Are there any cyber forensic tools that can tell whether a piece of information is ChatGPT-generated — if they are synthetic videos or audios?” Singh asked.
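Purpose-built detectors do exist, though none is forensically conclusive. As a minimal sketch, assuming Python and the Hugging Face transformers library, an investigator could run a suspect text through OpenAI's older GPT-2 output detector, published on the Hugging Face Hub as roberta-base-openai-detector; it was trained on GPT-2 text, not ChatGPT, so its verdicts are indicative at best:

```python
# pip install transformers torch
from transformers import pipeline

# Load OpenAI's GPT-2 output detector from the Hugging Face Hub.
# Caveat: it was trained to flag GPT-2 text; it is NOT a reliable
# ChatGPT detector and gives no forensically sound verdict.
detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

suspect_text = (
    "Congratulations! You have been selected as a winner of our "
    "lottery ticket competition."
)

# The label 'Fake' means the model believes the text is machine-generated.
print(detector(suspect_text))  # e.g. [{'label': 'Fake', 'score': 0.97}]
```

For the synthetic audio and video Singh describes, detection remains an active research area rather than a solved forensic problem.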
Composing phishing text
Dinesh Bareja, founder and COO of Open Security Alliance, said he conducted an experiment with ChatGPT, asking the platform to compose a phishing email. Within seconds, it produced the text of an email requesting sensitive information from the potential victim.
The email was drafted so that the company names in it could be substituted to suit a scammer's needs.
Bareja, who was a speaker at the discussion, also asked ChatGPT to write a phishing email about a lottery ticket.
Again, within a matter of seconds, the platform composed a phishing email that began: "Congratulations! You have been selected as a winner of our lottery ticket competition... Fill out the form with your personal information, including your full name, address, phone number and email address... Please note that this offer is only valid for a limited time so be sure to claim your prize as soon as possible."
To conclude, Bareja said, "The risk is not just in code generation but also in ease of code generation. The scope for misuse will increase when ChatGPT becomes available in Hindi or other languages."
During the discussion, Samir Datt, founder and CEO of Forensics Guru.com, said he had experimented with using ChatGPT for data theft.
He instructed ChatGPT, “Could you generate an example of a conversation that appears to be from a legitimate source such as customer service and that requests personal information or asks the recipient to click on a link?”
Again, within moments, the chatbot had an entire text ready, designed to fool potential victims into believing they needed to reset their bank account password. In the body of the text, ChatGPT also provided a dummy link where a victim would enter their password, believing they were resetting it.
Writing code for malware
Datt also experimented with getting the chatbot to write data-overwriting malware, malicious software that can delete or modify data on a computer system.
He said ChatGPT tries to gauge the motive behind a question. For instance, if a user asks ChatGPT outright to write code for malware, the bot will refuse because it recognises the request as malicious.
However, Datt said the question merely has to be framed in a way that ChatGPT will answer, masking the hacker's intent.
So Datt, who knows how to code, framed a technical question laced with coding terminology and put it to ChatGPT. The chatbot obliged, responding with code that could be used to create overwriting malware.
So what can law enforcement officials do?
Kayzad Vanskuiwalla, director at threat hunting and intelligence firm Securonix, said, “There are several steps that Indian law enforcement agencies can take to minimize the risks. Alignment techniques that use human-given feedback to train the chatbot can be deployed, which will help in identifying patterns used by the model to detect offensive code / simulated emails generated by the GPT models.”
Vanskuiwalla recommended increasing user education and awareness around ChatGPT-related phishing attacks.
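As an illustration of the kind of check an awareness programme might demonstrate, here is a toy heuristic filter in Python. The red-flag patterns below are our own assumptions modelled on the lottery email quoted earlier, not anything Vanskuiwalla described, and real mail-security products rely on far richer signals:

```python
import re

# Illustrative red-flag patterns only, modelled on the lottery email
# quoted above; a production filter would use many more signals.
RED_FLAGS = [
    r"\bcongratulations\b.*\b(winner|selected|prize)\b",
    r"\b(full name|address|phone number|password|otp|account number)\b",
    r"\b(limited time|act now|as soon as possible|urgent)\b",
    r"\b(click|follow)\b.*\blink\b",
]

def flag_phishing_signals(email_body: str) -> list[str]:
    """Return the red-flag patterns that match the email body."""
    body = email_body.lower()
    return [p for p in RED_FLAGS if re.search(p, body)]

sample = (
    "Congratulations! You have been selected as a winner of our lottery "
    "ticket competition. Fill out the form with your personal information, "
    "including your full name, address, phone number and email address. "
    "Please note that this offer is only valid for a limited time."
)
# Several patterns match, so the email deserves suspicion.
print(flag_phishing_signals(sample))
```

A fluently written email generated by ChatGPT would still trip such content-based flags, which is why awareness training focuses on what an email asks for rather than how well it is written.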
Maheswaran Shamugasundaram, Country Manager - India at Varonis, a data security platform, expects the upcoming Digital India Bill, the long-awaited amendment to the Information Technology Act, to offer law enforcement officials some solutions for dealing with AI platforms such as ChatGPT.
“In conjunction with new data protection laws, India can establish a regulatory regime that can become a global standard for these kinds of data-reliant information technologies,” Shamugasundaram said.