A hacker has created a version of ChatGPT without any moral or ethical constraints. The tool, called WormGPT, has no guardrails and responds to any malicious request.
According to a blog post by security firm SlashNext, which tested the tool, cybercriminals have started using WormGPT to create highly plausible emails personalized to each recipient.
As you might guess, this increases the chances of a successful phishing attack. Phishing is a form of social-engineering attack where people are tricked into revealing information, usually by something like a convincing email.
The tool is currently being sold on a popular hacking forum and was trained on a diverse array of data, including malware. You can even ask it to create malware for you.
WormGPT runs on a model called GPT-J, released by EleutherAI in 2021. It has 6 billion parameters and a vocabulary of 50,257 tokens, the same as OpenAI's GPT-2.
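The 6-billion-parameter figure is easy to sanity-check from GPT-J's published configuration (28 transformer layers, hidden size 4096, feed-forward size 16384, the 50,257-token vocabulary). The sketch below is a rough estimate that ignores biases, layer norms, and positional details, not an exact count:

```python
# Back-of-the-envelope parameter count for GPT-J-6B.
# Config values are from the published GPT-J architecture;
# biases and layer-norm weights are ignored for simplicity.
n_layer, d_model, d_ff, vocab = 28, 4096, 16384, 50257

embed = vocab * d_model                 # token embedding matrix
attn_per_layer = 4 * d_model * d_model  # Q, K, V and output projections
mlp_per_layer = 2 * d_model * d_ff      # up- and down-projections
lm_head = vocab * d_model               # output projection back to the vocabulary

total = embed + n_layer * (attn_per_layer + mlp_per_layer) + lm_head
print(f"{total / 1e9:.1f}B parameters")  # roughly 6 billion
```

Running this lands within a few percent of the advertised 6 billion, which is why the model is commonly referred to as GPT-J-6B.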
The tool's developer then trained the model on large amounts of data about malware and other malicious techniques.
“In summary, it’s similar to ChatGPT but has no ethical boundaries or limitations,” wrote SlashNext.
“This experiment underscores the significant threat posed by generative AI technologies like WormGPT, even in the hands of novice cybercriminals.”