
How AI is challenging cybersecurity

The AI era’s cybersecurity threats need more scrutiny. In this article, we analyse three of them: jailbreaking of generative AI tools like ChatGPT can turn them into assistants for cybercriminals; malicious AI tools like WormGPT ease phishing attacks; and large language models can produce dynamically changing malware code, making detection harder.

August 18, 2023 / 11:47 IST
With AI now serving as a potent, versatile and accessible tool, the whole cybersecurity paradigm could see a dramatic overhaul. (Image: Unsplash)

Cyberthreats are everywhere. We regularly hear about people losing money because they shared their OTPs and other sensitive information with bad actors. Social media accounts are regularly compromised, and even critical infrastructure suffers outages and disruptions from incessant attacks by nefarious groups. The threat landscape is thus wide, and dealing with cybercrime is extremely challenging.

This expansive list of vulnerabilities, with over 78,000 known exploits, is not all there is to cybercrime, however. With AI now serving as a potent, versatile and accessible tool, the whole cybersecurity paradigm could see a dramatic overhaul: AI simultaneously changes the nature of existing threats while creating new challenges that we do not yet know how to respond to.

To understand how AI is changing the nature of existing threats, we first need to look at the most common forms of attack that people encounter. These include phishing, social engineering, distributed denial-of-service (DDoS) attacks, business email compromise, and ransomware, among others.

Off-The-Shelf Cybercrime Aids

AI is making all of these crimes easier to commit by automating them, lowering the barrier to entry, and allowing the same attacks to occur on a much larger scale. It is also creating entirely new forms of threat that modern cyber defences are ill-equipped to handle.

Let us look at some recent phishing attacks. Phishing involves gaining a victim’s trust by pretending to be something you are not, and then leveraging that trust to do your mischief. Traditionally, such attacks required a certain command of the language and familiarity with a sector’s lexicon. Generative AI, with its power of language synthesis, briskly dispenses with those requirements.

For example, tools like WormGPT, a large language model trained on malware-related data, make phishing attacks much easier to conduct because they know what kind of communication appears convincing to the victim. This makes them useful both to hackers going after a particular target and to those running massive phishing campaigns, as they can curate different messages for different audiences.

SlashNext, the security firm that first intercepted WormGPT, also found it highly effective in business email compromise attacks: it could generate convincing emails that pressured an account manager into paying fraudulent invoices.
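To appreciate just how low the language barrier has fallen, consider how little code fluent, personalised text now takes. Below is a minimal sketch using OpenAI’s official Python client with a deliberately benign prompt; the model name is a placeholder, and an API key is assumed to be configured.

```python
# Minimal sketch: fluent, personalised text on demand.
# Assumes the official openai package (pip install openai) and an
# API key exported as OPENAI_API_KEY; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{
        "role": "user",
        "content": (
            "Draft a short, formal email to a finance team "
            "requesting a status update on a pending invoice."
        ),
    }],
)

print(response.choices[0].message.content)
```

The request here is harmless, but the mechanism is the point: pointed at an unaligned model like WormGPT, the same handful of lines could mass-produce tailored lures, and the quality of the prose no longer depends on the writer’s command of the language or the sector’s lexicon.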

Beyond such narrow attacks that exploit a single vector like phishing or business email compromise, there are now general-purpose AI tools that can assist with almost any type of cybercrime. Tools like FraudGPT, reportedly priced from $200 a month up to $1,700 a year, are emerging as all-in-one solutions: they provide tutorials on hacking machines and networks, identify vulnerable websites, create malware, and give step-by-step instructions. They are also circulating on mainstream networks like Telegram, showing just how close these threats are to our shores.

Making ChatGPT Go Rogue

Beyond these specialised tools built by black-hat developers, another threat we need to be cognisant of is the misuse of existing platforms for malicious purposes. Jailbreaking, something we rarely hear about in AI’s mainstream coverage, is a potent threat: it involves feeding crafted text prompts to tools like ChatGPT or Google’s Bard so that their ethical safeguards break and they become free of restrictions. In this way, AI chatbots are transformed into powerful assistants for criminal enterprise.

This weaponisation of general-purpose AI tools, while crucial to prevent, has proved exceedingly difficult to manage. A recent paper out of Carnegie Mellon University described a universal jailbreak technique that works across AI models and can generate a near-infinite number of prompts to break AI safeguards.

Moreover, developers and adopters of AI regularly try to find new ways to “hack” AI systems, and they keep succeeding. Indeed, no universal defence against jailbreaking is currently known, and governments and corporates should be quite concerned about this as AI’s mass adoption continues to take off.

AI-Generated Malware’s Coming

Yet another cybercrime avenue through AI is runtime code synthesis.

Runtime code synthesis, or dynamic code creation, generates a new form of the malware each time the code is executed. Traditional malware, by contrast, produces only a single variant at execution, which makes it easier to detect and control.

As a proof of concept, HYAS Labs used a large language model to create a polymorphic malware called BlackMamba, which worked as a keylogging tool that could collect sensitive information like usernames, passwords, and credit card numbers.

Because it took a different form each time the code was executed, it could slip past automated, signature-based security detection without raising red flags. It also showed clearly that benign code can be dynamically converted into a malicious program at runtime, a new form of threat that we need to learn how to respond to.
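To make the idea concrete, here is a harmless sketch of the same principle in Python. Instead of asking an LLM for a new payload, it simply randomises identifiers at runtime, but the effect on signature-based detection is the same: identical behaviour on every run, a different fingerprint on every run.

```python
# Benign illustration of runtime code synthesis: each run builds a
# different variant of the same behaviour, so no two runs share a
# byte signature. (BlackMamba asked an LLM for each variant; here
# random identifiers keep the example harmless.)
import hashlib
import random
import string

def random_name(length=8):
    return "".join(random.choices(string.ascii_lowercase, k=length))

# Synthesise a fresh source-code variant at runtime.
func_name, arg_name = random_name(), random_name()
source = (
    f"def {func_name}({arg_name}):\n"
    f"    return sum(ord(c) for c in {arg_name})\n"
)

# The behaviour is identical on every run...
namespace = {}
exec(source, namespace)
print(namespace[func_name]("hello"))  # always prints 532

# ...but the bytes, and hence any signature, differ on every run.
print(hashlib.sha256(source.encode()).hexdigest())
```

A scanner matching known byte patterns never sees the same artefact twice, which is why defenders are shifting towards behavioural detection for this class of threat.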

Boom Time For Crime & Crimefighting

Overall, all of this demonstrates that as generative AI becomes an ever-larger part of our lives, our understanding of cybersecurity will need a radical rethink. This is best encapsulated by the statements Anthropic CEO Dario Amodei made at a US Senate hearing on AI.

Amodei stated that the medium term holds threats that are both imminent and severe. IBM has estimated that the time needed to deploy a ransomware attack fell by 94 percent over two years, and market research indicates that the global AI cybersecurity market will grow from $8.8 billion in 2020 to $38.2 billion in 2026.

What these two figures make clear is that the twin industries of cybercrime and cyber defence are poised for a monumental boom in the age of AI.

Srimant Mishra is a computer science engineer from VIT University, Vellore, with a deep interest in the field of Artificial Intelligence. He is currently pursuing a law degree at Utkal University, Bhubaneshwar. Views are personal, and do not represent the stand of this publication.
