WormGPT: how hackers are exploiting the dark side of AI

Shanna Hall

[Note: This article was updated on 8 February, 2024.]

In previous articles, we’ve explored how generative artificial intelligence (AI) has massive potential – both positive and negative – to change the security game. As expected, hackers and cyber-criminals are quickly finding malicious use cases for large language models (LLMs).

One of the clearest examples is WormGPT, a dark counterpart to ChatGPT.

Researchers from cybersecurity firm SlashNext uncovered the promotion of WormGPT on a hacker forum in 2023. The firm says it is being advertised as a “blackhat alternative” to ChatGPT. In other words, it’s a sinister new tool for carrying out illegal activities in a faster, more efficient and more scalable way.

Here’s what’s most concerning for finance leaders: the tool appears perfectly suited for more effective business email compromise (BEC) attacks, one of the most common tactics for bamboozling accounts payable (AP) employees and defrauding organisations. And it’s hardly the only one of its kind, with researchers already finding other malicious bots like FraudGPT.

Generative AI and its role in BEC attacks

In our generative AI explainer, we touched on the ways that scammers might apply LLM-based tools for fraudulent or criminal purposes. Tools like ChatGPT help users construct polished text-based messages, including those written in highly specific styles and tones of voice.

It’s not hard to imagine how this could help cyber-criminals impersonate your trusted contacts and suppliers. But SlashNext has highlighted the specific ways in which malicious actors are using this new tech for BECs.

On one cyber-crime forum, a recent thread illustrated the grim potential of using generative AI for illegal activities. A participant suggested crafting emails in their native tongue, translating the text and then feeding it into an AI interface to enhance its sophistication and formality.

Basically, it’s now far easier for attackers to create convincing emails for phishing or BEC attacks, even without proficiency in the target language.

Main takeaway: AI helps users create more polished, professional-sounding text in any style or tone. That user group includes scammers, who no longer need technical skills or language proficiency to target your organisation more effectively than ever.

Fraudsters are maximising AI’s criminal capabilities

AI companies incorporate security controls into interfaces like ChatGPT and Google Bard, attempting to minimise the ways they can be used for cyber-crime, disinformation and other bad-faith purposes. While these guardrails can’t stop a fraudster from using AI to proofread and polish their phishing messages, they can make it harder to use AI for activities like generating malicious code.
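To make the idea of a guardrail concrete, here’s a minimal sketch of one common layer: screening a prompt with a moderation endpoint before it ever reaches the model. It uses OpenAI’s publicly documented moderation API, but the surrounding logic is an illustrative assumption, not a depiction of any vendor’s actual production pipeline.

```python
# Minimal sketch of a guardrail layer: screen the user's prompt with a
# moderation endpoint before it reaches the language model. Illustrative
# only; production guardrails combine many layers, not one API call.

from openai import OpenAI  # official OpenAI Python client (v1+)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt passes the moderation check."""
    result = client.moderations.create(input=prompt)
    return not result.results[0].flagged

prompt = "Write a polite payment-reminder email to a supplier."
if screen_prompt(prompt):
    print("Prompt accepted; pass it to the model.")
else:
    print("Prompt flagged; refuse or escalate for human review.")
```

Real guardrails stack several layers like this one, which is exactly why attackers invest in prompts designed to slip past them.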

But, famously, hackers are always looking to exploit vulnerabilities. Researchers have been ringing the alarm about the ways that AI guardrails can be circumvented or manipulated, with users sidestepping controls through cleverly constructed prompts.

Sure enough, SlashNext found similar discussions around the development of “jailbreaks” for AI interfaces. These inputs aim to manipulate the AI into generating potentially harmful output.

Cyber-criminals aren’t just manipulating existing interfaces, though. The SlashNext investigation confirmed that some are going a step further to create their own custom AI modules, closely resembling ChatGPT yet tailor-made for ill-intentioned uses. Particularly enterprising cyber-criminals are even advertising these modules to other criminals.

Enter WormGPT.

Main takeaway: Fraudsters were already finding workarounds to bend mainstream AI models to malicious purposes. Purpose-built malicious AI tools remove the need for workarounds altogether.

What is WormGPT, a “blackhat alternative” to ChatGPT?

WormGPT (reportedly built on the open-source GPT-J language model) functions much like ChatGPT or Bard, just without the ethical guardrails. Not only does it lack the content moderation of its mainstream counterparts, but it has also allegedly been trained on data sources that include malware-related information.

Operating under a veil of anonymity, the AI tool’s designer promises users the ability to generate illicit material, including convincing phishing emails and possibly even malicious code. Its features include chat memory retention, code formatting and unlimited character support. To test these capabilities, the researchers at SlashNext accessed WormGPT and used it to design an email that could dupe an AP employee into paying a fraudulent invoice. They’ve called the results “unsettling” and “remarkably persuasive.”

WormGPT’s growing userbase

The tool is reportedly marketed on a subscription model, with prices ranging from $60 to $700, according to a Telegram channel promoting WormGPT. By 2023, a member of that channel claimed it already had 1,500 users.

While it would be hard to quantify exactly how many users have accessed the blackhat bot, counterfeit versions have popped up and the tool has been used in real phishing attacks. It’s tempting to cheer for would-be scammers getting scammed themselves, but the fact that WormGPT can function as a lure suggests healthy demand for it. Unfortunately, that demand also suggests scammers are finding real value in it.

A new wave of cyber-crime tools

It’s a concerning development for two reasons:

  1. Even inexperienced cyber-criminals can harness generative AI to produce malicious code or transform rudimentary phishing scams into intricate operations, increasing the likelihood of success. Versions of WormGPT are even occasionally accessible on the surface web, meaning there are lower barriers to using it.
  2. WormGPT is only one example of a growing black market for ‘malicious AI’ tools, which is expanding just as legitimate tools have proliferated since ChatGPT’s release. One of these malicious LLMs is FraudGPT, a tool specifically designed for and marketed to “fraudsters, hackers, scammers and like-minded individuals.” In other words, the cyber-crime arsenal is getting a lot bigger.

Main takeaway: As the evil twin of ChatGPT, WormGPT is designed for malicious purposes like phishing and hacking. With a growing user base, it represents one of many fast-spreading AI tools that will make it easier to defraud your company.

Regulatory bodies respond

In a 2023 report, Europol emphasised the importance of monitoring this trend, warning that such “dark” LLMs could underpin future criminal business models. In the US, the Federal Trade Commission is investigating OpenAI, while the UK’s National Crime Agency has warned of AI-enabled threats.

Of course, regulation moves at an infamously slower clip than the unregulated, criminal corners of the web. That means organisations can’t afford to stand by and wait for new legislation or policy changes to solve the problem.

What does this mean for finance and AP teams?

The team at SlashNext is already warning about WormGPT’s BEC capabilities, but the broader problem is an influx of malicious AI tools that can make it easier for cyber-criminals to compromise systems, steal data or credentials, impersonate trusted contacts and successfully manipulate AP professionals into making fraudulent payments.

So the risks are here now, and they’re likely to evolve quickly and unpredictably. Finance leaders should be taking a close look at their controls and anti-fraud procedures. Are they ready for digital fraud attempts that are larger in volume and more strategically sophisticated?

Consider pressure-testing your current controls, and think about aligning other leaders around a CFO-driven cyber-crime strategy. With the right processes, tools and people in place, leaders can better equip their organisations against a new generation of AI-enabled threat actors.
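As one concrete example of a control worth pressure-testing, the sketch below flags inbound sender domains that closely resemble (but don’t exactly match) a trusted supplier’s domain, a common BEC impersonation trick. The supplier list, threshold and similarity measure are illustrative assumptions, not a production-grade fraud filter.

```python
# Illustrative sketch: flag sender domains that are near (but not exact)
# matches to a known supplier domain, a common BEC impersonation trick.
# The trusted list and 0.85 threshold are assumptions for demonstration.

from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"acme-supplies.com", "globex-payments.co.uk"}  # hypothetical suppliers

def similarity(a: str, b: str) -> float:
    """Return a 0-1 similarity ratio between two domain strings."""
    return SequenceMatcher(None, a, b).ratio()

def is_lookalike(sender_domain: str, threshold: float = 0.85) -> bool:
    """True if the domain is suspiciously close to a trusted domain without matching it."""
    domain = sender_domain.lower().strip()
    if domain in TRUSTED_DOMAINS:
        return False  # exact match: a genuinely trusted sender
    return any(similarity(domain, trusted) >= threshold for trusted in TRUSTED_DOMAINS)

# Example: one genuine domain, one single-character impersonation, one unrelated domain
for d in ("acme-supplies.com", "acme-suppiies.com", "example.org"):
    print(f"{d}: {'FLAG for review' if is_lookalike(d) else 'ok'}")
```

In practice, a check like this would sit alongside dual approval for payment changes, call-back verification of new bank details, and email authentication standards such as SPF, DKIM and DMARC.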

Cybersecurity Guide for CFOs 2024
Guide: defend against AI-enabled attacks
Tools like WormGPT are just the tip of the iceberg: generative AI is helping cyber-criminals sharpen their tactics at scale. Find out why, as well as how to protect your organisation, in the newest edition of the Cybersecurity Guide for CFOs.
