[Note: This article was updated on 8 February 2024.]
In previous articles, we’ve explored how generative artificial intelligence (AI) has massive potential – both positive and negative – to change the security game. As expected, hackers and cyber-criminals are quickly finding malicious use cases for large language models (LLMs).
One of the clearest examples is WormGPT, a dark counterpart to ChatGPT.
Researchers from cybersecurity firm SlashNext uncovered the promotion of WormGPT on a hacker forum in 2023. The firm said the module was being advertised as a “blackhat alternative” to ChatGPT. In other words, it’s a sinister new tool for carrying out illegal activities in a faster, more efficient, more scalable way.
Here’s what’s most concerning for finance leaders: the tool appears perfectly suited for more effective business email compromise (BEC) attacks, one of the most common tactics for bamboozling accounts payable (AP) employees and defrauding organisations. And it’s hardly the only one of its kind, with researchers already finding other malicious bots like FraudGPT.
In our generative AI explainer, we touched on the ways that scammers might apply LLM-based tools for fraudulent or criminal purposes. Tools like ChatGPT help users construct polished text-based messages, including those written in highly specific styles and tones of voice.
It’s not hard to imagine how this could help cyber-criminals impersonate your trusted contacts and suppliers. But SlashNext has highlighted the specific ways in which malicious actors are using this new tech for BECs.
On one cyber-crime forum, a recent thread illustrated the grim potential of using generative AI for illegal activities. A participant suggested crafting emails in their native tongue, translating the text and then feeding it into an AI interface to enhance its sophistication and formality.
Basically, it’s now far easier for attackers to create convincing emails for phishing or BEC attacks, even without proficiency in the target language.
Main takeaway: AI helps users create more polished, professional-sounding text in any style or tone. That user group includes scammers, who no longer need technical skills or language proficiency to target your organisation more effectively than ever.
AI companies incorporate security controls into interfaces like ChatGPT and Google Bard, attempting to minimise the ways they can be used for cyber-crime, disinformation and other bad-faith purposes. While these guardrails can’t stop a fraudster from using AI to proofread and polish their phishing messages, they can make it harder to use AI for activities like generating malicious code.
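To make “guardrails” a little more concrete: providers typically screen prompts and outputs against a safety policy and refuse requests that trip it. The sketch below is a minimal, illustrative example of that pattern using OpenAI’s public moderation endpoint – it is not how ChatGPT’s internal safeguards are actually built, and it assumes the openai Python package (v1.x) and an API key in your environment.

```python
# Minimal sketch: screen a prompt against a safety policy before generating
# anything. Uses OpenAI's public moderation endpoint purely to illustrate the
# pattern - real chat products layer many more controls on top of this.
# Assumes the `openai` Python package (v1.x) and an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt passes moderation and can be sent to a model."""
    result = client.moderations.create(input=prompt)
    return not result.results[0].flagged

if __name__ == "__main__":
    prompt = "Write a polite payment reminder email to a supplier."
    if screen_prompt(prompt):
        print("Prompt passed moderation - safe to send to the model.")
    else:
        print("Prompt flagged - request refused.")
```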
But, famously, hackers are always looking to exploit vulnerabilities. Researchers have been sounding the alarm about the ways that AI guardrails can be circumvented or manipulated, with users sidestepping controls through cleverly constructed prompts.
Sure enough, SlashNext found similar discussions around the development of “jailbreaks” for AI interfaces. These inputs aim to manipulate the AI into generating potentially harmful output.
Cyber-criminals aren’t just manipulating existing interfaces, though. The SlashNext investigation confirmed that some are going a step further to create their own custom AI modules, closely resembling ChatGPT yet tailor-made for ill-intentioned uses. Particularly enterprising cyber-criminals are even advertising these modules to other criminals.
Enter WormGPT.
Main takeaway: Fraudsters were already finding workarounds to use existing AI models for malicious purposes, but malicious AI tools will help them skip the need for workarounds.
WormGPT basically functions like ChatGPT or Bard, just without ethical guardrails. Not only does it lack the content moderation of its ethical counterparts, but (allegedly) it’s been trained on data sources that include malware-related information.
Operating under a veil of anonymity, the AI tool’s designer promises users the ability to generate illicit material, including convincing phishing emails and possibly even malicious code. Its features include chat memory retention, code formatting and unlimited character support. To test these capabilities, the researchers at SlashNext accessed WormGPT and used it to design an email that could dupe an AP employee into paying a fraudulent invoice. They’ve called the results “unsettling” and “remarkably persuasive.”
The tool is reportedly marketed through a subscription model, with prices ranging from $60 to $700, according to a Telegram channel promoting WormGPT. In 2023, a member of that channel claimed it already had 1,500 users.
While it would be hard to quantify exactly how many users have accessed the blackhat bot, counterfeit versions have popped up and it’s been used in phishing attacks. It’s tempting to cheer for would-be scammers getting scammed themselves, but this is a concerning development for two reasons: the fact that WormGPT can function as a lure suggests there’s healthy demand for it, and that demand suggests scammers are finding real value in it.
Main takeaway: As the evil twin of ChatGPT, WormGPT is designed for malicious purposes like phishing and hacking. With a growing user base, it represents one of many fast-spreading AI tools that will make it easier to defraud your company.
Europol, in a 2023 report, emphasised the importance of monitoring this growing trend, suggesting that such “dark” LLMs could become a key criminal business model in the future. In the US, the Federal Trade Commission is investigating OpenAI, while the UK’s National Crime Agency warns of AI-enabled threats.
Of course, regulation famously moves at a slower clip than the unregulated, criminal corners of the web. That means organisations can’t stand by waiting for the problem to be solved by new legislation or policy changes.
The team at SlashNext is already warning about WormGPT’s BEC capabilities, but the broader problem is an influx of malicious AI tools that can make it easier for cyber-criminals to compromise systems, steal data or credentials, impersonate trusted contacts and successfully manipulate AP professionals into making fraudulent payments.
So there are big risks now, which are likely to evolve quickly and unpredictably. Finance leaders should be taking a close look at their controls and anti-fraud procedures. Are they ready for digital fraud attempts that are larger in volume and more strategically sophisticated?
Consider pressure-testing your current controls, and think about aligning other leaders around a CFO-driven cyber-crime strategy. With the right processes, tools and people in place, leaders can better equip their organisations against a new generation of AI-enabled threat actors.
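To make that pressure-testing a little more concrete, here is a deliberately simplified, hypothetical sketch of one common AP control: comparing an incoming payment request against the supplier master file and flagging mismatches for out-of-band verification. All names, fields and structures below are illustrative assumptions rather than a real system – dedicated verification tools and AP platforms implement far more robust checks.

```python
# Hypothetical sketch of a basic accounts-payable control: compare an incoming
# payment request against the supplier master file and flag anything that
# doesn't match for out-of-band verification (e.g. a phone call to a known
# contact). Field names and structures are illustrative only.
from dataclasses import dataclass

@dataclass
class SupplierRecord:
    name: str
    email_domain: str   # domain the supplier normally emails from
    bank_account: str   # verified account held in the master file

@dataclass
class PaymentRequest:
    supplier_name: str
    sender_email: str
    bank_account: str
    amount: float

def flag_for_review(request: PaymentRequest, master: dict) -> list:
    """Return a list of reasons this request needs manual, out-of-band verification."""
    reasons = []
    record = master.get(request.supplier_name)
    if record is None:
        return ["Supplier not found in master file"]
    sender_domain = request.sender_email.rsplit("@", 1)[-1].lower()
    if sender_domain != record.email_domain.lower():
        reasons.append(f"Sender domain '{sender_domain}' does not match '{record.email_domain}'")
    if request.bank_account != record.bank_account:
        reasons.append("Bank account differs from verified master-file details")
    return reasons

if __name__ == "__main__":
    master = {"Acme Supplies": SupplierRecord("Acme Supplies", "acmesupplies.com", "123-456 00012345")}
    suspicious = PaymentRequest("Acme Supplies", "accounts@acme-supplies.net", "987-654 00099999", 48200.00)
    for reason in flag_for_review(suspicious, master):
        print("FLAG:", reason)
```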
Eftsure provides continuous control monitoring to protect your EFT payments. Our multi-factor verification approach protects your organisation from financial loss due to cybercrime, fraud and error.