Sydney hospital loses $2 million in alleged BEC fraud
A Sydney hospital lost $2M in a BEC scam. Learn how to protect your business with MFA, email authentication, and robust financial controls.
Imagine someone wants to defraud your company and has access to your emails (or your supplier’s emails). Now imagine how much damage they could do if artificial intelligence (AI) allowed them to automatically identify sensitive financial documents and swap in fraudulent details.
Unfortunately for finance teams, these AI capabilities already exist. Security researchers have uncovered an AI tool capable of quickly analysing compromised emails, identifying those with invoices or payment details, and altering the banking information. Then, the tampered invoice is replaced in the original message or sent to a preset contact list.
The end result? Finance teams are about to be inundated with a far greater volume of fraud attempts – ones that use more sophisticated tactics and harder-to-detect phony documents.
To be clear, the core methods described here aren’t new. They’ve been around for a while in business email compromise (BEC) attacks and wire fraud. What’s new is the mind-boggling speed and efficiency made possible through AI. Let’s look at why this is a game-changing threat for finance professionals.
This “invoice swapper” tool has been used in campaigns across Europe, including the UK, Spain, France, and Germany. Operating over the POP3 and IMAP4 email protocols, it scans compromised email accounts for messages containing invoices or payment details. Once it identifies those messages, it alters the banking information to reroute payments to accounts controlled by the fraudsters.
The platform includes options for configuring SMTP settings, making it easy to send a raft of fraudulent emails all at once. It also reports back to a Telegram channel – a harder-to-trace alternative to traditional command-and-control communication – with updates about the invoices it has generated.
For operation, the tool requires a list of compromised email accounts, along with specific credentials and banking codes (IBAN and BIC) for the spoofing process. The tool then scans each account for potential invoices to modify, systematically replacing recipient information with fraudulent details.
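The “spoofing process” described above relies on nothing more exotic than pattern matching. As a rough illustration – and, equally, a technique finance teams could use to audit their own mailboxes for exposed payment data – here is a minimal, hypothetical Python sketch that flags IBAN- and BIC-like strings in an email body (the patterns are simplified; real IBAN formats vary by country):

```python
import re

# Simplified IBAN pattern: two-letter country code, two check digits,
# then 11-30 alphanumeric characters (real lengths vary by country).
IBAN_RE = re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b")
# BIC/SWIFT codes: 8 or 11 characters (bank, country, location, branch).
BIC_RE = re.compile(r"\b[A-Z]{4}[A-Z]{2}[A-Z0-9]{2}(?:[A-Z0-9]{3})?\b")

def find_banking_details(text: str) -> dict:
    """Return any IBAN- or BIC-like strings found in an email body."""
    return {
        "ibans": IBAN_RE.findall(text),
        "bics": BIC_RE.findall(text),
    }

sample = "Please remit to IBAN DE89370400440532013000, BIC COBADEFFXXX."
print(find_banking_details(sample))
# {'ibans': ['DE89370400440532013000'], 'bics': ['COBADEFFXXX']}
```

A couple of regular expressions like these are all it takes to sift thousands of messages for payment details – which is exactly why automating this step gives fraudsters such an advantage of scale.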
Worryingly, the tool features multi-language support, allowing it to process invoices in various languages automatically and without human intervention. It replaces the banking details on these invoices with those specified by the perpetrator, then either reinserts the altered invoice into the original email or distributes it to a preselected list of contacts.
This tool represents a significant advancement in the use of AI for criminal purposes, particularly wire fraud and bogus invoice scams. Its automated, multilingual capacity poses a considerable threat to businesses, especially given the tendency for staff to take familiar-looking invoices at face value.
For finance teams, this type of tool turns fraud and cybercrime attempts into a matter of ‘when,’ not ‘if.’ It enables a volume of fraud attempts that’s simply impossible to counter with traditional approaches.
It’s not just an increase in the volume of attempts, though. The requirement for compromised email account credentials is part of a targeted approach. By focusing on specific accounts, often belonging to key financial personnel, the platform can significantly boost fraudsters’ success rate. Combined with other advanced features, this targeted approach places finance and AP professionals at a much higher risk of falling victim.
Plus, the use of SMTP settings for sending out fabricated invoices, as well as the Telegram channel feature for command-and-control communication, signifies a shift towards more advanced, less traceable methods of operation.
Unfortunately, this is just one of the new AI tools in a cybercriminal’s arsenal. While even more are likely on the horizon, we already know of several platforms explicitly designed and marketed for assisting criminal activities. And even tightly moderated LLMs like ChatGPT can be manipulated for nefarious purposes.
The invoice swapper tool by itself makes your organisation vulnerable through the sheer volume of fraud attempts it enables – but even that is a drop in the bucket compared to the potential for other AI tools to radically increase the number of attacks against your organisation.
The uncomfortable reality is that generative AI is advancing rapidly, creating bigger and more uncertain threats as the technology is used to refine itself and spawn new applications. Finance leaders will need a multi-faceted, layered approach to counterbalance cybercriminals’ growing advantages, including regular employee training.
But one of the most critical layers involves leveraging technology yourself. Fight fire with fire and look for automated ways to strengthen your controls and authentication measures, creating guardrails around the employees who are most likely to be targeted.
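One simple guardrail – sketched here as hypothetical Python, with an invented vendor master file – is to reject any payment whose IBAN either fails the standard ISO 13616 mod-97 checksum or doesn’t match the account details verified out-of-band for that supplier:

```python
APPROVED_VENDORS = {
    # Hypothetical vendor master file: supplier -> IBAN verified out-of-band
    # (e.g. by phoning a known contact, never using details from an email).
    "Acme Supplies": "DE89370400440532013000",
}

def iban_checksum_ok(iban: str) -> bool:
    """Check an IBAN against the ISO 13616 mod-97 rule."""
    iban = iban.replace(" ", "").upper()
    if not (15 <= len(iban) <= 34) or not iban[:2].isalpha():
        return False
    # Move the country code and check digits to the end, map letters
    # to numbers (A=10 ... Z=35), then test that the value mod 97 == 1.
    rearranged = iban[4:] + iban[:4]
    try:
        numeric = int("".join(str(int(ch, 36)) for ch in rearranged))
    except ValueError:
        return False
    return numeric % 97 == 1

def payment_allowed(supplier: str, iban: str) -> bool:
    """Allow a payment only if the IBAN is structurally valid AND
    matches the verified account on file for this supplier."""
    iban = iban.replace(" ", "").upper()
    return iban_checksum_ok(iban) and APPROVED_VENDORS.get(supplier) == iban
```

The checksum alone won’t stop a fraudster, who can easily supply a valid IBAN – the real protection is the second condition, which blocks any account number that wasn’t independently verified, no matter how legitimate the invoice looks.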