Finance worker loses $25m to deepfake scam

Shanna Hall

The age of deepfake fraud is truly here. 

Hong Kong police recently reported a fraudulent scheme involving deepfake technology, resulting in a finance worker transferring millions of dollars to impostors. The employee, working for a multinational firm, fell victim during a video conference call. 

The worker believed he was interacting with his company’s chief financial officer and colleagues, but later discovered that every other participant on the call was a deepfake imitation. The scammers requested a confidential transaction, leading to the transfer of approximately US$25 million (HK$195 million).

The incident has made international headlines, firing the starting gun in the race to defend businesses’ money against deepfake-enabled scams.

How did the deepfake scam happen?

Senior Superintendent Baron Chan Shun-ching of the Hong Kong police detailed the incident. The worker was initially sceptical and dismissed a suspicious email from someone claiming to be the firm’s UK-based chief financial officer – this wasn’t a reckless employee, but a cautious sceptic who was alert to the possibility of phishing.

But the worker’s doubts subsided after the extremely realistic video call.

Tip of the iceberg: malicious deepfakes on the rise

Chan said the ruse was part of a broader pattern of deepfake-assisted frauds.

Hong Kong authorities reported six arrests related to similar scams. Investigations revealed that eight stolen Hong Kong identity cards facilitated 90 loan applications and 54 bank account registrations. Fraudsters used AI-generated deepfakes to deceive facial recognition systems on at least 20 occasions.

The misuse of deepfake technology extends beyond financial deception, with recent incidents – like the sexually explicit images of Taylor Swift that went viral – highlighting its potential for creating damaging and deceptive content.

Generative AI is ramping up scam attempts

Deepfake videos aren’t the only way scammers leverage generative artificial intelligence (AI). AI is largely acting as an accelerant for existing scam tactics, but it’s also creating entirely new threats:

  • Voice impersonation scams. New generative AI technology allows malicious actors to recreate a real person’s voice using only a few seconds of audio. This recreation can then be used to swindle targets out of money or information, or potentially bypass voice authentication processes.
  • BECs and phishing attacks. Business email compromise (BEC) tactics and phishing messages often rely on well-timed, carefully written emails or text messages to trick targets into making the wrong payment or giving up sensitive information. Large language models (LLMs) help fraudsters craft, test and scale these messages, radically improving the efficiency of common cybercrime tactics (one simple defensive check is sketched after this list).
  • Malicious code and new attack strategies. Some LLMs are specifically designed for illicit activity – their creators say they can generate malicious code or even help cybercriminals find new vulnerabilities or devise new attacks.
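
To make the defensive side concrete, below is a minimal Python sketch of one classic BEC counter-measure: flagging sender domains that closely resemble, but don’t exactly match, a domain you already trust. The domain list, threshold and function name are illustrative assumptions, not details from the Hong Kong case or any particular product.

    import difflib

    # Hypothetical list of domains your organisation already trusts.
    TRUSTED_DOMAINS = {"examplecorp.com", "trusted-supplier.com"}

    def flag_lookalike(sender_domain: str, threshold: float = 0.85) -> bool:
        """Flag a sender domain that resembles, but doesn't match, a trusted one."""
        if sender_domain in TRUSTED_DOMAINS:
            return False  # exact match: not a lookalike
        return any(
            difflib.SequenceMatcher(None, sender_domain, trusted).ratio() >= threshold
            for trusted in TRUSTED_DOMAINS
        )

    # A one-character substitution that slips past casual reading:
    print(flag_lookalike("examp1ecorp.com"))  # True -> route for manual review

Even a rough check like this catches the one-character substitutions (examp1ecorp.com for examplecorp.com) that human readers routinely miss.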

What can finance leaders do about deepfake threats?

To manage this new risk environment, finance leaders need to think creatively, stay informed and implement technology-driven processes. Crucially, this should involve a combination of solutions rather than relying only on training or financial controls that were designed during a pre-digital era.

Leaders will need to reassess three major areas:

  1. Processes. Your control procedures are some of your most critical defences when it comes to scams – whether they’re using old or new tactics. Pressure-testing can help you determine which processes (or people) may be most vulnerable to new AI-assisted scams.
  2. People. In the Hong Kong example, it’s clear the employee was aware of phishing risks but trusted a video call that turned out to be fake – if the worker had been trained to scrutinise video authenticity, they might have taken a few extra steps to verify the request, demonstrating the vital importance of staff awareness and training.
  3. Technology. Fight fire with fire – cybercriminals will keep leveraging the latest technology to work faster and smarter, so finance leaders will need to do the same. Look for solutions that equip employees with additional information and automate previously manual steps in your control procedures, reducing the risk of human error. It’s unrealistic to expect humans never to make mistakes, so technology should be doing some of the heavy lifting (see the sketch after this list for one way an automated control might work).
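
As a concrete illustration of that last point, here is a minimal, hypothetical Python sketch of an automated control: it holds any payment whose bank details don’t match a record that was verified out-of-band (for example, via a call-back to a known phone number). The supplier registry, names and hold messages are assumptions for illustration only, not a description of any specific product.

    from dataclasses import dataclass

    @dataclass
    class PaymentRequest:
        supplier: str
        account_number: str
        amount: float

    # Hypothetical registry of bank details confirmed out-of-band,
    # e.g. via a call-back to a phone number on file.
    VERIFIED_ACCOUNTS = {"Acme Supplies": "123-456-789"}

    def release_payment(request: PaymentRequest) -> str:
        """Hold any payment whose details don't match a verified record."""
        verified = VERIFIED_ACCOUNTS.get(request.supplier)
        if verified is None:
            return "HOLD: no verified bank record for this supplier"
        if request.account_number != verified:
            return "HOLD: details differ from verified record - call back to confirm"
        return f"RELEASE: {request.amount:,.2f} to {request.supplier}"

    # A convincing video call can't override the control: a request to pay
    # an unverified account is held automatically.
    print(release_payment(PaymentRequest("Acme Supplies", "999-999-999", 25_000_000)))

The point isn’t this specific code; it’s that a hard, automated stop removes the human judgement call that the Hong Kong scammers exploited.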
Want more tips for protecting your business’s money? 
Whether scams are assisted by AI or not, your organisation’s financial health and reputation depend on your ability to thwart all fraud attempts. Check out our Cybersecurity Guide for CFOs to learn more about defending your business in a dangerous new era of AI and cybercrime.

