Cyber crime

7 Deepfake Attack Examples: Deepfake CEO Scams

Catherine Chipeta
5 Min

Imagine answering your phone and hearing the unmistakable voice of your CEO as they confidently instruct you to transfer funds. There’s no reason to doubt the request because every detail checks out. The voice, the tone, the casual reference to last week’s meeting.

But it isn’t your CEO at all. It’s an AI-generated deepfake, and your business has just entered the dangerous new frontier of cyber fraud, one where artificial intelligence mimics human voices and faces with unnerving precision.

In fact, North America experienced a staggering 1,740% increase in deepfake fraud in 2023, suggesting that the region may be a primary target for fraudsters due to its large digital economy and high adoption of online services.

Here are seven real-world CEO deepfake scams that showcase how far cybercriminals will go and why finance teams need stronger controls.

1. LastPass

You’d expect cybersecurity leader LastPass to be an unlikely target for scammers. But in early 2024, an employee received calls, texts, and a voicemail on WhatsApp from someone impersonating the company’s CEO.

Thankfully, the savvy employee ignored the messages and reported the incident to the company’s cybersecurity team. They recognized that the communication occurred outside of normal business channels and, perhaps more importantly, relied on forced urgency, a common social engineering tactic.

The models used in the scam were likely trained from videos of the CEO on YouTube and other publicly available sources.

2. Wiz

Cloud security firm Wiz faced a similar deepfake attack in late 2024. Criminals used AI technology to clone CEO Assaf Rappaport’s voice and then sent voicemails to dozens of employees asking them for their credentials.

However, the fraudsters made two critical errors:

  1. They created the AI deepfake from a conference talk where Rappaport had been a speaker. Unbeknownst to them, however, the CEO’s voice onstage differs from his day-to-day speaking voice.
  2. They targeted a cloud security company with employees who treat voicemails from the CEO with suspicion—particularly when his tone of voice sounded unusual.

While Wiz was not able to locate the fraudsters, the example demonstrates the power of an alert workforce in thwarting cyberattacks.

3. Ferrari

In July 2024, Italian automotive icon Ferrari experienced an attack where impostors attempted to deceive finance executives with a digital impersonation of CEO Benedetto Vigna.

The scammers first reached out to senior executives on WhatsApp with the question:

“Hey, did you hear about the big acquisition we’re planning? I could need your help.”

Then, they dialed it up a notch with a deepfake impersonation that replicated Vigna’s voice and distinctive southern Italian accent. Despite the near-perfect accent, Ferrari executives remained skeptical.

To verify the caller’s identity, one executive asked what book Vigna had recently recommended to them. The impostor couldn’t answer, of course, and abruptly ended the call.

4. Arup

An employee at British multinational Arup was duped into sending $39 million to fraudsters after the company’s CFO and other staff members were impersonated on a video call.

The employee was initially contacted via email by someone claiming to be from Arup’s UK office, stressing the need for a secret transaction. They suspected a phishing attempt, but their doubts evaporated after the video call because the impersonations were so realistic.

Arup’s example shows how much damage can be inflicted when basic verification steps are not followed.

5. WPP

Advertising behemoth WPP almost became another victim when scammers cloned the voice of CEO Mark Read.

In this example, attackers created a WhatsApp account using a public photo of Read and used it to set up a meeting on Microsoft Teams. Impersonating Read and another executive (both on and off camera), they asked staff to help establish a new business in a bid to solicit funds and personal details.

Fortunately, the attack was unsuccessful. “Thanks to the vigilance of our people, including the executive concerned, the incident was prevented,” explained a company spokesperson.

6. Italian business leaders

In early 2025, a coordinated wave of deepfake attacks shook Italy’s corporate elite, with fashion icon Giorgio Armani and several prominent business executives among those targeted.

Criminals posed as Italian defence minister Guido Crosetto and claimed they needed help to free journalists detained in the Middle East. At least one of the victims transferred €1 million to a Hong Kong-based account after they were told they’d be reimbursed by the Bank of Italy.

Crosetto later explained how the criminals used social engineering (through patriotism) to manipulate victims’ emotions and influence their actions: “In this case, they identified major Italian entrepreneurs, people who, at the request of a minister, would perhaps be ready to make a bank transfer because of their love of Italy.”

7. An unnamed UK energy firm

Back in 2019, a UK-based energy firm became one of the earliest known victims of deepfake-enabled financial fraud.

The company’s CEO received a phone call from someone perfectly mimicking the voice and accent of the German parent company’s chief executive. The caller stressed the need to immediately transfer around US$243,000 to a Hungarian supplier.

A second call was then made to request another transfer after it was claimed the first one had been reimbursed. This call, however, was met with suspicion. The company had not been reimbursed, and the second call had been made with an Austrian phone number.

In reality, the fraudsters had moved the money from the Hungarian account to one in Mexico (and then to accounts in other locations) to cover their tracks and evade detection.

What finance teams can learn from these incidents

AI-powered deepfake fraud is poised to reshape the landscape of financial crime. When scammers can use AI to deceive even the most astute professionals, it’s clear that yesterday’s protection measures won’t be enough in 2025 and beyond.

For CFOs and other finance leaders, the cost of inaction can be catastrophic. Now is the time to arm your finance team with advanced cybersecurity knowledge and comprehensive payment controls designed specifically to combat AI-driven threats.

Ready to fortify your defences and protect your company’s financial future? Access practical, actionable, up-to-date advice today by downloading Eftsure’s Cybersecurity Guide for CFOs 2025.

Don’t wait until it’s too late—make sure your team is prepared to identify, prevent, and counteract tomorrow’s sophisticated threats.

