Deepfake statistics (2024): 25 new facts for CFOs

Deepfake scams are the latest serious issue facing finance. They're no longer theoretical: they're happening now, and deepfake statistics show they're hitting finance teams hard.

Take the recent case in Hong Kong, where a finance worker transferred $39 million, thinking they were on a video call with their CFO and colleagues. Turns out, they were talking to deepfake impostors. This incident shows how deepfake tech is becoming a real threat in financial fraud.

But it’s not just about money. Deepfakes are also used to spread lies and create fake content that can harm reputations. With AI getting smarter, scammers can now mimic voices and trick people with convincing emails.

As finance leaders, it’s crucial to stay ahead. Regularly review security measures, train your teams to spot suspicious requests, and invest in tech that can catch deepfakes before they cause damage.

Join us as we dig into the latest deepfake statistics and ways to protect your company from these evolving cyber risks.

Author’s Top Picks

  • 1 in 4 leaders unfamiliar with deepfakes
  • 31% underestimate deepfake fraud risk
  • 32% doubt employee ability to detect deepfakes
  • 1 in 10 executives have already faced deepfake threats
  • 10x increase in deepfakes detected globally across all industries in 2023

AI deepfake statistics

1. 60% of consumers have encountered a deepfake video within the last year.

Only 15% state that they have never encountered a deepfake video.

2. Human detection of deepfake images averages 62% accuracy.

For video, performance is even worse: human subjects identified high-quality deepfake videos only 24.5% of the time.

3. Only 38% of students have received guidance from their schools on how to identify AI-generated images, texts, or videos, despite a desire for such training expressed by 71% of students.

The gap between the guidance schools provide and the training students say they want points to a clear shortfall in AI media-literacy education.

4. Only 22% of students feel very confident in their ability to detect whether an image they are viewing was generated with AI versus produced by a human.

Many students lack confidence in discerning AI-generated images, highlighting a critical need for improved media literacy education.

5. DeepFaceLab is used for over 95% of all deepfake videos.

Available as open source code on GitHub, DeepFaceLab utilizes artificial neural networks to replicate visual and auditory features from an original video onto a target video.

Deepfake fraud statistics

6. 1 in 4 leaders unfamiliar with deepfakes

Despite AI’s growing prominence, about one in four company leaders had little to no familiarity with deepfake technology.

7. 31% underestimate deepfake fraud risk

31 percent of business leaders believe deepfakes have not increased their fraud risk.

8. 32% doubt employee ability to detect deepfakes

32 percent of leaders had no confidence their employees would be able to recognize deepfake fraud attempts on their businesses.

9. Over 50% lack deepfake training

More than half of leaders say their employees haven’t had any training on identifying or addressing deepfake attacks.

10. 1 in 10 executives have already faced deepfake threats

One in ten executives reported that their organisations had already been targeted by deepfake-based cyberattacks, and a further 10% were unsure whether they had fallen victim.

11. 10x increase in deepfakes detected globally across all industries in 2023.

Last year saw a 10x increase in the number of deepfakes detected globally across all industries. This dramatic rise in deepfake detection underscores the rapid advancement of AI-powered fraud techniques and the urgent need for more sophisticated detection methods.

12. 88% of all deepfake cases detected in 2023 were in the crypto sector.

Crypto emerged as the main target sector for deepfake fraud, accounting for 88% of all deepfake cases detected in 2023. This statistic highlights the particular vulnerability of the cryptocurrency industry to advanced fraud techniques, likely due to its digital nature and potentially high financial stakes.

13. 1740% increase in deepfake fraud in North America in 2023.

North America experienced a staggering 1740% increase in deepfake fraud. This enormous regional increase suggests that North America may be a primary target for deepfake fraudsters, possibly due to its large digital economy and high adoption of online services.

14. Generative AI fraud losses could reach US$40 billion by 2027

Fraud losses facilitated by generative AI technologies are predicted to escalate to US$40 billion in the United States by 2027. This projection reflects a compound annual growth rate of 32% from US$12.3 billion in 2023.
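The arithmetic behind that projection is easy to check: compounding the 2023 baseline at the stated growth rate for four years lands close to the headline figure. A quick sketch using the numbers from the statistic above:

```python
# Compound annual growth: value_n = value_0 * (1 + rate) ** n
base_2023 = 12.3        # US$ billions of generative-AI fraud losses in 2023
cagr = 0.32             # stated 32% compound annual growth rate
years = 2027 - 2023     # four compounding periods

projected_2027 = base_2023 * (1 + cagr) ** years
print(f"Projected 2027 losses: ${projected_2027:.1f}B")  # ~$37.3B, in line with the ~US$40B estimate
```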

15. 700% increase in deepfake incidents in fintech in 2023

Incidents involving deepfakes in fintech surged by 700% in 2023, highlighting the rapid adoption of generative AI in perpetrating fraudulent activities.

Deepfake scams statistics

16. Scams, phishing, and malvertising accounted for 90% of threats on mobile devices in Q1 2024.

Avast’s Q1 2024 Threat Report highlights a significant rise in cyberthreats exploiting human manipulation tactics like deepfakes and AI-manipulated audio synchronization. These threats are particularly rampant on platforms such as YouTube, where cybercriminals leverage deepfake videos and hijacked channels to disseminate fraudulent content.

17. Synthetic content technologies like deepfakes pose dual risks and benefits, complicating societal trust in media authenticity.

While malicious uses, such as creating misleading audiovisuals, threaten public trust, the same techniques also offer innovative applications in fields like filmmaking and language translation.

18. Deepfake detection technology lags, with a 65% detection rate against advanced tools like DeepFaceLab and Avatarify.

Analysts have observed an increase in dark web conversations about using deepfake tools like DeepFaceLab and Avatarify to manipulate selfies or videos for identity verification bypass.

19. 65% of Americans worry about privacy violations due to AI

According to a 2024 survey, a significant 65% of Americans express concerns about potential privacy violations stemming from AI technologies. This apprehension reflects growing unease over the capability of AI to exploit personal data and invade privacy rights.

Deepfake crime statistics

20. Deepfake face swap attacks on ID verification systems up 704% in 2023.

Threat actors increasingly use “face swap” deepfakes and virtual cameras to evade remote identity verification, reflecting a significant surge in attacks.

21. 30% of enterprises will no longer consider identity verification and authentication solutions reliable in isolation by 2026 due to AI-generated deepfakes.

Gartner’s analysis indicates AI-generated deepfakes in face biometrics are prompting enterprises to adopt more robust approaches.

22. 66% of cybersecurity and incident response professionals experienced a security incident involving deepfake use in 2022, marking a 13% increase from the previous year.

This finding underscores the growing prevalence of deepfake-related incidents in cybersecurity operations.

23. 92% of executives surveyed expressed significant concerns about the risks associated with implementing generative AI.

Conducted in 2023 across various industries, this survey highlights the widespread apprehension among business leaders regarding the risks posed by generative AI technologies.

24. In 2021, more than 80% of professionals across various industries perceived deepfakes as a potential risk to their business.

Despite this awareness, only 29% of firms have taken steps to protect themselves against deepfake threats, with 46% lacking any mitigation plan.

25. Over 80% of respondents in the insurance sector were concerned about manipulated digital media, yet only 20% had taken action against deepfake threats.

This finding from a 2022 study indicates a significant gap between concern and proactive measures within the insurance industry.


FAQs

What is a deepfake?

A deepfake is a type of synthetic media created using artificial intelligence, typically altering videos or images to depict someone saying or doing something that never occurred.

How do deepfakes work?

Deepfakes use deep learning algorithms to analyze and manipulate existing images and videos. They employ techniques like generative adversarial networks (GANs) to generate highly realistic but fake media.
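To make the GAN idea concrete, here is a toy adversarial training loop in plain NumPy: a two-parameter generator learns to mimic samples from a one-dimensional Gaussian while a logistic-regression discriminator tries to tell real samples from fakes. This is a teaching sketch only; real deepfake generators are vastly larger neural networks, but the adversarial structure is the same.

```python
import numpy as np

# Toy GAN: generator g(z) = a*z + m learns to mimic samples from N(4, 1),
# while a discriminator D(x) = sigmoid(w*x + b) tries to tell real samples
# from generated ones. Gradients are derived by hand for this 1D case.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, m = 1.0, 0.0   # generator parameters (scale, mean)
w, b = 0.1, 0.0   # discriminator parameters
lr, batch = 0.01, 64

for step in range(3000):
    # Discriminator step: ascend log D(real) + log(1 - D(fake))
    xr = rng.normal(4.0, 1.0, batch)          # real samples
    xf = a * rng.normal(0.0, 1.0, batch) + m  # generated ("fake") samples
    dr, df = sigmoid(w * xr + b), sigmoid(w * xf + b)
    w += lr * (np.mean((1 - dr) * xr) - np.mean(df * xf))
    b += lr * (np.mean(1 - dr) - np.mean(df))

    # Generator step: ascend log D(fake) (non-saturating generator loss)
    z = rng.normal(0.0, 1.0, batch)
    df = sigmoid(w * (a * z + m) + b)
    # d/dx log D(x) = (1 - D(x)) * w, then chain rule through g(z) = a*z + m
    a += lr * np.mean((1 - df) * w * z)
    m += lr * np.mean((1 - df) * w)

fake = a * rng.normal(0.0, 1.0, 10_000) + m
print(f"generated mean: {fake.mean():.2f} (real mean: 4.00)")
```

After training, the generated samples cluster near the real data's mean: the generator has learned to produce output the discriminator can no longer reliably reject, which is exactly the dynamic that makes deepfake media convincing.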

How are deepfakes used in cybercrime?

Deepfakes are used in cybercrime for various malicious purposes such as fraud, political manipulation, and blackmail. They can impersonate individuals to gain access to sensitive information or manipulate public opinion.

How can you spot a deepfake?

Look for signs such as unusual facial expressions, unnatural movements, low video quality, inconsistent audio, and asymmetries in the image or video. Asking specific questions or verifying with additional information can also help confirm authenticity.

How can you protect against deepfake scams?

To protect against deepfake scams, individuals and businesses should implement multi-factor authentication, regularly update passwords, keep devices updated, implement corporate controls like call-back procedures, and maintain a healthy skepticism by questioning and fact-checking all interactions.
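The call-back control mentioned above can be encoded as a simple screening rule. The sketch below is illustrative only: the thresholds, field names, and risk rules are assumptions for demonstration, not a standard, and should be adapted to your own payment-approval policy.

```python
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    amount: float
    payee_is_new: bool          # first payment to this account?
    bank_details_changed: bool  # payee supplied new bank details?
    channel: str                # "email", "video_call", "phone", "in_person"
    marked_urgent: bool

CALLBACK_THRESHOLD = 10_000  # illustrative: amounts above this always need a call-back

def requires_callback(req: PaymentRequest) -> bool:
    """Return True if the request must be verified out-of-band,
    i.e. by phoning the payee on an independently sourced number."""
    if req.amount >= CALLBACK_THRESHOLD:
        return True
    if req.payee_is_new or req.bank_details_changed:
        return True
    # Urgent requests over spoofable channels match the social-engineering
    # pattern used in deepfake scams like the Hong Kong video-call fraud.
    if req.channel in {"email", "video_call"} and req.marked_urgent:
        return True
    return False

# Example: an urgent video-call request with changed bank details
urgent = PaymentRequest(amount=5_000, payee_is_new=False,
                        bank_details_changed=True,
                        channel="video_call", marked_urgent=True)
print(requires_callback(urgent))  # True: changed bank details force a call-back
```

The point of the design is that the verification happens over a channel the requester does not control: even a perfect deepfake on the original call cannot answer a phone number sourced independently from your vendor master file.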

