What Is an AI Voice Scam?
AI voice scams are targeting finance teams, using deepfake technology to mimic executives and authorize payments. Learn how they work and how to stop them.
Artificial intelligence is accelerating the pace and sophistication of cybercrime — and finance teams are increasingly in the crosshairs. That was a key theme explored in Eftsure’s recent Cybercrime 2.0 webinar, hosted in collaboration with Strategic Treasurer. Live polling during the session revealed a concerning insight: just 13% of finance leaders said they’re very confident their organization could detect and stop an AI-powered cyberattack targeting treasury operations. For accounts payable (AP), that number dropped to just 9%.
As generative AI enables cybercriminals to scale phishing, impersonation, and payment manipulation attacks, this confidence gap signals an urgent need for more proactive defenses.
When asked which AI-driven threat concerned them most, 30% of respondents selected deepfake payment fraud. These synthetic audio and video impersonations can convincingly mimic senior executives and authorize fake transactions in real time.
Other top concerns included:
As discussed in the webinar, the tools enabling these attacks are increasingly accessible on the dark web, making them cheaper, faster, and more scalable than ever before. It's no longer a matter of if an organization will be targeted, but whether it's ready when it happens.
While 13% of treasury leaders and 9% of AP leaders reported being “very confident” in their existing controls, the majority were only “somewhat confident” — 55% for treasury and 46% for AP. Nearly 1 in 5 finance leaders said they were not confident, and others were unsure altogether.
This uncertainty often stems from limited testing, lack of visibility, or unclear lines of responsibility between finance and IT. As threats evolve, so too must the controls and collaboration frameworks in place to combat them.
On a positive note, many organizations are taking meaningful steps to address these risks. Nearly half of respondents said they have updated fraud detection tools and delivered staff training on AI-enabled threats.
However, deeper, treasury-specific preparedness remains limited:
These figures highlight a readiness gap — one that attackers are poised to exploit unless closed through more comprehensive planning and investment.
To close the confidence gap, finance and treasury leaders should:
The nature of financial fraud is changing — fast. As explored in the Cybercrime 2.0 webinar, AI is not only reshaping how attacks are carried out, but also how frequently and convincingly they occur. For finance teams, adapting to this new reality is no longer optional.
If you missed the live session, you can watch it on demand below to explore key insights and case studies shared during the discussion.
Looking to take the next step?
Download the Cybersecurity Guide for CFOs 2025 for practical strategies to strengthen your payment defense posture.