
Only 1 in 10 finance leaders is ‘very confident’ they could stop an AI-powered cyberattack

Catherine Chipeta
3 Min

Artificial intelligence is accelerating the pace and sophistication of cybercrime — and finance teams are increasingly in the crosshairs. That was a key theme explored in Eftsure’s recent Cybercrime 2.0 webinar, hosted in collaboration with Strategic Treasurer. Live polling during the session revealed a concerning insight: just 13% of finance leaders said they’re very confident their organization could detect and stop an AI-powered cyberattack targeting treasury operations. For accounts payable (AP), that number dropped to just 9%.

As generative AI enables cybercriminals to scale phishing, impersonation, and payment manipulation attacks, this confidence gap signals an urgent need for more proactive defenses.

The rising threat: AI supercharges fraud tactics

When asked which AI-driven threat concerned them most, 30% of respondents selected deepfake payment fraud. These synthetic audio and video impersonations can convincingly mimic senior executives and authorize fake transactions in real time.

Other top concerns included:

  • AI-powered phishing and business email compromise (BEC) (18%)
  • Real-time manipulation of treasury platforms (16%)
  • Credential cracking and system hijacking (16%)

As discussed in the webinar, tools enabling these attacks are increasingly accessible on the dark web, making them cheaper, faster, and more scalable than ever before. It’s no longer a matter of if an organization will be targeted — but whether it’s ready when it happens.

Confidence isn’t enough, and most leaders know it

While 13% of treasury leaders and 9% of AP leaders reported being “very confident” in their existing controls, the majority were only “somewhat confident” — 55% for treasury and 46% for AP. Nearly 1 in 5 finance leaders said they were not confident, and others were unsure altogether.

This uncertainty often stems from limited testing, lack of visibility, or unclear lines of responsibility between finance and IT. As threats evolve, so too must the controls and collaboration frameworks in place to combat them.

What’s being done, and what’s still missing

On a positive note, many organizations are taking meaningful steps to address these risks. Nearly half of respondents said they have updated fraud detection tools and delivered staff training on AI-enabled threats.

However, deeper, treasury-specific preparedness remains limited:

  • Only 20% have conducted a treasury-specific cyber risk assessment
  • Just 19% have simulated AI-based attack scenarios
  • Only 23% have implemented a cross-functional response plan

These figures highlight a readiness gap — one that attackers are poised to exploit unless closed through more comprehensive planning and investment.

From awareness to action: three steps to improve cyber resilience

To close the confidence gap, finance and treasury leaders should:

  1. Stress-test existing controls using realistic, AI-specific attack simulations
  2. Establish coordinated response plans that involve finance, IT, legal, and executive leadership
  3. Continuously review and adapt fraud detection protocols to align with current attack patterns


Final thoughts

The nature of financial fraud is changing — fast. As explored in the Cybercrime 2.0 webinar, AI is not only reshaping how attacks are carried out, but also how frequently and convincingly they occur. For finance teams, adapting to this new reality is no longer optional.

If you missed the live session, you can watch it on demand below to explore key insights and case studies shared during the discussion.

Looking to take the next step?
Download the Cybersecurity Guide for CFOs 2025 for practical strategies to strengthen your payment defense posture.

Think your controls could catch an AI-powered cyberattack?
Book a demo to see how Eftsure helps finance teams detect and prevent emerging threats before payments go out the door.

