What are AI scams? An explainer for finance professionals

Shanna Hall
5 min read

While artificial intelligence (AI) has hit mainstream consciousness over the past couple of years, it’s been around for a while. In fact, as a concept, it first came to people’s attention back in the 1950s.

It was the launch of ChatGPT in November 2022, however, that really seemed to put the ‘intelligence’ into AI. Within two months of launch, the platform, developed by the research lab OpenAI, had reached an estimated 100 million users – and we’ve since seen AI touch many aspects of our world.

In addition to the positives AI is delivering, from streamlining business processes to aiding disaster management and improving healthcare outcomes, it also brings significant negatives. One of those downsides is the rapidly lowering barrier to entry for novice cybercriminals; another is the increased efficiency AI is providing to even the most seasoned fraudsters – which is why it’s so important that finance professionals are alert to the risks.

While AI is being used to enhance, automate and scale up existing threats – such as ransomware attacks – it is also enabling a number of new ones. Here are seven AI-related cyber threats every finance professional needs to be aware of.

AI scam threat: Voice cloning

Voice cloning is where cybercriminals take a voice sample from a legitimate source – such as a recorded interview or a social media video – and create an AI clone. Thanks to current AI capabilities, all that’s needed to mimic someone’s voice is a few seconds of audio.

This clone can then be used to sidestep financial security controls. For example, a scammer can call asking for money to be transferred under the pretence of an urgent payment that needs to be made to a supplier. That’s exactly what happened when an energy company lost US$243,000 after the CEO’s voice was cloned – but that was several years ago, when access to generative AI was far less widespread. Today, there are reports of far more sophisticated voice cloning scams in which fraudsters use the voices of their targets’ loved ones to trick them out of money.

AI scam threat: Deepfake videos

Similar to voice cloning, deepfake videos take legitimate existing footage and turn it into something else altogether. Celebrities across the globe have already fallen victim to this, with faked videos showing them endorsing scam investment opportunities. As well as causing financial loss, a deepfake can do serious reputational damage to a business or individual.

AI scam threat: Enhanced phishing and BECs

Many of us will have played around with ChatGPT and discovered its range of capabilities, especially those that help us quickly finish tasks that used to be more manual. Generative AI tools offer scammers the same efficiency – for example, by producing incredibly realistic emails that replicate tone of voice, and even common errors or language nuances.

Business email compromise (BEC), for instance, is a tried-and-tested cybercrime tactic that is being enhanced by AI. Scam emails riddled with spelling and grammatical errors are easy to spot; well-written ones are much harder. Add AI’s ability to replicate tone of voice and language nuances, and impersonating an individual’s email style becomes far easier. Plus, AI can automate testing to determine the best send times or optimal targets.

To understand the impact AI has had on phishing, consider the reported 1,265% rise in phishing emails since the fourth quarter of 2022 – the quarter in which ChatGPT launched.

AI scam threat: Bulk invoice creation and swapping

Fraudsters are also increasingly using AI tools to produce fraudulent documents. For example, AI tools can scrutinise huge volumes of compromised email data, find invoices awaiting payment, and alter the payee details – meaning that, unless the company verifies payment details every single time, those payments go to the cybercriminals instead.

AI scam threat: Analysing data sets, writing code and spotting opportunities

While ChatGPT is heavily moderated, similar large language models (LLMs) built specifically for criminal activity – WormGPT and FraudGPT are just two examples – are available on the dark web, and can analyse large datasets to quickly find vulnerabilities and high-value targets. AI-powered programs can also run continuous vulnerability scanning, detect system weaknesses and develop adaptive malware, while cybercriminals are increasingly using AI to refine password cracking.

In another scenario, an email system is hacked and an LLM is asked to read all of the conversations and advise on the best way to scam the organisation.

AI scam threat: Intellectual property theft

A business’s IP is integral to its operations. However, AI algorithms can sift through huge volumes of data to identify valuable trade secrets or sensitive information, compromising the integrity of the business.

AI scam threat: ‘New cybercrime strategies, please!’

While legitimate users might ask ChatGPT for big birthday party ideas, help refining a new marketing strategy or sales email templates, cybercriminals are using AI to come up with new ideas for cybercrime. With the unprecedented amount of information we’re feeding into AI, its understanding of business vulnerabilities and digital code is growing – and it can surface new ‘opportunities’ that may not have been thought of before.

Being aware of AI scams

The threats we’ve talked about are merely the beginning of AI-enabled cybercrime. However, it’s not all doom and gloom. Cybersecurity companies are deploying AI at an equal pace, with malicious activity detection, malware detection, threat management and security analytics all being enhanced by AI.

Ultimately, however, it’s important for any business to ensure its teams are up to date and aware of the threats that exist – and, where possible, to implement systems and processes that help minimise the risk. This is especially true for finance teams, since many cybersecurity measures won’t be effective if accounts payable (AP) or finance employees are successfully tricked into facilitating fraud.

Cybersecurity Guide for CFOs
Learn to protect your finance team from AI threats
Download our Cybersecurity Guide for CFOs for a closer look at AI risks and practical ways to defend against scammers.
