
What are AI scams? An explainer for finance professionals

Shanna Hall
An illustration of a shadowy AI scammer

While artificial intelligence (AI) has hit mainstream consciousness over the past couple of years, it’s been around for a while. In fact, the field dates all the way back to the mid-1950s, when the term was first coined.

It was the launch of ChatGPT in November 2022, however, that really seemed to put the ‘intelligence’ into AI. Within two months, more than 100 million people across the globe were using the platform, developed by the research lab OpenAI – and we’ve since seen AI touch many aspects of our world.

In addition to the positives AI is delivering, from streamlining business processes to aiding disaster management and improving healthcare outcomes, it also brings significant downsides.

We’ve no doubt all seen examples of AI-generated images that, to the naked eye, can be almost impossible to tell apart from genuine photographs. This is just one example of how AI is distorting our reality.

The bad news is that AI has lowered the barriers to entry for novice cybercriminals and enhanced what experienced ones can do – which is why it’s so important that finance professionals are alert to the risks.

As a result, just like those AI-generated images, AI-powered scams are becoming ever harder to spot – and they’re on the rise.

While AI is being used to enhance, automate and scale up various existing threats – such as ransomware attacks – there are a number of other threats that are being enabled by AI. Here are seven AI-related cyber threats every finance professional needs to be aware of.

AI scam threat: Voice cloning

Often in business, key people have a media profile of some description. Whether they’re on TV regularly or have once appeared on a podcast, images of them and recordings of their voice are out there. Voice cloning is where cybercriminals take a voice sample from a legitimate source – a podcast, for example – and create an AI clone. That clone can then be used in a phone call asking for money to be transferred, under the pretext of an urgent payment to a supplier. In 2019, the CEO of a UK energy company was tricked into transferring US$243,000 after criminals used AI to clone the voice of his boss at the firm’s parent company.

AI scam threat: Deepfake videos

Similar to voice cloning, deepfake videos take legitimate existing footage and turn it into something else altogether. Celebrities across the globe have already fallen victim to this, with faked videos showing them endorsing scam investment opportunities. As well as causing financial loss, deepfakes can inflict serious reputational damage on a business or individual.

AI scam threat: Enhanced phishing and BECs

Many of us will have played around with ChatGPT and discovered its range of capabilities, especially for quickly finishing off tasks that used to be more manual. Generative AI tools make scammers more efficient, too – for example, by producing incredibly realistic emails that replicate tone of voice, and even common errors and language nuances.

Business email compromise (BEC), for instance, is a tried-and-tested cybercrime tactic that AI is now enhancing. Scam emails riddled with spelling and grammatical errors are easy to spot; AI-polished ones are much harder. And with AI replicating an individual’s tone and quirks, impersonating their email style is now far easier. To understand the impact AI has had on phishing, consider the reported 1,265% rise in malicious phishing emails since the fourth quarter of 2022.
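
One simple defensive check is to compare a sender’s domain against a list of known suppliers, since BEC emails often come from near-miss lookalike domains. Here’s a minimal Python sketch of that idea; the trusted domain list, example address and similarity threshold are all illustrative assumptions, not production values.

```python
# Flag sender domains that closely resemble, but don't match, a trusted
# domain - a common impersonation pattern in BEC attacks.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"example-supplier.com", "ourcompany.com"}  # hypothetical

def flag_sender(address: str, threshold: float = 0.85) -> bool:
    """Return True if the sender's domain is a near miss for a trusted one."""
    domain = address.rsplit("@", 1)[-1].lower()
    for trusted in TRUSTED_DOMAINS:
        similarity = SequenceMatcher(None, domain, trusted).ratio()
        if domain != trusted and similarity >= threshold:
            return True  # suspiciously close, but not an exact match
    return False

print(flag_sender("accounts@example-suppiler.com"))  # True: letters transposed
```

A check like this won’t catch every impersonation attempt, but it costs almost nothing to run against inbound payment-related mail.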

AI scam threat: Bulk invoice creation and swapping

AI tools that can produce fraudulent documents are also growing in use. For example, AI can scrutinise huge volumes of compromised email data, identify invoices awaiting payment and alter the payee details – meaning that, unless the company takes the time to verify payment details, those payments are made to the cybercriminals instead.
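
That verification step can be automated. Below is a minimal Python sketch of a pre-payment check against a vendor master file; the field names and sample records are illustrative assumptions, and a real implementation would pull from an ERP or banking system.

```python
# Block payment when an invoice's bank details don't match the details
# already on file for that vendor - the mismatch invoice-swapping relies on.

VENDOR_MASTER = {  # hypothetical known-good records
    "Acme Supplies Ltd": {"sort_code": "12-34-56", "account": "12345678"},
}

def verify_payee(invoice: dict) -> bool:
    """Return True only if the invoice's bank details match the record."""
    on_file = VENDOR_MASTER.get(invoice["vendor"])
    if on_file is None:
        return False  # unknown vendor: escalate for manual review
    return (invoice["sort_code"] == on_file["sort_code"]
            and invoice["account"] == on_file["account"])

tampered = {"vendor": "Acme Supplies Ltd",
            "sort_code": "12-34-56", "account": "87654321"}
print(verify_payee(tampered))  # False: the account number was altered
```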

AI scam threat: Analysing data sets, writing code and spotting opportunities

While ChatGPT is heavily moderated, similar large language models (LLMs) built specifically for criminal activity are available on the dark web, and these can analyse large datasets to quickly find vulnerabilities and high-value targets.

AI-powered programs can also run continuous vulnerability scanning, detect system weaknesses and develop adaptive malware – and cybercriminals are increasingly using AI to refine password cracking, too.
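
The arithmetic that makes passwords crackable is straightforward: a password’s strength in bits is its length multiplied by log2 of the alphabet size, and every extra bit doubles the search space. This minimal sketch, assuming an illustrative guess rate of ten billion attempts per second, shows why length matters so much:

```python
# Rough time-to-exhaust estimates for brute-forcing passwords.
# The guess rate is an assumed figure, not a measured one.
import math

GUESSES_PER_SECOND = 1e10  # assumed attacker throughput

def entropy_bits(length: int, alphabet_size: int) -> float:
    """Bits of entropy for a random password of the given shape."""
    return length * math.log2(alphabet_size)

for length, alphabet, label in [(8, 26, "8 lowercase letters"),
                                (12, 94, "12 mixed printable characters")]:
    bits = entropy_bits(length, alphabet)
    days = (2 ** bits / GUESSES_PER_SECOND) / 86400
    print(f"{label}: {bits:.0f} bits, ~{days:.1e} days to exhaust")
```

At that assumed rate, an eight-letter lowercase password falls in seconds, while a 12-character mixed password holds out for an astronomically long time.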

In another scenario, an email system is hacked and an LLM is asked to read through every conversation and advise on the best way to scam the organisation.

AI scam threat: Intellectual property theft

A business’s IP is integral to its operations – and AI algorithms can analyse huge volumes of stolen data to pick out valuable trade secrets or sensitive information, compromising the integrity of the business.

AI scam threat: ‘New cybercrime strategies, please!’

While legitimate users might ask ChatGPT for big birthday party ideas, help refining a new marketing strategy or sales email templates, cybercriminals are using AI to come up with new ideas for cybercrimes. With the unprecedented amount of information we’re feeding into AI, its understanding of business vulnerabilities and digital code is growing – and it can surface ‘opportunities’ that may not have been thought of before.

Being aware of AI scams

The threats we’ve talked about are merely the beginning of AI-enabled cybercrime. However, it’s not all doom and gloom. Cybersecurity companies are deploying AI at an equal pace, with malicious activity detection, malware detection, threat management and security analytics all being enhanced by AI.
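
To give a flavour of that defensive side, here’s a minimal Python sketch of AI-assisted security analytics: an unsupervised model that learns a baseline of normal activity and flags deviations. The features, data and scikit-learn model choice are illustrative assumptions; real deployments use far richer telemetry.

```python
# Learn what "normal" login activity looks like, then flag outliers.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Toy feature rows: [login hour, MB transferred] for routine office use
normal_activity = np.column_stack([rng.normal(11, 2, 500),
                                   rng.normal(20, 5, 500)])
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_activity)

# A 3 a.m. login moving 900 MB is far outside the learned baseline
suspicious = np.array([[3.0, 900.0]])
print(model.predict(suspicious))  # [-1] means flagged as an anomaly
```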

Ultimately, however, it’s important for any business to ensure its teams are up to date and aware of the threats that exist – and, where possible, implement systems and processes that help minimise the risk.

Cybersecurity Guide for CFOs 2024
Learn to protect your finance team from AI threats
Download our Cybersecurity Guide for CFOs for a closer look at AI risks and practical ways to defend against scammers.
