Cyber crime

CFOs beware: new AI is helping hackers scam you

Niek Dekker
6 Min

Late last year, powerful new artificial intelligence (AI) capabilities became accessible to anyone with an internet connection. One of the most popular tools, ChatGPT, can generate working code in seconds, write wedding vows in Snoop Dogg’s voice, and produce detailed essays on niche topics.

We aren’t the first to point out that this could have major cybersecurity implications. Even though AI creators are trying to make sure the tech is used for good instead of evil, scammers are notoriously adept at finding loopholes – it’s kind of their whole thing.

Cyber-crime rates were already exploding in Australia. So what does the heightened risk of AI-powered cyber attacks mean for AP and finance teams?

What sort of AI tools are available and why are they different?

AI and machine learning technologies have been around for a while, with plenty of legitimate use cases for businesses. From customer service chatbots to supporting sales reps with next best actions, there’s a good chance your own company is using AI-enabled solutions right now.

So what is new? Most of the chatter is around OpenAI’s ChatGPT, the conversational interface for its large language model (LLM). Launched in November 2022, the chatbot stands out for its detailed, human-like responses and its ability to generate moderately complex code.

While ChatGPT is probably the more versatile tool, AI-generated images deserve a mention, too. Tools like Midjourney are producing images impressive enough to have sparked debates around ethics and art.

But if you think this has nothing to do with finance, think again – along with increasingly accessible deepfake technology, AI-generated images mean that fraudsters no longer need sophisticated technical skills to boost their scams.

Let’s look at why.

1. With AI’s help, writing scam emails just got easier, faster and typo-free

One of the most common ways for cyber-criminals to target finance and AP professionals is through phishing, a social engineering tactic that tries to lure users into giving hackers valuable information like passwords or payment details. Previously, phishing emails often tipped their hand through atrocious grammar. Even when their spelling and grammar were passable, fraudsters might (hilariously) fail to imitate the tone of executives, employees or suppliers during spear-phishing attempts and business email compromise (BEC) attacks.

But ChatGPT has already produced standup routines, poetry and movie scripts in the style of famous writers and creators. Do we really think one little ol’ phishing message would be too challenging? 

Whatever scammers lack in communication skills, AI is more than capable of picking up the slack. Anyone can use it to produce unique, human-sounding messages quickly and at scale. Expect the volume of phishing attempts to ramp up as the cost of generating scam emails goes down.
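To make the “at scale” point concrete, here’s a minimal sketch of how little code mass generation takes. It assumes the pre-1.0 openai Python library and a valid API key; the recipients and the prompt are invented for illustration, and the same loop could just as easily be pointed at a malicious template.

```python
# A minimal sketch (assuming the pre-1.0 `openai` Python library and an API
# key) of batch-generating polished, personalised emails. Everything here --
# recipients, prompt, key -- is a hypothetical placeholder.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

recipients = ["Alex in AP", "Sam in payroll", "Jordan in procurement"]

for person in recipients:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": f"Write a short, friendly email to {person} "
                       "following up on an unpaid supplier invoice.",
        }],
    )
    # Each message comes back unique, fluent and typo-free -- no copywriting
    # skill required, and the marginal cost per email is near zero.
    print(response["choices"][0]["message"]["content"])
```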

[Image: Example of a potential phishing message generated by AI]

2. AI makes it easier to build websites – including spoofed websites

It’s not just emails that have gotten easier for scammers to create. 

Threat actors have long used spoofed websites – that is, phishing websites built for the sole purpose of getting users to surrender information. They might design these sites to look like those of trusted entities, such as the ATO or a bank. 

Of course, it takes time and effort to set up a domain and build a website convincing enough to win a target’s trust – but ChatGPT removes a lot of the heavy lifting.

With tonnes of tutorials already online, ChatGPT can help anyone build a website without any coding or web development skills. Again, it’s easy to see the legitimate business use cases, but there’s no reason to think malicious actors won’t be using this capability for less legitimate purposes.
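The flipside is that a few lines of code can also help defenders. Here’s a minimal sketch of checking that a link actually points at a domain you expect – the trusted domains in the allowlist are hypothetical placeholders:

```python
# A minimal defensive sketch: extract the hostname from a link before
# trusting it. The allowlisted domains below are hypothetical examples.
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"ato.gov.au", "examplebank.com.au"}  # hypothetical allowlist

def looks_trustworthy(url: str) -> bool:
    host = urlparse(url).hostname or ""
    # Accept an exact match or a genuine subdomain (e.g. www.ato.gov.au).
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

# A look-alike domain fails the check even though it "looks official":
print(looks_trustworthy("https://ato.gov.au.refunds-portal.com/login"))  # False
print(looks_trustworthy("https://www.ato.gov.au/payments"))              # True
```

It’s crude – a real control would also check certificates and domain age – but it shows that the registered domain, not the page’s appearance, is what deserves your trust.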

3. Scammers can create malicious code, without coding skills

There are already documented cases of threat actors claiming to use ChatGPT for recreating malware strains and building hacking tools. Security company Check Point has documented some of these instances and assessed that, while the AI-generated code can be used for benign purposes, cyber-criminals could easily modify the script to create ransomware and other malicious software. 

It’s worth noting that OpenAI has implemented controls to thwart the more obvious requests to create spyware. And Check Point has warned that it’s too early to know whether ChatGPT will become the tool of choice for threat actors. However, the firm also noted that underground communities are already showing “significant interest” and jumping on the trend to create malicious code.

4. AI can help hackers identify system vulnerabilities

When we asked ChatGPT how it can assist security teams with penetration testing (“pentesting” is the process of using simulated attacks to check for exploitable vulnerabilities), it told us it could:

  1. Identify potential exploits based on known vulnerabilities and a system’s current configuration
  2. Generate payloads and commands that could trigger an exploit

And that response appears to be at least partially correct, based on what’s happened when users have provided an actual code snippet and asked the chatbot how to exploit vulnerabilities in it. This can help security teams find exploits before malicious actors do, but, well, those bad actors still have access to the same technology. 
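To make that concrete, here’s a hypothetical example of the kind of snippet a user might paste in – the table name and both functions are invented for illustration, and the flaw (and fix) is exactly the sort of thing the chatbot can explain to defenders and attackers alike:

```python
# A hypothetical snippet of the sort users have asked the chatbot to analyse.
# The `suppliers` table and both functions are invented for illustration.
import sqlite3

def find_supplier_unsafe(conn: sqlite3.Connection, name: str):
    # Vulnerable: user input is spliced straight into the SQL string, so
    # input like  x' OR '1'='1  returns every row in the table.
    query = f"SELECT * FROM suppliers WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_supplier_safe(conn: sqlite3.Connection, name: str):
    # Fixed: a parameterised query keeps the input as data, never as SQL.
    return conn.execute(
        "SELECT * FROM suppliers WHERE name = ?", (name,)
    ).fetchall()
```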

Of course, it’s not a CFO’s job to find or address system vulnerabilities, but it could mean that cyber-criminals have a big assist in infiltrating systems that do impact a CFO’s responsibilities. For instance, it might become easier to impersonate trusted contacts, tamper with payment data or redirect payments to accounts controlled by fraudsters. 

5. There are more ways than ever to pretend to be someone else

[Image: Example of an AI-generated headshot of a person who does not exist]

Profile photos on email, chat and social media lend a bit of credibility – why would someone attach their face to a fraud attempt? Of course, cyber-criminals have been using other people’s faces to run social engineering attacks for a while. But there are ways to verify a photo, including reverse-searching images through Google. 
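As a rough illustration, a manual reverse search can be as simple as building a lookup URL. The endpoint below is Google’s long-standing “search by image” address (its behaviour has shifted over the years, so treat it as illustrative), and the photo URL is a made-up placeholder:

```python
# A small sketch for manually reverse-searching a suspicious profile photo.
# Both URLs below are illustrative placeholders.
from urllib.parse import quote

suspicious_photo = "https://example.com/profiles/jane-doe.jpg"  # hypothetical
lookup = ("https://www.google.com/searchbyimage?image_url="
          + quote(suspicious_photo, safe=""))
print(lookup)  # open in a browser to see where else the photo appears
```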

Now, AI tools can generate unique, lifelike images in a matter of seconds. They can even create a fictional person out of nothing, tailoring their appearance to a written description. This was already a problem for dating apps, but today cyber-criminals can easily generate a unique “face” for social engineering attempts against your business – one that no reverse image search will turn up, because the photo has never existed anywhere else.

Even before AI capabilities ramped up, these sorts of scams could be sophisticated and hard to detect. Will your AP officers be able to spot them every time, especially now that there are so many more ways to fake an identity? 

What can CFOs do about AI-related cyber risks?

We aren’t fortune tellers here at Eftsure. It’s unclear exactly how AI creators might decide to monetise tools like ChatGPT, or what the extent of cybersecurity impacts could be. 

Instead, it feels safer to posit that this is a starting point – ChatGPT and similar AI will become increasingly sophisticated, likely learning at faster rates than ever before and becoming more widely used across the general population. 

So what can CFOs do to protect their organisations’ finances? And are the usual ways of doing business – including manual controls – really enough protection in such rapidly evolving threat landscapes? As AI and other emerging technologies continue to advance, the upper hand will go to the parties who find ways to integrate the right tech into their day-to-day work. 

Will you cede that upper hand to hackers and scammers? Or will you reevaluate processes, people and technology to ensure your organisation has the best defences possible? 

Find out how to protect your organisation with the 2023 Cybersecurity Guide for CFOs
A dedicated cyber-crime strategy unites your organisation's security practices and financial controls, helping you defend against new cyber risks and scams. Our 2023 Cybersecurity Guide for CFOs can help you develop and implement this strategy.

