
Maintaining financial control in an AI-powered future

Shanna Hall

AI’s role in scams is a double-edged sword: while AI can provide new tools for countering fraud attempts, it’s also making those attempts more sophisticated and harder to detect. And, while banks and the public sector are making moves to protect organisations from scammers, the speed of AI evolution means businesses can’t afford to wait passively for a solution.

That’s the crux of the problem we explored in our webinar, Financial control in an AI-powered world, led by Eftsure’s Chief Growth Officer Gavin Levinsohn. Don’t have time to watch the full replay of that discussion? In this blog, we’ll explore the biggest takeaways, including how you can protect your organisation through awareness and collaboration.

AI has already changed the threat landscape – and it’s not done yet

Noting that Eftsure specialises in anti-fraud controls rather than machine learning, Gavin explained that the discussion drew on insights and explanations from AI specialists like Jason Ross, co-founder of Time Under Tension, a generative AI consultancy.

And specialists are warning about the dual nature of generative AI tools. Businesses can harness AI for growth in areas like sales, marketing and customer experience. But the tech can just as easily be used against your business, often in the form of unnervingly cunning scams.

For starters, AI is already making scams harder to spot. Think machines trained on data, learning to dupe us in ways we’ve never imagined. In fact, there are already malicious AI tools like WormGPT or FraudGPT available on the dark web and specifically designed to help users carry out illicit activity. Unlike their benevolent counterparts, these tools are trained on data that likely includes malware and phishing information.

But, first, let’s do a quick primer on what we’re talking about when we mention AI.

AI and its components

As Gavin explained, AI is more than just machine learning. It’s a layer cake of specialist areas aimed at mimicking human-like intelligence – decision-making, thought generation, the works.

Think of artificial intelligence as the broadest term – it’s the umbrella under which various sub-fields reside. Its goal is to create machines that can perform tasks that, if a human did them, would require intelligence. This spans everything from basic rule-based systems to complex problem-solving. Machine learning is a subset of AI that focuses on training models on data. These models then make predictions or decisions without being explicitly programmed for the task.
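To make that distinction concrete, here’s a minimal sketch of the machine-learning idea in Python, using scikit-learn. The payment data and features are invented purely for illustration – the point is that the model infers a decision rule from labelled examples instead of being given one:

```python
# A toy illustration of machine learning: the model learns a decision
# rule from labelled examples instead of being explicitly programmed.
# The features, amounts and labels are invented for illustration only.
from sklearn.tree import DecisionTreeClassifier

# Each row: [payment amount, paid to a newly changed bank account? (1/0)]
payments = [
    [120, 0], [950, 0], [80, 0],       # routine payments
    [9900, 1], [15000, 1], [8700, 1],  # payments later found fraudulent
]
labels = [0, 0, 0, 1, 1, 1]  # 0 = legitimate, 1 = fraudulent

model = DecisionTreeClassifier().fit(payments, labels)

# The model now makes predictions for payments it has never seen.
print(model.predict([[14000, 1], [60, 0]]))  # -> [1 0]
```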

As for generative AI, that’s another step deeper. It’s a form of machine learning that not only learns from data but also creates new data. This new data could be anything from an image that hasn’t existed before, to a paragraph of text that mimics a specific writing style. While machine learning can predict what comes next, generative AI can create what comes next.

Drilling down further, large language models (LLMs) like GPT-3 or GPT-4 are a specific type of generative AI focused on natural language. They’re often trained on datasets drawn from the internet, books, articles and more. This training enables them to predict the next word in a sequence, generating human-like, contextually relevant text. It’s not just spitting out what it knows – it’s piecing together language in a way that mimics human conversation or writing.
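As a loose analogy – real LLMs use deep neural networks trained on vast corpora, not simple word counts – here’s a toy ‘predict the next word’ model in Python. The sample text is invented, but the mechanic is the same in spirit: learn which words tend to follow which, then generate the likeliest continuation:

```python
# A toy bigram model: predict the next word as the one that most often
# followed the current word in the sample text. Real LLMs use deep
# neural networks over vast corpora; this only illustrates the idea of
# "predicting what comes next". The sample text is invented.
from collections import Counter, defaultdict

text = ("please verify the invoice and approve the payment "
        "please verify the payment details before you approve the transfer")

follows = defaultdict(Counter)
words = text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the sample text."""
    return follows[word].most_common(1)[0][0]

print(predict_next("verify"))  # -> "the"
print(predict_next("the"))     # -> "payment"
```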

Understanding these nuances can be pivotal in appreciating the complexity and range of the AI technologies that both scammers and legitimate organisations can leverage.

AI is turbo-charging scam efforts

Even before the widespread accessibility of tools like ChatGPT and its offshoots, scammers were using increasingly sophisticated tactics to circumvent organisations’ financial processes. Business email compromise (BEC) attacks weaponise the email accounts of trusted contacts and are one of the most common ways that scammers defraud companies.

Now, even heavily moderated, benign tools like ChatGPT can help scammers craft thousands of polished, professional-sounding phishing emails at scale. These tools can even help them craft a message in a specific person’s tone and style.

But that’s just the beginning. Not only do malicious versions of ChatGPT already exist, but AI-powered technologies like deepfakes are helping scammers imitate more than text-based communication – they can construct audio and video that closely mimic real people’s voices and faces. The tech is still early, but how will your financial controls eventually fare against a scammer who can easily fake a phone call or video call from your CEO?

In other words, scams have already evolved, and they’re smarter than ever. The problem isn’t just AI’s reach into scams – it’s that they’re constantly changing, always a step ahead. Gone are the days of easily spotted phishing emails. AI-powered scams are the new norm, and they’re a moving target.

Banking and government responses

Famously, legislation and regulation tend to move at a slower clip than technology. Plus, cyber-criminals tend to have far more freedom to innovate, unfettered by oversight or bureaucratic processes.

But that doesn’t mean governments or the financial sector are idle. For over a decade, the private and public sectors have been grappling with the rise of digital fraud and scams. On one hand, there’s an ever-present demand for easier, more seamless user experiences; on the other, those seamless digital experiences often make users softer targets for cyber-criminals.

Banks have been introducing products like confirmation of payee (CoP), while the public sector is billing PayID as a way to weed out scam attempts. Although these steps certainly help minimise some fraud risks, it’s important to understand that they’re only partial solutions. They’re also designed for a threat landscape that’s largely static, when the real landscape is anything but.
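To see why a check like confirmation of payee helps without being a silver bullet, here’s a simplified, hypothetical sketch of the idea in Python – compare the payee name the payer entered against the name registered on the destination account and warn on mismatches. The matching logic and threshold are invented for illustration, not any bank’s actual implementation:

```python
# A simplified sketch of a confirmation-of-payee (CoP) style check:
# compare the payee name entered by the payer against the name held by
# the receiving bank. Hypothetical logic, not any bank's real system.
from difflib import SequenceMatcher

def cop_check(entered_name: str, registered_name: str) -> str:
    """Return 'match', 'close match' or 'no match' for a payee name."""
    ratio = SequenceMatcher(
        None, entered_name.lower().strip(), registered_name.lower().strip()
    ).ratio()
    if ratio == 1.0:
        return "match"
    if ratio >= 0.8:
        return "close match"  # warn the payer before funds move
    return "no match"         # strong signal of an error or a scam

print(cop_check("Acme Supplies Pty Ltd", "Acme Supplies Pty Ltd"))  # match
print(cop_check("ACME Supplies Ltd", "Acme Supplies Pty Ltd"))      # close match
print(cop_check("Acme Supplies", "Globex Holdings Pty Ltd"))        # no match
```

Note what a check like this can’t do: if a scammer controls an account genuinely registered under the name they’re impersonating, the name check passes – one reason such controls remain partial solutions.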

Scams are evolving – so should businesses

For businesses, complacency is the enemy. The name of the game is vigilance and collaboration. As risks evolve, companies have to strengthen their internal controls with a new AI-powered future in mind. Moreover, it’s about understanding cyber vulnerabilities as a collective problem, not an individual problem – even if your own systems and data are secure, your AP team can still be duped if a trusted supplier’s systems are infiltrated.

That means businesses can’t assume that banks or government regulators will protect them singlehandedly, nor can they depend on a single control or solution to keep their finances secure. Instead, they’ll need an ever-evolving, multi-faceted approach that prioritises collaboration and community.

The power of collaborative cybersecurity

Evidence shows that community collaboration works. When multiple elderly couples were targeted by the same ‘grandson in jail’ scam, their bank manager spotted the pattern and intervened – a ‘safety in numbers’ moment that flagged the scam and prevented financial ruin.

That brings us to Eftsure, a platform that has pulled together thousands of businesses in a communal fight against scams. Rather than manually alerting each other about scams, users receive automatic, real-time alerts if payment information is anomalous. This happens through sophisticated cross-matching and verifications, built around a unique database of millions of verified suppliers.
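Here’s a deliberately simplified sketch of that cross-matching principle in Python – hypothetical data and logic that illustrate the idea of checking payment details against a shared store of verified supplier records, not Eftsure’s actual system:

```python
# A simplified sketch of community cross-matching: outgoing payment
# details are checked against a shared store of verified supplier
# records before funds are released. Hypothetical data and logic,
# illustrating the principle rather than Eftsure's real platform.

# Shared database of verified suppliers, keyed by an invented ABN.
VERIFIED_SUPPLIERS = {
    "12345678901": {"name": "Acme Supplies Pty Ltd",
                    "bsb": "062-000", "account": "12345678"},
}

def verify_payment(abn: str, bsb: str, account: str) -> str:
    """Flag a payment whose banking details don't match the verified record."""
    record = VERIFIED_SUPPLIERS.get(abn)
    if record is None:
        return "UNKNOWN: supplier not in the verified database"
    if (bsb, account) != (record["bsb"], record["account"]):
        return "ALERT: details differ from the verified record - hold payment"
    return "OK: details match the verified record"

# A scammer has swapped their own account number onto a real invoice:
print(verify_payment("12345678901", "062-000", "99999999"))  # ALERT
print(verify_payment("12345678901", "062-000", "12345678"))  # OK
```

Because the store is shared, one member’s verified record protects every other member that pays the same supplier – the ‘safety in numbers’ effect at work.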

The biggest takeaway for businesses? In the fight against AI-enabled scams, two heads are better than one. Or in the case of platforms like Eftsure, nearly 2,000 heads. We’re all in this together – staying ahead requires collective vigilance and action.

 

Want to see the full discussion?

Check out the replay of the webinar, Financial control in an AI-powered world.

Watch now
