How is AI being used in finance?
The finance industry is undergoing a major transformation thanks to the rapid adoption of AI technology. Much of this trend has been …
Each month, the team at Eftsure monitors the headlines for the latest accounts payable (AP) and security news. We bring you all the essential stories in our cyber brief so your team can stay secure.
A critical vulnerability in ChatGPT’s long-term memory feature allowed attackers to implant false information and exfiltrate user conversations, security researcher Johann Rehberger has revealed. The flaw enabled malicious actors to inject permanent false memories through untrusted content like emails, documents and images.
While OpenAI has patched the data exfiltration vector, concerns remain that the system is vulnerable to permanent false memory insertion through prompt injection attacks. In other words, be conscientious about the information you (or your employees) are inputting into ChatGPT and other large language models (LLMs).
It’s one of many emerging, AI-enabled risks – we’ve also explored Agent Zero and how it’s creating heightened threats alongside new business and productivity opportunities.
The chief executive of cybersecurity firm Wiz, Assaf Rappaport, has revealed a recent deepfake attack that targeted company employees. Rappaport disclosed that dozens of staff received fake voice messages purporting to be from him, attempting to steal their credentials.
Luckily, employees were able to spot a discrepancy between the deepfake and Rappaport’s normal speaking voice. Since the deepfake was generated from conference footage – and Rappaport experiences public speaking anxiety – the artificial voice differed massively from the day-to-day speaking voice that was familiar to staff.
The attack wasn’t successful, but that doesn’t mean attackers paid a major price. Rappaport noted they were unable to track the source of the attack and that it illustrates the low stakes for cybercriminals. “That’s why cyberattacks are so beneficial [for the attackers] … the risk of getting caught is very low.”
In October, millions of Australians were hit with an email extortion scam, in which malicious actors claimed to have compromising videos of them watching adult content. The scammers also incorporated personal information, likely gleaned from data breaches, to lend credibility to the threat.
The scam was wide-reaching enough to prompt the Australian Competition and Consumer Commission (ACCC) to issue an alert.
This type of extortion, sometimes called “sextortion,” is an old tactic. However, this instance was noteworthy as a mass attempt that leveraged tailored information about each target. It’s an unfortunate example of how scammers are using stolen data and technology-enabled efficiencies to scale and personalise existing attack tactics.
Cybercriminals have stolen a collective $7.7 million through business email compromise scams that target US organisations. A Texas construction firm lost $6 million after fraudsters hacked a vendor’s email account and redirected payments to fake bank accounts. Meanwhile, in North Carolina, Cabarrus County lost $1.7 million when scammers impersonated a school construction contractor.
Despite authorities recovering $776,000 from the county attack, most funds remain unrecovered due to rapid dispersal through overseas accounts.
Across most regions, not just the US, sectors like construction and government are major targets for scammers. We explore why in this article.
While AI scams dominate headlines, a recent case in Kentucky highlights the persistent threat of insider fraud. A former Camco Chemical Company employee has been indicted for allegedly stealing $200,000 through a sophisticated scheme using a fake company.
The employee allegedly exploited his corporate credit card access and created fraudulent invoices between 2021 and 2024, demonstrating how internal fraud – despite its “old-fashioned” analogue nature – remains a significant business risk.
The US Department of Treasury says it has dramatically increased its anti-fraud efforts, preventing and recovering over $4 billion in improper payments during fiscal year 2024 – up from $652.7 million in 2023.
The agency says it has used AI and machine learning to recover $1 billion in check fraud, while risk-based screening and high-risk transaction monitoring prevented $3 billion in fraudulent payments.
The finance industry is undergoing a major transformation thanks to the rapid adoption of AI technology. Much of this trend has been …
Discover how Australia and the US are tackling payment fraud, using the UK’s proactive measures as a benchmark. Learn why prevention is key to staying ahead of scams.
All the news, tactics and scams for finance leaders to know about in July 2024.
Eftsure provides continuous control monitoring to protect your eft payments. Our multi-factor verification approach protects your organisation from financial loss due to cybercrime, fraud and error.