
Accounts Payable Security Report: November 2023

Shanna Hall

Each month, the team at Eftsure monitors the headlines for the latest accounts payable (AP) news. We bring you all the essential stories in our Security Report so your team can stay secure.

ASIC says SMBs, supply chains are security’s ‘weak links’ 

A survey by the Australian Securities and Investments Commission (ASIC) has uncovered major gaps in the security practices of small businesses, warning that these gaps create vulnerabilities for larger organisations through their supply chains.

The survey reveals that a third of small businesses lack adequate multifactor authentication, with over 40% failing to update their applications. Additionally, nearly half of these businesses do not conduct risk assessments of vendors or third parties. 

That finding is consistent with Eftsure’s report on digital fraud prevention, which found that one in five small businesses use no payment controls at all.

Australian government pledges not to pay ransoms, while 73% of businesses end up paying

The Australian government has joined a US-led international alliance of 40 countries that have pledged never to pay ransoms to cybercriminals, in a bid to stymie the rising tide of ransomware attacks.

Meanwhile, in a recent survey, about half of Australian companies say they’ve faced ransomware attacks in the past five years, with approximately 73% of those affected paying the ransom – often within the first 48 hours. The survey indicates that two-thirds of victimised firms negotiate with cybercriminals, usually angling for benefits like detailed information about the stolen data or advice on preventing future breaches. 

Given the opaque nature of cybercrime groups, those assurances are hardly reliable, so we expect to see further guidance in the national cybersecurity strategy that’s slated for release next week.

Microsoft introduces deepfake creator

Microsoft has launched the Azure AI Speech text-to-speech avatar, a tool that allows users to create photorealistic avatars. Users upload images and script dialogue to animate them, and the resulting avatars can be used for training videos, chatbots and more. The avatars speak multiple languages and can interact with AI models like OpenAI’s GPT-3.5.

Recognising the potential for abuse, Microsoft has built in several precautions – for starters, access to custom avatars is limited and strictly controlled, and most Azure subscribers will start with prebuilt avatars. However, as we’ve explored in previous pieces, deepfake technology is still emerging and its cybercrime potential is startling. Like Microsoft’s tool, ChatGPT is also tightly moderated but can still be gamed for illicit purposes – and ‘dark’ versions like WormGPT are already available on the dark web. 

The tool is currently available in public preview.

Boeing data released by ransomware gang

The LockBit ransomware gang has followed through on threats to release sensitive data stolen from leading aerospace company Boeing. The company had spent several weeks investigating a cyber attack on its distribution business after the gang claimed to have stolen data. It’s unclear whether Boeing attempted to negotiate with the group, but by 15 November over 40GB of data had been published.

Boeing has said that flight safety is unaffected and that the company is working closely with law enforcement and regulatory bodies. Its services website is down due to “technical issues,” although the company hasn’t confirmed whether this is related to the cyber incident. 

The gang, active since 2019, has extorted millions through cyber attacks on victims including Continental, the UK’s Royal Mail and the Italian Internal Revenue Service.

World leaders work to agree on AI standards

At a summit early in November, global tech leaders pledged to set aside competing interests and jointly address the potential harms of artificial intelligence. The summit, led by Britain, saw participation from the US, EU, China, India, and 25 other nations. They focused on reconciling differing approaches to AI regulation: self-regulation in the US, strict laws in the EU, and China’s use of AI as a political tool.

Key outcomes included pledges for stronger collaboration and a broader agreement to continue the summit process, with future meetings planned in Korea and France. However, differences in regulatory models remain unbridged and future cooperation is uncertain. The summit marks a positive step, but real-world impacts are likely to be slow while malicious uses of AI are spreading fast.
