30 Emerging Artificial Intelligence Statistics


Artificial Intelligence (AI) is playing a crucial role in the world of finance and technology.

AI and machine learning tools can drive new efficiencies and minimise risk in accounts payable teams – for example, by automating manual processes, saving employees hours of work and reducing the risk of human error.

But cyber-criminals and fraudsters are also leveraging AI-enabled technology to work more efficiently. AI-driven cyberattacks can break through defences and develop mutating malware, sometimes with disastrous consequences.

To illustrate the power of this technology – both the good and bad – here’s a roundup of the top artificial intelligence statistics.

Author’s Top Picks

  • 69% of enterprise executives believe artificial intelligence (AI) will be necessary to respond to cyberattacks, with the majority of telecom companies (80%) saying they are counting on AI to identify cyber threats.
  • The global AI market is projected to grow from USD 86.9 billion in 2022 to USD 407 billion by 2027, a CAGR of 36.2% over the forecast period.
  • Only 12% of Australian businesses are currently prioritising consumer confidence and trust in their AI rollout, and only 8% believe their AI adoption is mature.
  • Results from a study show that 56% of identified AI-driven cyber-attack techniques were demonstrated in the access and penetration phase, and 12% in the exploitation phase.

General Artificial Intelligence Statistics

1. 27% of executives say their organisation plans to invest in cybersecurity safeguards that use AI and machine learning.

As businesses integrate AI, cyber-criminals are starting to identify vulnerabilities and weaknesses they can exploit by turning AI against those businesses. According to the PwC report, “cyberattacks will be more powerful because of AI – but so will cyber defence.” For now, AI is proving a powerful cybersecurity tool for detecting anomalies and behavioural patterns.

2. The wholesale and retail industries have the largest number of jobs at high risk of AI automation – 2.1 million.

Transportation and storage is close behind, with just under 1.5 million jobs at high risk of automation. As AI usage trends upward in the coming years, businesses will have growing opportunities to automate processes and functions, improving productivity.

3. Across all company verticals, AI has the potential to reduce operating expenses by up to 20%.

AI allows accounting companies to automate almost all accounting tasks, including payroll, banking processes, audits and more. The big four accounting firms (Deloitte, KPMG, Ernst & Young and PwC) have already started investing billions of dollars in machine learning technology.

4. 69% of enterprise executives believe artificial intelligence (AI) will be necessary to respond to cyberattacks, with the majority of telecom companies (80%) saying they are counting on AI to identify cyber threats.

Rapid advances in AI and machine learning are redefining cybersecurity’s future. IT teams and financial leaders are starting to use AI to support their cybersecurity strategies, particularly for fraud and malware detection. AI technology is proving effective at surfacing suspicious patterns and the connections between emerging risk factors.

5. 51% of enterprises primarily rely on AI for threat detection, ahead of prediction and response.

Gartner predicted that $137.4 billion would be spent on information security and risk management in 2019, a figure expected to reach $175.5 billion in 2023. As cybersecurity budgets increase, so will the focus on AI security. According to Statista, 75% of enterprises were relying on AI solutions for network security in 2019, as well as using AI to underpin their security automation frameworks.

6. Almost half (48%) of organisations said that budgets for AI in cybersecurity would increase by nearly a third (29%) in FY2020.

Oliver Scherer, CISO of Media Markt Saturn Retail Group, says, “AI offers huge opportunities for cybersecurity. This is because you move from detection, manual reaction, and remediation towards automated remediation, which organisations would like to achieve in the next three or five years”.

Senior executives looking to implement AI in their accounting or technology processes must also adapt their cybersecurity awareness training to cover AI security. Done correctly, this can mitigate cyber-crime in the years to come.

7. More than two-thirds (69%) of enterprises acknowledge that they will not be able to respond to cyber threats without AI.

A clear majority of senior IT executives say AI is fundamental to the future of their organisations’ cybersecurity. For instance, 74% stated that AI has enabled a faster response, such as reducing the time taken to detect cyber threats. However, there is still a major lack of understanding of AI’s core capabilities and security functions, which can be a particular challenge for small to medium businesses.

8. Only 18% of professional service industry leaders expect AI to improve organisational efficiency, but 96% believe it will help their company grow.

CFOs and accounts payable managers expect that AI can help improve operational efficiency by automating manual tasks and processes. Financial leaders are also recognising that AI can quickly crunch big data.

9. 54% of business executives say AI solutions implemented in their businesses have already increased productivity.

AI has a crucial role to play in accounting firms as well, since CFOs can use AI to automate tedious tasks from billing, to general accounts, to compliance. Despite the technological advances, data security is still a major concern when looking to invest in AI. Because of the large volumes of sensitive data used on AI platforms, CFOs need an effective AI governance strategy to manage all of that information, which can encompass login credentials, banking information and more.

Artificial Intelligence Adoption Statistics

10. The global AI market is projected to grow from USD 86.9 billion in 2022 to USD 407 billion by 2027, a CAGR of 36.2% over the forecast period.

With so many use cases and even more potential ahead, it’s no surprise that the artificial intelligence market is expected to grow significantly over the next few years, more than quadrupling between 2022 and 2027.
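As a quick sanity check, those figures are internally consistent with the standard compound annual growth rate (CAGR) formula. Here’s a minimal sketch in Python that reproduces them; the only inputs are the numbers quoted above:

```python
# Sanity check: compound growth, V_end = V_start * (1 + CAGR)^years,
# using the market figures quoted above (USD billions).
start, end, years = 86.9, 407.0, 5  # 2022 -> 2027

implied_cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {implied_cagr:.1%}")  # ~36.2%

projection = start * (1 + 0.362) ** years
print(f"2027 value at a 36.2% CAGR: USD {projection:.1f} billion")  # ~407
```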

11. The Committee for Economic Development report says AI is still in the early phases of implementation in many Australian organisations and industries, with only 34% of firms using it across their operations.

According to the Committee for Economic Development report, Australian enterprises are less sophisticated than their overseas counterparts when it comes to AI adoption. The report refers to Stanford University’s AI Index, suggesting that Australia is behind countries like France, Canada, and China. One of the concerns that Australian organisations face is AI governance in data security and business practices.

12. 81% of accountants believe that leveraging AI would save as much as ten working hours a week by helping them automate redundant tasks, potentially unlocking up to £68,163 in additional revenue.

Nearly one in two accountants believe that automation will reduce the stress of manual tasks. One interesting wrinkle in AI adoption is that younger demographics are embracing new technology, while 40% of over-55s say they are not interested in using new technology in their practice.

13. 69% of Australian leaders have incorporated AI into their business strategy for 2022.

According to Data Agility, 47% of Australian leaders have not started to consider AI as part of their strategy, whereas 22% have adopted AI as a core part of their business strategy and 20% are waiting for AI to mature before implementation.

14. 8% of 416 organisations in Australia are in the maturing stage of responsible AI, while some 38% are in the developing stage, 34% are in the initiating stage and 20% are in the planning stage.

Australian organisations looking to deploy AI ethically and safely in their accounting processes can do so by adopting “responsible AI.” According to Accenture, responsible AI is the practice of designing, developing and deploying AI with good intentions, in ways that empower employees and accounts payable departments. Its main principles are fairness, transparency, explainability, privacy and security.

15. Only 22% of Australians, and 35% of people worldwide, trust how organisations are using AI.

With the advancement of AI technology, applications are already rapidly transforming the financial landscape. AI is becoming a vital tool used in data processing, auditing, and transforming financial processes. So far, AI applications are fuelling growth not just in accounting but also in construction, healthcare and other industries.

AI Security Statistics

16. Only 12% of Australian businesses are currently prioritising consumer confidence and trust in their AI rollout, and only 8% believe their AI adoption is mature.

Data security is one of the complex challenges CFOs face when looking to invest in AI technology. In the era of COVID-19 and hybrid working models, cyber-crimes have become more common, more sophisticated and more costly. Data breaches are a persistent issue that needs to be addressed by all senior management. When handling bulk data, it’s crucial that AP teams are trained on how to manage sensitive files.

17. 44% of executives are assessing AI-enabled security systems, and 38% are deploying autonomous response technology.

The deployment of autonomous response technology can help minimise the risk of fraud and data breaches. However, attackers may incorporate AI-powered cyber-attacks that can overcome detection tools. Cyber-criminals can circumvent AI-enabled systems by using technology like deep fakes or AI-powered malware to manipulate information.

18. 80% of telecommunications executives stated that they believe their organisation would not be able to respond to cyberattacks without AI.

According to Capgemini’s AI research, the technology has proven to be an effective way to detect and prevent cyber-attacks. Capgemini’s cybersecurity report reinforces this, showing how organisations benefit from AI in cybersecurity: for example, the time taken to detect threats and breaches is reduced by up to 12%.

19. Replacing traditional threat-hunting techniques with AI can increase detection rates by up to 95%.

Proactive threat hunting involves searching a network for cyber threats that have evaded existing defences. IT teams use threat-hunting tools to investigate potential malicious compromises of their organisation’s systems, allowing them to stop advanced persistent threats from lingering in the network. With AI, malicious software can be detected more quickly than with other security tools.

20. AI reduces the time taken to remediate a breach or implement patches in response to an attack by 12%.

Organisations understand that cybersecurity is an ever-increasing threat that IT teams, CEOs, and CFOs need to face each year. Capgemini’s research revealed that artificial intelligence-enabled cybersecurity is increasingly vital.

Artificial intelligence statistics demonstrate that AI has proven to be an effective solution in combating cyber threats like malware and phishing emails. Capgemini believes that organisations should focus their AI security initiatives on fraud detection, malware detection and more.

21. Network security is the most common artificial intelligence (AI) use case for cybersecurity, as 75% of surveyed IT executives reported the use of AI for this purpose as of 2019.

Cybersecurity is crucial in protecting your AP team from cybercriminals. One of the best AP security practices that CFOs can start incorporating into their workplace is security awareness training. Particularly in network security, the first step is to make your AP team aware of the risks involved.

22. AI handles 75% of network security solutions in international enterprises.

AI presents a great opportunity to strengthen organisational cybersecurity through continuous learning. It can record and monitor big data, helping humans and machines alike recognise threat patterns, so security adapts as cyber-attacks evolve and become more creative.

23. 50% of American consumers feel “optimistic” about AI while the other half feel “fearful and uninformed.”

According to a Blumberg Capital report, half of surveyed consumers feel optimistic and the other half feel fearful. The report highlights the disconnect, pointing out that most consumers are getting their information on AI from entertainment like movies and TV shows or social media. In addition, 53% think that AI primarily involves robots or self-driving cars.

24. 91% of organisations are sure their sensitive data is stored safely, but about one in four organisations admitted they had actually discovered such data outside of designated secure locations in the past 12 months.

43% of organisations stated that their data was left overexposed for days, while 23% said exposures lasted for weeks before incidents were discovered. Cyber-criminals are known to fly under the radar of IT teams through the use of rootkits, bootkits or firmware kits. For example, a bootkit is designed to control all stages of the operating system’s start-up.

AI-enabled Attacks Statistics

25. 77% of enterprises say that increased vulnerability and disruption to the business are holding them back from implementing AI, followed by the potential for biases and a lack of transparency at 76%.

While machine learning presents many benefits to businesses, it can also be used by cyber-criminals to enhance their attack methods. According to Spiceworks, attackers are leveraging AI software for malicious purposes through advanced social engineering techniques, deep fakes, malware hiding and improved brute force attacks.

26. Results from a study show that 56% of identified AI-driven cyber-attack techniques were demonstrated in the access and penetration phase, and 12% in the exploitation phase.

The most significant key finding from the study is that current defence mechanisms to combat AI-driven attacks are inadequate. With cyber-attacks becoming more sophisticated, security measures need to evolve in tandem. Organisations that are incorporating machine learning tools need to invest in AI cybersecurity infrastructures to combat emerging cyber threats.

27. Two out of three respondents saw malicious, AI-generated deep fakes being used as part of cyberattacks.

A deep fake is a digitally forged image or video in which one person’s likeness is swapped for another’s. Deep fakes use deep learning artificial intelligence to fabricate images or events, such as mapping a computer-generated face onto another individual or creating fake audio of a public figure.

Cybercriminals can use these techniques to destroy the image and credibility of a CEO or CFO, or to distribute false information about a company.

28. A startling 66% of respondents – up from 13% in 2021 – said they had experienced a security incident involving the use of a deep fake over the past 12 months.

Detecting and mitigating deep fake attacks is a difficult problem for all organisations, and unfortunately no available tool fully addresses the challenge. The best way to mitigate this type of attack is to raise awareness of the problem among financial leaders, boards and IT teams, who are the main targets of these attacks.

For financial professionals, internal controls are another important measure. Multiple types of verification can help determine when impersonation is at play – whether visual or otherwise.

29. In 2021, around 68% of survey respondents stated that AI can be used for impersonation and spear-phishing attacks against their organisation.

CFOs who incorporate security awareness training give their teams the best chance of spotting and dealing with cybercrime. Most importantly, training must be refreshed every 6-12 months so AP teams learn about newly prevalent threats and how to deal with them. These workshops should also be tailored and interactive to keep employees engaged at all levels.

30. 62% of 100 attendees said they firmly believe AI will be used by hackers within the next twelve months.

One way AI defences can help organisations mitigate cybercrime is through AI-driven incident response. AI-powered systems can raise security alerts and prioritise cyber incidents, combining behavioural analytics, monitoring and prediction with AI-driven anomaly identification. When investigating cyber incidents, machine learning tools can analyse patterns across a distributed network at a scale no human observer could match.
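To make the prioritisation idea concrete, here’s a minimal, hypothetical sketch in Python. The alert fields and weighting scheme are invented for illustration, not drawn from any particular product:

```python
# Minimal sketch of AI-assisted incident prioritisation: rank alerts
# by a combined score so analysts triage the riskiest ones first.
# The fields and weighting below are hypothetical.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    anomaly_score: float    # 0-1, e.g. from a behavioural-analytics model
    asset_criticality: int  # 1 (low value) to 5 (crown-jewel system)

def priority(alert: Alert) -> float:
    # Hypothetical weighting: model confidence scaled by asset value
    return alert.anomaly_score * alert.asset_criticality

alerts = [
    Alert("workstation-14", 0.95, 1),
    Alert("payments-server", 0.60, 5),
    Alert("mail-gateway", 0.30, 3),
]

for alert in sorted(alerts, key=priority, reverse=True):
    print(f"{priority(alert):.2f}  {alert.source}")
# The payments server tops the queue despite a lower raw anomaly score.
```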

FAQs

What is artificial intelligence?

Artificial intelligence is a field that develops technology to imitate – and improve upon – human cognitive functions, such as learning and deductive reasoning. An artificial intelligence system is designed to use logic that helps the machine learn and analyse new information, along with applying that information to real-world contexts.

A related but very distinct concept is machine learning, in which data models help the computer to continue learning even without the direct involvement of a human. This means using algorithms and statistical models that enable the system to improve its performance on a specific task over time.

To put it simply, AI is the ability of a machine or computer program to think and learn, while ML is the science of getting a computer to act without being explicitly programmed. AI is the goal, and ML is the means of achieving that goal.
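To make the distinction concrete, here’s a minimal sketch (assuming the scikit-learn library; the feature values and labels are made up for illustration). Instead of a programmer hand-coding rules for what counts as a suspicious login, a model learns the rule from labelled examples:

```python
# A model infers a decision rule from labelled examples instead of
# a programmer hand-coding if/else logic. Data is illustrative only.
from sklearn.linear_model import LogisticRegression

# Features per login attempt:
# [failed attempts in the last hour, login from a new country (0/1)]
X = [[0, 0], [1, 0], [2, 0], [8, 1], [12, 1], [9, 0]]
y = [0, 0, 0, 1, 1, 1]  # 0 = legitimate, 1 = suspicious

model = LogisticRegression().fit(X, y)
print(model.predict([[10, 1], [1, 0]]))  # expect [1 0]: suspicious, legitimate
```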

How can AI be used in cybersecurity?

One of the main ways AI can be applied in cybersecurity is in analysing large amounts of data, such as network traffic or user behaviour, and enabling real-time responses. For instance, security professionals could train an AI system to recognise the characteristics of malware or suspicious network activity, and then automatically alert security teams to respond.
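As a rough illustration of that idea, here’s a minimal sketch (assuming the scikit-learn and NumPy libraries, with synthetic data invented for the example) that trains an unsupervised anomaly detector on network-flow features and flags outliers for review:

```python
# Minimal sketch: unsupervised anomaly detection on network-flow
# features. The data is synthetic and illustrative, not real traffic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Normal traffic: [bytes sent, connections per minute]
normal = rng.normal(loc=[5_000, 10], scale=[1_000, 2], size=(500, 2))
# A few anomalous flows, e.g. possible data exfiltration or scanning
odd = np.array([[95_000.0, 3.0], [80_000.0, 2.0], [4_900.0, 90.0]])

flows = np.vstack([normal, odd])
detector = IsolationForest(contamination=0.01, random_state=0).fit(flows)

labels = detector.predict(flows)  # -1 = anomaly, 1 = normal
for flow in flows[labels == -1]:
    print(f"Alert security team: suspicious flow {flow}")
```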

Just like AP and finance processes, cybersecurity teams can use AI to automate certain tasks. As an example, security teams can use AI algorithms to automatically detect and block potential attacks or suspicious activity, freeing employees to focus on other tasks.

Similar to financial control automation, an organisation’s security policies can inform the AI system, ensuring that certain standards are enforced automatically and are less vulnerable to human oversight or corner-cutting.
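As a simple illustration of that kind of policy-driven automation, here’s a hypothetical sketch. The policy fields and rules are invented for demonstration, not taken from any standard or product:

```python
# Minimal sketch: encoding a security policy as data so automated
# checks can enforce it consistently. All fields are hypothetical.
POLICY = {
    "max_failed_logins": 5,
    "require_mfa": True,
    "blocked_countries": {"XX"},  # placeholder country codes
}

def policy_violations(event: dict) -> list[str]:
    """Return the policy rules an access event breaks, if any."""
    violations = []
    if event["failed_logins"] > POLICY["max_failed_logins"]:
        violations.append("too many failed logins")
    if POLICY["require_mfa"] and not event["mfa_passed"]:
        violations.append("MFA not completed")
    if event["country"] in POLICY["blocked_countries"]:
        violations.append("login from blocked country")
    return violations

event = {"failed_logins": 7, "mfa_passed": False, "country": "AU"}
print(policy_violations(event))  # ['too many failed logins', 'MFA not completed']
```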

How could a malicious actor exploit an organisation’s AI?

There are many ways a malicious actor could try to exploit an organisation’s AI system. For example, they could attempt to gain unauthorised access to the system and manipulate its algorithms or data in order to steer the AI toward the outcomes they want. This could include modifying the AI system’s outputs or decision-making processes, or even tampering with the data the system uses to learn and make predictions.

Another way that a malicious actor could exploit an organisation’s AI is by using it to drive attacks against other systems or networks. For example, they could use the AI system to generate large amounts of traffic that overwhelm a targeted network or server.

This is why most AI systems need a human-in-the-loop model, which leverages the best of machine learning and human reasoning. Organisations need to monitor AI systems closely for any signs of tampering, and mitigate risks through a comprehensive strategy that embeds checks and internal controls throughout everyday operations. For finance and AP leaders, this often means robust internal controls and verification methods.
