Payment Security 101
Learn about payment fraud and how to prevent it
Gavin: Thanks to the five hundred and forty-seven of you, and counting, who’ve joined. As I mentioned, we’ve seen a lot of interest, and it’s no secret why. What today is about is a learning journey into AI that we’re on with you. I’ve got a caveat there, because I think it’s always been a core value of Eftsure to be of high integrity and accuracy.
In the interest of that accuracy: we are not deep AI experts. We are accountants and technologists obsessed with our own area, which is internal controls to mitigate fraud and risk, and how AI might impact that. Today we’re just going to take you on some of that learning journey with us.
I also want to acknowledge, right at the outset, where some of this content comes from. One person in particular is Jason Ross. He’s a co-founder of a fascinating consultancy: a bunch of people left Accenture recently to form a firm called Time Under Tension. They’re focused on how businesses can grow using AI, in terms of growth, go-to-market, marketing, sales, and so forth, and I’d encourage you to look them up.
If you’re interested in learning about how AI can impact that part of your business, maybe direct your revenue, growth, or marketing people towards Time Under Tension. So, thanks to Jason Ross. Then some of the content here I’ve taken from two fascinating places. The first is a talk called The AI Dilemma.
Some of you might’ve seen something on Netflix called The Social Dilemma, about the concerns and ills of social media. It’s a widely regarded documentary, and the producers and creators of that documentary are now speaking about what they call the AI dilemma. There are certainly a couple of examples I’ve taken from them, so I’d like to acknowledge them and encourage you to engage with that content online.
And then more recently, as a kind of opposing view, I’ve been listening a lot to a guy called Marc Andreessen. Those of us with this kind of grey hair might remember that in the 90s he created Mosaic, one of the first web browsers, which became Netscape, a browser market ultimately dominated by Chrome from Google. But he’s a leading technologist and venture capitalist.
And I think he’s got some very interesting thoughts in opposition to Raskin and Harris. So, with those caveats and thanks in mind, let’s get into it. If you’ve been listening to me or any of my colleagues from Eftsure at financial services conferences, financial leaders’ conferences, or events like this, you’ll know that for the past two years we’ve been speaking about the growth of cyber-crime and scams and where it originates from. We’ve been unpacking the nature of those scams, things like BEC and executive fraud, we constantly go on about why it’s growing, and we’ve spoken in detail about the scams and frauds.
We’ve also covered case studies. More recently, in the past few months, you might have recognised some of that content, those black slides, where we’ve looked at how these large data breaches have fuelled the growth of scams and frauds. And then we often talk in general principles about what to do about it; we try to be as specific as we can in a short space of time.
I’ll cover a bit of that at the end, but as I started out saying, we’re in a period of rapid, relentless change. I know every speaker at every webinar and conference for the past 20 years has said the only constant is change, but my God, it feels like it’s ramping up. Just a couple of headlines: in the past few months, on the one side, we’ve seen a huge volume of articles about AI and what it means for security and for society.
On the other hand, there’s been a big uptick in policy intensity from the government and the banks, two of many parties, but the two closest to home for us, around how to respond, not to AI per se, but to growing cyber-crime. With all that in mind, today is really about how AI can accelerate scams and frauds, what it means for their evolution, and then, on the other side, not so much what your business personally can do.
I’ll touch on that, but because we’ve spoken about it in the past with high frequency, I’d rather give you a primer on what government is doing as mitigating measures, and then also what the banks are doing. The reason we’ve put that in is we often get asked about it.
What’s happening in the public sector, what’s happening in government; I’ll cover that ground, and at the end we’ll talk a bit about what it means for your business. All of this stuff, to use an old phrase, is like drinking from a fire hose. It’s a lot to take in: sometimes you just want a glass of water and people give you a lake, so to speak.
So what I’d encourage you to do is treat today’s talk like a supermarket. You walk in, you pick some things off the shelf; there’ll be some things you don’t agree with or like, so put those back; read all the ingredient labels if you can. Really, I’m going to throw quite a bit at you today, and you can choose which doors you want to go through.
Let’s start with a little primer on AI and how it’s impacting scams and fraud. This is a quick map of how we’ve learned to think about AI, and it also lets us use the same terms. AI is a really broad area of computer science: it’s the quest for instilling, ascribing, or creating a more human-like intelligence in machines and systems.
When I say system, I mean technology, software, and processes, all of it; a machine is really hardware and software combined. So we’re looking for more human-like intelligence in systems, and what we mean by that is, at the first level, decision making, and at the next level, the creation of new thoughts and new ideas.
Within that you get a subset called machine learning, a term you might have heard, and that’s really one method of achieving that human-like intelligence. Under machine learning, you train a system, a piece of software, a set of code, on data. You provide data and parameters, and then that system can make predictions without ongoing supervision or explicit reprogramming.
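That train-on-data, then-predict loop can be sketched in a few lines of Python. This is a toy illustration I’ve added, not any real AI library or anything from the talk: we “train” a straight-line model on four data points, and the fitted model then makes predictions on inputs it has never seen, with no further human correction.

```python
# Toy "machine learning": fit a line y = a*x + b to training data,
# then make predictions without any further human intervention.
train_x = [1.0, 2.0, 3.0, 4.0]
train_y = [2.1, 4.0, 6.2, 7.9]  # roughly y = 2x, with a little noise

n = len(train_x)
mean_x = sum(train_x) / n
mean_y = sum(train_y) / n

# Ordinary least squares: extract the pattern (slope and intercept)
# from the data, rather than hand-coding a rule.
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(train_x, train_y)) \
    / sum((x - mean_x) ** 2 for x in train_x)
b = mean_y - a * mean_x

def predict(x):
    """The 'trained' model: a pattern learned from data, reused on new inputs."""
    return a * x + b

print(predict(5.0))  # an input the model never saw during training
```

Real machine learning works on vastly more data and parameters, but the shape is the same: data in, pattern out, predictions on new inputs.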
So it’s a kind of pattern-making system built to predict. Within machine learning, an area has evolved called deep learning. Deep learning is a subset of machine learning: researchers realised that if they changed the way the software was coded and mapped patterns, it would behave a bit more like a human brain. While machine learning models required quite a lot of supervision and adjustment, these deep learning models require fewer corrections; they almost learn and correct on their own, based on their own output and constant new inputs.
And then within that you get generative AI. The difference between generative AI and a broader deep learning or AI model is that generative AI, or gen AI, produces new content based on existing content.
Now, studies, research, investigations, and projects in this artificial intelligence era have been going on for some time. Why has it suddenly exploded? The big shift is this: until 2017, there were all these separate areas of AI learning, development, and exploration, both in academia and the corporate world, but those areas were quite discrete.
They didn’t inform each other. If you were working on improving a computer’s ability to see, recognise speech, create music, recognise images, or even, moving to the mechanical world, robotics, progress in one area was not creating progress in another. People were looking at these things very separately.
However, in 2017, there was a breakthrough, for lack of a better term. That breakthrough was the realisation that regardless of which discipline of deep learning, machine learning, or AI generally you were working in, all of these models could work by interpreting your specific area, whether numbers, images, sound, or the programming of robotics, as language.
Okay, everything can be treated as language. The text of the internet is obviously language, right? But so are images and video. And here’s a good way to explain why an image can be language. Any image you see on your computer screen is made up of pixels.
Each pixel has a coordinate, an X and a Y. And each pixel has a colour, and all colours are made up of red, green, and blue; depending on how much red, how much green, and how much blue (RGB, you’d be familiar with the term), you get an almost infinite number of colours.
White, for example, is 255 red, 255 green, and 255 blue; the scale goes from 0 to 255, and black is 0, 0, 0. Any combination in between makes a different colour. Now, if you know the X and Y position of a pixel, which are two numbers, and you know its colour content, which is the next three numbers, the RGB, then you can express any pixel of any image in five numbers.
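You can see this five-numbers-per-pixel idea concretely in a few lines of Python. This is a toy example I’ve added, a made-up 2x2 image, nothing to do with any particular AI model:

```python
# A tiny 2x2 "image": each pixel is a coordinate (x, y) and a colour (r, g, b).
# Colour channels run from 0 to 255: white is (255, 255, 255), black is (0, 0, 0).
image = {
    (0, 0): (255, 255, 255),  # white pixel
    (0, 1): (0, 0, 0),        # black pixel
    (1, 0): (255, 0, 0),      # pure red
    (1, 1): (0, 0, 255),      # pure blue
}

# Flatten every pixel into five plain numbers: (x, y, r, g, b).
# Once it's numbers, software can "do the maths on it".
as_numbers = [
    (x, y, r, g, b)
    for (x, y), (r, g, b) in sorted(image.items())
]

for pixel in as_numbers:
    print(pixel)
```

A real photo is just this at a much bigger scale: millions of pixels, each reduced to the same five numbers.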
And if you can express it in numbers, you can do the maths on it. Images and video are language. So are sound and music: you’ve got scales, tones, chords; it’s kind of like an aural image, and it can be expressed as language. The same goes for MRI images, a brain scan, an x-ray, blood flow patterns.
Anything in the body, biologically, can probably be expressed as language. Computer code is obviously language. Financial information is really close to language, whether it’s words or numbers. So are radio waves. So in 2017 they understood, there was this breakthrough, that all of the AI development, research, and work can be expressed as language.
And then there was some software called the Transformer, which did the translation of images, or MRI information, into maths and language. It was explained to me as something like 200 lines of code. They started putting it into these large models, these large language models (LLMs), and that united all the development, and their progress absolutely exploded. I’m going to cite two examples, which are probably about three months old.
To me, they were really cutting-edge examples of how fast AI has grown. In one experiment, they got a person to look at an image of an animal; in my example it’s a lion, but I think the actual subject was a giraffe, for what it’s worth. They hooked the person who was looking at these images up to an MRI machine, and the MRI machine was tracking blood movements in the brain and producing images of those. They then fed those images into an AI model.
Remember, the AI model cannot see anything; it can only ingest images. It takes those images and converts them into language, or maths. It had been fed millions of images before, because you have to train these models on data and inputs. Most of the common models we see today, like ChatGPT, have been trained on kind of all the literature ever written by humankind up until 2018.
Think of it as all the knowledge of the internet. Okay: the AI could reproduce 85% of the image of the animal. I’m just going to pause there. Think about that. An AI has seen brain-scan images and it recreates an image of what the person saw. That is confronting. Then, in another experiment, at the bottom, they hooked up a webcam and a Wi-Fi router to an AI.
The AI observed and ingested that information of people moving around a room. They then unplugged the camera. But based on what it had mapped between the camera and the Wi-Fi signal, the radio waves, the AI could keep predicting how people would move around the room.
And those, again, are for me just two examples of how powerful these AI models and tools are, and how quickly they’re moving. I think what gives some people concern is that a lot of people within the AI community, and by that I mean universities and the large tech companies developing these models (Google, Microsoft, Facebook, the tech giants, probably IBM, and then leaders in the area like OpenAI, who created ChatGPT), have sort of rung these warning bells to say it’s moving so quickly that even the creators and developers can’t quite predict how powerful it is. I’m not going to go into the details of these charts, but look at the one on the right. They were training a large model.
When you hear the term training, it means inputting data so the AI can absorb it, learn it, and develop patterns, so to speak. They were training these AI models, I think there were two here, ChatGPT, which you’ll be familiar with, and PaLM, which is Google’s, on skills like doing arithmetic or unscrambling words.
And they were doing questions and answers with one of the models in English, but at some point the model started answering those questions in fluent Persian. It was never trained to do that. That kind of thing alarms people, and gives even experts within the industry cause for concern.
Now, there’s a proliferation of AI tools right on our phones and desktops. I’m sure if I did a show of hands, a huge number of people would have heard of or used ChatGPT. There’s a free version and a version you can pay for, and it’s a large language model. You can ask it questions, you can converse with it, and it’s incredibly powerful and growing in power every day.
It’s produced by a company called OpenAI, and never before has a piece of technology been adopted so quickly. It was made public in November last year, and by July this year it had 100 million users. Nothing else in the history of mankind has spread that fast. Again, that’s owned by OpenAI; Google has its own lab, DeepMind, and a tool called Bard.
There’s another AI company called Anthropic developing tools, Microsoft have theirs, and so on. I’ve just picked this popular example, but there’s a proliferation of these tools. The sister tool to ChatGPT, the image generation tool, is called DALL-E, which is, I guess, a pun on the famous artist Salvador Dalí.
And that’s a live example on your screen: I went in and quickly wrote, can you give me a picture of Deadpool, the superhero, in the style of Picasso? And it produces those results. Then other tech companies have jumped on this stuff. There’s Synthesia, where you can produce a sort of AI presenter.
Some of you might’ve used Grammarly, which corrects grammar and spelling; they’ve now put AI into their tools. So there’s a proliferation. Now, without a doubt, there is a huge amount of good that augmented intelligence, additional intelligence on top of our own native human intelligence, can do for society. I’ve just thrown a few examples up on the screen. It can help with medical diagnoses. I don’t want to sound silly or naive, but there are cancers we haven’t cured.
Is this the leap to that? There are travel challenges we can’t solve. Does AI augmenting our own capabilities add to that? So healthcare, disaster management, better modelling of resource distribution. We’re facing a climate crisis; does AI help solve those things?
Because it increases the computing power of people. And I think there are very few examples in history where more intelligence has led to worse outcomes; it usually leads to better ones: education, social services, welfare. Obviously, those are societal enhancements and improvements, but within our own world as business people, we’re probably already starting to see or can figure out the benefits.
I know in our business we’re starting to explore how AI can help with various things, and perhaps some of your businesses are too. And the thing is, this doesn’t have to be big business stuff, corporate institutional stuff. These tools are available to everyone; some are free, some low-cost.
There are lots of ways it can help, and the ATO knows this. I don’t know whether you put this in the good bucket or the bad bucket, and I can’t resist a tax joke, but nevertheless, there was an article recently on the ATO using some AI tools to help close loopholes, or at least pick up tax avoidance and so forth.
AI can certainly be used for good. One of the leading thinkers on that line of positivity and hope is Marc Andreessen, who I mentioned earlier. He says to think of generative AI as more of a puppy than a person. But what I’d throw back to Mr Andreessen, not absolutely, but certainly as a bone of contention, is that you get puppies, and you get puppies.
I think if you train a puppy badly, it becomes the dog on the left. If you train a puppy well, it becomes like my Luna, my retriever on the right, who is really just a joy, making everyone’s life better. Just to touch on this point of training: by training, we mean what data is put into the AI, from which it learns. You can feed an AI on dark, illicit content and code and it’ll produce one thing, or you can feed it more neutral or positive material. The other thing is that an AI is given ethical parameters through its programming. So beyond the data that’s put into it, there are questions about how data is collected and used.
Is there bias mitigation? Fairness? Transparency and explainability? Safety levels? How much human oversight is there on these tools and systems? And are there any regulatory or legal issues it needs to be compliant with? All of that is put into and onto the system, and how that is applied will determine, I think, how AI evolves.
It’s not hard to foresee; arguably, it’s starting to happen already with social media tools, which have used machine learning for years. If you’ve ever wondered how Facebook knows just what you’re interested in, or Instagram, or LinkedIn for that matter, it’s because they’ve had machine learning tools in operation for ages.
One of the ills of AI could be election manipulation. Another, which is an interesting one for many professionals on this call, is jobs: automation has already taken blue-collar jobs, and it’s been doing that for years.
Just look at a supermarket checkout: there are fewer people. It always somewhat concerns me, but AI can replace knowledge work too. It can replace the role of auditors to a degree, or marketers, or coders. So what does that mean for society? It can certainly destroy privacy. And, to get closer to today’s topic:
It can, and will, supercharge crime, without a doubt. And then you can extrapolate along that line. I don’t think it’s childish; we’ve all watched the sci-fi movies, and sometimes the robots win in the end. But even before you get to that potentially terrifying end state, what they call the AGI apocalypse, there’s plenty to worry about. AGI stands for artificial general intelligence, and the G, general, means the software learns enough to become sentient and totally self-determining. It decides that humans are inefficient and pointless, or that they will destroy the AI, so the AI moves to protect itself, and the robots take over, because there are robots in the military, behind tanks and missiles, and computer systems. We’ve all seen that movie. But even before you get to that point, long before, we know AI can impact scams and frauds.
I’m going to bring it closer to home and talk a bit about that, but first let me go back to this thought of the robots winning in the end. Marc Andreessen, who I’ve just quoted and will continue to quote at the end of this talk, makes an interesting point about this so-called fear or risk. He asks: how can the people who fear that the robots will win in the end, or that we face some apocalyptic, existential threat from AI, grant the AI the intelligence to do it, yet refuse to grant it the wisdom to know it’s morally wrong to do it?
He says you can’t have both. You can’t have a system that’s smart enough to determine that outcome without it having enough processing and intelligence to work out that it’s not a good outcome, that it’s morally bankrupt.
So that’s an interesting counterpoint to what he would call hysteria. I’m frankly undecided at the moment; I think both sides of that argument are legitimate. Let’s get closer to home. How is AI driving scams and frauds, the world we’re concerned with, and hopefully, as a financial leader, the world you’re concerned with too?
I’m not going to spend much time on breaches, because with breaches it’s almost simple: AI empowers coders to write more sophisticated code, and that’ll drive up data breaches. But a lot of what I say about scams and fraud can be applied to breaches too. Fundamentally, most of the scams and frauds we’ve been talking about for the past two years are social engineering scams.
They are scams of manipulation: frauds using email and software access to manipulate people into paying the wrong people, or doing the wrong thing. So what AI will do is radically drive up the ability of fraudsters to steal identities quicker and more effectively, which makes them better at impersonating people.
Impersonation is what drives scams. So that’s the short equation, but let’s get very practical and street-level on this stuff. Fraudsters operate from all over the world and often don’t have English as a first language. So with all these phishing emails we’ve seen over the past years, which I’m sure have inundated your inboxes, sometimes you can pick the scams because the language isn’t quite right.
Here’s a real phishing email, a real example sent to our business, which I picked up: “Our record indicates that you recently made a request to terminate your Office 365 email, and this process has begun by our administrator. If this request was made accidentally…” I’ll let you read the rest, but there are certain weird glitches in the language that would give me pause.
ChatGPT can already correct all of those. If I were a scammer with English as a second language, trying to make sure my phishing messages were perfect, I’d just drop that text into ChatGPT and, bam, it would correct them. You can also convert it into other languages, so you don’t have to target only English-speaking countries, and vice versa. Okay, so that’s the first thing. The other thing, and I’ve given this example before, so forgive me if you’ve heard it from me, is that ChatGPT isn’t just for natural verbal languages.
It works on programming languages too; those are also language. So, two examples. This one comes from a computer security company. What you’re seeing on the screen is a post by a hacker on the dark web, where he says he’s recently been playing with ChatGPT to write what he calls a stealer program.
What that stealer program does, if it’s inserted into your system, say via your email because someone clicked a bad link, is sit and crawl across the company’s systems, certainly starting with email, grabbing all sorts of Office and other attachments.
It’ll grab PDFs and any Office document, zip them up, compact them, and email them back to the hacker over the internet. So the hacker posts that he’s done this with ChatGPT, the security company then tested whether the code he produced actually works, and they wrote an article, which is how I found out about it. It’s completely functional.
And then here’s the thing you might’ve seen from me before, where I’m playing around with ChatGPT myself. I know nothing about coding. What you’re seeing is me just typing on the screen: write me a script (I didn’t even know it should be in this language called Python) to basically scan the web for people’s details and extract them for me.
Just people’s personal details. I could specify however many fields I wanted: name, address, and so on. And there’s ChatGPT, quickly writing out the code. Now, that’s an AI that’s been trained for no illicit purposes. But no sooner had ChatGPT come out than a bunch of hackers built their own, because a lot of these large language models are open source, meaning their code is published.
There are open-source versions; you can buy a model or borrow code to create your own large language AI. Some hackers have done exactly this, and instead of calling it ChatGPT, they’ve called it WormGPT. Instead of being trained on what ChatGPT is trained on, it’s probably trained on a lot of malicious software code, cybercrime content, phishing emails, and so forth.
You can use it to do some nasty stuff. So that’s a very ground-level view of what hackers can do. Then there’s the impersonation I spoke about: these new tools only need about, here you go, three seconds. That was three seconds of audio to fully impersonate someone’s voice. Just three seconds of your voice, which is a couple of words, and it can work out how you’d say all the words.
And this is already being used. Earlier this year, there were couples in Chicago who got phoned by what they thought was their grandson. The “grandson” would phone the grandparent and say: I’ve made a mistake, I’ve had a driving accident, I’m in jail, please can you withdraw $4,000 to bail me out.
That’s what happened, and one elderly couple did it. I’m going to come back and tell you how that ended, because I think there’s some quite powerful learning in there. So voice impersonation is happening. And as recently as this month, you can see the date there, August the 3rd:
A study came across my desk, just as I’ve been learning my way into all this, where researchers at the University of Surrey used AI to determine what people were typing on their keyboards by listening through a Zoom meeting, exactly like the Zoom meeting we’re on now. If I typed, the AI could work out what I was typing with 93% accuracy.
That’s really scary, because if you’re in a meeting and logging into something, and there’s an AI bot on that meeting, it could pick up your login credentials and you wouldn’t even know. So that’s another new area. Everyone’s familiar with deepfakes; in the past,
I’ve shown examples of actors who have been impersonated, and so forth. But here’s an interesting one. What you’re going to see on the left of your screen is a guy called Martin Lewis. He’s the UK’s equivalent of, let’s say, our Barefoot Investor: a consumer finance expert, widely known, and a person of high repute.
He often goes on morning TV shows and talks. This is the real Martin Lewis on the left talking: “The likelihood is not, but you’re just on the cusp of trying. And it’s actually an important question at the moment, because there is an urgent case.” That’s the real Martin Lewis. What caused alarm, both from Martin Lewis and actually from the government, because he’s so publicly known and this was so widespread, is what you’re going to see next:
Martin Lewis popping up on Twitter, allegedly promoting one of these equally nonsense investment scams involving Elon Musk. That’s also nonsense, but nevertheless, here’s the fake Martin Lewis: “Elon Musk’s new project, in which he has already invested more than three billion dollars. Musk’s new project opens up great investment opportunities for British citizens. No project has ever given such…” It is a little staccato, but that could be because I’ve downloaded it a few times and it’s deteriorated. A lot of people would have fallen for that. It looks like Martin Lewis and sounds like Martin Lewis, but it’s not Martin Lewis at all.
So you’ve got this massive proliferation of AI, and some people on the concerned side liken it to the risk of a weapon of mass destruction. I know the image on your screen right now is huge, maybe an extreme example, but maybe a bit prescient, because Oppenheimer, the movie, is out.
I haven’t seen it, but it’s obviously topical, as are the bluster and threats of Vladimir Putin in Eastern Europe; nuclear concerns have sort of emerged again. And maybe there is some validity in comparing the fear of what AI could do to the fear of what nuclear weapons could do, from the 50s onwards.
We’re now more than 70 years past the Trinity test, the first atomic explosion, and yet only nine countries have nuclear weapons, and there has not been any kind of nuclear war or mass nuclear holocaust. Some of how the world, society, and its structures reacted to the threat of nuclear weapons did work.
Arguably, it worked very well. And what people concerned about AI would say is that nuclear weapons do not create more nuclear weapons, whereas AI produces more AI: it learns on its own output and proliferates itself. So maybe there’s some logic there. If you took the response to the threat of nuclear war and applied it to AI, then the things to do, to manage AI so it benefits society rather than destroys it, are collaboration, more coordination, and more proactive control.
Now, control goes to regulation, and certainly Mr Andreessen, and those who feel regulation and control are not a good way to respond to AI, would say it should remain, for lack of a better word, open: open source, public, and to the benefit of everyone, with government largely staying out of it.
They would argue that control starts off with good intentions but ends up in very draconian places. That’s philosophical, a big discussion beyond the realm of this talk, but I want to acknowledge it, because to just say AI is bad, or AI only produces negative outcomes or puts us at risk, is wrong without acknowledging that there is another point of view.
That other point of view, championed at the moment by several people, certainly Marc Andreessen and maybe Mark Zuckerberg from Facebook, would be that AI must be left alone: it doesn’t need regulation, it doesn’t need control, the genie is out of the bottle, trust human nature, and so forth.
I haven’t read his paper, though I’ve listened to a lot of his talks, but I’ve put it up here if anyone wants to check it out. He’s written it on his firm’s website, Andreessen Horowitz, arguably one of the best venture capital firms in the world, and he’s called it Why AI Will Save the World.
I’m certainly going to read it; I’ve heard a lot of his thinking, and I’d encourage you to get a point of view there too. But let’s soften the debate on control per se and just talk about collaboration and coordination, because I think that’s a healthy response. My view, or my consideration, is that you do need businesses and governments involved.
You can’t just let it proliferate with no thought. And certainly in the face of these rising scams and frauds and the things which impact us as financial professionals, that is the case. I’m going to shift now to what’s being put in place, not to stop AI per se, although I do believe we will start seeing rapid legislative concern and more coordinated responses to AI itself, but rather: what are the government and then the banks doing around scams and frauds, which are going to ramp up because of AI? And again, that’s because this is what’s in the news.
And that’s where we get asked a lot of questions. And again, this is just a taste, sort of water-out-of-a-fire-hose stuff. All these things need to be delved into, but I’m just going to give you a kind of primer. I’m also aware of my time, and I’m probably going to speak till about 12:45. I’ll go a bit longer than I said, if that’s okay with everyone.
Government: there’s lots of news about the government response, and these are kind of the headlines, or the summary. The first thing is that, in the past few months, the government has elevated cyber security into the cabinet and appointed our first Minister for Cyber Security. It goes along with the Home Affairs portfolio.
It’s been given to Clare O’Neil, and Clare’s got a vision. She’s also, I think, confronting some of the, let’s say, brutal facts, which is that she feels we are quite vulnerable, and I think that would arguably be correct, given that we clearly over-index to some degree on frauds and scamming.
There’s lots of data to say we’re among the top three most targeted countries for scams like business email compromise. I’ve read somewhere, and this is unvalidated, that last year we experienced 20 times the global average number of data breaches. There were 853 data breaches reported to the Office of the Australian Information Commissioner last year. 853.
And those are just the reported ones. It is an issue, and we need to do work. Arguably, some of this could be politically motivated content, but she says we’re way behind, we’re vulnerable. But she has a vision, and the vision is that we’ll be a leading cyber-secure country by 2030. To do that, she’s developing a strategy. I think at the moment the strategy is being formulated, and you can contribute to it: you can apply and get invited to provide a contribution. There is obviously a strategy in place at the moment, but it’s old and it’s waning.
The first thing that Minister O’Neil did was appoint Darren Goldie, a military officer, as cyber security coordinator. Now, why do we need a coordinator? Because what’s emerged in these breaches, whether it’s Optus, Medibank, or anything that happened before then, is that coordination is a massive part of it.
You’ve got the media, you’ve got the company itself, you’ve got the government, and you’ve got the stakeholders of that company. Who coordinates a response to a massive cyber security failure? Someone needs to do that, much the same way that we have coordinated responses for floods or fires, and that helps. He will also be in charge of industries and their responses, and there is now codified risk mitigation in certain key infrastructure industries, of which there are eleven.
Those are them. And then more obligations have been created in law, in acts, for these industries to report quicker and improve their cyber security postures. So that’s happening. The next thing, and I can’t point to specific evidence here, is that there seems to be a shift towards the government being more offensive in its cyber security strategy.
So, as I understand it, the general government posture to date has been defensive: we will put measures in place so no one can attack us, whether it’s state actors or private actors attacking large corporates. The government’s posture has historically been defensive.
They’re now debating whether, in addition to being defensive, we should be allowed to attack back if we can work out who the aggressor, the bad actor, is. There’s a shift in the general guidance around that. Then they’ve strengthened consumer data protection.
And the details of this are probably emerging, or still to emerge, but if you carry consumer data, there are going to be more obligations on your business. That’s certainly in response to these scams and frauds. The government has also put its money where its mouth is: they’ve increased funding in the last budget.
The ACCC was given another 58 million dollars to form a National Anti-Scam Centre, and 44 million of that 58 goes into actual technology, really to create a better system and platform for gathering intelligence and data on scams and then communicating that out to everyone, because you can’t stop what you’re not aware of. And that can certainly affect your business positively.
If the ACCC has more resources and a better platform to keep everyone informed on what to look out for, that’s got very pragmatic benefit to all of us. Okay. And then there are three more things the government’s doing, which I’m going to talk about separately because I want to go into a little more detail.
They’re aiming to ban ransomware payments. This is hugely contentious. I’ll go to that in a moment. They’re promoting e-invoicing through the ATO. I’m going to talk about that. And there’s increased pressure on the banks to be more active. Okay. And that’ll take me into a discussion about what the banks are doing.
Let’s look at those ransomware payments. The question with ransomware is always this: to pay or not to pay? And the government’s point of view, which hasn’t been legislated and is just encouragement or opinion at this stage, though it’s been debated and might find its way into law, is that if we ban companies from paying ransoms, then cyber criminals will get the signal that if you target an Australian business, they’re not allowed to pay you.
So, you’re wasting your time. In her words, we’re challenging the business model of cybercriminals. The argument for banning ransomware payments is that if you pay, you don’t necessarily get the data back anyway, so you’re just encouraging the crime. There is a counterpoint that goes: hang on, in many cases when you pay the ransom, you do get the data back, because the cybercriminals are commercial operators who need to protect their own business model too. If they don’t give the data back, the next company will never pay; they break their own business model by not returning it. I would say, I think it was the CFO of Medibank who, in the end, did not pay the ransom.
To get their data back. I think a leader of that company, and I don’t want to misquote, did say, and I agree with this, that it’s got to be case by case. So surely, if the government does choose to ban, or keep encouraging the banning of, ransomware payments, they’ve got to make provision for edge cases.
What if thousands of jobs are at risk? What if lives are at risk? What if physical assets are at risk? I don’t think you can have a blanket rule, but it’s certainly to be debated. Then, e-invoicing. Many of you might have come across this or heard of it. What is e-invoicing? I’m going to unpack all these sorts of protocols and solutions, because I think there’s confusion.
E-invoicing is a digitisation of the billing process. No longer will you attach a PDF invoice and send it to a customer. Instead, you put your information into your ERP, and then, through unified plumbing in the back end, network plumbing and interoperability between documents and ERP systems, that information pops up in your customer’s ERP. They press a button and pay you, or initiate a payment, but at no point are you sending a physical or PDF document to them for invoicing.
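To make the contrast with PDF invoicing concrete, here is a very loose sketch in Python: an e-invoice is structured data handed between systems, not a document. The field names and payload shape here are illustrative only; real Peppol invoices are UBL XML with many mandatory fields.

```python
import json

# Heavily simplified, hypothetical e-invoice payload. The point is that
# structured data, not an emailed PDF, travels between ERP systems.
def build_einvoice(supplier_abn, customer_abn, invoice_id, lines):
    """Assemble invoice data for machine-to-machine exchange."""
    total = round(sum(qty * price for _, qty, price in lines), 2)
    return {
        "invoice_id": invoice_id,
        "supplier_abn": supplier_abn,  # identity comes via the network,
        "customer_abn": customer_abn,  # not from an emailed attachment
        "lines": [
            {"description": d, "quantity": q, "unit_price": p}
            for d, q, p in lines
        ],
        "total": total,
    }

invoice = build_einvoice("12345678901", "10987654321", "INV-1001",
                         [("Paper A4", 10, 5.50), ("Toner", 2, 89.00)])
payload = json.dumps(invoice)  # this, not a PDF, is what gets transmitted
```

The ABNs are made up for illustration; in a real network the supplier’s identity is established by the framework, which is exactly why the remaining fraud risk sits at data entry, not document delivery.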
The government is pushing this mainly through the ATO, first to councils; they’ve been pushing at the local government level for some time, and they’re starting to push the state governments. We’re following a European standard called Peppol, which stands for Pan-European Public Procurement On-Line.
That framework governs standardisation, the legal agreements backing it, and how the systems talk to each other in the back end. It certainly can help with fraud and scams, because you remove the PDF attachments that so many of us send as invoices. But it also creates risk, because while that fraud vector is removed, the information still must be put into an ERP. The invoicing information still has to be entered somewhere. And what we’ve seen is that fraudsters operate at the vendor management level, not really at the payment level. So, where are you getting the information to put into the system to send to your customer?
Well, a fraudster can still corrupt that information before it goes into your system. But now, under e-invoicing, we think it’s secure, so we reduce our checking measures. That’s something to be mindful of; that’s a limitation. Then, just before I go on to the banks: as recently as this week, there’s been a public memo out of the US, from the Director of the Office of Science and Technology Policy at the White House, to various agencies as they form their 2025 budgets. In that memo, on page two, is an article, and I think it’s one of the first times AI has shown its face in federal government budgeting. It calls out that when agencies submit their budgets to government, they must consider AI, both to improve the services of government and to be mindful of some of the risks of that AI, and that it needs to be done in a trustworthy manner. We don’t have to go into it, but it’s interesting to note that this is now surfacing in American government policy, and I think other governments around the world will copy it. The government here is also putting pressure on the banks, which brings me to what the banks are doing.
The banks. Arguably, the banking system leapt very dramatically from a pre-internet era, where the software, systems and back ends were created in the 60s and 70s, to a digital era. We lost chequebooks, we lost cheque payments, we lost tellers, but the back-end systems were designed to support that older world.
For a long time now, and it’s certainly one of the genesis aspects of our business at Eftsure, the banks have not matched BSB and account numbers to names, to business history and registration. And because they don’t make that match, all fraudsters have had to do is convince you that their bank account belongs to one of your vendors. That’s probably the most popular scam running today. The government has put increasing pressure on the banks to fix this; the banks don’t carry liability yet for incorrect payments, but there’s pressure on them to do better checking. And I will credit the banks: for some time they’ve been improving both processes and technology to check. But they’re not doing it adequately. You will have heard all these terms around what the banks are doing, and I’m going to go into some of those. And yes, I’m mindful of time, but I’m just going to keep going. We’ll just compact Q&A time, and we can always answer your questions separately.
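The checking gap described here, that a BSB and account number are never matched against a name, can be illustrated with a small sketch. The registry, the function and the score handling are all hypothetical; banks hold this data but historically have not checked it at payment time, which is precisely the gap fraudsters exploit.

```python
from difflib import SequenceMatcher

# Hypothetical registry mapping (BSB, account) -> registered account name.
# No real bank exposes such a lookup; this shows what the missing check
# would do if it existed.
REGISTRY = {("062-000", "12345678"): "ACME SUPPLIES PTY LTD"}

def name_match_score(bsb, account, payee_name):
    """Similarity (0.0 to 1.0) between the payee name you typed and the
    name actually registered against that account."""
    registered = REGISTRY.get((bsb, account))
    if registered is None:
        return 0.0  # unknown account: treat as no match at all
    return SequenceMatcher(None, payee_name.upper(), registered).ratio()

score = name_match_score("062-000", "12345678", "Acme Supplies Pty Ltd")
# A low score would hold the payment for manual review before release.
```

Without anything like this in the rails, a payment to the right number under the wrong name sails straight through.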
Hopefully, you’re finding this valuable. So, you’ll have heard of this thing, the New Payments Platform, and the reason I bring it up is that it so often gets confused with a security measure. The New Payments Platform is not a security measure per se. What it is, is a modernisation of the payment rails, the payment infrastructure and transaction clearing system, designed predominantly around speed and ease, not security.
So, when people say, or a speaker says, oh, the NPP will solve this: well, the NPP makes payments faster. The first manifestations of that are things like Osko, with instant payments and ease, and PayID, which I’ll talk about in a moment. But they’re not security measures. They’re really ease-and-convenience measures that allow you to add some information to payments and move them quicker.
PayID can help in the consumer space with fraud, but fundamentally the NPP is not going to make everything safer. Okay, so be mindful of that. Then you get a similarly misunderstood thing in open banking. People often say, oh, open banking solves fraud, or, with all this open banking evolution, this is what stops AI-powered scams or will help us. That’s also incorrect. So, I want to talk a bit about open banking, and again, these are just tasters; each of these things is a subject on its own. Open banking is not technology. Open banking is a legal framework, one that allows you to authorise the banks to share your information with a third party, which will often be a financial technology provider or innovator.
They can use that information to create something. So, what’s good about that? Well, it certainly can stimulate innovation: there’ll be more tools, more integrations and more ways to make your business life easier.
It can possibly be used by companies like Eftsure to verify vendors more easily, though I’ll talk about the big limitation of that. It creates easier switching between banking platforms for people, and yes, it helps innovation generally. But again, open banking is not, in and of itself, a security measure.
Let’s say a company like ours wanted to use open banking to verify vendors in a very easy way. How we might do that is: you grant us permission to verify your vendor, they log into their bank account and pass the information to us, and we verify that detail. The problem is that the permission comes with a timestamp. So if you grant us, and I’m just using us as an example, permission to get that information from your vendor, you have to re-grant that permission in a few months. It’s not forever. The second problem is this: let’s say an accounts receivable person at a vendor that we want to verify needs to log into their bank account for us to validate it.
Well, do they even have access to the bank account? So, there are all these nuances to open banking. Companies that have used open banking to build a tool often complain that it still relies on all sorts of paper-based permission processes. I mentioned earlier that under the NPP there’s a thing called PayID that can help with security, and it can.
PayID is basically the nickname addressing system of the NPP, and it’s the only part of the NPP that works towards security. You might be familiar with this: you can register an ABN or a mobile number as a PayID, and then when I want to pay you, I pay to that mobile number or ABN.
How it helps is that it adds another degree of checking at the point of payment, but it’s fundamentally only workable at the moment for consumers and individuals. There’s talk of it being extended to batch processing, but there’s a huge amount of work that needs to be done over several years to make ERPs and batch processing compliant with it, and so on.
Okay, so we’ve got a long way to go. But here’s the thing, and we often get asked this: even if all those integrations and compliance happen, and your ERP and processes can account for ABNs or a PayID like a mobile number, fraudsters can still easily get around it, right?
How do they do that? Very simply. A fraudster registers a business with a name like Officeworks, but instead of spelling it with an S, they spell it with an X. They register that business, get an ABN, then do all the same things they currently do to get into your email or your supplier’s email, and they inform you that, hey, we’ve changed our bank account to this ABN.
You go to pay to that ABN, and what comes up in banking is, yes, it matches something called Officeworks, but with an X. What’s the best way you could check that? Well, you’d have to go into ASIC and look up that ABN, and then you find lots of businesses that have similar names.
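The Officeworks-with-an-X trick is a lookalike-name attack, and one plausible defence is to flag payee names that sit within a couple of edits of a trusted name rather than accepting any exact-string match. A minimal sketch, with an illustrative threshold:

```python
def edit_distance(a, b):
    """Levenshtein distance via the classic dynamic-programming table."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # delete from a
                           cur[j - 1] + 1,            # insert into a
                           prev[j - 1] + (ca != cb))) # substitute
        prev = cur
    return prev[-1]

def looks_like(candidate, trusted, max_edits=2):
    """Flag a payee name that is suspiciously close to, but not the same
    as, a trusted name (e.g. one letter swapped or appended)."""
    c, t = candidate.upper(), trusted.upper()
    return c != t and edit_distance(c, t) <= max_edits

flag = looks_like("Officeworx", "Officeworks")  # near-miss spelling
```

The threshold of two edits is a made-up number; a real system would tune it and combine it with ABN and registration checks, since plenty of legitimate businesses have genuinely similar names.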
So PayID can help, but there’s lots of work to be done. And then lastly, I’m going to call out NameCheck, a tool built by CBA that uses CBA data to match names to account numbers and reduce fraud. It’s certainly a step in the right direction, but it’s only available to CBA customers; they talk about an API, but I think that’s years away. It only uses CBA data, and it’s very loose in its language, saying things like the details “don’t seem to match their account” and so forth. So it does add a helpful check, but again, mainly for consumers, and even if it moves to batch payments, there are lots of challenges for business. So, I’m getting to the end of my talk.
I’ve overused my time, I’m sorry, but I’m going to go just a little bit longer and talk about what this means for your business. Big picture, I stand by the need for collaboration and coordination, with control to be debated, and we’ve been talking about collaboration and coordination for a while.
I started the talk by mentioning all the things we’ve been talking about, and one thing we’ve certainly been talking about at Eftsure is closer collaboration between the CTO and the CFO. The IT department and the finance department need to get much closer together. I’m not going to go into it now, but we talk about how that’s the way you formulate a cyber-crime strategy, as opposed to a cyber-security strategy, and there are all these tenets of doing that, but at the heart of it is collaboration.
Then, I think you need to stay abreast of what the government and the banks are doing and see how that works for all of you, because there needs to be a more national spirit of coordination, national collaboration. Okay, before I talk about Eftsure, I’m going to go back to that example I gave about voice impersonation, where a couple were phoned by someone impersonating their grandson’s voice, and the couple started running around Chicago withdrawing their limit from ATMs.
And when they ran out of what the ATM would give them, they went to their bank manager and asked if they could withdraw more money from the branch. And here’s what happened: the bank manager stopped them. Why? Because four or five other elderly couples had come into the same bank.
And asked to withdraw money because their grandsons were in jail. At that point, the bank manager realised there was a scam at play. How did the bank manager realise that? Well, there’s a communal response. There’s safety in numbers. You can’t fool all the people all the time, and when he saw that several different people were having this problem, he knew it was a scam.
And that’s an example of collaboration. I guess my last slide will be on how Eftsure fits into this picture, and how Eftsure fits in is exactly that. What we’ve done is create what I’d call a collaborative community of businesses who use our platform to protect themselves from scams and frauds.
There are almost 2,000 members of that community of customers, and what we’ve done for those customers, and trust me, I will not go into a hard sell on Eftsure, is verify all the vendors of all those businesses and then create a platform for the benefits of that verification to be shared.
If multiple customers of ours are paying a business using the same set of vendor details, we know those details are right. Similarly, if we pick up a scam or fraud attempt against any of our customers, we inform our whole community of members that that is a blacklisted account. I’m being very cursory here, but that’s the point.
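As a very loose sketch of that community mechanism (the class, the threshold of three corroborating customers, and the method names are all illustrative, not Eftsure’s actual system):

```python
from collections import defaultdict

class VendorCommunity:
    """Toy model of community-corroborated vendor verification."""

    def __init__(self):
        # (vendor name, BSB, account) -> set of customers who paid it
        self.sightings = defaultdict(set)
        self.blacklist = set()  # (BSB, account) pairs reported as fraud

    def record_payment(self, customer_id, vendor_name, bsb, account):
        self.sightings[(vendor_name, bsb, account)].add(customer_id)

    def report_fraud(self, bsb, account):
        # One member's bad experience protects every other member.
        self.blacklist.add((bsb, account))

    def check(self, vendor_name, bsb, account):
        if (bsb, account) in self.blacklist:
            return "blocked"
        corroborations = len(self.sightings[(vendor_name, bsb, account)])
        return "verified" if corroborations >= 3 else "unverified"

community = VendorCommunity()
for cust in ("c1", "c2", "c3"):
    community.record_payment(cust, "ACME PTY LTD", "062-000", "12345678")
status = community.check("ACME PTY LTD", "062-000", "12345678")
```

The design choice is the same as the bank manager story: no single member can spot the pattern, but the pool of independent observations can.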
We’re a platform of collaboration and coordination, which aren’t terms often used around things like cyber security and fraud mitigation, but that’s what we do. Thank you for listening today. I did go much longer than I thought, and most of you have stuck around, so I appreciate you listening. I hope you all got value. I’m going to hand over to Luke, who will fire off any Q&A, and hopefully I can answer those.