What is deepfake AI?

Bristol James

Deepfakes are a type of synthetic media, usually a video or image, that convincingly depicts someone doing something they did not do. These artificial videos and images are usually created with generative artificial intelligence (AI). The term “deepfake” combines “deep learning” and “fake”: the technology typically leans on existing content, making alterations that are hard to detect and weaving a convincing yet misleading story.

How does a deepfake work?

Deepfakes are created with deep learning, a branch of machine learning and AI, to mimic a person. Although changing and editing photos and videos has been possible for many years, the key difference is that deepfake technology can be applied to existing videos and images in a way that far surpasses simple image editing. For instance, fairly realistic likenesses of celebrities such as Tom Cruise and Keanu Reeves have garnered millions of views on TikTok despite being artificial.

“The technology behind deepfakes is known as a generative adversarial network, or GAN. GANs consist of two neural networks, a generator, and a discriminator, which compete against each other to produce increasingly realistic outputs.”*

To simplify, the technology analyses a large collection of images and videos of the target, then uses machine learning to replicate their movements and expressions in video. Once the visual likeness is captured, deepfake technology typically applies a similar process to voice samples, overlaying seemingly accurate audio. Some content is so realistic that digital forensics experts struggle to differentiate what’s real from what’s AI-generated.
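
To make the generator-versus-discriminator idea concrete, here is a minimal training-loop sketch in Python using PyTorch. The framework choice, toy network sizes and hyperparameters are illustrative assumptions; real deepfake pipelines are far larger and operate on video and audio rather than tiny images.

```python
# Minimal sketch of the GAN idea: a generator and a discriminator trained
# against each other. Sizes and hyperparameters are toy values for illustration.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28   # toy sizes, not real deepfake resolution

generator = nn.Sequential(            # turns random noise into a fake image
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)
discriminator = nn.Sequential(        # scores how "real" an image looks
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1. Train the discriminator to tell real images from generated ones.
    fake_images = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = loss_fn(discriminator(real_images), real_labels) + \
             loss_fn(discriminator(fake_images), fake_labels)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2. Train the generator to fool the discriminator into scoring fakes as real.
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, latent_dim))), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

As the two networks keep competing, the generator’s outputs become progressively harder for the discriminator (and for people) to distinguish from real footage.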

So how hard is it to create deepfake videos? Unfortunately, it’s become increasingly accessible. DeepFaceLab and Deepfakes Web are just two examples among hundreds of apps and software tools readily available for creating compelling deepfakes.

Examples of deepfakes in the media

Though they’ve technically been around since 2017, deepfakes have increasingly been making media headlines. You may have heard how a finance worker in Hong Kong recently lost $39 million to imposters using deepfake technology, or about the viral scandal in which deepfake pornographic images of Taylor Swift took the internet by storm after being shared on X (formerly Twitter) and Telegram. That incident prompted a more widespread shake-up over the use of deepfakes, with reports noting that pornographic content makes up 98% of deepfake videos online and that 99% of the victims are women.

Other well-known deepfake examples include the Morgan Freeman video created by Dutch deepfake YouTube channel Diep Nep. This video, which still garners plenty of discussion and attention, was an eye-opening turning point in how truly convincing deepfake technology can be when impersonating well-known celebrities and media personalities.

Speaking of impersonation, another unfortunate real-life example is the deepfake video of journalist Martin Lewis promoting a supposedly Elon Musk-backed trading platform called Quantum AI. The video was a scam that quickly went viral, and people lost thousands of dollars to the fake investment opportunity. One Australian man lost $80,000 in cryptocurrency after viewing the viral video promoting the platform.

Is deepfake technology used for good?

Although much of what we see in the media focuses on the dark side of deepfake technology, it’s not all doom and gloom. Some positive use cases for deepfake technology include education, where universities and schools are looking for ways to use the technology to create a more engaging and immersive learning experience.

Deepfake technology has also become a major topic of discussion when it comes to augmenting cinema, storytelling and video games. Creating an even more immersive experience, deepfake technology could see gamers’ avatars mimicking their facial movements and providing a voiceover to match the character they’re playing – for example, Luke Skywalker.

Not only is this expected to elevate the gaming experience, but another silver lining is the expectation that it may reduce harassment against the transgender community by allowing players to alter their voices to match their gender identity. Surprisingly, almost half of gamers reported experiencing harassment via voice chat while playing last year.

How deepfakes are used for criminal activity

  • False evidence: creating or manipulating evidence and submitting it to skew the outcome of an ongoing investigation.
  • Fraud: deepfakes can impersonate an individual in the hope of obtaining access to highly sensitive information, systems or processes, posing a cybersecurity threat to companies and individuals.
  • Political manipulation: deepfakes of political figures have been used to skew public opinion. A well-known example was the deepfake of Ukrainian President Volodymyr Zelenskyy falsely urging Ukrainian troops to lay down their arms against invading Russian forces.
  • Customer phone support: deepfake voice calls that mimic a financial institution’s support line to extract answers to personal identification questions.
  • Blackmail and cyberbullying: creating falsified videos or images of individuals or executives to cause reputational harm.
  • Stock price manipulation: deepfake videos have been used to affect publicly traded companies’ stock prices by creating fake news or statements on behalf of executives about company performance or product releases.

How to spot a deepfake

  • Look closely at the faces in videos, specifically the eyes and eyebrows
  • Unusual expressions or a lack of facial expression (including a lack of blinking or strange lip movements) – see the sketch after this list
  • Awkward or unnatural facial and body movements
  • Low-quality video
  • Inconsistent audio and voiceovers
  • Strange shadows or even watermarks on the screen
  • Asymmetries, such as mismatched jewellery or facial features that appear uneven or skewed
  • In live environments like video calls, if you feel unsure about someone’s presence, ask them to do something like turn their face to either side or wave both hands.
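
Some of these cues can even be checked programmatically. The Python sketch below illustrates only the blinking heuristic; `eye_openness` is a hypothetical stand-in for a real facial-landmark model, and the thresholds are arbitrary example values rather than a proven detection method.

```python
# Illustrative only: a naive blink-rate check across a video's frames.
# `eye_openness(frame)` is a hypothetical helper standing in for a real
# facial-landmark model; fps and thresholds are arbitrary example values.
from typing import Any, Callable, Sequence

def blink_rate_per_minute(
    frames: Sequence[Any],
    eye_openness: Callable[[Any], float],  # ~1.0 for open eyes, ~0.0 for closed
    fps: float = 30.0,
    closed_threshold: float = 0.2,
) -> float:
    """Count open-to-closed eye transitions and scale the count to per-minute."""
    blinks, eyes_closed = 0, False
    for frame in frames:
        if eye_openness(frame) < closed_threshold:
            if not eyes_closed:
                blinks += 1      # the eyes just closed: count one blink
            eyes_closed = True
        else:
            eyes_closed = False
    minutes = len(frames) / fps / 60.0
    return blinks / minutes if minutes else 0.0

# People typically blink around 15-20 times a minute; a face that hardly
# blinks at all is one of the warning signs listed above.
```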

[Figure: Stages of deepfake creation using Nicole Kidman – the face-swap method. Source: Lee, Tariq, Shin & Woo, 2021]

Legal implications of deepfakes

The falsified explicit images of Taylor Swift referenced earlier in this article have caused a stir about the need to regulate and criminalize the creation of deepfake images. Although we’ve already pointed out that deepfakes are not always used with malicious intent, there’s plenty to be said for the harm inflicted on victims when deepfake technology is used to misinform, scam or misrepresent others.

Regulators are looking closely at ways to implement legislation around the use of deepfakes. In the US, there are currently no federal laws prohibiting the creation or sharing of these images or videos. However, as of January 2024, the No Artificial Intelligence Fake Replicas And Unauthorized Duplications (No AI FRAUD) Act had been proposed. The act takes a stance against using a digital replica of a person, in both appearance and voice, without their permission.

At the state level, however, some states, including California and Texas, have implemented legislation against the use of deepfakes.

In Australia, there are no laws specifically prohibiting the use of deepfakes, but the country does have strict penalties against the non-consensual use and distribution of pornographic imagery, which can extend to imagery created with deepfake technology.

The rising implications of deepfakes on corporations

As mentioned above, deepfakes are becoming increasingly popular and accessible, especially in the corporate space. According to an Identity Fraud Report, deepfake attempts increased by 3,000% year-over-year.

The biggest targets for these crimes? C-level executives and, notably, finance professionals. Deepfakes are contributing to the growing cost of cyber scams, with criminals using deepfake technology to humiliate or harass business leaders, leaning on practices such as doxxing to cause personal and professional harm.*

What to do if you spot a deepfake

It’s important to question any suspicious activity and perform due diligence when it comes to information and security – especially if there’s any financial risk at play. If you’re concerned about something you’ve seen online or an interaction you’ve had over a video call or the phone, report it to your local authorities and, depending on your risk level, seek legal advice.

You can also report videos or images online to have platforms review and potentially remove them if they seem to be misleading or compromising.

Be proactive: how to protect yourself and your business from cyber criminals

There are several ways individuals and businesses can protect themselves, including:

  • Implementing multi-factor authentication: reinforce your accounts and devices by requiring at least two authentication factors to gain access (see the sketch after this list).
  • Updating passwords consistently: a simple, commonly overlooked way for people and teams to protect their logins. A brief look at the details scammers are selling on the dark web will quickly convince you to update your credentials now rather than later.
  • Regularly updating your devices: keeping security software up to date protects against the latest cyber attacks.
  • Implementing corporate controls: practices such as call-back controls and segregation of duties help catch potential phishing attacks against corporations. As companies turn to automated practices, technology solutions like Eftsure can remove the risks of manual processes and add a technical layer of security to your payment processes.
  • Questioning everything: when it comes to your financial (and emotional) well-being, don’t hesitate to question or fact-check anything and everything. Given how realistic – and common – deepfakes have become, it’s important to be confident all your interactions are genuine.
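
To make the first bullet a little more concrete, below is a minimal sketch of a time-based one-time password (TOTP) second factor in Python using the pyotp library. The library choice, account names and flow are illustrative assumptions, not a prescription for any particular product.

```python
# Minimal sketch of a second authentication factor using time-based
# one-time passwords (TOTP) via the pyotp library. The enrolment and
# verification flow shown here is illustrative only.
import pyotp

# Enrolment: generate a secret once and store it against the user's account.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Provisioning URI for an authenticator app:",
      totp.provisioning_uri(name="jane@example.com", issuer_name="ExampleCorp"))

# Login: after the password check, also require the current 6-digit code.
def second_factor_ok(user_supplied_code: str) -> bool:
    # valid_window=1 tolerates small clock drift between server and phone
    return totp.verify(user_supplied_code, valid_window=1)

print(second_factor_ok(totp.now()))  # True when the supplied code is current
```

Even if a deepfake convinces someone to hand over a password, the attacker still needs the second factor to get in.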

Summary:

  • A deepfake is synthetic video, imagery or audio generated with artificial intelligence (AI), generally used to mislead.
  • Deepfake software is becoming increasingly accessible, making it easier for cyber criminals to obtain and misuse.
  • Regulators are looking closely at ways to implement legislation around the use of deepfakes; however, in most instances legal repercussions are still scarce.
  • According to an Identity Fraud Report, there was a 3,000% increase year-over-year in deepfake attempts.
  • Be proactive and protect yourself and your business against cyber criminals.
