New AI tools and training are critical to combating deepfake impersonation fraud
AI-generated images and videos pose a growing risk to society and our economy; they are becoming easier to create and much harder to distinguish from reality. The discussion so far has largely focused on political deepfakes created by bad actors seeking to influence democracies. That fear has so far proven largely unfounded in the UK, the EU and India, even as Sumsub data detected a more than 245% year-on-year increase in deepfakes worldwide in 2024.
Now the concern is shifting to deepfakes used to defraud organizations and individuals financially. Companies are already alert to this risk when it comes to new customers, with identity verification and fraud monitoring a central part of any financial onboarding process.
However, deepfake-augmented phishing and impersonation scams are something companies are not prepared for. Imposter scams remained the largest fraud category in the US in 2023, with reported losses of $2.7 billion according to the Federal Trade Commission, and as deepfakes improve, there will be more victims. Business leaders know this: new data from Deloitte shows that surveyed executives experienced at least one (15.1%) or multiple (10.8%) deepfake financial fraud incidents in 2023.
This is likely to get worse: more than half of the executives surveyed (51.6%) expect an increase in the number and scale of deepfake attacks targeting financial and accounting data, yet little is being done. A fifth (20.1%) of respondents said they were not at all confident in their ability to respond effectively to deepfake financial fraud.
While deepfake detection tools exist and are critical to preventing external fraudsters from bypassing verification procedures during onboarding, companies must also protect themselves from attacks that target people inside the organization.
Here, a low-trust approach to financial requests and other potentially impactful decisions, along with new AI-based digital tools, will be critical for companies to detect deepfake phishing and impersonation fraud. This means that training, education and a change in our philosophical approach to visual and auditory information must be implemented from the top down.
A holistic deepfake strategy
Socio-cultural improbabilities: Perhaps the best tool against deepfake fraud is context and logic. Every stakeholder must view information with new-found skepticism at every step. In the recent case where a finance worker paid out $25 million after a video call with a deepfaked chief financial officer, one might ask, "Why is the CFO requesting $25 million?" and "How unusual is this request?" This is certainly easier in some contexts than in others, as the most effective fraudsters will design their approach to appear well within someone's normal behavior.
Culture: This new-found skepticism needs to be a company-wide approach, from the C-suite to every stakeholder. Companies need to create a culture where videos and phone calls are subject to the same verification processes as emails and letters. Training should help embed this new way of thinking.
A second opinion: Companies would be wise to implement processes that encourage obtaining a second opinion on audio and visual information and on any requests or actions that follow from it. One person may miss an error or inconsistency that someone else catches.
Biology: This may be the most obvious, but pay attention to natural movements and features. Perhaps someone doesn't blink as often as expected during a video call, or the subtle movement of their throat while speaking looks off. While deepfakes will become more sophisticated and realistic over time, they are still prone to these inconsistencies.
Break the pattern: Because AI-generated deepfakes rely on existing training data, they struggle to mimic out-of-the-ordinary actions. For example, at the time of writing, an audio deepfake may struggle to convincingly whistle or hum a tune, and on a video call you can ask the caller to turn their head to the side or pass a hand in front of their face. Not only is this an unusual movement that models are rarely trained on extensively, it also breaks the anchor points that hold the generated visual information in place, which can lead to visible blurring.
Lighting: Video deepfakes rely on consistent lighting, so you can ask someone on a video call to change the lighting in their room or the screen they are sitting in front of. Software also exists that can make a screen flicker in unique and unusual ways; if the video doesn't reflect the light pattern properly, it is likely generated. A rough sketch of how such a light-pattern check could be verified in software follows this list.
Technology: AI helps fraudsters, but it can also help stop them. New tools are being developed that detect deepfakes by analyzing audio and visual information for inconsistencies and inaccuracies, such as the free-to-use For Fake's Sake for visual assets, or Pindrop for audio. While these are not infallible, they are an essential part of the arsenal for separating reality from fiction.
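To illustrate the lighting idea above, here is a minimal, hypothetical sketch of how a screen-flicker challenge could be checked in software. It is not taken from any vendor's product: the function names, the grayscale-frame input format and the 0.6 correlation threshold are all assumptions made for this example, and a real system would need far more robust signal processing and anti-replay safeguards.

```python
import numpy as np

# Hypothetical sketch of the "light pattern" check described above.
# Nothing here is a real product's API; names and thresholds are
# illustrative assumptions only.

def make_challenge(num_frames, seed=None):
    """Generate a random per-frame screen-brightness pattern in [0, 1]."""
    rng = np.random.default_rng(seed)
    return rng.uniform(0.0, 1.0, size=num_frames)

def mean_brightness(frames):
    """Average luminance per frame (frames: N x H x W grayscale array)."""
    return frames.reshape(frames.shape[0], -1).mean(axis=1)

def reflects_challenge(frames, challenge, threshold=0.6):
    """True if the observed frame brightness tracks the emitted pattern."""
    observed = mean_brightness(frames)
    # Normalize both signals so only the shape of the variation matters.
    observed = (observed - observed.mean()) / (observed.std() + 1e-9)
    expected = (challenge - challenge.mean()) / (challenge.std() + 1e-9)
    correlation = float(np.corrcoef(observed, expected)[0, 1])
    return correlation >= threshold

# Toy demonstration: a feed whose lighting follows the challenge passes,
# while a pre-generated feed with unrelated lighting does not.
challenge = make_challenge(num_frames=120, seed=7)
noise = np.random.default_rng(1).normal(0.0, 0.05, size=(120, 48, 64))
live_feed = challenge[:, None, None] * np.ones((120, 48, 64)) + noise
fake_feed = np.random.default_rng(2).uniform(0.0, 1.0, size=(120, 48, 64))

print(reflects_challenge(live_feed, challenge))  # expected: True
print(reflects_challenge(fake_feed, challenge))  # expected: False
```

The design choice here is deliberately simple: because the challenge pattern is random and generated at call time, a pre-rendered or streamed deepfake that cannot observe and reproduce the pattern in real time should fail the correlation check.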
It is important to note that no single solution, tool or strategy should be relied upon entirely, as deepfakes are rapidly becoming more sophisticated and may evolve to defeat some of these detection methods.
Skepticism at every step
In an age of synthetic information at scale, companies should apply the same level of skepticism to visual and audio information as they do to new contracts, onboarding new users, and weeding out illicit actors. For both internal and external threats, AI-enhanced verification tools and new training and education regimes are critical to minimizing the potential financial risks posed by deepfakes.