Glitching videos. Spelling errors. Unblinking eyes. Robot-like audio. Unnatural tone.
These telltale characteristics of a deepfake-based cyberattack may seem obvious to the well-informed individual, but if recent news events have taught us anything, it is that people can no longer be trusted to correctly recognize AI-generated content such as deepfakes.
However, many online security frameworks still depend on human intervention as a crucial defense mechanism against attacks. For example, employees are expected to spot phishing emails and scams after completing a company cybersecurity course. Remote identity verification is often based on a manual check of uploaded images, or a person-to-person video call to confirm a user's identity.
Today's reality is that people can no longer reliably detect generative AI content, and they should no longer serve as a central defense mechanism. A new approach is urgently needed.
Founder and CEO of iProov.
The threat landscape is changing at double speed
AI-driven fraud and cyberattacks have made numerous headlines recently. A notable example was the global engineering firm Arup, which fell victim to a £20 million deepfake scam after a finance employee was duped into sending funds to criminals following a series of hoax AI-generated video calls impersonating senior executives.
Commenting on the incident, Arup's global CIO, Rob Greig, said: "Like many other businesses around the world, our operations are subject to regular attacks, including invoice fraud, phishing scams, WhatsApp voice spoofing and deepfakes. What we have seen is that the number and sophistication of these attacks has been rising sharply in recent months."
Here Greig highlights the two biggest changes that AI is currently driving in the threat landscape: attacks are rising in both volume and sophistication. Generative AI tools that can create video, audio and text messages are now widely available, accelerating the speed and scale at which attacks can be launched. Moreover, the technology has become so advanced that people cannot reasonably be expected to detect AI-driven attacks.
Other organizations are starting to worry too; a recent iProov study of technology decision-makers showed that 70% believe AI-generated attacks will significantly affect their organizations, while almost two-thirds (62%) are concerned that their organization is not taking the threat seriously enough.
There are several ways in which AI is transforming traditional attacks.
Phishing gets a power-up
Despite widespread awareness of social engineering techniques, the method remains highly effective in cyberattacks. Verizon's 2023 Data Breach Investigations Report found that phishing was involved in 36% of breaches in 2022, making it the most common attack type.
It is common for organizations to train staff to spot phishing attacks by looking for typos, grammatical errors or awkward layout. But with AI able to quickly create personalized, polished phishing messages at scale, that training has become obsolete.
Tools such as WormGPT, a malicious cousin of ChatGPT, enable bad actors to quickly create convincing, personalized phishing messages, free of mistakes and in any language.
AI is also making spear-phishing (highly targeted social engineering attacks) even more impactful and scalable. Traditional social engineering attacks become far more convincing when combined with a deepfake phone call or voice note from a family member or colleague, as in the Arup incident.
Because creating convincing content with AI no longer requires advanced technical skills, the pool of potential attackers has expanded dramatically. The barrier to entry for these attacks is also far lower than before, with generative AI tools now readily available on crime-as-a-service marketplaces.
Onboarding becomes a prime target
Remote onboarding, the point at which a user first gains access to a system or service and verifies their identity, is a risky point in any organization's user journey, and is another area that AI attacks target. Allowing a criminal access to an organization's systems or accounts can cause significant, uncontrolled damage that can spiral quickly. Consider how easily a criminal can fraudulently borrow money, steal identities or weaponize company data once they have gained an account or access.
KnowBe4, an American cybersecurity company, recently shared details of an attack that confronted this risk all too directly. They unknowingly hired a North Korean hacker who used AI and a stolen ID to deceive hiring teams and the identity verification process. Once onboarded, the fraudster attempted to upload malware almost immediately, before being detected.
Verifying identities remotely is far more common in today's global, digital age. Whether hiring a new employee as in the KnowBe4 example, opening a bank account or accessing government services, people are much more used to verifying their identity remotely. However, traditional methods such as video calls with human operators are clearly no longer able to defend against deepfake attackers. As KnowBe4's CEO put it: "If it can happen to us, it can happen to almost anyone. Don't let it happen to you."
So how can organizations prevent it from happening to them?
Fighting fire with fire
No organization can ignore the emerging AI threats; the KnowBe4 and Arup examples should ring alarm bells for every company. They also underline how vulnerable humans are as a defense method. Employees cannot be expected to spot every cleverly disguised phishing email, nor can human operators flawlessly manage remote identity verification. Bad actors deliberately exploit human vulnerabilities.
Our recent deepfake detection study showed that only 0.1% of 2,000 participants could reliably identify realistic fake content, and despite this poor performance, more than 60% of people were confident in the accuracy of their detection skills. However confidently someone can scrutinize a photo or email during a cybersecurity training session, the chances of detecting one in real life, during a busy working day or while running errands, are considerably lower.
The good news is that AI is both sword and shield. Fortunately, technology leaders recognize the power of technology as the solution, with 75% turning to biometric verification systems as a primary defense against deepfakes, and the majority acknowledging the crucial role of AI in defending against these attacks.
Biometric verification systems are transforming remote online identity verification, allowing organizations to confirm that the user on the other side of the screen is not only the right person, but also a real person. Liveness assurance, as it is called, prevents attackers from using stolen or shared copies of victims' faces, or forged synthetic images.
Organizations must be aware that what truly distinguishes biometric systems is the quality of this liveness assurance, and not all liveness assurance systems are made equal. Although many solutions claim to offer robust security, organizations must dig deeper and ask critical questions.
Does the solution offer liveness detection that continuously adapts to evolving threats such as deepfakes through AI-driven learning? Does it use a challenge-response mechanism that makes every authentication unique? Does the provider operate a dedicated security operations center offering proactive threat intelligence and incident response, to keep pace with emerging threats and ensure the defense remains robust?
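To illustrate why a challenge-response mechanism makes every authentication unique, here is a minimal Python sketch of the general idea: the verifier issues a fresh random nonce per session, and the response must be cryptographically bound to that nonce, so a pre-recorded deepfake capture (or a replayed proof) fails. The class name, the HMAC binding and the shared device key are illustrative assumptions for this toy example, not iProov's actual protocol.

```python
import hashlib
import hmac
import secrets


class ChallengeResponseVerifier:
    """Toy challenge-response check: each authentication gets a one-time
    random challenge, so captured or replayed responses are useless."""

    def __init__(self, device_key: bytes):
        self.device_key = device_key   # illustrative shared secret
        self.pending = {}              # session_id -> issued nonce

    def issue_challenge(self, session_id: str) -> bytes:
        # Fresh 16-byte nonce per authentication attempt.
        nonce = secrets.token_bytes(16)
        self.pending[session_id] = nonce
        return nonce

    def verify(self, session_id: str, capture: bytes, proof: bytes) -> bool:
        # pop() consumes the challenge: a second attempt with the same
        # session (a replay) finds no pending nonce and is rejected.
        nonce = self.pending.pop(session_id, None)
        if nonce is None:
            return False
        expected = hmac.new(self.device_key, nonce + capture,
                            hashlib.sha256).digest()
        # Constant-time comparison to avoid timing side channels.
        return hmac.compare_digest(expected, proof)
```

A genuine client signs the fresh nonce together with the live capture and passes; a deepfake video recorded against yesterday's nonce, or a replay of an old proof, produces the wrong HMAC and is rejected. Real liveness systems bind the challenge into the capture itself (for example via a screen illumination sequence), which this sketch abstracts away.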
Implementing these more advanced solutions is crucial to staying ahead of ever-evolving attacks, reducing the burden on individuals and ultimately strengthening organizational security in the AI-driven threat landscape.
This article was produced as part of TechRadar Pro's Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-techradar-pro