According to Google, these are the most harmful scams out there, so be on the lookout
- Google has issued a scam advisory
- ‘Cloaking’ is used by threat actors
- AI helps scammers take advantage of popular events
Google has revealed a new report detailing the most common techniques threat actors use against victims, highlighting a practice known as ‘Cloaking’ as a way to trick users into revealing sensitive information.
The technique uses tools called ‘cloakers’ to show different content to different users based on identifying information such as IP addresses. Cloaking often involves showing one version of a landing page or website to search engines and bots, and another version to real human users.
“Cloaking is specifically designed to prevent moderation systems and teams from reviewing policy-violating content, allowing them to target the scam directly to users,” Laurie Richardson, Vice President, Trust & Safety at Google, wrote in the report.
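The mechanism Google describes can be sketched in a few lines: a cloaker inspects request metadata (here just the User-Agent header; real cloakers also use IP ranges and other fingerprints) and serves a benign page to crawlers and moderation systems while serving the real landing page to ordinary browsers. This is a minimal, hypothetical illustration of why reviewers and users can see entirely different content; the signature list and page names are made up for the example.

```python
# Hypothetical sketch of how a "cloaker" decides which page to serve.
# Real cloaking services use IP ranges, ASN lookups, and browser
# fingerprinting in addition to the User-Agent header shown here.

KNOWN_BOT_SIGNATURES = ("Googlebot", "bingbot", "AdsBot")

def select_page(user_agent: str) -> str:
    """Return the landing page a cloaker would serve for this visitor."""
    if any(sig in user_agent for sig in KNOWN_BOT_SIGNATURES):
        return "benign_page.html"   # shown to moderation systems and crawlers
    return "scam_page.html"         # shown to real human visitors

print(select_page("Mozilla/5.0 (compatible; Googlebot/2.1)"))  # benign_page.html
print(select_page("Mozilla/5.0 (Windows NT 10.0) Chrome/120"))  # scam_page.html
```

Because the decision happens server-side, nothing in the page a reviewer sees reveals that a different version exists, which is exactly what makes the technique effective against moderation.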
Scareware and malware
Cloaking has a number of legitimate uses, such as for advertisers who want to prevent their pages from being scraped by bots, or who want to hide their strategies from competitors. However, Google has observed that scammers are using cloaking tools to redirect users who click on an ad to scareware sites.
This then tricks users into believing that their device is infected with malware, or that their account has been locked due to unauthorized activity, leading them to a fake ‘customer support’ site, where they reveal sensitive information.
“The landing pages often mimic well-known sites and create a sense of urgency to manipulate users into purchasing counterfeit or unrealistic products,” Google says.
Other techniques outlined in the report include the exploitation of major events. Scammers take advantage of elections, sports matches, and humanitarian disasters, and this proven technique is now supported by AI tools, which let them respond quickly to breaking news and advertise fake products and services around it.
Elsewhere, Google also flagged fake charity scams, which aim to defraud people wanting to donate to relief efforts. Scammers set up appeals that appear legitimate, using AI tools to produce vast amounts of content to overwhelm users and trick them into clicking malicious links.
“Preventing harm to users from malicious scams requires effective collaboration within the online ecosystem,” Richardson concludes. “Bad actors are constantly evolving their tactics and techniques. We are sharpening our detection and enforcement techniques to counter these threats, strengthening our proactive capabilities and ensuring we have robust and fair policies in place to protect people.”