ChatGPT and other AI tools can put users at risk by getting company web addresses wrong
- AI is not good at generating URLs – many do not exist, and some can be phishing sites
- Attackers now optimize sites for LLMs instead of for Google
- Developers have even unintentionally used dodgy URLs in their code
New research has shown that AI often provides incorrect URLs, putting users at risk of attacks, including phishing attempts and malware.
A report from Netcraft claims that one in three (34%) login URLs provided by LLMs, including GPT-4.1, were not owned by the brands users asked about: 29% pointed to unregistered, inactive, or parked domains and 5% to unrelated but legitimate domains, meaning only 66% pointed to the correct brand.
Alarmingly, simple prompts such as "Tell me the login website for [brand]" led to unsafe results, meaning no adversarial prompting was needed.
Be careful with the links that AI generates for you
Netcraft notes that this shortcoming can ultimately lead to widespread phishing risk, in which users are easily steered to phishing sites simply by asking a chatbot a legitimate question.
Attackers aware of the vulnerability can go ahead and register the unclaimed domains suggested by AI and use them for attacks; in one real-world case, Perplexity AI was already seen recommending a fake Wells Fargo site.
According to the report, smaller brands are more vulnerable because they are underrepresented in LLM training data, which increases the chance of hallucinated URLs.
Attackers have also been observed optimizing their sites for LLMs instead of for traditional SEO aimed at search engines like Google. An estimated 17,000 GitBook phishing pages targeting crypto users have already been created this way, with attackers mimicking technical support pages, documentation, and login pages.
Even more disturbing, Netcraft found developers using AI-generated URLs in code: "We have found at least five victims who have copied this malicious code into their own public projects, some of which showed signs of being built with AI coding tools, including Cursor," the team wrote.
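One pragmatic defense against accidentally shipping a hallucinated or attacker-registered URL is to scan source code for hardcoded URLs and flag any host that is not on a project allowlist. The sketch below is a minimal illustration, not anything Netcraft describes; the `TRUSTED_DOMAINS` set and the sample code string are hypothetical.

```python
import re

# Hypothetical allowlist of domains this project is known to rely on.
TRUSTED_DOMAINS = {"api.github.com", "pypi.org"}

# Capture the hostname part of any http(s) URL in a source string.
URL_RE = re.compile(r"https?://([A-Za-z0-9.-]+)")

def flag_untrusted_urls(source: str) -> list[str]:
    """Return hostnames found in source code that are not on the allowlist."""
    return [host for host in URL_RE.findall(source)
            if host.lower() not in TRUSTED_DOMAINS]

# Example: one trusted URL, one unknown (possibly AI-hallucinated) URL.
code = (
    'resp = requests.get("https://api.github.com/repos")\n'
    'key = fetch("https://evil-example.test/v1")'
)
print(flag_untrusted_urls(code))  # ['evil-example.test']
```

A check like this could run in CI so an unfamiliar domain pasted from a chatbot answer is surfaced for review before it reaches a public repository.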
As such, users are encouraged to verify all AI-generated web addresses before clicking on the links. It is the same advice we have given for every kind of attack in which cybercriminals use a variety of attack vectors, including fake ads, to get people to click on malicious links.
One of the most effective ways to verify the authenticity of a site is to type the URL directly into the address bar, instead of trusting links that may be dangerous.