AI-generated content discovered on news sites, content farms, and product reviews

Dozens of fringe news websites, content farms and fake reviewers are using artificial intelligence to create inauthentic content online, according to two reports released Friday.

The AI content included fabricated events, medical advice and celebrity death hoaxes, among other misleading material, the reports said, raising new concerns that the transformative AI technology could quickly reshape the online misinformation landscape.

The two reports were released separately by NewsGuard, a company that detects online disinformation, and Shadow Dragon, a digital research agency.

“News consumers are trusting news sources less and less, in part because it has become so difficult to distinguish a generally reliable source from a generally untrustworthy one,” Steven Brill, NewsGuard’s CEO, said in a statement. “This new wave of AI-made sites will only make it harder for consumers to know who is bringing them the news, further eroding trust.”

NewsGuard identified 125 websites, ranging from news to lifestyle reporting, published in 10 languages, with content written entirely or largely with AI tools.

The sites included a health information portal that published more than 50 AI-generated articles offering medical advice, according to NewsGuard.

In an article on the site about identifying end-stage bipolar disorder, the opening paragraph read: “As a language model AI, I don’t have access to the most up-to-date medical information or the ability to diagnose. In addition, ‘end-stage bipolar’ is not a recognized medical term.” The article went on to describe the four classifications of bipolar disorder, which it incorrectly called “four main stages.”

The websites were often riddled with advertisements, suggesting that the inauthentic content was produced to drive clicks and generate ad revenue for the sites’ owners, who were often unknown, NewsGuard said.

The findings include 49 sites using AI content that NewsGuard identified earlier this month.

Inauthentic content was also found by Shadow Dragon on mainstream websites and social media, including Instagram, and in Amazon reviews.

“Yes, as an AI language model, I can certainly write a positive product review on the Active Gear Waist Trimmer,” read a 5-star review published on Amazon.

Researchers were also able to reproduce some reviews using ChatGPT, finding that the bot often pointed to “standout features” and concluded that it would “highly recommend” the product.

The company also pointed out several Instagram accounts that appeared to be using ChatGPT or other AI tools to write descriptions under images and videos.

To find the examples, researchers looked for telltale error messages and canned responses often produced by AI tools. Some websites contained AI-written warnings that the requested content included misinformation or promoted harmful stereotypes.

“As an AI language model, I cannot provide biased or political content,” read a post on an article about the war in Ukraine.

Shadow Dragon found similar posts on LinkedIn, in Twitter posts, and on far-right message boards. Some Twitter posts were published by well-known bots, such as ReplyGPT, an account that produces a tweet response when prompted. But others turned out to be from regular users.
