Pedophiles use AI to create and sell lifelike images of child sexual abuse, a shocking report reveals

  • A shocking study suggests that AI is being used to create images of child abuse
  • These images are then traded and sold on the Japanese site Pixiv
  • Under UK law, AI-generated images of child abuse are treated in the same way as real images

Pedophiles use artificial intelligence (AI) to create lifelike images of child sexual abuse, it is alleged.

A shocking study has claimed that images of raped babies and toddlers are among many images created by abusers using Stable Diffusion.

While the image generation software is intended for creating illustrations, its text prompts allow users to produce almost any image they describe.

The BBC claims these are then sold and traded on the Japanese photo-sharing site Pixiv, with accounts often directing users to more explicit content on Patreon.

Journalist Octavia Sheepshanks told the publication: ‘Since AI-generated imagery has become possible, there has been a huge flood… it’s not just very young girls, they’re [paedophiles] talking about toddlers.’

An investigation alleges that Stable Diffusion is being used to create child abuse images

The new report comes just a month after the AI platform Midjourney was discovered to be transforming real photos of children into sexualized images.

In some cases, perverts have gone even further, experimenting with “deepfake” technology to paste the faces of real youths and child actors onto naked bodies, authorities said.

Amid these revelations, the National Police Chiefs’ Council (NPCC) criticized platforms for not taking ‘moral responsibility’ for their content.

Under UK law, these AI-generated images are treated in the same way as genuine pornographic photographs of children that are illegal to possess or trade.

But sexualized cartoons of children are not illegal in Japan, which hosts the photo-sharing site Pixiv, meaning creators can easily promote this content.

Creators promote this material primarily through hashtags, using them to index posts under niche search keywords.

Comments on these posts sometimes direct users to other abusive content, some of it involving real children.

Still, a Pixiv spokesperson told the publication that the company had strengthened its monitoring systems in an effort to address the problem.

Images are alleged to be sold and traded on the Japanese site Pixiv, with accounts often leading to more explicit content on Patreon

Pixiv also banned photorealistic AI images of children on May 31, but it is unclear what impact this has had.

“The volume is just huge,” Ms. Sheepshanks continued.

‘So people [creators] will say “we aim to make at least 1,000 images per month”.’

Meanwhile, both Patreon and Stability AI told the publication that they do not tolerate child sexual abuse material.

Stability AI said: “We strongly support law enforcement efforts against those who misuse our products for illegal or nefarious purposes.”
