YouTube now lets you report AI content that uses your face or voice
YouTube is expanding its mechanism for protecting user privacy to cover artificial intelligence (AI) content. The video streaming giant on Thursday announced that users can now report AI-generated content that simulates their face or voice. Such videos can be reported through the Privacy Complaint Process, an online reporting system in which users fill out a form describing their concern, provide evidence of the privacy violation, and share the uploader’s details. Once a complaint is filed, it is manually reviewed against several criteria, and if it is found to be legitimate, the video is removed.
YouTube lets users report AI-generated content
In a community post, YouTube revealed that it is expanding its privacy request process to cover AI-generated content that replicates a person’s face or voice. In November 2023, the company announced that it planned to introduce responsible AI innovation on the platform. At the time, it listed a number of new features it promised to introduce to protect users from misinformation and deepfakes.
Deepfakes can be understood as synthetic media (AI-generated or otherwise) that has been digitally altered to impersonate another individual. In recent times, the number of deepfakes has increased significantly. YouTube now lets users use its existing Privacy Complaint Process to report AI-generated content that simulates their face or voice.
Users can also report a channel if they believe it is impersonating them. Notably, a video removed through this privacy process does not count as a Community Guidelines strike against the creator. YouTube disables a creator’s channel after it receives three such strikes.
How does YouTube’s privacy complaint process work?
YouTube’s privacy complaint form is accessible online. It’s a lengthy process: users are first taken through six pages designed to help them determine whether their privacy has actually been violated and whether they’ve explored all other options before contacting the platform. These pages ask, among other things, whether they’ve been harassed, whether they’ve contacted the uploader, and whether they’ve reviewed the Community Guidelines. Users are also warned that abusing the process could result in their account being suspended.
For genuine privacy complaints, these steps lead to a detailed form where users must share details about the incident, supporting evidence, and information about the uploader. Once the form is submitted, YouTube reviews the complaint, and if it is found to be valid, the video is removed from the platform.