
How your child’s online mistake can ruin your digital life


When Jennifer Watkins got a message from YouTube saying her channel was being shut down, she wasn’t concerned at first. After all, she wasn’t using YouTube.

However, her seven-year-old twin sons had used a Samsung tablet logged into her Google account to watch children’s content and to make YouTube videos of themselves doing silly dances. Few of the videos had more than five views. But the video that got Ms. Watkins in trouble, made by one of her sons, was different.

“Apparently it was a video of his buttocks,” said Ms. Watkins, who has never seen it. “He was challenged by a classmate to make a nude video.”

YouTube, which is owned by Google, uses AI-powered systems to review the hundreds of hours of video uploaded to the service every minute. The scanning process can sometimes go wrong and label innocent people as child abusers.

The New York Times has documented other episodes in which parents’ digital lives were turned upside down by nude photos and videos of their children that were flagged by Google’s AI systems and determined by human reviewers to be illegal. As a result, a number of parents were investigated by the police.

The “nude video” in Ms. Watkins’ case, which was uploaded in September, was flagged within minutes as possible sexual exploitation of a child, a violation of Google’s terms of service with very serious consequences.

Ms. Watkins, a medical provider from New South Wales, Australia, soon discovered that she had lost access not only to YouTube but to all of her Google services. She lost her photos, documents and email, she said, meaning she couldn’t receive messages about her work schedule, view her bank statements or “order a big shake” through her McDonald’s app – which she logs into with her Google account.

Her account would eventually be deleted, a Google login page informed her, but she could appeal the decision. She clicked the “Start Appeal” button and wrote in a text box that her seven-year-old sons thought “butts were funny” and were responsible for uploading the video.

“This is hurting me financially,” she added.

Child safety advocates and lawmakers around the world have pushed technology companies to stop the spread of abusive images online by monitoring for such material on their platforms. Many communications providers now scan the photos and videos stored and shared by their users, looking for known images of abuse that have already been reported to the authorities.
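This kind of scanning for known material is typically built on hash matching: each file is reduced to a compact fingerprint and compared against a database of fingerprints of previously reported images. Production systems rely on perceptual hashes, such as Microsoft’s PhotoDNA, that survive resizing and re-encoding; the Python sketch below illustrates only the matching step, uses a plain cryptographic hash to stay self-contained, and its hash database is hypothetical.

    import hashlib

    # Hypothetical database of fingerprints of previously reported images.
    # Real systems store perceptual hashes (e.g., PhotoDNA), not SHA-256.
    KNOWN_HASHES = {
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def file_fingerprint(path: str) -> str:
        """Return the SHA-256 hex digest of a file's contents."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def is_known_image(path: str) -> bool:
        # An exact match means the file is byte-identical to a reported
        # image; perceptual hashing relaxes this to visual similarity.
        return file_fingerprint(path) in KNOWN_HASHES

A newly made video like the one Ms. Watkins’ son uploaded would not match such a database, which is why Google also built the classifier described next.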

Google also wanted to be able to flag never-before-seen content. A few years ago, it developed an algorithm – trained on the known images – that attempts to identify new exploitative material; Google has made it available to other companies, including Meta and TikTok.

After an employee confirmed that the video posted by Ms. Watkins’ son was problematic, Google reported it to the National Center for Missing and Exploited Children, a nonprofit organization that acts as the federal clearinghouse for flagged content. The center can then add the video to its database of known material and decide whether to report it to the local police.

According to statistics from the national center, Google is one of the top filers of reports of “apparent child pornography.” Google filed more than two million reports last year, far more than most digital communications companies, though fewer than Meta.

(It’s difficult to judge the severity of the child abuse problem from the numbers alone, experts say. In a study of a small sample of users flagged for sharing inappropriate images of children, data scientists at Facebook found that more than 75 percent “did not show any malicious intent.” The users included teenagers in romantic relationships who shared intimate images of themselves, and people who shared a “meme of a child’s genitals being bitten by an animal because they think it’s funny.”)

Apple has resisted pressure to scan iCloud for exploitative material. A spokesperson pointed to a letter the company sent to an advocacy group this year, expressing concern about the “security and privacy of our users” and about reports “that innocent parties have been dragged into dystopian dragnets.”

Last fall, Susan Jasper, Google’s head of trust and safety operations, wrote in a blog post that the company planned to update its appeals process to “improve the user experience” for people who “think we made bad decisions.” In a significant change, the company now offers more information about why an account was suspended, rather than a blanket notification about a “serious violation” of company policy. Ms. Watkins, for example, was told that child exploitation was the reason she had been locked out.

Regardless, Ms. Watkins’ repeated appeals were denied. She had a paid Google account, which allowed her and her husband to exchange messages with customer service representatives. But in digital correspondence reviewed by The Times, the representatives said that even though the video had been the unwitting act of a child, it still violated company policy.

The draconian punishment for one silly video seemed unfair, Ms. Watkins said. She wondered why Google couldn’t give her a warning before cutting off access to all of her accounts and more than a decade of digital memories.

After more than a month of failed attempts to change the company’s mind, Ms. Watkins contacted The Times. A day after a reporter inquired about her case, her Google account was restored.

“We do not want our platforms to be used to endanger or exploit children, and there is widespread demand that internet platforms take the strongest action to detect and prevent CSAM,” the company said in a statement, using a common acronym for child sexual abuse material. “In this case, we understand that the infringing content was not maliciously uploaded.” The company did not say how a user whose appeal has been denied can escalate the decision, other than by emailing a Times reporter.

Google is in a difficult position when assessing such appeals, said Dave Willner, a fellow at Stanford University’s Cyber Policy Center who has worked in trust and safety at several major technology companies. Even if a photo or video is innocent in its origins, it can be shared maliciously.

“Pedophiles will share images that parents posted innocently, or gather them into collections, because they just want to see naked children,” Mr. Willner said.

The other challenge is the sheer amount of potentially exploitative content that Google flags.

“It’s just a very, very difficult problem to make value judgments at this scale,” Mr. Willner said. “They make hundreds of thousands or millions of decisions a year. If you roll the dice that many times, you’ll roll snake eyes.”

He said Ms. Watkins’ struggle after losing access to her Google account was “a good argument for spreading your digital life” and not depending on one company for so many services.

Ms. Watkins took another lesson from the experience: Parents should not use their own Google account for their children’s internet activities, but should instead set up a dedicated account for them — a choice that Google encourages.

She has not yet created such an account for her twins. They are now banned from the internet.
