Twitter criticized for allowing graphic images of Texas shooting to spread

Pat Holloway has seen her share of destruction during a 30-year career as a photojournalist: the 1993 confrontation in Waco, Texas; the 1995 bombing of a federal building in Oklahoma City by Timothy McVeigh; and the 2011 tornado that hit Joplin, Mo.

But this weekend, she said in an interview, she had had enough. When graphic images of bloodied victims of a mass shooting at a Texas mall, which killed at least nine people including the gunman, began circulating on Twitter, she tweeted at Elon Musk, the owner of Twitter, demanding that he do something.

“This family doesn’t deserve to see their deceased relatives scattered across Twitter for all to see,” Ms Holloway, 64, said in the interview on Sunday.

Ms Holloway was one of many Twitter users to criticize the social network for allowing the horrific images, including one of a blood-spattered child, to spread virally across the platform after Saturday’s shooting. While gruesome images have become commonplace on social media, where a cellphone camera and an internet connection make anyone a publisher, the unusually graphic nature of these images provoked sustained outrage from users. They also cast a harsh spotlight on Twitter’s content moderation practices, which have been curtailed since Mr Musk acquired the company last year.

Like other social media companies, Twitter once again finds itself in a position similar to that of traditional newspaper editors, wrestling with difficult decisions about how much to show its audience. While newspapers and magazines generally shield their readers from the most graphic images, they have made exceptions, as Jet magazine did in 1955 when it published open-casket photographs of Emmett Till, a 14-year-old Black boy who was beaten to death in Mississippi, to illustrate the horrors of the Jim Crow-era South.

However, unlike newspaper and magazine publishers, tech companies like Twitter must enforce their decisions at scale and monitor millions of users with a combination of automated systems and human content moderators.

Other tech companies, such as Facebook’s parent company, Meta, and YouTube’s parent company, Alphabet, have invested in large teams to combat the proliferation of violent imagery on their platforms. Twitter, on the other hand, has scaled back its content moderation since Mr Musk bought the site in late October last year, laying off full-time employees and contractors on the trust and safety teams that handle content moderation. Mr Musk, who describes himself as a “free speech absolutist”, said last November that he would create a “content moderation council” to decide which posts should stay up and which should be removed. He later backed away from that promise.

Twitter and Meta did not respond to requests for comment. A YouTube spokesperson said the site had begun removing videos of the massacre, adding that it was promoting authoritative sources of information.

Even before Mr Musk took over, Twitter never completely banned graphic content. The platform has, for example, allowed images of people killed or injured in the war in Ukraine, arguing that they are newsworthy and informative. The company sometimes places warning labels or pop-ups on sensitive content, requiring users to sign in to see the images.

While many users clearly circulated the images of the carnage, including those of the dead gunman, for shock value, others retweeted them to underline the horrors of gun violence. “The America of the NRA,” read one tweet. “This isn’t going away,” said another. The New York Times is not linking to the social media posts containing the graphic images.

Claire Wardle, the co-founder of the Information Futures Lab at Brown University, said in an interview that tech companies must balance their desire to protect their users with their responsibility to preserve newsworthy or otherwise important images, even those that are uncomfortable to view. As precedent, she cited the decision to publish a photograph of Kim Phuc Phan Thi, who became known as “Napalm Girl” after the image of her suffering following a napalm attack during the Vietnam War was seen around the world.

She added that she preferred that graphic images of newsworthy events remain online, behind some kind of overlay that requires users to choose to see the content.

“This is news,” she said. “We often see these kinds of images in other countries and nobody is shocked. But then it happens to Americans and people say, ‘Should we see this?’”

For years, social media companies have had to grapple with the proliferation of gory images and videos in the aftermath of horrific violence. Last year, Facebook was criticized for running ads alongside a graphic video of a racist shooting in Buffalo, N.Y., which had been streamed live on the video platform Twitch. The Buffalo gunman claimed inspiration from a 2019 mass shooting in Christchurch, New Zealand, that killed at least 50 people and was broadcast live on Facebook. Twitter has removed versions of the Christchurch video for years, arguing that the footage glorifies the violent messages espoused by the shooter.

While the graphic images of the Texas mall shooting circulated widely on Twitter, they appeared to be less prominent on other online platforms on Sunday. Keyword searches for footage of the shooting in Allen, Texas, on Instagram, Facebook and YouTube mainly surfaced news reports and less explicit eyewitness videos.

Sarah T. Roberts, a professor at the University of California, Los Angeles, who studies content moderation, drew a distinction between editors at traditional media companies and social media platforms, which are not bound by the ethics traditional journalists adhere to, including minimizing harm to the viewer and to the friends and family of the people who died.

“I understand where people on social media are coming from when they want to spread these images in the hope that it will bring about change,” Ms Roberts said. “Unfortunately, social media as a business is not set up to support that. What it is set up to do is profit from the dissemination of these images.”

Ryan Mac contributed reporting.
