YouTube’s stronger election misinformation policies had a spillover effect on Twitter and Facebook, researchers say.

YouTube’s stricter policy against election misinformation was followed by a sharp drop in the prevalence of false and misleading videos on Facebook and Twitter, according to new research released Thursday, underscoring the video platform’s influence across social media.

Researchers at New York University’s Center for Social Media and Politics found a significant rise in the sharing on Twitter of YouTube videos promoting voter-fraud claims immediately after the November 3 election. In November, those videos consistently accounted for about a third of all election-related video shares on Twitter. The top vote-fraud YouTube channels shared on Twitter that month came from sources that had historically promoted election misinformation, such as Project Veritas, Right Side Broadcasting Network and One America News Network.

But the proportion of vote-fraud claims shared on Twitter dropped sharply after December 8. That was the day YouTube said it would remove videos promoting the baseless theory that widespread errors and fraud had changed the outcome of the presidential election. By December 21, the share of election-fraud YouTube videos shared on Twitter had fallen below 20 percent for the first time since the election.

The proportion fell further after January 7, when YouTube announced that any channel violating its election misinformation policy would receive a “strike,” and that channels receiving three strikes in a 90-day period would be permanently removed. By Inauguration Day, the share was about 5 percent.

The trend was repeated on Facebook. A postelection wave of video shares promoting fraud theories peaked at about 18 percent of all YouTube videos shared on Facebook just before December 8. After YouTube put its stricter policies in place, that share fell sharply for much of the month, before rising slightly ahead of the January 6 riot at the Capitol. It dropped again, to about 4 percent, by Inauguration Day, after the strikes policy took effect on January 7.

To arrive at their findings, the researchers collected a random sample of 10 percent of all tweets each day. They then isolated the tweets that linked to YouTube videos. They did the same for YouTube links shared on Facebook, using CrowdTangle, a social media analytics tool owned by Facebook.
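As a rough illustration of that filtering step, the sketch below shows how YouTube links might be isolated from a sample of tweets. It is not the researchers’ published code; the field names and helper functions are assumptions made for the example.

from urllib.parse import urlparse

# Hosts that indicate a YouTube video link, including the short youtu.be form.
YOUTUBE_HOSTS = {"youtube.com", "www.youtube.com", "m.youtube.com", "youtu.be"}

def is_youtube_link(url):
    # Check the domain of an expanded URL against the known YouTube hosts.
    return urlparse(url).netloc.lower() in YOUTUBE_HOSTS

def youtube_tweets(tweets):
    # Keep only tweets (dicts with a hypothetical "urls" field of expanded
    # links) that share at least one YouTube link.
    return [t for t in tweets if any(is_youtube_link(u) for u in t.get("urls", []))]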

From this large dataset, the researchers filtered for YouTube videos about the election in general, as well as for videos about election fraud specifically, using a range of keywords like “Stop the Steal” and “Sharpiegate.” This gave the researchers a sense of the volume of YouTube videos about election fraud over time, and how that volume shifted in late 2020 and early 2021.
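The keyword step and the share-over-time calculation can be sketched in the same spirit. Only “Stop the Steal” and “Sharpiegate” are named here; the rest of the keyword list, the data fields and the function names below are illustrative assumptions, not the study’s actual pipeline.

from collections import defaultdict

# Partial keyword list: only the two terms named in the article; the full list is not public.
FRAUD_KEYWORDS = ("stop the steal", "sharpiegate")

def is_fraud_video(title):
    # Crude case-insensitive keyword match against a video title.
    lowered = title.lower()
    return any(keyword in lowered for keyword in FRAUD_KEYWORDS)

def daily_fraud_share(videos):
    # videos: dicts with hypothetical "date" (YYYY-MM-DD) and "title" fields.
    # Returns, per day, the share of election-video posts whose titles match.
    totals, fraud = defaultdict(int), defaultdict(int)
    for video in videos:
        totals[video["date"]] += 1
        if is_fraud_video(video["title"]):
            fraud[video["date"]] += 1
    return {day: fraud[day] / totals[day] for day in totals}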

Misinformation on major social networks has increased in recent years. YouTube, in particular, has lagged behind other platforms in tackling various types of misinformation, often announcing stricter policies several weeks or months after Facebook and Twitter. In recent weeks, however, YouTube has tightened its policies, such as banning all anti-vaccine misinformation and suspending the accounts of prominent anti-vaccine activists, including Joseph Mercola and Robert F. Kennedy Jr.

Megan Brown, a research scientist at the NYU Center for Social Media and Politics, said it was possible that after YouTube banned the content, people could no longer share the videos promoting election fraud. It is also possible that interest in the vote-fraud theories had dropped significantly after states certified their election results.

But the bottom line, Ms. Brown said, is that “we know these platforms are deeply connected.” YouTube, she pointed out, has been identified as one of the most shared domains across other platforms, including in both Facebook’s recently released content reports and NYU’s own research.

“It’s a big part of the information ecosystem,” Ms. Brown said, “so as the YouTube platform gets healthier, so do others.”
