YouTube Algorithm Steers People Away From Radical Content

YouTube video icons (Photo by Javier Miranda on Unsplash)

"If you randomly follow the algorithm, you probably would consume less radical content using YouTube as you typically do!"

So says Manoel Ribeiro, co-author of a new paper on YouTube's recommendation algorithm and radicalization, in an X (formerly Twitter) thread about his research.

The study—published in February in the Proceedings of the National Academy of Sciences (PNAS)—is the latest in a growing collection of research that challenges conventional wisdom about social media algorithms and political extremism or polarization.

Introducing the Counterfactual Bots

For this study, a team of researchers spanning four universities (the University of Pennsylvania, Yale, Carnegie Mellon, and Switzerland's École Polytechnique Fédérale de Lausanne) aimed to examine whether YouTube's algorithms guide viewers toward more and more extreme content.

This supposed "radicalizing" effect has been touted extensively by people in politics, advocacy, academia, and media—often offered as justification for giving the government more control over how tech platforms operate. But the research cited to "prove" such an effect is often flawed in a number of ways, including not taking into account what a viewer would have watched in the absence of algorithmic recommendations.

"Attempts to evaluate the effect of recommenders have suffered from a lack of appropriate counterfactuals—what a user would have viewed in the absence of algorithmic recommendations—and hence cannot disentangle the effects of the algorithm from a user's intentions," note the researchers in the abstract to this study.

To overcome this limitation, they relied on "counterfactual bots." Some bots watched a video and then replicated what a real user (based on actual user histories) watched from there; other bots watched that same first video and then followed YouTube's recommendations, in effect going down the algorithmic "rabbit hole" that so many have warned against.
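To make that paired-bot design concrete, here is a minimal illustrative sketch, not the authors' code. The helper functions get_recommendations and partisanship_score are hypothetical stand-ins for the study's instrumentation.

```python
# Illustrative sketch of a counterfactual-bot comparison (hypothetical helpers,
# not the paper's actual code).

def bot_replay_user(seed_video, user_history):
    """Bot 1: watches the seed video, then replays what the real user watched."""
    return [seed_video] + list(user_history)

def bot_follow_algorithm(seed_video, n_steps, get_recommendations):
    """Bot 2: watches the seed video, then always clicks the top recommendation."""
    path = [seed_video]
    for _ in range(n_steps):
        recs = get_recommendations(path[-1])  # hypothetical: ranked recommendations
        if not recs:
            break
        path.append(recs[0])  # follow the algorithmic "rabbit hole"
    return path

def mean_partisanship(path, partisanship_score):
    """Average partisanship score of everything a bot watched."""
    return sum(partisanship_score(v) for v in path) / len(path)

# Because both bots start from the same seed video, comparing the two averages
# separates what the algorithm recommends from what the user chose to watch.
```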

The counterfactual bots following an algorithm-led path wound up consuming less partisan content.

The researchers also found "that real users who consume 'bursts' of highly partisan videos subsequently consume more partisan content than identical bots who subsequently follow algorithmic viewing rules."

"This gap corresponds to an intrinsic preference of users for such content relative to what the algorithm recommends," notes study co-author Amir Ghasemian on X.

Pssst. Social Media Users Have Agency 

"Why should you trust this paper rather than other papers or reports saying otherwise?" comments Ribeiro on X. "Because we came up with a way to disentangle the causal effect of the algorithm."

As Ghasemian explained on X: "It has been shown that exposure to partisan videos is followed by an increase in future consumption of these videos."

People often assume that this is because algorithms start pushing more of that content.

"We show this is not due to more recommendations of such content. Instead, it is due to a change in user preferences toward more partisan videos," writes Ghasemian.

Or, as the paper puts it: "a user's preferences are the primary determinant of their experience."

That's an important difference, suggesting that social media users aren't passive vessels simply consuming whatever some algorithm tells them to but, rather, people with existing and shifting preferences, interests, and habits.

Ghasemian also notes that "recommendation algorithms have been criticized for continuing to recommend problematic content to previously interested users long after they have lost interest in it themselves." So the researchers set out to see what happens when a user switches from watching more far-right to more moderate content.

They found that "YouTube's sidebar recommender 'forgets' their partisan preference within roughly 30 videos regardless of their prior history, while homepage recommendations shift more gradually toward moderate content," per the paper abstract.
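As a rough illustration of how that "forgetting" might be measured, here is a hedged sketch; the helpers watch, sidebar_recs, and is_partisan are assumptions, not anything described in the paper.

```python
# Illustrative sketch: track how fast sidebar recommendations "forget" a
# partisan history after a bot switches to moderate videos (hypothetical helpers).

def partisan_share_after_switch(partisan_history, moderate_videos,
                                watch, sidebar_recs, is_partisan):
    """Build a partisan viewing history, switch to moderate videos, and record
    the partisan share of sidebar recommendations after each moderate video."""
    for video in partisan_history:       # establish the partisan profile
        watch(video)

    shares = []
    for video in moderate_videos:        # switch to moderate viewing
        watch(video)
        recs = sidebar_recs(video)
        partisan = sum(1 for v in recs if is_partisan(v))
        shares.append(partisan / max(len(recs), 1))
    return shares  # the paper reports this share leveling off within roughly 30 videos
```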

Their conclusion: "Individual consumption patterns mostly reflect individual preferences, where algorithmic recommendations play, if anything, a moderating role."

It's Not Just This Study

While "empirical studies using different methodological approaches have reached somewhat different conclusions regarding the relative importance" of algorithms in what a user watches, "no studies find support for the alarming claims of radicalization that characterized early, early, anecdotal accounts," note the researcher in their paper.

Theirs is part of a burgeoning body of research suggesting that the supposed radicalization effects of algorithmic recommendations aren't real—and, in fact, algorithms (on YouTube and otherwise) may steer people toward more moderate content.

(See my defense of algorithms from Reason's January 2023 print issue for a whole host of information to this effect.)

A 2021 study from some of the same researchers behind the new study found "little evidence that the YouTube recommendation algorithm is driving attention to" what the researchers call "far right" and "anti-woke" content. The growing popularity of anti-woke content could instead be attributed to "individual preferences that extend across the web as a whole."

In a 2022 working paper titled "Subscriptions and external links help drive resentful users to alternative and extremist YouTube videos," researchers found that "exposure to alternative and extremist channel videos on YouTube is heavily concentrated among a small group of people with high prior levels of gender and racial resentment" who typically subscribe to channels from which they're recommended videos or get to these videos from off-site links. "Non-subscribers are rarely recommended videos from alternative and extremist channels and seldom follow such recommendations when offered."

And a 2019 paper from researchers Mark Ledwich and Anna Zaitsev found that YouTube algorithms disadvantaged "channels that fall outside mainstream media," especially "White Identitarian and Conspiracy channels." Even when someone viewed these types of videos, "their recommendations will be populated with a mixture of extreme and more mainstream content" going forward, leading Ledwich and Zaitsev to conclude that YouTube is "more likely to steer people away from extremist content rather than vice versa."

Some argue that changes to YouTube's recommendation algorithm in 2019 shifted things, and these studies don't capture the old reality. Perhaps. But whether or not that's the case, the new reality—shown in study after recent study—is that YouTube algorithms today aren't driving people to more extreme content.

And it's not just YouTube's algorithm that has been getting its reputation rehabbed by research. A series of studies on the influence of Facebook and Instagram algorithms in the lead-up to the 2020 election cut against the idea that algorithmic feeds are making people more polarized or less informed.

Researchers tweaked user feeds so that they saw either algorithmically selected content or a chronological feed, or so that they didn't see re-shares of the sort of content that algorithms prize. Getting rid of algorithmic content or re-shares didn't reduce polarization or increase accurate political knowledge. But it did increase "the amount of political and untrustworthy content" that a user saw.

Today's Image

Esme side-eyes your algorithm panic (ENB/Reason)
