How does YouTube's recommendation algorithm work?

Radicalization through YouTube? A large study on recommendations of right-wing extremist content

The thesis that recommendation algorithms on large platforms such as Facebook and YouTube steer attention toward ever more extreme political perspectives is not new. Technology sociologist Zeynep Tufekci described YouTube as “the great radicalizer” over a year ago, but she relied primarily on individual examples.

A team led by the Brazilian computer scientist Manoel Ribeiro has now taken up the topic again on the basis of a large data set of over 300,000 videos, two million recommendations and 79 million comments. In their paper “Auditing Radicalization Pathways on YouTube”, which is freely available on arXiv.org but has not yet been peer-reviewed, they try to trace radicalization paths on YouTube. Their conclusion (p. 2):

We provide strong evidence of radicalization among YouTube users and that YouTube's recommendation engine supports the discovery of far-right channels, even in a non-personalization scenario.

The authors arrive at this result on the basis of an analysis of commenting users. They first sort channels into three categories of increasing radicalism:

  • Following Eric Weinstein's concept of the "Intellectual Dark Web" (I.D.W.), the first category covers right-wing anti-mainstream channels that define themselves through criticism of political correctness, feminism and political Islam.
  • As "Alt-Lite" in turn, videos and channels are referred to that take openly nationalist and latently racist positions.
  • In the third and most extreme group, the "Alt-Right", theses of racist superiority ("white supremacy") are openly advocated. According to the authors, borderline and doubtful cases were always assigned to the less radical of the categories in question, a rule illustrated in the sketch after this list.
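A minimal Python sketch of this conservative assignment rule might look as follows. The severity ordering follows the list above, but the function and label names are illustrative assumptions, not taken from the paper's code:

```python
# Categories ordered from least to most radical, per the article's
# description (I.D.W. < Alt-Lite < Alt-Right).
SEVERITY = {"control": 0, "I.D.W.": 1, "Alt-Lite": 2, "Alt-Right": 3}

def assign_category(candidate_labels):
    """Resolve a borderline case by picking the least radical label."""
    return min(candidate_labels, key=lambda label: SEVERITY[label])

# A channel that one coder calls "Alt-Right" and another "Alt-Lite"
# ends up in the milder "Alt-Lite" category.
print(assign_category(["Alt-Right", "Alt-Lite"]))  # Alt-Lite
```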

These three groups are compared with a control group of traditional media channels on YouTube. A historical analysis of videos, views and comments from the last ten years shows strong growth in all three right-wing categories compared to the control group (see Figure 1). Moreover, interaction rates increase the more radical a channel is.
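The aggregation behind such a comparison can be sketched in a few lines of Python. Field names and numbers below are invented for illustration only:

```python
from collections import defaultdict

# Hypothetical per-video records; this is not the paper's data schema.
videos = [
    {"category": "Alt-Right", "year": 2018, "views": 12_000, "comments": 900},
    {"category": "control",   "year": 2018, "views": 50_000, "comments": 400},
]

# Aggregate views and comments per (category, year).
totals = defaultdict(lambda: {"views": 0, "comments": 0})
for v in videos:
    key = (v["category"], v["year"])
    totals[key]["views"] += v["views"]
    totals[key]["comments"] += v["comments"]

# Interaction rate as comments per view: the observation above is that
# this ratio is higher the more radical the category.
for (cat, year), t in sorted(totals.items()):
    print(f"{year} {cat}: {t['comments'] / t['views']:.4f} comments per view")
```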

Pathways of radicalization

In order to investigate the question of “radicalization paths”, i.e. whether YouTube's recommendation algorithms make a significant contribution to the growth of more extreme channels, the authors focus on comments in the further analysis steps. On the basis of various statistical similarity and overlap measures, a growing convergence of the commenting audiences of the three categories I.D.W., Alt-Lite and Alt-Right becomes visible over time.
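One plausible overlap measure of this kind is the Jaccard similarity between the sets of users commenting under two categories in a given year. Whether the authors rely on exactly this measure is an assumption here; the snippet below, with made-up user IDs, only illustrates the idea:

```python
def jaccard(a, b):
    """Jaccard similarity of two sets of commenting user IDs."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

# Made-up commenter sets per (category, year); real sets would come
# from the 79 million crawled comments.
commenters = {
    ("I.D.W.", 2013):    {"u1", "u2", "u3", "u4"},
    ("Alt-Right", 2013): {"u4", "u5"},
    ("I.D.W.", 2018):    {"u1", "u2", "u3", "u6"},
    ("Alt-Right", 2018): {"u2", "u3", "u6", "u7"},
}

# Rising overlap between the audiences of two categories over time is
# the kind of "alignment" the authors report.
for year in (2013, 2018):
    sim = jaccard(commenters[("I.D.W.", year)], commenters[("Alt-Right", year)])
    print(year, f"{sim:.2f}")
```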

Finally, to show how exactly this alignment happened, the authors track the comment activities of users who did not initially comment under Alt-Right videos. The proportion of I.D.W. and Alt-Lite users who subsequently also comment under Alt-Right videos rises more strongly than in the control group. A manual examination of 900 randomly selected comments suggests that these are, for the most part, not critical but approving comments.
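A rough sketch of such a migration analysis, under the assumption of a simple per-user comment timeline (all data invented):

```python
# Each user maps to a time-ordered list of (year, category) comment
# events; the layout and data are assumptions for illustration.
user_timelines = {
    "u1": [(2015, "I.D.W."), (2016, "I.D.W."), (2018, "Alt-Right")],
    "u2": [(2015, "Alt-Lite"), (2018, "Alt-Lite")],
    "u3": [(2015, "control"), (2018, "control")],
}

def migrated_to(timeline, target="Alt-Right"):
    """True if a user's first comment was outside `target` but a later one hit it."""
    first_year, first_cat = timeline[0]
    return first_cat != target and any(
        cat == target and year > first_year for year, cat in timeline[1:]
    )

# Cohort: users who started out commenting under I.D.W. or Alt-Lite videos.
cohort = [t for t in user_timelines.values() if t[0][1] in {"I.D.W.", "Alt-Lite"}]
share = sum(migrated_to(t) for t in cohort) / len(cohort)
print(f"Share later also active under Alt-Right videos: {share:.0%}")
```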

User migration and YouTube's contribution

In the last part of the study, Ribeiro and colleagues finally turn to YouTube's recommendation algorithm itself and show that, even without personalization, a relevant share of users can reach right-wing extremist content via YouTube's recommendations. Here, however, the researchers struggle with the fact that recommendation effects are strongly influenced by personalization, which cannot be captured with the selected method (the same limitation applies to a similar analysis by the Guardian and to the algotransparency.org project). The investigation period is also shorter, because a historical survey of recommendations is difficult or even impossible. One of the most interesting results from the available data is that more channel recommendations led from the control group to Alt-Lite and I.D.W. channels than vice versa.
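The non-personalized part of such an audit can be approximated by crawling the "recommended channels" graph and counting how often recommendations cross category boundaries. The following sketch assumes hypothetical channel names and a hand-made category mapping:

```python
from collections import Counter

# Hypothetical crawled "recommended channel" edges; channel names and
# category assignments are invented for illustration.
category = {
    "NewsChannelA": "control",
    "IDWChannelB":  "I.D.W.",
    "AltLiteC":     "Alt-Lite",
}
recommendation_edges = [
    ("NewsChannelA", "IDWChannelB"),   # control -> I.D.W.
    ("NewsChannelA", "AltLiteC"),      # control -> Alt-Lite
    ("IDWChannelB",  "NewsChannelA"),  # I.D.W.  -> control
]

# Tally directed category-to-category recommendation counts. An asymmetry
# such as more control -> Alt-Lite/I.D.W. edges than the reverse is the
# result highlighted above.
flows = Counter((category[src], category[dst]) for src, dst in recommendation_edges)
for (src_cat, dst_cat), n in flows.items():
    print(f"{src_cat} -> {dst_cat}: {n}")
```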

In summary, the results of the study by Ribeiro and colleagues fuel discussions about the unintended effects of recommendation algorithms on large online platforms. In addition, the effects are likely to be even stronger if recommendation personalization is taken into account. It is therefore all the more important to ask how alternative recommendation mechanisms could be designed, regardless of whether they would then be called “democratic algorithms” or not.


About the author

Leonhard Dobusch, business economist and lawyer, researches the management of digital communities and transnational copyright regulation as a university professor of organization at the University of Innsbruck. He tweets as @leonidobusch, blogs privately as Leonido and, together with others, on Governance Across Borders and the OS ConJunction blog, and is co-founder and scientific director of the Momentum Institute and the Momentum series of congresses. Mail: [email protected]
Published 08/26/2019 at 11:30 AM