
The Aftermath Of The Adpocalypse: Systemic Bias on YouTube

Saleem Masadeh
PLEX Lab, Computer Science Department
New Mexico State University
Las Cruces, NM, USA
[email protected]

Bill Hamilton
PLEX Lab, Computer Science Department
New Mexico State University
Las Cruces, NM, USA
[email protected]

ABSTRACT Social media platforms are increasingly shaping political discourse by providing a platform for new generations of political commentators and serving as a primary means for the distribution of news. However, the business models of these platforms, largely driven by advertising, may be introducing bias at both systemic and personal cognitive levels. In this work, we briefly discuss how the YouTube platform is potentially promoting bias through a combination of platform designs, policies, and recommendation algorithms. Finally, we discuss design ideas for augmenting the user experience of YouTube that may work to mitigate these biases.

CCS CONCEPTS • Human-centered computing → Collaborative and social computing.

KEYWORDS YouTube, adpocalypse, recommendation system, bias

CHI ’20, April 25–30, 2020, Honolulu, HI, USA
© 2020 Copyright is held by the owner/author(s).
Proceedings of the CHI 2020 Workshop on Detection and Design for Cognitive Biases in People and Computing Systems, April 25, 2020, Honolulu, HI, USA.

ACM Reference Format: Saleem Masadeh and Bill Hamilton. 2020. The Aftermath Of The Adpocalypse: Systemic Bias on YouTube. In Proceedings of the CHI 2020 Workshop on Detection and Design for Cognitive Biases in People and Computing Systems, April 25, 2020, Honolulu, HI. ACM, New York, NY, USA, 8 pages.

INTRODUCTION In recent years, social media platforms’ potential to shape online political discourse has come under increased scrutiny [6, 18, 29]. Platforms have fostered the emergence of new political movements, including both progressive and extreme right-wing voices and communities. At the same time, there has seemingly been an increase in the dissemination of misinformation or “fake news” through these platforms [3, 16, 30]. Both of these phenomena have forced tech companies to reevaluate the design and business models of their platforms in light of their potential implications for our democratic society. For example, Facebook has deployed new mechanisms for tagging fake news as disputed [4, 12]. They then worked to contextualize tagged articles by showing alternative related articles to give users more context about the story [22]. However, the impact of these mechanisms is as of yet unclear, and in some cases they have backfired, further reinforcing user biases [4, 12, 27, 28].

We have become primarily interested in how these phenomena are playing out on the YouTube platform, where a new era of political discourse has begun. The platform is facilitating the emergence of a new generation of political commentators. The business model of YouTube, which enables the monetization of video content through advertising, has allowed many of these commentators to go from moonlighting amateurs to full-time professionals. However, since the 2016 elections and the accompanying scrutiny, YouTube has repeatedly updated its monetization and recommendation policies [2, 8]. In 2017, YouTube faced a boycott by some of its largest advertising clients when it was found that ads for those clients were run alongside hateful and violent videos [8, 25, 26]. In response, YouTube reformed its advertiser-friendly content guidelines and executed an extensive overhaul, which demonetized millions of videos and channels that purportedly violated the new guidelines. This policy shift became evident when YouTube creators began noticing a marked decrease in their earned revenue and in recommendations of their videos [2, 10, 21, 23]. This event has colloquially become known as the “Adpocalypse”.

By the end of 2017, YouTube had recovered as advertisers started to return. However, since then the platform has continued to implement new content policies in response to scrutiny. For example, YouTube has announced that it is working to adjust its recommendation algorithms to elevate more “authoritative sources” [24] and crack down on more “borderline” content [1]. Predictably, many creators, including many of the emerging generation of political commentators, have reported that the recommendation algorithm no longer recommends their videos to unsubscribed viewers while largely promoting mainstream media outlets [11, 31].

These efforts by YouTube are undeniably creating bias within the system through the application of automated monetization and recommendation algorithms that optimize the platform’s business model by promoting acceptable content and hiding “borderline” content. Notably, the YouTube algorithms have no inherent civic responsibility. The algorithms interpret the rules of the platform and user interest models to create representations of content for viewers based on a variety of contextual factors, including both personal and systemic biases. With YouTube being the largest video watching platform in the world [17], this automated bias has the potential to have a significant impact on political discourse. Thus, we argue there is a need for tools that illuminate the biases of these platforms and provide means for users to exercise media literacy in these contexts.

In the following sections, we discuss how the business model of YouTube has driven updates in platform policies and algorithms that enable different forms of bias. Finally, we envision new design concepts for the YouTube search and viewing experience that may help lessen the impact of these biases.

SYSTEMIC BIAS IN ADVERTISING-FUNDED MEDIA In their seminal work “Manufacturing Consent”, Herman and Chomsky establish a theory of the influence of corporate ownership and the advertising economy on mass media [15]. They hypothesize that because most media outlets are funded by advertising, advertisers have an inherent ability to “license” what political and economic views are acceptable. They also argue that large, profit-driven media outlets must serve the interests of corporate owners and investors. This theory has been applied in many contexts to examine systemic bias in media [7, 9, 13, 14]. Given the advertising-driven business models of most social media platforms, we can also assume that advertisers have significant influence over what content is monetized and promoted on these platforms. Further, with the advent of big data and deep learning technologies, these platforms now have the tools to implement these biases at scale.

HOW YOUTUBE IS ENABLING BIAS In this section, we examine how the design of the YouTube platform and the policies it implements may be facilitating different forms of cognitive bias. In particular, we examine how the platform may be reinforcing confirmation and anchoring biases.

Figure 1: The YouTube search algorithm is highly preferential to authoritative sources when searching for political topics. Screenshot: the authors.

Bias of Authoritative Sources Policy When responding to search queries related to political topics, the YouTube search algorithm is highly preferential to content from authoritative sources, e.g., The Washington Post, NBC News, Fox News, etc. Figure 1 shows an example search result for “impeachment” taken on the day U.S. President Donald Trump was acquitted by the Senate.

We note this screenshot was taken from one of the authors’ YouTube accounts, which is subscribed to at least 15 alternative news and politics channels. Additionally, the recommendation algorithms that select videos to promote on users’ front pages and recommended videos tabs are increasingly promoting content from authoritative sources.

By favoring authoritative sources, YouTube promotes the reporting, ideas, and narratives expressed by those outlets. This is facilitating anchoring bias, where people tend to utilize the initial pieces of information they encounter to frame their further thinking [33]. By exposing users first to authoritative sources when they search new topics or check their recommended videos, the platform is establishing those sources as an anchor that frames future thinking.

Confirmation Bias via Recommendations and Subscriptions On the YouTube front page and on video pages, YouTube recommends videos for users to watch. These recommendations are likely to be what users watch next, whether users select them directly or YouTube’s “autoplay” feature selects them automatically. As noted previously, these algorithms are increasingly defaulting to authoritative sources. However, the recommended videos often still appear to be selected based on the prior watching behavior and subscriptions of users. While it is difficult to know precisely how the recommendation algorithm behaves, it appears probable that recommended videos align with content users have already watched. This likely encourages confirmation bias by exposing users to content they may already agree with. Under confirmation bias, people seek to support and increase the credibility of their own views while avoiding information supportive of alternative hypotheses [19, 32]. Moreover, if we imagine the opposite case, where YouTube does not match users with their preferences, the platform would likely become frustrating for users. Additionally, the subscription model of YouTube enables viewers to easily return to their favorite channels, likely further enabling confirmation biases.

HOW THE YOUTUBE BUSINESS MODEL DRIVES BIAS Given that YouTube’s revenue is largely driven by advertising, which brought in more than $15 billion in 2019 [20], we can assume that advertising weighs heavily on policy and design choices for the platform. As we noted in the introduction, advertisers’ desire for their ads to run alongside non-controversial content has led to a number of changes, including the policies that favor political content from authoritative sources. However, the platform’s reliance on advertising revenue also necessitates designing to maximize the amount of time watched, thus maximizing the number of ads seen. This leads to the design of recommendation algorithms that select content that viewers will be more inclined to watch [5]. That content may be more likely to be controversial or to reinforce the already held views of the viewer.

Figure 2: Proposed YouTube Search Page. Search results are presented in three lists to help people compare and contrast videos to watch. The lists provide search results from different types of video channels. For example, authoritative sources such as ABC News may provide higher-production coverage of political events. However, these sources likely have an underlying established political perspective. To encourage user awareness of alternative viewpoints, we can include search results from alternative perspectives, such as independent and controversial political commentators.

Additionally, subscription models may drive up watch time by helping viewers identify content they want to watch, but again may reinforce cognitive biases.
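To make the watch-time incentive concrete, below is a minimal, hypothetical sketch (in Python) of the kind of ranking objective described above and in [5]: candidate videos are ordered by their predicted expected watch time for a given viewer, so content the viewer is more inclined to keep watching rises to the top. The Candidate fields, the expected_watch_time formula, and the toy numbers are our own illustrative assumptions, not YouTube’s actual model.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Candidate:
    video_id: str
    p_watch: float                      # assumed model output: P(viewer starts the video)
    expected_minutes_if_watched: float  # assumed model output: E[minutes | started]


def expected_watch_time(c: Candidate) -> float:
    """Expected watch time = P(watch) * E[minutes watched | watched]."""
    return c.p_watch * c.expected_minutes_if_watched


def rank_for_watch_time(candidates: List[Candidate]) -> List[Candidate]:
    """Order candidates so those maximizing expected watch time come first."""
    return sorted(candidates, key=expected_watch_time, reverse=True)


if __name__ == "__main__":
    pool = [
        Candidate("authoritative_news_clip", 0.30, 6.0),
        Candidate("controversial_commentary", 0.25, 14.0),  # started less often, watched longer
        Candidate("how_to_video", 0.40, 4.0),
    ]
    for c in rank_for_watch_time(pool):
        print(c.video_id, round(expected_watch_time(c), 2))
```

Under such an objective, a video that keeps viewers watching longer can outrank one that is clicked more often, which is one way controversial or view-reinforcing content could be systematically favored.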

POTENTIAL WAYS TO MITIGATE BIAS IN YOUTUBE In this section, we discuss potential ways to mitigate how YouTube promotes cognitive biases by conceptualizing potential augmentations to the YouTube platform. Through each of these augmentations, we aim to promote users’ awareness of their own biases, which may have been reinforced by the system, and of alternative ideas, while maintaining the YouTube business model. In the following, we present the design and motivation of each augmentation.

Highlight Alternative Perspectives in YouTube Search Viewers use the YouTube search algorithm to help them locate videos of interest within the corpus of available YouTube videos. Figure 1 illustrates how the search results are presented as one list of homogeneous results. When users start watching a video from this list of results, they are effectively placed in an information environment created by the YouTube recommendation algorithm, which, as we have seen, is designed to keep users watching for long periods by recommending relevant videos. We propose augmenting the search results to highlight alternative viewpoints. For example, as shown in Figure 2, we show results from different types of sources, such as authoritative outlets, independent commentators, and controversial political commentators. We note that algorithms for identifying these nuanced characteristics of channels would be non-trivial, but we assume here that they could be created.

Additionally, the recommendation algorithms and autoplay features could be modified to also present users with content from alternative sources. The goal of this design is to increase the diversity of information produced by a search query and to enable users to compare ideas, framing, and narratives across a variety of sources. These alternatives will help users reason inductively about the search results and build a deeper understanding of the topic of their search query.
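As a rough illustration of the proposed search page in Figure 2, the sketch below partitions a flat list of search results into the three proposed columns by channel type. The classify_channel function and its hand-curated labels are hypothetical placeholders for the non-trivial channel classifier acknowledged above.

```python
from collections import defaultdict
from typing import Dict, List

CHANNEL_TYPES = ("authoritative", "independent", "controversial")


def classify_channel(channel_id: str, known_labels: Dict[str, str]) -> str:
    """Hypothetical placeholder: in practice this would be a learned classifier over
    channel metadata and content; here we simply look up a hand-curated label."""
    return known_labels.get(channel_id, "independent")


def partition_search_results(results: List[dict], known_labels: Dict[str, str]) -> Dict[str, List[dict]]:
    """Split a flat result list (dicts with 'title' and 'channel_id') into the three
    columns of the proposed interface, preserving the original ranking within each column."""
    columns: Dict[str, List[dict]] = defaultdict(list)
    for video in results:
        columns[classify_channel(video["channel_id"], known_labels)].append(video)
    # Always return all three columns so every perspective is visible, even when empty.
    return {t: columns[t] for t in CHANNEL_TYPES}
```

A viewer searching “impeachment” would then see, side by side, coverage from mainstream outlets and from independent or controversial commentators, rather than a single homogeneous ranked list.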

Helping Viewers Reflect on Personal and Platform Biases To help viewers be more aware of their own biases and those promoted by the YouTube platform, we propose tools for helping viewers reflect on their own watch behavior. First, we imagine analytics tools for helping viewers explore their watch history. While the YouTube platform provides a rich set of analytics tools for creators to understand how their content is being received by audiences, there are virtually no analytics tools for viewers. We can imagine viewers benefiting from being able to see how much of their watch history was determined by the platform’s autoplay feature, is focused on particular topics, or is distributed between authoritative and controversial sources. By enabling viewers to easily see data regarding their own behavior, we expect that we can support them in starting to understand their own biases as well as those reinforced by the platform. Tools such as these could also be automatically invoked by the platform. For example, if a user has been watching an automatically selected set of videos for several hours, the platform could notify them and invite them to reflect on their watching history.
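The following is a small sketch of what such viewer-facing analytics could compute, assuming a hypothetical watch-history export in which each entry records whether the video was started by autoplay and carries a source-type label; both fields are our assumptions rather than data YouTube currently exposes to viewers.

```python
from collections import Counter
from dataclasses import dataclass
from typing import List


@dataclass
class WatchEntry:
    video_id: str
    minutes_watched: float
    via_autoplay: bool   # assumed field: was this video queued by autoplay?
    source_type: str     # assumed label, e.g. "authoritative", "independent", "controversial"


def autoplay_share(history: List[WatchEntry]) -> float:
    """Fraction of total watch time that was selected by autoplay rather than the viewer."""
    total = sum(e.minutes_watched for e in history)
    if total == 0:
        return 0.0
    return sum(e.minutes_watched for e in history if e.via_autoplay) / total


def source_distribution(history: List[WatchEntry]) -> Counter:
    """Minutes watched per source type, to show how skewed a viewer's media diet is."""
    dist: Counter = Counter()
    for entry in history:
        dist[entry.source_type] += entry.minutes_watched
    return dist
```

Surfacing even these two measures, such as how much of an evening’s viewing was autoplay-driven and how concentrated it was in a single source type, could give viewers a concrete starting point for the kind of reflection described above.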

CONCLUSION Social media platforms are increasingly becoming influential venues for political discourse. In this work, we described how the business model of one such platform, YouTube, is potentially resulting in policies and algorithms that introduce systemic bias and enable individual cognitive biases. It is important for us to clearly understand these phenomena if we want to design social media technologies that promote media literacy and an informed democratic society. We briefly discussed design concepts, including highlighting alternative perspectives and tools for supporting viewer reflection, that could be developed to augment the user experience of YouTube to mitigate these biases. Much work still needs to be done to further understand these phenomena and to evaluate designs for mitigating bias.

REFERENCES
[1] 2019. Continuing our work to improve recommendations on YouTube. YouTube Official Blog (2019). https://youtube.googleblog.com/2019/01/continuing-our-work-to-improve.html
[2] Julia Alexander. 2019. The Golden Age of YouTube is Over. https://www.theverge.com/2019/4/5/18287318/youtube-logan-paul-pewdiepie-demonetization-adpocalypse-premium-influencers-creators. The Verge (2019).

[3] Hunt Allcott, Matthew Gentzkow, and Chuan Yu. 2019. Trends in the diffusion of misinformation on social media. Research & Politics 6, 2 (2019), 2053168019848554.
[4] Katherine Clayton, Spencer Blair, Jonathan A Busam, Samuel Forstner, John Glance, Guy Green, Anna Kawata, Akhila Kovvuri, Jonathan Martin, Evan Morgan, et al. 2019. Real solutions for fake news? Measuring the effectiveness of general warnings and fact-check tags in reducing belief in false stories on social media. Political Behavior (2019), 1–23.
[5] Paul Covington, Jay Adams, and Emre Sargin. 2016. Deep Neural Networks for YouTube Recommendations. In Proceedings of the 10th ACM Conference on Recommender Systems (Boston, Massachusetts, USA) (RecSys ’16). Association for Computing Machinery, New York, NY, USA, 191–198. https://doi.org/10.1145/2959100.2959190
[6] Harry De Quetteville. 2019. The way social media is shaping our politics is truly terrifying. https://www.telegraph.co.uk/technology/2019/09/18/way-social-networks-shape-politics-truly-terrifying/. The Telegraph (2019).
[7] Ralf Dewenter and Ulrich Heimeshoff. 2014. Media bias and advertising: Evidence from a German car magazine. Review of Economics 65, 1 (2014), 77–94.
[8] Elizabeth Dwoskin. 2019. YouTube’s arbitrary standards: Stars keep making money even after breaking the rules. https://www.washingtonpost.com/technology/2019/08/09/youtubes-arbitrary-standards-stars-keep-making-money-even-after-breaking-rules/. The Washington Post (2019).
[9] Matthew Ellman and Fabrizio Germano. 2009. What do the papers sell? A model of advertising and media bias. The Economic Journal 119, 537 (2009), 680–704.
[10] Jake Eyes. 2019. YouTube Demonetized my Channel and Removed my Ability to Contact Them. YouTube. https://www.youtube.com/watch?v=DvfmpAtKbLI
[11] William Feuer. 2019. YouTube actually steers people away from radical videos, researchers say. https://www.cnbc.com/2019/12/28/youtube-recommendation-algorithm-discourages-radicalism-researchers.html. CNBC (2019).
[12] Sarah Frier. 2017. Facebook stumbles with early effort to stamp out fake news. Bloomberg.com 30 (2017).
[13] Martin Gilens and Craig Hertzman. 2000. Corporate ownership and news bias: Newspaper coverage of the 1996 Telecommunications Act. The Journal of Politics 62, 2 (2000), 369–386.
[14] Umit G Gurun and Alexander W Butler. 2012. Don’t believe the hype: Local media slant, local advertising, and firm value. The Journal of Finance 67, 2 (2012), 561–598.
[15] Edward S Herman and Noam Chomsky. 2010. Manufacturing Consent: The Political Economy of the Mass Media. Random House.
[16] Muhammad Nihal Hussain, Serpil Tokdemir, Nitin Agarwal, and Samer Al-Khateeb. 2018. Analyzing disinformation and crowd manipulation tactics on YouTube. In 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM). IEEE, 1092–1095.
[17] Mansoor Iqbal. 2019. YouTube Revenue and Usage Statistics (2019). Business of Apps (2019). https://www.businessofapps.com/data/youtube-statistics/
[18] John T Jost, Pablo Barberá, Richard Bonneau, Melanie Langer, Megan Metzger, Jonathan Nagler, Joanna Sterling, and Joshua A Tucker. 2018. How social media facilitates political protest: Information, motivation, and social networks. Political Psychology 39 (2018), 85–118.
[19] Asher Koriat, Sarah Lichtenstein, and Baruch Fischhoff. 1980. Reasons for confidence. Journal of Experimental Psychology: Human Learning and Memory 6, 2 (1980), 107.
[20] Paige Leskin. 2020. YouTube brought in $15 billion in advertising revenue in 2019 — 9 times more than Google paid to acquire the site 14 years ago. https://www.businessinsider.com/youtube-advertising-revenue-9-times-google-acquired-video-platform-2020-2. Business Insider (2020).
[21] LyndzyTIME. 2019. MY CHANNEL GOT DEMONETIZED WITHOUT COPYRIGHT STRIKES | What Happen? (LyndzyVLOGS). YouTube. https://www.youtube.com/watch?v=VH_3Kpd7jnc

[22] Tessa Lyons. 2017. Replacing Disputed Flags With Related Articles. https://about.fb.com/news/2017/12/news-feed-fyi-updates-in-our-fight-against-misinformation/. Facebook (2017).
[23] MxR Mods. 2019. My Entire Channel Was Demonetized. YouTube. https://www.youtube.com/watch?v=HO1boP96LiA
[24] Neal Mohan. 2018. Building a better news experience on YouTube, together. YouTube Official Blog (2018). https://youtube.googleblog.com/2018/07/building-better-news-experience-on.html
[25] Alexi Mostrous. 2017. YouTube hate preachers share screens with household names. https://www.thetimes.co.uk/article/youtube-hate-preachers-share-screens-with-household-names-kdmpmkkjk. Times Investigation (2017).
[26] Jack Nicas. 2017. Google’s YouTube Has Continued Showing Brands’ Ads With Racist and Other Objectionable Videos. https://www.wsj.com/articles/googles-youtube-has-continued-showing-brands-ads-with-racist-and-other-objectionable-videos-1490380551. The Wall Street Journal (2017).
[27] Gordon Pennycook, Adam Bear, Evan Collins, and David G Rand. 2019. The Implied Truth Effect: Attaching Warnings to a Subset of Fake News Headlines Increases Perceived Accuracy of Headlines Without Warnings. Management Science, Forthcoming (2019).
[28] Gordon Pennycook, Tyrone D Cannon, and David G Rand. 2018. Prior exposure increases perceived accuracy of fake news. Journal of Experimental Psychology: General 147, 12 (2018), 1865.
[29] Kevin Roose. 2019. The Making of a YouTube Radical. The New York Times (2019). https://www.nytimes.com/interactive/2019/06/08/technology/youtube-radical.html
[30] Jieun Shin, Lian Jian, Kevin Driscoll, and François Bar. 2018. The diffusion of misinformation on social media: Temporal pattern, message, and source. Computers in Human Behavior 83 (2018), 278–287.
[31] David Pakman Show. 2019. SHOCK: YouTube Is Now Corporate Media. YouTube. https://www.youtube.com/watch?v=KeA_ZNUKFHE
[32] Peter C Wason. 1960. On the failure to eliminate hypotheses in a conceptual task. Quarterly Journal of Experimental Psychology 12, 3 (1960), 129–140.
[33] Wikipedia contributors. 2020. Anchoring — Wikipedia, The Free Encyclopedia. http://en.wikipedia.org/w/index.php?title=Anchoring&oldid=934476161. [Online; accessed 05-February-2020].