Algorithmic logics of taste: Cultural taste and the recommender system

Marie Fatima Iñigo Gaw

A thesis submitted in fulfilment of the requirements for the degree of

Master of Digital Communication and Culture

Department of Media and Communications

School of Literature, Art, and Media

Faculty of Arts and Social Sciences

The University of Sydney

November 2019

Statement of originality

This is to certify that to the best of my knowledge, the content of this thesis is my own work. This thesis has not been submitted for any degree or other purposes.

I certify that the intellectual content of this thesis is the product of my own work and that all the assistance received in preparing this thesis and sources have been acknowledged.

Marie Fatima Iñigo Gaw

Acknowledgements

Like algorithms, this research began as an ambiguous and messy project, buried under complex literature and convoluted thoughts. Through the help and support of mentors, peers and friends, I managed to weave everything together and craft something that I am genuinely proud of.

Utmost gratitude to my research supervisor, Dr Justine Humphry, for her intellectual generosity and unwavering patience in helping me clarify, strengthen and sharpen my research. The time and support she gave me were beyond what was expected of her and inspired me to think and work even harder, especially during the difficult moments in the research process.

I thank and appreciate the researchers at the STuF Lab led by Dr Marcus Carter for sharing new perspectives and constructive feedback on my research. The collective intelligence of the room never left me empty-handed.

This research and my study at the university are supported by the Department of Foreign Affairs and Trade of the Government of Australia, through the Australia Awards program. I am indebted to the people of Australia for the opportunity to experience life abroad and attend one of the world's most prestigious universities.

To my friends—Nico Pablo, Tina Sablayan, Nathan Gatpandan, Reeneth Santos, Tristan Zinampan and Carmela Bangsal—for letting me bother them every time I needed to sense-check my ideas and find the perfect words to capture those thoughts, thank you.

A million thanks to my partner, Gabriel Murillo, for all the late-night consultations and relentless reminders to breathe when things got overwhelming. It is now my turn to cheer you on as you complete your master's thesis.

Lastly, to my family, Rosemarie Gaw, Charlotte Gaw and Nimfa Iñigo, who always let me pursue my passions even if it meant I would be away from them, I am grateful for all the love and support.

Abstract

Algorithms are new cultural intermediaries (Bourdieu, 1984) that shape contemporary cultural experiences and identities. Their obscurity and complexity, however, hinder us from understanding their logics and processes as tastemakers.

This research investigates the algorithmic logics of taste of the Netflix recommender system (NRS) to theorise the NRS's construction of taste and its workings as a cultural intermediary. I adapt Taina Bucher's (2016) technography as a methodological approach to studying algorithms beyond the 'black box', through the analysis of discursive materials and traces of algorithmic interactions with key social actors.

Findings reveal that the NRS constructs taste as rules—universal, definite and durable assumptions about cultural identities and objects. They are enacted through the algorithmic infrastructure and constrain human agency through predefined choices without adequate mechanisms for negotiation. This post-hegemonic power (Lash, 2007) contributes to the reproduction of dominant social structures through new ways to interpellate and codify social categories that are the basis of cultural taste. Through their logics, algorithms as cultural intermediaries transcend being capital translators (Hutchinson, 2017) and encompass cultural production, distribution and consumption. Their limitations and fallibilities, however, open pathways for rejecting and subverting the algorithmic construction of taste.

The study presents theoretical and empirical contributions to research on algorithmic cultures and cultural taste, as well as methodological innovation in studying socio-technical actors. I acknowledge that research on algorithms is always partial and limited, and thus I prescribe directions for further research, with the intent to open a conversation on how algorithms can work better for/with humans.

Keywords: algorithms, Netflix recommender system, cultural intermediaries, cultural taste, technography, discourse

Contents

Statement of originality
Acknowledgements
Abstract
Contents
1 Introduction
1.1 Research inquiry
1.2 Significance and innovation
1.3 Considerations and limitations
2 Literature review
2.1 Algorithms as new cultural intermediaries
2.2 Theorising cultural taste
2.3 The Netflix recommender system
3 Methodology
3.1 Methodological approach: Technography
3.2 Research methods
3.2.1 Discourse analysis
3.2.2 Accretion measures
3.3 Data collection
3.4 Data analysis
3.5 Ethics statement
4 Findings
4.1 Discursive formations
4.1.1 Extraction: Implicit and explicit taste preferences
4.1.2 Appraisal: Altgenres, taste communities, criteria and A/B testing
4.1.3 Prediction: Recommendations, homepage personalisation and artwork and evidence personalisation
4.2 Discursive manifestations and contradictions
4.2.1 Issues and criticisms in media discourse
4.2.2 Netflix user interactions on Twitter
5 Discussion and analysis
5.1 Construction of taste
5.2 Algorithmic logics of taste
5.3 Algorithms as cultural intermediaries
6 Conclusion and recommendations
References
Appendices
Appendix A: Discursive formations in the Netflix and media discourse
Appendix B: Issues and criticisms in the media discourse
Appendix C: Netflix user interaction coding scheme
Appendix D: Netflix user interaction geolocation on Twitter

Chapter 1

Introduction

Since it launched its streaming business in 2007, Netflix has disrupted the way we access and consume television content. Core to its business is the Netflix recommender system (NRS), a set of algorithms that suggests content based on individuals' taste preferences. Through the algorithms' computational power, Netflix devised new mechanisms to refine its recommendations. Its vast library of content is organised into thousands of hyper-specific categories called altgenres (Madrigal, 2014). Rejecting demographic profiling, it created global taste communities as a way to distinguish its member base (Rodriguez, 2017). Recommendations have also become more granular as the algorithms customise title sequencing, artwork, search, and so on.

Algorithmic recommendations are a compelling part of the Netflix experience, but they are not free of criticism. Controversies over allegedly race- and gender-based recommendations have recently surrounded Netflix (Berkowitz, 2018; Ha, 2019). Conflicts between Netflix and Hollywood emerged with the purported ascendancy of algorithmic rationality over creative decisions (Ramachandran & Flint, 2018). Critics also contend that recommendations have not been responsive to users' taste preferences (Diaz, 2018). In all these controversies, Netflix executives upheld the algorithms' neutrality by reiterating that the recommendations are exclusively based on users' taste preferences and viewing habits.

The NRS represents the distinct ways algorithmic machines are shaping cultural encounters and facilitating everyday cultural processes. Algorithms are a new kind of cultural intermediary (Morris, 2015; Gillespie, 2016; Hutchinson, 2017), which Bourdieu (1984) defines as entities that create and manipulate meanings attached to commodities to facilitate their movement in the market. Negus (2002) builds on this definition by positioning cultural intermediaries as the bridge between the production and consumption of symbolic goods. Hutchinson (2017) designates them as capital translators, transforming capital from one form to another to create new value. However, algorithms are inherently complex and are concealed in proprietary 'black boxes' (Bucher, 2016). Despite their ubiquity and significance in contemporary culture, their inner workings and how they operate as cultural intermediaries remain unknown.

Taste is central to understanding algorithms as cultural intermediaries as it is both the basis and the product of algorithmic processes. Bourdieu (1984) defines taste as a disposition, acquired and cultivated through social and cultural capital. At the same time, taste is also an act embodied through consumption (Bourdieu, 1984; Peterson, 1992; Hennion, 2004). This two-fold character constitutes taste and renders it unstable and plural in nature (Peterson, 1992). Algorithms construct taste following their own computational logics, as illustrated by the NRS. Taste becomes data extracted from users, which is then appraised into categories such as altgenres and taste communities. The NRS then predicts taste through recommendations, which, when consumed, create more data about individuals' tastes. These algorithmic processes suggest that, contrary to existing theories of taste, algorithms may regard taste as fixed, quantifiable, and predictable.

These logics of taste are important to investigate not only for theoretical reasons but also because, historically, taste has been a mechanism to define and classify individuals and cultures according to dominant social structures (Bourdieu, 1984). The NRS is already performing such categorisations, but there is a lack of knowledge of how they are generated. What is evident from earlier research are the consequences of the dominance and embeddedness of algorithmic logics in social life. Building on the works of Bucher (2012), Gillespie (2016) and Cohn (2019), in what ways might algorithms privilege particular tastes and neglect others? Do they form calculated publics—algorithmically generated groups of individuals (Gillespie, 2014)—that may perpetuate and normalise presumptions about social identities? Lastly, how might algorithms influence individuals' cultivation of taste by enacting what Lash (2007) terms post-hegemonic power, which refers to the recursive reproduction of social structures within culture? The obscure yet prominent position of algorithms in shaping our cultural experiences and identities necessitates a critical understanding of their logics and their implications for taste cultivation.

Research inquiry

This research examines the algorithmic logics of taste through a discursive analysis of the Netflix recommender system. In computer science, logics are sets of rules that control the functioning of an algorithm (Huth & Ryan, 2004). When logics are set in motion through algorithmic processes, their interplay manifests the algorithms' assumptions about concepts, objects and people (Kitchin & Dodge, 2011). The complexity of these processes, however, needs to be disentangled to be critically examined. In order to do this, I narrowed down the investigation to three major algorithmic processes carried out by the NRS as previously noted: extraction, appraisal, and prediction. Extraction involves obtaining the input of 'taste' to the algorithms, appraisal transforms and assesses the input following certain rules, and prediction generates the output in the form of recommendations.

Unlike Facebook, YouTube, and Spotify, the NRS does not draw on social actions between users to generate content and recommendations. It exclusively governs the cultural processes within its algorithmic infrastructure, and this makes it an exemplary case of algorithms as cultural intermediaries. The prevalence of discourse about the NRS in Netflix's marketing and media coverage also opens pathways to study its algorithmic functioning outside the 'black box.' In the methodology chapter, I explain how reframing algorithms as relational infrastructures (Star, 1999) enabled me to discursively construct the internal logics of the NRS and reveal information and insights beyond those programmed in code.

Through the examination of the algorithmic logics of taste of the NRS, this research aims to elucidate the nature of algorithms as cultural intermediaries. It intends to explore the power and extent of the NRS’s influence over our cultural experiences and identities, specifically how it manifests in cultural processes of taste cultivation.

In sum, the research seeks to answer the following questions:

RQ1. How is taste constructed through the logics of the NRS?

SQ1. What does it assume about taste?

SQ2. How does it extract taste?

SQ3. How does it appraise taste?

SQ4. How does it predict taste?

RQ2. How does it inform our understanding of algorithms as cultural intermediaries?

Significance and innovation

Research on algorithms has mostly focused on bias and its social impacts: search engine algorithms have been found to reinforce status quo inequalities and prejudices (Goldman, 2008; Granka, 2010; Noble, 2018), and social media algorithms have been thoroughly exposed for their polarising effects on the public sphere (Pariser, 2011; Seaver, 2013; Carlson, 2018). Work on algorithms and culture, however, has been scarce, with little attention to their relationship to taste. This research situates the study of algorithms at its intersection with cultural production, identity formation, and the creative industries. It provides empirical evidence to support arguments forwarded by scholars on the cultural intermediation of algorithms (Morris, 2015; Gillespie, 2016; Hutchinson, 2017) and identifies directions for research.

The study also builds on theories of cultural taste in contemporary digital contexts. Examined in depth in the next chapter, existing frameworks of taste cannot account for algorithms' distinct forms and mechanisms of influence (Bhaskar, 2016) nor represent non-human cultural agents (Latour, 2005). Algorithmic infrastructures also blur and even transcend defined categorical boundaries such as cultural field, habitus and cultural capital (Bourdieu, 1984). More importantly, their internal, self-fulfilling power is absent in these models, particularly the conditions they impose on human agency (Beer, 2013). The research contributes to critical new thinking on the role of algorithms in taste cultivation.

Lastly, the research supports the exploration of new methods of studying algorithms that challenge the 'black box' rhetoric. Building on Taina Bucher's (2016) technography, which is a way to study algorithms in their social entanglements, I strengthen and refine the methodology by suggesting ways to reappropriate traditional methods from media studies and adjacent fields to examine the embodiment of algorithms in their interactions with social actors.

Considerations and limitations

Algorithms are inherently complex and unstable; thus, they are not completely knowable. This research acknowledges that the knowledge it can produce on the NRS is always partial and limited. Partiality, however, can be enriched by employing plural and creative research methods. Technography is a methodological approach that supports this multiplicity of methods to make sense of the "complex, diffuse and messy" (Law, 2004, p. 2) realities of algorithms. It relies on methods that involve 'found data' such as discursive materials and traces. These can be further supported with an interrogation of the algorithms through reverse engineering (Diakopoulos, 2015) and the use of digital ethnography to examine user experiences (Kitchin, 2017; Seaver, 2017). The research is also temporally specific. While it investigates the NRS in its iterations from August 2007 to May 2019, I recognise that the system is always in flux and may evolve or change in the coming months and years.

In this research, there is a presumption that the NRS is designed to respond to individual taste profiles, and this is the basis for the analysis of its logics of taste. In reality, however, Netflix accounts are shared, and some account owners do not avail themselves of the affordance of setting up separate profiles for each user. This presents specific consumption contexts that complicate our relationship with algorithms and can be explored in future research that focuses on people's lived experiences with algorithms.

Conclusion

The research inquiry centres on examining the algorithmic logics of taste of the NRS to help us theorise algorithms as cultural intermediaries. It is crucial to recognise that the knowledge produced and limitations drawn in this study of algorithms equally contribute to its theoretical and methodological significance. The next chapter reviews the literature to problematise the intersections between algorithms, taste and the NRS.

Chapter 2

Literature review

This chapter delves into the literature on algorithms to comprehend their entanglement with taste, culture, and creative industries. I first establish algorithms as cultural intermediaries and explore how they facilitate cultural processes. Then, I present the theoretical debate on taste and situate it in contemporary algorithmic cultures. To close the review, I expound on the NRS and its surrounding political, economic and cultural contexts.

Algorithms as new cultural intermediaries

An algorithm seems to be a straightforward concept—"a set of mathematical procedures whose purpose is to expose some truth or tendency about the world" (Striphas, 2015, p. 405). It presents a rendition of the social through data manipulation and computational reasoning (Finn, 2017). In reality, Roberge and Seyfert (2016) characterise algorithms as ambiguous, messy and often hidden from sight. Their automated nature obfuscates the human and computational involvement in decision making (Cohn, 2019). As they embed themselves in everyday life, algorithms further obscure their workings in structuring social realities (Gillespie, 2014). Their recursiveness also makes it impossible to demarcate when and where they begin and end as the data that reconfigure culture infinitely feed and fold back (Beer, 2013). Kitchin and Dodge (2011) designate algorithms as both producers and products of social processes; as "models analyse the world and the world responds to the models" (p. 30). Despite this secrecy and complexity, the mythology of algorithmic objectivity persists and entities that employ algorithms use them to deflect and dismiss criticisms (Gillespie, 2014).

An algorithm, in its operation and representation, is fundamentally a self-affirming machine. Lash (2007) identifies this as a new form of power that works from within culture: post-hegemonic power. This power defines the way algorithms work as cultural intermediaries—entities that operate between production and consumption (Negus, 2002)—by exclusively constructing meanings and values through logics. Cohn (2019) contends, however, that their legitimacy is derived from their purported computational rationality and neutrality, claimed to be "blind of race, gender, class" and other social identities (p. 26). Gillespie (2014) highlights how this claim supports the narrative that algorithms know people better than they know themselves.

The underlying presumption of algorithms is that culture can be quantified (Beer, 2013). They repackage cultural artefacts and practices into computable abstractions (Finn, 2017), splicing them into infinite attributes and reorganising them into new categories (Morris, 2015). Gillespie (2014) argues that "categorisation is a powerful semantic and political intervention" and algorithms maintain and produce social orders through this process (p. 171). Finn (2017) regards algorithms as ontological structures that frame our understanding of the world.

Langlois (as cited in Gillespie, 2014) asserts that algorithms have the capability to ascribe meaningfulness and grant relevance (and irrelevance) to cultures and identities. Their calculations of cultural value are obscured, but their considerations manifest in the ways they provide visibility (Bucher, 2012) or direct individuals to certain objects and away from others (Amoore, 2011). According to Gillespie (2014), commercial logics often determine their rubric of relevance, privileging populist and novel forms of cultures. Beer (2013) further argues that algorithms draw and redraw possibilities and boundaries in the circulation of culture. They prescribe behaviour and action through predefined choices, keeping the full range of prospects "hidden and impenetrable" (Cohn, 2019, p. 33). Lash (2007) points out that even the dynamics of discovery are reworked as algorithms allow culture to 'find' us instead of us looking for it.

Subjectivity is structured by algorithms through what Cheney-Lippold (2017) terms algorithmic identity. He illustrates how algorithms interpret individuals' data to reify identities based on arbitrary and proprietary assumptions about social groups, which Nakamura (2002) asserts, in the broader context of digital technology, reinforce dominant hierarchies of race, gender, and sexuality. Algorithmic identities are constructed by simultaneously inferring from our identity performances (Butler, 1988) and our responses to coded interpellations (Althusser, 2011). They are also formulated in relation to other data connections (Cheney-Lippold, 2017) and when taken collectively form what Gillespie (2014) calls calculated publics. Gillespie (2014) underlines that these algorithmic representations of publics, regardless of whether they exclude certain individuals or groups, codify social perceptions of ourselves and others. However, there is little to no means for users to challenge or negotiate with algorithms to correctly represent them (Gillespie, 2014) as often, they are "only talked about, not to" (Cheney-Lippold, 2017, p. 170).

The entrenchment of algorithms in circuits of culture (Hall, 1997) provokes questions about their actual and potential capacity in the reproduction of dominant social structures. However, Cohn (2019) contends that "power exists in the dialectical relationship between users, algorithmic technologies and the industries that employ them" (p. 16). As much as they are powerful, algorithms are equally ambiguous and entangled in the social, political and economic contexts of their users and proprietors.

Theorising cultural taste

How algorithms construct cultural taste requires tracing the theoretical debate on taste and its relationship with social structures, agency and subjectivity. The study of taste rests on the premise that taste is socially constructed and distinguishes individuals and social groups from each other, as developed by Bourdieu and others, whose work provides the groundwork for a contemporary evaluation of the theory of taste.

In his seminal book Distinction, Bourdieu (1984) designates taste as a marker of social class. Taste is cultivated through the accumulation of cultural capital, embodied through the habitus and performed in various cultural fields (Bourdieu, 1984). Cultural capital shapes individuals' predispositions that are generated through the privileges and restraints of their social origin. Habitus is a "structuring and structured structure" that governs social practices, as well as the norms and conditions of social divisions (Bourdieu, 1984, p. 170). It enacts a person's predispositions through embodied practices within regulated liberties (Bourdieu, 1992), which makes habitus not deterministic but generative of social order. The cultural field is the locus of power relations between cultural agents and intermediaries. It is within the field where meanings and associations attached to practices and objects are determined and produced. Key to Bourdieu's theory is the relational nature of capital, habitus and field in cultivating cultural taste.

Bourdieu's theory of taste in Distinction was widely recognised and extensively criticised, and from such criticism emerged new thinking. Bennett et al. (2009) point out that Bourdieu neglected the intersectional nature of taste and the equally important social positions of gender, ethnicity and age in its cultivation. The underlying assumption of taste as a manifestation of class hierarchy is challenged by Peterson (1992) through his model of cultural omnivorousness. His research illustrates that individuals engage in diverse taste practices that occupy "more or less equal taste value" not necessarily defined by social affiliations (Peterson, 1992, p. 254). Lahire (2005) also disputed Bourdieu's concept of the unified habitus, arguing that individuals have dissonant tastes and exhibit contradictory taste practices across cultural fields. In her discussion of gender and agency, McNay (2000) further argues that Bourdieu overlooked "the ambiguities and dissonances that exist in the way that men and women occupy masculine and feminine positions" (p. 54) in relation to taste.

Cultural capital has also taken more plural forms such as subcultural capital (Thornton, 1996) and is derived from new sources such as multiculturalism (Hage, 1998) and new media (Emmison & Frow, 1998). Bennett et al. (2009) also challenge the static hierarchical position of cultural objects in Bourdieu's framework, arguing that individuals are observed to 'play with texts' beyond the confines of social categories and their own social positions. This implies that objects of taste can be resignified and appropriated by individuals with the "greater fluidity of reading and interpretative contexts" through discursive social practices (Bennett et al., 2009, p. 22).

Bourdieu (1992) identified habitus as the dominant class's mechanism for inflicting what he calls symbolic violence, subjugating social groups through their own actions. Butler (1993) concurs with him, but she highlights that the performative and reflexive nature of social practices allows for the possibility of agency to negotiate social positions. Bourdieu (1992) later redefined habitus as "an open system of dispositions that is constantly subjected to experiences" and recognised it as a vehicle for both the reproduction and subversion of social order (p. 133).

Intersecting with classed, gendered and racial social structures is the dominance of neoliberal market logic in cultural fields, which Bennett et al. (2009) assert privileges specific forms of cultural capital over others. Latour (2005) also commented on Bourdieu’s model as leaving out key components of the social, in particular, the discussion of technical, non-human actors in taste cultivation.

There is minimal overlap between the scholarship on taste and that on algorithms, but the literature suggests that both involve processes that presuppose and produce categories and hierarchies that organise the world (Bourdieu, 1984; Gillespie, 2014; Cheney-Lippold, 2017). However, models of taste inextricably attach these distinctions to individuals' social positions, to which algorithms are claimed to be indifferent (Cohn, 2019). How taste manifests in the NRS's logics is a focus of this research investigation.

The Netflix recommender system

Netflix is many things—a global media network, a technology company, a consumer brand, among other characterisations. It has the discursive slipperiness of most new media platforms, and an analysis of it is contingent on how it is defined (Lobato, 2019) and who is doing the defining. In this research, I conceptualise the NRS as an infrastructure, whose meanings emerge from its relationships with social actors (Star, 1999) and its surrounding political, economic and social discourses.

The NRS is the operating system of Netflix's on-demand video streaming business. It primarily employs collaborative filtering, which uses large user datasets to infer individuals' taste preferences, and content-based filtering, which leverages content attributes from users' past interactions, to generate its recommendations (Aggarwal, 2016). Together, these techniques nudge users towards particular content that suits their assumed taste and preferences—at least that is the idea.
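For illustration only, the sketch below renders these two techniques as a toy hybrid recommender in Python. All data, weights and function names are my own invented assumptions for demonstration; Netflix's actual models are proprietary and vastly more complex.

```python
# A toy illustration of the two filtering techniques described above.
# Everything here (data, weights, names) is invented for demonstration;
# it is not Netflix's implementation.
import numpy as np

# Rows = users, columns = titles; 1.0 = watched/liked, 0.0 = no signal.
interactions = np.array([
    [1.0, 1.0, 0.0, 0.0],
    [1.0, 0.0, 1.0, 0.0],
    [0.0, 1.0, 0.0, 1.0],
])

def collaborative_scores(user_idx: int) -> np.ndarray:
    """Collaborative filtering: weight titles by how similar users behaved."""
    norms = np.linalg.norm(interactions, axis=1, keepdims=True)
    unit = interactions / np.clip(norms, 1e-9, None)
    sim = unit @ unit[user_idx]      # cosine similarity of every user to this one
    sim[user_idx] = 0.0              # ignore the user's own row
    return sim @ interactions        # weighted sum of neighbours' interactions

# Content-based filtering: columns = hypothetical content attributes (microtags).
title_attributes = np.array([
    [1, 0, 1],   # title 0: e.g. 'cerebral', not 'romantic', 'dark'
    [1, 1, 0],
    [0, 1, 1],
    [0, 0, 1],
])

def content_scores(user_idx: int) -> np.ndarray:
    """Content-based filtering: match attributes of past interactions."""
    profile = interactions[user_idx] @ title_attributes  # the user's taste profile
    return title_attributes @ profile

user = 0
hybrid = 0.5 * collaborative_scores(user) + 0.5 * content_scores(user)
hybrid[interactions[user] > 0] = -np.inf   # do not re-recommend watched titles
print("Recommended title:", int(np.argmax(hybrid)))
```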

Netflix CEO Reed Hastings aspires for the platform to be "so good at suggestions that [it's] able to show you exactly the right film or TV show for your mood when you turn on Netflix" (The Economist, 2017, para. 1). Netflix made this ambition a public project through the Netflix Prize (Hallinan & Striphas, 2016). Tasked to improve Cinematch, the then Netflix recommendation algorithm, developers and technologists from around the world worked on the problem from 2007 to 2009. The winning algorithm, however, was never implemented, but Netflix has continued to develop algorithmic innovations.

Netflix's algorithmic breakthroughs are characterised by the unprecedented granularity of its data and recommendations. Content has been torn apart into 77,000 altgenres (Madrigal, 2014) and dozens of personalised artworks are deployed for every film and TV show (Wilson, 2017). Through the creation of new consumer categories such as taste communities, Netflix also adopts what Rogers (2013) defines as postdemographic profiling. By exclusively generating insights from users' behaviour, the algorithms are allegedly freed from traditional markers of identity. Ultimately, the NRS intends to deliver a hyper-personalised experience, but little is known about the specific mechanisms it uses.

Netflix's business also shifted from primarily distributing content to producing its own Netflix Originals. House of Cards, its first self-commissioned show, was promoted as a guaranteed success informed by consumer data (Finn, 2017). Since then, Netflix Originals have been constantly visible on the Netflix platform, and many presume these to be algorithmically favoured (Jenner, 2018). Netflix's success also opened up closer collaboration with the entertainment industry, but its data-driven approach sometimes conflicts with Hollywood's creative discretion in the areas of show development, marketing and audience targeting (Ramachandran & Flint, 2018).

Crucial to Netflix's business is its global reach. Since its global expansion in 2016, Netflix has introduced 'local' content to its libraries to appeal to transnational audiences (Jenner, 2018). However, 'local' content is usually less than one-fifth of its collection and the majority is still American-produced (Lobato, 2019). According to Lobato (2019), this follows the historical one-directional flow of media from dominant Western cultures to the rest of the world that underpins the operations of cultural imperialism.

The high-profile recommender system, however, is not devoid of criticisms, which are investigated in depth in the later chapters. Key to this is what Cohn (2019) calls the Napoleon Dynamite problem, which, like its referent film, concerns the uncertainty over the predictors that drive people's preferences. He argues that "prioritising certain details over (or instead of) others can result in very different conceptions of taste" (p. 108). Zaslow (2002) also documents anxiety among heterosexual male users over the misidentification of their sexual identity by the earlier automated technology TiVo. At the time, users claimed that TiVo 'thinks' they are gay based on what they watched and counteracted it by "recording war movies and other 'guy stuff'" (Zaslow, 2002, para. 3).

The narrative of the NRS espoused by the company is as important to understand as its algorithmic logics in the production and dissemination of global culture. The imaginaries that Netflix has built around it shift the responsibility for its algorithmic results to users. The semantic use of 'recommendations' implies the performance of agency, but Cohn (2019) argues that there is a need to qualify the 'choices' users have in making Netflix serve their taste and interests.

Conclusion

The literature illustrates that taste has historically been understood as a cultural phenomenon utilised for social classification and order. This necessitates examining the role of algorithms in taste construction and asking if algorithmic logics further entrench existing hierarchies or construct new opportunities to negotiate social positions through taste practices. However, it was also made clear that algorithms are ambiguous and constituted by their social interactions, which undermines their influence over taste cultivation. As in the case of the NRS, acts of conformity and subversion are contingent on our expectations of the algorithms and the mechanisms that allow us to negotiate with them. A critical interrogation of algorithms first necessitates uncovering their logics to overcome their obscurity and ambiguity (Roberge & Seyfert, 2016), which I address in the next chapter.

Chapter 3

Methodology

The literature has established that, far from being perfect machines, algorithms are vague, obscure and messy. This chapter expounds on how I surfaced and assembled the algorithmic logics of taste of the NRS, which involved adapting Taina Bucher's (2016) technography as a methodological approach. The subject of this investigation is the NRS as a set of inextricably entwined algorithms. I first introduce the research methods employed to make the algorithmic processes visible and then detail the data collection and analysis process, which enabled me to discursively construct the algorithmic infrastructure of the NRS.

Methodological approach: Technography

Algorithms' deemed opaqueness reinforces the narrative that they are impenetrable 'black boxes', and therefore incomprehensible if not analysed through code. Bucher (2016) argues that this thinking only distracts scholars from critically examining algorithms and contends that they are "neither black [n]or box, but a lot more grey, fluid and entangled" (p. 94). In this research, I adapted her innovative methodological approach of technography, which involves studying the intersection of software and sociality (Bucher, 2016). Technography examines algorithms in relation to social actors to document the materialisation of norms and values in their interactions. This relational character of algorithms is inherent in many kinds of infrastructures, which Star (1999) asserts are inscribed with meaning based on social circumstance.

Technography is an agile methodology similar to the ethnographic approach and thus is not prescriptive of specific methods. Inspired by the experimental methods of earlier research on algorithms (Bucher, 2016; Kitchin, 2017; Seaver, 2017), I focused on their discoverable aspects. I looked at two evident entanglements of the algorithm with social actors: the discourse established by the Netflix company and media reports, and the traces of its social interactions with its users. Discourse analysis of corporate and media documents serves as my starting point in constructing an "account of the algorithms by assembling information from different sources" (Bucher, 2016, p. 87). Then, I investigated the social interactions on Twitter through the method of accretion measures (Webb et al., 1966), which locates the digital traces of everyday algorithmic encounters. The data produced from these methods serve as discursive embodiments of the logics of the NRS, which explicate how it constructs taste and works as a cultural intermediary.

Research methods

Discourse analysis

Taste is front and centre in the discourse of the NRS, which has been widely discussed not only for its major impact on the television industry but also for its controversial algorithmic output as introduced in the earlier chapters. A discourse is defined as a "group of statements which structure the way a thing is thought and the way we act on the basis of that thinking" (Rose, 2001, p. 136). The discourse is then both the embodiment of the NRS and the premise of users' interaction with it. Discourse analysis serves as a suitable method as it requires close reading of texts while being cognisant of their interlocutors (Stokes, 2013). This way, it answers not only "how it works" but also "who it works for" (Galloway, 2004, p. xiii).

A systematic approach to analysing discourses of infrastructures is to identify the master narrative first and in the process discover 'other' perspectives (Star, 1999). The master narrative represented by Netflix engineers and executives is found in materials from Netflix's corporate websites, academic publications and social media platforms. Barthes (1993) asserts that texts espoused by corporate entities should be assumed to be 'motivated' by political and economic interests. However, Stokes (2013) argues that there is value in these corporate narratives in that they carry implicit and extra-textual cues, which Bucher (2016) suggests can illuminate new meanings about the complexity of algorithms.

Surfacing 'other' narratives requires us to look in multiple locations (Seaver, 2017). Media reports were thus included to provide a greater diversity of social actors involved in constructing the NRS discourse, particularly around the controversies and issues surrounding the algorithms. These narratives exhibit the instances in which the NRS produces results that are not built in or anticipated by its developers, as well as resistance to computational rationality by traditional cultural intermediaries such as Hollywood producers and actors. From the juxtaposition of these conflicting narratives, I have drawn out emergent consistencies and inconsistencies to reveal the underlying algorithmic infrastructure and logics of taste of the NRS.

Accretion measures

While discourse illustrates how the algorithms work, the interactions of people with the algorithms reveal how they are working, or not working as intended. However, interactions with the NRS are often solitary and private, characterised by more automatic, non-reflexive consumption (Steiner & Xu, 2018).

There are instances, however, when algorithms rise to the foreground: when dysfunctions, controversies and unexpected events happen (Star, 1999; Law, 2000; Latour, 2005). During these salient encounters, Bucher (2017) asserts that individuals' algorithmic imaginaries escape the private realm and are 'articulated, discussed, and contested' in public spaces (p. 90). Apart from making the algorithms visible, an examination of system failures exposes other factors and actors involved in the complex dynamics of the infrastructure (Law, 2000).

These incidents materialise as social media posts and create digital traces of users' interactions with the NRS. I used the anthropological method of accretion measures, first documented by Webb et al. (1966), to examine traces of a social phenomenon 'naturally' accrued over a specific period of time and within particular spaces. It is an unobtrusive method as data is 'found' without the intervention of a researcher. The accretion measures technique was adopted to analyse the social patterns that manifest, augment or contradict the algorithmic logics of taste earlier identified in the discourse analysis.

Data collection

Surfacing the algorithmic infrastructure requires iteratively eliciting evidence through a number of stages of collection and analysis, as advocated by Timmermans and Tavory (2012) in the abduction analysis framework presented later in the chapter. The data collection follows this sequential process by first drawing from the Netflix discursive materials, then adding to this the media discourse. Afterwards, I processed the data from the accretion measures using the codes and themes that emerged from the discursive materials.

I began examining the Netflix discourse by perusing the earliest to the most recent published materials about the NRS on Netflix's platforms, which include the Netflix Tech Blog and its media and consumer help websites. Netflix executives and engineers serve as the narrators of this discourse, which led me to explore discursive materials they have produced on Quora, LinkedIn and academic journals. Through close reading, I lifted the statements in the materials pertinent to my inquiry on the algorithmic logics of taste and excluded corporate and technical details on marketing, systems architecture, data storage, and so on. I organised and coded them iteratively and came up with the concepts and procedures that form the discourse on the NRS built by Netflix.

Unlike the Netflix documents, media reports have no fixed starting point. This necessitated setting up a broad search on the Google search engine using the keyword 'Netflix' with either 'algorithm' or 'recommendation system'. From there, I identified the major topics related to the NRS and the timeframes when they were covered. I used these to direct a narrow search on Factiva, a database that collates documents from mainstream media sources and high-profile blogs, and on Google to augment the limitations of the former. I collected articles and reports from major and niche media outlets, including general interest, entertainment and technology news, until I reached a saturation point. I then employed close reading and classified the media discursive materials following the same codes I generated from the Netflix discourse, augmented with issues and criticisms identified from the reports on controversial algorithmic output.

For the collection of the accretion measures, the digital traces of user interactions were sourced exclusively from Twitter, where a significant number of users were observed to share their algorithmic encounters. I extracted tweets through Sysomos, a commercial social listening tool subscribed to the Twitter API, using the keyword 'Netflix' paired with 'algorithm', 'recommendation system' or 'recommendations' to account for the looseness of language used in social conversations. I collected three months of tweets, from February to April 2019, to work within the research limitations. No location filter was applied to the tweets, in order to capture Netflix's transnational audiences.

A total of 6,676 tweets were collected and filtered. I refined the data to exclude posts from brands and organisations, as well as promotional and repetitive retweets. After finalising the sample, I categorised the accretion measures with the emergent codes from the discourse analysis and developed granular sub-codes given the specificities of the tweets. I identified recurring patterns across the codes and sub-codes and transformed them into themes to see larger patterns in the social interactions of users with the NRS.
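To illustrate the filtering and coding steps just described, the sketch below expresses them in Python with pandas. The column names, keyword heuristics and codes are hypothetical simplifications; in practice, the refinement and coding were iterative and manual.

```python
# Hypothetical sketch of the tweet filtering and coding steps.
# Column names, markers and codes are invented simplifications.
import pandas as pd

tweets = pd.DataFrame({
    "author": ["@filmfan", "@StreamDealsBot", "@viewer2"],
    "text": [
        "the netflix algorithm thinks I only watch true crime",
        "RT Watch FREE movies now! #netflix #algorithm",
        "netflix recommendations have been so off lately",
    ],
    "is_retweet": [False, True, False],
})

# Exclude retweets and obviously promotional or brand-like accounts.
promo_markers = ("free", "deal", "promo", "giveaway")
mask = (
    ~tweets["is_retweet"]
    & ~tweets["author"].str.lower().str.contains("bot|deals|official")
    & ~tweets["text"].str.lower().str.contains("|".join(promo_markers))
)
sample = tweets[mask].copy()

def code_tweet(text: str) -> str:
    """Assign an emergent code from the discourse analysis (simplified)."""
    text = text.lower()
    if "recommend" in text:
        return "prediction"   # talk about recommendations as output
    if "thinks" in text or "knows" in text:
        return "appraisal"    # talk about how the system judges taste
    return "other"

sample["code"] = sample["text"].apply(code_tweet)
print(sample[["author", "code"]])
```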

Data analysis

The challenge of discursively assembling the algorithmic infrastructure is the plural nature of the data in this research, with multiple interlocutors aligning and conflicting in their accounts. To analyse the data, I engaged two analytical frameworks to account for its concurrent stability and instability, with the algorithms discursively constructed and reconstructed through their social interactions. I first built the discourse on the NRS using Foucault's (1972) discursive formations by identifying the relevant concepts consistent across the Netflix and media discourse. Then, I employed abduction analysis (Timmermans & Tavory, 2012) to examine the significance of the incompatibility of the media discourse and Twitter accretion measures with the emergent discursive formations.

Discursive formations are the schemes that organise discourse within particular knowledge contexts, attached to a field or network of associations (Foucault, 1972). In this research, the NRS is the substantive field where the discursive statements are 'anchored' (Schaanning, 2000), comprising functional concepts such as algorithms, recommendations and taste, among others. Discursive formations elucidate the underlying logics of the algorithms by revealing how these concepts are formulated by actors and how they are organised.

Key to formulating the concepts that construct the discourse about the NRS is identifying the following: the actors articulating these concepts and their positions in the field, the context and modalities of their utterance, and the rhetoric and distribution strategies that enable the emergence of the concepts. From there, I defined the meanings of, and relations between, these concepts to map out the algorithmic processes of extraction, appraisal and prediction and formulate the logics that construct taste within the NRS.

The issues and criticisms in the media discourse and the accretion measures represent instances of unexpected and dysfunctional algorithmic output. I used abduction analysis as an analytical framework to formulate new hypotheses from such 'surprising research evidence' (Timmermans & Tavory, 2012). Timmermans and Tavory (2012) define abduction analysis as an 'inferential creative process' characterised by its interactive and iterative approach to analysing intriguing and anomalous data. As a critical response to grounded theory, it privileges abduction (Peirce, 1934) over induction such that discovery and justification are inseparably performed during the course of data analysis. Where grounded theory admits no theoretical assumptions into the analysis, abduction analysis works with theoretical preconceptions that fuel its innovative approach, which in this case are the discursive formations.

By drawing temporal, semantic and analytical distance (Timmermans & Tavory, 2012), I iteratively coded and recoded the Twitter data with emergent meanings and insights. Afterwards, I tested them against the discursive formations, with the intent of augmenting and refuting hypotheses to decide which of those are 'worth pursuing' (Timmermans & Tavory, 2012). The initial findings were validated by carrying out the abduction process again until I procured coherent insights to define the discursive manifestations and contradictions of the NRS.

Ethics statement

The tweets collected in this research are considered to be in the public domain and do not contain sensitive information. Nonetheless, appropriate measures have been taken to ensure that they are non-identifiable and to protect the privacy and confidentiality of their authors. These include removing unnecessary metadata and only keeping the key data points (the content of the tweet and the location of the tweet) in the dataset. Considering that an exact tweet can be traced back using a search engine, the tweets are generalised in the discussion of findings.

Conclusion

This chapter presents a specific and systematic methodology developed to discursively surface and assemble the discoverable aspects of the NRS. Through the research methods of discourse analysis and accretion measures, I recognised the stability and instability of discourses within the corporate documents, media reports and Twitter data and attempted to understand how they discursively construct the algorithmic logics of taste. The emergent analysis through the frameworks of discursive formations and abduction analysis serves as an organising principle in the presentation of the findings in the next chapter.

Chapter 4

Findings

This chapter presents the findings from the examination of the algorithmic infrastructure of the NRS using technography as a methodological approach (Bucher, 2016). First, I present and explain the discursive formations (Foucault, 1972), or the functional concepts identified that construct the discourse of the NRS from Netflix and the media. Then, I examine the manifestations and contradictions that emerged from the analysis of the media discourse and accretion measures through abduction analysis (Timmermans & Tavory, 2012), the purpose of which is to iteratively test dissonant data against earlier assumptions to generate new findings.

The findings exhibit coherence and dissonance between the promise of demographic-neutral personalisation and users' experiences with the NRS, some of which are reported to be too broad, if not limiting, and anchored on users' race, gender and sexual orientation. This instability in the algorithmic infrastructure elucidates the logics that govern the construction of taste of the NRS, which I analyse in detail in the next chapter.

Discursive formations

The first stage of my investigation identified ten discursive formations (Foucault, 1972) from a total of 60 Netflix documents and media reports from August 2007 to April 2019 (Appendix A). These key discursive formations organise our understanding of the NRS and represent the official accounts of the algorithmic infrastructure as well as additional media narratives. Given the inherent complexity of algorithms, I concentrated on the discursive formations that related to the major algorithmic processes I previously defined and allocated them based on their relationship with these data processes: those involving data collection from users were assigned to extraction, data processing to appraisal, and data presentation to prediction.

Extraction

One of the main discursive formations identified that related to the process of extraction, by which signals and actions that define user tastes and interests are identified and collected, was that of implicit and explicit taste preferences.

The Netflix documents have consistently referred to two types of data collected by the NRS to determine its users' tastes and interests, namely explicit and implicit taste preferences. Explicit taste preferences refer to user actions that directly provide feedback to the NRS about individual tastes. This includes answering a new profile survey, creating title queues, and rating content with a thumbs up or down. Before the thumb-based rating system, earlier documents indicate that the NRS used a five-star rating system characteristic of review platforms such as Rotten Tomatoes and Metacritic. In a Wired magazine interview in 2017, Netflix executives said that they had replaced this rating system with a binary one to capture intuitive personal feedback instead of evaluations based on perceived social standards. In an earlier press release, Netflix also stated that it had eliminated external data such as reviews and box office results. Implicit taste preferences, on the other hand, are user signals that indicate consumption behaviour within the system. Plays, watch time, searches and navigation are all implicit data. The Netflix viewing patterns of users' Facebook friends used to be taken into account in determining recommendations, but the practice was discontinued after 2012.

Netflix and supporting media reports have underlined that implicit taste preferences take precedence over explicit data as they are a stronger predictor of consumption. Netflix VP Carlos Gomez-Uribe shared that many viewers express their interest in foreign films and documentaries, but "in practice, that doesn't happen very much" (Vanderbilt, 2016, para. 4). Netflix regards explicit taste preferences as aspirational taste, deemed performative to an imagined audience.

Appraisal

A number of key discursive formations were found to relate to the procedure of appraisal, involving the transformation, categorisation and evaluation of the extracted data: altgenres, taste communities, criteria and A/B testing.

Altgenres are thousands of new genre categories created through the segmentation of taste into infinite facets, first revealed in a report by the Atlantic in 2014. The report explains that the process of formulating altgenres starts with splitting films and television programs into granular attributes, such as the moral stance of the main character, the degree of romance, and the level of gore, and then grading them on a scale. The algorithms then weigh the microtags attached to each title and assign it to hyper-specific altgenres, such as "Mind-bending Cult Horror Movies from the 1980s" or "Visually-striking Cerebral Fight-the-System Movies", to name a few. In a press release, Netflix explains how altgenres repel 'genre bias' by revealing underlying affinities between seemingly unrelated shows and opening new pathways to discovering content.
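A minimal sketch of this microtag logic follows, with invented tags, grades and thresholds; the actual attribute vocabulary, scales and weighting are proprietary.

```python
# Invented illustration of altgenre membership: titles are graded on
# microtag scales, and an altgenre collects titles satisfying its recipe.
titles = {
    "Title A": {"gore": 5, "romance": 1, "cult": 5, "cerebral": 3, "year": 1983},
    "Title B": {"gore": 1, "romance": 4, "cult": 2, "cerebral": 5, "year": 2012},
    "Title C": {"gore": 4, "romance": 1, "cult": 4, "cerebral": 4, "year": 1987},
}

# An altgenre expressed as constraints over microtag grades.
mind_bending_cult_horror_80s = {
    "gore": lambda g: g >= 4,
    "cult": lambda c: c >= 4,
    "cerebral": lambda c: c >= 3,
    "year": lambda y: 1980 <= y < 1990,
}

def members(altgenre: dict) -> list[str]:
    """Return the titles whose grades satisfy every constraint."""
    return [
        name for name, tags in titles.items()
        if all(check(tags[tag]) for tag, check in altgenre.items())
    ]

print(members(mind_bending_cult_horror_80s))  # ['Title A', 'Title C']
```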

Taste communities were introduced in the Netflix documents in February 2016 as part of the transition to global algorithms, which collapsed Netflix's national audiences into a singular global entity. Netflix claims that this shift expanded the algorithms' capabilities to identify individuals with similar tastes and group them into clusters, indifferent to their demographic background. However, the actual categories, sizes and compositions of these taste communities were not specified in any of the documents and reports. At most, the media reports confirm that the NRS assigns each user to three or four taste communities to account for their range of interests and preferences.

The Netflix documents refer to a set of criteria that the algorithms rely on to prioritise which altgenres and titles appear on the Netflix homepage. One of the criteria is relevance, the alignment of a recommendation with individual tastes, while popularity is the prominence of certain categories within taste communities, across regions and between time periods. Diversity is another criterion, which ensures that the algorithms cater to an individual or household's range of tastes and interests by making sure each row on the homepage is distinct from the others. Further information about the criteria, however, was sparse and limited in the Netflix documents. Relevance and popularity appear to share equal footing in the algorithmic calculus because they are consistently mentioned in tandem in multiple Netflix documents. Other criteria such as context, stability and novelty were mentioned in passing in the same materials but were not defined.
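The toy sketch below illustrates how these criteria might combine to order homepage rows, assuming equal weights for relevance and popularity and a simple overlap rule for diversity; since the documents do not specify the actual weights or rules, all values here are my own assumptions.

```python
# Assumed illustration of row ordering: relevance and popularity are
# weighted equally, and the diversity criterion skips rows too similar
# to rows already placed. Scores, themes and rules are invented.
candidate_rows = [
    {"altgenre": "Cult Horror",      "relevance": 0.9, "popularity": 0.4, "themes": {"horror", "cult"}},
    {"altgenre": "Slasher Classics", "relevance": 0.8, "popularity": 0.5, "themes": {"horror", "retro"}},
    {"altgenre": "Feel-good Comedy", "relevance": 0.5, "popularity": 0.9, "themes": {"comedy"}},
]

def order_rows(rows, max_shared_themes=0):
    # Relevance and popularity on equal footing, as the documents suggest.
    ranked = sorted(rows, key=lambda r: 0.5 * r["relevance"] + 0.5 * r["popularity"], reverse=True)
    placed = []
    for row in ranked:
        # Diversity: each row must be distinct from those already placed.
        if all(len(row["themes"] & p["themes"]) <= max_shared_themes for p in placed):
            placed.append(row)
    return [r["altgenre"] for r in placed]

print(order_rows(candidate_rows))  # ['Feel-good Comedy', 'Cult Horror']
```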

Consistent in the Netflix documents as early as 2011 is the practice of A/B testing. A/B testing assesses the effectiveness of several dozen recommendation variants by assigning them to 'cells' of control and experimental groups of users. According to a Netflix Tech Blog post, the cells are randomly selected 'homogenous' groups, but no further details explain the variables considered in generating the groupings. Ultimately, the better-performing recommendation, measured through clicks and views, is rolled out to unspecified user categories.
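The reported procedure can be illustrated with a minimal simulation: users are randomly assigned to cells, each cell is served one variant, and the variant with the best observed click-through rate wins. The metric, cell sizes and click probabilities below are assumptions for demonstration.

```python
# Simplified simulation of the A/B testing procedure described above.
# Click probabilities and the winning rule are invented assumptions.
import random
from collections import defaultdict

random.seed(42)
variants = ["artwork_A", "artwork_B", "artwork_C"]

# Randomly assign users to cells, one variant per cell.
cells = {f"user_{i}": random.choice(variants) for i in range(3000)}

# Hypothetical true click probabilities, unknown to the experimenter.
true_ctr = {"artwork_A": 0.10, "artwork_B": 0.14, "artwork_C": 0.08}

clicks, views = defaultdict(int), defaultdict(int)
for user, variant in cells.items():
    views[variant] += 1
    if random.random() < true_ctr[variant]:
        clicks[variant] += 1

observed = {v: clicks[v] / views[v] for v in variants}
winner = max(observed, key=observed.get)   # roll out the best performer
print("Winner:", winner, "observed CTRs:", observed)
```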

Prediction

Prediction is the step within the NRS that presents the resulting recommendations to Netflix users in various formats and arrangements. The discursive formations that captured the process of prediction in the discursive analysis consisted of recommendations, homepage personalisation, artwork personalisation and evidence personalisation.

Across the discursive materials, the concept of recommendation has been redefined from a singular suggested title to the entirety of the Netflix interface. Recommendations now take many forms, but they all intend to deliver a 'personalised' experience. The Netflix documents refer to the homepage as the most personalised part of the NRS, where every row, altgenre and title is arranged based on one's taste communities. Each of the rows is labelled with the corresponding altgenre, and the aggregate of the rows represents the user's distinct tastes. A journal publication by Netflix executives discloses that there are also special rows designed to serve specific interests, such as the trending row, which presents all popular titles within an unspecified parameter, and the similarity row, which groups titles similar to the most recent title viewed by the user.

Introduced in 2016 in the Netflix Tech Blog, personalised artworks represent every title recommendation on the platform. Artwork personalisation involves selecting from dozens of algorithmically captured images and evaluating them through A/B testing. The winning image is intended to grab users' attention through its visceral appeal to their taste preferences. Along with the artwork, the Netflix documents mention that the NRS also personalises the evidence, or the accompanying description of the title, to convince users to watch it. It highlights certain information about the title over others, such as its cast, awards won or similarity to previously viewed content. It also shows a figure called the match score, which indicates the compatibility of the title with the user's tastes. According to media reports, titles with scores lower than 50 per cent are excluded from the recommendation list altogether.
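The reported match-score rule amounts to a simple threshold filter, sketched below with invented scores; how the score itself is computed is not disclosed.

```python
# Invented illustration of the reported 50 per cent match-score cut-off.
candidates = {
    "Title A": 98,   # match score as a percentage
    "Title B": 74,
    "Title C": 43,   # below the threshold: reportedly never shown
}

MATCH_THRESHOLD = 50  # per media reports; the exact rule is undocumented

recommended = {title: score for title, score in candidates.items()
               if score >= MATCH_THRESHOLD}
print(recommended)   # {'Title A': 98, 'Title B': 74}
```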

Discursive manifestations and contradictions

The second stage of the investigation revealed inconsistencies and discrepancies in the official Netflix discourse and supporting media discourse, based on an analysis of 40 media reports between March 2017 and May 2019 (Appendix B) and 990 tweets about users' social interactions with the algorithms. I examined these deviations through abductive analysis (Timmermans & Tavory, 2012) and identified the emergent discursive manifestations and contradictions.

Issues and criticisms in media discourse

There were three major issues identified in the media discourse that signalled contradictions in the official narratives of the NRS embodied in the discursive formations examined above. These were: racial targeting in artworks, episode sequencing based on sexual orientation, and the misrepresentation of titles through artwork. Minor criticisms discovered included poor recommendations, the attribution of show performance to algorithmic targeting, and anxiety over surveillance.

Nineteen media reports presented two cases of recommendations allegedly based on race and sexual orientation despite the assertion that the NRS is demographic-neutral. The first case, in October 2018, involved a number of African American Netflix users who observed that they were presented with artworks featuring minor Black characters from titles with predominantly White casts. All reports on the controversy indicate that the viewers felt 'manipulated' by Netflix reducing their taste to their ethnicity and manufacturing diversity in content where minorities are underrepresented. The second case happened in March 2019, when the algorithms were criticised for allegedly arranging the episodes of the animated anthology Love, Death & Robots based on sexual orientation. The media reports centred on queer viewers claiming that they were first served the sexually explicit lesbian episode while their straight friends were initially presented with the heteronormative episode. In its media responses, Netflix consistently denied that it holds any demographic information about its users, maintaining that all recommendations were machine-generated and randomised through A/B testing.

The algorithms were also scrutinised in 20 media reports for misrepresenting titles through personalised artwork. For the show Grace and Frankie, the Wall Street Journal reported that Netflix engineers suggested removing images of lead actress Jane Fonda after the algorithms indicated that users were less likely to click on them. The reason for the low click-rate is unclear, but a Daily Mail report points out that it might be because of Fonda's polarising persona in American culture owing to her activism. More recently, there was also an issue of the whitewashing of some artworks for the show Nailed It!, which featured two White men, who are only supporting personalities, instead of its African American host Nicole Byer. In a now-deleted series of tweets, Byer called out Netflix for contributing to the deliberate erasure of Black women in culture just to appeal to certain groups of viewers.

Four media articles criticised the NRS for its poor recommendations and its replacement of the star rating system. Two other tech news reports featured actress Natasha Lyonne attributing the success of her genre-ambiguous show Russian Doll to the Netflix algorithms' audience targeting. In contrast, producer Jenji Kohan of GLOW, an award-winning all-female wrestling drama, ascribed the weak performance of the show's first season to the algorithms, as covered in a Wall Street Journal report. Kohan asserted that the algorithms had recommended the show to a male audience when it was intended to cater to women. In relation to the Black Mirror interactive program Bandersnatch, two reports raised concerns about further surveillance of taste through the data mined from users' choices in the narrative.

Netflix user interactions on Twitter

In the final stage of the analysis, ten discursive themes (Figure 1) were distilled from 990 coded tweets about users' interactions with the NRS. These represented individuals' cultural experiences with the Netflix algorithms and revealed broader patterns that exhibit the discursive formations, issues and criticisms discussed above (see coding scheme in Appendix C). Only 523 tweets had location geotags: 66% of these were from the US, 14% from the UK, and 20% from 20 other countries (Appendix D). While Netflix recommendations were reported as accurate in 9% of the tweets, 82% indicated dissatisfaction for various reasons. A further 720 tweets of users eliciting recommendations from their social networks were isolated from the main dataset but were still analysed for the way they illustrated a key limitation in the NRS algorithmic infrastructure: the lack of other social inputs.

Figure 1: Discursive themes from Twitter, February to May 2019, all countries

Poor recommendations: 25%
Buried titles: 12%
Odd recommendations: 11%
Bias for Originals: 11%
Misreading behaviour: 10%
Accurate recommendations: 9%
Demographic identification: 8%
Show cancellation complaints: 7%
General observations: 5%
Work with algorithm: 4%
Social recommendations (isolated from the main dataset): 720 tweets

Source: Sysomos, n=990

As shown in Figure 1 above, of all the discursive themes, 25% were complaints about poor recommendations, mostly about titles at odds with users' tastes, low-quality content, or repetitive recommendations. Contrary to the promise of personalised recommendations, some users reported that the recommendations were generic and far from their taste preferences. Users were also frustrated with the algorithms misreading their preferences and behaviour in 10% of the tweets, including receiving low match scores for content they have high affinity with, and vice versa. They suggested that the misinterpretation might be because the NRS deliberately dismisses their feedback, with content they down-rated still appearing in the recommendations while the titles they queued on their 'My List' did not appear on their homepage.

Users were also baffled by odd recommendations in 11% of the tweets, particularly the peculiar connections between the titles they previously watched and the shows the algorithms suggested seeing next. An example of this was the recommendation of the true-crime documentary The Ted Bundy Tapes right after finishing the children's show Peppa Pig or the '80s classic The Breakfast Club. This ambiguous logic also extends to the algorithmic basis for cancelling shows in the catalogue, with 7% of the tweets expressing frustration about it. One case is the cancellation of the Latino comedy sitcom One Day at a Time, which some users alleged was demoted by the algorithms before it was even recommended to interested viewers. This relates to tweets about titles buried in the catalogue by the algorithms (12%). Apart from the NRS not surfacing titles relevant to their interests, users also said that they had a hard time searching for shows they discovered outside the platform. Instead, users noticed that the algorithms were heavily recommending the big-ticket Netflix Originals, which were increasingly patterned after generic story themes and previous blockbusters.

In 8% of the data, users believed that the algorithms were making assumptions about them based on certain identities, from title recommendations to artwork personalisation. Consistent with the major issues of demographic targeting in the media discourse, users were convinced that they were identified by their race, gender and sexual orientation. Moreover, they felt further confined by being assigned to very limited taste communities when they might have more diverse interests. Despite their disappointment with the algorithms, users expressed their intent to work on and work with the algorithms in understanding their tastes. Some created a wish list of new actions they could perform on the platform, such as the ability to edit their viewing history, while others opted to train the algorithms to read their tastes accurately. Regardless of whether the algorithms work for them, the 720 tweets outside the main dataset indicate that users seek recommendations from their social connections, which implies that users have a wider recommendation ecosystem beyond the NRS.

Conclusion

This chapter renders a detailed depiction of the discursive structure of the NRS constructed from its discursive formations, manifestations and contradictions. Central to the findings is the dissonance between the intentions of the algorithms and the experience of the algorithms in users' interactions. The presumably personalised, hyper-specific recommendations were regarded by some individuals as generic and inconsistent, if not at odds with their tastes, while the claim that the NRS is demographic-agnostic also conflicted with users' experiences of targeting based on race, gender and sexual orientation. While some inconsistencies in the recommendation system are inevitable, the evidence suggests that these might be coded into the NRS algorithmic infrastructure itself. The next chapter explores this prospect and argues that these inconsistencies manifest as a result of the underlying assumptions and logics of taste, and the pervasive power of algorithms as cultural intermediaries.

Chapter 5

Discussion and analysis

The findings illustrate the discursive structure that constructs the NRS and exhibit both its intended and unintended manifestations, as well as its inherent and emergent contradictions. In this chapter, I examine how the findings inform the algorithmic construction of cultural taste and the logics that govern its extraction, appraisal and prediction. I also explore their implications for theorising algorithms as cultural intermediaries.

Construction of taste

The findings indicate that the NRS is limited to individual taste preferences in its calculation of taste, rendered within the constraints of the algorithmic infrastructure. This assumption deviates from Bourdieu's (1984) conception of taste in key ways. Firstly, the algorithmic infrastructure of the NRS formulates taste in a social vacuum, devoid of certain forms of cultural capital that shape one's disposition. In particular, social and cultural capital are detached from taste with the dissolution of the star rating system, friend recommendations and external data such as reviews and box office results. By extension, taste cultivation is isolated from the larger cultural field that regulates the circulation of cultural goods. The algorithms rely only on data generated within and by the recommender system, from which all recommendations are derived. Lastly, the NRS algorithmic infrastructure has structured the habitus such that it is the sole arbiter of the norms and values that govern social practices. The match score represents this argument perfectly by positioning recommendations as exclusively determined by personal affinity, undermining traditionally relevant influences on taste practice like critical acclaim and social feedback.

The findings further specify that the NRS confers primacy on implicit behavioural data over explicit taste preferences. This assumption follows a commercial imperative because Netflix primarily derives value from consumption, which I examine more thoroughly later in the chapter. The more important point here is that the algorithms neglect the reflexive and performative articulations of selves inherent in the taste cultivation process by dismissing explicit data as aspirational. Overt expressions of taste may not be consistent in practice, but they represent aspects of individuals' cultural identities (Bourdieu, 1984). Butler (1993) asserts that these iterative performances of identity engender agency, which confers the power to define the self against the imposition of social structures. However, this agency is superseded in the NRS through datafication, which divorces the subjective self from its acts of consumption. This tension between human agency and the algorithms' computational agency is further exemplified in the following discussion of the algorithmic logics of taste.

The algorithmic system undertakes the quantification of taste (Beer, 2013) by transforming users and content into units of information and splicing them into infinite attributes to create taste communities and altgenres. This schema assumes that taste is a divisible concept, driven more by parts of a whole than by the whole itself. It relies on the premise that individuals like certain titles because of specific aspects, such as a strong female lead or a morally deviant plot, but not necessarily other attributes also descriptive of the title. However, Cohn (2019) suggests that this assumption heightens the likelihood of algorithmic miscalculation of taste, which explains the seemingly unrelated or conflicting recommendations that follow from the titles users watched. Further, Crossley (2015) argues that culture is inherently relational and is mutually constituted by the interaction of its parts. This means that taste is only coherent and meaningful if understood, to borrow Leibniz's (1989) term, as a cultural monad.
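The divisibility assumption can be made concrete with a toy example. In the sketch below, the attribute tags and the overlap measure are invented for illustration; they show how 'parts of a whole' reasoning can link titles as culturally distant as a children's cartoon and a true-crime documentary, echoing the odd recommendations reported in the user tweets.

# Toy illustration of taste-as-parts: titles are reduced to attribute
# sets and 'similarity' is computed over shared fragments. The tags are
# invented; the point is that one shared attribute can link culturally
# unrelated works.

TITLE_ATTRIBUTES = {
    "Peppa Pig":           {"episodic", "British", "ensemble-cast"},
    "The Ted Bundy Tapes": {"episodic", "archival-footage", "crime"},
    "The Breakfast Club":  {"ensemble-cast", "1980s", "coming-of-age"},
}

def attribute_overlap(a: str, b: str) -> float:
    """Share of attributes two titles have in common (Jaccard index)."""
    x, y = TITLE_ATTRIBUTES[a], TITLE_ATTRIBUTES[b]
    return len(x & y) / len(x | y)

# One shared tag ('episodic') makes a true-crime documentary a
# 'neighbour' of a children's cartoon.
print(attribute_overlap("Peppa Pig", "The Ted Bundy Tapes"))  # 0.2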

The same logic of breaking everything down into its micro components enables the algorithmic system to create new models for identifying and classifying tastes, individuals and content. More than sets of computations, these quantified relationships represent taste as sets of rules: if-then statements entangled in its complex algorithmic infrastructure. There is an assumed universality in these rules, such that they overrule more nuanced details of taste. The issues with artwork personalisation illustrate this generalisation, which assumes that all people are more receptive to images of characters of the same ethnicity. Even if there are ambiguities (McNay, 2000) or dissonance (Lahire, 2005) in taste, the algorithms can ignore them following their computational logics and a constant inflow of data. These rules govern the generation of new rules, and the system itself becomes self-generative. They become the ontological foundation (Finn, 2017) of the NRS that transforms taste rules into taste truths.
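The form of this rule logic, though not any actual Netflix rule, can be sketched as a small forward-chaining system in which the output of one rule becomes the input of the next, so that the rule base is self-generative in exactly the sense described above.

# Illustrative forward-chaining rules, not actual Netflix rules: each
# if-then statement asserts a new 'fact', and new facts can trigger
# further rules, so the rule base generates its own conclusions.

rules = [
    (lambda f: "watched:strong-female-lead" in f, "likes:strong-female-lead"),
    (lambda f: "likes:strong-female-lead" in f, "community:feminist-drama"),
    (lambda f: "community:feminist-drama" in f, "recommend:GLOW"),
]

def run_rules(facts: set) -> set:
    """Fire rules to a fixed point; every new fact may trigger more rules."""
    changed = True
    while changed:
        changed = False
        for condition, new_fact in rules:
            if condition(facts) and new_fact not in facts:
                facts.add(new_fact)
                changed = True
    return facts

# A single viewing event cascades into an identity and a recommendation,
# with no step at which the user can contest the inference.
print(run_rules({"watched:strong-female-lead"}))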

Algorithmic logics of taste

The findings imply that the NRS's new categories depict the same dominant social classifications it claims to reject. Media reports present instances of personalisation based on individuals' race, gender and sexual orientation when the NRS is meant to be agnostic of demographics. Programs and films, particularly the in-house Netflix Originals, are becoming even more typical of dominant genres. Gillespie (2016) explains that the developers of algorithms inevitably impress their conceptions of and values about individuals and society on the system. Even if the NRS does not collect demographic information about its users, it can identify them through built-in associations between behaviour and identities, which are often normative representations of race, gender and sexuality (Cohn, 2019). The NRS interpellates users, 'hailing' them to identify with social identifiers (Althusser, 2011), into identities using semantic codes through the selections offered in the interface (Cheney-Lippold, 2017). Beyond language and images, the algorithms also make use of space and order to perform this interpellation, through homepage personalisation and episode sequencing. Supposing that the algorithms started with a clean slate, theories of taste would contend that the algorithms could still recognise emergent identity categories because cultural taste is inevitably attached to one's social positions (Bennett et al., 2009). However, the logic of recommendation prescribes projecting these normative identity categories back instead of articulating new subjectivities (Bhabha, 2004).

The NRS, idealised as rejecting normative social classifications, can potentially be another technological apparatus for the reproduction of hegemonic social structures. Critical research on algorithms demonstrates the same tendencies of the technology to perpetuate existing inequalities (Eubanks, 2018; Noble, 2018). Further, Netflix positions its categories as objective products of algorithmic calculation. This framing not only deflects the issues but affirms particular presumptions and associations about identities (Cheney-Lippold, 2017), such as racial and gender stereotypes. Deleuze (1992) argues that the reduction of social identities into numbers transforms individuals into 'dividuals', who cease to have control over information about themselves. Consequently, this abstraction also depoliticises the consequences of codifying certain representations of individuals and groups in recommendations (Cohn, 2019), as can be seen in the Netflix controversies.

The recommendation criteria that appraise these categories, vague as they already are, may conceal even more complex mechanisms. It makes sense to have relevance to taste as a constant criterion in the algorithmic calculus. Popularity, however, implies that there is an invisible majority that dictates what should be prioritised by the algorithms. Apart from asking who is included in computing popularity, it is also crucial to ask who is excluded (Gillespie, 2014). The Netflix global audience is primarily comprised of viewers from the US and developed Western nations (Moody, 2019), which suggests that there might be a lack of representation of other cultures. Cohn (2019) also argues that there has been a historic lack of economic interest in minorities, which can possibly result in the exclusion of their tastes from popularity metrics.

Despite the precedence of popularity as a criterion, Netflix guarantees that it serves a diverse spectrum in its recommendations (Jenner, 2018). However, upon closer reading, algorithmic diversity is characterised as simply ensuring that the recommendations are different from one another, not necessarily catering to niche or underrepresented tastes. Popular genres, as long as they are distinct from one another, can pass as 'diverse'. These criteria significantly affect the kinds of titles that are recommended (and not recommended) in every iteration. Titles that satisfy these criteria are rendered prominent, but those that fail might not even appear in the recommendations. This calculus also governs the values we attach to the cultures and identities that these titles represent.

The NRS was designed to recommend titles to its users, but its logics have fundamentally deconstructed and reconstructed those titles into new cultural forms. Content has been dissected into infinite attributes and 'repackaged' into new genres (Morris, 2015). It is wrapped in layers of packaging through personalised artwork, the match score and evidence to appeal to viewers' affinities. It is also positioned in a specific row on the homepage, granted that it appears at all, to 'frame' (Goffman, 1974) its relative relevance to one's tastes against the backdrop of seemingly infinite options. Television shows and films have been turned into malleable cultural objects, taking the shape that the algorithms specify. A show can easily become a sci-fi thriller, a period drama or a teen romance as it is repackaged through various algorithmic mechanisms. Reassembling them consequently changes the way people perceive them and assess their relative value (Radway, 1991). While not changing the material itself, the use of secondary Black characters as a stand-in for a film of mostly White actors affects one's impression of the movie and how it is experienced. This judgment happens in a split second and can make or break a show's chances of being chosen and consumed, and by extension, its visibility (Bucher, 2012) and relevance (Gillespie, 2014). By engineering cultural experiences, the algorithmic system manufactures new standards of what is 'quality', 'popular' or 'diverse', or even deflates their importance, since what matters is how content 'fits' one's interests (Morris, 2015).

Taste communities are a kind of calculated public that is algorithmically produced and reproduced through the system's logics (Gillespie, 2014), but it might be difficult to develop inferences about them without clarity on how they are created. At the very least, assumptions can be made about people's perceptions of their algorithmic identity (Cheney-Lippold, 2017) from the user tweets. The recommendations are essentially 'categorised images of self' (Prey, 2018) that one can reflexively recognise. The evidence shows that users have 'algorithmic anxieties' about the algorithms' reading of their behaviour, and such hyperawareness gives them a "sense of ownership, agency and presence" (Cohn, 2019, p. 89) over their algorithmic identities. Users' attempts to work on the misalignment between the algorithmic identity and self-identity, exemplified in the user tweets, suggest that they are reasserting their agency to make the two mirror each other. Our algorithmic identities may also structure our subjectivities in more subtle ways (Cohn, 2019), such as through personalised artwork or title evidence that may help us internalise the algorithmic depiction of ourselves.

Computational agency, however, restricts and often overrules human agency. One way this happens is through the algorithms allocating people to certain taste communities despite their wider spectrum of interests. What this means is that the NRS only suggests titles that mirror the taste communities users are identified with, turning choices into constraints (Cohn, 2019). This goes back to my earlier argument that algorithms have structured the habitus and thus allow only regulated liberties (Bourdieu, 1992), which are neither deterministic nor emancipatory. The expansive tastes of cultural omnivores (Peterson, 1992) and the complex preferences of the 'unrecommendables' (Cohn, 2019) are forced into these categories because the algorithms limit membership to only three to four taste communities.

Furthermore, even when individuals intend to work with the algorithms to accurately capture their tastes, the findings show that there is a lack of mechanisms for doing so. The inability to refine one's viewing history, the absence of social recommendations, and the limitations of the binary rating system demonstrate the rigid algorithmic infrastructure and its narrow notion of taste. Taste is always positioned as a choice among a predefined cultural selection and not a negotiation between individuals and the algorithms. From the period of Bourdieu's writing to contemporary times, taste has been a means to negotiate social positions and construct cultural identities. The act of negotiation conveys agency and entails probing possibilities to resist the 'structuring structure' of the habitus (Bourdieu, 1984). However, the NRS restricts the practice of agency by setting up finite, one-way instruments for expressing one's tastes. At best, there is only tacit negotiation (Gillespie, 2014) whenever users adjust their behaviour to signal their taste preferences to the algorithms. The restricted agency and the consequent imposition of algorithmic identities on individuals come in the guise of free choice and control over one's taste preferences. The NRS fundamentally facilitates the automation of symbolic violence (Bourdieu, 1992) against individuals by enacting their subjugation through their own choices and decisions.

It is crucial to understand that these algorithmic logics are subject to, if not designed for, the underlying political and economic interests of their owner. In principle, the NRS is meant to be demographic-neutral, but Netflix achieves scale by targeting economically viable demographic groups (Cohn, 2019) rather than dispersed taste communities. Popularity as a criterion creates efficiency in its investments, while artwork personalisation, even if it misrepresents titles, allows speedy selection to encourage continuous consumption. Our algorithmic identities are framed not as profiles of individuals but of consumers, whose value depends on their capacity to consume. Taste may even be a means to an end, and the algorithms may exploit every aspect of people's identity just to keep them watching.

Algorithms as cultural intermediaries

The NRS serves as the exemplar of algorithms as dominant cultural intermediaries (Morris, 2015; Gillespie, 2016; Hutchinson, 2017) in contemporary culture. The findings show, however, that it departs from traditional cultural intermediation in distinct ways. Unlike human cultural intermediaries, algorithms are machines and are inherently automated. As established earlier, the NRS functions by following its rules and logics of taste, and there is an assumed universality in their application. Yet there is no opportunity to examine and criticise these rules, as the algorithms remain opaque and impenetrable. As the findings demonstrate, the system has neither the capacity to self-correct errors and issues nor the discretion to make sense of ambiguity. The lack of adaptability of algorithmic systems can steer cultural processes in directions oblivious to individuals and cultural industries.

Instead of limiting its power, these conditions of algorithmic cultural intermediation enable its unprecedented capacity to structure cultural experiences.

Negus (2002) positions cultural intermediaries in the middle of the production and consumption of cultural goods. The NRS, however, transcends that boundary by structuring cultural production, distribution and consumption within its closed algorithmic infrastructure. It not only translates capital into different forms (Hutchinson, 2017) but also wields logistical power over the habitus and the field. Algorithmic logics effectively infringe on and override the creative decisions of producers by repackaging shows and films into seemingly new cultural objects. Titles are reappropriated through artwork personalisation and, to an extent, the altgenres they are assigned. Mediocre content becomes recommendable because it has been reconstructed to match viewers' tastes.

The NRS also has a monopoly on the visibility of content on the platform and the manner in which it appears. Instead of individuals seeking meaningful cultural experiences, algorithms enable culture to find us (Lash, 2007) by determining the kinds of people a certain show might appeal to. This suggests that any kind of content, be it subpar or generic, can be promoted in this cultural distribution model. It is also possible to diminish the visibility of shows by recommending them to inappropriate audiences, as in the case of the algorithms targeting men instead of women for the show GLOW. At the same time, the NRS has the power to make films and shows disappear and become irrelevant if they do not satisfy its appraisal criteria, burying them at the bottom of the catalogue (Bucher, 2012). All this happens while maintaining the appearance that users have infinite choices within their finite taste communities. These choices within the algorithmic interface have been framed as reductive judgments of taste that diminish the cultural experience to false binaries.

Jenner (2018) illustrates that there is a conflict between choice and control as more data enables surveillance. This is rationalised, however, by positioning "surveillance as a form of care" (Cohn, 2019, p. 22). The troves of information that Netflix holds on its users enable it to explore exponential possibilities to manipulate the data and engineer cultural experiences.

As much as the NRS has restrictive control over cultural experiences, it is evident that individuals resist it and assert their agency over their taste practices. Algorithms are "susceptible to misunderstanding, misuse, and subversion" (Cohn, 2019, p. 8) and they are often not equipped to process these anomalies and deviations. Even if they make perfect sense to the machine, altgenres are sometimes interpreted as illogical and peculiar. The findings further demonstrate that individuals constantly articulate their interests, reject recommendations, and seek recommendations from friends on social media. This implies that, despite its limitations and constraints, the algorithmic infrastructure is not a barrier to individuals defining their cultural experiences and cultivating their tastes.

Conclusion

The assumptions, logics and mechanisms of algorithmic cultural intermediation are mutually constitutive of how the NRS constructs cultural taste. The form that algorithmic taste takes (divisible, stable and universal zeros and ones) creates new structures of control that are indiscernible to human cognition. This confers on algorithms the power to dictate what taste is and what it can become, obstructing and dismissing the role of agency in taste-making. Further, the reduction of taste to individual acts of consumption divorces taste from the individual enacting it and isolates its cultivation from the larger social structures where it is performed. Taste becomes devoid of the politics of race, gender and sexual identities despite their entrenchment in its cultivation.

Even if the NRS was designed to reject conventional social categories, its logics of taste may translate into the maintenance and reproduction of social structures. Its mechanisms for classifying and codifying hegemonic social categories are resistant to interrogation, and this works to preserve asymmetrical power relations among social groups. Algorithms employ old and new mechanisms of subjugation, primarily by exclusively governing the habitus within the algorithmic infrastructure. Under the appearance of freedom and control over taste practice, individuals are covertly deprived of their agency and subjugated to their social positions by their own actions.

This unprecedented power of algorithms to shape cultural experiences requires rethinking their nature as cultural intermediaries. Algorithms are both agents and infrastructure in the cultural field. They do not wield their power by espousing ideologies but by structuring the field based on their logics (Lash, 2007). Arguably, their influence expands across cultural processes and over other cultural players in the field. Algorithms, therefore, not only regulate the circulation of cultural goods within their infrastructure but also impress the same logics on cultural agents seeking visibility and relevance. Only by breaking down the boundaries of the infrastructure can algorithmic power be challenged and undermined.

This emergent framework of taste affords new ways of thinking about cultural objects, experiences and identities, but also introduces new apparatuses of control and subjugation. Moreover, it provides a view of the issues and implications of the emergence of non-human cultural intermediaries such as algorithms in the contemporary cultural landscape. The next chapter distils the algorithmic logics of taste and their significance in culture and society, as well as recommendations and provocations for future research.

Chapter 6

Conclusion and recommendations

Algorithms are new cultural intermediaries that shape contemporary cultural experiences and identities. This research provides theoretical and empirical insights about these tastemakers by surfacing and assembling their obscure infrastructure through technography as a methodological approach (Bucher, 2016). In particular, it examines the algorithmic logics of taste through a discursive analysis of the NRS. Through its investigation of the logics of extraction, appraisal and prediction of taste, it explains how cultural taste is algorithmically constructed and develops an understanding of algorithms as cultural intermediaries.

The NRS treats cultural taste as rules: universal, definite and durable assumptions about cultural identities and objects. It derives its fundamental assumptions from select behavioural signals that diminish the relevance of reflexive and performative expressions of taste, which are significant aspects of cultural identity. Algorithmic taste is also removed from certain forms of cultural capital, isolating taste practice in a social vacuum. Through datafication, the algorithms divide taste into multiple attributes assigned predetermined meanings and values, which assumes that taste is quantifiable and divisible. This not only undermines cultural objects as coherently meaningful artefacts but also reduces individuals to abstract 'dividuals' (Deleuze, 1992) who are deprived of control over their own information. The emergent categories and relationships from the quantification of taste serve as the rules that govern what taste is and how it is extracted, appraised and predicted. Algorithmic recursivity translates these taste rules into taste truths that overrule ambiguities (McNay, 2000), dissonance (Lahire, 2005) and diversity (Peterson, 1992) in taste practice.

The algorithmic surveillance of taste, in the guise of free choice, allows for the manipulation and repackaging of taste in the form of recommendations towards particular desired ends. This confers on algorithms ontological power in constructing new categorical frameworks, such as taste communities and altgenres, and in manufacturing new cultural standards of 'quality', 'popularity' and 'diversity'. This algorithmic resignification of taste is embedded in and enforced by the algorithmic infrastructure, which makes algorithms 'structuring structures' (Bourdieu, 1984) of our cultural experiences. Further, recommendations are deemed legitimate not only because they are perceived as objective, but also because they have become personal through iterative identity performances. Netflix users might take ownership of their algorithmic identities and reassert agency in their construction. However, agency is constrained by the predefined choices within the algorithmic infrastructure. Without mechanisms to facilitate more user control over the logics of taste, the negotiation of identities and subjectivities through taste practices is tacit at best.

The algorithmic construction of taste illuminates the distinct capabilities of algorithms as cultural intermediaries. Firstly, algorithms, operating with their logics of taste, potentially serve as mechanisms for the reproduction of hegemonic social structures. Apart from reinstating normative knowledge systems in their algorithmic schema, they also reappropriate old and introduce new means of interpellation (Althusser, 2011) through the use of semantics, space and order. Through the NRS's recommendations, algorithms codify representations of social identities and, when acted on, effectively automate symbolic violence (Bourdieu, 1992) against subjected individuals. Secondly, algorithmic cultural intermediation transcends the role of 'capital translator' (Hutchinson, 2017) and encompasses cultural production, distribution and consumption. Algorithms repackage cultural objects and manufacture cultural norms that other cultural intermediaries working within their infrastructure need to abide by, demonstrating their influence over the production of goods in the market. They have the power to render the visibility/invisibility (Bucher, 2012) of cultural objects based on their recommendation criteria and, consequently, to assign their respective relevance/irrelevance (Gillespie, 2014). This means that the cultures and identities embodied in those cultural objects are also subjected to the algorithmic logics of promotion and demotion. All these processes are hidden as the algorithms present finite choices to individuals and, in effect, dictate the extent of their performance of agency. However, algorithms inevitably have limitations and fallibilities, which open pathways for the rejection and subversion of algorithmic taste. Regardless, they exist in a larger cultural field, and individuals can elude their influence by breaking out of the boundaries of the algorithmic infrastructure.

To dive deeper into the NRS, I propose recommendations and directions for future research. As productive as the methods employed here are in investigating the algorithms, they present only a partial and limited perspective. It is recommended that further research expand on the algorithmic logics of taste by directly engaging the algorithmic interface through reverse engineering (Diakopoulos, 2015), potentially through controlled experimentation using user personas (Sandvig et al., 2014; Kitchin, 2017). Another method would be an ethnographic approach to the everyday cultural experiences of users (Kitchin, 2017; Seaver, 2017), considering more broadly their algorithmic imaginaries beyond instances of anomaly and breakdown. The persistence of the findings should also be examined through longitudinal research, especially since the algorithms themselves are temporally specific. Many of the arguments in this research implicate a relationship between the algorithmic logics of taste and the formation of cultural identities. While the study developed preliminary insights, it is recommended to probe further into how the algorithmic conceptualisation of taste influences subjectivities and to extend the work of Cheney-Lippold (2017) on algorithmic identity.

The research contributes to the emerging area of algorithmic cultures research, particularly by building on the work on algorithms as cultural intermediaries by Beer (2013), Morris (2015), Gillespie (2016) and Hutchinson (2017). It calls for situating algorithms in larger cultural fields and characterising the new forms of power they wield in influencing cultural experiences and other cultural intermediaries alike. It also serves as complementary work to the discussion of the entanglement of the political economy of Netflix and its algorithmic production of cultural experiences in the works of Jenner (2018) and Lobato (2019). Moreover, the research contributes to the theoretical debate on cultural taste, particularly regarding the emergence of algorithmic mediation of culture. The findings present tensions between cultural studies research and algorithmic cultures research over how taste is constructed. Yet the research supports the view that taste remains a mechanism for hegemonic entities to subjugate social groups through the codification of social classification and the automation of symbolic violence. This argument opens up questions about new cultural machines and their role in maintaining asymmetries in social power relations. Lastly, the research provides new possibilities for studying algorithms by repurposing traditional research methods to perform Bucher's (2016) innovative methodology of technography. It establishes the relational nature of algorithmic infrastructure and the significance of the discourse constructed by social actors in surfacing its fundamental cultural assumptions and logics.

I close by offering pragmatic and theoretical provocations. Algorithms are expanding their ground in cultural and media industries, which further strengthens their dominance and ubiquity as cultural intermediaries. However, the findings indicate that they exhibit discrepancies and contradictions in providing individuals with relevant and enriching cultural experiences. The first provocation intends to open a productive conversation on how algorithms can work better for and with humans. Apart from the demand for transparency in their assumptions and logics, there is a need for mechanisms that enable people to negotiate their tastes and identities with algorithms. This means algorithms need to acknowledge individual agency and make their systems adaptable to more of the specificities and nuances of taste practice. The second provocation reconsiders recommendation as a cultural logic. Recommendations have been part of culture for as long as the marketplace, but they have been reclaimed by algorithmic culture through data surveillance and computational rhetoric. As illustrated by the literature, particularly the work of Cohn (2019), and by this research, recommendations are apparatuses of control that promote neoliberal and postdemographic discourses. It is important to further explore emergent manifestations of algorithmic recommendations and understand their social, cultural and political significance.

References

Aggarwal, C. C. (2016). Recommender systems: The textbook. Cham: Springer.

Althusser, L. (2011). Ideology and ideological state apparatuses: Notes towards an investigation. In I. Szeman & T. Kaposy (Eds.), Cultural theory: An anthology (pp. 204-222).

Amoore, L. (2011). Data derivatives: On the emergence of a security risk calculus for our times. Theory, Culture & Society, 28(6), 24-43.

Barthes, R. (1993). Rhetoric of the image. In S. Heath (Ed.), Image, music, text (pp. 32-51). London: Fontana.

Beer, D. (2013). Algorithms: Shaping tastes and manipulating the circulations of popular culture. In Popular culture and new media (pp. 63-100). New York: Palgrave Macmillan.

Bennett, T., Savage, M., Silva, E., Warde, A., Gayo-Cal, M., & Wright, D. (2009). Culture, class, distinction. Abingdon, Oxon: Routledge.

Berkowitz, J. (2018, October 18). Is Netflix racially personalizing artwork for its titles? Fast Company. Retrieved from https://www.fastcompany.com/90253578/is-netflix-racially-personalizing-artwork-for-its-titles

Bhabha, H. K. (2004). The location of culture. London: Routledge.

Bhaskar, M. (2016). Curation: The power of selection in a world of excess. UK: Hachette Book Group.

Bourdieu, P. (1984). Distinction: A social critique of the judgement of taste. Cambridge: Harvard University Press.

Bourdieu, P. (1992). An invitation to reflexive sociology. Cambridge: Polity Press.

Bucher, T. (2012). Want to be on the top? Algorithmic power and the threat of invisibility on Facebook. New Media & Society, 14(7), 1164-1180.

Bucher, T. (2016). Neither black nor box: Ways of knowing algorithms. In S. Kubitschko & A. Kaun (Eds.), Innovative methods in media and communication research (pp. 81-98). Cham: Springer International Publishing.

Bucher, T. (2017). The algorithmic imaginary: Exploring the ordinary affects of Facebook algorithms. Information, Communication & Society, 20(1), 30-44.

Butler, J. (1988). Performative acts and gender constitution: An essay in phenomenology and feminist theory. Theatre Journal, 40(4), 519-531.

Butler, J. (1993). Bodies that matter: On the discursive limits of sex. London: Routledge.

Carlson, M. (2018). Automating judgment? Algorithmic judgment, news knowledge, and journalistic professionalism. New Media & Society, 20(5), 1755-1772.

Cheney-Lippold, J. (2017). We are data: Algorithms and the making of our digital selves. New York: New York University Press.

Cohn, J. (2019). The burden of choice: Recommendations, subversion, and algorithmic culture. New Brunswick: Rutgers University Press.

Crossley, N. (2015). Relational sociology and culture: A preliminary framework. International Review of Sociology, 25(1), 65-85.

Deleuze, G. (1992). Postscript on the societies of control. October, 59, 3-7.

Diakopoulos, N. (2015). Algorithmic accountability: Journalistic investigation of computational power structures. Digital Journalism, 3(3), 398-415.

Diaz, J. (2018, August 22). Netflix's recommendations suck, but it's not too late to fix them. Fast Company. Retrieved from https://www.fastcompany.com/90221403/netflixs-recommendations-suck-but-its-not-too-late-to-fix-them

Emmison, M., & Frow, J. (1998). Information technology as cultural capital. Australian Universities' Review, 41(1), 41-45.

Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor (1st ed.). New York, NY: St. Martin's Press.

Finn, E. (2017). What algorithms want: Imagination in the age of computing. Cambridge: MIT Press.

Foucault, M. (1972). The archaeology of knowledge. London: Tavistock Publications.

Galloway, A. R. (2004). Protocol: How control exists after decentralization. Cambridge: MIT Press.

Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. Boczkowski, & K. Foot (Eds.), Media technologies: Essays on communication, materiality, and society (pp. 167-193). Cambridge, Massachusetts: MIT Press.

Gillespie, T. (2016). #trendingistrending: When algorithms become culture. In Algorithmic cultures (pp. 64-87). Abingdon, Oxon: Routledge.

Goffman, E. (1974). Frame analysis: An essay on the organization of experience (1st ed.). New York: Harper & Row.

Goldman, E. (2008). Search engine bias and the demise of search engine utopianism. In Web search (pp. 121-133). Berlin, Heidelberg: Springer.

Granka, L. A. (2010). The politics of search: A decade retrospective. The Information Society, 26(5), 364-374.

Ha, A. (2019, March 19). Netflix is experimenting with different episode orders for 'Love, Death & Robots'. TechCrunch. Retrieved from https://techcrunch.com/2019/03/19/love-death-robots-experiment/

Hage, G. (1998). White nation: Fantasies of white supremacy in a multicultural society. Sydney: Pluto Press.

Hall, S. (1997). Representation: Cultural representations and signifying practices. London: Sage.

Hallinan, B., & Striphas, T. (2016). Recommended for you: The Netflix Prize and the production of algorithmic culture. New Media & Society, 18(1), 117-137.

Hennion, A. (2004). Pragmatics of taste. In M. Jacobs & N. W. Hanrahan (Eds.), The Blackwell companion to the sociology of culture (pp. 131-144).

Hutchinson, J. (2017). Algorithmic culture and cultural intermediation. In Cultural intermediaries: Audience participation in media organisations (pp. 201-219). Cham: Palgrave Macmillan.

Huth, M., & Ryan, M. (2004). Logic in computer science: Modelling and reasoning about systems. Cambridge: Cambridge University Press.

Jenner, M. (2018). Netflix and the re-invention of television. Cham: Springer International Publishing.

Kitchin, R. (2017). Thinking critically about and researching algorithms. Information, Communication & Society, 20(1), 14-29.

Kitchin, R., & Dodge, M. (2011). Code/space: Software and everyday life. Cambridge, Massachusetts: MIT Press.

Lahire, B. (2005). La culture des individus: Dissonances culturelles et distinction de soi [The culture of individuals: Cultural dissonances and self-distinction]. Paris: La Découverte.

Lash, S. (2007). Power after hegemony: Cultural studies in mutation? Theory, Culture & Society, 24(3), 55-78.

Latour, B. (2005). Reassembling the social: An introduction to actor-network-theory. New York: Oxford University Press.

Law, J. (2000). Ladbroke Grove, or how to think about failing systems. Retrieved from http://www.comp.lancs.ac.uk/sociology/papers/Law-Ladbroke-Grove-Failing-Systems.pdf

Law, J. (2004). After method: Mess in social science research. London: Routledge.

Leibniz, G. W. (1989). The monadology. In Philosophical papers and letters (pp. 643-653). Amsterdam: Springer.

Lobato, R. (2019). Netflix nations: The geography of digital distribution. New York: New York University Press.

Madrigal, A. (2014, January 2). How Netflix reverse-engineered Hollywood. The Atlantic. Retrieved from https://www.theatlantic.com/technology/archive/2014/01/how-netflix-reverse-engineered-hollywood/282679/

McNay, L. (2000). Gender and agency: Reconfiguring the subject in feminist and social theory. Cambridge: Polity Press.

Moody, R. (2019, April 9). Which countries pay the most and least for Netflix? Retrieved from https://www.comparitech.com/blog/vpn-privacy/countries-netflix-cost/

Morris, J. W. (2015). Curation by code: Infomediaries and the data mining of taste. European Journal of Cultural Studies, 18(4-5), 446-463.

Nakamura, L. (2002). Cybertypes: Race, ethnicity and identity on the Internet. New York: Routledge.

Negus, K. (2002). The work of cultural intermediaries and the enduring distance between production and consumption. Cultural Studies, 16(4), 501-515.

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York: New York University Press.

Pariser, E. (2011). The filter bubble: What the Internet is hiding from you. UK: Penguin.

Peirce, C. S. (1934). Collected papers of Charles Sanders Peirce (Vol. 5). Cambridge: Harvard University Press.

Peterson, R. (1992). Understanding audience segmentation: From elite and mass to omnivore and univore. Poetics, 21(4), 243-258.

Prey, R. (2018). Nothing personal: Algorithmic individuation on music streaming platforms. Media, Culture & Society, 40(7), 1086-1100.

Radway, J. A. (1991). Reading the romance: Women, patriarchy, and popular literature (2nd ed.). Chapel Hill: University of North Carolina Press.

Ramachandran, S., & Flint, J. (2018, November 10). At Netflix, who wins when it's Hollywood vs. the algorithm? The Wall Street Journal. Retrieved from https://www.wsj.com/articles/at-netflix-who-wins-when-its-hollywood-vs-the-algorithm-1541826015

Roberge, J., & Seyfert, R. (2016). What are algorithmic cultures? In Algorithmic cultures: Essays in meaning, performance, and new technologies (pp. 1-25). New York, NY: Routledge.

Rodriguez, A. (2017, March 23). Netflix divides its 93 million users around the world into 1,300 "taste communities". Quartz. Retrieved from https://qz.com/939195/netflix-nflx-divides-its-93-million-users-around-the-world-not-by-geography-but-into-1300-taste-communities/

Rogers, R. (2013). Social media and postdemographics. In Digital methods (pp. 153-164). Cambridge: MIT Press.

Rose, G. (2001). Visual methodologies: An introduction to the interpretation of visual materials. London: Sage.

Sandvig, C., Hamilton, K., Karahalios, K., & Langbort, C. (2014). An algorithm audit. In Data and discrimination: Collected essays (pp. 6-10). New York, NY: New America, Open Technology Institute.

Schaanning, E. (2000). Fortiden i våre hender: Foucault som videnshåndtør. Bd. 2: Historisk praksis [The past in our hands: Foucault as knowledge handler. Vol. 2: Historical practice]. Oslo: Unipub.

Seaver, N. (2013). Knowing algorithms. Paper presented at Media in Transition 8, Cambridge, Massachusetts. Retrieved from http://nickseaver.net/papers/seaverMiT8.pdf

Seaver, N. (2017). Algorithms as culture: Some tactics for the ethnography of algorithmic systems. Big Data & Society, 4(2), 1-12.

Star, S. L. (1999). The ethnography of infrastructure. American Behavioral Scientist, 43(3), 377-391.

Steiner, E., & Xu, K. (2018). Binge-watching motivates change: Uses and gratifications of streaming video viewers challenge traditional TV research. Convergence: The International Journal of Research into New Media Technologies, 1-20. doi:10.1177/1354856517750365

Stokes, J. C. (2013). How to do media and cultural studies (2nd ed.). Los Angeles: Sage.

Striphas, T. (2015). Algorithmic culture. European Journal of Cultural Studies, 18(4-5), 395-412. doi:10.1177/1367549415577392

The Economist. (2017, February 9). How to devise the perfect recommendation algorithm. The Economist. Retrieved from https://www.economist.com/special-report/2017/02/09/how-to-devise-the-perfect-recommendation-algorithm

Thornton, S. (1996). Club cultures: Music, media, and subcultural capital (1st US ed.). Hanover: University Press of New England.

Timmermans, S., & Tavory, I. (2012). Theory construction in qualitative research: From grounded theory to abductive analysis. Sociological Theory, 30(3), 167-186.

Vanderbilt, T. (2016, August 8). The science behind the Netflix algorithms that decide what you'll watch next. Wired. Retrieved from https://www.wired.com/2013/08/qq-netflix-algorithm/

Webb, E., Campbell, D., Schwartz, R., & Sechrest, L. (1966). Unobtrusive measures (pp. 35-52). Chicago: Rand McNally.

Wilson, M. (2017, December 18). Netflix is even personalizing its graphic design to you now. Fast Company. Retrieved from https://www.fastcompany.com/90154608/netflix-is-even-personalizing-its-graphic-design-to-you-now

Zaslow, J. (2002, November 22). If TiVo thinks you are gay, here's how to set it straight. Wall Street Journal. Retrieved from https://www.wsj.com/articles/SB1038261936872356908

Appendices

Appendix A: Discursive formations in the Netflix and media discourse

Extraction
- Explicit taste preference. Netflix discourse: all sources except the Netflix LinkedIn post (10 documents). Timeframe: August 2007 to April 2017.
- Implicit taste preference. Netflix discourse: Netflix tech blog, Netflix help site, Quora (9 documents). Timeframe: November 2010 to October 2016.

Appraisal
- Altgenres. Netflix discourse: Netflix tech blog (3 documents). Media discourse: major & minor general news (4 reports). Timeframe: April 2012 to August 2017.
- Taste communities. Netflix discourse: Netflix tech blog & Netflix media site (3 documents). Media discourse: major & minor general, entertainment & tech news (8 reports). Timeframe: February 2016 to December 2018.
- Criteria. Netflix discourse: Netflix tech blog (3 documents). Timeframe: April 2012 to April 2015.
- A/B testing. Netflix discourse: Netflix tech blog, Netflix media site & academic publication (8 documents). Timeframe: January 2012 to December 2017.

Prediction
- Recommendation. Netflix discourse: all sources (17 documents). Media discourse: major & minor general & tech news (10 reports). Timeframe: December 2010 to November 2018.
- Homepage personalisation. Netflix discourse: Netflix tech blog, Netflix help site & academic publication (7 documents). Timeframe: April 2012 to November 2017.
- Artwork personalisation. Netflix discourse: Netflix tech blog & Netflix media site (3 documents). Media discourse: major & minor general & tech news (7 reports). Timeframe: May 2016 to April 2019.
- Evidence personalisation. Netflix discourse: Netflix tech blog, Quora & academic publication (3 documents). Timeframe: April 2012 to April 2017.

Total number of documents & reports: 31 Netflix documents, 29 media reports.

Detailed breakdown of the Netflix discourse

No. | Date of publication | Topic | Source | Discursive formations | Source URL
1 | Aug 12 2007 | The Netflix Prize | Academic paper | Explicit taste preferences | https://www.cs.uic.edu/~liub/KDD-cup-2007/NetflixPrize-description.pdf
2 | Nov 20 2010 | A/B testing | Quora | Implicit taste preferences | https://www.quora.com/What-types-of-things-does-Netflix-A-B-test-aside-from-member-sign-up
3 | Dec 10 2010 | Blog launch | Netflix tech blog | Recommendations | https://medium.com/netflix-techblog/netflix-tech-blog-1caed01764f2
4 | Jan 19 2011 | Product success | Netflix tech blog | A/B testing | https://medium.com/netflix-techblog/how-we-determine-product-success-980f81f0047e
5 | Apr 6 2012 | Recommender system Part 1 | Netflix tech blog | Implicit & explicit taste preferences, altgenres, criteria, A/B testing, recommendations, homepage personalisation, evidence personalisation | https://medium.com/netflix-techblog/netflix-recommendations-beyond-the-5-stars-part-1-55838468f429
6 | Jun 20 2012 | Recommender system Part 2 | Netflix tech blog | Implicit & explicit taste preferences, altgenres, criteria, A/B testing, recommendations | https://medium.com/netflix-techblog/netflix-recommendations-beyond-the-5-stars-part-2-d9b96aa399f5
7 | May 23 2013 | System architecture | Netflix tech blog | Implicit & explicit taste preferences, recommendations | https://medium.com/netflix-techblog/system-architectures-for-personalization-and-recommendation-e081aa94b5d8
8 | Dec 23 2014 | Netflix algorithms | Quora | Recommendations | https://www.quora.com/How-many-algorithms-are-used-in-the-Netflix-recommendation-system-I-heard-they-combined-800+-algorithms-Is-that-necessary
9 | Dec 27 2014 | Netflix algorithms | Quora | Recommendations | https://www.quora.com/How-many-algorithms-are-used-in-the-Netflix-recommendation-system-I-heard-they-combined-800+-algorithms-Is-that-necessary
10 | Jan 27 2015 | Viewing data | Netflix tech blog | Recommendations | https://medium.com/netflix-techblog/netflixs-viewing-data-how-we-know-where-you-are-in-house-of-cards-608dd61077da
11 | Feb 10 2015 | Trending row | Netflix tech blog | Implicit & explicit taste preferences, recommendations, homepage personalisation | https://medium.com/netflix-techblog/whats-trending-on-netflix-f00b4b037f61
12 | Apr 19 2015 | Personalised homepage | Netflix tech blog | Implicit & explicit taste preferences, altgenres, criteria, homepage personalisation | https://medium.com/netflix-techblog/learning-a-personalized-homepage-aa8ec670359a
13 | Dec 1 2015 | Holiday streaming | Netflix tech blog | Recommendations | https://medium.com/netflix-techblog/caching-content-for-holiday-streaming-be3792f1d77c
14 | Dec 2015 | Recommender system | Academic paper | A/B testing, recommendations, homepage personalisation, evidence personalisation | https://dl.acm.org/citation.cfm?id=2843948
15 | Feb 12 2016 | Time travel data | Netflix tech blog | A/B testing | https://medium.com/netflix-techblog/distributed-time-travel-for-feature-generation-389cccdd3907
16 | Feb 17 2016 | Global algorithms | Netflix tech blog | Explicit taste preferences, taste communities, recommendations, homepage personalisation | https://medium.com/netflix-techblog/recommending-for-the-world-8da8cbcf051b
17 | Feb 17 2016 | Global algorithms | Netflix media site | Taste communities | https://media.netflix.com/en/company-blog/a-global-approach-to-recommendations
18 | Apr 29 2016 | A/B testing | Netflix tech blog | A/B testing | https://medium.com/netflix-techblog/its-all-a-bout-testing-the-netflix-experimentation-platform-4e1ca458c15
19 | May 3 2016 | Artwork optimisation | Netflix tech blog | A/B testing, artwork personalisation | https://medium.com/netflix-techblog/selecting-the-best-artwork-for-videos-through-a-b-testing-f6155c4595f6
20 | May 3 2016 | Artwork optimisation | Netflix media site | A/B testing, artwork personalisation | https://media.netflix.com/en/company-blog/the-power-of-a-picture
21 | Oct 12 2016 | Continue watching | Netflix tech blog | Implicit taste preferences | https://medium.com/netflix-techblog/to-be-continued-helping-you-find-shows-to-continue-watching-on-7c0d8ee4dab6
22 | Oct 28 2016 | ACM conference | Netflix tech blog | Implicit taste preferences | https://medium.com/netflix-techblog/netflix-at-recsys-2016-recap-e32d50d22ecb
23 | Feb 9 2017 | Programming | Netflix media site | Taste communities | https://media.netflix.com/en/press-releases/theres-never-enough-tv-on-netflix
24 | Apr 5 2017 | Thumbs up & down | Netflix media site | Explicit taste preferences | https://media.netflix.com/en/company-blog/goodbye-stars-hello-thumbs
25 | Apr 7 2017 | Thumbs up & down | Quora | Explicit taste preferences, evidence personalisation | https://www.quora.com/Why-is-Netflix-replacing-its-star-ratings-with-a-thumbs-up-and-thumbs-down-system-2017
26 | May 24 2017 | Marketing | LinkedIn | Recommendations | https://www.linkedin.com/pulse/data-science-netflix-promoting-originals-kelly-uphoff/
27 | Nov 30 2017 | Personalised rows | Netflix tech blog | Homepage personalisation | https://medium.com/netflix-techblog/interleaving-in-online-experiments-at-netflix-a04ee392ec55
28 | Dec 8 2017 | Artwork personalisation | Netflix tech blog | A/B testing, artwork personalisation | https://medium.com/netflix-techblog/artwork-personalization-c589f074ad76
29 | Mar 15 2018 | Engineering for marketing effectiveness | Netflix tech blog | Recommendations | https://medium.com/netflix-techblog/engineering-to-improve-marketing-effectiveness-part-1-a6dd5d02bab7
30 | Nov 30 2018 | Data storage | Netflix tech blog | Recommendations | https://medium.com/netflix-techblog/interleaving-in-online-experiments-at-netflix-a04ee392ec55
31 | No date | Overview | Netflix help site | Implicit & explicit taste preferences, recommendations, homepage personalisation | https://help.netflix.com/en/node/100639

Detailed breakdown of the media discourse

No. | Discursive formation | Search source | Date of publication | Source | Source URL
1 | Recommendations | Google | Aug 22 2017 | Inverse | https://www.inverse.com/article/35780-netflix-genre-bias-data
2 | Recommendations | Factiva | Jun 8 2017 | Business World | http://bworldonline.com/content.php?section=Technology&title=just-keep-bingeing-how-netflix-knows-what-you-want-to-watch&id=146421
3 | Recommendations | Factiva | Mar 21 2017 | The Atlantic | https://www.theatlantic.com/entertainment/archive/2017/03/netflix-believes-in-the-power-of-thumbs/520242/
4 | Recommendations | Google | Feb 26 2016 | Business Insider | https://www.businessinsider.com/how-the-netflix-recommendation-algorithm-works-2016-2
5 | Recommendations | Google | Feb 17 2016 | The Verge | https://www.theverge.com/2016/2/17/11030200/netflix-new-recommendation-system-global-regional
6 | Recommendations | Google | Aug 8 2013 | Gizmodo | https://gizmodo.com/the-algorithm-that-tells-netflix-which-movies-you-reall-1054431105
7 | Recommendations | Google | Aug 8 2013 | HuffPost | https://www.huffingtonpost.com.au/2013/08/07/netflix-movie-suggestions_n_3720218.html
8 | Recommendations | Google | Aug 7 2013 | Wired | https://www.wired.com/2013/08/qq-netflix-algorithm/
9 | Recommendations | Google | Feb 23 2013 | The New York Times | https://www.nytimes.com/2013/02/25/business/media/for-house-of-cards-using-big-data-to-guarantee-its-popularity.html
10 | Recommendations | Google | Apr 9 2012 | CNN | https://edition.cnn.com/2012/04/09/tech/inside-netflixs-popular-recommendation-algorithm/index.html
11 | Taste communities | Google | Dec 18 2018 | BuzzFeed | https://www.buzzfeed.com/jp/nicolenguyen/netflix-recommendation-algorithm-explained-binge-watching-1
12 | Taste communities | Factiva | Jul 29 2018 | Adweek | https://www.adweek.com/tv-video/netflix-thrives-by-programming-to-taste-communities-not-demographics/
13 | Taste communities | Factiva | Jun 12 2018 | National Post | https://nationalpost.com/entertainment/television/five-things-we-learned-from-vultures-behind-the-scenes-look-at-netflix
14 | Taste communities | Google | Oct 23 2017 | CNET | https://www.cnet.com/news/stranger-things-addict-heres-how-netflix-sucked-you-in/
15 | Taste communities | Google | Aug 22 2017 | Wired | https://www.wired.co.uk/article/how-do-netflixs-algorithms-work-machine-learning-helps-to-predict-what-viewers-will-like
16 | Taste communities | Google | Aug 22 2017 | Wired | https://www.wired.com/story/netflix-the-defenders-audience-data/
17 | Taste communities | Google | Mar 23 2017 | Quartz | https://qz.com/939195/netflix-nflx-divides-its-93-million-users-around-the-world-not-by-geography-but-into-1300-taste-communities/
18 | Taste communities | Factiva | Mar 18 2017 | Variety | https://variety.com/2017/digital/news/netflix-lab-day-behind-the-scenes-1202011105/
19 | Altgenres | Google | Aug 22 2017 | Quartz | https://qz.com/1059434/netflix-finally-explains-how-its-because-you-watched-recommendation-tool-works/
20 | Altgenres | Factiva | May 29 2016 | The New York Times | https://www.nytimes.com/2016/05/29/opinion/sunday/the-psychology-of-genre.html
21 | Altgenres | Factiva | May 24 2016 | The Guardian | https://www.theguardian.com/media-network/2016/may/24/media-design-media-planning-brands
22 | Altgenres | Google | Jan 2 2014 | The Atlantic | https://www.theatlantic.com/technology/archive/2014/01/how-netflix-reverse-engineered-hollywood/282679/
23 | Artwork personalisation | Google | Apr 2 2019 | The Sydney Morning Herald | https://www.smh.com.au/technology/how-netflix-decides-what-you-want-to-watch-20190402-p519wb.html
24 | Artwork personalisation | Google | Nov 21 2018 | Vox | https://www.vox.com/2018/11/21/18106394/why-your-netflix-thumbnail-coverart-changes
25 | Artwork personalisation | Factiva | May 10 2016 | Mirror | https://can.megam.info/tech/you-more-likely-click-villains-7933748
26 | Artwork personalisation | Factiva | May 8 2016 | Daily Mail | https://www.dailymail.co.uk/news/article-3579027/The-power-picture-Netflix-reveals-picks-thumbnail-images-draw-90-seconds-browsing.html
27 | Artwork personalisation | Factiva | May 4 2016 | Express | https://www.express.co.uk/life-style/science-technology/666932/Netflix-Thumbnail-Preview-Image-Different
28 | Artwork personalisation | Google | May 3 2016 | The Verge | https://www.theverge.com/2016/5/3/11582382/netflix-thumbnail-test
29 | Artwork personalisation | Factiva | May 2 2016 | The Independent | https://www.independent.co.uk/life-style/gadgets-and-tech/news/netflix-thumbnails-ab-testing-pictures-experiments-design-watch-a7012966.html

Appendix B: Issues & criticisms in the media discourse

Category | Specific issues & criticisms | Media discourse sources | No. | Timeframe
Major issues & criticisms | Racial targeting on artwork | Major & minor general, entertainment and tech news | 12 | October 2018
Major issues & criticisms | Sexual orientation-based episode sequencing | Major general and entertainment news & minor tech & entertainment news | 7 | March 2019
Major issues & criticisms | Misrepresenting titles on artwork | Major & minor general and entertainment news | 12 | November 2018 & May 2019
Minor issues & criticisms | Poor recommendations | Minor entertainment & tech news | 4 | March 2017 to August 2018
Minor issues & criticisms | Attributing show performance to algorithmic targeting | Minor tech news | 2 | June 2019
Minor issues & criticisms | Anxiety over surveillance | Minor tech & entertainment news | 2 | January to May 2019

Total number of documents & reports: 39

Detailed breakdown of issues & criticisms in the media discourse

No. | Issue & criticism | Search source | Date of publication | Source | Source URL
1 | Racial targeting on artwork | Google | Oct 25 2018 | Slate | https://slate.com/culture/2018/10/stephen-colbert-netflix-algorithm-misleading-poster.html
2 | Racial targeting on artwork | Factiva | Oct 24 2018 | The National Post | https://nationalpost.com/entertainment/movies/is-netflix-racially-profiling-its-users
3 | Racial targeting on artwork | Google | Oct 23 2018 | Vanity Fair | https://www.vanityfair.com/hollywood/2018/10/netflix-artwork-personalization-backlash-racial-profiling
4 | Racial targeting on artwork | Google | Oct 23 2018 | Wired | https://www.wired.com/story/why-netflix-features-black-actors-promos-to-black-users/
5 | Racial targeting on artwork | Factiva | Oct 22 2018 | The New York Times | https://www.nytimes.com/2018/10/23/arts/television/netflix-race-targeting-personalization.html
6 | Racial targeting on artwork | Factiva | Oct 22 2018 | Fox News | https://www.foxnews.com/entertainment/netflix-denies-racism-accusations-after-users-noticed-cover-images-that-seemed-targeted-at-minorities
7 | Racial targeting on artwork | Google | Oct 22 2018 | Bloomberg | https://www.bloomberg.com/news/articles/2018-10-22/netflix-denies-tailoring-movie-promotions-based-on-users-race
8 | Racial targeting on artwork | Google | Oct 22 2018 | Deadline | https://deadline.com/2018/10/netflixs-artwork-personalization-attracts-online-criticism-1202487598/
9 | Racial targeting on artwork | Google | Oct 22 2018 | BBC | https://www.bbc.com/news/newsbeat-45939044
10 | Racial targeting on artwork | Google | Oct 22 2018 | The Independent | https://www.independent.co.uk/arts-entertainment/films/news/netflix-black-subscribers-targeted-posters-love-actually-a8594926.html
11 | Racial targeting on artwork | Factiva | Oct 21 2018 | The Guardian | https://www.theguardian.com/media/2018/oct/20/netflix-film-black-viewers-personalised-marketing-target
12 | Racial targeting on artwork | Google | Oct 18 2018 | Fast Company | https://www.fastcompany.com/90253578/is-netflix-racially-personalizing-artwork-for-its-titles
13 | Sexual orientation-based episode sequencing | Google | Mar 22 2019 | The Verge | https://www.theverge.com/2019/3/22/18277634/netflix-love-death-robots-different-episode-orders-anthology-show
14 | Sexual orientation-based episode sequencing | Google | Mar 21 2019 | SBS | https://www.sbs.com.au/topics/sexuality/fast-lane/article/2019/03/21/one-users-netflix-algorithm-has-got-lgbtiq-viewers-talking-about-data
15 | Sexual orientation-based episode sequencing | Factiva | Mar 20 2019 | AV Club | https://news.avclub.com/netlfix-no-the-changing-episode-order-has-nothing-to-1833444969
16 | Sexual orientation-based episode sequencing | Google | Mar 20 2019 | Deadline | https://deadline.com/2019/03/netflix-denies-shuffling-episode-order-of-love-death-robots-based-on-sexual-orientation-1202579065/
17 | Sexual orientation-based episode sequencing | Google | Mar 20 2019 | TV Line | https://tvline.com/2019/03/20/netflix-changes-episode-orders-not-based-on-sexuality/
18 | Sexual orientation-based episode sequencing | Google | Mar 20 2019 | Vanity Fair | https://www.vanityfair.com/hollywood/2019/03/netflix-sexuality-data
19 | Sexual orientation-based episode sequencing | Google | Mar 19 2019 | TechCrunch | https://techcrunch.com/2019/03/19/love-death-robots-experiment/
20 | Misrepresenting titles on artwork | Factiva | Nov 13 2018 | Business Insider | https://www.businessinsider.com/netflix-tests-liked-grace-and-frankie-without-jane-fonda-wsj-2018-11
21 | Misrepresenting titles on artwork | Factiva | Nov 13 2018 | The Times | https://www.thetimes.co.uk/article/jane-fonda-proves-a-turn-off-for-netflix-viewers-t2qwdd3wn
22 | Misrepresenting titles on artwork | Factiva | Nov 10 2018 | WSJ | https://www.wsj.com/articles/at-netflix-who-wins-when-its-hollywood-vs-the-algorithm-1541826015
23 | Misrepresenting titles on artwork | Google | Nov 10 2018 | Daily Mail | https://www.dailymail.co.uk/news/article-6375821/Netflix-refused-cut-Jane-Fonda-Grace-Frankie-ads-avoid-alienating-star.html
24 | Misrepresenting titles on artwork | Factiva | May 29 2019 | The Independent | https://www.independent.co.uk/arts-entertainment/tv/news/netflix-nicole-byer-nailed-it-whitewash-jacques-torres-wes-a8934311.html
25 | Misrepresenting titles on artwork | Factiva | May 29 2019 | TV Line | https://tvline.com/2019/05/29/nailed-it-whitewashed-thumbnail-removed-by-netflix/
26 | Misrepresenting titles on artwork | Google | May 29 2019 | BBC | https://www.bbc.com/news/entertainment-arts-48443979
27 | Misrepresenting titles on artwork | Google | May 29 2019 | Huffington Post | https://www.huffingtonpost.co.uk/entry/nicole-byer-netflix-whitewashing_uk_5cee50ebe4b0975ccf5dada5?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAAHC8PVL7NP8zTbn0CIBBqzjr9NUDpouitPOqYij_IqZvSazX09IBrDGNcypT-2DP81gRBjogJ0Y9Q40vDfjdq2v3HgTzdfRIyC19s8NI8T8MHRq1S7aFCOTJFGgcIonilbe7yZaiHkLup2wEKt4EJFeh0BvwvguSC9o7Fyl23b-u
28 | Misrepresenting titles on artwork | Google | May 28 2019 | Yahoo | https://ph.news.yahoo.com/netflix-nailed-it-host-nicole-byer-whitewashing-comedian-152333982.html
29 | Misrepresenting titles on artwork | Google | May 29 2019 | CNN | https://edition.cnn.com/2019/05/29/entertainment/nicole-byer-netflix-nailed-it/index.html
30 | Misrepresenting titles on artwork | Google | May 29 2019 | Hollywood Reporter | https://www.hollywoodreporter.com/live-feed/nicole-byer-happy-netflix-action-nailed-it-image-snafu-1214207
31 | Misrepresenting titles on artwork | Google | May 29 2019 | Eonline | https://www.eonline.com/ap/news/1045109/nailed-it-host-nicole-byer-calls-out-netflix-for-f-ked-up-and-disrespectful-advertising
32 | Poor recommendations | Google | Aug 22 2018 | Fast Company | https://www.fastcompany.com/90221403/netflixs-recommendations-suck-but-its-not-too-late-to-fix-them
33 | Poor recommendations | Google | Jul 12 2017 | Indiewire | https://www.indiewire.com/2017/07/netflix-amazon-algorithms-destroying-the-movies-1201853974/
34 | Poor recommendations | Google | Mar 24 2017 | The Outline | https://theoutline.com/post/1300/netflix-recommendation-algorithm?zd=1&zi=5hdhjj7s
35 | Poor recommendations | Google | Mar 20 2017 | Indiewire | https://www.indiewire.com/2017/03/netflix-ratings-system-thumbs-up-down-bad-idea-1201794830/
36 | Attributing show performance to algorithmic targeting | Google | Jun 12 2019 | TechRadar | https://www.techradar.com/news/russian-doll-star-thanks-netflix-algorithm-for-the-shows-success
37 | Attributing show performance to algorithmic targeting | Google | Jun 12 2019 | TechCrunch | https://techcrunch.com/2019/06/12/russian-doll-will-return-to-netflix-for-a-second-season/
38 | Anxiety over surveillance | Factiva | May 13 2019 | Indiewire | https://www.indiewire.com/2019/02/netflix-saved-black-mirror-bandersnatch-choice-made-viewers-1202043808/
39 | Anxiety over surveillance | Google | Jan 2 2019 | The Verge | https://www.theverge.com/2019/1/2/18165182/black-mirror-bandersnatch-netflix-interactive-strategy-marketing

Appendix C: Netflix user interaction coding scheme*
*Transformed to ten discursive themes, as presented in the findings chapter

Code | Sub-code (and other emergent subcategories*) | Description | Tally

Discursive formation: Implicit taste preference
Viewing history | Confusing the algorithm | Users behaving in ways that send confusing signals to the algorithm about their taste profile | 17
Viewing history | Training the algorithm | Users training the algorithm into reading a desired taste profile | 6
Viewing history | Misreading behaviour | Algorithms misinterpreting behaviour, e.g. misreading discontinued viewing as dislike for a title and thus demoting the title | 4
Viewing history | Categorising behaviour | Algorithms interpreting behaviour as characteristic of specific identities or categories | 2
Contextual | Location | Inclusion of location-aware signals in data | 2
Contextual | Other signals | Inclusion of background activities, moods, etc. in data | 2
Navigation & interaction | Rewind, fast forward and pause | Inclusion of video navigation in data | 1

Discursive formation: Explicit taste preference
Thumbs up & down | Malfunctioning | Content rated thumbs down still appearing in the recommendation feed | 9
Thumbs up & down | Unutilised | Function not used to achieve intended results | 8
Thumbs up & down | Training the algorithm | Users deliberately down-rating content they did not like | 4
Thumbs up & down | Misreading action | Either the removal of a whole genre after down-rating a title or recommending too much of the same content after up-rating a title | 4
Express interest & disinterest | Alter viewing history | Users requesting to edit their own viewing history to correct the algorithm | 10
Express interest & disinterest | Direct feedback | Users requesting to directly express interest or disinterest in certain genres and titles | 4
Express interest & disinterest | Star rating | Users requesting to bring back the star rating system | 4
External feedback | Reviews | Inclusion of external reviews such as IMDb in data | 2
External feedback | Social | Inclusion of a share function | 2
'My List' | Unutilised | Function not used to achieve intended results | 4

Discursive formation: Taste communities
Prioritising certain tastes | Specific genres (teen, crime) | Algorithms prioritising specific genres in recommendations, e.g. teen, crime, etc. | 5
Prioritising certain tastes | Regional filter | Content prioritised/deprioritised in certain regions | 1
Prioritising certain tastes | Niche taste neglected | Niche taste communities deprioritised | 1
Failure to categorise in the right taste community | Race & sexual orientation | Users assuming that they were incorrectly assigned to a demographic category | 1
Failure to categorise in the right taste community | Limited | Users assuming they were grouped in a limited taste community/limited titles | 2
Reading users into certain identities | Demographics | Users assuming that they were identified based on age, gender, sexual orientation, etc. | 9
Assignment to an inaccurate taste community | — | Users assuming that they were grouped in an inaccurate taste community | 4

Discursive formation: Altgenres
Specific genres | Accurate to taste | Genres aligned with tastes | 6
Specific genres | Suggestions | Users suggesting new genres | 3
Specific genres | Odd label | Genres with odd labels | 4
Specific genres | Odd genre | Odd connection between the title and the genre or between titles in a genre | 24
Specific genres | Improper tags | Content improperly tagged | 4
Specific genres | Title in multiple genres | Titles appearing in multiple genres | 3

Discursive formation: Criteria
Show cancellation | Algorithmic judgment | Users blaming the algorithms for impaired judgment | 18
Show cancellation | Weak promotion | Users criticising Netflix's weak push for a show within and outside its recommendation system | 25
Show cancellation | Transparency in metrics | Users demanding that Netflix be transparent with ratings and metrics | 11
Show cancellation | Three-season limit | Users assuming that the algorithm has a three-season limit | 7
Show cancellation | Underserved themes | Users assuming that Netflix is biased against underserved themes | 4
Show cancellation | Viewer feedback | Users demanding that Netflix account for user feedback | 4
Recency | — | Recency of a show as a key criterion over others | 2
Popularity | — | Popularity of a show as a key criterion over others | 2

Discursive formation: Recommendation
Accurate recommendation | — | Recommendations aligned with the users' tastes | 75
Algorithmic reading | — | Users curious about the algorithmic reading of their taste | 3
Bias in favour of Originals | — | Recommendations assumed to prioritise the Originals | 36
Buried by the algorithm | — | Recommendations missing after the next login | 20
Demographics | — | Recommendations claimed to be based on demographic categories | 2
Gender & sexual orientation | — | Recommendations claimed to be based on gender and sexual orientation | 11
External discovery | — | Shows discovered outside one's own profile, be it on social media, online media or another person's account | 16
Good shows that were not recommended | Need to search | Shows that needed to be searched for even if they align with the user's tastes | 17
Good shows that were not recommended | Shows aligned to tastes | Shows that would ideally be recommended as they aligned with the users' tastes | 19
Good shows that were not recommended | Viewed previous season of the show | Shows that would ideally be recommended given that the user has watched the previous episode/film | 13
Inconsistent recommendation | — | Recommendations inconsistent with one's tastes | 13
Odd recommendation | Odd connection in general | Recommendations oddly connected with viewing history | 68
Odd recommendation | Odd connection to push Originals | Recommendations force-fitted to push the Originals | 16
Odd recommendation | Race | Recommendations claimed to be based on race | 7
Repetitive recommendation | — | Recommendations that keep appearing despite being ignored and down-rated | 62
Poor recommendation | General | Recommendations deemed to be in bad taste | 44
Poor recommendation | Generic content | Recommendations suggesting generic content or content similar to what has already been consumed/recommended before | 21
Poor recommendation | Age-inappropriate | Recommendations that were not age-appropriate | 3
Poor recommendation | Limited tastes | Recommendations that limit the user to a few tastes | 12
Poor recommendation | Mediocre content | Recommendations pushing low-quality work | 8
Poor recommendation | Misaligned with taste | Recommendations either in contrast or unrelated to the users' taste | 76
Poor recommendation | Pushing Originals | Recommendations force-fitted to push the Originals | 24
Poor recommendation | Misreading taste | Recommendations misreading actions and behaviour as certain taste preferences, e.g. one horror film leading to every recommendation being about horror | 8

Discursive formation: Production*
Poor quality | — | Content deemed to be of low quality | 21
Algorithmic creation | — | Users assuming that the Originals were algorithmically produced | 15
Generic | — | Content based on universally appealing attributes or on a recent hit | 10
Good feedback | — | Content deemed to be of good quality | 6

Discursive formation: Artwork personalisation
General observation | — | General awareness that Netflix is customising title thumbnails | 18
Gender & sexual orientation | — | Artwork that seems to appeal to users' gender and sexual orientation | 14
Accurate recommendation | General | Artwork that appeals to users' tastes | 5
Accurate recommendation | Actor as hook | Artwork that uses actors to appeal to the user | 4
Accurate recommendation | Demographics | Artwork that seems to appeal to users' demographic identities | 4
Accurate recommendation | Race | Artwork that seems to appeal to users' racial identity | 3

Discursive formation: Evidence personalisation
Low score for taste-aligned shows | — | Low match score for titles aligned with users' tastes | 30
High score on off content | — | High match score for titles at odds with users' tastes | 8

Discursive formation: Sequence*
Experiment | — | General awareness that Netflix is customising the episode sequence | 10
Sexual orientation | — | Episode sequences that seem to appeal to users' sexual orientation | 9
Odd sequencing | — | Episode sequences arranged in an unusual way | 3
Incorrect | — | Episode sequences arranged in an incorrect way | 2

Discursive formation: Search*
Buried by algorithm | — | Difficulty finding specific titles | 22
Odd recommendation | — | Search results generating titles that only vaguely connect with the search terms | 7

Appendix D: Netflix user interaction geolocation on Twitter

Location | Number of tweets | Percentage
USA | 346 | 66.16%
UK | 73 | 13.96%
Canada | 22 | 4.21%
Australia | 21 | 4.02%
India | 8 | 1.53%
Germany | 7 | 1.34%
Ireland | 7 | 1.34%
France | 5 | 0.96%
South Africa | 5 | 0.96%
Denmark | 4 | 0.76%
Finland | 3 | 0.57%
Mexico | 3 | 0.57%
New Zealand | 3 | 0.57%
Argentina | 2 | 0.38%
Philippines | 2 | 0.38%
Belgium | 1 | 0.19%
Chile | 1 | 0.19%
Guam | 1 | 0.19%
Hong Kong | 1 | 0.19%
Iceland | 1 | 0.19%
Israel | 1 | 0.19%
Kenya | 1 | 0.19%
Lebanon | 1 | 0.19%
Netherlands | 1 | 0.19%
Nigeria | 1 | 0.19%
Pakistan | 1 | 0.19%
Portugal | 1 | 0.19%
Russia | 1 | 0.19%
Singapore | 1 | 0.19%
Spain | 1 | 0.19%
Taiwan | 1 | 0.19%
Tuvalu | 1 | 0.19%
Venezuela | 1 | 0.19%
