Surveillance in the digital age: Exploring positive outcomes of surveillance in the form of group-based recognition

Submitted by Josephine Ann Cooper to the University of Exeter as a thesis for the degree of Doctor of Philosophy in Psychology, in February 2020.

This thesis is available for Library use on the understanding that it is copyright material and that no quotation from the thesis may be published without proper acknowledgement. I certify that all material in this thesis which is not my own work has been identified and that no material has previously been submitted and approved for the award of a degree by this or any other university.

Signature: …………………………

ACKNOWLEDGMENTS

There are several special people who made it possible for me to complete this thesis and the research within it. I would first like to thank my supervisors, Andrew Livingstone and Mark Levine. Thank you for giving me the opportunity to embark on this research and for your encouragement throughout. I feel privileged to have learned and gained inspiration from your own enthusiasm, knowledge, and expertise. Aside from academic guidance, I would also like to express my sincere gratitude for the kindness you have both shown me throughout this project.

Outside of my research team, I would like to thank the SEORG group at Exeter; your advice and insight helped shape this thesis. I am particularly grateful to Anna Rabinovich and Teri Kirby for their invaluable guidance at my upgrade. My fellow PhD students at Exeter have also provided me with support and friendship whilst I worked on campus. I would like to thank everyone, especially for the much-needed trips to the campus shop to get some air and normalcy. A special thanks goes to Asha Ladwa, who was a sister to me whilst we shared an office together. I’ll always miss our gym sessions and cooking extravaganzas. I am also hugely grateful to Denise Wilkins, my mentor and friend; you are and have been relentlessly supportive.
Finally, I would like to thank those that have been by my side throughout the PhD and beyond. My parents, Sandra and Stephen Cooper, have been my cheerleaders throughout life. I am beyond lucky to have won the parental lottery and feel proud to call you mamgu (sorry!) and dad. I would also like to thank my brother, Bevis. Our marathon phone calls putting the world to rights have reminded me that there is life outside my PhD. My good friend Henry Maher also deserves a special thank you for never failing to make me laugh (and for being the best research assistant one could ask for). A big thank you also goes to my best friend Emma Banbrook. You are continually a beacon of support and confidence. Lastly, but certainly not least, I would like to thank my husband Will, who has been unfathomably patient, loving, and encouraging throughout this process.

ABSTRACT

Narratives surrounding algorithmic surveillance typically emphasise negativity and concerns about privacy. In contrast, we argue that current research underestimates the potentially positive consequences of algorithmic surveillance in the form of group-based recognition. Specifically, we test whether accurate algorithmic surveillance (i.e., surveillance that those surveilled believe mirrors their own self-concept) provides a vehicle for group-based recognition in two contexts: (1) those under outgroup surveillance and (2) surveillance from the perspective of stigmatised and misrecognised groups. In turn, we test whether this can lead to more positive (and less negative) feelings towards surveillance. Alongside this, we also test whether a countervailing negative pathway exists, whereby more accurate surveillance is associated with more privacy concern and, in turn, more negative (and less positive) feelings towards surveillance.
The final study also tests whether positive perceptions of accurate surveillance arising through group-based recognition are limited only to misrecognised groups, or whether they hold for people more generally. Across seven studies, we test the core hypothesis that group-based recognition from accurate surveillance provides a basis for positive reactions to algorithmic surveillance that countervails the negative pathway through privacy concern. Overall, we found support for the positive pathway, whereby more accurate surveillance was associated with more positive feelings towards surveillance through group-based recognition. The positive pathway was present for both typically recognised and misrecognised groups. We also found partial support for the negative pathway, whereby privacy concern was associated with less positive feelings towards surveillance. However, we did not find that surveillance accuracy was associated with privacy concern; one implication of this is that the presence of surveillance per se overwhelms any additional effect of surveillance accuracy. Additionally, surveiller social identity (ingroup vs. outgroup) influenced both the positive and negative pathways: surveillance from an outgroup was considered less trustworthy than ingroup surveillance, which in turn predicted less positive outcomes in the form of more privacy concern and less group-based recognition. This thesis challenges the current techno-pessimistic view that algorithms are inherently negative and contributes to research that endeavours to gain a greater understanding of society’s relationship with algorithms and artificial intelligence.

GENERAL INTRODUCTION AND SUMMARY

The processes that shape the public’s feelings towards algorithmic surveillance are poorly understood. Modern forms of surveillance use algorithms, which are integral to the online surveillance architecture.
Algorithmic surveillance enables organisations (both corporate and state) to gather and analyse our online behaviour through code (or a set of rules) that uses data to produce a specified output (Sandvig, Hamilton, Karahalios, & Langbort, 2015). Within a surveillance context, algorithms scan individuals’ data to identify patterns or correlations (Tene & Polonetsky, 2014), which are then used to categorise people into social groups. From these groups, organisations infer our characteristics and make predictions about our future behaviour (Lyon, 2003). Ultimately, algorithmic surveillance allows companies to recognise who we are and to tailor their response accordingly.

Websites often claim that algorithmic surveillance will ‘improve the user experience’ (Unidrain, 2018, para. 5), as material can be ‘tailored to your own specifications’ (The Independent, n.d., How do we use cookies section, para. 1) to ensure it is ‘relevant and engaging’ (Bright Horizons, n.d., Marketing section, para. 1). In other words, users are assumed to feel more positively towards surveillance because it can recognise who they are and what they want.

These narratives surrounding commercial surveillance highlight the potential for both positive and negative outcomes. On one hand, users are offered the opportunity of greater recognition; on the other hand, the harvesting of users’ data may threaten privacy. This thesis examines whether these two countervailing processes contribute to users’ feelings towards surveillance. Additionally, we predict that the accuracy of surveillance (i.e., the extent to which targeted material reflects the user’s identity) will affect these outcomes: individuals may simultaneously experience more privacy concern and more recognition when surveillance is more accurate. In this instance, accurate algorithmic surveillance functions as a double-edged sword, producing both positive and negative psychological outcomes.
Privacy concern

Surveillance has historically prompted discussions surrounding privacy. Digital privacy advocates argue that online mass surveillance threatens both individual liberty (Gillmor, 2014) and national security (Schneier, 2016). Some have taken this further: in 2015, the United Nations privacy chief accused surveillance practices of being worse than the dystopia illustrated in Orwell’s 1984 (Culpan, 2015). Indeed, growing concern for our online privacy has given rise to groups and movements dedicated to restoring online freedom. For example, the protest ‘The Day We Fight Back’ aimed to protect user privacy by exerting pressure on US lawmakers to restrict the state’s ability to engage in mass surveillance (Gillmor, 2014). Additionally, the American Civil Liberties Union (ACLU) has campaigned to end the United States of America (USA) Patriot Act, which gives the government greater powers to collect data from those who are not necessarily under suspicion (ACLU, n.d.). Algorithmic surveillance can thus, in some circumstances, create concern for privacy online, and this in turn can create animosity towards those surveillance systems.

Recognition

However, an alternative narrative (typically put forward by those conducting commercial surveillance) argues that algorithmic surveillance can foster positive feelings towards surveilling platforms, as it can enhance user recognition. In particular, this thesis focuses on group-based recognition: the extent to which others (specifically those from other groups) perceive the ingroup in a way that reflects the ingroup’s own self-concept (Tajfel, 1981).1 For example, the cookie disclaimer is a familiar form of algorithmic surveillance and an inevitable part of the online landscape for many internet users. The cookie pop-up notifies users that the website employs surveillance, and that continued use of the site implicitly provides consent to this. Prior