THE RULE OF LAW IN PLATFORM GOVERNANCE: AN EMPIRICAL EVALUATION OF THE MODERATION OF IMAGES DEPICTING WOMEN’S BODIES ON INSTAGRAM

Alice Elizabeth Amelia Witt LLB (Hons); BBus (International Business); GDLP; AFHEA

Submitted in fulfilment of the requirements for the degree of Doctor of Philosophy

School of Law, Faculty of Law

Queensland University of Technology

2020

For my parents. Thank you for everything.


KEYWORDS

The rule of law; platform governance; content moderation processes; women’s bodies; feminisms; Instagram; digital constitutionalism.


ABSTRACT

There are widespread concerns that images depicting women’s bodies are arbitrarily moderated, or regulated, on the social media platform Instagram. Many claims of potential arbitrariness are fuelled by the secrecy around Instagram’s governance practices and the relative dearth of empirical legal research into the platform’s moderation processes to date. In response to calls for data that can shed light on these concerns, this thesis empirically evaluates the moderation of images depicting women’s bodies on Instagram against the Anglo-American ideal of the rule of law, which is characterised by opposition to arbitrary power. Specifically, this thesis focuses on the well-established, albeit contested, rule of law values of formal equality, certainty, reason-giving, transparency, participation and accountability, the evaluation of which is enhanced through a feminist lens. In order to undertake this empirical investigation, an innovative black box methodology was developed, principally based on an input/output method that fuses legal theory with digital methods. After applying this methodology across two case studies comprising a total of 5,924 images, a concerning trend of inconsistent moderation is identified. In Case Study One, the results show that up to 22 per cent of images of female forms are potential false positives – images that do not appear to violate Instagram’s content policies but were removed. In Case Study Two, up to 56.8 per cent of images are potential false negatives – images that appear to violate Instagram’s content policies but were not removed. These results, among others, suggest that concerns around the risk of arbitrariness in processes for moderating content on Instagram might not be unfounded. Overall, this thesis argues that the lack of formal equality, certainty, reason-giving and user participation in the platform’s moderation processes, and Instagram’s largely unfettered power to moderate content with limited transparency and accountability, are significant normative concerns which pose an ongoing risk of arbitrariness for women and users more broadly. It is also argued that some images of female forms do not appear to be moderated in desirable ways from a feminist perspective. This thesis concludes by proposing ways that Instagram can improve its moderation processes and advocating for the continued development of digital methods for empirical legal analysis of platform governance. These steps are crucial not only to better inform the public about potential arbitrariness in content moderation processes, but also to help platforms better align their governance practices with rule of law and feminist values.


TABLE OF CONTENTS

THE RULE OF LAW IN PLATFORM GOVERNANCE: AN EMPIRICAL EVALUATION OF THE MODERATION OF IMAGES DEPICTING WOMEN’S BODIES ON INSTAGRAM ...... i
KEYWORDS ...... iii
ABSTRACT ...... iv
LIST OF FIGURES ...... viii
LIST OF TABLES ...... x
LIST OF ABBREVIATIONS ...... xi
STATEMENT OF ORIGINAL AUTHORSHIP ...... xii
ACKNOWLEDGEMENTS ...... xiii
PREVIOUSLY PUBLISHED WORK ...... xvi
CHAPTER ONE: INTRODUCTION ...... 1
   I BACKGROUND ...... 1
   II AIMS AND RESEARCH QUESTIONS ...... 10
   III SCOPE AND SIGNIFICANCE ...... 11
   IV STRUCTURE OF THIS THESIS ...... 19
CHAPTER TWO: THEORETICAL FRAMEWORK ...... 23
   I PLATFORMS GOVERN ...... 26
   II PLATFORM GOVERNANCE IS LAWLESS ...... 33
   III CONSTRAINTS ON POTENTIAL ARBITRARINESS: SELECTED VALUES OF THE RULE OF LAW ...... 40
      A Equality ...... 42
      B Certainty ...... 45
      C Reason-Giving ...... 48
      D Transparency ...... 50
      E Participation ...... 53
      F Accountability ...... 53
   IV CONTROVERSIES AROUND THE MODERATION OF IMAGES DEPICTING WOMEN’S BODIES ON INSTAGRAM ...... 55
   V ENRICHING THE EVALUATIVE FRAMEWORK OF THE RULE OF LAW: A FEMINIST PERSPECTIVE ...... 63
   VI CONCLUSION ...... 67
CHAPTER THREE: A BLACK BOX METHODOLOGY ...... 69
   I CONTENT IS MODERATED WITHIN A BLACK BOX ...... 71
   II THE RISKS OF AUTOMATED DECISION-MAKING ...... 82
   III THE IMPORTANCE OF BLACK BOX ANALYTICS ...... 89
   IV AN INPUT-OUTPUT METHOD BASED ON BLACK BOX ANALYTICS ...... 93
      A The Legality of Web Scraping ...... 95
   V CONCLUSION ...... 99
CHAPTER FOUR: AN EVALUATION OF THE MODERATION OF IMAGES DEPICTING WOMEN’S BODIES THAT ARE NOT EXPLICITLY PROHIBITED ON INSTAGRAM ...... 101
   I A HYPOTHESIS FOR EMPIRICAL INVESTIGATION ...... 103
   II METHOD ...... 104
      A Coding Scheme for Women’s Bodies ...... 106
      B Extrapolating Findings ...... 110
      C Methodological Limitations and Ethical Challenges ...... 111
   III RESULTS AND DISCUSSION ...... 113
      A Possible Explanations for Content Removal ...... 114
      B Normative Concerns from a Rule of Law Perspective ...... 120
      C A Highly Unpredictable Regulatory System ...... 124
   IV CONCLUSION ...... 126
CHAPTER FIVE: AN EVALUATION OF THE MODERATION OF EXPLICITLY PROHIBITED IMAGES DEPICTING WOMEN’S AND SOME MEN’S BODIES ON INSTAGRAM ...... 128
   I A HYPOTHESIS FOR EMPIRICAL INVESTIGATION ...... 130
   II METHOD ...... 131
      A Coding Scheme for Explicitly Prohibited Images ...... 134
      B Limitations ...... 137
   III RESULTS AND DISCUSSION ...... 138
      A Possible Explanations for Content Removal ...... 140
      B Normative Concerns from a Rule of Law Perspective ...... 144
      C Instagram’s Highly Unpredictable Regulatory Culture: A Trend of Inconsistent Moderation across Case Studies ...... 149
   IV CONCLUSION ...... 151
CHAPTER SIX: AN EVALUATION OF EMPIRICAL RESULTS FROM A FEMINIST PERSPECTIVE ...... 152
   I THE POSSIBILITIES OF DIGITAL FEMINISMS ...... 154
   II GENDER-BASED DOUBLE STANDARDS ...... 157
   III AUTONOMY DEFICITS AND NEGATIVE LIVED EXPERIENCES ...... 162
   IV BROADER UNCERTAINTIES AROUND THE NORMATIVE FOUNDATIONS OF DECISION-MAKING ...... 166
   V CONCLUSION ...... 176
CHAPTER SEVEN: CONCLUSION ...... 178
   I SUMMARY OF ARGUMENTS AND CONTRIBUTIONS ...... 179
   II RECOMMENDATIONS FOR INSTAGRAM AND ONLINE PLATFORMS MORE BROADLY ...... 184
      A Improve the Certainty of Content Policies ...... 188
      B Improve Reason-Giving Practices ...... 190
      C Prioritise Transparency and Stakeholder Participation ...... 192
      D Greater Accountability ...... 197
   III GOVERNMENT ACTION FOR GREATER TRANSPARENCY AND ACCOUNTABILITY: KEY PRINCIPLES ...... 200
   IV OPPORTUNITIES FOR FUTURE RESEARCH ...... 202
      A Extending the Theoretical Framework ...... 202
      B Digital Methods for Empirical Legal Analysis ...... 203
      C Realising the Project of Digital Constitutionalism ...... 204
   V CONCLUDING REMARKS ...... 205
BIBLIOGRAPHY ...... 207
   A Articles/Books/Reports ...... 207
   B Cases ...... 222
   C Legislation ...... 222
   D Treaties ...... 222
   E Other ...... 223


LIST OF FIGURES

Figure 1: An Account of Content Removal by Aarti Olivia Dubey (2016) ...... 2
Figure 2: Anatomy of an Instagram Post ...... 13
Figure 3: Head of Instagram Adam Mosseri Promotes the @i_weigh Movement in June 2019 ...... 59
Figure 4: Instagram Post by Co-Founder and Former CEO Kevin Systrom on 21 December 2017 ...... 61
Figure 5: Instagram Post by Co-Founder and Former CEO Kevin Systrom on 9 March 2017 ...... 62
Figure 6: Instagram Post by Head of Instagram Adam Mosseri (middle) on 2 October 2018 ...... 62
Figure 7: Instagram’s In-Built Reporting Feature for the Mobile App (iOS) ...... 76
Figure 8: Instagram’s Online Form for Non-Users to Report Potential Violations of Community Guidelines ...... 77
Figure 9: Components of a Black Box (Diakopoulos, 2014) ...... 90
Figure 10: Instagram Post by Instagram Co-Founder and former CEO Kevin Systrom on 30 April 2017 ...... 92
Figure 11: Total Dataset from Watched Hashtags (120,866 images) (Case Study One) ...... 95
Figure 12: Total Dataset for Manual Coding (9,582 images) with almost 50/50 Proportion (Case Study One) ...... 95
Figure 13: Total Coded Dataset (4,944 images) (Case Study One) ...... 95
Figure 14: Example Images in the Coded Sample (Case Study One) ...... 108
Figure 15: Photographic Figure Rating Scale (‘PFRS’) (Swami et al, 2008) ...... 109
Figure 16: Example ‘Butt Shots/Selfies’ in line with #cheekyexploits (Case Study Two) ...... 133
Figure 17: Example Pornographic Content (Case Study Two) ...... 136
Figure 18: Example Images Coded into the Partially Nude or Sexually Suggestive Category (Case Study Two) ...... 141
Figure 19: Images Depicting Women Actively Breastfeeding by Removed and Not Removed Categories ...... 147
Figure 20: Feminist Memes from a Search of #feministmemes (a Public Hashtag) on Instagram ...... 155
Figure 21: An Instagram Post by Director of International Freedom of Expression at the Electronic Frontier Foundation Jillian York on 15 April 2018 ...... 158
Figure 22: Anti-Censorship Activists Outside Instagram’s New York Headquarters in June 2019 ...... 158
Figure 23: Micol Hebron’s Genderless Nipple Template ...... 159
Figure 24: Examples of Apparent ‘Self-Sensorship’ (Olszanowski, 2017) ...... 164
Figure 25: Instagram Post by Poet and Artist Rupi Kaur (2015) ...... 170
Figure 26: Examples of the ‘Neoliberal Body’ (Harjunen, 2016) ...... 171
Figure 27: Facebook Advertisement in the London Underground, July 2018 (Image from Author’s Personal Collection) ...... 187
Figure 28: Apology Letter to the Public in a Mainstream Newspaper after the 2018 Facebook-Cambridge Analytica Scandal ...... 187
Figure 29: Reporting Measures for Platforms as Recommended by the 2018 Santa Clara Principles ...... 194


LIST OF TABLES

Table 1: Removed and Not Removed Images by Category as Percentage of the Coded Dataset (Observed) and Probability or Risk of Removal as Percentage of Extrapolated General Population (Expected) (Case Study One) ...... 115
Table 2: Logistic Regression Predicting the Likelihood of Content Removal for Thematic Categories (with Overweight as the Reference Group) (Case Study One) ...... 115
Table 3: Removed and Not Removed Images by Expressly Prohibited Category (Percentage Breakdown) (Case Study Two) ...... 139
Table 4: Logistic Regression Predicting the Likelihood of Content Removal for Thematic Categories (with Male as the Reference Group) (Case Study Two) ...... 140


LIST OF ABBREVIATIONS

AI: Artificial Intelligence
API: Application Programming Interface
BoPo: Body Positive
DMRC: Digital Media Research Centre
GNI: Global Network Initiative
ICT: Information, Computer and Technology
PFRS: Photographic Figure Rating Scale
QUT: Queensland University of Technology
UK: United Kingdom
UN: United Nations
US: United States

Other
‘Krieger’: Instagram Co-Founder and former Head of Engineering Mike Krieger
‘Mosseri’: Current Head of Instagram Adam Mosseri
‘Systrom’: Instagram Co-Founder and former CEO Kevin Systrom
‘Zuckerberg’: Facebook Co-Founder and Current CEO (and Chairman) Mark Zuckerberg


STATEMENT OF ORIGINAL AUTHORSHIP

The work contained in this thesis has not been previously submitted to meet the requirements for an award at this or any other higher education institution. To the best of my knowledge and belief the thesis contains no material previously published or written by another person except where due reference is made.

Signed:

QUT Verified Signature

Alice Witt

Date: 6 April 2020


ACKNOWLEDGEMENTS

I am very grateful to have received a Research Training Program (‘RTP’) Scholarship (formerly an Australian Postgraduate Award) to undertake this research. The RTP Scholarship, along with a Grant-In-Aid scholarship from QUT’s Faculty of Law in 2018 and the DMRC waiving my attendance fee to several of their summer schools, significantly reduced the financial pressure on me throughout my doctoral studies. Now at the end of my PhD, I would like to thank a number of people for helping me through the highs and lows of the last few years. First, I am especially indebted to my parents and sisters for their unwavering love, support and generosity, and to my partner Ed for being supercalifragilisticexpialidocious. The backing of my friends and family has played an enormously significant role in helping me reach this point. Second, I would like to thank my outstanding supervisory team: Professor Nicolas Suzor, Dr Anna Huggins and Professor Patrik Wikström. I believe that a doctoral student’s experience of candidature is often only as good as their supervisors and, in this regard, I have been exceptionally lucky. It is important to me that I acknowledge the work of each of my supervisors, beginning with my principal supervisor Nic.

Towards the end of my undergraduate law studies in 2014, when in truth I was not particularly enjoying law school, I began studying a unit entitled Theories of Law. It was in this unit that Nic first introduced me to the Anglo-American ideal of the rule of law and feminisms – two broad strands of thought that I weave throughout this thesis. My time studying Theories of Law was formative for reasons that largely eluded me at the time, but I knew that my perspective on law school was more positive for it. In hindsight, I believe this was the case because Nic encouraged me to critically evaluate the law, rather than assume that a law is good by virtue of it being a law. It was Theories of Law, as contrived as it might sound, that was a genesis point, not just for my own critical legal thinking, but also for this research.

And so, Nic, I want to sincerely thank you for playing such a pivotal role in my legal education and helping me translate the interests that Theories of Law piqued into a concrete research project. You have always encouraged me to do better: to think more analytically, write more precisely and succinctly, and tackle interdisciplinary work across the fields of law and digital media. Thank you also for telling me when my work required improvement and, most importantly, guiding me through the processes of making improvements. Finally, thank you for always having time for me, looking out for me and providing such exceptional, holistic support from the first day of my PhD candidature.


Anna, thank you for your incredibly detailed and constructive approach to supervision. I am grateful for you engaging so thoughtfully and carefully with the arguments, structure and style of my thesis. I have learnt so much about academic writing and clarity of expression by reading your detailed feedback. Thank you also for the spirit of generosity that you bring to supervision. I appreciate all that you have done to encourage me, look out for me and advocate for what is best for my career.

Patrik, thank you for bringing your excellent interdisciplinary perspective to my research and helping me navigate the challenges of developing and applying digital methods for legal analysis. I really appreciate your positive approach to supervision. Together with Nic and Anna, you form a supervisory dream team! I look forward to our future collaborations.

I feel immensely grateful to my G-fam, the wonderful group of fellow PhD candidates and friends who I have had the pleasure of sharing an office with over the last few years. I would particularly like to thank my three-year desk buddy Rosalie Gillet, as well as Antonia Horst, Rachel Hews, Michelle Ringrose, Jessica Thiel, Rahul Singh, Laura McGillivray, Eliana Close, Matty Morgan, Bridget Weir, Stephanie Jowett and Brydon Wang. Many thanks also to my DMRC-fam: Edward Hurcombe, Ariadna Matamoros Fernandez, Silvia Montana Nino, Ehsan Dehghan, Kelly Lewis, Fiona Suwana, Jarrod Walczer, Aljosha Karim Schapals, Bondy Kaye, Sofya Glazunova, Aleesha Rodriguez, Jean Burgess, Stefanie Duguay and Michael Dezuanni. I am also grateful for the professional and personal support offered to me by Kylie Pappalardo, Hope Johnson, Rachel Hews, Joanne Gray, Shih-Ning Then and Tom Cochrane, as well as Bridget Lewis and Mark Burdon for their mentorship.

Thanks also to my lovely friends outside of academia who have so kindly cheered me on: Sigourney Easthope, Megan Rowe, Katie Paske, Suzy Wood, Anja Schaefer, Leonie Smith, Alice Peterson, Louise Dark, Christina Varidel, Charlie Harkness, Michelle Anderson, Ashlea Piotrowski, Laura Fullerton, Kate Pashevich, Yonaira Rivera, Julia Krauß, Ashleigh Larkin, Susan de Laat, Aimee Catt, Jill Thorley and Inae Grace Kim.

I would like to thank several colleagues and friends for providing feedback on earlier versions of this thesis. I am grateful to Tim Highfield and Angela Daly for their encouraging and constructive feedback at my Confirmation of Candidature; Daniel Joyce for reviewing my work at the 2017 Law, Technology and Innovation Junior Scholars Forum at UNSW’s Allens Hub; and TJ Thomson for his very thoughtful feedback at my Final Seminar. I would also like to thank my fellow participants in the Oxford Internet Institute’s 2018 Summer Doctoral


Programme (‘OII SDP’), so many of whom offered helpful comments on my research, and Victoria Nash, the OII SDP Course Director. Thanks also to Matthew Rimmer for promoting my research, and Catherine McKenzie and Mayuko Bock for their excellent administrative assistance. Finally, I would like to thank the examiners of this thesis for taking the time to read and engage with my work.


PREVIOUSLY PUBLISHED WORK

Portions of this thesis, including most of Chapter Four and parts of Chapters Two and Three, have been published in the following journal article:

Witt, Alice, Nicolas Suzor and Anna Huggins, ‘The Rule of Law on Instagram: An Evaluation of the Moderation of Images Depicting Women’s Bodies’ (2019) 42(2) UNSW Law Journal 557.

In this co-authored article with my principal supervisor, Professor Nicolas Suzor, and my associate supervisor, Dr Anna Huggins, I was the lead author and had overall intellectual carriage of the work. I was responsible for, inter alia, data analysis, inter-coder reliability testing and drafting most of the paper. I was also the corresponding author for this article, which involved addressing the majority of revisions and managing the progress of the document until publication. Professor Suzor’s contributions primarily focused on the infrastructure for data collection and reviewing those aspects of the article specifically relating to platform and internet governance. Dr Huggins’ contributions focused primarily on structure, drafting some aspects of the article relating to the rule of law and supervising aspects of the review process. These components are peripheral to the research questions in this thesis.


CHAPTER ONE: INTRODUCTION

I BACKGROUND

In or around 2016, Aarti Olivia Dubey, a Singaporean-Indian body positive and self-care advocate, posted behind-the-scenes and other images of her bikini body photoshoot to the social media application (‘app’,1 or ‘platform’)2 Instagram.3 The images are seemingly unobjectionable in the way that they depict Dubey smiling with her friends, as shown in Figure 1.4 However, after she posted the images, Dubey found that her content had been removed. She recounts:

When I checked into Instagram, a screen popped up informing me that a post of mine was removed due to violation of community guidelines. I was really confused -- what could I have possibly posted that violated the guidelines? When I took a gander at my images, my heart sank. This was not some pornographic image, it was not filled with gore or violence, it did not do anything other than show three smiling fat chicks in swimwear that can hardly be termed as "lewd".5

One of the most striking aspects of Dubey’s account is her confusion about why her images, which did not appear to violate content policies, were removed from Instagram. This puzzlement was compounded by the platform allegedly sending an apology email to Dubey that claimed content removal, in this case, was ultimately ‘a mistake by a staff member’.6 Perhaps the most notable aspect of this account is the deeply personal impact that the outcomes of content moderation can have on users. Ratna Devi Manokaran, one of the other women depicted in the images at issue, explains: ‘This Instagram episode made me feel as if I should not be allowed to exist in a fat body and be happy and confident at the same time. Deleting our

1 In a recent publication, Instagram described itself as ‘a social media app used to share photos, videos and messages’: see Instagram and National PTA, ‘Know How to Talk with Your Teen about Instagram: A Parent’s Guide’, Instagram (Guide, 2019) 4 . 2 Tarleton Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (Yale University Press, 2018) 18. 3 Aarti Olivia Dubey, ‘Did We Violate Instagram Guidelines by Being “Too Fat” to Wear a Swimsuit?’ Huffington Post (online at 14 June 2016) . 4 It is important to note that I have redacted the faces and usernames of the subjects of every image in this study in order to promote their privacy. I explain how I collected images in Chapter Three, Part IV; Chapter Four, Part II; and Chapter Five, Part II. 5 Dubey (n 3). 6 According to Dubey, ‘After the whole hoopla, Instagram sent me a silly excuse of an apology via email almost two weeks after the post was removed. Their explanation? It was a mistake by a staff member. I call bullshit. You cannot make so many "mistakes" for so many plus size folks online -- I see through that pathetic apology’: Dubey (n 3).

picture – one that was really a joyful keepsake for us – said that it was wrong to exist in this form’.7 Of course, these quotes relate to only one instance of content moderation on Instagram, a platform that has the immensely complex task of moderating millions of pieces of content from all corners of the globe on a daily basis.8 In recent years, however, it has become increasingly apparent that this example of potentially arbitrary content removal is one of many.

7 Ratna Devi Manokaran also said: ‘I have seen countless images on Instagram that really go against the guidelines. I felt that, on some level, people found our picture vulgar and that made me really sad. I've heard countless times that when a bigger girl wears something, it becomes too sexy or slutty, but any other girl would look appropriate in it. I wish Instagram would actually spend their time weeding out images that are really harmful instead of fat shaming us by removing our pictures!’ See Dubey (n 3). 8 For example, Instagram users take or upload around 95 million photos and videos per day: Dave Lee, ‘Instagram Users Top 500 Million’, BBC (online at 21 June 2016) ; Instagram, ‘A New Look for Instagram’, Info Center (Press Release, 11 May 2016) . On Facebook, which is Instagram’s parent company, users flag, or report, around one million pieces of content per day: Catherine Buni and Soraya Chemaly, ‘The Secret Rules of the Internet’, The Verge (online at 13 April 2016) .


There is a full spectrum of claims about the moderation of images depicting women’s bodies on Instagram. Some online news publications assert that the platform, a subsidiary of Facebook, Inc. with over one billion monthly active users,9 is ‘removing’10 – also described as ‘banning’,11 ‘censoring’12 and ‘deleting’13 – depictions of female forms in seemingly arbitrary or biased ways. Others accuse Instagram of ‘blatant fat-phobia’14 and ‘fat-shaming’15 women in ways that could potentially reinforce heteronormative body standards.16 These allegations of bias are concerning as they suggest that the platform is amplifying the expression of some users while silencing others.17 Moreover, such assertions are surprising because they appear to be in tension with Instagram’s community-oriented rhetoric, which pivots around the themes of openness, connectivity and diversity.18 Such allegations are also unexpected from a commercial point of view given that the platform is particularly popular with women.19

9 Instagram, ‘Welcome to IGTV’, Info Center (Press Release, 20 June 2018) . 10 See, eg, Marissa Muller, ‘Fitness Blogger Mallory King Says Instagram Removed Her Proud Cellulite Selfie’, Allure (online at 28 February 2017) . 11 See, eg, Shauna Anderson, ‘Why Was THIS Photo Banned from Instagram? The Reason Will Make You Shake Your Head in Disbelief’, Mamamia (online at 23 May 2014) . 12 For example, Raffoul states that ‘[p]osting on social media does have its limits—Instagram has been found to unfairly censor posts by plus-size women, claiming that the content is “sexually suggestive”’: Nicolas Raffoul, ‘Loving Myself and My Selfies’, The McGill Tribune (online at 10 September 2019) ; Caroline Bologna, ‘After Instagram Censored Her Photo, Mom Speaks Out about Body Image’, Huffington Post (online at 24 November 2016) ; Kasandra Brabaw, ‘This Curvy Muslim Woman Is Speaking Out about Censorship on Instagram’, Refinery29 (online at 8 March 2017) . 13 See, eg, Sarah Buchanan, ‘Instagram Deleted This Photo of a Woman’s Cellulite – But She Has Best Response’, Daily Star (online at 4 March 2017) . 14 See, eg, Gabrielle Olya, ‘Curvy Blogger Angry That Her Bikini Photo Was Taken Down by Instagram: 'It's Blatant Fat-Phobia’, People (online at 6 June 2016) . 15 See, eg, Kristen Brown, ‘Why Did Instagram Fat-Shame These Women in Bikinis?’ Splinter (online at 6 January 2016) . 16 See, eg, Lora Grady, 'Women are Calling out Instagram for Censoring Photos of Fat Bodies: The Double Standards are Real ', Flare (online at 18 October 2018) . 17 See, eg, Electronic Frontier Foundation and Visualizing Impact, ‘A Resource Kit for Journalists’, onlinecensorship.org (Web Page, September 2017), [Issue Area, 1. The Human Body] . 18 Take, for instance, the community-oriented rhetoric in a tweet by Adam Mosseri, Head of Instagram: ‘Every day, we observe the power of expression on Instagram. Young people come to Instagram to express different sides of themselves, and I' proud of how our platform helps amplify diverse voices’: @mosseri (Adam Mosseri) (, 18 October 2019) . 19 In a United States-based study, of the 35 per cent of adults who say they use Instagram, 39 per cent are women and 30 per cent are men: see Aaron Smith and Monica Anderson, Appendix A: Detailed Table,


By contrast, some publications in news media claim that the platform is democratising body standards and creating a positive space for the depiction of diverse female forms.20 Take, for instance, the body positive (‘BoPo’) movement on Instagram,21 which ‘is a growing social media trend that seeks to challenge dominant societal appearance ideals and promote acceptance and appreciation of all bodies and appearances’.22 However, other news publications show that even some normative images of female forms, including thin-idealised depictions of women’s bodies, are removed from Instagram.23 This chorus of claims, any or all of which could be true, continues to breed confusion about how content is moderated in practice, including which party removes content, for what reasons and in what ways.24 The public’s lack of knowledge is concerning because, ‘on a day-to-day basis, the rules that apply most directly to people on the internet are the rules set and enforced by intermediaries’,25 including platforms. It becomes particularly hard for users to understand content policies when apparently similar expression through content is moderated in different ways.

Pew Research Center (Online Report, 1 March 2018) . See also Stevie Chancellor et al, ‘#thyghgapp: Instagram Content Moderation and Lexical Variation in Pro-Eating Disorder Communities’ (Conference Paper, ACM Conference on Computer-Supported Cooperative Work and Social Computing, 1 March 2016) 1202 ; Hannah Seligson, ‘Why Are More Women than Men on Instagram?’ The Atlantic (online at 7 June 2016) . 20 See, eg, Maya Salam, ‘Why “Radical Body Love” Is Thriving on Instagram’, The New York Times (online at 9 June 2017) ; Jennifer Webb et al, ‘Fat Is Fashionable and Fit: A Comparative Content Analysis of Fatspiration and Health at Every Size Instagram Images’ (2017) 22 Body Image 53, 54. 21 Cohen et al explain that the body positive ‘…movement stems from the 1960s feminist-grounded fat acceptance movement that emerged in reaction to the rise in anti-fat discourse in Canada and the United States at the time. The fat acceptance movement aimed to encourage critical debate about societal assumptions of body image and protest discrimination against fat people. Similarly, body positivity aims to challenge the prevailing thin-ideal messages in the media and foster acceptance and appreciation of bodies of all shapes, sizes, and appearances’: Rachel Cohen et al, '#bodypositivity: A content Analysis of Body Positive Accounts on Instagram' (2019) 29 Body Image 47, 48. It should be noted that #effyourbeautystandards, which model and activist Tess Holliday (@tessholliday) founded in 2011, is one of the most prominent BoPo hashtags on Instagram with over four million posts as of October 2019. News publications often refer to Holliday as one of the leaders of the BoPo movement on Instagram: see, eg, Salam (n 20). 22 Cohen et al (n 21); Amy Brech, ‘Reclaim Your Belly: The Body Positive Movement Taking Instagram by Storm’, Grazia (online at 6 June 2017) . 23 See, eg, Ellie Cambridge, ‘Model “Too Sexy for Instagram” Is Banned from the Social Media Site Days after Racy Sideboob Pics’, News.com.au (online at 18 August 2017) . 24 See generally Sarah T Roberts, Behind the Screen: Content Moderation in the Shadows of Social Media (Yale University Press, 2019). 25 Nicolas Suzor, Lawless: The Secret Rules that Govern Our Digital Lives (Cambridge University Press, advance version, copy on file with author) 140 .


Controversies around the appropriateness of images depicting women are part of broader concerns about whether user-generated content is moderated by digital media companies in ways that are free from arbitrariness.26 Content moderation refers to the processes through which platform executives and their moderators set, maintain and enforce the bounds of appropriate content based on a range of factors, including platform-specific rules, cultural norms and legal obligations.27 Decisions around the appropriateness of content are ultimately regulatory decisions in that they attempt to influence or control the types of content users see and how and when they see it.28 The problem is that platforms moderate content within a ‘black box’29 that obscures internal decision-making processes from the view of over three billion social media users around the globe – a number of people that exceeds the population of any nation-state.30 However, in stark contrast to traditional governments,31 there are limited checks and balances on the power that platforms exercise over users’ expression through content.32

The decisions that platforms make about the appropriateness of content matter because they have the potential to significantly impact the lives of everyday users. Individual pieces of content can be powerful vehicles for the self-expression of users, including that pertaining to

26 See, eg, Nicolas P Suzor, 'Digital Constitutionalism: Using the Rule of Law to Evaluate the Legitimacy of Governance by Platforms' (2018) 4(3) Social Media + Society (advance) 1; Jessica Anderson et al, ‘Censorship in Context: Insights from Crowdsourced Data on Social Media Censorship’ (Research Report, Onlinecensorship.org, 16 November 2016) ; Ranking Digital Rights, ‘2019 Corporate Accountability Index’ (Research Report, May 2019) . 27 Alice Witt, Nicolas Suzor and Anna Huggins, ‘The Rule of Law on Instagram: An Evaluation of the Moderation of Images Depicting Women’s Bodies (2019) 42(2) UNSW Law Journal 557; Sarah T Roberts, 'Content Moderation' in Laure A Schintler and Connie L McNeely (eds), Encyclopaedia of Big Data (Springer, advance) 1 ; Alyssa Miranda, ‘A Keyword Entry on “Commercial Content Moderators”’ (2017) 2(2) iJournal . 28 Witt, Suzor and Huggins (n 27) 557. See also Kate Klonick, ‘The New Governors: The People, Rules, and Processes Governing Online Speech’ (2018) 131 Harvard Law Review 1598, 1602. 29 See generally Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press, 2015); Marjorie Heins, ‘The Brave New World of Social Media Censorship’ (2014) 127 Harvard Law Review 325, 326; Sarah T Roberts, ‘Digital Detritus: “Error” and the Logic of Opacity in Social Media Content Moderation’ (2018) 23(3) First Monday . 30 For statistics on global social media use, see We Are Social and Hootsuite, ‘Global Digital Report 2019’ (Online Report, 30 January 2019) ; Cohen et al (n 21) 47. For population statistics, see The World Bank, ‘Population, Total’ (Web Page, 2019) . 31 For a comparison between ‘Old Governors’ and ‘New Governors’, see Thomas Kadri and Kate Klonick, ‘Facebook v Sullivan: Public Figures and Newsworthiness in Online Speech’ (2019) Southern California Law Review (forthcoming) . 32 See, eg, Nicolas Suzor, Lawless: The Secret Rules That Govern our Digital Lives (Cambridge University Press, 2019) 11.

their identity, experiences and belief systems,33 who are far from disembodied data points.34 A decision to remove content can, directly or indirectly, amplify certain forms of expression and silence others, in ways that can have a range of detrimental impacts on users across online and offline domains.35 Content can also be an important conduit for users’ identity development, social interactions and participation in public discourses of the day.36 More broadly, the decisions made by platform executives and their moderators – sometimes referred to as ‘screeners’, ‘content reviewers’ and ‘community managers’37 – can influence cultural, social and other norms that extend beyond their networks. For these reasons, among others that I will explore in this thesis, decisions around content should be subject to greater public scrutiny.

In this context, there are increasing calls for empirical analyses that can help the public to better understand whether rules around content are enforced in ways that are free from arbitrariness, and to identify the real impacts that moderation can have on users as the subjects of ‘platform governance’.38 While there is some publicly available data about content moderation, largely in platforms’ transparency reports, the quality of this information is often limited.39 Instagram’s transparency practices are especially poor: the platform does not publish transparency reports independently of its parent company, or clearly disclose the volume or nature of actions taken to remove content from its network.40 An added complication is that the internal workings of moderation processes, including the specific policies that moderators follow and the development and deployment of artificial intelligence (‘AI’) systems, are proprietary and not disclosed to the general public.41 These factors, among others, make it particularly difficult for the public to obtain meaningful data on the regulatory decisions that are made about content.

33 See generally David Kaye, Speech Police: The Global Struggle to Govern the Internet (Columbia Global Reports, 2019). 34 Gillespie (n 2) 94; ORBIT, 100+ Brilliant Women in AI & Ethics (Conference Report, 2019) 3. 35 Witt, Suzor and Huggins (n 27) 559. 36 See generally Joseph Seering et al, 'Moderator Engagement and Community Development in the Age of Algorithms' (2019) New Media & Society 1; Witt, Suzor and Huggins (n 27) 592. 37 Roberts, Behind the Screen: Content Moderation in the Shadows of Social Media (n 24) 39. 38 Gorwa defines ‘platform governance’ as ‘a concept intended to capture the layers of governance relationships structuring interactions between key parties in today’s platform society, including platform companies, users, advertisers, governments, and other political actors’: Robert Gorwa, ‘What Is Platform Governance’ (2019) 22(6) Information, Communication & Society 854. 39 See, eg, Ranking Digital Rights (n 26) 4. 40 Crocker et al, Who Has Your Back? Censorship Edition 2019 (Online Report, 12 June 2019) Electronic Frontier Foundation ; See, eg, Ranking Digital Rights (n 26). 41 Roberts, Behind the Screen: Content Moderation in the Shadows of Social Media (n 24) 37.


In response to calls for data that can shed light on content moderation in practice, this thesis empirically investigates whether images that depict women’s bodies are moderated on Instagram in a way that aligns with the Anglo-American ideal of the rule of law. The primary purpose, or telos, of this ideal, which legal scholars traditionally conceptualise in terms of public law,42 is to limit, control or restrain the potentially arbitrary exercise of governing power.43 Arbitrariness occurs when power is exercised unpredictably, or when it is exercised in a way that takes no account of the perspectives and interests of affected parties.44 This raises the important question of whether the rule of law should apply in the context of Instagram and other social media platforms, which are privately owned and governed. In this thesis, I follow Krygier’s explicitly teleological approach to the rule of law, which underlines that the threats of arbitrariness in the exercise of governing power with which this legal ideal is concerned are not a state monopoly.45 According to Krygier, if we are concerned with addressing the risk of arbitrariness in the exercise of power, it should not matter whether the source of that power is public or private.46 This approach is persuasive not only because it recognises that governance practices transcend specific bodies of law, especially in ‘networked’47 environments like the internet, but also because state and non-state actors, like Instagram, can exercise power in ways that can harm individuals or groups.

Against this backdrop, I contend that it is appropriate to evaluate processes for moderating content against the Anglo-American ideal of the rule of law as content moderation is a form of regulation over users, and the problems of content moderation are problems of governance.48 The governing power of platforms has grown to such an extent that they have become ‘the new governors’49 of the digital age.50 As there is no universal set of rule of law values, I evaluate

42 Joseph Raz, The Authority of Law: Essays on Law and Morality (Oxford University Press, 1979) 212. 43 Krygier posits that the telos of the rule of law is its opposition to arbitrary power, irrespective of the specific legal and institutional features that accompany it: see, eg, Martin Krygier, ‘The Rule of Law: Legality, Teleology, Sociology’ in Gianluigi Palombella and Neil Walker (eds), Relocating the Rule of Law (Hart Publishing, 2009) 45; Martin Krygier, ‘Four Puzzles about the Rule of Law: Why, What, Where? And Who Cares?’ in James E Fleming (ed), Getting to the Rule of Law (New York University Press, 2011) 64; Martin Krygier, ‘Why the Rule of Law Is Too Important to Be Left to Lawyers’ [2013] (4) Law of Ukraine: Legal Journal 18, 20–1. 44 Krygier, ‘Why the Rule of Law Is Too Important to Be Left to Lawyers’ (n 43) 34. 45 Martin Krygier, ‘The Rule of Law: Pasts, Presents, and Two Possible Futures’ (2016) 12 Annual Review of Law and Social Science 199, 221. 46 Ibid; Witt, Suzor and Huggins (n 27) 564. 47 See, eg, Clifford Shearing and Jennifer Wood, ‘Nodal Governance, Democracy, and the New “Denizens”’ (2003) 30 Journal of Law and Society 400, 408. 48 See Witt, Suzor and Huggins (n 27) 557. 49 Klonick (n 28) 1598. 50 See, eg, Lawrence Lessig, Code and Other Laws of Cyberspace (Basic Books, 1999) 220; James Grimmelmann, ‘Regulation by Software’ (2005) 114 Yale Law Journal 1719; Colin Scott, ‘Regulation in

content moderation processes against the normative aspirations of formal equality, certainty, reason-giving, transparency, participation and accountability (‘my rule of law framework’),51 which are well-established in Western democratic discourse.52 I argue overall that any attempt by platforms to moderate content, and to govern their networks more broadly, should adhere to these basic values.53 The rule of law framework in this thesis reinforces the desirability of users being informed of the factors that influence whether their content is visible.54

The rule of law is a valuable theoretical lens for evaluating ongoing concerns about whether user-generated content is moderated, or regulated, by online platforms in ways that are free from arbitrariness.55 A foremost benefit of this legal ideal is that it institutionalises constraints on arbitrariness in the exercise of power, across public and private domains, and irrespective of the specific legal and institutional features that accompany it.56 This means that different societal actors can use a rule of law lens to find common ground without importing the same, often onerous, standards that apply to nation-states.57 In other words, this ideal can be agile. Additionally, opposition to arbitrary power in rule of law discourse has been regarded as integral to warding off tyranny throughout centuries of political and legal thought,58 and is very much alive in contemporary Western debates about the dangers of lawless, capricious or unchecked governing power.59 It should be noted here that this ideal cannot provide

the Age of Governance: The Rise of the Post-regulatory State’ in Jacint Jordana and David Levi-Faur (eds), The Politics of Regulation: Institutions and Regulatory Reforms for the Age of Governance (Edward Elgar Publishing, 2004) 145. 51 See, eg, Tom Bingham, The Rule of Law (Penguin Books Limited, 2011); Richard H Fallon Jr, ‘“The Rule of Law” as a Concept in Constitutional Discourse’ (1997) 97 Columbia Law Review 1, 8. See further the discussion in Chapter Two, Part III. 52 Jeremy Matam Farrall, United Nations Sanctions and the Rule of Law (Cambridge University Press, 2007) 40–1. For an application of rule of law values in a global administrative la context, see Anna Huggins, Multilateral Environmental Agreements and Compliance: The Benefits of Administrative Procedures (Routledge 2017) 17-25. 53 Recent initiatives that incorporate these values, by varying degrees and in different ways, include: ACLU Foundation of Northern California et al, The Santa Clara Principles on Transparency and Accountability in Content Moderation (7 May 2018) ; Electronic Frontier Foundation et al, Manila Principles on Intermediary Liability (Web Page, 24 March 2015) . 54 Suzor, 'Digital Constitutionalism: Using the Rule of Law to Evaluate the Legitimacy of Governance by Platforms' (n 26) 2. 55 See, eg, Ibid; Ranking Digital Rights (n 26); Anderson et al (n 26). 56 See, eg, Krygier, ‘The Rule of Law: Legality, Teleology, Sociology’ (n 43) 45; Krygier, ‘Four Puzzles about the Rule of Law: Why, What, Where? And Who Cares?’ (n 43), 66. 57 Suzor, 'Digital Constitutionalism: Using the Rule of Law to Evaluate the Legitimacy of Governance by Platforms' (n 26) 9. 58 See, eg, Krygier, ‘Why the Rule of Law Is Too Important to Be Left to Lawyers’ (n 43) 20; Ibid 6. 59 See, eg, Krygier, ‘The Rule of Law: Pasts, Presents and Two Possible Futures’ (n 45) 199–200; David Mednicoff, ‘Trump May Believe in the Rule of Law, Just Not the One Understood by Most American

prescriptive answers to regulatory questions around content moderation given that the very nature of this ideal and its constituent values are heavily contested.60 However, by underlining what Western democratic societies have come to expect of governing actors, the rule of law serves to highlight potential deficiencies in the exercise of power in online and offline contexts.61 This discourse ultimately provides a well-established language to name and work through what is at stake for users when content is moderated in potentially inconsistent ways, as part of broader tensions between platforms and their users.62

My critical approach for socio-legal evaluation also encompasses a feminist perspective. I chose to adopt a feminist critical lens for several reasons, not just because this thesis focuses on the moderation of images depicting women’s bodies. The first is that feminisms can help a researcher to identify potential issues that they might otherwise overlook from a rule of law perspective alone, especially given that the latter often privileges the white straight male archetype.63 A feminist perspective can also help us to better understand the complex regulatory issues around women’s bodies, which continue to be heavily policed in many Anglo-American legal systems, and underline some of the potential autonomy deficits that women and other users face when participating in online social spaces like Instagram. By combining a rule of law and a feminist perspective, I aimed to work through the opportunities and problems of content moderation in a nuanced way and, whenever possible, learn about different lived experiences.

I situate this thesis within a constellation of broader projects known as ‘digital constitutionalism’.64 As Suzor explains, ‘[t]hese projects are concerned with how the internet is constituted – how it is structured in a way that many institutions and actors exercise power over different aspects of this massive network of networks’.65 Many proponents of digital constitutionalism argue that public governance values can and should influence the private

Lawyers’, The Conversation (online), 5 June 2018 . 60 See, eg, Jeremy Waldron, ‘Is the Rule of Law an Essentially Contested Concept (in Florida)?’ (2002) 21 Law and Philosophy 137. 61 See generally Witt, Suzor and Huggins (n 27). 62 Suzor, 'Digital Constitutionalism: Using the Rule of Law to Evaluate the Legitimacy of Governance by Platforms' (n 26) 9. 63 Witt, Suzor and Huggins (n 27) 568. 64 Suzor, Lawless: The Secret Rules That Govern our Digital Lives (n 32) 112-114. For a survey of existing literature on the topic of digital constitutionalism, see Edoardo Celeste, 'Digital Constitutionalism: A New Systematic Theorisation' (2019) 33(1) International Review of Law, Computers & Technology 76. 65 Suzor, Suzor, Lawless: The Secret Rules That Govern our Digital Lives (n 32) 113.

rules of non-state actors, including the policies of social media platforms,66 and seek to articulate limits on the exercise of governing power in the digital age.67 I argue that this is crucial work as the global significance of platforms and other technology companies continues to increase.68 Facebook, for instance, controls almost 80 per cent of the world’s mobile social media traffic (eg, users accessing Facebook or Instagram via apps on a mobile phone).69 In an age of ‘platform capitalism’,70 I contend that the public should not accept the outcomes of content moderation at face value, be it as formally equal, certain or in terms of any other normative aspiration. Societal actors, including researchers, civil society and lawmakers, should instead empirically investigate, to the extent possible, processes for moderating user-generated content in practice, especially given that these processes occur behind closed doors.71 Such investigation can help to enhance public knowledge of whether rules around content are enforced in ways that are free from arbitrariness, which is vital for grounding debates about the future of platform governance.

II AIMS AND RESEARCH QUESTIONS

The primary aim of this thesis is to empirically evaluate the moderation of images depicting women’s bodies on Instagram against the Western legal ideal of the rule of law. Specifically, I focus on the contested rule of law values of formal equality, certainty, reason-giving, transparency, participation and accountability. More broadly, as part of the project of digital constitutionalism, this thesis aims to inform debates about potential arbitrariness in processes for moderating content on Instagram and guide the development of future legal principles for more transparent and accountable systems of platform governance. I seek to answer three specific research questions to achieve these aims:72

66 See, eg, Lex Gill, Dennis Redeker and Urs Gasser, 'Towards Digital Constitutionalism? Mapping Attempts to Craft an Internet Bill of Rights' (2018) 80(4) (2015) The International Communication Gazette 302. 67 Ibid 112-113. 68 Terry Flew, Fiona Martin and Nicolas Suzor, 'Internet Regulation as Media Policy: Rethinking the Question of Digital Communication Platform Governance' (2019) 10(1) Journal of Digital Media & Policy 33, 34. 69 Ibid 34; Elizabeth Kolbert, ‘Who Owns the Internet’, The New Yorker (online at 28 August 2017) . 70 See generally Nick Srnicek, Platform Capitalism (Polity, 2017). 71 See generally Suzor, Lawless: The Secret Rules That Govern our Digital Lives (n 32) 136 ff. 72 I have received ethics approval to undertake this research at the Queensland University of Technology (QUT Approval 1400000861).


1) How can digital methods be used to empirically examine processes for moderating user-generated content on Instagram against the Anglo-American ideal of the rule of law? (‘RQ1’)

2) To what extent are images depicting women’s bodies moderated on Instagram in a way that aligns with the Anglo-American ideal of the rule of law? (‘RQ2’)

3) How useful are the empirical legal methods in this thesis for evaluating content moderation processes on Instagram? (‘RQ3’)

While I explore these research questions at various points throughout this thesis, I give particular attention to RQ1 in Chapter Three, RQ2 in Chapters Four and Five, and RQ3 in Chapters Six and Seven. I will now outline the scope and significance of this thesis.

III SCOPE AND SIGNIFICANCE

A useful starting point for this Part is to outline some basic information about Instagram. As a predominantly visual platform for photo, video and message sharing, Instagram’s mission is ‘[t]o bring you [ie, users] closer to the people and things you love’.73 The platform, which Kevin Systrom and Mike Krieger launched via Apple’s App Store in 2010, was purchased by Facebook Inc. for $1 billion in 2012.74 Since Systrom and Krieger resigned from their roles heading Instagram and Adam Mosseri, previously the Head of Newsfeed at Facebook, became the Head of Instagram in late 2018,75 the platform has been “more tightly” under the control of its parent company.76 I will expand upon Systrom, Krieger and Mosseri’s executive roles throughout this thesis, where relevant to my research aims. In terms of everyday use of the Instagram platform, users can download the Instagram app from the App Store, for the iPhone operating system, or from the Google Play Store, for the Android operating system. The platform is also available for the web. The majority of Instagram’s user activity, however, occurs on its app or mobile website because of the limited functionality of the platform’s

73 Instagram, ‘Terms of Use’, Instagram Help Centre (19 April 2018) [The Instagram Service] . 74 Tim Highfield and Tama Leaver, ‘A Methodology for Mapping Instagram Hashtags’ (2015) 20(1) First Monday [Ethics and Privacy beyond the Binary] . 75 See Kevin Systrom, ‘Statement from Kevin Systrom, Instagram Co-Founder and CEO’, Official Blog – Instagram (online at 24 September 2018) . 76 Tama Leaver, Tim Highfield and Crystal Abidin, Instagram: Visual Social Media Cultures (Polity Press, 2020) 38.

desktop website.77 In general, the platform hosts an enormous volume of content, with users taking or uploading around 95 million photos and videos per day.78

An individual Instagram post, like that in Figure 2, can include photos, videos or a combination thereof. Specifically, a user can either take a single picture or video at the time of posting, or upload a combination of up to 10 images or videos from their mobile device’s camera roll or saved media folder. Once a user selects the content of their post, they can add a filter and adjust the media through cropping, exposure and similar operations. A user can also add a caption – an explanation of no more than 2,200 characters – with up to 30 hashtags, text or emoji.79 A hashtag is a metadata label, which features a hash character (#) before one or more characters,80 and a search for a hashtag will display images or videos that users have tagged with that hashtag.81 Other users can react to a post through comments, ‘likes’, bookmarking or sending a post to others. While individual images are technically and, perhaps, more accurately described as images from posts, I simply refer to images in this thesis. Additionally, while images are a type of user-generated content, it is impossible to know whether a post is, in fact, generated by users themselves. A user might, for instance, formally or informally re-post content from other sources. I proceed with reference to images and user-generated content to attempt to make this document as readable as possible.

77 Alice E Marwick, ‘Instafame: Luxury Selfies in the Attention Economy’ (2015) 27(1) Public Culture 137, 142. 78 Dave Lee, ‘Instagram Users Top 500 Million’ (n 8); Instagram, A New Look for Instagram, Info Center (Press Release, 11 May 2016) . 79 Instagram, ‘How do I Use Hashtags?’ Instagram Help Centre – Using Instagram (2019) . 80 Highfield and Leaver, ‘A Methodology for Mapping Instagram Hashtags’ (n 74) [Hashtags]. 81 Instagram, ‘How do I Use Hashtags?’ (n 79).
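To make the structure just described concrete, the short sketch below shows how a caption might be parsed for hashtags and checked against the limits mentioned above (no more than 2,200 characters and up to 30 hashtags). It is a minimal illustration in Python under stated assumptions: the regular expression and helper names are mine, not Instagram’s own validation logic, which is not public.

```python
import re

CAPTION_LIMIT = 2200   # maximum caption length described above
HASHTAG_LIMIT = 30     # maximum number of hashtags per post described above

# A hashtag is treated here as a hash character (#) followed by one or more
# word characters. This simple pattern is an approximation only.
HASHTAG_PATTERN = re.compile(r"#(\w+)")

def extract_hashtags(caption: str) -> list[str]:
    """Return the hashtag labels found in a caption, without the '#'."""
    return HASHTAG_PATTERN.findall(caption)

def caption_within_limits(caption: str) -> bool:
    """Check a caption against the character and hashtag limits noted above."""
    return (len(caption) <= CAPTION_LIMIT
            and len(extract_hashtags(caption)) <= HASHTAG_LIMIT)

# Example usage with a hypothetical caption.
caption = "Celebrating every body #effyourbeautystandards #bodypositivity"
print(extract_hashtags(caption))       # ['effyourbeautystandards', 'bodypositivity']
print(caption_within_limits(caption))  # True
```

Parsing of this kind matters methodologically because, as Figure 11 indicates, the datasets in this thesis were collected from watched hashtags, which serve as the entry point to publicly tagged images.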


Instagram is a significant platform for investigation because of the central role that it plays in mediating users’ expression through content and shaping people’s experiences of connection, representation and belonging.82 In particular, the moderation of images depicting women’s bodies on Instagram is highly controversial and continues to give rise to widespread concerns about whether this subject matter is regulated in ways that are free from arbitrariness.83 Empirical analyses can help to demystify how seemingly like images of women’s bodies are moderated in practice, identify potential arbitrariness where it exists, and allay the

82 Suzor, Lawless: The Secret Rules That Govern our Digital Lives (n 32) 10; Stefanie Duguay, Identity Modulation in Networked Publics: Queer Women’s Participation and Representation on Tinder, Instagram, and Vine (PhD Thesis, Queensland University of Technology) 5. 83 ‘Instagram Deletes Woman’s Body Positive Photo’, Now to Love (online at 24 March 2017) . For a discussion of selective policy enforcement and potential gender-based double standards around depictions of pole dancing on Instagram, see Jeremiah Rodriguez, 'Instagram Apologizes to Pole Dancers After Hiding Their Posts', CTV News (online at 6 August 2019) .

confusion and fears of users where it does not.84 Instagram is also an important site for investigation given the relative dearth of empirical research into its moderation processes to date.85 Research around the governance of and by online platforms has historically privileged certain platforms, most notably Twitter,86 compared to Facebook-owned apps, many of which have restrictive Application Programming Interfaces (‘APIs’).87 As Leaver et al explain in the context of Instagram, an API is essentially the software and rules that ‘allow different apps, platforms and partners to access, add or remove data from the Instagram database’.88 A result of API restrictions is that methods for studying visual social media, such as images on Instagram, continue to lag behind methods for textual social media.89 Highfield and Leaver further explain that ‘[t]he visual adds levels of trickiness to such analyses: first in accessing the images, videos, or other linked and embedded files, and then in studying them, which requires more individual intervention and interpretation than samples of 140-characters’ (eg, on Twitter).90 The research questions in this thesis therefore aim to fill research gaps around empirical methods for, and investigations into, Instagram’s moderation processes.
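By way of illustration, researchers who are granted API access typically retrieve media through authenticated HTTP requests of the kind sketched below. The endpoint, parameters and token shown here are hypothetical placeholders, not Instagram’s actual API, whose permissible calls have been sharply restricted as noted above.

```python
import requests

# Hypothetical endpoint and access token for illustration only; the real
# Instagram API differs and its permissible calls have been heavily limited.
API_BASE = "https://api.example-platform.com/v1"
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"

def fetch_posts_by_hashtag(hashtag: str, limit: int = 50) -> list[dict]:
    """Request recent posts tagged with a hashtag from a (hypothetical) platform API."""
    response = requests.get(
        f"{API_BASE}/hashtags/{hashtag}/recent_media",
        params={"access_token": ACCESS_TOKEN, "limit": limit},
        timeout=30,
    )
    response.raise_for_status()  # rate limits and revoked access typically surface as HTTP errors here
    return response.json().get("data", [])
```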

In order to address these research gaps, I investigate the moderation of female forms on Instagram across two main case studies. In Case Study One, I examine whether like images that depict (a) Underweight, (b) Mid-Range and (c) Overweight women’s bodies, none of which are explicitly prohibited by the platform’s content policies, are moderated alike. In Case Study Two, I focus on whether explicitly prohibited images that depict women’s bodies, and some images of men’s bodies, in like categories of content are moderated alike. It is a limitation of the scope and method of this thesis that I empirically examine decontextualised images against a binary classification of gender. This is notable for several reasons, including the diverse

84 Witt, Suzor and Huggins (n 27); ORBIT (n 34) 5. 85 See, eg, Tim Highfield and Tama Leaver, ‘Instagrammatics and Digital Methods: Studying Visual Social Media, from Selfies and GIFs to Memes and Emoji’ (2016) 2 Communication Research and Practice 47; Nicolas Suzor, 'Understanding Content Moderation Systems: New Methods to Understand Internet Governance at Scale, Over Time, and Across Platforms' in Ryan Whalen (ed), Computational Legal Studies: The Promise and Challenge of Data-Driven Legal Research (Edward Elgar Publishing Ltd, forthcoming) . 86 Suzor, ‘Understanding Content Moderation Systems: New Methods to Understand Internet Governance at Scale, Over Time, and Across Platforms' (n 85). 87 See generally ORBIT (n 34). 88 Tama Leaver, Tim Highfield and Crystal Abidin, Instagram: Visual Social Media Cultures (Polity Press, 2020) 8. 89 Most recently, after the Cambridge Analytica scandal, Instagram dramatically reduced the number of permissible calls to its API: see generally Josh Constine, ‘Instagram Suddenly Chokes off Developers as Facebook Chases Privacy’, TechCrunch (online at 3 April 2018) . 90 Highfield and Leaver, ‘Instagrammatics and Digital Methods: Studying Visual Social Media, from Selfies and GIFs to Memes and Emoji' (n 85) 48.

spectrum of gender and sex in Western societies outside of the dominant heteronormative gender binary.91 Another is that gendered terms, like woman, are sites of contest.92 Butler explains:

The very subject of women is no longer understood in stable or abiding terms. There is a great deal of material that not only questions the viability of “the subject” as the ultimate candidate for representation or, indeed, liberation, but there is very little agreement after all on what it is that constitutes, or ought to constitute, the category of women.93

A result is that the subjects of images in this thesis that I have coded as a woman or female, or as a man or male, might not identify as such. I am also unable to empirically examine the moderation of content posted by minority groups, including trans-identifying and non-binary users. These are, however, important areas for future research given that minority users generally face a higher risk of arbitrariness in moderation when participating in online platforms.94

When evaluating moderation outcomes in each case study, I focus on direct regulatory intervention by Instagram, rather than content removal by users themselves. This is principally due to Instagram’s significant transparency deficits that make it impossible for those not privy to the internal workings of the platform to identify precisely who removes individual pieces of content. There is also less cause for concern from the perspective of the rule of law when some users choose to remove images from their profiles. It is possible that users might simply want to curate their profile or reduce their total number of posts in line with their own social media practices.95 Others might remove their content due to the risk of backlash from other users, actual backlash in comments to a particular post or concerns about privacy.96 Users can also

91 This is not to say that there is a distinction between ‘sex’ and ‘gender’ (eg, that sex is ‘biological’ and gender is ‘cultural’). As Arthurs and Grimshaw contend, ‘[I]t has been argued that 'sex' is itself gendered; the assumption that there are only two sexes, and the insistent quest for 'sexual difference' premised on a sharp sexual binarism, is itself a product of a gendered and heterosexist ideology’: Jane Arthurs and Jean Grimshaw, Women's Bodies Cultural Representations and Identity (Bloomsbury Publishing, 1999) 7. 92 Judith Butler, Gender Trouble: Feminism and the Subversion of Identity (Routledge, 1st ed, 2006) 4. 93 Ibid 2. 94 See, eg, Stefanie Duguay, Jean Burgess and Nicolas Suzor, ‘Queer Women’s Experiences of Patchwork Platform Governance on Tinder, Instagram, and Vine’ (2018) Convergence: The International Journal of Research into New Media Technologies 1 . 95 Witt, Suzor and Huggins (n 27) 572. 96 See generally Sauvik Das and Adam Kramer, ‘Self-Censorship on Facebook’ (Conference Paper, AAAI Conference on Weblogs and Social Media, MIT Media Lab and Microsoft Research, 8–11 July 2013) 120-121.


‘archive’,97 or hide, certain images from their public profile. Instagram may still have a role, and some social responsibility, in supporting or reinforcing cultural norms that impact on women’s and other users’ self-expression, but these more substantive concerns are beyond the scope of the largely formal evaluative framework in this thesis.

Instagram is not, of course, the only platform grappling with the complexities of setting, maintaining and enforcing the bounds of appropriate user expression through content.98 In 2018, for instance, some news media labelled YouTube’s moderation processes an ‘inconsistent mess’.99 Others claim that, in general, online platforms ‘suck at content moderation’,100 and ‘content moderation is broken’.101 Platforms are facing a ‘global techlash’102 over their moderation of a broad spectrum of content, including image-based sexual abuse (sometimes called revenge porn),103 extremist and terrorist material,104 ‘fake news’105 and self-harm content.106 While I acknowledge the different controversies that are playing out across social media platforms, empirically examining the moderation of content on other networks is outside the scope of this thesis. I chose to delve deeply into one particular subject matter – images depicting women’s bodies on Instagram – rather than adopting the more high-level approach that a multi-platform study would require.

97 Instagram, ‘How Do I Archive a Post I’ve Shared?’ Help Centre (2018) . 98 See generally Gillespie (n 2). 99 See, eg, Louise Matsakis, ‘YouTube Doesn’t Know Where Its Own Line Is’, Wired (online at 3 February 2018) . 100 See, eg, Alex Castro, ‘Something Awful’s Founder Thinks YouTube Sucks at Moderation’, The Verge (online at 26 June 2019) . 101 See, eg, Jillian C York and Corynne McSherry, ‘Content Moderation Is Broken. Let Us Count the Ways’, Electronic Frontier Foundation (online at 29 April 2019) . 102 Flew, Martin and Suzor (n 68) 34. 103 See, eg, A. Powell et al, ‘Image-Based Sexual Abuse: The Extent, Nature, and Predictors of Perpetration in a Community Sample of Australian Adults’ (2019) 92 Computers in Human Behaviour 393 . 104 See generally Alice Witt, Rosalie Gillett and Nicolas Suzor, submission to the Department of Communications and the Arts, Australian Government, Online Safety Charter Consultation Paper (12 April 2019) ; Victoria Nash, ‘Revise and Resubmit? Reviewing the 2019 Online Harms White Paper’ (2019) Journal of Media Law (advance) . 105 Flew, Martin and Suzor (n 68) 34. 106 See generally Michael Savage, ‘Health Secretary Tells Social Media Firms to Protect Children after Girl’s Death’, The Guardian (online at 27 January 2019) .


While the scope of empirical examination in this thesis is limited to the moderation of images depicting women’s bodies on Instagram, it should be noted that I do contextualise results and findings with reference to the broader practices of platform governance.107 This is because Instagram is not only part of the ‘Facebook family of services’,108 but also the ecosystem of online platforms and the internet more broadly. It would be a mistake to evaluate the moderation of images depicting female forms on Instagram as separate from this wider context, especially given that the legal landscape around platform governance appears to be changing.109 In 2018, Facebook co-founder and CEO Mark Zuckerberg admitted: ‘The internet is growing in importance around the world in people's lives and I think that it is inevitable that there will need to be some regulation’.110 Nonetheless, questions of exactly how the project of digital constitutionalism can be realised in practice and, indeed, the likelihood of its success, are beyond the scope of this thesis. The empirical analysis presented in this thesis will certainly shed light upon these broader questions, but I do not profess to resolve them here.

A final important note, before I outline the significance of this research, is that the following chapters proceed on the basis of Anglo-American liberal understandings of the rule of law and in the context of Western states. Hereafter, where I refer to the rule of law, I denote an Anglo- American conception of this legal ideal. In setting this frame, I exclude jurisprudence from other countries where, to varying degrees and in different ways, concerns about potentially arbitrary moderation processes and broader governance tensions arise.111 Aspects of the rule of law framework that I advance in this thesis might still have purchase beyond a Western lens.

In terms of research significance, this thesis is one of the first empirical legal evaluations of how different images of women’s bodies are moderated on Instagram in practice, thus making

107 Gorwa (n 38) 854-855. 108 In the Company Info section of the Facebook web page, the company refers to its ‘family of services’: see Facebook, ‘Company Info’ (2019), Facebook Newsroom . 109 This is due to a plethora of issues, especially after the Facebook-Cambridge Analytica scandal in 2018 and foreign interference in the 2016 US presidential elections, many of which centre on the monopoly that Facebook and a handful of other companies have in the provision of online services: see generally Flew, Martin and Suzor (n 68) 34. For background information on the Facebook-Cambridge Analytica scandal, see Catherine de Fontenay, ‘Would Regulation Cement Facebook’s Market Power? It’s Unlikely’, The Conversation (online at 12 April 2018) ; Ariel Bogle, ‘The Anti-Facebook Backlash is Official and Regulation Is 'Inevitable', ABC News (online at 13 April 2018) . 110 Marie Clare Jalonick, ‘Zuckerberg: Regulation 'Inevitable' for Social Media Firms’, Sydney Morning Herald (online at 12 April 2018) . 111 See, eg, Karishma Vaswani, ‘Concern over Singapore’s Anti-Fake News Law’, BBC News (online at 4 April 2019) .

important contributions to the fields of law and digital media. The first major contribution is an innovative black box methodology for investigating how some content is moderated on Instagram in practice. An integral component of this methodology is an input/output method based on black box analytics that examines how discrete inputs into a system produce certain outputs.112 In this thesis, input refers to individual images, the system is Instagram’s processes for moderating content and output pertains to the outcome of content moderation (ie, whether an image is removed or not removed). In conjunction with content analysis, including whether individual images appear to violate Instagram’s content policies, this black box method can return four main results for evaluation: true negatives (images that do not appear to violate Instagram’s policies and were not removed); potential false positives (images that do not appear to violate Instagram’s policies and were removed); true positives (images that appear to violate content policies and were removed); and potential false negatives (images that appear to violate content policies and were not removed).113 The statistically significant results of this thesis show that the method not only works but is highly promising.
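As a minimal sketch of the input/output logic just described, the function below assigns an observed image to one of the four evaluation categories from two pieces of information: whether the image appears to violate Instagram’s content policies (determined through content analysis) and whether it was removed. The function and parameter names are my own illustration, not the research infrastructure actually used in this thesis.

```python
def classify_outcome(appears_to_violate: bool, removed: bool) -> str:
    """Map a coded image to one of the four black box evaluation categories."""
    if appears_to_violate and removed:
        return "true positive"
    if appears_to_violate and not removed:
        return "potential false negative"
    if not appears_to_violate and removed:
        return "potential false positive"
    return "true negative"

# Example: an image coded as non-violating that was nonetheless removed.
print(classify_outcome(appears_to_violate=False, removed=True))
# potential false positive
```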

While this methodology is not always easily replicable, largely given that I used specialist infrastructure at QUT’s Digital Media Research Centre to collect data,114 it is extensible to other online platforms and controversies. This thesis provides a successful methodological base for continued experimentation with the use of digital methods to examine content moderation when only parts of a system are visible from the outside.115 It also usefully illustrates how researchers can fuse these methods with legal and other theories to better understand how some content is moderated at scale. This thesis also makes significant inroads in developing, and highlighting the importance of, methods to investigate the largely opaque ways that social media platforms govern their networks.116 Researchers and other societal actors can leverage the empirical results to advocate for change, educate the public about ongoing issues and contribute to holding platforms to account for their decision-making around content.

112 See generally Maayan Perel and Niva Elkin-Koren, ‘Black Box Tinkering: Beyond Disclosure in Algorithmic Enforcement’ (2017) 69 Florida Law Review 181. 113 It should be noted that Instagram can restore (re-upload) content after it has been removed from the platform, and disabled accounts: see Otillia Steadman, ‘Porn Stars vs. Instagram: Inside the Battle to Remain on the Platform’, BuzzFeed (online at 18 October 2019) . 114 See Suzor, ‘Understanding Content Moderation Systems: New Methods to Understand Internet Governance at Scale, Over Time, and Across Platforms' (n 85). 115 Ibid. 116 Joanne Gray and Nicolas Suzor, ‘Playing with Machines: Using Machine Learning to Understand Automated Copyright Enforcement at Scale’ (2020) Big Data & Society (forthcoming).


I make additional contributions to the project of digital constitutionalism by providing empirical evidence of how a range of content is moderated on Instagram in practice. I identify an overall trend of inconsistent moderation that falls significantly short of the selected rule of law safeguards and feminist values. Specifically, I contend that the lack of formal equality, certainty, reason-giving and user participation, and Instagram’s largely unfettered power to moderate content with limited transparency and accountability, are substantial normative concerns which pose an ongoing risk of arbitrariness for women and users more broadly. I ultimately emphasise the desirability of imposing limits on the exercise of power by Instagram, like other online platforms, as part of digital constitutionalist discourse.

IV STRUCTURE OF THIS THESIS

This thesis comprises seven chapters and proceeds as follows. This Chapter has introduced the moderation of images depicting women’s bodies, within platform governance more broadly, as the subject of investigation in this thesis. It has also outlined the specific aims of this study and the research questions that I seek to answer. Chapter Two provides a literature review of platform governance and explains why content moderation processes are ‘lawless’117 in terms of an absence of constitutional, or rule of law, safeguards. I then outline a theoretical framework of the rule of law, which comprises the values of formal equality, certainty, reason-giving, transparency, participation and accountability.118 This framework facilitates evaluation of the extent to which processes for moderating user-generated content align with the ideal of the rule of law. Next, after expanding upon controversies around the moderation of images depicting women’s bodies on Instagram, I outline a feminist lens that I use to enrich my empirical evaluation.119 While I acknowledge well-founded critiques of rule of law discourse from the perspective of different feminisms,120 I argue that the proposed rule of law framework, in conjunction with a feminist lens, nonetheless provides a useful language to name and work through what users stand to lose in the potentially arbitrary exercise of power on Instagram and online platforms more broadly.

Chapter Three then turns to outline my black box methodology for achieving the aims of this thesis and answering my research questions. First, I outline what the public knows about processes for moderating content to date. I explain that content moderation is often highly

117 Suzor, Lawless: The Secret Rules that Govern Our Digital Lives (n 32) 6-7. 118 See especially Krygier, ‘Why the Rule of Law Is Too Important to Be Left to Lawyers’ (n 43) 20-21. 119 See, eg, Alice E Marwick, 'None of This Is New (Media): Feminisms in the Social Media Age' in Tasha Oren and Andrea Press (eds), The Routledge Handbook of Contemporary Feminism (Routledge, 2019). 120 See, eg, Patricia A Cain, ‘Feminism and the Limits of Equality’ (1990) 24 Georgia Law Review 803.

fragmented regulatory work that is predominantly undertaken by commercial content moderators, but also by platform employees, users and automated systems.121 This includes discussion of the opportunities and much-discussed risks of automated decision-making.122 I then outline the importance of black box analytics, a form of reverse engineering that can shed light on the internal workings of a system,123 and expand upon the particulars of my input/output method. As previously explained, this method provides four main results: false positives and true negatives for Case Study One, which examines content that does not appear to violate content policies, and true positives and false negatives for Case Study Two, which explores explicitly prohibited content.

Chapter Four, the first of two case studies in this thesis, examines whether a sample of 4,944 like images depicting (a) Underweight, (b) Mid-Range and (c) Overweight women’s bodies were moderated alike on Instagram. After expanding upon the method for this case study, I identify an overall trend of inconsistent moderation and two main findings. The first is that up to 22 per cent of images are potential false positives. Second, the odds of removal for an image that depicts an Underweight or Mid-Range woman’s body are 2.48 and 1.59 times higher, respectively, than for an image that depicts an Overweight woman’s body. These results are significant because they suggest that claims about potential arbitrariness in the moderation of female forms on Instagram might not be unfounded. I then turn to outline several possible explanations for these inconsistencies, with a particular focus on direct regulatory intervention by the platform, and the methodological and ethical challenges raised by this inquiry. A foremost methodological challenge, which I expand upon in Chapter Five, is that it is impossible to identify whether an image was removed by the platform or by a user without greater transparency from Instagram.124 This Chapter concludes that the inconsistent trend of moderation that I identify, along with deficiencies in formal equality, certainty, reason-giving, transparency, participation and Instagram’s largely unfettered power to moderate content, are significant normative concerns that warrant further investigation.
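The odds ratios reported in this and the following Chapter compare the odds of removal across image categories. The short sketch below shows the underlying arithmetic with invented counts; the figures are placeholders for illustration only and are not the Case Study One data.

```python
def odds(removed: int, not_removed: int) -> float:
    """Odds of removal for one category of images."""
    return removed / not_removed

def odds_ratio(category: tuple[int, int], reference: tuple[int, int]) -> float:
    """Ratio of the odds of removal in one category to a reference category."""
    return odds(*category) / odds(*reference)

# Invented counts (removed, not_removed) purely to illustrate the arithmetic.
underweight = (120, 380)
overweight = (50, 450)

print(round(odds_ratio(underweight, overweight), 2))
# 2.84 -> the odds of removal are 2.84 times higher than for the reference category
```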

Chapter Five, the second and final case study in this thesis, investigates whether a sample of 980 explicitly prohibited images depicting women’s bodies, and some images of men’s

121 See generally Roberts, Behind the Screen: Content Moderation in the Shadows of Social Media (n 24). 122 See, eg, Rob Kitchin, ‘Thinking Critically about and Researching Algorithms’ (2017) 20(1) Information, Communication and Society 14. 123 Nicholas Diakopoulos, ‘Algorithmic Accountability’ (2015) 3 Digital Journalism 398, 403-404. 124 Suzor (n 85) 10 ff.

bodies, are moderated alike on Instagram. After outlining the method for this case study, which differs from the first in several ways, I find that the inconsistent trend of moderation that I identify in Chapter Four continues across categories of explicitly prohibited content. More specifically, I identify three main results. The first is that up to 56.8 per cent of images that Instagram appears to explicitly prohibit were not removed, and are therefore potential false negatives. Second, the odds of removal for an explicitly prohibited image depicting a woman’s body are 16.75 times higher than for one depicting a man’s body. These findings are significant as they suggest that some images of male forms are moderated more leniently than female forms. Third, in stark contrast to the overall trend of inconsistent moderation, Instagram appears to moderate some pornographic and other explicit content in highly consistent ways. As in Chapter Four, this Chapter raises concerns about the alignment between Instagram’s moderation processes and the Anglo-American ideal of the rule of law. I conclude that the apparent lack of rule of law values in the moderation of some explicit content intensifies normative concerns about the risk of arbitrariness in processes for moderating content on Instagram. This and the preceding Chapter ultimately provide valuable insight into how different images of women’s bodies appear to be moderated in practice, and how female forms are moderated in comparison to male forms.

Chapter Six evaluates the empirical results from both case studies through a feminist lens. I start by exploring the possibilities of ‘digital feminisms’125 on Instagram, many of which are illustrated by emerging ‘fourth-wave’126 feminist activisms.127 With these opportunities, however, come potential risks for the expression of content depicting women’s bodies.128 Specifically, to the extent that content was removed by Instagram, I argue that there is potential for gender-based double standards, autonomy deficits and negative lived experiences for users posting images of female forms. I also contend that these potential risks are part of broader

125 Hester Baer, ‘Redoing Feminism: Digital Activism, Body Politics, and Neoliberalism’ (2016) 16(1) Feminist Media Studies 17, 18. 126 Matich, Ashman and Parsons explain, ‘Contemporary, or so-called fourth-wave feminism has heralded an era of online activism which celebrates the potential of this space to reconfigure and reorder gender relations’. Fourth-wave feminist movements include Free The Nipple (#freethenipple) and Me Too (#metoo): Margaret Matich, Rachel Ashman and Elizabeth Parsons, ‘#freethenipple – Digital Activism and Embodiment in the Contemporary Feminist Movement’ (2019) 22(4) Consumption Markets & Culture 337, 337-338. For an in-depth explanation of the various waves of feminisms, see: Claire Horn, 'A Short History of Feminist Theory' in Scarlett Curtis (ed), Feminists Don't Wear Pink and Other Lies (Penguin Random House, 2018) 327 ff. Horn refers to the ‘first wave (mid 1800s – 1920s)’, ‘second wave (1960s – early 1980s)’, ‘third wave (1990s – 2012)’ and ‘fourth wave (2012 – the present)’. 127 Matich, Ashman and Parsons (n 126). 128 See generally Sarah T Roberts, 'Aggregating the Unseen' in A. Byström and M. Soda (eds), Pics or It Didn’t Happen (Prestel, 2017) .

uncertainties about the normative foundations of Instagram’s regulatory system. It is important to evaluate this broader context as issues around the moderation of images depicting women’s bodies are often bound up with social, political and other issues that transcend online social spaces.129 I conclude that the inconsistent trend of moderation across both case studies is an ongoing cause for concern from both a rule of law and feminist perspective, not only for women and other Instagram users, but for all users of platform technology.

In the conclusion to this thesis, Chapter Seven, I provide a summary of my arguments and contributions to knowledge. I then outline four main recommendations that could help Instagram to improve its moderation processes. I provide support for my recommendations by synthesising my research findings and by outlining industry best practice for setting, maintaining and enforcing the bounds of appropriate user expression through content. I also outline key principles that I believe should guide any attempt by nation-states to better regulate online platforms, many of which centre on the decentralised nature of the internet. Finally, I outline opportunities for future research, and make concluding remarks.

129 Gillespie (n 2) 141 ff.


CHAPTER TWO: THEORETICAL FRAMEWORK

This Chapter advances an Anglo-American framework of the rule of law comprising the values of formal equality, certainty, reason-giving, transparency, participation and accountability, against which I will evaluate processes for moderating content on Instagram. The constitutional discourse of the rule of law ultimately aims to limit, control or restrain arbitrariness in the exercise of governing power.130 Given that this discourse is characterised by opposition to arbitrary power,131 it provides a useful conceptual lens for evaluating ongoing concerns about whether user-generated content is moderated, or regulated, by online platforms in ways that are free from arbitrariness.132 I position this rule of law framework within the emerging, broader project of digital constitutionalism, which is concerned with articulating limits on the exercise of power in the online environment.133 Specifically, this Chapter builds on several accounts of digital constitutionalism, including those of Fitzgerald,134 Suzor135 and Gill, Redeker and Gasser.136 These accounts receive particular attention because they make claims that are especially pertinent to the overarching argument of this thesis – that is, constitutional values can and should influence the private rules of non-state actors,137 including online platforms, and these values are threatened by unchecked governance across the public and private realms.138

This Chapter proceeds in six parts. In Part I, I provide an overview of the Anglo-American ideal of the rule of law and its relationship to the exercise of governing power. While the

130 See A. V. Dicey, Introduction to the Study of the Law of the Constitution (MacMillan,10th ed, 1959) 188; Raz (n 42) 200; Jeremy Waldron, ‘Hart and the Principles of Legality’ in Matthew H. Kramer, Claire Grant, Ben Colburn and Antony Hatzistavrou (eds), The Legacy of H.L.A. Hart: Legal, Political, and Moral Philosophy (Oxford University Press, 2008) 71, 79. 131 Martin Krygier, ‘Transformations of the Rule of Law: Legal, Liberal, and Neo-’ (Conference Paper, KJuris Workshop, Dickson Poon School of Law, King’s College London, 1 October 2014) 3. 132 See, eg, Suzor, 'Digital Constitutionalism: Using the Rule of Law to Evaluate the Legitimacy of Governance by Platforms' (n 26) 1, 9; Witt, Suzor and Huggins (n 27) 561. 133 Suzor, Lawless: The Secret Rules That Govern our Digital Lives (n 32) 6-7. 134 Brian Fitzgerald, 'Software as Discourse - A Constitutionalism for Information Society' (1999) 24 Alternative Law Journal 144. 135 Nicolas Suzor, Digital Constitutionalism - The Legitimate Governance of Virtual Communities (Doctor of Philosophy Thesis, Queensland University of Technology, 2010); Suzor, Lawless: The Secret Rules That Govern our Digital Lives (n 32). 136 Gill, Redeker and Gasser (n 66). 137 Ibid 303-307. 138 See, eg, Paul Schiff Berman, ‘Cyberspace and the State Action Debate: The Cultural Value of Applying Constitutional Norms to Private Regulation’ (2000) 71 University of Colorado Law Review 1263; Suzor, Digital Constitutionalism - The Legitimate Governance of Virtual Communities (n 135) 71; Gill, Redeker and Gasser (n 66) 303-304.

concept of governance is traditionally siloed within the public sphere,139 I explain that public and private actors, including online platforms, undertake important governing roles in the digital age.140 One of the main ways that online platforms govern the users of their networks is by moderating content – specifically, by setting, maintaining and enforcing the bounds of what is appropriate behaviour and what is not.141 I explain that content is moderated within a ‘black box’ by a number of different regulatory actors,142 which raises ongoing questions about which party initiates content removal, in what ways and for what reasons.

In Part II, I argue that platforms govern in ‘lawless’143 ways and therefore in stark contrast to the ideal of constitutional government. I begin by clarifying that questions of whether platforms follow the law, which they generally do,144 are separate from questions of whether rule of law values should apply in relation to platform governance. I argue that constitutional safeguards should apply for several reasons in line with the project of digital constitutionalism, chief among them that the law of contract, which regulates the relationship between platform operators and their users, affords platform owners mostly unrestrained governing power.145 A symptom of this is the significant gap between the ‘hard-line legal’146 conception of online platforms as strictly private entities, and the reality that platforms undertake important governing roles over any or all of the world’s three billion social media users.147 The rule of law is a particularly valuable framework in the context of platform governance as it can institutionalise constraints on arbitrariness in the exercise of power across social domains.148

Then, in Part III, I outline the explicitly teleological approach to the rule of law that I follow in this thesis. My approach is principally based on the works of Krygier, who contends that societal actors should be concerned with limiting potential abuse of government power

139 See generally Michel Rosenfeld, ‘Rethinking the Boundaries between Public Law and Private Law for the Twenty First Century: An Introduction’ (2013) 11(1) International Journal of Constitutional Law 125. 140 See, eg, Klonick (n 28) 1599; Suzor, ‘Digital Constitutionalism: Using the Rule of Law to Evaluate the Legitimacy of Governance by Platforms’ (n 26) 3; Kadri and Klonick (n 31) 26 ff. 141 Witt, Suzor and Huggins (n 27) 572 ff. 142 Ibid 558. 143 Suzor, Lawless: The Secret Rules that Govern Our Digital Lives (n 32) 6-7. 144 Suzor, 'Digital Constitutionalism: Using the Rule of Law to Evaluate the Legitimacy of Governance by Platforms' (n 26) 3, 9. 145 According to Suzor, ‘Terms of service ‘reserve absolute discretion to the operators of the platform to make and enforce the rules as they see fit. Terms of service documents aren’t designed to be governing documents; they’re designed to protect the company’s legal interests’: see Suzor, Lawless: The Secret Rules that Govern Our Digital Lives (n 32) 11. 146 Ibid. 147 We Are Social and Hootsuite (n 30); Cohen et al (n 21) 47. 148 See, eg, Krygier, ‘The Rule of Law: Legality, Teleology, Sociology’ (n 43) 45; Krygier, ‘Four Puzzles about the Rule of Law: Why, What, Where? And Who Cares?’ (n 43) 64.

regardless of whether that power is exercised in public or private spheres.149 Given that there is no universal set of rule of law values,150 I focus on the values of formal equality, certainty, reason-giving, transparency, participation and accountability. I argue that these values, which I conceptualise in largely formal terms, are well-established normative aspirations that have the potential to serve the telos of the rule of law in relation to platform governance.151

Next, in Part IV, I sketch the landscape of ongoing controversies around the moderation of images depicting women’s bodies on Instagram. I focus on Instagram given the number of different and often conflicting claims in news and other media around how the platform moderates images of female forms in practice. On the one hand, for instance, publications claim that Instagram is arbitrarily removing images of women in favour of the Western ideal of thinness.152 These claims are concerning principally because they point towards double standards and bias in moderation processes.153 On the other hand, there are claims that Instagram is democratising beauty standards and creating a space for the depiction of all body types.154 The lack of publicly available information that explains content removal continues to breed confusion and a range of other claims about the moderation of this polarising subject matter.155 These controversies highlight the importance of societal actors investigating the potentially marginalising effects that moderation processes can have on some users and are the basis of the two selected case studies in Chapters Four and Five.156

In Part V, I turn to explain how a feminist lens can enrich my rule of law framework for evaluating the moderation of images depicting female forms. I conceptualise feminism(s) in broad terms, as movements that are fundamentally concerned with and aim to achieve equality

149 Krygier, ‘The Rule of Law: Legality, Teleology, Sociology’ (n 43); Krygier, ‘Four Puzzles about the Rule of Law: Why, What, Where? And Who Cares?’ (n 43); Krygier, ‘Why the Rule of Law Is Too Important to Be Left to Lawyers’ (n 43); Krygier, ‘The Rule of Law: Pasts, Presents, and Two Possible Futures’ (n 45) 221; Krygier, ‘Transformations of the Rule of Law: Legal, Liberal, and Neo-’ (n 131). My theoretical approach is also heavily informed by, inter alia: Suzor, Digital Constitutionalism - The Legitimate Governance of Virtual Communities (n 135) 53-120; Suzor, 'Digital Constitutionalism: Using the Rule of Law to Evaluate the Legitimacy of Governance by Platforms' (n 26) 4 ff; Huggins (n 52) 17-25; Gill, Redeker and Gasser (n 66) 307-313. 150 See, eg, Monika Zalnieriute, Lyria Bennett Moses and George Williams, 'The Rule of Law and Automation of Government Decision‐Making' (2019) 82(3) The Modern Law Review 425, 428 ff. 151 In terms of the potential for contested rule of law values to serve the telos of the rule of law in platform governance, see, eg, Suzor, ‘Digital Constitutionalism: Using the Rule of Law to Evaluate the Legitimacy of Governance by Platforms’ (n 26) 8-9. For more on the well-established nature of some rule of law values, see Farrall (n 52) 40-41. 152 See, eg, Olya (n 14); Anderson (n 11). 153 See, eg, Grady (n 16). 154 See, eg, Salam (n 20). 155 See, eg, Witt, Suzor and Huggins (n 27) 580. 156 See, eg, Roberts, Behind the Screen: Content Moderation in the Shadows of Social Media (n 24) 3.

among women, men and non-binary and transgender people.157 I then outline the considerable overlaps between a broad feminist approach and the telos of the rule of law. I argue that a feminist perspective is important for several reasons, including that it helps to identify potential autonomy deficits for women in the seemingly unequal and inconsistent moderation of content. While I acknowledge a number of pertinent critiques of the Western liberal ideal of the rule of law, particularly from more radical or substantive feminist viewpoints,158 I argue that my overall evaluative framework, which incorporates both a feminist and rule of law lens, provides a nuanced language to begin to name and work through what is at stake for women and other users in the potentially arbitrary exercise of power over content. The evaluative framework in this Chapter is extensible to other platforms in which concerns about potential arbitrariness in content moderation processes might arise. Part VI concludes this Chapter.

I PLATFORMS GOVERN

The rule of law is an ‘essentially contested concept’.159 Scholars disagree, not only about the constituent values of the rule of law and how it can be achieved, but also about what the concept actually is.160 For example, some argue that the rule of law is a political ideal or aspiration, which ‘a legal system may lack or may possess to a greater or less degree’,161 and a virtue by which a legal system can be judged. Others, like Allan, insist that ‘[t]he rule of law is not merely an ideal or aspiration external to the law – a yardstick by which the law can be measured for its compliance with an important political value, and against which it may fall short’.162 On Allan’s view, ‘a law that fails to comply with his conception of the rule of law is not merely a bad law but no law at all’.163 Rule of law discourse is, however, deeply rooted in Western democratic tradition.164 As Crawford argues, ‘[o]ne doubts we would have spent so much time

157 Internet Rights & Principles Coalition, ‘The Charter of Human Rights and Principles for the Internet’ (Booklet, 6th ed, November 2018) 7, 14 . 158 See, eg, Cain (n 120). 159 Lisa Burton Crawford, The Rule of Law and the Australian Constitution (The Federation Press, 2017) 14; Jeremy Waldron, ‘The Concept and the Rule of Law’ (2008) 43 Georgia Law Review 1, 52. 160 See, eg, Zalnieriute, Moses and Williams (n 150) 428 ff; Brian Tamanaha, On the Rule of Law: History, Politics, Theory (Cambridge: Cambridge University press, 2004) 2-3. One of the most prominent sites of contest in rule of law discourse is between Hart, who espouses a positivist viewpoint of what the law is, and Fuller, who advances a more naturalist perspective of what the law ought to be: see generally Waldron, ‘Hart and the Principles of Legality’(n 130). 161 Raz (n 42) 211. 162 Trevor Allan, The Sovereignty of Law: Freedom, Constitution and Common Law (Oxford University Press, 2013) 88-89. 163 Crawford, The Rule of Law and the Australian Constitution (n 159) 14. 164 Lisa Burton Crawford, 'The Rule of Law' in Rosalind Dixon (ed), Australian Constitutional Values (Bloomsbury Publishing, 2018) 79.

debating the rule of law, if we did not care about it. It seems that we would rather live in legal system that values the ‘rule of law’ than one that does not, even though we cannot agree upon what it means’.165 The contested nature of the rule of law arguably underlines the importance of this ideal in Western cultures.

The telos, or main aim, of the rule of law is to temper potential abuse of power by government actors.166 The arbitrary exercise of power, as part of a broader state of lawlessness, is therefore the antithesis of the rule of law.167 Two central tenets arguably undergird Anglo-American rule of law discourse.168 First, a system of laws must exist for governing actors to rule,169 and all societal actors, including state and non-state actors and individuals, should be ruled by and obey those laws.170 This tenet, or ‘basic idea’,171 is exemplified by the phrase ‘government by law and not of individuals’.172 Second, the law should be capable of clearly guiding individual action so that all societal actors have an appreciation of their position in a legal system – that is, knowledge of what the law is and how to act on that knowledge.173 Raz refers to the second tenet as the ‘basic intuition’174 of the rule of law. Safeguards, such as legal certainty, due process and reason-giving practices, can play an important role in guiding human conduct as part of this basic intuition. Raz reminds us, however, that rule of law safeguards, or values, ‘do not stand on their own. They must be interpreted in light of the basic idea’.175 The basic idea and intuition of the rule of law underpin many Western pursuits of the rule of law176 and hence I return to these basics throughout this thesis.

Anglo-American scholars traditionally conceptualise governance in terms of the exercise of power by the state over the activities or behaviours of its citizens.177 According to a strict legal categorisation, the law that regulates the horizontal relationship between parties is private, while the law that regulates the vertical relationship between the state and its citizens is

165 Crawford, The Rule of Law and the Australian Constitution (n 159) 14. 166 Krygier, ‘The Rule of Law: Legality, Teleology, Sociology’ (n 43) 45. 167 Crawford, The Rule of Law and the Australian Constitution (n 159) 11. 168 Raz (n 42) 213. 169 Crawford, The Rule of Law and the Australian Constitution (n 159) 10-11. 170 Raz (n 42) 213. 171 Ibid 218; Crawford, The Rule of Law and the Australian Constitution (n 159) 14. 172 Raz (n 42) 212. At page 212, Raz argues that ‘government by law and not of men’ [sic] is a tautology because ‘[a]ctions not authorized by law cannot be the action of the government as government. They would be without legal effect and often unlawful’. 173 Raz (n 42) 214. 174 Ibid 213-214. 175 Ibid 218. 176 Crawford, The Rule of Law and the Australian Constitution (n 159) 7. 177 Ben Bowling and James Sheptycki, ‘Global policing and transnational rule with law’ (2015) 6(1) Transnational Legal Theory 141, 150.

public.178 Western democracies largely define a state’s power, and ‘rules and processes of [its] political community’ more broadly, in constitutional documents.179 According to Gill, Redeker and Gasser, the need to ‘control, limit, and restrain’180 a state’s governing power over its citizens is a central, substantive aspect of constitutionalist thinking. Here it is useful to clarify the distinction between the related concepts of ‘governance’ and ‘regulation’. As Burris, Kempa and Shearing note, governance refers to the ‘management of the course of events in the social system’.181 As a large subset of governance, regulation ‘is concerned with standard-setting, monitoring and enforcement’.182 I argue that moderation is a form of regulation over users and hence the problems of content moderation are broader problems of governance. I explain how Instagram appears to regulate content throughout this Part and in Chapter Three.

In line with the traditional state-centric conception of governance, privately-owned platforms fall squarely within the private sphere.183 When ‘signing up’ to a platform, users enter into a private contract with platform owners: users agree to abide by terms of service,184 which often incorporate other policy documents,185 in exchange for access to the platform’s technology.186 Anglo-American contract law does not recognise that terms of service play a role in governing individuals,187 which means that the legal relationship between platforms and their users is one of company to consumer rather than state to citizen.188 Users remain bound by the contractual terms that they voluntarily accept and adopt when signing up to a platform regardless of

178 Rosenfeld (n 139) 125. 179 Gill, Redeker and Gasser (n 66) 304. Also see generally Jeffrey Goldsworthy, 'Functions, Purposes and Values in Constitutional Interpretation’ in Rosalind Dixon (ed), Australian Constitutional Values (Bloomsbury Publishing, 2018) 44. 180 Gill, Redeker and Gasser (n 66) 304. 181 Scott Burris, Michael Kempa and Clifford Shearing, ‘Changes in Governance: A Cross-Disciplinary Review of Current Scholarship’ (2008) 41 Akron Law Review 1, 9, citing Scott Burris, Peter Drahos and Clifford Shearing, ‘Nodal Governance’ (2005) 30 Australian Journal of Legal Philosophy 30, 30. 182 Colin Scott, ‘Analysing Regulatory Space: Fragmented Resources and Institutional Design’ [2001] (Summer) Public Law 329, 341–5; Witt, Suzor and Huggins (n 27) 563. 183 James Grimmelmann, ‘Virtual World Feudalism’ (2009) Yale Law Journal Pocket Part 118, 126; Suzor, Lawless: The Secret Rules That Govern our Digital Lives (n 32) 10-11. 184 Instagram, ‘Terms of Use’ (n 73). 185 For example, when users sign up to the Instagram platform, they also agree to abide by, inter alia, the Instagram Data Policy, Platform Policy, Music Guidelines and Community Guidelines. For the Community Guidelines, see: Instagram, ‘Community Guidelines’, Help Centre (October 2019) . 186 Giovanni De Gregorio, 'From Constitutional Freedoms to the Power of the Platforms: Protecting Fundamental Rights Online in the Algorithmic Society' (2019) 11 European Journal of Legal Studies 65, 85. 187 See, eg, Nicolas Suzor, ‘Digital Constitutionalism: Using the Rule of Law to Evaluate the Legitimacy of Governance by Platforms’ (n 26) 3. 188 Grimmelmann (n 183) 126.

complaints about how their content is governed in practice.189 From this perspective, the constitutional discourse of the rule of law has limited application in the private sphere of contractual agreements between parties.190

However, such rigid distinctions between public and private spheres are tenuous for a number of reasons. The first is that governance is not specific to one body of law.191 Today, public-private partnerships play an important role in upholding the transnational legal order,192 partly because private companies, like Facebook, often have more resources and specialist technical knowledge than public actors.193 Another is that legal scholars have long recognised that traditional public/private distinctions can be problematic. For instance, Kelsen argues that these distinctions are ‘useless as a general systematisation of law’,194 and Verkuil observes that the public/private divide ‘has been around forever, but it continues to fail as an organizing principle’.195 It is also becoming increasingly difficult to sustain rigid distinctions in ‘decentralised’196 or ‘networked’197 regulatory contexts, like the internet. The online environment is decentralised as it moves away from the traditional, top-down role of the state as the ‘only commander and controller’198 of regulation and towards governance by any number

189 Nicolas Suzor, ‘Digital Constitutionalism: Using the Rule of Law to Evaluate the Legitimacy of Governance by Platforms’ (n 26) 3. 190 Suzor, Lawless: The Secret Rules That Govern our Digital Lives (n 32) 11. 191 For background information on the public/private divide, see Robert Mnookin, ‘The Public/Private Dichotomy: Political Disagreement and Academic Repudiation’ (1982) 130 University of Pennsylvania Law Review 1429; Gerald Turkel, ‘The Public/Private Distinction: Approaches to the Critique of Legal Ideology’ (1988) 22 Law & Society Review 801; Duncan Kennedy, ‘The Stages of the Decline of the Public/Private Distinction’ (1982) 130 University of Pennsylvania Law Review 1349; Morton Horwitz, ‘The History of the Public/Private Distinction’ (1982) 130 University of Pennsylvania Law Review 1423. 192 See, eg, Colin Scott, ‘Regulation in the Age of Governance: The Rise of the Post-Regulatory State’ (n 50) 145; United Nations Global Impact, Business for the Rule of Law Framework (Guidebook, June 2015) 8. 193 Gregorio (n 186) 78. 194 Hans Kelsen, General Theory of law and State (Harvard University Press, 1961), quoted in Rosenfeld (n 139) 125. 195 Paul Verkuil, Outsourcing Sovereignty: Why Privatisation of Government Functions Threatens Democracy and What We Can Do About It (Cambridge University Press, 2007) 78, quoted in Rosenfeld (n 139) 125. 196 See, eg, Julia Black, ‘Decentring Regulation: Understanding the Role of Regulation and Self-regulation in a “Post-regulatory” World’ (2001) 54 Current Legal Problems 103, 105. 197 See, eg, Clifford Shearing and Jennifer Wood, ‘Nodal Governance, Democracy, and the New “Denizens” (2003) 30 Journal of Law and Society 400. 198 Julia Black (n 196) 106.

of state and non-state actors.199 Processes of globalisation, particularly after the development of the internet,200 have been key drivers of this shift.201 De Gregorio explains:

[U]nlike other ground-breaking channels of communication, the cross-border nature of the internet has weakened the power of constitutional states, not only in terms of the territorial application of sovereign powers vis-a-vis other states but also with regard to the protection of fundamental rights in the online environment.202

These changes call into question whether there can be neat distinctions between public and private realms in practice.203

Indeed, a growing body of literature recognises that non-state actors, like Instagram, have become ‘the new governors’204 of the digital age.205 Contractual terms of service arguably

199 Gregorio (n 186) 78 ff. 200 See generally Oreste Pollicino and Marco Bassini, 'The Law of the Internet between Globalisation and Localization' in Miguel Maduro, Kaarlo Tuori and Suvi Sankari (eds), Transnational Law: Rethinking European Law and Legal Thinking (Cambridge University Press 2016); Claudia Padovani and Mauro Santaniello, ‘Digital Constitutionalism: Fundamental Rights and Power Limitation in the Internet Eco-System' (2018) 80(4) The International Communication Gazette 295. 201 Gregorio (n 186) 66; Mark Tushnet, 'The Inevitable Globalization of Constitutional Law' (2009) 49 Virginia Journal of International Law 985. 202 Gregorio (n 186) 66. 203 Rosenfeld (n 139) 127. 204 Klonick (n 28) 1598. 205 It should be noted that just as some public and private actors undertake governing roles, offline and online environments are regulable. At the dawn of the internet, John Perry Barlow and other early cyberlibertarians argued that the internet was a new social space of boundless possibility. Specifically, in his Declaration of the Independence of Cyberspace, Barlow makes two main claims about online regulation. The first is a descriptive claim that the internet is inherently unregulable by territorial governments. Barlow argues that the virtual nature of the internet, which lacks substance or material quality, precludes government interference because legal concepts for setting, maintaining and enforcing rules are ‘all based in matter, and there is no matter here’ (ie, in the online environment). The second is that state regulation of the internet is illegitimate, and governments should defer to the self-rule of cyberspace. Barlow contends that rules and social norms created by online communities to govern themselves will be better than any laws imposed by the state: see John Perry Barlow, 'A Declaration of the Independence of Cyberspace,' Electronic Frontier Foundation (online at 2 August 1996) ; David Johnson and David Post, ‘Law and Borders – the Rise of Law in Cyberspace’ (1995) 48 Stanford Law Review 1367, 1393; Suzor, Digital Constitutionalism – The Legitimate Governance of Virtual Communities (n 135) 56; Julie Cohen, ‘Cyberspace as/and Space’ (2007) 107 Columbia Law Review 210, 217. This ‘exceptionalist treatment’ of the internet, however, gave way with Lessig’s famous phrase ‘code is law’ and several important recognitions: see, eg, Suzor, Digital Constitutionalism - The Legitimate Governance of Virtual Communities (n 135) 57 ff; Lawrence Lessig, ‘Code Is Law’, Harvard Magazine (online at 1 January 2000) ; Lessig, Code and Other Laws of Cyberspace (n 50). One important recognition is that the US Department of Defense largely funded the development of the internet, which is subject to state-granted property rights. Indeed, the Internet Corporation for Assigned Names and Numbers (ICANN), an organisation that did not assume independent management from the US Department of Commerce until mid-2016, controls and manages the infrastructure of the network of networks: see, eg, Kal Raustiala, ‘Governing the Internet’ (2016) 110 American Journal of International Law 491.

function as constitutional documents in the way that they establish the power of platform operators to regulate user-generated content and set standards of appropriate content.206 As Facebook CEO Mark Zuckerberg acknowledged in 2009, ‘Our terms aren’t just a document that protects our rights; it’s the governing document for how the service is used by everyone across the world’.207 While terms of use and content policies formally set the bounds of what content is appropriate and what is not, platforms primarily govern users through the modality of regulation known as architecture,208 which includes the hardware and software that constitute their online networks. The regulatory role of architecture often goes unnoticed by users, but software can restrict and control users’ behaviour as effectively as a barrier in physical space.209 Platforms also govern by limiting access to their networks (eg, by suspending and/or banning certain accounts), and by removing individual pieces of content, such as the images examined in this thesis.
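A trivial sketch of how architecture regulates: the check below runs before any user action is processed, so a suspended account is stopped by software rather than by a rule the user can choose to disobey. The account structure and function names are illustrative assumptions, not Instagram’s implementation.

```python
from dataclasses import dataclass

@dataclass
class Account:
    username: str
    suspended: bool = False

def can_post(account: Account) -> bool:
    """Architecture as regulation: this software gate runs before any post is accepted."""
    return not account.suspended

user = Account("example_user", suspended=True)
print(can_post(user))  # False: the restriction is enforced by code, like a barrier in physical space
```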

As an integral part of platform governance, content moderation attempts to influence or control the types of content that users see and how and when they see it.210 While Instagram’s executives ultimately set the bounds of appropriate user expression, most of the work of moderating content is undertaken by outsourced individuals or firms, known as commercial content moderators.211 Recent reports suggest that ‘tens of thousands of people’212 work as commercial content moderators, including around 15 000 people who review content for

206 See, eg, Nicolas Suzor, ‘Digital Constitutionalism: Using the Rule of Law to Evaluate the Legitimacy of Governance by Platforms’ (n 26) 2. 207 Adweek Staff, ‘Facebook Reverts Terms of Service after Complaints’, Adweek (online at 18 February 2009) . 208 In one of his foundational books about internet governance, Lessig explains that law is not the only modality that plays a role in shaping regulation: the market, norms and physical architecture can also influence online communication, or behaviours, just as effectively as the legal rules of nation-states: see, eg, Lessig, ‘Code is Law’ (n 205); Margaret Radin and R. Polk Wagner, ‘Myth of Private Ordering: Rediscovering Legal Realism in Cyberspace’ (1997) 73 Chicago-Kent Law Review 1295, 1296-7; Lawrence Lessig, ‘The Law of the Horse: What Cyber Law Might Teach’ (1999) 113 Harvard Law Review 501. Grimmelmann goes one step further by arguing that code is a separate modality of regulation from law and physical architecture, principally given the automated, immediate and plastic nature of code: see Grimmelmann, ‘Regulation by Software’ (n 50) 1721-23. 209 Suzor, Digital Constitutionalism - The Legitimate Governance of Virtual Communities (n 135) 57. 210 Witt Suzor and Huggins (n 27) 557. 211 Roberts, Behind the Screen: Content Moderation in the Shadows of Social Media (n 24) 33 ff; Sarah T Roberts, ‘Digital Refuse: Canadian Garbage, Commercial Content Moderation and the Global Circulation of Social Media’s Waste’ (Media Studies Publications Paper No 14, Faculty of Information and Media Studies, Western University, 2016) 1. 212 Lauren Weber and Deepa Seetharaman, ‘The Worst Job in Technology: Staring at Human Depravity to Keep It off Facebook’, The Wall Street Journal (online), 27 December 2017 .


Facebook and Instagram worldwide.213 Individual moderators purportedly review hundreds,214 sometimes thousands,215 of posts per day. The decisions that moderators make about the appropriateness of content are ostensibly based on platform-specific terms, standards and guidelines. It appears, however, that decision-making processes can also be influenced by the other modalities of regulation: the marketplace, including commercial prerogatives and responsibility to shareholders; norms at the geographic, industry, platform, community and individual levels; and state enacted laws, which platforms remain subject to, particularly criminal and intellectual property laws.216

Another way that platforms regulate their networks is through automated systems. As I will explain in Chapter Three, automated tools can be particularly useful when it comes to detecting potentially problematic material among enormous amounts of user-generated content. Instagram users, for instance, take or upload around 95 million photos and videos on any given day.217 Yet, automated systems are not without their dangers, many of which arise from the fact that processes for developing and deploying technologies are not neutral.218 Indeed, the ways that many platforms use technologies to regulate content are influenced by a largely neoliberal,219 technologically determinist outlook.220 Phipps and Young conceptualise neoliberalism as ‘a value system in which the economic has replaced the intellectual and the political and in which the competitive, rational individual predominates over the collective’.221
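To make the idea of automated detection more concrete, the following is a minimal, purely illustrative sketch of one widely discussed approach: matching uploads against a blocklist of fingerprints of previously identified violating images and queuing matches for human review. The blocklist, the use of a simple cryptographic hash and all values shown are hypothetical assumptions for illustration only; production systems reportedly rely on perceptual hashing and machine-learning classifiers, and nothing here describes Instagram’s actual tooling.

```python
import hashlib

# Illustrative only: a toy filter that flags uploads whose fingerprint matches
# a hypothetical blocklist of previously identified violating images. Real
# systems reportedly use perceptual hashing and machine-learning classifiers,
# not exact cryptographic hashes; nothing here describes Instagram's tooling.
KNOWN_VIOLATING_FINGERPRINTS = {
    "9e107d9d372bb6826bd81d3542a419d6",  # placeholder value, not a real image hash
}

def flag_for_review(image_bytes: bytes) -> bool:
    """Return True if the upload matches a known fingerprint on the blocklist."""
    fingerprint = hashlib.md5(image_bytes).hexdigest()
    return fingerprint in KNOWN_VIOLATING_FINGERPRINTS

uploads = [b"example image bytes", b"another example"]
flagged = [img for img in uploads if flag_for_review(img)]
print(f"{len(flagged)} of {len(uploads)} uploads flagged for human review")
```

Even a sketch this simple illustrates the non-neutrality noted above: whatever ends up on the blocklist, or however a classifier’s threshold is set, embeds a prior judgement about what counts as problematic.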

213 Casey Newton, ‘The Trauma Floor: The Secret Lives of Facebook Moderators in America’, The Verge (online), 25 February 2019 ; Ryan Broderick, ‘The Comment Moderator Is the Most Important Job in the World Right Now’, BuzzFeed News (online at 4 March 2019) . 214 Ibid. 215 Weber and Seetharaman (n 212). 216 Lessig, Code and Other Laws of Cyberspace (n 50) 6. See also Tarleton Gillespie ‘Governance of and by Platforms’ in Jean Burgess, Thomas Poell and Alice Marwick (eds), SAGE Handbook of Social Media (SAGE Publications, 2017) . 217 Lee, ‘Instagram Users Top 500 Million’ (n 8); Instagram, A New Look for Instagram, Info Center (Press Release, 11 May 2016) . 218 Roberts, ‘Behind the Screen: Content Moderation in the Shadows of Social Media’ (n 24) 30. 219 See generally Rachel Turner, ‘The Rebirth of Liberalism’: The Origins of Neo-Liberal Ideology’ (2007) 12(1) Journal of Political Ideologies 67. Platforms are arguably neoliberalism write large in the way that they often assert that their technologies are all about freedom of choice: given that users voluntarily enter into a private contract with a platform, any users who chooses to participate in an online platform must, by definition, be satisfied with the terms of that contract: see, eg, Alice Witt, Nicolas Suzor and Patrik Wikström, ‘Regulating Ride-Sharing in the Peer Economy’ (2015) 1(2) Communication Research and Practice 174, 184. 220 Richard Barbrook and Andy Cameron, ‘The California Ideology’ (1996) 6(1) Science as Culture 44, 47; Roberts, ‘Behind the Screen: Content Moderation in the Shadows of Social Media’ (n 24) 72. 221 Alison Phipps and Isabel Young, ‘Neoliberalisation and “Lad Cultures” in Higher Education’ (2015) 49(2) Sociology 305, 306; Frances Rogan, Social Media, Bedroom Cultures and Femininity: Exploring


In the context of neoliberal platforms, many executives contend that the management of a course of events (ie, governance) is best, and sometimes autonomously, shaped by technologies – a view known as technological determinism.222 The power that platform executives and their moderators have to moderate content and, by extension, to set and influence social and other norms in line with this outlook is a product of the ‘lawless’ state of platform governance.223

II PLATFORM GOVERNANCE IS LAWLESS

Online platforms themselves are not lawless. In line with the first supposed tenet of the rule of law, ‘[a]ll societal actors, including businesses, are required to respect the rule of law’.224 For the most part, Instagram and other technology companies abide by the law and do not undermine the governance of legitimate nation-states.225 Platforms remain subject to courts of law, which ultimately determine the appropriate bounds of contractual bargains, including private consumer contracts between platforms and their users.226 Allan explains that, ‘As problems of abuse of power by non-governmental bodies become more clearly recognized, the common law is capable of generating appropriate requirements of fairness and rationality in private law’.227 The law of contract, however, is often insufficient for resolving the abundant governance tensions between platforms and their users. A foremost deficiency is that Anglo-American contract law generally does not recognise that private actors undertake governing roles.228 The constitutional discourse of the rule of law has almost no application in the private sphere of contractual agreements between parties,229 all of which contributes to framing the relationship between platform owners and users as horizontal. This framing matters because platform governance is far from an even playing field.

the Intersection of Culture, Politics and Identity in the Digital Media Practices of Girls and Young Women in England (Doctor of Philosophy Thesis, University of Birmingham, 2017) 22. http://www.surfacenoise.info/neu/globalmediaS18/readings/HarveyNeoliberalism.pdf 222 See generally, David Hess, ‘Power, Ideology, and Technological Determinism’ (2015) 1 Engaging Science, Technology and Society 121; Ian Bogost and Nick Montfort, ‘Platform Studies: Frequently Asked Questions’ (FAQ, 12 December 2009) . 223 Suzor, The Secret Rules That Govern our Digital Lives (n 32) 106 ff. 224 United Nations Global Impact, Business for the Rule of Law Framework (n 192) 8. 225 See, eg, Suzor, Lawless: The Secret Rules That Govern our Digital Lives (n 32) 6-7. 226 Suzor, 'Digital Constitutionalism: Using the Rule of Law to Evaluate the Legitimacy of Governance by Platforms' (n 26) 3. 227 Trevor Allan, Constitutional Justice: A Liberal Theory of the Rule of Law (Oxford University Press, 2011) 11. See also Trevor Allan, Law, Liberty, and Justice: The Legal Foundations of British Constitutionalism (Clarendon Press, 1994) 4. 228 Witt, Suzor and Huggins (n 27) 564. 229 Suzor, 'Digital Constitutionalism: Using the Rule of Law to Evaluate the Legitimacy of Governance by Platforms' (n 26) 3.


Platforms are, to use Suzor’s term, ‘lawless’230 in the sense that they govern without the constitutional safeguards that apply to more traditional government processes. Terms of service, which make poor constitutional documents in practice,231 are particularly illustrative of this point. Unlike traditional constitutions, the contractual bargain between platforms and their users affords platform owners ‘complete discretion to control how the network works and how it is used’.232 Instagram’s Terms of Use, for instance, states that ‘[w]e can remove any content or information you share on the Service if we believe that it violates these Terms of Use, our policies (including our Instagram Community Guidelines), or we are permitted or required to do so by law’.233 Such terms, along with other guidelines and policies, establish the platform’s broad self-regulatory framework.234 Black defines self-regulation as ‘the practice of industry taking the initiative to formulate and enforce rules and codes of conduct with no government involvement, or with such involvement taking a very limited form, for example as observer or advisor’.235 The result is that users have little say in determining the content of terms of service and there is little effective choice in the market – over three billion of the world’s social media users can either ‘take it or leave it’.236 Users are also bound by subsequent modifications and revisions to platform policies after they initially agree to terms of service.237

There are several factors that contribute to the lawlessness of platform governance, chiefly powerful legal protections under United States (‘US’) law – the jurisdiction where many online platforms are based. Section 230(c) of the Communications Decency Act of 1996 (‘CDA’, ‘CDA 230’) immunises online platforms – as ‘providers’ of ‘interactive computer

230 Suzor, The Secret Rules That Govern our Digital Lives (n 32) 6-7. 231 Witt, Suzor and Huggins (n 27) 565. 232 Suzor, ‘Digital Constitutionalism: Using the Rule of Law to Evaluate the Legitimacy of Governance by Platforms' (n 26) 3; Kate Crawford and Tarleton Gillespie, ‘What Is a Flag for? Social Media Reporting Tools and the Vocabulary of Complaint’ (2016) 18(3) New Media & Society 410, 412; Rebecca Tushnet, ‘Power Without Responsibility: Intermediaries and the First Amendment’ [2008] 76 The George Washington Law Review 986, 988. 233 Instagram, ‘Terms of Use’ (n 73) [Content Removal and Disabling or Terminating Your Account]. 234 See, eg, Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (n 2) 47, 49. 235 Black, ‘Decentring Regulation: Understanding the Role of Regulation and Self-regulation in a “Post- regulatory” World’ (n 196) 116. 236 See, eg, Suzor, ‘Digital Constitutionalism: Using the Rule of Law to Evaluate the Legitimacy of Governance by Platforms’ (n 26) 9. 237 For instance, Instagram’s Terms of Use states: ‘We may change our Service and policies, and we may need to make changes to these Terms so that they accurately reflect our Service and policies. Unless otherwise required by law, we will notify you (for example, through our Service) before we make changes to these Terms and give you an opportunity to review them before they go into effect. Then, if you continue to use the Service, you will be bound by the updated Terms. If you do not want to agree to these or any updated Terms, you can delete your account’: Instagram, ‘Terms of Use’ (n 73).

services’ – from liability for any information (ie, content) provided (ie, posted) by users.238 The CDA, which legislators initially drafted to make it illegal for persons to transmit obscene material to minors, has two crucial ramifications for online platforms.239 First, platform operators, which host or republish speech, are generally not legally liable or morally responsible for what their users say or do.240 Platforms largely do not have to moderate content except for illegal content or material that infringes intellectual property regimes.241 Second, if platforms elect to moderate content, they do not have to meet particular standards of moderation.242 Mark Zuckerberg often claims that Facebook is ‘a technology company, not a media company’ in order to rhetorically position his companies, including Instagram, as strictly private spaces within the scope of CDA 230.243 Zuckerberg explains, ‘I consider us [Facebook] to be a technology company because the primary thing that we do is have engineers who write code and build product and services for other people’.244 CDA 230 is widely credited with enabling online platforms to flourish245 because, in the context of content regulation, they can adopt flexible governance practices to shape how users interact with each other and better adapt to changing online and offline environments.246

Processes for moderating content on Instagram, including the rules and guidelines that moderators follow, are also often confidential information.247 Moderators are generally required to sign non-disclosure agreements (‘NDAs’) to prevent public discussion about decision-making processes, the outsourced entities that moderate content for platforms and

238 Communications Decency Act of 1996, 47 USC § 230(c) (1996); Milton Mueller, ‘Hyper-Transparency and Social Control’ (2015) 39(9) Telecommunications Policy 804, 805. 239 Klonick (n 28) 1602, 1604 ff; Tushnet (n 232) 1002. 240 Instagram’s Terms of Use contains a general liability clause: ‘Our responsibility for anything that happens on the Service (also called "liability") is limited as much as the law will allow. If there is an issue with our Service, we can't know what all the possible impacts might be. You agree that we won't be responsible ("liable") for any lost profits, revenues, information, or data, or consequential, special, indirect, exemplary, punitive, or incidental damages arising out of or related to these Terms, even if we know they are possible. This includes when we delete your content, information, or account’: Instagram, ‘Terms of Use’ (n 73). 241 Communications Decency Act of 1996, 47 USC § 230(c) (1996); Tushnet (n 232) 1001-2. 242 Tushnet (n 232) 1002; Klonick (n 28) 1606-1607. 243 See, eg, Witt, Suzor and Wikström (n 219)177; Jeff John, 'Why Zuckerberg Won't Admit Facebook Is a Media Company', Fortune Tech (online at 14 November 2016) . 244 Michelle Castillo, ‘Zuckerberg tells Congress Facebook Is Not a Media Company: 'I Consider us to be a Technology Company', CNBC (online at11 April 2018) . 245 See generally Frank LoMonte, ‘The Law that Made Facebook What It Is Today’, The Conversation (online at 11 April 2018) . 246 Ibid. 247 Nyuk Yin Nahan, ‘The Duty of Confidence Revisited’ (2015) 39(2) University of Western Australia Law Review 270, 273-274.

working conditions.248 As Newton explains, seemingly in the context of the ‘Facebook family of services’,249 ‘NDAs are also meant to prevent contractors from sharing Facebook users’ personal information with the outside world at a time of intense scrutiny over data privacy issues’.250 There is an argument that the possibility of leaks, such as the cache of documents known as the ‘Facebook Files’,251 justifies secrecy. Others argue that full disclosure of information can enable users to game moderation processes and potentially expose moderators to retaliation from former and current colleagues and users.252 After news broke in 2018 about the now-defunct research firm Cambridge Analytica having gained access to the personal data of around 87 million Facebook users, Facebook took steps towards greater transparency by releasing some information about how its internal guidelines are interpreted in practice.253 Facebook did not, however, publish its longer internal, ‘prescriptive’254 guidelines, or its ‘Known Questions’ document,255 which allegedly provides moderators with additional guidance for navigating complex regulatory issues. These examples illustrate that platforms generally operate based on a logic of secrecy: the rules and processes that moderators follow in practice significantly differ from publicly disclosed policies and users lack precise information around who removes content, for what and by what means.256

There is now widespread unease over the risks, including potential arbitrariness, posed by platform governance.257 The last twenty years of massive growth in internet services has given

248 Roberts, Behind the Screen: Content Moderation in the Shadows of Social Media (n 24) 38. 249 In the Company Info section of the Facebook web page, the company refers to its ‘family of services’: see Facebook, ‘Company Info’ (n 108). 250 Newton (n 213). 251 See, eg, the Guardian, ‘Facebook Files’, The Guardian (online at 11 September 2017) . 252 See, eg, Roberts, Behind the Screen: Content Moderation in the Shadows of Social Media (n 24) 37-38. 253 See, eg, Monika Bickert, ‘Publishing Our Internal Enforcement Guidelines and Expanding Our Appeals Process’, Facebook Newsroom (Press Release, 24 April 2018) . For the guidelines that Bickert refers to in the foregoing press release, see: Facebook, ‘Community Standards’ (2019) . 254 For instance, the policies that Facebook moderators follow are ‘not what you read in Facebook’s terms of service or community standards. They are a deep, highly specific set of operational instructions for content moderators that is reviewed constantly by [Monika] Bickert’s team and in a larger intra- Facebook gathering every two weeks’: see Alexis Madrigal, ‘Inside Facebook's Fast-Growing Content- Moderation Effort’, The Atlantic (online at 7 February 2018) . 255 Newton (n 213). 256 Madrigal (n 254). 257 See, eg, David Kaye, Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, UN Doc A/HRC/41/35 (28 May 2019); David Kaye, Special Rapporteur, Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, UN Doc A/HRC/38/35 (6 April 2018); David Kaye, Special Rapporteur,

rise to serious concerns that governance practices in the online environment replicate and entrench systemic issues, including social bias and discrimination.258 There is also unease about the extent of platforms’ governing power. De Gregorio argues:

Instead of a democratic decentralised society, the potentialities of the internet have created an oligopoly of private entities who both control information and determine how people exchange it. As such, the platform-based regulation of the internet has prevailed over the community- based model.259

In response, scholars and non-governmental organisations, among others, are calling for mechanisms to protect individuals online.260 For instance, Tim Berners-Lee, the inventor of the World Wide Web, advocates for a ‘Magna Carta of the Internet’261 that considers how we should constitute online social spaces and limit the exercise of power in cyberspace. Berners-Lee helped to launch the Contract for the Web Project in 2019,262 a ‘shared set of commitments to build a better web’,263 which builds on earlier initiatives such as the Internet Rights & Principles Coalition’s (‘IRPC’) charter and Rebecca MacKinnon’s call to ‘Take back the Internet!’264 A number of detailed analyses, which highlight the effects that governance by platforms can have on a wide range of issues including equality, fairness and freedom of expression, support these rallying calls.265

Many initiatives advocating for change in this context align with the broader project of digital constitutionalism. As previously explained, digital constitutionalism is a concept266 – also described as an ‘ideology’267 and ‘constellation of initiatives’268 – that seeks to articulate limits

Freedom of Expression and the Private Sector in the Digital Age, UN Doc A/HRC/32/38 (11 May 2016) 2. 258 See, eg, Molly Dragiewicz et al, ‘Technology Facilitated Coercive Control: Domestic Violence and the Competing Roles of Digital Media Platforms’ (2018) 18(4) Feminist Media Studies 609. 259 Gregorio (n 186) 73. 260 See, eg, Dynamic Coalition on Platform Responsibility, Recommendations on Terms of Service and Human Rights (Recommendation Report, November 2015) . 261 Tim Berners-Lee, ‘Tim Berners-Lee calls for Internet Bill of Rights to Ensure Greater Privacy’, The Guardian (online at 28 September 2014) . 262 See generally Rory Cellan-Jones, 'Tim Berners-Lee: 'Stop web's downward plunge to dysfunctional future'', BBC News (online at 11 March 2019) . 263 ‘A Contract for the Web’, Contract for the Web Stakeholders (Web Page, 2019) . 264 See, eg, Rebecca MacKinnon, Consent of the Networked: The Worldwide Struggle for Internet Freedom (Basic Books, 2013). 265 See, eg, Ranking Digital Rights (n 26); Anderson et al (n 26); Crocker et al (n 40). 266 Padovani and Santaniello (n 200) 296. 267 Celeste (n 64) 76. 268 Gill, Redeker and Gasser (n 66) 302.

on the exercise of power, whether public and/or private, in the digital age.269 Many proponents of this ideology advocate for ‘contemporary constitutionalism,’270 including the desirability of well-established ideals like the rule of law influencing governance practices across public and private domains. That is, the ‘constitutionalisation’271 of digital society. Fitzgerald argues:

Once upon a time it was thought (and taught) that constitutionalism was a legal and political concept that defined the exercise of power by national public institutions. So much has changed. Nowadays constitutionalism must be defined more broadly to encompass the exercise of power in the private sphere by, amongst others, corporations, groups and individuals and the growing significance of international institutions (including multinational corporations (MNCs)) to our daily lives.272

Digital constitutionalism therefore has ‘a political rather than a technical primary scope’.273 It draws particular attention to the substantial control that private corporations, including platforms, exercise over ‘the everyday life of an unprecedented number of people’.274 It also underlines Suzor’s point that ‘…the idea that governance is unimportant in these spaces because they are private seems not just archaic but dangerous’.275 A significant danger in the context of this thesis is that the balance of power in content moderation processes is tipped disproportionately in favour of technology companies. Individuals and groups consequently face a number of obstacles when attempting to challenge the commercial providers of social networking technologies.276

Efforts to advance a digital constitutionalism raise the question whether rule of law discourse can apply in the context of social media platforms, which are privately owned and governed. In this thesis, I follow Krygier’s explicitly teleological approach to the rule of law, which underlines that constitutional discourse has purchase across realms, contexts and actors.277 Krygier challenges the orthodox ‘assumption that threats of arbitrariness with which the rule of law is concerned are a state monopoly’ in the context of a society ‘full of networks, nodes, fields, and orderings that have power of people in and around them’.278 In other words, if we

269 Suzor, Lawless: The Secret Rules That Govern our Digital Lives (n 32) 165 ff. 270 Celeste (n 64) 77, 81. 271 Ibid 87. 272 Fitzgerald (n 134) 144. 273 Padovani and Santaniello (n 200) 296. 274 Instagram, Welcome to IGTV (n 9); Celeste (n 64) 79. 275 Suzor, Digital Constitutionalism - The Legitimate Governance of Virtual Communities (n 135) 71. 276 Nicolas Suzor, ‘The Role of the Rule of Law in Virtual Communities’ (2010) 25 Berkeley Technology Law Journal 1817, 1820; Gregorio (n 186) 69-70, 102. 277 Krygier, ‘Why the Rule of Law Is Too Important to Be Left to Lawyers’ (n 43) 20-21. 278 Krygier, ‘The Rule of Law: Pasts, Presents, and Two Possible Futures’ (n 45) 221.

are concerned with addressing the risk of arbitrariness in the exercise of power, it should not matter whether the source of that power is public or private.279 This approach is persuasive as it recognises that states are not the only actors that exercise power with public consequences and in ways that can potentially harm individuals or groups.

The rule of law is a valuable lens for evaluating regulatory concerns around content moderation, as part of broader tensions between platforms and their users, for several reasons. The first is that while this legal ideal is contested, ‘it has long been regarded as an important political ideal’.280 Take, for instance, the claim that the ‘rule of law is a human good’,281 or Dicey’s description of the rule of law as a ‘legal spirit’282 that ‘pervades’283 English common law and now, to varying degrees, the legal systems of many former British colonies like Australia.284 More specifically, the rule of law is valuable as it institutionalises constraints on arbitrariness in the exercise of power, irrespective of the specific legal and institutional features that accompany it.285 This enables different societal actors to adapt a rule of law framework, where necessary and within the limits of Western democratic norms, to public and private domains and different contexts. The rule of law also provides a well-established language to start to name and work through what is at stake for women and other users in the potentially arbitrary exercise of power over content.286

As online platforms become increasingly enmeshed in the lives of everyday users and compete for greater market share, I contend that it is necessary to continue the work of developing a constitutionalism for platform governance. In the remainder of this Chapter, following Krygier’s explicitly teleological approach to the rule of law,287 I expound a constitutional framework for evaluating processes for moderating content. Given that arbitrariness is the antithesis of the Western ideal of the rule of law,288 I commence the following Part by

279 Krygier, ‘The Rule of Law: Pasts, Presents, and Two Possible Futures’ (n 45) 221. 280 Crawford, The Rule of Law and the Australian Constitution (n 159) 1. 281 See, eg, Suzor, ‘Digital Constitutionalism: Using the Rule of Law to Evaluate the Legitimacy of Governance by Platforms’ (n 26) 8; Tamanaha (n 160) 137; E. P. Thompson, Whigs and Hunters: The Origin of the Black Act (Penguin Books, 1990) 266. 282 Dicey (n 130) 156. 283 Ibid. 284 For background information about how Australia received English law, see Cook et al, Laying Down the Law (Lexis Nexis, Butterworths, 7th ed, 2009) 35 ff. 285 See, eg, Krygier, ‘The Rule of Law: Legality, Teleology and Sociology’ (n 43) 45; Krygier, ‘Four Puzzles about the Rule of Law: Why, What, Where? And Who Cares?’ (n 43) 64. 286 Suzor, 'Digital Constitutionalism: Using the Rule of Law to Evaluate the Legitimacy of Governance by Platforms' 9. 287 See, eg, Krygier, ‘The Rule of Law: Legality, Teleology and Sociology’ (n 43). 288 Crawford, The Rule of Law and the Australian Constitution (n 159) 11.

conceptualising arbitrariness in the exercise of power by online platforms. I then outline each of the values that constitute my rule of law framework.

III CONSTRAINTS ON POTENTIAL ARBITRARINESS: SELECTED VALUES OF THE RULE OF LAW

A foremost risk for the users of online platforms is potential arbitrariness in decision-making about content. Like other well-established legal concepts, such as justice or fairness, arbitrariness can vary in kind and by degrees.289 On the one hand, arbitrariness can encompass extremes, among them cruelty and fanaticism. In this sense, ‘protection against the brute vices of unrestrained power’ is primary.290 The exercise of power can also be arbitrary when decisions are based on whim or made at random without regard for established procedures of government.291 On the other hand, arbitrariness can exist where the law is ‘inflexible, insensitive, or justified only by history or precedent’.292 This is one reason why legal rules, such as those that are prescriptive or proscriptive in nature, should be subject to meaningful public scrutiny. Arbitrariness can also occur when power is exercised unpredictably, or when it is exercised in a way that takes no account of the perspectives and interests of affected parties.293 However, questions of how best to temper potential arbitrariness have given rise to a longstanding academic debate, which is generally divided between those who argue for a ‘thin’ or ‘formal’ conception of the rule of law and those who advocate a ‘thick’ or substantive version.294

Formal strands of rule of law thinking are principally concerned with the form that the law should take in order to effectively guide human conduct.295 Advocates of a formal approach insist that persons have adequate guidance or an appreciation of their position in a legal system when the law is predictable:296 that is, they have knowledge of what the law is, including defined processes, standards and consequences for particular action or inaction, and how to act on that knowledge.297 By contrast, advocates for a ‘thick’ or substantive approach argue that the rule of law should be concerned with more than what the law is and hence make claims

289 Krygier, ‘Transformations of the Rule of Law: Legal, liberal, and neo-’ (n 131) 12. 290 Ibid. 291 University of Oxford, Oxford Living Dictionary (online at 7 May 2018) 'Arbitrary' ; Tamanaha (n 160) 137. 292 Krygier, ‘Transformations of the Rule of Law: Legal, liberal, and neo-’ (n 131) 13. 293 Suzor, Digital Constitutionalism - The Legitimate Governance of Virtual Communities (n 135) 70. 294 See, eg, Zalnieriute, Moses and Williams (n 150) 429. 295 See generally Paul Craig, ‘Formal and Substantive Conceptions of the Rule of Law: An Analytical Framework’ (1997) Public Law 467. 296 Suzor, Digital Constitutionalism - The Legitimate Governance of Virtual Communities (n 135) 100 ff. 297 See, eg, Raz (n 42) 213-214.

about what the law ought to be.298 Substantive conceptions of the rule of law focus on forms of legality as well as ‘stipulations about the content of the law’.299 These stipulations might, for instance, relate to justice,300 respect for human dignity or autonomy.301 Regulatory actors can threaten human dignity and autonomy in a number of ways: for example, by making ad hoc or ‘on the fly’302 decisions, privatising internal governance processes and failing to report on the human impacts of discretionary decision-making.303 Many of these hallmarks of potential arbitrariness are evident in the governance practices of online platforms – an important point that I will expand upon with reference to my selected rule of law values below.

While legal scholars continue to debate the kinds of law that best temper arbitrary power,304 the individuated requirements of the rule of law are also complex, which makes the labels of formal/substantive, also referred to as thin/thick, slippery in practice. Crawford explains:

[T]he labels of thick and thin are premised upon other difficult distinctions. Labelling a particular theory of the rule of law as a ‘formal’ one suggests that it has nothing to say about the content of a law; that it is concerned purely with matters of procedure and not of substance. But this is not necessarily the case, and in any event, it is notoriously difficult to draw bright lines between questions of substance and of form.305

There is no universal set of rule of law values,306 and the values emphasised can differ markedly depending on legal traditions.307 In this thesis, I chose to focus on the values of equality, certainty, reason-giving, accountability, transparency and participation. I selected these values because they are well-established in the Anglo-American constitutional tradition and reflect recurring themes in concerns about the moderation of images depicting women’s bodies on Instagram.308 I also chose to conceptualise the proposed values in largely formal terms to start to test whether there is support for some users’ claims of potential arbitrariness, which I detail in Part IV, and to shed light on what the foregoing literature might recognise as formal problems in Instagram’s

298 Crawford, The Rule of Law and the Australian Constitution (n 159) 28 ff. 299 Zalnieriute, Moses and Williams (n 150) 429. 300 See, eg, H. L. A. Hart, Encyclopaedia of Philosophy (Macmillan Press, London, 1967) 273-4. 301 Raz (n 42) 221-220. 302 See, eg, Newton (n 213). 303 Krygier, ‘Transformations of the Rule of Law: Legal, liberal, and neo-’ (n 131) 13. 304 See eg Celeste (n 64) 12 ff 305 Crawford, The Rule of Law and the Australian Constitution (n 159) 12. Also see John Gardner, ‘The Supposed Formality of the Rule of law’ in Law as a Leap of Faith: Essays on Law in General (Oxford University Press, 2012) 195, 213-214. 306 See, eg, Waldron, ‘Is the Rule of Law an Essentially Contested Concept (In Florida)?’ (n 60) 137. 307 See, eg, Farrall (n 52) 40-41; Waldron, ‘Hart and the Principles of Legality’ (n 130) 71, 79. 308 Ibid; Suzor, 'Digital Constitutionalism: Using the Rule of Law to Evaluate the Legitimacy of Governance by Platforms' (n 26).

processes for moderating content more broadly. In doing so, I start to identify and examine substantive problems, many of which are inextricably linked to the formal issues that I focus on in this thesis, and to lay the foundations for more substantive work.

In the remainder of this Chapter and, indeed, throughout this thesis, I return to the telos of the rule of law – that is, opposition to the potentially arbitrary exercise of power.309 I also promote and reinforce the basic idea that laws should give individuals an appreciation of their position in a legal system.310 As such, I argue that any system of rules for moderating, or regulating, content should be capable of guiding the behaviours of users, as the subjects of platform governance,311 in line with my selected rule of law values of formal equality, certainty, reason-giving, transparency, participation and accountability. I contend that the exercise of power over content is arbitrary when it derogates from or contradicts my selected values and, where it does, ought to be constrained to enable users to better understand content moderation processes and how to act on that knowledge.312 These values are normative aspirations rather than strict criteria for achieving the rule of law in practice. Overall, my rule of law framework facilitates empirical evaluation in Chapters Four and Five of the extent to which the moderation of images of female forms, and some male forms, on Instagram appears to align with the legal ideal of the rule of law. This evaluation supports my recommendations in Chapter Seven.

A Equality

The first of the rule of law values that is a focus of this thesis is equality. As previously explained, this value can take the form of a formal obligation,313 which principally requires the consistent treatment of individuals in like circumstances – that is, treating like cases alike.314 Formal equality prohibits different treatment, or discrimination,315 of any kind between persons based on ‘ethnicity, colour, sex, language, religion, political or other opinion, national or social

309 Krygier, ‘The Rule of Law: Legality, Teleology, Sociology’ (n 43) 45; Krygier, ‘Four Puzzles about the Rule of Law: Why, What, Where? And Who Cares?’ (n 43) 64; Krygier, ‘Why the Rule of Law Is Too Important to Be Left to Lawyers’ (n 43) 18, 20–21. 310 Raz (n 42) 218; Crawford, The Rule of Law and the Australian Constitution (n 159) 14. 311 Waldron, ‘Hart and the Principles of Legality’ (n 130) 79. 312 Ibid. 313 See generally Jonathon Penny, ‘Virtual Inequality: Challenges for the Net’s Lost Founding Value’ (2012) 10(3) Northwestern Journal of Technology and Intellectual Property 209. 314 Bingham (n 51) 55-6; Catharine Bernard and Bob Hepple, ‘Substantive Equality’ (2000) 59(3) Cambridge Law Journal 562. 315 Penny (n 313) 217; Bingham (n 51) 56; International Covenant on Civil and Political Rights (ICCPR), opened for signature 16 December 1966, 999 UNTS 171 (entered into force 23 March 1976) art 2.1, art 26.

origin, property, birth or other status’.316 In terms of sex, it is well-established in human rights discourse, a distinct yet complementary body of law to constitutionalism,317 that women and men are formally equal before the law.318 Equality can also take the form of a substantive obligation, which is concerned with the results of the application of the law to individuals, rather than the consistent treatment of individuals alone.319 Substantive equality might, for instance, require inconsistent treatment of persons to achieve certain outcomes.320 In the context of Instagram, I advance a largely formal conception of equality: that images in like categories of content should be moderated alike.321 This conception is based on apparent body shape/size, rather than any of the foregoing formal categories, in light of widespread concerns that technology companies might discriminate against certain body types when moderating content.322 A decision about content is therefore arbitrary in this study when it appears to be made with reference to body shape, as depicted in an image, among other factors.

A range of societal actors argue that user-generated content should be moderated in a way that is formally equal.323 The United Nations (‘UN’) Protect, Respect and Remedy Framework, for example, states that businesses, including online platforms, have a responsibility to avoid infringing human rights, including equality, and to help mitigate any related impacts.324 The Global Network Initiative (‘GNI’), which is a multi-stakeholder group of information and communications technology (‘ICT’) companies, including Facebook, academics and other stakeholders, builds on the UN’s work by arguing that ICT companies have a responsibility to respect basic human rights.325 Further, the Charter of Human Rights and Principles for the Internet, which is not a UN document, recognises a Right to Non-Discrimination in Internet

316 Universal Declaration of Human Rights, GA res 217A (III), UN GAOR, UN DOC A/810 (10 December 1948) art 10. In an internet context, formal equality is recognised as a basic right: Internet Rights & Principles Coalition (n 157) 14. 317 Kofi Annan, Secretary General, In Larger Freedom: Towards Development, Security and Human Rights for All, UN Doc A/59/2005 (26 May 2005) 18. 318 See generally UN Convention on the Elimination of all Forms of Discrimination Against Women, opened for signature 18 December 1979, 999 UNTS vol. 1249 (entered into force 3 September 1981). 319 Witt, Suzor and Huggins (n 27) 568. 320 Robin West, Re-Imagining Justice: Progressive Interpretations of Formal Equality (Ashgate Publishing Limited, 2003) 107. 321 Farrall (n 52) 41. For another example of a narrow conception of equality at the intersectional of law and technology, see Zalnieriute, Moses and Williams (n 150). 322 See Part IV of this Chapter. 323 See, eg, Kaye, Special Rapporteur, Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression (6 April 2018) (n 257). 324 Office of the High Commissioner for Human Rights, Guiding Principles on Business and Human Rights: Implementing the United Nations "Protect, Respect and Remedy" Framework, Un Doc HR/PUB/11/04 (16 June 2011); Dynamic Coalition on Platform Responsibility (n 260). 325 Global Network Initiative, GNI Principles on Freedom of Expression and Privacy (Web Page, 2019) .


Access, Use and Governance.326 These initiatives generally echo John Perry Barlow’s Declaration of the Independence of Cyberspace, which paints a picture of diversity and opportunity for online users: ‘[w]e are creating a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth’.327 The social disparities that persist in Western societies continue to fuel calls for greater equality in online and offline environments.328

It is important to note that scholars have articulated a number of critiques of formal equality, especially from a feminist perspective.329 One is that the Anglo-American rule of law tradition holds the white straight male archetype as the universal standard for consistent treatment.330 Another is that simple measures of formal equality are not sufficient to articulate more substantive and complex concerns that undergird the experiences of users whose content is removed.331 Whilst I acknowledge the pertinence of critiques of formal equality, I argue that this value remains an appropriate normative aspiration for the initial exploratory investigation in this thesis. As previously explained, I proceed first in formal terms to start to test for potential arbitrariness in processes for moderating content on Instagram with a view to laying the foundations for more substantive work. Thus, examining substantive equality for women in Instagram’s processes of content moderation is outside the scope of this thesis, but remains an important topic for future research.

A final important note is that the black box methodology in this thesis is primarily based on the value of formal equality. This is because a formal conception of equality provides a clear measurement for empirical testing: specifically, whether images in like categories of content have been moderated alike in practice.332 The relatively straightforward nature of this measurement contrasts with other rule of law values, such as reason-giving and participation, which are more difficult to measure without access to a platform’s internal workings. As a result, formal equality is the main value that I am able to measure and provide data for in this study. I use the data pertaining to formal equality to inform my analysis of all other rule of law values,

326 Internet Rights & Principles Coalition (n 157) 7. 327 Barlow (n 205). 328 Witt, Suzor and Huggins (n 27) 567. 329 See, eg, Catharine MacKinnon, Toward A Feminist Theory of the State (Harvard University Press, 1989). 330 See, eg, Denise Schaeffer, ‘Feminism and Liberalism Reconsidered: The Case of Catharine MacKinnon’ (2001) 95(3) The American Political Science Review 699, 700. 331 See, eg, Iris Marion Young, Justice and the Politics of Difference (Princeton University Press, 2000). 332 See generally Witt, Suzor and Huggins (n 27) 567-568.

alongside legal document analysis of content policies and data provided by civil society and platforms themselves.
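The following is a minimal sketch of how this kind of like-for-like comparison could be run once moderation outcomes have been observed. The category labels, records and figures are hypothetical illustrations, not data from the case studies in this thesis, and the black box methodology actually used is set out in Chapter Three.

```python
from collections import Counter

# Purely illustrative sketch of the like-for-like comparison described above:
# whether images in like categories of content are moderated alike in practice.
# The category labels, records and figures are hypothetical; they are not data
# from the case studies reported in this thesis.

observations = [
    # (hypothetical content category, was the image removed?)
    ("swimwear_smaller_body", False),
    ("swimwear_smaller_body", False),
    ("swimwear_smaller_body", True),
    ("swimwear_larger_body", True),
    ("swimwear_larger_body", True),
    ("swimwear_larger_body", False),
]

totals = Counter(category for category, _ in observations)
removals = Counter(category for category, removed in observations if removed)

for category in sorted(totals):
    rate = removals[category] / totals[category]
    print(f"{category}: {removals[category]}/{totals[category]} removed ({rate:.0%})")

# Markedly different removal rates across like categories would be prima facie
# evidence of inconsistent, and therefore potentially arbitrary, moderation
# under the formal conception of equality adopted in this thesis.
```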

B Certainty

The next value in my rule of law framework is certainty, which I conceptualise in formal and substantive terms. First, in the formal sense, I argue that certainty requires rules around content to be open, or publicly available, and clear.333 Rules are arguably open when users are able to identify all of the rules, terms or guidelines that apply to different types of content. The ‘basic intuition’334 of the rule of law – that is, individuals must have knowledge of what the law is in order to follow it – highlights the importance of openness. Promulgation also plays a central role in enabling public scrutiny of processes for setting, maintaining and enforcing rules in a regulatory system.335 Instagram’s governance practices are an immediate cause for concern in terms of formal certainty, as the prescriptive guidelines that moderators follow are confidential and not disclosed to the public.336 Platforms also generally reserve the right to enforce unwritten rules that policy teams might formulate as business and other needs arise.337 The result is that rules around content are often unclear to users.338

Clarity in legal rules is another essential aspect of formal certainty at stake in platform governance. A rule is clear when there is a small ‘penumbra of uncertainty’339 around its meaning. Instagram’s constitutive documents, however, fall significantly short of the formal value of certainty in a number of ways. First, the platform’s Terms of Use contain some dense paragraphs of legalese,340 which discourage users, who largely do not read constitutive documents,341 from engaging with those terms. Second, Instagram’s Terms of Use and Community Guidelines are littered with open-textured or re-definable terms. The Long Version of the platform’s Community Guidelines, for example, expressly prohibits the depiction of

333 Lon Fuller, The Morality of Law (Yale University Press, 2nd ed, 1969) 63. 334 Raz (n 42) 214-215. 335 Ibid 214. 336 See generally Frank Pasquale, ‘Restoring Transparency to Automated Authority’ (2011) 9 Journal on Telecommunications and High Technology Law 235, 244–245; Nicolas P Suzor et al, 'What Do We Mean When We Talk about Transparency? Towards Meaningful Transparency in Commercial Content Moderation' (2019) 13 International Journal of Communication 1526, 1527. 337 See, eg, Nicolas Suzor, ‘Digital Constitutionalism: Using the Rule of Law to Evaluate the Legitimacy of Governance by Platforms’ (n 26) 7. 338 See, eg, Suzor, Lawless: The Secret Rules That Govern our Digital Lives (n 32) 136 ff. 339 Herbert L A Hart, ‘Positivism and the Separation of Law and Morals’ (1958) 71(4) Harvard Law Review 593, 607. 340 See, eg, Instagram, ‘Terms of Use’ (n 73) [Who Is Responsible if Something Happens]. 341 Corinne Hui Yun Tan, ‘Terms of Service on Social Media Sites’ (2014) 19 Media and Arts Law Review 195, 197; Suzor, ‘Digital Constitutionalism: Using the Rule of Law to Evaluate the Legitimacy of Governance by Platforms’ (n 26) 6.


‘nudity’.342 The Guidelines state that nudity includes ‘photos, videos, and some digitally-created content that show sexual intercourse, genitals, and close-ups of fully-nude buttocks’.343 This definition raises a number of questions about the meaning of the vague sub-terms, like ‘some digitally-created content’, which are open to interpretation by users, and whether there are additional types of prohibited content that are not disclosed to the public.

A further cause for concern is the potential for ‘in spirit’344 decision-making. In April 2019, Facebook provided detailed information about its ‘remove, reduce, and inform’345 strategy for managing problematic content across the Facebook family of apps.346 On Instagram, this year-round strategy can involve moderators ‘reducing the spread of posts that are inappropriate but do not go against Instagram’s Community Guidelines, limiting those types of posts from being recommended on our Explore and hashtag pages’.347 Here the platform’s reference to inappropriate posts is seemingly analogous to ‘sexually suggestive’348 and ‘borderline’349 content. ‘In spirit’ content moderation is in tension with the Anglo-American ideal of the rule of law for several reasons, including that there appears to be a want of formal certainty around the rules that actually apply to content. Rule of law discourse makes clear that the subjects of regulation should have knowledge of what the law is and how to act on that knowledge.350 The fact that Instagram, like other platforms, fails to provide a dictionary of all salient terms adds to the foregoing tensions. Finally, the loaded requirement in Instagram’s Community Guidelines that users must ‘follow the law’351 has a wide ‘penumbra of uncertainty’352 because nation-states

342 Instagram, ‘Community Guidelines’ (n 185) [Post Photos and Videos that are Appropriate for a Diverse Audience]. 343 Ibid. 344 A spokesperson for Instagram once said that content can violate policies either specifically or ‘in spirit’: see Nicholas Thompson, ‘The Great Tech Panic: Instagram’s Kevin Systrom Wants to Clean up the Internet’, Wired (online at 14 August 2017) . 345 Guy Rosen and Tessa Lyons, ‘Remove, Reduce, Inform: New Steps to Manage Problematic Content’ Facebook Newsroom (online at 10 April 2019) . 346 Josh Constine, 'Instagram Now Demotes Vaguely ‘Inappropriate’ Content', TechCrunch (online at 11 April 2019 . 347 Rosen and Lyons (n 345). 348 See generally Erika Hallqvist, ‘How Instagram Censors Could Affect the Lives of Everyday Women’, USA Today (online at 8 June 2019) . 349 Mark Zuckerberg, ‘A Blueprint for Content Governance and Enforcement’, Facebook (online at 15 November 2018) . 350 Raz (n 42) 214. 351 Instagram, ‘Community Guidelines’ (n 185) [The Short]. 352 Hart, ‘Positivism and the Separation of Law and Morals’ (n 339) 607.

have different legal systems. Users have to read the ‘How We Will Handle Disputes’353 header in Instagram’s Terms of Use to identify that the laws of California govern the platform and its users, and then research the particular laws of that jurisdiction. Instagram also requires users to comply with the laws of the country in which they reside.354

In complex regulatory contexts like content moderation, it is not unusual for a ‘penumbra of uncertainty’355 to overshadow the meaning of legal rules.356 Instagram, for instance, seemingly attempts to combat uncertainty around its Community Guidelines by, inter alia, providing a short summary of the document, incorporating headings and writing some rules in plain English. However, online platforms appear to purposefully straddle a fine line between providing enough detail to suggest to users, shareholders and other stakeholders that content is moderated in an organised way, and drafting vague terms that fall short of specific obligations to users.357 By drafting content policies in vague terms, platform owners can develop policies on a case-by-case basis358 and respond to public backlash in a range of ways, usually by restoring content,359 apologising for mistakes360 and/or updating policies.361 Though Instagram’s publicly available terms and policies are uncertain, we know that platform policy teams frequently review rules around content and develop fine-grained guidebooks for moderators.362 For example, Facebook’s team of around 60 policy advisers regularly gather for what a senior employee calls a ‘mini legislative session’.363 This further illustrates the significant information asymmetry between platforms and their users.

The rule of law framework in this thesis also incorporates the value of substantive certainty, which pertains to whether a platform adheres to its content policies in practice. This value goes one step further than formal certainty, which is chiefly concerned with the form of legal rules, by focusing on the extent to which a societal actor governs its subjects in the ways that it states it is

353 Instagram, ‘Terms of Use’ (n 73). 354 See, Terms of Use [How We Will Handle Disputes]. 355 Hart, ‘Positivism and the Separation of Law and Morals’ (n 339) 607. 356 John Braithwaite, ‘Rules and Principles: A Theory of Legal Certainty' (2002) 27 Australian Journal of Legal Philosophy 47, 54. 357 Crawford and Gillespie (n 232) 418. 358 Ibid 419. 359 Ibid 411. 360 See, eg, Rodriguez (n 83); Nivi Shrivastava, ‘Instagram Apologizes to Plus-Size Blogger for Removing Bikini Pics’, NDTV (online at 10 June 2016) . 361 See, eg, PA Media, ‘Instagram Tightens Rules on Diet and Cosmetic Surgery Posts, The Guardian (online at 19 September 2019) . 362 See, eg, Kadri and Klonick (n 31) 16. 363 Madrigal (n 254).

going to govern.364 Fuller explains that congruence between legal rules and the application of rules in practice ‘may be destroyed by or impaired in a great variety of ways: mistaken interpretation, inaccessibility of the law, lack of insight into what is required to maintain the integrity of a legal system, bribery, prejudice, indifference, stupidity, and the drive toward personal power’.365 There is a high risk of incongruence in content moderation processes given that platforms are not restrained to act in accordance with their constitutive documents.366 Platforms should aim to ensure that the outcomes of moderation align with their content policies because content is a vehicle for users’ self-expression, among other aspects of everyday life.367 The value of substantive certainty helps to enable evaluation in this thesis of the normative concerns raised by the outputs of Instagram’s content moderation.

In response to ongoing substantive and formal uncertainties, a range of societal actors argue that certainty should at least be taken into account in systems for regulating user-generated content.368 Take, in a more general sense, the UN’s Business for the Rule of Law Framework that encourages companies to align their values, strategies and operations with legal principles, including certainty, for ‘the avoidance of arbitrariness’.369 More specifically, the Global Network Initiative’s Principles of Freedom of Expression and Privacy states that Facebook and its other members should have governance structures that promote certainty and transparency with the public.370 Certainty in platform governance could be promoted by technology companies using less ambiguous language in terms and guidelines, providing examples of the types of content that are prohibited and not prohibited, and making the internal guidelines that moderators follow publicly available – important points that I expand upon in later chapters.371

C Reason-Giving

The rule of law value of reason-giving requires, at a minimum, that persons are aware of the reasons upon which decisions that affect them are made. As Esty observes, ‘[t]he rationality of

364 Herbert L A Hart, The Concept of Law (Oxford University Press, 2nd ed, 1994) 207. 365 Fuller (n 333) 81. 366 See generally, Suzor, Lawless: The Secret Rules That Govern our Digital Lives (n 32) 11. 367 Kaye, Speech Police: The Global Struggle to Govern the Internet (n 33); Kaye, Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression (28 May 2019) (n 257). 368 See generally Ranking Digital Rights (n 26) 5; Anderson et al (n 26) 21-22. 369 United Nations Global Impact, Business for the Rule of Law Framework (n 192) 7. 370 Global Network Initiative (n 325). 371 See especially Chapter Seven, Part II.

a policy choice can best be evaluated when it is written down, explained, and published’.372 Yet, in the context of online platforms, there is often a lack of publicly available information that explains why a particular piece of content has been removed. Instagram, for instance, has previously notified users of content removal with a message along the lines of: ‘We have removed your image because it doesn’t follow our Community Guidelines’.373 A fundamental problem with statements like this is that the specific content policy that has allegedly been violated and the rationale for decision-making are not clear. Instagram is taking some steps to improve its reason-giving practices: in July 2019, for example, the platform introduced new notification processes for its app, one of which informs users if their account is at risk of being deleted.374 This is a positive step forward, but the precise wording of the notification is unclear and the change is seemingly limited to account takedowns.

I contend that platforms should adopt best reason-giving practices by notifying users about a final decision, which party initiated content removal, the policy that a piece of content purportedly breaches and reasons for the alleged violation.375 This suggestion is supported by the Santa Clara Principles, a joint declaration with ‘civil society and academics that outlines minimum transparency and accountability’376 standards for companies engaging in content moderation. The Principles also recommend that platforms provide users with avenues to appeal decisions and obtain notices, even if they are no longer using a particular platform.377 This level of information can play an important role in promoting the formal equality of analogous subject matter, as well as formal and substantive certainty, all of which have the potential to shed light on relevant decision-making processes. More broadly, the practice of giving reasons has the potential to promote accountability, transparency and participation, thereby helping to make moderation processes stable enough to guide the decisions and behaviours of users. Reason-giving can also support methodologies, like in Chapter Three of this thesis, which attempt to empirically examine allegations of systemic bias against certain types of persons or subject matter.
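As a purely illustrative sketch, the structured record below shows the kinds of fields such a notice could carry, reflecting the elements discussed above (the final decision, the initiating party, the policy relied upon, the reasons given and an avenue of appeal). The field names and values are hypothetical assumptions for this sketch; they are not drawn from any platform’s actual systems or from the text of the Santa Clara Principles.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical illustration of the fields a reason-giving notice could carry.
# Field names and values are assumptions for this sketch; they are not drawn
# from any platform's systems or from the text of the Santa Clara Principles.
@dataclass
class TakedownNotice:
    content_id: str        # identifier of the affected post
    decision: str          # eg 'removed' or 'restored on appeal'
    decision_date: date
    initiating_party: str  # eg 'automated system', 'user report', 'government request'
    policy_cited: str      # the specific rule allegedly violated
    reasons: str           # plain-language explanation of the alleged violation
    appeal_url: str        # avenue to contest the decision

notice = TakedownNotice(
    content_id="img-0001",
    decision="removed",
    decision_date=date(2019, 7, 18),
    initiating_party="user report",
    policy_cited="Community Guidelines - nudity",
    reasons="The image appears to depict content prohibited by the nudity rule.",
    appeal_url="https://example.com/appeals/img-0001",
)
print(notice)
```

Notices of this kind, if provided consistently, would also make it easier for external researchers to test claims of systemic bias of the sort discussed above.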

372 Daniel Esty, ‘Good Governance at the Supranational Level: Globalizing Administrative Law’ (2006) 115 Yale Law Journal 1490, 1529; Anna Huggins, Multilateral Environmental Agreements and Compliance: The Benefits of Administrative Procedures (Routledge, 2018) 23-25. 373 Witt, Suzor and Huggins (n 27) 580-581. 374 Instagram, ‘Changes to Our Account Disable Policy’ Info Centre (Press Release, 18 July 2019) . 375 Martin Shapiro, ‘The Giving Reasons Requirement’ (1992) The University of Chicago Legal Forum 179, 182 ff. 376 ACLU Foundation of Northern California et al (n 53). 377 Ibid.


D Transparency

The next rule of law value that is the focus of my evaluative framework is transparency. As Colomer notes, ‘transparency is concerned with the quality of being clear, obvious and understandable without doubt or ambiguity’.378 This state of openness starkly contrasts with a black box which is by definition opaque and secretive. I conceptualise transparency in more specific terms as requiring a governing actor to publicly disclose how its regulatory system operates and, in line with the values of certainty and reason-giving, to disclose the rules that apply to and decisions that are made about the subjects of regulation. According to Zalnieriute et al, these requirements can play an important role in enabling individuals to ‘understand the reasons for decisions affecting them and learn how future decisions might affect them’.379 A governing actor making policy rationales available to the public can also enable individuals to better understand their position in a legal system.

A number of online platforms now provide transparency reports about government takedown requests at least once per calendar year. For example, the 2018 Facebook Transparency Report provides data on standards enforcement, including intellectual property infringements, legal requests, such as government requests for user data, and internet disruptions.380 Facebook, like other platforms, is relatively forthcoming with this information, which can help to ward off public criticism and increased legal obligations. This does not mean, however, that the data about content removal at the behest of governments is complete. Take, for instance, the Electronic Frontier Foundation’s (‘EFF’) 2019 ‘Who Has Your Back’ report, in which Facebook and Instagram received two out of five (2/5) and one out of five (1/5) stars, respectively, for disclosing information about government pressure to censor content on their networks.381 Instagram, in particular, failed to provide sufficient information about government requests based on legal and platform policy grounds, meaningful notice to users of content takedowns and opportunities to appeal decisions.382 Of the 16 companies surveyed in the EFF’s most recent report, Instagram received the third-worst score, after Dailymotion and Vimeo.383

378 Belgium v Commission (C-110/03) [2005] ECR I-2801, [44] (Advocate General Ruiz-Jarabo Colomer). 379 Zalnieriute, Moses and Williams (n 150) 430. 380 See generally Chris Sonderby, ‘Our Continued Commitment to Transparency’, Facebook Newsroom (Press Release, 15 November 2018) . 381 Crocker et al (n 40). 382 Crocker et al (n 40). 383 Ibid.


Platforms generally disclose even less data about private requests for content takedowns and their own decisions to remove content.384 While some platforms are adopting a more proactive approach to transparency reporting, especially Facebook in the wake of the Cambridge Analytica scandal, data about the self-governance practices of technology companies often has ‘blind spots’385 and can be ‘selective and partial’.386 For instance, in the months of April and May 2017, Facebook reported that it removed around 288,000 posts that contained hate speech.387 While on the face of it this seems like a significant disclosure, the platform did not explain how it identified infringing content or which party – Facebook, government authorities or other third parties – initiated content removal, which is necessary for external actors to make sense of the data.388 Transparency deficits like this are due to a number of factors, including the largely voluntary nature of transparency reporting and the absence of binding reporting guidelines.389

An added complication is that some platforms engage in a kind of ‘resistant transparency’.390 Resistance can take many forms, such as technology companies releasing overly complex datasets or providing ‘too-much-information’391 that might be difficult for stakeholders to process. The main risk here, according to Stohl et al, is that ‘unimportant pieces of information will take so much time and effort to sift through that receivers will be distracted from the central information the actor wishes to conceal’.392 Platforms can also employ transparency ideals to achieve their own ends. Suzor explains that many platforms use transparency ‘strategically as theatre to ward off claims for greater accountability’393 and that publishing transparency reports can be an easy way ‘to respond to public demands for more information while avoiding real accountability’.394 These instances of resistant transparency are problematic for numerous

384 Suzor, Lawless: The Secret Rules That Govern our Digital Lives (n 32) 136-139. 385 Ranking Digital Rights, ‘Corporate Accountability Index’ (Research Report, April 2018) 58. 386 Suzor, Lawless: The Secret Rules That Govern our Digital Lives (n 32) 137. 387 Allan, Richard, ‘Hard Questions: Who Should Decide What is Hate Speech in an Online Global Community?’ Facebook Newsroom (Press release, 27 June 2018) . 388 Ranking Digital Rights (n 385) 56; 87. 389 See generally Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (n 29); Witt, Suzor and Huggins (n 27) 569. 390 Mike Ananny and Kate Crawford, 'Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability' (2018) 30(3) New Media & Society 973, 979. 391 Perel and Elkin-Koren (n 112) 194. 392 Cynthia Stohl, Michael Stohl and Paul Leonardi, ‘Managing Opacity: information Visibility and the Paradox of Transparency in the Digital Age’ (2016) 10 International Journal of Communication 123, 133-134. 393 Suzor, Lawless: The Secret Rules That Govern our Digital Lives (n 32) 137. 394 Ibid.

reasons, including that they can place the burden on the receivers of information to make sense of disclosures. The lack of detail in transparency reports can also breed confusion and distrust among already concerned users, as evidenced by the news articles that I referred to in Chapter One.

In this context, I argue that transparency requires content policies and related decision-making processes to be as open as possible. Here it is useful to turn to YouTube, a Google-owned platform for sharing videos, which is an industry leader in transparency reporting. YouTube provides a digital ‘tombstone’395 that indicates whether the removal of a piece of content was initiated by the user who uploaded it, by YouTube itself, or on the basis of a legal demand from a third party (for example, a copyright owner or defamation plaintiff).396 The video-sharing platform also explains why a video is no longer available, and sometimes includes more specific information to explain what policy the video was found to violate, or what form of legal complaint it received about the video.397 Civil society groups make additional recommendations: Ranking Digital Rights suggests that transparency reports should include design, management and governance choices and explain how processes for moderating content can potentially impact human rights,398 and the EFF encourages platforms to publish internal guidelines for law enforcement requests.399 The Santa Clara Principles emphasise that publishing fine-grained data about ‘posts removed and accounts permanently or temporarily suspended’400 is crucial for addressing transparency deficits. These recommendations usefully set out minimum standards for transparency reporting that can guide platforms in their efforts to improve their transparency practices and help to make moderation processes more accountable.

395 Alex Feerst, ‘Implementing Transparency about Content Moderation’, Techdirt (online at 1 February 2018) . 396 Witt, Suzor and Huggins (n 27) 594. 397 See generally Electronic Frontier Foundation, Facebook, Instagram Lack Transparency on Government- Ordered Content Removal Amid Unprecedented Demands to Censor User Speech, EFF's Annual Who Has Your Back Report Shows (Web Page, 31 May 2018) . 398 Ranking Digital Rights (n 385) [Executive Summary]. 399 Crocker et al (n 40). 400 ACLU Foundation of Northern California et al, The Santa Clara Principles on Transparency and Accountability in Content Moderation (n 53).


E Participation

Avenues for public participation in platform governance enable users to engage with the information that a platform has made transparent.401 Participation can encompass a number of activities and procedures through which the public can express their views about decisions that affect their interests, engage with the decision-makers that set, maintain and enforce rules around content, and increase their general awareness of content moderation processes. However, social media users have little to no opportunity to influence and engage with platforms, largely because they are locked out of platforms’ internal decision-making processes. Take, for instance, the Instagram platform: it does not run public forums in which users can express their views on policies that affect them, users are not able to appeal individual content takedowns and it is difficult for stakeholders to contact the platform directly.402 This lack of participation is concerning because it diminishes users’ general awareness of the human and automated systems that enforce the rules of online participation. Across most online platforms, users are disenfranchised subjects of regulation who lack specific information about the ways that their communication and interactions can be constrained, with little to no means of challenging the bounds of appropriate and inappropriate content.

F Accountability

The rule of law value of accountability underlines the importance of moving beyond the mere provision of information. As Black notes, this value involves a societal actor giving account to another who ‘has the power or authority to impose consequences as a result’.403 In contrast to broad understandings that treat accountability as a ‘conceptual umbrella’404 under which values like equality, certainty and reason-giving sit, I conceptualise accountability as a discrete value alongside other normative aspirations.405 This is because while the foregoing values can promote accountability, they do not in themselves provide avenues for those who exercise power over content to be held to account.406 I follow Kingsbury’s approach to accountability, which suggests that accountability can be analysed in terms of four components:

401 Esty (n 372) 1530. 402 Sarah Myers West, ‘Facebook’s Guide to being a Lady’, onlinecensorship.org (Web Page, 18 November 2015) . 403 Julia Black, ‘Constructing and Contesting Legitimacy and Accountability in Polycentric Regulatory Regimes’ (2008) 2 Regulation and Governance 136, 150. 404 Mark Bovens, ‘Analysing and Assessing Accountability: A Conceptual Framework’ (2007) 13 European Law Journal 447, 449. 405 Huggins (n 52) 17-19. 406 See, eg, Zalnieriute, Moses and Williams (n 150) 430.


‘accountability of whom; to whom; for what; and by what means’.407 These components help to ensure that those who exercise governing power are accountable to affected parties.

In the context of content moderation, I argue that the first three components of Kingsbury’s approach can enable vertical accountability between platforms and their users. Accountability requires the ‘accounter’, or decision-maker, and the ‘account holder’, the person holding the accounter to account, to be identifiable.408 It also requires clarity around the steps in decision-making processes and the policies to which an accounter is being held to account.409 These general requirements stand in stark contrast to black box processes in which the identity of a decision-maker is not revealed and the processes by which decisions are reached occur behind closed doors.410 Knowledge of how much content is removed, which party removes content, for what and by what means can enable users to make demands to prevent, redress or challenge the results of apparent action or inaction by a platform. However, the contractual relationship between users and platforms, which assumes a level playing field between parties,411 is a significant barrier to users being able to hold platforms to account for the ways that they set, maintain and enforce rules around content.

Given the disproportionate bargaining power that platforms exercise over users, I contend that the rule of law value of accountability should also encompass a second vertical dimension between platforms and other societal actors. This second dimension relates to the role of regulators, non-governmental organisations and other external stakeholders in holding Instagram to account for its governance practices.412 This is crucial in the context of Facebook and other digital media companies that ‘act as legislature, executive, judiciary, and press – but without any separation of powers to establish checks and balances’.413 One example of potential external accountability is provided by international human rights law; however, the

407 Benedict Kingsbury, ‘Global Environmental Governance as Administration: Implications for International Law’ in Daniel Bodansky, Jutta Brunnée, and Ellen Hey (eds), The Oxford Handbook of International Environmental Law (Oxford University Press, 2008) 66; Colin Scott, ‘Accountability in the Regulatory State’ (2000) 27(1) Journal of Law and Society 38, 41-42. 408 Huggins (n 52) 17-19. 409 Ibid. 410 Esty (n 372) 1528-9. 411 See generally Crawford and Gillespie (n 232) 412; Tushnet (n 232) 987. 412 Witt, Suzor and Huggins (n 27) 570. 413 Kadri and Klonick (n 31) 1. In a legal system that aspires to comply with the separation of powers, a distinction is made between the legislative power to make law, the executive power to administer the law and the judicial power to interpret the law: see Nickolas James and Rachel Field, The New Lawyer (Wiley, 1st ed, 2013) 119-120.

primary responsibility to respect, protect and fulfil international human rights falls on states,414 not platform owners.415 Accordingly, the prospects of Instagram being held to account under this framework if its decisions negatively affect users’ self-expression and autonomy are limited. The potential for other forms of external accountability, including hard regulation and independent audits of moderation practices and outcomes, is currently being considered in a number of jurisdictions around the globe.416 I will consider the efficacy of different accountability measures in Chapter Seven of this thesis.

In sum, I argue that rule of law ends will be promoted in platform governance if processes for moderating content adhere to the basic constitutional safeguards of formal equality, certainty, reason-giving, transparency, participation and accountability. Together, these values provide a well-established language to start to name and work through what appears to be at stake for women and other users in the potentially arbitrary exercise of power over content. My theoretical framework is transferable to other controversies on social media platforms where concerns about potential arbitrariness in content moderation processes arise. As I will explain in the following Part, there are ongoing concerns about the extent to which processes for moderating content align with my selected rule of law values, particularly in the context of Instagram.

IV CONTROVERSIES AROUND THE MODERATION OF IMAGES DEPICTING WOMEN’S BODIES ON INSTAGRAM

Women’s bodies have long been a site of intense contest in Western cultures.417 Societal actors perennially debate issues around female forms, such as standards of female grooming,418 how women and girls should portray themselves in public spaces and the purposes that women’s bodies should ultimately serve.419 While male bodies are far from unrepressed,420 Western

414 See, eg, Daniel Joyce, ‘Internet Freedom and Human Rights’ (2015) 26(2) European Journal of International Law 493. 415 Gregorio (n 186) 70. 416 See generally Flew, Martin and Suzor (n 68) 44-46. 417 Kathleen Lennon, ‘Feminist Perspectives on the Body’, Stanford Encyclopaedia of Philosophy (Web Page, 2 August 2019) [2.2 Living the Female Body] . 418 See generally Merran Toerien and Sue Wilkinson, ‘Exploring the Depilation Norm: A Qualitative Questionnaire Study of Women's Body Hair Removal’ (2004) 1 Qualitative Research in Psychology 69; Roberts, 'Aggregating the Unseen' (n 128). 419 See, eg, Arthurs and Grimshaw (n 91) 2. 420 According to Arthurs and Grimshaw, ‘… some accounts of the constraints to which female bodies are subjected may implicitly contrast a 'repressed' female body with an 'unrepressed' male body in ways that fail to recognize adequately the constraints to which men's bodies, too, are subject’: Ibid 11.

governance structures continue to disproportionately police female forms,421 often in line with a ‘narrow ideal of social acceptability’.422 Quinlan explains:

In many regards, women’s experiences of the law and criminal justice is a history of the use of law and legal regulation to control and discipline women. From pre-Enlightenment witch trials right through to the present day, it is the policing of women’s bodies that brought women, and in many cases continues to bring them, into contact with the law and criminal justice systems. Women’s bodies and the functionings of their bodies have featured, and they continue to feature prominently, in systems of justice and social control everywhere.423

At the same time, Western societies heavily objectify and commodify female forms across social domains, usually for the benefit of the male gaze.424 A result is that many women navigate complex and sometimes contradictory societal expectations of their bodies.425

Hence it should not be surprising that the moderation of images that depict women’s bodies on Instagram is highly controversial. I focus on Instagram given the range of allegations about the ways that images of female forms are moderated on the platform in practice. On the one hand, news publications sometimes claim that Instagram is arbitrarily ‘removing’426 – also described as ‘banning,’427 ‘censoring’,428 and ‘deleting’429 – images of plus-size women, stretchmarks,

421 Meda Chesney-Lind, ‘Policing Women’s Bodies: Law, Crime, Sexuality, and Reproduction’ (2017) 27(1) Women & Criminal Justice 1 . 422 See generally Toerien and Wilkinson (n 418) 69. 423 See generally Christina Quinlan, 'Policing Women’s Bodies in an Illiberal Society: The Case of Ireland' (2017) 27(1) Women & Criminal Justice 51, 52. 424 See, eg, Baer (n 125) 17. 425 Matich, Ashman and Parsons (n 126) 338. It is useful to also note that Dimen describes patriarchy as ‘a system of domination’: see Muriel Dimen, ‘Power, Sexuality, and Intimacy’ in Alison Jaggar and Susan Bordo (eds), Gender/Body/Knowledge: Feminist Reconstructions of Being and Knowing (Rutgers University Press, 1992) 38. 426 Alys Gagnon, 'Three Mothers Share Nude Photos on Instagram. Only one is Removed: THREE Mums Posted a Naked Photo on Instagram. Only One of Them Was Removed. Clearly There’s a Double Standard Here', News.com.au (online at 7 December 2016) . 427 See, eg, Rebecca Reid, 'Plus-size model says she’s been shadow banned by Instagram because of her body', Metro (online at 11 June 2018) . 428 See, eg, Paunescu, Delia, ‘Inside Instagram’s Nudity Ban: Artists Want Instagram’s Community Guidelines to Change. Will Facebook Finally “Free the Nipple”’, recode (online at 27 October 2019) ; Sara Radin, '4 Female Artists on Fighting Censorship & Reclaiming Nudes', Highsnobiety (online at 8 March 2018) . 429 See, eg, Jesselyn Cook, ‘Instagram’s Shadow Ban On Vaguely ‘Inappropriate’ Content Is Plainly Sexist’, The Huffington Post (online at 30 April 2019) < https://www.huffingtonpost.com.au/entry/instagram- shadow-ban-sexist_n_5cc72935e4b0537911491a4f?ri18n=true>; Paige Bailey, ‘Instagram: Censorship

cellulite and a plethora of other subject matter around female bodies.430 Reports of the platform censoring images of plus-size women have partially fuelled suggestions that portrayals of Western ideals of thinness, including ‘thinspiration’431 (visual and textual images that inspire weight loss), are less likely to be removed from the platform.432 Others have accused Instagram of ‘blatant fat-phobia’ and ‘fat-shaming’ women in ways that could potentially reinforce dominant body standards.433 Allegations of purported arbitrariness and bias are concerning as they suggest that the platform is privileging the expression of some users – either partially or exclusively on the basis of gender, race, celebrity status, sexuality and other socio-cultural factors – while silencing others.

On the other hand, in the midst of allegations of bias, some news publications show that thin- idealised images of women are also removed from Instagram.434 Some users claim that body positive hashtags, such as #effyourbeautystandards, and accounts, like @bodyposipanda435 and @i_weigh436 as shown in Figure 3, are acting as a counterweight to the portrayal of thin ideals on Instagram and in mainstream media.437 BoPo content generally encourages users to accept

of the Female Form’, Medium (online at 8 June 2019) . 430 Chastain says, ‘Fat people on Instagram have been noticing a disturbing double standard when it comes to supposed “violations” of Instagram's guidelines, especially in pictures that show some skin’: Ragen Chastain, ‘Fat is Not a Violation’, ravishly (online at 9 October 2018) ; Emily Shugerman, 'This Muslim Woman Says Instagram Took Down Her Fully Clothed Selfie’, Revelist (online at 8 March 2017)

their bodies as they are today and centres on ‘an overarching love and respect for the body’.438 Other users claim that Instagram can be an emancipatory network because it enables users to customise their experience on the platform; for example, by following BoPo hashtags and users.439 Tess Holliday, a prominent body positive activist and founder of the Instagram page @effyourbeautystandards, contends:440

Prior to Instagram, you just saw whatever online. Now you can follow people that are into body positivity, feminism, radical body love, artists. People that inspire me…It’s really important to surround yourself with people that uplift you and support you, and so you really have a community of that.441

The ability of users to control their experience of the platform is significant given that counterbalanced content, including average and plus-size media, can improve individual body satisfaction.442 The Royal Society for Public Health, which surveyed 14-24 year-olds in the United Kingdom (‘UK’), partly echoes this theme of empowerment with the finding that Instagram makes self-expression and self-identity better for some users.443 These competing claims continue to breed confusion about how images of women are moderated on Instagram behind closed doors.

438 Cohen et al (n 21) 48. 439 The BoPo movement on Instagram has been described in terms of ‘radical body love’– ‘Embraced by celebrities like Tess Holliday and Lena Dunham, the body-positive movement is carving out a digital space where everyday people find affirmation for their appearance’: see Salam (n 20). 440 News publications generally describe Holliday as one of the leaders of the BoPo movement on Instagram: see, eg, Bryony Gordon, ‘Eff Your Beauty Standards’: Meet the Size 26, Tattooed Supermodel Who is Changing the Fashion Industry’, The Telegraph (online at 14 May 2016) . 441 Salam (n 20). 442 Russell Clayton, Jessica Ridgway and Joshua Hendrickse, ‘Is Plus Size Equal? The Positive Impact of Average and Plus-sized Media Fashion Models on Women’s Cognitive Resource Allocation, Social Comparisons, and Body Satisfaction’ (2017) 84 (3) Communication Monographs 406; Cohen et al (n 21) 48; Dave Heller, ‘FSU Researchers Find Plus-Size Fashion Models Help Improve Women’s Psychological Health’, Florida State University (online at 7 June 2017) . 443 Royal Society for Public Health, ‘#StatusOfMind: Social Media and Young People’s Mental Health and Wellbeing’ (Report, May 2017) 23 .


Instagram’s explicit prohibition of ‘some depictions of female nipples’444 is another source of ongoing controversy.445 The platform’s stance is an immediate cause for concern from a rule of law perspective, especially in terms of the values of equality, as the policy applies to female and not male nipples, and certainty, because of the ambiguity around the meaning of ‘some’ female nipples. The prohibition is strongly contested from a feminist perspective, a prominent illustration of which is the global Free The Nipple (#freethenipple) campaign that was founded by artist and activist Lina Esco in 2013.446 The campaign aims to, inter alia, ‘interrupt patriarchal framings of the breasts as inherently sexual and the associated practices of concealment and censorship’447 as part of a broader mission to ‘empower women across the

444 Instagram, ‘Community Guidelines’ (n 185) [Post Photos and Videos that are Appropriate for a Diverse Audience]. 445 See, eg, Electronic Frontier Foundation and Visualizing Impact (n 17) [The Human Body]; Paunescu (n 428). 446 Matich, Ashman and Parsons (n 126) 338. 447 Ibid.

world’ more broadly.448 User-activists, scholars and many other individuals along the gender spectrum also express concern about the message that double standards, like Instagram’s nipple policy, send to the public.449 MacDonald asserts:

Do you remember the first time Instagram told you that the female body was inappropriate for viewing? That you weren’t appropriate? That you were pornography? That displaying your nipples (or her nipples) was somehow disturbing, or so likely to create a disturbance that, like yelling “fire” in a crowded theater, there had to be rules against it?450

The lack of publicly available information that explains why certain content is removed from the Instagram platform exacerbates concerns among users about alleged double standards, bias and privilege in moderation processes.

Instagram’s community-oriented rhetoric adds to the groundswell of confusion among users because it contrasts with allegations of arbitrariness. This rhetoric is partly founded on the platform’s mission to ‘bring you [users] closer to the people and things you love’451 and its constitutive documents, which suggest that processes for moderating content are developed and deployed to serve users. For example, Instagram’s Terms of Use explicitly states that the platform is committed to achieving a number of community-oriented ends,452 including ‘fostering a positive, inclusive, and safe environment’453 and ‘offering personalised opportunities to create, connect, communicate, discover, and share’.454 The platform’s Community Guidelines further promotes these ends: ‘We want Instagram to continue to be an authentic and safe place for inspiration and expression’.455 The current Head of Instagram, Adam Mosseri, frequently promulgates the company’s community-oriented rhetoric, as did Instagram’s former CEO, Kevin Systrom. In Figure 4, Systrom states that Instagram is helping

448 Matich, Ashman and Parsons (n 126) 338-339. ‘Free The Nipple’ (Web Page, 2019) . 449 Grady (n 16). 450 Tom Macdonald, ‘A Not-So-Modest Proposal: Instagram, Free the Nipple for the Inauguration, Vogue (online at 20 January 2017) . 451 Instagram, ‘Terms of Use’ (n 73) [The Instagram Service]. 452 See generally Matt Buchanan, ‘Instagram and the Impulse to Capture Every Moment’, The New Yorker (online at 20 June 2013) ; @InstagramEnglish (Instagram) (Facebook, 21 March 2016) . 453 Instagram, ‘Terms of Use’ (n 73) [The Instagram Service]. 454 Ibid. 455 Instagram, ‘Community Guidelines’ (n 185) [The Short].

users to pursue the ‘passions they care most about’,456 and, in Figure 5, he promotes the importance of strengthening ‘relationships through shared experiences’ on International Women’s Day.457 In Figure 6, a tribute to Instagram’s co-founders, Mosseri describes the platform as a ‘product people love that brings joy and connection to so many lives’.458 Instagram could of course be moderating content to actually achieve these community-oriented ends. It might also be overly cynical for those outside the platform to assume that this rhetoric is only a smokescreen for profiteering. But, amid the chorus of claims about content moderation on Instagram, it is not surprising that users lose sight of the proverbial forest for the trees.

456 @kevin (Kevin Systrom) (Instagram, 21 December 2017) . The caption of this post read: This week, everyone at Instagram came together for our semi-annual All Hands meeting to celebrate everything we’ve accomplished in 2017 and prepare for 2018’. 457 @kevin (Kevin Systrom) (Instagram, 9 March 2017) ; @kevin (Kevin Systrom) (Instagram, 8 March 2017) . 458 @mosseri (Adam Mosseri) (Instagram, 2 October 2018) .

[Figures 4, 5 and 6, described in the surrounding text, appear at this point in the original: Instagram posts by Kevin Systrom and Adam Mosseri.]

Competing narratives around empowerment and censorship, reports of selective policy enforcement, and a lack of reason-giving for content moderation highlight the importance of empirically examining whether like images of women’s bodies are moderated alike on Instagram.459 This is also important given that the platform’s logic of secrecy continues to breed confusion that can, in turn, fuel vernacular explanations or ‘folk theories’460 for content removal. Given that there will always be controversies over particular instances of moderation, empirical analyses are useful to demystify how some user-generated content is moderated in the context of Instagram’s overall moderation system. The evaluative framework and innovative black box methodology in this thesis help to identify potential arbitrariness where it exists and allay the concerns of stakeholders where it does not. In the following and final Part of this Chapter, I outline a feminist lens that I will use to enrich my rule of law analysis. I argue that a feminist viewpoint serves to underline the particular issues that some users face when posting content that depicts women’s bodies.

V ENRICHING THE EVALUATIVE FRAMEWORK OF THE RULE OF LAW: A FEMINIST PERSPECTIVE

The dominant ‘technocultures’461 of online platforms, which encompass developer cultures and cultures of use,462 can be problematic for some users. In these cultures, women and people of colour face a disproportionately high risk of ‘e-bile’,463 such as harassment, discrimination, abuse, misogyny and violent threats. As Banet-Weiser and Miltner explain, ‘We are in a new era of the gender wars, an era that is marked by alarming amounts of vitriol and violence directed toward women in online spaces. These forms of violence are not only about gender, but are also often racist, with women of color as particular targets’.464 It appears, however, that platforms are not doing enough to address the ‘contemporary proliferation of gendered cyber- hate’465 on their networks. As previously explained, online platforms are ultimately commercial actors that develop ‘lean and economical governance structures’466 to ‘facilitate profitable

459 Witt, Suzor and Huggins (n 27) 581. 460 Sarah Myers West, ‘Censored, Suspended, Shadowbanned: User Interpretations of Content Moderation on Social Media Platforms’ (2018) 20(8) New Media & Society 201. 461 Duguay, Burgess and Suzor (n 94) 4. 462 Ibid 3. 463 Emma Jane, 'Online Misogyny and Feminist Digilantism' (2016) 30(3) Continuum: Journal of Media & Cultural Studies 284. 464 Sarah Banet-Weiser and Kate Miltner, '#MasculinitySoFragile: Culture, Structure, and Networked Misogyny' (2016) 16(1) Feminist Media Studies 171. 465 Jane (n 463) 285. 466 Duguay, Burgess and Suzor (n 94) 3.

forms of sociality’.467 A major problem here is that if rules around content are inconsistently applied in practice, in line with overriding commercial prerogatives and normative viewpoints, moderation processes are likely to have particularly deleterious impacts on some female and minority voices.

The potential for gendered impacts is not surprising considering ‘the web’s masculinist, military-scientific social origins’.468 The internet was originally a ‘niche tool’469 funded by the US Department of Defense and developed by a number of US-based engineers and academics.470 Largely white, straight male users from scientific and military backgrounds laid the sociocultural foundations of the internet for use by a handful of people.471 While today’s internet has evolved into a global network for commercial and mass use, or a network of interconnected networks, the sociocultural underpinnings of the web have remained largely the same.472 According to Chang, assumptions about what it takes to do well in technology fields took root around an antisocial male stereotype from the early 1960s, propagated by the media and in classrooms, job applications and aptitude tests.473 The industry has self-selected for ‘nerd’ and ‘risk-taking bro’474 typecasts, which have helped shape Silicon Valley into the ‘nerd-bro dream and the codification of an ultimate boys’ club’475 that we know today. This brief history underlines the importance of exploring feminist approaches that can identify and evaluate some women’s experiences in the online environment, which often reflect those in more traditional, offline contexts.

Like the ideal of the rule of law, which scholars conceptualise in different ways, there are many feminisms.476 Feminist theory is informed by developments in cultures and geographic regions, defined by individual experiences of the challenges of everyday life and often conceptualised

467 Duguay, Burgess and Suzor (n 94). 468 Banet-Weiser and Miltner (n 464) 173. 469 Kal Raustiala (n 205) 491. 470 Raustiala explains that the internet ‘was a tiny system with only a few nodes, then a U.S. Department of Defense-funded project known as the ARPANET (Advanced Research Projects Agency Network). All the nodes were located in the continental United States. The first use of the Internet as a communications platform was a message sent from UCLA to Stanford on October 29, 1969. From the beginning, the Internet was a public-private partnership, the U.S. government had an outsized role, and California was the source of many key players. All three of these characteristics remain true today, though on a vastly different scale and in a world with not just a handful of Internet users but instead around 3.2 billion’: Raustiala (n 205) 492-493. 471 Banet-Weiser and Miltner (n 464) 173. 472 Ibid. 473 Emily Chang, Brotopia: Breaking Up the Boys' Club of Silicon Valley (Portfolio, 2018) 23-24. 474 Ibid 39. 475 Ibid 40. 476 See, eg, Marwick, 'None of This Is New (Media): Feminisms in the Social Media Age' (n 119) 310.

in terms of the wave model.477 In this thesis, I conceptualise feminism in broad and plural terms as feminisms. Today, according to Horn, ‘the more frequently used term is ‘feminism[s]’, recognising the fact that, while there are many shared goals, there are also divergent concerns, aims, strategies and practices within the movement’.478 Marwick adds that the reality of feminism is ‘an enormously diverse group of people with varying opinions, what might more accurately be called feminisms’.479 I broadly conceptualise feminisms as forms of activism for the equality of women, men and transgender and non-binary people. An important goal of feminisms, in my view, is to ‘challenge and change the male standard’480 that prevails in Western societies. For the purposes of this thesis, it is not necessary to delve into different feminisms, including longstanding debates between those who advocate for radical as opposed to liberal conceptions of feminism.481 As Cain explains, ‘Whatever the categorization, the boundaries are never as fixed as the labels make them seem. And some feminists slip in and out of the various categories’.482 My primary purpose is therefore to elucidate the considerable overlaps between many feminisms and the telos of the rule of law in order to name and work through user concerns about the moderation of images of female forms on Instagram with a more nuanced language.

An important overlap between the foregoing explicitly teleological approach to the rule of law and a feminist approach is their shared emphasis on individual autonomy. At the highest level of abstraction, a person is arguably autonomous when they live a life of their choosing in offline and online environments.483 A broad feminist approach builds on this basic idea and goes one step further than rule of law discourse by underlining that women are entitled to a number of autonomy-enabling conditions, including freedom from paternalistic laws and rights to non-discrimination, options and opportunity.484 First, women’s exercise of personal autonomy should not be restricted by patriarchal paternalistic laws.485 Such laws involve largely male-dominated heteronormative governing bodies, of any kind, restricting women’s freedom

477 See Horn (n 126) 327. 478 Ibid. 479 Marwick, 'None of This Is New (Media): Feminisms in the Social Media Age' (n 119) 310. 480 See, eg, Cain (n 120) 806. 481 Ibid. 482 See, eg, Cain (n 120) 841. 483 Amy Baehr, ‘Liberal Feminism’, Stanford Encyclopaedia of Philosophy (Web Page, 30 September 2013) [1.1.1 Procedural Accounts of Personal Autonomy] ; Waldron, ‘Hart and the Principles of Legality’ (n 130) 77. 484 Leslie Francis and Patricia Smith, ‘Feminist Philosophy of Law’, Stanford Encyclopaedia of Philosophy (Web Page, 24 October 2017) [1.1. Personal Autonomy] . 485 Baehr (n 483) [Procedural Accounts of Personal Autonomy].

purportedly in the community’s best interests.486 Feminists across waves and strands commonly oppose patriarchal paternalistic laws on grounds that women are rational agents who should possess knowledge of what the law is and the freedom to choose a particular course of action without external interference.487

Second, a broad feminist approach underscores that women and other individuals should have access to options free from gendered norms.488 This reinforces arguments in Part III of this Chapter that rules around content should be certain, subject to public scrutiny and review by affected parties.489 Freedom from discrimination is another important autonomy-enabling condition that is particularly relevant to arguments in Part III of this Chapter that any attempt to control or modify the behaviour of users should be formally equal.490 By integrating rule of law theory with a feminist lens, including these specific autonomy-enabling conditions, I am not detracting from my rule of law framework or prioritising one critical approach over another. Rather, the feminist lens in this thesis serves as an additional ‘flashlight’491 to help identify and evaluate potential autonomy deficits for women in processes for moderating content. A feminist analysis can also help stakeholders to better understand the complicated story of content regulation on Instagram.

It should be borne in mind that scholars have articulated a number of critiques of the liberal Western ideal of the rule of law, especially from a feminist perspective.492 One is that the Anglo-American rule of law tradition holds the white straight male archetype as the universal standard for consistent treatment.493 Another is that the rule of law and its contested values can reinforce and entrench the status quo,494 including paternalist patriarchal assumptions about the nature of women and their place in the world,495 and represent certain assumptions ‘as

486 See generally Gerald Dworkin, ‘Paternalism’, Stanford Encyclopaedia of Philosophy (Web Page, 2017) < https://plato.stanford.edu/entries/paternalism/>. 487 See generally Francis and Smith (n 484). 488 See generally Baehr (n 483). 489 Witt, Suzor and Huggins (n 27) 594. 490 See part III(A) of this Chapter. 491 Cynthia Enloe states: ‘I find it helpful to judge the usefulness of any concept in the same way that I judge a flashlight. Someone hands you a flashlight and you say, “I wonder if it is a good flashlight.” So you go into a darkened room, you turn it on, and you judge if corners of the room previously in the shadows now become easier to see than before. If you find that this particular flashlight distorts the shapes in the room or if the beam is too weak and you still trip over objects on the floor, then you return that flashlight with a polite “thank you”’: see Cynthia Enloe, Globalization and Militarism: Feminists Make the Link (Rowman & Littlefield Publishers, 2007) 53; Sara Brown, Gender and the Genocide in Rwanda: Women as Rescuers and Perpetrators (Routledge, 2018) 9. 492 See, eg, Catharine MacKinnon, Toward a Feminist Theory of the State (Harvard University Press, 1989). 493 Ibid; Witt, Suzor and Huggins (n 27) 568. 494 See, eg, Baehr (n 483) [1.2 Political Autonomy]. 495 See generally Francis and Smith (n 484).

universal, natural and inevitable’.496 Scholars have also questioned the relevance of liberal theory to feminist concerns, arguing that liberalism does not adequately address entrenched inequalities that stem from the influence of patriarchal and masculinist norms in Western legal systems.497 It is arguable that contemporary feminisms therefore require a departure from formal conceptions of rule of law values in favour of more substantive conceptions.498 These critiques raise the important question of whether the rule of law provides an appropriate framework for evaluating the moderation of images that depict women’s bodies.

While the foregoing critiques of the liberal ideal of the rule of law are well-founded, I argue that my selected values remain appropriate normative aspirations for the initial exploratory study in this thesis. As explained throughout this Chapter, users have limited understandings of how rules around content are set, maintained and enforced behind closed doors. Users also lack empirical evidence to substantiate allegations of bias, discrimination and privilege in the internal workings of platform governance, as outlined in Part IV of this Chapter. Hence it is necessary to start the work of empirically examining and evaluating processes for moderating user-generated content in practice: in particular, to help to identify what women and other users stand to lose in the potentially arbitrary exercise of power by online platforms.499 The largely formal rule of law and feminist lenses in this thesis lay the foundations for more substantive future work.

VI CONCLUSION

This Chapter has advanced a rule of law framework, comprising the values of formal equality, certainty, reason-giving, transparency, participation and accountability, for evaluating the extent to which the moderation of images depicting women’s bodies aligns with these values in Chapters Four and Five.500 This framework is based on Krygier’s explicitly teleological approach, which underlines that the rule of law has purchase across realms, contexts and actors.501 I support Krygier’s view that if we are concerned with addressing the risk of arbitrariness in the exercise of power, it should not matter whether the source of that power is public or private.502 I argued that my selected rule of law values are normative aspirations that

496 Francis and Smith (n 484) [1.1. The Rule of Law]. 497 Ibid. 498 See generally Schaeffer (n 330) 700. 499 Ibid 699. 500 See, eg, Farrall (n 52) 40-41. 501 See (n 43) and Part II of Chapter Two. 502 Krygier, ‘The Rule of Law: Pasts, Presents, and Two Possible Futures’ (n 45) 221; Witt, Suzor and Huggins (n 27) 564.

can serve the telos of the rule of law and provide a well-established, albeit contested, language to name and work through governance tensions between platforms and their users.

Given that this thesis empirically examines images depicting female forms on Instagram, I will also undertake a feminist analysis of content moderation in practice. My feminist approach is broad given the plethora of feminisms in Western democratic discourse503 and, by facilitating more fine-grained evaluation of the marginalising effects that moderation processes can have on women and other users, it has the potential to enrich my rule of law analysis. A feminist lens can also lead to more nuanced conclusions about the desirability of the ways that female forms appear to be moderated on Instagram in practice. By using a rule of law framework in conjunction with a feminist perspective, I will be able to identify specific areas for platform policy reform, and potentially law reform, in later chapters.

In the following chapter, I develop and start to apply my black box methodology, which fuses content analysis with innovative digital methods. This methodology examines how inputs into Instagram’s processes for moderating content (ie, images) produce certain outputs (ie, whether an image is removed or not removed). It thereby facilitates empirical evaluation of the ways that images depicting female forms are moderated on Instagram.

503 Horn (n 126) 327.


CHAPTER THREE: A BLACK BOX METHODOLOGY

This Chapter outlines a black box methodology for empirically evaluating the extent to which the moderation of images depicting women’s bodies on Instagram aligns with my rule of law framework and whether the platform’s regulatory system is desirable from a feminist perspective. The principal component of this methodology is an input/output method based on black box analytics that uses innovative digital methods to investigate how discrete inputs into a system produce certain outputs.504 Examination of the input/output relationship can shed varying degrees of light on the ‘throughput’505 governance practices (ie, the black box) between input and output.506 Overall, I argue that this method, in concert with content analysis in Chapters Four and Five, constitutes a useful methodology for evaluating content moderation processes when only parts of a platform’s regulatory system are visible from the outside.507 This methodology contributes to the emerging project of digital constitutionalism by developing the application of digital methods for empirical legal analysis of platform governance.
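As a rough illustration of this input/output logic, the sketch below (in Python) shows how observed moderation outputs for a sample of images might be tallied against a researcher’s own coding of whether each image appears to violate the platform’s published policies. The data structures, names and values are hypothetical placeholders, not the datasets, coding scheme or results reported in Chapters Four and Five, and the exact definitions of false positives and false negatives used in those chapters may differ.

```python
from dataclasses import dataclass
from typing import Dict, List


# Hypothetical sketch of the input/output logic: each record pairs an input image,
# coded by the researcher against the platform's published policies, with the
# observed output of the moderation black box. All names and values are placeholders.
@dataclass
class Observation:
    image_id: str
    appears_to_violate_policy: bool   # researcher's coding of the input
    was_removed: bool                 # observed output after a set observation period


def summarise(observations: List[Observation]) -> Dict[str, float]:
    total = len(observations)
    # Potential false positives: removed despite not appearing to violate any policy.
    false_positives = sum(
        1 for o in observations if o.was_removed and not o.appears_to_violate_policy
    )
    # Potential false negatives: not removed despite appearing to violate a policy.
    false_negatives = sum(
        1 for o in observations if not o.was_removed and o.appears_to_violate_policy
    )
    return {
        "total_images": total,
        "potential_false_positives": false_positives,
        "potential_false_negatives": false_negatives,
        "share_false_positives": false_positives / total if total else 0.0,
        "share_false_negatives": false_negatives / total if total else 0.0,
    }


# Toy example with placeholder data.
sample = [
    Observation("img_001", appears_to_violate_policy=False, was_removed=True),
    Observation("img_002", appears_to_violate_policy=True, was_removed=False),
    Observation("img_003", appears_to_violate_policy=False, was_removed=False),
]
print(summarise(sample))
```

The point of the sketch is simply that, even without access to the throughput, pairing coded inputs with observed outputs yields measures of potential inconsistency that can then be evaluated against the rule of law values outlined in Chapter Two.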

The first of five parts in this Chapter recaps and expands upon what is publicly known about processes for moderating user-generated content to date. As explained in Chapter Two, humans and automated systems moderate content across a variety of worksites, both internal and external to platforms.508 As I will explain in this Chapter, most of the work that professional moderators undertake involves responding to content that users, as a ‘volunteer corps of regulators’,509 report as inappropriate through in-built reporting tools. While reports do not necessarily result in content removal, reporting processes can be problematic because they often lack mechanisms for verifying why users report content, some of whom might attempt to game moderation processes to achieve their own ends.510 Another concern is the potential for users to interpret the narrow ‘vocabulary of complaint’511 that platforms embed in reporting tools in different ways, which could result in unpredictable moderation outcomes.512 I conclude

504 Diakopoulos, ‘Algorithmic Accountability’ (n 123) 398, 403-404. 505 Vivien Schmidt, 'Democracy and Legitimacy in the European Union Revisited: Input, Output and 'Throughput' (2013) 61(1) Political Studies 2. 506 Mario Bunge, ‘A General Black Box Theory’ (1963) 30(4) Philosophy of Science 346. See also reference to ‘black box’ in Bruno Latour, ‘Visualization and Cognition: Thinking with Eyes and Hands’ (1986) 6 Knowledge and Society 1. 507 See, eg, Perel and Elkin-Koren (n 112) 181. 508 See generally Roberts (n 211). 509 Crawford and Gillespie (n 232) 412. 510 Witt, Suzor and Huggins (n 27) 578. 511 Crawford and Gillespie (n 232) 413. 512 Ibid 418.

that, much like the regulatory roles of humans and software, there are significant gaps in public knowledge of the extent of the influence of users as volunteer regulators.513

Given the purported rise of artificial intelligence, which many platforms are heavily investing in,514 Part II explores five main risks raised by automated decision-making.515 The first is the potential for biases in the development and deployment of algorithms that are deeply influenced by their human creators.516 Second, there are concerns about the potential for algorithms to make decisions in autonomous ways.517 Third, algorithmic decision-making is often opaque and inscrutable,518 largely due to the commercial interests of platforms.519 Fourth, as algorithms are ‘embedded in [the] complex socio-technical assemblages’520 of platforms, it can be extremely challenging for developers to explain and for users to understand algorithmic decision-making.521 Finally, algorithms can be unpredictable, have unintended consequences and be influenced by a range of internal and external factors.522 These risks are compounded by the fact that the regulatory roles of algorithms, platform employees, commercial content moderators and users are among the least understood aspects of platform governance.

Part III explores the importance of black box analytics, a form of reverse engineering that stakeholders can use to investigate whether content is moderated in ways that are free from arbitrariness.523 This form of analytics is important for a plethora of reasons, including its potential to help the public better understand how decisions about content are made in practice. It can also address the ‘pressing need’524 for empirical analyses of content moderation, as part

513 Pasquale (n 29) 2. 514 See, eg, Adam Mosseri, Head of Instagram, 'Our Commitment to Lead the Fight against Online Bullying', Info Centre (Press Release, 8 July 2019) . 515 Kitchin (n 122) 15; Gillespie (n 216). 516 Kitchin (n 122) 17; Tarleton Gillespie, ‘The Relevance of Algorithms’ in Tarleton Gillespie, Pablo Boczkowski, and Kirsten Foot (eds) Media Technologies: Essays on Communication, Materiality, and Society (MIT Press, 2014) 167-169. 517 Grimmelmann (n 50) 1723. 518 See generally Malte Ziewitz, ‘Governing Algorithms: Myth, Mess, and Methods’ (2016) 41(1) Science, Technology & Human Values 3; Adrian Mackenzie, Cutting Code: Software and Sociality (Peter Lang International Academic, New York, 2006) 43. 519 Kyle Langvardt, 'Regulating Online Content Moderation' (2017) 106 Georgetown Law Journal 1353, 1358. 520 Kitchin (n 122) 14. 521 Sandra Wachter, Brent Mittelstadt and Chris Russell, 'Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR' (2018) 31(2) Harvard Journal of Law & Technology 841. 522 Kitchin (n 122) 19. 523 Pasquale (n 29) 3. 524 Kitchin (n 122) [Abstract].

of attempts to ‘open up’,525 ‘shake’,526 ‘probe’,527 ‘investigate’,528 ‘tinker with’529 and see what is ‘under the hood’530 of platform governance more broadly. Having explored black box analytics in general terms, Part IV then explains the particulars of the black box method in this thesis, which examines how inputs (ie, individual images) into the black box (ie, processes for moderating content) produce certain outputs (ie, whether an image is removed or not). This method is designed to not only answer RQ1 and RQ2, but also to assess the efficacy of using black box analytics to examine content moderation processes with only partial access to data (RQ3). In doing so, I contribute to the development of digital methods for empirical legal analysis of the exercise of governing power in the digital age. Part V concludes this Chapter.

I CONTENT IS MODERATED WITHIN A BLACK BOX

As previously explained, content moderation refers to the processes through which platform executives and their moderators set, maintain and enforce the bounds of appropriate content. Platform-specific policies, such as Community Guidelines and Terms of Use,531 formally delineate these bounds. In practice, however, processes for moderating content are informed not only by content policies, but also by the marketplace, norms (eg, cultural norms of use), laws and online architecture.532 Whether decisions around the appropriateness of content are influenced by one of these factors, or a combination thereof, they are ultimately regulatory decisions in the way that they attempt to influence or control the types of content users see and how and when they see it.533 Users are the subjects of regulation within systems of platform governance more broadly.

525 Graham calls for ‘a concerted multidisciplinary effort to try and open up the ‘black boxes’ that trap software-sorting’: Stephen Graham, ‘Software-sorted Geographies’ (2005) 29 (5) Progress in Human Geography 562, 575. 526 Maurer calls for researchers to ‘shake the black box rather than get overly captivated by its form or announce its arrival with a flourish’: Bill Maurer ‘Transacting Ontologies: Kockelman’s Sieves and a Bayesian Anthropology’ (2013) 3(3) HAU: Journal of Ethnographic Theory 63, 73. 527 Witt, Suzor and Huggins (n 27) 587. 528 See generally Diakopoulos (n 123). 529 See, eg, Pamela Samuelson, ‘Freedom to Tinker’ (2016) 17 Theoretical Inquiries in Law 563; Edward Felten, ‘The New Freedom to Tinker Movement,’ Freedom to Tinker (online at 21 March 2013) . 530 Tarleton Gillespie, Wired Shut: Copyright and the Shape of Digital Culture (MIT Press, Cambridge, 2007) 18, 222. 531 Instagram, ‘Terms of Use’ (n 73); Instagram, ‘Community Guidelines’ (n 185). Also see Miranda (n 27); Roberts, ‘Content Moderation’ (n 27) 1. 532 Lessig, Code and Other Laws of Cyberspace (n 50) 6. 533 Witt, Suzor and Huggins (n 27) 557.


A major cause for concern among stakeholders in platform governance is that processes for moderating content are proprietary and relatively unknown.534 While platforms do not fully disclose the internal workings of their moderation practices, including the detailed guidelines that moderators follow, we can sketch a broad picture of how, where and by whom the work of regulating content takes place. Klonick provides a useful starting point:

It [content moderation] can happen before content is actually published on the site, as with ex ante moderation, or after content is published, as with ex post moderation. These methods can be either reactive, in which moderators passively assess content and update software only after others bring the content to their attention, or proactive, in which teams of moderators actively seek out published content for removal. Additionally, these decisions can be automatically made by software or manually made by humans.535

Let us break down the various components of this statement. First, while moderation can occur before or after a user posts content, the norm is for regulation to be ex post and reactive.536 Roberts explains that it was a business decision on the part of online platforms ‘to allow all uploads without significant pre-screening’537 because technology companies rely on user participation to make money. Second, content is for the most part manually regulated by humans,538 most of whom are commercial content moderators.539 It is not uncommon, however, for professional moderators to use a combination of manual and automated tools to review material.540

The work arrangements of many commercial content moderators are fractured in terms of organisation and geographic location. Sarah Roberts, an academic who has spent a decade studying commercial content moderation, defines four types of work situations for professional

534 Roberts, Behind the Screen: Content Moderation in the Shadows of Social Media (n 24) 37. 535 Klonick (n 28) 1635. 536 For instance, ‘Facebook and most of the social media platforms have a users/community based regulation system: no one is scrutinizing the content before it is uploaded. Users have the possibility to report the posts they found inappropriate on the platform. The content moderator is basically handling these reports, called tickets’: Burcu Gültekin Punsmann, ‘Three Months in Hell: What I Learned from Three Months of Content Moderation for Facebook in Berlin’, Süddeutsche Zeitung (online at 6 January 2018) . See also Klonick (n 28) 1635- 1636; Tarleton Gillespie, ‘The Dirty Job of Keeping Facebook Clean’, Culture Digitally (online at 22 February 2012) . 537 Roberts, Behind the Screen: Content Moderation in the Shadows of Social Media (n 24) 35. 538 Ibid 207. 539 Roberts, ‘Digital Refuse: Canadian Garbage, Commercial Content Moderation and the Global Circulation of Social Media’s Waste’ (n 211) 1; Newton (n 213) 540 See, eg, Roberts, Behind the Screen: Content Moderation in the Shadows of Social Media (n 24) 37

moderators: in-house, boutiques, call centres and microlabour websites.541 An in-house arrangement can connote full-time employment or less permanent arrangements with a platform itself: for example, in divisions with names like ‘Trust and Safety’.542 This workforce is, according to Gillespie, ‘overwhelmingly white, overwhelmingly male, overwhelmingly educated, overwhelmingly liberal or libertarian, and overwhelmingly technological in skill and world view’.543 The second, boutique situation, which is less common for social media platforms,544 refers to specialised firms that offer brand management or content moderation services to other firms. The third and most common arrangement is for platforms to outsource content moderation to ‘call centre’ companies that are equipped to process a high volume of data traffic.545 These centres are dispersed around the world and ‘rely on a multilingual and multiculturally competent workforce that works on site … to respond to the labor needs of a global marketplace, often on a 24/7 cycle’.546 Finally, platforms might outsource moderation to microlabour websites through which people seeking employment complete tasks on a per-task basis.547

While it is common for platforms to develop hybrid strategies that incorporate different dimensions of Roberts’s taxonomy,548 the work of moderating content appears to be very hierarchical. This is illustrated to some extent by apparent distinctions between the work undertaken by more prestigious,549 in-house teams and that undertaken by call centre workers. As platforms set their own policies, their internal policy teams usually comprise experts in areas such as law, computer science, crisis management and public relations.550 These experts are often tasked with, inter alia, drawing the line between appropriate and inappropriate content, clarifying ambiguous policies and addressing controversial issues. In the context of Facebook’s Risk and Response team making decisions about policy grey areas, Koebler and Cox state:

541 Roberts, Behind the Screen: Content Moderation in the Shadows of Social Media (n 24) 35 ff. 542 Ibid 44 543 Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (n 2) 14. 544 Roberts, Behind the Screen: Content Moderation in the Shadows of Social Media (n 24) 44-45. 545 Newton (n 213); Buni and Chemaly (n 8). 546 Roberts, Behind the Screen: Content Moderation in the Shadows of Social Media (n 24) 46. 547 Ibid 46. 548 Ibid 48. 549 See, eg, Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (n 2) 14. 550 Jason Koebler and Joseph Cox, 'The Impossible Job: Inside Facebook’s Struggle to Moderate Two Billion People', Vice (online at 24 August 2018) .


These decisions are made with back-and-forths on email threads, or huddles done at Facebook headquarters and its offices around the world. When there isn’t a specific, codified policy to fall back on, this team will sometimes make 'spirit of the policy' decisions that fit the contours of other decisions the company has made. Sometimes, Facebook will launch so-called “lockdowns,” in which all meetings are scrapped to focus on one urgent content moderation issue, sometimes for weeks at a time.551

This description of internal decision-making, whether in a lockdown scenario or not, suggests that there are sharp divisions between different kinds of moderation labour in practice.552 For instance, it appears that in-house teams principally formulate content policies, while commercial content moderators undertake most of the voluminous and presumably repetitive work of applying policies to reported content.553 A result could be that internal teams are privy to the machinations of platform policy work, such as why, how and when certain policies are created, but some commercial content moderators are not.

The role of accuracy scoring in content moderation processes is another example of the seemingly hierarchical ways in which decisions are made about content at scale. At Facebook, for instance, Newton claims that every US-based moderator has an accuracy score.554 This score, which is generated from multiple tiers of review, should generally remain at or above 95 per cent.555 When making a decision about individual pieces of content, Facebook moderators purportedly determine whether a post is prohibited and, if so, then select the correct reason why the post at issue violates platform policies.556 Each week, a moderator’s decisions are audited by a second, more senior moderator or ‘quality assurance worker’.557 A subset of decisions made by quality assurance workers are then audited by Facebook employees.558 Newton suggests that any wrong decision, across the tiers of quality assurance, counts against a moderator’s accuracy score.559 Here the role of scoring raises several questions, such as whether accuracy is an appropriate measure in the context of content moderation that can involve reviewers making decisions on a case-by-case, ad hoc basis. Another concern is that

551 Koebler and Cox (n 550). 552 There also appears to be more glaring disparities. As Newton explains, ‘The median Facebook employee earns $240,000 annually in salary, bonuses, and stock options. A content moderator working for Cognizant in Arizona, on the other hand, will earn just $28,800 per year’; Newton, (n 213). 553 Roberts, Behind the Screen: Content Moderation in the Shadows of Social Media (n 24) 38. 554 Newton (n 213). 555 Ibid. 556 Ibid. 557 Ibid. 558 Ibid. 559 But see Jason Koebler, 'How Facebook Trains Content Moderators', Motherboard (online at 26 February 2019) .


accuracy scores are partly determined by the extent to which a moderator’s decision-making aligns with that of internal Facebook employees. In making this comparison, there is potential for regulatory actors to overlook deeper normative questions, such as whether policies are desirable in the first place.
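To illustrate the mechanics of such a score, the following Python sketch computes an accuracy figure of the kind Newton describes. The decisions, policy reasons and the 95 per cent target are invented for the purpose of illustration and do not reflect Facebook’s actual tooling; the sketch simply shows how counting agreement with an auditor, on both the action taken and the reason given, produces a single percentage against which a moderator might be measured.

# A minimal, illustrative sketch of how an accuracy score of the kind Newton
# describes might be computed. The decisions, labels and the 95 per cent
# threshold are assumptions for demonstration, not Facebook's actual system.
moderator_decisions = [
    ("post_1", "remove", "hate_speech"),
    ("post_2", "allow", None),
    ("post_3", "remove", "nudity"),
    ("post_4", "allow", None),
]
# Decisions by the auditing reviewer (the 'quality assurance worker').
audited_decisions = {
    "post_1": ("remove", "hate_speech"),
    "post_2": ("allow", None),
    "post_3": ("remove", "harassment"),   # same action, different stated reason
    "post_4": ("remove", "spam"),          # different action
}

def accuracy_score(decisions, audit):
    """Count a decision as correct only if both the action and the stated
    policy reason match the auditor's decision."""
    correct = sum(
        1 for post_id, action, reason in decisions
        if audit.get(post_id) == (action, reason)
    )
    return correct / len(decisions)

score = accuracy_score(moderator_decisions, audited_decisions)
print(f"Accuracy: {score:.0%}")                          # 50% in this toy example
print("Below target" if score < 0.95 else "At or above target")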

The supposed prevalence of accuracy scoring is just one aspect of the bleak working conditions that many commercial content moderators face.560 Several academic researchers, most prominently Roberts,561 and publications in news media draw attention to the psychological toll that moderating certain content can have on workers.562 For example, some moderators reportedly suffer from anxiety, trauma and even acute stress disorders from viewing the worst content that users post to the internet.563 On a day-to-day basis, some moderators are allegedly ‘managed down to the second’ and others, often in call centres, are purportedly required to adhere to a number of privacy and security measures. In the context of Facebook, Newton claims:

To protect the privacy of the Facebook users whose posts they review, workers are required to store their phones in lockers while they work. Writing utensils and paper are also not allowed, in case Miguel [a moderator] might be tempted to write down a Facebook user’s personal information. This policy extends to small paper scraps, such as gum wrappers. Smaller items, like hand lotion, are required to be placed in clear plastic bags so they are always visible to managers.564

According to Ellen Silver, Facebook’s Vice President of Operations, contract labour enables the company to ‘scale globally’:565 more specifically, to moderate posts in every time zone, in over 50 languages and at more than 20 sites around the world.566 However, the various reports of commercial content moderation suggest that ‘little known, often low-wage/low-status, and

560 Roberts, Behind the Screen: Content Moderation in the Shadows of Social Media (n 24) 38. 561 Ibid. 562 Adrien Chen, ‘The Laborers who Keep Dick Pics and Beheadings out of your Facebook Feed’, Wired News (online at 23 October 2014) . For an earlier piece, see: Rebecca Hersher, ‘Labouring in the Shadows to Keep the Web Free of Child Porn’, npr (online at 17 November 2013) . 563 See, eg, Zachary Mack, 'Facebook’s Former Chief Security Officer Alex Stamos on Protecting Content Moderators', The Verge (online at 19 March 2019) . 564 Newton (n 213). 565 Ellen Silver, 'Hard Questions: Who Reviews Objectionable Content on Facebook — And Is the Company Doing Enough to Support Them?' Facebook Newsroom (Press release, 26 July 2018) . 566 Ibid.


generally outsourced’567 reviewers can be the collateral damage in attempts to moderate content at scale.

567 Roberts, Behind the Screen: Content Moderation in the Shadows of Social Media (n 24) 71.


Users also play an important role in content moderation processes as an enrolled ‘volunteer corps of regulators’.568 Instagram, like other platforms, recruits volunteers through reporting features. In Figure 7, for instance, Instagram users can report content as ‘inappropriate’, based on specific ‘reasons for reporting’, or as ‘spam’. The platform also enables non-users, or individuals who do not have an account, to report potential violations via its webpage, as illustrated in Figure 8.569 Users are important regulatory actors because the majority of the work that professional moderators undertake is responding to reported, or flagged, content.570 Using software, such as the Single Review Tool,571 moderators generally allow, escalate or deny user reports one-by-one. The enrolment of users as volunteer regulators is a practical, bottom-up as opposed to top-down solution to the problem of moderating vast amounts of content.572 Facebook users, for example, flag around one million pieces of content per day.573 Moreover, by relying on users to report potentially inappropriate content, platforms can claim to be moderating on behalf of users. A claim like this can serve as ‘a powerful rhetorical legitimation’574 for the outcomes of content moderation, something that is particularly useful during periods of intense media scrutiny.
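The following Python sketch illustrates, in deliberately simplified and hypothetical form, the workflow just described: a queue of user reports, or tickets, reviewed one at a time, with each report allowed, escalated or denied. The report categories and decision rules are my own assumptions and are not drawn from Instagram’s Single Review Tool or any disclosed ruleset.

# A highly simplified, hypothetical sketch of a report ('ticket') queue being
# reviewed one report at a time. The categories and rules are invented for
# illustration and do not reflect any platform's actual tooling or policies.
from collections import deque

reports = deque([
    {"content_id": "img_101", "reason": "nudity"},
    {"content_id": "img_102", "reason": "spam"},
    {"content_id": "img_103", "reason": "self_injury"},
])

def review(report):
    """Return 'deny' (take no action on the report), 'allow' (act on the
    report) or 'escalate' (send to a more senior reviewer)."""
    if report["reason"] == "self_injury":
        return "escalate"          # sensitive categories go to specialists
    if report["reason"] == "spam":
        return "deny"              # assume this report is unfounded
    return "allow"                 # assume the report is actioned

while reports:
    ticket = reports.popleft()
    print(ticket["content_id"], "->", review(ticket))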

In-built reporting features are sophisticated regulatory tools that can have significant implications for content moderation. One of the biggest concerns is that reporting options provide a very narrow ‘vocabulary of complaint’,575 which users might interpret in inconsistent ways. Users can also employ the reporting function in tactical ways to game a system of regulation, perhaps as a prank, part of organised campaigns to silence particular viewpoints or for other unclear reasons. These reasons can relate to horizontal conflicts between users, as well as vertical conflicts between users and Instagram.576 While user reports do not necessarily result in the removal of content, the reporting processes of many platforms are problematic as

568 Julia Black, ‘Enrolling Actors in Regulatory Systems: Examples from U.K. Financial Services Regulation’ (2003, Spring) Public Law 63, 84-90; Chen (n 562); Miranda (n 27). 569 ‘Report Violations of Our Community Guidelines’, Instagram (Web page, 2019) . 570 Crawford and Gillespie (n 232) 413; Klonick (n 28) 1635. 571 Newton (n 213); Nick Hopkins, ‘Facebook Moderators: A Quick Guide to Their Job and Its Challenges’, The Guardian (online at 22 May 2017) . 572 Crawford and Gillespie (n 232) 410. 573 Buni and Chemaly (n 8). 574 Crawford and Gillespie (n 232) 412. 575 Ibid 418. 576 Roberts, Behind the Screen: Content Moderation in the Shadows of Social Media (n 24) 37-38. See also Jean Burgess and Ariadna Matamoros-Fernández, ‘Mapping Sociocultural Controversies across Digital Media Platforms: One Week of #gamergate on Twitter, YouTube and ’ (2016) 2 Communication Research and Practice 79, 81.


they do not provide mechanisms to verify why users report content. This means that the queue of tickets for moderators to review might be skewed by the biases of particular groups of users, potentially in ways that further entrench social inequalities and other issues. An added complication for this study is that Instagram does not explicitly disclose the volume or nature of actions taken to remove content from its platform.577

Having explored typical working arrangements for human moderators, I will now consider automated content moderation, as part of artificial intelligence more broadly. First, it is useful to clarify the meaning of the term AI, which has become overloaded in recent years. Artificial intelligence is a concept that has existed for many decades and generally refers to the theory and development of machines that are ‘able to carry out tasks that would otherwise require human intelligence’.578 While a range of societal actors claim that we are now living in ‘the age of AI’,579 this technology is not as advanced as one might think. For instance, in response to the question whether AI is going to save us all, Facebook’s Head of Global Policy Management Monika Bickert declared that ‘[w]e’re a long way from that’.580 It is also the case that the majority of the work of moderating content requires some degree of human evaluation and, where necessary, direct regulatory intervention.581 For the remainder of this Chapter, I will therefore focus on the artificial intelligence that platforms have disclosed to the public, rather than the potential future of AI that often shapes popular understandings of technology. By doing so, I ground the following discussion about the risks and benefits of technologies in the context of platform governance, but do not go so far as to leave this debate to the homogenous few who develop and deploy AI in practice.582

A prominent field of artificial intelligence, which is the particular focus of the following Part, is machine learning. This field broadly encompasses a set of techniques and algorithms that can be used to train a computer program, or software, to undertake a particular task or achieve

577 See, eg, Ranking Digital Rights (n 385) 60, 87-88. 578 Constance de Saint-Laurent, 'In Defence of Machine Learning: Debunking the Myths of Artificial Intelligence' (2018) 14(4) Europe's Journal of Psychology 734, 735. 579 See, eg, Richard Gray, ‘Why Artificial Intelligence Is Shaping Our World’, BBC: Machine Minds (online at 19 November 2018) . 580 Madrigal (n 254). In the context of hate speech, Chief AI scientist for Facebook AI Research Yann LeCun explains: ‘[T]here are a large number of cases where something is hate speech but there’s no easy way to detect this unless you have broader context … For that, the current AI tech is just not there yet’: Lisa Eadicicco, '3 Things We Learned From Facebook's AI Chief About the Future of Artificial Intelligence', Business (online at 19 February 2019) . 581 Roberts, Behind the Screen: Content Moderation in the Shadows of Social Media (n 24) 37. 582 Saint-Laurent (n 578) 735.


a particular result.583 As the techno-functional foundations of software, an algorithm is a ‘set of defined steps that if followed in the correct order will computationally process input (instructions and/or data) to produce a desired outcome’.584 Algorithms require two basic components to function: a dataset to work with, and a definition of success – that is, what the algorithm is meant to achieve or produce.585 In order to achieve an outcome, algorithms follow rules, which programmers write or explain in the language of code.586 There are many different types of algorithms, including machine learning algorithms.
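A deliberately trivial example can make this definition concrete. In the following Python sketch, which is mine and illustrative only, the dataset is a short list of numbers, the definition of success is ‘return the largest value’, and the defined steps are written as code.

# A deliberately trivial illustration of the definition above: a dataset to
# work with, a definition of success (find the largest value), and a set of
# defined steps that process the input to produce that outcome.
dataset = [3, 17, 9, 42, 5]          # input: the data the algorithm works with

def find_largest(values):
    largest = values[0]              # step 1: start with the first value
    for value in values[1:]:         # step 2: examine each remaining value
        if value > largest:          # step 3: keep the larger of the two
            largest = value
    return largest                   # output: the desired outcome

print(find_largest(dataset))         # 42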

As part of the artificial intelligence boom, global spending on which is estimated to reach $US77 billion in 2022,587 platforms continue to invest heavily in technology that can help moderate content at scale. Take, for example, Adam Mosseri’s statement: ‘For years now, we have used artificial intelligence to detect bullying and other types of harmful content in comments, photos and videos. As our community grows, so does our investment in technology’.588 One of Instagram’s features, which is ‘powered by AI’,589 notifies users when their comment might be considered offensive and, in a bid to encourage reflective online participation, gives users a chance to undo their comment.590 The platform also uses automated text searches to identify banned words in comments,591 and skin filters to detect varying degrees of bare human skin (which can, but does not always, indicate pornography).592 Some forms of machine-automated processing, such as skin filters, are strides ahead of others for methodological, financial and other reasons.593 Zuckerberg recently stated, ‘It’s easier to build

583 For example, to automatically recognise patterns in a set of data: AI Now, ‘Algorithmic Accountability Policy Toolkit’ (October 2018) 2; Saint-Laurent (n 578). 584 One of the earliest references to an algorithm, or ‘algorism’, is in the ninth-century scripts of the Arabian mathematician Muḥammad ibn Mūsā al-Khwārizmī, which outline step-by-step methods of arithmetic: see Kitchin (n 122) 16; Lawrence Snyder, Fluency with Information Technology: Skills, Concepts, & Capabilities (Pearson, 2015) 36. 585 Cathy O'Neil, Weapons of Math Destruction: How Big Data increases Inequality and Threatens Democracy (Crown, 2016) 15 ff. See also Daniel Neyland, ‘On organizing algorithms’ (2015) 32(1) Theory, Culture & Society 119, 121 ff. 586 Kitchin (n 122) 17. Code is traditionally binary (ie, a system using the binary digits 0 and 1), however, coders now write software in languages such as Python: Code.org, ‘How Computers Work: Hardware and Software’ (YouTube, 30 January 2018) 02:40 ff . 587 Eadicicco (n 580). 588 Mosseri (n 514). 589 Ibid. 590 Constine (n 346). 591 Sarah Roberts, 'Social Media’s Silent Filter', The Atlantic (online at 8 March 2017) . 592 Roberts, Behind the Screen: Content Moderation in the Shadows of Social Media (n 24) 37. 593 Kadri and Klonick state: ‘Automated detection of violations is quite sophisticated and successful for various types of visual content (such as child pornography) but less so for written content that poses “nuanced linguistic challenges” (such as harassment and hate speech)’: Kadri and Klonick (n 31) 15.


an AI system to detect a nipple than what is hate speech’.594 This is partly why platforms continue to invest heavily in the research and development of tools that might advance the current state of content moderation processes.
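As an illustration of the automated text searches mentioned above, the following Python sketch flags comments that contain terms from a blocklist. The word list, comments and simple matching logic are hypothetical assumptions for the purpose of illustration; Instagram’s actual systems are proprietary and considerably more sophisticated.

# A hypothetical sketch of an automated text search for banned words in
# comments. The blocklist, comments and simple matching logic are invented
# for illustration; real systems are proprietary and far more sophisticated.
import re

BANNED_WORDS = {"examplebannedword", "anotherbannedword"}  # hypothetical list

def flag_comment(comment: str) -> bool:
    """Return True if the comment contains any banned word."""
    tokens = re.findall(r"[a-z]+", comment.lower())
    return any(token in BANNED_WORDS for token in tokens)

comments = [
    "What a great photo!",
    "This contains an examplebannedword in it.",
]
for comment in comments:
    print(flag_comment(comment), "-", comment)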

Platforms developing and deploying automated systems to moderate some types of content can give rise to a number of potential benefits. Many of these possibilities relate to efficiency, given that machine learning algorithms can often undertake a high volume of tasks at faster rates than humans.595 This type of capital substitution can reduce some ongoing costs, especially in the context of low-level, highly repetitive tasks.596 More specifically in terms of user experiences, machine learning and other automated systems can help to identify potentially harmful content before users have the opportunity to report it and, in some instances, even before it is published.597 Platforms can also use this technology to personalise and optimise individual users’ social media participation through, inter alia, curated content feeds and recommendation systems. The benefits of machine learning are, of course, inextricably linked to the risks.

There are global concerns around the regulatory role that machines play in deciding how users participate in online spaces.598 A range of societal actors claim that we have entered an era of techno-regulation, which ‘refers to the intentional influencing of individuals’ behaviour by embedding norms into technological systems and devices’.599 Like the concept of artificial intelligence, techno-regulation is bound up with a range of seemingly interchangeable terms, such as ‘regulation by technology,’ ‘regulative software’, ‘automated decision-making’ and ‘algorithmic regulation’.600 While the risks of automation can vary depending on context, much recent scholarly discussion focuses on the potential for algorithmic biases, autonomous

594 ‘Facebook, Inc. (FB) Q1 2018 Earnings Conference Call Transcript’, The Motley Fool (Web Page, 25 April 2018) . 595 Kitchin (n 122) 15. 596 Weber and Seetharaman (n 212). 597 Rachel Sandler, 'Here's Everything Facebook Announced at its 2018 Developers Conference', Business Insider (online at 3 May 2018) . 598 See, eg, Emre Bayamlıoğlu and Ronald Leenes, 'The ‘Rule of Law’ Implications of Data-Driven Decision-Making: A Techno-Regulatory Perspective' (2018) 10(2) Law, Innovation and Technology 295. 599 Ibid 298. See also Berg, Bibi van den and Ronald Leenes, ‘Abort, Retry, Fail: Scoping Techno-Regulation and Other Techno-Effects’, in Mireille Hildebrandt and Jaenne Gakeer (eds), Human Law and Computer Law: Comparative Perspectives (Springer, 2012) 74. 600 See, eg, Bayamlıoğlu and Leenes (n 598) 298; Florian Saurwein, Natascha Just and Michael Latzer, 'Governance of Algorithms: Options and Limitations' (2015) 17(6) info 35.


decision-making, opacity, complexity601 and unpredictability.602 In the following Part, I will elaborate upon each of these risks in turn, under the broader heading of automated decision-making.

II THE RISKS OF AUTOMATED DECISION-MAKING

It is important to bear in mind that the risks of automated decision-making can arise along any or all of the stages of a data pipeline. Here a pipeline serves as an abstract, non-exhaustive representation of the various decisions that can be made about data, from what a developer records as individual data points through to data visualisation.603 Given that machine learning aims to predict or recommend something, the first stage in the pipeline might involve a developer defining what that something is and how it will be measured in practice.604 Once these goals are set, the next stage can involve a developer identifying possible sources of data, modes of data extraction and sampling methods (eg, new data might be collected, or existing datasets might be merged together).605 The next step might be data cleaning, given that datasets are rarely free from inaccurate or missing values. At this point, a developer might delete certain variables or, more commonly, impute (estimate) missing data.

Once data is collected and cleaned, a programmer will typically select a class of model to implement and train.606 In the context of skin filters, for instance, some platforms train machines to automatically detect nudity based on databases of known bad content (eg, images or videos that moderators have already labelled as a violation of terms or guidelines).607 The training process generally involves programmers iteratively selecting, tuning and assessing a model, often as part of wider collaborations and negotiations.608 As Seaver notes, ‘algorithmic systems are not standalone little boxes, but massive, networked ones with hundreds of hands reaching into them, tweaking and tuning, swapping out parts and experimenting with new arrangements’.609 The final stage in the data pipeline is output: a model might describe data,

601 Malte Ziewitz (n 518) 7. 602 Kitchin (n 122) 15. 603 David Lehr and Paul Ohm, ‘Playing with Data: What Legal Scholars Should Learn About Machine Learning’ (2017) 51 University of California, Davis Law Review 653, 669 ff. 604 Ibid 672. 605 Ibid 677 ff. 606 Ibid. 607 Roberts, Behind the Screen: Content Moderation in the Shadows of Social Media (n 24) 34; ‘Instagram is now training its content moderators to label borderline content when they’re hunting down policy violations, and Instagram then uses those labels to train an algorithm to identify’: Constine (n 346). See also Duke University Law School, ‘LENS 2018: Complexity & Security | Monika Bickert, Keynote: The Social Media Perspective’ (YouTube, 24 February 2018) 43:00 . 608 Kitchin (n 122) 18; Diakopoulos (n 123). 609 Nick Seaver, ‘Knowing Algorithms’ (Research Paper, Department of Anthropology, UC Irvine, February 2014)


draw inferences or make predictions. This is the meaning-making point of the pipeline, where developers might interpret and evaluate results, perhaps based on commercial objectives or requests from stakeholders. While the data pipeline is not exhaustive, and its constituent stages do not necessarily occur in a sequential manner, it usefully illustrates the volume and variety of decisions that platforms could make about automated systems.
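The following Python sketch compresses these stages into a single, illustrative walkthrough using invented data: a goal is defined, two hypothetical datasets are merged, a missing value is imputed, a deliberately simple ‘model’ is fitted, and an output is produced for a new input. It is a sketch of the pipeline as described above, not a representation of any system that Instagram or another platform actually uses.

# A compressed, illustrative walk through the pipeline stages described above,
# using invented data. Each stage mirrors the text: defining the goal,
# collecting data, cleaning/imputing, training a (deliberately simple) model,
# and producing an output. It is not a model any platform actually uses.

# Stage 1: define what is being predicted and how it is measured.
# Goal: predict whether an image shows bare skin from a single feature,
# the proportion of skin-toned pixels (a number between 0 and 1).

# Stage 2: identify and merge data sources (two hypothetical labelled sets).
dataset_a = [(0.10, 0), (0.85, 1), (0.20, 0)]
dataset_b = [(0.90, 1), (None, 1), (0.15, 0)]      # one missing value
records = dataset_a + dataset_b

# Stage 3: clean the data by imputing the missing feature with the mean.
known = [x for x, _ in records if x is not None]
mean_value = sum(known) / len(known)
cleaned = [(x if x is not None else mean_value, y) for x, y in records]

# Stage 4: 'train' a minimal model by choosing the threshold that best
# separates the labelled examples (a stand-in for real model fitting).
def accuracy(threshold):
    return sum((x >= threshold) == bool(y) for x, y in cleaned) / len(cleaned)

threshold = max((t / 100 for t in range(101)), key=accuracy)

# Stage 5: output - apply the model to new, unlabelled input.
new_image_skin_ratio = 0.7
print("Flag as bare skin:", new_image_skin_ratio >= threshold)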

First, in terms of the risk of algorithmic biases in automated decision-making, a crucial point is that algorithms are, to use Broad’s term, ‘made by humans’.610 In other words, algorithms learn what they know, often through the above-mentioned training processes, from their creators. A result is that developers can embed their own values, beliefs and assumptions about the world into training datasets and, by extension, the outputs of algorithms.611 Kitchin explains:

Whilst programmers might seek to maintain a high degree of mechanical objectivity – being distant, detached and impartial in how they work and thus acting independent of local customs, culture, knowledge and context – in the process of translating a task or process or calculation into an algorithm they can never fully escape these.612

This means that algorithms can promote and reinforce particular kinds of decision-making, including decision-making that might be erroneous, biased or discriminatory.613 Chouldechova and Roth explain that ‘machine learning techniques are designed to fit the data, and so will naturally replicate any bias already present in the data. There is no reason to expect them to remove existing bias’.614 Algorithms that learn from defective datasets, which are not uncommon given that complete data are often expensive and hard to obtain at scale,615 can also give rise to biases. Take, for example, Amazon’s curriculum vitae sorting model, which was trained on data from a sample of resumes that job candidates submitted over a ten-year period.616 In 2015, however, the company found that the model was biased against women in

10. 610 See generally Ellen Broad, Made by Humans (Melbourne University Press, 2018) 58 ff. 611 Ibid. 612 Kitchin (n 122) 17-18. 613 Ibid 17. 614 Alexandra Chouldechova and Aaron Roth, The Frontiers of Fairness in Machine Learning (Research Paper, 23 October 2018) 2. 615 Federal Trade Commission, Big Data: A Tool for Inclusion or Exclusion? Understanding the Issues (Report, January 2016) 2. 616 See generally Jeffrey Dastin, ‘Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women’, Reuters (online at 10 October 2018)


some measure because it was principally trained on resume data from male candidates.617 The ‘messy’618 nature of human and automated decision-making raises important epistemological questions about where data comes from, how it is interpreted and for what purposes.619
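The Amazon example can be illustrated in miniature. In the following Python sketch, the training data, groups and outcomes are entirely invented; the point is simply that a model which learns the most common historical outcome for each group will reproduce whatever skew that history contains.

# A minimal, hypothetical sketch of how a model trained on skewed historical
# data can replicate that skew. The dataset, features and labels are invented
# for illustration and do not reflect any real hiring or moderation system.
from collections import Counter

# Historical decisions: many more resumes from men, with past outcomes
# heavily favouring men (the bias already present in the data).
training_data = [("male", "hired")] * 80 + [("male", "rejected")] * 10 + \
                [("female", "hired")] * 2 + [("female", "rejected")] * 8

def train_majority_rule(data):
    """Learn, for each group, the most common historical outcome."""
    outcomes = {}
    for group, label in data:
        outcomes.setdefault(group, Counter())[label] += 1
    return {group: counts.most_common(1)[0][0] for group, counts in outcomes.items()}

model = train_majority_rule(training_data)
print(model)  # {'male': 'hired', 'female': 'rejected'} - the bias is reproduced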

The lack of women and minorities in technology fields contributes to the risk of biases in moderation processes and platform governance more broadly. A very narrow subset of the population – largely white, educated and straight men – designs and develops machine learning technologies.620 In Australia, for example, around 16 per cent of Science, Technology, Engineering and Mathematics (STEM)-qualified individuals are female.621 At the international level, Chang underlines a troubling lack of diversity in Silicon Valley technology companies:

In 2017, women at Google accounted for 31 percent of jobs overall and only 20 percent of vital technical roles. At Facebook, women make 35 percent of the total workforce and 19 percent of technical jobs. The statistics are downright depressing for women of color: black women hold 3 percent of computing jobs, and Latina women hold 1 percent.622

If machine learning models often reflect the interests, needs and life experiences of the professionals who create them,623 there is cause for concern that processes for moderating content serve a very narrow subset of Western populations. There is also cause for concern here about what Chang calls Silicon Valley’s ‘bro culture problem’,624 including a lack of inclusive workplaces that can reinforce, inter alia, the stereotype that ‘antisocial male nerds’625 are the best programmers. In a study of why a representative sample of US adults ‘voluntarily left their jobs in tech’,626 Scott, Klein and Onovakpuri found that ‘unfairness, in the form of

idUSKCN1MK08G>; AI Congress, The Usefulness – and Possible Dangers – of Machine Learning, AI Congress (Web Page, 3 October 2017) . 617 Ibid. 618 Kitchin (n 122) 21. 619 Christian Sandvig, ‘The Social Industry’ (2015) 1(1) Social Media + Society 1; Batya Friedman and Helen Nissenbaum, ‘Bias in Computer Systems’ (1996) 14(3) ACM Transactions on Information Systems 330, 331-332; Neyland (n 585) 119. 620 Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (n 2) 12. See also Sinduja Rangarajan, 'Here’s the Clearest Picture of Silicon Valley’s Diversity Yet: It’s Bad. But Some Companies are Doing Less Bad', (online at 25 June 2018) . 621 Australian Government, Office of the Chief Scientist, ‘Australia’s STEM Workforce’ (Infographic, 2016) . 622 Chang (n 473) 7. 623 See, eg, Broad (n 610). 624 Chang (n 473) 7. 625 Ibid 20. 626 Allison Scott, Freada Kapor Klein and Uriridiakoghene Onovakpuri, ‘Tech Leavers Study: A First-of-its Kind Analysis of Why people Voluntarily Left jobs in Tech’ (Report, 2017) 5.


everyday behavior (stereotyping, harassment, bullying, etc.) is a real and destructive part of the tech work environment, particularly affecting underrepresented groups and driving talent out the door’.627 Without greater diversity among platform employees, especially in technical teams, platforms can perpetuate longstanding histories of exclusion under the guise of seemingly objective technology.628 While some companies are attempting to address issues around diversity, a notable example of which is Slack,629 ‘“like-me” bias’ remains highly prevalent in technology fields.630

The nature of algorithms as a technologically determinist solution for regulating some user behaviours can itself create system wide bias.631 While platforms have divergent organisational goals, cultures and other practices, many advance a technologically determinist view ‘that technologies exert an effect on human society and behavior’.632 Facebook’s mission to ‘bring the world closer together’ is arguably technological determinism writ large.633 Zuckerberg contends that, ‘If we can do this, it will not only turn around the whole decline in community membership we've seen for decades, it will start to strengthen our social fabric and bring the world closer together’.634 This kind of statement, which is not unique to Facebook,635 is problematic as it suggests that technology is the most effective vehicle for social change.636 However, we know that many problems that arise in the context of content moderation are linked to issues that transcend platforms and require input from diverse stakeholders in order to be solved. As Nash reminds us, it is important to look beyond ‘the technical manifestation of social ills’ and focus on ‘the ills themselves’.637 Emphasis on technology as a catalyst for

627 Scott, Klein and Onovakpuri (n 626) 5. 628 Joy Buolamwini, Gender Shades: Intersectional Phenotypic and Demographic Evaluation of Face Datasets and Gender Classifiers (Master of Science Thesis, MIT, 2017); Alex Campolo et al, AI Now 2017 Report (Report, 2017) 16-17 ; Emily van der Nagel, ‘Networks that Work Too Well’: Intervening in Algorithmic Connections’ (2018) 168(1) Media International Australia 81, 81-82. 629 See generally Slack Team, ‘Diversity at Slack’ (Web Page, 2019) . 630 Campolo et al (n 628) 17. 631 Ian Bogost and Nick Montfort, ‘Platform Studies: Frequently Asked Questions’ (FAQ, 12 December 2009) . 632 Ibid. 633 Mark Zuckerberg, ‘Bringing the World Closers Together’, Facebook Newsroom (online at 22 June 2017) . 634 Ibid. 635 For instance, in a post announcing ‘a bunch of new products’, Instagram co-founder and former CEO Systrom claims that the platform’s technology ‘will help bring our community even closer to the people and things they love’. See @kevin (Kevin Systrom) (Instagram, 2 May 2018) . 636 Bogost and Montfort (n 631). 637 Nash (n 104) 2.


social good also risks downplaying the fact that social media technologies are developed and deployed in line with platforms’ overriding commercial interests.

The second risk that is the focus of this Part is the potential for autonomous decision-making by algorithms. In contrast to platform rhetoric, which generally frames algorithms in mechanical and benign terms of ‘the formal static essence of software’,638 some algorithms can make decisions in autonomous ways.639 For instance, a platform might task an unsupervised model with labelling a sample of images based on learned patterns in data.640 The algorithms that comprise the model might learn to make ‘smarter’ decisions over time and reroute based on new information as part of multiple layers of machine learning.641 Perhaps what is concerning here is not the possibility of some algorithms making autonomous decisions, but a loss, by varying degrees and in different ways, of human autonomy in decision-making processes. The potential for software to automatically moderate content raises complex questions about whether users can overcome powerful, computer-based manipulations.642 Take, for instance, the apprehension ‘that individual autonomy is lost in an impenetrable set of algorithms’.643 Without access to the internal workings of platforms, there are no easy answers to these questions.

The apparent opacity and complexity of algorithms pose additional risks for users as the subjects of platform governance.644 Algorithms are opaque on a number of levels, the first being that programmers work on technology behind closed doors.645 In other words, the public is for all intents and purposes locked out of participating in the development of these technologies. Second, as previously explained, the internal workings of algorithms are

638 Adrian Mackenzie, Cutting Code: Software and Sociality (Peter Lang International Academic, New York, 2006) 43; Kitchin (n 122) 17. See also Timothy Lee, ‘Mark Zuckerberg Is In Denial about How Facebook is Harming Our Politics’, Vox (online at 10 November 2016) : Lee remarks: ‘“We are a tech company, not a media company,” Zuckerberg has said repeatedly over the last few years. In the mind of the Facebook CEO, Facebook is just a “platform,” a neutral conduit for helping users share information with one another’. 639 Kate Crawford, ‘Can an Algorithm Be Agonistic? Ten Scenes from Life in Calculated Publics’ (2016) 41 Science, Technology & Human Values 77, 79. 640 Frank Pasquale, ‘Assessing Algorithmic Authority’, Madisonian.net (online at 18 November 2009) ; Grimmelmann (n 50) 1723. 641 See generally Rob Kitchin and Martin Dodge, Code/Space: Software and Everyday Life (MIT Press, Cambridge, 2011). 642 Ziewitz (n 518) 10. 643 Executive Office of the President 2014, 10 644 Mackenzie notes that algorithms have ‘a cognitive-affective stickiness that makes them both readily imitable and yet forbiddingly complex’: Adrian Mackenzie, Cutting Code: Software and Sociality (Peter Lang International Academic, 2006) 43. 645 Grimmelmann (n 50) 1723.


proprietary.646 The non-disclosure agreements that many platforms require moderators to sign largely ensure that proprietary information and details of the work that moderators undertake remain out of the public domain.647 Turning to the related yet distinct issue of complexity, algorithms can defy human understanding.648 As Ziewitz states, ‘An algorithm, it turns out, is rather difficult to understand’ – it is, according to Anderson, a ‘complex and mathematically grounded black box that seems to do more than simply aggregate preferences’.649 One of the most obvious challenges to understanding algorithms is that their constitutive code is written by programmers in specific languages, such as Python.650 A result is that even if a person were to gain access to proprietary code, they would likely require some kind of specialist knowledge to understand how a specific algorithm works.

Another challenge is that software does not always reveal why a particular decision was made. What we see in code is actually ‘a mysterious alchemy in which each individual step might be comprehensible, but any ‘explanation’ of why the code does what it does requires understanding how it evolved and what ‘experiences’ it had along the way’.651 Perhaps most surprisingly, it can be difficult even for a programmer to explain the output of an algorithm.652 This could be due to, inter alia, problems along the various stages of the data pipeline or a platform outsourcing different parts of a system. According to Kitchin, the same programmers are not necessarily involved in every aspect of a technology’s development and deployment.653 Concerns around the opacity and complexity of algorithms are fundamentally concerns around ‘knowing algorithms’, which are enormously difficult to address in practice.654

In addition to the risks of algorithmic bias, autonomous decision-making, opacity and complexity, algorithms can be unpredictable in ways that starkly contrast with the seemingly mechanical nature of technology. There are several factors that can be catalysts of unpredictable outcomes, including a programmer mistranslating a step or encoding biases,

646 Pasquale, The Black Box Society (n 29). 647 Roberts, Behind the Screen: Content Moderation in the Shadows of Social Media (n 24) 38. 648 Nick Seaver, ‘Knowing Algorithms. Media’ (Department of Anthropology, UC Irvine, 2014) . 649 Ziewitz (n 518) 6. 650 Chris Anderson, ‘Deliberative, Agonistic, and Algorithmic Audiences: Journalism’s Vision of Its Public in an Age of Audience Transparency’ (2011) 5 International Journal of Communication 529, 540. 651 Suresh Venkat, ‘When an Algorithm Isn’t …’ Medium (online at 1 October 2015) . 652 Ziewitz (n 518) 3. 653 Kitchin (n 122) 21. 654 Diakopoulos (n 123) 398.


consciously or otherwise.655 Algorithms can also perform in sub-optimal ways due to bugs and other faults in software. In terms of external factors, societal actors can attempt to subvert or game systems of algorithmic governance by, for instance, posting different types of content and recording how an algorithm responds to the new input.656 In this sense, algorithms are not just the sum of step-by-step processes – they can ‘sort, organise, highlight and suppress’657 content in ways that are not value neutral.658 Indeed, if we think back to Lessig’s modalities of regulation, algorithms can be significantly influenced by the marketplace; by norms at geographic, industry, platform, community and individual levels; and by state-enacted laws.659 This is arguably the case whether algorithms are part of the modality of architecture or a separate modality altogether.

It should be noted that there is a risk of societal actors giving undue weight to ‘algorithmic drama’.660 As previously explained, the precise extent of the influence of algorithms in processes for moderating content is presently unknown and experts have suggested that human moderators are likely to always play an important role in moderating user-generated content.661 However, it is not an overstatement that there is widespread unease about the information asymmetry between platforms and their users, especially the governing role of algorithms.662 There is also more critical attention than ever on the secrecy around platform governance, which continues to significantly limit public understandings of how user-generated content is regulated behind closed doors.663 In this context, there is a need for empirical analyses to better understand whether content policies are enforced in ways that are free from arbitrariness and to identify the real impacts that moderation can have on users as the subjects of platform governance.664 In the following Part, with Lessig’s injunction that ‘[w]e should interrogate the

655 Kitchin (n 122) 19. 656 Nicholas Diakopoulos, ‘Algorithmic Accountability Reporting: On the Investigation of Black Boxes’ (Research Paper, Tow Center for Digital Journalism, Columbia Journalism School, 2014) 11; Pasquale, ‘Restoring Transparency to Automated Authority’ (n 336) 235. 657 Suzor, Lawless: The Secret Rules That Govern our Digital Lives (n 25) 7. 658 Kitchin (n 122) 18; Saint-Laurent (n 578) 7. 659 See generally Lessig (n 50). 660 Ziewitz (n 518) 5. 661 See generally Madrigal (n 254); Roberts, 'Aggregating the Unseen' (n 128) 2. 662 Witt, Suzor and Huggins (n 27) 574. 663 See, eg, Roberts, Behind the Screen: Content Moderation in the Shadows of Social Media (n 24) 93. 664 See, eg, Anderson et al (n 20) 21-22; Solon Barocas and Andrew D. Selbst, ‘Big Data’s Disparate Impact’ (2016) 104 California Law Review 671,731-732; Sandvig et al, ‘Auditing Algorithms: Research Methods for Detecting Discrimination on Internet Platforms’ (Paper presented at the Data and Discrimination: Converting Critical Concerns into Productive Inquiry, a preconference at the 64th Annual Meeting of the International Communication Association, Seattle, WA, USA, 22 May 2014) 7-9.


architecture of cyberspace as we interrogate the code of Congress’665 in mind, I explain why black box analytics is useful for investigating content moderation processes.

III THE IMPORTANCE OF BLACK BOX ANALYTICS

As explained thus far, the lack of transparency around the internal workings of platforms continues to limit public understandings of how user-generated content is moderated in practice. This matters because ‘[g]aps in knowledge, putative and real, have powerful implications, as do the uses that are made of them’.666 Transparency deficits in the context of platform governance are, to use Pasquale’s term, a symptom of the broader ‘black box society’667 of the West. In this society, Pasquale argues that ‘[s]ecrecy is approaching critical mass, and we are in the dark about crucial decisions. Greater openness is imperative’.668 Before explaining the method that I develop and apply to shine a light on content moderation, which I hope will help to encourage Instagram and other platforms to open their systems to greater public scrutiny, I will first outline some of the basics of a black box approach.

Black box analytics is a form of reverse engineering that can shed light on how a system works, ranging from mechanical systems to processes for moderating content.669 Diakopoulos explains that ‘reverse engineering is the process of articulating the specifications of a system through a rigorous examination drawing on domain knowledge, observation, and deduction to unearth a model of how that system works’.670 Specifically, black box analytics investigates how discrete inputs into a system (the black box) produce certain outputs (the input-output relationship), as illustrated in Figure 9.671 In doing so, researchers can attempt to glean information, or obtain the ‘design blueprints’,672 of an opaque assemblage to see how some or all of its internal components work together. Black box analytics is particularly useful when only parts of a system, including one for moderating content, are visible on the outside.

665 Lessig, ‘Code Is Law’ (n 205). 666 Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (n 29) 2. 667 Ibid. 668 Ibid 4. 669 See, eg, Perel and Elkin-Koren (n 112) 181. 670 Diakopoulos, ‘Algorithmic Accountability Reporting: On the Investigation of Black Boxes’ (n 656) 13; Perel and Elkin-Koren (n 112) 185. See also Andrew Johnson-Laird, ‘Software Reverse Engineering in the Real World’ (1993) 19(3) University of Dayton Law Review 843. 671 Diakopoulos, ‘Algorithmic Accountability Reporting: On the Investigation of Black Boxes’ (n 656) 14. 672 Eldad Eilam, Reversing: Secrets of Reverse Engineering (Wiley, 2005).


Researchers can employ a range of manual and programmatic methods to probe a black box.673 In terms of manual approaches, a researcher could examine the coded rulesets that algorithms follow, interview software engineers, conduct an ethnography of coding teams or undertake an audit of internal policies.674 The effectiveness of manual methods like these, among others, can be limited when a researcher is unable to gain access to the internal workings of a platform and/or has limited programming skills.675 Ananny and Crawford make the additional point that, ‘It may be necessary to access code to hold a system accountable, but seeing code is insufficient’.676 Researchers can use programmatic methods, possibly in conjunction with manual research methods and other types of analyses, to shed light on the workings of a system. Automated tools, including those that I use in this thesis and explain in the following Part, often rely on web scraping information that is publicly available on a platform’s website.

There are a number of benefits of developing and applying black box analytics to examine content moderation in practice. The first is the potential for this type of analysis to address the lack of good data about moderation decisions, whether by humans, software or a combination thereof.677 Empirical data can enhance our limited understandings of impacts that content moderation processes can have on some aspects of users’ everyday lives, including their

673 Diakopoulos, ‘Algorithmic Accountability Reporting: On the Investigation of Black Boxes’ (n 656) 13. 674 See, eg, Kitchin (n 122) 18. 675 Ibid. 676 Ananny and Crawford (n 390) 981. 677 Suzor et al, 'What Do We Mean When We Talk about Transparency? Towards Meaningful Transparency in Commercial Content Moderation' (n 336) 1526.


freedom of expression and social interactions.678 Pasquale contends that ‘[t]ransparency should be a first step toward an intelligible society, where leading firms’ critical decisions can be understood not merely by their own engineers and mathematicians, but also by risk managers and regulators’679 and the public more broadly. Black box analytics also has the potential to enable users and other stakeholders to start to better define and set the parameters of their relationships with social media and other governing technologies.680 This can, in turn, facilitate social activism and public discussion, and generate levers for policy change.681

Another benefit of black box analytics is the role that it can play in uncovering potentially biased or discriminatory practices. As previously noted, moderation processes are incredibly complex, value-laden and deeply embedded within the socio-technical architectures of platforms.682 These processes are often created and deployed with technologically determinist mindsets along the lines of Zuckerberg’s famous statement: ‘Move fast and break things. Unless you are breaking stuff, you are not moving fast enough’.683 Black box analytics can enable stakeholders to start to examine and provide evidence about the human impacts of such rapid technological advancement.684 In general, large-scale data analysis is a useful means of attempting to uncover systematic bias, among other possible causes for concern.685 Sandvig suggests that societal actors should test governing technologies through ‘scraping audits’686 that can help to ‘ascertain whether algorithms result in harmful discrimination by class, race, gender, geography, or other important attributes’.687 Automated tools are especially important given the practical difficulties of humans auditing the vast amounts of data that platforms produce.688

678 Christian Sandvig et al, ‘An Algorithm Audit’ in Seeta Peña Gangadharan, Virginia Eubanks and Solon Barocas (eds), Data and Discrimination: Selected Essays (Open Technology Institute and New America, 2014) 7. 679 Pasquale, ‘Restoring Transparency to Automated Authority’ (n 336) 256. 680 Maayan Perel and Niva Elkin-Koren, ‘Black Box Tinkering: Beyond Transparency in Algorithmic Enforcement' (2017) 69 Florida Law Review 181. 681 Ibid 200 ff. 682 Witt, Suzor and Huggins (n 27) 575. 683 Business Insider, ‘Mark Zuckerberg, Moving Fast and Breaking things’, Business Insider (online at 15 October 2010) . 684 Kitchin (n 122) 15. 685 Wachter, Mittelstadt and Russell (n 521) 853 ff. 686 Sandvig et al, ‘Auditing Algorithms: Research Methods for Detecting Discrimination on Internet Platforms’ (n 664). 687 Sandvig et al, ‘An Algorithm Audit’ (n 678) 9. 688 Perel and Elkin-Koren (n 112) 194 ff.


Societal actors can also develop and apply black box analytics in different ways to enable stakeholders to independently monitor the exercise of non-transparent power over content. Here stakeholders become governance watchdogs of sorts.689 As Pasquale argues, ‘our markets, research, and life online are increasingly mediated by institutions that suffer serious transparency deficits. When a private entity grows important enough, it should be subject to transparency requirements that reflect its centrality’.690 Such requirements should not, in theory, be controversial given that many platform executives claim that their companies are committed to transparent governance practices. For instance, in Figure 10, Instagram co-founder and former CEO Kevin Systrom describes the platform’s broader culture of ‘transparency and openness’.691 It is especially interesting that Facebook’s headquarters in California was designed to reflect the company’s ‘emphasis on openness and transparency’,692 with open-office spaces, glass meeting rooms and an abundance of natural light. However, as

689 Diakopoulos, 'Algorithmic Accountability' (n 123) 405. 690 Pasquale, ‘Restoring Transparency to Automated Authority’ (n 336) 255. 691 @kevin (Kevin Systrom) (Instagram, 30 April 2017) . 692 Todd Frankel, ‘The Future of Work: Facebook’s Open Plan Offices’, Sydney Morning Herald (online at 1 December 2015) .


the widespread concerns around platform governance suggest, there is a glaring disconnect between the transparency rhetoric that many platforms advance and the transparency they deliver in practice.693 In the next Part, therefore, I outline a black box method that is useful for shedding greater light on some aspects of content moderation and platform governance more broadly. This method, which is the core of the black box methodology in this thesis, enables me to answer the three main research questions outlined in Chapter One.

IV AN INPUT-OUTPUT METHOD BASED ON BLACK BOX ANALYTICS

Thus far I have established that the decisions that platforms make about content are generally concealed from the public. In order to attempt to better understand how certain content is moderated in practice, I have developed and applied a black box method that facilitates empirical examination of how a platform’s moderation processes transform individual inputs into outputs. Here it is important to explain that input refers to individual pieces of content that users post to a platform, while output pertains to the outcome of content moderation: specifically, whether content is removed or not. Accordingly, throughput is ‘the heart and bones’694 of the black box between input and output, and comprises privatised systems for moderating content.695

While examining content moderation in practice is extremely difficult without access to the internal workings of a platform, the relationship between input and output can elucidate ‘what goes on within the ‘black box’’.696 The input-output relationship in the context of content moderation can return four main results: true positives, true negatives, false positives and false negatives. True negatives, images that do not appear to violate a platform’s policies and were not removed, and potential false positives, images that do not appear to violate a platform’s policies and were removed, are the most relevant results for Case Study One in Chapter Four. False negatives, images that appear to violate a platform’s policies and were not removed, and potential true positives, images that appear to violate a platform’s policies and were removed, are the most relevant results for Case Study Two in Chapter Five.
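These four categories can be restated compactly as a function of two observations per image: whether the image appears to violate Instagram’s content policies, and whether it was removed. The following Python sketch is my own summary of the definitions in the preceding paragraph.

# A compact restatement of the four input-output categories defined above,
# as a function of two observations per image: whether it appears to violate
# the platform's policies, and whether it was removed.
def classify(appears_to_violate: bool, removed: bool) -> str:
    if appears_to_violate and removed:
        return "potential true positive"    # appears to violate and was removed
    if appears_to_violate and not removed:
        return "false negative"             # appears to violate and was not removed
    if not appears_to_violate and removed:
        return "potential false positive"   # does not appear to violate but was removed
    return "true negative"                  # does not appear to violate and was not removed

print(classify(appears_to_violate=False, removed=True))   # potential false positive
print(classify(appears_to_violate=True, removed=False))   # false negative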

Like other black box investigations, this study assumes a number of causal relationships in order to draw findings from the input-output relationship. In line with Figure 9, I assume that

693 Ananny and Crawford (n 390) 975. 694 Perel and Elkin-Koren (n 112) 190. 695 Schmidt (n 505) 5-6. 696 Ibid 5; Diakopoulos, ‘Algorithmic Accountability Reporting: On the Investigation of Black Boxes’ (n 656) 14.


there are causal relationships between input and throughput (A and B) and between input and output (A and C), and therefore direct causal correlations between inputs in A and outputs in C.697 These assumptions enable me to draw some conclusions, or at least advance a narrative, about factors that might influence the regulation of content behind closed doors.698 Assumptions about the input-output relationship also facilitate evaluation of the extent to which processes for moderating content on Instagram align with my selected rule of law values of formal equality, certainty, reason-giving, transparency, participation and accountability.699 Routine empirical examination of this relationship can be a useful way to monitor the inputs and outputs of moderation processes and, in turn, to evaluate the performance of throughput over time.

Having outlined the essential components of a black box investigation, I will now start to explain how I applied this method to achieve the aims of this thesis. I extracted data for my case studies through the Australia-based Digital Observatory, which develops and maintains technical infrastructure for an ongoing program of research on the governance of online platforms at QUT’s Digital Media Research Centre.700 As part of this infrastructure, automated tools scraped the last 20 images from watched hashtags, which I will outline in Chapters Four and Five, every six hours (four times per day in total) on an ongoing basis. Approximately one month after images were programmatically collected, the availability of each image was tested again to determine whether it had been removed. This provided a sample of images, which automated tools identified as either removed or not removed, for subsequent content analysis and broader evaluation through my rule of law framework. The total sample for Case Study One comprises 120,866 images, as illustrated in Figure 11, and the total sample for Case Study Two comprises 58,753 images.
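The following Python sketch illustrates, in simplified and hypothetical form, the two-step logic of this collection process: recording recently posted images for watched hashtags at regular intervals, and re-testing each image’s availability roughly one month later to infer whether it was removed. The actual tools were developed by Professor Suzor within the Digital Observatory’s infrastructure; the function names, placeholder hashtag, simulated responses and in-memory storage below are my own assumptions rather than that code.

# A simplified, hypothetical illustration of the two-step collection logic
# described above: record the most recent images for watched hashtags every
# six hours, then re-test each image's availability roughly a month later to
# infer whether it was removed. The function names, placeholder hashtag and
# simulated responses are assumptions for illustration only.
from datetime import datetime, timedelta

WATCHED_HASHTAGS = ["examplehashtag"]        # placeholder, not the studied hashtags
COLLECTION_INTERVAL = timedelta(hours=6)     # four collection runs per day
RECHECK_DELAY = timedelta(days=30)           # availability re-tested ~1 month later

def fetch_recent_image_urls(hashtag, limit=20):
    """Placeholder for the scraping step: in the real infrastructure this
    returned the 20 most recently posted public images for the hashtag."""
    return [f"https://example.org/{hashtag}/image_{i}" for i in range(limit)]

def is_still_available(url):
    """Placeholder for the availability test: in practice, a request checking
    whether the image still resolves. Simulated here."""
    return hash(url) % 10 != 0               # pretend roughly 10% of images disappear

def collect(run_time):
    """One collection run: record each observed image and when it was seen."""
    return [{"url": url, "collected_at": run_time, "removed": None}
            for hashtag in WATCHED_HASHTAGS
            for url in fetch_recent_image_urls(hashtag)]

def recheck(observations, now):
    """Mark each image whose re-check is due as removed or not removed."""
    for record in observations:
        if record["removed"] is None and now >= record["collected_at"] + RECHECK_DELAY:
            record["removed"] = not is_still_available(record["url"])

# One simulated collection run followed by a re-check a month later.
sample = collect(datetime(2019, 1, 1))
recheck(sample, datetime(2019, 1, 1) + RECHECK_DELAY)
print(sum(r["removed"] for r in sample), "of", len(sample), "images removed")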

This black box method deviates from some reverse engineering studies in that it does not submit artificially generated content, or ‘dummy content’,701 to the Instagram platform. Dummy content copies different forms of social media, perhaps to identify ‘what is outputted under different scenarios’,702 but is nonetheless ‘created superficially for the study’s

697 Diakopoulos, ‘Algorithmic Accountability Reporting: On the Investigation of Black Boxes’ (n 656) 14. 698 Ibid 13-15. 699 Ibid. 700 It is important to note that the automated methods in this thesis were developed by Professor Nicolas Suzor. 701 Perel and Elkin-Koren (n 112) 212. 702 Kitchin (n 122) 24.


purpose’.703 In attempting to answer my research questions, I did not use artificially generated content, largely because of the wealth of publicly available images on the Instagram website. There is therefore a good argument that I did not waste any of the platform’s human or computational resources on reviewing replicated content. With that said, the automated tools that collected data for my case studies circumvented Instagram’s API, which is a notable issue that I will now explore.

A The Legality of Web Scraping

The legality of collecting publicly available data from the websites of online platforms remains unclear. Data scraping can be manual, such as when a researcher copies and pastes or takes a screenshot of content from a webpage, or automated by software, as in this thesis. Automated data collection – also known as ‘harvesting or scraping’704 – is relatively inexpensive and ‘a common method of gathering information, used by search engines, academic researchers, and many others’.705 Internet companies have substantial interests in preventing web scraping and other activities by ill-intentioned users, such as identity thieves, which can cause damage ranging from individual hardship to network crashes.706 More specifically, web scraping can

703 Perel and Elkin-Koren (n 112) 215. 704 HiQ Labs, Inc. v LinkedIn Corp., (N.D. Cal. Aug. 14, 2017) 2. 705 HiQ Labs, Inc. v LinkedIn Corp., No. 17-16783 (9th Circuit, September 9, 2019) 35. Airline ticket websites, which aggregate data from other websites, are one of the most prominent examples of web scrapers. 706 Ibid 36-37; Aaron Rubin, ‘How Website Operators Use CFAA to Combat Data Scraping’, Law 360 (online at 25 August 2014) ; Adrian Agius, ‘BIG, Bad World of DATA’, Law in Society (online) .


cause platforms to lose control over who has access to user data, for what purposes and when.707 Many platforms engage in ‘technological self-help’708 to guard against bad actors: for instance, by employing verification tools, such as Google’s CAPTCHA and reCAPTCHA technology that requires users to ‘click’ a box to verify that they are a human actor and ‘not a bot’,709 and anti-bot measures.710

It is also common for platforms to attempt to protect their networks by including ‘access’ and ‘without authorisation’ terms in contractual agreements with users. For example, Facebook’s Terms of Service states that a person ‘may not access or collect data from our Products using automated means (without our prior permission) or attempt to access data that you do not have permission to access’.711 Instagram’s Terms of Use contains a similar general prohibition: ‘You can't attempt to create accounts or access or collect information in unauthorized ways. This includes creating accounts or collecting information in an automated way without our express permission’712 – for instance, under a contractual agreement for data analytics. Some platforms, like Instagram, go one step further by prohibiting the reverse engineering of APIs, apps or other software development kits (SDKs).713 What this means is that platforms could enforce terms of service against parties that web scrape data based on explicit prohibitions against unauthorised access,714 as well as any number of general terms or reserved rights.715 In addition to a potential cause of action for breach of contract, there is a serious question whether web scraping

707 Gráinne Maedhbh Nic Lochlainn, ‘Facebook and Data Harvesting: What You Need to Know’, The Conversation (online at 4 April 2018) . 708 HiQ Labs, Inc. v LinkedIn Corp., No. 17-16783 (9th Circuit, September 9, 2019) 37. 709 Lochlainn (n 707). 710 Google, ‘Are You a Robot? Introducing “No CAPTCHA reCAPTCHA”’, Google (online at 3 December 2014) . 711 Facebook, ‘Terms of Service’ (2019) [3. Your commitments to Facebook and our community], [What you can share and do on Facebook] . 712 Instagram, ‘Terms of Use’ (n 73) [Your Commitments]; Instagram, ‘Platform Policy’, Help Centre (2019) [A. General Terms] [22], [28] . 713 Facebook, ‘Platform Policy’, Facebook for Developers (2019) [4. Encourage proper use] [12]; Instagram, ‘Platform Policy’ (n 712) [Don’t reverse engineer]. 714 Myra Din, ‘Breaching and Entering’ (2015) 18 Brooklyn Law Review 405. Platforms have aggressively pursued some potential terms of service violations, particularly around third party access and sharing of user-generated data that is outside the scope of commercial arrangements. For example, see: Pete Warden, ‘How I Got Sued by Facebook’, Pete Warden’s Blog (online at 5 April 2010) . 715 For instance, Instagram’s Terms of Use reserve ‘all rights not expressly granted to you’ (ie, users): see Instagram, ‘Terms of Use’ (n 73) [Our Agreement and What Happens if We Disagree]. Further, Facebook’s Terms of Service states that a number of ‘other terms and policies’ might apply to users, such as Facebook Ads Controls, Privacy Basics, Cookies Policy and Data Policy: see Facebook, ‘Terms of Service’ (n 711).


constitutes a computer crime under, inter alia,716 the US Computer Fraud and Abuse Act (‘CFAA’).717 This Act is particularly relevant because, as noted in Chapter Two, most private consumer contracts between platforms and their users are governed by the laws of California and US district and federal laws.

While legal analysis of computer trespass laws is beyond the scope of this thesis, suffice it to say that US case law suggests that scraping data from public websites most likely does not violate the CFAA.718 In HiQ Labs v LinkedIn,719 a ‘hugely important’720 decision handed down by the Ninth Circuit Court of Appeals in 2019, the court adopted a narrow interpretation of the CFAA by presuming a right to access information unless an authorisation system is in place (eg, username and password requirements). Berzon J, joined by Wallace J and Berg DJ, stated: ‘It is likely that when a computer network generally permits public access to its data, a user's accessing that publicly available data will not constitute access without authorization under the CFAA’. Additionally, in the 2018 decision of Sandvig v Sessions,721 a federal district court found that programmatically collecting information from public websites likely does not violate the CFAA even when a website explicitly prohibits automated access in its constitutive documents. While these cases arguably mark a positive step forward, caution is warranted for several reasons, including that web scraping could give rise to other causes of action.722

716 Web scraping, which can involve collecting, storing and potentially publishing some user-generated content, could also infringe the ownership rights that users retain over their content. For instance, under Instagram’s Terms of Use users maintain their rights over content: ‘We do not claim ownership of your content, but you grant us a license to use it. Nothing is changing about your rights in your content. We do not claim ownership of your content that you post on or through the Service. Instead, when you share, post, or upload content that is covered by intellectual property rights (like photos or videos) on or in connection with our Service, you hereby grant to us a non-exclusive, royalty-free, transferable, sub-licensable, worldwide license to host, use, distribute, modify, run, copy, publicly perform or display, translate, and create derivative works of your content (consistent with your privacy and application settings). You can end this license anytime by deleting your content or account’: see Instagram, ‘Terms of Use’ (n 73) [Permissions You Give to Us]. 717 Computer Fraud and Abuse Act, 18 USC § 1030. As Perel and Elkin-Koren note, black box analytics, including web scraping, could be an ‘unlawful intrusion into intermediaries’ computer networks’: Perel and Elkin-Koren (n 112) 213. Note that the CFAA is a criminal statute which also has civil remedies. 718 See Orin Kerr, ‘Scraping A Public Website Doesn’t Violate the CFAA, Ninth Circuit (Mostly) Holds’, reason (online at 9 September 2019) . In an Australian context, publicly available data collected via web scraping is likely not ‘restricted data’ under s 478.1 of the Criminal Code 1995 (Cth). 719 HiQ Labs, Inc. v LinkedIn Corp., No. 17-16783 (9th Circuit, September 9, 2019). 720 Kerr (n 718). 721 Christian W. Sandvig, et al. v Jefferson B. Sessions III, in his official capacity as Attorney General of the United States, No.16-1368(JDB) (D.D.C., March 30, 2018); American Civil Liberties Union, ‘Sandvig v Barr – Challenge to CFAA Prohibition on Uncovering Racial Discrimination Online’, (Web Page, 22 May 2019) . 722 Kerr (n 718).



Ongoing uncertainty about the legality of web scraping raises several public interest concerns. The first centres on the ‘presumptively open norms’ that underpin the internet: in particular, the importance of maximising, to the extent possible, the free flow of information in the online environment. As the court in HiQ Labs v LinkedIn stated:

…giving companies like LinkedIn free rein to decide, on any basis, who can collect and use data—data that the companies do not own, that they otherwise make publicly available to viewers, and that the companies themselves collect and use—risks the possible creation of information monopolies that would disserve the public interest.723

Another concern is that the risk of legal liability might stifle the ‘settled concept’ of the ‘freedom to tinker’,724 which encompasses ‘natural curiosity, inquisitiveness, and independent thought’.725 Legal uncertainty can also impede legitimate research into privatised systems of governance – for example, a university might delay or reject studies that plan to web scrape data.726 A decision like this affects not only the researchers involved, but also the public interest in finding answers to important questions, such as whether a particular website engages in discrimination or, as in this thesis, whether particular content was removed by Instagram or individual users. It can also impact the extent to which societal actors can raise public consciousness around certain issues, as part of broader questions about whose interests are served by restricted data access.727 In this context, calls for greater protections for researchers continue to grow louder and more numerous, especially given that attempts to hold platforms and other societal actors to account for their decision-making could expose an account holder to a risk of legal liability.728

Despite the uncertainty, I proceed with this study given the strong public interest in doing so. The foremost benefit lies in empirically evaluating the extent to which the moderation of images depicting women’s bodies on Instagram aligns with my rule of law framework. As explained in Chapter One, this will enable me to identify whether potential arbitrariness exists

723 HiQ Labs, Inc. v LinkedIn Corp., No. 17-16783 (9th Circuit, September 9, 2019) 36. 724 Samuelson (n 529) 563; Felten (n 529). 725 Perel and Elkin-Koren (n 112) 220. 726 See, eg, Annabel Latham, ‘Cambridge Analytica Scandal: Legitimate Researchers Using Facebook Data Could be Collateral Damage’, The Conversation (online at 21 March 2018) . Also see generally Suzor, 'Understanding Content Moderation Systems: New Methods to Understand Internet Governance at Scale, Over Time, and Across Platforms' (n 85). 727 Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (n 29) 2. 728 In light of this risk, researchers should undertake fine-grained risk assessments before developing and applying black box methods: see generally Sandvig et al, ‘Auditing Algorithms: Research Methods for Detecting Discrimination on Internet Platforms’ (n 664).


and attempt to ground competing claims about how images of female forms are moderated in practice. The method in this Chapter also has the potential to make significant inroads in developing and highlighting the importance of methods to probe the black box of content moderation and, more specifically, the study of Instagram known as ‘Instagrammatics’.729 This is particularly important given the ongoing risk in internet research that certain platforms are researched more extensively because they provide easier access to data, despite the social importance of platforms with more restrictive policies, like Instagram.730 Without quantitative measures, it is difficult to determine how content moderation systems operate at a systemic level and to hold different governing actors to account for their decision-making.

A final important reason for proceeding with this study is its potential to contribute to the development of feminist digital methods for legal analysis. Feminist digital methods constitute a nascent research area that aims to better integrate feminist theories and ethics of care with data analysis.731 Some scholars describe this area of research as an ‘intervention’732 into the supposedly disembodied, objective nature of data.733 This study will enable me to determine how a black box methodology might work in the context of the Instagram platform and, as an additional layer of complexity, in terms of a feminist analysis of different women’s experiences of content moderation in practice.734 In order to develop feminist digital methods, which are sensitive to patriarchal power structures, it is crucial for me to reflect on how I have constructed this study. I will evaluate the usefulness of my methodology in Chapter Seven.

V CONCLUSION

This Chapter has outlined what is publicly known about processes for moderating user-generated content to date. I explained that the prevailing norm is for content moderation to be ex post and reactive.735 While the work of moderating content is organisationally and geographically fractured, decisions around content are predominantly made by outsourced commercial content moderators. These professional moderators largely respond to content that

729 Highfield and Leaver, ‘Instagrammatics and Digital Methods: Studying Visual Social Media, from Selfies and GIFs to Memes and Emoji’ (n 85) 50. 730 Ibid 47. 731 See generally Koen Leurs, 'Feminist Data Studies: Using Digital Methods for Ethical, Reflexive and Situated Socio-Cultural Research' (2017) 115 Feminist Review 130; Lev Manovich, ‘Cultural Analytics, Social Computing and Digital Humanities’ in Mirko Schafer and Karin van Es (eds) The Datafied Society: Studying Culture Through Data (Amsterdam University Press, 2017) 55. 732 See generally Leurs (n 731) 130. 733 Ibid 132. 734 Ibid 130. 735 See, eg, Klonick (n 28) 1635.


users have reported as ‘inappropriate’, most likely through a combination of manual and automated methods. This led me, then, to examine the potential for automated content moderation by software. While the extent of the influence of algorithms is presently unknown, I argued that automated decision-making poses a number of risks, which societal actors mainly discuss in terms of algorithmic biases, autonomous decision-making, opacity, complexity and unpredictability. These risks, which I will expand upon in subsequent chapters, have the potential to significantly impact content moderation processes.

This Chapter has also outlined the important role that black box analytics can play in shedding light on content moderation processes. Given that the inner workings of moderation processes are relatively unknown, I outlined an input/output method based on black box analytics that examines how discrete inputs (ie, images) into moderation processes (ie, the black box) produce certain outputs (ie, whether an image is removed or not). This method ultimately facilitates evaluation of the extent to which the moderation of images depicting women’s bodies on Instagram aligns with my theoretical framework. More broadly, in concert with document and content analyses in Chapters Four and Five, this method has the potential to answer widespread calls for empirical evidence of content moderation in practice. These calls are arguably similar in substance to demands outlined in Chapter Two for constitutional safeguards to better protect users as the subjects of platform governance.736 In the following Chapter, I expand upon and apply my black box method to the first of two case studies in this thesis.

736 See generally Suzor, 'Digital Constitutionalism: Using the Rule of Law to Evaluate the Legitimacy of Governance by Platforms' (n 26) 8; Kaye, Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression (6 April 2018) (n 257).



CHAPTER FOUR: AN EVALUATION OF THE MODERATION OF IMAGES DEPICTING WOMEN’S BODIES THAT ARE NOT EXPLICITLY PROHIBITED ON INSTAGRAM

This Chapter empirically examines whether a sample of 4,944 like images depicting (a) Underweight, (b) Mid-Range and (c) Overweight women’s bodies are moderated alike on Instagram – that is, at equal rates.737 The results of this study facilitate broader evaluation of the extent to which the platform’s moderation processes align with my rule of law framework. This Chapter proceeds in four parts.

In Part I, I explain that this is a topical case study given widespread concerns that some images depicting female forms are moderated in potentially arbitrary ways, as well as the relative paucity of empirical research into Instagram’s moderation systems to date.738 Part II then turns to build upon the black box methodology from Chapter Three.739 I expound my input/output method for examining how discrete inputs (ie, images that are not explicitly prohibited) into the throughput of Instagram’s moderation systems (ie, the black box) produce certain outputs (ie, whether an image is removed).740 After programmatically collecting images from 14 watched hashtags, some of which appear to specifically relate to female forms while others are more general, I use content analysis to determine whether like images of (a) Underweight, (b) Mid-Range and (c) Overweight female forms were removed or not. I use this coding scheme to investigate true negatives (images that do not appear to violate Instagram’s policies and were not removed) and potential false positives (images that do not appear to violate Instagram’s policies and were removed) in each category. I outline the methodological limitations and ethical challenges raised by this case study in Part II(C).

Next, in Part III, I detail the results and findings from this case study. In contrast to the Anglo-American ideal of the rule of law, the results highlight an inconsistent trend of moderation across all thematic categories. I identify two main findings. The first is that the odds of removal for an

737 As explained in Chapter One, the subjects of these images might not, in fact, identify as a woman or female. It is a limitation of the scope and method of this thesis that, by analysing decontextualized images against a binary classification of gender, I unfortunately am unable to sufficiently engage with the pressing concerns of transgender and non-binary people. 738 See, eg, Electronic Frontier Foundation and Visualizing Impact, ‘A Resource Kit for Journalists’ (n 17) [The Human Body]; Highfield and Leaver, ‘Instagrammatics and Digital Methods: Studying Visual Social Media, from Selfies and GIFs to Memes and Emoji’ (n 85) 47. 739 See, eg, Perel and Elkin-Koren (n 112) 181; Diakopoulos, ‘Algorithmic Accountability’ (n 123) 404. 740 See generally Diakopoulos, ‘Algorithmic Accountability Reporting: On the Investigation of Black Boxes’ (n 656) 14.


image that depicts an Underweight and Mid-Range woman’s body is 2.48 and 1.59 times higher, respectively, than for an image that depicts an Overweight woman’s body. Second, across these categories, I find that up to 22 per cent of images that were removed by Instagram or by the user do not breach the platform’s policies, and are therefore potential false positives.

In Part III(A), I explore possible explanations for the overall trend of inconsistent moderation. Generally, content takedowns appear to be the result of removal by individual users themselves or direct regulatory intervention by the platform. Users may choose to remove their content for any number of diverse reasons, some of which might relate to cultural norms of use on a platform. In terms of direct intervention by Instagram, content removal could be the result of undisclosed platform practices, user reporting features and the development and deployment of automated systems, all within the value-laden architecture of the platform. Any or all of these reasons, and other contributing factors, could explain content removal given that Instagram conceals its regulatory system from public scrutiny.

I turn in Part III(B) to evaluate the results and findings from a rule of law perspective. I argue that there appears to be a lack of formal equality in Instagram’s moderation processes, given that some like images were not moderated alike. There is also a lack of certainty, as it is difficult for users to identify the specific rules that apply to their content in practice. I also identify deficiencies in Instagram’s reason-giving practices for individual content takedowns and little to no options for users to participate in the governance practices that affect them. These deficiencies are exacerbated by the inadequacy of existing transparency reports and the lack of accountability mechanisms to hold decision-makers to account.741 Finally, in Part III(C), I explain that issues from a rule of law perspective are woven into broader concerns around the normative underpinnings of Instagram’s regulatory system.

This Chapter concludes that there is support for concerns that some images depicting women’s bodies on Instagram are not moderated in ways that align with the Anglo-American ideal of the rule of law. I contend that the lack of formal equality, certainty, reason-giving and user participation, and Instagram’s largely unfettered power to moderate content with limited transparency and accountability, are significant normative concerns which pose an ongoing

741 See, eg, Crocker et al (n 40).


risk of arbitrariness for women and users more broadly.742 I continue to investigate this risk in Chapter Five.

I A HYPOTHESIS FOR EMPIRICAL INVESTIGATION

It is useful to recap the impetus for evaluating in this thesis whether images depicting women’s bodies on Instagram are moderated in a way that aligns with my rule of law framework. As noted in Chapter Two, a number of societal actors advance different and sometimes conflicting claims about the ways images of female forms are moderated in practice. There are, for instance, persistent allegations that Instagram imposes double standards on women by moderating like images of female forms in different and potentially arbitrary ways.743 Some users and publications in news media have accused the platform of ‘blatant fat-phobia’,744 ‘fat-shaming’745 female users and being ‘trigger-happy with removing images of plus-sized women’.746 Others suggest that the platform’s moderation processes – also described as a form of ‘censorship’747 and ‘policing’,748 among other terms – are patriarchal, misogynistic and gendered.749 By contrast, some news publications show that thin-idealised images of women are also removed from Instagram.750 The platform is purportedly democratising body standards and empowering users to post images of all body types.751 Competing narratives around empowerment and censorship – partly fuelled by Instagram’s logic of secrecy, reports of selective policy enforcement and a lack of reason-giving practices – highlight the importance of empirically examining whether like images of women’s bodies are moderated alike on Instagram. Given that there will always be controversies over particular instances of

742 As also noted in Chapter One, marginalised individuals and groups generally face a higher risk of arbitrariness when participating in online platforms: see, eg, Duguay, Burgess and Suzor (n 94) 1. While I acknowledge this ongoing risk, empirically examining the moderation of content posted by minority users is outside the scope of this thesis. 743 According to onlinecensorship.org, ‘Instagram’s guidelines largely match Facebook’s, but in practice, Instagram frequently takes down content that seemingly complies with its guidelines. For example, while images showing women in bikinis or underwear are not explicitly banned, Instagram has in some cases removed such content, particularly when the individual in the image is fat.’: see Electronic Frontier Foundation and Visualizing Impact, ‘A Resource Kit for Journalists’ (n 17) [The Human Body]. 744 See, eg, Olya (n 14). 745 See, eg, Brown (n 15). 746 See, eg, West, ‘Facebook’s Guide to Being a Lady’ (n 402). 747 See, eg, see Electronic Frontier Foundation and Visualizing Impact, ‘A Resource Kit for Journalists’ (n 17) [The Human Body]. 748 Kashmira Gander, ‘Body Hair and Sexuality: The Banned Photos Instagram Doesn't Want You to See’, Independent (online at 9 March 2017) . 749 Swetambara Chaudhary, ‘This Photo Was Removed By Instagram. The Owner Writes a Powerful Open Letter in Response’, Scoop Whoop (online at 26 March 2015) . 750 Cambridge (n 23). 751 Salam (n 20).


moderation, empirical analyses are useful for demystifying how user-generated content is moderated in practice, identifying potential arbitrariness where it exists, and allaying users’ concerns where it does not.

In this case study, I hypothesise that images of Underweight women’s bodies are removed at a different rate to depictions of Overweight women’s bodies (H1). In statistical terms, this means that there is an association between female body type and whether an image is removed. The null hypothesis for this case study is that images depicting Underweight women’s bodies are not removed at a different rate to depictions of Overweight women’s bodies – that is, there is no association between female body type and whether or not an image is removed (H0). Hypothesis testing focuses on the Underweight and Overweight categories for reasons discussed in Part II(A). The method in the following Part opens one avenue for empirically testing H1 and has the potential to help stakeholders better understand how some images of female forms are moderated on Instagram in practice.

II METHOD

I develop and apply an input/output method based on black box analytics, which investigates how discrete inputs into the throughput of Instagram’s moderation processes (ie, the black box) produce certain outputs.752 As previously noted, input in this thesis refers to individual images that users posted to watched hashtags while output pertains to whether each image was removed or not removed (ie, the outcome of moderation, as depicted in Figure 9). Images were programmatically collected from watched hashtags, all of which I purposefully selected. Unlike platforms such as Twitter and YouTube, Instagram does not provide a public API to build a random sample of posts and ‘is one of the more difficult platforms to monitor’.753 Researchers generally have to either pay for access or obtain express permission from the platform to collect, store and use the platform’s data. Moreover, in the context of examining content moderation over time and at scale, Suzor explains:

Because hashtags are practically unlimited in number but vary greatly in popularity, a random sample is not particularly useful for our purposes. More importantly, the hashtag-based sample

752 Diakopoulos, ‘Algorithmic Accountability Reporting: On the Investigation of Black Boxes’ (n 656) 14. 753 Suzor, 'Understanding Content Moderation Systems: New Methods to Understand Internet Governance at Scale, Over Time, and Across Platforms' (n 85) 10.



provides a method to identify patterns in the types of hashtags that are blocked and the methods that Instagram uses to enforce those blocks.754

Monitoring hashtags through research infrastructure at QUT’s Digital Media Observatory, as in this thesis, is one of the most comprehensive and methodologically advanced ways of studying content moderation on Instagram to date.755
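
To make this input/output logic concrete, the following sketch illustrates, in Python, the kind of bookkeeping that hashtag monitoring involves. It is not the Digital Media Observatory’s actual tooling: the fetch_hashtag_posts and is_still_public helpers are hypothetical stand-ins for whatever collection and re-checking mechanisms are used in practice, and the sketch simply records each observed post (the input) and later labels it as removed or not removed (the output).

    # A minimal sketch of input/output bookkeeping for hashtag monitoring.
    # fetch_hashtag_posts() and is_still_public() are hypothetical stand-ins for
    # the actual (undisclosed) collection and re-checking tooling.
    import datetime
    import sqlite3

    WATCHED_HASHTAGS = ["girl", "curvy", "skinny", "thick", "thin"]  # subset, for illustration

    def fetch_hashtag_posts(hashtag):
        """Hypothetical: return [(post_id, image_url), ...] currently visible under a hashtag."""
        raise NotImplementedError

    def is_still_public(post_id):
        """Hypothetical: return True if the post is still publicly available."""
        raise NotImplementedError

    def setup(db):
        db.execute(
            "CREATE TABLE IF NOT EXISTS posts (post_id TEXT PRIMARY KEY, hashtag TEXT, "
            "image_url TEXT, first_seen TEXT, removed INTEGER)"
        )
        db.commit()

    def collect(db):
        """Record every post observed under each watched hashtag (the inputs)."""
        now = datetime.datetime.now(datetime.timezone.utc).isoformat()
        for tag in WATCHED_HASHTAGS:
            for post_id, image_url in fetch_hashtag_posts(tag):
                db.execute(
                    "INSERT OR IGNORE INTO posts VALUES (?, ?, ?, ?, NULL)",
                    (post_id, tag, image_url, now),
                )
        db.commit()

    def recheck(db):
        """Revisit each recorded post and label the output: removed (1) or not removed (0)."""
        pending = db.execute("SELECT post_id FROM posts WHERE removed IS NULL").fetchall()
        for (post_id,) in pending:
            db.execute(
                "UPDATE posts SET removed = ? WHERE post_id = ?",
                (0 if is_still_public(post_id) else 1, post_id),
            )
        db.commit()

    if __name__ == "__main__":
        db = sqlite3.connect("moderation_watch.db")  # hypothetical local store
        setup(db)
        collect(db)   # run on an ongoing basis
        recheck(db)   # run again after a delay to observe removals

As the surrounding text makes clear, an output of ‘removed’ in a design like this records only that a post is no longer publicly available; it cannot distinguish removal by Instagram from removal by the user.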

The watched hashtags in this case study are #breastfeeding, #curvy, #effyourbeautystandards, #fatgirl, #fitboy, #fitgirl, #girl, #lesbian, #lgbt, #postpartum, #skinny, #stretchmarks, #thick and #thin. I selected these hashtags for a number of different reasons. First, news outlets have mentioned #curvy, #effyourbeautystandards, #skinny, #thick and #thin in the context of women’s bodies on Instagram.756 Second, I selected hashtags #postpartum and #stretchmarks, among others, because some users claim that images depicting female stretch marks and postpartum bodies are arbitrarily moderated.757 Third, given that some watched hashtags appear to have less user activity and therefore fewer new posts to monitor over time, I included several ‘top hashtags’758 with a high volume of content to attempt to increase the chance of identifying removed content. By programmatically collecting images from popular hashtags, such as #girl that has over 380 million images and videos at the time of writing, it was also possible to capture a more diverse range of images depicting women’s bodies. Finally, I selected #breastfeeding, #fitboy, #lesbian and #lgbt as pilot testing for this case study showed that some users were posting images of female forms to these hashtags. It was important to do so because hashtags do not necessarily describe the subject matter that users post to those tags.759 Users employ hashtags for a wide range of reasons, including to reach a larger audience than their followers or increase the visibility of their posts.760 After finalising my selected hashtags, automated tools started collecting images on an ongoing basis as explained in Chapter Three.

754 Suzor, 'Understanding Content Moderation Systems: New Methods to Understand Internet Governance at Scale, Over Time, and Across Platforms' (n 85) 10. 755 Ibid. 756 See, eg, Salam (n 20); Mary Emily O’Hara, ‘Why is Instagram Censoring So Many Hashtags Used by Women and LGBT People?’ The Daily Dot (online at 12 May 2016) . 757 See, eg, Electronic Frontier Foundation and Visualizing Impact, ‘A Resource Kit for Journalists’ (n 17) [The Human Body]. 758 Oberlo, ‘The Ultimate Guide to the Best Instagram Hashtags for Likes,’ Oberlo (online at 5 May 2018) . 759 Highfield and Leaver, ‘A Methodology for Mapping Instagram Hashtags’ (n 74) [Hashtags]; see generally Adam Mathes, ‘Folksonomies - Cooperative Classification and Communication Through Shared Metadata’ (Research Paper LIS590CMC, Computer Mediated Communication, Graduate School of Library and Information Science, University of Illinois Urbana-Champaign, 2004). 760 Tamara Small, 'What the Hashtag?' (2011) 14(6) Information, Community and Society 872, 873-874.



The total dataset from watched hashtags comprises 120,866 images, including 23,943 images that were removed and 96,923 images that were not removed, with a probability of removal of 19.8 percent as illustrated in Figure 11. As previously mentioned, Instagram does not provide a specific reason for why a post is no longer available. This means that programmatic tools cannot identify whether images were removed by Instagram or a user. The lack of transparency around Instagram’s regulatory decision-making starkly contrasts with YouTube, for example, which provides detailed reasons to visitors when a video is no longer available.761 YouTube also notes whether the video was removed by the user or by the platform, and sometimes includes more specific information to explain what policy the video was found to violate, or what form of legal complaint the platform received about the video.762 Despite being unable to identify whether Instagram or a user removed content, I proceeded with this study on Instagram in order to evaluate how a black box methodology might work with publicly available data, and to particularise the extra information that researchers may need in the future.

A Coding Scheme for Women’s Bodies

I used programmatic tools to generate a sample of 9,582 images (from the total dataset) for the purposes of manual coding, as part of content analysis more broadly.763 The dataset for manual

761 Feerst (n 395). 762 Crocker et al (n 40). 763 Peter Cane and Herbert Kritzer, The Oxford Handbook of Empirical Legal Research (Oxford University Press, 2010) 941.


coding comprises two subsets in Figure 12: 4,759 images that were removed and 4,823 images that were not removed (an almost 50/50 split). The dataset for manual coding was constructed in this way to ensure that the coded sample contained enough removed images for analysis. During manual coding, I classified images as depicting either Underweight, Mid-Range or Overweight women’s bodies. This distinction enabled examination of whether these categories of female body types were in fact moderated in different ways on Instagram. In order to determine whether images depict Underweight, Mid-Range or Overweight women’s bodies, I referred to the Photographic Figure Rating Scale (‘PFRS’) in Figure 15, which is a measure of the naturally occurring morphology of women.764 The PFRS comprises ten photographs of women’s bodies that vary in Body Mass Index (BMI = body weight in kilograms divided by height in metres squared) on a ten-point scale. Images in the PFRS represent five different BMI categories: emaciated (images 1 and 2), underweight (images 3 and 4), average (images 5 and 6), overweight (images 7 and 8) and obese (images 9 and 10).765 I chose to develop three categories, which use the PFRS as a guiding framework, because discussion of potential censorship of women’s bodies in news outlets largely refers to women as either curvy or skinny (the skinny/curvy dichotomy).766 The categories for manual coding in this Chapter are:767

a) Underweight: the woman’s body appears to match images 1, 2, 3, 4 or 5 of the PFRS;768
b) Mid-Range: the woman’s body appears to match images 6 or 7 of the PFRS; and

764 It should be noted that the PFRS, which researchers have principally used in studies of body dissatisfaction, arguably offers a more realistic depiction of female forms than traditional contour figure (line drawn) rating scales: Viren Swami et al, ‘Initial Examination of the Validity and Reliability of the Female Photographic Figure Rating Scale for Body Image Assessment’ (2008) 44 Personality and Individual Differences 1752, 1754. 765 The ‘current-ideal discrepancy’ should be noted – that is, women often perceive their body type differently to their actual body type. I acknowledge that there may be discrepancy between how the subject/s of each image in the coded dataset may classify their body type and my coding of each image. See generally Viren Swami et al, ‘Further Investigation of the Validity and Reliability of the Photographic Figure Rating Scale for Body Image Assessment’ (2012) 94(4) Journal of Personality Assessment 404. 766 Carrotte, Prichard and Lim (n 431) 2; See, eg, Sara Murphy, ‘Innocent Photo of “One Skinny Woman and One Curvy Woman” Stirs Controversy,’ Yahoo (online at 16 March 2017) . 767 As previously noted, the subjects of coded images might not, in fact, identify as a woman or female. See (n 737) and Part III of Chapter One. 768 It should be noted that I do not believe that the coding scheme for this study resulted in Underweight images dominating the coded sample. While the Underweight category incorporates images 1–5 of the PFRS, less than (<) 20 images that depict women’s bodies that matched images 1–2 of the PFRS in the coded sample, which suggests that the coding scheme was not overly weighted towards Underweight depictions.



c) Overweight: the woman’s body appears to match images 8, 9 or 10 of the PFRS.769

In order to streamline the process of classifying images, I developed a coding procedure, key aspects of which I will now outline. First, I chose to exclude images that were explicitly prohibited under Instagram’s Terms of Use and Community Guidelines, such as pornographic content. Expressly prohibited images of women’s bodies, and some men’s bodies, are the subject of the following Chapter (Case Study Two). I also excluded close-ups of women’s faces and depictions of two or more women with different body types. Second, given that there is a high degree of subjectivity in classifying female body types, I classified images that did not clearly fall within the Underweight or Overweight categories as Mid-Range. It is important to note that, while there is a greater degree of subjectivity around the Mid-Range category, every image in this study depicts the female form in some way that is not ostensibly prohibited. We should therefore expect images in the coded dataset, which comprises 4,944 images, to be moderated alike.
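
As a simple illustration of the coding rules above, the mapping from a PFRS figure number to the three categories can be expressed as follows. The PFRS rating itself remains a manual, subjective judgment by the coder, and excluded images (explicitly prohibited content, close-ups of faces and depictions of multiple body types) never reach this step.

    # A sketch of the coding scheme: PFRS figure numbers (1-10) mapped to the three
    # categories used in this case study. The rating is assigned manually by the coder.
    def classify_pfrs(rating: int) -> str:
        if not 1 <= rating <= 10:
            raise ValueError("PFRS figure numbers run from 1 to 10")
        if rating <= 5:
            return "Underweight"   # PFRS images 1-5
        if rating <= 7:
            return "Mid-Range"     # PFRS images 6-7
        return "Overweight"        # PFRS images 8-10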

769 Swami et al, ‘Further Investigation of the Validity and Reliability of the Photographic Figure Rating Scale for Body Image Assessment’ (n 765).



Yet the coding of images into like thematic categories calls into question whether images in each category are in fact alike. While there is some variation in the attributes of images, especially in terms of quality, it is important to note that they are predominantly portraits or selfies (self-portrait photographs) in which most of a woman’s body is visible. In particular, images in each category generally align with any or all of the following themes: (1) intimate apparel or swimwear, (2) fitness and active lifestyles, (3) runway and/or everyday fashion and (4) travel and/or landscapes, as illustrated in Figure 14. Moreover, qualitative analysis of the coded sample suggests that no one thematic category contains more sexualised, or sexually suggestive, content than another. Images in each category therefore generally depict analogous subject matter related to women’s bodies.

I then undertook inter-rater reliability testing to assess the coding procedure.770 A power analysis with 80 percent statistical power at the 5 percent significance level determined that 199 images needed to be inter-rated to detect more than a 20 percent difference in the coding between me (Rater 1) and a volunteer (Rater 2). Then, in order to optimise confidence in the result, Rater 2 independently coded a sample of 410 images (Underweight = 190, Mid-Range

770 This was particularly important given that best practices for content analysis suggest that content is coded by an external team rather than internally by the researcher/s: see Klaus Krippendorff, Content Analysis: An Introduction to its Methodology (SAGE, 3rd ed, 2013); Kimberly Neuendorf, The Content Analysis Guidebook (Sage Publications, 2002). Images in this thesis were not coded by an external team due to resource constraints.



= 138 and Overweight = 82) based on the coding scheme that I developed. I then used Cohen’s kappa, a statistic with a maximum value of one where >0.5 generally demonstrates moderate inter-rater reliability, >0.7 good agreement and >0.8 excellent agreement, to test differences in coding between Rater 1 and Rater 2.771 The coding agreement between Rater 1 and Rater 2 is excellent, with a kappa of 0.83.
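
For readers who wish to reproduce this kind of check outside SPSS, inter-rater agreement can be computed with standard statistical libraries. The sketch below uses scikit-learn’s implementation of Cohen’s kappa on two short, illustrative coding vectors rather than the actual coded data.

    # A minimal sketch of inter-rater reliability testing with Cohen's kappa.
    # The two label lists are illustrative placeholders, not the thesis's actual codes.
    from sklearn.metrics import cohen_kappa_score

    rater_1 = ["Underweight", "Underweight", "Mid-Range", "Overweight", "Mid-Range"]
    rater_2 = ["Underweight", "Mid-Range", "Mid-Range", "Overweight", "Mid-Range"]

    kappa = cohen_kappa_score(rater_1, rater_2)
    print(f"Cohen's kappa: {kappa:.2f}")  # >0.8 is generally treated as excellent agreement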

The final coded sample, which I manually filtered from the sample of 9,582 images, contains 4,944 images of women’s bodies. Specifically, the coded sample comprises 3,879 images that depict Underweight women’s bodies, 524 for Mid-Range and 541 for Overweight, none of which Instagram’s content policies explicitly prohibit. The coded sample predominantly comprises depictions of Underweight women’s bodies despite deliberate hashtag-based selection. The prevalence of Underweight depictions could be due to the current thin and toned beauty ideal, which is dominant in Western societies.772 Once I finalised the coded sample, I was able to calculate the probability or risk of removal for each category, which I outline in Part B below.

Finally, I undertook some basic quantitative analysis to ensure that the results from the coded dataset were significant in light of the deliberate sampling strategy. I used IBM SPSS Statistics to perform a chi-square hypothesis test for statistical independence between the categorical variables of female body type (Underweight, Mid-Range and Overweight) and content removal (Removed or Not Removed).773 The chi-square test (at a 95 per cent confidence level, with a significance level of 0.05) indicated a statistically significant association between female body type and content removal, χ²(2, n = 4,944) = 106.016, p < .001, phi = .146. I therefore conclude that the results in Part III were not due to random chance.
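
The same test of independence can also be run with open-source tools. The sketch below applies SciPy’s chi-square test to an illustrative 3 x 2 contingency table of body-type category by removal outcome; the cell counts are placeholders rather than the thesis’s actual coded figures.

    # A minimal sketch of the chi-square test of independence between female body type
    # and content removal. The contingency table is illustrative only.
    import numpy as np
    from scipy.stats import chi2_contingency

    # Rows: Underweight, Mid-Range, Overweight; columns: Removed, Not Removed.
    table = np.array([
        [2100, 1779],   # hypothetical counts
        [250, 274],
        [237, 304],
    ])

    chi2, p, dof, expected = chi2_contingency(table)
    n = table.sum()
    phi = np.sqrt(chi2 / n)  # effect size; equals Cramer's V where the outcome has two levels

    print(f"chi2({dof}, n = {n}) = {chi2:.3f}, p = {p:.3g}, phi = {phi:.3f}")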

B Extrapolating Findings

After coding the images, which I also classified as either removed or not removed, I calculated the probability of removal for each category by extrapolating the coded sample to a general population (or the total dataset). As illustrated in Figure 11, the probability of removal in the total dataset is around 19.8 per cent, which means that we would expect most images,

771 See generally Mary McHugh, ‘Interrater Reliability: The Kappa Statistic’ (2012) 22(3) Biochemica Medica 276. 772 Marika Tiggemann and Mia Zaccardo, ‘“Strong is the New Skinny”: A Content Analysis of #fitspiration Images on Instagram’ (2016) Journal of Health Psychology 1, 3. 773 Julie Pallant, SPSS Survival Manual (Allen & Unwin, 6th ed, 2016) 216.


or posts, to remain available on the Instagram platform. Initial results from the coded sample, which I manually filtered from 9,582 images,774 exaggerated the probability of removal (ie, 2,587 images (52.3 per cent) were removed and 2,357 images (47.7 per cent) were not removed). It was therefore necessary to ensure that the results in this Chapter were relatively consistent with the overall 19.8 per cent (expected) rate of removal. I extrapolated the sample by: (1) calculating, for removed and not removed content respectively, the ratio of the total dataset to the coded subset (Removed: 23,943/4,759 = 5.031; Not Removed: 96,923/4,823 = 20.095); and (2) multiplying the removed and not removed images in each category by 5.031 and 20.095, respectively.775 Table 1 illustrates the probability of removal for each category (extrapolated to a general population), where the removed images are potential false positives and images that were not removed are potential true negatives.
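
Expressed as a calculation, the extrapolation scales each coded count by the ratio of the total dataset to the coded subset for its outcome class and then recomputes the probability of removal from the scaled counts. The sketch below uses the scaling factors reported above together with hypothetical per-category counts for illustration.

    # A minimal sketch of the extrapolation step. The scaling factors come from the
    # figures reported in the text; the per-category counts are hypothetical.
    REMOVED_SCALE = 23943 / 4759          # approximately 5.031
    NOT_REMOVED_SCALE = 96923 / 4823      # approximately 20.095

    def extrapolated_removal_probability(removed_coded, not_removed_coded):
        """Scale coded counts to the total dataset and return the probability of removal."""
        removed = removed_coded * REMOVED_SCALE
        not_removed = not_removed_coded * NOT_REMOVED_SCALE
        return removed / (removed + not_removed)

    # Hypothetical coded counts per category: (removed, not removed).
    hypothetical_counts = {
        "Underweight": (2100, 1779),
        "Mid-Range": (250, 274),
        "Overweight": (237, 304),
    }
    for category, (r, nr) in hypothetical_counts.items():
        print(category, round(extrapolated_removal_probability(r, nr), 3))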

C Methodological Limitations and Ethical Challenges

The findings discussed in the following Part should be interpreted with some limitations in mind. The first is that the coded dataset, which is small in terms of big data, is not naturally occurring. Images were programmatically collected from public hashtags, which I deliberately selected, at two discrete points in time. This means that this study comprises only a small portion of publicly available images depicting women’s bodies and no images from private accounts. Second, by focusing on watched hashtags, I am unable to investigate images that users did not post to hashtags. Users might deliberately omit hashtags in order to circumvent moderation and keep their content to a more curated and, perhaps, more sympathetic audience. Third, automated methods cannot determine whether Instagram or a user was responsible for removing content without knowledge of the platform’s internal workings.776 The result is that I cannot provide data for, or comment on, the proportion of content that was removed by Instagram compared to content that was removed by users. I also cannot identify the precise reason why individual images were removed. This is important because Instagram can remove

774 Note this dataset has a roughly 50/50 split of removed and not removed content. 775 Note that the total dataset from watched hashtags comprises 120,866 images (23,943 were removed and 96,923 were not removed). As noted above, we generated a sample of 9,582 images (4,759 were removed and 4,823 were not removed) for the purposes of manual coding from the total dataset. I manually filtered the sample for manual coding into our final, coded sample of 4,944 images of women’s bodies (2,587 were removed and 2,357 were not removed where 3,879 are Underweight, 524 are Mid-Range and 541 are Overweight). 776 As Ananny and Crawford argue, ‘…[s]eeing inside a system does not necessarily mean understanding its behaviour or origins’. It is important to attempt to understand black box data in light of a broader system’s design and social contexts, and in a way that is self-reflexive – a general approach that I adopt throughout this thesis: see Ananny and Crawford (n 390) 980.


images for a number of reasons that do not directly relate to the depiction of female forms, such as copyright infringement,777 and users can elect to remove content themselves.

Fourth, in attempting to extrapolate the coded sample to a general population, it is possible that the empirical results exaggerate the amount of content that was removed from watched hashtags. Additionally, given the rapid pace at which online social spaces are evolving, there is a risk that automated tools interacted with different moderation systems across the two points of data collection.778 As Kitchin explains, reverse engineering approaches can be limited to ‘fuzzy glimpses’779 of how a system works in practice, especially given that researchers can only advance a narrative about what happens within a black box. These limitations mean that this case study, like the next, cannot be used to make generalised findings about all images of women’s bodies across all hashtags on Instagram. The data is, however, very useful for providing some insight into how images that depict women’s bodies are moderated on Instagram in practice. This case study also makes significant inroads in developing and highlighting the importance of methods to probe the black box of content moderation across platforms.

The process of developing and applying this methodology to images depicting women’s bodies raises several ethical concerns. The first is that I analyse decontextualised images against a binary classification of gender when the subjects of images may not, in fact, identify as male or female.780 While it is not my intention to ascribe meaning to those images beyond what was practically necessary to undertake content analysis, I acknowledge that my study does not reflect the broad spectrum of gender and sexualities in contemporary Western societies.781 Second, it is outside the scope of this exploratory study to code and examine images based on intersectional factors, like gender, age and sexuality.782 By coding images into seemingly like categories of content, I risk overlooking how intersectional factors might impact the outcomes

777 Digital Millennium Copyright Act of 1998 17 USC (2000) §512; Copyright Act 1968 (Cth) s 116AG(1). 778 Kitchin (n 122) 21. 779 Ibid 24. 780 Gender binarism is an important issue for adults and children. In September 2019, for example, a group of mothers criticised Instagram for removing (potentially through automated systems) topless images of their longhaired sons that were seemingly presumed to be girls. This story underlines the potential harms of imposing a binary label on the subjects of images, something that I am very alert to: see, eg, Lauren Strapagiel, ‘These Moms are Mad that Instagram keeps Deleting Topless Photos of Their Long-Haired Sons’, BuzzFeed News (online at 24 September 2019) . 781 See, eg, Marwick, 'None of This Is New (Media): Feminisms in the Social Media Age' (n 119) 323; Rogan (n 221) 80 ff. 782 See generally Kimberlé Crenshaw, ‘Mapping the Margins: Intersectionality, Identity Politics, and Violence Against Women of Colour (1990) 43 Stanford Law Review 1241.


of content moderation,783 as well as the important role that differential treatment can play in addressing systemic inequalities. As I will discuss in Chapter Seven, addressing this lack of intersectionality is an important topic for future research. A final ethical concern is that by examining the moderation of content posted to watched hashtags, around which some marginalised or countercultural groups might cluster, I could inadvertently make some hidden spheres public.784 These groups might hide ‘counterhegemonic ideas and strategies in order to survive or avoid sanctions, while internally producing lively debate and planning’.785 I have attempted to address this concern by collecting publicly available data and anonymising coded images (eg, by de-identifying usernames).

III RESULTS AND DISCUSSION

Overall, I identify a trend of inconsistent moderation across like thematic categories of images depicting women’s bodies – that is, some instances of content moderation contradict the platform’s rules around content and some like content was not moderated alike. The probability of removal for images that depict Underweight women’s bodies is 24.1 per cent, which exceeds the 19.8 per cent risk of removal for the total dataset, followed by 16.9 per cent for Mid-Range and 11.4 per cent for Overweight. Across these categories, I find that up to 22 per cent of images that were removed by Instagram or by the user are potential false positives, as depicted in Table 1. This trend supports the hypothesis (H1) that images depicting Underweight women’s bodies are moderated at a different rate to depictions of Overweight women’s bodies on Instagram.

I performed logistic regression in IBM SPSS Statistics to assess the odds of removal for images that depict Underweight, Mid-Range and Overweight women’s bodies, respectively. The Overweight category is the reference (base) group in this calculation to make the odds ratios easier to interpret. As shown in Table 2, the odds of removal for an image that depicts an Underweight woman’s body is 2.48 times higher than for an Overweight woman’s body. Further, the odds of removal for an image that depicts a Mid-Range woman’s body is 1.59

783 Analysing the concerns of non-binary and transgender people requires an intersectional evaluative framework and methodology. Intersectionality is a prism for understanding how different aspects of oppression can occur at the intersections of, inter alia, gender, race, class and sexuality. An intersectional, multidimensional approach can be beneficial as single-axis analysis, such as one primarily alert to concerns of gender, can overlook the multi-dimensionality of many women’s experiences of inequality: See generally Kimberle Crenshaw, 'Demarginalizing the Intersection of Race and Sex: A Black Feminist Critique of Antidiscrimination Doctrine, Feminist Theory, and Antiracist Politics ' in Katherine Bartlett and Rosanne Kennedy (eds), Feminist Legal Theory: Readings In Law And Gender (Routledge, 2018) 57 ff. 784 Ananny and Crawford (n 390) 978. 785 Ibid.


times higher than for an Overweight woman’s body. These odds of removal are consistent with the probabilities of removal across categories.
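
The odds ratios in Table 2 were produced in SPSS; an equivalent calculation can be sketched with open-source tooling, as below, which fits a logistic model of removal on body-type category with the Overweight category as the reference group. The per-category counts it expands are hypothetical placeholders rather than the actual coded data.

    # A minimal sketch of the odds-ratio calculation via logistic regression.
    # The expanded counts are illustrative placeholders, not the thesis's coded data.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical per-category counts: (removed, not removed), expanded to one row per image.
    counts = {"Underweight": (2100, 1779), "Mid-Range": (250, 274), "Overweight": (237, 304)}
    rows = []
    for category, (removed, not_removed) in counts.items():
        rows += [{"category": category, "removed": 1}] * removed
        rows += [{"category": category, "removed": 0}] * not_removed
    data = pd.DataFrame(rows)

    # Treatment coding with Overweight as the reference (base) group.
    model = smf.logit("removed ~ C(category, Treatment(reference='Overweight'))", data=data).fit()
    odds_ratios = np.exp(model.params)
    print(odds_ratios)  # values above 1 indicate higher odds of removal than the reference group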

Overall, the empirical results suggest that there is support for concerns that some content is not moderated in ways that align with the Anglo-American ideal of the rule of law on Instagram. In the following Part, I outline several possible explanations for the content removal that I observed, which fall into the broad categories of content removal by individual users themselves or direct regulatory intervention by the platform. Specifically, direct intervention could be influenced by undisclosed platform policies, the role of users and non-users as volunteer regulators and the development and deployment of automated systems, all within Instagram’s value-laden architecture. Any or all of these possible reasons could explain the results in this case study, given that the platform conceals its regulatory system from public scrutiny and provides very little information about the ways different users interact with each other.786 Below, I expound each potential explanation in order of most to least likely.

A Possible Explanations for Content Removal

The first possible explanation for inconsistencies is the removal of content by individual users themselves. While users may choose to remove content for any number of diverse reasons, the most likely explanation could be that some users might simply want to curate their profile by taking down or archiving certain posts.787 Here it is important to note that curation, which is a form of expression, is distinct from self-censorship, which describes an individual user’s decision to refrain from expressing themselves altogether. A deliberate curatorial strategy could involve a user removing content that does not align with their preferred self-image, perhaps as part of a ‘rebranding’ strategy,788 or generally reducing the total number of posts on their profile. One common trend on Instagram, for instance, is for users to display their top nine, or 18 posts,789 which requires fairly constant curatorial work. Curatorial strategies can of course vary in nature and frequency. It is also possible that some users may have taken or uploaded certain content and removed it shortly afterwards, perhaps due to regret or backlash

786 Crocker et al (n 40). 787 Instagram’s archive feature enables users to hide posts from their profile, including corresponding likes and comments, which a user might want to store in a private collection for their own reference: Instagram, ‘How Do I Archive a Post I’ve Shared’ (n 97). 788 Alice Marwick and danah boyd, ‘I Tweet Honestly, I Tweet Passionately: Twitter Users, Context Collapse, and the Imagined Audience’ (2011) 13(1) New Media & Society 114, 122, 126. 789 Witt, Suzor and Huggins (n 27) 582.


in comments, among other things.790 The highly visual nature of the platform, however, suggests that curation is the most probable explanation for the removal of content in this case study.

To the extent that content may have been removed as a result of direct intervention by Instagram, there are a number of possible explanations and contributing factors that could explain the inconsistent trend that I observed. One possibility is that, in response to long-standing concerns that Instagram’s moderation processes favour depictions of the Western

790 Users may also remove content in response to concerns about privacy: see generally Sauvik Das and Adam Kramer, ‘Self-Censorship on Facebook’ (Paper presented at the Seventh International AAAI Conference on Weblogs and Social Media, MIT Media Lab and Microsoft in Cambridge, Massachusetts, USA, 8–11 July 2013) 120-121.


ideal of thinness, the platform may have developed practices that are especially protective of body-positive content. For instance, a protectionist approach could involve the platform directing its moderators to pay attention to content that may have been reported in order to ‘fat-shame’791 women. It is also possible that content may have been removed by artificial intelligence systems; specifically, nudity classifiers, which detect varying degrees of nudity in content and can result in regulatory action (eg, flagging or content removal).792 Classifiers are often trained on unrepresentative datasets because good data are generally difficult and costly to obtain. Given the prevalence of thin ideals in Western media, and the history of some classifiers being trained on thin-idealised pornographic content,793 it could be the case that Instagram’s computer vision technology is trained on more images of Underweight female forms than Mid-Range and Overweight female forms. This could ultimately result in thin-idealised images being detected, and possibly removed, at a higher rate than other content.

Instagram’s heavy reliance on user reporting could also explain some of the discrepancies that I identify. While user reports do not automatically result in the removal of content,794 and the number of times users report individual content is supposedly irrelevant,795 reporting options may still impact the outcomes of moderation. User reports are complex ‘sociotechnical mechanisms’796 that can be a site for political, cultural, social and other conflicts among users about the appropriateness of certain content.797 Moderators and policy teams are tasked with arbitrating these conflicts, some of which might reflect systemic social biases.798 The public, however, knows very little about the inner workings of user reporting.799 Instagram does not report on the volume or nature of flagged and removed content.800 It is also difficult to glean information from the basic framework for user-initiated reports in Figure 7, which does not provide guidance to users to help them report content in a way that is, inter alia, consistent with the platform’s policies. A result of this lack of transparency is that it is not yet possible to measure the potential impact of biases in user-initiated reports.

791 See, eg, Brown (n 15). 792 Roberts, ‘Social Media’s Silent Filter’ (n 591). 793 Chang (n 473) 2 ff. 794 Crawford and Gillespie (n 232) 419. 795 Instagram, ‘Does the Number of Times Something Gets Reported Determine Whether or Not It's Removed?’ Help Centre (Web Page, 30 August 2018) . 796 Crawford and Gillespie (n 232) 410. 797 Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media,(n 2) Chapter 1. 798 Renée Marlin-Bennett and E Nicole Thornton, ‘Governance within Social Media Websites: Ruling New Frontiers’ (2012) 36 Telecommunications Policy 493, 493; Crawford and Gillespie (n 232) 413. 799 Duguay, Burgess and Suzor (n 94) 10. 800 Witt, Suzor and Huggins (n 27) 578.



Additionally, the removal of images in this case study may have been the result of the often ad hoc nature of content moderation. While the rules that moderators follow are allegedly prescriptive, some moderators and policy teams exercise varying degrees of discretion.801 This means that some decisions about what content is prohibited, such as ‘nudity’,802 and what is not may have been made on a case-by-case basis. Moderation decisions, for example, sometimes change in line with directions handed down by full-time policy teams, which can have the effect of temporarily or permanently overriding particular content policies.803 It is also possible that moderators – whether humans or automated systems – could be following different sets of rules to those articulated in publicly available policies. We know that the publicly accessible rules and guidelines of platforms are not the same as the very specific flowcharts and training materials that moderators use behind closed doors.804 Content moderators need to be able to quickly make consistent decisions, and operationalising the rules in practice requires them to be translated into a highly specific set of instructions.805 This should not be read as suggesting that Instagram has an undisclosed policy to remove thin-idealised images of women at a higher rate than others; rather, undisclosed policies might prioritise different interests or factors in decision-making processes.

Systemic biases or moderator error may have also contributed to the inconsistent trend that I identify. Reports to date suggest that platforms continue to largely rely on humans to undertake the work of reviewing content for several reasons, including that automated tools for detecting potentially objectionable content are ‘imperfect’806 at best. However, whether humans or algorithms moderate content, there is an ongoing risk of bias given that platform executives and the professionals who develop network technologies encode, to varying degrees, particular values, worldviews and interests into content moderation processes.807 This means that

801 Suzor, 'Digital Constitutionalism: Using the Rule of Law to Evaluate the Legitimacy of Governance by Platforms' (n 26) 3. 802 Instagram Help Centre, Community Guidelines (n 185) [Post Photos and Videos that are Appropriate for a Diverse Audience]. 803 Roberts, ‘Social Media’s Silent Filter’ (n 591). 804 At Facebook, for instance, the rules that moderators follow are ‘not what you read in Facebook’s terms of service or community standards. They are a deep, highly specific set of operational instructions for content moderators that is reviewed constantly by [Monika] Bickert’s team and in a larger intra- Facebook gathering every two weeks’: see Madrigal (n 254). 805 Madrigal (n 254). 806 Roberts, ‘Social Media’s Silent Filter’ (n 591). 807 Roberts, ‘Social Media’s Silent Filter’ (n 591).


procedures for sorting content (eg, approving, escalating or denying a user report)808 and individual moderator workflows809 are far from neutral. Indeed, moderation processes, like other systems, are generally designed to benefit their creators rather than end users. The potential risk of moderator error or biased outcomes is exacerbated by the poor working conditions of often low-paid moderators or ‘click workers’.810 For instance, employers – whether platforms or outsourced companies – often require moderators to make decisions about content in line with accuracy targets, contributing to reliance on instinctive and value-laden responses. While platforms purportedly employ moderators for their language and subject-matter expertise,811 and review the consistency and accuracy of individual moderators,812 applying standards of appropriateness to content from all corners of the globe is still an immensely complex task for moderators and platforms more broadly.813

Questions of censorship also arise in this study. As previously noted, some users suggest that Instagram’s executives and/or moderators censor ‘objectionable’ images of women’s bodies according to undisclosed normative values, and reports show a number of examples of potentially ‘censorious content moderation’814 on the platform.815 Instagram also appears to routinely censor entire hashtags,816 perhaps most notably #curvy and, more recently, #JustWearTheSuit.817 The platform states:

808 Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (n 2) 74. 809 Crawford and Gillespie (n 232) 413. 810 Buni and Chemaly (n 8). 811 See generally Duke University Law School (n 607). 812 Koebler and Cox (n 550). See discussion of accuracy targets and scores in Newton (n 213). 813 Andrew Arsht and Daniel Etcovitch, 'The Human Cost of Online Content Moderation', Jolt Digest (online at 2 March 2018) ; Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (n 2) Chapter 5. 814 See, eg, Anderson et al, (n 26). 815 Crocker et al (n 40). 816 Nicolas Suzor, ‘What Proportion of Social Media Posts get Moderated, and Why?’ Medium (online at 9 May 2018) ; Ysabel Gerrard, 'Beyond the Hashtag: Circumventing Content Moderation on Social Media' (2018) 20(12) New Media & Society 4492; See also Rodriguez (n 83). 817 Carly Anderson, the founder of #JustWearTheSuit, says that the hashtag aims to encourage all people to ‘support each other to shed our inhibitions this summer and to create a community that helps us discuss how to feel confident in a bathing suit’: see Carly Anderson, ‘Just Wear the Suit: How We Can All Support Each Other’, Lipgloss and Crayons (online at 2019) ; Faith Brar, 'Instagram Tried to Ban a Body- Positive Hashtag—Until Hundreds of Women Took a Stand', Shape (online at 14 August 2019) .


While some posts on Instagram may not go against our Community Guidelines, they might not be appropriate for our global community, and we'll limit those types of posts from being recommended on Explore and hashtag pages. For example, a sexually suggestive post will still appear in Feed if you follow the account that posts it, but this type of content may not appear for the broader community in Explore and hashtag pages…

Keep in mind:

• Not all posts or accounts are eligible to be surfaced in Explore and hashtag pages. We use a variety of signals, for example, if an account has recently gone against our Community Guidelines, to determine which posts and accounts can be recommended to the community.818

I cannot comment on the potential for direct censorship in the results that I observe given that Instagram does not report which party removed content or reasons for content removal. However, it is important to note that there are a number of actors who would ask platforms to exercise power over content for a variety of ends,819 among them governments who might seek to surveil or censor speech,820 copyright owners,821 users or other third parties with grievances.822 The result is that images in this case study may have been removed, or censored, for reasons unrelated to the depiction of female forms.

While the secrecy of Instagram’s moderation processes makes it difficult to come to a definitive conclusion around how and why content is removed and who removes it, my results suggest that the platform’s approach to moderation, which includes vaguely articulated rules, a heavy reliance on user flagging and a large outsourced, low-paid workforce, could lead to inconsistency in content removal. These possible explanations raise a number of ongoing normative concerns around processes for moderating content on Instagram. In the following Part, I outline and evaluate these concerns through my rule of law framework.

818 Instagram, Why Are Certain Posts Not Appearing in Explore and Hashtag Pages? Help Centre (2019) . 819 See, eg, Langvardt (n 519) 1353. 820 According to the Electronic Frontier Foundation, Instagram has committed to ‘reasonable efforts (such as country-specific domains or relying upon user-provided location information) to limit legally ordered content restrictions to jurisdictions where the provider [Instagram] has a good-faith belief that it is legally required to restrict the content’: see Cardozo et al, Who Has Your Back? Censorship Edition 2018 (Online Report, 31 May 2018) Electronic Frontier Foundation [Overview of Criteria, Limits Geographic Scope] . 821 For instance, a cause of action under Digital Millennium Copyright Act of 1998 17 USC §512 (2000). 822 For example, claims of defamation or harassment: see Suzor, 'Digital Constitutionalism: Using the Rule of Law to Evaluate the Legitimacy of Governance by Platforms' (n 26) 3.


B Normative Concerns from a Rule of Law Perspective

There is some cause for concern from the perspective of the rule of law where users are choosing to remove images themselves. As explained in Chapter Two, the largely formal rule of law framework in this thesis is not sufficient to articulate more substantive and complex concerns around user practices. The evaluation in the remainder of this Chapter, however, suggests that Instagram may have a role to play in supporting or reinforcing cultural norms that impact on women’s self-expression. These norms could be informed by a number of different factors, including heteronormative body standards and traditional conceptions of a woman’s place in the world. I explore the apparent normative underpinnings of the moderation of images depicting female forms on Instagram from a feminist perspective in Chapter Six.

Where content removal is due to Instagram’s moderation processes, there are serious concerns from the vantage point of the rule of law. One significant concern is that users, as the subjects of regulation, lack certainty and guidance about how their content is moderated in practice. Instagram’s ambiguous and incomplete policies fundamentally limit the ability of users to understand and learn the bounds of acceptable content and apply platform policies to reporting features. This is contrary to the ‘basic intuition of law’ – that is, in order to follow the law, individuals must first have knowledge of what the law is.823 We can usefully compare Instagram with its parent company here: Facebook has recently disclosed some information on how its community guidelines are interpreted in practice, including providing examples of content that violates and does not violate the major terms of its policies.824 Despite falling well short of the best practices set by industry leader YouTube,825 these disclosures are a positive step forward for Facebook.

In contrast, Instagram has revealed very little about how content is moderated behind closed doors. Global censure of its parent company appears to have shielded the platform from calls for greater transparency to date and, even where Instagram sets some rules around content, these rules have a wide ‘penumbra of uncertainty’.826 Take, for instance, the platform’s definition of ‘nudity’ that includes additional, undefined terms. This uncertainty limits the

823 Raz (n 42) 214-215. 824 Bickert (n 253). See also Erin Egan and Ashlie Beringer, ‘We’re Making Our Terms and Data Policy Clearer, Without New Rights to Use Your Data on Facebook’, Facebook Newsroom (Press Release , 4 April 2018) ; Guy Rosen, ‘Facebook Publishes Enforcement Numbers for First Time’, Facebook Newsroom (Press Release, 15 May 2018) . 825 See, eg, Feerst (n 395). 826 Hart, ‘Positivism and the Separation of Law and Morals’ (n 339) 607.


extent that the bounds of acceptable content can crystallise for users,827 both in terms of their self-expression through content and their role as volunteer regulators.828 As Raz explains, albeit in the context of state-enacted laws, ‘[a]n ambiguous, vague, obscure, or imprecise law is likely to mislead, or confuse at least some of those who desire to be guided by it’.829 The fact that users are unlikely to read terms of service and associated guidelines when signing up to a platform increases the likelihood of confusion830 and further limits the extent that Instagram’s rules can effectively guide individual behaviour.

Concerns also arise in terms of substantive certainty. The overall trend of inconsistent moderation gives the public strong reason to suspect that Instagram has not always regulated content in ways that align with its publicly available policies. As a result, the platform’s rulesets and governing practices remain largely ill-defined (demonstrating a large ‘penumbra of uncertainty’)831 and appear to be subject to frequent changes in regulatory direction.832 The inconsistent trend, which ultimately demonstrates incongruence between the platform’s content policies and content moderation in practice, also limits the ability of users to anticipate the likely result of posting certain types of content. This lack of information can breed additional confusion and lead users to develop vernacular explanations for content takedowns: for example, explanations that allege bias on the part of the platform or other stakeholders.833

Substantively, the higher odds of removal for Underweight women’s bodies are somewhat surprising given the prevalence of anecdotal complaints that Instagram favours depictions of thin female body ideals.834 This discrepancy could be explained by differences in the visibility of content; perhaps thin-idealised images are more visible and therefore more likely to be reported than body positive images that are posted in a way that is visible to smaller, less acrimonious groups of users. It is also possible that different cultural norms of use among various users or communities on the platform may lead to much higher rates of self-censorship for images of underweight women, or that Instagram and its users are more supportive of body-positive images than media and blogs seem to allege. There is no easy way to tell whether these

827 Donal Casey and Colin Scott, ‘The Crystallisation of Regulatory Norms’ (2011) 38(1) Journal of Law and Society 76, 77. 828 Crawford and Gillespie (n 232) 410. 829 Raz (n 42) 214. 830 Tan (n 341) 197. 831 Hart, ‘Positivism and the Separation of Law and Morals’ (n 339) 607. 832 Casey and Scott (n 827) 77. 833 West, ‘Censored, Suspended, Shadowbanned: User Interpretations of Content Moderation on Social Media Platforms’ (n 447) 4377 ff. 834 See, eg, Rach (n 432).


factors influence content removal without gaining access to Instagram’s internal workings and/or interviewing users, which I identify as important areas for future research. This underlines that, in stark contrast to the Anglo-American ideal of the rule of law, the onus is disproportionately on users and other stakeholders to make sense of potential inconsistencies rather than on the platform to better explain how content is moderated in practice.

The empirical results raise additional normative concerns about the formal equality of Instagram’s moderation systems, particularly whether images are regulated in ways that are free from bias, discrimination and error.835 As previously noted, results show that up to 22 per cent of seemingly like depictions of female forms were not moderated consistently. This means that two users who post like content might have vastly different experiences: for instance, one user whose content was not removed might conclude that the platform’s systems for moderating content are unbiased, while another user whose content was removed might conclude that the same processes are biased. The potential here for different treatment of analogous subject matter falls significantly short of the value of formal equality, both as a benchmark for the ideal treatment of individuals in rule of law discourse and as an internationally recognised human right.836 It is difficult, however, to detect bias and other inequalities in decision-making without, inter alia, some clearer indication of how the platform monitors the performance of its moderation teams. Moreover, when rules are enforced, users frequently lack transparent information and reasons to explain exactly why their content was removed or why other users’ similar content was moderated differently. A result is that the extent to which content is moderated in ways that are free from error, prejudice and/or bias remains unclear.

The lack of information about user reporting features amplifies normative concerns from a rule of law perspective. One significant concern is that Instagram does not clearly disclose the volume of reported, or flagged, content that is removed from its platform.837 We also know very little about how user reports are processed or the extent of the influence of these reports on the outcomes of moderation. The extent that Instagram’s reporting options are ‘recruiting’838 or ‘deputising’839 users ‘who are particularly keen to police others’ bodies and

835 Sandvig (n 619) 1; Friedman and Nissenbaum (n 619) 330. 836 See Chapter Two (III)(A). 837 Witt, Suzor and Huggins (n 27) 578. 838 Duguay, Burgess and Suzor (n 94) 5. 839 Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (n 2) 53.


self-expression’ is also unclear.840 There is a risk that these users might report content because they cannot, or do not want to attempt to, understand the contextual and interpretive ambiguities of certain content. Others may attempt to police content based on their personal value system rather than the platform’s content policies. As Instagram acknowledges, ‘people are different’841 and its user base is a ‘diverse community of cultures, ages, and beliefs’.842 This raises the question whether moderators are taking users’ potentially conflicting interests, values and motivations into account when reviewing reported content. Despite this, Instagram has discretion to intervene on behalf of users, including those with the loudest voices, and then deflect criticism by claiming to moderate on behalf of the ‘user community and its expressed wishes’.843 User reporting features are, according to Crawford and Gillespie, ‘a practical and symbolic linchpin in maintaining a system of self-regulation’.844 This is particularly concerning as social media platforms are often the site of complex disagreements about the appropriateness of certain content and behaviours,845 which arguably cannot and should not be addressed through platform policies and technologically determinist tools alone.846

The problems of user reporting features are arguably problems of scale. Platforms struggle to keep pace with the volume of content on their networks. For instance, Facebook users flag, or report, around one million pieces of content per day.847 This is a scale that possibly no other regulatory body outside of Silicon Valley is grappling with.848 As Crawford and Gillespie explain, ‘sites of that scale may have little choice but to place the weight and, to some degree, the responsibility of content decisions entirely on flagging [reporting] by users’.849 It is often difficult, however, for groups of volunteer regulators to glean information from user reporting tools. For example, in Instagram’s reporting option for non-users in Figure 8, the definition of ‘nudity and pornography’850 differs from that provided in the Community Guidelines. This does little to help users report content in a way that is consistent with the platform’s policies. An added complication is that users are often able to report content multiple times, sometimes

840 Duguay, Burgess and Suzor (n 94) 9. 841 Instagram, ‘Terms of Use’ (n 73). 842 Instagram, ‘Community Guidelines’ (n 185). 843 Crawford and Gillespie (n 232) 412. 844 Ibid. 845 Marlin-Bennett and Thornton (n 798) 493; Crawford and Gillespie (n 232) 413; Duguay, Burgess and Suzor (n 94) 9. 846 Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (n 2) 132 ff. 847 Buni and Chemaly (n 8). 848 Crawford and Gillespie (n 232) 410. 849 Ibid 412. 850 ‘Report Violations of our Community Guidelines’ (n 569).


across multiple reporting options. In a YouTube video entitled ‘Dear Instagram,’ blogger Meghan Tonjes criticises Instagram’s regulatory system, which she argues gives some users the power to have content or accounts removed for ‘really no reason’.851 Tonjes states: ‘I would hope that it’s a goal of yours as a platform to make sure that close-minded, ignorant and hateful people don’t abuse your report feature’.852 It is not yet possible to measure the potential impact of bias in user-initiated reporting, as alleged by Tonjes and others, largely due to the lack of transparency around the inner workings of user reporting at scale.

Thus far, I have identified an inconsistent trend of moderation across like thematic categories of images that depict female forms. Specifically, I have shown how the results support the hypothesis (H1) that images depicting Underweight women’s bodies are moderated at a different rate to images depicting Overweight women’s bodies. Takedowns could be the result of content removal by users themselves or direct intervention by the platform. Without gaining access to the platform’s internal workings, however, it is not possible to reach definitive conclusions about precisely who removed content, how and for what reasons. I will now evaluate the broader implications of the overall trend of inconsistent moderation.

C A Highly Unpredictable Regulatory System

The empirical results in this case study suggest that Instagram is supporting and reinforcing a highly unpredictable regulatory system that poses an ongoing risk of arbitrariness for women and users more broadly. This is a result of a lack of formal equality, certainty, reason-giving and user participation, and Instagram’s largely unfettered power to moderate content with limited transparency and accountability, which provide users with insufficient guidance about how their content is moderated in practice. These deficiencies impede users’ ability to identify and understand the policies against which their content will be judged and predict the likely result of posting certain material.853 It becomes particularly difficult for users to make predictions when images that are not explicitly prohibited, like in this case study, are still removed from the platform. Users are seemingly at constant risk of overstepping ‘appropriate-versus-inappropriate distinctions’ around content on Instagram.854

851 Meghan Tonjes, ‘Dear Instagram’ (YouTube, 19 May 2014). 852 Ibid. 853 See generally Raz (n 42) 213-14. 854 Magdalena Olszanowski, 'Feminist Self-Imaging and Instagram: Tactics of Circumventing Sensorship' (2014) 21(2) Visual Communication Quarterly 83.


The results in this Chapter also suggest that potential arbitrariness is interconnected with different cultural norms of use on Instagram.855 Take, for example, the in-built reporting feature for the platform’s iOS (Apple Operating System) app. Despite reporting features often being a site of intense contestation among users, Instagram does very little to set the bounds of appropriate policing. It appears that the most active step that the platform takes to prevent users abusing their position as volunteer regulators is to list ‘I just don’t like it’ as the first reason for reporting and then suggest that users simply unfollow posts and accounts that they ‘don’t like’.856 Instagram’s approach, however, fails to address the ways that users can abuse reporting features to achieve their own ends. By providing a reason for reporting entitled ‘I just don’t like it’, for instance, the platform could be reinforcing that such a reason is valid and encouraging a norm of groundless user reporting. Another significant concern is the way Instagram positions itself as a ‘friendly’857 platform with community-oriented goals, rather than as a governing body with the power to exclude bad actors from participation. By heavily relying on users to report content as inappropriate, as well as failing to provide examples of user policing that are appropriate and inappropriate, the platform risks further promoting a norm of groundless reporting and encouraging vigilante users.

These normative concerns are a symptom of Instagram’s broader culture of limited responsibility. The data in this case study shows that content moderation can produce highly unpredictable results for users. When subject to public criticism, however, Instagram can rely on its label as a ‘technology company’.858 This label often precludes the platform, like others, from being held to the same ethical and normative standards as media companies.859 In addition, Instagram’s ambiguous rulesets enable its policy teams to change regulatory course in response to public sentiment and deploy tactical responses when necessary. Some of the platform’s most common responses include apologising for content removal, blaming ‘their algorithm’860 and attempting to explain the challenges of moderating content at scale.861 Instagram’s apparent culture of deflecting responsibility suggests that it does not take the

855 Duguay, Burgess and Suzor (n 94) 12. 856 See Figure 7. 857 Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (n 2) 35. 858 Jeffrey Rosen, ‘The Delete Squad’, New Republic (online at 23 April 2013) . 859 See generally Flew, Martin and Suzor (n 68) 33; Roberts, ‘Social Media’s Silent Filter’ (n 591). 860 See, eg, Zachary Small, ‘Facebook Censors Montreal Museum of Fine Art’s Ad Featuring Nude Picasso Painting’, Hyperallergic (online at 7 August 2018) . 861 See, eg, Bickert (n 253).


potential harms of content moderation seriously, which risks trivialising some users’ online participation and expression. There are arguably empowerment stories for users who managed to avoid content moderation in this case study – that is, users who posted images that were not removed across Underweight, Mid-Range and Overweight categories. It remains the case, however, that the platform’s moderation processes are advantageous for its shareholders but appear to come at a cost to some female and other users.

Instagram’s ‘complete discretion’862 to govern how users participate in its network is arguably the lifeblood of its unpredictable regulatory culture. One of the aforementioned tenets of the rule of law is that legal instruments should limit a governing authority’s power to set, maintain and enforce the law.863 While the courts set and enforce the boundaries of contract law, the law imposes few limits on Instagram’s governance in practice, which enables inconsistent and unpredictable processes for moderating content to flourish.864 Limited checks and balances are particularly concerning given the nodal system in which Instagram and other platforms operate. Nodal governance is ‘an elaboration of contemporary network theory that explains how a variety of actors operating within social systems interact along networks to govern the systems they inhabit.’865 This theory usefully highlights the enormous potential for normative conflicts between various actors, including platform executives and governments, as well as inherent tensions between commercial prerogatives, responsibility to stakeholders and users’ self-expression. This has given rise to deep-seated concerns that decisions around content represent the normative judgments of Silicon Valley professionals who are attempting to manage these tensions rather than giving weight to well-established rule of law values.866 I will continue to explore these concerns from both a rule of law and feminist perspective throughout the remainder of this thesis.

IV CONCLUSION

This Chapter has empirically evaluated whether 4,944 like images that depict (a) Underweight, (b) Mid-Range and (c) Overweight women’s bodies were moderated alike on Instagram. Across these categories, I identified an inconsistent trend of moderation and advanced two main findings. The first is that up to 22 per cent of images that were removed by Instagram or by the

862 Suzor, ‘Digital Constitutionalism: Using the Rule of Law to Evaluate the Legitimacy of Governance by Platforms’ (n 26) 5. 863 Raz (n 42) 211 ff. 864 Tushnet (n 232) 988. 865 Burris, Drahos and Shearing (n 181) 33. 866 See, eg, Rosen, ‘The Delete Squad’ (n 858); Witt, Suzor and Huggins (n 27) 593.


user are potential false positives. It is not currently possible, however, to identify precisely which party removed content without access to the platform’s internal workings. Second, I found that the odds of removal for an image that depicts an Underweight and Mid-Range woman’s body are 2.48 and 1.59 times higher, respectively, than for an image that depicts an Overweight woman’s body. While the results support my hypothesis (H1) that Underweight depictions of women’s bodies are removed at different rates to Overweight depictions in practice, claims that Instagram is less likely to remove thin-idealised images of women could be overstated.

The legal empirical analysis in this case study suggests that the moderation of images depicting female forms on Instagram is in tension with the Anglo-American ideal of the rule of law. I identified a lack of predictability in the outputs of content moderation, along with insufficient formal equality, certainty, reason-giving, transparency, participation and accountability practices. These deficiencies matter as they can affect, inter alia, users’ self-expression and participation in public discourses of the day. While the secrecy around Instagram’s privatised moderation processes makes it difficult to reach definitive conclusions, the results suggest that the platform’s approach to moderation, which includes vaguely articulated policies, a heavy reliance on user reporting and the development of automated systems, could lead to the inconsistencies that I observed. I argued that the results and findings in this case study underline the need for institutionalised constraints on arbitrariness in content moderation processes.

This Chapter has started the work of empirically evaluating the extent to which the moderation of images depicting women’s bodies on Instagram aligns with my rule of law framework. While the platform’s regulatory system is mostly inscrutable, this case study made significant inroads into investigating content moderation in practice when only parts of a system are visible from the outside. It also highlighted potentially problematic aspects of Instagram’s regulatory system and governance practices more broadly. In the following Chapter, I build upon the foregoing rule of law analysis by examining whether explicitly prohibited images of women’s bodies, and some men’s bodies, in like thematic categories of content are moderated alike in practice.


CHAPTER FIVE: AN EVALUATION OF THE MODERATION OF EXPLICITLY PROHIBITED IMAGES DEPICTING WOMEN’S AND SOME MEN’S BODIES ON INSTAGRAM

This Chapter empirically examines whether a sample of 980 explicitly prohibited images depicting women’s bodies, and some images of men’s bodies, in like categories of content are moderated alike on Instagram. Similar to Case Study One, I evaluate the extent to which the results of content moderation align with my rule of law framework, which comprises the well-established, albeit contested, values of formal equality, certainty, reason-giving, transparency, participation and accountability.867 This and the preceding Chapter ultimately provide empirical evidence of how images of women’s bodies appear to be moderated in practice, and how female forms are moderated in comparison to male forms. The overall dataset in this thesis supports evaluation of content moderation through a feminist lens in Chapter Six. This Chapter comprises four parts.

I start by briefly reviewing key concerns around the moderation of explicitly prohibited images of women, many of which overlap with those outlined in Chapter Four, and by outlining a hypothesis for testing. I then turn, in Part II, to outline my method for examining how explicitly prohibited images are moderated in practice. While the input/output method in this and the preceding Chapter is largely the same, the inputs and outputs in this case study are different to the first. Specifically, this Chapter focuses on how the throughput of Instagram’s moderation systems (ie, the black box) produces certain outputs (ie, true positives and potential false negatives, as opposed to true negatives and potential false positives in Case Study One).868 After programmatically collecting images from 24 watched hashtags,869 I use content analysis to identify whether like images were moderated alike across nine categories: (a) Female, Rear, Full Nudity; (b) Female, Rear, Partial Nudity; (c) Male, Rear, Full Nudity; (d) Male, Rear, Partial Nudity; (e) Partially Nude or Sexually Suggestive; (f) Female Nipples; (g) Porn Spam with a Clear Watermark; (h) Apparent Porn Spam; and (i) Other Explicit Content. I use this coding scheme to investigate true positives and potential false negatives in each category.

867 See discussion in Part III of Chapter Two. 868 See generally Diakopoulos, ‘Algorithmic Accountability Reporting: On the Investigation of Black Boxes’ (n 656) 14. 869 Watched hashtags in this case study are #ana, #bodypositive, #breastfeeding, #celebratemysize, #cheekyexploits, #curvy, #cute, #depression, #effyourbeautystandards, #fatgirl, #fitgirl, #girl, #happy, #honormycurves, #lgbt, #mia, #plussize, #plussizefashion, #plussizemodel, #selfie, #skinny, #stretchmarks and #thin.


Next, in Part III, I outline the results and findings from this study. Overall, I find that the inconsistent trend of moderation that I identified in the preceding case study continues across categories of explicit content. More specifically, I outline three main findings, the first being that up to 56.8 per cent of the coded sample comprises images that seemingly violate Instagram’s content policies and were not removed, and are therefore potential false negatives. Second, the odds of removal for an explicitly prohibited image depicting a woman’s body are 16.75 times higher than for a man’s body, which supports my hypothesis (H2) that images depicting female forms are moderated at different rates to male forms. Finally, 100 per cent of content in the Female Nipples, Porn Spam with a Clear Watermark and Appears to be Porn Spam categories was removed, followed by 90.5 per cent for Other Explicit Content, which suggests that Instagram consistently moderates some types of content.

Part III(A) outlines several potential explanations for the inconsistencies that I observe. Like in the preceding chapter, content removal could be the result of actions by users themselves or direct regulatory intervention by Instagram. Specifically, direct intervention by the platform could be due to moderators coming to different conclusions about content that straddles appropriate/inappropriate distinctions in the Partially Nude or Sexually Suggestive category, which has a roughly 50/50 risk of content removal. Direct intervention could also be due to automated decision-making processes, reliance on user and non-user reporting features and direct censorship, all within the online architecture of the platform. Any or all of these possible reasons, along with other contributing factors, could explain the results given that the platform’s processes for moderating content are mostly inscrutable.

Next, in Part III(B), I evaluate the empirical results in this case study through my rule of law framework. First, I identify a number of deficiencies in processes for moderating content on Instagram in terms of the rule of law values of formal equality and certainty, principally given the incongruence between how the platform says it will moderate content and how content is moderated in practice. I also identify deficiencies in user participation and the platform’s transparency, accountability and reason-giving practices, all of which appear to send mixed regulatory signals to users as the subjects of regulation. These deficiencies suggest that there is support for concerns that some explicitly prohibited images of women’s bodies in like categories of content are not moderated alike in practice, and that some explicitly prohibited images of female forms are moderated differently to male forms.


Finally, in Part III(C), I argue that the lack of formal equality, certainty, reason-giving, binding transparency standards, user participation and accountability measures creates a highly unpredictable regulatory environment that is not stable enough to guide the decisions and behaviours of Instagram users. The results and findings from this case study raise ongoing concerns about the extent to which Instagram’s moderation processes, and governance practices more broadly, align with the Anglo-American ideal of the rule of law. I posit that the overall trend of inconsistent moderation in this and the preceding Chapter reinforces the desirability of Instagram’s moderation processes adhering to basic rule of law safeguards. Part IV concludes this Chapter.

I A HYPOTHESIS FOR EMPIRICAL INVESTIGATION

The moderation of explicitly prohibited images depicting women’s bodies on Instagram is highly controversial. Like in Case Study One, a number of publications in news media make different and sometimes contradictory claims about the ways explicit content is moderated in practice. A persistent claim is that the platform moderates depictions of male forms more leniently than female forms.870 Some users describe this as a gender-based double standard, arguing that it promotes and reinforces a culture in which women’s bodies ‘remain highly policed and criticised’.871 One commentator asserts that ‘Instagram loves to censor female nipples but is a little less discerning when it comes to penises’.872 Others routinely point to the platform’s contradictory governance practices: specifically, that a significant amount of explicitly prohibited content is not removed (ie, false negatives) while images that do not appear to violate the content policies are removed (ie, false positives). A central theme in these allegations is that the platform is ‘turning a blind eye’873 to some content and potentially arbitrarily ‘clamping down’874 on others. Instagram, however, is purportedly very good at removing certain types of content. As Systrom stated in 2017, ‘[p]eople say Instagram is super positive and optimistic. In fact, we have a ton of negative stuff, but we’re going after it before

870 Electronic Frontier Foundation and Visualizing Impact, A Resource Kit for Journalists’ (n 17). 871 Radin (n 428). 872 Megan Reynolds, ‘Why Won’t Instagram Take Down This Picture of Justin Bieber Grabbing His Dick?’ The Frisky (online at 18 September 2018) . 873 See, eg, Melanie Ehrenkranz, ‘Instagram Deactivated this Artist’s Account – Then Ignored Her For Months’, Mic (online at 25 May 2017) . 874 See, eg, Ange McCormack, ‘Instagram, Facebook Crack Down on Harmful Weight Loss Products and Cosmetic Procedures’, Triple J Hack (online at 19 September 2019) .


we have a problem’.875 This suggests that the platform is directing attention towards some regulatory issues and possibly overlooking others. The potential trade-offs and value judgments implicit in this approach warrant investigation.

Before I commence this inquiry, it is necessary to formulate a hypothesis for empirical testing. In light of claims that the platform may moderate images of men more favourably than women, I hypothesise that explicitly prohibited images depicting women’s bodies are moderated at a different rate to explicitly prohibited images of men’s bodies (H2). In statistical terms, this means that there is an association between gender and whether an image is removed. The null hypothesis for this case study is that explicitly prohibited images depicting women’s bodies are not moderated at a different rate to explicitly prohibited images of men’s bodies – that is, there is no association between gender and whether an image is removed or not (H0). I will now outline my method for testing this hypothesis when only parts of Instagram’s regulatory system are visible from the outside.

II METHOD

In this Chapter, I continue to develop and apply the same input/output method based on black box analytics.876 The focus of empirical investigation in this Chapter, however, differs from the previous case study. In Chapter Four, I examined how discrete inputs (ie, images depicting female forms that do not appear to violate Instagram’s content policies) fed into Instagram’s moderation processes produced certain outputs (ie, potential false positives and true negatives). In this Chapter, I investigate how explicitly prohibited images depicting female and male forms (ie, inputs), once fed into the throughput of Instagram’s moderation processes, produce potential false negatives or true positives. Here it is useful to clarify the distinction between false negatives, which are images that appear to violate Instagram’s content policies and were not removed, and true positives, which are images that appear to violate Instagram’s content policies and were removed. By focusing on opposite inputs and outputs to Case Study One, I will be able to evaluate Instagram’s moderation of content from a different angle.
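To make this distinction concrete, the following minimal sketch (in Python, and not drawn from the toolchain described in Chapter Three) shows how each observed image can be assigned to one of the four input/output cells used across the two case studies; the field names and example records are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """One monitored image: whether it appears to violate policy (coded manually)
    and whether it was later observed to be removed (detected programmatically)."""
    appears_to_violate: bool
    removed: bool

def classify(obs: Observation) -> str:
    """Map an observation onto the input/output cells used in the two case studies."""
    if obs.appears_to_violate and obs.removed:
        return "true positive"             # prohibited content that was removed
    if obs.appears_to_violate and not obs.removed:
        return "potential false negative"  # prohibited content left up (Case Study Two focus)
    if not obs.appears_to_violate and obs.removed:
        return "potential false positive"  # permitted content taken down (Case Study One focus)
    return "true negative"                 # permitted content left up

# Hypothetical examples, for illustration only
print(classify(Observation(appears_to_violate=True, removed=False)))   # potential false negative
print(classify(Observation(appears_to_violate=False, removed=True)))   # potential false positive
```

Under these assumptions, Case Study One counts the ‘potential false positive’ and ‘true negative’ cells, while this case study counts the ‘true positive’ and ‘potential false negative’ cells.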

I collected explicitly prohibited images, which users posted to a number of watched hashtags, using the same automated tools as in the preceding Chapter. The watched hashtags for this case study, some of which overlap with Case Study One, are: #ana, #bodypositive, #breastfeeding,

875 Todd Spangler, ‘Instagram CEO Positions His Company as Safer Alternative to Controversial Rivals’, Variety (online at 15 November 2017) . 876 Diakopoulos, ‘Algorithmic Accountability Reporting: On the Investigation of Black Boxes’ (n 656) 14.


#celebratemysize, #cheekyexploits, #curvy, #cute, #depression, #effyourbeautystandards, #fatgirl, #fitgirl, #girl, #happy, #honormycurves, #lgbt, #mia, #plussize, #plussizefashion, #plussizemodel, #selfie, #skinny, #stretchmarks and #thin. I deliberately selected hashtags that news outlets have mentioned in the context of images depicting women’s bodies on Instagram, including #ana and #bodypositive,877 and those to which some users appear to post a significant amount of explicit content, such as #mia.

Additionally, given that some watched hashtags appear to have less user activity and therefore fewer new posts to monitor over time, I included several ‘top hashtags’878 with a high volume of content to attempt to increase the chance of identifying removed images. By programmatically collecting images from popular hashtags, including #selfie and #happy, and possibly non-normative hashtags, like #lgbt, I attempted to capture a more diverse range of images. Finally, I selected #cheekyexploits, a hashtag to which some users post images of naked buttocks (‘butt shots/selfies’),879 like in Figure 16, to attempt to examine content moderation based on gender. While #cheekyexploits could be a phenomenon limited to the time of data collection in 2018,880 the hashtag provides a corpus of images to examine how seemingly analogous subject matter was moderated on the Instagram platform.
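The automated collection and monitoring tools themselves are described in Chapter Three and are not reproduced here. Purely to illustrate the input/output logic of observing removals from the outside, the sketch below assumes that each collected post is tracked by a stored public permalink and that a post which later returns an HTTP 404 or 410 has been removed, whether by the user or by the platform; the file names, columns and the status-code heuristic are assumptions rather than the thesis’s actual method.

```python
import csv
import datetime
import requests

# Hypothetical file of permalinks captured at collection time.
INPUT_FILE = "collected_posts.csv"    # columns: post_id, permalink
OUTPUT_FILE = "removal_checks.csv"

def still_available(url: str) -> bool:
    """Treat a 200 response as 'still public' and 404/410 as 'removed or withdrawn'.
    This heuristic cannot say who removed the post or why."""
    resp = requests.get(url, timeout=10)
    return resp.status_code not in (404, 410)

def check_posts() -> None:
    now = datetime.datetime.utcnow().isoformat()
    with open(INPUT_FILE, newline="") as f_in, open(OUTPUT_FILE, "a", newline="") as f_out:
        reader = csv.DictReader(f_in)
        writer = csv.writer(f_out)
        for row in reader:
            available = still_available(row["permalink"])
            writer.writerow([row["post_id"], now, "not removed" if available else "removed"])

if __name__ == "__main__":
    check_posts()  # re-run periodically (eg, daily) to observe outputs over time
```

Because a check of this kind only observes availability, it cannot distinguish user deletions or archiving from platform takedowns, which is why the analysis in this Chapter treats both as possible explanations for removal.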

The total dataset from watched hashtags comprises 58,753 images, with a roughly 50/50 split between removed and not removed categories. I used automated tools to generate a sample of 16,782 images from the total dataset for the purposes of manual coding. Once the sample for manual coding was generated, I then undertook content analysis to classify content as depicting one of nine categories of explicitly prohibited content.881 The coded sample contains 980 explicitly prohibited images of women and some men’s bodies (Removed Total (T) = 423 images and Not Removed (T) = 557 images). Specifically, the coded sample comprises 163 images that depict Male, Rear, Full Nudity; 146 for Male, Rear, Partial Nudity; 127 for Female, Rear, Full Nudity; 114 for Female, Rear, Partial Nudity; 10 for Female Nipples; 102 for Partial Nudity or Sexually Suggestive; 21 for Other Explicit; 95 for Appears to be Porn Spam and 202 for Porn Spam with a Clear Watermark.

877 See, eg, Salam (n 20). 878 Orbello (n 758). 879 David Moye, ‘This Cheeky Instagram Page Is Dedicated to Vacation Butt Shots (NSFW)’, Huffington Post (online at 3 May 2017) . 880 Ibid. 881 Cane and Kritzer (n 763) 941.


Once I finalised the coded sample, I undertook some basic quantitative analysis to assess whether the empirical results were statistically significant, particularly considering my deliberate sampling strategy. I used IBM SPSS Statistics to perform a Fisher’s Exact test for statistical independence between categorical variables (ie, the nine categories of explicitly prohibited content) and Removed or Not Removed (content removal).882 A Fisher’s Exact test was more appropriate than a chi-square test, like in Chapter Four, because of the low frequency (total count) for some of the categories in this case study. Overall, a Fisher’s Exact test (at a 95 per cent confidence level, α = 0.05) indicates a statistically significant association between the coded categories and

882 Pallant (n 773) 216.


content removal, χ² (8, n = 980) = 940.420, p = .000, phi = .870. I therefore conclude that the results in Part III are not due to random chance.
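The exact test was run in SPSS, which supports exact p-values for tables larger than 2 × 2. As a rough cross-check in Python, the sketch below rebuilds an approximate 9 × 2 contingency table from the category totals and removal probabilities reported in Part III (so the cell counts are approximations, not the thesis’s raw data) and computes a chi-square test of independence with Cramér’s V as the effect size; SciPy’s built-in Fisher’s exact test only handles 2 × 2 tables, so it is not used here.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Approximate removed / not-removed counts per coded category, reconstructed
# from the category sizes and removal percentages reported in this Chapter.
table = np.array([
    [1, 162],    # Male, Rear, Full Nudity
    [2, 144],    # Male, Rear, Partial Nudity
    [13, 114],   # Female, Rear, Full Nudity
    [21, 93],    # Female, Rear, Partial Nudity
    [60, 42],    # Partially Nude or Sexually Suggestive
    [10, 0],     # Female Nipples
    [202, 0],    # Porn Spam with a Clear Watermark
    [95, 0],     # Apparent Porn Spam
    [19, 2],     # Other Explicit Content
])

chi2, p, dof, expected = chi2_contingency(table)

# Cramér's V (reduces to phi when the outcome variable has two levels).
n = table.sum()
cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))

print(f"chi2({dof}, n={n}) = {chi2:.3f}, p = {p:.3g}, V = {cramers_v:.3f}")
```

With these approximated counts the degrees of freedom (8) and sample size (980) match the figures reported above, although the test statistic will not exactly reproduce the SPSS exact-test output.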

A Coding Scheme for Explicitly Prohibited Images

I coded explicitly prohibited images of women’s bodies and some images of men’s bodies into nine thematic categories, all of which appear to violate Instagram’s Terms of Use and/or Community Guidelines.883 As previously noted, the platform updated its policies in April 2018, after programmatic data collection. Prior to this policy change, Instagram’s constitutive documents explicitly referenced and prohibited ‘partially nude’ and ‘sexually suggestive’ images or other content, which were particularly relevant to the aims of this case study. Instagram’s Terms of Use now contains a general statement that users ‘can't violate (or help or encourage others to violate) these Terms or our policies, including in particular the Instagram Community Guidelines, Instagram Platform Policy, and Music Guidelines’.884 While the platform no longer explicitly references sexually suggestive or partially nude content in its policies, I previously explained that sexually suggestive content still appears to be removed from the platform, as an informal policy of sorts.885 Moreover, the platform reserves the right to remove potentially inappropriate content, even if it does not violate its content policies.886 Hence the impacts of recent policy changes on this case study were minimal.

After drafting the coding scheme based on Instagram’s content policies, I inductively refined categories around female/male nudity (categories (a), (b), (c) and (d)) and pornography (categories (g) and (h)) below, in line with my initial observations of the dataset for manual coding. The first major observation was that most images of male or female nudity exemplify the previously mentioned ‘cheeky photo craze’887 as they largely depict figures from behind and in various stages of undress. In response to this, I introduced the sub components of male/female, rear and full/partial nudity. The second major observation related to pornographic images: specifically, some images have a clear watermark (ie, company or website name, or URL) that clearly indicates that a pornographic outlet made or disseminated

883 Instagram, ‘Terms of Use’ (n 73); Instagram, ‘Community Guidelines’ (n 180). 884 Instagram, ‘Terms of Use’ (n 73). 885 In the context of Facebook’s Remove, Reduce and Reform strategy, Rosen and Lyons state, … ‘a sexually suggestive post will still appear in Feed if you follow the account that posts it, but this type of content may not appear for the broader community in Explore or hashtag pages’: see Rosen and Lyons (n 345). 886 See, eg, Instagram Help Centre, Why are Certain Posts Not Appearing in Explore and Hashtags Pages? (n 818). 887 Moye (n 879).


the content, like in Figure 17(A). Other images, like those in Figure 17(B), have white pixelated borders and other features that merely suggest that a pornographic outlet made or disseminated the content. I therefore created two categories for pornographic content to account for this difference. The uniformity of images in both pornography categories, however, suggests that users have disseminated, or been spammed with, this content for largely commercial purposes. The categories for manual coding in this Chapter follow:

a) Female, Rear, Full Nudity: the woman’s body is depicted from the rear, and they/she is not wearing any clothing;
b) Female, Rear, Partial Nudity: the woman’s body is depicted from the rear, and their/her buttocks or other intimate body part is less than fully covered;
c) Male, Rear, Full Nudity: the man’s body is depicted from the rear, and they/he is not wearing any clothing;
d) Male, Rear, Partial Nudity: the man’s body is depicted from the rear, and their/his buttocks or other intimate body part is less than fully covered;
e) Female Nipples: the image depicts a woman’s uncovered areola and/or nipple on one or both breasts. It should be noted that I coded images into this category only if the depiction of female nipples is the only aspect of the image that violates Instagram’s content policies;
f) Partially Nude or Sexually Suggestive: the woman’s body is depicted partially nude (but not from the rear in line with category (b)) or in a sexually suggestive manner;888
g) Porn Spam with a Clear Watermark: the image has a fully visible watermark that suggests that a pornographic outlet made or has disseminated the content;
h) Apparent Porn Spam: the image has a watermark that is not fully visible, a pixelated border or other features that could suggest that a pornographic outlet has made or disseminated the content; and
i) Other Explicit Content: the image depicts sexual intercourse, a woman or man’s fully nude body from the front or other explicit content.

888 It should be noted that this case study does not include a separate category for male forms that are partially nude or sexually suggestive (ie, a male version of category (f)) for reasons of scope. However, categories (a), (b), (c) and (d) of the coding scheme are designed to enable comparison of the moderation of male and female nudity in practice.


When coding the sample of 16,782 images, I made a number of decisions about the coding procedure. The first was to attempt to code images into one main category. If an image could arguably fall under one or more explicit prohibitions, I coded the image into the category that it appeared to violate the most. The second important decision was to exclude close-ups of women’s faces, because they did not depict a female form for the purposes of this empirical analysis. I also excluded images of buttocks and other body parts that did not depict enough of a person’s body to be classified reliably. The result is that the coded dataset predominantly comprises selfies or portraits that depict a significant portion of a woman’s or man’s body. Images of men are largely limited to categories (c) and (d); however, some male


figures appear in categories (g), (h) and (i) where individuals appear to be performing sexual acts.

It is also important to note that, like Mid-Range women’s bodies in Case Study One, the Partially Nude or Sexually Suggestive category contains images that were particularly difficult to code into one main category. Images were difficult to classify when the appropriateness of their subject matter could vary depending on one’s interpretation. Take, for instance, an image that depicts a blurred, naked female body that is not clear enough to be explicit but could be sexually suggestive. The task of coding particularly subjective content like this is made still more difficult by the fact that Instagram’s content policies do not fully define all salient terms, such as ‘nudity’, to guide an external would-be moderator like me. This means that the Partially Nude or Sexually Suggestive category includes images that could be sexually suggestive but do not clearly fall under one particular category, and all depictions of partial nudity except for depictions of Female or Male Rear, Partial Nudity (#cheekyexploits style images) and Female Nipples. While there is some subjectivity around the coding procedure, every image in this case study depicts a female form, or male form, in some way that is explicitly prohibited on the Instagram platform. We should, therefore, expect coded images in like categories of content to be moderated alike.

I undertook inter-rater testing to assess the coding procedure. Unlike Case Study One, in which I calculated Cohen’s kappa for inter-rater reliability, I informally tested my coding scheme with my principal supervisor Professor Nicolas Suzor. Professor Suzor examined a random sample of images from each coded category and either agreed or disagreed with how I had coded each image. By discussing the coding scheme and coding procedures with Professor Suzor, I was able to refine the coding scheme over time until we reached coding agreement.
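For readers unfamiliar with the Cohen’s kappa statistic used in Case Study One, the short sketch below shows how chance-corrected agreement between two coders can be computed; the coder labels are invented for illustration, and no kappa value was calculated for the informal checks described in this case study.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical category labels assigned by two coders to the same ten images.
coder_1 = ["a", "b", "f", "f", "g", "h", "i", "b", "a", "f"]
coder_2 = ["a", "b", "f", "e", "g", "h", "i", "b", "a", "f"]

# Kappa corrects raw percentage agreement for the agreement expected by chance.
kappa = cohen_kappa_score(coder_1, coder_2)
print(f"Cohen's kappa: {kappa:.2f}")
```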

B Limitations

As in the first case study, the empirical results in this Chapter should be interpreted with some limitations in mind. The first is that I constructed the various datasets through deliberate sampling and hashtag selection. This means that the final coded sample is not naturally occurring and only a small portion of publicly available images that depict female forms, and some male forms, are captured in this case study. Second, without access to Instagram’s internal workings, automated tools cannot determine whether individual images were removed by a user or by the platform. This has a number of implications for empirical analysis, chief among them that I cannot comment on why certain content was removed, some of which might


not relate to the depiction of different female body types. Additionally, results cannot be used to make generalised findings about all explicitly prohibited images across all hashtags on Instagram. Despite these limitations, many of which overlap with the preceding Chapter, I have provided new insights into the ways that some explicitly prohibited images appear to be moderated on Instagram. I have also been able to identify the limitations of the platform’s moderation systems for empirical legal analysis, which informs specific recommendations for future studies of this type in Chapter Seven.

III RESULTS AND DISCUSSION

Overall, similar to Case Study One, I identify a trend of inconsistent moderation across like categories of explicitly prohibited images that depict women’s bodies, and some men’s bodies, on Instagram. In terms of the ideal of the rule of law, this means that some like content was not moderated alike and therefore some instances of content moderation appear to contradict the platform’s policies. As depicted in Table 3, the probability of removal for images depicting Male, Rear, Full Nudity is 0.6 per cent, followed by 1.4 per cent for Male, Rear, Partial Nudity; 10.2 per cent for Female, Rear, Full Nudity and 18.4 per cent for Female, Rear, Partial Nudity. Probabilities of removal significantly increase from this point, with 58.8 per cent for the Partially Nude or Sexually Suggestive category, followed by 90.5 per cent for Other Explicit and 100 per cent for Female Nipples, Porn Spam with a Clear Watermark and Appears to be Porn Spam. Across categories, I find that up to 56.8 per cent of the coded sample comprises images that were not removed by Instagram or by the user and are therefore potential false negatives. The overall trend of inconsistent moderation suggests that there is support for concerns that some explicitly prohibited images depicting women’s bodies in like categories of content are not moderated alike on Instagram.

I performed logistic regression in IBM SPSS to assess the odds of removal for explicitly prohibited images of women’s bodies compared to male bodies. Given that the count for removed content in both male categories was too low, I combined Female, Rear, Full Nudity with Female, Rear, Partial Nudity to create a Female category, and Male, Rear, Full Nudity with Male, Rear, Partial Nudity to create a Male category. Male is the reference (base) category because it has a smaller number of removed images than the Female category. As shown in Table 4, the odds of removal for an expressly prohibited image of a woman’s body is 16.75 times higher than for a man’s body. These results support my hypothesis that some explicitly prohibited images depicting women’s bodies are moderated at different rates to explicitly prohibited images of men’s bodies (H2).
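For illustration only, the following sketch reproduces the logic of this analysis outside of SPSS using the statsmodels library in Python: a binary logistic regression with Male as the reference category, in which exponentiating the fitted coefficient yields the odds ratio. The small dataset is a hypothetical placeholder rather than the coded sample.

# An illustrative sketch of a logistic regression comparing the odds of removal
# for female versus male categories (analogous to the SPSS analysis reported in
# Table 4). The tiny dataset below is a hypothetical placeholder.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "gender":  ["Female"] * 6 + ["Male"] * 6,
    "removed": [1, 1, 1, 0, 1, 0,   0, 0, 1, 0, 0, 0],
})

# Male is the reference (base) category, as in the thesis.
model = smf.logit("removed ~ C(gender, Treatment(reference='Male'))", data=df).fit(disp=0)

# Exponentiating the coefficients gives odds ratios; the gender term is the
# odds of removal for Female relative to Male.
odds_ratios = np.exp(model.params)
print(odds_ratios)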


As explained in Chapter Four, there are a number of possible explanations for content removal that fall into the broad categories of content removal by users themselves or direct regulatory intervention by Instagram. Specifically, direct intervention could be influenced by undisclosed platform policies, the role of users and non-users as volunteer regulators and the development and deployment of automated systems. Given that there is limited information about the internal workings of Instagram’s regulatory system in the public domain, any or all of these possible reasons, along with other contributing factors, could explain the results in this case study. I will now expand upon the most relevant potential explanations for content removal in order of most to least likely.

A Possible Explanations for Content Removal

The first possible explanation for the inconsistent trend that I identify is content removal by users. As explained in Chapter Four, users can remove their own content for any number of diverse reasons that might relate to individual social media practices and the platform’s culture more broadly. The most likely explanation is arguably that some users may have simply curated their profile by removing or archiving content. Users could be motivated to curate their explicit content for similar reasons to content that is not explicitly prohibited: for instance, to reduce the amount of content on their profile, or to engage with new visual media aesthetics, trends or patterns. Another consideration is that given the explicit nature of content in this case study, it could be more likely than in Case Study One that some users may have removed their content, perhaps due to backlash in comments.889 Limited data about how different users participate on

889 Users may also remove content in response to concerns about privacy and feelings of regret: see generally Das and Kramer (n 790) 120-121.


the Instagram platform, however, leave the potential reasons why some users remove their own content open to wide speculation.

The second potential explanation for content removal is direct intervention by Instagram itself. While it is difficult to come to definitive conclusions around why content was removed in this context, it is possible that moderators are coming to different conclusions about content that straddles the line between appropriate and inappropriate expression. Take, for example, the Partially Nude or Sexually Suggestive category, which contains most of the images in the coded sample and, somewhat unsurprisingly, has a roughly 50/50 risk of content removal.890 Figure 18 illustrates the difficulty of assessing whether borderline material violates content policies.891 A reviewer may question whether Figure 18(A) constitutes pornography, nudity, partial nudity or some other form of explicit content. More broadly, Figure 18 raises the question of whether any images were removed for being sexually suggestive, a highly ambiguous category of

890 See Table 3. 891 Zuckerberg (n 349).


potentially prohibited content. The task of reviewing images is made more challenging by the fact that no two images are exactly the same except for spam content. An added complication, especially for an external auditor or would-be reviewer, is that Instagram does not define every category of explicitly prohibited content.

The use of automated systems could also explain some of the discrepancies that I identify. Since mid-2016, when Systrom announced his goal to ‘clean up’892 Instagram and the ‘cesspool’893 of the internet more broadly, the company has been increasing its development and deployment of artificial intelligence to assist with the task of moderating content at scale.894 It is now commonplace for online platforms to train content classifiers to detect different types of prohibited content with relatively high degrees of accuracy.895 Classifiers generally compare content to a database of known (coded) content or detect different features in an image or video, such as varying degrees of bare skin, nipples or watermarks. It could be the case that some classifiers are able to take regulatory action (ie, flagging or removing content). As illustrated in Table 3, the probability of removal for Apparent Porn Spam and Porn Spam with a Clear Watermark is 100 per cent followed by 90.5 per cent for Other Explicit content. Such high levels of accuracy, coupled with the fact that most pornographic and explicit images in this case study have common features, as outlined in the coding scheme, suggest that automated systems played a significant role in moderating content in these categories. The fact that platforms have a vested interest in reducing the prevalence of spam content, which can diminish the quality of some users’ experiences and deter potential investors, adds weight to this potential explanation.
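A minimal sketch of the database-matching approach described above is set out below. It flags an image when a cryptographic hash of its bytes matches a database of previously identified spam; production systems typically rely on perceptual hashing or trained classifiers instead, and the file paths and hash values here are hypothetical.

# A minimal, illustrative sketch of matching content against a database of
# known (coded) spam content. Real moderation systems generally use perceptual
# hashes or trained classifiers; the paths and hash values are hypothetical.
import hashlib
from pathlib import Path

# Hypothetical database of hashes for previously identified porn spam images.
KNOWN_SPAM_HASHES = {
    "3f5a0c7d9b1e...",  # truncated placeholder entries
    "a91b44e2c0d8...",
}

def sha256_of_file(path: Path) -> str:
    """Return the SHA-256 hex digest of an image file's raw bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def is_known_spam(path: Path) -> bool:
    """Flag an image if its hash exactly matches the known-spam database."""
    return sha256_of_file(path) in KNOWN_SPAM_HASHES

# Example usage over a hypothetical folder of downloaded images.
for image_path in Path("images").glob("*.jpg"):
    if is_known_spam(image_path):
        print(f"Flag for review/removal: {image_path}")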

Like in Case Study One, it is possible that the removal of images in this case study may have been influenced to some degree by Instagram’s heavy reliance on reporting features. As explained in previous chapters, the platform does little to set the parameters of appropriate user and non-user reporting, which can help to ensure that individuals report content based on platform policies rather than their own value systems. A major concern here is that Western value systems are predominantly heteronormative and often incorporate conservative perspectives on the appropriateness of different female forms for public view.896 Western

892 Thompson (n 344). 893 Ibid. 894 Ibid. 895 As previously noted, porn spam often has obvious visual markers including watermarks and white pixelated borders, like those depicted in Figure 17. 896 See, eg, Baer (n 125) 20 ff.


societies are likely to scrutinise and police images of female nudity, especially non-normative depictions of women, to a greater extent than male forms.897 Given that platforms reflect the society that they emerge from, it is possible that users may have policed certain content in this case study, such as depictions of Female, Rear, Full Nudity and Female, Rear, Partial Nudity, to a greater extent than other content, namely depictions of Male, Rear, Full Nudity and Male, Rear, Partial Nudity. It could also be the case that some users and non-users attempted to game reporting features to achieve their own ends, perhaps as part of normative conflicts with other users or in opposition to the platform more broadly.

Finally, questions of censorship also arise in this study. As previously noted, reports show a number of examples of potentially ‘censorious content moderation’898 on Instagram,899 and the platform continues to routinely censor entire hashtags.900 I cannot comment on the potential for direct censorship in the coded dataset given that Instagram does not report which party removed content – a user or the platform – or the reasons for content removal. It should be noted, however, that Instagram is likely to take affirmative action around certain types of prohibited content to maintain a palatable platform experience for advertisers and other stakeholders.901 As Thompson notes, ‘[a]dvertisers like spending money in places where people say positive things’.902 Indeed, advertising is big business: Instagram reported around $6.8 billion in advertising revenue in 2018 from a combination of photo, video, carousel and Stories advertisements.903 It is therefore also possible that the platform removed some spam advertising content that businesses have not paid for. Overall, the lack of public knowledge about the ways that Instagram regulates different types of explicit content makes it difficult to come to definitive conclusions about which party moderates content, in what ways and for what reasons. The empirical results suggest that a combination of content removal by users and direct intervention by the platform, including reliance on automated decision-making and user reporting tools, could lead to the inconsistent trend of moderation. I will now turn to evaluate the normative issues raised by this case study from a rule of law perspective.

897 See, eg, Duguay, Burgess and Suzor (n 94) 2. 898 See, eg, Anderson et al (n 20) 9. 899 Crocker et al (n 40); Cardozo et al (n 820). 900 Suzor, ‘What Proportion of Social Media Posts Get Moderated, and Why?’ (n 816). 901 Sarah Myers West, ‘Raging Against the Machine: Network Gatekeeping and Collective Action on Social Media Platforms' (2017) 5(3) Media and Communication 28, 30. 902 Thompson (n 344). 903 Instagram, ‘Build Your Business on Instagram’, Instagram Business (Web Page, 2019) .



B Normative Concerns from a Rule of Law Perspective

The first finding in this case study, that up to 56.8 per cent of images are potential false negatives, gives rise to concerns in terms of formal certainty. One concern is that Instagram seemingly adhered to its publicly available content policies in only 43.2 per cent of cases – in other words, less than half the time. This brings into question whether moderators are making ‘in spirit’904 decisions, a phrase that was used by a former Instagram executive. The potential for in spirit decision-making is concerning because it can result in moderators drawing ad hoc lines between what is appropriate and what is not on a case-by-case basis. Another concern is that, as previously noted, several key terms in Instagram’s content policies around explicit content are particularly vague. Think back, for instance, to the ‘penumbra of uncertainty’ around the terms ‘partially nude’ and ‘sexually suggestive’.905 Such ambiguity not only makes it difficult for researchers to attempt to code images into like categories of content, as in this thesis, but also highlights the difficulties that users face in attempting to identify and understand content policies. The lack of formal certainty in the context of explicit content is likely to confuse users and other stakeholders who wish to be guided through a regulatory system.

Another concern from the vantage point of formal certainty is the way that Instagram imposes a blanket prohibition on nudity. In doing so, the platform risks signalling to users that all forms of nudity are inappropriate, or that nudity is equal to sex.906 York makes the point that, ‘… conflation of nudity, sexuality and pornography seems far more dangerous than pornography itself. The human body is not inherently sexual, nor are all depictions of sexual acts pornography’.907 The reality is that some images might be more or less appropriate than others depending on context. As I have previously acknowledged, images in like categories of content might, in fact, differ based on a range of intersectional factors. On top of this, stakeholders in platform governance might contest certain kinds of explicitly prohibited content, such as depictions of female nipples, to a greater extent than others. The platform’s conflation of pornographic material with female nipples and all other forms of nudity can have far-reaching impacts on society, a point that I expand upon in Chapter Six.

904 Thompson (n 344). 905 Hart, ‘Positivism and the Separation of Law and Morals’ (n 339) 607; Rosen and Lyons (n 345). 906 Jillian York, 'Who Defines Pornography? These Days, It’s Facebook', (online at 25 May 2016) . 907 Ibid.



The high rate of potential false negatives is also problematic in terms of substantive certainty given the significant incongruence between Instagram’s public-facing rules and the outcomes of content moderation. Despite the platform arguably prohibiting every image in this case study, Table 3 illustrates significant variability in the probabilities of removal across categories. Probabilities range from 0.6 per cent for images depicting Male, Rear, Full Nudity to 100 per cent for depictions of Female Nipples; Apparent Porn Spam and Porn Spam with a Clear Watermark. These inconsistencies demonstrate that Instagram is sending out mixed regulatory signals to women and other users. On the one hand, formal rules establish that users cannot view or share explicitly prohibited content to ensure that Instagram continues ‘to be a […] safe place’.908 On the other hand, the moderation of content in this case study seems to communicate to some users that certain types of explicit content may not in fact violate content policies, perhaps depending on the specific users and subject matters involved. These mixed signals are concerning as they have the potential to limit the extent to which users can learn the bounds of appropriate content. Inconsistencies can also breed confusion among users about the meaning and enforcement of rules around content, especially given ongoing controversies around the outcomes of content moderation, and make it difficult for users to trust that content is removed for actually violating policies.909 More broadly, there is a risk that mixed regulatory signals might encourage users to post explicit content and abuse user-reporting features rather than deter them from doing so.

The empirical results also raise serious concerns from the vantage point of the rule of law value of formal equality. In Chapter Two, I argued that any attempt by Instagram to moderate, or regulate, content should generally be consistent – that is, the platform should moderate images in like categories of content alike. The finding in this case study that up to 56.8 per cent of images are potential false negatives, and that some like content was therefore not moderated alike, stands in stark contrast to this ideal. The prevalence of potential false negatives in the sample, which greatly exceeds the 22 per cent rate of likely false positives in Case Study One, suggests that moderators might treat some like content differently behind closed doors. Inconsistent treatment could be due to the platform’s commercial objectives, especially given the profitable nature of pornographic content, and the challenges of moderating content at scale, among other contributing factors. As West argues, ‘companies increasingly must seek out efficiencies in

908 Instagram, ‘Community Guidelines’ (n 185). 909 Alice Newell-Hanson, ‘How Artists are Responding to Instagram’s No-Nudity Policy’, i-D (online at 16 August 2016) .


order to manage the flood of offensive and pornographic content posted by a fraction of its growing user base’.910 It is important that content is moderated in ways that are formally equal to help ensure that the platform’s regulatory system is able to guide individual actions and choices. Consistency, particularly in terms of substantive certainty and formal equality, also plays an important role in allaying users’ concerns that processes for moderating content are biased, discriminatory or unfair.

Concerns around formal equality are compounded by the fact that some like content is not moderated alike even when the platform explicitly allows it. The moderation of images that depict breastfeeding is particularly illustrative of this point.911 Some of the earliest and most persistent controversies around content moderation concerned breastfeeding, as part of broader normative debates about the appropriateness of certain female body parts for public view.912 Instagram, doubtless having learned from battles between its parent company and breastfeeding activists,913 updated its Community Guidelines in 2015 to allow ‘photos of … women actively breastfeeding’.914 Despite this, results in Figure 19 show that the risk of removal for images depicting women breastfeeding is 19 per cent: specifically, 162 images were not removed and 38 images were removed (T = 200).915 These results are concerning for numerous reasons. In terms of formal certainty, the phrase ‘actively breastfeeding’ has, to use Hart’s expression, an especially wide ‘penumbra of uncertainty’.916 This ambiguous terminology and the corresponding empirical results suggest that while the vast majority of images depicting women breastfeeding are appropriate, some images could still violate the platform’s content policies. In the context of substantive certainty, the moderation of content is plainly incongruent with content policies, which explicitly allow images of women actively breastfeeding. It is evident that the trend of inconsistent moderation extends across images that do not appear to violate content policies (Case Study One), explicitly prohibited content (Case Study Two) and explicitly allowed content.

910 West, ‘Raging Against the Machine: Network Gatekeeping and Collective Action on Social Media Platforms' (n 901) 30. 911 Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (n 2) 96. 912 Ibid. 913 West, ‘Raging Against the Machine: Network Gatekeeping and Collective Action on Social Media Platforms' (n 901) 31-32, 914 See, eg, Tierney McAfee, ‘Instagram Now Allows Photos of Women Breastfeeding’, People (online at 17 April 2016) . 915 Note that automated tools collected these images as part of Case Study One. 916 Hart, ‘Positivism and the Separation of Law and Morals’ (n 339) 607.



The different probabilities of removal for women’s bodies compared to men’s bodies raise alarms about potential double standards in Instagram’s moderation processes. Overall, the platform appears to enforce its rules much more often against female nudity than male nudity. Specifically, the probability of removal for Male, Rear, Full Nudity is 0.6 per cent followed by 1.4 per cent for Male, Rear, Partial Nudity, which contrasts with the 10.2 per cent probability of removal for Female, Rear, Full Nudity and 18.4 per cent for Female, Rear, Partial Nudity. The potential for gender-based double standards is further supported by odds testing, which indicates that the odds of removal for an image depicting a woman’s body is 16.75 times higher than for a man’s body. These results are concerning as they suggest that images of male forms are more leniently moderated than depictions of female forms in practice.

Surprisingly, in stark contrast to the overall trend of inconsistent moderation, Instagram appears to consistently moderate some types of content. As illustrated in Table 3, the platform removed 100 per cent of both types of pornographic content and 90.5 per cent of Other Explicit content, most likely through the development and deployment of automated systems. The enormously variable rates of removal in this case study suggest that processes for moderating content on Instagram are consistent when its executives have vested interests in achieving certain


outcomes (interest-outcome consistency).917 As previously noted, the platform’s executives appear to be directing a significant amount of resources to the aforementioned clean-up project, including ‘pruning trolls’,918 and to proactively removing spam and hateful comments. In July 2019, for instance, Adam Mosseri asserted Instagram’s ongoing ‘commitment to lead the fight against online bullying’: ‘It’s our responsibility to create a safe environment on Instagram. This has been an important priority for us for some time, and we are continuing to invest in better understanding and tackling this problem’.919 This comment, among others, raises a number of red flags: for instance, how do Instagram’s executives decide what problems the company will focus on? Will the platform’s clean-up project increase the risk of censorship that many marginalised individuals and groups face when participating online? What issues are key decision-makers giving less attention to or perhaps ignoring? While there are no easy answers to these questions, there is an obvious need for the platform’s executives to clarify the ways in which they make normative judgments about what content is prohibited and what is not.

Disparate removal rates across categories of content in this case study also call into question the extent to which Instagram is setting, maintaining and enforcing rules around content in the best interests of everyday users. As previously noted, the platform fervently advances a community-oriented rhetoric about its network creating spaces for the expression of different users from around the globe.920 The significant number of potential false negatives, however, suggests that Instagram falls short of its community-oriented rhetoric and goals in practice. Users can attempt to challenge the platform’s decision-making around content, but successful outcomes appear to be limited to celebrity users, or everyday users who have managed to gain media attention through individual or collective action.921 It is clear that there are trade-offs inherent in processes for moderating content that could have particularly detrimental impacts on women’s online expression and participation. Without greater participation, transparency and accountability in particular, it is difficult for the public to trust that Instagram is making decisions about content in the best interests of everyday users rather than a privileged few.

917 See generally John Sivacek and William Crano, ‘Vested Interest as a Moderator of Attitude–Behavior Consistency’ (1982) 43(2) Journal of Personality and Social Psychology 210 . 918 Thompson (n 344). 919 Mosseri, ‘Our Commitment to Lead the Fight against Online Bullying’ (n 514). 920 See, eg, Instagram, ‘Terms of Use’ (n 73) [The Instagram Service]. 921 West, ‘Raging Against the Machine: Network Gatekeeping and Collective Action on Social Media Platforms' (n 901) 33 ff; Shrivastava (n 360).



C Instagram’s Highly Unpredictable Regulatory Culture: A Trend of Inconsistent Moderation across Case Studies

Much like the preceding Chapter, the results of this case study underline several deficiencies from a rule of law perspective, which pose an ongoing risk of arbitrariness for users. A major theme across the values of formal equality, certainty and reason-giving is that it remains unclear which party removes content, for what reasons and by what means. Take, for instance, the Partially Nude or Sexually Suggestive category that has a roughly 50/50 chance of content being removed or not removed. A corollary is that for most removed images in this category, there is generally a like image that was not removed, and vice versa, which makes it very difficult for users to make sense of content moderation processes. Another significant deficiency is that the platform excludes users from participating in processes for setting, maintaining and enforcing content policies. This exclusionary approach is indicative of a top-down, ‘command and control’922 governance style rather than one founded on openness and multi-stakeholder engagement. Hence it is unsurprising that anecdotal explanations for content removal on Instagram flourish in the public sphere.

Additionally, the results suggest that the role of ideology and power in shaping processes for moderating content on Instagram warrant further critical evaluation. The platform continues to work towards becoming the safest and most inclusive social media platform in the world, a mission initially spearheaded by Systrom after he visited Disneyland.923 As Instagram’s former public policy director Nicky Jackson Colaco stated, in the context of the impetus for this mission, ‘What we’re saying is that we want to be in a different world’.924 Attempts to improve the online environment are a necessary, even noble goal given the extent to which the internet is now plagued by hate speech, misogyny and many other forms of vitriol.925 It is concerning, however, that the platform has once again excluded users from important regulatory discussions around what a ‘different’926 and possibly more sanitised world should look like. In doing so, there is wide scope for Instagram’s executives to make value-laden decisions about the ways that users’ expression through content should be enforced in practice. Platform executives are also likely to make decisions about whose concerns will be given priority and whose will not. This underlines that a choice to strive for a different or better platform will not

922 Black, ‘Decentring Regulation: Understanding the Role of Regulation and Self-regulation in a “Post- regulatory” World’ (n 196) 105. 923 Thompson (n 344). 924 Ibid. 925 See, eg, Banet-Weiser and Miltner (n 464) 171. 926 A reference to Colaco’s statement: see Thompson (n 344).


necessarily result in positive change, especially when not all stakeholder groups appear to be represented in decision-making processes.

There is now a clear need for Western society to determine what we should expect from moderation processes when content is not regulated by users themselves. Given that Instagram’s terms and guidelines explicitly prohibit all of the images in this case study, one argument is that we should ideally expect those images to be removed and therefore no false negatives. In the context of Case Study One, given that coded images do not appear to violate Instagram’s terms and guidelines, we should ideally expect that none of those images would be removed by Instagram. Because Instagram does not report who is responsible for removing any given image, based on data in this thesis, we might instead expect that images are removed at broadly similar rates across each of the categories of different body types. Several factors, however, limit the extent to which one can form expectations about content moderation in practice. The first is that mistakes in processes for moderating content are inevitable at the scale that platforms operate. Another is that Instagram, like other platforms, is a private corporation with wide discretionary power to moderate content on a case-by-case basis and as its executives see fit, except where certain legal obligations exist.927 These factors, among others, demonstrate that forming expectations around content moderation is far from straightforward in practice. It is against this backdrop that I propose several ways that Instagram can improve its moderation processes, in both a formal and substantive sense, in Chapter Seven.

Overall, the empirical results show that Instagram’s inconsistent trend of moderation continues across the coded sample of explicitly prohibited images of women’s bodies and some men’s bodies. This trend is also evident in the coded sample of images depicting women actively breastfeeding, which are explicitly allowed. Like in Case Study One, users who post explicit content and some explicitly allowed content remain largely unguided in Instagram’s complex regulatory system, which starkly contrasts with the Anglo-American ideal of the rule of law. The lack of predictability that I identify in this Chapter, along with deficiencies in formal equality, certainty, reason-giving, transparency, participation and accountability, reinforces the need for institutionalised constraints on arbitrariness in moderation processes. These deficiencies further support the overarching claim in this thesis that Instagram appears to promote and reinforce a highly unpredictable regulatory system.

927 Roberts, 'Content Moderation' (n 27) 1; Witt, Suzor and Huggins (n 27) 558.



IV CONCLUSION

This Chapter has empirically evaluated whether 980 explicitly prohibited images depicting women’s bodies, and some images of men’s bodies, in nine like thematic categories of content were moderated alike on Instagram in practice. Across these categories, I identified three headline findings. The first was that up to 56.8 per cent of images were not removed by the platform or by the user despite arguably violating Instagram’s content policies, and are therefore potential false negatives. Second, I found that the odds of removal for an explicitly prohibited image depicting a woman’s body is 16.75 times higher than for a man’s body. Odds testing supports my hypothesis (H2) that explicitly prohibited images depicting women’s bodies are moderated at a different rate to explicitly prohibited images of men’s bodies. Third, Instagram moderated the Female Nipples; Porn Spam with a Clear Watermark; Appears to be Porn Spam and Other Explicit categories in highly consistent ways. The empirical results suggest that concerns about inconsistency in the outcomes of content moderation might not be unfounded.

Application of my rule of law framework to this case study highlights a lack of formal equality and certainty, as well as deficient reason-giving, participation, transparency and accountability practices. I argued that these deficiencies are significant normative concerns which pose an ongoing risk of arbitrariness for women and users more broadly. One is the lack of predictability that I identified in the outputs of content moderation. Another is apparent double, even multiple, standards in the moderation of women’s bodies that have the potential to reinforce heteronormative portrayals of women in society. Moreover, there is cause for concern about the extent to which users in their role as volunteer regulators might be policing female nudity more than male nudity on gendered or other grounds.

This Chapter concludes the work of empirically examining the extent to which the moderation of images depicting women’s bodies on Instagram aligns with the rule of law. Together with Case Study One, it has provided empirical evidence of the input/output relationship in processes for moderating different types of images on Instagram. In the following Chapter, I explain how the inconsistent trend of moderation and deficiencies in formal equality, certainty, reason-giving, transparency, participation and accountability in this and the preceding Chapter raise concerns from a feminist perspective. I argue that many of these concerns are symptomatic of deep-seated societal issues that traverse online and more traditional offline contexts.



CHAPTER SIX: AN EVALUATION OF EMPIRICAL RESULTS FROM A FEMINIST PERSPECTIVE

Thus far I have empirically examined the moderation of some images depicting women’s bodies, and a smaller sample of men’s bodies, on Instagram against my rule of law framework. Across both case studies, I identified an overall trend of inconsistent moderation, as well as an apparent lack of formal equality, certainty, reason-giving, accountability, transparency and participation in the platform’s moderation processes. I explained that the two main explanations for apparent inconsistencies are content removal by users themselves and direct regulatory intervention by Instagram. The purpose of this Chapter, which proceeds in five parts, is to enrich my preceding rule of law analysis by evaluating the empirical results from both case studies through a feminist lens. Overall, I argue that the results raise an array of feminist concerns, the common thread among them being potential arbitrariness.

I commence this Chapter by outlining the possibilities of ‘digital feminisms’.928 It is important to do so given that the connective affordances of online platforms can be catalysts for change and social good. For instance, as part of burgeoning fourth-wave feminist movements,929 platforms can open spaces for non-normative expression outside of conventional media channels and enable users to disseminate feminist ideas.930 I explain, however, that the possible benefits of digital feminisms are inextricably linked to a number of potential perils, especially for the expression of content depicting women’s bodies.931 The remainder of this Chapter, therefore, turns to identify and analyse what appears to be at stake for users who post depictions of women when content is moderated on Instagram in potentially arbitrary ways. For reasons already outlined, I focus on direct regulatory intervention by the platform, rather than content removal by users themselves.

Part II explores apparent gender-based double standards that can manifest on Instagram in several ways. For example, the platform’s explicit prohibition against ‘some photos of female nipples’,932 but not male nipples, is an obvious double standard.933 There is also potential for

928 See, eg, Baer (n 125) 17. 929 See, eg, Matich, Ashman and Parsons (n 126) 337. 930 See, eg, Baer (n 125) 18 ff. 931 See, eg, Marwick, 'None of This Is New (Media): Feminisms in the Social Media Age' (n 119) 310. 932 Instagram, ‘Community Guidelines’ (n 185). 933 It is important to note that there are long-standing double standards in Anglo-American media regulation: see, eg, Sharon Mavin, Patricia Bryans and Rosie Cunningham, ‘Fed-Up with Blair’s Babes, Gordon’s Gals, Cameron’s Cuties, and Nick’s Nymphets: Challenging gendered Media Representations of Women Political Leaders (2010) 25(7) Gender in Management: An International Journal 550.


gender-based double standards in the outcomes of content moderation, as evidenced by the results from Case Study Two, and in reporting features, partly given that users are seemingly able to report content an unlimited number of times. I argue that apparent double standards are concerning for a number of reasons, such as the potential for the expression of content depicting female forms to be limited to a far greater extent than depictions of men. Double standards can also reinforce systemic inequalities and normative ideals around the appropriateness of different female bodies for public view.

Part III considers a range of potential autonomy deficits and negative lived experiences that can arise for users posting content of women on Instagram. To the extent that content was removed by the platform, the overall trend of inconsistent moderation suggests that some users lack autonomy over the representation of female forms. Specifically, the empirical results in both case studies point to an apparent lack of autonomy-enabling conditions, including freedom of choice and self-definition. These possible deficits in individual autonomy are compounded by the range of negative lived experiences that can flow from content removal, such as confusion and shame,934 especially in the context of possible false positives. This Chapter also explores potential empowerment stories for women who, amid the risk of arbitrary content moderation, appear to be subverting Instagram’s regulatory system to varying degrees and in different ways.

Having outlined concerns about double standards, autonomy deficits and negative lived experiences, Part IV evaluates how these issues are seemingly linked to broader uncertainties around the normative foundations of Instagram’s regulatory system. A foremost uncertainty is whether the platform’s commercial prerogatives take precedence over the concerns of women and other users. Another is the tension between some feminisms and the neoliberal foundations of platforms, which can give rise to ‘a postfeminist sensibility’.935 By evaluating this broader context, I seek to illuminate how women’s bodies are often complex sites of contest between different regulatory actors and bound up with social, cultural and other issues that transcend online social spaces.936 Given that decisions around content can have far-reaching impacts, I argue that content moderation processes should be subject to greater regulatory oversight and accountability measures. Part V concludes this Chapter.

934 Duguay, Burgess and Suzor (n 94) 2. 935 Rosalind Gill, Gender and the Media (Polity, Cambridge, 2007) 255; Baer (n 125) 20. 936 See, eg, Matich, Ashman and Parsons (n 126) 338.



I THE POSSIBILITIES OF DIGITAL FEMINISMS

Digital feminisms have purportedly emerged from the intersections of digital technologies and activism.937 Online platforms are key drivers of digital feminisms as they offer users a range of possibilities, such as forming groups and making certain issues visible in local, national and international contexts – networked activism.938 Users can employ different features, such as hashtags, or more specific affordances, like Instagram Stories, to attempt to make meaning in the face of language and other barriers.939 Instagram and other predominantly visual platforms also have the potential to open spaces for non-normative expression, identities and practices in everyday media.940 A result is that some ‘[f]eminist scholars have described digital feminist activism as a departure from conventional modes of doing feminist politics, arguing that it represents a new movement or a turning point for feminism in many ways.’941 Baer goes so far as to describe the feminist politics that have emerged from digital platforms as, in a sense, the ‘redoing of feminism for a neoliberal age’.942

Some of the most exciting possibilities of digital feminisms are for diverse feminist constituencies to educate others and challenge the status quo.943 Feminist memes, like those in Figure 20, exemplify the former. Specifically, in Figure 20(A), a user presents content that seemingly appeals to the pop culture sensibilities of contemporary media in order to communicate a feminist issue:944 the inappropriateness of rape jokes. Moreover, the decentralised design of platforms can assist users in challenging existing power structures. One prominent example of this is the body positive movement on Instagram, which Cohen et al contend does in fact represent non-normative, previously underrepresented bodies in contrast to traditional media.945 Gurung argues, ‘The ideal of beauty is no longer controlled by a few people who are

937 Baer (n 125) 17. 938 See generally Matich, Ashman and Parsons (n 126) 337. 939 Baer (n 125) 17. 940 Ibid 18-19; Matich, Ashman and Parsons (n 126) 345. 941 Baer (n 125) 18. 942 Ibid 17. 943 Marwick, 'None of This Is New (Media): Feminisms in the Social Media Age' (n 119) 310; Samantha Thrift, ‘#YesAllWomen as Feminist Meme Event’ (2014) 16(6) Feminist Media Studies 1090 ff; Baer (n 125) 18. 944 See, eg, Highfield and Leaver, ‘Instagrammatics and Digital Methods: Studying Visual Social Media, from Selfies and GIFs to Memes and Emoji’ (n 85) 47. 945 Cohen et al (n 21) 55. It is important to note that there are ongoing issues with white normativity in feminist spaces, and some body positive content promotes and reinforces normative body standards: see, eg, Noor Al-Sibai, ‘Robbie Tripp Put the Final Nail in Body Positivity’s Coffin with His ‘Chubby Sexy’ Music Video’, Wear Your Voice (online at 28 May 2019) . Indeed, there are different and sometimes conflicting conceptions of body positivity: see Lora Grady, ’15 Canadian Women on What It Really Means to Be


deciding it. Now it is represented by all of us; who we follow; what we want the world to look like; all of us are responsible for it’.946 This is significant given that greater representation of different body types, experiences and voices on Instagram has the potential to not only amplify different voices, but to normalise more diverse content in mainstream media.947

Another possibility is for content on social media networks to challenge orthodox conceptions of what is appropriate for public view and, in turn, to act as a catalyst for social change. Useful illustrations of this are countercultural Instagram accounts, including @effyourbeautystandards, which I previously discussed in Part IV of Chapter Two, and @stopcensoringchildbirth, which aims to destigmatise depictions of childbirth, among other subject matters. These accounts have to varying degrees gained mainstream media attention and seemingly furthered their campaigns to reconfigure certain social constructs. Additionally, platforms enable some high-profile users to advocate for different causes. Take, for example, activism by the founder of the @i_weigh (#iweigh) movement Jameela Jamil. In 2019, Jamil used her @i_weigh account to generate global media attention around the dangers of ‘taking

Body Positive RN’, Flare (online at 23 January 2018) . 946 See, eg, Evan Ross Katz, ‘How Instagram Helped Democratize Beauty’, Mic (online at 30 August 2017) . 947 Matich, Ashman and Parsons (n 126) 346-348.


diet advice from celebrities’948 like Amber Rose, who once promoted ‘flat tummy products for pregnant women’,949 and Kim Kardashian West, who promoted appetite-suppressing lollipops in 2018.950 Jamil’s online social activism was a catalyst for Instagram updating its content policies around weight loss products in September 2019.951 More broadly, Instagram provides a platform for women to advocate for their rights,952 along with those of different groups and individuals in society. However, as the case studies in this thesis have shown, the fact that users post content does not mean that their expression will remain on the platform.

Hence with these benefits come a number of possible causes for concern about the expression of different women’s experiences through content.953 Many of these concerns, which I outline in the remainder of this Chapter, stem from the fact that online platforms emerge from the patriarchal gender politics of the West. As Western societies continue to police women’s bodies,954 so do online platforms as they compete in an intensely neoliberal, male-dominated marketplace.955 A result is that patriarchal paternalist systems for policing and controlling female forms extend beyond traditional offline environments and, while it may seem otherwise, they are often just as prevalent on social media platforms.956 This is not to suggest that Instagram is wholly responsible for the issues faced by users. There are no quick-fix solutions to perennial, often deep-seated issues around the representation and depiction of women’s bodies. I argue, however, that the platform may still bear some social responsibility where it supports or reinforces cultural norms that impact women’s self-expression on its network.

948 @jameelajamilofficial (Jameela Jamil) (Instagram, 9 April, 2019) ; @jameelajamilofficial (Jameela Jamil) (Instagram, 28 May 2019) . 949 @jameelajamilofficial (Jameela Jamil) (Instagram, 20 June 2019) .

950 See generally Arwa Mahdawi, ‘Kim Kardashian West Shocks Fans with Ad for Appetite-Suppressing Lollipops’, The Guardian (online at 17 May 2018) . 951 See, eg, Sangeeta Singh-Kurtz, ‘Instagram Will No Longer Promote Diet Products to Minors’, The Cut (online at 18 September 2019) . Note also Dr Ysabel Gerrard’s involvement in this recent policy change: see The University of Sheffield, ‘New Instagram Policy Bans Harmful Weight Loss Content for Younger Users’, Department of Sociological Studies News (online at 19 September 2019) . 952 See generally Matich, Ashman and Parsons (n 126) 337. 953 See generally Lee Edwards, Fiona Philip and Ysabel Gerrard, ‘Communicating Feminist Politics? The Double-Edged Sword of Using Social Media in a Feminist Organisation’ (2019) Feminist Media Studies (advance) . 954 See, eg, Chesney-Lind (n 421) 1. 955 Matich, Ashman and Parsons (n 126) 338. 956 See generally Marwick, 'None of This Is New (Media): Feminisms in the Social Media Age' (n 119).

156

CHAPTER SIX: AN EVALUATION OF EMPIRICAL RESULTS FROM A FEMINIST PERSPECTIVE

I will now explore some of these potential impacts, starting with gender-based double standards.

II GENDER-BASED DOUBLE STANDARDS

To the extent that images of women’s bodies are removed at different rates due to regulatory action or inaction by Instagram, there are a number of causes for concern through a feminist lens. The first is Instagram’s explicit prohibition against ‘some photos of female nipples’, but not male nipples, which is a patent gender-based double standard. The platform’s current policy stance, as explained by Jillian York in Figure 21, is that depictions of female nipples are appropriate only in paintings and sculptures. Images of male nipples, however, are not explicitly prohibited or even mentioned in Instagram’s content policies.957 The 100 per cent probability of removal for images depicting female nipples in Case Study Two suggests that this prohibition is strictly enforced in practice. One of the most troubling aspects of this highly gendered policy is how it fundamentally constrains expression of content depicting women’s bodies to a greater extent than men’s bodies. Another is that the central message of Instagram’s prohibition appears to be that women should ‘cover up’, presumably in line with ‘a narrow ideal of social acceptability’,958 or ‘consider themselves cut off’.959 These feminist concerns, among others, have given rise to a number of large-scale protests. Take, for example, demonstrations outside Instagram’s New York office in June 2019, as illustrated in Figure 22,960 and users adopting Micol Hebron’s genderless nipple template, as depicted in Figure 23.

957 Instagram, ‘Community Guidelines’ (n 185). 958 See generally Toerien and Wilkinson (n 418) 69. 959 Tom Macdonald, ‘A Not-So-Modest Proposal: Instagram, Free the Nipple for the Inauguration, Vogue (online at 20 January 2017) . 960 Daniel Kreps, ‘Anti-Censorship Activists Strip Nude outside Facebook HQ to Fight Nudity Ban’, Rolling Stone (online at 2 June 2019) .


Some protestors take issue with the way Instagram’s explicit prohibition can entrench gendered practices. By setting a policy that depictions of male toplessness are acceptable but most depictions of female toplessness are not, the platform is arguably sending a message to users that female nipples should remain hidden in public.961 This signalling is problematic in light of Butler’s contention that gender is an act,962 something that someone does, rather than something that one simply is.963 Butler explains, ‘Gender is the repeated stylization of the body, a set of repeated acts within a highly rigid regulatory frame that congeal over time to produce the appearance of substance, of a natural sort of being’.964 In other words, according to Arthurs and Grimshaw, ‘Gender is a process rather than a property of bodies, in which the 'conversation' between the body and the social is continually recreated’.965 There is a risk that users, as subjects of Instagram’s regulatory system, will enact normativity on depictions of female nipples in line with the platform’s apparent ‘constrained choice of gender style’.966 One of the ways that users might do so is by adopting the practice of censoring female nipples in different media, or by reporting content that does not censor women’s nipples. As different societal actors continue to enact prevailing gender constructs, such as hiding female nipples from public view, doing so becomes more and more entrenched.

961 Reena Glazer, ‘Women’s Body Image and the Law’ (1993) 43(1) Duke Law Journal 113, 115. 962 See, eg, Butler (n 92) 188. 963 Sara Salih, Judith Butler (Routledge, 2002) 55. 964 Butler (n 92) 25. 965 Arthurs and Grimshaw (n 91) 9. 966 Salih (n 963) 56.



Others take issue with the potential for Instagram’s female nipple ban to reinforce the idea that women exposing their breasts can be damaging to society.967 The policing of female bodies in this way is often bound up with heterosexual male myths, including that the female breast is a purely sexual organ, which it is not.968 Another is that female nipples and, by extension, women are vulnerable to harm because their breasts might sexually arouse others.969 Platforms creating policies that appear to align with the male gaze is enormously problematic as it suggests that women, but not men, should be mindful of how exposing their nipples can impact public viewers.970 The burden is therefore on women to adjust their expression, bodies and behaviours to cater for a male-centric world. Instagram’s prohibition of ‘some female nipples’971 also has the potential to reinforce a patriarchal paternalist construction972 of women as an intrinsically vulnerable class of persons.973 The platform’s decision to regulate the visibility of female nipples is ultimately a decision to treat women as other than men.

Platform executives generally argue that explicit prohibitions against female nipples are justifiable on several grounds. One supposed justification is that online platforms must adhere to the content policies set by third-party app store providers in order for their technologies to be available for users to purchase or download.974 The content policies set by providers, like Apple Inc, which has been accused of enforcing double, even multiple, standards around nudity,975 can significantly influence the content policies set by platforms. Moreover, some platforms suggest that the difficulties that arise from attempting to identify whether a topless person consents to their picture being taken, as well as distinguishing between underage and overage toplessness, justify female nipple bans.976 The reality is that when faced with potential liability for content, like an image depicting an underage girl’s nipples, there is a strong incentive for platforms to err on the side of caution by imposing a broad prohibition on all depictions of this subject matter. The challenges of moderating user-generated expression at scale arguably

967 Glazer (n 961) 113. 968 Matich, Ashman and Parsons (n 126) 351. 969 Glazer (n 961) 116. 970 Ibid. 971 Instagram, ‘Community Guidelines’ (n 185). 972 Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (n 2) 96. 973 Some advocates for patriarchal paternalism assert that prohibitions like this are in the best interests of certain subjects of regulation, who are often in want of protection: see generally Dworkin (n 486). 974 See, eg, Nick Saint, ‘Apple’s Triple-Standard on Nudity in the App Store’, Business Insider (online), 13 May 2010 . 975 Ibid. 976 It is important to note that this potential justification was brought to my attention by a Facebook employee via a group Skype call in 2018.


increase this incentive. Much like user-reporting features, however, these potential justifications can enable platforms to sidestep criticism, and potentially legitimise content policies that might have roots in patriarchal paternalistic views of what is proper for women.977

In addition to Instagram’s explicit female nipple ban, there is potential for gender-based double standards across the various components of Instagram’s regulatory system. Take, for instance, the empirical results in Case Study Two, which suggest that policies around depictions of nudity are enforced much more often against depictions of women than men. The probability of removal for Male, Rear, Full Nudity is 0.6 per cent followed by 1.4 per cent for Male, Rear, Partial Nudity, which starkly contrasts with the 10.2 per cent probability of removal for Female, Rear, Full Nudity and 18.4 per cent for Female, Rear, Partial Nudity. Moreover, the odds of removal for an image depicting a woman’s body is 16.75 times higher than for a man’s body. These results suggest that moderation processes on Instagram could be promoting and reinforcing gender-based double standards in favour of men and to the detriment of some women, even when content is seemingly alike. Disparate treatment of this kind is problematic as it suggests that users who post certain images of women are subjugated when participating on the platform, which can result in autonomy deficits, negative lived experiences and other issues that I elaborate upon in the following Part.

Finally, and more specifically, there is potential for users to report content on Instagram in line with gender-based double standards. Despite reporting features being the key component of content moderation processes, it appears that users (and non-users) can police content an unlimited number of times and for groundless reasons. Rogan points out that this can create a surveillance culture:

…contemporary notions of “surveillance” in the context of social media relies heavily on reciprocity, which means that the roles of “guards” and “prisoners” are disrupted and blurred – users of social media are able to embody both of these roles simultaneously in a kind of fluid and participatory “social surveillance”.978

Instagram’s apparent failure to limit or restrain possibly capricious surveillants is not good enough. User reports, among other forms of content moderation, have the potential to amplify and reinforce the dominant ‘technocultures’979 of online platforms, which often encompass

977 Crawford and Gillespie (n 232) 412. 978 Rogan (n 221) 45. 979 See, eg, Duguay, Burgess and Suzor (n 94) 1.


misogynistic perspectives and Western societal malaise around women’s bodies more broadly.980 This ‘social surveillance’981 also operates in addition to the omnipresent regulatory gaze of the platform and is especially perilous for those who post to a wider yet unknown audience through hashtags. While there is no easy way to determine the extent to which user reports are imbalanced or biased, there is a risk that groundless reporting might lead some users to alter their social media practices. This potential outcome, among others, is arguably a threat to online expression in itself.

III AUTONOMY DEFICITS AND NEGATIVE LIVED EXPERIENCES

The overall trend of inconsistent moderation in both case studies raises concerns about possible autonomy deficits and negative lived experiences for users when participating on Instagram. As previously explained, platforms generally describe themselves as neutral conduits through which content can flow.982 A foremost ‘passive’ function that Instagram, like other platforms, serves is ‘managing online content hosted in their digital spaces’.983 The empirical results in this thesis, however, challenge Instagram’s rhetoric of neutrality. The 22 per cent rate of false positives in Case Study One suggests that some users’ expression might have been arbitrarily silenced or erased.984 By contrast, the 56.8 per cent rate of false negatives in Case Study Two suggests that explicitly prohibited content posted by some users is in effect permitted and possibly amplified. These apparent inconsistencies are concerning because they can limit some autonomy-enabling conditions for women, including their social interactions985 and participation in important discourses of the day.986

The empirical results raise additional concerns about the extent to which some users can choose their own courses of action and freely define themselves. A key finding in this thesis is that Instagram’s regulatory culture is highly unpredictable: it is subject to change without warning and in ways that make it difficult to predict the outcome of posting an image

980 Banet-Weiser and Miltner (n 464) 171. 981 Ibid 133. 982 Suzor, ‘Digital Constitutionalism: Using the Rule of Law to Evaluate the Legitimacy of Governance by Platforms’ (n 26) 3. 983 Gregorio (n 186) 80. 984 Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (n 2) 98. 985 Suzor, ‘Digital Constitutionalism: Using the Rule of Law to Evaluate the Legitimacy of Governance by Platforms’ (n 26) 9. 986 Witt, Suzor and Huggins (n 27) 592; Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (n 2) 98; Australian Human Rights Commission, Role of Freedom of Opinion and Expression in Women's Empowerment (Web Page, 13 June 2013) .



(eg, an image remaining on the platform or being removed). This is problematic for users because their choice to post content is ultimately subject to Instagram’s decision-making processes. As part of this, the way that some women describe themselves in a post can be subject to regulatory intervention. Intervention could involve a regulatory actor passing judgment on, for instance, the precise words that a woman uses to describe herself, what she identifies as important to her life and how she chooses to visually represent herself. The point here is that users’ expression about female experiences might not always be theirs and some decisions might be made for them on Instagram.

The practice of what appears to be self-sensorship is a useful example of how women’s autonomy is bound up with cultural norms of use and the platform’s regulatory culture. Self-sensorship is a form of regulatory circumvention, which is distinct from self-censorship in the way that it removes ‘the experience of the senses’.987 A user, for instance, might self-sensor their expression by depicting a female form behind objects or materials as in Figure 24. Others might use light, shadow and distance, often in very creative ways, to obscure different body parts.988 Self-sensorship is particularly apparent in the coded sample in Case Study Two. As an inherently feminist and feminised practice, self-sensorship can provide a powerful avenue for women to circumvent rules about content and social norms around the appropriateness of certain female forms for public view more broadly.989 However, given that self-sensorship can be in line with and in resistance to a platform’s content policies,990 the extent to which self-sensored images constitute empowerment stories is subject to interpretation.

987 Olszanowski (n 854) 83; Roberts, 'Aggregating the Unseen' (n 128) 4. 988 Eileen Mary Holowka, 'Between Artifice and Emotion: The “Sad Girls” of Instagram' in Kristin Bezio and Kimberly Yost (eds), Leadership, Popular Culture and Social Change (Edward Elgar Publishing, 2018) 186. 989 Ibid 183. 990 Olszanowski (n 854) 83.



On the one hand, the results show that users have varying degrees of success in circumventing Instagram’s content policies. Most images that appear to be self-sensored and were not removed fall within the Partially Nude or Sexually Suggestive category in Case Study Two, a category with an overall 58.8 per cent probability of removal. That so many self-sensored images in this category remained on the platform suggests that self-sensored content may fare better than the category average, with a greater than 50 per cent chance of not being removed. By self-sensoring their content, some women could be asserting their autonomy to share their stories and attempting to challenge the heteronormative templates that Instagram’s policies seem to impose on depictions of female forms.991 In Figure 24, for example, users cover their nipples with tape, bubbles and photos, as well as with computer-generated shapes, stars, emojis and other creative visual features. There appear to be culturally transgressive aspects to these images, as users are arguably drawing attention to a range of taboo subject matters, including body fat, stretchmarks and intimate body parts. As Baer explains, ‘The female body has long functioned as a key site for feminist activism’.992 In these ways, users can be important actors in debates about content moderation processes and social issues more broadly.

On the other hand, the practice of self-sensorship, like self-censorship, underlines the potential for some women to lose autonomy over their self-expression. This can be the case even in the

991 See the discussion of ‘templatability’ in Leaver, Highfield and Abidin (n 76) 2, 214. 992 Baer (n 125) 19.


context of regulatory circumvention because self-sensorship is ‘a resistance to and compliance with the protocols of the medium’ (ie, the heavily controlled, sociotechnical architectures of online platforms).993 A major concern here is that self-sensorship places the onus on users to adjust their expression in line with a platform’s content policies. Implicit in these adjustments can be users’ time, technical or other labour and emotional energy. The prospect of undertaking this additional labour might result in some users leaving the platform and forgoing its connective affordances altogether. It is important to note here that some users might self-sensor or self-censor their expression for a range of diverse reasons, with or without awareness of complex issues around content moderation. Regardless of the varied motivations for self-sensorship and self-censorship, images like those in Figure 24 arguably call attention both to Instagram’s highly unpredictable regulatory culture and to the fact that women appear to be neither entirely constrained by nor able to fully transcend their gendered bodies on Instagram.994

Another significant concern from a feminist perspective is that while content is moderated at scale, potential arbitrariness in the outcomes of moderation can create negative lived experiences for women and other users at an ‘intimate scale’.995 By removing images in this case study, the platform has seemingly made normative judgments about the appropriateness of certain depictions of female forms for public view and signalled to some users that their content is objectionable.996 Signals like this can give rise to a range of feelings for users, including anger, shame, isolation and humiliation, given that content can be an important vehicle for self-expression.997 As Gillespie contends, ‘Having a photo removed, one that a user hadn’t anticipated might violate the site’s guidelines, can be an upsetting experience’.998 Negative lived experiences can be particularly pronounced when there are differences in how similar content is moderated and when, from outside the black box, it appears that the platform is targeting specific individuals or groups. There can also be financial costs for users who rely on social media to market their goods or services.999 The often intimate impacts of content

993 Holowka (n 988) 183. 994 See generally Arthurs and Grimshaw (n 91) 15. 995 Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (n 2) 94. 996 See generally Baehr (n 483) [1.1.1 Procedural Accounts of Personal Autonomy]. 997 Kaye, Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression (28 May 2019) (n 257). 998 Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (n 2) 98. 999 See Duguay, Burgess and Suzor (n 94); Bailey (n 429) [Instagram Censorship].


removal underline that the subjects of platform governance are people rather than disembodied ‘data points’.1000

It is important to note that marginalised individuals and groups generally face a higher risk of arbitrariness, including negative lived experiences, when participating on online platforms.1001 As in offline contexts, those who occupy the margins of society, perhaps at the intersections of body type, race and sexuality, often occupy the margins of a platform.1002 A result is that marginalised voices generally navigate platforms in different ways to mainstream users, partly in response to the actual or perceived threat of other users reporting their content, or direct regulatory intervention by a platform.1003 In the context of Instagram, Roberts contends that this is where ‘the effects of takedowns are rendered even more personal. It is a trade-off that underscores the very nature of Instagram itself, which is not a gallery, not a living room, not a shared private space among friends – despite masquerading as such’.1004 Like the ideal of the rule of law, a feminist approach highlights the desirability of processes for moderating user-generated content adhering to basic public safeguards, such as certainty and equality. Safeguards operate to protect marginalised users, including women who allege censorious content moderation,1005 and aim to ensure that all persons have equal standing and opportunities in online and offline environments.1006 The inconsistent trend of moderation in both case studies, however, suggests that there is an ongoing risk of marginalised voices being further relegated on the platform. Inconsistencies also stand in stark contrast to the platform’s community-oriented rhetoric that emphasises the importance of different groups and individuals being able to express themselves.1007

IV BROADER UNCERTAINTIES AROUND THE NORMATIVE FOUNDATIONS OF DECISION-MAKING

The risks of gender-based double standards, autonomy deficits and negative lived experiences for users when participating on Instagram are woven into broader concerns about the normative foundations of the platform’s regulatory system. The first cause for concern is, to use Chang’s

1000 Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (n 2) 94. 1001 Electronic Frontier Foundation and Visualizing Impact, ‘Offline-Online’, (Infographic, 30 August 2018) . 1002 Roberts, 'Aggregating the Unseen' (n 128) 3. 1003 Duguay, Burgess and Suzor (n 94) 9. 1004 Roberts, 'Aggregating the Unseen' (n 128) 3. 1005 See, eg, Duguay, Burgess and Suzor (n 94) 13. 1006 See, eg, Universal Declaration on Human Rights, GA Res 217A (III), UN GAOR, 3rd sess, 183rd pln mtg, UN Doc A/810 (10 December 1948). 1007 Instagram, ‘Community Guidelines’ (n 185) [The Short].


phrase, Silicon Valley’s ‘bro culture problem’.1008 As previously noted, technology companies have longstanding histories of employing the same types of people, largely white or Asian straight men with computational expertise,1009 to the particular detriment of non-white women.1010 A result is that some online platforms are ‘mirror-tocracies’1011 rather than, as neoliberal rhetoric might suggest, meritocracies. While a lack of workforce diversity is a problem for many industries, it is particularly concerning in the context of online platforms that make decisions about the ways over three billion users can communicate with each other and share information.1012 Gillespie argues:

It turns out that what they [technology companies] are good at is building communication spaces designed as unforgiving economic markets, where it is necessary and even celebrated that users shout each other down to be heard; where some feel entitled to toy with others as an end in itself, rather than accomplishing something together; where the notion of structural inequity is alien, and silencing tactics take cover behind a false faith in meritocracy. They tend to build tools “for all” that continue, extend, and reify the inequities they overlook.1013

In the context of contemporary feminisms, Marwick contends:

Social media allows feminists of all ages to tell personal stories, affectively engage with the experiences of others, collectively organize, and mobilize politically. However, social technologies – both in terms of functionality and cultural discourses and narratives – are not intrinsically feminist. While they might facilitate certain types of feminist community-building, they also lack tools for combating harassment and backlash. These platforms on which young feminist activists depend are also firmly situated in a Silicon Valley geek culture itself plagued by sexism, causing intrinsic conflicts between the ideals of feminism and those who would seek to combat it.1014

1008 Chang (n 473). 1009 Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (n 2) 201. 1010 See generally Sarah Myers West, Meredith Whittaker and Kate Crawford, ‘Discriminating Systems: Gender, Race, and Power in AI’, AI Now Institute (Report, April 2019 . 1011 The Globe and Mail, ‘Tackling Silicon Valley’s Sexist “Mirror-tocracy” Will Take A Diverse Workforce, Says Kara Swisher’ (YouTube, 11 September 2017) . A ‘mirror-tocracy’ is self-perpetuating: it reflects the values and people that are already part of that system. 1012 We Are Social and Hootsuite (n 30); Cohen et al (n 21) 47. 1013 Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (n 2) 201. 1014 Marwick, 'None of This Is New (Media): Feminisms in the Social Media Age' (n 119) 327.



There is, then, a risk of platform executives and their employees overlooking or deprioritising certain issues, such as users’ experiences of everyday violence.1015 Priority setting can strongly influence the tools that platforms build – in a capitalist marketplace, these are often tools that help to generate the most profit.1016

This leads to another concern about the extent to which decisions about the appropriateness of different female forms are influenced by Instagram’s commercial interests. As a part of the Facebook group of companies, Instagram’s chief objective is most likely to maximise profit and minimise financial loss for shareholders. One of the main ways that platforms do this is by setting, maintaining and enforcing content policies that help to attract more users and advertisers. Advertising in digital spaces is big business with Facebook earning more than $16 billion from advertisements on Instagram Stories, Instagram Feed and Facebook in the second quarter of 2019.1017 Roberts explains that there is a particular risk for women here:

… positioning of the female body within the commodified social media space leaves it room for two ends: either to be productive in terms of capital generation or to be rescinded and removed when it risks negatively impacting that production or when the capital generation does not come to the platform by beneficial means.1018

Some content may, of course, fall outside of these ends. Moreover, this is not to suggest that platforms, like Instagram, have specific policies in favour of commodifying female forms. It could be the case, however, that Instagram’s commercial prerogatives take precedence over the concerns of some women and other users.

An ongoing issue that Instagram might be more or less alert to is the risk of female forms being objectified for profit on its network. It is important to distinguish here between women objectifying themselves, for profit or any other reason, and societal actors objectifying women for commercial ends. To the extent that a woman might choose to commodify her body on social media, or intend to be sexually consumed more generally, I believe that she

1015 See, eg, Rosalie Gillet, Everyday Violence: Women's Experiences of Intimate Intrusions on Tinder (PhD Thesis, Queensland University of Technology, 2018). 1016 See generally, Srnicek (n 70). 1017 ‘Facebook Reports Fourth Quarter and Full Year 2018 Results’ (Report, 30 January 2019) . 1018 Roberts, 'Digital Detritus: 'Error' and the Logic of Opacity in Social Media Content Moderation' (2018) 23(3) First Monday . In a publication in news media, Hallqvist argues that ‘[a] possible gendered censorship from a platform as large as Instagram reflects a societal fear of the female body and promotes the power dynamics that keep the female body sexualised and commodified by men, rather than understood or celebrated by women’: Hallqvist (n 348).


should be able to do so. My view is influenced by the fact that not all women are necessarily victims of patriarchal oppression.1019 There is arguably more cause for concern in the context of Western patriarchal societies that continue to heavily objectify women,1020 often in line with the ‘normative feminine identities of mother and sex object’.1021 Glazer explains how patriarchal power structures can serve to objectify female bodies and experiences:

Male power is perpetuated by regarding women as objects that men act on and react to rather than as actors themselves. When women are regarded as objects, a great deal of importance rests on their appearances because their entire worth is derived from the reaction they can induce from men. In order to maintain the patriarchal system, men must determine when and where this arousal is allowed to take place.1022

Instagram’s policy around female nipples is an example of a male-centric regulatory system imposing reductionist categories on women.1023 On the one hand, images of women ‘actively breastfeeding’1024 are allowed, which aligns with the traditional feminine identity of mother and nurturer. On the other hand, ‘some images of female nipples’1025 are explicitly prohibited, but it is not clear exactly when such content violates the platform’s policies. It appears, then, that some explicit content is acceptable, perhaps when it is profitable, while other amateur or non-normative explicit content might be less acceptable. Whether this is the case in practice remains unclear.

Another cause for concern here is the extent to which images of women that fall outside of heteronormative framings are sanitised on Instagram.1026 The platform appears to do this by, inter alia, enabling some content that does not appear to violate content policies to be removed while not removing some explicitly prohibited content. One of the most high-profile examples of Instagram ostensibly sanitising its network of different women’s experiences is Rupi Kaur’s image in Figure 25, which comments on ‘what it’s like to be a woman and societal discomfort with menstruation’.1027 In response to an initial decision to remove the image, Kaur wrote:

1019 Arthurs and Grimshaw (n 91) 10. 1020 Matich, Ashman and Parsons (n 126) 351. 1021 Ibid 337. 1022 Glazer (n 961) 116. 1023 Dave Lee, ‘’Who’s Policing Facebook’, BBC News (online at 22 May 2018) ; Smith and Anderson (n 19). 1024 Instagram Help Centre, ‘Community Guidelines’ (n 185). 1025 Ibid. 1026 See, eg, West, ‘Facebook’s Guide to Being a Lady’ (n 402). 1027 Steve Holden, ‘Instagram Period Photo: Woman Who Took It Says She Wasn’t Being ‘Provocative’, BBC newsbeat (online at 5 April 2015) ; Emma Barnett, Period. It’s About Bloody Time (HQ, London: 2019) 84 ff.



‘Thank you @instagram for providing me with the exact response my work was created to critique…Their [Instagram’s] patriarchy is leaking. Their misogyny is leaking. We will not be censored’.1028 The potential here for images of female forms to be ‘both sanitized and sexualized’,1029 often in seemingly arbitrary ways, underlines the relatively limited parameters in which users can portray women’s bodies on Instagram. The general rule, according to Valenti, is that ‘[s]eXXXy images are appropriate, but images of women’s bodies doing normal women body things are not. Or, to put a more crass point on it: Only pictures of women who men want to f***, please’.1030 Kaur’s experience highlights that there can be a cost for women who speak their truths outside of heteronormative framings.

1028 West, ‘Facebook’s Guide to Being a Lady’ (n 402). 1029 Ibid. 1030 Ibid; Jessica Valenti, ‘Social Media Is Protecting Men From Periods, Brest Milk and Body Hair’, The Guardian (online at 30 March 2015) .


There are several broader tensions between neoliberal ideology and some feminisms, which are relevant to the context of platforms that are, in many ways, neoliberalism writ large.1031 A foremost tension centres on what McRobbie calls ‘the undoing of feminism’1032 in neoliberalism, a lens through which hegemonic discourses of individual freedom, empowerment and personal responsibility can generate ‘a postfeminist sensibility’.1033 That is, according to Baer, neoliberal societies ‘make feminism seem second nature and therefore also unnecessary to women’.1034 The neoliberal assumption that all persons are on an equal footing is problematic for numerous reasons, principally that while women and girls represent roughly half of the world’s population,1035 ‘today gender inequality persists everywhere and stagnates social progress’.1036 Attempts by some lawmakers in the US to recriminalise abortion are a particularly disturbing example of the gender-based inequalities that persist today.1037 Moreover, the discourse of self-empowerment in neoliberalism (ie, the notion that individuals can achieve whatever they set their mind to) places pressure on individuals to be catalysts of change while also deemphasising the systemic disadvantages that many people face.

Another concern is that the individualist discourses of neoliberalism can give rise to what is known as the ‘neoliberal body’.1038 Rogan explains:

A neoliberal body is one that express[es] and embodies neoliberal values, such as productivity, efficiency and self-discipline. Within these contexts, bodies (particularly bodies that are read as female) must appear to be well-controlled and worked upon – those that are seen as out of control or excessive are socially and morally reviled. The body has, therefore, become a key site of identity construction in contemporary contexts and, within the framework of social media, the body is often understood as an integral part of self-production and self-marketing for female celebrities and non-celebrities alike.1039

1031 Witt, Suzor and Wikstrom (n 219) 183-184. 1032 Angela McRobbie, The Aftermath of Feminism: Gender, Culture and Social Change (Sage, 2009) 2; Baer (n 125) 20. 1033 Gill (n 935) 255. 1034 Baer (n 125) 17. 1035 The World Bank, ‘Population, Female (% of Total Population) (Web Page, 2018) . 1036 United Nations, ‘Gender Equality: Why It Matters’ (Infographic, 2018) . 1037 See generally ACLU, Reproductive Freedom (Web Page, August 2019) . 1038 See generally Hannele Harjunen, Neoliberal Bodies and the Gendered Fat Body (Routledge, London, 2016). 1039 Rogan (n 221) 37.



Depictions of the female form posted by Kim Kardashian West,1040 like that in Figure 26, are prominent examples of the neoliberal body that ostensibly advance commercial ends under the guise of female self-empowerment and self-control.1041 Neoliberal femininity is concerning in this context because it can reinforce normative beauty and body standards and, with particular reference to the weight loss lollipop in Figure 26(B), set harmful precedents for body-related success. The postfeminist sensibility enmeshed in neoliberal discourse also has the potential to ‘re-traditionalise old notions of women being the objects of “the [male] gaze”’.1042 This is not to say that content posted by Kardashian is inherently un-feminist or that she, like others who advance the neoliberal body, cannot make feminist statements. Indeed, it is arguable that platforms are contributing to the ‘redoing’ of feminism1043 by enabling, as explained above, digital feminisms. Rather, my point is that a neoliberal construction of femininity can be harmful, especially in the context of high-profile users who seemingly enjoy a kind of regulatory immunity, perhaps due to their commercial success, and have enormous online followings.1044 The weight that neoliberalism gives to concerns of the individual can result in proponents of this ideology overlooking significant power imbalances, like those between high-profile users and their everyday followers.

Additionally, there is tension between the precarious position of women’s bodies in public hashtags and the way that neoliberal platforms individualise politics.1045 The main way that platforms reduce the political to the individual is by separating themselves from the actions of their users, largely based on legal immunity under § 230(c) of the Communications Decency Act of 1996. Platforms’ non-interventionist approach to online expression, save for illegal and harmful content, reinforces the separation between seemingly self-empowered users and neutral platforms. Platforms also maintain systems of decentralised governance with little to

1040 See, eg, Erin Klazas and Alice Leppert, ‘Selfhood, Citizenship … and All Things Kardashian: Neoliberal and Postfeminist Ideals in Reality Television’ (Research Paper 7-23-2015, Digital Commons @ursinus College, 2015) ; Elizabeth Wissinger, ‘Glamour Labour in the Age of Kardashian’ (2016) 7(2) Critical Studies in Fashion & Beauty 141. 1041 There are also examples of the neoliberal body in body positive contexts. According to Cohen et al, ‘some critics argue that, just like thin-ideal accounts, body positive accounts are becoming commodified as they grow in popularity, whereby influencers are paid to promote commercial products further argues that, during this commodification process, body positive advocates deviate from their initial body positive ideals and their Instagram content begins to resemble the more dominant appearance-focused content on Instagram’: Cohen et al (n 21) 49. 1042 Rogan (n 221) 38. 1043 Baer (n 125) 17. 1044 Rogan (n 221) 38. 1045 Baer (n 125) 30.


no constitutional safeguards,1046 such as equality and accountability, in favour of regulation by ‘an oligopoly of private entities’.1047 Users are, as explained in earlier chapters, subject to constant policing by different regulatory actors, including other users, commercial content moderators, platform executives and artificial intelligence systems. This decentred regulatory context, wherein users are generally standalone political agents, can be particularly problematic for women because their bodies are often sites of conflict in Western societies. Conflict can arise whether representations of women are non-normative or not. Baer explains:

This tension between the body as a locus of empowerment and identity formation and the body as a site of control underpins the precarity of the female body in neoliberalism, not least because it is women’s bodies much more than men’s bodies that are subject to constant regulation via modes of hegemonic femininity.1048

By underlining this tension, I am not suggesting that all women are a class of persons in want of protection.1049 Rather, my intention is to highlight that representations of female forms can be particularly vulnerable to arbitrariness in online and offline spaces – a point that proponents of neoliberalism can neglect.

Perhaps the most apparent tension is between Instagram’s advertising hook ‘capture and share the world's moments’1050 and the inconsistent moderation of images depicting female forms in practice. Take, for instance, my finding in Case Study One that up to 22 per cent of images are potential false positives; and in Case Study Two that up to 56 per cent of explicit images are potential false negatives and up to 19 per cent of images depicting women actively breastfeeding, all of which are explicitly allowed, were removed. This risk of potential arbitrariness for women in Instagram’s regulatory system is exacerbated by the platform seemingly juggling different interests, risks and responsibilities. In 2015, in the face of criticism about Instagram’s ‘ban on boobs’,1051 the platform’s co-founder and then CEO Kevin Systrom said:

I just think it's great that people are voicing their opinion and that's what we look for our community to do. And we're always looking for feedback. Of course, it's not just people that use that hashtag that want it to be a certain way. There are a lot of other constituents, and our

1046 See, eg, Suzor, Lawless: The Secret Rules That Govern our Digital Lives (n 32). 1047 Gregorio (n 186) 73. 1048 Baer (n 125) 24. 1049 Such a view might be advanced by proponents of patriarchal paternalism: see generally Dworkin (n 486). 1050 @InstagramEnglish (Instagram) (n 452). 1051 Instagram Help Centre, ‘Community Guidelines’ (n 185).



goal at Instagram is to always maintain a really safe place for the teens that use it, and the parents that use it actively as well.1052

This statement underlines that the outcomes of moderation are informed not only by the particular subject matter at issue, but also by the demands of different constituencies. Without greater transparency around decision-making and priority-setting processes, it is difficult to trust that Instagram’s executives are, at the very least, taking into account the issues that women from different backgrounds face. It is also difficult to trust that the platform is a space where female and other users can share their moments and speak the truth about their self-definition.

Diversity in the depiction of female bodies can play an important role in challenging patriarchal framings of women in online and offline environments.1053 For instance, a standalone image of a bare-chested woman who appears to be neither a ‘sex object’ nor a ‘mother’1054 can create alternative feminine identities. Portrayals of diversity can also help to challenge the extensive policing of female forms in Western societies,1055 including attempts to control and contain women in line with orthodox societal expectations.1056 This is because regulation around the appropriateness of women’s bodies for public view, such as prohibitions against depictions of female nipples, is often influenced by social stigma around female bodily functions and experiences.1057 Consequently, some fourth-wave feminisms are encouraging the portrayal of natural bodies. Matich et al conceptualise the ‘natural’ as transcending the biological:

… displays of the diverse, unruly, unfettered bodily form that work to destabilise the assumption that women’s bodies hold their value solely in their capacity to be sexually inviting and aesthetically conformist to the perceived norm of sexual attractiveness, as displayed by the demands of the heterosexual male fantasy.1058

As argued in Part I of this Chapter, platforms are playing an important role in creating spaces for the depiction of non-homogenous, natural bodies. In particular, some Instagram users and hashtags are making significant inroads with normalising bodies of all shapes and sizes and raising public consciousness about different women’s experiences.1059

1052 Rory Satran, ‘Instagram’s Kevin Systrom on Fashion and #freethenipple, i-D (online at 12 May 2015) . 1053 Matich, Ashman and Parsons (n 126) 346. 1054 Ibid 337. 1055 Ibid 352. 1056 Ibid 337. 1057 Glazer (n 961) 135; see generally Barnett (n 1027) 11 ff. 1058 Matich, Ashman and Parsons (n 126) 347. 1059 Cain (n 120) 844, 846.



Nevertheless, there is a long way to go before users are able to post diverse female experiences within the dominant discourses of online social spaces. While women’s bodies have been policed since long before social media,1060 the sheer scale of platforms means that when user-generated content is moderated in arbitrary ways, it can have far-reaching effects. Take, for example, a decision to remove images depicting female forms in Case Study One: there is not only potential for content removal to affect the users who posted those images, but also for the public to perceive Instagram’s policy stance as being in line with some kind of universal standard of appropriateness. The fact is that the decisions that platforms make around content can influence cultural, social and other issues that extend beyond their networks. Ongoing debates around the moderation of online expression and the spread of disinformation on social media, along with extremist, self-harm and many other types of vitriolic content, illustrate the extent of this influence.1061 Potential impacts, both positive (ie, when decision-making is seemingly free from potential arbitrariness) and negative (ie, when there is an apparent risk of arbitrary decision-making), are likely to become more pronounced as Instagram, like other platforms, continues to expand its user base and compete for greater cultural visibility.1062

V CONCLUSION

This Chapter has enriched the preceding rule of law analysis by evaluating the moderation of images depicting women’s bodies on Instagram through a feminist lens. To the extent that Instagram is removing images of women’s bodies at different rates, I argued that the inconsistent trend of moderation gives rise to several ongoing causes for concern. The first is the potential for gender-based double standards, especially in reporting features and the outcomes of content moderation, and the platform’s gendered prohibition of some images depicting female nipples. The second is the apparent lack of autonomy that some users have to portray diverse female bodies, as well as the negative lived experiences that can flow from potentially arbitrary content removal. A particular concern is that autonomy deficits can be markedly higher for marginalised individuals and groups.1063

Third, this Chapter raised concerns around the normative foundations of decision-making on Instagram. I discussed the potential for different societal actors to commodify certain

1060 Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (n 2) Chapter 6. 1061 Witt, Gillett and Suzor (n 104); Suzor, Lawless: The Secret Rules That Govern our Digital Lives (n 32). 1062 Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (n 2) 113. 1063 See, eg, Duguay, Burgess and Suzor (n 94) 1.


representations of female forms and silence, or sanitise, those that fall outside of heteronormative framings.1064 I also drew attention to some of the ongoing tensions between feminisms and the neoliberal discourses of platforms.1065 Against this backdrop, in the following chapter, I recommend ways in which Instagram can improve its moderation processes to address the ongoing issues that users face when attempting to navigate its unpredictable regulatory system. The recommendations that follow are based on empirical evaluation from both a rule of law and feminist perspective.

1064 See generally Matich, Ashman and Parsons (n 126) 337. 1065 Baer (n 125) 17.



CHAPTER SEVEN: CONCLUSION

In preceding chapters, I set out to empirically evaluate whether the moderation of images depicting women’s bodies on Instagram aligns with the Anglo-American ideal of the rule of law. Across both case studies, I identified an inconsistent trend of moderation, which I argued starkly contrasts with my selected rule of law values of formal equality, certainty, reason-giving, transparency, participation and accountability. I also argued, from a feminist perspective, that Instagram does not appear to be moderating images of female bodies in desirable ways. The purpose of this Chapter, which comprises five parts, is to draw the chapters of this thesis together and outline a path forward.

In Part I, I re-articulate the theoretical and methodological approaches that enabled me to empirically evaluate content moderation on Instagram and summarise key arguments from preceding chapters. Following this, in Part II, I outline four main steps that Instagram can take now to address the rule of law deficiencies and feminist concerns that I have identified.1066 First, in terms of the rule of law value of certainty, I recommend that Instagram critically evaluates the normative foundations of its rules around content and, to the extent possible, implements content policies that are free from gendered norms. I also recommend that Instagram publishes the guidelines that its moderators follow behind closed doors. Second, the platform should improve its reason-giving practices through counterfactual explanations.1067 Third, I recommend that Instagram takes steps to enhance its transparency practices by: (a) publishing transparency reports that are separate from Facebook, (b) providing meaningful and actionable data, (c) measuring moderator performance in ways that take systemic inequality into account and (d) demystifying reporting tools. Fourth, the platform should commit to greater accountability in practice by providing users with the option to appeal all decisions about content, enhancing user participation and agreeing to expert external review. More broadly, I strongly encourage scholars and other stakeholders to continue to develop digital methods for empirical legal analysis of platform governance.

Then, in Part III, I propose what I believe should be guiding principles for any attempt by nation-states to better regulate the content that passes through online social networks. I argue that it is important to outline certain key principles given that state-enacted laws could play a

1066 Suzor, Lawless: The Secret Rules that Govern Our Digital Lives (n 32) 166. 1067 Wachter, Mittelstadt and Russell (n 521) 841.


crucial role in helping societal actors independently monitor content moderation processes.1068 I emphasise the importance of policymakers engaging with key civil society organisations and duly considering guides to good regulatory design in the online environment, among other useful resources.1069 While recommendations in this Chapter are largely specific to Instagram, they are relevant to other platforms that might also be contending with the immensely complex task of moderating user-generated content at scale. Finally, in Part IV, I outline areas for future research and, in Part V, I make concluding remarks.

I SUMMARY OF ARGUMENTS AND CONTRIBUTIONS

I started this thesis with Aarti Olivia Dubey’s account of her body positive images, which did not appear to violate Instagram’s terms and guidelines, being removed from the platform.1070 I did so because this account is particularly illustrative of confusion among users about the ways content policies are set, maintained and enforced in practice. As I delved further into this controversial subject matter, it became clear that Dubey’s account is one of many that are partly fuelled by the black box around Instagram’s regulatory system.1071 This symbolic black box continues to obscure the internal workings of moderation processes from the platform’s one billion active monthly users and the public more broadly.1072

I chose to focus on the moderation of images depicting women’s bodies on Instagram for several reasons, the first being that this topic is highly controversial.1073 There are persistent claims that the platform is arbitrarily removing some images of women and, possibly, privileging the depiction of thin-idealised body types.1074 It is important to investigate potential arbitrariness because it can result in some users’ expression through content being amplified and others, especially marginalised users, being silenced.1075 Second, while there is a lack of empirical research into Instagram’s moderation processes, there is even less empirical legal research in this context.1076 Third, empirical evidence can help to raise public awareness about

1068 See generally Witt, Gillett and Suzor (n 104). 1069 See, eg, ACLU Foundation of Northern California et al, The Santa Clara Principles on Transparency and Accountability in Content Moderation (n 53); Electronic Frontier Foundation et al, Manila Principles on Intermediary Liability (n 53). 1070 Dubey (n 3). 1071 Pasquale, Black Box Society (n 29); Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (n 2). 1072 Instagram, ‘Welcome to IGTV’ (n 9). 1073 See, eg, Electronic Frontier Foundation and Visualizing Impact, A Resource Kit for Journalists’ (n 17). 1074 See, eg, West, ‘Facebook’s Guide to being a Lady’ (n 402). 1075 Duguay, Burgess and Suzor (n 94) 1. 1076 See Suzor, 'Understanding Content Moderation Systems: New Methods to Understand Internet Governance at Scale, Over Time, and Across Platforms' (n 85) [Instagram].


how some images of female forms are moderated in practice and identify potentially misleading ‘folk theories’ about content takedowns among users.1077 The latter is particularly important given that a decision to remove content, especially one that does not appear to align with a platform’s content policies, can breed further confusion. Hence the main aim of this thesis was to test whether there is support for some users’ claims of potential arbitrariness in the moderation of images depicting women’s bodies and shed light on Instagram’s regulatory system more broadly.1078

I situated this thesis within the emerging project of digital constitutionalism, a constellation of initiatives that aim to temper the potentially arbitrary exercise of governing power in the digital age.1079 I explained that the primary contention of this project is that well-established values of good governance, such as equality and accountability, are threatened by the largely unchecked governing power of online platforms and other internet intermediaries.1080 I argued that because platforms play such a central role in regulating how users behave,1081 it is imperative that decision-making around content is free from arbitrariness. This thesis contributed to the project of digital constitutionalism by empirically investigating whether images that depict women’s bodies on Instagram are moderated in a way that aligns with the Anglo-American ideal of the rule of law. The foremost challenge for me in undertaking this investigation was determining whether and how I could empirically evaluate content moderation from outside the black box and thus answer calls for data that can shed light on platform governance.

In Chapter Two, I proposed an Anglo-American framework of the rule of law, comprising the values of formal equality, certainty, reason-giving, transparency, participation and accountability for evaluating content moderation in practice. I explained that my framework is based on Krygier’s explicitly teleological approach to the rule of law, which underlines that constitutional discourse has purchase across realms, contexts and actors, whether public or private.1082 I argued that the rule of law is a valuable lens for evaluating content moderation on several grounds, including that it serves to institutionalise constraints on arbitrariness in the

1077 West, ‘Censored, Suspended, Shadowbanned: User Interpretations of Content Moderation on Social Media Platforms’ (n 460) 201. 1078 Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (n 2) 94. 1079 Suzor, Lawless: The Secret Rules that Govern Our Digital Lives (n 32) 6-7. 1080 Ibid. 1081 Ibid. 1082 See Krygier, ‘Why the Rule of Law Is Too Important to Be Left to Lawyers’ (n 43) 20-21.


exercise of power. This legal ideal also underlines what Western democratic societies have arguably come to expect of governing actors.1083 I also advanced a feminist lens for critical evaluation given that this thesis focuses on the moderation of images depicting female forms. My rule of law and feminist lenses ultimately provided a well-established language to start to name and work through what is at stake for users in Instagram’s potentially arbitrary exercise of power over content.

In Chapter Three, I expanded upon what the public knows about content moderation processes to date. There are a range of regulatory actors involved in moderating user-generated content at scale, including platform executives, commercial content moderators, users and automated systems.1084 I argued that there are particular concerns around the regulatory role of users, whose power to report content is seemingly unchecked, and algorithms, which pose risks in terms of potential biases, autonomous decision-making, opacity, inscrutability and unpredictability.1085 However, without gaining access to the internal workings of platforms, it is difficult to assess the nature and extent of these risks. I posited that black box analytics can help external actors overcome some transparency deficits in platform governance. This form of analytics is important as it can, inter alia, shed varying degrees of light on the internal workings of moderation processes, provide empirical evidence of content moderation in practice and, where accountability measures exist, enable stakeholders to attempt to hold decision-makers to account.

I then developed and applied a black box methodology that facilitates evaluation of the extent to which processes for moderating user-generated content align with my rule of law framework. This methodology is in itself the first major contribution of this thesis, given the practical difficulties that external actors face in attempting to scrutinise the internal workings of moderation processes1086 and the relative dearth of empirical legal research into Instagram’s moderation processes to date.1087 I explained that the core component of this methodology is an input/output method based on black box analytics, which examines how discrete inputs (ie, images) into Instagram’s moderation processes (ie, the black box) produce certain outputs (ie, whether an image is removed or not). I noted that the lack of transparent information that

1083 See Suzor, 'Digital Constitutionalism: Using the Rule of Law to Evaluate the Legitimacy of Governance by Platforms' (n 26) 2. 1084 See generally Mosseri, ‘Our Commitment to Lead the Fight against Online Bullying’ (n 514). 1085 Ziewitz (n 518) 7. 1086 See generally Witt, Suzor and Huggins (n 27) 558 ff. 1087 Ibid 561.



Instagram provides about the outcomes of its moderation decisions imposes several limitations on this research, including that it is not possible to identify whether images were removed by Instagram or by users themselves. Despite these limitations, I proceeded with this investigation in order to evaluate how a black box methodology might work with partial data, and to particularise the extra information that researchers may need for future studies of this type.
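
To illustrate the basic logic of an input/output method, the sketch below checks, for a list of previously observed image URLs, whether each image is still publicly reachable at a later point in time. This is a simplified illustration of the general approach rather than the data collection pipeline actually used in this thesis: the URLs and the treatment of an unsuccessful request as ‘no longer available’ are assumptions, and such a check cannot, on its own, distinguish removal by the platform from deletion by the user.

```python
# Simplified illustration of an input/output check: an image observed at time T
# (an "input" into the black box) is re-checked at a later time; whether it is
# still reachable is treated as the observable "output" of moderation.
# Hypothetical URLs; an unsuccessful request cannot distinguish platform
# removal from user deletion, account deactivation or other causes.
import requests

observed_images = [
    "https://example.com/image_001.jpg",  # placeholder URLs, not real posts
    "https://example.com/image_002.jpg",
]

def still_available(url: str) -> bool:
    """Return True if the image URL still resolves successfully."""
    try:
        response = requests.get(url, timeout=10)
        return response.status_code == 200
    except requests.RequestException:
        return False

for url in observed_images:
    status = "still available" if still_available(url) else "no longer available"
    print(f"{url}: {status}")
```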

In Chapter Four, I empirically examined whether a sample of 4,944 like images depicting (a) Underweight, (b) Mid-Range and (c) Overweight women’s bodies were moderated alike on Instagram (Case Study One). I identified an overall trend of inconsistent moderation and advanced two main findings. The first was that up to 22 per cent of images are potential false positives – that is, images that do not appear to violate Instagram’s content policies and were removed. The second was that the odds of removal for an image that depicts an Underweight or Mid-Range woman’s body are 2.48 and 1.59 times higher, respectively, than for an image that depicts an Overweight woman’s body. I explained that there are two potential explanations for content removal: either direct intervention by Instagram or content removal by users themselves. Overall, I found that the inconsistent trend of moderation supports my hypothesis (H1) that Underweight depictions of women’s bodies are removed at different rates to Overweight depictions in practice. Rather surprisingly, however, I found that claims that Instagram is less likely to remove thin-idealised images of women could be overstated.

In Chapter Five, I empirically examined whether a sample of 980 explicitly prohibited images depicting women’s and some men’s bodies in like categories of content were moderated alike on Instagram (Case Study Two). I identified that the inconsistent trend of moderation continues across case studies and outlined three main findings. The first was that up to 56.8 per cent of images seemingly violate Instagram’s content policies and were not removed, and are therefore potential false negatives. Second, the odds of removal for an explicitly prohibited image depicting a woman’s body are 16.75 times higher than for a man’s body, which supports my hypothesis (H2) that images depicting female forms are moderated at different rates to male forms. Third, 100 per cent of content in the Female Nipples, Porn Spam with a Clear Watermark and Appears to be Porn Spam categories was removed, followed by 90.5 per cent for Other Explicit content, which suggests that Instagram consistently moderates some types of images. Overall, I argued that the empirical results across both case studies stand in stark contrast to the legal ideal of the rule of law. In addition to a lack of formal equality in the outcomes of content moderation, I identified deficiencies in the certainty of Instagram’s content policies, reason-giving practices, avenues for user participation, transparency reporting and


accountability measures, which indicate that concerns around the risk of arbitrariness in the outcomes of content moderation might not be unfounded. These results and findings are the second major contribution of this thesis.

In Chapter Six, having evaluated the empirical results from both case studies from the vantage point of the rule of law, I then undertook analysis from a feminist perspective. I did so, not just because this thesis focuses on images depicting women’s bodies, but also given that feminist discourses can identify potential issues that a more traditional rule of law lens might overlook. After exploring some of the possibilities of digital feminisms,1088 I identified three main concerns through a feminist lens. The first is the potential for gender-based double standards in moderation processes, which can significantly limit users’ expression of female forms through content to a far greater extent than male forms. Second, there is a risk of moderation outcomes restricting users’ autonomy over the representation of women’s bodies,1089 which can, in turn, limit broader societal understandings of different female or other experiences.1090 Third and more broadly, there is a significant lack of clarity around the normative foundations of Instagram’s regulatory system. This uncertainty is problematic for users who face many and varied obstacles in attempting to understand why content is moderated in particular ways, and for society because the decisions that Instagram makes can have ripple-effects in local, national and international contexts.1091 I posited that these concerns are often linked to systemic inequalities in the West.

As part of these case studies, I illustrated the immense promise of using new digital methods to understand content moderation at scale. The methodology that I developed and applied not only works, as evidenced by the suite of empirical evidence that it provided, but is also significant because Instagram is one of the more difficult platforms to collect data about.1092 The key strengths of this methodology are that it is extensible across platforms and controversies, and can be used to help identify some decisions that a platform might seek to obscure. However, as I explored potential explanations for inconsistent moderation outcomes, it became clear that there are limits to how much we can learn from black box analytics without open and clear rules around content, and reasons for moderation decisions. I learnt that it is

1088 Baer (n 125) 18. 1089 See generally Glazer (n 961) 113. 1090 Matich, Ashman and Parsons (n 126) 337. 1091 See generally Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (n 2) 94 ff. 1092 See Suzor, 'Understanding Content Moderation Systems: New Methods to Understand Internet Governance at Scale, Over Time, and Across Platforms' (n 85) [Instagram].


easier to analyse true positives and false negatives, as in Case Study Two, than true negatives and false positives, as in Case Study One. My experience of coding images illustrates this point: given that explicit content more obviously violates content policies and/or dominant societal norms around appropriateness, there is often less guesswork around whether and why an image is, in fact, a false negative than for a false positive. It is likely to continue to be easier for researchers to identify and analyse true positives and false negatives until Instagram addresses the considerable lack of transparency around its moderation processes.
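
A short sketch may help to fix the terminology used here. It classifies an image using the two facts observable from outside the black box: whether the image appears to violate Instagram’s content policies (as coded by the researcher) and whether it was removed. The function and example data are hypothetical and purely illustrative of how the four categories are defined in this thesis.

```python
# Illustrative mapping from the two observable facts about an image to the
# four categories used in the case studies. The example data are hypothetical.

def classify(appears_to_violate: bool, was_removed: bool) -> str:
    if appears_to_violate and was_removed:
        return "true positive"    # prohibited content that was removed
    if appears_to_violate and not was_removed:
        return "false negative"   # prohibited content that was not removed
    if not appears_to_violate and was_removed:
        return "false positive"   # apparently permissible content that was removed
    return "true negative"        # apparently permissible content left online

examples = [
    {"appears_to_violate": True, "was_removed": False},   # easier to identify from outside
    {"appears_to_violate": False, "was_removed": True},   # harder to explain from outside
]

for image in examples:
    print(classify(image["appears_to_violate"], image["was_removed"]))
```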

In sum, I found that there appears to be an ongoing risk of arbitrariness in processes for moderating images depicting women’s bodies on Instagram, which is the final major contribution of this thesis. While my empirical evaluation has focused on images depicting women, the limited extent to which the platform’s regulatory system appears to align with the Anglo-American legal ideal of the rule of law is an ongoing concern for all users of platform technology. Part of this is due to the sheer scale of online platforms, which collectively govern a number of people that far exceeds the population of any nation-state.1093 There is also cause for concern because of the political, cultural and social influence of these private companies, the way that platform technology is increasingly enmeshed in the ‘everyday’1094 lives of users and the fact that online platforms are in the business of commodifying content for profit.1095 As a society, we require more explicit, inclusive and forward-looking conversations about the power that a small number of private companies exercise over users’ expression through content. Against this backdrop, in the following Part, I suggest ways that Instagram can improve its moderation processes as part of the broader project of digital constitutionalism.

II RECOMMENDATIONS FOR INSTAGRAM AND ONLINE PLATFORMS MORE BROADLY

Before outlining four main recommendations, it is important to note that the success of many of the suggestions in this Part is contingent on meaningful multi-stakeholder engagement and collaboration. Facebook, often on behalf of its family of apps, has taken a number of positive steps forward in this regard: for instance, by participating in conference proceedings,1096 funding research projects and publishing videos of Zuckerberg discussing pressing issues of

1093 The World Bank (n 30). 1094 See generally Tim Highfield, Social Media and Everyday Politics (Polity, 2016). 1095 See generally Roberts, Behind the Screen: Content Moderation in the Shadows of Social Media (n 24) 1096 See, eg, Duke University Law School (n 607)


our time with different individuals.1097 In my view, however, Facebook and its subsidiaries need to do more to meaningfully engage and collaborate with stakeholders in platform governance. By ‘meaningful’, I mean relationships that are ongoing, publicly disclosed and include representatives from all key stakeholder groups. As part of this, in order to continue to foster goodwill in the global community, it is important that stakeholders are not required to sign overly wide and burdensome non-disclosure agreements.1098 A meaningful approach is important for several reasons, most notably that online platforms are not all-knowing. Take, for instance, Zuckerberg’s statements on content regulation in early 2019:

Every day, we make decisions about what speech is harmful, what constitutes political advertising, and how to prevent sophisticated cyberattacks. These are important for keeping our community safe. But if we were starting from scratch, we wouldn’t ask companies to make these judgments alone.1099

Zuckerberg added: ‘Lawmakers often tell me we have too much power over speech, and frankly I agree. I’ve come to believe that we shouldn’t make so many important decisions about speech on our own’.1100 Multi-stakeholder engagement and collaboration are crucial also because horizontal conflicts between users, and those between a platform and its users, can be highly contextual. In contexts of social conflict or political unrest, for example, decisions about the appropriateness of content arguably cannot ‘be safely left to the discretion of internet companies to adjudicate’.1101 A diversity of perspectives can help not only to address user concerns in the most appropriate ways, but also to identify possible issues before they arise.

Multi-stakeholder engagement and collaboration are particularly important for Instagram, which continues to moderate content in the shadows of its behemoth parent company. Instagram has, of course, faced public backlash for some of its decisions about content. In early 2019, for example, Instagram made news headlines after the parents of a British teenager

1097 For instance, Facebook states: ‘We work with experts around the world, including academics, non-governmental organizations, researchers, and legal practitioners. These individuals and organizations represent diversity of thought, experience and background. They provide invaluable input as we think through revisions to our policies, and help us better understand the impact of our policies’: Bickert (n 253). 1098 Roberts, Behind the Screen: Content Moderation in the Shadows of Social Media (n 24) 38. 1099 Mark Zuckerberg, ‘Zuckerberg: The Internet Needs New Rules. Let’s Start In These Four Areas’, The Washington Post (online at 30 March 2019) . 1100 Ibid. 1101 Witt, Gillett and Suzor (n 104) 3.


claimed that self-harm material on the platform played a role in influencing their daughter to commit suicide.1102 The UK government subsequently launched an inquiry into online harms on social media,1103 while Instagram banned graphic images of self-harm, including images of people cutting themselves, and took steps to make non-graphic images depicting self-harm more difficult to find.1104 In general, however, Instagram has managed to fly ‘relatively under the radar’1105 compared to Facebook. Part of this could be due to Instagram’s reputation as a positive space1106 and, especially from the vantage point of older users, as a visual platform for photo and video sharing.1107 It could also be that the public perceives scrutiny of Facebook as extending to the entire Facebook family of apps, including Instagram. Regardless of potential explanations, the empirical results in this thesis and numerous ongoing controversies highlight the importance of diverse stakeholders critically evaluating the various components of Instagram’s moderation processes.

The good news is that most platforms appear to want the public to perceive them as good social actors. Take, for instance, Facebook’s marketing campaign in 2018 that attempted to apologise for privacy breaches as part of the Cambridge Analytica scandal.1108 Various apology letters of sorts were displayed at bus stops and train stations, as depicted in Figure 27, and printed in newspapers, like that signed by Zuckerberg in Figure 28. More recently, after Instagram was criticised by the public for allowing content related to self-harm, Adam Mosseri said that he is trying to balance ‘the need to act now and the need to act responsibly’.1109 Suzor sheds light on this discourse of social responsibility:

1102 See, eg, Angus Crawford, ‘Instagram ‘Helped Kill My Daughter’’, BBC News (online at 22 January 2019) . 1103 Department for Digital, Culture, Media & Sport, Home Office, The Rt Hon Sajid Javid MP and the Rt Hon Jeremy Wright MP, ‘Online Harms White Paper’ (Official Government Paper, 8 April 2019) . 1104 See, eg, Julia Jacobs, ‘Instagram Bans Graphic Images of Self-Harm after Teenager’s Suicide’, The New York Times (online at 7 February 2019) . Platforms have recently taken similar steps in the context of revenge porn and extremist and terrorist material, among other content: see Suzor, Lawless: The Secret Rules that Govern Our Digital Lives (n 25) 109 ff. 1105 Korva Coleman, ‘Instagram Has a Problem with Hate Speech and Extremism, ‘Atlantic’ Reporter Says’, NPR (online at 30 March 2019) . 1106 Thompson (n 344). 1107 Taylor Lorenz, ‘Instagram is the Internet’s New Home for Hate’, The Atlantic (online at 21 March 2019) . 1108 See generally Bogle (n 109). 1109 BBC News, ‘Instagram Vows to Remove All Graphic Self-Harm Images from Site’, BBC News (online at 7 February 2019) .


…the truth is that these companies are responsive to social pressure on a day-to-day basis. As awareness grows about the problems that Twitter, Facebook, and its subsidiary Instagram have faced dealing with abusive content and enforcing rules consistently, all of these companies are working to build systems and procedures that will help them avoid social condemnation.1110

There are obvious commercial incentives for platforms to adopt a proactive regulatory approach, which might include a version of corporate social responsibility,1111 such as the potential to temper the imposition of state-enacted laws. Another consideration is that angering different groups and individuals in society is often bad for business. This suggests that there is hope for meaningful change around the design and governance practices of online platforms. While change could be costly, time-consuming and far from straightforward for platforms, improvements arguably can and should be made. I will now outline the first of four specific

1110 Suzor, Lawless: The Secret Rules that Govern Our Digital Lives (n 25) 109 ff. 1111 See generally Emily Laidlaw, ‘Myth or Promise? The Corporate Social Responsibilities of Online Service Providers for Human Rights’ in M. Taddeo and L. Floridi (eds), The Responsibilities of Online Service Providers (Springer International Publishing, 2017) 135 ff.


recommendations for Instagram, all of which are likely to contribute to achieving formal equality in practice and realising the project of digital constitutionalism more broadly.

A Improve the Certainty of Content Policies

The first step that I argue Instagram should take is to critically evaluate the normative foundations of its content policies. The need for this is evidenced by problematic aspects of the platform’s constitutive documents, including its gendered prohibition on ‘some photos of female nipples’1112 and its blanket ban on nudity. More specifically, I recommend that Instagram’s executives review the platform’s content policies to evaluate whether every rule around content is normatively desirable. This review would likely require input from a range of stakeholders with expertise in, but not limited to, platform policy, technical online architectures and approaches that could serve as normative benchmarks. A critical review like this is essential given the significant and far-reaching impacts that Instagram’s content policies can have in local, national and international contexts. If Instagram chooses to continue to set, maintain and enforce a gendered policy, such as the prohibition against some depictions of female nipples, it should publish the outcomes of its review of that policy in line with the other recommendations in this Part.

There is also wide scope for Instagram to improve the formal and substantive certainty of its policies. As I argued in Chapter Two, formal certainty requires rules around content to be open and clear. The main way that Instagram can improve the openness of its rules is by making, to the extent possible, the internal guidelines that moderators follow publicly available. This step is not contingent on the platform disclosing confidential information nor would it necessarily result in a greater number of users attempting to game moderation processes.1113 Specifically, I recommend that Instagram publishes a detailed version of its Community Guidelines similar to the comprehensive version of Facebook’s Community Standards, with chapters for different types of content.1114 The usefulness of Instagram publishing more detailed information is of course contingent on the extent to which content policies are clear.

1112 Instagram, ‘Community Guidelines’ (n 185) [Post Photos and Videos that are Appropriate for a Diverse Audience]. 1112 Ibid. 1113 However, Roberts raises concerns about some users attempting to game algorithms: ‘They’re going to try to defeat it. They’re going to try to game it. We can’t possibly imagine all the scenarios that will come online’: see Zachary Mack, ‘Why AI Can’t Fix Content Moderation’, The Verge (online at 2 July 2019) . 1114 Facebook, ‘Community Standards’ (n 253).


Instagram should take several steps to improve the formal clarity of its content policies and, by extension, the bounds of appropriate user-generated content. It is important to note that redrafting the platform’s policies to include complex, legalistic definitions or, for instance, changing the document’s tone from an ‘easy-going friend’1115 to more of a ‘stern parent’,1116 will likely not improve the overall clarity of its rules. This is partly because Instagram’s constitutional documents are subject to constant change. There is also a risk that a more legalistic approach might further deter users from engaging with content policies, many of whom do not read terms and guidelines to begin with.1117 Instagram should instead clarify its terms by providing examples of types of content that are appropriate and inappropriate, perhaps in dot point form, as in Facebook’s updated Community Standards1118 and YouTube’s Community Guidelines.1119 The provision of examples is likely to be particularly useful for ambiguous terms, such as ‘some photos of female nipples’1120 and ‘sexually suggestive’1121 content, both of which have a ‘wide penumbra of uncertainty’.1122 The platform should also explain why all types of prohibited content are objectionable from the perspective of its executives and policy teams. For instance, explaining exactly why ‘some photos of female nipples’1123 are prohibited would help to address concerns among users that the platform’s content policies are sexist.1124

More broadly, it is crucial that Instagram commits to substantive certainty in its moderation processes. Given that content moderation is the sum of processes for setting, maintaining and enforcing the bounds of appropriate user expression, greater clarity in content policies alone will likely not result in users being able to better navigate the platform’s regulatory system.

1115 Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (n 2). 1116 Gillespie explains that the discursive performativity, such as a tone reminiscent of an ‘easy-going friend’ or a ‘stern parent’, in policies like this attempt to achieve several outcomes: ‘They articulate the “ethos” of the site, not only to lure and keep participants, but also to satisfy the platform’s founders, managers and employees, who want to believe that the platform is in keeping with their own aims and values’: Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (n 2) 47-48. 1117 See, eg, Dale Clapperton and Stephen Corones, ‘Unfair Terms in ‘Clickwrap’ and other Electronic Contracts’ (2007) 35(3) Australian Business Law Review 152. 1118 Facebook, ‘Community Standards’ (n 253). 1119 See generally YouTube, ‘Policies and Safety’ (2019) . 1120 Instagram Help Centre, ‘Community Guidelines’ (n 185). 1121 Rosen and Lyons (n 345). 1122 Hart, ‘Positivism and the Separation of Law and Morals’ (n 339) 607. 1123 Instagram Help Centre, ‘Community Guidelines’ (n 185). 1124 See, eg, Jillian York, ‘Guns and Breasts: Cultural Imperialism and the Regulation of Speech on Corporate Platforms’, (Web Page, 17 March 2016) .


Instagram must go one step further to ensure, to the extent possible, that the outcomes of content moderation (ie, whether an image is removed or not removed) adhere to content policies in practice. It is difficult to provide specific recommendations around substantive certainty given that the platform does not disclose the internal workings of its moderation processes to the public. However, one step that Instagram can take now is to commit to reducing ‘in spirit’1125 decision-making, or the removal of posts that ‘are inappropriate but do not go against Instagram’s Community Guidelines’.1126 This is important because decisions to remove vaguely inappropriate content can fuel inconsistent outcomes of content moderation, given that the purported policy violation is unclear. On top of this, users are generally not given reasons for content removal or avenues for appeal – two issues that I will expand upon later in this Part.

Instagram could better achieve substantive certainty also by focusing less on whether moderators achieve certain accuracy scores and more on assessing whether processes for setting, maintaining and enforcing content policies align with each other. This could involve the platform reducing the potential for organisational silos and increasing communication between different regulatory actors.1127 It appears that lines of communication could be improved between platform executives, internal (seemingly more prestigious)1128 policy teams who set content policies, outsourced moderators who undertake most of the work of reviewing content, and users on whom the platform relies to report content as inappropriate. Without taking steps to improve formal and substantive certainty in practice, Instagram’s content policies are likely to continue to breed confusion among users about what content is appropriate and what is not.1129

B Improve Reason-Giving Practices

Improving reason-giving practices that explain why a particular piece of content has been removed is the second key area of improvement for Instagram. As previously noted, the

1125 Thompson (n 344). 1126 Rosen and Lyons (n 345). 1127 Like other corporations, there is a risk of platform employees being limited by their specialist departments, groups, teams or pockets of knowledge, as well as broader organisational mindsets – what Tett calls ‘silos’. ‘Silo syndrome’ can have a plethora of negative ramifications, including potential bias and discrimination in moderation processes. Tett posits: ‘[t]he paradox of the modern age, I reali[z]ed, is that we live in a world that is closely integrated in some ways, but fragmented in others’: see Gillian Tett, The Silo Effect: The Peril of Expertise and the Promise of Breaking Down Barriers (Little, Brown Book Group, 2016). 1128 Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (n 2). 1129 Olszanowski (n 854) 84.


platform does not commit to providing users with precise reasons for content removal, which stands in stark contrast to industry leader YouTube’s digital ‘tombstone’.1130 Instagram’s reason-giving practices also lag considerably behind those of its parent company and Twitter,1131 both of which have made significant progress since the non-binding Santa Clara Principles were published in 2018.1132 In this context, and at the bare minimum, I recommend that Instagram fully adopts the notice aspect of the Santa Clara Principles as outlined in Chapter Two.1133 The platform could build on the changes it recently made to its notification processes and, possibly, leverage the infrastructure that Facebook has developed to better adopt industry best practice. Instagram should further enhance its reason-giving practices by providing an explicit statement of principles, like the Facebook Principles, which outline overarching norms for content moderation. A user could, if necessary, refer to these principles to better understand the reasons why their content was moderated.

More specifically, I recommend that Instagram provides users with ‘counterfactual explanations’1134 for content removal. A counterfactual explanation is one in which ‘the statement of decision is followed by a counterfactual, or statement of how the world would have to be different for a desirable outcome to occur’.1135 In the context of the removal of seemingly false positive images, like in Case Study One, a counterfactual explanation could include the following information: what specific image is at issue; which party removed the image (ie, human moderators or automated tools); the specific term/guideline/policy that the image violates; and how the image at issue was brought to the platform’s attention (eg, via automated tools/human content moderators/[X] number of [connected/unconnected] users or non-users who report the image as inappropriate/a third party, such as a government agency).

1130 Feerst (n 395). 1131 Crocker et al (n 40). 1132 In the context of the notice aspect of the Principles, the Open Technology Institute reported: ‘In the case of all three companies [Facebook, YouTube and Twitter], the notices provided to users who have had their content removed for violating a platform’s Terms of Service are relatively detailed and either fully or partially include some key pieces of information: reference to the specific rule that was violated, an explanation of how a user can appeal the takedown decision, and a durable form of the notice’: see Spandana Singh, ‘Assessing YouTube, Facebook and Twitter’s Content Takedown Policies: How Internet Platforms Have Adopted the 2018 Santa Clara Principles, New America (online at 7 May 2019) . 1133 See Chapter Two Part III(C). See also ACLU Foundation of Northern California et al, The Santa Clara Principles on Transparency and Accountability in Content Moderation (n 53); Electronic Frontier Foundation et al, Manila Principles on Intermediary Liability (n 53). 1134 Wachter, Mittelstadt and Russell (n 521) 841. 1135 Ibid 6.


It would also be important for a counterfactual explanation to state how the image would need to be different in order for it to be allowed on the platform.
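To make the proposed fields concrete, the following minimal sketch (written in Python purely for illustration; every field name, value and function below is hypothetical and does not describe any interface that Instagram actually provides) shows how a counterfactual explanation of this kind could be represented and rendered as a plain-language notice to the affected user:

from dataclasses import dataclass

@dataclass
class CounterfactualExplanation:
    image_id: str          # the specific image at issue
    removed_by: str        # eg 'a human moderator' or 'an automated tool'
    policy_violated: str   # the specific term/guideline/policy said to be violated
    detection_route: str   # eg 'automated tools', 'reports from unconnected users'
    report_count: int      # number of user or non-user reports, if any
    counterfactual: str    # how the image would need to be different to be allowed

    def to_notice(self) -> str:
        # Render the explanation as a plain-language notice for the user.
        return (
            f"Your image ({self.image_id}) was removed by {self.removed_by} "
            f"for violating: {self.policy_violated}. "
            f"It was brought to our attention via {self.detection_route} "
            f"({self.report_count} report(s)). "
            f"It would not have been removed if {self.counterfactual}."
        )

# Hypothetical example only
print(CounterfactualExplanation(
    image_id="1234567890",
    removed_by="a human moderator",
    policy_violated="Community Guidelines: nudity",
    detection_route="reports from unconnected users",
    report_count=3,
    counterfactual="the photograph did not depict an uncovered female nipple",
).to_notice())

A notice generated along these lines would give the affected user both the ground for removal and the change that would have avoided it, which is the core of the counterfactual approach.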

Platforms could use software to generate counterfactual explanations, perhaps as part of the Single Review Tool that many moderators already use to review content.1136 While it could be costly and time-consuming for platforms to provide counterfactual explanations, explanations with this level of detail would better equip users to understand what action triggered content removal and why a decision about content was made.1137 It is important to note that a commitment by Instagram to implement detailed reason-giving practices could further limit the potential for in spirit decision-making, which is, in many ways, the antithesis of the ideal of the rule of law. Moreover, with the provision of granular explanations for content removal, the methodology that I employed in this thesis would have been able to provide a more definitive answer to the allegations that Instagram’s moderation processes exhibit systematic bias against certain types of content.

C Prioritise Transparency and Stakeholder Participation

YouTube, Facebook and Twitter, among other online platforms, publish transparency reports about content moderation.1138 While these reports generally fall short of transparency ideals, they mark important shifts in the disclosure of information by platforms to the public. To date, however, Instagram falls short not only of transparency ideals, but also of the best practice set by industry leader YouTube.1139 Instagram does not publish a transparency report separate from its parent company and appears to disseminate most information about changes to its network via press releases in its Info Centre.1140 Part of this could be due to Instagram’s status as a Facebook subsidiary: particularly, that public disclosures about Facebook ostensibly speak for the entire Facebook family of apps. Recent improvements in Facebook’s transparency reporting,1141 alongside other factors detailed earlier in this Part, appear to enable Instagram to

1136 Some Facebook moderators allegedly use software known as the Single Review Tool (‘SRT’) when reviewing individual content: see Newton (n 213). 1137 Suzor et al ‘…found that users frequently express confusion about what action has triggered a moderation action or account suspension. In our data, over a quarter of respondents (108 reports) were uncertain about what content triggered a moderation decision. Even when the offending content was clearly identifiable, users too frequently have insufficient information to understand why a moderation decision was made’: Suzor et al, 'What Do We Mean When We Talk about Transparency? Towards Meaningful Transparency in Commercial Content Moderation' (n 336) 1527. 1138 See generally Crocker et al (n 40). 1139 Ibid. 1140 See generally Instagram, Info Centre (Web Page, 2019) . 1141 See, eg, Facebook, Annual Report of Facebook Inc. (Report, 31 January 2019) .


continue to fly somewhat under the radar of public scrutiny and avoid some demands for greater transparency.

In response to these significant transparency deficits, I recommend that Instagram report on the moderation of user-generated content on its network at least once per calendar year. These reports should be separate from Facebook’s, publicly available and aim to provide ‘actionable data for users’.1142 By ‘actionable data’, I mean data that enables users and other stakeholders to better understand the bounds of appropriate content and to make more informed decisions about the material that they post, as part of their participation in the internet environment more broadly. I endorse the suggestion in the Santa Clara Principles that platforms report on the total number of discrete posts and accounts flagged (reported), and the total number of discrete posts removed and accounts suspended, according to a number of measures: by category of content policy/rule/guideline violated, by format of content (eg, image, video or message), by geographic location and by which party initiated content removal (eg, government, law enforcement, legal professionals, users or automated tools), among other measures in Figure 29.1143 I argue that Instagram’s transparency reports should also disclose, to the extent possible, commercial and other interests in the outcomes of content moderation, as well as all attempts by the platform or its parent company to lobby for or against government policies in any jurisdiction that could affect the ways content is moderated in practice.
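Purely by way of illustration, aggregate figures of this kind could be computed from a platform’s internal moderation logs along the following lines (a minimal Python sketch; the record structure and values are my assumptions, not a description of Instagram’s actual systems):

from collections import Counter

# Hypothetical removal records, one per removed post
removals = [
    {"policy": "nudity", "format": "image", "country": "AU", "initiator": "user report"},
    {"policy": "hate speech", "format": "video", "country": "US", "initiator": "automated tool"},
    {"policy": "nudity", "format": "image", "country": "US", "initiator": "automated tool"},
]

def tally(records, measure):
    # Count removals broken down by a single measure (policy, format, country or initiator).
    return Counter(record[measure] for record in records)

for measure in ("policy", "format", "country", "initiator"):
    print(measure, dict(tally(removals, measure)))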

1142 Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (n 2). 1143 Singh (n 1132); ACLU Foundation of Northern California et al, The Santa Clara Principles on Transparency and Accountability in Content Moderation (n 53).


Additionally, I recommend that Instagram monitor the decisions made manually by humans and automatically by software in order to combat potential biases in moderation processes. While technology companies are generally good at removing content that plainly violates their policies, such as spam, it is more difficult to consistently identify and moderate culturally specific content, material that falls within policy grey areas and subject matter that requires nuanced interpretation.1144 There is a particular risk here of the removal of legitimate critical expression and valuable counter-speech,1145 especially that posted by marginalised groups and individuals.1146 By monitoring and empirically evaluating the outcomes of moderation as a result of intervention by different regulatory actors, Instagram could identify whether its systems, including particular teams or moderators, exhibit patterns of systemic bias. There is also potential for Instagram to identify instances where users might be groundlessly reporting certain content. Specifically, Gillespie points to the meaning-making potential of data around reporting features:

Social media platforms don’t generally want flags to trigger consequences automatically, since the data is too noisy: some flaggers misunderstand the content or the rules, and some flag content they don’t agree with in order to get it removed. But the data is not so noisy that it can’t be returned to users as a signal. Heavily flagged content, especially if those flags are coming from unconnected users, could be labelled as such, or put behind a clickthrough warning, even before it is reviewed.1147

Identifying potential inequalities and biases in moderation processes will likely not in itself have a significant impact on the health of the platform’s regulatory system. Instagram should therefore couple transparent information with affirmative action that is supported by data analysis. In addition to implementing the recommendations in this Chapter, affirmative action might involve executives changing organisational structures, revising training procedures and, more radically, re-evaluating the technologically determinist foundations of its social media network.

Another affirmative action that Instagram can take now is to demystify the internal workings and impacts of its reporting features. Greater transparency is important in this context given that users, and some non-users, undertake a significant amount of the work of moderating

1144 Witt, Gillett and Suzor (n 104) 3. 1145 Ibid. 1146 See, eg, Duguay, Burgess and Suzor (n 94) 1. 1147 Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (n 2).


content at scale.1148 It appears that the most detail that Instagram has divulged about the internal workings of its reporting features is that ‘the number of times something is reported doesn't determine whether or not it's removed from Instagram’,1149 alongside general information on how to report content in its Info Centre. A result of this dearth of information is that users continue to develop ‘folk theories’ to explain content takedowns,1150 one of which is that bad actors can easily game the platform’s regulatory system.1151 Hence, in addition to reporting on the quantity and type of content that has been removed as a result of user reports, Instagram should provide information about how user reports are processed and reviewed behind closed doors. It would be particularly useful for the platform to elaborate on the extent of the influence of reporting features in practice and what, if any, safeguards are in place to attempt to prevent groundless reporting. Finally, I recommend that the platform test for and report on normative biases in queues of reported content and, if necessary, modify some aspects of the in-built reporting feature. Empirical testing like this can help to identify issues in reporting processes and further safeguard against users who might seek to abuse their regulatory role.

Addressing the problems of content moderation will also require Instagram to give stakeholders the opportunity to participate in and articulate their views on the development of regulatory processes. One of the main ways that the platform can open itself to different viewpoints is by diversifying its workforce:1152 specifically, by hiring non-white women and those who identify as LGBTQIA+ to fill technical roles. The term ‘technical roles’ in this Chapter refers to the teams that develop and deploy automated systems and maintain the platform’s computational

1148 Crawford and Gillespie (n 232) 410. 1149 Instagram has previously stated that ‘the number of times something is reported doesn't determine whether or not it's removed from Instagram’: see generally Instagram, ‘Does the Number of Times Something Gets Reported Determine whether or Not It’s Removed?’ Help Centre (Web Page, 2019) . 1150 West, ‘Censored, Suspended, Shadowbanned: User Interpretations of Content Moderation on Social Media Platforms’ (n 460) 201. See also Suzor, Lawless: The Secret Rules That Govern our Digital Lives (n 32) 139. 1151 See, eg, ‘How Many Reports on Instagram Can Delete an Account?’ Quora (Web Page, 2019) . 1152 It should be noted that some platforms are attempting to address the lack of representation in their workforces and, in some instances, broader social disparities. Slack is apparently an industry leader in this regard: in 2018, around 48% of those employed in a managerial role across the company’s global operations were women. See Slack Team, ‘Diversity at Slack’ (Web Page, 2019) . Facebook is also taking positive steps forward: its 2018 Diversity Report shows that the company has ‘increased the proportion of Black and Hispanic employees across the company by 2% to 4% and 4% to 5%’, respectively. Facebook has also, inter alia, appointed a Chief Diversity Officer, created a Black Women @ Facebook employee resource group and, in conjunction with Instagram, recently donated over $100,000 to Black Girls Code: see Maxine Williams, ‘Facebook 2018 Diversity Report: Reflecting on Our Journey’, Facebook Newsroom (Press Release, 12 July 2018) .


architecture. As a Facebook executive states, diversity in the workforces of technology companies is important because, ‘[w]hen you write a line of code, you can affect a lot of people’.1153 More broadly, I urge Instagram, like other platforms, to ensure that different voices are represented in its prestigious policy teams and the mini-legislative sessions where important governance decisions are allegedly made. While greater diversity is important, it will not in itself address the apparent inequalities in moderation processes.1154 For this reason, in order to move beyond the mere provision of information from black box analytics,1155 it is crucial that societal actors have ways to hold decision-makers to account.1156

D Greater Accountability

Greater accountability should start within the Instagram platform itself. As a first step, I recommend that Instagram provide users with the option to appeal decisions affecting their expression through content.1157 Doing so will enable the platform to better align its moderation processes with the appeals section of the Santa Clara Principles, which the Open Technology Institute notes ‘YouTube, Facebook, and Twitter have demonstrated great levels of success with’.1158 In particular, Facebook is in the process of creating a purportedly external Oversight Board for Content Decisions, ‘a body of independent experts who will review Facebook's most challenging content decisions - focusing on important and disputed cases’.1159 Facebook’s Board has a number of potential benefits, such as identifying deficiencies in the development and enforcement of content policies at Facebook, as well as potential risks, like only a scintilla of contested moderation cases being heard.1160 If truly independent and well-resourced, this body could be a meaningful check and balance on Facebook’s exercise of power

1153 Chang (n 473) 8. 1154 See generally Sanjana Varghese, 'Ruha Benjamin: ‘We definitely can’t wait for Silicon Valley to become more diverse’', The Guardian (online at 30 June 2019) . 1155 Witt, Suzor and Huggins (n 27) 557. 1156 Suzor, ‘Digital Constitutionalism: Using the Rule of Law to Evaluate the Legitimacy of Governance by Platforms’ (n 26) 13; Perel and Elkin-Koren (n 112) 186. 1157 Facebook, for instance, says that ‘we offer appeals for the vast majority of violation types. We don't offer appeals for violations with extreme safety concerns, such as child exploitation imagery’: Crocker et al (n 40) [Instagram]. 1158 Singh (n 1132). 1159 Facebook, Draft Charter: An Oversight Board for Content Decisions (Draft Charter, 2019). 1160 For analysis of the potential benefits and limitations of Facebook’s Oversight Board, see Evelyn Douek, ‘Facebook’s “Oversight Board:” Move fast with Stable Infrastructure and Humility’ (2019) 21(1) North Carolina Journal of Law and Technology (forthcoming) .


over content and mark a significant step forward for the constitutionalisation of online platforms.1161

Against this backdrop, I recommend that Instagram implement a timely and rigorous appeals process in which moderators who were not involved in the initial decision review the individual pieces of content at issue. At a minimum, Instagram should provide users with a statement that outlines the result of the appeal and why a particular decision was made.1162 Ideally, users would be able to seek redress for moderation outcomes from an external accounter, such as Facebook’s Oversight Board.1163 The sheer scale of the platform, however, might necessitate certain limitations on appeals processes, which could, for instance, exclude known explicit content. It would be vital for Instagram to provide information in its transparency reports about how effective its appeals processes are and how many individual pieces of content are affected by appeals in practice.

Instagram should also take steps to enhance the accountability of its users, who serve as a volunteer corps of regulators. In addition to demystifying the internal workings of reporting features and investigating the extent to which groundless reporting occurs in practice, Instagram should create a best practice guide for user and non-user reporting. A section of this guide could, for instance, state that users should report content in line with content policies rather than their individual value systems. In the in-built reporting feature, as illustrated in Figure 7, Instagram could restate its policies in the corresponding reporting options. Each reason for reporting could also include brief examples of what constitutes a policy violation and what does not. Moreover, I endorse the suggestion in the Santa Clara Principles that users should ‘be presented with a log of content they have reported and the outcomes of moderation processes’.1164 The proposed log could result in the platform barring certain users from reporting content if a certain proportion of their reports are found to be groundless (ie, do not result in content removal).
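One way such a rule could operate is sketched below in Python, purely for illustration; the threshold, the minimum report volume and the data structure are my assumptions, not a mechanism that Instagram has announced:

def groundless_rate(report_outcomes):
    # report_outcomes: list of booleans, True where a report led to removal.
    if not report_outcomes:
        return 0.0
    groundless = sum(1 for upheld in report_outcomes if not upheld)
    return groundless / len(report_outcomes)

def should_review_reporter(report_outcomes, threshold=0.8, minimum_reports=20):
    # Flag a reporter only after a minimum volume of reports, to avoid penalising occasional users.
    return len(report_outcomes) >= minimum_reports and groundless_rate(report_outcomes) >= threshold

history = [False] * 18 + [True] * 2    # 18 of 20 reports did not result in removal
print(groundless_rate(history))        # 0.9
print(should_review_reporter(history)) # True

Any such threshold would, of course, need to be set and audited transparently, consistent with the other recommendations in this Part.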

Most importantly, if Instagram wishes to allay growing concerns about its moderation processes, it must take steps to enable some degree of external verification and

1161 Kadri and Klonick (n 31) 41. 1162 Some platforms allow users to present additional information in the appeals process: see Singh (n 1132). 1163 Suzor et al, 'What Do We Mean When We Talk about Transparency? Towards Meaningful Transparency in Commercial Content Moderation' (n 336) 1526. 1164 ACLU Foundation of Northern California et al, The Santa Clara Principles on Transparency and Accountability in Content Moderation (n 53).


accountability.1165 This point is crucial because ongoing concerns around content moderation processes will not be addressed by Instagram and other platforms continuing to moderate in secret. While civil society organisations, including the Electronic Frontier Foundation,1166 Ranking Digital Rights1167 and Onlinecensorship.org,1168 already report on major platforms’ governance practices, these reports are only as comprehensive as the information that platforms choose to disclose to the public. Hence, I recommend that Instagram and other platforms open themselves to expert auditors who can verify the accuracy of data, identify whether systems exhibit biases and work with platform employees to address potential issues. An independent body should employ these auditors to minimise the potential for conflicts of interest and better enable platforms to be held to account.

I particularly encourage scholars to take on roles as external accounters by continuing to develop and apply methods for independently monitoring moderation systems at scale. New methodologies, like the one in this thesis, can help to address the lack of transparent information about how content moderation processes work and provide alternative means of holding decision-makers to account.1169 Enhanced access to data through innovative methods can lead to collaborations with other stakeholders in platform governance and increase the pressure on platforms to improve their governance practices. As part of this, it is important to take into account that major social media platforms operate at such a massive scale that errors are inevitable, and that there are many interrelated factors that could contribute to actual or perceived arbitrariness. I therefore encourage Instagram to provide open access data to the public, perhaps by reducing restrictions on its API, and to work with scholars and other external experts to improve its regulatory systems.

1165 See, eg, Tarleton Gillespie, ‘Facebook Can’t Moderate in Secret Anymore’, Culture Digitally (online at 23 May 2017) . Interestingly, Mark Zuckerberg has recently called for greater regulation around some types of internet content, among other things: ‘Mark Zuckerberg Asks Governments to Help Control Internet Content’, BBC News (online at 30 March 2019) . 1166 Crocker et al (n 40). 1167 Ranking Digital Rights, ‘Corporate Accountability Index’ (n 26); Ranking Digital Rights, ‘Corporate Accountability Index’ (n 385). 1168 Anderson et al, (n 26). 1169 Suzor et al, 'What Do We Mean When We Talk about Transparency? Towards Meaningful Transparency in Commercial Content Moderation' (n 336) 1526.


III GOVERNMENT ACTION FOR GREATER TRANSPARENCY AND ACCOUNTABILITY: KEY PRINCIPLES

As I explained in previous chapters, the West is now at a ‘constitutional moment’1170 in which societal actors are fundamentally rethinking how online platforms are constituted, and how the exercise of power by platforms should be tempered. There is a plethora of developments in this context, including governments in Europe enacting laws that require technology companies to better regulate the flow of content on their networks.1171 It is not surprising that the tide of regulation is turning like this given that platforms have long been attractive targets for regulation: they have the technical infrastructure to influence the behaviours of billions of users at scale, something that many territorial governments are not well-equipped to do.1172 Major technology companies are now attempting to stem the tide of new regulatory measures by constitutionalising their operations and, if enough platforms do so, they might be successful. However, as Suzor notes, ‘[n]ever has there been so much attention on the values embedded in technology and the policies of tech companies’.1173 Given that state-enacted laws could play an important role in helping societal actors independently monitor the performance of moderation systems and address concerns about potential arbitrariness, I will now outline key principles that I believe should guide any attempt to better regulate online social spaces.

The first is that the sheer scale of online platforms, their decentralised design and borderless nature create unique governance challenges that cannot be addressed through traditional, ‘command and control’ regulatory responses alone.1174 Part of this is due to content being a

1170 Suzor, Lawless: The Secret Rules That Govern our Digital Lives (n 32) 112 1171 The United Kingdom, for instance, is considering different proposals to regulate the flow of harmful content on social media platforms, while Australia has enacted laws that could result in the executives of technology companies being held liable, criminally or otherwise, for the actions of users. Moreover, and perhaps most notably, the European Union’s General Data Protection Regulation now requires technology companies to, inter alia, take certain actions around content, largely on grounds of user privacy: see generally Flew, Martin and Suzor (n 68) 44-46; Witt, Gillett and Suzor (n 104); Department for Digital, Culture, Media & Sport, Home Office, The Rt Hon Sajid Javid MP and the Rt Hon Jeremy Wright MP, ‘Online Harms White Paper’ (n 1103). 1172 Platforms are often attractive targets for regulation for several reasons, among them they have the technical infrastructure to limit, control or influence the behaviours of billions of users at scale, something that territorial governments are often unequipped to do. Other reasons relate to cost: it is often easier and more effective to target technology companies than to identify and attempt to make bad actors responsible for their actions. Attempts to enlist platforms to police online content are often couched in these considerations of scale and cost and, to varying degrees, moral responsibility: see generally Kylie Pappalardo, ‘Duty and Control in Intermediary Copyright Liability: An Australian Perspective (2014) 4(1) IP Theory 9. 1173 Suzor, Lawless: The Secret Rules That Govern our Digital Lives (n 32) 112. 1174 See generally Witt, Gillett and Suzor (n 104).


commodity and the work of moderating content being distorted by commercial prerogatives.1175 Platforms are also susceptible to layers of internal and external influence, ranging from market forces to the demands of shareholders and governments. Top-down, command and control regulatory responses are generally unsuitable for decentred contexts also because state-enacted laws often establish only minimum standards. States generally do not and, for largely practical reasons, cannot legislate for all types of appropriate and inappropriate content. There seems to be a general consensus among stakeholders in platform governance that moderating ‘appropriate-versus-inappropriate distinctions’1176 is an extremely difficult task that is impossible to get entirely right at the scale at which social media companies operate.

Additionally, it would not be desirable for lawmakers to hold Instagram and other platforms to the same standards as constitutional governments. Such an approach would be costly on several fronts: for Instagram in terms of lost revenue, for users who rely on social network technology and for the global marketplace that relies on, inter alia, the flow-on effects of innovation by platforms.1177 New regulatory measures also raise several practical challenges, including whether centralised control is necessary to enable the non-arbitrary removal of illegal and ‘legal but harmful content’.1178 Another important question is whether lawmakers can set, maintain and enforce constitutional standards in the internet environment, where the application of human rights and other laws is already unclear. The latter is a particularly important question given ‘that a major potential pitfall and risk of content removal is the takedown of critical expression and valuable counter-speech’.1179 The reality is that there are often no easy answers to questions around platform governance.

I therefore stress that any attempt to shape the future direction of platform governance should be necessary and proportionate.1180 As a guide to good regulatory design, I recommend the Manila Principles on Intermediary Liability, a set of standards developed by a group of key civil society organisations.1181 Any content regulation scheme should also include strong safeguards, like my selected rule of law values, which are well-established in Western democratic discourse. Safeguards are crucial given that online platforms are not ‘all-

1175 Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (n 2) 197 1176 Olszanowski (n 854) 84. 1177 Crawford and Gillespie (n 232) 411. See also Richard Barbrook and Andy Cameron, ‘The Californian Ideology’ (1996) 6 Science as Culture 44. 1178 Nash (n 104) 9. 1179 Witt, Gillett and Suzor (n 104) 1. 1180 Electronic Frontier Foundation et al, Manila Principles on Intermediary Liability (n 53). 1181 Ibid.


powerful’:1182 they are subject to laws, the influence of the marketplace and a range of norms.1183 Ideally, attempts to improve transparency around online content regulation should be reframed to focus on access to large-scale disaggregated data, in a way that can better inform stakeholders and help to address ongoing distrust of processes for moderating content.

IV OPPORTUNITIES FOR FUTURE RESEARCH

There are several areas for future research, some of which arise from the limitations of this thesis. In Chapter One, I stated that I was focusing on the moderation of images depicting women’s bodies on the Instagram platform and empirically evaluating this subject matter from a predominantly formal theoretical standpoint. There are interesting issues that arise outside of this scope, however, which provide fertile ground for future research.

A Extending the Theoretical Framework

This thesis provided an initial empirical evaluation of the extent to which the moderation of images depicting female forms on Instagram aligns with the legal ideal of the rule of law and a feminist perspective. Given my largely formal conception of these critical lenses, I was unable to explore some of the more substantive phenomena around content moderation that became apparent throughout the course of my doctoral studies. Take, for instance, the possible explanations for content removal, one of which is users choosing to remove their own posts. There is space here to explore more deeply individual user practices, including users’ desire to curate their profiles, which might contribute to the inconsistent trend of moderation that I observed. Another line of potential substantive inquiry centres on apparent self-censorship,1184

1182 The mass global adoption of social media technologies and the influence of their creators suggest that platforms are immensely powerful. Indeed, it is tempting to think of online platforms as beyond the law, or ‘lawless’, given the legal protections that US law affords some internet intermediaries: see Suzor, Lawless: The Secret Rules That Govern our Digital Lives (n 32) 6-7; Witt, Suzor and Huggins (n 27) 578. While there are several challenges to regulating internet-based platform technologies, including those around jurisdiction and legal enforcement, most laws that apply in traditional (offline) spaces can have some application to online contexts. As Suzor explains, by and large and with some important exceptions, ‘states have been able to enforce the law against users directly when necessary, and influence the way that intermediaries do business to achieve their regulatory goals indirectly’: see Suzor, Lawless: The Secret Rules That Govern our Digital Lives (n 32) 92. Another salient point is that a significant amount of internet infrastructure is physical: for instance, cables and data centres that are ultimately controlled by people and corporations within particular territories. The combination of these factors means that platforms are well within the reach of the courts and other regulatory bodies. Moreover, Gillespie explains that ‘[t]his is not to say that platforms are of no consequence. I simply mean that we must examine their role, without painting them as either all-powerful or merely instrumental. We must recognize their attenuated influence over the public participation they host and the complex dynamics of that influence, while not overstating their ability to control it’: Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (n 2) 21. 1183 Lessig (n 50) 6. 1184 Olszanowski (n 854).


which I discussed in Chapter Six. The ways that some users are seemingly expressing themselves both in line with and in protest against Instagram’s content policies warrant further empirical legal examination to better understand why and how this is occurring.

There is, therefore, important future work in expanding my rule of law framework to incorporate thick rule of law values. This is necessary, particularly in the context of the rule of law value of equality, because consistent treatment is not always appropriate for individuals whose circumstances differ from others in one or more ways. There is also wide scope to explore different feminisms, such as those from a more radical standpoint, which offer pertinent insights into systemic social issues. Finally, attempting to expand my selected critical lenses opens additional lines of inquiry about whether and how some of the more complex tensions between a rule of law approach and a feminist perspective can be resolved.1185 Such exploration is, in my view, important given that Anglo-American rule of law discourse has traditionally privileged the white male archetype.1186

B Digital Methods for Empirical Legal Analysis

The next area in which there is ample scope for further research is the development and application of digital methods for empirical legal analysis of content moderation. In Chapters Two and Three, I referred to research into the outcomes of content moderation that has been undertaken by civil society organisations, including Ranking Digital Rights and the Electronic Frontier Foundation.1187 I also highlighted ongoing investigations into platform governance being undertaken by scholars at the Digital Media Research Centre,1188 as well as by Microsoft Research and the AI Now Institute,1189 among others. In the field of law, the development and application of digital methods is a nascent area that warrants further research and funding. By further exploring lines of legal inquiry with digital methods, researchers can not only examine current and emerging problems around governance in new ways, but also provide the public with empirical evidence that can serve as a powerful lever for change.

More specifically, there is wide scope to further develop the black box methodology in this thesis. More work needs to be done to determine how substantive rule of law values, which are

1185 See, eg, Witt, Suzor and Huggins (n 27) 568. 1186 Ibid. 1187 Ranking Digital Rights, ‘Corporate Accountability Index’ (2019) (n 26); Anderson et al (n 26). 1188 See generally Digital Media Research Centre, ‘Research’ (Web page, August 2019) . 1189 See generally Microsoft Research, ‘Microsoft Research Podcast’ (Web Page, August 2019) ; AI Now Institute, ‘Research’ (Web Page, August 2019) .


often more comprehensive normative aspirations, can be measured in practice. It could be the case that alternative mixed-method approaches are necessary (eg, a combination of legal document analysis, automated data collection and semi-structured interviews). In terms of content analysis, more work is needed to expand the coding schemes in Chapters Four and Five to incorporate intersectional factors beyond gender and body type. Potential intersectional factors include age and language as evidenced in comments to a post. There is also potential to explore how machine learning could be used to automate the coding of female body type and, after a model is sufficiently trained, to generate a larger sample of images for empirical legal evaluation. Finally, it would be useful to analyse the outputs of content moderation based on hashtag selection, something that I was unable to do within the timeframe for this thesis.1190 These possibilities are especially important in the context of Instagram, which is researched less extensively than platforms with easier access to data.
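As a rough illustration of the count-data problem flagged in the accompanying footnote, the following Python sketch (using hypothetical data and only one of several possible weightings) assigns each removed post a weight of 1/n across its n hashtags, so that a single post carrying many hashtags is not counted once per tag:

from collections import defaultdict

# Hypothetical removed posts and the hashtags attached to each
removed_posts = [
    {"id": "a", "hashtags": ["#bodypositive", "#curvy"]},
    {"id": "b", "hashtags": ["#curvy"]},
    {"id": "c", "hashtags": ["#bodypositive", "#curvy", "#effyourbeautystandards"]},
]

weighted_counts = defaultdict(float)
for post in removed_posts:
    weight = 1 / len(post["hashtags"])   # spread each removal evenly across its tags
    for tag in post["hashtags"]:
        weighted_counts[tag] += weight

for tag, count in sorted(weighted_counts.items()):
    print(tag, round(count, 2))

Whether this or some other weighting is most appropriate is itself a question for further methodological work.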

The methodology in this thesis is extensible to different controversies and societal actors. On the Instagram platform, there are mounting concerns about, inter alia, self-harm content, depictions of childbirth and advertisements for weight loss and cosmetic surgery.1191 Instagram, like other platforms, is also grappling with how to better regulate bullying, hate speech and extremist material, among a plethora of problematic online communications. The regulatory issues around content moderation processes are not limited to platforms and extend to other online service providers, such as Amazon, Google and Reddit, and internet service providers, like Telstra and iiNet. Additionally, users are concerned about the extent of the influence and risks associated with automated decision-making across public and private domains. In short, there is fertile ground for a range of empirical investigations using black box analytics or other methods, even where the information provided is incomplete. These efforts could make further valuable contributions to the fields of law, digital media and beyond.

C Realising the Project of Digital Constitutionalism

The final area in which there is wide scope for further research and, indeed, multi-stakeholder engagement and collaboration, is in realising the project of digital constitutionalism. As this project is only just beginning, there is increasing demand for work that can shed theoretical and practical light on how the rule of law can be realised across offline and online

1190 It can be difficult to analyse individual count data for hashtags. For instance, given that Instagram users can post content to up to 30 hashtags, there is a risk of under or over representing content removal for tags. More work is needed to better understand how to make meaning from hashtag data in this context. For background information, see the discussion of ‘hashtag logics’ in Gerrard (n 816) 17. 1191 See, eg, Singh-Kurtz (n 951).


domains. This project is immensely complex: there is ongoing debate not only about the best ways to go about this work, but also about where to begin. For instance, can and should there be a prescribed set of constitutional values for the West? Can and should societal actors embed these values into decision-making processes? Should there be an independent body that regulates social media platforms? If so, who? These questions, among many others, make clear that studies like this thesis are a first step in trying to find mechanisms to constrain the potentially arbitrary exercise of power in the online environment with reference to rule of law values.

Given that digital constitutionalism is an emerging project, researchers should give particular attention to building new and better relationships with online platforms, and vice versa. This is important because there are limits to how much those outside of the black box can learn without access to a platform’s inner workings. Take, for example, the difficulty that I experienced in obtaining data and drawing explicit conclusions around how content is moderated on Instagram in practice. By engaging and collaborating with technology companies, researchers can better provide the public with empirical evidence of the governance systems that can affect their everyday expression through content. Most critically, it is imperative for platforms to better communicate and engage with stakeholders given that greater transparency around platform governance is a precondition to further, fine-grained research. Ongoing multi-stakeholder engagement is essential to help societal actors identify arbitrariness in content moderation processes where it exists and to allay the suspicions and fears of users where it does not.

V CONCLUDING REMARKS

This thesis has contributed to global discussion around the risk of arbitrariness in platform governance, added to the body of literature applying the legal ideal of the rule of law across social domains and emphasised the importance of the broader project of digital constitutionalism. Beyond this, I have three main hopes for this thesis. The first is that the empirical evidence that I have provided informs debate around the moderation of images depicting women’s bodies on Instagram and, in particular, highlights the greater risk of arbitrariness that some users face when posting non-normative content. Second, it is my hope that this study is of use to Instagram in its ongoing attempts to improve its content moderation processes. It would be ideal if the platform, like others, opens its regulatory system to greater public scrutiny and allows independent audits of its moderation systems. Finally, I hope that this study helps to guide the development of future legal principles for more transparent and accountable systems of governance in the digital age.


END


BIBLIOGRAPHY

A Articles/Books/Reports

Allan, Trevor, Law, Liberty, and Justice: The Legal Foundations of British Constitutionalism (Clarendon Press, 1994)

Allan, Trevor, Constitutional Justice: A Liberal Theory of the Rule of Law (Oxford University Press, 2011)

Allan, Trevor, The Sovereignty of Law: Freedom, Constitution and Common Law (Oxford University Press, 2013)

Ananny, Mike and Kate Crawford, 'Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability' (2018) 20(3) New Media & Society 973

Anderson, Chris, ‘Deliberative, Agonistic, and Algorithmic Audiences: Journalism’s Vision of Its Public in an Age of Audience Transparency’ (2011) 5 International Journal of Communication 529

Anderson, Jessica et al, ‘Censorship in Context: Insights from Crowdsourced Data on Social Media Censorship’ (Research Report, Onlinecensorship.org, 16 November 2016)

Annan, Kofi, Secretary General, In Larger Freedom: Towards Development, Security and Human Rights for All, UN Doc A/59/2005 (26 May 2005)

Arthurs, Jane and Jean Grimshaw, Women's Bodies Cultural Representations and Identity (Bloomsbury Publishing, 1999)

Baer, Hester, ‘Redoing Feminism: Digital Activism, Body Politics, and Neoliberalism’ (2016) 16(1) Feminist Media Studies 17

Banet-Weiser, Sarah and Kate Miltner, '#MasculinitySoFragile: Culture, Structure, and Networked Misogyny' (2016) 16(1) Feminist Media Studies 171

Barbrook, Richard and Andy Cameron, ‘The Californian Ideology’ (1996) 6(1) Science as Culture 44

Barocas, Solon and Andrew D. Selbst, ‘Big Data’s Disparate Impact’ (2016) 104 California Law Review 671

Barnett, Emma, Period. It’s About Bloody Time (HQ, London: 2019)

Bayamlıoğlu, Emre and Ronald Leenes, 'The ‘Rule of Law’ Implications of Data-Driven Decision-Making: A Techno-Regulatory Perspective' (2018) 10(2) Law, Innovation and Technology 295


Berg, Bibi van den and Ronald Leenes, ‘Abort, Retry, Fail: Scoping Techno-Regulation and Other Techno-Effects’, in Mireille Hildebrandt and Jeanne Gaakeer (eds), Human Law and Computer Law: Comparative Perspectives (Springer, 2012)

Barnard, Catherine and Bob Hepple, ‘Substantive Equality’ (2000) 59(3) Cambridge Law Journal 562

Bingham, Tom, The Rule of Law (Penguin Books Limited, 2011)

Black, Julia, ‘Constructing and Contesting Legitimacy and Accountability in Polycentric Regulatory Regimes’ (2008) 2 Regulation and Governance 136

Black, Julia, ‘Decentring Regulation: Understanding the Role of Regulation and Self- regulation in a “Post-regulatory” World’ (2001) 54 Current Legal Problems 103

Black, Julia, ‘Enrolling Actors in Regulatory Systems: Examples from U.K. Financial Services Regulation’ (2003, Spring) Public Law 63

Bovens, Mark, ‘Analysing and Assessing Accountability: A Conceptual Framework’ (2007) 13 European Law Journal 447

Bowling, Ben and James Sheptycki, ‘Global policing and transnational rule with law’ (2015) 6(1) Transnational Legal Theory 141

Braithwaite, John, ‘Rules and principles: A theory of legal certainty' (2002) 27 Australian Journal of Legal Philosophy 47

Broad, Ellen, Made by Humans: The AI Condition (Melbourne University Press, 2018)

Brown, Sara, Gender and the Genocide in Rwanda: Women as Rescuers and Perpetrators (Routledge, London, 2018)

Buolamwini, Joy, Gender Shades: Intersectional Phenotypic and Demographic Evaluation of Face Datasets and Gender Classifiers (Master of Science Thesis, MIT, 2017)

Burgess, Jean and Ariadna Matamoros-Fernández, ‘Mapping Sociocultural Controversies across Digital Media Platforms: One Week of #gamergate on Twitter, YouTube and Tumblr’ (2016) 2 Communication Research and Practice 79

Burris, Scott, Peter Drahos and Clifford Shearing, ‘Nodal Governance’ (2005) 30 Australian Journal of Legal Philosophy 30

Burris, Scott, Michael Kempa and Clifford Shearing, ‘Changes in Governance: A Cross- Disciplinary Review of Current Scholarship’ (2008) 41 Akron Law Review 1

Butler, Judith, Gender Trouble: Feminism and the Subversion of Identity (Routledge, 1st ed, 2006)

Cain, Patricia A, ‘Feminism and the Limits of Equality’ (1990) 24 Georgia Law Review 803


Campolo, Alex et al, AI Now 2017 Report (Report, 2017) 16-17

Cane, Peter and Herbert Kritzer, The Oxford Handbook of Empirical Legal Research (Oxford University Press, 2010)

Cardozo, Nate et al, Who Has Your Back? Censorship Edition 2018 (Online Report, 31 May 2018) Electronic Frontier Foundation

Carrotte, Elise, Ivanka Prichard and Megan Lim, ‘“Fitspiration” on Social Media: A Content Analysis of Gendered Images’ (2017) 19(3) Journal of Medical Internet Research 1

Casey, Donal and Colin Scott, ‘The Crystallisation of Regulatory Norms’ (2011) 38(1) Journal of Law and Society 76

Celeste, Edoardo, 'Digital Constitutionalism: A New Systematic Theorisation' (2019) 33(1) International Review of Law, Computers & Technology 76

Chang, Emily, Brotopia: Breaking Up the Boys' Club of Silicon Valley (Portfolio, 2018)

Chesney-Lind, Meda, ‘Policing Women’s Bodies: Law, Crime, Sexuality, and Reproduction’ (2017) 27(1) Women & Criminal Justice 1

Chouldechova, Alexandra and Aaron Roth, The Frontiers of Fairness in Machine Learning (Research Paper, 23 October 2018)

Clapperton, Dale and Stephen Corones, ‘Unfair Terms in “Clickwrap” and other Electronic Contracts’ (2007) 35(3) Australian Business Law Review 152

Clayton, Russell, Jessica Ridgway and Joshua Hendrickse, ‘Is Plus Size Equal? The Positive Impact of Average and Plus-sized Media Fashion Models on Women’s Cognitive Resource Allocation, Social Comparisons, and Body Satisfaction’ (2017) 84 (3) Communication Monographs 406

Cohen, Julie, ‘Cyberspace as/and Space’ (2007) 107 Columbia Law Review 210

Cohen, Rachel et al, '#bodypositivity: A content Analysis of Body Positive Accounts on Instagram' (2019) 29 Body Image 47

Crawford, Kate, ‘Can an Algorithm Be Agonistic? Ten Scenes from Life in Calculated Publics’ (2016) 41 Science, Technology & Human Values 77

Crawford, Kate and Tarleton Gillespie, 'What Is a Flag for? Social Media Reporting Tools and the Vocabulary of Complaint' (2016) 18(3) New Media & Society 410

Crawford, Lisa Burton, The Rule of Law and the Australian Constitution (The Federation Press, 2017)


Crawford, Lisa Burton, 'The Rule of Law' in Rosalind Dixon (ed), Australian Constitutional Values (Bloomsbury Publishing, 2018)

Crenshaw, Kimberlé, ‘Mapping the Margins: Intersectionality, Identity Politics, and Violence Against Women of Colour’ (1990) 43 Stanford Law Review 1241

Crenshaw, Kimberlé, 'Demarginalizing the Intersection of Race and Sex: A Black Feminist Critique of Antidiscrimination Doctrine, Feminist Theory, and Antiracist Politics' in Katherine Bartlett and Rosanne Kennedy (eds), Feminist Legal Theory: Readings in Law and Gender (Routledge, 2018)

Crocker, Andrew et al, Who Has Your Back? Censorship Edition 2019 (Online Report, 12 June 2019) Electronic Frontier Foundation

Das, Sauvik and Adam Kramer, ‘Self-Censorship on Facebook’ (Conference Paper, AAAI Conference on Weblogs and Social Media, MIT Media Lab and Microsoft Research, 8–11 July 2013)

Department for Digital, Culture, Media & Sport, Home Office, The Rt Hon Sajid Javid MP and the Rt Hon Jeremy Wright MP, ‘Online Harms White Paper’ (Official Government Paper, 8 April 2019)

Diakopoulos, Nicholas, ‘Algorithmic Accountability Reporting: On the Investigation of Black Boxes’ (Research Paper, Tow Center for Digital Journalism, Columbia Journalism School, 2014)

Diakopoulos, Nicholas, ‘Algorithmic Accountability’ (2015) 3 Digital Journalism 398

Dicey, A V, Introduction to the Study of the Law of the Constitution (Macmillan, 10th ed, 1959)

Dimen, Muriel, ‘Power, Sexuality, and Intimacy’ in Alison Jaggar and Susan Bordo (eds), Gender/Body/Knowledge: Feminist Reconstructions of Being and Knowing (Rutgers University Press, 1992)

Douek, Evelyn, ‘Facebook’s “Oversight Board:” Move fast with Stable Infrastructure and Humility’ (2019) 21(1) North Carolina Journal of Law and Technology (forthcoming)

Dragiewicz, Molly et al, ‘Technology Facilitated Coercive Control: Domestic Violence and the Competing Roles of Digital Media Platforms’ (2018) 18(4) Feminist Media Studies 609

Duguay, Stefanie, Jean Burgess and Nicolas Suzor, 'Queer women’s experiences of patchwork platform governance on Tinder, Instagram, and Vine' (2018) Convergence: The International Journal of Research into New Media Technologies (advance)


Duguay, Stefanie, Identity Modulation in Networked Publics: Queer Women’s Participation and Representation on Tinder, Instagram, and Vine (PhD Thesis, Queensland University of Technology, 2017)

Dynamic Coalition on Platform Responsibility, Recommendations on Terms of Service and Human Rights (Recommendation Report, November 2015)

Edwards, Lee, Fiona Philip and Ysabel Gerrard, ‘Communicating Feminist Politics? The Double-Edged Sword of Using Social Media in a Feminist Organisation’ (2019) Feminist Media Studies (advance)

Eilam, Eldad, Reversing: Secrets of Reverse Engineering (Wiley, 2005)

Electronic Frontier Foundation et al, Manila Principles on Intermediary Liability (24 March 2015)

Enloe, Cynthia, Globalization and Militarism: Feminists Make the Link (Rowman & Littlefield Publishers, 2007)

Esty, Daniel, ‘Good Governance at the Supranational Level: Globalizing Administrative Law’ (2006) 115 Yale Law Journal 1490

Fallon Jr, Richard H, ‘“The Rule of Law” as a Concept in Constitutional Discourse’ (1997) 97 Columbia Law Review 1

Federal Trade Commission, Big Data: A Tool for Inclusion or Exclusion? Understanding the Issues (Report, January 2016)

Fitzgerald, Brian, 'Software as Discourse - A Constitutionalism for Information Society' (1999) 24 Alternative Law Journal 144

Flew, Terry, Fiona Martin and Nicolas Suzor, 'Internet Regulation as Media Policy: Rethinking the Question of Digital Communication Platform Governance' (2019) 10(1) Journal of Digital Media & Policy 33

Friedman, Batya and Helen Nissenbaum, ‘Bias in Computer Systems’ (1996) 14(3) ACM Transactions on Information Systems 330

Fuller, Lon, The Morality of Law (Yale University Press, 2nd ed, 1969)

Gardner, John, ‘The Supposed Formality of the Rule of law’ in Law as a Leap of Faith: Essays on Law in General (Oxford University Press, 2012)

Gerrard, Ysabel, 'Beyond the Hashtag: Circumventing Content Moderation on Social Media' (2018) 20(12) New Media & Society 4492


Gill, Lex, Dennis Redeker and Urs Gasser, 'Towards Digital Constitutionalism? Mapping Attempts to Craft an Internet Bill of Rights' (2018) 80(4) The International Communication Gazette 302

Gill, Rosalind, Gender and the Media (Polity, Cambridge, 2007)

Gillespie, Tarleton, Wired Shut: Copyright and the Shape of Digital Culture (MIT Press, Cambridge, 2007)

Gillespie, Tarleton, ‘The Relevance of Algorithms’ in Tarleton Gillespie, Pablo Boczkowski, and Kirsten Foot (eds) Media Technologies: Essays on Communication, Materiality, and Society (MIT Press, 2014)

Gillespie, Tarleton ‘Governance of and by Platforms’ in Jean Burgess, Thomas Poell and Alice Marwick (eds), SAGE Handbook of Social Media (SAGE Publications, 2017)

Gillespie, Tarleton, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (Yale University Press, 2018)

Gillet, Rosalie, Everyday Violence: Women's Experiences of Intimate Intrusions on Tinder (PhD Thesis, Queensland University of Technology, 2018)

Glazer, Reena, ‘Women’s Body Image and the Law’ (1993) 43(1) Duke Law Journal 113

Goldsworthy, Jeffrey, 'Functions, Purposes and Values in Constitutional Interpretation’ in Rosalind Dixon (ed), Australian Constitutional Values (Bloomsbury Publishing, 2018)

Gorwa, Robert, ‘What Is Platform Governance’ (2019) 22(6) Information, Communication & Society 854

Graham, Stephen, ‘Software-sorted Geographies’ (2005) 29 (5) Progress in Human Geography 562

Gray, Joanne and Nicolas Suzor, ‘Playing with Machines: Using Machine Learning to Understand Automated Copyright Enforcement at Scale’ (2020) Big Data & Society (forthcoming)

Gregorio, Giovanni De, 'From Constitutional Freedoms to the Power of the Platforms: Protecting Fundamental Rights Online in the Algorithmic Society' (2019) 11 European Journal of Legal Studies 65

Grimmelmann, James, ‘Regulation by Software’ (2005) 114 Yale Law Journal 1719

Grimmelmann, James, ‘Virtual World Feudalism’ (2009) Yale Law Journal Pocket Part 118

Hart, Herbert L A, ‘Positivism and the Separation of Law and Morals’ (1958) 71(4) Harvard Law Review 593


Hart, Herbert L A, Encyclopaedia of Philosophy (Macmillan Press, London, 1967)

Hart, Herbert L A, The Concept of Law (Oxford University Press, 2nd ed, 1994)

Harjunen, Hannele, Neoliberal Bodies and the Gendered Fat Body (Routledge, London, 2016)

Heins, Marjorie, ‘The Brave New World of Social Media Censorship’ (2014) 127 Harvard Law Review 325

Huggins, Anna, Multilateral Environmental Agreements and Compliance: The Benefits of Administrative Procedures (Routledge, 2018)

Hess, David, ‘Power, Ideology, and Technological Determinism’ (2015) 1 Engaging Science, Technology and Society 121

Highfield, Tim, Social Media and Everyday Politics (Polity, 2016)

Highfield, Tim and Tama Leaver, ‘A Methodology for Mapping Instagram Hashtags’ (2015) 20(1) First Monday

Highfield, Tim and Tama Leaver, 'Instagrammatics and Digital Methods: Studying Visual Social Media, from Selfies and GIFs to Memes and Emoji' (2016) 2(1) Communication Research and Practice 47

Holowka, Eileen Mary, 'Between Artifice and Emotion: The “Sad Girls” of Instagram' in Kristin M.S. Bezio and Kimberly Yost (eds), Leadership, Popular Culture and Social Change (Edward Elgar Publishing, 2018)

Horn, Claire, 'A Short History of Feminist Theory' in Scarlett Curtis (ed), Feminists Don't Wear Pink and Other Lies (Penguin Random House, 2018)

Horwitz, Morton, ‘The History of the Public/Private Distinction’ (1982) 130 University of Pennsylvania Law Review 1423

James, Nickolas and Rachel Field, The New Lawyer (Wiley, 1st ed, 2013)

Jane, Emma, 'Online Misogyny and Feminist Digilantism' (2016) 30(3) Continuum: Journal of Media & Cultural Studies 284.

Johnson, David and David Post ‘Law and Borders – the Rise of Law in Cyberspace’ (1995) 48 Stanford Law Review 1367

Johnson-Laird, Andrew, ‘Software Reverse Engineering in the Real World’ (1993) 19(3) University of Dayton Law Review 843

Joyce, Daniel, ‘Internet Freedom and Human Rights’ (2015) 26(2) European Journal of International Law 493

Kadri, Thomas and Kate Klonick, ‘Facebook v Sullivan: Public Figures and Newsworthiness in Online Speech’ (2019) Southern California Law Review (forthcoming)


Kaye, David, Special Rapporteur, Freedom of Expression and the Private Sector in the Digital Age, UN Doc A/HRC/32/38 (11 May 2016)

Kaye, David, Special Rapporteur, Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, UN Doc A/HRC/38/35 (6 April 2018)

Kaye, David, Speech Police: The Global Struggle to Govern the Internet (Columbia Global Reports, 2019)

Kaye, David, Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, UN Doc A/HRC/41/35 (28 May 2019)

Kelsen, Hans, General Theory of Law and State (Harvard University Press, 1961)

Kennedy, Duncan, ‘The Stages of the Decline of the Public/Private Distinction’ (1982) 130 University of Pennsylvania Law Review 1349

Kingsbury, Benedict, ‘Global Environmental Governance as Administration: Implications for International Law’ in Daniel Bodansky, Jutta Brunnée, and Ellen Hey (eds), The Oxford Handbook of International Environmental Law (Oxford University Press, 2008)

Kitchin, Rob and Martin Dodge, Code/Space: Software and Everyday Life (MIT Press, Cambridge, 2011)

Kitchin, Rob, ‘Thinking Critically about and Researching Algorithms’ (2017) 20(1) Information, Communication and Society 14

Klonick, Kate, 'The New Governors: The People, Rules, and Processes Governing Online Speech' (2017) 131 Harvard Law Review 1598

Krippendorff, Klaus, Content Analysis: An Introduction to its Methodology (SAGE, 3rd ed, 2013)

Krygier, Martin, ‘The Rule of Law: Legality, Teleology, Sociology’ in Gianluigi Palombella and Neil Walker (eds), Relocating the Rule of Law (Hart Publishing, 2009)

Krygier, Martin, ‘Four Puzzles about the Rule of Law: Why, What, Where? And Who Cares?’ in James E Fleming (ed), Getting to the Rule of Law (New York University Press, 2011)

Krygier, Martin, ‘Why the Rule of Law Is Too Important to Be Left to Lawyers’ [2013] (4) Law of Ukraine: Legal Journal 18

Krygier, Martin, ‘Transformations of the Rule of Law: Legal, Liberal, and Neo-’ (Conference Paper, KJuris Workshop, Dickson Poon School of Law, King’s College London, 1 October 2014)


Krygier, Martin, ‘The Rule of Law: Pasts, Presents, and Two Possible Futures’ (2016) 12 Annual Review of Law and Social Science 199

Laidlaw, Emily, ‘Myth or Promise? The Corporate Social Responsibilities of Online Service Providers for Human Rights’ in M. Taddeo and L. Floridi (eds), The Responsibilities of Online Service Providers (Springer International Publishing, 2017)

Langvardt, Kyle, 'Regulating Online Content Moderation' (2017) 106 Georgetown Law Journal 1353

Leaver, Tama, Tim Highfield and Crystal Abidin, Instagram: Visual Social Media Cultures (Polity Press, 2020)

Lehr, David and Paul Ohm, ‘Playing with Data: What Legal Scholars Should Learn About Machine Learning’ (2017) 51 University of California, Davis Law Review 653

Lessig, Lawrence, Code and Other Laws of Cyberspace (Basic Books, 1999)

Lessig, Lawrence, ‘The Law of the Horse: What Cyber Law Might Teach’ (1999) 113 Harvard Law Review 501

Leurs, Koen, ‘Feminist Data Studies: Using Digital Methods for Ethical, Reflexive and Situated Socio-cultural Research’ (2017) 115 Feminist Review 130

Mack, Zachary, 'Facebook’s Former Chief Security Officer Alex Stamos on Protecting Content Moderators', The Verge (online at 19 March 2019)

Mackenzie, Adrian, Cutting Code: Software and Sociality (Peter Lang International Academic, 2006)

MacKinnon, Catharine, Toward a Feminist Theory of the State (Harvard University Press, 1989)

MacKinnon, Rebecca, Consent of the Networked: The Worldwide Struggle for Internet Freedom (Basic Books, 2013)

Manovich, Lev, ‘Cultural Analytics, Social Computing and Digital Humanities’ in Mirko Schafer and Karin van Es (eds) The Datafied Society: Studying Culture Through Data (Amsterdam University Press, 2017

Marlin-Bennett, Renée and E Nicole Thornton, ‘Governance within Social Media Websites: Ruling New Frontiers’ (2012) 36 Telecommunications Policy 493

Marwick, Alice E, ‘Instafame: Luxury Selfies in the Attention Economy’ (2015) 27(1) Public Culture 137


Marwick, Alice E, 'None of This Is New (Media): Feminisms in the Social Media Age' in Tasha Oren and Andrea Press (eds), The Routledge Handbook of Contemporary Feminism (Routledge, 2019)

Marwick, Alice and danah boyd, ‘I Tweet Honestly, I Tweet Passionately: Twitter Users, Context Collapse, and the Imagined Audience’ (2011) 13(1) New Media & Society 114

Matich, Margaret, Rachel Ashman and Elizabeth Parsons, ‘#freethenipple – Digital Activism and Embodiment in the Contemporary Feminist Movement’ (2019) 22(4) Consumption Markets & Culture 337

Maurer, Bill ‘Transacting Ontologies: Kockelman’s Sieves and a Bayesian Anthropology’ (2013) 3(3) HAU: Journal of Ethnographic Theory 63

Mavin, Sharon, Patricia Bryans and Rosie Cunningham, ‘Fed-Up with Blair’s Babes, Gordon’s Gals, Cameron’s Cuties, and Nick’s Nymphets: Challenging Gendered Media Representations of Women Political Leaders’ (2010) 25(7) Gender in Management: An International Journal 550

McHugh, Mary, ‘Interrater Reliability: The Kappa Statistic’ (2012) 22(3) Biochemia Medica 276

McRobbie, Angela, The Aftermath of Feminism: Gender, Culture and Social Change (Sage, 2009)

Mathes, Adam, ‘Folksonomies - Cooperative Classification and Communication Through Shared Metadata’ (Research Paper LIS590CMC, Computer Mediated Communication, Graduate School of Library and Information Science, University of Illinois Urbana-Champaign, 2004)

Miranda, Alyssa, ‘A Keyword Entry on “Commercial Content Moderators”’ (2017) 2(2) iJournal

Mnookin, Robert, ‘The Public/Private Dichotomy: Political Disagreement and Academic Repudiation’ (1982) 130 University of Pennsylvania Law Review 1429

Mueller, Milton, ‘Hyper-Transparency and Social Control’ (2015) 39(9) Telecommunications Policy 804

Murphy, Sara, ‘Innocent Photo of “One Skinny Woman and One Curvy Woman” Stirs Controversy,’ Yahoo (online at 16 March 2017)

Nagel, Emily van der, ‘Networks that Work Too Well’: Intervening in Algorithmic Connections’ (2018) 168(1) Media International Australia 81

Nahan, Nyuk Yin, ‘The Duty of Confidence Revisited’ (2015) 39(2) University of Western Australia Law Review 270

Nash, Victoria, ‘Revise and Resubmit? Reviewing the 2019 Online Harms White Paper’ (2019) Journal of Media Law (advance)


Neyland, Daniel, ‘On organizing algorithms’ (2015) 32(1) Theory, Culture & Society 119

Neuendorf, Kimberly, The Content Analysis Guidebook (Sage Publications, 2002)

Office of the High Commissioner for Human Rights, Guiding Principles on Business and Human Rights: Implementing the United Nations "Protect, Respect and Remedy" Framework, UN Doc HR/PUB/11/04 (16 June 2011)

O'Neil, Cathy, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (Crown, 2016)

Padovani, Claudia and Mauro Santaniello, ‘Digital Constitutionalism: Fundamental Rights and Power Limitation in the Internet Eco-System' (2018) 80(4) The International Communication Gazette 295

Pallant, Julie, SPSS Survival Manual (Allen & Unwin, 6th ed, 2016)

Pappalardo, Kylie, ‘Duty and Control in Intermediary Copyright Liability: An Australian Perspective’ (2014) 4(1) IP Theory 9

Pasquale, Frank, ‘Restoring Transparency to Automated Authority’ (2011) 9 Journal on Telecommunications and High Technology Law 235

Pasquale, Frank, The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press, 2015)

Penny, Jonathon, ‘Virtual Inequality: Challenges for the Net’s Lost Founding Value’ (2012) 10(3) Northwestern Journal of Technology and Intellectual Property 209

Perel, Maayan and Niva Elkin-Koren, ‘Black Box Tinkering: Beyond Disclosure in Algorithmic Enforcement’ (2017) 69 Florida Law Review 18

Phipps, Alison and Isabel Young, ‘Neoliberalisation and “Lad Cultures” in Higher Education’ (2015) 49(2) Sociology 305

Pollicino, Oreste and Marco Bassini, 'The Law of the Internet between Globalisation and Localization' in Miguel Maduro, Kaarlo Tuori and Suvi Sankari (eds), Transnational Law: Rethinking European Law and Legal Thinking (Cambridge University Press 2016)

Powell, A. et al, ‘Image-Based Sexual Abuse: The Extent, Nature, and Predictors of Perpetration in a Community Sample of Australian Adults’ (2019) 92 Computers in Human Behaviour 393

Quinlan, Christina, 'Policing Women’s Bodies in an Illiberal Society: The Case of Ireland' (2017) 27(1) Women & Criminal Justice 51


Radin, Margaret and R. Polk Wagner, ‘Myth of Private Ordering: Rediscovering Legal Realism in Cyberspace’ (1997) 73 Chicago-Kent Law Review 1295

Ranking Digital Rights, ‘Corporate Accountability Index’ (Research Report, April 2018)

Ranking Digital Rights, ‘Corporate Accountability Index’ (Research Report, May 2019)

Raustiala, Kal, ‘Governing the Internet’ (2016) 110 American Journal of International Law 491

Raz, Joseph, The Authority of Law: Essays on Law and Morality (Oxford University Press, 1979)

Roberts, Sarah T, ‘Digital Refuse: Canadian Garbage, Commercial Content Moderation and the Global Circulation of Social Media’s Waste’ (Media Studies Publications Paper No 14, Faculty of Information and Media Studies, Western University, 2016)

Roberts, Sarah T, 'Aggregating the Unseen' in A. Byström and M. Soda (eds), Pics or It Didn’t Happen (Prestel, 2017)

Roberts, Sarah T, 'Digital Detritus: 'Error' and the Logic of Opacity in Social Media Content Moderation' (2018) 23(3) First Monday

Roberts, Sarah T, Behind the Screen: Content Moderation in the Shadows of Social Media (Yale University Press, 2019)

Roberts, Sarah T, 'Content Moderation' in Laure A Schintler and Connie L McNeely (eds), Encyclopaedia of Big Data (Springer, advance)

Rogan, Frances, Social Media, Bedroom Cultures and Femininity: Exploring the Intersection of Culture, Politics and Identity in the Digital Media Practices of Girls and Young Women in England (Doctor of Philosophy Thesis, University of Birmingham, 2017)

Rosenfeld, Michel, ‘Rethinking the Boundaries between Public Law and Private Law for the Twenty First Century: An Introduction’ (2013) 11(1) International Journal of Constitutional Law 125

Royal Society for Public Health, ‘#StatusOfMind: Social Media and Young People’s Mental Health and Wellbeing’ (Report, May 2017) 23

Saint-Laurent, Constance de, 'In Defence of Machine Learning: Debunking the Myths of Artificial Intelligence' (2018) 14(4) Europe's Journal of Psychology 734

Salih, Sara, Judith Butler (Routledge, 2002)


Samuelson, Pamela, ‘Freedom to Tinker’ (2016) 17 Theoretical Inquiries in Law 563

Sandvig, Christian et al, ‘Auditing Algorithms: Research Methods for Detecting Discrimination on Internet Platforms’ (Paper presented at Data and Discrimination: Converting Critical Concerns into Productive Inquiry, a preconference at the 64th Annual Meeting of the International Communication Association, Seattle, WA, USA, 22 May 2014)

Sandvig, Christian et al, ‘An Algorithm Audit’ in Seeta Peña Gangadharan, Virginia Eubanks and Solon Barocas (eds), Data and Discrimination: Selected Essays (Open Technology Institute and New America, 2014)

Sandvig, Christian, ‘The Social Industry’ (2015) 1(1) Social Media + Society 1

Saurwein, Florian, Natascha Just and Michael Latzer, 'Governance of Algorithms: Options and Limitations' (2015) 17(6) info 35

Schaeffer, Denise, 'Feminism and Liberalism Reconsidered: The Case of Catharine MacKinnon' (2001) 95(3) The American Political Science Review 699

Scott, Allison, Freada Kapor Klein and Uriridiakoghene Onovakpuri, ‘Tech Leavers Study: A First-of-its-Kind Analysis of Why People Voluntarily Left Jobs in Tech’ (Report, 2017)

Scott, Colin, ‘Accountability in the Regulatory State’ (2000) 27(1) Journal of Law and Society 38

Scott, Colin, ‘Analysing Regulatory Space: Fragmented Resources and Institutional Design’ [2001] (Summer) Public Law 329

Scott, Colin, ‘Regulation in the Age of Governance: The Rise of the Post-regulatory State’ in Jacint Jordana and David Levi-Faur (eds), The Politics of Regulation: Institutions and Regulatory Reforms for the Age of Governance (Edward Elgar Publishing, 2004)

Seaver, Nick, ‘Knowing Algorithms’ (Research Paper, Department of Anthropology, UC Irvine, February 2014)

Seering, Joseph et al, 'Moderator Engagement and Community Development in the Age of Algorithms' (2019) New Media & Society

Shapiro, Martin, ‘The Giving Reasons Requirement’ (1992) The University of Chicago Legal Forum 179

Shearing, Clifford and Jennifer Wood, ‘Nodal Governance, Democracy, and the New “Denizens”’ (2003) 30 Journal of Law and Society 400

Sivacek, John and William Crano, ‘Vested Interest as a Moderator of Attitude–Behavior Consistency’ (1982) 43(2) Journal of Personality and Social Psychology 210

Small, Tamara, 'What the Hashtag?' (2011) 14(6) Information, Communication & Society 872


Smith, Aaron and Monica Anderson, Appendix A: Detailed Table, Pew Research Center (Online Report, 1 March 2018)

Snyder, Lawrence, Fluency with Information Technology: Skills, Concepts, & Capabilities (Pearson, 2015)

Srnicek, Nick, Platform Capitalism (Polity, 2017)

Stohl, Cynthia, Michael Stohl and Paul Leonardi, ‘Managing Opacity: Information Visibility and the Paradox of Transparency in the Digital Age’ (2016) 10 International Journal of Communication 123

Suzor, Nicolas, ‘The Role of the Rule of Law in Virtual Communities’ (2010) 25 Berkeley Technology Law Journal 1817

Suzor, Nicolas, Digital Constitutionalism - The Legitimate Governance of Virtual Communities (Doctor of Philosophy Thesis, Queensland University of Technology, 2010)

Suzor, Nicolas P, 'Digital Constitutionalism: Using the Rule of Law to Evaluate the Legitimacy of Governance by Platforms' (2018) 4(3) Social Media + Society (advance)

Suzor, Nicolas P, Lawless: The Secret Rules That Govern our Digital Lives (Cambridge University Press, 2019)

Suzor, Nicolas P et al, 'What Do We Mean When We Talk about Transparency? Towards Meaningful Transparency in Commercial Content Moderation' (2019) 13 International Journal of Communication 1526

Suzor, Nicolas P, 'Understanding Content Moderation Systems: New Methods to Understand Internet Governance at Scale, Over Time, and Across Platforms' in Ryan Whalen (ed), Computational Legal Studies: The Promise and Challenge of Data-Driven Legal Research (Edward Elgar Publishing Ltd, forthcoming)

Swami, Viren et al, ‘Initial Examination of the Validity and Reliability of the Female Photographic Figure Rating Scale for Body Image Assessment’ (2008) 44 Personality and Individual Differences 1752

Swami, Viren et al, ‘Further Investigation of the Validity and Reliability of the Photographic Figure Rating Scale for Body Image Assessment’ (2012) 94(4) Journal of Personality Assessment 404

Tamanaha, Brian, On the Rule of Law: History, Politics, Theory (Cambridge University Press, 2004)

Tan, Corinne Hui Yun, ‘Terms of Service on Social Media Sites’ (2014) 19 Media and Arts Law Review 195

Tett, Gillian, The Silo Effect: The Peril of Expertise and the Promise of Breaking Down Barriers (Little, Brown Book Group, 2016)


Thompson, Edward, Whigs and Hunters: The Origin of the Black Act (Penguin Books, 1990)

Thrift, Samantha, ‘#YesAllWomen as Feminist Meme Event’ (2014) 16(6) Feminist Media Studies 1090

Tiggemann, Marika and Mia Zaccardo, ‘“Strong is the New Skinny”: A Content Analysis of #fitspiration Images on Instagram’ (2016) Journal of Health Psychology 1

Toerien, Merran and Sue Wilkinson, ‘Exploring the Depilation Norm: A Qualitative Questionnaire Study of Women's Body Hair Removal’ (2004) 1 Qualitative Research in Psychology 69

Turkel, Gerald, ‘The Public/Private Distinction: Approaches to the Critique of Legal Ideology’ (1988) 22 Law & Society Review 801

Turner, Rachel, ‘The Rebirth of Liberalism: The Origins of Neo-Liberal Ideology’ (2007) 12(1) Journal of Political Ideologies 67

Tushnet, Rebecca, ‘Power Without Responsibility: Intermediaries and the First Amendment’ (2008) 76 The George Washington Law Review 986

United Nations, ‘Gender Equality: Why It Matters’ (Infographic, 2018)

United Nations Global Compact, Business for the Rule of Law Framework (Guidebook, June 2015)

Verkuil, Paul, Outsourcing Sovereignty: Why Privatisation of Government Functions Threatens Democracy and What We Can Do About It (Cambridge University Press, 2007)

Wachter, Sandra, Brent Mittelstadt and Chris Russell, 'Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR' (2018) 31(2) Harvard Journal of Law & Technology 841

Waldron, Jeremy, ‘Is the Rule of Law an Essentially Contested Concept (in Florida)?’ (2002) 21 Law and Philosophy 137

Waldron, Jeremy, ‘Hart and the Principles of Legality’ in Matthew H. Kramer, Claire Grant, Ben Colburn and Antony Hatzistavrou (eds), The Legacy of H.L.A. Hart: Legal, Political, and Moral Philosophy (Oxford University Press, 2008)

Waldron, Jeremy, ‘The Concept and the Rule of Law’ (2008) 43 Georgia Law Review 1

We Are Social and Hootsuite, ‘Global Digital Report 2019’ (Online Report, 30 January 2019)

Webb, Jennifer et al, ‘Fat Is Fashionable and Fit: A Comparative Content Analysis of Fatspiration and Health at Every Size Instagram Images’ (2017) 22 Body Image 53

West, Sarah Myers, ‘Raging Against the Machine: Network Gatekeeping and Collective Action on Social Media Platforms’ (2017) 5(3) Media and Communication 28

West, Sarah Myers, ‘Censored, Suspended, Shadowbanned: User Interpretations of Content Moderation on Social Media Platforms’ (2018) 20(8) New Media & Society 201


West, Sarah Myers, Meredith Whittaker and Kate Crawford, ‘Discriminating Systems: Gender, Race, and Power in AI’, AI Now Institute (Report, April 2019)

Wissinger, Elizabeth, ‘Glamour Labour in the Age of Kardashian’ (2016) 7(2) Critical Studies in Fashion & Beauty 141

Witt, Alice, Nicolas Suzor and Patrik Wikstrom, 'Regulating Ride-Sharing in the Peer Economy' (2015) 1(2) Communication Research & Practice 174

Witt, Alice, Nicolas Suzor and Anna Huggins, ‘The Rule of Law on Instagram: An Evaluation of the Moderation of Images Depicting Women’s Bodies’ (2019) 42(2) UNSW Law Journal 557

Witt, Alice, Rosalie Gillett and Nicolas Suzor, Submission to the Department of Communications and the Arts, Australian Government, Online Safety Charter Consultation Paper (12 April 2019)

Zalnieriute, Monika, Lyria Bennett Moses and George Williams, 'The Rule of Law and Automation of Government Decision‐Making' (2019) 82(3) The Modern Law Review 425

Ziewitz, Malte, ‘Governing Algorithms: Myth, Mess, and Methods’ (2016) 41(1) Science, Technology & Human Values 3

B Cases

Christian W. Sandvig, et al. v Jefferson B. Sessions III, in his Official Capacity as Attorney General of the United States, No. 16-1368 (JDB) (D.D.C., March 30, 2018)

HiQ Labs, Inc. v LinkedIn Corp. (N.D. Cal. Aug. 14, 2017)

HiQ Labs, Inc. v LinkedIn Corp., No. 17-16783 (9th Circuit, September 9, 2019)

C Legislation

Communications Decency Act of 1996, 47 USC § 230(c) (1996)

Criminal Code Act 1995 (Cth)

Digital Millennium Copyright Act of 1998, 17 USC (2000)

D Treaties

UN Convention on the Elimination of All Forms of Discrimination against Women, opened for signature 18 December 1979, 1249 UNTS 13 (entered into force 3 September 1981)

Universal Declaration of Human Rights, GA Res 217A (III), UN GAOR, UN Doc A/810 (10 December 1948)


E Other

@bodyposipanda (Megan Jayne Crabbe) (Instagram, 28 October 2019)

@i_weigh (I Weigh, Jameela Jamil) (Instagram, 16 October 2019)

@InstagramEnglish (Instagram) (Facebook, 21 March 2016)

@jameelajamilofficial (Jameela Jamil) (Instagram, 9 April 2019)

@jameelajamilofficial (Jameela Jamil) (Instagram, 28 May 2019)

@jameelajamilofficial (Jameela Jamil) (Instagram, 20 June 2019)

@kevin (Kevin Systrom) (Instagram, 8 March 2017)

@kevin (Kevin Systrom) (Instagram, 9 March 2017)

@kevin (Kevin Systrom) (Instagram, 30 April 2017)

@kevin (Kevin Systrom) (Instagram, 21 December 2017)

@kevin (Kevin Systrom) (Instagram, 2 May 2018)

@mosseri (Adam Mosseri) (Instagram, 2 October 2018)

@mosseri (Adam Mosseri) (Instagram, 18 June 2019)

@mosseri (Adam Mosseri) (Twitter, 18 October 2019)

ACLU Foundation of Northern California et al, The Santa Clara Principles on Transparency and Accountability in Content Moderation (7 May 2018)

ACLU, Reproductive Freedom (Web Page, August 2019)


‘A Contract for the Web’, Contract for the Web Stakeholders (Web Page, 2019)

Agius, Adrian, ‘BIG, bad world of DATA’, Law in Society (online)

Adweek Staff, ‘Facebook Reverts Terms of Service after Complaints’, Adweek (online at 18 February 2009)

AI Congress, The Usefulness – and Possible Dangers – of Machine Learning, AI Congress (Web Page, 3 October 2017)

AI Now Institute, ‘Research’ (Web Page, August 2019) <https://ainowinstitute.org/research.html>

Allan, Richard, ‘Hard Questions: Who Should Decide What is Hate Speech in an Online Global Community?’ Facebook Newsroom (Press Release, 27 June 2018)

Al-Sibai, Noor, ‘Robbie Tripp Put the Final Nail in Body Positivity’s Coffin with His “Chubby Sexy” Music Video’, Wear Your Voice (online at 28 May 2019)

Anderson, Carly, ‘Just Wear the Suit: How We Can All Support Each Other’, Lipgloss and Crayons (online at 2019)

Anderson, Shauna, ‘Why Was THIS Photo Banned from Instagram? The Reason Will Make You Shake Your Head in Disbelief’, Mamamia (online at 23 May 2014)

Arsht, Andrew and Daniel Etcovitch, 'The Human Cost of Online Content Moderation', Jolt Digest (online at 2 March 2018)

Australian Government, Office of the Chief Scientist, ‘Australia’s STEM Workforce’ (Infographic, 2016)

Australian Human Rights Commission, Role of Freedom of Opinion and Expression in Women's Empowerment (Web Page, 13 June 2013)

Baehr, Amy, ‘Liberal Feminism’, Stanford Encyclopaedia of Philosophy (Web Page, 30 September 2013) [1.1.1 Procedural Accounts of Personal Autonomy]

Barlow, John Perry, 'A Declaration of the Independence of Cyberspace', Electronic Frontier Foundation (online at 2 August 1996) <https://www.eff.org/cyberspace-independence>


Bailey, Paige, ‘Instagram: Censorship of the Female Form’, Medium (online at 8 June 2019)

BBC News, ‘Instagram Vows to Remove All Graphic Self-Harm Images from Site’, BBC News (online at 7 February 2019)

Berners-Lee, Tim, ‘Tim Berners-Lee Calls for Internet Bill of Rights to Ensure Greater Privacy’, The Guardian (online at 28 September 2014)

Bickert, Monika, ‘Publishing Our Internal Enforcement Guidelines and Expanding Our Appeals Process’, Facebook Newsroom (Press Release, 24 April 2018)

Bogost, Ian and Nick Montfort, ‘Platform Studies: Frequently Asked Questions’ (FAQ, 12 December 2009)

Bogle, Ariel, ‘The Anti-Facebook Backlash is Official and Regulation Is “Inevitable”’, ABC News (online at 13 April 2018)

Bologna, Caroline, ‘After Instagram Censored Her Photo, Mom Speaks Out about Body Image’, Huffington Post (online at 24 November 2016)

Brabaw, Kasandra, ‘This Curvy Muslim Woman Is Speaking Out about Censorship on Instagram’, Refinery29 (online at 8 March 2017)

Brar, Faith, 'Instagram Tried to Ban a Body-Positive Hashtag—Until Hundreds of Women Took a Stand', Shape (online at 14 August 2019)

Brech, Amy, ‘Reclaim Your Belly: The Body Positive Movement Taking Instagram by Storm’, Grazia (online at 6 June 2017)

Broderick, Ryan, ‘The Comment Moderator Is the Most Important Job in the World Right Now’, BuzzFeed News (online at 4 March 2019)

Brown, Kristen, ‘Why did Instagram fat-shame these women in bikinis?’, Splinter (online at 6 January 2016)


Buchanan, Matt, ‘Instagram and the Impulse to Capture Every Moment’, The New Yorker (online at 20 June 2013)

Buchanan, Sarah, ‘Instagram Deleted This Photo of a Woman’s Cellulite – But She Has Best Response’, Daily Star (online at 4 March 2017)

Bunge, Mario, ‘A General Black Box Theory’ (1963) 30(4) Philosophy of Science 346

Buni, Catherine and Soraya Chemaly, ‘The Secret Rules of the Internet’, The Verge (online at 13 April 2016)

Business Insider, ‘Mark Zuckerberg, Moving Fast and Breaking Things’, Business Insider (online at 15 October 2010)

Cambridge, Ellie, ‘Model “Too Sexy for Instagram” Is Banned from the Social Media Site Days after Racy Sideboob Pics’, News.com.au (online at 18 August 2017)

Castillo, Michelle, ‘Zuckerberg tells Congress Facebook Is Not a Media Company: “I Consider us to be a Technology Company”’, CNBC (online at 11 April 2018)

Castro, Alex, ‘Something Awful’s Founder Thinks YouTube Sucks at Moderation’, The Verge (online at 26 June 2019)

Cellan-Jones, Rory, 'Tim Berners-Lee: 'Stop web's downward plunge to dysfunctional future'', BBC News (online at 11 March 2019)

Chancellor, Stevie et al, ‘#thyghgapp: Instagram Content Moderation and Lexical Variation in Pro-Eating Disorder Communities’ (Conference Paper, ACM Conference on Computer-Supported Cooperative Work and Social Computing, 1 March 2016) 1202

Chastain, Ragen, ‘Fat is Not a Violation’, ravishly (online at 9 October 2018)

Chaudhary, Swetambara, ‘This Photo Was Removed By Instagram. The Owner Writes a Powerful Open Letter in Response’, Scoop Whoop (online at 26 March 2015)


Chen, Adrien, ‘The Laborers who Keep Dick Pics and Beheadings out of your Facebook Feed’, Wired News (online at 23 October 2014)

Code.org, ‘How Computers Work: Hardware and Software’ (YouTube, 30 January 2018) 02:40 ff

Coleman, Korva, ‘Instagram Has a Problem with Hate Speech and Extremism, “Atlantic” Reporter Says’, NPR (online at 30 March 2019)

Constine, Josh, ‘Instagram Suddenly Chokes off Developers as Facebook Chases Privacy’, TechCrunch (online at 3 April 2018)

Cook, Catriona et al, Laying Down the Law (Lexis Nexis, Butterworths, 7th ed, 2009)

Cook, Jesselyn, ‘Instagram’s Shadow Ban On Vaguely “Inappropriate” Content Is Plainly Sexist’, The Huffington Post (online at 30 April 2019) <https://www.huffingtonpost.com.au/entry/instagram-shadow-ban-sexist_n_5cc72935e4b0537911491a4f?ri18n=true>

Craig, Paul, ‘Formal and Substantive Conceptions of the Rule of Law: An Analytical Framework’ (1997) Public Law 467.

Crawford, Angus, ‘Instagram ‘Helped Kill My Daughter’’, BBC News (online at 22 January 2019)

Digital Media Research Centre, ‘Research’ (Web Page, August 2019)

Dubey, Aarti Olivia, ‘Did We Violate Instagram Guidelines by Being “Too Fat” to Wear a Swimsuit?’ Huffington Post (online at 14 June 2016)

Duke University Law School, ‘LENS 2018: Complexity & Security | Monika Bickert, Keynote: The Social Media Perspective’ (YouTube, 24 February 2018) 43:00

Dastin, Jeffrey, ‘Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women’, Reuters (online at 10 October 2018)

Dworkin, Gerald, ‘Paternalism’, Stanford Encyclopaedia of Philosophy (Web Page, 2017) <https://plato.stanford.edu/entries/paternalism/>


Eadicicco, Lisa, '3 Things We Learned From Facebook's AI Chief About the Future of Artificial Intelligence', Business (online at 19 February 2019)

Egan, Erin and Ashlie Beringer, ‘We’re Making Our Terms and Data Policy Clearer, Without New Rights to Use Your Data on Facebook’, Facebook Newsroom (Press Release, 4 April 2018)

Ehrenkranz, Melanie, ‘Instagram Deactivated this Artist’s Account – Then Ignored Her For Months’, Mic (online at 25 May 2017)

Electronic Frontier Foundation, Facebook, Instagram Lack Transparency on Government-Ordered Content Removal Amid Unprecedented Demands to Censor User Speech, EFF's Annual Who Has Your Back Report Shows (Web Page, 31 May 2018)

Electronic Frontier Foundation and Visualizing Impact, ‘A Resource Kit for Journalists’, onlinecensorship.org (Web Page, September 2017)

Electronic Frontier Foundation and Visualizing Impact, ‘Offline-Online’ (Infographic, 30 August 2018)

Facebook Artificial Intelligence, ‘Yann LeCun on the Future of Deep Learning Hardware’, AI Blog (online at 18 February 2019)

Facebook, Annual Report of Facebook Inc. (Report, 31 January 2019)

Facebook, ‘Company Info’, Facebook Newsroom (2019)

Facebook, ‘Community Standards’ (2019)

Facebook, Draft Charter: An Oversight Board for Content Decisions (Draft Charter, 2019)

‘Facebook, Inc. (FB) Q1 2018 Earnings Conference Call Transcript’, The Motley Fool (Web Page, 25 April 2018)

‘Report Violations of Our Community Guidelines’, Instagram (Web Page, 2019)

Facebook, ‘Platform Policy’, Facebook for Developers (2019)

Facebook, ‘Terms of Service’ (2019)


‘Facebook Reports Fourth Quarter and Full Year 2018 Results’ (Report, 30 January 2019)

Feerst, Alex, ‘Implementing Transparency about Content Moderation’, Techdirt (online at 1 February 2018)

Felten, Edward, ‘The New Freedom to Tinker Movement’, Freedom to Tinker (online at 21 March 2013)

Fontenay, Catherine de, ‘Would Regulation Cement Facebook’s Market Power? It’s Unlikely’, The Conversation (online at 12 April 2018)

Frankel, Todd, ‘The Future of Work: Facebook’s Open Plan Offices’, Sydney Morning Herald (online at 1 December 2015)

‘Free The Nipple’ (Web Page, 2019)

Gagnon, Alys, 'Three Mothers Share Nude Photos on Instagram. Only one is Removed: THREE Mums Posted a Naked Photo on Instagram. Only One of Them Was Removed. Clearly There’s a Double Standard Here', News.com.au (online at 7 December 2016)

Gander, Kashmira, ‘Body Hair and Sexuality: The Banned Photos Instagram Doesn't Want You to See’, Independent (online at 9 March 2017)

Gillespie, Tarleton, ‘The Dirty Job of Keeping Facebook Clean’, Culture Digitally (online at 22 February 2012)

Gillespie, Tarleton, ‘Facebook Can’t Moderate in Secret Anymore’, Culture Digitally (online at 23 May 2017)

Google, ‘Are you a robot? Introducing “No CAPTCHA reCAPTCHA”’, Google (online at 3 December 2014)

Gordon, Bryony, ‘“Eff Your Beauty Standards”: Meet the Size 26, Tattooed Supermodel Who is Changing the Fashion Industry’, The Telegraph (online at 14 May 2016)

Grady, Lora, ’15 Canadian Women on What It Really Means to Be Body Positive RN’, Flare (online at 23 January 2018)


Grady, Lora, 'Women are Calling out Instagram for Censoring Photos of Fat Bodies: The Double Standards are Real', Flare (online at 18 October 2018)

Gray, Richard, ‘Why Artificial Intelligence Is Shaping Our World’, BBC: Machine Minds (online at 19 November 2018)

Hallqvist, Erika, ‘How Instagram Censors Could Affect the Lives of Everyday Women’, USA Today (online at 8 June 2019)

Heller, Dave, ‘FSU Researchers Find Plus-Size Fashion Models Help Improve Women’s Psychological Health’, Florida State University (online at 7 June 2017)

Hersher, Rebecca, ‘Labouring in the Shadows to Keep the Web Free of Child Porn’, npr (online at 17 November 2013)

Holden, Steve, ‘Instagram Period Photo: Woman Who Took It Says She Wasn’t Being “Provocative”’, BBC Newsbeat (online at 5 April 2015)

Hopkins, Nick, ‘Facebook Moderators: A Quick Guide to Their Job and Its Challenges’, The Guardian (online at 22 May 2017)

‘How Many Reports on Instagram Can Delete an Account?’, Quora (Web Page, 2019)

Instagram, ‘A New Look for Instagram’, Info Centre (Press Release, 11 May 2016)

Instagram, ‘Community Guidelines’, Help Centre (October 2019)

Instagram, ‘Changes to Our Account Disable Policy’, Info Centre (Press Release, 18 July 2019)

‘Instagram Deletes Woman’s Body Positive Photo’, Now to Love (online at 24 March 2017)

Instagram, ‘Does the Number of Times Something Gets Reported Determine Whether or Not It's Removed?’ Help Centre (Web Page, 30 August 2018)


Instagram, ‘How Do I Archive a Post I’ve Shared?’ Help Centre (2018)

Instagram, ‘How do I Use Hashtags?’ Help Centre (2019)

Instagram, Info Centre (Web Page, 2019)

Instagram, ‘Platform Policy’, Help Centre (2019)

Instagram, ‘Terms of Use’, Help Centre (19 April 2018)

Instagram, ‘Welcome to IGTV’, Info Centre (Press Release, 20 June 2018)

Instagram, ‘Why Are Certain Posts Not Appearing in Explore and Hashtag Pages?’ Help Centre (2019)

Instagram and National PTA, ‘Know How to Talk with Your Teen about Instagram: A Parent’s Guide’, Instagram (Guidelines, 2019) 4

Internet Rights & Principles Coalition, ‘The Charter of Human Rights and Principles for the Internet’ (Booklet, 6th ed, November 2018) 7, 14

Jacobs, Julia, ‘Instagram Bans Graphic Images of Self-Harm after Teenager’s Suicide’, The New York Times (online at 7 February 2019)

Jalonick, Mary Clare, ‘Zuckerberg: Regulation “Inevitable” for Social Media Firms’, Sydney Morning Herald (online at 12 April 2018)

John, Jeff, 'Why Zuckerberg Won't Admit Facebook Is a Media Company', Fortune Tech (online at 14 November 2016)

Jones, Lucy, ‘“Society shrouds birth in shame”: challenging Instagram’s ban on women in labour’, The Guardian (online at 13 March 2018)

Katz, Evan Ross, ‘How Instagram Helped Democratize Beauty’, Mic (online at 30 August 2017)

Kerr, Orin, ‘Scraping A Public Website Doesn’t Violate the CFAA, Ninth Circuit (Mostly) Holds’, reason (online at 9 September 2019)


Klazas, Erin and Alice Leppert, ‘Selfhood, Citizenship … and All Things Kardashian: Neoliberal and Postfeminist Ideals in Reality Television’ (Research Paper 7-23-2015, Digital Commons @ Ursinus College, 2015)

Koebler, Jason, 'How Facebook Trains Content Moderators', Motherboard (online at 26 February 2019)

Koebler, Jason and Joseph Cox, 'The Impossible Job: Inside Facebook’s Struggle to Moderate Two Billion People', Vice (online at 24 August 2018)

Kolbert, Elizabeth, ‘Who Owns the Internet?’, The New Yorker (online at 28 August 2017)

Kreps, Daniel, ‘Anti-Censorship Activists Strip Nude outside Facebook HQ to Fight Nudity Ban’, Rolling Stone (online at 2 June 2019)

Lanquist, Lindsay, ‘Amber Rose Posted a Picture of Her Bush Online, and Instagram Took it Down’, SELF (online at 12 June 2017)

Latham, Annabel, ‘Cambridge Analytica Scandal: Legitimate Researchers Using Facebook Data Could be Collateral Damage’, The Conversation (online at 21 March 2018)

Latour, Bruno, ‘Visualization and Cognition: Thinking with Eyes and Hands’ (1986) 6 Knowledge and Society 1

Lee, Dave, ‘Instagram Users Top 500 Million’, BBC (online at 21 June 2016)

Lee, Dave, ‘Who’s Policing Facebook?’, BBC News (online at 22 May 2018)

Lee, Timothy, ‘Mark Zuckerberg Is In Denial about How Facebook is Harming Our Politics’, Vox (online at 10 November 2016)

Lennon, Kathleen, ‘Feminist Perspectives on the Body’, Stanford Encyclopaedia of Philosophy (Web Page, 2 August 2019)

Leslie Francis and Patricia Smith, ‘Feminist Philosophy of Law’, Stanford Encyclopaedia of Philosophy (Web Page, 24 October 2017) [1.1. Personal Autonomy]

Lessig, Lawrence, ‘Code Is Law’, Harvard Magazine (online at 1 January 2000)


Lochlainn, Gráinne Maedhbh Nic, ‘Facebook and Data Harvesting: What You Need to Know’, The Conversation (online at 4 April 2018)

LoMonte, Frank, ‘The Law that Made Facebook What It Is Today’, The Conversation (online at 11 April 2018)

Lorenz, Taylor, ‘Instagram is the Internet’s New Home for Hate’, The Atlantic (online at 21 March 2019)

Macdonald, Tom, ‘A Not-So-Modest Proposal: Instagram, Free the Nipple for the Inauguration’, Vogue (online at 20 January 2017)

Mack, Zachary, ‘Why AI Can’t Fix Content Moderation’, The Verge (online at 2 July 2019)

Madrigal, Alexis, ‘Inside Facebook's Fast-Growing Content-Moderation Effort’, The Atlantic (online at 7 February 2018)

Mahdawi, Arwa, ‘Kim Kardashian West Shocks Fans with Ad for Appetite-Suppressing Lollipops’, The Guardian (online at 17 May 2018)

‘Mark Zuckerberg Asks Governments to Help Control Internet Content’, BBC News (online at 30 March 2019)

Matsakis, Louise, ‘YouTube Doesn’t Know Where Its Own Line Is’, Wired (online at 3 February 2018)

McAfee, Tierney, ‘Instagram Now Allows Photos of Women Breastfeeding’, People (online at 17 April 2016)

McCormack, Ange, ‘Instagram, Facebook Crack Down on Harmful Weight Loss Products and Cosmetic Procedures’, Triple J Hack (online at 19 September 2019)

Mednicoff, David, ‘Trump May Believe in the Rule of Law, Just Not the One Understood by Most American Lawyers’, The Conversation (online at 5 June 2018)

Newell-Hanson, Alice, ‘How Artists are Responding to Instagram’s No-Nudity Policy’, i-D (online at 16 August 2016)


Newton, Casey, ‘The Trauma Floor: The Secret Lives of Facebook Moderators in America’, The Verge (online at 25 February 2019)

Microsoft Research, ‘Microsoft Research Podcast’ (Web Page, August 2019) <https://www.microsoft.com/en-us/research/blog/category/podcast/>

Mosseri, Adam, Head of Instagram, 'Our Commitment to Lead the Fight against Online Bullying', Info Centre (Press Release, 8 July 2019)

Moye, David, ‘This Cheeky Instagram Page Is Dedicated to Vacation Butt Shots (NSFW)’, Huffington Post (online at 3 May 2017)

Muller, Marissa, ‘Fitness Blogger Mallory King Says Instagram Removed Her Proud Cellulite Selfie’, Allure (online at 28 February 2017)

O'Hara, Mary Emily, ‘Why is Instagram Censoring So Many Hashtags Used by Women and LGBT People?’, The Daily Dot (online at 12 May 2016)

Olya, Gabrielle, ‘Curvy Blogger Angry That Her Bikini Photo Was Taken Down by Instagram: “It's Blatant Fat-Phobia”’, People (online at 6 June 2016)

Orbello, ‘The Ultimate Guide to the Best Instagram Hashtags for Likes’, Oberlo (online at 5 May 2018)

ORBIT, 100+ Brilliant Women in AI & Ethics (Conference Report, 2019)

PA Media, ‘Instagram Tightens Rules on Diet and Cosmetic Surgery Posts’, The Guardian (online at 19 September 2019)

Pasquale, Frank, ‘Assessing Algorithmic Authority’, Madisonian.net (online at 18 November 2009)

Paunescu, Delia, ‘Inside Instagram’s Nudity Ban: Artists Want Instagram’s Community Guidelines to Change. Will Facebook Finally “Free the Nipple”’, recode (online at 27 October 2019)

Punsmann, Burcu Gültekin, ‘Three Months in Hell: What I Learned from Three Months of Content Moderation for Facebook in Berlin’, Süddeutsche Zeitung (online at 6 January 2018)


Rach, Jessica, 'Plus-Size Model Accuses Instagram of "Shadow Banning" Her "Fatkini" Snaps Claiming Photos of "Slim Women" Get MORE Likes', The Daily Mail (online at 12 June 2018)

Radin, Sara, '4 Female Artists on Fighting Censorship & Reclaiming Nudes', Highsnobiety (online at 8 March 2018)

Raffoul, Nicolas, ‘Loving Myself and My Selfies’, The McGill Tribune (online at 10 September 2019)

Rangarajan, Sinduja, 'Here’s the Clearest Picture of Silicon Valley’s Diversity Yet: It’s Bad. But Some Companies are Doing Less Bad' (online at 25 June 2018)

Reid, Rebecca, 'Plus-size model says she’s been shadow banned by Instagram because of her body', Metro (online at 11 June 2018)

Reynolds, Megan, ‘Why Won’t Instagram Take Down This Picture of Justin Bieber Grabbing His Dick?’, The Frisky (online at 18 September 2018)

Roberts, Sarah T, 'Social Media’s Silent Filter', The Atlantic (online at 8 March 2017)

Rodriguez, Jeremiah, 'Instagram Apologizes to Pole Dancers after Hiding Their Posts', CTV News (online at 6 August 2019)

Rosen, Guy, ‘Facebook Publishes Enforcement Numbers for First Time’, Facebook Newsroom (Press Release, 15 May 2018)

Rosen, Guy and Tessa Lyons, ‘Remove, Reduce, Inform: New Steps to Manage Problematic Content’, Facebook Newsroom (online at 10 April 2019)

Rosen, Jeffrey, ‘The Delete Squad’, New Republic (online at 23 April 2013)

Rubin, Aaron, ‘How Website Operators Use CFAA to Combat Data Scraping’, Law 360 (online at 25 August 2014)

Salam, Maya, ‘Why “Radical Body Love” Is Thriving on Instagram’, The New York Times (online at 9 June 2017)


Sandler, Rachel, 'Here's Everything Facebook Announced at its 2018 Developers Conference', Business Insider (online at 3 May 2018)

Varghese, Sanjana, ‘Ruha Benjamin: “We definitely can’t wait for Silicon Valley to become more diverse”’, The Guardian (online at 30 June 2019)

Savage, Michael, ‘Health Secretary Tells Social Media Firms to Protect Children after Girl’s Death’, The Guardian (online at 27 January 2019)

Satran, Rory, ‘Instagram’s Kevin Systrom on Fashion and #freethenipple’, i-D (online at 12 May 2015)

Schmidt, Vivien, 'Democracy and Legitimacy in the European Union Revisited: Input, Output and “Throughput”' (2013) 61(1) Political Studies 2

Seligson, Hannah, ‘Why Are More Women than Men on Instagram?’, The Atlantic (online at 7 June 2016)

Shrivastava, Nivi, ‘Instagram Apologizes to Plus-Size Blogger for Removing Bikini Pics’, NDTV (online at 10 June 2016)

Shugerman, Emily, 'This Muslim Woman Says Instagram Took Down Her Fully Clothed Selfie', Revelist (online at 8 March 2017)

Singh-Kurtz, Sangeeta, ‘Instagram Will No Longer Promote Diet Products to Minors’, The Cut (online at 18 September 2019)

Silver, Ellen, 'Hard Questions: Who Reviews Objectionable Content on Facebook — And Is the Company Doing Enough to Support Them?', Facebook Newsroom (Press Release, 26 July 2018)

Slack Team, ‘Diversity at Slack’ (Web Page, 2019)

Small, Zachary, ‘Facebook Censors Montreal Museum of Fine Art’s Ad Featuring Nude Picasso Painting’, Hyperallergic (online at 7 August 2018)


Singh, Spandana, ‘Assessing YouTube, Facebook and Twitter's Content Takedown Policies: How Internet Platforms Have Adopted the 2018 Santa Clara Principles’, New America (online at 7 May 2019)

Spangler, Todd, ‘Instagram CEO Positions His Company as Safer Alternative to Controversial Rivals’, Variety (online at 15 November 2017)

Sonderby, Chris, ‘Our Continued Commitment to Transparency’, Facebook Newsroom (Press Release, 15 November 2018)

Steadman, Otillia, ‘Porn Stars vs. Instagram: Inside the Battle to Remain on the Platform’, BuzzFeed (online at 18 October 2019)

Strapagiel, Lauren, ‘These Moms are Mad that Instagram keeps Deleting Topless Photos of Their Long-Haired Sons’, BuzzFeed News (online at 24 September 2019)

Systrom, Kevin, ‘Statement from Kevin Systrom, Instagram Co-Founder and CEO’, Official Blog – Instagram (online at 24 September 2018)

The Globe and Mail, ‘Tackling Silicon Valley’s Sexist “Mirror-tocracy” Will Take A Diverse Workforce, Says Kara Swisher’ (YouTube, 11 September 2017)

The University of Sheffield, ‘New Instagram Policy Bans Harmful Weight Loss Content for Younger Users’, Department of Sociological Studies News (online at 19 September 2019)

The World Bank, ‘Population, Female (% of Total Population)’ (Web Page, 2018)

The World Bank, ‘Population, Total’ (Web Page, 2019)

Thompson, Nicholas, ‘The Great Tech Panic: Instagram’s Kevin Systrom Wants to Clean Up the Internet’, Wired (online at 14 August 2017)

Tonjes, Meghan, ‘Dear Instagram’ (YouTube, 19 May 2014)

University of Oxford, Oxford Living Dictionary (online at 7 May 2018)


Valenti, Jessica, ‘Social Media Is Protecting Men From Periods, Breast Milk and Body Hair’, The Guardian (online at 30 March 2015)

Venkat, Suresh, ‘When an Algorithm Isn’t…’, Medium (online at 1 October 2015)

Warden, Pete, ‘How I Got Sued by Facebook’, Pete Warden’s Blog (online at 5 April 2010)

Weber, Lauren and Deepa Seetharaman, ‘The Worst Job in Technology: Staring at Human Depravity to Keep It Off Facebook’, The Wall Street Journal (online at 27 December 2017)

West, Sarah Myers, ‘Facebook’s Guide to being a Lady’, onlinecensorship.org (Web Page, 18 November 2015)

Williams, Maxine, ‘Facebook 2018 Diversity Report: Reflecting on Our Journey’, Facebook Newsroom (Press Release, 12 July 2018)

York, Jillian, ‘Guns and Breasts: Cultural Imperialism and the Regulation of Speech on Corporate Platforms’ (Web Page, 17 March 2016)

York, Jillian, 'Who Defines Pornography? These Days, It’s Facebook' (online at 25 May 2016)

York, Jillian and Corynne McSherry, ‘Content Moderation Is Broken. Let Us Count the Ways’, Electronic Frontier Foundation (online at 29 April 2019)

YouTube, ‘Policies and Safety’ (2019)

Zuckerberg, Mark, ‘A Blueprint for Content Governance and Enforcement’, Facebook (online at 15 November 2018)

Zuckerberg, Mark, ‘Zuckerberg: The Internet Needs New Rules. Let's Start In These Four Areas’, The Washington Post (online at 30 March 2019)
