THE POLITICS OF PRIVACY PROTECTION: AN ANALYSIS OF RESISTANCE TO METADATA RETENTION AND ENCRYPTION ACCESS LAWS

Michael Peter Wilson Bachelor of Justice (Honours) Bachelor of Behavioural Science

A thesis submitted in fulfilment of the requirements for the degree of Doctor of Philosophy

School of Justice, Faculty of Law

Queensland University of Technology

2020

Abstract

The politics of privacy protection are contested. In recent years, the Australian Government has justified the expansion of surveillance powers, under the Data Retention Act (2015) and Encryption Access Act (2018), by invoking the ‘problem of going dark’ – a claim that the investigative capabilities of law enforcement agencies are being undermined by the use of privacy-enhancing technologies. As such, the ‘problem of going dark’ draws a moral equivalence between privacy protection and methods of evading criminal investigations. This thesis examines the politics of privacy protection within the context of Australia’s national debates about digital surveillance powers, focusing on how privacy advocates contest this moral equivalence embedded within the ‘problem of going dark’. Specifically, this research involves a political discourse analysis (Laclau & Mouffe, 1985) of twenty-one (n=21) semi-structured interviews with Australian privacy advocates involved in campaigns opposing data retention and encryption access laws. Using the analytical constructs of signification, subjectivation, and identification, the research examines how privacy advocates challenge the ‘problem of going dark’, discursively position the subjects of surveillance laws, and mobilise through the construction of shared identities.

This thesis argues that Australian privacy advocates contest the moral equivalence embedded within the ‘problem of going dark’ via the articulation of a civic duty to disrupt the relations of domination that enable ‘morally arbitrary’ surveillance under the Data Retention Act (2015) and Encryption Access Act (2018). Overall, it is argued this property of ‘moral arbitrariness’ is important for differentiating between the protection of privacy rights and enabling the evasion of criminal investigations.

Key Words: Privacy Protection, Surveillance, Human Rights, Problem of Going Dark, Moral Arbitrariness, Metadata Retention, Encryption Access.


Table of Contents

Front Matter

Abstract 2

Table of Contents 3

List of Abbreviations 8

Statement of Original Authorship 9

Acknowledgements 10

Chapter One: Introduction 12

1.1. Introduction 12

1.2 Political and Legal Context 15

1.3 Research Aim and Questions 21

1.4 Chapter Outlines 22

Chapter Two: Literature Review 26

2.1 Chapter Introduction 26

2.2 Justifying Surveillance Powers 26

2.2.1 The Logics of Preventive Justice 27

2.2.2 Policy Problems and ‘Going Dark’ 30

2.2.3 Accepting the Surveillance Solution 36

2.2.3.1 Familiarity and Normalisation 38

2.2.3.2 Fear and Uncertainty 43

2.2.3.3 Surveillance Sub-Cultures 46

2.2.4 The Limits of the Surveillance Solution 49

2.3 The Symbiosis of Surveillance and Privacy 58

2.3.1 Privacy Litigation 59


2.3.2 Privacy Campaigns 65

2.3.3 Privacy Behaviours 71

2.3.4 Privacy-Enhancing Technologies 76

2.3.5 Counter-Surveillance 83

2.4 The Moral Right to Privacy 87

2.4.1 The Contested Properties of Privacy 88

2.4.2 Consequentialist Theories of Privacy 89

2.4.3 Deontological Theories of Privacy 95

2.4.4 The Limits of Liberal Privacy Rights 99

2.4.5 The Communitarian Critique of Privacy 102

2.4.6 The Civic Republican Theory of Privacy 105

2.4.7 The Problems with Illiberal Privacy 108

2.5 The Gap in the Literature 111

Chapter Three: Theoretical Framework and Research Methodology 112

3.1 Chapter Introduction 112

3.2 Political Discourse Analysis 112

3.2.1 Semi-Structured Interviews 115

3.2.2 Sampling and Participants 117

3.2.3 Coding and Analysis 119

3.2.4 Trustworthiness 122

3.3 Research Ethics 125

3.4 Limitations of Research 127


Chapter Four: Contesting the ‘Problem of Going Dark’ 129

4.1 Chapter Introduction 129

4.2 The Relational Properties of Privacy 130

4.2.1 The ‘Trading-Off’ of Privacy and Security 130

4.2.2 Privacy as ‘Necessary’ for Security 132

4.3 Privacy Advocacy and Liberal Discourse 136

4.3.1 Articulating the ‘Presumption of Privacy’ 137

4.3.2 Privacy as a Right to Non-Interference 140

4.3.2.1 Privacy and the Necessity Principle 140

4.3.2.2 Privacy and the Proportionality Principle 147

4.3.2.3 Privacy and the Accountability Principle 153

4.3.2.4 Privacy and the (Collapsed) Harm Principle 156

4.4 Ascribing Arbitrariness to Surveillance Powers 164

4.5 Chapter Conclusion 172

Chapter Five: The Subjects of Surveillance Law 174

5.1 Chapter Introduction 174

5.2 Positioning Citizens as Culpable Victims 175

5.2.1 Positioning Citizens as Technologically Ignorant 176

5.2.2 Positioning Citizens as Politically Apathetic 178

5.3 Positioning Political Elites as Antagonists 181

5.3.1 The Positioning of the Australian Government 182

5.3.1.1 Positioning the Government as Disingenuous 182

5.3.1.2 Positioning the Government as Incompetent 189

5.3.2 The Positioning of Law Enforcement Agencies 192


5.3.2.1 Positioning Law Enforcement as Manipulative 193

5.3.2.2 Deference to the Five Eyes Intelligence Community 196

5.3.2.3 Encumbered by the Demands of Preventive Justice 197

5.3.3 The Positioning of Technology Companies 203

5.3.3.1 Companies as Complicit in Data Commodification 203

5.3.3.2 Technology Companies and Corporate Responsibility 208

5.4 Surveillance Subjects and Relations of Power 213

5.4.1 Surveillance Powers, Citizens, and Political Elites 214

5.4.2 The Civic Corruption of Political Institutions 218

5.5 Chapter Conclusion 225

Chapter Six: Advocating Resistance to Surveillance Power 227

6.1 Chapter Introduction 227

6.2 Self-Identification as a ‘Privacy Advocate’ 228

6.2.1 Conflict within the Australian Privacy Movement 229

6.2.1.1 Interpersonal and Intergroup Conflict 229

6.2.1.2 The Struggle to Build Privacy Coalitions 236

6.2.2 Identity within the Australian Privacy Movement 239

6.2.2.1 Privacy Advocacy as a Shared Struggle 240

6.2.2.2 Recognition of Expertise in Law and Technology 242

6.2.2.3 Shared Experiences of Marginalisation 244

6.3 Constructing a Civic Duty to Advocate Resistance 248

6.3.1 Privacy Protection as Defending Human Rights 249

6.3.2 Privacy Protection and Citizen Cultivation 262

6.4 Chapter Conclusion 271


Chapter Seven: Conclusion 273

7.1. Introduction 273

7.2. Summary of the Thesis 273

7.3. Original Contributions of the Research 278

7.4 Avenues for Further Research 281

Reference List 284

Cases Cited 340

Legislation Cited 342

Appendices

Appendix A 344

Appendix B 347

Appendix C 353

Appendix D 354

Appendix E 356

Appendix F 357

Appendix G 358


List of Abbreviations

APF Australian Privacy Foundation

APPs Australian Privacy Principles

Data Retention Act Telecommunications (Interception and Access) Amendment (Data Retention) Act

Data Retention Bill Telecommunications (Interception and Access) Amendment (Data Retention) Bill

DRW Digital Rights Watch

EFA Electronic Frontiers Australia

Encryption Access Act Telecommunications and Other Legislation Amendment (Assistance and Access) Act

HUMINT Human Intelligence

NHMRC National Health and Medical Research Council

PJCIS Parliamentary Joint Committee on Intelligence and Security

QUT Queensland University of Technology

UHREC University Human Research Ethics Committee


Statement of Original Authorship

The work contained in this thesis has not been previously submitted to meet requirements of an award at this or any other higher education institution. To the best of my knowledge and belief, the thesis contains no material previously published or written by another person, except where due reference is made.

Michael Peter Wilson

Signed: QUT Verified Signature

Date: 03 April 2020

Acknowledgements

This thesis would not have been possible without the advice, assistance, and support from my colleagues, friends, and family over these past four years. The ideas discussed within these pages are just as much the product of conversations over coffees as they are the result of hours spent in isolation. Additionally, I note that the research was made possible because of financial support from the Research Training Program and the QUT Excellence Scholarship. To all of you who helped – thank you.

I am forever indebted to my supervisor, Erin O’Brien, who has nurtured my curiosity, listened to my frustrations, and supported me whenever the task of finishing the manuscript felt insurmountable. Thank you for all the evenings and weekends sacrificed reading drafts – I am truly lucky to have had a supervisor, colleague, and friend, as compassionate as you along with me for this journey. As per our promise, I’ll be the one buying the coffees from now on.

Thank you to everyone within the QUT Faculty of Law who helped challenge, refine, and shape my ideas during my candidature. I am particularly thankful to Russell Hogg and Belinda Carpenter for reading through several versions of the thesis. Your guidance and feedback challenged me to make my work better. I also wish to thank Nic Suzor, Monique Mann, Cassandra Cross, Helen Berents, and Angela Daly. All of you have influenced my thinking as a socio-legal researcher.

I would also like to thank all the privacy advocates who gave their time and shared their thoughts with me as part of this research. It is our conversations that have shaped the ideas within these pages more than anything else. Your dedication to the cause of protecting human rights has been a constant source of motivation.

Thank you to my friends who provided company over the past several years, despite how busy I always seemed. In particular, I wish to express thanks to Amanda Beem for the camaraderie during our candidatures. Our catch-ups every morning helped me both academically and personally. Similarly, thank you to Amando Azul for listening to me talk about my thesis over many lunches and while exploring Brisbane. A special thanks to David Jordan for the coffees, conversations, and support – your friendship has been invaluable during the highs and lows of research.

Writing a thesis is not like a nine-to-five job. It is a task that follows you around during evenings and on weekends. So, I owe thanks to my family for their patience and support during those times where I was working long hours. Thank you to Karen, Jeff, and Brown for the family celebrations, and specifically to Jarrod Brown for our rigorous debates and equally rigorous laughter. A special thanks to Melissa and Travis Youngman for the evenings spent catching up over dinner, for the delicious cakes provided, and for bringing my nephew, Nathan Youngman, into the world during this period of my life – it has truly been wonderful.

To my parents, Peter and Desley Wilson, thank you for your love and support, particularly during periods where the research was stressful. With the thesis now complete, I promise to cook you plenty of homemade pizza. In particular, I am indebted to you for cultivating (and putting up with) my sense of curiosity – you were my first educators and I am truly privileged to have you as my parents.

To my partner, Chandre Clark, thank you for coming along with me for this crazy ride. Over these past four years, this PhD has been as much a part of your life as it has been a part of mine. In particular, thank you for cheering me up after the setbacks and celebrating with me after meeting the milestones. You brighten each and every day of my life and I could not have done this without you by my side – I love you.

Michael Wilson 03 April 2020


Chapter One: Introduction

1.1. INTRODUCTION

In 2015, the Australian Government passed the Telecommunications (Interception and Access) Amendment (Data Retention) Act (2015; hereafter Data Retention Act), which grants law enforcement agencies warrantless access to metadata retained by telecommunications service providers.1 In response, Australian privacy advocates mobilised the Citizens, Not Suspects (GetUp!, 2014), Go Dark Against Data Retention (GetUp!, 2015; Electronic Frontiers Australia, 2015), and National Get a VPN Day (Digital Rights Watch, 2017) campaigns to promote the use of privacy-enhancing technologies. Subsequently, the Australian Government invoked the ‘problem of going dark’ to justify additional powers for accessing the contents of encrypted communications under the Telecommunications and Other Legislation Amendment (Assistance and Access) Act (2018; hereafter Encryption Access Act; Turnbull, 2014, p. 12560). This ‘problem of going dark’ involves an empirical claim that the investigative capabilities of law enforcement agencies are being undermined by the use of privacy-enhancing technologies (e.g. Comey & Yates, 2015; Weimann, 2016). As such, the ‘problem of going dark’ draws a moral equivalence between the protection of privacy rights and an enabling of criminals to evade police investigations (e.g. Joh, 2013, p. 997). This moral equivalence is at the core of Australia’s national debates about data retention and encryption access laws.

1 Metadata refers to data about communications, as opposed to the contents of communications. Under the Data Retention Act (2015, s. 187A), metadata is defined as information about the source, destination, date, time, duration, type, and location of digital communications. However, the distinction between metadata and data is contested. For example, metadata can be used to determine an individual’s physical location and social network. See Suzor, Pappalardo, and McIntosh (2017) for a full discussion about this distinction within the context of Australia’s national debate about metadata.
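To make this statutory distinction concrete, the following minimal Python sketch models a retained metadata record using the categories listed in s. 187A. It is illustrative only: the field names and example values are hypothetical, not a schema prescribed by the legislation or used by any provider.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class RetainedMetadata:
    # The s. 187A categories: data about a communication, not its contents.
    source: str            # e.g. originating phone number or IP address
    destination: str       # e.g. receiving phone number or IP address
    timestamp: datetime    # date and time of the communication
    duration_seconds: int  # duration (zero for an SMS)
    comm_type: str         # e.g. "SMS", "voice", "email"
    location: str          # e.g. identifier of the connected cell tower

# Note what is absent: the body of the message itself. Yet fields such as
# 'location' and 'destination' can still reveal a person's movements and
# social network, which is why the metadata/content distinction is contested.
record = RetainedMetadata(
    source="+61400000001",
    destination="+61400000002",
    timestamp=datetime(2017, 4, 13, 9, 30),
    duration_seconds=0,
    comm_type="SMS",
    location="cell-tower-3041",
)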

This thesis explores the politics of privacy protection, focusing on how privacy advocates contest the moral equivalence at the core of the ‘going dark’ argument used to justify the Data Retention Act (2015) and Encryption Access Act (2018). As such, this research builds upon existing knowledge about the moral right to privacy. The concept of privacy “frames the ways most ordinary people see the contemporary surveillance issue” and “animates civil society activism and resistance” (Bennett, 2011, pp. 493; 495). For example, liberal theories conceptualise ‘privacy’ as an individual right to non-interference from others (Warren & Brandeis, 1890; Frey, 2000). Such frameworks may be based upon deontological arguments about the inherent value of privacy (e.g. Bloustein, 1964) or consequentialist perspectives about its associated instrumental effects (e.g. Prosser, 1960). However, as privacy rights are necessarily justified “not just in and of themselves, but in terms of the consequences of their existence” (Waldron, 2003, p. 208), they are also susceptible to the consequentialist logics of ‘preventive justice’ that justify pre-emptive state interference on the basis of predicted harm (Zedner, 2007a, p. 187; Simone, 2009; Finnane & Donkin, 2013). This is also the logic at the core of the ‘problem of going dark’ – that the expansion of surveillance powers is necessary for law enforcement agencies to effectively detect, deter, and pre-empt criminal behaviour.

The limitations of liberal theories have prompted scholars to reconceptualise the moral right to privacy. Indeed, as liberal privacy rights are susceptible to the logics of preventive justice, critics have argued they are an “ally” of surveillance powers (Coll, 2014, p. 1250; Stalder, 2002, p. 120). As such, communitarian theorists have argued privacy should be conceptualised as a common good rather than an individual right (Etzioni, 2000, p. 902; Regan, 2002, p. 394). This perspective highlights how the moral right to privacy cannot be justified in isolation from considering the behaviours enabled within an associated ‘private’ sphere, such as domestic violence within the home (Allen, 2003, p. 42) or the use of encryption technologies to obstruct justice and frustrate criminal investigations (Etzioni, 2015, pp. 137-138). Yet, this approach cannot resolve the underlying issue of how to appropriately ‘balance’ competing interests in privacy and security (Mann, Daly, Wilson, & Suzor, 2018, pp. 376-377). In contrast, civic republican theorists have argued privacy is freedom from arbitrary surveillance powers (Newell, 2018, p. 2; Hoye & Monaghan, 2018, p. 354). This framework highlights how a moral right to privacy is dependent upon corresponding processes for establishing ‘legitimate’ surveillance. However, the critics of these illiberal theories argue they are similarly vulnerable to the logics of ‘moral responsibilisation’, where invasive surveillance powers are justified on the basis of majoritarian and populist support (van Houdt & Schinkel, 2013; Petersen & Tjalve, 2013). Overall, the moral right to privacy is, itself, a contested concept.

Building upon this literature about the moral right to privacy, this thesis analyses the politics of privacy protection within the context of Australia’s national debates about metadata retention and encryption access laws. Specifically, the research explores how privacy protection is conceptually differentiated from methods of criminal evasion, which is a moral equivalence drawn by the ‘problem of going dark’ (e.g. Joh, 2013, p. 997). To accomplish this, the research involves a political discourse analysis (Laclau & Mouffe, 1985) of twenty-one (n=21) semi-structured interviews with Australian privacy advocates. Using the analytical constructs of signification, subjectivation, and identification, this research analyses how privacy advocates contest the ‘problem of going dark’, discursively position the subjects of surveillance laws, and mobilise through the construction of shared identities. The findings are presented across three corresponding chapters. First, it is argued that Australian privacy advocates contest the ‘problem of going dark’ using both liberal and illiberal discourses, including by ascribing ‘moral arbitrariness’ to the surveillance powers established under the Data Retention Act (2015) and Encryption Access Act (2018). Second, it is argued this property of ‘moral arbitrariness’ is ascribed by discursively positioning the subjects of surveillance laws within relations of domination, thereby positioning citizens as incapable of conferring legitimate authority. Third, it is argued that privacy advocates are mobilised through a shared identity as marginalised subject-matter experts, who are compelled by a corresponding civic duty to advocate resistance to the morally arbitrary surveillance powers established under the Data Retention Act (2015) and Encryption Access Act (2018). Overall, it is argued this property of ‘moral arbitrariness’ is important for clarifying the distinction between advocating for the protection of privacy rights and enabling the evasion of criminal investigations.

1.2. POLITICAL AND LEGAL CONTEXT

The research presented within this thesis examines the national debate about surveillance powers within Australia between 2014 and 2018. The Telecommunications (Interception and Access) Amendment (Data Retention) Bill (2014; hereafter Data Retention Bill) was introduced to the House of Representatives of the Commonwealth of Australia on 30 October 2014. The Bill proposed a requirement for telecommunications service providers to retain customer metadata for at least twenty-four months and provide warrantless access to authorised law enforcement agencies.2 The category of ‘metadata’ is defined as information about the source, destination, date, time, duration, type, and location of digital communications (Data Retention Bill, 2014, s. 187A). In his second reading speech, the Minister for Communications (2014, p. 12560) explicitly invoked the ‘problem of going dark’ where he claimed the amendments were necessary to “prevent the further degradation of the investigative capabilities of Australia’s law enforcement and national security agencies.” This was the central argument used to politically justify Australia’s data retention laws.

2 The full list of authorised ‘criminal law-enforcement agencies’ is outlined within section 110A of the Telecommunications (Interception and Access) Act (1979). This list includes Australian federal and state police services, various anti-corruption bodies, and immigration authorities.

The public debate about the Data Retention Bill (2014) was impacted by the events of the Lindt Café Siege terrorist attack. At 9:44am on 15 December 2014, Man Haron Monis took eighteen people hostage in the Lindt Café located in Martin Place, Sydney. This included ten customers and eight members of staff. As part of the sixteen-hour siege that followed, Monis had the hostages hold the Islamic black flag3 to the café window, demanded to speak with Prime Minister Tony Abbott on live radio, and threatened to detonate four devices he allegedly placed around Sydney. The siege ended early the following morning, after Monis shot café manager Tori Johnson, prompting law enforcement officers to storm the café. During the operation, law enforcement officers shot and killed Man Haron Monis and one hostage, a prominent Sydney barrister named Katrina Dawson. The events were covered live across the world. Subsequently, Prime Minister Tony Abbott argued law enforcement could have prevented the Lindt Café Siege if they had access to Man Haron Monis’ metadata, and that Australia needed to “redraw the line” between privacy rights and community safety (ABC News, 2015). The Federal Police Commissioner Andrew Colvin cited how metadata was essential in 92% of counter-terrorist investigations and argued it was thus essential the Data Retention Bill (2014) be passed through the parliament (Benson, 2014). In his second reading speech, Attorney-General George Brandis (2014, p. 2247) argued “metadata is in fact used in every significant criminal, organised criminal, paedophile and terrorist investigation.” Again, these arguments reflect an empirical claim that robust surveillance powers are necessary to counteract the ‘problem of going dark’ impacting criminal investigations.

3 The flag was initially reported to be that of Islamic State. However, subsequent reports indicate it was the Islamic Black Standard flag. The one used by Monis was inscribed with a Shahada, a declaration of faith (ABC News, 2014).

In contrast, public figures articulated counterarguments by denying the empirical link between the inability of criminal justice agencies to access metadata and events like the Lindt Café Siege. For example, political commentator Waleed Aly (2014, para. 5) argued metadata retention could not prevent the seemingly irrational behaviour of Man Haron Monis because “there’s no metadata inside an apparently deranged mind.” Within the parliament, the most robust criticism of the Data Retention Bill (2014) was articulated by Senator Scott Ludlam (2015, pp. 2132-2133), who suggested Australians “circumvent mandatory data retention just by using overseas providers” because “there is abundant evidence that [data retention] will do nothing at all to keep people safe, or to reduce crime.” Similarly, journalist Quentin Dempster (2015, para. 2) proclaimed “this country’s entire communications industry will be turned into a surveillance and monitoring arm of at least 21 agencies of executive government”, while the Chief Information Security Officer of Telstra, Mike Burgess (2015, para. 3), argued that the metadata retention laws would create a “honey pot for hackers and criminals to target.” These counterarguments reflect competing empirical claims that surveillance powers are not necessary, and are potentially counter-productive, for effective investigations.

While these public debates were underway, the Parliamentary Joint Committee on Intelligence and Security (PJCIS) conducted an inquiry into the Data Retention Bill (2014), including a call for public submissions and hearings into the legislation. Altogether there were 204 public submissions, as well as three days of public hearings across December 2014 and January 2015. As part of their participation in this process, public advocacy groups GetUp! (2014) and Electronic Frontiers Australia (2014; hereafter EFA) ran the Citizens, Not Suspects campaign, encouraging citizens to register their disapproval of surveillance powers equating law-abiding citizens with criminals under investigation. As such, the campaign explicitly rejected the moral equivalence at the core of the Australian Government’s argument. Other organisations who made public submissions included the Australian Privacy Foundation (2014; hereafter APF) and Privacy International (2014). The PJCIS (2015) handed down an Advisory Report on 27 February 2015 with 39 recommendations, including clarifying the scope and timeframe of data to be retained (Recommendations 2-10) and miscellaneous proposals such as requiring metadata be encrypted for storage (Recommendation 37). Despite protests from privacy advocacy organisations, the Data Retention Bill (2014) passed both Houses of Parliament on 26 March 2015.

After becoming law, data retention remained a contentious issue. On 25 March 2015, GetUp! (2015) and EFA (2015) launched the Go Dark Against Data Retention campaign. The campaign promoted “tools and services everybody can use to circumvent data retention” with an aim to “show just how ineffective the invasive scheme will be” (GetUp! 2015, para. 3). This time around, the campaigns advocated for citizens to circumvent data capture processes entirely. Similarly, to mark the official commencement of the metadata retention program on 13 April 2017, Digital Rights Watch (2017, para. 2) promoted National Get a VPN Day to encourage “Australian citizens to educate themselves about the scale of this surveillance and take precautions accordingly” and thereby “do everything in our power to equip people to circumvent surveillance”. The campaign asked people to install a virtual private network, promote #GetaVPN on social media, and contact their elected representatives about their concerns. Evidently, Australian privacy advocates conducted these campaigns to promote strategies for citizens to ‘go dark’ to surveillance, and thereby illustrate their limited utility for criminal investigations.
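The logic of this circumvention strategy can be sketched in a few lines of Python. The sketch below is purely conceptual (the hostnames and helper function are hypothetical, not a working network stack): because traffic to every service is routed through an encrypted tunnel, the retained ‘destination’ record identifies only the VPN provider, not the service actually visited.

# Conceptual sketch: what a retaining ISP can record about a connection's
# destination, with and without a VPN tunnel in the path.
def isp_destination_record(visible_endpoint: str) -> dict:
    # The ISP can only log the endpoint it actually routes packets to.
    return {"destination": visible_endpoint}

true_destination = "news-site.example"
vpn_endpoint = "vpn-provider.example"

# Without a VPN, the retained record identifies the service visited.
record_without_vpn = isp_destination_record(true_destination)

# With a VPN, every connection resolves to the tunnel endpoint; the true
# destination travels inside the encrypted tunnel, beyond the ISP's view.
record_with_vpn = isp_destination_record(vpn_endpoint)

assert record_without_vpn["destination"] == "news-site.example"
assert record_with_vpn["destination"] == "vpn-provider.example"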

However, these privacy protection campaigns have not necessarily had desirable political consequences. In mid-2017, the Australian Government announced plans to introduce additional surveillance powers for accessing the contents of encrypted communications (Evershed, 2017; ABC News, 2017). Prime Minister Malcolm Turnbull (quoted in Johnson, 2017, para. 12) argued the proposal was necessary because “end-to-end encryption, all of that information, all of that data, that communication [is] effectively dark to the reach of the law.” Once again, this was an explicit invocation of the ‘problem of going dark’ – a claim the use of privacy-enhancing technologies was negatively impacting the investigative capabilities of law enforcement agencies. The Explanatory Memorandum (2018, p. 2) justifies the legislation on the following grounds:

“The increasing use of encryption has significantly degraded law enforcement and intelligence agencies’ ability to access communications and collect intelligence, conduct investigations into organised crime, terrorism, smuggling, sexual exploitation of children and other crimes, and detect intrusions into Australian computer networks.”

The legislation establishes “frameworks for voluntary and mandatory industry assistance to law enforcement and intelligence agencies in relation to encryption technologies” (, 2018, para 1). Thus, it is clear how the ‘problem of going dark’ was used to justify the Encryption Access Act (2018), with language such as ‘degradation’ used to describe the impact of privacy-enhancing technologies on the investigative capabilities of law enforcement agencies.
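What ‘going dark’ means at a technical level can be illustrated with a minimal end-to-end encryption sketch. The example below uses the PyNaCl library and is illustrative only, not the mechanism of any particular platform discussed in this thesis: a carrier relaying the ciphertext can retain metadata about the exchange, but cannot recover the contents without a private key held only on the endpoints.

# Minimal end-to-end encryption sketch using PyNaCl (illustrative only).
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# The sender encrypts to the recipient's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at the cafe at nine")

# An intermediary carrying the ciphertext sees routing metadata, not
# contents; only the recipient's private key can open the message.
receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at the cafe at nine"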

In response to the Telecommunications and Other Legislation Amendment (Assistance and Access) Bill (2018), Australian privacy advocates argued such legislation would coerce companies to facilitate access to communications either by building in ‘backdoors’ to encryption technologies or avoiding encryption altogether (EFA, 2017a; White, 2018). Prime Minister Malcolm Turnbull dismissed this line of argument on the grounds such critics espouse “a very libertarian culture, which is quite anti-government” (McGuirk, 2017, para. 9). Subsequently, the Telecommunications and Other Legislation Amendment (Assistance and Access) Act (2018; hereafter Encryption Access Act) was passed by both Houses of the Australian Parliament on 6 December 2018, establishing requirements to provide ‘technical assistance’ to law enforcement agencies for accessing the contents of communications without introducing ‘systemic weaknesses’ or ‘systemic vulnerabilities’ (Telecommunications Act, 1997, ss 317E, 317ZG). Indeed, recent developments indicate the Australian Government, acting together with the US and UK, is using the legislation to pressure social media companies to avoid introducing end-to-end encryption on their platforms (Karp, 2019, para. 4). Overall, the Encryption Access Act (2018), which was politically justified on the basis of the ‘problem of going dark’, establishes a moral and legal responsibility upon industry to avoid enabling the evasion of criminal investigations insofar as they protect their users’ privacy.


1.3. RESEARCH AIM AND QUESTIONS

Previous research has established how surveillance powers have been justified via the logics of preventive justice, where predicted harms are used to justify pre-emptive interference with individual privacy rights (e.g. Simone, 2009, p. 8; Zedner, 2007a, p. 174). Within the context of Australia’s national debates about surveillance legislation, the Government has invoked the ‘problem of going dark’ to draw a moral equivalence between advocating strategies of privacy protection and enabling the evasion of criminal investigations (e.g. Weimann, 2016, p. 196; Joh, 2013, p. 997). As a result, critics have argued that privacy is often the “ally” of surveillance powers (e.g. Coll, 2014, p. 1250; Stalder, 2002, p. 120). This political dynamic presents a strategic problem for privacy advocacy organisations, as evidenced within the context of Australia’s national debates about metadata retention and encryption access laws. Based upon this literature, this thesis is guided by the following research aim:

• To understand how privacy advocates contest the politics of privacy protection within the context of the Australian Government’s use of the ‘problem of going dark’ to justify expanding surveillance powers.

To operationalise this aim, this thesis is guided by the following research questions:

1. How do Australian privacy advocates differentiate the meaning of ‘privacy protection’ from methods of ‘criminal evasion’?

2. How are the subjects of the Data Retention Act (2015) and Encryption Access Act (2018) discursively positioned by Australian privacy advocates?

3. How are the identities of Australian privacy advocates constructed, and how do these political identities mobilise resistance to surveillance powers?


1.4. CHAPTER OUTLINES

This thesis examines the politics of privacy protection within the context of Australia’s national debates about surveillance powers, focusing on how privacy advocates contest the moral equivalence at the core of the ‘problem of going dark’. Specifically, across seven chapters, the thesis argues privacy advocates are mobilised by a civic duty to advocate resistance to the morally arbitrary surveillance powers established under the Data Retention Act (2015) and Encryption Access Act (2018). As such, the thesis argues this property of ‘moral arbitrariness’ is important for differentiating between the protection of privacy rights and enabling the evasion of criminal investigations.

Chapter Two (Literature Review) summarises the existing knowledge about privacy protection and surveillance powers. First, the chapter establishes how surveillance is a privileged policy solution based upon the logics of preventive justice. Second, it establishes the existence of a symbiosis between the expansion of surveillance powers and strategies of privacy protection. Third, the chapter establishes how the moral right to privacy is conceptualised and contested through liberal and illiberal discourses, including deontological, consequentialist, communitarian, and civic republican perspectives. Overall, the chapter identifies how methods of privacy protection occupy contested spaces as means for defending human rights and masking criminal behaviours. It is argued this presents a strategic problem and highlights the need for additional research examining how privacy advocates contest the moral equivalence at the core of the ‘problem of going dark’.

Chapter Three (Theoretical Framework and Research Methodology) outlines the project’s theoretical framework and research design. The research is informed by Ernesto Laclau and Chantal Mouffe’s (1985) framework for political discourse analysis and specifically the analytical constructs of signification, subjectivation, and identification. The chapter explains how data was collected via semi-structured interviews with twenty-one (n=21) Australian privacy advocates who were involved in campaigns opposing the Data Retention Act (2015) and Encryption Access Act (2018). The sample includes participants based within privacy-oriented, technology-oriented, and human rights advocacy organisations. Additionally, the chapter surveys the pragmatic aspects of political discourse analysis as a framework. This includes the development of a mixed inductive-deductive category frame to analyse how participants contest the ‘problem of going dark’, discursively position the subjects of surveillance laws, and mobilise through the construction of shared identities. The findings are presented across three corresponding chapters.

Chapter Four (Contesting the ‘Problem of Going Dark’) analyses how the discursive meaning of ‘privacy protection’ is articulated by focusing on how privacy advocates differentiate the practice from methods of ‘criminal evasion’ – a moral equivalence at the core of the ‘problem of going dark’. The chapter uses the analytical construct of signification and provides an answer to the first research question. First, the chapter argues the signified meaning of ‘privacy’ as a moral value is discursively linked with the concept of ‘security’. As such, the moral equivalence between ‘privacy protection’ and ‘criminal evasion’ can be considered a by-product of the relational properties of discourse. Second, the chapter argues privacy advocates are encumbered by a liberal framework for contesting the ‘problem of going dark’ that is vulnerable to the consequentialist logics of preventive justice that prioritise security interests. Third, the chapter argues that privacy advocates supplement these liberal discourses by ascribing ‘moral arbitrariness’ to Australian surveillance laws. Overall, the chapter argues this discursive property of ‘moral arbitrariness’ is important for differentiating between privacy advocacy and promoting methods of criminal evasion.


Chapter Five (The Subjects of Surveillance Laws) analyses the subjectivation strategies used by privacy advocates to discursively position the subjects of the Data Retention Act (2015) and Encryption Access Act (2018). The chapter employs the analytical construct of subjectivation and provides an answer to the second research question. The relevant subjects include ordinary citizens and political elites, the latter category consisting of the Australian Government, law enforcement agencies, and technology companies. The chapter demonstrates the complexity with which moral responsibility for the harms of surveillance is distributed among the subjects of metadata retention and encryption access laws. Specifically, it is argued that privacy advocates discursively position citizens as incapable of conferring non-arbitrary authority to surveillance powers due to their status as the subordinate party within relations of domination. In this sense, privacy advocates ascribe moral arbitrariness to the Data Retention Act (2015) and Encryption Access Act (2018). Overall, the chapter clarifies how the moral arbitrariness of Australia’s surveillance powers is dependent upon the discursive positioning of subjects within relations of domination.

Chapter Six (Advocating Resistance to Surveillance Power) analyses the associated mobilisation of Australian privacy advocates. The chapter uses the analytical construct of identification and provides an answer to the third research question. It is argued ‘privacy advocates’ transcend interpersonal conflict and intergroup disagreements through the construction of a shared identity as subject-matter experts who have been marginalised from political decision-making processes. Subsequently, it is argued this identity is mobilised via the construction of a civic duty to advocate resistance to morally arbitrary surveillance powers and thereby disrupt associated relations of domination by cultivating citizens capable of protecting their privacy rights. This civic duty is discursively linked with the identity of privacy advocates, who seek to reconcile their competing commitments to technocracy and democracy as sources of non-arbitrary power. Overall, the chapter argues that ‘privacy advocates’ are mobilised through a shared political identity and corresponding civic duty to advocate resistance to morally arbitrary surveillance powers.

Chapter Seven (Conclusion) provides a summary of the thesis, discusses the significance of the results, and identifies avenues for further research. The chapter argues that the signified meaning of ‘privacy protection’ is differentiated from ‘criminal evasion’ via the ascription of moral arbitrariness to Australia’s surveillance powers. Additionally, the chapter argues this is based upon the discursive positioning of the subjects of surveillance legislation within relations of domination and informs a corresponding civic duty to advocate strategies for protecting privacy rights. The conclusion also demonstrates the importance of the research findings for researchers interested in understanding how the normative status of surveillance powers influences public debates about privacy rights, the limits of liberalism as a framework for defining the scope of privacy rights, and the unavoidable tensions between democratic and technocratic sources of authority for establishing non-arbitrary forms of surveillance. Finally, it is argued the findings have applied significance for privacy advocates and criminal justice policymakers interested in understanding the underlying disagreements that entrench public debates about privacy rights and surveillance laws. Overall, the chapter provides a summary of the significance and limitations of the research, while also noting the potential for further research examining how discursive signifiers influence criminal justice debates.


Chapter Two: Literature Review

2.1. CHAPTER INTRODUCTION

The politics of privacy protection and surveillance powers are well-researched within the humanities and social sciences. This chapter presents a review of this literature across three identifiable themes. The first section examines how the logics of preventive justice are used to justify surveillance powers. The second section examines the political symbiosis between the expansion of surveillance powers and the development of strategies of privacy protection. The third section examines surveillance ethics and the moral right to privacy. Overall, the chapter identifies how strategies of privacy protection occupy contested spaces as methods of defending human rights and masking criminal behaviours. Based upon this observation, it is argued that there is a gap in the literature concerning how such contested definitions of privacy protection influence debates about surveillance powers. Specifically, further research is needed to understand how privacy advocates contest the moral equivalence between strategies of privacy protection and the evasion of criminal investigations at the core of the ‘going dark’ argument.

2.2. JUSTIFYING SURVEILLANCE POWERS

The expansion of surveillance powers is a privileged policy solution for social problems within liberal democracies. Indeed, Clive Norris (2007, p. 139) has characterised the tendency for governments to rely upon surveillance as a criminal justice response to social problems as the privileging of the surveillant solution. Thus, this section examines why surveillance occupies such a privileged position. Across four sub-sections, it is argued that: such surveillance solutions are discursively justified through the logics of preventive justice; there is (limited) evidence supporting the ‘problem of going dark’; citizens generally accept the surveillance solution due to different psychosocial processes; and empirical evaluations of the effectiveness of surveillance solutions are circumspect. Overall, the section argues that the expansion of surveillance powers is a privileged policy solution, despite empirical and ethical concerns about their effectiveness as tools for administering justice.

2.2.1. THE LOGICS OF PREVENTIVE JUSTICE

Within liberal democracies there is an observable trend where the surveillance powers of criminal justice and intelligence agencies are justified via the logics of preventive justice.4 The concept of preventive justice has been developed by Lucia Zedner (2007a, p. 174) to describe a consequentialist mode of reasoning about law and justice that “operates pre-emptively in the name of public protection”. Thus, the contemporary “pre-crime” society is characterised by the pre-emptive identification of security threats and associated crime control practices (Zedner, 2007b, p. 261). These are justified because “the possibility of forestalling risks competes with and even takes precedence over responding to wrongs done” (Zedner, 2007b, p. 262). The theory shares similarities with several ideas from across the social sciences. For example, the concept of ‘actuarialism’ has been used to describe how datasets are increasingly used to predict criminal behaviour and administer pre-emptive interventions (Harcourt, 2007, p. 16). Similarly, sociologist Ulrich Beck (1992, p. 26) developed the risk society thesis (Risikogesellschaft) to describe how the politics of knowledge informs the distribution of risk within society, arguing that powerful classes use scientific and technocratic discourses of ‘risk management’ to disguise the implicit normative values legitimating their positions of power over marginalised social groups. Additionally, Beck (1992, pp. 191-193) argues judgements about ‘risk’ are sites of conflict within parliamentary systems seeking to navigate tensions between subject-matter expertise and democratic decision-making processes. As such, the power to pre-emptively label a behaviour as ‘risky’, ‘criminal’, or ‘deviant’ is the power to politically justify methods of pre-emptive social control.

4 Lucia Zedner (2003; 2005; 2007a; 2007b; 2008) has developed a suite of concepts to describe the consequentialist logics of preventive justice, including the notions of “pre-crime” and “future law”. These refer to the overarching ethos of harm prevention that dominates criminal justice policymaking.

These logics of preventive justice are observable within the perennial “ticking time bomb” mentality that permeates politics within pre-crime societies (Zedner, 2008, p. 18). There is a constant need to identify and manage risk. For example, ‘risk’ is an “organising logic” for Australian law enforcement agencies, governing management of their public image, dissemination of information about the risks of crime, and beliefs about the need for the public to remain vigilant (Lee & McGovern, 2016, p. 1300). Similarly, in the governance of Australia’s border, officials make determinations to grant or deny entry by predicting the likelihood a visitor will breach their visa conditions (Wilson & Weber, 2008, p. 130). Domestically, evidence of wrongdoing is supplanted by pre-emptive intelligence reports, which justify the detention of citizens without the need to make an arrest (McCulloch & Pickering, 2010; 2009, p. 634). Additionally, well-established criminal justice principles have been overturned by legislation passed by several Australian jurisdictions enabling pre-emptive control orders to disrupt outlaw motorcycle gangs based solely on their risk of involvement in organised crime (Ayling, 2011, p. 253). Australian courts are increasingly reliant upon evidence gathered by surveillance technologies, including instances where visual or auditory information must be interpreted through expert opinion (Edmond & San Roque, 2013, p. 256). Finally, the increasingly punitive judicial decisions about access to bail or the type and length of custodial sentences may be attributed to ongoing anxieties about risk management (Baldry et al., 2011, pp. 29-33). The idea that risk can be measured, predicted, and prevented pervades these criminal justice practices.

The political legitimacy of the surveillance solution is based upon the logics of preventive justice. For example, following the attacks in New York and Washington on 11 September 2001, the United Kingdom had a public debate about potential metadata retention powers to be established under the Anti-Terrorism, Crime and Security Act (2001). The Blair Government characterised metadata retention as uncontroversial, akin to accessing information available on a telephone bill, and essential to the effective prevention of terrorism (Whiteley & Hosein, 2005, p. 862). Associated studies of political discourses about surveillance within the United Kingdom confirm that “crime prevention” and “risk management” dominated the debates (Barnard-Wills, 2011, p. 555). Similarly, in an analysis of 595 newspaper articles about Closed Circuit Television (CCTV) cameras in Canada from 1999 to 2005, Greenberg and Hier (2009, p. 472) observed that the thematic focus of 54% of articles was their preventive function.5 In a post-mortem analysis of media reporting about Australia’s metadata retention laws, Suzor, Pappalardo, and McIntosh (2017, pp. 9-11) observe how the complexity and ambiguity of the scheme enabled claims about effective prevention to escape critical scrutiny. Instead, there was general acceptance of “claims that data retention is necessary to maintain national security” (Suzor et al., 2017, p. 12). Overall, the logics of preventive justice underpin the legitimacy of government surveillance powers within liberal democracies.

5 To demonstrate the significance of this result, the next most common theme was detention/capture (34%), followed by collecting evidence (8%), and addressing community fear (4%).

The logics of preventive justice frame these public debates about surveillance powers within a consequentialist moral framework. Specifically, where surveillance programs are accepted as effective tools for protecting national security, privacy is relegated to being ‘balanced’ with, or ‘traded-off’ against, collective security interests. Indeed, Frank Bannister (2005, p. 72) has observed how “[a]t the centre of any discussion of citizen privacy vis a vis the state is the question of individual and societal risk.” It is the “interaction of these two discourses”, of privacy and security, that “is fundamental to the legal architectures that govern surveillance” within Australia (Murphy & Anderson, 2016, p. 121). For example, in an analysis of Australia’s telecommunications surveillance powers, Bronitt and Stellios (2005, p. 887) argue, “[i]n the relentless expansion of the powers of criminal investigation, individual rights are invariably ‘traded off’ against the community interests in preventing, detecting and prosecuting crime”. The resulting ‘balance’ between privacy and security legitimates surveillance as a ‘reasonable’ solution to the problems of crime or terrorism.

2.2.2. POLICY PROBLEMS AND ‘GOING DARK’

The logics of preventive justice are dependent upon the construction of antecedent policy problems that require solutions. Indeed, there is a fundamental dilemma for governments “in fulfilling their obligation to protect people from the harms of cybercrime” while also “differentiat[ing] between legitimate security imperatives and any ‘over-reach’ evident in the implementation of [cybercrime] legislation” (Bessant, 2012, p. 4). Thus, difficulty arises where privacy-enhancing technologies are observed to facilitate cybercrime, as claimed within the ‘problem of going dark’ (Mylan, 2016; Weimann, 2016). For example, criminal investigations into the distribution of illicit goods and services may be impeded by their presence on the ‘dark web’, a collection of websites not accessible via indexed searches (Weimann, 2016, p. 196). Research does suggest the dark web is a hub for child exploitation material and anonymous trading of illicit goods (O’Brien, 2014, pp. 247-248; Maras, 2014, p. 22). Indeed, there were 372 convictions for accessing digital child exploitation material under federal criminal law across the 2012-2013 financial year (Broadhurst, 2016, p. 9), which is not inclusive of additional prosecutions under state legislation.6 Similarly, cross-sectional analyses of transactions on the (now defunct) online black-market Silk Road7 confirm the use of such platforms for selling illicit substances, including a significant market presence within Australia (Phelps & Watt, 2014, pp. 266-267; Martin, 2014, p. 358). A similar study of the Silk Road 2 marketplace revealed that illicit substances constituted 19% of all advertised products (Dolliver, 2015, p. 1119). Evidently, the existence of encrypted ‘dark web’ marketplaces does present a policy problem for law enforcement and intelligence agencies.

6 Data about the number of state-based prosecutions for online child exploitation material are unclear. The administrative statistics do not indicate whether child exploitation material has been accessed or distributed via the internet. In contrast, the federal criminal offences are confined to accessing, possessing, and distributing child exploitation material using a carriage service (Criminal Code Act, 1995, ss. 474.20).

7 Silk Road was closed by the US Federal Bureau of Investigation in 2013, and its founder (Ross William Ulbricht) sentenced to life imprisonment without parole. See Maras (2014) for a full explanation of the investigation.

Attempts by governments to expand surveillance powers as policy responses to the misuse of privacy-enhancing technologies have been politically resisted by members of the crypto-anarchist and techno-libertarian movements. These groups use similar platforms on the dark web yet are not necessarily engaged in criminal activity, and their motivations are thus not analogous to those of cybercriminals. For example, in an ethnographic analysis, Coleman and Golub (2008, p. 267) argue the fundamental impetus of ‘hacker’ collectives is to preserve online communities free from government control. Instead, these groups adopt informal processes of social control to regulate the behaviour of community members. As internet governance scholar Nic Suzor (2010, p. 1824) argues, “the norms that develop in virtual communities are generally better than any law that could be imposed by the state,” affirming the sociological truism that informal social controls are the most effective strategies for regulating human behaviour (Goode, 2016, pp. 62-63). There is research to support this claim. In an analysis of dark web social networks, accessible via the Tor web browser, Robert Gehl (2016, p. 1227) observes how users develop communal and pro-active strategies for policing hosted content to prevent cybercriminals coopting the platform. Other communities occupy more contested spaces where they promote electronic civil disobedience. For example, members of crypto-market communities rationalise their behaviours as resistance to the moral vicissitude of taxation and as defending private property from redistribution by governments (Munksgaard & Demant, 2010, pp. 81-82; Karlstrøm, 2014, p. 32). Similarly, an anti-capitalist discourse prevails within file-sharing communities, whose members rationalise their behaviour by positioning copyright regimes as repressive (Beyer & McKelvey, 2015, pp. 891-892). Overall, the existence of these dark web communities highlights how the boundaries between privacy protection and criminal evasion can become blurred.

Perhaps the most serious alleged problem concerns the use of privacy-enhancing technologies to organise and coordinate terrorist activity. This claim has dominated debates about privacy protection and surveillance powers. For example, a content analysis of cybersecurity speeches delivered by Bill Clinton, George W. Bush, and Barack Obama reveals how the US Presidents discursively link the concepts of crime, terrorism, and cyberspace to politically justify digital surveillance powers (Hill & Marion, 2016, p. 171). Indeed, cyberspace is often condemned as a facilitator of terrorist communications, operational planning, and the dissemination of extremist propaganda (Brown & Korff, 2009, p. 120). For example, an affect analysis of dark web communities within the US and Middle East has documented a significant amount of violent and racist speech (Abbasi & Chen, 2007, p. 287).8 In a content analysis of dark web jihadist communities, Gabriel Weimann (2016, pp. 198-202) observes how terrorists advise members how to circumvent surveillance programs, donate anonymously using cryptocurrencies, and disseminate operational information. Similar behaviours have been observed on ‘surface web’ platforms such as Stormfront – an organising platform for neo-Nazis – where members rationalise and reinforce beliefs about white supremacy (Vysotsky & McCarthy, 2017, pp. 452, 459). Indeed, Stormfront has been directly implicated in the radicalisation of multiple perpetrators of white supremacist terrorist attacks (Bever, 2014, paras. 4-5). Finally, the growth of the misogynist online Incel9 communities has been linked to the radicalisation of the perpetrators of the 2014 Isla Vista killings and 2018 Toronto van attack (Baele, Brace, & Coan, 2019, p. 17). Overall, the presence of extremist communities on the internet presents another crime policy problem requiring a solution.

8 Note that the Middle Eastern platform accounted for almost twice as much measured hate and racist speech. However, it is important to note the methodological limitations of automated affect analysis, and to note regional differences in the use of the dark or surface web. The point the study offers is that the dark web is being used by radicalised groups globally.

9 Incel is a neologism for involuntary celibate, a phrase used to describe individuals who struggle to find romantic and sexual partners. The term was coined in 1993 by a Canadian university student (known as ‘Alana’) to describe and discuss her personal circumstances with like-minded individuals. The label was subsequently co-opted by online sub-cultural groups to describe a misogynistic worldview based upon pseudo-scientific evolutionary psychology, the strict enforcement of traditional gender roles, and the promotion of rape cultures. See Baele, Brace, and Coan (2019, pp. 8-17) for an in-depth analysis of the ‘incel’ worldview.

A related concern is how the ubiquity of cyberspace enables methods of cyber-fraud. For example, available data suggests 4% of the Australian population have been victims of computer-enabled fraud, totalling losses of over $4 billion (Broadhurst, 2016, p. 2). Furthermore, based upon data gathered by the Australian Cybercrime Online Reporting Network (ACORN, 2016), these financial crimes constituted about 47% of all reported cybercrime. During the financial year 2015-2016, there were 14,804 reported cyber-attacks targeting Australian businesses (Australian Cyber Security Centre, 2016, pp. 14-15). The Australian Government has also acknowledged an unauthorised foreign actor gained access to the Bureau of Meteorology network via infecting government servers with a remote access tool (Australian Cyber Security Centre, 2016, p. 11). It is also clear that malicious hackers justify such behaviour as forms of benevolence, insofar as they ‘expose’ security weaknesses or target ‘deserving’ victims who are technologically illiterate and have not taken steps to secure their communications (Young, Zhang, & Prybutok, 2007, pp. 285-286; Young & Zhang, 2007, pp. 44-45). Thus, new information technologies such as the internet have enabled and exacerbated crime problems requiring policy solutions.

Finally, there is a significant body of research examining how the anonymous and ubiquitous character of cyberspace enables interpersonal harms, including gender-based harassment, cyberbullying, and child grooming. International crime data suggests that women are disproportionately likely to experience online sexual harassment, cyberstalking, and image-based exploitation (Henry & Powell, 2016, p. 200). These behaviours are reinforced and enabled by online communities organised around cultures of toxic masculinity (Schmitz & Kazyak, 2016, pp. 6-10; Dragiewicz, 2011). For example, research by Banet-Weiser and Miltner (2016, p. 171) argues there is a problem of ‘networked misogyny’ on social media, highlighting how ‘men’s rights’ groups responded to the ‘#MasculinitySoFragile’ movement with threats of violence. Research suggests between 29% (of those aged 15 and older) and 52% (of those aged 13-14) of Australian schoolchildren have been victims of cyberbullying, while many victims also report participating as perpetrators (Price & Dalgleish, 2010, pp. 54-56). Similarly, a self-report survey of 1,004 students enrolled in Victorian schools found that 72% of children report experiencing “unwanted or unpleasant contact with a stranger” on social media services, and that none of the participants informed an adult about the event (de Zwart et al., 2011, p. 57). There is also an interaction between age and gender, with female adolescents almost twice as likely to report experiences of online sexual harassment (Bossler, Holt, & May, 2012, p. 518). Although this suggests a pandemic of cyber-harassment, scholars have argued there is disproportionate anxiety about cyberbullying due to inconsistent and broad research instruments (Cesaroni, Downing, & Alvi, 2012, p. 201). Regardless, there remains an apparent pattern of online victimisation according to gender and age. As with financially motivated cyber-fraud and cyber-terrorism, these experiences contribute to a holistic understanding of the policy problems created by communications technologies.

There is little doubt the internet has enabled new methods of causing harm. Yet, while it is difficult to determine the extent to which fear of crime and terrorism can be considered reasonable, it is also clear moral panics about ‘hackers’ have justified invasive digital surveillance powers (Wall, 2008, p. 45; Sandywell, 2006, pp. 42-44). In their analysis of how discourses of risk promote invasive criminal justice responses, Ericson and Haggerty (1997, p. 90) argue that “social rationalities of risk never settle the question of when it is reasonable to be worried or fearful” (emphasis added). The problem is that what precisely constitutes a ‘reasonable’ fear of cybercrime is contested. While harm ‘to’ cyberspace might be quantified via documented damage to infrastructure, harm ‘through’ cyberspace is influenced by processes of social construction (Deibert & Rohozinski, 2010, pp. 15-16). Furthermore, the targets of moral panics are, themselves, socially constructed. Indeed, as Thomas (2005, p. 599) surmises, the “moral nature of computer deviance is slightly more ambiguous and far more complex than we often recognise” and has evolved through political struggles between governments and ‘hackers’ since the 1980s.10 While this makes it difficult to differentiate between moral panics and reasonable concerns about cybercrime, it is also illustrative of how the surveillance solution is readily justified through the discursive logics of preventive justice, where the harms of cybercrime – such as cyber-fraud, cyber-terrorism, or cyber-harassment – are both perceived and experienced as warranting a policy response.

10 The term ‘hacker’ was popularised within science fiction during the 1970s (Wall, 2012, p. 7), although it originated among students of the computer science program at the Massachusetts Institute of Technology during the 1950s (Levy, 1984).

2.2.3. ACCEPTING THE SURVEILLANCE SOLUTION

The surveillance solution is controversial among privacy advocates. However, this sub-section examines why ordinary citizens often accept government proposals to expand surveillance powers. Research suggests a majority of the public support invasive surveillance powers where they are framed as effective counter-terrorism policies. For example, while 28% of Australians believe metadata retention laws unduly restrict civil liberties, 46% believe the laws do not go far enough (Australian National University, 2016, p. 7). Another study suggests 47% of Australians believe metadata retention is a violation of privacy, in comparison with only 16% who do not (Goggin et al., 2017, p. 2). Yet, when asked whether privacy violations are justified if they assist counter-terrorism operations, 57% expressed support for, and only 31% opposition to, metadata retention laws (Goggin et al., 2017, p. 2). Finally, another study examining privacy-related attitudes and behaviour suggests Australians are cautiously supportive of existing surveillance powers, with mild concerns about privacy not translating into privacy-protective behaviour (Kininmonth, Thompson, McGill, & Bunn, 2018, p. 6). This suggests public opinion about government surveillance is inconsistent and influenced by policy agenda-setting processes. Indeed, there is a “pendulum effect” where measurable public opinion about surveillance programs shifts according to high-profile security or privacy-related events (Murphy, 2014, p. 218). However, support for surveillance powers has also withstood significant controversy. For example, Reddick, Chatfield, and Jaramillo (2015, p. 138) found the moral outrage concerning the Snowden disclosures, observable on Twitter, was not replicated within public polling data, where 52.7% of Americans supported the National Security Agency’s (NSA) surveillance programs following the disclosures.

In the aftermath of Edward Snowden’s disclosures about classified programs operated by the US National Security Agency, sociologist Zygmunt Bauman and other surveillance scholars (2014) examined why citizens actively or passively comply with invasive surveillance powers. Their analysis suggested that compliance is attributable to three psychosocial processes: familiarity, fear, and fun (Bauman et al., 2014, p. 142). These processes each involve complex interactions between individual agency and social structure, and problematise the notion that citizens straightforwardly ‘accept’ or ‘reject’ the legitimacy of surveillance powers. Similarly, the concept of “digital citizenship” has been developed to describe how people “increasingly enter the sphere of civic activity – and develop agency – through digital media” (Hintz, Dencik, & Wahl-Jorgensen, 2017, p. 734). As such, the existence of surveillance programs may complicate the successful “enactment of citizenship” through a technologically-informed and politically-empowered populace (Hintz et al., 2017, pp. 735-736). Building upon these ideas, the following sub-sections explore why ‘citizens’ generally accept the surveillance solution. Indeed, while privacy advocates condemn uncritical arguments for surveillance such as ‘just trust us,’ ‘nothing to hide, nothing to fear,’ or ‘security trumps’ (Moore, 2011, p. 142), these sentiments capture something compelling for the citizens they convince.

2.2.3.1. FAMILIARITY AND NORMALISATION

This sub-section examines how citizens accept the surveillance solution because they experience surveillance as an aspect of their everyday lives. This corresponds with Bauman et al.’s (2014, p. 142) first psychosocial process – participation in the surveillance society is familiar and normalised. This claim is familiar to surveillance studies, where the thought of Michel Foucault (1977) has dominated theorising about the causes and consequences of surveillance power. Indeed, Foucault’s use of Jeremy Bentham’s metaphor of the Panopticon in Discipline and Punish (1977) prompted a wave of research examining how discourse legitimises forms of social control. This notion of ‘familiarity’ positions citizens as passively accepting the surveillance solution. To demonstrate this, the sub-section provides a brief genealogy of the normalisation of the surveillance solution.

The surveillance solution is considered normal because it is an established practice within the history of public administration. For example, the act of state authorities collecting and retaining information about citizens can be traced back to the use of papyrus-based paper by Egyptian bureaucrats (Lanier & Cooper, 2016, p. 94). There is evidence that early versions of a census were conducted in Ancient Rome (Hin, Conde, & Lenart, 2016, p. 50), with population data subsequently used to administer state affairs, enforce taxation requirements, and manage the Roman army (Fuhrmann, 2012). Similarly, the use of a census was important for public administration in Choson-era Korea and during the Ming Dynasty in China (Rhee, 2005, p. 25). The act of collecting information about the population for the purposes of social control is therefore intrinsically linked to the existence of large-scale human societies with a form of centralised government. Subsequently, as policing practices and technologies evolved, these early record-keeping procedures developed into formalised surveillance powers.

Coinciding with the formalisation of policing in the contemporary sense via the Metropolitan Police Act (1829) in the United Kingdom, new technologies extending natural surveillance capabilities were in development. Following improvements in photography during the early 19th Century, European police began experimenting with the technology as a means of collecting evidence for prosecutors (Jäger, 2001, p. 2). Subsequently, photography became important for police and the judiciary as a technology enabling covert phrenological and physiognomic analyses11 of those under criminal investigation (Hagins, 2013, p. 288). The permanent storage of an individual’s image thereby enabled social control across time and space, with photography laying the groundwork for the subsequent adoption of audio recording technology within criminal investigations. Indeed, in what Josh Lauer (2012, pp. 570-576) describes as a shift towards a “new evidential paradigm,” law enforcement started recording “photographs of the voice” via phonographic and telephonic technologies.

11 Phrenology refers to the pseudoscientific discipline that attempted to make causal observations about human character and behaviour from measurements of the shape of the human skull, based upon an underlying theory that skull shape correlates with brain structure. Physiognomy refers to the pseudoscientific discipline that attempted to make similar causal observations from the shape of the human body, and particularly from measurements of facial structures. The disciplines were highly influential within nineteenth century criminological thought and informed associated criminal justice and social welfare practices. See Rafter, Posick, and Rocque (2016) for a complete history of the biological theories of crime and their impact on criminal justice policy.

These early telecommunications were comparatively easy for service operators and other subscribers to eavesdrop upon (Lauer, 2012, p. 577). Politicians nevertheless developed a specific interest in encrypted communications during the early 20th Century (Davies, 1997, p. 15), including the SIGSALY12 system that enabled Winston Churchill and Franklin D. Roosevelt to communicate securely during the Second World War (Kahn, 1984, p. 72). However, because these methods of encrypting communications also posed a threat to the ability of law enforcement and intelligence agencies to intercept audio, as part of intelligence-gathering operations and criminal investigations, the technology was legally classified as a munition in order to regulate its development and dissemination (Levin, 1998, p. 532). Regardless, the use of audio surveillance technologies became standard practice for criminal investigations.

12 SIGSALY is a code name, rather than an acronym. The system added algorithm-generated noise to encrypt the contents of telecommunications. An encryption key was required to remove the noise and return the audio to its original state. See Kahn (1984) for a complete explanation of the process.
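To make the mechanics concrete, a minimal Python sketch of keyed-noise encryption in the spirit of SIGSALY is offered below. It is a loose analogy rather than a description of the actual system: SIGSALY quantised vocoder channel levels and drew its key from phonograph records of thermal noise (see footnote 12), whereas this sketch substitutes a seeded pseudo-random generator, and all values are illustrative.

import random

def keystream(seed: int, length: int) -> list:
    """Reproducible 'noise' values standing in for SIGSALY's one-time
    key records; both parties must hold the same seed (the key)."""
    rng = random.Random(seed)
    return [rng.randrange(6) for _ in range(length)]

def encrypt(levels: list, seed: int) -> list:
    """Add keyed noise to each quantised speech level, modulo 6."""
    return [(s + n) % 6 for s, n in zip(levels, keystream(seed, len(levels)))]

def decrypt(levels: list, seed: int) -> list:
    """Subtract the same keyed noise to recover the original levels."""
    return [(s - n) % 6 for s, n in zip(levels, keystream(seed, len(levels)))]

speech = [0, 3, 5, 2, 4, 1]             # toy quantised speech levels
ciphertext = encrypt(speech, seed=42)   # unintelligible without the key
assert decrypt(ciphertext, seed=42) == speech

Without the shared key material, the transmitted levels are statistically indistinguishable from noise – which is precisely why agencies unable to obtain the key were left unable to intercept the underlying audio.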

This history of using photographic and audial surveillance technologies within intelligence-gathering and criminal investigations has continued into the twenty-first century. In this sense, critics argue there has been a gradual ‘sleepwalk’ into a surveillance society (e.g. O’Brien, 2008, p. 25). Indeed, recent research suggests modern surveillance technologies are highly normalised. For example, interviews and focus groups with 31 participants representing the CCTV industry, and citizens regularly monitored by CCTV cameras, suggest video surveillance is a ‘banal good’ that is “mundane, commonplace, [and] scarcely worthy of comment” (Goold, Loader, & Thumala, 2013, p. 978). Similarly, the use of body-worn cameras by law enforcement is broadly accepted by detainees as aiding accountability, with only limited concern for privacy implications (Lee, Taylor, & Willis, 2019, p. 187; Taylor & Lee, 2019). Finally, ethnographic research highlights the lengths to which law enforcement officers will go in order to “stage” potential crime scenes for audio-visual surveillance, including the creation of ‘trap rooms,’ disguising CCTV cameras as streetlamps, or recording staged deliveries of intercepted drugs to secure evidence for prosecution (Loftus & Goold, 2012, pp. 280-283). The ethical implications of these techniques are rarely interrogated. Instead, surveillance programs, as “visible manifestation[s] of the state’s concern about crime and security” (Wood & Webster, 2009, p. 263), tend to be normalised through their care functions rather than their control functions. As a result, the subjects of surveillance – including both law enforcement and members of the public – tend to accept the “surveillance consensus” in the absence of effective or organised dissent (Hempel & Topfer, 2009, pp. 173-174). Overall, surveillance is familiar because of its normalised status as a ‘banal good’ for administering justice.

Another reason the surveillance solution is normalised is via the gradual introduction of surveillance technologies into other spheres of social life. These sites of “everyday surveillance” are often integrated with the “mundane and innocuous policing practices” described above (O’Neill & Loftus, 2013, p. 437). For example, there has been a proliferation of surveillance within the education system. Schools are often staffed by police officers and administrators who monitor staff and students via video cameras, electronic access controls, and digital surveillance of internet browsing behaviours (Kupchick & Monahan, 2006, p. 624). Importantly, these practices normalise the experience of surveillance during childhood and adolescence, where student behaviour is shaped through processes of formal and informal social control. An observable ‘school-to-prison pipeline’ in the United States is a result of the increasingly disciplinary function of educators, who are expected to police patterns of non-normative student behaviour (Raible & Irizarry, 2010, p. 1198). Indeed, schools with assigned law enforcement officers exhibit 12.7% more referrals of students to the justice system for non-serious violent offences (Na & Gottfredson, 2013, p. 640). Demographically, the brunt of this social control is experienced by non-white students subjected to racialised expectations of normative behaviour (Raible & Irizarry, 2010, pp. 1199-1200). Similarly, male students are more likely to be subject to disciplinary action for disruptive behaviours (Bryan, Day-Vines, Griffin, & Moore-Thomas, 2012, p. 184), with qualitative research suggesting constructions of hegemonic masculinity structure educator attitudes about how to appropriately discipline male and female students (Martino & Frank, 2006, p. 26). Overall, these disciplinary functions of the education system contribute to the normalisation of surveillance during a critical period for human psychosocial development.

This normalisation accelerates once students graduate, enter the workforce, and begin regularly interacting with financial markets. In an overview of this process, Kirstie Ball (2010, p. 91) argues there are three functions of surveillance within the workplace: personal data for human resource management, biometrics for access controls, and behaviour monitoring for performance reviews. These goals are pursued via data retention and real-time surveillance technologies and are broadly accepted by corporate managers and employees alike. For example, research suggests discourses of ‘new public management’ are used to justify intense workplace surveillance of academics and professionals, careers previously characterised by high degrees of independence (Lorenz, 2012, p. 604). Through discourses of output ‘efficiency’ and personal ‘accountability’, employees are subjected to increasing oversight and metric-based performance indicators (Lorenz, 2012, p. 617). Interviews with 154 managers and employees based within the United States suggest employees negotiate the care and control functions of workplace surveillance programs, with 85% of interviewees holding overall accepting attitudes (Allen et al., 2006, p. 190). Importantly, these employees are also the consumers of goods and services produced within market-organised societies. As surveillance technologies evolve, there have been developments in what Shoshana Zuboff (2015, p. 77) refers to as “surveillance capitalism.” Under the market conditions of surveillance capitalism, corporations collect information “to predict and modify human behaviour as a means to produce revenue and market control” (Zuboff, 2015, p. 75). For example, the financial models of social media platforms – and many internet-based companies – involve the use of deep packet inspection to harvest browsing data, automate targeted advertising, and resell personal information to third parties (Fuchs, 2013, pp. 1339-1341). Despite a small decline in sales following the Snowden disclosures, commercial surveillance technologies, like deep packet inspection, have continued to expand their market presence (Patsakis, Charemis, Papageorgiou, Mermigas, & Pirounias, 2018, p. 204). Finally, the increasing ubiquity of social media platforms, which trade in personal information, leads users to ‘trade off’ privacy concerns in favour of immediate conveniences (Best, 2010, pp. 17-19). As a result, ‘citizens’ of liberal democracies are socialised into surveillance capitalism, both as employees under the watchful eye of employers and as targeted consumers of the goods and services they produce.
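For readers unfamiliar with deep packet inspection, the minimal Python sketch below illustrates the kind of inspection involved: scanning a captured packet payload for an HTTP Host header, the sort of browsing metadata such systems harvest at scale. The payloads are hypothetical, and production systems operate on live traffic with far more sophisticated protocol parsers.

def extract_http_host(payload: bytes):
    """Return the Host header from a plaintext HTTP payload, or None."""
    try:
        text = payload.decode("ascii")
    except UnicodeDecodeError:
        return None  # binary or encrypted traffic is opaque to this check
    for line in text.split("\r\n"):
        if line.lower().startswith("host:"):
            return line.split(":", 1)[1].strip()
    return None

# A plaintext request leaks the destination site to any on-path observer...
request = b"GET /news HTTP/1.1\r\nHost: example.org\r\nAccept: */*\r\n\r\n"
print(extract_http_host(request))                          # -> example.org

# ...whereas encrypted application data yields nothing at this layer.
print(extract_http_host(b"\x17\x03\x03\x00\x20\x8f\x01"))  # -> None

The contrast between the two calls also previews the commercial stake in later encryption debates: widespread transport encryption blinds exactly this form of harvesting.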

2.2.3.2. FEAR AND UNCERTAINTY

This sub-section examines how the surveillance solution is accepted due to experiences of fear and uncertainty about the future. This corresponds with Bauman et al.’s (2014, p. 142) second psychosocial process – participation in the surveillance society is motivated by fear of the unknowable. This notion of ‘fear’ sympathetically positions ‘citizens’ as uncritically accepting the surveillance solution. In their book Policing the Risk Society, Richard Ericson and Kevin Haggerty (1997, p. 86) argue risk management logics are based upon a human desire to achieve “control of the irrational by rational means.” Because the persistence of irrational events outside human control is disconcerting, the result is that “everyone and everything is to be made knowable through surveillance mechanisms” (Ericson & Haggerty, 1997, p. 42). The impetus for exerting this control trends towards expansion because of the discrepancy between the desire for control and the limitations of crime prevention (Ericson & Haggerty, 1997, p. 90). As such, Policing the Risk Society highlights how surveillance, and preventive justice broadly, is often an earnest attempt to exert control over unpredictable events. Similarly, Murray Lee (2001, p. 481) has argued that the ‘fear of crime’ has been naturalised as a pre-discursive object through the disciplinary actions of governments and the academic community. In this sense, ‘fear’ is not an essential property of the human condition, and is instead a property cultivated by subjects, perhaps unintentionally, for other ends.

Social scientists have observed a correlation between experiences of fear and increased support for ‘law and order’ and counter-terrorism programs. As Walklate and Mythen (2008, p. 220) argue, “fear is a multidimensional entity through which individuals make sense of and order their experience”, with socially interactive processes informing the “fear of crime, fear of terrorism or fear of spiders”. For example, following the murder of Jill Meagher in the Melbourne suburb of Brunswick on 22 September 2012,13 there were prominent calls on traditional and social media for increased preventive policing strategies and an expansion of CCTV programs (Milivojevic & McGovern, 2014, pp. 32-33). Quasi-experimental research also suggests support for invasive counter-terrorism policies is better predicted by perceptions of how much ‘the public’ fears terrorism, rather than individual experiences of fear (Joslyn & Haider-Markel, 2007, p. 313). The result is that the social value of privacy decreases for individuals in favour of collective security interests (Rauhofer, 2008, p. 186). Furthermore, the effects of surveillance programs themselves on fear are not straightforward. While the physical presence of counter-terrorism programs14 corresponds with immediate feelings of safety (Dalgaard-Nielsen, Laisen, & Wandorf, 2016, pp. 705-706), increased visibility also exacerbates underlying anxieties about crime (Haggerty & Gazso, 2005, p. 183). Fear is also mediated through self-reflection. In research based on interviews and focus groups with 60 members of the Muslim and ‘broader’ Australian community, Anne Aly and Lelia Green (2010, pp. 272-274) argue people mediate their fear due to concerns they are being manipulated for political purposes. Overall, it is this persistent unpredictability of crime that renders the surveillance solution appealing, even where the ‘risk’ of harm is recognised as low.

13 The murder of Jill Meagher was a widely reported event in Australia, and particularly within Melbourne. See Milivojevic and McGovern (2014) for a summary and analysis of the media coverage.
14 This refers to physical manifestations of surveillance, including CCTV cameras and uniformed guards.

In the years following the terrorist attacks of 11 September 2001, there has been a plethora of research examining public perceptions of risk and associated levels of public support for the surveillance solution. Longitudinal polling suggests the willingness of Americans to support surveillance powers, at the expense of privacy rights, is mediated by perceptions that “the security gain is real” (Lewis, 2005, p. 25). There is no room for uncertainty. Similarly, a study examining perceptions of data mining and email interception suggests an inverse relationship between perceptions of effectiveness and intrusiveness (Sanquist, Mahy, & Morris, 2008, p. 1130). Indeed, the more effective a program is perceived to be, the less it is adversely judged as intrusive. Interestingly, the study found digital surveillance was not uniformly judged as an effective means for combatting terrorism (Sanquist et al., 2008, p. 1125). Similar research using a between-groups factorial design observed how manipulating exposure to information about the effectiveness of counter-terrorism programs influences levels of public support (Garcia & Geva, 2016, p. 39). Specifically, where people perceive the effectiveness of counter-terrorism policies with ‘certainty’, they feel comfortable making the trade-off between security and privacy. Overall, this highlights how both ‘fear’ and ‘uncertainty’ influence public acceptance of the surveillance solution.

2.2.3.3. SURVEILLANCE SUB-CULTURES

This sub-section examines how surveillance is accepted by the public due to its prominence and role within culture. It corresponds with Bauman et al.’s (2014, p. 142) third psychosocial process – participation in the surveillance society can be ‘fun’ and rewarding. This notion of ‘fun’ positions ‘citizens’ as actively accepting, or at least tolerating, the surveillance solution in pursuit of other ends. For example, one particularly eye-catching research article about how people value their privacy is titled Your Browsing Behaviour for a Big Mac (Carrascal et al., 2013). The study asked 168 people how much money they would demand in exchange for either personal information or their internet browsing history (Carrascal et al., 2013, p. 189). The researchers found the median amount necessary to purchase someone’s browsing history was €7 (~$11 AUD), while €25 (~$40 AUD) was the average nominated price for personal information such as age, gender, home address, or economic status (Carrascal et al., 2013, p. 190). The researchers therefore argue people value their privacy as “equivalent to a Big Mac meal in Spain, circa 2011” (Carrascal et al., 2013, p. 190). Such behavioural economic research likely lacks ecological validity, yet the study still highlights how little people may value their privacy. In a review of the research literature examining this ‘privacy paradox,’ Spyros Kokolakis (2017, pp. 128-130) argues the under-valuation of privacy can be attributed to the prioritisation of immediate benefits of, and a social environment that encourages, self-disclosure. Similarly, the concept of ‘participatory surveillance’ has been developed by Anders Albrechtslund (2008) and danah boyd (2011), who argue the act of disclosing information can be experienced as empowering. As Albrechtslund and Lauritsen (2013, p. 312) argue, the concept of ‘participation’ in everyday surveillance is rather complex, ranging from the “obvious, voluntary and enjoyable to something inconspicuous, involuntary and feared”.

Social scientists have examined decision-making about online information disclosure, observing a pattern of preferencing immediate benefits over privacy protection. For example, a study involving focus groups with 13 members of ‘Generation Y’ suggests people prioritise interpersonal interaction on social media and the convenience of internet-based commerce (Lee & Cook, 2015, p. 683). The results suggest that while “participants are aware of online surveillance risks from their exposure and visibility, and of some of the technical infrastructure of the internet, the desire for immediacy trumps all” (Lee & Cook, 2015, p. 685). Similarly, an analysis of interviews with 10 ‘regular’ internet users based in Canada suggests users do not intuitively understand their decisions to use digital communications as privacy-related matters (Viseu, Clement, & Aspinall, 2004, pp. 100-101). This is because they have “nothing to hide, nothing to fear” where surveillance is distant and the benefits are immediate (Viseu et al., 2004, p. 106). In interviews with 21 regular users of the (now-defunct) Google location-based social media platform Dodgeball, Lee Humphreys (2011, p. 583) argues the immediate benefits were more important than privacy concerns because users enjoyed the ability to choose when to publicise their location while using the service. This highlights how communication technologies problematise the divide between the ‘private’ and ‘public’ spheres of life, a dichotomy that is blurred when people use social media platforms, such as Facebook, to publicise their private lives (West, Lewis, & Currie, 2009, p. 624). Instead, as West et al.’s (2009, p. 618) interview data suggest, the value of online privacy is more closely linked with the cultivation of public images. Thus, people enjoy the immediate benefits of social media if they control what information others may access.

The acceptance of the surveillance solution is also influenced by how individuals are socialised within a culture permeated by norms of voyeurism. In this vein, Thomas Mathiesen (1997, p. 218) proposes the concept of the ‘synopticon’ – a society where the ‘many’ watch, and model their behaviour after, the ‘few’. The theory suggests that media enables elites to broadcast their image to publics, who internalise associated norms. Indeed, there is evidence that media normalises the enjoyment of voyeurism, dating back to Alfred Hitchcock’s (1954) film Rear Window, in which a man with a broken leg spies on his neighbourhood using binoculars (Zimmer, 2011, p. 435). This enjoyment of voyeurism is also found within media such as Candid Camera and procedural crime dramas such as The Wire (Tebbutt, 2011, p. 245). Recently, the popularity of ‘reality television’ has created a new model of ‘celebrity’ as subjects of constant voyeuristic gazes (Turner, 2006, p. 160). Importantly, the internalisation of these voyeuristic norms is then reflected in ‘social surveillance’ behaviours (Marwick, 2012, p. 378). For example, viewers of ‘reality television’ are observed to reproduce similar behaviours on social media platforms (Stefanone, Lackaff, & Rosen, 2010, p. 522). Within public spaces, citizens ‘hijack’ mobile phones to conduct surveillance and collect evidence used during law enforcement investigations (Koskela, 2011, pp. 273-276). Similarly, there are websites where people can ‘crowdsource’ CCTV surveillance or consume crime-based programs and provide information to investigators (Trottier, 2014, p. 613). This has enabled what Daniel Trottier (2017, p. 62) calls ‘digital vigilantism’, where communities on platforms such as ‘Reddit’ and ‘4Chan’ aggregate resources to analyse data, assist law enforcement, or correct perceived injustices.15 Overall, it is clear surveillance can be ‘fun’, both for individuals constructing their online identities and for people who have been socialised within a culture permeated by voyeuristic norms.

15 For example, one of the highest profile instances of this phenomenon involved an erroneous attempt by users of Reddit (https://www.reddit.com) to crowdsource the identification of the Boston Bombers. See Suran and Kilgo (2017, p. 1039) for a discussion of the role of Reddit during and after the attacks.

2.2.4. THE LIMITS OF THE SURVEILLANCE SOLUTION

Despite the prominence of the surveillance solution, social analysts have documented empirical and ethical issues with such practices. This is despite the fact that surveillance technologies are often justified on the premise that ‘pre-crime science’ is capable of “precision, certitude and scientific neutrality” (McCulloch & Wilson, 2015, p. 76). Thus, this sub-section examines the ethical problems with, and empirical limitations of, the surveillance solution. Specifically, the core problem is that surveillance is not an apolitical process and potentially reinforces existing social inequalities. This is an issue captured by sociologist David Lyon’s (2003, p. 13) concept of “social sorting” – if the purpose of surveillance is to prevent harm and manage risk, surveillance programs must necessarily categorise and classify people. Therefore, the sub-section first examines some of the unjust patterns of classification that existing surveillance powers reproduce and concludes by considering some fundamental methodological difficulties of prediction in the social and behavioural sciences. Overall, the sub-section demonstrates how the surveillance solution is unavoidably political and routinely discriminatory.

One of the most prominent patterns observed within the literature is how surveillance programs reproduce categories of risk that are created by socio-economic inequalities. For example, a study of the everyday use of surveillance by Canadian law enforcement highlights how “technologies are often used by patrol officers as a means to legitimate the policing of populations that are already socially profiled” (Sanders & Hannem, 2012, pp. 407-408). Specifically, the researchers argue that despite the use of ‘decontextualized’ language referring to behaviour rather than attributes, individuals are profiled as ‘risky’ on the basis of categories such as homelessness and poverty (Sanders & Hannem, 2012, p. 404). This is also evident within Australia, where significant resources are expended to differentiate between the ‘worthy’ and ‘unworthy’ poor to determine access to public housing and social welfare (Henman & Marston, 2008, p. 196).16 These invasive surveillance powers also enable Australian criminal justice agencies to police welfare recipients for potential fraud at a higher rate than other populations (Marriott, 2013, pp. 409-411). As Kaaryn Gustafson (2009, p. 643) argues, this causes a “criminalisation of poverty” that reproduces categories of risk. Importantly, this pattern of welfare surveillance also reproduces gendered inequalities, as single mothers are over-represented among welfare recipients (Maki, 2011, pp. 57-58). The result is a calculation of ‘risk’ by welfare surveillance that is predicated upon pre-existing socio-economic inequalities of class and gender.

16 Additionally, according to Henman and Marston (2008, pp. 197-198), despite servicing a smaller customer population Centrelink conducts almost twice as many compliance reviews as the Australian Taxation Office.

Since 11 September 2001 and the advent of the ‘War on Terror’,17 there has been a significant expansion in surveillance powers as a means to prevent threats to national security (Lyon, 2015). These practices reproduce perceptions about the relationship between race and violence. For example, the United States uses data mining to aid the development of criminal profiles used for surveillance at the border, incorporating variables such as country-of-origin, religious affiliation, and ethnicity in risk calculations (Guzik, 2009, p. 11). This has the effect of disproportionately targeting visitors from the Middle East or North Africa for additional questioning or placement on a ‘no fly’ list (Guzik, 2009, p. 11). Reliance on risk assessments for terrorism within the United States also increases scrutiny of financial records, international wire transfers, and the ability to produce identification documents, disproportionately impacting migrant workers from Mexico and South America (Amoore & De Goede, 2005, pp. 154-166). The impact is that law-abiding residents are unduly subjected to surveillance. Similarly, migrants from South Asia and the Middle East living in the United Kingdom are disproportionately subjected to everyday surveillance on the basis of markers of their ethnicity, such as skin tone and dress (Patel, 2012, pp. 223-230). According to research by Shamila Ahmed (2015, p. 555), the effects of this heightened degree of surveillance are “feelings of fear, insecurity, vulnerability and helplessness” among the British Muslim population. The end result is that using the social markers of race, class, and gender as variables to calculate ‘risk’ in pursuit of preventive justice subjects individuals to an unfair distribution of surveillance powers.

17 The 11 September 2001 attacks refer to the hijacking of four passenger airliners by 19 al-Qaeda terrorists. Two of the planes were flown into the towers of New York City’s World Trade Center and one into the Pentagon, the headquarters of the United States Department of Defense, while the fourth crashed in a field in Pennsylvania. The attacks marked the beginning of the War on Terror, a label ascribed to the subsequent multi-decade conflict between states and terrorist groups occurring primarily within the Middle East. President George W. Bush used the phrases “war on terrorism” and “war on terror” in the immediate aftermath of the attacks to justify expanding surveillance powers via the PATRIOT Act (2001). See Kellner (2007) and Reese and Lewis (2009) for further summaries of the etymology of the phrase.

The arbitrariness of such distributions of surveillance is exacerbated by the questionable effectiveness of predictive behavioural science. As Kevin Miller (2014, p. 118) argues, “the successes have been troublingly hard to locate and quantify” among “total surveillance, big data, and predictive crime technology.” Part of the problem is the sheer volume of data available. One estimate suggests that, even with an unrealistically high 99% accuracy rate for predicting terrorism, a data retention system analysing one trillion18 data points a day would still produce one billion false positives (Schneier, 2006, paras. 14-15). These would need to be examined by human analysts to determine whether they should be followed up. Alternatively, Timme Munk (2017) argues the scientific validity of anti-terrorism analytics is limited by the statistical base rate fallacy – there is insufficient data to accurately predict behaviour. There are simply too many variables and insufficient samples to accurately model causative relationships, leading to over-generalisations that cause “100,000 false positives for every real terrorist that the algorithm finds” (Munk, 2017, para. 29). These limitations are apparent within the experimental literature. For example, Wang, Gerber, and Brown (2012, p. 237) observe how localised semantic analysis of Twitter data can predict the occurrence of hit-and-run incidents above the rate of chance. However, even interpreting the model generously, predicting 80% of hit-and-runs in a mid-sized US city still involves an associated 40% false positive rate (Wang et al., 2012, p. 7). Similarly, crime mapping models integrating demographic information and historical crime data predict macro-level crime rates19 with only 70% accuracy (Bogomolov et al., 2014, p. 8; Simmons, 2016). Thus, there are fundamental methodological issues that limit the validity of behavioural prediction.

18 This is a highly conservative estimate of the daily production of telecommunications metadata within the United States. For example, Schneier (2006) notes that one trillion data points would require each American adult to create only 10 data points a day, which is notably less than the amount of active and passive data routinely produced by each individual during everyday use of a mobile phone and the internet.
19 Note that Bogomolov et al. (2014, p. 8) further qualify the accuracy of this model as predicting “whether a given geographical area of a large European metropolis will have high or low crime levels in the next month.”
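The base-rate arithmetic driving these estimates can be made explicit. The short Python sketch below is illustrative only: the number of ‘real’ threats is hypothetical, and the 0.1% false-alarm rate is one possible reading of Schneier’s ‘99% accurate’ system, chosen because it reproduces the one-billion figure quoted above (a 1% rate would yield ten billion).

def screening_outcomes(events: int, real_threats: int,
                       false_positive_rate: float, hit_rate: float = 1.0):
    """Expected (false positives, true positives) for a screening system."""
    false_positives = (events - real_threats) * false_positive_rate
    true_positives = real_threats * hit_rate
    return false_positives, true_positives

# One trillion data points a day, ten of which (hypothetically) relate
# to genuine plots, screened at a 0.1% false-alarm rate:
fp, tp = screening_outcomes(10**12, 10, 0.001)
print(f"{fp:,.0f} false positives for {tp:,.0f} genuine hits per day")
# -> roughly 1,000,000,000 false positives for 10 genuine hits

However the parameters are set, the asymmetry is structural: because genuine threats are vanishingly rare relative to the data scanned, even a near-perfect classifier buries each true positive under millions of false ones – Munk’s base rate fallacy in numerical form.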

In addition to the problem of having too much data, there are methodological problems with predicting human behaviour based on selective variables. This is true even where law enforcement has access to detailed historical and demographic information. For example, the predictive validity of risk assessment tools used within corrective institutions is generally low (Foellmi, Rosenfeld, & Galietta, 2016, p. 608; Kopkin, Brodsky, & DeMatteo, 2017). The inaccuracies occur because causal relationships observed within social scientific research trend towards exaggeration, as they are based on samples of known offenders rather than the total populations of actual offenders (Miller, 2014, p. 123). And, within a context where social injustice leads to different rates of arrest for different social groups, these sorts of actuarial instruments are discriminatory. For example, a study of automated risk assessment tools used by the US District Court found that African Americans receive harsher prison sentences based on algorithmic predictions about their risk of recidivism (Angwin et al., 2016). While the specific tool does not incorporate race as a variable, the tendency of the algorithm to predict twice as many ‘high risk’ African American offenders is directly based upon their historical overrepresentation within the US justice system (Dressel & Farid, 2018, p. 3; Flores, Bechtel, & Lowenkamp, 2017). Overall, the reliance of behavioural science on imperfect datasets causes empirical and ethical issues for the surveillance solution as a tool of preventive justice.
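Miller’s sampling critique can be illustrated with a toy simulation. In the hedged Python sketch below, two hypothetical groups offend at an identical rate; the only difference is the probability that an offence produces an arrest record. A naive actuarial tool trained on those records would score the more heavily policed group as roughly twice as ‘high risk’ – the pattern Dressel and Farid describe – without group membership ever appearing as a variable. All rates are invented for illustration.

import random

def recorded_offenders(n: int, offend_rate: float, arrest_rate: float,
                       rng: random.Random) -> tuple:
    """Return (true offenders, offenders with an arrest record)."""
    offenders = sum(rng.random() < offend_rate for _ in range(n))
    recorded = sum(rng.random() < arrest_rate for _ in range(offenders))
    return offenders, recorded

rng = random.Random(0)
# Identical 10% offending; only the policing intensity differs.
for group, arrest_rate in (("A", 0.2), ("B", 0.4)):
    offenders, recorded = recorded_offenders(100_000, 0.10, arrest_rate, rng)
    print(f"group {group}: {offenders} true offenders, "
          f"{recorded} visible to a records-trained risk tool")
# Group B appears about twice as 'risky' despite identical behaviour.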

It is still possible that the surveillance solution causes a general deterrent effect and may therefore be justifiable empirically and ethically. There is evidence that knowledge of digital surveillance programs alters the decision-making of some cybercriminals. For example, one randomised control trial examined whether computer hackers were deterred by a notification that their actions were being monitored by a surveillance program (Wilson et al., 2015). The study suggests some computer hackers are deterred by the presence of a surveillance notification, yet those who stay connected longer than 50 seconds will repeatedly trespass without trepidation (Wilson et al., 2015, p. 847). Still, demonstrating a general deterrent effect is akin to proving a negative claim, as the theory suggests crime can be stopped before any observable behaviour occurs (Jacobs, 2010, p. 430). Furthermore, as will be unpacked in the next section (Chapter Two, Section 2.3.4), digitally literate cybercriminals can still cover their tracks using a variety of privacy-enhancing technologies (Yar & Steinmetz, 2019, p. 246; Utset, 2017, p. 1486). Thus, any deterrent effects are likely to be partial and temporary. Indeed, as cybercrime-enabling technologies are constantly evolving, Benoit Dupont (2017, p. 101) has likened the work of cyber-law enforcement to the myth of Sisyphus, who was condemned for eternity to push a boulder up a mountain only to have it immediately roll back down.

Relatedly, there are complicated methodological issues with accurately evaluating the effectiveness of the surveillance solution. Because surveillance programs find evidence of crime wherever they are targeted, they tend to gather sufficient evidence to justify their ongoing existence. Kevin Haggerty (2009, p. 277) has described the evaluation of surveillance programs as an unregulated “knife fight”, owing to the no-holds-barred approach of using research to justify reforms. The central problem is that researchers may interpret “both increases and decreases in the crime rate as a positive sign” (Haggerty, 2009, pp. 280-281). Specifically, surveillance programs can always be evaluated as effective, depending upon whether the criteria for success are based upon crime prevention or detection. An analysis of CCTV evaluations published between 2000 and 2009 suggests competing criteria are simultaneously invoked within political justifications, including the use of qualitative police testimony to claim crime was being reduced while lauding the technology’s ability to improve crime detection as indicated by quantitative data (Lett, Hier, & Walby, 2012, p. 343). Indeed, law enforcement agencies have an interest in securing these sorts of positive evaluations of surveillance programs because “novel technologies of control” ensure they are “seen to be doing something tangible about public anxieties” (Crawford, 2009, p. 810). Evidently, articulating an argument about the effectiveness, or ineffectiveness, of the surveillance solution is always a political performance subject to the limits of empirical research.

The privileged position of the surveillance solution is partially explained by psychological biases involving technology. For example, the tendency for humans to privilege technological solutions to social problems was first identified within ergonomic psychology as “automation abuse” (Parasuraman & Riley, 1997, p. 230; Dzindolet et al., 2003). Within the broader social sciences, the phenomenon came to be known as “automation bias” (e.g. Lyell & Coiera, 2016) and has been observed within the context of criminal justice policymaking as the tendency to accept “techno-fixes” to crime problems (Haggerty, 2004, p. 494). The above discussion illustrates why this faith is misplaced – the claim that surveillance programs provide accurate and objective solutions to crime or terrorism does not withstand critical scrutiny. Furthermore, individuals who are identified as breaking the law by automated surveillance programs, such as speed cameras, have been observed to reject the procedural legitimacy of technical decision-making processes (Wells, 2008, p. 806). Finally, there is potential for surveillance solutions to experience ‘crises of confidence’ where they are found not to live up to the “unrealistic cultural assumptions about the capabilities of technology” (Kearon, 2012, p. 415). Overall, there are a plethora of methodological issues with preventive social and behavioural science that limit the effectiveness of surveillance programs, which are themselves unavoidably political and routinely discriminatory in practice.

Ultimately, this section has established how surveillance powers are privileged policy solutions within liberal democracies, despite concerns raised within the scholarly literature. First, it has been demonstrated how governments discursively justify surveillance powers using the logics of preventive justice, which legitimate pre-emptive interferences with individuals on the basis of consequentialist moral reasoning about socially constructed categories of risk. Second, it has been demonstrated how there is (limited) evidence to support the empirical claim of the ‘problem of going dark’ – that the use of privacy-enhancing technologies enables the evasion of criminal investigations. Specifically, it is clear that privacy-enhancing technologies, such as onion routing and cryptocurrencies, are being used by malicious subjects to mask criminal behaviours such as committing cyber-fraud, spreading hate speech, trading illicit goods, distributing child exploitation material, and organising terrorist activity. However, a survey of the literature suggests such technologies are also developed and utilised by politically-motivated and (mostly) law-abiding communities. Third, it has been demonstrated how citizens accept surveillance powers due to the normality of being monitored, a desire to manage fear about uncertainty, and the associated benefits of information disclosure. Fourth, it has been demonstrated how the ethical issues and empirical limitations of predictive behavioural science render surveillance powers unavoidably political and routinely discriminatory. Overall, the section has established how the surveillance solution is politically privileged, despite various empirical and ethical concerns about it as a tool for administering justice.


2.3. THE SYMBIOSIS OF SURVEILLANCE AND PRIVACY

Despite the popularity of the surveillance solution, citizens often respond by developing strategies of privacy protection. Indeed, as Mark Andrejevic (2017, p. 879) argues, from the perspective of the state, citizens occupy a contested role as subjects who require protection from security threats and as “obstacles to overcome” in the pursuit of pre-emptive surveillance powers. Relatedly, a symbiotic relationship between surveillance powers and privacy protection has been observed by sociologist Gary T. Marx (2016, p. 168; 2009, p. 299) in Windows into the Soul, where he describes a “surveillance arms race” characterised by “strategic moves, countermoves, and counter-countermoves” between the agents and subjects of surveillance. Such a symbiosis is also observed by David Lyon (2002, p. 242) within his notion of “everyday surveillance”, where “the choices and chances of data-subjects” influence, and are influenced by, surveillance programs. Finally, the symbiosis is found within the Foucauldian tradition of surveillance studies, based upon Michel Foucault’s (1990, p. 95) observation that “where there is power, there is resistance”.

Consequently, there has emerged a collection of research examining “resistance to surveillance” (e.g. Martin, van Brakel, & Bernhard, 2009, p. 228; Fernandez & Huey, 2009, p. 198) and “privacy protection” (e.g. Kokolakis, 2017, p. 122; Youn, 2009, p. 389). Thus, this section reviews this literature. Across five sub-sections, the symbiotic relationship between surveillance powers and privacy protection is surveyed, focusing on: privacy litigation; privacy campaigns; privacy behaviours; privacy-enhancing technologies; and practices of counter-surveillance. Overall, the section argues these diverse strategies of privacy protection share similarities as moral and political responses to “the abuse of power”, where “[p]rivacy is one vehicle, among many, for redressing the balance between the powerful and the powerless” (Bennett, 2008, pp. 22-23).

2.3.1. PRIVACY LITIGATION

One of the most impactful methods of privacy protection is public interest litigation. Thus, this sub-section surveys how law is used to protect privacy rights. Within the Australian context, there are no constitutional protections for the right to privacy, and legislatures have been reluctant to establish a cause of action under tort law (Butler, 2005, pp. 363-365). Instead, the Privacy Act (1988, s. 14) outlines thirteen Australian Privacy Principles (APPs), with any alleged breaches dealt with through the investigatory powers of a Privacy Commissioner empowered to make determinations (Privacy Act, 1988, s. 52). Within this system, parties have a right to appeal determinations to the Administrative Appeals Tribunal and the Federal Court of Australia. There have been efforts to relocate privacy matters to the courts. The Australian Law Reform Commission (2008, p. 1477) has recommended a statutory cause of action for serious invasions of privacy. This was the focus of a corresponding discussion paper (Commonwealth of Australia, 2011), although the proposal has never made it out of the consultation process. Thus, the Australian system has limited the capacity for public interest litigation on matters of privacy.

One example of public interest litigation pursued with the support of privacy advocacy groups in Australia is the 2015 case of Grubb v Telstra,20 which determined whether metadata constitutes ‘personal information’ under APP 6.1. If metadata is considered personal information, telecommunications service providers would have an obligation to disclose the information to customers upon request (Privacy Act, 1988, Sch. 1, Pt. 3). In Grubb v Telstra the Privacy Commissioner initially found metadata to be ‘personal information’ on the basis it enables individuals to be re-identified. The determination required Telstra to hand over any metadata produced by the complainant. On appeal of that determination, in Telstra v Privacy Commissioner,21 the Administrative Appeals Tribunal agreed with Telstra that metadata is not about the customer themselves and is, instead, about the services provided. On this interpretation, metadata is information about the service provider rather than the customer. This decision was subsequently upheld in a 2017 appeal by the Privacy Commissioner to the Full Federal Court of Australia,22 and despite calls to appeal to the High Court (APF, 2017; EFA, 2017b), the Privacy Commissioner opted not to pursue the matter further.

20 Ben Grubb and Telstra Corporation Limited [2015] AICmr 35 (1 May 2015) (Austl.)
21 Telstra Corporation Limited and Privacy Commissioner [2015] AATA 991 (18 December 2015) (Austl.)
22 Privacy Commissioner v Telstra Corporation Limited [2017] FCAFC 4 (Austl.)
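The re-identification reasoning at the heart of the Privacy Commissioner’s initial determination can be illustrated with a toy example. Mobility research has repeatedly shown that a handful of spatio-temporal points is enough to single out most individuals; the hedged Python sketch below, using invented subscriber records of (hour, cell tower) pairs, shows how quickly retained metadata narrows a pool of customers to one.

# Hypothetical retained metadata: subscriber ID -> set of (hour, tower) events
records = {
    "sub-001": {(8, "T1"), (13, "T4"), (19, "T2")},
    "sub-002": {(8, "T1"), (13, "T4"), (19, "T7")},
    "sub-003": {(9, "T3"), (12, "T4"), (19, "T2")},
}

def matching_subscribers(observations: set) -> list:
    """Subscribers whose retained metadata contains every observed point."""
    return [sid for sid, events in records.items() if observations <= events]

# One observed sighting is ambiguous; two already isolate an individual.
print(matching_subscribers({(8, "T1")}))              # -> ['sub-001', 'sub-002']
print(matching_subscribers({(8, "T1"), (19, "T2")}))  # -> ['sub-001']

On this logic, records that are nominally ‘about the service’ function, in combination, as information about an identifiable person – which is precisely what was contested across the appeals.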

Privacy law does not have a well-established history in Australia outside of the APPs. The High Court originally rejected any claims based upon the right to privacy in the case of Victoria Park v Taylor (1937).23 The case concerned whether Victoria Park Racecourse held exclusive control over information about horse racing occurring within its premises. Victoria Park Racecourse’s business model involved charging an entry fee and operating an on-the-ground betting operation. However, on an adjacent residential property, the respondent, Taylor, allowed the radio broadcaster 2UW to construct a platform from which, using binoculars, the races could be observed and broadcast. Victoria Park Racecourse experienced a decline in patronage and sought an injunction on the basis its right to privacy was being violated. However, in the majority opinion rejecting the grounds for the appeal, Latham CJ (quoted in Butler, 2005, p. 341) argued “however desirable some limitation upon invasions of privacy might be, no authority was cited which shows that any general right of privacy exists.” As such, the Court declined to infer a right to privacy regarding what Latham CJ refers to as ‘spectacle’24 – visual information about an activity.

23 Victoria Park Racing and Recreation Grounds Co Ltd v Taylor (1937) 58 CLR 479 (Austl.)
24 Victoria Park, n 23, 497.

Despite the apparent absence of either a statutory or common law right to privacy in Australia, there have been some recent cases involving civil damages for privacy invasions and metadata retention by governments. This is, in part, because in ABC v Lenah Game Meats25 the High Court cast doubt on the long-standing authority of Victoria Park for preventing any cause of action based upon the violation of a right to privacy (Butler, 2005, pp. 340-341). Subsequent cases have therefore considered the potential for a right to privacy. In a survey of lower court cases involving the use of drones and aerial surveillance, Des Butler (2014, p. 443) argues plaintiffs may have an associated cause of action for unreasonable intrusion and private nuisance. For example, in the 2003 Queensland case of Grosse v Purvis,26 Senior Judge Skoien awarded the plaintiff $178,000 in damages for violations of her right to privacy (and associated rights) caused by the defendant’s repeated and prolonged attempts at communication. However, as Butler (2014, p. 445) observes, “subsequent decisions have been less willing to embrace such a tort.” So, despite the possibility of a cause of action arising from privacy violations, the limited case law appears to deter Australian privacy advocates from pursuing public interest litigation compared to counterparts in the United Kingdom or United States (de Zwart, Humphreys, & Van Diesel, 2014, pp. 742-746). The limited scope of privacy litigation within Australia is “partly explained by the lack of constitutional redress in Australia, but also by other difficulties in exacting legal accountability”, such as a preference for establishing non-judicial alternatives to statutory causes of action (Ransley, Andersen, & Prenzler, 2007, p. 146). As a result, legal protections for privacy rights within Australia are limited, although such strategies are more common in other jurisdictions.

25 Australian Broadcasting Corporation v Lenah Game Meats Pty Ltd (2001) 208 CLR 199 (Austl.). This case involved the confidentiality of an abattoir slaughtering process covertly recorded by animal rights activists. The recording was passed on to the ABC, against whom Lenah Game Meats was seeking an injunction to prevent broadcasting the footage. While the case primarily involved breach of confidentiality, the Court considered the issue of privacy in detail and rejected the precedent of Victoria Park in its deliberations.
26 Grosse v Purvis [2003] QDC 151; (2003) Aust Torts Reports 81-706 (Austl.)

The United States has a longer history of jurisprudence about the right to privacy. Since Samuel Warren and Louis Brandeis (1890) published The Right to Privacy and Melville B. Nimmer (1954) published The Right of Publicity, American political and constitutional theory has gradually accepted privacy as “the right to engage in certain conduct without government restraint” (Sandel, 1995, p. 93). The applicability of such privacy rights to telecommunications has been contested by privacy litigants and the US Government. For example, a 1979 ruling in Smith v. Maryland27 suggested communications metadata was not covered by a right to privacy, limiting public interest litigation about surveillance within the United States for several decades (Agur, 2013, p. 438). Following the Snowden disclosures, American courts again became sites of “privacy struggles” concerning metadata retention programs (Joh, 2013, p. 1022). In ACLU v. Clapper28 a US District Court considered the legality of the NSA’s metadata retention program under the First and Fourth Amendments to the United States Constitution.29 The Court initially accepted the claim that Americans do not have a reasonable expectation of privacy for information provided to telecommunications service providers. However, this was overturned by the Second Circuit Court of Appeals,30 which, while avoiding questions of constitutionality, concluded the program exceeded the scope of section 215 of the USA PATRIOT Act (2001).31 In contrast, the initial judgement in Klayman v. Obama,32 which similarly objected to metadata retention on the basis it contravened the First and Fourth Amendments and exceeded the powers outlined in section 215 of the USA PATRIOT Act (2001), placed an injunction on the NSA and service provider Verizon to cease bulk metadata collection. However, this was vacated and dismissed on appeal.33 Regardless, these events sparked a change in how lawmakers approached the 2015 re-authorisation of the USA PATRIOT Act (2001). While reaffirming most existing surveillance powers, the USA FREEDOM Act (2015) expressly prohibits the bulk collection of metadata previously occurring under section 215 of the USA PATRIOT Act (2001). Similar dynamics have also unfolded in the European Union (EU).

27 Smith v. Maryland, 442 U.S. 735 (1979) (USA).
28 American Civil Liberties Union v. James Clapper, 959 F.Supp.2d 724 (28 Dec 2013) (USA).
29 The First Amendment to the United States Constitution prevents the making of laws that abridge freedom of speech, while the Fourth Amendment prohibits unreasonable searches and seizures. The right to privacy, and the limits of government surveillance powers, has generally been dealt with as a Fourth Amendment issue. See Joh (2013, p. 1005) for a discussion of privacy protection from law enforcement within the context of US constitutional law.
30 American Civil Liberties Union v. James Clapper, 785 F.3d 787 (7 May 2015) (USA).
31 Section 215 of the USA PATRIOT Act (2001) requires telecommunications service providers to hand over any records considered relevant to a counter-terrorism or intelligence investigation upon the provision of a warrant from the Foreign Intelligence Surveillance Court. As such, section 215 of the USA PATRIOT Act (2001) is comparable to the surveillance powers introduced under Australia’s Data Retention Act (2015).
32 Klayman v. Obama, 142 F.Supp.3d 172 (D.D.C. 2015) (USA).
33 Obama v. Klayman, 800 F.3d 559 (DC Cir. 2015) (USA).

Perhaps the most active jurisdictions in this regard have been within the EU, where there are observable tensions between EU privacy protections and individual member states. According to Carly Nyst and Tomaso Falchetta (2017, p. 115), “European court judgments are increasingly interpreting existing human rights standards to protect individuals’ privacy against unlawful modern communications surveillance”, and these protections under EU law have prompted member states to attempt to circumvent the resulting restrictions through domestic legislation. For example, in Digital Rights Ireland v. Ireland,34 a digital rights advocacy group successfully challenged the legality of the Irish Criminal Justice (Terrorist Offences) Act (2005) in the Court of Justice of the European Union (CJEU). The Irish legislation empowered the government to retain metadata for between six and twenty-four months in accordance with its obligations under the European Data Retention Directive (2006/24/EC). The case thus concerned the legality of both the Irish and EU legislation under the Charter of Fundamental Rights of the European Union (2000), with the CJEU observing metadata retention violated the rights to respect for private life and protection of personal data found within Articles 7 and 8 (Murphy, 2014, p. 20). The invalidation of the Data Retention Directive had a flow-on effect for privacy and surveillance law across the EU.

30 American Civil Liberties Union v. James Clapper, 785 F.3d 787 (7 May 2015) (USA).
31 Section 215 of the USA PATRIOT Act (2001) requires telecommunications service providers to hand over any records considered relevant to a counter-terrorism or intelligence investigation upon the provision of a warrant from the Foreign Intelligence Surveillance Court. As such, section 215 of the USA PATRIOT Act (2001) is comparable to the surveillance powers introduced under Australia’s Data Retention Act (2015).
32 Klayman v. Obama, 142 F.Supp.3d 172 (D.D.C. 2015) (USA).
33 Obama v Klayman, 800 F.3d 559 (DC Cir. 2015) (USA).

The dynamics of privacy protection through law played out in the subsequent back-and-forth between the Parliament of the United Kingdom and the CJEU. Following the invalidation of the Data Retention Directive, the British Government passed the Data Retention and Investigatory Powers Act (2014) in an attempt to preserve data retention powers in accordance with the judgement in Digital Rights Ireland. Essentially, the Act clarified the limited scope of data retention obligations for telecommunications service providers, tightening access to data by law enforcement agencies and requiring providers to dispose of data after 24 months. This was the strategy then-Secretary of State for the Home Department Theresa May (2014, p. 705) used to argue the new legislation complied with the decision in Digital Rights Ireland, remarking “we are very clear about its focus in terms of how it will be operated and in terms of its scope… we are addressing the very issue that was raised by the Court.” Indeed, in the earlier case of Liberty and Others v. GCHQ,35 the UK Investigatory Powers Tribunal concluded that mass surveillance was lawful where there were sufficient safeguards and limited scope.36 However, once again, privacy protection through law frustrated surveillance powers. In Davis v. Home Department,37 the Court of Appeal referred the question of the legality of the UK’s revised data retention laws back to the CJEU for clarification. The final judgement in Davis rejected the government’s claim the legislation complied with the EU Charter of Fundamental Rights. The result was, as Nyst and Falchetta (2017, p. 114) observe, that privacy rights established under Articles 7 and 8 prohibit metadata retention of EU citizens in the absence of reasonable suspicion. This includes access to data held by foreign governments. The to-and-fro continued with the passage of the Investigatory Powers Act (2016), which further refined the scope of bulk data retention and collection. This was subsequently challenged by Liberty, backed by a crowdsourced funding campaign, with the High Court once again invalidating the UK’s data retention laws in Liberty v Home Department.38 The British Government is once again redrafting data retention legislation, highlighting the ongoing dynamic between the legislation of surveillance programs and privacy protection through public interest litigation.39

34 Digital Rights Ireland v. Ireland and Seitlinger and Others (2014) CJEU Joined Cases C-293/12 and C-594/12 (EU).
35 Liberty and Others v GCHQ and Others [2014] UKIPTrib 13_77-H (5 December 2014) (UK).

2.3.2. PRIVACY CAMPAIGNS

This sub-section surveys research into how subjects politically mobilise around the concept of privacy in response to surveillance powers. There are dedicated civil society and human rights organisations that, in addition to pursuing privacy litigation, advocate for law reform with regard to privacy and surveillance issues. For example, the Australian Privacy Foundation (APF, 2018) has a well-established history of legal and political activism challenging government surveillance powers. As an advocacy organisation, the APF frames its arguments using legal and moral concepts such as reasonable justification, proportionality, and transparency (Clarke, 2015, p. 129).

36 Weber and Saravia v. Germany, 54934/00 ECHR (EU).
37 Davis and Others v. Secretary of State for the Home Department [2015] EWHC 2092 (Admin) (UK).
38 Liberty v. Home Department [2018] EWHC 975 (Admin) (UK).
39 It is noted that this legal dynamic will be interrupted by the withdrawal of the United Kingdom from the European Union, which at the time of writing is scheduled for 31 October 2019.

Internationally, privacy advocacy groups have also sought to develop common languages for political practice. For example, Privacy International (2013) led the development of the International Principles on the Application of Human Rights to

Communications Surveillance, which established consequentialist principles of necessity and proportionality as standards for assessing surveillance powers. Indeed, the Privacy International (2013) principles have successfully influenced the development of international digital rights norms (Nyst & Falchetta, 2017, p. 107).

Specifically, following the Snowden disclosures, the Report of the Special Rapporteur on the Promotion and Protection of Human Rights while Countering Terrorism (2014) integrated the concepts within the resulting recommendations concerning appropriate digital surveillance practices. Subsequently, the United Nations General Assembly

(2014) adopted Resolution 69/166 on The Right to Privacy in the Digital Age, which affirmed the application of Article 12 of the Universal Declaration of Human Rights40 as covering freedom from arbitrary interference with privacy.

However, the international privacy ‘movement’ remains politically diverse. In the first systematic analysis of the international movement, published within The Privacy Advocates, Colin Bennett (2008) argues civil society and human rights organisations who mobilise around a commitment to privacy protection have a plurality of goals and strategies. This includes legal, political, and technological strategies of resistance.

40 Article 12 of the Universal Declaration of Human Rights (UN General Assembly, Art. 12) specifically prohibits “arbitrary interference” with privacy and provides for protection against such interference under the law.

However, Bennett (2008, pp. 22-23, 25-28) argues that while the movement is disorganised, it is commonly animated by shared concerns about the abuse of power.

For example, a social network analysis of privacy advocacy organisations conducted on 2 August 2008 reveals how organisations that had cross-referenced their respective websites exhibit different areas of interest and degrees of issue engagement (Introna & Gibbons, 2009, p. 242). Organised around Privacy International, the movement includes privacy-specific, region-specific, and more generalised groups (Introna & Gibbons, 2009, pp. 242-243). Similarly, an analysis of the network of actors involved in resistance to the United Kingdom’s metadata retention program observed how resistance is “central to the dynamics of surveillance” and more complicated than the “two-actor paradigm” of state and citizen, with actors from civil society, non-government organisations, and the union movement participating in public debates and privacy protests (Martin, van Brakel, & Bernhard, 2009, p. 228). Evidently, there is diversity in who is campaigning to protect privacy rights.

Part of this diversity originates from the distinction between the mobilisation of privacy advocates, other activists, and concerned members of the public. While

‘advocates’ generally engage with criminal justice policymakers via formal political institutions, ‘activists’ generally operate outside of these institutions and seek to mobilise popular support for law reform (Bennett, 2011, p. 127). Furthermore, these groups employ different discourses to strategically frame the issue. Colin Bennett

(2008, pp. 95-132) has documented how groups will use a variety of framing strategies, including: the dissemination of information about surveillance and privacy; drawing upon the symbolic politics of Big Brother;41 and highlighting the lack of institutional oversight. Similarly, privacy advocates adapt their framing strategies in response to agenda-setting dilemmas, such as the extent to which advocates should engage with internal or external stakeholders, pursue narrow or broad messaging, or run long- or short-term campaigns (Bennett, 2008, pp. 17-21). Evidently, the strategic framing of ‘privacy’ and ‘surveillance’ is malleable within such campaigns, while groups share a core commitment to protecting citizens against unjust power.

The mobilisation of privacy protection campaigns, and associated protests among members of the general public, has also been documented. For example, on 11 February 2014 protestors simultaneously gathered in forums, social media platforms, and city streets as part of the Electronic Frontier Foundation’s The Day We Fight Back campaign (e.g. Gabbatt, 2014). The protests were a direct response to the 2013 Snowden disclosures. A qualitative analysis of action frames used during these privacy protests reveals how The Day We Fight Back involved “posting online banners, creating memes, and writing emails to representatives” and utilised literary references to ‘Big Brother’ from George Orwell’s (1949) Nineteen Eighty-Four to mobilise support

(Wäscher, 2017, pp. 369-371). Similar protests have previously occurred via

‘blackouts’ or ‘blanking’42 of digital platforms such as Reddit, Wikipedia, and Google in response to proposed amendments to intellectual property laws in the United States (Logie, 2014, pp. 20-25). These latter examples were successful in preventing the passage of the Stop Online Piracy Bill (2012), although the protests following the Snowden disclosures have had comparatively less success limiting the ongoing expansion of surveillance programs.

41 Big Brother refers to the character from George Orwell’s (1949) novel Nineteen Eighty-Four, who is the supposed leader of the governing Ingsoc Party. Although the actual existence of Big Brother is not established within the text, his imagery is regularly used for the purposes of state propaganda. The concept of Big Brother and the slogan “Big Brother is watching you” have been used by privacy advocates to represent problems with surveillance powers. See Bennett (2008) for further details about the use of Big Brother as a political frame.
42 While a ‘blackout’ was a complete shutdown of a service, ‘blanking’ was merely an aesthetic change. For example, Wikipedia changed its colour scheme to a black background with white text. See Logie (2014, p. 21) for further details.

These strategies of web-based privacy protests can be traced back to political resistance to the expansion of surveillance powers under the USA PATRIOT Act

(2001), following the terrorist attacks on 11 September 2001. Research by Brian

Krueger (2005, p. 446) suggests heightened knowledge of the surveillance powers introduced via the USA PATRIOT Act (2001) correlated with increased online privacy activism on internet forums. Similarly, users of social media platforms have been observed to engage in privacy protests. When Facebook first introduced the ‘newsfeed’ feature showing the activity of other users, concerned users responded by starting petitions, bombarding newsfeeds with protest messages, and disseminating blog posts critical of the change (Sanchez, 2009, p. 282). This eventually led to the introduction of ‘privacy settings’ enabling users to control what information they want to share with their connections. Finally, with the expansion of social media platforms relying upon surveillance-based models of capitalism,43 concerned users of Facebook have disseminated information about Facebook’s data harvesting practices and advocated for other users to adopt strategies of privacy protection (Fernback, 2013, p. 15).

Evidently, social groups and members of the public mobilise around privacy protection campaigns where there is a perception that social and political power is being abused.

Within some contexts, privacy protests occupy a contested space between lawful and unlawful action. For example, there are various privacy-aligned forms of hacktivism. In 2010 the web-based activist collective Anonymous conducted Operation Payback, which, in part, focused on conducting Distributed Denial-of-Service (DDoS) attacks on various corporations that stopped accepting online money transfers for Wikileaks (Sauter, 2013, p. 984). Founded on 4 October 2006 by former participants of the cypherpunk mailing list,44 Wikileaks has been described as a “technology of dissent” that facilitates whistle-blowing and news leaks of classified national security information (Curran & Gibson, 2013, p. 1). As such, the site has been the focal point of an ongoing cat-and-mouse game between state regulators and anti-surveillance activists, with the latter using DDoS attacks to counter regulatory efforts. These DDoS attacks involve overwhelming a system with traffic, often by commandeering compromised devices infected with a virus or malware, to temporarily prevent access to a domain. They are therefore “a digital machine de guerre” for pursuing direct action “against the legal apparatus of the State” (Beck, 2016, p. 334). As DDoS attacks do not cause permanent damage to digital infrastructure, they have been described as “digital sit-ins” by their proponents, although when targeted towards online service providers they can cause significant financial harm by temporarily crippling functionality (Karanasiou, 2014, p. 99). As such, Noah Hampson (2012, p. 511) argues they would not likely qualify as forms of protected free speech under US and UK laws, even when conducted without exploiting compromised machines, due to the foreseeable financial harms. In response to these potentially unlawful strategies of privacy protection, law enforcement agencies have pursued countermoves of their own.

43 See the discussion above about Surveillance Sub-Cultures (Section 2.2.3.3) for a discussion of Shoshana Zuboff’s (2015) concept of surveillance capitalism.
44 Although Julian Assange is publicly recognised as the creator of Wikileaks, the platform was a highly collaborative endeavour by a collection of crypto-anarchists from across Europe, Asia, and Oceania.

The state has reacted strongly to web-based privacy protests. For example, individuals who use privacy-enhancing and disruptive technologies to register political dissent and enable whistle-blowing generally receive harsh punishments. Because such technologies “threaten the basic tenets of a state’s legitimacy and ability to maintain its interests internationally without exposure,” states have pursued lengthy

“retaliatory prosecutions” for implicated activists (Warren, 2015, p. 301; Rothe &

Steinmetz, 2013, p. 283). Indeed, citizens who use disruptive technologies such as

DDoS attacks to register dissent are criminalised as ‘hackers’ who are trespassing or threatening critical infrastructure (Bessant, 2016, p. 930). For example, based upon a sample of 45 members of Anonymous,45 including the ‘PayPal 14’ who participated in

Operation Payback, one analysis suggests they receive average custodial sentences of

7½ years and a fine of $200,000 USD (Tomblin & Jenion, 2016, p. 510). Specifically in response to the arrest and prosecution of the PayPal 14, British members of

Anonymous organised the Million Mask March to protest the punitive treatment of

‘hacktivists’ (Harbisher, 2016, p. 303). Again, this highlights the symbiosis between surveillance and privacy protection, as law enforcement subsequently disrupted these protests by posing as ‘hackers’, infiltrating encrypted chat rooms, and identifying

“notable hacktivists” involved in the campaign (Harbisher, 2016, p. 303). Overall, there is a cat-and-mouse dynamic between the subjects of surveillance, where expanding surveillance powers mobilises campaigns of privacy protection.

2.3.3. PRIVACY BEHAVIOURS

There are less organised, and more subtle, forms of political resistance to unjust power. Thus, this sub-section examines how everyday privacy-protective behaviours are another symbiotic response to surveillance programs. In Weapons of the Weak, James C. Scott (1984, p. 29) argues “everyday resistance” to power occurs through behavioural adaptations that can undermine oppressive social structures. He refers to this dynamic of individualised behavioural resistance as “infrapolitics” (Scott, 1990, p. 193). Correspondingly, there is a collection of social scientific research documenting this type of unorganised resistance to surveillance. For example, people make active decisions to avoid the use of social media platforms due to privacy concerns (Casemajor, Couture, Delfin, Goerzen, & Delfanti, 2015, p. 856). Similarly, knowledge of surveillance can have a ‘chilling effect’ where lawful behaviours are deterred due to fear or concerns about oppression (Penney, 2015, p. 125). While the distinction between active and passive behavioural adaptations is complex and contested, it is apparent that people adopt privacy-protective behaviours to limit their exposure to surveillance powers.

45 Specifically, the sample included 45 members of Anonymous prosecuted within the United States and United Kingdom between 2008 and 2014 (Tomblin & Jenion, 2016).

The information that people choose to disclose online is strategic, particularly on social media platforms. For example, one study examining the perceptions of 343 users of social media platform Facebook observes how a desire for social interaction

(operationalised as ‘need for popularity’) predicts patterns of information disclosure, including user willingness to share birthdays, contact information, and photographs

(Christofides, Muise, & Desmarais, 2009, pp. 343-344). Interestingly, the researchers found that information disclosure behaviours were entirely consistent with privacy conscientiousness, as privacy is only perceived to be violated where users are denied agency and control (Christofides et al., 2009, p. 344). Indeed, as danah boyd (2012, pp. 30-31) argues “people feel as though their privacy has been violated when their agency has been undermined” rather than when they have consciously chosen to

disclose personal information.46 In contrast, where people believe their agency is being threatened, they react by adopting privacy-protective behaviours. For example, where platforms require the provision of a name, age, and gender as terms of use, users may construct pseudonyms and provide false information (boyd, 2012, p. 30). External access to online communities may also be restricted to actively manage the flow of disclosed information (Lingel & boyd, 2013, p. 989). When it is practical, users may choose to avoid platforms where they must authenticate their identity (Dinev, Hart, &

Mullen, 2008, p. 226). As a more extreme example of attempts to retain control over information disclosure, activists have developed face masks and fashion accessories designed to obscure their identities from automated facial recognition technologies

(Monahan, 2015, pp. 163-168). These sorts of privacy-protective behaviours are examples of individualised infrapolitics, rather than highly organised campaigns.

These everyday patterns of privacy protection shift in response to social and political events. This was particularly evident following the 2013 global surveillance disclosures. For example, one study tracking internet search behaviour suggests

Google Trends47 data has been influenced by public knowledge of mass surveillance

(Marthews & Tucker, 2015, p. 13). The researchers provide the example of a decline in the use of the phrase ‘pipe bomb’ when searching for news events, as users seek to avoid attracting the suspicion of law enforcement agencies (Marthews & Tucker, 2015, p. 30). The analysis suggests a statistically significant decrease in searches for such

‘sensitive’ terms following the Snowden disclosures on 5 June 2013 (Marthews &

Tucker, 2015, pp. 16-25). Similarly, in an interrupted time series analysis48 of

46 See also Taddei and Contena (2013) for experimental evidence within psychology for this relationship.
47 Google Trends provides analytics about the popularity of different search terms globally and regionally. The service is available at: https://trends.google.com/trends/
48 This is a method of statistical analysis that examines whether an event causes a change in an underlying trend.

Wikipedia traffic from 2012 to 2014, Jonathan Penney (2015, p. 146) demonstrates how the Snowden disclosures coincided with a 19.5% decline in average monthly visits to ‘terrorism-related’ pages. In comparison, non-terrorism-related pages experienced no observable change during the same period (Penney, 2015, p. 160).

Similarly, research based upon interviews with 113 German citizens, who were observed using their laptops in public, suggests the strongest predictor of webcam-covering behaviour was disclosure-related concern that governments could covertly activate the camera (Machuletz et al., 2016, p. 5). Evidently, patterns of privacy-protective behaviour are influenced by social events, with everyday behavioural adaptations, potentially registering as ‘chilling effects’, embraced to limit information disclosure.
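The interrupted time series method used by Penney (2015) can be illustrated with a short segmented regression. The sketch below, in Python, fits a level-shift and slope-change model to simulated monthly page-view counts; the data, variable names, and effect sizes are illustrative assumptions and are not drawn from the original study.

```python
# Illustrative sketch of an interrupted time series (segmented regression)
# analysis in the style of Penney (2015). Data are simulated; all names
# and magnitudes are hypothetical, not taken from the original study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
months = np.arange(32)        # e.g. Jan 2012 - Aug 2014
interruption = 17             # index of June 2013 (Snowden disclosures)

# Simulated monthly page views with a post-interruption level drop.
views = 1000 + 2 * months - 195 * (months >= interruption) + rng.normal(0, 20, 32)

df = pd.DataFrame({
    "views": views,
    "time": months,                                          # secular trend
    "post": (months >= interruption).astype(int),            # level change
    "time_after": np.clip(months - interruption, 0, None),   # slope change
})

# 'post' estimates the immediate drop in visits; 'time_after' any trend change.
model = smf.ols("views ~ time + post + time_after", data=df).fit()
print(model.summary())
```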

These patterns of privacy protection are also influenced by social categories of class, age, race, and gender. This is because surveillance powers disproportionately target already-marginalised groups.49 For example, research has revealed how employees obfuscate performance metrics gathered by their employers. Where workers are placed under workplace surveillance to measure their productivity, they have been observed to share “tricks” with one another on how to fool workplace surveillance programs (Townsend, 2005, p. 56). For example, strategies include never logging out of computers, placing customers on hold while taking breaks, or avoiding the use of software monitored by management (Townsend, 2005, pp. 55-57). Another study observes how employees whose computers are monitored will perform

“distorting moves” such as “holding down computer keys to appear productive”

(Johansson & Vinthagen, 2014, p. 428). Finally, interviews with British public service employees suggest such strategies of resistance are conscious attempts by workers to

reclaim their agency (Thomas & Davies, 2005, p. 700). Schoolchildren will similarly avoid surveillance of their internet use by accessing webpages with misleading domains50 (Hope, 2005, pp. 367-369). Single mothers on welfare within the United States have explained how they will mask income by accepting cash-based jobs, as acts of defiance against oppressive bureaucratic surveillance programs (Gilliom, 2005, pp. 72, 80). Following the 2005 London Bombings, British Muslims engaged in the “performance of safety” to minimise public surveillance by law enforcement officers (Mythen, Walklate, & Khan, 2009, p. 749). This included dressing in Western-style clothing and maintaining “an acceptable European regulation length beard” (Mythen et al., 2009, p. 749). Overall, these everyday behavioural adaptations are embraced by segments of the population who are disproportionately targeted by the surveillance gaze, including social groups structured according to class, age, race, or gender.

49 See the above section on The Limits of the Surveillance Solution (Section 2.2.4) for a discussion of the unavoidably political and routinely discriminatory character of surveillance powers.

In addition to benign behavioural adaptations, there is research examining how activists operating within politically oppressive jurisdictions use privacy protection to avoid government surveillance powers. For example, research involving interviews with representatives of 71 social justice organisations reveals how groups advocating

‘radical politics’ attract greater surveillance from law enforcement agencies and, subsequently, adopt privacy-protective behaviours (Starr, Fernandez, Amster, Wood,

& Caro, 2008, pp. 258-266). The results of the analysis suggest these groups “work to reinforce a hard line between legal and illegal political activities” to minimise attention from law enforcement, while nearly all interviewees “reduced their use of email and telephones” and “try to have their meetings in person” out of fear they were being monitored (Starr et al., 2008, p. 266). An insightful study by Oliver Leistert (2012)

examines the experiences of 50 political activists from the Americas and Asia with metadata retention programs operated by state agencies. The research documented the complex cat-and-mouse dynamics of privacy protection: activists started using code words to mask their meeting locations, which prompted law enforcement to covertly collect location-based metadata, which prompted activists to remove the batteries from their mobile phones before attending meetings (Leistert, 2012, p. 444). Again, this highlights how there is a dynamic social dance occurring between the subjects of surveillance.

50 As an additional and anecdotal piece of evidence for this, the author can attest to the popularity of the ‘games’ section of the website https://www.mathsisfun.com at a Brisbane-based primary school during the early 2000s.

2.3.4. PRIVACY-ENHANCING TECHNOLOGIES

Another method of privacy protection is the use of privacy-enhancing technologies. This sub-section examines existing research about the development, dissemination, and use of privacy-enhancing technologies as a political response to surveillance powers. For example, cryptographic software can be used to ‘mask’ the contents of communications and ‘block’ access by surveillance programs (Dupont, 2008, pp. 269-271). Cryptography can be broadly defined as any process of securing communications between authorised parties, and encompasses a collection of analog, electrical, and digital methods (Ferguson, Schneier, & Kohno, 2010, pp. 23-24). Essentially, communications are secured using protocols, with the keys used to encrypt and decrypt communications held by authorised parties (Delfs & Knebl, 2015, pp. 5-6). These protocols do not prevent an adversary51 from intercepting a message; however, they do prevent them from deciphering the contents. Historical methods of cryptography include manually substituting individual letters within written language according to mathematical algorithms (Martin, 2012, p. 48) or adding background noise to verbal communications that can only be removed by the intended recipient (Kahn, 1984, p. 73). The Enigma machines used by Germany during the Second World War, and famously broken by Allied cryptanalysts, are the most prominent historical example of the use of cryptography. These were a series of rotor machines capable of increasingly complex algorithm-based encryption to substitute individual letters within words (Bruen & Forcinito, 2005, p. 25). Importantly, non-digital forms of cryptography use symmetric-key encryption methods, where the key used for encryption and decryption is the same (Delfs & Knebl, 2015, p. 11). As discussed above (Section 2.2.3.1), these technologies were defined as a form of munition by governments seeking to protect knowledge about encryption (Mendelson, Walker, & Winston, 1998; Stay, 1996). Indeed, within Australia, the Defence Trade Controls Act (2012) regulates overseas distribution of encryption technology. Yet, it was the emergence of digital communications in the late twentieth century that enabled encryption to become a common tool of privacy protection.

51 Adversary is the cryptographic term used to describe ‘unauthorised parties’ to a message. See Ferguson, Schneier and Kohno’s (2010) Cryptography Engineering for an entry-level explanation of the process.
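The symmetric-key principle described above can be illustrated with a minimal substitution cipher in Python, assuming a simple alphabetic shift as the shared key. This is a toy sketch for exposition only; it bears no relation to the strength of modern symmetric algorithms.

```python
# A minimal sketch of a symmetric-key substitution cipher: the same key
# (a shift value) both encrypts and decrypts, so any party holding the
# key can read the message. For exposition only.
def shift_cipher(text: str, key: int) -> str:
    """Substitute each letter by shifting it 'key' places in the alphabet."""
    result = []
    for char in text:
        if char.isalpha():
            base = ord("A") if char.isupper() else ord("a")
            result.append(chr((ord(char) - base + key) % 26 + base))
        else:
            result.append(char)  # leave punctuation and spaces untouched
    return "".join(result)

ciphertext = shift_cipher("attack at dawn", 3)   # 'dwwdfn dw gdzq'
plaintext = shift_cipher(ciphertext, -3)         # decrypt with the same key
assert plaintext == "attack at dawn"
```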

As the political and legal battles of the first ‘crypto-war’ were heating up in the

1990s, the technologist and activist Timothy C. May (1992) authored the Crypto Anarchist Manifesto. The manifesto proclaims cryptographic software “will alter completely the nature of government regulation, the ability to tax and control economic interactions, the ability to keep information secret, and will even alter the nature of trust and reputation” while also recognising the potential that “crypto anarchy will allow national secrets to be traded freely and will allow illicit and stolen materials to be traded… an anonymous computerized market will even make possible abhorrent markets for assassinations and extortion” (May, 1992, paras. 1, 3). Similarly, prominent crypto-anarchist Jim Bell (1996) published the tenth (and final) volume of his essay Assassination Politics in 1996, where he argued for an online assassination market as a tool of social control in service of the comparatively powerless. The idea was that citizens could use cryptography to mask their identities while placing ‘bets’ on the time of death of public officials and business leaders. The logic is that everyone willing to contribute to a bounty would place a ‘bet’ on the time of death, while the ‘winner’ would be the person who conducts the assassination via their foreknowledge of the time of death. The proposal attests to how the politics of privacy protection were particularly controversial during the 1990s.

‘winner’ would be the person who conducts the assassination via their foreknowledge of the time of death. The proposal attests to how the politics of privacy protection were particularly controversial during the 1990s.

It is for this reason that the ‘crypto-war’ was so fiercely fought on both sides, prompted by the initial release of public key encryption to members of the public.

Following the Second World War, cryptographic software was legally classified as a munition and subject to strict regulation and export controls (Levin, 1998, p. 532). By the early 1990s, the NSA was attempting to install a ‘Clipper Chip’ within telephones, to provide the US Government with ‘backdoor’ access to encrypted communications (Froomkin, 1995, p. 745). As a result of the proposal, technologist Philip Zimmermann published the public-key encryption tool Pretty Good Privacy (PGP) via file-sharing services in June of 1991. Characteristic of crypto-anarchists of the time, Zimmermann (1994) published PGP’s code in a physical book titled PGP Source Code and Internals, so the technology could be discreetly distributed. As Zimmermann (1994, para. 3) observes in the foreword of the book, “cryptography is a surprisingly political technology.” By publishing the code, Zimmermann became the subject of an investigation by the US Customs Service examining whether he had violated the US Arms Export Control Act (1976), although the case was dropped in 1996 without explanation (Lauzon, 1998, p. 1327). The release of public-key encryption technology thus empowered ordinary users to protect their privacy.
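The asymmetric principle popularised by PGP can be sketched as follows, here using the third-party Python ‘cryptography’ package rather than PGP itself. Anyone holding the published public key may encrypt, but only the private-key holder can decrypt; PGP’s actual protocol additionally layers symmetric session keys, signatures, and key management, all of which are omitted.

```python
# A minimal sketch of public-key (asymmetric) encryption: encrypt with the
# public key, decrypt with the private key. This illustrates the principle
# behind PGP but is not PGP's actual protocol.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()  # this half may be published openly

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

ciphertext = public_key.encrypt(b"confidential source material", oaep)
plaintext = private_key.decrypt(ciphertext, oaep)
assert plaintext == b"confidential source material"
```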


The popularity and development of such privacy-enhancing technologies have increased significantly since the ‘crypto-war’ of the 1990s. Tracking the trends in their development during the decade from 1997, Goldberg (2007, p. 11) observes how the then-recently developed Tor network was exponentially growing in popularity. Similarly, in response to growing awareness about mass surveillance, the CryptoParty movement was established in 2012 to educate the public about cryptographic software. The decentralised movement promotes “crypto parties” where experts educate citizens about encryption and digital anonymity. In this vein, they are a type of information security workshop (e.g. Albrechtsen & Hovden, 2010). Data provided by the Tor Project (2018) highlights how the number of publicly52 connecting users increased six-fold after the Snowden disclosures in June 2013. Subsequent studies suggest some of this increase was attributable to a Ukrainian botnet connecting to the network (Gehl, 2016, p. 1223); however, there has been a sustained two-fold increase (amounting to at least two million daily users) on the network (Tor Project, 2018). Evidently, such privacy-enhancing technologies are becoming increasingly popular.

There is some comparative data that provides insights into why people decide to install and use privacy-enhancing technologies. External studies of network patterns suggest anonymising technologies, of which Tor is the most common, are popular in Europe (Li, Erdin, Gunes, Bebis, & Shipley, 2013, pp. 1272-1273). Another recent study observed a ‘U-shaped’ relationship between measures of political repression and numbers of per capita Tor users53 within a country (Jardine, 2018). For example, Canada and Uzbekistan both have about 150 Tor users per 100,000 population despite placing on opposite ends of measures of political repression, while a middle-ranged country such as Guatemala has about a quarter as many users (Jardine, 2018, pp. 448-449). The pattern was interpreted as the result of independent effects of normative and pragmatic motivations of privacy protection. However, despite a general increase in their uptake, there remain design limitations preventing citizens or activists from embracing privacy-enhancing technologies. Indeed, there are concerns that “few anonymity technologies are available for public use that offer the ability for full online anonymity” and those currently available “are difficult for the average computer user to operate” (Winkler & Zeadally, 2015, p. 436). That is, privacy campaigns struggle, in part, due to the inaccessibility of privacy-enhancing technologies to non-experts (Dencik, Hintz, & Cable, 2016, p. 10). Overall, these quantitative and qualitative measures demonstrate how the popularity of privacy-enhancing technologies varies according to political environment and accessibility.

52 Not all connections to Tor are able to be counted. Users connecting via a bridge, or a non-public relay, are not captured by Tor’s metrics. See: https://metrics.torproject.org/glossary.html#bridge
53 Jardine’s (2016) study measures bridge users directly, providing a more accurate measure of overall network popularity across regions.
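The U-shaped pattern reported by Jardine (2018) can be illustrated by regressing usage on a repression index and its square; a U-shape appears as a positive coefficient on the squared term. The sketch below uses simulated data and a hypothetical 0-10 repression index, not Jardine’s actual dataset.

```python
# Illustrative sketch of testing for a U-shaped relationship between
# political repression and per capita Tor use. All data are simulated
# for exposition; the repression index is a hypothetical 0-10 scale.
import numpy as np

rng = np.random.default_rng(1)
repression = rng.uniform(0, 10, 120)
tor_users = 150 - 50 * repression + 5 * repression**2 + rng.normal(0, 10, 120)

# polyfit returns coefficients from the highest power down.
b2, b1, b0 = np.polyfit(repression, tor_users, deg=2)
print(f"quadratic term: {b2:.2f}")                  # positive => U-shape
print(f"trough of the curve at repression = {-b1 / (2 * b2):.2f}")
```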

Attempts by governments to expand surveillance powers have prompted the development of new privacy-enhancing technologies. For example, cryptocurrencies enable users to circumvent centralised financial systems via the use of a distributed network technology known as blockchain (Karlstrøm, 2014, p. 31), which is most notably the foundation of Satoshi Nakamoto’s (2008) Bitcoin.54 Such cryptocurrencies are primarily used to purchase goods and services on crypto-markets such as the now-defunct Silk Road (Maras, 2014, p. 22). These platforms largely operate as tools for evading state authority. For example, a content analysis of cryptocurrency communities between 2011 and 2015 suggests users rationalise the sale of illicit drugs on the platforms as a form of freedom from interference (Munksgaard & Demant, 2016, p. 82). Similar research by Isak Ladegaard (2018) examined the effect of the closure of Silk Road on the popularity of similar crypto-markets. Specifically, the study examined whether the 2015 sentencing of Silk Road founder Ross Ulbricht to life in federal prison, for charges of trafficking narcotics and money laundering, caused any observable effects on the crypto-markets Evolution and Agora. The research observed how the total numbers of users on other platforms increased in response to Ulbricht’s sentencing, while qualitative analyses of forum posts highlight how crypto-market users specifically cited Silk Road when rationalising the need for privacy-enhancing technologies (Ladegaard, 2018, pp. 426-427). Although these communities blur the boundary between political and criminal activity, their actions are another example of the symbiosis between surveillance and privacy protection. In this sense, they share similarities with forms of organised advocacy and everyday strategies of resistance that protect privacy due to concerns about the abuses of power enabled by government surveillance.

54 Satoshi Nakamoto is a pseudonym whose true identity (or identities) remains unknown.
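The hash-chaining idea underlying blockchain technologies such as Bitcoin can be sketched briefly: each block commits to the hash of its predecessor, so tampering with any historical record invalidates every subsequent block. The minimal sketch below is for exposition and omits the distributed consensus mechanisms (such as proof-of-work) that real cryptocurrencies rely upon.

```python
# A minimal sketch of the hash-chaining structure behind blockchain
# ledgers: each block records the previous block's hash, so altering any
# historical entry breaks the chain. Consensus mechanisms are omitted.
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents deterministically with SHA-256."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, data: str) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "data": data})

chain: list = []
add_block(chain, "alice pays bob 1 coin")
add_block(chain, "bob pays carol 1 coin")

# Tampering with the first block breaks the link recorded in the second.
chain[0]["data"] = "alice pays mallory 1 coin"
assert block_hash(chain[0]) != chain[1]["prev_hash"]
```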

A less morally ambiguous example of privacy protection through technology may be found in journalism, where, since the Snowden disclosures in June 2013, additional precautions have been adopted to protect the confidentiality of sources. This is particularly the case where sources disclose information about national security matters. Based on interviews with 12 senior journalists across the member states of the

Five Eyes intelligence community, Paul Lashmar (2016, p. 679) argues the Snowden disclosures have had a lasting impact on the quality of relationships between journalists and confidential sources. Similar research involving 51 interviews with journalists and media experts, including staff at The Guardian who were involved in the Snowden disclosures, suggests staff have had to take a “cryptography crash course” to reassure sources they can protect their identities, including via the use of burner phones, removing batteries from phones, using Signal, Proton Mail, PGP, and Tor to encrypt communications, and eschewing digital communications for sensitive matters entirely (Mill & Sarikakis, 2016, p. 7). Interestingly, the researchers also observed how journalists feel such strategies of privacy protection are a direct result of the additional attention they received from law enforcement agencies who are suspicious of media as enabling whistle-blowing (Mill & Sarikakis, 2016, p. 7). This clearly highlights how the development and use of privacy-enhancing technologies and the expansion of surveillance powers are connected.

The cat-and-mouse game between privacy protection and surveillance powers is ongoing. Indeed, law enforcement agencies continue to be suspicious of the potential for privacy-enhancing technologies to enable the evasion of criminal investigations.

For example, Elizabeth Joh (2013, pp. 1012-1013) argues law enforcement officers do not differentiate between the intentions of criminal evaders and privacy protesters. As a result, strategies of privacy protection, such as using burner phones, Faraday cages,55 or privacy-enhancing technologies, are regarded as suspicious by law enforcement agencies who rely on surveillance during criminal investigations (Joh, 2013, pp. 1021-1022). Additionally, there are ongoing efforts to frustrate the effectiveness of privacy-enhancing technologies by improving techniques for tracking the movement of data (Erdin, Zachor, & Gunes, 2015). This includes installing malware within applications to enable end-point identification, using timing analysis attacks to narrow the scope of potential locations of a device, fingerprinting attacks to compare the latent ‘signatures’ within encrypted communications from a device, and congestion attacks to pinpoint the location of a device through trial-and-error by ‘clogging’ different relay points and observing the effects on potential entry and exit points (Erdin et al., 2015, pp. 2300-2309). Thus, while privacy-enhancing technologies have enabled people to protect privacy rights, they are part of a struggle about the legitimacy of surveillance.

55 A Faraday cage is a box that prevents electrostatic and electromagnetic signals from entry or exit. Therefore, it can be used to block all telecommunication signals.

2.3.5. COUNTER-SURVEILLANCE

A final collection of research has documented strategies of citizen counter-surveillance. This sub-section examines this research, analysing how privacy rights are protected by reversing the gaze of surveillance. Such acts of counter-surveillance are “modes of monitoring that are reversed onto the original source of observation” (Welch, 2011, p. 302). For example, the phenomenon of “crowdsourced counter-surveillance” has been observed where Facebook users share the real-time locations of Random Breathalyser Testing sites across Australia (Wood, 2016, pp. 1, 5). In a similar study, Laura Wilson (2011, pp. 192-193) examines how drivers use phone applications to share the locations of Random Roadside Drug Tests (RRDTs) and adapt their routes to avoid detection. The interviewees expressed beliefs that RRDTs were unfair and beyond the legitimate authority of law enforcement (Wilson, 2011, p. 192). These practical strategies of counter-surveillance, as enabling the evasion of law enforcement, contrast with the politically performative character of sousveillance

(Welch, 2011, p. 303). The concept of sousveillance describes the use of “wearable computing devices for data collection in surveillance environments” (Mann, Nolan, & Wellman, 2003, p. 331). That is, the researchers developed a wearable video camera to be worn around the neck, which, despite its impractical size, would simultaneously stream to a wearable video screen (Mann et al., 2003, p. 336). The idea was that by wearing the device, “the experience of surveillance is reflected back to the surveillers” (Mann et al., 2003, p. 345). Overall, such methods of counter-surveillance are prompted by a belief in the abuse of surveillance powers.

There is a collection of research examining methods of counter-surveillance as political responses to surveillance powers. As Torin Monahan (2006, p. 527) argues,

“surveillance and counter-surveillance appear to be engaged in a complicated dance.”

As an example, he argues the proliferation of CCTV surveillance technology has prompted citizens to document camera locations online, as well as efforts to collaboratively immobilise cameras via obscuring the lenses or destroying the wiring

(Monahan, 2006, pp. 520-521). Similarly, the practice of ‘cop watching’, where citizens monitor the actions of law enforcement officers, evolved as people gained access to video camera and digital communications technology (Huey, Walby, &

Doyle, 2006, p. 154). For example, based upon interviews with 12 participants across a two-year observational study of ‘cop watchers’ in the United States, Mary Bock

(2016, pp. 14-15) argues the practice is a performance of citizens’ rights, rather than an instrumental strategy for documenting police misconduct. Counter-surveillance also involves the on-the-ground work of ‘street’ legal teams who coordinate protestor actions, document associated policing strategies using counter-surveillance technologies, and negotiate or contest legal claims made by police officers and commanders in real-time (Starr & Fernandez, 2009, p. 52). Finally, as wearable or handheld computing technologies improve, such as Google Glass and mobile phones, these practices may be broadcast live via internet streaming services (Mann &

Ferenbok, 2013, pp. 27-28). Overall, counter-surveillance is an increasingly viable strategy of drawing attention to perceived abuses of power by documenting the activities of law enforcement.


The use of counter-surveillance by privacy and anti-surveillance activists prompts countermoves by law enforcement agencies. For example, mundane actions such as taking photographs have become politicised where they involve capturing images associated with public safety and national security, such as images of bollards outside government buildings (Simon, 2012, p. 167). Additionally, the performance of counter-surveillance renders practitioners “exceptionally visible to police” and vulnerable to countermoves such as “physical force, the confiscation of equipment or both” (Wilson & Serisier, 2010, pp. 166-172). A high-profile example of this dynamic is observable in the 1991 filming of the police beating of Rodney King, and the riots that followed in Los Angeles in 1992, which attests to the “problematic, if not dialectical, relationship between surveillance and counter-surveillance practitioners” (Monahan,

2006, p. 528). The use of the internet to mobilise political campaigns also attracts greater scrutiny. Indeed, the permanence of material posted online can expose practitioners of counter-surveillance to increased scrutiny by law enforcement agencies who monitor users of social networking platforms (Schaefer & Steinmetz,

2014, p. 513). Finally, responses to whistle-blowing demonstrate the readiness with which states criminalise anti-statist political activity (Bessant, 2015, pp. 331-333), for example via the 35-year maximum security prison sentence given to Chelsea

Manning for leaking, among other documents, the Collateral Murder video.56 Overall, strategies of counter-surveillance, which seek to protect privacy by redirecting the surveillance gaze, may also render practitioners vulnerable to increased surveillance.

Ultimately, this section has established that there is a symbiotic relationship between the surveillance solution and strategies of privacy protection. Indeed, this

symbiosis is observable within a documented “arms race” between the subjects of surveillance systems (Marx, 2009, p. 299). For example, such a dynamic is found within the domains of public interest litigation and organised political campaigns as strategies of challenging surveillance powers, particularly within the US and EU where rights to privacy and data protection are actionable. Within these jurisdictions, attempts to expand surveillance powers are complicated by an ongoing cat-and-mouse game between legislators and litigators. Similarly, there is social scientific research documenting behavioural and technological adaptations as responses to surveillance powers. This includes behavioural adaptations that aim to withhold or distort information being collected, the use of privacy-enhancing technologies to obscure the contents of communications, and methods of resistance via acts of counter-surveillance. Such strategies of privacy protection have prompted reciprocal attempts by governments to criminalise acts of electronic civil disobedience, the distribution of privacy-enhancing technologies, and the counter-surveillance of state actors or infrastructure. Throughout, it has also been argued these strategies of privacy protection share common characteristics as political responses to perceived abuses of power (i.e. Bennett, 2008, pp. 22-23; Marx, 2016, p. 276). Overall, the section has established the symbiosis between surveillance powers and privacy protection and highlighted the moral and political character of this relationship.

56 It is noted that President Obama subsequently commuted Chelsea Manning’s sentence on 17 January 2017.


2.4. THE MORAL RIGHT TO PRIVACY

This section examines the ways in which surveillance ethics are contested through competing conceptualisations of the moral right to privacy. Indeed, privacy is a notoriously contested moral and political concept. The opening line of Judith

Thomson’s (1975, p. 295) influential paper The Right to Privacy reads, “the most striking thing about the right to privacy is that nobody seems to have any very clear idea what it is.” As a result, it often appears as if privacy advocates contradict one another, while “honest advocates of privacy protections are forced to admit that the concept of privacy is embarrassingly difficult to define” (Whitman, 2004, p. 1153).

For example, disagreements about the concept of privacy are reflected in how “French people won't talk about their salaries, but will take off their bikini tops” while

“Americans comply with court discovery orders that open essentially all of their documents for inspection, but refuse to carry identity cards” (Whitman, 2004, p. 1160).

The confusion presents a strategic problem for privacy advocates, who have engaged in a rigorous debate about how to effectively mobilise resistance to surveillance powers (e.g. Bennett, 2011; Regan, 2011). Cognisant of these conceptual ambiguities, legal scholar Daniel Solove (2008, pp. 171-172) has remarked how ‘privacy’ actually reflects a “cluster of problems” that share resemblances with one another, rather than possessing essential characteristics.

The ways in which ‘privacy’ may be conceptualised are surveyed below.

Across seven sub-sections, the theories of privacy and their implications for surveillance ethics are unpacked. First, it is demonstrated how privacy is a philosophically contested concept with both inherent and instrumental value. Second, consequentialist theories of privacy are unpacked, highlighting the dominance of instrumental perspectives within the English-speaking world. Third, deontological theories of privacy are considered, highlighting how the inherent value of privacy as an aspect of human dignity underpins an alternative discourse influential in continental Europe. Fourth, the limits of liberal theories of privacy are unpacked, highlighting how a “collapse of the harm principle” (Harcourt, 1999, p. 139) problematises privacy as a right to non-interference from others. Fifth, the communitarian critique of liberal privacy rights is considered, highlighting an ongoing debate about whether privacy should be conceptualised as an individual or common interest. Sixth, civic republican privacy theories are examined, where it is observed how reconceptualising privacy as freedom from arbitrary surveillance power is increasingly accepted as an alternative framework. Seventh, the problems with these illiberal alternatives are also considered, highlighting how they similarly justify state interference through a process of moral responsibilisation (Garland, 1996, p. 452). Overall, the section highlights the contested character of privacy and how it is important for surveillance ethics.

2.4.1. THE CONTESTED PROPERTIES OF PRIVACY

This brief sub-section explores the fundamentally contested character of privacy as a philosophical concept. This is demonstrated by briefly considering deductive arguments developed by political philosophers. For example, James H.

Moor (1991, p. 81) proposes a thought experiment to demonstrate the inherent value of privacy, asking the reader to “consider someone who has his entire life under surveillance by others… [who] do not interfere with his life and he doesn’t know that the surveillance is taking place.” With all other things being equal, Moor (1991, p. 81) argues the mere fact of being under surveillance is morally distinct from a life not under surveillance. And the fact people make this distinction suggests privacy may

have inherent value independent of its instrumental effects on individuals. Conversely, a thought experiment developed by Jesper Ryberg (2007) questions whether this inherent value has moral force in the absence of instrumental effects. The experiment describes a hypothetical situation where, every day, Mrs Aremac watches people in the square below her third-storey window (Ryberg, 2007, pp. 129-130). While Mrs

Aremac’s behaviour might be characterised as ‘nosey’, her surveillance of public space is mostly unproblematic or even honourable where it contributes to crime prevention.

Again, all else being equal, Ryberg (2007, p. 143) argues Mrs Aremac could be replaced by a CCTV camera without altering the moral status of the surveillance occurring.57 Rather, for there to be a distinction, there would need to be a change in the instrumental effects experienced by the townspeople. Overall, it is clear there are disagreements about the inherent and instrumental value of a moral right to privacy, which directly influence judgements about the legitimacy of surveillance powers.

2.4.2. CONSEQUENTIALIST THEORIES OF PRIVACY

This sub-section examines the consequentialist theories of privacy as an instrumental good, which is the dominant perspective within the contemporary

English-speaking world. This conceptualisation understands privacy as a form of freedom from interference by others, delineated by the application of the harm principle (Frey, 2000, p. 47). Originating in the thought of John Stuart Mill (1859, p. 13) in On Liberty, the harm principle specifies how “the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others.” That is, consequentialist privacy theory articulates a “presumption of privacy” that may only be interfered with where it is necessary to prevent harm (Frey, 2000, p. 47). Individuals are presumed to possess a property right to their personal information on the basis of sovereign self-ownership (Locke, 1689, p. 116). Overall, the idea is that people own themselves, and this gives rise to a property right in their personal information that may only be limited when necessary to prevent harm.

57 Ryberg (2007, pp. 130-131) notes how the CCTV camera in question would have comparable abilities to the human eye and mind. Thus, it would scan and track objects, and not have a permanent ‘memory’ functionality.

The consequentialist theories of privacy have dominated contemporary thinking about surveillance within the English-speaking world. For example, Judith Thomson (1975) argues the underlying logic of sovereign self-ownership justifies a ‘cluster’ of privacy rights. She asks the reader to suppose a couple are having a fight in their home and someone overhears their argument. Whether a right to privacy is violated depends on how others come to overhear the argument and what they do with the information once it is possessed (i.e. means and effects). For example, if the argument is overheard using a listening device, it violates an individual’s exclusive right to their property-in-themselves (Thomson, 1975, pp. 303-304). However, if a passer-by overhears the fight because a window is open, without trying to listen in, no rights can be said to be violated (Thomson, 1975, p. 296). It would be comparable to overhearing a radio broadcast in public space. Privacy rights also extend to control over personal property. If the argument began over the presence of a magazine within the house, the same logic applies to using visual recording equipment to examine the magazine without the owner’s knowledge and consent (Thomson, 1975, p. 303).

Privacy rights also logically attach to the use of information regardless of how it is acquired (Thomson, 1975, p. 309). For example, a passer-by who inadvertently overheard the argument should not spread personal information where it would cause psychological distress for the subject. This line-of-reasoning about privacy protection, as a form of liberty derived from sovereign self-ownership, is the version reflected within the political and legal tradition of contemporary liberal democracies.

Before being codified into constitutional or tort law in the United States, privacy rights were embedded within the US Postal Service Act (1792) to prevent undue monitoring of citizen communications (Desai, 2007, p. 594). As Anuj Desai (2007, p. 554) observes, “it was through the post office, not the Constitution or the Bill of Rights, that early Americans first established that [privacy] principle.” Specifically, the privacy provisions within the US Postal Service were developed as a precaution against uses of the Postal Service as an intelligence gathering mechanism in service of the British monarch and parliament (Desai, 2007, p. 594). Within the United Kingdom, the Post Office was used to ‘intercept’ political communications, including letters sent and received by foreign diplomats, the leader of the opposition, and the general population (Desai, 2007, p. 560). This idea of ‘privacy’ eventually spread to American tort law after Samuel Warren and Louis Brandeis (1890) published The Right to Privacy in the Harvard Law Review. They articulated the basis for a tort of privacy as “the right to be let alone”58 based on how sovereign self-ownership and proprietary rights imply a right to exclude access to oneself (Warren & Brandeis, 1890, p. 205). Writing seventy years later, William Prosser (1960, pp. 390-401) argued The Right to Privacy reshaped tort law, while reformulating the right to privacy as dependent upon four distinct harms: intrusion into private life, accessing private facts, publicising information in a false light, and appropriation of an individual’s identity. Overall, the right to privacy embedded within American jurisprudence is based upon observable harms to both the individual’s property and person.

58 Although they popularised the phrase, Warren and Brandeis (1890) are often erroneously attributed as the source of the idea of the right ‘to be let alone’. In their article, they specifically attribute the right to Justice Thomas Cooley of the Michigan Supreme Court (Warren & Brandeis, 1890, p. 195).

Based upon this presumption of sovereign self-ownership, contemporary analytic philosophy embraced the moral device of ‘liberty’ as a means for articulating consequentialist theories of privacy. For example, in Principles of Liberty and the Right to Privacy, Robert Hallborg Jr. (1986, p. 217) argues privacy is an “unoppressive liberty interest” on the basis that recognising it within the ‘private sphere’ of social life enables only minor harms and produces significant benefits for individuals. Similarly, Boudewijn de Bruin (2010, p. 511) argues there are practical reasons for conceptualising privacy as a liberal right. He asks the reader to suppose a circumstance where an officer in the United States Navy discloses their sexuality to the ship chaplain, who then informs a commanding officer, and which directly leads to the officer being coerced to resign.59 Such examples demonstrate a clear link between interference with information privacy (an improper disclosure) and an observable harm experienced by an individual (the resignation). However, built into this discourse of liberty, and the harm principle, is a recognition that where privacy rights conflict with other liberties derived from sovereign self-ownership, it becomes necessary to ‘compare’ or ‘balance’ competing rights (Waldron, 2003, p. 192). For example, the right to privacy forms the basis of a woman’s right to access abortion in the US under the precedent set in Roe v Wade,60 yet within a liberal consequentialist paradigm the competing ‘interests’ of a foetus are still morally relevant61 (Epstein, 2000, p. 4). It is in the unavoidable ‘balancing’ or ‘comparing’ of harms where the consequentialist theories of privacy encounter conceptual difficulties.

59 Note that this thought experiment relies upon the now revoked Don’t Ask, Don’t Tell (DADT) policy governing LGBT people serving within the United States Armed Forces, as established by the Clinton Administration and subsequently revoked under President Obama. See Yerke and Mitchell (2013) for a complete discussion of the policy, its consequences, and revocation.

60 Roe v. Wade, 410 US 113 (1973) (USA).

61 The point is not that the foetus has comparable rights, but rather that liberalism cannot avoid the comparison of competing rights. See Judith Thomson’s (1971) A Defense of Abortion for a complete analysis of the moral dilemma.

The necessity of ‘balancing’ and ‘trading off’ privacy with competing interests has been examined within research highlighting how political debates descend into competing claims about whether privacy-enhancing technologies enable or prevent harm. For example, in an anthropological analysis of ‘hacker’ communities, Steinmetz (2015, p. 125) observed how the idea of self-ownership promotes ‘craftyness’ in the development of free and open-source software (F/OSS) and privacy-enhancing technologies. Thus, many developers of privacy-enhancing technologies position them as beneficial tools, although their instrumental effects are contested. For example, another ethnographic analysis of online communities who contribute to the development of Peer-to-Peer (P2P) software – studied between 2007 and 2011 – suggests an impetus to “evade the state by developing better code” leads to the proliferation of digital piracy (Beyer & McKelvey, 2015, p. 896). Such “technologies of dissent” also enable whistle-blowers to speak ‘truth to power’ through web-based platforms such as WikiLeaks (Curran & Gibson, 2013, p. 294), yet depending on who controls these platforms, they also enable anonymous information warfare and election meddling (Shad, 2018, p. 41). Thus, it is difficult to decouple the morally beneficial and problematic consequences of privacy-enhancing technologies, leading to circumstances where they “can be used or abused” (Moore & Rid, 2016, p. 26). Overall, for consequentialist theories of privacy to remain internally consistent, their advocates must contest any privacy-enabled harms and distribute moral culpability accordingly.

This contested character of privacy-enabled harms has also been studied within crypto-anarchist and hacker communities. For example, in a 1996 email to the Cypherpunk mailing list, Timothy C. May (quoted in Moore & Rid, 2016, pp. 24-25) used the phrase “Crypto = Guns” to draw a moral equivalency between a right to cryptographic software and the Second Amendment to the United States Constitution.62 This highlights how the harmful effects of privacy-enhancing technologies are politically neutralised by individualising responsibility. Indeed, social scientists have observed this process among adherents of the ‘hacker ethic’, who articulate commitments to privacy-enhancing technologies, demonstrate engrained distrust for authority, and privilege technical competence (Nissenbaum, 2004, p. 197). For example, in an analysis of the moral genres of hacking, E. Gabriella Coleman and Alex Golub (2008, p. 257) argue “hackers discuss freedom and liberty constantly” and, through their development and promotion of F/OSS, proselytise the virtues of meritocracy and individual responsibility. Similarly, a content analysis of popular hacker magazine 2600: The Hacker Quarterly – between 2002 and 2012 – observes how ‘individual responsibility’ is the central rhetorical device within the ‘hacker ethic’ used to justify civil disobedience and opposition to state authority, regardless of the intentions of individual ‘black hat’ hackers63 (Steinmetz & Gerber, 2015, pp. 37-42). Consequently, responsibility for privacy protection is individualised, with hacker communities regularly blaming victims for ‘allowing’ data breaches through their incompetence (Steinmetz & Gerber, 2015, p. 37). Overall, although the consequentialist theories of privacy are the dominant framework within liberal democracies, their internal logics require the neutralisation of associated privacy-enabled harms.

62 The Second Amendment to the United States Constitution protects the right to keep and bear arms.

63 A “black hat” hacker is differentiated from a “white hat” hacker by their intentions and actions.

2.4.3. DEONTOLOGICAL THEORIES OF PRIVACY

This sub-section examines a second broad liberal tradition – deontological perspectives of privacy rights. Such theories focus on the inherent value of privacy and predominate within the continental European traditions of privacy and data protection. They are therefore based on a more abstract philosophical foundation. Indeed, in describing the harms produced by surveillance powers, Daniel Solove (2007, p. 768) has argued that “most privacy problems lack dead bodies.” His point is that privacy harms are often abstract and unpersuasive in comparison with competing and more visceral interests. This is because the underlying interest in privacy, the reason why an individual ought to have control over personal information, is based upon the idea of human dignity. This continental tradition of human dignity is derived from Immanuel Kant’s (1785, p. 33) concept of ‘Würde’ (literally, ‘worth’) within Groundwork of the Metaphysics of Morals, which is central to his explanation of the categorical imperative. For Kant (1785, p. 33), human dignity has no “market price” and is why humans should be considered ends-in-themselves rather than means-to-ends.64 This concept of human dignity has had a profound impact on the historical development of, and justifications for, human rights (Habermas, 2010, p. 464; Schroeder, 2012, p. 324), including the moral right to privacy.

64 For further explanation of Kant’s full views about human dignity and its role in justifying the categorical imperative, see Kant on Human Dignity by Oliver Sensen (2011).

Seventy-four years after The Right to Privacy (Warren & Brandeis, 1890), Edward J. Bloustein (1964) offered an alternative foundation for legal protections for privacy rights. He argued privacy was “an aspect of human dignity” rather than a right derived from the principle of sovereign self-ownership (Bloustein, 1964, p. 962). Specifically, he argued the harms identified in privacy laws reflect an antecedent moral violation of “the individual’s independence, dignity and integrity” which “defines man’s essence as a unique and self-determining being” (Bloustein, 1964, p. 971). This Kantian conceptualisation of human dignity is a recurring theme within deontological perspectives of privacy. For example, Joseph Kupfer (1987, p. 89) argues privacy is an important precondition for the development of an autonomous identity, while Mark Alfino and G. Randolph Mayes (2003, p. 8) argue privacy is essential for the exercise of practical reason. Similarly, a cross-cultural analysis of privacy norms suggests the idea of human dignity underpins much of the Western construction of the ‘private sphere’ surrounding ‘rational individuals’ (Capurro, 2005, p. 38). Thus, the European notion of privacy is justified “to protect what we consider a fundamental of Western civilization, namely the conception of a stable, free and autonomous subjectivity” (Capurro, 2005, p. 40). The overarching point is that privacy, within the continental tradition, is derived from the concept of dignity of persons conceived of as autonomous moral agents.

This conceptualisation of privacy has informed an alternative model of privacy protection policies within the European context. However, while human dignity provides a more ‘direct’ justification for privacy rights, affronts to human dignity are less tangible than the harms of interfering with property rights associated with consequentialist perspectives (Floridi, 2013, p. 308). For example, French privacy norms consider the accessing of credit histories by financial merchants as “intuitively distasteful” (Whitman, 2004, p. 1192). Indeed, whereas consequentialists might allow access to financial information to prevent associated harms, a deontologist is concerned with the inherent discomfort of such privacy violations. In practice, the European tradition of ‘data protection’ is based upon this latter conceptualisation. In Foundations of EU Data Protection Law, Orla Lynskey (2015, p. 7) argues the right to ‘data protection’ has come to dominate European political and legal discourse in the place of privacy rights. Rather than being facsimiles, data protection and privacy are theoretically distinct insofar as “data protection grants individuals more control over more personal data than privacy” (Lynskey, 2015, p. 11). Thus, the notion of control is integral to ensuring human dignity is preserved (Floridi, 2016, p. 311). In this vein, it is considered undignified to allow a random merchant to access a credit history without exerting control over the process.

There is a long tradition within sociology that captures the notion of dignity as a conceptual basis for a moral right to privacy. Within his foundational book The Presentation of Self in Everyday Life, Erving Goffman (1959, p. 155) describes the process of impression management: “when an individual appears before others, he wittingly and unwittingly projects a definition of the situation, of which a conception of himself is an important part.” According to this view, all social interaction and behaviour is dramaturgical, requiring a ‘front stage’ for performing social roles and a ‘backstage’ for the true self (Goffman, 1959, p. 113). Although Goffman may not have anticipated the ways social interaction has evolved due to the internet, the idea of impression management eloquently explains behavioural performances on social networks and online platforms (Bullingham & Vasconcelos, 2013; Miller, 1995). People strategically present a version of themselves in online and offline contexts. Importantly, where information about the ‘true self’ is revealed to be incongruent with a ‘social role’ being performed, this can cause significant embarrassment and disrupt a person’s sense of worth (Goffman, 1959, pp. 155-156). The process is exacerbated by digital communications technologies where online profiles are strategically curated by users (Smith, 2016, p. 125). The point is people care about their social image, and where that image is no longer under their direct control, they experience indignity and embarrassment.

It is therefore not surprising the deontological discourses of privacy have positioned the category of ‘control’ as the means for preserving human dignity. In Privacy and Freedom, Alan Westin (1967, p. 7) defines privacy as “the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated.” This type of control is important precisely because it preserves dignity by allowing people to manage their presentation of self, or public image. In a similar line-of-argument, Jeffery Johnson (1989, p. 157) argues “the notion of control is central to the concept of privacy” because “privacy is conceptually connected to the fact that human beings make evaluative judgements of their surroundings, including the appearance, behaviour, [and] activities of other human beings.” Alternatively expressed, privacy as dignity is the freedom from the social judgement of others. For example, Judith DeCew (2000, p. 213) argues “privacy is a shield protecting us from prejudice, pressure to conform, and the judgement of others.” Evidently, people place moral value on the process of controlling and managing their self-presentation.

While the notion of dignity is a useful rhetorical device for describing abstract value, its intangibility leads to practical difficulties operationalising the concept within privacy protection policies. A prominent feminist critic of rights-based conceptualisations of privacy, Anita Allen (2000, p. 873), has argued that reducing privacy to individual decision-making is an ineffective safeguard against social structures that incentivise the disclosure of personal information. Indeed, these discourses of privacy are conceptually blind to the potential for consensual privacy violations to still cause or enable harm (Lucas, 2014, p. 35). Practical attempts to embody ‘consent’ within privacy laws, such as via ‘notice-and-consent’ provisions under the European Union’s General Data Protection Regulation (2016/679), assume the subject has the technical competence necessary to understand the scope and consequences of their digital footprint (Nissenbaum, 2011, p. 36). This is a fundamental limitation of the perspective. For example, an analysis of 54 European Union documents about government surveillance programs reveals how ‘data protection’ consistently ascribes responsibility to individuals to control their personal information (Barnard-Wills, 2013, p. 178). Thus, as renowned surveillance scholar David Lyon (2010, p. 33) characterises the problem, “everyone is responsible for surveillance duty [and] everyone is also responsible for seeing to it that they are not surveilled against their will.” Yet, if individual control is not an appropriate standard for privacy protection policies, the appropriate standards must be independently codified into ‘privacy principles’ or ‘principles for data protection’ (Clarke, 2015; Wright & Raab, 2014). Such standards would require a form of political deliberation about when interference with privacy is justifiable (Parent, 1983, p. 278). Overall, this demonstrates how the concept of human dignity captures moral intuitions about abstract harms, and yet does not necessarily provide practical guidance for the development of substantive privacy protection policies.

2.4.4. THE LIMITS OF LIBERAL PRIVACY RIGHTS

Recent work within privacy and surveillance scholarship has questioned the hegemony of liberal theories of privacy, including both consequentialist and deontological perspectives. Accordingly, this sub-section surveys these issues, noting how the unavoidability of consequentialist reasoning, and the associated collapse of Mill’s (1859) harm principle, render the perspectives vulnerable to distortion by the logics of preventive justice. For example, within the first issue of Surveillance & Society, Felix Stalder (2002, p. 120) provocatively argued that “privacy is not the antidote to surveillance.” His argument was that reducing privacy to an individual right leads to its erosion, because people are “making dozens of complex decisions each day about which data collection to consent to and which to refuse” (Stalder, 2002, p. 122). This leads to cognitive overload, poor decision-making, and a failure to appreciate how humans exist within social networks rather than as isolated individuals. This claim is not an isolated example. In a similarly critical article, Sami Coll (2014, pp. 1250, 1261) argues that operationalising privacy protection using such a “principle of self-determination” renders privacy the “ally” or “partner-in-crime” of surveillance powers. Finally, in another academic post-mortem of the struggle to prevent the expansion of surveillance powers, Laura Huey (2009, pp. 705-708) describes three fundamental problems hindering effective resistance: the intangibility of privacy harms; a lack of clear purpose in the movement; and the scope of existing surveillance programs. These arguments contend that privacy protection, and the legitimacy of surveillance powers, should not be reduced to an individual responsibility.

The liberal discourses of privacy are susceptible to co-option by the political logics of preventive justice underpinning invasive state intervention policies. Indeed, critics argue the surveillance solution is built into liberal theories of privacy. It has been argued there exist “permanent crises of security management” within liberal democracies that perpetually justify invasive national security programs (Neocleous, 2007, p. 144). Similar objections have been raised about the logics justifying invasive criminal justice practices, insofar as the state has a history of pre-emptive intervention justified in the language of liberalism (Finnane & Donkin, 2013, p. 3). Examples include the institutionalisation of ‘dangerous lunatics’ within mental health institutions, the confinement of ‘enemies of the state’ to internment, and the sentencing of ‘dangerous and habitual criminals’ to indefinite detention in prisons (Finnane & Donkin, 2013, pp. 5-10) – all in the name of public safety and security. Another example is found in the Northern Territory Emergency Response, where the Australian Government intervened in Aboriginal communities on the basis of preventing child sexual abuse (Strakosch, 2012, p. 18). These social policies are readily justified within democracies using the political logics of preventive justice.

Such political logics are also found within the rhetoric justifying invasive surveillance programs. Following the terrorist attacks in New York on 11 September 2001, a mass metadata retention regime was established within the United States under the USA PATRIOT Act (2001). The legislation was justified with a similarly structured discourse: to preserve liberty (including privacy) the state must have the power to interfere and prevent threats to national security (Simone, 2009, p. 8). That is, to prevent non-consensual interference among individuals, the state must have the authority for pre-emptive interference. Invoking an Orwellian metaphor, Maria Simone (2009, p. 5) argues this enables liberal governments to argue that “security equals liberty.” A similar line-of-reasoning is used to justify the secrecy of law enforcement operations, by introducing an argumentative premise where secrecy is a prerequisite for effective security programs (Masco, 2010, p. 448). State secrets are thereby equated with freedom. So, while the liberal discourses of privacy capture the inherent value of the concept (i.e. as self-ownership and human dignity), they are intrinsically vulnerable to the logics of preventive justice in service of illiberal ends.

These discursive distortions have prompted privacy scholars to critique the conceptual relationship between privacy and security. Critics argue that privacy and security have a “paradoxical, yet constitutive” relationship, where privacy can be described as either necessary for, or as undermining, national security interests (Möllers & Hälterlein, 2013, p. 58). For example, legislation compelling access to encrypted communications in pursuit of protecting national security can be framed as undermining cybersecurity (Zajko, 2018, pp. 44-45). In contrast, the collection and retention of biometric data can be used to enhance information privacy by strengthening requirements for accessing data (Jain, Ross, & Pankanti, 2006, p. 125). Additionally, although privacy-enhancing technologies can be used to evade criminal investigations (Weimann, 2016, pp. 198-202), the same technologies protect the information security of potential victims of cybercrime (Hildebrandt, 2013, p. 360). These constitutive constructions of privacy and security complicate debates about privacy protection. This reflects what Bernard Harcourt (1999, p. 109) refers to as “the collapse of the harm principle” within contemporary liberal democracies, where the logics and language of liberalism are used to justify state intervention. It enables governments to incorporate dissenting views about ‘harm’ into public debates about surveillance powers without the need to actually deliberate (Hegemann & Kahl, 2017, p. 48; Kienscherf, 2016, pp. 1179-1181), while also compelling those concerned about privacy rights to pursue extra-legal strategies of protection “to have at least a modicum of privacy” (Bodó, 2014, p. 9). Evidently, despite their usefulness in describing privacy as a concept, liberal theories of privacy have practical and political limitations.

2.4.5. THE COMMUNITARIAN CRITIQUE OF PRIVACY

These limitations have prompted scholars to articulate competing conceptualisations of the moral right to privacy. This sub-section surveys the communitarian critiques of liberal privacy, highlighting an ongoing debate about whether such a value ought to be defined as an individual right or a common good. Indeed, communitarian theorists “abandon the notion of an individual right to privacy” and “emphasise that the human rights justification supports a more social orientation for privacy” (Regan, 2011, p. 498). Thus, the communitarian ideal of privacy is about the common good rather than individual rights. At its core, the communitarian discourse understands privacy as a relational value. For example, as Beate Roessler and Dorota Mokrosinska (2013, p. 785) argue, a society without privacy would be one that lacks “meaningful social relations”, as intimacy and friendship require different levels of information disclosure. In his influential article Privacy, Charles Fried (1968, p. 475) argues this is the reason why people value privacy: they feel indignant when relationships characterised by “respect, love, friendship, and trust” are interfered with. In a philosophical sense, this demonstrates how the moral right to privacy is the product of the “social formulation of the good” rather than the amalgamation of individual interests (Etzioni, 2000, pp. 902, 897). The communitarian discourse thus articulates privacy as a ‘good’ insofar as it is important for human dignity, freedom, psychological well-being, social relationships, and political participation (Westin, 2003, pp. 432-433).65 Thus, the communitarian discourses raise and address some important criticisms of the liberal theories of privacy rights.

65 Many communitarians agree with a view that privacy norms ought to be based on human dignity and freedom. However, they are not persuaded these are exhaustive justifications for privacy rights.

The communitarian discourses of privacy have been applied within critiques of government surveillance powers. For example, Priscilla Regan (2002, p. 394) argues all internet users have a shared interest in online privacy protection because it is a “common pool resource” where every individual’s privacy is linked66 (Regan, 2002, p. 399). Similarly, drawing upon Michael Walzer’s (1983) communitarian theory of justice, Helen Nissenbaum (2011, p. 37) argues contextually sensitive “informational norms” guide moral judgements about information disclosure. Thus, while it might be acceptable for a physician to monitor medical information, it would be inappropriate for a co-worker to engage in similar surveillance. Finally, surveillance scholar Gary T. Marx (2016, p. 277) has argued privacy norms “speak to an ethics embedded in folk culture” that determine when and where surveillance is acceptable. This is reflected in how the users of digital communications technologies intuitively practice privacy protection. A study examining young Canadian social media users found that they consider privacy contextually, depending upon the nature of the information and their relationships with the individuals accessing it (Steeves & Regan, 2014, p. 304). The study also suggests that privacy is performative, as users strategically cultivate the information they choose to disclose (Steeves & Regan, 2014, p. 307; see also Moore, 2003, p. 222). Evidently, the communitarian discourse captures the social elements of privacy as a moral and political concept.

66 By common pool resource, Regan (2002, p. 399) means how “technology and market forces are making it hard for any one person to have privacy without all persons having a similar minimum level of privacy.”

Another strength of the communitarian perspective is how it renders visible the contingent character of the moral right to privacy. Specifically, the normative status of privacy cannot be abstracted from the social practices it enables. In Why Privacy Isn’t Everything, Anita Allen (2003) argues notionally ‘private’ behaviours are still regulated by informal social controls, with the social categories of class, race, and gender structuring differential levels of accountability. As a result, privacy protections can reinforce the privilege of powerful groups, because “the collectors of, and traders in, personal information are advantaged by the current market” (Regan, 2002, p. 397). For example, the ‘private’ sphere protects male perpetrators of domestic violence in the home from public accountability (Allen, 2003, p. 42). Similarly, privacy-enhancing technologies construct an analogous ‘private’ sphere within cyberspace, enabling crimes such as harassment and obstruction of justice (Etzioni, 2015, pp. 137-138). The point is that the benefits of the moral right to privacy must be weighed against the benefits of holding perpetrators accountable for ‘private’ behaviours (Allen, 2005, p. 409; Etzioni, 2015, pp. 124-125). This is not a straightforward or objective process, as the victims of domestic violence also retain an interest in privacy, despite the fact that the concept may also be used to limit accountability (Bailey, 2012, p. 1780). Finally, researchers have observed how social categories of age, class, gender, and race influence patterns of technological literacy (Gonzales, 2016; Jackson et al., 2008), with a corresponding ‘privacy divide’ emerging that reproduces existing social inequalities (Woo, 2006, pp. 956-957). As such, people who are born into privilege are more likely to have the knowledge and resources to protect their privacy, or alternatively interpreted, to avoid accountability for their actions. Overall, the communitarian perspective clarifies this contextual character of the moral right to privacy.

2.4.6. THE CIVIC REPUBLICAN THEORY OF PRIVACY

A variation of this communitarian critique can be found within the civic republican perspective of privacy rights. This sub-section surveys this perspective as a potential alternative to the dominance of liberal theories of privacy. Indeed, civic republican theorists argue privacy should be understood as freedom from arbitrary surveillance powers, where rules about reasonable interferences with personal information have been derived via democratic decision-making processes (Newell, 2014a, p. 520). For example, in the article The Massive Metadata Machine, Bryce Newell (2014a, p. 520) argues “a person living under a friendly despot is not in the same position – in terms of freedom – as the person living in a properly constituted constitutional democracy with limits on domination.” Specifically, Newell (2014a, p. 520) argues metadata retention programs are akin to a benevolent dictator, who retains the potential to dominate citizens in the absence of democratic checks on their power. The metaphor of the ‘friendly despot’ is a useful device for conceptualising the civic republican ideal of freedom as non-domination, as a ‘friendly despot’ may change their mind and arbitrarily interfere at any point in the future (Pettit, 2011, p. 714). As such, the means for institutionalising non-domination is through the practices of self-government and political deliberation about law. For example, privacy in the republican sense cannot exist where biometric surveillance is adopted as a policy solution without civic deliberation (Laas-Mikko & Sutrop, 2012, p. 378). This is because privacy is a multifaceted value and “without including people in the discussion, one value is chosen and set up to be higher than the others” (Laas-Mikko & Sutrop, 2012, p. 372). This conceptualisation of the moral right to privacy renders it dependent upon avenues for contesting surveillance powers, including freedom of information laws, robust public consultation, and actionable privacy litigation (Newell, 2014a, p. 521). Overall, the civic republican discourse of privacy as freedom from morally arbitrary forms of surveillance enables critiques of antecedent distributions of political power.

Scholars have argued the European Court of Justice has interpreted the right to data protection as a civic republican privacy right. For example, in the case of Digital Rights Ireland,67 the court struck down the European Data Retention Directive on the basis that “the loss of privacy leads to the acquisition of power to interfere on an arbitrary basis” (Roberts, 2015, p. 545). Indeed, as Andrew Roberts (2015, p. 543) remarks, the effect of metadata retention on human autonomy and dignity remains contestable, yet “we might instead understand the decision in Digital Rights as having been motivated by a concern about the arbitrary exercise of power.” Similarly, Bart van der Sloot (2018, pp. 539-542) has argued a right to privacy based upon a principle of non-domination has been embraced by European courts to capture the moral harms of metadata retention “in the absence of concrete individual harm.” Finally, within the American context, public interest litigation cases, such as Clapper v. Amnesty International,68 are attempts by civil society to exert a form of civic republican “reciprocal surveillance” and resist domination by the state (Newell, 2014b, p. 425). Evidently, the moral right to privacy is defined, in part, by the legitimacy of any corresponding surveillance powers.

67 See Section 2.3.1 Privacy Litigation for a discussion of this case.

68 Clapper v. Amnesty International, 568 U.S. 398 (2013) (USA).

This civic republican discourse articulates the moral right to privacy in terms of the political legitimacy of corresponding surveillance powers. In this sense, the discourse “can accommodate much of the (post)Foucauldian analyses” of power dynamics within surveillance studies (Hoye & Monaghan, 2018, pp. 343, 355). Specifically, the discourse provides a complementary normative lens through which the empirical post-Foucauldian analyses of power may be examined and critiqued. Indeed, civic republicanism provides a coherent alternative to the deontological and consequentialist paradigms that dominate normative theories of criminal justice broadly (Braithwaite & Pettit, 1990, p. 85). The theory renders visible the underlying problems with unequal power distributions within and across societies, including within the domain of privacy protection. For example, for civic republican privacy to be secured, there must be conditions of social justice that ensure citizens have equal access to the institutions of civic deliberation (Braithwaite, 1995, p. 277; Pettit, 2012, pp. 75, 239). Thus, while the civic republican perspective is idealistic, it provides insight into the importance of power for the moral right to privacy.

2.4.7. THE PROBLEMS WITH ILLIBERAL PRIVACY

The previous two sub-sections have examined the moral right to privacy as derived from communitarian and civic republican theory. This sub-section surveys the limitations of these illiberal alternatives, arguing they are vulnerable to the logics of moral responsibilisation (Garland, 1996, p. 452). For example, the philosopher Raymond Frey (2000, p. 48) argues “so long as there are competing conceptions of the good life held by others… it can remain unclear where operative control over what finally happens to information about one[self] lies.” His point is that even where it is acknowledged privacy depends upon conceptions of the common good, this does not resolve which conception of the ‘good’ should be institutionalised within privacy protection policies. There is an inherent danger that illiberal theories will be co-opted in service of authoritarian, or punitive, “law and order” practices (Hughes, 1996, pp. 17-18). Indeed, critical scholars argue “neoliberal communitarian crime politics” readily justifies invasive penal welfarism (van Houdt & Schinkel, 2013, p. 493). It is also the form of criminal justice championed by the Broken Windows perspective, which explicitly argues the need to “protect communities as well as individuals” (Kelling & Wilson, 1982, para. 52), and which politically legitimises zero-tolerance policing practices that disproportionately target disadvantaged communities (Howell, 2016, p. 1059). Thus, the results of communitarian discourses can be indistinguishable from those of the liberal discourses noted above.

The discourse of civic republicanism can similarly be co-opted to justify invasive surveillance powers. For example, in 2010 the United States Department of Homeland Security conducted the If You See Something, Say Something campaign, where Departmental Secretary Janet Napolitano appeared in sixty-second commercials aired in public areas (Reeves, 2012, p. 235). As noted by Joshua Reeves (2012, p. 245), the campaign “manipulates structures of social responsibility, subsuming them under loyalty to state objectives like community tranquillity and homeland security.” Indeed, the civic republican notion of ‘citizenship’ may be co-opted in service of surveillance as a collective moral responsibility. Similar to the logics of preventive justice, the responsibility to be a “vigilant citizen” operates constantly at a “pre-incident” stage (Larsson, 2017, p. 99). As a result, a “neo-republican mode of security governance” may be used to justify surveillance powers as instruments of informal social control, where citizens internalise obligations to obey the state without critical reflection (Petersen & Tjalve, 2013, pp. 2-3). This reflects what sociologist David Garland (1996, p. 452) describes as the logics of ‘responsibilisation’ within crime control – the tendency for states to indirectly regulate behaviour through the institutions of civil society. In granting the general public the moral responsibility to define the scope of privacy rights, there is an unavoidable danger of enabling a populist politics that justifies invasive criminal justice and national security policies (e.g. Hogg, 2013a, pp. 107-108). Overall, the liberal and illiberal discourses surveyed within this section provide insight into the competing constructions of privacy as a normative value, each with conceptual strengths and limitations.


Ultimately, the articulation of a consistent surveillance ethics is complicated by the contested nature of the moral right to privacy. This section has surveyed both liberal and illiberal frameworks for the moral right to privacy, including the perspectives of consequentialism, deontology, communitarianism, and civic republicanism. First, it was argued that ‘privacy’ is a fundamentally contested concept due to the tension between its instrumental effects and inherent values. These disagreements are observable in the tension between the respective liberal traditions of consequentialism and deontology. Additionally, it was argued that, due to the unavoidability of consequentialist reasoning and the resulting collapse of the harm principle (Harcourt, 1999, p. 139), liberal theories of privacy are susceptible to the logics of preventive justice. This suggests liberal privacy discourses may operate through the logics of preventive justice as an “ally of surveillance” (Coll, 2014, p. 1250; Stalder, 2002, p. 120). Second, it was argued that the illiberal theories of communitarianism and civic republicanism, which respectively conceptualise privacy as a common good and as freedom from arbitrary surveillance powers, provide possible alternatives to liberal perspectives. However, it was noted these perspectives are similarly susceptible to the logics of moral responsibilisation, where invasive surveillance powers are legitimised by constructing a duty for citizens to remain vigilant of security threats (van Houdt & Schinkel, 2013, p. 493; Petersen & Tjalve, 2013, pp. 2-3). Overall, the section has established how these characteristics of the moral right to privacy contribute to the observed symbiosis with surveillance powers.


2.5. THE GAP IN THE LITERATURE

The literature reviewed within this chapter highlights the complex relationship between strategies of privacy protection and surveillance powers. First, it has established how surveillance is a privileged policy solution based upon the logics of preventive justice, where categories of risk are used to politically justify state interference. Second, it has demonstrated how there is a symbiotic relationship between privacy protection and surveillance powers, where subjects are engaged in an ongoing political, legal, and technological “arms race” (see Marx, 2009, p. 299). Third, it has established how the moral right to privacy is contested, with liberal theories considered vulnerable to the logics of preventive justice and illiberal theories vulnerable to the logics of moral responsibilisation. Overall, it is argued the literature reviewed within this chapter demonstrates how ‘privacy protection’ occupies a contested moral space as a method of defending human rights and evading criminal investigations. As such, there is a gap in the literature concerning how contested definitions of ‘privacy protection’ influence debates about surveillance powers, with further research needed to understand how privacy advocates contest the moral equivalence at the core of the ‘going dark’ argument.


Chapter Three: Theoretical Framework and Research Methodology

3.1. CHAPTER INTRODUCTION

This thesis examines the contested politics of privacy protection within the context of Australia’s national debate about surveillance legislation, focusing on how privacy advocates contest the moral equivalence at the core of the ‘going dark’ argument. Specifically, the research examines the advocation of privacy protection as a form of political resistance to the Data Retention Act (2015) and Encryption Access Act (2018), and via privacy protection campaigns such as Citizens, Not Suspects (2014), Go Dark Against Data Retention (2015), and National Get a VPN Day (2017). To address the aim and research questions outlined above (Chapter One, Section 1.3), this chapter outlines the project’s theoretical framework and research methodology. The research draws upon Ernesto Laclau and Chantal Mouffe’s (1985) framework of political discourse analysis developed in Hegemony and Socialist Strategy: Towards a Radical Democratic Politics. After first summarising the theoretical elements of the framework, the practical features of the research design are unpacked across four sub-sections. This includes explanations about the collection, sampling, analysis, and trustworthiness of data analysed during the research. Overall, the chapter provides an overview of the theoretical framework and research methodology.

3.2. POLITICAL DISCOURSE ANALYSIS

The concept of discourse has multiple meanings within the social sciences. As a result, this section provides a summary of the specific theoretical framework used during the research process, before unpacking the practical elements of the research design. The research is based upon the framework of political discourse analysis developed by Ernesto Laclau and Chantal Mouffe (1985, p. 105) in Hegemony and Socialist Strategy, which conceptualises discourse as the “structured totality of articulating practice”. More recently, Laclau (2005, p. 68) has described discourse as “the primary terrain of the constitution of objectivity as such”. That is, discourse is constitutive of “the whole set of social regimes and practices” (Barnard-Wills, 2012, p. 66). This conceptualisation of discourse rejects any distinction between linguistic and behavioural practices, because it is through discourse that behaviour is ascribed meaning (Barnard-Wills, 2012, p. 107). Additionally, Laclau and Mouffe (1985, pp. 112-113) understand language as a “system of differences without positive terms”. In this sense, they adopt Ferdinand de Saussure’s (1916/1959, p. 117) model of linguistic signifiers as deriving meaning from discourses that describe their relational properties. Overall, political discourse analysis conceptualises ‘discourse’ as inherently relational, enabling meaning-making through the articulation of differences among categories.

Understood in this manner, there are several important concepts within political discourse analysis to unpack. First, discourses are considered the product of articulation – the practice of describing relations among subjects and objects in an attempt to fix meaning (Laclau & Mouffe, 1985, p. 105; Barnard-Wills, 2016, p. 70). However, these meanings can never be completely fixed. Second, it is via articulation that discourses ascribe meaning to signifiers – the signs or symbols that represent an underlying concept (Laclau & Mouffe, 1985, p. 113; Atkinson, 2019, pp. 27-28). Third, a floating signifier describes any symbol that lacks a commonly agreed meaning (Laclau & Mouffe, 1985, p. 113; Saussure, 1916/1959). For example, this explains how two individuals may use an identical signifier (e.g. ‘justice’) while meaning different things (e.g. ‘fairness’, ‘equality’, or ‘desert’). In this sense, discourses are engaged in ongoing struggles to fix the meaning of floating signifiers (Laclau & Mouffe, 1985, p. 134; Barnard-Wills, 2016, p. 71). Fourth, discourses ascribe meaning to signifiers by describing relational properties (e.g. Barnard-Wills, 2012, p. 81). That is, descriptions of subjects or objects are only intelligible through their relationships to one another. For example, the property of ‘redness’ (the colour) may only be coherently described by drawing comparisons with different colours, while the concept of ‘justice’ may similarly only be described within a context where ‘injustice’ or ‘crime’ can be described. Overall, the analytical construct of signification describes this process of ascribing meaning to linguistic signifiers.

The articulation of discourse also involves processes of subjectivation and identification, which discursively position subjects in relation to one another and in relation to objects (Laclau & Mouffe, 1985, pp. 114-116; Barnard-Wills, 2012, pp. 63, 74-75). These analytical constructs help explain how signifiers structure political beliefs and behaviours. Indeed, it is through the development of shared identities, constituted around objects and juxtaposed with the positions of other subjects, that collective action is possible (Laclau, 2005, pp. 77-93). Whereas the analytical construct of subjectivation describes how other subjects are positioned within discourse, the analytical construct of identification describes the development of shared psychological bonds with objects (e.g. shared commitments to ‘justice’). Importantly, this framework of political discourse analysis has been previously applied to analyse contemporary criminal justice policy debates (Quilter, 2015; Hogg, 2013a; 2013b). Additionally, based upon the existing literature examined above concerning the moral right to privacy (Chapter Two, Section 2.4), it may be reasonably inferred that privacy is a floating signifier lacking consensus about its underlying meaning (e.g. Thomson, 1975; Bennett, 2011; Regan, 2011). As such, the analytical constructs of signification, subjectivation, and identification can enable a discursive analysis of the politics of privacy protection, focusing on how privacy advocates contest the moral equivalence at the core of the ‘going dark’ argument.

3.2.1. SEMI-STRUCTURED INTERVIEWS

This sub-section examines the practical features of the research design involved in the data collection process. Political discourse analysis requires the collection of textual data. Thus, the research design involved the collection of textual data through semi-structured interviews with Australian privacy advocates involved in campaigns opposing the Data Retention Act (2015) and Encryption Access Act (2018), including those directly involved in the Citizens, Not Suspects (2014), Go Dark Against Data Retention (2015), and National Get a VPN Day (2017) campaigns. It is worth noting that these campaigns, as sources of secondary data, did not include sufficient textual information for analysis. As such, the research collected primary data from the advocates involved in the public consultation and campaigning processes. As the purpose of the interviews was to collect qualitative data from the perspective of privacy advocates who mobilised campaigns opposing surveillance legislation, the data is understood as originating as accounts articulated from their particular perspectives (De Fina, 2009, p. 254). The research interviews were semi-structured to allow for structured flexibility in the format, length, and contents of discussions. A full list of the interview questions can be found within Appendix A, although these were adapted as necessary for each individual participant. The number of questions could thereby be reduced where necessary due to time constraints, or increased as participants guided the direction of the discussion.

The semi-structured interview questions included in Appendix A were developed via a deductive process and structured to capture relevant data. This was the instrument used to guide the semi-structured interview process. Specifically, the instrument was developed to operationalise the research questions through both broad preliminary questions and follow-up ‘probing’ questions (Rabionet, 2011, p. 564). An initial version of the research instrument was piloted with a senior researcher familiar with interviewing activists and refined to avoid any confusion or misinterpretation. The questions were developed from deductive themes identified within the existing literature, and to be relevant to the socio-legal context for privacy protection as advocated within campaigns such as Go Dark Against Data Retention (2015) and National Get a VPN Day (2017). The interviews began with an introduction, explanation of the project, and details about confidentiality and consent requirements (Rabionet, 2011, p. 564). Subsequently, interviews followed a broad structure across several topics: the right to privacy, metadata retention laws, public debate and consultation, practices of privacy protection, the ‘going dark’ problem, and future solutions. These broadly operationalise discussions about the politics of privacy protection as identified within the literature and as specifically applied to the Australian socio-political context. The individual questions were specific enough to prompt participants to initially recall memories and construct stories or anecdotes about their experiences (Gemignani, 2014), while also being open-ended and phrased to encourage detail and elaboration (Turner, 2010, p. 758). Altogether, twenty-one (21) interviews were conducted during the period August 2017 to March 2018.


3.2.2. SAMPLING AND PARTICIPANTS

This sub-section describes the sampling process used within the research design. The research aims to understand how privacy advocates contest the moral equivalence drawn within the ‘going dark’ argument. As such, the research is interested in understanding the experiences of privacy advocates within the context of public debates about surveillance powers, such as those established in Australia under the Data Retention Act (2015) and Encryption Access Act (2018). Therefore, as guided by Oliver Robinson’s (2014, p. 26) criteria for qualitative interview-based research, a four-stage sampling process was adopted: define the relevant population; determine an appropriate sample size; decide on a sampling strategy; and conduct participant recruitment. Because the relevant population comprises a very specific group of subjects – privacy advocates who campaigned against Australia’s surveillance legislation – the sampling of potential participants was necessarily purposive. An initial purposive sample was thus produced by identifying advocacy organisations who participated within the PJCIS (2015) review of the Data Retention Act (2015). A full list of this initial sample can be found within Appendix B. The use of a purposive sample was appropriate given the necessity of “key informants in the field who can help in identifying information-rich cases” (Suri, 2011, p. 66). That is, only advocates who campaigned against the Data Retention Act (2015) could provide the data necessary to answer the research questions.

The latter three stages of Robinson’s (2014) sampling method were conducted simultaneously. Based upon a meta-analysis of long-form interview research in the social sciences, it was estimated the project would require a minimum of twelve (12) interviews to reach data saturation (Guest, Bunce, & Johnson, 2006, p. 74). To reach this target, the researcher sent recruitment letters to the publicly available email addresses of organisations who constituted the purposive sample in Appendix B. This letter was approved by the University Human Research Ethics Committee (UHREC). A copy of the recruitment letter can be found within Appendix C. In order to encourage participants to respond, the interview process was offered in a flexible format. Interviews could be conducted face-to-face, via teleconference, or in a written format. When potential participants contacted the researcher, they were sent a copy of the participant information sheet and consent form. Copies of these can be found within Appendices D and E respectively. Additionally, in order to reach data saturation, the initial purposive sample was expanded using a snowball sampling method. That is, at the conclusion of interviews with a participant, they were asked to pass along details about the project to any other suitable participants. This enabled the building of social capital with participants (Noy, 2008, pp. 334-335), which was necessary to access the broader network of privacy advocates. The strategy of snowball sampling is an appropriate method for interview-based studies of insular social groups (Browne, 2005, p. 48), and is useful for accessing ‘naturally’ occurring social networks through existing relationships (Noy, 2008, p. 329).

Altogether, twenty-one (21) research interviews were conducted with representatives of civil society advocacy organisations who participated in the PJCIS (2015) review of the Data Retention Act (2015) and the then-ongoing debate about the Encryption Access Act (2018). The final sample was constituted by seventeen (17) men and four (4) women, drawn from ten (10) participating civil society organisations. Categorised according to group membership, eleven (11) participants were members of privacy advocacy organisations, five (5) participants were members of technology-oriented advocacy organisations, and five (5) participants were members of human rights advocacy organisations.

Of the twenty-one interviews, three (3) were conducted during face-to-face meetings, fourteen (14) were conducted via teleconference, and four (4) were conducted via written responses to interview questions. Face-to-face interviews were conducted in mutually agreeable public locations and recorded using an Olympus VN-7800 Digital Voice Recorder. The teleconference interviews were conducted using software such as Skype and Zoom, or via telephone calls. These interviews were audio-recorded using Audio Recorder for Skype software, the built-in recording capabilities of Zoom, and Automatic Call Recorder software for telephone calls. The average length of audio-based interviews was just under an hour (58 minutes, 58 seconds), with lengths ranging between 18 minutes and 113 minutes. The range of interview lengths was a product of varying time constraints among research participants. For the four written interviews, the average length of responses was 1,943 words, ranging between 1,026 and 2,678 words. Overall, the final sample included participants with backgrounds in technology, law, and activism, and who were variously associated with privacy, technology, and human rights organisations. After initially exceeding the minimum number of interviews required to reach data saturation (Guest et al., 2006, p. 74), the researcher continued monitoring the contents of interviews to assess whether additional sampling was required. The snowball sampling strategy was concluded when data saturation was judged as achieved (Fusch & Ness, 2015, p. 1409).
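As a minimal illustration of the arithmetic behind these descriptive statistics, the following Python sketch computes means and ranges over placeholder values. The individual durations and word counts below are hypothetical (the interview data themselves are confidential); only the sample sizes and the reported ranges mirror the figures above.

from statistics import mean

# Durations of the seventeen audio-based interviews, in minutes
# (hypothetical placeholder values; only n and the range are reported above).
audio_minutes = [18, 25, 32, 40, 45, 50, 55, 58, 60, 62, 65, 65, 70, 75, 80, 90, 113]

# Word counts of the four written interviews (hypothetical placeholder values).
written_words = [1026, 1800, 2268, 2678]

print(f"Audio interviews: n={len(audio_minutes)}, mean={mean(audio_minutes):.1f} min, "
      f"range={min(audio_minutes)}-{max(audio_minutes)} min")
print(f"Written interviews: n={len(written_words)}, mean={mean(written_words):.0f} words, "
      f"range={min(written_words)}-{max(written_words)} words")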

3.2.3. CODING AND ANALYSIS

This sub-section describes the processes of transforming, coding, and analysing the data during research. As data collection was ongoing, the researcher transformed the interviews into a format suitable for political discourse analysis


(Schilling, 2006, p. 30). This required the transformation of the seventeen (17) audio-recorded interviews into a textual format. During the transcription process all identifying information was removed. Consistent with the project’s ethics approval, the interviews were all transcribed within a six-week period and a copy was provided to the relevant participant for review. This enabled participants to confirm the transcribed data reflected their experiences. All instances where participants suggested edits were accepted. Once the transcripts were finalised, the researcher began the process of coding and analysis. The practice of political discourse analysis requires the operationalisation of discourses used to articulate signifiers (Barnard-Wills, 2016, p. 81). It therefore involves reducing the volume of information without a loss of complexity. Indeed, according to Schilling (2006, p. 31), the purpose of qualitative analysis is to “reduce the material while preserving the essential contents” through the use of ‘structured protocols’ or ‘categories’ that capture the contents of a discourse.

To ensure this was accomplished, the coding of interview data adhered to the guide developed by Satu Elo and Helvi Kyngäs (2008), where qualitative data analysis is advised to follow three stages: preparation, organisation, and reporting. Specifically, this involves the development and refinement of a category protocol, the coding of transcripts to identify patterns and trends, and reproducing these observations and identifying their interrelationships within an original analysis of the data.

The preparation phase of data analysis involved the construction of a preliminary category protocol for the purposes of coding the interview transcripts. The preliminary category protocol can be found within Appendix F. This was constructed via a deductive process based upon the research questions, theoretical framework, and existing literature. The preliminary protocol was constructed to capture the signification of ‘privacy protection’ as a practice within the discourses articulated by

civil society advocates. Subsequently, the organising phase of qualitative analysis involved analysing the data and inductively refining the category protocol. This approach allows the researcher to draw upon existing knowledge, while also transparently generating novel categories based upon exposure to the data (Fereday &

Muir-Cochrane, 2006, p. 81). Thus, a pilot coding phase for analysing the interview transcripts was conducted. The process of inductive category construction involved the categorisation of data according to distinct ‘categories’, which can be further sub-divided into increasingly narrow ‘sub-categories’ (Elo & Kyngäs, 2008, p. 111).

Where a theme cannot coherently fit into an existing category (or sub-category), the protocol may be refined to accommodate the data. This process was completed through repeated analyses of the data over a six-month period, until the researcher was satisfied the protocol had reached ‘thematic exhaustion’ (Bowen, 2008, p. 148). A copy of the revised category protocol can be found in Appendix G. Specifically, to ensure the finalised categories were coded with consistency, the category protocol included a label, description, and multiple examples (Thomas, 2006, p. 240). This manner of constructing the category protocol allows the analysis of relationships between categories in the same grouping (e.g. analysing the differences between ‘sub-categories’), and alternatively, comparisons across groupings (e.g. analysing the differences between ‘categories’).
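
As a rough illustration of the structure such a protocol takes, the following sketch models a category as a label, a description, examples, and nested sub-categories. The entries shown are invented placeholders for demonstration, not the protocol reproduced in Appendix G.

    from dataclasses import dataclass, field

    @dataclass
    class Category:
        """One entry in a category protocol."""
        label: str
        description: str
        examples: list
        sub_categories: list = field(default_factory=list)

    protocol = Category(
        label="Signification",
        description="How advocates ascribe meaning to privacy protection.",
        examples=["discussion of the privacy/security relationship"],
        sub_categories=[
            Category("Trading-off", "Privacy and security as competing values.",
                     ["security is constantly a trade-off"]),
            Category("Constitutive", "Privacy as necessary for security.",
                     ["weak privacy protections create security risks"]),
        ],
    )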

Once all data had been coded and input into the category protocols, the researcher began the third phase of qualitative analysis: analysing the results. Broadly, this required presenting and describing the patterns of categories observed within the data and unpacking their relationships with one another. Specifically, it involves presenting ‘thick’ and ‘rich’ examples and explanations of the themes observed within the discourse (Onwuegbuzie & Leech, 2007, p. 244). These concepts correspond to the

ideas of ‘quantity’ and ‘quality’ of examples provided, and therefore ensure the themes are both reliable and valid. While the trustworthiness of the research is discussed in greater detail below (Chapter Three, Section 3.2.4), it is worth noting here how this informs the practicalities of presenting qualitative research. The data is presented according to the groups of specific categories (e.g. signification, subjectivation, and identification). These include multiple and detailed examples of the observed theme, to ensure the reported data adheres to the requirements of thickness and richness.

Additionally, the relationships between and within observed themes are explained throughout the data chapters. It is therefore consistent with the general requirements of qualitative data analysis, which involves the fracturing and rearrangement of data for the purposes of identifying consensus and conflict across categories (Schilling,

2006, p. 34; Fereday & Muir-Cochrane, 2006, p. 89). Overall, this enables an analysis of how ‘privacy protection’ as a practice is articulated – constructed and contested – within the discourses produced by privacy advocates.

3.2.4. TRUSTWORTHINESS

The question of quality in qualitative research is intrinsically linked to questions of epistemology and the practicalities of the research design. As such, much of what is discussed in this sub-section relates back to points previously made about these features. Broadly, research quality can be categorised according to ‘validity’ and

‘reliability’ (Bryman, Becker, & Sempik, 2008, p. 264), although qualitative researchers often reject these labels and develop alternative measures of

‘trustworthiness’ (e.g. Elo & Kyngäs, 2008). However, the crux of these ideas is the same. Generally, validity refers to how well the data, as interpreted through analytical

constructs, captures the phenomena under investigation. Alternatively, reliability refers to the consistency with which research instruments detect the phenomena under investigation. Both of these measures of data trustworthiness will be considered.

The research methodology has been designed to maximise the validity of the data insofar as practical within the qualitative paradigm. It is noted that the researcher is considered a ‘co-constructor’ of data through the semi-structured interview process

(Gemignani, 2014, p. 127). With this in mind, it is important to acknowledge the unavoidability of the researcher influencing the production of interview data, while also taking reasonable steps to minimise this process. According to Berger (2015, p. 230), this can be accomplished through repeatedly reviewing the data and engaging in peer consultations about the research process. That is, to engage in ‘reflexivity’, where the researcher remains aware of their involvement in the construction of knowledge, remains cognisant of the purpose of the research, and provides detailed descriptions of the research process (Guillemin & Gillam, 2004, p. 275). In this regard, the validity of the research presented in this thesis depends primarily upon the degree of detail offered within this chapter. It is thus argued validity has been secured.

The reliability of data in qualitative research is similarly dependent on the structure of the research design, particularly concerning transparency at the analysis and reporting stages of research. Thus, the clarity of the category protocol is paramount for establishing reliability. This requires clear labels, descriptions, and examples that could be readily applied by another researcher to the dataset (Thomas, 2006, p. 244).

Again, this relates to the idea of methodological transparency. However, as Jan

Schilling (2006, p. 32) observes, “the strength of the qualitative approach lies in the fact that such a model can be elaborated or changed within the course of analysis” and therefore “preliminary model[s] should be made explicit and used for structuring

the material.” That is, the research instruments can and should change when using an inductive framework; however, these changes need to be acknowledged and explained.

Therefore, this chapter has remained transparent about the changes made to the category protocol, as illustrated by the progression from Appendix F to Appendix G. Similarly, qualitative reliability depends upon reproducing data consistent with its original context. This is because data produced through semi-structured interviews is micro-sociological and yet shaped by the surrounding social and political context (Tracy, 2010, p. 843). As such, the data analysis chapters explicitly incorporate contextualising information to support ‘rich’ and ‘thick’ data reproduction.

Overall, the research has remained reflexive about the unavoidably active involvement of the researcher within the data collection process, while also seeking to minimise any corruption of the data. Indeed, the trustworthiness of qualitative data is always a matter of degree (Onwuegbuzie & Leech, 2007, p. 238). The choices made within the research design have been oriented towards maximising data validity and reliability. For example, through the act of transcribing the interviews, the researcher was reminded of their personal contributions to the flow of the conversations (Frost,

2009, p. 18). Through implementing the requirements of ‘rich’ and ‘thick’ thematic description, the research design requires reflection about the degree of consensus and conflict within and between categories (Tracy, 2010, p. 841). Through the inductive refinement of the category protocol, the research instrument was sensitive to the discursive themes articulated during interviews without distorting or compressing the data (Elo & Kyngäs, 2008, p. 113). The level of detail offered in the category protocol

– the use of labels, descriptions, and examples – also ensures the research instrument can be independently evaluated (Elo & Kyngäs, 2008, p. 112; Fereday & Muir-

Cochrane, 2006, pp. 84-85). Finally, it is important to note that individual pieces of

data are always partial and must be analysed within the context of the broader dataset

(Polkinghorne, 2007, p. 482). Indeed, qualitative interviews are personal ‘accounts’ that must be deconstructed, reassembled, and interpreted. As a result, the veracity of any individual ‘account’ is irrelevant insofar as it forms part of the overarching pattern of articulating a discourse, as long as data saturation and thematic exhaustion are achieved. These various elements of the research design, and the transparency with which they have been documented, ensure the original research contribution of the thesis is consistent with the requirements of maximising trustworthiness.

3.3. RESEARCH ETHICS

This section summarises the ethics requirements of the research, highlighting the relevant issues and explaining any associated strategies of risk management. The research component of the project received ethics approval from the University Human

Research Ethics Committee (UHREC; Ethics Approval Number 1700000517). The project was approved as being negligible or low risk for participants and was conducted consistent with the standards required by the UHREC as guided by the

Australian Code for the Responsible Conduct of Research (National Health and

Medical Research Council, 2007; hereafter NHMRC). Specifically, the project required consideration of the requirements of informed consent and data management.

The issues of informed consent were central to ensuring the project adhered to university and national ethics requirements. Underpinning this is the recognition that data gathered during research involving human participants are produced via collaboration between researchers and participants, and it is therefore important to avoid treating participants merely as means to the researcher’s ends (Guillemin &


Gillam, 2004). Indeed, as Marilys Guillemin and Lynn Gillam (2004, p. 272) observe,

“[t]he potential harms to participants in qualitative social research are often quite subtle and stem from the nature of the interaction between researcher and participant.”

Thus, to ensure this risk of harm was minimised, the researcher did not provide incentives to participants and ensured they understood the requirements of informed consent at multiple stages of the research process. As previously noted, after contacting the researcher, participants were provided with Information and Consent forms

(Appendices D and E) and provided time to determine whether they wanted to participate in the project. Additionally, the contents of the documents were verbally discussed prior to all interviews. In particular, participants were reminded that they could determine whether the interviews were audio recorded, that they would have a timely opportunity to revise the transcript, and that they could withdraw from the study at any time within ten weeks of participation.

The other significant ethical consideration built-into the research design concerned the data management procedures, both in terms of storage and reporting.

This is important because data ownership is complicated in projects involving collaborative knowledge production (Parry & Mauthner, 2004). Thus, participants were reminded that their involvement would be confidential at all stages, with identities known only to the researcher and their supervisors. The data presented within the original research component of the thesis are suitably modified to ensure individual participants cannot be re-identified: the use of pseudonyms, the removal of references to specific roles in organisations, the alteration or disguising of details where this does not affect the data, and the exclusion of extracts containing identifying details (Wiles et al., 2008, p. 423). These criteria were applied to all interview transcripts to ensure consistency when the researcher removed

or altered any potentially identifying personal information (Wiles et al., 2008). As noted earlier, participants also examined transcripts of their interviews to check these changes and request further amendments where necessary. Finally, strict data management procedures were implemented to ensure the integrity of both the raw and transformed data, consistent with the requirements of the QUT Management of

Research Data Policy69 and Records Management Policy.70 As such, all physical data are archived within locked storage facilities while digital copies are stored on encrypted drives. In accordance with the requirements of the Queensland Public

Records Act (2002), retained copies of the audio recordings will also be disposed of five years after the date of publication. Altogether, the original research presented within this thesis was guided by the research ethics literature for interview-based qualitative analyses and adhered to the practical and ethical standards required by the

UHREC and NHMRC.
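
To make the de-identification criteria described above concrete, the following sketch applies a pseudonym mapping and strips organisational role references from a transcript extract. The names, mapping, and pattern are invented for illustration; they are not the procedure applied to the actual transcripts.

    import re

    PSEUDONYMS = {"Jane Citizen": "Participant 1"}  # hypothetical mapping
    ROLE_PATTERN = re.compile(r"\b(chair|director|founder) of [A-Z][\w ]+")

    def de_identify(text):
        """Replace known names with pseudonyms and redact role references."""
        for name, pseudonym in PSEUDONYMS.items():
            text = text.replace(name, pseudonym)
        return ROLE_PATTERN.sub("[role removed]", text)

    print(de_identify("Jane Citizen, founder of Example Org, was interviewed."))
    # -> Participant 1, [role removed], was interviewed.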

3.4. LIMITATIONS OF RESEARCH

This final section provides a summary of the limitations of the research associated with the use of a qualitative methodological framework. Indeed, despite the rigorous character of political discourse analysis, acknowledging the limitations of post-structuralist and qualitative – rather than essentialist and quantitative – research minimises the potential misrepresentation of the generated knowledge. For example, the use of a purposive and snowball sample does limit the generalisability of the research conclusions to other populations (Brown, 2010, p. 231). However, the details

provided about the category protocol, and associated explanations of the social and political context of data collection, should enable transferability of findings to qualitatively similar contexts (Tracy, 2010, p. 845). Similarly, despite the strengths of ‘thick’ and ‘rich’ qualitative data, the research does not (and cannot) determine experimental causation with scientific certainty (Gelo, Braakmann, & Benetka, 2008, p. 8). However, it is important to note that this is not the aim of the thesis, which explicitly seeks to understand the contested politics of privacy protection through the analysis of discursive categories. Finally, there is debate among researchers about whether a participant’s ‘account’ must be independently verified if it is to be of empirical value (e.g. Spector-Mersel, 2010, p. 208; cf. De Fina, 2009). As discussed above (Chapter Three, Section 3.2.4), political discourse analysis does not require independent verification of participant accounts beyond achieving thematic exhaustion across multiple semi-structured interviews. Indeed, it is analytically irrelevant whether individual accounts are ‘true’ or ‘false’ when they contribute to observable patterns of articulation that ascribe meaning to linguistic signifiers, discursively position subjects, and construct shared identities to mobilise collective action. Overall, despite the limitations inherent to the adopted qualitative framework, the documented details and transparency of the research design ensure the method is appropriate for answering the specific research questions.

69 The management of research data policy can be found at: http://www.mopp.qut.edu.au/D/D_02_08.jsp.
70 The records management policy can be found at: http://www.mopp.qut.edu.au/F/F_06_01.jsp.


Chapter Four: Contesting the ‘Problem of Going Dark’

4.1. INTRODUCTION

The psychosocial process of signification describes how subjects ascribe meaning to objects through the articulation of discourses. However, as explained above (Chapter Three, Section 3.2), these discourses articulate meaning in a negative sense – by drawing comparisons between categories. Therefore, this chapter presents an analysis of the signification strategies used by privacy advocates to differentiate the meaning of ‘privacy protection’ from methods of ‘criminal evasion’ – a moral equivalence at the core of the ‘problem of going dark’ (Chapter One, Section 1.2). As such, the chapter answers the first research question. The first section argues that the articulated meaning of ‘privacy protection’ is constructed through discourses that define the conceptual relationship between privacy and security as moral values.

Specifically, the moral equivalence between ‘privacy protection’ and ‘criminal evasion’ can be considered a by-product of the relational properties of discourse. The second section argues that privacy advocates are encumbered by a liberal framework when contesting the ‘problem of going dark’ that is vulnerable to conceptual distortion by the consequentialist logics of preventive justice. Specifically, it is argued the categories of necessity, proportionality, accountability, and harm are readily co-opted to justify intrusive surveillance powers. Finally, the third section argues that privacy advocates alternatively contest the ‘problem of going dark’ by ascribing moral arbitrariness to the surveillance powers established under the Data Retention Act

(2015) and Encryption Access Act (2018). Overall, the chapter provides an answer to the first research question: Australian privacy advocates differentiate the meaning of


‘privacy protection’ from ‘criminal evasion’, and therefore contest the ‘problem of going dark’, via ascribing moral arbitrariness to surveillance powers.

4.2. THE RELATIONAL PROPERTIES OF PRIVACY

There is a significant amount of literature within the social sciences examining the ‘trading-off’ between, or ‘balancing’ of, privacy and security (e.g. Bronitt &

Stellios, 2005; Mann et al., 2018). However, much of this scholarship treats ‘privacy’ and ‘security’ as distinct values rather than as conceptually and discursively linked.

This section examines how the articulated meaning of ‘privacy protection’ is constructed through discourses that define the conceptual relationship between

‘privacy’ and ‘security’ as moral values. As such, it is argued any moral equivalence between ‘privacy protection’ and ‘criminal evasion’ can be considered a by-product of the relational properties of discourse. Specifically, across two sub-sections, it is argued that the concept of ‘privacy’ has an unavoidably dialectical relationship with the concept of ‘security’, as either: 1) competing moral values requiring a trade-off; or

2) constitutive elements of a single moral value. Consequently, it is argued the moral equivalence between ‘privacy protection’ and ‘criminal evasion’ can be drawn wherever ‘security’ is logically prioritised within a discourse.

4.2.1. THE ‘TRADING-OFF’ OF PRIVACY AND SECURITY

This sub-section examines the first signification strategy used by privacy advocates to articulate the relationship between privacy and security – as distinct moral values that must be ‘traded-off’ or ‘balanced’ against one another. A minority of four (4) participants articulated this discourse of ‘trading-off’ during interviews.


For example, Participant 8 argued there is always some degree of trade-off between the two values:

“A friend of mine has a good metaphor for this: if you really want to be secure,

you can walk down the street in a suit of armour. But you don’t for practical

reasons. Security is constantly a trade-off, and the most secure things to do are

to not have a computer and throw your phone away. But we don’t because there

are benefits.”

This articulates a notion that a ‘trade-off’ between privacy and security is unavoidable, where broadening the scope of one value causes a reduction in the other. Similarly, while Participant 5 observed there is a ‘false dichotomy’ between privacy and security, they reaffirmed that the values must be ‘balanced’ against one another:

“It is also posing a false dichotomy, the notion that we can choose either safety

or freedom. Societies have had to balance these two for centuries and have

developed important principles like the presumption of innocence and other

safeguards to ensure we have checks and balances on any encroachment on

freedoms. The same principles should apply.” (emphasis added)

Evidently, these participants viewed it as inevitable that some measure of ‘privacy’ must be ceded in pursuit of ‘security’ interests. Indeed, Participant 19, a long-time member of the APF, similarly remarked that the trade-off is unavoidable:

“Unless you are going to become some sort of survivalist and retreat into a

cave, there is not much you can do to keep information hidden. You can try

and control what may be done with it, but keeping it hidden isn’t going to get

you anywhere.”


These extracts demonstrate how discourses that ascribe meaning to ‘privacy’ have explicit and implicit consequences for the meaning of ‘security’, as it is not possible to articulate the meaning of signifiers in isolation. This relational property of ‘privacy’ discourse enables the moral equivalence at the core of the ‘problem of going dark’, where the signified meaning of ‘privacy’ is equated with a ‘threat to security’.

For example, Participant 12 acknowledged this specific problem:

“The real problem is that this is something that does not have a good

compromise solution. If you want the kinds of technology that we have been

enjoying for the last decade or two, you ultimately need to lock the criminals

out of it, so investigations need to be done via other measurements.”

Here, the participant notes how ensuring communications technologies respect

‘privacy’ inevitably creates security risks, and therefore “you… need to lock the criminals out”. While the idea that rights inherently conflict is established within jurisprudence and political theory (Thompson, 2001, p. 17), it is argued such relational properties are embedded within ‘privacy’ discourse itself (i.e. Saussure, 1916/1959, p. 117). Therefore, when articulating the meaning of ‘privacy’ as a value, participants are unavoidably also articulating – explicitly or implicitly – the meaning of ‘security’ as a concept. It is these relational properties that enable the moral equivalence between

‘privacy protection’ and ‘criminal evasion’ to be drawn, because the meaning of protecting ‘privacy’ can be equated with enabling ‘threats to security’.

4.2.2. PRIVACY AS ‘NECESSARY’ FOR SECURITY

This sub-section examines the second signification strategy used by privacy advocates to articulate the relationship between privacy and security – as components

of a single moral value that are constitutive of one another. Relevantly, this was a more frequently articulated discourse, with nine (9) participants framing the conceptual relationship as constitutive. Specifically, the discursive strategy constructs ‘privacy’ and ‘security’ as common elements of ‘non-interference’ from others. For example,

Participant 4 described the ethos of data austerity (Datensparsamkeit), which captures how the protection of ‘privacy’ is important for ‘security’:

“There is an ethos of datensparsamkeit where, as a software developer, you

should only collect the things that you need. The reason for that is because you

want to respect your users’ privacy and also you can’t 100% guarantee the

security of anything you build. With very minor exceptions.”

The relationship articulated here is that ‘privacy’ and ‘security’ are elements of a common interest in ‘non-interference’ from other subjects. However, the participant is logically prioritising the signified value of ‘privacy’ over the signified value of

‘security’ (i.e. ‘[t]he reason… to respect users’ privacy [is to] guarantee the security’).

This was a recurring strategy used by participants to articulate why surveillance powers are counterproductive. For example, Participant 15 argued “the security of the nation is not served by making the state’s job easier if it also weakens the security of everybody.” Participant 4 elaborated on this argument:

“Although I am not a national security expert, I do know that by trying to

weaken the security of people’s communications, storing and handling them in

insecure ways, in a material way they are making people less secure.”

These discourses highlight how weak privacy protections can render people vulnerable to cybersecurity threats (i.e. Zajko, 2018; Hildebrandt, 2013, p. 360). For example, if the Encryption Access Act (2018) compels technology companies to “start storing all

that data in plaintext… it opens up to others as well” (Participant 11), “then you have all these backdoor phones that people can turn into a botnet, gain access to private information, or do whatever” (Participant 1), and “otherwise you get crime everywhere, you get impersonations, SQL injections, cross-site scripting, and all the other problems that have been long-known about” (Participant 12). Participant 11 also articulated this particular argument:

“Information security is hard, and no one does it well. If they start storing all

that data in plaintext, it opens up to others as well. It is why we advise other

companies to not store data in plaintext, things like credit card details, because

if they get hacked they will lose it all. That is not a question. No one is

suggesting we store credit card numbers in plaintext, because they’re valuable.

But this other data can be as valuable. We need to talk about encrypting that,

as a society, in the same way.”
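
The contrast Participant 11 draws between plaintext and encrypted storage can be illustrated with a minimal sketch using the symmetric Fernet scheme from the third-party Python cryptography package. The stored value is an invented example, and real systems would manage keys far more carefully.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # held in a key store, never beside the data
    fernet = Fernet(key)

    record = b"4111 1111 1111 1111"      # example card number, never stored as-is
    ciphertext = fernet.encrypt(record)  # what actually lands on disk

    assert fernet.decrypt(ciphertext) == record  # readable only with the key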

Altogether, these extracts demonstrate how the relational properties of discourse enable privacy advocates to equate the meaning of ‘privacy’ with ‘security’ where the former is logically prioritised. However, as a by-product of such relational properties, these discourses also enable the moral equivalence at the core of the ‘problem of going dark’ where the signified value of ‘security’ is considered logically prior to ‘privacy’.

For example, previous research has demonstrated how intrusive surveillance powers have been legitimated via discourses that assert “privacy requires security” (e.g.

Simone, 2009, p. 8). This highlights how the moral equivalence at the core of the

‘problem of going dark’ is a by-product of the same relational properties used by privacy advocates to articulate the meaning of ‘privacy protection’.


These relational properties are also observable where participants instead articulated the signified meaning of ‘security’ as a moral value. For example, in response to a question about whether data retention legislation produces benefits for national security, Participant 2 responded by saying, “define national security… if you define it as being safe, then no” (emphasis added). Similarly, Participant 16 remarked how, “in an open and democratic society, the security of the state, the job of intelligence agencies, is not well served by weakening the security of the populace”

(emphasis added). Finally, Participant 13 made a similar observation about this constitutive relationship between the values:

“A lot of privacy stuff these days comes from small data leakage, like the

cookies from a site you visited… so, it becomes more of a security thing, the

primary benefit of using a security tool is therefore greater privacy.”

(emphasis added)

By highlighting how the “benefit of using a security tool is therefore greater privacy”, these participants articulate ‘security’ as dependent on, and equivalent to, ‘privacy’.

Similarly, in describing the risks of data retention programs, Participants 4 and 21 articulated how such ‘interferences’ are equivalent to threats to security:

“The point is, by building this vast trove of data, that we are with metadata

retention and other mass surveillance programs, it is an incredibly powerful

tool for whoever gets hold of it, and that could be the current government.”

(Participant 4)

“The reason that you don’t create a honey pot [is] so anyone who has a

malicious intention, for example, if we had a Minister absorb multiple


portfolios who had an abrasive approach to how the community is treated, puts

you into a very dangerous situation.” (Participant 21)

Overall, by invoking the risks of potential data breaches resulting from the Data

Retention Act (2015) and Encryption Access Act (2018), the participants highlight how ‘privacy’ and ‘security’ are common elements of an underlying value of ‘non- interference’ from others. Although, participants still signify that the concept of

‘privacy’ is logically prior to the concept of ‘security’. Yet, it is clear how this logic may be readily reversed by the proponents of surveillance legislation. As such, the moral equivalence at the core of the ‘problem of going dark’ is a by-product of these relational properties of discourse. Overall, whether these moral values are constructed as constitutive or exclusive, these relational properties always enable the proponents of surveillance powers to articulate the ‘problem of going dark’ by signifying ‘privacy’ as either a ‘threat to security’ or as ‘dependent on security’.

4.3. PRIVACY ADVOCACY AND LIBERAL DISCOURSE

The previous section demonstrated how the articulated meaning of ‘privacy protection’ is constructed through discourses that define the conceptual relationship between privacy and security as moral values. This section examines the associated discourses articulated by participants to contest the moral equivalence at the core of the ‘problem of going dark’ – that ‘privacy protection’ enables ‘criminal evasion’.

The section argues that privacy advocates are encumbered by a liberal framework that is vulnerable to the consequentialist logics of preventive justice, as the categories of necessity, proportionality, accountability, and harm can be co-opted to justify intrusive surveillance powers. Specifically, this first sub-section examines how this framework

includes a “presumption of privacy” (i.e. Frey, 2000, p. 47) that places the onus for justifying surveillance powers upon the state. Subsequently, the section analyses the dominant discursive strategies used to contest the ‘problem of going dark’ – the liberal categories of necessity, proportionality, accountability, and harm. Overall, the section argues these dominant discourses articulated by privacy advocates are vulnerable to being distorted by the consequentialist logics of preventive justice.

4.3.1. ARTICULATING THE ‘PRESUMPTION OF PRIVACY’

As discussed within Chapter Two (Section 2.4.2) the dominant discourses of privacy are as an individual right to non-interference from others. The philosopher

Raymond Frey (2000, p. 47) has referred to the cultural dominance of this perspective as the “presumption of privacy” – the taken-for-granted notion that individuals possess a moral right to privacy as non-interference. This sub-section analyses how this

“presumption of privacy” is embedded within observable discourses articulated by

Australian privacy advocates. This includes examining how ‘privacy’ is articulated as a moral right to non-interference and analysing how the onus for justifying

‘reasonable’ interferences with privacy rights is discursively positioned. Overall, the sub-section argues that privacy advocates are encumbered by liberal discourses that are vulnerable to the consequentialist logics of preventive justice.

The concept of privacy was assumed as a moral right to non-interference by a subset of nine (9) participants interviewed as part of this research. The most steadfast in their support for this perspective was Participant 2, who self-identified as a hacker and techno-libertarian. For example, when asked what information the state has a legitimate claim to collect about citizens, Participant 2 made the following remark:


“None. They consistently misuse the information or use it for more than they

initially stated. We have a right to privacy, not a right to partial privacy…

There is an implication in this statement that intelligence agencies going dark

is a bad thing and it needs to be prevented. But that is another way of saying

your right to privacy is only partial, [Turnbull’s] words are an attack on

privacy.” (emphasis added)

This participant argues that all forms of government surveillance are forms of interference that violate the moral right to privacy. This process of discursive articulation frames individuals as, by default, possessing privacy and places the onus on governments to justify any ‘reasonable’ interference with this right (i.e. Frey, 2000, p. 47). This idea was perhaps most succinctly articulated by Participant 21, who argued

“[t]he overarching message should be about the necessity of digital rights as inalienable freedoms that attach to the existing human rights framework”. There are several important features about this basic demand that give rise to disagreements within the privacy movement in Australia.

One of the primary discursive properties of articulating ‘privacy’ as a right to non-interference concerns the positioning of responsibility for justifying any

‘reasonable’ interferences. It is argued the discourses articulated by privacy advocates place this onus upon governments. For example, Participant 3, a prominent political advisor and privacy advocate, described the right to privacy in the following terms:

“Citizens have a right to privacy under international but not domestic law. The

onus is on the government to make the case as to when it has a right to violate

that right, not the other way around.”


Here, it is explicitly stated that there is an onus on governments to justify when they have a legitimate claim to interfere with privacy rights. This logic is built into the liberal discourses of non-interference – that the state can only justify interference with an individual insofar as it prevents some form of ‘harm’ (i.e. Harcourt, 1999; 2013).

This is reflected in the ways in which privacy advocates articulate their criticisms of surveillance powers. For example, Participant 9, a prominent member of a technology-oriented advocacy organisation, described how “[a]t the end of the day, when we are talking about government agencies, they shouldn’t have access to any of this data by default” (emphasis added). This is the presumption of privacy – that citizens own their personal information by default, which implies it may be collected by government under certain circumstances. Similarly, Participant 3 argued, “[t]he government has a right to the information of citizens that is provided or retained with their consent”, while Participant 11 remarked, “[i]t should be acknowledged that I own [my] data, and access is provided so the government can provide services for me and other citizens”. It is argued these discourses establish that privacy may be interfered with as long as some technocratic criteria are satisfied. It thus establishes the foundation for justifications of surveillance powers through the logics of preventive justice (e.g. Zedner, 2007a). This logic is illustrated by Participant 11:

“Folks, whether they are citizens or not, should have some expectation of

privacy about what they do online. There is a fundamental freedom that people

should have. At the same time, we know that the justifications for surveillance

do not really line up.”

This reframes the moral question about the right to privacy as about the technocratic justifications for the surveillance solution. Specifically, whether the evidence for surveillance programs ‘lines up’. The discourse therefore establishes the foundation

for using the liberal principles of necessity, proportionality, accountability, and harm to determine the scope of a privacy right to non-interference. It is argued that privacy advocates are therefore encumbered by this liberal perspective, which renders privacy discourse vulnerable to the logics of preventive justice.

4.3.2. PRIVACY AS A RIGHT TO NON-INTERFERENCE

The previous section argued that privacy advocates are encumbered by a

‘presumption of privacy’ that is vulnerable to the logics of preventive justice. This section analyses the signification strategies used by participants to contest the

‘problem of going dark’ within this liberal framework. Specifically, this involves the articulation of categories that construct privacy as a right to non-interference, with an onus upon the state to justify surveillance as a tool used within criminal investigations.

These categories include the principle of necessity, the principle of proportionality, the principle of accountability, and the (collapsed) harm principle. Specifically, the categories of necessity, proportionality, and accountability are articulated to contest the effectiveness of surveillance programs as tools for criminal investigations, while the category of harm contests the moral equivalence drawn between practices of

‘privacy protection’ and ‘criminal evasion’. Overall, the section argues these liberal categories used to contest the ‘problem of going dark’ are easily co-opted by the logics of preventive justice to legitimate intrusive surveillance powers.

4.3.2.1. PRIVACY AND THE NECESSITY PRINCIPLE

This section examines the first signification strategy used by privacy advocates to contest the ‘problem of going dark’ – the principle of necessity. The discourse

contests the claim that surveillance programs are effective tools for criminal investigations. This was articulated by fifteen (15) participants. For example,

Participant 6 described their views about when governments should be collecting information about citizens:

“It is about what information the Government needs to have about people.

From first principles, what is the function that the government is trying to

accomplish, and what is the minimum amount of information necessary to

provide the relevant service to the citizen?” (emphasis added)

As Participant 10 noted, this is “the cop-out answer [of] the necessity and proportionate principles”, the consequentialist metrics used to determine whether interference with an individual’s rights is justifiable (Gellert, 2017, p. 187). Yet there is little agreement about what can empirically be demonstrated as ‘necessary’ interference to protect security interests, even among the advocates of privacy protection. For example,

Participant 14 articulated this tension clearly, identifying the census as an example of legitimate government surveillance:

“There needs to be a certain amount of data gathering about individuals without

consent, so governments can make effective and informed decisions. So, where

do you draw the line? The census has traditionally been a good example to

point to and say here’s the data, it has been fully anonymised, shows

demographic shifts and population density, and that’s critical for town

planning. We want to make sure we have good data about these issues.”

Indeed, among privacy advocates there is significant sympathy for government services that address issues of ‘social’ security. For example, Participant 21 observed that “[w]ithout providing information about housing arrangements it is very difficult

to provide social welfare to groups of people” while Participant 13 articulated

“I suppose there are also aspects around our health safety net and Medicare… it makes sense to keep information like blood types”. The principle of necessity, as a means to delineate the scope of privacy rights, thus involves substantive moral judgements about the greater or common good. Yet, participants were less willing to acknowledge that mass data retention for the purposes of criminal investigations serves a similar common good, at least in the absence of strong access controls. For example,

Participant 6 articulates this line-of-reasoning clearly:

“There is a need to distinguish between the use of information where

government is trying to deliver a service and its use within the criminal sphere.

There needs to be judicial independence and oversight, because that is an

especially invasive power.”

So, although privacy advocates accept that the principle of necessity is satisfied by a common interest in public health or social welfare, they reject the necessity of mandatory data retention and access to encrypted communications as serving common interests in crime prevention. By articulating this, they reject the necessity principle on the basis of contextually-contingent privacy norms (i.e. Nissenbaum, 2011; Marx,

2016). This is based upon their rejection of empirical claims about the effectiveness of the surveillance solution within the ‘criminal sphere’ of social life.

The principle of necessity thus shifts discourses of ‘privacy protection’ and

‘going dark’ into contesting the empirical evidence about the effectiveness of surveillance programs as solutions to crime and security threats. Therefore, the discourse is articulated through liberal-technocratic claims about the effectiveness of surveillance programs. For example, Participant 5 succinctly summarises this claim:


“There is no evidence that mass surveillance helps governments to catch

terrorists. Edward Snowden’s revelations suggest the opposite, that catch-all

surveillance programs actually make it harder to sift through all the data to

identify potential threats to security.”

Similarly, Participant 13 observed:

“There is a claim made about security behind it, but I think it has recently been

proven around the world that these sorts of schemes do not actually work. So,

there are a lot of costs for very little value. And not just direct entry costs, there

are other sorts of costs associated with privacy.”

These extracts frame the relevant issue as being about whether surveillance programs

‘work’, rather than whether they are morally right or wrong. For the advocates of privacy protection, if surveillance can be demonstrated to not work, then the principle of necessity cannot be satisfied, and the presumption of privacy as non-interference remains accepted. However, as discussed within Chapter Two (Section 2.2.4), there are significant methodological and epistemological limitations with measuring the effectiveness of the surveillance solution and crime prevention policies (i.e. Haggerty,

2009). The evaluation of surveillance powers is an inherently political process, with empirical evidence susceptible to pre-conceived moral conceptions of the issue. This limits the ability of both the advocates and critics of government surveillance programs to establish a common ground for debating ‘necessity’.

The ‘problem of going dark’ is also challenged by invoking the principle of necessity within arguments that contest the effectiveness of surveillance programs.

For example, Participant 12 notes that “the more input chaff you dump on them, the harder it is for the limited number of staff to find what is needed… [t]he consequence

is that more stuff gets missed”, while Participant 21 invoked the analogy of the “needle in the haystack” to criticise the accuracy of identifying suspects where “you need to be able to describe what the needle is” and “the haystack in this context keeps growing exponentially” within a criminal justice setting. Participant 16 makes a similar point with reference to the notion of ‘false positives’ for automated facial recognition technology at sporting events:

“You get swamped by false positives. In facial recognition it is about 15%. So

how can we use that on a mass scale? You get 15,000 wrongly suspected

persons who are attending a cricket Grand Final. Do you need to employ

another 1,000 police, when you know they are virtually all innocent? And

maybe you decided to keep the dataset for future reference. It is data retention

for the purpose of generating suspicion, and a lack of willingness to rule it out

because you have moved away from the traditional criminal justice model.

Guilty until proven innocent.”

This concern was also articulated by Participant 15, who noted the government “have more data than they can effectively deal with anyway… [t]he number of false positives they get is overwhelming”. The overarching claim here is that government surveillance programs are not necessary insofar as they are ineffective and inaccurate due to excessive ‘input chaff’ hindering the associated analysis of the data. Theoretically, the argument is used to undermine the Australian Government’s claim that surveillance is necessary to prevent harm and manage risk.
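
Participant 16’s false-positive arithmetic can be made explicit with a short worked example. Only the 15% rate comes from the extract; the crowd size and number of genuine persons of interest are assumptions chosen so the figures match the scale the participant describes.

    attendance = 100_000        # assumed crowd at a major sporting event
    false_positive_rate = 0.15  # figure cited by Participant 16
    true_suspects = 10          # assumed genuine persons of interest

    false_alarms = (attendance - true_suspects) * false_positive_rate
    print(f"{false_alarms:,.0f} wrongly flagged attendees")  # roughly 15,000

    # Even if every true suspect were flagged, false alarms would outnumber
    # genuine detections by roughly 1,500 to 1 under these assumptions.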

Privacy advocates also contest the ‘problem of going dark’ by contrasting the effectiveness of mass surveillance programs and traditional criminal investigative

techniques, as methods for detecting and preventing threats to security. This is used to argue that the former method is unnecessary. Participant 3 stated it simply:

“Metadata retention accumulates information in such bulk that it will

ultimately prove of lower use to law enforcement, national security and

intelligence agencies than properly resourced targeted surveillance.”

Other participants made the same point, with reference to specific security threats. For example, Participant 12 argued that the terrorist attacks on 11 September 2001 would not have been prevented by mass surveillance and data retention:

“It has been the case that in nearly every major attack or incident, the individual

has previously been known, often reported by members of their family or

because they have previously committed other crimes or domestic violence.

Even in the US, the 9/11 attackers were reported to the authorities. But nothing

has been done because analysts are being spread too thin. Data collection is not

helpful.”

Similarly, Participant 2 remarked how law enforcement and intelligence agencies frequently foil terrorist plots through traditional investigative techniques, while the attacks that do manage to ‘get through’ typically involve assailants unknown to intelligence services:

“We see constantly that plots are foiled through normal investigations, and

quite often the attacks that get through it often turns out the assailants weren’t

even known to intelligence services, so being able to crack encryption wouldn’t

have helped at all.”

This was corroborated by Participant 11, who argued that high profile prosecutions in cybercrime cases are more regularly secured through traditional investigative methods:


“The NSA’s collection programs, for example, resulted in about one arrest. All

the actual arrests came from targeted surveillance programs. The big one I

remember was ‘Dread Pirate Roberts’ and all the Silk Road stuff. That was old

school surveillance methods, with FBI agents following him around for a few

weeks and accessing his open computer. They couldn’t do anything else. I think

those things still work.”

Indeed, as Participant 8 summarised it, “I think that in significant cases where organised criminal groups are infiltrated online, it has been traditional investigative processes that have been effective, rather than broad sweeping data analysis”. Overall, these extracts demonstrate how privacy advocates place significant importance on the comparative ineffectiveness of mass surveillance powers, such as those established under the Data Retention Act (2015), in comparison with traditional investigative techniques used by law enforcement and intelligence agencies.

The overarching characteristic of this discourse of necessity is how it renders normative views about surveillance as contingent upon empirical facts. Specifically, if surveillance can be challenged empirically, there is no need to deliberate about the logical priority of either privacy or security as moral concepts. As such, the discursive strategy involves signifying the impossibility of eliminating security risks (i.e. Ericson

& Haggerty, 1997, p. 90). This was apparent where participants challenged former

Prime Minister Tony Abbott’s claim, discussed within Chapter One, that the Data

Retention Act (2015) would have helped prevent the Lindt Café Siege:

“The Martin Place siege, Man Monis was known to the AFP, and it was a

deliberate decision by an analyst not to follow him up any further. So, his data

was available and being reviewed, and it was a human factor, a decision, not


to do anything about him. Having his metadata, with or without a warrant,

would not have actually prevented anything.” (Participant 1)

“On the human side of it, when we find people who commit terrorist acts, we

usually also find the authorities are already aware of them. The authorities are

not discovering brand new plots by discovering them within Facebook chats or

whatever. The perpetrator of the Lindt Café Siege, Man Haron Monis, was

interviewed by police and intelligence agencies multiple times. It is not like

they could go, ‘oh, if only we had been able to read his emails, we would have

picked it up’. No, they had as much warning as possible.” (Participant 15)

Overall, by invoking the principle of necessity as a signification strategy, privacy advocates reframe the debate as about the empirical effectiveness of surveillance programs. There was a common criticism of surveillance as an ineffective, and therefore unnecessary, tool for criminal investigations. However, as these types of empirical claims about surveillance are unavoidably politicised (i.e. Haggerty, 2009), it is argued the category of ‘necessity’ is vulnerable to co-option and distortion by the consequentialist logics of preventive justice.

4.3.2.2. PRIVACY AND THE PROPORTIONALITY PRINCIPLE

This sub-section examines the second signification strategy used by privacy advocates to contest the ‘problem of going dark’ – the principle of proportionality. The discourse concerns the comparative weight ascribed to the ‘costs’ and ‘benefits’ of surveillance as tools for criminal investigations. Thus, the principle may be invoked where the necessity of the surveillance solution has been accepted. Overall, ten (10) participants articulated this discourse. For example, Participant 10, who has a legal

background, demonstrated this consequentialist logic when they observed there is

“long-standing work that says that privacy is a human right, but we also have a long history of having to weigh human rights against collective interests” (emphasis added).

Similarly, Participant 15 articulated this notion of ‘proportionality’ directly:

“Any situation where it involves law enforcement, [privacy advocacy]

organisations tend to think of it in terms of being necessary and proportionate.

There are few cases where anything outside the traditional warrant regime can

be justified. Mass surveillance is deliberately invasive, so I think it is quite hard

to justify and the case has still not really been made. When you sit down and

look at it objectively the case for it is not actually that strong.”

This illustrates the consequentialist logics of liberal privacy discourses where they encounter conflicts among competing interests. Specifically, the discourse of

‘proportionality’ frames the issue as about comparing ‘reasonable’ measurements of the ‘costs’ and ‘benefits’ of surveillance programs. As such, the principle is another consequentialist category used to supplant substantive moral questions about the relationship between privacy and security as values.

There were several discursive strategies used to articulate the category of

‘proportionality’, and thus demonstrate why the ‘costs’ of surveillance outweigh any alleged ‘benefits’. The first discursive strategy involved highlighting the scope of interference enabled by the Data Retention Act (2015). For example, Participant 4 argued data retention laws were particularly invasive:

“And metadata is much more processable, if you think about the technological

process of it. You do not need to do natural language processing and try and

work out what the person meant when they sent this email. You just look at


who they contacted, and when, and where they and that other person was, and

you build a social graph of how they interact. That is far more invasive from

my perspective, and probably much more useful to the police or whoever else

is investigating a crime. So, I understand why they want that.”
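
Participant 4’s claim that metadata is ‘more processable’ can be sketched with a toy contact graph, here built with the third-party networkx package. The call records are invented, and the point is simply that network structure emerges without reading any message content.

    import networkx as nx

    # Invented call records: (caller, callee, timestamp) -- metadata only.
    records = [("A", "B", "2015-03-01T09:00"),
               ("A", "C", "2015-03-01T09:05"),
               ("B", "C", "2015-03-02T17:30"),
               ("A", "D", "2015-03-03T11:15")]

    graph = nx.Graph()
    for caller, callee, when in records:
        graph.add_edge(caller, callee, last_contact=when)

    # Who sits at the centre of the network? No content was ever read.
    print(nx.degree_centrality(graph))  # "A" scores highest in this toy data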

Similarly, Participant 14 used the analogy of the ‘six degrees of separation’ to make the point about how mandatory data retention programs are unduly broad:

“You know the saying that everyone in the world is separated by six degrees?

Well, if you are doing that kind of overreach, you may as well, pragmatically,

complete databases of everyone ever. Well, we don’t want that. We can say,

‘you are going too far with this’.” (emphasis added)

It is evident how this category is invoked to counter claims about the appropriateness of surveillance legislation within circumstances where their supposed empirical effectiveness is acknowledged or accepted. However, it remains unclear what degree of interference can, or should, be considered ‘invasive’ or an ‘overreach’. Indeed, in seeking to compare the ‘costs’ of interference with the ‘benefits’ of crime prevention, the proportionality discourse is comparing qualitative moral interests.
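
Participant 14’s ‘six degrees’ point can be given a back-of-the-envelope form. Assuming, purely for illustration, that each person has around forty-five direct contacts, the population reachable within six hops quickly exceeds the population of the planet; as the following paragraph notes, such figures illustrate scale without settling the moral question.

    contacts_per_person = 45  # illustrative assumption, not a measured figure
    for hops in range(1, 7):
        print(hops, f"{contacts_per_person ** hops:,}")
    # at six hops: 8,303,765,625 -- more people than exist on Earth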

There are no scientific formulas for assessing claims about the proportionality of any particular surveillance program. However, several participants did offer examples to illustrate how data retention represents an overreach by law enforcement and intelligence agencies. For example, Participant 1 noted:

“Yes, there is crime, but terrorism is not as big a risk as falling off a ladder or

slipping in the bathtub. So, is it a proportionate response to terrorism to collect

all this data when there are also all these risks of inappropriate access,

149

reidentification, and all that sort of stuff? Our position was, you know, that it

probably wasn’t.” (emphasis added)

Similarly, Participant 14 remarked that time and resources could be better spent addressing other social problems:

“So, we want to recognise that there is a risk of terrorism and random nut jobs

with a car or improvised weapon, who can go out and do a lot of damage. But,

in terms of proportions, sure acts of terrorism are horrific and terrifying, in

terms of physical damage there are other things we could focus our time and

money on to do more good overall.” (emphasis added)

Altogether, privacy advocates articulate discourses of (dis)proportionality by drawing comparisons between different types of harm. Indeed, it may be the case that resources could be better spent addressing harm incurred by personal injuries rather than harm caused by ‘nut jobs’ with a car. However, in the absence of objective metrics for comparing financial ‘costs’ against the ‘benefits’ of crime prevention, the proportionality principle is similarly vulnerable to being co-opted and distorted by the consequentialist logics of preventive justice, where significant faith is placed in the ‘reasonableness’ of accurate risk assessment.

Multiple participants articulated the category of ‘proportionality’ by explicitly invoking the idea of a cost-benefit analysis for data retention laws. This was broadly similar to the category of ‘necessity’ discussed above (Chapter Four, Section 4.3.2.1), however it was differentiated by the explicit or implicit comparison of alleged ‘benefits’ with the associated ‘costs’ of the programs. For example, Participant 13 argued that “[t]he proportionality is completely out” and the “laws are not proportionate to the risks that we are supposed to be combatting, particularly given the costs”. This was attributed to the fact that “[h]umans suck at risk”, which compelled Participant 13 to rhetorically ask “[h]ow many people have been killed by terrorism in Australia since 9/11?” Similarly, Participant 21 focused on the Australian legal context and the counter-arguments their organisation made during the public consultation process for the Data Retention Act (2015):

“The adequacy argument we put forward at the time was concerned with the cost that would attach to this, and how it would be passed onto the ISPs required to retain metadata for two years. And again, can it ever be prospective, or will it always be retrospective? We’ve had 83 terror charges laid in the last two years. So, whether it is actually capable of preventing terror, or if it is a lone wolf or radicalisation situation, I don’t necessarily think it is adequate.”

It is clear how privacy advocates contested the proportionality of data retention laws by drawing comparisons between their demonstrated effectiveness and their associated financial and social costs. In particular, participants made multiple references to the inability of data retention programs to produce observable metrics of success.

However, there was one participant attuned to the limits of the proportionality principle, highlighting how it can be readily distorted by governments to justify surveillance programs. Participant 16 made the following observation:

“You will find support for proportionality among some of the proponents for these types of things, but it is almost like the idea of proportionality has been corrupted. It is almost used against you… You can wind up with a high-level and conceptual concept of proportionality: security is more important than privacy. But that is very vague and non-operational and does not really engage with [the] issue of data security. It trickles down as a top-level, pre-judgement.”


This privacy advocate was unique among participants. Indeed, they observed how technocratic discourses about the ‘proportionality’ of surveillance programs, which assume qualitative concepts such as ‘privacy’ and ‘security’ may be quantified with precision, are easily corrupted via the logics of preventive justice.

A final rhetorical strategy was to articulate the category of ‘proportionality’ via discourses about the unequal social impacts of surveillance programs. One prominent member of a human rights organisation, Participant 21, summarised this concern:

“One of the big problems with these scenarios is that the people who are impacted are not the wealthy. They usually occupy the lower echelons of society, and this relates back to my earlier point about how classism is an overarching problem in all of this. These people do not necessarily have the knowledge or financial resources to seek legal recourse.”

This concern for the ‘lower echelons of society’ contrasts with the universal language that permeates the discourses discussed above. Instead, this subset of participants tended to view privacy as an interest that political elites are able to (unjustly) protect by circumventing surveillance. They therefore invoked the notion of a ‘privacy divide’ (i.e. Woo, 2006, pp. 956-957). For example, Participant 4 argued that ‘normal’ people are disproportionately victimised by surveillance, specifically drawing comparisons to the Great Firewall of China:71

“It is another problem I have with it, and it is the same as in China, where people who work for foreign companies and people who are respected members of society are easily able to circumvent the great firewall. The normal or ordinary people, the working class, aren’t. So, it is a dividing mechanism as well. I think eventually the same might happen with mass surveillance. I guess it kind of already is, it is just not very obvious. It depends on being motivated and having the means to do it.”

71 The Great Firewall of China is the name given to the collection of legal restrictions and technologies used by the Chinese Government to filter content and control access to information on the internet (Ensafi, Winter, Mueen, & Crandall, 2015, p. 61).

Similarly, another participant referred to their comparatively privileged position to illustrate the unequal impacts of surveillance programs across Australian society. Participant 17 made the following observation:

“For my situation, metadata may not actually tell you a whole lot because I talk to a lot of the same people. But if I was in a different position, a less privileged position, I would probably be more concerned about it.”

What this illustrates is how attempts to determine the ‘proportionality’ of surveillance powers can be attuned to the distribution of ‘costs’ and ‘benefits’ across society. Relatedly, such strategies of subjectivation will be further unpacked in Chapter Five.

Overall, it is argued this category of ‘proportionality’ is used by privacy advocates to contest the comparative ‘costs’ and ‘benefits’ of surveillance programs as tools for criminal investigations. Yet, by virtue of its foundation in consequentialism, the category is similarly vulnerable to the logics of preventive justice.

4.3.2.3. PRIVACY AND THE ACCOUNTABILITY PRINCIPLE

This section examines the third signification strategy used by privacy advocates to contest the ‘problem of going dark’ – the principle of accountability. This discourse contests the appropriateness of surveillance programs as tools for criminal investigations by criticising an associated lack of accountability to civil society and the judiciary. Indeed, the notion that the surveillance of citizens “should require a warrant or other order if investigators want to breach privacy” (Participant 5), or “if you had to at least get a warrant to gain access, it is a very different scenario” (Participant 1), was repeatedly articulated. In total, seven (7) participants articulated a category of ‘accountability’. For example, even Participant 2, who expressed anti-statist and techno-libertarian beliefs, acknowledged that “for argument’s sake I say they have a right [to engage in surveillance] some of the time, a warrant is the only check on this power”. Similarly, Participant 20 noted that the authority of government surveillance depends upon judicial oversight:

“I think that surveillance should be targeted, and we should have some structures around surveillance practices in order for them to be legitimate. That involves things like judicial warrants and transparent oversight measures, which we generally do not have.” (emphasis added)

Evidently, this discourse of ‘accountability’ is articulated to contest the ‘problem of going dark’ by criticising the inadequate accountability of surveillance programs to the judiciary and to civil society advocates. As such, the discourse reflects a signified faith in institutions to temper authoritarian and populist sentiments that threaten to erode human rights protections.

Yet, this category of ‘accountability’ is also vulnerable to distortion by the logics of preventive justice. Built into the discursive logic is an acknowledgement that the state can ‘reasonably’ justify surveillance legislation under circumstances where judicial warrants provide a bulwark against the abuse of such powers. For example, Participant 1 made the following observation:

“I think the data being there, okay it may not be ideal, but we live in the modern world and it does need to be used for this kind of purpose. But, make it so there is accountability and access control, so it is really people who need to access that kind of information, not people just going on a fishing expedition.”

Similarly, Participant 4 remarked:

“I would expect an argument about how transparent it can be, because obviously they are investigating, and they cannot just make that information public. Maybe after some time period, it could be public. I certainly think there should be a way to see.”

The principle of accountability therefore acknowledges that there needs to be limits to the transparency of surveillance programs, if they are to ‘work’ effectively. Participant 16 noted that “[i]t is recognised that investigations are necessary, but they need to be constrained and government agencies must be transparent with little capacity to avoid proportionality assessments”. Similarly, Participant 18 made the following point:

“If an individual is informed of the type of information that is being collected, and that information serves a public good, which is explained, tested, debated and independently verified, then governments have that right. For example, law enforcement operations where an open and transparent system of judicial oversight has approved a warrant. However, as a base level, citizens have the right to their own individual privacy in all cases where a law has not been broken.”

Overall, participants invoke this category of accountability to contest the appropriateness of surveillance as a tool for criminal investigations in the absence of oversight by civil society and the judiciary. This discourse may still be articulated where the necessity and proportionality principles are satisfied. Yet, because the category is similarly contingent upon this ambiguous notion of ‘reasonable’ oversight mechanisms, it is likewise vulnerable to the consequentialist logics of preventive justice. Indeed, as the latter extract reveals, if citizens are presumed to have privacy ‘at a base level’, this establishes a discursive framework that places the onus upon – and opportunities for – governments to justify intrusive surveillance powers.

4.3.2.4. PRIVACY AND THE (COLLAPSED) HARM PRINCIPLE

This sub-section critically analyses the fourth signification strategy used by privacy advocates to contest the ‘problem of going dark’ – the (collapsed) harm principle. This category contests the moral equivalence drawn between ‘privacy protection’ and ‘criminal evasion’ via discursively neutralising privacy-enabled harms.72 Indeed, the ‘problem of going dark’ is predicated upon a claim that ‘privacy protection’ enables the ‘harm’ of ‘criminal evasion’. This moral equivalence is used to justify interference via invoking John Stuart Mill’s (1859, p. 13) harm principle – “the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others”. Yet, the category of ‘harm’ has collapsed due to its contested character (Harcourt, 1999, p. 139). This sub-section therefore argues the articulation of ‘harm’ to contest the ‘problem of going dark’ is vulnerable to the consequentialist logics of preventive justice.

The interviewed participants neutralised the harms of privacy-enhancing technologies within their articulated counter-arguments to the ‘going dark’ problem. In total, thirteen (13) participants demonstrated some form of harm neutralisation.

72 The concept of ‘neutralisation’ was proposed by criminologists Gresham Sykes and David Matza (1957, p. 666) to refer to the “justifications for deviance that are seen as valid by the delinquent”. However, the theory of neutralisation has also been applied for understanding how people rationalise harms in everyday contexts. As an example, see Lois Presser’s (2015) Why We Harm for a comprehensive explanation of the techniques of neutralisation supported with empirical data.

Indeed, the practice was evident in the discursive justifications for the Go Dark Against Data Retention (2015) and National Get a VPN Day (2017) campaigns. First, there is a strategy of denying any injuries enabled by privacy-enhancing technologies, with discourses displacing focus to the associated benefits of cybersecurity. For example, Participant 2 made the following point:

“Encryption has an adverse and conversely beneficial impact on cybercrime because enforcing your right to privacy means you are less likely to come to the attention of law enforcement, but it also means that you should be able to do a lot less damage as you can’t crack encryption. Stealing encrypted data is useless without the decryption keys.”
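The participant’s final claim – that stolen ciphertext is worthless without the corresponding key – can be illustrated with a minimal sketch, assuming the open-source Python cryptography library; the plaintext and variable names are invented for illustration:

```python
from cryptography.fernet import Fernet, InvalidToken

# The data owner encrypts a record under a secret key known only to them.
key = Fernet.generate_key()
token = Fernet(key).encrypt(b"account 1234-5678: balance $10,000")

# An attacker who exfiltrates `token` but not `key` cannot recover the
# plaintext: decryption under any other key fails authentication.
attacker = Fernet(Fernet.generate_key())
try:
    attacker.decrypt(token)
except InvalidToken:
    print("stolen ciphertext is unreadable without the original key")
```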

Similarly, Participant 15 demonstrates how the moral responsibility for injuries enabled by privacy-enhancing technologies is also discursively neutralised:

“There is not really a way to maintain the cryptography used to secure financial transactions but weaken it when people are talking about terrorism. There is no way to know until you have already done it. It is an incredibly bad framing of the situation.”

Indeed, this illustrates how privacy advocates are encumbered by a liberal framework that relies upon the harm principle to negotiate the relationship between privacy and security as moral values. This requires them to neutralise the moral responsibilities of privacy-enhancing technologies in enabling cybercrime (i.e. harm) when contesting the moral equivalence at the core of the ‘problem of going dark’. For example, participants rejected the claim that cybercriminals are using privacy-enhancing technologies in large numbers, again denying that they are responsible for enabling injury. Participant 12 made the following observation:


“The underlying problem here is that relatively few criminals or terrorists have the technical skills to effectively resist law enforcement and national security. The vast majority of criminals and terrorists are dumb. If they are not dumb, they are still unlikely to have technical training, and even then, they are unlikely to be specialists. Their OpSec nearly always gets broken. Even if they use encryption, they almost always do something else that gives them away.”

Additionally, privacy advocates ‘condemn the condemners’ where they are compelled to neutralise harm by articulating the comparatively greater harms of surveillance. For example, Participant 6 notes how harms enabled by privacy-enhancing technologies are negligible compared to the harm enabled by a potential ‘backdoor’ into encryption:

“The amount of damage that is being done by ‘criminals’ use of encryption technologies is much less than the good that comes from secure communications technologies. For example, the entire banking system would fall over in a day if banks could not connect to each other, if hostile actors could get in the middle of those connections and just mess around with it.”

The point is not whether these neutralisations reflect empirical reality, but how the dominant liberal discourses of privacy compel participants to neutralise the harms of privacy-enhancing technologies to nullify the application of the harm principle. Indeed, there is a tit-for-tat game to construct ‘harm’ in a manner that supports pre-conceived political beliefs (i.e. Harcourt, 1999; 2013). Overall, the contestation of the ‘problem of going dark’ within the dominant liberal framework requires the neutralisation of harms enabled by privacy-enhancing technologies.

Conversely, some privacy advocates acknowledged that privacy-enhancing technologies can unintentionally enable harm (i.e. enable cybercrime and the evasion of criminal investigations). Instead, these participants discursively neutralise any responsibility for privacy-enabled harms by articulating the legitimate purposes of privacy-enhancing technologies promoted within campaigns like Go Dark Against Data Retention (2015). For example, Participant 5 made this point:

“The notion that only criminals or terrorists would want to use such techniques ‘if they are doing something wrong’ is patently wrong. There are many legitimate reasons people want to protect their privacy. We don’t allow this level of intrusion in our private lives, so why should we stand for it online?” (emphasis added)

Similarly, Participant 14 echoed this sentiment:

“There are always going to be issues with this, issues of people trying to break the law without being seen. But there will always be legitimate uses as well, which shouldn’t be criminalised. And the blanket approach that is being proposed by Western governments will sweep up other issues. Look at the Syrian civil war – it has become a huge regional conflict that has touched so many countries in the Middle East. We want reporters and journalists to be able to go into that region safely, we want to know that their communications are safe and private.”

These extracts demonstrate how harm is neutralised by prioritising the intentions of law-abiding citizens, who should not have their freedoms restricted due to the criminal misuse of privacy-enhancing technologies. This variation of the ‘harm’ discourse contests the underlying consequentialist framework itself. For example, three (3) participants made comments about how ‘criminal evasion’ is not a new issue for law enforcement and intelligence agencies, as privacy-enhancing technologies have been known as techniques for evading investigations for a long time:

“I do not think it is a problem, because none of that is new. There are supposedly records of Al-Qaeda in the 1990s where they discuss their use of encryption. So, if you are engaging in organised crime online, you would be using the technologies already. It is not rocket science.” (Participant 8)

“I think a sufficiently motivated criminal enterprise will find the information they need to secure their data. Criminals across time eternal have learnt how to limit their risk of detection by law enforcement organisations. There have always been well-understood rules: you do not write certain things down or you use codewords or ciphers. I do not think that talking about how to meaningfully enhance the privacy of your communications has much of an effect on net lawlessness or criminality.” (Participant 10)

“Essentially, there will always be people who are able to circumvent whatever surveillance methods are put in place. The difficult thing is, it is the people who are most motivate[d] who are going to go to that method, which would include organised criminals. This kind of makes you question the whole idea, in the first place.” (Participant 4)

The key point raised throughout these extracts is that motivated criminals will find, and already have found, ways to circumvent law enforcement investigations, regardless of whether privacy advocates publicise information about how to use privacy-enhancing technologies. This discourse therefore denies any responsibility for the actions of malicious subjects who misuse the technology.


This discursive strategy for neutralising privacy-enabled harms reflects a belief in the moral neutrality of communications technologies. Instead, moral responsibility is placed upon the individuals themselves. For example, Participant 4 questions the claim that the right to privacy might differ within cyberspace:

“That’s well established, the right to privacy. The perceived difference, at least as it appears when said by government, about what you do online is, you shouldn’t have the same expectations.” (emphasis added)

This signification strategy ascribes responsibility to individuals, neutralising the responsibility of privacy-enhancing technologies as enabling cybercrime. It reflects the ‘hacker ethic’ libertarianism embodied by crypto-anarchist Timothy C. May’s (1994; quoted in Moore & Rid, 2016, pp. 24-25) notorious “Crypto = Guns” motto. As such, privacy protection is not a problem, rather it is the people who abuse it to commit crime who are the problem. Indeed, a ‘Crypto = Guns’ discourse was invoked to neutralise the harms of privacy protection among a sub-set of interviewees, both explicitly and implicitly. For example, Participant 1 argued:

“Saying ‘well, now the bogeyman is encrypted messaging, so we’ll make a law that says we cannot use encryption’ is not solving the fundamental problem. It is like blaming guns for crime, it’s not addressing the problem at its root cause… The argument that people make, that digital devices allow bullying to follow kids home from the schoolyard, I do have some sympathy for that sort of thought. But, one of the things that we argued was that responses to these sorts of things should be technologically neutral. So, you make bullying illegal, in whatever form, whether it happens on the playground, in-person or via a device.” (emphasis added)


Similarly, Participant 11 remarked:

“I sympathise with those arguments, but we can also trace those arguments back to the printing press. If you let people read, the criminals will read too. It doesn’t really make sense. It is sort of a technophobic argument, almost like the luddites, not wanting to engage with technology… That blindsided me a bit, that people didn’t like those technologies. Because it allows folks to engage in things like child pornography. But it is fundamentally a pro-censorship line. Although I sympathise with it, I do not know if I am really on-board. It usually comes from people who are not technologists.” (emphasis added)

It is important to note here that the participants are not condoning the harms enabled by privacy-enhancing technologies. However, it is argued that where privacy advocates are encumbered by a conceptualisation of privacy as non-interference, delineated by the harm principle, they are compelled to neutralise harm in this manner. This highlights the vulnerability of the dominant discourses of privacy to the logics of preventive justice, which frame the debate in consequentialist terms.

A variation of this discourse was to neutralise the harms of privacy-enhancing technologies by drawing equivalence to harms committed by operating a vehicle (rather than a firearm). Yet, the underlying point is the same. “The internet is a public sphere, with some nasty stuff out there,” Participant 6 argued, “but you don’t let your kids play in the traffic, and you don’t let your kids on the internet without supervision.”

Similarly, Participant 13 observed:

“Analogies tend to be useful in these sorts of things. Criminals use cars, for example. There are drive-by shootings, hit-and-runs, and carjacking. It is only a small proportion of the population, and we do not hear calls for cars and road infrastructure to be banned. It is certainly an indirect comparison, but it seems apt.” (emphasis added)

This reflects the need to individualise moral responsibility when neutralising harm, and instead assert the moral authority of privacy protection. Interestingly, this invocation of harm caused by vehicles may reflect a wariness among Australians to draw an equivalence between firearms and privacy-enhancing technologies, which is more clearly grounded within the American political tradition. Yet, it is argued it still reflects the influence of ‘hacker ethic’ libertarianism and its tendency to individualise moral responsibility (i.e. Coleman & Golub, 2008; Steinmetz & Geber, 2015).

Overall, participants neutralised the claims of ‘going dark’ through an argument that the criminal law should be neutral with regards to the technological context of human behaviour. In this sense, the harms enabled by privacy-enhancing technologies are discursively neutralised. For example, Participant 1 argued “one of the things we keep coming back to is that we should be approaching these policy issues from a point-of-view of digital neutrality.” Similarly, Participant 21 argued:

“All legislation has to be technologically neutral, because one of two things happen. First, it may become obsolete very quickly. Second, you can essentially negate or avoid statutory effect by adopting newer technologies. From the legislative drafting point-of-view, it has to be neutral to ensure you are applying law to a variety of similar interactions.”

These extracts reflect sentiments about how criminal sanctions should only be applied to human behaviour directly, rather than controlled via the regulation of technology. Overall, while participants were sensitive to overly simplistic relationships between ‘privacy’ and ‘security’ as values, it is argued they are encumbered by a liberal framework when articulating privacy as a human right to non-interference, which is vulnerable to the consequentialist logics of preventive justice that co-opt the categories of harm, necessity, proportionality, and accountability. It is because they operate within this discursive framework that privacy advocates are compelled to neutralise claims about “legitimate security imperatives” (i.e. Bessant, 2012, p. 4), as conceding otherwise would favour the proponents of the ‘going dark’ argument.

4.4. ASCRIBING ARBITRARINESS TO SURVEILLANCE POWERS

This chapter has thus far examined how privacy advocates are encumbered by liberal discourses that are vulnerable to the logics of preventive justice. This section argues that privacy advocates supplement these discourses and contest the moral equivalence at the core of the ‘problem of going dark’ by ascribing ‘moral arbitrariness’ to the Data Retention Act (2015) and Encryption Access Act (2018). Indeed, the discourses of necessity, proportionality, accountability, and harm do not completely capture the complex meaning of ‘privacy’ as a (contested) moral concept (i.e. Regan, 2002; Newell, 2014a). As such, it is argued this signification strategy is used to supplement the dominant liberal framework. Overall, the section argues that the moral equivalence between ‘privacy protection’ and ‘criminal evasion’ is contested by ascribing moral arbitrariness to surveillance powers.

It is uncontroversial to note that there exists a relationship between the concepts of privacy, surveillance, and democracy (i.e. Haggerty & Samatas, 2010). As such, the notion of ‘moral arbitrariness’ describes how privacy advocates contest the ‘problem of going dark’ by reframing the issue away from individual rights and towards analyses about the power dynamics of collective decision-making processes. In total, fourteen (14) participants explicitly articulated the importance of democratic decision-making for defining the scope of privacy rights and corresponding surveillance powers.

For example, it was a repeated concern that “democracy only works if you have the right to privacy, the freedom of association, and the freedom of movement” (Participant 9) or “the retention of metadata is wrong because it provides finely grained insight into people’s personal and private lives and creates a chilling effect on core institutions vital to a democracy” (Participant 3). Similarly, Participant 5 drew comparisons with recent events in the United States:

“I’m particularly concerned that private data can be used to target journalists, activists and whistle-blowers, profile and discriminate against minorities and crack down on free speech. Just look at the recent case of the US Department of Justice demanding access to the details of everyone who visited an anti-Trump protest site. Given the strength of freedom of speech protections in the US Constitution, these types of politically motivated fishing expeditions are particularly alarming.”

The participant is here referring to the attempts by the US Department of Justice to identify the IP addresses of over 1.3 million visitors to websites used to coordinate protests coinciding with the inauguration of President Donald Trump (Wong & Solon, 2017). These extracts capture how the mere act of collecting and retaining information about citizens can be, by itself, morally problematic. As discussed earlier within Civic Republican Theories of Privacy (Chapter Two, Section 2.4.6), collecting information about citizens can enable the exercise of arbitrary power such as that of a benevolent dictator. The concept of ‘privacy’ thus acts as a bulwark against such arbitrary powers.

For example, Participant 21 observed the following:


“What we currently see is government closing itself off and requiring the population to open itself up. That, within a representative democracy, is the complete opposite of what should be the case. It should be clear transparency into government reasoning, thinking, and action. With the option of complete privacy for the individuals that comprise the aggregate of Australian society.”

Such observations, which criticise the ability of the Australian Government and law enforcement agencies to collect and analyse information about citizens without democratic oversight or transparency, reflect an underlying and implicit discourse of privacy as freedom from arbitrary surveillance powers. This category is therefore similar to the liberal notion of accountability (Chapter Four, Section 4.3.2.3), although it differs in how the absence of oversight mechanisms is discursively linked with a capacity for the state to arbitrarily interfere with citizens. It thus supplements – rather than replaces – the liberal discourses of privacy surveyed above.

Concerns that surveillance powers enable the state to arbitrarily interfere with citizens were repeatedly articulated by participants. Specifically, it is argued that the notion of a ‘chilling effect’ is used by privacy advocates to articulate how surveillance programs arbitrarily interfere with citizens’ freedoms (i.e. Penney, 2015, p. 125). For example, Participant 4 made the following observation:

“The issue with that is subconsciously, generally, most people do change their behaviour when they realise they are being surveilled. There was a survey done in the US of writers, and it was something like one in six, since the Snowden revelations, a couple of years ago, have self-censored. They have chosen not to write on a particular topic, that they suspected would ‘get them on a list’. Basically, they don’t want to do anything that might get them under more scrutiny. And that applies not just to writers, but also to the whole population including social activists and anyone else whose privacy is most important.”

What this participant is observing is that surveillance powers have greater impacts on human behaviour than people consciously recognise. The very fact that surveillance programs cause these effects thus renders them morally arbitrary for privacy advocates. This was explicitly recognised by some participants (in addition to Participant 3 above). For example, that surveillance “has effects that are hard to measure, such as the chilling effect, you place people under surveillance, and they change their behaviour” (Participant 6; emphasis added) and “significant side-effects such as the chilling effect of living within a panopticon” (Participant 12; emphasis added). The concern articulated here is that arbitrary surveillance powers consciously and subconsciously restrict human freedoms. Overall, the signification strategy is used to supplement the liberal discourses, by highlighting how ‘chilling effects’ are not contingent upon individual-level judgements about privacy violations.

These illiberal discourses of privacy include judgements about the impact of democracy upon surveillance (rather than merely the other way around). They articulate the importance of non-arbitrary distributions of political power as a requirement for freedom from arbitrary forms of surveillance. As such, privacy advocates articulated scepticism about the use of privacy-enhancing technologies in isolation from other forms of collective resistance to the Data Retention Act (2015) and Encryption Access Act (2018). For example, Participant 10 articulated their thoughts about the adequacy of promoting privacy-enhancing technologies within the Go Dark Against Data Retention (2015) campaign:


“Technological determinist bullshit! The answer to the machine is in the machine! There is this dream that we can build this alternative society and will not be subject to the rules of ‘the man’. You have Dan Gilmore’s quote ‘the internet treats censorship as damage and rats around it’. You have John Perry Barlow, I am not going to quote his entire declaration, who makes the argument that there will always be a way for those who are savvy enough to evade regulation. I do not think it is realistic… I don’t think you can opt-out of the political process. I think we have obligations to participate in the political process.” (emphasis added)

This notion that ‘you can[not] opt-out of the political process’ reflects a more communitarian and civic republican notion of ‘privacy’ as a product of collective decision-making processes, rather than as a narrowly-conceived individual interest. It thus frames the political legitimacy of surveillance power as contingent upon expressions of democratic consent, rather than the consent of any particular subject.

Similarly, Participant 16 made the following remarks:

“So, while I can see how circumvention is appealing, there remain questions about its actual effectiveness. There are also issues about false confidence, or delusions, of being able to escape all of this. If we come to think it is not a problem anymore, we can retreat into individual or isolated cultural groups. I think that is part of the problem. It is one of the signs of an authoritarian regime, nobody knows who to trust and there is no vitality in civil society.”

This illustrates how participants consider civic deliberation, rather than individual consent, as important for establishing non-arbitrary forms of surveillance power. The reasons offered for this were varied. For example, Participant 17 argued that individualising the problem leads to “a tendency to disengage from larger processes… artificially narrow the problem we face”. More pragmatically, Participant 15 argued that they “do not think focus[ing] on technological innovation is the most useful use of resources by an activist organisation”. Finally, Participant 19 criticised the Go Dark Against Data Retention (2015) campaign as “just advocating for civil disobedience” and “mostly irrelevant for our day-to-day interactions with governments and the private sector”. Overall, it is argued these extracts demonstrate how privacy advocates ascribe moral arbitrariness to surveillance legislation that is enacted without democratic decision-making processes, as evidenced in the sentiment that withdrawing from civic deliberation is, by itself, morally problematic.

This highlights how the category of moral arbitrariness is at the core of conceptually differentiating ‘privacy protection’ from ‘criminal evasion’. Indeed, many advocates of ‘privacy protection’ are concerned privacy-enhancing technologies are counterproductive where they undermine democratic means of reform. For example, Participant 10 reflected on the Go Dark Against Data Retention (2015) campaign by highlighting it may have produced unintended consequences:

“Those campaigns [e.g. Go Dark] may have been effective at showing the ineffectiveness of the laws, but there is probably also a perverse effect. They assure the people who have some reason to care about the introduction of data retention that there is really nothing to worry about. The problem with going after those who are sophisticated enough to avoid the system is that the system does not really affect them.”

The idea that privacy-enhancing technologies may have ‘perverse’ effects on civic deliberation recurred frequently, and it is argued this demonstrates how the signified morality of ‘privacy protection’ is contingent upon the moral arbitrariness of corresponding surveillance legislation that has been enacted without adequate democratic consent. For example, in a normative sense, Participant 16 criticised privacy protection campaigns as withdrawing from civic debate:

“I am sceptical because I think it may not work, but also because it may divert people’s attention from the high-level problems of constitutional or legal inadequacy. In the past that would be taken as a sign of a police state or a repressive or authoritarian regime. Because of the difficulty of dealing with the subtle tactics of incrementalism and disengagement at the bureaucratic level, and the level of technical complexity around solutions like encryption, I think the danger is that you get quite a fragmented response… So, while I can see how circumvention is appealing, there remain questions about its actual effectiveness. There are also issues about false confidence, or delusions, of being able to escape all of this. If we come to think it is not a problem anymore, we can retreat into individual or isolated cultural groups. I think that is part of the problem. It is one of the signs of an authoritarian regime – nobody knows who to trust and there is no vitality in civil society.”

Participant 17 similarly criticised this type of political withdrawal:

“I think to be entirely preoccupied with individual-level information security, while helpful, is a fool’s errand. If we sit back and run skills training workshops about securing your data, that is fantastic. Any kind of awareness-raising and skill-sharing is a good thing. But there is a tendency to disengage from larger processes, and that means we’re not going to be able to make any progress. One thing I have picked up on is how a lot of NGOs and civil society groups that say, ‘we’re about giving you the skills necessary to protect your privacy’. But that is always a reactive position… InfoSec training workshops are not having the desired effect of incorporating those skills to individual or institutionalised practices. So, we have artificially narrowed the problem we face, and we are advancing a solution that is not having the effect we want. It is not having the effect we want.”

The extracts convey how privacy advocates intuitively articulate how ‘privacy’ does not exist under conditions where individuals are required to use privacy-enhancing technologies as a means to avoid the gaze of morally arbitrary forms of surveillance. Under such conditions, an act of ‘privacy protection’ is considered to be the product of coercion, rather than being freely chosen. As such, it is argued the extracts collectively capture how individual-level privacy protections are unavoidably ‘reactive’ forms of political resistance and highlight the importance of establishing non-arbitrary forms of surveillance power.

Finally, a small sub-group of participants articulated the view that widespread adoption of privacy-enhancing technologies actually has a beneficial impact on the quality of civic deliberation. For example, Participant 4 argued:

“What I really hope is that it is so normal to circumvent surveillance that it becomes useless and it is abandoned. That is the ‘end game’ from the technical side. I don’t think that’s ideal though. I think it will be much better if we can avoid going down that route.”

Similarly, Participant 2 articulated how privacy-enhancing technologies act as a long-term strategy for motivating technology companies to lobby for change:


“An increased uptake in encryption to make sure the government gets as little as possible so they are forcing the cost onto the ISP for no gain, eventually the ISPs will lobby the government to remove it.”

These observed disagreements reflect competing claims about how privacy-enhancing technologies affect democratic decision-making processes. Yet, at the core of the extracts, the category of ‘moral arbitrariness’ is still invoked to differentiate the (justified) forms of privacy protection from (unjustified) methods of evading criminal investigations. In doing so, the category insulates privacy discourses from conceptual distortion by the logics of preventive justice. Overall, it is argued this illiberal category of ‘moral arbitrariness’ is invoked to contest the moral equivalence at the core of the ‘problem of going dark’ used to justify intrusive surveillance powers.

4.5. CHAPTER CONCLUSION

This chapter has examined the signification strategies used by Australian privacy advocates to differentiate the meaning of ‘privacy protection’ from methods of ‘criminal evasion’, which is a moral equivalence at the core of the ‘problem of going dark’ that was used to justify the Data Retention Act (2015) and Encryption Access Act (2018). The first section argued that the articulated meaning of ‘privacy protection’ is constructed via discourses that define the conceptual relationship between privacy and security as moral values. Thereby, it was argued that the moral equivalence between ‘privacy protection’ and ‘criminal evasion’ is a by-product of relational discourses, where ‘privacy’ may be equated with a ‘threat to security’ or as ‘dependent on security’. The second section argued that privacy advocates are encumbered by a liberal framework when contesting the ‘problem of going dark’ that is vulnerable to distortion by the logics of preventive justice. Specifically, it was argued that the categories of necessity, proportionality, accountability, and harm can be co-opted to justify intrusive surveillance powers, as empirical claims about surveillance are unavoidably politicised (i.e. Haggerty, 2009; Lyon, 2003). Thus, the results reaffirm how notions of ‘harm’ and ‘risk’ enable proponents of surveillance to co-opt liberal discourses (i.e. Harcourt, 1999; Zedner, 2007a). Finally, the third section argued that privacy advocates alternatively contest the ‘problem of going dark’ by ascribing moral arbitrariness to the surveillance powers established under the Data Retention Act (2015) and Encryption Access Act (2018). This category of ‘moral arbitrariness’ is constructed using illiberal discourses that reject individual notions of privacy in favour of analysing the power dynamics necessary for establishing non-arbitrary forms of surveillance (i.e. Newell, 2018; Hoye & Monaghan, 2018). The discourse is argued to supplement the dominant liberal framework. Overall, the chapter has provided an answer to the first research question: Australian privacy advocates differentiate the meaning of ‘privacy protection’ from methods of ‘criminal evasion’, and thus contest the equivalence at the core of the ‘problem of going dark’, via a discursive strategy of ascribing moral arbitrariness to surveillance powers.


Chapter Five: The Subjects of Surveillance Laws

5.1. INTRODUCTION

The psychosocial process of subjectivation describes how discourses position subjects in relation to one another. The previous chapter established how privacy advocates use the signification strategy of ascribing moral arbitrariness to surveillance powers to differentiate the meaning of ‘privacy protection’ from methods of evading criminal investigations. This chapter examines the corresponding strategies for discursively positioning the subjects of Australia’s surveillance legislation, focusing on how privacy advocates position the subjects of the Data Retention Act (2015) and Encryption Access Act (2018). The chapter argues that privacy advocates position these subjects within relations of domination, contesting the capacity of citizens to confer non-arbitrary authority to metadata retention and encryption access laws. This subjectivation strategy is characterised by an external observer judging a power dynamic as coercive, rather than the subjugated party (Laclau & Mouffe, 1985, p. 154). Overall, the chapter provides an answer to the second research question.

The first section analyses the discursive positioning of citizens as culpable victims of the harms caused by Australia’s surveillance legislation. In this sense, they are constructed as enabling the Data Retention Act (2015) and Encryption Access Act (2018) due to their technological ignorance and political apathy. The second section analyses the discursive positioning of political elites as antagonists primarily responsible for the harms of Australia’s surveillance legislation. Specifically, it is argued that 1) the Australian Government is discursively positioned as offering disingenuous and ignorant justifications for expanding surveillance powers; 2) law enforcement agencies are discursively positioned as controlling the domestic political agenda, deferential to the international intelligence community, and encumbered by the expectations of preventive justice; and 3) technology companies are positioned as complicit in the commodification of personal information, yet also as capable of corporate social responsibility. The third section examines the relational properties of these subjectivation strategies, arguing that privacy advocates position citizens as dominated by political elites. In this sense, privacy advocates construct citizens as the victims of coercion within a political system characterised by civic corruption, and, therefore, incapable of conferring non-arbitrary authority to Australia’s metadata retention and encryption access laws. Overall, the chapter argues the ascription of moral arbitrariness to surveillance powers is dependent upon this discursive positioning of subjects within relations of domination.

5.2. POSITIONING CITIZENS AS CULPABLE VICTIMS

This section examines the discursive positioning of citizens – also referred to as ‘the public’ and ‘the people’ – by participants in the course of differentiating the meaning of ‘privacy protection’ from ‘criminal evasion’. Specifically, the section demonstrates how privacy advocates discursively position ‘citizens’ as culpable victims of Australia’s surveillance legislation due to their technological ignorance and political apathy. The first sub-section unpacks the subjectivation strategy of ‘technological ignorance’ used to position citizens as lacking the knowledge necessary for informed views about surveillance legislation. The second sub-section unpacks the subjectivation strategy of ‘political apathy’ used to position citizens as lacking the necessary motivation to take steps to protect their privacy rights.


Throughout, it is observed how these subjectivation strategies articulate how the ‘enactment of citizenship’ is dependent upon a technologically-informed and politically-empowered citizenry (i.e. Hintz et al., 2017). Overall, it is argued these subjectivation strategies discursively position Australian citizens in different ways yet contribute to their overall subjectivation as culpable victims of the harms caused by the Data Retention Act (2015) and Encryption Access Act (2018).

5.2.1. POSITIONING CITIZENS AS TECHNOLOGICALLY IGNORANT

Australian citizens, or ‘the people’, are discursively positioned as technologically ignorant by the advocates of privacy protection. In this sense, they are partially responsible for the harms caused by surveillance legislation due to their inaction. In total, nine (9) participants articulated a discourse of ‘technological ignorance’ during research interviews. For example, Participant 7, an advocate involved in crafting the messages for the Citizens, Not Suspects (2014) and Go Dark Against Data Retention (2015) campaigns, described the levels of technological and political literacy among the general public in the following terms:

“We conducted polling and surveying to determine the levels of literacy about the issue. Our sample was also more likely to be informed about the issue, and it was still clear that only one in twenty had a strong working understanding, even a few months into the debate. I recall some polling around the passage of the laws that also suggested that the majority of Australians just didn’t know.”

This sentiment was emblematic of the nine participants who positioned citizens as ignorant. Importantly, this discourse is articulated from an external subject-position capable of judging the relationship between citizens and elites as oppressive. This external subject-position will be unpacked in further detail within Chapter Six (Section 6.2.2). For now, the implication is that citizens support surveillance programs because they lack the knowledge necessary to make ‘correct’ decisions.

This was a common view among the interviewed sample. To demonstrate the ignorance of ordinary citizens who support invasive surveillance powers, participants described common misunderstandings. For example, Participant 1 argued:

“People do not understand that with browser fingerprinting, it does not matter what you do, if they get enough data, they will probably be able to tie it back to you somehow.”

Browser fingerprinting refers to a process of tracking actions using identifiable information about a specific user’s internet browser, such as whether they have plugins installed, the size of font being used, or the size of the browser window (Upathilake, Li, & Matrawy, 2015, p. 1). This information can be aggregated to identify the browsing behaviours of unique internet users across time (a minimal sketch of this aggregation logic is provided at the end of this sub-section). The invisibility of this process, coupled with the lack of knowledge among the public, is invoked to position ‘people’ as ‘not understanding’ the consequences of internet use for information privacy. Participant 4 makes a similar point about email:

“I think, if you ask people what happens when you send an email, they don’t know that it gets copied to lots of different servers and they don’t know that the NSA keeps a copy of that. And it’s not just email, it’s everything. So, they have a reasonable expectation that the person who reads that email is the person they sent it to, and no one else.”

Again, the participant is describing citizens as a distinct group – ‘people’ and ‘they’ – who lack an understanding of how email differs from traditional mail, and thereby fail to appreciate the implications of the process for information privacy. This sentiment that “[t]here was a lack of detailed understanding” (Participant 17) and “the implications are not usually thought through” (Participant 21) was commonly articulated. Thus, it is through their own ignorance (‘they don’t know’) that citizens are being victimised by the harms of surveillance. Overall, it is argued privacy advocates use this subjectivation strategy to discursively position citizens as currently lacking the technical knowledge necessary to ‘enact citizenship’ through informed deliberation about surveillance laws (i.e. Hintz et al., 2017).
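To make concrete the browser fingerprinting process described above, a minimal sketch – with entirely hypothetical attribute values – shows how individually innocuous browser characteristics can be aggregated into a stable identifier:

```python
import hashlib

# Hypothetical attributes observable by any website a browser visits.
attributes = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64)",
    "screen": "1920x1080",
    "timezone": "Australia/Brisbane",
    "fonts": "Arial,Helvetica,Noto Sans",
    "plugins": "pdf-viewer",
}

# Hashing the sorted attributes yields a stable identifier that can link
# the same browser across websites and sessions, without using cookies.
fingerprint = hashlib.sha256(
    "|".join(f"{k}={v}" for k, v in sorted(attributes.items())).encode()
).hexdigest()
print(fingerprint[:16])  # a short, persistent pseudo-identifier
```

The analytical point, for participants, is that none of these attributes appears sensitive in isolation; identification emerges only from their aggregation, which is precisely the process that remains invisible to ordinary users.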

5.2.2. POSITIONING CITIZENS AS POLITICALLY APATHETIC

A similar, yet distinct, subjectivation strategy used by participants to discursively position Australian citizens as culpable victims was the category of political apathy. This positioning was articulated by a minority of six (6) participants within the total sample. For example, Participant 14 positioned the public in the following, subtly different, manner:

“I do not think the person on the street would be able to quickly and easily understand the issues there. In Australia there are problems with political engagement and apathy, and I think that has worsened over the last five years. I like to hope that the trend will reverse. But, with that level of disengagement, a lot of people are going to be unwilling to engage with complex issues like that, especially since it relies upon understanding of networking and how the internet works under-the-hood. Even to understand what metadata is, how to collect it, and how it will be used.”


The difference here is small yet significant. Rather than positioning citizens as incapable of understanding the mechanisms and consequences of government surveillance legislation, the participant is articulating a perceived unwillingness among ordinary citizens to ‘engage with complex issues’. This sentiment was also expressed by Participant 21, who remarked that “[t]he Australian population also has, with respect, a tendency towards apathy” and “[p]art of that apathetic approach is we see little public debate”. Overall, it is argued this subjectivation strategy positions citizens as currently lacking the political power to ‘enact citizenship’ through civic deliberation about surveillance laws (i.e. Hintz et al., 2017).

The subjectivation strategies for discursively positioning Australian citizens as ignorant and apathetic conceptually overlap, as technical ignorance can be both a cause and consequence of political apathy. For example, Participant 13 draws such a conceptual link here:

“It is really easy for people to start glossing over when you start discussing something technical. Encryption is pure mathematics, right. And people go, ‘oh, I don’t know how that works, it is too difficult for me to understand’.”

This subject-positioning is more nuanced. The participant is not suggesting that citizens are necessarily incapable of understanding the technicalities behind the Encryption Access Act (2018) and is instead suggesting ‘people’ are simply unwilling to engage because of the perceived complexity. A similar idea of ‘learned helplessness’ was raised by Participant 16:

“Another day, another leak. It breeds a passivity, a learned helplessness, and they don’t notice that the water is getting warmer. The data breaches are already quite huge, and it would need to be unfathomable at this point. The largeness of the breaches seems to be diluting the impact. They don’t reach the threshold of a ‘big new thing’.”

This parable of a frog being boiled alive within a gradually heating body of water is an argumentative strategy frequently used by human rights advocates to warn against the gradual erosion of civil liberties (Volokh, 2003, p. 1105). In this instance it is invoked to demonstrate how data vulnerabilities built into surveillance programs are increasingly normalised, thereby preventing the wider mobilisation of resistance. This clearly frames citizens as victims of surveillance legislation, yet also as culpable due to their inaction. Finally, the concept of a ‘networking effect’ was invoked by Participant 12 to explain this lack of political will:

“For example, everyone uses Facebook because everyone else does. I’m not on Facebook, by the way. But there are also things like Yellow.73 I got invited to Yellow, but I’ve never used it because none of my friends do, so why would I care? I don’t care. The same is true of other new platforms. People are constantly launching new platforms, but if no one is using them, what is the point?”

Here, again, the causal explanation is not that citizens are inherently ignorant. Instead, they are positioned as uncritically accepting the conveniences of technology despite their associated interference with privacy rights. Indeed, as Participant 14 similarly remarked, “Facebook, everyone is on there, you can talk to everyone and you are kind of shooting yourself in the foot if you do avoid it”. Overall, whether it is due to ignorance, apathy, or an interaction of the two characteristics, it is argued that privacy advocates discursively position ‘citizens’ as the culpable victims of surveillance harms, based upon the notion that the ‘enactment of citizenship’ is dependent upon a technologically-informed and politically-empowered citizenry that is capable of engaging in civic deliberation (i.e. Hintz et al., 2017).

73 The participant here is referencing a relatively unknown mobile-based social media platform named Yellow, which after becoming embroiled in various controversies was re-named Yubo. See Chambers (2018, para. 8) for further information.

5.3. POSITIONING POLITICAL ELITES AS ANTAGONISTS

The previous section established how privacy advocates use multiple subjectivation strategies to discursively position citizens as culpable victims of the harms produced by the Data Retention Act (2015) and Encryption Access Act (2018). This section analyses the associated strategies for discursively positioning political elites as the primary antagonists responsible for this legislation. Specifically, this includes the subjectivation of several sub-groups, including the Australian Government, law enforcement agencies, and technology companies. The analysis is divided into three corresponding sub-sections. Within the first sub-section, it is argued that the Australian Government is positioned as offering disingenuous and ignorant justifications for surveillance legislation. The second sub-section argues that law enforcement agencies are positioned as manipulating the domestic political agenda, deferential to the international intelligence community, and encumbered by the expectations of preventive justice. The third sub-section argues that technology companies are discursively positioned as complicit in the commodification of personal information, yet also as capable of exercising corporate social responsibility. Overall, the section argues that political elites are primarily positioned as antagonists, yet occasionally also as partial victims and potential allies.


5.3.1. THE POSITIONING OF THE AUSTRALIAN GOVERNMENT

This sub-section examines the subjectivation strategies for discursively positioning the Australian Government as a primary antagonist responsible for surveillance legislation. First, it is demonstrated how privacy advocates are sceptical of the sincerity of the ‘going dark’ argument advanced to justify the Data Retention Act (2015) and Encryption Access Act (2018), and thus position the Australian Government as disingenuous. Second, it is demonstrated how the Australian Government is discursively positioned as incompetent due to its perceived ignorance of the operational limits of surveillance technologies. Overall, the sub-section argues that, whether privacy advocates articulate a subjectivation strategy of disingenuousness or incompetence, the Australian Government is positioned as a primary antagonist.

5.3.1.1. POSITIONING THE GOVERNMENT AS DISINGENUOUS

This sub-section examines how privacy advocates use a subjectivation strategy of discursively positioning the Australian Government as disingenuous by describing the ‘going dark’ argument as a smokescreen obscuring other intentions. This involves an explicit rejection of the moral equivalence within the ‘going dark’ argument – that methods of privacy protection enable the evasion of criminal investigations. Overall, nine (9) participants within the interviewed sample positioned governments as disingenuous subjects. These participants constructed discourses that questioned the intentions and aims of the Data Retention Act (2015) and Encryption Access Act (2018). For example, Participant 10 diplomatically made this point in relation to the public debate surrounding the Data Retention Act (2015):


“I think the justifications were not made in good faith. Let me qualify that. There were political justifications given that were justifying the requests of national security and law enforcement agencies. For a long time, these organisations have been asking for greater access to data. The justifications given by Prime Minister Turnbull and Attorney-General Brandis are post-hoc justifications. They do not, in themselves, provide a compelling reason. They were used as political rhetoric to secure the passage of the scheme, but they do not stand up to rational scrutiny.”

The discourse positions the Australian Government, and more specifically, Prime Minister Malcolm Turnbull and Attorney-General George Brandis, as offering bad faith justifications for the metadata retention program. As noted within Chapter One (Section 1.2), these justifications concerned the empirical claim that a mandatory metadata retention program was necessary to “prevent the further degradation” of Australia’s intelligence agencies (Turnbull, 2014, p. 12560). A similar argument was also invoked to justify the Encryption Access Act (2018), based upon a claim that Australia’s law enforcement and intelligence agencies require access to encrypted communications to counter the ‘problem of going dark’ (Johnson, 2017, para. 12; see also Weimann, 2016). In positioning the Australian Government as politically disingenuous, privacy advocates contest the earnestness of these justifications.

In discursively positioning the Australian Government as disingenuous, participants explicitly questioned the feasibility of any moral or legal duty placed upon technology companies to provide ‘technical assistance’ to law enforcement as required under the Encryption Access Act (2018). For example, Participant 4 directly questioned why the government was pursuing encryption access laws, remarking, “I suspect that people realise it is not feasible… [s]o, I wonder if they are actually trying to achieve something else and this is kind of a set up.” Similarly, Participant 10 made the following point:

“I do not think there is a serious end-game in requiring a backdoor to encryption algorithms. I think the people who are serious about this recognise how dangerous an idea that would be, because there would be no ability to ensure it does not fall into the hands of unauthorised parties. They know it would make everyone less safe. So, what is going on? The goal is not to ensure that Facebook introduces an end-to-end encryption protocol with a backdoor built in. The goal is to ensure Facebook can be pressured – either through a warrant or not – to release a plaintext copy of, for example, WhatsApp messages that it holds.”

Within this discourse, the Australian Government is explicitly positioned as knowing that some of the legal duties placed upon technology companies are not technologically feasible. Participants instead reasonably presume that the ‘industry assistance’ provisions (Chapter One, Section 1.2) merely compel companies to avoid adopting end-to-end encryption. This fits into the broader positioning of the Australian Government as a primary antagonist. Participant 18 also makes this point:

“In many situations, a backdoor isn’t even possible, the very way that encryption is employed means that others cannot intercept. I’m more concerned about the public rhetoric in this push. A government convincing the public that encryption is a bad thing. That’s not the way we want this to go.”

Here, it is apparent that privacy advocates are inherently distrustful of the intentions of the Government, positioning it as manipulating public opinion. Participant 17 addresses this point where they state that “I think ‘going dark’ is a misnomer… [i]t deploys a metaphor that misleads”, while Participant 2 stated “I’d argue the notion that terrorism will flourish because of ‘national security agencies going dark’ is just as bullshit as the base statement”. The sentiment that ‘going dark’ is misleading or ‘bullshit’ was common. Through questioning the sincerity of the ‘going dark’ argument, privacy advocates discursively position the Australian Government as politically disingenuous and acting in bad faith.

Another subjectivation strategy for positioning the Australian Government as disingenuous involves targeting the use of the term ‘metadata’ within public debates to describe the mass surveillance powers established by the Data Retention Act (2015). For example, Participant 4 made the following observation:

“In terms of whether or not it should be private it is exactly the same as data, and the separation of metadata from data seems to have been created for entirely political means… back before 2013, you never heard anyone in the general public talk about metadata. I think the term was used specifically because people do not understand… But labelling it as ‘you have nothing to worry about because it is just metadata’ is just misleading, on purpose.”

The term ‘metadata’ is considered a rhetorical label intentionally used by its proponents to obfuscate debate and confuse ‘people’ who ‘do not understand’ its meaning. This claim was repeatedly invoked to demonstrate how the Australian Government approached the public debate about surveillance legislation in bad faith.

Similarly, Participant 8 observed that “[t]he idea that they are just now waking up to [these problems] is ridiculous… it is so absurd that it makes you wonder, are they afraid of a real debate about this stuff?” Instead, the participant offered a competing interpretation of the relevance of the ‘going dark’ problem:


“For example, the terrorist attacks in Paris a few years ago. Initially, there was discussion that they were using privacy-enhancing technologies to plan and coordinate the attacks. But it turned out they were just using Facebook messaging. I think what is really worrying is the lack of trust. The government also have to trust citizens – they need to acknowledge to them the limitations, explain why they want the changes… It is the Government’s responsibility to demonstrate why people should trust them, beyond saying ‘trust me’ that it is not being used in bad faith.”

The participant here is referencing the November 2015 Paris attacks, in which 130 people were killed in a series of coordinated suicide bombings and mass shootings, including at the Bataclan theatre (CNN, 2015). Consequently, the Australian Government’s ‘bad faith’ arguments are attributed to a lack of trust in ordinary citizens as subjects capable of deliberating about, and conferring consent to, reasonable surveillance powers.

Other participants were more direct in condemning the Australian Government as being intentionally deceitful in their pursuit of social and political control. For example, Participant 11 made the following two-pronged criticism of metadata retention and encryption access legislation:

“If we take those [justifications] on face value, that it is about national security, it is making those organisations lazy. It is saying they cannot keep up with the pace of technology, that they need these blanket provisions to give them access to everything without any kind of justification. With that said, I do not think it is just about national security. I think national security gets blown up to justify extra-judicial surveillance needed to control the population.”


Again, the idea that ‘national security gets blown up’ discursively positions governments as offering ‘bad faith’ arguments to mask their true intentions of social control through ‘extra-judicial surveillance’ powers. This sentiment was repeatedly articulated. For example, Participant 18 argued “[n]ational security is often used as a smokescreen towards pushing for more and more power to law enforcement”, while Participant 16 suggested the legislation was introduced “to increase efficiency, discipline people, and track what is going on”. Similarly, Participant 5 made the following remark:

“I think there is a global trend of governments using the threat of terrorism to justify increases of their power and silence dissent. By ramping up fear, they can manipulate and cajole the public into supporting policies that further restrict their freedoms.” (emphasis added)

These extracts suggest it is through the ‘fear of terrorism’ that ‘the public’ is ‘manipulated’ to support metadata retention and encryption access laws, which are actually about ‘increasing power’ despite being justified via bad faith arguments such as ‘going dark’. Overall, through challenging the sincerity of the ‘going dark’ argument, the discourse clearly positions the Australian Government as articulating misleading arguments to justify surveillance legislation.

In order to position the Australian Government as disingenuous, privacy advocates also drew upon broader social and political issues. In this sense, the Data Retention Act (2015) and Encryption Access Act (2018) are part of a perceived pattern of expanding state power. For example, Participant 11 made this point directly:

“I think if we look at it within the landscape of other things this particular government has done: actively suppressing dissent, making it harder for activists to sue companies, and other stuff outside national security issues. It all leads to suppressing dissent. We have seen historically how having lots of data can be used for those types of aims. A lot of us have been saying this is what the data may be used for, and it is clearly part of a wider program of entrenching political power where it currently is.”

Similarly, Participant 6 argues that “[t]he way the Government has acted in recent years suggests they are trying to exploit information they have on people”, Participant 8 notes that “[e]ven with quite limited information, you can hunt down whistle-blowers… [i]f there are corrupt officers, they can use this information to harass people”, and Participant 16 remarked that “data retention is very similar to how secret police operated… people who were in the Stasi go ‘oh wow, I wish we had this’… it is almost a caricature of a police state”. This discursive strategy of drawing comparisons between the Australian Government’s surveillance powers and a ‘police state’ was also directly articulated by some participants. Participant 2 drew comparisons with authoritarian governments globally:

“I feel like my country is betraying its citizens. We recently saw an article outlining the fact the Australian government gives metadata information to countries like China and Zimbabwe, among others. These are dictatorships, why is Australia giving data on people to dictatorships? These kinds of regimes kill dissidents.”

Similarly, Participant 5 argued that:

“Seeing how repressive regimes overseas have used mass surveillance against their populations to crack down on activism and free speech has compelled me to fight against any similar intrusions here.” (emphasis added)


Notably, the use of the word ‘regime’ here indicates that these governments, and by proxy the Australian Government, lack democratic legitimacy. Altogether, it is argued these extracts position the Australian Government as seeking to use metadata as a means to silence dissent and criminalise whistle-blowers. As such, the Australian Government is positioned as akin to authoritarian states such as China and Zimbabwe.

Overall, it is argued these subjectivation strategies enable privacy advocates to position the Australian Government as a primary antagonist responsible for the disingenuous legitimation of surveillance powers.

5.3.1.2. POSITIONING THE GOVERNMENT AS INCOMPETENT

The previous sub-section argued the Australian Government is positioned as disingenuous in their pursuit of intrusive surveillance powers. However, there was also a complementary discourse of ‘incompetence’ that similarly positions the Australian Government as an antagonist due to their ignorance about the technical limitations of surveillance technologies. A slight majority of eleven (11) participants articulated the view that governments are often ignorant about the limits of surveillance as a means for the social control of cyberspace. For example, Participant 11 observes how public debate about the Data Retention Act (2015) was characterised by ignorance among those responsible for crafting the legislation:

“I also think the debate was characterised by fundamental misunderstandings of how technology works. There were only a few voices in parliament who seem to truly understand the issues. None of the debate was grounded in technical reality.” (emphasis added)


Similarly, Participant 7, who was involved with the Go Dark Against Data Retention (2015) campaign, made the following observations:

“I think that most Australians would be interested to know that there is pretty limited government oversight of it [metadata retention]. This is partly due to a lack of knowledge among government MPs themselves to understand the laws they have brought into place… Watching Senate Estimates in the subsequent years, it is pretty clear that there are perhaps only three Senators in that place that have a working understanding of the concept [metadata].” (emphasis added)

These extracts discursively position the Australian Government quite differently to their characterisation as disingenuous. Instead, they are positioned as fundamentally ignorant of the communications technologies they are seeking to regulate. For example, concerning the technical details of the Data Retention Act (2015), Participant 15 observed that “the number of politicians who understand this area is like half a dozen… it is a tiny amount”. While this positioning does not absolve the Australian Government of moral responsibility, it is clearly distinct from the overtly malicious and deceptive intentions ascribed to them above.

To an extent, the positioning of the Australian Government as incompetent, rather than as disingenuous, frames them as the partial victims of broader circumstances. Here, they are positioned as political subjects who are responsive to public opinion yet similarly lack the knowledge necessary to resolve complex socio-technological problems. For example, Participant 15 made the following remarks:

“Australia is, by far, the least engaged about surveillance among the Five Eyes nations. By a big margin. It is not like EFA, or any of the other digital rights groups, have been silent. It is that we do not get any engagement from inside Canberra. Attempts by the Greens to do something about it just seem to bounce off. The two major parties do not seem to want to do anything with it except wrap themselves in a flag.” (emphasis added)

This observation that politicians want to ‘wrap themselves in a flag’ exemplifies how ‘national security’ is considered to be electorally popular. This frames governments not as manipulators, but as incompetent and passive recipients of wider cultural processes of securitisation. As Participant 1 remarks, “[s]aying in the media that ‘we’re going to do something because of the dirty terrorists’ is a good way to win some votes by being tough on national security”. Participant 21 offered a demonstrative example:

“The other issue is that I am not critical of any particular politician, because I can understand where they are coming from. To naysay this is to say, ‘I would support it if there was a problem,’ and that is simply political suicide. It is political suicide for their career, not in the sense as a responsible member of parliament.”

This clearly positions the Australian Government – and, in particular, individual politicians – as beholden to the views of the electorate, rather than as expressly manipulative of public opinion. Similarly, reflecting on the then-ongoing consultation process for the Encryption Access Act (2018), Participant 14 argued:

“In terms of politicisation of the process, I do not like assuming malice. I think the people behind the national security agenda do have their best interests at heart. They see it as the most important issue to address.”

This rejection of ‘malice’ as the motive ascribed to politicians and the Australian Government clearly contrasts with the disingenuous intentions surveyed above.


Indeed, Participant 8 observed that “the separation between metadata and data is a red herring to distract people, maybe not intentionally as parliamentarians seem to misunderstand it as well” (emphasis added), while Participant 11 noted “[w]hether it is a misunderstanding or wilful ignorance did not come through in the debate” (emphasis added). Thus, some participants were agnostic on the question of whether the Australian Government intentionally misled the public with the ‘going dark’ argument, or whether they are equally misinformed. Overall, in contrast to the disingenuous intentions ascribed to them above, it is argued privacy advocates alternatively ascribe ‘incompetence’ to the Australian Government when discursively positioning them as antagonists responsible for surveillance legislation.

5.3.2. THE POSITIONING OF LAW ENFORCEMENT AGENCIES

The previous section argued that privacy advocates discursively position the Australian Government as antagonists where the latter provide disingenuous or ignorant justifications for surveillance legislation. This section examines the subjectivation strategies used by privacy advocates to discursively position law enforcement agencies as similarly culpable antagonists. The first sub-section argues that privacy advocates discursively position law enforcement agencies as manipulators who control Australia’s domestic surveillance agenda. The second sub-section argues that law enforcement agencies are positioned as deferential to the broader political agenda of the Five Eyes intelligence community. Finally, the third sub-section argues that law enforcement agencies are encumbered by the demands and expectations of preventive justice. Overall, the section argues these subjectivation strategies are used by privacy advocates to discursively position law enforcement agencies as antagonists responsible for Australia’s surveillance legislation, yet also as sympathetic subjects encumbered by practical and political constraints.

5.3.2.1. POSITIONING LAW ENFORCEMENT AS MANIPULATIVE

This sub-section examines the first subjectivation strategy used by privacy advocates to discursively position law enforcement agencies – as political manipulators operating behind the scenes to influence the Australian Government’s surveillance agenda. This strategy was articulated by nine (9) participants within the research sample. Generally, the participants who discursively positioned the Government as ‘incompetent’ tended to position law enforcement agencies as ‘manipulative’. For example, Participant 1 made the following observation:

“The cynic in me would say most of the government and opposition are so completely clueless about technology that it is being driven by law enforcement agencies, rather than by the government.” (emphasis added)

Participant 10 similarly drew a comparison between inept politicians and manipulative law enforcement agencies operating behind the scenes:

“Here, we saw an Attorney-General who had no comprehension of the policy he was promoting. Clearly, he has been asked by law enforcement and national security organisations for a particular policy response, and he is now responsible for selling it to the Australian public without understanding the justifications.”

These extracts clearly draw a parallel between the incompetence of the Australian Government and the competence of law enforcement agencies, who have ‘driven’, or are ‘responsible’ for, the expansion of surveillance powers. Participant 6 made the following observation about law enforcement lobbying for the Encryption Access Act (2018) in the aftermath of the passage of the Data Retention Act (2015):

“As soon as the national security industry within Australia get a reform they wanted, they arrive with a shopping list of other things they desire. Access to encrypted messages is just the next one. It may have been more relevant because there has been an uptake in encrypted technologies as a result of governments putting in place surveillance systems without democratic consent.” (emphasis added)

It is important to note how law enforcement agencies are positioned here as subjects guiding surveillance policymaking processes. Additionally, the participant uses the ‘shopping list’ metaphor to demonstrate the ongoing nature of this dynamic.

Participant 10 made a similar argument concerning repeated efforts to achieve passage of the Data Retention Act (2015):

“In Australia, data retention did not just come up within this parliament, it has been something that law enforcement agencies have been pursuing for successive parliaments. At some point it becomes politically possible to drive legislative change – those are points of crisis.”

Overall, this subjectivation strategy highlights the complicated way in which moral responsibility for the harms of surveillance is distributed among political elites. According to the discourse, law enforcement agencies are responsible for determining the Government’s surveillance agenda, while the Government is responsible for justifying the laws to members of the public (i.e. via the ‘going dark’ argument).


This positioning of law enforcement agencies as powerful manipulators involved a similar rejection of the ‘going dark’ argument, as noted above. As part of this subjectivation strategy, privacy advocates focused on the existing surveillance and counterterrorism capabilities of Australia’s law enforcement and intelligence agencies.

For example, Participant 3 made the following remarks questioning the core claim of the ‘going dark’ argument:

“Given the demonstrably large increases of the human and budgetary resources given to the national security agencies, it is simply not credible to talk about these agencies being ‘further degraded’. This argument is a basic denial of evidence.”

This was repeatedly articulated by the interviewed advocates. Participant 6 noted that “I view the argument that intelligence is ‘being degraded’ as not being truthful” because “they have never had access to more information than they do now”. Similarly, Participant 17 described how it is a “myth that law enforcement is at a relative disadvantage than where they were historically” and that “I personally have heard law enforcement refer to the current era, where they have access to social media, as a golden age”. Furthermore, Participant 2 argued “in private they [law enforcement] admit their access has never been greater and is only increasing”. These comments reflect a fundamental rejection of the ‘problem of going dark’ – that methods of privacy protection enable the evasion of criminal investigations. Instead, participants discursively position law enforcement as manipulative in their advocacy for additional legislation within the context of a ‘golden age’ of surveillance.


5.3.2.2. DEFERENCE TO THE FIVE EYES INTELLIGENCE COMMUNITY

This sub-section examines the second subjectivation strategy used by privacy advocates to discursively position Australian law enforcement agencies as antagonists – as deferential to the decision-making of the Five Eyes intelligence-sharing community. A minority of five (5) participants articulated this discourse. Although participants primarily positioned law enforcement as powerful antagonists capable of influencing Government policymaking processes, they are also positioned as influenced by political agendas created by Australia’s existing intelligence-sharing arrangements. For example, Participant 15 made the following observation:

“The intelligence community, particularly among the Five Eyes, have already settled that via black agency deals. It’s been settled by intelligence agencies with essentially no government or public oversight.”

Similarly, Participant 21 argued:

“This is something that has been in practice for a very long time. That data sharing agreement is essentially one where, in my opinion, foreign governments can survey foreign nationals without offending their domestic legislation.”

This subjectivation strategy downplays the decision-making role of domestic law enforcement agencies and the Australian Government. Instead, the international intelligence community is assumed to be the key decision-maker. Participant 10 referred to this as “policy laundering” where “[i]t is sometimes easier to get laws passed in one jurisdiction, which can be used as a wedge to justify the introduction of comparable laws in other jurisdictions” and “[t]he increased global interest in encryption backdoors, I think, is part of a coordinated campaign by national security organisations to find a way to respond to this threat.” Indeed, participants routinely referred to the actions of international intelligence agencies to explain why the Australian Government pursued data retention and encryption access laws. For example, Participant 9 argued “these types of dragnets were already established, places like Britain or the United States” and Participant 12 argued “Australia is happy to point to [the US and UK] and say ‘well, they’re doing it’, which is unfortunate”. Overall, while significant political power and moral responsibility is positioned within Australia’s law enforcement agencies, they are not necessarily considered to be completely in control of the surveillance agenda.

5.3.2.3. ENCUMBERED BY THE DEMANDS OF PREVENTIVE JUSTICE

This sub-section examines the final subjectivation strategy used by privacy advocates to discursively position law enforcement agencies as antagonists – as subjects encumbered by the demands and expectations of preventive justice. In this sense, law enforcement agencies can also be considered victims of managerialism. In total, six (6) participants expressed this sentiment. For example, Participant 12 argued:

“Obviously, law enforcement always want access to as much as possible – absolutely everything about absolutely everyone, when and where they want it. They have a difficult job to do and they want to make it as easy as possible for themselves.”

The idea that law enforcement has a ‘difficult job’ was frequently expressed. It positions law enforcement as not necessarily malicious in their intentions, yet still as a powerful actor guiding public debate. Similarly, Participant 10 argued:


“I think it is important to recognise that there can be a genuine law enforcement or national security need for historical investigation of internet communications. But part of it is also panic about the sheer volume of content on the internet, the anonymity, the geographical dispersion, and the encryption. All of these factors make it much more difficult to do police work, so there is a natural tendency to want more powers. We saw that in response to the 9/11 bombings in the US – one of the first responses was enhanced security powers. That’s a common pattern, increased surveillance powers.” (emphasis added)

The motivation of Australia’s law enforcement agencies is therefore ascribed to ‘fear’ of the unpredictable actions of those who seek to cause harm (i.e. Ericson & Haggerty, 1997, p. 90). As such, the logics of preventive justice can be seen as guiding the behaviour of law enforcement agencies. There is a form of determinism within this discourse, which informs a more sympathetic distribution of moral responsibility.

For example, Participant 15 expressed sympathy for law enforcement trying to solve the ‘going dark’ problem:

“Data retention is an issue where some aspects of the law enforcement position that I have a great deal of sympathy for. Those issues are, generally, quite technical ones that are missing from the debate. In general, I have a great deal of sympathy for the law enforcement idea that the use – or threat of use – of carrier grade address translation is an issue for them. However, I think law enforcement tend to make ambit claims, and the government generally accept them.” (emphasis added)


Overall, these extracts demonstrate how privacy advocates discursively position law enforcement agencies as encumbered by the demands of preventive justice, as a form of managerialism placed upon them by the Australian Government. As such, it is argued this highlights the diversity of characterisations of subjects and the complexity with which moral responsibility is distributed among political elites.

While law enforcement agencies are certainly discursively positioned by privacy advocates as more powerful than any other category of subjects, their actions are still partially attributed to broader social and political structures. As with the Australian Government, these agencies were characterised as needing to be ‘seen to be doing something’ to justify their existence. For example, Participant 20 made these observations about the ‘thin blue line’:

“Quite often the ‘thin blue line’ is a very thin blue line on the ground. So, arguments surrounding the expansion of surveillance and the automation of data processing is a way they try to get around resource limitations… You also have people who have attained positions within intelligence and policing agencies. They are the ones arguing for these powers – they are very invested in social problems, but they are also looking out for themselves. Getting new legislation can be runs on the board for them, in terms of their CV. Within the context of the metadata retention debate, certainly some senior police I’ve spoken with were aware how they could influence public debate.”

Where they are discursively positioned by privacy advocates as subjects encumbered by the expectations of preventive justice, there was a tendency to describe law enforcement agencies as having a ‘natural’ desire for greater access to data. For example, Participant 10 described how “there is a natural tendency for law enforcement agencies to want more access to data than they currently have,” as the novelty of cybercrime “would terrify a law enforcement officer who does not have the technical knowledge to understand how to do online policing”. In this sense, privacy advocates do have ‘sympathy’ for law enforcement agencies.

Relatedly, privacy advocates discursively position law enforcement agencies as subject to managerial expectations within a context of limited public funding. Through contrasting the effectiveness of targeted and indiscriminate surveillance practices, participants expressed sympathy concerning the operational expectations placed upon law enforcement agencies responding to communications ‘going dark’.

For example, Participant 15 remarked:

“If governments were pushing HUMINT resources – the traditional mechanisms for intelligence – they might get better results. But, a lot of these agencies find that challenging. You know, like finding fluent Arabic speakers. So, certainly there are challenges for them. Australian police have done well combatting child abuse materials, but a lot of that is old-fashioned policing. They use all the methods that police have used for decades. You catch one guy and use him to get into the network. It doesn’t matter how encrypted a chat is if you have an informer.”

The abbreviation HUMINT, which refers to human source intelligence, describes information gathered through interpersonal communication and is dependent upon relationship-building (Garner & McGlynn, 2018, p. 125). Specifically, the participant here is referencing a Queensland Police Service investigation (via Task Force Argos) where the collection of HUMINT led to the successful identification of a global child exploitation material distribution network (Safi, 2016, para. 19). Indeed, there was a common observation that surveillance and crime control work better where they are targeted and traditional. Participant 12 argues that:

“If it is possible to pursue the goals of preventing terrorism or crime without draconian measures, I think that is a better society. One of the ways you can do that is be willing to sufficiently fund security and law enforcement to do the harder options. If we are willing to fund them to conduct targeted investigations of serious crimes, rather than pursue the easy option of low-level or theoretical crime, that is the option we should be aiming for.” (emphasis added)

This positioning displaces moral responsibility from law enforcement agencies and places it back upon the Australian Government, who are accused of underfunding crime control and national security programs. In contrast, Participant 3 observed:

“Law enforcement, national security and intelligence agencies have adequate tools and legislation with [which] to identify criminals and apprehend terrorists. Rather than using these to the full, they want a lazy short cut. Under no circumstances should law or arguments by analogy be tolerated.” (emphasis added)

This highlights how not all participants expressed sympathy for law enforcement agencies, instead positioning them as capable of meeting the demands required of them via conducting targeted investigations. They are thus positioned as ‘lazy’ where they opt for the surveillance solution. Still, this positions Australia’s law enforcement agencies as encumbered by the expectations of preventive justice.

Overall, there is diversity in how participants discursively positioned law enforcement agencies within the context of this observed encumbrance. For example, the following extracts both acknowledge the difficulty of the expectations encountered by law enforcement agencies, yet the articulated discourses morally position them in starkly different ways:

“Talking to people at the operation level – those who are involved in this sort of stuff – they were extremely concerned about this sort of thing [privacy]. It is interesting the level of concern up-and-down the chain of command. They are not alien concepts and they are not necessarily opposed to them. In some areas they are very conscious of the need for controls and constraints, sometimes more than those at the policymaking or political levels. They have absorbed some of the historical, constitutional ideas about the limits on the role of police, and the legality of their actions. So, from the outside it may look like [the] whole thing is unregulated, but from the inside there is recognition of the need for controls.” (Participant 16)

“There is a very real problem that, like anybody anywhere, law enforcement tends towards doing the least possible to get the best numbers on the books. In the US – and less-so in Australia – you have a ridiculous situation where the vast majority of ‘terrorist’ offences are actually people who are either very gullible or have mental problems who are pushed into going along with a scheme where all other participants are law enforcement. The crime is entirely notional – no weaponry is real; no drugs are real. The crime is essentially fictional, but they arrest the person on the basis they went along with the fictional crime. They can then say they got someone dangerous off the street, which is much easier than going after actual criminals.” (Participant 12)

These two participants are equally aware of how law enforcement agencies are encumbered by the demands of preventive justice within a context of limited funding and resources. Indeed, they both articulate criticism of such structural factors. However, their subjectivation strategies distribute moral responsibility differently – law enforcement agencies may be well-intentioned yet misguided; or, they may be malicious, deceptive, and controlling. Overall, it is argued that privacy advocates employ a variety of subjectivation strategies to position Australia’s law enforcement agencies as antagonists responsible for intrusive surveillance powers, while also acknowledging the encumbering demands of preventive justice.

5.3.3. THE POSITIONING OF TECHNOLOGY COMPANIES

The above sub-sections have established how the Australian Government and law enforcement agencies are discursively positioned as antagonists ascribed with varying levels of moral responsibility. This sub-section examines the subjectivation strategies used by privacy advocates to discursively position a final sub-category of political elites as antagonists – technology companies. First, it is argued that technology companies are primarily positioned as complicit in the commodification of personal information. In this sense, they are culpable in the expansion of intrusive surveillance. Second, it is argued that technology companies also occupy a contested space as potential allies where they are considered capable of engaging in corporate social responsibility. Overall, the section argues that privacy advocates contest the subjectivation of technology companies, positioning them as both antagonists and potential allies.

5.3.3.1. COMPANIES AS COMPLICIT IN DATA COMMODIFICATION

This sub-section examines the first subjectivation strategy used by privacy advocates to discursively position technology companies as antagonists who contribute to the expansion of surveillance powers – as complicit in the commodification of personal information. In this sense, technology companies are recognised as operating under the market conditions of surveillance capitalism (i.e. Zuboff, 2015). Overall, seven (7) participants positioned technology companies as morally complicit subjects. For example, Participant 9 made the following observations regarding their efforts to operate a relay node for the Tor network, which enables the circumvention of data capture processes:

“One of the big players we have not really discussed is the multinational corporations. They are just as bad as the governments. One of the biggest problems for me [in] running my services are the multinational corporations and the legislation that protects them. There is no oversight, and their terms and conditions are disruptive to services.” (emphasis added)

These comments are made in more generalised terms, referring to how corporations attempt to commodify personal information within cyberspace. Participant 9 continued, observing how “we also have only a handful of internet service providers in this country, and all of them have terms and services that prevent the use of VPNs or things like Tor” leading to a situation where “services continue to get black holed by major international carriers, who should be net neutral and should not be blocking any type of data transfer.” This articulates how companies directly frustrate privacy protection, not for ideological purposes, but as part of what Zuboff (2015, p. 77) refers to as ‘surveillance capitalism’ – the market conditions where personal information is collected and sold as a commodity.

The idea that surveillance is built into contemporary economic relations was articulated by several participants to discursively position technology companies as antagonists. Indeed, this category is invoked to ascribe moral responsibility to technology companies for enabling the surveillance agendas of other political elites.

For example, Participant 16 argued that:

“There is a sort of denial that we cannot protect our information anymore, and it is as if we care as little as Facebook does, ‘oh, the Russians were able to take over the American election? Whatever! Our cash flow is up, and we got lots of likes’.” (emphasis added)

The focus here on the prioritisation of ‘cash flow’ and ‘lots of likes’ over the ‘Russians stealing the American election’ is a clear reference to the Facebook and Cambridge Analytica controversies in the wake of the 2016 United States Presidential election (see: Shad, 2018, p. 41). Such a discourse highlights how the pursuit of profit is prioritised by corporations over considerations of the common good. Relatedly, Participant 4 made the point that there is a profit motive behind the poor data management practices of technology companies:

“People take shortcuts all the time, because the pressure is on to release the features on-time, and doing things that achieve ‘business value’, to use buzzwords. The value of protecting your users’ privacy does not become apparent until you lose it. Until you are breached, and you’ve lost all that data, and you think ‘oh shit, I wish I hadn’t collected all that data – their addresses and their phone numbers’.”

That is, there is no economic incentive for companies to prioritise information security. The participant implies it is commercially beneficial to ignore privacy issues due to the operational costs. Thus, profit-motivated corporations lack an economic interest in protecting privacy. Participant 14 observed this about the ‘Clean Feed’ debate, an effort by the Australian Government to regulate and censor online content:

“Then there were also the Torrent sites. Media companies, producers, and middle-men were saying, ‘oh, you’re going to censor the internet? We want to get in on that. We’re going to give you a big list of sites, don’t question us, and we just want them gone from the internet’.”

Similarly, it is within this broader political context that Participant 20 discursively positioned companies as having market-based motives for ‘resisting’ surveillance, using the example of Apple’s refusal to provide access to a user’s encrypted communications without a warrant:

“It is almost a PR exercise by tech companies, this kind of resistance to things like the encrypted iPhone. They are also complicit in this, as we saw with Snowden and capitalist surveillance. I think that would also be problematic.”

Overall, technology companies are discursively positioned as ‘also complicit in this’ where they are motivated by the pursuit of profits. The behaviour of telecommunications and technology companies is thus interpreted through the lens of surveillance capitalism, where their intentions are unavoidably understood as financially motivated (i.e. ‘a PR exercise’) rather than as altruistic pursuit of the common good.

This subjectivation strategy therefore discursively positions technology companies as morally complicit in enabling the expansion of surveillance legislation, often raising concerns about information-sharing arrangements between governments and the private sector. For example, Participant 5 highlights how the Data Retention Act (2015) could adversely affect the rights of consumers within the context of the health insurance industry:

“I’m also concerned with the way it can be used by private corporations to profile and discriminate against people. For example, I don’t want an insurance company to know that I have been googling certain healthcare-related websites as they might want to raise my premiums. Why shouldn’t people be able to protect themselves from potential intrusions?”

Interestingly, these concerns contribute to a general scepticism about relying upon the private sector to contribute to political campaigns against surveillance. Participant 13 raised such concerns about whether promoting the end-to-end encrypted platform Signal (developed by Open Whisper Systems) is an appropriate long-term strategy:

“Sometimes I think Facebook using WhatsApp – which is a Signal-under-the-hood – sounds good, but it has nuances. It is not a black and white thing. Sure, they may be using the same technology, but they are swapping information. It is a scope creep thing. There are certainly short-term changes that can benefit people, but we also need to take a step back and think about what this means long-term. Is it actually a self-serving approach – do they want to just be viewed by their customers that they protect privacy?”

Again, any potentially altruistic action is interpreted as a public relations exercise under the market conditions of surveillance capitalism, where corporations have economic incentives to collect and commodify customer information. Overall, it is argued this subjectivation strategy is used by privacy advocates to discursively position technology companies as antagonists.


5.3.3.2. TECHNOLOGY COMPANIES AND CORPORATE RESPONSIBILITY

This sub-section examines the second subjectivation strategy used by privacy advocates to discursively position technology companies as potential allies in campaigns of resistance to surveillance laws – as subjects capable of exercising corporate social responsibility. In total, eight (8) participants discursively positioned technology companies in this manner. Importantly, the prevalence of this subjectivation strategy was potentially aided by the fact that several participants were employed within the technology sector. Regardless, it is clear that the subjectivation of technology companies is a contested issue among privacy advocates. For example, Participant 12 argued that “[i]f you refuse to accept money from corporations, and only accept money from individuals, you have insufficient funding to get work done” and “[y]ou could argue for purity and say, ‘we should never take corporate money’, but if you take that approach you will be in a difficult position.” In this sense, privacy advocates expressed a willingness to work with technology companies, as long as those companies do not influence their advocacy. Participant 15 directly challenged the characterisation of corporations as antagonistic, while still recognising their moral complicity:

“The technical community has, pretty much, entirely turned against them. To the extent that while Facebook and Google are cooperating with authorities, they are being relatively restrained with how they go about it. Google is a good example, they are consciously cooperating with authorities but they will not enable mass surveillance. There are some concerns about how they enable mass surveillance by accident, by doing insufficient privacy analysis within their business.”


The uneasiness with which privacy advocates discursively position the private sector as potential allies in campaigns of resistance to surveillance laws reflects a tension between pragmatic and moral intuitions. Specifically, it reflects disagreements about whether civic discourse is inherently corrupted by economic interests.

This strategy of discursively positioning technology companies as potential allies is linked with the issue of funding within the privacy movement. Indeed, several participants noted how privacy organisations need access to more funding. For example, as Participant 12 observed, “enthusiasm is not enough, [organisations] will also need to generate some fundraising”. Participant 4 questioned the scepticism towards corporate donors:

“In terms of engaging with the general public, the NGOs could definitely benefit from some input and funding from corporations. There [is] an inherent problem with the crypto-anarchism movement, which is not particularly strong in Australia, but anarchists don’t like corporations. So, building a is pretty difficult… I feel uneasy about taking funding from the very people who are implementing these programs. It is not just about funding though. Google comes with money, which is great, but it would be better if they came with arguments and debate to help the general public understand digital rights and privacy.”

It is argued the primary motivation for promoting this sort of corporate cooperation was to increase the level of influence of, and resources available to, the privacy movement. As such, it is a pragmatic decision. Yet, it is also clear how this subjectivation strategy of discursively positioning technology companies as potential ‘allies’ creates a moral dilemma for privacy advocates. For example, Participant 6 highlighted their competing intuitions about the matter:

“We have a vision of what we want society to look like, but we need resources to challenge competing forces. It may be appropriate to take some money from big tech giants in some circumstances. It is on a per issue basis. We need to look if it is appropriate to take money from a particular organisation for a particular digital rights issue. If you do take money from someone like Google to advocate for fair use, you are going to be attacked by copyright lobbyists. You are taking money from someone who, in their eyes, will be commercially benefitting from a change in law. So, you put yourself at risk of damaging the effectiveness of your argument.”

Other advocates were unequivocal in rejecting corporate funding altogether. This also extended to avoiding any form of cooperation with privacy advocates who opted to cooperate with the technology sector. For example, Participant 20 described their unwillingness to work with ‘sponsored’ advocates:

“Personally, I refuse to do any work with [organisation], not least because of their corporate issues, but because of their corporate sponsorship. They take a lot of money from Google, certainly they did during the data retention campaign.”

Overall, where privacy advocates accept corporate donations, there is a need to remain, and to be perceived to remain, objective and uninfluenced. Yet, there remains a fundamental tension between the pragmatic reasoning sympathetic to cooperation with companies and the moral reasoning of steadfast opponents of the practice. It is argued this is because corporate intentions are perceived as financially motivated under the market conditions of surveillance capitalism and, therefore, are considered incompatible with the civic ethos of privacy advocacy.

As a result of this incompatibility, privacy advocates invoke the concept of ‘corporate social responsibility’ to position corporations as potential allies. Technology companies are thereby positioned as ‘allies’ where they fulfil these responsibilities by donating to the Australian privacy movement, and as ‘antagonists’ where they reject this responsibility in the pursuit of profits. For example, Participant 4 draws a link between the historical complicity of technology companies and their contemporary ‘moral duty’ to help the cause:

“It is not just Google. Technology companies have kind of a moral duty, or there should be, on technology companies, especially ones that helped to implement this kind of thing. I feel that the destruction of privacy is at least partially the responsibility of technology companies. I can’t tell you how many projects I have worked on, where security is kind of this annoying thing you have to do at the end of your project.”

This ascription of responsibility to technology companies is used as the discursive benchmark for categorising them as either antagonists or allies, depending on whether they fulfil their obligations. Consequently, participants generally positioned the Australian telecommunications and technology sector as antagonistic precisely because they are currently failing to contribute. This positioning was often articulated by drawing comparisons between the philanthropic cultures of Australia and the United States. For example, Participant 19 observed:

“We live in a country where philanthropic support for civil liberties-oriented organisations is almost zero. You know EFF in the United States have sponsors with deep pockets. They have enough funding coming in for them to have continuing litigation before the US Supreme Court. That doesn’t come cheap. But they do it because there is enough philanthropic support, which we have never been able to tap into in Australia. You might get a bit of campaign funding if you are lucky, for one particular issue. But are you going to get core funding that enables you to have one full-time employee or a director? We’ve never found it.”

Evidently, there is a clear distinction between the economic interests of technology companies and the broader social interests of citizens. This was particularly prominent within the Australian context, as repeatedly expressed by participants. For example, Participants 10 and 15 expressed:

“In America, there are two big advantages for digital rights advocates. The first is they have access to philanthropic funding that enables the professionalisation of advocacy. The second is that the interests of digital rights activists have aligned closely with the interests of Silicon Valley tech firms.” (Participant 10)

“Most of our tech industry does not think about supporting activist organisations the way the US industry often does… In Australian business culture, there is a weak attitude concerning anything that is not pro-corporate as well. There are only one or two corporate organisations that have contributed to digital rights policy work, and even then, their contributions are miniscule.” (Participant 15)

Overall, this illustrates how Australian technology companies were positioned as potential allies if they were to donate resources to the privacy movement. However, this does not erase the tension between the commercial interests of companies under market conditions of surveillance capitalism and the civic ethos of privacy advocates sceptical of the corrupting effects of economic interests. It is therefore argued that technology companies occupy a contested space as either ‘antagonists’ or ‘allies’ within the observed subjectivation strategies articulated by privacy advocates.

5.4. SURVEILLANCE SUBJECTS AND RELATIONS OF POWER

The previous two sections have argued that Australian citizens and political elites are, respectively, discursively positioned as the culpable victims of, and (contested) antagonists responsible for, the Data Retention Act (2015) and Encryption Access Act (2018). Specifically, it was argued that privacy advocates use various subjectivation strategies to discursively position citizens as culpable victims who enable surveillance legislation due to technological ignorance and political apathy.

Additionally, it was argued that the Australian Government and law enforcement agencies are morally culpable for intrusive surveillance powers, while technology companies occupy contested spaces as antagonists capable of becoming allies.

However, these subjectivation strategies are not articulated in isolation. Rather, the process of discursive positioning involves the articulation of “relations of power” that describe the relative position of subjects (i.e. Laclau, 2005, p. 68).

This section examines how privacy advocates discursively position subjects within relations of power, examining how this corresponds with the ascription of moral arbitrariness to Australia’s surveillance laws. Specifically, the section argues that privacy advocates discursively position citizens as the subordinate party within relations of domination with political elites. Importantly, this subjectivation strategy of positioning subjects within relations of domination involves an external subject judging the power dynamic as oppressive despite the subjugated party considering the dynamic as consensual (Laclau & Mouffe, 1985, p. 154). As such, the first sub-section argues privacy advocates discursively position the collective consent of citizens to surveillance legislation as ‘coercive’. The second sub-section argues privacy advocates discursively position Australia’s political institutions as ‘corrupted’. It is therefore argued that privacy advocates invoke this subjectivation strategy to contest the capacity of Australian citizens and political institutions to confer non-arbitrary authority to surveillance powers. Overall, the section argues Australian privacy advocates thereby ascribe moral arbitrariness to surveillance powers by discursively positioning subjects within relations of domination.

5.4.1. SURVEILLANCE POWERS, CITIZENS, AND POLITICAL ELITES

This sub-section examines how privacy advocates discursively position the subjects of surveillance laws within relations of power. Specifically, this is achieved by analysing the relationships drawn between the categories of subjects discussed above. The sub-section argues that privacy advocates discursively position citizens as the subordinate party within relations of domination with political elites. This involves judging the power relationship between citizens and political elites as coercive from an external viewpoint. For example, Participant 12 described the difficulty of persuading the public about their actual interests:

“Anything that involves questions of technology, explaining to the general public why they should care is very difficult. Politicians just say not to worry because it is above your pay-grade, so they’ll take care of it.”

The participant is positioning citizens as apathetic (‘why they should care’), attributing this apathy to the technical complexity of the topic (‘questions of technology’), and suggesting citizens are vulnerable to manipulation by political elites (‘above your pay-grade’). Similarly, several participants expressed concern that the term ‘metadata’ had been used by politicians to intentionally obfuscate the scope and function of the Data Retention Act (2015). Accordingly, Participant 15 observed how ‘people’ struggle to understand the consequences of the legislation:

“The difference between metadata and data is largely a distraction. People understand what it means when it comes to an email, but as soon as you start talking about a website it gets complicated, and once you reach mobile phone data the difference is arbitrary.”

This discourse was repeatedly articulated by multiple participants. For example, Participant 4 observed that “back before 2013, you never heard anyone in the general public talk about metadata” and “I think the term was used specifically because people do not understand”, while Participant 7 remarked “I think the debate is happening at a technical detail, and obfuscation in some instances, that most Australians have little ability to understand” (emphasis added). Finally, Participant 21 made the following remark about the susceptibility of ‘the people’ to be ‘controlled’ through the exploitation of ‘fear’:

“The people around us probably wouldn’t understand what we are talking about and wouldn’t necessarily care. And I think it is for a number of reasons. Firstly, the overarching issue is fear among the population, which is a good way to control people.”

What all these extracts demonstrate is how privacy advocates discursively position citizens as the subordinate subjects within relations of domination. Such a power relationship judges the public as unwitting victims of oppression from an external viewpoint (Laclau & Mouffe, 1985, p. 154). This is evident via the manner in which the relationship is described. The participants discursively position themselves using first or second-person perspectives (‘I think’ and ‘the people around us’) to separate themselves from members of the general public (‘people’ and ‘most Australians’), who, as discursive subjects, are ignorant of the consequences of surveillance powers (‘do not understand’ and ‘wouldn’t necessarily care’) due to manipulation by political elites (‘control’, ‘fear’, and ‘obfuscation’).

To demonstrate how participants employed this subjectivation strategy to position citizens within a relation of domination with political elites, it is useful to analyse how participants discursively position themselves as an external subject. One common technique used by participants was to perform conversations between themselves and members of the public, to demonstrate the differences between these categories of subjects and articulate the difficulties they have experienced attempting to mobilise resistance to surveillance legislation. For example, Participant 9 offered the following hypothetical interaction:

“Most people don’t want to [limit surveillance], that’s what I find perplexing. We’ve lost more than most other nations, and yet I cannot persuade people around me to take it seriously… I tell people about these problems already. I ask, ‘do you take your phone to the bathroom?’ and they say ‘yeah?’, I respond ‘well, somebody could be watching you!’, and they go ‘oh, no they’re not’. But there are already websites where you can actually watch people on their phones.” (emphasis added)

This recounted conversation exemplifies the participant’s difficulties attempting to ‘persuade people’ to ‘take it seriously’, while also reinforcing the participant’s comparatively greater knowledge of the dangers of passively accepting the erosion of privacy rights. In contrast, ordinary citizens are positioned as ignorant of how spyware can enable remote access to mobile devices, and how this can be exploited by ‘black hat’ hackers on the dark web (e.g. Glister, 2018). Similarly, Participant 1 performed a conversation between themselves and a (hypothetical) ordinary citizen:

“We ask ‘well, are you happy that the government knows where you are all day, every day?’ They reply, ‘of course I’m not’. It is not what the law is meant to be, but it is what it says… I have talked to people about these issues, and you get a whole room of shocked faces. They say, ‘I can’t believe this’. It’s the EFF metadata slide: how they can determine that you called a sex worker at 2am in the morning, even if they do not know the contents of the conversation. People don’t really understand what the implications are.” (emphasis added)

Through re-enacting these sorts of discursive interactions, whether they are real or not, the participants have positioned ordinary citizens as a separate category of subjects. As will be further discussed in Chapter Six (Section 6.2), this distinction enables them to identify with the category of a ‘privacy advocate’ as marginalised experts, who are cognisant of, and capable of judging, surveillance powers from an external position.

Overall, through this subjectivation strategy where ‘citizens’ are positioned as the subordinate party within relations of domination, they are constructed as incapable of conferring non-arbitrary authority to surveillance powers.

5.4.2. THE CIVIC CORRUPTION OF POLITICAL INSTITUTIONS

A related element of discursively positioning the subjects of surveillance legislation within relations of domination concerns the ‘civic corruption’ of Australia’s political institutions. The category of ‘civic corruption’ captures the inability of these institutions to operationalise the ‘will of the people’ due to inaccessibility and social injustice (i.e. Pettit, 2012; Braithwaite, 1995). Thus, this sub-section argues privacy advocates invoke this notion of ‘civic corruption’ to contest the ability of Australian political institutions to confer non-arbitrary authority to surveillance powers, focusing on the perceived corruption of Australia’s parliamentary inquiry and public consultation processes. For example, Participant 18 remarked how “legislation was rammed through parliament with very few MPs able to respond, let alone the public or civil society able to engage.” Similarly, Participant 6 noted:

“Within the privacy space, we see the national security community have a lot of influence over government policy. The citizenry are not really asked for their consent before changes are put in place, when the policies of the government are not announced before an election. They are just implemented after an election without political debate” (emphasis added)

Here, the participant is highlighting the inadequacy of the electoral process to ensure the Australian Government is held accountable to the citizenry. The institutions themselves are positioned as corrupted where policymaking and decision-making can occur without public input. This thus enables political elites to exert arbitrary forms of political power over citizens. Similarly, Participant 8 criticised the scope and timeframe of the associated PJCIS inquiry:

“A two-week period is ridiculous. To expect a scheme about storing the data of everyone in the country, for the purposes of government access, to be adequately covered over a two-week period is absurd. It is just not how you do open policy. You should have people involved who should review it, you should present adequate detail about the policy, you would avoid confusing terms such as metadata, and you would have regular periods of review after implementation.”

Evidently, the participants are critical of the short timeframe of the inquiry and the exclusion of ‘civil society’ experts ‘who should review’ the legislation before it is passed. Indeed, the PJCIS inquiry into the Data Retention Act (2015) was specifically singled out as an antagonistic structure. For example, Participant 16 remarked:

“Well, there was comparatively poor and non-robust treatment of the issue by the Parliamentary Joint Committee on Intelligence and Security… The Committee could have exercised a level of governance, that I would have reluctantly accepted, if they had asked for information that would have enabled that sort of proportionality assessment. Instead, they were satisfied with mere assertions and security theatre. When the opponents of data retention appeared, they had nothing to say – they asked no questions. They could have had a robust process and provided an evidence-based foundation for rejecting the counterarguments. They didn’t ask any questions that might potentially undermine the factual or analytical basis of the major proposals. It was consistent with them not undertaking an exercise of weighing up the merits – they essentially pre-judged it.” (emphasis added)

This was a recurring sentiment expressed by the interviewed participants. For example, Participant 20 described the decision-making processes of the PJCIS as “generally opaque” and as demonstrating the need for “more transparency in those sorts of practices”, while Participant 10 labelled the committee “a far more political process” that “provided cover for the government to ignore many of the recommendations of the Joint Committee on Human Rights”. As such, the participants did not consider the public consultation process for data retention legislation, spearheaded by the PJCIS, as productive. Rather, their experiences informed their perceptions that parliamentary institutions are ‘corrupted’ – insulated from accountability to the citizenry and from oversight by civil society organisations.

Part of this subjectivation strategy involved articulating the inadequacy of political institutions to scrutinise the justifications offered by the Australian Government for the Data Retention Act (2015) and Encryption Access Act (2018). Specifically, there was a lack of opportunities to publicly scrutinise the claims articulated within the ‘problem of going dark’ – that additional surveillance powers were necessary to prevent crime and terrorism due to the use of privacy-enhancing technologies.

For example, Participant 7 argued:

“I do not think [metadata retention] has been supported with evidence. The Government has certainly not been able to articulate clear arguments. I think the key question I would posit, in return, is ‘show us an example of the kind of data you are unable to get, with a warrant, in the current situation? What data about Australians are you unable to get?’ There may be an answer to that question that I am not aware of, but I do not think they have addressed that concern publicly.”

Here, the Australian Government is discursively positioned as taking advantage of the inaccessible character of parliamentary institutions. Specifically, the participant criticises the Government’s ability to pass surveillance legislation without explicitly engaging with the criticisms and concerns raised by experts within civil society organisations. Part of this was due to a lack of details about what specific powers the Data Retention Act (2015) would grant law enforcement and intelligence agencies, whether the data would be handled securely, and whether limitations would be placed on its use within investigations. For example, Participant 13 argued:

“In the data retention debate, there was a lot of to-and-fro around what kind of information the government was actually collecting. It was this ethereal thing, no one really knows, and it cannot really be quantified. So, who’s going to have access? Well, these people. What’s it going to be used for? It’s only going to be used for this. It seems there is always something lurking in the background. Addressing the lack of that transparency would be something that would be beneficial generally.”

These limitations were linked to broader issues of civic corruption within Australia’s political institutions. For example, Participant 6 suggested the government “move[s] from one issue to the next without answering to the public… [w]e do not get the opportunity to really challenge politicians about what they are saying”, while Participant 8 observed how they “cannot think of legislation passed in recent times in Australia where there really was adequate public debate… political debate in Australia is a theatrical debate about strategy, not outcomes for people”. Overall, these extracts demonstrate how privacy advocates are critical of the limited opportunities available to citizens and civil society organisations to scrutinise surveillance legislation.

Some participants similarly criticised how Australia’s political institutions fail to operationalise the ‘will of the people’ due to bipartisan support for secretive national security and invasive criminal justice policies. For example, reflecting on their experience participating in the public consultation process for the Data Retention Act (2015), Participant 16 remarked that their concerns were not taken seriously:

“My view is that the governance arrangements in Australia are defective on an institutional level… We tried to raise objections that a reasonable and independent person might want to be cautious or see more evidence. But it was as if that was never said, there was no refutation or denial or scornful criticism of the relevance or validity of that concern. It just went into a void. To me it looked more like a political spin operation of not giving oxygen to any counterarguments, any of the negative side of a proportionality assessment. It was a model of governance failure.”

Similarly, Participant 17 remarked on how national security can be used as a political ‘Trojan Horse’74 to secure support for legislation without the need for debate:

“National security legislation that harms democratic rights will sail through the House with two-party support. Virtually no opposition. It will go to the Senate, there will be some kicking and screaming from civil society groups without any strong campaigns to win support from MPs, and the Senate will add a few amendments that may or may not have come out of committees.”

These extracts illustrate how privacy advocates discursively position surveillance laws as the product of relations of domination where they ‘sail through’ parliament ‘with virtually no opposition’. The absence of meaningful civic deliberation is equated with the exercise of arbitrary forms of political power. This was linked to other examples highlighting the lack of transparency within liberal democracies about the scope of government surveillance programs. For example, Participant 4 observed:

74 The Trojan Horse is a metaphor describing a strategy used by disingenuous subjects to hide their true intentions. It refers to the ancient Greek legend of the large wooden horse used by the Greeks to infiltrate and conquer the city of Troy.

“The way we find out about these things is because conscientious people leaking details, which is a bit ridiculous… The secrecy of the whole thing is one of the biggest problems. I guess I don’t really need to explain this, but it allows them to do whatever they want. The people who we have given power to should be held accountable somehow.”

Overall, this suggests the perceived lack of accountability of the Australian Government to civil society organisations has contributed to a broader sense that Australia’s political institutions have become corrupted. This perception has informed the discursive positioning of surveillance subjects within relations of domination. It is thereby argued this subjectivation strategy is used to ascribe moral arbitrariness to the powers established under the Data Retention Act (2015) and Encryption Access Act (2018), as it suggests these institutions are incapable of conferring non-arbitrary authority to legislation.

Despite the articulation of a relation of domination between citizens and political elites, there remains hesitation among privacy advocates about the degree to which ordinary citizens ought to be empowered to make decisions about government surveillance powers. For example, Participant 10 offered a competing view about how effective civic institutions do not necessarily need to include processes of civic deliberation in order to confer non-arbitrary power to law:

“If you have a well-functioning political system, the public do not need to be actively involved… I think it is possible to have a more engaged public debate than we currently have, but I do not think forcing the public to understand the technicalities of what is and is not ‘metadata’ is a solution to our policy problems… I am a fan of political process. You do not want to create a system where your ordinary user needs to understand a tonne of information before giving consent. We need systems we can trust to deal with information consistent with our rights, and without needing to become experts. I think it could have been possible to have a public debate about the necessity for retained data obligations.”

This discourse again reinforces the positioning of civil society advocates as external subjects – as subject-matter experts with greater access to knowledge and who can steer public debate in the right direction. The discourse itself positions both citizens and political elites as subordinate to subject-matter experts. It thereby contests the necessity of civic deliberation for establishing the authority of government surveillance powers. Similarly, a qualified role for civic deliberation was articulated by Participant 21, who described a desire for a more moderate and inclusive debate:

“What we currently see is government closing itself off and requiring the population to open itself up. That, within a representative democracy, is the complete opposite of what should be the case. It should be clear transparency into government reasoning, thinking, and action… Part of the problem, I think, is a public perception that there are two extremes – people wearing tinfoil hats and a government being too gung-ho, and nothing happening in between. I think there needs to be a more open discussion. It is not just about naysaying privacy or national security, because those concepts are interwoven. It should be a conversation where we are comfortable talking about both aspects as necessary and moving on to a discussion about adequacy and proportionality.”

Evidently, there is an intuitive appeal to evidence-based approaches to criminal justice policymaking, detached from messy processes of civic deliberation. This explains a reluctance among participants to embrace radical forms of democracy that erase the superordinate status of subject-matter experts. Yet, among privacy advocates encumbered by a liberal framework, there is a muted recognition that citizens ought to be involved in decision-making processes. This underpins the invocation of the category of ‘civic corruption’ to articulate the conditions necessary for political institutions to operationalise the ‘will of the people’. Overall, it is argued that by positioning citizens as the subordinate party within relations of domination via this category of ‘civic corruption’, privacy advocates contest the capability of Australian citizens and institutions to confer non-arbitrary authority to surveillance powers.

5.5. CHAPTER CONCLUSION

This chapter has presented the results of a political discourse analysis examining how Australian privacy advocates discursively position the subjects of the Data Retention Act (2015) and Encryption Access Act (2018). Specifically, the chapter examined how privacy advocates use a variety of subjectivation strategies to discursively position subjects within distinct categories and as parties to relations of power. Across multiple sub-sections, the alternative subjectivation strategies were analysed, highlighting the complexity in how moral responsibility is distributed by privacy advocates. First, the analysis revealed how ordinary citizens are positioned as culpable victims, how the Australian Government and law enforcement agencies are primarily positioned as antagonists, and how technology companies occupy a contested space as antagonists and allies. Second, the chapter argued that privacy advocates discursively position these subjects within relations of domination. Specifically, from an external subject-position, privacy advocates position citizens as the victims of coercion within a system characterised by civic corruption, thereby contesting the capacity of citizens and institutions to confer non-arbitrary authority to surveillance powers. Consequently, the chapter provides an answer to the second research question: Australian privacy advocates use various subjectivation strategies to discursively position citizens within relations of domination with political elites, thereby ascribing moral arbitrariness to the surveillance powers established under the Data Retention Act (2015) and Encryption Access Act (2018).

Chapter Six: Advocating Resistance to Surveillance Power

6.1. CHAPTER INTRODUCTION

The previous two chapters have examined how privacy advocates differentiate the meaning of ‘privacy protection’ from methods of evading criminal investigations and discursively position the subjects of surveillance legislation. The analyses revealed how participants differentiate ‘privacy protection’ from methods of criminal evasion by ascribing moral arbitrariness to the surveillance powers established under the Data Retention Act (2015) and Encryption Access Act (2018). Additionally, it is argued this property of moral arbitrariness is dependent upon a subjectivation strategy that discursively positions citizens as the subordinate party within relations of domination with political elites, thereby contesting their capacity to confer non-arbitrary authority to surveillance powers. This chapter builds upon these analyses and examines the mobilisation of resistance to surveillance legislation. It thus provides an answer to the third research question. It is argued that Australian privacy advocates construct a shared identity as marginalised subject-matter experts who have a civic duty to advocate resistance to morally arbitrary forms of surveillance.

The psychosocial process of identification describes how subjects develop and cultivate shared bonds with objects. Using this analytical construct, this chapter advances its argument across two sections. The first section analyses this process of identification within the Australian privacy movement. Specifically, it is argued that privacy advocates construct shared identities as marginalised subject-matter experts that transcend interpersonal conflicts and intergroup disagreements. The second section analyses the construction of an associated civic duty to advocate resistance to morally arbitrary forms of surveillance. Specifically, it is argued that privacy advocates recognise the limits of legal strategies of resistance to arbitrary surveillance powers and construct an associated responsibility among subject-matter experts to disrupt relations of domination by cultivating citizens capable of protecting their privacy rights. Overall, the chapter argues that the Australian privacy movement therefore constructs a civic duty for ‘privacy advocates’, as subject-matter experts, to advocate resistance to morally arbitrary forms of surveillance. The chapter therefore completes an analysis of how ‘privacy protection’ is differentiated from methods of evading criminal investigations using the category of moral arbitrariness.

6.2. SELF-IDENTIFICATION AS A ‘PRIVACY ADVOCATE’

This section analyses the psychosocial process of identification as articulated by privacy advocates in the course of differentiating the meaning of ‘privacy protection’ from methods of evading criminal investigations. Importantly, the Australian privacy ‘movement’ is constituted by a variety of individuals and groups that loosely coordinate their messages and campaigns. Indeed, this type of collective action is rendered possible through the process of identification – the development and cultivation of shared bonds with an object enabling the formation of diverse groups (Laclau, 2005, p. 54). The first sub-section examines the internal conflicts within the Australian privacy movement. It is argued that interpersonal conflict and intergroup disagreements create difficulties for building effective coalitions within the movement. The second sub-section analyses the articulation of shared identities among this diversity of individuals. As such, it is argued that privacy advocates articulate a shared identity as marginalised subject-matter experts. Overall, the section argues that the Australian privacy ‘movement’ is constituted through a psychosocial process of identification that supersedes individual and group differences.

6.2.1. CONFLICT WITHIN THE AUSTRALIAN PRIVACY MOVEMENT

This sub-section analyses the internal conflicts within the Australian privacy movement. In this regard, the privacy movement shares similarities with other social movements struggling with the problem of collective action – the difficulty of mobilising individuals in shared political struggles (Mayer, 2014, p. 13). First, the individual and group differences within the Australian privacy movement are analysed. Second, the difficulties with building effective coalitions within the movement are analysed. Overall, the sub-section argues that the Australian privacy movement is characterised by interpersonal conflict and intergroup disagreements that create difficulties for effective coalition-building.

6.2.1.1. INTERPERSONAL AND INTERGROUP CONFLICT

There are apparent divisions within the Australian privacy movement, particularly where advocates reflect upon the limited success of the Citizens, Not Suspects (2014) and Go Dark Against Data Retention (2015) campaigns to prevent the passage of the Data Retention Act (2015). As such, individual participants articulated their frustrations with trying to mobilise others and work collaboratively. In total, eight (8) participants expressed a variety of self-reflective criticisms of the movement. For example, Participant 20 offered the following explanation for why Australian privacy organisations struggle to work harmoniously:

“Again, individual interests are involved in the problems and solutions, on both sides. I do not think you can get away from the people who compose the organisations… When you are dealing with academics and activists, there tend to be a lot of people with big personalities. It does come back to the people, in terms of interpersonal relationships. People have their own stakes, their reputations… they wield that and become self-appointed police for the movement. So, I would say there are challenges in terms of working together.”

Evidently, the participant expressed frustration about how the movement is plagued by ‘big’ personalities thwarting effective activism. This attitude is characteristic of other social movements, which must overcome individual and group differences (Laclau, 2005, p. 73; Mayer, 2014, p. 13). Echoing this sentiment, Participant 15 observed how individual political affiliations are a source of frustration for EFA:

“There is some friction there, some of which that comes down to individuals, and others are intrinsic issues… for example, how we deal with organisations that are involved with party political groups, whereas groups like EFA need to remain non-partisan.”

Overall, this illustrates how individual and group tensions contribute to competing strategic priorities across privacy advocacy organisations. However, it is important to note this is a common characteristic of contemporary social movements, which struggle to overcome the problem of collective action (e.g. Greenwood, 2008). This highlights the importance of ‘identification’ for political mobilisation.

The fissures within the movement are also reflected through differences in how individuals self-identify as either ‘activists’ or ‘advocates’. This distinction has been previously observed within the literature (i.e. Bennett, 2011, p. 128). The fissure is best illustrated by the following comment by Participant 20, a member of the APF:

“Out of all the civil society groups, it was only APF who was invited to speak during the hearings. I think that goes to the reputation and history of the organisation as an advocacy organisation, as opposed to an activist organisation.” (emphasis added)

This terminological split was observable within the way participants explicitly referred to the roles they perform within the privacy movement. For example, Participant 3 referred to themselves as “an activist since my early 20s” and Participant 5 stated that “[a]s a human rights activist I have always been interested in [privacy] issues”. In contrast, Participant 20 explicitly stated “I don’t consider myself an activist, I consider myself an advocate” and Participant 10 explained that “I do not necessarily see myself as a privacy activist… I do see myself in a leadership role in digital rights organisations where I try to empower activists”. Overall, these individual differences between self-labelled ‘activists’ and ‘advocates’ highlight one aspect of a broader fissure between civil society organisations working in the area.

These individual differences overlap significantly with differences in how civil society organisations conceptualise their work. Specifically, there were observable disagreements between members of well-established privacy-oriented organisations and human rights activists campaigning across multiple issues. For example, Participant 13, a senior member of EFA, argued:

“Previously, you had organisations like the Australian Privacy Foundation or Electronic Frontiers Australia, who had been around for a while, were well-respected, and when they spoke, they were listened to. Nowadays, you have the government just saying, ‘we’re going to do this’ and those organisations can spend hours doing submissions and then they just get ignored. So, it is up to organisations like GetUp!, EFA, Digital Rights Watch, and CryptoAustralia to do more groundwork. Because politicians will only listen to noise now, as opposed to reason.” (emphasis added)

This tension between the advocacy-oriented focus of groups like the APF and EFA and the activist orientation of groups like GetUp! was observable on both sides of the divide. Participant 11, who is a member of a human rights group, made the following criticism of campaigns like Go Dark Against Data Retention (2015):

“The problem I have with it is those organisations didn’t do anything earlier. There was massive potential for a big campaign to shift the public debate to the right level, and they missed it. It is a fundamental problem I have with the digital rights organisations who were in the space at the time: they didn’t do any campaigning.”

Similarly, Participant 7, who was in a leadership position in a human rights group, was critical of the traditional view of ‘privacy advocates’ as solution-providers:

“Their [advocacy organisations] role is to sound the alarm, and make the government explain themselves, to make the case for why change is justified. I do not think that civil society is there to solve the problems, I think it is their role to outline the problems and compel the government to be accountable.”

There is a clear tension between these perceptions of human rights ‘activists’ as responsible for ‘sounding the alarm’ and ‘campaigning’ (also pejoratively referred to as ‘noise’ by one privacy advocate) and the traditional view of ‘privacy advocacy’ among members of the older and established civil society organisations.

The members of privacy-oriented organisations such as the APF and EFA offered tentative defences of their organisations’ track records, cognisant of criticisms from human rights activists. For example, Participant 15 defended EFA’s strategy of political lobbying:

“One of the frustrating things I think about is most of EFA’s effective lobbying is invisible. If you are able to speak to an agency or official before there are any public commitments, it is much easier to change. If you then talk about it publicly in a manner suggesting you are the reason they changed their minds, they will be less likely to work with you again. Effective lobbying does involve that record of off-the-record discussion, which can make it look like you are not doing anything at all.”

Similarly, Participant 20 defended the advocacy efforts of the APF:

“The Privacy Foundation is very much focused on drafting submissions to legislative reviews and public consultation processes, as opposed to running activist campaigns. We don’t participate or advise people the same way that EFA, Digital Rights Watch, GetUp! or Access Now do. I also think, and maybe this is lost on these activists, in terms of mandatory metadata retention: yes, it was a loss, but what is not recognised or focused on are the subsequent successes. One of the key examples in this area was the Government’s discussion paper about expanding access to metadata to civil law cases.”

This is referring to the Australian Government’s decision not to enable access to retained metadata for use in civil proceedings, in response to submissions made during the Attorney-General’s Department’s (2017, p. 8) public consultation process. Again, these comments are directed towards the criticism from ‘activists’ about the invisibility of lobbying and assert “the importance of putting objections on the public record through submissions to proposed developments” (Participant 20). Participant 10 was more diplomatic, defending the role of advocacy-oriented organisations as part of the holistic movement:

“We have the Pirates and the EFAs, who have roles as rabble rousers that articulate the hard-line ideologies of the extremist ends of the movement. Those are views that are easily dismissed within the pragmatic day-to-day business of policymaking. But they are important for extending Overton’s Window – the range of topics open to discussion. We do not have enough good centrists or pragmatists out there to present a viable alternative, in areas where we are out-classed by professional lobbyists and well-organised state agencies.”

Overall, this illustrates that, despite the self-identified differences between human rights ‘activists’ and privacy-oriented ‘advocates’, members of the privacy movement are cognisant of the need to transcend internal conflict and coordinate their activities.

On the basis of these individual and group differences, several participants articulated the need for greater coordination among the existing civil society groups. For example, Participant 3, whose association is with a human rights activist group, made the following observations about two of the leading privacy advocacy organisations:

“Currently there are no specialized organisations leading the charge. There isn’t a functioning advocacy and campaigning organisation, with both EFA and DRW [Digital Rights Watch] totally failing on both advocacy and campaigning activities. And we have lost the champion in the parliament. Working out why Australian civil society is so utterly shit on these issues has been an ongoing depressing fascination of mine.”

This allusion to the ‘champion in the parliament’ is a reference to the resignation of former Senator Scott Ludlam, a member of the Australian Greens who resigned in 2017 due to a failure to renounce his dual citizenship75 (Strutt & Kagi, 2017). Senator Ludlam (2015) had been a vocal critic of the Government’s metadata retention legislation. This type of frustration about disorganisation within the movement was a commonly expressed sentiment. For example, Participant 4 described how “the issue is perceived to be too complicated, as we, the digital rights movement, are failing to get the message across in a simple and straightforward way”, while Participant 15 similarly observed:

"I do think we haven’t been doing a great job with it over the last few years for

a variety of reasons. I won’t go into those, because some of them get too

personal. Both in the activist and political side. We dropped the ball a bit… I

think there has been pointless infighting on some things.”

What is important to note about these sentiments is how they simultaneously reflect a process of shared and distributed moral responsibility for failing to effect change. Advocates refer to the privacy movement as ‘we’ while identifying elements of the movement that ‘dropped the ball’ or are ‘utterly shit’. It suggests a struggle to reconcile a shared psychological bond with ‘privacy’ as a signifier with a recognition of broader conflicts and disagreements within the movement.

75 For a complete overview of the political and legal causes and consequences of the 2017-2018 Australian parliamentary eligibility crisis, see Hobbs, Pillai, and Williams (2018) and Morgan (2018).

6.2.1.2. THE STRUGGLE TO BUILD PRIVACY COALITIONS

This sub-section analyses the struggle to build effective political coalitions within the Australian privacy movement. In addition to the internal differences noted in the previous section, participants articulated struggles to build coalitions with, or mobilise support from, external organisations. This was a concern expressed by a minority of seven (7) participants interviewed as part of the research. For example, Participant 10 observed how the advocacy-oriented privacy organisations need to improve their engagement with lobby-groups and other activists:

“There is a lack of knowledge about how to do activism. A lot of the organisations that pursue advocacy are relatively young. But even the older ones are not informed by a history of achieving political change. They are very principled. They are headed by principled experts who can be relied upon to give a perspective that reflects those principles. But they are not able to marshal public support, and they are not highly influential advocates. So, we do not have in-roads to politicians like lobby-groups have, and we do not have activist campaigns like other social justice movements do.”

This captures the notion that these organisations need to learn how to ‘marshal public support’ as it is not enough for ‘privacy’ to be articulated in legalistic and technocratic terms. Participant 3 noted a need for “clearer public information, better digital and offline campaigning, and stronger collaborations among organisations with different expertise and skills”, while Participant 5 argued “it needs to be taken more seriously by other organisations and networks in the ‘progressive’ activism space”. Similarly, Participant 17 captured how this is necessary to avoid reactive modes of advocacy:

“We need to get on the front foot and address that glaring gap. A Bill of Rights doesn’t come easy, and we need to do the hard work of relationship-building across, not just digital rights groups, but groups that have a longer history and are able to provide support communicating how digital rights issues are human rights issues.”

These extracts highlight the cognisance among privacy advocates of the benefits of cultivating relationships with other progressive groups that have established records working with human rights issues. In particular, it demonstrates how these relationships are being sought as a result of the limited success of privacy campaigns such as Go Dark Against Data Retention (2015) and National Get a VPN Day (2017).

In addition to building coalitions with other progressive or human rights organisations, a sub-group of four (4) participants articulated the value of building relationships with organisations and individuals not traditionally considered allies. For example, Participant 21 made the point that the movement needs to go beyond ‘preaching to the converted’:

“It is all well and good to preach to the converted, but I think there is value in talking to people who do not agree. And it isn’t even a matter of agreeing. Engaging in a discourse where it is acceptable to naysay privacy, where it is acceptable to defend privacy, and having a healthy conversation about what is in the best interests for the community, is all a part of that process.”

The participant here identifies the value of ‘healthy conversations’ about the benefits and limits of ‘privacy’ as a mobilising signifier. This is related to broader disagreements about the value of political diversity within the privacy movement. For example, whereas Participant 10 noted how “[f]or too long civil society groups working within this area have been focused on speaking only to a small, tech-savvy community” who “tend to be libertarians” that “work against the interests of a broad human rights agenda”, Participant 12 noted how “libertarians do have a very specific concern about privacy, so it is no surprise that they are the ones who are overrepresented” because “[a] higher proportion of them are willing to put in the effort and are politically active”. Similarly, reflecting on the history of the Australian privacy movement since the Australia Card76 campaign, Participant 19 noted the lack of support from conservative organisations and individuals:

“I am struggling to think of examples of right-wing personalities, organisations, or academics who are involved in these sorts of campaigns since the Australia Card… It is hard to get political allies on the right-wing on national security issues. But, if you go back to the Health and Welfare Access Card, it should have been an opportunity for right-wing organisations to go, ‘we don’t want this, it is too much of a paternalistic society’. They could have adopted that attitude and got worked up about it, but the proposal was coming from the Howard Government. Were they really going to get worked up about a Howard Government proposal, particularly with an election coming up? Not likely. That’s the problem.”

This participant expressed scepticism about the potential for an antagonistic subject (i.e. ‘conservatives’) to be mobilised in support of contemporary privacy campaigns. They observe how ‘conservatives’ were considered allies during the Australia Card debate and contrast this with their subsequent abandonment of the movement.

76 The ‘Australia Card’ was a 1985 proposal by the Hawke Government for a national identification card for Australian citizens. The proposal was later dropped after significant resistance from members of the public and civil society organisations. The campaign was the origin of the Australian Privacy Foundation. See Greenleaf and Nolan (1986) and Clarke (1987) for a more complete analysis of the proposal and civil society campaign.

Overall, it is argued these extracts highlight the various interpersonal conflicts and intergroup disagreements within the Australian privacy movement, as well as the associated struggle to mobilise effective privacy coalitions.

6.2.2. IDENTITY WITHIN THE AUSTRALIAN PRIVACY MOVEMENT

The previous sub-section argued that the Australian privacy movement is characterised by individual conflicts and group differences that create difficulties in establishing and mobilising privacy coalitions. In contrast, this sub-section analyses the processes of identification that transcend these individual and group differences. Specifically, it argues that privacy advocacy is rendered possible via psychological bonds ‘privacy advocates’ develop with ‘privacy’ as a signified moral value (i.e. Laclau, 2005, p. 54). These bonds are developed via shared experiences of participating in privacy protection campaigns. First, it is argued ‘privacy advocates’ position themselves as ‘fighting’ common antagonists within a ‘battle’ against arbitrary surveillance powers. Second, it is argued that ‘privacy advocates’ position themselves as subject-matter experts within the fields of law and technology. Third, it is argued that ‘privacy advocates’ position themselves as marginalised from the decision-making processes of Australia’s political institutions. Overall, the sub-section argues that this political identity of ‘privacy advocates’ transcends differences via the articulation of the shared characteristics of marginalised subject-matter experts ‘fighting’ arbitrary surveillance powers.

6.2.2.1. PRIVACY ADVOCACY AS A SHARED STRUGGLE

This sub-section argues that Australian ‘privacy advocates’ position themselves as ‘fighting’ a ‘battle’ against antagonists who threaten the signified moral value of privacy. This characteristic of a shared political identity was articulated during discussions about participating in the public consultation processes for the Data Retention Act (2015) and Encryption Access Act (2018), as well as organising the Citizens, Not Suspects (2014), Go Dark Against Data Retention (2015), and National Get a VPN Day (2017) campaigns. In total, seven (7) participants articulated a variation of this aspect of identity. The most demonstrative example of this process was offered by Participant 19:

“I am, at least concerning my views about privacy in Australia, a pessimistic person… Partly because I can see we very rarely made any gains in Australia. So, while occasionally I have spent a bit of my time fighting whatever the latest battle is in Australia, when it comes to privacy, God, it’s depressing. I have had more satisfying engagement in things outside Australia in a lot of ways.” (emphasis added)

The type of language used by Participant 19 was common. The notion that it is a ‘battle’ that advocates are ‘fighting’ (and losing) is indicative of a broader solidarity shared within the movement. For example, Participant 5 explained their motivation for joining a human rights organisation, with a particular interest in surveillance and privacy issues, in the following terms:

“Seeing how repressive regimes overseas have used mass surveillance against their populations to crack down on activism and free speech has compelled me to fight against any similar intrusions here.” (emphasis added)

Participant 12 remarked that “[w]e have to fight [surveillance] on all fronts” while Participant 9 observed “[i]f [governments] stop trying to stifle any more privacy, by continuing to push identification, they’ve lost the battle of the internet” (emphasis added). Evidently, the debate is explicitly conceptualised, at least by some privacy advocates, as a ‘battle’ with antagonists who threaten the signified value of privacy.

The idea of the ‘battle’ in the political identity of ‘privacy advocates’ was also evident in how participants reflected upon the difficulty of effecting change during the public debate over the Data Retention Act (2015) and the then-ongoing debate about the Encryption Access Act (2018). In this sense, participants positioned themselves as losing the ‘battle’ for privacy protection. For example, Participant 17, a member of the APF, remarked on the experience:

“I know that after the data retention fallout, everyone was very tired. I do not think we could have deconstructed the problems with that legislation any harder than we did. Or fought any harder than we did. We’re spending all this energy, but not engaging with policy processes. And I totally get why. In the APF we’ll regularly provide submissions and testimony, but we are under no illusion that it is going anywhere. But we need to have it on the record, because if we don’t, it gets even more bleak.”

Similarly, Participant 1, who belongs to a small technology-oriented advocacy organisation that runs information security workshops, made the following comments about their experience of the Data Retention Act (2015) consultation process:

“As part of our advocacy during that campaign [data retention], we wrote letters to members, including Warren Entsch who read from our submission. They basically said, ‘well, they raise some good points but we’re still going to do it, because of terrorism’.”

This highlights how there was a perception among participants that the amount of effort put in was not being reflected in the outcomes of their campaigns. Additionally, Participant 20 articulated a similar sense of solidarity among ‘privacy advocates’ as yearning for recognition from political decision-makers:

“We do this out of our dedication to the cause. But I already have, essentially, two full-time jobs. And everyone else is basically in the same position. We are a time-poor organisation, with competing demands. It is disappointing when we spend time, often late at night or on the weekends, writing these sorts of things for them to only be disregarded.”

There is a common recognition that, despite the differences, privacy advocates (‘we’) are engaging in a collective form of resistance that often goes unrewarded and unrecognised. The discourses of ‘fighting’ the ‘battle’ were articulated by participants explicitly positioning themselves as on the ‘losing’ side. Overall, the extracts demonstrate how what binds ‘privacy advocates’ together, despite differences, are experiences of ‘fighting’ antagonists threatening the signified value of ‘privacy’.

6.2.2.2. RECOGNITION OF EXPERTISE IN LAW AND TECHNOLOGY

This sub-section analyses how Australian privacy advocates discursively position themselves through their shared subject-matter expertise in law and technology. Overall, six (6) participants explicitly identified with discourses of subject-matter expertise. This element of their shared identity is contrasted with the subjectivation strategy used to position ‘citizens’ as surveyed above (Chapter Five, Section 5.2). For example, Participant 10 described the Australian privacy movement as “headed by principled experts and tech-nerds who speak with very technical language”. Similarly, Participant 13 used the historical example of automobiles to make a point about different levels of knowledge about new technologies:

“You know, when cars first came out, they needed people with a flag and whistle to warn people that a car was coming. And here we are, years later, and it is common sense. People do not understand how a car works, but they know you need to replace oil and put water in it. They know how to operate a car because it has been abstracted out. So, it is an option. It falls to those who understand the technicalities to distil the message down.” (emphasis added)

The notion that ordinary citizens are incapable of understanding the complexities of surveillance technologies was analysed at length within Chapter Five (Section 5.2.1). What it illustrates here is how ‘privacy advocates’ are correspondingly positioned as experts ‘who understand the technicalities’ and therefore mobilise on behalf of the public interest. Participant 12 also highlighted the importance of subject-matter expertise within the movement:

“Most issues require some amount of expertise to understand the implications, understand what is feasible and what is not, and understand the risks and trade-offs. In the US, those people are called ‘wonks’ – the people who bother to study something. In Australia we might call them subject-matter experts… The solution there is to have distributed federated systems. Technologists know this. If you can have individual organisations – companies, non-profits, open-source communities – developing applications that are federated and speak a common protocol, that helps a great deal in minimising the ability for control.” (emphasis added)

This sentiment that subject-matter experts have policy solutions that are being ignored was common. Participant 14 remarked how, within the metadata retention debate, “the people who knew about the technical specifications and what was required, [the] people who understood the implications of it, were saying: no, this is not the way to do it.” Similarly, Participant 7 observed that “the reality is that the issue has become so technical and complicated that people struggle to engage in debate”. Finally, Participant 15 argued “[t]he complexities and details of any given policy area are hard to get across to non-experts”. Thus, regardless of whether these labels are warranted, ‘privacy advocates’ identify as a class of experts capable of responsible decision-making, in contrast to a comparative lack of knowledge among ordinary citizens.

6.2.2.3. SHARED EXPERIENCES OF MARGINALISATION

It has been argued that ‘privacy advocates’ identify as subject-matter experts engaged in a ‘fight’ or ‘battle’ against arbitrary surveillance powers. This sub-section examines the final characteristic of their shared identity – as subjects marginalised from political decision-making processes. In total, seven (7) participants recounted experiences of being marginalised from public debate. For example, reflecting on their role in preparing their organisation’s submission to the public consultation process for the Data Retention Act (2015), Participant 16 made the following remarks:

“I helped prepare the submission… We had at least 10 professors, other technical experts, and we had gone into detail into the evidence of effectiveness and analysed the various claims… Instead, [the Government] were satisfied with mere assertions and security theatre.”

Here the participant clearly contrasts their subject-matter expertise, and expended efforts, with the Australian Government’s satisfaction with ‘mere assertions and security theatre’ as offered by proponents of the legislation. Similarly, Participant 17 remarked that:

“The amount of skills and expertise we already have to critique legislative proposals is second-to-none the world-over… We’ve got excellent critique and InfoSec skills, but in the areas of campaigning? We need to get on the front foot and address that glaring gap.”

Evidently, ‘privacy advocates’ discursively position themselves as subject-matter experts by drawing comparisons with the ascribed positions of other subjects: for example, by contrasting their powerlessness with the antagonistically-positioned Australian Government and law enforcement agencies (Chapter Five, Sections 5.3.1 and 5.3.2), contrasting their expertise with the ignorance of ordinary citizens (Chapter Five, Section 5.2), or articulating distinctions between self-described advocates and activists (Chapter Six, Section 6.2.1.1). Altogether, it is argued these extracts demonstrate how ‘privacy advocates’ commonly identify as marginalised subject-matter experts, defined through their differences from other subjects.

There was also a conscious attempt to differentiate the identity of ‘privacy advocates’ from other ‘conspiratorial’ subjects. Specifically, privacy advocates discursively distinguished themselves from the people who wear ‘tinfoil hats’ as part of articulating their identities as subject-matter experts. For example, Participant 10 made the following remarks:

“This is a global issue. It is not a coincidence that you see these policies crop up in Australia after international meetings of Ministers or heads-of-state. Also, how do you talk about this without sounding like you are wearing a tinfoil hat?” (emphasis added)

This notion of the ‘tinfoil hat’ was raised separately by a sub-group of four (4) participants without any prompting by the researcher. It is argued this reflects the frustrations of a sub-set of participants at being marginalised from public debate due to an erroneous association with ‘conspiratorial’ subjects. For example, Participant 1 remarked how privacy advocates who raised concerns about the Australian Government retaining medical information about citizens are readily dismissed, arguing that “[the Government] will just say: there is going to be all these amazing public health benefits, and anybody who says there are privacy concerns is wearing a tinfoil hat” (emphasis added). Similarly, Participant 21, lamenting the state of public discourse about privacy issues, noted that “part of the problem, I think, is a public perception that there are two extremes: people wearing tinfoil hats and a government being too gung-ho, and nothing happening in between” (emphasis added). Finally, as another example, Participant 4 explained how they became involved in the privacy movement in the following terms:

“It all kind of came to a head with Snowden’s revelations in 2013. I was already interested in it before that. I think from reading, at the time, what you would call ‘tinfoil hats’.” (emphasis added)

Here, the participant is discursively positioning ‘privacy advocates’ who forewarned about the Snowden disclosures as unjustly stigmatised as ‘conspiratorial’ subjects, despite the fact their concerns have subsequently been vindicated. Thus, by comparison, the contemporary use of the ‘tinfoil hat’ label is framed as similarly unwarranted. Overall, the rejection of the ‘tinfoil hat’ label is used to reinforce the shared identity of ‘privacy advocates’ as subject-matter experts who have been unreasonably marginalised from public debate.

While many participants made comments about ‘tinfoil hats’ in an off-hand manner, some were actively concerned it was being used to intentionally discredit the expertise of privacy advocates. For example, Participant 12 remarked how they are often dismissed when raising privacy issues with members of the public:

“It is straightforward to get privacy activists to raise concerns about being under surveillance all the time, but then the debate turns into a ‘he-said, she-said’ argument about what kind of data is collected and how ‘we’re all just a bunch of conspiracy theorists hiding under a rock’… I think, as with any sort of legislation, it was framed in a way that helps sell it to the public. It also allows opposition to be painted in a particular fashion. They can say ‘oh that’s crazy’ and that they will never allow indiscriminate access or secondary access. Yet, here we are.” (emphasis added)

Similarly, Participant 16 stated their concerns bluntly:

“I am not a conspiracy theorist. I generally think you can find more plausible explanations than a conspiracy to do something. I just wanted to start out with that. On the other hand, you can have a coincidence of interests, that appears to outsiders that there is a conscious conspiracy… And so, we get this sort of expansionism and criticism of those who complain at any point. The question becomes, what is wrong with you? This is such a small change compared to what we have already, so you must be particularly sensitive or paranoid or not reasonable. It is made in rational and administrative terms, framed as a minor change, allowing them to disregard opposition to the whole thing. You get told, ‘you’re over-reacting’, as if the very act of trying to have a systemic review of things is illegitimate. It is taken as a sign that you should be discredited or excluded from sane, civil, and well-informed conversation.” (emphasis added)

Throughout these two extracts, Participants 12 and 16 allude to the notions of being discredited as ‘conspiracy theorists’ and ‘crazy’ and thereby marginalised from ‘sane’ public debate. These extracts therefore articulate, in a similar manner to those above, concern about the perceived exclusion of ‘subject-matter experts’ from reasonable public debate. Overall, it is argued participants construct this shared identity of ‘privacy advocates’ as marginalised subject-matter experts ‘fighting’ common antagonists who threaten the signified moral value of ‘privacy’. Subsequently, this shared identity enables the construction of an associated civic duty to advocate resistance to morally arbitrary forms of surveillance.

6.3. CONSTRUCTING A CIVIC DUTY TO ADVOCATE RESISTANCE

The previous section established how ‘privacy advocates’ transcend difference through the construction of a shared identity as subject-matter experts, marginalised from political decision-making processes, and who are ‘fighting’ common antagonists threatening the signified moral value of privacy. Building upon this analysis, this section argues ‘privacy advocates’ construct an associated civic duty to advocate resistance to the morally arbitrary forms of surveillance enabled by relations of domination. Specifically, this category of a ‘civic duty’ is derived from the republican notion that citizens have obligations to participate in collective self-government under conditions of social justice (i.e. Pettit, 2012; Braithwaite, 1995). It is argued ‘privacy advocates’ construct this civic duty to reconcile their commitments to collective self-government with their self-identification as subject-matter experts residing within societies characterised by relations of domination (Chapter Five, Section 5.4).

Specifically, the first sub-section argues privacy advocates are cognisant of the limits of legal resistance to surveillance and conceptualise ‘privacy protection’ as a strategy for defending human rights. The second sub-section argues that ‘privacy advocates’ thereby construct a civic duty to advocate resistance to morally arbitrary forms of surveillance by cultivating citizens capable of protecting their own privacy rights. These facets of a civic duty correspond with the notion that ‘citizens’ are “obstacles” to the arbitrary expansion of surveillance powers where they object on the basis of privacy as a human right (or civil liberty) or as the product of civic deliberation (i.e. Andrejevic, 2017, p. 879). Finally, it is argued this civic duty is constructed to resolve tensions between ‘democracy’ and ‘technocracy’ as sources of non-arbitrary authority for surveillance powers. Overall, the section argues that privacy advocates construct a civic duty to advocate resistance to the morally arbitrary forms of surveillance.

6.3.1. PRIVACY PROTECTION AS DEFENDING HUMAN RIGHTS

This sub-section argues that privacy advocates are cognisant of the limits of legal strategies of resistance to surveillance legislation and therefore conceptualise ‘privacy protection’ as an alternative strategy for defending human rights. This is predicated upon the articulation of a discursive relationship between the legitimacy of surveillance powers and the moral status of privacy protection. Specifically, the aim of ‘surveillance law reform’ via legal strategies of resistance was articulated by four (4) participants. For example, Participant 10, who occupies a leadership position within a privacy-oriented organisation and has expertise in law and technology, articulated the need for privacy rights backed by an independent judiciary:

“We introduce a constitutionally entrenched Bill of Rights and provide the judiciary with the adequate training to understand the technical issues that continuously emerge within the digital age. The only way to prevent government overreach, in the long-term, is to have a strong and independent judiciary.”

This highlights how the aim of some privacy advocates is to achieve institutional change – legal protections for the moral right to privacy. It also reflects a distrust of democracy as a system of government vulnerable to perversion by political populism when left untempered by technocratic expertise and judicial oversight. Here, privacy is conceptualised as a human right defended by the judiciary, and not necessarily as a matter for civic deliberation. This preference for surveillance law reform was repeated by several interviewed participants. For example, Participant 12, who belongs to a technology-oriented advocacy organisation, argued:

“You must have bedrock principles. Australia does not have a Bill of Rights like the US. We have higher courts that are fairly willing to stand up for traditional liberal values, but it would be helpful to have constitutional guidelines setting out the traditional liberal values of Western civilisation.”

And similarly, Participants 17 and 20 described the need for actionable privacy rights:

“A formal Bill of Rights. I would want to get that through. One that is written to be robust and enhance the role of the judiciary in Australia, because I think it has atrophied when it comes to protecting democratic institutions.” (Participant 17)

“Constitutional protection for human rights, including privacy rights. That’s the lynchpin. That would lead to a court of human rights, which would lead to some jurisprudence. Following that, a tort for serious invasions of privacy.” (Participant 20)

The discourse articulated here supports strong legal protections for privacy rights – protected by the judiciary – to temper public opinion about national security and criminal justice matters. It reflects concerns about the potential for illiberal discourses of ‘moral responsibilisation’ to cultivate citizens who support authoritarian surveillance programs (i.e. Petersen & Tjalve, 2013; Garland, 1996). This highlights the ongoing importance of liberal discourses for justifying the institutionalisation of privacy rights (or civil liberties) as ‘obstacles’ against the arbitrary expansion of the scope and function of surveillance programs (i.e. Andrejevic, 2017).

In contrast, only two (2) participants articulated a more reserved desire for ‘necessarily general’ privacy rights amenable to civic deliberation. For example, Participant 19, who has held leadership positions within privacy-oriented organisations, made the following observation about this point:

“I would have a robust, and necessarily general, protection of privacy within the Australian Constitution as the starting point. So, as new issues arose, people could say, ‘look, that interferes with my right to privacy and free speech’. Some constitutional guarantees to life and liberty, and other rights, as it is often phrased. And be able to contest legislation at a constitutional level… But, just as importantly, the ability of individuals and organisations to initiate actions before the courts, and not be trapped before the privacy commissioner as they currently are. There is no access whatsoever to the courts. That is the most important thing – to be able to open-up the potential for litigation in the courts through multiple avenues.” (emphasis added)

There was a view that existing avenues for legal and political resistance to surveillance legislation – such as via the Australian Privacy Commissioner – are inadequate. The idea of a ‘necessarily general’ right to privacy was thus considered important as a viable alternative to the administrative framework of the Australian Privacy Principles established under the Privacy Act (1988, s. 14). Participant 21, also from a legal background and a member of a human rights organisation, argued human rights legislation should not be overly restrictive:

“The constitution should reflect the need for human rights legislation, but not the substance of human rights legislation. So, parliament must maintain a Human Rights Act, or however you wish to frame it, but its existence is what is important. It should be up to the parliament to determine how the legislation exists. I say that because seventy-five, fifty, or even twenty-five years ago the concept of including digital rights under human rights would not exist.”

Overall, these privacy advocates articulated the desire for both rights-based and tort law frameworks for limiting the scope of surveillance powers and institutionalising privacy protections. Yet, they also demonstrate beliefs in the culturally contingent character of human rights, as is distinctive of communitarian critics. In sum, these participants highlight the importance of tempered-yet-flexible privacy rights as a bulwark against an evolving landscape of surveillance technologies.


Despite support for legal strategies of resistance to surveillance programs, it is argued that privacy advocates’ experiences of marginalisation have contributed to the construction of ‘privacy protection’ as an alternative method of defending human rights. Specifically, the inadequacy of legal strategies of resistance to surveillance legislation was articulated by five (5) participants to justify the advocacy of extra-legal strategies of ‘privacy protection’. For example, Participant 14 labelled such strategies of privacy protection as justified acts of ‘civil disobedience’:

“What we advocate is civil disobedience. If you think a law is misinformed or immoral or wrong in some way, while we do not advocate illegal behaviour, we advocate that you express your discontent with the law in some way that is positive and constructive. Not harmful. Ways that can inform other people about the issues you have and get some sort of debate going.” (emphasis added)

This illustrates how the purpose of ‘privacy protection’ is not only to prevent interference with individual privacy rights, but also to prompt civic deliberation about the matter – to ‘get some sort of debate going’. As such, ‘privacy protection’ is constructed as a means of defending human rights and disrupting the domination of citizens by political elites. Other participants explained their support for ‘privacy protection’ campaigns as strategies of defending their privacy rights from morally arbitrary forms of state interference. For example, Participant 18 argued:

“It’s not currently against the law to use a VPN or other tools to circumvent the metadata retention scheme. Citizens have the right to privacy and should use reasonable means to protect that right.” (emphasis added)

Participant 2 was even more direct, arguing that “[i]f the government chooses to infringe upon my rights, that’s an unfortunate choice, but I’m going to assert my rights and attempt to stop the government infringing upon me”. Similarly, Participant 3 described “encryption tools” as “a necessary part of protecting our human right to privacy when mass surveillance is occurring”, adding that:

“Encrypting our communications is not hiding, suspicious or a crime. Resisting the chilling effects of mass surveillance helps to restore rights, protect colleagues and fundamental principles upon which democracy rests in the digital age”

Additionally, in discussing why people choose to use privacy-enhancing technologies, Participant 7 made the following remarks:

“They were setting out to take control of their own data online. The morality of that is very clear. To my mind and values, Australians have a right to know what data is being collected and have some control over which corporations or government authorities have access to that data. To do so using completely legal software is a right… So what people are trying to do is just restore some of the privacy that they have previously enjoyed. It is not to seek some new level of confidentiality and evasion of state surveillance. People are seeking to maintain an expectation of privacy that most Australians already hold.” (emphasis added)

It is argued these extracts demonstrate how ‘privacy protection’ is constructed as a method of defending human rights against arbitrary state interference. Indeed, these participants are discursively constructing ‘privacy protection’ as a legal and ethical practice, where the moral status of the act is contingent upon the illegitimacy of corresponding surveillance powers (i.e. where citizens do not have a ‘right to know’ about, or ‘have some control’ over, their scope and function). Overall, these extracts demonstrate how the limits of legal strategies of resistance to surveillance legislation have a discursive relationship with the construction of ‘privacy protection’ as a strategy for defending human rights from arbitrary interferences.

Importantly, many privacy advocates interviewed for this research were cognisant of the morally contestable nature of these justifications. As such, six (6) participants explicitly emphasised how the morality of ‘privacy protection’ is contingent upon the legitimacy of surveillance powers. That is, they invoked the notions of domination and civic corruption (as discussed within Chapter Five, Section 5.4.2). For example, Participants 8 and 6 made the following points concerning why the movement embraced the advocacy of privacy-enhancing technologies:

“I have little faith that the Australian Government, both parliamentarians and senior public servants, are interested in change. So, in the short term, I do not see an electoral solution to these problems… There is an alternative to lobbying the government for several years. It is to build a tool. For example, you could lobby a government to change the rules around Freedom of Information, or you might build a tool that makes it easier to provide access to that information.” (Participant 8; emphasis added)

“I see the varying levels of encryption used by individuals as people taking responsibility for their right to privacy, because the government has decided to no longer defend their right to privacy… My argument here is that complete surveillance by Western nations, and their inability to restrain their desires to surveil everybody, lead to a situation where people had to adopt encryption technologies to protect themselves.” (Participant 6)

Evidently, these participants frame ‘privacy protection’ as justifiable within a context where ‘lobbying the government for several years’ has been ineffective, and therefore governments have created circumstances ‘where people had to adopt encryption technologies to protect themselves’. This draws a discursive contingency between the legitimacy of surveillance powers and the morality of privacy protection. For example, Participant 20 made the following observation:

“Within the context of metadata retention, there is a recognition among activist groups that it is essentially impossible to affect legislative change via the legal system. In that sense, their strategies of trying to get people to circumvent the amendments that authorised blanket surveillance is the next best option.”

This clearly links the ‘impossibility of affecting legislative change’ to the ‘next best option’ of prompting the use of privacy-enhancing technologies. Similarly, as Participant 9 described it, “[t]here is no debate, and people feel that nothing is going to change” and therefore, “I am going to continue pushing the technology”.

Overall, it is argued these extracts demonstrate how the experiences of privacy advocates, as marginalised subject-matter experts ‘fighting’ a losing ‘battle’ against the proponents of surveillance laws, correspond with the construction of ‘privacy protection’ as a strategy of defending human rights from arbitrary interference.

Privacy advocates are cognisant that ‘legitimate’ surveillance power must be derived through democratic processes. In this sense, the human right that is being ‘protected’ cannot be reduced to mere non-interference and includes elements of privacy as freedom from arbitrary surveillance powers (e.g. Newell, 2018; Hoye & Monaghan, 2018). For example, Participant 6 offered the following observation about the role of ‘democracy’ in tempering arbitrary power:

“The Australian public do need a mechanism to force politicians to explain their decisions before it gets out of control. Other countries have introduced similar procedures with success. Our democracy is based on delegating authority to others to make decisions on all issues, all of the time.”

This extract very explicitly links ‘delegating authority’ to ‘the Australian public’ (citizens) as a means of determining the scope of surveillance and privacy rights. Relatedly, several participants expressed concern about the ‘perverse effects’ of individualisation upon debates about surveillance powers. For example, Participant 10 was critical of promoting individual strategies of privacy protection:

“Those campaigns [e.g. Go Dark Against Data Retention] may have been effective at showing the ineffectiveness of the laws, but there is probably also a perverse effect. They assure the people who have some reason to care about the introduction of data retention that there is really nothing to worry about.”

Furthermore, several participants were concerned about the prospect of privacy-enhancing technologies contributing to citizens ‘opting out’ of collective decision-making processes. For example, Participants 16 and 17 made the following comments:

“So, while I can see how circumvention is appealing, there remain questions about its actual effectiveness. There are also issues about false confidence, or delusions, of being able to escape all of this. If we come to think it is not a problem anymore, we can retreat into individual or isolated cultural groups. I think that is part of the problem. It is one of the signs of an authoritarian regime, nobody knows who to trust and there is no vitality in civil society.” (Participant 16)

“I think to be entirely preoccupied with individual-level information security, while helpful, is a fool’s errand. If we sit back and run skills training workshops about securing your data, that is fantastic. Any kind of awareness-raising and skill-sharing is a good thing. But there is a tendency to disengage from larger processes, and that means we’re not going to be able to make any progress. One thing I have picked up on is how a lot of NGOs and civil society groups that say, ‘we’re about giving you the skills necessary to protect your privacy’. But that is always a reactive position.” (Participant 17)

It is argued these extracts demonstrate how the value that is being ‘protected’ is constructed as a freedom from arbitrary surveillance powers. Indeed, these criticisms recognise that the protection of privacy rights through privacy-enhancing technologies is always a ‘reactive’ behaviour – a form of ‘retreat’ – rather than a freely chosen act. This is consistent with the civic republican argument that where an individual is forced to use privacy-enhancing technologies, they cannot be considered to be free from arbitrary surveillance powers (i.e. Bodó, 2014, p. 9). Overall, this highlights how the human right being defended should be conceptualised in a broader manner, not bound to the narrow liberal framework of privacy as non-interference.

A related sub-category regarding the construction of privacy protection as a strategy for defending human rights concerns how the absence of actionable rights impacts the ability of privacy advocacy organisations to attract sufficient funding. In total, four (4) participants expressed scepticism about the efficacy of legal strategies of resistance specifically due to funding limitations. For example, Participant 19 offered the following observation about the efficacy of legal resistance via the courts:

“Legal resistance, via court actions, has never got us anywhere because, in Australia, there is nowhere to go. There are no constitutional protections, except in incredibly unusual circumstances. There is a lack of a general tort of invasion of privacy, or, any commitments by the Australian government to international agreements that, in any way, end up before international courts. The only exception is the very limited approach you can take through Article 17 of the International Convention of Civil and Political Rights. That happened only once in the Toonen case.77 But, it is almost impossible as a way of, say, resisting data retention laws or something like that. So, unlike digital rights activists in Europe or the United States, it is almost impossible for us to resort to the courts as a way of dealing with actions or proposals that threaten privacy on a large scale.”

The lack of enforceable privacy rights was thus invoked as an explanation for why Australian privacy organisations struggle to attract funding from potential allies within the technology sector. Indeed, Participant 19 continued by describing how the lack of actionable privacy rights limits other advocacy efforts:

“If you haven’t got a constitution that enables you to contest anything, why are people going to sink money into organisations, year-in and year-out, when they never have any spectacular victories? You can never say, ‘this year APF challenged the federal government on the unconstitutionality of data retention.’ If you were able to say things like that, you would probably have more donors thinking it was a good investment for their money. What we say now is, ‘we’ll do our best to spend your money on political campaigns’ and they will probably be pointless.”

77 The participant is referencing Toonen v Australia, Communication No. 488/1992, U.N. Doc CCPR/C/50/D/488/1992 (1994).

Similarly, Participant 15 diagnosed the problem in the following terms:

“It is hard to build a strong and continuing movement around reactive outrage to government nonsense. You get a lot of media for a day or two, but you do not get ongoing funding to respond to governments’ stupid off-the-cuff ideas. You get ongoing funding when government, citizens, and industry recognise it as an issue.”

Other participants also made the connection between a lack of actionable rights and difficulties obtaining adequate funding. For example, Participant 21 highlighted how, “[o]ne of the big difficulties there is that a lot of lawyers are unable to run matters that are not adequately funded”, and therefore “I attempt to do as much pro bono work as possible and involve myself where I can in challenges of a rights-based nature”.

Similarly, Participant 9 observed how, “countries with greater capacity for better services have the least draconian laws” because “[t]he more access people have to privacy and basic internet protection, such as net neutrality, the better internet infrastructure they have”. Indeed, it is a ‘chicken-or-the-egg’ dynamic, as a lack of initial funding limits lobbying efforts to enshrine privacy rights, which itself limits ongoing access to funding. What this illustrates is how the lack of robust privacy rights has flow-on effects on privacy protection campaigns, further limiting the availability and effectiveness of methods of legal resistance to surveillance.

The final sub-category regarding the construction of privacy protection as a strategy for defending human rights concerns its symbiotic relationship with surveillance powers. Specifically, six (6) participants considered ‘privacy protection’ a viable strategy within the context of an ongoing “arms race” between the subjects of surveillance (i.e. Marx, 2009, p. 199). For example, Participant 13 observed how legal regulation attempts to ‘catch up’ with technological developments:

“Historically, technology has been ahead, and the law is always catching up. So, I think our saviour may be technology that negates or invalidates the legislation… The long-term friction is that governments will continue to legislate around these issues, and we will end up in a panopticon. I think seeking wider societal shifts is necessary. The huge push towards HTTPS is fantastic, and not just from a technical perspective. The fact that it provides significant privacy benefits is unreal.”

Participants 11 and 4 made similar observations:

“They try to stop the flow of ideas, but even the most successful fascist states can only do that for a short time. You can only hold it back for so long. There are smart people doing things with encryption now that benefits everyone. Look at Tor, that originated within the United States Navy to protect their communications during overseas operations. Now it is an open project. Those sorts of ideas can come from anywhere, and I think it is very hard to try to stop them.” (Participant 11)

“I have run quite a few crypto-parties, and one of the aims is to increase the use of technology that prevents mass surveillance for the individual. It is actually kind of a sign that it is working, that the government wants backdoors into encryption.” (Participant 4)

It is argued these extracts highlight how the complex ‘arms race’ between the subjects of surveillance influences judgements about the viability of ‘privacy protection’ as a strategy for defending human rights. Specifically, it is based upon an empirical claim that technology can innovate around regulation. For example, Participant 12 described the lengths to which people will go to resist arbitrary surveillance powers:

“You know, the public wants what it wants, and people will do what it takes to achieve it… If you block VPNs, people will invent something else using lower layer protocols, and if you block that, people will invent something else that bypasses the entire internet and doesn’t even use IP. If the government blocks that, then people will come up with another approach that uses Bluetooth or something, and if the government blocks that people will start finding yet another way that involves exchanging USB sticks or something similar.” (emphasis added)

This claim that technologists innovate around regulation was a recurring sentiment. For example, in reference to the development of the Tor Browser, Participant 21 argued that “[t]o think you can legislate around those nerds is an exercise in futility and arrogance”, while Participant 6 argued that “there is no way for governments to take [encryption technology] offline”. Overall, such discourses invoke the notion of a symbiosis between surveillance power and privacy protection. This contributes to a broader construction of ‘privacy protection’ as a strategy for defending human rights within a context of limited legal avenues for resisting surveillance powers.

6.3.2. PRIVACY PROTECTION AND CITIZEN CULTIVATION

The above sub-section argued that privacy advocates are cognisant of the limits of legal strategies of resistance to surveillance legislation and that this informs the construction of ‘privacy protection’ as a method of defending human rights. Building upon this analysis, this sub-section argues that privacy advocates construct an associated ‘civic duty’ to advocate resistance to morally arbitrary forms of surveillance. This duty is linked to the identity of ‘privacy advocates’ as subject-matter experts equipped with the knowledge necessary to disrupt relations of domination by cultivating citizens capable of protecting their privacy rights. The duty therefore aims to ensure that the ‘obstacle’ of civic deliberation prevents the arbitrary expansion of surveillance powers (i.e. Andrejevic, 2017). Finally, it is argued this civic duty enables ‘privacy advocates’ to reconcile their self-identification as subject-matter experts with a recognition that non-arbitrary forms of surveillance power require participation in collective self-government. Overall, the sub-section argues that ‘privacy advocates’ construct a ‘civic duty’ to advocate resistance to morally arbitrary forms of surveillance by cultivating citizens capable of privacy protection.

The articulation of a civic duty to advocate resistance is premised upon the positioning of ordinary citizens as culpable victims (Chapter Five, Section 5.2), privacy advocates as subject-matter experts (Chapter Six, Section 6.2.2.2), and the construction of ‘privacy protection’ as an alternative method of defending human rights (Chapter Six, Section 6.3.1). Specifically, it is argued privacy advocates construct a self-imposed responsibility, as subject-matter experts, to disrupt relations of domination by cultivating citizens capable of privacy protection. In total, twelve (12) participants articulated this need to disrupt relations of domination by advocating resistance to morally arbitrary forms of surveillance. For example, Participant 1 argued that, in order to retain democratic control over surveillance legislation, there is an antecedent requirement of digital literacy among the citizenry:

“We expect all these people to do all these complicated things for us, but this is actually really hard. So, you cannot outsource it to geeks entirely because then you don’t have any control over it. That decreases accessibility. But equally, everybody needs to have a baseline level of digital literacy, because that’s what our lives are and it is what we do.” (emphasis added)

Evidently, the participant acknowledges the importance of empowering citizens to collectively confer non-arbitrary authority to surveillance powers. However, they articulate the underlying tension between ‘democracy’ and ‘technocracy’ as sources of authority where there is a low ‘baseline level of digital literacy’. This is consistent with the subjectivation strategies analysed within Chapter Five (Section 5.2), where ordinary citizens are considered ignorant or apathetic with regard to privacy issues.

Similarly, Participant 6 reiterated how the legitimacy of surveillance legislation is contingent upon democratic consent:

“We need more participation, and those who do participate need more say in the outcomes. That is what I don’t think we saw in the decision by the parliament to create a mass surveillance regime. Before a government makes a choice like that, it needs to be very apparent what they are doing. It needs to be agreed to by the democracy, that they have weighed up the pros and cons, and concluded they want it.” (emphasis added)

What is distinct about these extracts is the explicit relationship constructed between the notions of ‘educated citizens’ and ‘non-arbitrary surveillance’. Specifically, the participants highlight the contingency of the authority for any ‘mass surveillance regime’ upon the capacity of ‘the democracy’ (i.e. the citizenry) to have appropriately ‘weighed up the pros and cons’. Overall, this discourse provides the context for discussions about a civic duty to advocate resistance.


There was a prevailing discourse that privacy advocates have a civic duty to advocate resistance to morally arbitrary forms of surveillance. Specifically, privacy advocates, as subject-matter experts, are presumed to appraise the risks associated with surveillance legislation more objectively. For example, Participant 18 identified the necessity of citizen education for challenging arbitrary surveillance powers:

“Awareness and understanding are key. This regime has been possible due to a low understanding of digital tech from the public. If we can make more people aware of the invasion of privacy that is occurring, we can get them to demand a change.”

What is important here is the ways in which the discourse characterises ordinary citizens as ignorant and apathetic, yet also capable of being cultivated. In the absence of adequate legal and political institutions through which surveillance legislation might be challenged, privacy advocates articulate a responsibility to disrupt relations of domination through citizen education. This responsibility is reflected in the ways in which participants place the onus for educating citizens upon ‘privacy advocates’ (i.e. ‘if we can make more people aware… we can get them to demand a change’). As such, it is a discursive strategy used to ascribe the responsibility for advocating resistance to morally arbitrary forms of surveillance to their shared identities. For example, Participant 1 made the following related remark:

“If we make enough people understand what a risk this is to their privacy, and how it is not the same as not caring about privacy when you post what you are eating for lunch on Instagram… If people actually understand the power of these sorts of datasets then hopefully they will take more of this onboard and realise it is such a potential threat.” (emphasis added)


These extracts articulate a clear relationship between the identification of ‘privacy advocates’, as subject-matter experts, and the associated discursive positioning of ordinary citizens (i.e. “if we make enough people understand” or “if we can make more people aware”). Overall, the key observation is how the subjectivation strategies used to discursively position citizens render them amenable to cultivation.

It is argued the construction of this civic duty to disrupt relations of domination by cultivating citizens capable of protecting their privacy rights is articulated to resolve a fundamental tension between ‘democracy’ and ‘technocracy’ as sources of non-arbitrary power. Specifically, this problem arises where there are competing commitments to moral rule by the people (‘dimo-kratia’) and moral rule by experts (‘techne-kratia’) (Gilley, 2017, p. 10). Indeed, this can create crises of legitimacy within societies – such as within the European Union – where technical expertise is privileged at the expense of the citizenry (Kurki, 2011; Wallace & Smith, 1995). As such, the articulation of a civic duty to advocate resistance enables participants to maintain commitments to ‘democracy’ without undermining the superordinate status of subject-matter experts as sources of technocratic authority. For example, Participant 21 highlights the need for experts to educate members of the public, to influence their attitudes about privacy rights and surveillance powers:

“The key to unlocking this problem lies in education. There is an element of human nature that you cannot predict, legislate around, or alleviate: people do stupid things. That is an inherent aspect of being human, because humans are inherently greedy. There is no solution to that other than educating people about the consequences of their actions, including extremism, radicalisation, and intruding into others’ private lives.” (emphasis added)


It is from this premise that participants argued “solutions therefore probably need to be at the cultural level” (Participant 16) and “you can make society safer by educating them [citizens] about how to use these technologies appropriately” (Participant 6). This latter extract specifies how the aim of ‘education’ is to cultivate a capacity among citizens for privacy protection. Relatedly, Participant 15 observed, “[i]f you choose to location tag your holiday photos that is a legitimate choice”, however “you need to be aware that [surveillance] is happening” to make an informed choice. Evidently, the legitimacy of decisions to disclose information is considered contingent upon levels of digital literacy. Overall, privacy advocates construct a civic duty to advocate resistance to morally arbitrary surveillance on the basis of empowering citizens to protect their privacy rights in accordance with the advice of technical experts.

There were two articulated strategies for how citizens can be cultivated to protect their privacy rights. One strategy involved the articulation of the harms of surveillance legislation (as discussed within Chapter Four, Section 4.3.2). Specifically, ‘privacy advocates’ placed upon themselves an onus for rendering these surveillance harms ‘visible’ and therefore recognised by ordinary citizens. For example, Participant 16 characterised the Australian privacy movement’s goals in the following terms:

“Privacy and data retention have never been at the top of people’s concern, which I suppose is due to the intangibility of the harm that is involved. I am increasingly thinking that we need to go back-to-basics and determine how to make it ‘real’ and convince people that it can impact your financial or physical well-being. From my perspective, one thing I try to do is to provide concrete examples of the consequences.” (emphasis added)


This extract demonstrates the self-imposed responsibility of ‘privacy advocates’ to make ‘real’ these financial and physical harms caused by surveillance programs. Furthermore, it is argued the use of a first-person plural perspective is indicative of how the civic duty is attached to the shared identity of ‘privacy advocates’ (i.e. ‘we need to go back to basics’). This is predicated upon the recognition that ‘privacy advocates’ are subject-matter experts capable of assisting citizens to ‘connect the dots’ and ‘put two-and-two together’. Indeed, this was repeatedly expressed by participants to describe the need to render surveillance harms visible to non-experts:

“There is a huge amount of complacency at the moment, and I don’t think these changes will happen until there are workable case studies of everyday individuals whose rights have been seriously breached.” (Participant 5)

“Yeah, it does all come down to perception. If people came home at the end of the night, turned on the TV, and the first thing they saw was the metadata produced during the day – information about their children’s travel habits – people would be more concerned.” (Participant 9)

“We need to get through by talking about the consequences like, ‘your mum can see this’ or ‘you’ll be banned from traveling to the US’ or ‘you’ll be subtly suspected of stuff in an ongoing capacity’. If you make it a personal and pragmatic process, where people recognise it matters to them, they will join the dots.” (Participant 16)

Overall, it is argued that the notion of ‘cultivating citizens’ is built into discourses articulated within these extracts – it is assumed that judgements about surveillance harms can be influenced by privacy advocates. It is clear how this enables privacy advocates to resolve the tensions between ‘democracy’ and ‘technocracy’ as sources of non-arbitrary surveillance powers. By invoking a discourse of ‘citizen cultivation’, privacy advocates retain their self-conception as subject-matter experts with privileged access to knowledge about surveillance harms, while also acknowledging that non-arbitrary forms of power require democratic decision-making processes.

The second articulated strategy for cultivating citizens capable of protecting their privacy rights concerns correcting attitudes about the necessity and proportionality of surveillance as a criminal justice policy (Chapter Four, Section 4.3.2). This is still linked to the notion of harm, which itself underpins judgements about what interferences with privacy are necessary and proportionate. Again, this was articulated via the distinction between ‘visible’ and ‘intangible’ harms – the latter considered to be given insufficient weight by citizens in public debates about surveillance powers and privacy rights. For example, Participant 8 made the following observation about how ‘people’ attribute disproportionate consideration to the benefits of surveillance rather than the associated harms:

“So, on one hand, we have concrete cases of people being made less secure by these systems, and on the other hand a claim that it is necessary to prevent threats to national security or respond to organised crime” (emphasis added).

This extract highlights the distinction between the ascribed capabilities of subject-matter experts and ordinary citizens to assess the veracity of claims about necessity. As such, from the subject-position of privacy advocates, there is an associated responsibility to educate citizens to appropriately assess such claims. Again, it is argued this contributes to the construction of a civic duty to advocate resistance as a means for reconciling the tension between ‘democracy’ and ‘technocracy’ as sources of non-arbitrary surveillance powers. For example, Participant 11 observed how where people ‘connect the dots’ they ‘get angry’:

“If you talk to most people and tell them how their credit card number is at risk at being stolen, they would care. It is getting to the point where if people can access your name and driver’s license number, they can steal your identity. People do give a shit about that. But they have not yet connected the dots on this other stuff. But when you have the conversation with people, they certainly get angrier.” (emphasis added)

Similarly, Participant 19 concurred, where they noted, “I think [what] would be the most effective thing is a financial crash, in the sense of surveillance-based marketing crashing”, while Participant 11 stated “I think we will need a high-profile leak in Australia for people to care… I think the Equifax leak in the States has made more people care”. Finally, Participant 9 described how the visceral distinction between ‘visible’ and ‘intangible’ surveillance harms impacts the citizenry’s capacity to accurately assess proportionality:

“They don’t see it because it is not physical. That’s the problem – humans see things as being tangible. When we see things on the screen, we do not realise that the device is making assumptions about what we are doing. You don’t see the algorithms, and there are no laws protecting that information… When it actually impacts the average Australian, when it creates an actual economic problem, then they would put two-and-two together. Most people I speak to say the same thing: ‘I’ve got nothing to hide’. It is a stupid argument, but at the end of the day they are not fearful. I’m not fearful either, I just find it disconcerting.”


This invocation of the ‘nothing to hide’ argument reflects frustration with the perceived ‘intangibility’ of privacy understood as freedom from arbitrary power. While the ‘visible’ harms noted throughout are all cases of violations of non-interference (e.g. financial loss resulting from a data breach), the notion of arbitrary surveillance power does not necessarily involve any ‘tangible’ or ‘visible’ interference with the person (i.e. van der Sloot, 2018). Overall, these extracts demonstrate how privacy advocates construct a civic duty on the premise that citizens can be cultivated to more accurately appraise the harms, necessity, and proportionality of surveillance – or, as expressed by Participant 11, to “give a shit” about privacy rights.

6.4. CHAPTER CONCLUSION

This chapter built upon the analyses presented within Chapters Four and Five via examining the mobilisation of ‘privacy advocates’ who resisted the Data Retention Act (2015) and Encryption Access Act (2018). As such, the chapter used the analytical construct of identification to understand the process of collective action. The first section argued that the Australian privacy movement is mobilised through the construction of the shared identity of ‘privacy advocates’ – as marginalised subject-matter experts who are ‘fighting’ antagonists threatening the signified moral value of privacy. The second section argued that this shared identity informs the construction of an associated ‘civic duty’ to advocate resistance to morally arbitrary forms of surveillance. Specifically, privacy advocates have a self-imposed responsibility to cultivate citizens capable of protecting their privacy rights. It is argued this civic duty reconciles tensions between democracy and technocracy, enabling privacy advocates to remain committed to democratic decision-making processes without threatening their identities as subject-matter experts. The chapter therefore provides an answer to the third research question: Australian privacy advocates are mobilised by a shared identity as marginalised subject-matter experts who have a corresponding civic duty to advocate resistance to morally arbitrary forms of surveillance power. Finally, the chapter concludes the analysis of how privacy advocates contest the moral equivalence at the core of the ‘problem of going dark’ – the advocacy of ‘privacy protection’ is the performance of a civic duty to disrupt the relations of domination that enable the passage of morally arbitrary forms of surveillance.

Chapter Seven: Conclusion

7.1. INTRODUCTION

This thesis has examined the politics of privacy protection within the context of Australia’s national debate about digital surveillance legislation. In particular, the research analysed the moral equivalence drawn between the practices of privacy protection and methods of evading criminal investigations embedded within the ‘problem of going dark’, an argument invoked by the Australian Government to justify the Data Retention Act (2015) and Encryption Access Act (2018). The research therefore has both academic and applied significance. This chapter presents a summary of the results of the research, the significance and limitations of the results, and provides suggestions for additional research. First, an overall summary of the study’s results within the context of previous research is presented. Second, the significance of the results for criminal justice scholars, policymakers, and civil society advocates is considered. Third, the limitations of the research are discussed, with suggestions for further avenues for research provided. Overall, this final chapter brings the strands of the analyses together and establishes the foundation for an ongoing research program examining the politics of privacy protection and criminal justice policymaking.

7.2. SUMMARY OF THE THESIS

The research presented within this thesis was prompted by concern about the use of the ‘going dark’ argument to erode privacy rights. Specifically, it was prompted by a desire to clarify the conceptual distinction between the practices of privacy protection and the evasion of criminal investigations, particularly given the limitations of ‘privacy’ conceptualised as an individual right to non-interference (Coll, 2014; Stalder, 2002; Bronitt & Stellios, 2005). Indeed, the moral equivalence at the core of the ‘going dark’ argument had political consequences within Australia’s national debates about surveillance powers. For example, after the passage of the Data Retention Act (2015), Australian privacy advocates ran the Go Dark Against Data Retention (GetUp!, 2015; EFA, 2015) campaign to advocate for citizens to circumvent data capture processes. While the campaign was ostensibly about protecting privacy rights, the language of ‘going dark’ was subsequently used by the Australian government to justify the Encryption Access Act (2018). Thus, using the logics of preventive justice, and aided by a collapsed harm principle, the practices of privacy protection were successfully framed as enabling the evasion of criminal investigations.

Other researchers have already observed the limitations of ‘privacy’ as a concept for understanding surveillance ethics. Liberal theorists define privacy as a right to non-interference from others (Warren & Brandeis, 1890), presuming individuals possess such a right over all personal information by default (Frey, 2000, p. 47). Yet, this is entirely consistent with the logics of preventive justice underpinning the ‘going dark’ argument. From within such a framework, surveillance powers are readily justified under any circumstances where they are necessary to prevent harm to others (Zedner, 2007a, p. 174). The communitarian critics of liberal privacy argue it should be conceptualised as a common good (Etzioni, 2000; Regan, 2002), yet this provides no additional insight into how ‘privacy’ and ‘security’ ought to be balanced against one another (Bronitt & Stellios, 2005; Waldron, 2003). Recent contributions from civic republican theorists provide a more viable alternative: privacy is freedom from arbitrary forms of surveillance (Hoye & Monaghan, 2018, p. 354; Newell, 2014a). Yet, although the civic republican perspective is appealing, the framework can similarly justify invasive methods of social control via the logics of moral responsibilisation (Petersen & Tjalve, 2013, pp. 2-3; Garland, 1996, p. 452). Overall, these theories of privacy clarify how the social and political value being ‘protected’ is more complex than any narrow frameworks suggest.

Building upon the existing literature, this thesis identified the need for research examining how privacy advocates articulate the politics of privacy protection and challenge the moral equivalence embedded within the ‘problem of going dark’. To achieve this, the research examined the experiences of twenty-one (n=21) Australian privacy advocates who campaigned against the Data Retention Act (2015) and Encryption Access Act (2018). The research was guided by Laclau and Mouffe’s (1985) post-structuralist theory of political discourse, which conceptualises collective behaviour as enabled by psychosocial processes of signification, subjectivation, and identification. This theoretical framework was chosen to analyse how discursive meaning is contested and ascribed to objects (e.g. privacy protection), around which groups of subjects (e.g. privacy advocates, citizens, governments, law enforcement agencies, and technology companies) – positioned in relation to one another – collectively mobilise. The analytical constructs also guided the development of a semi-structured interview protocol for the data collection process and an inductive-deductive category frame for the associated political discourse analysis. The results of the research were thus reported across three chapters, examining the signification strategies used to contest the ‘problem of going dark’, the subjectivation strategies used to discursively position the subjects of Australia’s surveillance legislation, and the mobilisation of privacy advocates through the construction of shared identities.


Drawing upon the theoretical construct of signification, Chapter Four examined the signification strategies used by Australian privacy advocates to differentiate ‘privacy protection’ from methods of ‘criminal evasion’ – a moral equivalence at the core of the ‘problem of going dark’ that was used to justify the Data Retention Act (2015) and Encryption Access Act (2018). First, it was argued that the articulated meaning of ‘privacy protection’ is constructed via discourses that define the conceptual relationship between privacy and security as moral values. Consequently, it was demonstrated how the moral equivalence between ‘privacy protection’ and ‘criminal evasion’ is a by-product of the relational character of discourses, where ‘privacy’ can always be equated with a ‘threat to security’ or as ‘dependent on security’. Second, it was argued that privacy advocates are encumbered by a liberal framework that is vulnerable to being distorted by the logics of preventive justice. Third, it was argued that privacy advocates therefore contest the ‘problem of going dark’ by ascribing moral arbitrariness to surveillance powers. Additionally, it was argued this property of ‘moral arbitrariness’ supplements the conceptual limitations of the liberal framework of privacy rights. Overall, the chapter provided an answer to the first research question: Australian privacy advocates differentiate the meaning of ‘privacy protection’ from methods of ‘criminal evasion’, and therefore contest the equivalence at the core of the ‘problem of going dark’, via a signification strategy of ascribing moral arbitrariness to surveillance powers.

Drawing upon the theoretical construct of subjectivation, Chapter Five analysed the subjectivation strategies used to discursively position the subjects of the Data Retention Act (2015) and Encryption Access Act (2018) in relation to one another. First, it was argued that ordinary citizens are discursively positioned as disengaged subjects characterised by technological ignorance and political apathy. As such, these citizens are discursively positioned as culpable victims of arbitrary surveillance powers. Second, it was argued that political elites are variously ascribed moral responsibility for the expansion of arbitrary surveillance powers, including the Australian Government, law enforcement agencies, and technology companies. While the Government and law enforcement agencies were discursively positioned as antagonists, technology companies occupied a comparatively more contested space as potential allies. Third, it was argued that the subjects of Australian surveillance laws are discursively positioned within relations of domination, where citizens are considered as coerced into supporting the surveillance agendas of political elites. Overall, the chapter provided an answer to the second research question: Australian privacy advocates use a subjectivation strategy of discursively positioning citizens as the subordinate party within relations of domination with political elites, enabling the ascription of moral arbitrariness to surveillance powers.

Drawing upon the theoretical construct of identification, Chapter Six analysed the mobilisation of ‘privacy advocates’ to resist the Data Retention Act (2015) and Encryption Access Act (2018). First, it was argued that the Australian privacy movement is mobilised through the construction of the shared identity of ‘privacy advocates’ – marginalised subject-matter experts who ‘battle’ the antagonists threatening the signified value of privacy. It was argued this shared identity transcends individual conflicts and intergroup disagreements. Second, it was argued that ‘privacy advocates’ construct a corresponding civic duty to advocate resistance to morally arbitrary forms of surveillance. Specifically, this self-imposed duty seeks to disrupt the domination of citizens via cultivating capabilities of privacy protection. Finally, it was argued that the construction of this civic duty enables ‘privacy advocates’ to reconcile their commitments to ‘democracy’ as a source of non-arbitrary authority without threatening their self-conceptualisation as subject-matter experts. Overall, the chapter provided an answer to the third research question: Australian privacy advocates are mobilised through a shared identity as marginalised experts who have a corresponding civic duty to advocate resistance to morally arbitrary forms of surveillance. As such, the data chapters address the overall aim of the thesis – the advocacy of ‘privacy protection’ is the performance of a civic duty to disrupt the relations of domination that enable morally arbitrary forms of surveillance.

7.3. ORIGINAL CONTRIBUTIONS OF THE RESEARCH

The research presented within this thesis has scholarly and applied significance for researchers across the social sciences, privacy advocates, and criminal justice policymakers. The debate about Australia’s surveillance legislation is entrenched within a framework of narrowly-conceived liberal privacy rights and the associated logics of preventive justice. Indeed, scholars have observed the limits of ‘privacy’ as a framework for challenging the legitimacy of surveillance powers (Coll, 2014; Stalder, 2002). Privacy advocates have also witnessed these limitations through successive expansions of surveillance powers under the Data Retention Act (2015) and Encryption Access Act (2018), where the ‘problem of going dark’ has been invoked to draw a moral equivalence between privacy protection and criminal evasion. This thesis clarifies how subjects contest this moral equivalence.

The research presented in this thesis clarifies how the advocacy of privacy protection, as a form of resistance to the Data Retention Act (2015) and Encryption Access Act (2018), was mobilised by the perceived moral arbitrariness of surveillance powers. It is argued that this property of moral arbitrariness is invoked to escape the conceptual limits of liberal privacy rights and the logics of preventive justice. That is, although privacy advocates have faith in the liberal-technocratic principles of necessity, proportionality, and accountability, they struggle to contest the application of “conservative harm arguments” justifying state interference (Harcourt, 1999, p. 139). Specifically, because interference with individual privacy rights can always be justified via the construction of ‘risk’ (Zedner, 2007a, p. 174), it has been possible for political elites to equate ‘privacy protection’ with the evasion of criminal investigations (Joh, 2013, p. 997; Marx, 2016, p. 168). Indeed, from this perspective, surveillance may be understood as protecting rights to non-interference (Simone, 2009, p. 8), while ‘privacy protection’ can enable criminal interference without detection. This clarifies why ‘privacy’ is often maligned as an “ally” of, or “not the antidote” to, surveillance (Coll, 2014, p. 1250; Stalder, 2002, p. 120). This suggests that the Australian Government’s successful use of the ‘going dark’ argument to justify surveillance powers, and the inability of privacy advocates to counter it, are partly due to such conceptual limitations.

However, the politics of privacy protection are more complex than a narrow framework of liberal privacy rights suggests. What matters is not the ‘privacy’ signifier itself, but its signified meaning as a moral and political value. As such, the signified meaning of ‘privacy’ may be considered ‘protected’ where citizens are free from morally arbitrary forms of surveillance enacted without democratic deliberation (Newell, 2018, p. 2; Hoye & Monaghan, 2018, p. 354). Additionally, via an analysis of the subjectivation strategies used to discursively position the subjects of the Data Retention Act (2015) and Encryption Access Act (2018), it is clear this ascription of moral arbitrariness to surveillance powers is predicated upon the positioning of subjects within relations of domination. Relatedly, privacy advocates construct a civic duty to advocate resistance to surveillance powers, thereby disrupting domination by cultivating citizens capable of privacy protection. This reconciles their competing commitments to democratic and technocratic sources of non-arbitrary power. This is how privacy advocates combine their faith in the application of the liberal-technocratic principles of necessity, proportionality, accountability, and harm, while seeking to avoid negating the authority of citizens to engage in self-government.

Ultimately, this thesis challenges the view that disagreements about the appropriate scope of surveillance powers may be resolved solely through a liberal-technocratic framework. Indeed, as rights are justified “not just in and of themselves, but in terms of the consequences of their existence” (Waldron, 2003, p. 208), the justifications for privacy rights cannot be articulated in isolation from both the ‘goods’ and ‘harms’ they enable. As such, the concept of ‘privacy protection’ simultaneously signifies the good of defending human rights and the harm of enabling criminals to evade detection. It is suggested this moral complexity is the reason privacy advocates contest the equivalence drawn within the ‘problem of going dark’ by ascribing moral arbitrariness to surveillance powers and articulating an associated civic duty to advocate strategies of resistance for disrupting relations of domination. As such, the thesis argues that privacy advocates and criminal justice policymakers will benefit from acknowledging that there are limits to applying a liberal framework for determining which ‘harms’ justify ‘reasonable’ interferences with a right to privacy, while also recognising that there are unavoidable tensions between technocracy and democracy as sources of authority for non-arbitrary surveillance laws.


7.4. AVENUES FOR FURTHER RESEARCH

The research presented within this thesis has examined the politics of privacy protection within the context of Australia’s national debate about digital surveillance legislation. The thesis provides rich and detailed qualitative insights into the psychosocial processes of signification, subjectivation, and identification as represented through discourse. The inherent limitations of qualitative research methods are documented within Chapter Three (Section 3.5). However, there are other limitations of the research that must be acknowledged. First, there are limitations associated with the participant sample, which was dominated by white males to the exclusion of more diverse experiences. This was partly a product of the demographic homogeneity of the Australian privacy movement, a problem explicitly addressed by participants during interviews (Chapter Six, Section 6.1.2.1). Additionally, the analysis is highly specific to Australia’s national debates about surveillance. As such, the research should be interpreted within the socio-political context of debate about the Data Retention Act (2015) and Encryption Access Act (2018), and associated privacy campaigns such as Citizens, Not Suspects (2014), Go Dark Against Data Retention (2015), and National Get a VPN Day (2017). These debates revolved around the veracity of the ‘problem of going dark’ and the moral equivalence drawn between ‘privacy protection’ and methods of evading criminal investigations. Finally, it is also important to acknowledge that the interview data were produced within a context of ongoing debate prior to the passage of the Encryption Access Act (2018). These characteristics limit the generalisability of the findings.

The results of the research outlined throughout, and the associated limitations noted above, are also useful for identifying avenues for further research. Overall, the thesis clarified how judgements about the moral arbitrariness of surveillance power influence the ascribed meaning and advocacy of privacy protection. This property of ‘moral arbitrariness’ is an aspect of social and political power that is often overlooked within both the empirical and theoretical traditions of surveillance studies. As sociologist Gary T. Marx (2016, p. 43) has observed, there is a tendency for social scientists to write “surveillance essays” devoid of meaningful analysis. Thus, rather than pursuing descriptive research merely documenting power dynamics, there is further potential in examining how the normative status of power influences surveillance and privacy practices (e.g. Hoye & Monaghan, 2018, pp. 343, 355; Marx, 2016, p. 276). Indeed, if analyses of surveillance power are to be useful in an applied sense, as guides to decision-making and civil society advocacy, social scientists should continue to unpack how such characteristics of surveillance power influence its status as either morally legitimate or arbitrary.

Relatedly, there is potential to examine how the construction of the normative status of surveillance power differs between more diverse groups of subjects (e.g. Gilliom, 2005), rather than retaining the exclusive focus of the current research on organised privacy advocates. As such, there is potential to examine the politics of privacy protection from an explicitly conflict-oriented perspective, as subjects can employ competing strategies of signification and subjectivation. Finally, there are opportunities for applying the post-structuralist framework used within this project to other criminal justice policy debates. For example, the observed characteristics of signifiers such as privacy – as relational and contested symbols – also apply to other concepts frequently articulated within public discourse. Concepts such as justice, liberty, security, law and order, legitimacy, and civil disobedience may be commonly articulated, yet signify different meanings for subjects seemingly talking past one another. A post-structuralist framework that understands the arbitrary relationship between signifiers and their signified meaning can help clarify the underlying political disagreements that entrench public debates about criminal justice practices. Overall, the thesis identifies avenues for further research examining the politics of privacy protection, the moral status of surveillance power, and the construction and contestation of meaning within criminal justice policy debates.


Reference List

Abbasi, A., & Chen, H. (2007). Affect intensity analysis of dark web forums. In

Intelligence and Security Informatics, 2007 IEEE. Accessed December 16,

2016. http://ieeexplore.ieee.org/abstract/document/4258712/.

Agur, C. (2013). Negotiated order: The fourth amendment, telephone surveillance,

and social interactions, 1878–1968. Information & Culture, 48(4), 419-447.

doi:10.7560/IC48402.

Ahmed, S. (2015). The ‘emotionalization of the ‘war on terror’: Counter-terrorism,

fear, risk, insecurity and helplessness. Criminology & Criminal Justice,

15(5), 545-560. doi:10.1177/1748895815572161.

Albrechtsen, E., & Hovden, J. (2010). Improving information security awareness and

behaviour through dialogue, participation and collective reflection. An

intervention study. Computers & Security, 29, 432-445.

doi:10.1016/j.cose.2009.12.005.

Albrechtslund, A. (2008). Online social networking as participatory surveillance.

First Monday, 13(3). Retrieved from

http://firstmonday.org/article/view/2142/1949.

Alfino, M., & Mayes, G.R. (2003). Reconstructing the right to privacy. Social

Theory and Practice 29(1), 1-18.

Allen, A. (2000). Privacy-as-data control: Conceptual, practical, and moral limits of

the paradigm.” Connecticut Law Review, 32, 861-875.

Allen, A. (2003). Why privacy isn’t everything. Lanham, MD: Rowman & Littlefield.

284

Allen, A. (2005). Privacy isn’t everything: Accountability as a personal and social

good. Information Ethics, 19, 398-416.

Allen, W.M., Coopman, S. J., Hart, J. L., & Walker, K. L. (2007). Workplace

surveillance and managing privacy boundaries. Management Communication

Quarterly, 21(2), 172-200. doi:10.1177/0893318907306033.

Aly, A., & Green, L. (2010). Fear, anxiety and the state of terror. Studies in Conflict

& Terrorism, 33(3), 268-281. doi:10.1080/10576100903555796.

Amoore, L., & De Goede, M. (2005). Governance, risk and dataveillance in the war

on terror. Crime, Law and Social Change, 43(2–3), 149-173.

doi:10.1007/s10611-005-1717-8.

Andrejevic, M. (2017). To preempt a thief. International Journal of Communication

11(2017), 879-896. Retrieved from https://ijoc.org/index.php/ijoc/

article/view/6308.

Angwin, J., Larson, J., Mattau, S., & Kirchner, L. (2016). Machine bias. Retrieved

from https://www.propublica.org/article/machine-bias-risk-assessments-in-

criminal-sentencing.

Atkinson, J. (2006). Analyzing resistance narratives at the North American anarchist

gathering: A method for analyzing social justice alternative media. Journal of

Communication Inquiry, 30(3), 251-272. doi:10.1177/0196859906287892.

Attorney-General’s Department (2016). Access to telecommunications data in civil

proceedings. Retrieved from: https://www.ag.gov.au/Consultations/Pages/

Access-to-telecommunications-data-in-civil-proceedings.aspx.

Australian Broadcasting Corporation (ABC) News. (2014). Sydney siege: Flag

displayed during Martin Place hostage crisis not same as that used by terrorist

285

group Islamic State. Retrieved from http://www.abc.net.au/news/2014-12-

15/sydney-siege-islamic-flag-explained/5968010.

Australian Broadcasting Corporation (ABC) News. (2015). Sydney Lindt café siege:

Tony Abbott says protecting the community will mean 'redrawing the line' on

individual rights, after review released. Retrieved from https://www.abc.net.

au/news/2015-02-22/sydney-siege-joint-review-released-visa-citizenship-

reforms/6184012.

Australian Broadcasting Corporation (ABC) News. (2017). When is 'not a backdoor'

just a backdoor? Australia's struggle with encryption. Retrieved from

https://www.abc.net.au/news/2017-07-14/encryption-laws-australia-does-

government-need-a-backdoor/8709654.

Australian Cyber Security Centre. (2016). Threat Report. Retrieved from

https://www.acsc.gov.au/publications/ACSC_Threat_Report_2016.pdf.

Australian Cybercrime Online Reporting Network. (2016). ACORN snap shot.

Retrieved from https://acorn.govcms.gov.au/sites/g/files/net1061/f/

acorn_snap_shot_-_jul16_-_sep16.pdf.

Australian Institute of Criminology. (2010). Research in practice: Covert and cyber

bullying. Retrieved from https://aic.gov.au/publications/rip/rip09.

Australian Law Reform Commission. (2008). For your information: Australian

privacy law and practice. ALRC Report 108. Retrieved from

https://www.alrc.gov.au/publications/

report-108.

286

Australian National University. (2016). Attitudes to national security: Balancing

safety and srivacy. ANUPoll July 2016. Retrieved from http://csrm.cass.anu.

edu.au/sites/default/files/docs/ANUpoll-22-Security_0.pdf.

Australian Privacy Foundation (2014). Submission to the Parliamentary Joint

Committee on Intelligence and Security Inquiry into the Telecommunications

(Interception and Access) Amendment (Data Retention) Bill 2014. Retrieved

from https://www.aph.gov.au/DocumentStore.ashx?id=84b594e2-ffc6-4eb2-

8f6b-5c6914c51b49&subId=302725.

Australian Privacy Foundation. (2017). Federal Court decision guts the Privacy Act.

Retrieved from https://privacy.org.au/2017/01/19/federal-court-decision-guts-

the-privacy-act/.

Australian Privacy Foundation. (2018, 15 August). The formation of the Australian

Privacy Foundation. Retrieved from https://privacy.org.au/about/

history/formation/.

Ayling, J. (2011). Pre-emptive strike: How Australia is tackling outlaw motorcycle

gangs. American Journal of Criminal Justice, 36(3), 250-264.

doi:10.1007/s12103-011-9105-7.

Baele, S.J., Brace, L., Coan, T.G. (2019). From ‘incel’ to ‘saint’: Analyzing the

violent worldview behind the 2018 Toronto attack. Terrorism and Political

Violence. Retrieved from https://doi.org/ 10.1080/09546553.2019.1638256.

Bailey, K.D. (2012). It’s complicated: Privacy and domestic violence. American

Criminal Law Review, 49, 1777-1813. Retrieved from

https://scholarship.kentlaw.iit.edu/fac_schol/39

287

Baldry, E., Brown, D., Brown, M., Cunneen, C., Schwartz, M., & Steel, A. (2011).

Imprisoning rationalities. Australian & New Zealand Journal of Criminology,

44(1), 24-40. doi:10.1177/0004865810393112.

Ball, K. (2010). Workplace surveillance: An overview. Labor History, 51(1), 87-106.

doi:10.1080/00236561003654776.

Banet-Weiser, S., & Miltner, K.M. (2016). “#MasculinitySoFragile: Culture,

structure, and networked misogyny.” Feminist Media Studies, 16(1),

171-174. doi:10.1080/14680777.2016.1120490.

Bannister, F. (2005). The panoptic state: Privacy, surveillance and the balance of

risk. Information Polity, 10(1/2), 65-78. doi:10.3233/IP-2005-0068.

Barnard-Wills, D. (2011). UK news media discourses of surveillance.

The Sociological Quarterly, 52(4), 548–567.

doi:10.1111/j.1533-8525.2011.01219.x.

Bauman, Z., Bigo, D., Esteves, P., Guild, E., Jabri, V., Lyon, D., & Walker, R.

(2014). After Snowden: Rethinking the impact of surveillance. International

Political Sociology 8(2), 121-144. doi:10.1111/ips.12048.

Beck, C. (2016). Web of resistance: Deleuzian digital space and hacktivism. Journal

for Cultural Research, 20(4), 334-349. doi:10.1080/14797585.2016.1168971.

Beck, U. (1992). Risk society: Towards a new modernity. New Delhi: Sage

Publications.

Bell, J. (1996). Assassination politics. Retrieved from https://cryptome.org/ap.htm.

Bennett, C.J. (2008). The privacy advocates: Resisting the spread of surveillance.

Cambridge, Massachusetts: MIT Press.

288

Bennett, C.J. (2011). Privacy advocacy from the inside and the outside: Implications

for the politics of personal data protection in networked societies. Journal of

Comparative Policy Analysis: Research and Practice, 13(2), 125-141.

doi:10.1080/13876988.2011.555996.

Benson, S. (2014). Soft laws and poor intelligence let loose Martin Place siege killer

Man Haron Monis. Retrieved from: https://www.dailytelegraph.com.au/

news/national/soft-laws-and-poor-intelligence-let-loose-martin-place-siege-

killer-man-haron-monis/news-story/f89ccfe8292c01d7fb2cc40a5f4296dd.

Berger, R. (2015). “Now I see it, now I don’t: Researcher’s position and reflexivity

in qualitative research.” Qualitative Research, 15(2), 219–234.

doi:0.1177/1468794112468475.

Bernard-Wills, D. (2016). Surveillance and identity: Discourse, subjectivity and the

state. Routledge: New York.

Bessant, J. (2012). Human rights, the law, cyber-security and democracy: after the

European convention. Australian Journal of Human Rights, 18(1), 1-26.

doi:10.1080/1323-238X.2012.11882096.

Bessant, J. (2015). Criminalizing the political in a digital age. Critical Criminology,

23(3), 329-348. doi:10.1007/s10612-014-9261-4.

Bessant, J. (2016). Democracy denied, youth participation and criminalizing digital

dissent. Journal of Youth Studies, 19(7), 921-937.

doi:10.1080/13676261.2015.1123235.

Bever, L. (2014, April 18). A white supremacist web site frequented by killers. The

Washington Post. Retrieved from https://www.washingtonpost.com/news/

289

morning-mix/wp/2014/04/18/posters-on-one-white-supremacist-site-

have-killed-almost-100-people-watchdog-says/

Beyer, J., & McKelvey, F. (2015). You are not welcome among us: Pirates and the

state. International Journal of Communication, 9, 890-908. Retrieved from

https://ijoc.org/index.php/ijoc/article/view/3759.

Bloustein, E.J. (1964). Privacy as an aspect of human dignity: An answer to Dean

Prosser. New York University Law Review, 39, 962-1007. Retrieved from

https://heinonline.org/HOL/P?h=hein.journals/nylr39&i=974.

Bock, M.A. (2016). Film the police! Cop-watching and its embodied narratives.

Journal of Communication, 66(1), 13-34. doi:10.1111/jcom.12204.

Bodó, B. (2014). Hacktivism1-2-3: How privacy enhancing technologies change the

face of anonymous hacktivism. Internet Policy Review, 3(4), 1-12.

doi:10.14763/2014.1.340.

Bogomolov, A., Lepri, B., Staiano, J., Oliver, N., Pianesi, F., & Pentland, A. (2014).

Once upon a crime: Towards crime prediction from demographics and mobile

data. In Proceedings of the 16th international conference on multimodal

interaction (pp. 427-434). Retrieved from http://arxiv.org/abs/1409.2983.

Bossler, A.M., Holt, T.J., & May, D.C. (2012). Predicting online harassment

victimization among a juvenile population. Youth & Society, 44(4), 500-523.

doi:10.1177/0044118X11407525.

Bowen, G.A. (2008). Naturalistic inquiry and the saturation concept: A research

note. Qualitative Research, 8(1), 137-152. doi:10.1177/1468794107085301.

290 boyd, d. (2011). Dear voyeur, meet Flâneur… Sincerely, social media. Surveillance

& Society 8(4), 505-507. Retrieved from https://ojs.library.queensu.ca/

index.php/surveillance-and-society/article/view/4187. boyd, d. (2012). The politics of ‘real names’. Communications of the ACM, 55(8),

29-31. doi:10.1145/2240236.2240247.

Braithwaite, J. (1995). Inequality and republican criminology. Retrieved from

http://johnbraithwaite.com/wp-content/uploads/2016/05/1995_Inequality-

and-Republican-Crim.pdf.

Braithwaite, J., & Pettit, P. (1990). Not just deserts: A republican theory of criminal

justice. Claredon: Oxford Press.

Broadhurst, R. (2016). Cybercrime in Australia. Retrieved from https://papers.ssrn.

com/sol3/papers.cfm?abstract_id=2865295.

Bronitt, S., & Stellios, J. (2005). Telecommunications interception in Australia:

Recent trends and regulatory prospects. Telecommunications Policy, 29(11),

875-888. doi:10.1016/j.telpol.2005.06.010.

Brown, A.P. (2010). Qualitative method and compromise in applied social research.

Qualitative Research, 10(2), 229-248. doi:10.1177/1468794109356743.

Brown, I., & Korff, D. (2009). Terrorism and the proportionality of internet

surveillance. European Journal of Criminology, 6(2), 119-34.

doi:10.1177/1477370808100541.

Browne, K. (2005). Snowball sampling: Using social networks to research non-

heterosexual women. International Journal of Social Research Methodology,

8(1), 47-60. doi:10.1080/1364557032000081663.

291

Bruen, A.A., & Forcinito, M.A. (2005). Cryptography, information theory, and

error-correction: A handbook for the 21st century.

Hoboken, NJ: John Wiley & Sons.

Bryan, J., Day-Vines, N.L., Griffin, D., & Moore-Thomas, C. (2012). The

disproportionality dilemma: Patterns of teacher referrals to school counsellors

for disruptive behaviour. Journal of Counselling and Development, 90(2),

177-190. doi:10.1111/j.1556-6676.2012.00023.x.

Bryman, A., Becker, S., & Sempik, J. (2008). Quality criteria for quantitative,

qualitative and mixed methods research: A view from social policy.

International Journal of Social Research Methodology, 11(4), 261-276.

doi:10.1080/13645570701401644.

Bullingham, L., & Vasconcelos, A.C. (2013). The presentation of self in the online

world: Goffman and the study of online identities. Journal of Information

Science, 39(1): 101-112. doi:10.1177/0165551512470051.

Burgess, M. (2015). Securing your data. Retrieved from:

https://exchange.telstra.com.au/securing-your-data/

Butler, D. (2005). A tort of invasion of privacy in Australia. Melbourne University

Law Review, 29(2), 339-389. Retrieved from https://search.informit.com.au

/fullText;dn=200511735;res=IELAPA

Butler, D. (2014). The dawn of the age of the drones: An Australian privacy law

perspective. University of Law Journal, 37(2), 434-470.

Retrieved from http://www.austlii.edu.au/

au/journals/UNSWLJ/2014/17.html.

292

Capurro, R. (2005). Privacy: An intercultural perspective. Ethics and Information

Technology 7(1), 37-47. doi:10.1007/s10676-005-4407-4.

Carrascal, J.P., Riederer, C., Erramilli, V., Cherubini, M., & de Oliveira, R. (2013).

Your browsing behaviour for a Big Mac: Economics of personal information

online. In Proceedings of the 22nd international conference on World Wide

Web (pp. 189-200). Retrieved from http://jpcarrascal.com/docs/publications/

WWW2013-Browsing_behavior_big_mac.pdf.

Casemajor, N., Couture, S., Delfin, M., Goerzen, M., & Delfanti, A. (2015). Non-

participation in digital media: Toward a framework of mediated political

action. Media, Culture & Society, 37(6), 850-866.

doi:10.1177/0163443715584098.

Cesaroni, C., Downing, S., & Alvi, S. (2012). Bullying enters the 21st century?

Turning a critical eye to cyber-bullying research. Youth Justice, 12(3), 199-

211. doi:10.1177/1473225412459837.

Christofides, E., Muise, A., & Desmarais, S. (2009). Information disclosure and

control on Facebook: Are they two sides of the same coin or two different

processes? CyberPsychology & Behavior, 12(3), 341–45.

doi:10.1089/cpb.2008.0226.

Clarke, R. (1987). Just another piece of plastic for your wallet: The ‘Australia Card’

scheme. Prometheus, 5(1), 29-45. doi:10.1080/08109028708629411.

Clarke, R. (2015). Data retention as mass surveillance: The need for an evaluative

framework. International Data Privacy Law, 5(2), 121-132.

doi:10.1093/idpl/ipu036.

293

CNN. (2015). Paris terror attacks fast facts. Retrieved from https://edition.cnn.com

/2015/12/08/europe/2015-paris-terror-attacks-fast-facts/index.html

Coleman, E.G., & Golub, A. (2008). Hacker practice: Moral genres and the cultural

articulation of liberalism. Anthropological Theory, 8(3), 255-277.

doi:10.1177/1463499608093814.

Coll, S. (2014). Power, knowledge, and the subjects of privacy: Understanding

privacy as the ally of surveillance. Information, Communication & Society,

17(10), 1250-1263. doi:10.1080/1369118X.2014.918636.

Commonwealth of Australia, Department of Prime Minister and Cabinet. (2011).

Issues paper: A Commonwealth statutory cause of action for serious invasion

of privacy. Retrieved from https://www.ag.gov.au/RightsAndProtections/

Privacy/Documents/Statutorycauseofaction-SeriousInvasionofprivacy-

Issuespaper.pdf.

Crawford, A. (2009). Governing through anti-social behaviour: Regulatory

challenges to criminal justice. British Journal of Criminology, 49(6),

810-831. doi:10.1093/bjc/azp041.

Curran, G., & Gibson, M. (2013). WikiLeaks, anarchism and technologies of dissent.

Antipode, 45(2), 294-314. doi:10.1111/j.1467-8330.2012.01009.x.

Dalgaard-Nielsen, A., Laisen, J., & Wandorf, C. (2016). Visible counterterrorism

measures in urban spaces – Fear-inducing or not? Terrorism and Political

Violence, 28(4), 692-712. doi:10.1080/09546553.2014.930027.

Davies, D. (1997). A brief history of cryptography. Information Security Technical

Report, 2(2), 14-17. doi:10.1016/S1363-4127(97)81323-4.

294 de Bruin, B. (2010). The liberal value of privacy. Law and Philosophy, 29(5),

505-534. doi:10.1007/s10982-010-9067-9. de Fina, A. (2009). Narratives in interview – the case of accounts: For an

interactional approach to narrative genres. Narrative Inquiry, 19(2), 233-258.

doi:10.1075/ni.19.2.03def. de Saussure, F. (1959). Course in general linguistics (W. Baskin, Trans.). New York,

NY: Philosophical Library. (Original work published 1916). de Zwart, M, Lindsay, D., Henderson, M., & Phillips, M. (2011). Teenagers, legal

risks and social networking sites. Retrieved from http://newmediaresearch.

educ.monash.edu.au/lnm/wpcontent/uploads/2015/05/

SNSandRisks_REPORT_0.pdf. de Zwart, M., Humphreys, S., & Van Dissel, B. (2014). Surveillance, big data and

democracy: Lessons for Australia from the US and UK. University of New

South Wales Law Journal, 37(2), 713-747. Retrieved from http://www.

unswlawjournal.unsw.edu.au/wp-content/uploads/2017/09/37-2-3.pdf

DeCew, J.W. (2000). The priority of privacy for medical information. Social

Philosophy and Policy, 17(2), 213-234. doi:10.1017/S026505250000217X.

Deibert, R.J., & Rohozinski, R. (2010). Risking security: Policies and paradoxes of

cyberspace security. International Political Sociology, 4(1), 15–32.

doi:10.1111/j.1749-5687.2009.00088.x.

Delfs, H., & Knebl, H. (2015). Introduction to cryptography. Berlin, Heidelberg:

Springer. doi:10.1007/978-3-662-47974-2.

295

Dempster, Q. (2015). Data retention and the end of Australians' digital privacy.

Retrieved from https://www.smh.com.au/technology/data-retention-and-the-

end-of-australians-digital-privacy-20150827-gj96kq.html

Dencik, L., Hintz, A., & Cable, J. (2016). Towards data justice? The ambiguity of

anti-surveillance resistance in political activism. Big Data & Society, 3(2),

1-12. doi:10.1177/2053951716679678.

Desai, A. C. (2007). Wiretapping before the wires: The pose office and the birth of

communications privacy. Stanford Law Review, 60(2), 553-594. Retrieved

from https://www.jstor.org/stable/40040416.

Digital Rights Watch. (2017). National get a VPN day. Retrieved from

https://digitalrightswatch.org.au/2017/04/12/get-a-vpn/.

Dinev, T., Hart, P., & Mullen, M.R. (2008). Internet privacy concerns and beliefs

about government surveillance – An empirical investigation. Strategic

Information Systems 17(3), 214-233. doi:10.1016/j.jsis.2007.09.002.

Dolliver, D.S. (2015). Evaluating drug trafficking on the Tor network: Silk Road 2,

the sequel. International Journal of Drug Policy, 26(11), 1113-1123.

doi:10.1016/j.drugpo.2015.01.008.

Dragiewicz, M. (2011). Equality with a vengeance: Men’s rights groups, battered

women, and antifeminist backlash. Boston, MA: Northeastern University

Press.

Dressel, J., & Farid, H. (2018). The accuracy, fairness, and limits of predicting

recidivism. Science Advances, 4(1), 1-5. doi:10.1126/sciadv.aao5580.

Dupont, B. (2008). Hacking the panopticon: Distributed online surveillance and

resistance. Sociology of Crime, Law and Deviance, 10, 257–278. Retrieved

296

from https://www.taylorfrancis.com/books/9781315243566/

chapters/10.4324/9781315243566-15

Dupont, B. (2017). Bots, cops, and corporations: On the limits of enforcement and

the promise of polycentric regulation as a way to control large-scale

cybercrime. Crime, Law and Social Change, 67(1), 97-116.

doi:10.1007/s10611-016-9649-z.

Dzindolet, M.T. (2003). The role of trust in automation reliance. International

Journal of Human-Computer Studies, 58(6), 697-718. doi:10.1016/S1071-

5819(03)00038-7.

Edmond, G., & San Roque, M. (2013). Justicia’s gaze: Surveillance, evidence and

the criminal trial. Surveillance & Society, 11(3), 252-271.

doi:10.24908/ss.v11i3.4556.

Electronic Frontiers Australia. (2014). Submission to the Parliamentary Joint

Committee on Intelligence and Security Inquiry into the Telecommunications

(Interception and Access) Amendment (Data Retention) Bill 2014. Retrieved

from https://www.aph.gov.au/DocumentStore.ashx?id=a633c2be-a32a-4923-

9f1e-5810b83a4a71&subId=302748.

Electronic Frontiers Australia. (2015). Go dark against data retention. Retrieved from

https://www.efa.org.au/2015/03/25/go-dark/.

Electronic Frontiers Australia. (2017a). When is ‘not a backdoor’ just a backdoor?

Australia’s struggle with encryption. Retrieved from

https://www.efa.org.au/2017/06/14/encryption-backdoors/.

297

Electronic Frontiers Australia. (2017b). Mobiles, metadata and the meaning of

‘personal information. Retrieved from

https://www.efa.org.au/2017/01/24/meaning-of-personal-information/.

Elo, S., & Kyngäs, H. (2008). The qualitative content analysis process. Journal of

Advanced Nursing, 62(1), 107-115. doi:10.1111/j.1365-2648.2007.04569.x.

Ensafi, R, Winter, P., Mueen, A., & Crandall, J.R. (2015). Analysing the great

firewall of China over space and time. Proceedings on Privacy Enhancing

Technologies, 2015(1), 61-76. doi:10.1515/popets-2015-0005.

Epstein, R. A. (2000). Deconstructing privacy: And putting it back together again.

Social Philosophy and Policy, 17(2), 1-24. doi:10.1017/S0265052500002089.

Erdin, E., Zachor, C., & Gunes, M.H. (2015). How to find hidden users: A survey of

attacks on anonymity networks. IEEE Communications Surveys & Tutorials,

17(4), 2296-2316. doi:10.1109/COMST.2015.2453434.

Ericson, R., & Haggerty, K. (1997). Policing the Risk Society. Toronto, Ontario:

University of Toronto Press.

Etzioni, A. (2000). The limits of privacy. New York, NY: Basic Books.

Etzioni, A. (2015). Privacy in a cyber age: Policy and practice.

New York, NY: Palgrave MacMillan.

Evershed, N. (2017). Australia's plan to force tech giants to give up encrypted

messages may not add up. Retrieved from https://www.theguardian.com

/technology/2017/jul/14/forcing-facebook-google-to-give-police-access-to-

encrypted-messages-doesnt-add-up.

298

Explanatory Memorandum, Telecommunications and Other Legislation Amendment

(Assistance and Access) Act. (2018). Retrieved from https://parlinfo.aph.gov.

au/parlInfo/download/legislation/ems/r6195_ems_1139bfde-17f3-4538-b2b2-

5875f5881239/upload_pdf/685255.pdf;fileType=application%2Fpdf.

Fereday, J., & Muir-Cochrane, E. (2006). Demonstrating rigor using thematic

analysis: A hybrid approach of inductive and deductive coding and theme

development. International Journal of Qualitative Methods, 5(1), 80-92.

doi:10.1177/160940690600500107.

Ferguson, N., Schneier, B., & Kohno, T. (2010). Cryptography engineering: Design

principles and practical applications. Indianapolis, IN: Wiley.

Fernandez, L.A., & Huey, L. (2009). Is resistance futile? Thoughts on resisting

surveillance. Surveillance & Society, 6(3), 199-202.

doi:10.24908/ss.v6i3.3280.

Fernback, J. (2013). Sousveillance: Communities of resistance to the surveillance

environment. Telematics and Informatics, 30(1), 11–21.

doi:10.1016/j.tele.2012.03.003.

Finnane, M., & Donkin, S. (2013). Fighting terror with law? Some other genealogies

of pre-emption. International Journal for Crime and Justice, 2(1), 3-17.

Retrieved from https://www.crimejusticejournal.com/article/view/679.

Flores, A.W., Lowenkamp, C.T., & Bechtel, K. (2017). False positives, false

negatives, and false analyses: A rejoinder to ‘Machine bias: There’s software

used across the country to predict future criminals. And it’s biased against

blacks.’ Retrieved from https://www.uscourts.gov/federal-probation-

journal/2016/09/false-positives-false-negatives-and-false-analyses-rejoinder

299

Floridi, L. (2016). On human dignity as a foundation for the right to privacy.

Philosophy & Technology, 29(4), 307-312. doi:10.1007/s13347-016-0220-8.

Foellmi, M. C., Rosenfeld, B., & Galietta, M. (2016). Assessing risk for recidivism

in individuals convicted of stalking offenses: Predictive validity of the

guidelines for stalking assessment and management. Criminal Justice and

Behavior, 43(5), 600-616. doi:10.1177/0093854815610612.

Foucault, M. (1977). Discipline and punish: The birth of the prison.

London, UK: Penguin Books.

Foucault, M. (1990). The history of sexuality: An introduction. London, UK: Penguin

Books.

Frey, R.G. (2000). Privacy, control, and talk of rights. Social Philosophy and Policy,

17(2), 45-67. doi:10.1017/S0265052500002107.

Fried, C. (1968). Privacy. Yale Law Journal, 77(3): 475–493. Retrieved from

https://digitalcommons.law.yale.edu/ylj/vol77/iss3/3.

Froomkin, A.M. (1995). The metaphor is the key: Cryptography, the clipper chip,

and the constitution. University of Pennsylvania Law Review, 143(3), 709-

897. doi:10.2307/3312529.

Frost, N. (2009). Do you know what I mean? The use of a pluralistic narrative

analysis approach in the interpretation of an interview. Qualitative Research,

9(1), 9–29. doi:10.1177/1468794108094867.

Fuchs, C. (2013). Societal and ideological impacts of deep packet inspection internet

surveillance. Information, Communication & Society, 16(8), 1328–1359.

doi:10.1080/1369118X.2013.770544.

300

Fuhrmann, C. (2012). Policing the roman empire: Soldiers, administration, and

public order. New York, NY: Oxford University Press.

Fusch, P I., & Ness, L.R. (2015). Are we there yet? Data saturation in qualitative

research. The Qualitative Report, 20(9), 1408-1416. Retrieved from

http://www.nova.edu/ssss/QR/QR20/9/fusch1.pdf

Gabbatt, A. (2014, February 12). Protesters rally for 'the day we fight back' against

mass surveillance. The Guardian. Retrieved from https://www.theguardian.

com/world/2014/feb/11/day-fight-back-protest-nsa-mass-surveillance.

Garcia, B.E., & Geva, N. (2016). Security versus liberty in the context of

counterterrorism: An experimental approach. Terrorism and Political

Violence, 28(1), 30–48. doi:10.1080/09546553.2013.878704.

Garland, D. (1996). Limits of the sovereign state: Strategies of crime control in

contemporary society. British Journal of Criminology 36(4), 445-471.

doi:10.1093/oxfordjournals.bjc.a014105.

Gehl, R.W. (2016). Power/freedom on the dark web: A digital ethnography of the

Dark Web Social Network. New Media & Society, 18(7), 1219-1235.

doi:10.1177/1461444814554900

Gellert, R. (2017). On risk, balancing, and data protection: A response to van der

Sloot. European Data Protection Law Review, 3(2), 180-186.

doi:10.21552/edpl/2017/2/7

Gelo, O., Braakmann, D., & Benetka, G. (2008). Quantitative and qualitative

research: Beyond the debate. Integrative Psychological and Behavioral

Science, 42(3): 266-290. doi:10.1007/s12124-008-9078-3.

301

Gemignani, M. (2014). Memory, remembering, and oblivion in active narrative

interviewing. Qualitative Inquiry 20(2): 127–135.

doi:10.1177/1077800413510271.

GetUp! (2014). Citizens, not suspects. Retrieved from: https://www.getup.org.au/

campaigns/digital-freedom-and-privacy/mandatory-data-retention-getup--

2/citizens-not-suspects.

GetUp! (2015). Go Dark Against Data Retention. Retrieved from https://www.getup.

org.au/campaigns/digital-freedom-and-privacy/go-dark-against-data-

retention/go-dark-against-data-retention.

Gilley, B. (2017). Technocracy and democracy as spheres of justice in public policy.

Policy Sciences, 50(9), 9-22. doi:10.1007/s11077-016-9260-2.

Gilliom, J. (2005). Resisting surveillance. Social Text, 83(2), 71-83. Retrieved from

https://read.dukeupress.edu/social-text/article-pdf/23/2 (83)/71/513555/st83-

06_gilliom.pdf

Goffman, E. (1959). The presentation of self in everyday life. New York, NY:

Anchor Books.

Goggin, G. Vromen, A., Weatherall, K., Martin, F., Webb, A., Sunman, L., & Bailo,

F. (2017). Digital rights in Australia. Retrieved from

https://ses.library.usyd.edu.au/handle/2123/17587.

Goldberg, I. (2007). Privacy enhancing technologies for the internet III: Ten years

later. In A. Acquistie, S. Gritzalis, C. Lambrinoudakis, & S. di Vimercati

(Eds.), Digital Privacy: Theory, Technologies, and Practices (pp. 3-18).

Boca Raton, FL: Auerbach Publications.

302

Gonzales, A. (2016). The contemporary US digital divide: From initial access to

technology Maintenance. Information, Communication & Society, 19(2):

234–248. doi:10.1080/1369118X.2015.1050438.

Goode, E. (2016). Deviant behavior (11th ed.). New York, NY: Routledge.

Goold, B., Loader, I., & Thumala, A. (2013). The banality of security: The curious

case of surveillance cameras. British Journal of Criminology, 53(6), 977-996.

doi:10.1093/bjc/azt044.

Greenberg, J., & Hier, S. (2009). CCTV surveillance and the poverty of media

discourse: A content analysis of Canadian newspaper coverage.

Canadian Journal of Communications, 34(3), 461-486.

doi:10.22230/cjc.2009v34n3a2200.

Greenleaf, G., & Nolan, J. (1986). The deceptive history of the Australia Card.

The Australian Quarterly, 58(4), 407-425. Retrieved from

https://www.jstor.org/stable/20635401.

Greenwood, R. M. (2008). Intersectional political consciousness: Appreciation for

intragroup differences and solidarity in diverse groups. Psychology of Women

Quarterly, 32(1), 36-47. doi:10.1111/j.1471-6402.2007.00405.x.

Guest, G, Bruce, A., & Johnson, L. (2006). How many interviews are enough? An

experiment with data saturation and variability. Field Methods, 18(1), 59-82.

doi:10.1177/1525822X05279903.

Guillemin, M., & Gillam, L. (2004). Ethics, reflexivity, and ‘ethically important

moments’ in research. Qualitative Inquiry, 10(2), 261-280.

doi:10.1177/1077800403262360.

303

Gustafson, K. (2009). The criminalisation of poverty. The Journal of Criminal Law

& Criminology, 99(3), 643-716. Retrieved from https://www-jstor-

org.ezp01.library.qut.edu.au/stable/20685055.

Guzik, K. (2009). Discrimination by design: Predictive data mining as security

practice in the United States’ ‘war on terrorism’. Surveillance & Society,

7(1), 3-20. doi:10.24908/ss.v7i1.3304.

Habermas, J. (2010). The concept of human dignity and the realistic utopia of human

rights. Metaphilosophy, 41(4), 464-480.

doi:10.1111/j.1467-9973.2010.01648.x.

Haggerty, K. (2009). Methodology as a knife fight: The process, politics and paradox

of evaluating surveillance. Critical Criminology, 17(4), 277-291.

doi:10.1007/s10612-009-9083-y.

Haggerty, K., & Gazso, A. (2005). Seeing beyond the ruins: Surveillance as a

response to terrorist threats. Canadian Journal of Sociology, 30(2), 169-187.

doi:10.2307/4146129.

Haggerty, K.D. (2004). Technology and crime policy: Reply to Michael Jacobson.

Theoretical Criminology, 8(4), 491-497. doi:10.1177/1362480604046661.

Haggery, K., & Samatas, M. (2010). Surveillance and Democracy.

New York, NY: Routledge.

Hagins, Z.R. (2013). Fashioning the ‘born criminal’ on the beat: Juridical

photography and the police municipale in Fin-de-Siècle Paris. Modern &

Contemporary France, 21(3), 281-296. doi:10.1080/09639489.2013.781143.

Hallborg, R.B. (1986). Principles of liberty and the right to privacy. Law and

Philosophy, 5(2), 175-218. doi:10.2307/3504688.

304

Hampson, N. (2012). Hacktivism, anonymous, and a new breed of protest in a

networked world. Boston College International and Comparative Law

Review, 35(6), 511-542. Retrieved from https://heinonline.org/

HOL/P?h=hein.journals/bcic35&i=515.

Harbisher, B. (2016). The Million Mask March: Language, legitimacy, and dissent.

Critical Discourse Studies, 13(3), 294-309.

doi:10.1080/17405904.2016.1141696.

Harcourt, B. (1999). The collapse of the harm principle. Journal of Criminal Law

and Criminology, 90(1), 109-194. doi:10.2307/1144164.

Harcourt, B. (2013). The collapse of the harm principle redux: On same-sex

marriage, the Supreme Court’s opinion in United States v. Windsor, John

Stuart Mill’s essay on Liberty (1859), and H.L.A. Hart’s modern harm

principle. University of Chicago Public Law Working Paper No. 437.

Retrieved from http://www.law.uchicago.edu/

academics/publiclaw/index.html.

Hegemann, H., & Kahl, M. (2017). (Re)politicizing security? The legitimation and

contestation of mass surveillance after Snowden. World Political Science,

13(1), 21-56. doi:10.1515/wps-2017-0002.

Hempel, L., & Topfer, E. (2009). The surveillance consensus: Reviewing the politics

of CCTV in three European countries. European Journal of Criminology,

6(2), 157-177. doi:10.1177/1477370808100544.

Henman, P., & Marston, G. (2008). The social division of welfare surveillance.

Journal of Social Policy, 37(2), 187-205. doi:10.1017/S0047279407001705.

305

Henry, N., & Powell, A. (2018). Technology-facilitated sexual violence: A literature

review of empirical research. Trauma, Violence, & Abuse, 19(2), 195-208.

doi:10.1177/1524838016650189.

Hildebrandt, M. (2013). Balance or trade-off? Online security technologies and

fundamental rights. Philosophy & Technology, 26(4), 357-379.

doi:10.1007/s13347-013-0104-0.

Hill, J., & Marion, N. (2016). Presidential rhetoric on cybercrime: Links to

terrorism? Criminal Justice Studies, 29(2), 163-177.

doi:10.1080/1478601X.2016.1170279.

Hin, S., Conde, D.A., & Lenart, A. (2016). New light on Roman census papyri

through semi-automated record linkage. Historical Methods: A Journal of

Quantitative and Interdisciplinary History, 49(1), 50-65.

doi:10.1080/01615440.2015.1071226.

Hintz, A., Denick, L., & Wahl-Jorgensen, K. (2017). Digital citizenship and

surveillance society: Introduction. International Journal of Communication,

11(2017), 731-739. Retrieved from https://ijoc.org/index.php/ijoc/article

/view/5521/1929.

Hobbs, H., Pillai, S., & Williams, G. (2018). The disqualification of dual citizens

from Parliament: Three problems and a solution. Alternative Law Journal,

43(2), 73-80. doi:10.1177/1037969X18777910.

Hogg, R. 2013a. Punishment and the people: Rescuing populism from its critics. In

K. Carrington, M. Ball, E. O’Brien, & J. Tauri, Crime, Justice and Social

Democracy: International Perspectives. Hampshire, UK: Palgrave.

306

Hogg, R. 2013b. Populism, law and order and the crimes of the 1%. International

Journal for Crime and Justice, 2(1), 113-131. Retrieved from

https://www.crimejusticejournal.com/article/view/686.

Hope, A. (2005). Panopticism, play and the resistance of surveillance: Case studies

of the observation of student internet use in UK schools. British Journal of

Sociology of Education, 26(3), 359–373. doi:10.1080/01425690500128890.

Hoye, J.M., & Monaghan, J. (2018). Surveillance, freedom and the republic.

European Journal of Political Theory, 17(3), 343-363.

doi:10.1177/1474885115608783.

Huey, L. (2009). A social movement for privacy/ against surveillance: Some

difficulties in engendering mass resistance in a land of Twitter and tweets.

Case Western Reserve Journal of International Law, 42(3), 699-709.

Retrieved from https://heinonline.org/HOL/P?h=

hein.journals/cwrint42&i=733.

Huey, L., Walby, K., & Doyle, A. (2006). Cop watching in the downtown eastside:

Exploring the use of (counter)surveillance as a tool of resistance. In T.

Monahan, Surveillance and Security: Technological Politics and Power in

Everyday Life (pp. 149-165). New York, NY: Routledge.

Hughes, G. (1996). Communitarianism and law and order. Critical Social Policy,

16(49), 17-41. doi:10.1177/026101839601604902.

Humphreys, L. (2011). Who’s watching whom? A study of interactive technology

and surveillance. Journal of Communication, 61(4), 575–95.

doi:10.1111/j.1460-2466.2011.01570.x.

307

Introna, L.D., & Gibbons, A. (2009). Networks and resistance: Investigating online

advocacy networks as a modality for resisting state surveillance. Surveillance

& Society, 6(3), 234-258. doi:10.24908/ss.v6i3.3283.

Jackson, L., Zhao, Y., Kolenic, A., Fitzgerald, H.E., Harold, R., & von Eye, A.

(2008). Race, gender, and information technology use: The new digital

divide. Cyberpsychology & Behaviour, 11(4), 437-442.

doi:10.1089/cpb.2007.0157.

Jacobs, B.A. (2010). Deterrence and deterrability. Criminology, 48(2): 417-441.

doi:10.1111/j.1745-9125.2010.00191.x.

Jäger, J. (2001). Photography: A means of surveillance? Judicial photography, 1850

to 1900. Crime, Histoire & Sociétés, 5(1), 27-51. doi:10.4000/chs.1056.

Jain, A.K., Ross, A., & Pankanti, S. (2006). Biometrics: A tool for information

security. IEEE Transactions on Information Forensics and Security,

1(2): 125-143. doi:10.1109/TIFS.2006.873653.

Jardine, E. (2018). Tor, what is it good for? Political repression and the use of online

anonymity granting technologies. New Media & Society, 20(2), 435-452.

doi:10.1177/1461444816639976.

Jeffries, F. (2011). Saying something: The location of social movements in the

surveillance society. Social Movement Studies, 10(2), 175–190.

doi:10.1080/14742837.2011.562362.

Joh, E.E. (2013). Privacy protests: Surveillance evasion and fourth amendment

suspicion. Arizona Law Review, 55(4), 997-1029. Retrieved from

https://heinonline.org/HOL/P?h=hein.journals/arz55&i=1015.

308

Johansson, A., & Vinthagen, S. (2014). Dimensions of everyday resistance: An

analytical framework. Critical Sociology, 42(3), 417-435.

doi:10.1177/0896920514524604.

Johnson, J. L. (1989). Privacy and judgement of others. The Journal of Value

Inquiry, 2, 157-168. Retrieved from

https://link.springer.com/content/pdf/10.1007/BF00137284.pdf.

Johnson, R. (2017). Everything that went down at Malcolm Turnbull's encryption

law announcement. Retrieved from

https://www.gizmodo.com.au/2017/07/everything-that-went-down-at-

malcolm-turnbulls-encryption-law-announcement/

Joslyn, M.R., & Haider-Markel, D.P. (2007). Sociotropic concerns and support for

counterterrorism policies. Social Science Quarterly, 88(2), 306-319.

Retrieved from https://www.jstor.org/stable/42956297.

Kahn, D. (1984). Cryptology and the origins of spread spectrum: Engineers during

World War II developed an unbreakable scrambler to guarantee secure

communications between allied leaders; actress Hedy Lamarr played a role in

the technology.

IEEE Spectrum, 21(9), 70-80. doi:10.1109/MSPEC.1984.6370466.

Kant, I. (1785). Groundwork for the metaphysics of morals (Trans. J. Bennett, 2005).

Retrieved from http://www.earlymoderntexts.com/assets/pdfs/kant1785.pdf.

Karanasiou, A.P. (2014). The changing face of protests in the digital age: On

occupying cyberspace and Distributed-Denial-of-Services (DDoS) attacks.

International

309

Review of Law, Computers & Technology, 28(1), 98–113.

doi:10.1080/13600869.2014.870638.

Karlstrøm, H. (2014). Do libertarians dream of electric coins? The material and

social embeddedness of Bitcoin. Scandinavian Journal of Social Theory,

15(1), 23-36. doi:10.1080/1600910X.2013.870083.

Karp, P. (2019, 8 October). What does ’s US trip mean for privacy and

encryption? The Guardian. Retrieved from https://www.theguardian.com/

technology/2019/oct/08/what-does-peter-duttons-us-trip-mean-for-

encryption-and-privacy.

Kearon, T. (2012). Surveillance technologies and the crises of confidence in

regulatory agencies. Criminology & Criminal Justice, 13(4), 415-430.

doi:10.1177/1748895812454747.

Kelling, G.L., & Wilson, J.Q. (1982). Broken windows: The police and

neighbourhood safety. Retrieved from https://www.theatlantic.com/

magazine/archive/1982/03/broken-windows/304465/.

Kellner, D. (2007). Bushspeak and the politics of lying: Presidential rhetoric in the

‘War on Terror’. Presidential Studies Quarterly, 37(4), 622-645.

doi:10.1111/j.1741-5705.2007.02617.x

Kininmonth, J., Thompson, N., McGill, T., & Bunn, A. (2018). Privacy concerns and

acceptance of government surveillance in Australia. Australasian Conference

on Information System, Sydney, Australia. Retrieved from

http://www.acis2018.org/conference-program/

310

Kokolakis, S. 2017. Privacy attitudes and privacy behaviour: A review of current

research on the privacy paradox phenomenon. Computers & Security, 64,

122-134. doi:10.1016/j.cose.2015.07.002.

Kopkin, M.R., Brodsky, S.L., & DeMatteo, D. (2017). Risk assessment in sentencing

decisions: A remedy to mass incarceration? Journal of Aggression, Conflict

and Peace Research, 9(2), 155-164. doi:10.1108/JACPR-06-2016-0232.

Koskela, H. (2011). Hijackers and humble servants: Individuals as camwitnesses in

contemporary controlwork. Theoretical Criminology, 15(3), 269–282.

doi:10.1177/1362480610396646.

Krueger, B.S. (2005). Government surveillance and political participation on the

internet. Social Science Computer Review, 23(4), 439-452.

doi:10.1177/0894439305278871.

Kupchik, A., & Monahan, T. (2006). The new American school: Preparation for

post‐industrial discipline. British Journal of Sociology of Education,

27(5), 617-631. doi:10.1080/01425690600958816.

Kupfer, J. (1987). Privacy, autonomy, and self-concept. American Philosophical

Quarterly, 24(1), 81-89. Retrieved from

https://www.jstor.org/stable/20014176.

Kurki, M. (2011). Democracy through technocracy? Reflections on technocratic

assumptions in EU democracy promotion discourse. Journal of Intervention

and Statebuilding, 5(2), 211-234. doi:10.1080/17502977.2011.566482.

Laas-Mikko, K., & Sutrop, M. (2012). How do violations of privacy and moral

autonomy threaten the basis of our democracy? Trames: Journal of the

Humanities and Social Sciences, 16(4), 369-381. doi:10.3176/tr.2012.4.05.

311

Laclau, E. (2005). On populist reason. London, UK: Verso.

Laclau, E. & Mouffe, C. (1985). Hegemony and social strategy: Towards a radical

democratic politics (2nd ed.). London, UK: Verso.

Lanier, M.M., & Cooper, A.T. (2016). From papyrus to cyber: How technology has

directed law enforcement policy and practice. Criminal Justice Studies, 29(2),

92-104. doi:10.1080/1478601X.2016.1170280.

Larsson, S. (2016). A first line of defence? Vigilant surveillance, participatory

policing, and the reporting of “suspicious” activity. Surveillance & Society,

15(1), 94-107. doi:10.24908/ss.v15i1.5342.

Lashmar, P. (2017). No more sources? Journalism Practice, 11(6), 665-688.

doi:10.1080/17512786.2016.1179587.

Lauer, J. (2012). Surveillance history and the history of new media: An evidential

paradigm. New Media & Society, 14(4), 566-582.

doi:10.1177/1461444811420986.

Lauzon, E. (1998). The Philip Zimmerman investigation: The start of the fall of

export restrictions on encryption software under First Amendment free

speech issues. Syracuse Law Review, 48, 1307-1364. Retrieved from

https://heinonline.org/HOL/P?h=hein.journals/syrlr48&i=1331

Lee, A., & Cook, P.S. (2015). The conditions of exposure and immediacy: Internet

surveillance and Generation Y. Journal of Sociology, 51(3), 674-688.

doi:10.1177/1440783314522870.

Lee, M. (2001). The genesis of 'fear of crime'. Theoretical Criminology, 5(4),

467-486. doi:10.1177/1362480601005004004.

312

Lee, M., & McGovern, A. (2016). Logics of risk: police communications in an age of

uncertainty. Journal of Risk Research, 19(10), 1291-1302.

doi:10.1080/13669877.2015.1115423.

Lee, M., Taylor, E., & Willis, M. (2019). Being held to account: Detainees'

perceptions of police body-worn cameras. Australian and New Zealand

Journal of Criminology, 52(2), 174-192. doi:10.1177/0004865818781913.

Leistert, O. (2012). Resistance against cyber-surveillance within social movements

and how surveillance adapts. Surveillance & Society, 9(4), 441-456.

doi:10.24908/ss.v9i4.4345.

Lett, D., Hier, S., & Walby, K. (2012). Policy legitimacy, rhetorical politics, and the

evaluation of city-street video surveillance monitoring programs in Canada.

Canadian Review of Sociology, 49(4), 328-349.

doi:10.1111/j.1755-618X.2012.01298.x.

Levin, S.I. (1998). Who are we protecting: A critical evaluation of United States

encryption technology export controls. Law & Policy in International

Business, 30(3), 529-552. Retrieved from https://heinonline.org/HOL

/P?h=hein.journals/geojintl30&i=539.

Levy, S. (1984). Hackers: Heroes of the computer revolution. New York, NY: Delta.

Lewis, C.W. (2005). The clash between security and liberty in the US response to

terror. Public Administration Review, 65(1), 18-30.

Li, B., Erdin, E., Gunes, M.H., Bebis, G., & Shipley, T. (2013). An overview of

anonymity technology usage. Computer Communications, 36(12),

1269-1283. doi:10.1016/j.comcom.2013.04.009.

313

Lingel, J., & boyd, d. (2013). Keep it secret, keep it safe: Information poverty,

information norms, and stigma. Journal of the American Society for

Information Science and Technology, 64(5), 981-991.

doi:10.1002/asi.22800.

Locke, J. (1689). Two Treatise of Government. London, UK: Awnsham Churchill.

Loftus, B., & Goold, B. (2012). Covert surveillance and the invisibilities of policing.

Criminology & Criminal Justice, 12(3), 275-288.

doi:10.1177/1748895811432014.

Logie, J. (2014). Dark days: Understanding the historical context and the visual

rhetorics of the SOPA/ PIPA blackout. In M. McCaughey (ed.),

Cyberactivism on the Participatory Web (pp. 20-40). New York, NY: Taylor

& Francis. doi:10.4324/9781315885797.

Lorenz, C. (2012). If you’re so smart, why are you under surveillance? Universities,

neoliberalism, and new public management. Critical Inquiry, 38(3), 599-629.

doi:10.1086/664553.

Lucas, G.R. (2014). NSA management direction #424: Secrecy and privacy in the

aftermath of Edward Snowden. Ethics & International Affairs, 28(1), 29-38.

doi:10.1017/S0892679413000488.

Ludlam, S. (2015). Parliamentary Debates, Senate of Australia, 24 March 2015,

2129-2133.

Lyell, D., & Coiera, E. (2016). Automation bias and verification complexity: A

systematic review. Journal of the American Medical Informatics Association,

24(2), 423-431. doi:10.1093/jamia/ocw105.


Lynskey, O. (2015). The Foundations of EU Data Protection Law. Oxford, UK:

Oxford University Press.

Lyon, D. (2002). Everyday surveillance: Personal data and social classifications.

Information, Communication & Society, 5(2), 242-257.

doi:10.1080/13691180210130806.

Lyon, D. (2003). Surveillance as Social Sorting. London, UK: Routledge Publishing.

Lyon, D. (2010). Liquid surveillance: The contribution of Zygmunt Bauman to

surveillance studies. International Political Sociology, 4(4), 325-338.

doi:10.1111/j.1749-5687.2010.00109.x.

Lyon, D. (2015). Surveillance after Snowden. Cambridge, UK: Polity Press.

Machuletz, D., Sendt, H., Laube, S., & Böhme, R. (2016). Users protect their privacy

if they can: Determinants of webcam covering behavior. Retrieved from

http://www.internetsociety.org/sites/default/files/10%20users-protect-their-

privacy-if-they-can-determinants-of-webcam-covering-behavior.pdf.

Maki, K. (2011). Neoliberal deviants and surveillance: Welfare recipients under the

watchful eye of Ontario Works. Surveillance & Society, 9(1/2), 47-63.

doi:10.24908/ss.v9i1/2.4098.

Mann, M., Daly, A., Wilson, M., & Suzor, N. (2018). The limits of (digital)

constitutionalism: Exploring the privacy-security (im)balance in Australia.

International Communication Gazette, 80(4), 369-384.

doi:10.1177/1748048518757141.

Mann, S., & Ferenbok, J. (2013). New media and the power politics of sousveillance

in a surveillance-dominated world. Surveillance & Society, 11(1/2), 18-34.

doi:10.24908/ss.v11i1/2.4456.


Mann, S., Nolan, J., & Wellman, B. (2003). Sousveillance: Inventing and using

wearable computing devices for data collection in surveillance environments.

Surveillance & Society, 1(3), 331-355. doi:10.24908/ss.v1i3.3344.

Maras, M-H. (2014). Inside darknet: The takedown of Silk Road. Criminal Justice

Matters, 98(1), 22–23. doi:10.1080/09627251.2014.984541.

Marriott, L. (2013). Justice and the justice system: A comparison of tax evasion and

welfare fraud in Australia and New Zealand. Griffith Law Review, 22(2), 403-

429. doi:10.1080/10383441.2013.10854781.

Marthews, A., & Tucker, C. (2015). Government surveillance and internet search

behavior. Retrieved from http://papers.ssrn.com/sol3/

papers.cfm?abstract_id=2412564.

Martin, A.K., Van Brakel, R.E., & Bernhard, D.J. (2009). Understanding resistance

to digital surveillance: Towards a multi-disciplinary, multi-actor framework.

Surveillance & Society, 6(3), 213-232. doi:10.24908/ss.v6i3.3282.

Martin, J. (2014). Lost on the Silk Road: Online drug distribution and the

cryptomarket. Criminology & Criminal Justice, 14(3), 351-367.

doi:10.1177/1748895813505234.

Martin, K. (2012). Everyday cryptography: Fundamental principles and

applications. Oxford, UK: Oxford University Press.

Martino, W., & Frank, B. (2006). The tyranny of surveillance: Male teachers and the

policing of masculinities in a single sex school. Gender and Education, 18(1),

17-33. doi:10.1080/09540250500194914.

Marwick, A.E. (2012). The public domain: Surveillance in everyday life.

Surveillance & Society, 9(4), 378-393. doi:10.24908/ss.v9i4.4342.


Marx, G.T. (2009). A tack in the shoe and taking off the shoe: Neutralization and

counter-neutralization dynamics. Surveillance & Society, 6(3), 294-306.

doi:10.24908/ss.v6i3.3286.

Marx, G.T. (2016). Windows into the soul: Surveillance and society in an age of high

technology. Chicago, IL: University of Chicago Press.

Masco, J. (2010). Sensitive but unclassified: Secrecy and the counterterrorist state.

Public Culture, 22(3), 433-463. doi:10.1215/08992363-2010-004.

Mathiesen, T. (1997). The viewer society: Michel Foucault’s panopticon revisited.

Theoretical Criminology, 1(2), 215-234. doi:10.1177/1362480697001002003.

May, T. (1992). The crypto anarchist manifesto. Retrieved from

https://www.activism.net/cypherpunk/crypto-anarchy.html.

May, T. (2014). Parliamentary Debates, House of Commons of the United

Kingdom, 15 July 2014. Retrieved from https://publications.parliament.uk/pa/

cm201415/cmhansrd/cm140715/debtext/1407150002.htm#14071547000001.

Mayer, F. (2014). Narrative politics: Stories and collective action. New York, NY:

Oxford University Press.

McCulloch, J., & Pickering, S. (2009). Pre-crime and counter-terrorism: Imagining

future crime in the ‘War on Terror.’ British Journal of Criminology,

49(5), 628-645. doi:10.1093/bjc/azp023.

McCulloch, J., & Pickering, S. (2010). Future threat: Pre-crime, state terror, and

dystopia in the 21st century. Criminal Justice Matters, 81(1), 32-33.

doi:10.1080/09627251.2010.505400.


McCulloch, J., & Wilson, D. (2015). Pre-crime: Pre-emption, precaution and the

future. Oxon, UK: Routledge Publishing.

McGuirk, R. (2017). Australia plans law to force tech giants to decrypt messages.

Retrieved from https://www.apnews.com/

621e0913072a4cb5a1a7f7338721b059.

Mendelson, K.A., Walker, S.T., & Winston, J.D. (1998). The evolution of recent

cryptographic policy in the United States. Cryptologia, 22(3), 193-210.

doi:10.1080/0161-119891886876.

Michaelsen, C. (2006). Balancing civil liberties against national security? A critique

of counterterrorism rhetoric. University of New South Wales Law Journal,

29(2), 1-21. Retrieved from http://classic.austlii.edu.au/au/

journals/UNSWLawJl/2006/13.html.

Milivojevic, S., & McGovern, A. (2014). The death of Jill Meagher: Crime and

punishment on social media. International Journal of Crime, Justice, and

Social Democracy, 3(3), 22-39. Retrieved from https://www.crimejustice

journal.com/article/view/731.

Mill, A., & Sarikakis, K. (2016). Reluctant activists? The impact of legislative and

structural attempts of surveillance on investigative journalism. Big Data &

Society, 3(2), 1-11. doi:10.1177/2053951716669381.

Mill, J.S. (1859). On liberty. In M. Philip & F. Rosen (eds., 2017), On liberty,

utilitarianism and other essays (pp. 5-112). New York, NY: Oxford

University Press.

Miller, K. (2014). Total surveillance, big data, and predictive crime technology:

Privacy’s perfect storm. Journal of Technology Law and Policy, 19(1),


105-146. Retrieved from https://heinonline.org/HOL/

P?h=hein.journals/jtlp19&i=111.

Miller, S. (1995). The presentation of self in electronic life: Goffman on the internet.

Embodied Knowledge and Virtual Space Conference, Goldsmiths' College,

University of London. Retrieved from http://www.douri.sh/

classes/ics234cw04/miller2.pdf.

Möllers, N., & Hälterlein, J. (2013). Privacy issues in public discourse: The case of

‘smart’ CCTV in Germany. Innovation: The European Journal of Social

Science Research, 26(1-2), 57-70. doi:10.1080/13511610.2013.723396.

Monahan, T. (2006). Counter-surveillance as political action? Social Semiotics,

16(4), 515-534. doi:10.1080/10350330601019769.

Monahan, T. (2015). The right to hide? Anti-surveillance camouflage and the

aestheticization of resistance. Communication and Critical/ Cultural Studies,

12(2), 159-178. doi:10.1080/14791420.2015.1006646.

Moor, J.H. (1991). The ethics of privacy protection. Library Trends, 39(1/2), 69-82.

Retrieved from https://www.ideals.illinois.edu/bitstream/handle/2142/7714/

librarytrendsv39i1-2h_opt.pdf

Moore, A.D. (2004). Privacy: Its meaning and value. American Philosophical

Quarterly, 40(3), 215-227. Retrieved from

https://www.jstor.org/stable/20010117.

Moore, A.D. (2011). Privacy, security, and government surveillance: Wikileaks and

the new accountability. Public Affairs Quarterly, 25(2), 141-156. Retrieved

from https://www.jstor.org/stable/23057094.


Moore, D., & Rid, T. (2016). Cryptopolitik and the darknet. Survival, 58(1), 7-38.

doi:10.1080/00396338.2016.1142085.

Morgan, J. (2018). Dual citizenship and Australian parliamentary eligibility: A time

for reflection or referendum. Adelaide Law Review, 39(2), 439-451. Retrieved

from https://search-informit-com-au.ezp01.library.qut.edu.au/document

Summary;dn=265096986906998;res=IELAPA

Munk, T.B. (2017). 100,000 false positives for every real terrorist: Why anti-terror

algorithms don’t work. First Monday, 22(9). Retrieved from

https://firstmonday.org/ojs/index.php/fm/article/view/7126/6522

Munksgaard, R., & Demant, J. (2016). Mixing politics and crime: The prevalence

and decline of political discourse on the cryptomarket. International Journal

of Drug Policy, 35(September), 77–83. doi:10.1016/j.drugpo.2016.04.021.

Murphy, B., & Anderson, J. (2016). Assemblage, counter-law and the legal

architecture of Australian covert surveillance. In R.K. Lippert, K. Walby, I.

Warren, & D. Palmer (eds.), National security, surveillance and terror (pp. 99-

127). Cham, Switzerland: Springer International Publishing.

Murphy, M.H. (2014). The pendulum effect: Comparisons between the Snowden

revelations and the Church Committee. Information & Communications

Technology Law, 23(3), 192-219. doi:10.1080/13600834.2014.970375.

Mythen, G., Walklate, S., & Khan, F. (2009). I’m a Muslim, but I am not a terrorist:

Victimization, risky identities and the performance of safety. British Journal

of Criminology, 49(6), 736-754. doi:10.1093/bjc/azp032.


Na, C., & Gottfredson, D.C. (2013). Police officers in schools: Effects on school

crime and the processing of offending behaviors. Justice Quarterly, 30(4),

619-650. doi:10.1080/07418825.2011.615754.

Neocleous, M. (2007). Security, liberty and the myth of balance: Towards a critique

of security politics. Contemporary Political Theory, 6(2), 131-149.

doi:10.1057/palgrave.cpt.9300301.

Newell, B.C. (2014a). The massive metadata machine: Liberty, power, and secret

mass surveillance in the U.S. and Europe. Journal of Law and Policy for the

Information Society, 10(2), 481-522. Retrieved from

https://kb.osu.edu/handle/1811/73359.

Newell, B.C. (2014b). Technopolicing, surveillance, and citizen oversight: A

neorepublican theory of liberty and information control. Government

Information Quarterly, 31(3), 421-431. doi:10.1016/j.giq.2014.04.001.

Newell, B.C. (2018). Privacy as antipower: In pursuit of non-domination. European

Data Protection Law Review, 4(1), 12-16. Retrieved from

https://edpl.lexxion.eu/article/edpl/2018/1/5.

Nimmer, M.B. (1954). The right of publicity. Law and Contemporary Problems,

19(2), 203-223. Retrieved from https://scholarship.law.duke.edu/cgi/

viewcontent.cgi?article=2595&context=lcp.

Nissenbaum, H. (2011). A contextual approach to privacy online. Dædalus, 140(4),

32-48. Retrieved from http://www.cs.cornell.edu/~shmat/courses/cs5436/

contextualapproach.pdf.


Nissenbaum, H. (1998). Protecting privacy in an information age: The problem of

privacy in public. Law and Philosophy, 17(5/6), 559-596.

doi:10.2307/3505189.

Nissenbaum, H. (2004). Hackers and the contested ontology of cyberspace. New

Media & Society, 6(2), 195–217. doi:10.1177/1461444804041445.

Norris, C. (2007). The intensification and bifurcation of surveillance in British

criminal justice policy. European Journal on Criminal Policy and Research,

13(1/2), 139-158. doi:10.1007/s10610-006-9032-1.

Noy, C. (2008). Sampling knowledge: The hermeneutics of snowball sampling in

qualitative research. International Journal of Social Research Methodology,

11(4), 327-344. doi:10.1080/13645570701401305.

Nyst, C., & Falchetta, T. (2017). The right to privacy in the digital age. Journal of

Human Rights Practice, 9(1), 104-118. doi:10.1093/jhuman/huw026.

O’Brien, M. (2008). Law, privacy and information technology: A sleepwalk through

the surveillance society? Information & Communications Technology Law,

17(1), 25-35. doi:10.1080/13600830801887214.

O’Brien, M. (2014). The internet, child pornography and cloud computing: The dark

side of the web? Information & Communications Technology Law, 23(3),

238–255. doi:10.1080/13600834.2014.970376.

O’Neill, M., & Loftus, B. (2013). Policing and the surveillance of the marginal:

Everyday contexts of social control. Theoretical Criminology, 17(4), 437-

454. doi:10.1177/1362480613495084.


Onwuegbuzie, A.J., & Leech, N.L. (2007). Validity and qualitative research: An

oxymoron? Quality & Quantity, 41(2), 233-249.

doi:10.1007/s11135-006-9000-3.

Orwell, G. (1949). Nineteen Eighty-Four. London, UK: Secker & Warburg.

Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse,

abuse. Human Factors: The Journal of the Human Factors and Ergonomics

Society, 39(2), 230-253. doi:10.1518/001872097778543886.

Parent, W.A. (1983). Privacy, morality, and the law. Philosophy & Public Affairs,

12(4), 269-288. Retrieved from https://www.jstor.org/stable/2265374.

Parliamentary Joint Committee on Intelligence and Security. (2015). Advisory report

on the Telecommunications (Interception and Access) Amendment (Data

Retention) Bill 2014. Retrieved from www.aph.gov.au/Parliamentary_Bus

iness/Committees/Joint/Intelligence_and_Security/Data_Retention/Report.

Parry, O., & Mauthner, N.S. (2004). Whose data are they anyway? Practical, legal

and ethical issues in archiving qualitative research data. Sociology, 38(1),

139-152. doi:10.1177/0038038504039366.

Patel, T.G. (2012). Surveillance, suspicion and stigma: Brown bodies in a terror-

panic climate. Surveillance & Society, 10(3/4), 215-234.

doi:10.24908/ss.v10i3/4.4216.

Patsakis, C., Charemis, A., Papageorgiou, A., Mermigas, D., & Pirounias, S. (2018).

The market’s response toward privacy and mass surveillance: The Snowden

aftermath. Computers & Security, 73(March), 194–206.

doi:10.1016/j.cose.2017.11.002.


Penney, J.W. (2015). Chilling effects: Online surveillance and Wikipedia use.

Berkeley Technology Law Journal, 31(1), 117-161. doi:10.15779/Z38SS13.

Petersen, K.L., & Tjalve, V.S. (2013). (Neo)republican security governance? US

Homeland security and the politics of ‘shared responsibility’. International

Political Sociology, 7(1), 1–18. doi:10.1111/ips.12006.

Pettit, P. (2001). A theory of freedom: From the psychology to the politics of agency.

Cambridge, UK: Polity Press.

Pettit, P. (2011). The instability of freedom as non-interference: The case of Isaiah

Berlin. Ethics, 121(4), 693-716. Retrieved from

http://www.jstor.org/stable/10.1086/660694.

Pettit, P. (2012). On the people’s terms: A republican theory and model of

democracy. Cambridge, UK: Cambridge University Press.

Phelps, A., & Watt, A. (2014). I shop online – recreationally! Internet anonymity and

Silk Road enabling drug use in Australia. Digital Investigation, 11(4), 261-

272. doi:10.1016/j.diin.2014.08.001.

Polkinghorne, D.E. (2007). Validity issues in narrative research. Qualitative Inquiry,

13(4), 471-486. doi:10.1177/1077800406297670.

Presser, L. (2015). Why We Harm. New Brunswick, NJ: Rutgers University Press.

Price, M., & Dalgleish, J. (2010). Cyberbullying: Experiences, impacts and coping

strategies as described by Australian young people. Youth Studies Australia,

29(2), 51-59. Retrieved from https://search.informit.com.au/document

Summary;dn=213627997089283;res=IELHSS.


Privacy International. (2013). Necessary and proportionate: International principles

on the application of human rights to communications surveillance. Retrieved

from https://necessaryandproportionate.org/principles.

Privacy International. (2014). Submission to the Parliamentary Joint Committee on

Intelligence and Security Inquiry into the Telecommunications (Interception

and Access) Amendment (Data Retention) Bill 2014. Retrieved from

https://www.aph.gov.au/DocumentStore.ashx?id=b76e1118-020b

-4ca2-8ca5-552749142018&subId=302730

Prosser, W.L. (1960). The right to privacy. California Law Review, 48, 383-423.

doi:10.15779/Z383J3C.

Quilter, J. (2015). Populism and criminal justice policy: An Australian case study of

non-punitive responses to alcohol-related violence. Australian & New

Zealand Journal of Criminology, 48(1), 24-52.

doi:10.1177/0004865813519656.

Rabionet, S.E. (2011). How I learned to design and conduct semi-structured

interviews: An ongoing and continuous journey. The Qualitative Report,

16(2), 563-566. Retrieved from https://core.ac.uk/download/pdf/

51097406.pdf.

Rafter, N., Posick, C., & Rocque, M. (2016). The criminal brain: Understanding

biological theories of crime (2nd ed.). New York, NY:

New York University Press.

Raible, J., & Irizarry, J.G. (2010). Redirecting the teacher’s gaze: Teacher education,

youth surveillance and the school-to-prison pipeline. Teaching and Teacher

Education, 26(5), 1196-1203. doi:10.1016/j.tate.2010.02.006.


Ransley, J., Anderson, J., & Prenzler, T. (2007). Civil litigation against police in

Australia: Exploring its extent, nature and implications for accountability.

Australian and New Zealand Journal of Criminology, 40(2), 143-160.

doi:10.1375/acri.40.2.143.

Rauhofer, J. (2008). Privacy is dead, get over it! Information privacy and the dream

of a risk-free society. Information & Communications Technology Law,

17(3), 185-197. doi:10.1080/13600830802472990.

Reddick, C.G., Chatfield, A.T., & Jaramillo, P.A. (2015). Public opinion on National

Security Agency surveillance programs: A multi-method approach.

Government Information Quarterly, 32(2), 129-141.

doi:10.1016/j.giq.2015.01.003.

Reese, S. D., & Lewis, S. C. (2011). Framing the War on Terror: The internalization

of policy in the US press. Journalism, 10(6), 777-797.

doi:10.1057/9781137001931.0015

Reeves, J. (2012). If you see something, say something: Lateral surveillance and the

uses of responsibility. Surveillance & Society, 10(3/4), 235-248.

doi:10.24908/ss.v10i3/4.4209.

Regan, P.M. (2002). Privacy as a common good in the digital world. Information,

Communication & Society, 5(3), 382-405. doi:10.1080/13691180210159328.

Regan, P.M. (2011). Response to Bennett: Also in defence of privacy. Surveillance

& Society, 8(4), 497-499. doi:10.24908/ss.v8i4.4185.

Report of the Special Rapporteur on the Promotion and Protection of Human Rights

while Countering Terrorism. (2014). Protection of human rights and

fundamental freedoms while countering terrorism. Retrieved from


https://documents-dds-ny.un.org/doc/UNDOC/GEN/G19/134/55/PDF

/G1913455.pdf?OpenElement

Rhee, Y. (2005). A comparative historical study of the census registers of early

Choson Korea and Ming China. International Journal of Asian Studies,

2(1), 25-55.

Roberts, A. (2015). A republican account of the value of privacy. European Journal

of Political Theory, 14(3), 320-344. doi:10.1177/1474885114533262.

Robinson, O.C. (2014). Sampling in interview-based qualitative research: A

theoretical and practical guide. Qualitative Research in Psychology,

11(1), 25-41. doi:10.1080/14780887.2013.801543.

Roessler, B., & Mokrosinska, D. (2013). Privacy and social interaction. Philosophy

& Social Criticism, 39(8), 771-791. doi:10.1177/0191453713494968.

Rothe, D.L., & Steinmetz, K.F. (2013). The case of Bradley Manning: State

victimization, realpolitik and WikiLeaks. Contemporary Justice Review,

16(2), 280-292. doi:10.1080/10282580.2013.798694.

Ryberg, J. (2007). Privacy rights, crime prevention, CCTV, and the life of Mrs

Aremac. Res Publica, 13(2), 127-143. doi:10.1007/s11158-007-9035-x.

Safi, M. (2016, 13 July). The takeover: How police ended up running a paedophile

website. The Guardian. Retrieved from https://www.theguardian.com/

society/2016/jul/13/shining-a-light-on-the-dark-web-how-the-police-

ended-up-running-a-paedophile-site

Sanchez, A. (2009). The Facebook feeding frenzy: Resistance-through-distance and

resistance-through-persistence in the societied network. Surveillance &

Society, 6(3), 275-293. doi:10.24908/ss.v6i3.3285.


Sandel, M. (1995). Democracy’s discontent: America in search of a public

philosophy. Cambridge, MA: Harvard University Press.

Sanders, C., & Hannem, S. (2012). Policing ‘the risky’: Technology and surveillance

in everyday patrol work. Canadian Review of Sociology, 49(4), 389-410.

doi:10.1111/j.1755-618X.2012.01300.x.

Sandywell, B. (2006). Monsters in cyberspace: Cyberphobia and cultural panic in the

information age. Information, Communication & Society, 9(1), 39-61.

doi:10.1080/13691180500519407.

Sanquist, T.F., Mahy, H., & Morris, F. (2008). An exploratory risk perception study

of attitudes toward homeland security systems. Risk Analysis, 28(4), 1128-

1133. doi:10.1111/j.1539-6924.2008.01069.x.

Sauter, M. (2013). LOIC will tear us apart: The impact of tool design and media

portrayals in the success of activist DDOS attacks. American Behavioral

Scientist, 57(7), 983-1007. doi:10.1177/0002764213479370.

Schaefer, B.P., & Steinmetz, K.F. (2014). Watching the watchers and McLuhan’s

tetrad: The limits of cop-watching in the internet age. Surveillance & Society,

12(4), 502-515. doi:10.24908/ss.v12i4.5028.

Schilling, J. (2006). On the pragmatics of qualitative assessment. European Journal

of Psychological Assessment, 22(1), 28-37. doi:10.1027/1015-5759.22.1.28.

Schmitz, R., & Kazyak, E. (2016). Masculinities in cyberspace: An analysis of

portrayals of manhood in men’s rights activist websites. Social Sciences,

5(2), 18-33. doi:10.3390/socsci5020018.

Schneier, B. (2006). Why data mining won’t stop terror. Retrieved from

https://www.wired.com/2006/03/why-data-mining-wont-stop-terror-2/


Schroeder, D. (2012). Human rights and human dignity: An appeal to separate the

conjoined twins. Ethical Theory and Moral Practice, 15(3), 323-335.

doi:10.1007/s10677-011-9326-3.

Scott, J.C. (1984). Weapons of the weak: Everyday forms of peasant resistance.

New Haven, CT: Yale University Press.

Scott, J.C. (1990). Domination and the arts of resistance: Hidden transcripts.

New Haven, CT: Yale University Press.

Sensen, O. (2011). Kant on human dignity. Berlin, Germany: De Gruyter.

Shad, M.R. (2018). Cyber threat in interstate relations: Case of US-Russia cyber

tensions. Policy Perspectives, 15(2), 41-55. doi:10.13169/polipers.15.2.0041.

Simmons, R. (2016). Quantifying criminal procedure: How to unlock the potential of

big data in our criminal justice system. Michigan State Law Review, 2016,

947-1047. Retrieved from https://digitalcommons.law.msu.edu/

lr/vol2016/iss4/1/.

Simon, S. (2012). Suspicious encounters: Ordinary preemption and the securitization

of photography. Security Dialogue, 43(2), 157-173.

doi:10.1177/0967010612438433.

Simone, M.A. (2009). Give me liberty and give me surveillance: A case study of the

US government’s discourse of surveillance. Critical Discourse Studies, 6(1),

1-14. doi:10.1080/17405900802559977.

Smith, G.J.D. (2016). Surveillance, data and embodiment: On the work of being

watched. Body & Society, 22(2), 108-139. doi:10.1177/1357034X15623622.


Solove, D.J. (2007). ‘I’ve got nothing to hide’ and other misunderstandings of

privacy. San Diego Law Review, 44, 745-772. Retrieved from

https://scholarship.law.gwu.edu/cgi/viewcontent.cgi?article=

1159&context=faculty_publications

Solove, D.J. (2008). Understanding privacy. Cambridge, MA: Harvard University

Press.

Spector-Mersel, G. (2010). Narrative research: Time for a paradigm. Narrative

Inquiry, 20(1), 204-224. doi:10.1075/ni.20.1.10spe.

Stalder, F. (2002). Privacy is not the antidote to surveillance. Surveillance & Society,

1(1), 120-124. doi:10.24908/ss.v1i1.3397.

Strakosch, E. (2012). Colonial risk management. Borderlands e-Journal, 11(1), 1-28.

Retrieved from http://www.borderlands.net.au/vol11no1_2012/

strakosch_risk.htm.

Starr, A., & Fernandez, L.A. (2009). Legal control and resistance post-Seattle. Social

Justice, 36(1), 41-60. Retrieved from https://www.jstor.org/stable/29768525.

Starr, A., Fernandez, L.A., Amster, R., Wood, L.J., & Caro, M.J. (2008). The impacts

of state surveillance on political assembly and association: A socio-legal

analysis. Qualitative Sociology, 31(3), 251-270.

doi:10.1007/s11133-008-9107-z

Stay, R.J. (1996). Cryptic controversy: U.S. government restrictions on cryptography

exports and the plight of Philip Zimmerman. Georgia State University Law

Review, 13(2), 581-604. Retrieved from https://heinonline.org/

HOL/P?h=hein.journals/gslr13&i=599.


Steeves, V., & Regan, P. (2014). Young people online and the social value of

privacy. Journal of Information, Communication and Ethics in Society,

12(4), 298-313. doi:10.1108/JICES-01-2014-0004.

Stefanone, M., Lackaff, D., & Rosen, D. (2010). The relationship between traditional

mass media and ‘social media’: Reality television as a model for social

network site behaviour. Journal of Broadcasting & Electronic Media,

54(3), 508-525. doi:10.1080/08838151.2010.498851.

Steinmetz, K.F. (2015). Craft(y)ness: An ethnographic study of hacking. British

Journal of Criminology, 55(1), 125-145. doi:10.1093/bjc/azu061.

Steinmetz, K.F. (2016). Hacked: A radical approach to hacker culture and crime.

New York, NY: New York University Press.

Steinmetz, K.F., & Gerber, J. (2015). It doesn’t have to be this way: Hacker

perspectives on privacy. Social Justice, 41(3), 29-51. Retrieved from

https://www.jstor.org/stable/24361631

Strutt, J., & Kagi, J. (2017). Greens senator Scott Ludlam resigns over failure to

renounce dual citizenship. ABC News, 16 August. Retrieved from

https://www.abc.net.au/news/2017-07-14/senator-scott-ludlam-

resign-constitution-dual-citizenship/8708606

Suran, M., & Kilgo, D.K. (2017). Freedom from the press? How anonymous

gatekeepers on Reddit covered the Boston Marathon bombing. Journalism

Studies, 18(8), 1035-1051. doi:10.1080/1461670X.2015.1111160.

Suri, H. (2011). Purposeful sampling in qualitative research synthesis. Qualitative

Research Journal, 11(2), 63-75. doi:10.3316/QRJ1102063.


Suzor, N. (2010). The role of the rule of law in virtual communities. Berkeley

Technology Law Journal, 25(4), 1817-1886. Retrieved from

https://www.jstor.org/stable/24118612.

Suzor, N., Pappalardo, K., & McIntosh, N. (2016). The passage of Australia’s data

retention regime: National security, human rights, and media scrutiny.

Internet Policy Review, 6(1), 1-16. doi:10.31228/osf.io/6wxmw.

Sykes, G.M., & Matza, D. (1957). Techniques of neutralization: A theory of

delinquency. American Sociological Review, 22(6), 664-670.

doi:10.2307/2089195.

Taddei, S., & Contena, B. (2013). Privacy, trust and control: Which relationships

with online self-disclosure? Computers in Human Behavior, 29(3), 821-826.

doi:10.1016/j.chb.2012.11.022.

Taylor, E., & Lee, M. (2019). Points of view: Arrestees' perspectives on police body-

worn cameras and their perceived impact on police-citizen interactions. The

British Journal of Criminology, 59(4), 958-978. doi:10.1093/bjc/azz007.

Tebbutt, J. (2011). Towards a history of listening and surveillance. Continuum,

25(2), 239-249. doi:10.1080/10304312.2011.557828.

Thomas, D.R. (2006). A general inductive approach for analyzing qualitative

evaluation data. American Journal of Evaluation, 27(2), 237-246.

doi:10.1177/1098214005283748.

Thomas, J. (2005). The moral ambiguity of social control in cyberspace: A retro-

assessment of the ‘golden age’ of hacking. New Media & Society, 7(5),

599-624. doi:10.1177/1461444805056008.


Thompson, P.B. (2001). Privacy, secrecy and security. Ethics and Information

Technology, 3(1), 13-19. doi:10.1023/A:1011423705643.

Thomson, J.J. (1971). A defense of abortion. Philosophy & Public Affairs, 1(1),

47-66. Retrieved from https://www.jstor.org/stable/2265091.

Thomson, J.J. (1975). The right to privacy. Philosophy & Public Affairs, 4(4),

295-314. Retrieved from https://www.jstor.org/stable/2265075.

Tomblin, J., & Jenion, G. (2016). Sentencing ‘Anonymous’: Exacerbating the civil

divide between online citizens and government. Police Practice and

Research, 17(6), 507-519. doi:10.1080/15614263.2016.1205983.

Tor Project. (2018). Tor Metrics: Users. Retrieved from https://metrics.torproject.

org/userstats-relay-country.html?start=2010-01-01&end=2018-12-

31&country=all&events=off

Townsend, K. (2005). Electronic surveillance and cohesive teams: Room for

resistance in an Australian call centre? New Technology, Work and

Employment, 20(1), 47-59. doi:10.1111/j.1468-005X.2005.00143.x.

Tracy, S.J. (2010). Qualitative quality: Eight ‘big-tent’ criteria for excellent

qualitative research. Qualitative Inquiry, 16(10), 837-851.

doi:10.1177/1077800410383121.

Traylor, J.M. (2016). Shedding light on the ‘Going Dark’ problem and the encryption

debate. University of Michigan Journal of Law Reform, 50(2), 489-524.

Retrieved from https://repository.law.umich.edu/mjlr/vol50/iss2/5/.

Trottier, D. (2014). Crowdsourcing CCTV surveillance on the internet. Information,

Communication & Society, 17(5), 609-626.

doi:10.1080/1369118X.2013.808359.


Trottier, D. (2017). Digital vigilantism as weaponisation of visibility. Philosophy &

Technology, 30(1), 55-72. doi:10.1007/s13347-016-0216-4.

Turnbull, M. (2014). Parliamentary Debates, House of Representatives of the

Commonwealth of Australia, 30 October 2014. Retrieved from

https://parlinfo.aph.gov.au/parlInfo/genpdf/chamber/hansardr/

4a3ea2e7-05f5-4423-88aa-f33e93256485/0010/hansard

_frag.pdf;fileType=application%2Fpdf.

Turner, D.W. (2010). Qualitative interview design: A practical guide for novice

investigators. The Qualitative Report, 15(3), 754-760. Retrieved from

https://nsuworks.nova.edu/tqr/vol15/iss3/19/.

Turner, G. (2006). The mass production of celebrity: ‘Celetoids’, reality TV and the

‘demotic turn’. International Journal of Cultural Studies, 9(2), 153-165.

doi:10.1177/1367877906064028.

United Nations General Assembly. (2013). Resolution 69/166 on the right to privacy

in the digital age. Retrieved from https://undocs.org/en/A/RES/69/166.

United Nations General Assembly. (1948). Universal Declaration of Human Rights.

Retrieved from

https://ohchr.org/EN/UDHR/Documents/UDHR_Translations/eng.pdf.

Utset, M.A. (2017). Digital surveillance and preventive policing. Connecticut Law

Review, 49(5), 1453-1494. Retrieved from https://heinonline.org/HOL/

P?h=hein.journals/conlr49&i=1497.

van der Sloot, B. (2018). A new approach to the right to privacy, or how the

European Court of Human Rights embraced the non-domination principle.


Computer Law & Security Review, 34(3), 539-549.

doi:10.1016/j.clsr.2017.11.013.

van Houdt, F., & Schinkel, W. (2013). A genealogy of neoliberal communitarianism.

Theoretical Criminology, 17(4), 493-516. doi:10.1177/1362480613485768.

Viseu, A., Clement, A., & Aspinall, J. (2004). Situating privacy online: Complex

perceptions and everyday practices. Information, Communication & Society,

7(1), 92-114. doi:10.1080/1369118042000208924.

Volokh, E. (2003). The mechanisms of the slippery slope. Harvard Law Review, 116,

1026-1137. Retrieved from http://www2.law.ucla.edu/volokh/slippery.pdf.

Vysotsky, S., & McCarthy, A. (2016). Normalising cyberracism: A neutralization

theory analysis. Journal of Crime and Justice, 40(4), 446-461.

doi:10.1080/0735648X.2015.1133314.

Waldron, J. (2003). Security and liberty: The image of balance. Journal of Political

Philosophy, 11(2), 191-210. doi:10.1111/1467-9760.00174.

Walklate, S., & Mythen, G. (2008). How scared are we? British Journal of

Criminology, 48(2), 209-225. doi:10.1093/bjc/azm070.

Wall, D.S. (2008). Cybercrime, media and insecurity: The shaping of public

perceptions of cybercrime. International Review of Law, Computers &

Technology, 22(1/2), 45-63. doi:10.1080/13600860801924907.

Wallace, W., & Smith, J. (1995). Democracy or technocracy? European integration

and the problem of popular consent. West European Politics, 18(3), 137-157.

doi:10.1080/01402389508425095


Walzer, M. (1983). Spheres of Justice: A defense of pluralism and equality.

New York, NY: Basic Books.

Wang, X., Gerber, M.S., & Brown, D.E. (2012). Automatic crime prediction using

events extracted from Twitter posts. In S.J. Yang, A.M. Greenberg, & M.

Endsley, Social Computing, Behavioral - Cultural Modeling and Prediction,

7227 (pp. 231-238). Berlin, Heidelberg: Springer.

doi:10.1007/978-3-642-29047-3_28.

Warren, I. (2015). Surveillance, criminal law and sovereignty. Surveillance &

Society, 13(2), 300-305. doi:10.24908/ss.v13i2.5679.

Warren, S.D., & Brandeis, L. (1890). The right to privacy. The Harvard Law Review,

4(5), 193-220. Retrieved from https://www.cs.cornell.edu/~shmat/courses/

cs5436/warren-brandeis.pdf.

Wäscher, T. (2017). Framing resistance against surveillance. Digital Journalism,

5(3), 368-385. doi:10.1080/21670811.2016.1254052.

Weimann, G. (2016). Going dark: Terrorism on the dark web. Studies in Conflict and

Terrorism, 39(3), 195-206. doi:10.1080/1057610X.2015.1119546.

Welch, M. (2011). Counterveillance: How Foucault and the Groupe d’Information

sur les Prisons reversed the optics. Theoretical Criminology, 15(3), 301-313.

doi:10.1177/1362480610396651.

Wells, H. (2008). The techno-fix versus the fair cop: Procedural (in)justice and

automated speed limit enforcement. British Journal of Criminology, 48(6),

798-817. doi:10.1093/bjc/azn058.


West, A., Lewis, J., & Currie, P. (2009). Students’ Facebook ‘friends’: Public and

private spheres. Journal of Youth Studies, 12(6), 615-627.

doi:10.1080/13676260902960752.

Westin, A.F. (1967). Privacy and freedom. New York, NY: Atheneum.

Westin, A.F. (2003). Social and political dimensions of privacy. Journal of Social

Issues, 59(2), 431-453. doi:10.1111/1540-4560.00072.

White, N. (2018, 16 August). What (we think) you should know about Australia’s

new encryption bill. Access Now. Retrieved from

https://www.accessnow.org/what-we-think-you-should-know-about-

australias-new-encryption-bill/.

Whitman, J.Q. (2004). The two Western cultures of privacy: Dignity versus liberty.

Yale Law Journal, 113, 1151-1221. Retrieved from https://digitalcommons.

law.yale.edu/ylj/vol113/iss6/1/

Wiles, R., Crow, G., Heath, S., & Charles, V. (2008). The management of

confidentiality and anonymity in social research. International Journal of

Social Research Methodology, 11(5), 417-428.

doi:10.1080/13645570701622231.

Wilson, D., & Weber, L. (2008). Surveillance, risk and pre-emption on the

Australian border. Surveillance & Society, 5(2), 124-141.

doi:10.24908/ss.v5i2.3431.

Wilson, D.J., & Serisier, T. (2010). Video activism and the ambiguities of counter-

surveillance. Surveillance & Society, 8(2), 166-180.

doi:10.24908/ss.v8i2.3484.


Wilson, L.A. (2011). Perceptions of legitimacy and strategies of resistance:

Melbourne illicit drug users and random roadside drug testing. Current Issues

in Criminal Justice, 23(2), 183-201. doi:10.1080/10345329.2011.12035918.

Wilson, T., Maimon, D., Sobesto, B., & Cukier, M. (2015). The effect of a

surveillance banner in an attacked computer system: Additional evidence for

the relevance of restrictive deterrence in cyberspace. Journal of Research in

Crime and Delinquency, 52(6), 829-855. doi:10.1177/0022427815587761.

Winkler, S., & Zeadally, S. (2015). An analysis of tools for online anonymity.

International Journal of Pervasive Computing and Communications, 11(4),

436-453. doi:10.1108/IJPCC-08-2015-0030.

Wong, J.C., & Solon, O. (2017). US government demands details on all visitors to

anti-Trump protest website. The Guardian, 15 August. Retrieved from

https://www.theguardian.com/world/2017/aug/14/donald-trump-

inauguration-protest-website-search-warrant-dreamhost

Woo, J. (2006). The right not to be identified: Privacy and anonymity in the

interactive media environment. New Media & Society, 8(6), 949-967.

doi:10.1177/1461444806069650.

Wood, D.M., & Webster, C.W.R. (2009). Living in surveillance societies: The

normalisation of surveillance in Europe and the threat of Britain’s bad

example. Journal of Contemporary European Research, 5(2), 259-273.

Retrieved from https://www.jcer.net/index.php/jcer/article/view/159.

Wood, M. (2016). Crowdsourced counter-surveillance: Examining the subversion of

random breath testing stations by social media facilitated crowdsourcing. In

Rethinking Cybercrime Conference, Preston, June 27-28. Retrieved from


http://www.academia.edu/26510748/Crowdsourced_counter-surveillance_

Examining_the_subversion_of_random_breath_testing_stations_by_

social_media_facilitated_crowdsourcing.

Wright, D., & Raab, C. (2014). Privacy principles, risks and harms. International

Review of Law, Computers & Technology, 28(3), 277-298.

doi:10.1080/13600869.2014.913874.

Yar, M., & Steinmetz, K.F. (2019). Cybercrime and Society (3rd ed.). London, UK:

Sage.

Yerke, A.F., & Mitchell, V. (2013). Transgender people in the military: Don’t ask?

Don’t tell? Don’t enlist! Journal of Homosexuality, 60, 436-457.

doi:10.1080/00918369.2013.744933.

Youn, S. (2009). Determinants of online privacy concern and its influence on privacy

protection behaviors among young adolescents. Journal of Consumer Affairs,

43(3), 389-418. doi:10.1111/j.1745-6606.2009.01146.x.

Young, R., & Zhang, L. (2007). Illegal computer hacking: An assessment of factors

that encourage and deter the behavior. Journal of Information Privacy and

Security, 3(4), 33-52. doi:10.1080/15536548.2007.10855827.

Young, R., Zhang, L., & Prybutok, V.R. (2007). Hacking into the minds of hackers.

Information Systems Management, 24(4), 281-287.

doi:10.1080/10580530701585823.

Zajko, M. (2018). Security against surveillance: IT security as resistance to pervasive

surveillance. Surveillance & Society, 16(1), 39-52.

doi:10.24908/ss.v16i1.5316.


Zedner, L. (2003). Too much security? International Journal of the Sociology of

Law, 31(3), 155-184. doi:10.1016/j.ijsl.2003.09.002.

Zedner, L. (2005). Securing liberty in the face of terror: Reflections from criminal

justice. Journal of Law and Society, 32(4), 507-533.

Zedner, L. (2007a). Preventive justice or pre-punishment? The case of control orders.

Current Legal Problems, 60(1), 174-203. doi:10.1093/clp/60.1.174.

Zedner, L. (2007b). Pre-crime and post-criminology? Theoretical Criminology,

11(2), 261-281. doi:10.1177/1362480607075851.

Zedner, L. (2008). Terrorism, the ticking bomb, and criminal justice values. Criminal

Justice Matters, 73(1), 18-19. doi:10.1080/09627250802274253.

Zimmer, C. (2011). Surveillance cinema: Narrative between technology and politics.

Surveillance & Society, 8(4), 427-440. doi:10.24908/ss.v8i4.4180.

Zimmermann, P. (1994). PGP Source Code and Internals. Cambridge, MA: MIT

Press.

Zuboff, S. (2015). Big other: Surveillance capitalism and the prospects of an

information civilization. Journal of Information Technology, 30(1),

75-89. doi:10.1057/jit.2015.5.

Cases Cited

American Civil Liberties Union v. James Clapper, 785 F.3d 787 (7 May 2015)

(USA).

American Civil Liberties Union v. James Clapper, 959 F.Supp.2d 724 (28 Dec 2013)

(USA).


Australian Broadcasting Corporation v Lenah Game Meats Pty Ltd (2001) 208 CLR

199 (Austl.).

Ben Grubb and Telstra Corporation Limited [2015] AICmr 35 (1 May 2015) (Austl.)

Clapper v. Amnesty International, 568 U.S. 398 (2013) (USA).

Davis and Others v. Secretary of State for the Home Department [2015] EWHC

2092 (Admin) (UK).

Digital Rights Ireland v. Ireland and Seitlinger and Others (2014) CJEU Joined

Cases C-293/12 and C-594/12 (EU).

Grosse v Purvis [2003] QDC 151; (2003) Aust Torts Reports 81-706 (Austl.).

Klayman v. Obama, 142 F.Supp.3d 172 (D.D.C. 2015) (USA).

Liberty and Others v GCHQ and Others [2014] UKIPTrib 13_77-H

(5 December 2014) (UK).

Liberty v Home Department [2018] EWHC 975 (Admin) (UK).

Obama v Klayman, 800 F.3d 559 (DC Cir. 2015) (USA).

Privacy Commissioner v Telstra Corporation Limited [2017] FCAFC 4 (Austl.).

Roe v. Wade, 410 US 113 (1973) (USA).

Smith v. Maryland, 442 U.S. 735 (1979) (USA).

Telstra Corporation Limited and Privacy Commissioner [2015] AATA 991

(18 December 2015) (Austl.)

Toonen v Australia, Communication No. 488/1992, U.N. Doc

CCPR/C/50/D/488/1992 (1994) (UN).


Victoria Park Racing and Recreation Grounds Co Ltd v Taylor (1937) 58 CLR 479

(Austl.).

Weber and Saravia v. Germany, 54934/00 ECHR (EU).

Legislation Cited

Anti-Terrorism Crime and Security Act 2001 (UK).

Arms Export Control Act 1976 (USA).

Charter of Fundamental Rights of the European Union 2000 (EU).

Criminal Code Act 1995 (Cth) (Austl.)

Criminal Justice (Terrorist Offences) Act 2005 (IRL).

Data Retention and Investigatory Powers Act 2014 (UK).

Data Retention Directive 2006/24/EC (EU).

Defence Trade Controls Act 2012 (Cth) (Austl.)

Investigatory Powers Act 2016 (UK).

Metropolitan Police Act 1829 (UK).

Postal Service Act 1792 (USA).

Privacy Act 1988 (Cth) (Austl.).

Public Records Act 2002 (Qld) (Austl.).

Stop Online Piracy Bill 2012 (USA).

Telecommunications (Interception and Access) Act 1979 (Cth) (Austl.).


Telecommunications (Interception and Access) Amendment (Data Retention) Act

2015 (Cth) (Austl.).

Telecommunications (Interception and Access) Amendment (Data Retention) Bill

2014 (Cth) (Austl).

Telecommunications and Other Legislation Amendment (Assistance and Access) Bill

2018 (Cth) (Austl.).

Telecommunications and Other Legislation Amendment (Assistance and Access) Act

2018 (Cth) (Austl.).

USA FREEDOM Act 2015 (USA).

USA PATRIOT Act 2001 (USA).


Appendices

APPENDIX A

1. Introduction

1.1. Consent/ Information

1.2. Nature of Questions

2. How did you initially become involved with – or interested in – political activism

surrounding the issue of digital surveillance?

2.1. What are your views about digital surveillance?

2.2. Why is the topic important to you?

3. There are different types of personal information that may be targeted by

surveillance systems, including details about an individual’s social networks,

educational or employment history, everyday behaviours, and personal beliefs.

3.1. What types of personal information do you think governments have a right to

collect?

3.2. What types of personal information do you think citizens have a right to hide?

3.3. For example, metadata is defined under section 187AA of the Act as

“information about the source, destination, time, type, and location of a

communication.”

3.4. How do you feel about the retention of these types of information specifically?

3.5. What do you think are the likely consequences – intended or otherwise – of

Australia’s metadata retention scheme?

4. When defending the Telecommunications (Interception and Access) Amendment

(Data Retention) Act 2015 (Cth), then-Minister for Communications Malcolm

Turnbull argued that the legislation was necessary to “prevent national security

agencies being further degraded.”


4.1. What are your views of the Government’s argument?

4.2. Was there adequate consultation with the public before the introduction of the

laws?

4.3. Do you think the government listened to, or incorporated, any of your specific

concerns?

5. Under the legislation, twenty-one agencies have access to metadata without a

warrant or the need to notify the subject of an investigation. The Attorney-General

can also expand the list of authorized agencies without the need for parliamentary

debate or public notice.

5.1. Do you have any concerns about the level of secrecy surrounding the metadata

scheme?

5.2. How do you feel about the power of the Attorney-General to authorize

warrantless access without further public consultation?

6. Scott Ludlam famously listed several methods Australian citizens could use to circumvent

the metadata retention scheme, such as using overseas email services, direct

messaging apps, publicly-available Wi-Fi, Virtual Private Networks, Onion

Routing, and cryptographic software.

6.1. Do you think it is alright for citizens to use these techniques or technologies?

6.2. Are there any ways you have changed your use of digital technologies since

the introduction of the metadata retention scheme?

6.3. Do you think privacy-enhancing technologies and techniques have an adverse

impact on national security?

7. The Australian Government has recently argued for the need to intercept

encrypted communications, because “cybercriminals and terrorists hide in the

dark”.


7.1. Do you think this is a legitimate justification?

If not, why do you think they are pursuing additional powers?

7.2. If the government is successful, in what ways do you think you will change

how you use digital technologies?

8. What changes or reforms would you like to see to Australia’s metadata retention

scheme?

8.1. What strategies do you think are necessary to achieve these changes?

9. Is there any other information or comments you would like to share about the

issue of digital surveillance and metadata retention?

10. Are there any questions you would like to ask me?

11. Conclusion

11.1. Thank you

11.2. Further interviews

11.3. Transcript & Summary

11.4. Snowball Sampling


APPENDIX B

Number Public Hearings

1 Public Hearing. December 17, 2014. Canberra.

2 Public Hearing. January 29, 2015. Canberra.
3 Public Hearing. January 29, 2015. Canberra.

Number Public Submission
1 Mr Alexander Lynch
2 Mr Alexander Schnur
3 Mr Noel Butler
4 Mr Daniel Gorza
5 Gilbert + Tobin Centre of Public Law
6 Communications Alliance Ltd and Australian Mobile Telecommunications Association
7 Australian Federal Police
8 Victoria Police
9 South Australia Police
10 Police, Fire and Emergency Services
11 Western Australia Police
12 Australian Security Intelligence Organisation
13 Australian Crime Commission
14 Independent Commission Against Corruption
15 Mr Virgil Hesse
16 Dr Geoffrey Jenkins
17 Police
18 Mr Mason Hope
19 Queensland Police Service
20 Mr Brian Ridgway
21 Mr Alex Carneal
22 Ms Alicia Cooper
23 Mr Tom Courtney
24 Australian Securities and Investments Commission
25 White Label Personal Clouds
26 Mr Peter Freak
27 Attorney-General's Department
28 Mr Iain Muir
29 Mr Josh O'Callaghan
30 Mr Damien Donnelly


31 Ms Fiona Brown
32 Mr Douglas Stetner
33 Bravehearts
34 AIMIA Digital Policy Group
35 Mr Ben Johnston
36 Ms Tanja Kahl
37 Mr Bernard Keane
38 Mr Glenn Bradbury
39 Commissioner for Privacy and Data Protection (Victoria)
40 Mr Hugh Murdoch
41 Heather Dowling
42 Australian Human Rights Commission
43 Mr Adam Cooksley
44 Mr Cam Browning
45 Mr Geoff Walker
46 Viraf Bhavnagri
47 Priya Shaw
48 Australian Commission for Law Enforcement Integrity
49 Ms Fiona Maley
50 Ms Pam Webster
51 Mr William Delaforce
52 Name Withheld
53 Ms Ashley Doodkorte
54 Blueprint for Free Speech
55 Mr Paul White
56 Mr Roger Clark
57 Dr Peter Evans
58 Mr Ken Stephens
59 Mr Andrew Horton
60 Mr Marco Setiawan
61 Mr Daniel Scott
62 Ms Catalina Zylberberg
63 Ms Bethany Skurrie
64 Mr David Murray
65 Mr Murray Deerbon
66 Ms Sue Bettison
67 Mr Donald Newgreen
68 Ms Sally Wylie
69 Mr Daniel Audsley
70 Dr Ricardo Cavicchioli and Dr Tassia Kolesnikow


71 Human Rights Law Centre
72 Police Federation of Australia
73 Mr Tom McDonnell
74 Commonwealth Ombudsman
75 Australian Privacy Foundation
76 Justice and International Mission Unit, Synod of Victoria and Tasmania, Uniting Church of Australia
77 Private Media
78 Name Withheld
79 Ms Catherine Cresswell
80 Privacy International
81 Eric Lindsay
82 Ian Hobbs
83 Bill Fisher
84 Universities Australia
85 Dr A. Bryan Fricker
86 Optus
87 Rochelle Roberts
88 Australian Lawyers for Human Rights
89 Australian Computer Society
90 Media, Entertainment & Arts Alliance
91 Ms MP
92 Office of the Australian Information Commissioner
93 University of Sydney
94 Institute of Public Affairs
95 Amnesty International Australia
96 Dr A J Wood
97 Electronic Frontiers Australia
98 Society of University Lawyers
99 Dr Paul Bernal
100 Corruption and Crime Commission (Western Australia)
101 Name Withheld
102 Mr David Lovejoy
103 Paul Schnackenburg
104 Dr Felix Rauch
105 Roger Graf
106 Mr Oak McIlwain
107 Name Withheld
108 Jonathan Grace
109 Australian Information Industry Association


110 Open Knowledge Australia
111 Communications Law Centre, University of Technology, Sydney
112 Telstra
113 Mr Terry Darling
114 Dr John Selby, Prof. Vijay Varadharajan and Dr Yvette Blount
115 Mr Doug Carter
116 Name Withheld
117 Law Institute of Victoria
118 Thoughtworks Pty Ltd
119 Daniel Black
120 Australian Communications Consumer Action Network
121 Mr Scott Millwood
122 Internet Society of Australia
123 Mr Mark Newton
124 Pirate Party Australia
125 Joint media organisations
126 Law Council of Australia
127 Australian Customs and Border Protection Service
128 FutureWise
129 Councils for civil liberties across Australia
130 Vodafone
131 Inspector-General of Intelligence and Security
132 Guardian Australia
133 Mr John Blair
134 Mr Albert Lightfoot
135 Mr Noel Falk
136 Ash Naughton
137 Mr Ben Marshall
138 Name Withheld
139 Mr Alan Lamb
140 Mr Graeme Tychsen
141 Mr Gordon Curtis
142 Mr Bill Egan
143 Ms Eileen Whitehead
144 Ms Jane Paterson
145 Mr Leon Lack
146 Mr Maurice Jones
147 Mr Einar Thorsteinsson
148 Mr Robert Lammers
149 Mr Paul James


150 Name Withheld
151 Ms Anne Layton-Bennett
152 Ms Heather Stock
153 Mr James McPherson
154 Ms Barbara Reed
155 Ms Katrina McAlpine
156 Ms Dimity Odea
157 Mr Simon Kemp
158 Mr Paul Wilkins
159 Ms Stephanie Stewart
160 Mr David Powell
161 Ms Elayne Jay
162 Ms Kerrie Matchett
163 Ms Eve Stocker
164 Mr Ben Smith
165 Mr Nathan Sherburn
166 Name Withheld
167 Mr Keith Wilson
168 Mr Anthony Hughes
169 Ms Jenny Rae
170 Name Withheld
171 Ms Belinda Wright
172 Arda Barut
173 Chris Sanderson
174 Yoon Leng Ooi
175 Mr Stephen Vicarioli
176 Mr Malcolm McKinnon
177 Ms Deborah Harris
178 Ms Lidia Nemitschenko
179 Leigh Milne
180 Shirley McRae and Wanda Grabowski
181 Ms Sharon Whitewood
182 Mr James Bowling
183 Chris Ogilvie
184 Andy Spate
185 Roger Marchant
186 Mr Ken White
187 Bob Brown
188 Name Withheld
189 Cara Clark


190 Mr Michael Latta
191 Mr Anthony Cavanna
192 Name Withheld
193 Name Withheld
194 Mr David Vaile and Mr Paolo Remati
195 Charles Lowe
196 Proctor McKenzie
197 D Adams
198 Muslim Legal Network (NSW)
199 NSW Ministry for Police and Emergency Services
200 Ms Erica Jolly
201 Crime and Corruption Commission (Qld.)
202 State and Territory Police Forces
203 Mr Phill Ball
204 Mr Shane Greenaway


APPENDIX C

Subject Title: Participate in a research study about digital surveillance in Australia

Dear colleagues

My name is Michael Wilson, from the Faculty of Law, Queensland University of Technology (QUT), and I am doing a PhD examining attitudinal and behavioural responses to metadata retention laws.

I am looking for current and former employees of relevant government agencies and members or associates of non-government organisations to interview for 30-60 minutes. Interviews can be held at your office, another convenient location, or via telephone/ video-conference.

Participating in this study will provide you with an opportunity to express your views about Australian metadata retention laws. In particular, I am interested to hear your views about the associated policymaking process, the efficacy of digital surveillance systems, any moral concerns you may have, and whether these have informed how you use digital technologies. As such, your participation will contribute to a holistic analysis of Australian cybersecurity policy.

Please view the attached Participant Information Sheet and Consent Form for further details on the study. If you are interested in participating or have any questions, please contact me via email or telephone.

Please note that this study has been approved by the QUT Human Research Ethics Committee (approval number 1700000517).

Many thanks for your consideration of this request.

Michael Wilson, PhD Candidate, 07 3138 2896, [email protected]

Supervisor: Dr Erin O’Brien, 07 3138 7103, [email protected]

School of Justice, Faculty of Law, Queensland University of Technology


APPENDIX D

PARTICIPANT INFORMATION FOR QUT RESEARCH PROJECT – Interview –

The contested legitimacy of digital surveillance: An analysis of the behavioural neutralisation of Australia’s metadata retention laws

QUT Ethics Approval Number 1700000517

RESEARCH TEAM
Principal Researcher: Mr Michael Wilson, PhD Candidate
Associate Researchers: Dr Erin O’Brien (Principal Supervisor), Professor Russell Hogg (Associate Supervisor), Professor Belinda Carpenter (Associate Supervisor)
Faculty of Law, Queensland University of Technology

DESCRIPTION
This project is being undertaken as part of PhD research by Michael Wilson.

The purpose of this project is to investigate the attitudinal and behavioural responses to Australia’s metadata retention laws and the associated implications for cybersecurity policy. The research examines policymaking processes, the efficacy of digital surveillance systems, associated moral concerns, and whether these have informed the use of digital technologies.

You are invited to participate in this project because you are associated with an organisation that is recognised as a stakeholder and/or has publicly advocated a position about Australia’s metadata retention laws.

PARTICIPATION
Your participation will involve an audio-recorded interview at your office, another convenient location, or via telephone/ video-conference. It is possible to participate in the project without being audio recorded or by providing written responses to interview questions.

Interviews will take approximately 30 to 60 minutes of your time.

Questions will include:
1. What types of personal information do you believe citizens have a duty to disclose/ right to hide from government?
2. To what extent do you think Australia’s metadata retention laws are inclusive and democratic?
3. How, if at all, have you changed how you use digital technologies since the introduction of mandatory metadata retention?

Your participation in this project is entirely voluntary. If you do agree to participate you can withdraw from the project without comment or penalty. If you withdraw within six weeks after your interview, upon request any identifiable information already obtained from you will be destroyed. Your decision to participate or not participate will in no way impact upon your current or future relationship with QUT.


EXPECTED BENEFITS
It is expected that this project will not benefit you directly. However, it may contribute to scholarly and practical knowledge about the interactions between agents and subjects of digital surveillance. As such, the research may assist in the development of cybersecurity policy that addresses the concerns of both government agencies and non-government organisations.

RISKS
There are no risks beyond normal day-to-day living associated with your participation in this project.

PRIVACY AND CONFIDENTIALITY
All comments and responses will be treated confidentially unless required by law. The names of individual persons are not required in any of the responses.

As the project involves an audio recording:
• You will have the opportunity to verify your comments and responses included within a transcript of the audio recording prior to final inclusion.
• The audio recording will be destroyed 5 years after the last publication.
• The audio recording will not be used for any other purpose.
• Only the named researchers will have access to the audio recording.
• It is possible to participate in the project without being audio recorded.

Any data collected as part of this project will be stored securely as per QUT’s Management of research data policy. The project is funded by scholarships awarded by the Australian Government and QUT; however, these organisations will not have access to data obtained during the project.

Please note that non-identifiable data from this project may be used as comparative data in future projects.

CONSENT TO PARTICIPATE
We would like to ask you to sign a written consent form (enclosed) to confirm your agreement to participate. This includes extended consent to allow non-identifiable data from this project to be used as comparative data in future projects.

QUESTIONS / FURTHER INFORMATION ABOUT THE PROJECT
If you have any questions or require further information, please contact one of the listed researchers:

Michael Wilson [email protected] 07 3138 2896 Erin O’Brien [email protected] 07 3138 7103

CONCERNS / COMPLAINTS REGARDING THE CONDUCT OF THE PROJECT
QUT is committed to research integrity and the ethical conduct of research projects. However, if you do have any concerns or complaints about the ethical conduct of the project you may contact the QUT Research Ethics Advisory Team on 07 3138 5123 or email [email protected]. The QUT Research Ethics Advisory Team is not connected with the research project and can facilitate a resolution to your concern in an impartial manner.

THANK YOU FOR HELPING WITH THIS RESEARCH PROJECT. PLEASE KEEP THIS SHEET FOR YOUR INFORMATION.


APPENDIX E

CONSENT FORM FOR QUT RESEARCH PROJECT – Interview –

The contested legitimacy of digital surveillance: An analysis of the behavioural neutralisation of Australia’s metadata retention laws

QUT Ethics Approval Number 1700000517

RESEARCH TEAM
Michael Wilson [email protected] 07 3138 2896
Erin O’Brien [email protected] 07 3138 7103
Russell Hogg [email protected] 07 3138 7124
Belinda Carpenter [email protected] 07 3138 7111

STATEMENT OF CONSENT
By signing below, you are indicating that you:
• Have read and understood the information document regarding this project.
• Have had any questions answered to your satisfaction.
• Understand that if you have any additional questions you can contact the research team.
• Understand that you will be provided an opportunity to review the transcript of the interview.
• Understand that you are free to withdraw from the study at any time within ten weeks from the date of participation.
• Understand that if you have concerns about the ethical conduct of the project you can contact the Research Ethics Advisory Team on 07 3138 5123 or email [email protected].
• Understand that you are providing extended consent to allow non-identifiable data from this project to be used as comparative data in future projects.
• Agree to participate in this research project.

Please tick the relevant box below:
☐ I agree for the interview to be audio recorded.
☐ I do not agree for the interview to be audio recorded.

Optional: Would you like to receive a summary of the research when it is completed? If yes, please provide your email address below.

Name

Email

Signature

Date

PLEASE RETURN THE SIGNED CONSENT FORM TO THE RESEARCHER.


APPENDIX F

APPENDIX G


