
Software bug bounties and legal risks to security researchers

Robin Hamper

(Student #: 3191917)

A thesis in fulfilment of the requirements for the degree of Masters of Law by Research

Page 2 of 178 Rob Hamper. Faculty of Law. Masters by Research Thesis.

COPYRIGHT STATEMENT

‘I hereby grant the University of New South Wales or its agents a non-exclusive licence to archive and to make available (including to members of the public) my thesis or dissertation in whole or part in the University libraries in all forms of media, now or here after known. I acknowledge that I retain all intellectual property rights which subsist in my thesis or dissertation, such as copyright and patent rights, subject to applicable law. I also retain the right to use all or part of my thesis or dissertation in future works (such as articles or books).’

‘For any substantial portions of copyright material used in this thesis, written permission for use has been obtained, or the copyright material is removed from the final public version of the thesis.’

Signed ……………………………………………......

Date ……………………………………………......

AUTHENTICITY STATEMENT

‘I certify that the Library deposit digital copy is a direct equivalent of the final officially approved version of my thesis.’

Signed ……………………………………………......

Date ……………………………………………......

Thesis/Dissertation Sheet

Surname/Family Name: Hamper
Given Name/s: Robin
Abbreviation for degree as given in the University calendar: Masters of Laws by Research
Faculty: Law
School:
Thesis Title: Software bug bounties and the legal risks to security researchers

Abstract

This thesis examines some of the contractual legal risks to which security researchers are exposed in disclosing software vulnerabilities, under coordinated disclosure programs (“bug bounty programs”), to vendors and other operators.

On their face, the terms of these programs purport to offer security researchers an alternative to publicly disclosing discovered bugs, or selling them to purchasers who do not intend to use them to fix the underlying issues in software; such bugs have significant value and significant potential for harm if used maliciously. Historically, vendors have deployed a range of legal measures to discourage or eliminate such disclosure.

This thesis examines the terms of three popular bug bounty programs (Facebook, Google and the Department of Defence (hosted on HackerOne)) and considers their effect in the Australian jurisdiction. It examines issues including the application of unfair contracts legislation and unconscionability. It further examines three key case studies in which vendors have sought, or threatened to seek, legal remedies against researchers who have discovered and disclosed vulnerabilities to them under their programs, or directly to them in the absence of one.

It concludes that while bug bounty programs somewhat improve on the previously uncertain and potentially onerous legal regime, the terms remain asymmetric and largely non-negotiable, and vendors may be able to depart from them in certain circumstances. In this context, a range of reforms is suggested in the concluding Chapter which may improve certainty for security researchers, impose greater responsibility on software vendors and, ultimately, create more secure software.

Declaration relating to disposition of project thesis/dissertation

I hereby grant to the University of New South Wales or its agents the right to archive and to make available my thesis or dissertation in whole or in part in the University libraries in all forms of media, now or here after known, subject to the provisions of the Copyright Act 1968. I retain all property rights, such as patent rights. I also retain the right to use in future works (such as articles or books) all or part of this thesis or dissertation.

I also authorise University Microfilms to use the 350 word abstract of my thesis in Dissertation Abstracts International (this is applicable to doctoral theses only).

Signature ……………………………………………  Witness Signature ……………………………………………  Date 7/9/2019

The University recognises that there may be exceptional circumstances requiring restrictions on copying or conditions on use. Requests for restriction for a period of up to 2 years must be made in writing. Requests for a longer period of restriction may be considered in exceptional circumstances and require the approval of the Dean of Graduate Research.

FOR OFFICE USE ONLY Date of completion of requirements for Award:

School of Law

Originality Statement

‘I hereby declare that this submission is my own work and to the best of my knowledge it contains no materials previously published or written by another person, or substantial proportions of material which have been accepted for the award of any other degree or diploma at UNSW or any other educational institution, except where due acknowledgement is made in the thesis. Any contribution made to the research by others, with whom I have worked at UNSW or elsewhere, is explicitly acknowledged in the thesis. I also declare that the intellectual content of this thesis is the product of my own work, except to the extent that assistance from others in the project's design and conception or in style, presentation and linguistic expression is acknowledged.’

Signed:

Date:

INCLUSION OF PUBLICATIONS STATEMENT

UNSW is supportive of candidates publishing their research results during their candidature as detailed in the UNSW Thesis Examination Procedure.

Publications can be used in their thesis in lieu of a Chapter if:
• The student contributed greater than 50% of the content in the publication and is the “primary author”, i.e. the student was responsible primarily for the planning, execution and preparation of the work for publication
• The student has approval to include the publication in their thesis in lieu of a Chapter from their supervisor and Postgraduate Coordinator.
• The publication is not subject to any obligations or contractual agreements with a third party that would constrain its inclusion in the thesis

Please indicate whether this thesis contains published material or not.

☒ This thesis contains no publications, either published or submitted for publication

☐ Some of the work described in this thesis has been published and it has been documented in the relevant Chapters with acknowledgement

☐ This thesis has publications (either published or submitted for publication) incorporated into it in lieu of a chapter and the details are presented below

CANDIDATE’S DECLARATION

I declare that:
• I have complied with the Thesis Examination Procedure
• where I have used a publication in lieu of a Chapter, the listed publication(s) below meet(s) the requirements to be included in the thesis.

Name: Rob Hamper  Signature:  Date (dd/mm/yy): 7/9/2019

Postgraduate Coordinator’s Declaration

I declare that:
• the information below is accurate
• where listed publication(s) have been used in lieu of Chapter(s), their use complies with the Thesis Examination Procedure
• the minimum requirements for the format of the thesis have been met.

PGC’s Name:  PGC’s Signature:  Date (dd/mm/yy): NA

Table of Contents

Chapter 1 - Introduction ...... 9
1.1 Introduction ...... 9
1.2 Research Question ...... 11
1.3 Introducing Software Vulnerabilities, Bugs and their Source ...... 11
1.4 Research Methodology ...... 14
1.5 Thesis Structure ...... 16
Chapter 2 – History, Themes and Literature Review ...... 18
2.1 Vulnerability Disclosure: History and Literature Review ...... 18
2.2 Bug Bounties: History and Literature Review ...... 29
2.3 Legal Framework: Overview and Literature Review ...... 38
2.4 “Hacker” Ethics: History and Literature Review ...... 42
2.5 Conclusion ...... 45
Chapter 3 - Why Software Matters ...... 46
3.1 Introduction ...... 46
3.2 Why Software Matters ...... 46
3.3 Mobile Devices ...... 47
3.4 Big Data ...... 48
3.5 Internet of Things ...... 49
3.6 Enabling Technologies - Internet Protocol v4 vs Internet Protocol v6 ...... 49
3.7 Nature of change ...... 51
3.8 Cost ...... 51
Hardware ...... 51
Software ...... 52
3.9 Range of devices ...... 52
Consumer devices ...... 52
Enterprise devices – SCADA and Cyber-physical systems ...... 54
Cyber Physical Systems ...... 55
3.10 Exploitation of Vulnerabilities ...... 56
...... 57
Surveillance of Dissidents ...... 57
– Spam and DDoS ...... 58

3.11 Conclusion ...... 58
Chapter 4: Case Studies ...... 59
4.1 DJI / Finisterre Case Study ...... 59
Background ...... 59
Formalisation of DJI Bug Bounty Program ...... 69
DJI Response to Finisterre account ...... 70
Conclusion ...... 70
Subsequent Announcement of Data ...... 73
4.2 Facebook / Wineberg Case Study ...... 74
Initial Contact ...... 74
Initial Facebook Response ...... 77
Subsequent Facebook response ...... 80
Conclusion ...... 81
4.3 Public Transport Victoria / Rogers – Case Study #3 ...... 83
4.4 St Jude Medical ...... 85
Introduction ...... 85
Medsec Vulnerability Disclosure ...... 86
Outcome ...... 87
4.5 Conclusion ...... 87
Limited Redress ...... 88
Bounty program by press release ...... 88
Summary ...... 89
Chapter 5 – Contract Formation, Elements of Contract ...... 90
5.1 Introduction ...... 90
5.2 Industry Perspectives regarding protection from liability ...... 91
5.3 Programs examined ...... 94
5.4 “Policy” or Contract ...... 95
5.5 Formation / Elements of a contract ...... 95
Introduction ...... 95
Offer, Acceptance and Validity ...... 96
Relevant U.S. Authorities ...... 97
Australian Authorities ...... 99
5.6 Formation under the examined programs ...... 100
Facebook ...... 100

Google ...... 103
Department of Defence – HackerOne ...... 105
5.7 Consideration ...... 111
5.8 Scope and Content ...... 113
Alternate Dispute Resolution ...... 113
Alternate Dispute Resolution Schemes ...... 114
Jurisdiction ...... 115
5.9 IP Ownership ...... 115
5.10 Confidential Information ...... 118
5.11 Conclusion ...... 120
Chapter 6 - Avoidance / Vitiating Factors ...... 121
6.1 Elements ...... 121
6.2 Unfair Contracts ...... 121
Deceptive and Misleading Conduct ...... 124
Failure to Pay Bounty ...... 124
Application to the ACL ...... 126
Jurisdiction ...... 131
6.3 Other Issues – Criminal Liability (Waiver of rights, release from liability) ...... 132
6.4 Conclusion ...... 134
Chapter 7 – Summary, Conclusion and Future Research ...... 136
7.1 Introduction ...... 136
7.2 Summary ...... 136
7.3 Key Findings ...... 136
7.4 Future Research ...... 137
Potential Legislative Reform ...... 137
Non-Legislative Government Guidelines ...... 141
7.5 Conclusion ...... 142
Appendices ...... 143
8.1 Department of Defence – Hack the Pentagon Terms ...... 143
8.2 HackerOne Terms ...... 146
HackerOne General Terms and Conditions ...... 146

HackerOne Finder Terms and Conditions ...... 151
HackerOne Customer Terms ...... 153
8.3 Facebook Terms ...... 157
Facebook General Terms ...... 157
Facebook Whitehat Terms ...... 164
8.4 Google Vulnerability Reward Program (VRP) Rules ...... 169
Services in scope ...... 169
Qualifying vulnerabilities ...... 169
Non-qualifying vulnerabilities ...... 170
Reward amounts for security vulnerabilities ...... 171
Reward amounts for abuse-related methodologies ...... 172
Investigating and reporting bugs ...... 172
Frequently asked questions ...... 173
Legal points ...... 174

Chapter 1 - Introduction

Information technology increasingly lies at the centre of our everyday lives and, as this occurs, the consequences of the exploitation of vulnerabilities in software grow. The effects of such exploitation include identity theft, espionage, denial of service attacks, and a range of other computer-based crimes. In this context, the ability to conduct effective research into improving the security of software, by creating appropriate incentives and protection for those involved in the software vulnerability (colloquially, a bug) ecosystem, increases in importance. Focusing particularly on participation in coordinated software vulnerability disclosure programs (also known as bug bounty programs), this thesis considers contemporary changes to the legal risks to which security researchers are exposed in the exploration, discovery, disclosure, and remediation of software vulnerabilities.

Coordinated vulnerability disclosure programs are programs that offer incentives for security researchers to disclose software vulnerabilities to the vendor or program operator so that they may be repaired (or “patched”) rather than concealing them or selling them to parties which may seek to exploit them offensively (that is, to use them against a third party who is using the unpatched software).

A persistent tension in vulnerability disclosure, evident in the operation of bug bounties, is the view, on one hand, that security researchers have, or should have, a legal, contractual, ethical, moral or otherwise axiomatic obligation or duty to disclose vulnerabilities they discover. From this perspective, the legal system should intervene to protect the interests of software vendors by ensuring vulnerabilities are disclosed only in ways that protect (and maximise) those interests.

The countervailing perspective commodifies vulnerabilities and, expressly or impliedly, considers that vulnerabilities have a market value and that such value should accrue, in whole or in part, to the researcher who discovers the vulnerability. From this perspective, the legal system should intervene to ensure that this value is not eroded through legal interventions that have a chilling effect on the discovery, disclosure or sale of a vulnerability.

Apparent in the examinations undertaken in this thesis is a divergence between the rapid pace of development and change of market mechanisms (in which bug bounties play a role) and the slower-developing legal system that underpins the rights afforded, or purportedly afforded, to security researchers.


The historic maladaptive application of legal mechanisms, in manners significantly divergent from their intended purposes, has resulted in the application of a diverse range of legal theories, including intellectual property, defamation, trade secrets, confidentiality, criminal and contract law, in ways that restrict or discourage security research.1 This thesis focuses substantially on contractual risk, though a range of other significant and pressing legal areas worthy of further research is set out in the concluding Chapter 7.

Similarly, the application of standards and notions adapted from the physical world, such as those related to access and authorisation,2 to assess and prosecute actions that occur solely in the “virtual” world is increasingly strained and ill-suited to striking a balance between the common behaviour of well-intentioned security researchers and reducing behaviour that is harmful to the security of systems and their users. In this environment, coordinated software vulnerability programs and bug bounties, which have the potential to reduce the legal risk to security researchers, have emerged and seen significant growth over recent years. However, the extent to which this promise has been realised, particularly in the Australian jurisdiction, has not been examined in the literature.

It is recognised that the environment of the online world is such that attacks are constant, rather than isolated incidents, and are often coordinated, intricately planned and carefully executed. In this environment, systems must be resilient to, rather than impervious to, attack.3, 4 While the consequences of legal limitations have been partially ameliorated through the significant development of norms that improve security outcomes, through the development of coordinated disclosure programs and bug bounties, adoption is far from universal, resulting in residual uncertainty and an asymmetry of power between researchers and vendors. An examination of these limitations will be undertaken in this thesis primarily through the lens of contract law.

1 See, for instance, Bambauer, D. and Day, O. The Hacker's Aegis. Emory Law Journal, Vol. 60, p. 1051. Mar 1 2010. Available: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1561845
2 Kerr, O. Cybercrime's Scope: Interpreting "Access" and "Authorization" in Computer Misuse Statutes. NYU Law Review. November 2003. Available: http://www.nyulawreview.org/issues/volume-78-number-5/cybercrimes-scope-interpreting-access-and-authorization-computer-misuse
3 Department of Homeland Security. Presidential Policy Directive 21 Implementation: An Interagency Security Committee White Paper. February 2015. Available: https://www.dhs.gov/sites/default/files/publications/ISC-PPD-21-Implementation-White-Paper-2015-508.pdf
4 Department of the Prime Minister and Cabinet. Building national cyber resilience. September 12, 2017. Available: https://www.pmc.gov.au/news-centre/cyber-security/building-national-cyber-resilience


1.2 Research Question

The research question this thesis answers is:

to what extent do bug bounty programs protect security researchers from contractual civil legal risks in the Australian jurisdiction?

The concept of “contractual civil risk” is used to distinguish civil liability arising as between the security researcher and a bounty operator, under contract, as distinct from that arising under the operation of criminal and related legislation. As discussed below, the focus of this thesis is on contractual, rather than criminal liability.

Section 1.3 below introduces some of the key terms used in the question and throughout the thesis. Section 1.4 below sets out the methodology by which the question will be addressed. Finally, Section 1.5 below sets out the Chapter structure into which this question is divided and addressed.

1.3 Introducing Software Vulnerabilities, Bugs and their Source

This section introduces the concept of software vulnerabilities and bugs, and provides an overview of their source and prevalence.

Defining software vulnerabilities

Software vulnerabilities are weaknesses in computer software or online services that can be exploited5 to compromise the confidentiality, integrity or availability of software or of an online service.6

A software vulnerability on its own may not be harmful or dangerous; it needs to be deployed in an “exploit” - the implementation of a vulnerability, typically instantiated in software, to take advantage of it.7, 8 The most valuable of these are known as “zero-day exploits”, often referred to simply as “zero days” or “0-days”. These are exploits that have not been disclosed to the software

5 This definition is adapted from ISO Standard ISO/IEC 29147 v1 at 3.9 which more fully defines a vulnerability as a “weakness of software, hardware, or online service that can be exploited”. Vulnerabilities in hardware are outside the scope of this project and have been, consequently, omitted from the definition. 6 TechNet Definition of a Security Vulnerability Available: http://technet.microsoft.com/en-us/library/cc751383.aspx 7 Ablon, L. Bogart, A. Zero Days, Thousands of Nights. RAND Corporation. 2017 at p.iii Available: https://www.rand.org/pubs/research_reports/RR1751.html 8 Wolf, . Fresco, N. Ethics of the software vulnerabilities and exploits market. The Information Society, 32:4, 269-279 at p.269 Available: http://dx.doi.org/10.1080/01972243.2016.1177764

vendor or knowingly exploited against a target.9 The clock starts ticking from “day zero”, when exploitation of the vulnerability is discovered or disclosed to the vendor – that is, zero days have elapsed since the public discovery of the vulnerability. Its value begins to decrease immediately, as the opportunity for the vulnerability to be fixed begins and the potential for it to be fully exploited declines as a patch is released. The handling of these “security bugs”10 or software vulnerabilities is at the heart of this thesis. This section briefly considers the source of these vulnerabilities and provides an indication of their potential scale.

Software complexity

Software has only increased in complexity since its inception. This growth in complexity is a significant factor in the existence of software vulnerabilities. Modern software systems are vast in scale and complexity. While the Apollo 11 spacecraft was launched and landed on the moon in 1969 with 145,000 lines of source code, a relatively recent version of a modern computer operating system, Apple’s OSX, comprises 86 million lines of code.11 It is estimated that Google controls a code base of over 2 billion lines of code.12

Error rates

Examining the volume of source code and expected error rates provides an illustrative and illuminating, if imprecise, indication of the potential scale of software vulnerabilities. Well-engineered software code is still estimated to have several errors per one thousand lines of code, and hence larger systems are expected to have more bugs than smaller ones13 – “the simpler the software is, the fewer bugs it will have”.14 Similarly, Carnegie Mellon University has shown that a thousand lines of code typically has five to fifteen bugs.15 While most software bugs are minor and do not affect performance, they all may, potentially, compromise security.16 It is a widely accepted view that the complexity of

9 Bilge, L. Dumitras, T. Before we knew it: an empirical study of zero-day attacks in the real world. Proceedings of the 2012 ACM conference on Computer and communications security. 2012 at p.833
10 “Bugs” is a term used since at least the time of Thomas Edison by engineers to describe errors in systems they have developed. Griffin, J. Kaplan, E. Burke, Q. Debug'ems and other Deconstruction Kits for STEM learning. Integrated STEM Education Conference (ISEC) 2012 IEEE 2nd, pp. 1-4, 2012.
11 Johnson, P. Curiosity about lines of code. ITworld. August 8, 2012. Available: http://www.itworld.com/article/2725085/big-data/curiosity-about-lines-of-code.html
12 Metz, C. Google Is 2 Billion Lines of Code—And It’s All in One Place. September 16, 2015. Wired Magazine. Available: https://www.wired.com/2015/09/google-2-billion-lines-codeand-one-place/
13 NICTA and UNSW. Secure Microkernel Project (seL4). Available: https://ts.data61.csiro.au/projects/seL4
14 Schneier, B. Secrets and Lies: Digital Security in a Networked World. Wiley. 2013 at p.210
15 Ibid.
16 Ibid.

modern software means that it is not reasonable to expect software not to have security bugs.17, 18, 19 A cursory comparison between the number of lines in software and the expected error rates highlights an almost unfathomable “depth” of potential vulnerabilities.
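The comparison described above can be sketched numerically. The following is a purely illustrative back-of-the-envelope calculation using the rough figures cited in this section (the Carnegie Mellon defect-density range and the OSX and Google code-base sizes); it is not a measured result, and real defect rates are not simply linear in code size:

```python
# Illustrative estimate of latent bugs from a defect density,
# using only the rough figures cited in the text (not measured values).

def estimated_bugs(lines_of_code: int, bugs_per_kloc: float) -> int:
    """Estimate latent bugs from a defect density per 1,000 lines of code."""
    return round(lines_of_code * bugs_per_kloc / 1000)

# Carnegie Mellon's cited range: 5 to 15 bugs per 1,000 lines of code.
# Apple's OSX (~86 million lines) would then carry on the order of:
print(estimated_bugs(86_000_000, 5))   # 430000
print(estimated_bugs(86_000_000, 15))  # 1290000

# Google's ~2 billion line code base, even at the low end of the range:
print(estimated_bugs(2_000_000_000, 5))  # 10000000
```

Even if only a small fraction of these defects were security-relevant, the scale suggested by this arithmetic illustrates the “depth” of potential vulnerabilities referred to above.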

Formal methods

While it is possible to mathematically prove that software is correct,20 such an approach, known as formal verification, is impractical at large scale due to its extremely resource-intensive nature, and it has not been deployed on a large scale. It is estimated that 20 person-years of effort were applied21 to create the proof for the 8,700 lines of code used in the seL4 microkernel developed at the University of New South Wales.22 Tools used for formal verification typically trade completeness for soundness.23 Even certifying, rather than formally verifying, code to very high, almost mathematically proven security levels is estimated to cost as much as $10,000 per line of code.24 This suggests that the release of error-free software is unlikely to occur in the foreseeable future and that the handling of vulnerabilities upon their discovery is a key element in producing a more secure software ecosystem.
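The resource intensity described above can likewise be made concrete. The sketch below uses only the figures cited in this section (seL4's roughly 8,700 lines and 20 person-years of proof effort; certification at up to $10,000 per line); the linear extrapolation is a deliberate oversimplification for illustration only:

```python
# Illustrative cost arithmetic for formal verification and high-assurance
# certification, using only the figures cited in the text.

SEL4_LINES = 8_700           # lines of code in the verified seL4 microkernel
SEL4_PERSON_YEARS = 20.0     # approximate effort to produce its proof
CERT_COST_PER_LINE = 10_000  # cited upper-bound certification cost (USD/line)

def verification_person_years(lines_of_code: int) -> float:
    """Naive linear extrapolation of seL4's proof effort to a larger system."""
    return lines_of_code * SEL4_PERSON_YEARS / SEL4_LINES

def certification_cost(lines_of_code: int) -> int:
    """Certification cost at the cited per-line rate."""
    return lines_of_code * CERT_COST_PER_LINE

# Extrapolated to an 86-million-line system (OSX-scale):
print(round(verification_person_years(86_000_000)))  # 197701 person-years
print(certification_cost(86_000_000))                # 860000000000 (USD)
```

The implausibility of such figures for commodity software underlines why, as the text concludes, error-free software is unlikely in the foreseeable future.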

Technical measures to improve software quality and reduce vulnerabilities are a significant field of research within the software engineering discipline. Consequently, such measures, along with an analysis of the ethical frameworks in which software vulnerability disclosure occurs, are also outside the scope of this thesis.

This section has briefly introduced some of the persistent difficulties underlying modern software development which make it likely that error-free software remains a distant goal. The examination of the impact and source of such vulnerabilities continues in Chapter 3. In this environment, bug bounties will play an important part in securing the software ecosystem.

17 Ibid. at p.129
18 Bessey, A. Block, K. Chelf, A. et al. A few billion lines of code later: using static analysis to find bugs in the real world. Communications of the ACM. February 2010. Available: https://dl.acm.org/citation.cfm?id=1646374
19 Facebook. Important Message from Facebook's White Hat Program. June 22, 2013. Available: https://www.facebook.com/notes/facebook-security/important-message-from-facebooks-white-hat-program/10151437074840766
20 Subject to certain assumptions outside the scope of discussion for the purposes of this example.
21 Klein, G. Elphinstone, K. Heiser, G. et al. seL4: Formal Verification of an OS Kernel. Proceedings of the ACM SIGOPS 22nd symposium on Operating systems principles at pp.207-220 at p.216. Available: http://dl.acm.org/citation.cfm?id=1629596
22 A microkernel is the smallest implementation of an operating system.
23 Manadhata, P. Wing, J. An Attack Surface Metric. IEEE Transactions on Software Engineering, Volume 37, Issue 3, May-June 2011.
24 Klein, G. Elphinstone, K. Heiser, G. et al. seL4: Formal Verification of an OS Kernel. Proceedings of the ACM SIGOPS 22nd symposium on Operating systems principles. 2009 at pp.207-220 at p.216. Available: http://dl.acm.org/citation.cfm?id=1629596


1.4 Research Methodology

A traditional doctrinal approach is taken to addressing the research question. This thesis will conduct its analysis by collecting and analysing published academic literature in related and adjacent fields, of which there are many. These fields are elucidated and expanded in Chapter 2.

Core documents central to answering the research question are published vulnerability handling policies and bug bounty terms. These terms are made available on the public websites of bounty program operators and vendors. The vulnerability handling policies and bug bounty terms set the “rules of the road” from the perspective of the bounty operators who prepare and promulgate them. Analysis of relevant case law and legislation elucidates ways in which the application of these terms may be modified or reduced. Where there is no relevant Australian case law, that of the US and, where appropriate, other jurisdictions will be utilised for instruction. Legislation to be analysed in this thesis includes the Competition and Consumer Act 2010 (Cth), which provides a legislative lens through which to examine the operation, or potential operation, of bounty terms.

The focus on contractual legal risk in this thesis has been chosen, as distinct from criminal legal risk, so as to contain the required scope of analysis and address a question that is answerable within the bounds of a master’s thesis (as opposed to a doctoral thesis which may address the wider scope of issues that a topic such as legal risk in bug bounties raises). The discussion below highlights the almost limitless range of legal issues that could conceivably be analysed. However, attempting to do so would necessarily mean that analysis of each individual issue would be cursory at best. While criminal risk is an important subject to analyse in the context of bug bounties, it is sufficiently distinct to be examined in future research.

Other relevant material includes the relevant ISO standards, discussed further below, non-ISO industry standards, published interviews with security researchers, company publications, conference presentations, workshop papers, blog posts, mailing lists, terms of use / terms of service and legal notices. Relevant case law will also be examined where contextually relevant.

Case Studies

The re-emergence, and subsequent significant growth, of bug bounties is a relatively new phenomenon, and consequently there is a relatively small number of published accounts of participation in such programs that has had negative consequences for the security researcher through legal action threatened or commenced against them. However, those examples that are available provide useful bases for case studies. To provide balance, and a foundation from which to draw insightful conclusions, in each case there is an account of events from both parties - that is, the security researcher and a representative of the vendor.

Similarly, there are several examples where vulnerabilities have been disclosed to vendors in the absence of coordinated vulnerability disclosure programs and have resulted in threatened or actual legal action. In these selected cases, there are also accounts from the security researcher and the vendor, disclosed in (i) blog posts; (ii) media reporting; or (iii) legal filings; all of which will be examined.

Bounty Terms

Each sponsoring organisation (whether the program is vendor operated or hosted by a third party) operates under a set of published terms. These terms are typically very short-form when compared to typical legal documents in related fields such as end-user license agreements and they are often free of legalese. However, they have important consequences with respect to the obligations of the parties.

An analysis of these terms in the Australian legal context has not previously been undertaken. This gap in the literature and related analysis is at the centre of this thesis and its research question, and is the core of the contribution this thesis makes. Australia has a significant base of security researchers for whom an understanding of how Australian laws affect their participation in bounty programs, in this nascent sector of the economy, is important. The scope of this thesis is such that an examination of the potential effect of every term is not possible. However, a selection of terms has been chosen which highlights both the scope and potential effect of key terms. The published terms of Facebook, Google and the Department of Defence will be used as a baseline for analysis. The relevant terms are attached as Appendices to this thesis.

Social Media

Much of the contemporary discussion regarding emerging issues occurs on social media platforms, particularly Twitter, which serves almost as a default communication medium for many top-class security researchers, and occasionally Facebook; these platforms will also be utilised in the analysis.

Specialist Resources

There are a number of specialist resources, including BugBountyForum, which has conducted a series of weekly interviews with leading security researchers who feature on the leader boards of popular bug bounty platform operators. These interviews25 provide very useful insight into the contemporary technical research methodology used by researchers in conducting reconnaissance of target systems, and the tools, methodologies and behaviours used in the exploration and discovery of vulnerabilities.

An understanding of these techniques, tools and methodologies is relevant to the analysis of legal risk in a number of areas. Firstly, bug bounty terms set out, to varying degrees of precision and equivocality, the types of allowable behaviour under the bounty program. Thus, use of certain techniques, tools and methodologies may, on some views, constitute a breach of the bounty terms in certain cases. A breach may give rise to certain rights of the bounty operator which may significantly expand the liability of security researchers.

Timeline

Due to the rapid evolution of events in the area of software vulnerability disclosure and bug bounty programs, this thesis does not consider events that occurred after December 2018.

This thesis is set out in the following Chapters:

Chapter One introduces the topic of software vulnerabilities, key concepts and the research questions.

Chapter Two continues the introduction to the topic and includes the history of the various software vulnerability disclosure paradigms, and the re-emergence and increasing significance of bug bounties within the security research community. It discusses the emergence and rapid growth of bug bounty platforms, the legal framework in which they operate, and includes a brief examination of "hacker" or security researcher ethics. A review of the relevant literature, drawn from the multiple disciplines applicable to software vulnerability disclosure, is undertaken.

25 See, for example, https://bugbountyforum.com/blog/ama/fransrosen and https://bugbountyforum.com/blog/ama/avlidienbrunn/

Chapter Three further examines the context in which the increasing impact and importance of the exploitation of software vulnerabilities has developed, including societal and technological changes such as persistent connectivity and the internet of things, and the effect of the exploitation of software vulnerabilities on society more broadly.

Chapter Four undertakes an analysis of four case studies highlighting a number of the issues raised in the earlier Chapters – in particular, the divergence between the prima facie operation of a bounty program under its terms and its operation in practice, especially when the behaviour, or outcome, diverges from that desired by software vendors or bug bounty program operators. The case studies examined are Instagram (through the operation of its parent company, Facebook), Public Transport Victoria, St Jude Medical and drone maker DJI (Dà-Jiāng Innovations Science and Technology Co). These case studies highlight instances where bug bounty program terms, or principles of responsible disclosure, have been breached or disregarded. The resultant legal liability of security researchers, bounty operators and vendors is considered.

Chapter Five commences the analysis of the terms of three leading bug bounty programs – Facebook, Google and the United States Department of Defence (hosted on HackerOne's bug bounty platform). The analysis considers these terms and their adequacy in protecting the interests of legitimate security researchers in conducting their research and seeking to improve the security of software. The analysis considers factors related to the formation of a contract, including offer and acceptance, consideration, capacity and certainty. It further considers the resolution of disputes arising under the contract, including alternative dispute resolution. An analysis of substantive issues affecting the rights of the parties, including IP ownership and confidential information, is also undertaken.

Chapter Six continues the analysis commenced in Chapter Five. It examines vitiating factors, focusing on the operation of the unfair contracts regime, and considers remedies for breach, in which the intervention and operation of the Australian statutory regime is particularly relevant.

Chapter Seven summarises the findings of the research and makes recommendations for further research, including a broad outline of potential options for legislative and other reform based on current legislative regimes, including data breach notification, whistleblower protection and other "public interest" exemptions or defences to existing law.

Chapter 2 – History, Themes and Literature Review

This Chapter undertakes an analysis of key themes and selected literature in the areas of vulnerability disclosure, bug bounties, the legal framework in which they operate, and "hacker" ethics. Chapter 3 then provides a high-level outline of the broad technological themes that underlie the position of software at the centre of modern-day life and, by extension, the significance of the existence and exploitation of vulnerabilities in such software.

The history and literature review is undertaken in four parts. The first part describes the historical, and persistent, tensions in vulnerability disclosure and the evolution of vulnerability disclosure paradigms. The second part analyses the history of bug bounty programs and highlights the gaps in the literature, particularly those related to the legal risks associated with participation in bug bounties. The third part sets out the legal framework in which vulnerability disclosure and bug bounties operate. The final part briefly examines "hacker" ethics.

The way in which software vulnerabilities are handled upon their discovery is an area of significant historical debate and controversy.26 Their handling enlivens much of the application of various legal doctrines which have been deployed in attempts to restrict or modify the behaviour of security researchers in their discovery of vulnerabilities.

This section considers the history and evolution of the handling of software vulnerabilities upon their discovery by examining each of the broad software vulnerability disclosure paradigms. These paradigms are full disclosure and coordinated (or "responsible") disclosure.

More recently the re-emergence and evolution of incentivised coordinated vulnerability disclosure models or “bug bounties” and market-based approaches have been at the centre of development and evolution of new paradigms in the area. An examination of the development and transition through each of these paradigms usefully sets the context for an in-depth examination of the current bug bounty model and the approach that vendors and security researchers have adopted in the creation and participation under these programs.

26 See, for instance, Maurushat, A. Disclosure of Security Vulnerabilities: Legal and Ethical Issues Springer 2013

Reveal or Conceal

The heart of the debate regarding disclosure of software vulnerabilities has historically focused on their dual nature and the resultant tension that exists in the decision to either reveal or conceal a vulnerability upon its discovery. The debate is often polarised27, dogmatic, emotive and hyperbolic.

This section examines the evolution of vulnerability disclosure paradigms and the environment out of which incentivised coordinated vulnerability disclosure programs or, colloquially, “bug bounties” have emerged and grown.

Full Disclosure

Under the full disclosure paradigm, a software vulnerability is disclosed to the public immediately upon its discovery. Security researchers advocating full disclosure have long sought to present their findings at public conferences such as Blackhat, on mailing lists such as bugtraq, or traded the vulnerabilities amongst themselves.28 The conferences and presentations at which vulnerabilities are disclosed have long served to build the reputation and credibility of the security researchers who present. The disclosure of significant vulnerabilities demonstrates their skills and builds significant cachet amongst the security research community, as well as the potential to provide future job opportunities.29

Once a vulnerability is publicly disclosed under the full disclosure paradigm30, it can be immediately exploited by malicious actors for a variety of potentially harmful purposes. These include technology-enabled crimes such as identity theft, the proliferation of spam, ransomware, denial of service attacks and electronic surveillance, discussed further below and in Chapter Three.

However, the disclosure of a vulnerability also provides the opportunity for it to be remediated by the software vendor(s) through the development and distribution of a patch or “bug fix” or for its effects to be mitigated by users of the software through reconfiguration – for example, by disabling or modifying certain features of the software. The crux of the issue is that the information that allows the exploitation of vulnerabilities is the same as that required to correct them.31

27 Hoskins, B. The Rhetoric of Commoditized Vulnerabilities: Ethical Discourses in Cybersecurity. April 27, 2015. Available: https://vtechworks.lib.vt.edu/bitstream/handle/10919/52943/Hoskins_BN_T_2015.pdf?sequence=1
28 Miller, C. The Legitimate Vulnerability Market. Inside the Secretive World of 0-day Exploit Sales. Independent Security Evaluators. 2007. At p.2. Available: http://www.econinfosec.org/archive/weis2007/papers/29.pdf
29 Ibid.
30 In contrast to the responsible or "partial" disclosure paradigms discussed further below.
31 Granick, J. Legal Risks of Vulnerability Disclosure. Presentation to Blackhat Conference. 2004.

The full disclosure movement emerged in response to software vulnerability disclosure processes managed by CERT32, under which a vulnerability would not be disclosed publicly until the vendor had patched it.33 This approach was considered unsatisfactory as, in many cases, vendors were not provided with an adequate incentive to release a patch and either delayed in doing so, or failed to do so at all.34 During this time window, the vulnerability may be discovered by malicious actors and exploited, unknown to the vendor or otherwise, with harmful effect.

Historical Context

The debate regarding the disclosure or concealment of vulnerabilities is not new.35 In the context of software vulnerabilities, this debate has continued for decades.36 An interesting analogous debate, relating to the decision to publish or withhold information regarding the security of physical locks, has continued for more than 160 years, as highlighted in literature from the 1860s:37

A commercial, and in some respects a social doubt has been started within the last year or two, whether or not it is right to discuss so openly the security or insecurity of locks. Many well-meaning persons suppose that the discussion respecting the means for baffling the supposed safety of locks offers a premium for dishonesty, by showing others how to be dishonest. This is a fallacy. Rogues are very keen in their profession, and know already much more than we can teach them respecting their several kinds of roguery. Rogues knew a good deal about lock-picking long before locksmiths discussed it among themselves, as they have lately done.

In the historic context of locks, as highlighted, it is considered desirable by some to publish vulnerability information so that "honest" people can benefit from it. In the world of software vulnerabilities these are the end users and vendors of the software. The countervailing concern is that "dishonest" people – malicious actors – are likely to exploit it first if the information is not spread. This is further paralleled in the debate regarding locksmith information of the mid 1800s:

Available: http://www.blackhat.com/presentations/win-usa-04/bh-win-04-granick.pdf
32 CERT is the computer emergency response team for the Software Engineering Institute at Carnegie Mellon University who research the creation and discovery of software vulnerabilities and publish information regarding them.
33 Schneier, B. Crypto-Gram. November 15, 2001. Available: https://www.schneier.com/crypto-gram/archives/2001/1115.html
34 Ibid.
35 See, for instance, Granick, J. The Price of Restricting Vulnerability Publications. International Journal of Communications Law & Policy. Issue 9 - Special Issue on Cybercrime, Spring 2005.
36 Zetter, K. A Bizarre Twist in the Debate Over Vulnerability Disclosures. September 11, 2015. Wired Magazine. Available: https://www.wired.com/2015/09/fireeye-enrw-injunction-bizarre-twist-in-the-debate-over-vulnerability-disclosures
37 Hobbs, A. Locks and Safes: The Construction of Locks. Virtue & Co. London. 1868.

If a lock … is not so inviolable as it has hitherto been deemed to be, surely it is to the interest of honest persons to know this fact, because the dishonest are tolerably certain to be the first to apply the knowledge practically; and the spread of the knowledge is necessary to give fair play to those who might suffer by ignorance. … The unscrupulous have the command of much of this kind of knowledge without our aid; and there is moral and commercial justice in placing on their guard those who might possibly suffer therefrom.38

The aim of optimising social outcomes in choosing between publication and concealment is also evident in both the current39 and past debate:

The discussion, truthfully conducted, must lead to public advantage: the discussion stimulates curiosity, and the curiosity stimulated invention. Nothing but a partial and limited view of the question could lead to the opinion that harm can result: if there be harm, it will be much more than counterbalanced by good. 40

Arguments for and against full disclosure

In this vein, full disclosure advocates consider that public disclosure of software vulnerabilities provides the strongest incentive for vendors to fix their software. They posit that full disclosure is the only way to reliably improve software41 in cases where vendors may otherwise delay and drag their feet or conceal the existence of vulnerabilities to preserve their reputation, share price or simply because they do not have the resources or other sufficient incentive to remediate vulnerabilities.

The latter issues of resourcing and the development of appropriate (or inappropriate) incentives are particularly evident in the emergent "internet of things", discussed in Chapter Three, in which billions of devices, many without traditional user interfaces and costing as little as a few dollars, are connected to

38 Hobbs, A. Locks and Safes: The Construction of Locks. Virtue & Co. London. 1868 at p.3.
39 Arora, A. Telang, R. Xu, H. Optimal Policy for Software Vulnerability Disclosure. Management Science Vol. 54, No. 4, April 2008, pp. 642–656. Available: http://www.heinz.cmu.edu/~rtelang/MS_disclosure_published.pdf
40 Hobbs, A. Locks and Safes: The Construction of Locks. Virtue & Co. London. 1868 at p.2.
41 Schneier, B. Schneier: Full Disclosure of Security Vulnerabilities a 'Damned Good Idea'. CSO Online. January 2007. Available: https://www.schneier.com/essays/archives/2007/01/schneier_full_disclo.html

the internet with little thought given to security or to the ability for them to be patched and, in many cases, little incentive to do so.42

Advocates of full disclosure reject the approach of "security through obscurity", a widely discredited notion that the security of a system can be increased by concealing elements of the system's design, including its vulnerabilities – "hackers reject the notion that ignorance makes you safer".43

On the other side of the argument, vendors argued that full disclosure, while forcing the vendor to respond rapidly to vulnerabilities that have become public, necessarily comes at the expense of the quality and completeness of the patch. In these circumstances the patch must be developed under a compressed time schedule so that it is released in time to eliminate, or significantly mitigate, the consequences of exploitation. It is posited that this results in reduced security coverage as a result of decreased testing of the many combinations of systems in which the software operates. This time-compressed approach also creates angst amongst the software's users, who must deviate from a scheduled and known update process to install patches that are released out of schedule.44

In 2001, full disclosure was considered the "norm"45 despite large vendors such as Microsoft considering it "information anarchy".46 Microsoft called for the adoption of more "responsible" practices – "this isn't a call for people to give up freedom of speech; only that they stop yelling "fire" in a crowded movie house."47

A difficulty for vendors is that, even if they are responsive to publicly disclosed vulnerability information, writing an exploit based on a published vulnerability is usually less complex and faster than writing and releasing a patch.48 This places vendors, as well as the software’s users, at a persistent and unresolvable disadvantage where full disclosure is utilised.

42 Patching is the release of an updated version of the software to fix the vulnerability.
43 Cross, T. Academic Freedom and the Hacker Ethic. Communications of the ACM. June 2006, Vol. 49, No. 6. at p.39.
44 Miller, M. Microsoft: Responsible Vulnerability Disclosure Protects Users. January 9, 2007. CSO Online. Available: http://www.csoonline.com/article/2121631/build-ci-sdlc/microsoft--responsible-vulnerability-disclosure-protects-users.html
45 Ibid.
46 Culp, S. It's Time to End Information Anarchy. Microsoft. October 2001. Available: http://web.archive.org/web/20011108041931/http://www.microsoft.com/technet/treeview/default.asp?url=/technet/columns/security/noarch.asp
47 Ibid.
48 Frei, S. Schatzmann, D. Plattner, B. Trammell, B. Modeling the Security Ecosystem - The Dynamics of (In)Security. Economics of Information Security and Privacy. 2010. Springer, Boston, MA at p.94.

Partially in response to this disadvantage, vendors have historically reacted to public disclosure of vulnerabilities with an array of legal responses discussed further below.

Coordinated Disclosure

Subsequently, a model of "responsible"49 or "coordinated" disclosure evolved, though some consider "responsible disclosure" to be a pejorative and emotionally laden term. In this model, initially advocated by software vendors, vulnerabilities are disclosed first to the vendor, which is provided a fixed period of time (commonly 90 days50) to remediate the vulnerability, through the release of a patch, before the vulnerability is disclosed publicly. This approach has been incorporated into ISO standards regarding both the disclosure51 and handling of vulnerabilities.52

Support amongst security researchers for this model is far from universal. Critics note that vendors are still, in some cases, unresponsive and uncooperative in response to vulnerabilities reported to them. In announcing a decision to publicly release zero-day vulnerabilities, rather than disclose them to vendors, Russian security researcher Evgeny Legerov cited his experience:

After working with the vendors long enough, we’ve come to conclusion that, to put it simply, it is a waste of time. Now, we do not contact with vendors and do not support so-called ‘responsible disclosure’ policy (sic) 53

Highlighting that vendor coordination may still not result in a vulnerability being fixed, Legerov noted that a vulnerability in Realplayer had remained unpatched for more than two years despite the vendor having been notified of it:

there will be published two years old Realplayer vulnerability soon, which we handled in a responsible way [and] contacted with a vendor (sic)54

While responsible disclosure largely addresses software vendors' publicly stated concerns regarding the provision of an opportunity to produce a patch for the software, it does not address the legal risk

49 See, for instance, Goodin, M. Microsoft to banish 'responsible' from disclosure debate. 22 July 2010. The Register. Available: http://www.theregister.co.uk/2010/07/22/microsoft_coordinated_disclosure/
50 See, for instance, Dent, S. Google posts Windows 8.1 vulnerability before Microsoft can patch it. Engadget. 2 January 2015. Available: https://www.engadget.com/2015/01/02/google-posts-unpatched-microsoft-bug/
51 ISO/IEC 29147:2014: Information technology - Security techniques - Vulnerability disclosure
52 ISO/IEC 30111:2013: Information technology - Security techniques - Vulnerability handling processes
53 Krebs, B. Firm to Release Database & Web Server 0days. Krebs On Security. Jan 10, 2010. Available: http://www.krebsonsecurity.com/2010/01/firm-to-release-database-web-server-0days/
54 Ibid.

researchers incur in disclosing vulnerabilities to vendors. That is, a security researcher may still be the subject of legal action, under a variety of legal theories, upon disclosure to the vendor. As discussed further below, the disclosure to the vendor also does not provide recognition to the researchers, either financially or reputationally, of the benefit the vendor and its users have received in not having a vulnerability exposed, and potentially exploited, "in the wild".

No more free bugs

In 2009, at the Canadian security conference CanSecWest, prominent security researchers Dino Dai Zovi, Charlie Miller and Alex Sotirov announced their "no more free bugs"55 position. The "no more free bugs" movement adopts the position that, for commercial software,56 vulnerabilities have inherent financial value and it is wrong to allow vendors to "freeload" on the security researcher community by relying on researchers to find and disclose vulnerabilities to vendors without compensation.

They posit that allowing vendors to externalise the cost of making secure software, by relying on security researchers to report vulnerabilities to them for free, is unfair to the paying customers of the software. They argue free disclosure is not justified on three bases. Firstly, the value of vulnerabilities is demonstrated by vendors paying their own developers to reduce or eliminate them. Secondly, third parties will pay to purchase them once discovered as evidenced by the operation of “black markets” for vulnerabilities, in which high prices are paid for zero-day vulnerabilities. Finally, security researchers incur legal and professional risk by disclosing vulnerabilities they find (as is further discussed below).57

Figure: Dino Dai Zovi and Alex Sotirov at CanSecWest 2009.

55 Dai Zovi, D. No More Free Bugs. And You Will Know me by the Trail of Bits. 22 March 2009. Available: https://web-beta.archive.org/web/20091122091102/https://www.trailofbits.com/2009/03/22/no-more-free-bugs
56 In contrast to open source software.
57 Ibid.

Antisec

In 2009, a fringe and extremist "antisec" movement, supported by groups of hackers including "Lulzsec", emerged, though its existence was fleeting. This movement opposed full disclosure, not on the traditional bases discussed above that it is harmful to vendors and users, but rather for philosophical reasons concerning the commerciality of the security industry – "the security industry uses full-disclosure to profit and develop scare-tactics" to encourage people to buy their products. The movement sought the "unmerciful elimination of supporters of full-disclosure and the security industry". Their perspective has not achieved mainstream support.

To spread their message, Antisec replaced all images hosted on Imageshack (at the time one of the world's largest image hosts) with its manifesto after successfully compromising the website in 2009.58 Ironically, the vulnerability used to exploit Imageshack had been published on a security mailing list adopting the full disclosure paradigm the movement eschews:59

58 Leyden, J. ImageShack hacked in oddball security protest. The Register. 13 July 2009. Available: http://www.theregister.co.uk/2009/07/13/imageshack_hack/
59 Ferguson, R. ImageShack hacked by cyber survivalists. TrendMicro. July 11, 2009. Available: http://countermeasures.trendmicro.eu/imageshack-hacked-by-anti-sec-movement/


Image: Antisec manifesto.

Market / Sales

More recently, the debate regarding disclosure of software vulnerabilities has been centred on vulnerability sales and their exploitation for both offensive and defensive purposes. An offensive use of a vulnerability is the exploitation of the vulnerability to attack other systems. In contrast, defensive use seeks to patch or mitigate the effects of the vulnerability.

In the underground market, purchasers of vulnerabilities who intend to use them for offensive purposes, typically governments, pay the highest prices.60 The gap between the prices paid by purchasers of exploits for offensive purposes and the amounts offered through incentivised coordinated vulnerability disclosure programs and bug bounties, discussed below, is vast. It is encapsulated by the catchphrase of Zerodium, the successor to controversial exploit purchaser Vupen (discussed in the next section): "We pay BIG bounties, not bug bounties". Zerodium's highest offered payout for a zero-day exploit is, as at March 2017, USD$1,500,000, far higher than those paid by even the most generous bounty programs discussed in the next section.

Defensive Use

While defensive use would typically occur through the paradigm of responsible disclosure and vendor disclosure, there are recent market approaches which seek to sell and disseminate vulnerability information for defensive purposes without vendor notification. For instance, certain commercial subscription services allow users to prevent exploitation of their systems by updating their intrusion detection systems with a signature for a known, but unpatched, vulnerability.61 This allows the vendor of the subscription service to receive an ongoing revenue stream for many users of the product rather than a single return, or no return, from the vendor, were they to choose to disclose the vulnerability to them.

The market-based approach sidesteps both the full disclosure and coordinated disclosure approaches. Underground sales of software exploits are not a new phenomenon62, though their prominence due to the impact of their exploitation and, consequently, increased attention in the media (for instance,

60 O'Neill, P. Zero day exploits are rarer and more expensive than ever, researchers say. CyberScoop. April 26, 2017. Available: https://www.cyberscoop.com/zero-day-vulns-are-rarer-and-more-expensive-than-ever/
61 Cox, J. Exploit Company Exodus Sold Firefox Zero-Day Earlier This Year. Dec 3, 2016. Vice. Available: https://motherboard.vice.com/en_us/article/exploit-company-exodus-sold-firefox-zero-day-earlier-this-year
62 Miller, M. The Legitimate Vulnerability Market. Inside the Secretive World of 0-day Exploit Sales. In Sixth Workshop on the Economics of Information Security. May 6, 2007. Available: http://www.econinfosec.org/archive/weis2007/papers/29.pdf

the Stuxnet worm discussed below and large scale data breaches) has brought the topic more attention in the academic literature and in the public consciousness more broadly. The discovery of the Stuxnet worm in the wild, discussed further below, also focused public attention on the use and exploitation of vulnerabilities in novel and previously unknown ways.

Offensive Use

While the consequences of the exploitation of a vulnerability may be severe to both the user and the software vendor, researchers participating in the underground market do not consider it their responsibility to disclose vulnerabilities to vendors. Encapsulating this rationale, Chaouki Bekrar, a controversial security researcher, and then CEO of exploit-sales company Vupen stated:

“We don’t work as hard as we do to help multibillion-dollar software companies make their code secure. If we wanted to volunteer, we’d help the homeless.”63

Bekrar famously participated in the Pwn2Own competition at security conference CanSecWest, discovering a significant vulnerability in Google's Chrome browser which would have been eligible for a $60,000 prize upon disclosure. Bekrar declined to disclose it to the contest organisers and turned down the prize, stating that:

“We wouldn’t share this with Google for even $1 million,” says Bekrar. “We don’t want to give them any knowledge that can help them in fixing this exploit or other similar exploits. We want to keep this for our customers.”

The “customers” of which Bekrar spoke are, effectively, anyone that is willing to pay sufficiently high a price for the discovered vulnerabilities as the ultimate end user of the unpatched vulnerability is unknowable at the point of sale.

The terms of participation for subsequent Pwn2Own competitions were modified; participation now occurs under a prescriptive legal agreement requiring, among other things, the disclosure of discovered vulnerabilities.

Though not a member of the No More Free Bugs movement, Luigi Auriemma, founder of REVULN, a company that sells vulnerabilities, similarly rejects the notion of disclosing vulnerabilities to vendors for free:

63 Greenberg, A. The Zero-Day Salesmen. Forbes Magazine. 21 March 2012. Available: https://www.forbes.com/forbes/2012/0409/technology-hackers-government-security-zero-day-salesmen.html

"Providing professional work for free to a vendor is unethical. Providing professional work almost for free to security companies that make their business with your research is even more unethical."64

Bug bounties are described as "an incentivized, results-focused program that encourages security researchers to report security issues to the sponsoring organization."65 At their core, security researchers are paid for reporting vulnerabilities to bounty operators. While the "sponsoring organisation" has historically been the software vendor, this need not be the case. A number of companies, including BugCrowd, HackerOne and Synack, operate platforms on which users of the software, as well as vendors themselves, can offer bounties for vulnerabilities reported to them. Bounty programs may be public, where anyone may report vulnerabilities, or private, where only invited security researchers may participate.

These programs seek to address the negative effects of full disclosure on vendors and software users while ostensibly providing both protection from legal liability and remuneration for security researchers who disclose vulnerabilities under them. These platforms are particularly relevant to this thesis as they are an emergent and largely unexamined phenomenon, and both the case studies and the examination of bounty terms include examples of programs run under the auspices of a bug bounty platform. There has been no academic study in the published literature of the legal aspects of these programs in the Australian context, particularly of the legal terms under which the programs operate and the ways in which they are enforced by vendors and bounty platforms.

Bug Bounty History

This section briefly examines the history of bug bounties and their recent re-emergence. While the first bug bounty was paid by Netscape in 1995,66 the rewards were modest – a t-shirt and corporate apparel. The bug bounty approach was largely in stasis until the emergence of Google's Vulnerability Reward

64 Perlroth, N. Sanger, D. Nations Buying as Hackers Sell Flaws in Computer Code. New York Times. July 13, 2013. Available: http://www.nytimes.com/2013/07/14/world/europe/nations-buying-as-hackers-sell-computer-flaws.html
65 Bugcrowd Inc. State of Bug Bounty 2016 at p.4. Available: https://pages.bugcrowd.com/hubfs/PDFs/state-of-bug-bounty-2016.pdf
66 Netscape. Netscape announces "Netscape bugs bounty" with release of Netscape Navigator 2.0 Beta. Available: https://web.archive.org/web/19970501041756/www101.netscape.com/newsref/pr/newsrelease48.html

Program in November 2010. Since then, many entities have launched their own bug bounty programs, including large software vendors and technology companies such as Microsoft,67 Facebook68 and Yahoo.69

While there is strong growth in these programs, uptake is still a long way from universal. In November 2015, HackerOne found that 94% of the Forbes Global 2000 had no established channel for receiving external vulnerability reports and that only 14% of the top 100 publicly traded companies on the same list had disclosure programs.70

More recently, non-traditional technology companies such as Tesla, and the US Government with its "Hack the Pentagon" program run through the bug bounty platform HackerOne (discussed below), have introduced their own incentivised coordinated disclosure programs. The amounts offered by these programs have increased considerably, from the low hundreds to tens of thousands of dollars per reported vulnerability, depending on its nature and severity.

In some cases rewards include novel offerings such as non-cash payments by certain vendors – for example, up to one million frequent flyer miles for vulnerabilities discovered in United Airlines' website or apps.71 Symbolically, Google's bounty program for Chrome offers a $31,337 reward for, among other things, remote code execution vulnerabilities. This figure reflects the desire of hackers to be considered "elite", a term suggesting technical expertise or achievement, rendered in hacker parlance numerically as 31337 for "ELEET" (also written as "leet" or "L337").72 The returns on bug bounty platforms mean that top-tier security researchers can make very lucrative livings solely from participating in them.

A study of the behaviour of security researchers in discovering and disclosing vulnerabilities was undertaken by the United States Government's National Telecommunications and Information Administration (NTIA), whose survey into vulnerability disclosure attitudes and actions targeted interested security researchers and vendors.73 The research found that 50% of security researcher respondents acted independently in their discovery and disclosure of vulnerabilities, while 42% undertook their work for a for-profit employer. A further 4% disclosed publicly and 4% did not disclose

67 https://technet.microsoft.com/en-us/security/dn425049.aspx 68 https://www.facebook.com/whitehat 69 https://hackerone.com/yahoo 70 Rice, A. 411 for Hackers: Disclosure Assistance. HackerOne. November 5, 2015. Available: https://www.hackerone.com/blog/vulnerability-disclosure-assistance 71 https://www.united.com/web/en-US/content/Contact/bugbounty.aspx 72 Available: https://www.google.com.au/about/appsecurity/reward-program/ 73 NTIA. Vulnerability Disclosure Attitudes and Actions. National Telecommunications and Information Administration (NTIA) Awareness and Adoption Group. December 15, 2016. Available: https://www.ntia.doc.gov/files/ntia/publications/2016_ntia_a_a_vulnerability_disclosure_insights_report.pdf

discovered vulnerabilities at all. It is likely, however, that this survey demonstrates significant self-selection bias. The NTIA found that payment of a bounty has not become an expected norm, with 67% of respondents disclosing to the vendor without expectation of a reward, while only 11% sought bounties. As the norms of full disclosure and responsible disclosure discussed above demonstrate, there can be significant and swift shifts in expected norms and, consequently, it will be interesting to see whether this persists into the future.

Bug Bounty Platforms

An emergent business model is that of the 'bug bounty' platform, such as those operated by HackerOne, Bugcrowd, Synack and Cobalt, which operate bug bounties on behalf of third parties, providing the technical and support infrastructure to allow for the recruitment and vetting of security researchers and the discovery and reporting of vulnerabilities within a discrete framework. In addition to providing this technical and support infrastructure, they act as strong advocates for the bug bounty disclosure paradigm amongst legislators, vendors and the security research community.

The novelty of this model (both Bugcrowd and HackerOne were founded in 2012) means that substantial study of these platforms is scant and significant opportunity for novel research exists; it is in this gap that much of the research in this thesis is positioned. The current literature in relation to bounty platforms is limited and has focused on niche areas including the increase in software quality that occurs through their utilisation,74 the efficient allocation of security researcher effort75 and the emergence of bug bounty programs within the framework of institutional economics theory.76 77

Participation in bug bounty programs occurs under a variety of legal terms78 which are largely dictated by the bounty operators. The efficacy, completeness and appropriateness of these terms in

74 Zhao, M. Laszka, A. Maillart, T. Grossklags, J. Crowdsourced Security Vulnerability Discovery: Modeling and Organizing Bug-Bounty Programs. 2016. Pennsylvania State University. Available: http://aronlaszka.com/papers/zhao2016crowdsourced.pdf 75 Zhao, M. Grossklags, J. Liu, P. An Empirical Study of Web Vulnerability Discovery Ecosystems. CCS '15 Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security pp.1105-1117 76 Kuehn, A. New Paradigms in Securing Software Vulnerabilities – An Institutional Analysis of Emerging Bug Bounty Programs and their Implications for Cybersecurity. Working Paper. 9th Annual GigaNet Symposium, Istanbul, Turkey, September 1, 2014 Available: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2809862 77 Kuehn, A. Mueller, M. Analyzing Bug Bounty Programs: An Institutional Perspective on the Economics of Software Vulnerabilities. Working Paper. 2014 TPRC / 42nd Research Conference on Communication, Information and Internet Policy, George Mason University School of Law, Arlington, Virginia, September 12-14, 2014 Available: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2418812 78 See, for instance, (i) https://www.facebook.com/whitehat; (ii) https://www.google.com/about/appsecurity/reward-program/; and (iii)

reducing or eliminating legal risk to security researchers have not been analysed in the literature, nor been the subject of judicial guidance; they are discussed in Chapters 5 and 6. The analysis of these terms, and the application of case law and relevant legislation to them, is central to addressing the research question.

Literature Review - Bug Bounties

This section examines the current state of the literature in relation to bug bounties. The literature related to incentivised coordinated vulnerability disclosure programs and software vulnerabilities spans many disciplines including computer science79, economics8081, intellectual property law82, tort law83, sociology84, anthropology85, regulatory theory86, ethics87, export control law88 and the law of war.89 The focus of this thesis is on the legal issues related to participation in coordinated software vulnerability disclosure programs, also known as bug bounties, particularly under contract law and ancillary legislation. It includes an analysis of core intellectual property issues. The thesis does not consider export control law or criminal offences, as these are outside its core focus on civil contractual risks which arise when a contract is entered into (and would not otherwise exist).

79 Shin, Y. Meneely, A. Williams, L. Osborne, J. Evaluating Complexity, Code Churn, and Developer Activity Metrics as Indicators of Software Vulnerabilities. IEEE Transactions on Software Engineering, vol. 37, no. 6, pp. 772-787, Nov.-Dec. 2011. Available: http://ieeexplore.ieee.org/document/5560680/# 80 Anderson, R. Moore, T. The Economics of Information Security. Science. 27 October 2006. Vol 314, Issue 5799, pp. 610-613 Available: http://science.sciencemag.org/content/314/5799/610/tab-pdf 81 Kesan, J. Hayes, M. Bugs in the Market: Creating a Legitimate, Transparent, and Vendor-Focused Market for Software Vulnerabilities. Arizona Law Review 58.3 pp.753-830. 2016. 82 Bambauer, D. and Day, O. The Hacker's Aegis Emory Law Journal, Vol. 60, p. 1051, 2011; Brooklyn Law School, Legal Studies Paper No. 184. Available: https://ssrn.com/abstract=1561845 83 de Villiers, M. Free Radicals in Cyberspace: Complex Liability Issues in Information Warfare 4 Nw. J. Tech. & Intell. Prop. 13 2005 Available: http://scholarlycommons.law.northwestern.edu/njtip/vol4/iss1/2 84 Coleman, G. Coding Freedom: the ethics and aesthetics of hacking. 2013. Princeton University Press. Available: http://gabriellacoleman.org/Coleman-Coding-Freedom.pdf 85 Escobar, A. Hess, D. Licha, I. Sibley, W. Strathern, M. Sutz, J. Welcome to Cyberia: Notes on the Anthropology of Cyberculture Current Anthropology 35, no. 3 Jun., 1994 pp 211-231. 86 Lessig, L. Code version 2.0. 2006. Perseus Books. Available: http://codev2.cc/download+remix/Lessig-Codev2.pdf 87 Oriola, T. Bugs for sale: legal and ethical proprieties of the market in software vulnerabilities 28 J. Marshall J. Computer & Info. L. 451 (2011) Available: http://repository.jmls.edu/cgi/viewcontent.cgi?article=1694 88 See, for instance, The Wassenaar Arrangement on Export Controls for Conventional Arms and Dual-Use Goods and Technologies. Available: http://www.wassenaar.org/ 89 Goldsmith, J. How Cyber Changes the Laws of War. 
European Journal of International Law 24 (2013), 129–138

As discussed above, bug bounty literature only peripherally addresses the legal risk to researchers participating under these programs. For instance, Laszka et al.90 consider the misaligned incentives in bug bounties: security researchers are interested only in increasing the number of valid bug reports, without regard to the number of invalid reports. Conversely, bounty operators have an interest in decreasing the number of false reports due to the administrative overhead in assessing them – i.e. improving the signal-to-noise ratio. The way in which the contractual terms of bounty programs address and seek to resolve these misaligned incentives is not addressed by Laszka and will be examined in Chapters 5 and 6.

Huang et al.91 have examined the decisions security researchers make in choosing to concentrate their focus on a small number of programs versus pursuing a more diverse range as they seek to maximise their reputation and income. The research found that focusing on a small number of programs (fewer than five) to gain reputation before diversifying to a broader set of programs is a common strategy, as the pool of discoverable (and, consequently, remunerable) vulnerabilities may lessen when focusing on a single program. However, the research does not consider the effect that legal terms have on the decisions of security researchers in choosing whether to participate in various bounty programs.

While Laszka, Huang et al. and Finifter consider bounty incentives, researcher behaviour and bounty rewards, they do not consider the extent to which the terms under which bounties are operated oblige the organisation to assess and credit researchers' submissions, or address the legal risk to researchers in their participation, as is the subject of this thesis.

Bug Bounty Economics

Examining Google's bounty program for its Chrome browser and Mozilla's Firefox, Finifter92 has suggested that bug bounties are more cost-effective and economically efficient than hiring internal security researchers to discover vulnerabilities.93 Finifter examines, with limited data, the possibility for researchers to earn a living from participating in bounties but does not consider the legal risks to

90 Laszka, A. Zhao, M. Grossklags, J. Banishing Misaligned Incentives for validating reports in Bug Bounty Platforms. – ESORICS 2016. Lecture Notes in Computer Science, vol 9879. Springer. Available: https://link.springer.com/chapter/10.1007/978-3-319-45741-3_9 91 Huang, K. Siegel, M. Madnick, S. Li, X. Feng, Z. Poster: Diversity or Concentration? Hackers’ Strategy for Working Across Multiple Bug Bounty Programs. 37th IEEE Symposium on Security and Privacy (S&P). 2016. Available: http://www.ieee-security.org/TC/SP2016/poster-abstracts/13-poster_abstract.pdf 92 Finifter, M. Akhawe, D. Wagner, D. An Empirical Study of Vulnerability Rewards Programs. Proceedings of the 22nd USENIX Security Symposium. August 14-16 2013. Washington. 22nd USENIX Security Symposium. Available: https://www.usenix.org/system/files/conference/usenixsecurity13/sec13-paper_finifter.pdf 93 Ibid.

which a researcher may be exposed through such participation. Anecdotal accounts on media such as Twitter suggest that many security researchers are making a living through their participation in bounty programs.

Bohme's94 research, among other things, considers the market failure historically evident in software security (that is, that the market mechanism does not intervene to remediate software vulnerabilities) through the lens of Akerlof's lemon market problem, and introduces the concept of bug bounties as a possible way to resolve this market problem. Akerlof's theory posits that information asymmetry is at the heart of the "lemon" second-hand car problem: the car dealer knows more about the car than the purchaser can, and the purchaser cannot assess its faults. This limits the price the purchaser is willing to pay for a second-hand car to that of a lemon. Similarly, Bohme argues, it is not possible for the market to assess the security of a given piece of software; thus the price purchasers are willing to pay is limited to that of insecure software, and there is no economic incentive for vendors to produce secure software.95

Bohme96 makes four proposals to overcome this market failure: (i) bug challenges (vulnerability reward programs or bug bounties), where vendors pay those who discover bugs in their software;97 (ii) vulnerability brokers; (iii) derivative markets for exploits; and (iv) cyber-insurance. The first suggestion, bug challenges, largely mirrors the approach adopted by bug bounty program and platform operators.

Camp proposes that an optimal market for software vulnerabilities is one with a single purchaser, likely the US Government, which would purchase and freely distribute all vulnerability information.98 This approach largely models that proposed much later by Dan Geer, Chief Information Security Officer at In-Q-Tel, a firm investing in technology to support the missions of the Central Intelligence Agency (CIA) and the broader US intelligence community.99 In his keynote address at Black Hat 2014, Geer suggested that:

94 Bohme, R. A Comparison of Market Approaches to Software Vulnerability Disclosure. PROC. OF ETRICS. LNCS 3995. 95 Ibid. at p.299. 96 Ibid. 97 For example, the “Pwn2Own” competition run annually at the CanSecWest security conference, where Google provides a prize of up to $250,000 for those who successfully exploit its (and other vendors’) software 98 Camp, J. The State of Economics of Information Security. Vol2:2, 2006. pp. 189-205 at p.194 Available: http://moritzlaw.osu.edu/students/groups/is/files/2012/02/2-camp.pdf 99 http://www.blackhat.com/us-14/speakers/Dan-Geer.html

"we buy them all and we make them all public. Simply announce "Show us a competing bid, and we'll give you 10x."100

This approach has not received widespread support, and Government participation in the market for software vulnerabilities in this regard has been limited to the operation of bug bounty programs seeking disclosure of vulnerabilities in systems governments operate themselves, rather than third-party systems. The terms of one such program are examined in Chapter 3. The other limited way in which the Government participates is through the purchase of vulnerabilities for intelligence gathering and offensive cybersecurity purposes, discussed elsewhere in this Chapter.

RAND101 have undertaken an extensive study of a pool of software vulnerabilities to provide an empirical basis for some of the common assertions regarding the discovery, disclosure and remediation of software vulnerabilities. RAND examined the lifecycle of a number of vulnerabilities and found that the average lifespan of a vulnerability from initial discovery to exploitation is 6.9 years. For the set of vulnerabilities that were the subject of their research, they found that "after a year, approximately 5.7% have been discovered by an outside entity".102 This suggests that the stockpiling of vulnerabilities by the Government, in order to preserve them for offensive use against third parties for its own benefit (typically espionage), as discussed further below, is not materially harmful, at least with respect to the argument that hoarding vulnerabilities rather than disclosing them will result in harm through others (re)discovering and exploiting them before the initial discoverer does. However, this research did not consider the possibility that the Government's "hoarding" may itself fail if the stockpile is the subject of a breach, as has occurred in prominent examples in the media.103 In such cases, the negative effects of misuse of the "hoarded" vulnerabilities and related exploits are far greater than the risk of independent rediscovery. In the case of the hack of the NSA's exploits, the losses due to their compromise are estimated to be in the order of many hundreds of millions of dollars.104

RAND's research considered the types of security researchers that operate in the field and found that there are three tiers: a top tier, numbering a few hundred to a few thousand worldwide, who are highly skilled, produce high-quality, reliable exploits and typically deliver them to a nation

100 Geer, D. Blackhat 2014 Keynote Address. Available: http://geer.tinho.net/geer.blackhat.6viii14.txt 101 Ablon, L. Bogart, A. Zero Days, Thousands of Nights. RAND Corporation. 2017. Available: https://www.rand.org/pubs/research_reports/RR1751.html 102 Ibid. 103 Shane, S. Perlroth, N. Sanger, E. Security Breach and Spilled Secrets Have Shaken the N.S.A. to Its Core. Nov 12, 2017. Available: https://www.nytimes.com/2017/11/12/us/nsa-shadow-brokers.html 104 Ibid.

state actor.105 A second, intermediate tier is less skilled and more reliant on the adaptation of existing tools than on developing new tools. The third and final tier comprises those who only find vulnerabilities and theorise about their exploitation (rather than actually instantiating them into exploits); these number in the tens of thousands and are viewed as the primary participants in bounty platforms. It is the activities of this second and, particularly, third tier that are central to the research in this thesis, as it is these researchers who comprise the bulk of bug bounty activity and, thus, those who are exposed to legal risk through participation in it.

Hacker’s Aegis

Prior to the re-emergence of bug bounty programs, there were suggestions in the literature intended to reduce or eliminate legal risk to security researchers through legislative reform. These suggestions may still have utility in the context of bug bounties, though they were not made with this context in mind. The leading model in this regard is that proposed by Bambauer and Day, who propose the creation of an immunity from civil intellectual property claims for researchers who adhere to five rules, summarised as: "tell the vendor first, don't sell the bug, test on your own system, don't weaponize, and create a trail".106 Bambauer and Day's proposed approach does not impose any reciprocal obligations on vendors or bounty platform operators in relation to their receipt and handling of vulnerability information (for instance, such as those prescribed by the ISO standards on vulnerability handling), and the third rule of "testing on your own system" is not practical in the context of online systems and cloud computing, where security testing must in many cases occur on production systems. This research also does not address the issue of economic return to the security researcher in disclosing the vulnerability. Exploration of this gap as it applies to bug bounty programs and coordinated vulnerability disclosure programs, as well as consideration of an appropriate framework for protecting security researchers conducting research on online services, are important areas of focus for this thesis.

105 Ablon, L. Bogart, A. Zero Days, Thousands of Nights. RAND Corporation. 2017 at pp.21-22. Available: https://www.rand.org/pubs/research_reports/RR1751.html 106 Bambauer, D. and Day, O. The Hacker's Aegis. Emory Law Journal, Vol. 60, p. 1051, 2011 at pp.1069, 1088

Vulnerability Disclosure and Security Research Rhetoric

The way in which security researchers are described in the media and, more broadly, in discussions regarding cybersecurity influences the way in which their actions in undertaking security research are perceived. This is particularly relevant in resolving any ambiguity in the terms of bug bounty programs, the behaviour allowed under them and the actions of security researchers. This perception and rhetoric frame the interpretation of security researchers' actions. These issues are particularly evident in Chapter 4's case studies.

Hoskins notes that cybersecurity is relatively unexamined by sociologists, and she undertakes an analysis of the evolving discourse on the commodification of vulnerabilities.107 She examines the polarising and loaded nature of much of the rhetoric used by vendors, journalists and security researchers as each seeks to advance their own position, by analysing media reporting of vulnerability sales, commentary from technology companies affected by vulnerability sales and, finally, commentary from vulnerability vendors Vupen and Netragard. The use of metaphors such as "war and weaponry" in relation to the use of exploits, "shady and secretive" in relation to the behaviour of exploit brokers, and the use of "responsible disclosure" as a term (the implication being that other disclosure is, therefore, irresponsible) are examined in the context of their effect on the debate and evolving norms.

Other examples of the value-laden language used in the discussion are highlighted by Algarni et al., who draw a distinction between the profession of discovering vulnerabilities, described as a "respectable profession", and exploiting vulnerabilities, which is "generally the opposite".108 This is a more nuanced view than that of Oriola,109 who fails to make the distinction between the application (or exploitation) of vulnerabilities and research into them in its own right, and who conflates security research undertaken in bug bounties with "malicious vulnerabilities research" as a potent threat. The validity of these distinctions and categorisations will be considered in the context of striking a balance in the operation of bug bounty terms. These terms should, on the one hand, provide a safe legal environment for security researchers to conduct their research and disclose discovered vulnerabilities. On the other hand, they must simultaneously protect the legitimate interests of bounty program operators from "malicious vulnerability researchers" who

107 Hoskins, B. The Rhetoric of Commoditized Vulnerabilities: Ethical Discourses in Cybersecurity. April 27, 2015. Available: https://vtechworks.lib.vt.edu/bitstream/handle/10919/52943/Hoskins_BN_T_2015.pdf?sequence=1 108 Algarni, A. Malaiya, Y. Software Vulnerability Markets: Discoverers and Buyers. International Journal of Computer, Information Science and Engineering Vol:8 No:3, 2014 at p.72. 109 Oriola, T. Bugs for Sale: Legal and Ethical Proprieties of the Market In Software Vulnerabilities 28 J. Marshall J. Computer & Info. L. 451 (2011) Available: http://repository.jmls.edu/cgi/viewcontent.cgi?article=1694 at p.519.

may seek to claim that they were operating under the protection of the bounty program and are thus not liable for their actions despite their negative impact.

Software vulnerability disclosure enlivens various legal regimes that seek to regulate researcher and vendor behaviour. In order to provide context and background on the wide variety of legal issues that arise in the operation of bug bounty programs, this section briefly analyses the overarching legal doctrines and frameworks which apply to vulnerability disclosure broadly. While these issues arise in distinct areas of law, as described below, their precise boundaries are often ambiguous. Chapters 5 and 6 examine more closely the changes to the liability of security researchers effected by their participation under bug bounty programs, particularly under various aspects of contract law.

Equity

In addition to contract law, there may be scope for equity, in the Australian jurisdiction, to intervene in the context of bug bounty programs to restrain unconscionable conduct on the part of either the bounty operators or the security researcher. Unconscionable conduct arises where a "party makes unconscientious use of his superior position or bargaining power to the detriment of a party who suffers from some special disability".110

However, the Australian Parliament has evinced its intention that the protection afforded under the Australian Consumer Law not be limited by the "unwritten law relating to unconscionable conduct".111 The statutory unfair contracts regime thus seeks to expand the range of conduct that would otherwise fall within the equitable jurisdiction relating to unconscionable conduct. Consequently, given that the statutory intervention provides greater protection for security researchers participating in bounty programs, the equitable doctrine of unconscionability will not be further examined in this thesis; the focus remains on the protection afforded under the unfair contracts regime, as discussed in Chapter 6.

Vendor Liability

While the liability of security researchers is the key focus of this thesis, the concomitant liability of software vendors in producing software with vulnerabilities provides relevant context. Software vendors typically avoid contractual liability for the damages caused by vulnerabilities in their software

110 Commercial Bank of Australia Ltd v Amadio 151 CLR 447 at 461 per Mason J 111 S24(1)(a) Competition and Consumer Act 2010 (Cth) - SCHEDULE 2 The Australian Consumer Law

through broad contractual exclusions and limitations of liability. Tort law has not materially developed to impose liability on vendors for damage caused by insecure software,112 other than in limited circumstances.

The imposition of vendor liability for failing to patch vulnerabilities once notified of them,113 as well as liability for their existence in the first instance (i.e. through the creation of insecure software),114 have both been proposed. However, there are no current cases under Australian law in which liability has been imposed on vendors for their handling (or failure to handle) of vulnerabilities disclosed to them.

Australian Criminal Law

The Cybercrime Act 2001 (Cth) inserted provisions related to computer crime into the Commonwealth Criminal Code. These provisions are implemented in substantially harmonised legislation across States and Territories.

The Criminal Code provides sanctions for unauthorised access to, modification of, or impairment of a computer.115 Bug bounty programs ostensibly provide relief from the application of these and equivalent provisions in foreign jurisdictions – for instance, Facebook's terms state that "we will not initiate a lawsuit or law enforcement investigation against you in response to your report". The appropriateness of the notions of access, impairment and authorisation in the contemporary, persistently connected cyber landscape, as well as the effect of these terms on security researchers participating under bounty programs, is a core element in assessing the civil and criminal liability to which security researchers are exposed.

Legal Liability for Disclosure

Vendors have sought to impose liability on those who publicly disclose vulnerabilities under a wide range of theories of law, including infringement of patents,116 breaches of technological protection

112 Scott, D. Tort liability for vendors of insecure software: Has the time finally come? 2008. Maryland Law Review. 67: pp425-484 113 De Villiers, M. Reasonable Foreseeability in Information Security Law: A Forensic Analysis Hastings Communications and Entertainment Law Journal April 23, 2008 Available: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1158165 114 De Villiers, M. Free Radicals in Cyberspace: Complex Liability Issues in Information Warfare Fall 2005 4 Nw. J. Tech. & Intell. Prop. 13 Available: http://scholarlycommons.law.northwestern.edu/njtip/vol4/iss1/2 115 See, for instance, Division 476 of the Criminal Code Act 1995 (Cth) 116 Roberts, P. Lawsuits, patent claims silence Black Hat talk. Feb 27, 2007. InfoWorld Available: http://www.infoworld.com/article/2659928/security/lawsuits--patent-claims-silence-black-hat-talk.html

measures,117 copyright infringement,118 disclosure of trade secrets,119 defamation120 and breaches of criminal statutes including the U.S. Computer Fraud and Abuse Act (CFAA)121 and the Criminal Code Act 1995 (Cth).

The extent to which the terms of participation in various bug bounty programs address liability under these regimes in the Australian context is important, though the scope of this thesis is largely restricted to contractual issues. The issues raised by these terms include the extent to which researchers are protected or legally indemnified from civil or criminal action while participating in bounty programs or coordinated disclosure programs, and the extent to which the terms serve, or could better serve, to provide a "safe harbor" for researchers to undertake security research.

The balance struck under these programs between the interests of researchers and the interests of vendors will be examined through the potential application of unfair contracts legislation, unconscionability, misleading and deceptive conduct and other vitiating factors which may modify the extent to which the contractual terms of bounty programs apply.

While the above sets out some of the range of legal theories applied to the actions of security researchers, as disclosure paradigms and approaches have evolved, so too have the legal responses to them. The range of legal tools deployed by vendors attempting to suppress public disclosure of vulnerabilities continues to expand, driven in part, arguably, by the asymmetry in resources available to vendors as against independent security researchers. Chapter 4 highlights a number of historic and contemporary examples of vendor responses to vulnerability disclosure, including those involving public disclosure, coordinated disclosure and those arising from participation in bounty programs. These examples illuminate the potential residual liability for security researchers participating under bounty programs.

117 United States v. Elcom Ltd (203 F.Supp.2d 1111, 62 USPQ2d 1736).
118 Jia, C. Green Dam breached patch-up in progress. China Daily. Jun 15, 2009. Available: http://www.chinadaily.com.cn/china/2009-06/15/content_8282225.htm
119 Zetter, K. A Bizarre Twist in the Debate Over Vulnerability Disclosures. Wired Magazine. September 11, 2015. Available: https://www.wired.com/2015/09/fireeye-enrw-injunction-bizarre-twist-in-the-debate-over-vulnerability-disclosures
120 St Jude Medical Inc. v. Muddy Waters Consulting LLC & Ors. Complaint. Case No. 16-cv-03002. Available: https://regmedia.co.uk/2016/09/08/medsec_lawsuit.pdf
121 Freeman, E. Vulnerability Disclosure: The Strange Case of Bret McDanel. Information Systems Security. 12 April 2007. Available: http://dx.doi.org/10.1080/10658980601144915

Government

Government participation in the market122 for software vulnerabilities, through the discovery, purchase or stockpiling of vulnerabilities and exploits123, while simultaneously adopting regulatory or legislative approaches that seek to (i) restrict trade in them; or (ii) require or encourage their disclosure, is a source of controversy124. Governments are, nonetheless, an important participant in the overall “security ecosystem”.

The inherent tensions that lie at the heart of Government’s so-called “dual-mandate” – that is, the potentially contradictory requirements to protect the systems and communications of its own citizens and corporations while preserving the ability to exploit the systems of third parties through the use of “cyber weapons”125 for law enforcement, espionage and other purposes – are a source of much continuing, and largely unresolved, debate126.

The differing positions adopted by Government in its public discourse127, contrasted against those revealed through sources such as WikiLeaks’ release of almost 9,000 documents from the Central Intelligence Agency’s Center for Cyber Intelligence128, the Shadow Brokers and Edward Snowden, highlight substantial inconsistencies in approach.129 Partially in response to the Snowden disclosures, and the revelation of behaviour divergent from the public rhetoric, the US Government released an updated Vulnerabilities Equities Process setting out the circumstances in

122 Schwartz, A. and Knake, R. “Government's Role in Vulnerability Disclosure: Creating a Permanent and Accountable Vulnerability Equities Process.” Discussion Paper 2016-04, Cyber Security Project, Belfer Center. June 2016. Available: http://www.belfercenter.org/sites/default/files/files/publication/Vulnerability%20Disclosure%20Web-Final4.pdf
123 Fung, B. The NSA hacks other countries by buying millions of dollars’ worth of computer vulnerabilities. Washington Post. August 31, 2013. Available: https://www.washingtonpost.com/news/the-switch/wp/2013/08/31/the-nsa-hacks-other-countries-by-buying-millions-of-dollars-worth-of-computer-vulnerabilities
124 Healey, J. The U.S. Government and Zero-Day Vulnerabilities: From Pre-Heartbleed to Shadow Brokers. Journal of International Affairs. November 2016. Available: https://jia.sipa.columbia.edu/sites/default/files/attachments/Healey%20VEP.pdf
125 Diamond, J. “The Cyber Arms Trade: Opportunities and Threats in Software Vulnerability Markets.” Sigma Iota Rho Journal of International Relations. Available: http://www.sirjournal.org/2012/12/11/the-cyber-arms-trade-opportunities-and-threats-in-software-vulnerability-markets/
126 Marks, J. WikiLeaks dump shines light on government's shadowy zero-day policy. Nextgov.com (Online). March 10, 2017. Available: https://search.proquest.com/docview/1876096725
127 Daniel, M. Heartbleed: Understanding When We Disclose Cyber Vulnerabilities. White House. April 28, 2014. Available: https://obamawhitehouse.archives.gov/blog/2014/04/28/heartbleed-understanding-when-we-disclose-cyber-vulnerabilities
128 Available: https://wikileaks.org/ciav7p1/cms/index.html
129 Aitel, D. and Tait, M. Everything You Know About the Vulnerability Equities Process Is Wrong. Lawfare. August 18, 2016. Available: https://www.lawfareblog.com/everything-you-know-about-vulnerability-equities-process-wrong

which it will choose to retain a vulnerability for exploitation rather than disclosing it to the vendor for remediation.130

However, this debate, and the inconsistencies in the Government’s approach, concern a relatively small subset of total vulnerabilities and, as discussed above in the RAND research, a very small set of elite and highly skilled researchers working on specialised classes of vulnerabilities that are then exploited against specific nation-state actors for important intelligence gathering missions. The focus of the research in this thesis, by contrast, is on the wider publicly available bug bounties and coordinated disclosure programs undertaken by a larger and less specialised set of researchers.

In this area, a growing number131 of US Government agencies have been utilising bug bounty platforms to search for vulnerabilities in their systems, and the terms of the US Department of Defence bounty program are examined in this regard.

Throughout this thesis I have elected to use the term “security researcher” rather than “hacker”, a term that has carried different meanings over time and depending on context. A brief history of that use, and of the evolution of the ethics underlying the term, follows as a foundation for my research into the ethics of security researchers. A broader analysis of the ethical dimensions is a field of study in itself and substantially outside the scope of this thesis. However, an overview of the ethical issues apparent in the history of bug bounties, and of security research more broadly, informs the positions that are ostensibly protected in the bounty programs examined in later Chapters. Similarly, these ethical aspects contribute significantly to the norms of expected researcher behaviour amongst the researcher community.

The Oxford English Dictionary identifies and highlights the two competing notions of a hacker in its definition. The first relevant definition focuses on illicitness - "A person who attempts to gain unauthorized access, esp. remotely, to a computer system or network".

The second definition focuses on skill and enthusiasm –

130 Heller, M. New VEP Charter promises vulnerability transparency. TechTarget. 15 November 2017. Available: http://searchsecurity.techtarget.com/news/450430177/New-VEP-Charter-promises-vulnerability-transparency
131 Nicholas, S. Senate Committee OKs DHS Bug Bounty Program Bill. ExecutiveGov. October 5, 2017. Available: http://www.executivegov.com/2017/10/senate-committee-oks-dhs-bug-bounty-program-bill/

"A person with an enthusiastic interest in computer systems, esp. one who is skilled at programming"

This is reflected in Nissenbaum’s observation that the intent is not to cause harm but to understand, and to experience a thrill:

To hack was to find a way, any way that worked, to make something happen, solve the problem, invent the next thrill. There was a bravado associated with being a hacker, an identity worn as a badge of honor.132

The terminology of black hat and white hat hackers, a common way to describe security researchers, has its genesis in the depiction of cowboys in Old West films made between the 1920s and 1940s. In those films, black and white differentiated good from bad: "The good guy wore a white hat, and the bad guy wore a black hat".133

Richard Stallman, a well-known and zealous advocate for free software and founder of the Free Software Foundation, is often credited with coining the term “hacker” in relation to IT security, though he rejects both the use of "hacker" in the context of "hacking to refer to breaking security" and his role in its adoption.134

Fred Shapiro of Yale University Law School identified the first use of the term hacker in the context of computers in 1963 in the Massachusetts Institute of Technology (MIT) newspaper, in reference to students misusing MIT phones to, among other things, make long-distance calls, including by using a PDP-1 computer to scan for dial tones.135

Levy136, Coleman137, Nissenbaum138 and Graham139 have all examined hacker culture, albeit from different perspectives. Levy considers hacker culture from a historical perspective of the hacker roots

132 Nissenbaum, H. Hackers and the contested ontology of cyberspace. New Media and Society. 2004. Available: http://journals.sagepub.com/doi/abs/10.1177/1461444804041445 at p.197.
133 Agnew, J. The Old West in Fact and Film: History Versus Hollywood. McFarland. 2012.
134 Laskow, S. The Counterintuitive History of Black Hats, White Hats, And Villains. Atlas Obscura. January 27, 2017 at p.131. Available: http://www.atlasobscura.com/articles/the-counterintuitive-history-of-black-hats-white-hats-and-villains
135 Shapiro, F. Antedating of "Hacker". E-mail to American Dialect Society Mailing List. 13 Jun 2003. Available: https://web-beta.archive.org/web/20060715031620/http://listserv.linguistlist.org/cgi-bin/wa?A2=ind0306B&L=ads-l&P=R5831&m=24290
136 Levy, S. Hackers: Heroes of the Computer Revolution. O'Reilly Media. 2010.
137 Coleman, G. Coding Freedom: the ethics and aesthetics of hacking. Princeton University Press. 2013. Available: http://gabriellacoleman.org/Coleman-Coding-Freedom.pdf
138 Nissenbaum, H. Hackers and the contested ontology of cyberspace. New Media and Society. 2004. Available: http://journals.sagepub.com/doi/abs/10.1177/1461444804041445
139 Graham, P. Hackers & Painters: Big Ideas from the Computer Age. O'Reilly Media. 2004.

at MIT and its development throughout the 1970s and 1980s. Coleman focuses particularly on the open source movement and Graham considers aspects of creativity and software development.

Hacker culture

In examining the roots of hacker culture, in the context of free and open source software (FOSS), Coleman identifies the link between the ethics of free software and the broader regime of liberalism.140 Coleman considers the many motivations of hackers to produce software for free, and notes "they are committed to productive freedom":

They tend to value a set of liberal principles: freedom, privacy, and access. Hackers also tend to adore computers— the glue that binds them together— and are trained in specialized and esoteric technical arts, primarily programming, system, or Net administration, security research, and hardware hacking.141

In the context of bug bounties, Katie Moussouris, who was responsible for Microsoft's first bounty program and a contributor to the ISO standards on vulnerability handling, suggests that security researchers’ motives lie "somewhere between compensation, recognition and pursuit of intellectual happiness".142

Many of these attributes are evident in the rhetoric of security researchers and are apparent in the case studies in Chapters 4 and 5. Levy examines the genesis of hackers at MIT in the 1950s and the creation of hacker communities of highly skilled technical experts that subsequently spread to Californian universities, from Stanford to Berkeley, and eventually across the USA. The work of Coleman, Levy and Graham provides sound bases for considering the ethical and philosophical actions of security researchers and their participation in coordinated vulnerability programs.

Nissenbaum examines the journey from the heroes of the computer revolution to its cyberspace villains and their conception as miscreants, vandals, criminals, and even terrorists.143

Anthropological research is outside the scope of this thesis. However, the history and, more relevantly, the development of hacker culture influence the way in which security researchers participate in bug

140 Coleman, G. Coding Freedom: the ethics and aesthetics of hacking. Princeton University Press. 2013. Available: http://gabriellacoleman.org/Coleman-Coding-Freedom.pdf at p.7.
141 Ibid. at 17.
142 Moussouris, K. Presentation, RSA Conference (2018) at 30:15. Available: https://www.rsaconference.com/events/us18/rsac-ondemand/industry-experts-bug-bounty
143 Nissenbaum, H. Hackers and the contested ontology of cyberspace. New Media and Society. 2004. Available: http://journals.sagepub.com/doi/abs/10.1177/1461444804041445 at p.195.

bounty programs and the way in which their behaviour is interpreted. Further, it affects their interactions with the security researcher community and with bounty operators. An understanding of the way these historical cultural influences have developed in the context of bug bounty programs may be useful in understanding and resolving disputes regarding standards of expected behaviour which arise in the operation of bounty programs. These disputes are examined in depth in the Chapter 4 case studies.

This Chapter has considered four broad historical themes and a subset of the related literature: the history and evolution of software vulnerability disclosure paradigms and bug bounties; the legal framework through which the liability of security researchers to vendors has been effected; and, finally, a brief overview of the historical ethical and philosophical context in which security research occurs. Much of the literature above is written from a US-centric perspective, especially that examining bounty programs directly. This is perhaps unsurprising given that bounty programs emerged in the US and many of the participants in the market for security vulnerabilities are US-based. It does highlight, however, that an analysis of the law from an Australian perspective may be useful to Australian-based security researchers. It also highlights the persistent tension between disclosure and concealment, researchers and vendors, and “black” versus “white” hat hackers. Chapter 3, which follows, examines the technical and societal context in which software vulnerabilities now operate.


Chapter 3 - Why Software Matters

This Chapter considers the themes of mobile device adoption, software complexity, big data, the emergence of connected ‘things’ (rather than people and traditional devices with user interfaces), and the economic and other technological developments that enable persistent connectivity and pervasive computing. In this broadening context, vulnerabilities present both greater opportunity for exploitation, through increased exposure, and greater consequences when exploitation occurs.

It has been stated that software is the foundation of modern civilization144, while Marc Andreessen145 describes the centrality of software to the economy and our lives as “software is eating the world”:

“… we are in the middle of a dramatic and broad technological and economic shift in which software companies are poised to take over large swathes of the economy.

More and more major businesses and industries are being run on software and delivered as online services—from movies to agriculture to national defense.” 146

Software is everywhere; it is everywhere because software is the closest thing we have to a universal tool.147

This concept of ubiquitous, or pervasive, computing is not a new one, having been described by Mark Weiser in 1991. Weiser stated that "the most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it".148 The evolution of the Internet of Things reflects a continuation on the path to such technology.

144 Rice, D. Geekonomics: The Real Cost of Insecure Software. Addison-Wesley Professional. 2007 at xvii.
145 Co-author of Mosaic (the first modern web browser), co-founder of Netscape, venture capitalist and Board Member of eBay, Facebook and HP.
146 Andreessen, M. Why Software Is Eating The World. Wall Street Journal. August 20, 2011. Available: http://online.wsj.com/news/articles/SB10001424053111903480904576512250915629460
147 Rice, D. Geekonomics: The Real Cost of Insecure Software. Addison-Wesley Professional. 2007 at Chapter 1.
148 Weiser, M. The Computer for the 21st Century. Scientific American. September 1991 at p.1. Available: https://www.ics.uci.edu/~corps/phaseii/Weiser-Computer21stCentury-SciAm.pdf

This universality of computing, and its integration into every aspect of the economy, means that ensuring the software that operates these systems is secure and operates in an error-free fashion is a key concern for governments, corporations and individuals. The World Economic Forum, held in Davos in January 2013, recognised the "increasing dependence on connectivity for the normal functioning of society" and noted that the "cyber risk landscape evolves rapidly".149 Notably, the paradigmatic analysis of the centrality of the security of locks to the protection of property (in the same way as the security of software is central to the protection of data) was understood as early as the 1800s:

Houses, rooms, vaults, cellars, cabinets, cupboards, caskets, desks, chests, boxes, caddies - all with the contents of each, ring the changes between meum and tuum150 pretty much according to the security of the locks by which they are guarded.151

Just as control of the treasures of the 1860s was guaranteed by the security of locks, so too are the treasures of the modern era controlled by the security of its software.

A modern mobile device carries an array of sensors including microphones, multiple high-resolution cameras, accelerometers, global positioning system receivers and fingerprint recognition devices. The compromise of such a device potentially exposes a large amount of valuable and sensitive information about its users and their actions.

The proliferation of smartphones has occurred at an enormous scale. There are predicted to be 6 billion smartphones in circulation by 2020.152 As far back as 2013, Gartner predicted that mobile devices would overtake PCs as the most common device to access the web.153

In 2015, Ofcom, the UK telecommunications regulator found that, in the UK, two-thirds of adults had a smartphone and, for the first time, the smartphone had surpassed the laptop as the most important

149 World Economic Forum. Partnering for Cyber Resilience. Available: http://www3.weforum.org/docs/WEF_IT_PartneringCyberResilience_Guidelines_2012.pdf
150 Meum and tuum are Latin for “mine” and “thine” respectively. The phrase is used here to reflect the difference between what is mine and what is yours.
151 Hobbs, A. Locks and Safes: The Construction of Locks. Virtue & Co. London. 1868 at p.2.
152 Kharpal, A. Smartphone market worth $355 billion, with 6 billion devices in circulation by 2020: Report. CNBC News. January 17, 2017. Available: https://www.cnbc.com/2017/01/17/6-billion-smartphones-will-be-in-circulation-in-2020-ihs-report.html
153 Gartner Identifies the Top 10 Strategic Technology Trends for 2013. Gartner. Available: http://www.gartner.com/newsroom/id/2209615

device for internet connectivity.154 The Report highlighted that 33% of users identified the smartphone as the most important device for accessing the internet, an increase from 23% in 2014.

In November 2016, for the first time, more websites were loaded on mobile devices than on desktop computers.155

Modern information systems collect and store vast amounts of information, a phenomenon described as ‘big data’ – “datasets whose size is beyond the ability of typical database software tools to capture, store, manage, and analyse”.156 The ability to analyse and derive meaning from such data is considered a key basis for competitive advantage and growth for companies.157

Companies introduce new devices in order to exploit the data those devices can capture. This desire for competitive advantage drives the number and type of devices becoming internet connected so that the data can be centralised and analysed. Mikko Hyppönen, chief research officer at F-Secure, a large cybersecurity company, noted the trend towards connecting new devices to collect data for analytics purposes:

"It's going to be so cheap that vendors will put the chip in any device, even if the benefits are only very small. But those benefits won't be benefits to you, the consumer, they'll be benefits for the manufacturers because they want to collect analytics.” 158

That the value accrues from the data collected, rather than from revenue from the consumer, suggests that security may not be a commercial priority in such devices.

154 Ofcom. The Communications Market Report. 6 August 2015. Available: https://www.ofcom.org.uk/__data/assets/pdf_file/0022/20668/cmr_uk_2015.pdf at p.10.
155 StatCounter. Mobile and tablet internet usage exceeds desktop for first time worldwide. 1 November 2016. Available: http://gs.statcounter.com/press/mobile-and-tablet-internet-usage-exceeds-desktop-for-first-time-worldwide
156 Manyika, J., Chui, M., Brown et al. Big data: The next frontier for innovation, competition, and productivity. McKinsey Global Institute. May 2011. Available: http://www.mckinsey.com/business-functions/digital-mckinsey/our-insights/big-data-the-next-frontier-for-innovation
157 Ibid.
158 Palmer, D. Internet of Things security: What happens when every device is smart and you don't even know it? ZDNet. March 20, 2017. Available: http://www.zdnet.com/article/internet-of-things-security-what-happens-when-every-device-is-smart-and-you-dont-even-know-it/


In April 2011, Dave Evans, then chief technologist at Cisco, the largest networking company in the world, defined the phrase “the Internet of Things” to mean the point in time when more “things or objects” were connected to the Internet than people.159 These “things” can take many forms, including sensors, consumer devices and enterprise devices of many forms, shapes and sizes, either with or without a typical user interface.

The growth in these devices has been explosive - Gartner estimates the growth of connected devices as follows:160

Year    Number of Connected Devices (millions)
2016    6,381
2017    8,380
2018    11,196
2020    20,415

As of July 2015, the United Nations Department of Economic and Social Affairs estimated the world population at 7.3 billion161. Consequently, the realisation of the Internet of Things, as defined by Cisco, can be considered to have occurred at some point between 2016 and 2017. The connection of a device to the Internet potentially expands its “attack surface” – that is, the number of points on a system that an attacker can target.162
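The crossover implied by these figures can be checked with a rough linear interpolation between the Gartner estimates quoted above (a back-of-envelope sketch only, which assumes steady device growth across the year):

```python
# Gartner estimates of connected devices (millions) and the UN world
# population estimate (millions), both quoted in the text above.
devices = {2016: 6_381, 2017: 8_380}
population_millions = 7_300

# Assume linear growth between the 2016 and 2017 estimates and find the
# point at which connected devices first exceed people.
growth = devices[2017] - devices[2016]
fraction_of_year = (population_millions - devices[2016]) / growth
print(2016 + fraction_of_year)  # a point between 2016 and 2017
```

The device count exceeds the quoted population part-way between the two estimates, consistent with the crossover window stated in the text.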

A further development that has enabled the expansion of the number of persistently connected devices, with a commensurate expansion of potential security risks, is the growth of the available IP address space.

159 Evans, D. The Internet of Things: How the Next Evolution of the Internet is Changing Everything. White paper. April 2011 at p.2. Available: http://www.cisco.com/c/dam/en_us/about/ac79/docs/innov/IoT_IBSG_0411FINAL.pdf
160 Forecast: Internet of Things — Endpoints and Associated Services, Worldwide, 2016. Gartner. Available: https://www.gartner.com/document/3558917
161 World Population Prospects: 2015 Revision. United Nations Department of Economic and Social Affairs. Available: http://www.un.org/en/development/desa/publications/world-population-prospects-2015-revision.html
162 Manadhata, P. An Attack Surface Metric. Carnegie Mellon University. Available: http://reports-archive.adm.cs.cmu.edu/anon/2008/CMU-CS-08-152.pdf

For a device to be natively addressed163 on the internet, it requires an address known as an Internet Protocol (“IP”) address. The version of the IP standard deployed on ARPANET, the predecessor to the modern-day internet, was IPv4, defined in RFC 791164, published in September 1981. Under IPv4, each address is a 32-bit number comprising four binary octets which, in decimal notation, are written as four numbers between 0 and 255 separated by dots – e.g. 203.2.193.124. This format limits the total address space to 2^32 addresses, which is 4,294,967,296, or a little over four billion.
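The relationship between the dotted-decimal notation and the underlying 32-bit number can be illustrated with a short Python sketch (the conversion helper here is written for illustration only; it is not part of any standard library API):

```python
def dotted_quad_to_int(address: str) -> int:
    """Convert dotted-decimal notation (e.g. '203.2.193.124') to its 32-bit value."""
    octets = [int(part) for part in address.split(".")]
    assert len(octets) == 4 and all(0 <= o <= 255 for o in octets)
    value = 0
    for octet in octets:
        # Shift the accumulated value up one octet and append the next one.
        value = (value << 8) | octet
    return value

# The total IPv4 address space: a 32-bit number.
total_ipv4_addresses = 2 ** 32
print(total_ipv4_addresses)  # 4294967296 - a little over four billion
print(dotted_quad_to_int("203.2.193.124"))
```

The helper simply packs the four octets into a single 32-bit integer, mirroring how the address is represented on the wire.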

IP Address Space Exhaustion

Due to the unforeseen growth of the Internet and certain inefficiencies in allocating IP addresses – for instance, allocating blocks as large as 16 million addresses to single entities such as IBM, AT&T, Hewlett-Packard, Apple, MIT, Ford and Computer Sciences Corporation165 – exhaustion of the address pool was forecast as early as the late 1980s and subsequently realised on 31 January 2011.166

IP v6

The successor standard to IPv4, IPv6, was released in December 1998167, though a coordinated transition effort did not occur until the World IPv6 Launch conducted by the Internet Society on June 6, 2012.168 Amongst other things, IPv6 expanded the total address space to a 128-bit number (2^128, approximately 3.4*10^38). This allows an almost incomprehensibly large increase in total addresses – approximately 340 trillion, trillion, trillion theoretical addresses. It is estimated that this would support the assignment of “an IPv6 address to EVERY ATOM ON THE SURFACE OF THE EARTH, and still have enough addresses left to do another 100+ earths.”169
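The magnitudes quoted above can be verified directly; the following sketch is illustrative only, relying on Python's arbitrary-precision integers:

```python
# Sizes of the IPv4 (32-bit) and IPv6 (128-bit) address spaces.
ipv4_space = 2 ** 32
ipv6_space = 2 ** 128

print(ipv6_space)                # 340282366920938463463374607431768211456
print(f"{ipv6_space:.1e}")       # 3.4e+38, i.e. roughly 3.4 * 10^38
print(ipv6_space // ipv4_space)  # 2**96: every IPv4 address corresponds to
                                 # 2**96 IPv6 addresses
```

The exact figure is 39 decimal digits long, matching the "340 trillion, trillion, trillion" description in the text.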

The advancement of these underlying technologies vastly reduces the cost and effort to create a connected device.

163 That is, rather than being obscured or hidden by another internet connected device such as a proxy server.
164 Internet Protocol. DARPA Internet Program Protocol Specification. University of Southern California. September 1981. Available: https://tools.ietf.org/html/rfc791
165 IANA IPv4 Address Space Registry. Available: https://www.iana.org/assignments/ipv4-address-space/ipv4-address-space.xhtml
166 Available Pool of Unallocated IPv4 Internet Address Now Completely Exhausted. The Internet Corporation for Assigned Names and Numbers. Press Release. February 3, 2011. Available: https://www.icann.org/en/system/files/press-materials/release-03feb11-en.pdf
167 RFC 2460. Internet Protocol, Version 6 (IPv6) Specification. Available: https://tools.ietf.org/html/rfc2460
168 “IP v6 is the new normal”. Available: http://www.worldipv6launch.org/
169 Leibson, S. IPV6: How Many IP Addresses Can Dance on the Head of a Pin? EDN Network. March 28, 2008. Available: http://www.edn.com/electronics-blogs/other/4306822/IPV6-How-Many-IP-Addresses-Can-Dance-on-the-Head-of-a-Pin-

Page 50 of 178 Rob Hamper. Faculty of Law. Masters by Research Thesis.

While the above sets out the technical basis facilitating the enormous growth in connected devices, it is illustrative to consider a practical example of how earlier technological advances changed everyday life, by analogy with internet connected devices.

Benedict Evans, a Partner at Andreessen Horowitz (a Silicon Valley venture capital fund co-founded by Marc Andreessen, co-author of the first widely used web browser), considers that:

My grandfather could probably have told you how many electric motors he owned. There was one in the car, one in the fridge, one in his drill and so on.

My father, when I was a child, might have struggled to list all the motors he owned (how many, exactly, are in a car?) but could have told you how many devices were in the house that had a chip in.

Today, I have no idea how many devices I own with a chip, but I could tell you how many have a network connection. And I doubt my children will know that, in their turn.170

At each of these stages, the things that this new technology would be used for were hard to predict, and many would seem absurd. (A little motor in the wing mirror to adjust it for you? Really?) But the trend is inevitable.

It is this ubiquity, and its rapid onset, that makes the scale and scope of software vulnerabilities, and the consequences of their exploitation so significant.

Hardware

The growth of the global smartphone market, described above, has created a supply chain that has vastly lowered the cost of the componentry underlying connected devices. Similarly, that componentry has shrunk in size so that it can fit in almost any device. In many cases, its cost would have been prohibitive in previous years, even if the technology had otherwise been commercially available.

170 Evans, B. The internet of things. May 26, 2014. Available: http://ben-evans.com/benedictevans/2014/5/26/the-internet-of-things

For example, the hardware components to enable WiFi, Bluetooth and GPS/Global Navigation Satellite System functionality cost as little as US$5171, while a full set of components for high-speed (LTE) mobile access costs US$33, with an aim, in the near future, for a reduced feature set to cost as little as US$5172. This allows the integration of advanced functionality into hardware devices in which it would previously have been inconceivable. This range of devices is described below.

Software

The low cost of hardware is complemented by a wide range of free open source and low-cost proprietary software platforms that allow developers to easily integrate internet connectivity into their devices. For instance, the Arduino project173 is an open source hardware and software project, originally aimed at hobbyists, for building devices that sense and interact with the physical world, while Android Things is an extension of Google’s Android mobile platform that allows developers to create devices in the same way as they would create an app174. These platforms reduce the time, complexity and cost of creating, manufacturing and distributing an internet connected device.

The foregoing describes a range of technological developments that lay the groundwork for the proliferation of Internet connected devices into everyday life. This section describes some of those devices and examines the consequences of their exploitation.

Consumer devices

In the consumer market, the Internet of Things extends connected devices beyond the typical personal computer, smartphone or tablet to an almost limitless range of both new devices and reimagined ‘traditional’ devices whose capabilities are expanded or extended by adding Internet connectivity. Many such devices are, as illustrated below, manufactured by corporations without a history of developing secure software.

171 IHS Markit. Google Pixel XL Manufacturing Cost is in Line with Rival Smartphones, IHS Markit Teardown Shows. October 25, 2016. Available: http://www.businesswire.com/news/home/20161025005551/en/Google-Pixel-XL-Manufacturing-Cost-Line-Rival
172 Mo-Hyun, C. Affordable LTE chips to make IoT real by 2018: Altair co-founder. ZDNet. May 30, 2016. Available: http://www.zdnet.com/article/cost-effective-lte-chips-will-make-iot-a-reality-by-2018-altair-co-founder/
173 http://www.arduino.cc
174 https://developer.android.com/things/sdk/index.html

In this context, Cisco extended its concept of the “Internet of Things”, described above, to the “Internet of Everything”. The scale of the market for the “Internet of Everything” is enormous, estimated by Cisco at US$19 trillion by 2020.

The Consumer Electronics Show (“CES”), held in Las Vegas each January, serves as a venue for vendors to release new consumer technology. In 2017, the following Internet-connected devices were announced or launched, illustrating the breadth and novelty of devices deemed worthy of connection to the Internet175:

Device – Feature

U by Moen Showerhead – Start the showerhead from bed to ensure it is warm when the shower is entered

Simplehuman Wi-Fi Trash Can – Pair with phone to order replacement rubbish bags

Kérastase Hair Coach – “Identify hair issues”, count brush strokes, measure brushing patterns

Sunflower Smart Patio Umbrella – Integrates security camera and other sensors

These products add to an already broad range of Internet connected devices that have entered the market in recent years, including bathroom scales176, blood pressure monitors177, fridges178, thermostats179, baby monitors180 and toothbrushes181. Only five years ago, the expansion of such devices into “smart” or “connected” devices may have been seen as fanciful or, indeed, unnecessary.

As the complexity and interconnectedness of these devices continues to expand, unintended security consequences are likely to result, particularly from unanticipated, unpatched or unpatchable vulnerabilities. Security models adequate to manage the diversity of function, volume of devices and ease of deployment remain of great concern, and securing these devices has been front of mind for security researchers since the Internet of Things / Internet of Everything paradigm was nascent.182

175 Barrett, B. The dumb ‘smart’ gear that someone’s gonna hack in 2017. Cnet. 4 January 2017. Available: https://www.cnet.com/au/products/kerastase-hair-coach-powered-by-withings/preview/
176 http://www.fitbit.com/au/aria
177 http://www.qardio.com
178 http://www.samsung.com/us/explore/family-hub-refrigerator/
179 http://www.nest.com
180 http://www.babyping.com
181 http://www.connectedtoothbrush.com

This security risk was highlighted in a study by Hewlett-Packard, which estimated that up to 70% of the most common Internet of Things devices contain vulnerabilities, with an average of 25 vulnerabilities per product.183

Mature security models continue to be developed to secure the Internet of Things, though it is possible that these ubiquitous, low-cost devices are built with componentry that does not allow them to be properly updated to meet modern security standards (remote management, remote updates, anti-virus, encrypted communications).184 The consequences are further considered in the next section.

Example – Refrigerator

Holm185 considered an example of how an innocuous device, in this case a refrigerator with “smart” options allowing it to be internet connected, could be remotely controlled by a malicious actor to perpetrate identity theft. This occurs by the device being added to a botnet, a remotely controlled network of compromised devices, and used to route unsolicited 'spam' e-mails to potential victims to garner identity details.

Similarly, a significant vulnerability in a connected dishwasher was discovered in March 2017,186 187 highlighting that the risk is not merely an isolated or theoretical one.

Enterprise devices – SCADA and Cyber-physical systems

Supervisory control and data acquisition (“SCADA”) systems are computerised industrial control systems that are crucial to the maintenance and control of critical infrastructure such as power, energy, water, transportation and telecommunications.188 They may control, for instance, the operation of the outlets of a dam, the switching of electricity in a grid or, in the case of Stuxnet discussed further below, the operation of centrifuges.

182 See, for instance, Suo, H., Wan, J., Zou, C., Liu, J. Security in the Internet of Things: A review. 2012 International Conference on Computer Science and Electronics Engineering, 23 March 2012. Available: http://ieeexplore.ieee.org/abstract/document/6188257/
183 HP Study Reveals 70 Percent of Internet of Things Devices Vulnerable to Attack. Hewlett Packard. July 29, 2014. Available: https://www8.hp.com/us/en/hp-news/press-release.html?id=1744676#.VO4ofvmUe4o
184 Jones, N. Top 10 IoT Technologies for 2017 and 2018. Gartner. 22 January 2016. Available: https://www.gartner.com/document/3188520
185 Holm, E. The Role of the Refrigerator in Identity Crime? International Journal of Cyber-Security and Digital Forensics (IJCSDF) 5(1):1-9. The Society of Digital Information and Wireless Communications, 2016.
186 Duckett, C. This is the dishwasher with an unsecured web server we deserve. ZDNet. March 26, 2017. Available: http://www.zdnet.com/article/this-is-the-dishwasher-with-an-unsecured-web-server-we-deserve/
187 CVE-2017-7240. https://cve.circl.lu/cve/CVE-2017-7240

Increasingly, these systems are moving from operating solely on physically isolated, secured networks utilising proprietary protocols and standards to (i) adopting standards common to the internet (such as TCP/IP); (ii) being connected to the internet; or (iii) both.189 This migration has often occurred in an ad hoc fashion and without standardised security architectures.

The significance of the exploitation of vulnerabilities in such systems is that, in contrast to traditional information and communications technologies, they have the potential to have an effect in the physical world (hence the appending of the “physical” moniker to the traditional “cyber” description).

An Australian example of the consequences of the exploitation of such cyber-physical systems occurred in 2000 in the Maroochy Shire Council in Queensland. In that case, a disgruntled former council employee, Vitek Boden, remotely accessed sewage pumping control equipment and, issuing 46 radio commands, released 800,000 litres of raw sewage into the local environment, killing marine life and creating an unbearable stench for residents.190 He was sentenced to two years’ jail and unsuccessfully appealed the decision, including in an application for special leave to appeal to the High Court.191

Cyber Physical Systems

Cyber Physical Systems are those that are “integrations of computation and physical processes”192 and may extend beyond those used purely in industrial processes such as SCADA systems.

The increasing automation of motor vehicles, which commenced with features such as automated parking, radar cruise control, throttle-by-wire and brake-by-wire, has meant that traditionally mechanical processes are now computer controlled; modern vehicles are thus an example of such cyber-physical systems. The consequence is that vulnerabilities in these systems can be exploited to remotely control a vehicle, including applying its brakes, steering or throttle, and cause a crash.

188 Zhu, B., Joseph, A. and Sastry, S. "A Taxonomy of Cyber Attacks on SCADA Systems." 2011 International Conference on Internet of Things and 4th International Conference on Cyber, Physical and Social Computing, Dalian, 2011, pp. 380-388.
189 Ibid.
190 Abrams, M., Weiss, J. Malicious Control System Cyber Security Attack Case Study – Maroochy Water Services, Australia. The MITRE Corporation and Applied Control Solutions, LLC. August 2008. Available: http://csrc.nist.gov/groups/SMA/fisma/ics/documents/Maroochy-Water-Services-Case-Study_briefing.pdf
191 Boden v The Queen B55/2002 [2003] HCATrans 828 (25 June 2003)
192 Lee, E. Cyber Physical Systems: Design Challenges. 2008 11th IEEE International Symposium on Object and Component-Oriented Real-Time Distributed Computing (ISORC), 2008, pp. 363-369.

For example, at the Black Hat security conference in Las Vegas in 2015, security researchers Charlie Miller and Chris Valasek demonstrated that they could remotely compromise Fiat Chrysler motor vehicles over a mobile network by exploiting vulnerabilities in the vehicle’s radio unit.193 The radio unit was connected insecurely to the CAN bus194 within the car which, after a string of vulnerabilities was exploited, ultimately allowed access to the vehicle’s throttle, brakes and steering (which were electronically controllable to enable functionality such as adaptive cruise control, forward collision warning and park assist)195. Hundreds of thousands of vehicles were vulnerable to the attack and 1.4 million were recalled as a result of the demonstrated vulnerability.196 The vulnerabilities were not exploited “in the wild” and no injuries were recorded.

Other documented attacks on SCADA and cyber physical systems include those exploiting systems responsible for control of water diversion in a dam197 and disabling leak detection in offshore oil rigs198.

The physical consequences of such vulnerabilities and their exploitation dramatically magnify the risks to society and the requirement for proper handling of vulnerabilities by vendors – in part through the operation of effective and transparent bug bounty programs.

This section briefly considers further modalities, cases and consequences of exploited vulnerabilities in the areas of international law and politics. The significant, unprecedented and unpredictable consequences of the exploitation of software vulnerabilities in these areas highlight the need for an ecosystem in which vulnerabilities are dealt with in a manner that reduces the likelihood and consequence of exploitation. Bug bounty programs, if operating optimally, may be a key element in ameliorating these effects.

193 Miller, C., Valasek, C. Remote Exploitation of an Unaltered Passenger Vehicle. Black Hat USA, 2015. Available: https://securityzap.com/files/Remote%20Car%20Hacking.pdf
194 The CAN bus is a standardised in-vehicle network allowing different components to communicate.
195 Miller, C., Valasek, C. Remote Exploitation of an Unaltered Passenger Vehicle. Black Hat USA, 2015, at pp. 10-13.
196 Ibid. at p. 87.
197 Weiss, J. Protecting Industrial Control Systems from Electronic Threats. Momentum Press, May 15, 2010, at p. 118.
198 Ibid. at p. 119.

Stuxnet

Much has been written199 about the exploitation of vulnerabilities, most notably in the case of Stuxnet,200 in which four zero-day vulnerabilities were used, allegedly by the U.S. and Israeli governments, to disrupt Iran’s nuclear program by disabling uranium enrichment centrifuges.

Stuxnet, discovered in 2010, was the first “in the wild” exploitation of vulnerabilities for cyber warfare to effect physical harm. In that case, the physical harm was caused by exploiting vulnerabilities in a Siemens industrial control system that allowed the attackers to spin centrifuges at very high and then very low speeds, causing physical failure and disrupting the enrichment of uranium.

Surveillance of Dissidents

In 2012, Egyptian protesters seized control of the security headquarters of the recently deposed Mubarak regime and found documents in which UK-based software company Gamma Group offered for sale software known as FinFisher. The software achieved its ends, including the interception of Skype calls, by exploiting software vulnerabilities in products such as Apple’s iTunes to track and intercept the communications of dissidents.201 FinFisher was also found to have been sold to Bahrain.202

In addition to exploitation for the surveillance of dissidents in undemocratic regimes, the exploitation of vulnerabilities by cyber-criminals for a diverse range of illegal purposes, such as ransomware, botnets, propagation of spam, identity theft and fraud, is also prevalent.203 The use of vulnerabilities by law enforcement for lawful wiretapping has also been proposed.204

199 Farwell, P., Rohozinski, R. Stuxnet and the Future of Cyber War. Global Politics and Strategy, 2011. Available: http://dx.doi.org/10.1080/00396338.2011.555586
200 W32.Stuxnet Dossier. Symantec Security Response. Available: http://www.symantec.com/content/en/us/enterprise/media/security_response/whitepapers/w32_stuxnet_dossier.pdf
201 Schwartz, M. FinFisher Mobile Tracking Political Activists. Informationweek – Online. Aug 31, 2012.
202 Rapid7. Analysis of the FinFisher Lawful Interception Malware. 8 August 2012. Available: https://community.rapid7.com/community/infosec/blog/2012/08/08/finfisher
203 BBC News. Cybercriminals exploit bug in software. June 30, 2015. Available: http://www.bbc.com/news/technology-33324715
204 Bellovin, S., Blaze, M., Clark, S., Landau, S. Lawful Hacking: Using Existing Vulnerabilities for Wiretapping on the Internet. Privacy Legal Scholars Conference, June 2013 (Draft). Available: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2312107

Botnets – Spam and DDoS

A vulnerability may be remotely exploited against a target computer205 to conscript it as a “zombie”, “robot” or “bot” in a network, or “botnet”, of potentially millions of similarly compromised computers or other connected devices, so that they can be collectively and simultaneously remotely controlled by a “botmaster”. These botnets are typically used to propagate spam at enormous scale, often associated with attempts at identity theft, or to conduct distributed denial-of-service (“DDoS”) attacks – an attack in which a host is flooded with traffic from each of the bots in an attempt to overwhelm it and take it offline. Such DDoS attacks have disrupted significant parts of the Internet, including simultaneously taking Twitter, Netflix, CNN and other major services offline206 after targeting Dyn, an intermediate provider of domain name services that provides, among other things, the translation of domain names to internet protocol (IP) addresses.

These attacks now target not only typical desktop or mobile computers, but compromised refrigerators, DVRs, security cameras207 and other “Internet of Things” devices.

This Chapter has highlighted a range of technical, social and political changes that have broadened the likelihood and consequence of software vulnerabilities, highlighting the growing scale and scope of the problem. These consequences include harm or disruption to many aspects of modern life, including the use of software vulnerabilities to effect real, physical harm, as in the case of disruption to vehicle control systems and physical infrastructure. As discussed in Chapter 1, the negative effects of software vulnerabilities can be partially ameliorated through the effective use of bug bounty programs, which have the potential to scale quickly by leveraging the “power of the crowd”. However, the way in which the terms of these programs are framed and operated may significantly impact the operation of the programs and the protection afforded to security researchers participating under them. The following Chapter 4 examines four case studies of issues that arise in this context.

205 Hachem, N., Mustapha, Y., Granadillo, G., Debar, H. Botnets: Lifecycle and Taxonomy. 2011 Conference on Network and Information Systems Security, La Rochelle, 2011, pp. 1-8. Available: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5931395&isnumber=5931356
206 Woolf, N. DDoS attack that disrupted internet was largest of its kind in history, experts say. The Guardian. 27 October 2016. Available: https://www.theguardian.com/technology/2016/oct/26/ddos-attack-dyn--botnet
207 Jerkins, J. Motivating a market or regulatory solution to IoT insecurity with the Mirai botnet code. Computing and Communication Workshop and Conference (CCWC), 2017 IEEE 7th Annual. Available: http://ieeexplore.ieee.org/abstract/document/7868464/

Chapter 4: Case Studies

This Chapter sets out four case studies that illuminate some of the specific legal and related issues that have arisen from security researcher participation in bug bounty programs and in the disclosure of vulnerabilities in the absence of an operating bug bounty program.

Later Chapters consider the potential effect of the legal terms and relevant legislation in greater detail. Due to the relative novelty of bounty programs and a likely reluctance by vendors to make interactions with security researchers public, particularly where they are the subject of dispute, published examples of legal issues arising in bounty program participation are relatively sparse. Four examples where such issues have been published are examined in this Chapter.

The four case studies chosen are Dà-Jiāng Innovations (DJI), Wineberg, St Jude and Rogers. These case studies are predominantly descriptive but involve a brief overview of key issues as they relate to the relevant bug bounty program. As the case studies are inter-woven throughout the succeeding chapters, a more detailed legal analysis of them is found in those chapters. Contractual issues are analysed in Chapters 5 and 6. Specifically, those chapters examine a number of the most prevalent contractual issues arising from such programs, which relate to the balancing of the interests of three parties – the bounty operator, the security researcher and the user community.

These case studies highlight the ways in which vendors can exert pressure on security researchers in attempts to suppress publication of vulnerabilities which may embarrass them, or to enforce onerous terms beyond those expected or included in the terms made publicly available. This occurs via express or implied legal threats and the involvement of authorities. It is the use of these measures that has the potential to exert a chilling effect on security research or, as demonstrated in the case of DJI, to result in the research occurring but the vulnerabilities not being disclosed. This has the dual negative effect of failing to increase the security of the products while the discovered vulnerabilities remain, at best, undisclosed or, at worst, sold to and exploited by malicious actors.

Background

This section undertakes a case study of the recently launched bug bounty program operated by Chinese unmanned aircraft systems (“UAS”) (commonly known as drones) manufacturer Dà-Jiāng Innovations (“DJI”). DJI, the largest consumer drone manufacturer, was estimated to hold up to 70% of the consumer drone market in 2016.208

The case study of DJI is worthy of examination for a number of reasons. Firstly, it is an example of the exposure of security researchers to legal risk through participation in a bug bounty program arising not from onerous terms imposed at the outset of the program. Rather, the researcher’s liability arose through a program seemingly designed and deployed in haste, without consideration or implementation of measures to address key and, indeed, entirely foreseeable issues. Upon discovering these limitations, DJI sought to impose onerous terms accompanied by legal threats against the security researcher.

Jail Breaking

Security researchers have frequently sought the ability to “jail break” DJI’s drones – that is, to exploit vulnerabilities to allow them to operate in unintended ways.

As with popular consumer devices such as Apple’s iPhone, significant efforts are made to "jail break"209 DJI’s drones so that they can run arbitrary software code, potentially allowing them to function in ways that the manufacturer did not intend or, indeed, specifically seeks to restrain. This contrasts with the state in which the drones are shipped, which restricts certain functions, for instance the flying of drones in designated areas such as airports or conflict zones. The process of circumventing these protections is known as "jail breaking" and typically requires the exploitation of one or more vulnerabilities.

One argument in favour of the “jail breaking” of drones is that prohibiting it restricts the property rights of the device owner who, it is argued, should be free to deal with the device in any way they choose. This has parallels with the position of tractor manufacturer John Deere210, which sought to prevent owners of its tractors from repairing them by imposing technological measures restricting servicing to authorised dealers211. Circumvention of these measures, John Deere argues, would be a breach of the Digital Millennium Copyright Act (DMCA), which imposes penalties for circumvention of technological protection measures that restrict access to a copyrighted work, even in the absence of an actual breach of copyright. The effect of the implemented technological protection measures was to restrict anyone but an authorised dealer from repairing the tractors, imposing potentially higher cost and delay of repair on owners.212 John Deere argues that strict controls on the modification of its equipment are required to meet "industry and safety standards, or environmental regulations".213 Safety arguments of this kind are advanced, in part, against the jail breaking of drones. In response, the US Librarian of Congress implemented a number of exceptions to the prohibition on circumventing technological protection measures, which have effect only under copyright law, in certain circumstances, including a limited exception for security research.214 Subsequent to this measure, John Deere sought to impose equivalent restrictions by contract through modifications to its End User License Agreement.215

208 Wang, Y. As China's Drone Market Takes Off, Leader DJI Still Flies Far Above The Competition. Forbes Asia, May 12, 2016. Available: https://www.forbes.com/sites/ywang/2016/05/12/chinas-flood-of-cheap-flying-cameras-is-little-threat-to-dajiang/#738563e31869
209 Ricker, T. iPhone Hackers: "we have owned the filesystem". Engadget. July 10, 2007. Available: https://www.engadget.com/2007/07/10/iphone-hackers-we-have-owned-the-filesystem
210 See, for instance, Kansas House Bill H2122. Available: http://www.kslegislature.org/li/b2017_18/measures/documents/hb2122_00_0000.pdf
211 John Deere Tractors. Position Paper. Available: https://www.scribd.com/document/339340098/John-Deere-letter?irgwc=1&content=10079&campaign=Skimbit%2C%20Ltd.&ad_group=&keyword=ft750noi&source=impactradius&medium=affiliate#from_embed

While there are no similarly prominent case examples under Australian law, the potential effect of the equivalent technological protection measure (“TPM”) legislation under the Copyright Act 1968 (Cth) remains and may represent an important area of future research. Of equal importance is that, even if copyright legislation were amended, many of the same restrictions could be imposed through terms of use (or other contractual measures).

Misuse / Compromise of Civilian Aircraft

The example of DJI drones is particularly worthy of consideration due to the nature of DJI’s products, their market prominence and the potential consequences of the exploitation of vulnerabilities within them, discussed in greater detail below. If a drone or other autonomous vehicle is exploited, it may be remotely controlled by a malicious attacker. The consequences of such exploitation may include compromising the safety of individual members of the public against whom the devices could be targeted. For example, in October 2017 a drone was alleged to have been deliberately flown into the flight path of a passenger jet aircraft.216 If flown into the engines or movable flight surfaces,217 a drone has the potential to cause the aircraft to crash and, in a worst-case scenario, kill passengers aboard.

212 https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8093797
213 http://www.abc.net.au/news/rural/2018-03-11/farmers-spearhead-right-to-repair-fight/9535730
214 17 U.S. Code § 1201 - Circumvention of copyright protection systems
215 Bloomberg, J. John Deere's Digital Transformation Runs Afoul Of Right-To-Repair Movement. Forbes, April 30, 2017. Available: https://www.forbes.com/sites/jasonbloomberg/2017/04/30/john-deeres-digital-transformation-runs-afoul-of-right-to-repair-movement/#323a28005ab9
216 Zorthian, J. 'This Should Not Have Happened.' A Drone Crashed Into a Canadian Passenger Plane. Time, October 16, 2017. Available: https://time.com/4983677/drone-crash-passenger-plane/

Airspace Restriction

DJI drones have built-in safety features that prevent them being flown in restricted airspace, based on both altitude and proximity to airports. DJI is able to modify and expand the scope of this restricted airspace in response to emergent threats. Exploitation of a vulnerability in a drone could disable or modify the application of these restrictions and allow the drone to be flown in dangerous or otherwise unauthorised situations.

Drone Army

While only developed as a proof of concept, Samy Kamkar has demonstrated a compromise of drones such that, once the first drone is compromised, it can be used to automatically “infect” other drones, and each subsequently infected drone can infect others, and so on. This has the ability to create a “drone army” of compromised drones that could amplify the negative effect of an initial malicious exploitation, such as targeting the drones against civilian aircraft or simply causing them to drop into crowded areas where people gather.218

Defensive Military Use

The use of adapted civilian drones in conflict zones has resulted in the U.S. Department of Defense (DoD) investing hundreds of millions of dollars to counter their use by militants, where they have been adapted to carry grenades or other explosives and used with lethal effect.219

DJI has experienced a number of significant issues, including the terrorist group ISIS adapting its drones into weapons carrying bombs. In response, DJI created “no-fly zones”220 for its drones in an attempt to reduce the areas in which off-the-shelf drones can be operated.221

217 Wings, flaps etc.
218 Kamkar, S. SkyJack. December 2, 2013. Available: http://www.samy.pl/skyjack/
219 Schmitt, E. Pentagon Tests Lasers and Nets to Combat a Vexing Foe: ISIS Drones. New York Times, September 23, 2017. Available: https://www.nytimes.com/2017/09/23/world/middleeast/isis-drones-pentagon-experiments.html
220 A no-fly zone is a technologically imposed geographic restriction on where a drone can be flown.
221 Kulwin, N. ISIS will no longer be able to use DJI drones as weapons in most of Iraq and Syria. Vice News. April 27, 2017. Available: https://news.vice.com/en_us/article/mb9wmb/isis-will-no-longer-be-able-to-use-dji-drones-as-weapons-in-most-of-iraq-and-syria

Offensive Military Use

Finally, the U.S. DoD is also investing significantly in the offensive use of drones themselves, and a remote compromise of these could result in weapons of war effectively falling into enemy or civilian hands.

In July 2017, a drone carrying a Russian thermite hand grenade detonated an ammunition dump in Ukraine, causing an estimated USD$1 billion in damage, killing one person and injuring five. Such an attack could easily be carried out by a civilian drone: it relies on a very light weapon, the grenade weighing less than 500g, which is within the carrying capability of certain consumer drones.222

The above are, in some cases, extreme and hypothetical examples of the potential consequences of the exploitation of vulnerabilities in drones. Collectively, however, they demonstrate that the subject matter requires particular care in design and in security measures such as ‘no-fly zones’. Ideally, these types of devices are built as securely as possible; however, since perfect security is not possible, the use of programs such as bug bounties to discover and patch vulnerabilities is important. Such devices also require careful implementation and operation of related bug bounty programs to ensure that, when vulnerabilities are discovered, the correct incentives and processes apply to ensure they are remediated. In a new and growing market (Goldman Sachs estimates that drones will become a USD$100 billion business between 2016 and 2020223), the security of drones is likely to become an increasingly important issue.

The Researcher Kevin Finisterre

Kevin Finisterre, the security researcher at the heart of this issue, was employed by ASX-listed “counter drone” company Department 13, a company that provides "Advanced Protection for Emerging Drone Threats"224. Finisterre undertook research into vulnerabilities in DJI drones in his own time and largely out of personal interest, stating that the community in which he participates “just do it for the laughs"225, as opposed to undertaking it predominantly for commercial gain. These motivations are consistent with much of the security researcher activity highlighted in Chapter 2 and the discussion of hacker ethics, where the mental thrill, the challenge and community reputation often outweigh financial or professional reward.

222 Mizokami, K. Kaboom! Russian Drone With Thermite Grenade Blows Up a Billion Dollars of Ukrainian Ammo. Popular Mechanics, July 27, 2017. Available: https://www.popularmechanics.com/military/weapons/news/a27511/russia-drone-thermite-grenade-ukraine-ammo/
223 Drones: Reporting for Work. Goldman Sachs Research. Available: http://www.goldmansachs.com/our-thinking/technology-driving-innovation/drones
224 https://department13.com/
225 Finisterre, K. KickStarting a Drone JailBreak Scene. Video at 28:30 (YouTube, 2017). Available: https://www.youtube.com/watch?v=0OJebU7AOvw

The Program

DJI announced its bug bounty program on August 28, 2017 via a press release referring to it as a “Threat Identification Reward Program” and noting that “rewards for qualifying bugs will range from $100 to $30,000, depending on the potential impact of the threat”.226

The program’s focus was on identifying threats to users' privacy, particularly concerning personal information and details of the photos and videos they created. Its secondary focus was on vulnerabilities "that may reveal proprietary source codes and keys or backdoors created to bypass safety certifications".227

The program was launched via the press release and bugs were able to be submitted immediately:

Rewards for qualifying bugs will range from $100 to $30,000, depending on the potential impact of the threat. DJI is developing a website with full program terms and a standardized form for reporting potential threats related to DJI’s servers, apps or hardware. Starting today, bug reports can be sent to [email protected] for review by technical experts.228

A significant deficiency in DJI’s bug bounty program, as promulgated, was an almost total lack of clarity and completeness as to scope, and the absence of details regarding other matters such as confidentiality, the handling of duplicates or the allocation of legal risk by contract. DJI simply provided an e-mail address to which to report vulnerabilities, without sufficient detail for a researcher to determine what conduct was, or was not, allowed.

Uber’s prominent bug bounty program serves as a useful basis for comparison. It is noteworthy for its provision of a "treasure map" of Uber’s infrastructure, which describes each component of the company’s technical infrastructure with a "what it does", "what to look for" and "what it runs on"229. This assists security researchers in directing and focusing their technical efforts and time, provides a clear scope for the program and reduces the scope for later disagreement.

226 DJI To Offer 'Bug Bounty' Rewards For Reporting Software Issues DJI Newsroom August 28, 2017 Available: https://www.dji.com/au/newsroom/news/dji-to-offer-bug-bounty-rewards-for-reporting-software-issues 227 Ibid. 228 https://www.dji.com/newsroom/news/dji-to-offer-bug-bounty-rewards-for-reporting-software-issues 229 https://eng.uber.com/bug-bounty/

Initial Exploitation and Query Regarding Program Scope

Finisterre queried the scope of the program, given its ambiguity and incompleteness, using the provided e-mail address. DJI took almost two weeks to respond.230

During this period Finisterre continued to conduct research on DJI infrastructure and searched publicly available sections of the source code repository GitHub.231 Sensitive DJI material had previously been located on GitHub, presumably placed there by an employee or agent of DJI. Finisterre located a set of DJI’s API keys for Amazon Web Services (“AWS”) in DJI source code on GitHub. API keys are unique identifiers that authenticate and allow access to a set of hosted services or resources, such as storage or other computing infrastructure. Such hosted services are commonly used to outsource certain operations, to allow greater scalability, and to reduce infrastructure costs and overhead for the user.232
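By way of illustration, the technique of searching published source code for inadvertently committed credentials can be sketched in a few lines. The snippet below is entirely invented; the key value is Amazon’s own documentation example key, not a live credential:

```python
import re

# A stand-in for source code published on GitHub. The key shown is
# AWS's documented example key, not a real credential.
source = '''
s3 = connect(
    access_key_id="AKIAIOSFODNN7EXAMPLE",
    secret_access_key="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
)
'''

# AWS access key IDs follow a well-known fixed pattern ("AKIA" plus 16
# characters), so leaked keys can be located mechanically in large code bases.
leaked = re.findall(r"AKIA[0-9A-Z]{16}", source)
print(leaked)  # → ['AKIAIOSFODNN7EXAMPLE']
```

That the pattern is mechanically searchable explains why, as the text notes, this is a common technique among security researchers.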

Subsequent to Finisterre’s discovery of the API keys, DJI replied by e-mail regarding the scope of the program, though that scope was still not reflected on any public website, nor were any further terms available for participation in the program:

“… the bug bounty program covers all the security issues in firmware, application and servers, including source code leak, security workaround, privacy issue. We are working on a detailed user guide for it”233 (sic)

As will be discussed in Chapters 5 and 6 regarding contractual issues, the effect of the initial press release in respect of the formation of a contract, and of the purported subsequent amendments to it via e-mail, is undesirably uncertain with respect to both parties’ rights. The live question is whether the press release is binding as to subsequent participation in the bounty program or, if not contractually binding, whether, in some jurisdictions, it may constitute misleading and deceptive conduct.

Finisterre subsequently used the discovered API keys to access DJI’s AWS-hosted infrastructure and, through this access and a chain of subsequent vulnerabilities, discovered a range of

230 Finisterre, K. Why I walked away from $30,000 of DJI bounty money Digital Munition Fall 2017 at p.3 Available: http://www.digitalmunition.com/WhyIWalkedFrom3k.pdf 231 GitHub is commonly used to host open source software projects. Many companies utilise source code from GitHub hosted projects in their products. They are often required, as a condition of the software licence (e.g. GPL), to redistribute any changes they make to the source code. Consequently, it is a common technique for security researchers to search these source code repositories for sensitive material that companies may have inadvertently included in source code uploaded to comply with the open source conditions. 232 For instance, Netflix uses Amazon Web Services to provide its storage.233 Finisterre, K. Why I walked away from $30,000 of DJI bounty money Digital Munition Fall 2017 at p.5 Available: http://www.digitalmunition.com/WhyIWalkedFrom3k.pdf

information. This included sensitive personal information, including identification documents such as passports, driver’s licenses and identification cards. Finisterre was further able to access flight logs of drones, and to filter these records to extract details of those flights undertaken by accounts registered to military and government domain names.234

This suggests that such data may be particularly sensitive or may, potentially, have national security implications. The discovery of potentially sensitive information regarding military use of civilian drones may have been a matter of significant importance and concern to DJI, given an order by the US Army, made only weeks prior to the announcement of the bug bounty program, banning the use of DJI drones “due to increased awareness of cyber vulnerabilities associated with DJI products.”235

Initial Award of Bounty

Finisterre extensively documented the vulnerabilities he had discovered, and their exploitation, in a 31-page report, and exchanged over 130 e-mails with DJI regarding his discoveries. Ultimately DJI agreed in writing to award the highest bounty of $30,000 but subsequently requested that Finisterre execute an agreement in order to receive it.

Finisterre responded to the proposed DJI agreement and requested the inclusion of several aspects in the process: (i) acknowledgement of his findings; (ii) more transparent processes regarding reported bugs; (iii) more timely updates; (iv) clarity regarding security researchers’ ability to disclose; and (v) transparent public disclosure of vulnerabilities disclosed to DJI.236

Such requests are consistent with the terms and procedures provided under most large-scale bounty programs, including the Facebook, Google, and Hack the Pentagon terms discussed in Chapters 5 and 6.

DJI Threats

The full text of the legal agreement proffered by DJI, which DJI stated was required to be executed in order to receive the bounty payment, has not been made publicly available. However, Finisterre notes

234 Ibid. 235 Newman, S. The Army Grounds Its DJI Drones Over Security Wired Magazine. July 8 2017. Available: https://www.wired.com/story/army-dji-drone-ban/ 236 Finisterre, K. Why I walked away from $30,000 of DJI bounty money Digital Munition Fall 2017 at p.5 Available: http://www.digitalmunition.com/WhyIWalkedFrom3k.pdf at p.10.

that the agreement did not offer protection to security researchers, risked his “right to work”237 and “posed a direct conflicts(sic) of interest to many things including my freedom of speech”.

The agreement obliged Finisterre to disclose to DJI any “input or suggestions regarding other information security issues” prior to disclosing them publicly, and public disclosure would require DJI’s prior written consent.238 Such an obligation limits researchers’ ability to conduct further research, and DJI could choose never to disclose a vulnerability publicly, or never to fix it. This is inconsistent with the fundamentals of responsible disclosure239 and a significant divergence from the limited information made available upon the launch of the bounty program.

Negotiations requesting changes to the agreement continued for a period of time but, after they ultimately broke down, Finisterre received a letter in which DJI noted that:

While we appreciate your support, DJI's legal department noticed that you had obtained DJI proprietary and confidential information by accessing DJI server without authorisation on or about September 27, 2017, which caused damage to the integrity of the server and aforementioned information.

DJI continued:

Without waiving other rights under applicable laws, DJI hereby demands you to immediately delete and destroy any copies of information you obtained from such unauthorized access in a complete and irrevocable way.

DJI sought to retract any permission arguably granted expressly by its press release establishing the bounty program, and then expressly confirmed by e-mail noting that “the bounty program cover all servers” (sic) as noted above, and to colour Finisterre’s behaviour as unauthorised under the CFAA:

your report to DJI and correspondence therefor(sic) do not constitute DJI's grant of authorization to you

DJI then sought to encourage a settlement by way of an implicit threat:

DJI is in good faith willing to explore the possibility of reaching an amicable resolution regarding the aforementioned unauthorized access and transmission of information, including

237 Ibid. 238 Locklear, M. DJI threatens legal action after researcher reports bug. Engadget. 20 November 2017. Available: https://www.engadget.com/2017/11/20/dji-threatens-legal-action-researcher-reports-bug/ 239 The principles of responsible disclosure, in this instance, would provide DJI with a period of time in which to remediate the vulnerability before it was publicly disclosed.

a release of liability agreed by both parties. In the interim, DJI reserves all rights under applicable laws, including but not limited to, its right of action under the Computer Fraud and Abuse Act.

DJI’s offer to work “in good faith” appears diametrically opposed to its exhibited behaviour of granting authorisation and then seeking to retract it at a later stage. Finisterre, offended by the nature of DJI’s approach and response, withdrew from negotiations, decided to forego payment under the program, and published an account upon which the above is based.

Good Faith

Matters of good faith are clearly an issue of import in the context of bug bounty programs and invoke notions of both fair play and commonly expected behaviours. Of course, the persistent tension between security researchers and software vendors, discussed elsewhere, means that these notions may be viewed very differently by security researchers and bounty program operators depending on the circumstances and may invoke some of the ethical dimensions discussed in Chapter 2.

Expected standards of good faith are included in the terms of the programs examined in Chapters 5 and 6, applying variously to the actions of both security researchers and program operators. For instance, the Department of Defense terms state that it will "deal in good faith with researchers who discover, test and submit vulnerabilities... in accordance with these guidelines". Facebook also requires that researchers make a "good faith effort to avoid privacy violations and disruptions to others".

The precise contours of the requirements imposed by an obligation to act in good faith in the context of bug bounty programs are unclear, particularly in the Australian jurisdiction, where implied duties of good faith and fair dealing in contracts are, overall, an area of general uncertainty.240 A good faith standard has not been embraced at the final appellate level in Australia, New Zealand, Hong Kong or the United Kingdom.241

A fuller examination of these matters is important and could seek to fill gaps in terms where ambiguity exists as to the expected behaviour of both parties. It is an area worthy of further research, though of relatively higher importance in civil law jurisdictions than in Australia, where, for reasons of brevity and scope, a fuller analysis has not been included in this thesis. In contrast to the US and European legal systems, "most common law systems refuse to accept an overarching

240 See, for instance, Masters Home Improvement Pty Ltd v North East Solutions Pty Ltd VSCA 88 at 19 241 Kiefel S. Good Faith in Contractual Performance Background paper for the Judicial Colloquium Hong Kong September 2015 at p.9 Available: http://www.hcourt.gov.au/assets/publications/speeches/current-justices/kiefelj/kiefelj-2015-09.pdf

principle of good faith in performance of contractual rights and duties".242 As an alternative, the potential intervention of unfair contracts legislation, as undertaken in Chapters 5 and 6, has been chosen as providing a more solid basis for analysis in the Australian context.

Turning back to Finisterre: as a highly respected security researcher with full-time employment at a specialised firm and marketable skills, foregoing such a large amount may have been more acceptable to him than it would be to researchers at a more junior level. For many security researchers who have invested such significant time and effort over a long period, however, it may be impossible to accept such an outcome. They may feel pressured or obliged to accept the legal terms proffered, notwithstanding DJI’s bullying behaviour, or, worse still, to sell the information through another avenue. This could include the legal sale to vulnerability sales companies such as Zerodium (introduced in Chapter 2), which do not limit themselves to defensive use and will on-sell vulnerabilities to customers who wish to exploit them against third parties, or the sale of the vulnerability on the dark net for potentially illegal uses.

Finisterre ultimately published his account of the events, which received considerable media coverage in the specialist IT security press243 244 and in technology coverage more broadly.245 While this may have forced short-term action on DJI’s part, a more holistic and certain approach to bug bounty programs is required; possible reforms will be examined in Chapter 7.

Formalisation of DJI Bug Bounty Program

On December 1, 2017, more than three months after the launch of the bug bounty program via its press release, DJI launched a website setting out fuller terms of its program. In contrast to its approach with Finisterre it included, inter alia, a grant of express permission under the CFAA246 to authorise actions taken consistent with the program terms:

By participating in this program and abiding by these terms, DJI grants you limited “authorized access” to its systems under the Computer Fraud and Abuse Act in accordance with the terms

242 Ibid. at p.1. 243 Pepper, B. DJI's bug bounty program starts with a stumble The Verge November 20, 2017 Available: https://www.theverge.com/2017/11/20/16669724/dji-bug-bounty-program-conflict-researcher 244 DJI in cyber-security row over bug bounty BBC News 20 November, 2017 Available: https://www.bbc.com/news/technology-42052473 245 Locklear, M. DJI threatens legal action after researcher reports bug Engadget November 20, 2017 Available: https://www.engadget.com/2017/11/20/dji-threatens-legal-action-researcher-reports-bug 246 DJI Bug Bounty Policy Program Available: https://security.dji.com/policy?lang=en_US

of the program and will waive any claims under the Digital Millennium Copyright Act (DMCA) and other relevant laws.

These undertakings now extend beyond express permission under named legislation (such as the CFAA) to an obligation to provide assistance and support to security researchers in the event that a third party brings any legal action against them:

Furthermore, if you conduct your security research and vulnerability disclosure activities in accordance with the terms set forth in this policy, DJI will take steps to make known that your activities were conducted pursuant to and in compliance with this policy in the event of any law enforcement or civil action brought by anyone other than DJI.247

DJI Response to Finisterre account

In response to Finisterre’s published account of his interactions, DJI referred to him in a press release pejoratively as a “hacker”, in contrast to the term “researcher” used elsewhere in the same press release to describe the activities of others. His behaviour was described as “attempts to claim a ‘bug bounty’” and DJI claimed that it only:

asks researchers to follow standard terms for bug bounty programs, which are designed to protect confidential data and allow time for analysis and resolution of a vulnerability before it is publicly disclosed. The hacker in question refused to agree to these terms, despite DJI’s continued attempts to negotiate with him, and threatened DJI if his terms were not met.248

These “standard terms” were not made available prior to submission of the vulnerability, and it is not clear on what basis they are described as “standard”, as the analysis of the terms of other bug bounty programs undertaken in Chapters 5 and 6 discloses. Indeed, the terms subsequently adopted by the program, discussed briefly above, are far more favourable to security researchers than those sought from Finisterre and notably do not include, for instance, the extremely broad indemnity for liability arising from a report made under the program.249

Conclusion

The errors or omissions on the part of DJI were manifold. Firstly, launching a bounty program and accepting submissions without preparing or promulgating any terms is a significant error. That three

247 DJI Security Response Centre Available: https://security.dji.com/terms?lang=en_US 248 Ibid. 249 Murison, M. Inside DJI's Flawed Bug Bounty Program. November 16, 2017 Available: https://dronelife.com/2017/11/16/dji-flawed-bug-bounty-program/

months passed from the press release until such terms were published further compounded this error.

Secondly, DJI failed to respond in a timely fashion to reasonable and clear queries regarding the inadequately described scope.

Thirdly, DJI sought to impose a lengthy and, arguably, unreasonably onerous set of terms during a process of negotiation that should not have needed to occur in the first place. Such terms were not consistent with the spirit of the press release announcing the program and extended far beyond those necessary to formalise or document the existing arrangement.

Finally, the threat to a security researcher of potential criminal sanctions under the CFAA, during negotiations that were necessary only because DJI had not delivered a properly developed bounty program, is particularly egregious. The suggestion that Finisterre had been acting without authorisation, despite exchanges involving hundreds of e-mails, an offer by DJI to engage him as a consultant, and a confirmation that he was eligible for a bounty payout, suggests that DJI was, at best, mercurial. At worst it suggests acting in bad faith and exploiting the ambiguity created by its own poor processes and documentation as a basis to bully a party with significantly less power into accepting unfavourable terms.

Conversely, Finisterre’s conduct was consistent with that of someone acting in good faith, seeking to understand the bounds of allowable behaviour under the program while attempting to comply with it. Evidence of this included making timely and thorough disclosures of discovered vulnerabilities, engaging in extended discussions explaining the process of discovery and exploitation, and offering assistance to DJI in resolving the vulnerabilities.

Finisterre is employed by a reputable, stock-market-listed counter-drone research company rather than a grey-market operator.250 He has a history of submitting vulnerabilities under bounty programs and being credited for doing so, including by large and reputable vendors such as Apple, and presents his findings at large industry conferences such as Derbycon and Black Hat.251 Collectively this suggests that DJI’s attempted characterisation of Finisterre as a “hacker” motivated purely by money is unreasonable and does not withstand scrutiny.

250 A “gray-market” participant is one that will trade in vulnerabilities without disclosure to the vendor, typically to Governments and Defense Contractors. They are neither “white hat” participants who would disclose to the vendor, nor “black hat” with malicious intent. 251 See, for instance, Derbycon 2017 - KickStarting a Drone JailBreak Scene by Kevin Finisterre (Youtube, 2017) Available: https://www.youtube.com/watch?v=0OJebU7AOvw

Melia, a fellow security researcher who also reported vulnerabilities to DJI, is “respected and in good standing” on the bug bounty platform HackerOne, where “he has submitted 813 confirmed valid vulnerabilities to 69 individual companies, including GM, Starbucks, DoD, GitHub, Spotify” without complaint.252 Melia also declined a $500 bounty after reporting a significant vulnerability to DJI which, in his view, warranted a payout of at least $16,000. DJI then accused him of having agreed to its bounty terms and of having breached the confidentiality provisions by making a public disclosure of his finding.253

This highlights the chilling effect of poorly operated bounty programs: security researchers of significant standing in the community, with valuable and demonstrated skillsets, will decline to submit vulnerabilities. In a presentation, Finisterre highlighted the views of other drone security researchers in response to DJI’s behaviour:

"I will just continue to keep things to myself and share them only in a small circle. I only submitted this 1 bug to test the bug bounty program and see how they handle it. That done, I will not report a bug ever again, and will not make their life easy in posting the bugs public. they can start puzzling the pieces together from exploits."254

This suggests that researchers do not feel empowered to seek enforcement of the terms of bounty programs when faced with a lengthy and potentially expensive legal battle with a well-resourced vendor. This deprives security researchers of potential earnings, deprives the user community of more secure products and, potentially, leaves third parties exposed to exploitation of the vulnerabilities by malicious actors. As discussed below, this is not merely a theoretical risk.

Finally, DJI ignored an offer of assistance made on August 28 by Sam Houston, Senior Community Manager at BugCrowd, to establish a bug bounty program using the BugCrowd platform.255 Use of a bounty platform would have eliminated many of the issues that arose, by providing a program modelled on hundreds of previous programs and establishing clear rules and a framework for the submission,

252 Bing, C. How DJI fumbled its bug bounty program and created a PR nightmare Cyberscoop November 30, 2017 Available: https://www.cyberscoop.com/dji-bug-bounty-drone-technology-sean-melia-kevin-finisterre/ 253 Corfield, G. Researcher: DJI RCE-holes offered me $500 after I found Heartbleed etc on its servers 28 November, 2017 Available: https://www.theregister.co.uk/2017/11/28/dji_heartbleed_rce_sql_injection_500_bounty/ 254 Derbycon 2017 - KickStarting a Drone JailBreak Scene by Kevin Finisterre (Youtube, 2017) Available: https://www.youtube.com/watch?v=0OJebU7AOvw 255 Houston, S. Twitter Post 28 August 2017 Available: https://twitter.com/samhouston/status/902283780567605248

review and award of bounties. The terms of such a platform, albeit those of BugCrowd’s competitor HackerOne, are considered in Chapters 5 and 6.

Subsequent Announcement of Data breach

On December 28, 2017, law firm Wilson Sonsini, acting for DJI, notified the Attorney General of New Hampshire that "DJI believes that other individuals may have accessed information" before Finisterre's report of September 27, 2017. This notification was not provided voluntarily: it was required by New Hampshire’s data breach laws, which provide for penalties of up to three times injured parties’ damages if a breach is not reported.256 The investigation into further breaches commenced only after Finisterre’s report was made.

The letter noted that the information that was the subject of the breach included "personal information, including information contained in scanned photo identification uploaded by users to DJI's servers" which contained "full name, address, date of birth, photo and identification number".

This letter suggests that the vulnerabilities reported by Finisterre were both present and exploited before his report.

Further, it highlights the clear risk and potential consequences of researchers choosing not to report vulnerabilities due to the poor behaviour of DJI in the operation of its bounty program, and the legal risk to which researchers are exposed when they do report, as exemplified by the treatment of Finisterre.

The chilling effect of DJI’s approach was therefore not purely theoretical, and that approach negatively impacts the organisation, security researchers and, perhaps most significantly, the users whose personal information is put at risk. Further, as demonstrated in the introduction, the exploitation of a drone has potentially serious consequences for public safety.

256 Olsen, S. Re: Incident Notification Wilson Sonsini December 19, 2017 Available: https://www.doj.nh.gov/consumer/security-breaches/documents/dji-technology-20171219.pdf


This case study will examine the technical process by which access to Instagram’s most sensitive material was obtained by a security researcher who was, at the time, participating under the terms of Facebook’s257 bug bounty program.258

The steps taken by the researcher in reporting his findings to Facebook’s bug bounty program, and Facebook’s response at each stage, are instructive as to some of the ethical and legal issues raised by participation in bounties.

Initial Contact

In late October 2015, security researcher Wesley Wineberg was contacted via a friend of a friend on IRC259 regarding a potential vulnerability in an administrative web interface of Instagram.260 The interface, to an infrastructure monitoring program known as Sensu, was open to the public Internet rather than being “firewalled” and available solely to IP addresses within Facebook’s private network, as would be usual, for security purposes, for such tools. Wineberg had previously submitted valid vulnerability reports to Facebook’s bug bounty program and had been recognised for this by Facebook.261

Vulnerabilities, Exploitation and Reports

Wineberg discovered a number of vulnerabilities which are explored below.

Vulnerability #1

The fact that the interface was open to the Internet had already been reported to Facebook, by someone other than Wineberg, through its bug bounty program. However, the initial report described the vulnerability only in general terms: it did not state how, or whether, the vulnerability could be exploited, or to what consequence. It was therefore impossible for Wineberg to know that the vulnerability had already been reported and that his report was likely to be labelled a duplicate; indeed, it was arguably not even a duplicate given the narrow scope of the original report. Further, Facebook had not fixed the vulnerability. Wineberg undertook to investigate further.

257 Instagram is a subsidiary of Facebook. 258 Wineberg, W. Instagram's Million Dollar Bug. 27 Dec 2015. Available: http://www.exfiltrated.com/research-Instagram-RCE.php 259 IRC stands for internet relay chat – an internet based chat service where users can join “channels” for topics of interest or directly message other users. 260 Part of the Facebook group of companies 261 See “2014” link at https://www.facebook.com/whitehat/thanks/

Wineberg located the source code for Sensu on GitHub (a publicly available source code repository popular for managing and maintaining open source software products such as Sensu). As open source software, Sensu’s source code is publicly available.262 Wineberg found that the source code contained a hard-coded token.263 Instagram had not changed this default token in its implementation of Sensu, as would be usual for security purposes, which allowed him to generate session cookies264 that appeared as though they were issued by Instagram, a technique known as "spoofing". A spoofed session cookie allowed Wineberg to appear to the server as though he had an existing authorised session.
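The mechanism can be illustrated with a minimal sketch. The secret value and cookie format below are invented for illustration (in the Sensu incident the equivalent value was a default token left visible in public source code); real frameworks use the same principle of a keyed signature:

```python
import hashlib
import hmac

# Hypothetical signing secret. In the incident described above, the
# equivalent value was a default left unchanged from public source code.
SECRET = b"default-secret-from-public-source"

def sign_cookie(payload: str, secret: bytes) -> str:
    """Return 'payload.signature', the typical shape of a signed session cookie."""
    sig = hmac.new(secret, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"

def verify_cookie(cookie: str, secret: bytes) -> bool:
    """Server-side check: recompute the signature and compare."""
    payload, _, sig = cookie.rpartition(".")
    expected = hmac.new(secret, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

# The server trusts any cookie signed with SECRET, so anyone who knows
# SECRET can forge a cookie the server will accept as its own.
forged = sign_cookie("user=admin", SECRET)
print(verify_cookie(forged, SECRET))  # → True
```

The sketch shows why the security of such a scheme rests entirely on the secrecy of the signing token: once the token is public, "spoofed" cookies are indistinguishable from genuine ones.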

Vulnerability #2

Once Wineberg verified that the session cookie issued using the token operated as expected, he exploited a known vulnerability in Ruby on Rails, a popular open source web application framework. This vulnerability allowed him to have the server execute any commands that he sent – a Remote Code Execution (RCE) vulnerability.265

At this juncture Wineberg reported two vulnerabilities: the hard-coded token, and the vulnerability in Ruby on Rails which allowed it to be exploited to execute code remotely. Wineberg then continued his exploration.

Wineberg then leveraged the RCE capability to extract the Sensu tool’s database of users, which was stored on the same server (as opposed to being stored and secured in a centralised database).

The passwords were encrypted but easily crackable using common tools to conduct a routine brute force attack.266 Although Wineberg described this tool as “extremely slow” at cracking passwords, very low security had been employed: two of the passwords were ‘password’, and 12 of 60 account passwords were recovered within minutes, suggesting poor password policies.267 This highlights the significance of the vulnerability and the weakness of the measures employed by Facebook in securing its

262 Available: https://github.com/sensu/sensu 263 A token acts as the basis for issuing a cryptographically secure session cookie 264 A session cookie is a piece of data sent by a server to a user that tracks their use of the website for as long as their web browser is open, after their web browser is closed, the cookie is deleted. 265 i.e. you are able to have a remote server execute code of your choice. This is a particularly valuable category of vulnerability – in IT security parlance you are referred to as “owning” the server since you can completely control it. 266 John the Ripper password cracker. Available: http://www.openwall.com/john/ 267 Which would mandate a minimum password length, special characters, capital letters etc. to deter such attacks.

systems. Wineberg verified that he could now log on with these accounts but did not make changes to the system.268
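A dictionary attack of the kind described can be sketched as follows. The hash function and word list are illustrative only: the actual tool used, John the Ripper, attacks real password hash formats, whereas plain SHA-256 is used here to keep the sketch self-contained:

```python
import hashlib

def crack(hashes: set, wordlist: list) -> dict:
    """Hash each candidate word and report which stolen hashes it matches."""
    recovered = {}
    for word in wordlist:
        digest = hashlib.sha256(word.encode()).hexdigest()
        if digest in hashes:
            recovered[digest] = word
    return recovered

# A stand-in for an exfiltrated password database containing one weak password.
stolen = {hashlib.sha256(b"password").hexdigest()}

recovered = crack(stolen, ["letmein", "password", "hunter2"])
print(recovered)
```

Because the attacker only needs to guess the weakest passwords in the database, even a "slow" cracking process recovers accounts quickly when policies permit passwords such as ‘password’.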

Wineberg documented the vulnerability concerning the user database, reported it to Facebook, and asked whether he should further explore the scope of the vulnerability by examining other systems, of which he had a list. Facebook did not respond in writing but firewalled the Sensu server – that is, made it inaccessible to the public Internet.

Wineberg found that the configuration file containing the details of the user database also contained keys to access a data repository, a "bucket" in Amazon’s technical parlance, hosted on Amazon's S3 service269.

Access to this first “bucket” provided access to another file that contained keys to 83 further separate Amazon buckets. Wineberg exfiltrated data from these buckets, though he notes that he took care not to download user images, in order to comply with Facebook’s requirement that researchers “make a good faith effort to avoid privacy violation.” This behaviour indicates a degree of care taken to comply with the policy, further confirmed by his seeking clarifications by e-mail.
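The chaining described, where one credential unlocks a store that itself holds further credentials, can be sketched abstractly. The nested structure below is entirely invented for illustration and does not reflect Instagram’s actual buckets:

```python
# Mock cloud storage: bucket name -> files. Some files contain keys to
# further buckets. The names and layout are invented for illustration.
BUCKETS = {
    "config-bucket": {"db.conf": "...", "keys.txt": ["bucket-1", "bucket-2"]},
    "bucket-1": {"source.tar.gz": "...", "more-keys.txt": ["bucket-2", "bucket-3"]},
    "bucket-2": {"ssl-cert.pem": "..."},
    "bucket-3": {"api-keys.json": "..."},
}

def reachable(start: str) -> set:
    """Follow every key file and return all buckets reachable from `start`."""
    seen, stack = set(), [start]
    while stack:
        bucket = stack.pop()
        if bucket in seen:
            continue
        seen.add(bucket)
        for content in BUCKETS[bucket].values():
            if isinstance(content, list):  # a file listing further bucket keys
                stack.extend(content)
    return seen

print(sorted(reachable("config-bucket")))
```

The traversal shows how a single initial credential can cascade into access to an entire estate of storage, which is why storing keys alongside the data they protect compounds a breach.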

These buckets contained substantial highly sensitive content, including source code for Instagram’s backend; SSL certificates for instagram.com; API keys for Twitter, Facebook, Flickr and Foursquare; and usernames and passwords for a variety of services including e-mail.

Collectively this material represented “basically all of Instagram's secret key material” and would have allowed Wineberg to impersonate Instagram or its staff in a wide range of circumstances, including by replicating Instagram.com and producing websites that would be securely signed as though produced by Instagram. The API keys could have been used to connect to third party services as though the user were Instagram, and Wineberg would have been able to access all Instagram users’ private data (including non-public photographs). Clearly this should have been of tremendous importance to Facebook and Instagram.

Wineberg made a third submission to Facebook's bug bounty program noting the chain of vulnerabilities that had been exploited including, among other things:

1. Unprivileged (i.e. non-administrative) users of Sensu could access Amazon Storage Services usernames and passwords.

268 Wineberg notes that this was done in order to comply with Facebook’s terms not to cause downtime.
269 Amazon Simple Storage Service. It is usual, even for large companies such as Instagram, to operate only a portion of their infrastructure themselves and to use third party providers for certain infrastructure, in this case storage.

2. The first Amazon repository accessed contained usernames and passwords for other repositories.

3. One set of credentials provided access to all of Instagram’s Amazon Storage buckets.

4. "Secret keys" were stored in Amazon storage buckets, including in archived previous versions of software.

5. Files stored in some buckets were encrypted with passwords also stored in the same bucket (or accessible via the same AWS key).

6. Amazon Web Services “keys can be used from any remote IP” (i.e. they are not restricted to Instagram’s range of IP addresses).

7. Audit logs did not appear to detect Wineberg’s access.

Initial Facebook Response

October 28: Though Facebook promptly firewalled access to the Sensu server so that it was no longer accessible to the public Internet, it was not until six days after Vulnerability Report #2 that Facebook replied with a generic e-mail:

Thank you for reporting this information to us. We are sending it to the appropriate product team for further investigation. We will keep you updated on our progress.

Please be mindful that taking additional action after locating a bug violates our bounty policy. In the future we expect you will make a good faith effort to avoid privacy violations, destruction of data, and interruption or degradation of our service during your research.

Wineberg took the position that he had not breached the published policy: he had not caused a privacy violation, destroyed data, or degraded service, as Facebook suggested. He responded to this negative depiction of his research:

Can you explain where in the policy that is described. I spent a while searching to see if I could better understand the terms and conditions, and what you further note is really all I could find: "make a good faith effort to avoid privacy violations, destruction of data, and interruption or degradation of our service during your research."

Of that list, there was definitely, no "destruction of data", there should not have been any "interruption or degradation of service" due to my actions, leaving the last item of "privacy violations". In regards to that item, it appears to be further explained in the terms and conditions and I basically interpret that to mean "don't access user data".

November 6: Facebook then appeared to suggest post facto that Wineberg was bound by elements of policy not listed in their program terms:

As a researcher on the Facebook program, the expectation is that you report a vulnerability as soon as you find it. We discourage escalating or trying to escalate access as doing so might make your report ineligible for a bounty. Our team accesses the severity of the reported vulnerability and we typically pay based on its potential use rather than rely on what's been demonstrated by the researcher.

On November 6, Wineberg sought clarification regarding the source of the requirement not to “escalate access” and to report a vulnerability upon its discovery:

… Many bounty programs like to see what impact is possible from a vulnerability, which was my assumption with this issue.

The clarifications make sense, and sound pretty black and white. Can I ask why they're not listed in the Facebook Whitehat guidelines? I see no mention of: 1) Report an issue as soon as it's found 2) Do not attempt to escalate access

Maybe this just simply hasn't been an issue before, but it seems silly that Facebook would expect people to follow rules that are not communicated.

Wineberg’s questions appear to be important, pertinent and clearly stated. They relate to access to highly valuable key infrastructure. Further, they clearly seek to elucidate and clarify gaps in the terms of the bounty program which potentially affect both legal liability and eligibility for payment of a bounty.

Wineberg’s reference to “escalate access” is a reference to the common technique of “privilege escalation”, in which an initial exploitation may grant access at the level of a standard or “unprivileged” user of the system, and further exploits then seek to gain additional rights or “privileges” on the system – such as those of an administrator or “root” user. Clarification of this point is fundamental to the allowable behaviour of researchers participating under the program.
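The pattern of escalating from an unprivileged foothold can be illustrated with a deliberately simplified sketch (all names, services and credentials here are invented; this is not Sensu's or Facebook's code): a first flaw yields an unprivileged account, and a second flaw turns that foothold into administrative access.

```python
# A toy "privilege escalation" chain (hypothetical code, illustrative only).
USERS = {"researcher": "user", "admin": "admin"}  # username -> role


def read_config(role: str) -> str:
    """Flaw #1: any authenticated user, whatever their role, may read the
    service configuration, which embeds the administrator's credentials."""
    return "db_user=admin; db_password=hunter2"


def login(username: str, password: str) -> str:
    """Flaw #2: credentials harvested from the config log the attacker in
    as the administrator. Returns the role of the resulting session."""
    if username == "admin" and password == "hunter2":
        return USERS["admin"]
    return USERS.get(username, "none")


# The escalation chain: unprivileged read -> credential harvest -> admin session.
config = read_config(USERS["researcher"])
stolen = config.split("db_password=")[1]
print(login("admin", stolen))  # prints "admin"
```

The sketch mirrors the two stages disputed in the Wineberg reports: the initial access was itself modest, but material readable at that level enabled a far more privileged position.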

Subsequently Facebook set the status of the case to closed without answering Wineberg’s question or clarifying how long it would take to receive a response regarding the bounty. After following up in another e-mail (not extracted), Facebook responded with the following generic reply:

We appreciate the report, but it does not qualify as a part of our bounty program. We will be in touch if we have any further questions.

November 17: Wineberg’s frustration level was reflected in his reply and he hinted at the publication of his findings:

… I am feeling very frustrated by the lack of information that is being shared related to this submission. It is Facebook's right to determine what qualifies and what does not, but I have received no response at all as to how that was determined in this case. My reading of the rules at https://www.facebook.com/whitehat implies that this submission should have qualified. That leaves me with two requests:

#1: Answer my original question from 11 days ago: "Thanks for the clarifications. Many bounty programs like to see what impact is possible from a vulnerability, which was my assumption with this issue.

The clarifications make sense, and sound pretty black and white. Can I ask why they're not listed in the Facebook Whitehat guidelines? I see no mention of: 1) Report an issue as soon as it's found 2) Do not attempt to escalate access

Maybe this just simply hasn't been an issue before, but it seems silly that Facebook would expect people to follow rules that are not communicated."

#2: Confirm if I am permitted to publish my findings on this vulnerability.

December 1: Facebook’s final communication, sent after it had failed to respond for two weeks, reasserted its position, without justification, that preservation of user privacy made the vulnerabilities ineligible for a bounty:

Your submission violates expectations of preserving user privacy, which, as we previously mentioned, makes it ineligible for a bounty.

The decision to publish is yours, we do not explicitly prevent nor provide permission.

Subsequent Facebook Response

Facebook’s Chief Security Officer, Alex Stamos, considered that Wineberg’s behaviour after discovering vulnerability #1 was unethical270:

The fact that AWS keys can be used to access S3 is expected behavior and would not be considered a security flaw in itself. Intentional exfiltration of data is not authorized by our bug bounty program, is not useful in understanding and addressing the core issue, and was not ethical behaviour by Wes.

The exfiltration of data in the course of participation in bug bounties is an important issue and could be considered unethical if more data is exfiltrated than is required to validate the existence of a vulnerability. For instance, as discussed in the case of Rogers below, it is only necessary to download one record to verify the operation of a vulnerability in a database271, rather than a copy of all records.

While Facebook did not seek to deny the right to publish, it appears that their response was, at least in part, attributable to Wineberg raising the issue with them.

Facebook Response

Despite Wineberg never having represented himself as acting on behalf of his employer, Facebook contacted the CEO of Wineberg’s employer, security company (and, ironically, bug bounty platform operator) Synack, and stated, on Facebook's behalf, that he did "not want to have to get Facebook's legal involved, but that he wasn't sure if this was something he needed to go to law enforcement over".

The CSO of a multi-national such as Facebook contacting an individual's employer could obviously have very serious consequences, including termination in

270 Stamos, A. Bug Bounty Ethics. December 18, 2015. Available: https://www.facebook.com/notes/alex-stamos/bug-bounty-ethics/10153799951452929/
271 Typically known as an “SQL Injection” attack.

countries such as the US, where employment is, subject to certain exceptions, “at-will”.272 The allusion to contacting “law enforcement” is presumably a reference to criminal liability under the CFAA, which could have potentially more serious consequences, including imprisonment. This could easily be perceived as a legal threat to the researcher.

Wineberg states that Stamos demanded that he confirm he had not made any vulnerability details public, delete all exfiltrated data, confirm no user data was accessed and keep all findings and interactions private.273

Stamos' account of events does not differ markedly from Wineberg's. He states that publication of the exploitation of the first vulnerability was acceptable, but that release of details of Wineberg's access to Amazon Storage Services (which contained, among other things, Instagram's source code) was not:

… Couldn't allow Wes to set a precedent that anybody can exfiltrate unnecessary amounts of data and call it a part of legitimate bug research, and that I wanted to keep this out of the hands of the lawyers on both sides. I did not threaten legal action against Synack or Wes nor did I ask for Wes to be fired. I did say that Wes's behavior reflected poorly on him and on Synack, and that it was in our common best interests to focus on the legitimate RCE report and not the unnecessary pivot into S3 and downloading of data.274

It would be reasonable to consider that a statement that “I want to keep this out of the hands of lawyers”, followed immediately by a list of demands, is in itself a legal threat suggesting that if the demands are not complied with then legal action will follow. Facebook’s policy relevantly stated, at that time, that “we will not initiate a lawsuit or law enforcement investigation against you in response to your report.” On this basis it is arguable that Stamos’ actions and statements were a clear breach of the published policy and, as discussed in the contractual Chapters, a breach of contract.

Conclusion

That perception of security is often more important than security itself275 is reflected in Facebook's (implied) threats of legal action unless publication was suppressed, despite the protection security

272 See, for instance, Stone, K. Revisiting the At-Will Employment Doctrine: Imposed Terms, Implied Terms, and the Normative World of the Workplace. Industrial Law Journal, Volume 36, Issue 1, 1 March 2007, Pages 84–101. Available: https://doi.org/10.1093/indlaw/dwl042
273 Stamos, A. Bug Bounty Ethics. December 18, 2015. Available: https://www.facebook.com/notes/alex-stamos/bug-bounty-ethics/10153799951452929/
274 Ibid.
275 Bambauer, D. and Day, O. The Hacker's Aegis. Emory Law Journal, Vol. 60, p. 1051, 2011 at p. 1069.

researchers would reasonably have relied on, offered by the Responsible Disclosure Policy statement that “we will not initiate a lawsuit or law enforcement investigation against you in response to your report” if the policy is complied with.

In assessing the researcher’s behaviour, Stamos implied that their work should be for the “betterment of humanity” – though such a standard is not reflected in Facebook’s current or previous responsible disclosure policy:

I strongly believe that security researchers should have the freedom to find and report flaws for the betterment of humanity, and I believe it is right to offer them economic rewards for their hard work. (emphasis added)

The invocation of support for the work of independent security researchers only where it is for the “betterment of humanity” is not a standard reflected in Facebook’s own primary activity, which is undertaken for the benefit of its shareholders – reflected, in part, in its market capitalisation of more than USD $500 billion.276

Facebook subsequently updated its Responsible Disclosure Policy to include a term that appears to be directly responsive to the issues raised in Wineberg’s reports: “You do not exploit a security issue you discover for any reason. (This includes demonstrating additional risk, such as attempted compromise of sensitive company data or probing for additional issues.)” Nonetheless, it appears that Wineberg was not in a position to have known this at the time of his participation and interaction with Facebook.

As with the case of Finisterre, discussed above, vendors appear to seek to enforce terms that are not included in the legal terms of their bug bounty programs by use of legal threats and then, after the fact, modify them to remediate the earlier deficiencies.

276 As at 23 May 2018. See: https://www.bloomberg.com/quote/FB:US


This case study briefly considers the example of a security researcher discovering, exploiting and reporting a vulnerability in the absence of an operating bug bounty program. The other case studies focus on issues arising from the operation of a bug bounty program. This example, however, highlights the dangerous and ambiguous middle ground between research conducted under a defined program and a purely malevolent action. In this middle ground a researcher acts without express (or implied) authority but, arguably, without malicious intent, yet may still breach relevant criminal legislation and commit an offence, despite acting in ways that are advantageous to the security of the complainant’s products.

Background

In 2014, Josh Rogers, a then 16 year old Melbourne high school student, discovered a vulnerability while examining an error webpage of Public Transport Victoria's (“PTV”) website. Rogers used this error as a basis for an SQL Injection277 attack and, subsequently, downloaded 600,000 records of PTV Myki travel card users including their full names, addresses, home and mobile phone numbers, email addresses, dates of birth, seniors card ID numbers, and nine-digit extracts of credit card numbers. After reporting the vulnerability to PTV but receiving no response278, Rogers alerted the news media.
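The mechanics of such an attack can be sketched in a few lines (the schema, table and values here are hypothetical; the actual PTV database and query are not public): where user input is concatenated directly into an SQL statement, a crafted input can transform a single-record lookup into bulk exfiltration.

```python
# A minimal, illustrative SQL injection against a toy in-memory database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, card TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("Alice", "1234"), ("Bob", "5678")])


def lookup_vulnerable(name: str):
    # Vulnerable pattern: user input is concatenated into the SQL string,
    # so the input can rewrite the query's logic.
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{name}'").fetchall()


# A benign lookup returns one row; the crafted input returns every record,
# analogous (in miniature) to the 600,000-record download described above.
print(len(lookup_vulnerable("Alice")))        # 1
print(len(lookup_vulnerable("' OR '1'='1")))  # 2 (all rows)
```

The standard remediation – passing user input as a bound parameter (the `?` placeholder used in the `INSERT` above) rather than by string concatenation – would have defeated this class of attack.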

Actions

Despite Rogers having reported the vulnerabilities to PTV by e-mail, it is likely that the involvement of the news media and, particularly, the exfiltration of the entire database (rather than a sample of a few records sufficient to verify the vulnerability) contributed to PTV's decision to involve the police. Police executed a search warrant, seized a number of Rogers' computers and related equipment, and arrested and interviewed him; he was charged with an offence relating to the unauthorised access, modification or impairment of a computer under S477 Criminal Code 1995 (Cth). He later agreed to sign a document acknowledging the breach of the law and was officially cautioned.279

On a cynical view, it appears that a degree of naivete and hubris played a role in the choice to download the entire database and inform the media. On a more charitable view, there was no

277 An SQL Injection attack is one which allows the attacker to issue commands (using Structured Query Language – hence SQL) on a database, including, typically, downloading or modifying its contents.
278 It is not clear how long Rogers waited before contacting Fairfax Media after not receiving a response from PTV.
279 Kirk, J. Australian teen accepts police caution to avoid hacking charge. PC World. July 7, 2014. Available: https://www.pcworld.idg.com.au/article/549362/australian_teen_accepts_police_caution_avoid_hacking_charge/

demonstration of malicious intent: he had informed PTV of the vulnerability, and PTV had not responded to his reports other than by informing the police.

PTV's approach redirects the focus away from its own poor implementation of security controls, its lack of best practice in vulnerability handling, its failure to respond to vulnerabilities reported to it, and the consequential potential seriousness of the impact on PTV's many customers (many of whom are customers by default rather than by choice, as PTV is the sole supplier of public transport services), and onto a high school student whom it described in the media as responsible for an "attack". Regardless of whether the reported vulnerability was fixed due to Rogers’ report, a more effective response would have been to acknowledge the report and, subsequently, implement a clear set of guidelines for vulnerability reporting.

Theme

This case is illustrative of a persistent theme in which conceptions of authorisation at law and authorisation as understood in the technology and security research context differ markedly, if not fundamentally: “if a user interacts with a server in a way that the protocol does not prohibit but which is upsetting to the server’s operator, should this be construed as ‘unauthorised access’ as a matter of law?”

Despite a relatively simple vulnerability being exploited to provide access to significant quantities of sensitive customer data, PTV downplayed the seriousness of the consequences of the exploit stating that "Customers can rest assured that the database is in no way linked to Myki online accounts and no useable credit card details were stored in the database."280

However, even a subset of the digits of a credit card, such as the last four digits, together with a billing address, can be used as a launching point, as part of an exploit chain, for further attacks against accounts held by third party service providers, with devastating consequences. For instance, in the case of Wired journalist Mat Honan, the last four digits of a credit card obtained from Amazon, which did not consider them of sufficient security importance to obscure on its website, were subsequently used to authenticate the attackers and provide the ability to reset passwords and gain access to Honan's Apple, Gmail and Twitter accounts. This subsequently involved malicious

280 Melbourne schoolboy exposes security flaw in Public Transport Victoria's website. ABC News. January 8, 2014. Available: https://www.abc.net.au/news/2014-01-08/schoolboy-exposes-security-flaw-in-public-transport-victoria27/5190536

hackers remotely wiping the contents of his MacBook using the Find My Mac utility.281 This suggests that the assurances given by PTV were hollow and, potentially, misleading.

PTV Summary

PTV did not have a vulnerability disclosure policy published on its website and did not offer a bug bounty program at the relevant time. Had it done so, Rogers’ reported vulnerability could have been assessed and patched, and appropriate recognition or reward granted. While his actions, in the absence of such a policy, are regrettable, they do not detract from the failings of PTV.

Further, PTV has not implemented such a policy in the three years that have elapsed since the incident, despite the fact that one may have avoided or limited the issues that arose in this instance. Finally, PTV did not respond to a vulnerability that was reported to it in good faith. Collectively, these failures show that, despite significant deviations from best practice, there are few consequences for companies that fail to implement reasonable measures to respond to vulnerabilities reported to them.

Governments, including the US, are attempting to regulate such behaviour282 through the generation of documented policy, though there are, as at the date of writing, no equivalent Australian policies. The need for such policies is further discussed in Chapter 7.

Introduction

In August 2016, security research firm MedSec published a report regarding vulnerabilities in St Jude Medical's products including implantable pacemakers and defibrillators which, according to the report, could allow hackers to exploit vulnerabilities in the wireless interface and potentially deplete the battery or deliver shocks to users.283

281 Honan, M. How Apple and Amazon Security Flaws Led to My Epic Hacking. Wired Magazine. 6 August 2012. Available: https://www.wired.com/2012/08/apple-amazon-mat-honan-hacking/
282 See, for instance, A Framework for a Vulnerability Disclosure Program for Online Systems. July 2017. US Department of Justice. Available: https://www.justice.gov/criminal-ccips/page/file/983996/download
283 Osborne, C. St. Jude Medical releases security patches for vulnerable cardiac devices. Zero Day. ZDNet. January 10, 2017. Available: http://www.zdnet.com/article/st-jude-releases-security-patches-for-vulnerable-cardiac-devices/

MedSec chose not to disclose the vulnerabilities to St Jude directly due to, MedSec alleges, the company's previous failure to remediate vulnerabilities promptly and its “sweeping these issues under the carpet”284.

MedSec Vulnerability Disclosure

For context, at the time St Jude Medical had a page listed on bug bounty platform operator HackerOne that simply requested that “If you identify a potential cybersecurity vulnerability in a St. Jude Medical product, please contact us at [email protected] and a cybersecurity representative will contact you within 10 business days”285 and made no undertakings as to how the vulnerability would be handled. Such a policy did not provide any protection from legal action being brought against the discloser.

MedSec partnered with Muddy Waters Capital, an investment firm specialising in due diligence and investigative research, typically into accounting and other fraud. Through this partnership, they "shorted" the stock – that is, adopted a position in which they would jointly benefit from falls in the share price of St Jude – before publishing their report and simultaneously disclosing the vulnerabilities to the Food and Drug Administration (FDA). The share price fell from $82.88 to $78 upon release of the report.286

St Jude Medical brought a claim287 under S1125 of the Lanham Act, as well as claims for defamation, conspiracy and deceptive trade practices, alleging that MedSec's actions had wrongfully defamed and disparaged St Jude's devices by "publicly disseminating false and unsubstantiated information"288 as part of a conspiracy between Muddy Waters and MedSec.289

The products were subsequently patched290 after an independent report found that they “do not meet the security requirements of a system responsible for safeguarding life-sustaining equipment implanted in patients”291 and the FDA confirmed the vulnerabilities existed. The lawsuit was settled with prejudice

284 Bone, J. MedSec: St. Jude flaws across ecosystem, company prone to hiding issues. The Fly. August 25, 2016. Interview Transcript. Available: https://thefly.com/landingPageNews.php?id=2422996
285 https://hackerone.com/stjudemedical
286 Bloomberg. https://www.bloomberg.com/quote/STJ:US
287 St Jude Medical Inc. v. Muddy Waters Consulting LLC & Ors. Complaint. Case No. 16-cv-03002. Available: https://regmedia.co.uk/2016/09/08/medsec_lawsuit.pdf
288 Ibid. at para 3.
289 Ibid. at para 54.
290 Osborne, C. St. Jude Medical releases security patches for vulnerable cardiac devices. Zero Day. ZDNet. January 10, 2017. Available: http://www.zdnet.com/article/st-jude-releases-security-patches-for-vulnerable-cardiac-devices/
291 St Jude Medical Inc. v. Muddy Waters Consulting LLC & Ors. Case No. 16-cv-03002. Preliminary Expert Report of Carl D. Livitt, October 23, 2016 at para. 10. Available: https://medsec.com/stj_expert_witness_report.pdf

in February 2018, with each party agreeing to bear its own costs. Thus, it appears the lawsuit was a fruitless exercise on the part of St Jude. Subsequently, security researcher Josh Corman has noted the interest of other hedge funds in appropriating MedSec and Muddy Waters’ approach to vulnerability disclosure.292

Outcome

St Jude offers no financial incentive for security researchers to disclose vulnerabilities to it and, indeed, no protection from legal liability for doing so. In this context, it is perhaps unsurprising that security researchers will seek other methods that maximise their interests when considering how to handle vulnerabilities they become aware of.

Had St Jude adopted a more inclusive and transparent approach to its vulnerability handling, it could, potentially, have avoided or minimised costly and lengthy litigation and the negative publicity associated with it. It is particularly telling that the vulnerabilities published ultimately appear to have been validated, and that St Jude has not altered its approach to vulnerability handling.

While this case study is an example of behaviour in the absence of a bug bounty program, it usefully highlights the ongoing expansion of legal doctrines that vendors attempt to use to suppress vulnerability disclosure – in this case, the use of defamation and conspiracy was notably novel but, ultimately, unsuccessful.

These case studies highlight some of the ways in which security researchers, whether operating under bounty programs or otherwise conducting security research, can be inappropriately treated by vendors, resulting in legal and other liability. They also highlight the power disparity faced by researchers, which limits the ways in which they are able to redress this treatment.

In the case of Wineberg, threats made to an employer by an enormous multi-national have potentially disastrous career consequences and, in the case of Finisterre, threats of potential criminal consequences under the CFAA were, in part, contributory to a decision to forego a USD $30,000 bounty payment to which he was otherwise entitled.

292 Kan, M. Stock-tanking in St. Jude Medical security disclosure might have legs. IDG. January 9, 2017. Available: http://www.pcworld.com/article/3155990/security/stock-tanking-in-st-jude-medical-security-disclosure-might-have-legs.html

Limited Redress

Vendors appear able to adopt shifting positions in relation to allowable behaviour under their programs and to seek to enforce changes post facto by modifying, or seeking to modify, contractual provisions or by purporting to unilaterally terminate them. This appears to occur with little legal or financial consequence to the vendor, though reputational consequences appear to be unavoidable.

An element of acceptance or defeatism is apparent among security researchers in the face of vendor behaviour that is contrary to or inconsistent with published terms. In the case of Finisterre’s colleague, this resulted in a bounty program known to seek to modify terms after vulnerabilities had been disclosed being ignored as a productive avenue for disclosure. This view was promulgated among the highly skilled and tightly connected researcher community, and it is likely that this reputation will persist even though DJI subsequently modified the published terms.

Security researchers may also be deterred by the cost, time or uncertainty of litigation were they to pursue claims for breach of contract or otherwise. Later Chapters consider whether there are, or could be, lower cost and more flexible avenues by which security researchers can seek redress through regulatory or non-regulatory means, and whether there is a role for standardised terms, legislative intervention, or the harnessing of the collective power of the security researcher community.

The Rogers case study, while not an example of a bounty program, does provide some insight into the ways in which criminal legislation struggles to deal with behaviour that is, at least in part, consistent with security research undertaken in good faith and with the goal of improving security outcomes.

Bounty program by press release

Adoption of a bug bounty program is an increasingly common means to demonstrate engagement with the security research community and, ostensibly, a commitment to better security of an organisation’s products. However, the case of DJI suggests that such engagement and security outcomes can be absent, and that a program's launch may act as “virtue signalling”293 rather than as a measure effective at actually improving security.

293 Ashford, W. Bug bounties not a silver bullet, Katie Moussouris warns. Computer Weekly. 10 October 2018. Available: https://www.computerweekly.com/news/252450337/Bug-bounties-not-a-silver-bullet-Katie-Moussouris-warns

The example of DJI demonstrates that a “bug bounty program by press release” can have negative consequences for three groups. Firstly, DJI received significant negative press coverage for its handling of the Finisterre matter. Secondly, Finisterre received threats of legal action involving significant criminal penalties and forewent a USD $30,000 bounty payment. Finally, users of DJI products potentially had their personal information publicly released due to exploitation of the vulnerability during the period that the bounty program was not operating in an efficient manner. Other security researchers are likely to be discouraged from submitting vulnerabilities to a bounty program where doing so results in a threat of criminal prosecution, meaning such vulnerabilities remain discoverable by malicious actors; alternatively, some security researchers may seek to monetise them in grey or black markets if they are not offered a safe harbour by the vendor.

Summary

This Chapter has provided an analysis and description of some of the limited published examples of unexpected or, indeed, unprincipled vendor behaviour under the terms of bounty programs – in the case of St Jude, under an incomplete program and, in the case of Public Transport Victoria, in the absence of one. These cases highlight that the risks to security researchers are manifold, including while operating under the purported protection of the terms of bounty programs.

The behaviour highlighted appears to perpetuate the historical tension between two poles of vulnerability disclosure: the first, a degree of transparency and openness by vendors in encouraging and rewarding disclosure under the terms of bounty programs; the second, the assertion of legal rights and the application of legal (and other) pressure to suppress security research and vulnerability disclosure. This exposes security researchers to a high degree of risk when their behaviour under a program is unexpected or inconsistent with the vendor or program operator’s self-interest. This chapter has highlighted that model clauses to protect security researcher interests are an important reform step and should be the focus of further work and research – methods for achieving this are suggested in the concluding Chapter, including legislative reform, the use of Creative Commons-like licences, independent organisations and non-legislative government guidelines (such as those promulgated by the Department of Justice).

In contrast to this Chapter, the two contractual chapters that follow analyse the bounty terms themselves, rather than published behaviour and vendor responses. Consequently, much of that analysis is based on a consideration of the potential future application of the terms and the law, rather than on the actual examples analysed in this Chapter.

Chapter 5 – Contract Formation, Elements of Contract

This Chapter analyses the legal effect of some of the terms of three significant incentivised coordinated vulnerability disclosure programs, including the terms of Facebook (and its subsidiary, Instagram) as previously examined in the case studies identified in Chapter 4. An exhaustive clause-by-clause exposition of the terms is beyond the scope of this thesis; instead, the analysis introduces the potential effect of selected terms under Australian law. As this thesis is focused on the Australian jurisdiction, the analysis is not conducted through the lens of the Uniform Commercial Code (UCC). The terms selected for analysis have been chosen for their centrality to the research question, their importance to security researchers participating in bounty programs and because they are, on their face, likely to reveal flaws or unintended consequences.

The terms of the vulnerability disclosure programs, prima facie, deal with a wide subject matter with legal implications for security researchers, including liability under a number of criminal and civil liability regimes. Further, they deal with a range of technical and commercial matters regarding the assessment and categorisation of vulnerabilities, payment of bounties, intellectual property rights and restrictions or limitations on public disclosure of discovered vulnerabilities. This Chapter focuses on contractual issues arising out of bug bounty terms, including the peripheral matter of intellectual property rights. Criminal matters and other civil matters are, consistent with the research question which examines “contractual civil legal risk”, outside the scope of this thesis.

The aim of this Chapter is to examine the terms of the three selected programs: Google, Facebook and the US Department of Defence (hosted by HackerOne). Full copies of the terms of the programs are found in Appendix A. The contractual elements of formation, scope, vitiating factors and termination will be examined in the context of these terms.

These terms should set out, with sufficient precision and certainty, the legal basis under which security researchers explore, discover and disclose vulnerabilities; how they are rewarded for doing so; and the allocation of risk between the researcher and the program operator in undertaking this activity.


This section introduces the views expressed by bug bounty program operators, US Government bodies, industry bodies and contemporary research regarding the legal position of security researchers participating in bounty programs, to provide the regulatory (including self-regulatory) context in which the terms are examined.

Bounty Operators

The positioning of the programs in the popular press by bounty operators is that participation in bounty programs occurs without legal risk - "HackerOne is turning hacking into a paid job that won't get you arrested".294 This rhetoric is similarly promoted by the operators of the programs themselves, who state that a program creates "a legal avenue for digital security researchers who find and disclose vulnerabilities."295 This Chapter commences with an examination of the extent to which this promise is realised on a contractual basis through the programs’ terms. Later Chapters will examine this question on a legislative basis.

The issues chosen for examination in this Chapter are the risks to which security researchers are exposed where vulnerabilities are disclosed pre-contractually or where a contract is not formed; issues around formation of contract in satisfying each of the required elements, including consent/acceptance and the certainty of terms; and the effect of breaches by the program operator.

National Telecommunications and Information Administration (“NTIA”)

The National Telecommunications and Information Administration’s research found that sixty percent of researchers cited threats of legal action in choosing not to disclose to a particular vendor.296 In this context, protection from civil and criminal legal liability is clearly a key concern in security researchers’ choice to participate in bug bounty programs.

294 Perlroth, N. HackerOne is turning hacking into a paid job that won't get you arrested. Australian Financial Review. June 8, 2015. Available: http://www.afr.com/technology/enterprise-it/hackerone-is-turning-hacking-into-a-paid-job-that-wont-get-you-arrested-20150608-ghirv0 295 U.S. Department of Defense. DoD, Army Ramp Up CyberSecurity Measures With New Initiatives. November 21, 2016. Available: https://www.defense.gov/News/Article/Article/1010626/dod-army-ramp-up-cybersecurity-measures-with-new-initiatives 296 NTIA Awareness and Adoption Working Group. Vulnerability Disclosure Attitudes and Actions: A Research Report from the NTIA Awareness and Adoption Group. 15 December 2016. Available: https://www.ntia.doc.gov/files/ntia/publications/2016_ntia_a_a_vulnerability_disclosure_insights_report.pdf

Business Software Alliance (“BSA”)

The Business Software Alliance (“BSA”), founded by Microsoft and representing many of the world’s largest software companies including Adobe, Oracle, Apple and IBM, suggests that security research exemptions to the Digital Millennium Copyright Act (“DMCA”) should be construed extremely narrowly since “a significant amount of independent security research is conducted every day with little to no interference from the [DMCA]’s anti-circumvention prohibitions”.297 It further claims that there is “very little in the record to demonstrate that researchers are declining to engage in good faith security testing”298. The BSA does not claim that such research occurs without breaches of the DMCA, merely that such breaches are not prosecuted.299

This suggests that the prevailing expectation of the software industry, or at least its most prominent advocacy body, is that residual legal risk in undertaking security research should be borne by the security researcher, notwithstanding that such research may otherwise occur consistently with the terms of the bounty programs of many of these companies.

Risks in practice

Gamero-Garrido, A. et al. find that almost a quarter of security researchers had received legal threats or had legal action commenced against them during the conduct of their research.300 In that study, the researchers expressly requested permission from technology companies to “evaluate, alter, and potentially circumvent security mechanisms (as defined by the Digital Millennium Copyright Act - U.S. Title 17 Section 1201) of the Sample Product for legitimate research purposes.” The companies were drawn from the consumer electronics and software sectors and, collectively, had $1.5 trillion in revenue, representing 8% of U.S. GDP.301

Where more information was requested by the surveyed companies regarding the proposed testing, the researchers replied that they would undertake a “combination of fuzz testing against network

297 Business Software Alliance. Long Comment Regarding a Proposed Exemption Under 17 U.S.C. 1201. December 2015 at p.5. Available: https://www.copyright.gov/1201/2015/comments-032715/class%2025/BSA_The_Software_Alliance_Class25_1201_2014.pdf 298 Ibid. at p.5 299 Ibid. at p.5 300 Gamero-Garrido, A. Savage, S. Levchenko, K. Snoeren, A. Quantifying the Pressure of Legal Risks on Third-Party Vulnerability Research. In Proceedings of the 24th ACM SIGSAC Conference on Computer and Communications Security (CCS '17). ACM, Dallas, TX, USA. September 2017. 301 Ibid. at p.4

interfaces, reverse engineering using tools like IDA Pro, runtime tracking of input buffers and so on.”302 303
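The fuzz testing the researchers proposed can be illustrated with a minimal sketch. The function and parser names below are hypothetical and purely illustrative; production fuzzers are coverage-guided and far more sophisticated, but the core technique of supplying large volumes of random input to a program and recording the inputs that crash it is the same.

```python
import random

def naive_fuzz(parse, trials=1000, max_len=64, seed=1):
    """Feed random byte strings to a parser and record inputs that crash it.

    A toy illustration of fuzz testing; real fuzzers mutate known-good
    inputs and use coverage feedback rather than purely random bytes.
    """
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        # Generate a random-length, random-content byte string.
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(max_len)))
        try:
            parse(data)
        except Exception as exc:  # any unhandled exception is a potential bug
            crashes.append((data, repr(exc)))
    return crashes

def fragile_parse(data: bytes) -> int:
    """A deliberately buggy 'parser': fails on inputs shorter than 2 bytes."""
    return data[0] + data[1]

crashes = naive_fuzz(fragile_parse)
```

Even this naive approach quickly surfaces the boundary-condition bug: any input of fewer than two bytes raises an unhandled exception, which is exactly the class of defect that, in network-facing code, can indicate an exploitable vulnerability.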

Eighteen of the surveyed companies had published vulnerability disclosure policies. Of this group, explicit consent to conduct research that included circumvention of security mechanisms under the DMCA was granted in only 40% of cases. This approval rate is nonetheless higher than that of companies without a vulnerability disclosure policy, which approved the request in approximately 20% of cases.304

These results may lead to a number of conclusions. Firstly, companies may not believe that a published vulnerability disclosure policy has the effect of exempting researchers participating under it from legal liability under the DMCA and, potentially, from other legislation including the CFAA. In this case it can be presumed that the security researcher’s actions are, therefore, potentially unauthorised.

This conclusion is difficult to draw with certainty as it would turn on the content of each of the individual policies and the companies surveyed. However, it does suggest that an assessment of the contractual terms is of vital importance to understanding the risk to which researchers are exposed. The research does not name the companies, so cross-matching the survey results with the content of their published vulnerability disclosure policies is not possible; it therefore cannot be determined whether the terms of the policies align with the responses to the survey.

Other factors may be at play, including the individual parties responsible for communications within the surveyed companies, or a belief that the vulnerability disclosure policy spoke for itself and that parsing it further, and granting explicit permission, was unnecessary.

302 Ibid. at p.5 303 Fuzz testing or “fuzzing” involves the automation of a wide range of random inputs to a program to test it under as wide a range of conditions as possible. 304 Ibid. at p.8.


The programs of Google, Facebook and the U.S. Department of Defence (hosted on HackerOne’s platform) were selected for analysis. Google and Facebook were chosen since they operated some of the earliest bug bounty programs, as discussed in the history of bug bounties in Chapter One, and are technology market leaders. Facebook and Google collectively have over USD$100 billion in revenue, billions of users, are responsible for billions of lines of code and have paid out greater than USD$10 million in bounty payments to security researchers.305 306 307 308 Consequently, the terms they apply have far-reaching effect, both in impacting a large number of security researchers and in influencing the development of the terms of other bug bounty program operators. The Department of Defence was selected to represent the way in which non-corporate entities approach their implementation of bug bounty terms and highlights the use of an intermediary, in this case HackerOne, which hosts the bounty program on its own platform. HackerOne also interposes a set of terms common to all programs hosted on its platform. In the event of a conflict between the terms specific to a program and HackerOne’s General Terms, the General Terms prevail.

The interaction of the primary and supplemental terms in each of the programs may affect the clarity, completeness, consistency and certainty of the content of the agreement struck, or purportedly struck, through security researchers’ participation under the programs. The next section considers whether each of the programs forms an enforceable contract or whether the terms are merely policy documents that guide the company’s internal behaviour.

305 Meyer, D. Here's How Much Google Paid Out To Security Researchers Last Year. Fortune Magazine. January 29, 2016. Available: http://fortune.com/2016/01/29/heres-how-much-google-paid-out-to-security-researchers-last-year 306 Hackett, R. Google Paid $3 Million to Hackers in 2016. Fortune Magazine. February 2, 2017. Available: http://fortune.com/2017/02/02/google-android-chrome-bug-bounty/ 307 King, R. Facebook's bug bounty program paid out $1.3 million in 2014. ZDNet. February 25, 2015. Available: http://www.zdnet.com/article/facebook-bug-bounty-program-paid-out-1-3-million-in-2014/ 308 Camarda, B. $5 million dollars paid as Facebook’s bug bounty program turns 5. Naked Security. Available: https://nakedsecurity.sophos.com/2016/10/17/5-million-dollars-paid-as-facebooks-bug-bounty-program-turns-5


Before examining the content of each of the terms of the selected programs, the initial threshold question is whether the terms form a binding and enforceable contract between the bounty program operator and the security researcher. In some cases, bug bounty program operators describe the way in which they handle vulnerabilities as a “policy” – a statement of the manner in which a company internally intends or proposes to act, which is likely not to be binding on it on a contractual basis. The significance of the distinction is that, in the absence of a binding contract, the rights and remedies in the event of a breach of the terms by either party and, particularly, the legal protections afforded to security researchers in undertaking activity that may otherwise attract criminal or civil liability, are potentially reduced or eliminated.

In the event that a binding contract is not formed between the security researcher and the bounty program operator there may, nonetheless, be remedies available to a security researcher on the basis of representations made by bounty operators. This Chapter is limited to a consideration only of the contractual effect of the purported terms, rather than of ancillary material.

Introduction

Formation of a valid contract, under Australian law, requires fulfilment of certain elements: (i) agreement (typically through acceptance of an offer); (ii) consideration; (iii) certainty; and (iv) an intention to create legal relations.309 310 A body of law has evolved to deal with the fulfilment of these elements in the online context. However, its precise bounds are unclear and its application in the context of bug bounties is unexamined. In the case of the bug bounty programs examined, each program’s terms are supplemented by additional terms which are, in some cases, in conflict with them, or the relationship between the two is ambiguous, as discussed further below.

309 Paterson, J. Robertson, A. Duke, A. Contract: Cases and Materials Lawbook Case Co. Twelfth Edition, 2012 at p.41. 310 Seddon, N. The Laws of Australia. Thomson at 7.1.10

Offer, Acceptance and Validity

The first element to consider in the formation of a contract is acceptance of an offer. The contemporary application, and a brief history of the evolution, of methods of acceptance in the online world is considered below.

Software licensing – background

Software distribution and licensing has evolved from physically purchased packaged products, to software that is downloaded from the internet and installed and, most recently, to software hosted entirely online and most often accessed via a web browser or a mobile application as in the case of each of the bug bounty programs examined. This has similarly resulted in the evolution of legal doctrines related to the formation of contracts through stages known as shrink-wrap311, click-wrap, browse-wrap and sign-in wrap. These terms are explained below.

Shrink-wrap

Under a shrink-wrap license, the packaging of the software states that the software is subject to a license, included in printed form, to which the user is bound by opening the shrink-wrapping. To decline or reject the license the prospective licensee may return the unopened software, the offer being made upon purchase of the software on the terms set out on the packaging.312 A license is deemed to be entered into for the purchased software when the user opens the software packaging and breaks the shrink-wrapped plastic seal, fulfilling the element of acceptance: the course of conduct specified on the label constitutes acceptance.313 Each of the bounty programs examined occurs entirely online and thus shrink-wrap acceptance of the agreement does not occur.

Click-wrap

Under a click-wrap agreement the user is presented with the option to agree to a set of terms upon installation or first execution of the software314 – whether it is downloaded or purchased in physical form. The user signifies their acceptance of terms by clicking a button and thus actively and expressly assents to be bound by the terms. 315

311 Smith, GJH Internet Law and Regulation 2007, 4th ed, Sweet & Maxwell: London at p.357 312 Smith, GJH Internet Law and Regulation 2007, 4th ed, Sweet & Maxwell: London at p.823. 313 Lemley, M. Terms of Use 91 Minnesota Law Review 2006 pp 459-467 at p.467 314 Davis, N. Presumed assent: The judicial acceptance of clickwrap. Berkeley Technology Law Journal 22 Berkeley Tech. L.J. 2007. Available: http://heinonline.org/HOL/Page?handle=hein.journals/berktech22&div=31&g_sent=1&casa_token=&collection=journals 315 Davis, N. Presumed assent: The judicial acceptance of clickwrap. Berkeley Technology Law Journal 22 Berkeley Tech. L.J. 2007 at p.47.

Page 96 of 178 Rob Hamper. Faculty of Law. Masters by Research Thesis. Browse-Wrap

In contrast to shrink-wrap and click-wrap, a browse-wrap agreement simply provides a link to a set of terms which the user can follow if they wish to review them but to which they do not have to actively assent.316 In such agreements assent is implied and not actively communicated through a button such as “I agree” or “I Accept”.317 The link typically appears at the bottom of the page under a label such as “Terms” or “Terms and Conditions”.

Sign-in and sign-up wrap

More recently, commentary suggests that a form of acceptance in which the user agrees to a set of terms only at the time they initially log in or sign up to a service constitutes a new category of “sign-in wrap”.318 This is the approach adopted by Facebook and Google, discussed further below.

Relevant U.S. Authorities

There are a number of U.S. authorities on browse-wrap licenses, though Australian judicial consideration is scarcer, more reliant on analogous reasoning from similar cases319 and apt to accept issues relevant to the browse-wrap element without critical examination.320 In Southwest Airlines v BoardFirst321 the U.S. District Court for the Northern District of Texas, considering the issue of offer and acceptance in the broader context of validity, held that: “the validity of a browsewrap license turns on whether a website user has actual or constructive knowledge of a site’s terms and conditions prior to using the site.”322

The Court went on to consider that in the absence of adequate notice, such agreements would not be binding:

316 Toto, S. Buffington, K. How Binding Is Your Browsewrap Agreement? Pillsbury Winthrop Shaw Pittman LLP. June 6, 2016. Available: https://www.lexology.com/library/detail.aspx?g=4b4b93da-c40a-4724-916c-3bdd9011698d 317 Kunz, C. Ottaviani, E. et al. Browse-Wrap Agreements: Validity of Implied Assent in Electronic Form Agreements. The Business Lawyer. Vol. 59, No. 1 (November 2003), pp. 279-312 318 Hughes, G. Sutherland, A. Enforcement problems with online contracts: an Uber case study. Davies Collison Craig. 5 October 2016. Available: http://www.davies.com.au/ip-news/pdf/enforcement-problems-with-online-contacts-an-uber-case-study 319 See, for instance, Surfstone Pty Ltd v Morgan Consulting Engineers Pty Ltd [2016] 2 Qd R 194 320 Benson v Rational Entertainment Enterprises Ltd [2015] NSWSC 906 (10 July 2015) 321 Civ. Act. No. 3:06-CV-0891-B (N.D. Texas, September 12, 2007) 322 Ibid. at 5

“Where a website fails to provide adequate notice of the terms, and there is no showing of actual or constructive knowledge, browse-wraps have been found unenforceable.”323

This early authority on the point suggests that adequacy of notice is a key issue in determining formation and that the way in which each of Google, Facebook and the Department of Defence present their terms, or links to the terms, is central in determining the issue.

In Meyer v Kalanick324, users of the Uber ride-sharing application were presented with screens containing fields into which they were prompted to enter their registration details, including names and phone numbers, and were required to assent to the terms of service through the language “by creating an Uber account, you agree to the Terms of Service and Privacy Policy” (as further set out below). The latter language was described in the judgment at first instance as appearing in a “considerably smaller font”325 that was “barely legible”326, raising additional issues such as the bargaining power of the parties and the nature of the contract. Prospective users were able to proceed to the end of the registration process without clicking on the link or otherwise actively assenting to the terms. At first instance, it was held that these terms were not enforceable, though on appeal they were upheld, partially on the basis that the relevant screens were “uncluttered”327 and the highlighted blue text would indicate to “a reasonably prudent smartphone user”328 that it was a link to further information, and that the Plaintiff “unambiguously manifested his assent.”329

The relevant registration screens are extracted at Addendum B of the judgment.330

323 Ibid. at 5 324 Meyer v. Uber Technologies, Inc., No. 16-2750 (2d Cir. 2017) 325 Meyer v Kalanick No 15 Civ.9796, 2016 WL 4073012 at p.12 326 Ibid. 327 Meyer v. Uber Technologies, Inc., No. 16-2750 (2d Cir. 2017) at p.24 328 Ibid. 329 Ibid. at p.30. 330 Ibid. at Addendum B, p.33


Australian Authorities

There are no Australian cases that directly consider issues of formation in a click-wrap, browse-wrap or sign-in wrap331 context, though Courts have otherwise upheld agreements entered into in this manner on the assumption that such elements were fulfilled where the point was not in contention.332

An examination of authorities related to the incorporation of terms by notice provides useful instruction in how Australian Courts may consider click-wrap, browse-wrap and sign-in wrap cases. This point was recently considered by the Queensland Supreme Court in Surfstone.333 In that case, a set of subsidiary terms, not appended or otherwise attached to the head contract, nor provided by the other party, was deemed to have been incorporated fully into the contract. In considering these issues the Court noted prior High Court authority334 that a Court "should adopt a commercial approach in resolving any uncertainties in the language, and give a meaning to words where that is possible, without being unduly pedantic or narrow".335 However, Surfstone was decided in the context of a commercial contract between two commercial parties and it may be that consideration of the reasonable

331 Manwaring, K. Enforceability of Clickwrap and Browsewrap Terms in Australia: Lessons from the US and the UK. Studies in Ethics, Law, and Technology, Vol. 5, Issue 1 (2011), Art. 4. Available: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2251593 332 See, for instance, Peter Smythe v Vincent Thomas [2007] NSWSC 844 333 Surfstone Pty Ltd v Morgan Consulting Engineers Pty Ltd [2016] QCA 213 334 Upper Hunter County District Council v Australia Chilling and Freezing Co Ltd (1968) 118 CLR 429 at 436-437; Electricity Generation Corporation v Woodside Energy Ltd (2014) 251 CLR 640 at [35] 335 at para. [40]

expectations of consumers regarding the inclusion of subsidiary or ancillary terms may be considered in a different light.

Other relevant points for consideration include the incorporation of terms by a course of dealing (in which the element of consent would be satisfied impliedly rather than expressly) and the potential exclusion of certain terms where they constitute an “onerous exemptive provision”.336 In Toll v Alphapharm, the onerous exemptive provision was an exclusion of liability clause which, the defendant’s counsel conceded, would have needed to have been brought to the attention of the contracting party in order for it to be binding. The clause was held not to apply, despite the signature of the plaintiff. This suggests that users may not be bound by such terms even where they are otherwise required to have read the terms before assenting to them. Other vitiating factors discussed below, particularly as they apply to consumer contracts, may also be relevant.

Where terms are deemed to be standard, they may be incorporated by reference, as in the case of a contract formed where a ticket is taken on entering a carpark despite the terms not being provided.337 However, this is subject to limitation by the Courts: where terms are incorporated by notice, as opposed to by signature338, they will only be deemed to have been incorporated by reference where the vendor has taken reasonable steps to bring them to the attention of the customer when the contract is formed.339

More recently, further limitations have been imposed by legislation such as the unfair contract terms regime under the Australian Consumer Law (Schedule 2 to the Competition and Consumer Act 2010 (Cth)), which was extended in 2016 to apply to standard form small business contracts, as discussed below in the unfair contract terms section of this Chapter.

Facebook

When signing up for a Facebook account the user is presented with the statement: "By clicking Create an account, you agree to our Terms and confirm that you have read our Data Policy, including our Cookie Use Policy." These “Terms” are included as Appendix A.

336 Toll (FGCT) Pty Ltd v Alphapharm Pty Ltd [2004] HCA 52 at para. 66. 337 Thornton v Shoe Lane Parking Ltd [1970] EWCA Civ 2 338 L’Estrange v F Graucob Ltd (1934) 2 KB 394. 339 Parker v The South Eastern Railway Company (1877) 2 CPD 416

Facebook requires that users have agreed to these terms to access their own account or one of the test accounts provided for the purposes of exploring security risks and software vulnerabilities. The primary terms do not directly address the conduct of any security research activities but do, for instance, expressly exclude any activity that breaches the law, at 18.11: “You will comply with all applicable laws when using or accessing Facebook”. This may include access without authorisation under the Criminal Code, a breach of technological protection measures under the Copyright Act and analogous unauthorised access offences under the US Computer Fraud and Abuse Act (CFAA). These offences have all been used in the context of security researchers undertaking activity without consent.

Facebook's terms do contemplate that supplemental terms may apply; the Statement of Rights and Responsibilities provides that Facebook "may ask you to review and accept supplemental terms that apply to your interaction with a specific app, product, or service." However, the webpage containing information regarding security vulnerability research and disclosure does not clearly state that it contains such supplemental terms. This page contains sections including “background information”, the “Responsible disclosure policy”, "Bug bounty programme terms", "Bug bounty programme scope" and sections which otherwise relate to participation in the bug bounty program (collectively, the Whitehat terms). They appear on an unlinked and otherwise unrelated page located at http://www.facebook.com/whitehat. The page appears as follows:

[Screenshot of the Whitehat page]340 All of the text appears under a heading simply called “Information” rather than “Terms”, “Agreement” or another label that would suggest that a user must agree to them. The webpage does not provide a method of assenting to the terms through an “I Agree” button or similar, as contemplated by the statement in the primary terms that Facebook “may ask you to review and accept supplemental terms.”

Accepting the authority in Meyer and the Australian authorities above, it is reasonably certain that the primary terms of Facebook are binding on the user in the Australian and U.S. jurisdictions. However, the position with respect to the supplemental terms is far less clear.

340 Accessed at http://www.facebook.com/whitehat 4 October 2017.

The terms are not brought to the attention of the user at the time they sign up for a Facebook account, nor when they log in to use it. The user is not asked to agree to the collective Whitehat “terms” at any point. Thus, it is difficult to conclude that the Facebook Whitehat “terms” are effective in forming a contract under ordinary principles.

When a security researcher submits a vulnerability report they are asked to identify the vulnerability type and its scope, and then to provide a summary of the issue and describe how to recreate it. At the point of submission, a statement appears that “by submitting, you are agreeing to the terms of the bug bounty program as found on https://www.facebook.com/whitehat”. This supports the argument that a contract is not formed any earlier.

However, a determinative outcome to the question would require a fuller analysis of a number of elements such as a contract being formed through a course of dealing or course of conduct. In Kendall v Lillico341, the regularity of dealings between the parties and that certain oral terms were "continuously made known" meant that, despite an exclusion clause (which would have otherwise excluded oral representations), the terms were binding on the basis of prior dealings.

In the case of vulnerabilities submitted under a bug bounty program, the authorities in Kendall v Lillico and La Rosa v Nudrill provide some guidance. For terms to apply by a course of dealing, it would seem to be required that previous submissions of vulnerabilities, followed by assessment and payment, occurred with sufficient proximity and frequency342 for any terms to be implied. Additionally, it is required that any terms claimed to be implied are clearly identifiable and that the current dealing fits within the previous dealings for it reasonably to be claimed that they applied.343

On balance, it appears that the formation of a contract by a course of dealing in the Australian Federal jurisdiction faces significant, but not insurmountable, hurdles were a contract not otherwise formed as set out above.

Google

Google adopts a similar approach to Facebook and has users assent when signing up, though users can access a number of Google services without being signed in; in that case they are purportedly bound by terms that appear at the bottom of each relevant page under a hyperlink labelled “Terms”.

341 [1969] 2 AC 31 342 La Rosa v Nudrill Pty Ltd [2013] WASCA 18 343 Kendall v Lillico [1969] 2 AC 31

Google similarly contemplates that other terms, in addition to the standard “Terms of Service”, may apply – “Our Services are very diverse, so sometimes additional terms or product requirements (including age requirements) may apply”344 – but is not explicit as to how or when these additional terms would apply.

Google sets out the details of its bug bounty under a page titled "Google Vulnerability Reward Program (VRP) Rules".345 It does not take the form of a typical legal agreement. Its content is largely technical and commercial, focused on issues of scope and payment of bounties. It does not deal with typical legal issues such as terms, termination, allocation of risk between the parties, consent, warranties or intellectual property. The only exception is a small section at the bottom labelled "Legal points", which addresses only export controls, the status of the program and a requirement that laws are not violated or data compromised. The legal points section provides that:

Legal points

We are unable to issue rewards to individuals who are on sanctions lists, or who are in countries (e.g. Cuba, Iran, North Korea, Sudan and Syria) on sanctions lists. You are responsible for any tax implications depending on your country of residency and citizenship. There may be additional restrictions on your ability to enter depending upon your local law.

This is not a competition, but rather an experimental and discretionary rewards program. You should understand that we can cancel the program at any time and the decision as to whether or not to pay a reward has to be entirely at our discretion.

Of course, your testing must not violate any law, or disrupt or compromise any data that is not your own.

The “Rules” do not address their interaction with the main Terms; nor do they use language suggesting that either Google or the user is bound by contract, such as “terms”, “agreement” or “contract”.

344 Google. Terms of Service. April 14, 2014. Available: https://www.google.com/intl/en/policies/terms/
345 Google. Vulnerability Reward Program (VRP) Rules. Available: https://www.google.com/about/appsecurity/reward-program/

In the event that the program terms were not binding on the basis of acceptance of an offer, consideration should be given to how they may become binding through a course of dealing. In the context of the exploration, discovery and disclosure of software vulnerabilities, it may be that acceptance is constituted by any of these phases.

For comparison, Microsoft, a leader in bug bounty programs, provides that users are bound by the terms of its program upon submission of a discovered vulnerability: "If you send us a submission for this program, you are agreeing to these terms. If you do not want to agree with these terms, do not send us any submissions or otherwise participate in this program."346 This approach matches the application of HackerOne’s customer terms, which apply as between the security researcher and the company operating the bounty program but not between the security researcher and HackerOne itself. HackerOne is explored further in section [X] below.

In contrast to the process at Facebook, whereby submission of a vulnerability explicitly notes the user’s agreement to the bounty terms, Google’s process of submitting a vulnerability via [x] does not do so.

Department of Defence – HackerOne

HackerOne operates its legal framework as follows, each of the elements of which are further discussed below:

346 Microsoft. Frequently Asked Questions about Microsoft Bug Bounty Programs. Available: https://technet.microsoft.com/en-us/security/dn425055.aspx

General Terms and Conditions (between HackerOne and “Customer” or “Finder”)

Finder Terms (between HackerOne and Finder)

Customer Terms (between HackerOne and Customer)

Disclosure Guidelines (not binding)

Program Policy (between Finder and Customer)

General Terms and Conditions

To participate in the US Department of Defence program, each user is required, as a pre-requisite, to be a member of the HackerOne platform and must register to become one. This occurs via a sign-up process where it is noted that “By clicking 'Create account', you agree to our Terms and Conditions and acknowledge that you have read our Privacy Policy and Disclosure Guidelines.” The HackerOne “Finder Terms”347 are also noted as being agreed to by the security researcher (or “Finder” as HackerOne defines them).

Other relevant considerations are whether the program forms a contract between the Finder and the Company operating the bounty, or a three-party agreement as between HackerOne, Finder and Company.

Disclosure Guidelines

The Disclosure Guidelines are not identified at the sign-up stage as being agreed to by the user; the user is merely required to attest that they have read them. These Disclosure Guidelines set out important issues related to the vulnerability discovery and disclosure process, which are discussed further below.

347 HackerOne. Finder Terms. Available: https://www.hackerone.com/terms/finder


Turning to the interaction of a further set of subsidiary terms, the Terms and Conditions expressly draw the user’s attention, under a section labelled Vulnerability Guidelines, to the fact that the Terms and Conditions “are superseded by individual Program Policies in the event of a conflict.” Each of these Program Policies is prepared by the individual Customer operating the program, as distinct from the vulnerability researcher or Finder.

The Finder Terms further note that “By making any Vulnerability Report available to a Customer, you agree to the Program Policy”.348 This raises the question of whether a provision of a contract between the security researcher and HackerOne is effective in creating a new contract between the security researcher and the Customer.

The opening paragraph of the Department of Defense “DoD Vulnerability Disclosure Policy” states that its purpose is to “provide clear guidelines” “for conducting vulnerability discovery” and “submitting discovered vulnerabilities to DoD.” It does not state that it is an agreement between the user and the Department of Defense.

This statement raises a number of issues. Firstly, there is the question of whether the “policy” and its “guidelines” are binding as a matter of contract law and, potentially, whether they constitute misrepresentations that induced a contract to be entered into, which may render it voidable and, thus, rescindable.

Secondly, if the first question is satisfied, and the terms of the HackerOne Finder Terms are interpreted literally, a conflict exists as to when the DoD Vulnerability Disclosure Policy becomes binding on the security researcher. The HackerOne terms state that the terms are effective upon disclosure of a vulnerability to the Customer, in this case the DoD. By contrast, the DoD Policy states that the guidelines would apply also to the discovery process. This has a markedly different effect on the security researcher. If the former position applied, the security researcher would not be afforded any of the purported protection from prosecution under criminal legislation, or from other civil action, in the event they did not discover a vulnerability or during the pre-reporting discovery phase; if the latter applied, they would have contractual remedies in the event such prosecution or civil action were commenced. Commentary from security researchers, discussed further below, suggests that this reconnaissance phase is where much of the technical innovation in the bug bounty space has occurred and is a key focus of security

348 At para 3.

researcher effort349 350 and thus uncertainty as to the protection of security researchers’ legal position is of significance in this area.

In an interview with a journalist, the CEO of HackerOne seems to explicitly state that security researchers are afforded no protection until they actually submit a vulnerability to HackerOne (emphasis added):

DB: Do you feel there’s a risk that bounty hunters could turn malicious? Please explain.

… Criminal hackers don’t wait for an invitation to hack. They come to you uninvited. Indeed, computer criminals don’t sign up with a service like HackerOne because it provides them no benefit. We give no special access or special privilege to our bug hunters. The only way to get a benefit from HackerOne is by doing the right thing: reporting a vulnerability to the owner of the system.351

This leaves open the possibility, consistent with the Terms, that activity undertaken either without discovery of a vulnerability, or prior to disclosure of a vulnerability, does not afford the security researcher any protection, at least on a contractual basis. While the liability referred to in the interview with HackerOne is criminal, a security researcher is likely to consider a contractual element to be protective against criminal liability.

It is apparent that the terms, consistent with the spirit of bug bounty programs in general, are drafted to incentivise disclosure of vulnerabilities. In such circumstances it would not be reasonable to afford researchers protection from legal liability arising from the search for and discovery of vulnerabilities that were not subsequently disclosed to HackerOne. The bounty operators could reasonably argue that they should have the right to pursue the security researcher on the basis that the researcher was not participating under the terms of the program and had no express or implied authorisation to access the bounty operator’s systems, and that the operators should not be restricted in pursuing civil and criminal legal action.

349 HackerOne. How to: Recon and Content Discovery. July 25, 2017. Available: https://www.hackerone.com/blog/how-to-recon-and-content-discovery
350 Khan, F. Bug Bounty Hunter Methodology. Presented to Nullcon 2016. March 14, 2016. Available: https://www.slideshare.net/bugcrowd/bug-bounty-hunter-methodology-nullcon-2016
351 Bisson, D. On Bug Bounty Programs: An Interview with HackerOne’s CEO. September 20, 2017. Tripwire. Available: https://www.tripwire.com/state-of-security/vulnerability-management/bug-bounty-programs-interview-hackerones-ceo

However, there is a possibility that, intentionally or otherwise, the effect of the terms is to further exclude protection from liability for security researcher activity that occurs prior to discovery of a vulnerability, but where there is an intent of subsequent disclosure after discovery (were such discovery to occur). In this case there is a strong argument that this results in an unacceptably unfavourable outcome for researchers participating under the ostensible and presumed protection the programs provide. No instances have been identified in which bounty operators have commenced legal action against researchers in these circumstances. Nonetheless, it is desirable that the legal protection afforded to security researchers by the terms of a contract be as certain and unambiguous as possible so that legal risk is, at worst, fully understood and quantifiable and, ideally, reduced to an acceptable level.

On this basis, the Terms and Conditions, and the further terms they include by reference, appear to satisfy the criteria for contract formation. It is less clear that the terms of individual programs (“Program Policy”), which apply upon submission of a vulnerability report, or the Disclosure Guidelines, are similarly binding.

Program Policy

The General Terms provide that HackerOne is not party to the Program Policy: “Any contract or other interaction between a Customer and a Finder, including with respect to any Program Policy, will be between the Customer and the Finder. HackerOne is not a party to such contracts and disclaims all liability arising from or related to such contracts.”

The Finder Terms also seek to exclude HackerOne from any liability under a Program Policy, reinforcing that it is not party to such an agreement and does not accept any liability under it. This may serve to remove HackerOne from contractual liability, but not from liability arising in tort. Though an examination of tortious liability for HackerOne’s actions is relevant to an overall consideration of the liability of the various parties participating in bug bounty programs, it is beyond the scope of this thesis.

Conclusion

As with Facebook and Google, it is tolerably certain that these primary terms fulfil the requirements for formation of a contract under Australian and US law. On this basis, security researchers and the bug bounty program operators have formed a contractual relationship and thus given rise to a variety of rights and remedies that would otherwise not arise in the absence of a

contract. These rights and remedies encompass remedies for breach, potential orders for specific performance, entitlement to damages and a highly developed legal framework under which rights can be examined and adjudicated.

Consideration

An examination of consideration in the context of formation of a contract is included here for completeness rather than as a core area of focus or concern. While consideration has not been a significant factor in litigation of issues related to terms of use on websites generally, and does not raise significant novel issues specific to formation of a contract under bug bounty programs, it is a useful element to analyse in order to understand the range of elements that may be challenged in any litigation related to the operation of bug bounty programs.

Consideration is “the price for which the promise of the other is bought”352 and is a required element for a valid contract to be formed. Actions raising issues of consideration under online contracts have not been brought in the Australian or US context. However, in Ryanair Ltd v Billigfluege.de GmbH353, Hanna J of the Irish High Court considered whether sufficient consideration was provided by Ryanair under the terms of use of its website, holding that:

provision of information as to flights and price of flights by Ryanair on their site, subject at all times to their Terms and Conditions, constitutes a sufficient act of consideration for the purposes of making the contract legally binding

The primary terms of each of Facebook’s and Google’s products offer access to a service and its information in a similar way and without cost to users. On this basis, the requirement for consideration would appear to be satisfied. In the case of HackerOne and its Customer Programs, the ability to participate in a program offering potential financial rewards would appear, on its face, likely to provide sufficient consideration for a contract to be formed.

Promise to pay bounty

While the bounty payments in the case of Facebook and Google are made at their discretion, this issue relates more to the fairness (or otherwise) of the terms than to the issue of consideration at the stage of contract formation, since consideration is satisfied by mere access to the website rather than by payment of a bounty to a security researcher. The issue of payments is discussed further below.

352 Pollock, F. Pollock on Contracts 8th ed. Stevens & Sons 1911
353 [2010] IEHC 47


Alternative Dispute Resolution

Dispute resolution is chosen as an issue for examination due to its potential to bring to the fore the disparity in power between independent security researchers and bounty operators. Each of the examined programs’ terms leaves substantial, indeed almost complete, discretion regarding the assessment of, and payment for, submitted vulnerabilities to the bounty operator. In addition to the general risk of disputes that may arise under any contract, the withholding of payments or the incorrect labelling of submissions as duplicates gives rise to substantial additional risk of dispute for security researchers contributing vulnerabilities under the terms of a vulnerability disclosure program. This section considers the ways in which the terms of the programs handle, or fail to handle, disputes outside the Court system.

Each of the programs fails to provide an alternative dispute resolution mechanism that the security researcher could use as a forum to resolve a complaint prior to, or as an alternative to, Court action. Such mechanisms could include a choice of arbitration or mediation, and could otherwise specify the rules, and the allocation of costs between the parties, to which the processes would be subject. These alternative dispute resolution mechanisms provide a lower cost and more flexible way for disputes to be resolved quickly, without the expense, time and uncertainty inherent in any intervention of the Courts.

Alternative dispute resolution schemes may be particularly advantageous to security researchers who, as independent professionals rather than large corporations, may otherwise lack the financial resources to pursue a claim in Court. Further, disputes regarding payment of a bounty are, in most cases, likely to relate to an individual amount under USD$10,000354; such an amount is unlikely to warrant a legal action in its own right due to the cost and complexity of the proceedings, even if the claim is within the small claims jurisdictional threshold.

354 While the top payouts for bounties may be in excess of this amount, the average payment is much lower. The average on the HackerOne platform is US$500 (https://www.hackerone.com/resources/bug-bounty-basics).

Alternative Dispute Resolution Schemes

Further, the international and, consequently, potentially multi-jurisdictional nature of the disputes means that dispute resolution mechanisms not bound to a specific jurisdiction may be more useful, efficient and effective. For instance, appropriate selection of an alternative dispute resolution mechanism could allow the proceedings to occur entirely online.

Sui generis dispute resolution schemes have been developed to apply specifically in contexts uniquely arising on the Internet and such a scheme may have utility in the context of vulnerability disclosure.

Examples

For instance, the Internet Corporation for Assigned Names and Numbers (“ICANN”) developed, in cooperation with the World Intellectual Property Organisation (“WIPO”), the Uniform Domain-Name Dispute-Resolution Policy (“UDRP”) to resolve, among other things, abusive registrations of domain names (known as cybersquatting). These disputes are addressed through the use of administrative processes and mandatory online arbitration commenced by filing complaints with dispute resolution service providers approved by ICANN.355

While this program has been the source of substantial criticism since its inception356, it has nonetheless resolved approximately 150-300 cases per month in recent years357 and has resolved complaints involving parties from 178 countries358. The costs of proceedings, excluding professionals engaged by the parties, vary from USD$1,500 for 1-5 domain names and a single panellist to USD$5,000 for a three-panellist panel and 6-10 domain names.359 While the volume of cases brought under the terms of bug bounty programs may initially be small, it is likely that the scale would grow considerably over time, matching the growth in the number of security researchers participating under the programs.

355 ICANN. Uniform Domain-Name Dispute-Resolution Policy. Available: https://www.icann.org/resources/pages/help/dndr/udrp-en
356 For an analysis of such criticisms see, for instance, Geist, M. Fair.com?: An Examination of the Allegations of Systemic Unfairness in the ICANN UDRP. Brooklyn Journal of International Law Vol 27.3 2002 pp. 903-938.
357 WIPO. Domain Name Dispute Resolution Statistics: Filing and Parties, Total number of cases per year. Available: http://www.wipo.int/amc/en/domains/statistics
358 WIPO. Geographical Distribution of Parties. Available: http://www.wipo.int/amc/en/domains/statistics/countries_a-z.jsp
359 WIPO. Schedule of Fees under the UDRP. Available: http://www.wipo.int/amc/en/domains/fees

Further suggestions for a model of alternative dispute resolution, which may be useful in ameliorating the power disparity security researchers face in the context of bug bounty programs, will be considered as an area for future research in Chapter 7.

Jurisdiction

A relevant adjacent point is that all the studied programs are subject to US law, specifically Californian law, which further adds to the difficulty and cost of pursuing litigation for researchers located outside the US. Equally, this thesis considers the application of Australian law in the context of bug bounty programs and assumes that security researchers are able to avail themselves of that jurisdiction; security researchers must therefore overcome this jurisdictional issue. This is discussed in Chapter 6 below.

IP subsisting in vulnerability report or exploit

Intellectual property rights may arise in the discovery and exploitation of a software vulnerability. Ownership of such rights may be important to the security researcher, who may wish to exercise rights in copyright in the future (and would be prevented from doing so had such rights been assigned to the bug bounty program operator). Alternatively, such rights may need to be exercised by the bug bounty operator in testing or patching the vulnerability (and the operator may be prevented from doing so if the terms of the bounty program do not grant it ownership of, or an appropriate license to, the underlying intellectual property).

The submission of a vulnerability report may include multiple elements that are subject to certain intellectual property rights. These may include copyright subsisting in text, software code, images, video, and potentially captured network traffic.

Firstly, the text in a report may include a description of the vulnerability, a list of steps required to reproduce the issue, and related background created by the security researcher in the discovery of the vulnerability. It may also include a description of tools and techniques used in previous reports, or which the researcher may wish to reproduce in whole or in part in future reports.

Secondly, the researcher may have developed certain software code that exploits or demonstrates a proof of concept of the vulnerability, and this may be included in the report or made

available online through a code repository such as GitHub. Such code may constitute part of the “toolkit” of the security researcher, which they may wish to use again in the future to test or develop future exploits. If copyright in such code were assigned to the bug bounty program operators, and not otherwise licensed back to the security researcher, future use by the security researcher would be likely to be an infringement.

Thirdly, images and video subject to copyright may be included, whether as an individual screenshot or a series of screenshots demonstrating or validating reproduction of the vulnerability, or as a video showing the same. While not central to this point, intellectual property rights in a screenshot may be subject to rights of third parties (including the software vendor).

Finally, some vulnerability reports will include captured network packets to demonstrate the nature and content of traffic that is passed between the client system and the server or between servers to verify a vulnerability. Ownership of such material may, again, be subject to rights of a range of third parties.

Ownership

Under Australian law, copyright subsists in an original literary work and, subject to certain exceptions, vests in the author of that work. One relevant exception relates to employment. It is settled that copyright in works created in the course of employment is, unless otherwise agreed, owned by the employer if such works are created "in pursuance of the terms" of the employment contract.360 However, this does not apply to works created by an independent security researcher under the terms of a bug bounty program or, at a minimum, there is no authority suggesting that the relationship between the bounty operator and the security researcher would otherwise be sufficient to form an employment relationship so as to enliven the exception to copyright ownership.

The programs of Facebook and Google are silent as to ownership or licensing of any intellectual property subsisting in vulnerability reports. HackerOne includes a broad license of intellectual property rights contained in a Vulnerability Report.361 This matches the approach adopted by

360 See, for instance, S35(6) Copyright Act 1968 (Cth)
361 “you hereby grant to HackerOne a perpetual, irrevocable, non-exclusive, transferable, sublicenseable, worldwide, royalty-free license to use, copy, reproduce, display, modify, adapt, transmit and distribute copies of that Vulnerability Report, for the sole purpose of providing the Services.”

Microsoft, which states that it is "not claiming any ownership rights to your submission" but does request a broad non-exclusive license to any submissions made.362

This omission creates significant ambiguity and uncertainty as to the intellectual property position. The consequences may be manifold.

Assignment of Intellectual Property

In the event the intellectual property subsisting in a vulnerability report were owned by the security researcher, as appears to be the case, then, absent an assignment, ownership would remain unchanged. Further, S196(3)363 requires that any assignment of copyright be in writing. None of the terms of the examined bounty programs contain such a provision. Consequently, any argument that an assignment has occurred through the operation of an implied term would be unlikely to be upheld by an Australian Court.

Implied License

Accepting that the copyright in a vulnerability is owned by the security researcher, this gives rise to the question of the nature and scope of any license grant from the security researcher to the bounty operator that would, or may, be implied by a Court. For a Court to imply a term in a contract:

(1) it must be reasonable and equitable; (2) it must be necessary to give business efficacy to the contract, so that no term will be implied if the contract is effective without it; (3) it must be so obvious that 'it goes without saying'; (4) it must be capable of clear expression; and (5) it must not contradict any express term of the contract.364

The ability to use the vulnerability report to remediate the vulnerability seems to be a fundamental element of the transaction between the security researcher and the bounty operator. On this basis, a grant of a license would seem necessary to give business efficacy to the contract, and would otherwise satisfy each of the five grounds above. However, the breadth of such a license is not clear. Security

362 Ibid.
363 S196(3) Copyright Act 1968 (Cth)
364 Codelfa Construction Pty Ltd v State Rail Authority of NSW [1982] HCA 24 at Para 20 per Brennan J citing Mason J in Secured Income Real Estate v St Martin's Investments Pty Ltd [1979] HCA 51

researchers may include snippets of code which form part of their tools of the trade, and may also reproduce text describing methods and techniques in future vulnerability reports. On this basis, security researchers have a strong argument that any license grant should not be exclusive, and that giving business efficacy to the contract requires only that the license be non-exclusive and otherwise limited to remediating vulnerabilities in the system of the bounty operator. It is likely that an assignment of intellectual property would not be required to give business efficacy to the contract.

In the US context, an examination of the doctrine of a “work made for hire” under 17 U.S.C S101 may be relevant in determining ownership of copyright in the context of submissions of a security researcher under a bug bounty program. This would typically grant ownership of copyright in a work to an employer for those works created by an employee within the scope of their employment and, in the absence of employment, in other limited circumstances. Technology companies routinely seek to invoke this doctrine by including a clause describing work completed under it as a “work made for hire” in independent contractor agreements. However, such a clause is not included in any of the examined programs.

In the equivalent Australian context, S35(6) of the Copyright Act 1968 (Cth) provides that where a work “is made in pursuance of the terms of his or her employment by another person under a contract of service or apprenticeship”, it is owned by the other party to that agreement. The terms of the examined bounty programs do not engage security researchers to provide a “service”, nor do they constitute “employment”, and the provision is thus not likely to apply. Given the focus of this thesis on the Australian context, and the absence of such language in the bug bounty program terms, it will not be considered further here.

The bounty terms of Google, Facebook and the Department of Defence are silent as to ownership of copyright (or other intellectual property rights, if any), and any other rights, in any submissions made by security researchers, and this provides a further area of uncertainty as to security researchers’ participation under the terms of the examined programs.

As discussed in Chapter 1 and elsewhere throughout this thesis, software vendors suffer negative impacts where vulnerabilities are disclosed before the vendors have been afforded the opportunity to patch them. The operation of bug bounty programs aims to ameliorate this harm by ensuring the first disclosure of the vulnerability is to the vendor, rather than the public. Contractually, much of

this protection of the confidentiality of software vendors and bug bounty program operators is afforded through confidentiality provisions in the bounty terms, which would make public disclosure a breach of confidence.

Confidential Information - Vendors

Even in the absence of express terms, software vendors have sought to advance actions for breach of confidence where discovered vulnerabilities have been publicly disclosed. In addition to arising under contract, confidentiality obligations can arise under the principles of equity where three criteria are satisfied. These criteria are: (i) that the information has “the necessary quality of confidence about it”, (ii) that the “information must have been imparted in circumstances importing an obligation of confidence”; and (iii) “there must be an unauthorised use of that information to the detriment of the party communicating it”.365

In the case of the exploration, discovery and public disclosure of a vulnerability, there are no authorities suggesting which, if any, of these criteria are fulfilled. There are two aspects to confidentiality in respect of bug bounty participation. Firstly, the confidentiality of information contained in the vulnerability report prepared by the security researcher, where the obligations of confidence may be owed to the security researcher. Secondly, the converse situation, relating to obligations of confidence owed to the bounty operator in respect of information accessed by the security researcher in exploring or discovering the vulnerability, which would be breached by public disclosure.

The obligations of confidence owed to the security researcher may, arguably, apply to the techniques described in the report, and used in the discovery of the vulnerability, which may have further application to the researcher’s future work and participation in other bounty programs. As in the case discussed above in relation to the intellectual property subsisting in a report, such techniques may constitute the researcher’s tools of trade. This would mean that any disclosure beyond that necessary to remediate the vulnerability could be argued to be a breach of confidentiality, in the absence of any express contractual provision allowing it.

In the UK authority of Gartside v Outram,366 Wood V-C held that confidentiality would not be afforded “as to the disclosure of iniquity”. His Honour went on: “You cannot make me the confidant of a crime or a

365 Coco v A N Clark (Engineers) Ltd [1969] RPC 41 per Megarry J
366 (1856) 26 LJ Ch 113

fraud, and be entitled to close up my lips upon any secret which you have the audacity to disclose to me relating to any fraudulent intention on your part”.367

However, under Australian law the exception is narrower than the “public interest” / iniquity defence under the UK authorities. In Corrs Pavey Whiting & Byrne v Collector of Customs, Gummow J found that the necessary attribute of confidence, as required by the first element in Coco v AN Clark, will not be found where “the subject matter is the existence or real likelihood of the existence of an iniquity in the sense of a crime, civil wrong or serious misdeed of public importance”.368 This sets a higher threshold of misconduct and, it is suggested, is simply an application of the equitable doctrine of clean hands; consequently, it is likely to be too narrow an exception to be relevant here.

This Chapter has considered the elements of formation of contracts under the three examined programs and the key issues of IP ownership and confidential information. It concluded that, despite contracts likely being formed, there remains ambiguity as to the protection afforded to security researchers’ interests, including their ability to avail themselves of practical and effective legal remedies in the event of a breach and of cost-effective forums to address claims. Jurisdictional issues, examined further below, present a further area where non-US based researchers may be disadvantaged. The following Chapter considers the ability of Australian consumer protection legislation, as it relates to unfair contracts, to intervene to resolve certain inherent power imbalances between security researchers and bug bounty program operators.

367 Ibid at 114
368 Corrs Pavey Whiting & Byrne v Collector of Customs (VIC) (1987) 14 FCR 434 at 456

Chapter 6 - Avoidance / Vitiating Factors

The requisite factors for the formation of a contract are discussed above. This section considers legislative regimes and equitable doctrines that may be enlivened to allow a contract to be avoided in certain circumstances, despite its being otherwise validly formed.

The relevant legislative interventions that may apply to the terms of bug bounty programs, and to the primary terms to which the bug bounty terms are subsidiary, are (i) the unfair contracts regime and (ii) the prohibitions against misleading and deceptive conduct, both contained in the Australian Consumer Law. The equitable doctrine of unconscionability may also be enlivened in certain circumstances but, for the reasons set out below, is not examined in detail in this thesis.

The Australian Consumer Law, set out in Schedule 2 of the Competition and Consumer Act 2010 (Cth), regulates unfair contract terms and may intervene to strike out terms deemed unfair where the contract falls within the definition of a “consumer contract” or a “small business contract” (each defined further below). A consumer contract is, in broad terms, one under which the supplied goods or services are wholly or predominantly for “personal, domestic or household use or consumption”.

Consumer Contract

S23(3) defines a consumer contract as a contract for:

(a) a supply of goods or services; or
(b) a sale or grant of an interest in land;

to an individual whose acquisition of the goods, services or interest is wholly or predominantly for personal, domestic or household use or consumption.

There are no cases in Australia interpreting whether the offering of a bug bounty program falls within the definition of a consumer contract.

The definition of a small business contract is considered further below.

Supply of goods or services

The first element under S23(3)(a) seems to be easily satisfied. Each of the bug bounty program operators – Google, Facebook and HackerOne/Department of Defense – offers a service in the form of a website or web platform by which security researchers may report software vulnerabilities; this constitutes the supply of a service. Alternatively, the legislative definition would be similarly fulfilled by treating the required “service” as the reporting of a vulnerability to the bounty operator.

Domestic or household consumption

The next element of the test is that the services be “wholly or predominantly for personal, domestic or household use or consumption”. The term is not further defined in the ACL.

If the view is taken that the relevant services are the offering of the bug bounty platform, then an assessment of the services being “wholly or predominantly” for “personal consumption” is easier to fulfil.

This is because user account registrations on the relevant platforms are required to be personal, i.e. in the name of a natural person, rather than pseudonymous or in the name of a corporation. For instance, under Clause 4 of Facebook's Terms, users must provide their "real name" and cannot enrol in the program on behalf of a company, or otherwise enrol pseudonymously, without being in breach of the Terms. This suggests that participation in the bounty program is indeed for personal use. This is further reinforced by Facebook's Whitehat Thanks page, which provides credit by listing participants’ names. Facebook’s Bug Bounty page, located at https://www.facebook.com/BugBounty/, also describes the Bug Bounty program as a “Product/Service”. On this basis it is arguable that the services cannot be consumed in any way other than personally and thus this element is fulfilled.

Small Business Contract

It is possible that, were the bounty program terms not found to fall within the definition of a consumer contract under the ACL, they may nonetheless fall within the definition of a small business contract and be afforded similar protection. In November 2016, the Australian Consumer Law was extended to provide protection against unfair contract terms in small business contracts.

Page 122 of 178 Rob Hamper. Faculty of Law. Masters by Research Thesis.

S23(4) provides that a contract is a small business contract if:

(a) the contract is for a supply of goods or services, or a sale or grant of an interest in land; and
(b) at the time the contract is entered into, at least one party to the contract is a business that employs fewer than 20 persons; and
(c) either of the following applies:
(i) the upfront price payable under the contract does not exceed $300,000;
(ii) the contract has a duration of more than 12 months and the upfront price payable under the contract does not exceed $1,000,000.

It is not required that a business be a corporation. This is an important distinction given that registrations with each of Facebook, Google and HackerOne are personal and that rewards for reported bounties are provided only to individuals rather than corporations.

Thus, a person may be acting as a “business” despite not being a corporation – i.e. a sole trader. In many cases, security researchers earn significant income from their participation in bounty programs and pay tax on this income in the same way as they would from employment - 17% of researchers on the HackerOne platform rely solely on bug bounty programs for their income, while a further 26% earn between 76% and 100% of their income from their participation.369 On this basis, establishing their participation in programs as a business is strongly arguable.

In the event that the terms for a bug bounty did not meet the definition of a consumer contract, they may, alternatively, be argued to fall within the small business contract definition.

Consequently, each of the terms of participation in the bounty programs of Facebook, Google and the Department of Defense would likely fall within this definition, or the related definition of a small business contract, such that the protective provisions apply. Accordingly, assuming that jurisdictional requirements are satisfied, many security researchers will be afforded protection under the Australian Consumer Law in this regard. However, as their businesses grow, they may exceed the statutory threshold and their protection against unfair terms under the Act may cease. On this basis, security

369 HackerOne Hacker-Powered Security Report 2017 at 9.23 Available: https://www.hackerone.com/sites/default/files/2017-06/The%20Hacker-Powered%20Security%20Report.pdf

researchers are afforded at least some protection where their participation falls within this definition, as set out further below.

Deceptive and Misleading Conduct

In addition to the remedies available under the unfair contracts regime discussed above, and applied to the contracts further below, there is further potential for the intervention of the provisions of the Australian Consumer Law relating to misleading and deceptive conduct.370 The focus of this analysis, however, is on the provisions of the unfair contracts legislation.

Failure to Pay Bounty

Each of the examined bug bounty programs will pay bounties only where the security researcher is the first to disclose that particular bug. That is, duplicate or "rediscovered" bugs are not eligible to receive a bounty. Consequently, researchers may have applied significant effort to discover and submit a bug which is then rejected by the bounty operator as invalid on this basis, and the bounty is not awarded. It is suggested that this "rediscovery" of bugs occurs much more frequently than had previously been thought.371 In examining a dataset of 4,300 vulnerabilities, it was estimated that 15-20% of vulnerabilities are discovered at least twice within a year of initial discovery.372 This suggests that bug bounty participation is likely to result in much wasted researcher effort.
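The effect of rediscovery risk on a researcher's incentives can be illustrated with a simple expected-value calculation. The bounty amount and hours below are illustrative assumptions only; the duplicate probability is taken from the upper end of the 15-20% rediscovery estimate cited above:

```python
# Illustrative only: how rediscovery risk discounts a researcher's expected reward.
bounty = 5000.0         # assumed advertised bounty (USD)
hours_invested = 40.0   # assumed effort to find, verify and report the bug
p_duplicate = 0.20      # upper end of the 15-20% rediscovery estimate

# A duplicate submission earns nothing, so the expected payout is discounted
# by the probability that someone else has already reported the same bug.
expected_payout = bounty * (1 - p_duplicate)
effective_hourly_rate = expected_payout / hours_invested

print(f"Expected payout: ${expected_payout:.2f}")              # $4000.00
print(f"Effective hourly rate: ${effective_hourly_rate:.2f}")  # $100.00
```

Because the researcher cannot observe prior submissions, this discount is borne entirely by the researcher, whatever its true magnitude.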

The content and transparency of the process of evaluating a submission and deciding whether it is a duplicate are fundamental to the underlying commercial element of the bug bounty transaction. Were the assessment left to the sole discretion of the bounty operator, with no indication of how the process operates or why a submission was determined to be invalid, the relevant term may fall within the realm of an unfair contract term.

Without the opportunity to examine previously submitted vulnerabilities, security researchers have no way of knowing what vulnerabilities have already been discovered in the systems they are examining. Thus, there is no way for them to efficiently allocate their effort or to have any certainty that their efforts will be rewarded. In this case, all of the risk of rediscovery falls on the security researcher and all of the ability to ameliorate this risk falls on the bug bounty program

370 S18 Competition and Consumer Act 2010
371 Herr, T. Schneier, B. Taking Stock: Estimating Vulnerability Rediscovery. Belfer Cyber Security Project White Paper Series. July 2017 at p.1 Available: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2928758
372 Ibid.

operator. However, were these vulnerabilities published prior to being remediated, the bounty operator would be exposed to the negative effects of their potential exploitation. The position is further constrained by the need for software vendors to prioritise new security fixes among other scheduled security fixes, the development and implementation of new features, and non-security bugs that may affect only functionality rather than the confidentiality, integrity or availability of data in a system.373 On balance, a resolution of this issue would require some certainty as to the time within which vulnerabilities are patched by bug bounty program operators. The approach of each of the examined programs is set out below, followed by an analysis of these terms under the ACL.

Facebook

Facebook retain sole discretion regarding the assessment of duplicate submissions and may not provide any details regarding the basis on which it has determined a submission is a duplicate: "we award a bounty to the first person to submit an issue. (Facebook determines duplicates and may not share details on the other reports.)"374

In addition to the restriction on duplicate bounties, Facebook also assert a broad right to withhold payment – “Monetary bounties for such reports are entirely at Facebook’s discretion, based on risk, impact, and other factors.”

This assertion of broad and, ostensibly, unappealable rights over matters as fundamental as payment places the security researcher at a significant disadvantage. While this broad discretion is likely asserted in order to manage the scale of Facebook’s operations, rather than as a deliberate attempt to engage in deceptive or misleading conduct, it may nonetheless have that effect. A security researcher may simply receive a message stating a vulnerability is ineligible due to being a duplicate and then receive no further correspondence – largely as described in the case study of Finisterre in 4.2.1 above.

HackerOne

373 HackerOne. Your TL;DR Summary of The CERT Guide to Coordinated Vulnerability Disclosure. October 26, 2017. Available: https://www.hackerone.com/blog/Your-TLDR-Summary-of-The-CERT-Guide-to-Coordinated-Vulnerability-Disclosure

HackerOne leaves the decision of whether to show the original report to the submitter of the duplicate report.375 The Department of Defense, as a customer of HackerOne, does not indicate whether it would provide details of any original report to a researcher submitting a duplicate, but does undertake that "to the best of our ability, we will confirm the existence of the vulnerability to the researcher and keep the researcher informed, as appropriate, as remediation of the vulnerability is underway".376

Google

Google describe their approach as “first in, best dressed”377 and describe the program and payments as “an experimental and discretionary rewards program. You should understand that we can cancel the program at any time and the decision as to whether or not to pay a reward has to be entirely at our discretion.”

Again, this leaves assessments of duplicates and choices to pay entirely at the bounty operator’s discretion. The application of these terms to the ACL is discussed below.

Application to the ACL

A term of a consumer contract is unfair if it fulfils the three criteria set out in S24(1) of the Australian Consumer Law:

(a) it would cause a significant imbalance in the parties’ rights and obligations arising under the contract; and
(b) it is not reasonably necessary in order to protect the legitimate interests of the party who would be advantaged by the term; and
(c) it would cause detriment (whether financial or otherwise) to a party if it were to be applied or relied on.

S24(1)(a) – Significant imbalance

375 HackerOne How do we handle duplicate reports? HackerOne Help centre. Accessed 18 October 2017. Available: https://support.hackerone.com/hc/en-us/articles/207786846-How-do-we-handle-duplicate-reports-
376 DoD Vulnerability Disclosure Policy at Section 6.
377 https://www.google.com/about/appsecurity/reward-program/

Failure to pay a bounty which otherwise fulfilled the criteria specified in the terms, such as the category of bug, the targeted system and the nature of the bug, would seem to create a clear imbalance in the parties’ rights and obligations and would fulfil the first criterion required by S24(1)(a). The researcher cannot control the assessment process, appeal its outcome or view the contents of deliberations. The researcher does not have access to a list, or the content, of previously submitted vulnerabilities against which they can assess their own submission. On this basis, the balance of power tips, without the intervention of legislative protection, entirely in favour of the bounty operator.

The failure to make the processes appealable and transparent weighs strongly in this calculation.

S24(1)(b) - Legitimate Interests

Turning to the second point requires an assessment of whether the ability to unilaterally refuse to make a payment is necessary to protect the legitimate interests of the bounty operator. "Legitimate interest" is not defined in the Act and thus an understanding of the nature of protected interests in this situation requires a synthesis of those found to be "legitimate" from previous cases.

Hevron378 examined the manner in which legitimate interests have been considered and applied by the courts, noting that “it must be determined based on the evidence and common knowledge of: a. the instant businesses, and its relationships; and b. the industry generally.”379 Another element relevant to the analysis at hand is that of the public interest, which includes an examination of the “policy implication of the private interests of the parties; or the community’s interest in the arrangement between the parties”.380 The courts also have regard to reasonableness.381

Applying these principles to the subject matter is a nuanced exercise. Bounty operators face a number of common problems. As to the “evidence and common knowledge, and the industry generally”, bounty operators may be inundated with reports of low quality which consume significant resources merely to assess, let alone to develop, test and implement fixes for. Reported vulnerabilities are often submitted to a [email protected] e-mail address. These messages must, currently, be filtered and triaged manually. It is both a skilled and a repetitive job. Many reports will be incomplete, duplicates, false or erroneous. The difficulty of triaging bug reports at a large vendor is significant. The jobs may be monotonous, unglamorous, relentless and unfulfilling. In 2007, reports to

378 Hevron, A. Legitimate Interests and Unfair Terms: the other threshold test. UW Austl. L. Rev., 2013 at p.256.
379 Esso Petroleum Co Ltd v Harper’s Garage (Stourport) Ltd [1968] AC 269, 301.
380 Hevron, A. Legitimate Interests and Unfair Terms: the other threshold test. UW Austl. L. Rev., 2013 at p.256.
381 Ibid.

the [email protected] address numbered 100,000 per year; they now exceed 250,000. In 2007, humorously illustrating the difficulties of the role, Popular Science magazine rated Microsoft "Security Grunt" as one of the top 10 worst jobs in science – just above "garbologist", "elephant vasectomist" and the duties of a hazmat diver required to rescue a worker who had drowned in a lagoon full of "urine, pig feces and all the needles used to inject pigs with antibiotics", but worse than "whale feces researcher".382 It is reported that these jobs have the highest turnover of any within Microsoft's Security Response Center (MSRC), where the messages the address receives are described as an "endless flood of information that needs to be triaged and prioritised and dealt with and followed all the way to its conclusion".383 Security engineers, whose primary skills and roles are not generally focused on customer service, become frontline customer service personnel who must filter a great deal of “noise” while recognising significant issues. In the US Department of Defense bounty program, 1189 reports were made, of which 138 were valid, suggesting a signal-to-noise ratio of roughly 10%.384

Duplicates may be so voluminous that operators are unable to adequately assess and respond to all of them. Reports may be provided by non-native speakers of English and be difficult or impossible to understand, reproduce, or both. For instance, India, Russia, Argentina and Pakistan all appear in the top ten countries from which hackers are earning bounties on the HackerOne platform, suggesting language difficulties are more likely than would occur with a locally employed penetration testing team.385

Further, many security researchers use automated tools to search for vulnerabilities.386 In these cases an asymmetry exists in that reports may be generated in a substantially automated fashion whereas assessment and categorisation of them is, currently, substantially a manual process. This asymmetry extends further as, in the case of Facebook and Google, the bounty programs are typically open to the entire internet community. In combination, this means high volume and duplicate reports are very likely. Thus, bounty operators may argue that without broad discretion to assess and award bounties,

382 Daley, J. The Worst Jobs in Science 2007. Popular Science. 14 June 2007 Available: https://www.popsci.com/scitech/article/2007-06/worst-jobs-science-2007
383 Moussouris, K. ‘Presentation’, RSA Conference (2018) at 20:23 https://www.rsaconference.com/events/us18/rsac-ondemand/industry-experts-bug-bounty
384 Ibid.
385 HackerOne Hacker-Powered Security Report 2017 at p.17 Available: https://www.hackerone.com/sites/default/files/2017-06/The%20Hacker-Powered%20Security%20Report.pdf
386 See, for instance, interviews with leading security researchers at: https://bugbountyforum.com/blog/ama/jhaddix , https://bugbountyforum.com/blog/ama/geekboy and https://bugbountyforum.com/blog/ama/fransrosen/ where each interview specifically queries the tools and automation techniques used.

their legitimate interest in fulfilling their business mission may be impaired through being overwhelmed with vulnerability reports.

However, the countervailing argument is that there is a public interest in assuring the security of systems on which the public broadly relies (Facebook, for instance, having billions of users), and a “community interest” in ensuring that individuals are paid for efforts induced by representations from some of the most valuable corporations ever to have existed (Facebook’s market capitalisation, for instance, exceeded USD$500 billion as at 2019). On balance, the preferred view is that the public and community interests outweigh those of bounty operators in not being overwhelmed by vulnerability reports, and such a broad discretion not to pay bounties would not be protected by law.

It has been suggested that making the discoverer pay the transaction costs of processing vulnerability reports would provide an incentive against the submission of "erroneous, badly detailed, or duplicate" reports387 and so protect against this issue. However, this places the burden on the researcher rather than on the bounty operator, and it is the bounty operator that chooses to operate the program, invites the security researcher into it, and incentivises participation through the “promise” of payment. Each of the elements that may cause the bounty operator difficulty in conducting the program is within the sole and exclusive control of the bounty operator.

Possible Alternatives

Given this asymmetry between security researcher and bounty operator power, possible solutions to redress the imbalance include:

(i) a commitment by bounty operators to assess vulnerabilities using transparent and known industry-accepted standards;
(ii) conducting such assessments within prescribed timelines; and
(iii) providing an appropriate appeals process before a committee of appropriately skilled parties both internal and external to the bounty operator or vendor.

Commonly accepted industry standards that may be used as a starting point to assess vulnerabilities include the Common Vulnerability Scoring System (“CVSS”)388 promulgated by the Forum of Incident Response and Security Teams ("FIRST"). This system includes a list of 15 attributes, split across three groups, which are collated and analysed to deliver a final score.

387 Schechter, C. How to buy better testing pp 73-87 in Davida, G. Frankel, Y. Rees, O. Infrastructure Security from Proceedings of InfraSec 2002 Bristol, UK, October 1-3, 2002
388 FIRST. Common Vulnerability Scoring System v3.0: Specification Document. Available: https://www.first.org/cvss/


This organisation has credibility: it is a global association of computer security incident response teams, and the standard is developed with broad-based contribution. While a technical analysis of the merits of this scoring system is beyond the scope of this thesis, such normative and transparent processes provide substantially greater certainty and protection for security researchers than a broad-based “sole discretion” approach to vulnerability assessment and payments.
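To illustrate how such a normative standard replaces discretion with a published formula, the CVSS v3.0 base-score arithmetic can be sketched as follows. This is a simplified sketch covering the base metric group only (temporal and environmental metrics are omitted); the metric weights and equations follow the FIRST v3.0 specification:

```python
import math

# Metric weights from the FIRST CVSS v3.0 specification (base metrics only).
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}  # Attack Vector
AC = {"L": 0.77, "H": 0.44}                        # Attack Complexity
PR_UNCHANGED = {"N": 0.85, "L": 0.62, "H": 0.27}   # Privileges Required (Scope unchanged)
PR_CHANGED = {"N": 0.85, "L": 0.68, "H": 0.50}     # Privileges Required (Scope changed)
UI = {"N": 0.85, "R": 0.62}                        # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}             # Confidentiality / Integrity / Availability

def roundup(x: float) -> float:
    """Round up to one decimal place, as the specification requires."""
    return math.ceil(x * 10) / 10

def base_score(av, ac, pr, ui, scope, c, i, a) -> float:
    pr_table = PR_CHANGED if scope == "C" else PR_UNCHANGED
    # Impact Sub-Score: combined effect on confidentiality, integrity, availability.
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    if scope == "C":
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
    else:
        impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * pr_table[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    if scope == "C":
        return roundup(min(1.08 * (impact + exploitability), 10))
    return roundup(min(impact + exploitability, 10))

# A remotely exploitable, low-complexity bug with total impact
# (vector AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H) is rated "Critical".
print(base_score("N", "L", "N", "N", "U", "H", "H", "H"))  # 9.8
```

The point for present purposes is not the particular weights but that every input and every step is published: a researcher can compute the same score as the operator and contest a divergent assessment.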

S24(1)(c) – Cause detriment if relied upon

The final element is the easiest of the three to satisfy and applies almost axiomatically. That is, a failure to make a financial payment when the other provisions of the program have been met is inherently, indeed fundamentally, detrimental to the security researcher’s participation in the relevant program.

In ByteCard389, the ACCC brought an action against ByteCard, a consumer ISP, alleging that certain clauses in its terms were unfair and in contravention of the Australian Consumer Law. These terms included clauses that enabled unilateral price variation, a broad indemnity of ByteCard and a unilateral right of termination.390

The Federal Court held that these terms were unfair contract terms and created a "significant imbalance in the parties rights and obligations". The Court further held that such terms would cause

389 ACCC v Bytecard Pty Limited (Federal Court, 24 July 2013, VID301/2013)
390 Australian Competition and Consumer Commission. Court declares consumer contract terms unfair. 30 July 2013. Available: https://www.accc.gov.au/media-release/court-declares-consumer-contract-terms-unfair

a detriment to the customer and were not reasonably necessary to protect ByteCard's legitimate interests.

The authority in ByteCard suggests that the treatment of unilateral variation, particularly as it relates to prices and, by logical extension, payments, would be likely to apply equally in the context of bug bounty payments. Applying this reasoning to bug bounty programs, the argument would be that a price for a submitted vulnerability is set out in the published terms of the program. That is, the security researcher relies on the promise that their effort will be rewarded if the published criteria are fulfilled. The exercise of a broad and unfettered discretion to vary this price to zero would create the requisite “significant imbalance”. That is, a decision of a bounty operator to decline payment, and to provide no reason for doing so, would be an unfair contractual term liable to be struck out.

Jurisdiction

Section 5.8 introduced the issue of jurisdiction, which is considered further below. The jurisdiction or "choice of forum" clause in each of the examined programs seeks to restrict the parties to commencing actions exclusively in the jurisdiction selected by the bounty program operator. In each of the terms examined, this is California.

However, as noted elsewhere, security researchers may be located anywhere in the world and, for the purposes of this thesis, the application of the law to Australian security researchers has been examined. Such clauses are chosen for the somewhat obvious reason that each of the technology giants whose programs are examined is headquartered in California. These clauses reduce the likelihood of the program operators being subject to legal action in a foreign jurisdiction.

Under Australian law, these clauses may be challenged, but courts consider them an important factor in determining their authority to hear a claim.391 Courts have held, even in pro forma contracts in commercial contexts, that jurisdiction clauses are "sufficient to dispose"392 of an application to have the matter heard in a jurisdiction different from that specified in the contract. However, in that case, the defendants had not suggested that the contracts were unfair. Were a plaintiff able to avail themselves of the unfair contracts provisions of the ACL, researchers may have the ability to have the jurisdiction

391 Middleton, G. Fair Go! Are jurisdiction clauses in online consumer contract unfair? Precedent Issue 103, March 2011 Available: http://classic.austlii.edu.au/au/journals/PrecedentAULA/2011/25.pdf
392 Macinnis, A. Choice of jurisdiction in contractual disputes Legalwise 31 January 2019 Available: https://legalwiseseminars.com.au/choice-of-jurisdiction-in-contractual-disputes/

clause struck out and have the matter heard in a more convenient Court, most likely a jurisdiction local to the researcher.

Jurisdiction clauses are not specifically contemplated by the ACL, though analogous clauses have been considered in similar contexts by European courts.393 Despite this, on a first-principles analysis considering the relevant “significant imbalance”, “legitimate interests” and “detriment if relied upon” elements, a compelling argument can be made that the jurisdiction clause should be struck out, allowing the security researcher to have a matter heard in an Australian court.

Having to bring a matter in a foreign jurisdiction, where the security research did not take place, and where the costs of even filing a claim may be prohibitive, would seem to satisfy the first element, creating a significant imbalance in the financial and legal resources of security researchers as against bounty program operators.

Turning to the second point, the practical ability of security researchers to avail themselves of an adjudicated dispute resolution mechanism (given that alternative dispute resolution is excluded) would seem central to the legitimate interests of a contracting party. As to the final test of causing “detriment if relied upon”, it could be argued that, unless the clause were struck out, a practical resolution to any dispute would not be available due to the cost of bringing an action relative to the disputed amount. Such reasoning has been upheld by European courts considering similar provisions.394

Further support for this argument is found in analogous cases where Victorian and NSW courts declined to uphold a jurisdiction clause in a software licence “because the rights conferred by consumer protection legislation in each of those states ‘would be eroded if consumers were compelled to take any legal action arising from the supply of goods to them in an interstate court or tribunal’”.395

A key element in relation to legal liability of security researchers is the extent to which purported liability waivers by bounty operators are effective. These waivers take various forms in each of the

393 Middleton, G. Fair Go! Are jurisdiction clauses in online consumer contracts unfair? Precedent Issue 103, March 2011 at p.33
394 Océano Grupo Editorial SA v Salvat Editores SA [2000] ECR I-4941
395 Ibid. Available: http://classic.austlii.edu.au/au/journals/PrecedentAULA/2011/25.pdf

examined programs, as set out below. While an in-depth examination of criminal statutes is outside the scope of this thesis, which focuses on contractual matters, this section briefly analyses, for completeness, the protection ostensibly afforded to security researchers through undertakings in each of the bounty programs not to pursue legal action.

Instagram / Facebook

As outlined above in the Chapter 3 case studies and Chapter 1 introduction, Facebook / Instagram’s terms relevantly state that:

If you comply with the policies below when reporting a security issue to Facebook, we will not initiate a lawsuit or law enforcement investigation against you in response to your report. We ask that:

• You give us reasonable time to investigate and mitigate an issue you report before making public any information about the report or sharing such information with others.
• You do not interact with an individual account (which includes modifying or accessing data from the account) if the account owner has not consented to such actions.
• You make a good faith effort to avoid privacy violations and disruptions to others, including (but not limited to) destruction of data and interruption or degradation of our services.
• You do not exploit a security issue you discover for any reason. (This includes demonstrating additional risk, such as attempted compromise of sensitive company data or probing for additional issues.)
• You do not violate any other applicable laws or regulations.396

Department of Defense

The Department of Defense's terms, operated through HackerOne, state that:

If you conduct your security research and vulnerability disclosure activities in accordance with the restrictions and guidelines set forth in this policy, (1) DoD will not initiate or recommend any law enforcement or civil lawsuits related to such activities, and (2) in the event of any law enforcement or civil action brought by anyone other than DoD, DoD will take steps to make known that your activities were conducted pursuant to and in compliance with this policy.

Google

Google’s terms state that “of course, your testing must not violate any law, or disrupt or compromise any data that is not your own”397 but are silent on not pursuing legal action.

397 Google Vulnerability Reward Program Available: https://www.google.com/about/appsecurity/reward-program/

A common thread across each of these terms is that the undertaking not to pursue legal action is contingent upon compliance with "all laws".

These requirements assume, of course, that the behaviour of security researchers in discovering vulnerabilities is clearly permitted under all relevant laws. This is, however, far from clear. For instance, the program guidelines do not give the express authorisation that may be required to provide a safe harbour against prohibitions on "exceeding authorised access" under the Computer Fraud and Abuse Act or "unauthorised access" in Australia under the Criminal Code Act. These offences are technology neutral, of a different era and thus broad and imprecise and, as examined in the case studies, able to be wielded as threats when researcher behaviour occurs in unanticipated ways.

Elazari’s analysis398 of a range of bounty programs finds that the express authorisation that would be required under the relevant statutes is largely absent and thus no “safe harbour” is present. She finds that while the "technical scope" of a program is set out clearly - that is, the vulnerability types which are eligible and the systems which may be targeted - the "legal" scope of authorisation is, by contrast, largely absent. This omission largely relates to criminal liability.

To redress this gap, Elazari proposes that program terms should expressly authorise access under relevant acts, waive liability and expressly permit security research techniques,399 and that this is best achieved through a standardisation of terms, as further discussed in the concluding Chapter 7.

On the above analysis, it appears that each of the relevant statutory criteria under the ACL as it relates to unfair contracts is satisfied and that, therefore, statutory protection may apply to the participation of security researchers under bug bounty programs, particularly as regards clauses that unilaterally reserve the right to withhold payment and require disputes to be resolved outside Australia. In such circumstances a Court may declare part of the contract void or enliven a range of other remedies, including varying the contract, ordering compensation or making any other order it sees fit.400

398 Elazari, A. Private Ordering Shaping Cybersecurity Policy in Rewired: Cybersecurity Governance. John Wiley & Sons, 2019. Available: http://ebookcentral.proquest.com/lib/unsw/detail.action?docID=5761058
399 Ibid. at p.232
400 s 241 Competition and Consumer Act 2010 (Cth)

As set out in the Appendices, there are many other clauses to which this analysis may similarly apply, and an examination of each of them, in turn, is an area worthy of further research. The analysis above suggests that there may be significant scope for statutory protection to reduce the risk to which security researchers are exposed through their participation in bounty programs, particularly as it relates to the more commercial elements of reward for effort and payment. This protection may partially ameliorate the effects of the power disparity that exists between the two parties and create better security outcomes through the increased researcher participation it affords.

Chapter 7 – Summary, Conclusion and Future Research

This Chapter summarises and concludes this thesis and includes (i) a summary of the key elements of the thesis including an analysis of key issues raised; (ii) suggestions for future research, legislative and other reform; and (iii) a conclusion.

This thesis has highlighted the disparate pace of development between the market mechanism of bounty programs, which has quickly evolved and engaged a wide range of security researchers who discover and disclose vulnerabilities under them, and the legal system that underpins their operation.

While the environment is considerably more certain than that which prevailed before the emergence of bug bounty programs, there remains residual uncertainty as to the precise boundaries of the law and the level of protection afforded to security researchers. This uncertainty can be exploited by software vendors where security researchers act in unexpected or undesirable ways.

The terms of bug bounty programs, in the Australian jurisdiction, in many ways perpetuate the asymmetry in power between security researchers and program operators. This manifests itself in two ways. Firstly, when security researchers act in unexpected ways, as in the cases of Wineberg and Finisterre, bounty program operators may react with threats of legal action, involve third parties such as employers, assert previously unstated or onerous terms or, finally, threaten non-payment. Such action, while contrary to the offered terms, is substantially irremediable on a practical basis due to the cost, uncertainty and duration of any litigation, particularly when the security researcher is operating independently against the power of multi-billion dollar software companies.

Secondly, the terms themselves often assert positions that are onerous or uncommercial and vest much discretion in bounty operators.

In answering the research question as to the risks to which security researchers in Australia are exposed, the key finding is that this asymmetry of power is potentially redressed by legislative protection under the unfair contracts provisions of the ACL. Although these provisions have not been applied specifically in the context of security researchers participating under bug bounty programs, they have the potential, were they invoked, to redress the negative behaviour of the programs.

Notwithstanding this statutory intervention, there remains substantial residual uncertainty as to the protection security researchers are afforded under bug bounty programs, which would be usefully supplemented by the legal and non-legal remedies further discussed below. This residual uncertainty reflects, in part, the persistent historical tension in vulnerability disclosure between concealing and revealing vulnerabilities, which is re-enlivened when unexpected events occur, as in the case studies.

When this unexpected behaviour occurs, particularly through disclosure of vulnerabilities of a severity or scale that is embarrassing, commercial and publicity interests appear to supersede interests in producing better security outcomes. When programs are operated in this manner, they can be viewed as virtue signalling rather than a good-faith commitment to improving security outcomes.

The undertaking not to take civil action against researchers provides protection that is limited at best and, at worst, illusory, with little practical consequence for bounty operators who do not operate in accordance with their published terms.

Potential Legislative Reform

While this thesis has not focused on criminal “hacking” statutes such as the U.S. Computer Fraud and Abuse Act (CFAA)401 and the Criminal Code Act 1995 (Cth), clarity regarding the scope of offences such as “unauthorised access” in the context of actions undertaken in a bounty program is of vital importance. It is threats of criminal prosecution, and the consequences to researchers should a bounty program operator make a criminal complaint regarding their actions, that are most likely to have a chilling effect on security researchers' willingness to participate in bounty programs.

Other Reform

The disparity in power between security researchers and bounty program operators could be partially addressed through a collective and organised approach to representing security researchers' interests.

401 Freeman, E. Vulnerability Disclosure: The Strange Case of Bret McDanel. Information Systems Security. 12 April 2007. Available: http://dx.doi.org/10.1080/10658980601144915

Creative Commons

Creative Commons, a not-for-profit organisation, was founded with a mandate to “help legally share your knowledge”.402 It does so through the creation and promulgation of “free, international, easy-to-use copyright licenses that are the standard for enabling sharing and remix”.403 Rather than having to review lengthy, proprietary licenses and form a view as to the rights granted under them, these standardised licenses summarise complex areas of law in an easily digestible icon format. For example, a copyright license that allows non-commercial sharing of works with attribution and in the same manner in which they were licensed is represented by a “CC BY NC SA” icon as follows:

[CC BY-NC-SA licence icon]404

This “quick-reference” icon is supported by a standard, version-controlled contract that implements the terms indicated. More than a billion works are made available under these licenses.405

The analysis in this thesis has shown that the terms adopted by bug bounty programs are diverse, non-standard, lengthy and complex, and do not provide sufficient certainty to researchers in areas such as protection from civil and criminal liability, payment of bounties and the methods that may be used in conducting security research.

An independent organisation in a similar vein to Creative Commons, a “security research commons”, could seek to standardise the terms for bounty programs and communicate the protection offered to, and the expectations of, security researchers in simple ways. This would have several benefits. Firstly, security researchers would have increased certainty as to the basis of their participation under programs, which may broaden the number of participants and, potentially, increase the number of discovered vulnerabilities. Secondly, bounty operators could leverage the work of the independent organisation to more quickly and cheaply deploy bounty programs and speed their uptake without the need to engage specialist counsel. Finally, standardisation would further normalise security research under bounty programs and move it towards mainstream rather than fringe activity.
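The "quick-reference" approach described above can be illustrated in code. The sketch below is a hypothetical illustration only: the term "modules", their codes and the helper functions are all invented for this example, in the spirit of Creative Commons' BY / NC / SA licence elements. No such standardised scheme for bounty terms currently exists.

```python
# Hypothetical sketch of Creative-Commons-style "security research commons"
# term modules. All module codes and names below are invented for
# illustration purposes only.

# Each module pairs a short code with its plain-language meaning,
# analogous to the BY / NC / SA elements of a Creative Commons licence.
TERM_MODULES = {
    "SH": "Safe harbour: operator expressly authorises in-scope access",
    "NP": "No prosecution: operator undertakes not to pursue legal action",
    "PB": "Paid bounty: rewards are payable on defined, objective criteria",
    "AD": "Alternative dispute resolution is available to researchers",
}

def summarise_terms(modules):
    """Render a program's adopted modules as a short quick-reference code."""
    unknown = [m for m in modules if m not in TERM_MODULES]
    if unknown:
        raise ValueError(f"unrecognised term modules: {unknown}")
    return "SRC " + "-".join(modules)

def explain_terms(modules):
    """Expand a module list into its plain-language meanings."""
    return [TERM_MODULES[m] for m in modules]

# A program offering a safe harbour and a no-prosecution undertaking
# would display the code "SRC SH-NP", backed by standard contract text.
code = summarise_terms(["SH", "NP"])
```

As with Creative Commons, the value of such a scheme would lie not in the code itself but in the standard, version-controlled contract text sitting behind each module.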

An independent organisation could engage with stakeholders including researchers, software vendors and software users and develop and publish “best practice” positions that seek to strike a fairer balance and, in turn, create a more secure software ecosystem. Independently produced terms would

402 About Creative Commons. Available: https://creativecommons.org/about/
403 stateof.creativecommons.org
404 https://creativecommons.org/share-your-work/
405 https://stateof.creativecommons.org/

reduce the power asymmetry between security researchers and software vendors in terms of legal resources and commercial bargaining power.

A not-for-profit organisation could respond to changes in this dynamic area, revising or expanding the standardised terms more quickly than legislation could be amended. Similarly, as the law evolves through the Courts' elucidation of key points, or as new legislation is passed, centrally maintained terms could be iterated and promulgated more quickly than decentralised, proprietary terms.

Models for funding, implementing and operating such a security research commons organisation are an area worthy of further research. Initiatives such as the Open Source Vulnerability Disclosure Framework,406 supported by Bugcrowd, which provides open source guides and policies for setting up bug bounty programs, are important steps in this regard.

Alternative Dispute Resolution

Resolving disputes that arise under bounty programs through the Court system is lengthy, uncertain and costly for bounty operators but particularly so for security researchers. Similarly, the “at the sole discretion” of the bounty operator elements in contracts mean that security researchers are materially disadvantaged in their activities.

A specialised dispute resolution service, with subject matter expertise and streamlined processes, could act to resolve disputes at greater speed and reduced cost and do so with a mandate to balance competing interests in a way that maximises security outcomes rather than commercial interests.

This would provide a degree of independent, third party accountability that is lacking in the current environment. Research into proper models to form, fund, operate and audit such organisations could be usefully applied to improving the overall liability profile for security researchers.

AI Legal Issues

Transparency

AI is likely to be used to make decisions in assessing the validity, uniqueness and quality of vulnerability reports, or security researcher behaviour, in order to assess compliance with either the contractual terms of bounty programs or, potentially, criminal standards under computer crime statutes. In such cases there must be sufficient algorithmic transparency in the

406 https://github.com/bugcrowd/disclosure-policy

processes used to reach these conclusions. A lack of such transparency and accountability is currently a common weakness in AI decision making generally.

Realising the benefits of such technology while providing appropriate transparency and accountability for decisions reached is key to maintaining trust in it. While algorithmic transparency is a significant contemporary issue in many areas, particularly in relation to the operation of advertising and user tracking, the consideration of appropriate measures to ensure such accountability in the context of the operation of bug bounty programs and vulnerability disclosure has not been undertaken.

A further issue with the use of automated discovery tools is that the volume and quality of discovered vulnerabilities may be such that assessing them at sufficient scale is impractical or uncommercial. There is therefore a need to ensure that discovery and assessment tools and processes are aligned. In this regard, technical solutions to the process of vulnerability assessment and handling are also important; one is suggested, for instance, in Google's patent for an automated bug clearing house, implemented in software to handle the submission of problems with applications and to generate reports of them for a developer.407

Researcher use

Security researchers currently deploy automated tools and scans to detect vulnerabilities during their participation in bug bounty programs. Such tools may begin to be supplemented or replaced by AI-powered tools. The legal framework in which such tools are used will need to adapt to ensure appropriate restrictions are placed on their use, so that researchers remain accountable for the actions of tools which they may not completely understand or control at a sufficiently granular level once deployed.
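By way of illustration, one way such accountability could be implemented is a "scope gate" that an automated or AI-driven tool must pass before probing any target, together with a log of every decision the tool makes. This is a minimal sketch under assumed inputs: the in-scope domain list, the helper names and the URLs are all hypothetical, and it does not describe any existing tool.

```python
# Minimal sketch: constraining an automated scanning tool to a bounty
# program's published scope, and logging every decision so the researcher
# remains accountable for the tool's actions. Scope list and names are
# hypothetical.
from urllib.parse import urlparse

IN_SCOPE_DOMAINS = {"app.example.com", "api.example.com"}  # hypothetical program scope

audit_log = []  # record of every scope decision, for later accountability

def in_scope(url):
    """Return True only if the URL's host is exactly an in-scope domain."""
    host = urlparse(url).hostname or ""
    allowed = host in IN_SCOPE_DOMAINS
    audit_log.append((url, "allowed" if allowed else "blocked"))
    return allowed

def scan(urls):
    """Probe only in-scope targets; out-of-scope URLs are skipped entirely."""
    return [u for u in urls if in_scope(u)]

targets = scan([
    "https://app.example.com/login",   # in scope: would be tested
    "https://mail.example.com/inbox",  # out of scope: skipped, but logged
])
```

The design choice of logging blocked targets as well as allowed ones matters legally: it allows a researcher to demonstrate, after the fact, that the tool was configured to respect the program's scope even where it encountered out-of-scope systems.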

Similarly, such developments must ensure that security researchers seeking to discover vulnerabilities in order to disclose them to the vendor are not put at an undue disadvantage compared to malicious actors, who seek to discover and exploit vulnerabilities for criminal and other purposes and will not be constrained by restrictions on the use of AI-powered tools. Future research could valuably consider ways in which sensible bounds can be put on the use of such AI-powered tools and techniques.

407 Google LLC (2018). US10007512B2.

Non-Legislative Government Guidelines

In 2017, the Criminal Division of the Department of Justice's Cybersecurity Unit prepared and promulgated a "Framework for a Vulnerability Disclosure Program for Online Systems",408 which provides a range of best practice steps and sets out a four-step process to i) design, ii) plan the administration of, iii) draft and iv) implement a bug bounty program that captures the organisation's intent.

Each of the steps requires the program operator to turn their mind to, and address, a number of areas where ambiguity may otherwise arise, including many of those that arise in the case studies. These include clarity in the selection of the systems and data within the scope of the program and dealing expressly with restrictions on "accessing, copying, transferring, storing, using and retaining"409 particular types of information. Other considerations include whether to restrict particular methods or techniques such as denial-of-service attacks or social engineering.

The step involved in administering the program addresses processes for the reporting of vulnerability information including specifying appropriate contacts and internal processes for addressing unanticipated questions including, relevantly, consulting legal counsel in relation to questions that may raise unconsidered legal issues.

An important step in minimising legal liability for security researchers is the instruction, in drafting the policy, to "avoid using vague jargon or ambiguous technical language to describe critical aspects of the policy, such as acceptable and unacceptable conduct".410

Other important proposals to minimise legal liability for security researchers, and provide protection for bug bounty operators are:

- "Explain the consequences of complying—and not complying—with the policy." including describing the activity that "the organization considers to constitute “authorized” conduct under the Computer Fraud and Abuse Act."411

- “If legal action is initiated by a third party against a party who complied with the vulnerability disclosure policy, the organization will take steps to make it known, either to the public or to the court, that the individual’s actions were conducted in compliance with the policy”

408 Framework for a Vulnerability Disclosure program for Online systems Department of Justice. July 2017 Available: https://www.justice.gov/criminal-ccips/page/file/983996/download 409 Ibid. at p.7 410 Ibid. at p.6 411 Ibid. at p.7


While these guidelines are non-binding, they are important in setting clear expectations and carry the authority of the US Government, which, importantly, would be responsible for prosecuting security researchers under legislation such as the CFAA. Providing clarity as to the scope of authorised conduct materially reduces uncertainty. However, Marten Mickos of HackerOne has noted that the guidance does not include a plan for "organising remediation and bug fixing" or "reporting of results to key stakeholder and decision-makers".412 Similarly, it does not address issues such as payment and the overall fairness of the terms.

Nonetheless, adoption of similar, but expanded and maintained, guidance in the Australian jurisdiction, where equivalent guidance does not exist, could usefully fill the gap between slow-to-change legislation, incomplete self-regulation, and the imbalance between security researchers and bug bounty operators.

There remains significant scope for the terms of bounty programs to be interpreted by the Courts and, as with many areas of legal development, there remains a period of uncertainty until this void is filled and the precise contours of the law become clear.

A largely self-regulated environment has created a flourishing and fast-growing environment for security researchers to participate in bounty programs in novel and lucrative ways that were previously not possible. However, the approach is not without risk, and the protection afforded to security researchers is only partial and may be substantially uncertain in some circumstances until a Court so determines. This risk manifests when bounty operators seek to resile from the terms of their programs or apply them in punitive or practically unchallengeable ways. Where the law is uncertain or misaligned with achieving better security outcomes, the void can be filled by a range of non-governmental representative bodies and interventions. These may provide a collective voice for the interests of security researchers, software vendors and, perhaps most importantly, the wider user community, and are an important and much-needed development.

412 Bing, C. The Justice Department wants to help you run a vulnerability disclosure program Cyberscoop. July 31, 2017. Available: https://www.cyberscoop.com/doj-vulnerability-disclosure-program-cfaa-bug-bounty/

Appendices

Accessed 11 September 2017 at https://hackerone.com/deptofdefense

Policy

DoD Vulnerability Disclosure Policy

Purpose

This policy is intended to give security researchers clear guidelines for conducting vulnerability discovery activities directed at Department of Defense (DoD) web properties, and submitting discovered vulnerabilities to DoD.

Overview

Maintaining the security of our networks is a high priority at the DoD. Our information technologies provide critical services to Military Service members, their families, and DoD employees and contractors. Ultimately, our ensures that we can accomplish our missions and defend the United States of America.

The security researcher community regularly makes valuable contributions to the security of organizations and the broader Internet, and DoD recognizes that fostering a close relationship with the community will help improve our own security. So if you have information about a vulnerability in a DoD website or web application, we want to hear from you!

Information submitted to DoD under this policy will be used for defensive purposes – to mitigate or remediate vulnerabilities in our networks or applications, or the applications of our vendors.

This is DoD’s initial effort to create a positive feedback loop between researchers and DoD – please be patient as we refine and update the process.

Please review, understand, and agree to the following terms and conditions before conducting any testing of DoD networks and before submitting a report. Thank you.

Scope

Any public-facing website owned, operated, or controlled by DoD, including web applications hosted on those sites.¹

How to Submit a Report

Please provide a detailed summary of the vulnerability, including: type of issue; product, version, and configuration of software containing the bug; step-by-step instructions to reproduce the issue; proof-of-concept; impact of the issue; and suggested mitigation or remediation actions, as appropriate.

By clicking “Submit Report,” you are indicating that you have read, understand, and agree to the guidelines described in this policy for the conduct of security research and disclosure of vulnerabilities or indicators of vulnerabilities related to DoD information systems, and consent to having the contents of the communication and follow-up communications stored on a U.S. Government information system.

Guidelines

DoD will deal in good faith with researchers who discover, test, and submit vulnerabilities² or indicators of vulnerabilities in accordance with these guidelines:

• Your activities are limited exclusively to –
• (1) Testing to detect a vulnerability or identify an indicator related to a vulnerability;³ or
• (2) Sharing with, or receiving from, DoD information about a vulnerability or an indicator related to a vulnerability.
• You do no harm and do not exploit any vulnerability beyond the minimal amount of testing required to prove that a vulnerability exists or to identify an indicator related to a vulnerability.
• You avoid intentionally accessing the content of any communications, data, or information transiting or stored on DoD information system(s) – except to the extent that the information is directly related to a vulnerability and the access is necessary to prove that the vulnerability exists.
• You do not exfiltrate any data under any circumstances.
• You do not intentionally compromise the privacy or safety of DoD personnel (e.g. civilian employees or military members), or any third parties.
• You do not intentionally compromise the intellectual property or other commercial or financial interests of any DoD personnel or entities, or any third parties.
• You do not publicly disclose any details of the vulnerability, indicator of vulnerability, or the content of information rendered available by a vulnerability, except upon receiving explicit written authorization from DoD.
• You do not conduct denial of service testing.
• You do not conduct social engineering, including spear phishing, of DoD personnel or contractors.
• You do not submit a high-volume of low-quality reports.
• If at any point you are uncertain whether to continue testing, please engage with our team.

What You Can Expect From Us

We take every disclosure seriously and very much appreciate the efforts of security researchers. We will investigate every disclosure and strive to ensure that appropriate steps are taken to mitigate risk and remediate reported vulnerabilities.

DoD has a unique information and communications technology footprint that is tightly interwoven and globally deployed. Many DoD technologies are deployed in combat zones and, to varying degrees, support ongoing military operations; the proper functioning of DoD systems and applications can have a life-or-death impact on Service members and international allies and partners of the United States. DoD must take extra care while investigating the impact of vulnerabilities and providing a fix, so we ask your patience during this period.

DoD remains committed to coordinating with the researcher as openly and quickly as possible. This includes:

• Within three business days, we will acknowledge receipt of your report. DoD’s security team will investigate the report and may contact you for further information.
• To the best of our ability, we will confirm the existence of the vulnerability to the researcher and keep the researcher informed, as appropriate, as remediation of the vulnerability is underway.

• We want researchers to be recognized publicly for their contributions, if that is the researcher’s desire. We will seek to allow researchers to be publicly recognized whenever possible. However, public disclosure of vulnerabilities will only be authorized at the express written consent of DoD.

Information submitted to DoD under this policy will be used for defensive purposes – to mitigate or remediate vulnerabilities in our networks or applications, or the applications of our vendors.

Legal

You must comply with all applicable Federal, State, and local laws in connection with your security research activities or other participation in this vulnerability disclosure program.

DoD does not authorize, permit, or otherwise allow (expressly or impliedly) any person, including any individual, group of individuals, consortium, partnership, or any other business or legal entity to engage in any security research or vulnerability or threat disclosure activity that is inconsistent with this policy or the law. If you engage in any activities that are inconsistent with this policy or the law, you may be subject to criminal and/or civil liabilities.

To the extent that any security research or vulnerability disclosure activity involves the networks, systems, information, applications, products, or services of a non-DoD entity (e.g., other Federal departments or agencies; State, local, or tribal governments; private sector companies or persons; employees or personnel of any such entities; or any other such third party), that non-DoD third party may independently determine whether to pursue legal action or remedies related to such activities.

If you conduct your security research and vulnerability disclosure activities in accordance with the restrictions and guidelines set forth in this policy, (1) DoD will not initiate or recommend any law enforcement or civil lawsuits related to such activities, and (2) in the event of any law enforcement or civil action brought by anyone other than DoD, DoD will take steps to make known that your activities were conducted pursuant to and in compliance with this policy.

DoD may modify the terms of this policy or terminate the policy at any time.

¹ These websites constitute “information systems” as defined by 6 U.S.C. 1501(9).
² Vulnerabilities throughout this policy may be considered “security vulnerabilities” as defined by 6 U.S.C. 1501(17).
³ These activities, if applied consistent with the terms of this policy, constitute “defensive measures” as defined by 6 U.S.C. 1501(7).


Page 145 of 178 Rob Hamper. Faculty of Law. Masters by Research Thesis.

HackerOne General Terms and Conditions

Accessed 11 September 2017 at https://www.hackerone.com/terms/general

Last Updated: February 16, 2017

Please read these General Terms and Conditions carefully because they, together with the Customer Terms and Conditions or the Finder Terms and Conditions, govern Customer's or Finder's use of the Services.

Independent Transactions Any contract or other interaction between a Customer and a Finder, including with respect to any Program Policy, will be between the Customer and the Finder. HackerOne is not a party to such contracts and disclaims all liability arising from or related to such contracts.

General Prohibitions Customer or Finder shall not use the Services, or any portion thereof, for the benefit of any third party or in any manner not permitted by the Terms.

Changes to HackerOne Platform or HackerOne Site HackerOne may change all or any part of the HackerOne Platform or HackerOne Site provided that such change is within the compliance of the Terms herein. Further, where any Program is inactive or unattended by Company, HackerOne shall have the right to remove or disable access to any relevant Program Material or Vulnerability Reports if Company has not responded to HackerOne's written notice (by email) requiring attention within 3 business days of such written notice.

Changes to the Terms HackerOne may modify the Terms at any time upon notice to Customer or Finder. If Customer or Finder continues to use the Services after HackerOne has modified the Terms, Customer and Finder will be deemed to have agreed to be bound by the modified Terms.

Confidential Information HackerOne understands that it may receive Confidential Information of Customer, Customer understands that it may receive Confidential Information of HackerOne, and Finder understands that it, he or she may receive Confidential Information of a Customer or HackerOne. The receiving party agrees not to divulge to any third person any Confidential Information of another party and not to use any Confidential Information of another party for any purpose not contemplated by the Terms, provided Customer or Finder agrees that HackerOne may collect data with respect to Services and Programs and report on the aggregate response rate, aggregate Bounties paid and other aggregate measures ("HackerOne Aggregate Data") and the HackerOne Aggregate Data is not Confidential Information.

Privacy Policy HackerOne's Privacy Policy (https://www.hackerone.com/privacy), which describes how HackerOne collects, uses and discloses information from HackerOne's Customers and Finders, will be applicable to the Services.

Security Policy HackerOne's Security Policy (https://www.hackerone.com/security), which describes the security of the HackerOne Platform, will be applicable to the Services.

Vulnerability Guidelines HackerOne's Vulnerability Guidelines (https://www.hackerone.com/disclosure-guidelines), which describe the default policy governing vulnerability reporting through the Services, will be applicable to the Services. HackerOne's Vulnerability Guidelines are superseded by individual Program Policies in the event of a conflict.

Copyright Policy HackerOne respects copyright law in all jurisdictions in which it does business and expects its users to do the same. It is HackerOne's policy to terminate in appropriate circumstances Customers and Finders which repeatedly infringe or are believed to be repeatedly infringing the rights of copyright holders. Please see HackerOne's Copyright and IP Policy (https://www.hackerone.com/dmca), for further information.

Feedback Customer or Finder can submit Feedback by emailing HackerOne at [email protected]. By submitting any Feedback, Customer or Finder grants to HackerOne a worldwide, perpetual, irrevocable, non-exclusive, transferable, sublicenseable, fully-paid and royalty-free license under any and all intellectual property rights that Customer or Finder owns or controls to use, copy, modify, create derivative works based upon and otherwise exploit the Feedback for any purpose.

Links to Third Party Websites or Resources The Services may contain links to third-party websites or resources. HackerOne provides these links only as a convenience and is not responsible for the content, products or services on or available from those websites or resources or links displayed on such websites. Customer or Finder acknowledges sole responsibility for and assumes all risk arising from Customer's or Finder's use of any third-party websites or resources.

Warranty Disclaimers THE SERVICES ARE PROVIDED BY HACKERONE "AS IS," WITHOUT WARRANTY OF ANY KIND. WITHOUT LIMITING THE FOREGOING, HACKERONE EXPLICITLY DISCLAIMS ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT, AND ANY WARRANTIES ARISING OUT OF COURSE OF DEALING OR USAGE OF TRADE. HackerOne makes no warranty that the Services will meet Customer's or Finder's requirements, as applicable, or be available on an uninterrupted, secure or error-free basis.

Indemnities Customer will indemnify, defend and hold harmless HackerOne and its officers, directors, employees and agents, from and against any claims, disputes, demands, liabilities, damages, losses, and costs and expenses, including, without limitation, reasonable legal and accounting fees arising out of or in any way connected with (i) Customer's Program Material, (ii) Customer's use of a Vulnerability Report, or (iii) Customer's violation of the Terms.

Finder will indemnify, defend and hold harmless HackerOne and its officers, directors, employees and agents, from and against any claims, disputes, demands, liabilities, damages, losses, and costs and expenses, including, without limitation, reasonable legal and accounting fees arising out of or in any way connected with (i) Finder's access to or use of the Services, (ii) Finder's reliance on Program Material, (iii) Finder's Vulnerability Reports, or (iv) Finder's violation of the Terms.

Limitation of Liability NO PARTY TO THE TERMS WILL BE LIABLE FOR ANY INCIDENTAL, SPECIAL, EXEMPLARY OR CONSEQUENTIAL DAMAGES, INCLUDING LOST PROFITS, LOSS OF DATA OR GOODWILL, SERVICE INTERRUPTION, COMPUTER DAMAGE OR SYSTEM FAILURE OR THE COST OF SUBSTITUTE SERVICES ARISING OUT OF OR IN CONNECTION WITH THE TERMS OR FROM THE USE OF OR INABILITY TO USE THE SERVICES, WHETHER BASED ON WARRANTY, CONTRACT, TORT (INCLUDING NEGLIGENCE), OR ANY OTHER LEGAL THEORY, AND WHETHER OR NOT SUCH PARTY HAS BEEN INFORMED OF THE POSSIBILITY OF SUCH DAMAGE. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OR LIMITATION OF LIABILITY FOR CONSEQUENTIAL OR INCIDENTAL DAMAGES, SO THE ABOVE LIMITATION MAY NOT APPLY.

IN NO EVENT WILL CUSTOMER'S OR HACKERONE'S TOTAL LIABILITY TO THE OTHER ARISING OUT OF OR IN CONNECTION WITH THE TERMS OR FROM THE USE OF OR INABILITY TO USE THE SERVICES EXCEED THE AMOUNTS PAID OR PAYABLE BY CUSTOMER TO HACKERONE FOR USE OF THE SERVICES DURING THE TWELVE (12) MONTH PERIOD PRIOR TO THE DATE WHEN THE CLAIM OR LIABILITY FIRST AROSE.

IN NO EVENT WILL HACKERONE'S TOTAL LIABILITY TO FINDER ARISING OUT OF OR IN CONNECTION WITH THE TERMS OR FROM THE USE OF OR INABILITY TO USE THE SERVICES EXCEED $1,000.

Dispute Resolution The Terms and any action related thereto will be governed by the laws of the State of California without regard to its conflict of laws provisions. Any and all disputes arising out of or concerning the Terms shall be brought exclusively within the Superior Court for the County of or the United States District Court for the Northern District of California. Customer or Finder hereby submits to the personal jurisdiction of such courts and waives any and all objections to the exercise of jurisdiction, venue or inconvenient forum in such courts.

Publicity HackerOne may use Customer's and/or Finder's name in any publicity or advertising describing the relationship between the parties.

Miscellaneous Terms The Terms and any applicable executed order form that references the Terms constitute the entire and exclusive understanding and agreement between HackerOne and Customer or Finder, and supersede and replace any and all prior oral or written understandings or agreements between HackerOne and Customer or Finder regarding the Services. If any provision of the Terms is held to be invalid, prohibited or otherwise unenforceable by legal authority of competent jurisdiction, the other provisions of the Terms shall remain enforceable, and the invalid or unenforceable provision shall be deemed modified so that it is valid and enforceable to the maximum extent permitted by law. The Terms are assignable by HackerOne, and will bind and inure to the benefit of the parties, their successors and assigns. Customer or Finder may not assign the Terms without HackerOne's prior written consent, not to be unreasonably withheld.

Any notices or other communications provided by HackerOne under the Terms, including those regarding modifications to the Terms, will be given via email or by posting to the HackerOne Site.

HackerOne's failure to enforce any right or provision of the Terms will not be considered a waiver of such right or provision. Any such waiver will be effective only if in writing and signed by a duly authorized representative of HackerOne.

Termination HackerOne may terminate any Customer's or Finder's access to and use of the HackerOne Platform, at HackerOne's sole discretion, at any time and without notice to the Customer or Finder. A Customer or Finder may cancel such Customer's or Finder's account at any time by sending an email to [email protected].

Upon any termination, discontinuation or cancellation of the Services, the HackerOne Platform or a Customer's or Finder's account, the following provisions of the Terms will survive: No Endorsement, Independent Parties, Ownership, Warranty Disclaimers, Limitation of Liability, and Dispute Resolution.

Certain Definitions The following capitalized terms shall have the following meanings as used in these General Terms and Conditions, in the Customer Terms and Conditions, and/or in the Finder Terms and Conditions.

"Confidential Information" means any confidential or proprietary business or technical information about a party related to the Services or a Program, including the HackerOne Platform and the content of Vulnerability Reports. Confidential Information does not include any information that (i) was publicly known and made generally available in the public domain prior to the time of disclosure by the disclosing party; (ii) becomes publicly known and made generally available after disclosure by the disclosing party to the receiving party; (iii) is already in the possession of the receiving party at the time of disclosure by the disclosing party; or (iv) is obtained by the receiving party from a third party without a breach of such third party's obligations of confidentiality.

"Customer" means a customer of HackerOne using the HackerOne Platform to receive Vulnerability Reports.

"Feedback" means any feedback, comments or suggestions for improvements to the Services.

"Finder" means an individual or entity using the HackerOne Platform to provide Vulnerability Reports.

"HackerOne" means HackerOne, Inc., a Delaware corporation.

"HackerOne Platform" means the vulnerability coordination software-as-a-service platform offered by HackerOne.

"HackerOne Site" means HackerOne's website located at hackerone.com and related domains and subdomains.

"Program" means the security initiative(s) for which a Customer desires to receive Vulnerability Reports from Finders, which a Customer posts to the HackerOne Platform.

"Program Materials" means the Program Policy and the description of the Program.

"Program Policy" includes a description of the security-related services prepared by a Customer that the Customer is seeking from Finders, the terms, conditions and requirements governing the Program to which the Finders must agree, and the Bounties, if any, that a Customer will award to Finders who participate in the Program.
"Services" means the HackerOne Platform and any related service made available by or through HackerOne. "Terms" means these General Terms and Conditions and the Customer Terms and Conditions or the Finder Terms and Conditions, as applicable.

"Third Party Services" means any third party services to be provided to a Customer through HackerOne.

"Vulnerability Reports" means bug reports or other vulnerability information, in text, graphics, image, software, works of authorship of any kind, and information or other material that Finders provide or otherwise make available through the HackerOne Platform to a Customer resulting from participation in a Program.

HackerOne Finder Terms and Conditions

Accessed 11 September 2017 at https://www.hackerone.com/terms/finder

Last Updated: February 16, 2017

Welcome to HackerOne! By signing up as a Finder, you are agreeing to the following terms and the General Terms and Conditions found at https://www.hackerone.com/terms/general, which are incorporated by reference. A Finder is a hacker, security researcher or anyone who is willing to help companies and other organizations find bugs and vulnerabilities in their computer systems.

Your Use of HackerOne Platform You may use the HackerOne Platform to participate in Programs and submit Vulnerability Reports provided you comply with the Terms.

Vulnerability Reports By making any Vulnerability Report available to a Customer, you agree to the Program Policy. HackerOne's Vulnerability Guidelines are superseded by individual Program Policies in the event of a conflict.

You represent that neither the Vulnerability Reports nor any use of Vulnerability Report by the Customer will infringe, misappropriate or violate a third party's intellectual property rights, or rights of publicity or privacy, or result in the violation of any applicable law or regulation, including export control laws.

Bounties You may be awarded a Bounty for submitting Vulnerability Reports to a Customer for a particular Program if the submitted Vulnerability Reports meet the Customer's requirements described in the Program Policy. HackerOne will process Bounties that are monetary payments on behalf of Customer, and will typically remit the Bounty payments to you within ten (10) business days after HackerOne receives the Bounty payments from the Customer (or, if HackerOne has a Bounty Prepayment from Customer for the Program, within ten (10) business days after Customer notifies HackerOne that you have been awarded the Bounty). HackerOne is not responsible for delays in payment outside of HackerOne's reasonable control.

You may remain anonymous by using a pseudonym. To be eligible to receive a Bounty, however, you must provide HackerOne with accurate, complete and up-to-date information about you, including your mailing address, social security number (if applicable) and any other information that HackerOne reasonably requests, to allow HackerOne to legally send any Bounty to you and file the appropriate tax form following year end. If you do not provide this information to HackerOne, any Bounty that would otherwise be paid to you will be paid to a charity of HackerOne's choosing. You are responsible for paying all taxes related to the Bounty payments, if any.

HackerOne will not be liable in any way for any Program, including any errors or omissions in any Program Policy, or any loss or damage incurred as a result of your reliance on any Program Policy.

Independent Parties You are not an employee, contractor or agent of HackerOne, but are an independent third party who wants to participate in Programs and connect with the Customer through the Services. Nothing in the Terms is intended to render HackerOne and you as joint venturers, partners, or employer and employee. Under no circumstance shall HackerOne be considered to be your employer, nor shall you have any right as an employee of HackerOne.

Customers are not employees, contractors or agents of HackerOne, but are independent third parties who want to participate in Programs and connect with you through the Services. You agree that any legal remedy that you seek to obtain for a Customer's actions or omissions or other third parties regarding a Customer's Program, including Vulnerability Reports, will be limited to a claim against the particular Customer or other third parties who caused harm to you, and you will not attempt to impose liability on HackerOne or seek any legal remedy from HackerOne with respect to those actions or omissions.

Ownership and Licenses HackerOne does not claim any ownership rights in any Vulnerability Reports. You agree that HackerOne may collect statistical and other information about Vulnerability Reports, and use that information at HackerOne. Except for any Vulnerability Reports, HackerOne and its licensors exclusively own all right, title and interest in and to the Services and content contained on the Services, including all intellectual property rights. The Services and HackerOne content are protected by copyright, trademark, and other laws of the United States and foreign countries.

By making any Vulnerability Report available to a Customer through the Services, you hereby grant to HackerOne a perpetual, irrevocable, non-exclusive, transferable, sublicenseable, worldwide, royalty-free license to use, copy, reproduce, display, modify, adapt, transmit and distribute copies of that Vulnerability Report, for the sole purpose of providing the Services.

By making any Vulnerability Report available to a Customer through the Services, you hereby grant to the Customer a perpetual, irrevocable, non-exclusive, transferable, sublicenseable, worldwide, royalty-free license to use, copy, reproduce, display, modify, adapt, transmit and distribute copies of that Vulnerability Report.

HackerOne hereby grants to you a revocable, non-exclusive, non-transferable, non-sublicenseable, worldwide, royalty-free license to use the HackerOne Platform and access and view the content that HackerOne makes available on the HackerOne Platform solely in connection with your permitted use of the HackerOne Platform. HackerOne may change or discontinue all or any part of the HackerOne Platform, including your access to it, at HackerOne's discretion.

Authority If you are using the Services on behalf of a company (such as your employer), or a Customer or other legal entity, you represent that you have the authority to bind that company or other legal entity to the Terms. If you are a minor (in the United States, that means under 18 years old), your parents must agree to the Terms on your behalf.

Definitions Some of the capitalized terms used in these Finder Terms and Conditions are defined in the General Terms and Conditions.

HackerOne Customer Terms

Accessed 21 September 2017 at https://www.hackerone.com/terms

Last Updated: June 14, 2016

Welcome to HackerOne! Please read these Customer Terms and Conditions carefully because they govern each Customer's access to and use of the Services.

Definitions Certain capitalized terms used in these Customer Terms and Conditions are defined in the General Terms and Conditions found at https://www.hackerone.com/terms/general, which are incorporated by reference.

Agreement to Terms By using the Services, Customer agrees to be bound by these Customer Terms and Conditions and the General Terms and Conditions, which are incorporated by reference.

Services HackerOne Platform. Subject to Customer's compliance with the Terms, HackerOne will allow Customer to access and use the HackerOne Platform solely for its own business purposes in order to allow Customer to connect with Finders. Customer may create Programs and offer Bounties to allow Finders to submit Vulnerability Reports. Finders can browse the Programs and contact Customer through the HackerOne Platform if Finders are interested in submitting Vulnerability Reports for the Program.

Other HackerOne Services. If set forth on a fully executed Order Form or otherwise mutually agreed by HackerOne and Customer, the Services may include other services to be provided by HackerOne. A description of these other services and any special terms related to these services are found at https://www.hackerone.com/terms/services.

Third Party Services. If set forth on a fully executed Order Form or otherwise mutually agreed by HackerOne and Customer, the Services may include certain Third Party Services. Notwithstanding anything to the contrary in the Terms, the Third Party Services will be provided by the third party to Customer, and HackerOne is not responsible for the Third Party Services, and HackerOne makes no warranty or representation with respect to the Third Party Services. Customer agrees to be responsible for all payment obligations related to the Third Party Services and to agree to and be bound by any terms and conditions presented to Customer by the Third Party Services provider governing the use of the applicable Third Party Services, and unless otherwise agreed, Customer will remit payment for the Third Party Services directly to HackerOne within thirty (30) days of invoice, and HackerOne will pay the Third Party Services provider.

Use of the Services as a Finder. If Customer or an employee of Customer desires to access and use the Services as a Finder with the consent of Customer, then the Finder Terms and Conditions found at https://www.hackerone.com/terms/finder will govern Customer's or Customer's employee's use of the Services, as a Finder. The Finder Terms and Conditions are independent of, and in addition to, these Customer Terms and Conditions. In such case, Customer or Customer's employee is solely responsible for performing Finder's obligations under the Finder Terms and Conditions.

No Endorsement HackerOne does not endorse any Finder. HackerOne is not responsible for any damage or harm resulting from Customer's communications or interactions with Finders or other customers, either through the Services or otherwise. Any reputation ranking or description of any Finder as part of the Services is not intended by HackerOne as an endorsement of any type. Any selection or use of any Finder is at Customer's own risk.

Any use or reliance of Vulnerability Reports that Customer receives is at Customer's own risk. HackerOne does not endorse, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Vulnerability Report. Under no circumstances will HackerOne be liable in any way for any Vulnerability Report, including, but not limited to, any errors or omissions in any Vulnerability Report, or any loss or damage of any kind incurred as a result of the use of any Vulnerability Report.

Finders are not employees, contractors or agents of HackerOne, but are independent third parties who want to participate in Programs and connect with Customer through the Services. Customer agrees that any legal remedy that Customer seeks to obtain for actions or omissions of Finder or other third parties regarding Customer's Program, including Vulnerability Reports, will be limited to a claim against the particular Finder or other third parties who caused harm to Customer, and Customer agrees not to attempt to impose liability on HackerOne or seek any legal remedy from HackerOne with respect to such actions or omissions.

Bounties and HackerOne Fees In accordance with the Program Terms, Customer agrees to award Bounties to those Finders who submit Vulnerability Reports to Customer for a particular Program if the submitted Vulnerability Reports meet Customer's requirements. HackerOne agrees to process Bounties that are monetary payments on behalf of Customer and will typically remit the Bounty payments to the applicable Finders within ten (10) business days after HackerOne receives the Bounty payment from Customer (or, if HackerOne has a Bounty Prepayment from Customer for the Program, or Customer has a credit card on file with HackerOne, within ten (10) business days after Customer notifies HackerOne via the HackerOne Platform that the Bounty has been awarded to a Finder). HackerOne is not responsible for processing any Bounty award that is not a monetary payment, or for delays in payment outside of HackerOne's reasonable control.

Customer agrees to pay HackerOne a fee equal to twenty percent (20%) of each monetary Bounty awarded to a Finder for access to and use of the HackerOne Platform.

Customer agrees to pay HackerOne any additional fees listed in any applicable Order Form or otherwise agreed by the parties (collectively, "HackerOne Fees").

Customer agrees to pay the HackerOne Fees and the applicable Bounty payments directly to HackerOne within thirty (30) days of the date of HackerOne's invoice unless otherwise stated on Order Form. The HackerOne Fees and Bounty payments are nonrefundable, except as otherwise specifically provided in the Terms.

Except for any amounts disputed in good faith, all past due amounts payable in accordance with any applicable Order Form or the Terms will incur interest at a rate of 1.5% per month or the maximum rate permitted by law, whichever is less. Customer will reimburse HackerOne for all reasonable costs and expenses incurred (including reasonable attorneys' fees) in collecting any overdue amounts.

Programs and Program Materials Except as may be agreed by the parties, Customer is solely responsible for the management and administration of Customer's Programs through the Services. HackerOne reserves the right to reject a Program for any reason in its sole discretion.

While HackerOne may assist Customer in preparing Customer's Program Material, Customer is solely responsible for Customer's Program Material. Customer represents and warrants that Customer owns all of Customer's Program Material or that Customer has all rights necessary to grant HackerOne the license rights in Customer's Program Material under the Terms. Customer also represents and warrants that neither the Program Material, nor Customer's use and provision of the Program Material to be made available through the Services, nor any use of the Program Material by HackerOne or a Finder on or through the Services, will infringe, misappropriate or violate a third party's intellectual property rights, or rights of publicity or privacy, or result in the violation of any applicable law or regulation, including export control laws.

Ownership and Licenses HackerOne does not claim any ownership rights in any Program Material or Vulnerability Reports, and nothing in the Terms will be deemed to restrict any rights that Customer may have to use and exploit Customer's Program Material and Vulnerability Reports. Customer acknowledges and agrees that HackerOne may collect statistical and other information, which will not identify particular Customers, and use such information internally at HackerOne. Subject to Customer's rights in any Program Material or Vulnerability Reports, HackerOne and its licensors exclusively own all right, title and interest in and to the Services and content contained thereon, including all associated intellectual property rights. Customer acknowledges that the Services and HackerOne content are protected by copyright, trademark, and other laws of the United States and foreign countries.

By making any Program Material available through the Services, Customer hereby grants to HackerOne a perpetual, irrevocable, non-exclusive, non-transferable, non-sublicenseable, worldwide, royalty-free license to use, copy, reproduce, display, modify, adapt, transmit and distribute copies of Customer's Program Material, for the sole purpose of providing the Services.

Subject to Customer's compliance with the Terms, HackerOne hereby grants to Customer a non-exclusive, non-transferable, non-sublicenseable, worldwide, royalty-free license to access and view the content that HackerOne makes available on the Services solely in connection with Customer's permitted use of the Services.

Subject to Customer's compliance with the Terms, HackerOne hereby grants to Customer a non-exclusive, non-transferable, non-sublicenseable, worldwide, royalty-free license to access and view the Vulnerability Reports that HackerOne makes available on the Services solely in connection with Customer's permitted use of the Services. There shall be no fee for the license unless otherwise provided on an Order Form.


Facebook General Terms

Accessed 12 September 2017 at http://www.facebook.com/legal/terms

This agreement was written in English (US). To the extent any translated version of this agreement conflicts with the English version, the English version controls. Please note that Section 16 contains certain changes to the general terms for users outside the United States.

Date of Last Revision: January 30, 2015

Statement of Rights and Responsibilities This Statement of Rights and Responsibilities ("Statement," "Terms," or "SRR") derives from the Facebook Principles, and is our terms of service that governs our relationship with users and others who interact with Facebook, as well as Facebook brands, products and services, which we call the “Facebook Services” or “Services”. By using or accessing the Facebook Services, you agree to this Statement, as updated from time to time in accordance with Section 13 below. Additionally, you will find resources at the end of this document that help you understand how Facebook works.

Because Facebook provides a wide range of Services, we may ask you to review and accept supplemental terms that apply to your interaction with a specific app, product, or service. To the extent those supplemental terms conflict with this SRR, the supplemental terms associated with the app, product, or service govern with respect to your use of such app, product or service to the extent of the conflict.

1. Privacy Your privacy is very important to us. We designed our Data Policy to make important disclosures about how you can use Facebook to share with others and how we collect and can use your content and information. We encourage you to read the Data Policy, and to use it to help you make informed decisions.

2. Sharing Your Content and Information You own all of the content and information you post on Facebook, and you can control how it is shared through your privacy and application settings. In addition: 1. For content that is covered by intellectual property rights, like photos and videos (IP content), you specifically give us the following permission, subject to your privacy and application settings: you grant us a non-exclusive, transferable, sub-licensable, royalty-free, worldwide license to use any IP content that you post on or in connection with Facebook (IP License). This IP License ends when you delete your IP content or your account unless your content has been shared with others, and they have not deleted it. 2. When you delete IP content, it is deleted in a manner similar to emptying the recycle bin on a computer. However, you understand that removed content may persist in backup copies for a reasonable period of time (but will not be available to others). 3. When you use an application, the application may ask for your permission to access your content and information as well as content and information that others have shared with you. We require applications to respect your privacy, and your agreement with that application will control how the application can use, store,

and transfer that content and information. (To learn more about Platform, including how you can control what information other people may share with applications, read our Data Policy and Platform Page.)
4. When you publish content or information using the Public setting, it means that you are allowing everyone, including people off of Facebook, to access and use that information, and to associate it with you (i.e., your name and profile picture).
5. We always appreciate your feedback or other suggestions about Facebook, but you understand that we may use your feedback or suggestions without any obligation to compensate you for them (just as you have no obligation to offer them).

3. Safety
1. We do our best to keep Facebook safe, but we cannot guarantee it. We need your help to keep Facebook safe, which includes the following commitments by you:
2. You will not post unauthorized commercial communications (such as spam) on Facebook.
3. You will not collect users' content or information, or otherwise access Facebook, using automated means (such as harvesting bots, robots, spiders, or scrapers) without our prior permission.
4. You will not engage in unlawful multi-level marketing, such as a pyramid scheme, on Facebook.
5. You will not upload viruses or other malicious code.
6. You will not solicit login information or access an account belonging to someone else.
7. You will not bully, intimidate, or harass any user.
8. You will not post content that: is hate speech, threatening, or pornographic; incites violence; or contains nudity or graphic or gratuitous violence.
9. You will not develop or operate a third-party application containing alcohol-related, dating or other mature content (including advertisements) without appropriate age-based restrictions.
10. You will not use Facebook to do anything unlawful, misleading, malicious, or discriminatory.
11. You will not do anything that could disable, overburden, or impair the proper working or appearance of Facebook, such as a denial of service attack or interference with page rendering or other Facebook functionality.
12. You will not facilitate or encourage any violations of this Statement or our policies.

4. Registration and Account Security
Facebook users provide their real names and information, and we need your help to keep it that way. Here are some commitments you make to us relating to registering and maintaining the security of your account:
1. You will not provide any false personal information on Facebook, or create an account for anyone other than yourself without permission.
2. You will not create more than one personal account.
3. If we disable your account, you will not create another one without our permission.
4. You will not use your personal timeline primarily for your own commercial gain, and will use a Facebook Page for such purposes.
5. You will not use Facebook if you are under 13.
6. You will not use Facebook if you are a convicted sex offender.
7. You will keep your contact information accurate and up-to-date.

8. You will not share your password (or in the case of developers, your secret key), let anyone else access your account, or do anything else that might jeopardize the security of your account.
9. You will not transfer your account (including any Page or application you administer) to anyone without first getting our written permission.
10. If you select a username or similar identifier for your account or Page, we reserve the right to remove or reclaim it if we believe it is appropriate (such as when a trademark owner complains about a username that does not closely relate to a user's actual name).

5. Protecting Other People's Rights
1. We respect other people's rights, and expect you to do the same.
2. You will not post content or take any action on Facebook that infringes or violates someone else's rights or otherwise violates the law.
3. We can remove any content or information you post on Facebook if we believe that it violates this Statement or our policies.
4. We provide you with tools to help you protect your intellectual property rights. To learn more, visit our How to Report Claims of Intellectual Property Infringement page.
5. If we remove your content for infringing someone else's copyright, and you believe we removed it by mistake, we will provide you with an opportunity to appeal.
6. If you repeatedly infringe other people's intellectual property rights, we will disable your account when appropriate.
7. You will not use our copyrights or Trademarks or any confusingly similar marks, except as expressly permitted by our Brand Usage Guidelines or with our prior written permission.
8. If you collect information from users, you will: obtain their consent, make it clear you (and not Facebook) are the one collecting their information, and post a privacy policy explaining what information you collect and how you will use it.
9. You will not post anyone's identification documents or sensitive financial information on Facebook.
10. You will not tag users or send email invitations to non-users without their consent. Facebook offers social reporting tools to enable users to provide feedback about tagging.

6. Mobile and Other Devices
1. We currently provide our mobile services for free, but please be aware that your carrier's normal rates and fees, such as text messaging and data charges, will still apply.
2. In the event you change or deactivate your mobile telephone number, you will update your account information on Facebook within 48 hours to ensure that your messages are not sent to the person who acquires your old number.
3. You provide consent and all rights necessary to enable users to sync (including through an application) their devices with any information that is visible to them on Facebook.

7. Payments
If you make a payment on Facebook, you agree to our Payments Terms unless it is stated that other terms apply.

8. Special Provisions Applicable to Developers/Operators of Applications and Websites
If you are a developer or operator of a Platform application or website or if you use Social Plugins, you must comply with the Facebook Platform Policy.

9. About Advertisements and Other Commercial Content Served or Enhanced by Facebook
Our goal is to deliver advertising and other commercial or sponsored content that is valuable to our users and advertisers. In order to help us do that, you agree to the following:
1. You give us permission to use your name, profile picture, content, and information in connection with commercial, sponsored, or related content (such as a brand you like) served or enhanced by us. This means, for example, that you permit a business or other entity to pay us to display your name and/or profile picture with your content or information, without any compensation to you. If you have selected a specific audience for your content or information, we will respect your choice when we use it.
2. We do not give your content or information to advertisers without your consent.
3. You understand that we may not always identify paid services and communications as such.

10. Special Provisions Applicable to Advertisers
If you use our self-service advertising creation interfaces for creation, submission and/or delivery of any advertising or other commercial or sponsored activity or content (collectively, the “Self-Serve Ad Interfaces”), you agree to our Self-Serve Ad Terms. In addition, your advertising or other commercial or sponsored activity or content placed on Facebook or our publisher network will comply with our Advertising Policies.

11. Special Provisions Applicable to Pages
If you create or administer a Page on Facebook, or run a promotion or an offer from your Page, you agree to our Pages Terms.

12. Special Provisions Applicable to Software
1. If you download or use our software, such as a stand-alone software product, an app, or a browser plugin, you agree that from time to time, the software may download and install upgrades, updates and additional features from us in order to improve, enhance, and further develop the software.
2. You will not modify, create derivative works of, decompile, or otherwise attempt to extract source code from us, unless you are expressly permitted to do so under an open source license, or we give you express written permission.

13. Amendments
1. We’ll notify you before we make changes to these terms and give you the opportunity to review and comment on the revised terms before continuing to use our Services.
2. If we make changes to policies, guidelines or other terms referenced in or incorporated by this Statement, we may provide notice on the Site Governance Page.
3. Your continued use of the Facebook Services, following notice of the changes to our terms, policies or guidelines, constitutes your acceptance of our amended terms, policies or guidelines.

14. Termination
If you violate the letter or spirit of this Statement, or otherwise create risk or possible legal exposure for us, we can stop providing all or part of Facebook to you. We will notify you by email or at the next time you attempt to access your account.

You may also delete your account or disable your application at any time. In all such cases, this Statement shall terminate, but the following provisions will still apply: 2.2, 2.4, 3-5, 9.3, and 14-18.

15. Disputes
1. You will resolve any claim, cause of action or dispute (claim) you have with us arising out of or relating to this Statement or Facebook exclusively in the U.S. District Court for the Northern District of California or a state court located in San Mateo County, and you agree to submit to the personal jurisdiction of such courts for the purpose of litigating all such claims. The laws of the State of California will govern this Statement, as well as any claim that might arise between you and us, without regard to conflict of law provisions.
2. If anyone brings a claim against us related to your actions, content or information on Facebook, you will indemnify and hold us harmless from and against all damages, losses, and expenses of any kind (including reasonable legal fees and costs) related to such claim. Although we provide rules for user conduct, we do not control or direct users' actions on Facebook and are not responsible for the content or information users transmit or share on Facebook. We are not responsible for any offensive, inappropriate, obscene, unlawful or otherwise objectionable content or information you may encounter on Facebook. We are not responsible for the conduct, whether online or offline, of any user of Facebook.
3. WE TRY TO KEEP FACEBOOK UP, BUG-FREE, AND SAFE, BUT YOU USE IT AT YOUR OWN RISK. WE ARE PROVIDING FACEBOOK AS IS WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES INCLUDING, BUT NOT LIMITED TO, IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NON-INFRINGEMENT. WE DO NOT GUARANTEE THAT FACEBOOK WILL ALWAYS BE SAFE, SECURE OR ERROR-FREE OR THAT FACEBOOK WILL ALWAYS FUNCTION WITHOUT DISRUPTIONS, DELAYS OR IMPERFECTIONS.
FACEBOOK IS NOT RESPONSIBLE FOR THE ACTIONS, CONTENT, INFORMATION, OR DATA OF THIRD PARTIES, AND YOU RELEASE US, OUR DIRECTORS, OFFICERS, EMPLOYEES, AND AGENTS FROM ANY CLAIMS AND DAMAGES, KNOWN AND UNKNOWN, ARISING OUT OF OR IN ANY WAY CONNECTED WITH ANY CLAIM YOU HAVE AGAINST ANY SUCH THIRD PARTIES. IF YOU ARE A CALIFORNIA RESIDENT, YOU WAIVE CALIFORNIA CIVIL CODE §1542, WHICH SAYS: A GENERAL RELEASE DOES NOT EXTEND TO CLAIMS WHICH THE CREDITOR DOES NOT KNOW OR SUSPECT TO EXIST IN HIS OR HER FAVOR AT THE TIME OF EXECUTING THE RELEASE, WHICH IF KNOWN BY HIM OR HER MUST HAVE MATERIALLY AFFECTED HIS OR HER SETTLEMENT WITH THE DEBTOR. WE WILL NOT BE LIABLE TO YOU FOR ANY LOST PROFITS OR OTHER CONSEQUENTIAL, SPECIAL, INDIRECT, OR INCIDENTAL DAMAGES ARISING OUT OF OR IN CONNECTION WITH THIS STATEMENT OR FACEBOOK, EVEN IF WE HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. OUR AGGREGATE LIABILITY ARISING OUT OF THIS STATEMENT OR FACEBOOK WILL NOT EXCEED THE GREATER OF ONE HUNDRED DOLLARS ($100) OR THE AMOUNT YOU HAVE PAID US IN THE PAST TWELVE MONTHS. APPLICABLE LAW MAY NOT ALLOW THE LIMITATION OR EXCLUSION OF LIABILITY OR INCIDENTAL OR CONSEQUENTIAL DAMAGES, SO THE ABOVE LIMITATION OR EXCLUSION MAY NOT APPLY TO YOU. IN SUCH CASES, FACEBOOK'S LIABILITY WILL BE LIMITED TO THE FULLEST EXTENT PERMITTED BY APPLICABLE LAW.

16. Special Provisions Applicable to Users Outside the United States
We strive to create a global community with consistent standards for everyone, but we also strive to respect local laws. The following provisions apply to users and non-users who interact with Facebook outside the United States:
1. You consent to having your personal data transferred to and processed in the United States.

2. If you are located in a country embargoed by the United States, or are on the U.S. Treasury Department's list of Specially Designated Nationals you will not engage in commercial activities on Facebook (such as advertising or payments) or operate a Platform application or website. You will not use Facebook if you are prohibited from receiving products, services, or software originating from the United States.
3. Certain specific terms that apply only for German users are available here.

17. Definitions
1. By "Facebook" or "Facebook Services" we mean the features and services we make available, including through (a) our website at www.facebook.com and any other Facebook branded or co-branded websites (including sub-domains, international versions, widgets, and mobile versions); (b) our Platform; (c) social plugins such as the Like button, the Share button and other similar offerings; and (d) other media, brands, products, services, software (such as a toolbar), devices, or networks now existing or later developed. Facebook reserves the right to designate, in its sole discretion, that certain of our brands, products, or services are governed by separate terms and not this SRR.
2. By "Platform" we mean a set of APIs and services (such as content) that enable others, including application developers and website operators, to retrieve data from Facebook or provide data to us.
3. By "information" we mean facts and other information about you, including actions taken by users and non-users who interact with Facebook.
4. By "content" we mean anything you or other users post, provide or share using Facebook Services.
5. By "data" or "user data" or "user's data" we mean any data, including a user's content or information that you or third parties can retrieve from Facebook or provide to Facebook through Platform.
6. By "post" we mean post on Facebook or otherwise make available by using Facebook.
7. By "use" we mean use, run, copy, publicly perform or display, distribute, modify, translate, and create derivative works of.
8. By "application" we mean any application or website that uses or accesses Platform, as well as anything else that receives or has received data from us. If you no longer access Platform but have not deleted all data from us, the term application will apply until you delete the data.
9. By "Trademarks" we mean the list of trademarks provided here.

18. Other
1. If you are a resident of or have your principal place of business in the US or Canada, this Statement is an agreement between you and Facebook, Inc. Otherwise, this Statement is an agreement between you and Facebook Ireland Limited. References to “us,” “we,” and “our” mean either Facebook, Inc. or Facebook Ireland Limited, as appropriate.
2. This Statement makes up the entire agreement between the parties regarding Facebook, and supersedes any prior agreements.
3. If any portion of this Statement is found to be unenforceable, the remaining portion will remain in full force and effect.
4. If we fail to enforce any of this Statement, it will not be considered a waiver.
5. Any amendment to or waiver of this Statement must be made in writing and signed by us.
6. You will not transfer any of your rights or obligations under this Statement to anyone else without our consent.

7. All of our rights and obligations under this Statement are freely assignable by us in connection with a merger, acquisition, or sale of assets, or by operation of law or otherwise.
8. Nothing in this Statement shall prevent us from complying with the law.
9. This Statement does not confer any third party beneficiary rights.
10. We reserve all rights not expressly granted to you.
11. You will comply with all applicable laws when using or accessing Facebook.

By using or accessing Facebook Services, you agree that we can collect and use such content and information in accordance with the Data Policy as amended from time to time. You may also want to review the following documents, which provide additional information about your use of Facebook:
Payment Terms: These additional terms apply to all payments made on or through Facebook, unless it is stated that other terms apply.
Platform Page: This page helps you better understand what happens when you add a third-party application or use Facebook Connect, including how they may access and use your data.
Facebook Platform Policies: These guidelines outline the policies that apply to applications, including Connect sites.
Advertising Policies: These guidelines outline the policies that apply to advertisements placed on Facebook.
Self-Serve Ad Terms: These terms apply when you use the Self-Serve Ad Interfaces to create, submit, or deliver any advertising or other commercial or sponsored activity or content.
Promotions Guidelines: These guidelines outline the policies that apply if you offer contests, sweepstakes, and other types of promotions on Facebook.
Facebook Brand Resources: These guidelines outline the policies that apply to use of Facebook trademarks, logos and screenshots.
How to Report Claims of Intellectual Property Infringement
Pages Terms: These guidelines apply to your use of Facebook Pages.
Community Standards: These guidelines outline our expectations regarding the content you post to Facebook and your activity on Facebook.

To access the Statement of Rights and Responsibilities in several different languages, change the language setting for your Facebook session by clicking on the language link in the left corner of most pages. If the Statement is not available in the language you select, we will default to the English version.

Facebook Whitehat Terms.413

Information (Last updated May 23, 2017)

If you believe you have found a security vulnerability on Facebook (or another member of the Facebook family of companies), we encourage you to let us know right away. We will investigate all legitimate reports and do our best to quickly fix the problem. Before reporting though, please review this page including our responsible disclosure policy, reward guidelines, and those things that should not be reported.

If you are looking to report another type of issue, please use the links below for assistance.

• If your account or a friend’s account is sending out suspicious links: https://www.facebook.com/help/hacked
• To report abuse: https://www.facebook.com/help/reportlinks
• For any other questions or concerns, please visit our Help Center: https://www.facebook.com/help
• For program updates and news from our Bug Bounty team, please Like our Facebook page: https://www.facebook.com/bugbounty

Responsible Disclosure Policy

If you comply with the policies below when reporting a security issue to Facebook, we will not initiate a lawsuit or law enforcement investigation against you in response to your report. We ask that:

• You give us reasonable time to investigate and mitigate an issue you report before making public any information about the report or sharing such information with others.
• You do not interact with an individual account (which includes modifying or accessing data from the account) if the account owner has not consented to such actions.
• You make a good faith effort to avoid privacy violations and disruptions to others, including (but not limited to) destruction of data and interruption or degradation of our services.
• You do not exploit a security issue you discover for any reason. (This includes demonstrating additional risk, such as attempted compromise of sensitive company data or probing for additional issues.)
• You do not violate any other applicable laws or regulations.

Bug Bounty Program Terms

We recognize and reward security researchers who help us keep people safe by reporting vulnerabilities in our services. Monetary bounties for such reports are entirely at Facebook’s discretion, based on risk, impact, and other factors. To potentially qualify for a bounty, you first need to meet the following requirements:

• Adhere to our Responsible Disclosure Policy (see above).
• Report a security bug: that is, identify a vulnerability in our services or infrastructure which creates a security or privacy risk. (Note that Facebook ultimately determines the risk of an issue, and that many software bugs are not security issues.)

413 Accessed 12 September at Facebook.com/whitehat

• Your report must describe a problem involving one of the products or services listed under “Bug Bounty Program Scope” (see below).
• We specifically exclude certain types of potential security issues; these are listed under “Ineligible Reports and False Positives” (see below).
• Submit your report via our “Report a Security Vulnerability” form (one issue per report) and respond to the report with any updates. Please do not contact employees directly or through other channels about a report.
• If you inadvertently cause a privacy violation or disruption (such as accessing account data, service configurations, or other confidential information) while investigating an issue, be sure to disclose this in your report.
• Use test accounts when investigating issues. If you cannot reproduce an issue with a test account, you can use a real account (except for automated testing). Do not interact with other accounts without consent (e.g. do not test against another user’s account).

In turn, we will follow these guidelines when evaluating reports under our bug bounty program:

• We investigate and respond to all valid reports. Due to the volume of reports we receive, though, we prioritize evaluations based on risk and other factors, and it may take some time before you receive a reply.
• We determine bounty amounts based on a variety of factors, including (but not limited to) impact, ease of exploitation, and quality of the report. If we pay a bounty, the minimum reward is $500. Note that extremely low-risk issues may not qualify for a bounty at all.
• We seek to pay similar amounts for similar issues, but bounty amounts and qualifying issues may change with time. Past rewards do not necessarily guarantee similar results in the future.
• In the event of duplicate reports, we award a bounty to the first person to submit an issue. (Facebook determines duplicates and may not share details on the other reports.) A given bounty is only paid to one individual.
• You may donate a bounty to a recognized charity (subject to approval by Facebook), and we double bounty amounts that are donated in this way.
• We reserve the right to publish reports (and accompanying updates).
• We publish a list of researchers who have submitted valid security reports. You must receive a bounty to be eligible for this list, but your participation is then optional. We reserve the right to limit or modify the information accompanying your name in the list.
• We verify that all bounty awards are permitted by applicable laws, including (but not limited to) US trade sanctions and economic restrictions.

Note that your use of Facebook services and the services of any member of the Facebook family of companies, including for purposes of this program, is subject to Facebook’s Terms and Policies and the terms and policies of any member of the Facebook family of companies whose services you use. We (and any member of the Facebook family of companies that is the subject of your report) may retain any communications about security issues you report for as long as we deem necessary for program purposes, and we may cancel or modify this program at any time.

Bug Bounty Program Scope

To qualify for a bounty, report a security bug in Facebook or one of the following qualifying products or acquisitions:

• Atlas
• Instagram
• Internet.org / Free Basics
• Moves
• Oculus
• Onavo
• Open source projects by Facebook (e.g. osquery)
• WhatsApp

Note that services not owned by Facebook (e.g. WordPress VIP and Page.ly) are not eligible under our bug bounty program. While we often care about vulnerabilities affecting services we use, we cannot guarantee our disclosure policies apply to services from other companies.

Specific Examples of Program Scope

If you are unsure whether a service is eligible for a bounty or not, feel free to ask us. Below are some specific examples of eligible and ineligible apps and websites to help guide your research.

Target: Facebook
Eligible: Websites: facebook.com, fb.com, messenger.com, thefacebook.com. Apps: Ads Manager, Facebook, Facebook Lite, Workplace by Facebook, Groups, Hello, Mentions, Messenger, Moments, Pages Manager, Paper (by Facebook), Work Chat.
Ineligible: Websites: events.fb.com, fbsbx.com, fb.me, investor.fb.com, media.fb.com, newsroom.fb.com, research.fb.com, search.fb.com, work.fb.com, madebykorea.fb.com, accountkit.com. Apps: Facebook for Blackberry, Facebook for Windows.

Target: Atlas
Eligible: Websites: app.atlassolutions.com.
Ineligible: Websites: atlassolutions.com (no subdomain), atdmt.com, atlassbx.com.

Target: Instagram
Eligible: Websites: instagram.com. Apps: Boomerang, Hyperlapse, Instagram, Layout.
Ineligible: Websites: blog.instagram.com.

Target: Internet.org
Eligible: Websites: freebasics.com, internet.org. Apps: Free Basics.

Target: Moves
Eligible: Websites: moves-app.com. Apps: Moves.

Target: Oculus
Eligible: Websites: oculus.com.
Ineligible: Websites: answers.oculus.com, forums.oculus.com, support.oculus.com.

Target: Onavo
Eligible: Websites: onavo.com. Apps: Onavo Count, Onavo Extend, Onavo Protect.
Ineligible: Websites: blog.onavo.com.

Target: Open Source
Eligible: Code repos: https://github.com/facebook/
Ineligible: Code repos: https://github.com/facebookarchive/

Target: WhatsApp
Eligible: Websites: web.whatsapp.com, whatsapp.net, www.whatsapp.com. Apps: WhatsApp.
Ineligible: Websites: blog.whatsapp.com, translate.whatsapp.com, alpha.whatsapp.com, media.whatsapp.com.

Target: Other Partnerships/Acquisitions
Eligible: Websites: daytum.com, drop.io, face.com, .com, monoidics.com, opencompute.org, and spaceport.io. Services (websites and apps): LiveRail.

Out of Scope

• Spam or social engineering techniques.
• Denial-of-service attacks.
• Content injection. Posting content on Facebook is a core feature, and content injection (also "content spoofing" or "HTML injection") is ineligible unless you can clearly demonstrate a significant risk.
• Security issues in third-party apps or websites that integrate with Facebook (including most pages on apps.facebook.com). These are not managed by Facebook and do not qualify under our guidelines for security testing.
• Executing scripts on sandboxed domains (such as fbrell.com or fbsbx.com). Using alert(document.domain) can help verify if the context is actually *.facebook.com.

False Positives

• Open redirects. Any redirect using our "linkshim" system is not an open redirect (learn more).
• Profile pictures available publicly. Your current profile picture is always public (regardless of size or resolution).
• Note that public information also includes your username, ID, name, current cover photo, gender, and anything you’ve shared publicly (learn more).
• Sending messages to anyone on Facebook (learn more).
• Accessing photos via raw image URLs from our CDN (Content Delivery Network). One of our engineers has posted a more detailed explanation (external link).
• Case-insensitive passwords. We accept the "caps lock" version of a password or with the first character capitalized to avoid login problems.
• Missing attribution on page posts. We generally show page admins which admin created a post, but this is not a security control.
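The "case-insensitive passwords" item above describes a deliberate usability behaviour rather than a bug: Facebook's login accepts the password as typed, its caps-lock inverse, and the first-character-capitalized form. The sketch below illustrates that acceptance logic; it is a reconstruction for explanation only, not Facebook's actual code, and the helper names (accepted_variants, check_password) and the use of SHA-256 in place of a real password hash are this example's own assumptions.

```python
import hashlib

def accepted_variants(password: str) -> set[str]:
    """Return the password variants the policy above tolerates:
    the password as typed, its caps-lock inverse (case swapped),
    and the first-character-capitalized form."""
    first_cap = password[:1].upper() + password[1:] if password else password
    return {password, password.swapcase(), first_cap}

def check_password(typed: str, stored_hash: str) -> bool:
    """Accept the login if any tolerated variant hashes to the stored value.
    (SHA-256 stands in here for a real password hash such as bcrypt.)"""
    return any(
        hashlib.sha256(v.encode()).hexdigest() == stored_hash
        for v in accepted_variants(typed)
    )

# Example: the stored password is "Secret123"; a user with caps lock on
# types "sECRET123", whose swapped-case form matches the stored password.
stored = hashlib.sha256(b"Secret123").hexdigest()
```

Because only these fixed transformations are tried, the scheme weakens the password space by at most a factor of three, which is why the program treats it as a usability trade-off rather than a reportable flaw.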


Google Vulnerability Reward Program Rules

We have long enjoyed a close relationship with the security research community. To honor all the cutting-edge external contributions that help us keep our users safe, we maintain a Vulnerability Reward Program for Google-owned web properties, running continuously since November 2010.

Services in scope

In principle, any Google-owned web service that handles reasonably sensitive user data is intended to be in scope. This includes virtually all the content in the following domains:

• *.google.com
• *.youtube.com
• *.blogger.com

Bugs in Google Cloud Platform, Google-developed apps and extensions (published in Google Play, in iTunes, or in the Chrome Web Store), as well as some of our hardware devices (Home, OnHub and Nest) will also qualify. See our Android Rewards and Chrome Rewards for other services and devices that are also in scope. On the flip side, the program has two important exclusions to keep in mind:

• Third-party websites. Some Google-branded services hosted in less common domains may be operated by our vendors or partners (this notably includes zagat.com). We can’t authorize you to test these systems on behalf of their owners and will not reward such reports. Please read the fine print on the page and examine domain and IP WHOIS records to confirm. If in doubt, talk to us first!
• Recent acquisitions. To allow time for internal review and remediation, newly acquired companies are subject to a six-month blackout period. Bugs reported sooner than that will typically not qualify for a reward.

Qualifying vulnerabilities

Any design or implementation issue that substantially affects the confidentiality or integrity of user data is likely to be in scope for the program. Common examples include:

• Cross-site scripting,
• Cross-site request forgery,
• Mixed-content scripts,
• Authentication or authorization flaws,
• Server-side code execution bugs.

New! In addition, significant abuse-related methodologies are also in scope for this program, if the reported attack scenario displays a design or implementation issue in a Google product that could lead to significant harm.

An example of an abuse-related methodology would be a technique by which an attacker is able to manipulate the rating score of a listing on Google Maps by submitting a sufficiently large volume of fake reviews that go undetected by our abuse systems. However, reporting a specific business with likely fake ratings would not qualify. Note that the scope of the program is limited to technical vulnerabilities in Google-owned browser extensions, mobile, and web applications; please do not try to sneak into Google offices, attempt phishing attacks against our employees, and so on. Out of concern for the availability of our services to all users, please do not attempt to carry out DoS attacks, leverage black hat SEO techniques, spam people, or do other similarly questionable things.

We also discourage the use of any vulnerability testing tools that automatically generate very significant volumes of traffic.

Non-qualifying vulnerabilities

New! Visit our Bug Hunter University page dedicated to common non-qualifying findings and vulnerabilities.

Depending on their impact, some of the reported issues may not qualify. Although we review them on a case-by-case basis, here are some of the common low-risk issues that typically do not earn a monetary reward:

• Vulnerabilities in *.bc.googleusercontent.com or *.appspot.com. These domains are used to host applications that belong to Google Cloud customers. The Vulnerability Reward Program does not authorize the testing of Google Cloud customer applications. Google Cloud customers can authorize the penetration testing of their own applications (read more), but testing of these domains is not within the scope of or authorized by the Vulnerability Reward Program.
• Cross-site scripting vulnerabilities in "sandbox" domains (read more). We maintain a number of domains that leverage the same-origin policy to safely isolate certain types of untrusted content; the most prominent example of this is *.googleusercontent.com. Unless an impact on sensitive user data can be demonstrated, we do not consider the ability to execute JavaScript in that domain to be a bug.
• Execution of owner-supplied JavaScript in Blogger. Blogs hosted in *.blogspot.com are no different from any third-party website on the Internet. For your safety, we employ spam and malware detection tools, but we do not consider the ability to embed JavaScript within your own blog to be a security bug.
• URL redirection (read more). We recognize that the address bar is the only reliable security indicator in modern browsers; consequently, we hold that the usability and security benefits of a small number of well-designed and closely monitored redirectors outweigh their true risks.
• Legitimate content proxying and framing. We expect our services to unambiguously label third-party content and to perform a number of abuse-detection checks, but as with redirectors, we think that the value of products such as Google Translate outweighs the risk.
• Bugs requiring exceedingly unlikely user interaction. For example, a cross-site scripting flaw that requires the victim to manually type in an XSS payload into Google Maps and then double-click an error message may realistically not meet the bar.
• Logout cross-site request forgery (read more). For better or worse, the design of HTTP cookies means that no single website can prevent its users from being logged out; consequently, application-specific ways of achieving this goal will likely not qualify. You may be interested in personal blog posts from Chris Evans and Michal Zalewski for more background.
• Flaws affecting the users of out-of-date browsers and plugins. The security model of the web is being constantly fine-tuned. The panel will typically not reward any problems that affect only the users of outdated or unpatched browsers. In particular, we exclude Internet Explorer prior to version 9.
• Presence of banner or version information. Version information does not, by itself, expose the service to attacks - so we do not consider this to be a bug. That said, if you find outdated software and have good reasons to suspect that it poses a well-defined security risk, please let us know.
• Email spoofing on Gmail and Google Groups. We are aware of the risk presented by spoofed messages and are taking steps to ensure that the Gmail filter can effectively deal with such attacks.
• User enumeration. Reports outlining user enumeration are not within scope unless you can demonstrate that we don't have any rate limits in place to protect our users.
• Bypassing the limit of accounts that can be verified with a given SMS number. We often receive reports about users being able to bypass our SMS limit for verifying accounts. There are actually two different quotas per number for account verification, one via 'SMS' and a different one via 'Call Me'.

Monetary rewards aside, vulnerability reporters who work with us to resolve security bugs in our products will be credited on the Hall of Fame. If we file an internal security bug, we will acknowledge your contribution on that page.

Reward amounts for security vulnerabilities

New! To read more about our approach to vulnerability rewards you can read our Bug Hunter University article here.

Rewards for qualifying bugs range from $100 to $31,337. The following table outlines the usual rewards chosen for the most common classes of bugs:

Category | Examples | Applications that permit taking over a Google account [1] | Other highly sensitive applications [2] | Normal Google applications | Non-integrated acquisitions and other sandboxed or lower priority applications [3]

Vulnerabilities giving direct access to Google servers

Remote code execution | Command injection, deserialization bugs, sandbox escapes | $31,337 | $31,337 | $31,337 | $1,337 - $5,000
Unrestricted file system or database access | Unsandboxed XXE, SQL injection | $13,337 | $13,337 | $13,337 | $1,337 - $5,000
Logic flaw bugs leaking or bypassing significant security controls | Direct object reference, remote user impersonation | $13,337 | $7,500 | $5,000 | $500

Vulnerabilities giving access to client or authenticated session of the logged-in victim

Execute code on the client | Web: Cross-site scripting; Mobile / Hardware: Code execution | $7,500 | $5,000 | $3,133.7 | $100
Other valid security vulnerabilities | Web: CSRF, Clickjacking; Mobile / Hardware: Information leak, privilege escalation | $500 - $7,500 | $500 - $5,000 | $500 - $3,133.7 | $100
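Read as data, the reward table above is a lookup keyed by bug category and application tier. The sketch below transcribes the table's amounts into that form; the tier labels, dictionary name, and function name are editorial shorthand, not part of Google's program terms.

```python
# Editorial sketch of the reward table as a lookup.
# Keys are (bug category, application tier); tier labels are shorthand for the
# table's column headings. A (low, high) tuple denotes a reward range in USD.
REWARD_TABLE = {
    ("remote code execution", "account takeover"): 31337,
    ("remote code execution", "highly sensitive"): 31337,
    ("remote code execution", "normal"): 31337,
    ("remote code execution", "acquisition"): (1337, 5000),
    ("file system or database access", "account takeover"): 13337,
    ("file system or database access", "highly sensitive"): 13337,
    ("file system or database access", "normal"): 13337,
    ("file system or database access", "acquisition"): (1337, 5000),
    ("logic flaw", "account takeover"): 13337,
    ("logic flaw", "highly sensitive"): 7500,
    ("logic flaw", "normal"): 5000,
    ("logic flaw", "acquisition"): 500,
}

def usual_reward(category: str, tier: str):
    """Return the usual reward (or range) listed for a category/tier pair."""
    return REWARD_TABLE[(category, tier)]

print(usual_reward("logic flaw", "highly sensitive"))  # 7500
```

The final amounts remain at the panel's discretion, so the lookup captures only the "usual rewards" the table advertises.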

[1] For example, for web properties this includes some vulnerabilities in Google Accounts (https://accounts.google.com).
[2] This category includes products such as Google Search (https://www.google.com and https://encrypted.google.com), Google Wallet (https://wallet.google.com), Google Mail (https://mail.google.com), Google Inbox (https://inbox.google.com), Google Code Hosting (https://code.google.com), Chromium Bug Tracker (https://bugs.chromium.org), Chrome Web Store (https://chrome.google.com), Google App Engine (https://appengine.google.com), Google Admin (https://admin.google.com), Google Developers Console (https://console.developers.google.com), and Google Play (https://play.google.com).
[3] Note that acquisitions qualify for a reward only after the initial six-month blackout period has elapsed.

Reward amounts for abuse-related methodologies

New! Rewards for abuse-related methodologies are based on a different scale and range from USD $100 to $5,000. The reward amount for these abuse-related bugs depends on the potential probability and impact of the submitted technique.

Probability [2] | Impact [1]: High | Impact: Medium | Impact: Low
High | Up to $5,000 | $1,337 to $3,133.7 | $500
Medium | $1,337 to $3,133.7 | $500 | $100
Low | $500 | $100 | HoF Credit
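The probability-by-impact matrix above amounts to a nine-cell lookup. The sketch below transcribes its cells; the key labels and function name are editorial shorthand, and the panel retains discretion over the final amount.

```python
# Editorial sketch of the abuse-methodology reward matrix.
# Keys are (probability, impact) assessments; values are the cell contents
# transcribed from the matrix above.
ABUSE_MATRIX = {
    ("high", "high"): "up to $5,000",
    ("high", "medium"): "$1,337 to $3,133.7",
    ("high", "low"): "$500",
    ("medium", "high"): "$1,337 to $3,133.7",
    ("medium", "medium"): "$500",
    ("medium", "low"): "$100",
    ("low", "high"): "$500",
    ("low", "medium"): "$100",
    ("low", "low"): "Hall of Fame credit",
}

def abuse_reward(probability: str, impact: str) -> str:
    """Return the listed reward band for a probability/impact assessment."""
    return ABUSE_MATRIX[(probability.lower(), impact.lower())]

print(abuse_reward("Medium", "High"))  # $1,337 to $3,133.7
```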

[1] The impact assessment is based on the attack's potential for causing privacy violations, financial loss, and other user harm, as well as the user-base reached.
[2] The probability assessment takes into account the technical skill set needed to conduct the attack, the potential motivators of such an attack, and the likelihood of the vulnerability being discovered by an attacker.

The final amount is always chosen at the discretion of the reward panel. In particular, we may decide to pay higher rewards for unusually clever or severe vulnerabilities; decide to pay lower rewards for vulnerabilities that require unusual user interaction; decide that a single report actually constitutes multiple bugs; or that multiple reports are so closely related that they only warrant a single reward.

We understand that some of you are not interested in money. We offer the option to donate your reward to an established charity. If you do so, we will double your donation - subject to our discretion. Any rewards that are unclaimed after 12 months will be donated to a charity of our choosing.

Investigating and reporting bugs

When investigating a vulnerability, please, only ever target your own accounts. Never attempt to access anyone else's data and do not engage in any activity that would be disruptive or damaging to your fellow users or to Google.

New! Visit our Bug Hunter University articles to learn more about sending good vulnerability reports.

If you have found a vulnerability, please contact us at goo.gl/vulnz. Please be succinct: the contact form is attended by security engineers and a short proof-of-concept link is more valuable than a video explaining the consequences of an XSS bug. If necessary, you can use this PGP key. Note that we are only able to answer technical vulnerability reports. Non-security bugs and queries about problems with your account should instead be directed to Google Help Centers.

Frequently asked questions

Q: What if I found a vulnerability, but I don't know how to exploit it?
A: We expect that vulnerability reports sent to us have a valid attack scenario to qualify for a reward, and we consider it a critical step when doing vulnerability research. Reward amounts are decided based on the maximum impact of the vulnerability, and the panel is willing to reconsider a reward amount based on new information (such as a chain of bugs, or a revised attack scenario).

Q: How do I demonstrate the severity of the bug if I'm not supposed to snoop around?
A: Please submit your report as soon as you have discovered a potential security issue. The panel will consider the maximum impact and will choose the reward accordingly. We routinely pay higher rewards for otherwise well-written and useful submissions where the reporter didn't notice or couldn't fully analyze the impact of a particular flaw.

Q: I found outdated software (e.g. Apache or WordPress). Does this qualify for a reward?
A: Please perform due diligence: confirm that the discovered software has any noteworthy vulnerabilities, and explain why you suspect that these features may be exposed and may pose a risk in our specific use. Reports that do not include this information will typically not qualify.

Q: Who determines whether my report is eligible for a reward?
A: The reward panel consists of the members of the Google Security Team. The current permanent members are Daniel Stelter-Gliese, Eduardo Vela Nava, Gábor Molnár, Krzysztof Kotowicz, Martin Straka, and Michael Jezierny. In addition, there is a rotating member from the rest of our team.

Q: What happens if I disclose the bug publicly before you had a chance to fix it?
A: Please read our stance on coordinated disclosure. In essence, our pledge to you is to respond promptly and fix bugs in a sensible timeframe - and in exchange, we ask for a reasonable advance notice.
Reports that go against this principle will usually not qualify, but we will evaluate them on a case-by-case basis.

Q: My report has not been resolved within the first week of submission. Why hasn't it been resolved yet?
A: Reports that deal with potential abuse-related vulnerabilities may take longer to assess, because reviewing our current defense mechanisms requires investigating how a real-life attack would take place, and reviewing the impact and likelihood requires studying the motivations and incentives of abusers of the submitted attack scenario against one of our products.

Q: I wish to report an issue through a vulnerability broker. Will my report still qualify for a reward?
A: We believe that it is against the spirit of the program to privately disclose the flaw to third parties for purposes other than actually fixing the bug. Consequently, such reports will typically not qualify.

Q: What if somebody else also found the same bug?
A: First in, best dressed. You will qualify for a reward only if you were the first person to alert us to a previously unknown flaw.

Q: My employer / boyfriend / dog frowns upon my security research. Can I report a problem privately?

A: Sure. If you are selected as a recipient of a reward, and if you accept, we will need your contact details to process the payment. You can still request not to be listed on our public credits page.

Q: What is bughunter.withgoogle.com?
A: The dashboard for participants in Google's VRP. It dynamically creates the hall of fame, i.e., the 0x0A and honorable mentions lists.

Q: Do I need a profile on bughunter.withgoogle.com to participate in the VRP?
A: No. You can participate in the VRP under the same rules without a profile. However, if you want your name to be listed in the 0x0A or the honorable mentions lists, you need to create a profile.

Q: Is the profile data publicly available?
A: Yes. The profile holds the data that is already available on our hall of fame, i.e., on the 0x0A and honorable mentions lists. You can always leave these fields blank.

Q: How is the honorable mentions list sorted?
A: The hall of fame is sorted based on the volume of valid bug submissions, the ratio of valid vs. invalid submissions, and the severity of those submissions.

Q: My account was disabled after doing some tests. How can I get my account restored?
A: We recommend that you create an account dedicated only to testing before beginning any tests on our products, since we cannot guarantee that you will get access back to your account if it is disabled due to your testing activities. If you accidentally used a non-test account or you suspect your personal account was disabled due to your testing, you can request to have your account restored by signing in to your Google Account and selecting Try to Restore.

Legal points

We are unable to issue rewards to individuals who are on sanctions lists, or who are in countries (e.g. Cuba, Iran, North Korea, Sudan and Syria) on sanctions lists. You are responsible for any tax implications depending on your country of residency and citizenship. There may be additional restrictions on your ability to enter depending upon your local law. This is not a competition, but rather an experimental and discretionary rewards program. You should understand that we can cancel the program at any time and the decision as to whether or not to pay a reward has to be entirely at our discretion. Of course, your testing must not violate any law, or disrupt or compromise any data that is not your own.

DoD Vulnerability Disclosure Policy

Accessed at https://hackerone.com/deptofdefense, September 15, 2017.

Purpose

This policy is intended to give security researchers clear guidelines for conducting vulnerability discovery activities directed at Department of Defense (DoD) web properties, and submitting discovered vulnerabilities to DoD.

Overview

Maintaining the security of our networks is a high priority at the DoD. Our information technologies provide critical services to Military Service members, their families, and DoD employees and contractors. Ultimately, our network security ensures that we can accomplish our missions and defend the United States of America.

The security researcher community regularly makes valuable contributions to the security of organizations and the broader Internet, and DoD recognizes that fostering a close relationship with the community will help improve our own security. So if you have information about a vulnerability in a DoD website or web application, we want to hear from you! Information submitted to DoD under this policy will be used for defensive purposes - to mitigate or remediate vulnerabilities in our networks or applications, or the applications of our vendors.

This is DoD's initial effort to create a positive feedback loop between researchers and DoD - please be patient as we refine and update the process. Please review, understand, and agree to the following terms and conditions before conducting any testing of DoD networks and before submitting a report. Thank you.

Scope

Any public-facing website owned, operated, or controlled by DoD, including web applications hosted on those sites.¹

How to Submit a Report

Please provide a detailed summary of the vulnerability, including: type of issue; product, version, and configuration of software containing the bug; step-by-step instructions to reproduce the issue; proof-of-concept; impact of the issue; and suggested mitigation or remediation actions, as appropriate. By clicking “Submit Report,” you are indicating that you have read, understand, and agree to the guidelines described in this policy for the conduct of security research and disclosure of vulnerabilities or indicators of vulnerabilities related to DoD information systems, and consent to having the contents of the communication and follow-up communications stored on a U.S. Government information system.

Guidelines

DoD will deal in good faith with researchers who discover, test, and submit vulnerabilities² or indicators of vulnerabilities in accordance with these guidelines:

• Your activities are limited exclusively to -

o Testing to detect a vulnerability or identify an indicator related to a vulnerability;³ or
o Sharing with, or receiving from, DoD information about a vulnerability or an indicator related to a vulnerability.

• You do no harm and do not exploit any vulnerability beyond the minimal amount of testing required to prove that a vulnerability exists or to identify an indicator related to a vulnerability.
• You avoid intentionally accessing the content of any communications, data, or information transiting or stored on DoD information system(s) - except to the extent that the information is directly related to a vulnerability and the access is necessary to prove that the vulnerability exists.
• You do not exfiltrate any data under any circumstances.
• You do not intentionally compromise the privacy or safety of DoD personnel (e.g. civilian employees or military members), or any third parties.
• You do not intentionally compromise the intellectual property or other commercial or financial interests of any DoD personnel or entities, or any third parties.
• You do not publicly disclose any details of the vulnerability, indicator of vulnerability, or the content of information rendered available by a vulnerability, except upon receiving explicit written authorization from DoD.
• You do not conduct denial of service testing.
• You do not conduct social engineering, including spear phishing, of DoD personnel or contractors.
• You do not submit a high volume of low-quality reports.
• If at any point you are uncertain whether to continue testing, please engage with our team.

What You Can Expect From Us

We take every disclosure seriously and very much appreciate the efforts of security researchers. We will investigate every disclosure and strive to ensure that appropriate steps are taken to mitigate risk and remediate reported vulnerabilities.

DoD has a unique information and communications technology footprint that is tightly interwoven and globally deployed. Many DoD technologies are deployed in combat zones and, to varying degrees, support ongoing military operations; the proper functioning of DoD systems and applications can have a life-or-death impact on Service members and international allies and partners of the United States. DoD must take extra care while investigating the impact of vulnerabilities and providing a fix, so we ask your patience during this period. DoD remains committed to coordinating with the researcher as openly and quickly as possible. This includes:

• Within three business days, we will acknowledge receipt of your report.
• DoD's security team will investigate the report and may contact you for further information.
• To the best of our ability, we will confirm the existence of the vulnerability to the researcher and keep the researcher informed, as appropriate, as remediation of the vulnerability is underway.

We want researchers to be recognized publicly for their contributions, if that is the researcher's desire. We will seek to allow researchers to be publicly recognized whenever possible. However, public disclosure of vulnerabilities will only be authorized at the express written consent of DoD. Information submitted to DoD under this policy will be used for defensive purposes - to mitigate or remediate vulnerabilities in our networks or applications, or the applications of our vendors.

Legal

You must comply with all applicable Federal, State, and local laws in connection with your security research activities or other participation in this vulnerability disclosure program. DoD does not authorize, permit, or otherwise allow (expressly or impliedly) any person, including any individual, group of individuals, consortium, partnership, or any other business or legal entity to engage in any security research or vulnerability

or threat disclosure activity that is inconsistent with this policy or the law. If you engage in any activities that are inconsistent with this policy or the law, you may be subject to criminal and/or civil liabilities.

To the extent that any security research or vulnerability disclosure activity involves the networks, systems, information, applications, products, or services of a non-DoD entity (e.g., other Federal departments or agencies; State, local, or tribal governments; private sector companies or persons; employees or personnel of any such entities; or any other such third party), that non-DoD third party may independently determine whether to pursue legal action or remedies related to such activities. If you conduct your security research and vulnerability disclosure activities in accordance with the restrictions and guidelines set forth in this policy, (1) DoD will not initiate or recommend any law enforcement or civil lawsuits related to such activities, and (2) in the event of any law enforcement or civil action brought by anyone other than DoD, DoD will take steps to make known that your activities were conducted pursuant to and in compliance with this policy. DoD may modify the terms of this policy or terminate the policy at any time.

¹ These websites constitute "information systems" as defined by 6 U.S.C. 1501(9).
² Vulnerabilities throughout this policy may be considered "security vulnerabilities" as defined by 6 U.S.C. 1501(17).
³ These activities, if applied consistent with the terms of this policy, constitute "defensive measures" as defined by 6 U.S.C. 1501(7).
