TOWARD ETHICAL APPLICATIONS OF ARTIFICIAL INTELLIGENCE: UNDERSTANDING CURRENT USES OF FACIAL RECOGNITION TECHNOLOGY AND ADVANCING BIAS MITIGATION

A Thesis submitted to the Faculty of the Graduate School of Arts and Sciences of Georgetown University in partial fulfillment of the requirements for the degree of Master of Arts in Communication, Culture & Technology

By

Alie J Fordyce, B.A.

Washington, D.C. April 20, 2021

Copyright 2021 by Alie J Fordyce All Rights Reserved

TOWARD ETHICAL APPLICATIONS OF ARTIFICIAL INTELLIGENCE: UNDERSTANDING CURRENT USES OF FACIAL RECOGNITION TECHNOLOGY AND ADVANCING BIAS MITIGATION

Alie J Fordyce, B.A.

Thesis Advisor: Martin Irvine, Ph.D.

ABSTRACT

Facial recognition technology (FRT) is a biometric software-based tool that mathematically maps and analyzes an individual’s facial features for the purpose of making identifying conclusions from photographs and video. FRT is being implemented throughout society at a rapid rate, as the tool offers significant economic benefits for identification processes and policing. In spite of FRT’s benefits, its broadening implementation comes with significant risk to society, as the potential for misuse, identification errors, and bias with FRT can lead to large-scale violations of individuals’ civil and human rights. The key risks of using FRT come from two sources. First, FRT uses curated facial datasets for training, and it has been shown that labeling errors and a lack of diverse facial demographics in those datasets lead to poorly trained, error-prone outcomes with regard to underrepresented groups. Second, there are only limited regulatory frameworks and ethical standards of use for FRT, leading to situations where FRT is either misused or extended beyond its practical utility, resulting in violations of individual privacy and legal assembly rights and the perpetuation of cultural bias. The legal and ethical issues surrounding FRT have come under scrutiny in recent years following increased public awareness from mainstream media reports on the use of FRT in large-scale protest events and in law enforcement use cases. Currently, there are a few examples of state-level regulation and industry self-regulation through guiding ethical principles that restrict and

monitor the use of FRT in both government and industry applications. These minimal and isolated forms of regulation leave tremendous gaps in the effective and ethical implementation of FRT, leaving ample room for unregulated and unethical use cases. This thesis primarily aims to advance promising bias mitigation strategies. The key recommendations made are: 1) education for users and increased engagement by stakeholders, 2) comprehensive guidelines that can lead to federal regulation, and 3) a push towards explainable AI. FRT regulation has become a controversial and increasingly challenging task; regulation is urgently needed in order to halt the negative consequences of the technology as it currently exists.

ACKNOWLEDGMENTS

This thesis was made possible with the help of several special people and with the generous resources provided to me by the Communication, Culture & Technology program.

Thank you to my thesis advisor, Dr. Martin Irvine, and my faculty advisor and second reader, Dr. Jeanine Turner, for all the guidance and advice. Thank you to Ai-Hui Tan, CCT program coordinator, and Dr. Matthew Tinkcom, CCT program director, for keeping everyone on track.

Thanks to my family, friends, and peers for a genuine interest in and tireless support of my endeavors.

A special thanks to Lyra Katzman for coding assistance and to my supervisor at the Wilson Center Science and Technology Innovation Program, Dr. Anne Bowser, for exposing me to new ways of thinking.

TABLE OF CONTENTS

Introduction: Toward Ethical Use of Facial Recognition Technology

Chapter 1: State of the Problem

1.1 Facial Recognition Technology and Bias

1.2 What is Facial Recognition Technology and How is it Used?

1.3 What Are the Key Issues with Facial Recognition Technology?

1.4 Sentiment Analysis and Public Perception of Facial Recognition Technology

Chapter 2: Current Approaches to Ethics in Artificial Intelligence for Facial Recognition

2.1 Current Approaches to Ethics

2.2 Applications of Facial Recognition Technology

2.3 State-Level Regulation and Ethical Principles

2.4 Big Tech Industry Response to Ethics in Artificial Intelligence

2.5 Filling Regulatory Gaps

Chapter 3: Paving a Way Forward in Achieving Ethical Technologies

3.1 Current Gaps in Regulation and Policy

3.2 Bridging Regulatory Gaps

3.3 Future Research

Conclusion: Ethical Implementation of Facial Recognition Technology

Appendix A. Python Code Used to Fetch API Data

Appendix B. Python Code Using Twitter API Data to Fetch Tweets (2020)

Appendix C. Python Code Using Vader Sentiment Analysis to Conduct Sentiment Analysis on Tweets (2020)

Appendix D. Total Number of Tweets Including the Term “Facial Recognition” Per Week (2019)

Appendix E. Total Number of Tweets Including the Term “Facial Recognition” Per Week (2020)

Appendix F. Total Number of Tweets Including the Term “Facial Recognition” Per Week (2021)

Bibliography

LIST OF FIGURES

Figure 1. GAO’s Number of Granted Patents Associated with Facial Recognition Technology by Year, 2015-2019

Figure 2. Conceptual Organization of Facial Recognition Surveillance System

Figure 3. Variety of Deep Learning Architectures Used in Face Recognition

Figure 4. Basic Convolutional Neural Network Architecture

Figure 5. Graph Showing Total Weekly Tweets Including the Term ‘Facial Recognition’ (2020)

Figure 6. Fight for the Future’s Map Showing Where Facial Recognition Surveillance Is Happening and Where There Are Local and State Efforts in Place to Restrict Its Use in the United States

LIST OF TABLES

Table 1. Comparing Search Trends Data (2020-2021)

Table 2. Weekly Total Number of Tweets Including the Term ‘Facial Recognition’ (2020)

Table 3. Sample of Tweets Fetched from Week 3 and Week 24 (2020)

Table 4. The Average Polarity of Tweets from Each Week (2020)

Table 5. Examples of Current and Potential Applications of Facial Recognition Technology Use in Three Categories along with the Advantages and Areas for Concern

INTRODUCTION: TOWARD ETHICAL USE OF FACIAL RECOGNITION TECHNOLOGY

Computer-based algorithms have become essential tools for information-based societies, such that individuals in modern society engage with algorithms, mostly unknowingly, on a daily basis. Private and public institutions and local and national governments have all become reliant on algorithmic decision-making systems.1 Artificial intelligence (AI) and machine learning (ML) present the opportunity to bring economic and social benefits unmatched by any other technology since the rise of the Internet.2 Automated reasoning is the subset of computer science that imitates human intelligence; this is the technology category that face recognition technology (FRT) systems fall under, which includes both human-based and rule-based decision-making systems. While the potential for societal benefit is significant, as AI innovation continues to accelerate, there are also significant ethical and civil rights risks that are amplified by the broad utility and increasing scale of FRT.3 Hence, there is a strong need to increase awareness of and transparency about FRT and its use, and a need for appropriate regulation and ethical use guidelines to control risks while harvesting the social and economic benefits FRT promises.

The key issues in the emerging use of FRT lie in two areas: first, the accuracy and effectiveness of the technology, and second, the development of robust and effective regulation to ensure uses of FRT avoid ethical and civil rights violations while delivering societal benefits. The accuracy issue results when narrowly trained algorithms are extrapolated beyond their practical utility, or when poorly curated, mislabeled, and incomplete datasets lead to poorly performing FRT algorithms with insufficient identification success metrics in implemented FRT systems. The issues

1 Tsamados et al., “The Ethics of Algorithms.” 2 Bowser, “Beyond Bans.” 3 Tsamados et al., “The Ethics of Algorithms.”

of ethical and legal use have come to the fore, as Bowyer notes, since face recognition systems and video surveillance gained attention and attracted controversy due to the introduction of FRT into national security systems after the 9-11 attacks in the US.4 There is a fundamental conflict in regulation and technology implementation: both strive to abide by the United States

Constitution, which promises to protect the individual rights of citizens against governmental oppression and to uphold the right to privacy, freedom of expression, and peaceful assembly, while at the same time working to protect the public against increasing security and safety risks.5

Biometric technology offers a form of scalable surveillance in identity verification that, while valuable in protecting society, brings a host of ethical concerns along with it. There is an urgent need for cohesive bias mitigation strategies and legal and ethical frameworks to ensure FRT is both accurate and used in an ethical and equitable manner in government and industry. This thesis hypothesizes that public ignorance of the scale and impact of this FRT misuse risk is leading to a lack of urgency among industry and policymakers to implement bias mitigation strategies and regulatory systems. The speed with which FRT is being implemented and the scalable nature of the technology mean that the risks of unmitigated bias and unethical use are growing rapidly, possibly risking a societal backlash that would limit the positive benefits FRT can offer. The issues related to FRT’s implementation and societal acceptance are explored by conducting a meta-analysis on the research topic, interviewing industry experts, and collecting data on public sentiment regarding FRT.

Despite a surge in the Big Tech industry response to combat issues of bias that have become deeply embedded in today’s consumer and urban technologies, the failure to acknowledge and correct the inherently flawed system persists. When Google fired AI ethicist Dr.

4 Bowyer, “Face Recognition Technology.” 5 Bowyer.

Timnit Gebru, cofounder of the Black in AI affinity group and champion of diversity in the tech industry – a move that was covered widely in the news at the time – they faced intense backlash, with outcries heard loudly in both the ethical technology community and the broader public.6 Shortly after, a group was established – Google Walkout for Real Change – as a form of protest striving to make the future of AI ethical.7 To many, that Google would terminate a leading ethicist demonstrated that despite the many strides that Big Tech has made to build ethics into its technologies, the homogenous and biased groundwork on which Silicon Valley was built would live on, with no clear end or change in sight.8 Gebru’s research directly preceding her termination discussed ethical issues with recent advances in AI technology that focus on the use of language,9 highlighting the tension between the human consequences of AI development and implementation and the conflicts of interest for profit-seeking companies that underwrite the majority of leading

AI research.10 This is one of many recent media stories that have created strong waves in the technology industry and the broader public, drawing attention to where the gaps lie in ethics and regulation, as well as highlighting the bias and systemic racism that persist in a tainted system.

This thesis seeks to shed light on some of these technological black boxes and the ethical issues surrounding them, focusing specifically on facial recognition technology.

The conversation concerning the tech industry’s struggle with bias is not new. The crux of the issue in building more ethical technologies, broadly speaking, was aptly put by Yael Eisenstat, who wrote, “Humans train the machine-learning and AI systems at Facebook, Google, and Twitter

6 Hao, “We Read the Paper That Forced Timnit Gebru out of Google.” 7 “The Future Must Be Ethical: #MakeAIEthical.” 8 Simonite, “A Prominent AI Ethics Researcher Says Google Fired Her.” 9 Bender et al., “On the Dangers of Stochastic Parrots.” 10 Simonite, “A Prominent AI Ethics Researcher Says Google Fired Her.”

to filter out bias. The problem: they don’t know what they’re looking for.”11 In short, the tech industry is widely criticized as being engineered by an overly homogenous group, despite being used by a much broader and more heterogeneous population. Technology is trained to perform, but neither for, nor by, the vast majority of its users. Ethicists have warned that AI and ML systems are subject to using mislabeled and/or unrepresentative data, which leads to biased and inaccurate outputs.12 AI is a form of statistical methodology, and its performance therefore reflects the bias inherent in the training data used for predictions and interpretations of new data. Yet do the engineers who design the algorithms and supervise the training really know what bias in technology means, let alone how to prevent it? Do tech companies train their employees on cognitive biases and ways to combat them?13 It has become routine for companies to reassure the public by claiming that they do not intentionally bias their systems, while saying nothing of their efforts to anticipate bias. Many do not take the necessary precaution of looking for problems with robust methodological systems before they are applied; instead, the conversation about bias in their systems remains largely an open public debate, rather than becoming central to in-house technology design methodologies, quality control, and use standards that are auditable and sufficient for safe use.

The key research studying unethical technology continues to uncover systemic racism that persists through our modern systems and higher rates of technological inaccuracy that disproportionately affect minority groups and women. Joy Buolamwini, author and founder of the

Algorithmic Justice League, published a pivotal paper with Dr. Timnit Gebru entitled Gender

11 Eisenstat, “The Real Reason Tech Struggles With Algorithmic Bias.” 12 Eisenstat. 13 Eisenstat.

Shades, which focuses on issues of facial recognition in terms of racial and gendered biases.14 The paper helped to publicize facial recognition’s uneven ability to accurately detect different types of faces, namely its consistent inability to detect the faces of women of color.15 In the same vein,

Shalini Kantayya’s work on Coded Bias, a documentary she created and directed, has further publicized Buolamwini’s work on Gender Shades, helping to broadcast the implications of unethical technologies to the broader public. Buolamwini and Kantayya’s work, and others like it, create an important precedent for researchers, policymakers, and consumers alike, cutting a path to reexamine the technologies the public uses on a daily basis and to question how and why they are made, including the ultimate destination of the data collected from us as users.

Further, FRT also carries a greater potential for conflict in society because images of the face represent a culturally significant and unique ‘self-identity’ characteristic that many individuals view as personal and private. Although biometric software data is not new or unique to facial recognition, because faces are central to human social interaction, technology using FRT is fraught with suspicion of misuse, unlike fingerprints or retinal scans, which are less relatable. In certain

Native American tribes, for example, having your photograph taken is considered to have stolen your soul, which speaks to the larger issues of consent and misappropriation of individual rights and the violation of societal taboos.16 However, even for those with less specified beliefs, the appearance of one’s own face is for most people a deeply individual attribute with inextricable ties to identity and ownership, making facial recognition technology a particularly sensitive form of data collection and the question of appropriate consent and regulation all the more urgent. Intensifying the dilemma, data capturing and facial imaging are engineered with

14 Buolamwini and Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” 15 Buolamwini and Gebru. 16 Moraes, “Capturing Souls.”

capabilities for recognition that humans do not possess. Indeed, faces are the most visibly unique markers differentiating one person from the next, and humans use that visual input to recognize one another. One human, with the naked eye, cannot recognize another by examining a fingerprint or retina in an instant. Facial recognition is a deeply embedded human ability, and there are dangers in mechanizing that process, including the inevitable bias that comes with the inability to decouple faces from the biases that humans carry. Walking the line of those dangers is the modern implementation of facial recognition technology, whose development marks a stark transformation of past systems of facial identification and the creation of an active surveillance and data collection machine with significant cultural implications.

This thesis not only helps readers better understand the landscape and implications of facial recognition, but also presents and advances research and recommendations on bias mitigation strategies, while presenting my own take on the most important steps needed to move forward ethically and responsibly. In order to formulate some original thoughts on the matter, part of the thesis research involved informal interviews with industry and academic experts, as well as a sentiment analysis of public discourse concerning ‘facial recognition’ using historical Twitter data. That, in combination with a meta-analysis of the existing literature, forms the bulk of this thesis.

We are at a pivotal time in which technology is becoming increasingly powerful, and we have a window of opportunity to create policy and strategies to ensure that AI not only creates opportunity and strives to better livelihoods across all sectors, but does so in an equitable and sustainable manner in which power is disseminated across multiple stakeholders under proper regulation. I draw on the work of researchers like Dr. Timnit Gebru, Joy Buolamwini, Dr. Alex

Hanna, Dr. Fei-Fei Li, Dr. Ben Shneiderman, Dr. Kate Crawford, and Dr. Cathy O’Neil for this thesis; they are to thank for much of the research attempting to expose and dismantle the bias that is seeping through our technological systems in order to strive for a more ethical future.

A key criticism this thesis makes concerns the current passivity with which we engage with technology.

“We’re submerged, all of us. You, me, the children, our friends, their children, everybody else. Sometimes we get out: for lunch, to read or to tan, never for very long. Then we all climb back into the metaphor. The Lazy River is a circle, it is wet, it has an artificial current. Even if you don’t move you will get somewhere…”17

Zadie Smith, author of “The Lazy River,” an essay that stitches together the implications of man’s varied but pervasive relationship with technology, likens that relationship to a “lazy river,”18 which operates according to a system determined by technology rather than by its users.

It is a system Smith deems tainted and broken: in “Lazy River” she argues that, unbeknownst to us, humans are riding down an inevitable and autonomous river of modernism, questioning neither our destination, nor the power the current exerts over our decisions along the way. As Smith grapples with this relationship shared between society and technology, the primary message she seeks to convey is that technology has been used to make individuals feel that they have “no choice and no freedom [but also that] there’s no way of living without [it]”.19 Smith also touches on how humans are deceived by the systems that dictate our individual relationships with technology: she writes, “You go through every website saying ‘yes’ because you can’t possibly read 50,000 pages of [small print]. But every time you say yes, you are submitting to a system which makes you

17 Smith, “The Lazy River.” 18 Smith. 19 Dundas, “Zadie Smith on Fighting the Algorithm.”

manifestly unfree”.20 Smith reads such exposure’s principal influence on our society as the overpromotion of individualization: she sees as a landmark in the riverbed the point at which we are all driven to think and act solely in “the first-person voice: me, me, me, me, me”.21 Zadie Smith creates an allegory for passivity in the modern world; this thesis criticizes that passivity and strives to move beyond it to engagement.

In order to better grasp this topic, this thesis provides a literature review and state of the problem, outlines current ethical approaches and where the gaps exist, and finally makes recommendations toward a future of ethical technology development and implementation.

20 Dundas. 21 Dundas.

CHAPTER 1: STATE OF THE PROBLEM

1.1 Facial Recognition Technology and Bias

Bias in facial recognition technology (FRT) is the result of narrowly trained algorithms, built on poorly curated and maintained datasets, being extrapolated beyond practical use. The fact that FRT is being widely implemented and is technically easy to scale means that the perpetuation of long-term bias is being amplified. Facial recognition, although an extremely widely used and recognized technology, harbors many misconceptions surrounding its use and capabilities. This chapter aims to uncover some of the mystery behind where and how FRT is being used. Further, it examines public perception of the technology, especially in association with broadly publicized events linked to the technology. Over recent years, the use of facial recognition technology has become increasingly contentious due to multiple factors: the lack of regulation and standards guiding legal and ethical use, nonconsensual use of personal data, and routine use of the technology in the course of violating civil rights, including privacy and peaceable assembly. Multinational companies and governments play an especially important role in both the controversial and the ethical implementation of the technology.

Joy Buolamwini’s groundbreaking research, Gender Shades: Intersectional Accuracy

Disparities in Commercial Gender Classification, done in conjunction with Dr. Timnit Gebru, presents a new approach to inclusive product testing in AI, and specifically FRT. Despite some criticisms regarding methodological issues of Buolamwini’s research,22 it is important in elevating the conversation and highlighting the ways in which machine learning algorithms can discriminate based on classes like race and gender, further underscoring that darker-skinned females are the

22 Bowser, “Beyond Bans.”

most misclassified group. They found that error rates for lighter-skinned males were 0.8% and for darker-skinned females 34.7%.23 The research presents an approach to evaluating bias present in automated facial analysis and algorithms and introduces a new facial analysis dataset that more appropriately includes a balance of genders and skin types.24 Ultimately, Buolamwini and Gebru shed light on the urgency with which these inaccuracies should be addressed if companies are to employ and build this technology fairly, transparently, and accountably.25 Gender Shades and similar research underscore how high the stakes are if these issues are not urgently and broadly addressed: “we risk losing the gains made with the civil rights movement and women’s movement under the false assumption of machine neutrality. Automated systems are not inherently neutral.”26 This false assumption of machine neutrality is aptly countered by the “coded gaze” of the individuals who have the power to mold technology.27

Coded Bias, a documentary based on Joy Buolamwini’s MIT research Gender Shades, creates an accessible medium for the public to better understand the fundamental issues in FRT.28

Buolamwini demonstrates the inaccuracies of the technology for minority groups and women and explains that the root cause of these inaccuracies stems from the training datasets the algorithms use to form a decision basis. In looking at the training datasets of facial recognition systems,

Buolamwini found that the majority contained men and lighter-skinned individuals. It is here that the issues of bias in FRT become overwhelmingly clear; if the algorithms do not have rich, diverse and accurate datasets to learn from, the outputs will not be accurate. Gender Shades and subsequently Coded Bias are pivotal to the momentum gained in the efforts towards ethical

23 Buolamwini and Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” 24 Buolamwini and Gebru. 25 Buolamwini and Gebru. 26 “Overview ‹ Gender Shades — MIT Media Lab.” 27 “Overview ‹ Gender Shades — MIT Media Lab.” 28 Kantayya, Coded Bias.

technology. Without this kind of research highlighting the unjust and unethical uses of FRT and the systemic bias FRT is vulnerable to, there would be less awareness of the potential for deeply negative impacts on society and its civil and human rights, especially on the lives of minorities and other marginalized groups.

In Coded Bias, Dr. Meredith Broussard – data journalist and author of Artificial

Unintelligence – introduces the concept of artificial intelligence (AI) and its origin: it all started at a meeting in the Dartmouth math department in 1956, among a relatively homogenous group of mathematicians who coined the term “AI.”29 Through Coded Bias, Broussard highlights the misconceptions of AI in popular culture, namely “science fiction…everything in Hollywood”,30 which presents AI as human-like robots that have the power to take over the world. In reality, AI today is narrow AI: the mathematics of pattern recognition and correlation. The

Hollywood version of AI skews what people fundamentally understand the meaning of AI to be and creates an undue air of mystery that clouds the real issues with the technology that we should be focused on. Broussard’s book, Artificial Unintelligence, warns that we should never assume computers are always right and that there is, and should be, a limit to what we do with technology.

Broussard goes on to argue that our collective enthusiasm for computers and digitization has led us to become trapped in a network of poorly designed systems. In doing so, she argues against technochauvinism – the idea that technology is the superior solution in all cases.31 Ultimately,

Broussard makes the point that the better we understand the limits of technology, the more able we are to make choices about what we can and should do with it. Further, Dr. Cathy O’Neil, author of Weapons of Math Destruction, has established herself as another pioneer among the researchers

29 Kantayya. 30 Kantayya, pt. 00:03:00. 31 Broussard, Artificial Unintelligence.

warning against an algorithmic decision-making takeover. O’Neil furthers the notion by hypothesizing that the mathematics of AI, used with large datasets, is being used as a shield for corrupt practices.32 Similarly to Broussard’s work, O’Neil writes on the interconnectedness of these “weapons of math destruction” (WMDs):

“Promising efficiency and fairness, they distort higher education, drive up debt, spur mass incarceration, pummel the poor at nearly every juncture, and undermine democracy. It might seem like the logical response is to disarm these weapons, one by one. The problem is that they’re feeding on each other. Poor people are more likely to have bad credit and live in high-crime neighborhoods, surrounded by other poor people. Once the dark universe of WMDs digests that data, it showers them with predatory ads for subprime loans or for-profit schools. It sends more police to arrest them, and when they’re convicted it sentences them to longer terms. This data feeds into other WMDs, which score the same people as high risks or easy targets and proceed to block them from jobs, while jacking up their rates for mortgages, car loans, and every kind of insurance imaginable… How do we regulate the mathematical models that run more and more of our lives? I would suggest that process begin with the modelers themselves.”33

O’Neil both reiterates this self-perpetuating cycle and suggests that the solution to correcting these systems of oppression can and should start with the modelers and architects of the systems themselves.

Because of the groundbreaking research of Buolamwini, Broussard, O’Neil, and others, there has been a growing push towards ethical use of AI and, as a subcategory, FRT. The push for transparency in the algorithmic modeling of machine learning has garnered the most momentum in efforts towards ethical artificial intelligence and is now a fundamental principle in data processing as presented by the General Data Protection Regulation.34 Although terms like AI, trustworthiness, and transparency are difficult to define, we still recognize the relationship these terms share and how they are important to the ethical use of technology. AI’s primary importance in discussing

32 O’Neil, Weapons of Math Destruction. 33 O’Neil, 199–205. 34 Felzmann et al., “Transparency You Can Trust.”

transparency is its connection to automated decision-making processes and their ability to function independently of human intervention.35 The General Data Protection Regulation (GDPR) defines transparency in both prospective and retrospective terms: “prospective transparency means that individuals must be informed about the ongoing data processing before such processing takes place and is therefore linked to the information duties of the GDPR,” while there is also “a retrospective transparency element which refers to the possibility to trace back how and why a particular decision was reached”.36 Further, Felzmann et al. highlight how user autonomy is a foundational concept in achieving ethics. Felzmann et al. state that achieving a truly autonomy-respecting system of informed consent requires efforts extending beyond the basic notice and trivialized consent currently used in ‘contemporary informational technologies’.37 Felzmann et al. also present important limits of transparency in AI beyond how and what information should be provided to users, suggesting that transparency requirements should be tailored to individual stakeholder groups because of how these different stakeholder groups (users, developers, regulators, deployers, etc.) interact with and are impacted by the technology. The issue of transparency holds particular significance in the scope of FRT, especially when understanding the complexity of stakeholders that are subjected to the technology.

Bowyer presents the September 11, 2001 terrorist attacks as a pivotal moment of public reckoning and attention towards video surveillance and face recognition systems. FRT offers an opportunity for increased national security, while prompting fear of an “Orwellian invasion of privacy”.38 The 9-11 attacks caused a surge in the use of FRT as a protective measure, primarily in airports and other mass gathering public spaces, and brought attention to the use of biometrics

35 Felzmann et al., 1–2. 36 Felzmann et al., 2–3. 37 Felzmann et al., 4. 38 Bowyer, “Face Recognition Technology,” 9.

as a means of identity verification. Bowyer also directs attention to the fact that all codes of ethics guiding computing-oriented professions are concerned with privacy, while there are essentially no legal guidelines restricting the use of FRT in public spaces.39 In the 1759 Historical

Review of Pennsylvania, Benjamin Franklin is quoted: “They that can give up liberty to obtain a little temporary safety deserve neither liberty or safety.”40 Bowyer considers an alternative version of this quote when thinking about FRT in public spaces: “They that insist on keeping an inessential liberty at the cost of a large and permanent threat to safety…” In other words, we must consider whether FRT works well enough to be deployed in our public spaces, which elements of privacy we consider essential, and what level of safety we can achieve through implementation of this technology. Assessing this balance is essential to deciding whether the utility for safety and security is worth the risk to privacy.41

FRT is an important technological feat, spanning multiple disciplines, that is – in part – a byproduct of the widespread automation and digitization driven by the Fourth

Industrial Revolution. The use of commercial facial recognition technology has seen a considerable increase over the last five years. As cited in a U.S. Government Accountability Office report from 2015,

FRT’s primary use at that time involved photographic identification in social media applications and providing secure access in lieu of passwords,42 whereas more recently, in a 2020 report, FRT’s uses span from payment authorization, tracking and monitoring event attendance, and observation of students and/or employees, to enhanced secure access.43 Privacy and ethical concerns, vocalized primarily by advocacy groups such as Fight for the Future, have heightened following the

39 Bowyer, 16–17. 40 Bowyer, 9. 41 Bowyer, 18. 42 “FACIAL RECOGNITION TECHNOLOGY: Commercial Uses, Privacy Issues, and Applicable Federal Law.” 43 “FACIAL RECOGNITION TECHNOLOGY: Privacy and Accuracy Issues Related to Commercial Uses.”

increased commercial use of FRT by private entities in recent years.44 The most prominent ethical concerns involve the higher error rates associated with the technology’s accuracy among certain demographic groups, namely those of historically marginalized groups.45 Figure 1, below, depicts the increased use of FRT in the private sector as demonstrated by patent grants given to companies in industries spanning technology, retail, government, insurance, and telecommunications, among others.46

Figure 1. GAO’s Number of Granted Patents Associated with Facial Recognition Technology by Year, 2015-201947 Source: GAO-20-522

This demonstrates a clear increase in interest and investment in developing the technology behind FRT and its ability to collect and process data for applications like surveillance, security, and search. With increased use of the technology comes an increased need to regulate and monitor its impacts and consequences.

44 “FACIAL RECOGNITION TECHNOLOGY: Privacy and Accuracy Issues Related to Commercial Uses.” 45 “FACIAL RECOGNITION TECHNOLOGY: Privacy and Accuracy Issues Related to Commercial Uses.” 46 “FACIAL RECOGNITION TECHNOLOGY: Privacy and Accuracy Issues Related to Commercial Uses.” 47 “FACIAL RECOGNITION TECHNOLOGY: Privacy and Accuracy Issues Related to Commercial Uses.”

As part of my research, I spoke to various industry experts from the private sector, the public sector, and academia, which helped identify key issues for consideration. These discussions took the form of informal interviews to facilitate open conversation about AI in industry and to better understand the opportunities and limitations of facial recognition from various expert perspectives. A challenge I had in talking to certain industry experts was conflicts with company non-disclosure policies, but in general I found most people were excited to talk to the “next generation” of AI ethics research.48 Some highlights from these conversations drove me to think about effective policy recommendations and moving ‘beyond bans’, about the scale of technology and its consequences for impact and proper regulation, and about controlling unintended uses of data and data collection. I draw heavily on research by Dr. Mark

Nitzberg and Dr. Anne Bowser in advancing bias mitigation recommendations, and I gained insight from Dr. Douglas Eck, a computer scientist at Google, and Chip Hall, a Managing Director at

Google, both in understanding the ethical issues and limitations of the technology in practice.49

Engaging with these experts was helpful in understanding industry and academic sentiment towards FRT, what the most pressing issues surrounding its implementation and use were from their perspective, and why it serves as a valuable technology. Some state-level regulation and safeguarding through guidelines and principles already exist in the private sector, but there is still no federal regulation in place that controls how and where we use FRT. It is important to further understand the way the technology works in order to better understand the issues that can arise from its use.

48 Nitzberg, Informal Interview about Concerns with Facial Recognition Technology. 49 Eck, Informal Interview about Human-Centered Design from Google Computer Science Perspective; Hall, Informal Interview about Data Privacy Concerns from a Business Perspective.

1.2 What is Facial Recognition Technology and How is it Used?

There are various machine learning techniques used in facial recognition. To understand the ethical implications of FRT, it is important to understand how it works. Biometrics is a term for identifying people based on some aspect of an individual’s biology: facial features, retina scans, and fingerprints are examples of biometric measurements.50 Facial recognition is a biometric software category that mathematically maps and analyzes an individual’s facial features, stored in digital format as a faceprint, using machine learning techniques. After a face is detected in the computer processing of an image or video, the image of the face is cropped and used as a probe into a facial data gallery to identify potential matches.51 Biometric software denotes the use of statistical analysis of any unique physiological or behavioral characteristics – other examples of physiological characteristics include fingerprints, iris recognition, voice recognition, and

DNA matching – to confirm identity. FRT is a process of pattern recognition trained on curated image datasets (for example ImageNet, a large database of over 14 million images): the machine learning algorithms are trained to pick out specific details of the face – distance between the eyes, shape of the chin – and convert these features into mathematical representations that are compared to other faces collected in datasets. The basic operation of FRT is relatively simple: a camera monitors a physical space, and instead of a person continuously watching the camera footage, a computer monitors the video and creates an alert if an ‘interesting person’ is in view.52 Figure 2, below, displays a conceptual organization of how face recognition surveillance systems work.

50 Bowyer, “Face Recognition Technology,” 10. 51 Bowyer, 10–11. 52 Bowyer, 10.


Figure 2. Conceptual Organization of Facial Recognition Surveillance System53 Source: Bowyer 2004

From the conceptual organization, we learn that surveillance systems can only recognize someone already on the ‘watch list’ and that, ultimately, potential matches are screened by human operators.54
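To make this one-to-many pipeline concrete, the following is a minimal sketch, assuming the open-source Python face_recognition library; the watch-list and camera-frame file names are hypothetical placeholders, not data from this thesis.

import face_recognition

# Build the "watch list" gallery: one 128-dimensional faceprint per image.
watch_list_files = ["person_a.jpg", "person_b.jpg"]  # hypothetical paths
gallery_encodings = []
for path in watch_list_files:
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:  # keep only images in which a face was detected
        gallery_encodings.append(encodings[0])

# Probe: detect and encode every face in a frame from the monitored space.
frame = face_recognition.load_image_file("camera_frame.jpg")  # hypothetical
for probe_encoding in face_recognition.face_encodings(frame):
    # Compare the probe faceprint against every gallery faceprint;
    # `tolerance` is the distance threshold below which two faces "match".
    matches = face_recognition.compare_faces(
        gallery_encodings, probe_encoding, tolerance=0.6
    )
    if any(matches):
        print("Alert: possible watch-list match; route to a human operator.")

As in Figure 2, a match in this sketch only flags a candidate for human review; it does not itself constitute an identification.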

Law enforcement is a prominent example of how face recognition is used: mugshots are collected from arrestees and compared to federal and local face recognition databases or watch lists. The machine learning used in facial recognition is highly effective and is a subset technology of AI. The problematic elements of FRT have less to do with the effectiveness of the machine learning itself than with data cleaning and errors in the face labeling of large datasets.

Accuracy of the technology depends on having a representative data sample with which to train the algorithms used in the machine learning techniques, which we currently lack for many marginalized groups. Datasets are the backbone of AI, and some are more impactful than others:

ImageNet, for example, used in many applications of FRT, contains both racist and sexist data

53 Bowyer, 12. 54 Bowyer, 12.

labels and images collected without consent.55 Hao claims that within ImageNet there are some gross data labeling errors, including a mushroom labeled as a spoon and a frog labeled as a cat, and estimates a labeling error rate of 5.8%.56 These errors have the potential for large impact, especially when examining the context of use. Mixing up a frog with a cat is not good, but also not necessarily life threatening. Mislabeling the image of a darker-skinned woman as something entirely different can have real-life impacts, where there is little room for error. Hao argues that the AI field needs to create cleaner datasets for evaluating models, urging researchers to reexamine their own data.57 As O’Neil suggests in Weapons of Math Destruction, there is an urgent need for modelers and developers to take the first major steps in critiquing their own datasets and research material.

Deep learning techniques are the latest machine learning advancement improving the capabilities of FRT. However, it is important to emphasize that the performance of any machine learning model largely depends on the quality of the data. Fuad et al. examine deep learning techniques that have greatly advanced FRT and are leading to diverse real-world applications of the technology. FRT started as a simple statistical technique, Eigenfaces being one of the most popular techniques among them: this involved representing every image as a vector of weights.58

FRT has evolved to a diverse set of deep learning architectures, as displayed in Figure 3 below.

55 Hao, “Error-Riddled Data Sets Are Warping Our Sense of How Good AI Really Is.” 56 Hao. 57 Hao. 58 Fuad et al., “Recent Advances in Deep Learning Techniques for Face Recognition,” 1.


Figure 3. Variety of Deep Learning Architectures Used in Face Recognition59 Source: Fuad et al. 2021

Here, we see that FRT techniques have proliferated into a diverse set of modeling architectures. A primary goal of FRT is feature extraction. In contemporary modeling techniques, feature extraction is made more efficient by methods like Convolutional Neural Networks (CNNs), consisting of many neural network layers.60 The CNN is also the most popular deep learning algorithm used for image recognition, image classification, pattern recognition, and other feature extraction operations. CNNs can serve as many different types of algorithms, but two key types are 1) feature extractors and 2) classifiers.61 The following figure (Figure 4) helps elucidate the basic

CNN architecture:

59 Fuad et al., 2. 60 Fuad et al., 3. 61 Fuad et al., 4.


Figure 4. Basic Convolutional Neural Network Architecture62 Source: Phung and Rhee 2019

The basic structure consists of four key layers: 1) a convolutional layer, 2) a pooling layer, 3) a non-linear layer, and 4) a fully connected layer.63 Most FRT systems work under a supervised learning method, where labeled datasets serve to link inputs to outputs. These methods of FRT are having an increasingly strong effect on decision-making processes, and performance is closely tied to the user’s demographics as well as factors that extend beyond demographics.64 Extensive research shows that face recognition solutions possess biases that lead to “discriminatory performance differences based on the user’s demographics.”65 Yet there are also biases that extend beyond demographics and have the potential to affect the fairness and security of FRT. It is important to understand how face recognition systems work in order to understand the issues the technologies could create.
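As a concrete illustration of those four layers, below is a minimal sketch of a basic CNN written with PyTorch (an assumed library choice, since this thesis’s appendices use Python); the layer sizes are arbitrary placeholders rather than a production face recognition model.

import torch
import torch.nn as nn

class BasicCNN(nn.Module):
    # Toy CNN mirroring the four-layer structure described above.
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 1) convolutional layer
            nn.ReLU(),                                   # 3) non-linear layer
            nn.MaxPool2d(2),                             # 2) pooling layer
        )
        # 4) fully connected layer: maps extracted features to class scores.
        self.classifier = nn.Linear(16 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)       # feature extraction
        x = torch.flatten(x, 1)    # flatten feature maps for the linear layer
        return self.classifier(x)  # classification

# A 32x32 RGB input yields 16 feature maps of 16x16 after pooling.
model = BasicCNN()
scores = model(torch.randn(1, 3, 32, 32))  # one dummy image

The convolutional and pooling layers act here as the feature extractor, while the fully connected layer acts as the classifier, matching the two key CNN roles noted above.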

62 Phung and Rhee, “A High-Accuracy Model Average Ensemble of Convolutional Neural Networks for Classification of Cloud Image Patches on Small Datasets.” 63 Fuad et al., “Recent Advances in Deep Learning Techniques for Face Recognition,” 5. 64 Terhörst et al., “A Comprehensive Study on Face Recognition Biases Beyond Demographics,” 1. 65 Terhörst et al., 1.

1.3 What Are the Key Issues with Facial Recognition Technology?

The key issues in the use of facial recognition are not in the machine learning modeling itself, but rather in the quality of the data used to train the algorithm and the situations in which

FRT is used to identify and surveil people. Data quality can be tarnished by poor data cleaning and errors in face image labeling that ultimately perpetuate biases. Issues related to bias and FRT errors, combined with limited regulation and guidelines on the use of FRT, lead to the following potential negative consequences: 1) violations of constitutional human rights, 2) bias in algorithmic decision-making, and 3) the issue of appropriate scale. Interestingly, AI and machine learning have helped bring these long-term issues of bias to light, especially relating to issues of bias in criminal justice.66 The issues of bias are often attributed to the FRT itself, but in reality the technology is a tool intended to elevate human capabilities that, in actuality, is perpetuating human limitations.

A primary concern with FRT, one that extends beyond bias and is cited in many ethical principles, is the need to protect privacy. In particular, the rights to privacy, freedom of expression, and peaceful assembly are threatened when FRT is used as a grid surveillance system.

This has been a particularly controversial issue in regard to recent protests (namely that of June

2020 during the Black Lives Matter protests across the nation), where FRT has been used to falsely accuse and arrest legally assembling protest-goers based on video surveillance, primarily targeting people of color.67 These arrests provide an example of how errors and misuse of FRT can have extreme impacts on individual lives. This situation demonstrates that high accuracy standards and strict regulation of FRT use in surveillance are critical to ensuring human-based decisions and the protection of civil and human rights. Further, this grid surveillance

66 Bowser, “Beyond Bans.” 67 Hill, “Wrongfully Accused by an Algorithm.”

system is creating a ‘perpetual line-up’ risk: approximately 64 million Americans live in the 16 states that allow the FBI to use FRT to compare the faces of suspected criminals to driver’s license ID photos, creating a virtual line-up of state residents.68 This line-up is controlled entirely by algorithms, not by humans, and with existing biases in the datasets training these algorithms, the error rate of the line-up is likely to be much higher for marginalized groups. This perpetuates discrimination in the criminal justice system. Further, the human “backstop” meant to help prevent errors in image detection is non-standardized, meaning it is not a dependable safety measure.69

The backbone of any algorithmic model is its data; ImageNet is a dataset that helped propel much of the modern AI revolution.70 Much of the data collected for use in face recognition systems, as in the case of ImageNet – one of the largest and most widely used FRT datasets – consists of images scraped off of webpages without consent from the individuals in the images.71 This is a stark example of a violation of individual privacy, but the problem extends far beyond web-scraping.

As stated previously, ImageNet is known to have racist and sexist image labeling and is only one example of an error-riddled dataset.72 Northcutt et al. were able to identify and show how significant label errors in 10 of the most commonly used computer vision, natural language, and audio datasets skew algorithm results compared to benchmarks.73 They found an average of 3.4% errors across the 10 datasets and determined that large datasets, although useful in training machine learning algorithms, can be far more problematic in terms of labeling errors than smaller, focused datasets.74 This is important in furthering the notion that large datasets

68 Garvie, Bedoya, and Frankle, “The Perpetual Line-Up.” 69 Garvie, Bedoya, and Frankle. 70 Hao, “Error-Riddled Data Sets Are Warping Our Sense of How Good AI Really Is.” 71 Hao. 72 Hao. 73 Northcutt, Athalye, and Mueller, “Pervasive Label Errors in Test Sets Destabilize Machine Learning Benchmarks.” 74 Northcutt, Athalye, and Mueller, 1.

used for FRT carry higher risk than is broadly understood by both practitioners and the public. As

Broussard makes clear in her research, it is important not to immediately trust the accuracy of the outputs of computers and algorithms.75

The most urgent and controversial issue with FRT is the bias in algorithmic decision-making that results from inadequate and erroneous training datasets. These data shortcomings affect marginalized groups at a far higher rate than other groups (e.g., lighter-skinned males).

Buolamwini and Gebru, in their landmark study Gender Shades, show the difference in accuracy rates by demonstrating that darker-skinned females suffer a 34.7% inaccuracy rate, whereas lighter-skinned males see only a 0.8% inaccuracy rate on average.76 These results demonstrate more about human limitations than about ML modeling. The models are extremely advanced and effective but cannot function at the highest level of utility until datasets are representative of the populations they are being used on and are rid of labeling errors. As demonstrated, FRT accuracy rates diverge across demographic groups due to inadequate and erroneous datasets and algorithmic extrapolation beyond dataset relevance.

Lastly, it is important to highlight the issue of scaling the use of the technology when it comes to FRT. Real-time one-to-one use of FRT (like unlocking a phone) has been deemed relatively unproblematic and useful, but one-to-many use of FRT (where a faceprint is compared to a larger database of images, as in law enforcement use of public image data) has had disproportionately negative implications for certain demographic groups.77 FRT risks being weaponized against marginalized groups by perpetuating systemic racism, partly because it is being scaled beyond its intended or appropriate use. In an interview, Dr. Mark Nitzberg clarified that it isn’t AI

75 Broussard, Artificial Unintelligence. 76 Buolamwini and Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” 77 Bowser, “Beyond Bans.”

that has all of a sudden become powerful in recent years, but rather the fact that the vast majority of us now carry miniature supercomputers with us everywhere in the form of smartphones, all connected to one another.78 This has greatly impacted the scale and ubiquity of AI implementation and has thus transformed the impact of FRT from a historical, human-scale impact to an impact orders of magnitude greater, operating at machine scale.

This completely changes how we need to think about regulation and legal frameworks for managing FRT during the Fourth Industrial Revolution. Dr. Nitzberg also claims that the

“duty of care is proportional to a system’s reach,” meaning large corporations and national governments need to be leading the charge in rectifying the unintended negative consequences of technology because they have the broadest reach.79

1.4 Sentiment Analysis and Public Perception of Facial Recognition Technology

Google Trends and Twitter serve as digital data sources that reflect society’s interest in trends and issues. Twitter is a social media platform that allows users to share and exchange all types of information in the form of tweets (‘tweet’ is Twitter’s specific term for a post created and published by a user). Twitter is used to create networks and online communities that have common interests; in other words, “Twitter is what’s happening in the world and what people are talking about right now”. Google Trends is an analysis tool that accesses Google users’ search terms and their frequency of use, also serving as a metric for what society is currently interested in based on current events. Both Google Trends and Twitter serve as data-rich resources that allow researchers to access historical data, offering insight into issues and trends in public discourse over time. I used Google Trends and the Twitter application programming interface (API) to access Google

78 Nitzberg, Informal Interview about Concerns with Facial Recognition Technology. 79 Nitzberg.

search terms and Twitter tweets referencing FRT from 2019 to 2021. Access to this data is useful in situating past and present trends and sentiment within the overall topic and in analyzing what users think and how they react to events involving facial recognition. While Google and

Twitter users are a subset of society, they serve as a proxy for understanding society’s view on issues and events. Therefore, to assess the public perception of FRT, an analysis using both Google and Twitter has been done to highlight how aware society is of FRT, what types of events that awareness is linked to, and what issues are associated with FRT awareness.

To gauge historical interest in FRT, Google Trends for the year 2020 and the spring of

2021 were analyzed for search terms related to FRT. The Google Trends data helps to demonstrate how intertwined facial recognition technology is with political, social, and economic events. The graphs below are from the Google Trends search review of 2020, demonstrating search terms ranging from ‘protests’ to ‘Clearview AI’.80 Each demonstrates the “interest” of certain searches over a given time period using the following parameters: worldwide geographical data, the time period of a single year (2020), and drawing from any kind of web search. The trends show similar peaks and troughs, implying a common trend between the search areas. The left side of the table is drawn from data from the year 2020, and the right side of the table draws on data from the past 12 months (ending in April 2021).

80 “Explore What the World Is Searching.”

Table 1. Comparing Search Trends Data (2020-2021)

Data from 2020 | Data from April 2020 – April 2021 (past 12 months)

Google Trends Data on ‘Facial Recognition’ Search (2020)81 | Google Trends Data on ‘Facial Recognition’ Search (2020-2021)82

Google Trends Data on ‘Black Lives Matter’ Search (2020)83 | Google Trends Data on ‘Protests’ Search (2020-2021)84

Google Trends Data on ‘Police’ Search (2020)85 | Google Trends Data on ‘Clearview AI’ Search (2020-2021)86

Google Trends Data on ‘Clearview AI’ Search (2020)87 | Google Trends Data on ‘Artificial Intelligence’ Search (2020-2021)88

81 “Explore What the World Is Searching: Facial Recognition” 82 “Explore What the World Is Searching: Facial Recognition” 83 “Explore What the World Is Searching: Black Lives Matter” 84 “Explore What the World Is Searching: Protests” 85 “Explore What the World Is Searching: Police” 86 “Explore What the World Is Searching: Clearview AI” 87 “Explore What the World Is Searching: Clearview AI” 88 “Explore What the World Is Searching: Artificial Intelligence”
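For reference, interest-over-time data like that summarized in Table 1 can also be pulled programmatically. The following is a minimal sketch assuming the unofficial pytrends Python library, rather than the Google Trends web interface used for this thesis; the parameters mirror those described above.

from pytrends.request import TrendReq

# English-language interface; no geography set, so data is worldwide.
pytrends = TrendReq(hl="en-US")

# Build a payload for the 2020 calendar year across all web searches.
pytrends.build_payload(
    kw_list=["facial recognition", "Clearview AI"],
    timeframe="2020-01-01 2020-12-31",
)

# Returns a pandas DataFrame of weekly relative interest (0-100) per term.
interest = pytrends.interest_over_time()
print(interest.head())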

All graphs, covering search trends ranging from ‘Facial Recognition’ and ‘Black Lives Matter’ to

‘Protests’, ‘Police’, and ‘Clearview AI’, display at least one major surge in search results. A major surge in search interest for these terms occurred during the June 2020 nationwide Black Lives Matter protests – a period in which tensions between political and social groups were high and a strong backlash against systemic racism was occurring. In addition to the social-unrest and FRT search terms, a search for ‘Artificial Intelligence’ was also made in order to see whether AI drew interest similar to facial recognition. Although ‘Artificial Intelligence’ search trends also saw peaks over the reviewed periods, they weren’t as consistent with the trend as ‘Facial Recognition’, suggesting people do not necessarily associate FRT with the more general technology of AI.

To delve deeper into periods of interest around FRT and to gauge public sentiment, an analysis of FRT-related Twitter activity was completed. First, all tweets including the term ‘facial recognition’ between 2019 and 2021 (through the first 14 weeks of 2021) were fetched using a Python script linked to Twitter via the API (see the full script in

Appendix A).89 The code specifies the elimination of any ‘retweet’ (a reposting of a unique tweet by a user other than the one who originally wrote it) so that the total number of tweets fetched counts only original tweets and not duplicates of the same tweet. Additionally, the code specifies the period of interest and loops from week to week until the end date is reached.
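Below is a simplified sketch of that weekly loop, assuming the Tweepy library; the full script appears in Appendix A, and searching this far back in time requires elevated (full-archive) Twitter API access.

from datetime import date, timedelta

import tweepy

auth = tweepy.AppAuthHandler("API_KEY", "API_SECRET")  # hypothetical credentials
api = tweepy.API(auth, wait_on_rate_limit=True)

week_start = date(2020, 1, 1)
end_date = date(2020, 12, 31)
weekly_counts = []

while week_start < end_date:
    week_end = week_start + timedelta(days=7)
    # "-filter:retweets" excludes retweets so only original tweets are
    # counted; "since:"/"until:" bound the query to a single week.
    query = (
        '"facial recognition" -filter:retweets '
        f"since:{week_start.isoformat()} until:{week_end.isoformat()}"
    )
    tweets = tweepy.Cursor(api.search_tweets, q=query, count=100).items()
    weekly_counts.append(sum(1 for _ in tweets))  # weekly totals, as in Table 2
    week_start = week_end  # advance week to week until the end date is reached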

Additionally, it specifies the period and loops itself within the period from week to week until the end date is reached. The following table (Table 2) displays the total number of tweets including the term ‘facial recognition’ for each week of 2020. 2020 is used as a benchmark year because it’s the most recent full year of data and therefore offers a full range of recent events throughout a whole year. The average number of tweets per week equated to 7,011. There were weeks that saw spikes in tweets regarding facial recognition due to concurrent events.

89 Katzman, Twitter API Python Coding Assistance.
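The following is a simplified sketch of the weekly counting loop just described; the full script used for this thesis appears in Appendix A. The sketch assumes a Twitter API v2 bearer token with full-archive search access (the endpoint and query operators are from Twitter's v2 documentation), so it is illustrative rather than a reproduction of the actual code.

# A minimal sketch of the week-by-week tweet-counting loop. Assumes a
# Twitter API v2 bearer token with full-archive search access; the real
# script used for this thesis is in Appendix A.
import datetime as dt
import requests

BEARER_TOKEN = "YOUR_BEARER_TOKEN"  # placeholder credential
SEARCH_URL = "https://api.twitter.com/2/tweets/search/all"

# '-is:retweet' excludes retweets, so only original tweets are counted.
QUERY = '"facial recognition" -is:retweet'

def count_week(start: dt.datetime, end: dt.datetime) -> int:
    """Count matching tweets between start and end, paging through results."""
    headers = {"Authorization": f"Bearer {BEARER_TOKEN}"}
    params = {
        "query": QUERY,
        "start_time": start.isoformat() + "Z",
        "end_time": end.isoformat() + "Z",
        "max_results": 500,
    }
    total = 0
    while True:
        resp = requests.get(SEARCH_URL, headers=headers, params=params)
        resp.raise_for_status()
        meta = resp.json().get("meta", {})
        total += meta.get("result_count", 0)
        token = meta.get("next_token")
        if not token:
            return total
        params["next_token"] = token  # fetch the next page of results

# Loop week to week across 2020, as described above.
week_start = dt.datetime(2020, 1, 1)
for week in range(1, 53):
    week_end = week_start + dt.timedelta(weeks=1)
    print(week, count_week(week_start, week_end))
    week_start = week_end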

Table 2. Weekly Total Number of Tweets Including the Term 'Facial Recognition' (2020)

Week  Tweets    Week  Tweets
1     3493      27    7290
2     5135      28    7262
3     17797     29    7624
4     16878     30    10067
5     10493     31    8088
6     9291      32    6888
7     7177      33    7588
8     8472      34    6957
9     8851      35    4720
10    8378      36    4791
11    6383      37    7153
12    3507      38    4898
13    3591      39    6302
14    5048      40    4211
15    4762      41    3597
16    5152      42    3743
17    3866      43    4238
18    3246      44    4460
19    4735      45    3767
20    5027      46    3674
21    5963      47    4478
22    6713      48    3793
23    9950      49    3383
24    27362     50    5172
25    12791     51    5409
26    15355     52    5615
Total tweets: 364,584

The highest numbers of tweets in 2020 came in Week 3 (January 15-22) and Week 24 (June 8-14). Week 3 reached 17,797 tweets and Week 24 reached 27,362 tweets. This is interesting because the events occurring during these periods correlate with surges in public attention toward facial recognition. Week 3's increase in facial recognition tweeting can be correlated to increased reporting on Clearview AI, a facial recognition technology startup highlighted in the media as a threat to personal privacy. By January 2020, more than 600 law enforcement agencies had started using Clearview AI technology, an app that allowed a user to upload an image of an individual and see any public photographs of that person along with links identifying personal information about that person.90 By Week 3 of 2020, the Clearview AI database contained over three billion images scraped from image data on Facebook, YouTube, Venmo, and many other websites. This made Clearview's database far more invasive and expansive than any data collection scheme initiated by the national government or key Silicon Valley multinationals.91 Clearview AI, a New York-based company, has been criticized as unethical on many accounts, namely for scraping billions of images off the internet without consent and putting the data in the hands of unregulated systems.92 In May 2020, the American Civil Liberties Union (ACLU) sued Clearview AI, which claims to have helped "hundreds of law enforcement agencies" use online photos to solve crimes, accusing the company of "unlawful, privacy-destroying surveillance activities."93

Week 24's surge in facial recognition tweeting can be correlated to the nationwide Black Lives Matter protests. In June 2020, tensions between the public and law enforcement reached a peak sparked by the killing of George Floyd. This highly publicized event spurred a public backlash against police use of force. Further, it spurred public scrutiny of law enforcement's use of technology to surveil individuals involved in subsequent protests, which was alleged to invade privacy and perpetuate racism. Many multinational companies (e.g., Amazon, Microsoft, IBM) banned or halted law enforcement use of their proprietary facial recognition technology systems. This move from Silicon Valley multinationals was in part a reaction to pressure from police-reform activist groups and civil rights groups, whose research alleged that FRT currently in use produces inaccurate results for people of color, making the technology biased.94 The risks related to such inaccuracies in law enforcement were felt by many to be too high to be acceptable. These relationships between specific events and facial recognition technology searches are further detailed below.

90 Hill, "The Secretive Company That Might End Privacy as We Know It." 91 Hill. 92 Hill, "What We Learned About Clearview AI and Its Secret 'Co-Founder.'" 93 Hill.

The Twitter API data from 2019 to 2021 is graphically displayed below. This, again, demonstrates a trend line similar to that of the Google Trends searches.

Figure 5. Graph Showing Total Weekly Tweets Including the Term ‘Facial Recognition’ (2020)

The trendlines for the 2020 data display surges in Weeks 3 and 24. The total numbers of tweets in 2019 and 2020 are similar (2019: 380,215; 2020: 364,584). Surprisingly, the total number of tweets involving 'facial recognition' was slightly higher in 2019, despite 2020 seeing massive spikes during various periods.

94 Allyn, “Amazon Halts Police Use Of Its Facial Recognition Technology.”

The June 2020 spike is the most recognizable in all the graphs (both the total tweets graphs and the Google Trends graphs).

In addition to analyzing total tweets per week, I also used the Twitter API data to retrieve and review the tweets themselves. The table below (Table 3) provides a sampling of tweets from Weeks 3 and 24 to give further context on the discourse surrounding the use of facial recognition technology by Clearview AI and by law enforcement agencies at the time of these events.

Table 3. Sample of Tweets Fetched from Week 3 and Week 24 (2020)

Tweet ID and Tweet Text

Week 3
1219771420575596544  "Clearview AI's facial recognition goes creepier than most surveillance tech video - CNET https://t.co/7pT3Ounvvl https://t.co/CzhgtXgm8p"
1219769648486416384  "Clearview AI Takes Facial Recognition Data of Millions from Social Media #marketing #contentmarketing #inboundmarketing #blog #mktg #socialmedia #socialmediamarketing #smm #growthhacking #website https://t.co/q7w9pO4n3S"
1219769550641778690  "LEAK: Commission considers facial recognition ban in AI 'white paper' https://t.co/iU5HZ6Odnn"
1219769155701878785  "2018 Risky Technology 2019: US Police Use of New Facial Recognition App Prompts Privacy Concerns: https://t.co/ZA8ENtMXFe via @SputnikInt"
1219768390421749760  "Schneier says that we need to regulate more than facial recognition, we need to regulate recognition itself -- and the data-brokers whose data-sets are used to map recognition data to peoples' identities. https://t.co/VmwzYNIn7i"
1219767820579475456  "Facial recognition companies are trying to install a rights-violating surveillance dragnet on college campuses. This won't make us safer, and we need to stop them NOW. Sign the petition: https://t.co/rXctF0UyV4"
1219765794671861762  "We really need laws targeting the use of facial recognition not just by law enforcement but by everyone. https://t.co/UHt9ztvKpU"
1219765253405196288  "Law enforcement is using a facial recognition app with huge #privacy issues https://t.co/X6VlhNK7bg https://t.co/pRsg3WTnPs"
1219762534766411776  "Chinese City Uses Facial Recognition to Shame Pajama Wearers https://t.co/bWbzKQQm0j"
1219760662060175360  "Go read this NYT expose on a creepy new facial recognition database used by US police https://t.co/B5IuLnevWW"
1219760333776093184  "Automated facial recognition systems are a direct threat to the right to privacy. Proud to work with @fightfortheftr on such an important movement. #BanFacialRecognition https://t.co/YGPWguOqRD"
1219752710360031232  "'It's creepy what they're doing, but there will be many more of these companies. There is no monopoly on math. Absent a very strong federal privacy law, we're all screwed,' said Director of Privacy @agidari on Clearview AI, a facial recognition app. https://t.co/6KvrveMIOy"

Week 24
1271954993893924864  ".@amazon will you support Black communities by stopping the sale of facial recognition technologies to police forces, to prevent these from being used to criminalize and police communities of color? #StandAgainstFacialRecognition"
1271953266537086977  "Major Tech Companies Promise To Stop Selling Facial Recognition Tech To Cops - The Mind Unleashed via BrainSights for iOS https://t.co/4sNWlXIxiV"
1271953081782149121  "George Floyd: Microsoft bars facial recognition sales to police https://t.co/1CSQXgw9jn"
1271950389940826113  "Microsoft's policy doesn't go far enough. They should refuse to sell facial recognition technology to *any* government of any kind"
1271948733727379458  "Major Tech Companies Promise To Stop Selling Facial Recognition Tech To Cops https://t.co/8vae82jfJ8 https://t.co/lXQmIsXcWf"
1271947981424492544  "Facial recognition backlash persuades Big Tech to look in the mirror - The Sunday Times https://t.co/hil9g76KYy #humanrights #tech"
1271947357316247555  "Silicon Valley has admitted facial recognition technology is toxic – about time https://t.co/31BBUulWqq"
1271944076854562816  "Outrage over police brutality has finally convinced Amazon, Microsoft, and IBM to rule out selling facial recognition tech to law enforcement. https://t.co/4pvGkLy4cE"
1271943034146603010  "@constructal1 @D1reito @marinamaral2 Clearly the analysis of the data is flawed. Like facial recognition software is biased because most analysts, software developers and testers are white. Education, news, TV, comedy, the arts in most parts subtly perpetuates racial stereo types. 2/2"
1271940786737815552  "@TonyBeast1957 FBI needs more facial recognition data for upcoming protests"
1271940203997954049  "The European solution of temporary ban (until proper regulation and ethical frameworks are developed) is increasingly seen as the safer way to go: #facialrecognition #AIEthics https://t.co/r0h4EFEH3v"
1271932533723344897  "Now on Naija Reports: Report: 30,000 college football fans unknowingly captured by facial-recognition test at Rose Bowl https://t.co/turl83xYis"

A sentiment analysis detects natural language usage that implies positive, neutral, or negative sentiment polarity within a text. In other words, it indicates a positive or negative opinion and the general attitude about the topic being discussed based on the "computational treatment of the subjectivity in a text."95 This is, in general, a difficult task to do with full accuracy: a text can contain multiple sentiments at once, the data are difficult to clean appropriately for accurate sentiment analysis (removing URLs and tags), and irony and sarcasm are generally difficult to detect, which can further confuse the results.96 For large datasets, however, it is useful for collecting average sentiment. The specific kind of sentiment analysis used in this study was the VADER (Valence Aware Dictionary for Sentiment Reasoning) model. This model relies on a dictionary mapping lexical features to both emotion intensities and sentiment scores.97 Positive lexical features produce positive sentiment scores and highly negative features produce negative scores (with neutral terms averaging a score of 0), on a scale normalized from +4 to -4. Table 4 below displays the average sentiment polarity for each week of the 2020 FRT-related tweet data.

95 Beri, “Sentimental Analysis Using Vader.” 96 Beri. 97 Beri.
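Before turning to the weekly averages in Table 4, the following is a minimal sketch of the VADER scoring step, assuming the vaderSentiment Python package (`pip install vaderSentiment`); the exact cleaning rules used for this thesis may differ from the regular expressions shown here.

# A minimal sketch of VADER scoring, assuming the vaderSentiment package.
# URLs and @-mentions are stripped first, since such tokens carry no
# lexical sentiment and can confuse a lexicon-based model.
import re
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def clean(tweet: str) -> str:
    """Remove URLs and @-mentions before scoring."""
    tweet = re.sub(r"https?://\S+", "", tweet)
    return re.sub(r"@\w+", "", tweet).strip()

def average_polarity(tweets: list[str]) -> float:
    """Mean of VADER's 'compound' summary score over a week's tweets.

    VADER rates individual lexical features on a -4 to +4 intensity
    scale and then folds each text's summed valence into a single
    compound score; averaging that week by week yields a table of
    weekly polarities like Table 4.
    """
    scores = [analyzer.polarity_scores(clean(t))["compound"] for t in tweets]
    return sum(scores) / len(scores) if scores else 0.0

# Toy example with two short texts:
print(average_polarity([
    "Facial recognition makes a lot of mistakes.",
    "Proud to work on such an important movement!",
]))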

Table 4. The Average Polarity of Tweets from Each Week (2020)

Week  Polarity (-4 to 4)  Tweets    Week  Polarity (-4 to 4)  Tweets
1     0.054               3493      27    -0.064              7279
2     0.034               5134      28    -0.041              7257
3     -0.060              17793     29    -0.034              7612
4     -0.061              16884     30    -0.196              10064
5     -0.049              10493     31    -0.007              8087
6     -0.041              9294      32    0.029               6885
7     0.001               7177      33    -0.002              7581
8     -0.021              8473      34    -0.017              6943
9     -0.041              8845      35    -0.032              4721
10    0.072               8379      36    -0.042              4786
11    0.021               6379      37    -0.073              7149
12    0.015               3503      38    0.025               4903
13    0.059               3592      39    -0.032              6304
14    0.057               5042      40    0.009               4211
15    0.059               4763      41    0.055               3593
16    0.047               5155      42    0.028               3739
17    0.061               3876      43    0.025               4242
18    0.071               3253      44    0.033               4458
19    0.025               4737      45    -0.009              3765
20    0.020               5019      46    0.062               3678
21    0.063               5960      47    0.055               4473
22    0.029               6709      48    0.053               3786
23    -0.039              9943      49    -0.008              3383
24    0.000               27349     50    0.026               5168
25    0.009               12783     51    -0.008              5410
26    -0.171              15341     52    -0.018              5615
Average polarity: 0.000 across 364,461 tweets

Ultimately, the sentiment analysis demonstrated an overall neutral (0.000) average polarity for tweets involving 'facial recognition' in 2020. It is interesting to note that Week 3 has a negative polarity (during the spike in tweets that correlated with the Clearview AI controversy), while Week 24 had a neutral polarity (during the spike in tweets that correlated with George Floyd's death and the Black Lives Matter protests); beginning in Week 26, however, polarity was negative for six consecutive weeks. A sentiment analysis is useful for gauging wider public opinion about specific topics. In this case it was useful both for understanding how the public perceives the impacts of FRT and for revealing gaps between how the public and researchers perceive the technology's implications. For example, during Week 24 the sentiment was not immediately negative, although the technology later drew substantial backlash for its consequences for protest-goers and for misidentifications leading to false arrests.

In a more broadly publicized case, Robert Julian-Borchak Williams was wrongfully arrested based on facial recognition decision-making by the Detroit Police Department, ultimately leading Mr. Williams to file a formal lawsuit.98 After his arrest, Mr. Williams stated to investigators, "This is not me… You think all Black men look alike?"99 This speaks to the biases and inaccuracy of FRT algorithms when they are used beyond their intended scope or when the datasets used for training are insufficient or inaccurate. As Williams' case and other examples show, such bias and inaccuracy can have significant consequences for individual lives and threaten marginalized groups. More alarmingly, this is only the first reported case, and subsequent lawsuit, of an American being wrongfully arrested based on a flawed match from facial recognition software, but it is likely not the last as the methodology is scaled up for general use in policing.100 Regarding the lack of understanding and the risks related to issues of bias, Dr. Mark Nitzberg, Executive Director of the Center for Human-Compatible AI, shared his concerns about the education gap between researchers and the public in an interview for this thesis. He stated that the most important step toward rectifying the flaws of the system is widespread education, particularly for members of Congress. As he put it, "Congress represents the people, so they need to understand [the biases in facial recognition]" so that proper regulation can establish safe and effective use of FRT, leading to positive effects on how and when we use the technology and making it a positive for society as a whole.101

98 Hill, "Wrongfully Accused by an Algorithm." 99 Hill. 100 Hill.

The current backlash against FRT has caused a widespread reckoning with how we use it and who is in control of its implementation and the data collected. With further advances in artificial intelligence and increased applications of tools like facial recognition, there is a greater need to refine policy and technological implementation to ensure equitability and efficacy for all. There are already ethical principles put forth by institutions themselves, and a few cases of state-level regulation, which I will cover next in Chapter 2. There are as yet no federal regulations limiting and guiding the use of FRT, which highlights where some of the gaps in achieving ethical technologies lie.

101 Nitzberg, Informal Interview about Concerns with Facial Recognition Technology.

CHAPTER 2

CURRENT APPROACHES TO ETHICS IN ARTIFICIAL INTELLIGENCE FOR FACIAL RECOGNITION

2.1 Current Approaches to Ethics

In this section, the state of regulation and policy on FRT will be outlined, following the introduction in Chapter 1 to some of the issues relating to the use of FRT. This will also allow gaps in regulation and policy to be identified, in order to make effective recommendations for a more ethical future of FRT use. FRT's increasing breadth of use makes it one of the most powerful surveillance tools ever made, hence the increased urgency with which it needs to be monitored and regulated.102 In some consumer products, the software might allow you to opt in or out of using the face recognition system (as on your smartphone), but because the technology is already so ubiquitous in public spaces, it is near impossible to avoid.103 Public backlash over the technology's ubiquity, evidence of its racial and gender discrimination, and, more specifically, its identification of protestors (threatening freedom of expression and raising fears of an 'Orwellian nightmare') caused Big Tech companies to respond by limiting the use and sales of FRT.104 Public awareness and backlash have proven to be a powerful tool in restricting the use of FRT in controversial situations and promoting regulation of the technology's use.

102 Klosowski, "Facial Recognition Is Everywhere. Here's What We Can Do About It." 103 Klosowski. 104 Bowyer, "Face Recognition Technology," 1; Klosowski, "Facial Recognition Is Everywhere. Here's What We Can Do About It."

In order for users to have increased autonomy and control, without limiting the potential benefits of the technology or threatening individuals' constitutional right to freedom of expression, a combination of government regulation, industry self-regulation, and privacy-enhancing technologies needs to be implemented.105 In other words, a balance needs to be reached between "the potential for algorithms to improve individual social welfare… [and]… significant ethical risks."106 Researchers, policymakers, and industry experts alike are already attempting to reach this balance, but as of now no measures are holistic enough to create an accepted consensus on appropriate use. In order to better understand how ethics can be applied to AI in FRT, it is useful to have a better picture of the array of applications the technology can have.

2.2 Applications of Facial Recognition Technology

To reiterate, FRT is an extremely powerful surveillance tool. Consumer use of the technology has become so normalized in daily life that it is relatively unremarkable: unlocking smartphones, sorting through photos. Government and corporate use of the technology, on the other hand, risks greater impact on individual lives and society at large and has become so pervasive that it exists everywhere and yet remains mainly out of sight.107 Widespread application of AI is inevitable given its expansive and innovative capabilities for sectors ranging from healthcare and environmental conservation to security and education.108 Facial recognition is made possible by machine learning tools and is a category within the broader functions of AI that aims not to better human function but to replace it.109 The focus of innovation has been primarily on creating autonomous systems, so much so that modern data-driven systems, such as deep learning, have made it difficult to pinpoint the root causes of problematic outcomes. Since the initial development of AI, there has been little to no emphasis on creating systems that humans can easily control through user interfaces.110

105 "Seeing Is IDʼing: Facial Recognition & Privacy." 106 Tsamados et al., "The Ethics of Algorithms," 1. 107 Klosowski, "Facial Recognition Is Everywhere. Here's What We Can Do About It." 108 Shneiderman, "Bridging the Gap Between Ethics and Practice," 2. 109 Bowser, "Beyond Bans."

Advances in and widespread use of AI have brought forward long-term issues of bias in machine-driven decision-making; these issues of bias extend deeply into the uses of facial recognition technology. Further, these biases deepen social inequalities, such as the inequitable treatment of minority groups and data and privacy violations, and even challenge basic human rights.111 Shneiderman, a distinguished scholar of computer science, urges a second Copernican revolution: a fundamental shift from machine-centered to human-centered thinking. Shneiderman offers 15 practical recommendations to move this fundamental shift forward, which will be addressed in part in this chapter and in Chapter 3. The United States, in particular, has witnessed a recent and intense surge in AI investment to 'catch up' to other nations that have advanced more quickly in the field. This increased investment focus has created more urgency for scholars to take part in the conversation regarding the establishment of ethical frameworks to be used in parallel with AI innovation.

The National Security Commission on AI (NSCAI) has indicated that "America is not prepared to defend or compete in the AI era," underscoring the need for further advances in technology and policy infrastructure to ensure both advanced and responsible innovation of facial recognition within the US.112 The goal of NSCAI's report is to present "an integrated national strategy to reorganize the government, reorient the nation, and rally our closest allies and partners to defend and compete in the coming era of AI-accelerated competition and conflict," urging that infrastructure propelling AI development and implementation be one of the nation's immediate and top priorities.113 The two primary convictions of the final report are: 1) the rapidly improving ability of computer systems is "world altering," and 2) AI is "expanding the window of vulnerability the United States has already entered," announcing that America's technological dominance is under threat for the first time since the beginning of World War II.114 The NSCAI report is an American-centric call to action and is important in detailing the urgency around the call for governmental and national involvement in AI development, but it is important to note that the regulatory and innovative measures needed to make technologies ethical are of global importance. In summary, the NSCAI reports on AI's present and future development and suggests that the US take part in accelerating the innovation and potential of AI, along with establishing an ethical and sustainable future for it.

110 Shneiderman, "Bridging the Gap Between Ethics and Practice," 2. 111 Shneiderman, 2. 112 Schmidt and Work, "Final Report," 1.

The following tabular summary (Table 5) provides examples of the current and potential applications of FRT and gives a brief introduction to the pros and cons of using the technology. This serves to illustrate some of the legal and ethical concerns surrounding its use.

113 Schmidt and Work, 8. 114 Schmidt and Work, 7.

Table 5. Examples of Current and Potential Applications of Facial Recognition Technology Use in Three Categories along with the Advantages and Areas for Concern

Category: Personal
- Use: Secure access to applications and unique ID systems that eliminate passwords (e.g., Face ID). Advantage: enhanced user interface (UI). Areas for concern: storage and data security; data breaches of mass stored data if FRT is not advanced as part of the AI technology and policy drive.
- Use: Driver's licenses (DMV records), Social Security, passports (Customs & Border Protection). Advantage: unique personal ID metric. Areas for concern: storage and data security; bias potential and misidentification/use.

Category: Government
- Use: Biometric IDs eliminating paperwork. Areas for concern: privacy and security; Digital ID misuse.115
- Use: FR/Digital IDs/Human Identification at a Distance (HID) for tracking missing persons and criminals; police investigative tool. Areas for concern: algorithmic mistakes based on inaccuracies of machine learning (ML) for certain profiles; bias profiling and misidentification/use.116
- Use: National defense and security. Areas for concern: misuse; bias potential and misidentification/use.

Category: Industry
- Use: Enhanced information technology. Areas for concern: intellectual property (IP) control or adherence is not uniform globally;117 control of exported technology.
- Use: National central repository for patient health care records (health care service). Areas for concern: privacy and security; bias potential and misidentification/use.

115 “EPIC.Org.” 116 “Facial Recognition Technology (FRT).” 117 Schmidt and Work, “Final Report.”

With the growing applications of AI across a broad public landscape, activist groups have scaled up efforts to vocalize concerns and take action against governments, corporations, and private entities deemed to be threatening individual civil rights. A prominent example is Fight for the Future, a group of "artists, engineers, activists, and technologists who have been behind the largest online protests in human history, channeling internet outrage into political power to win public interest victories… We fight for a future where technology liberates - not oppresses - us".118 A key project Fight for the Future is focused on is banning facial recognition surveillance programs, citing the following key issues with the technology: 1) a 98% inaccuracy rate in identifying people, making the system "broken"; 2) law enforcement using facial recognition databases without warrants, making the technology invasive and in violation of the Fourth Amendment; 3) the programmatic misidentification of people of color, women, and children, making the technology unjust; 4) the data collected being stored in government databases, making it vulnerable to identity theft and hackers; and 5) the threat to our future, because it makes surveillance so ubiquitous and automated that it is near impossible to avoid and can too easily be used as a tool of oppression, as in some authoritarian states - many cite China as an example.119

Fight for the Future created a map (Figure 6) that displays where facial recognition technology is being used and where state and local restrictions are being put in place. The map aims to help spread public awareness of where and how facial recognition is being used around them.

118 “Fight for the Future.” 119 “Ban Facial Recognition.”


Figure 6. Fight for the Future’s Map Showing Where Facial Recognition Surveillance Is Happening and Where There Are Local and State Efforts in Place to Restrict its Use in the United States120 Source: Fight for the Future

The map demonstrates nationwide use of FRT, with a higher concentration in urban, more densely populated regions. The map also demonstrates a clear pushback against the technology from specific pockets of the general public and from some local and state governments that have already put some kind of restriction on the technology in place.

Facial recognition has the potential to be used in almost any industry or field: law enforcement, healthcare, retail, hospitality, marketing, banking, public events, social media, air travel, automobiles, entertainment, voting, education, ride-hailing, food, and consumer electronics.121 In addition, smart cities (urban areas that use electronics to collect data used to improve city infrastructure and mechanisms) are increasingly prevalent and have automated systems for engagement in public urban spaces. For example, public housing in Detroit has been using security cameras as a tool of law-enforcement-related surveillance.122 The purpose of this surveillance tool is to give the Detroit Police Department access to video footage to be used in conjunction with its facial recognition technology when the Detroit Public Housing Commission files a police report. Following the installation of the surveillance technology, Sandra Henriquez, the commission's executive director, stated, "I think the police departments won't make frivolous claims based solely on technology… I think that they will use the technology as one tool that they use in bringing people into the criminal justice system".123 This statement has proven to be extremely contentious following many instances of law enforcement's questionable use of the technology.

120 "Ban Facial Recognition." 121 "Like It Or Not Facial Recognition Is Already Here." 122 Fadulu, "Facial Recognition Technology in Public Housing Prompts Backlash."

Law enforcement has been using facial recognition technology for over 20 years, with little oversight and regulation of its protocols for use.124 In more recent years, in part because of increased attention surrounding facial recognition and its broadening reach in the public arena, law enforcement use has come under considerable public scrutiny. When Black Lives Matter protests gained momentum across the nation during the summer of 2020 and much of the country was reckoning with persisting systemic racism and social injustice, some large tech companies - namely, IBM, Amazon, and Microsoft - took a stand and halted sales of facial recognition technology to law enforcement.125 Some cities, like San Francisco, also implemented city-wide bans on law enforcement's use of the technology.126 These bans and restrictions came about because of increased attention given to court cases in which unregulated and allegedly error-prone uses of FRT were significant in reaching a verdict. Consider, for example, the case of Willie Allen Lynch, "who was accused in 2015 of selling $50 worth of crack cocaine, after the Pinellas facial recognition system suggested him as a likely match. Mr. Lynch, who claimed he had been misidentified, sought the images of other possible matches; a Florida appeals court ruled against it. He is serving an eight-year prison sentence."127

123 Fadulu. 124 Valentino-DeVries, "How the Police Use Facial Recognition." 125 Horowitz, "Tech Companies Are Still Selling Facial Recognition Tools to the Police." 126 Valentino-DeVries, "How the Police Use Facial Recognition."

This demonstrates, in part, how the use of facial recognition technology in public housing and in criminal identification could cause public backlash. The issue has reached Congress due to growing concerns that "unproven technology will ensnare innocent people while diminishing privacy rights".128 A bill, the "No Biometric Barriers to Housing Act", was introduced in 2019 to limit the use of biometric surveillance tools in federally assisted dwelling units.129 Ultimately, the bill was not voted on, but it shed light on the unjust uses and potential negative outcomes of the technology, calling for a greater push toward regulation and toward safeguarding FRT's implementation.

2.3 State-Level Regulations and Ethical Principles

Within the last decade, FRT has gained national and international attention both as an effective tool and as an area needing comprehensive regulatory oversight. The following lists some recent FRT-related regulations and ethical guidelines, as well as landmark cases, in the US:

○ 2015: The US Government Accountability Office (GAO) published a report to the Senate Subcommittee on Privacy, Technology and the Law entitled "Facial Recognition Technology: Commercial Uses, Privacy Issues, and Applicable Federal Law". The report points out the need to address a full range of privacy concerns and to augment the Privacy Act of 1974, which covers information collected by the government as well as information in the private sector (including information covered by separate acts such as the Health Insurance Portability and Accountability Act [HIPAA] and the Children's Online Privacy Protection Act [COPPA]).130

127 Valentino-DeVries. 128 Fadulu, "Facial Recognition Technology in Public Housing Prompts Backlash." 129 "No Biometric Barriers to Housing Act of 2019."

○ 2016: Relating to privacy and commercial use, the US National Telecommunications and Information Administration (NTIA) produced the Privacy Best Practice Recommendations for Facial Recognition Technology.131

○ 2018: Relating to government use of biometric data, EPIC.org formally opposed the use of FRT by US Customs and Border Protection (CBP) due to inadequate FRT and inadequate laws protecting a Digital ID without consent.132 In addition, EPIC requested documents through the Freedom of Information Act (FOIA) that were denied or returned incomplete, indicating that more policy around FRT, and the infrastructure to support that policy, is required. The American Civil Liberties Union (ACLU) supports these as well as state-led initiatives to ensure transparency policies for surveillance technology.133

○ 2016-2019: Industries put forth or updated self-regulation standards. The International Biometrics and Identification Association released "Privacy Best Practice Recommendations for Commercial Biometric Use," and the IEEE Standards Association is working on specifications to help ensure the technology is used ethically.134 The Digital Signage Federation issued privacy standards for its members that address facial recognition, and the U.S. Chamber of Commerce has published policy principles to guide policymakers as they consider legislative proposals.135 In addition, the Federal Trade Commission has issued best practices (NCSL).136 Further, the global Partnership on AI was formed, which includes 100+ Big Tech companies including Google, Apple, Amazon, and Microsoft.137

130 "FACIAL RECOGNITION TECHNOLOGY: Commercial Uses, Privacy Issues, and Applicable Federal Law." 131 "Privacy Best Practice Recommendations." 132 "EPIC v. CBP." 133 "EPIC v. CBP." 134 "IBIA Privacy Best Practice Recommendations For Commercial Biometric Use."

○ 2019: State vs. federal regulation: Individual states, beginning in 2008 with IL and later including MA, CA, CO, WA, and OR, put bans on the use of FRT by city agencies, including the police, after statistics and high-profile cases (see the George Floyd Justice in Policing Act of 2020) pointed to FRT resulting in misuse of information, including racial bias and profiling.138 These states and others are providing their own laws protecting biometric privacy (e.g., the Biometric Information Privacy Act [BIPA] and the California Consumer Privacy Act [CCPA]). In a landmark case, Facebook lost a $550M lawsuit for using FR in violation of the BIPA.139

○ 2019: The Organisation for Economic Co-operation and Development (OECD) adopted principles on AI, promoting innovative and trustworthy AI that respects human rights and democratic values.140

○ 2019: The White House released ten legally binding principles to help guide agencies in creating regulatory and non-regulatory approaches to AI innovation.141

○ 2019-2020: Two prominent FR bills were put forth:
■ Facial Recognition and Biometric Technology Moratorium Act: bans the use of FR by federal law enforcement but allows exceptions with a warrant. This bill is accompanied by a bill proposing a ban on the use of biometric data for housing.142
■ Commercial Facial Recognition Privacy Act: prohibits users of FR from collecting, re-sharing, tracking, or identifying users without consent.143

135 "Digital Signage Privacy Standards"; "U.S. Chamber Facial Recognition Policy Principles." 136 "FTC Recommends Best Practices." 137 "The Partnership on AI." 138 Greenberg, "Facial Recognition Gaining Measured Acceptance." 139 Klosowski, "Facial Recognition Is Everywhere. Here's What We Can Do About It." 140 "OECD Principles on Artificial Intelligence." 141 Vought, "Memorandum for the Heads of Executive Departments and Agencies."

○ 2020: The U.S. Department of Defense released ethical principles for the use of AI following the recommendations of the Defense Innovation Board.144

○ 2020: Clearview AI became a landmark case in the regulation of facial recognition and furthered the discourse concerning what positive and negative use cases of the technology are and can be. Clearview AI scraped over 3 billion images from Facebook, Twitter, LinkedIn, YouTube, and Venmo, as well as millions of other websites.145 The data is used by law enforcement agencies to help find victims of abuse and to solve child abuse cases. According to law enforcement it is the "biggest breakthrough in the last decade", but according to many groups concerned with misuse and misidentification, like the Surveillance Technology Oversight Project, it underscores a push to "normalize surveillance" while, given current technology and policy caveats, "Facial recognition makes a lot of mistakes."146

142 “Facial Recognition and Biometric Technology Moratorium Act of 2020.” 143 “Commercial Facial Recognition Privacy Act of 2019.” 144 “DOD Adopts Ethical Principles for Artificial Intelligence”; Bowser, “Beyond Bans.” 145 Hill, “The Secretive Company That Might End Privacy as We Know It.” 146 Hill and Dance, “Clearview’s Facial Recognition App Is Identifying Child Victims of Abuse.”

○ 2020: NIST (National Institute of Standards and Technology) released four principles of explainable AI, providing insights into the challenges of designing explainable AI systems and offering a step toward creating trustworthy AI.147

Concurrently, in Europe some of the primary regulations are as follows:

○ 2018: The European Union (EU) Commission released an Ethics Guideline for Trustworthy AI, listing seven key requirements that need to be met to ensure that AI, including facial recognition technology, is appropriately regulated. These guidelines were developed in conjunction with the 2016 General Data Protection Regulation (GDPR), which safeguards biometric data and further defines "remote biometric identification" by requiring consent or an impact assessment.148

The EU's 7 Key Requirements:149
1. Human agency and oversight
2. Technical robustness and safety
3. Privacy and data governance
4. Transparency
5. Diversity, non-discrimination and fairness
6. Societal and environmental well-being
7. Accountability

147 Phillips et al., “Four Principles of Explainable Artificial Intelligence.” 148 Phillips et al. 149 “Ethics Guidelines for Trustworthy AI.”

o In addition, the EU is already far ahead of the United States in developing federal-level regulations. In a united effort to limit police use of FRT, the EU has proposed some of the first laws to regulate AI in policing.150

The above regulatory activity for AI, including FRT, highlights several key points: 1) regulation is complicated – it requires consideration of practical uses and privacy protection – and 2) regulation is not only regional – it requires nationwide and global conformity. Regulation of, and concern for, appropriate and ethical use of AI, and facial recognition specifically, have increased in recent years. Safeguarding measures have extended to impact assessment tools as a means of monitoring the effects of a system. For example, NIST's Face Recognition Vendor Test (FRVT) assesses the performance of one-to-one face recognition algorithms used for identity verification and evaluates accuracy variations across different demographic groups.151 The impacts on individuals of unjust and unwarranted use of the technology can be extremely detrimental in ways that could not have been accounted for during the technology's inception.
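To make the demographic-differential idea concrete, the following is a toy sketch of the kind of evaluation FRVT Part 3 performs: false match rate (FMR) and false non-match rate (FNMR) computed separately per demographic group. The data structures and threshold here are hypothetical; FRVT itself uses large sequestered datasets and far more rigorous protocols.

# A toy sketch, with hypothetical data, of per-group error rates in the
# spirit of FRVT Part 3's demographic-effects evaluation.
from collections import defaultdict

# Each trial: (group, same_person?, similarity score from the FR algorithm).
trials = [
    ("group_a", True, 0.91), ("group_a", False, 0.35),
    ("group_b", True, 0.62), ("group_b", False, 0.58),
]
THRESHOLD = 0.60  # hypothetical verification decision threshold

stats = defaultdict(lambda: {"fm": 0, "imp": 0, "fnm": 0, "gen": 0})
for group, same, score in trials:
    s = stats[group]
    if same:
        s["gen"] += 1
        s["fnm"] += score < THRESHOLD   # genuine pair wrongly rejected
    else:
        s["imp"] += 1
        s["fm"] += score >= THRESHOLD   # impostor pair wrongly accepted

# A fair algorithm should show comparable FMR/FNMR across groups.
for group, s in stats.items():
    print(group,
          "FMR:", s["fm"] / s["imp"],
          "FNMR:", s["fnm"] / s["gen"])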

2.4 Big Tech Industry Response to Ethics in Artificial Intelligence

Big Tech, the main driver of FRT innovation and its biggest commercial interest, has started to implement its own responsible AI initiatives, both in response to and as a testimony of conformance with existing state and proposed regulations.152 In addition, some Big Tech companies (e.g., Amazon, Microsoft) have either pulled out of providing FRT to law enforcement or put a temporary moratorium on such use until further evaluation is done and regulations are developed.

150 Schechner, “Artificial Intelligence, Facial Recognition Face Curbs in New EU Proposal.” 151 Grother, Ngan, and Hanaoka, “Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects.” 152 Browne, “Tech Giants Want Rules on Facial Recognition.”

The following section provides examples from Microsoft and Google, which are part of a wider group of tech companies working toward best-practice standards for AI use (the Partnership on AI) as well as implementing self-regulating company principles.153

o Microsoft: Initiated a team (FATE: Fairness, Accountability, Transparency, and Ethics in AI) in response to growing demand for accountability and ethics in AI from the key developers of the technology.154 To promote a culture shift, Microsoft developed its own AI principles: 1) fairness, 2) reliability and safety, 3) privacy and security, 4) inclusiveness, 5) transparency, and 6) accountability.155 In addition, following the Black Lives Matter protests in June 2020, Microsoft announced limits on police use of its FRT.156

o Google: Similarly launched a team (PAIR: People + AI Research) and released a list of, and commitment to, general recommended practices for AI: 1) use a human-centered design approach, 2) identify multiple metrics to assess training and monitoring, 3) when possible, directly examine your raw data, 4) understand the limitations of your dataset and model, 5) test, test, test, and 6) continue to monitor and update the system after deployment.157 In addition, Google has platforms like TensorFlow that, following the 2018 release of Google's Responsible AI principles, help enable inclusiveness, ethics, and accountability in its products by allowing developers around the world to access its ML models and build responsible AI applications.158

153 “The Partnership on AI.” 154 “FATE.” 155 Green et al., “Responsible Use of Technology: The Microsoft Case Study,” 7. 156 Bowser, “Beyond Bans.” 157 “Responsible AI Practices.” 158 Doshi and Zaldivar, “Responsible AI with TensorFlow.”

Industry is extremely powerful in setting the standard for responsible innovation and use of AI. Big Tech companies' broad reach into society gives them a particular responsibility to set a precedent for appropriate approaches moving forward. Despite efforts being made across local and state governments and industry, there are still gaps in properly regulating and limiting FRT's use.

2.5 Filling Regulatory Gaps

The regulatory activity outlined in this chapter illustrates that FRT is currently subject to fragmented legal frameworks, mostly state-level regulation and some institutional self-regulation. Elizabeth Rowe of the University of Florida Levin College of Law recently summarized recommendations to Congress to unify FRT regulations for industry; she refers to the existing system as the "Wild West" and recommends that FRT regulation adopt trade-secrecy-style protections and that individuals treat their faceprint like their Social Security number.159

Closing the regulatory gaps in the implementation of FRT requires not only uniformity across states but also an approach that anticipates advances in the technology rather than simply banning it outright. In moving forward and creating recommendations for the future of ethical implementation of AI in FRT, there are three key gaps in existing regulatory and safeguarding measures: 1) a need for consensus on definitions, 2) a need for transparency in ML models, and 3) a need for sector-specific policies to drive guidelines and protocols for industry practice.

Some gaps from the technology perspective were touched on in the explanation of the key issues of facial recognition (namely, data cleaning errors and lack of representation in training datasets). There are also significant regulatory gaps inhibiting the ethical use of the technology; in this section I draw heavily on a paper written by Dr. Anne Bowser of the Wilson Center, research conducted by Dr. Mark Nitzberg of the Center for Human-Compatible AI, and recommendations for bridging policy and practice by Dr. Ben Shneiderman.

159 Rowe, "Regulating Facial Recognition Technology in the Private Sector"; "Expert."

First, there is little agreement in research and across industries on term definitions, for example those of 'algorithm,' 'trustworthy AI,' 'fairness,' 'bias,' and 'transparency'. Definitions are useful in making general principles actionable. NIST wrote that "without clear standards defining what algorithmic transparency actually is and how to measure it, it can be prohibitively difficult to objectively evaluate whether an AI system… meets expectations."160 Definitions will help create initial checklists for whether systems meet AI standards and principles. This is something industry leaders and researchers, in particular, are struggling with; in an informal interview, Dr. Mark Nitzberg shared that he was recently called on to help establish how we should properly measure how compliant systems are and to build consensus on definitions in AI ethics, demonstrating a recent and urgent push on this matter.

Second, the value of transparency has already been recognized in US policy but has not been appropriately acted on. There is no clear definition of transparency, which would help create more concrete guidelines for achieving it in AI systems.161 We lack concrete transparency requirements relating specifically to a model's auditability and explainability (the ability to explain the technical processes of an AI system).162 Further, transparency, paired with appropriate education, would allow users insight into the limitations of an AI system, especially when the system has the potential for large impact.

160 Bowser, “Beyond Bans.” 161 Nitzberg, Informal Interview about Concerns with Facial Recognition Technology. 162 Shneiderman, “Bridging the Gap Between Ethics and Practice,” 3.

Lastly, there is a need for sector-specific requirements and policies to drive industry practices. These build on the definitions and principles and can be technical or non-technical, but there is a particular need for sector-specific requirements that either mandate or prohibit certain behaviors. An example of a technical, sector-specific requirement presented by Dr. Bowser of the Wilson Center is that FRT used in police body cams should not display image matches below a certain probability threshold, to avoid seeing a pattern where one might not exist.163 Another technical requirement example is requiring human review in certain cases of high potential impact. A non-technical example could be a regular system audit requirement or mandatory third-party reviews.164
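The following is an illustrative sketch (not any agency's actual system) of the two technical requirements just described: suppressing candidate matches below a probability threshold and routing matches to mandatory human review. The Candidate type, threshold values, and function names are all hypothetical.

# An illustrative sketch of threshold-gated display plus a human-review
# gate. All names and values here are hypothetical.
from dataclasses import dataclass

@dataclass
class Candidate:
    person_id: str
    score: float  # match probability from the FR model, in [0, 1]

MIN_DISPLAY_SCORE = 0.95   # hypothetical sector-specific display floor
HUMAN_REVIEW_SCORE = 0.99  # below this, a human must confirm before action

def displayable_matches(candidates: list[Candidate]) -> list[Candidate]:
    """Never display matches below the floor, so officers are not shown
    a 'pattern' the model cannot actually support."""
    return [c for c in candidates if c.score >= MIN_DISPLAY_SCORE]

def requires_human_review(candidate: Candidate) -> bool:
    """High-potential-impact uses require a human in the loop."""
    return candidate.score < HUMAN_REVIEW_SCORE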

The three gaps identified above are a start to creating more equitable futures in the use and impact of FRT. This is by no means an exhaustive list of the existing gaps in regulating FRT, but it provides groundwork for the recommendations made in the next chapter.

163 Bowser, “Beyond Bans.” 164 Bowser.

CHAPTER 3

PAVING A WAY FORWARD TO ACHIEVE ETHICAL USE OF FRT TECHNOLOGY

3.1 Current Gaps in Regulation and Policy

Following the January 6th, 2021 Capitol building attack, the investigation actively used FRT to identify participants. This followed several high-profile uses of FRT by law enforcement to monitor public gatherings, such as the BLM protests in the summer of 2020. These recent law enforcement cases have created significant debate both for and against the use of FRT. Through these highly reported events, however, it is clear that there is a lack of both public consensus on the technology and a holistic legal framework regulating its use in the United States. Some other nations, namely in the EU, are moving ahead in creating federal-level regulatory measures to ensure ethical use of FRT and other AI-based technologies. With increased use and application of FRT, the gaps in regulatory measures are becoming more problematic. Hence, there is an increasing push to both invest in and implement resources to create cohesive national policy and guidelines for AI technologies, helping ensure equitable and legal use nationwide. The following are three key recommendations this thesis seeks to advance in pursuit of ethical and equitable FRT implementation. These recommendations serve to address the black box of algorithmic decision-making and to create consensus on how such systems should be appropriately used.

3.2 Bridging Regulatory Gaps

3.2.1 Education for Users

Mainstream media and social media platforms, including Twitter posts, have helped shed light on the issues and implications of unregulated development and the increasing use of FRT. However, this has also caused broad misconceptions and fear around AI and the current state of its use. Dr. Mark Nitzberg shared his ideas on how to bridge the gaps between the technology and its users, and one key proposition was a national plan for digital education for all.165

Public AI labs and workshops can help break down the mystery around AI, explaining that at its core it is a statistical methodology measuring correlation between inputs and outputs. Actively educating the public about AI methodologies can help debunk the science-fiction depiction of AI and its anthropomorphized mystification.166 As Dr. Nitzberg explains, AI is a statistical methodology that has been around far longer than its recent popularization.167 Pop culture has created a misleading picture of a humanized form of AI that does not exist. This has created both a misunderstanding of and a fear associated with AI that detract from the real issues at hand.

User education and information privacy are closely related to user autonomy; information privacy is vital to ensuring people's freedom to think and communicate.168 Yet increasing reliance on and interaction with algorithmic systems has reduced people's ability to control who has access to their personal information and what is done with it. It is difficult to strike a balance between a person's own decision-making ability and that of an algorithm when the AI systems are not understood by the people using them and there is a lack of transparency in the decision-making process. An interesting way to mitigate this and decrease distrust is presented by Whitman et al.: mistrust and lack of decision transparency are mitigated by implementing participatory design as a way to promote the value of the end user and protect autonomy in the ML modeling process. This also brings embodied experience into the design and therefore keeps users informed about the process by which the system is developed.169 Rahwan's conceptual framework, 'Society-in-the-Loop', is a version of this, in which stakeholders are enabled to help design algorithmic systems before they are used and to rectify and reverse decisions made by algorithmic systems that negatively affect the underpinnings of protected social activities.170 Participatory design also has the ability to better educate the users of these systems on how they reach their outcomes, which serves to expand the stakeholder groups keeping these systems accountable. User education and stakeholder participation in algorithmic design processes are a start toward achieving the transparency and accountability needed to foster ethical FRT development and deployment, but more is required to ensure a cohesive plan for all AI developers across the board.

165 Nitzberg, Informal Interview about Concerns with Facial Recognition Technology. 166 Kantayya, Coded Bias; Nitzberg, Informal Interview about Concerns with Facial Recognition Technology. 167 Nitzberg, Informal Interview about Concerns with Facial Recognition Technology. 168 Felzmann et al., "Transparency You Can Trust."

3.2.2 Comprehensive Guidelines that Can Lead to Federal Regulation

Dr. Ben Shneiderman provides a holistic 15-step recommendation plan to bridge the gap between the ethical principles of human-centered AI systems and effective governance. These 15 recommendations cover three governance categories: team, organization, and industry, and offer a comprehensive first step in how management systems can unite the goals of ethics and effective practice.171 Here I advance Shneiderman's recommendations for guidelines to bridge the gap between ethics and practice. Shneiderman's approach is a fundamental shift from machine-centered thinking to human-centered thinking and includes: 1) the requirement of in-depth testing and training of datasets to verify that the data is current and fair; 2) a push for organizational responsibility for safety (involving leadership commitment, extensive reporting of failures and misuses to avoid repeating negative behavior, and internal review boards for problems and future plans); 3) a requirement to align with industry standard practices (referring to the Robotics Industry Association and the International Standards Organization); 4) trustworthy certification by third-party oversight; and 5) government intervention and regulation (Europe is ahead in creating a regulatory approach here, stressing 7 key principles: human agency and oversight, technical robustness and safety, privacy and data governance, transparency, diversity and fairness, environmental and societal well-being, and accountability).172 In doing so, Shneiderman pushes for effective governance that limits the dangers of AI systems to individuals, organizations, and society, while enabling use of the technology to create benefits for individuals and society.

169 Tsamados et al., "The Ethics of Algorithms." 170 Tsamados et al., 9.

Association and International Standards Organization), 4) trustworthy certification by third party oversight, and 5) government intervention and regulation (Europe is ahead on creating a regulatory approach to this where they stress 7 key principles: human agency and oversight, technical robustness and safety, privacy and data governance, transparency, diversity and fairness, environmental and societal well-being, and accountability).172 In doing so, Shneiderman pushes effective governance, limiting the dangers of AI systems to individuals, organizations, and society, while enabling use of the technology to create benefits for individuals and society.

171 Shneiderman, "Bridging the Gap Between Ethics and Practice." 172 Shneiderman. 173 Crawford and Paglen, "Excavating AI."

As explained in the discussion of FRT's key issues, image labeling is a key source of the technology's flawed outcomes: "A woman sleeps in an airplane seat, her right arm protectively curled around her pregnant stomach. The image is labeled 'slob.' A photoshopped picture shows a smiling Barack Obama wearing a Nazi uniform, his arm raised and holding a Nazi flag. It is labeled 'Bolshevik.'"173 These results are nonsensical and void of a human's common sense, yet they project assumptions, worldviews, and politics.174 Crawford and Paglen underscore the fundamental issue with training datasets, which helps shape guidelines to mitigate bias within them:

"The training sets of labeled images that are ubiquitous in contemporary computer vision and AI are built on a foundation of unsubstantiated and unstable epistemological and metaphysical assumptions about the nature of images, labels, categorization, and representation. Furthermore, those epistemological and metaphysical assumptions hark back to historical approaches where people were visually assessed and classified as a tool of oppression and race science. Datasets aren't simply raw materials to feed algorithms but are political interventions. As such, much of the discussion around "bias" in AI systems misses the mark: there is no 'neutral,' 'natural,' or 'apolitical' vantage point that training data can be built upon. There is no easy technical 'fix' by shifting demographics, deleting offensive terms, or seeking equal representation by skin tone."175

In delineating these fundamental issues, Crawford and Paglen push back against continued use of the datasets we already rely on: "And so we need to examine them – because they are already used to examine us – and to have a wider public discussion about their consequences." Hao further emphasizes how we have "lost control of our faces" and how that has disrupted any meaning of consent in the latest use of new deep-learning-based FRT.176

Shneiderman addresses these issues and urges technological systems to return “to serve the collective needs of humanity,” and states that the three governance structures rest in software engineering, organizational design, and external reviews.177 All of his methods to bridge the ethical use gap rest on the core idea of putting human performance and human experience at the center of design thinking and innovation.178 In general, the comprehensive guidelines Shneiderman presents draw on bringing diverse groups of researchers and practitioners together in tackling the core

174 Crawford and Paglen. 175 Crawford and Paglen. 176 Hao, “This Is How We Lost Control of Our Faces.” 177 Shneiderman, “Bridging the Gap Between Ethics and Practice,” 2–3. 178 Shneiderman, 25.

issues of problematic uses of AI.179 Further, he calls for developers to be the first to reexamine the datasets they use to train the ML models they design.

Altogether, accountability can only be reached when all are involved and held to the same standards. Cohesive guidelines of this kind will ultimately be necessary to build effective federal regulation that creates binding laws to which all stakeholders can be held.

3.2.3 Push Towards Explainable AI

A prominent technical solution for implementing transparency in practice involves pushing towards explainable AI. First, to understand how explainable AI lends itself to transparency, it is useful to understand the open AI strategy. Some innovative research labs (OpenAI, Google AI, and the Allen Institute for AI, to name a few) are pushing the boundaries of what is possible with AI, all while promoting the importance of ethical and safe development and implementation.180

Movements toward open data and open AI design, driven by national governments, industry members, and prominent research groups, have been met with fierce resistance.181 The resistance is twofold: on one hand, a significant amount of public resources is required to achieve open data; on the other, there is no guaranteed benefit from making data and AI open. However, the potential benefit of open-sourcing data is important to the continued effort to ensure ethical AI implementation, and particularly to create digital transparency.182 The Open Knowledge Foundation defines 'open data' as "data that can be freely used, re-used and redistributed by anyone - subject only, at most, to the requirement to attribute and share alike."183 Data, in this case,

179 Shneiderman, "Bridging the Gap Between Ethics and Practice."
180 "OpenAI"; "Google AI"; "Allen Institute for AI."
181 Mayernik, "Open Data: Accountability and Transparency."
182 "Open Data Handbook."
183 "Open Data Handbook."

refers to non-personal, governmental or public data that does not breach national security or privacy restrictions. This kind of data is particularly valuable because of its volume and centrality, making it especially useful in training AI models.184

The Open State Foundation outlined ten key challenges to using open data in practice: 1) national governments struggle to properly implement methods that allow access to and use of open data, 2) the dilemma of access to open data that has been structured by commercial entities, 3) a lack of communication between government regulators and developers, 4) corporate social responsibility that is not robust enough to ensure cooperative use, 5) a structured approach to openness that is limited to Big Tech companies, 6) missed opportunities with certain open-access approaches, 7) open data being considered harmful, 8) too little research on the impact of open data, 9) technological barriers for governments, and 10) local data that is not always scalable.185

These challenges, alongside multinational resistance to the 'openness revolution' out of fear of profit loss and rejection of transparency, present barriers to a more unified approach to tapping the value of open data as a resource.186

Some of the major efforts in creating open data resources and creating transparency in practice include the World Bank's open data initiative: an Open Data Toolkit designed to help government branches and private entities better understand the basic principles of open data and implement it in their programs while avoiding common mistakes.187 Similarly, Data.gov is an initiative by the US federal government to advance the use of open data in an effort to make government more open and more accountable.188 Another government example of where open data

184 "Open Data Handbook."
185 "10 Challenges for Open Data."
186 "The Openness Revolution."
187 "Starting an Open Data Initiative | Data."
188 "Open Government."

can be helpful economically and socially is in tracking the use of taxpayer money. Projects that did this include the Finnish 'tax tree', the British 'Where does my money go', and a Canadian project that saved $3.2 billion by identifying charity tax fraud through open data.189
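As a concrete illustration of how such open data can be accessed programmatically, the short sketch below queries the Data.gov catalog through the standard CKAN search API it exposes; the search term and result handling are arbitrary examples, not part of any cited initiative:

import requests

# Search the Data.gov catalog (a CKAN instance) for datasets matching a term.
resp = requests.get(
    "https://catalog.data.gov/api/3/action/package_search",
    params={"q": "facial recognition", "rows": 5},
    timeout=30,
)
resp.raise_for_status()
for dataset in resp.json()["result"]["results"]:
    print(dataset["title"])  # titles of the first five matching datasets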

Open algorithms provide a technical route to transparency in algorithmic decision-making. Like explainable AI, open algorithms are a means of creating public trust and increased benefit by sharing and exchanging AI methodologies. Explainable AI is an extremely promising way of achieving this kind of public transparency and of starting to de-black-box the algorithmic processes of these systems. NIST presented four principles of explainable

AI in an effort to provide insight into the challenges of designing explainable AI systems: 1) explanation: systems deliver accompanying evidence or reason(s) for all outputs; 2) meaningful: systems provide explanations that are understandable to individual users; 3) explanation accuracy: the explanation correctly reflects the system's process for generating the output; and 4) knowledge limits: the system only operates under conditions for which it was designed or when it reaches sufficient confidence in its outputs.190 While explainable AI is a promising technical approach to mitigating bias in FRT, it is not easy to implement. There is no "one size fits all" recipe for building explainable AI into every system, but there are categories of what explainable AI can be.191
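To make these four principles concrete, consider the minimal sketch below, which wraps a hypothetical face-match score so that every output carries its evidence (explanation), is phrased for a non-expert (meaningful), reports the actual inputs used to decide (explanation accuracy), and declines to answer below an operating confidence threshold (knowledge limits). All names and thresholds here are illustrative assumptions, not part of the NIST report:

def explain_match(score, matched_features, confidence, min_confidence=0.90):
    # Knowledge limits: refuse to decide outside the designed operating range.
    if confidence < min_confidence:
        return {"decision": "no decision",
                "reason": "confidence {:.2f} is below the {:.2f} operating threshold"
                          .format(confidence, min_confidence)}
    return {
        "decision": "match" if score >= 0.8 else "non-match",
        # Explanation / explanation accuracy: report the evidence actually used.
        "evidence": {"similarity_score": score,
                     "matched_features": matched_features},
        # Meaningful: a plain-language reason aimed at the individual user.
        "reason": "similarity {:.2f} compared against a 0.8 match threshold".format(score),
    }

print(explain_match(0.87, ["eye spacing", "jawline"], confidence=0.95))
print(explain_match(0.87, ["eye spacing"], confidence=0.42))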

Explainable AI not only helps to disrupt the black box that surrounds algorithmic decision-making; it is also acknowledged as a crucial feature of the practical deployment of AI models. AI has become a pervasive tool in almost all sectors of business and society, operating with little to no human intervention. This pervasiveness and impact potential mean there is a strong

189 "Open Data Handbook."
190 Phillips et al., "Four Principles of Explainable Artificial Intelligence," 2.
191 Phillips et al., "Four Principles of Explainable Artificial Intelligence."

need to better understand how decisions are made by AI models. Currently, most AI algorithms use advanced machine learning techniques such as deep learning networks, which are black-box models producing outputs with little or no decision rationale, appearing to humans as both opaque and arbitrary, the opposite of the transparency that ethical AI advocates.192 Explanations are crucial to support the output of models, especially when that output can be extremely impactful. When designing ML models for use in FRT, interpretability should be a key factor in order to broaden applicability and transparency for users and providers alike.
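One hedged sketch of what interpretability could look like in an FRT pipeline is a perturbation-based occlusion map: mask one image patch at a time and record how much a match score drops, so the regions the model leans on most become visible. The scoring function below is a random stand-in for a real face-matching model, used only to keep the example self-contained:

import numpy as np

def match_score(image):
    # Stand-in for a real FRT similarity model (a fixed random linear probe).
    weights = np.random.default_rng(0).random(image.shape)
    return float((image * weights).sum() / weights.sum())

def occlusion_map(image, patch=8):
    base = match_score(image)
    heat = np.zeros_like(image)
    h, w = image.shape
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            masked = image.copy()
            masked[y:y + patch, x:x + patch] = 0.0  # occlude one patch
            # Large score drops mean the model relied heavily on this region.
            heat[y:y + patch, x:x + patch] = base - match_score(masked)
    return heat

face = np.random.default_rng(1).random((32, 32))  # dummy "face" image
importance = occlusion_map(face)
print("most influential patch starts at:",
      np.unravel_index(importance.argmax(), importance.shape))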

3.3 Future Research

Part of the key challenge in regulating AI and FRT is that innovation is a continuous process, while regulation follows at a stepwise, reactive pace. Since FRT is a relatively immature technology and current interest is driving a high rate of innovation, extra attention must be paid to frequently updating emerging regulation. This is important so that regulation stays abreast of advances in FRT, maintaining a clear view of ethical and human-centered use and thus safeguarding society against uses that violate civil rights or perpetuate high-impact errors. The future of FRT goes beyond banning and looks to harmonize the relationship between implementation of the technology and the impact of its algorithmic decision-making outputs. A balance needs to be found in which the technology can enhance the efficiency and security of individual lives without risking privacy or freedom of expression, or perpetuating bias that disproportionately harms marginalized groups. The risk of harm from unethical FRT creates an urgent platform on which to press for further debate and research to correct the inaccuracies that exist today.

192 Arrieta et al., “Explainable Artificial Intelligence (XAI).”

Further research is needed to effectively and fully eliminate bias from these machine learning systems that are supposed to better livelihoods.

The key areas of research I highlight for future focus in expanding on this work are as follows:

1) Historical case studies of new technology implementation and the effectiveness of legal and regulatory frameworks in ensuring effective use of the technology while preserving civil rights. Relevant case studies could include the legal frameworks used to regulate surveillance of electronic communication, or consent regulations for using images of individuals for commercial activity.

2) Understanding unconscious bias and its implications for FRT algorithm design, dataset curation, and non-fair-use scenarios where unconscious bias would create inequitable outcomes or perpetuate bias. In addition, focus should continue to expand on the promising area of explainable AI in order to elaborate transparency criteria in ML modeling.

3) Understanding the key drivers of public perception so that regulatory frameworks, public education, and the proper and improper uses of the technology are grounded in well-understood public attitudes and concerns.

I highlight only a few areas that could be expanded on; in reality, the list is far more expansive. There is immense potential for AI to continue to bring great benefit to society without the negative consequences that exist today. Resources and attention need to continue to be invested by government, industry, and individuals alike in order to rectify the injustices of the technology and ensure that it continues to have positive use cases.

CONCLUSION
ETHICAL IMPLEMENTATION OF FACE RECOGNITION TECHNOLOGY

Face recognition systems are quickly becoming an everyday part of people's lives. Modern machine learning algorithms, combined with powerful computers and broadly available face image datasets, mean that FRT is being implemented in a multitude of applications with ever-increasing scale and ubiquity. The increasing use and power of FRT offer society benefits in the form of increased economic productivity, improved personal security, and tools for improved community policing and national security. However, with the increasing use and scale of FRT in our lives come risks to our individual civil liberties and personal freedom, as FRT can be used in ways that invade privacy, subject individuals to scrutiny and surveillance in the course of their day-to-day lives, and even risk false accusation based on misidentification. Striking a balance between using FRT to harvest societal benefits and limiting misuse or overuse of the technology, thus preserving our civil and human rights, is a key challenge.

The uses of FRT extend to consumers, industry, and government, offering a multitude of opportunities to harvest the benefits the technology can bring. However, poorly trained, overextended, or unregulated use of FRT can create real harm. The inherent accuracy risk of FRT stems from inadequately representative training datasets that do not account for marginalized groups effectively and can introduce bias in use. The extent to which bias in FRT persists is, in short, the result of narrowly trained algorithms being extrapolated beyond their practical utility and of datasets with persistent errors or biased image associations. To combat the risks of bias, identification error, and application misuse, a cohesive bias mitigation plan and a use-regulation framework are needed to protect against the risks the technology brings. While regulation and dataset quality issues are fundamental to ethical use, it is clear that public ignorance of the scale and impact of the FRT misuse problem means there may be a lack of urgency in industry and among policymakers in implementing the policies and guidelines needed to put the technology on a sustainable path with basic public acceptance. The speed with which facial recognition systems are being implemented and the scalable nature of the technology mean that the risks of unmitigated bias and unethical use are growing rapidly, creating the potential for societal backlash if safeguards are not put into place. Societal backlash could limit the positive benefits the technology can offer, making this an important issue at this time.

Over the past decade we have witnessed incredible technological advancements in facial recognition and AI at large. Deep learning modeling techniques have allowed innovation to advance to the next level of accuracy and applicability. This increase in scalability due to technological advancements has posed specific ethical risks to individuals and communities more broadly. AI has not suddenly become powerful; rather, because we have become a deeply interconnected society with individual supercomputers on hand that act as intermediaries keeping us all intertwined, we have become dependent on it and sensitive to its consequences. AI is something we add into the equation to make things more efficient and give them the appearance of human behavior.

We ask Siri, "what is the weather?" and it responds as if we were asking another human. In this way we have anthropomorphized AI's capabilities as being fully human, which is entirely misleading. AI will never equate to a human's common sensibilities or physical understanding of the world. In spite of AI's advancement, its inherent lack of human sensibility, combined with the scale at which it is being implemented, means we risk overwhelming individual rights. Herein lies the importance of understanding the urgency of acting on duty of care: "the duty of care is proportional to a system's reach."193 This claim directs recommendations for a more ethical future towards government and

193 Nitzberg, Informal Interview about Concerns with Facial Recognition Technology.

the Big Tech industry, which uniquely harbor the manpower and breadth to reach broadly into society, to be responsible for the future of ethical implementation of technology.

This thesis bridges a baseline of public understanding and a national plan that incorporates education and regulation in an effort to correct the potential negative consequences of facial recognition and create a more ethical future for its appropriate and beneficial use and implementation in public and personal lives. Datasets are at the core of AI accuracy and function; therefore, increased representation and accurate data labeling are needed in the future of AI research and innovation. This is done by having a diverse range of representation in training datasets that appropriately correlates with the technology's intended and practical use. The key bias mitigation recommendations advanced in this thesis are 1) education for users and participation by stakeholders, 2) cohesive guidelines that can eventually lead to federal regulation, and 3) a push towards explainable AI. These are achievable efforts that will guide innovation towards ensuring an ethical and sustainable future for humanity's relationship with technology. Most promising of these recommendations, based on this thesis's findings, is the push towards explainable AI. It relates closely to existing efforts towards open AI and transparency in algorithmic decision-making.

Moving forward, many research areas outside the scope of this thesis continue to show promise. Future research could take many directions, but here I suggest areas that should be given particular attention: 1) developing case studies of the effectiveness of legal and regulatory frameworks used to regulate surveillance technology, 2) understanding unconscious bias and its implications for FRT algorithm design, dataset curation, and non-fair-use scenarios where unconscious bias would create inequitable outcomes or perpetuate bias, as well as expanding on the positive impacts of explainable AI in order to mitigate this, and 3) understanding the key drivers of public perception so that regulatory frameworks, public education, and the proper and improper uses of the technology are based on well-understood public attitudes and concerns.

AI is of great benefit to society; when implemented and developed ethically and for appropriate uses, it can enhance livelihoods and deliver unmatched social and economic benefits.

This thesis aims not to deter the use of AI but rather to ensure its sustainable and equitable use into the future so that its benefits can be enjoyed equally by all. Now is the time to implement change and act with urgency in creating ethical requirements for technology before we become overly reliant on the broken systems that exist today.

APPENDIX A: PYTHON CODE USED TO FETCH TWITTER API DATA194

import requests
import os
import json
import time

# Set the bearer token in your terminal before running:
# export BEARER_TOKEN='<your token>'
# (The hard-coded token printed in the original listing is omitted here;
# reading it from the environment keeps the credential out of the source.)
bearer_token = os.environ["BEARER_TOKEN"]
search_url = "https://api.twitter.com/2/tweets/search/all"

# Optional params: start_time,end_time,since_id,until_id,max_results,next_token,
# expansions,tweet.fields,media.fields,poll.fields,place.fields,user.fields
# query_params = {'query': '(from:twitterdev -is:retweet) OR #twitterdev',
#                 'tweet.fields': 'author_id'}

def create_headers(bearer_token):
    headers = {"Authorization": "Bearer {}".format(bearer_token)}
    return headers

def convert(numconvert):
    # Zero-pad a day or month number to two digits for the ISO timestamp.
    if numconvert < 10:
        numreturn = '0' + str(numconvert)
    else:
        numreturn = str(numconvert)
    return numreturn

def connect_to_endpoint(url, headers, next_token, is_initial, day1, day2, month1, month2):
    day1s = convert(day1)
    day2s = convert(day2)
    month1s = convert(month1)
    month2s = convert(month2)
    print(day1s + ', ' + day2s + ', ' + month1s + ', ' + month2s)
    if is_initial == 1:
        params = {'query': 'facial recognition -is:retweet',
                  'start_time': '2020-' + month1s + '-' + day1s + 'T00:00:00Z',
                  'end_time': '2020-' + month2s + '-' + day2s + 'T00:00:00Z',
                  'max_results': '500'}
    else:
        params = {'query': 'facial recognition -is:retweet',
                  'start_time': '2020-' + month1s + '-' + day1s + 'T00:00:00Z',
                  'end_time': '2020-' + month2s + '-' + day2s + 'T00:00:00Z',
                  'max_results': '500',
                  'next_token': next_token}
    response = requests.request("GET", url, headers=headers, params=params)
    if response.status_code != 200:
        raise Exception(response.status_code, response.text)
    return response.json()

def main():
    count = 0
    next_token = ""
    initial = 0
    there_is_token = True
    day1 = 1
    day2 = day1 + 7
    month1 = 1
    month2 = 1
    week = 1
    while week < 53:
        print('week: ' + str(week))
        # Page through all results for the current seven-day window.
        while there_is_token:
            headers = create_headers(bearer_token)
            if initial == 0:
                json_response = connect_to_endpoint(search_url, headers, next_token, 1,
                                                    day1, day2, month1, month2)
            else:
                json_response = connect_to_endpoint(search_url, headers, next_token, 0,
                                                    day1, day2, month1, month2)
            # Uncomment to print the raw tweets:
            # print(json.dumps(json_response, indent=4, sort_keys=True))
            count = count + json_response["meta"]["result_count"]
            time.sleep(3)  # stay under the API rate limit
            if "next_token" in json_response["meta"]:
                initial = 1
                next_token = json_response["meta"]["next_token"]
                print(json_response["meta"]["result_count"])
            else:
                print(json_response["meta"]["result_count"])
                there_is_token = False
        # Advance the seven-day window, handling 31-, 30-, and 28-day months.
        day1 = day2
        month1 = month2
        if day2 + 7 > 31:
            month2 = month2 + 1
            day2 = 7 - (31 - day2)
        elif day2 + 7 > 30:
            if month2 in (4, 6, 9, 11):
                month2 = month2 + 1
                day2 = 7 - (30 - day2)
            else:
                day2 = day2 + 7
        elif day2 + 7 > 28:
            if month2 == 2:
                month2 = month2 + 1
                day2 = 7 - (28 - day2)
            else:
                day2 = day2 + 7
        else:
            day2 = day2 + 7
        there_is_token = True
        print(count)  # weekly total
        count = 0
        initial = 0
        week = week + 1

if __name__ == "__main__":
    main()

194 Katzman, Twitter API Python Coding Assistance; Pavloski, "Accessing the Twitter API with Python"; Piper, "Twitterdev/Twitter-API-v2-Sample-Code"; "Paginate | Search Tweets."

APPENDIX B: PYTHON CODE USING TWITTER API DATA TO FETCH TWEETS (2020)195

import requests
import os
import json
import time
import csv

# Set the bearer token in your terminal before running:
# export BEARER_TOKEN='<your token>'
# (The hard-coded token printed in the original listing is omitted here.)
bearer_token = os.environ["BEARER_TOKEN"]
search_url = "https://api.twitter.com/2/tweets/search/all"

# Optional params: start_time,end_time,since_id,until_id,max_results,next_token,
# expansions,tweet.fields,media.fields,poll.fields,place.fields,user.fields

def create_headers(bearer_token):
    headers = {"Authorization": "Bearer {}".format(bearer_token)}
    return headers

def convert(numconvert):
    # Zero-pad a day or month number to two digits for the ISO timestamp.
    if numconvert < 10:
        numreturn = '0' + str(numconvert)
    else:
        numreturn = str(numconvert)
    return numreturn

def connect_to_endpoint(url, headers, next_token, is_initial, day1, day2, month1, month2):
    day1s = convert(day1)
    day2s = convert(day2)
    month1s = convert(month1)
    month2s = convert(month2)
    print(day1s + ', ' + day2s + ', ' + month1s + ', ' + month2s)
    if is_initial == 1:
        params = {'query': 'facial recognition -is:retweet',
                  'start_time': '2020-' + month1s + '-' + day1s + 'T00:00:00Z',
                  'end_time': '2020-' + month2s + '-' + day2s + 'T00:00:00Z',
                  'max_results': '500'}
    else:
        params = {'query': 'facial recognition -is:retweet',
                  'start_time': '2020-' + month1s + '-' + day1s + 'T00:00:00Z',
                  'end_time': '2020-' + month2s + '-' + day2s + 'T00:00:00Z',
                  'max_results': '500',
                  'next_token': next_token}
    response = requests.request("GET", url, headers=headers, params=params)
    if response.status_code != 200:
        raise Exception(response.status_code, response.text)
    return response.json()

def main():
    count = 0
    next_token = ""
    initial = 0
    there_is_token = True
    day1 = 1
    day2 = day1 + 7
    month1 = 1
    month2 = 1
    week = 1
    # Create the output file and write the header row.
    with open('Twitter_Text.csv', 'w', newline='') as file:
        writer = csv.writer(file)
        writer.writerow(["Week", "Tweet ID", "Tweet Text"])
    while week < 53:
        print('week: ' + str(week))
        while there_is_token:
            headers = create_headers(bearer_token)
            if initial == 0:
                json_response = connect_to_endpoint(search_url, headers, next_token, 1,
                                                    day1, day2, month1, month2)
            else:
                json_response = connect_to_endpoint(search_url, headers, next_token, 0,
                                                    day1, day2, month1, month2)
            print(json.dumps(json_response, indent=4, sort_keys=True))
            # Append each tweet's week number, ID, and text to the CSV.
            # ('data' may be absent when a window returns no tweets.)
            with open('Twitter_Text.csv', 'a', newline='') as file:
                writer = csv.writer(file)
                for response in json_response.get("data", []):
                    writer.writerow([week, response["id"], response["text"]])
            count = count + json_response["meta"]["result_count"]
            time.sleep(3)
            if "next_token" in json_response["meta"]:
                initial = 1
                next_token = json_response["meta"]["next_token"]
                print(json_response["meta"]["result_count"])
            else:
                print(json_response["meta"]["result_count"])
                there_is_token = False
        # Advance the seven-day window, handling 31-, 30-, and 28-day months.
        day1 = day2
        month1 = month2
        if day2 + 7 > 31:
            month2 = month2 + 1
            day2 = 7 - (31 - day2)
        elif day2 + 7 > 30:
            if month2 in (4, 6, 9, 11):
                month2 = month2 + 1
                day2 = 7 - (30 - day2)
            else:
                day2 = day2 + 7
        elif day2 + 7 > 28:
            if month2 == 2:
                month2 = month2 + 1
                day2 = 7 - (28 - day2)
            else:
                day2 = day2 + 7
        else:
            day2 = day2 + 7
        there_is_token = True
        print(count)
        count = 0
        initial = 0
        week = week + 1

if __name__ == "__main__":
    main()

195 Katzman, Twitter API Python Coding Assistance; Pavloski, "Accessing the Twitter API with Python"; Piper, "Twitterdev/Twitter-API-v2-Sample-Code"; "Paginate | Search Tweets."

APPENDIX C: PYTHON CODE USING VADER SENTIMENT ANALYSIS TO CONDUCT SENTIMENT ANALYSIS ON TWEETS (2020)196

import csv
# If using NLTK's copy of VADER instead, download the lexicon once:
# import nltk
# nltk.download("vader_lexicon")
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

def main():
    sid = SentimentIntensityAnalyzer()
    with open('Twitter_Text_1.csv') as file:
        csv_reader = csv.reader(file, delimiter=',')
        line_count = 0
        week = 1
        avg = 0
        total = 0
        count = 1
        for row in csv_reader:
            if line_count == 0:
                line_count += 1  # skip the header row
            else:
                if int(row[0]) != int(week):
                    # A new week has started: report the previous week's average.
                    # (Note: 'count' starts at 1, so the divisor is one more
                    # than the number of tweets reported.)
                    avg = total / count
                    print("average of week " + str(week) + " is: " + str(avg)
                          + " out of " + str(count - 1) + " tweets")
                    count = 1
                    avg = 0
                    total = 0
                    week = row[0]
                polarity = sid.polarity_scores(row[2])
                compound = polarity["compound"]
                # Uncomment to print the polarity of each tweet:
                # print("polarity: " + str(compound) + " of: " + row[2])
                total = total + compound
                line_count += 1
                count = count + 1

if __name__ == "__main__":
    main()

196 Katzman, Twitter API Python Coding Assistance; Terry-Jack, “NLP: Pre-Trained Sentiment Analysis | by Mohammed Terry-Jack”; Fincher, “Reading and Writing CSV Files in Python.”

APPENDIX D: TOTAL NUMBER OF TWEETS PER WEEK INCLUDING THE TERM "FACIAL RECOGNITION" (2019)

[Raw console output for 2019: for each of weeks 1-52, the script's per-request result counts and the weekly total of tweets containing "facial recognition." The original multi-column page layout was scrambled in extraction, so the listing is not reproduced here.]

APPENDIX E: TOTAL NUMBER OF TWEETS PER WEEK INCLUDING THE TERM "FACIAL RECOGNITION" (2020)

[Raw console output for 2020: for each of weeks 1-52, the script's per-request result counts and the weekly total of tweets containing "facial recognition." The original multi-column page layout was scrambled in extraction, so the listing is not reproduced here.]

APPENDIX F: TOTAL NUMBER OF TWEETS PER WEEK INCLUDING THE TERM “FACIAL RECOGNITION” (2021)

[Raw data: per-day tweet counts and weekly totals for weeks 1–14 of 2021.]
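For reference, weekly totals like those tabulated above can be recomputed from a raw tweet export with a few lines of Python. The following is a minimal sketch, assuming the collected tweets were written to a CSV file with one row per tweet and a created_at timestamp column; the file name and column name here are illustrative assumptions, not the actual artifacts produced for this study.

    import csv
    from collections import Counter
    from datetime import datetime

    weekly_counts = Counter()

    # Hypothetical export of the collected tweets; one row per tweet.
    with open("facial_recognition_tweets_2021.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Twitter API v2 returns ISO-8601 timestamps, e.g. "2021-01-04T12:30:00.000Z".
            created = datetime.strptime(row["created_at"], "%Y-%m-%dT%H:%M:%S.%fZ")
            # Tally by ISO week number (1-53).
            weekly_counts[created.isocalendar()[1]] += 1

    for week in sorted(weekly_counts):
        print(f"week: {week}  total tweets: {weekly_counts[week]}")

This sketch bins tweets by ISO week number for simplicity; the weeks in the tables above run between collection dates, so exact week boundaries may differ slightly.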

BIBLIOGRAPHY

Open State Foundation. "10 Challenges for Open Data," August 6, 2015. https://openstate.eu/en/2015/08/english-10-challenges-for-open-data/.

"Allen Institute for AI." Accessed April 23, 2021. https://allenai.org/.

Allyn, Bobby. "Amazon Halts Police Use Of Its Facial Recognition Technology." NPR.org, June 10, 2020. https://www.npr.org/2020/06/10/874418013/amazon-halts-police-use-of-its-facial-recognition-technology.

Arrieta, Alejandro Barredo, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador García, et al. "Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI." arXiv.org, arXiv:1910.10045, December 26, 2019, 67.

Fight for the Future. "Ban Facial Recognition." Accessed March 30, 2021. https://www.banfacialrecognition.com/.

Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" In 2021 ACM Conference on Fairness, Accountability, and Transparency, 14. Virtual Event, Canada: Association for Computing Machinery, 2021. https://doi.org/10.1145/3442188.3445922.

Beri, Aditya. "Sentimental Analysis Using VADER: Interpretation and Classification of Emotions." Medium, May 27, 2020. https://towardsdatascience.com/sentimental-analysis-using-vader-a3415fef7664.

Bowser, Anne. "Beyond Bans: Policy Options for Facial Recognition and the Need for a Grand Strategy on AI." Wilson Center, Science and Technology Innovation Program (blog), September 2020. https://www.wilsoncenter.org/publication/beyond-bans-policy-options-facial-recognition-and-need-grand-strategy-ai.

Bowyer, Kevin W. "Face Recognition Technology: Security versus Privacy." IEEE Technology and Society Magazine 23 (2004): 9–20.

Broussard, Meredith. Artificial Unintelligence: How Computers Misunderstand the World. Cambridge, MA: MIT Press, 2018.

Browne, Ryan. "Tech Giants Want Rules on Facial Recognition, but Critics Warn That Won't Be Enough." CNBC, August 30, 2019. https://www.cnbc.com/2019/08/30/facial-recognition-tech-firms-want-regulation-but-critics-want-a-ban.html.

Buolamwini, Joy, and Timnit Gebru. "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification." Proceedings of Machine Learning Research 81 (2018): 1–15. http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf.

Crawford, Kate, and Trevor Paglen. "Excavating AI: The Politics in Machine Learning Training Sets." Excavating AI. Accessed April 24, 2021. https://excavating.ai.

"Digital Signage Privacy Standards." Digital Signage Federation, February 2011. https://www.digitalsignagefederation.org/wp-content/uploads/2017/02/DSF-Digital-Signage-Privacy-Standards-02-2011-3.pdf.

U.S. Department of Defense. "DOD Adopts Ethical Principles for Artificial Intelligence," February 24, 2020. https://www.defense.gov/Newsroom/Releases/Release/Article/2091996/dod-adopts-ethical-principles-for-artificial-intelligence/.

Doshi, Tulsee, and Andrew Zaldivar. "Responsible AI with TensorFlow." TensorFlow Blog (blog), June 29, 2020. https://blog.tensorflow.org/2020/06/responsible-ai-with-tensorflow.html.

Dundas, Deborah. "Zadie Smith on Fighting the Algorithm: 'If You Are under 30, and You Are Able to Think for Yourself Right Now, God Bless You.'" Toronto Star (blog), November 8, 2019. https://www.thestar.com/entertainment/books/2019/11/08/zadie-smith-on-fighting-the-algorithm-if-you-are-under-30-and-you-are-able-to-think-for-yourself-right-now-god-bless-you.html.

Eck, Douglas. Informal Interview about Human-Centered Design from Google Computer Science Perspective. Video, March 18, 2021.

Eisenstat, Yael. "The Real Reason Tech Struggles With Algorithmic Bias." WIRED, February 12, 2019. https://www.wired.com/story/the-real-reason-tech-struggles-with-algorithmic-bias/.

Electronic Privacy Information Center. "EPIC v. CBP (Biometric Entry-Exit Alternative Screening Procedures)." Accessed April 22, 2021. https://www.epic.org/foia/dhs/cbp/alt-screening-procedures/.

"Ethics Guidelines for Trustworthy AI: High-Level Expert Group on Artificial Intelligence." European Commission, April 8, 2019. https://ec.europa.eu/futurium/en/ai-alliance-consultation.

"Expert: We Need New Laws For The Facial Recognition 'Wild West.'" Government Executive, December 11, 2020, sec. Technology. https://www.govexec.com/technology/2020/12/expert-we-need-new-laws-facial-recognition-wild-west/170629/.

Google Trends. "Explore What the World Is Searching." Accessed April 18, 2021. https://trends.google.com/trends/?geo=US.

"Facial Recognition Technology: Commercial Uses, Privacy Issues, and Applicable Federal Law." United States Government Accountability Office, July 2015. https://www.gao.gov/assets/gao-15-621.pdf.

NIST. "Facial Recognition Technology (FRT)," February 6, 2020. https://www.nist.gov/speech-testimony/facial-recognition-technology-frt-0.

"Facial Recognition Technology: Privacy and Accuracy Issues Related to Commercial Uses." United States Government Accountability Office, July 2020. https://www.gao.gov/assets/gao-20-522.pdf.

Fadulu, Lola. "Facial Recognition Technology in Public Housing Prompts Backlash." New York Times, September 24, 2019, sec. Politics. https://www.nytimes.com/2019/09/24/us/politics/facial-recognition-technology-housing.html.

Microsoft Research. "FATE: Fairness, Accountability, Transparency, and Ethics in AI." Accessed April 23, 2021. https://www.microsoft.com/en-us/research/theme/fate/.

Felzmann, Heike, Eduard Fosch Villaronga, Christoph Lutz, and Aurelia Tamò-Larrieux. "Transparency You Can Trust: Transparency Requirements for Artificial Intelligence between Legal Norms and Contextual Concerns." Big Data & Society 6, no. 1 (January 1, 2019): 2053951719860542. https://doi.org/10.1177/2053951719860542.

Fight for the Future. "Fight for the Future." Accessed March 30, 2021. https://www.fightforthefuture.org/.

Fincher, Jon. "Reading and Writing CSV Files in Python." Real Python. Accessed April 27, 2021. https://realpython.com/python-csv/.

Federal Trade Commission. "FTC Recommends Best Practices for Companies That Use Facial Recognition Technologies," October 22, 2012. https://www.ftc.gov/news-events/press-releases/2012/10/ftc-recommends-best-practices-companies-use-facial-recognition.

Fuad, Md Tahmid Hasan, Awal Ahmed Fime, Delowar Sikder, Md Akil Raihan Iftee, Jakaria Rabbi, Mabrook S. Al-rakhami, Abdu Gumae, Ovishake Sen, Mohtasim Fuad, and Md Nazrul Islam. "Recent Advances in Deep Learning Techniques for Face Recognition." IEEE Access Journal 4 (March 18, 2021): 30. https://doi.org/10.1109/ACCESS.2017.

Garvie, Clare, Alvaro Bedoya, and Jonathan Frankle. "The Perpetual Line-Up: Unregulated Police Face Recognition in America." Georgetown Law Center on Privacy & Technology. Accessed April 22, 2021. https://www.perpetuallineup.org/.

Google AI. "Google AI." Accessed April 23, 2021. https://ai.google/.

Green, Brian, Don Heider, Kay Firth-Butterfield, and Daniel Lim. "Responsible Use of Technology: The Microsoft Case Study." World Economic Forum, February 2021. http://www3.weforum.org/docs/WEF_Responsible_Use_of_Technology_2021.pdf.

Greenberg, Pam. "Facial Recognition Gaining Measured Acceptance." National Conference of State Legislatures, September 18, 2020. https://www.ncsl.org/research/telecommunications-and-information-technology/facial-recognition-gaining-measured-acceptance-magazine2020.aspx.

Grother, Patrick, Mei Ngan, and Kayee Hanaoka. "Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects." National Institute of Standards and Technology, December 19, 2019. https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf.

Hall, Chip. Informal Interview about Data Privacy Concerns from a Business Perspective. Video, March 30, 2021.

Hao, Karen. "Error-Riddled Data Sets Are Warping Our Sense of How Good AI Really Is." MIT Technology Review. Accessed April 21, 2021. https://www.technologyreview.com/2021/04/01/1021619/ai-data-errors-warp-machine-learning-progress/.

———. "This Is How We Lost Control of Our Faces." MIT Technology Review, February 5, 2021. https://www.technologyreview.com/2021/02/05/1017388/ai-deep-learning-facial-recognition-data-history/.

———. "We Read the Paper That Forced Timnit Gebru out of Google. Here's What It Says." MIT Technology Review, December 4, 2020. https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/.

Hill, Kashmir. "The Secretive Company That Might End Privacy as We Know It." The New York Times, January 18, 2020, sec. Technology. https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html.

———. "What We Learned About Clearview AI and Its Secret 'Co-Founder.'" The New York Times, March 18, 2021, sec. Technology. https://www.nytimes.com/2021/03/18/technology/clearview-facial-recognition-ai.html.

———. "Wrongfully Accused by an Algorithm." The New York Times, June 24, 2020, sec. Technology. https://www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html.

Hill, Kashmir, and Gabriel J.X. Dance. "Clearview's Facial Recognition App Is Identifying Child Victims of Abuse." New York Times, February 7, 2020, sec. Business. https://www.nytimes.com/2020/02/07/business/clearview-facial-recognition-child-sexual-abuse.html.

Horowitz, Julia. "Tech Companies Are Still Selling Facial Recognition Tools to the Police." CNN, July 3, 2020, sec. Business. https://www.cnn.com/2020/07/03/tech/facial-recognition-police/index.html.

Congress.Gov. "H.R.4008 - 116th Congress (2019-2020): No Biometric Barriers to Housing Act of 2019," 2019. https://www.congress.gov/bill/116th-congress/house-bill/4008/text?r=9&s=1.

"IBIA Privacy Best Practice Recommendations For Commercial Biometric Use." International Biometrics & Identification Association, August 2014. https://www.ntia.doc.gov/files/ntia/publications/ibia_privacy_best_practice_recommendations_8_18_14.pdf.

Kantayya, Shalini. Coded Bias. Documentary. Netflix, 2020. https://www.netflix.com/title/81328723.

Katzman, Lyra. Twitter API Python Coding Assistance. Personal Conversation, April 2021.

Klosowski, Thorin. "Facial Recognition Is Everywhere. Here's What We Can Do About It." New York Times, July 15, 2020. https://www.nytimes.com/wirecutter/blog/how-facial-recognition-works/.

CB Insights Research. "Like It Or Not Facial Recognition Is Already Here. These Are The Industries It Will Transform First," April 19, 2019. https://www.cbinsights.com/research/facial-recognition-disrupting-industries/.

Mayernik, Matthew S. "Open Data: Accountability and Transparency." Big Data & Society 4, no. 2 (July 4, 2017): 5. https://doi.org/10.1177/2053951717718853.

Moraes, Ricardo. "Capturing Souls." Reuters Blogs (blog), May 12, 2011. http://blogs.reuters.com/photographers-blog/2011/05/12/capturing-souls/.

Nitzberg, Mark. Informal Interview about Concerns with Facial Recognition Technology. Video, May 15, 2021.

Northcutt, Curtis G., Anish Athalye, and Jonas Mueller. "Pervasive Label Errors in Test Sets Destabilize Machine Learning Benchmarks." ICLR 2021 RobustML and Weakly Supervised Learning Workshops, April 8, 2021, 16.

OECD. "OECD Principles on Artificial Intelligence," May 2019. https://www.oecd.org/going-digital/ai/principles/.

O'Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York, NY: Crown Publishing Group, 2016.

Open Knowledge Foundation. "Open Data Handbook." Accessed April 23, 2021. https://opendatahandbook.org/guide/en/what-is-open-data/.

Data.gov. "Open Government." Accessed April 24, 2021. https://www.data.gov/open-gov/.

OpenAI. "OpenAI." Accessed April 23, 2021. https://openai.com/.

"Overview ‹ Gender Shades." MIT Media Lab. Accessed April 18, 2021. https://www.media.mit.edu/projects/gender-shades/overview/.

Twitter Developer. "Paginate | Search Tweets." Accessed April 7, 2021. https://developer.twitter.com/en/docs/twitter-api/tweets/search/integrate/paginate.

Pavloski, Mihajlo. "Accessing the Twitter API with Python." Stack Abuse. Accessed April 27, 2021. https://stackabuse.com/accessing-the-twitter-api-with-python/.

Phillips, Jonathon P., Carina A. Hahn, Peter C. Fontana, David A. Broniatowski, and Mark A. Przybocki. "Four Principles of Explainable Artificial Intelligence." National Institute of Standards and Technology, August 2020. https://www.nist.gov/system/files/documents/2020/08/17/NIST%20Explainable%20AI%20Draft%20NISTIR8312%20%281%29.pdf.

Phung, Van Hiep, and Eun Joo Rhee. "A High-Accuracy Model Average Ensemble of Convolutional Neural Networks for Classification of Cloud Image Patches on Small Datasets." Applied Sciences 9, no. 21 (January 2019): 4500. https://doi.org/10.3390/app9214500.

Piper, Andy. "Twitterdev/Twitter-API-v2-Sample-Code." GitHub, January 26, 2021. https://github.com/twitterdev/Twitter-API-v2-sample-code.

"Privacy Best Practice Recommendations For Commercial Facial Recognition Use." National Telecommunications and Information Administration, June 17, 2016. https://www.ntia.doc.gov/files/ntia/publications/privacy_best_practices_recommendations_for_commercial_use_of_facial_recogntion.pdf.

Google AI. "Responsible AI Practices." Accessed April 23, 2021. https://ai.google/responsibilities/responsible-ai-practices/.

Rowe, Elizabeth. "Regulating Facial Recognition Technology in the Private Sector." Stanford Technology Law Review 24, no. 1 (2020): 54.

Congress.Gov. "S.847 - 116th Congress (2019-2020): Commercial Facial Recognition Privacy Act of 2019," 2019. https://www.congress.gov/bill/116th-congress/senate-bill/847.

Congress.Gov. "S.4084 - 116th Congress (2019-2020): Facial Recognition and Biometric Technology Moratorium Act of 2020," 2020. https://www.congress.gov/bill/116th-congress/senate-bill/4084.

Schechner, Sam. "Artificial Intelligence, Facial Recognition Face Curbs in New EU Proposal." Wall Street Journal, April 21, 2021, sec. Europe. https://www.wsj.com/articles/artificial-intelligence-facial-recognition-face-curbs-in-new-eu-proposal-11619000520.

Schmidt, Eric, and Robert Work. "Final Report." National Security Commission on Artificial Intelligence, March 1, 2021. https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf.

"Seeing Is IDʼing: Facial Recognition & Privacy." Center for Democracy & Technology, January 22, 2012, 17.

Shneiderman, Ben. "Bridging the Gap Between Ethics and Practice: Guidelines for Reliable, Safe, and Trustworthy Human-Centered AI Systems." ACM Transactions on Interactive Intelligent Systems 10, no. 4 (December 3, 2020): 1–31. https://doi.org/10.1145/3419764.

Simonite, Tom. "A Prominent AI Ethics Researcher Says Google Fired Her." WIRED, December 3, 2020. https://www.wired.com/story/prominent-ai-ethics-researcher-says-google-fired-her/.

Smith, Zadie. "The Lazy River." The New Yorker, December 11, 2017. https://www.newyorker.com/magazine/2017/12/18/the-lazy-river.

World Bank. "Starting an Open Data Initiative," October 26, 2020. http://opendatatoolkit.worldbank.org/en/starting.html.

Electronic Privacy Information Center. "State Facial Recognition Policy," n.d. https://epic.org/state-policy/facialrecognition/.

Terhörst, Philipp, Jan Niklas Kolf, Marco Huber, Florian Kirchbuchner, Naser Damer, Aythami Morales, Julian Fierrez, and Arjan Kuijper. "A Comprehensive Study on Face Recognition Biases Beyond Demographics." Journal of Latex Class Files 14, no. 8 (March 2, 2021): 14.

Terry-Jack, Mohammed. "NLP: Pre-Trained Sentiment Analysis." Medium, May 1, 2019. https://medium.com/@b.terryjack/nlp-pre-trained-sentiment-analysis-1eb52a9d742c.

Google Walkout For Real Change. "The Future Must Be Ethical: #MakeAIEthical." Medium, March 8, 2021. https://googlewalkout.medium.com/.

"The Openness Revolution." The Economist, December 11, 2014. https://www.economist.com/business/2014/12/11/the-openness-revolution.

The Partnership on AI. "The Partnership on AI Brings Together Diverse, Global Voices to Realize the Promise of Artificial Intelligence." Accessed April 22, 2021. https://www.partnershiponai.org/.

Tsamados, Andreas, Nikita Aggarwal, Josh Cowls, Jessica Morley, Huw Roberts, Mariarosaria Taddeo, and Luciano Floridi. "The Ethics of Algorithms: Key Problems and Solutions." AI & Society, February 20, 2021. https://doi.org/10.1007/s00146-021-01154-8.

"U.S. Chamber Facial Recognition Policy Principles." U.S. Chamber of Commerce, December 5, 2019. https://www.uschamber.com/issue-brief/us-chamber-facial-recognition-policy-principles-0.

Valentino-DeVries, Jennifer. "How the Police Use Facial Recognition, and Where It Falls Short." New York Times, January 12, 2020. https://www.nytimes.com/2020/01/12/technology/facial-recognition-police.html.

Vought, Russell T. "Memorandum for the Heads of Executive Departments and Agencies." The White House, 2019. https://www.whitehouse.gov/wp-content/uploads/2020/01/Draft-OMB-Memo-on-Regulation-of-AI-1-7-19.pdf.
