Five Privacy Principles (from the GDPR) the United States Should Adopt to Advance Economic Justice


Five Privacy Principles (from the GDPR) the United States Should Adopt To Advance Economic Justice

Michele E. Gilman*

ABSTRACT

Algorithmic profiling technologies are impeding the economic security of low-income people in the United States. Based on their digital profiles, low-income people are targeted for predatory marketing campaigns and financial products. At the same time, algorithmic decision-making can result in their exclusion from mainstream employment, housing, financial, health care, and educational opportunities. Government agencies are turning to algorithms to apportion social services, yet these algorithms lack transparency, leaving thousands of people adrift without state support and not knowing why. Marginalized communities are also subject to disproportionately high levels of surveillance, including facial recognition technology and the use of predictive policing software. American privacy law is no bulwark against these profiling harms, instead placing the onus of protecting personal data on individuals while leaving government and businesses largely free to collect, analyze, share, and sell personal data. By contrast, in the European Union, the General Data Protection Regulation (GDPR) gives EU residents numerous, enforceable rights to control their personal data. Spurred in part by the GDPR, Congress is debating whether to adopt comprehensive privacy legislation in the United States. This article contends that the GDPR contains several provisions that have the potential to limit digital discrimination against the poor, while enhancing their economic stability and mobility. The GDPR provides the following: (1) the right to an explanation about automated decision-making; (2) the right not to be subject to decisions based solely on automated profiling; (3) the right to be forgotten; (4) opportunities for public participation in data processing programs; and (5) robust implementation and enforcement tools. The interests of low-income people must be part of privacy lawmaking, and the GDPR is a useful template for thinking about how to meet their data privacy needs.

* Venable Professor of Law and Director, Saul Ewing Civil Advocacy Clinic, University of Baltimore School of Law; Faculty Fellow, Data & Society. B.A., Duke University; J.D., University of Michigan Law School. Many thanks for feedback to the researchers at Data & Society and the faculties of William & Mary Law School, Pace Law School, and Drexel University School of Law.

ABSTRACT
I. INTRODUCTION
II. THE CLASS DIFFERENTIAL IN DATA PRIVACY HARMS
   A. Digital Discrimination/Electronic Exploitation
   B. Dirty Data and Careless Coding
   C. Surveillance
   D. Conclusion
III. THE GAPS IN AMERICAN PRIVACY PROTECTIONS
   A. Constitution
   B. Privacy Statutes
   C. Notice and Consent
   D. Enforcement
   E. Anti-Discrimination Law
   F. Workplace Protections
IV. FIVE GDPR PRINCIPLES TO ADVANCE ECONOMIC JUSTICE
   A. Right to an Explanation
   B. Right to Object to Automated Profiling
   C. Right To Be Forgotten and Criminal Records
   D. Public Participation
   E. Implementation and Enforcement
V. WHAT ABOUT THE CALIFORNIA CONSUMER PRIVACY ACT?
VI. CONCLUSION

I. INTRODUCTION

On May 25, 2018, citizens of the European Union awoke to a new, robust set of data privacy protections codified in the General Data Protection Regulation (GDPR), which gives them new levels of control over their personal information.1 Meanwhile, Americans rise daily to the same fragmented privacy regime that has failed to forestall a drumbeat of data breaches, online misinformation campaigns, and a robust market in the sale of personal data, usually without their knowledge.2 For instance, in 2017, the credit reporting company Equifax disclosed that hackers had breached its servers and stolen the personal data of almost half the United States’s population.3 In 2018, we learned that Cambridge Analytica harvested the data of over 50 million Americans through personality quizzes on Facebook in order to target voters with political advertisements for the Trump campaign.4 And, as consumers are bombarded with advertisements based on their internet searches or remarks captured by digital assistants such as Amazon’s Alexa, Americans are increasingly realizing that their online and offline behavior is being tracked and sold as part of a massive, networked data-for-profit and surveillance system.5

The American privacy regime is largely based on a notice and consent model that puts the onus on individuals to protect their own privacy.6 The model is not working. In the absence of congressional action, some states have enacted laws to protect the data privacy and/or security of their citizens.7 California has enacted the most comprehensive statute, effective January 2020, entitled the California Consumer Privacy Act, which expands transparency about the big data marketplace and gives consumers the right to opt out of having their data sold to third parties.8 Other states may soon follow suit. After years of resistance, Big Tech companies such as Facebook, Google, Microsoft, and Apple are now advocating for a national data protection law, primarily because they are worried about the emergence of varying state standards.9 Moreover, Big Tech is already working to comply with the GDPR for their millions of European consumers.10 In light of these new laws and the American public’s “techlash” against revelations of big data scandals,11 Congress is finally and seriously considering comprehensive privacy legislation.12 While the issue has bipartisan support, proposals vary in their solicitude for corporations versus consumers.13 As these issues are being actively debated, it is essential that the legislative process include the interests of all Americans, and not just elites and industry.

Simply put, digital privacy needs are not the same for all Americans. Based on their digital profiles, low-income people are targeted with marketing campaigns for predatory products such as payday loans and for-profit educational scams.14 At the same time, algorithmic decision-making can result in their exclusion from mainstream employment, housing, financial, and educational opportunities.15 Government agencies are turning to algorithms to apportion public benefits, yet these automated decision-making systems lack transparency, leaving thousands of people adrift without state support and not knowing why.16 Marginalized communities are also subject to high levels of law enforcement surveillance, including the use of

Footnotes:
1. Commission Regulation 2016/679, 2016 O.J. (L 119) 1 [hereinafter GDPR].
2. See SARAH E. IGO, THE KNOWN CITIZEN: A HISTORY OF PRIVACY IN MODERN AMERICA 353–55 (2018).
3. Stacy Cowley, 2.5 Million More People Potentially Exposed in Equifax Breach, N.Y. TIMES (Oct. 2, 2017), https://www.nytimes.com/2017/10/02/business/equifax-breach.html [https://perma.cc/8VWZ-L653].
4. Matthew Rosenberg, Nicholas Confessore & Carole Cadwalladr, How Trump Consultants Exploited the Facebook Data of Millions, N.Y. TIMES (Mar. 17, 2018), https://www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html?module=inline [https://perma.cc/9LVD-9DAG].
5. Lee Rainie, Americans’ Complicated Feelings About Social Media in an Era of Privacy Concerns, PEW RES. CTR. (Mar. 27, 2018), https://www.pewresearch.org/fact-tank/2018/03/27/americans-complicated-feelings-about-social-media-in-an-era-of-privacy-concerns/ [https://perma.cc/8WLL-QL4Y].
6. See Daniel J. Solove, Privacy Self-Management and the Consent Dilemma, 126 HARV. L. REV. 1880, 1882–83 (2013).
7. State Laws Related to Internet Privacy, NAT’L CONF. ST. LEGISLATURES (Jan. 27, 2020), http://www.ncsl.org/research/telecommunications-and-information-technology/state-laws-related-to-internet-privacy.aspx#Consumer [https://perma.cc/68V2-C338].
8. CAL. CIV. CODE § 1798.100 (West 2020). For more analysis of the California Consumer Privacy Act, see infra Part IV.
9. See, e.g., Ben Brody & Spencer Soper, Amazon Joins Tech Giants in Backing Federal Privacy Safeguards, BLOOMBERG (Sept.
Recommended publications
  • AI PRINCIPLES in CONTEXT: Tensions and Opportunities for the United States and China
    Written by Jessica Cussins Newman, August 2020

    INTRODUCTION

    Over the past five years, awareness of the transformative impacts of artificial intelligence (AI) on societies, policies, and economies around the world has been increasing. Unique challenges posed by AI include the rise of synthetic media (text, audio, images, and video content);1 insufficient transparency in the automation of decision making in complex environments;2 the amplification of social biases from training data and design choices, causing disparate outcomes for different races, genders, and other groups;3 and the vulnerability of AI models to cyberattacks such as data poisoning.4 Awareness of these and numerous other challenges has led to serious deliberation about the principles needed to guide AI development and use. By some counts, there are now more than 160 AI principles and guidelines globally.5 Because these principles have emerged from different stakeholders with unique histories and motivations, their style and substance vary. Some documents include recommendations or voluntary commitments, while a smaller subset includes binding agreements or enforcement mechanisms. Despite the differences, arguably a consensus has also been growing around key thematic trends, including privacy, accountability, safety and security, transparency and explainability (meaning that the results and decisions made by an AI system can be understood by humans), fairness and nondiscrimination, human control of technology, professional responsibility, and promotion of human values.6

    Pull quote: “GAINING A BETTER UNDERSTANDING OF THE CULTURAL AND POLITICAL CONTEXT SURROUNDING KEY AI PRINCIPLES CAN HELP CLARIFY WHICH PRINCIPLES PROVIDE PRODUCTIVE OPPORTUNITIES FOR POTENTIAL COOPERATION.”

    However, in the face of complex and growing … the United States and China.
  • Artificial Intelligence as Evidence
    Authors’ pre-publication copy dated March 7, 2021. This article has been accepted for publication in Volume 19, Issue 1 of the Northwestern Journal of Technology & Intellectual Property (Fall 2021).

    Artificial Intelligence as Evidence1

    ABSTRACT

    This article explores issues that govern the admissibility of Artificial Intelligence (“AI”) applications in civil and criminal cases, from the perspective of a federal trial judge and two computer scientists, one of whom also is an experienced attorney. It provides a detailed yet intelligible discussion of what AI is and how it works, a history of its development, and a description of the wide variety of functions that it is designed to accomplish, stressing that AI applications are ubiquitous, both in the private and public sector. Applications today include: health care, education, employment-related decision-making, finance, law enforcement, and the legal profession. The article underscores the importance of determining the validity of an AI application (i.e., how accurately the AI measures, classifies, or predicts what it is designed to), as well as its reliability (i.e., the consistency with which the AI produces accurate results when applied to the same or substantially similar circumstances), in deciding whether it should be admitted into evidence in civil and criminal cases. The article further discusses factors that can affect the validity and reliability of AI evidence, including bias of various types, “function creep,” lack of transparency and explainability, and the sufficiency of the objective testing of the AI application before it is released for public use. The article next provides an in-depth discussion of the evidentiary principles that govern whether AI evidence should be admitted in court cases, a topic which, at present, is not the subject of comprehensive analysis in decisional law.
  • The Role of Technology in Online Misinformation (Sarah Kreps)
    THE ROLE OF TECHNOLOGY IN ONLINE MISINFORMATION
    SARAH KREPS, JUNE 2020

    EXECUTIVE SUMMARY

    States have long interfered in the domestic politics of other states. Foreign election interference is nothing new, nor are misinformation campaigns. The new feature of the 2016 election was the role of technology in personalizing and then amplifying the information to maximize the impact. As a 2019 Senate Select Committee on Intelligence report concluded, malicious actors will continue to weaponize information and develop increasingly sophisticated tools for personalizing, targeting, and scaling up the content. This report focuses on those tools. It outlines the logic of digital personalization, which uses big data to analyze individual interests to determine the types of messages most likely to resonate with particular demographics. The report speaks to the role of artificial intelligence, machine learning, and neural networks in creating tools that distinguish quickly between objects, for example a stop sign versus a kite, or in a battlefield context, a combatant versus a civilian. Those same technologies can also operate in the service of misinformation through text prediction tools that receive user inputs and produce new text that is as credible as the original text itself. The report addresses potential policy solutions that can counter digital personalization, closing with a discussion of regulatory or normative tools that are less likely to be effective in countering the adverse effects of digital technology.

    INTRODUCTION

    Meddling in domestic elections is nothing new as a tool of foreign influence. … and machine learning about user behavior to manipulate public opinion, allowed social media bots to target individuals or demographics known …
  • Whose Streets? Our Streets! (Tech Edition)
    TECHNOLOGY AND PUBLIC PURPOSE PROJECT
    Whose Streets? Our Streets! (Tech Edition): 2020-21 “Smart City” Cautionary Trends & 10 Calls to Action to Protect and Promote Democracy
    Rebecca Williams
    REPORT, AUGUST 2021
    Belfer Center for Science and International Affairs, Harvard Kennedy School, www.belfercenter.org/TAPP

    Acknowledgments

    This report culminates my research on “smart city” technology risks to civil liberties as part of Harvard Kennedy School’s Belfer Center for Science and International Affairs’ Technology and Public Purpose (TAPP) Project. The TAPP Project works to ensure that emerging technologies are developed and managed in ways that serve the overall public good. I am grateful to the following individuals for their inspiration, guidance, and support: Liz Barry, Ash Carter, Kade Crockford, Susan Crawford, Karen Ejiofor, Niva Elkin-Koren, Clare Garvie and The Perpetual Line-Up team for inspiring this research, Kelsey Finch, Ben Green, Gretchen Greene, Leah Horgan, Amritha Jayanti, Stephen Larrick, Greg Lindsay, Beryl Lipton, Jeff Maki, Laura Manley, Dave Maass, Dominic Mauro, Hunter Owens, Kathy Pettit, Bruce Schneier, Madeline Smith (who helped so much wrangling all of these examples), Audrey Tang, James Waldo, Sarah Williams, Kevin Webb, Bianca Wylie, Jonathan Zittrain, my fellow TAPP fellows, and many others.
  • AI Now Report 2018
    AI Now Report 2018

    Meredith Whittaker, AI Now Institute, New York University, Google Open Research; Kate Crawford, AI Now Institute, New York University, Microsoft Research; Roel Dobbe, Genevieve Fried, Elizabeth Kaziunas, Varoon Mathur, Sarah Myers West, and Rashida Richardson, AI Now Institute, New York University; Jason Schultz, AI Now Institute, New York University School of Law; Oscar Schwartz, AI Now Institute, New York University. With research assistance from Alex Campolo and Gretchen Krueger (AI Now Institute, New York University). DECEMBER 2018

    CONTENTS: About the AI Now Institute; Recommendations; Executive Summary; Introduction; 1. The Intensifying Problem Space (1.1 AI is Amplifying Widespread Surveillance: the faulty science and dangerous history of affect recognition; facial recognition amplifies civil rights concerns; 1.2 The Risks of Automated Decision Systems in Government; 1.3 Experimenting on Society: Who Bears the Burden?); 2. Emerging Solutions in 2018 (2.1 Bias Busting and Formulas for Fairness: the Limits of Technological “Fixes”; broader approaches; 2.2 Industry Applications: Toolkits and System Tweaks; 2.3 Why Ethics is Not Enough); 3. What Is Needed Next (3.1 From Fairness to Justice; 3.2 Infrastructural Thinking; 3.3 Accounting for Hidden Labor in AI Systems; 3.4 Deeper Interdisciplinarity; 3.5 Race, Gender and Power in AI; 3.6 Strategic Litigation and Policy Interventions; 3.7 Research and Organizing: An Emergent Coalition); Conclusion; Endnotes.

    ABOUT THE AI NOW INSTITUTE

    The AI Now Institute at New York University is an interdisciplinary research institute dedicated to understanding the social implications of AI technologies.
  • Data for Peacebuilding and Prevention Ecosystem Mapping
    October 2020
    DATA FOR PEACEBUILDING AND PREVENTION
    Ecosystem Mapping: The State of Play and the Path to Creating a Community of Practice
    Branka Panic

    ABOUT THE CENTER ON INTERNATIONAL COOPERATION

    The Center on International Cooperation (CIC) is a non-profit research center housed at New York University. Our vision is to advance effective multilateral action to prevent crises and build peace, justice, and inclusion. Our mission is to strengthen cooperative approaches among national governments, international organizations, and the wider policy community to advance peace, justice, and inclusion.

    Acknowledgments

    The NYU Center on International Cooperation is grateful for the support of the Netherlands Ministry of Foreign Affairs for this initiative. We also offer a special thanks to interviewees for their time and support at various stages of this research, and to an Expert Group convened to discuss the report findings: Bas Bijlsma, Christina Goodness, Gregor Reisch, Jacob Lefton, Lisa Schirch, Merve Hickok, Monica Nthiga, and Thierry van der Horst.

    CITATION: This report should be cited as: Branka Panic, Data for Peacebuilding and Prevention Ecosystem Mapping: The State of Play and the Path to Creating a Community of Practice (New York: NYU Center on International Cooperation, 2020).

    Contents: Summary; Terminology; Introduction: Objectives and Methodology; Objectives
  • The Next Generation of Cyber-Enabled Information Warfare
    2020 12th International Conference on Cyber Conflict: 20/20 Vision: The Next Decade. T. Jančárková, L. Lindström, M. Signoretti, I. Tolga, G. Visky (Eds.). © 2020 NATO CCD COE Publications, Tallinn.

    The Next Generation of Cyber-Enabled Information Warfare

    Kim Hartmann, Conflict Studies Research Centre, [email protected]
    Keir Giles, Conflict Studies Research Centre, [email protected]

    Abstract: Malign influence campaigns leveraging cyber capabilities have caused significant political disruption in the United States and elsewhere; but the next generation of campaigns could be considerably more damaging as a result of the widespread use of machine learning. Current methods for successfully waging these campaigns depend on labour-intensive human interaction with targets. The introduction of machine learning, and potentially artificial intelligence (AI), will vastly enhance capabilities for automating the reaching of mass audiences with tailored and plausible content. Consequently, they will render malicious actors even more powerful. Tools for making use of machine learning in information operations are developing at an extraordinarily rapid pace, and are becoming rapidly more available and affordable for a much wider variety of users. Until early 2018 it was assumed that the utilisation of AI methods by cyber criminals was not to be expected soon, because those methods rely on vast datasets, correspondingly vast computational power, or both, and demanded highly specialised skills and knowledge.
  • Tackling Deepfakes in European Policy
    Tackling deepfakes in European policy
    STUDY, Panel for the Future of Science and Technology, EPRS | European Parliamentary Research Service, Scientific Foresight Unit (STOA), PE 690.039, July 2021

    The emergence of a new generation of digitally manipulated media capable of generating highly realistic videos – also known as deepfakes – has generated substantial concerns about possible misuse. In response to these concerns, this report assesses the technical, societal and regulatory aspects of deepfakes. The assessment of the underlying technologies for deepfake videos, audio and text synthesis shows that they are developing rapidly, and are becoming cheaper and more accessible by the day. The rapid development and spread of deepfakes is taking place within the wider context of a changing media system. An assessment of the risks associated with deepfakes shows that they can be psychological, financial and societal in nature, and their impacts can range from the individual to the societal level. The report identifies five dimensions of the deepfake lifecycle that policy-makers could take into account to prevent and address the adverse impacts of deepfakes. The legislative framework on artificial intelligence (AI) proposed by the European Commission presents an opportunity to mitigate some of these risks, although regulation should not focus on the technological dimension of deepfakes alone. The report includes policy options under each of the five dimensions, which could be incorporated into the AI legislative framework, the proposed European Union digital services act package and beyond. A combination of measures will likely be necessary to limit the risks of deepfakes, while harnessing their potential.
  • Artificial Intelligence and Disinformation How AI Changes the Way Disinformation Is Produced, Disseminated, and Can Be Countered
    security and human rights 29 (2018) 55-81, brill.com/shrs, doi:10.1163/18750230-02901005

    Artificial Intelligence and Disinformation: How AI Changes the Way Disinformation is Produced, Disseminated, and Can Be Countered

    Katarina Kertysova*
    George F. Kennan Fellow, Kennan Institute, Woodrow Wilson Center
    [email protected]

    Abstract: This article explores the challenges and opportunities presented by advances in artificial intelligence (AI) in the context of information operations. The article first examines the ways in which AI can be used to counter disinformation online. It then dives into some of the limitations of AI solutions and threats associated with AI techniques, namely user profiling, micro-targeting, and deep fakes. Finally, the paper reviews a number of solutions that could help address the spread of AI-powered disinformation and improve the online environment. The article recognises that in the fight against disinformation, there is no single fix. The next wave of disinformation calls first and foremost for societal resilience.

    Keywords: disinformation – artificial intelligence – deep fakes – algorithms – profiling – micro-targeting – automated fact-checking – social media

    * This report was written with research and editorial support of Eline Chivot, Senior Policy Analyst at the Center for Data Innovation. Opinions expressed in the article are solely those of the author.

    1 Introduction

    In recent years, Western democracies have been grappling with a mix of cyber-attacks, information operations, political and social subversion, exploitation of existing tensions within their societies, and malign financial influence.
  • The Universal Declaration of Human Rights at 70
    The Universal Declaration of Human Rights at 70: Putting Human Rights at the Heart of the Design, Development and Deployment of Artificial Intelligence

    20 December 2018

    This report was written by Lorna McGregor, Vivian Ng and Ahmed Shaheed with Elena Abrusci, Catherine Kent, Daragh Murray and Carmel Williams. The report greatly benefited from the insightful comments and suggestions by Laura Carter, Jane Connors, Eimear Farrell, Geoff Gilbert, Sabrina Rau and Maurice Sunkin. The Human Rights, Big Data and Technology Project.

    CONTENTS: Executive Summary; Introduction; I. The Impact of Big Data and AI on Human Rights; A. The Role of the Rights of Equality, Non-Discrimination and Privacy as Gatekeepers to All Human Rights (1. The Rights to Equality and Non-Discrimination; 2. The Right to Privacy); B. The Human Rights’ Impact of the Use of Big Data and AI in Key Sectors (1. The Impact of Big Data and AI on the Right to Education)
  • Modernizing Authentication Standards for Digital Video Evidence in the Era of Deepfakes
    American University Law Review, Volume 69, Issue 6, Article 5 (2020)

    A Break From Reality: Modernizing Authentication Standards for Digital Video Evidence in the Era of Deepfakes
    John P. LaMonaga, American University Washington College of Law

    Recommended Citation: LaMonaga, John P. (2020) “A Break From Reality: Modernizing Authentication Standards for Digital Video Evidence in the Era of Deepfakes,” American University Law Review: Vol. 69: Iss. 6, Article 5. Available at: https://digitalcommons.wcl.american.edu/aulr/vol69/iss6/5

    Abstract

    The legal standard for authenticating photographic and video evidence in court has remained largely static throughout the evolution of media technology in the twentieth century. The advent of “deepfakes,” or fake videos created using artificial intelligence programming, renders outdated many of the assumptions that the Federal Rules of Evidence are built upon. Rule 901(b)(1) provides a means to authenticate evidence through the testimony of a “witness with knowledge.” Courts commonly admit photographic and video evidence by using the “fair and accurate portrayal” standard to meet this Rule’s intent.
  • School Surveillance: The Students' Rights Implications of Artificial Intelligence as K-12 School Security
    NORTH CAROLINA LAW REVIEW, Volume 98, Number 2, Article 12 (1-1-2020)

    School Surveillance: The Students’ Rights Implications of Artificial Intelligence as K-12 School Security
    Maya Weinstein

    Recommended Citation: Maya Weinstein, School Surveillance: The Students’ Rights Implications of Artificial Intelligence as K-12 School Security, 98 N.C. L. REV. 438 (2020). Available at: https://scholarship.law.unc.edu/nclr/vol98/iss2/12

    School of Surveillance: The Students’ Rights Implications of Artificial Intelligence as K-12 Public School Security*

    Concerns about school safety dominate the nation as school shootings leave parents, children, and school officials in constant fear. In response to the violence, some schools are acquiring advanced artificial intelligence surveillance technology, including facial recognition and geolocation tracking devices, to strengthen security. The companies behind these technologies claim they will improve school safety. However, there is little indication that they are effective or accurate. Moreover, there is even less information regarding the implications for student privacy and the potential negative impact on the educational environment. Studies prove facial recognition technologies are biased against people of color, which could have devastating effects on students who are already at risk of being pushed out of school through the school-to-prison pipeline.