Case 3:17-cv-01738-JSC Document 1 Filed 03/29/17 Page 1 of 32
Recommended publications
Deepfakes and Cheap Fakes
Deepfakes and Cheap Fakes: The Manipulation of Audio and Visual Evidence. Britt Paris and Joan Donovan.
Contents: Executive Summary; Introduction; Cheap Fakes/Deepfakes: A Spectrum; The Politics of Evidence; Cheap Fakes on Social Media (Photoshopping, Lookalikes, Recontextualizing, Speeding and Slowing); Deepfakes Present and Future (Virtual Performances, Face Swapping, Lip-synching and Voice Synthesis); Conclusion; Acknowledgments.
Authors: Britt Paris, assistant professor of Library and Information Science, Rutgers University; PhD, 2018, Information Studies, University of California, Los Angeles. Joan Donovan, director of the Technology and Social Change Research Project, Harvard Kennedy School; PhD, 2015, Sociology and Science Studies, University of California, San Diego. This report is published under Data & Society's Media Manipulation research initiative; for more information on the initiative, including focus areas, researchers, and funders, please visit https://datasociety.net/research/media-manipulation
Executive Summary: Do deepfakes signal an information apocalypse? Are they the end of evidence as we know it? The answers to these questions require us to understand what is truly new about contemporary AV manipulation and what is simply an old struggle for power in a new guise. The first widely known examples of amateur, AI-manipulated face-swap videos appeared in November 2017. Since then, the news media, and therefore the general public, have begun to use the term "deepfakes" to refer to this larger genre of videos: videos that use some form of deep or machine learning to hybridize or generate human bodies and faces.
Digital Platform As a Double-Edged Sword: How to Interpret Cultural Flows in the Platform Era
International Journal of Communication 11 (2017), 3880-3898.
Digital Platform as a Double-Edged Sword: How to Interpret Cultural Flows in the Platform Era. Dal Yong Jin, Simon Fraser University, Canada.
This article critically examines the main characteristics of cultural flows in the era of digital platforms. By focusing on the increasing role of digital platforms during the Korean Wave (referring to the rapid growth of local popular culture and its global penetration starting in the late 1990s), it first analyzes whether digital platforms as new outlets for popular culture have changed traditional notions of cultural flows, that is, the export and import of popular culture mainly from Western countries to non-Western countries. Second, it maps out whether platform-driven cultural flows have resolved existing global imbalances in cultural flows. Third, it analyzes whether digital platforms themselves have intensified disparities between Western and non-Western countries; in other words, it interprets whether digital platforms have deepened asymmetrical power relations between a few Western countries (in particular, the United States) and non-Western countries.
Keywords: digital platforms, cultural flows, globalization, social media, asymmetrical power relations.
Cultural flows have been some of the most significant issues in globalization and media studies since the early 20th century. From television programs to films, and from popular music to video games, cultural flows as a form of the export and import of cultural materials have been increasing. Global fans of popular culture used to enjoy films, television programs, and music by either purchasing DVDs and CDs or watching them on traditional media, including television and the big screen.
Estimating Age and Gender in Instagram Using Face Recognition: Advantages, Bias and Issues. / Diego Couto De Las Casas
Estimating Age and Gender in Instagram Using Face Recognition: Advantages, Bias and Issues. Diego Couto de Las Casas.
Dissertation presented to the Graduate Program in Computer Science of the Instituto de Ciências Exatas of the Universidade Federal de Minas Gerais, Departamento de Ciência da Computação, in partial fulfillment of the requirements for the degree of Master in Computer Science. Advisor: Virgílio Augusto Fernandes de Almeida. Belo Horizonte, February 2016. © 2016, Diego Couto de Las Casas. All rights reserved.
Cataloging record (prepared by the ICEx-UFMG library): Las Casas, Diego Couto de. Estimating age and gender in Instagram using face recognition: advantages, bias and issues. Belo Horizonte, 2016. xx, 80 pages, illustrated; 29 cm. Master's dissertation, Universidade Federal de Minas Gerais, Departamento de Ciência da Computação. Advisor: Virgílio Augusto Fernandes de Almeida. Subjects: 1. Computing - Theses. 2. Online social networks. 3. Social computing. 4. Instagram. CDU 519.6*04(043).
Acknowledgments: I would like to thank everyone who helped me get this far. To my family, for the advice, suggestions, and all the support over these years. To my colleagues at CAMPS(-Élysées), for the collaborations, the laughter, and the companionship.
Artificial Intelligence: Risks to Privacy and Democracy
Artificial Intelligence: Risks to Privacy and Democracy. Karl Manheim* and Lyric Kaplan. 21 Yale J.L. & Tech. 106 (2019).
A "Democracy Index" is published annually by the Economist. For 2017, it reported that half of the world's countries scored lower than the previous year. This included the United States, which was demoted from "full democracy" to "flawed democracy." The principal factor was "erosion of confidence in government and public institutions." Interference by Russia and voter manipulation by Cambridge Analytica in the 2016 presidential election played a large part in that public disaffection. Threats of these kinds will continue, fueled by growing deployment of artificial intelligence (AI) tools to manipulate the preconditions and levers of democracy. Equally destructive is AI's threat to decisional and informational privacy. AI is the engine behind Big Data Analytics and the Internet of Things. While conferring some consumer benefit, their principal function at present is to capture personal information, create detailed behavioral profiles and sell us goods and agendas. Privacy, anonymity and autonomy are the main casualties of AI's ability to manipulate choices in economic and political decisions. The way forward requires greater attention to these risks at the national level, and attendant regulation. In its absence, technology giants, all of whom are heavily investing in and profiting from AI, will dominate not only the public discourse, but also the future of our core values and democratic institutions.
* Professor of Law, Loyola Law School, Los Angeles. This article was inspired by a lecture given in April 2018 at Kansai University, Osaka, Japan.
Bad Actors: Authenticity, Inauthenticity, Speech, and Capitalism
Bad Actors: Authenticity, Inauthenticity, Speech, and Capitalism. Sarah C. Haan*.
Abstract: "Authenticity" has evolved into an important value that guides social media companies' regulation of online speech. It is enforced through rules and practices that include real-name policies, Terms of Service requiring users to present only accurate information about themselves, community guidelines that prohibit "coordinated inauthentic behavior," verification practices, product features, and more. This Article critically examines authenticity regulation by the social media industry, including companies' claims that authenticity is a moral virtue, an expressive value, and a pragmatic necessity for online communication. It explains how authenticity regulation provides economic value to companies engaged in "information capitalism," "data capitalism," and "surveillance capitalism." It also explores how companies' self-regulatory focus on authenticity shapes users' views about objectionable speech, upends traditional commitments to pseudonymous political expression, and encourages collaboration between the State and private companies. The Article concludes that "authenticity," as conceptualized by the industry, is not an important value for users on par with privacy or dignity, but that it offers business value to companies. Authenticity regulation also provides many of the same opportunities for viewpoint discrimination as does garden-variety content moderation.
* Associate Professor of Law, Washington and Lee University School of Law. The Author thanks Carliss Chatman, Rebecca Green, Margaret Hu, Lyman Johnson, Thomas Kadri, James Nelson, Elizabeth Pollman, Carla L. Reyes, Christopher B. Seaman, Micah Schwartzman, Morgan Weiland, participants in the W&L Law 2019 Big Data Research Colloquium, and participants in the 2019 Yale Freedom of Expression Scholars Conference ("FESC VII") for their insightful comments on early drafts of this Article.
FROM EDITORS to ALGORITHMS a Values-Based Approach to Understanding Story Selection in the Facebook News Feed
From Editors to Algorithms: A Values-Based Approach to Understanding Story Selection in the Facebook News Feed. Michael A. DeVito, Department of Communication Studies, Northwestern University, Evanston, IL, USA. [email protected], http://www.mikedevito.net/
This is a pre-print version of an article that has been accepted for publication in Digital Journalism. For quotation, please consult the final, published version at http://dx.doi.org/10.1080/21670811.2016.1178592. Please cite as: DeVito, M. A. (2016). From Editors to Algorithms: A values-based approach to understanding story selection in the Facebook news feed. Digital Journalism, ahead of print. doi: 10.1080/21670811.2016.1178592
Acknowledgements: The author thanks Eileen Emerson for her research assistance, as well as Kerric Harvey, David Karpf, and Emily Thorson of The George Washington University for their guidance and support during the initial research on this piece. The author also thanks the anonymous reviewers, William Marler, and the members of Aaron Shaw's "Bring Your Own Research" group at Northwestern University for their feedback on the manuscript.
Funding: This research was partially supported by a thesis grant from The George Washington University's School of Media and Public Affairs, with which the author was previously affiliated.
Facebook's News Feed is an emerging, influential force in our personal information flows, especially where news information is concerned.
Founder Mark Zuckerberg, Eduardo Saverin, Dustin Moskovitz, Chris Hughes
Facebook (Facebook, Inc.)
Type: Private
Founded: Cambridge, Massachusetts[1] (2004)
Founders: Mark Zuckerberg, Eduardo Saverin, Dustin Moskovitz, Chris Hughes
Headquarters: Palo Alto, California, U.S.; will be moved to Menlo Park, California, U.S. in June 2011
Area served: Worldwide
Key people: Mark Zuckerberg (CEO), Chris Cox (VP of Product), Sheryl Sandberg (COO), Donald E. Graham (Chairman)
Net income: N/A
Website: facebook.com
Type of site: Social network service
Registration: Required
Users: 600 million[5][6] (active in January 2011)
Launched: February 4, 2004
Current status: Active
Facebook (stylized facebook) is a social network service and website launched in February 2004, operated and privately owned by Facebook, Inc. As of January 2011, Facebook has more than 600 million active users. Users may create a personal profile, add other users as friends, and exchange messages, including automatic notifications when they update their profile. Additionally, users may join common-interest user groups, organized by workplace, school, college, or other characteristics. The name of the service stems from the colloquial name for the book given to students at the start of the academic year by university administrations in the USA to help students get to know each other better. Facebook allows anyone who declares themselves to be at least 13 years old to become a registered user of the website. Facebook was founded by Mark Zuckerberg with his college roommates and fellow computer science students Eduardo Saverin, Dustin Moskovitz and Chris Hughes. The website's membership was initially limited by the founders to Harvard students, but was expanded to other colleges in the Boston area, the Ivy League, and Stanford University.
CS 182: Ethics, Public Policy, and Technological Change
CS 182: Ethics, Public Policy, and Technological Change. Rob Reich, Mehran Sahami, Jeremy Weinstein, Hilary Cohen.
Housekeeping: a recording of the information session on the Public Policy memo is available on the class website, along with a recent research study on the efficacy of contact-tracing apps (if you're interested).
Today's agenda: 1. Perspectives on data privacy; 2. Approaches to data privacy (anonymization, encryption, differential privacy); 3. What can we infer from your digital trails?; 4. The information ecosystem; 5. Facial recognition.
Perspectives on data privacy: data privacy often involves a balance of competing interests. One interest is making data available for meaningful analysis, whether for public goods (auditing algorithmic decision-making for fairness, medical research and health care improvement, protecting national security) or for private goods (personalized advertising). The other is protecting individual privacy: the personal value of privacy and respect for the individual (thanks, Rob!), freedom of speech and activity, avoiding discrimination, regulation such as FERPA, HIPAA, and GDPR (thanks, Jeremy!), and preventing access by "adversaries."
Anonymization: the basic idea is to drop personally identifying features from the data. The slide's example table has the columns Name, SS#, Employer, Job Title, Nationality, Gender, D.O.B., Zipcode, and Has Condition?, with one complete row (Mehran Sahami, XXX-XX-XXXX, Stanford University, Professor, Iran, Male, May 10, 1970, 94306, No) and a second row that is cut off in the excerpt (Claire, XXX-XX-..., Google Inc.).
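To make that "drop the identifying columns" step concrete, here is a minimal, hypothetical Python sketch; the data values mirror the slide's example table but are otherwise invented, and the column split into "direct identifiers" versus everything else is an editorial assumption, not part of the course materials. Note that removing Name and SS# still leaves quasi-identifiers (date of birth, gender, zipcode) that can often be linked with outside data, which is the re-identification risk discussed in the statistical re-identification literature excerpted later in this list.

```python
# Minimal anonymization sketch (illustrative only): drop direct identifiers
# from a table shaped like the slide's example.
import pandas as pd

# Hypothetical data mirroring the slide's example columns.
records = pd.DataFrame([
    {"Name": "Mehran Sahami", "SS#": "XXX-XX-XXXX", "Employer": "Stanford University",
     "Job Title": "Professor", "Nationality": "Iran", "Gender": "Male",
     "D.O.B": "1970-05-10", "Zipcode": "94306", "Has Condition?": "No"},
])

# "Basic idea: drop personally identifying features from the data."
direct_identifiers = ["Name", "SS#"]
anonymized = records.drop(columns=direct_identifiers)

# Caveat: quasi-identifiers (D.O.B, Gender, Zipcode) remain and can often
# be linked with outside data to re-identify individuals.
print(anonymized)
```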
Predicting User Interaction on Social Media Using Machine Learning Chad Crowe University of Nebraska at Omaha
Predicting User Interaction on Social Media using Machine Learning. Chad Crowe, University of Nebraska at Omaha. DigitalCommons@UNO, Student Work, November 2018; part of the Computer Sciences Commons, available at https://digitalcommons.unomaha.edu/studentwork
Recommended citation: Crowe, Chad, "Predicting User Interaction on Social Media using Machine Learning" (2018). Student Work. 2920. https://digitalcommons.unomaha.edu/studentwork/2920
A thesis presented to the College of Information Science and Technology and the Faculty of the Graduate College, University of Nebraska at Omaha, in partial fulfillment of the requirements for the degree Master of Science in Computer Science, November 2018. Supervisory committee: Dr. Brian Ricks, Dr. Margeret Hall, Dr. Yuliya Lierler. ProQuest number 10974767; published by ProQuest LLC (2019); copyright of the thesis is held by the author.
Face Recognition and Privacy in the Age of Augmented Reality
Journal of Privacy and Confidentiality (2014) 6, Number 2, 1-20.
Face Recognition and Privacy in the Age of Augmented Reality. Alessandro Acquisti, Ralph Gross, and Fred Stutzman.
1 Introduction. In 1997, the best computer face recognizer in the US Department of Defense's Face Recognition Technology program scored an error rate of 0.54 (the false reject rate at a false accept rate of 1 in 1,000). By 2006, the best recognizer scored 0.026 [1]. By 2010, the best recognizer scored 0.003 [2], an improvement of more than two orders of magnitude in just over 10 years. In 2000, of the approximately 100 billion photographs shot worldwide [3], only a negligible portion found their way online. By 2010, 2.5 billion digital photos a month were uploaded by members of Facebook alone [4]. Often, those photos showed people's faces, were tagged with their names, and were shared with friends and strangers alike. This manuscript investigates the implications of the convergence of those two trends: the increasing public availability of facial, digital images; and the ever-improving ability of computer programs to recognize individuals in them. In recent years, massive amounts of identified and unidentified facial data have become available, often publicly so, through Web 2.0 applications. So have the infrastructure and technologies necessary to navigate through those data in real time, matching individuals across online services, independently of their knowledge or consent. In the literature on statistical re-identification [5, 6], an identified database is pinned against an unidentified database in order to recognize individuals in the latter and associate them with information from the former.
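As a concrete illustration of the statistical re-identification idea described above (matching an identified database against an unidentified one), here is a minimal, hypothetical Python sketch. The datasets, column names, and choice of quasi-identifiers are invented for illustration; the excerpt itself does not specify particular attributes or tooling.

```python
# Illustrative sketch of statistical re-identification: join an identified
# dataset to a de-identified one on shared quasi-identifiers.
import pandas as pd

# Hypothetical identified database (e.g., scraped public profiles).
identified = pd.DataFrame([
    {"name": "Alice Example", "zipcode": "94306", "birth_date": "1970-05-10", "gender": "F"},
    {"name": "Bob Sample",    "zipcode": "68182", "birth_date": "1985-01-02", "gender": "M"},
])

# Hypothetical de-identified database (names removed, sensitive attribute kept).
deidentified = pd.DataFrame([
    {"zipcode": "94306", "birth_date": "1970-05-10", "gender": "F", "has_condition": True},
    {"zipcode": "10001", "birth_date": "1990-07-04", "gender": "M", "has_condition": False},
])

# Records that agree on every quasi-identifier are candidate re-identifications.
quasi_identifiers = ["zipcode", "birth_date", "gender"]
linked = identified.merge(deidentified, on=quasi_identifiers, how="inner")
print(linked)  # Alice Example gets linked to the sensitive attribute.
```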
Facebook's Edge Rank Algorithm: Origins, Development, and Operation from the Perspective of Actor-Network Theory
Michał Pałasz, Jagiellonian University in Kraków, Faculty of Management and Social Communication, Institute of Culture. E-mail: [email protected]
Facebook's Edge Rank Algorithm: Origins, Development, and Operation from the Perspective of Actor-Network Theory
Abstract: The article begins by discussing the research methodology (actor-network theory, autoethnography) and then traces the development of Facebook in the years 2004-2018 through the lens of the origins and transformations of the algorithm shaping the News Feed, identified as the platform's key innovation; the conclusions synthesize the identified translations and the modus operandi of the main actor. The text is based on research conducted for the author's presentation at the 2nd Polish Interdisciplinary Scientific Conference "TechSpo'18: The Power of Algorithms?", organized by the Faculty of Humanities of the AGH University of Science and Technology in Kraków (Kraków, 20-21 September 2018).
Keywords: Edge Rank algorithm, Facebook, social media, News Feed, actor-network theory, media management.
Introduction: The article presents the results of an exploration, conducted in the spirit of actor-network theory (ANT), of the transformations of Facebook in the years 2004-2018, in the course of which the algorithm known as Edge Rank, EdgeRank, Ranking, or simply the Facebook algorithm emerged and in which it takes an active, agentive part. At the time of writing, it silently accompanies more than two billion users who use the service at least once a month (Facebook Newsroom 2018a); moreover, among other things, it decides which messages within the service reach which users, creating filter bubbles (cf. Pariser 2011) such as Blue Feed, Red Feed ("Kanał niebieski, kanał czerwony"; unless otherwise noted, trans.
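The abstract above treats EdgeRank as the actor that decides which posts reach which users. For orientation, here is a minimal Python sketch based on the commonly cited public description of EdgeRank (a post's score as the sum, over its edges, of viewer affinity times edge-type weight times time decay). The production algorithm was never published in detail and has long since been replaced by machine-learned ranking, so the weights, decay function, and overall structure below are illustrative assumptions, not Facebook's actual implementation.

```python
# Illustrative EdgeRank-style scoring: score(post) = sum over edges of
# (affinity * edge-type weight * time decay). All constants are assumed.
import time
from dataclasses import dataclass
from typing import List, Optional

EDGE_WEIGHTS = {"comment": 4.0, "like": 1.0, "share": 6.0, "click": 0.5}  # assumed values

@dataclass
class Edge:
    kind: str          # e.g., "like", "comment"
    affinity: float    # closeness of the viewer to the edge's creator, 0..1
    created_at: float  # Unix timestamp of the interaction

def time_decay(created_at: float, now: float, half_life_hours: float = 24.0) -> float:
    """Older edges count for less; exponential decay with an assumed half-life."""
    age_hours = (now - created_at) / 3600.0
    return 0.5 ** (age_hours / half_life_hours)

def edgerank_score(edges: List[Edge], now: Optional[float] = None) -> float:
    now = time.time() if now is None else now
    return sum(
        e.affinity * EDGE_WEIGHTS.get(e.kind, 1.0) * time_decay(e.created_at, now)
        for e in edges
    )

# Example: one recent comment from a close friend and an older like from an acquaintance.
now = time.time()
post_edges = [
    Edge("comment", affinity=0.9, created_at=now - 2 * 3600),
    Edge("like", affinity=0.3, created_at=now - 48 * 3600),
]
print(round(edgerank_score(post_edges, now), 3))
```

Under this kind of scheme, a post's score relative to other candidate posts would determine whether and where it surfaces in a given user's News Feed, which is the gatekeeping role the article attributes to the algorithm.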