Technocultural Pluralism: A "Clash of Civilizations" in Technology?
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
-
Empirical Assessment of Risk Factors: How Online and Offline Lifestyle, Social Learning, and Social Networking Sites Influence Crime Victimization
By Ashley Bettencourt. Thesis submitted in fulfillment of the requirements for the degree of Master of Science in Criminal Justice, College of Graduate Studies, Bridgewater State University, Bridgewater, Massachusetts, December 2014. Thesis chair: Dr. Kyung-Shick Choi. Part of the Criminology and Criminal Justice Commons.
Recommended citation: Bettencourt, Ashley. (2014). Empirical Assessment of Risk Factors: How Online and Offline Lifestyle, Social Learning, and Social Networking Sites Influence Crime Victimization. In BSU Master's Theses and Projects, Item 10. Available at http://vc.bridgew.edu/theses/10. Copyright © 2014 Ashley Bettencourt. This item is available as part of Virtual Commons, the open-access institutional repository of Bridgewater State University.
-
When Diving Deep Into Operationalizing the Ethical Development of Artificial Intelligence, One Immediately Runs Into the “Fractal Problem”
When diving deep into operationalizing the ethical development of artificial intelligence, one immediately runs into the "fractal problem". This may also be called the "infinite onion" problem. That is, each problem within development that you pinpoint expands into a vast universe of new complex problems. It can be hard to make any measurable progress as you run in circles among different competing issues, but one of the paths forward is to pause at a specific point and detail what you see there. Here I list some of the complex points that I see at play in the firing of Dr. Timnit Gebru, and why it will remain forever after a really, really, really terrible decision.
The Punchline. The firing of Dr. Timnit Gebru is not okay, and the way it was done is not okay. It appears to stem from the same lack of foresight that is at the core of modern technology, and so itself serves as an example of the problem. The firing seems to have been fueled by the same underpinnings of racism and sexism that our AI systems, when in the wrong hands, tend to soak up. How Dr. Gebru was fired is not okay, what was said about it is not okay, and the environment leading up to it was -- and is -- not okay. Every moment where Jeff Dean and Megan Kacholia do not take responsibility for their actions is another moment where the company as a whole stands by silently as if to intentionally send the horrifying message that Dr. Gebru deserves to be treated this way.
-
Bias and Fairness in NLP
Margaret Mitchell (Google Brain), Kai-Wei Chang (UCLA), Vicente Ordóñez Román (University of Virginia), and Vinodkumar Prabhakaran (Google Brain)
Tutorial outline: ● Part 1: Cognitive Biases / Data Biases / Bias Laundering ● Part 2: Bias in NLP and Mitigation Approaches ● Part 3: Building Fair and Robust Representations for Vision and Language ● Part 4: Conclusion and Discussion
"Bias Laundering": Cognitive Biases, Data Biases, and ML. Vinodkumar Prabhakaran and Margaret Mitchell (Google Brain), with Timnit Gebru, Andrew Zaldivar, Emily Denton, Simone Wu, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Deb Raji, Adrian Benton, Brian Zhang, Dirk Hovy, Josh Lovejoy, Alex Beutel, Blake Lemoine, Hee Jung Ryu, Hartwig Adam, and Blaise Agüera y Arcas.
What's in this tutorial: ● Motivation for fairness research in NLP ● How and why NLP models may be unfair ● Various types of NLP fairness issues and mitigation approaches ● What can/should we do? What's NOT in this tutorial: ● Definitive answers to fairness/ethical questions ● Prescriptive solutions to fix ML/NLP (un)fairness
A recurring slide exercise asks "What do you see?" of a single photo, with the answers building up: ● Bananas ● Stickers ● Dole Bananas ● Bananas at a store ● Bananas on shelves ● Bunches of bananas
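The kind of bias measurement such a tutorial covers can be sketched in a few lines of Python. The toy example below is an illustration, not code from the tutorial: the random four-dimensional vectors stand in for pretrained word embeddings (e.g. GloVe or word2vec), and the association score is a simplified cousin of standard embedding-bias tests such as WEAT; all names and values here are assumptions for illustration.

    import numpy as np

    def cosine(u, v):
        # Cosine similarity between two vectors.
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    def gender_association(word, he, she):
        # Positive: the word vector sits closer to "he"; negative: closer to "she".
        return cosine(word, he) - cosine(word, she)

    # Toy stand-ins for pretrained embeddings (hypothetical values).
    rng = np.random.default_rng(0)
    emb = {w: rng.normal(size=4) for w in ["he", "she", "doctor", "nurse"]}

    for word in ["doctor", "nurse"]:
        print(f"{word}: {gender_association(emb[word], emb['he'], emb['she']):+.3f}")

With real embeddings, occupation words that sit systematically closer to one gendered anchor than the other are exactly the kind of signal the mitigation approaches in Part 2 aim to measure and remove.
-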
Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification
Joy Buolamwini (MIT Media Lab, 75 Amherst St., Cambridge, MA 02139) and Timnit Gebru (Microsoft Research, 641 Avenue of the Americas, New York, NY 10011). Proceedings of Machine Learning Research 81:1-15, 2018, Conference on Fairness, Accountability, and Transparency. Editors: Sorelle A. Friedler and Christo Wilson.
Abstract: Recent studies demonstrate that machine learning algorithms can discriminate based on classes like race and gender. In this work, we present an approach to evaluate bias present in automated facial analysis algorithms and datasets with respect to phenotypic subgroups. Using the dermatologist-approved Fitzpatrick Skin Type classification system, we characterize the gender and skin type distribution of two facial analysis benchmarks, IJB-A and Adience. We find that these datasets are overwhelmingly composed of lighter-skinned subjects (79.6% for IJB-A and 86.2% for Adience) and introduce a new facial analysis dataset which is balanced by gender and skin type.
From the introduction: ...who is hired, fired, granted a loan, or how long an individual spends in prison, decisions that have traditionally been performed by humans are rapidly made by algorithms (O'Neil, 2017; Citron and Pasquale, 2014). Even AI-based technologies that are not specifically trained to perform high-stakes tasks (such as determining how long someone spends in prison) can be used in a pipeline that performs such tasks. For example, while face recognition software by itself should not be trained to determine the fate of an individual in the criminal justice system, it is very likely that such software is used to identify suspects. Thus, an error in the output of a face recognition algorithm used as input for other tasks can have serious consequences.
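The audit approach the abstract describes, disaggregated evaluation, comes down to computing accuracy per intersectional subgroup instead of one overall number. A minimal sketch under assumed data (the records and field names below are hypothetical illustrations, not the paper's code or data):

    from collections import defaultdict

    # Hypothetical audit records: (gender, skin type, was the classifier correct?)
    records = [
        ("female", "darker", False), ("female", "darker", True),
        ("female", "lighter", True), ("male", "darker", True),
        ("male", "lighter", True), ("male", "lighter", True),
    ]

    tallies = defaultdict(lambda: [0, 0])  # (gender, skin) -> [correct, total]
    for gender, skin, correct in records:
        tallies[(gender, skin)][0] += int(correct)
        tallies[(gender, skin)][1] += 1

    for group, (right, total) in sorted(tallies.items()):
        print(f"{group}: accuracy {right / total:.0%} (n={total})")

An aggregate score over these six toy records is 83%, yet the (female, darker) subgroup sits at 50%: exactly the kind of disparity a single headline accuracy hides and this disaggregated view exposes.
-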
JULIAN ASSANGE: When Google Met WikiLeaks
OR Books. Behind Google's image as the over-friendly giant of global tech: "Nobody wants to acknowledge that Google has grown big and bad. But it has. Schmidt's tenure as CEO saw Google integrate with the shadiest of US power structures as it expanded into a geographically invasive megacorporation..." Google is watching you: "As Google enlarges its industrial surveillance cone to cover the majority of the world's population... Google was accepting NSA money to the tune of..." Google spends more on Washington lobbying than leading military contractors: "Google entered the lobbying rankings above military aerospace giant Lockheed Martin, with a total of $18.2 million spent in 2012. Boeing and Northrop Grumman also came below the tech..." From the transcript of the secret meeting between Julian Assange and Google's Eric Schmidt (wikileaks.org/Transcript-Meeting-Assange-Schmidt.html): Assange: "We wouldn't mind a leak from Google, which would be, I think, probably all the Patriot Act requests..." Schmidt: "Which would be [whispers] illegal..." Assange: "Tell your general counsel to argue..." Eric Schmidt and the State Department-Google nexus: "It was at this point that I realized that Eric Schmidt might not have been an emissary of Google alone... the delegation was one part Google, three parts US foreign-policy establishment... We called the State Department front desk and told them that Julian Assange wanted to have a conversation with Hillary Clinton..." Promotional site: when.google.met.wikileaks.org
-
AI Principles 2020 Progress Update
Contents: Overview; Culture, education, and participation; Technical progress; Internal processes; Community outreach and exchange; Conclusion; Appendix: Research publications and tools; Endnotes.
Overview: Google's AI Principles were published in June 2018 as a charter to guide how we develop AI responsibly and the types of applications we will pursue. This report highlights recent progress in AI Principles implementation across Google, including technical tools, educational programs and governance processes. Of particular note in 2020, the AI Principles have supported our ongoing work to address...
-
Beating Biometric Bias: Facial Recognition Is Improving, but the Bigger Issue Is How It's Used
Feature by Davide Castelvecchi, Nature. [Photo caption: The Metropolitan Police in London used facial-recognition cameras to scan for wanted people in February. Credit: Kelvin Chan/AP/Shutterstock]
When London's Metropolitan Police tested real-time facial-recognition technology between 2016 and 2019, they ... extracted key features and compared them to those of suspects from a watch list. "If there is a match, it pulls an image from the live feed, together with the image from the watch list." Fussey and Murray listed a number of ethical and privacy concerns with the dragnet, and questioned whether it was legal at all. And they queried the accuracy of the system, which is sold by Tokyo-based technology giant NEC. The software flagged 42 people over 6 trials that the researchers analysed; officers dismissed 16 matches as "non-credible" but rushed out to stop the others. They lost 4 people in the crowd, but still stopped 22: only 8 turned out to be correct matches. The police saw the issue differently. They said the system's number of false positives was tiny, considering the many thousands of faces that had been scanned. (They didn't reply to Nature's requests for comment for this article.) The accuracy of facial recognition has improved drastically since "deep learning" techniques were introduced into the field about a decade ago. But whether that means it's good enough to be used on lower-quality, "in the wild" images is a hugely controversial issue.
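The disagreement between the researchers and the police is really about which rate you report, and the article's numbers let you check both. A quick worked example in Python (the 100,000 scan count is a hypothetical stand-in; the article says only "many thousands"):

    flagged = 42    # alerts the system raised over the 6 analysed trials
    dismissed = 16  # alerts officers judged "non-credible"
    stopped = 22    # people officers actually stopped
    correct = 8     # stops that were true matches

    # The researchers' view: how often was an acted-on alert right?
    print(f"Precision of stops: {correct / stopped:.1%}")  # ~36.4%

    # The police's view: false alerts as a share of everyone scanned.
    scanned = 100_000  # hypothetical scan count, for illustration only
    known_false = dismissed + (stopped - correct)  # 30 alerts known to be wrong
    print(f"False-alert rate over all scans: {known_false / scanned:.3%}")  # 0.030%

Both figures are true at once: a system can be wrong in nearly two-thirds of the stops it triggers while its false alerts remain a vanishing fraction of all faces scanned, which is why the choice of denominator drives the controversy.
-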
Regulating Electronic Identity Intermediaries: The "Soft eID" Conundrum
Tal Z. Zarsky & Norberto Nuno Gomes de Andrade
Table of contents: I. Introduction: The Rise of the Digital Identity Intermediary. II. Identity Intermediation: "Hard" eIDs / "Soft" eIDs: A. From IDs to eIDs; B. Soft eIDs: Definitions, Roles, and Tasks (1. Incidental; 2. Remote Verification; 3. Lightly Regulated Industries). III. Soft eIDs: Common Variations and Key Examples: A. The Ipse/Idem Distinction; B. Real Names Versus Stable Pseudonyms (1. Real Name IDs: a. Mandatory Versus Voluntary?; b. Verification Methods; c. Real Name eIDs: Unique Benefits and Uses; 2. Stable Pseudonyms). IV. The Benefits and Pitfalls of Soft eID Intermediation: A. The Benefits of Soft eIDs: Economic, Personal, and Social; B. Identity-Related Harms and Soft Identity Intermediaries (1. Identity, "Virtual Identity" Theft and Fraud, and the Possible Role and Responsibility of the Intermediary; 2. Virtual Property Rights, Publicity, ...)
-
Distributed Blackness
Transcript: The Critical Technology, Episode 4: Distributed Blackness. Published just last year (2020, New York University Press), Dr. Andre Brock Jr's Distributed Blackness: African American Cybercultures has already become a highly influential, widely cited, must-read text for anyone interested in digital culture and technology. Through a theoretically rich and methodologically innovative analysis, Brock positions Blackness at the centre of Internet technoculture: its technological designs and infrastructures, its political economic underpinnings, its cultural significance, and its emergent informational identities.
Sara Grimes 0:00: Like many of you, as the fall 2020 semester was wrapping up, my Twitter feed was suddenly flooded with the news that Dr. Timnit Gebru had been fired from Google. A world-renowned computer scientist, leading AI ethics researcher, and co-founder of Black in AI, Dr. Gebru had recently completed research raising numerous ethical concerns with the type of large-scale AI language models used in many Google products. Her expulsion from Google highlights the discrimination and precarity faced by Black people, especially Black women, working in the tech industries and in society at large. On the one hand, Dr. Gebru's treatment by Google execs, made visible through screenshots and conversations posted on Twitter, was met with an outpouring of public support from fellow Google employees, tech workers, academics and concerned citizens. The hashtag #ISupportTimnit was often paired with #BelieveBlackWomen, connecting this event with the systematic anti-Black racism and sexism found across technoculture. On the other hand, since December, Dr. Gebru and her supporters have also been targeted by Twitter trolls, and subjected to a bombardment of racist and sexist vitriol online.
-
Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing
Inioluwa Deborah Raji (University of Toronto, Canada), Timnit Gebru (Google Research, United States), Margaret Mitchell (Google Research, United States), Joy Buolamwini (MIT Media Lab, United States), Joonseok Lee (Google Research, United States), and Emily Denton (Google Research, United States).
Abstract: Although essential to revealing biased performance, well-intentioned attempts at algorithmic auditing can have effects that may harm the very populations these measures are meant to protect. This concern is even more salient while auditing biometric systems such as facial recognition, where the data is sensitive and the technology is often used in ethically questionable manners. We demonstrate a set of five ethical concerns in the particular case of auditing commercial facial processing technology, highlighting additional design considerations and ethical tensions the auditor needs to be aware of so as not to exacerbate or complement the harms propagated by the audited system.
From the paper: ...the positive uses of this technology include claims of "understanding users", "monitoring or detecting human activity", "indexing and searching digital image libraries", and verifying and identifying subjects "in security scenarios" [1, 8, 16, 33]. The reality of FPT deployments, however, reveals that many of these uses are quite vulnerable to abuse, especially when used for surveillance and coupled with predatory data collection practices that, intentionally or unintentionally, discriminate against marginalized groups [2]. The stakes are particularly high when we consider companies like Amazon and HireVue, who are selling their services to police departments or using the technology to help...
-
Archival Anarchies: Online Fandom, Subcultural Conservation, and the Transformative Work of Digital Ephemera
Alexis Lothian | [email protected] | 310-436-5727. Writing sample: submission to International Journal of Cultural Studies, submitted September 8, 2011.
Abstract: This essay explores the politics of digital memory and traceability, drawing on Jacques Derrida's Archive Fever and on queer theories of performance and ephemerality. Its primary example is the Organization for Transformative Works, a successful advocacy group for creative media fans whose main project has been an online fan fiction archive. I am concerned both with this public face and with the activities that fans would prefer to keep out of the official record. For scholars of subcultural artistic and cultural production, the mixed blessings of conservation and legitimacy are necessary considerations for the archiving and meaning-making work of cultural theory. Paying attention to the technological and social specificities that contextualize fandom as a digital archive culture, I trace contradictions and contestations around what fannish and scholarly archivable content is and should be: what's trivial, what's significant, what legally belongs to whom, and what deserves to be preserved.
Keywords: subculture, fandom, archive, ephemera, memory, digital, queer. Word count: 8,053 including all notes and references.
1. Introduction: Legitimacy. "[W]hat is no longer archived in the same way is no longer lived in the same way" (Derrida 1996: 18). In 2011, Google's decision to delete apparently pseudonymous accounts on its new social networking service, Google Plus, caused enormous online controversy. The hashtag #nymwars indexed online disagreements over whether individuals' participation in online social networking should be connected to the names on their credit cards.
-
Unlike Us Reader: Social Media Monopolies and Their Alternatives
Edited by Geert Lovink and Miriam Rasch. INC Reader #8.
The Unlike Us Reader offers a critical examination of social media, bringing together theoretical essays, personal discussions, and artistic manifestos. How can we understand the social media we use every day, or consciously choose not to use? We know very well that monopolies control social media, but what are the alternatives? While Facebook continues to increase its user population and combines loose privacy restrictions with control over data, many researchers, programmers, and activists turn towards designing a decentralized future. Through understanding the big networks from within, be it by philosophy or art, new perspectives emerge.
Unlike Us is a research network of artists, designers, scholars, activists, and programmers, with the aim to combine a critique of the dominant social media platforms with work on "alternatives in social media", through workshops, conferences, online dialogues, and publications. Everyone is invited to be a part of the public discussion on how we want to shape the network architectures and the future of social networks we are using so intensely. www.networkcultures.org/unlikeus
Contributors: Solon Barocas, Caroline Bassett, Tatiana Bazzichelli, David Beer, David M. Berry, Mercedes Bunz, Florencio Cabello, Paolo Cirio, Joan Donovan, Louis Doulas, Leighton Evans, Marta G. Franco, Robert W. Gehl, Seda Gürses, Alexandra Haché, Harry Halpin, Mariann Hardey, Pavlos Hatzopoulos, Yuk Hui, Ippolita, Nathan Jurgenson, Nelli Kambouri, Jenny Kennedy, Ganaele Langlois, Simona Lodi, Alessandro Ludovico, Tiziana Mancinelli, Andrew McNicol, Andrea Miconi, Arvind Narayanan, Wyatt Niehaus, Korinna Patelis, PJ Rey, Sebastian Sevignani, Bernard Stiegler, Marc Stumpel, Tiziana Terranova, Vincent Toubiana, Brad Troemel, Lonneke van der Velden, Martin Warnke and D.E. ...