Ethics, Crowdsourcing and Traceability for Big Data in Human Language Technology


"Where are the data coming from?"
Ethics, crowdsourcing and traceability for Big Data in Human Language Technology
Gilles Adda, Laurent Besacier, Alain Couillault, Karën Fort, Joseph Mariani, Hugues de Mazancourt
Contact: [email protected]

Outline
1. HLT and data: the need for more; the cost of more
2. HLT and human computation
3. The Ethics and Big Data Charter
4. Conclusion

HLT and data: the need for more
The growing need for corpora of growing size: the success of probabilistic machine learning methods, together with the evaluation paradigm, creates a need for more and more (manually annotated) corpora:
- for learning purposes
- for evaluation purposes
As the technology succeeds, the needed annotations also become more and more diverse and more and more complex.

HLT and data: the cost of more
The development of corpora is notoriously expensive:
- Prague Dependency Treebank [Böhmová et al., 2001]: 1.8 million words ⇒ 5 years, 22 persons, $600,000.
- GENIA [Kim et al., 2008]: 9,372 sentences (gene and protein names) ⇒ 5 part-time annotators plus 1 senior and 1 junior coordinator for 1.5 years.

HLT and human computation
Covered here: Amazon Mechanical Turk; MTurk, the legend; long-term consequences.

History: von Kempelen's Mechanical Turk
A mechanical chess player created by J. W. von Kempelen in 1770. In fact, a human chess master was hiding inside to operate the machine: artificial artificial intelligence!
Amazon Mechanical Turk (MTurk)
MTurk is a crowdsourcing system: work is outsourced via the Web and carried out by many people (the crowd), here called the Turkers.
More precisely, MTurk is a crowdsourcing, microworking system: tasks are cut into small pieces (HITs) and their execution is paid for by the Requesters. (An illustrative sketch of this Requester workflow appears below, after the discussion of ethics and legality.)

MTurk, the legend: a dream-come-true story?
It is "cheap, fast, good" [Snow et al., 2008], and a hobby for Turkers!

How many Turkers?
[Fort et al., 2011]: although 500k people are registered as Turkers in the MTurk system, there are in fact only between 15,059 and 42,912 active ones.

MTurk: a hobby for Turkers?
[Ross et al., 2010, Ipeirotis, 2010] show that:
- Turkers are primarily financially motivated (91%): 20% use MTurk as their primary source of income, 50% as a secondary source of income, and leisure matters for only a minority (30%).
- 20% of the Turkers spend more than 15 hours a week on MTurk and contribute 80% of the tasks.
- The observed mean hourly wage is below US$2.

Does MTurk produce equivalent quality?
Quality resources can be produced in some precise cases (e.g. speech transcription), but quality is questionable overall:
- quality decreases as the task becomes more complex (e.g. summarization [Gillick and Liu, 2010]);
- user-interface issues [Tratz and Hovy, 2010];
- the Turkers themselves (cheaters, spammers);
- the by-task payment model [Kochhar et al., 2010].
For some simple tasks, NLP tools even perform better than MTurk [Wais et al., 2010].

Is MTurk ethical and/or legal?
Ethics:
- No identification: no relation between Requesters and Turkers, nor among Turkers.
- No possibility to unionize, to protest against wrongdoings, or to go to court.
- No minimum wage (below $2/hr on average).
- Requesters can refuse to pay the Turkers.
Legality:
- Under the Amazon license agreement, Turkers are considered independent workers and are therefore supposed to pay all the taxes themselves.
- This is illusory, given the very low wages, so states are deprived of a legitimate income source.
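To make the Requester/HIT workflow described above more concrete, here is a minimal sketch, assuming the boto3 MTurk client, of how a Requester might publish a small transcription HIT. The sandbox endpoint, reward, keywords and question HTML are illustrative assumptions, not details taken from the presentation.

```python
# Illustrative sketch only: a Requester publishing one HIT via the boto3 MTurk client.
# Endpoint, credentials, reward and question content are assumed, illustrative values.
import boto3

# The sandbox endpoint lets Requesters test HITs without paying real workers.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# A HIT question can be supplied as an HTMLQuestion XML wrapper around a small form.
# A real HIT form also needs the assignmentId handling that MTurk requires; it is
# omitted here for brevity.
question_xml = """
<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[
    <html><body>
      <form action="https://www.mturk.com/mturk/externalSubmit" method="post">
        <p>Transcribe the short audio clip linked in your instructions.</p>
        <textarea name="transcription"></textarea>
        <input type="submit" value="Submit"/>
      </form>
    </body></html>
  ]]></HTMLContent>
  <FrameHeight>450</FrameHeight>
</HTMLQuestion>
"""

# Each HIT is a small, individually paid task; MaxAssignments controls how many
# different Turkers may complete it.
hit = mturk.create_hit(
    Title="Transcribe a 10-second audio clip",
    Description="Short speech transcription task",
    Keywords="transcription, speech, audio",
    Reward="0.05",                      # paid per assignment, in USD
    MaxAssignments=3,
    LifetimeInSeconds=24 * 3600,
    AssignmentDurationInSeconds=600,
    Question=question_xml,
)
print("HIT created:", hit["HIT"]["HITId"])
```

In practice the Requester then retrieves each completed assignment and approves or rejects it, which is exactly where the payment and accountability issues discussed above arise.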
Long-term consequences for data production
A vicious circle: the use of low-cost systems (such as MTurk) in projects ⇒ funding agencies see a huge (but unethical) reduction in costs ⇒ funding agencies become reluctant to pay for projects developed outside MTurk ⇒ MTurk costs become the standard ⇒ other, more costly systems disappear.

The Ethics and Big Data Charter
Covered here: creation process; contents; usage; example.

Creation process: writers and contributors (presented as a figure in the original slides)

Creation process: collaboration
- one meeting a month, from June to December 2012
- validation by each participant

Contents: a self-declared charter
A form to fill in and provide with each grant proposal.

Contents: key points
- Traceability: the history of the data.
- Quality: a description of the means deployed to ensure the quality of the data.
- Ethics: the status and remuneration of the participants.
- Legal aspects: the license and the relevant laws.
[Couillault et al., 2014] showed that most published data in HLT do not provide all of this information.

Usage: goal
- Have the Charter adopted by funding agencies, so that they can define an ethical selection policy.
- Provide (force?) some space for researchers to take a more global perspective on their project and allow them to also reflect on other issues (privacy, surveillance, etc.).

Example: TCOF-POS [Benzitoun et al., 2012]
Built from TCOF (Traitement de Corpus Oraux en Français), a spontaneous speech corpus transcribed with Transcriber.
TCOF-POS is the part-of-speech annotated version:
- correction of pre-annotations;
- 2 annotators plus 1 validator, working in a spreadsheet;
- methodology: regularly computed inter-annotator agreement (a sketch of such a computation follows at the end of this section).

Example: TCOF-POS extract (columns appear to be token, POS tag, lemma)
L2 LOC L2
ok FNO ok
L3 LOC L3
il PRO:clsi il
y PRO:cloy
aura VER:futu avoir
il PRO:clsi il
y PRO:cloy
aura VER:futu avoir

Example: TCOF-POS Charter, some details (presented as a figure in the original slides)

Example: writing the Charter for TCOF-POS
About 2 hours of work: A. Couillault (Aproged) and K. Fort, with a revision by C. Benzitoun (ATILF).

Conclusion
MTurk, latest evolutions: Amazon
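The TCOF-POS methodology above mentions regularly computed inter-annotator agreement, without naming the coefficient used. As an illustration only, here is a minimal sketch that computes Cohen's kappa over two annotators' parallel POS tag sequences; the tag sequences are made-up examples, not TCOF-POS data.

```python
# Illustrative sketch: Cohen's kappa between two POS annotators.
# The tag sequences below are made-up examples, not TCOF-POS data.
from collections import Counter

def cohen_kappa(tags_a, tags_b):
    """Cohen's kappa = (p_o - p_e) / (1 - p_e) over two parallel label sequences."""
    assert len(tags_a) == len(tags_b) and tags_a
    n = len(tags_a)
    # Observed agreement: proportion of tokens where both annotators agree.
    p_o = sum(a == b for a, b in zip(tags_a, tags_b)) / n
    # Expected (chance) agreement from each annotator's marginal tag distribution.
    freq_a, freq_b = Counter(tags_a), Counter(tags_b)
    p_e = sum((freq_a[t] / n) * (freq_b[t] / n) for t in set(tags_a) | set(tags_b))
    if p_e == 1.0:  # both annotators used a single identical tag everywhere
        return 1.0
    return (p_o - p_e) / (1 - p_e)

annotator_1 = ["PRO:clsi", "PRO:cloy", "VER:futu", "PRO:clsi", "PRO:cloy", "VER:futu"]
annotator_2 = ["PRO:clsi", "PRO:cloy", "VER:futu", "PRO:clsi", "ADV",      "VER:futu"]
print(f"kappa = {cohen_kappa(annotator_1, annotator_2):.2f}")
```

Computing such a coefficient at regular intervals, as the TCOF-POS example advocates, makes it possible to catch annotation-guideline problems early rather than after the whole corpus has been annotated.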