Incentive Engineering for Collaborative Online Work: The Case of Crowdsourcing
INCENTIVE ENGINEERING FOR COLLABORATIVE ONLINE WORK: THE CASE OF CROWDSOURCING

Dissertation approved by the Faculty of Economics at the Karlsruher Institut für Technologie (KIT) for the attainment of the academic degree of Doctor of Economics (Dr. rer. pol.)

by M.Sc. Tim Jochen Straub

Date of the oral examination: 15.03.2017
First reviewer (Referent): Prof. Dr. Christof Weinhardt
Second reviewer (Korreferent): Prof. Dr. Orestis Terzidis

Karlsruhe, 2017


Contents

List of Abbreviations
List of Figures
List of Tables

I  Introduction

1  Introduction
   1.1  Motivation
   1.2  Research Outline
   1.3  Structure of the Thesis
   1.4  Research Development

II  Foundations and Related Work

2  Aspects of Collaborative Online Work
   2.1  Crowdsourcing
        2.1.1  Classification of Crowdsourcing
        2.1.2  Exemplary Crowdsourcing Platforms
        2.1.3  Related Concepts
   2.2  Existing Work on Collaborative Online Work and Incentives

3  The Theory of Motivation and Incentives
   3.1  Definitions
   3.2  Motivation Theory
        3.2.1  Content Theories
        3.2.2  Process Theories
        3.2.3  Intrinsic and Extrinsic Motivation

III  Insights from Collaborative Online Work

4  Incentives for Crowdsourcing: Payment Schemes and Feedback
   4.1  Introduction
   4.2  Background
        4.2.1  Payments in Online Labor
        4.2.2  Incentives: Piece Rates and Rank-Order-Tournaments
        4.2.3  Motivation in Paid Micro-Task Crowdsourcing
   4.3  Research Model
   4.4  General Study Design and Procedures
   4.5  Study 1: Piece Rates versus Rank-Order-Tournaments
        4.5.1  Study Design
        4.5.2  Evaluation
   4.6  Study 2: Feedback in Rank-Order-Tournaments – Different Strengths of Competitors
        4.6.1  Study Design
        4.6.2  Evaluation
   4.7  Study 3: Feedback in Rank-Order-Tournaments – Equal Strength of Competitors
        4.7.1  Study Design
        4.7.2  Evaluation
   4.8  Summary

5  Risk-Taking and Crowdsourcing: Tournament Modes and Rankings
   5.1  Introduction
   5.2  Background
        5.2.1  Risk in Rank-Order-Tournaments
        5.2.2  Risk-Taking and Creativity
   5.3  Study Design
   5.4  Evaluation
   5.5  Summary

6  Non-Monetary Incentives for Crowdsourcing: Framing and Monetary Compensation
   6.1  Introduction
   6.2  Background
        6.2.1  Incentives and Motives for Non-Monetary Crowdsourcing
        6.2.2  The “Planspiel Flächenhandel”
   6.3  Study Design
        6.3.1  Participants
        6.3.2  Experimental Process
        6.3.3  Phases of the Experiments
        6.3.4  Experimental Platform
   6.4  Evaluation
        6.4.1  No Framing vs. Framing
        6.4.2  No Monetary Compensation vs. Monetary Compensation
        6.4.3  Summary of Evaluation
   6.5  Summary

IV  Finale

7  Conclusion and Future Work
   7.1  Summary and Contributions
   7.2  Future Work
   7.3  Final Remark

V  Appendices

A  Texts and Platform Screens of Chapter 4
B  Texts and Platform Screens of Chapter 5
C  Texts and Platform Screens of Chapter 6

References


List of Abbreviations

A       Auction Treatment
CDA     Continuous Double Auction
DV      Dependent Variable
FE      Feedback on an Equal Competitor
FM      Feedback on a Mediocre Competitor
Fr      Framing Setting
FS      Feedback on a Strong Competitor
FW      Feedback on a Weak Competitor
FZI     FZI Research Center for Information Technology
GLS     Generalized Least Squares
HITs    Human Intelligence Tasks
IND     Individual Treatment
IS      Information Systems
KD2Lab  Karlsruhe Decision & Design Lab
KIT     Karlsruhe Institute of Technology
Mo      Money Setting
MTurk   Amazon Mechanical Turk
NA      No Auction Treatment
NF      No Feedback
NoFr    No Framing Setting
NoMo    No Money Setting
OLS     Ordinary Least Squares
ORSEE   Online Recruitment System for Economic Experiments
PR      Piece Rate
ROT     Rank-Order-Tournament
SD      Standard Deviation
TDR     Tradable Development Right
TEAM    Team Treatment
USBA    Unified Sealed Bid Auction
USD     United States Dollar
WWW     World Wide Web


List of Figures

1.1  Structure of this thesis.
2.1  Classification of crowdsourcing into four main alternatives by Prpić et al. (2015), extended with further examples.
2.2  Classification of crowdsourcing based on the incentive scheme. Own presentation based on Malone et al. (2010), Hammon and Hippner (2012), and Kittur et al. (2013).
2.3  The high-level steps of crowdsourcing and the respective actors of Tranquillini et al. (2015).
3.1  The classical approach to motivation, adapted from Rheinberg (2008).
3.2  Maslow’s hierarchy of needs. Own presentation based on Maslow (1943).
4.1  Hypothesized research model on the correlates of crowd workers’ performance in rank-order-tournaments.
4.2  Overall experimental process of Chapter 4’s Studies 1, 2, and 3.
4.3  User interface: feedback (left image), no feedback (right image).
4.4  Experimental process of Study 1.
4.5  Mean finished sliders (performance) by treatment, standard error bars for 95 % confidence interval.
4.6  Experimental process of Study 2.
4.7  Experimental process of Study 3.
5.1  Betting environment.
5.2  Individual ranking.
5.3  Team ranking.
5.4  Experimental process of Chapter 5’s study.
5.5  Mean risk-taking (selected odd) by treatment, standard error bars for 95 % confidence interval.
5.6  Comparison of risk-taking (selected odd) by individual and team ranking over the course of the tournament (GLS regression estimates).
6.1  Overview of the six experiments conducted in the context of the “Planspiel Flächenhandel”.
6.2  Experimental processes of Chapter 6’s study.
6.3  Information message that was shown if a participant would spend more than 10 % of his yearly budget.
6.4  Information message that was shown if a participant would spend more than 20 % of his yearly budget.
6.5  Overview of the experimental phases per simulated year.
6.6  Stepwise reduction of land consumption in hectares per day (ha/d).
6.7  Order book of the experimental platform.
6.8  Example project list of a local authority.
6.9  User interface of the project census platform by Gertz Gutsche Rümenapp GbR.
6.10 Example depletion project list of a local authority.
6.11 User interface of the experimental platform: active CDA trading phase.
6.12 User interface of the experimental platform: active activation phase.
6.13 Participants’ spending of the local authorities’ budget for land consumption during the experiments (Fr): NA treatment (top image), A treatment (bottom image).
7.1  Process of conducting experiments, based on Friedman and Sunder (1994).
7.2  Framework of an online platform for a collaborative and continuous crowd experiment guideline.
7.3  Potential treatments of a future study analyzing feedback mechanisms and quality.
7.4  Potential treatment process of a future study analyzing adaptive feedback, effort, and motivation in long-lasting tasks.
7.5  Potential process of crowdsourcing trust-scores for profile pictures of sharing economy platforms.
7.6  Potential treatments of a future study analyzing user interface designs