Game Taskification for Crowdsourcing
Game taskification for crowdsourcing
A design framework to integrate tasks into digital games

Author: Anna Quecke (895915)
Master of Science in Digital and Interaction Design
Academic year: 2019-20
Supervisor: Ilaria Mariani

Table of Contents

1 Moving crowds to achieve valuable social innovation through games
1.1 From participatory culture to citizen science and game-based crowdsourcing systems
1.2 Analyzing the nature of crowdsourcing
1.2.1 The relevance of fun and enjoyment in crowdsourcing
1.2.2 The rise of Games with a Purpose
1.3 Games as productive systems
1.3.1 Evidence of the positive effects of gamifying a crowdsourcing system
1.4 Design between social innovation and game-based crowdsourcing
1.4.1 Which impact deserves recognition? Disputes on Game for Impact definition
1.4.2 Ethical implications of game-based crowdsourcing systems
1.4.3 For a better collaboration between practice and research
2 The design of game-based crowdsourcing systems
2.1 What is fun and how to design for it
2.1.1 Gamification
2.1.2 Serious Games
2.1.3 Taskification
2.1.4 A comparison between gamification, SGs and taskification
2.2 Fun is all? Investigating underlying motivations
2.2.1 How games affect ongoing motivation in crowdsourcing activities
2.2.2 Target matters. A discussion on user groups in game-based crowdsourcing
2.2.3 Understanding players' motivation through Self-Determination Theory
2.3 Converging players to new activities: research aim
3 Research methodology
3.1 Research question and hypothesis
3.2 Iterative process
4 Testing and results
4.1 Defining a framework for game taskification for crowdsourcing
4.1.1 Simperl's framework for crowdsourcing design
4.1.2 MDA, a game design framework
4.1.3 Diegetic connectivity
4.1.4 Guidelines Review
4.1.5 The framework
4.2 Testing through pilots
4.2.1 Pilot 1
4.2.2 Pilot 2
4.2.3 Pilot 3
5 Conclusion
5.1 Result discussion
5.1.1 An in-depth analysis of feedback
5.2 Contributions to knowledge
5.3 Directions for future research
References

List of figures

Fig. 1 Results from the investigation of iStockers' motivations to participate (Brabham, 2008).
Fig. 2 Ye and Kankanhalli's sum of theories applied to crowdsourcing platforms to explain users' participation (2017).
Fig. 3 The research model for solvers' participation from Ye and Kankanhalli (2017). H3, H6b, and H7a are not supported.
Fig. 4 The interface of the ESP Game. The thermometer at the bottom measures how many images partners have agreed on. When filled, players gain a bonus point.
Fig. 5 The home page of gwap.com. A brief overview of the motive of the games is displayed: "You play the games, computers get smarter, everyone benefits!"
Fig. 6 A screenshot from the ESP Game on gwap.com. The game was polished and designed according to the whole website image.
Fig. 7 The Google Image Labeler: it had all the features of the ESP Game plus the pass counter and the list of the images labelled during the session.
Fig. 8 Psychological outcomes reported in the literature reviewed by Morschheuser and colleagues (2017).
Fig. 9 Results on gamified crowdsourcing from Morschheuser and colleagues' (2017) review.
Fig. 10 Positive effects of gamification in crowdsourcing reported in Morschheuser and colleagues' (2017) review.
Fig. 11 Merits of games to design for social innovation and their explanation (Bayrak, 2019).
Fig. 12 Relations among merits of games, challenges of design for social innovation and some common approaches in the context of designing for social innovation (Bayrak, 2019).
Fig. 13 Framework of persuasive technologies (top) compared to traditional persuasion (bottom) (Berdichevsky and Neuenschwander, 1999).
Fig. 14 The Ecosystem for Designing Games Ethically, EDGE in short (Sandovar et al., 2016).
Fig. 15 Conceptual mapping of gamification ethics according to Kim and Werbach (2016).
Fig. 16 Ethical issues in crowdsourcing clustered in the knowledge cluster (Standing and Standing, 2018).
Fig. 17 Ethical issues in crowdsourcing clustered in the economic and relational cluster (Standing and Standing, 2018).
Fig. 18 Complexity Limit competing for the World of Warcraft Race to World First on Ny'alotha, The Waking City.
Fig. 19 The homepage of the original web application, displaying the project progression bar and the objective statement.
Fig. 20 The investigation interface from the original web application. Participants could report the kind of document and whether it was interesting to investigate further or not.
Fig. 21 An MP's page on Investigate Your MP's Expenses, showing their mugshot and their documents.
Fig. 22 "Gamification" between game and play, whole and parts (Deterding et al., 2011).
Fig. 23 After the puzzle has been completed, the system rewards the player with an animation of triumph and records the score to update the player ranking.
Fig. 24 An example of protein folding in the game in which all visual clues are observable.
Fig. 25 One of the scientists behind Foldit (left) explaining the spike protein (right) contained in the coronavirus puzzle. See the video at: https://youtu.be/hS5g-2KhoSk.
Fig. 26 Google search trends on "Foldit coronavirus" (blue) and "Foldit" (red). The coronavirus puzzle was launched in February and ended in May.
Fig. 27 Paper resonance of Predicting protein structures with a multiplayer online game (https://www.nature.com/articles/nature09304/metrics).
Fig. 28 A shot during the gameplay of EVE Online.
Fig. 29 The interface of Project Discovery on EVE Online. On the left, there is the image that needs to be classified. On the right, there are the typologies of proteins. The ones marked in blue are those that the player selected.
Fig. 30 The interface of Project Discovery on EVE Online, showing the rewards collected after completing an image classification.
Fig. 31 A visual representation of the three approaches presented to combine crowdsourcing and games.
Fig. 32 User profile in Biotracker. Badges show user achievements (Bowser et al., 2013).
Fig. 33 Independent t-test results of the research. In yellow, relevant results (Bowser et al., 2013).
Fig. 34 A process model of volunteers and scientists involved in a citizen science project. Once a personal interest is addressed, the correct motivational triggers can ensure long-term participation (Rotman et al., 2012).
Fig. 35 The gameplay of Eyewire. On the left, a three-dimensional representation of a neuron; on the right, the two-dimensional image that the player has to map.
Fig. 36 Screenshots from the KpopRally application (Wang et al., 2020).
Fig. 37 Reasons why respondents participate in Foldit on a sample of 37 respondents. The number at the top of each bar represents the number of respondents providing that response (Curtis, 2015).
Fig. 38 A comparison of the motivation between gamers and citizen science volunteers (Bowser et al., 2013).
Fig. 39 "Check-in" to a Willow oak in Floracaching. Users may submit different types of data, such as the state of a plant (i.e., first flower, full leaf) or a photograph (Bowser et al., 2014).
Fig. 40 The sum of the research process.
Fig. 41 The fundamental elements of crowdsourcing design according to Simperl.
Fig. 42 Elements of the games' consumption process according to the MDA and their design counterparts (Hunicke et al., 2004).
Fig. 43 Designer and player have different perspectives over the game (Hunicke et al., 2004).
Fig. 44 Diegetic connectivity uses story to connect various aspects of the game to improve player motivation, engagement and task outcomes (Lane and Prestopnik, 2017, p. 231).
Fig. 47 Taskification design framework for crowdsourcing activities. In italics, the elements inferred from the guidelines review.
Fig. 48 The pilot protocol, composed of introduction, workshop and feedback.
Fig. 49 Teams and their composition.
Fig. 50 The arcade that enables access to Borderland Science.
Fig. 51 The minigame interface. The player has to match similar tiles by moving them up (the yellow tiles mark where tiles have been moved).
Fig. 52 The structure of the workshop, defined by its main activity.
Fig. 53 A model for brainstorming (Gray et al., 2010).
Fig. 54 The virtual board designed for the workshop, completed by team B.
Fig. 55 A close-up of the defining phase from team B.
Fig. 56 A highlighted element redesigned for Pilot 2 in the virtual board. Relative subcategories of the element inferred from guidelines were displayed at this point.
Fig. 57 Sketches from Team A workshop. From top to bottom: Tess explains to Joel how to use the device, Joel finds a functioning server, the in-game visualization of the task with the player's accuracy graph.
Fig. 58 Composite images from Team B workshop. From top to bottom: the "hell employee" job description, the document transcription interface and the investigation section.
Fig. 59 Composite images from Team B workshop. From top to bottom: the criminal identification section and the "mission complete" alert showing the reward.
Fig. 60 Composite images from team C workshop. Top: a conspiracy theorist with an alien. Right: a cow with the moon mark on the back. Bottom: an example of the reward gained by the player through the mission.
Fig. 61 A schema of the iterative design process.
Fig. 62 A comparison between vertical (left) and horizontal (right) framework partition.
Fig.