CrowdForge: Crowdsourcing Complex Work

Aniket Kittur, Boris Smus, Susheel Khamkar, Robert E. Kraut
Carnegie Mellon University
5000 Forbes Ave, Pittsburgh, PA 15213
[email protected], [email protected], [email protected], [email protected]

ABSTRACT
Micro-task markets such as Amazon's Mechanical Turk represent a new paradigm for accomplishing work, in which employers can tap into a large population of workers around the globe to accomplish tasks in a fraction of the time and money of more traditional methods. However, such markets have been primarily used for simple, independent tasks, such as labeling an image or judging the relevance of a search result. Here we present a general-purpose framework for accomplishing complex and interdependent tasks using micro-task markets. We describe our framework, a web-based prototype, and case studies on article writing, decision making, and science journalism that demonstrate the benefits and limitations of the approach.

ACM Classification: H5.m. Information interfaces and presentation (e.g., HCI): Miscellaneous.

General terms: Algorithms, Design, Economics, Human Factors.

Keywords: crowdsourcing, Mechanical Turk, human computation, distributed processing, MapReduce, coordination.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. UIST'11, October 16–19, 2011, Santa Barbara, CA, USA. Copyright © 2011 ACM 978-1-4503-0716-1/11/10... $10.00.

INTRODUCTION
Crowdsourcing has become a powerful mechanism for accomplishing work online. Hundreds of thousands of volunteers have completed tasks including classifying craters on planetary surfaces (clickworkers.arc.nasa.gov), deciphering scanned text (recaptcha.net), and discovering new galaxies (galaxyzoo.org). Crowdsourcing has succeeded as a commercial strategy for accomplishing work as well, with companies accomplishing work ranging from crowdsourcing t-shirt designs (Threadless) to research and development (Innocentive).

One of the most interesting developments is the creation of general-purpose markets for crowdsourcing diverse tasks. For example, in Amazon's Mechanical Turk (MTurk), tasks range from labeling images with keywords to judging the relevance of search results to transcribing podcasts. Such "micro-task" markets typically involve short tasks (ranging from a few seconds to a few minutes) which users self-select and complete for monetary gain (typically from 1-10 cents per task). These markets represent the potential for accomplishing work in a fraction of the time and money required by more traditional methods [5][14][22][25].

However, the types of tasks accomplished through MTurk have typically been limited to those that are low in complexity, independent, and require little time and cognitive effort to complete. The typical task on MTurk is self-contained, simple, repetitive, and short, requiring little specialized skill and often paying less than minimum wage. Amazon calls its tasks HITs, for human intelligence tasks. During February 2010, we scraped descriptions of 13,449 HIT groups posted to Mechanical Turk. The modal HIT paid $0.03 US. Examples of typical tasks include identifying objects in a photo or video, de-duplicating data, transcribing audio recordings, or researching data details, with many tasks taking only a fraction of a minute to complete.

In contrast to the typical tasks posted on Mechanical Turk, much of the work required in many real-world work organizations, and even in many temporary employment assignments, is often much more complex and interdependent, and requires significant time and cognitive effort [18]. Such tasks require substantially more coordination among co-workers than do the simple, independent tasks seen on micro-task markets. The impact of micro-task markets would be substantially greater if they could also be applied to more complex and interdependent tasks.

Consider, for example, the task of writing a short article about a locale for a newspaper, a travel guide, a corporate retreat, or an encyclopedia. This is a complex and highly interrelated task that involves many subtasks, such as deciding on the scope and structure of the article, finding and collecting relevant information, writing the narrative, taking pictures, laying out the document, and editing the final copy. Furthermore, if more than one person is involved, they need to coordinate in order to avoid redundant work and to make the final product coherent. Many kinds of tasks, ranging from researching where to go on vacation to planning a new consumer product to writing software, share the properties of being complex and highly interdependent, requiring substantial effort from individual contributors. These challenges are exacerbated in crowdsourcing markets such as MTurk, in which tasks must be simple enough for workers to easily learn and complete, and workers have low commitment and unknown expertise and skills.

Here we present the CrowdForge framework and toolkit for crowdsourcing complex work. Our first contribution is conceptualizing a framework for accomplishing complex and interdependent tasks from many small contributions in crowdsourcing markets. The framework provides a small set of task primitives (partition, map, and reduce) that can be combined and nested to address many crowdsourcing problems. Our second contribution is implementing the framework in a software toolkit, and using this toolkit to investigate the applicability, strengths, and weaknesses of the framework across a diverse range of tasks.

MECHANICAL TURK
Amazon's Mechanical Turk is a "marketplace for work that requires human intelligence" in which employers post generally small and self-contained tasks that workers across the globe can complete for monetary reward. MTurk encompasses a large and heterogeneous pool of tasks and workers; Amazon claims over 100,000 workers in 100 different countries, and as of the time of writing there were approximately 80,000 tasks available.

There has been an increasing amount of research on Mechanical Turk recently. A large number of studies have examined the usefulness of MTurk as a platform for collecting and evaluating data for applications including machine translation [4], image databases [7], and natural language processing [22]. There have also been studies examining the use of MTurk as a platform for experiments in perception [5][11], judgment and decision making [13], and user interface evaluation [14]. Sorokin & Forsyth used Mechanical Turk to tag images in computer vision research and suggested voting mechanisms to improve quality [25]. Little et al. [17] introduced TurKit, discussed later in this paper, a toolkit for iterative tasks in MTurk.

APPROACH
Our goal is to support the coordination dependencies involved in complex work done using micro-task markets. Most tasks on these markets are simple, self-contained, and independent. The audio transcription tasks posted by Castingwords.com are a rare exception. Castingwords breaks up an audio stream into overlapping segments, and workers are employed to generate transcriptions from each audio segment. These transcriptions are then verified by other workers, whose work is later automatically put together into a complete transcription. Unlike the standard micro-market task, the disaggregation of an audio file into smaller transcription tasks, and the use of a second wave of workers to verify the work done by the transcribers, involves the producer/consumer dependency identified in [18]. It also provides a simple model for many of the elements of our approach. For example, the transcription task can be broken up into the following elements:

  • A pre-specified partition that breaks up the audio into smaller subtasks
  • A flow that controls the sequencing of the tasks and transfer of information between them
  • A quality control phase that involves verification of one task by another worker
  • Automatic aggregation of the results

The TurKit toolkit for MTurk [17] extends some of these elements by enabling task designers to specify iterative flows. Little and colleagues use as an example a text identification task in which multiple workers' outputs are voted on and the best is sent to new workers, whose work is then voted on, and so forth. This can also be characterized as a producer/consumer dependency [18], with a flow between a production task (text identification) and a quality control task (voting) that serves to aggregate the results.

Our goal is to generalize these elements into a framework that supports the crowdsourcing of highly complex work. Specifically, our framework aims to support:

  • Dynamic partitioning, so that workers themselves can decide a task partition, with their results in turn generating new subtasks (rather than the task designer needing to fully specify partitions beforehand)
  • Multi-level partitions, in which a task can be broken up by more than one partition
  • Complex flows involving many tasks and workers
  • A variety of quality control methods, including voting, verification, or merging items
  • Intelligent aggregation of results, both automatically and by workers
  • A simple method for specifying and managing tasks and flows between tasks

To accomplish these goals our approach draws on concepts from both organizational behavior [18] and distributed computing [1][24]. Malone and Crowston [18] note the general parallels between coordination in human systems and computer systems. Key challenges in both organizations and distributed computing include partitioning work into tasks that can be done in parallel, mapping tasks to workers/processors, managing the dependencies between them, and maintaining quality controls [1][2][18][24].

Figure 1: Overview of framework for splitting up and recombining complex human computation tasks.
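The composition of the partition, map, and reduce primitives described above can be sketched in code. The following is a minimal illustration of the idea, not the authors' CrowdForge implementation (which posts each step to MTurk as HITs); here the worker steps, and the `outline`, `write`, and `merge` behaviors for an article-writing task, are hypothetical stand-ins simulated with plain Python functions.

```python
from typing import Callable, List

def run_flow(task: str,
             partition: Callable[[str], List[str]],
             map_step: Callable[[str], str],
             reduce_step: Callable[[List[str]], str]) -> str:
    """Partition a complex task into subtasks, map each subtask to a
    (simulated) worker, and reduce the results into one product."""
    subtasks = partition(task)                 # e.g., outline the article
    results = [map_step(s) for s in subtasks]  # one worker per subtask
    return reduce_step(results)                # aggregate into final copy

# Simulated worker behaviors for an article-writing task.
outline = lambda topic: [f"{topic}: history", f"{topic}: geography"]
write = lambda section: f"[paragraph about {section}]"
merge = lambda paragraphs: "\n\n".join(paragraphs)

article = run_flow("Pittsburgh", outline, write, merge)
print(article)
```

Because the partition step itself returns data, the same pattern supports the dynamic, multi-level partitions listed above: a subtask produced by one partition can be fed back into `run_flow` as a new task, and a voting or verification step can be interposed before `reduce_step` as a quality control phase.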
