Low-effort crowdsourcing: Leveraging peripheral attention for crowd work

Abstract

Crowdsourcing systems leverage short bursts of focused attention from many contributors to achieve a goal. By requiring people's full attention, existing crowdsourcing systems fail to leverage people's cognitive surplus in the many settings for which they may be distracted, performing or waiting to perform another task, or barely paying attention. In this paper, we study opportunities for low-effort crowdsourcing that enable people to contribute to problem solving in such settings. We discuss the design space for low-effort crowdsourcing, and through a series of prototypes, demonstrate interaction techniques, mechanisms, and emerging principles for enabling low-effort crowdsourcing.

Figure 1: With a front-facing camera, the emotive voting interface 'likes' an image if you smile while the image is on the screen, and 'dislikes' if you frown. Faces are blurred for anonymity.

Introduction

Crowdsourcing and human computation systems leverage the cognitive surplus (Shirky 2010) of large numbers of people to achieve a goal. Existing systems accomplish this by providing entertainment through games with a purpose (von Ahn and Dabbish 2008), requiring a microtask for accessing content (von Ahn et al. 2008), and requesting tasks from workers and volunteers without requiring long-term commitment from contributors. By capturing people's attention and leveraging their efforts, systems such as the ESP game, reCAPTCHA, and Mechanical Turk enable useful work to be completed by bringing together episodes of focused attention from many individuals.

While crowdsourcing systems often remove the need for sustained, long-term participation, they still require people's full attention while performing a task in the system. As such, existing crowdsourcing systems are not designed to leverage people's cognitive surplus in the many scenarios for which they may be distracted, performing or waiting to perform another task, or barely paying attention. For example, possible scenarios arise when people are on the go (e.g., walking to the bus), waiting (e.g., for a page to load in a mobile browser; for a train to arrive), or performing a boring or mindless task (e.g., calling customer service; watching TV). Such scenarios create situational impairments (Sears et al. 2003) and otherwise demand people's attention in ways that limit people's ability and interest in performing an auxiliary task. For a person walking to catch a bus, the act of taking out their phone, opening an app, and then performing a simple task will likely already require more effort than people are willing to give in that situation. Leveraging any cognitive surplus people may have in such settings will thus require accounting for people's situational context, and introducing novel interaction techniques and mechanisms that can enable effective contributions in spite of it.

In this paper, we explore opportunities in low-effort crowdsourcing that enable contributions to crowdsourcing efforts even in situations when people are distracted, performing or waiting to perform another task, or peripherally paying attention. Low-effort crowdsourcing is possible through a mix of low-granularity tasks, unobtrusive input methods, and an appropriate setting. We introduce interaction techniques and mechanisms that enable people to do useful work with lower fidelity input and output, and mechanisms for inserting low-effort crowd work into people's existing situational context.

We take particular advantage of low-effort, incidental, and peripheral forms of contribution that arise naturally or can otherwise be embedded within an existing interaction. As one example, consider a display of potentially funny images scrolling across a screen (see Figure 1). At a glance, a person may be able to tell if a particular image is catchy or funny, and their natural reaction (e.g., turning their head; laughing) can be tracked with a camera to serve as a signal about the effect the image may have on people. This 'task' of looking at scrolling images at a glance can be performed while people walk by such a public display, or, on their personal displays, while they are on the phone or are otherwise distracted. With appropriate design, such tasks may not only result in useful data for immediate use or for machine learning, but can also enrich people's lives by filling in dull moments and relieving boredom, during and in between tasks and objectives.

This paper makes the following contributions: (1) the design space for low-effort crowdsourcing, drawing from literature, existing systems, and early prototyping experience; (2) a series of prototype systems that we built to demonstrate what's possible with low-effort crowdsourcing systems; and (3) interaction techniques, mechanisms, and emerging principles for low-effort crowdsourcing that can apply across problem domains and user scenarios. We focus in this paper on demonstrating novel techniques and discussing the opportunities and challenges in low-effort crowdsourcing, and not on empirical studies to validate any particular prototype. That said, in a later section we discuss lessons learned from our prototyping experience, which will inform the design of future low-effort crowdsourcing systems.
Related Work

A popular and effective approach for promoting participation in crowdsourcing efforts is to embed useful work in activities in which people willingly partake. For example, the ESP game (von Ahn and Dabbish 2008) collects labels for images as a side effect of people playing an enjoyable game. reCAPTCHA (von Ahn et al. 2008) digitizes books as a side effect of people gaining access to valuable content by verifying that they are human. Duolingo translates content on the web as a side effect of people learning a new language. With low-effort crowdsourcing, we extend this approach by designing tasks and interactions that people partake in willingly despite their situational impairment, while also generating useful data as a side effect.

Perhaps the lowest level of effort required for participation in crowdsourcing arises through opportunistic community sensing (Lane et al. 2008), in which applications collect data passively as participants go about their day. For instance, community sensing applications collect data from smartphones to detect and predict urban mobility patterns (Vent 2013), noise pollution (Kanjo 2010), and traffic conditions (Matthews 2013). While useful for specific applications that garner widespread participation, passively collecting data is limited by the regularity of participants' mobility patterns (Gonzalez, Hidalgo, and Barabasi 2008). Furthermore, opportunistic sensing is limited to the use of machine sensors and cannot use human sensing capabilities.

Lightweight feedback mechanisms such as the Like button on Facebook, the Retweet button in Twitter, and upvoting in Reddit can be viewed as forms of low-effort crowdsourcing through which people indirectly curate content in the midst of consuming it. In such examples, the feedback mechanisms enable low-effort participation by collecting coarse-grained feedback. This form of crowdsourcing on the web was previously described as incidental crowdsourcing (Organisciak 2013). Incidental crowdsourcing formalized these types of contributions as fundamentally unobtrusive, meaning they exist in the periphery of other tasks, and non-critical, both for the user in completing them and the system in relying on them. It was also noted that the nature of incidental crowdsourcing contributions is generally descriptive of existing information objects, tends toward low-granularity contributions, and favors choices rather than statements. Our paper differs from this earlier work in exploring low-effort crowdsourcing in broader situational, interactive, and technological contexts.

Unobtrusiveness is an important consideration because even lightweight feedback mechanisms can be intrusive if they disrupt the user from their primary task. As one example, alerts within mobile applications asking a user to "rate my app" may disrupt the user experience and annoy the user (Friedman 2013). In designing low-effort crowdsourcing interactions, we face a similar challenge in that we aim to leverage people's efforts while they are involved with another task. But in addition, the task we wish them to perform may or may not be related to their primary task, which requires additional consideration.

A few recent works explore opportunities for crowdsourcing on-the-go by replacing traditional lock screens on mobile phones with tasks that can be completed in a few seconds. For example, Twitch (Vaish et al. 2014) replaces the lock screen with census, photo ranking, and data verification tasks. Similarly, Slide to X (Truong, Shihipar, and Wigdor 2014) replaces the lock screen with tasks that collect geographical and personal health information. While these systems share some of the same design challenges as our work in that tasks need to be doable in short durations of time and have low demands on cognitive load, their requirements differ in that such applications can interrupt the user from their primary task briefly (in this case, unlocking and using their phone). In contrast, our work on low-effort crowdsourcing seeks to enable contributions within the natural flow of interaction, with the goal of collecting useful data and enriching people's existing interactions without disruption. Low-effort crowdsourcing systems can thus promote contributions through the duration of another task, and apply to a non-overlapping set of scenarios.

Design Space

We identify a number of design opportunities and challenges for enabling crowd work by people who may be distracted, performing or waiting to perform another task, or peripherally paying attention. This section explores the design space for low-effort crowdsourcing by discussing the settings, tasks, and input methods that surface these design opportunities and challenges.

Settings

We are particularly interested in settings in which people seek to perform some primary task but have the cognitive surplus to potentially contribute to another task in the periphery.
We focus here on three example scenarios: people on the go (e.g., walking; biking; driving), waiting (e.g., for a page to load in a mobile browser; for a train to arrive; for an ad to become skippable on YouTube), and performing a task with low demands on cognitive load (e.g., calling customer service; watching TV; going for a run).

When people are on the go, their primary goal is getting to their destination. In so far as another task does not get in the way of this, people may be willing to contribute, for example by recognizing and reporting phenomena in their environment. But even relatively lightweight interactions can delay a person who is on the go or introduce safety concerns. For example, consider the act of reporting a pothole through a mobile app provided by the city. Even the act of taking out the smartphone, opening the app, and entering a simple report may act as a channel factor (Ross and Nisbett 1991), be too much of an inconvenience, and ultimately dissuade a person from reporting the issue while on the go. As another example, an application like Waze, which asks users to report issues on the road, can provide tremendous value to the community but can also distract the driver and raise safety concerns (Roose 2013). For drivers, designing non-distracting interactions is a necessity. These examples pose a design challenge for low-effort crowdsourcing systems in that such settings may require lighter-weight interactions that are less obtrusive, that place even lower demands on one's cognitive load, and that are safe.

When people are waiting, there are often periods of 'dead time' during which people cannot usefully switch context to perform another task and are thus stuck waiting. For instance, it may not be worthwhile to switch to another tab while waiting for a website that is slow to load, or to walk away while a YouTube video ad is playing. This suggests design opportunities to insert microtasks into such periods of waiting so as to elicit useful work while alleviating the boredom or dullness of dead time. However, as people are not ultimately interested in what they are waiting for, we hypothesize that such microtasks should vanish and be out of the way as soon as the desired content is loaded or whenever there is no longer a need to wait.
When performing primary tasks with low demands on cognitive load, people are most likely to have the cognitive surplus to contribute to another task. But their willingness to contribute may depend not only on their cognitive load, but on how the proposed secondary task enhances or interferes with their primary task. For example, while a person watching TV can potentially spare their attention to do another task and even be okay with missing some of the content, they may only participate if doing so does not hinder the enjoyment and leisurely qualities of watching. As another example, a person on the phone with customer service or a salesman may be bored and be interested in completing low-effort tasks during a call to alleviate boredom, but would not want to appear rude and distracted. The design of low-effort crowdsourcing interactions must thus also be appropriate for the primary task, both with respect to the purpose and value of the primary task, and to social considerations and norms.

Tasks

Our prototyping experience and exploration into appropriate settings for low-effort crowdsourcing revealed a number of considerations for the design of such crowdsourcing interactions:

• Low time commitment and cognitive load. To enable users to contribute effectively while their attention is elsewhere or distributed, the crowd work being requested should be broken down into microtask units that can be completed very quickly, or demand low cognitive load so that they can be done while the primary task is ongoing. As interruptions can easily get in the way of the primary tasks people are trying to complete (McFarlane and Latorella 2002; Gillie and Broadbent 1989; Trafton et al. 2003), low-effort crowdsourcing systems still need to be deployed in appropriate settings in which the crowd work can be naturally inserted into the primary task.

• Design the entire interaction. Task interaction includes not only doing the task, but also prompting the user about what the task is, which can demand even more attention. The interaction design problem thus encompasses the entire process, and is not limited to the design of the task.

• Keep the task secondary. Tasks should grant users control and freedom; that is, they should never stand in the way of the user's primary task, but rather remain an option available to potential participants.

We also identified considerations that may encourage people to participate in low-effort crowdsourcing:

• Enhance the primary experience. Tasks should strive to enhance people's primary task experience, e.g., by filling in dull moments and dead time.

• Facilitate secondary goals. Effective tasks may be fun and interesting, but may also enable people to contribute to efforts they care about but would not contribute to should doing so hinder their ability to perform their primary task (e.g., reporting a pothole while on the go).

• Leverage the primary experience. Tasks can potentially take advantage of what people are already doing through their primary task, and provide a medium for contributing that didn't previously exist.

Input methods

When designing low-effort crowdsourcing systems, we need to account for people's situational impairments and the social and physical restrictions they may place on interaction design. In particular, people may be restricted to a specific input device or medium, and their inputs may be less precise. For example, a person may not wish to take out their phone while walking, or use a keyboard/mouse while watching TV. The design challenge is to identify input mechanisms through which people can contribute that work around such restrictions.

In our prototypes, we introduce interaction techniques and mechanisms that take particular advantage of sensing and input technologies that are available ubiquitously through mobile devices, embedded in developing wearable technologies (e.g., Google Glass and smart watches), and in sensor-rich, Internet-connected smart devices. To communicate with the user about the task (perceptual interaction), we can embed task information within visual, audio, and haptic feedback. For users to perform tasks on the system (physical interaction), we can leverage not only traditional input mechanisms such as the keyboard, mouse, speech, and gesture, but also track physical actions with activity sensors (e.g., head movement; facial expressions). Through our prototype examples, we demonstrate how these input modalities can enable people to execute tasks through natural and incidental interactions while still performing their primary task.

Figure 2: Wait Extension: a Chrome extension that allows users to perform simple tasks (e.g., odd image selection) while a page loads.

Figure 3: Knock for Good: a mobile app that allows people to report local problems on-the-go by knocking on their phone while walking by the problem.

Prototype Examples

Based on our understanding of design challenges and opportunities from our exploration of the design space, we developed five prototype low-effort crowdsourcing applications. These applications demonstrate interaction techniques and mechanisms for enabling participation in low-effort crowd work in a variety of scenarios.

Wait Extension

A US adult spends an average of 61 hours a month on the Internet through their smartphones and personal computers (Nielsen 2014). While browsing the web, there are many short episodes of waiting during which a user waits for the webpage to load, or is forced to consume video ads to access content of interest. To leverage the dead time in these periods of waiting, we developed Wait Extension (Figure 2), a Chrome browser extension that allows users to perform simple tasks during slow page loads and video ads.

Wait Extension prefetches crowdsourcing tasks that can be completed quickly. Tasks can be launched at any time from the address bar of the browser, without the need to switch tabs or to leave the page the user is trying to load. Any task that can be completed very quickly and without significant cognitive effort can potentially be used; in our prototype, users of the extension choose the odd picture among a set of images along the lines of Matchin (Hacker and Von Ahn 2009) and Kitten Wars (Kittenwar 2014), which can yield useful data for developing computer vision algorithms. To further encourage participation, future prototypes can also allow users to choose tasks that are of particular interest to them, per content (e.g., interesting subject matter) and purpose (e.g., for citizen science (Lintott et al. 2008; Nov, Arazy, and Anderson 2011)).

By embedding tasks in the browser, Wait Extension allows a user to contribute through dead time that occurs during page loads, compulsory ads, or even while consuming content. A core idea is that it allows the user to contribute during dead time without having to leave the context of their primary task. In doing so, inserted crowd work interrupts the user no more than they are already being interrupted by the wait. While prototyped on personal computers, creating a version of Wait Extension for mobile devices can similarly leverage these same principles.
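To make the prefetch-and-surface behavior concrete, below is a minimal sketch, in Python for brevity, of the task-selection logic such an extension might use: only surface a prefetched microtask when the estimated wait is long enough, and dismiss it the moment the page finishes loading. The task list, time estimates, and thresholds are illustrative assumptions, not the prototype's actual implementation (which runs as a Chrome extension against the browser's APIs).

```python
import random

# Hypothetical prefetched microtasks: (task_id, prompt, expected_seconds).
# Names and timing values are stand-ins, not from the prototype.
PREFETCHED_TASKS = [
    ("odd-one-out-17", "Pick the odd image out of these four", 5),
    ("photo-rank-03", "Which of these two photos is funnier?", 4),
]

MIN_WAIT_TO_SHOW = 2.0  # don't bother the user during very short waits


def pick_task(estimated_wait_seconds):
    """Return a prefetched task that fits in the estimated dead time, or None."""
    if estimated_wait_seconds < MIN_WAIT_TO_SHOW:
        return None
    candidates = [t for t in PREFETCHED_TASKS if t[2] <= estimated_wait_seconds]
    return random.choice(candidates) if candidates else None


def on_page_loaded(active_task, answer):
    """Dismiss the task as soon as the wait ends; keep any answer given so far."""
    if active_task is not None:
        print(f"Dismissing task {active_task[0]}; recorded answer: {answer!r}")


# Example: a slow page load estimated to take about 6 seconds.
task = pick_task(6.0)
if task:
    print("Showing task:", task[1])
on_page_loaded(task, answer=None)
```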
Knock for Good

Increasingly, city agencies are recognizing opportunities for mobilizing citizens to report local problems such as broken lights and potholes, for example by calling 311 or through mobile apps such as SeeClickFix and FixMyStreet. However, as discussed earlier, even citizens who care about such issues and notice them may fail to report them if they are on-the-go, as the act required for reporting a problem may get in the way of their primary goal.

To address such issues, we developed Knock for Good, a prototype mobile app that allows users to report local problems on-the-go by simply knocking on their phone while walking by the problem (Figure 3). A user first selects a problem they care about (e.g., potholes). While on-the-go, the app listens for knocks, and when one is detected, automatically reports the chosen problem along with the geographical coordinates of the user. Since knocks can be accurately detected using the accelerometer (e.g., see Knock to unlock (Warren 2013)), this physical action allows a user to report the locations of problems on-the-go, without having to take out their phone or having to stop to interact with the phone directly.

Knock for Good introduces sensing through actuation as a design pattern in which physical gestures are used to register particular events of interest and enable lightweight reporting of phenomena in physical environments. By using physical gestures, sensing through actuation can significantly reduce the time, effort, and interaction required to make a useful contribution.
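As a rough illustration of sensing through actuation, the sketch below flags a 'knock' whenever the accelerometer magnitude briefly spikes above a threshold and then files a report. The threshold, refractory period, and report_problem function are assumptions made for illustration; the prototype's actual detector may differ substantially.

```python
import math

KNOCK_THRESHOLD = 18.0   # m/s^2; assumed value that would need per-device tuning
REFRACTORY_SAMPLES = 20  # ignore samples right after a knock to avoid double counts


def detect_knocks(accel_samples):
    """Yield sample indices where a short, sharp spike suggests a knock.

    accel_samples: iterable of (x, y, z) accelerometer readings in m/s^2.
    """
    cooldown = 0
    for i, (x, y, z) in enumerate(accel_samples):
        magnitude = math.sqrt(x * x + y * y + z * z)
        if cooldown > 0:
            cooldown -= 1
            continue
        if magnitude > KNOCK_THRESHOLD:
            cooldown = REFRACTORY_SAMPLES
            yield i


def report_problem(problem_type, latitude, longitude):
    # Placeholder for submitting the user's chosen problem and current location.
    print(f"Reported {problem_type} at ({latitude:.5f}, {longitude:.5f})")


# Example: mostly walking noise around gravity (~9.8 m/s^2) with one sharp spike.
samples = [(0.1, 0.2, 9.8)] * 50 + [(2.0, 1.0, 25.0)] + [(0.1, 0.2, 9.8)] * 50
for idx in detect_knocks(samples):
    report_problem("pothole", 37.42990, -122.16900)
```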
Emotive voting

Our next prototype takes low-fidelity input one step further. Using a camera, emotive voting (Figure 1) collects information on funny and salient images through people's natural emotional responses. To do this, the system displays a scrolling stream of potentially funny images, either on a public display or on a personal device. When a user glances at the images, the system continually observes the user's facial expression. When a user smiles or laughs, the system takes this emotional response as a vote that the image they are viewing is actually funny.

Emotive voting demonstrates the idea of capturing people's natural, instinctive reactions as useful input for crowd work. Since people need only glance at a display to be providing useful data, the demands on their attention and effort are minimal, and the system can potentially be deployed both in situations when people are on-the-go (e.g., just walking past the display) or while waiting (e.g., for a train to arrive). While appealing as a mechanism for low-effort crowdsourcing, there are a number of social and technical challenges in using facial recognition and other technologies that can capture people's reactions as input. For example, some people do not express amusement outwardly, and lighting and technology quirks can result in false positives. Assuming the absence of systematic bias, however, many such problems are mitigated or resolved by aggregating large quantities of user input. For deployments on public displays, a perhaps more serious concern is people's comfort with technologies that continually watch their reactions, and potential violations of privacy. Deploying such public systems effectively may thus require identifying use cases in which they are appropriate, and tailoring the available technology to alleviate potential concerns.
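A minimal sketch of the smile-as-vote idea follows, assuming OpenCV's bundled Haar cascades for face and smile detection. The paper does not specify the prototype's actual vision pipeline, so the cascades, detector parameters, and image identifier here are stand-ins.

```python
import cv2
from collections import Counter

# Assumes OpenCV's stock Haar cascades; treat this as an illustrative stand-in
# for whatever expression-detection pipeline a real deployment would use.
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_smile.xml")

votes = Counter()  # image_id -> number of smile 'likes'


def register_reaction(frame, current_image_id):
    """Count a 'like' for the on-screen image if any detected face is smiling."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        face_region = gray[y:y + h, x:x + w]
        smiles = smile_cascade.detectMultiScale(face_region, scaleFactor=1.7, minNeighbors=20)
        if len(smiles) > 0:
            votes[current_image_id] += 1
            break


# Example loop: watch the webcam while a hypothetical image "meme_42" is displayed.
camera = cv2.VideoCapture(0)
for _ in range(100):  # a few seconds' worth of frames
    ok, frame = camera.read()
    if not ok:
        break
    register_reaction(frame, "meme_42")
camera.release()
print(votes)
```

In a real deployment, per-image votes like these would be aggregated across many passersby, which is also how the false positives discussed above can be averaged out.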
Awesomer Meme

A Telephia survey (Gibson 2014) found that Americans average 13 talking hours a month (the 18-24 age group averaged 22 hours). Since many phone calls do not demand people's full cognitive attention, people often continue to perform other tasks while talking (e.g., washing the dishes, typing an email, etc.). The Awesomer Meme prototype demonstrates how to leverage this cognitive surplus for identifying the best memes (Figure 4), and, perhaps surprisingly, it does so by using voice as the input modality while a person is on a call.

Figure 4: The awesomeR meme interface lets a user choose the better meme via an affirmative grunt (i.e., 'yeah' or 'uh huh') while he/she is talking to someone else.

Awesomer Meme encodes verbal reactions, such as "yeah, ok" and "uh huh", as a method of input. Using the microphone and speech-to-text, the prototype allows users to rank images (memes in this example) by listening to the affirmative grunts that a person gives when they are listening to somebody, or pretending to. Users are shown A versus B tasks; saying "uh-huh" selects one option while saying "yeah, ok" selects another. To the person on the other end of the line, either input would appear natural in conversation.

Awesomer Meme demonstrates that crowd work can even be embedded through the communication modality used by the primary task. Furthermore, it demonstrates a design pattern for side effect equivalence, wherein the meaning of different inputs is discriminating for the crowd task but the same for the primary task. This approach may be generally useful in situations where the input for the crowd task needs to be natural and appropriate for the primary task.

Binary Tweet

In our previous prototype examples, the data being collected are effectively binary (e.g., indicating the presence of a problem, choosing among two options). For our final prototype, we explore ways to enable people to communicate a complex idea through low-fidelity input.

Binary Tweet (Figure 5) allows a user to generate tweets using a choice-based interface. Using the microblogging API from Twitter for a colloquial corpus, we built a variant of an n+1 Markov model. A user types in a seed word and the next word is suggested. Rather than automatically choosing the highest-probability term, the system provides users with a small number of choices. The user can select among those choices by keying up or down, and confirming to move on. 'NONE' and 'DONE' options allow a user to get a different set of word choices or end the sentence, respectively.

Figure 5: Binary Tweet generates sentences through choice-based typing. The program prompts a user to choose one of two words that should come next, allowing them to generate a tweet through a sequence of binary interactions.

We found that the ability to interact with this phrase-building system seems to foster flow, as we steer the sentence through the rapidly branching paths. In general, we hypothesize that low-fidelity input mechanisms for enabling rich communications can foster flow as well as enable users to participate in situations where richer input modalities are unavailable, and through devices for which low-fidelity input is standard (e.g., a smartwatch).
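The following is a minimal sketch of the choice-based typing idea, using a simple first-order Markov (bigram) model over a toy corpus. The actual prototype used a colloquial corpus drawn from Twitter and a variant of an n+1 Markov model, so the corpus, suggestion count, and stopping rules here are illustrative assumptions; the 'NONE' reshuffle option is omitted for brevity.

```python
from collections import Counter, defaultdict

# Tiny stand-in corpus; the prototype drew a colloquial corpus from Twitter.
corpus = [
    "i am so ready for the weekend",
    "i am not ready for monday",
    "so ready for coffee right now",
    "the weekend is almost here",
]

# First-order Markov model: word -> counts of words that follow it.
successors = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for prev, nxt in zip(words, words[1:]):
        successors[prev][nxt] += 1


def suggest_next(word, k=2):
    """Offer the k most likely next words instead of auto-picking the best one."""
    return [w for w, _ in successors[word].most_common(k)]


def build_tweet(seed, choose):
    """Grow a sentence by repeatedly asking `choose` to pick among suggestions."""
    sentence = [seed]
    while len(sentence) < 20:  # keep it tweet-sized
        options = suggest_next(sentence[-1])
        if not options:
            break
        picked = choose(options)  # a 'DONE' answer ends the sentence early
        if picked == "DONE":
            break
        sentence.append(picked)
    return " ".join(sentence)


# Example: simulate a user who always accepts the first suggestion.
print(build_tweet("i", choose=lambda opts: opts[0]))
```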
Discussion

Low-effort crowdsourcing can extend and be applied beyond the examples discussed in this paper. Based on our research findings and experience exploring prototypes, we learned that it is possible to crowdsource tasks using the peripheral attention of a crowd. In the process of exploration, our prototype designs addressed the scope of the design space: settings, tasks, and input methods. Since low-effort crowdsourcing is still a nascent area, in this section we discuss some of its open questions and limitations.

The feasibility of embedding tasks in our daily lives raises important practical questions. Will the rapid advancement of wearable and accessible technology lead crowds to participate by leveraging their peripheral attention? Will researchers and developers be able to create highly motivating tasks that continue to engage the crowd with minimal time commitment and cognitive load? And finally, will it be possible to develop ubiquitous tasks of worthwhile global value?

While our paper maps out a design space and suggests positive answers to these open questions, the area has some known limitations. By extending to situations beyond the screen, we may run into problems of task seeking during inappropriate contexts. Offering a question to a user on their mobile phone at a bus stop might be appropriate, for example, but doing the same in a car can lead to potentially dangerous circumstances. In other cases, users may not feel comfortable with atypical input methods, like voice control in public places. At the same time, while we have thus far considered cognitive load as constant in an interaction, different situations might make a task easier or more difficult to perform, placing restrictions on the available attention of a user.

In this paper we focus on the design needs of seamless task design, but longevity of engagement is the next step. Moving forward, motivation and incentives can limit a project's success: if low-effort crowdsourcing is meant to be unobtrusive and welcoming, what factors can drive long-term engagement after users grow accustomed to the novelty of the concept?

Apart from these broader limitations of low-effort crowdsourcing, we also encountered limitations in our prototype applications. Our choice-based writing prototype explores the idea of converting a complex task to a low-effort medium; however, what would it take to achieve important or serious work in an engaging, low-effort way? Voice and gesture inputs encourage users to participate with little effort, but deploying them seamlessly and ubiquitously in our daily lives is still a challenge.

In the Knock for Good application, we are limited to simpler reports (e.g., without details about the problem), and there may be inaccuracies in the activity recognition and opportunities for submitting false positives (e.g., a person just happens to knock on their phone). Techniques for quality control in human computation, along with appropriate interaction design (e.g., requiring actions that can be robustly detected but are unlikely to be performed accidentally), need to be incorporated and can help make this approach more robust.

Future work

In future work, we plan to deploy our prototypes in real-world settings and conduct more thorough studies to continue making progress in this area. We aim to incorporate our learnings and address current limitations to make the user experience as seamless as possible. To explore the extent of possible applications, we further plan to run student groups working on low-effort crowdsourcing projects.

References

Friedman, L. 2013. When app makers behave badly. http://www.macworld.com/article/1159659/app_developers_behavior.html.

Gibson, M. 2014. 2013 cell phone statistics. http://www.accuconference.com/blog/Cell-Phone-Statistics.aspx.

Gillie, T., and Broadbent, D. 1989. What makes interruptions disruptive? A study of length, similarity, and complexity. Psychological Research 50(4):243-250.

Gonzalez, M. C.; Hidalgo, C. A.; and Barabasi, A.-L. 2008. Understanding individual human mobility patterns. Nature 453(7196):779-782.

Hacker, S., and Von Ahn, L. 2009. Matchin: Eliciting user preferences with an online game. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 1207-1216. ACM.

Kanjo, E. 2010. NoiseSpy: A real-time mobile phone platform for urban noise monitoring and mapping. Mobile Networks and Applications 15(4):562-574.

Kim, J.; Cheng, J.; and Bernstein, M. S. 2014. Ensemble: Exploring complementary strengths of leaders and crowds in creative collaboration. In CSCW.

Kittenwar. 2014. Kittenwar! May the cutest kitten win! http://www.kittenwar.com.

Lane, N. D.; Eisenman, S. B.; Musolesi, M.; Miluzzo, E.; and Campbell, A. T. 2008. Urban sensing systems: Opportunistic or participatory? In Proceedings of the 9th Workshop on Mobile Computing Systems and Applications, HotMobile '08, 11-16. New York, NY, USA: ACM.

Lintott, C. J.; Schawinski, K.; Slosar, A.; Land, K.; Bamford, S.; Thomas, D.; Raddick, M. J.; Nichol, R. C.; Szalay, A.; Andreescu, D.; et al. 2008. Galaxy Zoo: Morphologies derived from visual inspection of galaxies from the Sloan Digital Sky Survey. Monthly Notices of the Royal Astronomical Society 389(3):1179-1189.

Matthews, S. E. 2013. How Google tracks traffic. http://www.theconnectivist.com/2013/07/how-google-tracks-traffic/.

McFarlane, D. C., and Latorella, K. A. 2002. The scope and importance of human interruption in human-computer interaction design. Human-Computer Interaction 17(1):1-61.

Nielsen. 2014. The US digital consumer report.

Nov, O.; Arazy, O.; and Anderson, D. 2011. Dusting for science: Motivation and participation of digital citizen science volunteers. In Proceedings of the 2011 iConference, 68-74. ACM.

Organisciak, P. 2013. Incidental crowdsourcing: Crowdsourcing in the periphery. In Digital Humanities 2013.

Roose, K. 2013. Did Google just buy a dangerous driving app? http://nymag.com/daily/intelligencer/2013/06/did-google-just-buy-a-dangerous-driving-app.html.

Ross, L., and Nisbett, R. E. 1991. The Person and the Situation: Perspectives of Social Psychology. McGraw-Hill Book Company.

Sears, A.; Lin, M.; Jacko, J.; and Xiao, Y. 2003. When computers fade: Pervasive computing and situationally-induced impairments and disabilities. HCI International 2(3):1298-1302.

Shirky, C. 2010. Cognitive Surplus: How Technology Makes Consumers into Collaborators. Penguin.

Trafton, J.; Altmann, E. M.; Brock, D. P.; and Mintz, F. E. 2003. Preparing to resume an interrupted task: Effects of prospective goal encoding and retrospective rehearsal. International Journal of Human-Computer Studies 58(5):583-603.

Truong, K.; Shihipar, T.; and Wigdor, D. 2014. Slide to X: Unlocking the potential of smartphone unlocking. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 14).

Vaish, R.; Wyngarden, K.; Chen, J.; Cheung, B.; and Bernstein, M. S. 2014. Twitch crowdsourcing: Crowd contributions in short bursts of time. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 14).

Vent, K. 2013. The advantages of passive mobile positioning as a type of community sensing for analyzing space-time behaviour of a citizen. In Proceedings of the 2013 ACM Conference on Pervasive and Ubiquitous Computing Adjunct Publication, 1325-1328. ACM.

von Ahn, L., and Dabbish, L. 2008. General techniques for designing games with a purpose. Communications of the ACM, 58-67.

von Ahn, L.; Maurer, B.; McMillen, C.; Abraham, D.; and Blum, M. 2008. reCAPTCHA: Human-based character recognition via web security measures. Science, 1465-1468.

Warren, C. 2013. Tap your iPhone to unlock your Mac with Knock. http://mashable.com/2013/11/05/knock-unlock-app/.