CrowdLens: Experimenting with Crowd-Powered Recommendation and Explanation

Proceedings of the Tenth International AAAI Conference on Web and Social Media (ICWSM 2016)

Shuo Chang, F. Maxwell Harper, Lingfei He and Loren G. Terveen
GroupLens Research, University of Minnesota
{schang, harper, lingfei, terveen}@cs.umn.edu

Copyright © 2016, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

Recommender systems face several challenges, e.g., recommending novel and diverse items and generating helpful explanations. Where algorithms struggle, people may excel. We therefore designed CrowdLens to explore different workflows for incorporating people into the recommendation process. We did an online experiment, finding that: compared to a state-of-the-art algorithm, crowdsourcing workflows produced more diverse and novel recommendations favored by human judges; some crowdworkers produced high-quality explanations for their recommendations, and we created an accurate model for identifying high-quality explanations; volunteers from an online community generally performed better than paid crowdworkers, but appropriate algorithmic support erased this gap. We conclude by reflecting on lessons of our work for those considering a crowdsourcing approach and by identifying several fundamental issues for future work.

Introduction

Recommendation algorithms are widely used in online systems to customize the delivery of information to match user preferences. For example, Facebook filters its news feed, the Google Play Store suggests games and apps to try, and YouTube suggests videos to watch.

Despite their popularity, algorithms face challenges. First, while algorithms excel at predicting whether a user will like a single item, for sets of recommendations to please users, they must balance factors like the diversity, popularity, and recency of the items in the set (McNee, Riedl, and Konstan 2006). Achieving this balance is an open problem. Second, users like to know why items are being recommended. Systems do commonly provide explanations (Herlocker, Konstan, and Riedl 2000), but they are template-based and simplistic.

People may not be as good as algorithms at predicting how much someone will like an item (Krishnan et al. 2008). But "accuracy is not enough" (McNee, Riedl, and Konstan 2006). We conjecture that people's creativity, ability to consider and balance multiple criteria, and ability to explain recommendations make them well suited to take on a role in the recommendation process. Indeed, many sites rely on users to produce recommendations and explain the reasons for their recommendations. For example, user-curated book lists in Goodreads (www.goodreads.com) are a popular way for users to find new books to read.

However, sites – especially new or small ones – may not always be able to rely on users to produce recommendations. In such cases, paid crowdsourcing is an alternative, though paid crowd workers may not have domain expertise and knowledge of users' preferences.

All things considered, while incorporating crowdsourcing into the recommendation process is promising, how best to do so is an open challenge. This work takes on that challenge by exploring three research questions:

RQ1 - Organizing crowd recommendation: What roles should people play in the recommendation process? How can they complement recommender algorithms? Prior work has shown that crowdsourcing benefits from an iterative workflow (e.g., Little et al. 2009; Bernstein et al. 2010; Kittur et al. 2011). In our context, we might iteratively generate recommendations by having one group generate examples and another group synthesize recommendations, incorporating those examples along with their own fresh ideas. Alternatively, we might substitute a recommendation algorithm as the source of the examples. We are interested in the effectiveness of these designs compared with an algorithmic baseline. We speculate that this design decision will have a measurable impact on the quality, diversity, popularity, and recency of the resulting recommendations.
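To make the design space in RQ1 concrete, the sketch below assumes a two-stage pipeline in which an example source (either a first crowd group or a recommender algorithm) seeds a synthesis stage whose workers mix the examples they endorse with fresh picks of their own. The names (`crowd_pipeline`, `example_source`, `synthesizers`) are hypothetical, and the vote-counting aggregation is only one plausible way to merge workers' lists; this is not the CrowdLens implementation.

```python
from typing import Callable, Dict, List

# Hypothetical two-stage crowd recommendation pipeline (illustrative only).
# Stage 1: an example source proposes example movies.
# Stage 2: each synthesizer (a worker in the second group) sees the examples
# and returns their own ranked list, mixing endorsed examples with new ideas.

def crowd_pipeline(
    example_source: Callable[[], List[str]],
    synthesizers: List[Callable[[List[str]], List[str]]],
    k: int = 5,
) -> List[str]:
    examples = example_source()
    candidate_lists = [synthesize(examples) for synthesize in synthesizers]

    # Aggregate the workers' lists by simple vote counting.
    votes: Dict[str, int] = {}
    for ranked in candidate_lists:
        for movie in ranked:
            votes[movie] = votes.get(movie, 0) + 1
    return sorted(votes, key=lambda movie: -votes[movie])[:k]

# Swapping `example_source` between a crowd stage and an algorithmic
# recommender reproduces the two example-seeded conditions contrasted in RQ1.
```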
RQ2 - Explanations: Can crowds produce useful explanations for recommendations? What makes a good explanation? Algorithms can only generate template-based explanations, such as "we recommend X because it is similar to Y". People, on the other hand, have the potential to explain recommendations in varied and creative ways. We will explore the feasibility of crowdsourcing explanations, identify characteristics of high-quality explanations, and develop mechanisms to identify and select the best explanations to present to users.

RQ3 - Volunteers vs. crowdworkers: How do recommendations and explanations produced by volunteers compare to those produced by paid crowdworkers? Volunteers from a site have more domain knowledge and commitment to the site, while crowdworkers (e.g., from Amazon Mechanical Turk) are quicker to recruit. We thus will compare result quality and timeliness for the two groups.

To address these questions, we built CrowdLens, a crowdsourcing framework and system to produce movie recommendations and explanations. We implemented CrowdLens on top of MovieLens, a movie recommendation web site with thousands of active monthly users.

Contributions. This paper offers several research contributions to a novel problem: how to incorporate crowd work into the recommendation process. We developed an iterative crowd workflow framework. We experimentally evaluated several different workflows, comparing them to a state-of-the-art recommendation algorithm; crowd workflows produced recommendations of comparable quality that were more diverse and less common. Crowdworkers were capable of producing good explanations; we developed a model that identified features associated with explanation quality and that can predict good explanations with high accuracy. Volunteers generally performed better than paid crowdworkers, but providing paid crowdworkers good examples reduced this gap.
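The contributions above mention a model that predicts explanation quality from features of the text. The sketch below shows the general shape such a model could take, assuming a few hand-crafted surface features and a logistic regression classifier; the features and labels here are hypothetical stand-ins, not the ones reported in the paper.

```python
from typing import List

import numpy as np
from sklearn.linear_model import LogisticRegression

def featurize(explanation: str) -> List[float]:
    """Hypothetical surface features of a crowd-written explanation."""
    words = explanation.split()
    n = max(len(words), 1)
    return [
        float(len(words)),                               # length in words
        sum(w[:1].isupper() for w in words) / n,         # capitalized-word ratio
        float("similar" in explanation.lower()),         # appeals to similarity
        float(any(ch.isdigit() for ch in explanation)),  # mentions a year/number
    ]

def train_quality_model(texts: List[str], labels: List[int]) -> LogisticRegression:
    """Fit a classifier on explanations labeled 1 (high quality) or 0 (not)."""
    X = np.array([featurize(t) for t in texts])
    model = LogisticRegression(max_iter=1000)
    model.fit(X, np.array(labels))
    return model
```

A model of this kind could then score candidate explanations and surface only the highest-scoring ones to end users.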
Related Work

Foundational crowdsourcing work showed that tasks could be decomposed into independent microtasks and that people's outputs on microtasks could be aggregated to produce high-quality results. After the original explorations, researchers began to study the use of crowdsourcing for more intellectually complex and creative tasks. Bernstein et al. (Bernstein et al. 2010) created Soylent, a Microsoft Word plugin that uses the crowd to help edit documents. Lasecki et al. (Lasecki et al. 2013) used crowdworkers to collaboratively act as conversational assistants. Nebeling et al. (Nebeling, Speicher, and Norrie 2013) proposed CrowdAdapt, a system that allowed the crowd to design and evaluate adaptive website interfaces. When applying crowdsourcing to these more complicated tasks, various issues arose.

Organizing the workflow. More complicated tasks tend to require more complex workflows, i.e., ways to organize crowd effort to ensure efficiency and good results. Early examples included: Little et al. (Little et al. 2009) proposed an iterative crowdsourcing process with [...] (..., Huang, and Bailey 2014; Xu et al. 2015) compared structured feedback from Turkers on graphic designs with free-form feedback from design experts. They found that Turkers could reach consensus with experts on design guidelines. On the other hand, when Retelny et al. (Retelny et al. 2014) explored complex and interdependent applications in engineering, they used a "flash team" of paid experts. Using a system called Foundry to coordinate, they reduced the time required for filming animation by half compared to using traditional self-managed teams. Mindful of these results, we compared the performance of paid crowdworkers to "experts" (members of the MovieLens film recommendation site).

Crowdsourcing for personalization. Personalization – customizing the presentation of information to meet an individual's preferences – requires domain knowledge as well as knowledge of individual preferences. This complex task has received relatively little attention from a crowdsourcing perspective. We know of two efforts that focused on a related but distinct task: predicting user ratings of items. Krishnan et al. (Krishnan et al. 2008) compared human predictions to those of a collaborative filtering recommender algorithm. They found that most humans are worse than the algorithm while a few were more accurate, and that people relied more on item content and demographic information to make predictions. Organisciak et al. (Organisciak et al. 2014) proposed a crowdsourcing system to predict ratings on items for requesters. They compared a collaborative filtering approach (predicting based on ratings from crowdworkers who share similar preferences) with a crowd prediction approach (crowdworkers predicting ratings based on the requester's past ratings). They found that the former scales up to many workers and generates a reusable dataset, while the latter works in areas where the tastes of requesters can be easily communicated with fewer workers.

Generating sets of recommendations is more difficult than predicting ratings: as we noted, good recommendation sets balance factors such as diversity, popularity, familiarity, and recency. We know of two efforts that used crowdsourcing to generate recommendations: Felfernig et al. deployed a crowd-based recommender system prototype (Felfernig et al. 2014), finding evidence that users were satisfied with the interface. The StitchFix commercial website combines al- [...]
