
Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19)

Sequential and Diverse Recommendation with Long Tail

Yejin Kim1∗, Kwangseob Kim2, Chanyoung Park3 and Hwanjo Yu3
1University of Texas Health Science Center at Houston
2Kakao Corp.
3Pohang University of Science and Technology
[email protected], [email protected], {hwanjoyu, [email protected]

Abstract

Sequential recommendation is a task that learns the temporal dynamics of user behaviour in sequential data and predicts the items that a user would like afterward. However, diversity has rarely been emphasized in the context of sequential recommendation. Sequential and diverse recommendation must learn temporal preference on diverse items as well as on general items. Thus, we propose a sequential and diverse recommendation model that predicts a ranked list containing general items and also diverse items without significantly compromising accuracy. To learn temporal preference on diverse items as well as on general items, we cluster and relocate consumed long-tail items to construct a pseudo ground truth for diverse items and learn the preference on the long tail using a recurrent neural network, which enables us to directly learn a ranking function. Extensive online and offline experiments deployed on a commercial platform demonstrate that our models significantly increase diversity while preserving accuracy compared to the state-of-the-art sequential recommendation model, and consequently our models improve user satisfaction.

Figure 1: Example on movie sequences. After Bob watched Inception, we'd like to recommend both general movies (Dark Knight) and relevant diverse movies such as Following and Quay rather than Red Road and The Pledge.

1 Introduction

Users' feedback (e.g., clicks, views) in e-commerce naturally arrives one by one in a sequential manner. Sequential recommendation is a task that learns the temporal dynamics of user behaviour in the sequential data and predicts the items that a user would like afterward. Given a user's historical data as a sequence, an objective of sequential recommender systems is to recommend the next items that the user would be interested in.

Surprisingly, diversity1 has rarely been emphasized in the context of sequential recommendation, though it has been treated as important in ordinary collaborative filtering-based recommender systems [Vargas and Castells, 2011; Oh et al., 2011; Adomavicius and Kwon, 2012; Adomavicius and Kwon, 2011; Ziegler et al., 2005; Christoffel et al., 2015; Yin et al., 2012; Antikacioglu and Ravi, 2017; Cheng et al., 2017]. To recommend diverse items in the context of sequential recommendation, we must learn temporal preference on diverse items as well as on general items. Here, we define a general item as a popular item that is frequently exposed to customers, and a diverse item as an unpopular, niche item that is rarely exposed to customers. Depending on what a user consumed previously, the preferred next diverse items can change. For example, Alice recently enjoyed popular twisted psychological thrillers (Black Swan → Memento → Inception) (Fig. 1). After watching Memento and Inception, Alice found that the director Christopher Nolan attracted her attention, and thus she watched Nolan's other movie, Dark Knight. She also often watched diverse movies along with the general ones. After watching Inception, Alice watched Following and Quay, which are Nolan's less popular movies. Given another user, Bob, who also watched Black Swan → Memento → Inception, our goal is to recommend Bob a list of next movies that contains not only general ones but also diverse ones. In this case, the recommendation list should include Dark Knight and also some relevant diverse movies that have a similar flavour to Following or Quay (rather than Red Road or The Pledge). To do so, a sequential recommendation model should learn temporal preference on general items and also on diverse items.

∗This work was done during an internship at Kakao Corp.
1There are two types of diversity: aggregate and individual. Aggregate diversity is measured over all recommended items across all users, representing overall product variety and sales concentration [Adomavicius and Kwon, 2012; Adomavicius and Kwon, 2011; Anderson, 2004], whereas individual diversity is measured over the items recommended to each individual user regardless of other users. Here, we focus on aggregate diversity.

Thus, we propose a sequential and diverse recommendation model (S-DIV) that predicts a ranked list containing general items and also diverse items. One challenge here is that the diverse tail items are "cold" (i.e., they have too few occurrences), and consequently they put too much weight on implausible events and hurt the recommendation accuracy. Previous work either removed these tail items during the pre-processing step or incorporated them into a recommendation list: they cluster tail items in a pre-processing phase, treat the clusters as general items, and apply ordinary recommendation models so that the recommendation list contains the tail items as well.

However, these heuristic methods generate recommendation lists using limited candidate items and a single diversity metric, regardless of the correlation with user preference on general items. Moreover, such a re-ranking heuristic always requires extensive tuning and engineering effort. To tackle this problem, diverse recommendation requires a robust end-to-end model that is based on i) a supervised approach with a ground truth of diverse items and ii) a learning-to-rank approach that directly learns a ranking function that imposes high values on diverse items as well as on general items. The first attempt of this kind is a subset retrieval technique [Cheng et al., 2017], which learns a ranking function for selecting a set of diverse items using structural support vector machines. Likewise, our proposed model is also based on a learning-to-rank framework with ground-truth tail items. We tackle this problem by clustering the tail items and mapping them into a content-based vector space.
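As a rough illustration of this clustering step, the sketch below (not the authors' code) groups long-tail items in a content-based vector space with k-means and maps each tail item to a pseudo item id for its cluster; item_features, tail_item_ids, and n_clusters are hypothetical placeholders.

```python
# Minimal sketch (assumed, not the paper's implementation) of clustering
# long-tail items by their content vectors and relabeling them as cluster ids.
import numpy as np
from sklearn.cluster import KMeans

def cluster_tail_items(item_features: np.ndarray, tail_item_ids: list, n_clusters: int = 50) -> dict:
    """Cluster tail items by content vectors; return a tail item -> pseudo item id map."""
    tail_vectors = item_features[tail_item_ids]            # content vectors of tail items only
    km = KMeans(n_clusters=n_clusters, random_state=0).fit(tail_vectors)
    # Offset cluster ids so the pseudo items do not collide with real item ids.
    offset = item_features.shape[0]
    return {item: offset + int(label) for item, label in zip(tail_item_ids, km.labels_)}
```

Here item_features could be, for instance, TF-IDF vectors of item descriptions or tags; the particular content representation is an assumption of this sketch, not something specified above.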
Besides, considering diversity in recommendation is technically challenging because accuracy and diversity are conflicting measures, and thus simply increasing one will end up decreasing the other. Increasing diversity significantly while preserving (or only negligibly losing) state-of-the-art accuracy requires a careful design of the optimization. To address this problem, we set a pseudo ground-truth ranking order as the next general item (for accuracy) followed by relevant unpopular items (for diversity), and then we optimize a ranking function to preserve this pseudo ground-truth ranking order using a recurrent neural network (RNN) and listwise learning-to-rank. Extensive online and offline experiments deployed on a commercial blog platform demonstrate that S-DIV significantly increases diversity while preserving accuracy compared to the state-of-the-art sequential recommendation model, which consequently improves user satisfaction. Our implementation is accessible at https://github.com/yejinjkim/seq-div-rec for reproducibility.
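To make the listwise objective concrete, the following is a minimal sketch assuming PyTorch and a generic ListMLE-style (Plackett-Luce) loss; it illustrates the kind of listwise criterion that rewards preserving a given ranking order, not necessarily the exact objective of S-DIV. The pseudo ground-truth order places the next general item first, followed by the relevant relocated tail clusters.

```python
# Minimal sketch of a ListMLE-style listwise loss (an assumed stand-in for the
# listwise learning-to-rank objective described above, not its exact formulation).
import torch

def listwise_rank_loss(scores: torch.Tensor, target_order: torch.Tensor) -> torch.Tensor:
    """scores: (num_items,) predicted scores over all items and tail clusters.
    target_order: item indices ranked from most to least preferred, i.e. the
    pseudo ground truth [next general item, relevant tail cluster, ...]."""
    s = scores[target_order]
    # Plackett-Luce log-likelihood: the item at each rank competes against all
    # items in the list that have not been ranked yet.
    log_denom = torch.flip(torch.logcumsumexp(torch.flip(s, dims=[0]), dim=0), dims=[0])
    return -(s - log_denom).sum()

# Hypothetical usage (ids are made up):
# target = torch.tensor([dark_knight_id, nolan_tail_cluster_id])
# loss = listwise_rank_loss(model_scores, target)
```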
Our main contributions are:
• To the best of our knowledge, our work is the first to address the recommendation of diverse items in the context of sequential recommendation.
• We substantially increase diversity while preserving the state-of-the-art accuracy.
• Our method is an end-to-end approach that derives a ranking function to directly recommend general items and diverse items at the same time.

Figure 2: Comparing the sequential and diverse recommendation model (S-DIV) with other recommendation models. Left: static models without considering temporal information. Right: sequential models that predict next items. Up: general models without diverse tail items. Down: diverse models that predict general items and tail items.

2 Related work

2.1 Diverse Recommendation

Typical approach for increasing recommendation diversity

2.2 Sequential Recommendation

Recently, RNNs have been recognized as useful due to their ability to model variable-length sequences. The most representative work is the RNN-based recommendation [Hidasi et al., 2015], which predicts the next item based on the sequence of previous items using an RNN. A feature-rich version of this model incorporates content feature vectors as a separate input [Hidasi et al., 2016]. Likewise, we build our sequential model using the RNN framework as in these previous studies. Unlike [Hidasi et al., 2015], we do not limit the historical data to a short sequence without a user ID, but let it be a user's implicit feedback sequence of any length (say, a few days to a few hundred days). Another line of study is next-basket recommendation, which models a customer who purchases a series of baskets of items [Rendle et al., 2010; Yu et al., 2016; Wang et al., 2015]. These studies are similar to ours in that both aim to predict multiple relevant items in a sequential manner. However, next-basket recommendation does not consider the relative order among the multiple relevant items, which our models require in order to preserve accuracy in predicting general items.

3 Sequential and Diverse Recommendation

We propose S-DIV, a sequential and diverse recommendation model that predicts the next general items together with relevant diverse items. To make an end-to-end model without the re-ranking heuristic, we formulate our problem as supervised learning to rank.

3.1 Problem Formulation

In sequential recommendation, there is a mass of users, and each user consumes a series of items.
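To fix ideas about this setting, here is a minimal sketch, under assumptions, of the kind of RNN-based next-item scorer described in Section 2.2: each user's history is a variable-length sequence of item ids, and a GRU produces a score for every candidate item. The class name, layer sizes, and single-layer architecture are hypothetical and simplified, not the authors' exact S-DIV model.

```python
# Minimal sketch of a GRU-based next-item scorer (assumed, simplified architecture).
import torch
import torch.nn as nn

class NextItemGRU(nn.Module):
    def __init__(self, num_items: int, emb_dim: int = 64, hidden_dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(num_items, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_items)        # one score per candidate item

    def forward(self, item_seq: torch.Tensor) -> torch.Tensor:
        # item_seq: (batch, seq_len) item ids consumed so far, in temporal order
        h, _ = self.gru(self.embed(item_seq))
        return self.out(h[:, -1, :])                       # scores for the next step

# Hypothetical usage: scores = NextItemGRU(num_items=10000)(torch.tensor([[3, 17, 42]]))
```

In this sketch, num_items would cover both head items and the cluster pseudo items introduced earlier, so that a single ranking function can score general and diverse candidates together.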