FEXIPRO: Fast and Exact Inner Product Retrieval in Recommender Systems∗
Hui Li†, Tsz Nam Chan‡, Man Lung Yiu‡, Nikos Mamoulis†
†The University of Hong Kong   ‡Hong Kong Polytechnic University
†{hli2, nikos}@cs.hku.hk   ‡{cstnchan, csmlyiu}@comp.polyu.edu.hk

∗Work supported by grant 17205015 from Hong Kong RGC.

ABSTRACT

Recommender systems have many successful applications in e-commerce and social media, including Amazon, Netflix, and Yelp. Matrix Factorization (MF) is one of the most popular recommendation approaches; the original user-product rating matrix $R$ with millions of rows and columns is decomposed into a user matrix $Q$ and an item matrix $P$, such that the product $Q^T P$ approximates $R$. Each column $q$ ($p$) of $Q$ ($P$) holds the latent factors of the corresponding user (item), and $q^T p$ is a prediction of the rating that user $q$ would give to item $p$. Recommender systems based on MF suggest to a user $q$ the items with the top-$k$ scores in $q^T P$. For this problem, we propose a Fast and EXact Inner PROduct retrieval (FEXIPRO) framework, based on sequential scan, which includes three elements. First, FEXIPRO applies an SVD transformation to $P$, after which the first several dimensions capture a large percentage of the inner products. This enables us to prune item vectors by computing only their partial inner products with $q$. Second, we construct an integer approximation of $P$, which can be used to compute fast upper bounds for the inner products and thus prune item vectors. Finally, we apply a lossless transformation to $P$, such that the resulting matrix has only positive values, making the inner products monotonically increasing with dimensionality. Experiments on real data demonstrate that our framework typically outperforms alternative approaches by an order of magnitude.

1. INTRODUCTION

Recommender systems have become de facto tools for suggesting items that are of potential interest to users. Real applications include product recommendation (Amazon), TV recommendation (Netflix), and restaurant recommendation (Yelp). A fundamental task in modern recommender systems is top-$k$ recommendation [14], which suggests to a user the $k$ items with the highest predicted ratings among the items she has not rated yet. Top-$k$ recommendation is typically performed in two phases: learning and retrieval [7] (see Figure 1). Specifically, consider a set of users and a set of items, where each user can rate any item. In the learning phase, the missing ratings of the not-yet-rated items are estimated using methods from machine learning, approximation theory, and various heuristics [2]. Example techniques include collaborative filtering [33], user-item graph models [4], regression-based models [37], and matrix factorization (MF) [26]. In this paper, we focus on MF, due to its prevalence in dealing with large user-item rating matrices.

Let $R$ be an $m \times n$ matrix with the ratings of $m$ users on $n$ items. MF approximately factorizes $R$ and computes the mapping of each user and item to a $d$-dimensional factor vector, where $d \ll \min\{m, n\}$. The output of the learning phase based on MF is a user matrix $Q \in \mathbb{R}^{d \times m}$, where the $i$-th column is the factor vector of the $i$-th user, and an item matrix $P \in \mathbb{R}^{d \times n}$, where the $i$-th column is the factor vector of the $i$-th item. Given a user vector $q$ and an item vector $p$, their inner product $q^T p$ is the predicted rating of the corresponding user for the corresponding item. A higher inner product implies a higher chance that the user is satisfied by the item. The power of MF has been proved in the Netflix Prize [10]. Besides its superior performance, another strength of MF, which makes it widely used, is that additional information beyond the existing ratings can easily be integrated into the model to further increase its accuracy. Such information includes social network data [27], locations of users and items [28], and visual appearance [20].
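To make the setup concrete, the following is a minimal sketch (in Python with NumPy; the toy dimensions and random matrices are illustrative, not from the paper) of how learned factors yield predictions: $Q^T P$ approximates $R$, and $q^T P$ scores all items for a single user.

```python
import numpy as np

# Toy sizes, for illustration only: d latent factors, m users, n items.
d, m, n = 2, 5, 5
rng = np.random.default_rng(0)
Q = rng.normal(size=(d, m))    # user matrix: column i is the factor vector of user i
P = rng.normal(size=(d, n))    # item matrix: column j is the factor vector of item j

R_hat = Q.T @ P                # Q^T P approximates the rating matrix R
q = Q[:, 0]                    # factor vector of user 1
print(q @ P)                   # q^T P: predicted ratings of user 1 for all n items
assert np.allclose(R_hat[0], q @ P)   # the same scores form row 1 of Q^T P
```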
INTRODUCTION the online top-k retrieval cost becomes critical, especially when de- Recommender systems have become de facto tools for suggest- signing a real-world large-scale recommender system that handles ing items that are of potential interest to users. Real applications tens of thousands of queries per second. include product recommendation (Amazon), TV recommendation Most of the previous works focus on the learning phase of rec- (Netflix) and restaurant recommendation (Yelp). A fundamen- ommender systems; there are only a few studies about the retrieval tal task in modern recommender systems is top-k recommenda- phase [7, 36, 35, 32]. LEMP [36, 35] is a sequential scan algorithm tion [14], which suggests to a user the top-k predicted ratings on designed to compute the top-k items for all users in Q. Besides, items that she has not rated yet. Top-k recommendation is typically the algorithm of [32] uses a tree structure to index Q, in order to performed in two phases: learning and retrieval [7] (see Figure 1). speed up retrieval. As pointed out in [7], such batch computational Specifically, consider a set of users and a set of items: each user approaches assume that Q is static; therefore, they may not be suit- can rate any item. In the learning phase, the missing ratings of able for recommender systems (e.g., FindMe [13, 3] and Microsoft the not-yet-rated items are estimated using methods from machine Xbox [7, 31, 24]) where the target user’s vector q is updated online ∗Work supported by grant 17205015 from Hong Kong RGC. by some ad-hoc contextual information (e.g., user behavior) before computing qT P. In this paper, we propose FEXIPRO, a Fast and EXact Inner Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed PROduct retrieval framework, which takes as input a single user for profit or commercial advantage and that copies bear this notice and the full cita- vector q and finds the top-k item vectors p with the largest qT p. tion on the first page. Copyrights for components of this work owned by others than FEXIPRO is orthogonal to how the learning phase (MF) is imple- ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or re- publish, to post on servers or to redistribute to lists, requires prior specific permission mented and does not assume a static Q, therefore, it does not af- and/or a fee. Request permissions from [email protected]. fect recommendation quality and can be applied even for dynami- SIGMOD’17, May 14-19, 2017, Chicago, IL, USA cally adjusted user vectors. Our framework is based on sequential c 2017 ACM. ISBN 978-1-4503-4197-4/17/05. $15.00 scan. 
[Figure 1: Overview of Top-k IP Retrieval in Recommender Systems Based on Matrix Factorization (MF). The figure shows a toy rating matrix R factorized into Q^T and P in the learning phase, and, in the retrieval phase, the online adjustment of user vector q(1) and the materialized top-2 list for user 1.]

Table 1: Notation

  $q, p, Q, P$                    original vectors and matrices
  $\bar{q}, \bar{p}$              vectors after applying the SVD transformation (Section 3)
  $\hat{q}, \hat{p}$              scaled vectors for the integer upper bound (Section 4)
  $\hat{\hat{q}}, \hat{\hat{p}}$  vectors after applying reduction (Section 5)
  $p^{(i)}$                       the $i$-th vector in $P$
  $p_s$                           the $s$-th scalar of vector $p = (p_1, p_2, \ldots, p_d) \in \mathbb{R}^d$
  $\|p\|$                         the length (magnitude/norm) of $p$: $\sqrt{\sum_{s=1}^{d} p_s^2}$
  $m, n$                          number of users/items
  $d$                             number of dimensions
  $t$                             threshold: the $k$-th largest inner product found so far
  $w$                             checking dimension in incremental pruning
  $\rho$                          parameter used for selecting $w$ (Section 3)
  $e$                             scaling parameter for the integer upper bound (Section 4)

Besides applying two popular heuristics (early termination using the Cauchy-Schwarz inequality and incremental pruning) [29, 36, 35], FEXIPRO uses three techniques to accelerate search:

SVD Transformation. We apply a lossless SVD transformation to $P$ in a preprocessing stage, which results in a new matrix $\bar{P}$. Given a query vector $q$, we convert $q$ to a $\bar{q}$ such that $q^T P = \bar{q}^T \bar{P}$. The transformation introduces skew to the scalars of $\bar{q}$: with high probability, the absolute value at each dimension is larger than that at the next one. For a given $\bar{q}^T \bar{p}$, the skew yields a large percentage of the inner product after processing only a few dimensions. This leads to a tight upper bound that can facilitate pruning $\bar{p}$ without having to compute the entire product.

Integer Approximation. Arithmetic operations on integers are much faster than those on floating-point numbers. FEXIPRO is the first framework to exploit this opportunity. We generate an approximation of $P$ that keeps the integral parts of the original values after scaling them up (in order to minimize accuracy loss). The fast-to-compute inner product between the integer approximations of $q$ and $p$ can be converted into a tight upper bound of $q^T p$, which can be used to prune $p$ without computing the exact product.
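The following sketch (Python/NumPy) illustrates both ideas under stated assumptions: the SVD-based transform here takes $\bar{P} = \Sigma V^T$ and $\bar{q} = U^T q$, which preserves inner products exactly and concentrates mass in the leading dimensions; the Cauchy-Schwarz bound after $w$ dimensions is the incremental-pruning heuristic mentioned above; and the integer bound shown is one simple way to turn rounded, scaled vectors into a valid upper bound, so the paper's exact scaling scheme (parameter $e$) may differ.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 16, 1000
P = rng.normal(size=(d, n))                 # item matrix
q = rng.normal(size=d)                      # query (user) vector

# SVD transformation: with P = U @ diag(sigma) @ Vt, set P_bar = diag(sigma) @ Vt
# and q_bar = U^T q, so that q^T P == q_bar^T P_bar (lossless).
U, sigma, Vt = np.linalg.svd(P, full_matrices=False)
P_bar = sigma[:, None] * Vt                 # row s has norm sigma_s (non-increasing)
q_bar = U.T @ q
assert np.allclose(q @ P, q_bar @ P_bar)

# Incremental pruning: after w dimensions, Cauchy-Schwarz bounds the remainder.
w = 4
p_bar = P_bar[:, 0]                         # one candidate item vector
partial = q_bar[:w] @ p_bar[:w]
ub = partial + np.linalg.norm(q_bar[w:]) * np.linalg.norm(p_bar[w:])
assert ub >= q_bar @ p_bar                  # prune the item if ub < threshold t

# Integer upper bound: round scaled copies to integers; each rounding error is
# at most 0.5, so expanding (q_int + dq) . (p_int + dp) yields a valid bound.
c = 127.0 / max(np.abs(q_bar).max(), np.abs(P_bar).max())   # illustrative scale
q_int, p_int = np.rint(c * q_bar), np.rint(c * p_bar)       # fast integer parts
int_ub = (q_int @ p_int
          + 0.5 * np.abs(p_int).sum()
          + 0.5 * np.abs(q_int).sum()
          + 0.25 * d) / (c * c)
assert int_ub >= q_bar @ p_bar
```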