
Is Simple Better? Revisiting Non-linear Matrix Factorization for Learning Incomplete Ratings

Vaibhav Krishna, Tian Guo, Nino Antulov-Fantulin
ETH Zurich, Zurich, Switzerland
[email protected] | [email protected] | [email protected]

Abstract—Matrix factorization techniques have been widely used as a method for collaborative filtering in recommender systems. In recent times, different variants of deep learning algorithms have been explored in this setting to improve the task of making personalized recommendations with user-item interaction data. The idea that the mapping between the latent user or item factors and the original features is highly non-linear suggests that classical matrix factorization techniques are no longer sufficient. In this paper, we propose a multilayer non-linear semi-non-negative matrix factorization method, with the motivation that user-item interactions can be modeled more accurately using a linear combination of non-linear item features. Firstly, we learn latent factors for representations of users and items with the designed multilayer non-linear Semi-NMF approach using explicit ratings. Secondly, the resulting architecture is compared with deep-learning algorithms like the Restricted Boltzmann Machine and state-of-the-art deep matrix factorization techniques. Using both a supervised rate-prediction task and unsupervised clustering in latent item space, we demonstrate that our proposed approach achieves better generalization ability in prediction, as well as representation ability comparable to deep matrix factorization in the clustering task.

Index Terms—Matrix Factorization, Collaborative Filtering, Recommender Systems

I. INTRODUCTION

Nowadays, recommender systems [2], [15], [18], [19] are so ubiquitous in information systems that their absence draws attention and not vice versa. Making personalized predictions for specific users, based on some functional dependency on the past interactions of all users and items, is known as the collaborative filtering [23], [24] approach. Within this approach, different matrix factorization (MF) techniques have proven to be quite accurate and scalable for many recommender scenarios [9], [10], [12]. Essentially, they map both users and items to a joint latent factor space of lower dimension and model the interaction as their inner product in the latent space.

Recently, various deep learning models [1] such as Restricted Boltzmann Machines [14], stacked auto-encoders [11], [28], deep neural networks [6] or deep matrix factorizations [29] were introduced to collaborative filtering. Two aspects of collaborative filtering were upgraded: (i) linear latent representations of users and items were replaced by deep representations, and (ii) the inner product was replaced by a non-linear function represented with deep neural networks. This has motivated us to study which of these two aspects, (i) non-linear latent representations or (ii) non-linear interaction functions, dominates the ability to learn incomplete ratings.

In this paper, we focus on matrix factorization recommender models. Note that the representations of classical matrix factorizations such as non-negative matrix factorization (NMF) or singular value decomposition (SVD) are essentially the same as the ones learned by a basic linear auto-encoder [1].

We concentrate on the collaborative filtering task of learning explicit ratings; for recent advances in using deep models with auxiliary information and implicit feedback, see [8]. The main contributions of the paper are the following: (i) We propose a simple model of compositions of non-linear matrix factors for learning incomplete explicit ratings. (ii) We evaluate our approach against a variety of baselines including both linear and non-linear methods. (iii) In the supervised rate-prediction task, our simple linear combination of non-linear representations has lower prediction errors (RMSE) on hold-out datasets, thereby leading to better generalization ability. (iv) In the unsupervised clustering task performed on the obtained representations, we demonstrate that our approach attains representation ability comparable to complex deep matrix factorization, by presenting a comparable clustering performance metric (within-cluster sum of squares).

II. PRELIMINARIES

In this section, we formulate the problem setting. The user-item rating matrix is denoted by $R \in \mathbb{R}^{n \times m}$. The original representation of user $i$ is the $i$-th row of matrix $R$, i.e., $u_i \in \mathbb{R}^m$, while the original representation of item $j$ is the $j$-th column of matrix $R$, i.e., $v_j \in \mathbb{R}^n$. Let $P \in \mathbb{R}^{n \times k}$ denote the latent feature matrix of users, where $p_i$ denotes the latent feature vector of user $i$, i.e., the $i$-th row of matrix $P$. Similarly, let $Q \in \mathbb{R}^{k \times m}$ denote the latent feature matrix of items, where $q_j$ denotes the latent feature vector of item $j$, i.e., the $j$-th column of $Q$.

The collaborative filtering task is to estimate the unobserved rating for user $i$ and item $j$ as

$$\hat{r}_{i,j} = f(p_i, q_j \mid \Theta), \qquad (1)$$

where $\Theta$ denotes the model parameters of the interaction function $f$.

The latent features $p_i$, $q_j$ and parameters $\Theta$ are found by minimizing the objective function

$$\mathcal{L} = \sum_{i,j \in \kappa} L(r_{ij}, \hat{r}_{ij}) + \lambda \, \Omega(\Theta), \qquad (2)$$

where $L(\cdot)$ is a point-wise loss function (for pair-wise learning see [3]), $\Omega(\cdot)$ is a regularizer ($L_2$ or $L_1$ norm [13]) and $\kappa$ denotes the set of training instances over which learning is done (see [22] about the missing data assumptions).

The regularized matrix factorization [9], [12] approach models the interaction function $f$ as the linear combination $f(p_i, q_j \mid \Theta) = \sum_k p_i(k) q_j(k)$, which represents the inner product in the latent space. The latent factors $p_i$, $q_j$ are linear representations, due to the absence of non-linearity in the linear algebra transformation $R \approx PQ$. Here, usually the $L_2$ function is used for both the loss and the regularization, while the set of observed ratings is used as the training set $\kappa$.
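As a minimal illustration of eqs. (1) and (2) for this regularized MF baseline (our own sketch, not code from the paper; the helper names `mf_predict` and `mf_objective` and the triple-list encoding of $\kappa$ are ours), the prediction is the inner product and the objective is the squared loss over observed ratings plus an $L_2$ penalty:

```python
import numpy as np

def mf_predict(P, Q, i, j):
    """Eq. (1) for regularized MF: the inner product of p_i and q_j."""
    return P[i] @ Q[:, j]

def mf_objective(kappa, P, Q, lam):
    """Eq. (2) with a squared point-wise loss and L2 regularization.

    kappa: the observed training ratings, given as (i, j, r_ij) triples.
    """
    loss = sum((r - mf_predict(P, Q, i, j)) ** 2 for i, j, r in kappa)
    reg = lam * (np.sum(P ** 2) + np.sum(Q ** 2))
    return loss + reg
```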
Neural collaborative filtering [6] models the interaction function $f$ as a deep neural network $f(p_i, q_j \mid \Theta) = \phi_{\mathrm{out}}(\phi_x(\ldots(\phi_1(p_i, q_j))\ldots))$, where $\phi_{\mathrm{out}}$ and $\phi_x$ are the mappings of the output layer and the $x$-th layer, each of the form $\phi(z) = \sigma(Wz + b)$. Here $\sigma(\cdot)$ represents a non-linear function such as the rectified linear unit, the hyperbolic tangent or others. The model parameters $\Theta$ and the latent factors $p_i$, $q_j$ are learned jointly, where the latent factors are non-linear representations of the user and item vectors $u_i$, $v_j$ from the rating matrix $R$.

Deep matrix factorizations [29] model the interaction function $f$ as an inner product $f(p_i, q_j \mid \Theta) = \sum_k p_i(k) q_j(k)$. However, both user and item features are deep representations of the form $p_i = \phi_{\mathrm{out}}(\phi_x(\ldots(\phi_1(u_i))\ldots))$ and $q_j = \phi_{\mathrm{out}}(\phi_x(\ldots(\phi_1(v_j))\ldots))$.

III. PROPOSED MODEL

In order to learn incomplete ratings, we formulate the following multilayer semi-NMF model (NSNMF): $R \approx B + P_1 Q_1^{+}$, where $R \in \mathbb{R}^{n \times m}$ is the rating matrix, $B \in \mathbb{R}^{n \times m}$ is the bias matrix, $P_1 \in \mathbb{R}^{n \times k}$ is the latent user preferences matrix and $Q_1^{+} \in \mathbb{R}^{k \times m}$ is the matrix of non-linear item latent features. To model the non-linear representation of the items, we use the model $Q_1^{+} \approx g(S_2 Q_2^{+})$, which finally leads to

$$R \approx B + P_1 \, g(S_2 Q_2^{+}), \qquad (3)$$

where $Q_2^{+} \in \mathbb{R}_{+}^{l \times m}$ is the hidden latent representation of the items, $S_2$ is the weighting matrix between latent representations on different levels and $g(\cdot)$ is an element-wise non-linear function, used to better approximate the non-linear manifolds of the latent features. The latent representation $q_j$ is the $j$-th column of $Q_1 \approx g(S_2 Q_2^{+})$. This model falls into the category of semi-NMF factorization models [4]. Semi-NMF is a variant of NMF which imposes non-negativity constraints only on the latent factors of the second layer $Q_2^{+}$. This allows both positive and negative offsets from the bias term $B$. Usually, in clustering, $P$ represents cluster centroids and $Q$ represents a soft membership indicator [4]. But from the recommender perspective, matrix $P$ may be interpreted as the linear regression coefficients, and the non-negativity constraints imposed on the latent item attributes $Q$ allow part-based representations [20], [21].

Note that much of the observed variation in rating values is due to effects associated with either users or items, known as biases or intercepts, independent of any interactions. Thus, instead of explaining the full rating value by an interaction of the form $p_i^{T} q_j$, the system can try to identify the portion of these values that individual user or item biases cannot explain. The bias $b_{ui}$ involved in a rating $r_{ui}$ can be computed as $b_{ui} = \mu + b_u + b_i$, where $\mu$ is the overall average rating, $b_u$ is the observed deviation of user $u$ and $b_i$ is the observed deviation of item $i$.

Note that, in the general case, we are able to compose more non-linearities with the relation $Q_j^{+} \approx g(S_{j+1} Q_{j+1}^{+})$, which generates the model $R \approx B + P_1 \, g(S_2 \, g(S_3 \, g(\ldots g(S_f Q_f^{+}))))$. However, more complex item latent representations with $f \geq 3$ were shown not to be useful; see the experiments section for more details.
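To make eq. (3) concrete, here is a minimal sketch of the two-layer forward pass for a single (user, item) pair. It is our own illustration, not the authors' code: the function name `nsnmf_predict` is ours, and the sigmoid is only an assumed choice for the unspecified element-wise non-linearity $g(\cdot)$.

```python
import numpy as np

def g(z):
    # Element-wise non-linearity; a sigmoid is assumed here for illustration.
    return 1.0 / (1.0 + np.exp(-z))

def nsnmf_predict(B, P1, S2, Q2, i, j):
    """Predicted rating under the two-layer NSNMF model of eq. (3),
    R ≈ B + P1 g(S2 Q2+), evaluated for user i and item j.
    Q2 holds the non-negative second-layer item factors (semi-NMF);
    B, P1 and S2 are unconstrained.
    """
    q1_j = g(S2 @ Q2[:, j])        # non-linear latent features of item j
    return B[i, j] + P1[i] @ q1_j  # bias plus linear combination
```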
A. Learning Model Parameters

For a two-layer item feature structure, the model parameters are updated through an element-wise gradient descent approach, minimizing eq. (2) with the squared loss function $L(r_{ij}, \hat{r}_{ij}) = (r_{ui} - \hat{r}_{ui})^2$ and the $L_2$ regularization $\Omega(\Theta) = b_u^2 + b_i^2 + \|p_u\|^2 + \|q_i\|^2 + \|S\|^2$.

The model parameters $\Theta$ are randomly initialized uniformly in the range $[0, 1]$, and we perform iterative updates for each observed rating $r_{ui}$ as follows:

$$b_u \leftarrow b_u + \eta \, (e_{ui} - \lambda b_u), \qquad (4)$$

$$b_i \leftarrow b_i + \eta \, (e_{ui} - \lambda b_i), \qquad (5)$$

$$p_{uk} \leftarrow p_{uk} + \eta \left( e_{ui} \cdot g\!\left( \sum_h s_{kh} q_{hi} \right) - \lambda p_{uk} \right), \qquad (6)$$

$$s_{kl} \leftarrow s_{kl} + \eta \left( e_{ui} \cdot p_{uk} \cdot g'\!\left( \sum_h s_{kh} q_{hi} \right) q_{li} - \lambda s_{kl} \right), \qquad (7)$$

where $e_{ui} = r_{ui} - \hat{r}_{ui}$ denotes the prediction error and $\eta$ the learning rate.
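Putting the updates together, the following sketch runs one epoch of the element-wise stochastic gradient descent of eqs. (4)-(7). It is our reconstruction under stated assumptions, not the authors' code: $g$ is again assumed to be a sigmoid (so $g'(z) = g(z)(1 - g(z))$), the prediction is assumed to decompose as $\hat{r}_{ui} = \mu + b_u + b_i + p_u^{T} q_{1,i}$ following the bias discussion above, and the update for the second-layer factors $Q_2^{+}$ is omitted because the excerpt ends at eq. (7).

```python
import numpy as np

def nsnmf_sgd_epoch(ratings, mu, b_user, b_item, P1, S2, Q2, eta, lam):
    """One epoch of the element-wise updates in eqs. (4)-(7).

    ratings: iterable of observed (u, i, r_ui) triples, i.e. kappa.
    b_user, b_item: bias vectors; P1, S2, Q2: factor matrices of eq. (3).
    """
    for u, i, r in ratings:
        z = S2 @ Q2[:, i]                # pre-activations, shape (k,)
        q1 = 1.0 / (1.0 + np.exp(-z))    # g(S2 Q2+) for item i
        r_hat = mu + b_user[u] + b_item[i] + P1[u] @ q1
        e = r - r_hat                    # prediction error e_ui

        # Gradients computed from the old parameter values
        # (eqs. 6-7, vectorized over the indices k and l).
        grad_p = e * q1 - lam * P1[u]
        grad_S = e * np.outer(P1[u] * q1 * (1.0 - q1), Q2[:, i]) - lam * S2

        b_user[u] += eta * (e - lam * b_user[u])   # eq. (4)
        b_item[i] += eta * (e - lam * b_item[i])   # eq. (5)
        P1[u] += eta * grad_p                      # eq. (6)
        S2 += eta * grad_S                         # eq. (7)
```

With, e.g., $P_1$ of shape $(n, k)$, $S_2$ of shape $(k, l)$ and $Q_2$ of shape $(l, m)$, repeated calls over shuffled ratings realize the iterative scheme; the omitted $Q_2^{+}$ update would additionally need to preserve non-negativity.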