
Training Deep AutoEncoders for Collaborative Filtering

Oleksii Kuchaiev, NVIDIA, Santa Clara, California, [email protected]
Boris Ginsburg, NVIDIA, Santa Clara, California, [email protected]

ABSTRACT
This paper proposes a model for the rating prediction task in recommender systems which significantly outperforms previous state-of-the-art models on a time-split Netflix data set. Our model is based on a deep autoencoder with 6 layers and is trained end-to-end without any layer-wise pre-training. We empirically demonstrate that: a) deep autoencoder models generalize much better than shallow ones, b) non-linear activation functions with negative parts are crucial for training deep models, and c) heavy use of regularization techniques such as dropout is necessary to prevent overfitting. We also propose a new training algorithm based on iterative output re-feeding to overcome the natural sparseness of collaborative filtering. The new algorithm significantly speeds up training and improves model performance. Our code is available at https://github.com/NVIDIA/DeepRecommender.

1 INTRODUCTION
Sites like Amazon, Netflix and Spotify use recommender systems to suggest items to users. Recommender systems can be divided into two categories: context-based and personalized recommendations. Context-based recommendations take into account contextual factors such as location, date and time [1]. Personalized recommendations typically suggest items to users using the collaborative filtering (CF) approach. In this approach the user's interests are predicted based on the analysis of the tastes and preferences of other users in the system and by implicitly inferring "similarity" between them. The underlying assumption is that two people who have similar tastes have a higher likelihood of having the same opinion on an item than two randomly chosen people.

In designing recommender systems, the goal is to improve the accuracy of predictions. The Netflix Prize contest provides the most famous example of this problem [2]: Netflix held the Netflix Prize to substantially improve the accuracy of its algorithm for predicting user ratings for films.

This is a classic CF problem: infer the missing entries in an m x n matrix R, whose (i, j) entry describes the rating given by the i-th user to the j-th item. Performance is then measured using Root Mean Squared Error (RMSE).

1.1 Related work
Deep learning [13] has led to breakthroughs in image recognition, natural language understanding, and reinforcement learning. Naturally, these successes fuel an interest in using deep learning for recommender systems. First attempts at using deep learning for recommender systems involved restricted Boltzmann machines (RBM) [16]. Several recent approaches use autoencoders [17, 18], feed-forward neural networks [5] and recurrent recommender networks [19]. Many popular matrix factorization techniques can be thought of as a form of dimensionality reduction. It is, therefore, natural to adapt deep autoencoders for this task as well. I-AutoRec (item-based autoencoder) and U-AutoRec (user-based autoencoder) are the first successful attempts to do so [17].

There are many non-deep-learning approaches to collaborative filtering (CF) [3, 15]. Matrix factorization techniques, such as alternating least squares (ALS) [8, 12] and probabilistic matrix factorization [14], are particularly popular. The most robust systems may incorporate several ideas together, such as the winning solution to the Netflix Prize competition [10].

Note that the Netflix Prize data also includes a temporal signal - the time when each rating was made. Thus, several classic CF approaches have been extended to incorporate temporal information, such as TimeSVD++ [11], as well as more recent RNN-based techniques such as recurrent recommender networks [19].

2 MODEL
Our model is inspired by the U-AutoRec approach, with several important distinctions. We train much deeper models. To enable this without any pre-training, we: a) use "scaled exponential linear units" (SELUs) [9], b) use high dropout rates, and c) use iterative output re-feeding during training.

An autoencoder is a network which implements two transformations - an encoder encode(x): R^n -> R^d and a decoder decode(z): R^d -> R^n. The "goal" of the autoencoder is to obtain a d-dimensional representation of the data such that an error measure between x and f(x) = decode(encode(x)) is minimized [7]. Figure 1 depicts a typical 4-layer autoencoder network. If noise is added to the data during the encoding step, the autoencoder is called de-noising. An autoencoder is an excellent tool for dimensionality reduction and can be thought of as a strict generalization of principal component analysis (PCA) [6]. An autoencoder without non-linear activations and with only a "code" layer should be able to learn the PCA transformation in the encoder if trained to optimize the mean squared error (MSE) loss.

Figure 1: AutoEncoder consists of two neural networks, encoder and decoder, fused together on the "representation" layer z. The encoder has 2 layers e1 and e2 and the decoder has 2 layers d1 and d2. Dropout may be applied to the coding layer z.

In our model, both the encoder and decoder parts of the autoencoder consist of feed-forward neural networks with classical fully connected layers computing l = f(W * x + b), where f is some non-linear activation function. If the range of the activation function is smaller than that of the data, the last layer of the decoder should be kept linear. We found it very important for the activation function f in the hidden layers to contain a non-zero negative part, and we use SELU units in most of our experiments (see Section 3.2 for details).

If the decoder mirrors the encoder architecture (as it does in our model), then one can constrain the decoder's weights W_d^l to be equal to the transposed encoder weights W_e^l from the corresponding layer l. Such an autoencoder is called constrained or tied and has almost two times fewer free parameters than an unconstrained one.

Forward pass and inference. During the forward pass (and inference) the model takes a user represented by his vector of ratings from the training set, x in R^n, where n is the number of items. Note that x is very sparse, while the output of the decoder, f(x) in R^n, is dense and contains rating predictions for all items in the corpus.
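To make the architecture concrete, the listing below is a minimal sketch of such a deep autoencoder, written in PyTorch; the framework is an assumption (the paper only links its code repository), and the layer widths, code size and dropout rate are illustrative placeholders rather than the paper's settings.

    import torch
    import torch.nn as nn

    class DeepAutoEncoder(nn.Module):
        """Mirrored encoder/decoder MLPs with SELU activations and dropout on the
        coding layer z. Sizes below are illustrative, not the paper's settings."""
        def __init__(self, n_items: int, hidden: int = 512, code: int = 512, dropout: float = 0.8):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(n_items, hidden), nn.SELU(),
                nn.Linear(hidden, hidden), nn.SELU(),
                nn.Linear(hidden, code), nn.SELU(),
            )
            self.drop = nn.Dropout(p=dropout)      # high dropout rate on the code layer
            self.decoder = nn.Sequential(
                nn.Linear(code, hidden), nn.SELU(),
                nn.Linear(hidden, hidden), nn.SELU(),
                nn.Linear(hidden, n_items),        # last decoder layer kept linear
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            z = self.drop(self.encoder(x))         # sparse rating vector -> dense code z
            return self.decoder(z)                 # dense predictions for all items

A constrained (tied) variant as described above would reuse the transposed encoder weight matrices in the decoder instead of keeping separate decoder parameters.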
2.1 Loss function
Since it does not make sense to predict zeros in the user's representation vector x, we follow the approach from [17] and optimize the Masked Mean Squared Error loss:

    MMSE = \frac{\sum_{i=0}^{n} m_i \, (r_i - y_i)^2}{\sum_{i=0}^{n} m_i}    (1)

where r_i is the actual rating, y_i is the reconstructed (predicted) rating, and m_i is a mask such that m_i = 1 if r_i != 0 and m_i = 0 otherwise. Note that there is a straightforward relation between the RMSE score and the MMSE score: RMSE = \sqrt{MMSE}.

2.2 Dense re-feeding
During training and inference, an input x in R^n is very sparse because no user can realistically rate more than a tiny fraction of all items. On the other hand, the autoencoder's output f(x) is dense. Let us consider an idealized scenario with a perfect f. Then f(x)_i = x_i for all i : x_i != 0, and f(x)_i accurately predicts all of the user's future ratings for items i : x_i = 0. This means that if the user rates a new item k (thereby creating a new vector x'), then f(x)_k = x'_k and f(x') = f(x). Hence, in this idealized scenario, y = f(x) should be a fixed point of a well-trained autoencoder: f(y) = y.

To explicitly enforce the fixed-point constraint and to be able to perform dense training updates, we augment every optimization iteration with iterative dense re-feeding steps (3 and 4 below) as follows:

(1) Given sparse x, compute dense f(x) and the loss using equation 1 (forward pass).
(2) Compute gradients and perform a weight update (backward pass).
(3) Treat f(x) as a new example and compute f(f(x)). Now both f(x) and f(f(x)) are dense and the loss from equation 1 has all m_i non-zero (second forward pass).
(4) Compute gradients and perform a weight update (second backward pass).

Steps (3) and (4) can also be performed more than once for every iteration.
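As a rough illustration of equation (1) and of one optimization iteration with a single dense re-feeding pass (steps 1-4 above), consider the following sketch. It again assumes PyTorch; masked_mse and train_step are hypothetical helper names, and detaching the first output before re-feeding it is one reasonable implementation choice rather than a detail stated in the text.

    import torch

    def masked_mse(outputs: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        # Equation (1): squared error averaged over observed (non-zero) ratings only.
        mask = (targets != 0).float()
        return ((outputs - targets) ** 2 * mask).sum() / mask.sum().clamp(min=1.0)

    def train_step(model, optimizer, x):
        """One optimization iteration with a single dense re-feeding pass (steps 1-4)."""
        # (1) forward pass on the sparse input, masked loss from equation (1)
        y = model(x)
        loss = masked_mse(y, x)
        # (2) backward pass and weight update
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # (3) treat the dense output f(x) as a new example: every m_i is now non-zero
        y = y.detach()                          # avoid backprop through the first pass
        loss_refeed = masked_mse(model(y), y)
        # (4) second backward pass and weight update
        optimizer.zero_grad()
        loss_refeed.backward()
        optimizer.step()
        return loss.item(), loss_refeed.item()

Performing steps (3) and (4) more than once per iteration would simply repeat the last block inside the function.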
3 EXPERIMENTS AND RESULTS

3.1 Experiment setup
For the rating prediction task, it is often most relevant to predict future ratings given the past ones instead of predicting ratings missing at random. We therefore split the Netflix Prize data by time into training and testing intervals, with the testing interval further divided into test and validation subsets; users and items that do not appear in the training set are removed from both test and validation subsets. Table 1 provides details on the data sets.

Table 1: Subsets of the Netflix Prize training set used in our experiments. We made sure that these splits match the ones used in [19].

               Full           3 months       6 months       1 year
    Training   12/99-11/05    09/05-11/05    06/05-11/05    06/04-05/05
      Users    477,412        311,315        390,795        345,855
      Ratings  98,074,901     13,675,402     29,179,009     41,451,832
    Testing    12/05          12/05          12/05          06/05
      Users    173,482        160,906        169,541        197,951
      Ratings  2,250,481      2,082,559      2,175,535      3,888,684

For most of our experiments we used a batch size of 128, trained using SGD with momentum of 0.9 and a learning rate of 0.001. We used Xavier initialization to initialize parameters. Note that, unlike [18], we did not use any layer-wise pre-training. We believe that we were able to do so successfully because we chose the right activation function (see Section 3.2).

3.2 Effects of the activation types
To explore the effects of using different activation functions, we tested some of the most popular choices in deep learning: sigmoid,
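For reference, the training configuration reported in Section 3.1 (batch size 128, SGD with momentum 0.9, learning rate 0.001, Xavier initialization) could be wired up roughly as follows. This is a minimal sketch that assumes PyTorch and reuses the DeepAutoEncoder and train_step sketches above; the model size and the randomly generated data are stand-ins, not the Netflix data.

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    BATCH_SIZE = 128        # Section 3.1
    LEARNING_RATE = 0.001   # Section 3.1
    MOMENTUM = 0.9          # Section 3.1

    def xavier_init(module):
        # Xavier initialization for every fully connected layer.
        if isinstance(module, nn.Linear):
            nn.init.xavier_uniform_(module.weight)
            nn.init.zeros_(module.bias)

    n_items = 1000                        # placeholder; the real value is the item count of the data split
    model = DeepAutoEncoder(n_items)      # autoencoder sketch from Section 2
    model.apply(xavier_init)

    optimizer = torch.optim.SGD(model.parameters(), lr=LEARNING_RATE, momentum=MOMENTUM)

    # Random sparse stand-ins for per-user rating vectors keep the sketch self-contained;
    # real inputs would come from the Netflix training split described in Table 1.
    ratings = torch.rand(2048, n_items) * (torch.rand(2048, n_items) < 0.01).float()
    loader = DataLoader(TensorDataset(ratings), batch_size=BATCH_SIZE, shuffle=True)

    for (x,) in loader:
        train_step(model, optimizer, x)   # dense re-feeding step from Section 2.2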