Context-Aware Learning to Rank with Self-Attention

Przemysław Pobrotyn, Tomasz Bartczak, Mikołaj Synowiec, Radosław Białobrzeski, Jarosław Bojar
ML Research at Allegro.pl

ABSTRACT

In learning to rank, one is interested in optimising the global ordering of a list of items according to their utility for users. Popular approaches learn a scoring function that scores items individually (i.e. without the context of other items in the list) by optimising a pointwise, pairwise or listwise loss. The list is then sorted in the descending order of the scores. Possible interactions between items present in the same list are taken into account in the training phase at the loss level. However, during inference, items are scored individually, and possible interactions between them are not considered. In this paper, we propose a context-aware neural network model that learns item scores by applying a self-attention mechanism. The relevance of a given item is thus determined in the context of all other items present in the list, both in training and in inference. We empirically demonstrate significant performance gains of the self-attention based neural architecture over Multi-Layer Perceptron baselines. This effect is consistent across popular pointwise, pairwise and listwise losses on datasets with both implicit and explicit relevance feedback. Finally, we report new state-of-the-art results on MSLR-WEB30K, the learning to rank benchmark.

CCS CONCEPTS

• Information systems → Learning to rank; Information retrieval;

KEYWORDS

learning to rank, self-attention, context-aware ranking

ACM Reference Format:
Przemysław Pobrotyn, Tomasz Bartczak, Mikołaj Synowiec, Radosław Białobrzeski, and Jarosław Bojar. 2020. Context-Aware Learning to Rank with Self-Attention. In Proceedings of ACM Conference (Conference'17). ACM, New York, NY, USA, 8 pages. https://doi.org/10.1145/nnnnnnn.nnnnnnn

1 INTRODUCTION

Learning to rank (LTR) is an important area of machine learning research, lying at the core of many information retrieval (IR) systems. It arises in numerous industrial applications like search engines, recommender systems, question-answering systems, and others.

A typical machine learning solution to the LTR problem involves learning a scoring function, which assigns real-valued scores to each item of a given list, based on a dataset of item features and human-curated or implicit (e.g. clickthrough logs) relevance labels. Items are then sorted in the descending order of scores [20]. Performance of the trained scoring function is usually evaluated using an IR metric like Mean Reciprocal Rank (MRR) [30], Normalised Discounted Cumulative Gain (NDCG) [16] or Mean Average Precision (MAP) [4].

In contrast to other classic machine learning problems like classification or regression, the main goal of a ranking algorithm is to determine relative preference among a group of items. Scoring items individually is a proxy of the actual learning to rank task. Users' preference for a given item on a list depends on other items present in the same list: an otherwise preferable item might become less relevant in the presence of other, more relevant items. Common learning to rank algorithms attempt to model such inter-item dependencies at the loss level. That is, items in a list are still scored individually, but the effect of their interactions on evaluation metrics is accounted for in the loss function, which usually takes the form of a pairwise (RankNet [6], LambdaLoss [31]) or a listwise (ListNet [9], ListMLE [32]) objective. For example, in LambdaMART [8] the gradient of the pairwise loss is rescaled by the change in NDCG of the list which would occur if a pair of items was swapped. Pointwise objectives, on the other hand, do not take such dependencies into account.
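As a concrete illustration of the metrics and of the ΔNDCG rescaling mentioned above, the following sketch (ours, not taken from the paper; the helper names, the standard gain/discount formulation and the example numbers are illustrative assumptions) computes NDCG@k for a scored list and the change in NDCG caused by swapping a pair of items:

```python
import numpy as np

def dcg_at_k(relevances, k):
    """Discounted Cumulative Gain for the top-k items of a ranked list."""
    rel = np.asarray(relevances, dtype=float)[:k]
    gains = 2.0 ** rel - 1.0                       # graded relevance gain
    discounts = np.log2(np.arange(2, rel.size + 2))
    return float(np.sum(gains / discounts))

def ndcg_at_k(scores, labels, k=5):
    """NDCG@k: DCG of the list ordered by predicted scores,
    normalised by the DCG of the ideal (label-sorted) ordering."""
    order = np.argsort(scores)[::-1]               # sort by descending score
    ideal_dcg = dcg_at_k(np.sort(labels)[::-1], k)
    if ideal_dcg == 0.0:
        return 0.0
    return dcg_at_k(np.asarray(labels)[order], k) / ideal_dcg

# Change in NDCG if two items are swapped in the current ranking --
# the quantity LambdaMART-style losses use to rescale pairwise gradients.
scores = np.array([2.1, 0.3, 1.7, 0.9])
labels = np.array([1, 0, 2, 0])
base = ndcg_at_k(scores, labels)
swapped = scores.copy()
swapped[[0, 2]] = swapped[[2, 0]]                  # swap items 0 and 2
delta_ndcg = abs(ndcg_at_k(swapped, labels) - base)
print(base, delta_ndcg)
```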
In this work, we propose a learnable, context-aware, self-attention [28] based scoring function, which allows for modelling of inter-item dependencies not only at the loss level but also in the computation of items' scores. Self-attention is a mechanism first introduced in the context of natural language processing. Unlike RNNs [14], it does not process the input items sequentially but allows the model to attend to different parts of the input regardless of their distance from the currently processed item. We adapt the Transformer [28], a popular self-attention based neural machine translation architecture, to the ranking task. Since the self-attention operation is permutation-equivariant (it scores items the same way irrespective of their input order), we obtain a permutation-equivariant scoring function suitable for ranking. If we further refine the model with positional encodings, the obtained model becomes suitable for the re-ranking setting. We demonstrate that the obtained (re)ranking model significantly improves performance over Multi-Layer Perceptron (MLP) baselines across a range of pointwise, pairwise and listwise ranking losses. Evaluation is conducted on MSLR-WEB30K [25], the benchmark LTR dataset with multi-level relevance judgements, as well as on clickthrough data coming from Allegro.pl, a large-scale e-commerce search engine. We also establish new state-of-the-art results on WEB30K in terms of NDCG@5.

We provide an open-source PyTorch [23] implementation of our self-attentive context-aware ranker, available at https://github.com/allegro/allRank.

The rest of the paper is organised as follows. In Section 2 we review related work. In Section 3 we formulate the problem solved in this work. In Section 4 we describe our self-attentive ranking model. Experimental results and their discussion are presented in Section 5. In Section 6 we conduct an ablation study of various hyperparameters of our model. Finally, a summary of our work is given in Section 7.
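Before the formal treatment in Sections 3 and 4, a rough sketch may help fix ideas. The following PyTorch snippet is our own illustration of a context-aware scoring function built from a Transformer encoder; it is not the exact allRank architecture, and the class name, layer sizes and padding-mask handling are assumptions.

```python
import torch
import torch.nn as nn

class SelfAttentiveRanker(nn.Module):
    """Illustrative context-aware scoring function: items of a list attend to
    each other through a Transformer encoder before being scored."""
    def __init__(self, num_features, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        self.input_proj = nn.Linear(num_features, d_model)   # project item features
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.score = nn.Linear(d_model, 1)                    # one score per item

    def forward(self, items, padding_mask=None):
        # items: (batch, list_len, num_features); padding_mask: (batch, list_len),
        # True where a slot is padding (lists may have varying lengths).
        x = self.input_proj(items)
        x = self.encoder(x, src_key_padding_mask=padding_mask)
        return self.score(x).squeeze(-1)                      # (batch, list_len)

# Scoring a batch of two lists of five items with 136 features each (WEB30K-like).
model = SelfAttentiveRanker(num_features=136)
scores = model(torch.randn(2, 5, 136))
```

Because no positional encodings are added to the projected inputs, the scores in this sketch are permutation-equivariant, as discussed above; adding positional encodings would make the model order-aware, which is the variant suited to the re-ranking setting.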
2 RELATED WORK

Learning to rank has been extensively studied and there is a plethora of resources available on classic pointwise, pairwise and listwise approaches. We refer the reader to [20] for an overview of the most popular methods.

What the majority of LTR methods have in common is that their scoring functions score items individually. Inter-item dependencies are (if at all) taken into account at the loss level only. Previous attempts at modelling the context of other items in a list in the scoring function include:

• a pairwise scoring function [12] and the Groupwise Scoring Function (GSF) [2], which incorporates the former work as its special case. However, the proposed GSF method simply concatenates feature vectors of multiple items and passes them through an MLP. To desensitise the model to the order of concatenated items, Monte-Carlo sampling is used, which yields an unscalable algorithm,
• a seq2slate model [5], which uses an RNN combined with a variant of Pointer Networks [29] in an encoder-decoder type architecture to both encode items in a context-aware fashion and then produce the optimal list by selecting items one-by-one. The authors evaluate their approach only on clickthrough data,
• similarly to our approach, Pei et al. [24] use the self-attention mechanism to model inter-item dependencies. Their approach, however, was not evaluated on the standard WEB30K dataset and the only loss function considered was ListNet.

Our proposed solution to the problem of context-aware ranking makes use of the self-attention mechanism. It was first introduced as intra-attention in [11] and received more attention after the introduction of the Transformer architecture [28]. Our model can be seen as a special case of the encoder part of the Transformer.

Our model, being a neural network, can be trained with gradient-based optimisation methods to minimise any differentiable loss function. Loss functions suitable for the ranking setting have been studied extensively in the past [20]. In order to demonstrate that our context-aware model provides performance boosts irrespective of the loss function used, we evaluate its performance when tasked with optimising several popular ranking losses.

We compare the proposed approach with those of the aforementioned methods which provided an evaluation on WEB30K in terms of NDCG@5. These include the GSF of [2] and the DLCM of [1]. We outperform both competing methods.

3 PROBLEM FORMULATION

In this section, we formulate the problem at hand in the learning to rank setting. Let X be the training set. It consists of pairs (x, y) of a list x of d_f-dimensional real-valued vectors x_i together with a list y of their relevance labels y_i (multi-level or binary). Note that lists x in the training set may be of varying length. The goal is to find a scoring function f which maximises an IR metric of choice (e.g. NDCG) on the test set. Since IR metrics are rank-based (thus, non-differentiable), the scoring function f is trained to minimise the average of a surrogate loss l over the training data,

L(f) = \frac{1}{|X|} \sum_{(x, y) \in X} l((x, y), f),

while controlling for overfitting.
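To relate this objective to training code, the sketch below minimises the empirical average of a surrogate loss over mini-batches. It is our illustration under stated assumptions: ListNet's listwise cross-entropy stands in for l only as an example, the model and data loader are assumed to exist (e.g. the ranker sketched earlier), and masking of padded, variable-length lists is omitted for brevity.

```python
import torch.nn.functional as F

def listnet_loss(scores, labels):
    """ListNet-style listwise surrogate: cross-entropy between the top-one
    probability distributions induced by predicted scores and true labels."""
    true_dist = F.softmax(labels.float(), dim=-1)
    log_pred = F.log_softmax(scores, dim=-1)
    return -(true_dist * log_pred).sum(dim=-1).mean()

def train_epoch(model, loader, optimiser):
    """One pass over the training set, minimising the empirical average
    L(f) = 1/|X| * sum over (x, y) in X of l((x, y), f)."""
    model.train()
    for items, labels in loader:        # items: (batch, list_len, num_features)
        optimiser.zero_grad()
        scores = model(items)           # context-aware scores for whole lists
        loss = listnet_loss(scores, labels)
        loss.backward()
        optimiser.step()
```

Because the scoring function in this framework sees the whole list at once, swapping the loss for a pointwise, pairwise or another listwise objective only changes listnet_loss; the context-aware scoring itself is unchanged.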
