Balancing Exploration and Exploitation in Listwise and Pairwise Online Learning to Rank for Information Retrieval

Katja Hofmann · Shimon Whiteson · Maarten de Rijke

Abstract: As retrieval systems become more complex, learning to rank approaches are being developed to automatically tune their parameters. Using online learning to rank, retrieval systems can learn directly from implicit feedback inferred from user interactions. In such an online setting, algorithms must obtain feedback for effective learning while simultaneously utilizing what has already been learned to produce high quality results. We formulate this challenge as an exploration-exploitation dilemma and propose two methods for addressing it. By adding mechanisms for balancing exploration and exploitation during learning, each method extends a state-of-the-art learning to rank method, one based on listwise learning and the other on pairwise learning. Using a recently developed simulation framework that allows assessment of online performance, we empirically evaluate both methods. Our results show that balancing exploration and exploitation can substantially and significantly improve the online retrieval performance of both listwise and pairwise approaches. In addition, the results demonstrate that such a balance affects the two approaches in different ways, especially when user feedback is noisy, yielding new insights relevant to making online learning to rank effective in practice.

Keywords: Information retrieval · Learning to rank · Implicit feedback

An earlier version of this article appeared in Hofmann et al (2011a). In this substantially revised and extended version we introduce a novel approach for balancing exploration and exploitation that works with pairwise online learning approaches, and carefully evaluate this new approach. Comparisons with the earlier described algorithm for listwise approaches yield new insights into the behavior of the two types of approach in online settings, especially how they compare in the face of noisy feedback and how they react to a balance of exploration and exploitation.

K. Hofmann, ISLA, University of Amsterdam, Amsterdam, The Netherlands. E-mail: [email protected]
S. Whiteson. E-mail: [email protected]
M. de Rijke. E-mail: [email protected]

1 Introduction

Information retrieval (IR) systems are becoming increasingly complex. For example, web search engines may combine hundreds of ranking features that each capture a particular aspect of a query, candidate documents, and the match between the two.[1] In heavily used search engines, these combinations are carefully tuned to fit users' needs. However, on smaller-scale systems, such careful manual tuning is often infeasible.

[1] http://www.google.com/corporate/tech.html

For automatically tuning the parameters of such a system, machine learning algorithms are invaluable (Liu, 2009). Most methods employ supervised learning, i.e., algorithms are trained on examples of relevant and non-relevant documents for particular queries. While large amounts of data are available for training in some applications, such as web search, there are also many situations in which such data cannot be obtained. For example, when deploying a search engine for a company's intranet (enterprise search) or a personal computer (desktop search, personalization), collecting the large amounts of training data required for supervised learning is usually not feasible (Sanderson, 2010). Even in environments where training data is available, it may not capture typical information needs and user preferences perfectly (Radlinski and Craswell, 2010), and cannot anticipate future changes in user needs.

A promising direction for addressing a lack of resources for manual or supervised training is online learning to rank (Joachims, 2002; Yue and Joachims, 2009; Yue et al, 2009). These methods work in settings where no training data is available before deployment. They learn directly from implicit feedback inferred from user interactions, such as clicks, making it possible to adapt to users throughout the lifetime of the system.[2]

[2] This article focuses on learning solutions that generalize to unseen queries. Thus, learning from previous interactions with results for the same query is not possible, in contrast to settings assumed by most online relevance feedback and re-ranking approaches. These approaches are orthogonal to work in online learning to rank and could, e.g., be used to further improve learned rankings for frequent queries.

In an online setting, it is crucial to consider the impact of such learning on the users. In contrast to offline approaches, where the goal is to learn as effectively as possible from the available training data, online learning affects, and is affected by, how user feedback is collected. Ideally, the learning algorithm should not interfere with the user experience, observing user behavior and learning in the background, so as to present search results that meet the user's information needs as well as possible at all times. This would imply passively observing, e.g., clicks on result documents. However, passively observed feedback can be biased towards the top results displayed to the user (Silverstein et al, 1999). Learning from this biased feedback may be suboptimal, thereby reducing the system's performance later on. Consequently, an online learning to rank approach should take into account both the quality of current search results and the potential to improve that quality in the future, if feedback suitable for learning can be observed.

In this article, we frame this fundamental trade-off as an exploration-exploitation dilemma. If the system presents only document lists that it expects will satisfy the user, it cannot obtain feedback on other, potentially better, solutions. However, if it presents document lists from which it can gain a lot of new information, it risks presenting bad results to the user during learning. Therefore, to perform optimally, the system must explore new solutions, while also maintaining satisfactory performance by exploiting existing solutions. Making online learning to rank for IR work in realistic settings requires effective ways to balance exploration and exploitation.

We investigate mechanisms for achieving a balance between exploration and exploitation when using listwise and pairwise methods, the two most successful approaches for learning to rank in IR (Liu, 2009). The pairwise approach takes as input pairs of documents with labels identifying which is preferred and learns a classifier that predicts these labels. In principle, pairwise approaches can be directly applied online, as preference relations can be inferred from clicks (Joachims, 2002). However, as we demonstrate in this article, balancing exploration and exploitation is crucial to achieving good performance.
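To make the pairwise setting concrete, the following minimal sketch extracts preference pairs from observed clicks using a skip-above-style heuristic in the spirit of Joachims (2002): a clicked document is taken to be preferred over the unclicked documents ranked above it. All function and variable names are illustrative, not taken from the approaches evaluated in this article.

```python
def preference_pairs_from_clicks(ranking, clicked):
    """Infer pairwise document preferences from clicks on a result list.

    Skip-above-style heuristic (in the spirit of Joachims, 2002): a
    clicked document is preferred over every unclicked document ranked
    above it, since the user presumably saw and skipped those.

    ranking: document ids in the order they were shown to the user.
    clicked: set of document ids that received a click.
    Returns (preferred, non_preferred) pairs for a pairwise learner.
    """
    pairs = []
    for rank, doc in enumerate(ranking):
        if doc not in clicked:
            continue
        for skipped in ranking[:rank]:
            if skipped not in clicked:
                pairs.append((doc, skipped))
    return pairs


# Example: the user clicks the documents at ranks 2 and 4.
print(preference_pairs_from_clicks(["d1", "d2", "d3", "d4"], {"d2", "d4"}))
# -> [('d2', 'd1'), ('d4', 'd1'), ('d4', 'd3')]
```

Pairs produced this way can be fed to any pairwise learner; note, however, that which pairs can be observed depends on the list that was shown, which is exactly why the choice between exploratory and exploitative result lists matters.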
In contrast, listwise approaches aim to directly optimize an evaluation measure, such as NDCG, that concerns the entire document list. Since such evaluation measures cannot be computed online, new approaches that work with implicit feedback have been developed (Yue and Joachims, 2009). These online approaches rely on interleaving techniques, where preference relations between two ranking functions can be inferred from aggregated clicks (Joachims et al, 2007).
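As a rough illustration of how interleaving supports such comparisons, the sketch below mixes the lists of two rankers team-draft style and credits each click to the ranker that contributed the clicked document. This is a simplified stand-in for the interleaving methods in the literature (Joachims et al, 2007, for instance, analyze a related balanced-interleaving scheme), and all names are ours.

```python
import random

def team_draft_interleave(ranking_a, ranking_b, rng=random):
    """Mix two rankings, team-draft style: in each round a coin flip
    decides which ranker contributes first, then both contribute their
    highest-ranked document not yet shown. Returns the interleaved list
    and, per position, the contributing ranker (0 or 1). A minimal
    sketch, not a production implementation.
    """
    interleaved, teams, used = [], [], set()
    a, b = list(ranking_a), list(ranking_b)
    while a or b:
        for team in rng.sample([0, 1], 2):  # random pick order per round
            source = a if team == 0 else b
            while source and source[0] in used:
                source.pop(0)  # skip documents already shown
            if source:
                doc = source.pop(0)
                used.add(doc)
                interleaved.append(doc)
                teams.append(team)
    return interleaved, teams

def compare_rankers(teams, clicked_positions):
    """Aggregate clicks into a preference between the two rankers:
    the ranker whose documents attracted more clicks wins."""
    credits = [0, 0]
    for pos in clicked_positions:
        credits[teams[pos]] += 1
    if credits[0] == credits[1]:
        return None  # tie: no preference inferred
    return 0 if credits[0] > credits[1] else 1
```

Aggregated over many queries, such per-impression preferences yield a comparison of the two ranking functions that is far less affected by position bias than raw click counts on a single list.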
In this article, we present the first two algorithms that can balance exploration and exploitation in settings where only implicit feedback is available. First, we start from a recently developed listwise algorithm that is initially purely exploratory (Yue and Joachims, 2009). Second, we develop a similar mechanism for a pairwise approach that is initially purely exploitative.
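Purely as a conceptual illustration of what injecting such a balance can look like (it matches neither of the article's two mechanisms exactly; those are developed in §3), a single exploration rate can decide, rank by rank, whether the list shown to the user draws its next document from an exploratory or an exploitative ranking:

```python
import random

def epsilon_greedy_result_list(exploit_docs, explore_docs, epsilon, rng=random):
    """Illustrative sketch only: build the list shown to the user rank
    by rank, taking the next exploratory document with probability
    epsilon and the next exploitative document otherwise (epsilon = 0
    is purely exploitative, epsilon = 1 purely exploratory).
    """
    shown, used = [], set()
    exploit, explore = list(exploit_docs), list(explore_docs)
    while exploit or explore:
        pick_explore = explore and rng.random() < epsilon
        source = explore if pick_explore else exploit
        if not source:            # fall back if the chosen list is empty
            source = explore or exploit
        doc = source.pop(0)
        if doc not in used:       # skip duplicates across the two lists
            used.add(doc)
            shown.append(doc)
    return shown
```

The single parameter epsilon makes the trade-off explicit: raising it yields more informative feedback at the cost of showing more untested documents, which is the tension our algorithms are designed to manage.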
We assess the resulting algorithms using an evaluation framework that leverages standard learning to rank datasets and models of users' click behavior. Our main result is that finding a proper balance between exploration and exploitation can substantially and significantly improve the online retrieval performance of both listwise and pairwise approaches.

In addition, our results are the first to shed light on the strengths and weaknesses of using pairwise and listwise approaches online, as they have previously only been compared offline. We find that the pairwise approach can learn effectively when feedback is reliable. However, when feedback is noisy, a high amount of exploration is required to obtain reasonable performance. The listwise approach learns more slowly when provided with perfect feedback, but is much more robust to noise than the pairwise approach. We discuss in detail the effects on each approach of balancing exploration and exploitation, the amount of noise in user feedback, and characteristics of the datasets. Finally, we describe the implications of our results for making these approaches work effectively in practice.

The remainder of this paper is organized as follows. We present related work in §2 and our methods for balancing exploration and exploitation in §3. Experiments are described in §4, followed by results and analysis in §5. We conclude in §6.

2 Related work

While our methods are the first to balance exploration and exploitation in a setting where only implicit feedback is available, a large body of research addresses related problems. The question of how to explore is addressed by active learning approaches for supervised learning to rank, and in online