Reinforcement Learning to Rank with Markov Decision Process

Zeng Wei, Jun Xu*, Yanyan Lan, Jiafeng Guo, Xueqi Cheng
CAS Key Lab of Network Data Science and Technology, Institute of Computing Technology, Chinese Academy of Sciences

ABSTRACT

One of the central issues in learning to rank for information retrieval is to develop algorithms that construct ranking models by directly optimizing evaluation measures such as normalized discounted cumulative gain (NDCG). Existing methods usually focus on optimizing a specific evaluation measure calculated at a fixed position, e.g., NDCG calculated at a fixed position K. In information retrieval, the evaluation measures, including the widely used NDCG and P@K, are usually designed to evaluate the document ranking at all of the ranking positions, which provides much richer information than measuring the document ranking at a single position only. Thus, it is interesting to ask whether we can devise an algorithm that leverages the measures calculated at all of the ranking positions for learning a better ranking model. In this paper, we propose a novel learning to rank model on the basis of the Markov decision process (MDP), referred to as MDPRank. In the learning phase of MDPRank, the construction of a document ranking is considered as sequential decision making, in which each decision corresponds to an action of selecting a document for the corresponding position. The policy gradient algorithm of REINFORCE is adopted to train the model parameters. The evaluation measures calculated at every ranking position are utilized as the immediate rewards to the corresponding actions, which guide the learning algorithm to adjust the model parameters so that the measure is optimized. Experimental results on the LETOR benchmark datasets showed that MDPRank can outperform the state-of-the-art baselines.

KEYWORDS

learning to rank; Markov decision process

ACM Reference format:
Zeng Wei, Jun Xu*, Yanyan Lan, Jiafeng Guo, Xueqi Cheng. 2017. Reinforcement Learning to Rank with Markov Decision Process. In Proceedings of SIGIR '17, August 7–11, 2017, Shinjuku, Tokyo, Japan, 4 pages.
DOI: http://dx.doi.org/10.1145/3077136.3080685

* Corresponding author: Jun Xu

1 INTRODUCTION

Learning to rank has been widely used in information retrieval and recommender systems. Among the learning to rank methods, directly optimizing the ranking evaluation measures is a representative approach and has been proved to be effective. Following the idea of directly optimizing evaluation measures, a number of methods have been proposed, including SVMMAP [17], AdaRank [14], and PermuRank [15], etc. In training, these methods first construct a loss function on the basis of a predefined ranking evaluation measure. Then, approximations of the loss function or convex upper bounds upon the loss function are constructed and optimized, leading to different directly optimizing learning to rank algorithms. In ranking, each document is assigned a relevance score based on the learned ranking model.

Several learning to rank models that directly optimize evaluation measures have been proposed and applied to various ranking tasks, with different loss functions and optimization techniques adopted in these methods. For example, the loss function of SVMMAP is constructed on the basis of MAP; a hinge-loss upper bound is defined and optimized with structural SVM. AdaRank, another directly optimizing method, constructs its loss function on the basis of any evaluation measure whose values are between 0 and 1; an exponential upper bound is constructed and Boosting is adopted to conduct the optimization.

In general, the methods that directly optimize evaluation measures perform well in many ranking tasks, especially in terms of the specific evaluation measure used in the training phase. We also note that some widely used evaluation measures are designed to measure the goodness of a document ranking at all of the ranking positions. For example, NDCG can be calculated at every ranking position, and each value reflects the goodness of the document ranking from the beginning to the corresponding position. Existing methods, however, can only directly optimize the evaluation measure calculated at a predefined ranking position. For example, the AdaRank algorithm will focus on optimizing NDCG at rank K if its loss function is configured based on the evaluation measure NDCG@K. The information carried by the documents after rank K is ignored, because those documents make no contribution to NDCG@K. Thus, it is natural to ask whether it is possible to devise a learning to rank algorithm that directly optimizes evaluation measures and has the ability to leverage the measures at all of the ranks.

To answer the above question, we propose a new learning to rank model on the basis of the Markov decision process (MDP) [10], called MDPRank. The training phase of MDPRank considers the construction of a document ranking as a process of sequential decision making and learns the model parameters through maximizing the cumulative rewards of all of the decisions. Specifically, the ranking of M documents is considered as a sequence of M discrete time steps, where each time step corresponds to a ranking position. The ranking of documents, thus, is formalized as a sequence of M decisions, and each action corresponds to selecting one document. At each time step, the agent receives the environment's state and chooses an action on the basis of that state. One time step later, as a consequence of the action, the system transits to a new state. The choice of action at each time step depends on a policy, which is a function that maps the current state to a probability distribution over the possible actions. Reinforcement learning is employed to train the model parameters. Given a set of labeled queries, at each time step the agent receives a numerical action-dependent reward which is defined upon the evaluation measure calculated at that position. The policy gradient algorithm of REINFORCE [10] is adopted to adjust the model parameters so that the expected long-term discounted rewards, in terms of the evaluation measure, are maximized. Thus, MDPRank can directly optimize the evaluation measure and leverage the performances at all of the ranks as rewards.

Compared with existing methods that directly optimize IR measures, MDPRank enjoys the following advantages: 1) the ability to utilize the IR measures calculated at all of the ranking positions as supervision in training; 2) it directly optimizes the IR measure on the training data without approximation or upper bounding.

To evaluate the effectiveness of MDPRank, we conducted experiments on the LETOR benchmark datasets. The experimental results showed that MDPRank can outperform state-of-the-art learning to rank models, including methods that directly optimize evaluation measures such as SVM-MAP and AdaRank.
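To make the position-wise nature of the measure concrete, the snippet below computes NDCG at every ranking position for a given ranked list of relevance labels, using the common LETOR-style gain $2^y - 1$ and $\log_2$ discount; the function name and toy labels are ours, and other gain/discount conventions exist. These are the kind of per-position values that MDPRank uses as immediate rewards.

```python
# A minimal sketch (not from the paper) of computing NDCG at every ranking
# position k: DCG@k = sum_{i<=k} (2^{y_i} - 1) / log2(i + 1), normalized by
# the DCG of the ideal ordering of the same labels.
import numpy as np

def ndcg_at_all_positions(labels_in_ranked_order):
    """Return an array whose k-th entry (0-based) is NDCG@(k+1)."""
    y = np.asarray(labels_in_ranked_order, dtype=float)
    discounts = 1.0 / np.log2(np.arange(2, y.size + 2))   # positions 1..M
    dcg = np.cumsum((2.0 ** y - 1.0) * discounts)

    ideal = np.sort(y)[::-1]                               # best possible ordering
    idcg = np.cumsum((2.0 ** ideal - 1.0) * discounts)

    return np.divide(dcg, idcg, out=np.zeros_like(dcg), where=idcg > 0)

# Toy example: relevance labels of documents in the order a model ranked them.
print(ndcg_at_all_positions([2, 0, 1, 0]))   # NDCG@1, NDCG@2, NDCG@3, NDCG@4
```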
2 RELATED WORK

In learning to rank for information retrieval, one of the most straightforward ways to learn the ranking model is to directly optimize the measure used for evaluating the ranking performance. A number of methods have been developed in recent years. Some methods try to construct a continuous and differentiable approximation of the measure-based ranking error.

[...]

For example, [9] designed an MDP-based recommendation model that takes both the long-term effects of each recommendation and the expected value of each recommendation into account. The multi-armed bandit is also widely used for ranking [4, 8] and recommendation [5].

3 MDP FORMULATION OF LEARNING TO RANK

In the supervised learning setting, we are given $N$ labeled training queries $\{(q^{(n)}, X^{(n)}, Y^{(n)})\}_{n=1}^{N}$, where $X^{(n)} = \{\mathbf{x}^{(n)}_1, \cdots, \mathbf{x}^{(n)}_{M_n}\}$ and $Y^{(n)} = \{y^{(n)}_1, \cdots, y^{(n)}_{M_n}\}$ are the query-document feature set and the relevance label set for the documents retrieved by query $q^{(n)}$, respectively, and $M_n$ is the number of candidate documents $\{d_1, \cdots, d_{M_n}\}$ retrieved by query $q^{(n)}$.
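As a concrete picture of this setting, one hypothetical way to hold a labeled training query in memory is sketched below; the container name, field names, and toy numbers are ours rather than the paper's.

```python
# Hypothetical container mirroring the notation above: each training query
# q^(n) carries a feature matrix X^(n) of shape (M_n, d) and a relevance-label
# vector Y^(n) of length M_n. Names and numbers are ours, not the paper's.
from dataclasses import dataclass
import numpy as np

@dataclass
class LabeledQuery:
    qid: str          # identifier of q^(n)
    X: np.ndarray     # query-document feature vectors x_1^(n), ..., x_{M_n}^(n)
    y: np.ndarray     # graded relevance labels y_1^(n), ..., y_{M_n}^(n)

# A toy training set with one query, M_n = 3 candidate documents, d = 4 features.
train_set = [LabeledQuery("q1", np.random.rand(3, 4), np.array([2.0, 0.0, 1.0]))]
```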
3.1 Ranking as MDP

The process of document ranking can be formalized as an MDP, in which the construction of a document ranking is considered as sequential decision making: each time step corresponds to a ranking position, and each action selects a document for the corresponding position. The model, referred to as MDPRank, can be represented by a tuple $\langle S, A, T, R, \pi \rangle$ composed of states, actions, transition, reward, and policy, which are respectively defined as follows:

States: $S$ is a set of states which describe the environment. In ranking, the agent should know the ranking position as well as the candidate document set that it can still choose from. Thus, at time step $t$, the state $s_t$ is defined as a pair $[t, X_t]$, where $X_t$ is the set of documents remaining to be ranked.

Actions: $A$ is a discrete set of actions that an agent can take. The set of possible actions depends on the state $s_t$, denoted as $A(s_t)$. At time step $t$, the action $a_t \in A(s_t)$ selects a document $\mathbf{x}_{m(a_t)} \in X_t$ for the corresponding position.
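As an illustration of the state and action definitions above, the sketch below samples one ranking episode and applies a vanilla REINFORCE update. It assumes a linear scoring function $\mathbf{w}^\top \mathbf{x}$, a softmax policy over the remaining documents $X_t$, and the per-position DCG gain as the immediate reward; these modeling choices, and all names in the code, are our assumptions and not necessarily MDPRank's exact specification.

```python
# Illustrative sketch only: assumes a linear scorer w.x, a softmax policy over
# the remaining candidates X_t, per-position DCG gains as rewards, and a plain
# REINFORCE update. All names are ours, not the paper's.
import numpy as np

def reinforce_episode(w, X, y, gamma=1.0, lr=0.01):
    """Sample one ranking episode for a query and return the updated weights."""
    M, _ = X.shape
    remaining = list(range(M))                    # X_t: indices of unranked documents
    steps = []

    for t in range(M):                            # state s_t = [t, X_t]
        cand = X[remaining]                       # feature vectors of the documents in X_t
        scores = cand @ w
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()                      # softmax policy pi(a | s_t; w)
        k = np.random.choice(len(remaining), p=probs)
        grad_logp = cand[k] - probs @ cand        # grad_w log pi(a_t | s_t; w)
        m = remaining.pop(k)                      # action a_t: place document m at rank t+1

        reward = (2.0 ** y[m] - 1.0) / np.log2(t + 2)   # DCG gain of position t+1
        steps.append((grad_logp, reward))

    # REINFORCE: weight each log-probability gradient by the return from that step on.
    G, grad = 0.0, np.zeros_like(w)
    for grad_logp, r in reversed(steps):
        G = r + gamma * G
        grad += G * grad_logp
    return w + lr * grad                          # gradient ascent on the expected return

# Toy usage: one query with 3 candidate documents and 4 features per document.
rng = np.random.default_rng(0)
X, y, w = rng.random((3, 4)), np.array([2.0, 0.0, 1.0]), np.zeros(4)
for _ in range(50):
    w = reinforce_episode(w, X, y)
```

With $\gamma = 1$, the return accumulated from the first step equals the DCG of the whole ranking, so ascending this gradient estimate pushes the policy toward rankings that score well at every position.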
