
Reinforcement Learning to Rank in E-Commerce Search Engine: Formalization, Analysis, and Application

Yujing Hu, Qing Da, Anxiang Zeng (Alibaba Group, Hangzhou, China)
Yang Yu (National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China)
Yinghui Xu (Artificial Intelligence Department, Zhejiang Cainiao Supply Chain Management Co., Ltd., Hangzhou, China)

ABSTRACT
In E-commerce platforms such as Amazon and TaoBao, ranking items in a search session is a typical multi-step decision-making problem. Learning to rank (LTR) methods have been widely applied to ranking problems. However, such methods often treat the different ranking steps in a session as independent, whereas in practice they may be highly correlated with each other. To better exploit the correlation between different ranking steps, in this paper we propose to use reinforcement learning (RL) to learn an optimal ranking policy which maximizes the expected accumulative rewards in a search session. Firstly, we formally define the concept of search session Markov decision process (SSMDP) to formulate the multi-step ranking problem. Secondly, we analyze the properties of SSMDPs and theoretically prove the necessity of maximizing accumulative rewards. Lastly, we propose a novel policy gradient algorithm for learning an optimal ranking policy, which is able to deal with the problems of high reward variance and unbalanced reward distribution in an SSMDP. Experiments are conducted both in simulation and in the TaoBao search engine. The results demonstrate that our algorithm performs much better than state-of-the-art LTR methods, with more than 40% and 30% growth of total transaction amount in the simulation and the real application, respectively.

KEYWORDS
reinforcement learning; online learning to rank; policy gradient

ACM Reference Format:
Yujing Hu, Qing Da, Anxiang Zeng, Yang Yu, and Yinghui Xu. 2018. Reinforcement Learning to Rank in E-Commerce Search Engine: Formalization, Analysis, and Application. In Proceedings of The 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD '18). ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3219819.3219846

1 INTRODUCTION
Over the past decades, shopping online has become an important part of people's daily life, requiring E-commerce giants like Amazon, eBay and TaoBao to provide stable and appealing services for hundreds of millions of users all over the world. Among these services, commodity search is the fundamental infrastructure of these E-commerce platforms, offering users the opportunity to search for commodities, browse product information and make comparisons. For example, every day millions of users choose to purchase commodities through the TaoBao search engine.

In this paper, we focus on the problem of ranking items in large-scale item search engines, which refers to assigning each item a score and sorting the items according to their scores. Generally, a search session between a user and the search engine is a multi-step ranking problem that proceeds as follows:
(1) the user inputs a query in the search box of the search engine,
(2) the search engine ranks the items related to the query and displays the top K items (e.g., K = 10) in a page,
(3) the user performs some operations on the page (e.g., clicks items, buys a certain item, or just requests a new page for the same query),
(4) when a new page is requested, the search engine reranks the remaining items and displays the top K of them.
These four steps repeat until the user buys some items or simply leaves the search session. Empirically, a successful transaction always involves multiple rounds of the above process; a minimal sketch of this interaction loop is given below.
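To make the interaction loop concrete, the following Python sketch simulates the four steps above. The search_engine and user objects, their method names (retrieve, rank, respond) and the feedback values are assumptions introduced only for illustration; they are not interfaces from the paper or from any real system.

    # A minimal sketch of the multi-step ranking loop in a search session.
    # "search_engine" and "user" are hypothetical objects with the methods
    # used below; K is the page size from step (2).

    K = 10

    def run_search_session(search_engine, user, query):
        """Rank, display a page, observe user feedback, and repeat until the
        user buys something or leaves the session."""
        candidates = search_engine.retrieve(query)   # items related to the query
        session_log = []
        while candidates:
            page = search_engine.rank(query, candidates)[:K]  # steps (2)/(4): show top-K items
            feedback = user.respond(page)                     # step (3): click / buy / next page / leave
            session_log.append((page, feedback))
            if feedback in ("buy", "leave"):                  # the session terminates here
                break
            # otherwise the user requested a new page: rerank only the remaining items
            candidates = [item for item in candidates if item not in page]
        return session_log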
The operations of users in a search session may indicate their personal intentions and preferences on items. From a statistical view, these signals can be utilized to learn a ranking function which satisfies the users' demand. This motivates the marriage of machine learning and information retrieval, namely the learning to rank (LTR) methods [8, 18], which learn a ranking function by classification or regression from training data. The major paradigms of supervised LTR are pointwise [15, 21], pairwise [4, 5], and listwise [6]. Recently, online learning techniques such as regret minimization [2, 11, 14] have been introduced into the LTR domain for learning directly from user signals. Compared with offline LTR, online LTR avoids both the mismatch between manually curated labels and user intent [31] and the expensive cost of creating labeled data sets. Although rigorous mathematical models have been adopted for problem formalization [11, 31, 32] and guarantees on regret bounds have been established, most of these works only consider a one-shot ranking problem, meaning that the interaction between the search engine and each user contains only one round of ranking-and-feedback activity. In practice, however, a search session often contains multiple rounds of interaction, and the sequential correlation between rounds may be an important factor for ranking that has not been well investigated.

In this paper, we consider the multi-step sequential ranking problem mentioned above and propose a novel reinforcement learning (RL) algorithm for learning an optimal ranking policy. The major contributions of this paper are as follows.
• We formally define the concept of search session Markov decision process (SSMDP) to formulate the multi-step ranking problem, by identifying the state space, reward function and state transition function.
• We theoretically prove that maximizing accumulative rewards is necessary, indicating that the different ranking steps in a session are tightly correlated rather than independent.
• We propose a novel algorithm named deterministic policy gradient with full backup estimation (DPG-FBE), designed for the problems of high reward variance and unbalanced reward distribution of an SSMDP, which can hardly be handled even by existing state-of-the-art RL algorithms.
• We empirically demonstrate that our algorithm performs much better than online LTR methods, with more than 40% and 30% growth of total transaction amount in the simulation and the TaoBao application, respectively.

The rest of the paper is organized as follows. Section 2 introduces the background of this work. The problem description, the analysis of the SSMDP, and the proposed algorithm are presented in Sections 3, 4 and 5, respectively. The experimental results are shown in Section 6, and Section 7 concludes the paper.

2 BACKGROUND
In this section, we briefly review some key concepts of reinforcement learning and the related work in the online LTR domain.

2.1 Reinforcement Learning
In reinforcement learning, the interaction between an agent and its environment is typically modeled as a Markov decision process (MDP), in which the agent repeatedly observes a state, takes an action, and receives a reward. The objective of the agent in an MDP is to find an optimal policy which maximizes the expected accumulative rewards starting from any state $s$ (typically under the infinite-horizon discounted setting), which is defined by $V^*(s) = \max_{\pi} \mathbb{E}^{\pi}\big[\sum_{k=0}^{\infty} \gamma^k r_{t+k} \mid s_t = s\big]$, where $\pi : S \times A \to [0, 1]$ denotes any policy of the agent, $\mathbb{E}^{\pi}$ stands for expectation under policy $\pi$, $t$ is the current time step, $k$ is a future time step, and $r_{t+k}$ is the immediate reward at time step $(t+k)$. This goal is equivalent to finding the optimal state-action value $Q^*(s, a) = \max_{\pi} \mathbb{E}^{\pi}\big\{\sum_{k=0}^{\infty} \gamma^k r_{t+k} \mid s_t = s, a_t = a\big\}$ for any state-action pair $(s, a)$. In the finite-horizon setting with a time horizon $T$, the objective of the agent can be reinterpreted as finding the optimal policy which maximizes the expected $T$-step discounted return $\mathbb{E}^{\pi}\big[\sum_{k=0}^{T} \gamma^k r_{t+k} \mid s_t = s\big]$ or undiscounted return $\mathbb{E}^{\pi}\big[\sum_{k=0}^{T-1} r_{t+k} \mid s_t = s\big]$ in the discounted and undiscounted reward cases, respectively.
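As a concrete reading of the return definitions above, the following sketch computes a $T$-step discounted return and an undiscounted return for a recorded reward sequence; the reward values and the discount factor $\gamma$ below are made up for illustration only.

    # Compute the T-step discounted and undiscounted returns of a reward
    # sequence r_t, r_{t+1}, ... observed from time step t onward.
    # The reward values and gamma are illustrative, not data from the paper.

    def discounted_return(rewards, gamma):
        """sum_{k=0}^{T} gamma^k * r_{t+k} over a finite reward sequence."""
        return sum((gamma ** k) * r for k, r in enumerate(rewards))

    def undiscounted_return(rewards):
        """sum_{k} r_{t+k}, i.e. the plain sum of the rewards."""
        return sum(rewards)

    rewards = [0.0, 0.0, 1.0, 0.0, 5.0]   # hypothetical per-step rewards in one session
    gamma = 0.9                            # discount factor

    print(discounted_return(rewards, gamma))   # 0.9**2 * 1.0 + 0.9**4 * 5.0 = 4.0905
    print(undiscounted_return(rewards))        # 6.0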
An optimal policy can be found by computing the optimal state-value function $V^*$ or the optimal state-action value function $Q^*$. Early methods such as dynamic programming [27] and temporal-difference learning [29] rely on a table to store and compute the value functions. However, such tabular methods cannot scale to problems with large state/action spaces due to the curse of dimensionality. Function approximation is widely used to address this scalability issue of RL. By using a parameterized function (e.g., linear functions [19], neural networks [20, 24]) to represent the value function or the policy (known as value function approximation and the policy gradient method, respectively), the learning problem is transformed into optimizing the function parameters according to reward signals. In recent years, policy gradient methods [23, 25, 28] have drawn much attention in the RL domain. The explicit parameterized representation of the policy enables the learning agent to search directly in the policy space and avoids the policy degradation problem of value function approximation.
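To make the parameterized-policy idea concrete, the sketch below performs a single REINFORCE-style gradient step for a linear softmax policy on a toy problem. It only illustrates optimizing policy parameters from reward signals; the feature dimension, episode data and step sizes are invented for the example, and this is not the DPG-FBE algorithm proposed later in this paper.

    import numpy as np

    # One REINFORCE-style update for a linear softmax policy pi_theta(a | s).
    # All quantities below (feature size, episode, learning rate) are toy values.

    rng = np.random.default_rng(0)
    n_features, n_actions = 4, 3
    theta = np.zeros((n_features, n_actions))     # policy parameters

    def softmax_policy(state, theta):
        """Action probabilities of the linear softmax policy at a given state."""
        logits = state @ theta
        logits = logits - logits.max()            # numerical stability
        probs = np.exp(logits)
        return probs / probs.sum()

    def grad_log_pi(state, action, theta):
        """Gradient of log pi_theta(action | state) w.r.t. theta."""
        probs = softmax_policy(state, theta)
        grad = -np.outer(state, probs)            # -x * pi(b | s) for every action b
        grad[:, action] += state                  # +x for the action actually taken
        return grad

    # A hypothetical episode: (state features, action taken, reward received).
    episode = [(rng.normal(size=n_features), int(rng.integers(n_actions)), r)
               for r in (0.0, 1.0, 5.0)]

    gamma, alpha = 0.9, 0.01
    G = 0.0
    update = np.zeros_like(theta)
    for t, (state, action, reward) in reversed(list(enumerate(episode))):
        G = reward + gamma * G                    # return G_t from step t onward
        update += (gamma ** t) * G * grad_log_pi(state, action, theta)

    theta += alpha * update                       # ascend the estimated policy gradient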
2.2 Related Work
Early attempts at online LTR can be dated back to the evaluation of RankSVM in online settings [8]. As claimed by Hofmann et al., balancing exploitation and exploration should be a key ability of online LTR methods [7]. The theoretical results in the online learning community (typically in the bandit problem domain) [2, 14] provide rich mathematical tools for online LTR problem formalization as well as algorithms for efficient exploration, which motivates a lot