Semi-Markov Reinforcement Learning for Stochastic Resource Collection

Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI-20)

Sebastian Schmoll and Matthias Schubert
LMU Munich

Abstract

We show that the task of collecting stochastic, spatially distributed resources (Stochastic Resource Collection, SRC) may be considered as a Semi-Markov Decision Process. Our Deep-Q-Network (DQN) based approach uses a novel scalable and transferable artificial neural network architecture. The concrete SRC use case is an officer (single agent) trying to maximize the number of fined parking violations in his area. We evaluate our approach on an environment based on real-world parking data of the city of Melbourne. In small, hence simple, settings with short distances between resources and few simultaneous violations, our approach is comparable to previous work. When the size of the network grows (and hence the number of resources), our solution significantly outperforms preceding methods. Moreover, applying a trained agent to a non-overlapping new area outperforms existing approaches.

1 Introduction

Parking spots are often a rare resource and therefore, smart cities try to allocate them fairly among all drivers. On that account, cities usually establish parking restrictions such as a maximum parking duration. Unfortunately, people tend to violate restrictions. Hence, parking officers issue tickets for overstaying cars. In smart cities, the assignment of parking space might be facilitated by the use of sensors which allow real-time monitoring of the current state of particular spots. The state of a parking spot can be free, occupied, in violation, or already fined. Based on this information, it is not only possible to recognize violations but also to predict violations in the near future. Due to legal constraints and special rules for residents, automated fining is not allowed. Thus, we aim to find a movement policy for an officer (single agent) which maximizes the number of issued tickets within working hours (finite horizon). The task is non-deterministic because overstaying cars might drive away before the officer arrives to record them. In previous work, this task is called the Travelling Officer Problem (TOP) [Shao et al., 2017].

Similar efforts exist in the transportation domain, which can be outlined in the more general framework of Stochastic Resource Collection (SRC). The tasks vary with respect to observability, the dynamics of resources, and the stochasticity of rewards and/or travel times. Examples include the Taxi Dispatching Problem (TDP) and finding an available parking spot (Resource Routing). In the TDP, taxicabs are looking for passengers (resources) and may get information about current trip requests from a central entity. However, other cabs might serve the passenger first, or the passenger changes his/her mind. The major difference to the TOP is that after serving a passenger (or collecting a resource), the cab's position changes to the trip's destination. The Resource Routing task ends when a single resource is claimed. Though our proposed solution is generally applicable to any SRC, we focus on the TOP task in this paper.

In previous work, the optimal policy for the TOP was approximated with solvers for a time-varying Travelling Salesman Problem (TSP). Here, we argue that SRC should not be considered as a TSP. Furthermore, the generalization of the TSP, the Vehicle Routing Problem (VRP), has no existing suitable variation fitting the examined problem setting of SRC tasks. The optimal SRC solution is non-deterministic because the transitions of the resources are typically unknown. Thus, the task should be modeled as a Markov Decision Process (MDP). Since the state space increases exponentially with the number of resources (or parking bays), finding an optimal policy using table-based solvers is infeasible. Therefore, we base our solution on Reinforcement Learning with function approximation. To handle non-uniform action durations (travel times), we propose to formulate SRC tasks as discrete-time Semi-Markov Decision Processes (SMDPs).

A challenge when learning an efficient policy for SRCs is to select a temporal abstraction for the action space. An instinctive choice would be to let the agent decide only on the next road segment at each crossing. However, with this abstraction agents start training with random walks, which explore the graph only slowly. Hence, we use a higher temporal abstraction whose action space directly considers traveling to any potentially useful location. We present a novel neural network architecture developed for this abstraction which outperforms standard multi-layer perceptrons (MLPs) by a large margin. Furthermore, our network architecture has a fixed number of parameters for any number of resources. Hence, model sizes scale well with the number of resources, and trained agents are transferable to previously unseen regions. In our experimental setup, we compare our trained agent to the baselines proposed in [Shao et al., 2017] and show that agents perform well when transferred to a new environment using real-world parking data from the city of Melbourne. Hence, our main contributions are: (1) the first formulation of SRC as an SMDP at the graph level; (2) solving it with a higher-level temporal abstraction; (3) a novel, transferable neural network architecture adapted to this problem; and (4) a comparison of our approach to existing baselines in a real-world setting.
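This excerpt does not spell out the network architecture, but the scaling property it claims (a fixed number of parameters for any number of resources, with one action per "travel to resource i") can be illustrated with a small sketch. The following PyTorch snippet is purely illustrative and not the authors' implementation; the class name, feature dimensions, and the shared per-resource scorer are assumptions.

```python
# Hypothetical sketch (not the paper's architecture): a Q-network that scores each
# resource with a shared MLP, so the parameter count is independent of the number
# of resources and the model can be transferred to regions with more (or fewer) spots.
import torch
import torch.nn as nn

class PerResourceQNetwork(nn.Module):
    def __init__(self, agent_dim: int, resource_dim: int, hidden: int = 64):
        super().__init__()
        # Shared weights applied to every (agent state, resource state) pair.
        self.scorer = nn.Sequential(
            nn.Linear(agent_dim + resource_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, agent_state: torch.Tensor, resources: torch.Tensor) -> torch.Tensor:
        # agent_state: (batch, agent_dim); resources: (batch, n_resources, resource_dim)
        n = resources.shape[1]
        agent = agent_state.unsqueeze(1).expand(-1, n, -1)   # (batch, n, agent_dim)
        pairs = torch.cat([agent, resources], dim=-1)        # (batch, n, agent_dim + resource_dim)
        q_values = self.scorer(pairs).squeeze(-1)            # (batch, n): Q(s, "travel to resource i")
        return q_values

# Usage: one Q-value per "travel to resource i" action, for any number of resources.
net = PerResourceQNetwork(agent_dim=4, resource_dim=6)
q_small = net(torch.zeros(1, 4), torch.zeros(1, 17, 6))     # 17 resources
q_large = net(torch.zeros(1, 4), torch.zeros(1, 300, 6))    # 300 resources, same parameters
print(q_small.shape, q_large.shape)                         # torch.Size([1, 17]) torch.Size([1, 300])
```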
2 Related Work

The general task of collecting resources has a broad range of real-world applications. [Shao et al., 2017] studied the Travelling Officer Problem (TOP), where an officer moves within a street graph while maximizing the number of fined parking offenders. Since future development is uncertain, they propose a Greedy and an Ant Colony Optimization (ACO) approach. In their follow-up work, both methods are approximated by imitation learning with an artificial neural network [Shao et al., 2019]. Thus, the policies learned in [Shao et al., 2019] approximate the solutions of [Shao et al., 2017] but do not optimize the expected reward directly. If the future development of resources is deterministic, resource collection is a Vehicle Routing Problem (VRP) with time windows. A survey of VRPs with time windows is given in [Solomon and Desrosiers, 1988]. In VRPs, the agent has to visit pickup and drop-off locations for cargo which are given in advance. In general, it is not acceptable for scheduled locations to be missed; therefore, solutions have to consider whether a route to all locations in the queue is still possible before choosing the next goal. One common subtask of the VRP is the Travelling Salesman Problem (TSP). Recently, advances in solving TSPs have been achieved by using function approximation and reinforcement or supervised learning techniques [Kool et al., 2018; Khalil et al., 2017; Vinyals et al., 2015]. In variations of the VRP with time windows, the agent does not know all customers at the beginning of the day, and over time more and more customers become known [Godfrey and Powell, 2002]. Although this task is dynamic as well, it differs from our setting in that the agent is expected to serve all known customers. In the TDP, collecting a resource (i.e., picking up a passenger) does not only claim it but also moves the agent to the drop-off location. Within the literature, the TDP is usually defined as a multi-agent setting in which the taxicabs either receive no observations [Kim et al., 2019] or share the common goal of distributing the fleet over a spatial grid or zones [Alshamsi et al., 2009; Xu et al., 2018; Lin et al., 2018; Li et al., 2019; Alabbasi et al., 2019; Tang et al., 2019]. The closest TDP approach to our work is described in [Tang et al., 2019]. Although this solution defines the TDP as an SMDP, it still works on a hexagonal grid system and needs SMDPs for the options framework (meta-level actions). In comparison, we model the SMDP such that rewards appear at discrete time steps rather than assuming uniformly distributed rewards during the action execution. Furthermore, we formulate the optimal solution as an SMDP operating directly on the street network.
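To make this timing distinction concrete, here is a small worked comparison in notation of my own (action duration τ in time steps, resource reward r, discount γ), not taken from the paper. If the reward is realized at the discrete completion step, its contribution to the discounted return is

$$\gamma^{\tau}\, r,$$

whereas spreading it uniformly over the execution, as option-based formulations often assume, contributes

$$\sum_{k=1}^{\tau} \gamma^{k}\, \frac{r}{\tau}.$$

For γ < 1 and τ > 1 the two quantities differ, so the two models weight long-lasting actions differently.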
3 Background

A Markov Decision Process (MDP) (S, A, R, T, γ) consists of a set of states S, a set of actions A, a reward function R, and a transition function T, where $T^{a}_{s,s'}$ is the probability that s' is the follow-up state of s after executing action a. The discount factor γ is a value between zero and one defining the optimization horizon. Given an MDP, a deterministic policy π provides an applicable action a ∈ A(s) for each state s ∈ S. A discrete-time, finite-horizon MDP has a pre-defined number of equal-sized discrete time steps t ∈ {0, 1, ..., T} available before an episode ends. It is important to distinguish between the MDP time step t and the time of the system ξ, which may or may not be part of the state s ∈ S. An agent in state s ∈ S at time t tries to find the action a ∈ A that maximizes the expected future discounted reward, also known as the state-value function V. The Bellman equation [Bellman, 1957] defines a system of equations to compute the optimal values V*:

$$V^{*}(s_t) = \max_{a \in A(s_t)} \sum_{s_{t+1} \in S} T^{a}_{s_t, s_{t+1}} \cdot \left( R(s_t, a, s_{t+1}) + \gamma V^{*}(s_{t+1}) \right) \qquad (1)$$

Note that in this class of MDPs, the optimal policy π* may not be stationary, as the horizon left to optimize decreases with increasing t. In other words, the optimal action a for state s might be different for varying t.
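Since the paper builds on this finite-horizon formulation, a compact numerical illustration may help. The sketch below runs backward induction on a small random MDP of my own construction (the sizes and the random transition/reward tables are assumptions, not from the paper); it evaluates Equation (1) once per time step and shows why the resulting greedy policy is indexed by t as well as s.

```python
# Minimal sketch (toy example, not from the paper): finite-horizon value iteration
# via the Bellman equation (1). Because the remaining horizon shrinks as t grows,
# the optimal policy pi*(s, t) depends on t, i.e., it is generally not stationary.
import numpy as np

n_states, n_actions, horizon = 3, 2, 5
gamma = 0.95
rng = np.random.default_rng(0)

# T[a, s, s'] = probability of moving from s to s' under action a.
T = rng.random((n_actions, n_states, n_states))
T /= T.sum(axis=2, keepdims=True)
# R[s, a, s'] = reward for the corresponding transition.
R = rng.random((n_states, n_actions, n_states))

V = np.zeros((horizon + 1, n_states))            # V[t, s], with V[horizon, .] = 0
policy = np.zeros((horizon, n_states), dtype=int)

for t in range(horizon - 1, -1, -1):             # backward induction over time steps
    # Q[s, a] = sum_{s'} T[a, s, s'] * (R[s, a, s'] + gamma * V[t+1, s'])
    Q = np.einsum("asn,san->sa", T, R + gamma * V[t + 1][None, None, :])
    V[t] = Q.max(axis=1)
    policy[t] = Q.argmax(axis=1)

# The optimal action in a given state can change with t (non-stationary policy).
print(policy)
```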
