Markov Decision Processes


Feb 23 & 25, 2021
Cathy Wu
6.246: Reinforcement Learning: Foundations and Methods

Outline
1 Markov Decision Processes
2 Dynamic Programming for MDPs
3 Bellman equations and operators
4 Value iteration algorithm
5 Policy iteration algorithm

Why stochastic problems?
§ Stochastic environment:
• Uncertainty in rewards (e.g., multi-armed bandits, contextual bandits)
• Uncertainty in dynamics, i.e., in the transition (s_t, a_t) → s_{t+1}
• Uncertainty in horizon (called stochastic shortest path)
§ Stochastic policies (technical reasons):
• Trade off exploration and exploitation
• Enable off-policy learning
• Are compatible with maximum likelihood estimation (MLE)
Dynamic programming in the deterministic setting is insufficient.

The Amazing Goods Company (Supply Chain)
Description. At each month t, a warehouse contains x_t items of a specific good, and the demand for that good is D (stochastic). At the end of each month, the manager of the warehouse can order a_t more items from the supplier.
§ The cost of maintaining an inventory of x is h(x).
§ The cost to order a items is C(a).
§ The income for selling q items is f(q).
§ If the demand d ~ D is bigger than the available inventory x, customers that cannot be served leave.
§ The value of the remaining inventory at the end of the year is g(x).
§ Constraint: the store has a maximum capacity M.

Markov Decision Process
Definition (Markov decision process)
A Markov decision process (MDP) is defined as a tuple M = (S, A, P, R, γ) where
§ S is the state space (often simplified to finite),
§ A is the action space,
§ P(s' | s, a) is the transition probability, with P(s' | s, a) = ℙ(s_{t+1} = s' | s_t = s, a_t = a),
§ R(s, a, s') is the immediate reward at state s upon taking action a (sometimes simply R(s)),
§ γ ∈ [0, 1) is the discount factor.

Example: The Amazing Goods Company
§ State space: x ∈ S = {0, 1, …, M}.
§ Action space: it is not possible to order more items than the capacity of the store, so the action space depends on the current state. Formally, at state x, a ∈ A(x) = {0, 1, …, M − x}.
§ Dynamics: x_{t+1} = max(x_t + a_t − d_t, 0). The demand d_t is stochastic and time-independent; formally, d_t ~ D (i.i.d.).
§ Reward: r_t = −C(a_t) − h(x_t + a_t) + f(x_t + a_t − x_{t+1}). This corresponds to a purchasing cost, a cost for excess stock (storage, maintenance), and a reward for fulfilling orders.
§ Discount: γ = 0.95. A dollar today is worth more than a dollar tomorrow.
§ Objective: V(x_0; a_0, a_1, …) = Σ_{t ≥ 0} γ^t r_t. This corresponds to the cumulative discounted reward, plus the value of the remaining inventory.
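To make the example concrete, here is a minimal Python sketch of one simulated year of the inventory MDP. The capacity M = 20, the linear cost/income functions h, C, f, the Poisson demand, and the naive order-up-to-capacity policy are illustrative assumptions, not values from the lecture.

```python
import numpy as np

# A minimal sketch of the Amazing Goods inventory MDP.
# Capacity, cost/income functions, and the Poisson demand below are
# illustrative assumptions, not values from the lecture.
M = 20           # maximum warehouse capacity
GAMMA = 0.95     # discount factor

rng = np.random.default_rng(0)

def h(x):        # cost of maintaining an inventory of x items (assumed linear)
    return 0.5 * x

def C(a):        # cost of ordering a items (assumed linear)
    return 2.0 * a

def f(q):        # income for selling q items (assumed linear)
    return 4.0 * q

def demand():    # stochastic, time-independent demand d_t ~ D (assumed Poisson)
    return rng.poisson(5)

def actions(x):
    """Admissible orders at state x: cannot exceed the remaining capacity M - x."""
    return range(M - x + 1)

def step(x, a):
    """One month of the inventory MDP: returns (next state, reward)."""
    d = demand()
    x_next = max(x + a - d, 0)        # customers that cannot be served leave
    sold = x + a - x_next             # items actually sold this month
    r = -C(a) - h(x + a) + f(sold)    # purchase cost, holding cost, sales income
    return x_next, r

# Roll out one year under a naive "order up to capacity" policy and
# accumulate the discounted objective sum_t gamma^t r_t.
x, ret = 0, 0.0
for t in range(12):
    a = max(actions(x))               # order up to full capacity M
    x, r = step(x, a)
    ret += GAMMA**t * r
print(f"Discounted return of one sampled year: {ret:.2f}")
```

The rollout accumulates the discounted objective Σ_t γ^t r_t from the example above; a better ordering policy is exactly what the dynamic programming methods later in the lecture compute.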
F In general, a non-Markovian decision process's transitions could depend on much more information:
ℙ(s_{t+1} = s' | s_t = s, a_t = a, s_{t−1}, a_{t−1}, …, s_0, a_0).
F The process generates trajectories h_t = (s_0, a_0, …, s_{t−1}, a_{t−1}, s_t), with s_{t+1} ~ P(· | s_t, a_t).

Freeway Atari game (David Crane, 1981)
FREEWAY is an Atari 2600 video game, released in 1981. In FREEWAY, the agent must navigate a chicken (think: jaywalker) across a busy road of ten lanes of incoming traffic. The top of the screen lists the score. After a successful crossing, the chicken is teleported back to the bottom of the screen. If hit by a car, the chicken is forced back either slightly or pushed back to the bottom of the screen, depending on which difficulty the switch is set to. One or two players can play at once.
Related applications:
• Self-driving cars (input from LIDAR, radar, cameras)
• Traffic signal control (input from cameras)
• Crowd navigation robots
Figure: Atari 2600 video game FREEWAY.

ATARI Breakout: non-Markov dynamics
[Figure: successive ATARI Breakout frames.]
An MDP satisfies the Markov property if
ℙ(s_{t+1} = s' | h_t, a_t) = ℙ(s_{t+1} = s' | s_t, a_t, s_{t−1}, a_{t−1}, …, s_0, a_0) = ℙ(s_{t+1} = s' | s_t, a_t),
i.e., the current state s_t and action a_t are sufficient for predicting the next state s'.
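As a small illustration of these definitions, the following sketch generates a trajectory h_T = (s_0, a_0, …, s_T) by sampling s_{t+1} ~ P(· | s_t, a_t). The two-state, two-action transition table and the uniformly random action choice are made up purely for illustration.

```python
import numpy as np

# A minimal sketch of trajectory generation in a finite MDP.
# The two-state, two-action transition table and the uniformly random
# action choice below are made up purely for illustration.
rng = np.random.default_rng(1)

n_states, n_actions = 2, 2
# P[s, a, s'] = P(s' | s, a); each distribution over s' sums to 1.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])

def sample_trajectory(s0, T):
    """Generate h_T = (s_0, a_0, ..., s_{T-1}, a_{T-1}, s_T).

    The next state is drawn from P(. | s_t, a_t) only: by the Markov
    property, the earlier history is never consulted.
    """
    h = [s0]
    s = s0
    for _ in range(T):
        a = int(rng.integers(n_actions))               # a_t chosen uniformly at random
        s_next = int(rng.choice(n_states, p=P[s, a]))  # s_{t+1} ~ P(. | s_t, a_t)
        h.extend([a, s_next])
        s = s_next
    return h

print(sample_trajectory(s0=0, T=5))
```

Note that the sampler never looks further back than (s_t, a_t); that is exactly the Markov property stated above. For a single Breakout frame this would fail, since one frame does not determine the next.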
Markov Decision Process: the Assumptions
Time assumption: time is discrete, t → t + 1.
Possible relaxations
§ Identify the proper time granularity
§ Most of the MDP literature extends to continuous time
[Figures: ATARI Breakout at a too fine-grained time resolution and at a too coarse-grained time resolution.]

Markov Decision Process: the Assumptions
Reward assumption: the reward is uniquely defined by a transition (or part of it), r(s, a, s').
Possible relaxations
§ Distinguish between the global goal and the reward function
§ Move to inverse reinforcement learning (IRL) to induce the reward function from desired behaviors

Markov Decision Process: the Assumptions
Stationarity assumption: the dynamics and reward do not change over time,
P(s' | s, a) = ℙ(s_{t+1} = s' | s_t = s, a_t = a),   r(s, a, s').
Possible relaxations
§ Identify and add/remove the non-stationary components (e.g., cyclo-stationary dynamics)
§ Identify the time-scale of the changes
§ Work on finite-horizon problems

Markov Decision Process
With the MDP M = (S, A, P, R, γ) defined as above, two ingredients are still missing:
§ How actions are selected → policy.
§ What determines which actions (and states) are good → value function.

Policy
Definition (Policy)
A decision rule d can be
§ Deterministic: d : S → A,
§ Stochastic: d : S → Δ(A),
§ History-dependent: d : H_t → A,
§ Markov: d : S → Δ(A).
A policy (strategy, plan) can be
§ Non-stationary: π = (d_0, d_1, d_2, …),
§ Stationary: π = (d, d, d, …).
F MDP M + stationary policy π = (d, d, …) ⟹ a Markov chain over the state space S with transition probability p(s' | s) = P(s' | s, d(s)); a concrete sketch for the inventory example is given after the recall below.
F For simplicity, we will typically write π instead of d for stationary policies, and π_t instead of d_t for non-stationary policies.

Recall: The Amazing Goods Company Example
Description. At each month t, a warehouse contains x_t items of a specific good with stochastic demand D, and at the end of each month the manager can order a_t more items from the supplier; the maintenance cost h(x), ordering cost C(a), sales income f(q), terminal inventory value g(x), and maximum capacity M are as described above.
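Below is a minimal sketch of the remark that an MDP plus a stationary policy induces a Markov chain, instantiated on the recalled inventory example. The capacity, the Poisson demand, and the base-stock decision rule are illustrative assumptions, not values from the lecture.

```python
import numpy as np
from scipy.stats import poisson

# A minimal sketch: fixing a stationary deterministic policy turns the
# inventory MDP into a Markov chain over the states {0, ..., M}.
# Capacity, Poisson demand, and base-stock policy are illustrative assumptions.
M = 20             # maximum warehouse capacity
MEAN_DEMAND = 5.0  # assumed mean of the Poisson demand

def transition_row(x, a):
    """P(x' | x, a) when x_{t+1} = max(x + a - d, 0) and d ~ Poisson(MEAN_DEMAND)."""
    row = np.zeros(M + 1)
    stock = x + a                                            # items on the shelf this month
    for x_next in range(1, stock + 1):
        row[x_next] = poisson.pmf(stock - x_next, MEAN_DEMAND)  # exactly stock - x' items sold
    row[0] = 1.0 - row[1:].sum()                             # demand >= stock empties the shelf
    return row

def d(x):
    """Stationary deterministic decision rule: order up to a base-stock level of 10."""
    return max(10 - x, 0)

# Induced Markov chain: P_pi[x, x'] = P(x' | x, d(x)).
P_pi = np.array([transition_row(x, d(x)) for x in range(M + 1)])
assert np.allclose(P_pi.sum(axis=1), 1.0)
print(P_pi.shape)   # (M + 1, M + 1) transition matrix of the induced chain
```

Each row of P_pi is the distribution of next month's inventory given this month's inventory, with the order quantity fixed by the decision rule, which is exactly the transition probability p(x' | x) = P(x' | x, d(x)) of the induced chain.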
