
Policy Gradient Methods
February 13, 2017

Policy Optimization Problems

$\text{maximize}_\pi \; \mathbb{E}_\pi[\text{expression}]$, where the expression is one of:
- Fixed-horizon episodic: $\sum_{t=0}^{T-1} r_t$
- Average-cost: $\lim_{T \to \infty} \frac{1}{T} \sum_{t=0}^{T-1} r_t$
- Infinite-horizon discounted: $\sum_{t=0}^{\infty} \gamma^t r_t$
- Variable-length undiscounted: $\sum_{t=0}^{T_{\text{terminal}}-1} r_t$
- Infinite-horizon undiscounted: $\sum_{t=0}^{\infty} r_t$

Episodic Setting

$s_0 \sim \mu(s_0)$
$a_0 \sim \pi(a_0 \mid s_0)$
$s_1, r_0 \sim P(s_1, r_0 \mid s_0, a_0)$
$a_1 \sim \pi(a_1 \mid s_1)$
$s_2, r_1 \sim P(s_2, r_1 \mid s_1, a_1)$
$\dots$
$a_{T-1} \sim \pi(a_{T-1} \mid s_{T-1})$
$s_T, r_{T-1} \sim P(s_T, r_{T-1} \mid s_{T-1}, a_{T-1})$

Objective: maximize $\eta(\pi)$, where $\eta(\pi) = \mathbb{E}[r_0 + r_1 + \dots + r_{T-1} \mid \pi]$

Episodic Setting

[Figure: agent-environment loop. The agent $\pi$ emits actions $a_0, \dots, a_{T-1}$; the environment, with initial state distribution $\mu_0$ and dynamics $P$, emits states $s_0, \dots, s_T$ and rewards $r_0, \dots, r_{T-1}$.]

Objective: maximize $\eta(\pi)$, where $\eta(\pi) = \mathbb{E}[r_0 + r_1 + \dots + r_{T-1} \mid \pi]$

Parameterized Policies

- A family of policies indexed by a parameter vector $\theta \in \mathbb{R}^d$
- Deterministic: $a = \pi(s, \theta)$
- Stochastic: $\pi(a \mid s, \theta)$
- Analogous to classification or regression with input $s$ and output $a$:
  - Discrete action space: the network outputs a vector of probabilities
  - Continuous action space: the network outputs the mean and diagonal covariance of a Gaussian

Policy Gradient Methods: Overview

Problem: maximize $\mathbb{E}[R \mid \pi_\theta]$

Intuitions: collect a bunch of trajectories, and ...
1. Make the good trajectories more probable [1]
2. Make the good actions more probable
3. Push the actions towards good actions (DPG [2], SVG [3])

[1] R. J. Williams. "Simple statistical gradient-following algorithms for connectionist reinforcement learning". Machine Learning (1992); R. S. Sutton, D. McAllester, S. Singh, and Y. Mansour. "Policy gradient methods for reinforcement learning with function approximation". NIPS. MIT Press, 2000.
[2] D. Silver, G. Lever, N. Heess, T. Degris, D. Wierstra, et al. "Deterministic Policy Gradient Algorithms". ICML. 2014.
[3] N. Heess, G. Wayne, D. Silver, T. Lillicrap, Y. Tassa, et al. "Learning Continuous Control Policies by Stochastic Value Gradients". arXiv preprint arXiv:1510.09142 (2015).

Score Function Gradient Estimator

- Consider an expectation $\mathbb{E}_{x \sim p(x \mid \theta)}[f(x)]$. We want to compute its gradient with respect to $\theta$:

$\nabla_\theta \mathbb{E}_x[f(x)] = \nabla_\theta \int dx \; p(x \mid \theta) f(x)$
$\quad = \int dx \; \nabla_\theta p(x \mid \theta) f(x)$
$\quad = \int dx \; p(x \mid \theta) \frac{\nabla_\theta p(x \mid \theta)}{p(x \mid \theta)} f(x)$
$\quad = \int dx \; p(x \mid \theta) \nabla_\theta \log p(x \mid \theta) f(x)$
$\quad = \mathbb{E}_x[f(x) \nabla_\theta \log p(x \mid \theta)]$

- The last expression gives us an unbiased gradient estimator: just sample $x_i \sim p(x \mid \theta)$ and compute $\hat{g}_i = f(x_i) \nabla_\theta \log p(x_i \mid \theta)$.
- We need to be able to compute and differentiate the density $p(x \mid \theta)$ with respect to $\theta$.

Derivation via Importance Sampling

Alternative derivation using importance sampling [4]:

$\mathbb{E}_{x \sim \theta}[f(x)] = \mathbb{E}_{x \sim \theta_{\text{old}}}\!\left[\frac{p(x \mid \theta)}{p(x \mid \theta_{\text{old}})} f(x)\right]$
$\nabla_\theta \mathbb{E}_{x \sim \theta}[f(x)] = \mathbb{E}_{x \sim \theta_{\text{old}}}\!\left[\frac{\nabla_\theta p(x \mid \theta)}{p(x \mid \theta_{\text{old}})} f(x)\right]$
$\nabla_\theta \mathbb{E}_{x \sim \theta}[f(x)]\big|_{\theta = \theta_{\text{old}}} = \mathbb{E}_{x \sim \theta_{\text{old}}}\!\left[\frac{\nabla_\theta p(x \mid \theta)\big|_{\theta = \theta_{\text{old}}}}{p(x \mid \theta_{\text{old}})} f(x)\right]$
$\quad = \mathbb{E}_{x \sim \theta_{\text{old}}}\!\left[\nabla_\theta \log p(x \mid \theta)\big|_{\theta = \theta_{\text{old}}} f(x)\right]$

[4] T. Jie and P. Abbeel. "On a connection between importance sampling and the likelihood ratio policy gradient". Advances in Neural Information Processing Systems. 2010, pp. 1000-1008.
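The score function estimator is easy to sanity-check numerically. Below is a minimal sketch (an added illustration, not from the slides): for $x \sim \mathcal{N}(\theta, 1)$ we have $\nabla_\theta \log p(x \mid \theta) = x - \theta$, so the estimator is the sample mean of $f(x)(x - \theta)$, compared against a finite-difference estimate of the same derivative. The choice of $f$ (a step function, chosen to stress that differentiability of $f$ is not required) and all constants are arbitrary.

```python
# Numerical check of the score function (likelihood ratio) gradient estimator.
import numpy as np

rng = np.random.default_rng(0)
theta = 1.5
f = lambda x: (x > 2.0).astype(float)       # deliberately discontinuous f

# Score function estimate of d/dtheta E[f(x)] with x ~ N(theta, 1):
# grad_theta log p(x | theta) = x - theta for a unit-variance Gaussian.
x = rng.normal(theta, 1.0, size=1_000_000)
g_score = np.mean(f(x) * (x - theta))

# Crude finite-difference reference on the expectation itself.
eps = 0.1
g_fd = (np.mean(f(rng.normal(theta + eps, 1.0, size=1_000_000)))
        - np.mean(f(rng.normal(theta - eps, 1.0, size=1_000_000)))) / (2 * eps)

# Both approximate the exact value, the N(theta, 1) density at 2 (about 0.352).
print(g_score, g_fd)
```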
Score Function Gradient Estimator: Intuition

$\hat{g}_i = f(x_i) \nabla_\theta \log p(x_i \mid \theta)$

- Let's say that $f(x)$ measures how good the sample $x$ is.
- Moving in the direction $\hat{g}_i$ pushes up the log-probability of the sample, in proportion to how good it is.
- Valid even if $f(x)$ is discontinuous or unknown, or the sample space (containing $x$) is a discrete set.

[Two figure slides, also titled "Score Function Gradient Estimator: Intuition", repeat the formula $\hat{g}_i = f(x_i) \nabla_\theta \log p(x_i \mid \theta)$ alongside pictorial intuition.]

Score Function Gradient Estimator for Policies

- Now the random variable $x$ is a whole trajectory $\tau = (s_0, a_0, r_0, s_1, a_1, r_1, \dots, s_{T-1}, a_{T-1}, r_{T-1}, s_T)$:

$\nabla_\theta \mathbb{E}_\tau[R(\tau)] = \mathbb{E}_\tau[\nabla_\theta \log p(\tau \mid \theta) R(\tau)]$

- We just need to write out $p(\tau \mid \theta)$:

$p(\tau \mid \theta) = \mu(s_0) \prod_{t=0}^{T-1} \left[\pi(a_t \mid s_t, \theta) P(s_{t+1}, r_t \mid s_t, a_t)\right]$
$\log p(\tau \mid \theta) = \log \mu(s_0) + \sum_{t=0}^{T-1} \left[\log \pi(a_t \mid s_t, \theta) + \log P(s_{t+1}, r_t \mid s_t, a_t)\right]$
$\nabla_\theta \log p(\tau \mid \theta) = \sum_{t=0}^{T-1} \nabla_\theta \log \pi(a_t \mid s_t, \theta)$
$\nabla_\theta \mathbb{E}_\tau[R] = \mathbb{E}_\tau\!\left[R \sum_{t=0}^{T-1} \nabla_\theta \log \pi(a_t \mid s_t, \theta)\right]$

- Interpretation: use good trajectories (high $R$) as supervised examples, as in classification / regression.

Policy Gradient: Use Temporal Structure

- Previous slide:

$\nabla_\theta \mathbb{E}_\tau[R] = \mathbb{E}_\tau\!\left[\left(\sum_{t=0}^{T-1} r_t\right)\left(\sum_{t=0}^{T-1} \nabla_\theta \log \pi(a_t \mid s_t, \theta)\right)\right]$

- We can repeat the same argument to derive the gradient estimator for a single reward term $r_{t'}$:

$\nabla_\theta \mathbb{E}[r_{t'}] = \mathbb{E}\!\left[r_{t'} \sum_{t=0}^{t'} \nabla_\theta \log \pi(a_t \mid s_t, \theta)\right]$

- Summing this formula over $t'$, we obtain

$\nabla_\theta \mathbb{E}[R] = \mathbb{E}\!\left[\sum_{t'=0}^{T-1} r_{t'} \sum_{t=0}^{t'} \nabla_\theta \log \pi(a_t \mid s_t, \theta)\right] = \mathbb{E}\!\left[\sum_{t=0}^{T-1} \nabla_\theta \log \pi(a_t \mid s_t, \theta) \sum_{t'=t}^{T-1} r_{t'}\right]$

Policy Gradient: Introduce Baseline

- Further reduce variance by introducing a baseline $b(s)$:

$\nabla_\theta \mathbb{E}_\tau[R] = \mathbb{E}_\tau\!\left[\sum_{t=0}^{T-1} \nabla_\theta \log \pi(a_t \mid s_t, \theta)\left(\sum_{t'=t}^{T-1} r_{t'} - b(s_t)\right)\right]$

- For any choice of $b$, the gradient estimator is unbiased.
- A near-optimal choice is the expected return, $b(s_t) \approx \mathbb{E}[r_t + r_{t+1} + r_{t+2} + \dots + r_{T-1}]$.
- Interpretation: increase the log-probability of action $a_t$ proportionally to how much the return $\sum_{t'=t}^{T-1} r_{t'}$ is better than expected.

Baseline: Derivation

$\mathbb{E}_\tau[\nabla_\theta \log \pi(a_t \mid s_t, \theta) b(s_t)]$
$\quad = \mathbb{E}_{s_{0:t}, a_{0:(t-1)}}\!\left[\mathbb{E}_{s_{(t+1):T}, a_{t:(T-1)}}[\nabla_\theta \log \pi(a_t \mid s_t, \theta) b(s_t)]\right]$  (break up the expectation)
$\quad = \mathbb{E}_{s_{0:t}, a_{0:(t-1)}}\!\left[b(s_t) \, \mathbb{E}_{s_{(t+1):T}, a_{t:(T-1)}}[\nabla_\theta \log \pi(a_t \mid s_t, \theta)]\right]$  (pull the baseline term out)
$\quad = \mathbb{E}_{s_{0:t}, a_{0:(t-1)}}\!\left[b(s_t) \, \mathbb{E}_{a_t}[\nabla_\theta \log \pi(a_t \mid s_t, \theta)]\right]$  (remove irrelevant variables)
$\quad = \mathbb{E}_{s_{0:t}, a_{0:(t-1)}}[b(s_t) \cdot 0]$

The last equality holds because
$0 = \nabla_\theta \mathbb{E}_{a_t \sim \pi(\cdot \mid s_t)}[1] = \mathbb{E}_{a_t \sim \pi(\cdot \mid s_t)}[\nabla_\theta \log \pi_\theta(a_t \mid s_t)]$.

Discounts for Variance Reduction

- Introduce a discount factor $\gamma$, which ignores delayed effects between actions and rewards:

$\nabla_\theta \mathbb{E}_\tau[R] \approx \mathbb{E}_\tau\!\left[\sum_{t=0}^{T-1} \nabla_\theta \log \pi(a_t \mid s_t, \theta)\left(\sum_{t'=t}^{T-1} \gamma^{t'-t} r_{t'} - b(s_t)\right)\right]$

- Now we want $b(s_t) \approx \mathbb{E}[r_t + \gamma r_{t+1} + \gamma^2 r_{t+2} + \dots + \gamma^{T-1-t} r_{T-1}]$.

"Vanilla" Policy Gradient Algorithm

Initialize policy parameter $\theta$ and baseline $b$
for iteration = 1, 2, ... do
    Collect a set of trajectories by executing the current policy
    At each timestep in each trajectory, compute
        the return $R_t = \sum_{t'=t}^{T-1} \gamma^{t'-t} r_{t'}$, and
        the advantage estimate $\hat{A}_t = R_t - b(s_t)$
    Re-fit the baseline by minimizing $\|b(s_t) - R_t\|^2$, summed over all trajectories and timesteps
    Update the policy using a policy gradient estimate $\hat{g}$, which is a sum of terms $\nabla_\theta \log \pi(a_t \mid s_t, \theta) \hat{A}_t$
    (Plug $\hat{g}$ into SGD or Adam)
end for
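To make the loop above concrete, here is a minimal NumPy sketch of one iteration of the vanilla policy gradient algorithm for a linear softmax policy with a linear baseline. The `env` object (with `reset()` and `step(a)` returning `(next_state, reward, done)`), the linear parameterization, and all hyperparameters are illustrative assumptions, not part of the slides.

```python
# One iteration of the "vanilla" policy gradient loop (sketch, NumPy only).
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def vanilla_pg_iteration(env, theta, w, gamma=0.99, n_episodes=20, lr=1e-2):
    """theta: (state_dim, n_actions) linear softmax policy parameters.
    w: (state_dim,) linear baseline parameters, b(s) = w @ s."""
    grad = np.zeros_like(theta)   # accumulates grad log pi(a_t | s_t) * A_t
    states, returns = [], []      # data for re-fitting the baseline
    for _ in range(n_episodes):
        # Collect a trajectory with the current policy
        s, done, traj = env.reset(), False, []
        while not done:
            probs = softmax(theta.T @ s)            # pi(. | s, theta)
            a = np.random.choice(len(probs), p=probs)
            s_next, r, done = env.step(a)           # assumed env interface
            traj.append((s, a, r, probs))
            s = s_next
        # Discounted returns-to-go R_t and advantages A_t = R_t - b(s_t)
        R = 0.0
        for s_t, a_t, r_t, probs in reversed(traj):
            R = r_t + gamma * R
            A = R - w @ s_t
            # grad log pi(a_t | s_t) for a linear softmax policy
            g_logp = np.outer(s_t, -probs)
            g_logp[:, a_t] += s_t
            grad += g_logp * A
            states.append(s_t)
            returns.append(R)
    # Re-fit the baseline: least squares on ||b(s_t) - R_t||^2
    w = np.linalg.lstsq(np.array(states), np.array(returns), rcond=None)[0]
    # Gradient ascent step (plain SGD; Adam would also work here)
    theta = theta + lr * grad / n_episodes
    return theta, w
```

In practice one would let an autodiff framework compute the gradient from a surrogate objective instead of writing $\nabla_\theta \log \pi$ by hand, which is exactly the point of the next slide.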
Practical Implementation with Autodiff

- The usual formula $\sum_t \nabla_\theta \log \pi(a_t \mid s_t, \theta) \hat{A}_t$ is inefficient; we want to batch the data.
- Define a "surrogate" function using the data from the current batch:

$L(\theta) = \sum_t \log \pi(a_t \mid s_t, \theta) \hat{A}_t$

- Then the policy gradient estimator is $\hat{g} = \nabla_\theta L(\theta)$.
- We can also include the value function fit error:

$L(\theta) = \sum_t \left(\log \pi(a_t \mid s_t, \theta) \hat{A}_t - \|V(s_t) - \hat{R}_t\|^2\right)$

Value Functions

$Q^{\pi,\gamma}(s, a) = \mathbb{E}_\pi[r_0 + \gamma r_1 + \gamma^2 r_2 + \dots \mid s_0 = s, a_0 = a]$
called the Q-function or state-action value function

$V^{\pi,\gamma}(s) = \mathbb{E}_\pi[r_0 + \gamma r_1 + \gamma^2 r_2 + \dots \mid s_0 = s] = \mathbb{E}_{a \sim \pi}[Q^{\pi,\gamma}(s, a)]$
called the state-value function

$A^{\pi,\gamma}(s, a) = Q^{\pi,\gamma}(s, a) - V^{\pi,\gamma}(s)$
called the advantage function

Policy Gradient Formulas with Value Functions

- Recall:

$\nabla_\theta \mathbb{E}_\tau[R] = \mathbb{E}_\tau\!\left[\sum_{t=0}^{T-1} \nabla_\theta \log \pi(a_t \mid s_t, \theta)\left(\sum_{t'=t}^{T-1} r_{t'} - b(s_t)\right)\right]$
$\quad \approx \mathbb{E}_\tau\!\left[\sum_{t=0}^{T-1} \nabla_\theta \log \pi(a_t \mid s_t, \theta)\left(\sum_{t'=t}^{T-1} \gamma^{t'-t} r_{t'} - b(s_t)\right)\right]$

- Using value functions:

$\nabla_\theta \mathbb{E}_\tau[R] = \mathbb{E}_\tau\!\left[\sum_{t=0}^{T-1} \nabla_\theta \log \pi(a_t \mid s_t, \theta) \, Q^{\pi}(s_t, a_t)\right]$
$\quad = \mathbb{E}_\tau\!\left[\sum_{t=0}^{T-1} \nabla_\theta \log \pi(a_t \mid s_t, \theta) \, A^{\pi}(s_t, a_t)\right]$
$\quad \approx \mathbb{E}_\tau\!\left[\sum_{t=0}^{T-1} \nabla_\theta \log \pi(a_t \mid s_t, \theta) \, A^{\pi,\gamma}(s_t, a_t)\right]$

- We can plug in an "advantage estimator" $\hat{A}$ for $A^{\pi,\gamma}$.
- Advantage estimators have the form $\text{Return} - V(s)$.

Value Functions in the Future

- The baseline accounts for and removes the effect of past actions.
- We can also use the value function to estimate future rewards:

$\hat{R}_t^{(1)} = r_t + \gamma V(s_{t+1})$  (cut off at one timestep)
$\hat{R}_t^{(2)} = r_t + \gamma r_{t+1} + \gamma^2 V(s_{t+2})$  (cut off at two timesteps)
$\dots$
$\hat{R}_t^{(\infty)} = r_t + \gamma r_{t+1} + \gamma^2 r_{t+2} + \dots$  ($\infty$ timesteps, no $V$)

Value Functions in the Future

- Subtracting out baselines, we get advantage estimators:

$\hat{A}_t^{(1)} = r_t + \gamma V(s_{t+1}) - V(s_t)$
$\hat{A}_t^{(2)} = r_t + \gamma r_{t+1} + \gamma^2 V(s_{t+2}) - V(s_t)$
$\dots$
$\hat{A}_t^{(\infty)} = r_t + \gamma r_{t+1} + \gamma^2 r_{t+2} + \dots - V(s_t)$

- $\hat{A}_t^{(1)}$ has low variance but high bias; $\hat{A}_t^{(\infty)}$ has high variance but low bias.
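As a concrete illustration of these estimators, here is a small sketch (an added example, not from the slides) that computes the $k$-step advantage estimates from a rollout of rewards and value predictions: $k = 1$ gives $\hat{A}^{(1)}_t$, and $k \ge T$ recovers the Monte Carlo estimator $\hat{A}^{(\infty)}_t$ with a baseline. The function name and the toy numbers are made up for illustration.

```python
# k-step advantage estimates A_hat^(k)_t = r_t + ... + gamma^(k-1) r_{t+k-1}
#                                          + gamma^k V(s_{t+k}) - V(s_t)
import numpy as np

def k_step_advantages(rewards, values, gamma, k):
    """rewards: shape (T,); values: shape (T+1,), with values[T] = V(s_T)
    (typically 0 at a terminal state)."""
    T = len(rewards)
    adv = np.zeros(T)
    for t in range(T):
        end = min(t + k, T)                       # truncate at end of rollout
        ret = sum(gamma ** (i - t) * rewards[i] for i in range(t, end))
        ret += gamma ** (end - t) * values[end]   # bootstrap with V at the cutoff
        adv[t] = ret - values[t]                  # subtract the baseline V(s_t)
    return adv

# Small k leans on V (low variance, high bias); k >= T is the Monte Carlo
# return minus baseline (high variance, low bias).
rewards = np.array([1.0, 0.0, 2.0, 1.0])
values = np.array([1.5, 1.2, 2.0, 0.8, 0.0])      # made-up V(s_t) predictions
print(k_step_advantages(rewards, values, gamma=0.99, k=1))
print(k_step_advantages(rewards, values, gamma=0.99, k=4))
```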