NONLINEAR PROGRAMMING IN APPROXIMATE DYNAMIC PROGRAMMING
Bang-bang Solutions, Stock-management and Unsmooth Penalties

Olivier Teytaud and Sylvain Gelly
TAO (Inria, Univ. Paris-Sud, UMR CNRS-8623)

Keywords: Evolutionary computation and control, Optimization algorithms.

Abstract: Many stochastic dynamic programming tasks in continuous action-spaces are tackled through discretization. We here avoid discretization; approximate dynamic programming (ADP) then involves (i) many learning tasks, performed here by Support Vector Machines, for Bellman-function regression, and (ii) many non-linear optimization tasks for action selection, for which we compare many algorithms. We include discretizations of the domain as particular non-linear programming tools in our experiments, so that we also compare optimization approaches against discretization methods. We conclude that robustness is strongly required in the non-linear optimizations in ADP, and experimental results show that (i) discretization is sometimes inefficient, but one specific discretization is very efficient for "bang-bang" problems, (ii) simple evolutionary tools outperform quasi-random search in a stable manner, (iii) gradient-based techniques are much less stable, and (iv) for most high-dimensional "less unsmooth" problems Covariance-Matrix-Adaptation is ranked first.

1 NON-LINEAR OPTIMIZATION IN STOCHASTIC DYNAMIC PROGRAMMING (SDP)

Some of the most traditional fields of stochastic dynamic programming, e.g. energy stock-management, which have a strong economic impact, have not been studied thoroughly in the reinforcement-learning or approximate-dynamic-programming (ADP) community. This is detrimental to reinforcement learning, as it has been pointed out that there are not yet many industrial realizations of reinforcement learning. Energy stock-management leads to continuous problems that are usually handled by traditional linear approaches, in which (i) convex value functions are approximated by linear cuts (leading to piecewise linear approximations, PWLA) and (ii) decisions are solutions of a linear problem. However, this approach does not work in large dimension, due to the curse of dimensionality, which strongly affects PWLA. These problems should be handled by other learning tools. In that case, however, the action selection, which minimizes the expected cost-to-go, can no longer be done by linear programming, as the Bellman function is no longer a convex PWLA.

The action selection is therefore a nonlinear programming problem. There are not many works dealing with continuous actions, and they often do not study the non-linear optimization step involved in action selection. In this paper we focus on this part: we compare many non-linear optimization tools, and we also compare these tools to discretization techniques in order to quantify the importance of the action-selection step.

We here roughly introduce stochastic dynamic programming; the interested reader is referred to (Bertsekas and Tsitsiklis, 1996) for more details. Consider a dynamical system that stochastically evolves in time depending upon your decisions. Assume that time is discrete and has finitely many time steps, and that the total cost of your decisions is the sum of instantaneous costs. Precisely:

    cost = c_1 + c_2 + ... + c_T
    c_i = c(x_i, d_i),   x_i = f(x_{i-1}, d_{i-1}, ω_i)
    d_{i-1} = strategy(x_{i-1}, ω_i)

where x_i is the state at time step i, the ω_i are a random process, cost is to be minimized, and strategy is the decision function that has to be optimized. We are interested in a control problem: the element to be optimized is a function.
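As an illustration only, the following sketch simulates one trajectory of such a system and accumulates its cost. The ingredients f, c, noise, strategy and x0 are hypothetical problem-specific callables and data (they do not come from the paper), and for simplicity the instantaneous cost is charged to the decision that produced the current state.

```python
import random

def total_cost(strategy, f, c, noise, x0, T, seed=0):
    """Simulate one trajectory and return cost = c_1 + c_2 + ... + c_T."""
    rng = random.Random(seed)
    x, cost = x0, 0.0
    for i in range(1, T + 1):
        omega = noise(rng)        # one draw of the random process, omega_i
        d = strategy(x, omega)    # d_{i-1} = strategy(x_{i-1}, omega_i)
        x = f(x, d, omega)        # x_i = f(x_{i-1}, d_{i-1}, omega_i)
        cost += c(x, d)           # instantaneous cost at step i
    return cost
```

In practice it is the expectation of this quantity over the random process, estimated for instance by averaging many such simulated trajectories, that the strategy has to minimize.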
Stochastic dynamic programming, a tool to solve this control problem, is based on Bellman's optimality principle, which can be informally stated as follows: "Take the decision at time step t such that the sum of the cost at time step t due to your decision plus the expected cost from time step t+1 to ∞ is minimal."

Bellman's optimality principle states that this strategy is optimal. Unfortunately, it can only be applied if the expected cost from time step t+1 to ∞ can be guessed, depending on the current state of the system and the decision. Bellman's optimality principle reduces the control problem to the computation of this function. If x_t can be computed from x_{t-1} and d_{t-1} (i.e., if f is known), then the control problem is reduced to the computation of a function

    V(t, x_t) = E[c(x_t, d_t) + c(x_{t+1}, d_{t+1}) + ... + c(x_T, d_T)].

Note that this function depends on the strategy (we omit, for short, dependencies on the random process). We consider this expectation for any optimal strategy (even if many strategies are optimal, V is uniquely determined, as it is the same for any optimal strategy).

Stochastic dynamic programming is the computation of V backwards in time, thanks to the following equation:

    V(t, x_t) = inf_{d_t} c(x_t, d_t) + E V(t+1, x_{t+1})

or equivalently

    V(t, x_t) = inf_{d_t} c(x_t, d_t) + E V(t+1, f(x_t, d_t))    (1)

For each t, V(t, x_t) is computed for many values of x_t, and then a learning algorithm (here support vector machines) is applied to build x ↦ V(t, x) from these examples. Thanks to Bellman's optimality principle, the computation of V is sufficient to define an optimal strategy. This is a well-known, robust solution, applied in many areas including power supply management. A general introduction, including learning, is (Bertsekas, 1995; Bertsekas and Tsitsiklis, 1996). Combined with learning, it can lead to positive results in spite of large dimensions. Many developments, including RTDP and the field of reinforcement learning, can be found in (Sutton and Barto, 1998).
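To make the backward pass concrete, here is a minimal sketch of this procedure. It assumes hypothetical problem-specific callables f, c, sample_states, sample_decisions and sample_noise (none of them come from the paper; states and decisions are assumed to be feature vectors), uses scikit-learn's SVR as a stand-in support-vector regressor of the Bellman function, and approximates the expectation in Equation (1) by averaging over sampled noise. The inner search over decisions is plain random search; any of the non-linear optimizers compared in this paper could be plugged in at that point.

```python
import numpy as np
from sklearn.svm import SVR

def adp_backward(f, c, sample_states, sample_decisions, sample_noise,
                 T, n_states=200, n_decisions=64, n_noise=20):
    """Backward approximate dynamic programming based on Equation (1)."""
    V_models = [None] * (T + 1)   # V_models[t] regresses x -> V(t, x)

    def V(t, x):
        # Terminal / not-yet-fitted values default to 0.
        if t > T or V_models[t] is None:
            return 0.0
        return float(V_models[t].predict([x])[0])

    for t in range(T, 0, -1):                        # backwards in time
        X, y = [], []
        for x in sample_states(n_states):            # many values of x_t
            best = np.inf
            for d in sample_decisions(n_decisions):  # action selection, Eq. (1)
                future = np.mean([V(t + 1, f(x, d, w))
                                  for w in sample_noise(n_noise)])
                best = min(best, c(x, d) + future)
            X.append(x)
            y.append(best)
        V_models[t] = SVR().fit(X, y)                # Bellman-function regression
    return V_models
```

The inner minimization over d is the non-linear optimization step this paper focuses on; with n_states sampled points per time step, it is performed T × n_states times per run, as discussed below.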
Equation (1) is used many times during a run of dynamic programming. For T time steps, if N points are required for efficiently approximating each V_t, then there are T × N optimizations. Furthermore, the derivative of the function to optimize is not always available, because complex simulators are sometimes involved in the transition f. Convexity sometimes holds, but sometimes not. Binary variables are sometimes involved, e.g. in power-plant management. This suggests that evolutionary algorithms are a possible tool.

1.1 Robustness in Non-linear Optimization

Robustness is one of the main issues in non-linear optimization and has various meanings.

1. A first meaning is the following: robust optimization is the search of x such that the fitness is good in the neighborhood of x, and not only at x. In particular, (DeJong, 1992) introduced the idea that evolutionary algorithms are not function-optimizers, but rather tools for finding wide areas of good fitness.

2. A second meaning is that robust optimization is the avoidance of local minima. It is known that iterative deterministic methods are often more subject to local minima than evolutionary methods; however, various forms of restarts (relaunching the optimization from a different initial point) can also be efficient for avoiding local minima.

3. A third possible meaning is robustness with respect to fitness noise. Various models of noise and conclusions can be found in (Jin and Branke, 2005; Sendhoff et al., 2004; Tsutsui, 1999; Fitzpatrick and Grefenstette, 1988; Beyer et al., 2004).

4. A fourth possible meaning is robustness with respect to unsmooth fitness functions, even in cases in which there are no local minima. Evolutionary algorithms are usually rank-based (the next iterate depends only on the fitness ranks of previously visited points), and therefore do not depend on increasing transformations of the fitness function. It is known that they have optimality properties with respect to this kind of transformation (Gelly et al., 2006). For example, √||x|| (or some C^∞ functions close to it) leads to very bad behavior of standard Newton-based methods like BFGS (Broyden, 1970; Fletcher, 1970; Goldfarb, 1970; Shanno, 1970), whereas a rank-based evolutionary algorithm behaves the same for ||x||^2 and √||x||.

5. A fifth possible meaning is robustness with respect to the non-deterministic choices made by the algorithm. Even algorithms that are considered deterministic often have a random part(1): the choice of the initial point. Population-based methods are more robust in this sense, even if they use more randomness for the initial step (a fully random initial population compared to only one initial point): a bad initialization that would lead to a disaster is much more unlikely.

(1) Or, if not random, a deterministic but arbitrary part, such as the initial point or the initial step-size.

The first sense of robustness given above, i.e. avoiding too narrow areas of good fitness, fully applies here. Consider for example a robot navigating in an environment in order to find a target. The robot has to avoid obstacles. The strict optimization of the cost-to-go leads to choices just tangent to obstacles. As at each step the learning is far from perfect, being tangent to obstacles leads to hitting the obstacles in …

… algorithms and some discretization techniques. Evolutionary algorithms can work in continuous domains (Bäck et al., 1991; Bäck et al., 1993; Beyer, 2001); moreover, they are compatible with mixed-integer programming (e.g. …
