
2005 American Control Conference, ThA18.2, June 8-10, 2005, Portland, OR, USA

An Efficient Sequential Linear Quadratic Algorithm for Solving Nonlinear Optimal Control Problems

Athanasios Sideris and James E. Bobrow
Department of Mechanical and Aerospace Engineering
University of California, Irvine
Irvine, CA 92697
{asideris, jebobrow}@uci.edu

Abstract— We develop a numerically efficient algorithm for computing controls for nonlinear systems that minimize a quadratic performance measure. We formulate the optimal control problem in discrete-time, but many continuous-time problems can also be solved after discretization. Our approach is similar to sequential quadratic programming for finite-dimensional optimization problems in that we solve the nonlinear optimal control problem using a sequence of linear quadratic subproblems. Each subproblem is solved efficiently using the Riccati difference equation. We show that each iteration produces a descent direction for the performance measure, and that the sequence of controls converges to a solution that satisfies the well-known necessary conditions for optimal control.

I. INTRODUCTION

As the complexity of nonlinear systems such as robots increases, it is becoming more important to find controls for them that minimize a performance measure such as power consumption. Although the Maximum Principle [1] provides the optimality conditions for minimizing a given cost function, it does not provide a method for its numerical computation. Because of the importance of solving these problems, many numerical algorithms and commercial software packages have been developed for them since the 1960's [2]. The various approaches taken can be classified as either indirect or direct methods. Indirect methods explicitly solve the optimality conditions stated in terms of the maximum principle, the adjoint equations, and the transversality (boundary) conditions. Direct methods, such as collocation techniques [3] or direct transcription, replace the ODEs with algebraic constraints using a large set of unknowns. These collocation methods are powerful for solving trajectory optimization problems [4], [5] and problems with inequality constraints. However, due to the large-scale nature of these problems, convergence can be slow. Furthermore, optimal control problems can be difficult to solve numerically for a variety of reasons. For instance, the nonlinear system dynamics may create an unfavorable eigenvalue structure in the Hessian of the ensuing optimization problem, and gradient-based descent methods will be inefficient.

In this paper, we focus on finding optimal controls for nonlinear dynamic systems with general linear quadratic performance indexes (including tracking and regulation). The discrete-time case is considered, but continuous-time optimal control problems can be handled after discretization. We are interested primarily in systems for which the derivatives of the dynamic equations with respect to the state and the control are available, but whose second derivatives with respect to the states and the controls are too complex to compute analytically. Our algorithm is based on linearizing the system dynamics about an input/state trajectory and solving a corresponding linear quadratic optimal control problem. From the solution of the latter problem, we obtain a search direction along which we minimize the performance index by direct simulation of the system dynamics. Given the structure of the proposed algorithm, we refer to it in the sequel as the Sequential Linear Quadratic (SLQ) algorithm; a schematic sketch of one iteration is given below. We prove that search directions produced in this manner are descent directions, and that the algorithm converges to a control that locally minimizes the cost function. The solution of the linear quadratic problem is well known and can be reliably obtained by solving a Riccati difference equation.

Algorithms similar in spirit are reported in [6], [7], [8], [9]. Such algorithms implement a Newton search, or an asymptotically Newton search, but require that 2nd-order system derivatives be available. Newton algorithms can achieve quadratic local convergence under favorable circumstances, such as the existence of continuous 3rd-order or Lipschitz continuous 2nd-order derivatives, but cannot guarantee global convergence (that is, convergence from any starting point to a local minimum) unless properly modified. Many systems are too complex for 2nd-order derivatives to be available, or do not even satisfy such strong continuity assumptions (e.g., see Section IV for examples). In such cases, Newton's method cannot be applied, or the quadratic convergence rate of Newton's method does not materialize. We have found that our approach efficiently solves optimal control problems that are difficult to solve with other popular algorithms such as collocation methods (see the example in Section IV). More specifically, we have observed that our algorithm exhibits near-quadratic convergence in many of the problems that we have tested. Indeed, it is shown in the full version of the paper [10] that the proposed algorithm can be interpreted as a Gauss-Newton method, thus explaining the excellent rate of convergence observed in simulations. Thus, although many of the alternative methods ([2], [12], [11], [8]) can be applied to a broader class of problems, our SLQ algorithm provides a fast and reliable alternative to such algorithms for the important class of optimal control problems with quadratic cost under general nonlinear dynamics, while relying only on first derivative information.
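To make the iteration concrete, the following Python/NumPy sketch shows one plausible shape of an SLQ step, using the cost and dynamics notation introduced in Section II below: linearize the dynamics about the current input/state trajectory, solve the resulting LQ tracking subproblem with an affine Riccati backward recursion, and line-search along the resulting direction by simulating the true dynamics. The function name, the specific affine form of the recursion, and the step-size schedule `alphas` are illustrative assumptions, not the paper's exact Section III construction.

```python
import numpy as np

def slq_iteration(f, fx, fu, Q, R, xo, uo, x0, u_bar,
                  alphas=(1.0, 0.5, 0.25, 0.1, 0.05, 0.01)):
    """One schematic SLQ step (a sketch under stated assumptions):
    f(x, u, n) -> next state implements (2); fx, fu return the Jacobians
    of f with respect to x and u; Q[0..N], xo[0..N], R[0..N-1], uo[0..N-1]
    define the quadratic cost (1), (3), (4)."""
    N = len(u_bar)

    def rollout(policy):
        # Simulate (2) under `policy` and accumulate the cost (1).
        x, u, J = np.asarray(x0, float), [], 0.0
        for n in range(N):
            un = policy(n, x)
            dx, du = x - xo[n], un - uo[n]
            J += 0.5 * (dx @ Q[n] @ dx + du @ R[n] @ du)
            u.append(un)
            x = f(x, un, n)
        dxN = x - xo[N]
        return u, J + 0.5 * dxN @ Q[N] @ dxN

    # Forward pass: nominal trajectory and cost under the current control.
    x_bar = [np.asarray(x0, float)]
    for n in range(N):
        x_bar.append(f(x_bar[n], u_bar[n], n))
    _, J_bar = rollout(lambda n, x: u_bar[n])

    # Backward pass: affine Riccati recursion for the linearized subproblem.
    P = Q[N]
    p = Q[N] @ (x_bar[N] - xo[N])
    K, k = [None] * N, [None] * N
    for n in reversed(range(N)):
        A, B = fx(x_bar[n], u_bar[n], n), fu(x_bar[n], u_bar[n], n)
        Qx = Q[n] @ (x_bar[n] - xo[n]) + A.T @ p
        Qu = R[n] @ (u_bar[n] - uo[n]) + B.T @ p
        Quu = R[n] + B.T @ P @ B
        Qux = B.T @ P @ A
        K[n] = -np.linalg.solve(Quu, Qux)    # feedback gain
        k[n] = -np.linalg.solve(Quu, Qu)     # feedforward step
        P = (Q[n] + A.T @ P @ A + K[n].T @ Quu @ K[n]
             + K[n].T @ Qux + Qux.T @ K[n])
        p = Qx + K[n].T @ Quu @ k[n] + K[n].T @ Qu + Qux.T @ k[n]

    # Line search: scale the feedforward term and simulate the true dynamics.
    for a in alphas:
        u_new, J_new = rollout(
            lambda n, x: u_bar[n] + a * k[n] + K[n] @ (x - x_bar[n]))
        if J_new < J_bar:                    # accept the first decrease
            return u_new, J_new
    return u_bar, J_bar                      # no decrease found; keep iterate
```

Note that only the first derivatives fx and fu of the dynamics appear in the step, in line with the Gauss-Newton interpretation mentioned above.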
II. PROBLEM FORMULATION AND BACKGROUND RESULTS

We consider the following general formulation of discrete-time optimal control problems:

\[
\min_{u(n),\,x(n)} \quad J = \phi(x(N)) + \sum_{n=0}^{N-1} L(x(n),u(n),n) \tag{1}
\]
\[
\text{subject to}\quad x(n+1) = f(x(n),u(n)), \qquad x(0) = x_0. \tag{2}
\]

In the formulation above we assume a quadratic performance index, namely

\[
L(x(n),u(n),n) = \tfrac{1}{2}\,[x(n)-x^o(n)]^T Q(n)\,[x(n)-x^o(n)] + \tfrac{1}{2}\,[u(n)-u^o(n)]^T R(n)\,[u(n)-u^o(n)] \tag{3}
\]

and

\[
\phi(x) = \tfrac{1}{2}\,[x-x^o(N)]^T Q(N)\,[x-x^o(N)]. \tag{4}
\]

In (3) and (4), \(u^o(n)\), \(x^o(n)\), \(n = 1,\ldots,N\), are given control input and state target sequences. In standard optimal regulator control problem formulations, \(u^o(n)\), \(x^o(n)\) are usually taken to be zero, with the exception perhaps of \(x^o(N)\), the desired final value for the state. The formulation considered here addresses the more general optimal tracking control problem and is required for the linear quadratic step of the proposed algorithm presented in Section III.1.

A. First Order Optimality Conditions

We next briefly review the first order optimality conditions for the optimal control problem of (1) and (2), in a manner that brings out certain important interpretations of the adjoint dynamical equations encountered in a control theoretic approach and of the Lagrange multipliers found in a pure optimization theory approach.

Let us consider the cost-to-go

\[
J(n) \equiv \sum_{k=n}^{N-1} L(x(k),u(k),k) + \phi(x(N)) \tag{5}
\]

with \(L\) and \(\phi\) as defined in (3) and (4), respectively. We remark that \(J(n)\) is a function of \(x(n)\) and \(u(k)\), \(k = n,\ldots,N-1\), and introduce the sensitivity of the cost-to-go with respect to the current state:

\[
\lambda^T(n) = \frac{\partial J(n)}{\partial x(n)}. \tag{6}
\]

From (5) and the dynamics (2), the cost-to-go satisfies the recursion

\[
J(n) = L(x(n),u(n),n) + J(n+1). \tag{7}
\]

Differentiating (7) with respect to \(x(n)\), with the chain rule applied through \(x(n+1) = f(x(n),u(n))\), we have the recursion

\[
\lambda^T(n) = L_x(x(n),u(n),n) + \lambda^T(n+1)\,f_x(x(n),u(n)) = [x(n)-x^o(n)]^T Q(n) + \lambda^T(n+1)\,f_x(x(n),u(n)) \tag{8}
\]

by using (3), where \(L_x\) and \(f_x\) denote the partials of \(L\) and \(f\), respectively, with respect to the state variables. The previous recursion can be solved backward in time (\(n = N-1,\ldots,0\)), given the control and state trajectories, and it can be started with the final value

\[
\lambda^T(N) = \frac{\partial \phi}{\partial x}(x(N)) = [x(N)-x^o(N)]^T Q(N) \tag{9}
\]

derived from (4). In a similar manner, we compute the sensitivity of \(J(n)\) with respect to the current control \(u(n)\). Clearly from (7),

\[
\frac{\partial J(n)}{\partial u(n)} = L_u(x(n),u(n),n) + \lambda^T(n+1)\,f_u(x(n),u(n)) = [u(n)-u^o(n)]^T R(n) + \lambda^T(n+1)\,f_u(x(n),u(n)). \tag{10}
\]

In (10), \(L_u\) and \(f_u\) denote the partials of \(L\) and \(f\), respectively, with respect to the control variables, and (3) is used. Next note that \(\frac{\partial J}{\partial u(n)} = \frac{\partial J(n)}{\partial u(n)}\), since the first \(n\) terms in \(J\) do not depend on \(u(n)\). We have then obtained the gradient of the cost with respect to the control variables, namely

\[
\nabla_u J = \left[\;\frac{\partial J(0)}{\partial u(0)} \;\; \frac{\partial J(1)}{\partial u(1)} \;\; \cdots \;\; \frac{\partial J(N-1)}{\partial u(N-1)}\;\right]. \tag{11}
\]

Assuming \(u\) is unconstrained, the first order optimality conditions require that

\[
\nabla_u J = 0. \tag{12}
\]
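The backward sweep (8)-(10) maps directly into code. The following minimal sketch uses the same hypothetical interface as above (callables f, fx, fu for the dynamics (2) and its Jacobians): it simulates the system forward to accumulate J, then runs the adjoint recursion backward to assemble the gradient (11).

```python
import numpy as np

def cost_gradient(f, fx, fu, Q, R, xo, uo, x0, u):
    """Return J and its gradient (11) via the adjoint recursion (8)-(10).
    Gradient rows are returned as a list of 1-D arrays, one per time step."""
    N = len(u)
    # Forward pass: simulate (2) and accumulate the cost (1).
    x, J = [np.asarray(x0, float)], 0.0
    for n in range(N):
        dx, du = x[n] - xo[n], u[n] - uo[n]
        J += 0.5 * (dx @ Q[n] @ dx + du @ R[n] @ du)
        x.append(f(x[n], u[n], n))
    dxN = x[N] - xo[N]
    J += 0.5 * dxN @ Q[N] @ dxN
    # Backward pass: adjoint recursion (8), started from (9).
    lam = Q[N] @ (x[N] - xo[N])                       # lambda(N), eq. (9)
    grad = [None] * N
    for n in reversed(range(N)):
        A, B = fx(x[n], u[n], n), fu(x[n], u[n], n)
        # Here `lam` still holds lambda(n+1), as (10) requires.
        grad[n] = R[n] @ (u[n] - uo[n]) + B.T @ lam   # row of (11), eq. (10)
        lam = Q[n] @ (x[n] - xo[n]) + A.T @ lam       # lambda(n), eq. (8)
    return J, grad
```

A finite-difference check on a few perturbed entries of u is a cheap way to validate the fx and fu implementations before trusting the resulting descent directions.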
We remark that by considering the Hamiltonian

\[
H(x,u,\lambda,n) \equiv L(x,u,n) + \lambda^T f(x,u), \tag{13}
\]

we have that \(H_u(x(n),u(n),\lambda(n+1),n) \equiv \frac{\partial J}{\partial u(n)}\); i.e., we uncover the generally known but frequently overlooked fact that the partial of the Hamiltonian with respect to the control variables \(u\) is the gradient of the cost function with respect to \(u\). We emphasize here that in our approach for solving the optimal control problem, we take the viewpoint that the control variables \(u(n)\) are the independent variables of the problem, since the dynamical equations express (recursively) the state variables in terms of the controls, which can thus be eliminated from the cost function. Thus, in taking the partials of \(J\) with respect to \(u\), \(J\) is considered as a function of \(u(n)\), \(n = 0,\ldots,N-1\), alone, assuming that \(x(0)\) is given. With this perspective, the problem becomes one of unconstrained minimization, and having computed \(\nabla_u J\), Steepest Descent, Quasi-Newton, and other first derivative methods can be brought to bear to solve it.
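As a quick illustration, consider a hypothetical toy regulator (our own construction, not an example from the paper): a damped point mass with a cubic spring, so that f in (2) is genuinely nonlinear while its Jacobians are trivial to write down. Repeated SLQ steps should drive the gradient norm toward zero, as the optimality condition (12) requires at a minimizer.

```python
import numpy as np

N, dt = 50, 0.05
def f(x, u, n):
    # Euler-discretized point mass with a cubic spring and light damping.
    return np.array([x[0] + dt * x[1],
                     x[1] + dt * (-x[0] - 0.5 * x[0]**3 - 0.1 * x[1] + u[0])])
def fx(x, u, n):
    return np.array([[1.0, dt],
                     [dt * (-1.0 - 1.5 * x[0]**2), 1.0 - 0.1 * dt]])
def fu(x, u, n):
    return np.array([[0.0], [dt]])

Q = [np.eye(2) * (0.1 if n < N else 100.0) for n in range(N + 1)]
R = [np.eye(1) * 0.01 for n in range(N)]
xo = [np.zeros(2) for n in range(N + 1)]    # regulate to the origin
uo = [np.zeros(1) for n in range(N)]
x0 = np.array([1.0, 0.0])

u = [np.zeros(1) for n in range(N)]
for it in range(20):                        # outer SLQ loop
    u, J = slq_iteration(f, fx, fu, Q, R, xo, uo, x0, u)
_, g = cost_gradient(f, fx, fu, Q, R, xo, uo, x0, u)
# The max gradient row norm should shrink toward zero, cf. (12).
print(J, max(np.linalg.norm(gi) for gi in g))
```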