The Maximum Principle of Pontryagin in Control and in Optimal Control
The Maximum Principle of Pontryagin in control and in optimal control

Andrew D. Lewis¹

16/05/2006; last updated 23/05/2006

¹Professor, Department of Mathematics and Statistics, Queen's University, Kingston, ON K7L 3N6, Canada
Email: [email protected], URL: http://www.mast.queensu.ca/~andrew/

Preface

These notes provide an introduction to Pontryagin's Maximum Principle. Optimal control, and in particular the Maximum Principle, is one of the real triumphs of mathematical control theory. Certain of the developments stemming from the Maximum Principle are now part of the standard toolbox of users of control theory. While the Maximum Principle has proved to be extremely valuable in applications, our emphasis here is on understanding the Maximum Principle: where it comes from, and what light it sheds on the subject of control theory in general. For this reason, readers looking for many instances of applications of the Maximum Principle will not find what we say here too rewarding; such applications can be found in many other places in the literature. What is more difficult to obtain, however, is an appreciation for the Maximum Principle. It is true that the original presentation of Pontryagin, Boltyanskiĭ, Gamkrelidze, and Mishchenko [1986] is still an excellent one in many respects. However, one can probably benefit from focusing exclusively on understanding the ideas behind the Maximum Principle, and this is what we try to do here.

Let us outline the structure of our presentation.

1. The Maximum Principle can be thought of as a far-reaching generalisation of the classical subject of the calculus of variations. For this reason, we begin our development with a discussion of those parts of the calculus of variations that bear upon the Maximum Principle. The usual Euler–Lagrange equations paint only part of the picture, with the necessary conditions of Legendre and Weierstrass filling in the rest of the canvas.
We hope that readers familiar with the calculus of variations can, at the end of this development, at least find the Maximum Principle plausible.

2. After our introduction through the calculus of variations, we give a precise statement of the Maximum Principle.

3. With the Maximum Principle stated, we next wind our way towards its proof. While it is not necessary to understand the proof of the Maximum Principle in order to use it,¹ a number of important ideas in control theory come up in the proof, and these are explored in independent detail.

(a) The notion of a "control variation" is fundamental in control theory, particularly in optimal control and controllability theory. It is the common connection with control variations that accounts for the links, at first glance unexpected, between controllability and optimal control.

(b) The notion of the reachable set lies at the heart of the Maximum Principle. As we shall see in Section 6.2, a significant step in understanding the Maximum Principle occurs with the recognition that optimal trajectories lie on the boundary of the reachable set for the so-called extended system.

4. With the buildup to the proof of the Maximum Principle done, it is possible to complete the proof. In doing so, we identify the key steps and what they rely upon.

¹Like much of mathematics, one can come to some sort of understanding of the Maximum Principle by applying it enough times to enough interesting problems. However, the understanding one obtains in this way may be compromised by looking only at simple examples. As Bertrand Russell wrote, "The man who has fed the chicken every day throughout its life at last wrings its neck instead, showing that more refined views as to the uniformity of nature would have been useful to the chicken."

5.
Since one should, at the end of the day, be able to apply the Maximum Principle, we do this in two special cases: (1) linear quadratic optimal control and (2) linear time-optimal control. In both cases one can arrive at a decent characterisation of the solution of the problem by simply applying the Maximum Principle.

6. In three appendices we collect some details that are needed in the proof of the Maximum Principle. Readers not caring too much about technicalities can probably pretend that these appendices do not exist.

An attempt has been made to keep prerequisites to a minimum. For example, the treatment is not differential geometric. Readers with a background in advanced analysis (at the level of, say, "Baby Rudin" [Rudin, 1976]) ought to be able to follow the development. We have also tried to make the treatment as self-contained as possible. The only ingredient in the presentation that is not substantially self-contained is our summary of measure theory and differential equations with measurable time-dependence. It is not really possible, and moreover not very useful, to attempt a significant diversion into these topics. We instead refer the reader to the many excellent references.

Acknowledgements. These notes were prepared for a short graduate course at the Universitat Politècnica de Catalunya in May 2006. I extend warm thanks to Miguel Muñoz-Lecanda for the invitation to give the course and for his hospitality during my stay in Barcelona. I would also like to thank the attendees of the lectures for indulging my excessively pedantic style and onerous notation for approximately three gruelling (for the students, not for me) hours of every day of the course.

Contents

1 Control systems and optimal control problems . . . 6
  1.1 Control systems . . . 7
  1.2 Controls and trajectories . . . 8
  1.3 Two general problems in optimal control . . . 9
  1.4 Some examples of optimal control problems . . . 10
2 From the calculus of variations to optimal control . . . 13
  2.1 Three necessary conditions in the calculus of variations . . . 14
    2.1.1 The necessary condition of Euler–Lagrange . . . 14
    2.1.2 The necessary condition of Legendre . . . 16
    2.1.3 The necessary condition of Weierstrass . . . 18
  2.2 Discussion of calculus of variations . . . 20
  2.3 The Skinner–Rusk formulation of the calculus of variations . . . 24
  2.4 From the calculus of variations to the Maximum Principle, almost . . . 25
3 The Maximum Principle . . . 29
  3.1 Preliminary definitions . . . 29
  3.2 The statement of the Maximum Principle . . . 31
  3.3 The mysterious parts and the useful parts of the Maximum Principle . . . 32
4 Control variations . . . 35
  4.1 The variational and adjoint equations . . . 35
  4.2 Needle variations . . . 40
  4.3 Multi-needle variations . . . 43
  4.4 Free interval variations . . . 45
5 The reachable set and approximation of its boundary by cones . . . 49
  5.1 Definitions . . . 49
  5.2 The fixed interval tangent cone . . . 50
  5.3 The free interval tangent cone . . . 53
  5.4 Approximations of the reachable set by cones . . . 55
    5.4.1 Approximation by the fixed interval tangent cone . . . 55
    5.4.2 Approximation by the free interval tangent cone . . . 58
  5.5 The connection between tangent cones and the Hamiltonian . . . 59
  5.6 Controlled trajectories on the boundary of the reachable set . . . 64
    5.6.1 The fixed interval case . . . 64
    5.6.2 The free interval case . . . 66
6 A proof of the Maximum Principle . . . 70
  6.1 The extended system . . . 70
  6.2 Optimal trajectories lie on the boundary of the reachable set of the extended system . . . 71
  6.3 The properties of the adjoint response and the Hamiltonian . . . 71
  6.4 The transversality conditions . . . 73
7 A discussion of the Maximum Principle . . . 79
  7.1 Normal and abnormal extremals . . . 79
  7.2 Regular and singular extremals . . . 80
  7.3 Tangent cones and linearisation . . . 82
  7.4 Differential geometric formulations . . . 84
    7.4.1 Control systems . . . 85
    7.4.2 The Maximum Principle . . . 85
    7.4.3 The variational and adjoint equations, and needle variations . . . 85
    7.4.4 Tangent cones . . . 86
8 Linear quadratic optimal control . . . 90
  8.1 Problem formulation . . . 90
  8.2 The necessary conditions of the Maximum Principle . . . 91
  8.3 The rôle of the Riccati differential equation . . . 92
  8.4 The infinite horizon problem . . . 95
  8.5 Linear quadratic optimal control as a case study of abnormality . . . 97
9 Linear time-optimal control . . . 102
  9.1 Some general comments about time-optimal control . . . 102
  9.2 The Maximum Principle for linear time-optimal control . . . 103
  9.3 An example . . . 105
A Ordinary differential equations . . . 110
  A.1 Concepts from measure theory . . . 110
    A.1.1 Lebesgue measure . . . 110
    A.1.2 Integration . . . 111
    A.1.3 Classes of integrable functions . . . 112
  A.2 Ordinary differential equations with measurable time-dependence . . . 113
B Convex sets, affine subspaces, and cones . . . 115
  B.1 Definitions . . . 115
  B.2 Combinations and hulls . . . 115
  B.3 Topology of convex sets and cones . . . 121
  B.4 Simplices and simplex cones . . . 123
  B.5 Separation theorems for convex sets . . . 128
  B.6 Linear functions on convex polytopes . . . 130
C Two topological lemmata . . . 134
  C.1 The Hairy Ball Theorem . . . 134
  C.2 The Brouwer Fixed Point Theorem . . . 137
  C.3 The desired results . . . 139

Chapter 1

Control systems and optimal control problems

We begin by indicating what we will mean by a control system in these notes. We will be slightly fussy about classes of trajectories, and in this chapter we give the notation attendant to this. We follow this by formulating precisely the problem in optimal control on which we will focus for the remainder of the discussion. Then, by way of motivation, we consider a few typical concrete problems in optimal control.
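As a rough preview of the kind of problem formulated precisely later in this chapter, and in notation that is standard in the subject but not necessarily that of these notes, the basic fixed-endpoint optimal control problem can be sketched as follows: minimise an integral cost over the controls steering the system between prescribed states. The symbols L, f, U, x₀, and x₁ here are placeholders; their precise meanings, and the admissible classes of controls and trajectories, are exactly what the chapter goes on to pin down.

```latex
% Generic optimal control problem (standard notation; a sketch only):
% minimise the cost J over controls u steering the state from x_0 to x_1.
\begin{aligned}
  &\text{minimise}   && J(u) = \int_{t_0}^{t_1} L(x(t), u(t))\,\mathrm{d}t \\
  &\text{subject to} && \dot{x}(t) = f(x(t), u(t)), \qquad u(t) \in U, \\
  &                  && x(t_0) = x_0, \qquad x(t_1) = x_1.
\end{aligned}
```

The Maximum Principle supplies necessary conditions that any minimiser of such a problem must satisfy, which is the thread the following sections develop.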