
Linear Control Theory
(Lecture notes)

Version 0.9

Dmitry Gromov

August 25, 2017


Contents

Preface

I  CONTROL SYSTEMS: ANALYSIS

1  Introduction
   1.1  General notions
   1.2  Typical problems solved by control theory
   1.3  Linearization
        1.3.1  * Hartman-Grobman theorem

2  Solutions of an LTV system
   2.1  Fundamental matrix
   2.2  State transition matrix
   2.3  Time-invariant case
   2.4  Controlled systems: variation of constants formula

3  Controllability and observability
   3.1  Controllability of an LTV system
        3.1.1  * Optimality property of $\bar{u}$
   3.2  Observability of an LTV system
   3.3  Duality principle
   3.4  Controllability of an LTI system
        3.4.1  Kalman's controllability criterion
        3.4.2  Decomposition of a non-controllable LTI system
        3.4.3  Hautus' controllability criterion
   3.5  Observability of an LTI system
        3.5.1  Decomposition of a non-observable LTI system
   3.6  Canonical decomposition of an LTI control system

4  Stability of LTI systems
   4.1  Matrix norm and related inequalities
   4.2  Stability of an LTI system
        4.2.1  Basic notions
        4.2.2  * Some more about stability
        4.2.3  Lyapunov's criterion of asymptotic stability
        4.2.4  Algebraic Lyapunov matrix equation
   4.3  Hurwitz stable polynomials
        4.3.1  Stodola's necessary condition
        4.3.2  Hurwitz stability criterion
   4.4  Frequency domain stability criteria

5  Linear systems in frequency domain
   5.1  Laplace transform
   5.2  Transfer matrices
        5.2.1  Properties of a transfer matrix
        5.2.2  * Computing $e^{At}$ using the Laplace transform
   5.3  Transfer functions
        5.3.1  Physical interpretation of a transfer function
        5.3.2  Bode plot
        5.3.3  * Impulse response function

II  CONTROL SYSTEMS: SYNTHESIS

6  Feedback control
   6.1  Introduction
        6.1.1  Reference tracking control
        6.1.2  Feedback transformation
   6.2  Pole placement procedure
   6.3  Linear-quadratic regulator (LQR)
        6.3.1  Optimal control basics
        6.3.2  Dynamic programming
        6.3.3  Linear-quadratic optimal control problem

7  State observers
   7.1  Full state observer
   7.2  Reduced state observer

APPENDIX

A  Block matrices
   A.1  Matrix inversion
   A.2  Determinant of a block matrix

B  Canonical forms of a matrix
   B.1  Similarity transformation
   B.2  Frobenius companion matrix
        B.2.1  Transformation of $A$ to $A_F$
        B.2.2  Transformation of $A$ to $\bar{A}_F$
   B.3  Jordan form

C  * Linear operators
   C.1  General properties
   C.2  Adjoint of $L(u)$
   C.3  Solving homogeneous ODEs

D  Miscellaneous

Bibliography
Preface

These lecture notes are intended to provide a supplement to the one-semester course "Linear control systems" taught to the 3rd year bachelor students at the Faculty of Applied Mathematics and Control Processes, Saint Petersburg State University. The course is designed to familiarize the students with the basic concepts of Linear Control Theory and to provide them with a set of basic tools that can be used in the subsequent courses on robust control, nonlinear control, control of time-delay systems, and so on. The main emphasis is put on understanding the internal logic of the theory; many particular results are omitted, and some parts of the proofs are left to the students as exercises. On the other hand, there are certain topics, marked with asterisks, that are not taught in the course but are included in the lecture notes because it is believed that they can help interested students to get deeper into the matter. Some of these topics are included in the course "Modern control theory" taught to the 1st year master students of the specialization "Operations research and systems analysis".

The lecture notes do not include homework exercises. These are given by the tutors and elaborated during the weekly seminars. All exercises and examples included in the lecture notes are intended to introduce certain concepts that will be used later on in the course.

When preparing the lecture notes the author extensively used the material from the classical book by Roger Brockett, [3], as well as from the lecture notes by Vladimir L. Kharitonov, [6].

Part I
CONTROL SYSTEMS: ANALYSIS

Chapter 1
Introduction

1.1 General notions

We begin by considering the following system of first order nonlinear differential equations:
\[
\begin{cases}
\dot{x}(t) = f(x(t), u(t), t), \quad x(t_0) = x_0,\\
y(t) = g(x(t), u(t), t),
\end{cases}
\tag{1.1}
\]
where $x(t) \in \mathbb{R}^n$, $u(t) \in \mathbb{R}^m$, and $y(t) \in \mathbb{R}^k$ for all $t \in I$, $I \in \{[t_0, T], [t_0, \infty)\}$;¹ $f(x(t), u(t), t)$ and $g(x(t), u(t), t)$ are continuously differentiable w.r.t. all their arguments with uniformly bounded first derivatives, and $u(t)$ is a measurable function. With these assumptions, system (1.1) has a unique solution for any pair $(t_0, x_0)$ and any $u(t)$, which can be extended to the whole interval $I$.

¹ Whether we consider a closed and finite or a half-open and semi-infinite interval will depend on the studied problem. For instance, the time-dependent controllability problem is considered on a closed interval, while feedback stabilization (typically) requires infinite time.

In the following we will say that $x(t)$ is the state, $u(t)$ is the input (or the control), and $y(t)$ is the output. Below we consider these notions in more detail.

State. The state is defined as a quantity that uniquely determines the future evolution of the system for any (admissible) control $u(t)$. We consider systems with $x(t)$ being an element of a vector space $\mathbb{R}^n$, $n \in \{1, 2, \ldots\}$. NB: Other cases are possible! For instance, the state of a time-delay system is an element of a functional space.

Control. The control $u(\cdot)$ is an element of the functional space of admissible controls: $u(\cdot) \in \mathcal{U}$, where $\mathcal{U}$ can be defined, e.g., as a set of measurable, $L_2$ or $L_\infty$, piecewise continuous or piecewise constant functions from $I$ to $U \subseteq \mathbb{R}^m$, where $U$ is referred to as the set of admissible control values. In this course we assume that $U = \mathbb{R}^m$ and $\mathcal{U}$ is the set of piecewise continuous functions.

Definition 1.1.1. Given $(t_0, x_0)$ and $u(t)$, $t \in I$, $\tilde{x}(t)$ is said to be the solution of (1.1) if $\tilde{x}(t_0) = x_0$ and if $\frac{d}{dt}\tilde{x}(t) = f(\tilde{x}(t), u(t), t)$ almost everywhere.
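To make this setup concrete, the following minimal sketch simulates one particular system of the form (1.1): a damped pendulum driven by a torque input, with the pendulum angle taken as the output. The model, its parameter values, and the use of SciPy's solve_ivp integrator are illustrative assumptions of this sketch, not material from the notes; the control $u(t) = 0.1 \sin t$ is continuous and hence admissible in the sense defined above.

```python
# A minimal numerical illustration of a system of the form (1.1):
# a damped pendulum driven by a torque input u(t), with the angle
# measured as the output. The model and all parameter values are
# illustrative assumptions, not taken from the lecture notes.
import numpy as np
from scipy.integrate import solve_ivp

g, l, gamma = 9.81, 1.0, 0.5   # gravity, pendulum length, damping

def u(t):
    # an admissible (here: continuous) control input
    return 0.1 * np.sin(t)

def f(t, x):
    # x = (angle, angular velocity); dynamics x' = f(x, u(t), t)
    x1, x2 = x
    return [x2, -(g / l) * np.sin(x1) - gamma * x2 + u(t)]

def output(x):
    # y = g(x, u, t): here we simply measure the angle
    return x[0]

x0 = [0.2, 0.0]                          # initial condition x(t0) = x0
sol = solve_ivp(f, (0.0, 10.0), x0, t_eval=np.linspace(0, 10, 101))
y = [output(x) for x in sol.y.T]         # output trajectory y(t)
print(f"y(10) = {y[-1]:.4f}")
```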
We will often distinguish the following special cases:

• Uncontrolled dynamics. If $u(t) = 0$ for all $t \in [t_0, \infty)$, the system (1.1) turns into
\[
\begin{cases}
\dot{x}(t) = f_0(x(t), t), \quad x(t_0) = x_0,\\
y(t) = g_0(x(t), t),
\end{cases}
\tag{1.2}
\]
where $f_0(x, t) = f(x, 0, t)$ and, resp., $g_0(x, t) = g(x, 0, t)$. The dynamics of (1.2) depends only on the initial values $x(t_0) = x_0$.

• Time-invariant dynamics. Let $f$ and $g$ not depend explicitly on $t$. Then (1.1) turns into
\[
\begin{cases}
\dot{x}(t) = f(x(t), u(t)), \quad x(t_0) = x_0,\\
y(t) = g(x(t), u(t)).
\end{cases}
\tag{1.3}
\]
The system (1.3) is invariant under time shifts and hence we can set $t_0 = 0$.

1.2 Typical problems solved by control theory

Below we list some of the problems which are addressed by control theory.

1. How to steer the system from the point A (i.e., $x(t_0) = x_A$) to the point B ($x(T) = x_B$)? ⇝ Open-loop control.
2. Does the above problem always possess a solution? ⇝ Controllability analysis.
3. How to counteract the external disturbances resulting in deviations from the precomputed trajectory? ⇝ Feedback control.
4. How to get the necessary information about the system's state? ⇝ Observer design.
5. Is the above problem always solvable? ⇝ Observability analysis.
6. How to drive the system to an equilibrium from any initial position? ⇝ Stabilization.
7. And so on and so forth... many problems are beyond the scope of our course.

1.3 Linearization

Typically, there are two ways to study a nonlinear system: a global and a local one. The global analysis is done using methods from nonlinear control theory, while the local analysis can be performed using linear control theory. The reason for this is that locally the behavior of most nonlinear systems can be well captured by a linear model. The procedure of substituting a nonlinear model with a linear one is referred to as linearization.

Linearization in the neighborhood of an equilibrium point. The state $x^*$ is said to be an equilibrium (or fixed) point of (1.1) if $f(x^*, 0, t) = 0$, $\forall t$. One can also consider controlled equilibria, i.e., pairs $(x^*, u^*)$ s.t. $f(x^*, u^*, t) = 0$, $\forall t$.

Let $x^*$ be an equilibrium point of (1.1). Consider the dynamics of (1.1) in a sufficiently small neighborhood of $x^*$, denoted by $U(x^*)$. Let $\Delta x(t) = x(t) - x^*$ be the deviation from the equilibrium point $x^*$. We write the differential equation for $\Delta x(t)$, expanding the right-hand side into the Taylor series:
\[
\frac{d}{dt}\Delta x(t) = f(x^*, 0, t) + \left.\frac{\partial}{\partial x} f(x, u, t)\right|_{x=x^*,\, u=0} \Delta x(t) + \left.\frac{\partial}{\partial u} f(x, u, t)\right|_{x=x^*,\, u=0} u(t) + \text{H.O.T.}
\]
Introducing the notation $A(t) = \left.\frac{\partial}{\partial x} f(x, u, t)\right|_{x=x^*,\, u=0}$ and $B(t) = \left.\frac{\partial}{\partial u} f(x, u, t)\right|_{x=x^*,\, u=0}$, recalling that $f(x^*, 0, t) = 0$ and, finally, dropping the higher-order terms, we get
\[
\frac{d}{dt}\Delta x(t) = A(t)\Delta x(t) + B(t)u(t). \tag{1.4}
\]
The equation (1.4) is said to be Linear Time-Variant (LTV).
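As a worked illustration of this procedure, one can linearize the hypothetical pendulum from the sketch in Section 1.1 about its downward equilibrium $x^* = (0, 0)$, $u^* = 0$; since the dynamics are time-invariant, $A$ and $B$ come out constant. The sketch below, again under the assumed model and parameters, computes the analytic Jacobians and cross-checks them by central finite differences, a generic fallback when symbolic differentiation is inconvenient.

```python
# Linearization of the illustrative pendulum from Section 1.1 about the
# equilibrium x* = (0, 0), u* = 0, following (1.4). The analytic Jacobians
# are compared against a finite-difference check; the model and all
# parameter values are assumptions made for the sake of the example.
import numpy as np

g, l, gamma = 9.81, 1.0, 0.5

def f(x, u):
    # time-invariant dynamics of the form (1.3)
    x1, x2 = x
    return np.array([x2, -(g / l) * np.sin(x1) - gamma * x2 + u])

# Analytic Jacobians evaluated at x* = (0, 0), u* = 0:
# A = df/dx = [[0, 1], [-(g/l) cos(x1), -gamma]],  B = df/du = [[0], [1]]
A = np.array([[0.0, 1.0], [-(g / l), -gamma]])
B = np.array([[0.0], [1.0]])

def numerical_jacobians(f, x_eq, u_eq, eps=1e-6):
    # Central finite differences: a generic way to obtain A and B
    # when analytic differentiation is inconvenient.
    n = len(x_eq)
    A_num = np.zeros((n, n))
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = eps
        A_num[:, i] = (f(x_eq + dx, u_eq) - f(x_eq - dx, u_eq)) / (2 * eps)
    B_num = ((f(x_eq, u_eq + eps) - f(x_eq, u_eq - eps)) / (2 * eps)).reshape(n, 1)
    return A_num, B_num

A_num, B_num = numerical_jacobians(f, np.zeros(2), 0.0)
assert np.allclose(A, A_num, atol=1e-4) and np.allclose(B, B_num, atol=1e-4)
print("A =\n", A, "\nB =\n", B)
```

For this example the linearized model (1.4) reads $\frac{d}{dt}\Delta x = \begin{pmatrix} 0 & 1 \\ -g/l & -\gamma \end{pmatrix} \Delta x + \begin{pmatrix} 0 \\ 1 \end{pmatrix} u$, which is in fact time-invariant, a special case discussed in the following chapters.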