Linear Control Theory (Lecture notes)
Version 0.9
Dmitry Gromov
August 25, 2017

Contents

Preface

I  CONTROL SYSTEMS: ANALYSIS

1  Introduction
   1.1  General notions
   1.2  Typical problems solved by control theory
   1.3  Linearization
        1.3.1  * Hartman-Grobman theorem

2  Solutions of an LTV system
   2.1  Fundamental matrix
   2.2  State transition matrix
   2.3  Time-invariant case
   2.4  Controlled systems: variation of constants formula

3  Controllability and observability
   3.1  Controllability of an LTV system
        3.1.1  * Optimality property of $\bar{u}$
   3.2  Observability of an LTV system
   3.3  Duality principle
   3.4  Controllability of an LTI system
        3.4.1  Kalman's controllability criterion
        3.4.2  Decomposition of a non-controllable LTI system
        3.4.3  Hautus' controllability criterion
   3.5  Observability of an LTI system
        3.5.1  Decomposition of a non-observable LTI system
   3.6  Canonical decomposition of an LTI control system

4  Stability of LTI systems
   4.1  Matrix norm and related inequalities
   4.2  Stability of an LTI system
        4.2.1  Basic notions
        4.2.2  * Some more about stability
        4.2.3  Lyapunov's criterion of asymptotic stability
        4.2.4  Algebraic Lyapunov matrix equation
   4.3  Hurwitz stable polynomials
        4.3.1  Stodola's necessary condition
        4.3.2  Hurwitz stability criterion
   4.4  Frequency domain stability criteria

5  Linear systems in frequency domain
   5.1  Laplace transform
   5.2  Transfer matrices
        5.2.1  Properties of a transfer matrix
        5.2.2  * Computing $e^{At}$ using the Laplace transform
   5.3  Transfer functions
        5.3.1  Physical interpretation of a transfer function
        5.3.2  Bode plot
        5.3.3  * Impulse response function

II  CONTROL SYSTEMS: SYNTHESIS

6  Feedback control
   6.1  Introduction
        6.1.1  Reference tracking control
        6.1.2  Feedback transformation
   6.2  Pole placement procedure
   6.3  Linear-quadratic regulator (LQR)
        6.3.1  Optimal control basics
        6.3.2  Dynamic programming
        6.3.3  Linear-quadratic optimal control problem

7  State observers
   7.1  Full state observer
   7.2  Reduced state observer

APPENDIX

A  Block matrices
   A.1  Matrix inversion
   A.2  Determinant of a block matrix

B  Canonical forms of a matrix
   B.1  Similarity transformation
   B.2  Frobenius companion matrix
        B.2.1  Transformation of $A$ to $A_F$
        B.2.2  Transformation of $A$ to $\bar{A}_F$
   B.3  Jordan form

C  * Linear operators
   C.1  General properties
   C.2  Adjoint of $L(u)$
   C.3  Solving homogeneous ODEs

D  Miscellaneous

Bibliography
Preface
These lecture notes are intended to provide a supplement to the one-semester course "Linear control systems" taught to third-year bachelor students at the Faculty of Applied Mathematics and Control Processes, Saint Petersburg State University. The course is designed to familiarize students with the basic concepts of Linear Control Theory and to provide them with a set of basic tools that can be used in subsequent courses on robust control, nonlinear control, control of time-delay systems, and so on. The main emphasis is put on understanding the internal logic of the theory; many particular results are omitted, and some parts of the proofs are left to the students as exercises. On the other hand, certain topics, marked with asterisks, are not taught in the course but are included in the lecture notes in the belief that they can help interested students get deeper into the matter. Some of these topics are included in the course "Modern control theory" taught to first-year master students of the specialization "Operations research and systems analysis".

The lecture notes do not include homework exercises. These are given by the tutors and elaborated during the weekly seminars. All exercises and examples included in the lecture notes are intended to introduce certain concepts that will be used later on in the course.

When preparing the lecture notes the author extensively used material from the classical book by Roger Brockett [3], as well as from the lecture notes by Vladimir L. Kharitonov [6].
Part I
CONTROL SYSTEMS: ANALYSIS
Chapter 1
Introduction
1.1 General notions
We begin by considering the following system of first-order nonlinear differential equations:
\[
\begin{cases}
\dot{x}(t) = f(x(t), u(t), t), \quad x(t_0) = x_0,\\
y(t) = g(x(t), u(t), t),
\end{cases} \tag{1.1}
\]
where $x(t) \in \mathbb{R}^n$, $u(t) \in \mathbb{R}^m$, and $y(t) \in \mathbb{R}^k$ for all $t \in I$, $I \in \{[t_0, T], [t_0, \infty)\}$;¹ $f(x, u, t)$ and $g(x, u, t)$ are continuously differentiable w.r.t. all their arguments with uniformly bounded first derivatives, and $u(t)$ is a measurable function. With these assumptions, system (1.1) has a unique solution for any pair $(t_0, x_0)$ and any $u(t)$, and this solution can be extended to the whole interval $I$.

In the following we will say that $x(t)$ is the state, $u(t)$ is the input (or the control), and $y(t)$ is the output. Below we consider these notions in more detail.

State. The state is a quantity that uniquely determines the system's future evolution for any (admissible) control $u(t)$. We consider systems with $x(t)$ being an element of a vector space $\mathbb{R}^n$, $n \in \{1, 2, \ldots\}$. NB: Other cases are possible! For instance, the state of a time-delay system is an element of a functional space.

Control. The control $u(\cdot)$ is an element of the functional space of admissible controls: $u(\cdot) \in U$, where $U$ can be defined, e.g., as a set of measurable ($L_2$ or $L_\infty$), piecewise continuous, or piecewise constant functions from $I$ to $\mathcal{U} \subseteq \mathbb{R}^m$, where $\mathcal{U}$ is referred to as the set of admissible control values. In this course we assume that $\mathcal{U} = \mathbb{R}^m$ and $U$ is the set of piecewise continuous functions.
Definition 1.1.1. Given $(t_0, x_0)$ and $u(t)$, $t \in I$, $\tilde{x}(t)$ is said to be the solution of (1.1) if $\tilde{x}(t_0) = x_0$ and if $\frac{d}{dt}\tilde{x}(t) = f(\tilde{x}(t), u(t), t)$ almost everywhere.

We will often distinguish the following special cases:
¹ Whether we consider a closed and finite or a half-open and semi-infinite interval depends on the problem studied. For instance, the time-dependent controllability problem is considered on a closed interval, while feedback stabilization (typically) requires an infinite interval.
• Uncontrolled dynamics. If $u(t) = 0$ for all $t \in [t_0, \infty)$, the system (1.1) turns into
\[
\begin{cases}
\dot{x}(t) = f_0(x(t), t), \quad x(t_0) = x_0,\\
y(t) = g_0(x(t), t),
\end{cases} \tag{1.2}
\]
where $f_0(x, t) = f(x, 0, t)$ and, resp., $g_0(x, t) = g(x, 0, t)$. The dynamics of (1.2) depends only on the initial value $x(t_0) = x_0$.

• Time-invariant dynamics. Let $f$ and $g$ not depend explicitly on $t$. Then (1.1) turns into
\[
\begin{cases}
\dot{x}(t) = f(x(t), u(t)), \quad x(t_0) = x_0,\\
y(t) = g(x(t), u(t)).
\end{cases} \tag{1.3}
\]
The system (1.3) is invariant under time shifts, hence we can set $t_0 = 0$.
1.2 Typical problems solved by control theory
Below we list some problems which are addressed by control theory.
1. How to steer the system from point A (i.e., $x(t_0) = x_A$) to point B ($x(T) = x_B$)? ⇝ Open-loop control.
2. Does the above problem always possess a solution? ⇝ Controllability analysis.
3. How to counteract external disturbances resulting in deviations from the precomputed trajectory? ⇝ Feedback control.
4. How to get the necessary information about the system's state? ⇝ Observer design.
5. Is the above problem always solvable? ⇝ Observability analysis.
6. How to drive the system to an equilibrium from any initial position? ⇝ Stabilization.
7. And so on and so forth: many problems are beyond the scope of this course.
1.3 Linearization
Typically, there are two ways to study a nonlinear system: globally and locally. The global analysis is done using methods from nonlinear control theory, while the local analysis can be performed using linear control theory. The reason is that locally, the behavior of most nonlinear systems is well captured by a linear model. The procedure of replacing a nonlinear model with a linear one is referred to as linearization.
Linearization in the neighborhood of an equilibrium point. The state $x^*$ is said to be an equilibrium (or fixed) point of (1.1) if $f(x^*, 0, t) = 0$, $\forall t$. One can also consider controlled equilibria, i.e., pairs $(x^*, u^*)$ s.t. $f(x^*, u^*, t) = 0$, $\forall t$.

Let $x^*$ be an equilibrium point of (1.1). Consider the dynamics of (1.1) in a sufficiently small neighborhood of $x^*$, denoted by $U(x^*)$. Let $\Delta x(t) = x(t) - x^*$ be the deviation from the equilibrium point $x^*$. We write the DE for $\Delta x(t)$ by expanding the r.h.s. into a Taylor series:
\[
\frac{d}{dt}\Delta x(t) = f(x^*, 0, t) + \left.\frac{\partial}{\partial x} f(x, u, t)\right|_{x=x^*,\, u=0} \Delta x(t) + \left.\frac{\partial}{\partial u} f(x, u, t)\right|_{x=x^*,\, u=0} u(t) + \text{H.O.T.}^2
\]
Introducing the notation $A(t) = \left.\frac{\partial}{\partial x} f(x, u, t)\right|_{x=x^*,\, u=0}$ and $B(t) = \left.\frac{\partial}{\partial u} f(x, u, t)\right|_{x=x^*,\, u=0}$, recalling that $f(x^*, 0, t) = 0$ and, finally, dropping the higher-order terms, we get
\[
\frac{d}{dt}\Delta x(t) = A(t)\Delta x(t) + B(t)u(t). \tag{1.4}
\]
The equation (1.4) is said to be Linear Time-Variant (LTV). If the initial nonlinear equation was time-invariant, we obtain the Linear Time-Invariant (LTI) equation:
\[
\frac{d}{dt}\Delta x(t) = A\Delta x(t) + Bu(t). \tag{1.5}
\]
Note that the linearization procedure can be applied to the second equation in (1.1) as well, thus yielding $y(t) = C(t)\Delta x(t) + D(t)u(t)$ in the LTV case or $y(t) = C\Delta x(t) + Du(t)$ in the LTI case (there could also be a constant term, which can easily be eliminated by passing to $\tilde{y}(t) = y(t) - g(x^*, 0, t)$).
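When an analytic Jacobian is inconvenient, the matrices $A$ and $B$ can be estimated numerically. The sketch below (a hypothetical example, not from the course material; NumPy assumed) linearizes a pendulum with torque input, $\dot{x}_1 = x_2$, $\dot{x}_2 = -\sin x_1 + u$, at the equilibrium $x^* = (0, 0)$, $u^* = 0$ by central finite differences:

```python
import numpy as np

# Hypothetical example (not from the notes): a pendulum with torque input,
#   x1' = x2,   x2' = -sin(x1) + u,
# linearized at the equilibrium x* = (0, 0), u* = 0.
def f(x, u):
    return np.array([x[1], -np.sin(x[0]) + u[0]])

def linearize(f, x_eq, u_eq, eps=1e-6):
    """Central finite-difference estimates of A = df/dx and B = df/du."""
    n, m = len(x_eq), len(u_eq)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x_eq + dx, u_eq) - f(x_eq - dx, u_eq)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x_eq, u_eq + du) - f(x_eq, u_eq - du)) / (2 * eps)
    return A, B

A, B = linearize(f, np.zeros(2), np.zeros(1))
# Analytic Jacobians at (0, 0) for this model: A = [[0, 1], [-1, 0]], B = [[0], [1]]
assert np.allclose(A, [[0.0, 1.0], [-1.0, 0.0]], atol=1e-6)
assert np.allclose(B, [[0.0], [1.0]], atol=1e-6)
```

The central-difference error is of order $\varepsilon^2$, so the numerical estimate agrees with the analytic linearization to high accuracy.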
Linearization in the neighborhood of a system's trajectory. Consider the time-invariant nonlinear system (1.3). Let $(x^*(t), u^*(t))$ be a trajectory of the system and the corresponding control. Denote $\delta x(t) = x(t) - x^*(t)$ and $\delta u(t) = u(t) - u^*(t)$. The DE for $\delta x(t)$ is
\[
\frac{d}{dt}\delta x(t) = \dot{x}(t) - \dot{x}^*(t) = f(x(t), u(t)) - f(x^*(t), u^*(t)) = \left.\frac{\partial}{\partial x} f(x, u)\right|_{x=x^*(t),\, u=u^*(t)} \delta x(t) + \left.\frac{\partial}{\partial u} f(x, u)\right|_{x=x^*(t),\, u=u^*(t)} \delta u(t) + \text{H.O.T.} \tag{1.6}
\]
Denoting $A(t) = \left.\frac{\partial}{\partial x} f(x, u)\right|_{x=x^*(t),\, u=u^*(t)}$, $B(t) = \left.\frac{\partial}{\partial u} f(x, u)\right|_{x=x^*(t),\, u=u^*(t)}$ and dropping the higher-order terms, we get an LTV system of the form (1.4). Note that even though the initial nonlinear system was time-invariant, its linearization around the trajectory $(x^*(t), u^*(t))$ is time-variant!
² H.O.T. = higher-order terms.
1.3.1 * Hartman-Grobman theorem
A justification for using linearized models is given by the Hartman-Grobman theorem, which is based on the notion of a hyperbolic fixed point.

Definition 1.3.1. The equilibrium (fixed) point $x^*$ is said to be hyperbolic if all eigenvalues of the linearization matrix $A$ have non-zero real parts.

Theorem 1.3.1 (Hartman-Grobman). The set of solutions of (1.1) in a neighborhood of a hyperbolic equilibrium point $x^*$ is homeomorphic to that of the linearized system (1.4) in a neighborhood of the origin.

Quoting Wikipedia: The Hartman–Grobman theorem ... asserts that linearization — our first resort in applications — is unreasonably effective in predicting qualitative patterns of behavior.
Chapter 2
Solutions of an LTV system
2.1 Fundamental matrix
Consider the set of homogeneous (i.e., uncontrolled) LTV differential equations:
\[
\dot{x}(t) = A(t)x(t), \qquad x(t_0) = x_0, \tag{2.1}
\]
where $x(t) \in \mathbb{R}^n$, $t \in [t_0, T]$, and $A(t)$ is component-wise continuous and bounded.

Proposition 2.1.1. The set of all solutions of (2.1) forms an $n$-dimensional vector space over $\mathbb{R}$.

Definition 2.1.1. A fundamental set of solutions of (2.1) is any set $\{x_i(\cdot)\}_{i=1}^n$ such that for some $t \in [t_0, T]$, $\{x_i(t)\}_{i=1}^n$ forms a basis of $\mathbb{R}^n$. An $n \times n$ matrix function of $t$, $\Psi(\cdot)$, is said to be a fundamental matrix for (2.1) if the columns of $\Psi(\cdot)$ consist of $n$ linearly independent solutions of (2.1), i.e.,
\[
\dot{\Psi}(t) = A(t)\Psi(t),
\]
where $\Psi(t) = \begin{pmatrix} \psi_1(t) & \dots & \psi_n(t) \end{pmatrix}$.

Exercise 2.1.1. Prove that $\operatorname{rank}\Psi(\bar{t}) = n$ for some $\bar{t} \in I$ implies $\operatorname{rank}\Psi(t) = n$ for all $t \in I$.
Note that there are many possible fundamental matrices. For instance, the $n \times n$ matrix $\Psi(t)$ satisfying $\dot{\Psi}(t) = A(t)\Psi(t)$ with $\Psi(t_0) = I_{n\times n}$ is a fundamental matrix.

Example 2.1.2. Consider the system
\[
\dot{x}(t) = \begin{pmatrix} 0 & 0 \\ t & 0 \end{pmatrix} x(t). \tag{2.2}
\]
That is, $\dot{x}_1(t) = 0$, $\dot{x}_2(t) = t\,x_1(t)$. The solution is
\[
x_1(t) = x_1(t_0), \qquad x_2(t) = \frac{1}{2}t^2 x_1(t_0) - \frac{1}{2}t_0^2 x_1(t_0) + x_2(t_0).
\]
Let $t_0 = 0$ and $\psi_1(0) = \begin{pmatrix} x_1(0) \\ x_2(0) \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$. Then $\psi_1(t) = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$. Now let $\psi_2(0) = \begin{pmatrix} 2 \\ 0 \end{pmatrix}$. Then we have $\psi_2(t) = \begin{pmatrix} 2 \\ t^2 \end{pmatrix}$. Thus a fundamental matrix for the system is given by
\[
\Psi(t) = \begin{pmatrix} 0 & 2 \\ 1 & t^2 \end{pmatrix}.
\]
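The defining property $\dot{\Psi}(t) = A(t)\Psi(t)$ and the constancy of $\operatorname{rank}\Psi(t)$ (Exercise 2.1.1) can be checked numerically for this example; a small sketch, assuming NumPy is available:

```python
import numpy as np

def A(t):
    return np.array([[0.0, 0.0], [t, 0.0]])

def Psi(t):
    return np.array([[0.0, 2.0], [1.0, t**2]])

def Psi_dot(t):                      # d/dt of Psi, computed by hand
    return np.array([[0.0, 0.0], [0.0, 2.0 * t]])

for t in [0.0, 0.5, 1.3, 2.7]:
    assert np.allclose(Psi_dot(t), A(t) @ Psi(t))   # Psi solves (2.2) column-wise
    assert np.linalg.matrix_rank(Psi(t)) == 2       # rank stays full, as claimed
```

Here $\det\Psi(t) = -2$ for all $t$, so the columns never become linearly dependent.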
Proposition 2.1.2. The null space of a fundamental matrix is the same for all $t \in [t_0, T]$ and is equal to $\{0\}$.

Corollary 2.1.3. Given a fundamental matrix $\Psi(t)$, its inverse $\Psi^{-1}(t)$ exists for all $t \in [t_0, T]$.
2.2 State transition matrix
Definition 2.2.1. The state transition matrix $\Phi(t, t_0)$ associated with the system (2.1) is the matrix-valued function of $t$ and $t_0$ which:

1. Solves the matrix differential equation $\dot{\Phi}(t, t_0) = A(t)\Phi(t, t_0)$, $t \in [t_0, T]$;
2. Satisfies $\Phi(t, t) = I_{n\times n}$ for any $t \in [t_0, T]$.

Proposition 2.2.1. Let $\Psi(t)$ be any fundamental matrix of (2.1). Then $\Phi(t, \tau) = \Psi(t)\Psi^{-1}(\tau)$, $\forall t, \tau \in [t_0, T]$.
Proof. We have $\Phi(t_0, t_0) = \Psi(t_0)\Psi^{-1}(t_0) = I$. Moreover,
\[
\dot{\Phi}(t, t_0) = \dot{\Psi}(t)\Psi^{-1}(t_0) = A(t)\Psi(t)\Psi^{-1}(t_0) = A(t)\Phi(t, t_0).
\]
Proposition 2.2.2. The solution of (2.1) is given by x(t) = Φ(t, t0)x0.
Proof. The initial state is $x(t_0) = \Phi(t_0, t_0)x_0 = x_0$. Next, we show that $x(t) = \Phi(t, t_0)x_0$ satisfies the differential equation:
\[
\dot{x}(t) = \dot{\Phi}(t, t_0)x_0 = A(t)\Phi(t, t_0)x_0 = A(t)x(t).
\]
Lemma 2.2.3. Properties of the state transition matrix:
1. $\Phi(t, t_1)\Phi(t_1, t_0) = \Phi(t, t_0)$ — semi-group property.
2. $\Phi^{-1}(t, t_0) = \left(\Psi(t)\Psi^{-1}(t_0)\right)^{-1} = \Psi(t_0)\Psi^{-1}(t) = \Phi(t_0, t)$.
3. $\dot{\Phi}(t_0, t) = -\Phi(t_0, t)A(t)$ (hint: differentiate $\Phi(t_0, t)\Phi(t, t_0) = I$).
4. If $\Phi(t, t_0)$ is the state transition matrix of $\dot{x}(t) = A(t)x(t)$, then $\Phi^T(t_0, t)$ is the state transition matrix of the system $\dot{z}(t) = -A^T(t)z(t)$ — the adjoint equation.
5. $\det(\Phi(t, t_0)) = e^{\int_{t_0}^{t} \operatorname{tr}(A(s))\,ds}$, where $\operatorname{tr}(A(t))$ denotes the trace of the matrix $A(t)$.
6. If $A(t)$ is a scalar, we have $\Phi(t, t_0) = e^{\int_{t_0}^{t} A(s)\,ds}$ (NB: this does not hold in general!).

Example 2.2.1. The state transition matrix corresponding to the fundamental matrix found in Example 2.1.2 has the following form:
\[
\Phi(t, \tau) = \begin{pmatrix} 1 & 0 \\ \dfrac{t^2 - \tau^2}{2} & 1 \end{pmatrix}.
\]

Exercise 2.2.2. Check that the obtained state transition matrix defines solutions to (2.2).
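The properties of Lemma 2.2.3 can be verified directly for the $\Phi(t, \tau)$ of Example 2.2.1. A quick numerical check (NumPy assumed); note that $\operatorname{tr} A(s) = 0$ here, so property 5 predicts $\det\Phi \equiv 1$:

```python
import numpy as np

def Phi(t, tau):
    """State transition matrix from Example 2.2.1."""
    return np.array([[1.0, 0.0], [(t**2 - tau**2) / 2.0, 1.0]])

t0, t1, t2 = 0.3, 1.1, 2.4
assert np.allclose(Phi(t2, t2), np.eye(2))                   # Phi(t, t) = I
assert np.allclose(Phi(t2, t1) @ Phi(t1, t0), Phi(t2, t0))   # semi-group property
assert np.allclose(np.linalg.inv(Phi(t2, t0)), Phi(t0, t2))  # inverse swaps arguments
assert np.isclose(np.linalg.det(Phi(t2, t0)), 1.0)           # tr A = 0  =>  det = e^0
```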
2.3 Time-invariant case
Consider the time-invariant differential equation:
\[
\dot{x}(t) = Ax(t), \qquad x(t_0) = x_0. \tag{2.3}
\]

In this case, $\Phi(t, t_0) = \Phi(t - t_0, 0) =: \Phi(t - t_0)$ and
\[
\dot{\Phi}(t - t_0) = A\Phi(t - t_0), \qquad \Phi(0) = I.
\]
We can set t0 = 0 and consider Φ(t).
Matrix exponential. If $A \in \mathbb{R}^{n\times n}$, the state transition matrix is (note that $0! = 1$):
\[
\Phi(t) = I + At + \frac{1}{2}A^2t^2 + \ldots = \sum_{i=0}^{\infty} A^i\frac{t^i}{i!} = e^{At},
\]
where the series converges uniformly and absolutely for any finite $t$. Henceforth, $e^{At}$ will be referred to as the matrix exponential.

Lemma 2.3.1. Properties of the matrix exponential:
1. $Ae^{At} = e^{At}A$, that is, $A$ commutes with its matrix exponential.
2. $\left(e^{At}\right)^{-1} = e^{-At}$.
3. If $P$ is a nonsingular $[n \times n]$ matrix, then $e^{P^{-1}AP} = P^{-1}e^{A}P$ (similarity transformation = change of basis).
4. If $A$ is a diagonal matrix, $A = \operatorname{diag}(a_1, \ldots, a_n)$, then $e^A = \operatorname{diag}(e^{a_1}, \ldots, e^{a_n})$.
5. If $A$ and $B$ commute, i.e., $AB = BA$, we have $e^{A+B} = e^{A}e^{B}$.

Example 2.3.1 (Harmonic motion). Consider the equation
\[
\begin{pmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \end{pmatrix} = \begin{pmatrix} 0 & \omega \\ -\omega & 0 \end{pmatrix} \begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix} = Ax(t).
\]
The matrix exponential is thus
\[
e^{At} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}\omega t - \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\frac{\omega^2 t^2}{2} - \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}\frac{\omega^3 t^3}{3!} + \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\frac{\omega^4 t^4}{4!} + \ldots
\]
Taking into account that $\sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \ldots$ and $\cos(x) = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \ldots$, we readily obtain
\[
e^{At} = \begin{pmatrix} \cos(\omega t) & \sin(\omega t) \\ -\sin(\omega t) & \cos(\omega t) \end{pmatrix},
\]
which is the rotation matrix that rotates the points of the Cartesian plane clockwise.

Exercise 2.3.2. Using the result of the preceding example and the properties of the matrix exponential, determine the matrix exponential $e^{At}$ for the matrix
\[
A = \begin{pmatrix} r & \varphi \\ -\varphi & r \end{pmatrix}.
\]
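A numerical cross-check for this exercise is possible via property 5 of Lemma 2.3.1: $A$ decomposes as $rI$ plus a harmonic-motion block with $\omega = \varphi$, and the two summands commute. The sketch below compares a truncated exponential series against the resulting closed form (NumPy assumed; the truncated series is an implementation shortcut, adequate for small $\|At\|$):

```python
import numpy as np

def expm_series(M, terms=40):
    """Truncated Taylor series for e^M (adequate for small ||M||)."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

r, phi, t = 0.4, 1.7, 0.9
A = np.array([[r, phi], [-phi, r]])

# A = r*I + S, where S is the harmonic-motion matrix with omega = phi; the
# summands commute, so e^{At} = e^{rt} * (rotation by phi*t), by Example 2.3.1.
expected = np.exp(r * t) * np.array([[np.cos(phi * t), np.sin(phi * t)],
                                     [-np.sin(phi * t), np.cos(phi * t)]])
assert np.allclose(expm_series(A * t), expected)
```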
Example 2.3.3 (Matrix exponential of a Jordan block). Let the $[m \times m]$ matrix $J$ be of the form
\[
J = \begin{pmatrix}
s & 1 & 0 & \cdots & 0 \\
0 & s & 1 & \cdots & 0 \\
 & & \ddots & \ddots & \\
0 & \cdots & 0 & s & 1 \\
0 & 0 & 0 & 0 & s
\end{pmatrix},
\]
where $s \in \mathbb{C}$. The matrix $J$ can be written as $J = sI + U$, where $U$ is the upper shift matrix. First, we observe that $I$ and $U$ commute (as the identity matrix commutes with any square matrix). Thus we can write
\[
e^{Jt} = e^{sIt}e^{Ut}.
\]
Next, note that $U$ is nilpotent, i.e., $U^m = 0$. (NB: every triangular matrix with zero main diagonal is nilpotent.) Finally, we have
\[
e^{Jt} = e^{st}\sum_{i=0}^{m-1} U^i\frac{t^i}{i!}.
\]
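The closed-form expression for $e^{Jt}$ can likewise be sanity-checked numerically for a small Jordan block (NumPy assumed; a truncated series again stands in for the exact matrix exponential):

```python
import math
import numpy as np

def expm_series(M, terms=60):
    """Truncated Taylor series for e^M (adequate for small ||M||)."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

m, s, t = 3, -0.5, 1.2
U = np.diag(np.ones(m - 1), k=1)   # upper shift matrix; U^m = 0
J = s * np.eye(m) + U              # Jordan block

# Closed form from the example: e^{Jt} = e^{st} * sum_{i<m} U^i t^i / i!
closed = np.exp(s * t) * sum(np.linalg.matrix_power(U, i) * t**i / math.factorial(i)
                             for i in range(m))
assert np.allclose(expm_series(J * t), closed)
```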
2.4 Controlled systems: variation of constants formula
Consider the LTV system
\[
\dot{x}(t) = A(t)x(t) + B(t)u(t), \qquad x(t_0) = x_0, \tag{2.4}
\]
whose homogeneous (uncontrolled) solution is $x(t) = \Phi(t, t_0)x_0$.
Theorem 2.4.1. If $\Phi(t, t_0)$ is the state transition matrix for $\dot{x}(t) = A(t)x(t)$, then the unique solution of (2.4) is given by
\[
x(t) = \Phi(t, t_0)x_0 + \int_{t_0}^{t} \Phi(t, s)B(s)u(s)\,ds. \tag{2.5}
\]
Proof. Define the new variable $z(t) = \Phi(t_0, t)x(t)$. Differentiating $z(t)$ w.r.t. $t$ we get
\[
\dot{z}(t) = \dot{\Phi}(t_0, t)x(t) + \Phi(t_0, t)\dot{x}(t) = -\Phi(t_0, t)A(t)x(t) + \Phi(t_0, t)A(t)x(t) + \Phi(t_0, t)B(t)u(t),
\]
where the first two terms cancel. The resulting expression does not contain $z(t)$ on the r.h.s., so we can integrate it to get the solution:
\[
z(t) = z(t_0) + \int_{t_0}^{t} \Phi(t_0, s)B(s)u(s)\,ds,
\]
whence follows
\[
x(t) = \Phi^{-1}(t_0, t)\left( x_0 + \int_{t_0}^{t} \Phi(t_0, s)B(s)u(s)\,ds \right) = \Phi(t, t_0)x_0 + \int_{t_0}^{t} \Phi(t, s)B(s)u(s)\,ds.
\]
Corollary 2.4.2. The solution of a linear time-invariant equation is given by
\[
x(t) = e^{At}x_0 + \int_{0}^{t} e^{A(t-s)}Bu(s)\,ds. \tag{2.6}
\]
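Formula (2.6) can be checked against brute-force numerical integration. The sketch below uses a constant input $u \equiv 1$, for which the convolution integral in (2.6) evaluates to $A^{-1}(e^{At} - I)B$ when $A$ is invertible (the system matrices are an arbitrary illustrative choice, not from the notes; NumPy assumed):

```python
import numpy as np

def expm_series(M, terms=60):
    """Truncated Taylor series for e^M (adequate for small ||M||)."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Illustrative stable LTI system with a constant (step) input u(s) = 1
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
x0 = np.array([[1.0], [0.0]])
t = 1.5

# (2.6) with u = 1: the convolution integral equals A^{-1} (e^{At} - I) B
eAt = expm_series(A * t)
x_formula = eAt @ x0 + np.linalg.inv(A) @ (eAt - np.eye(2)) @ B

# Cross-check: crude explicit-Euler integration of x' = A x + B
x, h = x0.copy(), 1e-4
for _ in range(int(round(t / h))):
    x = x + h * (A @ x + B)

assert np.allclose(x_formula, x, atol=1e-2)
```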
Example 2.4.1 (Exponential input). Consider a (complex-valued)¹ LTI system with a scalar exponential input signal $e^{\sigma t}$:
\[
\dot{x}(t) = Ax(t) + be^{\sigma t}, \qquad x(\cdot) \in \mathbb{C}^n, \quad x(0) = x_0, \tag{2.7}
\]
where $\sigma \in \mathbb{C}$. The solution of (2.7) is found using (2.6):
\[
x(t) = e^{At}x_0 + \int_{0}^{t} e^{A(t-\tau)}be^{\sigma\tau}\,d\tau,
\]
which can be evaluated using integration by parts to yield
\[
x(t) = e^{At}x_0 + (\sigma I - A)^{-1}\left(Ie^{\sigma t} - e^{At}\right)b. \tag{2.8}
\]
¹ We consider an LTI system in the complex domain as we wish to include harmonic input signals, e.g. $u(t) = e^{i\omega t}$. This condition can be dropped if we assume that $\sigma \in \mathbb{R}$.
Assume that the parameter $\sigma$ is equal to an eigenvalue of $A$. This is referred to as resonance. At first sight it seems that there is a singularity in the solution. To inspect this case more closely, we rewrite (2.8) as
\[
x(t) = e^{At}x_0 + e^{At}(\sigma I - A)^{-1}\left(e^{(\sigma I - A)t} - I\right)b
\]
and note that $Z^{-1}\left(e^{Zt} - I\right) = t\sum_{k=0}^{\infty} \frac{(Zt)^k}{(k+1)!}$, which converges everywhere. Hence we conclude that the solution $x(t)$ is well defined for all $\sigma \in \mathbb{C}$.

Observe that for $x_0 = (\sigma I - A)^{-1}b$ the solution (2.8) takes a particularly simple form:
\[
x(t) = (\sigma I - A)^{-1}be^{\sigma t},
\]
that is to say, for a properly chosen initial value the linear system transforms an exponential signal into another exponential signal. Remember this fact: we shall elaborate on it in Sec. 5.3.1.
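The final observation is easy to confirm numerically: for $x_0 = (\sigma I - A)^{-1}b$, the function $x(t) = x_0 e^{\sigma t}$ satisfies (2.7) identically, since $Ax_0 + b = \sigma x_0$. A small check with a real $\sigma$ and illustrative matrices (NumPy assumed):

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # eigenvalues -1 and -2
b = np.array([1.0, 1.0])
sigma = 0.5                               # real, and not an eigenvalue of A

# The special initial value x0 = (sigma*I - A)^{-1} b
x0 = np.linalg.solve(sigma * np.eye(2) - A, b)

# Claim: x(t) = x0 * e^{sigma t} solves x' = A x + b e^{sigma t}
for t in [0.0, 0.4, 1.1]:
    x = x0 * np.exp(sigma * t)
    x_dot = sigma * x
    assert np.allclose(x_dot, A @ x + b * np.exp(sigma * t))
```

The key identity is $Ax_0 + b = \left(A + (\sigma I - A)\right)(\sigma I - A)^{-1}b = \sigma x_0$.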
Chapter 3
Controllability and observability
3.1 Controllability of an LTV system
When approaching a control system, a first step consists in determining whether, and to what extent, the system can be controlled. This type of problem is referred to as the controllability problem. To make this more concrete we consider the following problem statement.
Two-point controllability. Consider the LTV system (2.4). Given the initial state $x_0$ at time $t_0$, find an admissible control $u$ such that the system reaches the final state $x_1$ at time $t_1$. Solving this problem amounts to determining an admissible control $\bar{u}(t) \in U$, $t \in [t_0, t_1]$ (typically non-unique), that solves the following equation:
\[
x_1 = \Phi(t_1, t_0)x_0 + \int_{t_0}^{t_1} \Phi(t_1, s)B(s)\bar{u}(s)\,ds. \tag{3.1}
\]
Obviously, the two-point controllability problem is stated in a rather limited way: it concerns only one pair of states. A general formulation is given below.
Definition 3.1.1. The system (2.4) defined over $[t_0, t_1]$ is said to be completely controllable (or just controllable) on $[t_0, t_1]$ if, given any two states $x_0$ and $x_1$, there exists an admissible control that transfers $(x_0, t_0)$ to $(x_1, t_1)$. Otherwise the system is said to be uncontrollable.
Remark. Note that a system can be completely controllable on some interval $[t_0, t_1]$ and uncontrollable on $[t_0', t_1'] \subset [t_0, t_1]$. However, it turns out that if a system is controllable on $[t_0, t_1]$, it will be controllable on any $[t_0'', t_1''] \supset [t_0, t_1]$.

Exercise 3.1.1. Prove that controllability on $[t_0, t_1]$ implies controllability on $[t_0'', t_1''] \supset [t_0, t_1]$.

An LTV system is characterized by its structural elements, i.e., by the matrices $A(t)$ and $B(t)$. In this sense we can speak about controllability of the pair $(A(t), B(t))$. Thus our goal will be to characterize the controllability property of (2.4) in terms of $(A(t), B(t))$.
To do so, we first transform (3.1) to a generic form. Denoting $\hat{x}_1 = x_1 - \Phi(t_1, t_0)x_0$, we rewrite (3.1) as
\[
\hat{x}_1 = \int_{t_0}^{t_1} \Phi(t_1, s)B(s)\bar{u}(s)\,ds, \tag{3.2}
\]
which amounts to determining an admissible input $\bar{u}(t)$ that transfers the zero state at $t_0$ to $\hat{x}_1$ at $t_1$. This problem is typically referred to as the reachability problem. One can easily see that for a linear system the controllability and reachability problems are equivalent.

Using (3.2) we can give the following characterization of two-point controllability.

Proposition 3.1.1. The pair $(x_0, x_1)$ is controllable iff $x_1 - \Phi(t_1, t_0)x_0$ belongs to the range of the linear map $L(u)$, where
\[
L(u) = \int_{t_0}^{t_1} \Phi(t_1, s)B(s)u(s)\,ds. \tag{3.3}
\]
The above condition is particularly difficult to check, as the map $L$ is defined on the infinite-dimensional space of admissible controls $U$. We would prefer to have a finite-dimensional criterion. Such a criterion will be formulated below, but first we present the following formal result.

Lemma 3.1.2. Let $G(t)$ be an $[n \times m]$ matrix whose elements are continuous functions of $t$, $t \in [t_0, t_1]$. A vector $x \in \mathbb{R}^n$ lies in the range space of $L(u) = \int_{t_0}^{t_1} G(s)u(s)\,ds$ if and only if it lies in the range space of the matrix
\[
W(t_0, t_1) = \int_{t_0}^{t_1} G(s)G^T(s)\,ds.
\]
Proof. (if) If $x \in R(W(t_0, t_1))$, then there exists $\eta$ s.t. $x = W(t_0, t_1)\eta$. Take $\bar{u}(s) = G^T(s)\eta$; then $L(\bar{u}) = W(t_0, t_1)\eta = x$, and so $x \in R(L)$.

(only if) Let $x_1 \notin R(W(t_0, t_1))$. Then there exists $x_2 \in R^{\perp}(W(t_0, t_1))$, i.e., $x_2^T W(t_0, t_1) = 0$, such that $x_2^T x_1 \neq 0$. Suppose, ad absurdum, that there exists a control $u_1$ s.t. $\int_{t_0}^{t_1} G(s)u_1(s)\,ds = x_1$. Then we have
\[
x_2^T \int_{t_0}^{t_1} G(s)u_1(s)\,ds = x_2^T x_1 \neq 0. \tag{3.4}
\]
But $x_2^T W(t_0, t_1) = 0$, and so
\[
x_2^T W(t_0, t_1)x_2 = \int_{t_0}^{t_1} x_2^T G(s)\,G^T(s)x_2\,ds = 0.
\]
Observe that $x_2^T W(t_0, t_1)x_2 = \int_{t_0}^{t_1} \|G^T(s)x_2\|^2\,ds = 0$ implies $G^T(t)x_2 \equiv 0$ for all $t \in [t_0, t_1]$, whence a contradiction with (3.4) follows.
Now we can use the results of Proposition 3.1.1 and Lemma 3.1.2 to formulate the following fundamental theorem on controllability.

Theorem 3.1.3. The pair $(x_0, x_1)$ is controllable if and only if $x_1 - \Phi(t_1, t_0)x_0$ belongs to the range space of
\[
W(t_0, t_1) = \int_{t_0}^{t_1} \Phi(t_1, s)B(s)B^T(s)\Phi^T(t_1, s)\,ds. \tag{3.5}
\]
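For an LTI pair $(A, B)$ we have $\Phi(t_1, s) = e^{A(t_1 - s)}$, and after the change of variable $\tau = t_1 - s$ the matrix $W(t_0, t_1)$ in (3.5) becomes $\int_0^T e^{A\tau}BB^T e^{A^T\tau}\,d\tau$ with $T = t_1 - t_0$. The sketch below approximates this Gramian by midpoint quadrature for a double integrator and confirms that it has full rank exactly when the input reaches every state direction (illustrative matrices; NumPy assumed):

```python
import numpy as np

def expm_series(M, terms=30):
    """Truncated Taylor series for e^M (exact here, since M is nilpotent)."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def gramian(A, B, T, steps=400):
    """Midpoint-rule approximation of int_0^T e^{As} B B^T e^{A^T s} ds."""
    ds = T / steps
    W = np.zeros((A.shape[0], A.shape[0]))
    for k in range(steps):
        E = expm_series(A * ((k + 0.5) * ds))
        W += E @ B @ B.T @ E.T * ds
    return W

A = np.array([[0.0, 1.0], [0.0, 0.0]])                 # double integrator
W_good = gramian(A, np.array([[0.0], [1.0]]), T=1.0)   # input drives x2: controllable
W_bad = gramian(A, np.array([[1.0], [0.0]]), T=1.0)    # input never affects x2

assert np.linalg.matrix_rank(W_good) == 2   # full rank: every state is reachable
assert np.linalg.matrix_rank(W_bad) == 1    # rank deficient: x2 cannot be steered
```

A full-rank Gramian means every vector $\hat{x}_1$ lies in its range, i.e. the system is completely controllable on the interval.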