
Preliminaries: State-space models System interconnections

Control Theory (035188) lecture no. 6

Leonid Mirkin

Faculty of Mechanical Engineering, Technion — IIT

Outline

Preliminaries: linear algebra

State-space models

System interconnections in terms of state-space realizations

Linear algebra, notation and terminology

A matrix $A\in\mathbb{R}^{n\times m}$ is an $n\times m$ table of real numbers,
$$ A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1m}\\ a_{21} & a_{22} & \cdots & a_{2m}\\ \vdots & \vdots & & \vdots\\ a_{n1} & a_{n2} & \cdots & a_{nm} \end{bmatrix} = [a_{ij}], $$
accompanied by their manipulation rules (addition, multiplication, etc.). The number of linearly independent rows (or columns) in $A$ is called its rank and denoted $\operatorname{rank}A$. A matrix $A\in\mathbb{R}^{n\times m}$ is said to
- have full row (column) rank if $\operatorname{rank}A = n$ ($\operatorname{rank}A = m$)
- be square if $n = m$, tall if $n > m$, and fat if $n < m$
- be diagonal ($A = \operatorname{diag}\{a_i\}$) if it is square and $a_{ij} = 0$ whenever $i \ne j$
- be lower (upper) triangular if $a_{ij} = 0$ whenever $i > j$ ($i < j$)
- be symmetric if $A = A'$, where the transpose $A' = [a_{ji}]$.

The identity matrix $I_n = \operatorname{diag}\{1\} \in \mathbb{R}^{n\times n}$ (if the dimension is clear, just $I$). It is a convention that $A^0 = I$ for every nonzero square $A$.

Block matrices

Let $A\in\mathbb{R}^{n\times m}$ and let $\sum_{i=1}^{\nu} n_i = n$ and $\sum_{j=1}^{\mu} m_j = m$ for some $n_i, m_j \in \mathbb{N}$. Then $A$ can be formally split as
$$ A = \begin{bmatrix} A_{11} & A_{12} & \cdots & A_{1\mu}\\ A_{21} & A_{22} & \cdots & A_{2\mu}\\ \vdots & \vdots & & \vdots\\ A_{\nu 1} & A_{\nu 2} & \cdots & A_{\nu\mu} \end{bmatrix} = [A_{ij}] $$

for $A_{ij} = [a_{ij;kl}] \in \mathbb{R}^{n_i\times m_j}$ with entries $a_{ij;kl} = a_{pq}$, where $p = \sum_{r=1}^{i-1} n_r + k$ and $q = \sum_{r=1}^{j-1} m_r + l$. For example, we may present
$$ A = \begin{bmatrix} 1 & 2 & 3 & 4\\ 5 & 6 & 7 & 8\\ 9 & 10 & 11 & 12 \end{bmatrix} = \begin{bmatrix} A_{11} & A_{12} & A_{13}\\ A_{21} & A_{22} & A_{23} \end{bmatrix} $$

for $A_{11} = \begin{bmatrix} 1 & 2\\ 5 & 6 \end{bmatrix}$, $A_{12} = \begin{bmatrix} 3\\ 7 \end{bmatrix}$, $A_{13} = \begin{bmatrix} 4\\ 8 \end{bmatrix}$, $A_{21} = \begin{bmatrix} 9 & 10 \end{bmatrix}$, $A_{22} = 11$, $A_{23} = 12$. This split is not unique and is normally done merely for convenience.

Block matrices (contd)

We say that a block matrix $A = [A_{ij}]$ is
- block-diagonal ($A = \operatorname{diag}\{A_i\}$) if $\nu = \mu$ and $A_{ij} = 0$ whenever $i \ne j$
- block lower (upper) triangular if $A_{ij} = 0$ whenever $i > j$ ($i < j$)

If $A$ is square, i.e. $n = m$, a natural split is with $n_i = m_i$ for all $i$. Then all $A_{ii}$ are square too.

Operations on block matrices are performed in terms of their blocks, like

$$ \begin{bmatrix} A_{11} & A_{12}\\ A_{21} & A_{22} \end{bmatrix} + \begin{bmatrix} B_{11} & B_{12}\\ B_{21} & B_{22} \end{bmatrix} = \begin{bmatrix} A_{11}+B_{11} & A_{12}+B_{12}\\ A_{21}+B_{21} & A_{22}+B_{22} \end{bmatrix}, $$
$$ \begin{bmatrix} A_{11} & A_{12}\\ A_{21} & A_{22} \end{bmatrix} \begin{bmatrix} B_{11} & B_{12}\\ B_{21} & B_{22} \end{bmatrix} = \begin{bmatrix} A_{11}B_{11}+A_{12}B_{21} & A_{11}B_{12}+A_{12}B_{22}\\ A_{21}B_{11}+A_{22}B_{21} & A_{21}B_{12}+A_{22}B_{22} \end{bmatrix}. $$
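As a minimal numerical sketch of these rules (assuming NumPy is available; the blocks below are arbitrary illustrative choices, not from the lecture), the blockwise product formula can be checked against the ordinary matrix product:

```python
import numpy as np

# Illustrative 2x2 blocks of two 4x4 matrices; any conformable blocks work.
rng = np.random.default_rng(0)
A11, A12, A21, A22 = (rng.standard_normal((2, 2)) for _ in range(4))
B11, B12, B21, B22 = (rng.standard_normal((2, 2)) for _ in range(4))

A = np.block([[A11, A12], [A21, A22]])
B = np.block([[B11, B12], [B21, B22]])

# The blockwise product formula reproduces the ordinary matrix product.
AB_blockwise = np.block([
    [A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
    [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22],
])
assert np.allclose(AB_blockwise, A @ B)
```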

Sum, product, inverse of block-diagonal (block-triangular) matrices remain block-diagonal (block-triangular).

Eigenvalues & eigenvectors

Given a square matrix $A\in\mathbb{R}^{n\times n}$, its eigenvalues are the solutions $\lambda\in\mathbb{C}$ to
$$ \chi_A(\lambda) = \det(\lambda I - A) = \lambda^n + \alpha_{n-1}\lambda^{n-1} + \cdots + \alpha_1\lambda + \alpha_0 = 0 $$
(the characteristic equation). There are $n$ (not necessarily different) eigenvalues of $A$ and $\operatorname{spec}(A)$ denotes their set. If $\lambda_i\in\operatorname{spec}(A)$ is such that $\operatorname{Im}\lambda_i \ne 0$, then $\bar\lambda_i\in\operatorname{spec}(A)$ too.

Right and left eigenvectors associated with $\lambda_i\in\operatorname{spec}(A)$ are nonzero vectors $\phi_i$ and $\psi_i$ such that
$$ (\lambda_i I - A)\phi_i = 0 \quad\text{and}\quad \psi_i'(\lambda_i I - A) = 0, $$
respectively. Note that

$$ A^k\phi_i = \phi_i\lambda_i^k \quad\text{and}\quad \psi_i'A^k = \lambda_i^k\psi_i', \quad \forall k\in\mathbb{N}, $$
so $\lambda_i\in\operatorname{spec}(A) \implies \lambda_i^k\in\operatorname{spec}(A^k)$, keeping left and right eigenvectors.

Eigenvalues & eigenvectors of block-triangular matrices

Let $A = \begin{bmatrix} A_{11} & A_{12}\\ 0 & A_{22} \end{bmatrix}$. Because

$$ \begin{bmatrix} \lambda_i I - A_{11} & -A_{12}\\ 0 & \lambda_i I - A_{22} \end{bmatrix} \begin{bmatrix} \phi_i\\ 0 \end{bmatrix} = \begin{bmatrix} (\lambda_i I - A_{11})\phi_i\\ 0 \end{bmatrix}, $$

every eigenvalue of $A_{11}$ is also an eigenvalue of $A$ (its right eigenvector is $\begin{bmatrix} \phi_i\\ 0 \end{bmatrix}$). Because
$$ \begin{bmatrix} 0' & \psi_i' \end{bmatrix} \begin{bmatrix} \lambda_i I - A_{11} & -A_{12}\\ 0 & \lambda_i I - A_{22} \end{bmatrix} = \begin{bmatrix} 0' & \psi_i'(\lambda_i I - A_{22}) \end{bmatrix}, $$
every eigenvalue of $A_{22}$ is also an eigenvalue of $A$ (its left eigenvector is $\begin{bmatrix} 0' & \psi_i' \end{bmatrix}$). Thus,
- $\lambda$ is an eigenvalue of $A$ iff it is an eigenvalue of $A_{11}$ or $A_{22}$.

The same arguments work for block-lower-triangular and block-diagonal matrices too.

Cayley–Hamilton

In essence: each square matrix satisfies its own characteristic equation:

$$ \chi_A(A) = A^n + \alpha_{n-1}A^{n-1} + \cdots + \alpha_1 A + \alpha_0 I_n = 0. $$
Important consequences:
- $A^k$ for all $k \ge n$ is a linear combination of $A^i$, $i = 0,\dots,n-1$, like
$$ A^n = -\alpha_{n-1}A^{n-1} - \cdots - \alpha_1 A - \alpha_0 I_n $$
$$ \begin{aligned} A^{n+1} &= -\alpha_{n-1}A^n - \cdots - \alpha_1 A^2 - \alpha_0 A\\ &= -\alpha_{n-1}(-\alpha_{n-1}A^{n-1} - \cdots - \alpha_1 A - \alpha_0 I_n) - \alpha_{n-2}A^{n-1} - \cdots - \alpha_1 A^2 - \alpha_0 A\\ &= (\alpha_{n-1}^2 - \alpha_{n-2})A^{n-1} + \cdots + (\alpha_{n-1}\alpha_1 - \alpha_0)A + \alpha_{n-1}\alpha_0 I_n\\ &\ \ \vdots \end{aligned} $$

- $A^{-1}$, if it exists, is also a linear combination of $A^i$, $i = 0,\dots,n-1$:
$$ A^{-1} = -\frac{1}{\alpha_0}\bigl(\alpha_1 I + \cdots + \alpha_{n-1}A^{n-2} + A^{n-1}\bigr). $$

Matrix functions

Let $f(x) = \sum_{i=0}^{\infty} f_i x^i$ be analytic. Its matrix version $f(A)$ is defined as
$$ f(A) = \sum_{i=0}^{\infty} f_i A^i. $$

It is readily seen that $f(A)\phi_i = \phi_i f(\lambda_i)$ for all eigenvalue–eigenvector pairs. By Cayley–Hamilton, we know that $\exists g_i$, $i = 0,\dots,n-1$, such that
$$ f(A) = \sum_{i=0}^{n-1} g_i A^i. $$

This does not imply that $f(x) = g(x) = \sum_{i=0}^{n-1} g_i x^i$ for all $x$, but still
$$ f(\lambda_i) = g(\lambda_i) \quad\text{and}\quad \frac{\mathrm{d}^j f(x)}{\mathrm{d}x^j}\bigg|_{x=\lambda_i} = \frac{\mathrm{d}^j g(x)}{\mathrm{d}x^j}\bigg|_{x=\lambda_i}, \quad \forall j = 1,\dots,\mu_i - 1, $$
for each eigenvalue $\lambda_i$ of $A$ having multiplicity $\mu_i$. These equations can be used to calculate all $n$ coefficients $g_i$ and, hence, $f(A)$.

Matrix functions (contd)

If all $n$ eigenvalues of $A$ are different,

 1  n−1 X j    i  f (i ) = gi  = g0 g1 gn−1  .  : i ···  .  j =0 n−1 i     Thus, g g gn− V = f ( ) f ( ) f (n) , where 0 1 ··· 1 1 2 ···  1 1 1  ···       1 2 n  V =  . . ··· .  ·  . . .  n−1 n−1 n−1 1 2 ··· n Q is the (with det V = (j i ) = 0). Hence, 1≤i

Matrix exponential

The matrix exponential is defined (here $t\in\mathbb{R}$; we shall need it later on) as
$$ \exp(At) = \mathrm{e}^{At} = I + At + \frac{1}{2!}(At)^2 + \frac{1}{3!}(At)^3 + \cdots $$

Example

Let $A = \begin{bmatrix} -1 & 1\\ 0 & 2 \end{bmatrix}$ with eigenvalues $\lambda_1 = -1$ and $\lambda_2 = 2$. Then
$$ \begin{bmatrix} g_0(t) & g_1(t) \end{bmatrix} = \begin{bmatrix} \mathrm{e}^{-t} & \mathrm{e}^{2t} \end{bmatrix} \begin{bmatrix} 1 & 1\\ -1 & 2 \end{bmatrix}^{-1} \iff \begin{cases} g_0(t) = \frac{2}{3}\mathrm{e}^{-t} + \frac{1}{3}\mathrm{e}^{2t}\\ g_1(t) = -\frac{1}{3}\mathrm{e}^{-t} + \frac{1}{3}\mathrm{e}^{2t} \end{cases} $$
and hence
$$ \exp\left(\begin{bmatrix} -1 & 1\\ 0 & 2 \end{bmatrix} t\right) = g_0(t)\begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix} + g_1(t)\begin{bmatrix} -1 & 1\\ 0 & 2 \end{bmatrix} = \begin{bmatrix} \mathrm{e}^{-t} & \frac{1}{3}(\mathrm{e}^{2t} - \mathrm{e}^{-t})\\ 0 & \mathrm{e}^{2t} \end{bmatrix}. $$

Note that the exponential of this triangular matrix is triangular too (this is generic).

Similar matrices

Let $A\in\mathbb{R}^{n\times n}$ and $T\in\mathbb{R}^{n\times n}$ with $\det T \ne 0$. The matrices $A$ and $TAT^{-1}$ are said to be similar, and $A \mapsto TAT^{-1}$ is called a similarity transformation. Similarity transformations do not affect characteristic equations:

$$ \chi_{TAT^{-1}}(\lambda) = \det(\lambda I - TAT^{-1}) = \det\bigl(T(\lambda I - A)T^{-1}\bigr) = \det(T)\det(\lambda I - A)\det(T^{-1}) = \det(\lambda I - A) = \chi_A(\lambda). $$
Some definitions / facts:
- a matrix $A\in\mathbb{R}^{n\times n}$ is said to be diagonalizable if there is a $T\in\mathbb{R}^{n\times n}$ such that $TAT^{-1} = \operatorname{diag}\{\lambda_i\}$;
- symmetric matrices are always diagonalizable;
- $f(TAT^{-1}) = Tf(A)T^{-1}$ for any analytic function $f(x)$
  - follows by the very fact that $(TAT^{-1})^k = TA^kT^{-1}$ for all $k\in\mathbb{N}$

Sign-definite matrices

A symmetric matrix $A = A' \in \mathbb{R}^{n\times n}$ is said to be
- positive definite ($A > 0$) if $x'Ax > 0$ for all $x \ne 0$,
- positive semidefinite ($A \ge 0$) if $x'Ax \ge 0$ for all $x$,
- negative definite ($A < 0$) if $x'Ax < 0$ for all $x \ne 0$ (or if $-A > 0$),
- negative semidefinite ($A \le 0$) if $x'Ax \le 0$ for all $x$ (or $-A \ge 0$).

Clearly, $A > 0 \implies \det A \ne 0$. In fact,
- $A > 0$ ($\ge 0$) iff all its eigenvalues are positive (nonnegative).

Given two matrices $A = A' \in \mathbb{R}^{n\times n}$ and $B = B' \in \mathbb{R}^{n\times n}$, we say
- $A > B$ if $A - B > 0$,
- $A \ge B$ if $A - B \ge 0$,
- $A < B$ if $B - A > 0$ (equivalently, $A - B < 0$),
- $A \le B$ if $B - A \ge 0$ (equivalently, $A - B \le 0$).

Sign-definite matrices (contd)

Example

Let $A = \begin{bmatrix} \beta & -1\\ -1 & 1 \end{bmatrix}$ for some $\beta\in\mathbb{R}$. Then
$$ x'Ax = \begin{bmatrix} x_1 & x_2 \end{bmatrix} \begin{bmatrix} \beta & -1\\ -1 & 1 \end{bmatrix} \begin{bmatrix} x_1\\ x_2 \end{bmatrix} = \beta x_1^2 - 2x_1x_2 + x_2^2 = (\beta - 1)x_1^2 + (x_1 - x_2)^2 $$
and if
- $\beta > 1$, then $x'Ax > 0$ unless $x_1 = x_2 = 0$, for which $x'Ax = 0$ $\implies A > 0$
- $\beta = 1$, then $x'Ax > 0$ unless $x_1 = x_2$, for which $x'Ax = 0$ $\implies A \ge 0$
- $\beta < 1$, then $x'Ax$ might be both positive (e.g. if $x_1 = 0 \ne x_2$) and negative (e.g. if $x_1 = x_2 \ne 0$) $\implies A \not\ge 0$

Sign-definite block matrices

Let
$$ A = \begin{bmatrix} A_{11} & A_{12}\\ A_{21} & A_{22} \end{bmatrix} \quad\text{for square } A_{11} \text{ and } A_{22}. $$
Then

$$ A > 0 \implies \begin{cases} A_{11} > 0, \text{ as } \begin{bmatrix} x_1' & 0' \end{bmatrix} A \begin{bmatrix} x_1\\ 0 \end{bmatrix} = x_1'A_{11}x_1 > 0 \text{ for all } x_1 \ne 0\\[4pt] A_{22} > 0, \text{ as } \begin{bmatrix} 0' & x_2' \end{bmatrix} A \begin{bmatrix} 0\\ x_2 \end{bmatrix} = x_2'A_{22}x_2 > 0 \text{ for all } x_2 \ne 0 \end{cases} $$
Then (this technique is known as completing the square)

$$ \begin{aligned} x'Ax &= x_1'A_{11}x_1 + x_1'A_{12}x_2 + x_2'A_{21}x_1 + x_2'A_{22}x_2\\ &= x_1'(A_{11} - A_{12}A_{22}^{-1}A_{21})x_1 + x_1'A_{12}A_{22}^{-1}A_{21}x_1 + 2x_2'A_{21}x_1 + x_2'A_{22}x_2\\ &= x_1'(A_{11} - A_{12}A_{22}^{-1}A_{21})x_1 + (x_1'A_{12}A_{22}^{-1} + x_2')A_{22}(A_{22}^{-1}A_{21}x_1 + x_2). \end{aligned} $$
Thus, $A > 0 \iff A_{22} > 0 \,\wedge\, A_{11} - A_{12}A_{22}^{-1}A_{21} > 0$. We can see, by similar arguments, that $A > 0 \iff A_{11} > 0 \,\wedge\, A_{22} - A_{21}A_{11}^{-1}A_{12} > 0$.
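This Schur-complement criterion can be sketched numerically (assuming NumPy is available; the symmetric blocks below are hypothetical illustrative choices):

```python
import numpy as np

def is_pos_def(M):
    # A symmetric matrix is positive definite iff all eigenvalues are positive.
    return bool(np.all(np.linalg.eigvalsh(M) > 0))

# Hypothetical symmetric block matrix with A21 = A12'.
A11 = np.array([[4.0, 1.0], [1.0, 3.0]])
A22 = np.array([[2.0]])
A12 = np.array([[1.0], [0.0]])

A = np.block([[A11, A12], [A12.T, A22]])

# Schur complement of A22 in A.
S = A11 - A12 @ np.linalg.inv(A22) @ A12.T

# A > 0  iff  A22 > 0 and A11 - A12 A22^{-1} A21 > 0
assert is_pos_def(A) == (is_pos_def(A22) and is_pos_def(S))
```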

Outline

Preliminaries: linear algebra

State-space models

System interconnections in terms of state-space realizations

Spring-mass-damper system 1

(figure: mass $m$ attached to a spring $k$ and damper $c$; the input $u$ is a force applied to the mass, the output $y$ is its position)

The dynamics,
$$ m\ddot y(t) + c\dot y(t) + ky(t) = u(t), $$
can be rewritten as
$$ \begin{cases} \dot x(t) = \begin{bmatrix} 0 & 1\\ -k/m & -c/m \end{bmatrix} x(t) + \begin{bmatrix} 0\\ 1/m \end{bmatrix} u(t)\\ y(t) = \begin{bmatrix} 1 & 0 \end{bmatrix} x(t) \end{cases} $$
where
$$ x(t) = \begin{bmatrix} y(t)\\ \dot y(t) \end{bmatrix} $$
is the state vector, having a clear physical meaning.

Spring-mass-damper system 2

(figure: the same mass-spring-damper system, but now the input $u$ is the position of the spring-damper support, the output $y$ is the position of the mass)

The dynamics,
$$ m\ddot y(t) + c\dot y(t) + ky(t) = c\dot u(t) + ku(t), $$
can be rewritten as
$$ \begin{cases} \dot x(t) = \begin{bmatrix} 0 & 1\\ -k/m & -c/m \end{bmatrix} x(t) + \begin{bmatrix} 0\\ 1 \end{bmatrix} u(t)\\ y(t) = \begin{bmatrix} k/m & c/m \end{bmatrix} x(t) \end{cases} $$
where
$$ x(t) = \begin{bmatrix} v(t)\\ \dot v(t) \end{bmatrix} \quad\text{for } \ddot v(t) = u(t) - y(t) $$
is the state vector, having no clear physical meaning.

State-space realizations

A state-space description (realization) of a linear SISO system $G: u \mapsto y$ is
$$ G: \begin{cases} \dot x(t) = Ax(t) + Bu(t), \quad x(0) = x_0\\ y(t) = Cx(t) + Du(t), \end{cases} $$
where
- $u(t)\in\mathbb{R}$ is an input signal,
- $y(t)\in\mathbb{R}$ is an output signal,
- $x(t)\in\mathbb{R}^n$ is a state vector (internal variable) with initial condition $x_0$.

State and realization are not unique. For example, let $\tilde x = Tx$, then
$$ G: \begin{cases} \dot{\tilde x}(t) = TAT^{-1}\tilde x(t) + TBu(t), \quad \tilde x(0) = Tx_0\\ y(t) = CT^{-1}\tilde x(t) + Du(t) \end{cases} $$
describes the very same mapping $u \mapsto y$ (similarity transformation).
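The invariance under similarity transformations can be sketched numerically (assuming NumPy and SciPy are available; the parameter values and the matrix $T$ are illustrative choices): both realizations must yield the same transfer function.

```python
import numpy as np
from scipy.signal import StateSpace

# Spring-mass-damper system 1 with illustrative values m=1, c=0.5, k=2.
m, c, k = 1.0, 0.5, 2.0
A = np.array([[0.0, 1.0], [-k / m, -c / m]])
B = np.array([[0.0], [1.0 / m]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# Any invertible T gives an equivalent realization (similarity transformation).
T = np.array([[1.0, 1.0], [0.0, 2.0]])
Ti = np.linalg.inv(T)
G1 = StateSpace(A, B, C, D).to_tf()
G2 = StateSpace(T @ A @ Ti, T @ B, C @ Ti, D).to_tf()

# Both realizations describe the same mapping u -> y: equal transfer functions.
assert np.allclose(np.ravel(G1.num), np.ravel(G2.num))
assert np.allclose(np.ravel(G1.den), np.ravel(G2.den))
```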
Solution of the state equation

The evolution of the state vector $x(t)$ at $t > 0$ is
$$ x(t) = \mathrm{e}^{At}x_0 + \int_0^t \mathrm{e}^{A(t-s)}Bu(s)\,\mathrm{d}s. $$
Substituting this expression into the output equation, we end up with
$$ y(t) = C\mathrm{e}^{At}x_0 + Du(t) + C\int_0^t \mathrm{e}^{A(t-s)}Bu(s)\,\mathrm{d}s. $$
This solution prompts the following interpretation of the state vector:
- the state vector $x(t)$ is a history accumulator: there is no need to know the input history to calculate future outputs, the knowledge of the current state and future inputs is sufficient.

Indeed, given any $t_0$, then for all $t > t_0$
$$ y(t) = \underbrace{C\mathrm{e}^{A(t-t_0)}x(t_0)}_{\text{history (up to } t_0)} + \underbrace{Du(t) + C\int_{t_0}^t \mathrm{e}^{A(t-s)}Bu(s)\,\mathrm{d}s}_{\text{future (from } t_0 \text{ and up to } t)}. $$

Impulse response via state-space realizations

The response to $u(t) = \delta(t)$ and zero initial conditions $x_0 = 0$ is
$$ y(t) = D\delta(t) + C\int_0^t \mathrm{e}^{A(t-s)}B\delta(s)\,\mathrm{d}s = D\delta(t) + C\mathrm{e}^{At}B. $$

If $D = 0$, this response is reminiscent of the response to an initial condition. Namely,

$$ y(t) = C\mathrm{e}^{At}B = C\mathrm{e}^{At}x_0 \quad\text{whenever } x_0 = B. $$

This implies that
- the effect of initial conditions can also be expressed as the effect of an external signal (the Dirac delta in this case).

From state space to transfer functions

The transfer function is the Laplace transform of the impulse response (with zero initial conditions). Hence,

$$ G(s) = \mathcal{L}\{D\delta(t) + C\mathrm{e}^{At}B\} = D + C(sI - A)^{-1}B. $$
This can also be seen via rewriting the state model in the $s$-domain:
$$ \begin{cases} sX(s) = AX(s) + BU(s)\\ Y(s) = CX(s) + DU(s) \end{cases} \implies \begin{cases} X(s) = (sI - A)^{-1}BU(s)\\ Y(s) = CX(s) + DU(s) \end{cases} $$
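SciPy's `ss2tf` performs exactly this conversion. A minimal sketch (assuming NumPy and SciPy are available; the realization below is a hypothetical example, not from the lecture):

```python
import numpy as np
from scipy.signal import ss2tf

# Hypothetical realization with D = 0.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

num, den = ss2tf(A, B, C, D)

# den is det(sI - A) = s^2 + 3s + 2, so the poles sit in spec(A) = {-1, -2}.
assert np.allclose(den, [1.0, 3.0, 2.0])
assert np.allclose(np.sort(np.roots(den)), np.sort(np.linalg.eigvals(A)))
```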

Because
$$ D + C(sI - A)^{-1}B = D + \frac{C\operatorname{adj}(sI - A)B}{\det(sI - A)}, $$
where $\operatorname{adj}(sI - A)$ is the adjugate matrix, having polynomial entries,
- poles of $D + C(sI - A)^{-1}B$ are the eigenvalues of $A$, i.e. in $\operatorname{spec}(A)$

(it's a bit more complicated, we'll study that later on in the course).


Properness

Because

$$ \lim_{|s|\to\infty} C(sI - A)^{-1}B = \lim_{|s|\to\infty} s^{-1}C(I - s^{-1}A)^{-1}B = \lim_{|z|\to 0} zC(I - zA)^{-1}B = 0, $$
we have that
$$ \lim_{|s|\to\infty}\bigl(D + C(sI - A)^{-1}B\bigr) = D. $$
Compare
$$ \lim_{|s|\to\infty} \frac{b_ns^n + b_{n-1}s^{n-1} + \cdots + b_1s + b_0}{s^n + a_{n-1}s^{n-1} + \cdots + a_1s + a_0} = b_n. $$
Thus, the transfer function $D + C(sI - A)^{-1}B$ is
- strictly proper iff $D = 0$ (or bi-proper iff $D \ne 0$).

The parameter $D$ is called the feed-through term of the realization.

Pole excess and high-frequency gain

Let
$$ G(s) = \frac{b_ms^m + b_{m-1}s^{m-1} + \cdots + b_1s + b_0}{s^n + a_{n-1}s^{n-1} + \cdots + a_1s + a_0} $$
be strictly proper (i.e. $n > m$). Its pole excess, $n - m$, can be interpreted as
- the minimum $k\in\mathbb{N}$ such that $s^kG(s)$ is bi-proper.

Now, because
$$ s(sI - A)^{-1} = I + A(sI - A)^{-1}, $$
we have that

$$ \begin{aligned} sC(sI - A)^{-1}B &= CB + CA(sI - A)^{-1}B\\ s^2C(sI - A)^{-1}B &= sCB + CAB + CA^2(sI - A)^{-1}B\\ &\ \ \vdots\\ s^kC(sI - A)^{-1}B &= \sum_{i=1}^{k-1} s^iCA^{k-1-i}B + CA^{k-1}B + CA^k(sI - A)^{-1}B \end{aligned} $$

Pole excess and high-frequency gain (contd)

Thus, given $G(s) = D + C(sI - A)^{-1}B$, its
- pole excess is the minimum $k\in\mathbb{N}$ such that $CA^{k-1}B \ne 0$.

The quantities $CA^{k-1}B$, $k\in\mathbb{N}$, are known as Markov parameters of $G(s)$.
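As a minimal numerical sketch (assuming NumPy is available; the third-order system below is a hypothetical example in companion form), the Markov parameters reveal the pole excess:

```python
import numpy as np

# Hypothetical strictly proper G(s) = (2s + 5)/(s^3 + 4s^2 + 6s + 4),
# so n = 3, m = 1, realized here in companion form.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-4.0, -6.0, -4.0]])
B = np.array([[0.0], [0.0], [1.0]])
C = np.array([[5.0, 2.0, 0.0]])

# Markov parameters C A^{k-1} B for k = 1, 2, 3.
markov = [(C @ np.linalg.matrix_power(A, k - 1) @ B).item() for k in (1, 2, 3)]

assert markov[0] == 0.0   # CB = 0
assert markov[1] == 2.0   # CAB != 0: first nonzero at k = 2, the pole excess n - m
```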

The high-frequency gain of G(s),

$$ b_m = \lim_{|s|\to\infty} s^{n-m}G(s). $$

Thus, the high-frequency gain of $G(s) = C(sI - A)^{-1}B$ is
$$ b_m = CA^{n-m-1}B, $$
where $n$ is the dimension of $A$ and $n - m$ is the pole excess of $G(s)$.


From transfer functions to state space

Let
$$ G(s) = \frac{b_{n-1}s^{n-1} + \cdots + b_1s + b_0}{s^n + a_{n-1}s^{n-1} + \cdots + a_1s + a_0}. $$
Its state-space realization in
- companion form is
$$ \begin{bmatrix} A & B\\ \hline C & D \end{bmatrix} = \left[\begin{array}{cccc|c} 0 & 1 & \cdots & 0 & 0\\ \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & \cdots & 1 & 0\\ -a_0 & -a_1 & \cdots & -a_{n-1} & 1\\ \hline b_0 & b_1 & \cdots & b_{n-1} & 0 \end{array}\right], $$
- observer form is
$$ \begin{bmatrix} A & B\\ \hline C & D \end{bmatrix} = \left[\begin{array}{cccc|c} -a_{n-1} & 1 & \cdots & 0 & b_{n-1}\\ \vdots & \vdots & \ddots & \vdots & \vdots\\ -a_1 & 0 & \cdots & 1 & b_1\\ -a_0 & 0 & \cdots & 0 & b_0\\ \hline 1 & 0 & \cdots & 0 & 0 \end{array}\right]. $$

Outline

Preliminaries: linear algebra

State-space models

System interconnections in terms of state-space realizations

Problem

We know how to carry out parallel, cascade, and feedback interconnections in terms of transfer functions $G_1(s)$ and $G_2(s)$. Indeed:
- parallel: $G_1(s) + G_2(s)$
- series: $G_2(s)G_1(s)$
- feedback: $\dfrac{G_1(s)}{1 \mp G_1(s)G_2(s)}$

The question is: how can this be done in state space?

We assume that
$$ G_1: \begin{cases} \dot x_1(t) = A_1x_1(t) + B_1u_1(t)\\ y_1(t) = C_1x_1(t) + D_1u_1(t) \end{cases} \quad\text{and}\quad G_2: \begin{cases} \dot x_2(t) = A_2x_2(t) + B_2u_2(t)\\ y_2(t) = C_2x_2(t) + D_2u_2(t) \end{cases} $$

A possible way of interconnecting systems is to
- determine what inputs / outputs to equate
- unite state vectors

Parallel interconnection

(block diagram: $u$ drives $G_1$ and $G_2$ in parallel; their outputs are summed into $y$)

This corresponds to
- $u_1 = u_2 = u$
- $y = y_1 + y_2$

Then
$$ G: \begin{cases} \begin{bmatrix} \dot x_1(t)\\ \dot x_2(t) \end{bmatrix} = \begin{bmatrix} A_1 & 0\\ 0 & A_2 \end{bmatrix} \begin{bmatrix} x_1(t)\\ x_2(t) \end{bmatrix} + \begin{bmatrix} B_1\\ B_2 \end{bmatrix} u(t)\\[4pt] y(t) = \begin{bmatrix} C_1 & C_2 \end{bmatrix} \begin{bmatrix} x_1(t)\\ x_2(t) \end{bmatrix} + (D_1 + D_2)u(t) \end{cases} $$
We already know (spectrum of block-diagonal matrices) that
- poles of $G(s)$ are in $\operatorname{spec}(A_1) \cup \operatorname{spec}(A_2)$

Series / cascade interconnection

(block diagram: $u$ enters $G_1$; the output $y_1$ of $G_1$ drives $G_2$; $y$ is the output of $G_2$)

This corresponds to
- $u_1 = u$
- $u_2 = y_1$
- $y = y_2$

Then
$$ G: \begin{cases} \begin{bmatrix} \dot x_1(t)\\ \dot x_2(t) \end{bmatrix} = \begin{bmatrix} A_1 & 0\\ B_2C_1 & A_2 \end{bmatrix} \begin{bmatrix} x_1(t)\\ x_2(t) \end{bmatrix} + \begin{bmatrix} B_1\\ B_2D_1 \end{bmatrix} u(t)\\[4pt] y(t) = \begin{bmatrix} D_2C_1 & C_2 \end{bmatrix} \begin{bmatrix} x_1(t)\\ x_2(t) \end{bmatrix} + D_2D_1u(t) \end{cases} $$
We already know (spectrum of block-triangular matrices) that
- poles of $G(s)$ are in $\operatorname{spec}(A_1) \cup \operatorname{spec}(A_2)$

Feedback interconnection

(block diagram: $u$ and $\pm y_2$ are summed into $u_1$, which drives $G_1$; the output $y = y_1$ also drives $G_2$ in the feedback path)

This corresponds to
- $u_1 = u \pm y_2$
- $u_2 = y_1$
- $y = y_1$

Then, assuming that $D_1 = 0$ for simplicity (enough for our purposes),
$$ G: \begin{cases} \begin{bmatrix} \dot x_1(t)\\ \dot x_2(t) \end{bmatrix} = \begin{bmatrix} A_1 \pm B_1D_2C_1 & \pm B_1C_2\\ B_2C_1 & A_2 \end{bmatrix} \begin{bmatrix} x_1(t)\\ x_2(t) \end{bmatrix} + \begin{bmatrix} B_1\\ 0 \end{bmatrix} u(t)\\[4pt] y(t) = \begin{bmatrix} C_1 & 0 \end{bmatrix} \begin{bmatrix} x_1(t)\\ x_2(t) \end{bmatrix} \end{cases} $$

Inversion

Let
$$ G: \begin{cases} \dot x(t) = Ax(t) + Bu(t)\\ y(t) = Cx(t) + Du(t) \end{cases} $$
be bi-proper (i.e. $D \ne 0$). Its inverse is the system mapping $y \mapsto u$. Then
$$ u(t) = -D^{-1}Cx(t) + D^{-1}y(t) $$
and
$$ \dot x(t) = Ax(t) + B\bigl(-D^{-1}Cx(t) + D^{-1}y(t)\bigr). $$
Therefore,

$$ G^{-1}: \begin{cases} \dot x(t) = (A - BD^{-1}C)x(t) + BD^{-1}y(t)\\ u(t) = -D^{-1}Cx(t) + D^{-1}y(t) \end{cases} $$
and
- zeros of $G(s)$ ($=$ poles of $G^{-1}(s)$) are in $\operatorname{spec}(A - BD^{-1}C)$.
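The inversion formulas can be sketched numerically (assuming NumPy and SciPy are available; the bi-proper first-order system below is a hypothetical example, not from the lecture):

```python
import numpy as np
from scipy.signal import StateSpace

# A hypothetical bi-proper system G(s) = (s + 2)/(s + 5) in state space:
# D + C(sI - A)^{-1}B = 1 - 3/(s + 5) = (s + 2)/(s + 5).
A = np.array([[-5.0]])
B = np.array([[1.0]])
C = np.array([[-3.0]])
D = np.array([[1.0]])

Di = np.linalg.inv(D)
Ainv = A - B @ Di @ C   # A - B D^{-1} C
Binv = B @ Di
Cinv = -Di @ C
Dinv = Di

# G^{-1}(s) should be (s + 5)/(s + 2); its pole is the zero of G(s) at s = -2.
Ginv = StateSpace(Ainv, Binv, Cinv, Dinv).to_tf()
assert np.allclose(np.ravel(Ginv.num), [1.0, 5.0])
assert np.allclose(np.ravel(Ginv.den), [1.0, 2.0])
assert np.allclose(np.linalg.eigvals(Ainv), [-2.0])
```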