Matrix Algebra and Control

Appendix A  Matrix Algebra and Control

Boldface lower-case letters, e.g., a or b, denote vectors; boldface capital letters, e.g., A, M, denote matrices. A vector is a column matrix; containing m elements (entries), it is referred to as an m-vector. If the number of rows and columns of a matrix A is n and m, respectively, then A is an (n, m)-matrix or n × m-matrix (dimension n × m). The matrix A is called positive or non-negative if A > 0 or A ≥ 0, respectively, i.e., if all its elements are real and positive or non-negative, respectively.

A.1 Matrix Multiplication

Two matrices A and B may only be multiplied, C = AB, if they are conformable: A has size n × m, B has size m × r, and C has size n × r. Two matrices are conformable for multiplication if the number m of columns of the first matrix A equals the number m of rows of the second matrix B. (Kronecker matrix products do not require conformable multiplicands.) The elements (entries) of the matrices are related as follows:

    C_ij = Σ_{ν=1}^{m} A_iν B_νj    ∀ i = 1…n, j = 1…r .

The jth column vector C_·j of the matrix C, as denoted in Eq.(A.13), can be calculated from the columns A_·ν and the entries B_νj; the jth row C_j· from the rows B_ν· and the entries A_jν:

    column:  C_·j = Σ_{ν=1}^{m} A_·ν B_νj ,    row:  C_j· = ((C^T)_·j)^T = Σ_{ν=1}^{m} A_jν B_ν· .   (A.1)

A matrix product may be zero although neither the multiplicand A nor the multiplicator B is zero, e.g.,

    AB = [ c  c ; c  c ] [ b  b ; −b  −b ] = 0 .

Without A or B being the null matrix, the product AB only vanishes if both A and B are singular. The matrix B = A^⊥ is the (right) annihilator of A, i.e., A A^⊥ = 0.

A.2 Properties of Matrix Operations

Distributivity: A(B + C) = AB + AC. Associativity of addition, (A + B) + C = A + (B + C), and of multiplication, (AB)C = A(BC).
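The zero-product example above can be checked numerically. A minimal sketch using numpy; the scalar values c and b are arbitrary nonzero choices for illustration:

```python
import numpy as np

# Two nonzero 2x2 matrices whose product is the null matrix.
# As the text states, this requires both factors to be singular.
c, b = 3.0, 2.0
A = np.array([[c, c],
              [c, c]])
B = np.array([[b,  b],
              [-b, -b]])

C = A @ B        # conformable: (2x2)(2x2) -> 2x2
print(C)         # the null matrix, although A != 0 and B != 0

# Both factors are singular: determinant zero, rank deficient.
print(np.linalg.det(A), np.linalg.det(B))
```

Note that the converse does not hold: two singular matrices need not multiply to zero; singularity of both factors is only a necessary condition.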
Commutativity of addition, but non-commutativity of multiplication and of raising products to higher powers:

    A + B = B + A ,    AB ≠ BA .   (A.2)

Exceptions: Consider, first, a multivariable control with transfer matrix G(s) in the forward path and unity feedback and, second, H in the forward path and F in the feedback, where G = FH. The overall transfer matrix is given by

    (I + FH)^{-1} FH = FH (I + FH)^{-1} .   (A.3)

The inverse of the return-difference matrix and G commute unexpectedly. Another exceptional case is the product A exp(At), i.e., the coefficient matrix and the state transition matrix. Finally, suppose A and B nonsingular. Then, A and B commute if their product is the identity matrix: both AB = I and BA = I yield A = B^{-1}. Generally, the matrices A and B commute with respect to multiplication if B is a function of A, e.g., as given by a matrix polynomial or by the decomposition in Eq.(A.45).

Note that F(I + HF)^{-1} = (I + FH)^{-1} F and F(I + HF)^{-1} H = (I + FH)^{-1} FH; particularly observe the change of order within the parentheses. □

Properties when transposing or inverting a matrix product:

    (AB)^T = B^T A^T ,    (AB)^{-1} = B^{-1} A^{-1} .   (A.4)

Inverse and transpose operations (symbols) may be permuted: (A^T)^{-1} = (A^{-1})^T. If A^{-1} = A^T holds, then A is referred to as an orthogonal matrix.

An idempotent matrix A has the property A² = A. Such matrices can be observed, e.g., in least squares, estimation and sliding-mode theory:

    A = B(CB)^{-1}C   or   A = I − B(CB)^{-1}C .   (A.5)

A matrix A is nilpotent if A^k = 0 for some k. Such a matrix appears in the state-space representation of a k-fold integrator.

A.3 Diagonal Matrices

A diagonal matrix A is a square matrix with non-zero entries A_ii in the main diagonal only, e.g.,

    A = diag(A_11, A_22, …, A_nn) .   (A.6)

If these entries A_ii are equal to each other, A is a scalar matrix. The identity matrix I_n is a scalar matrix with elements 1 and dimension n × n: I_n = diag(1, 1, …, 1), I_n ∈ R^{n×n}.
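The commutation identities of Section A.2, Eq.(A.3) and the push-through relations, can be verified numerically. A sketch with random matrices; the scaling factor 0.1 is an assumption made only to keep the spectral radius of FH below one so that I + FH is safely invertible:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Small-norm random matrices so that I + FH is guaranteed invertible.
F = 0.1 * rng.standard_normal((n, n))
H = 0.1 * rng.standard_normal((n, n))
I = np.eye(n)

# Eq. (A.3): the inverse of the return-difference matrix I + FH
# commutes with G = FH.
lhs = np.linalg.inv(I + F @ H) @ (F @ H)
rhs = (F @ H) @ np.linalg.inv(I + F @ H)
print(np.allclose(lhs, rhs))

# Push-through identities: note the changed order inside the parentheses.
pt1 = np.allclose(F @ np.linalg.inv(I + H @ F),
                  np.linalg.inv(I + F @ H) @ F)
pt2 = np.allclose(F @ np.linalg.inv(I + H @ F) @ H,
                  np.linalg.inv(I + F @ H) @ F @ H)
print(pt1, pt2)
```

The first identity follows because FH commutes with I + FH, hence with its inverse; the push-through forms follow from (I + FH)F = F(I + HF).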
Given a rectangular (n, r)-matrix B, premultiplying B by the identity matrix I_n or postmultiplying B by I_r yields I_n B = B or B I_r = B, respectively. Premultiplying [postmultiplying] a matrix A by a diagonal matrix D = diag(d_i) yields

    DA = matrix[d_i A_ij]    [ AD = matrix[A_ij d_j] ] ,   (A.7)

i.e., a new matrix the rows [columns] of which are successively multiplied (scaled) by d_i (i.e., the ith row [column] by d_i).

A.4 Triangular Matrices

A lower triangular matrix is a square matrix having all elements zero above the main diagonal; an upper triangular matrix only contains zero elements below the main diagonal. The product of two triangular matrices of the same type is again a triangular matrix of that type. If A is given as a diagonal matrix or an upper or lower triangular matrix, the eigenvalues λ[A] are already given by the entries A_ii in the main diagonal, ∀ i = 1…n.

A.5 Column Matrices (Vectors) and Row Matrices

The unit m-vector with kth component 1 is termed e_k:

    e_k = (0, 0, … 1, … 0, 0)^T = e_k^{(m×1)} = (I_m)_·k .

Defining this m-vector e_k and the n-vector e_i, the elementary matrix or Kronecker matrix E_ik is given by the dyadic product

    E_ik = e_i e_k^T ∈ R^{n×m}   (A.8)

as an (n, m)-matrix with entry 1 in the i,k-element only and zero elsewhere. Thus, the (n, m)-matrix A can be established element by element: A = Σ_{i=1}^{n} Σ_{k=1}^{m} A_ik E_ik, where A ∈ R^{n×m} or dim A = n × m. The identity matrix can be obtained as the sum I_n = Σ_{i=1}^{n} e_i e_i^T = Σ_{i=1}^{n} E_ii. The sum vector with all elements unity, 1 = (1, 1, …, 1)^T, serves as a summation operator for an m-vector: 1^T a = Σ_{i=1}^{m} a_i.

The inner product of two vectors a and b is a scalar a^T b = b^T a. Orthogonal vectors have zero inner product. Assume a vector output signal y given by a linear combination of a vector input signal x, governed by the transfer equation

    y = Cx   where y ∈ R^r, x ∈ R^n, C ∈ R^{r×n} .   (A.9)

The entry C_ij of C is considered as an operational factor from the input component x_j to the output component y_i.
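The diagonal scaling rule of Eq.(A.7) is easy to confirm numerically. A minimal sketch; the matrix entries and scale factors are arbitrary illustrative values:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.]])          # a (2, 3)-matrix
D_rows = np.diag([10., 100.])        # d_i for the two rows
D_cols = np.diag([1., 10., 100.])    # d_j for the three columns

# Premultiplying scales the rows of A by d_i ...
print(D_rows @ A)   # [[ 10,  20,  30], [400, 500, 600]]
# ... postmultiplying scales the columns of A by d_j.
print(A @ D_cols)   # [[  1,  20, 300], [  4,  50, 600]]
```

The same rule explains why two diagonal matrices of equal size always commute: each merely scales the other's diagonal entries elementwise.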
Note that the output subscript i (effect) is written first and j (source) second. A partitioned vector u is denoted by the "vec" symbol:

    u = (u_1^T, u_2^T, … u_N^T)^T = vec(u_1, u_2, … u_N) = vec u_i ,   u_i ∈ R^{m_i} ,  u ∈ R^m ,  m = Σ_i m_i .   (A.10)

The norm or length of a is the distance to the null vector (origin) and is defined by the Frobenius norm ||a||_F = √(a^T a). The magnitude of the inner product never exceeds the product of the norms of the multiplicators (Schwarz inequality): |a^T b| ≤ ||a||_F ||b||_F. Triangle inequality (for any kind of norm): ||a + b|| ≤ ||a|| + ||b||. The angle θ between two vectors u and v is defined by cos θ = u^T v / (||u||_F ||v||_F).

Mapping a matrix A ∈ R^{m×n} (A.11) to a vector a is provided by the operator "col" (or by the operator "vec"):

    col A = vec A = a = (A_11 … A_m1 : A_12 … A_m2 : … A_mn)^T .   (A.12)

The operator col lists the entries of A column-wise. Separating the ith column of A will be termed by (A)_·i. For abbreviation, A_·i is also used although, usually, only matrices are denoted by upper-case boldface letters, irrespective of the subscript; A_·i is defined as a column matrix (vector). The operator col (column string) can also be written as

    col A = (A_·1^T : A_·2^T : … A_·n^T)^T .   (A.13)

The inner product of two real matrices A and B of equal dimension m × n is a scalar and coincides with the trace of the matrix product:

    ⟨A, B⟩ = (col A)^T col B = tr A^T B = Σ_{i=1}^{m} Σ_{j=1}^{n} A_ij B_ij ≤ ||A||_F ||B||_F .

The Frobenius or Euclidean norm of a real matrix A is given by

    ||A||_F = √(tr A^T A) = √( Σ_{i=1}^{m} Σ_{j=1}^{n} A_ij² ) .   (A.14)

A.6 Reduced Matrix, Minor, Cofactor, Adjoint

Given the (n, n)-matrix A, the reduced matrix A^red_ik of the size (n − 1) × (n − 1) is obtained by cancelling row i and column k. Repeating for all i = 1…n, k = 1…n yields n² different matrices. The minor of the i,k-component of A is defined as the determinant det A^red_ik. The cofactor of the i,k-element of A is obtained by permuting the sign of the minor, precisely by multiplying the minor by (−1)^{i+k}, that is, cof_ik A = (−1)^{i+k} det A^red_ik.
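The col operator and the trace identity ⟨A, B⟩ = tr A^T B can be checked directly. A sketch using numpy, where `flatten(order='F')` reproduces the column-wise listing of col; the matrix sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 2))
B = rng.standard_normal((3, 2))

# col lists the entries column-wise; numpy's Fortran order does the same.
colA = A.flatten(order='F')
colB = B.flatten(order='F')

inner = colA @ colB                       # (col A)^T col B
trace_form = np.trace(A.T @ B)            # tr A^T B
print(np.isclose(inner, trace_form))

# The inner product is bounded by the product of Frobenius norms (A.14).
bound = np.linalg.norm(A, 'fro') * np.linalg.norm(B, 'fro')
print(abs(inner) <= bound + 1e-12)
```

The bound is the matrix form of the Schwarz inequality stated above for vectors.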
Given the (n, n)-matrix A on the elements A_ik, then A = (A_ik) = matrix[A_ik], and the adjoint is given as the transposed matrix of cofactors: adj A = [matrix(cof_ik A)]^T. The determinant can be decomposed with respect to the ith row or with respect to the kth column, that is,

    det A = Σ_{k=1}^{n} A_ik cof_ik A  ∀ i = 1…n    or    det A = Σ_{i=1}^{n} A_ik cof_ik A  ∀ k = 1…n .   (A.15)

Interpretation of the equations above as a matrix multiplication yields

    (det A) I = A adj A = (adj A) A   ⇒   A^{-1} = adj A / det A .   (A.16)

A.7 Similar Matrices

Matrices A and Ā are similar, i.e., A ~ Ā, if Ā = T A T^{-1}. The preceding equation is named a similarity transformation. Similar matrices A and Ā are characterized by the property det A = det Ā, by the same eigenvalues, eigenvalue multiplicities and eigenvalue indices (and by the same number of generalized eigenvectors), see Eq.(B.107). Examples of similar matrices are A and Ā = diag λ_i[A] = T^{-1} A T in Eq.(B.13).
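The cofactor, adjoint and inverse relations of Eqs.(A.15)–(A.16) can be sketched in a few lines; the helper function `adjoint` and the sample matrix below are illustrative assumptions, not part of the text:

```python
import numpy as np

def adjoint(A):
    """Adjoint (adjugate): the transposed matrix of cofactors."""
    n = A.shape[0]
    cof = np.empty((n, n))
    for i in range(n):
        for k in range(n):
            # Reduced matrix A^red_ik: cancel row i and column k.
            red = np.delete(np.delete(A, i, axis=0), k, axis=1)
            cof[i, k] = (-1) ** (i + k) * np.linalg.det(red)
    return cof.T

A = np.array([[2., 1., 0.],
              [0., 3., 1.],
              [1., 0., 2.]])
detA = np.linalg.det(A)

# Eq. (A.16): A adjA = (adjA) A = (det A) I, hence A^{-1} = adjA / detA.
print(np.allclose(A @ adjoint(A), detA * np.eye(3)))
print(np.allclose(np.linalg.inv(A), adjoint(A) / detA))
```

Computing the inverse via the adjoint is useful for symbolic and small-dimension work; numerically, factorization-based routines such as `np.linalg.inv` are preferred for larger matrices.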
