Chapter 1 Matrix Algebra. Definitions and Operations


1.1 Matrices

Matrices play very important roles in the computation and analysis of several engineering problems. First, matrices allow for compact notation. As discussed below, matrices are collections of objects arranged in rows and columns. The symbols representing these collections can then induce an algebra in which different operations, such as matrix addition or matrix multiplication, can be defined. (This compact representation has become even more significant today with the advent of computer software that allows simple statements such as A+B or A*B to be evaluated directly.) Aside from the convenience in representation, once the matrices have been constructed, other internal properties can be derived and assessed. Properties such as determinant, rank, trace, eigenvalues, and eigenvectors (all to be defined later) determine characteristics of the systems from which the matrices were obtained. These properties can then help in the analysis and improvement of the systems under study.

It can be argued that some problems may be solved without the use of matrices. However, as the complexity of the problem increases, matrices can help improve the tractability of the solution.

Definition 1.1 A matrix is a collection of objects, called the elements of the matrix, arranged in rows and columns.

These elements could be numbers,

    A = [  1    0    0.3 ]
        [  2   3+i   -1  ]      with i = √(−1)
        [ -2    3     4  ]

or functions,

    B = [       1          2x(t) + a ]
        [ ∫ sin(ωt) dt       dy/dt  ]

We restrict the discussion to matrices that contain elements for which binary operations such as addition, subtraction, multiplication, and division among the elements make algebraic sense. To distinguish the elements from the collection, we refer to the valid elements of the matrix as scalars. Thus, a scalar is not the same as a matrix having only one row and one column.

© 2006 Tomas Co, all rights reserved
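The remark about computer software can be made concrete. Below is a minimal NumPy sketch (NumPy is an assumption of ours, not something the text names, and the entries of A follow the numeric example above, whose exact element placement is a reconstruction):

```python
import numpy as np

# The numeric example from Definition 1.1; element placement is reconstructed
# from a garbled source, so treat the exact entries as illustrative.
A = np.array([[ 1,  0,    0.3],
              [ 2,  3+1j, -1 ],
              [-2,  3,     4 ]])

B = np.eye(3)      # a second operand, here just the 3x3 identity

# As the text notes, software lets us evaluate sums and products directly.
print(A + B)       # elementwise sum
print(A @ B)       # matrix product; in NumPy, 'A * B' is elementwise instead
```

Note that NumPy reserves `*` for the elementwise product and uses `@` for the matrix product, a distinction the chapter itself draws later between the Hadamard-Schur and ordinary matrix products.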
We will denote the element of matrix A positioned at the ith row and jth column as aij. We will use capital letters to denote matrices. For example, let matrix A have m rows and n columns:

    A = [ a11  a12  ···  a1n ]
        [ a21  a22  ···  a2n ]
        [  :    :    ··   :  ]
        [ am1  am2  ···  amn ]

We will also use the symbol "[=]" to denote "has the size", i.e. A [=] m × n means A has m rows and n columns.

A row vector is simply a matrix having one row,

    v = ( v1, v2, ..., vn )

If v has n elements, then v is said to have length n. Likewise, a column vector is simply a matrix having one column,

    v = [ v1 ]
        [ v2 ]
        [  : ]
        [ vn ]

By default, "vector" will imply a column vector, unless it has been specified to be a row vector.

A square matrix is a matrix with the same number of columns and rows. Special cases include:

1. L, a lower triangular matrix

       L = [ ℓ11   0    0   ···   0  ]
           [ ℓ21  ℓ22   0   ···   0  ]
           [ ℓ31  ℓ32  ℓ33  ···   0  ]
           [  :    :    :    ··   :  ]
           [ ℓn1  ℓn2  ℓn3  ···  ℓnn ]

2. U, an upper triangular matrix

       U = [ u11  u12  u13  ···  u1n ]
           [  0   u22  u23  ···  u2n ]
           [  0    0   u33  ···  u3n ]
           [  :    :    :    ··   :  ]
           [  0    0    0   ···  unn ]

3. D, a diagonal matrix

       D = [ d11   0    0   ···   0  ]
           [  0   d22   0   ···   0  ]
           [  0    0   d33  ···   0  ]
           [  :    :    :    ··   :  ]
           [  0    0    0   ···  dnn ]

   A shorthand notation is D = diag(d11, d22, ..., dnn).

4. I, the identity matrix

       I = diag(1, 1, ..., 1)

   We will also use In to denote an identity matrix of size n.

1.2 Matrix Operations

1. Matrix Addition. Let A = (aij), B = (bij), C = (cij); then A + B = C if and only if cij = aij + bij.
   Condition: A, B, and C all have the same size.

2. Scalar-Matrix Multiplication. Let A = (aij), B = (bij), and α a scalar (e.g. a real number or a complex number); then αA = B if and only if bij = α aij.
   Condition: A and B have the same size.

3. Matrix Multiplication. Let A [=] m × n, B [=] n × p, C [=] m × p; then

       A * B = C

   if and only if

       cij = Σ_{ℓ=1}^{n} aiℓ bℓj

   Remarks:
   (a) A shorthand notation is AB.
   (b) For the operation AB, we say A pre-multiplies B and B post-multiplies A.
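The defining sum cij = Σℓ aiℓ bℓj in operation 3 can be coded directly. A minimal Python/NumPy sketch follows (the helper name `matmul_by_definition` is ours, not the text's), checked against NumPy's built-in product:

```python
import numpy as np

def matmul_by_definition(A, B):
    """Compute C = A*B from the definition c_ij = sum over l of a_il * b_lj.

    A is m x n and B is n x p (A conformable with B), so C is m x p.
    """
    m, n = A.shape
    n2, p = B.shape
    assert n == n2, "A must be conformable with B"
    C = np.zeros((m, p))
    for i in range(m):
        for j in range(p):
            C[i, j] = sum(A[i, l] * B[l, j] for l in range(n))
    return C

A = np.array([[1., 2., 3.],
              [4., 5., 6.]])        # A [=] 2 x 3
B = np.array([[1., 0.],
              [0., 1.],
              [2., 2.]])            # B [=] 3 x 2

C = matmul_by_definition(A, B)      # C [=] 2 x 2
print(C)                            # agrees with A @ B
```

Note that here BA is not even the same size as AB (3 × 3 versus 2 × 2), illustrating why AB = BA holds only in special cases.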
   (c) When the number of columns of A is equal to the number of rows of B, we say that A is conformable with B for the operation AB.
   (d) In general, AB is not equal to BA. For those special cases in which AB = BA, we say that A commutes with B.

4. Hadamard-Schur Product. Let A = (aij), B = (bij), C = (cij); then

       A ∘ B = C

   if and only if cij = aij bij.
   Condition: A, B, and C all have the same size.

5. Kronecker (Direct) Product. Let A = (aij) [=] m × n; then

       A ⊗ B = C = [ a11 B  a12 B  ···  a1n B ]
                   [ a21 B  a22 B  ···  a2n B ]
                   [   :      :     ··    :   ]
                   [ am1 B  am2 B  ···  amn B ]

6. Transpose. Let A = (aij); then the transpose of A, denoted A^T, is obtained by interchanging the positions of rows and columns.¹ For example, suppose A is given by

       A = [ a  b  c  d ]
           [ e  f  g  h ]

   then the transpose is given by

       A^T = [ a  e ]
             [ b  f ]
             [ c  g ]
             [ d  h ]

   If A = A^T, then A is said to be symmetric. If A = −A^T, then A is said to be skew-symmetric.

   If the elements of the matrix involve elements in the complex number field, then a related operation is the conjugate transpose A* = (ā_ji), where A = (aij) and ā is the complex conjugate of a. If A = A*, then A is said to be Hermitian. If A = −A*, then A is said to be skew-Hermitian.

   ¹ In other journals and books, the transpose symbol is an apostrophe (′), i.e. A′ instead of A^T.

7. Vectorization. Let A = (aij) [=] m × n; then

       x = vec(A) = [ a11 ]
                    [ a21 ]
                    [  :  ]
                    [ am1 ]
                    [  :  ]
                    [ a1n ]
                    [ a2n ]
                    [  :  ]
                    [ amn ]

8. Determinant. Let A be a square matrix of size n; then the determinant of A is given by

       det(A) or |A| = Σ_{k1 ≠ k2 ≠ ··· ≠ kn} p(k1, ..., kn) a_{1,k1} a_{2,k2} ··· a_{n,kn}        (1.1)

   where p(k1, ..., kn) = (−1)^h is called the permutation index and h is equal to the number of flips needed to make the sequence {k1, k2, k3, ..., kn} equal to the sequence {1, 2, 3, ..., n}.
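Operations 4 through 8 can all be exercised in a few lines. The sketch below uses NumPy (an assumption of ours) and implements the permutation-sum definition (1.1) literally, with the sign p(k1, ..., kn) obtained from the parity of the permutation:

```python
import itertools
import numpy as np

A = np.array([[1., 2.],
              [3., 4.]])
B = np.array([[0., 5.],
              [6., 7.]])

# 4. Hadamard-Schur product: elementwise, same-size matrices (NumPy's '*')
H = A * B                      # [[0, 10], [18, 28]]

# 5. Kronecker product: each a_ij is replaced by the block a_ij * B
K = np.kron(A, B)              # here (2*2) x (2*2)

# 6. Transpose and conjugate transpose
At = A.T
Ah = A.conj().T                # A* for complex matrices

# 7. Vectorization: stack the columns of A on top of each other
x = A.flatten(order="F")       # Fortran order walks down columns first

# 8. Determinant from definition (1.1): a signed sum over n! permutations
def det_by_definition(A):
    n = A.shape[0]
    total = 0.0
    for perm in itertools.permutations(range(n)):
        # p = (-1)^h, where h has the parity of the permutation's inversion
        # count (the number of flips needed to sort it)
        h = sum(1 for a in range(n) for b in range(a + 1, n)
                if perm[a] > perm[b])
        total += (-1) ** h * np.prod([A[i, perm[i]] for i in range(n)])
    return total

print(det_by_definition(A))    # matches np.linalg.det(A)
```

As the text warns, this n!-term sum is a definition, not a practical algorithm; it is shown here only to make (1.1) concrete.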
Example 1.1 Let A be a 3 × 3 matrix; then the determinant of A is obtained as follows:

    k1  k2  k3    h    (−1)^h a_{1,k1} a_{2,k2} a_{3,k3}
    1   2   3     0    + a11 a22 a33
    1   3   2     1    − a11 a23 a32
    2   1   3     1    − a12 a21 a33
    2   3   1     2    + a12 a23 a31
    3   1   2     2    + a13 a21 a32
    3   2   1     1    − a13 a22 a31

    |A| = a11 a22 a33 − a11 a23 a32 − a12 a21 a33 + a12 a23 a31 + a13 a21 a32 − a13 a22 a31

♦♦♦

From the definition given, we expect the summation to consist of n! terms. This definition is not usually used when doing actual determinant calculations. Instead, it is used more for proving some theorems which involve determinants. It is crucial to remember that (1.1) is the definition of a determinant (and not the computation method using the cofactor that is developed below).

9. Cofactor of aij. Let A↓ij denote a new matrix obtained by deleting the ith row and jth column of A; then the cofactor of aij, denoted cof(aij), is given by (−1)^(i+j) |A↓ij|.

   Using cofactors, the determinant of a matrix can be obtained recursively as follows:

   (a) The determinant of a 1 × 1 matrix is equal to that element, e.g. |(a)| = a.
   (b) The determinant of an n × n matrix can be obtained by column expansion,

           |A| = Σ_{i=1}^{n} aik cof(aik)        k is any one fixed column

       or by row expansion,

           |A| = Σ_{j=1}^{n} akj cof(akj)        k is any one fixed row

10. Matrix Adjoint. The matrix adjoint of a square matrix, denoted adj(A), is obtained by first replacing each element aij by its cofactor and then taking the transpose of the resulting matrix:

        [ a11  a12  ···  a1n ]                    [ cof(a11)  cof(a12)  ···  cof(a1n) ]
        [ a21  a22  ···  a2n ]   replace with     [ cof(a21)  cof(a22)  ···  cof(a2n) ]
        [  :    :    ··   :  ]  ------------->    [    :         :       ··      :    ]
        [ an1  an2  ···  ann ]    cofactors       [ cof(an1)  cof(an2)  ···  cof(ann) ]

                                                  [ cof(a11)  cof(a21)  ···  cof(an1) ]
                                  transpose       [ cof(a12)  cof(a22)  ···  cof(an2) ]
                                ------------->    [    :         :       ··      :    ]
                                                  [ cof(a1n)  cof(a2n)  ···  cof(ann) ]

11. Trace of a Square Matrix.
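The recursive cofactor expansion and the matrix adjoint translate almost line-for-line into code. A minimal sketch, assuming NumPy (the function names are ours):

```python
import numpy as np

def minor(A, i, j):
    """A with row i and column j deleted (the matrix written A(ij) above)."""
    return np.delete(np.delete(A, i, axis=0), j, axis=1)

def det_cofactor(A):
    """Determinant by row expansion along the first (fixed) row."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]          # base case: |(a)| = a
    # |A| = sum_j a_0j * cof(a_0j), with cof(a_0j) = (-1)^(0+j) |A(0j)|
    return sum(A[0, j] * (-1) ** j * det_cofactor(minor(A, 0, j))
               for j in range(n))

def adj(A):
    """Matrix adjoint: replace each a_ij by cof(a_ij), then transpose."""
    n = A.shape[0]
    cof = np.array([[(-1) ** (i + j) * det_cofactor(minor(A, i, j))
                     for j in range(n)] for i in range(n)])
    return cof.T

A = np.array([[1., 2., 0.],
              [3., 4., 1.],
              [0., 1., 2.]])
print(det_cofactor(A))          # matches np.linalg.det(A)
```

A useful self-check is the identity A · adj(A) = |A| · I, which also foreshadows Lemma 1.1 on the inverse.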
The trace of an n × n matrix A, denoted tr(A), is given by

    tr(A) = Σ_{i=1}^{n} aii

12. Inverse of a Square Matrix. The matrix, denoted by A⁻¹, is called the inverse of A if and only if

        A⁻¹ A = A A⁻¹ = I

    where I is the identity matrix.
    Condition: The inverse of a square matrix exists only if its determinant is not equal to zero. A matrix whose determinant is zero is called a singular matrix.

Lemma 1.1 The inverse of a square matrix A can be obtained using the identity

    A⁻¹ = (1 / |A|) adj(A)        (1.2)

(see page 23 for proof)

Note that even though only nonsingular square matrices have inverses, all square matrices can still have matrix adjoints.
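Identity (1.2) of Lemma 1.1 can be checked numerically. The sketch below (our own NumPy code, not from the text) builds adj(A) from cofactors and divides by the determinant, rejecting singular matrices as the condition above requires:

```python
import numpy as np

def inverse_via_adjoint(A):
    """Inverse from Lemma 1.1: A^{-1} = adj(A) / |A|.

    Raises on singular input, since only nonsingular matrices have inverses
    (though every square matrix still has an adjoint).
    """
    n = A.shape[0]
    detA = np.linalg.det(A)
    if np.isclose(detA, 0.0):
        raise ValueError("singular matrix: no inverse exists")
    # cofactor matrix: cof(a_ij) = (-1)^(i+j) * det of A with row i, col j deleted
    cof = np.array([[(-1) ** (i + j)
                     * np.linalg.det(np.delete(np.delete(A, i, 0), j, 1))
                     for j in range(n)] for i in range(n)])
    return cof.T / detA         # adj(A) is the transposed cofactor matrix

A = np.array([[2., 1.],
              [5., 3.]])
print(inverse_via_adjoint(A))   # [[ 3. -1.], [-5.  2.]]
```

For this A, |A| = 1, so the inverse equals the adjoint itself; in practice one would call np.linalg.inv, which uses a factorization rather than cofactors, but (1.2) is the conceptual definition.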
