Appendix A: Matrices and Tensors
A.1 Introduction and Rationale
The purpose of this appendix is to present the notation and most of the mathematical techniques that will be used in the body of the text. The audience is assumed to have been through several years of college level mathematics that included the differential and integral calculus, differential equations, functions of several variables, partial derivatives, and an introduction to linear algebra. Matrices are reviewed briefly, and determinants, vectors, and tensors of order two are described. The application of this linear algebra to material that appears in undergraduate engineering courses on mechanics is illustrated by discussions of concepts like the area and mass moments of inertia, Mohr's circles, and the vector cross and triple scalar products. The solutions to ordinary differential equations are reviewed in the last two sections. The notation, as far as possible, will be a matrix notation that is easily entered into existing symbolic computational programs like Maple, Mathematica, Matlab, and Mathcad. The desire to represent the components of the three-dimensional fourth order tensors that appear in anisotropic elasticity as components of six-dimensional second order tensors, and thus to arrange these components in matrices of tensor components in six dimensions, leads to the nontraditional part of this appendix. This is also one of the nontraditional aspects in the text of the book, but a minor one. It is described in Sect. A.11, along with the rationale for this approach.
A.2 Definition of Square, Column, and Row Matrices
An r by c matrix M is a rectangular array of numbers consisting of r rows and c columns,
$$\mathbf{M} = \begin{bmatrix} M_{11} & M_{12} & \cdots & M_{1c} \\ M_{21} & M_{22} & \cdots & M_{2c} \\ \vdots & \vdots & \ddots & \vdots \\ M_{r1} & M_{r2} & \cdots & M_{rc} \end{bmatrix}. \quad (A.1)$$
The typical element of the array, $M_{ij}$, is the element in the $i$th row and $j$th column; in this text the elements $M_{ij}$ will be real numbers or functions whose values are real numbers. The transpose of the matrix M is denoted by $\mathbf{M}^T$ and is obtained from M by interchanging the rows and columns,

$$\mathbf{M}^T = \begin{bmatrix} M_{11} & M_{21} & \cdots & M_{r1} \\ M_{12} & M_{22} & \cdots & M_{r2} \\ \vdots & \vdots & \ddots & \vdots \\ M_{1c} & M_{2c} & \cdots & M_{rc} \end{bmatrix}. \quad (A.2)$$
The operation of obtaining $\mathbf{M}^T$ from M is called transposition. In this text we are interested in special cases of the r by c matrix M. These special cases are those of the square matrix, r = c = n, the case of the row matrix, r = 1, c = n, and the case of the column matrix, r = n, c = 1. Further, the special sub-cases of interest are n = 2, n = 3, and n = 6; the sub-case n = 1 reduces all three special cases to the trivial situation of a single number or scalar. A square matrix A has the form

$$\mathbf{A} = \begin{bmatrix} A_{11} & A_{12} & \cdots & A_{1n} \\ A_{21} & A_{22} & \cdots & A_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ A_{n1} & A_{n2} & \cdots & A_{nn} \end{bmatrix}, \quad (A.3)$$

while row and column matrices, r and c, have the forms

$$\mathbf{r} = [\,r_1 \;\; r_2 \;\; \cdots \;\; r_n\,], \qquad \mathbf{c} = \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{bmatrix}, \quad (A.4)$$
respectively. The transpose of a column matrix is a row matrix, thus

$$\mathbf{c}^T = [\,c_1 \;\; c_2 \;\; \cdots \;\; c_n\,]. \quad (A.5)$$

To save space in books and papers the form of c in (A.5) is used more frequently than the form in the second of (A.4). Wherever possible, square matrices will be denoted by upper case boldface Latin letters, while row and column matrices will be denoted by lower case boldface Latin letters, as is the case in equations (A.3) and (A.4).
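These conventions carry over directly to matrix software. As a minimal sketch (using NumPy here as a stand-in for the computational programs mentioned in Sect. A.1; the particular numbers are arbitrary):

```python
import numpy as np

# A 2-by-3 matrix M and its transpose, as in (A.1)-(A.2)
M = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
MT = M.T                              # 3-by-2: rows and columns interchanged

# Row matrix r (1 by n) and column matrix c (n by 1), as in (A.4)
r = np.array([[1.0, 2.0, 3.0]])       # shape (1, 3)
c = np.array([[1.0], [2.0], [3.0]])   # shape (3, 1)

# (A.5): transposing a column matrix gives a row matrix
assert np.array_equal(c.T, np.array([[1.0, 2.0, 3.0]]))
```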
A.3 The Types and Algebra of Square Matrices
The elements of the square matrix A given by (A.3) for which the row and column indices are equal, namely the elements A11, A22,..., Ann, are called diagonal elements. The sum of the diagonal elements of a matrix is a scalar called the trace of the matrix and, for a matrix A, it is denoted by trA,
$$\mathrm{tr}\,\mathbf{A} = A_{11} + A_{22} + \cdots + A_{nn}. \quad (A.6)$$
If the trace of a matrix is zero, the matrix is said to be traceless. Note also that $\mathrm{tr}\,\mathbf{A} = \mathrm{tr}\,\mathbf{A}^T$. A matrix with only diagonal elements is called a diagonal matrix,

$$\mathbf{A} = \begin{bmatrix} A_{11} & 0 & \cdots & 0 \\ 0 & A_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & A_{nn} \end{bmatrix}. \quad (A.7)$$
The zero and the unit matrix, 0 and 1, respectively, constitute the null element, the 0, and the unit element, the 1, in the algebra of square matrices. The zero matrix is a matrix whose every element is zero and the unit matrix is a diagonal matrix whose diagonal elements are all one:

$$\mathbf{0} = \begin{bmatrix} 0 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix}, \qquad \mathbf{1} = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}. \quad (A.8)$$
A special symbol, the Kronecker delta $\delta_{ij}$, is introduced to represent the components of the unit matrix. When the indices are equal, $i = j$, the value of the Kronecker delta is one, $\delta_{11} = \delta_{22} = \cdots = \delta_{nn} = 1$, and when they are unequal, $i \neq j$, the value of the Kronecker delta is zero, $\delta_{12} = \delta_{21} = \cdots = \delta_{n1} = \delta_{1n} = 0$. The multiplication of a matrix A by a scalar $a$ is defined as the multiplication of every element of the matrix A by the scalar $a$, thus

$$a\mathbf{A} = \begin{bmatrix} aA_{11} & aA_{12} & \cdots & aA_{1n} \\ aA_{21} & aA_{22} & \cdots & aA_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ aA_{n1} & aA_{n2} & \cdots & aA_{nn} \end{bmatrix}. \quad (A.9)$$
It is then easy to show that $1\mathbf{A} = \mathbf{A}$, $-1\mathbf{A} = -\mathbf{A}$, $0\mathbf{A} = \mathbf{0}$, and $a\mathbf{0} = \mathbf{0}$. The addition of matrices is defined only for matrices with the same number of rows and columns. The sum of two matrices, A and B, is denoted by A + B, where

$$\mathbf{A} + \mathbf{B} = \begin{bmatrix} A_{11}+B_{11} & A_{12}+B_{12} & \cdots & A_{1n}+B_{1n} \\ A_{21}+B_{21} & A_{22}+B_{22} & \cdots & A_{2n}+B_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ A_{n1}+B_{n1} & A_{n2}+B_{n2} & \cdots & A_{nn}+B_{nn} \end{bmatrix}. \quad (A.10)$$
Matrix addition is commutative and associative,
$$\mathbf{A} + \mathbf{B} = \mathbf{B} + \mathbf{A} \quad \text{and} \quad \mathbf{A} + (\mathbf{B} + \mathbf{C}) = (\mathbf{A} + \mathbf{B}) + \mathbf{C}, \quad (A.11)$$

respectively. The following distributive laws connect matrix addition and matrix multiplication by scalars:
$$a(\mathbf{A} + \mathbf{B}) = a\mathbf{A} + a\mathbf{B} \quad \text{and} \quad (a + b)\mathbf{A} = a\mathbf{A} + b\mathbf{A}, \quad (A.12)$$

where $a$ and $b$ are scalars. Negative square matrices may be created by employing the definition of matrix multiplication by a scalar (A.9) in the special case when $a = -1$. In this case the definition of addition of square matrices (A.10) can be extended to include the subtraction of square matrices, $\mathbf{A} - \mathbf{B}$. A matrix for which $\mathbf{B} = \mathbf{B}^T$ is said to be a symmetric matrix and a matrix for which $\mathbf{C} = -\mathbf{C}^T$ is said to be a skew-symmetric or anti-symmetric matrix. The symmetric and anti-symmetric parts of a matrix, say $\mathbf{A}^S$ and $\mathbf{A}^A$, are constructed from A as follows:
$$\text{symmetric part of } \mathbf{A}\!: \quad \mathbf{A}^S = \frac{1}{2}\left(\mathbf{A} + \mathbf{A}^T\right), \quad \text{and} \quad (A.13)$$

$$\text{anti-symmetric part of } \mathbf{A}\!: \quad \mathbf{A}^A = \frac{1}{2}\left(\mathbf{A} - \mathbf{A}^T\right). \quad (A.14)$$
It is easy to verify that the symmetric part of A is a symmetric matrix and that the skew-symmetric part of A is a skew-symmetric matrix. The sum of the symmetric part of A and the skew-symmetric part of A, $\mathbf{A}^S + \mathbf{A}^A$, is A:

$$\mathbf{A} = \mathbf{A}^S + \mathbf{A}^A = \frac{1}{2}\left(\mathbf{A} + \mathbf{A}^T\right) + \frac{1}{2}\left(\mathbf{A} - \mathbf{A}^T\right). \quad (A.15)$$
This result shows that any square matrix can be decomposed into the sum of a symmetric and a skew-symmetric or anti-symmetric matrix. Using the trace operation introduced above, the representation (A.15) can be extended to a three-way decomposition of the matrix A,

$$\mathbf{A} = \frac{\mathrm{tr}\,\mathbf{A}}{n}\,\mathbf{1} + \frac{1}{2}\left(\mathbf{A} + \mathbf{A}^T - \frac{2\,\mathrm{tr}\,\mathbf{A}}{n}\,\mathbf{1}\right) + \frac{1}{2}\left(\mathbf{A} - \mathbf{A}^T\right). \quad (A.16)$$
The last term in this decomposition is still the skew-symmetric part of the matrix. The second term is the traceless symmetric part of the matrix and the first term is simply the trace of the matrix multiplied by the unit matrix.

Example A.3.1 Construct the three-way decomposition of the matrix A given by:

$$\mathbf{A} = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}.$$

Solution: The symmetric and skew-symmetric parts of A, $\mathbf{A}^S$ and $\mathbf{A}^A$, as well as the trace of A, are calculated,

$$\mathbf{A}^S = \frac{1}{2}\left(\mathbf{A} + \mathbf{A}^T\right) = \begin{bmatrix} 1 & 3 & 5 \\ 3 & 5 & 7 \\ 5 & 7 & 9 \end{bmatrix}, \quad \mathbf{A}^A = \frac{1}{2}\left(\mathbf{A} - \mathbf{A}^T\right) = \begin{bmatrix} 0 & -1 & -2 \\ 1 & 0 & -1 \\ 2 & 1 & 0 \end{bmatrix}, \quad \mathrm{tr}\,\mathbf{A} = 15;$$

then, since n = 3, it follows from (A.16) that

$$\mathbf{A} = \begin{bmatrix} 5 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 5 \end{bmatrix} + \begin{bmatrix} -4 & 3 & 5 \\ 3 & 0 & 7 \\ 5 & 7 & 4 \end{bmatrix} + \begin{bmatrix} 0 & -1 & -2 \\ 1 & 0 & -1 \\ 2 & 1 & 0 \end{bmatrix}.$$
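The same decomposition is easily checked numerically. A minimal NumPy sketch of (A.13)–(A.16) applied to the matrix of this example:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])
n = A.shape[0]

S = 0.5 * (A + A.T)                   # symmetric part, (A.13)
W = 0.5 * (A - A.T)                   # skew-symmetric part, (A.14)
iso = (np.trace(A) / n) * np.eye(n)   # isotropic (spherical) part
dev_S = S - iso                       # traceless symmetric part

assert np.allclose(A, iso + dev_S + W)    # the three-way split (A.16)
assert np.isclose(np.trace(dev_S), 0.0)   # the middle term is traceless
```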
Introducing the notation for the deviatoric part of an n by n square matrix A,
$$\mathrm{dev}\,\mathbf{A} = \mathbf{A} - \frac{\mathrm{tr}\,\mathbf{A}}{n}\,\mathbf{1}, \quad (A.17)$$

the representation for the matrix A given by (A.16) may be rewritten as the sum of three terms,
$$\mathbf{A} = \frac{\mathrm{tr}\,\mathbf{A}}{n}\,\mathbf{1} + \mathrm{dev}\,\mathbf{A}^S + \mathbf{A}^A, \quad (A.18)$$

where the first term is called the isotropic, or spherical (or hydrostatic, as in hydraulics), part of A. Note that

$$\mathrm{dev}\,\mathbf{A}^S = \frac{1}{2}\left(\mathrm{dev}\,\mathbf{A} + \mathrm{dev}\,\mathbf{A}^T\right). \quad (A.19)$$
Example A.3.2 Show that tr(dev A) = 0.

Solution: Applying the trace operation to both sides of (A.17), one obtains

$$\mathrm{tr}(\mathrm{dev}\,\mathbf{A}) = \mathrm{tr}\,\mathbf{A} - \frac{1}{n}(\mathrm{tr}\,\mathbf{A})(\mathrm{tr}\,\mathbf{1});$$

then, since $\mathrm{tr}\,\mathbf{1} = n$, it follows that tr(dev A) = 0.

The product of two square matrices, A and B, with equal numbers of rows (columns) is a square matrix with the same number of rows (columns). The matrix product is written as A·B, where A·B is defined by
$$(\mathbf{A} \cdot \mathbf{B})_{ij} = \sum_{k=1}^{n} A_{ik} B_{kj}; \quad (A.20)$$

thus, for example, the element in the $r$th row and $c$th column of the product A·B is given by

$$(\mathbf{A} \cdot \mathbf{B})_{rc} = A_{r1}B_{1c} + A_{r2}B_{2c} + \cdots + A_{rn}B_{nc}.$$
The widely used notational convention, called the Einstein summation convention, allows one to simplify the notation by dropping the summation symbol in (A.20), so that

$$(\mathbf{A} \cdot \mathbf{B})_{ij} = A_{ik} B_{kj}, \quad (A.21)$$

where the convention is the understanding that the repeated index, in this case $k$, is to be summed over its range of admissible values from 1 to n. This summation convention will be used from this point forward in this Appendix and in the body of the text. For n = 6, the range of admissible values is 1 to 6, including 2, 3, 4, and 5. The two $k$ indices are the summation or dummy indices; note that the implied summation is unchanged if both of the $k$'s are replaced by any other letter of the alphabet. A summation index is defined as an index that occurs in a summand twice and only twice. Note that summands are terms in equations separated from each other by plus, minus, or equal signs. The existence of summation indices in a summand requires that the summand be summed with respect to those indices over the entire range of admissible values. Note again that the summation index is only a means of stating that a term must be summed, and the letter used for this index is immaterial; thus $A_{im}B_{mj}$ has the same meaning as $A_{ik}B_{kj}$. The other indices in the formula (A.21), the $i$ and $j$ indices, are called free indices. A free index is free to take on any one of the admissible values in its range from 1 to n. For example, if n were 3, the free index could be 1, 2, or 3. A free index is formally defined as an index that occurs once and only once in every summand of an equation. The total number of equations that may be represented by an equation with one free index is the range of the admissible values; thus equation (A.21), with two free indices, represents $n^2$ separate equations. For two 2 by 2 matrices A and B, the product is written as

$$\mathbf{A} \cdot \mathbf{B} = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix} = \begin{bmatrix} A_{11}B_{11} + A_{12}B_{21} & A_{11}B_{12} + A_{12}B_{22} \\ A_{21}B_{11} + A_{22}B_{21} & A_{21}B_{12} + A_{22}B_{22} \end{bmatrix}, \quad (A.22)$$

where, in this case, the products (A.20) and (A.21) stand for the $n^2 = 2^2 = 4$ separate equations, the right-hand sides of which are the four elements of the last matrix in (A.22). The dot in the matrix product A·B indicates that one index from A and one index from B is to be summed over. The positioning of the summation index on the two matrices involved in a matrix product is critical and is reflected in the matrix notation by the transpose. In the three equations below, (A.23), study carefully how the positions of the summation indices change in relation to the position of the transpose on the matrices in the associated matrix product:
$$(\mathbf{A} \cdot \mathbf{B}^T)_{ij} = A_{ik} B_{jk}, \qquad (\mathbf{A}^T \cdot \mathbf{B})_{ij} = A_{ki} B_{kj}, \qquad (\mathbf{A}^T \cdot \mathbf{B}^T)_{ij} = A_{ki} B_{jk}. \quad (A.23)$$
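The index placements in (A.21) and (A.23) map directly onto NumPy's einsum function; a short sketch with arbitrary 3 by 3 matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.random((3, 3)), rng.random((3, 3))

# (A.21): (A.B)_ij = A_ik B_kj
assert np.allclose(np.einsum('ik,kj->ij', A, B), A @ B)

# (A.23): moving the summation index is the same as inserting a transpose
assert np.allclose(np.einsum('ik,jk->ij', A, B), A @ B.T)
assert np.allclose(np.einsum('ki,kj->ij', A, B), A.T @ B)
assert np.allclose(np.einsum('ki,jk->ij', A, B), A.T @ B.T)
```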
A very significant feature of matrix multiplication is noncommutativity; that is to say, $\mathbf{A} \cdot \mathbf{B} \neq \mathbf{B} \cdot \mathbf{A}$ in general. Note, for example, that the reversed product B·A of the multiplication represented in (A.22),

$$\mathbf{B} \cdot \mathbf{A} = \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix} \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} = \begin{bmatrix} B_{11}A_{11} + B_{12}A_{21} & B_{11}A_{12} + B_{12}A_{22} \\ B_{21}A_{11} + B_{22}A_{21} & B_{21}A_{12} + B_{22}A_{22} \end{bmatrix}, \quad (A.24)$$

is an illustration of the fact that $\mathbf{A} \cdot \mathbf{B} \neq \mathbf{B} \cdot \mathbf{A}$, in general. If A·B = B·A, the matrices A and B are said to commute. Finally, matrix multiplication is associative,
$$\mathbf{A} \cdot (\mathbf{B} \cdot \mathbf{C}) = (\mathbf{A} \cdot \mathbf{B}) \cdot \mathbf{C}, \quad (A.25)$$

and matrix multiplication is distributive with respect to addition,

$$\mathbf{A} \cdot (\mathbf{B} + \mathbf{C}) = \mathbf{A} \cdot \mathbf{B} + \mathbf{A} \cdot \mathbf{C} \quad \text{and} \quad (\mathbf{B} + \mathbf{C}) \cdot \mathbf{A} = \mathbf{B} \cdot \mathbf{A} + \mathbf{C} \cdot \mathbf{A}, \quad (A.26)$$

provided the results of these operations are defined.

Example A.3.3 Construct the products A·B and B·A of the matrices A and B given by:

$$\mathbf{A} = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}, \qquad \mathbf{B} = \begin{bmatrix} 10 & 11 & 12 \\ 13 & 14 & 15 \\ 16 & 17 & 18 \end{bmatrix}.$$
Solution: The products A·B and B·A are given by

$$\mathbf{A} \cdot \mathbf{B} = \begin{bmatrix} 84 & 90 & 96 \\ 201 & 216 & 231 \\ 318 & 342 & 366 \end{bmatrix}, \qquad \mathbf{B} \cdot \mathbf{A} = \begin{bmatrix} 138 & 171 & 204 \\ 174 & 216 & 258 \\ 210 & 261 & 312 \end{bmatrix}.$$

Observe that $\mathbf{A} \cdot \mathbf{B} \neq \mathbf{B} \cdot \mathbf{A}$.
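A quick check of this example (a sketch; `@` is NumPy's matrix-product operator):

```python
import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
B = np.array([[10, 11, 12], [13, 14, 15], [16, 17, 18]])

print(A @ B)   # [[ 84  90  96] [201 216 231] [318 342 366]]
print(B @ A)   # [[138 171 204] [174 216 258] [210 261 312]]
assert not np.array_equal(A @ B, B @ A)           # the product does not commute
assert np.trace(A @ B) == np.trace(B @ A) == 666  # cf. Problem A.3.5
```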
The double dot notation between two second order tensors is an extension of the single dot notation between matrices: the single dot in A·B indicates that one index from A and one index from B are to be summed over, while the double dot notation between the matrices, A : B, indicates that both indices of A are to be summed with the indices of B, thus

$$\mathbf{A} : \mathbf{B} = A_{ik} B_{ik}.$$

This colon notation stands for the same operation as the trace of the product, $\mathbf{A} : \mathbf{B} = \mathrm{tr}(\mathbf{A} \cdot \mathbf{B}^T)$. Although $\mathrm{tr}(\mathbf{A} \cdot \mathbf{B}^T)$ and A : B mean the same thing, A : B involves fewer characters and it will be the notation of choice. Note that $\mathbf{A} : \mathbf{B} = \mathbf{A}^T : \mathbf{B}^T$ and $\mathbf{A}^T : \mathbf{B} = \mathbf{A} : \mathbf{B}^T$, but that $\mathbf{A} : \mathbf{B} \neq \mathbf{A}^T : \mathbf{B}$ in general.

In the considerations of mechanics, matrices are often functions of the coordinate positions $x_1, x_2, x_3$ and time $t$. In this case the matrix is written $\mathbf{A}(x_1, x_2, x_3, t)$, which means that each element of A is a function of $x_1, x_2, x_3$, and $t$,

$$\mathbf{A}(x_1, x_2, x_3, t) = \begin{bmatrix} A_{11}(x_1, x_2, x_3, t) & A_{12}(x_1, x_2, x_3, t) & \cdots & A_{1n}(x_1, x_2, x_3, t) \\ A_{21}(x_1, x_2, x_3, t) & A_{22}(x_1, x_2, x_3, t) & \cdots & A_{2n}(x_1, x_2, x_3, t) \\ \vdots & \vdots & \ddots & \vdots \\ A_{n1}(x_1, x_2, x_3, t) & A_{n2}(x_1, x_2, x_3, t) & \cdots & A_{nn}(x_1, x_2, x_3, t) \end{bmatrix}. \quad (A.27)$$
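The double-dot identities above are easy to verify numerically; a short sketch, reusing the A and B of Example A.3.3:

```python
import numpy as np

A = np.array([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])
B = np.array([[10., 11., 12.], [13., 14., 15.], [16., 17., 18.]])

double_dot = np.sum(A * B)                          # A : B = A_ik B_ik
assert np.isclose(double_dot, np.trace(A @ B.T))    # A : B = tr(A.B^T)
assert np.isclose(np.sum(A.T * B.T), double_dot)    # A^T : B^T = A : B
assert np.isclose(np.sum(A.T * B), np.sum(A * B.T)) # A^T : B = A : B^T
```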
Let the operator $\mathcal{D}$ stand for a total derivative, a partial derivative with respect to $x_1, x_2, x_3$, or $t$, or a definite or indefinite (single or multiple) integral; then the operation of the operator on the matrix follows the same rule as the multiplication of a matrix by a scalar (A.9), thus

$$\mathcal{D}\mathbf{A}(x_1, x_2, x_3, t) = \begin{bmatrix} \mathcal{D}A_{11}(x_1, x_2, x_3, t) & \cdots & \mathcal{D}A_{1n}(x_1, x_2, x_3, t) \\ \mathcal{D}A_{21}(x_1, x_2, x_3, t) & \cdots & \mathcal{D}A_{2n}(x_1, x_2, x_3, t) \\ \vdots & \ddots & \vdots \\ \mathcal{D}A_{n1}(x_1, x_2, x_3, t) & \cdots & \mathcal{D}A_{nn}(x_1, x_2, x_3, t) \end{bmatrix}. \quad (A.28)$$
The following distributive laws connect matrix addition and operator operations:

$$\mathcal{D}(\mathbf{A} + \mathbf{B}) = \mathcal{D}\mathbf{B} + \mathcal{D}\mathbf{A} \quad \text{and} \quad (\mathcal{D}_1 + \mathcal{D}_2)\mathbf{A} = \mathcal{D}_1\mathbf{A} + \mathcal{D}_2\mathbf{A}, \quad (A.29)$$

where $\mathcal{D}_1$ and $\mathcal{D}_2$ are two different operators.
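For the derivative case, the element-by-element rule (A.28) is exactly what a computer algebra system does. A small sketch using SymPy (one of several CAS options; the matrix here is an arbitrary illustration, not one from the text):

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[t, t**2],
               [sp.sin(t), sp.log(t)]])

# d/dt applied element by element, as in (A.28)
print(A.diff(t))   # Matrix([[1, 2*t], [cos(t), 1/t]])

# an indefinite integral, also element by element
print(A.applyfunc(lambda e: sp.integrate(e, t)))
```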
Problems

A.3.1. Simplify the following expressions by using the Einstein summation index convention for a range of three:

$$0 = r_1 w_1 + r_2 w_2 + r_3 w_3,$$

$$C = (u_1 v_1 + u_2 v_2 + u_3 v_3)(u_1 v_1 + u_2 v_2 + u_3 v_3),$$

$$f = A_{11} x_1^2 + A_{22} x_2^2 + A_{33} x_3^2 + A_{12} x_1 x_2 + A_{21} x_1 x_2 + A_{13} x_1 x_3 + A_{31} x_1 x_3 + A_{23} x_3 x_2 + A_{32} x_3 x_2.$$
A.3.2. The matrix M has the numbers 4, 5, 5 in its first row, 1, 3, 1 in its second row, and 7, 1, 1 in its third row. Find the transpose of M, the symmetric part of M, and the skew-symmetric part of M.

A.3.3. Prove that $\dfrac{\partial x_i}{\partial x_j} = \delta_{ij}$.

A.3.4. Consider the hydrostatic component H, the deviatoric component D of the symmetric part of A, and the skew-symmetric component S of the square n by n matrix A defined by (A.17) and (A.18),

$$\mathbf{H} = \frac{\mathrm{tr}\,\mathbf{A}}{n}\,\mathbf{1}, \qquad \mathbf{D} = \frac{1}{2}\left(\mathbf{A} + \mathbf{A}^T - \frac{2\,\mathrm{tr}\,\mathbf{A}}{n}\,\mathbf{1}\right), \qquad \mathbf{S} = \frac{1}{2}\left[\mathbf{A} - \mathbf{A}^T\right].$$

Evaluate the following: tr H, tr D, tr S, tr(H·D) = H : D, tr(H·S) = H : S, and tr(S·D) = S : D.

A.3.5. For the matrices in Example A.3.3, show that tr(A·B) = tr(B·A) = 666. In general, will A : B = B : A, or is this a special case?

A.3.6. Prove that A : B is zero if A is symmetric and B is skew-symmetric.

A.3.7. Calculate $\mathbf{A}^T \cdot \mathbf{B}$, $\mathbf{A} \cdot \mathbf{B}^T$, and $\mathbf{A}^T \cdot \mathbf{B}^T$ for the matrices A and B of Example A.3.3.

A.3.8. Find the derivative of the matrix A(t) with respect to t,

$$\mathbf{A}(t) = \begin{bmatrix} t & t^2 & \sin \omega t \\ \cosh t & \ln t & 17t \\ 1/t & 1/t^2 & \ln t^2 \end{bmatrix}.$$
A.3.9. Show that $(\mathbf{A} \cdot \mathbf{B})^T = \mathbf{B}^T \cdot \mathbf{A}^T$.

A.3.10. Show that $(\mathbf{A} \cdot \mathbf{B} \cdot \mathbf{C})^T = \mathbf{C}^T \cdot \mathbf{B}^T \cdot \mathbf{A}^T$.
A.4 The Algebra of n-Tuples
The algebra of column matrices is the same as the algebra of row matrices; a column matrix need only be transposed to be equivalent to a row matrix, as illustrated in equations (A.4) and (A.5). A phrase that describes both row and column matrices is n-tuples. This phrase will be used here because it is descriptive and inclusive. A zero n-tuple is an n-tuple whose entries are all zero; it is denoted by $\mathbf{0} = [0, 0, \ldots, 0]$. The multiplication of an n-tuple r by a scalar $a$ is defined as the multiplication of every element of the n-tuple r by the scalar $a$, thus $a\mathbf{r} = [ar_1, ar_2, \ldots, ar_n]$. As with square matrices, it is then easy to show for n-tuples that $1\mathbf{r} = \mathbf{r}$, $-1\mathbf{r} = -\mathbf{r}$, $0\mathbf{r} = \mathbf{0}$, and $a\mathbf{0} = \mathbf{0}$. The addition of n-tuples is defined only for n-tuples with the same n. The sum of two n-tuples, r and t, is denoted by $\mathbf{r} + \mathbf{t}$, where $\mathbf{r} + \mathbf{t} = [r_1 + t_1, r_2 + t_2, \ldots, r_n + t_n]$. n-tuple addition is commutative, $\mathbf{r} + \mathbf{t} = \mathbf{t} + \mathbf{r}$, and associative, $\mathbf{r} + (\mathbf{t} + \mathbf{u}) = (\mathbf{r} + \mathbf{t}) + \mathbf{u}$. The following distributive laws connect n-tuple addition and n-tuple multiplication by scalars: $a(\mathbf{r} + \mathbf{t}) = a\mathbf{r} + a\mathbf{t}$ and $(a + b)\mathbf{r} = a\mathbf{r} + b\mathbf{r}$, where $a$ and $b$ are scalars. Negative n-tuples may be created by employing the definition of n-tuple multiplication by a scalar in the special case when $a = -1$. In this case the definition of addition of n-tuples can be extended to include the subtraction of n-tuples, that is, the difference $\mathbf{r} - \mathbf{t}$.

Two n-tuples may be employed to create a square matrix. The square matrix formed from r and t is called the open product of the n-tuples r and t; it is denoted by $\mathbf{r} \otimes \mathbf{t}$, and defined by

$$\mathbf{r} \otimes \mathbf{t} = \begin{bmatrix} r_1 t_1 & r_1 t_2 & \cdots & r_1 t_n \\ r_2 t_1 & r_2 t_2 & \cdots & r_2 t_n \\ \vdots & \vdots & \ddots & \vdots \\ r_n t_1 & r_n t_2 & \cdots & r_n t_n \end{bmatrix}. \quad (A.30)$$
The American physicist J. Willard Gibbs introduced the concept of the open product of vectors, calling the product a dyad. This terminology is still used in some books, and the notation is spoken of as the dyadic notation. The trace of this square matrix, $\mathrm{tr}\{\mathbf{r} \otimes \mathbf{t}\}$, is the scalar product of r and t,

$$\mathrm{tr}\{\mathbf{r} \otimes \mathbf{t}\} = \mathbf{r} \cdot \mathbf{t} = r_1 t_1 + r_2 t_2 + \cdots + r_n t_n. \quad (A.31)$$
In the special case of n = 3, the skew-symmetric part of the open product $\mathbf{r} \otimes \mathbf{t}$,

$$\frac{1}{2}\begin{bmatrix} 0 & r_1 t_2 - r_2 t_1 & r_1 t_3 - r_3 t_1 \\ r_2 t_1 - r_1 t_2 & 0 & r_2 t_3 - r_3 t_2 \\ r_3 t_1 - r_1 t_3 & r_3 t_2 - r_2 t_3 & 0 \end{bmatrix}, \quad (A.32)$$

provides the components of the cross product of r and t, denoted by $\mathbf{r} \times \mathbf{t}$ and written as $\mathbf{r} \times \mathbf{t} = [r_2 t_3 - r_3 t_2,\; r_3 t_1 - r_1 t_3,\; r_1 t_2 - r_2 t_1]^T$. These points concerning the dot product $\mathbf{r} \cdot \mathbf{t}$ and cross product $\mathbf{r} \times \mathbf{t}$ will be revisited later in this Appendix.

Example A.4.1 Given the n-tuples $\mathbf{a} = [1, 2, 3]$ and $\mathbf{b} = [4, 5, 6]$, construct the open product matrix $\mathbf{a} \otimes \mathbf{b}$, the skew-symmetric part of the open product matrix, and the trace of the open product matrix.
Solution:

$$\mathbf{a} \otimes \mathbf{b} = \begin{bmatrix} 4 & 5 & 6 \\ 8 & 10 & 12 \\ 12 & 15 & 18 \end{bmatrix}, \qquad \frac{1}{2}\left(\mathbf{a} \otimes \mathbf{b} - (\mathbf{a} \otimes \mathbf{b})^T\right) = \frac{3}{2}\begin{bmatrix} 0 & -1 & -2 \\ 1 & 0 & -1 \\ 2 & 1 & 0 \end{bmatrix},$$

and $\mathrm{tr}\{\mathbf{a} \otimes \mathbf{b}\} = \mathbf{a} \cdot \mathbf{b} = 32$.
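A NumPy version of this example (np.outer forms the open product; the factor-of-two relation between the skew part and np.cross mirrors (A.32)):

```python
import numpy as np

a = np.array([1., 2., 3.])
b = np.array([4., 5., 6.])

P = np.outer(a, b)        # open product a (x) b, (A.30)
W = 0.5 * (P - P.T)       # its skew-symmetric part, (A.32)

assert np.isclose(np.trace(P), a @ b)        # (A.31): tr{a (x) b} = a . b = 32

# the off-diagonal entries of W carry the cross product components
axial = 2.0 * np.array([W[1, 2], W[2, 0], W[0, 1]])
assert np.allclose(axial, np.cross(a, b))    # [-3., 6., -3.]
```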
Frequently n-tuples are considered as functions of the coordinate positions $x_1, x_2, x_3$ and time $t$. In this case the n-tuple is written $\mathbf{r}(x_1, x_2, x_3, t)$, which means that each element of r is a function of $x_1, x_2, x_3$, and $t$,

$$\mathbf{r}(x_1, x_2, x_3, t) = [r_1(x_1, x_2, x_3, t),\; r_2(x_1, x_2, x_3, t),\; \ldots,\; r_n(x_1, x_2, x_3, t)]. \quad (A.33)$$
Again letting the operator $\mathcal{D}$ stand for a total derivative, a partial derivative with respect to $x_1, x_2, x_3$, or $t$, or a definite or indefinite (single or multiple) integral, the operation of the operator on the n-tuple follows the same rule as the multiplication of an n-tuple by a scalar, thus

$$\mathcal{D}\mathbf{r}(x_1, x_2, x_3, t) = [\mathcal{D}r_1(x_1, x_2, x_3, t),\; \mathcal{D}r_2(x_1, x_2, x_3, t),\; \ldots,\; \mathcal{D}r_n(x_1, x_2, x_3, t)]. \quad (A.34)$$

The following distributive laws connect n-tuple addition and operator operations:

$$\mathcal{D}(\mathbf{r} + \mathbf{t}) = \mathcal{D}\mathbf{r} + \mathcal{D}\mathbf{t} \quad \text{and} \quad (\mathcal{D}_1 + \mathcal{D}_2)\mathbf{r} = \mathcal{D}_1\mathbf{r} + \mathcal{D}_2\mathbf{r}, \quad (A.35)$$

where $\mathcal{D}_1$ and $\mathcal{D}_2$ are two different operators.

Problems
A.4.1. Find the derivative of the n-tuple $\mathbf{r}(x_1, x_2, x_3, t) = [x_1 x_2 x_3,\; 10 x_1 x_2,\; \cosh a x_3]^T$ with respect to $x_3$.

A.4.2. Find the symmetric and skew-symmetric parts of the matrix $\mathbf{r} \otimes \mathbf{s}$ where $\mathbf{r} = [1, 2, 3, 4]$ and $\mathbf{s} = [5, 6, 7, 8]$.
A.5 Linear Transformations
A system of linear equations
$$r_1 = A_{11} t_1 + A_{12} t_2 + \cdots + A_{1n} t_n,$$
$$r_2 = A_{21} t_1 + A_{22} t_2 + \cdots + A_{2n} t_n,$$
$$\vdots$$
$$r_n = A_{n1} t_1 + A_{n2} t_2 + \cdots + A_{nn} t_n, \quad (A.36)$$

may be contracted horizontally using the summation symbol, thus
$$r_1 = A_{1k} t_k,$$
$$r_2 = A_{2k} t_k,$$
$$\vdots$$
$$r_n = A_{nk} t_k. \quad (A.37)$$
Introduction of the free index convention condenses this system of equations vertically,

$$r_i = A_{ik} t_k. \quad (A.38)$$
This result may also be represented in the matrix notation as a combination of n-tuples, r and t, and a square matrix A,

$$\mathbf{r} = \mathbf{A} \cdot \mathbf{t}, \quad (A.39)$$

where the dot between A and t indicates that the summation is with respect to one index of A and one index of t, or

$$\begin{bmatrix} r_1 \\ r_2 \\ \vdots \\ r_n \end{bmatrix} = \begin{bmatrix} A_{11} & A_{12} & \cdots & A_{1n} \\ A_{21} & A_{22} & \cdots & A_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ A_{n1} & A_{n2} & \cdots & A_{nn} \end{bmatrix} \begin{bmatrix} t_1 \\ t_2 \\ \vdots \\ t_n \end{bmatrix}, \quad (A.40)$$

if the operation of the matrix A upon the column matrix t is interpreted as the operation of the square matrix upon the n-tuple defined by (A.38). This is an operation very similar to square matrix multiplication. This may be seen easily by rewriting the n-tuple in (A.40) as the first column of a square matrix whose entries are all otherwise zero; thus the operation is one of multiplication of one square matrix by another:

$$\begin{bmatrix} r_1 \\ r_2 \\ \vdots \\ r_n \end{bmatrix} = \begin{bmatrix} A_{11} & A_{12} & \cdots & A_{1n} \\ A_{21} & A_{22} & \cdots & A_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ A_{n1} & A_{n2} & \cdots & A_{nn} \end{bmatrix} \begin{bmatrix} t_1 & 0 & \cdots & 0 \\ t_2 & 0 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ t_n & 0 & \cdots & 0 \end{bmatrix}. \quad (A.41)$$
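The equivalence of the indicial form (A.38) and the matrix form (A.39) can be demonstrated with a few lines of NumPy (the particular A and t are arbitrary):

```python
import numpy as np

A = np.array([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])
t = np.array([1., 0., 2.])
n = len(t)

# (A.38): r_i = A_ik t_k, written out with an explicit sum over k
r_indicial = np.array([sum(A[i, k] * t[k] for k in range(n))
                       for i in range(n)])

# (A.39): r = A . t as a single matrix operation
r_matrix = A @ t

assert np.allclose(r_indicial, r_matrix)   # [ 7. 16. 25.]
```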
The operation of the square matrix A on the n-tuple t is called a linear transformation of t into the n-tuple r. The linearity property is reflected in the fact that A applied to the sum $\mathbf{r} + \mathbf{t}$ follows the distributive law $\mathbf{A} \cdot (\mathbf{r} + \mathbf{t}) = \mathbf{A} \cdot \mathbf{r} + \mathbf{A} \cdot \mathbf{t}$ and that multiplication by a scalar $a$ follows the rule $a(\mathbf{A} \cdot \mathbf{r}) = \mathbf{A} \cdot (a\mathbf{r})$. These two properties may be combined into one, $\mathbf{A} \cdot (a\mathbf{r} + b\mathbf{t}) = a\mathbf{A} \cdot \mathbf{r} + b\mathbf{A} \cdot \mathbf{t}$, where $a$ and $b$ are scalars. The composition of linear transformations is again a linear transformation. Consider the linear transformation $\mathbf{t} = \mathbf{B} \cdot \mathbf{u}$, $\mathbf{u} \to \mathbf{t}$ (meaning u is transformed into t), which is combined with the linear transformation (A.39), $\mathbf{r} = \mathbf{A} \cdot \mathbf{t}$, $\mathbf{t} \to \mathbf{r}$, to transform $\mathbf{u} \to \mathbf{r}$; thus $\mathbf{r} = \mathbf{A} \cdot \mathbf{B} \cdot \mathbf{u}$, and if we let $\mathbf{C} \equiv \mathbf{A} \cdot \mathbf{B}$, then $\mathbf{r} = \mathbf{C} \cdot \mathbf{u}$. The result of the composition of the two linear transformations, $\mathbf{r} = \mathbf{A} \cdot \mathbf{t}$ and $\mathbf{t} = \mathbf{B} \cdot \mathbf{u}$, is then a new linear transformation $\mathbf{r} = \mathbf{C} \cdot \mathbf{u}$, where the square matrix C is given by the matrix product A·B. To verify that this is, in fact, a matrix multiplication, the composition of transformations is done again in the indicial notation. The transformation $\mathbf{t} = \mathbf{B} \cdot \mathbf{u}$ in the indicial notation,
$$t_k = B_{km} u_m, \quad (A.42)$$

is substituted into $\mathbf{r} = \mathbf{A} \cdot \mathbf{t}$ in the indicial notation (A.38),

$$r_i = A_{ik} B_{km} u_m, \quad (A.43)$$

which may be rewritten as

$$r_i = C_{im} u_m, \quad (A.44)$$

where C is defined by

$$C_{im} = A_{ik} B_{km}. \quad (A.45)$$

Comparison of (A.45) with (A.20) shows that C is the matrix product of A and B, $\mathbf{C} = \mathbf{A} \cdot \mathbf{B}$. The calculation from (A.42) to (A.45) may be repeated without using the Einstein summation convention; the calculation will be similar to the one above with the exception that the summation symbols will appear.

Example A.5.1 Determine the result $\mathbf{r} = \mathbf{C} \cdot \mathbf{u}$ of the composition of the two linear transformations, $\mathbf{r} = \mathbf{A} \cdot \mathbf{t}$ and $\mathbf{t} = \mathbf{B} \cdot \mathbf{u}$, where A and B are given by

$$\mathbf{A} = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}, \qquad \mathbf{B} = \begin{bmatrix} 10 & 11 & 12 \\ 13 & 14 & 15 \\ 16 & 17 & 18 \end{bmatrix}.$$
Solution: The matrix product A·B yields the square matrix C representing the composed linear transformation,

$$\mathbf{C} = \mathbf{A} \cdot \mathbf{B} = \begin{bmatrix} 84 & 90 & 96 \\ 201 & 216 & 231 \\ 318 & 342 & 366 \end{bmatrix}.$$
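Composition is also easy to verify numerically: applying B and then A to an arbitrary u must agree with applying C = A·B once. A minimal sketch:

```python
import numpy as np

A = np.array([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])
B = np.array([[10., 11., 12.], [13., 14., 15.], [16., 17., 18.]])
C = A @ B                      # the composed transformation, (A.45)

u = np.array([1., -1., 2.])    # an arbitrary n-tuple
assert np.allclose(C @ u, A @ (B @ u))   # r = C.u equals r = A.(B.u)
```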
It is important to be able to construct the inverse of a linear transformation $\mathbf{r} = \mathbf{A} \cdot \mathbf{t}$, namely $\mathbf{t} = \mathbf{A}^{-1} \cdot \mathbf{r}$, if it exists. The inverse transformation exists if $\mathbf{A}^{-1}$ can be constructed from A; thus the question is one of the construction of the inverse of a square matrix. The construction of the inverse of a matrix involves the determinant of the matrix and the matrix of the cofactors. Det A denotes the determinant of A. A matrix is said to be singular if its determinant is zero, non-singular if it is not. The cofactor of the element $A_{ij}$ of A is denoted by $\mathrm{co}A_{ij}$ and is equal to $(-1)^{i+j}$ times the determinant of a matrix constructed from the matrix A by deleting the row and column in which the element $A_{ij}$ occurs. CoA denotes a matrix formed of the cofactors $\mathrm{co}A_{ij}$.

Example A.5.2 Compute the matrix of cofactors of A,

$$\mathbf{A} = \begin{bmatrix} a & d & e \\ d & b & f \\ e & f & c \end{bmatrix}.$$

Solution: The cofactors of the distinct elements of the matrix A are $\mathrm{co}\,a = bc - f^2$, $\mathrm{co}\,b = ac - e^2$, $\mathrm{co}\,c = ab - d^2$, $\mathrm{co}\,d = -(dc - fe)$, $\mathrm{co}\,e = df - eb$, and $\mathrm{co}\,f = -(af - de)$; thus the matrix of cofactors of A is

$$\mathrm{co}\mathbf{A} = \begin{bmatrix} bc - f^2 & -(dc - fe) & df - eb \\ -(dc - fe) & ac - e^2 & -(af - de) \\ df - eb & -(af - de) & ab - d^2 \end{bmatrix}.$$
The formula for the inverse of A is written in terms of coA as

$$\mathbf{A}^{-1} = \frac{(\mathrm{co}\mathbf{A})^T}{\mathrm{Det}\,\mathbf{A}}, \quad (A.46)$$

where $(\mathrm{co}\mathbf{A})^T$ is the matrix of cofactors transposed. The inverse of a matrix is not defined if the matrix is singular. For every non-singular square matrix A the inverse of A can be constructed, thus

$$\mathbf{A} \cdot \mathbf{A}^{-1} = \mathbf{A}^{-1} \cdot \mathbf{A} = \mathbf{1}. \quad (A.47)$$
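A numerical check of the cofactor construction (A.46) against a library inverse; a NumPy sketch for an arbitrary non-singular 3 by 3 matrix:

```python
import numpy as np

def cofactor_matrix(A):
    """Matrix of cofactors: coA_ij = (-1)**(i+j) times the ij minor."""
    n = A.shape[0]
    co = np.empty_like(A)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            co[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return co

A = np.array([[2., 1., 0.], [1., 3., 1.], [0., 1., 2.]])  # non-singular
A_inv = cofactor_matrix(A).T / np.linalg.det(A)           # (A.46)

assert np.allclose(A_inv, np.linalg.inv(A))
assert np.allclose(A @ A_inv, np.eye(3))                  # (A.47)
```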
It follows then that the inverse of a linear transformation $\mathbf{r} = \mathbf{A} \cdot \mathbf{t}$, namely $\mathbf{t} = \mathbf{A}^{-1} \cdot \mathbf{r}$, exists if the matrix A is non-singular, $\mathrm{Det}\,\mathbf{A} \neq 0$.

Example A.5.3 Show that the determinant of a 3 by 3 open product matrix, $\mathbf{a} \otimes \mathbf{b}$, is zero.

Solution:

$$\mathrm{Det}\{\mathbf{a} \otimes \mathbf{b}\} = \mathrm{Det}\begin{bmatrix} a_1 b_1 & a_1 b_2 & a_1 b_3 \\ a_2 b_1 & a_2 b_2 & a_2 b_3 \\ a_3 b_1 & a_3 b_2 & a_3 b_3 \end{bmatrix} = a_1 b_1 (a_2 b_2 a_3 b_3 - a_2 b_3 a_3 b_2) - a_1 b_2 (a_2 b_1 a_3 b_3 - a_2 b_3 a_3 b_1) + a_1 b_3 (a_2 b_1 a_3 b_2 - a_2 b_2 a_3 b_1) = 0,$$

since each parenthesized term vanishes identically.
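The same fact checked numerically (any a and b give a rank-one, hence singular, matrix):

```python
import numpy as np

a, b = np.array([1., 2., 3.]), np.array([4., 5., 6.])
print(np.linalg.det(np.outer(a, b)))   # 0.0 (up to round-off)
```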