
Lecture 3: Determinants & Matrix Inversion

Methods to Invert a Matrix

The approaches available to find the inverse of a matrix are extensive and diverse. All methods seek to solve a linear system of equations that can be expressed in matrix format as

    [A]{x} = {b}

for the unknowns contained in the vector {x}, i.e.,

    {x} = [A]^(-1) {b}

The methods used to accomplish this can be loosely grouped into the following three categories:

• methods that explicitly calculate {x},
• methods that implicitly calculate {x}, and
• iterative methods that calculate {x}.

Of course, hybrid methods exist that are combinations of two or more of the categories listed above.

Consider the following list of methods, which is not comprehensive:

1. Explicit methods for sparse matrices – includes Cramer's rule, which is a specific case of using self-adjoint matrices. Variations include:
   a. Gauss elimination
      - Row echelon form; all entries below a nonzero entry in the matrix are zero
      - Bareiss algorithm; every element computed is the determinant of a submatrix of [A]
      - Tri-diagonal matrix algorithm; a special form of Gauss elimination
   b. Gauss-Jordan elimination

2. LDU decomposition – an implicit method that factors [A] into a product of a lower triangular matrix, a diagonal matrix, and an upper triangular matrix. Variations include:
   a. LU reduction – a special parallelized version of an LDU decomposition algorithm
      - Crout matrix decomposition is a special type of LU decomposition

3. Cholesky LDL decomposition – an implicit method that decomposes [A], when it is a positive-definite matrix, into the product of a lower triangular matrix, a diagonal matrix, and the conjugate transpose
   a. Frontal solvers used in finite element methods
   b. Nested dissection – for symmetric matrices, based on graph partitioning
   c. Minimum degree algorithm
   d. Symbolic Cholesky decomposition

5. Iterative methods:
   a.
Gauss-Seidel methods
      • Successive over-relaxation (SOR)
      • Back-fit algorithms
   b. Conjugate gradient methods (CG) – used often in optimization problems
      • Nonlinear conjugate gradient method
      • Biconjugate gradient method (BiCG)
      • Biconjugate gradient stabilized method (BiCGSTAB)
      • Conjugate residual method
   c. Jacobi method
   d. Modified Richardson iteration
   e. Generalized minimal residual method (GMRES) – based on the Arnoldi iteration
   f. Chebyshev iteration – avoids inner products but needs bounds on the spectrum
   g. Stone's method (SIP: Strongly Implicit Procedure) – uses an incomplete LU decomposition
   h. Kaczmarz method
   i. Iterative refinement – a procedure to turn an inaccurate solution into a more accurate one

6. Levinson recursion – for Toeplitz matrices

7. SPIKE algorithm – a hybrid parallel solver for narrow-banded matrices

Details and derivations are presented for several of the methods in this section of the notes. Concepts associated with determinants, cofactors and minors are presented first.

The Determinant of a Square Matrix

A square matrix of order n (an n x n matrix), i.e.,

            a11  a12  ...  a1n
    [A] =   a21  a22  ...  a2n
             :    :         :
            an1  an2  ...  ann

possesses a uniquely defined scalar that is designated as the determinant of the matrix, or merely the determinant:

    det[A] = |A|

Observe that only square matrices possess determinants.

Vertical lines, not brackets, designate a determinant, and while det[A] is a number and has no elements, it is customary to represent it as an array of the elements of the matrix:

             | a11  a12  ...  a1n |
    det[A] = | a21  a22  ...  a2n |
             |  :    :         :  |
             | an1  an2  ...  ann |

A general procedure for finding the value of a determinant is sometimes called "expansion by minors." We will discuss this method after going over some ground rules for operating with determinants.
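To make the iterative family above concrete, here is a minimal sketch of the Gauss-Seidel method in Python. The matrix, right-hand side, tolerance, and iteration cap are illustrative assumptions, not values from the notes; the example system is diagonally dominant so the iteration is guaranteed to converge.

```python
def gauss_seidel(A, b, tol=1e-10, max_iter=500):
    """Solve [A]{x} = {b} iteratively, sweeping through the unknowns
    and always using the most recently updated values."""
    n = len(b)
    x = [0.0] * n  # initial guess
    for _ in range(max_iter):
        max_change = 0.0
        for i in range(n):
            # sum of all terms in row i except the diagonal one
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new = (b[i] - s) / A[i][i]
            max_change = max(max_change, abs(x_new - x[i]))
            x[i] = x_new
        if max_change < tol:
            break
    return x

# Diagonally dominant example system (values chosen for illustration):
#   3*x1 + 1*x2 = 5
#   1*x1 + 2*x2 = 5
A = [[3.0, 1.0], [1.0, 2.0]]
b = [5.0, 5.0]
x = gauss_seidel(A, b)  # converges to x1 = 1, x2 = 2
```

Each sweep refines the current guess in place, which is what distinguishes Gauss-Seidel from the Jacobi method, where a full sweep is computed from the previous iterate before any value is overwritten.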
Rules for Operating with Determinants

Rules pertaining to the manipulation of determinants are presented in this section without formal proof. Their validity is demonstrated through examples presented at the end of the section.

Rule #1: Interchanging any row (or column) of a determinant with its immediately adjacent row (or column) flips the sign of the determinant.

Rule #2: The multiplication of any single row (column) of a determinant by a scalar constant is equivalent to the multiplication of the determinant by the scalar.

Rule #3: If any two rows (columns) of a determinant are identical, the value of the determinant is zero and the matrix from which the determinant is derived is said to be singular.

Rule #4: If any row (column) of a determinant contains nothing but zeroes, then the matrix from which the determinant is derived is singular.

Rule #5: If any two rows (two columns) of a determinant are proportional, i.e., the two rows (two columns) are linearly dependent, then the determinant is zero and the matrix from which the determinant is derived is singular.

Rule #6: If the elements of any row (column) of a determinant are added to or subtracted from the corresponding elements of another row (column), the value of the determinant is unchanged.

Rule #6a: If the elements of any row (column) of a determinant are multiplied by a constant and then added to or subtracted from the corresponding elements of another row (column), the value of the determinant is unchanged.

Rule #7: The value of the determinant of a diagonal matrix is equal to the product of the terms on the diagonal.

Rule #8: The value of the determinant of a matrix is equal to the value of the determinant of the transpose of the matrix.

Rule #9: The determinant of the product of two matrices is equal to the product of the determinants of the two matrices.
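Several of these rules can be checked numerically. Below is a short sketch that verifies Rules #1, #3, #8, and #9 for 2 x 2 matrices; the particular matrices are illustrative assumptions, and the 2 x 2 determinant formula ad − bc is used directly.

```python
def det2(M):
    """Determinant of a 2 x 2 matrix: ad - bc."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul2(A, B):
    """Product of two 2 x 2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

# Rule #1: interchanging the two rows flips the sign of the determinant
assert det2([A[1], A[0]]) == -det2(A)

# Rule #3: identical rows give a zero determinant (a singular matrix)
assert det2([A[0], A[0]]) == 0

# Rule #8: det[A] equals the determinant of the transpose of [A]
At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]
assert det2(At) == det2(A)

# Rule #9: det([A][B]) = det[A] * det[B]
assert det2(matmul2(A, B)) == det2(A) * det2(B)
```

The same checks extend to any order once a general determinant routine is available, such as the expansion by minors developed later in these notes.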
Rule #10: If the determinant of the product of two square matrices is zero, then at least one of the two matrices is singular.

Rule #11: If an m x n rectangular matrix [A] is post-multiplied by an n x m rectangular matrix [B], the resulting square matrix [C] = [A][B] of order m will, in general, be singular if m > n.

In Class Example

Minors and Cofactors

Consider the nth order determinant:

             | a11  a12  ...  a1n |
    det[A] = | a21  a22  ...  a2n |
             |  :    :         :  |
             | an1  an2  ...  ann |

The mth order minor of the nth order matrix is the determinant formed by deleting (n – m) rows and (n – m) columns in the nth order determinant. For example, the minor |M|ir of the determinant |A| is formed by deleting the ith row and the rth column. Because |A| is an nth order determinant, the minor |M|ir is of order m = n – 1 and contains m² elements.

In general, a minor formed by deleting p rows and p columns in the nth order determinant |A| is an (n – p)th order minor. If p = n – 1, the minor is of first order and contains only a single element from |A|. From this it is easy to see that the determinant |A| contains n² first order minors, each containing a single element.

When dealing with minors other than the (n – 1)th order, the designation of the eliminated rows and columns of the determinant |A| must be considered carefully. It is best to consider consecutive rows j, k, l, m, … and consecutive columns r, s, t, u, … so that the (n – 1)th, (n – 2)th, and (n – 3)th order minors would be designated, respectively, as |M|j,r, |M|jk,rs and |M|jkl,rst.

The complementary minor, or the complement of the minor, is designated as |N| (with subscripts).
This minor is the determinant formed by placing the elements that lie at the intersections of the deleted rows and columns of the original determinant into a square array, in the same order that they appear in the original determinant. For example, given the determinant from the previous page, then

    |N|23 = a23

    |N|23,31 = | a21  a23 |
               | a31  a33 |

The algebraic complement of the minor |M| is the "signed" complementary minor. If a minor is obtained by deleting rows i, k, l and columns r, s, t from the determinant |A|, the minor is designated

    |M|ikl,rst

the complementary minor is designated

    |N|ikl,rst

and the algebraic complement is designated

    (−1)^(i+k+l+ ... +r+s+t) |N|ikl,rst

The cofactor, designated with capital letters and subscripts, is the signed (n – 1)th order minor formed from the nth order determinant. Suppose that the (n – 1)th order minor is formed by deleting the ith row and jth column from the determinant |A|. Then the corresponding cofactor is

    Aij = (−1)^(i+j) |M|ij

Observe that the cofactor has no meaning for minors with orders smaller than (n – 1) unless the minor itself is being treated as a determinant of order one less than the determinant |A| from which it was derived.

Also observe that when the minor is of order (n – 1), the product of the cofactor and the complement is equal to the product of the minor and the algebraic complement.

We can assemble the cofactors of a square matrix of order n (an n x n matrix) into a square cofactor matrix, i.e.,

              A11  A12  ...  A1n
    [A]^C =   A21  A22  ...  A2n
               :    :         :
              An1  An2  ...  Ann

So when the elements of a matrix are denoted with capital letters, the matrix represents a matrix of cofactors for another matrix.
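The cofactor matrix defined above can be built mechanically. The sketch below constructs it for an assumed 3 x 3 example and checks the well-known adjugate identity — [A] times the transpose of its cofactor matrix equals det[A] times the identity — which is the basis of the classical-adjoint inversion formula; the specific matrix is an illustrative assumption.

```python
def minor_matrix(M, i, j):
    """The submatrix left after deleting row i and column j."""
    return [row[:j] + row[j+1:] for k, row in enumerate(M) if k != i]

def det(M):
    """Determinant via cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor_matrix(M, 0, j))
               for j in range(len(M)))

def cofactor_matrix(M):
    """[A]^C: entry (i, j) is the cofactor A_ij = (-1)^(i+j) |M|_ij."""
    n = len(M)
    return [[(-1) ** (i + j) * det(minor_matrix(M, i, j)) for j in range(n)]
            for i in range(n)]

A = [[2, 0, 1],
     [1, 3, 0],
     [0, 1, 1]]
C = cofactor_matrix(A)
d = det(A)

# Adjugate identity: (A * C^T)_ij = det[A] if i == j, else 0
n = len(A)
for i in range(n):
    for j in range(n):
        entry = sum(A[i][k] * C[j][k] for k in range(n))
        assert entry == (d if i == j else 0)
```

The off-diagonal entries vanish because they are expansions of a determinant with two identical rows (Rule #3), which is why the identity holds for any square matrix.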
In Class Example

Rules for Operations with Cofactors

The determinant of a three-by-three matrix can be computed via expansion of the matrix by minors as follows:

    | a11  a12  a13 |
    | a21  a22  a23 |  =  a11 | a22  a23 |  −  a21 | a12  a13 |  +  a31 | a12  a13 |
    | a31  a32  a33 |         | a32  a33 |         | a32  a33 |         | a22  a23 |

This can be confirmed using the classic expansion technique for 3 x 3 determinants.
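The confirmation mentioned above can be carried out numerically: expanding down the first column, as in the formula, must agree with the classic direct (rule-of-Sarrus) expansion. The particular matrix below is an illustrative assumption.

```python
def det2(M):
    """Determinant of a 2 x 2 matrix: ad - bc."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[2, 5, 1],
     [4, 3, 7],
     [6, 0, 8]]

# Expansion by minors down the first column, as in the formula above:
# strike out row i and column 1 for each entry a_i1, with alternating signs.
expansion = ( A[0][0] * det2([[A[1][1], A[1][2]], [A[2][1], A[2][2]]])
            - A[1][0] * det2([[A[0][1], A[0][2]], [A[2][1], A[2][2]]])
            + A[2][0] * det2([[A[0][1], A[0][2]], [A[1][1], A[1][2]]]) )

# Classic direct expansion (rule of Sarrus) for a 3 x 3 determinant:
sarrus = ( A[0][0]*A[1][1]*A[2][2] + A[0][1]*A[1][2]*A[2][0]
         + A[0][2]*A[1][0]*A[2][1] - A[0][2]*A[1][1]*A[2][0]
         - A[0][0]*A[1][2]*A[2][1] - A[0][1]*A[1][0]*A[2][2] )

assert expansion == sarrus  # both give 80 for this matrix
```

Expansion by minors works down any row or column; the first column is chosen here only to match the formula in the notes.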