Minors and Cofactors: Consider the Nth-Order Determinant

Section 5: Linear Systems and Matrices
Washkewicz College of Engineering

Solution Methods – Systems of Linear Equations

Earlier we saw that a generic system of n equations in n unknowns,

    a11 x1 + a12 x2 + ... + a1n xn = b1
    a21 x1 + a22 x2 + ... + a2n xn = b2
     .
     .
    an1 x1 + an2 x2 + ... + ann xn = bn

could be represented in the following matrix format:

    [ a11  a12  ...  a1n ] { x1 }   { b1 }
    [ a21  a22  ...  a2n ] { x2 } = { b2 }
    [  .    .         .  ] {  . }   {  . }
    [ an1  an2  ...  ann ] { xn }   { bn }

or, compactly,

    [A]{x} = {b}

The elements of the square matrix [A] and of the vector {b} are known; our goal is to find the elements of the vector {x}.

Finding the elements of the {x} vector can be accomplished using approaches drawn from an extensive and quite diverse library of methods. All of these methods seek to solve, for the vector {x}, a linear system of equations expressed in the matrix format above. If we could simply "divide" this expression by the matrix [A], i.e.,

    {x} = [A]^-1 {b}

then we could easily formulate the vector {x}. As we will see, this task is labor intensive. The methods used to accomplish it can be broadly grouped into two categories:

    1. direct methods, and
    2. iterative methods.

Each category contains a number of methods, and we will look at several in each. Keep in mind that hybrid methods exist that combine methods from the two categories.

Basic Definitions

In scalar algebra we easily make use of the concepts of zero and one as follows:

    α + 0 = 0 + α = α        α · 1 = 1 · α = α

where α is a scalar quantity. A nonzero scalar possesses a reciprocal, or multiplicative inverse, that when applied to the scalar quantity produces one:

    α · α^-1 = α^-1 · α = 1

The above can be extended to n × n matrices.
Here the scalar one (1) becomes the identity matrix [I] and zero becomes the null matrix [0], i.e.,

    [A] + [0] = [0] + [A] = [A]        [A][I] = [I][A] = [A]

At this point we note that if there is an n × n matrix [A]^-1 that pre- and post-multiplies the matrix [A] such that

    [A]^-1 [A] = [A][A]^-1 = [I]

then the matrix [A]^-1 is termed the inverse of the matrix [A] with respect to matrix multiplication. The matrix [A] is said to be invertible, or non-singular, if [A]^-1 exists, and non-invertible, or singular, if [A]^-1 does not exist.

The concept of matrix inversion is important in the study of structural analysis with matrix methods. We will study this topic in detail several times, and refer to it often throughout the course. We will formally define the inverse of a matrix through the determinant of the matrix and its adjoint matrix; we will do that in a formal manner after revisiting the properties of determinants and the cofactors of a matrix. However, there are a number of methods that enable one to find the solution without finding the inverse of the matrix. Probably the best known of these is Cramer's rule, followed by Gaussian elimination and the Gauss-Jordan method.

Cramer's Rule – Three Equations and Three Unknowns

It is unfortunate that the method for the solution of linear equations that students usually remember from secondary education is Cramer's rule, which is really an expansion by minors (a topic discussed subsequently). This method is rather inefficient and relatively difficult to program. However, since it forms a standard by which other methods can be judged, we will review it here for a system of three equations and three unknowns. The more general formulation is inductive.
Consider the following system of three equations in the three unknowns {x1, x2, x3}:

    { b1 }   [ a11  a12  a13 ] { x1 }
    { b2 } = [ a21  a22  a23 ] { x2 }
    { b3 }   [ a31  a32  a33 ] { x3 }

where we identify

          [ a11  a12  a13 ]
    [A] = [ a21  a22  a23 ]
          [ a31  a32  a33 ]

and

           [ b1  a12  a13 ]          [ a11  b1  a13 ]          [ a11  a12  b1 ]
    [A1] = [ b2  a22  a23 ]   [A2] = [ a21  b2  a23 ]   [A3] = [ a21  a22  b2 ]
           [ b3  a32  a33 ]          [ a31  b3  a33 ]          [ a31  a32  b3 ]

i.e., [Aj] is the matrix [A] with its jth column replaced by the vector {b}. The solution is then formulated as follows:

         det[A1]        det[A2]        det[A3]
    x1 = -------   x2 = -------   x3 = -------
         det[A]         det[A]         det[A]

The proof follows from the solution of a system of two equations and two unknowns. For a system of n equations in n unknowns this solution method requires evaluating the determinant of the matrix [A] as well as the n matrices [Aj] (see above) in which the jth column has been replaced by the elements of the vector {b}. Evaluation of the determinant of an n × n matrix requires about 3n² operations, and this must be repeated for each unknown. Thus solution by Cramer's rule will require at least 3n³ operations.

Gaussian Elimination

Let us consider a simpler algorithm, which forms the basis for one of the most reliable and stable direct methods for the solution of linear equations. It also provides a method for the inversion of matrices. Let us begin by describing the method and then try to understand why it works. Consider representing the set of linear equations as

    ( a11  a12  ...  a1n | b1 )
    ( a21  a22  ...  a2n | b2 )
    (  .    .         .  |  . )
    ( an1  an2  ...  ann | bn )

Here we have suppressed the presence of the elements of the solution vector {x}, and parentheses are used in lieu of brackets and braces so as not to imply matrix multiplication in this expression.
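The 3×3 recipe above translates directly into code. The following is a minimal sketch in pure Python (the matrix and vector values are made up for illustration): det3 expands the determinant along the first row, and cramer3 forms each [Aj] by replacing column j of [A] with {b}.

```python
# Cramer's rule for a 3x3 system [A]{x} = {b} -- a sketch for illustration,
# not an efficient method. The numerical values below are made up.

def det3(m):
    """Determinant of a 3x3 matrix, expanded along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def cramer3(A, b):
    """Solve [A]{x} = {b} by forming each [Aj] with column j replaced by {b}."""
    d = det3(A)
    if d == 0:
        raise ValueError("matrix is singular; Cramer's rule does not apply")
    x = []
    for j in range(3):
        Aj = [row[:] for row in A]      # copy of [A]
        for i in range(3):
            Aj[i][j] = b[i]             # replace column j with {b}
        x.append(det3(Aj) / d)
    return x

A = [[2.0, 1.0, -1.0],
     [1.0, 3.0,  2.0],
     [1.0, 0.0,  1.0]]
b = [3.0, 13.0, 4.0]
print(cramer3(A, b))   # -> [1.6, 2.2, 2.4]
```

This made-up system was chosen so that the exact solution is (1.6, 2.2, 2.4). Even at n = 3 the determinant bookkeeping is noticeable, which is why elimination methods are preferred in practice.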
We will refer to this array of coefficients and constants as an "augmented matrix."

Now we perform a series of operations on the rows and columns of the coefficient matrix [A], and we shall carry the row operations through to include the elements of the constant vector {b}. The rows are treated as if they were the equations, so that anything done to one element of a row is done to all of them.

Start by dividing each row, including its entry from the vector {b}, by the lead element of the row – initially a11. The first row will then have a one (1) in the first column. That row is next multiplied by an appropriate constant and subtracted from each of the lower rows, so that all rows but the first have a zero in the first column. This is repeated for each succeeding row: the second row is divided by its second element, producing a one in the second column, and is then multiplied by appropriate constants and subtracted from the lower rows, producing zeros in the second column. The process is repeated until the following matrix is obtained:

    ( 1  α12  α13  ...  α1n      | β1     )
    ( 0   1   α23  ...  α2n      | β2     )
    ( .            .             |  .     )
    ( 0   0   ...   1   α(n-1)n  | β(n-1) )
    ( 0   0   ...   0    1       | βn     )

where the αij and βi are the coefficients and constants as modified by the row operations. When the diagonal coefficients are all unity, the last element of the vector {β} contains the value of xn, i.e.,

    xn = βn

This can be used in the (n-1)th equation, represented by the second-to-last row, to obtain x(n-1), and so on right up to the first row, which yields the value of x1. In general,

    xi = βi − Σ αij xj ,    the sum running over j = i+1, ..., n.

Gauss-Jordan Elimination

A simple modification of the Gauss elimination method allows us to obtain the inverse of the matrix [A] as well as the solution vector {x}. Consider representing the set of linear equations as

    ( a11  a12  ...  a1n | b1 | 1  0  ...  0 )
    ( a21  a22  ...  a2n | b2 | 0  1  ...  0 )
    (  .    .         .  |  . | .          . )
    ( an1  an2  ...  ann | bn | 0  0  ...  1 )

Now the unit matrix [I] is included in the augmented matrix.
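The elimination and back-substitution steps of basic Gaussian elimination can be sketched as follows. This is a minimal version with no pivoting, so it assumes every lead element it divides by is nonzero; the matrix values are made up.

```python
# Gaussian elimination to the unit-diagonal triangular form, followed by
# back substitution: xi = beta_i - sum over j > i of alpha_ij * xj.
# No pivoting (assumes nonzero lead elements); example values are made up.

def gauss_solve(A, b):
    n = len(b)
    # Work on an augmented copy ( A | b ) so row operations touch {b} too.
    aug = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        pivot = aug[k][k]                        # assumed nonzero here
        for j in range(k, n + 1):
            aug[k][j] /= pivot                   # put a one on the diagonal
        for i in range(k + 1, n):
            factor = aug[i][k]
            for j in range(k, n + 1):
                aug[i][j] -= factor * aug[k][j]  # zero column k below row k
    # Back substitution from the last row up.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = aug[i][n] - sum(aug[i][j] * x[j] for j in range(i + 1, n))
    return x

A = [[2.0, 1.0, -1.0],
     [1.0, 3.0,  2.0],
     [1.0, 0.0,  1.0]]
b = [3.0, 13.0, 4.0]
print(gauss_solve(A, b))
```

The loop structure mirrors the description above: scale row k so its diagonal entry is one, subtract multiples of it from the lower rows, then recover the unknowns from the bottom up. The exact solution of this made-up system is (1.6, 2.2, 2.4).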
The procedure is carried out as before, the Gauss elimination steps producing zeros in the columns below and to the left of each diagonal element; the elimination is then continued upward so that zeros are produced above the diagonal as well, and at every step the same row operations are conducted on the appended unit matrix. When the [A] portion of the augmented matrix has been reduced to [I], we have both solved the system of equations and, in the columns where [I] originally stood, found the inverse of the original matrix.

Example 5.1

Example 5.2

The Determinant of a Square Matrix

A square matrix of order n (an n × n matrix), i.e.,

          [ a11  a12  ...  a1n ]
    [A] = [ a21  a22  ...  a2n ]
          [  .    .         .  ]
          [ an1  an2  ...  ann ]

possesses a uniquely defined scalar that is designated the determinant of the matrix, or merely the determinant:

    det[A] = |A|

Observe that only square matrices possess determinants.

Vertical lines, not brackets, designate a determinant, and while det[A] is a number and has no elements, it is customary to represent it as an array of the elements of the matrix:

             | a11  a12  ...  a1n |
    det[A] = | a21  a22  ...  a2n |
             |  .    .         .  |
             | an1  an2  ...  ann |

A general procedure for finding the value of a determinant is sometimes called "expansion by minors." We will discuss this method after going over some ground rules for operating with determinants.
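As a numerical aside (a sketch assuming NumPy is available; the matrix values are made up), the determinant and the inverse defined in this section can be computed directly, and the defining property [A]^-1[A] = [A][A]^-1 = [I] verified:

```python
import numpy as np

# A made-up non-singular 3x3 matrix.
A = np.array([[2.0, 1.0, -1.0],
              [1.0, 3.0,  2.0],
              [1.0, 0.0,  1.0]])

d = np.linalg.det(A)       # a single scalar; det[A] = 10 for this example
A_inv = np.linalg.inv(A)   # exists because det[A] != 0 ([A] is non-singular)

print(np.allclose(A_inv @ A, np.eye(3)))   # [A]^-1 [A] = [I]  -> True
print(np.allclose(A @ A_inv, np.eye(3)))   # [A] [A]^-1 = [I]  -> True
```

A singular matrix (det[A] = 0) would make np.linalg.inv raise LinAlgError, mirroring the statement above that a non-invertible matrix has no inverse.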
