Matrix Algebra Part C: Pivoting and Reduced Row Echelon Form


Lecture Notes 1: Matrix Algebra Part C: Pivoting and Reduced Row Echelon Form
Peter J. Hammond, University of Warwick, EC9A0 Maths for Economists
Revised 2018 September 23rd

Lecture Outline

Pivoting to Reach the Reduced Row Echelon Form: Example; The Row Echelon Form; The Reduced Row Echelon Form
Determinants and Inverses: Properties of Determinants; Eight Basic Rules for Determinants; Verifying the Product Rule; Cofactor Expansion; Expansion by Alien Cofactors and the Adjugate Matrix; Invertible Matrices
Minors, Dimensions, and Rank: Column Rank, Row Rank, and Determinantal Rank; Minor Determinants
Solutions to Linear Equations: General Equation Systems

Three Simultaneous Equations

Consider the following system of three simultaneous equations in three unknowns, which depends upon two "exogenous" constants $a$ and $b$:
$$x + y - z = 1$$
$$x - y + 2z = 2$$
$$x + 2y + az = b$$
It can be expressed, using an augmented $3 \times 4$ matrix, as
$$\left(\begin{array}{ccc|c} 1 & 1 & -1 & 1 \\ 1 & -1 & 2 & 2 \\ 1 & 2 & a & b \end{array}\right)$$
Perhaps even more useful is the doubly augmented $3 \times 7$ matrix
$$\left(\begin{array}{ccc|c|ccc} 1 & 1 & -1 & 1 & 1 & 0 & 0 \\ 1 & -1 & 2 & 2 & 0 & 1 & 0 \\ 1 & 2 & a & b & 0 & 0 & 1 \end{array}\right)$$
whose last 3 columns are those of the $3 \times 3$ identity matrix $I_3$.

The First Pivot Step

Start with the doubly augmented $3 \times 7$ matrix above. First, we pivot about the element in row 1 and column 1 to eliminate, or "zeroize", the other elements of column 1. This elementary row operation requires us to subtract row 1 from both rows 2 and 3. It is equivalent to multiplying by the lower triangular matrix
$$E_1 = \begin{pmatrix} 1 & 0 & 0 \\ -1 & 1 & 0 \\ -1 & 0 & 1 \end{pmatrix}$$
Note: this is the result of applying the same row operations to $I_3$. The resulting $3 \times 7$ matrix is
$$\left(\begin{array}{ccc|c|ccc} 1 & 1 & -1 & 1 & 1 & 0 & 0 \\ 0 & -2 & 3 & 1 & -1 & 1 & 0 \\ 0 & 1 & a+1 & b-1 & -1 & 0 & 1 \end{array}\right)$$

The Second Pivot Step

After augmenting again by the identity matrix, we have
$$\left(\begin{array}{ccc|c|ccc|ccc} 1 & 1 & -1 & 1 & 1 & 0 & 0 & 1 & 0 & 0 \\ 0 & -2 & 3 & 1 & -1 & 1 & 0 & 0 & 1 & 0 \\ 0 & 1 & a+1 & b-1 & -1 & 0 & 1 & 0 & 0 & 1 \end{array}\right)$$
Next, we pivot about the element in row 2 and column 2. Specifically, multiply the second row by $-\tfrac{1}{2}$, then subtract the new second row from the third to obtain
$$\left(\begin{array}{ccc|c|ccc|ccc} 1 & 1 & -1 & 1 & 1 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & -\tfrac{3}{2} & -\tfrac{1}{2} & \tfrac{1}{2} & -\tfrac{1}{2} & 0 & 0 & -\tfrac{1}{2} & 0 \\ 0 & 0 & a+\tfrac{5}{2} & b-\tfrac{1}{2} & -\tfrac{3}{2} & \tfrac{1}{2} & 1 & 0 & \tfrac{1}{2} & 1 \end{array}\right)$$
Again, the pivot operation is equivalent to multiplying by the lower triangular matrix
$$E_2 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -\tfrac{1}{2} & 0 \\ 0 & \tfrac{1}{2} & 1 \end{pmatrix}$$
which is the result of applying the same row operation to $I_3$.

Case 1: Dependent Equations

In case 1, when $a + \tfrac{5}{2} = 0$, the equation system reduces to:
$$x + y - z = 1$$
$$y - \tfrac{3}{2}z = -\tfrac{1}{2}$$
$$0 = b - \tfrac{1}{2}$$
In case 1A, when $b \neq \tfrac{1}{2}$, neither the last equation, nor the system as a whole, has any solution. In case 1B, when $b = \tfrac{1}{2}$, the third equation is redundant.
In this case, the first two equations have a general solution with $y = \tfrac{3}{2}z - \tfrac{1}{2}$ and $x = z + 1 - y = z + 1 - \tfrac{3}{2}z + \tfrac{1}{2} = \tfrac{3}{2} - \tfrac{1}{2}z$, where $z$ is an arbitrary scalar. In particular, there is a one-dimensional set of solutions along the unique straight line in $\mathbb{R}^3$ that passes through both: (i) $(\tfrac{3}{2}, -\tfrac{1}{2}, 0)$, when $z = 0$; (ii) $(1, 1, 1)$, when $z = 1$.

Case 2: Three Independent Equations

Case 2 occurs when $a + \tfrac{5}{2} \neq 0$, and so the reciprocal $c := 1/(a + \tfrac{5}{2})$ is well defined. Now divide the last row by $a + \tfrac{5}{2}$, or multiply it by $c$, to obtain
$$\left(\begin{array}{ccc|c|ccc} 1 & 1 & -1 & 1 & 1 & 0 & 0 \\ 0 & 1 & -\tfrac{3}{2} & -\tfrac{1}{2} & \tfrac{1}{2} & -\tfrac{1}{2} & 0 \\ 0 & 0 & 1 & (b-\tfrac{1}{2})c & -\tfrac{3}{2}c & \tfrac{1}{2}c & c \end{array}\right)$$
The system has been reduced to row echelon form, in which the leading zeroes of each successive row form the steps (in French, échelons, meaning rungs) of a ladder (or échelle in French) which descends steadily as one goes from left to right.

Case 2: Three Independent Equations, Third Pivot

Next, we zeroize the elements in the third column above row 3. To do so, pivot about the element in row 3 and column 3. This requires adding 1 times the last row to the first, and $\tfrac{3}{2}$ times the last row to the second. In effect, one multiplies by the upper triangular matrix
$$E_3 := \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & \tfrac{3}{2} \\ 0 & 0 & 1 \end{pmatrix}$$
The first three columns of the result are
$$\begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

Case 2: Three Independent Equations, Final Pivot

As already remarked, the first three columns of the matrix we are left with are those shown above. The final pivoting operation involves subtracting the second row from the first, so the first three columns become the identity matrix
$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
This is a matrix in reduced row echelon form because, given the leading non-zero element of any row (if there is one), all elements above this element are zero.

Final Exercise

Exercise:
1. Find the last 4 columns of each $3 \times 7$ matrix produced by these last two pivoting steps.
2. Check that the fourth column solves the original system of 3 simultaneous equations.
3. Check that the last 3 columns form the inverse of the original coefficient matrix.

Definition of Row Echelon Form

Definition: An $m \times n$ matrix $A$ is in row echelon form just in case:
1. The first $r \leq m$ rows $i \in \mathbb{N}_r$ each have a non-zero leading entry $a_{i,\ell_i}$ in column $\ell_i$ such that $a_{ij} = 0$ for all $j < \ell_i$.
2. Each successive leading entry is in a column to the right of the leading entry in the previous row.
That is, given the leading entry $a_{i,\ell_i} \neq 0$ of row $i$, one has $a_{hj} = 0$ for all $h > i$ and all $j \leq \ell_i$.
3. If $r < m$, then any row $i \in \{r+1, \ldots, m\} = \mathbb{N}_m \setminus \mathbb{N}_r$ has no leading entry, because all its elements are zero. Any such row without a leading entry must be below every row with a leading entry.

Examples

Assuming that $\alpha, \beta, \gamma \in \mathbb{R} \setminus \{0\}$, here are three examples of matrices in row echelon form:
$$A = \begin{pmatrix} \alpha & 2 & 0 & 0 \\ 0 & 0 & \beta & 0 \\ 0 & 0 & 0 & \gamma \end{pmatrix}; \quad B = \begin{pmatrix} \alpha & 2 & 0 & 0 \\ 0 & 0 & \beta & 0 \\ 0 & 0 & 0 & \gamma \\ 0 & 0 & 0 & 0 \end{pmatrix}; \quad C = \begin{pmatrix} \alpha & 0 \\ 0 & \beta \\ 0 & 0 \end{pmatrix}$$
Here are three examples of matrices that are not in row echelon form:
$$D = \begin{pmatrix} 0 & 1 \\ 1 & 0 \\ 0 & 0 \end{pmatrix}; \quad E = \begin{pmatrix} 1 & 2 \\ 0 & 1 \\ 0 & 1 \end{pmatrix}; \quad F = \begin{pmatrix} 1 & 0 \\ 0 & 0 \\ 0 & 1 \end{pmatrix}$$

Pivoting to Reach a Generalized Row Echelon Form

Any $m \times n$ matrix $A$ can be transformed into row echelon form by applying a series of determinant-preserving row operations involving non-zero pivot elements.
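As a complement to the worked example above, here is a minimal Python sketch (my own addition, not part of Hammond's notes) of Gauss-Jordan pivoting applied to the doubly augmented matrix [A | b | I3]. The function name `gauss_jordan` and the parameter choice a = 0, b = 0 (which puts us in Case 2) are assumptions for illustration; exact rational arithmetic avoids rounding in the pivot steps.

```python
from fractions import Fraction

def gauss_jordan(M):
    """Reduce M (a list of rows) to reduced row echelon form in place."""
    rows, cols = len(M), len(M[0])
    pivot_row = 0
    for col in range(cols):
        if pivot_row == rows:
            break
        # Find a row at or below pivot_row with a non-zero entry in this column.
        r = next((i for i in range(pivot_row, rows) if M[i][col] != 0), None)
        if r is None:
            continue  # no pivot available in this column
        M[pivot_row], M[r] = M[r], M[pivot_row]
        # Scale so the pivot equals 1, then zeroize the column above and below it.
        p = M[pivot_row][col]
        M[pivot_row] = [x / p for x in M[pivot_row]]
        for i in range(rows):
            if i != pivot_row:
                f = M[i][col]
                M[i] = [x - f * y for x, y in zip(M[i], M[pivot_row])]
        pivot_row += 1
    return M

# The lecture's system with a = 0, b = 0, as the doubly augmented [A | b | I3].
F = Fraction
M = [[F(1), F(1), F(-1), F(1), F(1), F(0), F(0)],
     [F(1), F(-1), F(2), F(2), F(0), F(1), F(0)],
     [F(1), F(2), F(0), F(0), F(0), F(0), F(1)]]
gauss_jordan(M)
print([row[3] for row in M])    # solution column: (x, y, z) = (8/5, -4/5, -1/5)
print([row[4:] for row in M])   # last three columns: the inverse of A
```

For these parameter values, reading off column 4 gives the solution and columns 5-7 give the inverse of the coefficient matrix, which is exactly what the final exercise asks one to check by hand.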
Recommended publications
  • Linear Algebra: Example Sheet 2 of 4
    Michaelmas Term 2014, SJW. Linear Algebra: Example Sheet 2 of 4.

    1. (Another proof of the row rank = column rank equality.) Let A be an m × n matrix of (column) rank r. Show that r is the least integer for which A factorises as A = BC with $B \in \mathrm{Mat}_{m,r}(F)$ and $C \in \mathrm{Mat}_{r,n}(F)$. Using the fact that $(BC)^T = C^T B^T$, deduce that the (column) rank of $A^T$ equals r.

    2. Write down the three types of elementary matrices and find their inverses. Show that an n × n matrix A is invertible if and only if it can be written as a product of elementary matrices. Use this method to find the inverse of
    $$\begin{pmatrix} 1 & -1 & 0 \\ 0 & 0 & 1 \\ 0 & 3 & -1 \end{pmatrix}$$

    3. Let A and B be n × n matrices over a field F. Show that the 2n × 2n matrix
    $$C = \begin{pmatrix} I & B \\ -A & 0 \end{pmatrix} \quad \text{can be transformed into} \quad D = \begin{pmatrix} I & B \\ 0 & AB \end{pmatrix}$$
    by elementary row operations (which you should specify). By considering the determinants of C and D, obtain another proof that det AB = det A det B.

    4. (i) Let V be a non-trivial real vector space of finite dimension. Show that there are no endomorphisms $\alpha, \beta$ of V with $\alpha\beta - \beta\alpha = \mathrm{id}_V$. (ii) Let V be the space of infinitely differentiable functions $\mathbb{R} \to \mathbb{R}$. Find endomorphisms $\alpha, \beta$ of V which do satisfy $\alpha\beta - \beta\alpha = \mathrm{id}_V$.

    5. Find the eigenvalues and give bases for the eigenspaces of the following complex matrices:
    $$\begin{pmatrix} 1 & 1 & 0 \\ 0 & 3 & -2 \\ 0 & 1 & 0 \end{pmatrix}; \quad \begin{pmatrix} 1 & 1 & -1 \\ 0 & 3 & -2 \\ 0 & 1 & 0 \end{pmatrix}; \quad \begin{pmatrix} 1 & 1 & -1 \\ -1 & 3 & -1 \\ -1 & 1 & 1 \end{pmatrix}$$
    The second and third matrices commute; find a basis with respect to which they are both diagonal.
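    The factorisation in question 1 can be illustrated numerically (my sketch, not part of the example sheet): a truncated SVD supplies an explicit A = BC with inner dimension r = rank(A), and since $A^T = C^T B^T$ also factorises through r, the transpose has the same rank. The example matrix is an arbitrary rank-2 choice.

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])
r = np.linalg.matrix_rank(A)            # here r = 2 (row 2 = 2 * row 1)

U, s, Vt = np.linalg.svd(A)
B = U[:, :r] * s[:r]                    # m x r factor
C = Vt[:r, :]                           # r x n factor
assert np.allclose(A, B @ C)            # A = BC with inner dimension r
assert np.linalg.matrix_rank(A.T) == r  # the transpose has the same rank
```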
  • 18.06 Linear Algebra, Problem Set 5 Solutions
    18.06 Problem Set 5 Solution. Total: points.

    Section 4.1, Problem 7. Every system with no solution is like the one in Problem 6. There are numbers $y_1, \ldots, y_m$ that multiply the m equations so they add up to 0 = 1. This is called Fredholm's Alternative: exactly one of these problems has a solution: $Ax = b$ OR $A^T y = 0$ with $y^T b = 1$. If b is not in the column space of A, it is not orthogonal to the nullspace of $A^T$. Multiply the equations $x_1 - x_2 = 1$ and $x_2 - x_3 = 1$ and $x_1 - x_3 = 1$ by numbers $y_1, y_2, y_3$ chosen so that the equations add up to 0 = 1.
    Solution (4 points). Let $y_1 = 1$, $y_2 = 1$ and $y_3 = -1$. Then the left-hand side of the sum of the equations is $(x_1 - x_2) + (x_2 - x_3) - (x_1 - x_3) = x_1 - x_2 + x_2 - x_3 + x_3 - x_1 = 0$, and the right-hand side is $1 + 1 - 1 = 1$.

    Problem 9. If $A^T A x = 0$ then $Ax = 0$. Reason: $Ax$ is in the nullspace of $A^T$ and also in the ______ of A, and those spaces are ______. Conclusion: $A^T A$ has the same nullspace as A. This key fact is repeated in the next section.
    Solution (4 points). $Ax$ is in the nullspace of $A^T$ and also in the column space of A, and those spaces are orthogonal.

    Problem 31. The command N=null(A) will produce a basis for the nullspace of A. Then the command B=null(N') will produce a basis for the ______ of A.
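    The certificate y = (1, 1, -1) from the solution to Problem 7 can be checked mechanically. This numpy snippet (my addition, not part of the problem set) verifies $A^T y = 0$ and $y^T b = 1$ for the system above, confirming that the three equations sum to the contradiction 0 = 1.

```python
import numpy as np

# x1 - x2 = 1, x2 - x3 = 1, x1 - x3 = 1 written as Ax = b.
A = np.array([[1., -1., 0.],
              [0., 1., -1.],
              [1., 0., -1.]])
b = np.array([1., 1., 1.])
y = np.array([1., 1., -1.])      # the multipliers from the solution

assert np.allclose(A.T @ y, 0)   # y lies in the nullspace of A^T ...
assert np.isclose(y @ b, 1)      # ... yet y^T b = 1, so Ax = b is infeasible
```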
  • Linear Algebra Review James Chuang
    Linear algebra review. James Chuang. December 15, 2016.

    Contents: 2.1 vector-vector products; 2.2 matrix-vector products; 2.3 matrix-matrix products; 3.2 the transpose; 3.3 symmetric matrices; 3.4 the trace; 3.5 norms; 3.6 linear independence and rank; 3.7 the inverse; 3.8 orthogonal matrices; 3.9 range and nullspace of a matrix; 3.10 the determinant; 3.11 quadratic forms and positive semidefinite matrices; 3.12 eigenvalues and eigenvectors; 3.13 eigenvalues and eigenvectors of symmetric matrices; 4.1 the gradient; 4.2 the Hessian; 4.3 gradients and hessians of linear and quadratic functions; 4.5 gradients of the determinant; 4.6 eigenvalues
  • Span, Linear Independence and Basis Rank and Nullity
    Remarks for Exam 2 in Linear Algebra

    Span, linear independence and basis. The span of a set of vectors is the set of all linear combinations of the vectors. A set of vectors is linearly independent if the only solution to $c_1 v_1 + \cdots + c_k v_k = 0$ is $c_i = 0$ for all i. Given a set of vectors, you can determine if they are linearly independent by writing the vectors as the columns of the matrix A, and solving Ax = 0. If there are any non-zero solutions, then the vectors are linearly dependent. If the only solution is x = 0, then they are linearly independent. A basis for a subspace S of $\mathbb{R}^n$ is a set of vectors that spans S and is linearly independent. There are many bases, but every basis must have exactly k = dim(S) vectors. A spanning set in S must contain at least k vectors, and a linearly independent set in S can contain at most k vectors. A spanning set in S with exactly k vectors is a basis. A linearly independent set in S with exactly k vectors is a basis.

    Rank and nullity. The span of the rows of matrix A is the row space of A. The span of the columns of A is the column space C(A). The row and column spaces always have the same dimension, called the rank of A. Let r = rank(A). Then r is the maximal number of linearly independent row vectors, and the maximal number of linearly independent column vectors. So if r < n then the columns are linearly dependent; if r < m then the rows are linearly dependent.
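    The independence test described above (write the vectors as the columns of A and ask whether Ax = 0 has only the trivial solution) amounts to checking whether rank(A) equals the number of vectors. A short numpy sketch, my addition with arbitrarily chosen vectors, reports dependence here because v3 = v1 + v2.

```python
import numpy as np

v1 = np.array([1., 0., 1.])
v2 = np.array([0., 1., 1.])
v3 = np.array([1., 1., 2.])              # v3 = v1 + v2, so the set is dependent

A = np.column_stack([v1, v2, v3])        # vectors as the columns of A
k = A.shape[1]                           # number of vectors
print("independent" if np.linalg.matrix_rank(A) == k else "dependent")
```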
  • A New Algorithm to Obtain the Adjugate Matrix Using CUBLAS on GPU (González H.E.; Carmona L., J.J.)
    ISSN: 2277-3754, ISO 9001:2008 Certified. International Journal of Engineering and Innovative Technology (IJEIT), Volume 7, Issue 4, October 2017. A New Algorithm to Obtain the Adjugate Matrix using CUBLAS on GPU. González H.E.; Carmona L., J.J. Information Technology Department, ININ-ITTLA; Information Technology Department, ININ.

    Abstract: In this paper a parallel code for obtaining the adjugate matrix with real coefficients is used. We postulate a new linear transformation in matrix product form, and we apply this linear transformation to an augmented matrix (A|I) by means of both minimum and complete pivoting strategies; we then obtain the adjugate matrix. Furthermore, if we apply this new linear transformation and the above pivoting strategy to an augmented matrix (A|b), we obtain a Cramer's solution of the linear system of equations. The new algorithm presents an $O(n^3)$ computational complexity. We use subroutines of CUBLAS 2nd and 3rd levels in double precision, and we obtain correct numeric results.

    Index Terms: Adjoint matrix, Adjugate matrix, Cramer's rule, CUBLAS, GPU.

    … proved to be the best option for most practical applications. The new transformation proposed here can be obtained from it. Next, we briefly review this topic.

    II. LU-MATRICIAL DECOMPOSITION WITH GT: NOTATION AND DEFINITIONS. The problem of solving a linear system of equations Ax = b is central to the field of matrix computation. There are several ways to perform the elimination process necessary for its matrix triangulation. We will focus on the Doolittle-Gauss elimination method: the algorithm of choice when A is square, dense, and un-structured. Let us assume that $A \in \mathbb{R}^{n \times n}$ is nonsingular and that we …
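    For contrast with the paper's GPU algorithm (which is not reproduced here), a plain CPU sketch of the defining identity the adjugate satisfies: for nonsingular A, $\mathrm{adj}(A) = \det(A)\,A^{-1}$, so $A\,\mathrm{adj}(A) = \det(A)\,I$. This is my own illustration with an arbitrarily chosen matrix, not the paper's pivoted transformation.

```python
import numpy as np

A = np.array([[3., 0., 2.],
              [2., 0., -2.],
              [0., 1., 1.]])
det_A = np.linalg.det(A)
adj_A = det_A * np.linalg.inv(A)                 # adj(A) = det(A) * A^{-1}

assert np.allclose(A @ adj_A, det_A * np.eye(3))  # A adj(A) = det(A) I
print(np.round(adj_A, 6))
```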
  • Field of U-Invariants of the Adjoint Representation of the Group GL(n, K)
    Field of U-invariants of the adjoint representation of the group GL(n,K). K.A. Vyatkina, A.N. Panov.

    It is known that the field of invariants of an arbitrary unitriangular group is rational (see [1]). In this paper, for the adjoint representation of the group GL(n,K), we present an algebraically independent system of generators of the field of U-invariants. Note that the algorithm for calculating generators of the field of invariants of an arbitrary rational representation of a unipotent group was presented in [2, Chapter 1]. The invariants were constructed using induction on the length of the Jordan-Hölder series. In our case the length is equal to $n^2$; that is why it is difficult to apply the algorithm.

    Let us consider the adjoint representation $\mathrm{Ad}_g A = g A g^{-1}$ of the group GL(n,K), where K is a field of zero characteristic, on the algebra of matrices M = Mat(n,K). The adjoint representation determines the representation $\rho_g$ on K[M] (resp. K(M)) by the formula $\rho_g f(A) = f(g^{-1} A g)$. Let U be the subgroup of upper triangular matrices in GL(n,K) with units on the diagonal. A polynomial (rational function) f on M is called a U-invariant if $\rho_u f = f$ for every $u \in U$. The set of U-invariant rational functions $K(M)^U$ is a subfield of K(M). Let $\{x_{i,j}\}$ be a system of standard coordinate functions on M. Construct a matrix $X = (x_{ij})$. Let $X^* = (x^*_{ij})$ be its adjugate matrix, so that $X \cdot X^* = X^* \cdot X = \det X \cdot E$. Denote by $J_k$ the left lower corner minor of order k of the matrix X.
  • Formalizing the Matrix Inversion Based on the Adjugate Matrix in HOL4 (Liming Li, Zhiping Shi, Yong Guan, Jie Zhang, Hongxing Wei)
    Liming Li, Zhiping Shi, Yong Guan, Jie Zhang, Hongxing Wei. Formalizing the Matrix Inversion Based on the Adjugate Matrix in HOL4. 8th International Conference on Intelligent Information Processing (IIP), Oct 2014, Hangzhou, China, pp. 178-186. DOI: 10.1007/978-3-662-44980-6_20. HAL Id: hal-01383331, https://hal.inria.fr/hal-01383331, submitted on 18 Oct 2016. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. Distributed under a Creative Commons Attribution 4.0 International License.

    Affiliations: 1. Beijing Key Laboratory of Electronic System Reliability Technology, Capital Normal University, Beijing 100048, China ([email protected], [email protected]); 2. College of Information Science & Technology, Beijing University of Chemical Technology, Beijing 100029, China; 3. School of Mechanical Engineering and Automation, Beihang University, Beijing 100083, China.

    Abstract. This paper presents the formalization of the matrix inversion based on the adjugate matrix in the HOL4 system.
  • Linearized Polynomials Over Finite Fields Revisited
    Linearized polynomials over finite fields revisited. Baofeng Wu, Zhuojun Liu. arXiv:1211.5475v2 [math.RA], 1 Jan 2013.

    Abstract. We give new characterizations of the algebra $L_n(\mathbb{F}_{q^n})$ formed by all linearized polynomials over the finite field $\mathbb{F}_{q^n}$, after briefly surveying some known ones. One isomorphism we construct is between $L_n(\mathbb{F}_{q^n})$ and the composition algebra $\mathbb{F}_{q^n}^{\vee} \otimes_{\mathbb{F}_q} \mathbb{F}_{q^n}$. The other isomorphism we construct is between $L_n(\mathbb{F}_{q^n})$ and the so-called Dickson matrix algebra $D_n(\mathbb{F}_{q^n})$. We also further study the relations between a linearized polynomial and its associated Dickson matrix, generalizing a well-known criterion of Dickson on linearized permutation polynomials. The adjugate polynomial of a linearized polynomial is then introduced, and connections between them are discussed. Both of the new characterizations can bring us simpler approaches to establish a special form of representations of linearized polynomials proposed recently by several authors. The structure of the subalgebra $L_n(\mathbb{F}_{q^m})$, formed by all linearized polynomials over a subfield $\mathbb{F}_{q^m}$ of $\mathbb{F}_{q^n}$ where m | n, is also described.

    Keywords: Linearized polynomial; Composition algebra; Dickson matrix algebra; Representation.

    Partially supported by National Basic Research Program of China (2011CB302400). Key Laboratory of Mathematics Mechanization, AMSS, Chinese Academy of Sciences, Beijing 100190, China. Email: [email protected]; [email protected]

    1 Introduction. Let $\mathbb{F}_q$ and $\mathbb{F}_{q^n}$ be the finite fields with q and $q^n$ elements respectively, where q is a prime or a prime power.
  • Wrońskian Algebra and Broadhurst–Roberts Quadratic Relations (arXiv:2012.03523v3 [math.NT], 16 May 2021)
    Wrońskian Algebra and Broadhurst–Roberts Quadratic Relations. Yajun Zhou. Date: May 18, 2021. To the memory of Dr. W (1986–2020).

    Abstract. Through algebraic manipulations on Wrońskian matrices whose entries are reducible to Bessel moments, we present a new analytic proof of the quadratic relations conjectured by Broadhurst and Roberts, along with some generalizations. In the Wrońskian framework, we reinterpret the de Rham intersection pairing through polynomial coefficients in Vanhove's differential operators, and compute the Betti intersection pairing via linear sum rules for on-shell and off-shell Feynman diagrams at threshold momenta. From the ideal generated by Broadhurst–Roberts quadratic relations, we derive new non-linear sum rules for on-shell Feynman diagrams, including an infinite family of determinant identities that are compatible with Deligne's conjectures for critical values of motivic L-functions.

    Contents: 1. Introduction. 2. W algebra (2.1 Wrońskian matrices and Vanhove operators; 2.2 Block diagonalization of on-shell Wrońskian matrices; 2.3 Sum rules for on-shell and off-shell Bessel moments). 3. W⋆ algebra (3.1 Cofactors of Wrońskians; 3.2 Threshold behavior of Wrońskian cofactors; 3.3 Quadratic relations for off-shell Bessel moments). 4. Broadhurst–Roberts quadratic relations (4.1 Quadratic relations for on-shell Bessel moments; 4.2 Arithmetic applications to Feynman diagrams; 4.3 Alternative representations of de Rham matrices). References.

    Keywords: Bessel moments, Feynman integrals, Wrońskian matrices, Bernoulli numbers. Subject Classification (AMS 2020): 11B68, 33C10, 34M35 (Primary); 81T18, 81T40 (Secondary). This research was supported in part by the Applied Mathematics Program within the Department of Energy (DOE) Office of Advanced Scientific Computing Research (ASCR) as part of the Collaboratory on Mathematics for Mesoscopic Modeling of Materials (CM4).
  • Lecture 5: Matrix Operations: Inverse
    Lecture 5: Matrix Operations: Inverse

    Topics: inverse of a matrix; computation of the inverse using the cofactor matrix; properties of the inverse of a matrix; inverses of special matrices (unit matrix, diagonal matrix, orthogonal matrix, lower/upper triangular matrices).

    Matrix Inverse. The inverse of a matrix can only be defined for square matrices. The inverse of a square matrix exists only if the determinant of that matrix is non-zero. The inverse matrix of A is denoted $A^{-1}$, and satisfies $AA^{-1} = A^{-1}A = I$. Example:
    $$A = \begin{pmatrix} 2 & -1 \\ 1 & 0 \end{pmatrix}, \quad A^{-1} = \begin{pmatrix} 0 & 1 \\ -1 & 2 \end{pmatrix}, \quad AA^{-1} = A^{-1}A = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$

    Inverse of a 3 × 3 matrix (using the cofactor matrix). Calculating the inverse of a 3 × 3 matrix A:
    1. Compute the matrix of minors for A.
    2. Compute the cofactor matrix by alternating + and − signs.
    3. Compute the adjugate matrix by taking the transpose of the cofactor matrix.
    4. Divide all elements in the adjugate matrix by the determinant of A:
    $$A^{-1} = \frac{1}{\det(A)}\,\mathrm{adj}(A)$$

    Worked example:
    $$A = \begin{pmatrix} 3 & 0 & 2 \\ 2 & 0 & -2 \\ 0 & 1 & 1 \end{pmatrix}, \quad \text{Matrix of minors} = \begin{pmatrix} 2 & 2 & 2 \\ -2 & 3 & 3 \\ 0 & -10 & 0 \end{pmatrix}$$
    $$\text{Cofactor matrix } C = \begin{pmatrix} 2 & 2 & 2 \\ -2 & 3 & 3 \\ 0 & -10 & 0 \end{pmatrix} .\!\!* \begin{pmatrix} 1 & -1 & 1 \\ -1 & 1 & -1 \\ 1 & -1 & 1 \end{pmatrix} = \begin{pmatrix} 2 & -2 & 2 \\ 2 & 3 & -3 \\ 0 & 10 & 0 \end{pmatrix}$$
    $$\mathrm{adj}(A) = C^T = \begin{pmatrix} 2 & 2 & 0 \\ -2 & 3 & 10 \\ 2 & -3 & 0 \end{pmatrix}, \quad A^{-1} = \frac{1}{|A|}\,\mathrm{adj}(A) = \frac{1}{10}\begin{pmatrix} 2 & 2 & 0 \\ -2 & 3 & 10 \\ 2 & -3 & 0 \end{pmatrix} = \begin{pmatrix} 0.2 & 0.2 & 0 \\ -0.2 & 0.3 & 1 \\ 0.2 & -0.3 & 0 \end{pmatrix}$$

    Properties of the inverse of a matrix: $(A^{-1})^{-1} = A$; $(AB)^{-1} = B^{-1}A^{-1}$; $(kA)^{-1} = k^{-1}A^{-1}$, where k is a non-zero scalar.
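    The four-step recipe above translates directly into code. This Python sketch (my addition, not part of the lecture) computes the matrix of minors, applies the cofactor signs, transposes to get the adjugate, and divides by the determinant; run on the slide's 3 × 3 example it reproduces the stated $A^{-1}$.

```python
import numpy as np

def inverse_via_adjugate(A):
    """Invert A by the cofactor method: minors -> cofactors -> adjugate -> /det."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # Minor: determinant of A with row i and column j deleted.
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)  # cofactor sign
    return C.T / np.linalg.det(A)        # adjugate (C transposed) over det(A)

A = np.array([[3., 0., 2.],
              [2., 0., -2.],
              [0., 1., 1.]])
A_inv = inverse_via_adjugate(A)
print(np.round(A_inv, 6))                # matches the slide's A^{-1}
assert np.allclose(A @ A_inv, np.eye(3))
```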
  • Math 102 -- Linear Algebra I -- Study Guide
    Math 102: Linear Algebra I. Stefan Martynkiw. These notes are adapted from lecture notes taught by Dr. Alan Thompson and from "Elementary Linear Algebra: 10th Edition" by Howard Anton. (Cover picture sourced from http://i.imgur.com/RgmnA.gif.)

    Table of Contents. Chapter 3 – Euclidean Vector Spaces: 3.1 – Vectors in 2-space, 3-space, and n-space (Theorem 3.1.1 – Algebraic Vector Operations without components; Theorem 3.1.2); 3.2 – Norm, Dot Product, and Distance (Definition 1 – Norm of a Vector; Definition 2 – Distance in Rn; Dot Product; Definition 3 – Dot Product; Definition 4 – Dot Product, Component by component)
  • Contents 2 Matrices and Systems of Linear Equations
    Business Math II (part 2): Matrices and Systems of Linear Equations (by Evan Dummit, 2019, v. 1.00)

    Contents: 2 Matrices and Systems of Linear Equations: 2.1 Systems of Linear Equations (2.1.1 Elimination, Matrix Formulation; 2.1.2 Row-Echelon Form and Reduced Row-Echelon Form; 2.1.3 Gaussian Elimination); 2.2 Matrix Operations: Addition and Multiplication; 2.3 Determinants and Inverses (2.3.1 The Inverse of a Matrix; 2.3.2 The Determinant of a Matrix; 2.3.3 Cofactor Expansions and the Adjugate; 2.3.4 Matrices and Systems of Linear Equations, Revisited)

    In this chapter, we will discuss the problem of solving systems of linear equations, reformulate the problem using matrices, and then give the general procedure for solving such systems. We will then study basic matrix operations (addition and multiplication) and discuss the inverse of a matrix and how it relates to another quantity known as the determinant. We will then revisit systems of linear equations after reformulating them in the language of matrices.

    2.1 Systems of Linear Equations. Our motivating problem is to study the solutions to a system of linear equations, such as the system
    $$x_1 + 3x_2 = 5$$
    $$3x_1 + x_2 = -1$$
    Recall that a linear equation is an equation of the form $a_1 x_1 + a_2 x_2 + \cdots + a_n x_n = b$, for some constants $a_1, \ldots, a_n, b$, and variables $x_1, \ldots, x_n$. When we seek to find the solutions to a system of linear equations, this means finding all possible values of the variables such that all equations are satisfied simultaneously.
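    As a quick check of the motivating system (my addition, solving it with numpy rather than the chapter's hand elimination): writing the system as Ax = b and solving directly gives x1 = -1, x2 = 2.

```python
import numpy as np

# x1 + 3*x2 = 5 and 3*x1 + x2 = -1, written as Ax = b.
A = np.array([[1., 3.],
              [3., 1.]])
b = np.array([5., -1.])
x = np.linalg.solve(A, b)
print(x)   # [-1.  2.], i.e. x1 = -1, x2 = 2 satisfies both equations
```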