Solving Linear Systems, Continued and the Inverse of a Matrix


Math 240: Calculus III
Summer 2013, Session II
Monday, July 15, 2013

Agenda

1. Solving linear systems
   - Gauss-Jordan elimination
   - The rank of a matrix

2. The inverse of a square matrix
   - Definition
   - Computing inverses
   - Properties of inverses
   - Using inverse matrices

Solving linear systems: Gaussian elimination

Gaussian elimination solves a linear system by reducing the augmented matrix to row-echelon form (REF) via elementary row operations and then using back substitution. (Notation: Pij swaps rows i and j, Mi(c) multiplies row i by c, and Aij(c) adds c times row i to row j.)

Example. Consider the system

    3x1 - 2x2 + 2x3 =  9        [ 3  -2   2 |  9 ]
     x1 - 2x2 +  x3 =  5        [ 1  -2   1 |  5 ]
    2x1 -  x2 - 2x3 = -1        [ 2  -1  -2 | -1 ]

Row reduction brings the augmented matrix to row-echelon form:

    [ 1  -2   1 | 5 ]        x1 - 2x2 + x3 = 5
    [ 0   1   3 | 5 ]             x2 + 3x3 = 5
    [ 0   0   1 | 2 ]                   x3 = 2

Steps: 1. P12   2. A12(-3)   3. A13(-2)   4. A32(-1)   5. A23(-3)   6. M3(-1/13)

Back substitution gives the solution (1, -1, 2).

Solving linear systems: Gauss-Jordan elimination

Reducing the augmented matrix all the way to reduced row-echelon form (RREF) makes the system even easier to solve.

Example. Continuing from the row-echelon form above:

    [ 1  -2   1 | 5 ]        [ 1   0   0 |  1 ]        x1 =  1
    [ 0   1   3 | 5 ]   ->   [ 0   1   0 | -1 ]        x2 = -1
    [ 0   0   1 | 2 ]        [ 0   0   1 |  2 ]        x3 =  2

Steps: 1. A32(-3)   2. A31(-1)   3. A21(2)

Now, without any back substitution, we can read off the solution (1, -1, 2).
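The elimination-then-back-substitution procedure above can be sketched in code. This is a minimal illustration, not part of the notes; the function name is my own, and exact rational arithmetic via Python's fractions module avoids floating-point round-off:

```python
from fractions import Fraction

def gaussian_elimination(aug):
    """Reduce an n x (n+1) augmented matrix to row-echelon form,
    then solve by back substitution. Assumes a unique solution."""
    m = [[Fraction(x) for x in row] for row in aug]
    n = len(m)
    for col in range(n):
        # Swap a row with a nonzero pivot into place (operation Pij).
        pivot = next(r for r in range(col, n) if m[r][col] != 0)
        m[col], m[pivot] = m[pivot], m[col]
        # Scale the pivot row to get a leading 1 (operation Mi(c)).
        m[col] = [x / m[col][col] for x in m[col]]
        # Clear the entries below the pivot (operations Aij(c)).
        for r in range(col + 1, n):
            factor = m[r][col]
            m[r] = [x - factor * y for x, y in zip(m[r], m[col])]
    # Back substitution, from the last equation upward.
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):
        x[i] = m[i][n] - sum(m[i][j] * x[j] for j in range(i + 1, n))
    return x

aug = [[3, -2, 2, 9], [1, -2, 1, 5], [2, -1, -2, -1]]
print([int(v) for v in gaussian_elimination(aug)])  # [1, -1, 2]
```

Running it on the example system reproduces the solution (1, -1, 2) found above.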
The method of solving a linear system by reducing its augmented matrix to RREF is called Gauss-Jordan elimination.

The rank of a matrix

Definition. The rank of a matrix A is the number of nonzero rows it has after reduction to REF. It is denoted by rank(A).

If A is the coefficient matrix of an m x n linear system with augmented matrix A#, and rank(A#) = rank(A) = n, then the REF looks like

    [ 1  *  *  ...  * | * ]        x1 = *
    [    1  *  ...  * | * ]        x2 = *
    [       ...       | . ]        ...
    [              1  | * ]        xn = *
    [ 0  ...  ...  0  | 0 ]

Lemma. Suppose Ax = b is an m x n linear system with augmented matrix A#. If rank(A#) = rank(A) = n, then the system has a unique solution.

Example. Determine the solution set of the linear system

     x1 +  x2 -  x3 + x4 = 1,
    2x1 + 3x2 +  x3      = 4,
    3x1 + 5x2 + 3x3 - x4 = 5.

Reduce the augmented matrix:

    [ 1  1  -1   1 | 1 ]        [ 1  1  -1   1 |  1 ]
    [ 2  3   1   0 | 4 ]   ->   [ 0  1   3  -2 |  2 ]
    [ 3  5   3  -1 | 5 ]        [ 0  0   0   0 | -2 ]

Steps: 1. A12(-2)   2. A13(-3)   3. A23(-2)

The last row says 0 = -2, so the system is inconsistent.

Lemma. Suppose Ax = b is a linear system with augmented matrix A#. If rank(A#) > rank(A), then the system is inconsistent.

Example. Determine the solution set of the linear system

    5x1 - 6x2 + x3 = 4,
    2x1 - 3x2 + x3 = 1,
    4x1 - 3x2 - x3 = 5.

Reduce the augmented matrix:

    [ 5  -6   1 | 4 ]        [ 1   0  -1 | 2 ]        x1 - x3 = 2
    [ 2  -3   1 | 1 ]   ->   [ 0   1  -1 | 1 ]        x2 - x3 = 1
    [ 4  -3  -1 | 5 ]        [ 0   0   0 | 0 ]

The unknown x3 can assume any value. Let x3 = t. Then by back substitution we get x2 = t + 1 and x1 = t + 2.
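The rank criterion in the lemma can be checked mechanically. A small sketch (the helper name is mine, not from the notes): count the nonzero rows left after forward elimination, then compare rank(A) with rank(A#) for the inconsistent example above.

```python
from fractions import Fraction

def rank(mat):
    """Rank = number of nonzero rows after reduction to row-echelon form."""
    m = [[Fraction(x) for x in row] for row in mat]
    rows, cols = len(m), len(m[0])
    r = 0  # index of the next pivot row
    for c in range(cols):
        # Find a row at or below r with a nonzero entry in column c.
        pivot = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if pivot is None:
            continue  # no pivot in this column
        m[r], m[pivot] = m[pivot], m[r]
        # Eliminate below the pivot.
        for i in range(r + 1, rows):
            factor = m[i][c] / m[r][c]
            m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

A  = [[1, 1, -1, 1], [2, 3, 1, 0], [3, 5, 3, -1]]
Ab = [row + [b] for row, b in zip(A, [1, 4, 5])]  # augmented matrix A#
print(rank(A), rank(Ab))  # 2 3
```

Since rank(A#) = 3 > 2 = rank(A), the lemma confirms that this system is inconsistent.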
Thus, the solution set is the line {(t + 2, t + 1, t) : t in R}.

Free and bound variables

Definition. When an unknown in a linear system is free to assume any value, we call it a free variable. Variables that are not free are called bound variables. The value of a bound variable is uniquely determined by a choice of values for all of the free variables in the system.

Lemma. Suppose Ax = b is an m x n linear system with augmented matrix A#. If rank(A#) = rank(A) < n, then the system has an infinite number of solutions. Such a system will have n - rank(A#) free variables.

Solving linear systems with free variables

Example. Use Gaussian elimination to solve

     x1 + 2x2 - 2x3 -   x4 =  3,
    3x1 + 6x2 +  x3 + 11x4 = 16,
    2x1 + 4x2 -  x3 +  4x4 =  9.

Reducing to row-echelon form yields

    x1 + 2x2 - 2x3 - x4 = 3,
               x3 + 2x4 = 1.

Choose as free variables those variables that do not have a pivot in their column; in this case, the free variables are x2 and x4. Setting x2 = s and x4 = t, back substitution gives x3 = 1 - 2t and x1 = 5 - 2s - 3t, so the solution set is the plane

    {(5 - 2s - 3t, s, 1 - 2t, t) : s, t in R}.

The inverse of a square matrix

Can we divide by a matrix? What properties should an inverse matrix have?

Definition. Suppose A is a square n x n matrix. An inverse matrix for A is an n x n matrix B such that

    AB = In   and   BA = In.

If A has such an inverse, then we say that it is invertible or nonsingular. Otherwise, we say that A is singular.

Remark. Not every matrix is invertible.
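A parametric solution set like the one above is easy to sanity-check: substitute the general point back into every equation and confirm each residual (left side minus right side) vanishes for a range of choices of the free variables. A quick sketch, with my own helper name:

```python
from itertools import product

def residuals(s, t):
    """Plug the parametric point (5-2s-3t, s, 1-2t, t) into each
    equation of the system and return lhs - rhs for all three."""
    x1, x2, x3, x4 = 5 - 2*s - 3*t, s, 1 - 2*t, t
    return (x1 + 2*x2 - 2*x3 -    x4 -  3,
            3*x1 + 6*x2 +  x3 + 11*x4 - 16,
            2*x1 + 4*x2 -  x3 +  4*x4 -  9)

# Every choice of the free variables s and t should give a solution.
ok = all(residuals(s, t) == (0, 0, 0)
         for s, t in product(range(-3, 4), repeat=2))
print(ok)  # True
```

This only spot-checks finitely many (s, t) pairs, but since each residual is a linear expression in s and t that vanishes identically, the algebra above guarantees it holds for all real s, t.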
If you have a linear system Ax = b and B is an inverse matrix for A, then the linear system has the unique solution x = Bb.

Example. If

        [ 1  -1   2 ]            [ 0  -1   3 ]
    A = [ 2  -3   3 ]   and  B = [ 1  -1   1 ] = A^(-1),
        [ 1  -1   1 ]            [ 1   0  -1 ]

then B is the inverse of A.

Theorem (Matrix inverses are well-defined). Suppose A is an n x n matrix. If B and C are two inverses of A, then B = C. Thus, we can write A^(-1) for the inverse of A with no ambiguity.

Useful example. If A = [ a  b ; c  d ] and ad - bc != 0, then

    A^(-1) = 1/(ad - bc) [  d  -b ]
                         [ -c   a ]

Finding the inverse of a matrix

Inverse matrices sound great! How do I find one? Suppose A is a 3 x 3 invertible matrix, and write A^(-1) = [ x1  x2  x3 ] in terms of its columns. Then

          [ 1 ]          [ 0 ]              [ 0 ]
    Ax1 = [ 0 ],   Ax2 = [ 1 ],   and Ax3 = [ 0 ].
          [ 0 ]          [ 0 ]              [ 1 ]

We can find A^(-1) by solving 3 linear systems at once! In general, form the augmented matrix [ A | In ] and reduce to RREF; you end up with A^(-1) on the right:

    [ A | In ]  ->  [ In | A^(-1) ]

Example. Let's find the inverse of

        [ 1  -1   2 ]
    A = [ 2  -3   3 ]
        [ 1  -1   1 ]

Take the augmented matrix and row reduce:

    [ 1  -1   2 | 1  0  0 ]        [ 1  0  0 | 0  -1   3 ]
    [ 2  -3   3 | 0  1  0 ]   ->   [ 0  1  0 | 1  -1   1 ]
    [ 1  -1   1 | 0  0  1 ]        [ 0  0  1 | 1   0  -1 ]

The block on the right is A^(-1).

Steps: 1. A12(-2)   2. A13(-1)   3. M2(-1)   4. M3(-1)   5. A32(-1)   6. A31(-2)   7. A21(1)

In order to find the inverse of a matrix A, we row reduced an augmented matrix with A on the left.
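The [ A | In ] -> [ In | A^(-1) ] procedure translates directly into code. A minimal sketch (the function name is mine; exact arithmetic via the fractions module keeps the entries clean):

```python
from fractions import Fraction

def inverse(A):
    """Gauss-Jordan: row reduce [A | I] to [I | A^-1]."""
    n = len(A)
    # Build the augmented matrix [A | I].
    m = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        pivot = next((r for r in range(c, n) if m[r][c] != 0), None)
        if pivot is None:
            raise ValueError("matrix is singular")
        m[c], m[pivot] = m[pivot], m[c]
        # Scale to a leading 1, then clear the column above AND below.
        m[c] = [x / m[c][c] for x in m[c]]
        for r in range(n):
            if r != c and m[r][c] != 0:
                factor = m[r][c]
                m[r] = [x - factor * y for x, y in zip(m[r], m[c])]
    # The right-hand block is the inverse.
    return [row[n:] for row in m]

A = [[1, -1, 2], [2, -3, 3], [1, -1, 1]]
print([[int(v) for v in row] for row in inverse(A)])
# [[0, -1, 3], [1, -1, 1], [1, 0, -1]]
```

On the example matrix this reproduces the inverse B computed by hand above.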
What if we don't end up with In on the left?

Theorem. An n x n matrix A is invertible if and only if rank(A) = n.

Example. Find the inverse of the matrix

    A = [ 1  3 ]
        [ 2  6 ]

Try to reduce the matrix to RREF:

    [ 1  3 ]   A12(-2)   [ 1  3 ]
    [ 2  6 ]  -------->  [ 0  0 ]

Since rank(A) < 2, we conclude that A is not invertible. Notice that (1)(6) - (3)(2) = 0.

Diagonal matrices have simple inverses.

Proposition. The inverse of a diagonal matrix (with nonzero diagonal entries) is the diagonal matrix with reciprocal entries.
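The 2 x 2 formula, the singularity test ad - bc = 0, and the reciprocal rule for diagonal matrices can all be sketched in a few lines. These helper names are my own, not from the notes:

```python
from fractions import Fraction

def inv_2x2(a, b, c, d):
    """Inverse of [[a, b], [c, d]] via the 1/(ad-bc) formula;
    returns None when ad - bc = 0 (the matrix is singular)."""
    det = Fraction(a * d - b * c)
    if det == 0:
        return None
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

def inv_diagonal(entries):
    """Inverse of a diagonal matrix: take reciprocals of the
    (nonzero) diagonal entries."""
    return [Fraction(1, e) for e in entries]

# The matrix [[1, 3], [2, 6]] from the example: (1)(6) - (3)(2) = 0.
print(inv_2x2(1, 3, 2, 6) is None)  # True
# A diagonal matrix diag(2, 4) inverts entrywise to diag(1/2, 1/4).
print(inv_diagonal([2, 4]) == [Fraction(1, 2), Fraction(1, 4)])  # True
```

For an invertible input such as [[2, 1], [1, 1]] (determinant 1), inv_2x2 returns [[1, -1], [-1, 2]], matching the formula in the Useful Example above.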