Lecture 15: Recall Defn. of Row Operations, Row Equivalence, and Row Space


Theorem 1: If B is row equivalent to A, then R(B) = R(A).

Proof of Theorem 1: Check that the row space does not change after each elementary row operation. Write the rows of A as R_1, R_2, ..., R_m and the rows of B as R'_1, R'_2, ..., R'_m.

1. R'_i = R_j, R'_j = R_i. An exchange of rows does not change the set of rows.

2. R'_i := aR_i with a nonzero; then R_i = a^{-1}R'_i. So any linear combination of the rows of B is a linear combination of the rows of A, and vice versa.

3. R'_j := aR_i + R_j; then R_j = R'_j - aR_i = R'_j - aR'_i. So any linear combination of the rows of B is a linear combination of the rows of A, and vice versa.

Recall the defn. of REF.

Theorem 2: Every matrix is row equivalent to an REF.

Proof of Theorem 2:

Step 1: Exchange rows so that a row with the left-most leading entry is at the top. The leading entry in the top row is then in some position (1, j_1).

Step 2: Subtract appropriate multiples of R_1 from all other rows in order to "zero out" all entries of column j_1 except for position (1, j_1).

Continue Steps 1 and 2 inductively until only zero rows remain:

- Exchange the remaining rows so that (among all rows but row 1) R_2 has the left-most leading entry. The leading entry in R_2 is then in position (2, j_2), with j_2 > j_1.
- Subtract appropriate multiples of R_2 from all other rows in order to "zero out" all entries of column j_2 below (2, j_2).
- Etc.

Example 1 over GF(3) = Z_3: Let

    [ 0 2 2 ]
A = [ 2 1 0 ]
    [ 1 1 2 ]

Construct an REF B for A.

R_1 <-> R_2:

[ 2 1 0 ]
[ 0 2 2 ]
[ 1 1 2 ]

R_1 + R_3 -> R_3 (mod 3: (2,1,0) + (1,1,2) = (3,2,2) = (0,2,2)):

[ 2 1 0 ]
[ 0 2 2 ]
[ 0 2 2 ]

2R_2 + R_3 -> R_3 (mod 3: (0,4,4) + (0,2,2) = (0,6,6) = (0,0,0)):

[ 2 1 0 ]
[ 0 2 2 ] = B
[ 0 0 0 ]

Example 2 over GF(4) = {0, 1, a, b}, with special addition/multiplication tables (on p. 36). Find an REF of:

    [ a b 1 ]
A = [ 1 b 0 ]
    [ b 0 1 ]

bR_1 + R_2 -> R_2:

[ a b 1 ]
[ 0 1 b ]
[ b 0 1 ]

aR_1 + R_3 -> R_3:

[ a b 1 ]
[ 0 1 b ]
[ 0 1 b ]

R_2 + R_3 -> R_3:

[ a b 1 ]
[ 0 1 b ] = B
[ 0 0 0 ]
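The GF(4) steps above are easy to miscompute by hand, since the arithmetic comes from tables rather than from reducing mod 4 (mod-4 arithmetic has zero divisors and is not a field). Below is a minimal sketch that replays the three row operations of Example 2. The lecture's tables (p. 36) are not reproduced in these notes, so the standard GF(4) tables are assumed here (characteristic 2, b = a + 1, a*b = 1); the helper names `add`, `mul`, and `row_op` are ours, not the lecture's.

```python
# Assumed standard GF(4) = {0, 1, a, b} tables: characteristic 2,
# b = a + 1, a*b = 1, a*a = b, b*b = a. Elements are the strings
# '0', '1', 'a', 'b'; each unordered pair is stored once.
ADD = {
    ('0','0'):'0', ('0','1'):'1', ('0','a'):'a', ('0','b'):'b',
    ('1','1'):'0', ('1','a'):'b', ('1','b'):'a',
    ('a','a'):'0', ('a','b'):'1',
    ('b','b'):'0',
}
MUL = {
    ('0','0'):'0', ('0','1'):'0', ('0','a'):'0', ('0','b'):'0',
    ('1','1'):'1', ('1','a'):'a', ('1','b'):'b',
    ('a','a'):'b', ('a','b'):'1',
    ('b','b'):'a',
}

def add(x, y):
    # Addition is commutative, so fall back to the swapped key.
    return ADD[(x, y)] if (x, y) in ADD else ADD[(y, x)]

def mul(x, y):
    return MUL[(x, y)] if (x, y) in MUL else MUL[(y, x)]

def row_op(c, src, dst):
    """Entrywise c*src + dst: the elementary operation c*R_src + R_dst -> R_dst."""
    return [add(mul(c, s), d) for s, d in zip(src, dst)]

# Example 2: rows of A over GF(4).
R1, R2, R3 = ['a','b','1'], ['1','b','0'], ['b','0','1']
R2 = row_op('b', R1, R2)   # b*R1 + R2 -> R2, giving (0, 1, b)
R3 = row_op('a', R1, R3)   # a*R1 + R3 -> R3, giving (0, 1, b)
R3 = row_op('1', R2, R3)   # R2 + R3 -> R3, giving (0, 0, 0)
```

Running the three operations reproduces the REF B computed above, with final rows (a, b, 1), (0, 1, b), (0, 0, 0).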
Theorem 3: Let W = <S>, where S is a subset of V(n, q). Let A be a matrix whose rows are the elements of S, and let B be an REF of A. Then the nonzero rows of B form a basis for W.

Proof of Theorem 3: By Theorem 2, an REF B for A exists. We must show that (i) the nonzero rows of B span W and (ii) the nonzero rows are linearly independent.

(i) By Theorem 1, the row space of B is W; thus the nonzero rows of B span W.

(ii) Write the rows of B as R'_1, R'_2, ..., R'_m, and let R'_1, ..., R'_k, k <= m, be the nonzero rows. Suppose

a_1 R'_1 + a_2 R'_2 + ... + a_k R'_k = 0.

For i = 1, ..., k, let j_i be the column of the leading entry of R'_i. We show that each a_i = 0, 1 <= i <= k, by induction on i.

For i = 1: Since all entries in column j_1, except the one in position (1, j_1), are 0,

a_1 R'_{1, j_1} = (a_1 R'_1 + ... + a_k R'_k)_{j_1} = 0.

Since R'_{1, j_1} ≠ 0 and F is a field, a_1 = 0.

Inductive step: Assume a_i = 0 for i = 1, ..., I - 1 < k, for some I. Since R'_{l, j_I} = 0 for all l > I,

a_I R'_{I, j_I} = (a_1 R'_1 + ... + a_k R'_k)_{j_I} = 0.

Again, since R'_{I, j_I} ≠ 0 and F is a field, a_I = 0.

By Example 1 above (GF(3)), {210, 022} is a basis for <210, 112, 022>.

By Example 2 above (GF(4)), {ab1, 01b} is a basis for <ab1, 1b0, b01>.

Corollary: Let S be a finite subset of V(n, q). Let A be a matrix whose rows are the elements of S, and let B be an REF of A. Then S is linearly independent iff B has no zero rows.

Proof of Corollary: One can show that row equivalence preserves linear independence (i.e., if A and B are row equivalent, then the rows of A are linearly independent iff the rows of B are linearly independent); this is left as an exercise. Thus S is linearly independent iff the rows of the REF B are linearly independent. But the latter happens iff B has no zero rows.
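Over a prime field GF(p) the arithmetic is just mod p, so Theorem 2's procedure, Theorem 3's basis, and the Corollary's zero-row test can all be checked mechanically. The sketch below is ours (the name `ref_mod_p` is not from the notes); it reduces Example 1's matrix over GF(3), reads off the basis of Theorem 3, and applies the Corollary.

```python
def ref_mod_p(rows, p):
    """Reduce `rows` to a row echelon form over GF(p), p prime,
    following the two steps in the proof of Theorem 2."""
    A = [[x % p for x in row] for row in rows]
    m, n = len(A), len(A[0])
    top = 0
    for col in range(n):
        # Step 1: find a row at or below `top` with a nonzero entry in this column.
        piv = next((r for r in range(top, m) if A[r][col] != 0), None)
        if piv is None:
            continue
        A[top], A[piv] = A[piv], A[top]
        inv = pow(A[top][col], p - 2, p)   # inverse via Fermat's little theorem
        # Step 2: subtract multiples of the top row to zero out the column below it.
        for r in range(top + 1, m):
            c = (A[r][col] * inv) % p
            A[r] = [(x - c * y) % p for x, y in zip(A[r], A[top])]
        top += 1
    return A

# Example 1 over GF(3): S = {022, 210, 112}.
B = ref_mod_p([[0, 2, 2], [2, 1, 0], [1, 1, 2]], 3)
basis = [row for row in B if any(row)]   # Theorem 3: nonzero rows form a basis
independent = len(basis) == len(B)       # Corollary: independent iff no zero rows
```

On Example 1 this reproduces B = (210, 022, 000), the basis {210, 022}, and (since B has a zero row) confirms that S is dependent.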
Another proof: If B has no zero rows, then the rows of B form a basis for R(A); thus the dimension of R(A) is the number of rows of A. Since the rows of A span R(A), since some subset of a spanning set is a basis (Theorem 4.2), and since all bases for the same subspace have the same size, it follows that the rows of A form a basis for R(A) and thus are linearly independent.

Conversely, if S is linearly independent, then the rows of A form a basis for R(A). Since the nonzero rows of B also form a basis, and any two bases have the same size, it follows that B has no zero rows.

Defn: A matrix is in Reduced Row Echelon Form (RREF) if

1. The leading entry in each nonzero row is strictly to the right of the leading entry in the preceding row.
2. All the zero rows (if any) are at the bottom.
3. All leading entries are 1.
4. A leading entry is the only nonzero entry of its column.

Note that the difference between REF and RREF is conditions 3 and 4.

REF (L = a leading entry; the different appearances of L may be different nonzero elements):

[ 0 ... 0 L * ... * a * ... * b * ... * ]
[ 0 ... 0 0 0 ... 0 L * ... * c * ... * ]
[ 0 ... 0 0 0 ... 0 0 0 ... 0 L * ... * ]
[ 0 ... 0 0 0 ... 0 0 0 ... 0 0 0 ... 0 ]
[ ..................................... ]
[ 0 ... 0 0 0 ... 0 0 0 ... 0 0 0 ... 0 ]

RREF:

[ 0 ... 0 1 * ... * 0 * ... * 0 * ... * ]
[ 0 ... 0 0 0 ... 0 1 * ... * 0 * ... * ]
[ 0 ... 0 0 0 ... 0 0 0 ... 0 1 * ... * ]
[ 0 ... 0 0 0 ... 0 0 0 ... 0 0 0 ... 0 ]
[ ..................................... ]
[ 0 ... 0 0 0 ... 0 0 0 ... 0 0 0 ... 0 ]

Theorem: Every matrix is row equivalent to a unique RREF. (The construction below gives existence; uniqueness is stated without proof.)

Proof: In an REF, all entries below a leading entry are 0. Starting from an REF, multiply each nonzero row whose leading entry is a by a^{-1}, so that every leading entry becomes 1. For each leading entry i, let j_i denote its column; so a_{i, j_i} = 1. Then, for each nonzero entry a = a_{r, j_i} ≠ 0 with r < i (an entry above a leading entry), replace row R_r by R_r - aR_i; this "zeros out" the (r, j_i) entry.
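The two finishing steps in this proof (scale each leading entry to 1, then clear the entries above each leading 1) can be sketched concretely over a prime field GF(p). The function name `rref_from_ref` and its organization are ours, not the lecture's; it assumes its input is already an REF.

```python
def rref_from_ref(B, p):
    """Turn an REF over GF(p), p prime, into the RREF, following the proof:
    scale each leading entry to 1, then zero out entries above each leading 1."""
    B = [row[:] for row in B]
    pivots = []
    for i, row in enumerate(B):
        j = next((c for c, x in enumerate(row) if x % p != 0), None)
        if j is None:
            continue                       # zero row: nothing to scale
        inv = pow(row[j], p - 2, p)        # a^{-1} via Fermat's little theorem
        B[i] = [(inv * x) % p for x in row]
        pivots.append((i, j))
    for i, j in pivots:
        for r in range(i):                 # rows above the leading 1 in column j_i
            a = B[r][j]
            if a:
                B[r] = [(x - a * y) % p for x, y in zip(B[r], B[i])]
    return B

# The REF of Example 1 over GF(3), computed in Lecture 15.
R = rref_from_ref([[2, 1, 0], [0, 2, 2], [0, 0, 0]], 3)
```

On Example 1's REF this yields the rows (1, 0, 1), (0, 1, 1), (0, 0, 0), matching the worked RREF computation in Lecture 16.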
After making all leading entries of the REF equal to 1:

[ 0 ... 0 1 * ... * a * ... * b * ... * ]
[ 0 ... 0 0 0 ... 0 1 * ... * c * ... * ]
[ 0 ... 0 0 0 ... 0 0 0 ... 0 1 * ... * ]
[ 0 ... 0 0 0 ... 0 0 0 ... 0 0 0 ... 0 ]
[ ..................................... ]
[ 0 ... 0 0 0 ... 0 0 0 ... 0 0 0 ... 0 ]

Replace R_1 by R_1 - aR_2 (the entry previously labeled b has changed, to d = b - ac):

[ 0 ... 0 1 * ... * 0 * ... * d * ... * ]
[ 0 ... 0 0 0 ... 0 1 * ... * c * ... * ]
[ 0 ... 0 0 0 ... 0 0 0 ... 0 1 * ... * ]
[ 0 ... 0 0 0 ... 0 0 0 ... 0 0 0 ... 0 ]
[ ..................................... ]
[ 0 ... 0 0 0 ... 0 0 0 ... 0 0 0 ... 0 ]

Replace R_1 by R_1 - dR_3:

[ 0 ... 0 1 * ... * 0 * ... * 0 * ... * ]
[ 0 ... 0 0 0 ... 0 1 * ... * c * ... * ]
[ 0 ... 0 0 0 ... 0 0 0 ... 0 1 * ... * ]
[ 0 ... 0 0 0 ... 0 0 0 ... 0 0 0 ... 0 ]
[ ..................................... ]
[ 0 ... 0 0 0 ... 0 0 0 ... 0 0 0 ... 0 ]

Replace R_2 by R_2 - cR_3:

[ 0 ... 0 1 * ... * 0 * ... * 0 * ... * ]
[ 0 ... 0 0 0 ... 0 1 * ... * 0 * ... * ]
[ 0 ... 0 0 0 ... 0 0 0 ... 0 1 * ... * ]
[ 0 ... 0 0 0 ... 0 0 0 ... 0 0 0 ... 0 ]
[ ..................................... ]
[ 0 ... 0 0 0 ... 0 0 0 ... 0 0 0 ... 0 ]

Lecture 16: HW4 will be posted later today and is due on Friday, March 11.

Recall REF and RREF.

Example 1 over GF(3) = Z_3. An REF for

    [ 0 2 2 ]
A = [ 2 1 0 ]
    [ 1 1 2 ]

is:

    [ 2 1 0 ]
B = [ 0 2 2 ]
    [ 0 0 0 ]

To find the RREF: 2R_1 -> R_1 and 2R_2 -> R_2:

[ 1 2 0 ]
[ 0 1 1 ]
[ 0 0 0 ]

R_1 + R_2 -> R_1:

       [ 1 0 1 ]
RREF = [ 0 1 1 ]
       [ 0 0 0 ]

Example 2 over GF(4) = {0, 1, a, b}.
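The RREF theorem asserts uniqueness, which these notes do not prove. As a numerical sanity check, in the same hedged spirit as the sketches above: feeding the rows of Example 1's A to a full Gauss-Jordan reduction in every possible order should produce one and the same RREF. The routine `rref_mod_p` below is our own packaging of Theorem 2's procedure together with the finishing steps, for a prime field GF(p).

```python
from itertools import permutations

def rref_mod_p(rows, p):
    """Full Gauss-Jordan reduction to the RREF over GF(p), p prime."""
    A = [[x % p for x in row] for row in rows]
    m, n = len(A), len(A[0])
    top = 0
    for col in range(n):
        piv = next((r for r in range(top, m) if A[r][col] != 0), None)
        if piv is None:
            continue
        A[top], A[piv] = A[piv], A[top]
        inv = pow(A[top][col], p - 2, p)
        A[top] = [(inv * x) % p for x in A[top]]       # leading entry -> 1
        for r in range(m):
            if r != top and A[r][col]:
                c = A[r][col]
                A[r] = [(x - c * y) % p for x, y in zip(A[r], A[top])]
        top += 1
    return A

rows = [[0, 2, 2], [2, 1, 0], [1, 1, 2]]               # Example 1 over GF(3)
results = {tuple(map(tuple, rref_mod_p(list(perm), 3)))
           for perm in permutations(rows)}
# All 6 row orderings land on a single RREF, consistent with uniqueness.
```

This checks one instance, of course; it is evidence for the uniqueness claim, not a proof.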