Linear Algebra

Total Pages: 16

File Type: PDF, Size: 1020 KB

LINEAR ALGEBRA
by Alexander Givental
Sumizdat

Published by Sumizdat
5426 Hillside Avenue, El Cerrito, California 94530, USA
http://www.sumizdat.org

University of California, Berkeley Cataloging-in-Publication Data:
Givental, Alexander. Linear Algebra / by Alexander Givental; El Cerrito, Calif.: Sumizdat, 2009. iv, 200 p.: ill.; 23 cm. Includes bibliographical references and index. ISBN 978-0-9779852-4-1. 1. Linear algebra. I. Givental, Alexander. Library of Congress Control Number:

© 2009 by Alexander Givental. All rights reserved. Copies or derivative products of the whole work or any part of it may not be produced without the written permission from Alexander Givental ([email protected]), except for brief excerpts in connection with reviews or scholarly analysis.

Credits
Editing: Alisa Givental.
Art advising: Irina Mukhacheva, http://irinartstudio.com
Cataloging-in-publication: Catherine Moreno, Technical Services Department, Library, UC Berkeley.
Layout, typesetting and graphics: using LaTeX and Xfig.
Printing and binding: Thomson-Shore, Inc., http://www.tshore.com, member of the Green Press Initiative. 7300 West Joy Road, Dexter, Michigan 48130-9701, USA. Offset printing on 30% recycled paper; cover: by 4-color process on Kivar-7.
ISBN 978-0-9779852-4-1

Contents

1  A Crash Course ........................... 1
   1  Vectors ............................... 1
   2  Quadratic Curves ...................... 9
   3  Complex Numbers ....................... 17
   4  Problems of Linear Algebra ............ 23
2  Dramatis Personae ........................ 31
   1  Matrices .............................. 31
   2  Determinants .......................... 41
   3  Vector Spaces ......................... 59
3  Simple Problems .......................... 71
   1  Dimension and Rank .................... 71
   2  Gaussian Elimination .................. 83
   3  The Inertia Theorem ................... 97
4  Eigenvalues .............................. 117
   1  The Spectral Theorem .................. 117
   2  Jordan Canonical Forms ................ 137
Hints ....................................... 149
Answers ..................................... 153
Bibliography ................................ 157
Index ....................................... 158

Foreword

"Mathematics is a form of culture. Mathematical ignorance is a form of national culture."
"In antiquity, they knew few ways to learn, but modern pedagogy discovered 1001 ways to fail."
    August Dumbel (from his eye-opening address to NCVM, the National Council of Visionaries of Mathematics)

Chapter 1. A Crash Course

1. Vectors

Operations and their properties

The following definition of vectors can be found in elementary geometry textbooks, see for instance [4]. A directed segment AB on the plane or in space is specified by an ordered pair of points: the tail A and the head B. Two directed segments AB and CD are said to represent the same vector if they are obtained from one another by translation. In other words, the lines AB and CD must be parallel, the lengths |AB| and |CD| must be equal, and the segments must point toward the same of the two possible directions (Figure 1).

[Figure 1]  [Figure 2]

A trip from A to B followed by a trip from B to C results in a trip from A to C. This observation motivates the definition of the vector sum w = v + u of two vectors v and u: if AB represents v and BC represents u, then AC represents their sum w (Figure 2). The vector 3v = v + v + v has the same direction as v but is 3 times longer.
Generalizing this example, one arrives at the definition of the multiplication of a vector by a scalar: given a vector v and a real number α, the result of their multiplication is a vector, denoted αv, which has the same direction as v but is α times longer. The last phrase calls for comments, since it is literally true only for α > 1. If 0 < α < 1, being "α times longer" actually means "shorter." If α < 0, the direction of αv is in fact opposite to the direction of v. Finally, 0v = 0 is the zero vector, represented by directed segments AA of zero length.

Combining the operations of vector addition and multiplication by scalars, we can form expressions αu + βv + ... + γw. They are called linear combinations of the vectors u, v, ..., w with the coefficients α, β, ..., γ.

The pictures of a parallelogram and parallelepiped (Figures 3 and 4) prove that the addition of vectors is commutative and associative: for all vectors u, v, w,

    u + v = v + u   and   (u + v) + w = u + (v + w).

[Figure 3]  [Figure 4]

From properties of proportional segments and similar triangles, the reader will easily derive the following two distributive laws: for all vectors u, v and scalars α, β,

    (α + β)u = αu + βu   and   α(u + v) = αu + αv.

Coordinates

From a point O in space, draw three directed segments OA, OB, and OC not lying in the same plane, and denote by i, j, and k the vectors they represent. Then every vector u = OU can be uniquely written as a linear combination of i, j, k (Figure 5):

    u = αi + βj + γk.

The coefficients form the array (α, β, γ) of coordinates of the vector u (and of the point U) with respect to the basis i, j, k (or the coordinate system OABC).

[Figure 5]

Multiplying u by a scalar λ, or adding another vector u′ = α′i + β′j + γ′k, and using the above algebraic properties of the operations with vectors, we find:

    λu = λαi + λβj + λγk,   and   u + u′ = (α + α′)i + (β + β′)j + (γ + γ′)k.

Thus, the geometric operations with vectors are expressed by componentwise operations with the arrays of their coordinates:

    λ(α, β, γ) = (λα, λβ, λγ),
    (α, β, γ) + (α′, β′, γ′) = (α + α′, β + β′, γ + γ′).

What is a vector?

No doubt, the idea of vectors is not new to the reader. However, some subtleties of the above introduction do not easily meet the eye, and we would like to say here a few words about them.

As many other mathematical notions, vectors come from physics, where they represent quantities, such as velocities and forces, which are characterized by their magnitude and direction. Yet, the popular slogan "Vectors are magnitude and direction" does not qualify for a mathematical definition of vectors, e.g. because it does not tell us how to operate with them. The computer science definition of vectors as arrays of numbers, to be added "apples with apples, oranges with oranges," will meet the following objection by physicists. When a coordinate system rotates, the coordinates of the same force or velocity will change, but the numbers of apples and oranges won't. Thus forces and velocities are not arrays of numbers. The geometric notion of a directed segment resolves this problem. Note, however, that calling directed segments vectors would constitute abuse of terminology. Indeed, strictly speaking, directed segments can be added only when the head of one of them coincides with the tail of the other. So, what is a vector?
In our formulations, we actually avoided answering this question directly, and said instead that two directed segments represent the same vector if ... Such wording is due to the pedagogical wisdom of the authors of elementary geometry textbooks, because a direct answer sounds quite abstract: A vector is the class of all directed segments obtained from each other by translation in space. Such a class is shown in Figure 6.

[Figure 6]

This picture has another interpretation: For every point in space (the tail of an arrow), it indicates a new position (the head). The geometric transformation in space defined this way is translation. This leads to another attractive point of view: a vector is a translation. Then the sum of two vectors is the composition of the translations.

The dot product

This operation encodes metric concepts of elementary Euclidean geometry, such as lengths and angles. Given two vectors u and v of lengths |u| and |v|, making the angle θ to each other, their dot product (also called inner product or scalar product) is a number defined by the formula:

    ⟨u, v⟩ = |u| |v| cos θ.

Of the following properties, the first three are easy (check them!):

    (a) ⟨u, v⟩ = ⟨v, u⟩ (symmetricity);
    (b) ⟨u, u⟩ = |u|^2 > 0 unless u = 0 (positivity);
    (c) ⟨λu, v⟩ = λ⟨u, v⟩ = ⟨u, λv⟩ (homogeneity);
    (d) ⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩ (additivity with respect to the first factor).

To prove the last property, note that due to homogeneity, it suffices to check it assuming that w is a unit vector, i.e. |w| = 1. In this case, consider (Figure 7) a triangle ABC such that AB represents u, BC represents v, and therefore AC represents u + v, and let OW represent w. We can consider the line OW as the number line, with the points O and W representing the numbers 0 and 1 respectively, and denote by α, β, γ the numbers representing the perpendicular projections to this line of the vertices A, B, C of the triangle. Then

    ⟨AB, w⟩ = β − α,   ⟨BC, w⟩ = γ − β,   and   ⟨AC, w⟩ = γ − α.

The required identity follows, because γ − α = (γ − β) + (β − α).

[Figure 7]

Combining the properties (c) and (d) with (a), we obtain the following identities, expressing the bilinearity of the dot product (i.e. linearity with respect to each factor):

    ⟨αu + βv, w⟩ = α⟨u, w⟩ + β⟨v, w⟩,
    ⟨w, αu + βv⟩ = α⟨w, u⟩ + β⟨w, v⟩.
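The componentwise formulas and the dot product identities above are easy to check numerically. Below is a minimal sketch in plain Python (not from the book): vectors are stored as 3-tuples of coordinates, addition and scaling act componentwise, and the dot product is computed by the coordinate formula (the sum of products of matching coordinates), which agrees with |u| |v| cos θ whenever the basis i, j, k is orthonormal.

```python
import math

# Vectors as 3-tuples of coordinates (alpha, beta, gamma) with respect to a
# fixed orthonormal basis i, j, k.
def add(u, v):
    return tuple(ui + vi for ui, vi in zip(u, v))

def scale(lam, u):
    return tuple(lam * ui for ui in u)

def dot(u, v):
    # Coordinate formula for the dot product; equals |u||v| cos(theta)
    # in an orthonormal basis.
    return sum(ui * vi for ui, vi in zip(u, v))

def length(u):
    return math.sqrt(dot(u, u))

u, v, w = (1.0, 2.0, 0.0), (3.0, -1.0, 2.0), (0.5, 0.5, 1.0)

# Bilinearity: <2u + 3v, w> = 2<u, w> + 3<v, w>.
lhs = dot(add(scale(2, u), scale(3, v)), w)
rhs = 2 * dot(u, w) + 3 * dot(v, w)
print(math.isclose(lhs, rhs))   # True

# The angle between u and v, recovered from the defining formula.
theta = math.acos(dot(u, v) / (length(u) * length(v)))
print(theta)
```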
Recommended publications
  • arXiv:2003.06292v1 [math.GR], 12 Mar 2020: Algorithms in Linear Algebraic Groups
ALGORITHMS IN LINEAR ALGEBRAIC GROUPS
SUSHIL BHUNIA, AYAN MAHALANOBIS, PRALHAD SHINDE, AND ANUPAM SINGH

ABSTRACT. This paper presents some algorithms in linear algebraic groups. These algorithms solve the word problem and compute the spinor norm for orthogonal groups. This gives us an algorithmic definition of the spinor norm. We compute the double coset decomposition with respect to a Siegel maximal parabolic subgroup, which is important in computing infinite-dimensional representations for some algebraic groups.

1. INTRODUCTION

The spinor norm was first defined by Dieudonné and Kneser using Clifford algebras. Wall [21] defined the spinor norm using bilinear forms. These days, to compute the spinor norm, one uses the definition of Wall. In this paper, we develop a new definition of the spinor norm for split and twisted orthogonal groups. Our definition of the spinor norm is rich in the sense that it is algorithmic in nature. Now one can compute the spinor norm using a Gaussian elimination algorithm that we develop in this paper. This paper can be seen as an extension of our earlier work in the book chapter [3], where we described Gaussian elimination algorithms for orthogonal and symplectic groups in the context of public key cryptography. In computational group theory, one always looks for algorithms to solve the word problem. For a group G defined by a set of generators, ⟨X⟩ = G, the problem is to write g ∈ G as a word in X: we say that this is the word problem for G (for details, see [18, Section 1.4]). Brooksbank [4] and Costi [10] developed algorithms similar to ours for classical groups over finite fields.
  • Orthogonal Reduction: The Row Echelon Form (MATH 5330)
MATH 5330: Computational Methods of Linear Algebra
Lecture 9: Orthogonal Reduction
Xianyi Zeng, Department of Mathematical Sciences, UTEP

1 The Row Echelon Form

Our target is to solve the normal equation:

    A^t A x = A^t b,    (1.1)

where A ∈ R^{m×n} is arbitrary; we have shown previously that this is equivalent to the least squares problem:

    min_{x ∈ R^n} ‖Ax − b‖.    (1.2)

As A^t A ∈ R^{n×n} is symmetric positive semi-definite, we can try to compute the Cholesky decomposition such that A^t A = L L^t for some lower-triangular matrix L ∈ R^{n×n}. One problem with this approach is that we're not fully exploring our information; in particular, in the Cholesky decomposition we treat A^t A as a single entity, in ignorance of the information about A itself. In particular, the structure of A^t A motivates us to study a factorization A = QE, where Q ∈ R^{m×m} is orthogonal and E ∈ R^{m×n} is to be determined. Then we may transform the normal equation to:

    E^t E x = E^t Q^t b,    (1.3)

where the identity Q^t Q = I_m (the identity matrix in R^{m×m}) is used. This normal equation is equivalent to the least squares problem with E:

    min_{x ∈ R^n} ‖Ex − Q^t b‖.    (1.4)

Because orthogonal transformations preserve the L2-norm, (1.2) and (1.4) are equivalent to each other. Indeed, for any x ∈ R^n:

    ‖Ax − b‖^2 = (b − Ax)^t (b − Ax) = (b − QEx)^t (b − QEx) = [Q(Q^t b − Ex)]^t [Q(Q^t b − Ex)]
               = (Q^t b − Ex)^t Q^t Q (Q^t b − Ex) = (Q^t b − Ex)^t (Q^t b − Ex) = ‖Ex − Q^t b‖^2.

Hence the target is to find an E such that (1.3) is easier to solve.
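To make the excerpt concrete, here is a small sketch (NumPy assumed; the variable names are mine, not the lecture's). NumPy's reduced QR factorization provides the triangular factor that plays the role of E (returned under the name R), and solving the triangular system E x = Q^t b gives the same least squares solution as the normal equations:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3))       # an arbitrary m x n matrix with m >= n
b = rng.standard_normal(8)

# Reduced QR factorization: A = Q @ R with Q having orthonormal columns
# and R upper triangular (R plays the role the lecture calls E).
Q, R = np.linalg.qr(A)

# Solve the triangular system R x = Q^t b (np.linalg.solve is used here for
# brevity since R is square and nonsingular).
x_qr = np.linalg.solve(R, Q.T @ b)

# Compare with solving the normal equations A^t A x = A^t b directly.
x_normal = np.linalg.solve(A.T @ A, A.T @ b)
print(np.allclose(x_qr, x_normal))    # True (up to rounding)
```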
  • Implementation of Gaussian Elimination
International Journal of Innovative Technology and Exploring Engineering (IJITEE), ISSN: 2278-3075, Volume-5, Issue-11, April 2016
Implementation of Gaussian Elimination
Awatif M. A. Elsiddieg

Abstract: Gaussian elimination is an algorithm for solving systems of linear equations; it can also be used to find the rank of any matrix, and we use Gauss-Jordan elimination to find the inverse of a non-singular square matrix. This work gives basic concepts in section (1), shows what pivoting is, and gives an implementation of Gaussian elimination to solve a system of linear equations. In section (2) we find the rank of any matrix. In section (3) we use Gaussian elimination to find the inverse of a non-singular square matrix, and we compare the method with the Gauss-Jordan method. In section (4), for a practical implementation of the method that inherits the computational features of Gaussian elimination, we use programs in Matlab software.
Keywords: Gaussian elimination, algorithm, Gauss, Jordan, method, computation, features, programs in Matlab, software.

II. PIVOTING

The objective of pivoting is to make an element above or below a leading one into a zero. The "pivot" or "pivot element" is an element on the left hand side of a matrix that you want the elements above and below to be zero. Normally, this element is a one. If you can find a book that mentions pivoting, they will usually tell you that you must pivot on a one. If you restrict yourself to the three elementary row operations, then this is a true statement. However, if you are willing to combine the second and third elementary row operations, you come up ...
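As a rough illustration of the ideas in this abstract (not the author's Matlab code; the function name below is hypothetical), the sketch performs Gauss-Jordan elimination with partial pivoting on the augmented matrix [A | I] to compute the inverse of a non-singular square matrix:

```python
def gauss_jordan_inverse(A):
    """Invert a square matrix via Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    # Augmented matrix [A | I].
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Partial pivoting: bring up the row with the largest entry in this column.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[pivot][col]) < 1e-12:
            raise ValueError("matrix is singular")
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row so the pivot entry becomes a leading one.
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # Make the entries above and below the pivot zero.
        for r in range(n):
            if r != col:
                factor = M[r][col]
                M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]

print(gauss_jordan_inverse([[2.0, 1.0], [5.0, 3.0]]))  # approximately [[3, -1], [-5, 2]]
```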
  • Naïve Gaussian Elimination. Jamie Trahan, Autar Kaw, Kevin Martin; University of South Florida, United States of America; [email protected]
nbm_sle_sim_naivegauss.nb

Naïve Gaussian Elimination
Jamie Trahan, Autar Kaw, Kevin Martin
University of South Florida, United States of America
[email protected]

Introduction

One of the most popular numerical techniques for solving simultaneous linear equations is the Naïve Gaussian Elimination method. The approach is designed to solve a set of n equations with n unknowns, [A][X] = [C], where [A] is the n×n square coefficient matrix, [X] is the n×1 solution vector, and [C] is the n×1 right hand side array. Naïve Gauss consists of two steps:
1) Forward Elimination: In this step, the unknown is eliminated in each equation starting with the first equation. This way, the equations are "reduced" to one equation and one unknown in each equation.
2) Back Substitution: In this step, starting from the last equation, each of the unknowns is found.
To learn more about Naïve Gauss Elimination as well as the pitfalls of the method, click here. A simulation of the Naïve Gauss Method follows.

Section 1: Input Data

Below are the input parameters to begin the simulation. This is the only section that requires user input. Once the values are entered, Mathematica will calculate the solution vector [X].

• Number of equations, n:
    n = 4

• n×n coefficient matrix, [A]:
    A = Table[{{1, 10, 100, 1000}, {1, 15, 225, 3375}, {1, 20, 400, 8000}, {1, 22.5, 506.25, 11391}}]; A // MatrixForm

        1   10     100     1000
        1   15     225     3375
        1   20     400     8000
        1   22.5   506.25  11391

• n×1 right hand side array, [RHS]:
    RHS = Table[{227.04, 362.78, 517.35, 602.97}]; RHS // MatrixForm

        227.04
        362.78
        517.35
        602.97

Section 2: Naïve Gaussian Elimination Method

The following sections divide Naïve Gauss elimination into two steps: 1) Forward Elimination, 2) Back Substitution. To conduct Naïve Gauss Elimination, Mathematica will join the [A] and [RHS] matrices into one augmented matrix, [C], that will facilitate the process of forward elimination.
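For readers without Mathematica, the following is a hedged re-implementation of the two steps of the simulation in plain Python (the function name naive_gauss is mine, not the notebook's), applied to the same 4×4 system [A][X] = [RHS] entered above:

```python
def naive_gauss(A, rhs):
    """Naive Gaussian elimination: forward elimination, then back substitution.
    No pivoting is performed, so a zero pivot makes the method fail (one of the
    pitfalls the notebook alludes to)."""
    n = len(A)
    A = [row[:] for row in A]      # work on copies
    b = list(rhs)
    # Forward elimination: eliminate the unknown below each pivot.
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # Back substitution: solve for the unknowns starting from the last equation.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[1, 10, 100, 1000],
     [1, 15, 225, 3375],
     [1, 20, 400, 8000],
     [1, 22.5, 506.25, 11391]]
RHS = [227.04, 362.78, 517.35, 602.97]
print(naive_gauss(A, RHS))
```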
  • Cohomological Approach to the Graded Berezinian
J. Noncommut. Geom. 9 (2015), 543–565. Journal of Noncommutative Geometry. DOI 10.4171/JNCG/200. © European Mathematical Society.

Cohomological approach to the graded Berezinian
Tiffany Covolo

Abstract. We develop the theory of linear algebra over a (Z_2)^n-commutative algebra (n ∈ N), which includes the well-known super linear algebra as a special case (n = 1). Examples of such graded-commutative algebras are the Clifford algebras, in particular the quaternion algebra H. Following a cohomological approach, we introduce analogues of the notions of trace and determinant. Our construction reduces in the classical commutative case to the coordinate-free description of the determinant by means of the action of invertible matrices on the top exterior power, and in the supercommutative case it coincides with the well-known cohomological interpretation of the Berezinian.

Mathematics Subject Classification (2010). 16W50, 17A70, 11R52, 15A15, 15A66, 16E40.
Keywords. Graded linear algebra, graded trace and Berezinian, quaternions, Clifford algebra.

1. Introduction

Remarkable series of algebras, such as the algebra of quaternions and, more generally, Clifford algebras, turn out to be graded-commutative. Originated in [1] and [2], this idea was developed in [10] and [11]. The grading group in this case is (Z_2)^{n+1}, where n is the number of generators, and the graded-commutativity reads as

    ab = (−1)^{⟨ã, b̃⟩} ba    (1.1)

where ã, b̃ ∈ (Z_2)^{n+1} denote the degrees of the respective homogeneous elements a and b, and ⟨·, ·⟩ : (Z_2)^{n+1} × (Z_2)^{n+1} → Z_2 is the standard scalar product of binary (n+1)-vectors (see Section 2).
  • Linear Programming
Linear Programming
Lecture 1: Linear Algebra Review

Outline:
1 Linear Algebra Review
2 Linear Algebra Review
3 Block Structured Matrices
4 Gaussian Elimination Matrices
5 Gauss-Jordan Elimination (Pivoting)

Matrices in R^{m×n}. A matrix A ∈ R^{m×n},

    A = [ a_11  a_12  ...  a_1n
          a_21  a_22  ...  a_2n
           ...
          a_m1  a_m2  ...  a_mn ],

can be written in terms of its columns a_{•1}, a_{•2}, ..., a_{•n}, or in terms of its rows a_{1•}, a_{2•}, ..., a_{m•} stacked on top of one another:

    A = [ a_{•1}  a_{•2}  ...  a_{•n} ] = [ a_{1•} ; a_{2•} ; ... ; a_{m•} ].

Its transpose A^T ∈ R^{n×m} is

    A^T = [ a_11  a_21  ...  a_m1
            a_12  a_22  ...  a_m2
             ...
            a_1n  a_2n  ...  a_mn ]
        = [ a_{•1}^T ; a_{•2}^T ; ... ; a_{•n}^T ]
        = [ a_{1•}^T  a_{2•}^T  ...  a_{m•}^T ].
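A small illustration (NumPy assumed; not part of the lecture slides) of the same notation: the columns a_{•j}, the rows a_{i•}, and the fact that the rows of A^T are the columns of A.

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])            # A in R^{2x3}

col_1 = A[:, 0]                       # first column  a_{.1}
row_2 = A[1, :]                       # second row    a_{2.}
At = A.T                              # transpose, in R^{3x2}

print(col_1, row_2)                   # [1 4] [4 5 6]
print(np.array_equal(At[0], col_1))   # True: first row of A^T is the first column of A
```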
  • Gaussian Elimination and LU Decomposition (Supplement for MA511)
GAUSSIAN ELIMINATION AND LU DECOMPOSITION (SUPPLEMENT FOR MA511)
D. ARAPURA

Gaussian elimination is the go-to method for all basic linear algebra classes, including this one. We now summarize the main ideas.

1. Matrix multiplication

The rule for multiplying matrices is, at first glance, a little complicated. If A is m × n and B is n × p, then C = AB is defined and of size m × p, with entries

    c_ij = Σ_{k=1}^{n} a_ik b_kj.

The main reason for defining it this way will be explained when we talk about linear transformations later on.

THEOREM 1.1.
(1) Matrix multiplication is associative, i.e. A(BC) = (AB)C. (Henceforth, we will drop parentheses.)
(2) If A is m × n and I_{m×m} and I_{n×n} denote the identity matrices of the indicated sizes, then A I_{n×n} = I_{m×m} A = A. (We usually just write I, and the size is understood from context.)

On the other hand, matrix multiplication is almost never commutative; in other words, generally AB ≠ BA when both are defined. So we have to be careful not to inadvertently use commutativity in a calculation. Given an n × n matrix A, the inverse, if it exists, is another n × n matrix A^{-1} such that

    A A^{-1} = I,   A^{-1} A = I.

A matrix is called invertible or nonsingular if A^{-1} exists. In practice, it is only necessary to check one of these.

THEOREM 1.2. If B is n × n and AB = I or BA = I, then A is invertible and B = A^{-1}.

Proof. The proof of invertibility of A will need to wait until we talk about determinants.
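The entrywise rule c_ij = Σ_k a_ik b_kj translates directly into code. The sketch below (plain Python, written for illustration; the helper names matmul and identity are mine) implements it and spot-checks the statements above on small matrices: associativity, the identity property, and the failure of commutativity.

```python
def matmul(A, B):
    """C = AB with entries c_ij = sum_k a_ik * b_kj (A is m x n, B is n x p)."""
    m, n, p = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "inner dimensions must agree"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

def identity(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 1]]
C = [[2, 0], [1, 3]]

print(matmul(matmul(A, B), C) == matmul(A, matmul(B, C)))  # True: A(BC) = (AB)C
print(matmul(A, identity(2)) == A)                          # True: A I = A
print(matmul(A, B) == matmul(B, A))                         # False: generally AB != BA
```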
  • Numerical Analysis Lecture Notes, Peter J. Olver
Numerical Analysis Lecture Notes
Peter J. Olver

4. Gaussian Elimination

In this part, our focus will be on the most basic method for solving linear algebraic systems, known as Gaussian Elimination in honor of one of the all-time mathematical greats, the early nineteenth century German mathematician Carl Friedrich Gauss. As the father of linear algebra, his name will occur repeatedly throughout this text. Gaussian Elimination is quite elementary, but remains one of the most important algorithms in applied (as well as theoretical) mathematics. Our initial focus will be on the most important class of systems: those involving the same number of equations as unknowns, although we will eventually develop techniques for handling completely general linear systems. While the former typically have a unique solution, general linear systems may have either no solutions or infinitely many solutions. Since physical models require existence and uniqueness of their solution, the systems arising in applications often (but not always) involve the same number of equations as unknowns. Nevertheless, the ability to confidently handle all types of linear systems is a basic prerequisite for further progress in the subject. In contemporary applications, particularly those arising in numerical solutions of differential equations, in signal and image processing, and elsewhere, the governing linear systems can be huge, sometimes involving millions of equations in millions of unknowns, challenging even the most powerful supercomputer. So, a systematic and careful development of solution techniques is essential. Section 4.5 discusses some of the practical issues and limitations in computer implementations of the Gaussian Elimination method for large systems arising in applications.
  • A Proof of the Multiplicative Property of the Berezinian ∗
Morfismos, Vol. 12, No. 1, 2008, pp. 45–61

A proof of the multiplicative property of the Berezinian*
Manuel Vladimir Vega Blanco

Abstract. The Berezinian is the analogue of the determinant in super-linear algebra. In this paper we give an elementary and explicative proof that the Berezinian is a multiplicative homomorphism. This paper is self-contained, so we begin with a short introduction to superalgebra, where we study the category of supermodules. We write the matrix representation of super-linear transformations. Then we can define a concept analogous to the determinant: this is the superdeterminant, or Berezinian. We finish the paper by proving the multiplicative property.

2000 Mathematics Subject Classification: 81R99.
Keywords and phrases: Berezinian, superalgebra, supergeometry, supercalculus.

1. Introduction

Linear superalgebra arises in the study of elementary particles and their fields, especially with the introduction of supersymmetry. In nature we can find two classes of elementary particles, in accordance with the statistics of Einstein-Bose and the statistics of Fermi: the bosons and the fermions, respectively. The fields that represent them have a parity: there are bosonic fields (even) and fermionic fields (odd); the former commute with themselves and with the fermionic fields, while the latter anti-commute with themselves. These fields form an algebra, and with the above non-commutativity property they form an algebraic structure called a superalgebra.

* Research supported by a CONACyT scholarship. This paper is part of the author's M.Sc. thesis, presented at the Department of Mathematics of CINVESTAV-IPN.
  • Math 304 Linear Algebra. From Wednesday: basis and dimension
Highlights, Math 304 Linear Algebra
Harold P. Boas, [email protected]
June 12, 2006

From Wednesday:
• basis and dimension
• transition matrix for change of basis
Today:
• row space and column space
• the rank-nullity property

Example. Let

    A = [ 1  2  1   2  1
          2  4  3   7  1
          4  8  5  11  3 ].

Three-part problem. Find a basis for
• the row space (the subspace of R^5 spanned by the rows of the matrix),
• the column space (the subspace of R^3 spanned by the columns of the matrix),
• the nullspace (the set of vectors x such that Ax = 0).
We can analyze all three parts by Gaussian elimination, even though row operations change the column space.

Example continued. Applying R2 − 2R1 and R3 − 4R1, then R3 − R2, then R1 − R2:

    [ 1  2  1   2  1        [ 1  2  1  3   1        [ 1  2  1  2   1        [ 1  2  0  −1   2
      2  4  3   7  1    →     0  0  1  3  −1    →     0  0  1  3  −1    →     0  0  1   3  −1
      4  8  5  11  3 ]        0  0  1  3  −1 ]        0  0  0  0   0 ]        0  0  0   0   0 ].

Notice that in each step, the second column is twice the first column; also, the second column is the sum of the third and fifth columns. Row operations change the column space but preserve linear relations among the columns. The final row-echelon form shows that the column space has dimension equal to 2, and that the first and third columns are linearly independent.
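The conclusions of the example are easy to confirm numerically. The sketch below (NumPy assumed; not part of the course slides) checks that the matrix has rank 2 and that the column relations noted above hold:

```python
import numpy as np

A = np.array([[1, 2, 1, 2, 1],
              [2, 4, 3, 7, 1],
              [4, 8, 5, 11, 3]], dtype=float)

print(np.linalg.matrix_rank(A))                 # 2: dimension of the row and column spaces
print(np.allclose(A[:, 1], 2 * A[:, 0]))        # True: column 2 is twice column 1
print(np.allclose(A[:, 1], A[:, 2] + A[:, 4]))  # True: column 2 = column 3 + column 5
```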
  • Gaussian Elimination and Matrix Inverse (Updated September 3, 2010)
Gaussian Elimination

Gaussian elimination for the solution of a linear system transforms the system Sx = f into an equivalent system Ux = c with an upper triangular matrix U (that means all entries in U below the diagonal are zero). This transformation is done by applying three types of transformations to the augmented matrix (S | f).
Type 1: Interchange two equations.
Type 2: Replace an equation with the sum of the same equation and a multiple of another equation.
Once the augmented matrix (S | f) is transformed into (U | c), where U is an upper triangular matrix, we can solve this transformed system Ux = c using backsubstitution.

Example 1

Suppose that we want to solve

    [  2   4  −2 ] [x1]   [  2 ]
    [  4   9  −3 ] [x2] = [  8 ]     (1)
    [ −2  −3   7 ] [x3]   [ 10 ]

We apply Gaussian elimination. To keep track of the operations, we use, e.g., R2 = R2 − 2*R1, which means that the new row 2 is computed by subtracting 2 times row 1 from row 2.

    [  2   4  −2 |  2 ]        [ 2  4  −2 |  2 ]
    [  4   9  −3 |  8 ]   →    [ 0  1   1 |  4 ]   R2 = R2 − 2*R1
    [ −2  −3   7 | 10 ]        [ 0  1   5 | 12 ]   R3 = R3 + R1

                               [ 2  4  −2 |  2 ]
                          →    [ 0  1   1 |  4 ]
                               [ 0  0   4 |  8 ]   R3 = R3 − R2

The original system (1) is equivalent to

    [ 2  4  −2 ] [x1]   [ 2 ]
    [ 0  1   1 ] [x2] = [ 4 ]     (2)
    [ 0  0   4 ] [x3]   [ 8 ]

The system (2) can be solved by backsubstitution.
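A short sketch (NumPy assumed; not part of the original handout) reproduces Example 1: it applies the recorded row operations to the augmented matrix (S | f) and then solves the upper triangular system (2) by backsubstitution.

```python
import numpy as np

S = np.array([[ 2.0,  4.0, -2.0],
              [ 4.0,  9.0, -3.0],
              [-2.0, -3.0,  7.0]])
f = np.array([2.0, 8.0, 10.0])

M = np.column_stack([S, f])   # augmented matrix (S | f)
M[1] -= 2 * M[0]              # R2 = R2 - 2*R1
M[2] += M[0]                  # R3 = R3 + R1
M[2] -= M[1]                  # R3 = R3 - R2
U, c = M[:, :3], M[:, 3]

# Backsubstitution on the upper triangular system U x = c.
x = np.zeros(3)
for i in (2, 1, 0):
    x[i] = (c[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]

print(U)                      # [[2, 4, -2], [0, 1, 1], [0, 0, 4]]
print(x)                      # solution of the original system Sx = f
print(np.allclose(S @ x, f))  # True
```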
  • Matrix Algebra
AN INTRODUCTION TO MATRIX ALGEBRA
Craig W. Rasmussen
Department of Mathematics, Naval Postgraduate School

PREFACE

These notes are intended to serve as the text for MA1042: Matrix Algebra, and for its 6-week refresher equivalent, MAR142. The intent of MA1042 and MAR142 is to equip students with the fundamental mechanical skills for follow-on courses in linear algebra that require a basic working knowledge of matrix algebra and complex arithmetic. In the Department of Mathematics, the principal target course is MA3042, Linear Algebra. Those whose plans lead to MA3046, Computational Linear Algebra, are urged to take MA1043 rather than MA1042, although these notes might serve as the nucleus of the material covered in that course. Other disciplines make heavy use of these techniques as well; one must venture into the arts and humanities to find fields in which linear algebra does not come into play.

The notes contain two sections that are somewhat beyond what is normally covered in MA1042. The first is section 2.5, which gives a cursory introduction to vector spaces and linear transformations. Optional reading for students in MA1042, this provides a glimpse of the subject matter of MA3042. The second is section 3.5, the topic of which is the cross product. This is of potential interest to students of multivariable (three, to be precise) calculus.

The course opens with an introduction to systems of linear equations and their solution by substitution. The limitations of this method quickly become apparent, and we begin the development of alternatives by discussing the simplicity of the substitution method on certain highly structured systems.