Linear Algebra Explained in Four Pages

Excerpt from the NO BULLSHIT GUIDE TO LINEAR ALGEBRA by Ivan Savov

Abstract—This document reviews the fundamental ideas of linear algebra. We will learn about vectors, matrices, matrix operations, and linear transformations, and discuss both the theoretical and computational aspects of linear algebra. The tools of linear algebra open the gateway to the study of more advanced mathematics. A lot of knowledge buzz awaits you if you choose to follow the path of understanding, instead of trying to memorize a bunch of formulas.

I. INTRODUCTION

Linear algebra is the math of vectors and matrices. Let n be a positive integer and let R denote the set of real numbers; then R^n is the set of all n-tuples of real numbers. A vector v ∈ R^n is an n-tuple of real numbers. The notation "∈ S" is read "element of S." For example, consider a vector that has three components:

    v = (v1, v2, v3) ∈ (R, R, R) ≡ R^3.

A matrix A ∈ R^(m×n) is a rectangular array of real numbers with m rows and n columns. For example, a 3 × 2 matrix looks like this:

        ⎡ a11  a12 ⎤
    A = ⎢ a21  a22 ⎥ ∈ R^(3×2).
        ⎣ a31  a32 ⎦

The purpose of this document is to introduce you to the mathematical operations that we can perform on vectors and matrices and to give you a feel for the power of linear algebra. Many problems in science, business, and technology can be described in terms of vectors and matrices, so it is important that you understand how to work with these.

Prerequisites

The only prerequisite for this tutorial is a basic understanding of high school math concepts[1] like numbers, variables, equations, and the fundamental arithmetic operations on real numbers: addition (denoted +), subtraction (denoted −), multiplication (denoted implicitly), and division (fractions). You should also be familiar with functions that take real numbers as inputs and give real numbers as outputs, f : R → R. Recall that, by definition, the inverse function f⁻¹ undoes the effect of f. If you are given f(x) and you want to find x, you can use the inverse function as follows: f⁻¹(f(x)) = x. For example, the function f(x) = ln(x) has the inverse f⁻¹(x) = e^x, and the inverse of g(x) = √x is g⁻¹(x) = x².

II. DEFINITIONS

A. Vector operations

We now define the math operations for vectors. The operations we can perform on vectors u = (u1, u2, u3) and v = (v1, v2, v3) are: addition, subtraction, scaling, norm (length), dot product, and cross product:

    u + v = (u1 + v1, u2 + v2, u3 + v3)
    u − v = (u1 − v1, u2 − v2, u3 − v3)
    αu = (αu1, αu2, αu3)
    ||u|| = √(u1² + u2² + u3²)
    u · v = u1v1 + u2v2 + u3v3
    u × v = (u2v3 − u3v2, u3v1 − u1v3, u1v2 − u2v1)

The dot product and the cross product of two vectors can also be described in terms of the angle θ between the two vectors. The formula for the dot product of the vectors is u · v = ||u|| ||v|| cos θ. We say two vectors u and v are orthogonal if the angle between them is 90°. The dot product of orthogonal vectors is zero: u · v = ||u|| ||v|| cos(90°) = 0.

The norm of the cross product is given by ||u × v|| = ||u|| ||v|| sin θ. The cross product is not commutative: u × v ≠ v × u; in fact, u × v = −v × u.

B. Matrix operations

We denote the matrix as a whole by A and refer to its entries as aij. The mathematical operations defined for matrices are the following:

• addition (denoted +): C = A + B  ⇔  cij = aij + bij
• subtraction (the inverse of addition)
• matrix product: the product of matrices A ∈ R^(m×n) and B ∈ R^(n×ℓ) is another matrix C ∈ R^(m×ℓ) given by the formula

    C = AB  ⇔  cij = Σ (k=1 to n) aik·bkj.

  For example:

    ⎡ a11  a12 ⎤ ⎡ b11  b12 ⎤   ⎡ a11·b11 + a12·b21   a11·b12 + a12·b22 ⎤
    ⎢ a21  a22 ⎥ ⎣ b21  b22 ⎦ = ⎢ a21·b11 + a22·b21   a21·b12 + a22·b22 ⎥
    ⎣ a31  a32 ⎦               ⎣ a31·b11 + a32·b21   a31·b12 + a32·b22 ⎦

• matrix inverse (denoted A⁻¹)
• matrix transpose (denoted ᵀ):

    ⎡ α1  β1 ⎤ᵀ
    ⎢ α2  β2 ⎥  =  ⎡ α1  α2  α3 ⎤
    ⎣ α3  β3 ⎦     ⎣ β1  β2  β3 ⎦

• matrix trace: Tr[A] ≡ Σ (i=1 to n) aii
• determinant (denoted det(A) or |A|)

Note that the matrix product is not a commutative operation: AB ≠ BA.

C. Matrix-vector product

The matrix-vector product is an important special case of the matrix-matrix product. The product of a 3 × 2 matrix A and the 2 × 1 column vector x results in a 3 × 1 vector y given by:

             ⎡ y1 ⎤   ⎡ a11  a12 ⎤          ⎡ a11·x1 + a12·x2 ⎤
    y = Ax ⇔ ⎢ y2 ⎥ = ⎢ a21  a22 ⎥ ⎡ x1 ⎤ = ⎢ a21·x1 + a22·x2 ⎥
             ⎣ y3 ⎦   ⎣ a31  a32 ⎦ ⎣ x2 ⎦   ⎣ a31·x1 + a32·x2 ⎦

                  ⎡ a11 ⎤      ⎡ a12 ⎤
             = x1 ⎢ a21 ⎥ + x2 ⎢ a22 ⎥                      (C)
                  ⎣ a31 ⎦      ⎣ a32 ⎦

               ⎡ (a11, a12) · x ⎤
             = ⎢ (a21, a22) · x ⎥                           (R)
               ⎣ (a31, a32) · x ⎦

There are two[2] fundamentally different yet equivalent ways to interpret the matrix-vector product. In the column picture, (C), the multiplication of the matrix A by the vector x produces a linear combination of the columns of the matrix: y = Ax = x1·A[:,1] + x2·A[:,2], where A[:,1] and A[:,2] are the first and second columns of the matrix A.

In the row picture, (R), multiplication of the matrix A by the vector x produces a column vector with coefficients equal to the dot products of the rows of the matrix with the vector x.

D. Linear transformations

The matrix-vector product is used to define the notion of a linear transformation, which is one of the key notions in the study of linear algebra. Multiplication by a matrix A ∈ R^(m×n) can be thought of as computing a linear transformation TA that takes n-vectors as inputs and produces m-vectors as outputs:

    TA : R^n → R^m.

Instead of writing y = TA(x) for the linear transformation TA applied to the vector x, we simply write y = Ax. Applying the linear transformation TA to the vector x corresponds to the product of the matrix A and the column vector x. We say TA is represented by the matrix A.

You can think of linear transformations as "vector functions" and describe their properties in analogy with the regular functions you are familiar with:

    function f : R → R       ↔  linear transformation TA : R^n → R^m
    input x ∈ R              ↔  input x ∈ R^n
    output f(x)              ↔  output TA(x) = Ax ∈ R^m
    g ∘ f = g(f(x))          ↔  TB(TA(x)) = BAx
    function inverse f⁻¹     ↔  matrix inverse A⁻¹
    zeros of f               ↔  N(A) ≡ null space of A
    range of f               ↔  C(A) ≡ column space of A = range of TA

Note that the combined effect of applying the transformation TA followed by TB on the input vector x is equivalent to the matrix product BAx.

E. Fundamental vector spaces

A vector space consists of a set of vectors and all linear combinations of these vectors. For example, the vector space S = span{v1, v2} consists of all vectors of the form v = αv1 + βv2, where α and β are real numbers. We now define three fundamental vector spaces associated with a matrix A.

The column space of a matrix A is the set of vectors that can be produced as linear combinations of the columns of the matrix A:

    C(A) ≡ { y ∈ R^m | y = Ax for some x ∈ R^n }.

The column space is the range of the linear transformation TA (the set of possible outputs).

III. COMPUTATIONAL LINEAR ALGEBRA

Okay, I hear what you are saying: "Dude, enough with the theory talk, let's see some calculations." In this section we'll look at one of the fundamental algorithms of linear algebra, called Gauss–Jordan elimination.

A. Solving systems of equations

Suppose we're asked to solve the following system of equations:

    1x1 + 2x2 = 5,
    3x1 + 9x2 = 21.                                         (1)

Without a knowledge of linear algebra, we could use substitution, elimination, or subtraction to find the values of the two unknowns x1 and x2. Gauss–Jordan elimination is a systematic procedure for solving systems of equations based on the following row operations:

    α) Adding a multiple of one row to another row
    β) Swapping two rows
    γ) Multiplying a row by a constant

These row operations allow us to simplify the system of equations without changing their solution. To illustrate the Gauss–Jordan elimination procedure, we'll now show the sequence of row operations required to solve the system of linear equations described above. We start by constructing an augmented matrix as follows:

    ⎡ 1  2 |  5 ⎤
    ⎣ 3  9 | 21 ⎦

The first column in the augmented matrix corresponds to the coefficients of the variable x1, the second column corresponds to the coefficients of x2, and the third column contains the constants from the right-hand side.

The Gauss–Jordan elimination procedure consists of two phases. During the first phase, we proceed left-to-right by choosing a row with a leading one in the leftmost column (called a pivot) and systematically subtracting multiples of the pivot row from the rows below it to obtain zeros everywhere below the pivot.

[1] A good textbook to (re)learn high school math is minireference.com
[2] For more info see the video of Prof. Strang's MIT lecture: bit.ly/10vmKcL
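The vector operations of Section II.A can be sketched in plain Python, with each function mirroring the corresponding formula (the function names `add`, `scale`, `dot`, `norm`, and `cross` are our own choices, not from the text):

```python
import math

# Vector operations from Section II.A for 3-component vectors.

def add(u, v):
    return (u[0] + v[0], u[1] + v[1], u[2] + v[2])

def scale(alpha, u):
    return (alpha * u[0], alpha * u[1], alpha * u[2])

def dot(u, v):
    return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]

def norm(u):
    return math.sqrt(dot(u, u))   # ||u|| = sqrt(u1^2 + u2^2 + u3^2)

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

u, v = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)   # orthogonal unit vectors
print(dot(u, v))     # 0.0 -- the dot product of orthogonal vectors is zero
print(cross(u, v))   # (0.0, 0.0, 1.0); cross(v, u) gives the negative
```

Note how `dot(u, v) == 0` confirms the orthogonality criterion stated above.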
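The matrix product formula cij = Σk aik·bkj from Section II.B can likewise be written out directly; this sketch represents matrices as lists of rows (the helper names `matmul` and `transpose` are our own):

```python
# Matrix product and transpose from Section II.B, for matrices
# represented as lists of rows.

def matmul(A, B):
    m, n, l = len(A), len(B), len(B[0])   # A is m x n, B is n x l
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(l)]
            for i in range(m)]

def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(matmul(A, B))   # [[2, 1], [4, 3]]
print(matmul(B, A))   # [[3, 4], [1, 2]] -- AB != BA in general
print(transpose([[1, 2], [3, 4], [5, 6]]))   # [[1, 3, 5], [2, 4, 6]]
```

The two products above demonstrate concretely that matrix multiplication is not commutative.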
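The equivalence of the row picture (R) and the column picture (C) of the matrix-vector product from Section II.C can be checked numerically; the example matrix and vector below are our own:

```python
# Two pictures of the matrix-vector product y = Ax from Section II.C.

A = [[1, 2],
     [3, 4],
     [5, 6]]   # a 3 x 2 matrix
x = [2, -1]    # a 2-vector

# Row picture (R): y_i is the dot product of row i with x.
y_rows = [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

# Column picture (C): y = x1 * (first column) + x2 * (second column).
y_cols = [x[0] * A[i][0] + x[1] * A[i][1] for i in range(3)]

print(y_rows)   # [0, 2, 4]
print(y_cols)   # [0, 2, 4] -- the two pictures agree
```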
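The composition rule from Section II.D, TB(TA(x)) = BAx, can also be verified on a small example (the matrices A and B below are our own illustrative choices):

```python
# Section II.D: applying T_A then T_B equals multiplying by the product BA.

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 0], [1, 1]]   # represents T_A : R^2 -> R^2
B = [[2, 0], [0, 3]]   # represents T_B : R^2 -> R^2
x = [1, 2]

step_by_step = matvec(B, matvec(A, x))   # T_B(T_A(x))
combined     = matvec(matmul(B, A), x)   # (BA) x
print(step_by_step, combined)   # [2, 9] [2, 9]
```

Note the order: applying TA first and TB second corresponds to the product BA, not AB.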
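The row operations of Section III.A can be traced step by step on the example system; this is a sketch of one possible sequence of operations, using exact fractions to avoid floating-point noise:

```python
# Gauss-Jordan elimination on the system from Section III.A:
#   1*x1 + 2*x2 = 5,  3*x1 + 9*x2 = 21,
# using row operation alpha) (add a multiple of one row to another)
# and gamma) (multiply a row by a constant).

from fractions import Fraction

M = [[Fraction(1), Fraction(2), Fraction(5)],
     [Fraction(3), Fraction(9), Fraction(21)]]   # augmented matrix [A | b]

# Phase 1: clear below the pivot in column 1.  R2 := R2 - 3*R1
M[1] = [M[1][j] - 3 * M[0][j] for j in range(3)]   # -> [0, 3, 6]

# Normalize the second pivot.  R2 := (1/3)*R2
M[1] = [Fraction(1, 3) * m for m in M[1]]          # -> [0, 1, 2]

# Phase 2: clear above the pivot in column 2.  R1 := R1 - 2*R2
M[0] = [M[0][j] - 2 * M[1][j] for j in range(3)]   # -> [1, 0, 1]

x1, x2 = M[0][2], M[1][2]
print(x1, x2)   # 1 2, i.e. the solution is x1 = 1, x2 = 2
```

Substituting back confirms the answer: 1 + 2·2 = 5 and 3·1 + 9·2 = 21.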
