§1.4 Matrix Equation Ax = b: Linear Combination


(I) Matrix-vector Product ⟺ Linear Combination

If $A$ is an $m \times n$ matrix with columns $a_1, \dots, a_n$ and $x \in \mathbb{R}^n$, then the matrix-vector product $Ax$ is the linear combination of the columns of $A$ using the entries of $x$ as weights:
$$Ax = x_1 a_1 + x_2 a_2 + \cdots + x_n a_n.$$

(II) Example:
$$A = \begin{bmatrix} 1 & 2 & -1 \\ 0 & -5 & 3 \end{bmatrix}, \qquad x = \begin{bmatrix} 4 \\ 3 \\ 7 \end{bmatrix},$$
$$Ax = 4 \begin{bmatrix} 1 \\ 0 \end{bmatrix} + 3 \begin{bmatrix} 2 \\ -5 \end{bmatrix} + 7 \begin{bmatrix} -1 \\ 3 \end{bmatrix} = \begin{bmatrix} 4 + 6 - 7 \\ 0 - 15 + 21 \end{bmatrix} = \begin{bmatrix} 3 \\ 6 \end{bmatrix}.$$

Linear Equations in terms of Matrix-vector Products and Linear Combinations

A linear system with coefficient matrix $A$ and right-hand side $b$ can therefore be written in three equivalent ways: as the matrix equation $Ax = b$, as the vector equation $x_1 a_1 + x_2 a_2 + \cdots + x_n a_n = b$, or as the augmented matrix $(A \mid b)$. All three have exactly the same solution set.
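This column-by-column view is easy to verify numerically. The following is a minimal sketch using NumPy (the variable names are illustrative, not from the notes); it computes $Ax$ both directly and as a weighted sum of the columns of $A$:

```python
import numpy as np

# A and x from the example above.
A = np.array([[1, 2, -1],
              [0, -5, 3]])
x = np.array([4, 3, 7])

# Matrix-vector product computed directly.
direct = A @ x

# The same product as a linear combination of the columns of A,
# with the entries of x as weights: 4*a1 + 3*a2 + 7*a3.
as_combination = sum(x[j] * A[:, j] for j in range(A.shape[1]))

print(direct)           # [3 6]
print(as_combination)   # [3 6]
assert np.array_equal(direct, as_combination)
```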
Existence of Solutions

Example: Let
$$A = \begin{bmatrix} 1 & 3 & 4 \\ -4 & 2 & -6 \\ -3 & -2 & -7 \end{bmatrix}, \qquad b = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}.$$

Question: For what values of $b_1, b_2, b_3$ is the equation $Ax = b$ consistent?

Row reduce the augmented matrix. First apply $(\ell_2) + 4 \times (\ell_1) \to (\ell_2)$ and $(\ell_3) + 3 \times (\ell_1) \to (\ell_3)$:
$$\begin{bmatrix} 1 & 3 & 4 & b_1 \\ -4 & 2 & -6 & b_2 \\ -3 & -2 & -7 & b_3 \end{bmatrix} \Longrightarrow \begin{bmatrix} 1 & 3 & 4 & b_1 \\ 0 & 14 & 10 & b_2 + 4b_1 \\ 0 & 7 & 5 & b_3 + 3b_1 \end{bmatrix},$$
then $(\ell_3) - \tfrac{1}{2} \times (\ell_2) \to (\ell_3)$:
$$\Longrightarrow \begin{bmatrix} 1 & 3 & 4 & b_1 \\ 0 & 14 & 10 & b_2 + 4b_1 \\ 0 & 0 & 0 & b_3 + 3b_1 - \tfrac{1}{2}(b_2 + 4b_1) \end{bmatrix}.$$

Answer: the equation $Ax = b$ is consistent $\iff$ $b_3 + 3b_1 - \tfrac{1}{2}(b_2 + 4b_1) = 0$, i.e. $b_1 - \tfrac{1}{2} b_2 + b_3 = 0$.

Equivalent Statements on Existence of Solutions

For an $m \times n$ matrix $A$, the following statements are equivalent:

(a) $Ax = b$ has a solution for each $b$ in $\mathbb{R}^m$;
(b) each $b$ in $\mathbb{R}^m$ is a linear combination of the columns of $A$;
(c) the columns of $A$ span $\mathbb{R}^m$;
(d) $A$ has a pivot position in each row.

Equivalent Statements (I): (a) ⟺ (b) ⟺ (c) hold by definition. We will show (a) ⟺ (d).

Equivalent Statements (II): (d) ⟹ (a)? Suppose $A$ has a pivot in each row. Row reduce the augmented matrix: $(A \mid b) \Longrightarrow (U \mid d)$. Then $U$ has a pivot in each row, so no row of $(U \mid d)$ has the form $(0 \cdots 0 \mid \text{nonzero})$; hence $Ux = d$ has a solution, and therefore $Ax = b$ has a solution. YES!

Equivalent Statements (III): (a) ⟹ (d)? Suppose $Ax = b$ has a solution for each $b$. Row reduce: $(A \mid b) \Longrightarrow (U \mid d)$. Let $d$ be the vector with all components equal to $1$ (working the row reduction backwards, we can find the corresponding $b$). Then $Ux = d$ has a solution, which forces $U$ to have no zero row; hence $A$ has a pivot in each row. YES!
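The consistency condition from the example above can be checked symbolically. The following is a minimal sketch assuming SymPy is available; it simply replays the row operations from the text on a symbolic right-hand side (the names b1, b2, b3 and M are mine):

```python
import sympy as sp

b1, b2, b3 = sp.symbols('b1 b2 b3')

A = sp.Matrix([[1, 3, 4],
               [-4, 2, -6],
               [-3, -2, -7]])
b = sp.Matrix([b1, b2, b3])
M = A.row_join(b)  # augmented matrix (A | b)

# Replay the row operations from the example.
M[1, :] = M[1, :] + 4 * M[0, :]                  # (l2) + 4 x (l1) -> (l2)
M[2, :] = M[2, :] + 3 * M[0, :]                  # (l3) + 3 x (l1) -> (l3)
M[2, :] = M[2, :] - sp.Rational(1, 2) * M[1, :]  # (l3) - (1/2) x (l2) -> (l3)

# The last row is (0 0 0 | b1 - b2/2 + b3): the system is consistent
# exactly when this entry vanishes.
print(sp.expand(M[2, 3]))   # b1 - b2/2 + b3

# rank(A) = 2 < 3, so A has a zero row in echelon form: no pivot in
# row 3, matching statement (d) failing for this A.
print(A.rank())             # 2
```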
§1.5 Solution Sets of Linear Systems: Homogeneous Systems

A system $Ax = 0$ is called homogeneous. It always has the trivial solution $x = 0$; any nonzero solution $x$ is called non-trivial.

Example:
$$3x_1 + 5x_2 - 4x_3 = 0, \qquad -3x_1 - 2x_2 + 4x_3 = 0, \qquad 6x_1 + x_2 - 8x_3 = 0.$$

Reduce the augmented matrix $(A \mid 0)$ to row echelon form:
$$\begin{bmatrix} 3 & 5 & -4 & 0 \\ -3 & -2 & 4 & 0 \\ 6 & 1 & -8 & 0 \end{bmatrix} \sim \begin{bmatrix} 3 & 5 & -4 & 0 \\ 0 & 3 & 0 & 0 \\ 0 & -9 & 0 & 0 \end{bmatrix} \sim \begin{bmatrix} 3 & 5 & -4 & 0 \\ 0 & 3 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}.$$

Here $x_3$ is a free variable. Back-substitution gives $x_2 = 0$ and $3x_1 = 4x_3$, so the general solution is
$$x = x_3 \begin{bmatrix} 4/3 \\ 0 \\ 1 \end{bmatrix}, \qquad x_3 \text{ free},$$
and the system has non-trivial solutions.
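As a cross-check (again a sketch assuming SymPy), the null space computation recovers the same one-parameter solution set:

```python
import sympy as sp

A = sp.Matrix([[3, 5, -4],
               [-3, -2, 4],
               [6, 1, -8]])

# Basis of the solution set of the homogeneous system Ax = 0.
print(A.nullspace())   # [Matrix([[4/3], [0], [1]])]

# rref exposes the pivot columns; column 2 (zero-indexed) has no
# pivot, so x3 is the free variable.
R, pivots = A.rref()
print(pivots)          # (0, 1)
```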