3. Vector Spaces: Linear Combination, Span, Linear Dependence and Independence

See Sections 4.1–4.2 of the textbook for the definition and first examples of a vector space and subspace. In the following discussion, V is a real vector space and 0 denotes the zero element of V.

Definition 3.1. Let S be a nonempty subset of V.
◦ A linear combination of vectors in S is a vector of the form c1v1 + ··· + cnvn, where v1, ···, vn ∈ S and c1, ···, cn ∈ R.
◦ We say that such a linear combination is nontrivial if some ci ≠ 0.
◦ The span of S is defined to be the set of all linear combinations of elements of S; it is denoted by span(S). We define the span of the empty set to be {0}.
◦ We say S is linearly dependent if a nontrivial linear combination of distinct elements of S is equal to the zero vector.
◦ We say that S is a linearly independent set if S is not linearly dependent.

Remark 3.2. Let S be a nonempty subset of V. A vector v ∈ V is a linear combination of elements of S if and only if v can be written in the form v = c1v1 + ··· + cnvn, where v1, ···, vn ∈ S and c1, ···, cn ∈ R. Let u1, ···, un ∈ V. If v ∈ V is a linear combination of elements of {u1, ···, un}, then we simply say that v is a linear combination of u1, ···, un.

Example 3.3.
(1) Consider the vector space R³. Let

    v1 = (0, 1, −1), v2 = (−1, 0, 1), v3 = (1, −1, 0), v4 = (3, 2, −5), v5 = (1, 1, 1).

◦ Here v4 is a linear combination of v1, v2 because we can write v4 = 2v1 − 3v2.
◦ Exercise: Verify that v4 is a linear combination of v1, v3.
◦ Verify that v4 is a linear combination of v1, v2, v3, v5.
◦ Exercise: Show that v5 is not a linear combination of v1, v2, v3.
◦ Exercise: Explain geometrically what the above two exercises are saying about the vectors.
(2) Consider the vector space V consisting of all continuous functions on R. Consider the following elements of V:

    f1(x) = x sin²x,  f2(x) = 2x,  f3(x) = 1,  f4(x) = 3 − x cos²x.

Show that f4 is a linear combination of f1, f2, f3.
(3) Exercise: Show that in example (1) above, the span of v1, v2 consists of all the vectors in the plane S = {(x, y, z) ∈ R³ : x + y + z = 0}. Verify that v3 and v4 lie in S but v5 does not. What is the span of v1, v2, v3, v4?

Theorem 3.4. Let v1, ···, vn be elements of V and let S be the span of v1, ···, vn. Then S is the smallest subspace of V that contains v1, ···, vn.

Definition 3.5. Let S be a subset of V. We say that S spans V, or that S is a spanning set for V, if span(S) = V. Let A be a subset of V. We say that A is a minimal spanning set for V if A spans V and no proper subset of A spans V.

Definition 3.6. Let A be an m × n matrix. The span of the rows of A is a subspace of Rⁿ; this subspace is called the row space of A. So the row space of A consists of all linear combinations of the rows of A. Similarly one defines the column space of A.

Exercise: If B is a matrix obtained from A by performing a finite sequence of row operations, then the row space of B is equal to the row space of A.

Suppose A is an m × n matrix. Often we can use the above exercise to write down a simple set of vectors spanning the row space of A: apply the Gauss–Jordan algorithm to convert A to a reduced row echelon matrix B. The nonzero rows of B then give a set of vectors whose span is the row space of A (by the above exercise).
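Deciding whether a vector is a linear combination of given vectors amounts to solving a linear system in the coefficients c1, ···, cn. The following sketch, which is not part of the notes, checks Example 3.3(1) numerically; it assumes NumPy is available, and the helper name is_linear_combination and the tolerance 1e-10 are choices made here purely for illustration.

```python
import numpy as np

# Vectors from Example 3.3(1).
v1 = np.array([0.0, 1.0, -1.0])
v2 = np.array([-1.0, 0.0, 1.0])
v3 = np.array([1.0, -1.0, 0.0])
v4 = np.array([3.0, 2.0, -5.0])
v5 = np.array([1.0, 1.0, 1.0])

def is_linear_combination(target, vectors, tol=1e-10):
    """Decide whether `target` lies in the span of `vectors`.

    Solves the least-squares problem M c = target, where the columns
    of M are the given vectors; membership holds exactly when the
    residual of the best solution is (numerically) zero."""
    M = np.column_stack(vectors)
    coeffs, *_ = np.linalg.lstsq(M, target, rcond=None)
    residual = np.linalg.norm(M @ coeffs - target)
    return residual < tol, coeffs

ok, c = is_linear_combination(v4, [v1, v2])
print(ok, c)   # True [ 2. -3.]  i.e. v4 = 2*v1 - 3*v2
ok, _ = is_linear_combination(v5, [v1, v2, v3])
print(ok)      # False: v5 is not in span{v1, v2, v3}
```

The same idea, carried out by hand via row reduction, is what the exercises in Example 3.3 ask for.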
3.7. Dependence relation: Let S be a nonempty subset of V. Recall that S is linearly dependent if and only if there is some relation of the form

    c1s1 + ··· + cnsn = 0

where n ≥ 1 is some natural number, s1, ···, sn are distinct elements of S, and c1, ···, cn are real numbers, at least one of which is nonzero. A relation as above is called a nontrivial dependence relation among elements of S. So S is linearly dependent if and only if there exists a nontrivial dependence relation among elements of S. Note that, without loss of generality, we may assume that in the above dependence relation each cj ≠ 0: simply drop those terms whose coefficients are equal to 0.

Note also that if T ⊆ S and T is linearly dependent, then so is S, since a nontrivial dependence relation among the elements of T is also a nontrivial dependence relation among the elements of S.

There are several alternative ways of stating the definition of linear independence. We collect some of these in the following exercise:

3.8. Exercise: Let V be a vector space and let S be a subset of V. Then the following are equivalent:
(1) S is linearly independent.
(2) If T is a proper subset of S, then there exists v ∈ S such that v ∉ span(T).
(3) There is no vector v ∈ S such that v ∈ span(S − {v}).
(4) Every element of span(S) can be written uniquely as a linear combination of elements of S.

Proof. Let's prove (1) implies (2). Assume (1). Let T be a proper subset of S. Then S − T is nonempty. Pick v ∈ S − T. Suppose v ∈ span(T). Then v = c1t1 + ··· + cntn for some cj ∈ R and tj ∈ T, which would give us a nontrivial dependence relation

    v + (−c1)t1 + (−c2)t2 + ··· + (−cn)tn = 0

among elements of S, contradicting the linear independence of S. So v ∉ span(T). Notice that the dependence relation is nontrivial because the coefficient of v is equal to 1, and all the tj are distinct from v, since v ∉ T while t1, ···, tn ∈ T.

Let's prove (2) implies (3). Assume (2). Let v be any element of S and let T = S − {v}. By (2) there is some w ∈ S with w ∉ span(T); since every element of T lies in span(T), we must have w = v. So v ∉ span(T) = span(S − {v}).

Let's prove (3) implies (4). Assume (3). Let v ∈ span(S). If possible, suppose v is written as a linear combination of elements of S in two ways. Let s1, ···, sn be all the distinct elements of S that occur while writing these two linear combinations. Then we have two expressions for v of the form

    v = c1s1 + ··· + cnsn = d1s1 + ··· + dnsn

for some scalars c1, ···, cn, d1, ···, dn, so (c1 − d1)s1 + ··· + (cn − dn)sn = 0. If possible, suppose ci ≠ di for some i. Then

    si = (ci − di)⁻¹ Σ_{j ≠ i} (dj − cj)sj,

proving that si ∈ span(S − {si}), thereby contradicting (3). So we must have cj = dj for all j. Thus we have argued that every element of span(S) can be written as a linear combination of elements of S in only one way.

Let's prove (4) implies (1). Assume (4). Suppose S is linearly dependent. Then there exists a nontrivial dependence relation among the elements of S of the form c1s1 + ··· + cnsn = 0, where n ≥ 1, s1, ···, sn are distinct elements of S, and c1, ···, cn are real numbers, at least one of which is nonzero. Pick a j such that cj ≠ 0. Then we have

    (−cj)sj = c1s1 + ··· + cj−1sj−1 + cj+1sj+1 + ··· + cnsn.

Since s1, ···, sn are distinct, the above equation shows that the element (−cj)sj ∈ span(S) can be written as a linear combination of distinct elements of S in more than one way, thus contradicting (4). □

From the third condition above we get the following characterization of linear independence of finite sets: a subset {v1, ···, vn} of V is linearly independent if and only if v1 ≠ 0 and vj ∉ span{v1, ···, vj−1} for all j = 2, 3, ···, n.

Proof. If S = {v1, ···, vn} is linearly independent, then the condition holds (use (1) implies (3) of the exercise above). Suppose now S is linearly dependent. Then either v1 = 0, or by looking at a nontrivial dependence relation between elements of S we get vj ∈ span{v1, ···, vj−1} for some j. □
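One way to turn this characterization into a computation (a sketch, not from the notes): vj ∈ span{v1, ···, vj−1} exactly when appending vj does not increase the rank of the matrix whose rows are the vectors seen so far. The helper below is hypothetical and assumes NumPy; numerical rank with a tolerance stands in for the exact arithmetic of the proof.

```python
import numpy as np

def first_dependent_index(vectors, tol=1e-10):
    """Return the first index j with vectors[j] in the span of
    vectors[0..j-1] (so the list is linearly dependent), or None
    if the list is linearly independent.

    Uses the observation that v_j lies in the span of the earlier
    vectors exactly when appending v_j does not increase the rank."""
    rows = np.empty((0, len(vectors[0])))
    rank = 0
    for j, v in enumerate(vectors):
        stacked = np.vstack([rows, v])
        new_rank = np.linalg.matrix_rank(stacked, tol=tol)
        if new_rank == rank:   # rank did not grow: v_j is redundant
            return j
        rows, rank = stacked, new_rank
    return None

# v3 = -v1 - v2 for the vectors of Example 3.3(1), so index 2 is reported.
vs = [np.array(v, dtype=float) for v in ([0, 1, -1], [-1, 0, 1], [1, -1, 0])]
print(first_dependent_index(vs))   # 2
```

Note that the case v1 = 0 is covered automatically: a zero first vector leaves the rank at 0, so index 0 is reported.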
Let u1, ···, um ∈ Rⁿ. This characterization lets us check whether u1, ···, um are linearly independent: write down the matrix A whose rows are u1, ···, um, and convert A to a matrix B in row echelon form. If every row of B contains a pivot (equivalently, B has no zero row), then u1, ···, um are linearly independent; otherwise they are not.

3.9. A few exercises

(1) Let V be a vector space and let S be a nonempty subset of V.
    (a) Let W be any subspace of V such that S ⊆ W. Show that span(S) ⊆ W.
    (b) Conclude that span(S) is the smallest subspace of V that contains S.
(2) Let V be the vector space of all infinitely differentiable functions on R. Let v ∈ V be the element v(x) = x²eˣ. Let W = span{v, v′, v′′, ···, v⁽ⁿ⁾, ···}, where v⁽ⁿ⁾ denotes the n-th derivative of v. Find three elements u1, u2, u3 ∈ V such that W = span{u1, u2, u3}.
(3) Let V be the vector space in the previous problem and let n be any natural number. Show that V cannot be spanned by any n elements of V.
(4) Let A be an m × n matrix. Recall that the row space of A is defined to be the span of the rows of A. Let R(A) denote the row space of A.
    (a) If B is any k × m matrix, show that R(BA) ⊆ R(A).
    (b) If B is an invertible m × m matrix, show that R(BA) = R(A).
    (c) Let C be a matrix obtained from A by performing finitely many row operations. Show that R(C) = R(A).
    (d) Let A be the following matrix:

            ⎛ 1  0  0   0 ⎞
            ⎜ 2  0  1   2 ⎟
        A = ⎜ 3  2  0  −2 ⎟
            ⎜ 4  3  3   3 ⎟
            ⎝ 7  5  4   3 ⎠

        Find v1, v2, v3 ∈ R⁴ such that R(A) = span{v1, v2, v3}.
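As a computational companion to exercise (4)(d), one can carry out the Gauss–Jordan procedure described after Definition 3.6 in a computer algebra system and read off the nonzero rows of the reduced row echelon form. A minimal sketch, assuming SymPy is available (its exact rational arithmetic avoids tolerance issues):

```python
import sympy as sp

# The matrix from exercise 3.9(4)(d).
A = sp.Matrix([
    [1, 0, 0,  0],
    [2, 0, 1,  2],
    [3, 2, 0, -2],
    [4, 3, 3,  3],
    [7, 5, 4,  3],
])

# rref() returns the reduced row echelon form together with the
# tuple of pivot columns, computed in exact arithmetic.
B, pivot_cols = A.rref()
spanning = [B.row(i) for i in range(B.rows) if any(B.row(i))]
for r in spanning:
    print(r)
# Three nonzero rows are printed; by the exercise on row operations,
# they span R(A) and so can serve as v1, v2, v3.
```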