Vector Spaces


Linear Algebra III: vector spaces
Math Tools for Neuroscience (NEU 314), Fall 2016
Jonathan Pillow, Princeton Neuroscience Institute & Psychology
Lecture 4 (Tuesday 9/27), accompanying notes/slides

Outline

Last time:
• linear combination
• linear independence / dependence
• matrix operations: transpose, multiplication, inverse

Today's topics:
• matrix equations
• vector space, subspace
• basis, orthonormal basis
• orthogonal matrix
• rank
• row space / column space
• null space
• change of basis

inverse

• If A is a square matrix, its inverse A⁻¹ (if it exists) satisfies A⁻¹A = AA⁻¹ = I, where I is the identity matrix (e.g., for 4 × 4, the matrix with ones on the diagonal and zeros everywhere else). The identity satisfies Iv⃗ = v⃗ for any vector v⃗.

two weird tricks

• transpose of a product: (AB)ᵀ = BᵀAᵀ
• inverse of a product: (AB)⁻¹ = B⁻¹A⁻¹

(Square) Matrix Equation

Consider Av⃗ = b⃗, and assume (for now) that A is square and invertible. Left-multiplying both sides by the inverse of A gives the solution: v⃗ = A⁻¹b⃗. (A numerical sketch of this recipe follows the notes below.)

Notes: span and linear independence

A set of vectors is said to span a vector space if one can write any vector in the vector space as a linear combination of the set. A spanning set can be redundant: for example, if two of the vectors are identical, or are scaled copies of each other. This redundancy is formalized by defining linear independence. A set of vectors {v⃗₁, v⃗₂, …, v⃗_M} is linearly independent if (and only if) the only solution to the equation ∑ₙ αₙv⃗ₙ = 0 is αₙ = 0 (for all n).

vector space & basis

• vector space — the set of all points that can be obtained by linear combinations of some set of vectors
• basis — a set of linearly independent vectors that generate (through linear combinations) all points in a vector space

A basis for a vector space is a linearly independent spanning set. For example, consider the plane of this page. One vector is not enough to span the plane: scalar multiples of a single vector trace out a line (which is a subspace), but cannot "get off the line" to cover the rest of the plane. Two vectors, however, are sufficient to span the entire plane. Bases are not unique: any two vectors will do, as long as they don't lie along the same line. Three vectors are redundant: one can always be written as a linear combination of the other two. In general, the vector space Rᴺ requires a basis of size N.

[Figures: a 1D vector space (a subspace of R²); two different bases for the same 2D vector space]

Geometrically, the basis vectors define a set of coordinate axes for the space (although they need not be perpendicular). The standard basis is the set of unit vectors that lie along the axes of the space:

ê₁ = (1, 0, 0, …, 0)ᵀ,  ê₂ = (0, 1, 0, …, 0)ᵀ,  …,  ê_N = (0, 0, 0, …, 1)ᵀ.
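To make the matrix-equation recipe concrete, here is a minimal numerical sketch, assuming Python with numpy; the matrices A, B and vector b are made-up random examples. It checks the two product identities and then solves Av⃗ = b⃗ both by explicit inversion and with a solver:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))   # a generic 4 x 4 matrix (invertible with probability 1)
B = rng.standard_normal((4, 4))
b = rng.standard_normal(4)

# the "two weird tricks": transpose and inverse of a product reverse the order
assert np.allclose((A @ B).T, B.T @ A.T)
assert np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ np.linalg.inv(A))

# solve A v = b by left-multiplying with the inverse ...
v = np.linalg.inv(A) @ b
# ... or, equivalently, with a linear solver
v2 = np.linalg.solve(A, b)
assert np.allclose(v, v2)
assert np.allclose(A @ v, b)      # verify the solution
```

In practice np.linalg.solve is preferred over forming A⁻¹ explicitly: it is faster and more numerically stable.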
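Linear independence is also easy to probe numerically: stack the vectors as the columns of a matrix and ask for its rank (a concept formalized later in these notes); the set is independent exactly when the rank equals the number of vectors. A small sketch, again assuming numpy, with made-up vectors:

```python
import numpy as np

v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = v1 + 2 * v2                   # deliberately a linear combination of v1 and v2

V = np.column_stack([v1, v2, v3])  # the vectors as columns of a 3 x 3 matrix
print(np.linalg.matrix_rank(V))    # 2 < 3, so {v1, v2, v3} is linearly dependent

# v1 and v2 alone are independent: they span a 2D subspace (a plane) of R^3
print(np.linalg.matrix_rank(np.column_stack([v1, v2])))   # 2
```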
span

• span — to generate via linear combination
• vector space — the set of all points that can be spanned by some set of vectors
• basis — a linearly independent set of vectors that spans a vector space

orthonormal basis

• a basis composed of orthogonal unit vectors

[Figure: two different orthonormal bases for the same vector space]

Orthogonal matrix

• a square matrix whose columns (and rows) form an orthonormal basis (i.e., are orthogonal unit vectors)
• properties: length-preserving — ‖Ov⃗‖ = ‖v⃗‖ for every vector v⃗ (a numerical check follows below)

Notes: diagonal matrices and the nullspace

A diagonal matrix whose diagonal elements are all equal to one changes nothing: this matrix is called the identity, denoted I. If an element of the diagonal is zero, then the associated axis is annihilated. The set of vectors that are annihilated by the matrix forms a vector space [prove], which is called the row nullspace, or simply the nullspace, of the matrix. For example, the matrix

[ 2 0 ]
[ 0 0 ]

maps v⃗₁ to 2v⃗₁ and annihilates v⃗₂. Another implication of a zero diagonal element is that the matrix cannot "reach" the entire output space, but only a proper subspace (see the sketch below).
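As a concrete check of these properties, here is a minimal sketch, assuming Python with numpy (the angles are arbitrary), that builds 2D rotation matrices and verifies orthonormal columns, length preservation, and closure under products:

```python
import numpy as np

def rotation(theta):
    """2D rotation matrix: an orthogonal matrix."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

O = rotation(0.3)

# columns (and rows) are orthogonal unit vectors, so O^T O = I
assert np.allclose(O.T @ O, np.eye(2))

# orthogonal matrices are length-preserving
v = np.array([3.0, -1.0])
assert np.isclose(np.linalg.norm(O @ v), np.linalg.norm(v))

# the product of two orthogonal matrices is again orthogonal
P = O @ rotation(1.1)
assert np.allclose(P.T @ P, np.eye(2))
```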
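And a sketch of the annihilation example above (same numpy assumption): the diagonal matrix with entries (2, 0) stretches the first axis and kills the second, so its outputs fill only a line, and every vector along the second axis lies in its nullspace.

```python
import numpy as np

D = np.array([[2.0, 0.0],
              [0.0, 0.0]])          # zero on the second diagonal element

e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])

print(D @ e1)    # [2. 0.] -- the first axis is stretched by 2
print(D @ e2)    # [0. 0.] -- the second axis is annihilated: e2 is in the nullspace

# every output D @ x has zero second component, so the reachable set
# (the column space, defined next) is just a line in R^2
x = np.array([-1.5, 7.0])
print(D @ x)     # [-3.  0.]
```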
Notes: column space and rank

The subspace of outputs that a matrix can reach is called the column space of the matrix, since it is spanned by the matrix columns. The rank of a matrix is just the dimensionality of the column space. A matrix is said to have full rank if its rank is equal to the smaller of its two dimensions.

Notes: orthogonal matrices

An orthogonal matrix is a square matrix whose columns are pairwise orthogonal unit vectors. Remember that the columns of a matrix describe the response of the system to the standard basis. Thus an orthogonal matrix maps the standard basis onto a new set of N orthogonal axes, which form an alternative basis for the space. This operation is a generalized rotation, since it corresponds to a physical rotation of the space and possibly negation of some axes. Thus, the product of two orthogonal matrices is also orthogonal. Note that an orthogonal matrix is full rank (it has no nullspace), since a rotation cannot annihilate any non-zero vector.

• 2D example: a rotation matrix, which maps the standard basis vectors ê₁, ê₂ to the rotated vectors O(ê₁), O(ê₂):

O = [ cos θ   −sin θ ]
    [ sin θ    cos θ ]

Linear Systems of Equations

The classic motivation for the study of linear algebra is the solution of sets of linear equations such as

a11 v1 + a12 v2 + … + a1N vN = b1
a21 v1 + a22 v2 + … + a2N vN = b2
        ⋮
aM1 v1 + aM2 v2 + … + aMN vN = bM

Rank

• the rank of a matrix is equal to
  • the # of linearly independent columns
  • the # of linearly independent rows
  (remarkably, these are always the same)
• equivalent definition: the rank of a matrix is the dimensionality of the vector space spanned by its rows or by its columns
• for an m × n matrix A: rank(A) ≤ min(m, n) (the rank can't be greater than the # of rows or the # of columns)

column space of a matrix

W: an n × m matrix with columns c⃗₁, …, c⃗ₘ
• the vector space spanned by the columns of W
• these vectors live in an n-dimensional space, so the column space is a subspace of Rⁿ

row space of a matrix

W: an n × m matrix with rows r⃗₁, …, r⃗ₙ
• the vector space spanned by the rows of W
• these vectors live in an m-dimensional space, so the row space is a subspace of Rᵐ

null space of a matrix

W: an n × m matrix with rows r⃗₁, …, r⃗ₙ
• the vector space consisting of all vectors that are orthogonal to the rows of W
• equivalently: the null space of W is the vector space of all vectors x⃗ such that Wx⃗ = 0
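To tie these definitions together, a short sketch assuming Python with numpy; W is a made-up 3 × 4 example whose third row is the sum of the first two. It confirms that row rank equals column rank and that the rank is bounded by min(m, n):

```python
import numpy as np

# a made-up 3 x 4 matrix whose third row is the sum of the first two,
# so only two rows (and two columns) are linearly independent
W = np.array([[1.0, 0.0, 2.0, 1.0],
              [0.0, 1.0, 1.0, 3.0],
              [1.0, 1.0, 3.0, 4.0]])

print(np.linalg.matrix_rank(W))      # 2
print(np.linalg.matrix_rank(W.T))    # 2 -- row rank equals column rank
assert np.linalg.matrix_rank(W) <= min(W.shape)   # rank <= min(m, n)

# the column space is a 2D subspace of R^3; the row space is a 2D subspace of R^4
```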
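Finally, the null space can be computed from the singular value decomposition: the right singular vectors associated with (near-)zero singular values form an orthonormal basis for it. A sketch under the same assumptions, reusing W from the previous example; together the two sketches also illustrate the rank–nullity relation rank(W) + dim(null space) = m:

```python
import numpy as np

W = np.array([[1.0, 0.0, 2.0, 1.0],
              [0.0, 1.0, 1.0, 3.0],
              [1.0, 1.0, 3.0, 4.0]])

U, s, Vt = np.linalg.svd(W)
rank = int(np.sum(s > 1e-10))        # number of non-negligible singular values
null_basis = Vt[rank:]               # remaining right singular vectors span the null space

for x in null_basis:
    assert np.allclose(W @ x, 0)     # W x = 0, i.e., x is orthogonal to every row of W

print(rank + len(null_basis))        # 4 = m: rank + dim(null space) = # of columns
```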