Chapter 5: Vector Spaces


[Title slide: portraits of David Hilbert (1862–1943, Germany) and Giuseppe Peano (1858–1932, Italy).]

5.1 Vector Multiplication: Geometric Interpretation

Think of a vector (a Euclidean vector) as a directed line segment in N dimensions: it has a "length" and a "direction."

Scalar multiplication "scales" the vector, i.e., it changes its length but not the line it lies on. This is the source of linear dependence.

[Figure: a vector U and its scalar multiple 2U plotted in the x1–x2 plane; all scalar multiples of U lie on the same line through the origin.]

5.1 Vector Addition: Geometric Interpretation

Example: v' = [2 3], u' = [3 2], and w' = v' + u' = [5 5].

[Figure: u, v, and w = v + u plotted in the x1–x2 plane; w is the diagonal of the parallelogram formed by u and v.]

Note: Two vectors, plus the concepts of addition and scalar multiplication, can create a two-dimensional space. (Space = a set of points, say R3, a "universe.")

• A vector space is a mathematical structure formed by a collection of vectors, which may be added together and multiplied by scalars. (It is closed under addition and scalar multiplication.) Giuseppe Peano gave a precise definition of this concept in 1888.

5.2 Vector (Linear) Space

We introduce an algebraic structure called a vector space over a field. We use it to provide an abstract notion of a vector: an element of such an algebraic structure.

Given a field R (of scalars) and a set V of objects (vectors), on which "vector addition" (V×V → V), denoted by "+", and "scalar multiplication" (R×V → V), denoted by "·", are defined. If the following axioms hold for all objects u, v, and w ∈ V and all scalars c and k in R, then V is called a vector space and the objects in V are called vectors.

1. u + v ∈ V (closed under addition).
2. u + v = v + u (vector addition is commutative).
3. There is a null element Ø ∈ V such that u + Ø = u.
4. u + (v + w) = (u + v) + w (vector addition is associative).
5. For each v, there is a –v such that v + (–v) = Ø.
6. c·u ∈ V (closed under scalar multiplication).
7. c·(k·u) = (c·k)·u (scalar multiplication is associative).
8. c·(v + u) = (c·v) + (c·u) (scalar multiplication distributes over vector addition).
9. (c + k)·u = (c·u) + (k·u) (scalar multiplication distributes over scalar addition).
10. 1·u = u (unit element).
11. 0·u = Ø (zero element).

We can write S = {V, R, +, ·} to denote an abstract vector space. This is a general definition. If the field R represents the real numbers, then we have a real vector space.

5.2 Vector Space: Examples

1. n-dimensional vectors: x = [x1, ..., xn]' ∈ Rn (or Cn).
Note: The vector space consisting of n-column vectors, with vector addition and scalar multiplication corresponding to matrix operations, is an n-dimensional vector space (Euclidean n-space), which we will denote Rn.

2. Infinite sequences of real numbers, x = (x1, x2, ...) ∈ R∞ (or C∞). We will be interested in bounded sequences, i.e., those with |xk| < M for all k, for some M < ∞.

3. The collection of all continuous real-valued functions f(t) on an interval [a, b] of the real line, denoted C[a, b], is a linear vector space. The zero vector is the function identically zero on [a, b].

4. The collection of polynomial functions on the interval [a, b] is a linear vector space.

5.2 Vector Space: Subspace

Definition: Subspace
Given the vector space V and a set of vectors W ⊆ V, W is a subspace if it is itself a vector space. That is,
- for all u, v ∈ W, u + v ∈ W;
- for every u ∈ W and every c ∈ R, c·u ∈ W;
- W contains the 0-vector of V.

Thus, a nonempty subset W of a vector space V that is closed under addition and scalar multiplication and contains the 0-vector of V is a subspace of V. That is, a subspace is a subset of V that can itself be considered a vector space.
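To make the geometric examples above concrete, here is a minimal numerical sketch (Python with NumPy, which these notes do not otherwise use; the vector U below is made up for illustration). It reproduces the addition example v' = [2 3], u' = [3 2], checks the commutativity axiom, and shows why a vector and its scalar multiple are linearly dependent.

```python
import numpy as np

# Vector addition: the example from the notes.
v = np.array([2.0, 3.0])
u = np.array([3.0, 2.0])
w = v + u
print(w)                                   # [5. 5.] -> w' = v' + u' = [5 5]
print(np.allclose(u + v, v + u))           # True: axiom 2 (commutativity)

# Scalar multiplication only rescales a vector, so U and 2U lie on the
# same line through the origin: they are linearly dependent.
U = np.array([3.0, 1.0])                   # made-up vector for illustration
print(np.linalg.matrix_rank(np.column_stack([U, 2.0 * U])))   # 1, not 2
```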
Example: Subspace
[Figure: a plane W2 drawn in a space with axes v1, v2, v3; W2 does not pass through the origin.]
W2 is not a subspace of V: it does not include the 0-vector of V. W2 is just a "hyperplane."

Definition: Linear Combination
Given vectors u1, ..., uk, the vector w = c1·u1 + ... + ck·uk is called a linear combination of the vectors u1, ..., uk.
Notation: <u1, ..., uk> is the set of all linear combinations of the uj's.

Recall that a set of vectors is linearly dependent (LD) if any one of them can be expressed as a linear combination of the remaining vectors; otherwise, the set is linearly independent (LI).

5.2 Vector Space: Rank

Definition: Maximally linearly independent (max-LI) subset
Given a set U = {u1, ..., uk} of vectors in a vector space V. If a subset W = {u_i1, ..., u_iq} ⊆ U containing q vectors is LI and every subset of U with more than q vectors is LD, then W is called a maximally linearly independent subset of U. Moreover, we call q the rank of the set U, written q = rank(U). (The max-LI subset is not unique.)

Definition: Full rank
Given A (m×n), we say A has full rank if rank(A) = min(m, n).

5.2 Vector Space: Spanning Set

Definition: Spanning set
Given a set Z in V and U = {u1, ..., uk} in Z, we say U spans Z, or U is a spanning set for Z, if Z ⊆ <u1, ..., uk>, i.e., Z ⊆ <U>. That is, a set of vectors spans Z if every vector in Z can be expressed in terms of this set of vectors.

Example: Vectors v1 & v3 span the v3–v1 plane. Also, v3 & u1 span the same v3–v1 plane.
[Figure: vectors v1, v3, and u1 lying in the v3–v1 plane.]

5.2 Vector Space: Basis

Definition: Basis set ("basis")
Given U = {u1, ..., uk} and a subspace W ⊆ V. Then U is a basis set for W if
1) U spans the subspace W, and
2) U is linearly independent (LI).

Example: The N-dimensional subspace WN of the space V (here N = 2), with basis {u1, u2}.
[Figure: the plane W2 inside V, spanned by the basis vectors u1 and u2.]

5.2 Vector Space: Basis and Space Dimension

Theorem: If a vector space has a basis with a finite number N of elements, then every other basis also has N elements.

Definition: Space dimension
If a vector space V has a basis with N < ∞ elements, we say that V is a finite-dimensional vector space and that V has dimension N, or N = dim(V).

Theorem: If {u1, ..., uk} are linearly independent vectors with k < N, where N is the dimension of the vector space, we can always construct a basis by adding additional independent vectors: {u1, ..., uk, uk+1, ..., uN}.

A vector space V can also be decomposed into independent subspaces instead of vectors, say V1, V2, ..., Vm. Then
V = V1 + V2 + V3 + ... + Vm.
We call this a direct sum decomposition. It is similar to a basis decomposition when the Vi all have dimension 1.

Example: The 3-dimensional space V can be decomposed as V = v1 + v2 + v3, where the vi's are LI. Alternatively, V can be decomposed as V = V1 + v3, where V1 = v1 + v2.

If V = U + W with U ∩ W = {0}, then dim(V) = dim(U) + dim(W).

5.2 Vector Space: Measuring Length

As we defined them, vector spaces do not provide enough structure to study issues in real analysis, for example convergence of sequences. More structure is needed.

For example, we can introduce as an additional structure the concept of order (≤), to compare vectors. This additional structure creates ordered vector spaces.

We can also introduce a norm, which we will use to measure the length or magnitude of vectors. This creates a normed vector space, denoted as a pair (V, ║·║), where V is a vector space and ║·║ is a norm on V. A normed vector space has a defined mapping from V → R1.
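As a rough illustration of the rank, spanning-set, and basis definitions above, the sketch below stacks vectors as matrix columns and uses the matrix rank (Python/NumPy; the specific vectors u1, u2, u3 and the target w are made up for illustration only).

```python
import numpy as np

# Three vectors in R^3; u3 = u1 + u2, so the set {u1, u2, u3} is linearly dependent.
u1 = np.array([1.0, 0.0, 2.0])
u2 = np.array([0.0, 1.0, 1.0])
u3 = u1 + u2

U = np.column_stack([u1, u2, u3])
print(np.linalg.matrix_rank(U))        # 2 = rank(U): a max-LI subset has 2 vectors

# {u1, u2} is LI but does not span R^3 (dim R^3 = 3), so we can extend it
# to a basis by adding one more independent vector, e.g. e3.
e3 = np.array([0.0, 0.0, 1.0])
B = np.column_stack([u1, u2, e3])
print(np.linalg.matrix_rank(B))        # 3: {u1, u2, e3} is a basis for R^3

# A vector in span<u1, u2> can be written as a linear combination c1*u1 + c2*u2.
w = 2 * u1 - 3 * u2
c, *_ = np.linalg.lstsq(np.column_stack([u1, u2]), w, rcond=None)
print(c)                               # [ 2. -3.]
```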
5.3 Notes on Vector Operations

• An (m×1) column vector u and a (1×n) row vector v yield a product matrix uv of dimension (m×n).

Example: u = [3 2 1]' (3×1), v = [1 4 5] (1×3).

uv =
[ 3  12  15 ]
[ 2   8  10 ]   (a 3×3 matrix)
[ 1   4   5 ]

u'v = 3(1) + 2(4) + 1(5) = 16   (a 1×1 scalar, with v now treated as a 3×1 column).

5.3 Vector Multiplication: Dot (Inner) Product

• The dot or inner product (IP), "•", is a function that takes pairs of vectors and produces a number. For vectors c & z, it is defined as:
  c • z = c1·z1 + c2·z2 + ... + cn·zn = ∑i ci zi.

• The dot product produces a scalar: y = c'z = z'c, with dimensions (1×1) = (1×n)(n×1). Note that, from the definition, the dot product is commutative.

• When c is a vector of 1's, usually denoted ι, then:
  ι • z = 1·z1 + 1·z2 + ... + 1·zn = ∑i zi.

5.3 Vectors: Dot Product

• Inner products are common in econometrics. For example, the Residual Sum of Squares (RSS), where e is a vector of residuals:
  RSS = e'e = e1·e1 + e2·e2 + ... + en·en = ∑i ei².

• It is possible to define an inner product for functions. Instead of a sum over the corresponding elements of a vector, the inner product on functions is defined as an integral over some interval. For example, for functions f(x) & g(x):
  f • g = ∫ f(x) g(x) dx.

5.3 Vectors: Dot Product - Intuition

• Some intuition:
- The IP is used as a tool to define the length or size of a vector:
  ║α║ = sqrt[α'α] = sqrt(α1² + ... + αn²).
  The square root makes the inner product be expressed in the same units as the original vector.
- Now it is possible to compare vectors and measure "distances" between vectors and, eventually, convergence.
- The IP can also be used to define a notion like the angle between vectors, since any two vectors, say α and β, determine a plane.
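The following short sketch reproduces the outer-product example u = [3 2 1]', v = [1 4 5], the inner products above, and the norm/angle intuition (Python/NumPy; the residual vector e is made up for illustration).

```python
import numpy as np

u = np.array([3.0, 2.0, 1.0])          # treated as a (3x1) column
v = np.array([1.0, 4.0, 5.0])          # treated as a (1x3) row

print(np.outer(u, v))                  # 3x3 matrix [[3 12 15], [2 8 10], [1 4 5]]
print(np.dot(u, v))                    # 16.0, a scalar: 3*1 + 2*4 + 1*5

# Summing the elements of v via the vector of ones, iota:
iota = np.ones(3)
print(np.dot(iota, v))                 # 10.0 = 1 + 4 + 5

# Residual Sum of Squares as an inner product e'e (e is made up here):
e = np.array([0.5, -1.0, 0.25])
print(np.dot(e, e))                    # RSS = sum of squared residuals

# Norm (length) and angle between two vectors:
alpha, beta = u, v
norm_a = np.sqrt(np.dot(alpha, alpha))                       # same as np.linalg.norm(alpha)
cos_theta = np.dot(alpha, beta) / (norm_a * np.linalg.norm(beta))
print(norm_a, np.degrees(np.arccos(cos_theta)))              # length of u, angle between u and v
```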