Norms and Inner Products

Garrett Thomas

July 21, 2018

1 About

This document is part of a series of notes about math and machine learning. You are free to distribute it as you wish. The latest version can be found at http://gwthomas.github.io/notes. Please report any errors to [email protected].

2 Norms

Norms generalize the notion of length from Euclidean space. A norm on a vector space $V$ is a function $\|\cdot\| : V \to \mathbb{R}$ that satisfies

(i) $\|v\| \geq 0$, with equality if and only if $v = 0$
(ii) $\|\alpha v\| = |\alpha| \|v\|$
(iii) $\|u + v\| \leq \|u\| + \|v\|$ (the triangle inequality)

for all $u, v \in V$ and all $\alpha \in \mathbb{F}$. A vector space endowed with a norm is called a normed vector space, or simply a normed space.

An important fact about norms is that they induce metrics, giving a notion of convergence in vector spaces.

Proposition 1. If a vector space $V$ is equipped with a norm $\|\cdot\| : V \to \mathbb{R}$, then

$$d(u, v) \triangleq \|u - v\|$$

is a metric on $V$.

Proof. The axioms for metrics follow directly from those for norms. If $u, v, w \in V$, then

(i) $d(u, v) = \|u - v\| \geq 0$, with equality if and only if $u - v = 0$, i.e. $u = v$
(ii) $d(u, v) = \|u - v\| = \|(-1)(v - u)\| = |-1| \|v - u\| = d(v, u)$
(iii) $d(u, v) = \|u - v\| = \|u - w + w - v\| \leq \|u - w\| + \|w - v\| = d(u, w) + d(w, v)$

so $d$ is a metric as claimed.

Therefore any normed space is also a metric space. If a normed space is complete with respect to the distance metric induced by its norm, it is said to be a Banach space.

The metric induced by a norm automatically has the property of translation invariance, meaning that $d(u + w, v + w) = d(u, v)$ for any $u, v, w \in V$:

$$d(u + w, v + w) = \|(u + w) - (v + w)\| = \|u + w - v - w\| = \|u - v\| = d(u, v)$$

3 Inner products

An inner product on a vector space $V$ over $\mathbb{F}$ is a function $\langle \cdot, \cdot \rangle : V \times V \to \mathbb{F}$ satisfying

(i) $\langle v, v \rangle \geq 0$, with equality if and only if $v = 0$
(ii) Linearity in the first slot: $\langle u + v, w \rangle = \langle u, w \rangle + \langle v, w \rangle$ and $\langle \alpha u, v \rangle = \alpha \langle u, v \rangle$
(iii) Conjugate symmetry: $\langle u, v \rangle = \overline{\langle v, u \rangle}$

for all $u, v, w \in V$ and all $\alpha \in \mathbb{F}$. A vector space endowed with an inner product is called an inner product space.

If $\mathbb{F} = \mathbb{R}$, the conjugate symmetry condition reduces to symmetry: $\langle u, v \rangle = \langle v, u \rangle$. Additionally, the combination of linearity and conjugate symmetry implies sesquilinearity, i.e. conjugate-linearity in the second slot:

$$\langle u, \alpha v \rangle = \overline{\langle \alpha v, u \rangle} = \overline{\alpha \langle v, u \rangle} = \bar{\alpha} \, \overline{\langle v, u \rangle} = \bar{\alpha} \langle u, v \rangle$$

It is also worth noting that $\langle 0, v \rangle = 0$ for any $v$, because for any $u \in V$

$$\langle u, v \rangle = \langle u + 0, v \rangle = \langle u, v \rangle + \langle 0, v \rangle$$

Proposition 2. The function $\langle x, y \rangle \triangleq x\bar{y}$ is an inner product on $\mathbb{C}$ (considered as a vector space over $\mathbb{C}$).

Proof. If $x, y, z \in \mathbb{C}$ and $\alpha \in \mathbb{C}$, then

1. $\langle z, z \rangle = z\bar{z} = |z|^2 \geq 0$, with equality if and only if $z = 0$
2. $\langle x + y, z \rangle = (x + y)\bar{z} = x\bar{z} + y\bar{z} = \langle x, z \rangle + \langle y, z \rangle$, and $\langle \alpha x, y \rangle = (\alpha x)\bar{y} = \alpha(x\bar{y}) = \alpha \langle x, y \rangle$
3. $\overline{\langle x, y \rangle} = \overline{x\bar{y}} = \bar{x}y = y\bar{x} = \langle y, x \rangle$

so the function is an inner product as claimed.

3.1 Orthogonality

Orthogonality generalizes the notion of perpendicularity from Euclidean space. Two vectors $u$ and $v$ are said to be orthogonal if $\langle u, v \rangle = 0$; we write $u \perp v$ for shorthand. A set of vectors $v_1, \dots, v_n$ is described as pairwise orthogonal (or just orthogonal for short) if $v_j \perp v_k$ for all $j \neq k$.

Proposition 3. If $v_1, \dots, v_n$ are nonzero and pairwise orthogonal, then they are linearly independent.

Proof. Assume $v_1, \dots, v_n$ are nonzero and pairwise orthogonal, and suppose $\alpha_1 v_1 + \cdots + \alpha_n v_n = 0$. Then for each $j = 1, \dots, n$,

$$0 = \langle 0, v_j \rangle = \langle \alpha_1 v_1 + \cdots + \alpha_n v_n, v_j \rangle = \alpha_1 \langle v_1, v_j \rangle + \cdots + \alpha_n \langle v_n, v_j \rangle = \alpha_j \langle v_j, v_j \rangle$$

Since $v_j \neq 0$, $\langle v_j, v_j \rangle > 0$, so $\alpha_j = 0$. Hence $v_1, \dots, v_n$ are linearly independent.
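To make Proposition 3 concrete, here is a small numerical sketch (an addition to these notes, not part of the original): it uses NumPy with the standard inner product on $\mathbb{R}^3$, and the vectors are hypothetical examples. Orthogonality makes the Gram matrix $G_{jk} = \langle v_j, v_k \rangle$ diagonal, which is exactly why the cross terms vanish in the proof above.

```python
import numpy as np

# Three nonzero, pairwise orthogonal vectors in R^3 (hypothetical example).
v1 = np.array([1.0, 1.0, 0.0])
v2 = np.array([1.0, -1.0, 0.0])
v3 = np.array([0.0, 0.0, 2.0])
V = np.column_stack([v1, v2, v3])

# Pairwise orthogonality: the Gram matrix V^T V is diagonal.
G = V.T @ V
assert np.allclose(G, np.diag(np.diag(G)))

# Each diagonal entry <v_j, v_j> is positive since v_j != 0, so G is
# invertible and the columns of V are linearly independent, as predicted.
assert np.linalg.matrix_rank(V) == 3
```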
4 Norms induced by inner products

Any inner product induces a norm given by

$$\|v\| \triangleq \sqrt{\langle v, v \rangle}$$

Moreover, these norms have certain special properties related to the inner product. The notation $\|\cdot\|$ is not yet justified, as we have not yet shown that this is in fact a norm. We first need a couple of intermediate results, which are useful in their own right.

4.1 Pythagorean Theorem

The well-known Pythagorean theorem generalizes naturally to arbitrary inner product spaces.

Proposition 4 (Pythagorean theorem). If $u \perp v$, then

$$\|u + v\|^2 = \|u\|^2 + \|v\|^2$$

Proof. Suppose $u \perp v$, i.e. $\langle u, v \rangle = 0$. Then

$$\|u + v\|^2 = \langle u + v, u + v \rangle = \langle u, u \rangle + \langle v, u \rangle + \langle u, v \rangle + \langle v, v \rangle = \|u\|^2 + \|v\|^2$$

as claimed.

It is clear that by repeatedly applying the Pythagorean theorem for two vectors, we can obtain a slight generalization: if $v_1, \dots, v_n$ are orthogonal, then

$$\|v_1 + \cdots + v_n\|^2 = \|v_1\|^2 + \cdots + \|v_n\|^2$$

4.2 Cauchy-Schwarz inequality

The Cauchy-Schwarz inequality is one of the most widespread and useful inequalities in mathematics.

Proposition 5. If $V$ is an inner product space, then

$$|\langle u, v \rangle| \leq \|u\| \|v\|$$

for all $u, v \in V$. Equality holds exactly when $u$ and $v$ are linearly dependent.

Proof. If $v = 0$, then equality holds, for $|\langle u, 0 \rangle| = 0 = \|u\| \cdot 0 = \|u\| \|0\|$. So suppose $v \neq 0$. In this case we can define

$$w \triangleq \frac{\langle u, v \rangle}{\|v\|^2} v$$

which satisfies

$$\langle w, u - w \rangle = \left\langle \frac{\langle u, v \rangle}{\|v\|^2} v, \; u - \frac{\langle u, v \rangle}{\|v\|^2} v \right\rangle = \frac{\langle u, v \rangle}{\|v\|^2} \left( \langle v, u \rangle - \frac{\langle v, u \rangle}{\|v\|^2} \langle v, v \rangle \right) = 0$$

where we have used conjugate-linearity in the second slot, so that the coefficient of $v$ comes out conjugated: $\overline{\langle u, v \rangle} = \langle v, u \rangle$. Therefore, by the Pythagorean theorem,

$$\|u\|^2 = \|w + (u - w)\|^2 = \|w\|^2 + \|u - w\|^2 \geq \|w\|^2 = \left\| \frac{\langle u, v \rangle}{\|v\|^2} v \right\|^2 = \frac{|\langle u, v \rangle|^2}{\|v\|^4} \|v\|^2 = \frac{|\langle u, v \rangle|^2}{\|v\|^2}$$

which, after multiplying through by $\|v\|^2$, yields

$$|\langle u, v \rangle|^2 \leq \|u\|^2 \|v\|^2$$

The claimed inequality follows by taking square roots, since both sides are nonnegative.

Observe that the only inequality in the reasoning above comes from $\|u - w\|^2 \geq 0$, so equality holds if and only if

$$0 = u - w = u - \frac{\langle u, v \rangle}{\|v\|^2} v$$

which implies that $u$ and $v$ are linearly dependent. Conversely, if $u$ and $v$ are linearly dependent, then $u = \alpha v$ for some $\alpha \in \mathbb{F}$, and

$$w = \frac{\langle \alpha v, v \rangle}{\|v\|^2} v = \frac{\alpha \langle v, v \rangle}{\|v\|^2} v = \alpha v$$

giving $u - w = 0$, so that equality holds.

We now have all the tools needed to prove that $\|\cdot\|$ is in fact a norm.

Proposition 6. If $V$ is an inner product space, then

$$\|v\| \triangleq \sqrt{\langle v, v \rangle}$$

is a norm on $V$.

Proof. The axioms for norms mostly follow directly from those for inner products, but the triangle inequality requires a bit of work. If $u, v \in V$ and $\alpha \in \mathbb{F}$, then

(i) $\|v\| = \sqrt{\langle v, v \rangle} \geq 0$ since $\langle v, v \rangle \geq 0$, with equality if and only if $v = 0$.
(ii) $\|\alpha v\| = \sqrt{\langle \alpha v, \alpha v \rangle} = \sqrt{\alpha \bar{\alpha} \langle v, v \rangle} = \sqrt{|\alpha|^2 \langle v, v \rangle} = |\alpha| \sqrt{\langle v, v \rangle} = |\alpha| \|v\|$.
(iii) Using the Cauchy-Schwarz inequality,

$$\|u + v\|^2 = \langle u + v, u + v \rangle = \langle u, u \rangle + \langle u, v \rangle + \langle v, u \rangle + \langle v, v \rangle = \|u\|^2 + \langle u, v \rangle + \overline{\langle u, v \rangle} + \|v\|^2 = \|u\|^2 + 2 \operatorname{Re} \langle u, v \rangle + \|v\|^2 \leq \|u\|^2 + 2 |\langle u, v \rangle| + \|v\|^2 \leq \|u\|^2 + 2 \|u\| \|v\| + \|v\|^2 = (\|u\| + \|v\|)^2$$

Taking square roots yields $\|u + v\| \leq \|u\| + \|v\|$, since both sides are nonnegative.

Hence $\|\cdot\|$ is a norm as claimed.

Thus every inner product space is a normed space, and hence also a metric space. If an inner product space is complete with respect to the distance metric induced by its inner product, it is said to be a Hilbert space.
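The inequalities above are easy to sanity-check numerically. Here is a sketch (again, an addition rather than part of the original notes) using the inner product of Proposition 2 extended coordinatewise to $\mathbb{C}^n$; the helper names `inner` and `norm` and the random test vectors are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def inner(u, v):
    # <u, v> = sum_i u_i * conj(v_i): linear in the first slot,
    # conjugate-linear in the second, matching the convention above.
    return np.sum(u * np.conj(v))

def norm(v):
    # ||v|| = sqrt(<v, v>); <v, v> is always real and nonnegative.
    return np.sqrt(inner(v, v).real)

u = rng.standard_normal(5) + 1j * rng.standard_normal(5)
v = rng.standard_normal(5) + 1j * rng.standard_normal(5)

# Cauchy-Schwarz (Proposition 5): |<u, v>| <= ||u|| ||v||.
assert abs(inner(u, v)) <= norm(u) * norm(v) + 1e-12

# Triangle inequality (Proposition 6, part iii): ||u + v|| <= ||u|| + ||v||.
assert norm(u + v) <= norm(u) + norm(v) + 1e-12

# Equality in Cauchy-Schwarz when the vectors are linearly dependent.
w = (2 - 3j) * v
assert np.isclose(abs(inner(w, v)), norm(w) * norm(v))
```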
4.3 Orthonormality

A set of vectors $e_1, \dots, e_n$ is said to be orthonormal if the vectors are pairwise orthogonal and have unit norm (i.e. $\|e_j\| = 1$ for each $j$). This pair of conditions can be concisely expressed as $\langle e_j, e_k \rangle = \delta_{jk}$, where $\delta_{jk}$ is the Kronecker delta.

Observe that if $v \neq 0$, then $v / \|v\|$ is a unit vector in the same "direction" as $v$ (i.e. a positive scalar multiple of $v$), for

$$\left\| \frac{v}{\|v\|} \right\| = \frac{1}{\|v\|} \|v\| = 1$$

Recall that an orthogonal set of nonzero vectors is linearly independent. Unit vectors are necessarily nonzero (since $\|0\| = 0 \neq 1$), so an orthonormal set of vectors always forms a basis for its span. Not surprisingly, such a basis is referred to as an orthonormal basis.

A nice property of orthonormal bases is that a vector's coefficients in terms of the basis can be computed via the inner product.

Proposition 7. If $e_1, \dots, e_n$ is an orthonormal basis for $V$, then any $v \in V$ can be written

$$v = \langle v, e_1 \rangle e_1 + \cdots + \langle v, e_n \rangle e_n$$

Proof. Since $e_1, \dots, e_n$ is a basis for $V$, any $v \in V$ has a unique expansion

$$v = \alpha_1 e_1 + \cdots + \alpha_n e_n$$

where $\alpha_1, \dots, \alpha_n \in \mathbb{F}$. Then for $j = 1, \dots, n$,

$$\langle v, e_j \rangle = \langle \alpha_1 e_1 + \cdots + \alpha_n e_n, e_j \rangle = \alpha_1 \langle e_1, e_j \rangle + \cdots + \alpha_n \langle e_n, e_j \rangle = \alpha_j$$

since $\langle e_k, e_j \rangle = 0$ for $k \neq j$ and $\langle e_j, e_j \rangle = 1$.

The norm of a vector expressed in an orthonormal basis is also easily found, for

$$\|\alpha_1 e_1 + \cdots + \alpha_n e_n\|^2 = \|\alpha_1 e_1\|^2 + \cdots + \|\alpha_n e_n\|^2 = |\alpha_1|^2 + \cdots + |\alpha_n|^2$$

by application of the Pythagorean theorem. Combining this with the previous result, we see

$$\|v\|^2 = |\langle v, e_1 \rangle|^2 + \cdots + |\langle v, e_n \rangle|^2$$

4.4 The Gram-Schmidt process

The Gram-Schmidt process produces an orthonormal basis for a vector space from an arbitrary basis for the space.

Proposition 8. Let $v_1, \dots, v_n \in V$ be linearly independent. Define

$$w_j \triangleq v_j - \langle v_j, e_1 \rangle e_1 - \cdots - \langle v_j, e_{j-1} \rangle e_{j-1}, \qquad e_j \triangleq \frac{w_j}{\|w_j\|}$$

for $j = 1, \dots, n$. Then $e_1, \dots, e_n$ is an orthonormal basis for $\operatorname{span}(v_1, \dots, v_n)$.
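Proposition 8 translates almost line-for-line into code. The following sketch (an addition to the notes; the function names and example basis are hypothetical) implements the process with the standard inner product on $\mathbb{R}^n$, then checks Proposition 7 and the norm identity above against the result.

```python
import numpy as np

def inner(u, v):
    # <u, v> = sum_i u_i * conj(v_i), matching the convention in these notes
    # (linear in the first slot, conjugate-linear in the second).
    return np.sum(u * np.conj(v))

def gram_schmidt(vs):
    # Proposition 8: w_j = v_j - <v_j, e_1> e_1 - ... - <v_j, e_{j-1}> e_{j-1},
    # then e_j = w_j / ||w_j||. Assumes the inputs are linearly independent,
    # so each w_j is nonzero and the normalization is well-defined.
    es = []
    for v in vs:
        w = v - sum(inner(v, e) * e for e in es)
        es.append(w / np.sqrt(inner(w, w).real))
    return es

# Hypothetical example: orthonormalize a basis of R^3.
basis = [np.array([1.0, 1.0, 0.0]),
         np.array([1.0, 0.0, 1.0]),
         np.array([0.0, 1.0, 1.0])]
es = gram_schmidt(basis)

# The output is orthonormal: <e_j, e_k> = delta_jk.
for j, e in enumerate(es):
    for k, f in enumerate(es):
        assert np.isclose(inner(e, f), 1.0 if j == k else 0.0)

# Proposition 7: coefficients in an orthonormal basis come from inner
# products, and ||v||^2 is the sum of squared coefficient magnitudes.
v = np.array([2.0, -1.0, 3.0])
coeffs = [inner(v, e) for e in es]
assert np.allclose(sum(c * e for c, e in zip(coeffs, es)), v)
assert np.isclose(inner(v, v), sum(abs(c) ** 2 for c in coeffs))
```

Note the design choice of subtracting projections against the already-orthonormalized $e_1, \dots, e_{j-1}$ rather than the raw $v_k$: this is what makes each correction term a single inner product, with no division by $\|v_k\|^2$ needed.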