The Dual Space


(III.D) Linear Functionals II: The Dual Space

First I remind you that a linear functional on a vector space $V$ over $\mathbb{R}$ is any linear transformation $f : V \to \mathbb{R}$. In §III.C we looked at a finite-dimensional subspace [= derivations] of the infinite-dimensional space of linear functionals on $C^\infty(M)$. Now let's take a finite-dimensional vector space $V$ and consider
$$V^\vee := \{\text{all linear functionals on } V\},$$
read "$V$-dual". Two functionals $f_1, f_2 \in V^\vee$ are equal if they give the same value on all $\vec{v} \in V$: $f_1(\vec{v}) = f_2(\vec{v})$.

Let $B = \{\vec{v}_1, \dots, \vec{v}_n\}$ be a basis for $V$, and let $f_i : V \to \mathbb{R}$ be the special linear functionals defined by
$$f_i(\vec{v}_j) = \delta_{ij} = \begin{cases} 0, & i \neq j \\ 1, & i = j. \end{cases}$$
By linearity, on any $\vec{v} = \sum a_i \vec{v}_i$, this makes $f_i(\vec{v}) = a_i$.

PROPOSITION. The $\{f_1, \dots, f_n\}$ are a basis for $V^\vee$ (the "dual basis"), and $\dim(V^\vee) = \dim(V)\;[= n]$.

PROOF. That the $\{f_i\}$ span $V^\vee$: let $f : V \to \mathbb{R}$ be any linear functional, and let $b_i = f(\vec{v}_i)$. Then on any $\vec{v} = \sum a_i \vec{v}_i$,
$$f(\vec{v}) = \sum a_i f(\vec{v}_i) = \sum a_i b_i = \sum b_i f_i(\vec{v}).$$
Therefore $f = \sum b_i f_i$ as a functional.

That the $\{f_i\}$ are linearly independent: suppose $\sum a_j f_j = 0$ as a functional; that is, on all $\vec{v} \in V$, $\sum a_j f_j(\vec{v}) = 0$. In particular, applying this to $\vec{v} = \vec{v}_i$ for each $i$,
$$0 = \sum_j a_j f_j(\vec{v}_i) = \sum_j a_j \delta_{ji} = a_i.$$
So all the $a_i$ are zero. $\square$

Change of basis. Take another basis ${}'B = \{{}'\vec{v}_1, \dots, {}'\vec{v}_n\}$ for $V$, and the corresponding dual basis ${}'B^\vee = \{{}'f_1, \dots, {}'f_n\}$ for $V^\vee$: that is, ${}'f_i({}'\vec{v}_j) = \delta_{ij}$. Regardless of the basis $B$,
$$f(\vec{v}) = \Big(\sum_i a_i f_i\Big)\Big(\sum_j b_j \vec{v}_j\Big) = \sum_{i,j} a_i b_j f_i(\vec{v}_j) = \sum_{i,j} a_i b_j \delta_{ij} = \sum_i a_i b_i.$$
In terms of vectors, with
$$[f]_{B^\vee} = \begin{pmatrix} a_1 \\ \vdots \\ a_n \end{pmatrix}, \qquad [\vec{v}]_B = \begin{pmatrix} b_1 \\ \vdots \\ b_n \end{pmatrix},$$
this just says
$$f(\vec{v}) = {}^t\big([f]_{B^\vee}\big) \cdot [\vec{v}]_B = \begin{pmatrix} a_1 & \cdots & a_n \end{pmatrix} \begin{pmatrix} b_1 \\ \vdots \\ b_n \end{pmatrix}.$$
So for the different bases,
$$f(\vec{v}) = {}^t\big([f]_{{}'B^\vee}\big) \cdot [\vec{v}]_{{}'B} = {}^t\big([f]_{{}'B^\vee}\big) \cdot \big(P_{B \to {}'B}\,[\vec{v}]_B\big) = {}^t\big({}^tP_{B \to {}'B}\,[f]_{{}'B^\vee}\big) \cdot [\vec{v}]_B$$
for all $\vec{v}$, which is to say
$$[f]_{B^\vee} = {}^tP_{B \to {}'B}\,[f]_{{}'B^\vee}.$$
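These coordinate formulas are easy to check numerically. The sketch below (a NumPy illustration, not part of the notes; the basis and vectors are made up) builds the dual basis of a concrete basis of $\mathbb{R}^3$ and verifies that the pairing $f(\vec{v}) = \sum a_i b_i$ comes out the same whether computed in standard coordinates or in $B$-coordinates.

```python
import numpy as np

# Basis B = {v1, v2, v3} of R^3: the columns of M.
M = np.array([[1.0, 1.0, 2.0],
              [0.0, 1.0, 2.0],
              [-1.0, 1.0, 0.0]])

# The dual-basis functional f_i returns the i-th B-coordinate of its input,
# and B-coordinates are recovered by M^{-1}; so f_i is the i-th row of M^{-1}.
F = np.linalg.inv(M)
assert np.allclose(F @ M, np.eye(3))   # f_i(v_j) = delta_ij

# A covector f (standard dual coordinates c) and a vector v:
c = np.array([4.0, 5.0, 6.0])
v = np.array([1.0, -2.0, 3.0])

# B-coordinates of v, and B-dual coordinates a_i = f(v_i) of f:
v_B = F @ v
a = M.T @ c

# The pairing f(v) = sum a_i b_i is basis-independent:
print(np.isclose(np.dot(a, v_B), np.dot(c, v)))  # True
```

Note that $a = {}^tM\,c$ while $[\vec{v}]_B = M^{-1}v$: covector coordinates pick up the transpose where vector coordinates pick up the inverse, which is the inverse-transpose rule derived next.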
But by definition of $P_{B^\vee \to {}'B^\vee}$,
$$[f]_{{}'B^\vee} = P_{B^\vee \to {}'B^\vee}\,[f]_{B^\vee},$$
and comparing the two equalities gives the formula
$$P_{B^\vee \to {}'B^\vee} = {}^t\big(P_{B \to {}'B}\big)^{-1}$$
describing change of coordinates for covectors.$^1$ If you stretch a basis, say $B = \{\vec{v}_1, \dots, \vec{v}_n\} \longrightarrow {}'B = \{2\vec{v}_1, \dots, 2\vec{v}_n\}$, then vectors "appear" to shrink by half; that is, their coordinates with respect to the basis shrink, since the vectors themselves must stay the same. This formula, in the same sense, says that covectors would "appear" to double in size.

$^1$ That is, functionals. In this context, the prefix "co" is essentially a synonym for "dual".

Brief aside on calculus. In $\mathbb{R}^1$, let $x$ be the coordinate with respect to $B = \{\hat{e}\}$ and $u$ the coordinate with respect to ${}'B = \{2\hat{e}\}$. Then $x \cdot \hat{e} = u \cdot (2\hat{e})$, so $u = \frac{1}{2}x$ expresses the "shrinking" of coordinates (of a fixed vector) as the basis stretches.

In the last lecture we suggested that coefficients of first-order linear differential operators transform (infinitesimally) like coordinates of vectors. Here this looks as follows: $B = \big\{\frac{d}{dx}\big\}$, ${}'B = \big\{\frac{d}{du}\big\}$, and $\frac{d}{dx} = \frac{1}{2}\frac{d}{du}$ (the coefficient of $\frac{d}{du}$ is half that of $\frac{d}{dx}$); intuitively, $\frac{df}{du}$ should be bigger than $\frac{df}{dx}$ because $\frac{df}{du}$ is the rise in $f$ per "unit" run of $2\hat{e}$, while $\frac{df}{dx}$ is the rise (in $f$) per unit run of $\hat{e}$. The covectors in this situation (i.e., the functionals on the space of linear differential operators) are the differential forms $dx$ and $du$, defined by $dx\big(\frac{d}{dx}\big) = 1$ and $du\big(\frac{d}{du}\big) = 1$; thus $1 \cdot dx = 2 \cdot du$, which is exactly what you know from calculus: $u = \frac{1}{2}x \implies 2\,du = dx$. (So the change of coefficients goes $1 \mapsto 2$ instead of $1 \mapsto \frac{1}{2}$.)

More generally, on $\mathbb{R}^n$ you may define a basis for the differential forms by $dx_i\big(\frac{\partial}{\partial x_j}\big) = \delta_{ij}$; these give (at each point) the dual basis to the space of (first-order) linear differential operators.$^2$ They are what appears under the integral sign in calculus simply because
they transform in exactly the right way under change of variable ($u$-substitution).

$^2$ When the latter are thought of as tangent vectors, the differential 1-forms are "cotangent vectors".

Dual of a linear transformation. Every linear transformation $T : V \to W$ induces a dual linear transformation $T^\vee : W^\vee \to V^\vee$ by "pullback" of functionals: given $g \in W^\vee$, define$^3$ $T^\vee(g) := g \circ T$. The picture is:
$$T^\vee g : V \xrightarrow{\;T\;} W \xrightarrow{\;g\;} \mathbb{R}.$$
As usual take $B, C$ bases for $V, W$ (resp.), and $B^\vee, C^\vee$ their dual bases for $V^\vee, W^\vee$. We would like to relate the matrices ${}_C[T]_B$ and ${}_{B^\vee}[T^\vee]_{C^\vee}$; it turns out they are just transposes of one another.

Before doing this in full generality, let's step back and look at a simple case. Consider $S : \mathbb{R}^m \to \mathbb{R}^m$ with matrix $[S]_{\hat{e}} = A$ with respect to the standard basis. We can think of $\mathbb{R}^m$ as its own dual space, as follows. Any $\vec{\ell} \in \mathbb{R}^m$ gives a linear functional$^4$ on $\mathbb{R}^m$ by matrix multiplication (of a $1 \times m$ matrix by an $m \times 1$ matrix):
$$\vec{\ell}(\vec{v}) := {}^t\vec{\ell} \cdot \vec{v}.$$
The dual transformation $S^\vee : \mathbb{R}^m \to \mathbb{R}^m$, with matrix $B$, is defined by $(S^\vee\vec{\ell})(\vec{v}) = \vec{\ell}(S(\vec{v}))$. In terms of matrices this translates to
$${}^t\big(B \cdot \vec{\ell}\,\big) \cdot \vec{v} = {}^t\vec{\ell} \cdot (A \cdot \vec{v}),$$
$${}^t\vec{\ell} \cdot {}^tB \cdot \vec{v} = {}^t\vec{\ell} \cdot A \cdot \vec{v}.$$
If this is true for all $\vec{\ell}, \vec{v}$, then ${}^tB = A$, or $B = {}^tA$.

$^3$ Also written $T^*(g)$, as dual spaces/bases are often written $V^*/B^*$. I'm avoiding this so as to minimize confusion when the Hermitian transpose is introduced later on.

$^4$ We are really mapping $\mathbb{R}^m \to (\mathbb{R}^m)^\vee$ by taking $\vec{\ell}$ to this functional. (This map also takes $\hat{e}$ to the dual basis $\hat{e}^\vee$, so $\hat{e}$ "gives" its own dual basis in the same sense as $\vec{\ell}$ "gives" functionals.) Identifying a vector space with its dual is equivalent to giving an inner product, and what's going on here is again an ad hoc use of the dot product.
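A quick numerical sanity check of this simple case (an illustration, not from the notes; the matrices are random): with $\vec{\ell}$ acting by $\vec{\ell}(\vec{v}) = {}^t\vec{\ell} \cdot \vec{v}$, the transpose ${}^tA$ really does implement $S^\vee$.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))   # matrix of S in the standard basis
B = A.T                           # claimed matrix of the dual map S-dual

ell = rng.standard_normal(4)      # a covector, identified with a vector
v = rng.standard_normal(4)

# Defining property of the dual map: (S-dual ell)(v) = ell(S v).
print(np.allclose((B @ ell) @ v, ell @ (A @ v)))  # True
```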
So in this context, the fact that the matrix of "$S$-dual" is the transpose of the matrix of $S$ is very simple.$^5$

Now to prove the corresponding fact for $T$ and $T^\vee$ above, start with the definition of $T^\vee g$, for any $g \in W^\vee$:
$$(1) \qquad (T^\vee g)\vec{v} = (g \circ T)\vec{v} = g(T\vec{v}) \quad \text{for all } \vec{v} \in V.$$
Now recall that for any$^6$ $\vec{v} \in V$, $f \in V^\vee$, and basis $B$ for $V$, $f(\vec{v}) = {}^t[f]_{B^\vee} \cdot [\vec{v}]_B$. Applying this fact to the end terms of (1), we have
$$(2) \qquad {}^t[T^\vee g]_{B^\vee} \cdot [\vec{v}]_B = {}^t[g]_{C^\vee} \cdot [T\vec{v}]_C.$$
Now using the definition of the matrices of $T$ (resp. $T^\vee$) with respect to $B$ and $C$ (resp. $B^\vee$ and $C^\vee$), for example ${}_C[T]_B \cdot [\vec{v}]_B = [T\vec{v}]_C$, (2) becomes
$$(3) \qquad {}^t\big({}_{B^\vee}[T^\vee]_{C^\vee} \cdot [g]_{C^\vee}\big) \cdot [\vec{v}]_B = {}^t[g]_{C^\vee} \cdot \big({}_C[T]_B \cdot [\vec{v}]_B\big).$$
Applying the rule ${}^t(A \cdot B) = {}^tB \cdot {}^tA$ for transposes of matrix products to the left-hand side of (3) gives
$${}^t[g]_{C^\vee} \cdot {}^t\big({}_{B^\vee}[T^\vee]_{C^\vee}\big) \cdot [\vec{v}]_B = {}^t[g]_{C^\vee} \cdot {}_C[T]_B \cdot [\vec{v}]_B,$$
and since this holds for any $g \in W^\vee$ and $\vec{v} \in V$, we conclude that
$${}^t\big({}_{B^\vee}[T^\vee]_{C^\vee}\big) = {}_C[T]_B \qquad \text{or} \qquad {}_{B^\vee}[T^\vee]_{C^\vee} = {}^t\big({}_C[T]_B\big).$$

$^5$ Remark for physicists: this argument may remind anyone exposed to quantum mechanics of "adjoint" operators, perhaps more so if we write $\vec{\ell}(\vec{v}) = {}^t\vec{\ell} \cdot \vec{v}$ as $\big\langle \vec{\ell}, \vec{v} \big\rangle$: then the definition of $S^\vee$ reads
$$\big\langle S^\vee\vec{\ell},\, \vec{v} \big\rangle = \big\langle \vec{\ell},\, S\vec{v} \big\rangle.$$

$^6$ The same of course goes for any $\vec{w} \in W$, $g \in W^\vee$, and basis $C$ for $W$.

The double dual. Associated to any subspace $U \subseteq V$ there is a subspace $U^\circ \subseteq V^\vee$ of complementary dimension, called the annihilator of $U$. It consists simply of all linear functionals $f \in V^\vee$ such that $f(\vec{x}) = 0$ for all $\vec{x} \in U$. For example, suppose $U$ is the plane in $\mathbb{R}^3$ consisting of solutions to $x_1 + 2x_2 + 3x_3 = 0$. Then in terms of the dual standard basis $\hat{e}^\vee$ (see the supplement), $U^\circ$ is just the "line" (in $(\mathbb{R}^3)^\vee$) spanned by
$$[f]_{\hat{e}^\vee} = \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix},$$
since then for any $\vec{x} \in U$,
$$f(\vec{x}) = {}^t[f]_{\hat{e}^\vee} \cdot [\vec{x}]_{\hat{e}} = \begin{pmatrix} 1 & 2 & 3 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = x_1 + 2x_2 + 3x_3 = 0.$$
Now suppose you take one more dual, and consider the annihilator of $U^\circ$, $(U^\circ)^\circ \subseteq (V^\vee)^\vee$!
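The annihilator example can be checked in two lines (illustrative only; the spanning vectors of the plane are a choice, not from the notes):

```python
import numpy as np

# U is the plane x1 + 2*x2 + 3*x3 = 0; two vectors spanning it:
u1 = np.array([-2.0, 1.0, 0.0])
u2 = np.array([-3.0, 0.0, 1.0])

# The covector f with dual-standard-basis coordinates (1, 2, 3):
f = np.array([1.0, 2.0, 3.0])

# f vanishes on all of U, so it spans the line U° in the dual space:
print(np.dot(f, u1), np.dot(f, u2))  # 0.0 0.0
```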
Well, it turns out that you just get back what you started with: $(V^\vee)^\vee \cong V$, and moreover $(U^\circ)^\circ$ is again just $U$. We will prove only the first statement, by checking that the linear transformation $V \to (V^\vee)^\vee$ given by
$$\vec{a} \longmapsto L_{\vec{a}} \qquad \big(\text{defined, on any } f \in V^\vee, \text{ by } L_{\vec{a}}(f) := f(\vec{a})\big)$$
is 1-to-1 and onto.
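In coordinates the evaluation map is transparent: $L_{\vec{a}}$ eats a covector's coordinates and returns $f(\vec{a})$, and evaluating $L_{\vec{a}}$ on the dual standard basis recovers the coordinates of $\vec{a}$ itself, so $\vec{a} \mapsto L_{\vec{a}}$ is represented by the identity matrix, hence 1-to-1 and (in finite dimensions) onto. A small sketch of this, as an illustration only:

```python
import numpy as np

a = np.array([1.0, -2.0, 5.0])

# The double-dual element L_a acts on a covector f by L_a(f) = f(a):
def L_a(f_coords):
    return float(np.dot(f_coords, a))

# Evaluating L_a on the dual standard basis recovers a's coordinates,
# so a |-> L_a is the identity in coordinates: injective, hence onto.
E = np.eye(3)
coords = [L_a(E[i]) for i in range(3)]
print(coords)  # [1.0, -2.0, 5.0]
```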