Linear Algebra II


Andrew Kobin
Fall 2014

Contents

0 Introduction
1 The Algebra of Vector Spaces
  1.1 Scalar Fields
  1.2 Vector Spaces and Linear Transformations
  1.3 Constructing Vector Spaces
  1.4 Linear Independence, Span and Bases
  1.5 More on Linear Transformations
  1.6 Similar and Conjugate Matrices
  1.7 Kernel and Image
  1.8 The Isomorphism Theorems
2 Duality
  2.1 Functionals
  2.2 Self Duality
  2.3 Perpendicular Subspaces
3 Multilinearity
  3.1 Multilinear Functions
  3.2 Tensor Products
  3.3 Tensoring Transformations
  3.4 Alternating Functions and Exterior Products
  3.5 The Determinant
4 Diagonalizability
  4.1 Eigenvalues and Eigenvectors
  4.2 Minimal Polynomials
  4.3 Cyclic Linear Systems
  4.4 Jordan Canonical Form
5 Inner Product Spaces
  5.1 Inner Products and Norms
  5.2 Orthogonality
  5.3 The Adjoint
  5.4 Normal Operators
  5.5 Spectral Decomposition
  5.6 Unitary Transformations

0 Introduction

These notes were taken from a second-semester linear algebra course taught by Dr. Frank Moore at Wake Forest University in the fall of 2014. The main reference for the course is A (Terse) Introduction to Linear Algebra by Katznelson. The goal of the course is to expand upon the concepts introduced in a first-semester linear algebra course by generalizing real vector spaces to vector spaces over an arbitrary field, and by describing linear operators, canonical forms, inner products and linear groups on such spaces.
The main topics and theorems covered here are:

  - A rigorous treatment of the algebraic theory of vector spaces and their transformations
  - Duality, dual bases, the adjoint transformation
  - Bilinear maps and the tensor product
  - Alternating functions, the exterior product and the determinant
  - Eigenvalues, eigenvectors, the Cayley-Hamilton Theorem
  - Jordan canonical form and the theory of diagonalizability
  - Inner product spaces
  - Orthonormal bases and the Gram-Schmidt process
  - Spectral theory

1 The Algebra of Vector Spaces

1.1 Scalar Fields

The first question we must answer is: what should the scalars be in a general treatment of linear algebra? The answer is fields:

Definition. A field is a set F with two binary operations

  "+" : F × F → F, (a, b) ↦ a + b    and    "·" : F × F → F, (a, b) ↦ ab

satisfying the following axioms:

(1) For all a, b, c ∈ F, a + (b + c) = (a + b) + c.
(2) For all a, b ∈ F, a + b = b + a.
(3) There is an element 0 ∈ F such that a + 0 = 0 + a = a for all a ∈ F.
(4) For each a ∈ F, there is an element −a ∈ F such that a + (−a) = 0.
(5) For any a, b, c ∈ F, a(bc) = (ab)c.
(6) For all a, b ∈ F, ab = ba.
(7) There is an element 1 ∈ F such that 1a = a1 = a for all a ∈ F.
(8) For all nonzero a ∈ F (the set of such elements is denoted F^×) there exists a^{-1} ∈ F such that aa^{-1} = 1.
(9) For any a, b, c ∈ F, a(b + c) = ab + ac.

Note that axioms (1)–(4) say that (F, +) is an abelian group, axioms (5)–(8) say that (F^×, ·) is an abelian group, and axiom (9) serves to describe the interaction between the two operations. We sometimes say that + and · "play nicely".

Examples.

(1) The real numbers R, the rational numbers Q and the complex numbers C are all common examples of fields.

(2) Z_2 = {0, 1} is the smallest possible field, sometimes called the finite field of characteristic 2. This example is important in computer science, particularly coding theory.

(3) For any prime integer p, Z_p = {0, 1, 2, …, p − 1} (the residue classes mod p) is called the finite field of characteristic p. In contrast, Q, R and C are called infinite fields.

(4) The integers Z are not a field.
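The arithmetic in these finite-field examples is easy to check computationally. The following is an illustrative sketch, not part of the original notes; it assumes Python 3.8+, where pow(a, -1, n) computes a modular inverse.

```python
def units(n):
    """Return the residues in Z_n that have a multiplicative inverse mod n."""
    return [a for a in range(1, n) if any((a * b) % n == 1 for b in range(1, n))]

# For a prime modulus, every nonzero residue is a unit, so Z_p is a field.
p = 7
assert units(p) == list(range(1, p))

# pow(a, -1, n) (Python 3.8+) computes the inverse directly: 3 * 5 = 15 ≡ 1 (mod 7).
assert pow(3, -1, p) == 5

# For a composite modulus an inverse can fail to exist: 2 has no inverse in Z_4.
assert 2 not in units(4)
```

This also foreshadows why Z fails to be a field: outside of ±1, integer equations ab = 1 have no solutions.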
Z has + and ·, but no multiplicative inverses (i.e. it fails the axiom guaranteeing multiplicative inverses).

(5) Let n be a composite number. Then Z_n is not a field. For example, consider Z_4 = {0, 1, 2, 3}. In this set 2 does not have an inverse: 2b mod 4 is always 0 or 2, never 1.

1.2 Vector Spaces and Linear Transformations

Definition. A vector space over a field F is a set V with two operations

  "+" : V × V → V, (u, v) ↦ u + v    and    "·" : F × V → V, (a, v) ↦ av

called vector addition and scaling, respectively, such that

(1) (V, +) is an additive abelian group.
(2) For all a, b ∈ F and v ∈ V, a(bv) = (ab)v.
(3) For all v ∈ V, 1v = v.
(4) For all a, b ∈ F and v ∈ V, (a + b)v = av + bv.
(5) For all a ∈ F and u, v ∈ V, a(u + v) = au + av.

Examples.

(1) F^n = {(a_1, …, a_n) | a_i ∈ F} is the canonical n-dimensional vector space over F. The operations are the usual componentwise addition and scaling. In particular, a field is a (one-dimensional) vector space over itself.

(2) {0} is the trivial vector space over any field F.

(3) Mat(n, m, F) denotes the set of all n × m matrices with coefficients in F. It is an F-vector space under componentwise addition and scaling, and is isomorphic (in a sense which will be described later) to F^{nm}.

(4) F[x] = {a_0 + a_1 x + a_2 x^2 + … + a_n x^n | a_i ∈ F} is called the polynomial algebra over F. It is a vector space via coefficientwise addition and scaling; it also carries a product, polynomial multiplication (extended FOIL), which makes it an algebra. This is an example of an infinite dimensional vector space.

(5) Let X be a set. Then F^X = {f : X → F | f is a function} is a vector space over F. For f, g ∈ F^X, c ∈ F and all x ∈ X, addition and scaling are defined by

  (f + g)(x) = f(x) + g(x)    and    (cf)(x) = c f(x).

The zero function 0 : X → F sends every element x ↦ 0. It is easy to verify that F^X satisfies the properties of a vector space.

(6) Let V = {f : R → C | f satisfies the ODE 3f'''(x) − sin(x) f''(x) + 2f(x) = 0}.
Then V is a vector space under pointwise addition and scaling. For example, if f, g ∈ V and x ∈ R, then

  3(f + g)'''(x) − sin(x)(f + g)''(x) + 2(f + g)(x)
    = (3f'''(x) − sin(x) f''(x) + 2f(x)) + (3g'''(x) − sin(x) g''(x) + 2g(x))
    = 0 + 0 = 0.

Thus f + g ∈ V. The proof for scalar multiplication is similar.

Next we seek a way of comparing two vector spaces.

Definition. A linear transformation is a function T : V → W, where V and W are F-vector spaces, satisfying

(1) T(u + v) = T(u) + T(v)
(2) T(cv) = cT(v)

for all u, v ∈ V and c ∈ F.

Recall the definitions of one-to-one, onto and bijective.

Definition. A function f : X → Y is

(a) one-to-one if f(x_1) = f(x_2) implies x_1 = x_2;
(b) onto if for all y ∈ Y there is an x ∈ X such that f(x) = y;
(c) bijective if it is both one-to-one and onto.

Remark. Let f : X → Y be a function.

(1) f is one-to-one ⟺ there is a function g : im(f) → X such that gf = id_X.
(2) f is onto ⟺ there is a function h : Y → X such that fh = id_Y.

Definition. A bijective linear transformation is called an isomorphism.

Examples.

(1) Recall the vector space Mat(n, m, F). Define a map

  T : Mat(n, m, F) → F^{nm},   A = (a_ij) ↦ (a_11, a_21, …, a_n1, a_12, …, a_nm),

which reads the entries of A into a single column vector, column by column. There is a clear candidate for an inverse function to T: the map S : F^{nm} → Mat(n, m, F) which reassembles such a column vector into an n × m matrix, column by column. Checking that T is linear is trivial, so T is an isomorphism. This confirms our earlier comment that Mat(n, m, F) ≅ F^{nm}.

(2) Let X be a finite set, say X = {x_1, …, x_n}. Define

  T : F^X → F^n,   (f : X → F) ↦ (a_1, …, a_n) where a_i = f(x_i).

One can verify that this gives an isomorphism.

(3) The function

  Int : C[0, 1] → R,   f ↦ ∫_0^1 f(x) dx

is a linear transformation, since we know from calculus that integrals are linear. However, Int is definitely not an isomorphism, since there are many continuous functions on [0, 1] that have integral 0.

(4) Let A = (a_ij) ∈ Mat(n, m, F). Then we can define a linear transformation

  T_A : F^m → F^n,   v ↦ Av,

where Av denotes the usual matrix-vector multiplication.

In many branches of algebra there is a notion of subsets of an algebraic object which have the same properties as the larger object.
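The last example, T_A : v ↦ Av, can be experimented with numerically. Below is a minimal sketch (an illustration, not part of the original notes; helper names like mat_vec are hypothetical, and exact rationals stand in for a general field) checking the two linearity axioms:

```python
from fractions import Fraction

def mat_vec(A, v):
    """Compute Av for an n x m matrix A (list of rows) and a length-m vector v."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def vec_add(u, v):
    return [a + b for a, b in zip(u, v)]

def vec_scale(c, v):
    return [c * x for x in v]

# A 2 x 3 matrix over Q, so T_A : Q^3 -> Q^2.
A = [[Fraction(1), Fraction(2), Fraction(0)],
     [Fraction(0), Fraction(1), Fraction(-1)]]
u = [Fraction(1), Fraction(0), Fraction(2)]
v = [Fraction(3), Fraction(1), Fraction(1)]
c = Fraction(5, 2)

# T_A(u + v) = T_A(u) + T_A(v)  and  T_A(cv) = c T_A(v).
assert mat_vec(A, vec_add(u, v)) == vec_add(mat_vec(A, u), mat_vec(A, v))
assert mat_vec(A, vec_scale(c, v)) == vec_scale(c, mat_vec(A, v))
```

Because matrix-vector multiplication is built from field addition and multiplication, both assertions hold for any choice of A, u, v and c.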