Tensor Algebra, Linear Algebra, Matrix Algebra, Multilinear Algebra

Appendix A: Tensor Algebra, Linear Algebra, Matrix Algebra, Multilinear Algebra

The key word "algebra" is derived from the name and the work of the ninth-century scientist Mohammed ibn Mûsâ al-Khowârizmi, who was born in what is now Uzbekistan and worked in Baghdad at the court of Harun al-Rashid's son. "al-jabr" appears in the title of his book Kitab al-jabr wa'l muqabala, where he discusses symbolic methods for the solution of equations (K. Vogel: Mohammed ibn Musa Alchwarizmi's Algorismus; Das früheste Lehrbuch zum Rechnen mit indischen Ziffern, 1461, Otto Zeller Verlagsbuchhandlung, Aalen 1963). Accordingly, what is an algebra, and how is it tied to the notion of a vector space or a tensor space, respectively?

By an algebra we mean a set $S$ of elements and a finite set $M$ of operations. Each operation $(\mathrm{opera})_k$ is a single-valued function assigning to every finite ordered sequence $(x_1, \dots, x_n)$ of $n = n(k)$ elements of $S$ a value $(\mathrm{opera})_k(x_1, \dots, x_n) = x_l$ in $S$. In particular, for $(\mathrm{opera})_k(x_1, x_2)$ the operation is called binary, for $(\mathrm{opera})_k(x_1, x_2, x_3)$ ternary, and in general for $(\mathrm{opera})_k(x_1, \dots, x_n)$ $n$-ary. For a given set of operation symbols $(\mathrm{opera})_1, (\mathrm{opera})_2, \dots, (\mathrm{opera})_k$ we define a word. In linear algebra the set $M$ has basically two elements, namely two internal relations: $(\mathrm{opera})_1$, worded "addition" (including the inverse addition, subtraction), and $(\mathrm{opera})_2$, worded "multiplication" (including the inverse multiplication, division). Here the elements of the set $S$ are vectors over the field $\mathbb{R}$ of real numbers, as long as we refer to linear algebra. In contrast, in multilinear algebra the elements of the set $S$ are tensors over the field $\mathbb{R}$ of real numbers. Only later are modules, as generalizations of the vectors of linear algebra, introduced, in which the "scalars" are allowed to come from an arbitrary ring rather than the field $\mathbb{R}$ of real numbers.
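The definition above can be made concrete in a short sketch (the names `add`, `mul` and the carrier set are my own illustration, not the book's notation): an algebra is a carrier set $S$ together with a finite set $M$ of $n$-ary, single-valued operations closed over $S$, and composing operation symbols forms "words".

```python
# Hypothetical sketch: the carrier set S is the real numbers, and M contains
# two binary internal relations, as in linear algebra.

def add(x, y):   # (opera)_1, worded "addition" (binary, single-valued, closed over S)
    return x + y

def mul(x, y):   # (opera)_2, worded "multiplication" (binary, single-valued, closed over S)
    return x * y

M = {"addition": add, "multiplication": mul}

# Composing operation symbols yields a "word" whose value again lies in S.
word = add(mul(2.0, 3.0), 4.0)   # the word (2 * 3) + 4
print(word)
```

Because each operation assigns to an ordered tuple of elements of $S$ a value in $S$, any finite composition of such operations stays inside the algebra.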
Let us assume that you, as a potential reader, are in some way familiar with the elementary notion of a three-dimensional vector space $X$ with elements called vectors $x \in \mathbb{R}^3$, namely the intuitive space "we locally live in". Such an elementary vector space $X$ is equipped with a metric, to be referred to as three-dimensional Euclidean. (E. Grafarend and J. Awange, Applications of Linear and Nonlinear Models, Springer Geophysics, DOI 10.1007/978-3-642-22241-2, © Springer-Verlag Berlin Heidelberg 2012.) As a real three-dimensional vector space we are going to give it a linear and a multilinear algebraic structure. In the context of structure mathematics, based upon (a) order structure, (b) topological structure, and (c) algebraic structure, an algebra is constituted if at least two relations are established, namely one internal and one external. We start with multilinear algebra, in particular with the multilinearity of the tensor product, before we go back to linear algebra, in particular to Clifford algebra.

A-1 Multilinear Functions and the Tensor Space $T^p_q$

Let $X$ be a finite-dimensional linear space, e.g. a vector space over the field $\mathbb{R}$ of real numbers; in addition, denote by $X^*$ its dual space, such that $n = \dim X = \dim X^*$. The complex, quaternion and octonion numbers $\mathbb{C}$, $\mathbb{H}$ and $\mathbb{O}$, as well as rings, will only be introduced later in context. For $p, q \in \mathbb{Z}^+$, elements of the positive integers, we introduce $T^p_q(X, X^*)$ as the $p$-contravariant, $q$-covariant tensor space, or space of multilinear functions

$$f : \underbrace{X \times \dots \times X}_{p} \times \underbrace{X^* \times \dots \times X^*}_{q} \to \mathbb{R}.$$

If we assume $x^1, \dots, x^p \in X^*$ and $x_1, \dots, x_q \in X$, then

$$x^1 \otimes \dots \otimes x^p \otimes x_1 \otimes \dots \otimes x_q \in T^p_q(X, X^*)$$

holds. Multilinearity is understood as linearity in each variable. "$\otimes$" identifies the tensor product, the Cartesian product of the elements $(x^1, \dots, x^p, x_1, \dots, x_q)$.
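A minimal numerical sketch (not from the book; it assumes coordinates with respect to fixed bases of $X$ and $X^*$) realizes a simple element of $T^2_1$ as the outer product of its coordinate arrays; the result is a multilinear function of $p + q = 3$ arguments, evaluated by full contraction.

```python
import numpy as np

# Coordinates of two dual vectors x^1, x^2 in X* and one vector x_1 in X
# (illustrative values, chosen freely).
x1 = np.array([1.0, 0.0, 2.0])
x2 = np.array([0.0, 1.0, 1.0])
y1 = np.array([3.0, 1.0, 0.0])

# Simple tensor x^1 ⊗ x^2 ⊗ x_1 ∈ T^2_1 as a coordinate array of shape 3^(p+q).
T = np.einsum('i,j,k->ijk', x1, x2, y1)
assert T.shape == (3, 3, 3)

# Evaluating T on arguments (a, b, c) is linear in each slot separately
# and factorizes into the three dual pairings.
a = np.array([1.0, 2.0, 3.0])
b = np.array([0.0, 1.0, 0.0])
c = np.array([2.0, 0.0, 1.0])
value = np.einsum('ijk,i,j,k->', T, a, b, c)
assert np.isclose(value, (x1 @ a) * (x2 @ b) * (y1 @ c))
```

The factorization in the last assertion is exactly what makes a simple tensor "multilinear": fixing all but one argument leaves a linear function of the remaining one.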
Example A.1 (bilinearity of the tensor product $x_1 \otimes x_2$):
For every $x_1, x_2 \in X$, $x, y \in X$ and $r \in \mathbb{R}$, bilinearity implies

$$(x + y) \otimes x_2 = x \otimes x_2 + y \otimes x_2 \quad \text{(internal left-linearity)}$$
$$x_1 \otimes (x + y) = x_1 \otimes x + x_1 \otimes y \quad \text{(internal right-linearity)}$$
$$rx \otimes x_2 = r\,(x \otimes x_2) \quad \text{(external left-linearity)}$$
$$x_1 \otimes ry = r\,(x_1 \otimes y) \quad \text{(external right-linearity)}.$$

The generalization of the bilinearity of $x_1 \otimes x_2 \in T^0_2$ to the multilinearity of $x^1 \otimes \dots \otimes x^p \otimes x_1 \otimes \dots \otimes x_q \in T^p_q$ is obvious.

Definition A.1 (multilinearity of the tensor space $T^p_q$):
For every $x^1, \dots, x^p \in X^*$ and $x_1, \dots, x_q \in X$, as well as $u, v \in X^*$, $x, y \in X$ and $r \in \mathbb{R}$, multilinearity implies

$$(u + v) \otimes x^2 \otimes \dots \otimes x^p \otimes x_1 \otimes \dots \otimes x_q = u \otimes x^2 \otimes \dots \otimes x^p \otimes x_1 \otimes \dots \otimes x_q + v \otimes x^2 \otimes \dots \otimes x^p \otimes x_1 \otimes \dots \otimes x_q \quad \text{(internal left-linearity)}$$

$$x^1 \otimes \dots \otimes x^p \otimes (x + y) \otimes x_2 \otimes \dots \otimes x_q = x^1 \otimes \dots \otimes x^p \otimes x \otimes x_2 \otimes \dots \otimes x_q + x^1 \otimes \dots \otimes x^p \otimes y \otimes x_2 \otimes \dots \otimes x_q \quad \text{(internal right-linearity)}$$

$$ru \otimes x^2 \otimes \dots \otimes x^p \otimes x_1 \otimes \dots \otimes x_q = r\,(u \otimes x^2 \otimes \dots \otimes x^p \otimes x_1 \otimes \dots \otimes x_q) \quad \text{(external left-linearity)}$$

$$x^1 \otimes \dots \otimes x^p \otimes rx \otimes x_2 \otimes \dots \otimes x_q = r\,(x^1 \otimes \dots \otimes x^p \otimes x \otimes x_2 \otimes \dots \otimes x_q) \quad \text{(external right-linearity)}.$$

A possible way to visualize the different multilinear functions which span $\{T^0_0, T^1_0, T^0_1, T^2_0, T^1_1, T^0_2, \dots, T^p_q\}$ is to construct a hierarchical diagram, a special tree, as follows:

$$0 \in T^0_0 = \{0\}$$
$$x^1 \in T^1_0, \qquad x_1 \in T^0_1$$
$$x^1 \otimes x^2 \in T^2_0, \qquad x^1 \otimes x_1 \in T^1_1, \qquad x_1 \otimes x_2 \in T^0_2$$
$$x^1 \otimes x^2 \otimes x^3 \in T^3_0, \qquad x^1 \otimes x^2 \otimes x_1 \in T^2_1, \qquad x^1 \otimes x_1 \otimes x_2 \in T^1_2, \qquad x_1 \otimes x_2 \otimes x_3 \in T^0_3.$$

When we first learnt about the tensor product, symbolised by "$\otimes$", as well as its multilinearity, we were left with the problem of developing an intuitive understanding of $x^1 \otimes x^2$, $x^1 \otimes x_1$ and higher-order tensor products. Perhaps it is helpful to represent "the involved vectors" in a contravariant or in a covariant basis.
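The four bilinearity rules of Example A.1 can be checked numerically (a sanity check of my own, not from the book), with "$\otimes$" realized in coordinates as the outer product of coordinate arrays.

```python
import numpy as np

# Arbitrary coordinate vectors and a scalar for the check.
x1 = np.array([1.0, 2.0, 0.0])
x2 = np.array([0.0, 1.0, 3.0])
x  = np.array([2.0, 0.0, 1.0])
y  = np.array([1.0, 1.0, 1.0])
r = 2.5

# (x + y) ⊗ x2 = x ⊗ x2 + y ⊗ x2           (internal left-linearity)
assert np.allclose(np.outer(x + y, x2), np.outer(x, x2) + np.outer(y, x2))

# x1 ⊗ (x + y) = x1 ⊗ x + x1 ⊗ y           (internal right-linearity)
assert np.allclose(np.outer(x1, x + y), np.outer(x1, x) + np.outer(x1, y))

# (r x) ⊗ x2 = r (x ⊗ x2)                  (external left-linearity)
assert np.allclose(np.outer(r * x, x2), r * np.outer(x, x2))

# x1 ⊗ (r y) = r (x1 ⊗ y)                  (external right-linearity)
assert np.allclose(np.outer(x1, r * y), r * np.outer(x1, y))
```

Each rule holds entry-wise because the $(i,j)$ coordinate of an outer product is just the scalar product $x_i y_j$, and scalar multiplication distributes over addition.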
For instance, $x^1 = e^1 x_1 + e^2 x_2 + e^3 x_3$ or $x_1 = e_1 x^1 + e_2 x^2 + e_3 x^3$ is a left representation of a three-dimensional vector in a 3-left basis $\{e^1, e^2, e^3\}_l$ of contravariant type or in a 3-left basis $\{e_1, e_2, e_3\}_l$ of covariant type. Think in terms of $x^1$ or $x_1$ as a three-dimensional position vector with right coordinates $\{x_1, x_2, x_3\}$ or $\{x^1, x^2, x^3\}$, respectively. Since the intuitive algebra of vectors is commutative, we may also represent the three-dimensional vector in a 3-right basis $\{e^1, e^2, e^3\}_r$ of contravariant type or in a 3-right basis $\{e_1, e_2, e_3\}_r$ of covariant type, such that $x^1 = x_1 e^1 + x_2 e^2 + x_3 e^3$ or $x_1 = x^1 e_1 + x^2 e_2 + x^3 e_3$ is a right representation of a three-dimensional position vector, which coincides with its left representation thanks to commutativity. Further on, the tensor product $x \otimes y$ enjoys the left and right representations

$$\left(e^1 x_1 + e^2 x_2 + e^3 x_3\right) \otimes \left(e^1 y_1 + e^2 y_2 + e^3 y_3\right) = \sum_{i=1}^{3} \sum_{j=1}^{3} e^i \otimes e^j \, x_i y_j$$

and

$$\left(x_1 e^1 + x_2 e^2 + x_3 e^3\right) \otimes \left(y_1 e^1 + y_2 e^2 + y_3 e^3\right) = \sum_{i=1}^{3} \sum_{j=1}^{3} x_i y_j \, e^i \otimes e^j,$$

which coincide again since we assumed a commutative algebra of vectors. The product of coordinates $(x_i y_j)$, $i, j \in \{1, 2, 3\}$, is often called the dyadic product. Please do not miss the alternative covariant representation of the tensor product $x \otimes y$, which we introduced so far in the contravariant basis, namely

$$\left(e_1 x^1 + e_2 x^2 + e_3 x^3\right) \otimes \left(e_1 y^1 + e_2 y^2 + e_3 y^3\right) = \sum_{i=1}^{3} \sum_{j=1}^{3} e_i \otimes e_j \, x^i y^j$$

of left type and

$$\left(x^1 e_1 + x^2 e_2 + x^3 e_3\right) \otimes \left(y^1 e_1 + y^2 e_2 + y^3 e_3\right) = \sum_{i=1}^{3} \sum_{j=1}^{3} x^i y^j \, e_i \otimes e_j$$

of right type. In a similar way we produce

$$x \otimes y \otimes z = \sum_{i,j,k=1}^{3} e^i \otimes e^j \otimes e^k \, x_i y_j z_k = \sum_{i,j,k=1}^{3} x_i y_j z_k \, e^i \otimes e^j \otimes e^k$$

of contravariant type and

$$x \otimes y \otimes z = \sum_{i,j,k=1}^{3} e_i \otimes e_j \otimes e_k \, x^i y^j z^k = \sum_{i,j,k=1}^{3} x^i y^j z^k \, e_i \otimes e_j \otimes e_k$$

of covariant type.
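In coordinates, the coincidence of left and right representations reduces to the commutativity of the scalar products $x_i y_j$; a short check of my own (not from the book) makes this explicit, including the dyadic product and the third-order case.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

# Left representation: coefficients x_i y_j in front of e^i ⊗ e^j.
left = np.einsum('i,j->ij', x, y)
# Right representation: the same coefficients with the scalar factors swapped.
right = np.einsum('j,i->ij', y, x)
assert np.allclose(left, right)   # commutativity of real multiplication

# The coordinate array (x_i y_j) is the dyadic product.
dyadic = np.outer(x, y)
assert np.allclose(dyadic, left)

# Third-order tensor product x ⊗ y ⊗ z with coordinates x_i y_j z_k.
z = np.array([7.0, 8.0, 9.0])
T3 = np.einsum('i,j,k->ijk', x, y, z)
assert T3[0, 1, 2] == x[0] * y[1] * z[2]
```

Note that while the scalar coefficients commute, the basis tensors do not: $e^i \otimes e^j \neq e^j \otimes e^i$ in general, which is why `left` and `right` above carry the same index order on the output.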
"Mixed" covariant-contravariant representations of the tensor product $x_1 \otimes y^1$ are

$$x_1 \otimes y^1 = \sum_{i=1}^{3} \sum_{j=1}^{3} e_i \otimes e^j \, x^i y_j = \sum_{i=1}^{3} \sum_{j=1}^{3} x^i y_j \, e_i \otimes e^j$$

or

$$x^1 \otimes y_1 = \sum_{i=1}^{3} \sum_{j=1}^{3} e^i \otimes e_j \, x_i y^j = \sum_{i=1}^{3} \sum_{j=1}^{3} x_i y^j \, e^i \otimes e_j.$$

In addition, we have to explain the notation $x^1 \in X^*$, $x_1 \in X$: while the vector $x_1$ is an element of the vector space $X$, $x^1$ is an element of its dual space $X^*$.
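A mixed tensor of this kind can also be read as a linear map: pairing its dual-space slot with a vector argument leaves a vector. The sketch below (my own illustration, assuming coordinates in fixed dual bases) shows $x_1 \otimes y^1 \in T^1_1$ acting as a rank-one map $X \to X$.

```python
import numpy as np

x1 = np.array([1.0, 0.0, 2.0])   # coordinates of x_1 ∈ X
y1 = np.array([0.0, 3.0, 1.0])   # coordinates of y^1 ∈ X*

# Mixed coordinate array x^i y_j of the tensor x_1 ⊗ y^1 ∈ T^1_1.
M = np.einsum('i,j->ij', x1, y1)

# Applied to a vector v, the dual pairing <y^1, v> produces a scalar,
# so (x_1 ⊗ y^1)(v) = <y^1, v> x_1 — a rank-one linear map on X.
v = np.array([2.0, 1.0, 1.0])
assert np.allclose(M @ v, (y1 @ v) * x1)
assert np.linalg.matrix_rank(M) == 1
```

This is the coordinate-level reason mixed tensors in $T^1_1$ are identified with linear operators: one index transforms contravariantly, the other covariantly.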