Paper Introduces the First Compiler Technique to Automatically Generate Kernels for Any Compound Tensor Algebra Operation on Dense and Sparse Tensors

Total pages: 16

File type: PDF; size: 1020 KB

The Tensor Algebra Compiler

FREDRIK KJOLSTAD, Massachusetts Institute of Technology, USA
SHOAIB KAMIL, Adobe Research, USA
STEPHEN CHOU, Massachusetts Institute of Technology, USA
DAVID LUGATO, French Alternative Energies and Atomic Energy Commission, France
SAMAN AMARASINGHE, Massachusetts Institute of Technology, USA

Tensor algebra is a powerful tool with applications in machine learning, data analytics, engineering, and the physical sciences. Tensors are often sparse, and compound operations must frequently be computed in a single kernel for performance and to save memory. Programmers are left to write kernels for every operation of interest, with different mixes of dense and sparse tensors in different formats. The combinations are infinite, which makes it impossible to manually implement and optimize them all. This paper introduces the first compiler technique to automatically generate kernels for any compound tensor algebra operation on dense and sparse tensors. The technique is implemented in a C++ library called taco. Its performance is competitive with best-in-class hand-optimized kernels in popular libraries, while supporting far more tensor operations.

CCS Concepts: • Software and its engineering → Source code generation; Domain specific languages; Software libraries and repositories; • Mathematics of computing → Mathematical software performance.

Additional Key Words and Phrases: tensors, tensor algebra, linear algebra, sparse data structures, performance, parallelism, code generation, merge lattices, iteration graphs

ACM Reference Format: Fredrik Kjolstad, Shoaib Kamil, Stephen Chou, David Lugato, and Saman Amarasinghe. 2017. The Tensor Algebra Compiler. Proc. ACM Program. Lang. 1, OOPSLA, Article 77 (October 2017), 29 pages. https://doi.org/10.1145/3133901

Authors' addresses: F. Kjolstad, 32 Vassar St, 32-G778, Cambridge, MA 02139; email: [email protected]; S. Kamil, 104 Fifth Ave, 4th Floor, New York, NY 10011; email: [email protected]; S. Chou, 32 Vassar St, 32-G778, Cambridge, MA 02139; email: [email protected]; D. Lugato, CEA-CESTA, 33114 Le Barp, France; email: [email protected]; S. Amarasinghe, 32 Vassar St, 32-G744, Cambridge, MA 02139; email: [email protected].

1 INTRODUCTION

Tensor algebra is a powerful tool for computing on multidimensional data and has many applications in machine learning, data analytics, engineering, and science [Abadi et al. 2016; Anandkumar et al. 2014; Bader et al. 2008; Einstein 1916; Feynman et al. 1963; Kolecki 2002]. Tensors generalize vectors and matrices to any number of dimensions and permit multilinear computations. Tensors of interest are often sparse, which means they mostly contain zeros that can be compressed away. For example, large real-world datasets used in data analytics, such as Netflix Ratings [Bennett et al. 2007], Facebook Activities [Viswanath et al. 2009], and Amazon Reviews [McAuley and Leskovec 2013], are conveniently cast as sparse tensors. The Amazon Reviews tensor, in particular, contains 1.5 × 10^19 components corresponding to 107 exabytes of data (assuming 8 bytes are used per component), but only 1.7 × 10^9 of the components (13 gigabytes) are non-zero.
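As a back-of-the-envelope check of these figures, assuming the stated 8 bytes per component (the exact exabyte value depends on the unit convention used):

\[
1.5 \times 10^{19} \text{ components} \times 8\,\mathrm{B} \approx 1.2 \times 10^{20}\,\mathrm{B},
\qquad
1.7 \times 10^{9} \text{ non-zeros} \times 8\,\mathrm{B} \approx 1.4 \times 10^{10}\,\mathrm{B}.
\]

That is, a dense representation would need on the order of a hundred exabytes, while the non-zeros alone fit in roughly 13 gigabytes, which is the ratio the numbers above describe.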
Optimized kernels that compute compound tensor operations can be very complex. First, sparse input tensors are compressed into indexed data structures that the kernels must operate on. Second, only non-zero output values should be produced, and the calculations differ between computed components depending on which input tensors contribute non-zero values. Finally, sparse tensor data structures typically do not support constant-time random access, so kernels must carefully orchestrate co-iteration over multiple tensors.

The current approach is to manually write high-performance code for tensor operations. Libraries provide a limited set of hand-optimized operations, and programmers compute compound operations through a sequence of supported operations using temporary tensors. This reduces locality and efficiency, and for some compound operations the temporaries are much larger than the kernel's inputs and output. On the other hand, it is infeasible to write optimized code for every compound operation by hand because of the combinatorial explosion of all possible compound operations, tensor orders, and tensor formats. A compiler approach is therefore needed to generate fast kernels from a high-level notation such as the ubiquitous tensor index notation for tensor expressions [Ricci-Curbastro and Levi-Civita 1901].

Prior to this work, there existed no general mechanism that can generate code for compound tensor operations with sparse tensors. To the best of our knowledge, we present the first compiler technique that can generate kernels for all sparse and dense tensor index notation expressions. This includes dense and sparse linear algebra expressions. Our technique generates code entirely from tensor index notation and simple format descriptors that fully specify the compressed data structures. Thus, we do not perform pointer alias or dependency analysis at compile time, nor at runtime as in inspector-executor systems. The technique can be used in libraries such as TensorFlow [Abadi et al. 2016] and Eigen [Guennebaud et al. 2010], or with languages such as MATLAB [MATLAB 2014], Julia [Bezanson et al. 2012] and Simit [Kjolstad et al. 2016].
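To make this concrete, the following sketch shows how a tensor-times-vector expression and its per-dimension format descriptors might be written against taco's public C++ API. The class and method names (Tensor, Format, IndexVar, insert, pack, compile, assemble, compute) follow the library's documentation at http://tensor-compiler.org; treat this as an illustrative sketch rather than the exact interface described in the paper.

    #include "taco.h"
    using namespace taco;

    int main() {
      // Per-dimension format descriptors: every dimension is independently
      // declared dense or sparse (compressed).
      Format dm({Dense, Dense});            // dense matrix
      Format csf({Sparse, Sparse, Sparse}); // all-sparse 3rd-order tensor
      Format sv({Sparse});                  // sparse vector

      Tensor<double> A({2, 3},    dm);
      Tensor<double> B({2, 3, 4}, csf);
      Tensor<double> c({4},       sv);

      // Insert a few non-zero components, then pack them into the
      // compressed data structures described by the formats.
      B.insert({0, 0, 0}, 1.0);
      B.insert({1, 2, 0}, 2.0);
      B.insert({1, 2, 1}, 3.0);
      c.insert({0}, 4.0);
      c.insert({1}, 5.0);
      B.pack();
      c.pack();

      // Tensor index notation for A(i,j) = sum_k B(i,j,k) * c(k).
      IndexVar i, j, k;
      A(i, j) = B(i, j, k) * c(k);

      // Generate, compile, and run a single fused kernel specialized
      // to this combination of formats.
      A.compile();
      A.assemble();
      A.compute();
      return 0;
    }

Changing a format argument (say, csf to {Dense, Sparse, Sparse}) changes only the declaration; the expression stays the same, and a different specialized kernel is generated.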
The contributions of this paper are:

– tensor storage formats that separately designate each dimension as dense or sparse and specify the order in which dimensions are stored, which can describe several widely used formats but generalize to many more (Section 3);
– iteration graphs that describe how to co-iterate through the multi-level index data structures of the sparse operands in a compound tensor expression (Section 4);
– merge lattices that describe how to merge the index data structures of sparse tensors that are used in the same tensor expression (Section 5); and
– a code generation algorithm that uses the above concepts to generate efficient code that computes a tensor expression with dense, sparse, and mixed operands (Section 6).

To demonstrate our approach we have developed a C++ library called taco, short for Tensor Algebra COmpiler (Section 7).¹ We compare taco-generated code to hand-written implementations from state-of-the-art, widely used libraries. The comparison shows that taco generates efficient code for simpler kernels such as sparse matrix-vector multiplication as well as complex kernels like the matricized tensor times Khatri-Rao product [Bader and Kolda 2007] (Section 8).

¹ The taco library and tools are available under the MIT license at http://tensor-compiler.org.

2 EXAMPLE

Tensors generalize matrices to any number of dimensions (also often called modes, though we will use 'dimensions' in the rest of this paper). The number of dimensions that a tensor has is its order. Thus, scalars are 0th-order tensors, vectors are 1st-order tensors, and matrices are 2nd-order tensors. Consider a simple 3rd-order tensor kernel, the tensor-times-vector multiplication (TTV):

    A_{ij} = \sum_k B_{ijk} c_k

This notation, which we use both in our presentation and as the input to our technique, is a variant of the tensor index notation developed by Ricci-Curbastro and Levi-Civita [1901]. The notation uses index variables to relate each component in the result to the components in the operands that are combined to produce its value. In this example, every component A_ij of the output is the inner product of a vector from the last dimension of B with c. The kernel that evaluates this expression depends entirely on the formats of the three tensors. The simplest case is when all of the tensors are dense, and the kernel for this is shown in Fig. 1. The loops iterate over the m × n × p rectangular iteration space of the tensor dimensions and, since the tensors are dense, the physical location of every component can be computed.

Fig. 1. A_{ij} = \sum_k B_{ijk} c_k (all dense):

    for (int i = 0; i < m; i++) {
      for (int j = 0; j < n; j++) {
        int pB2 = i * n + j;
        int pA2 = i * n + j;
        for (int k = 0; k < p; k++) {
          int pB3 = pB2 * p + k;
          A[pA2] += B[pB3] * c[k];
        }
      }
    }

Fig. 2. A_{ij} = \sum_k B_{ijk} c_k (sparse B):

    for (int pB1 = B1_pos[0]; pB1 < B1_pos[1]; pB1++) {
      int i = B1_idx[pB1];
      for (int pB2 = B2_pos[pB1]; pB2 < B2_pos[pB1+1]; pB2++) {
        int j = B2_idx[pB2];
        int pA2 = i * n + j;
        for (int pB3 = B3_pos[pB2]; pB3 < B3_pos[pB2+1]; pB3++) {
          int k = B3_idx[pB3];
          A[pA2] += B[pB3] * c[k];
        }
      }
    }

Fig. 3. A_{ij} = \sum_k B_{ijk} c_k (sparse B, c):

    for (int pB1 = B1_pos[0]; pB1 < B1_pos[1]; pB1++) {
      int i = B1_idx[pB1];
      for (int pB2 = B2_pos[pB1]; pB2 < B2_pos[pB1+1]; pB2++) {
        int j = B2_idx[pB2];
        int pA2 = i * n + j;
        int pB3 = B3_pos[pB2];
        int pc1 = c1_pos[0];
        while (pB3 < B3_pos[pB2+1] && pc1 < c1_pos[1]) {
          int kB = B3_idx[pB3];
          int kc = c1_idx[pc1];
          int k = min(kB, kc);
          if (kB == k && kc == k) {
            A[pA2] += B[pB3] * c[pc1];
          }
          if (kB == k) pB3++;
          if (kc == k) pc1++;
        }
      }
    }
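To make the index arrays in Fig. 2 concrete, here is a small self-contained example with invented values: a 2 × 2 × 3 tensor B with four non-zeros stored in the pos/idx layout that the figure's loops traverse, multiplied by a dense vector c. The array names mirror Fig. 2; the data itself is hypothetical.

    #include <cstdio>

    // Compressed index arrays for a toy 2 x 2 x 3 tensor B with four
    // non-zeros: B[0][0][1]=1, B[0][1][2]=2, B[1][0][0]=3, B[1][1][2]=4.
    // Each dimension d is stored as Bd_pos (segment boundaries) and
    // Bd_idx (coordinates), as in Fig. 2.
    int B1_pos[] = {0, 2};          int B1_idx[] = {0, 1};
    int B2_pos[] = {0, 2, 4};       int B2_idx[] = {0, 1, 0, 1};
    int B3_pos[] = {0, 1, 2, 3, 4}; int B3_idx[] = {1, 2, 0, 2};
    double B[]   = {1.0, 2.0, 3.0, 4.0};   // non-zero values, in order

    int main() {
      const int n = 2;                  // number of columns of A
      double c[] = {1.0, 10.0, 100.0};  // dense vector c
      double A[4] = {0.0};              // dense 2 x 2 output, row-major

      // The loop nest of Fig. 2: walk B's compressed dimensions and use
      // random access into the dense A and c.
      for (int pB1 = B1_pos[0]; pB1 < B1_pos[1]; pB1++) {
        int i = B1_idx[pB1];
        for (int pB2 = B2_pos[pB1]; pB2 < B2_pos[pB1+1]; pB2++) {
          int j = B2_idx[pB2];
          int pA2 = i * n + j;
          for (int pB3 = B3_pos[pB2]; pB3 < B3_pos[pB2+1]; pB3++) {
            int k = B3_idx[pB3];
            A[pA2] += B[pB3] * c[k];
          }
        }
      }

      // Expected output: A = [[10, 200], [3, 400]].
      for (int i = 0; i < 2; i++)
        for (int j = 0; j < n; j++)
          std::printf("A[%d][%d] = %g\n", i, j, A[i * n + j]);
      return 0;
    }

Only the four stored values of B are ever touched, which is the point of the compressed formats: the work is proportional to the number of non-zeros rather than to the full m × n × p iteration space.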
Recommended publications
  • A Some Basic Rules of Tensor Calculus
  • Tensor Algebra
  • Geometric Algebra Techniques for General Relativity
  • Tensor, Exterior and Symmetric Algebras
  • A Treatise on Quantum Clifford Algebras Contents
  • 9 the Tensor Algebra
  • Two-Spinors and Symmetry Operators
  • On Electromagnetic Duality
  • Tensor Algebra and Tensor Calculus
  • A Basic Operations of Tensor Algebra
  • Geometric Methods in Mathematical Physics I: Multi-Linear Algebra, Tensors, Spinors and Special Relativity
  • 5. SPINORS 5.1. Prologue. 5.2. Clifford Algebras and Their