Tensor Contractions with Extended BLAS Kernels on CPU and GPU


Yang Shi, U. N. Niranjan, Animashree Anandkumar (EECS and ICS Departments, University of California, Irvine, USA; Email: {shiy4, un.niranjan, a.anandkumar}@uci.edu) and Cris Cecka (NVIDIA Research, Santa Clara, USA; Email: [email protected])

Abstract—Tensor contractions constitute a key computational ingredient of numerical multi-linear algebra. However, as the order and dimension of tensors grow, the time and space complexities of tensor-based computations grow quickly. In this paper, we propose and evaluate new BLAS-like primitives that are capable of performing a wide range of tensor contractions on CPU and GPU efficiently. We begin by focusing on single-index contractions involving all the possible configurations of second-order and third-order tensors. Then, we discuss extensions to more general cases.

Existing approaches for tensor contractions spend large amounts of time restructuring the data, which typically involves explicit copy and transpose operations. In this work, we summarize existing approaches and present library-based approaches that avoid memory movement. Through systematic benchmarking, we demonstrate that our approach can achieve 10x speedup on a K40c GPU and 2x speedup on dual-socket Haswell-EP CPUs, using CUBLAS and MKL respectively, for small and moderate tensor sizes. This is relevant in many machine learning applications such as deep learning, where tensor sizes tend to be small but numerous tensor contraction operations must be performed successively. Concretely, we implement a Tucker decomposition and show that using our kernels yields at least an order of magnitude speedup compared to state-of-the-art libraries.

Keywords—Parallelism; BLAS; GPU; Tensor

I. INTRODUCTION AND SCOPE

Multilinear algebraic computations are ubiquitous in multiple scientific domains such as machine learning and modern data science [4], quantum chemistry and physics [14], signal and image processing [9], chemometrics [7], and biochemistry [13]. The study of tensor computations has a long and diverse history, as early as in the work by Hitchcock [11]. The domains and references provided herein are by no means exhaustive but merely a small representative sample of the various flavors in which tensor computations are used in science. Tensors are multi-way arrays which can be viewed as a generalization of matrices to allow multi-modality in data. Tensor contractions play a central role in a variety of algorithms and applications; for a motivating example, see Section II-C. However, non-trivial performance bottlenecks are encountered in several application areas due to the high space and time complexities associated with tensor computations. In this paper, motivated by the recent increased interest from machine learning and deep learning, we propose and study library-based communication-avoiding approaches for performing tensor contractions.

Conventional approaches for computing general tensor contractions rely on matricization, the logical or explicit restructuring of the data so that the computation can be performed with a sequence of Basic Linear Algebra Subroutine (BLAS) library calls. The BLAS routines provide efficient and portable implementations of linear algebra primitives, with many fast implementations existing across many architectures [8].

To this point, the GEneral Matrix Multiply (GEMM) primitive specified within the BLAS library is possibly the most optimized and widely used routine in scientific computing. Noting that the basic theoretical computational and communication complexities of most tensor contractions are equivalent to those of GEMM, these computations should scale equally well. However, we find that existing tensor libraries such as the TENSOR TOOLBOX and CYCLOPS TENSOR FRAMEWORK perform explicit data transposition to compute almost all tensor contractions, and the cost of data restructuring often dominates the cost of the actual computation. Other approaches have previously proposed intrusive compiler and static analysis solutions [16], [17], whereas we provide a much simpler library-based solution.

Findings and contributions: We introduce a new BLAS primitive, known as STRIDEDBATCHEDGEMM, that allows the majority of tensor contractions to be computed without any explicit memory motion. We detail the so-called exceptional cases that cannot be evaluated with STRIDEDBATCHEDGEMM and demonstrate that an efficient solution exists with another small extension to the primitive.

We demonstrate performance improvement using our approach on both CPU and GPU in direct benchmarks, in addition to an application study. The Tucker decomposition is an important tensor application in machine learning wherein the advantage of our strategy compared to existing libraries is clear.

Finally, the value of this approach and its applications are being recognized by NVIDIA. As of this writing, the proposed interface exists in the CUBLAS 8.0 Release Candidate and is likely to appear in the official release later this summer.

II. BACKGROUND

A. Related Work

Peise et al. [22] extended results from Napoli et al. [19] in mapping tensor contractions to sequences of BLAS routines and modeling the performance of these mappings. In that work, they systematically enumerate and benchmark combinations of possible BLAS kernels one could use to compute a given tensor contraction, concluding that the best-performing algorithms involve the GEMM kernel. Some evaluation strategies are not considered, such as flattening or developing new, generic linear algebraic subroutines that could yield improved performance.

Li et al. [16] also recognize the cost of explicit copies and propose evaluation strategies exactly comparable to the flattening and batching strategies addressed in this paper. Their discussion of loop modes and component modes maps to our discussion of batch modes and GEMM modes. However, Li et al. do not discuss strategies beyond tensor-times-matrix multiply. Furthermore, they only consider mode-n tensor-times-matrix contractions of the form Y_{i1 ... i(n-1) j i(n+1) ... iN} = Σ_{in} X_{i1 ... iN} U_{j in}, which avoids the more complicated cases in this paper.

Abdelfattah et al. [3] present a framework using batched GEMM for tensor contractions on GPUs. However, they focus on optimizing only a limited number of tensor contraction kernels on extremely small tensors. Other works [1], [20] improve tensor computation performance by loop reorganization and fusion.

The STRIDEDBATCHEDGEMM interface proposed in this paper has previously been mentioned by Jhurani et al. [12] as a low-overhead interface for multiple small matrices on NVIDIA GPUs. Jhurani proposes the same interface for CUBLAS that we propose in this paper and focuses on implementation concerns. In this work, we treat STRIDEDBATCHEDGEMM as an available primitive, benchmark evaluation strategies that utilize it, and examine how it may be further extended for use in multi-linear algebra.

The BLAS-like Library Instantiation Software (BLIS) framework [26] offers GEMMs which support non-unit strides in both the row and column dimensions, which are attractive solutions to some of the problems in this paper. However, performance is expected to suffer due to decreases in cache line utilization and SIMD opportunities.

Recent improvements in parallel and distributed computing systems have made complex tensor computation feasible. TensorFlow [2] can handle multi-linear algebra operations, and it is primarily a data-flow and task-scheduling framework.

B. Notation

All indexing is zero-based. R denotes the set of real numbers. The order of a tensor is the number of modes it admits. A scalar is a zeroth-order tensor, a vector is a first-order tensor, a matrix (say A_mn) is a second-order tensor with the rows (indexed by m) being the first mode and the columns (indexed by n) being the second mode, and a three-way array (say A_mnp) is a third-order tensor with the first, second, and third modes indexed by m, n, and p, respectively. Note that we use the term index to name a mode and iterate through the elements in that mode.

The dimension of the i-th mode, denoted dim<i>, is the number of elements it contains. The dimension of a mode of a tensor is denoted by the bold lowercase letter of the respective index; for example, the third-order tensor A_mnp has dimension dim<0> × dim<1> × dim<2> or m × n × p, where the first mode (indexed by m) takes values 0, ..., m-1, the second mode (indexed by n) takes values 0, ..., n-1, and the third mode (indexed by p) takes values 0, ..., p-1.

We follow the Einstein summation convention to represent tensor contractions. A general tensor contraction is written as

    C_C = α A_A B_B + β C_C    (1)

where A, B, C are ordered sequences of indices such that C ≡ (A ∪ B) \ (A ∩ B). The indices in A ∩ B are called contracted indices. The indices in C are called free indices.

C. An Important Practical Application

In unsupervised learning, tensor decomposition [4] is gaining a lot of attention and is the crux of model estimation via the method of moments. A variety of problems such as topic model estimation, Gaussian mixture model estimation, and social network learning can be provably, consistently and efficiently solved via tensor decomposition techniques under certain mild assumptions.

The basic building blocks of these algorithms involve tensor contractions. Two frequently used tensor decomposition methods are the CP decomposition [10] and the Tucker decomposition [25]. In [27], the authors use the Tucker decomposition to extract new representations of face images despite different expressions or camera viewpoints. To illustrate the fundamental importance of tensor contractions, we pick one of the most common tensor decomposition algorithms, namely the higher-order orthogonal iteration (HOOI) [15] for asymmetric Tucker decomposition, and use it as a case study. In Einstein notation, the factorization of a third-order tensor T ∈ R^{m×n×p} is given by T_mnp = G_ijk A_mi B_nj C_pk, where G ∈ R^{i×j×k} is the core tensor, A ∈ R^{m×i}, B ∈ R^{n×j}, and C ∈ R^{p×k}.
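To make the STRIDEDBATCHEDGEMM idea concrete, the following pure-Python sketch (illustrative only; the real primitive is a CUBLAS/MKL kernel operating on strided views, and the contraction C[m][n][p] = Σ_k A[m][k]·B[k][n][p] is a hypothetical example rather than one quoted from the paper) shows how fixing a batch index p turns a single-index third-order contraction into a batch of ordinary GEMMs:

```python
# Illustrative sketch: strided-batched GEMM semantics in pure Python.
# For each batch entry p: C[p] <- alpha * A[p] @ B[p] + beta * C[p].

def gemm(A, B, C, alpha=1.0, beta=0.0):
    """Plain GEMM on nested lists: C <- alpha*A@B + beta*C."""
    M, K, N = len(A), len(B), len(B[0])
    for m in range(M):
        for n in range(N):
            acc = sum(A[m][k] * B[k][n] for k in range(K))
            C[m][n] = alpha * acc + beta * C[m][n]

def strided_batched_gemm(As, Bs, Cs, alpha=1.0, beta=0.0):
    """Batch of GEMMs; in the real primitive As/Bs/Cs are strided views."""
    for p in range(len(As)):
        gemm(As[p], Bs[p], Cs[p], alpha, beta)

# Hypothetical contraction C[m][n][p] = sum_k A[m][k] * B[k][n][p]:
# for each fixed p this is the GEMM C[:, :, p] = A @ B[:, :, p], so the
# whole contraction is one batched GEMM over p with no restructuring.
M, N, P, K = 2, 3, 4, 5
A = [[m + k for k in range(K)] for m in range(M)]
B = [[[k * n + p for p in range(P)] for n in range(N)] for k in range(K)]
C = [[[0.0] * P for _ in range(N)] for _ in range(M)]

# Batch "views" of B and C (copied here only to keep the sketch short;
# the primitive would address the same memory through a batch stride).
Bs = [[[B[k][n][p] for n in range(N)] for k in range(K)] for p in range(P)]
Cs = [[[0.0] * N for _ in range(M)] for _ in range(P)]
strided_batched_gemm([A] * P, Bs, Cs)
for p in range(P):
    for m in range(M):
        for n in range(N):
            C[m][n][p] = Cs[p][m][n]
```

The result agrees with the direct triple-loop evaluation of the contraction, which is the property that lets the primitive replace explicit copy-and-transpose pipelines.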
Recommended publications
  • Tensor Manipulation in GPL Maxima
    Tensor Manipulation in GPL Maxima Viktor Toth http://www.vttoth.com/ February 1, 2008 Abstract GPL Maxima is an open-source computer algebra system based on DOE-MACSYMA. GPL Maxima included two tensor manipulation packages from DOE-MACSYMA, but these were in various states of disrepair. One of the two packages, CTENSOR, implemented component-based tensor manipulation; the other, ITENSOR, treated tensor symbols as opaque, manipulating them based on their index properties. The present paper describes the state in which these packages were found, the steps that were needed to make the packages fully functional again, and the new functionality that was implemented to make them more versatile. A third package, ATENSOR, was also implemented; fully compatible with the identically named package in the commercial version of MACSYMA, ATENSOR implements abstract tensor algebras. 1 Introduction GPL Maxima (GPL stands for the GNU Public License, the most widely used open source license construct) is the descendant of one of the world’s first comprehensive computer algebra systems (CAS), DOE-MACSYMA, developed by the United States Department of Energy in the 1960s and the 1970s. It is currently maintained by 18 volunteer developers, and can be obtained in source or object code form from http://maxima.sourceforge.net/. Like other computer algebra systems, Maxima has tensor manipulation capability. This capability was developed in the late 1970s. Documentation is scarce regarding these packages’ origins, but a select collection of e-mail messages by various authors survives, dating back to 1979-1982, when these packages were actively maintained at M.I.T. When this author first came across GPL Maxima, the tensor packages were effectively non-functional.
  • Electromagnetic Fields from Contact- and Symplectic Geometry
    ELECTROMAGNETIC FIELDS FROM CONTACT- AND SYMPLECTIC GEOMETRY MATIAS F. DAHL Abstract. We give two constructions that give a solution to the sourceless Maxwell's equations from a contact form on a 3-manifold. In both constructions the solutions are standing waves. Here, contact geometry can be seen as a differential geometry where the fundamental quantity (that is, the contact form) shows a constantly rotational behaviour due to a non-integrability condition. Using these constructions we obtain two main results. With the first construction we obtain a solution to Maxwell's equations on R^3 with an arbitrary prescribed and time independent energy profile. With the second construction we obtain solutions in a medium with a skewon part for which the energy density is time independent. The latter result is unexpected since usually the skewon part of a medium is associated with dissipative effects, that is, energy losses. Lastly, we describe two ways to construct a solution from symplectic structures on a 4-manifold. One difference between acoustics and electromagnetics is that in acoustics the wave is described by a scalar quantity whereas in electromagnetics the wave is described by vectorial quantities. In electromagnetics, this vectorial nature gives rise to polarisation, that is, in an anisotropic medium differently polarised waves can have different propagation and scattering properties. To understand propagation in electromagnetism one therefore needs to take into account the role of polarisation. Classically, polarisation is defined as a property of plane waves, but generalising this concept to an arbitrary electromagnetic wave seems to be difficult. To understand propagation in an inhomogeneous medium, there are various mathematical approaches: using Gaussian beams (see [Dah06, Kac04, Kac05, Pop02, Ral82]), using Hadamard's method of a propagating discontinuity (see [HO03]) and using microlocal analysis (see [Den92]).
  • Automated Operation Minimization of Tensor Contraction Expressions in Electronic Structure Calculations
    Automated Operation Minimization of Tensor Contraction Expressions in Electronic Structure Calculations Albert Hartono,1 Alexander Sibiryakov,1 Marcel Nooijen,3 Gerald Baumgartner,4 David E. Bernholdt,6 So Hirata,7 Chi-Chung Lam,1 Russell M. Pitzer,2 J. Ramanujam,5 and P. Sadayappan1 1 Dept. of Computer Science and Engineering 2 Dept. of Chemistry The Ohio State University, Columbus, OH, 43210 USA 3 Dept. of Chemistry, University of Waterloo, Waterloo, Ontario N2L BG1, Canada 4 Dept. of Computer Science 5 Dept. of Electrical and Computer Engineering Louisiana State University, Baton Rouge, LA 70803 USA 6 Computer Sci. & Math. Div., Oak Ridge National Laboratory, Oak Ridge, TN 37831 USA 7 Quantum Theory Project, University of Florida, Gainesville, FL 32611 USA Abstract. Complex tensor contraction expressions arise in accurate electronic struc- ture models in quantum chemistry, such as the Coupled Cluster method. Transfor- mations using algebraic properties of commutativity and associativity can be used to significantly decrease the number of arithmetic operations required for evalua- tion of these expressions, but the optimization problem is NP-hard. Operation mini- mization is an important optimization step for the Tensor Contraction Engine, a tool being developed for the automatic transformation of high-level tensor contraction expressions into efficient programs. In this paper, we develop an effective heuristic approach to the operation minimization problem, and demonstrate its effectiveness on tensor contraction expressions for coupled cluster equations. 1 Introduction Currently, manual development of accurate quantum chemistry models is very tedious and takes an expert several months to years to develop and debug. 
The Tensor Contraction Engine (TCE) [2, 1] is a tool that is being developed to reduce the development time to hours/days, by having the chemist specify the computation in a high-level form, from which an efficient parallel program is automatically synthesized.
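As a toy illustration of the kind of operation minimization a tool like the TCE performs (a hypothetical example, not one taken from the paper), consider evaluating the chained contraction D = A·B·C: by associativity, the two parenthesizations (AB)C and A(BC) compute the same result at very different scalar-multiplication counts, and an optimizer picks the cheaper one:

```python
# Toy operation-minimization example (hypothetical, not from the TCE paper).
# Multiplying an (m x k) matrix by a (k x n) matrix costs m*k*n multiplies.

def chain_cost(dims, order):
    """dims = (d0, d1, d2, d3) for A: d0 x d1, B: d1 x d2, C: d2 x d3.
    order = 'left' for (AB)C, 'right' for A(BC)."""
    d0, d1, d2, d3 = dims
    if order == "left":   # AB costs d0*d1*d2, then (AB)C costs d0*d2*d3
        return d0 * d1 * d2 + d0 * d2 * d3
    else:                 # BC costs d1*d2*d3, then A(BC) costs d0*d1*d3
        return d1 * d2 * d3 + d0 * d1 * d3

# A tall-skinny chain: A is 100x2, B is 2x100, C is 100x2.
dims = (100, 2, 100, 2)
left, right = chain_cost(dims, "left"), chain_cost(dims, "right")
best = "left" if left < right else "right"
```

For these dimensions the left parenthesization costs 40000 multiplies and the right one only 800, a 50x gap from algebraic restructuring alone; real tensor contraction expressions offer far larger search spaces, which is why the problem is NP-hard and heuristics are needed.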
  • The Riemann Curvature Tensor
    The Riemann Curvature Tensor Jennifer Cox May 6, 2019 Project Advisor: Dr. Jonathan Walters Abstract A tensor is a mathematical object that has applications in areas including physics, psychology, and artificial intelligence. The Riemann curvature tensor is a tool used to describe the curvature of n-dimensional spaces such as Riemannian manifolds in the field of differential geometry. The Riemann tensor plays an important role in the theories of general relativity and gravity as well as the curvature of spacetime. This paper will provide an overview of tensors and tensor operations. In particular, properties of the Riemann tensor will be examined. Calculations of the Riemann tensor for several two and three dimensional surfaces such as that of the sphere and torus will be demonstrated. The relationship between the Riemann tensor for the 2-sphere and 3-sphere will be studied, and it will be shown that these tensors satisfy the general equation of the Riemann tensor for an n-dimensional sphere. The connection between the Gaussian curvature and the Riemann curvature tensor will also be shown using Gauss's Theorem Egregium. Keywords: tensor, tensors, Riemann tensor, Riemann curvature tensor, curvature 1 Introduction Coordinate systems are the basis of analytic geometry and are necessary to solve geometric problems using algebraic methods. The introduction of coordinate systems allowed for the blending of algebraic and geometric methods that eventually led to the development of calculus. Reliance on coordinate systems, however, can result in a loss of geometric insight and an unnecessary increase in the complexity of relevant expressions. Tensor calculus is an effective framework that will avoid the cons of relying on coordinate systems.
  • Appendix a Relations Between Covariant and Contravariant Bases
    Appendix A Relations Between Covariant and Contravariant Bases

    The contravariant basis vector g^k of the curvilinear coordinate u^k at the point P is perpendicular to the covariant bases g_i and g_j, as shown in Fig. A.1. This contravariant basis g^k can be defined as

        g_i × g_j = a g^k = (∂r/∂u^i) × (∂r/∂u^j)    (A.1)

    where a is a scalar factor and g^k is the contravariant basis of the curvilinear coordinate u^k. Multiplying Eq. (A.1) by the covariant basis g_k, the scalar factor a results in

        (g_i × g_j) · g_k = a (g^k · g_k) = a δ^k_k = a
        ⇒ a = (g_i × g_j) · g_k ≡ [g_i, g_j, g_k]    (A.2)

    The scalar triple product of the covariant bases can be written as

        a = [g_1, g_2, g_3] = (g_1 × g_2) · g_3 = √g = J    (A.3)

    where the Jacobian J is the determinant of the covariant basis tensor G. The direction of the cross-product vector in Eq. (A.1) is opposite if the dummy indices are interchanged with each other in the Einstein summation convention. Therefore, the Levi-Civita permutation symbols (pseudo-tensor components) can be used in the expression of the contravariant basis:

        √g g^k = J g^k = (g_i × g_j) = -(g_j × g_i)
        ⇒ g^k = ε_ijk (g_i × g_j) / √g = ε_ijk (g_i × g_j) / J    (A.4)

    where the Levi-Civita permutation symbols are defined by

        ε_ijk = +1 if (i, j, k) is an even permutation;
                -1 if (i, j, k) is an odd permutation;    (A.5)
                 0 if i = j, or i = k, or j = k

        ⇔ ε_ijk = (1/2) (i - j) · (j - k) · (k - i) for i, j, k = 1, 2, 3

    H.
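The closed-form expression ε_ijk = (i-j)(j-k)(k-i)/2 given above can be checked directly against the case definition of the permutation symbol; a minimal sketch:

```python
# Check the closed form e_ijk = (i-j)*(j-k)*(k-i)/2 against the
# case definition of the Levi-Civita symbol for i, j, k in {1, 2, 3}.

def levi_civita_cases(i, j, k):
    if (i, j, k) in [(1, 2, 3), (2, 3, 1), (3, 1, 2)]:
        return +1   # even permutations
    if (i, j, k) in [(3, 2, 1), (1, 3, 2), (2, 1, 3)]:
        return -1   # odd permutations
    return 0        # at least one repeated index

def levi_civita_formula(i, j, k):
    # The product is always even (0 or +-2), so integer division is exact.
    return (i - j) * (j - k) * (k - i) // 2

mismatches = [(i, j, k)
              for i in (1, 2, 3) for j in (1, 2, 3) for k in (1, 2, 3)
              if levi_civita_cases(i, j, k) != levi_civita_formula(i, j, k)]
```

All 27 index triples agree: any repeated index zeroes one factor, and the sign of the remaining product tracks the permutation parity.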
  • Gicheru3012018jamcs43211.Pdf
    Journal of Advances in Mathematics and Computer Science 30(1): 1-15, 2019; Article no.JAMCS.43211 ISSN: 2456-9968 (Past name: British Journal of Mathematics & Computer Science, Past ISSN: 2231-0851) Decomposition of Riemannian Curvature Tensor Field and Its Properties James Gathungu Gicheru1* and Cyrus Gitonga Ngari2 1Department of Mathematics, Ngiriambu Girls High School, P.O. Box 46-10100, Kianyaga, Kenya. 2Department of Mathematics, Computing and Information Technology, School of Pure and Applied Sciences, University of Embu, P.O. Box 6-60100, Embu, Kenya. Authors' contributions: This work was carried out in collaboration between both authors. Both authors read and approved the final manuscript. Article Information DOI: 10.9734/JAMCS/2019/43211 Editor(s): (1) Dr. Dragos-Patru Covei, Professor, Department of Applied Mathematics, The Bucharest University of Economic Studies, Piata Romana, Romania. (2) Dr. Sheng Zhang, Professor, Department of Mathematics, Bohai University, Jinzhou, China. Reviewers: (1) Bilal Eftal Acet, Adıyaman University, Turkey. (2) Çiğdem Dinçkal, Çankaya University, Turkey. (3) Alexandre Casassola Gonçalves, Universidade de São Paulo, Brazil. Complete Peer review History: http://www.sciencedomain.org/review-history/27899 Received: 07 July 2018 Accepted: 22 November 2018 Published: 21 December 2018 Original Research Article Abstract: Decomposition of recurrent curvature tensor fields of R-th order in Finsler manifolds has been studied by B. B. Sinha and G. Singh [1] in Publications de l'Institut Mathématique, Nouvelle Série, tome 33 (47), 1983, pp. 217-220. Also, Surendra Pratap Singh [2] in Kyungpook Math. J., volume 15, number 2, December 1975, studied decomposition of recurrent curvature tensor fields in generalised Finsler spaces.
  • Tensor Calculus
    A Primer on Tensor Calculus David A. Clarke Saint Mary’s University, Halifax NS, Canada [email protected] June, 2011 Copyright c David A. Clarke, 2011 Contents Preface ii 1 Introduction 1 2 Definition of a tensor 3 3 The metric 9 3.1 Physical components and basis vectors ..................... 11 3.2 The scalar and inner products .......................... 14 3.3 Invariance of tensor expressions ......................... 17 3.4 The permutation tensors ............................. 18 4 Tensor derivatives 21 4.1 “Christ-awful symbols” .............................. 21 4.2 Covariant derivative ............................... 25 5 Connexion to vector calculus 30 5.1 Gradient of a scalar ................................ 30 5.2 Divergence of a vector .............................. 30 5.3 Divergence of a tensor .............................. 32 5.4 The Laplacian of a scalar ............................. 33 5.5 Curl of a vector .................................. 34 5.6 The Laplacian of a vector ............................ 35 5.7 Gradient of a vector ............................... 35 5.8 Summary ..................................... 36 5.9 A tensor-vector identity ............................. 37 6 Cartesian, cylindrical, spherical polar coordinates 39 6.1 Cartesian coordinates ............................... 40 6.2 Cylindrical coordinates .............................. 40 6.3 Spherical polar coordinates ............................ 41 7 An application to viscosity 42 i Preface These notes stem from my own need to refresh my memory on the fundamentals
  • GEMM-Like Tensor-Tensor Contraction
    GEMM-like Tensor-Tensor Contraction Paul Springer and Paolo Bientinesi, AICES, RWTH Aachen University, Aachen, Germany. Email: [email protected], [email protected]

    1 INTRODUCTION Tensor contractions (TCs), a generalization of matrix-matrix multiplications (GEMM) to multiple dimensions, arise in a wide range of scientific computations such as machine learning [1], spectral element methods [10], quantum chemistry calculations [2], multidimensional Fourier transforms [4] and climate simulations [3]. Despite the close connection between TCs and matrix-matrix products, the performance of the former is in general vastly inferior to that of an optimized GEMM. To close such a gap, we propose a novel approach: GEMM-like Tensor-Tensor (GETT) contraction [8]. GETT captures the essential features of a high-performance GEMM. In line with a highly optimized GEMM, and in contrast to previous approaches to TCs, our high-performance implementation of the GETT approach is fully vectorized, exploits the CPU's cache hierarchy, avoids explicit preprocessing steps, and operates on arbitrary dimensional subtensors while preserving stride-one memory accesses.

    2 GEMM-LIKE TENSOR CONTRACTION Let A, B, and C be tensors (i.e., multi-dimensional arrays); a tensor contraction is:

        C_{I_C} ← α × A_{I_A} × B_{I_B} + β × C_{I_C},    (1)

    where I_A, I_B, and I_C denote symbolic index sets. We further define three important index sets: I_m := I_A ∩ I_C, I_n := I_B ∩ I_C, and I_k := I_A ∩ I_B, respectively denoting the free indices of A, the free indices of B, and the contracted indices. Moreover, we adopt the Einstein notation, indicating that all indices of I_k are implicitly summed (i.e., contracted). Adhering to this definition, we can reformulate Equation (1) such that its similarity to a matrix-matrix multiplication becomes apparent:

        C_{Π^C(I_m ∪ I_n)} ← α × A_{Π^A(I_m ∪ I_k)} × B_{Π^B(I_n ∪ I_k)} + β × C_{Π^C(I_m ∪ I_n)},    (2)
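The index-set bookkeeping behind Equation (1) is easy to mimic with plain set operations; the sketch below uses a hypothetical contraction C[m1, m2, n] = Σ_k A[m1, k, m2] · B[k, n] (an illustration, not an example from the GETT paper) to derive the free and contracted index sets:

```python
# Derive the GETT index sets for a hypothetical contraction
# C[m1, m2, n] = sum_k A[m1, k, m2] * B[k, n].
IA = {"m1", "k", "m2"}   # indices of A
IB = {"k", "n"}          # indices of B
IC = {"m1", "m2", "n"}   # indices of C

Im = IA & IC             # free indices of A
In = IB & IC             # free indices of B
Ik = IA & IB             # contracted indices (implicitly summed)
```

By construction the output indices are exactly the union of the two free sets, which is what licenses the matrix-multiply reformulation: fold I_m into a row index, I_n into a column index, and I_k into the reduction index.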
  • Conformal Differential Geometry of a Subspace
    CONFORMAL DIFFERENTIAL GEOMETRY OF A SUBSPACE BY AARON FIALKOW Table of contents Introduction. 310 I. Preliminary matters 1. Notation. 315 2. Riemann spaces conformal to Vm. 316 3. Conformal tensors. 318 4. Algebraic and metric properties. 319 5. Generalized covariant differentiation and the second fundamental form of Vn. 323 II. Conformal tensor algebra 6. The relative conformal curvature of Vn. 325 7. The conformal measure tensors. 329 III. Conformal tensor analysis 8. Conformal differentiation. 331 9. The replacement principle. 335 10. Conformal geodesic and conformal normal coordinates and parameters. 341 11. The Weyl tensors. 342 12. The Bianchi identities for the Weyl tensors. 346 13. The deviation tensor. 350 14. The conformal Riemann tensor of Vn. 352 15. Variation of the mean curvature normal. 354 16. The conformal Riemann tensor of Vm. 355 17. The Ricci identities. 359 IV. The conformal vector spaces of Vn 18. The conformal osculating and normal vector spaces of Vn. 359 19. The conformal normal measure tensors. 362 20. The conformal Frenet equations. 365 V. The fundamental equations 21. The boundary, algebraic, reality and derived conditions. 370 22. The conformal Gauss-Codazzi equations. 373 23. Higher integrability conditions. 376 24. The third derivatives of the relative conformal curvature. 379 25. The second derivatives of the mean curvature. Discussion for a hypersurface. 383 26. The second derivatives of the mean curvature normal. Discussion for the general case. 385 VI. SUBSPACES IN A CONFORMALLY EUCLIDEAN SPACE Rm 27. Symmetry relations. 392 28. Determination of &£$,, U$Ü¿>,£«„>I<W. 396 Presented to the Society, October 26, 1940 and February 22, 1941; received by the editors February 1, 1943, and, in revised form, December 15, 1943.
  • Lecture Notes on General Relativity Sean M
    Lecture Notes on General Relativity Sean M. Carroll Institute for Theoretical Physics University of California Santa Barbara, CA 93106 [email protected] December 1997 Abstract These notes represent approximately one semester’s worth of lectures on intro- ductory general relativity for beginning graduate students in physics. Topics include manifolds, Riemannian geometry, Einstein’s equations, and three applications: grav- itational radiation, black holes, and cosmology. Individual chapters, and potentially updated versions, can be found at http://itp.ucsb.edu/~carroll/notes/. arXiv:gr-qc/9712019v1 3 Dec 1997 NSF-ITP/97-147 gr-qc/9712019 i Table of Contents 0. Introduction table of contents — preface — bibliography 1. Special Relativity and Flat Spacetime the spacetime interval — the metric — Lorentz transformations — spacetime diagrams — vectors — the tangent space — dual vectors — tensors — tensor products — the Levi-Civita tensor — index manipulation — electromagnetism — differential forms — Hodge duality — worldlines — proper time — energy-momentum vector — energy- momentum tensor — perfect fluids — energy-momentum conservation 2. Manifolds examples — non-examples — maps — continuity — the chain rule — open sets — charts and atlases — manifolds — examples of charts — differentiation — vectors as derivatives — coordinate bases — the tensor transformation law — partial derivatives are not tensors — the metric again — canonical form of the metric — Riemann normal coordinates — tensor densities — volume forms and integration 3. Curvature
  • M.Sc. (MATHEMATICS) MAL-633
    M.Sc. (MATHEMATICS) MECHANICS OF SOLIDS-I MAL-633 DIRECTORATE OF DISTANCE EDUCATION GURU JAMBHESHWAR UNIVERSITY OF SCIENCE & TECHNOLOGY HISAR, HARYANA-125001. MAL-633 MECHANICS OF SOLIDS-I Contents: Chapter -1 ………………………………………………………………… Page: 1-21 1.1. Introduction 01 1.1.1.1. Rank/Order of tensor 02 1.1.1.2. Characteristics of the tensors 02 1.2 Notation and Summation Convention 03 1.3 Law of Transformation 04 1.4 Some properties of tensor 11 1.5 Contraction of a Tensor 15 1.6 Quotient Law of Tensors 16 1.7 Symmetric and Skew Symmetric tensors 18 Chapter - II ……………………………………………………………… Page: 18-32 2.1 The Symbol δij 19 2.1.1. Tensor Equation 20 2.2 The Symbol εijk 24 2.3 Isotropic Tensors 26 2.4 Contravariant Tensors (Vectors) 29 2.5 Covariant Vectors 30 Chapter –III …………………………………………………………….. Page: 33-38 3.1 Eigen Values and Eigen Vectors 33 Chapter –IV ............................................................................................ Page: 39-65 4.1 Introduction 39 4.2 Transformation of an Elastic Body 40 4.3 Linear Transformation or Affine Transformation 42 4.4 Small/Infinitesimal Linear Deformation 44 4.5 Homogeneous Deformation 45 4.6 Pure Deformation and Components of Strain Tensor 52 4.7 Geometrical Interpretation of the Components of Strain 56 4.8 Normal and Tangential Displacement 64 Chapter –V ..……………………………………………………………… Page: 66-99 5.1 Strain Quadric of Cauchy 66 5.2 Strain Components at a point in a rotation of coordinate axes 69 5.3 Principal Strains and Invariants 71 5.4 General Infinitesimal Deformation 82 5.5 Saint-Venant's
  • A Code Generator for High-Performance Tensor Contractions on Gpus
    A Code Generator for High-Performance Tensor Contractions on GPUs Jinsung Kim∗, Aravind Sukumaran-Rajam∗, Vineeth Thumma∗, Sriram Krishnamoorthy†, Ajay Panyala†, Louis-Noel Pouchet‡, Atanas Rountev∗, P. Sadayappan∗ ∗The Ohio State University, Ohio, USA {kim.4232, sukumaranrajam.1, thumma.6, rountev.1, sadayappan.1}@osu.edu †Pacific Northwest National Laboratory, Washington, USA {sriram, ajay.panyala}@pnnl.gov ‡Colorado State University, Colorado, USA [email protected]

    Abstract—Tensor contractions are higher dimensional generalizations of matrix-matrix multiplication. They form the compute-intensive core of many applications in computational science and data science. In this paper, we describe a high-performance GPU code generator for arbitrary tensor contractions. It exploits domain-specific properties about data reuse in tensor contractions to devise an effective code generation schema, coupled with an effective model-driven search, to determine parameters for mapping of computation to threads and staging of data through the GPU memory hierarchy. Experimental evaluation using a set of tensor contraction benchmarks demonstrates performance improvement and/or significantly reduced code generation time.

    The common solution in use today for performing tensor contractions is to first perform suitable index permutation, i.e., layout transformation of one or both input tensors, to move all contraction indices to the left end or the right end, so that the needed sums of products for the contraction can be performed using matrix-matrix multiplication. A further final index permutation of the result from the matrix multiplication may also be required. However, as elaborated in the next section, this approach, referred to as TTGT (Transpose-Transpose-GEMM-Transpose), incurs several disadvantages.
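The TTGT scheme described above can be sketched in a few lines of pure Python (illustrative dimensions and names; the contraction Y[a][b][j] = Σ_k X[a][k][b] · U[j][k] is a hypothetical example): permute the input so the contraction index sits at the right end, flatten to a matrix, run a plain GEMM, and permute the result back.

```python
# Illustrative TTGT (Transpose-Transpose-GEMM-Transpose) sketch for the
# hypothetical contraction Y[a][b][j] = sum_k X[a][k][b] * U[j][k].
A_, B_, K_, J_ = 2, 3, 4, 5
X = [[[a + 2 * k + b for b in range(B_)] for k in range(K_)] for a in range(A_)]
U = [[j * k + 1 for k in range(K_)] for j in range(J_)]

# Transpose 1: move the contraction index k to the right end and flatten
# X'[a][b][k] into an (A_*B_) x K_ matrix; this is the explicit copy that
# TTGT pays for and that the paper's code generator avoids.
Xmat = [[X[a][k][b] for k in range(K_)] for a in range(A_) for b in range(B_)]

# Transpose 2 + GEMM: multiply by U^T, giving an (A_*B_) x J_ matrix.
Ut = [[U[j][k] for j in range(J_)] for k in range(K_)]
Ymat = [[sum(Xmat[r][k] * Ut[k][j] for k in range(K_)) for j in range(J_)]
        for r in range(A_ * B_)]

# Final transpose: unflatten the row index r = a*B_ + b back into (a, b).
Y = [[[Ymat[a * B_ + b][j] for j in range(J_)] for b in range(B_)]
     for a in range(A_)]
```

The output matches the direct summation, but three data-restructuring passes were needed around one GEMM, which is exactly the overhead the TTGT approach incurs.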