Tensor Algebra and Tensor Calculus


Appendix A
Tensor Algebra and Tensor Calculus

We collect here some notions of geometry and analysis used in the previous chapters. This appendix is no more than a brief summary containing a few definitions and statements of results without proof. The conscientious reader unfamiliar with the topics collected here should take from these pages some directions for further reading in appropriate textbooks.

A.1 Vector Spaces

Definition 1. A set $V$ is called a vector space (or a linear space) over a field $F$ if it is endowed with two operations, $+ : V \times V \to V$ and $\circ : V \times F \to V$, called respectively addition and multiplication, such that

– addition is commutative, i.e., $v + w = w + v$ for every pair $v, w \in V$,
– addition is associative, i.e., $(v + w) + z = v + (w + z)$ for every triple $v, w, z \in V$,
– addition admits a unique neutral element $0 \in V$ such that $v + 0 = v$ for all $v \in V$,
– with $\alpha v$ denoting the product $\alpha \circ v$, for every $\alpha, \beta \in F$ and $v \in V$, multiplication is associative, i.e., $\alpha(\beta v) = (\alpha \beta) v$, where $\alpha \beta$ is the product in $F$,
– multiplication admits a unique neutral element $1 \in F$ such that $1v = v$ for all $v \in V$,
– multiplication is distributive with respect to addition, i.e., $\alpha(v + w) = \alpha v + \alpha w$ for every pair $v, w \in V$,
– multiplication is distributive with respect to addition in $F$, i.e., for every pair $\alpha, \beta \in F$ and every $v \in V$, $(\alpha + \beta)v = \alpha v + \beta v$.

We do not recall the abstract definition of a field in the algebraic sense, because we consider here only the special case of $F$ equal to the set of real numbers $\mathbb{R}$.

© Springer Science+Business Media New York 2015. P.M. Mariano, L. Galano, Fundamentals of the Mechanics of Solids, DOI 10.1007/978-1-4939-3133-0

Definition 2.
Elements $v_1, \dots, v_n$ of a linear space are said to be linearly dependent if there exist scalars $\alpha_1, \dots, \alpha_n$, not all equal to zero, such that
$$\sum_{i=1}^{n} \alpha_i v_i = 0.$$
In contrast, when the previous equation holds if and only if $\alpha_i = 0$ for every $i$, the elements $v_1, \dots, v_n$ are said to be linearly independent.

From now on, we shall use Einstein notation, and we shall write $\alpha_i v_i$ instead of $\sum_{i=1}^{n} \alpha_i v_i$, implying summation over repeated indices.

Given $v_1, \dots, v_n \in V$, we denote by $\mathrm{span}\{v_1, \dots, v_n\}$ the subset of $V$ containing $v_1, \dots, v_n$ and all their linear combinations.

Definition 3. A basis (or a coordinate system) in $V$ is a set of linearly independent elements of $V$, say $v_1, \dots, v_n$, such that $\mathrm{span}\{v_1, \dots, v_n\} = V$. In this case, $n$ is the dimension of $V$. In particular, when $n < +\infty$, we say that $V$ has finite dimension, which will be the case throughout this book.

The dimension of a linear space $V$ is independent of the selection of a basis for it. Every set of $m < n$ linearly independent elements of an $n$-dimensional linear space $V$ can be extended to a basis of $V$.

Let $e_1, \dots, e_n$ be a basis of a vector space $V$ over the field of real numbers, in short a real vector space. Every $v \in V$ admits a representation
$$v = v^i e_i,$$
with $v^i \in \mathbb{R}$ the $i$th component of $v$ in that basis. In other words, $v$ is independent of the basis, while its generic component $v^i$ exists with respect to a given basis. The distinction is essential, because we want to write the laws of physics independently of specific frames of reference (coordinate systems), i.e., of the observers.

Consider the $n$-dimensional point space $\mathcal{E}$. We can construct a linear space $V$ over it by taking the difference between points, namely $y - x$ with $x$ and $y$ in $\mathcal{E}$, and considering $V$ the set of all possible differences between pairs of points in $\mathcal{E}$. Then $V$ is a linear space with dimension $n$. We call it the translation space over $\mathcal{E}$.
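The Einstein convention introduced above maps directly onto numerical array operations. As an illustrative sketch (NumPy and its `einsum` routine are this example's choice, not part of the text), we can form the combination $\alpha_i v_i$ and test linear dependence through the rank of the matrix whose rows are the $v_i$:

```python
import numpy as np

# Einstein notation: alpha_i v_i means the sum over the repeated index i.
alpha = np.array([2.0, -1.0, 0.5])
v = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0, 1.0, 0.0]])  # rows are v_1, v_2, v_3

# 'i,ij->j' sums alpha_i against row i of v, i.e. alpha_i v_i.
combo = np.einsum('i,ij->j', alpha, v)

# v_3 = v_1 + v_2, so a nonzero coefficient choice (1, 1, -1) annihilates
# the family: the vectors are linearly dependent, and the rank shows it.
dependent = np.linalg.matrix_rank(v) < len(v)
print(combo, dependent)
```

Here the rank is $2 < 3$, matching Definition 2: a vanishing combination with not all coefficients zero exists.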
Once we select a point in $\mathcal{E}$, call it $O$, we can express all differences of points in reference to it, by writing
$$y - x = (y - O) - (x - O) = (y - O) + (-1)(x - O).$$
By assigning a basis in $V$ consisting of elements expressed in reference to $O$, which becomes the origin of the coordinate system, we identify $V$ with $\mathbb{R}^n$. There is, then, a hierarchy among $\mathcal{E}$, $V$, and $\mathbb{R}^n$: the $n$-dimensional Euclidean point space, the related translation space, and finally $\mathbb{R}^n$, determined by an increasing richness of geometric structure.

We call the elements of a given vector space (a linear space) vectors, although they can be far from the standard visualization of vectors as arrows in the three-dimensional ambient space or in the plane. For example, the set of $n \times m$ real matrices is a linear space, and so is the set of all polynomials with complex coefficients in a variable $t$, once we identify $F$ with the set of complex numbers and consider addition and multiplication as the ordinary addition of two polynomials and the multiplication of a polynomial by a complex number. Several other examples could be given that are more intricate and difficult to visualize than those above.

Definition 4. The scalar (inner) product over $V$ is a function $\langle \cdot, \cdot \rangle : V \times V \to \mathbb{R}$ that is
– commutative, i.e., $\langle v, w \rangle = \langle w, v \rangle$,
– distributive with respect to vector addition, i.e., $\langle v, w + z \rangle = \langle v, w \rangle + \langle v, z \rangle$,
– associative with respect to multiplication by elements of $F$, i.e., $\alpha \langle v, w \rangle = \langle \alpha v, w \rangle = \langle v, \alpha w \rangle$ for all $\alpha \in F$ and $v, w \in V$,
– nonnegative definite, i.e., $\langle v, v \rangle \geq 0$ for all $v \in V$, with equality holding only when $v = 0$.

When $V$ is endowed with a scalar product, we say that it is a Euclidean space. We call a point space $\mathcal{E}$ Euclidean if its translation space is endowed with a scalar product. Let $n$ be the dimension of the translation space $V$ of $\mathcal{E}$. In this case, we shall write $\mathcal{E}^n$, saying that it is an $n$-dimensional Euclidean point space.

Definition 5.
Let $\{e_1, \dots, e_n\}$ be a basis in a Euclidean vector space $V$. We say that it is orthonormal if
$$\langle e_i, e_j \rangle = \delta_{ij},$$
with $\delta_{ij}$ the Kronecker delta:
$$\delta_{ij} = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{if } i \neq j. \end{cases}$$
We shall also write $\delta^{ij}$ and $\delta^i_j$ for the components of the Kronecker delta, with the same numerical values but with a different meaning of the indices, as we shall make precise later.

Take two vectors $v = v^i e_i$ and $w = w^j e_j$ in $V$, expressed in terms of the basis $\{e_1, \dots, e_n\}$, with $n$ finite, which is not necessarily orthonormal. In terms of components, we write
$$\langle v, w \rangle = \langle v^i e_i, w^j e_j \rangle = v^i w^j \langle e_i, e_j \rangle = v^i g_{ij} w^j = v^i w^j g_{ij},$$
where
$$g_{ij} := \langle e_i, e_j \rangle, \qquad g_{ij} = g_{ji},$$
due to the commutativity of the scalar product. In particular, when the basis is orthonormal with respect to the scalar product considered, we have
$$\langle v, w \rangle = v^i \delta_{ij} w^j = v^i w_i,$$
where $w_i = \delta_{ij} w^j$, and the position of the index $i$ has a geometric meaning that we explain below. Before going into details, however, we say a bit about changes of frames in $V$.

We call $g_{ij}$ the $ij$th component of the metric tensor over $V$. The terminology recalls that $g_{ij}$ determines, in the basis $\{e_1, \dots, e_n\}$, the length $|v|$ of $v \in V$, defined by the square root of the scalar product of $v$ with itself. The meaning of the word tensor is defined below. We compute
$$|v|^2 := \langle v, v \rangle = v^i g_{ij} v^j.$$
In particular, when $g_{ij} = \delta_{ij}$ (we say in this case that the metric is flat), we have
$$|v|^2 = v^i \delta_{ij} v^j = v^i v_i = \sum_{i=1}^{n} (v^i)^2.$$

Assume that $V$ has finite dimension, as we shall do from now on. Take two bases $\{e_1, \dots, e_n\}$ and $\{i_1, \dots, i_n\}$, and assume that the latter is orthonormal. With respect to such a basis, we have
$$e_i = A_i^k\, i_k,$$
with $A_i^k$ appropriate proportionality factors. In turn, we have also
$$i_k = \bar{A}_k^i\, e_i,$$
with $\bar{A}_k^i$ other factors.
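The metric formulas above admit a short numerical sketch (the non-orthonormal basis below is an arbitrary choice for illustration, and NumPy is assumed): $g_{ij} = \langle e_i, e_j \rangle$ is computed from the basis, and $v^i g_{ij} w^j$ must agree with the ordinary dot product of the assembled vectors.

```python
import numpy as np

# A non-orthonormal basis of R^2: columns of E are e_1 and e_2.
E = np.array([[1.0, 1.0],
              [0.0, 2.0]])

# Metric components g_ij = <e_i, e_j>, i.e. E^T E.
g = np.einsum('ki,kj->ij', E, E)

vi = np.array([1.0, 2.0])   # components v^i
wi = np.array([3.0, -1.0])  # components w^j

# <v, w> = v^i g_ij w^j, with both repeated indices summed ...
vw = np.einsum('i,ij,j->', vi, g, wi)
# ... which must match the dot product of v = v^i e_i and w = w^j e_j.
check = np.dot(E @ vi, E @ wi)
print(vw, check)
```

The symmetry $g_{ij} = g_{ji}$ holds by construction, since `g` is $E^{\mathsf T} E$.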
Consequently, we obtain
$$i_k = \bar{A}_k^i\, e_i = \bar{A}_k^i A_i^l\, i_l,$$
so that, since $\{i_1, \dots, i_n\}$ is orthonormal, we have
$$\delta_{ij} = \langle i_i, i_j \rangle = \langle \bar{A}_i^r A_r^l\, i_l, i_j \rangle = \bar{A}_i^r A_r^l\, \langle i_l, i_j \rangle = \bar{A}_i^r A_r^l\, \delta_{lj} = \bar{A}_i^r A_r^j.$$

We specialize the analysis to $\mathbb{R}^n$ to clarify further the nature of the proportionality factors. Let $z = \{z^1, \dots, z^n\}$ be coordinates in reference to an orthonormal basis $\{i_1, \dots, i_n\}$, and let $x = \{x^1, \dots, x^n\}$ be another system of coordinates associated with a different basis, namely $\{e_1, \dots, e_n\}$, and assume the possibility of a coordinate change
$$z^j \mapsto x^i(z^j),$$
which is one-to-one and differentiable, with differentiable inverse mapping
$$x^i \mapsto z^j(x^i).$$
Consider two smooth curves $s \mapsto x(s)$ and $s \mapsto \hat{x}(s)$ intersecting at a point $\bar{x}$, and compute the derivatives with respect to $s$ at $\bar{x}$:
$$v = \frac{dx^i}{ds}(\bar{x})\, e_i, \qquad w = \frac{d\hat{x}^j}{ds}(\bar{x})\, e_j,$$
where $e_i$ and $e_j$ are the elements of the generic basis mentioned above, as a counterpart to the orthonormal one $\{i_1, \dots, i_n\}$.
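In matrix terms, the factors $A_i^k$ and $\bar{A}_k^i$ are the components of mutually inverse matrices, so the relation $\delta_{ij} = \bar{A}_i^r A_r^j$ can be checked numerically. A minimal sketch (NumPy assumed, with the orthonormal basis $\{i_k\}$ taken as the standard basis of $\mathbb{R}^2$):

```python
import numpy as np

# Take the orthonormal basis i_k to be the standard basis of R^2,
# and a generic basis e_1, e_2 stored as the columns of E.
E = np.array([[2.0, 1.0],
              [0.0, 1.0]])

# e_i = A_i^k i_k: with i_k the standard basis, A_i^k is the k-th
# coordinate of e_i, so A[i, k] = E[k, i].
A = E.T
Abar = np.linalg.inv(A)  # i_k = Abar_k^i e_i, the inverse change of basis

# delta_ij = Abar_i^r A_r^j: the two factor matrices multiply to the identity.
delta = Abar @ A
print(np.allclose(delta, np.eye(2)))
```

The identity $\bar{A} A = I$ expresses that the two changes of basis undo each other; any invertible choice of $E$ passes the same check.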