Generalized Kronecker and Permanent Deltas, Their Spinor and Tensor Equivalents — Reference Formulae


R L Agacy¹
42 Brighton Street, Gulliver, Townsville, QLD 4812, AUSTRALIA

Abstract

The work is essentially divided into two parts. The purpose of the first part, applicable to n-dimensional space, is to (i) introduce a generalized permanent delta (gpd), a symmetrizer, on an equal footing with the generalized Kronecker delta (gKd), (ii) derive combinatorial formulae for each separately, and then in combination, which then leads us to (iii) provide a comprehensive listing of these formulae. For the second part, applicable to spinors in the mathematical language of General Relativity, the purpose is to (i) provide formulae for combined gKd/gpd spinors, (ii) obtain and tabulate spinor equivalents of gKd and gpd tensors and (iii) derive and exhibit tensor equivalents of gKd and gpd spinors.

1 Introduction

The generalized Kronecker delta (gKd) is well known as an alternating function or antisymmetrizer, e.g. $\delta^{abc}_{def} X_{abc} = 3!\, X_{[def]}$. In contrast, although symmetric tensors are seen constantly, there does not appear to be employment of any symmetrizer, for e.g. $X_{(abc)}$, in analogy with the antisymmetrizer. However, it is exactly the combination of both types of symmetrizers, treated equally, that gives us the flexibility to describe any type of tensor symmetry. We define such a 'permanent' symmetrizer as a generalized permanent delta (gpd) below. Our purpose is to redress the imbalance between the gKd and gpd and provide a consolidated reference of combinatorial formulae for them, most of which is not in the literature. The work divides into two Parts: the first applicable to tensors in n dimensions, the second applicable to 2-component spinors used in the mathematical language of General Relativity.
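The antisymmetrizer identity quoted above can be checked numerically. The sketch below is our own illustration (not from the paper), assuming NumPy; the helper names `gkd` and `sign` are ours. It builds the gKd as a determinant of ordinary Kronecker deltas and confirms $\delta^{abc}_{def} X_{abc} = 3!\, X_{[def]}$ for a random tensor in three dimensions.

```python
from itertools import permutations
import numpy as np

def gkd(up, dn):
    # generalized Kronecker delta as the determinant of ordinary deltas
    m = [[1 if i == j else 0 for j in dn] for i in up]
    return round(np.linalg.det(np.array(m, dtype=float)))

def sign(perm):
    # parity of a permutation of (0, ..., len(perm)-1), by sorting with swaps
    s, perm = 1, list(perm)
    for k in range(len(perm)):
        while perm[k] != k:
            j = perm[k]
            perm[k], perm[j] = perm[j], perm[k]
            s = -s
    return s

n = 3
rng = np.random.default_rng(1)
X = rng.random((n, n, n))

# delta^{abc}_{def} X_{abc} = 3! X_{[def]} for every choice of (d, e, f)
for d in range(n):
    for e in range(n):
        for f in range(n):
            lhs = sum(gkd((a, b, c), (d, e, f)) * X[a, b, c]
                      for a in range(n) for b in range(n) for c in range(n))
            alt = sum(sign(p) * X[(d, e, f)[p[0]], (d, e, f)[p[1]], (d, e, f)[p[2]]]
                      for p in permutations(range(3)))  # this is 3! X_{[def]}
            assert abs(lhs - alt) < 1e-12
print("antisymmetrizer identity verified")
```

The determinant definition and the alternating-sum definition of the antisymmetrizer agree term by term, which is the point of the check.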
In Part I, section 2 gives definitions, illustrations and tables of combinatorial formulae of the gKd and gpd. The large section 3 derives an array of formulae for both a single gKd and products of such, which are collected together in Appendix A. A similarly large section 4 does the same for gpd's — the formulae here are not in the literature anywhere — they are tabulated in Appendix B. In section 5 simplifying formulae for combined gKd's and gpd's are derived, and presented in Appendix C. In Part II, in section 5 we determine the spinor equivalents of all gKd's and gpd's up to 8 (ie 4 up, 4 down) indices, the most useful for the tensor/spinor languages of Relativity — these are listed in Appendix E, I and II respectively. In section 6, in reverse, we derive the tensor equivalents of gKd and gpd spinors and sums of products of them — the results are collated in Appendix F.

Conventions are as follows. In Part I, general indices range from 1, ..., n. Bolded indices are to be regarded as fixed. An index set {k} stands for {k_1, ..., k_p} (p ≤ n) and an index set {k_i} stands for {k_{i1}, ..., k_{ip}} etc, all indices ranging over 1, ..., n. The index summation convention is understood. All index sets are permutations of each other. In Part II, two-component spinor indices are in upper case roman.

PART I

2 Definitions and illustrations of the gKd and gpd

Complete symmetry of (a tensor's) indices, as opposed to total antisymmetry, is manifested by all positive signs in any p-linear expression. Whereas the gKd's antisymmetry comes about through a determinant (interchanges of rows/columns or indices changes the sign), total positive or pure or permanent symmetry, as we term it, comes about through the use of a permanent. Then in complete analogy to the gKd we introduce the generalized permanent delta or gpd. This is defined, like the gKd determinant of a matrix, except that we take all positive signs.
We use the kernel letter π to denote a gpd and double vertical lines for the permanent of the defining matrix. The gKd and gpd are completely complementary to each other and are defined, for p (≤ n) distinct indices, respectively by

$$\delta^{i_1 \ldots i_p}_{j_1 \ldots j_p} = \begin{vmatrix} \delta^{i_1}_{j_1} & \cdots & \delta^{i_1}_{j_p} \\ \vdots & & \vdots \\ \delta^{i_p}_{j_1} & \cdots & \delta^{i_p}_{j_p} \end{vmatrix}, \qquad \pi^{i_1 \ldots i_p}_{j_1 \ldots j_p} = \begin{Vmatrix} \delta^{i_1}_{j_1} & \cdots & \delta^{i_1}_{j_p} \\ \vdots & & \vdots \\ \delta^{i_p}_{j_1} & \cdots & \delta^{i_p}_{j_p} \end{Vmatrix},$$

where the first is a determinant and the second a permanent. The gKd has value +1 (−1) depending on whether (j_1, ..., j_p) is an even (odd) permutation of (i_1, ..., i_p). The gpd has the permanent value +1 for any permutation of the index sets. Note that $\pi^i_j = \delta^i_j$. The gpd is a complete symmetrizer (permanent symmetrizer) and now one easily sees that $\pi^{abc}_{def} X_{abc} = 3!\, X_{(def)}$. In terms of bracketed and parenthesized notation, the gKd and gpd are

$$\delta^{j_1 \ldots j_p}_{i_1 \ldots i_p} X_{j_1 \ldots j_p} = p!\, X_{[i_1 \ldots i_p]}, \qquad \pi^{j_1 \ldots j_p}_{i_1 \ldots i_p} X_{j_1 \ldots j_p} = p!\, X_{(i_1 \ldots i_p)}.$$

A simple but important interaction between the gKd and gpd is that any expression containing them as a product, where there is a summation on a pair of indices between them, vanishes:

$$\delta^{\ldots a \ldots b \ldots}_{\ \cdots}\, \pi^{\ \cdots}_{\ldots a \ldots b \ldots} = 0,$$

since the gKd is antisymmetric, and the gpd symmetric, in the contracted pair. From this it also obviously follows that the product of any gKd and any gpd with two or more common contracted indices vanishes. If $A = [a^i_j]$ is an n × n matrix its determinant and permanent are

$$\det A = \frac{1}{n!}\, \delta^{i_1 \ldots i_n}_{j_1 \ldots j_n}\, a^{j_1}_{i_1} \cdots a^{j_n}_{i_n}, \qquad \operatorname{per} A = \frac{1}{n!}\, \pi^{i_1 \ldots i_n}_{j_1 \ldots j_n}\, a^{j_1}_{i_1} \cdots a^{j_n}_{i_n}.$$

The Riemann tensor $R_{abcd}$ and Lanczos tensor $L_{abc}$ are Young tableau tensors in their algebraic symmetries and obey the partial symmetries and antisymmetries as determined by the tableau representing them (Agacy¹). And so here the interplay of symbols for both permanent symmetries (π) and antisymmetries (δ) manifests itself in the one-line definition of each of the Riemann and Lanczos tensors in respect of their algebraic symmetries.

3 Generalized Kronecker δ

3.1 Single generalized Kronecker δ

In this section many of the derived formulae have been presented (but without derivation) long ago in Veblen². In the following definitions of numerical symbols the range of indices is from 1 to n.
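The complementary pair of definitions can be verified directly. The sketch below is our own numerical check (assuming NumPy; the names `gkd` and `gpd` are ours): it forms the p × p matrix of ordinary deltas, takes its determinant and permanent, and confirms the symmetrizer identity $\pi^{abc}_{def} X_{abc} = 3!\, X_{(def)}$ together with the vanishing of a gKd·gpd product contracted on two indices.

```python
from itertools import permutations, product
import numpy as np

def kron(i, j):
    return 1 if i == j else 0

def gkd(up, dn):
    # determinant of the p x p matrix [delta^{i_r}_{j_s}]
    m = [[kron(i, j) for j in dn] for i in up]
    return round(np.linalg.det(np.array(m, dtype=float)))

def gpd(up, dn):
    # permanent of the same matrix: every term taken with a + sign
    p = len(up)
    return sum(int(np.prod([kron(up[r], dn[s[r]]) for r in range(p)]))
               for s in permutations(range(p)))

n = 3
idx = range(1, n + 1)

# pi^i_j = delta^i_j for a single index pair
assert all(gpd((i,), (j,)) == kron(i, j) for i in idx for j in idx)

# pi^{abc}_{def} X_{abc} = 3! X_{(def)}  (complete symmetrizer)
rng = np.random.default_rng(0)
X = rng.random((n, n, n))
for d, e, f in product(idx, repeat=3):
    lhs = sum(gpd((a, b, c), (d, e, f)) * X[a - 1, b - 1, c - 1]
              for a, b, c in product(idx, repeat=3))
    sym = sum(X[p[0] - 1, p[1] - 1, p[2] - 1]
              for p in permutations((d, e, f)))  # this is 3! X_{(def)}
    assert abs(lhs - sym) < 1e-12

# a gKd and a gpd contracted on two common indices give zero
for c, d, e, f in product(idx, repeat=4):
    s = sum(gkd((a, b), (c, d)) * gpd((e, f), (a, b)) for a in idx for b in idx)
    assert s == 0
print("gKd/gpd identities verified")
```

The last loop is the antisymmetric-times-symmetric cancellation stated in the text: summing an index pair in which one factor changes sign and the other does not kills every term.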
In any set of p indices {i_1, ..., i_p}, it will be understood that for p ≤ n the indices are all distinct. The generalized Kronecker δ (gKd) may be defined by

$$\delta^{i_1 \ldots i_p}_{j_1 \ldots j_p} = \begin{vmatrix} \delta^{i_1}_{j_1} & \cdots & \delta^{i_1}_{j_p} \\ \vdots & & \vdots \\ \delta^{i_p}_{j_1} & \cdots & \delta^{i_p}_{j_p} \end{vmatrix}.$$

The interchange of any two superscripts (or subscripts) interchanges the corresponding rows (or columns) of the determinant thus changing its sign. Additionally if any particular superscript (or subscript) is not contained in the set of subscripts (or superscripts) then that particular row (or column) consists of zeros, the determinant vanishing. Hence only if the sets of superscripts and subscripts are permutations of each other does one obtain a non-zero result. The gKd can then equally well be described as

$$\delta^{i_1 \ldots i_p}_{j_1 \ldots j_p} = \begin{cases} +1 & \text{if } i_1, \ldots, i_p \text{ is an even permutation of } j_1, \ldots, j_p \\ -1 & \text{if } i_1, \ldots, i_p \text{ is an odd permutation of } j_1, \ldots, j_p \\ \ \ 0 & \text{if } i_1, \ldots, i_p \text{ is not a permutation of } j_1, \ldots, j_p. \end{cases}$$

Unless specifically stated or obvious from the context we will in future take it to be the case that the sets of superscripted and subscripted indices are permutations of each other. Note that the indices i_r, j_s range from 1 to n but that there are p (≤ n) indices. When p = n (= dim V), and only then, we will also write

$$\varepsilon_{i_1 \ldots i_n} = \delta^{1 \ldots n}_{i_1 \ldots i_n} \quad \text{and} \quad \varepsilon^{i_1 \ldots i_n} = \delta^{i_1 \ldots i_n}_{1 \ldots n}$$

and refer to ε as the alternating or permutation symbol (and in General Relativity (GR) the Levi-Civita symbol as well). It takes the value +1 when the set {i_1, ..., i_n} is an even permutation of {1, ..., n} and −1 when {i_1, ..., i_n} is an odd permutation of {1, ..., n}. The gKd is a tensor (transforms like a tensor) whereas $\varepsilon_{i_1 \ldots i_n}$ and $\varepsilon^{i_1 \ldots i_n}$ are not. We modify Einstein's summation convention by indicating fixed, repeated indices by bold type so that there is no summation over these indices, which will occur as matching superscripts and subscripts. This notation makes clearer the derivation of most of the following formulae. Unbolding or unfixing means summations go ahead. In considering a product of the two gKd's

$$\delta^{i_1 \ldots i_p}_{\mathbf{k}_1 \ldots \mathbf{k}_p}\, \delta^{\mathbf{k}_1 \ldots \mathbf{k}_p}_{j_1 \ldots j_p}$$

there is no summation over the k's.
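The equivalence of the determinant definition and the three-case rule is easy to confirm numerically. This is our own sketch (assuming NumPy; `gkd` and `parity` are hypothetical helper names): the determinant reproduces +1/−1/0 according to the parity of the index permutation, and for p = n it yields the Levi-Civita symbol.

```python
from itertools import permutations
import numpy as np

def gkd(up, dn):
    # determinant definition of the generalized Kronecker delta
    m = [[1 if i == j else 0 for j in dn] for i in up]
    return round(np.linalg.det(np.array(m, dtype=float)))

def parity(seq, ref):
    # sign of the permutation carrying ref into seq (0 if not a permutation)
    if sorted(seq) != sorted(ref):
        return 0
    seq, sign = list(seq), 1
    for k in range(len(ref)):
        if seq[k] != ref[k]:
            j = seq.index(ref[k], k)
            seq[k], seq[j] = seq[j], seq[k]
            sign = -sign
    return sign

# the determinant reproduces the case rule: +1 / -1 / 0
for up in permutations((1, 2, 3)):
    assert gkd(up, (1, 2, 3)) == parity(up, (1, 2, 3))
assert gkd((1, 1, 2), (1, 2, 3)) == 0   # repeated superscript: a zero row pattern
assert gkd((1, 2, 4), (1, 2, 3)) == 0   # not a permutation of the subscripts

# for p = n the gKd is the alternating (Levi-Civita) symbol
eps = {up: gkd(up, (1, 2, 3)) for up in permutations((1, 2, 3))}
assert eps[(2, 3, 1)] == 1 and eps[(3, 2, 1)] == -1
print("case rule verified")
```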
For brevity we sometimes denote a set of summed, repeated indices {k_1, ..., k_p} by {k} and a set of fixed, repeated indices by {**k**}. Clearly for a non-zero result above the sets {i}, {j} and {k} must all be permutations of each other. As the set {k} is a fixed permutation of the set {i} and the set {j} a permutation of {k}, the resultant value of the product is just the value of a generalized Kronecker δ as the product of those permutations. Our permutation multiplication convention is from right to left:

$$\begin{pmatrix} k_1 \ldots k_p \\ j_1 \ldots j_p \end{pmatrix} \begin{pmatrix} i_1 \ldots i_p \\ k_1 \ldots k_p \end{pmatrix} = \begin{pmatrix} i_1 \ldots i_p \\ j_1 \ldots j_p \end{pmatrix}.$$

Thus

$$\delta^{i_1 \ldots i_p}_{\mathbf{k}_1 \ldots \mathbf{k}_p}\, \delta^{\mathbf{k}_1 \ldots \mathbf{k}_p}_{j_1 \ldots j_p} = \delta^{i_1 \ldots i_p}_{j_1 \ldots j_p}.$$

In particular we have a result used often:

Formula k1.
$$\delta^{i_1 \ldots i_n}_{j_1 \ldots j_n} = \delta^{i_1 \ldots i_n}_{1 \ldots n}\, \delta^{1 \ldots n}_{j_1 \ldots j_n} = \varepsilon^{i_1 \ldots i_n}\, \varepsilon_{j_1 \ldots j_n}.$$

Formula k2.
$$\delta^{i_1 \ldots i_{p-1} i_p}_{j_1 \ldots j_{p-1} j_p} = \sum_{s=1}^{p} (-1)^{p+s}\, \delta^{i_s}_{j_p}\, \delta^{i_1 \ldots \hat{i}_s \ldots i_p}_{j_1 \ldots j_{p-1}},$$

where a caret ( ^ ) over an index indicates its omission from the list. Imagine the gKd as a p × p matrix with the indices i_1, ..., i_p signifying rows and the indices j_1, ..., j_p signifying columns. Expand the gKd by the last column j_p. Note that in the determinantal expansion, the sign associated with the element in row 1 and column p is (−1)^{p+1}, and so on. One then has

$$\delta^{i_1 \ldots i_p}_{j_1 \ldots j_p} = (-1)^{p+1}\, \delta^{i_1}_{j_p}\, \delta^{i_2 \ldots i_p}_{j_1 \ldots j_{p-1}} + (-1)^{p+2}\, \delta^{i_2}_{j_p}\, \delta^{i_1 i_3 \ldots i_p}_{j_1 \ldots j_{p-1}} + \cdots + (-1)^{p+p}\, \delta^{i_p}_{j_p}\, \delta^{i_1 \ldots i_{p-1}}_{j_1 \ldots j_{p-1}}$$

and so we have the result

$$\delta^{i_1 \ldots i_{p-1} i_p}_{j_1 \ldots j_{p-1} j_p} = \sum_{s=1}^{p} (-1)^{p+s}\, \delta^{i_s}_{j_p}\, \delta^{i_1 \ldots \hat{i}_s \ldots i_p}_{j_1 \ldots j_{p-1}}.$$

Now consider the expansion of the gKd $\delta^{i_1 \ldots i_p\, i_{p+1}}_{j_1 \ldots j_p\, j_{p+1}}$ by its last ((p+1)th) row (see Formula k2).
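Formulae k1 and k2 can both be tested against the determinant definition. The sketch below is our own check in three dimensions (assuming NumPy; `gkd`, `eps` and `gkd_expanded` are our names): k1 factorizes the n-index gKd into two alternating symbols, and k2 reproduces the Laplace expansion by the last column.

```python
from itertools import permutations
import numpy as np

def gkd(up, dn):
    # determinant definition of the generalized Kronecker delta
    m = [[1 if i == j else 0 for j in dn] for i in up]
    return round(np.linalg.det(np.array(m, dtype=float)))

n = 3
eps = lambda seq: gkd(seq, tuple(range(1, n + 1)))  # alternating symbol (p = n)

# Formula k1: delta^{i...}_{j...} = eps^{i...} eps_{j...} when there are n indices
for up in permutations(range(1, n + 1)):
    for dn in permutations(range(1, n + 1)):
        assert gkd(up, dn) == eps(up) * eps(dn)

# Formula k2: expansion of the gKd by its last column j_p
def gkd_expanded(up, dn):
    p = len(up)
    total = 0
    for s in range(p):  # s runs 1..p in the text; 0-based here, hence the +1
        hat_up = up[:s] + up[s + 1:]   # omit i_s (the caret in the formula)
        total += ((-1) ** (p + s + 1)
                  * gkd((up[s],), (dn[-1],))     # delta^{i_s}_{j_p}
                  * gkd(hat_up, dn[:-1]))        # gKd on the remaining indices
    return total

for up in permutations((1, 2, 3)):
    for dn in permutations((1, 2, 3)):
        assert gkd(up, dn) == gkd_expanded(up, dn)
print("formulae k1 and k2 verified")
```

Both checks run over every pair of index permutations, so they exercise all the sign cases of the two formulae at once.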