Introduction to Tensor Calculus

Kees Dullemond & Kasper Peeters
© 1991-2010

This booklet contains an explanation about tensor calculus for students of physics and engineering with a basic knowledge of linear algebra. The focus lies mainly on acquiring an understanding of the principles and ideas underlying the concept of 'tensor'. We have not pursued mathematical strictness and pureness, but instead emphasise practical use (for a more mathematically pure résumé, please see the bibliography). Although tensors are applied in a very broad range of physics and mathematics, this booklet focuses on the application in special and general relativity.

We are indebted to all people who read earlier versions of this manuscript and gave useful comments, in particular G. Bäuerle (University of Amsterdam) and C. Dullemond Sr. (University of Nijmegen).

The original version of this booklet, in Dutch, appeared on October 28th, 1991. A major update followed on September 26th, 1995. This version is a re-typeset English translation made in 2008/2010.

Copyright © 1991-2010 Kees Dullemond & Kasper Peeters.

Contents

1 The index notation
2 Bases, co- and contravariant vectors
   2.1 Intuitive approach
   2.2 Mathematical approach
3 Introduction to tensors
   3.1 The new inner product and the first tensor
   3.2 Creating tensors from vectors
4 Tensors, definitions and properties
   4.1 Definition of a tensor
   4.2 Symmetry and antisymmetry
   4.3 Contraction of indices
   4.4 Tensors as geometrical objects
   4.5 Tensors as operators
5 The metric tensor and the new inner product
   5.1 The metric as a measuring rod
   5.2 Properties of the metric tensor
   5.3 Co versus contra
6 Tensor calculus
   6.1 The 'covariance' of equations
   6.2 Addition of tensors
   6.3 Tensor products
   6.4 First order derivatives: non-covariant version
   6.5 Rot, cross-products and the permutation symbol
7 Covariant derivatives
   7.1 Vectors in curved coordinates
   7.2 The covariant derivative of a vector/tensor field
A Tensors in special relativity
B Geometrical representation
C Exercises
   C.1 Index notation
   C.2 Co-vectors
   C.3 Introduction to tensors
   C.4 Tensors, general
   C.5 Metric tensor
   C.6 Tensor calculus

1 The index notation

Before we start with the main topic of this booklet, tensors, we will first introduce a new notation for vectors and matrices, and their algebraic manipulations: the index notation. It will prove to be much more powerful than the standard vector notation. To clarify this we will translate all well-known vector and matrix manipulations (addition, multiplication and so on) to index notation.

Let us take a manifold (= space) with dimension $n$. We will denote the components of a vector $\vec{v}$ with the numbers $v_1, \dots, v_n$. If one modifies the vector basis in which the components $v_1, \dots, v_n$ of vector $\vec{v}$ are expressed, then these components will change, too. Such a transformation can be written using a matrix $A$, of which the columns can be regarded as the old basis vectors $\vec{e}_1, \dots, \vec{e}_n$ expressed in the new basis $\vec{e}_1{}', \dots, \vec{e}_n{}'$:

$$\begin{pmatrix} v_1' \\ \vdots \\ v_n' \end{pmatrix} = \begin{pmatrix} A_{11} & \cdots & A_{1n} \\ \vdots & & \vdots \\ A_{n1} & \cdots & A_{nn} \end{pmatrix} \begin{pmatrix} v_1 \\ \vdots \\ v_n \end{pmatrix} . \qquad (1.1)$$

Note that the first index of $A$ denotes the row and the second index the column. In the next chapter we will say more about the transformation of vectors. According to the rules of matrix multiplication the above equation means:

$$\begin{aligned} v_1' &= A_{11} v_1 + A_{12} v_2 + \cdots + A_{1n} v_n ,\\ &\;\;\vdots\\ v_n' &= A_{n1} v_1 + A_{n2} v_2 + \cdots + A_{nn} v_n , \end{aligned} \qquad (1.2)$$

or equivalently,

$$\begin{aligned} v_1' &= \sum_{\nu=1}^{n} A_{1\nu} v_\nu ,\\ &\;\;\vdots\\ v_n' &= \sum_{\nu=1}^{n} A_{n\nu} v_\nu , \end{aligned} \qquad (1.3)$$

or even shorter,

$$v_\mu' = \sum_{\nu=1}^{n} A_{\mu\nu} v_\nu \qquad (\forall \mu \in \mathbb{N} \mid 1 \le \mu \le n) . \qquad (1.4)$$

In this formula we have put the essence of matrix multiplication. The index $\nu$ is a dummy index and $\mu$ is a running index. The names of these indices, in this case $\mu$ and $\nu$, are chosen arbitrarily. They could equally well have been called $\alpha$ and $\beta$:

$$v_\alpha' = \sum_{\beta=1}^{n} A_{\alpha\beta} v_\beta \qquad (\forall \alpha \in \mathbb{N} \mid 1 \le \alpha \le n) . \qquad (1.5)$$

Usually the conditions for $\mu$ (in Eq. 1.4) or $\alpha$ (in Eq. 1.5) are not explicitly stated because they are obvious from the context. The following statements are therefore equivalent:

$$\vec{v} = \vec{y} \;\Leftrightarrow\; v_\mu = y_\mu \;\Leftrightarrow\; v_\alpha = y_\alpha , \qquad \vec{v} = A\vec{y} \;\Leftrightarrow\; v_\mu = \sum_{\nu=1}^{n} A_{\mu\nu} y_\nu \;\Leftrightarrow\; v_\nu = \sum_{\mu=1}^{n} A_{\nu\mu} y_\mu . \qquad (1.6)$$

This index notation is also applicable to other manipulations, for instance the inner product. Take two vectors $\vec{v}$ and $\vec{w}$, then we define the inner product as

$$\vec{v} \cdot \vec{w} := v_1 w_1 + \cdots + v_n w_n = \sum_{\mu=1}^{n} v_\mu w_\mu . \qquad (1.7)$$

(We will return extensively to the inner product. Here it is just an example of the power of the index notation.)
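To see the running and dummy indices at work, here is a minimal sketch in Python (not part of the original booklet; the arrays A, v and w are made-up examples) that writes out Eqs. (1.4) and (1.7) as explicit loops:

```python
import numpy as np

n = 3
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [4.0, 0.0, 1.0]])
v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])

# Eq. (1.4): v'_mu = sum_nu A_{mu nu} v_nu, written as explicit loops.
v_prime = np.zeros(n)
for mu in range(n):          # running index
    for nu in range(n):      # dummy (summation) index
        v_prime[mu] += A[mu, nu] * v[nu]

# Eq. (1.7): the inner product v . w = sum_mu v_mu w_mu.
inner = 0.0
for mu in range(n):
    inner += v[mu] * w[mu]

assert np.allclose(v_prime, A @ v)   # agrees with matrix multiplication
assert np.isclose(inner, v @ w)      # agrees with the built-in dot product
```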
In addition to this type of manipulations, one can also just take the sum of matrices and of vectors:

$$C = A + B \;\Leftrightarrow\; C_{\mu\nu} = A_{\mu\nu} + B_{\mu\nu} , \qquad \vec{z} = \vec{v} + \vec{w} \;\Leftrightarrow\; z_\alpha = v_\alpha + w_\alpha , \qquad (1.8)$$

or their difference,

$$C = A - B \;\Leftrightarrow\; C_{\mu\nu} = A_{\mu\nu} - B_{\mu\nu} , \qquad \vec{z} = \vec{v} - \vec{w} \;\Leftrightarrow\; z_\alpha = v_\alpha - w_\alpha . \qquad (1.9)$$

◮ Exercises 1 to 6 of Section C.1.

From the exercises it should have become clear that the summation symbols $\sum$ can always be put at the start of the formula and that their order is irrelevant. We can therefore in principle omit these summation symbols, if we make clear in advance over which indices we perform a summation, for instance by putting them after the formula,

$$\sum_{\nu=1}^{n} A_{\mu\nu} v_\nu \;\to\; A_{\mu\nu} v_\nu \;\{\nu\} , \qquad \sum_{\beta=1}^{n} \sum_{\gamma=1}^{n} A_{\alpha\beta} B_{\beta\gamma} C_{\gamma\delta} \;\to\; A_{\alpha\beta} B_{\beta\gamma} C_{\gamma\delta} \;\{\beta, \gamma\} . \qquad (1.10)$$

From the exercises one can already suspect that

• almost never is a summation performed over an index if that index only appears once in a product,
• almost always a summation is performed over an index that appears twice in a product,
• an index almost never appears more than twice in a product.

When one uses index notation in everyday routine, it will soon become irritating to denote explicitly over which indices the summation is performed. From experience (see the three points above) one knows over which indices the summations are performed, so one will soon have the idea to introduce the convention that, unless explicitly stated otherwise:

• a summation is assumed over all indices that appear twice in a product, and
• no summation is assumed over indices that appear only once.

From now on we will write all our formulae in index notation with this particular convention, which is called the Einstein summation convention.
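As an aside not in the original booklet: the summation convention maps directly onto NumPy's einsum, which sums automatically over every index that appears twice in the subscript string. A minimal sketch (the array names are our own illustrations):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((4, 4))
B = rng.random((4, 4))
C = rng.random((4, 4))
v = rng.random(4)

# A_{mu nu} v_nu : the index nu appears twice, so it is summed over;
# mu appears once and stays as a free (running) index.
u = np.einsum('mn,n->m', A, v)

# A_{alpha beta} B_{beta gamma} C_{gamma delta} : beta and gamma are
# dummy indices (each appears twice); alpha and delta remain free.
D = np.einsum('ab,bg,gd->ad', A, B, C)

assert np.allclose(u, A @ v)          # same as the matrix-vector product
assert np.allclose(D, A @ B @ C)      # same as the chained matrix product
```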
For a more detailed look at index notation with the summation convention we refer to [4]. We will thus from now on rewrite

$$\sum_{\nu=1}^{n} A_{\mu\nu} v_\nu \;\to\; A_{\mu\nu} v_\nu , \qquad \sum_{\beta=1}^{n} \sum_{\gamma=1}^{n} A_{\alpha\beta} B_{\beta\gamma} C_{\gamma\delta} \;\to\; A_{\alpha\beta} B_{\beta\gamma} C_{\gamma\delta} . \qquad (1.11)$$

◮ Exercises 7 to 10 of Section C.1.

2 Bases, co- and contravariant vectors

In this chapter we introduce a new kind of vector ('covector'), one that will be essential for the rest of this booklet. To get used to this new concept we will first show in an intuitive way how one can imagine this new kind of vector. After that we will follow a more mathematical approach.

2.1 Intuitive approach

We can map the space around us using a coordinate system. Let us assume that we use a linear coordinate system, so that we can use linear algebra to describe it. Physical objects (represented, for example, with an arrow-vector) can then be described in terms of the basis-vectors belonging to the coordinate system (there are some hidden difficulties here, but we will ignore these for the moment). In this section we will see what happens when we choose another set of basis vectors, i.e. what happens upon a basis transformation.

In a description with coordinates we must be fully aware that the coordinates (i.e. the numbers) themselves have no meaning. Only with the corresponding basis vectors (which span up the coordinate system) do these numbers acquire meaning. It is important to realize that the object one describes is independent of the coordinate system (i.e. set of basis vectors) one chooses. Or in other words: an arrow does not change meaning when described in another coordinate system.

Let us write down such a basis transformation,

$$\begin{aligned} \vec{e}_1{}' &= a_{11} \vec{e}_1 + a_{12} \vec{e}_2 ,\\ \vec{e}_2{}' &= a_{21} \vec{e}_1 + a_{22} \vec{e}_2 . \end{aligned} \qquad (2.1)$$

This could be regarded as a kind of multiplication of a 'vector' with a matrix, as long as we take for the components of this 'vector' the basis vectors. If we describe the matrix elements with words, one would get something like:

$$\begin{pmatrix} \vec{e}_1{}' \\ \vec{e}_2{}' \end{pmatrix} = \begin{pmatrix} \text{projection of } \vec{e}_1{}' \text{ onto } \vec{e}_1 & \text{projection of } \vec{e}_1{}' \text{ onto } \vec{e}_2 \\ \text{projection of } \vec{e}_2{}' \text{ onto } \vec{e}_1 & \text{projection of } \vec{e}_2{}' \text{ onto } \vec{e}_2 \end{pmatrix} \begin{pmatrix} \vec{e}_1 \\ \vec{e}_2 \end{pmatrix} . \qquad (2.2)$$

Note that the basis vector-columns are not vectors, but just a very useful way to write things down. We can also look at what happens with the components of a vector if we use a different set of basis vectors. From linear algebra we know that the transformation

[Figure 2.1: The behaviour of the transformation of the components of a vector under the transformation of a basis vector: $\vec{e}_1{}' = \tfrac{1}{2}\vec{e}_1 \;\Rightarrow\; v_1' = 2 v_1$. The same arrow has components $v = (0.8,\, 0.4)$ with respect to $(\vec{e}_1, \vec{e}_2)$ and $v = (1.6,\, 0.4)$ with respect to $(\vec{e}_1{}', \vec{e}_2{}')$.]
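The numbers in Figure 2.1 can be checked with a short computation (a sketch, not from the booklet): components transform with the inverse transpose of the basis-transformation matrix of Eq. (2.1), so halving $\vec{e}_1$ doubles $v_1$.

```python
import numpy as np

# Basis transformation as in Eq. (2.1): row mu of `a` gives the new basis
# vector e'_mu in terms of the old basis. Here e'_1 = (1/2) e_1 and
# e'_2 = e_2, the situation drawn in Figure 2.1.
a = np.array([[0.5, 0.0],
              [0.0, 1.0]])

v_old = np.array([0.8, 0.4])   # components of the arrow in the old basis

# From v_mu e_mu = v'_mu e'_mu and e'_mu = a_{mu nu} e_nu it follows that
# the components transform as v' = (a^T)^{-1} v.
v_new = np.linalg.solve(a.T, v_old)

print(v_new)                   # [1.6 0.4] -- halving e_1 doubles v_1
```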