Appendix A: Relations Between Covariant and Contravariant Bases
Recommended publications
Tensors and Differential Forms on Vector Spaces
APPENDIX A: TENSORS AND DIFFERENTIAL FORMS ON VECTOR SPACES. Since only so much of the vast and growing field of differential forms and differentiable manifolds will be actually used in this survey, we shall attempt to briefly review how the calculus of exterior differential forms on vector spaces can serve as a replacement for the more conventional vector calculus and then introduce only the most elementary notions regarding more topologically general differentiable manifolds, which will mostly be used as the basis for the discussion of Lie groups, in the following appendix. Since exterior differential forms are special kinds of tensor fields – namely, completely antisymmetric covariant ones – and tensors are important to physics in their own right, we shall first review the basic notions concerning tensors and multilinear algebra. Presumably, the reader is familiar with linear algebra as it is usually taught to physicists, but for the "basis-free" approach to linear and multilinear algebra (which we shall not always adhere to fanatically), it would also help to have some familiarity with the more "abstract-algebraic" approach to linear algebra, such as one might learn from Hoffman and Kunze [1], for instance. 1. Tensor algebra. – A tensor algebra is a type of algebra in which multiplication takes the form of the tensor product. a. Tensor product. – Although the tensor product of vector spaces can be given a rigorous definition in a more abstract-algebraic context (see Greub [2], for instance), for the purposes of actual calculations with tensors and tensor fields, it is usually sufficient to say that if V and W are vector spaces of dimensions n and m, respectively, then the tensor product V ⊗ W will be a vector space of dimension nm whose elements are finite linear combinations of elements of the form v ⊗ w, where v is a vector in V and w is a vector in W.
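A quick numerical illustration of the dimension count nm (a NumPy sketch added here, not part of the excerpted text): in coordinates, v ⊗ w is the outer product of the component arrays, and the nm outer products of basis vectors span the whole of V ⊗ W.

```python
import numpy as np

n, m = 3, 2                      # dim V = 3, dim W = 2
v = np.array([1.0, 2.0, 3.0])    # components of a vector v in V
w = np.array([4.0, 5.0])         # components of a vector w in W

# Coordinate form of v ⊗ w: the outer product of the component arrays,
# flattened into a vector with n*m entries.
vw = np.outer(v, w).ravel()
assert vw.size == n * m          # dim(V ⊗ W) = nm

# The nm outer products e_i ⊗ f_j of basis vectors are linearly independent,
# so they form a basis of V ⊗ W.
basis = [np.outer(np.eye(n)[:, i], np.eye(m)[:, j]).ravel()
         for i in range(n) for j in range(m)]
assert np.linalg.matrix_rank(np.stack(basis)) == n * m
```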
Tensors
For vector calculus. Review: vectors; summation representation of an n-by-n array; gradient, divergence and curl; spherical harmonics (maybe).
Motivation: If you tape a book shut and try to spin it in the air on each independent axis, you will notice that it spins fine on two axes but not on the third. That's the inertia tensor in your hands. Similar are the polarization tensor, index of refraction tensor and stress tensor. But tensors also show up in all sorts of places that don't connect to an anisotropic material property; in fact, even spherical harmonics are tensors. What are the similarities and differences between such a plethora of tensors? The mathematics of tensors is particularly useful for describing properties of substances which vary in direction. Tensor analysis extends deep into coordinate transformations of all kinds of spaces and coordinate systems.
Notation: contravariant, denoted by superscript A^i (took a vector and gave us a vector); covariant, denoted by subscript A_i (took a scalar and gave us a vector). To avoid confusion, in Cartesian coordinates both types are the same, so we just opt for the subscript; thus a vector x would be (x1, x2, x3) in R^3. As it turns out, in Cartesian space and other rectilinear coordinate systems there is no difference between contravariant and covariant vectors. This will not be the case for other coordinate systems, such as curvilinear coordinate systems, or in 4 dimensions. These definitions are closely related to the Jacobian.
Definitions for tensors of rank 2: Rank-2 tensors can be written as a square array. They have contravariant, mixed, and covariant forms. As we might expect, in Cartesian coordinates these are the same. Vector calculus and identities.
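To make the Jacobian connection concrete, here is a small NumPy sketch (an illustration added by the editor, not taken from the notes) contrasting how contravariant and covariant components of the same vector transform under a change to polar coordinates, and how the metric relates the two.

```python
import numpy as np

# Polar coordinates (r, phi) in the plane; x = r cos(phi), y = r sin(phi).
r, phi = 2.0, 0.7

# Jacobian J[i, a] = d x^i / d u^a of the map (r, phi) -> (x, y).
J = np.array([[np.cos(phi), -r * np.sin(phi)],
              [np.sin(phi),  r * np.cos(phi)]])

# A vector with Cartesian components (1, 1).
A_cart = np.array([1.0, 1.0])

# Contravariant components transform with the inverse Jacobian,
# covariant components with the transpose of the Jacobian.
A_contra = np.linalg.solve(J, A_cart)    # A'^a = (J^{-1})^a_i A^i
A_co     = J.T @ A_cart                  # A'_a = J^i_a A_i

# In curvilinear coordinates the two sets of components differ ...
g = J.T @ J                              # metric g_ab = J^i_a J^i_b
assert np.allclose(A_co, g @ A_contra)   # ... but are related by the metric.
```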
A.7 Orthogonal Curvilinear Coordinates
[…] by converting its components (but not the unit dyads) to spherical coordinates, and integrating each over the two spherical angles (see Section A.7). The off-diagonal terms in Eq. (A.6-13) vanish, again due to the symmetry.
A.7 ORTHOGONAL CURVILINEAR COORDINATES. Enormous simplifications are achieved in solving a partial differential equation if all boundaries in the problem correspond to coordinate surfaces, which are surfaces generated by holding one coordinate constant and varying the other two. Accordingly, many special coordinate systems have been devised to solve problems in particular geometries. The most useful of these systems are orthogonal; that is, at any point in space the vectors aligned with the three coordinate directions are mutually perpendicular. In general, the variation of a single coordinate will generate a curve in space, rather than a straight line; hence the term curvilinear. In this section a general discussion of orthogonal curvilinear systems is given first, and then the relationships for cylindrical and spherical coordinates are derived as special cases. The presentation here closely follows that in Hildebrand (1976).
Base Vectors. Let (u1, u2, u3) represent the three coordinates in a general, curvilinear system, and let e_i be the unit vector that points in the direction of increasing u_i. A curve produced by varying u_i, with u_j (j ≠ i) held constant, will be referred to as a "u_i curve." Although the base vectors are each of constant (unit) magnitude, the fact that a u_i curve is not generally a straight line means that their direction is variable.
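As a sketch of the base-vectors-along-u_i-curves construction (a SymPy example added here, using spherical coordinates as in the special case mentioned above): the tangent to each coordinate curve yields a scale factor and a unit base vector whose direction depends on position.

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)

# Position vector in Cartesian components, parametrized by spherical coordinates.
x = sp.Matrix([r * sp.sin(th) * sp.cos(ph),
               r * sp.sin(th) * sp.sin(ph),
               r * sp.cos(th)])

for u in (r, th, ph):
    tangent = x.diff(u)                  # tangent vector along the u curve
    h = sp.simplify(tangent.norm())      # scale factor h_u = |dx/du|
    e_u = sp.simplify(tangent / h)       # unit base vector e_u (direction varies in space)
    print(u, h)                          # prints 1, r, r*|sin(theta)| respectively
```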
21. Orthonormal Bases
21. Orthonormal Bases. The canonical/standard basis e_1 = (1, 0, ..., 0)^T, e_2 = (0, 1, ..., 0)^T, ..., e_n = (0, 0, ..., 1)^T has many useful properties.
• Each of the standard basis vectors has unit length: ||e_i|| = sqrt(e_i · e_i) = sqrt(e_i^T e_i) = 1.
• The standard basis vectors are orthogonal (in other words, at right angles or perpendicular): e_i · e_j = e_i^T e_j = 0 when i ≠ j.
This is summarized by e_i^T e_j = δ_ij, where δ_ij is the Kronecker delta: δ_ij = 1 if i = j and 0 if i ≠ j. Notice that the Kronecker delta gives the entries of the identity matrix. Given column vectors v and w, we have seen that the dot product v · w is the same as the matrix multiplication v^T w. This is the inner product on R^n. We can also form the outer product v w^T, which gives a square matrix. The outer product on the standard basis vectors is interesting. Set Π_1 = e_1 e_1^T (the matrix with a 1 in the (1,1) entry and zeros everywhere else), ..., Π_n = e_n e_n^T (the matrix with a 1 in the (n,n) entry and zeros everywhere else). In short, Π_i is the diagonal square matrix with a 1 in the ith diagonal position and zeros everywhere else.
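A short NumPy check of the two bulleted properties and of the projectors Π_i (an added sketch, not part of the lecture):

```python
import numpy as np

n = 4
E = np.eye(n)                            # column i is the standard basis vector e_i

# e_i^T e_j = delta_ij: the matrix of all such inner products is the identity.
assert np.allclose(E.T @ E, np.eye(n))

# Outer products Pi_i = e_i e_i^T pick out the ith diagonal position ...
Pi = [np.outer(E[:, i], E[:, i]) for i in range(n)]
assert np.allclose(Pi[0], np.diag([1, 0, 0, 0]))

# ... and together they sum to the identity matrix.
assert np.allclose(sum(Pi), np.eye(n))
```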
Abstract Tensor Systems and Diagrammatic Representations
Abstract tensor systems and diagrammatic representations. Jānis Lazovskis, September 28, 2012.
Abstract: The diagrammatic tensor calculus used by Roger Penrose (most notably in [7]) is introduced without a solid mathematical grounding. We will attempt to derive the tools of such a system, but in a broader setting. We show that Penrose's work comes from the diagrammisation of the symmetric algebra. Lie algebra representations and their extensions to knot theory are also discussed.
Contents: 1 Abstract tensors and derived structures (1.1 Abstract tensor notation; 1.2 Some basic operations; 1.3 Tensor diagrams); 2 A diagrammised abstract tensor system (2.1 Generation; 2.2 Tensor concepts); 3 Representations of algebras (3.1 The symmetric algebra; 3.2 Lie algebras; 3.3 The tensor algebra T(g); 3.4 The symmetric Lie algebra S(g); 3.5 The universal enveloping algebra U(g); 3.6 The metrized Lie algebra; 3.6.1 Diagrammisation with a bilinear form; 3.6.2 Diagrammisation with a symmetric bilinear form; 3.6.3 Diagrammisation with a symmetric bilinear form and an orthonormal basis; 3.6.4 Diagrammisation under ad-invariance; 3.7 The universal enveloping algebra U(g) for a metrized Lie algebra g); 4 Ensuing connections; A Appendix.
Note: This work relies heavily upon the text of Chapter 12 of a draft of "An Introduction to Quantum and Vassiliev Invariants of Knots," by David M.R. Jackson and Iain Moffatt, a yet-unpublished book at the time of writing.
Tensor Manipulation in GPL Maxima
Tensor Manipulation in GPL Maxima. Viktor Toth, http://www.vttoth.com/, February 1, 2008.
Abstract: GPL Maxima is an open-source computer algebra system based on DOE-MACSYMA. GPL Maxima included two tensor manipulation packages from DOE-MACSYMA, but these were in various states of disrepair. One of the two packages, CTENSOR, implemented component-based tensor manipulation; the other, ITENSOR, treated tensor symbols as opaque, manipulating them based on their index properties. The present paper describes the state in which these packages were found, the steps that were needed to make the packages fully functional again, and the new functionality that was implemented to make them more versatile. A third package, ATENSOR, was also implemented; fully compatible with the identically named package in the commercial version of MACSYMA, ATENSOR implements abstract tensor algebras.
1 Introduction. GPL Maxima (GPL stands for the GNU Public License, the most widely used open source license construct) is the descendant of one of the world's first comprehensive computer algebra systems (CAS), DOE-MACSYMA, developed by the United States Department of Energy in the 1960s and the 1970s. It is currently maintained by 18 volunteer developers, and can be obtained in source or object code form from http://maxima.sourceforge.net/. Like other computer algebra systems, Maxima has tensor manipulation capability. This capability was developed in the late 1970s. Documentation is scarce regarding these packages' origins, but a select collection of e-mail messages by various authors survives, dating back to 1979-1982, when these packages were actively maintained at M.I.T. When this author first came across GPL Maxima, the tensor packages were effectively non-functional.
Multilinear Algebra and Applications July 15, 2014
Multilinear Algebra and Applications, July 15, 2014.
Contents: Chapter 1. Introduction. Chapter 2. Review of Linear Algebra (2.1 Vector Spaces and Subspaces; 2.2 Bases; 2.3 The Einstein convention; 2.3.1 Change of bases, revisited; 2.3.2 The Kronecker delta symbol; 2.4 Linear Transformations; 2.4.1 Similar matrices; 2.5 Eigenbases). Chapter 3. Multilinear Forms (3.1 Linear Forms; 3.1.1 Definition, Examples, Dual and Dual Basis; 3.1.2 Transformation of Linear Forms under a Change of Basis; 3.2 Bilinear Forms; 3.2.1 Definition, Examples and Basis; 3.2.2 Tensor product of two linear forms on V; 3.2.3 Transformation of Bilinear Forms under a Change of Basis; 3.3 Multilinear forms; 3.4 Examples; 3.4.1 A Bilinear Form; 3.4.2 A Trilinear Form; 3.5 Basic Operation on Multilinear Forms). Chapter 4. Inner Products (4.1 Definitions and First Properties; 4.1.1 Correspondence Between Inner Products and Symmetric Positive Definite Matrices; 4.1.1.1 From Inner Products to Symmetric Positive Definite Matrices; 4.1.1.2 From Symmetric Positive Definite Matrices to Inner Products; 4.1.2 Orthonormal Basis; 4.2 Reciprocal Basis; 4.2.1 Properties of Reciprocal Bases; 4.2.2 Change of basis from a basis to its reciprocal basis; 4.2.3 […]).
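Since several of the listed sections (the Einstein convention, the Kronecker delta, reciprocal bases) bear directly on the relations between covariant and contravariant bases, here is a minimal NumPy sketch of the reciprocal-basis construction; the basis and the vector below are illustrative values, not taken from the notes.

```python
import numpy as np

# Columns of B are a (non-orthogonal) basis b_1, b_2, b_3 of R^3.
B = np.column_stack(([1.0, 0.0, 0.0], [1.0, 1.0, 0.0], [0.0, 1.0, 1.0]))

g = B.T @ B                              # Gram (metric) matrix g_ij = b_i . b_j
B_recip = B @ np.linalg.inv(g)           # columns are reciprocal vectors b^i = g^{ij} b_j

# Defining property of the reciprocal basis: b^i . b_j = delta^i_j.
assert np.allclose(B_recip.T @ B, np.eye(3))

# Contravariant components v^i (v = v^i b_i) and covariant components v_i = v . b_i.
v = np.array([2.0, -1.0, 3.0])
v_contra = np.linalg.solve(B, v)
v_co = B.T @ v
assert np.allclose(v_co, g @ v_contra)   # indices are lowered by the metric g_ij
```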
Geometric-Algebra Adaptive Filters
Geometric-Algebra Adaptive Filters. Wilder B. Lopes, Member, IEEE; Cassio G. Lopes, Senior Member, IEEE.
Abstract—This paper presents a new class of adaptive filters, namely Geometric-Algebra Adaptive Filters (GAAFs). They are generated by formulating the underlying minimization problem (a deterministic cost function) from the perspective of Geometric Algebra (GA), a comprehensive mathematical language well-suited for the description of geometric transformations. Also, differently from standard adaptive-filtering theory, Geometric Calculus (the extension of GA to differential calculus) allows for applying the same derivation techniques regardless of the type (subalgebra) of the data, i.e., real, complex numbers, quaternions, etc. Relying on those characteristics (among others), a deterministic quadratic cost function is posed, from which the GAAFs are devised, providing a generalization of regular adaptive filters to subalgebras of GA. From the obtained update rule, it is shown how to recover the following least-mean squares (LMS) adaptive filter variants: real-entries LMS, complex LMS, and quaternions LMS. Mean-square analysis and simulations in a system identification scenario are provided, showing very good agreement for different levels of measurement noise.
Index Terms—Adaptive filtering, geometric algebra, quaternions.
[Fig. 1: A polyhedron (3-dimensional polytope) can be completely described by the geometric multiplication of its edges (oriented lines, vectors), which generate the faces and hypersurfaces (in the case of a general n-dimensional polytope).]
[…] perform calculus with hypercomplex quantities, i.e., elements that generalize complex numbers for higher dimensions [2]–[10]. GA-based AFs were first introduced in [11], [12], where they were successfully employed to estimate the geometric transformation (rotation and translation) that aligns a pair of […]
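For readers unfamiliar with the baseline that the paper generalizes, here is a minimal real-entries LMS sketch in a system-identification setting (standard textbook LMS, not the GAAF update rule derived in the paper; the filter length, step size and noise level are made-up values):

```python
import numpy as np

rng = np.random.default_rng(0)

# System identification: adapt w so that w @ x tracks d = w_true @ x + noise.
M = 8                                     # filter length (illustrative value)
w_true = rng.standard_normal(M)           # unknown system to be identified
w = np.zeros(M)                           # adaptive filter weights
mu = 0.01                                 # step size

for _ in range(5000):
    x = rng.standard_normal(M)                        # regressor (input) vector
    d = w_true @ x + 0.01 * rng.standard_normal()     # desired signal plus noise
    e = d - w @ x                                     # a priori estimation error
    w += mu * e * x                                   # real-entries LMS update

print(np.linalg.norm(w - w_true))                     # small residual after adaptation
```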
Matrices and Tensors
APPENDIX: MATRICES AND TENSORS. A.1. INTRODUCTION AND RATIONALE. The purpose of this appendix is to present the notation and most of the mathematical techniques that are used in the body of the text. The audience is assumed to have been through several years of college-level mathematics, which included the differential and integral calculus, differential equations, functions of several variables, partial derivatives, and an introduction to linear algebra. Matrices are reviewed briefly, and determinants, vectors, and tensors of order two are described. The application of this linear algebra to material that appears in undergraduate engineering courses on mechanics is illustrated by discussions of concepts like the area and mass moments of inertia, Mohr's circles, and the vector cross and triple scalar products. The notation, as far as possible, will be a matrix notation that is easily entered into existing symbolic computational programs like Maple, Mathematica, Matlab, and Mathcad. The desire to represent the components of three-dimensional fourth-order tensors that appear in anisotropic elasticity as the components of six-dimensional second-order tensors and thus represent these components in matrices of tensor components in six dimensions leads to the nontraditional part of this appendix. This is also one of the nontraditional aspects in the text of the book, but a minor one. This is described in §A.11, along with the rationale for this approach. A.2. DEFINITION OF SQUARE, COLUMN, AND ROW MATRICES. An r-by-c matrix, M, is a rectangular array of numbers consisting of r rows and c columns: M = [M_11 M_12 … M_1c; M_21 M_22 … M_2c; …; M_r1 M_r2 … M_rc].
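To illustrate the idea of recasting a three-dimensional fourth-order tensor as a 6-by-6 array of components, here is a hedged NumPy sketch using one common ordering of the symmetric index pairs (11, 22, 33, 23, 13, 12); the specific convention and scaling adopted in the book's §A.11 may differ.

```python
import numpy as np

# One common pairing of a symmetric index pair (i, j) with a single index 0..5.
PAIRS = [(0, 0), (1, 1), (2, 2), (1, 2), (0, 2), (0, 1)]

def fourth_order_to_6x6(C):
    """Collect components C_ijkl (with the usual elasticity symmetries) into a 6x6 array."""
    M = np.zeros((6, 6))
    for a, (i, j) in enumerate(PAIRS):
        for b, (k, l) in enumerate(PAIRS):
            M[a, b] = C[i, j, k, l]
    return M

# Example: isotropic stiffness C_ijkl = lam*d_ij*d_kl + mu*(d_ik*d_jl + d_il*d_jk).
lam, mu = 1.0, 2.0
d = np.eye(3)
C = (lam * np.einsum('ij,kl->ijkl', d, d)
     + mu * (np.einsum('ik,jl->ijkl', d, d) + np.einsum('il,jk->ijkl', d, d)))
print(fourth_order_to_6x6(C))
```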
Area and Volume in Curvilinear Coordinates
Area and Volume in curvilinear coordinates. 1 Overview. There are several reasons why we need to have a way of measuring areas and volumes in general relativity. The gravitational field of a point mass m will be singular at the location of the mass, so we typically compute the field of a mass density, i.e., the mass contained in a unit volume. If we are dealing with the Gauss law of electrodynamics (and there will be a similar law for gravity), then we need to compute the flux through an area. How should areas and volumes be defined in general metrics? Let us start in flat 3-dimensional space with Cartesian coordinates. Consider two vectors V, W. These vectors describe the sides of a parallelogram, whose area can be written as A = V × W (1). The cross product has magnitude |A| = |V| |W| sin θ (2) and is seen to equal the area of the parallelogram. The direction of the cross product also serves a useful purpose: it gives the normal direction to the area spanned by the vectors. Note that there are two normal directions to the area – pointing above the plane and below the plane – and the 'right hand rule' for cross products is a convention that picks out one of these over the other. Thus there is a choice of convention in the choice of normal; we will see this fact again later. The cross product can be written in terms of a determinant, V × W = det[ x̂ ŷ ẑ ; V^x V^y V^z ; W^x W^y W^z ] (3). A determinant is computed by computing a product of numbers, taking one number from each row, while making sure that we also take only one number from each column.
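A quick NumPy check of Eqs. (1)-(3) with made-up vectors (an added sketch, not part of the excerpt):

```python
import numpy as np

V = np.array([1.0, 2.0, 0.5])
W = np.array([0.0, 1.0, 3.0])

# Area vector of the parallelogram, Eq. (1); its magnitude is Eq. (2) and its
# direction is normal to the plane spanned by V and W.
A = np.cross(V, W)
area = np.linalg.norm(A)
cos_t = (V @ W) / (np.linalg.norm(V) * np.linalg.norm(W))
assert np.isclose(area, np.linalg.norm(V) * np.linalg.norm(W) * np.sqrt(1.0 - cos_t**2))

# The determinant form, Eq. (3): with a third vector U, det[U; V; W] equals the
# triple product U . (V x W), the signed volume of the parallelepiped.
U = np.array([2.0, 0.0, 1.0])
vol = np.linalg.det(np.stack([U, V, W]))
assert np.isclose(vol, U @ A)
```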
Multilinear Algebra
Appendix A. Multilinear Algebra. This chapter presents concepts from multilinear algebra based on the basic properties of finite dimensional vector spaces and linear maps. The primary aim of the chapter is to give a concise introduction to alternating tensors which are necessary to define differential forms on manifolds. Many of the stated definitions and propositions can be found in Lee [1], Chaps. 11, 12 and 14. Some definitions and propositions are complemented by short and simple examples. First, in Sect. A.1 dual and bidual vector spaces are discussed. Subsequently, in Sects. A.2–A.4, tensors and alternating tensors together with operations such as the tensor and wedge product are introduced. Lastly, in Sect. A.5, the concepts which are necessary to introduce the wedge product are summarized in eight steps.
A.1 The Dual Space. Let V be a real vector space of finite dimension dim V = n. Let (e_1, ..., e_n) be a basis of V. Then every v ∈ V can be uniquely represented as a linear combination v = v^i e_i (A.1), where summation convention over repeated indices is applied. The coefficients v^i ∈ R are referred to as components of the vector v. Throughout the whole chapter, only finite dimensional real vector spaces, typically denoted by V, are treated. When not stated differently, summation convention is applied.
Definition A.1 (Dual Space). The dual space of V is the set of real-valued linear functionals V* := {ω : V → R : ω linear} (A.2). The elements of the dual space V* are called linear forms on V.
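A small component-level sketch of Definition A.1 and Eq. (A.1) in NumPy (the vector and the form below are illustrative, not from the text): once a basis is fixed, a linear form is determined by its values on the basis vectors, and evaluation is the contraction ω_i v^i.

```python
import numpy as np

# Work with components: v = v^i e_i is stored as the array of coefficients v^i,
# and a linear form omega in V* is stored by its values omega_i = omega(e_i);
# then omega(v) = omega_i v^i (summation convention).
v = np.array([2.0, -1.0, 4.0])        # components v^i of a vector in V (dim V = 3)
omega = np.array([1.0, 0.5, -2.0])    # components omega_i of a linear form

print(omega @ v)                      # omega(v) = omega_i v^i

# Linearity check: omega(a*u + b*w) = a*omega(u) + b*omega(w).
u, w = np.array([1.0, 0.0, 1.0]), np.array([0.0, 2.0, -1.0])
a, b = 3.0, -0.5
assert np.isclose(omega @ (a * u + b * w), a * (omega @ u) + b * (omega @ w))
```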
Tensor Calculus and Differential Geometry
Course Notes: Tensor Calculus and Differential Geometry (2WAH0). Luc Florack, March 10, 2021. Cover illustration: papyrus fragment from Euclid's Elements of Geometry, Book II [8].
Contents: Preface; Notation; 1 Prerequisites from Linear Algebra; 2 Tensor Calculus (2.1 Vector Spaces and Bases; 2.2 Dual Vector Spaces and Dual Bases; 2.3 The Kronecker Tensor; 2.4 Inner Products; 2.5 Reciprocal Bases; 2.6 Bases, Dual Bases, Reciprocal Bases: Mutual Relations; 2.7 Examples of Vectors and Covectors; 2.8 Tensors; 2.8.1 Tensors in all Generality; 2.8.2 Tensors Subject to Symmetries; 2.8.3 Symmetry and Antisymmetry Preserving Product Operators; 2.8.4 Vector Spaces with an Oriented Volume; 2.8.5 Tensors on an Inner Product Space; 2.8.6 Tensor Transformations ("Absolute Tensors", "Relative Tensors", "Pseudo Tensors"); 2.8.7 Contractions; 2.9 The Hodge Star Operator); 3 Differential Geometry (3.1 Euclidean Space: Cartesian and Curvilinear Coordinates; 3.2 Differentiable Manifolds; 3.3 Tangent Vectors; 3.4 Tangent and Cotangent Bundle; 3.5 Exterior Derivative; 3.6 Affine Connection; 3.7 Lie Derivative; 3.8 Torsion; 3.9 Levi-Civita Connection; 3.10 Geodesics; 3.11 Curvature; 3.12 Push-Forward and Pull-Back; 3.13 Examples; 3.13.1 Polar Coordinates in the Euclidean Plane […]).