A Naive Approach to Tensors on Manifolds


A naive approach to Tensors on Manifolds
Cho, Yong-Hwa. Department of Mathematical Sciences, KAIST (1 / 35)

Manifolds? (2 / 35)

Manifolds = Topological Nature (3 / 35)

Studying Manifolds: How? (4 / 35)
"Flat Space": rescaling the plane.

Vectors and Covectors (2D) (5 / 35)
A covector is an "oriented line density"; contracting it with a vector gives a scalar:
covector · vector = scalar (pictured examples: 6 and 12).

Vectors and Covectors (3D) (6 / 35)
Covector: codimension-1 oriented plane density. Vector: dimension-1 oriented plane capacity.
covector · vector = scalar (pictured example: 3).

Multivectors and Forms (7 / 35)
Forms: 1-form, 2-form, 3-form. Multivectors: 1-vector, 2-vector, 3-vector.
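The covector-vector pairing on these slides can be sketched numerically: a covector ("oriented line density") eats a vector and returns a scalar that counts crossed lines. A minimal sketch in NumPy; the component values are assumptions chosen to reproduce the slides' sample numbers 6 and 12:

```python
import numpy as np

# Covector ("oriented line density") components ω_a and vector components v^a.
# These particular values are illustrative assumptions.
omega = np.array([2.0, 1.0])   # ω_a
v = np.array([2.0, 2.0])       # v^a

# Contraction v^a ω_a: how many oriented lines the vector crosses.
print(omega @ v)        # 6.0
print(omega @ (2 * v))  # 12.0 — doubling the vector doubles the count
```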
Abstract Index Notation (8 / 35)
A tensor T with n covariant and m contravariant parts: $T_{a_1 a_2 \cdots a_n}{}^{b_1 b_2 \cdots b_m}$.
Covariant parts (forms): $\omega_a$, $\omega_{ab}$, $\omega_{abc}$. Contravariant parts (vectors): $v^a$, $v^{ab}$, $v^{abc}$.

Index Notation, Tensor Products and Contractions (9 / 35)
Tensor product — different indices; the indices, not the factor order, determine the tensor:
$(\vec v \otimes \vec w =)\; v^a w^b = w^b v^a \neq v^b w^a \;(= \vec w \otimes \vec v)$.
Contraction — same index: $\vec v \cdot \omega = v^a \omega_a$.
Contractions between tensors — the indices determine which slots are contracted:
$(\vec v \cdot T)_b{}^c{}_d = v^a T_{ab}{}^c{}_d \neq v^a T_{ba}{}^c{}_d$.
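`numpy.einsum` mirrors abstract index notation directly: the index letters, not the argument order, determine the result. A sketch of the three operations on this slide; the shapes and random values are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
v = rng.standard_normal(3)              # v^a
w = rng.standard_normal(3)              # w^b
omega = rng.standard_normal(3)          # ω_a
T = rng.standard_normal((3, 3, 3, 3))   # T_{ab}^c_d, slot order (a, b, c, d)

# Tensor product: v^a w^b = w^b v^a — the index labels fix the slots.
assert np.allclose(np.einsum('a,b->ab', v, w), np.einsum('b,a->ab', w, v))
# ...but v^b w^a is a different tensor (the transpose).
assert not np.allclose(np.einsum('a,b->ab', v, w), np.einsum('a,b->ba', v, w))

# Contraction: a repeated index is summed, v^a ω_a.
s = np.einsum('a,a->', v, omega)

# Contracting v into different slots of T gives different tensors:
# v^a T_{ab}^c_d  vs  v^a T_{ba}^c_d.
t1 = np.einsum('a,abcd->bcd', v, T)
t2 = np.einsum('a,bacd->bcd', v, T)
assert not np.allclose(t1, t2)
```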
Tensor Algebra: Contractions of Co- and Contra-Parts (10 / 35)
Vector into 1-form: $\vec v \cdot \omega = v^a \omega_a$ (pictured example: 6).
2-vector into 2-form: $\vec v \cdot \omega = \frac{1}{2!}\, v^{ab} \omega_{ab}$ (pictured example: 9).
Vector into 2-form gives a 1-form: $(\vec v \cdot \omega)_b = v^a \omega_{ab}$.

Tensor Algebra: Exterior Product (11 / 35)
$v^a \wedge w^b = v^a w^b - v^b w^a = (v \wedge w)^{ab}$
$\omega_a \wedge \sigma_b = \omega_a \sigma_b - \omega_b \sigma_a = (\omega \wedge \sigma)_{ab}$
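The antisymmetrised products and the contractions of co- and contra-parts can be checked componentwise. A sketch with assumed sample components:

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])     # v^a
w = np.array([0.5, -1.0, 2.0])    # w^b

# (v ∧ w)^{ab} = v^a w^b − v^b w^a : an antisymmetric 2-vector.
vw = np.einsum('a,b->ab', v, w) - np.einsum('a,b->ba', v, w)
assert np.allclose(vw, -vw.T)

# A sample antisymmetric 2-form ω_{ab} (values are assumptions).
omega2 = np.array([[ 0.0,  1.0, -2.0],
                   [-1.0,  0.0,  3.0],
                   [ 2.0, -3.0,  0.0]])

# 2-vector into 2-form: scalar (1/2!) v^{ab} ω_{ab}.
scalar = 0.5 * np.einsum('ab,ab->', vw, omega2)
print(scalar)     # 18.0

# Vector into 2-form: 1-form (v·ω)_b = v^a ω_{ab}.
one_form = np.einsum('a,ab->b', v, omega2)
print(one_form)   # [ 4. -8.  4.]
```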
Tensor Algebra: Exterior Product (12 / 35)
In general, k-form ∧ l-form = (k + l)-form, e.g.
$\omega_a \wedge \sigma_{bc} = \omega_a \sigma_{bc} + \omega_b \sigma_{ca} + \omega_c \sigma_{ab} = (\omega \wedge \sigma)_{abc}$.

Tensor Fields: Scalar and Covector Fields (13 / 35)
(Figure: level sets of a scalar field, labelled 3, 2, 1.)

Space Compression $\nabla$ (14 / 35)
Example: the gradient field $\nabla\varphi$. (Figures contrast a compatible space compression $\vec\nabla$ with an incompatible one.)

Tensor Calculus: Exterior Differentiation $\nabla\wedge$ (= d) (15 / 35)
The exterior derivative takes a k-form field $\omega$ to the (k + 1)-form field $\nabla \wedge \omega$; applied twice it vanishes: $\nabla \wedge \nabla \wedge = 0$.

Tensor Calculus: Integration (16 / 35)
Integration of k-form fields over oriented k-dimensional surfaces:
$\int_L \omega = 2$ over a curve L, $\int_S \omega = 3$ over a surface S (pictured examples).
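The identity $\nabla \wedge \nabla \wedge = 0$ has a numerical shadow: on a grid, the "curl" of a gradient field vanishes. A sketch; the scalar field and grid below are arbitrary assumptions:

```python
import numpy as np

# Sample scalar field φ on a grid (an illustrative assumption).
x = np.linspace(-1.0, 1.0, 201)
X, Y = np.meshgrid(x, x, indexing='ij')
phi = np.sin(3 * X) * np.cos(2 * Y)

# 1-form field ∇φ = (∂φ/∂x, ∂φ/∂y) by finite differences.
dphi_dx, dphi_dy = np.gradient(phi, x, x)

# 2-form field (∇∧∇φ)_{xy} = ∂x(∇φ)_y − ∂y(∇φ)_x.
# Difference operators along distinct grid axes commute, so this
# vanishes to rounding error — the discrete version of ∇∧∇∧ = 0.
dd = np.gradient(dphi_dy, x, axis=0) - np.gradient(dphi_dx, x, axis=1)
print(np.max(np.abs(dd)))   # ≈ 0 (rounding-level)
```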
Stokes' Theorem (17 / 35)
$\int_S \nabla \wedge \omega = 0 + 1 + 1 - 1 + 1 - 1 + 1 = 2$
$\int_{\partial S} \omega = +1 - 1 + 1 + 0 + 1 - 1 + 1 = 2$
In general, for a k-form field $\omega_k$:
$\int_{S_{k+1}} \nabla \wedge \omega_k = \int_{\partial S_{k+1}} \omega_k$

Volume Form and Volume Element (18 / 35)
On an n-dimensional orientable manifold, the volume form $(\epsilon_n)_{a_1 \cdots a_n}$ and the volume element $\tilde\epsilon_n^{-1}$, normalised so that
$\frac{1}{n!}\,\epsilon_n \cdot \tilde\epsilon_n^{-1} = \frac{1}{n!}\,(\tilde\epsilon_n^{-1})^{a_1 \cdots a_n}(\epsilon_n)_{a_1 \cdots a_n} = 1,$
provide the unit density, volume, and the space orientation. The volume of the manifold M is
$V(M) = \int_M \epsilon_n.$
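Stokes' theorem can be spot-checked numerically in the plane, where it reduces to Green's theorem: $\int_S \nabla \wedge \omega = \int_{\partial S} \omega$ for a 1-form $\omega = P\,dx + Q\,dy$ on the unit square. The particular P and Q below are assumptions chosen for an easy check:

```python
import numpy as np

def P(x, y): return -y * x**2   # ω = P dx + Q dy (illustrative choice)
def Q(x, y): return x * y**2

n = 400
t = (np.arange(n) + 0.5) / n                  # midpoint-rule nodes on [0, 1]

# Left side: ∫∫_S (∂Q/∂x − ∂P/∂y) dx dy  with  ∂Q/∂x = y², ∂P/∂y = −x².
X, Y = np.meshgrid(t, t, indexing='ij')
lhs = np.mean(Y**2 + X**2)                    # exact value: 2/3

# Right side: ∮_{∂S} P dx + Q dy, counter-clockwise around the square.
bottom = np.mean(P(t, 0.0))                   # y = 0, dx > 0
right  = np.mean(Q(1.0, t))                   # x = 1, dy > 0
top    = -np.mean(P(t, 1.0))                  # y = 1, dx < 0
left   = -np.mean(Q(0.0, t))                  # x = 0, dy < 0
rhs = bottom + right + top + left

print(lhs, rhs)                               # both ≈ 2/3
```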