Appendix A: Matrices and Tensors


A.1 Introduction and Rationale

The purpose of this appendix is to present the notation and most of the mathematical techniques that will be used in the body of the text. The audience is assumed to have been through several years of college-level mathematics that included the differential and integral calculus, differential equations, functions of several variables, partial derivatives, and an introduction to linear algebra. Matrices are reviewed briefly, and determinants, vectors, and tensors of order two are described. The application of this linear algebra to material that appears in undergraduate engineering courses on mechanics is illustrated by discussions of concepts like the area and mass moments of inertia, Mohr's circles, and the vector cross and triple scalar products. The solutions to ordinary differential equations are reviewed in the last two sections. The notation, as far as possible, will be a matrix notation that is easily entered into existing symbolic computational programs like Maple, Mathematica, Matlab, and Mathcad. The desire to represent the components of the three-dimensional fourth-order tensors that appear in anisotropic elasticity as the components of six-dimensional second-order tensors, and thus to arrange these components in matrices of tensor components in six dimensions, leads to the nontraditional part of this appendix. This is also one of the nontraditional aspects of the text of the book, but a minor one. It is described in Sect. A.11, along with the rationale for this approach.

A.2 Definition of Square, Column, and Row Matrices

An r by c matrix M is a rectangular array of numbers consisting of r rows and c columns,

$$
M = \begin{bmatrix}
M_{11} & M_{12} & \cdots & M_{1c} \\
M_{21} & M_{22} & \cdots & M_{2c} \\
\vdots & \vdots & \ddots & \vdots \\
M_{r1} & M_{r2} & \cdots & M_{rc}
\end{bmatrix}. \tag{A.1}
$$

The typical element of the array, $M_{ij}$, is the element in the ith row and jth column; in this text the elements $M_{ij}$ will be real numbers or functions whose values are real numbers. The transpose of the matrix M is denoted by $M^T$ and is obtained from M by interchanging the rows and columns,

$$
M^T = \begin{bmatrix}
M_{11} & M_{21} & \cdots & M_{r1} \\
M_{12} & M_{22} & \cdots & M_{r2} \\
\vdots & \vdots & \ddots & \vdots \\
M_{1c} & M_{2c} & \cdots & M_{rc}
\end{bmatrix}. \tag{A.2}
$$

The operation of obtaining $M^T$ from M is called transposition. In this text we are interested in special cases of the r by c matrix M: the square matrix, r = c = n; the row matrix, r = 1, c = n; and the column matrix, r = n, c = 1. Further, the special sub-cases of interest are n = 2, n = 3, and n = 6; the sub-case n = 1 reduces all three special cases to the trivial situation of a single number or scalar. A square matrix A has the form

$$
A = \begin{bmatrix}
A_{11} & A_{12} & \cdots & A_{1n} \\
A_{21} & A_{22} & \cdots & A_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
A_{n1} & A_{n2} & \cdots & A_{nn}
\end{bmatrix}, \tag{A.3}
$$

while row and column matrices, r and c, have the forms

$$
r = \begin{bmatrix} r_1 & r_2 & \cdots & r_n \end{bmatrix}, \qquad
c = \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{bmatrix}, \tag{A.4}
$$

respectively. The transpose of a column matrix is a row matrix; thus

$$
c^T = \begin{bmatrix} c_1 & c_2 & \cdots & c_n \end{bmatrix}. \tag{A.5}
$$

To save space in books and papers, the form of $c^T$ in (A.5) is used more frequently than the form in the second of (A.4). Wherever possible, square matrices will be denoted by upper-case boldface Latin letters, while row and column matrices will be denoted by lower-case boldface Latin letters, as is the case in equations (A.3) and (A.4).
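Since the stated goal is a notation that transfers directly to computational programs, the definitions above translate almost verbatim into code. The following sketch is our own illustration, using NumPy as a stand-in for the symbolic programs named above; the variable names are ours, not the book's.

```python
import numpy as np

# A 2-by-3 matrix M (r = 2 rows, c = 3 columns), cf. (A.1)
M = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

# Transposition interchanges rows and columns, cf. (A.2)
MT = M.T
assert MT.shape == (M.shape[1], M.shape[0])

# Row matrix r (1 by n) and column matrix c (n by 1), cf. (A.4)
r = np.array([[1.0, 2.0, 3.0]])      # shape (1, 3)
c = np.array([[1.0], [2.0], [3.0]])  # shape (3, 1)

# The transpose of a column matrix is a row matrix, cf. (A.5)
assert np.array_equal(c.T, np.array([[1.0, 2.0, 3.0]]))
```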
A.3 The Types and Algebra of Square Matrices

The elements of the square matrix A given by (A.3) for which the row and column indices are equal, namely the elements $A_{11}, A_{22}, \ldots, A_{nn}$, are called diagonal elements. The sum of the diagonal elements of a matrix is a scalar called the trace of the matrix and, for a matrix A, it is denoted by trA,

$$
\mathrm{tr}\,A = A_{11} + A_{22} + \cdots + A_{nn}. \tag{A.6}
$$

If the trace of a matrix is zero, the matrix is said to be traceless. Note also that $\mathrm{tr}\,A = \mathrm{tr}\,A^T$. A matrix whose off-diagonal elements are all zero is called a diagonal matrix,

$$
A = \begin{bmatrix}
A_{11} & 0 & \cdots & 0 \\
0 & A_{22} & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & A_{nn}
\end{bmatrix}. \tag{A.7}
$$

The zero and the unit matrix, 0 and 1, respectively, constitute the null element, the 0, and the unit element, the 1, in the algebra of square matrices. The zero matrix is a matrix whose every element is zero, and the unit matrix is a diagonal matrix whose diagonal elements are all one:

$$
0 = \begin{bmatrix}
0 & 0 & \cdots & 0 \\
0 & 0 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 0
\end{bmatrix}, \qquad
1 = \begin{bmatrix}
1 & 0 & \cdots & 0 \\
0 & 1 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 1
\end{bmatrix}. \tag{A.8}
$$

A special symbol, the Kronecker delta $\delta_{ij}$, is introduced to represent the components of the unit matrix. When the indices are equal, i = j, the value of the Kronecker delta is one, $\delta_{11} = \delta_{22} = \cdots = \delta_{nn} = 1$, and when they are unequal, i ≠ j, the value of the Kronecker delta is zero, $\delta_{12} = \delta_{21} = \cdots = \delta_{1n} = \delta_{n1} = 0$.

The multiplication of a matrix A by a scalar a is defined as the multiplication of every element of the matrix A by the scalar a; thus

$$
aA = \begin{bmatrix}
aA_{11} & aA_{12} & \cdots & aA_{1n} \\
aA_{21} & aA_{22} & \cdots & aA_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
aA_{n1} & aA_{n2} & \cdots & aA_{nn}
\end{bmatrix}. \tag{A.9}
$$

It is then easy to show that 1A = A, (−1)A = −A, 0A = 0, and a0 = 0. The addition of matrices is defined only for matrices with the same number of rows and columns. The sum of two matrices, A and B, is denoted by A + B, where

$$
A + B = \begin{bmatrix}
A_{11} + B_{11} & A_{12} + B_{12} & \cdots & A_{1n} + B_{1n} \\
A_{21} + B_{21} & A_{22} + B_{22} & \cdots & A_{2n} + B_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
A_{n1} + B_{n1} & A_{n2} + B_{n2} & \cdots & A_{nn} + B_{nn}
\end{bmatrix}. \tag{A.10}
$$

Matrix addition is commutative and associative,

$$
A + B = B + A \quad \text{and} \quad A + (B + C) = (A + B) + C, \tag{A.11}
$$

respectively. The following distributive laws connect matrix addition and matrix multiplication by scalars:

$$
a(A + B) = aA + aB \quad \text{and} \quad (a + b)A = aA + bA, \tag{A.12}
$$

where a and b are scalars. Negative square matrices may be created by employing the definition of matrix multiplication by a scalar (A.9) in the special case a = −1. In this case the definition of addition of square matrices (A.10) can be extended to include the subtraction of square matrices, A − B.

A matrix for which $B = B^T$ is said to be a symmetric matrix, and a matrix for which $C = -C^T$ is said to be a skew-symmetric or anti-symmetric matrix. The symmetric and anti-symmetric parts of a matrix, say $A^S$ and $A^A$, are constructed from A as follows:

$$
A^S = \frac{1}{2}\,(A + A^T), \tag{A.13}
$$

$$
A^A = \frac{1}{2}\,(A - A^T). \tag{A.14}
$$

It is easy to verify that the symmetric part of A is a symmetric matrix and that the skew-symmetric part of A is a skew-symmetric matrix. The sum of the symmetric part of A and the skew-symmetric part of A, $A^S + A^A$, is A:

$$
A = A^S + A^A = \frac{1}{2}\,(A + A^T) + \frac{1}{2}\,(A - A^T). \tag{A.15}
$$

This result shows that any square matrix can be decomposed into the sum of a symmetric matrix and a skew-symmetric or anti-symmetric matrix.
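As a quick numerical check of (A.6) and (A.13)–(A.15), the following sketch (again our own NumPy illustration, not from the book) constructs the symmetric and skew-symmetric parts of an arbitrary square matrix and verifies that they recombine to give A.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

# Trace, cf. (A.6); note that trA equals trA^T
trA = np.trace(A)
assert trA == np.trace(A.T)

# The unit matrix 1, whose components are the Kronecker delta, cf. (A.8)
one = np.eye(3)

# Symmetric and skew-symmetric parts, cf. (A.13) and (A.14)
AS = 0.5 * (A + A.T)
AA = 0.5 * (A - A.T)
assert np.allclose(AS, AS.T)    # A^S is symmetric
assert np.allclose(AA, -AA.T)   # A^A is skew-symmetric

# The decomposition (A.15): A = A^S + A^A
assert np.allclose(A, AS + AA)
```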
Using the trace operation introduced above, the representation (A.15) can be extended to a three-way decomposition of the matrix A,

$$
A = \frac{\mathrm{tr}\,A}{n}\,1
+ \left\{ \frac{1}{2}\,(A + A^T) - \frac{\mathrm{tr}\,A}{n}\,1 \right\}
+ \frac{1}{2}\,(A - A^T). \tag{A.16}
$$

The last term in this decomposition is still the skew-symmetric part of the matrix. The second term is the traceless symmetric part of the matrix, and the first term is simply the trace of the matrix, divided by n, multiplied by the unit matrix.

Example A.3.1
Construct the three-way decomposition of the matrix A given by:

$$
A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}.
$$

Solution: The symmetric and skew-symmetric parts of A, $A^S$ and $A^A$, as well as the trace of A, are calculated,

$$
A^S = \frac{1}{2}(A + A^T) = \begin{bmatrix} 1 & 3 & 5 \\ 3 & 5 & 7 \\ 5 & 7 & 9 \end{bmatrix}, \qquad
A^A = \frac{1}{2}(A - A^T) = \begin{bmatrix} 0 & -1 & -2 \\ 1 & 0 & -1 \\ 2 & 1 & 0 \end{bmatrix}, \qquad
\mathrm{tr}\,A = 15;
$$

then, since n = 3, it follows from (A.16) that

$$
A = \begin{bmatrix} 5 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 5 \end{bmatrix}
+ \begin{bmatrix} -4 & 3 & 5 \\ 3 & 0 & 7 \\ 5 & 7 & 4 \end{bmatrix}
+ \begin{bmatrix} 0 & -1 & -2 \\ 1 & 0 & -1 \\ 2 & 1 & 0 \end{bmatrix}.
$$

Introducing the notation for the deviatoric part of an n by n square matrix A,

$$
\mathrm{dev}\,A = A - \frac{\mathrm{tr}\,A}{n}\,1, \tag{A.17}
$$

the representation for the matrix A given by (A.16) may be rewritten as the sum of three terms,

$$
A = \frac{\mathrm{tr}\,A}{n}\,1 + \mathrm{dev}\,A^S + A^A, \tag{A.18}
$$

where the first term is called the isotropic, or spherical (or hydrostatic, as in hydraulics), part of A. Note that

$$
\mathrm{dev}\,A^S = \frac{1}{2}\,(\mathrm{dev}\,A + \mathrm{dev}\,A^T). \tag{A.19}
$$

Example A.3.2
Show that tr(dev A) = 0.

Solution: From (A.17) and the fact that tr 1 = n,

$$
\mathrm{tr}\,(\mathrm{dev}\,A) = \mathrm{tr}\,A - \frac{\mathrm{tr}\,A}{n}\,\mathrm{tr}\,1 = \mathrm{tr}\,A - \mathrm{tr}\,A = 0.
$$
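The three-way decomposition (A.18) and the result of Example A.3.2 can also be verified numerically. The sketch below (our NumPy illustration, reusing the matrix of Example A.3.1) reproduces the isotropic, traceless symmetric, and skew-symmetric parts and checks that tr(dev A) = 0.

```python
import numpy as np

n = 3
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

AS = 0.5 * (A + A.T)                   # symmetric part, (A.13)
AA = 0.5 * (A - A.T)                   # skew-symmetric part, (A.14)
iso = (np.trace(A) / n) * np.eye(n)    # isotropic (spherical) part
devAS = AS - (np.trace(AS) / n) * np.eye(n)  # traceless symmetric part, cf. (A.17)

# Three-way decomposition (A.18): A = (trA/n) 1 + dev A^S + A^A
assert np.allclose(A, iso + devAS + AA)
print(iso)     # diag(5, 5, 5)
print(devAS)   # [[-4, 3, 5], [3, 0, 7], [5, 7, 4]]
print(AA)      # [[0, -1, -2], [1, 0, -1], [2, 1, 0]]

# Example A.3.2: the deviatoric part of any square matrix is traceless
devA = A - (np.trace(A) / n) * np.eye(n)
assert np.isclose(np.trace(devA), 0.0)
```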