03 - Tensor Calculus - Tensor Analysis • (Principal) Invariants of Second Order Tensor

Total Pages: 16

File Type: pdf, Size: 1020 KB

Lecture 03 covers tensor algebra and tensor analysis; the slide topics are:

Tensor algebra
• invariants: (principal) invariants of a second-order tensor; derivatives of the invariants with respect to a second-order tensor
• trace: trace of a second-order tensor; properties of traces of second-order tensors
• determinant: determinant of a second-order tensor; properties of determinants; the determinant defining the vector product and the scalar triple product
• inverse: inverse of a second-order tensor; adjoint and cofactor; properties of the inverse
• spectral decomposition: eigenvalue problem of a second-order tensor; solution in terms of the scalar triple product; characteristic equation; spectral decomposition; Cayley-Hamilton theorem
• symmetric / skew-symmetric decomposition: a symmetric second-order tensor possesses three real eigenvalues and corresponding eigenvectors, with square root, inverse, exponent, and log defined via the spectral form; a skew-symmetric second-order tensor possesses three independent entries defining an axial vector; invariants of a skew-symmetric tensor
• volumetric / deviatoric decomposition: decomposition of a second-order tensor into a volumetric and a deviatoric tensor
• orthogonal tensor: orthogonal second-order tensor; a proper orthogonal tensor has eigenvalue +1, with the interpretation of a finite rotation around an axis

Tensor analysis
• Fréchet derivative: for a smooth differentiable field with scalar, vector, or tensor argument (tensor notation)
• Gâteaux derivative: i.e. the Fréchet derivative with respect to a direction, again for scalar, vector, and tensor arguments
• gradient: the gradient of a scalar or vector field renders a vector or second-order tensor field
• divergence: the divergence of a vector or second-order tensor field renders a scalar or vector field
• Laplace operator: acting on a scalar or vector field, it renders a scalar or vector field
• transformation formulae: useful identities in tensor and index notation
• integral theorems: Green and Gauss theorems (tensor notation)

Voigt / matrix-vector notation
• strain and stress tensors as vectors in Voigt notation; fourth-order material operators as matrices in Voigt notation; why strain and stress are treated differently: check the energy expression

Example #1 (Matlab) and Homework #2
• deformation gradient: given the deformation gradient, play with Matlab to become familiar with basic tensor operations; uniaxial tension (incompressible), simple shear, rotation
• second-order tensors: inverse; right / left Cauchy-Green and Green-Lagrange strain tensors; trace; (principal) invariants
• fourth-order tensors: symmetric, skew-symmetric, volumetric, and deviatoric fourth-order unit tensors
• neo-Hookean elasticity: free energy; 1st and 2nd Piola-Kirchhoff stress and Cauchy stress; 4th-order tangent operators; which of the stress tensors is symmetric and could be represented in Voigt notation; what the tangent looks like in the linear limit; the advantages of Voigt notation
• play with the Matlab routine to familiarize yourself with tensor expressions, and calculate the stresses for different deformation gradients
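The tensor-algebra operations in the slide list (principal invariants, the Cayley-Hamilton theorem, spectral decomposition, and the volumetric/deviatoric split) are easy to check numerically. The course exercises use Matlab; the sketch below does the same checks in Python/NumPy instead, on an arbitrarily chosen symmetric second-order tensor.

```python
import numpy as np

# An arbitrary symmetric second-order tensor (3x3 matrix) for illustration
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 0.0, 1.0]])

# Principal invariants of a second-order tensor
I1 = np.trace(A)
I2 = 0.5 * (np.trace(A)**2 - np.trace(A @ A))
I3 = np.linalg.det(A)

# Cayley-Hamilton theorem: A^3 - I1 A^2 + I2 A - I3 I = 0
I = np.eye(3)
CH = A @ A @ A - I1 * (A @ A) + I2 * A - I3 * I
assert np.allclose(CH, 0)

# Spectral decomposition of the symmetric tensor: A = sum_i lam_i n_i (x) n_i
lam, N = np.linalg.eigh(A)
A_rebuilt = sum(lam[i] * np.outer(N[:, i], N[:, i]) for i in range(3))
assert np.allclose(A, A_rebuilt)

# Volumetric/deviatoric split: A = (tr A / 3) I + dev A, with tr(dev A) = 0
vol = I1 / 3.0 * I
dev = A - vol
assert abs(np.trace(dev)) < 1e-12
```

For the tensor chosen here the invariants come out to I1 = 6, I2 = 10, I3 = 5, and the Cayley-Hamilton residual vanishes to machine precision.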
Recommended publications
  • Linear Algebra I
    Linear Algebra I. Martin Otto, Winter Term 2013/14. Contents: 1 Introduction (p. 7): 1.1 Motivating Examples (7); 1.1.1 The two-dimensional real plane (7); 1.1.2 Three-dimensional real space (14); 1.1.3 Systems of linear equations over R^n (15); 1.1.4 Linear spaces over Z_2 (21); 1.2 Basics, Notation and Conventions (27): 1.2.1 Sets (27); 1.2.2 Functions (29); 1.2.3 Relations (34); 1.2.4 Summations (36); 1.2.5 Propositional logic (36); 1.2.6 Some common proof patterns (37); 1.3 Algebraic Structures (39): 1.3.1 Binary operations on a set (39); 1.3.2 Groups (40); 1.3.3 Rings and fields (42); 1.3.4 Aside: isomorphisms of algebraic structures (44). 2 Vector Spaces (47): 2.1 Vector spaces over arbitrary fields (47); 2.1.1 The axioms (48); 2.1.2 Examples old and new (50); 2.2 Subspaces (53): 2.2.1 Linear subspaces (53); 2.2.2 Affine subspaces (56); 2.3 Aside: affine and linear spaces (58); 2.4 Linear dependence and independence (60): 2.4.1 Linear combinations and spans (60); 2.4.2 Linear (in)dependence (62); 2.5 Bases and dimension (65): 2.5.1 Bases (65); 2.5.2 Finite-dimensional vector spaces (66); 2.5.3 Dimensions of linear and affine subspaces (71); 2.5.4 Existence of bases (72); 2.6 Products, sums and quotients of spaces (73): 2.6.1 Direct products (73); 2.6.2 Direct sums of subspaces ...
  • Vector Spaces in Physics
    San Francisco State University, Department of Physics and Astronomy, August 6, 2015. Vector Spaces in Physics. Notes for Ph 385: Introduction to Theoretical Physics I. R. Bland. TABLE OF CONTENTS. Chapter I. Vectors. A. The displacement vector. B. Vector addition. C. Vector products. 1. The scalar product. 2. The vector product. D. Vectors in terms of components. E. Algebraic properties of vectors. 1. Equality. 2. Vector addition. 3. Multiplication of a vector by a scalar. 4. The zero vector. 5. The negative of a vector. 6. Subtraction of vectors. 7. Algebraic properties of vector addition. F. Properties of a vector space. G. Metric spaces and the scalar product. 1. The scalar product. 2. Definition of a metric space. H. The vector product. I. Dimensionality of a vector space and linear independence. J. Components in a rotated coordinate system. K. Other vector quantities. Chapter 2. The special symbols δij and εijk, the Einstein summation convention, and some group theory. A. The Kronecker delta symbol, δij. B. The Einstein summation convention. C. The Levi-Civita totally antisymmetric tensor. Groups. The permutation group. The Levi-Civita symbol. D. The cross product. E. The triple scalar product. F. The triple vector product. The epsilon killer. Chapter 3. Linear equations and matrices. A. Linear independence of vectors. B. Definition of a matrix. C. The transpose of a matrix. D. The trace of a matrix. E. Addition of matrices and multiplication of a matrix by a scalar. F. Matrix multiplication. G. Properties of matrix multiplication. H. The unit matrix. I. Square matrices as members of a group.
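The Chapter 2 topics above (the Kronecker delta, the Levi-Civita symbol, the Einstein summation convention, and the "epsilon killer" identity) can be demonstrated in a few lines. This is an illustrative sketch in Python/NumPy, not part of the notes:

```python
import numpy as np

# Levi-Civita symbol eps[i,j,k]: +1 for cyclic permutations of (0,1,2),
# -1 for the odd permutations, 0 otherwise
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

# Cross product via the Einstein summation convention: (a x b)_i = eps_ijk a_j b_k
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
cross = np.einsum('ijk,j,k->i', eps, a, b)
assert np.allclose(cross, np.cross(a, b))

# The "epsilon killer": eps_ijk eps_ilm = delta_jl delta_km - delta_jm delta_kl
delta = np.eye(3)
lhs = np.einsum('ijk,ilm->jklm', eps, eps)
rhs = np.einsum('jl,km->jklm', delta, delta) - np.einsum('jm,kl->jklm', delta, delta)
assert np.allclose(lhs, rhs)
```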
  • An Approximate Model Is Developed to Calculate the Change of Elastic Constants Induced by Point Defects in hcp Metals
    IC/79/lW. INTERNAL REPORT (Limited distribution). International Atomic Energy Agency and United Nations Educational Scientific and Cultural Organization. INTERNATIONAL CENTRE FOR THEORETICAL PHYSICS. CHANGE OF ELASTIC CONSTANTS INDUCED BY POINT DEFECTS IN hcp CRYSTALS. Carlos Tomé, International Centre for Theoretical Physics, Trieste, Italy. 1 - INTRODUCTION. The elastic constants, as well as other mechanical properties, of irradiated materials are very sensitive to the concentration of irradiation-produced point defects. One of the first estimates of this effect was made by Dienes [1], who simply averaged over the whole lattice the locally changed interatomic bonds due to the presence of the defect. With this model he predicted an increase of the elastic constants of about 10% per atomic % of interstitials in Cu and a decrease of 1% per at. % of vacancies. Later on, experimental studies by König et al. [2] showed very large decreases, of about 50% per at. % of Frenkel defects. The theory was then improved in order to relate the change of the elastic constants to the defect-induced change of force constants, and two equivalent methods were developed: the energy method of Ludwig [4] and the t-matrix method of Elliott et al. [5]. Theoretical estimates for cubic crystals have been carried out by Ludwig [4] for the case of vacancies, by Pistorius [6] for interstitials, and by Dederichs et al. [7] for dumbbell interstitials. No theoretical work has been done so far for hexagonal crystals, and the experimental measurements (available only for Mg) are somewhat crude [8,9], even though in the last few ...
  • Estimations of the Trace of Powers of Positive Self-Adjoint Operators by Extrapolation of the Moments∗
    Electronic Transactions on Numerical Analysis. ETNA Volume 39, pp. 144-155, 2012. Kent State University Copyright 2012, Kent State University. http://etna.math.kent.edu ISSN 1068-9613. ESTIMATIONS OF THE TRACE OF POWERS OF POSITIVE SELF-ADJOINT OPERATORS BY EXTRAPOLATION OF THE MOMENTS∗ CLAUDE BREZINSKI†, PARASKEVI FIKA‡, AND MARILENA MITROULI‡ Abstract. Let A be a positive self-adjoint linear operator on a real separable Hilbert space H. Our aim is to build estimates of the trace of Aq, for q ∈ R. These estimates are obtained by extrapolation of the moments of A. Applications of the matrix case are discussed, and numerical results are given. Key words. Trace, positive self-adjoint linear operator, symmetric matrix, matrix powers, matrix moments, extrapolation. AMS subject classifications. 65F15, 65F30, 65B05, 65C05, 65J10, 15A18, 15A45. 1. Introduction. Let A be a positive self-adjoint linear operator from H to H, where H is a real separable Hilbert space with inner product denoted by (·, ·). Our aim is to build estimates of the trace of Aq, for q ∈ R. These estimates are obtained by extrapolation of the integer moments (z, Anz) of A, for n ∈ N. A similar procedure was first introduced in [3] for estimating the Euclidean norm of the error when solving a system of linear equations, which corresponds to q = −2. The case q = −1, which leads to estimates of the trace of the inverse of a matrix, was studied in [4]; on this problem, see [10]. Let us mention that, when only positive powers of A are used, the Hilbert space H could be infinite dimensional, while, for negative powers of A, it is always assumed to be a finite dimensional one, and, obviously, A is also assumed to be invertible.
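The moments (z, A^n z) that the paper extrapolates can also be sampled directly: for a random sign vector z, the expectation of (z, A^n z) is tr(A^n). The sketch below is a plain Monte-Carlo illustration of that fact (it is not the extrapolation method of the paper), using an arbitrary symmetric positive definite test matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# A symmetric positive definite test matrix, standing in for the operator A
B = rng.standard_normal((50, 50))
A = B @ B.T + 50 * np.eye(50)

# Moments (z, A^n z) with Rademacher z satisfy E[(z, A^n z)] = tr(A^n)
def moment_estimate(A, n, samples=2000):
    total = 0.0
    for _ in range(samples):
        z = rng.choice([-1.0, 1.0], size=A.shape[0])
        v = z.copy()
        for _ in range(n):
            v = A @ v          # v = A^n z after the loop
        total += z @ v         # one sample of (z, A^n z)
    return total / samples

est = moment_estimate(A, 2)
exact = np.trace(A @ A)
print(abs(est - exact) / exact)  # relative error of the stochastic estimate
```

With a few thousand samples the relative error is typically at the percent level; the paper's extrapolation machinery is precisely about getting good estimates of tr(A^q), including non-integer and negative q, from a few such moments.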
  • Gravitation in the Surface Tension Model of Spacetime
    IARD 2018. IOP Publishing. IOP Conf. Series: Journal of Physics: Conf. Series 1239 (2019) 012010, doi:10.1088/1742-6596/1239/1/012010. Gravitation in the surface tension model of spacetime. H A Perko (Office 14, 140 E. 4th Street, Loveland, CO, USA 80537). E-mail: [email protected]. Abstract. A mechanical model of spacetime was introduced at a prior conference for describing perturbations of stress, strain, and displacement within a spacetime exhibiting surface tension. In the prior work, equations governing spacetime dynamics described by the model show some similarities to fundamental equations of quantum mechanics. Similarities were identified between the model and the equations of Klein-Gordon, Schrödinger, Heisenberg, and Weyl. The introduction did not explain how gravitation arises within the model. In this talk, the model will be summarized, corrected, and extended for comparison with general relativity. An anisotropic elastic tensor is proposed as a constitutive relation between stress energy and curvature instead of the traditional Einstein constant. Such a relation permits spatial geometric terms in the mechanical model to resemble quantum mechanics while temporal terms and the overall structure of the tensor equations remain consistent with general relativity. This work is in its infancy; next steps are to show how the anisotropic tensor affects cosmological predictions and to further explore if geometry and quantum mechanics can be related in more than just appearance. 1. Introduction. The focus of this research is to find a mechanism by which spacetime might curl, warp, or re-configure at small scales to provide a geometrical explanation for quantum mechanics while remaining consistent with gravity and general relativity.
  • Multivector Differentiation and Linear Algebra (17th Santaló Summer School)
    Multivector differentiation and Linear Algebra. 17th Santaló Summer School 2016, Santander. Joan Lasenby, Signal Processing Group, Engineering Department, Cambridge, UK and Trinity College Cambridge. [email protected], www-sigproc.eng.cam.ac.uk/~jl. 23 August 2016.

    Overview: The Multivector Derivative. Examples of differentiation wrt multivectors. Linear Algebra: matrices and tensors as linear functions mapping between elements of the algebra. Functional Differentiation: very briefly. Summary.

    The Multivector Derivative. Recall our definition of the directional derivative in the a direction: a·∇F(x) = lim_{ε→0} [F(x + εa) − F(x)] / ε. We now want to generalise this idea to enable us to find the derivative of F(X) in the A 'direction', where X is a general mixed-grade multivector (so F(X) is a general multivector-valued function of X). Let us use ∗ to denote taking the scalar part, i.e. P ∗ Q ≡ ⟨PQ⟩. Then, provided A has the same grades as X, it makes sense to define: A ∗ ∂_X F(X) = lim_{t→0} [F(X + tA) − F(X)] / t.
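The directional-derivative limit above can be checked numerically for a scalar-valued function of a matrix argument. A sketch using F(X) = tr(X²), whose derivative in the direction A works out analytically to 2 tr(XA):

```python
import numpy as np

# F(X) = tr(X^2); d/dt F(X + tA) at t = 0 equals tr(AX + XA) = 2 tr(XA)
def F(X):
    return np.trace(X @ X)

def directional_derivative(F, X, A, t=1e-6):
    # Central finite difference approximating the limit definition
    return (F(X + t * A) - F(X - t * A)) / (2 * t)

rng = np.random.default_rng(1)
X = rng.standard_normal((3, 3))
A = rng.standard_normal((3, 3))

numeric = directional_derivative(F, X, A)
analytic = 2 * np.trace(X @ A)
assert abs(numeric - analytic) < 1e-6
```

Since F is quadratic, the central difference is exact up to roundoff here; for general F it converges as the step size shrinks.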
  • 1.2 Topological Tensor Calculus
    PH211 Physical Mathematics, Fall 2019. 1.2 Topological tensor calculus. 1.2.1 Tensor fields. Finite displacements in Euclidean space can be represented by arrows and have a natural vector space structure, but finite displacements in more general curved spaces, such as on the surface of a sphere, do not. However, an infinitesimal neighborhood of a point in a smooth curved space looks like an infinitesimal neighborhood of Euclidean space, and infinitesimal displacements dx retain the vector space structure of displacements in Euclidean space. An infinitesimal neighborhood of a point can be infinitely rescaled to generate a finite vector space, called the tangent space, at the point. A vector lives in the tangent space of a point. Note that vectors do not stretch from one point to another, and vectors at different points live in different tangent spaces and so cannot be added. [Figure 1.2.1: A vector in the tangent space of a point.] For example, rescaling the infinitesimal displacement dx by dividing it by the infinitesimal scalar dt gives the velocity v = dx/dt (1.2.1), which is a vector. Similarly, we can picture the covector ∇φ as the infinitesimal contours of φ in a neighborhood of a point, infinitely rescaled to generate a finite covector in the point's cotangent space. More generally, infinitely rescaling the neighborhood of a point generates the tensor space and its algebra at the point. The tensor space contains the tangent and cotangent spaces as vector subspaces. A tensor field is something that takes tensor values at every point in a space.
  • Math 217: Multilinearity of Determinants Professor Karen Smith (C)2015 UM Math Dept Licensed Under a Creative Commons By-NC-SA 4.0 International License
    Math 217: Multilinearity of Determinants. Professor Karen Smith. (c) 2015 UM Math Dept, licensed under a Creative Commons By-NC-SA 4.0 International License. A. Let T : V → V be a linear transformation where V has dimension n. 1. What is meant by the determinant of T? Why is this well-defined? Solution note: The determinant of T is the determinant of the B-matrix of T, for any basis B of V. Since all B-matrices of T are similar, and similar matrices have the same determinant, this is well-defined; it doesn't depend on which basis we pick. 2. Define the rank of T. Solution note: The rank of T is the dimension of the image. 3. Explain why T is an isomorphism if and only if det T is not zero. Solution note: T is an isomorphism if and only if [T]_B is invertible (for any choice of basis B), which happens if and only if det T ≠ 0. 4. Now let V = R^3 and let T be rotation around the axis L (a line through the origin) by an angle θ. Find a basis for R^3 in which the matrix of T is [1, 0, 0; 0, cos θ, −sin θ; 0, sin θ, cos θ]. Use this to compute the determinant of T. Is T orthogonal? Solution note: Let v be any vector spanning L and let u1, u2 be an orthonormal basis for V = L^⊥. Rotation fixes v, which means the B-matrix in the basis (v, u1, u2) has first column [1; 0; 0].
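The rotation matrix in problem 4 can be checked numerically: its determinant is 1·(cos²θ + sin²θ) = 1, it is orthogonal, and it fixes the axis vector. A small sketch:

```python
import numpy as np

theta = 0.7  # any angle works

# Rotation about the first basis vector in the adapted basis (v, u1, u2)
R = np.array([[1.0, 0.0,            0.0],
              [0.0, np.cos(theta), -np.sin(theta)],
              [0.0, np.sin(theta),  np.cos(theta)]])

assert np.isclose(np.linalg.det(R), 1.0)   # proper rotation: det = +1
assert np.allclose(R.T @ R, np.eye(3))     # orthogonality: R^T R = I
assert np.allclose(R @ np.array([1.0, 0.0, 0.0]),
                   [1.0, 0.0, 0.0])        # the axis is fixed
```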
  • Determinants Math 122 Calculus III D Joyce, Fall 2012
    Determinants. Math 122 Calculus III. D Joyce, Fall 2012. What they are. A determinant is a value associated to a square array of numbers, that square array being called a square matrix. For example, here are determinants of a general 2 × 2 matrix and a general 3 × 3 matrix: |a, b; c, d| = ad − bc, and |a, b, c; d, e, f; g, h, i| = aei + bfg + cdh − ceg − afh − bdi. The determinant of a matrix A is usually denoted |A| or det(A). You can think of the rows of the determinant as being vectors. For the 3 × 3 matrix above, the vectors are u = (a, b, c), v = (d, e, f), and w = (g, h, i). Then the determinant is a value associated to n vectors in R^n. There's a general definition for n × n determinants. It's a particular signed sum of products of n entries in the matrix where each product is of one entry in each row and column. The two ways you can choose one entry in each row and column of the 2 × 2 matrix give you the two products ad and bc. There are six ways of choosing one entry in each row and column in a 3 × 3 matrix, and generally, there are n! ways in an n × n matrix. Thus, the determinant of a 4 × 4 matrix is the signed sum of 24, which is 4!, terms. In this general definition, half the terms are taken positively and half negatively. In class, we briefly saw how the signs are determined by permutations.
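The general definition described above, a signed sum over all n! ways of choosing one entry per row and column, can be written down directly. A naive sketch for small matrices, checked against the explicit 2 × 2 and 3 × 3 formulas:

```python
from itertools import permutations
from math import prod

def sign(perm):
    # Parity of a permutation via its inversion count
    inversions = sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def det(M):
    # Leibniz formula: signed sum over all n! products of one entry per row/column
    n = len(M)
    return sum(sign(p) * prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

M2 = [[1, 2], [3, 4]]
assert det(M2) == 1 * 4 - 2 * 3                        # ad - bc

M3 = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
expected = 1*5*10 + 2*6*7 + 3*4*8 - 3*5*7 - 1*6*8 - 2*4*10   # aei + bfg + cdh - ceg - afh - bdi
assert det(M3) == expected
```

This is O(n · n!) and only sensible for tiny matrices; it exists to make the definition concrete, not to compute with.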
  • Matrices and Tensors
    APPENDIX: MATRICES AND TENSORS. A.1. INTRODUCTION AND RATIONALE. The purpose of this appendix is to present the notation and most of the mathematical techniques that are used in the body of the text. The audience is assumed to have been through several years of college-level mathematics, which included the differential and integral calculus, differential equations, functions of several variables, partial derivatives, and an introduction to linear algebra. Matrices are reviewed briefly, and determinants, vectors, and tensors of order two are described. The application of this linear algebra to material that appears in undergraduate engineering courses on mechanics is illustrated by discussions of concepts like the area and mass moments of inertia, Mohr's circles, and the vector cross and triple scalar products. The notation, as far as possible, will be a matrix notation that is easily entered into existing symbolic computational programs like Maple, Mathematica, Matlab, and Mathcad. The desire to represent the components of three-dimensional fourth-order tensors that appear in anisotropic elasticity as the components of six-dimensional second-order tensors, and thus represent these components in matrices of tensor components in six dimensions, leads to the nontraditional part of this appendix. This is also one of the nontraditional aspects in the text of the book, but a minor one. This is described in §A.11, along with the rationale for this approach. A.2. DEFINITION OF SQUARE, COLUMN, AND ROW MATRICES. An r-by-c matrix, M, is a rectangular array of numbers consisting of r rows and c columns.
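The six-dimensional representation mentioned above is the Voigt convention, which also appears in the tensor-calculus slides at the top of this page. A minimal sketch (the index ordering and the factor of 2 on the shear strains are the common engineering convention; the appendix's own ordering may differ), verifying that the stress-strain energy contraction is preserved:

```python
import numpy as np

# Voigt ordering for a symmetric 3x3 tensor: (11, 22, 33, 23, 13, 12)
IDX = [(0, 0), (1, 1), (2, 2), (1, 2), (0, 2), (0, 1)]

def to_voigt(T, kind='stress'):
    # Strain doubles the shear entries so that
    # sigma : eps (tensor contraction) = sigma_voigt . eps_voigt
    factor = 1.0 if kind == 'stress' else 2.0
    return np.array([T[i, j] if i == j else factor * T[i, j] for i, j in IDX])

rng = np.random.default_rng(2)
S = rng.standard_normal((3, 3)); sigma = 0.5 * (S + S.T)   # symmetric "stress"
E = rng.standard_normal((3, 3)); eps = 0.5 * (E + E.T)     # symmetric "strain"

full = np.tensordot(sigma, eps)                  # double contraction sigma_ij eps_ij
voigt = to_voigt(sigma, 'stress') @ to_voigt(eps, 'strain')
assert np.isclose(full, voigt)
```

The asymmetry between the stress and strain maps is exactly the "why are strain and stress different? check the energy expression" point from the lecture slides.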
  • Glossary of Linear Algebra Terms
    INNER PRODUCT SPACES AND THE GRAM-SCHMIDT PROCESS. A. HAVENS. 1. The Dot Product and Orthogonality. 1.1. Review of the Dot Product. We first recall the notion of the dot product, which gives us a familiar example of an inner product structure on the real vector spaces R^n. This product is connected to the Euclidean geometry of R^n, via lengths and angles measured in R^n. Later, we will introduce inner product spaces in general, and use their structure to define general notions of length and angle on other vector spaces. Definition 1.1. The dot product of real n-vectors in the Euclidean vector space R^n is the scalar product · : R^n × R^n → R given by the rule (u, v) = (Σ_{i=1}^n u_i e_i, Σ_{i=1}^n v_i e_i) ↦ Σ_{i=1}^n u_i v_i. Here B_S := (e_1, ..., e_n) is the standard basis of R^n. With respect to our conventions on basis and matrix multiplication, we may also express the dot product as the matrix-vector product u^t v = [u_1 ... u_n] [v_1; ...; v_n]. It is a good exercise to verify the following proposition. Proposition 1.1. Let u, v, w ∈ R^n be any real n-vectors, and s, t ∈ R be any scalars. The Euclidean dot product (u, v) ↦ u · v satisfies the following properties. (i) The dot product is symmetric: u · v = v · u. (ii) The dot product is bilinear: (su) · v = s(u · v) = u · (sv), and (u + v) · w = u · w + v · w.
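Proposition 1.1 can be spot-checked numerically. A sketch with random vectors (a sanity check, of course, not a proof):

```python
import numpy as np

rng = np.random.default_rng(3)
u, v, w = rng.standard_normal((3, 4))   # three random vectors in R^4
s = 2.5

dot = lambda a, b: float(a @ b)         # u . v computed as the product u^t v

assert np.isclose(dot(u, v), dot(v, u))                    # symmetry
assert np.isclose(dot(s * u, v), s * dot(u, v))            # homogeneity
assert np.isclose(dot(u + v, w), dot(u, w) + dot(v, w))    # additivity
assert dot(u, u) >= 0                                      # positivity
```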
  • An Attempt to Intuitively Introduce the Dot, Wedge, Cross, and Geometric Products
    An attempt to intuitively introduce the dot, wedge, cross, and geometric products. Peeter Joot. March 21, 2008. 1 Motivation. Both the NFCM and GAFP books have axiomatic introductions of the generalized (vector, blade) dot and wedge products, but there are elements of both that I was unsatisfied with. Perhaps the biggest issue with both is that they aren't presented in a dumb enough fashion. NFCM presents but does not prove the generalized dot and wedge product operations in terms of symmetric and antisymmetric sums, but it is really the grade operation that is fundamental. You need that to define the dot product of two bivectors, for example. GAFP's axiomatic presentation is much clearer, but the definition of the generalized wedge product as the totally antisymmetric sum is a bit strange when all the differential forms books give such a different definition. Here I collect some of my notes on how one starts with the geometric product action on colinear and perpendicular vectors and gets the familiar results for two and three vector products. I may not try to generalize this, but just want to see things presented in a fashion that makes sense to me. 2 Introduction. The aim of this document is to introduce a "new" powerful vector multiplication operation, the geometric product, to a student with some traditional vector algebra background. The geometric product, also called the Clifford product, has remained a relatively obscure mathematical subject. This operation actually makes a great deal of vector manipulation simpler than is possible with the traditional methods, and provides a way to naturally express many geometric concepts.
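The decomposition these notes build toward, ab = a·b + a∧b, can be illustrated concretely for 3D vectors if we store the bivector part by its dual axial vector a×b (a representation chosen here purely for convenience; it is specific to three dimensions):

```python
import numpy as np

# Geometric product of two 3D vectors, returned as (scalar part, bivector part),
# where the bivector a^b is represented by its dual vector a x b (3D only).
def gp(a, b):
    return (np.dot(a, b), np.cross(a, b))

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])

s, B = gp(a, b)
assert s == 0.0                          # perpendicular vectors: pure bivector
assert np.allclose(B, [0.0, 0.0, 1.0])   # the plane spanned by a and b

s, B = gp(a, a)
assert s == 1.0 and np.allclose(B, 0)    # colinear vectors: pure scalar, aa = |a|^2
```

This reproduces the two special cases the notes start from: for perpendicular vectors the product is a pure bivector, and for colinear vectors it is a pure scalar.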