Journal of Applied Engineering Mathematics (JAEM), Volume 5, December 2018

A Brief Review of Tensor Operations for Students of Continuum Mechanics

Jared Park

Mechanical Engineering Department
Brigham Young University
Provo, UT 84602
[email protected]

I. ABSTRACT

The purpose of this paper is to give a brief overview of advanced mathematical concepts that will be encountered in the Continuum Mechanics course. It was created mainly for students who take the class without having completed a course in Applied Engineering Mathematics, or the equivalent. The scope of the paper is to give a concise explanation of index notation (also known as Einstein or tensor notation) and tensor operations. It is not in the scope of this paper to show the derivation of operations unless the derivation is not easily accessible (e.g. the derivative of a symmetric matrix inverse with respect to itself).

II. DEFINITION OF NOTATION

The following notation rules will be followed for tensors when not written in index notation:
• Scalars will be written as italicized characters; e.g. a, b, ψ.
• Vectors will be written as bolded lower case letters; e.g. a, b.
• Two-dimensional arrays will be written as bolded capital letters; e.g. A and B.

III. INDEX NOTATION

A. Indices

Continuum Mechanics deals with deformations of materials in a three-dimensional Euclidean space (E^3), which are described using arrays of real numbers (R). Most operations will occur using four types of tensors:
1) Zeroth order tensors, also known as scalars.
2) First order tensors, also known as vectors. These are one-dimensional arrays of 3 values that denote a direction and a magnitude.
3) Second order tensors, which are 3×3 matrices.
4) Fourth order tensors, which are four-dimensional 3×3×3×3 arrays.
While third order tensors do exist, they are rarely used in Continuum Mechanics.
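As a quick illustration (not part of the original paper), the four tensor orders above map directly onto NumPy arrays with the corresponding number of dimensions; the variable names here are arbitrary.

```python
import numpy as np

# Illustrative sketch: the four tensor orders as NumPy arrays.
a_scalar = 3.0                                 # zeroth order tensor (scalar)
a_vector = np.array([1.0, 2.0, 3.0])           # first order tensor, 3 components
A_matrix = np.arange(9.0).reshape(3, 3)        # second order tensor, 3x3
C_fourth = np.zeros((3, 3, 3, 3))              # fourth order tensor, 3x3x3x3

print(a_vector.ndim, A_matrix.ndim, C_fourth.ndim)   # prints: 1 2 4
```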

Operations performed on these different tensor sizes can become confusing, as there is no standardized notation and no simple way to deal with multidimensional arrays. To simplify the operations on these tensors, a system of index notation has been adopted that greatly simplifies the portrayal of the operations. Along with this simplification, it also allows operations involving higher order tensors to be shown, which are extremely difficult to prove without the notation.

Index notation involves index variables written as subscripts, with one index for each dimension of the tensor. For example, a vector is a one-dimensional array of 3 values and is written in index notation as a = a_i. A second order tensor has two indices, each running from 1 to 3, and is written as B = B_{ij}.

B. Summation Convention

Paired indices indicate summations of the components. For example, the dot product is defined as:

a · b = (a_1, a_2, a_3) · (b_1, b_2, b_3) = a_1 b_1 + a_2 b_2 + a_3 b_3

This can be shown using index notation as:

a_i b_i = a_1 b_1 + a_2 b_2 + a_3 b_3

As can be seen above, the operation described by the left hand side of the equation is performed on each component pair, and the results are then summed together.

This does not only apply to multiplication, but to any linear operation. For example, the divergence of a vector is written as:

∇ · a = ∂a_i/∂x_i = ∂a_1/∂x_1 + ∂a_2/∂x_2 + ∂a_3/∂x_3

Differentiation can also be shown in the indices as comma-separated indices. The index following the comma defines the dimension along which the derivative is taken. For example:

∂a_i/∂x_i = a_{i,i}
∂a_i/∂x_j = a_{i,j}
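The summation convention maps directly onto NumPy's einsum, where a repeated index letter triggers the same implied summation. This is only a minimal sketch (not from the original text):

```python
import numpy as np

# Sketch: the summation convention a_i b_i written with np.einsum.
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

dot_index = np.einsum('i,i->', a, b)   # repeated index i implies the sum a_i b_i
dot_check = a @ b                      # conventional dot product

assert np.isclose(dot_index, dot_check)
print(dot_index)                       # 32.0
```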

C. Dummy Variables and Free Indices

As shown in the examples above, whenever any subscript occurs twice in a term, a summation is implied over that index. However, when an index exists without a pair it will not sum with the components of another tensor. Paired indices are called "dummy variables" as they will disappear once the sum is complete. Unpaired indices are called free indices. As an example:

a_i b_i c_j = d_j

The index i occurs twice in the same term, and so a summation will occur between the components of those tensors. The index j is a free index, and will not disappear. These same rules apply no matter the size of the tensor, such as the multiplication of a matrix by a vector:

a_i B_{ij} = c_j

Indices should be chosen carefully to avoid confusion. A single index should not occur more than twice in the same term.

D. Special symbols

Two important symbols in index notation are the Kronecker Delta symbol and the Permutation symbol.

The Kronecker Delta symbol is defined as:

δ_{ij} = 1 if i = j, and 0 if i ≠ j

This is equivalent to the identity matrix in index notation.

The Permutation symbol is defined as:

ε_{ijk} = +1 if ijk = 123, 231, or 312
        = −1 if ijk = 321, 213, or 132
        =  0 otherwise

This symbol is used at various times, such as in the cross product of two vectors:

a × b = ε_{ijk} a_i b_j = c_k
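Both special symbols are easy to build as explicit arrays, which makes the index expressions above directly checkable. A small illustrative sketch, assuming NumPy (the array names are not from the paper):

```python
import numpy as np

delta = np.eye(3)                        # Kronecker delta, delta_ij

eps = np.zeros((3, 3, 3))                # permutation (Levi-Civita) symbol
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0                   # even permutations of 123
    eps[k, j, i] = -1.0                  # odd permutations of 123

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# Cross product: c_k = eps_ijk a_i b_j
c = np.einsum('ijk,i,j->k', eps, a, b)
assert np.allclose(c, np.cross(a, b))

# Contracting with the Kronecker delta relabels an index: delta_ij a_i = a_j
assert np.allclose(np.einsum('ij,i->j', delta, a), a)
```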

E. Order of Indices for Second Order Tensors

The first index of a second order tensor corresponds with the rows of the matrix. The second corresponds with the columns. By convention the indices will be in alphabetical order unless modified by some operation (such as transposing a matrix).

A = A_{ij} = [ A_{11} A_{12} A_{13} ; A_{21} A_{22} A_{23} ; A_{31} A_{32} A_{33} ]

F. Conclusion to Index Notation Overview

To follow the scope of this paper, index notation will not be covered further than what is shown above. If misunderstandings still exist it is recommended to refer to other sources. It is also highly recommended to practice using this notation, as understanding will grow once it is used more frequently.

IV. TENSOR OPERATIONS

This section is a review of topics covered in linear algebra, with the added step of converting all operations into index notation.

A. Scalar Product

Also known as the dot product, this operation multiplies the components of two vectors together and sums them. The solution will be a scalar.

a · b = a_1 b_1 + a_2 b_2 + a_3 b_3 = a_i b_i

B. Vector Product

Also known as the cross product, this operation is defined as:

a × b = (ε_{ijk} a_j b_k)_i = (a_2 b_3 − a_3 b_2, a_3 b_1 − a_1 b_3, a_1 b_2 − a_2 b_1) = c_i

Note that the index i was written as a subscript to the parenthetical term. This was done to show more clearly which index will remain after the operation is performed. This notation is optional.

C. Change of Index

The Kronecker Delta will change the index of a tensor when one of the indices is shared:

δ_{ij} a_i = a_j

This will also change the index of higher order tensors:

δ_{ij} b_{ik} = b_{jk}

D. Determinant of Second Order Tensors

The determinant of a second order tensor can be written as:

det[A] = ε_{ijk} A_{i1} A_{j2} A_{k3}

E. Trace of a Second Order Tensor

The trace of a second order tensor is the sum of the terms along the diagonal:

tr[A] = A_{ii} = A_{11} + A_{22} + A_{33}

F. Transpose of a Second Order Tensor

The transpose of a second order tensor can be shown as a flip of the indices:

(A^T)_{ij} = A_{ji}

G. Double Inner Product

Analogous to the scalar product, the double inner product sums the products of all the components of two second order tensors; the result is a scalar:

A : B = A_{ij} B_{ij}

H. Outer Product

The outer product of two vectors creates a second order tensor:

a ⊗ b = a b^T = a_i b_j = C_{ij}
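The operations in this section can each be written as a one-line einsum call that mirrors its index form. The following is a sketch for checking the formulas numerically (the matrices are arbitrary test values, not from the paper):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
B = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0],
              [4.0, 0.0, 1.0]])
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

eps = np.zeros((3, 3, 3))                           # permutation symbol
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[2, 1, 0] = eps[1, 0, 2] = eps[0, 2, 1] = -1.0

det_A  = np.einsum('ijk,i,j,k->', eps, A[:, 0], A[:, 1], A[:, 2])  # eps_ijk A_i1 A_j2 A_k3
tr_A   = np.einsum('ii->', A)                                      # A_ii
double = np.einsum('ij,ij->', A, B)                                # A : B = A_ij B_ij
outer  = np.einsum('i,j->ij', a, b)                                # (a outer b)_ij = a_i b_j

assert np.isclose(det_A, np.linalg.det(A))
assert np.isclose(tr_A, np.trace(A))
assert np.isclose(double, (A * B).sum())
assert np.allclose(outer, np.outer(a, b))
```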

V. CALCULUS OPERATIONS

Calculus operators are common in continuum mechanics, and it is difficult to find a good resource for all the operations that one might encounter. It is the hope of this paper that all the operations needed in the completion of Continuum Mechanics will be found here. The derivations of these operations are generally not included in this paper.

A. Euclidean Norms

The Euclidean norms for first and second order tensors are defined as:

|a| = √(a · a) = √(a_i a_i)

|A| = √(A : A) = √(A_{ij} A_{ij})

The Euclidean norms provide a measure of the magnitude of the vector or the matrix.
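A small sketch of the norms written exactly as a_i a_i and A_ij A_ij (the values are illustrative only):

```python
import numpy as np

a = np.array([3.0, 4.0, 0.0])
A = np.array([[1.0, 2.0, 0.0],
              [2.0, 5.0, 1.0],
              [0.0, 1.0, 3.0]])

norm_a = np.sqrt(np.einsum('i,i->', a, a))        # |a| = sqrt(a_i a_i)
norm_A = np.sqrt(np.einsum('ij,ij->', A, A))      # |A| = sqrt(A_ij A_ij)

assert np.isclose(norm_a, np.linalg.norm(a))          # 5.0
assert np.isclose(norm_A, np.linalg.norm(A, 'fro'))   # Frobenius norm
```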

B. Scalar Gradient

This operation finds the derivative of a scalar with respect to each of the Cartesian directions:

∇φ(x) = ∂φ(x)/∂x_i

Observe that it will return a vector.

C. Divergence

The divergence of a vector is defined as:

∇ · a = ∂a_i/∂x_i

Unlike the gradient, the divergence of a vector will be a scalar. The divergence of a second order tensor can also be found:

∇ · A = ∂A_{ij}/∂x_j = A_{ij,j}

D. Curl

The curl of a vector is defined as:

∇ × a = (ε_{ijk} ∂a_k/∂x_j)_i = ε_{ijk} a_{k,j}

E. Derivative of a Matrix with Respect to Itself

The derivative of a matrix with respect to itself creates a fourth order tensor. This can be understood by inspection: a change of the value in the matrix corresponding to the last two indices (kl) creates a unit change only when k = i and l = j. So the fourth order tensor will be a four-dimensional array of ones and zeros defined as:

∂A_{ij}/∂A_{kl} = δ_{ik} δ_{jl}

F. Derivative of a Matrix Determinant with Respect to Itself

The derivative of the determinant of a matrix with respect to itself is:

∂det[A]/∂A = det[A] A^{-T}

This definition was not converted to indicial notation above in order to highlight that it requires the inverse of the transpose. Written in index notation it gives:

∂det[A]/∂A_{kl} = det[A] A^{-1}_{lk}
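This determinant derivative (Jacobi's formula) can be checked with a central finite difference; the sketch below is illustrative only, with an arbitrary well-conditioned test matrix and step size:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((3, 3)) + 3.0 * np.eye(3)     # arbitrary, well-conditioned test matrix
h = 1e-6                                     # finite-difference step

analytic = np.linalg.det(A) * np.linalg.inv(A).T    # det[A] * A^{-T}

numeric = np.zeros((3, 3))
for k in range(3):
    for l in range(3):
        dA = np.zeros((3, 3))
        dA[k, l] = h
        numeric[k, l] = (np.linalg.det(A + dA) - np.linalg.det(A - dA)) / (2.0 * h)

assert np.allclose(numeric, analytic, atol=1e-5)
```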

G. Derivative of a Matrix Trace with Respect to Itself

The definition of this derivative is:

∂(tr[A])/∂A = ∂A_{ii}/∂A_{kl} = δ_{lk}

The derivation of this definition is included in the appendix.

H. The Derivative of a Symmetric Matrix with Respect to Itself

The derivative of a symmetric second order tensor with respect to itself is:

∂A/∂A = ∂A_{ij}/∂A_{kl} = (1/2)(δ_{ik} δ_{jl} + δ_{il} δ_{jk})

The derivation of this definition is included in the appendix.

I. The Derivative of a Symmetric Matrix Inverse with Respect to Itself

The derivative of a symmetric matrix inverse with respect to itself is:

∂A^{-1}/∂A = ∂A^{-1}_{ij}/∂A_{kl} = −(1/2)(A^{-1}_{ik} A^{-1}_{lj} + A^{-1}_{il} A^{-1}_{kj})

The derivation of this definition is included in the appendix.

VI. APPENDIX

The following derivations will be made knowing that:

∂A_{ab}/∂A_{cd} = δ_{ac} δ_{bd}    (1)

A. Derivation of the Derivative of a Matrix Trace with Respect to Itself

The trace of a matrix is defined as A_{ii}, so the derivative of the trace can be found using equation 1:

∂A_{aa}/∂A_{cd} = δ_{ac} δ_{ad} = δ_{dc}

B. Derivation of the Derivative of a Symmetric Matrix with Respect to Itself

A symmetric matrix is defined such that:

A_{ab} = (1/2)(A_{ab} + A_{ba})

Differentiating this quantity using equation 1 gives:

∂A_{ab}/∂A_{cd} = (1/2)(∂A_{ab}/∂A_{cd} + ∂A_{ba}/∂A_{cd}) = (1/2)(δ_{ac} δ_{bd} + δ_{ad} δ_{bc})    (2)
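The two fourth order tensors from subsections E and H of Section V can be assembled directly from Kronecker deltas and tested by contraction; a minimal sketch (the array names are illustrative):

```python
import numpy as np

d = np.eye(3)                                               # delta_ij

I_full = np.einsum('ik,jl->ijkl', d, d)                     # delta_ik delta_jl
I_sym  = 0.5 * (I_full + np.einsum('il,jk->ijkl', d, d))    # symmetric version

A = np.array([[1.0, 2.0, 4.0],
              [3.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])                             # arbitrary test matrix

# Contracting with A_kl recovers A (full identity) or its symmetric part.
assert np.allclose(np.einsum('ijkl,kl->ij', I_full, A), A)
assert np.allclose(np.einsum('ijkl,kl->ij', I_sym, A), 0.5 * (A + A.T))

# Setting i = j and summing gives the trace derivative, delta_kl.
assert np.allclose(np.einsum('iikl->kl', I_full), np.eye(3))
```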

C. Derivation of the Derivative of a Matrix Inverse with Respect to Itself

This derivation can be found using equations 1 and 2 and the definition of the identity matrix:

δ_{ac} = A_{ab} A^{-1}_{bc}

Differentiating both sides with respect to A_{ef} gives:

∂δ_{ac}/∂A_{ef} = ∂(A_{ab} A^{-1}_{bc})/∂A_{ef}

0_{acef} = (∂A_{ab}/∂A_{ef}) A^{-1}_{bc} + A_{ab} (∂A^{-1}_{bc}/∂A_{ef})

A_{ab} ∂A^{-1}_{bc}/∂A_{ef} = −(∂A_{ab}/∂A_{ef}) A^{-1}_{bc}

A^{-1}_{ga} A_{ab} ∂A^{-1}_{bc}/∂A_{ef} = −A^{-1}_{ga} (∂A_{ab}/∂A_{ef}) A^{-1}_{bc}

Using equation 2 for ∂A_{ab}/∂A_{ef} (since A is symmetric):

δ_{gb} ∂A^{-1}_{bc}/∂A_{ef} = −(1/2)(δ_{ae} δ_{bf} + δ_{af} δ_{be}) A^{-1}_{ga} A^{-1}_{bc}

∂A^{-1}_{gc}/∂A_{ef} = −(1/2)(A^{-1}_{ge} A^{-1}_{fc} + A^{-1}_{gf} A^{-1}_{ec})
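As a numerical consistency check (not part of the original derivation), contracting this fourth order result with a small symmetric increment dA should reproduce the first order change of the inverse, −A^{-1} dA A^{-1}. A sketch under that assumption, with arbitrary test matrices:

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 5.0]])                    # symmetric, invertible test matrix
Ainv = np.linalg.inv(A)

# D_gcef = -1/2 (Ainv_ge Ainv_fc + Ainv_gf Ainv_ec)
D = -0.5 * (np.einsum('ge,fc->gcef', Ainv, Ainv)
            + np.einsum('gf,ec->gcef', Ainv, Ainv))

dA = 1e-6 * np.array([[0.0, 1.0, 0.0],
                      [1.0, 0.0, 2.0],
                      [0.0, 2.0, 0.0]])            # small symmetric perturbation

dAinv_from_D = np.einsum('gcef,ef->gc', D, dA)     # D_gcef dA_ef
dAinv_linear = -Ainv @ dA @ Ainv                   # first order change of the inverse

assert np.allclose(dAinv_from_D, dAinv_linear)
```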
