
Ruhr-University Bochum, Faculty of Civil and Environmental Engineering, Institute of Mechanics

A Small Compendium on Vector and Tensor Calculus

Klaus Hackl Mehdi Goodarzi

2010

Contents

Foreword

1 Vectors and tensors algebra
1.1 Scalars and vectors
1.2 Geometrical operations on vectors
1.3 Fundamental properties
1.4 Vector decomposition
1.5 Dot product of vectors
1.6 Index notation
1.7 Tensors of second order
1.8 Dyadic product
1.9 Tensors of higher order
1.10 Coordinate transformation
1.11 Eigenvalues and eigenvectors
1.12 Invariants
1.13 Singular value decomposition
1.14 Cross product
1.15 Triple product
1.16 Dot and cross product of vectors: miscellaneous formulas

2 Tensor calculus
2.1 Tensor functions & tensor fields
2.2 Derivative of tensor functions
2.3 Derivatives of tensor fields, Gradient, Nabla operator
2.4 Integrals of tensor functions
2.5 Line integrals
2.6 Path independence
2.7 Surface integrals
2.8 Divergence and curl operators
2.9 Laplace's operator
2.10 Gauss' and Stokes' theorems
2.11 Miscellaneous formulas and theorems involving nabla
2.12 Orthogonal curvilinear coordinate systems


2.13 Differentials
2.14 Differential transformation
2.15 Differential operators in curvilinear coordinates

Foreword

A quick review of vector and tensor algebra and analysis is presented. The reader is supposed to have sufficient familiarity with the subject, and the material is included as an entry point as well as a reference for the subsequent chapters.

Chapter 1

Vectors and tensors algebra

Algebra is concerned with operations defined on sets with certain properties. Tensor and vector algebra deals with properties of and operations in the set of tensors and vectors. Throughout this section, together with algebraic aspects, we also consider the geometry of tensors to obtain further insight.

1.1 Scalars and vectors

There are physical quantities that can be specified by a single real number, like temperature and speed. These are called scalars. Other quantities whose specification requires magnitude (a positive number) and direction, for instance velocity and force, are called vectors. A typical illustration of a vector is an arrow, that is, a directed line segment. Given an appropriate unit, the length of the arrow reflects the magnitude of the vector. In the present context the precise terminology would be Euclidean or geometric vector, to acknowledge that in mathematics, vector has a broader meaning with which we are not concerned here.

It is instructive to have a closer look at the meaning of direction of a vector. Given a vector v and a directed line L (Fig. 1.1: Projection), the projection of the vector onto that direction is a scalar vL = ‖v‖ cos θ, in which θ is the angle between v and L. Therefore, a vector can be thought of as a function that assigns a scalar to each direction, v(L) = vL. This notion can be naturally extended to understand tensors. A tensor (of second order) is a function that assigns vectors to directions in the sense of projection, T(L) = TL. In other words, the projection of a tensor T on a direction L is a vector TL. This looks rather abstract, but its meaning will become clear in the sequel when we explain Cauchy's formula, in which the dot product of stress (a tensor) and area (a vector) yields the traction force (a vector).

Notation 1. Vectors and tensors are denoted by bold-face letters like A and v, and the magnitude of a vector by ‖v‖ or simply by its light-face italic v. In handwriting an underbar is used to denote tensors and vectors, like v̲.


1.2 Geometrical operations on vectors

Definition 2 (Equality of vectors). Two vectors are equal if they have the same magnitude and direction.

Definition 3 (Summation of vectors). The sum of any two vectors v and w is a vector u = v + w formed by placing the initial point of w on the terminal point of v and joining the initial point of v to the terminal point of w (Fig. 1.2(a,b)). This is equivalent to the parallelogram law for vector addition, as shown in Fig. 1.2(c). Subtraction of vectors, v − w, is defined as v + (−w).

Fig. 1.2: Vector summation.

Definition 4 (Multiplication of a vector by a scalar). For any real number (scalar) α and vector v, their multiplication is a vector αv whose magnitude is |α| times the magnitude of v and whose direction is the same as v if α > 0 and opposite to v if α < 0. If α = 0 then αv = 0.

1.3 Fundamental properties

The set of geometric vectors V together with the just defined summation and multiplication operations is subject to the following properties.

Property 5 (Vector summation).

1. v + 0 = 0 + v = v (existence of additive identity)
2. v + (−v) = (−v) + v = 0 (existence of additive inverse)
3. v + w = w + v (commutative law)
4. u + (v + w) = (u + v) + w (associative law)

Property 6 (Multiplication with scalars).

1. α (v + w) = αv + αw (distributive law)
2. (α + β) v = αv + βv (distributive law)
3. α (βv) = (αβ) v (associative law)
4. 1 v = v (multiplication with the scalar identity)

A vector can be rigorously defined as an element of a vector space.

Definition 7 (Real vector space). The mathematical system (V, +, R, ∗) is called a real vector space, or simply a vector space, where V is the set of geometrical vectors, + is vector summation with the four properties mentioned, R is the set of real numbers furnished with summation and multiplication of numbers, and ∗ stands for multiplication of a vector with a real number (scalar) having the above mentioned properties.

The reader may wonder why this definition is needed. Remember that a quantity whose expression requires magnitude and direction is not necessarily a vector(!). A well known example is finite rotation, which has both magnitude and direction but is not a vector, because it does not follow the properties of vector summation as defined before. Therefore, having magnitude and direction is not sufficient for a quantity to be identified as a vector; it must also obey the rules of summation and multiplication with scalars.

Definition 8 (Unit vector). A unit vector is a vector of unit magnitude. For any vector v its unit vector is referred to by ev or v̂, which is equal to v̂ = v/‖v‖. One would say that the unit vector carries the information about direction. Therefore magnitude and direction, as the constituents of a vector, are multiplicatively decomposed as v = v v̂.

1.4 Vector decomposition

Writing a vector v as the sum of its components is called decomposition. Components of a vector are its projections onto the coordinate axes (Fig. 1.3)

v = v1e1 + v2e2 + v3e3 , (1.1)

where {e1, e2, e3} are the unit vectors of the coordinate axes, {v1, v2, v3} are the components of v, and {v1e1, v2e2, v3e3} are the component vectors of v (Fig. 1.3: Components). Since a vector is specified by its components relative to a given coordinate system, it can alternatively be denoted in array notation as

v ≡ (v1 v2 v3)^T . (1.2)

Using the Pythagorean theorem, the norm (magnitude) of a vector v in three-dimensional space can be written as

‖v‖ = (v1² + v2² + v3²)^{1/2} . (1.3)

Remark 9. Representation of a geometrical vector by its equivalent array (or tuple) is the underlying idea of analytic geometry, which basically paves the way for an algebraic treatment of geometry.
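Numerical sanity checks of these formulas are straightforward in any array language. The following minimal sketch (Python with NumPy; the vector values are made up for the example) reproduces the decomposition (1.1) and the Pythagorean norm (1.3).

```python
import numpy as np

# Components of v relative to the Cartesian basis {e1, e2, e3} (example values)
v = np.array([1.0, 2.0, 2.0])

# Standard basis vectors: the rows of the identity matrix are e1, e2, e3
e = np.eye(3)

# Decomposition v = v1 e1 + v2 e2 + v3 e3, eq. (1.1)
v_reconstructed = sum(v[i] * e[i] for i in range(3))
assert np.allclose(v, v_reconstructed)

# Norm by the Pythagorean theorem, eq. (1.3); 3.0 for this example
norm_v = np.sqrt(np.sum(v**2))
assert np.isclose(norm_v, np.linalg.norm(v))
```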

1.5 Dot product of vectors

The dot product (also called inner or scalar product) of two vectors v and w is defined as

v · w = v w cos θ , (1.4)

where θ is the angle between v and w.

Property 10 (Dot product).

1. v · w = w · v (commutativity)
2. u · (αv + βw) = α u · v + β u · w (linearity)
3. v · v ≥ 0 and v · v = 0 ⇔ v = 0 (positivity)

Geometrical interpretation of dot product

The definition of the dot product (1.4) can be restated as

v · w = (v v̂) · (w ŵ) = v (v̂ · w) = w (v · ŵ) = v w cos θ , (1.5)

which says that the dot product of v and w equals the projection of w on the direction of v times the magnitude of v (see Fig. 1.1), or the other way around. Also, dividing the leftmost and rightmost terms of (1.5) by vw gives

v̂ · ŵ = cos θ , (1.6)

which says that the dot product of unit vectors (standing for directions) determines the angle between them. A simple and yet interesting result is obtained for the components of a vector v:

v1 = v · e1 , v2 = v · e2 , v3 = v · e3. (1.7)

Important results

1. In any orthonormal coordinate system including Cartesian coordinates it holds that

e1 · e1 = 1 , e1 · e2 = 0 , e1 · e3 = 0

e2 · e1 = 0 , e2 · e2 = 1 , e2 · e3 = 0 (1.8)

e3 · e1 = 0 , e3 · e2 = 0 , e3 · e3 = 1 ,

2. therefore ∗

v · w = v1w1 + v2w2 + v3w3 . (1.9)

3. For any two nonzero vectors a and b, a · b = 0 if and only if a is normal to b.

4. Norm of a vector v can be obtained by

‖v‖ = (v · v)^{1/2} . (1.10)

∗ Note that v · w = (v1e1 + v2e2 + v3e3) · (w1e1 + w2e2 + w3e3) = v1w1 e1 · e1 + v1w2 e1 · e2 + ··· .

1.6 Index notation

A typical tensorial calculation demands component-wise arithmetic derivations, which is usually a tedious task. As an example, let us try to expand the left-hand side of (1.9):

v · w = (v1e1 + v2e2 + v3e3) · (w1e1 + w2e2 + w3e3)

= v1w1e1 · e1 + v1w2e1 · e2 + v1w3e1 · e3

+ v2w1e2 · e1 + v2w2e2 · e2 + v2w3e2 · e3

+ v3w1 e3 · e1 + v3w2 e3 · e2 + v3w3 e3 · e3 , (1.11)

which together with equation (1.8) yields the right-hand side of (1.9). As can be seen, considerable effort is required even for a very basic expansion. The so-called index notation was developed for simplification. Index notation uses a parametric index instead of explicit indices. If, for example, the letter i is used as an index in vi, it addresses any of the possible values i = 1, 2, 3 in three-dimensional space, therefore representing any of the components of the vector v. Then {vi} is equivalent to the set of all vi's, namely {v1, v2, v3}.

Notation 11. When an index appears twice in one term, that term is summed over all possible values of the index. This is called Einstein's summation convention.

To clarify this, let us have a look at the trace of a 3 × 3 matrix A: tr(A) = Σ_{i=1}^{3} Aii = A11 + A22 + A33. Based on Einstein's convention one could simply write tr(A) = Aii, because the index i appears twice and shall be summed over all possible values i = 1, 2, 3, hence Aii = A11 + A22 + A33. As another example, a vector v can be written as v = vi ei, because the index i appears twice and we can write vi ei = v1e1 + v2e2 + v3e3.

Remark 12. A repeated index (which is summed over) can be freely renamed. For example, viwiAmn can be rewritten as vkwkAmn, because in fact viwiAmn = Bmn and the index i does not appear as one of the free indices, which can vary among {1, 2, 3}. Note that renaming i to m or n would not be possible, as it would change the meaning.
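NumPy's einsum function mirrors Einstein's convention literally: every index repeated in its subscript string is summed over. A minimal sketch with made-up values:

```python
import numpy as np

A = np.arange(9.0).reshape(3, 3)   # example 3x3 matrix
v = np.array([1.0, 2.0, 3.0])      # example vectors
w = np.array([4.0, 5.0, 6.0])

tr_A   = np.einsum('ii', A)        # A_ii = A11 + A22 + A33
dot_vw = np.einsum('i,i', v, w)    # v_i w_i, eq. (1.9)

assert np.isclose(tr_A, np.trace(A))
assert np.isclose(dot_vw, v @ w)
```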

1.7 Tensors of second order

Based on the definition of the inner product, length and angle, which are the building blocks of geometry, become completely general concepts (Eqs. (1.10) and (1.6)). That is why an n-dimensional vector space (Def. 7) together with an inner product operation (Prop. 10) is called a Euclidean space, denoted by E^n. Since we are basically unable to draw geometrical scenes in dimensions higher than three (or maybe two, in fact!), it may seem just too abstract to generalize classical geometry to higher dimensions. If you remember that physical space-time can be best expressed in a four-dimensional geometrical framework, then you might agree that this limitation of dimensions is more likely to be a restriction of our perception rather than a part of physical reality.

Another generalization concerns projection. Projection can be seen as a geometrical transformation that can be defined based on the inner product. Up to three dimensions vector projection is easily understood. Now if we allow vectors to be entities belonging to an arbitrary vector space endowed with an inner product, one can think of the inner product as the means of projection of a vector in the space onto another vector or direction. Note that we have put no restrictions on the vector space, such as its dimension. By the way, one could ask: what do we need all these cumbersome generalizations for? The answer is to understand more complex constructs based on the basic understanding we already have of the components of a geometrical vector.

So far we have seen that a scalar can be expressed by a single real number. This can be seen as: scalars have no components on the three coordinate axes, so they require 3^0 = 1 real number to be specified. On the other hand, a vector v has three components, which are its projections onto the three coordinate axes Xi, each obtained by vi = v · ei. Therefore, v requires 3^1 scalars to be specified. There are more complex constructs which are called second order tensors. The three projections of a second order tensor A onto the coordinate axes are obtained by the inner product of A with the basis vectors as Ai = A · ei. These projections are vectors (not scalars!), and since each vector is determined by three scalars (its components), a second order tensor requires 3 × 3 = 3^2 real numbers to be specified. In array form the three components of a tensor A are vectors denoted by

A = (A1 A2 A3) , (1.12)

where each vector Ai has three components,

Ai = (A1i A2i A3i)^T , (1.13)

therefore A is written in matrix form as

    [ A11 A12 A13 ]
A = [ A21 A22 A23 ] , (1.14)
    [ A31 A32 A33 ]

where each vector Ai is written component-wise in one column of the matrix. In analogy to the notation

v = viei (1.15) for vectors (based on Einstein's summation convention), a tensor can be written as

A = Aij ei ej or A = Aij ei ⊗ ej , (1.16)

with ⊗ being the dyadic product. Note that we usually use the first notation for brevity.

1.8 Dyadic product

Dyadic product of two vectors v and w is a tensor A such that

Aij = vi wj . (1.17)

Property 13. The dyadic product has the following properties:

1. v ⊗ w ≠ w ⊗ v
2. u ⊗ (αv + βw) = α u ⊗ v + β u ⊗ w , (αu + βv) ⊗ w = α u ⊗ w + β v ⊗ w
3. (u ⊗ v) · w = (v · w) u , u · (v ⊗ w) = (u · v) w

From equation (1.16), considering the above properties, we obtain

Aij = ei · A · ej , (1.18) which is the analog of equation (1.7). In equation (1.16) the dyads eiej are basis tensors which are given in matrix notation as

 1 0 0   0 1 0   0 0 1  e1e1 =  0 0 0  e1e2 =  0 0 0  e1e3 =  0 0 0  (1.19) 0 0 0 0 0 0 0 0 0  0 0 0   0 0 0   0 0 0  e2e1 =  1 0 0  e2e2 =  0 1 0  e2e3 =  0 0 1  (1.20) 0 0 0 0 0 0 0 0 0  0 0 0   0 0 0   0 0 0  e3e1 =  0 0 0  e3e2 =  0 0 0  e3e3 =  0 0 0  (1.21) 1 0 0 0 1 0 0 0 1 Note that not every second order tensor can be expressed as dyadic product of two vectors in general, however every second order tensor can be written as linear combination of dyadic products of vectors, as in its most common form given by the decomposition onto a given coordinate system as A = Aijeiej.

1.9 Tensors of higher order

The dyadic product provides a convenient passage to the definition of higher order tensors. We know that the dyadic product of two vectors (first-order tensors) yields a second-order tensor.

Generalization of the idea is immediate for a set of n given vectors v1,..., vn as

T = v1v2 ··· vn (1.22) which is a tensor of the nth order. To make sense of it, let's take a look at inner product of the above tensor with vector w

T · w = (v1v2 ··· vn) · w = (vn · w)(v1v2 ··· vn−1) , (1.23)

where (vn · w) is of course a scalar and v1 ··· vn−1 a tensor of (n − 1)th order. This means the tensor T maps a first-order tensor (a vector) to a tensor of (n − 1)th order. We re-emphasize that not all tensors can be written as dyadic products of vectors; however, tensors can be decomposed on the basis of a coordinate system. The decomposition of the nth-order tensor T is written as

T = Ti1i2···in ei1 ei2 ··· ein , (1.24)

where i1, ..., in ∈ {1, 2, 3} and ei1 ei2 ··· ein is a basis tensor of nth order. It should be clear that visualization of e.g. third-order tensors would require three-dimensional arrays, and so on.

Remark 14. A vector is a first-order tensor and a scalar is a zeroth-order tensor. The order of a tensor is equal to the number of its indices in index notation. Furthermore, a tensor of order n requires 3^n scalars to be specified. We usually address a second order tensor by simply calling it a tensor; the meaning should be clear from the context.

If an mth order tensor A is multiplied with an nth order tensor B we get an (m + n)th order tensor C such that

C = A ⊗ B = (Ai1···im ei1 ··· eim ) ⊗ (Bj1···jn ej1 ··· ejn )

= Ai1···im Bj1···jn ei1 ··· eim ej1 ··· ejn (1.25) = Ck1···km+n ek1 ··· ekm+n .

Algebra and properties of higher order tensors

On the set of nth-order tensors a summation operation can be defined as

A + B = (Ai1···in + Bi1···in) ei1 ··· ein , (1.26)

and multiplication with a scalar as

αA = (αAi1···in) ei1 ··· ein , (1.27)

having properties similar to 5 and 6. In fact, tensors of order n are elements of a vector space of dimension 3^n.

Exercise 1. Show that the dimension of the vector space consisting of all nth-order tensors is 3^n. (Hint: find 3^n independent tensors that can span all elements of the space.)

Generalization of the inner product

The inner (dot) product of two tensors A = Ai1···im ei1 ··· eim and B = Bj1···jn ej1 ··· ejn is given as

A · B = (Ai1···im ei1 ··· eim) · (Bj1···jn ej1 ··· ejn)
      = Ai1···im−1k Bkj2···jn ei1 ei2 ··· eim−1 ej2 ej3 ··· ejn , (1.28)

which is a tensor of (m + n − 2)th order. Note that the innermost index k is common and summed. A generalization of this idea follows.

Definition 15 (Contraction product). For any two tensors A of mth order and B of nth order, their r-contraction product, for r ≤ min{m, n}, yields a tensor C = A ·ʳ B of order (m + n − 2r) such that

C = Ai1···im−r k1···kr Bk1···kr jr+1···jn ei1 ··· eim−r ejr+1 ··· ejn . (1.29)

Note that the r innermost indices are common. The most frequently used case of the contraction product is the so-called double contraction, denoted by A : B ≡ A ·² B.

Exercise 2. Write the identity σ = C : ε in index notation, where C is a fourth-order tensor and σ and ε are second-order tensors. Then expand the components σ12 and σ11 for the case of two-dimensional Cartesian coordinates.

Remark 16. The contraction product of two tensors is not symmetric in general, i.e. A ·ʳ B ≠ B ·ʳ A, except when r = 1 and A and B are first-order tensors (vectors), or when A and B have certain properties.
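As an illustration of the double contraction in the spirit of Exercise 2, the sketch below contracts a fourth-order tensor with a second-order one; the isotropic-elasticity form of C and the values of the Lamé constants are assumptions made purely for the example.

```python
import numpy as np

lam, mu = 1.0, 0.5                 # assumed Lamé constants (example values)
d = np.eye(3)                      # Kronecker delta

# Isotropic stiffness C_ijkl = lam d_ij d_kl + mu (d_ik d_jl + d_il d_jk)
C = (lam * np.einsum('ij,kl->ijkl', d, d)
     + mu * (np.einsum('ik,jl->ijkl', d, d) + np.einsum('il,jk->ijkl', d, d)))

eps = np.array([[1.0, 0.2, 0.0],
                [0.2, 0.5, 0.0],
                [0.0, 0.0, 0.3]])  # symmetric example strain

sigma = np.einsum('ijkl,kl->ij', C, eps)   # sigma_ij = C_ijkl eps_kl

# For this isotropic C the result must equal lam tr(eps) I + 2 mu eps
assert np.allclose(sigma, lam * np.trace(eps) * d + 2 * mu * eps)
```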

Kronecker delta

The second order unity tensor or identity tensor, denoted by I, is defined by the following fundamental property.

Property 17. For every vector v it holds that

I · v = v · I = v . (1.30)

The representation of the unity tensor in index notation,

δij = 1 if i = j , δij = 0 if i ≠ j , (1.31)

is called Kronecker's delta. Its expansion for i, j ∈ {1, 2, 3} gives, in matrix form,

(δij) = [ 1 0 0 ; 0 1 0 ; 0 0 1 ] . (1.32)

Some important properties of δij are:

1. Orthonormality of the basis vectors {ei} in equation (1.8), can be abbreviated as

ei · ej = δij . (1.33) 2. The so called index exchange rule is given as

uiδij = uj or in general Aij···m···zδmn = Aij···n···z . (1.34) 3. Trace of Kronecker delta

δii = 3 . (1.35)

Exercise 3. Proofs are left to the reader as practice in index notation.

Totally antisymmetric tensor

The so-called totally antisymmetric tensor or permutation symbol, also named after the Italian mathematician Tullio Levi-Civita as the Levi-Civita symbol, is a third-order tensor defined as

εijk = +1 , if (i, j, k) is an even permutation of (1, 2, 3)
εijk = −1 , if (i, j, k) is an odd permutation of (1, 2, 3) (1.36)
εijk = 0 , otherwise, i.e. if i = j or j = k or k = i.

There are important properties of, and relations for, the permutation symbol, as follows.

1. It is immediate from the definition that

ijk = −jik = −ikj = −kji = kij = jki . (1.37)

2. Determinant of a 3 × 3 matrix A can be expressed by

det(A) = εijk A1i A2j A3k . (1.38)

3. The general relationship between the Kronecker delta and the permutation symbol reads

εijk εlmn = det [ δil δim δin ; δjl δjm δjn ; δkl δkm δkn ] (1.39)

(rows separated by semicolons), and as special cases

εjkl εjmn = δkm δln − δkn δlm , (1.40)

εijk εijm = 2 δkm . (1.41)

Exercise 4. Verify the identities in (1.40) and (1.41) based on (1.39).
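The identities of Exercise 4 can also be verified by brute force. A sketch that builds the permutation symbol from definition (1.36) and checks (1.40) and (1.41):

```python
import numpy as np
from itertools import permutations

# Permutation symbol, eq. (1.36): nonzero only for distinct (i, j, k)
eps = np.zeros((3, 3, 3))
for i, j, k in permutations(range(3)):
    # determinant of the row-permuted identity gives the permutation's sign
    eps[i, j, k] = np.sign(np.linalg.det(np.eye(3)[[i, j, k]]))

d = np.eye(3)

# Eq. (1.40): e_jkl e_jmn = d_km d_ln - d_kn d_lm
lhs = np.einsum('jkl,jmn->klmn', eps, eps)
rhs = np.einsum('km,ln->klmn', d, d) - np.einsum('kn,lm->klmn', d, d)
assert np.allclose(lhs, rhs)

# Eq. (1.41): e_ijk e_ijm = 2 d_km
assert np.allclose(np.einsum('ijk,ijm->km', eps, eps), 2 * d)
```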

1.10 Coordinate transformation

Either seen as an empirical fact or as a mathematical postulate, it is well accepted that:

Definition 18 (Principle of frame indifference). A mathematical model of any physical phenomenon must be formulated without reference to any particular coordinate system.

It is a merit of vectors and tensors that they provide mathematical models with such generality. However, in solving real-world problems everything shall finally be expressed in terms of scalars for computation, and therefore the introduction of a coordinate system is inevitable. The immediate question is how to transform vectorial and tensorial quantities from one coordinate system to another.

Suppose that the vector v is decomposed in Cartesian coordinates X1X2X3 with basis {e1, e2, e3} (Fig. 1.4: Transformation of coordinates):

v = v1e1 + v2e2 + v3e3 . (1.42)

Since a vector is completely determined by its components, we should be able to find the decomposition of v in a second coordinate system X1'X2'X3' with basis {e1', e2', e3'} in terms of {v1, v2, v3}. Using equation (1.7) we have

v1' = v · e1' , v2' = v · e2' , v3' = v · e3' . (1.43)

Substitution from (1.42) into the latter equation gives

v1' = (v1e1 + v2e2 + v3e3) · e1' = (e1' · e1) v1 + (e1' · e2) v2 + (e1' · e3) v3
v2' = (v1e1 + v2e2 + v3e3) · e2' = (e2' · e1) v1 + (e2' · e2) v2 + (e2' · e3) v3
v3' = (v1e1 + v2e2 + v3e3) · e3' = (e3' · e1) v1 + (e3' · e2) v2 + (e3' · e3) v3 . (1.44)

Restatement in array notation reads

 0   0 0 0    v1 e1 · e1 e1 · e2 e1 · e3 v1 0 0 0 0 (1.45)  v2  =  e2 · e1 e2 · e2 e2 · e3   v2  , 0 0 0 0 v3 e3 · e1 e3 · e2 e3 · e3 v3 and in index notation 0 0  (1.46) vj = ej · ei vi .   Then again, introducing transformation tensor 0 we can rewrite the transfor- Q = ej · ei mation rule as v0 = Qv . (1.47) Property 19 (Orthonormal tensor). Transformation Q is geometrically a rotation map with the property QT Q = I or Q−1 = QT (1.48) which is called orthonormality.

Transformation of tensors

Having the decomposition of an nth-order tensor A in the X1X2X3 coordinates as

A = Ai1···in ei1 ··· ein , (1.49)

we look for its components in other coordinates X1'X2'X3'. Following the same approach as for vectors, and using equation (1.29),

A'j1···jn = A ·ⁿ (ej1' ··· ejn')
         = Ai1···in (ei1 ··· ein) ·ⁿ (ej1' ··· ejn')
         = (ei1 · ej1') ··· (ein · ejn') Ai1···in . (1.50)

For the special case when A is a second order tensor we can write, in matrix notation,

A' = Q A Q^T , Qji = ej' · ei . (1.51)

Remark 20. The above transformation rules are substantial. They are so fundamental that, as an alternative, we could have defined vectors and tensors based on their transformation properties instead of the geometric and algebraic approach.

Exercise 5. Find the transformation matrix for a rotation by angle θ around X1 in three-dimensional Cartesian coordinates X1X2X3.

Exercise 6. For a symmetric second order tensor A, derive the transformation formula with the transformation given as

        [  cos ϕ   sin ϕ   0 ]
(Qij) = [ −sin ϕ   cos ϕ   0 ] .
        [    0       0     1 ]
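A numerical sketch of the transformation rule (1.51) using the Q of Exercise 6 (the angle and the tensor are made-up example values); it also checks the orthonormality (1.48) and previews the invariance of the trace discussed in Section 1.12.

```python
import numpy as np

phi = np.deg2rad(30.0)                 # example rotation angle
c, s = np.cos(phi), np.sin(phi)
Q = np.array([[  c,   s, 0.0],
              [ -s,   c, 0.0],
              [0.0, 0.0, 1.0]])

# Orthonormality, eq. (1.48): Q^T Q = I
assert np.allclose(Q.T @ Q, np.eye(3))

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 0.0, 1.0]])        # symmetric example tensor

A_prime = Q @ A @ Q.T                  # eq. (1.51)

# The trace is unchanged by the transformation (an invariant, Sec. 1.12)
assert np.isclose(np.trace(A_prime), np.trace(A))
```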

1.11 Eigenvalues and eigenvectors

The question of transformation from one coordinate system to another has been answered. In practical situations we face a second question, which is the choice of coordinate system in order to simplify the representation of the formulation at hand. A very basic example is explained here. Suppose that we want to write a vector v in components. With an arbitrary choice of reference the answer is v = (v1 v2 v3)^T, with v1, v2, v3 ∈ R. However, with a choice of reference coordinates such that e.g. the X1 axis coincides with the vector v, we come up with v = (v1 0 0)^T. This second representation of the vector is simplified to one component only, which is in fact a scalar, and therefore one-dimensional. Of course, in more complicated situations an appropriate choice of coordinate system can make a much greater difference. We will see that the consequences of our rather practical question turn out to be theoretically subtle. The next step in complexity is how to simplify tensors with a proper choice of coordinate system.

Definition 21. Given a second order tensor A, every nonzero vector ψ is called an eigenvector of A if there exists a real number λ such that

A · ψ = λψ , (1.52)

where λ is the corresponding eigenvalue of ψ. In general, a tensor multiplied by a vector gives another vector with a different magnitude and direction; interestingly, however, there are vectors ψ on which the tensor acts like a scalar λ, as laid down by equation (1.52). Now suppose that a coordinate system is chosen so that the X1 axis coincides with ψ. It is clear that

A · e1 = A11e1 + A21e2 + A31e3 , and also, going back to (1.52),

A · e1 = λe1 ; (1.53)

comparing both equations gives

A11 = λ , A21 = 0 ,A31 = 0 . (1.54)

It is possible to show that for non-degenerate symmetric tensors in R³ there are three orthonormal eigenvectors, which establish an orthonormal coordinate system called the principal coordinates. Following equation (1.54), a stress tensor σ in principal coordinates takes the diagonal form

σ = [ σ11 0 0 ; 0 σ22 0 ; 0 0 σ33 ] , (1.55)

where its diagonal elements are eigenvalues corresponding to the principal directions, which are parallel to the eigenvectors. The following theorem generalizes these ideas to n dimensions.

Theorem 22 (Spectral decomposition). Let A be a second order tensor in n-dimensional space, with ψ1, ···, ψn being its n linearly independent eigenvectors and λ1, ···, λn their corresponding eigenvalues. Then A can be written as

A = ΨΛΨ^{−1} , (1.56)

where Ψ is the n × n matrix having the ψ's as its columns, Ψ = (ψ1, ···, ψn), and Λ is an n × n diagonal matrix with ith diagonal element λi.

Remark 23. In the case of symmetric tensors, equation (1.56) takes the form

A = ΨΛΨ^T , (1.57)

because for orthonormal eigenvectors the matrix Ψ is an orthonormal matrix, which is in fact a rotation transformation.

Remark 24. The diagonal matrix Λ is the representation of A in principal coordinates. Since it is diagonal, one can write

Λ = Σ_{i=1}^{n} λi ni ni . (1.58)

Suppose that we want to derive some relationship or develop a model based on tensorial expressions. If the task is accomplished in principal coordinates it takes much less effort and, at the same time, the results are completely general, i.e. they hold in any other coordinate system. This is often done in material modeling and in other branches of science as well. As a final step we would like to solve equation (1.52). It can be recast in the form

(A − λI) ψ = 0 . (1.59)

This equation has nonzero solutions if

det(A − λI) = 0 , (1.60)

which is called the characteristic equation of the tensor A. Let us have a look at its expansion:

| A11 − λ    A12        A13     |
| A21        A22 − λ    A23     | = 0 , (1.61)
| A31        A32        A33 − λ |

which gives

λ³ − (A11 + A22 + A33) λ² + ( |A22 A23; A32 A33| + |A11 A13; A31 A33| + |A11 A12; A21 A22| ) λ − |A11 A12 A13; A21 A22 A23; A31 A32 A33| = 0 , (1.62)

where |·| denotes the determinant of the enclosed submatrix (rows separated by semicolons). This is a cubic equation with at most three distinct roots.
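For symmetric tensors the whole eigenvalue machinery is available in NumPy; a minimal sketch of equation (1.52) and of the spectral decomposition (1.57) for a made-up symmetric tensor:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])   # symmetric example tensor

lam, Psi = np.linalg.eigh(A)      # eigenvalues and orthonormal eigenvectors

# Eq. (1.52): A psi = lambda psi for each eigenpair (columns of Psi)
for i in range(3):
    assert np.allclose(A @ Psi[:, i], lam[i] * Psi[:, i])

# Spectral decomposition, eq. (1.57): A = Psi Lambda Psi^T
assert np.allclose(A, Psi @ np.diag(lam) @ Psi.T)
```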

1.12 Invariants

Components of a vector v change from one coordinate system to another; however, its length remains the same in all coordinates, because length is a scalar. This is interesting because one can write the length of a vector in terms of its components in any coordinate system as ‖v‖ = (vi vi)^{1/2}, which means there is a combination of components that does not change under coordinate transformation while the components themselves do. Any function of the components f(v1, v2, v3) that does not change under coordinate transformation is called an invariant. This independence of the coordinate system makes invariants physically important, and the reason should become clear soon. Tensors of higher order also have invariants. For a second order tensor A, looking back at equation (1.62), the values λ, λ² and λ³ are scalars and stay unchanged under coordinate transformation; therefore the coefficients of the equation must remain unchanged as well, and they are all invariants of the tensor A, denoted by

IA = A11 + A22 + A33 , (1.63)

IIA = |A22 A23; A32 A33| + |A11 A13; A31 A33| + |A11 A12; A21 A22| , (1.64)

IIIA = |A11 A12 A13; A21 A22 A23; A31 A32 A33| , (1.65)

where IA, IIA and IIIA are called the first, second and third principal invariants of A. The above formulas look familiar: the first invariant is the trace, the second is the sum of the principal minors, and the third is the determinant. Of course, we can also express the invariants in terms of the eigenvalues of the tensor:

IA = tr(A) = λ1 + λ2 + λ3 (1.66)
IIA = (1/2) [ (tr(A))² − tr(A²) ] = λ1λ2 + λ2λ3 + λ3λ1 (1.67)
IIIA = det(A) = λ1λ2λ3 . (1.68)

In material modeling we often look for a scalar-valued function of a tensor, in the form φ(A). Since the function value φ is a scalar, it must be invariant under transformation of coordinates. To fulfill this requirement the function is formulated in terms of the invariants of A; then it is invariant itself. Therefore the most natural form of such a formulation would be

φ = φ(IA, IIA, IIIA) . (1.69) This will be used in the sequel when hyper-elastic material models are explained.
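The principal invariants (1.66)-(1.68) are easy to check against the eigenvalues for any example tensor; a minimal sketch:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])   # symmetric example tensor

I1 = np.trace(A)                                  # eq. (1.66)
I2 = 0.5 * (np.trace(A)**2 - np.trace(A @ A))     # eq. (1.67)
I3 = np.linalg.det(A)                             # eq. (1.68)

# The same invariants expressed through the eigenvalues
l1, l2, l3 = np.linalg.eigvalsh(A)
assert np.isclose(I1, l1 + l2 + l3)
assert np.isclose(I2, l1*l2 + l2*l3 + l3*l1)
assert np.isclose(I3, l1 * l2 * l3)
```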

1.13 Singular value decomposition

The Euclidean norm of a vector is a non-negative scalar that represents its magnitude. It is also possible to talk about the norm of a tensor. Avoiding the details, we only mention that the question of the norm of a tensor A comes down to finding the maximum eigenvalue of A^T A. Since the matrix A^T A is positive semidefinite, its eigenvalues are non-negative.

Definition 25. If λi² is an eigenvalue of A^T A, then λi > 0 is called a singular value of A.

When A is a symmetric tensor singular values of A equal the absolute values of its eigenvalues.
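A sketch of Definition 25 for a made-up non-symmetric tensor; it also checks that the norm of A in the spectral sense mentioned above is the largest singular value.

```python
import numpy as np

A = np.array([[ 1.0, 2.0, 0.0],
              [-1.0, 0.0, 3.0],
              [ 0.0, 1.0, 1.0]])   # non-symmetric example tensor

s = np.linalg.svd(A, compute_uv=False)   # singular values of A

# Definition 25: singular values are square roots of the eigenvalues of A^T A
lam = np.linalg.eigvalsh(A.T @ A)        # non-negative (positive semidefinite)
assert np.allclose(np.sort(s), np.sort(np.sqrt(lam)))

# The spectral (2-)norm of A equals the largest singular value
assert np.isclose(np.linalg.norm(A, 2), s.max())
```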

1.14 Cross product

The cross product of two vectors v and w is a vector u = v × w such that u is perpendicular to both v and w, directed so that the triple (v, w, u) is right-handed (Fig. 1.5: Cross product). The magnitude of u is given by

u = v w sin θ , 0 ≤ θ ≤ π . (1.70)

The cross product of vectors has the following properties.

Property 26 (Cross product).

1. v × w = −w × v (anti-symmetry)
2. u × (αv + βw) = α u × v + β u × w (linearity)

Important results 1. In any orthonormal coordinate system including Cartesian coordinates it holds that

e1 × e1 = 0 , e1 × e2 = e3 , e1 × e3 = −e2 e2 × e1 = −e3 , e2 × e2 = 0 , e2 × e3 = e1 (1.71) e3 × e1 = e2 , e3 × e2 = −e1 , e3 × e3 = 0 , 16 CHAPTER 1. VECTORS AND TENSORS ALGEBRA

2. therefore

        | e1  e2  e3 |
v × w = | v1  v2  v3 | , (1.72)
        | w1  w2  w3 |

3. which in index notation reads

v × w = εijk vj wk ei or [v × w]i = εijk vj wk . (1.73)

Exercise 7. Using equation (1.71) prove the identity (1.72).
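The index form (1.73) can be checked directly against the built-in cross product (vector values made up):

```python
import numpy as np
from itertools import permutations

# Permutation symbol, as in eq. (1.36)
eps = np.zeros((3, 3, 3))
for i, j, k in permutations(range(3)):
    eps[i, j, k] = np.sign(np.linalg.det(np.eye(3)[[i, j, k]]))

v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])

# Eq. (1.73): [v x w]_i = e_ijk v_j w_k
u = np.einsum('ijk,j,k->i', eps, v, w)
assert np.allclose(u, np.cross(v, w))

# u is normal to both v and w, as the definition requires
assert np.isclose(u @ v, 0.0) and np.isclose(u @ w, 0.0)
```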

Geometrical interpretation of cross product

The norm of the vector v × w is equal to the area of the parallelogram having sides v and w, and since it is normal to the plane of the parallelogram we consider it to be the area vector. Then it is clear that the area A of a triangle with sides v and w (Fig. 1.6: Area of triangle) is given by

A = (1/2) ‖v × w‖ . (1.74)

Cross product for tensors

Cross product can be naturally extended to the case of a tensor A and a vector v as

A × v = εjmn Aim vn ei ej and v × A = εimn vm Anj ei ej . (1.75)

1.15 Triple product

For every three vectors u, v and w, combination of dot and cross products gives

u · (v × w) , (1.76)

the so-called triple product. Based on equation (1.72) it can be written in component form as

              | u1  u2  u3 |
u · (v × w) = | v1  v2  v3 | , (1.77)
              | w1  w2  w3 |

(Fig. 1.7: Parallelepiped) or in index notation

u · (v × w) = εijk ui vj wk . (1.78)

Geometrically, the triple product of the three vectors equals the volume of the parallelepiped with sides u, v and w (Fig. 1.7).

1.16 Dot and cross product of vectors: miscellaneous formulas

u × (v × w) = (u · w) v − (u · v) w (1.79) (u × v) × w = (u · w) v − (v · w) u (1.80) (u × v) · (w × x) = (u · w)(v · x) − (u · x)(v · w) (1.81)

(u × v) × (w × x) = (u · (v × x)) w − (u · (v × w)) x = (u · (w × x)) v − (v · (w × x)) u (1.82)

Exercise 8. Using index notation verify the above identities.
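In the spirit of Exercise 8, the identities also lend themselves to numerical spot checks; a sketch that verifies (1.79), (1.81) and the determinant form (1.77) of the triple product for random example vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
u, v, w, x = rng.standard_normal((4, 3))   # four arbitrary example vectors

# Eq. (1.79): u x (v x w) = (u . w) v - (u . v) w
assert np.allclose(np.cross(u, np.cross(v, w)), (u @ w) * v - (u @ v) * w)

# Eq. (1.81): (u x v) . (w x x) = (u . w)(v . x) - (u . x)(v . w)
lhs = np.cross(u, v) @ np.cross(w, x)
assert np.isclose(lhs, (u @ w) * (v @ x) - (u @ x) * (v @ w))

# Triple product as a determinant, eq. (1.77): rows are u, v, w
assert np.isclose(u @ np.cross(v, w), np.linalg.det(np.stack([u, v, w])))
```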

Exercise 9. For a tensor given as T = I + vw, with v and w being orthogonal vectors, i.e. v · w = 0, calculate T², T³, ···, Tⁿ and finally e^T. (Hint: e^T = Σ_{n=0}^{∞} (1/n!) Tⁿ.)

Chapter 2

Tensor calculus

So far, vectors and tensors have been considered based on the operations by which we can combine them. In this section we study vector and tensor valued functions in three-dimensional space, together with their derivatives and integrals. It is important to keep in mind that we address tensors and vectors both by the term tensor; the reader should already know that a vector is a first-order tensor and a scalar a zeroth-order tensor.

2.1 Tensor functions & tensor fields

A tensor function A(u) assigns a tensor A to a real number u. In formal mathematical notation,

A : R → T , u ↦ A(u) , (2.1)

where T is the space of tensors of the same order, for instance second order tensors T = R^{3×3}, vectors T = R³, or maybe fourth order tensors T = R^{3×3×3×3}. A typical example is the position vector of a moving particle given as a function of time, x(t), where t is a real number and x is a vector with initial point at the origin O and terminal point at the moving particle (Fig. 2.1: Position vector). On the other hand, a tensor field A(x) assigns a tensor A to a point in space, say three-dimensional. Again one could formally write

A : R³ → T , x ↦ A(x) . (2.2)

Here R³ is the usual three-dimensional Euclidean space. For instance, we can think of the temperature distribution in a given domain, T(x), which is a scalar field that assigns scalars (temperatures) to points x in the domain. Another example could be the strain field ε(x) in a solid body under deformation, which assigns second-order tensors ε to points x in the body.


2.2 Derivative of tensor functions

The derivative of a differentiable tensor function A(u) is defined as

dA/du = lim_{∆u→0} [A(u + ∆u) − A(u)] / ∆u . (2.3)

We assume that all derivatives exist unless otherwise stated. For the case of a vector function A(u) = Ai(u) ei in Cartesian coordinates, the above definition becomes

dA/du = (dAi/du) ei = (dA1/du) e1 + (dA2/du) e2 + (dA3/du) e3 . (2.4)

Because ei's remain unchanged in Cartesian coordinates their derivatives are zero.

Exercise 10. For a given second-order tensor function A(u) = Aij(u) eiej in Cartesian coordinates write down the derivatives.

Property 27. For tensor functions A(u) , B(u) , C(u), and vector function a(u) we have

1. d(A · B)/du = A · dB/du + dA/du · B (2.5)
2. d(A × B)/du = A × dB/du + dA/du × B (2.6)
3. d[A · (B × C)]/du = dA/du · (B × C) + A · (dB/du × C) + A · (B × dC/du) (2.7)
4. a · da/du = a da/du , where a = ‖a‖ (2.8)
5. a · da/du = 0 iff ‖a‖ = const (2.9)

Exercise 11. Verify the 4th and the 5th properties above.

2.3 Derivatives of tensor fields, Gradient, Nabla operator

Since a tensor field A(x) is a multi-variable function, its differential dA depends on the direction of the differential of the independent variable, dx, which is a vector dx = dxi ei. When dx is parallel to one of the coordinate axes, the partial derivatives of the tensor field A(x) = A(x1, x2, x3) are obtained:

∂A/∂x1 = lim_{∆x1→0} [A(x1 + ∆x1, x2, x3) − A(x1, x2, x3)] / ∆x1
∂A/∂x2 = lim_{∆x2→0} [A(x1, x2 + ∆x2, x3) − A(x1, x2, x3)] / ∆x2 (2.10)
∂A/∂x3 = lim_{∆x3→0} [A(x1, x2, x3 + ∆x3) − A(x1, x2, x3)] / ∆x3

or in compact form

∂A/∂xi = lim_{∆xi→0} [A(x + ∆xi ei) − A(x)] / ∆xi . (2.11)

In general, dx is not parallel to the coordinate axes. Therefore, based on the chain rule of differentiation,

dA(x1, x2, x3) = (∂A/∂xi) dxi = (∂A/∂x1) dx1 + (∂A/∂x2) dx2 + (∂A/∂x3) dx3 . (2.12)

Remembering from the calculus of multi-variable functions, this can also be written as

dA = grad(A) · dx with grad(A) = (∂A/∂xi) ei . (2.13)

In the latter formula the gradient of the tensor field, grad(A), appears, which is a tensor of one order higher than A itself. To highlight this fact, we rewrite the above equation as

grad(A) = (∂A/∂xi) ⊗ ei . (2.14)

A proper choice of notation in mathematics is substantial, and the introduction of operator notation in calculus is no exception. Since linear operators follow algebraic rules somewhat similar to those of the arithmetic of real numbers, writing relations in operator notation is invaluable. Here we introduce the so-called nabla operator as

∇ ≡ ei ∂/∂xi = e1 ∂/∂x1 + e2 ∂/∂x2 + e3 ∂/∂x3 . (2.15)

Then the gradient of the tensor field in equation (2.13) can be written as∗

grad(A) = A∇ = A ⊗ ∇ . (2.16)
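A numerical sketch of equation (2.13): the gradient of a made-up scalar field is approximated by central finite differences and compared with the analytic result.

```python
import numpy as np

def phi(x):
    # example scalar field (made up): phi = x1^2 x2 + sin(x3)
    return x[0]**2 * x[1] + np.sin(x[2])

def grad_fd(f, x, h=1e-6):
    # central-difference approximation of (grad f)_i = df/dx_i, eq. (2.13)
    g = np.zeros(3)
    for i in range(3):
        e = np.zeros(3)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

x = np.array([1.0, 2.0, 0.5])      # example evaluation point
g = grad_fd(phi, x)

# Analytic gradient for comparison: (2 x1 x2, x1^2, cos(x3))
assert np.allclose(g, [2*x[0]*x[1], x[0]**2, np.cos(x[2])], atol=1e-5)
```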

2.4 Integrals of tensor functions

If A(u) and B(u) are tensor functions such that A(u) = d/du B(u), the indefinite integral of A is

∫ A(u) du = B(u) + C , (2.17)

with C = const, and the definite integral of A over the interval [a, b] is

∫_a^b A(u) du = B(b) − B(a) . (2.18)

Decomposition of equation (2.17) for (say) a second-order tensor gives

∫ Aij(u) du = Bij(u) + Cij , (2.19)

where Aij is a real valued function of real numbers. Therefore, integration of a tensor function comes down to integration of its components.

∗ In recent equations the dyadic product symbol ⊗ is sometimes written explicitly for pedagogical reasons.

2.5 Line integrals

Consider a space curve C joining two points P1 (a1, a2, a3) and P2 (b1, b2, b3), as shown in Fig. 2.2 (Space curve). If the curve is divided into N parts by subdivision points x1, ..., xN−1, then the line integral of a tensor field A(x) along C is given by

∫_C A · dx = ∫_{P1}^{P2} A · dx = lim_{N→∞} Σ_{i=1}^{N} A(xi) · ∆xi , (2.20)

where ∆xi = xi − xi−1, and as N → ∞ the largest of the divisions tends to zero:

max_i ‖∆xi‖ → 0 .

∫_C A · dx = ∫_C (A1 dx1 + A2 dx2 + A3 dx3) . (2.21)

The line integral has the following properties.

Property 28.

1. ∫_{P1}^{P2} A · dx = − ∫_{P2}^{P1} A · dx (2.22)
2. ∫_{P1}^{P2} A · dx = ∫_{P1}^{P3} A · dx + ∫_{P3}^{P2} A · dx , where P3 is between P1 and P2 (2.23)

Parameterization

A curve in space is a one-dimensional entity which can be specified by a parameterization of the form

x = x(s) , (2.24)

which obviously means

x1 = x1(s) , x2 = x2(s) , x3 = x3(s) .

Therefore a path integral can be parametrized in terms of s by

∫_{x1}^{x2} A · dx = ∫_{s1}^{s2} A · (dx/ds) ds . (2.25)

2.6 Path independence

A line integral as introduced above depends on its limit points P1 and P2, on the particular choice of the path C, and on its integrand A(x). However, there are cases where the line integral is independent of the path C. Then, having specified the tensor field A(x) and the initial point P1, the line integral is a function of the position of P2 only. If we denote the position of P2 by x, then

φ(x) = ∫_{P1}^{x} A · dx . (2.26)

Now differentiating the tensor field φ(x) yields

dφ = A · dx , (2.27)

which according to the definition of the gradient (2.13) means

A = grad(φ) = φ∇ . (2.28)

This argument can be reversed, which leads to the following theorem.

Theorem 29. The line integral ∫_C A · dx is path independent if and only if there is a function φ(x) such that A = φ∇.

Exercise 12. Prove the reverse argument of the above theorem. That is, starting from the existence of φ(x) such that A = φ∇, prove that the line integral is path independent.

Remark 30. Note that in the particular case of C being a closed path, the line integral is path independent if and only if

∮_C A · dx = 0 , (2.29)

for an arbitrary path C. The circle on the integral sign shows that C is closed. This is another statement of path-independence, equivalent to the ones mentioned before.

2.7 Surface integrals

Consider a smooth connected surface S in space (Fig. 2.3: Surface integral), subdivided into N elements ∆Si, i = 1, ..., N, at positions xi, with unit normal vectors ni to the surface. Let the tensor field A(x) be defined over a domain Ω ⊇ S. Then the surface integral of the normal component of A over S is defined as

∫_S A · n dS = lim_{N→∞} Σ_{i=1}^{N} A(xi) · ni ∆Si , (2.30)

where it holds that

N → ∞ : max_i ∆Si → 0 .

The surface integral of the tangential component of A over S is defined as

∫_S A × n dS = lim_{N→∞} Σ_{i=1}^{N} A(xi) × ni ∆Si . (2.31)

Note that the surface integral (2.30) is a tensor of one order lower than A, due to the appearance of A · n in the integrand, while the surface integral (2.31) is a tensor of the same order as A, due to the term A × n.

Surface integrals in terms of double integrals

Surface integrals can be calculated in Cartesian coordinates. This is usually the case when the surface S does not have any symmetries, such as rotational symmetry. For this, the surface S is projected onto the Cartesian plane X1X2, which gives a plane domain S̄ (Fig. 2.3). Then

∫_S A · n dS = ∫∫_{S̄} A · n dx1 dx2 / (n · e3) . (2.32)

The transformation between the area element dS and its projection dx1 dx2 is given by

dx1 dx2 = (n dS) · e3 = (n · e3) dS ,

where n is the unit normal vector to dS and e3 is the unit normal vector to dx1 dx2.

Exercise 13. Calculate the surface integral of the normal component of the vector field v = (1/r²) er over the unit sphere centered at the origin.

2.8 Divergence and curl operators

In the surface integrals introduced above, the case when the surface S is closed has a special importance (Fig. 2.4: Closed surface). In a physical context, the integral of the normal component of a tensor field over a closed surface expresses the overall flux of the field through the surface, which is a measure of the intensity of sources†. On the other hand, the closed surface integral of the tangential component represents the so-called rotation of the field around the surface.

Assume that S is a closed surface with outward unit normal vector n, occupying the domain Ω with volume V. The integral over a closed surface is usually denoted by ∮. We can define surface integrals of a given tensor field A(x) over S as in equations (2.30) and (2.31). As mentioned before, in the case of closed surfaces these have special physical meanings. A question that naturally arises is: what are the average intensities of sources and rotations over the domain Ω? The answer is

D = (1/V) ∮_S A · n dS , C = (1/V) ∮_S A × n dS , (2.33)

where the integrals are divided by the volume V. Now if the domain Ω shrinks, that is V → 0, these average values reflect the local intensity or concentration of the sources and rotations of the tensor field. This is the physical analogue of density, which is the concentration of mass. These local values are called the divergence and the curl of the tensor field A, denoted and formally defined by

div(A) = lim_{V→0} (1/V) ∮_S A · n dS , (2.34)
curl(A) = lim_{V→0} (1/V) ∮_S A × n dS . (2.35)

In operator notation it can be shown that

div(A) = A · ∇ , (2.36)
curl(A) = A × ∇ . (2.37)

The proof is straightforward in Cartesian coordinates and can be found in most calculus books.

† A source, as it literally suggests, is what causes the tensor field.

Exercise 14. Using a good calculus book, and starting from (2.34) and (2.35), derive equations (2.36) and (2.37) in Cartesian coordinates for a vector field u(x) as

div(u) = u · ∇ = ∂i ui and curl(u) = u × ∇ = εkij (∂j ui) ek .

Remark 31. So far the gradient, divergence and curl operators have been applied to their operand from the right-hand side, i.e. grad(A) = A∇, div(A) = A · ∇, curl(A) = A × ∇. This is not a strict requirement of their definition. In fact, the direction from which the operator is applied depends on the context. If we exchange the order of the dyadic, inner and outer products in equations (2.26), (2.34) and (2.35) respectively, the direction of application of nabla is reversed, to wit

φ = ∫ dx · ∇φ , lim_{V→0} (1/V) ∮_S n · A dS = ∇ · A , lim_{V→0} (1/V) ∮_S n × A dS = ∇ × A , (2.38)

which are equally valid. However, keeping a consistent convention shall not be overlooked.

Exercise 15. Calculate ∇ × (∇ × v) for the vector eld v.
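Exercise 15 can be spot-checked symbolically: for any smooth vector field, ∇ × (∇ × v) must agree with identity (2.50) of Section 2.11. A sketch with SymPy (the components of v are made up):

```python
import sympy as sp

x = sp.symbols('x1 x2 x3')

# Example smooth vector field (components made up for illustration)
v = [x[0]**2 * x[1], sp.sin(x[2]), x[0] * x[1] * x[2]]

def curl(a):
    # Cartesian curl: [curl a]_i = e_ijk da_k/dx_j
    return [sp.diff(a[2], x[1]) - sp.diff(a[1], x[2]),
            sp.diff(a[0], x[2]) - sp.diff(a[2], x[0]),
            sp.diff(a[1], x[0]) - sp.diff(a[0], x[1])]

div_v = sum(sp.diff(v[i], x[i]) for i in range(3))
lap_v = [sum(sp.diff(v[i], x[j], 2) for j in range(3)) for i in range(3)]

# Identity (2.50): curl(curl v) = grad(div v) - Laplacian(v)
lhs = curl(curl(v))
rhs = [sp.diff(div_v, x[i]) - lap_v[i] for i in range(3)]
assert all(sp.simplify(l - r) == 0 for l, r in zip(lhs, rhs))
```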

2.9 Laplace's operator

We remember from equation (2.28) that a tensor field A whose line integrals are path-independent can be written as the gradient of a lower-order tensor field, A = φ∇. Now if we are interested in the concentration of the sources that generate A, then

div(A) = div(grad(φ)) = φ∇ · ∇ . (2.39)

This combination of the gradient and divergence operators appears so often that it is given a distinct name, the so-called Laplace operator or Laplacian, which is denoted by

∆ ≡ ∇ · ∇ = ∂k ∂k . (2.40)

The Laplacian of a field, ∆A, is sometimes denoted by ∇²A. Laplace's operator is also called the harmonic operator. That is why two consecutive applications of the Laplacian to a field are called the biharmonic operator, denoted by

∇⁴A = ∆∆A = ∆(∆A) , (2.41)

which is expanded in Cartesian coordinates as

∆∆φ = ∂⁴φ/∂x1⁴ + ∂⁴φ/∂x2⁴ + ∂⁴φ/∂x3⁴ + 2 ∂⁴φ/∂x1²∂x2² + 2 ∂⁴φ/∂x2²∂x3² + 2 ∂⁴φ/∂x3²∂x1² . (2.42)

2.10 Gauss' and Stokes' Theorems

Theorem 32 (Gauss' theorem). Let S be a closed surface bounding a region Ω of volume V, with outward unit normal vector n, and let A(x) be a tensor field (Fig. 2.4). Then we have

∫_V A · ∇ dV = ∮_S A · n dS . (2.43)

This result is also called Green's theorem or the divergence theorem. If we remember the physical interpretation of divergence, then Gauss' theorem means that the flux of a tensor field through a closed surface equals the intensity of the sources bounded by the surface, which physically makes perfect sense.

Theorem 33 (Stokes' theorem). Let S be an open two-sided surface bounded by a closed non-self-intersecting curve C, as in Fig. 2.5 (Stokes' theorem). Then

∮_C A · dx = ∫_S (A × ∇) · n dS , (2.44)

where n is directed so that the path direction appears counter-clockwise when viewed from the side towards which n points, as indicated in the figure.

The physical interpretation of Stokes' theorem is much like that of Gauss' theorem, except that it involves a closed line integral and the enclosed surface instead of a closed surface integral and the enclosed volume. Namely, the overall rotation of the tensor field along a closed path equals the intensity of the rotations enclosed by the path.

2.11 Miscellaneous formulas and theorems involving nabla

For given vector fields v and w, tensor fields A and B, and scalar fields α and β we have

∇(A + B) = ∇A + ∇B (2.45) ∇ · (A + B) = ∇ · A + ∇ · B (2.46) ∇ × (A + B) = ∇ × A + ∇ × B (2.47) ∇ · (αA) = (∇α) · A + α(∇ · A) (2.48) ∇ × (αA) = (∇α) × A + α(∇ × A) (2.49) ∇ × (∇ × A) = ∇(∇ · A) − ∇2A (2.50) ∇ · (v × w) = (∇ × v) · w + v · (∇ × w) (2.51) ∇ × (v × w) = (w · ∇)v − (∇ · v)w − (v · ∇)w + (∇ · w)v (2.52) ∇(v · w) = (w · ∇)v + (v · ∇)w + w × (∇ × v) + v × (∇ × w) (2.53)

Theorem 34. For a tensor field A,

∇ × (∇A) = 0 , (A∇) × ∇ = 0 , (2.54)
∇ · (∇ × A) = 0 , (A × ∇) · ∇ = 0 , (2.55)

given that A is smooth enough. Let us rephrase the above theorem as follows:

• The gradient of a tensor field is curl-free, i.e. its curl vanishes.

• The curl of a tensor field is divergence-free, i.e. its divergence vanishes.

Compared with Gauss' and Stokes' theorems, these mean

∮_S (A × ∇) · n dS = ∫_V (A × ∇) · ∇ dV = 0 (2.56)

and

∮_C (A∇) · dx = ∫_S ((A∇) × ∇) · n dS = 0 . (2.57)

Exercise 16. Show that in general ∇ · A × ∇ ≠ 0 and ∇ × A · ∇ ≠ 0.

Remark 35. The last equation, (2.57), according to (2.29), is equivalent to path-independence.

Theorem 36 (Green's identities). For two scalar fields φ and ψ, the first Green's identity is

∫_V [φ (ψ∆) + (φ∇) · (ψ∇)] dV = ∮_S φ (ψ∇) · n dS , (2.58)

and the second Green's identity is

∫_V [φ (ψ∆) − (φ∆) ψ] dV = ∮_S [φ (ψ∇) − (φ∇) ψ] · n dS . (2.59)

Alternative forms of Gauss' and Stokes' theorems

For a tensor field A and a scalar field φ we have

∫_V A × ∇ dV = ∮_S A × n dS , ∮_C φ dx = ∫_S (φ∇) × n dS , (2.60)

which can be equivalently stated as

∫_V ∇ × A dV = ∮_S n × A dS , ∮_C φ dx = ∫_S n × (∇φ) dS . (2.61)

Fig. 2.6: Curvilinear coordinates.

2.12 Orthogonal curvilinear coordinate systems

A point P in space (Fig. 2.6) can be specified by its rectangular coordinates (another name for Cartesian coordinates) (x1, x2, x3), or by curvilinear coordinates (u1, u2, u3). Then there must be transformation rules of the form

x = x(u) and u = u(x) , (2.62)

where the transformation functions are smooth and one-to-one. Smoothness means being differentiable, and being one-to-one requires the Jacobian determinant to be non-zero:

det ∂(u1, u2, u3)/∂(x1, x2, x3) = | ∂u1/∂x1 ∂u1/∂x2 ∂u1/∂x3 ; ∂u2/∂x1 ∂u2/∂x2 ∂u2/∂x3 ; ∂u3/∂x1 ∂u3/∂x2 ∂u3/∂x3 | ≠ 0 . (2.63)

Definition 37 (Jacobian). For a differentiable function f(x) such that f : R^n → R^m, its Jacobian is the m × n matrix defined as J = (∂fi/∂xj) with i = 1, ..., m and j = 1, ..., n.

At the point P there are three surfaces, described by

u1 = const , u2 = const , u3 = const , (2.64)

passing through P, called coordinate surfaces. There are also three curves, which are the intersections of every two coordinate surfaces; these are called coordinate curves. On each

coordinate curve only one of the three coordinates ui varies and the other two are constant. If the position vector is denoted by r then the vector ∂r/∂ui is tangent to ui coordinate curve, and the unit tangent vectors (Fig. 2.6) are obtained from

e1 = (∂r/∂u1)/‖∂r/∂u1‖ , e2 = (∂r/∂u2)/‖∂r/∂u2‖ , e3 = (∂r/∂u3)/‖∂r/∂u3‖ . (2.65)

Introducing the scale factors hi = ‖∂r/∂ui‖, we have

∂r/∂u1 = h1e1 , ∂r/∂u2 = h2e2 , ∂r/∂u3 = h3e3 (2.66)

If e1, e2 and e3 are mutually orthogonal, then we call (u1, u2, u3) an orthogonal curvilinear coordinate system.

Remark 38. In rectangular coordinates the coordinate surfaces are flat planes and the coordinate curves are straight lines. Also, the scale factors hi are equal to one due to the straight coordinate curves. Furthermore, the unit vectors ei are the same for all points in space. These properties do not generally hold for curvilinear coordinate systems.

2.13 Differentials

In three dimensional space, there are three types of geometrical objects (other than points) regarding dimensions

• one-dimensional objects: lines
• two-dimensional objects: surfaces
• three-dimensional objects: volumes.

Each type has its own differential element, namely line elements, surface elements and volume elements. We assume that the reader has already dealt with the related ideas in rectangular (Cartesian) coordinates. Now we want to generalize those ideas to curvilinear coordinates.

Line elements

An infinitesimal line segment with an arbitrary direction in space can be projected onto the coordinate axes. Consider the position vector r expressed in a curvilinear coordinate system by r(u1, u2, u3). A line element is the differential of the position vector, obtained by the chain rule and using equation (2.66):

dr = (∂r/∂u1) du1 + (∂r/∂u2) du2 + (∂r/∂u3) du3 = h1 du1 e1 + h2 du2 e2 + h3 du3 e3 , (2.67)

where

dr1 = h1 du1 , dr2 = h2 du2 , dr3 = h3 du3 (2.68)

are the basis line elements. If the position vector sweeps a curve (which is typical, for example, in kinematics), the arc length element, denoted by ds, is obtained by

(ds)² = dr · dr = h1² (du1)² + h2² (du2)² + h3² (du3)² . (2.69)

Note that the above formulas are completely general. For instance, in Cartesian coordinates, where hi = 1, they reduce to the familiar forms

dr = dx1 e1 + dx2 e2 + dx3 e3 and (ds)² = (dx1)² + (dx2)² + (dx3)² .

Area elements An area element is a vector described by its magnitude and direction

dS = dS n , (2.70) whose decomposition is obtained by

dS = (dS · e1) e1 + (dS · e2) e2 + (dS · e3) e3 (2.71) where dS · ei is in fact the projection of dS on the coordinate surface normal to ei. On the other hand, having the coordinate line elements (2.68), the area elements on each coordinate surface are obtained by

dS1e1 = dr2e2 × dr3e3 = h2 h3 du2 du3e1

dS2e2 = dr3e3 × dr1e1 = h3 h1 du3 du1e2 (2.72)

dS3 e3 = dr1 e1 × dr2 e2 = h1 h2 du1 du2 e3 ,

which are called the basis area elements. Note that an area element is a parallelogram with its sides being line elements, and its area is obtained by the cross product of the line element vectors. Comparing the two equations above gives

dS = (h2 h3 du2 du3) e1 + (h3 h1 du3 du1) e2 + (h1 h2 du1 du2) e3 (2.73)

As the simplest case, in Cartesian coordinates the latter formula gives the familiar form

dS = dx2 dx3 e1 + dx3 dx1 e2 + dx1 dx2 e3 .

Volume element

A volume element dV is a parallelepiped built by the three coordinate line elements dr1e1, dr2e2 and dr3e3. Since the volume of a parallelepiped is obtained by triple product of its side vectors as in equation (1.77), we have

dV = (dr1 e1) · (dr2 e2) × (dr3 e3) = h1 h2 h3 du1 du2 du3 . (2.74)

2.14 Differential transformation

For two given orthogonal curvilinear coordinate systems (u1, u2, u3) and (q1, q2, q3), we are interested in the transformation of differentials between them. Here we adopt index notation. There are two types of expressions in the aforementioned formulas to be transformed: one is the differentials dui, and the other is the factors of the form ∂r/∂ui or their norms hi. We consider the transformation of each separately. Starting from transformations of the form u(q) and applying the chain rule, we have

dui = (∂ui/∂qj) dqj and ∂r/∂ui = (∂r/∂qj) (∂qj/∂ui) , (2.75)

and because of the orthogonality of the coordinates,

 1/2  2 ∂r X ∂r ∂qj =   . (2.76) ∂u ∂q ∂u i j j i Transformation of area element form one curvilinear coordinates to another is obtained by

dAq = (∂(q1, q2, q3)/∂(u1, u2, u3)) J^{−T} · dAu , (2.77)

where J = (∂qi/∂uj) is the Jacobian matrix of the transformation and the scalar prefactor ∂(q1, q2, q3)/∂(u1, u2, u3) is its determinant. And the volume element transforms according to

dVq = (∂(q1, q2, q3)/∂(u1, u2, u3)) dVu . (2.78)

2.15 Differential operators in curvilinear coordinates

Here we briefly address the general formulations of the gradient, divergence, curl and Laplacian in orthogonal curvilinear coordinates. The formulations are given in concrete form for a scalar field φ and a vector field a = a1 e1 + a2 e2 + a3 e3; however, the reader should be able to use them for tensor fields of any order.

∇φ = (1/h1) (∂φ/∂u1) e1 + (1/h2) (∂φ/∂u2) e2 + (1/h3) (∂φ/∂u3) e3 (2.79)

∇ · a = (1/(h1h2h3)) [ ∂(h2h3 a1)/∂u1 + ∂(h3h1 a2)/∂u2 + ∂(h1h2 a3)/∂u3 ] (2.80)

∇ × a = (1/(h1h2h3)) | h1e1 h2e2 h3e3 ; ∂/∂u1 ∂/∂u2 ∂/∂u3 ; h1a1 h2a2 h3a3 | (2.81)

∆φ = (1/(h1h2h3)) [ ∂/∂u1 ( (h2h3/h1) ∂φ/∂u1 ) + ∂/∂u2 ( (h3h1/h2) ∂φ/∂u2 ) + ∂/∂u3 ( (h1h2/h3) ∂φ/∂u3 ) ] . (2.82)

Let us apply these formulas to the two most commonly used curvilinear coordinate systems, namely cylindrical and spherical coordinates.

Fig. 2.7: Cylindrical coordinates.

Cylindrical coordinates

In cylindrical coordinates a point is determined by (r, ϕ, z) (Fig. 2.7). The transformations to and from rectangular coordinates are given as

x1 = r cos ϕ , x2 = r sin ϕ , x3 = z , (2.83)
r = (x1² + x2²)^{1/2} , ϕ = arctan(x2/x1) , z = x3 . (2.84)

Scale factors are

hr = 1 , hϕ = r , hz = 1 , (2.85)

and the differential elements are

dr = dr er + r dϕ eϕ + dz ez (2.86)

dS = r dϕ dz er + dr dz eϕ + r dr dϕ ez (2.87) dV = r dr dϕ dz . (2.88)

Finally, the differential operators can be listed as

∇ ≡ er ∂/∂r + eϕ (1/r) ∂/∂ϕ + ez ∂/∂z (2.89)

∇ · a = (1/r) ∂(r ar)/∂r + (1/r) ∂aϕ/∂ϕ + ∂az/∂z (2.90)

∆φ = (1/r) ∂/∂r ( r ∂φ/∂r ) + (1/r²) ∂²φ/∂ϕ² + ∂²φ/∂z²
   = ∂²φ/∂r² + (1/r) ∂φ/∂r + (1/r²) ∂²φ/∂ϕ² + ∂²φ/∂z² (2.91)

∇ × a = ( (1/r) ∂az/∂ϕ − ∂aϕ/∂z ) er + ( ∂ar/∂z − ∂az/∂r ) eϕ + (1/r) ( ∂(r aϕ)/∂r − ∂ar/∂ϕ ) ez (2.92)
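The cylindrical Laplacian (2.91) can be cross-checked against the Cartesian form (2.40) for a field written in both systems; a sketch with SymPy (the test field is made up):

```python
import sympy as sp

r, phi, z = sp.symbols('r phi z', positive=True)
x1, x2, x3 = sp.symbols('x1 x2 x3')

# The same example field in both coordinate systems:
# f = x1^2 x3 = (r cos(phi))^2 z
f_cyl  = r**2 * sp.cos(phi)**2 * z
f_cart = x1**2 * x3

# Laplacian in cylindrical coordinates, eq. (2.91)
lap_cyl = (sp.diff(r * sp.diff(f_cyl, r), r) / r
           + sp.diff(f_cyl, phi, 2) / r**2
           + sp.diff(f_cyl, z, 2))

# Laplacian in Cartesian coordinates, eq. (2.40), mapped to (r, phi, z)
lap_cart = sum(sp.diff(f_cart, xi, 2) for xi in (x1, x2, x3))
lap_cart = lap_cart.subs({x1: r*sp.cos(phi), x2: r*sp.sin(phi), x3: z})

assert sp.simplify(lap_cyl - lap_cart) == 0
```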

Fig. 2.8: Spherical coordinates.

Spherical coordinates

In spherical coordinates a point is determined by (r, θ, φ) (Fig. 2.8). The transformations to and from rectangular coordinates are given as

x1 = r sin θ cos φ , x2 = r sin θ sin φ , x3 = r cos θ , (2.93)
r = (x1² + x2² + x3²)^{1/2} , θ = arccos(x3/r) , φ = arctan(x2/x1) .

The scale factors are

hr = 1 , hθ = r , hφ = r sin θ , (2.94)

and the differential elements are

dr = dr er + r dθ eθ + r sin θ dφ eφ (2.95)
dS = r² sin θ dθ dφ er + r sin θ dr dφ eθ + r dr dθ eφ (2.96)
dV = r² sin θ dr dθ dφ . (2.97)

Finally, the differential operators can be listed as

∇ ≡ er ∂/∂r + eθ (1/r) ∂/∂θ + eφ (1/(r sin θ)) ∂/∂φ (2.98)

∇ · a = (1/r²) ∂(r² ar)/∂r + (1/(r sin θ)) ∂(sin θ aθ)/∂θ + (1/(r sin θ)) ∂aφ/∂φ (2.99)

∆ξ = ∂²ξ/∂r² + (2/r) ∂ξ/∂r + (cos θ/(r² sin θ)) ∂ξ/∂θ + (1/r²) ∂²ξ/∂θ² + (1/(r² sin²θ)) ∂²ξ/∂φ² (2.100)

∇ × a = (1/(r sin θ)) [ ∂(sin θ aφ)/∂θ − ∂aθ/∂φ ] er + (1/r) [ (1/sin θ) ∂ar/∂φ − ∂(r aφ)/∂r ] eθ (2.101)
      + (1/r) [ ∂(r aθ)/∂r − ∂ar/∂θ ] eφ . (2.102)
