
An Introduction to Vectors and Tensors from a Computational Perspective

UTC-CECS-SimCenter-2014-01 August 2014

GRADUATE SCHOOL OF COMPUTATIONAL ENGINEERING • 701 East M.L. King Boulevard • Chattanooga, TN 37403

AN INTRODUCTION TO VECTORS AND TENSORS FROM A COMPUTATIONAL PERSPECTIVE

W. Roger Briley SimCenter: National Center for Computational Engineering University of Tennessee at Chattanooga

August 2014

This report is an updated version of Report UTC-CECS-SimCenter-2012-01.

Preface

This report is intended to provide a self-contained introduction to Cartesian tensors for students just entering graduate school in engineering and science majors, especially those interested in computational engineering and applied computational science. This introduction assumes students have a background in multivariable calculus but no familiarity with tensors.

______

TABLE OF CONTENTS

1. Introduction
2. Vector Basics
3. Index Notation for Vectors, Tensors and Matrices
4. Tensor Basics
5. Vector and Tensor Fields
6. Calculus Operations in Cartesian Tensor Notation
7. Transformation Laws for Cartesian Coordinates and Tensor Components
8. Transformation from Cartesian to General Curvilinear Coordinates
REFERENCES
APPENDIX - Dual-Basis Vector Calculus

______
© Copyright: SimCenter: National Center for Computational Engineering, 2012-2014


AN INTRODUCTION TO VECTORS AND TENSORS FROM A COMPUTATIONAL PERSPECTIVE ______

1. Introduction

1.1 Vectors and Tensors in Physics

Magnitude and direction are common geometric properties of physical entities. Scalars are physical quantities such as density and temperature that have magnitude (measured in a specified system of units) but no directional properties. Vectors are physical quantities such as force and velocity that have magnitude (length) and a single direction. The direction of vectors can be defined only in relation to a specified set of N reference directions that comprise a frame of reference for the N-dimensional physical space considered: typically N = 1, 2 or 3. The reference frame could be a set of unit vectors or a coordinate system. Vector magnitude and direction are quantified by N components that are defined by projection onto these reference directions. Although vector components depend on the choice of reference directions, the magnitude and direction of the vector are physical properties that are independent of the reference frame.

Tensors are physical quantities such as stress and strain that have magnitude and two or more directions. For example, stress is a relationship between force and area (magnitude and two directions) and is thus a second-order tensor with $N^2$ components. Tensors also have invariant physical properties that are coordinate independent. True physical tensors of order higher than two are uncommon, but higher-order tensors are common in mathematical descriptions of physics.

1.2 Vectors and Tensors in Mathematics

Mathematically, vectors and tensors describe physical entities and their mathematical abstractions as directional objects that are represented by scalar components, defined by projection onto a specified set of vectors (possibly unit vectors) comprising a basis, and that satisfy transformation laws for a change of basis. It is important to recognize that the term tensor is a general mathematical description for geometric objects that have magnitude and any number of directions. A tensor of order p has directional content from p directions and has $N^p$ components. Thus a scalar is a zeroth-order tensor, a vector is a first-order tensor, and so on.

1.3 A Computational Perspective

The present introduction will consider vectors and tensors as encountered in computational simulations of physical fields in which scalar, vector and tensor quantities vary with position in space and with time. Fields require a coordinate system to locate points in space. Vector and tensor fields also require a local basis at each point to define vector/tensor components.

Disconflation of Vector Bases and Coordinate Systems - Most mathematical treatments of tensors assume that the local basis is aligned with the coordinate directions: cf. [1,2]. However, the alignment of base vectors and coordinate directions introduces complexity in curvilinear

orthogonal coordinates and especially in nonorthogonal coordinates that is not present in Cartesian coordinates: The local basis is constant for Cartesian coordinates but varies with spatial position for any curvilinear coordinate system. Nonorthogonal coordinates introduce a dual basis: one basis is tangent to the coordinate lines (the tangent or covariant basis) and a reciprocal basis (the contravariant basis) is perpendicular to the coordinate surfaces. These tangent and reciprocal bases coincide for orthogonal curvilinear coordinates but still vary with spatial position. The resulting complications include differentiation of spatially varying base vectors, the metric tensor, dual base vectors, contravariant and covariant components, covariant tensor differentiation, and Christoffel symbols. The calculus of tensors in general nonorthogonal coordinates is therefore significantly more complicated than that of Cartesian tensors. However, the complexity of variable-direction and nonorthogonal base vectors in general coordinates is commonly avoided in computational solution of both differential and integral conservation laws in discrete form, whether using structured or unstructured grids.

Structured Grids - In differential approaches using curvilinear coordinate systems, the technique used is to transform spatial derivative terms from Cartesian to general curvilinear coordinates, while retaining a uniform Cartesian local basis for vector/tensor components. The governing equations are thereby written in general curvilinear coordinates, but the Cartesian vector/tensor components remain as dependent variables. This approach is widely discussed and has been standard practice in computational fluid dynamics for many years: cf. [3,4,5]. The use of a uniform Cartesian local basis may also reduce spatial discretization error in computations. The reason is that spatial variation in base vectors causes extraneous, nonphysical spatial variation in components that, when differentiated, may require higher local resolution than Cartesian components: for example in regions of large coordinate curvature.

Unstructured Grids - Another common approach for spatial discretization uses unstructured grids, for which no identifiable family of coordinate lines exists: cf. [6,7]. The governing equations are typically written in integral rather than differential form, with discrete integral approximations that require multidimensional interpolation of vectors and tensors rather than differentiation. A pointwise Cartesian local basis is used for vector/tensor components, with a local rotation of the Cartesian unit vectors for alignment with local surface elements as needed in the interpolation process.

Finally, it should be emphasized that tensor mathematics is a broad area of study that can be far more complicated than what is needed and discussed here as background for computational simulation. The present introduction covers basic material that is fundamental to the understanding and computation of physical vector and tensor fields. It is hoped that the present effort to disconflate the local vector basis and the coordinate system will provide useful insight into the computation of tensor fields. The main body of this report addresses Cartesian coordinates and Cartesian bases that are not necessarily aligned with each other. For those who are interested, the APPENDIX gives a summary of dual-basis vector calculus for general curvilinear coordinates. Detailed discussions of vectors and tensors are given in [1,2,8] and in many other references.


2. Vector Basics

First consider a vector $\mathbf{a}$ with base O and tip A, as shown in the sketch. The vector is a directed line segment (arrow) that has inherent magnitude and direction. The vector is called a free vector if its location is not specified and a fixed or bound vector if the base O has a specific location in space. The direction of $\mathbf{a}$ is quantified by direction cosines of the angles between $\mathbf{a}$ and a set of N arbitrary but linearly independent base vectors comprising a basis. The standard Euclidean basis is a set of right-handed mutually orthogonal unit vectors (called an orthonormal basis) located at the base O and denoted $\{\hat{e}_1, \hat{e}_2, \hat{e}_3\}$. All examples in this introduction will assume $N = 3$.

Although the magnitude $a = |\mathbf{a}|$ and direction of $\mathbf{a}$ are invariants that do not depend on the choice of basis, the direction cosines are obviously basis dependent. For a given basis, a vector is represented by N scalar components, which are the scalar projections of the vector onto the set of N base vectors, as shown in the nearby figure. Letting $\cos(\mathbf{a}, \hat{e}_1)$ denote the cosine of the included angle between $\mathbf{a}$ and $\hat{e}_1$, and with similar notation for $\hat{e}_2$ and $\hat{e}_3$, the components of $\mathbf{a}$ are given by

$$a_1 = a\cos(\mathbf{a}, \hat{e}_1); \quad a_2 = a\cos(\mathbf{a}, \hat{e}_2); \quad a_3 = a\cos(\mathbf{a}, \hat{e}_3)$$

The vector is then expressed as a linear combination of the base vectors:

$$\mathbf{a} = a_1\hat{e}_1 + a_2\hat{e}_2 + a_3\hat{e}_3$$

The vector components in a given basis are equivalent to the vector itself, since it is a simple matter to calculate the invariant magnitude and direction from known values of $(a_1, a_2, a_3)$:

$$a = \sqrt{a_1^2 + a_2^2 + a_3^2}; \quad \cos(\mathbf{a}, \hat{e}_1) = a_1/a; \quad \cos(\mathbf{a}, \hat{e}_2) = a_2/a; \quad \cos(\mathbf{a}, \hat{e}_3) = a_3/a$$

3. Index Notation for Vectors, Tensors and Matrices

Index notation is a concise way to represent vectors, matrices, and tensors. Instead of writing the components of $\mathbf{a}$ separately as $(a_1, a_2, a_3)$, the indexed variable $a_i$ represents all components of $\mathbf{a}$ collectively as follows:

$$a_i = (a_1, a_2, a_3)$$

By convention, the index is understood to take on values in the range $i = 1, 2, 3$ or more generally $i = 1, 2, \ldots, N$. Using index notation, the complete vector $\mathbf{a}$ can be written as

$$\mathbf{a} = \sum_{i=1}^{3} a_i\hat{e}_i = a_1\hat{e}_1 + a_2\hat{e}_2 + a_3\hat{e}_3$$


3.1 Einstein Summation Convention

The important summation convention states that if an index appears twice in a single term, then it is understood that the repeated index is summed over its range from 1 to N. The summation symbol is then redundant, and the vector can be written concisely as

$$\mathbf{a} = a_i\hat{e}_i = a_1\hat{e}_1 + a_2\hat{e}_2 + a_3\hat{e}_3 \quad \text{(with implied summation on } i\text{)}$$

Finally, if the base vectors $\hat{e}_i$ have been clearly defined in context, all vectors and tensors can be unambiguously represented by their components alone; actual display of the base vectors is unnecessary and purely a matter of notational preference. To summarize, the vector $\mathbf{a}$ is represented concisely as

$$\mathbf{a} = a_i\hat{e}_i$$

The following shorthand notation is often used, provided the base vectors are defined by context:

$$\mathbf{a} \rightarrow a_i$$
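As an aside for computational readers, the summation convention maps directly onto array operations. The following minimal sketch (not part of the original report) uses NumPy's einsum, in which any index letter repeated in the subscript string is summed over its range:

```python
# A minimal sketch illustrating the summation convention with NumPy's einsum.
import numpy as np

a = np.array([1.0, 2.0, 3.0])
e = np.eye(3)                      # rows are the Cartesian base vectors e_1, e_2, e_3

# a = a_i e_i : the repeated index i is summed, reconstructing the vector from components
a_reconstructed = np.einsum('i,ij->j', a, e)
print(a_reconstructed)             # [1. 2. 3.]

# a_j b_j : repeated j implies summation, i.e. the dot product
b = np.array([4.0, 5.0, 6.0])
print(np.einsum('j,j->', a, b))    # 32.0, same as np.dot(a, b)
```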

3.2 Index Rules and Terminology

The index notation can be used with any number of subscripts. For example, $A_{ij}$ denotes the square matrix

$$A_{ij} = \begin{bmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{bmatrix}$$

In general, the i and j indices can be assigned separate ranges, for example to represent a $3 \times 5$ matrix. However, all indices are assumed to have the same range $N = 3$ in this report.

Range Convention - Variables, terms and expressions may be assigned one or more Latin index letters such as $i, j, k$. Each of these indices can independently take on integer values in its range $1, 2, \ldots, N$. For example, $a_i b_k$ and $a_i A_{jk}$ are terms combining the vectors $a_i$ and $b_k$ and the matrix $A_{jk}$.

Index Rule - Each index letter can occur either once or twice in a single term, but no index can occur more than twice. For example, $a_j A_{jk}$ is a valid term, but $a_j A_{jj}$ is invalid.

Free Indices - An index letter that occurs only once in a single term is called a free index (or range index). A valid equation must have the same free indices in each term. For example, $A_{jk} = B_{jk} + C_{jk}$ is valid, but $A_{ij} = B_{ik} + C_{jk}$ is invalid. A tensor with p free indices has order p and $N^p$ components.


Summation Indices - An index letter that occurs twice in a single term is called a summation index. The repeated index invokes a summation over its range. For example, $a_j b_j = \sum_{j=1}^{3} a_j b_j = a_1 b_1 + a_2 b_2 + a_3 b_3$.

Dummy Indices - A summation index is also called a dummy index because it can be replaced by a different index letter without changing its meaning. For example, in the equation $a_i A_{ij} = a_k A_{kj}$, i and k are arbitrary dummy indices, and j is a free index that must appear once in each term.

3.3 Special Symbols

There are two specially defined symbols that simplify index notation and operations:

The Kronecker delta $\delta_{ij}$ is defined by

$$\delta_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases}, \quad \text{or expressed as an array:} \quad \delta_{ij} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

The Levi-Civita symbol or permutation symbol $\epsilon_{ijk}$ is a three-dimensional array defined by

$$\epsilon_{ijk} = \begin{cases} +1 & \text{if } i, j, k \text{ are an even (or cyclic) permutation of } 1, 2, 3 \\ -1 & \text{if } i, j, k \text{ are an odd (or non-cyclic) permutation of } 1, 2, 3 \\ 0 & \text{if any index is repeated} \end{cases}$$

An even permutation is any three consecutive integers in the sequence 1, 2, 3, 1, 2, 3. An odd permutation is any three consecutive integers in the sequence 3, 2, 1, 3, 2, 1. Thus,

$$\epsilon_{123} = \epsilon_{231} = \epsilon_{312} = 1, \quad \text{and} \quad \epsilon_{321} = \epsilon_{213} = \epsilon_{132} = -1$$

All other values are zero. It is perhaps of passing interest that the following $\epsilon$-$\delta$ identity can be used to generate all of the identities of vector analysis:

$$\epsilon_{ijk}\,\epsilon_{irs} = \delta_{jr}\delta_{ks} - \delta_{js}\delta_{kr}$$

3.4 Vector Operations in Index Notation

Scalar or Dot Product: $\hat{e}_i \cdot \hat{e}_j = \delta_{ij}$,

$$\mathbf{a} \cdot \mathbf{b} = (a_i\hat{e}_i) \cdot (b_j\hat{e}_j) = a_i b_j\,(\hat{e}_i \cdot \hat{e}_j) = a_i b_j\,\delta_{ij} = a_k b_k$$

Vector or Cross Product: $\hat{e}_i \times \hat{e}_j = \epsilon_{ijk}\,\hat{e}_k$,

$$\mathbf{a} \times \mathbf{b} = (a_i\hat{e}_i) \times (b_j\hat{e}_j) = a_i b_j\,(\hat{e}_i \times \hat{e}_j) = a_i b_j\,\epsilon_{ijk}\,\hat{e}_k$$

Magnitude of a Vector: $a = (\mathbf{a} \cdot \mathbf{a})^{1/2} = (a_k a_k)^{1/2}$


Determinant of a Matrix:

$$\det A = |A_{ij}| = \begin{vmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{vmatrix} = \epsilon_{ijk}\,A_{1i}A_{2j}A_{3k}$$
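The index-notation formulas of this subsection translate almost verbatim into einsum subscript strings. The sketch below (illustrative values only) evaluates the dot product, cross product, magnitude and determinant this way and compares them against the standard NumPy routines:

```python
# Index-notation vector operations written directly with einsum (illustrative values).
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
A = np.arange(9.0).reshape(3, 3) + np.eye(3)

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[k, j, i] = 1.0, -1.0

dot   = np.einsum('k,k->', a, b)                          # a_k b_k
cross = np.einsum('ijk,i,j->k', eps, a, b)                # (a x b)_k = eps_ijk a_i b_j
mag   = np.sqrt(np.einsum('k,k->', a, a))                 # (a_k a_k)^(1/2)
detA  = np.einsum('ijk,i,j,k->', eps, A[0], A[1], A[2])   # eps_ijk A_1i A_2j A_3k

print(dot, np.dot(a, b))          # agree
print(cross, np.cross(a, b))      # agree
print(mag, np.linalg.norm(a))     # agree
print(detA, np.linalg.det(A))     # agree
```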

4. Tensor Basics

We have seen that vectors are familiar geometric objects with invariant magnitude and direction. Tensors extend the description of vectors to geometric objects that have magnitude and any number of directions.

Order of a Tensor - The order of a tensor is equal to the number of its free indices. A tensor of order p has p free indices, involves p directions in an N-dimensional space, and has $N^p$ components, as summarized in the following table:

Type      Notation            Order    Components
Scalar    $\phi$              0        $N^0$
Vector    $a_i$               1        $N^1$
Tensor    $A_{ij}$            2        $N^2$
Tensor    $A_{ij \cdots}$     p        $N^p$

The indices i and j each take on the values $i = 1, 2, 3$ and $j = 1, 2, 3$ for $N = 3$.

Although any second-order tensor $A_{ij}$ can be interpreted as a square matrix, not all square matrices are tensors. Matrices are simple arrays of arbitrary elements, whereas tensors incorporate geometric directional information, satisfy transformation laws for a change of basis, and have invariant properties independent of basis. The full definition of a second-order tensor is $\mathbf{A} = A_{ij}\,\hat{e}_i\hat{e}_j$; its shorthand notation $\mathbf{A} \rightarrow A_{ij}$ can be interpreted as a matrix of tensor components. Note that a common naming convention (followed in the table above) uses lower-case Greek letters to denote scalars, lower-case Latin letters for vectors, and upper-case Latin letters for matrices and tensors.

Physical Example - The stress tensor of continuum mechanics is a familiar example of a second-order tensor. Stress is a force per unit area acting on an internal surface within a material body. The state of stress at a point is uniquely determined by knowledge of the three-component stress vector acting on each of three mutually perpendicular planes. Therefore, stress is described by a second-order tensor $\boldsymbol{\tau} = \tau_{ij}\,\hat{e}_i\hat{e}_j$ or $\tau_{ij}$, defined by nine component stresses.


The stress vector at a point acting on any arbitrary plane with unit normal $\hat{n} = n_i\hat{e}_i$ is given by the projection of the stress tensor onto the normal direction of that plane: $\boldsymbol{\tau} \cdot \hat{n} = \tau_{ij}\,n_j\,\hat{e}_i$.
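A brief illustrative sketch of this projection is given below; the stress components and normal direction are made-up values, chosen only to show the operation $t_i = \tau_{ij}\,n_j$:

```python
# Stress (traction) vector on a plane: t_i = tau_ij n_j (values are illustrative only).
import numpy as np

tau = np.array([[2.0, 0.5, 0.0],   # symmetric stress tensor components tau_ij
                [0.5, 1.0, 0.3],
                [0.0, 0.3, 4.0]])

n = np.array([1.0, 1.0, 1.0])
n /= np.linalg.norm(n)             # unit normal of the plane

t = np.einsum('ij,j->i', tau, n)   # projection of the stress tensor onto the normal
print(t)
```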

Contraction of Free Indices – The summation convention is invoked by equating any two free indices (replacing them by a common index letter), and this is called a contraction of indices. Contraction of two indices reduces the order of a tensor by two.

- Contraction of i and j in $A_{ij}$ gives the scalar $A_{kk}$. The trace of a square matrix is a contraction of its indices: $\mathrm{tr}(A_{ij}) = A_{ii}$.

- Contraction of i and j in $A_{ijk}$ gives the first-order tensor $A_{mmk}$ (a vector).
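A quick numerical sketch of contraction (again using einsum, and not taken from the report) is shown below; repeating an index letter in the subscript string performs the contraction:

```python
# Contraction of two free indices reduces the order of a tensor by two.
import numpy as np

A = np.arange(9.0).reshape(3, 3)
print(np.einsum('ii->', A), np.trace(A))   # A_ii, the trace (a scalar)

B = np.arange(27.0).reshape(3, 3, 3)
print(np.einsum('mmk->k', B))              # A_mmk: third-order tensor contracted to a vector
```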

Outer and Inner Products of Tensors

- The outer or tensor product of two tensors $\mathbf{A}$ and $\mathbf{B}$ is expressed as $\mathbf{A} \otimes \mathbf{B}$. The tensor product symbol $\otimes$ is often omitted when using index notation. If $\mathbf{A}$ and $\mathbf{B}$ are second-order tensors, then their outer product is defined as

$$\mathbf{A} \otimes \mathbf{B} = \mathbf{A}\,\mathbf{B} = (A_{jk}\,\hat{e}_j\hat{e}_k)(B_{rs}\,\hat{e}_r\hat{e}_s) = A_{jk}B_{rs}\,\hat{e}_j\hat{e}_k\hat{e}_r\hat{e}_s$$

The shorthand notation $\mathbf{A}\mathbf{B} \rightarrow A_{jk}B_{rs}$ is usually used. Other examples of outer products are $A_i B_{jk}$, $A_i B_j$, and $A_{ij}B_{kr}$. Note that the outer product of two vectors, such as $\hat{e}_i\hat{e}_j$ or $a_i b_j$, is called a dyad, a term from vector analysis.

- The inner or dot product of two tensors is written as $\mathbf{A} \cdot \mathbf{B}$. If $\mathbf{A}$ and $\mathbf{B}$ are second-order tensors, then their dot product is evaluated by taking the dot product of the innermost base vectors as follows:

$$\mathbf{A} \cdot \mathbf{B} = (A_{jk}\,\hat{e}_j\hat{e}_k) \cdot (B_{rs}\,\hat{e}_r\hat{e}_s) = A_{jk}B_{rs}\,\hat{e}_j(\hat{e}_k \cdot \hat{e}_r)\hat{e}_s = A_{jk}B_{rs}\,\hat{e}_j\,\delta_{kr}\,\hat{e}_s = A_{jk}B_{ks}\,\hat{e}_j\hat{e}_s$$

If we omit the base vectors using shorthand notation, this becomes

$$\mathbf{A} \cdot \mathbf{B} \rightarrow A_{jk}B_{ks}$$

This shows that in shorthand notation the dot product can be interpreted as an outer product $A_{jk}B_{rs}$ plus a contraction of the innermost pair of free indices, made up of one index from each term in the product (in this case k and r), to obtain $A_{jk}B_{ks}$. The dot-product contraction reduces the order of a tensor by two. Other examples of dot products are $\mathbf{A} \cdot \mathbf{B} \rightarrow A_{jm}B_{ms} = C_{js}$ and $\mathbf{a} \cdot \mathbf{B} \rightarrow a_j B_{jk} = c_k$.
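The outer and inner products can likewise be written as einsum operations. In the hypothetical sketch below, the contraction of the innermost index pair reproduces ordinary matrix multiplication for second-order tensors:

```python
# Outer and inner (dot) products of two second-order tensors via einsum.
import numpy as np

A = np.arange(9.0).reshape(3, 3)
B = np.arange(9.0).reshape(3, 3) + 1.0

outer = np.einsum('jk,rs->jkrs', A, B)   # A_jk B_rs, a fourth-order tensor
inner = np.einsum('jk,ks->js', A, B)     # A_jk B_ks, contraction of the innermost indices

print(outer.shape)                       # (3, 3, 3, 3)
print(np.allclose(inner, A @ B))         # True: the tensor dot product is matrix multiplication
```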


5. Vector and Tensor Fields

The previous discussion has only considered vectors and tensors defined at a point, without stating where the point is located. Vector and tensor fields are defined by assigning a vector or tensor to each point in a region of space, and their components are then functions of spatial position, as located by a global coordinate system. A vector field is illustrated in the nearby figure. Points in space are located by a spatial position vector $\mathbf{x}$ directed from a fixed origin O to each arbitrary point $P(\mathbf{x})$, and then vector components are defined using a local basis.

A rectangular Cartesian coordinate system is defined by choosing an orthonormal coordinate basis $\{\hat{e}_i\} = \{\hat{e}_1, \hat{e}_2, \hat{e}_3\}$ located at the origin O. The Cartesian coordinates $x_i = (x_1, x_2, x_3)$ of point P are simply the scalar components of the position vector $\mathbf{x}$ defined by projection onto the Cartesian basis:

$$\mathbf{x} = x_i\hat{e}_i = x_1\hat{e}_1 + x_2\hat{e}_2 + x_3\hat{e}_3$$

A scalar field is expressed as $\phi(\mathbf{x})$, and a vector field is denoted by $\mathbf{a}(\mathbf{x})$. A local basis is needed to define the scalar components $a_i(\mathbf{x})$ at each point in the field. The simplest choice for a local basis, shown in the preceding figure, is to use the Cartesian coordinate basis $\hat{e}_i$ at the origin O but to relocate it by translation without rotation from the origin O to the point P. The vector field is then given by

$$\mathbf{a}(\mathbf{x}) = a_i(\mathbf{x})\,\hat{e}_i = a_1(\mathbf{x})\,\hat{e}_1 + a_2(\mathbf{x})\,\hat{e}_2 + a_3(\mathbf{x})\,\hat{e}_3$$

A second-order tensor field $\mathbf{A}(\mathbf{x})$ with components $A_{ij}(\mathbf{x})$ is expressed as $\mathbf{A}(\mathbf{x}) = A_{ij}(\mathbf{x})\,\hat{e}_i\hat{e}_j$. Note that the spatial dependence in terms such as $a_i(\mathbf{x})$ and $A_{ij}(\mathbf{x})$ has been explicitly shown here for emphasis. As in other areas of calculus, however, this dependence is often omitted for conciseness and must be implied by context. In this example, the Cartesian local basis $\hat{e}_i$ does not vary with position.

6. Calculus Operations in Cartesian Tensor Notation

The following calculus operations are applicable to rectangular Cartesian coordinate systems with the same Cartesian local basis for vector and tensor components:


Gradient Operator: The gradient operator $\nabla$ implies differentiation by a vector, and its components are written as

$$\nabla(\,) \rightarrow \frac{\partial(\,)}{\partial x_i} = \left(\frac{\partial}{\partial x_1}, \frac{\partial}{\partial x_2}, \frac{\partial}{\partial x_3}\right)$$

It has been mentioned that the display of base vectors is optional, but they can be shown as

$$\nabla(\,) = \hat{e}_i\,\frac{\partial(\,)}{\partial x_i} = \hat{e}_1\frac{\partial}{\partial x_1} + \hat{e}_2\frac{\partial}{\partial x_2} + \hat{e}_3\frac{\partial}{\partial x_3}$$

The "Comma" Derivative Notation: An even more concise notation indicating differentiation is the use of a comma subscript followed by an index, as shown in the following examples:

$$(\,)_{,i} = \frac{\partial(\,)}{\partial x_i}; \quad A_{,j} = \frac{\partial A}{\partial x_j}; \quad A_{i,j} = \frac{\partial A_i}{\partial x_j}; \quad A_{ij,mn} = \frac{\partial^2 A_{ij}}{\partial x_m\,\partial x_n}$$

Gradient of a Scalar Field: If $\phi(\mathbf{x})$ is a scalar field, then its gradient is

$$\nabla\phi = \mathrm{grad}\,\phi \rightarrow \frac{\partial\phi}{\partial x_i} = \phi_{,i}, \quad \text{or including base vectors:} \quad \nabla\phi = \hat{e}_i\,\frac{\partial\phi}{\partial x_i} = \phi_{,i}\,\hat{e}_i$$

Directional Derivative Operator: If $\hat{n} = n_i\hat{e}_i$ is a unit vector, then the directional derivative operator for direction $\hat{n}$ is

$$\hat{n} \cdot \nabla(\,) = n_i\,\frac{\partial(\,)}{\partial x_i}$$

Divergence of a Vector Field: If $\mathbf{u} = \mathbf{u}(\mathbf{x})$ is a vector field, then its divergence is

$$\nabla \cdot \mathbf{u} = \mathrm{div}\,\mathbf{u} = \frac{\partial u_i}{\partial x_i} = u_{i,i}$$

Curl of a Vector Field: If $\mathbf{u} = \mathbf{u}(\mathbf{x})$ is a vector field, then its curl has components

$$(\nabla \times \mathbf{u})_i = (\mathrm{curl}\,\mathbf{u})_i = \epsilon_{ijk}\,\frac{\partial u_k}{\partial x_j} = \epsilon_{ijk}\,u_{k,j}$$

or including base vectors:

$$\nabla \times \mathbf{u} = \epsilon_{ijk}\,\frac{\partial u_k}{\partial x_j}\,\hat{e}_i = \epsilon_{ijk}\,u_{k,j}\,\hat{e}_i$$

Note that the vector cross product $\mathbf{a} \times \mathbf{b} = \epsilon_{ijk}\,a_i b_j\,\hat{e}_k$ has a different subscript ordering.


Laplacian Operator:

$$\nabla^2(\,) = \nabla \cdot \nabla(\,) = \mathrm{div}\,(\mathrm{grad}\,(\,)) = \frac{\partial^2(\,)}{\partial x_i\,\partial x_i} = (\,)_{,ii}$$

If $\phi(\mathbf{x})$ is a scalar field, then its Laplacian is

$$\nabla^2\phi = \left(\hat{e}_i\,\frac{\partial}{\partial x_i}\right) \cdot \left(\hat{e}_j\,\frac{\partial\phi}{\partial x_j}\right) = \delta_{ij}\,\frac{\partial^2\phi}{\partial x_i\,\partial x_j} = \frac{\partial^2\phi}{\partial x_i\,\partial x_i} = \phi_{,ii}$$

If $\mathbf{u} = \mathbf{u}(\mathbf{x})$ is a vector field, then its Laplacian has components

$$(\nabla^2\mathbf{u})_k = \frac{\partial^2 u_k}{\partial x_i\,\partial x_i} = u_{k,ii}, \quad \text{or adding base vectors:} \quad \nabla^2\mathbf{u} = \frac{\partial^2 u_k}{\partial x_i\,\partial x_i}\,\hat{e}_k = u_{k,ii}\,\hat{e}_k$$

Divergence of a Second-Order Tensor:

$$\nabla \cdot \mathbf{A} \rightarrow \frac{\partial A_{ij}}{\partial x_i} = A_{ij,i}$$

Double Dot Product: The term $\boldsymbol{\tau} : \nabla\mathbf{u}$ arises in fluid mechanics and is the "double dot" product of two tensors. It is evaluated as the double contraction of $\tau_{ij}\,\dfrac{\partial u_n}{\partial x_m}$, giving

$$\boldsymbol{\tau} : \nabla\mathbf{u} = \tau_{ij}\,\frac{\partial u_i}{\partial x_j} = \tau_{ij}\,u_{i,j}$$
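As a rough numerical illustration (an assumed setup, not part of the report), the Cartesian-tensor calculus operations above can be approximated on a uniform grid with finite differences; here np.gradient supplies the partial derivatives:

```python
# Gradient, divergence, curl and Laplacian of sampled Cartesian fields via finite differences.
import numpy as np

n, L = 41, 2.0 * np.pi
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]
X1, X2, X3 = np.meshgrid(x, x, x, indexing='ij')

phi = np.sin(X1) * np.cos(X2)                         # scalar field phi(x)
u = np.stack([np.sin(X2), np.sin(X3), np.sin(X1)])    # vector field u_i(x), shape (3, n, n, n)

grad_phi = np.stack(np.gradient(phi, dx))             # phi_,i
div_u = sum(np.gradient(u[i], dx)[i] for i in range(3))                            # u_i,i
lap_phi = sum(np.gradient(g, dx)[i] for i, g in enumerate(np.gradient(phi, dx)))   # phi_,ii

# curl: (curl u)_i = eps_ijk u_k,j
du = np.stack([np.stack(np.gradient(u[k], dx)) for k in range(3)])   # du[k, j] = u_k,j
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[k, j, i] = 1.0, -1.0
curl_u = np.einsum('ijk,kj...->i...', eps, du)

print(grad_phi.shape, div_u.shape, curl_u.shape, lap_phi.shape)
```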

7. Transformation Laws for Cartesian Coordinates and Tensor Components

It is sometimes useful to introduce a rotation of the vector basis and coordinates to a different orientation. The tensor components and coordinates then satisfy linear transformation laws that define coordinates and components in one basis in terms of known components in another basis. This section will give the linear transformation laws for a change from one Cartesian basis to another basis that has been rotated about the same origin.

Unstructured Grid Example - A computational example of this basis transformation arises when solving integral conservation equations using unstructured grids. Since there are no coordinate lines connecting grid points, the spatial points and solution variables are identified individually by their

global Cartesian coordinates. Discrete approximations for the integral field equations are then constructed for individual volume and surface elements using multidimensional interpolation among localized point groupings. Components of vector/tensor field variables are defined using the global coordinate basis, but components aligned with individual surface elements are needed in a rotated local basis, as indicated in the figure above.

Cartesian Transformation Laws - Consider two rectangular Cartesian coordinate systems whose origins coincide, as shown in the nearby figure. The position vector $\mathbf{x}$ from the origin to an arbitrary point P is identical in both coordinate systems. The base vectors $\hat{e}_j$ and coordinates $x_j$ are unprimed in the first system, and they are denoted by primed variables $\hat{e}'_i$ and $x'_i$ in the second system. Thus,

xxxj eˆˆ j'' i e i (7.1)

The coordinate transformation law is a linear relation that defines the x 'i coordinates in terms of x j , which are assumed to be known. The x 'i coordinates are determined by their fundamental definition as scalar projections of the position vector x onto each of the unit vectors eˆ 'i . The x 'i coordinates can be calculated individually using (7.1) as x'eˆ ' x e ˆ ' xe ˆ  x ('e ˆe ˆ )  x (1)(1)cos(',)e ˆe ˆ 1 1 1 j j j 1 j j 1 j (7.2a) Similarly,

xx'22 jj cos (eeˆˆ ' , ) ; xx'33 jj cos (eeˆˆ ' , ) (7.2b)

Each x 'i is a sum of x j components weighted by the direction cosines of the angles between the respective x 'i and x j axes. The individual equations in (7.2) are then written collectively as xx' cos (eeˆˆ ' , ) i j i j (7.3) For conciseness, the matrix of direction cosines is rewritten as R  cos (eeˆˆ ' , ) i j i j (7.4) where the first and second indices correspond to the primed and unprimed basis vectors, respectively. Substituting in (7.3) then gives the linear coordinate transformation law

x'i R i j x j Coordinate Transformation (7.5)


The inverse transformation is obtained by a similar derivation in which $x_j$ is calculated from known values of $x'_i$. The result is

$$x_i = R_{ji}\,x'_j \quad \text{Inverse Transformation} \tag{7.6}$$

Note that the second index corresponding to $x_j$ is summed in (7.5), whereas the first index corresponding to $x'_j$ is summed in (7.6). Since the coordinates $x_i$ and $x'_i$ are just components of an arbitrary position vector $\mathbf{x}$, this same transformation applies to the components of an arbitrary field vector $\mathbf{a}$, so that

$$a'_i = R_{ij}\,a_j \quad \text{Component Transformation}; \qquad a_i = R_{ji}\,a'_j \quad \text{Inverse Transformation} \tag{7.7}$$

Analogous transformation laws for tensors of any order are given below. The forward and inverse transformation laws for second-order tensors are given by

$$A'_{ij} = R_{ik}R_{jl}\,A_{kl} \quad \text{Component Transformation}; \qquad A_{ij} = R_{mi}R_{nj}\,A'_{mn} \quad \text{Inverse Transformation} \tag{7.8}$$

The forward and inverse transformation laws for higher-order tensors are given by

$$A'_{ijk} = R_{ir}R_{js}R_{kt}\,A_{rst} \quad \text{Component Transformation}; \qquad A_{ijk} = R_{ri}R_{sj}R_{tk}\,A'_{rst} \quad \text{Inverse Transformation} \tag{7.9}$$

Computation of the Rotation Matrix - Finally, the elements of the constant matrix $R_{ij}$ can be calculated by differentiating (7.5):

$$\frac{\partial x'_i}{\partial x_j} = \frac{\partial}{\partial x_j}\left(R_{im}\,x_m\right) = R_{im}\,\frac{\partial x_m}{\partial x_j} = R_{im}\,\delta_{mj} = R_{ij} = R \tag{7.10}$$

A similar differentiation of (7.6) gives the transpose matrix:

$$\frac{\partial x_i}{\partial x'_j} = R_{mi}\,\frac{\partial x'_m}{\partial x'_j} = R_{mi}\,\delta_{mj} = R_{ji} = R^T \tag{7.11}$$

Combining (7.10) and (7.11) with the chain rule gives

$$\frac{\partial x_i}{\partial x'_k}\,\frac{\partial x'_k}{\partial x_j} = \delta_{ij} \quad \Rightarrow \quad R^T R = R\,R^T = I, \quad R^T = R^{-1} \tag{7.12}$$

Equation (7.12) shows that $R^T = R^{-1}$, and thus $R = [R_{ij}]$ is an orthogonal matrix.
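A small sketch of these transformation laws is given below for an assumed rotation about the $x_3$ axis; it builds $R_{ij} = \cos(\hat{e}'_i, \hat{e}_j)$, verifies orthogonality, and applies (7.7) and (7.8) with their inverses:

```python
# Direction-cosine matrix for a rotation of the basis about x3, and the Cartesian
# transformation laws for vector and second-order tensor components.
import numpy as np

theta = np.deg2rad(30.0)
c, s = np.cos(theta), np.sin(theta)
R = np.array([[  c,   s, 0.0],   # rows are the primed base vectors in the unprimed basis
              [ -s,   c, 0.0],
              [0.0, 0.0, 1.0]])

print(np.allclose(R @ R.T, np.eye(3)))                 # orthogonality: R^T = R^-1

a = np.array([1.0, 2.0, 3.0])
a_p = np.einsum('ij,j->i', R, a)                       # a'_i = R_ij a_j        (7.7)
print(np.allclose(np.einsum('ji,j->i', R, a_p), a))    # inverse recovers a_i

A = np.arange(9.0).reshape(3, 3)
A_p = np.einsum('ik,jl,kl->ij', R, R, A)               # A'_ij = R_ik R_jl A_kl (7.8)
print(np.allclose(np.einsum('mi,nj,mn->ij', R, R, A_p), A))
```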


8. Transformation from Cartesian to General Curvilinear Coordinates

Structured Grid Example - When solving differential forms of field equations, general curvilinear coordinates and structured grids are often used to conform to geometric or boundary surfaces. Although standard treatments of non-Cartesian coordinate systems generally use local basis vectors aligned with the coordinates, this requires differentiation of spatially varying base vectors for curvilinear coordinates and introduces dual base vectors for nonorthogonal coordinates. As mentioned in the Introduction, these complexities are avoided by transforming the spatial derivative terms from Cartesian to general curvilinear coordinates, while retaining a uniform Cartesian local basis for vector/tensor components, as indicated in the adjacent figure. The governing equations are thereby expressed in general curvilinear coordinates, but the dependent variables are the Cartesian vector/tensor components. A simple chain-rule derivative transformation is all that is needed to implement this approach.

Coordinate Transformation - A given point in space $P(\mathbf{x})$ can be defined using Cartesian coordinates $x_i = (x_1, x_2, x_3)$ or general curvilinear coordinates $\xi_i = (\xi_1, \xi_2, \xi_3)$. The transformation from Cartesian to general coordinates is given by

$$\xi_1 = \xi_1(x_1, x_2, x_3); \quad \xi_2 = \xi_2(x_1, x_2, x_3); \quad \xi_3 = \xi_3(x_1, x_2, x_3)$$

and the inverse transformation is given by

$$x_1 = x_1(\xi_1, \xi_2, \xi_3); \quad x_2 = x_2(\xi_1, \xi_2, \xi_3); \quad x_3 = x_3(\xi_1, \xi_2, \xi_3)$$

In index notation, these become $\xi_j = \xi_j(x_i)$ and $x_i = x_i(\xi_j)$.

Assuming the governing differential equations are available in Cartesian coordinates, the transformed equations are obtained by replacing all Cartesian derivative terms with the chain-rule substitution

$$\frac{\partial(\,)}{\partial x_i} = \frac{\partial \xi_j}{\partial x_i}\,\frac{\partial(\,)}{\partial \xi_j} \tag{8.1}$$

Equation (8.1) can be implemented analytically if the grid transformation $\xi_j(x_i)$ is known (for example in cylindrical or spherical coordinates). However, in computations, the structured grid is usually defined by the Cartesian coordinates of each grid point, $x_i(\xi_j)$. In this case, it is a simple matter to calculate the derivatives of the inverse transformation $\partial x_i / \partial \xi_j$ numerically and make use of the identity

$$\left[\frac{\partial x_i}{\partial \xi_j}\right]\left[\frac{\partial \xi_j}{\partial x_i}\right] = I \tag{8.2}$$

The substitution (8.1) can then be written in terms of $\partial x_i / \partial \xi_j$ as

$$\frac{\partial(\,)}{\partial x_i} = \left[\frac{\partial x_i}{\partial \xi_j}\right]^{-1}\frac{\partial(\,)}{\partial \xi_j} \tag{8.3}$$

REFERENCES

1. Aris, Rutherford, Vectors, Tensors and the Basic Equations of Fluid Mechanics. Dover Publications, ISBN-10: 0486661105, 1962.
2. Fleisch, Daniel, A Student's Guide to Vectors and Tensors. Cambridge University Press, ISBN-10: 0521171903, ISBN-13: 978-0521171908, 2011.
3. Vinokur, M., Conservation Equations of Gas Dynamics in Curvilinear Coordinate Systems. J. Computational Physics, 14:105-125, 1974.
4. Steger, J. L., Implicit Finite-Difference Solution of Flow about Arbitrary Two-Dimensional Geometries. AIAA Journal, 16(7):679-686, 1978.
5. Taylor, L. K. and Whitfield, D. L., Unsteady Three-Dimensional Incompressible Euler and Navier-Stokes Solver for Stationary and Dynamic Grids. AIAA Paper 91-1650, 1991.
6. Anderson, W. K. and Bonhaus, D. L., An Implicit Upwind Algorithm for Computing Turbulent Flows on Unstructured Grids. Computers and Fluids, 23(1):1-21, 1994.
7. Mavriplis, D. J., Unstructured Grid Techniques. Annual Review of Fluid Mechanics, 29:473-514, 1997.
8. Kaliakin, V. N., Introduction to Approximate Solution Techniques, Numerical Modeling, and Finite Element Methods. Marcel Dekker, Inc., New York, ISBN: 0-8247-0679-X, 2002. http://www.ce.udel.edu/faculty/kaliakin/appendix_tensors.pdf


APPENDIX - DUAL-BASIS VECTOR CALCULUS

______

Preface - In computational field simulations, a local Cartesian vector basis is commonly used to define vector/tensor components, even if a curvilinear and possibly nonorthogonal coordinate system is used to locate points in space (see Sections 7 and 8). However, it is common in mathematical treatments of curvilinear coordinates to assume that the local vector basis is defined in terms of the local coordinate directions. This introduces a dual system of local base vectors consisting of a) the tangent (covariant) base vectors that are tangent to the coordinate directions, and b) the reciprocal (contravariant) base vectors that are perpendicular to the coordinate surfaces. These are identical only if the coordinates are orthogonal. In addition, the notation for normalized unit basis vectors is not standardized, since un-normalized base vectors often provide simpler formulas. This APPENDIX summarizes the formulas that arise in dual-basis vector calculus for general coordinates.

General Curvilinear Non-Orthogonal Coordinates

Coordinate Systems and Transformations - Locating a point P in space requires a coordinate system; the location or position of the point is specified by a set of coordinate values $x_k = (x_1, x_2, x_3)$. This same point location can be expressed in a different coordinate system $x'_k = (x'_1, x'_2, x'_3)$, and the functional relationship $x'_k = x'_k(x_k)$ between the two sets of coordinates is called a coordinate transformation. The inverse transformation is $x_k = x_k(x'_k)$.

Position Vector - Each point P in a continuum space can be identified by a position vector $\mathbf{r} = \mathbf{r}(x_k)$ that points from an arbitrary origin O to the point $P(x_k)$.

Tangent and Reciprocal Base Vectors - The components of an arbitrary vector $\mathbf{A}$ bound to the point are defined by projection onto local base vectors at P. The set of base vectors $\mathbf{e}_k$ tangent to the coordinate lines passing through the point is defined by

$$\mathbf{e}_k = \frac{\partial\mathbf{r}}{\partial x_k}; \qquad d\mathbf{r} = \frac{\partial\mathbf{r}}{\partial x_k}\,dx_k = \mathbf{e}_k\,dx_k$$

A set of reciprocal base vectors $\mathbf{e}^k$ is defined as follows:

$$\mathbf{e}^1 = \frac{\mathbf{e}_2 \times \mathbf{e}_3}{\mathbf{e}_1 \cdot (\mathbf{e}_2 \times \mathbf{e}_3)}, \qquad \mathbf{e}^2 = \frac{\mathbf{e}_3 \times \mathbf{e}_1}{\mathbf{e}_1 \cdot (\mathbf{e}_2 \times \mathbf{e}_3)}, \qquad \mathbf{e}^3 = \frac{\mathbf{e}_1 \times \mathbf{e}_2}{\mathbf{e}_1 \cdot (\mathbf{e}_2 \times \mathbf{e}_3)}$$


This definition ensures that the reciprocal base vectors have the following two properties:

1) Each reciprocal base vector is perpendicular to the two tangent base vectors that have different indices. Thus, $\mathbf{e}^1$ is perpendicular to $\mathbf{e}_2$ and $\mathbf{e}_3$; $\mathbf{e}^2$ is perpendicular to $\mathbf{e}_1$ and $\mathbf{e}_3$; $\mathbf{e}^3$ is perpendicular to $\mathbf{e}_1$ and $\mathbf{e}_2$.

2) The dot product of tangent and reciprocal base vectors having the same index is unity. Thus,

$$\mathbf{e}^1 \cdot \mathbf{e}_1 = 1; \quad \mathbf{e}^2 \cdot \mathbf{e}_2 = 1; \quad \mathbf{e}^3 \cdot \mathbf{e}_3 = 1$$

The length of the reciprocal base vectors is thus given by

$$|\mathbf{e}^k| = \frac{1}{|\mathbf{e}_k|\,\cos(\mathbf{e}^k, \mathbf{e}_k)} \quad \text{(no summation on } k\text{)}$$

Contravariant and Covariant Vector Components - The vector $\mathbf{A}$ can be expressed either in terms of the tangent base vectors $\mathbf{e}_j$ and contravariant components $A^j$, or in terms of the reciprocal base vectors $\mathbf{e}^k$ and covariant components $A_k$, as follows:

$$\mathbf{A} = A^j\,\mathbf{e}_j = A_k\,\mathbf{e}^k$$

These vector components can be calculated from the vector and base vectors as follows:

$$A^k = \mathbf{A} \cdot \mathbf{e}^k, \qquad A_k = \mathbf{A} \cdot \mathbf{e}_k$$

The contravariant components represent parallel projections onto the tangent base vectors $\mathbf{e}_j$, and the covariant components represent perpendicular projections onto the reciprocal base vectors $\mathbf{e}^k$. If needed, normalized or unit base vectors can be defined by

$$\hat{e}_k = \frac{\mathbf{e}_k}{|\mathbf{e}_k|}, \qquad \hat{e}^k = \frac{\mathbf{e}^k}{|\mathbf{e}^k|} \quad \text{(no summation on } k\text{)}$$

If a unit-vector basis is used, then the vector can be expressed in normalized contravariant components $\hat{A}^j$ or covariant components $\hat{A}_k$ as follows:

$$\mathbf{A} = \hat{A}^j\,\hat{e}_j = \hat{A}_k\,\hat{e}^k, \quad \text{where} \quad \hat{A}^k = \mathbf{A} \cdot \hat{e}^k \quad \text{and} \quad \hat{A}_k = \mathbf{A} \cdot \hat{e}_k.$$

Note that this notation for normalized unit vectors is not standard, although it is unambiguous in the current context.
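The following sketch (with an assumed constant, skewed tangent basis) constructs the reciprocal base vectors from the cross-product formulas above, checks $\mathbf{e}^j \cdot \mathbf{e}_k = \delta_{jk}$, and forms the contravariant and covariant components of a vector:

```python
# Dual basis for an assumed nonorthogonal (skewed) tangent basis, and the
# contravariant / covariant components of a vector A.
import numpy as np

# Columns of J are the tangent base vectors e_1, e_2, e_3 (constant here for simplicity).
J = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.3],
              [0.0, 0.0, 1.0]])
e = [J[:, k] for k in range(3)]

vol = np.dot(e[0], np.cross(e[1], e[2]))        # e_1 . (e_2 x e_3)
e_rec = [np.cross(e[1], e[2]) / vol,            # reciprocal base vectors e^1, e^2, e^3
         np.cross(e[2], e[0]) / vol,
         np.cross(e[0], e[1]) / vol]

# Check e^j . e_k = delta_jk
print(np.allclose([[np.dot(e_rec[j], e[k]) for k in range(3)] for j in range(3)], np.eye(3)))

A = np.array([1.0, 2.0, 3.0])                   # an arbitrary vector (Cartesian components)
A_contra = np.array([np.dot(A, e_rec[k]) for k in range(3)])   # A^k = A . e^k
A_cov    = np.array([np.dot(A, e[k])     for k in range(3)])   # A_k = A . e_k

# Either expansion reconstructs the same vector
print(np.allclose(sum(A_contra[j] * e[j]     for j in range(3)), A))
print(np.allclose(sum(A_cov[k]    * e_rec[k] for k in range(3)), A))
```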


Change of Vector Basis by Local Coordinate Transformation – If a given point

P x1,, x 2 x 3  is located by a new (primed) coordinate system as P x1,, x 2 x 3  , then the r tangent base vectors at point P for the primed system are given by ek  , evaluated at  x k point . The tangent base vectors transform as the inverse of the coordinate transformation 1 xx' jk ej e k e k kj  x  x The contravariant vector components transform as  x ' j AA' jk  x k The xx'/jk terms represent the projections of the tangent base vectors of the original x- system onto the tangent base vectors of the new x’-system. The coordinate differentials transform as  x ' j dx' jk dx  x k Derivatives transform as the inverse transformation    x k     xxj j x k The covariant vector components transform as the inverse transformation  x k AA' jk  x ' j The xxkj/' terms represent the components of the reciprocal base vectors perpendicular to the original x-axes, expressed in the new x’-system. The reciprocal base vectors transform as 1 xxk ' j e j e k e k jk xx'

Distance and the Metric Tensor - Consider the vector $d\mathbf{r}$ connecting two points that are an infinitesimal distance ds apart. Since $d\mathbf{r}$ is a vector, it can be expressed as

$$d\mathbf{r} = \mathbf{e}_j\,dx^j = \mathbf{e}^k\,dx_k$$

where

$$dx^k = d\mathbf{r} \cdot \mathbf{e}^k, \qquad dx_k = d\mathbf{r} \cdot \mathbf{e}_k$$

The distance is determined by $ds^2 = d\mathbf{r} \cdot d\mathbf{r}$, which gives


2 j k j k k j dsddr  r  ej  e k dxdx  e  e dxdx j k  e j  e  dxdx k

 g jk j k k  g   j

This suggests the definition of metric tensors such that

g j kee j k

g j kee j k

k k eej  j and thus distance ds is given by

2 j k j k j ds drr  d  gj k dxdx  g dxdx j k  dxdx k

Special Case of Orthogonal Coordinates – If the coordinates are orthogonal, then the base vectors are perpendicular to each other, so that

gj kee j  k 0 if j  k

The metric tensors are thus diagonal matrices, and distance is given by

$$ds^2 = (h_1\,dx_1)^2 + (h_2\,dx_2)^2 + (h_3\,dx_3)^2$$

where

$$h_k = \sqrt{g_{kk}} = |\mathbf{e}_k| \quad \text{(no summation on } k\text{)}$$

The unit vectors are given by

$$\hat{e}_k = \frac{\mathbf{e}_k}{h_k} \quad \text{(no summation on } k\text{)}$$

and since $|\mathbf{e}^k| = 1/h_k$ for orthogonal coordinates, we have

$$\hat{e}^k = h_k\,\mathbf{e}^k = \hat{e}_k \quad \text{(no summation on } k\text{)}$$

Length and Angle between Two Vectors - The length of a vector is given by

$$|\mathbf{A}| = \sqrt{g_{jk}\,A^j A^k} = \sqrt{g^{jk}\,A_j A_k} = \sqrt{A^k A_k}$$

The angle between two vectors is given by

$$\cos\theta = \frac{\mathbf{A} \cdot \mathbf{B}}{|\mathbf{A}|\,|\mathbf{B}|} = \frac{g_{jk}\,A^j B^k}{\sqrt{g_{jk}A^j A^k}\,\sqrt{g_{jk}B^j B^k}} = \frac{A^k B_k}{\sqrt{A^k A_k}\,\sqrt{B^k B_k}} = \frac{g^{jk}\,A_j B_k}{\sqrt{g^{jk}A_j A_k}\,\sqrt{g^{jk}B_j B_k}}$$

Raising and Lowering Indices using the Metric Tensor - The metric tensor can be used to convert between contravariant and covariant components of vectors and tensors as follows: Covariant components are obtained using

k Aj g j k A

jk Contravariant components are obtained using the inverse of g jk , which is just g , as follows:

k1 j k A gj k A j g A j Derivatives of Vectors, Christoffel Symbols – Consider the derivative of a vector with respect to a single coordinate:

j  A e j e A  j  A j j  e 1  A x1  x 1  x 1  x 1

The first of these two terms is just the derivative of each vector component, $\partial A^j / \partial x_1$, times the corresponding tangent base vector $\mathbf{e}_j$. The second term is zero for Cartesian coordinates because each of the base vectors $\mathbf{e}_j$ is constant (independent of spatial position), but for general non-Cartesian systems, $\mathbf{e}_j$ varies with each coordinate direction. Since each vector component $A^j$ multiplies the derivative of its base vector $\mathbf{e}_j$, this term has components in all three directions. Obviously, general non-Cartesian coordinates add considerable complexity to the calculation of derivatives of vectors.

Since the derivative of each base vector is itself a vector with components in all three directions, a new symbol called a Christoffel symbol, $\Gamma^i_{jk}$, is introduced to express this additional sum (using the i index) across base vectors. The Christoffel-symbol notation is given by


e j i jkei  x k Once the Christoffel symbol “components” are defined, then the summing process needed to construct the derivative of a vector can be expressed in compact notation. It is noted that the Christoffel symbols are nothing more than weighting factors representing the three tangent projections of the derivatives of each of the three base vectors in each of the three coordinate directions. Hence, there are twenty-seven Christoffel-symbol values to be defined because there

k are three summing weights for each of the nine components of e j / x .

Evaluation of Christoffel Symbols – It is noted (without derivation) that the Christoffel symbols can be rewritten as

$$\Gamma^i_{jk} = \mathbf{e}^i \cdot \frac{\partial\mathbf{e}_j}{\partial x_k}$$

and evaluated from knowledge of the metric tensor as follows:

$$\Gamma^i_{jk} = \frac{1}{2}\,g^{il}\left(\frac{\partial g_{jl}}{\partial x_k} + \frac{\partial g_{kl}}{\partial x_j} - \frac{\partial g_{jk}}{\partial x_l}\right)$$

Covariant Derivatives - As explained above, the derivative of a vector (as evaluated with Christoffel symbols) includes the effect of changes in both the vector components and in the magnitude and direction of the base vectors used to define the vector components. Consider the following derivative of a vector written using Christoffel symbols:

$$\frac{\partial\mathbf{A}}{\partial x_k} = \frac{\partial A^j}{\partial x_k}\,\mathbf{e}_j + A^j\,\frac{\partial\mathbf{e}_j}{\partial x_k} = \left(\frac{\partial A^j}{\partial x_k} + A^i\,\Gamma^j_{ik}\right)\mathbf{e}_j$$

As indicated above, the Christoffel symbols allow the derivative to be written with the base vectors "factored out" of the terms containing the contravariant vector components. The quantity in parentheses is called the "covariant" derivative, which includes changes in both the vector components and the base vectors. It has been given a special notation that places a semicolon (;) before the index of the independent variable in the derivative, so that

$$A^j_{\;;k} = \frac{\partial A^j}{\partial x_k} + A^i\,\Gamma^j_{ik}$$

The analogous formula for differentiating the covariant vector components is given by

$$A_{j;k} = \frac{\partial A_j}{\partial x_k} - A_i\,\Gamma^i_{jk}$$
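As a check on the metric formula above, the sketch below (an assumed example in cylindrical coordinates, using SymPy for the symbolic derivatives) evaluates all $\Gamma^i_{jk}$ and recovers the familiar nonzero values $\Gamma^r_{\theta\theta} = -r$ and $\Gamma^\theta_{r\theta} = 1/r$:

```python
# Christoffel symbols from the metric tensor (assumed example: cylindrical coordinates).
#   Gamma^i_jk = (1/2) g^il ( dg_jl/dx_k + dg_kl/dx_j - dg_jk/dx_l )
import sympy as sp

r, th, z = sp.symbols('r theta z', positive=True)
x = [r, th, z]
g = sp.diag(1, r**2, 1)          # metric g_jk for cylindrical coordinates
ginv = g.inv()                   # g^il

Gamma = [[[sp.simplify(sum(sp.Rational(1, 2) * ginv[i, l]
                           * (sp.diff(g[j, l], x[k]) + sp.diff(g[k, l], x[j]) - sp.diff(g[j, k], x[l]))
                           for l in range(3)))
           for k in range(3)]
          for j in range(3)]
         for i in range(3)]

print(Gamma[0][1][1])   # Gamma^r_theta,theta = -r
print(Gamma[1][0][1])   # Gamma^theta_r,theta = 1/r
```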


The process of covariant differentiation can be applied to higher-order tensors using formulas analogous to those above.

Combining Cartesian Local Base Vectors with General Coordinates – It was shown in Section 8 that the complexity of Christoffel symbols can be avoided in computational methods by the simple device of using Cartesian base vectors to define vector and tensor components but defining derivatives and position in a general curvilinear coordinate system, obtained by a transformation from Cartesian to curvilinear coordinates. Since the Cartesian base vectors are constant, the Christoffel symbols all vanish in this formulation, which greatly reduces the computational complexity.

Gradient Operator - The gradient operator is given by

$$\nabla = \mathbf{e}^k\,\frac{\partial}{\partial x_k} = g^{jk}\,\mathbf{e}_j\,\frac{\partial}{\partial x_k}$$
