
Review

For vector calculus:

• Vectors
• Summation representation of an n by n array
• Gradient and Curl
• Spherical Harmonics (maybe)

Motivation

If you tape a book shut and try to spin it in the air about each independent axis, you will notice that it spins fine on two axes but not on the third. That's the inertia tensor in your hands. Similar are the polarization tensor, the index of refraction tensor, and others. But tensors also show up in all sorts of places that don't connect to an anisotropic material property; in fact, even spherical harmonics are tensors. What are the similarities and differences between such a plethora of tensors?

    "The mathematics of tensors is particularly useful for describing properties of substances which vary in direction – although that's only one example of their use." – Feynman II 31-1

Vector Calculus and Identities

Tensor analysis extends deep into coordinate transformations of all kinds of spaces and coordinate systems. At a rather shallow level, it can be used to significantly simplify the operations and identities of vector calculus.

Notation

• contravariant: denoted by a superscript, $A^i$; its components transform like the coordinates themselves
• covariant: denoted by a subscript, $A_i$; its prototype is the gradient, a linear form that takes a vector and gives us a scalar

To avoid confusion, in Cartesian coordinates both types are the same, so we just opt for the subscript. Thus a vector $\mathbf{x}$ would be $(x_1, x_2, x_3) \in \mathbb{R}^3$. As it turns out, in Cartesian space and other rectilinear coordinate systems there is no difference between contravariant and covariant vectors. This will not be the case for other coordinate systems, such as curvilinear coordinate systems, or in four dimensions. These definitions are closely related to the Jacobian.
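The Cartesian equivalence can be made concrete with a small numerical sketch (Python with numpy is assumed here; the rotation R and shear S are illustrative choices, not from the original notes). Contravariant components transform with the Jacobian, covariant components with its inverse transpose, and the two rules coincide exactly when the Jacobian is orthogonal, i.e., for rotations of Cartesian axes.

```python
import numpy as np

theta = 0.3
# Orthogonal Jacobian (rotation of Cartesian axes): x' = R x
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
# Non-orthogonal Jacobian (a shear), for contrast
S = np.array([[1.0, 0.5],
              [0.0, 1.0]])

A = np.array([2.0, 1.0])  # components of some fixed vector

for J in (R, S):
    contra = J @ A                   # A'^i = (dx'_i / dx_j) A^j
    co = np.linalg.inv(J).T @ A      # A'_i  = (dx_j / dx'_i) A_j
    print(np.allclose(contra, co))   # True for R, False for S
```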

Definitions

• $q$ = scalar, a.k.a. rank 0 tensor. One real number, invariant under rotation.
• $A_i$ = vector, a.k.a. rank 1 tensor, where $i = 1, 2, 3, \ldots$ Components will transform under rotations. Simply put, an arrow does not need to start at the origin or point in any specific orientation, but it's still the same arrow with the same magnitude.
• $A_{ij}$ = rank 2 tensor, a.k.a. tensor. Can be written out as a square array. Not all square arrays are tensors; there are some specific requirements. In Cartesian space the transformation must be orthogonal and norm preserving.
• In $N$-dimensional space a tensor of rank $n$ has $N^n$ components.

Rank 1 Tensors (Vectors)

Definitions for contravariant and covariant tensors inevitably appear at the beginning of every discussion of tensors, and they are invariably given without explanation. In that spirit we begin our discussion of rank 1 tensors. The main difference between contravariant and covariant tensors is in how they are transformed, for example under a rotation of a vector.

Contravariant

In general we can define a contravariant vector as one that transforms as $A^j$ does in the following:

$$A'^i = \sum_j \frac{\partial x'_i}{\partial x_j} A^j$$

Here $x'_i$ is the transformed axis and $x_j$ is the old axis.

Covariant

The partial derivative above may have you thinking of a gradient. The gradient is the prototype for a covariant vector, which is defined as

$$\frac{\partial \varphi'}{\partial x'_i} = \sum_j \frac{\partial \varphi}{\partial x_j} \frac{\partial x_j}{\partial x'_i}$$

Covariant vectors are actually linear forms, not vectors. The linear form is a mapping of vectors into scalars which is additive and homogeneous under multiplication by scalars.¹

Scalar Product

Take for example the scalar or dot product,

$$\mathbf{A} \cdot \mathbf{B} = \sum_i A_i B_i = A_i B_i$$

Clearly $i$ is just a general way of specifying the components to multiply: we sum the products of the components whose index $i$ is the same. The equation is also written without the summation sign; the two forms are equivalent because of the Einstein summation convention.

Scalar Product with an Invariant Tensor (Kronecker Delta)

In a more thorough treatment we can also take the scalar product using a mixed tensor of rank 2, $\delta^k_l$, more commonly recognized as $\delta_{ij}$:

$$\delta_{ij} = \begin{cases} 1 & i = j \\ 0 & i \neq j \end{cases} \qquad \text{or} \qquad \delta^k_l = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$

Clearly the Kronecker delta is a rank two tensor that can be expressed as a matrix, but it rarely takes such an inherently dangerous form. The Kronecker delta is a so-called invariant tensor. Now we can write our dot product as

$$\mathbf{A} \cdot \mathbf{B} = \sum_{ij} A_i \delta_{ij} B_j = A_i \delta_{ij} B_j = A_i B_i$$

The sum is, of course, neglected because of the summation convention. Clearly the Kronecker delta facilitates the scalar product. Furthermore, the Kronecker delta can be applied to any situation where the product of orthogonal complements must be zero.

Summation Convention and Contraction

Summation Convention:² when a subscript (letter, not number) appears twice on one side of an equation, summation with respect to that subscript is implied.

Contraction: consists of setting two unlike indices (subscripts) equal to each other and then summing as implied by the summation convention.

¹ http://xxx.lanl.gov/PS_cache/gr-qc/pdf/9807/9807044.pdf
² From Arfken, p. 138.
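As a minimal sketch of the summation convention in code (Python with numpy assumed; the vectors are arbitrary examples), np.einsum sums over any repeated index, exactly mirroring the notation above:

```python
import numpy as np

A = np.array([1.0, 2.0, 3.0])
B = np.array([4.0, 5.0, 6.0])

# A . B = A_i B_i : the repeated index i is summed over.
dot_direct = np.einsum('i,i->', A, B)

# The same product written with the Kronecker delta:
# A . B = A_i delta_ij B_j
delta = np.eye(3)
dot_delta = np.einsum('i,ij,j->', A, delta, B)

print(dot_direct, dot_delta)              # 32.0 32.0
print(np.isclose(dot_direct, dot_delta))  # True
```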

Levi-Civita Symbol

The rank three analog of the Kronecker delta is known as the Levi-Civita symbol, $\varepsilon_{ijk}$:

$$\varepsilon_{ijk} = \begin{cases} 1 & \text{if } (ijk) \text{ is an even permutation: } \varepsilon_{123} = \varepsilon_{231} = \varepsilon_{312} \\ -1 & \text{if } (ijk) \text{ is an odd permutation: } \varepsilon_{132} = \varepsilon_{213} = \varepsilon_{321} \\ 0 & \text{if any two indices are the same} \end{cases}$$

It is a three-dimensional, 27-element array with only six non-zero components. The Levi-Civita symbol is a completely antisymmetric unit tensor. We can use it to perform some amazing operations.

Symmetry

Any rank 2 tensor splits into symmetric and antisymmetric parts:

$$A_{ij} = \underbrace{\tfrac{1}{2}(A_{ij} + A_{ji})}_{\text{symmetric}} + \underbrace{\tfrac{1}{2}(A_{ij} - A_{ji})}_{\text{antisymmetric}}$$

As used in P220

The functions of position are given by $f = x'_i$, $g = x'_j$, $h = x'_k$ (not given in class). See Griffiths, Appendix A, on Vector Calculus and Curvilinear Coordinates.
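A short sketch, assuming Python with numpy (the helper name levi_civita is mine, not from the notes), builds $\varepsilon_{ijk}$ directly from the permutation rule and confirms the properties just listed:

```python
import numpy as np

def levi_civita():
    """Build eps_ijk as a 3x3x3 array directly from the permutation rule."""
    eps = np.zeros((3, 3, 3))
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0   # even permutations
    eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0  # odd permutations
    return eps

eps = levi_civita()
print(np.count_nonzero(eps))                  # 6 of the 27 entries are non-zero
# Complete antisymmetry: swapping any pair of indices flips the sign.
print(np.allclose(eps, -eps.swapaxes(0, 1)))  # True
print(np.allclose(eps, -eps.swapaxes(1, 2)))  # True
```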

Cross Product

$$\mathbf{A} = \mathbf{B} \times \mathbf{C}, \qquad A_i = \varepsilon_{ijk} B_j C_k$$

Let's take a closer look:

$$A_x = \varepsilon_{xjk} B_j C_k = \overbrace{\varepsilon_{xyz}}^{1} B_y C_z + \overbrace{\varepsilon_{xzy}}^{-1} B_z C_y$$

$$A_y = \varepsilon_{yjk} B_j C_k = \overbrace{\varepsilon_{yxz}}^{-1} B_x C_z + \overbrace{\varepsilon_{yzx}}^{1} B_z C_x$$

$$A_z = \varepsilon_{zjk} B_j C_k = \overbrace{\varepsilon_{zxy}}^{1} B_x C_y + \overbrace{\varepsilon_{zyx}}^{-1} B_y C_x$$

Now you can see why we just write $A_i = \varepsilon_{ijk} B_j C_k$.
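The component expansion can be checked mechanically (Python with numpy assumed; B and C are arbitrary example vectors, and eps is the Levi-Civita array from the sketch above):

```python
import numpy as np

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

B = np.array([1.0, 2.0, 3.0])
C = np.array([4.0, 5.0, 6.0])

# A_i = eps_ijk B_j C_k : j and k are both repeated, so both are summed.
A = np.einsum('ijk,j,k->i', eps, B, C)
print(np.allclose(A, np.cross(B, C)))  # True
```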

Partial Derivative, Grad, Div and Curl

We can write a partial derivative in general form as

$$\partial_i = \frac{\partial}{\partial x_i}$$

Grad

$$(\nabla \varphi)_i = \partial_i \varphi$$

We know this will generate a vector, since there is nothing to sum over.

Divergence

$$\nabla \cdot \mathbf{A} = \partial_i A_i = \partial_i \delta_{ij} A_j$$

Since two indices are the same, we sum to generate a scalar.

Curl

$$(\nabla \times \mathbf{A})_i = \varepsilon_{ijk} \partial_j A_k = c_i$$

Now we have three indices, so we sum twice to get a vector. This is a prime example of contraction.

Vector Identity Examples

Let's take a closer look at the product of two Levi-Civita symbols with one index contracted:

$$\varepsilon_{ijk} \varepsilon_{lmk} = \delta_{il} \delta_{jm} - \delta_{im} \delta_{jl}$$

Contracting a second pair of indices gives

$$\varepsilon_{ijk} \varepsilon_{ljk} = \delta_{il} \delta_{jj} - \delta_{ij} \delta_{jl} = 3\delta_{il} - \delta_{il} = 2\delta_{il}$$

Okay, now let's derive identities. First, with $\mathbf{e} = \mathbf{b} \times \mathbf{c}$,

$$\mathbf{d} = \mathbf{a} \times (\mathbf{b} \times \mathbf{c})$$

$$d_i = \varepsilon_{ijk} a_j e_k = \varepsilon_{ijk} a_j \varepsilon_{klm} b_l c_m = (\delta_{il}\delta_{jm} - \delta_{im}\delta_{jl})\, a_j b_l c_m = b_i (a_j c_j) - c_i (a_j b_j)$$

$$\mathbf{d} = \mathbf{b}(\mathbf{a} \cdot \mathbf{c}) - \mathbf{c}(\mathbf{a} \cdot \mathbf{b})$$

And now for another identity, $\nabla \times (\nabla \varphi) = 0$:

$$\varepsilon_{ijk} \partial_j (\partial_k \varphi) = 0$$

This vanishes because the antisymmetric $\varepsilon_{ijk}$ is contracted with the symmetric pair of derivatives $\partial_j \partial_k$; the same argument gives $\nabla \cdot (\nabla \times \mathbf{A}) = 0$:

$$\partial_i \varepsilon_{ijk} \partial_j A_k = 0$$

And now a really complicated one, $\mathbf{c} = \nabla \times (\mathbf{a} \times \mathbf{b})$:

$$c_i = \varepsilon_{ijk} \partial_j (\varepsilon_{klm} a_l b_m) = (\delta_{il}\delta_{jm} - \delta_{im}\delta_{jl}) \left[ (\partial_j a_l) b_m + a_l (\partial_j b_m) \right]$$

$$c_i = (\partial_j a_i) b_j + a_i (\partial_j b_j) - (\partial_j a_j) b_i - a_j (\partial_j b_i)$$

$$\mathbf{c} = (\mathbf{b} \cdot \nabla)\mathbf{a} + \mathbf{a}(\nabla \cdot \mathbf{b}) - (\nabla \cdot \mathbf{a})\mathbf{b} - (\mathbf{a} \cdot \nabla)\mathbf{b}$$
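Both the epsilon-delta identity and the $\mathbf{d} = \mathbf{b}(\mathbf{a}\cdot\mathbf{c}) - \mathbf{c}(\mathbf{a}\cdot\mathbf{b})$ result can be verified numerically, assuming Python with numpy (the arrays and test vectors are arbitrary examples):

```python
import numpy as np

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0
delta = np.eye(3)

# eps_ijk eps_lmk = delta_il delta_jm - delta_im delta_jl
lhs = np.einsum('ijk,lmk->ijlm', eps, eps)
rhs = (np.einsum('il,jm->ijlm', delta, delta)
       - np.einsum('im,jl->ijlm', delta, delta))
print(np.allclose(lhs, rhs))  # True

# d = a x (b x c) = b (a.c) - c (a.b), on random vectors
rng = np.random.default_rng(0)
a, b, c = rng.normal(size=(3, 3))
d = np.cross(a, np.cross(b, c))
print(np.allclose(d, b * (a @ c) - c * (a @ b)))  # True
```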

Rank 2 Tensors

The Kronecker delta was previously mentioned to be a rank 2 mixed tensor, but without further explanation. Rank 2 tensors can be written as square arrays, and they come in three types: contravariant, mixed, and covariant. As we might expect, in Cartesian coordinates these are the same. They transform as

$$A'^{ij} = \sum_{kl} \frac{\partial x'_i}{\partial x_k} \frac{\partial x'_j}{\partial x_l} A^{kl} \qquad \text{(contravariant)}$$

$$B'^{i}_{j} = \sum_{kl} \frac{\partial x'_i}{\partial x_k} \frac{\partial x_l}{\partial x'_j} B^{k}_{l} \qquad \text{(mixed)}$$

$$C'_{ij} = \sum_{kl} \frac{\partial x_k}{\partial x'_i} \frac{\partial x_l}{\partial x'_j} C_{kl} \qquad \text{(covariant)}$$
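A sketch of these rules in code, assuming Python with numpy (J is an illustrative rotation): for an orthogonal Jacobian the contravariant and covariant rules give the same result, which is the Cartesian equivalence noted above.

```python
import numpy as np

theta = 0.7
J = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])  # J_ik = dx'_i / dx_k
Jinv = np.linalg.inv(J)                               # Jinv_ki = dx_k / dx'_i

C = np.arange(9.0).reshape(3, 3)  # components C_kl of some rank 2 tensor

# Contravariant rule: A'^ij = (dx'_i/dx_k)(dx'_j/dx_l) A^kl
C_contra = np.einsum('ik,jl,kl->ij', J, J, C)
# Covariant rule:     C'_ij = (dx_k/dx'_i)(dx_l/dx'_j) C_kl
C_co = np.einsum('ki,lj,kl->ij', Jinv, Jinv, C)

print(np.allclose(C_contra, C_co))  # True, since J is orthogonal
```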

References and Further Reading

Arfken and Weber, Mathematical Methods for Physicists, 5th ed. (2001), Section 2.6, Tensor Analysis.

Feynman, The Feynman Lectures on Physics, Vol. II, Lecture 31.

Related Topics

• Spherical Harmonics
• Group Theory

Author: W. Schlotter · Last Modified: May 20, 2003 · One-pager 1.02 · [email protected]