
Tensors in generalized coordinate systems: components and direct notation

Math 1550 lecture notes, Prof. Anna Vainchtein

1 Vectors in $\mathbb{R}^3$

Consider a generalized coordinate system $(x^1, x^2, x^3)$ with the local basis $\{e_1, e_2, e_3\}$. The basis is not necessarily orthogonal, let alone orthonormal. It comes along with its reciprocal basis $\{e^1, e^2, e^3\}$. Recall that we can write a vector $a$ in terms of either basis, using contravariant components $a^i$ and covariant components $a_i$, respectively:

$$a = a^1 e_1 + a^2 e_2 + a^3 e_3 = a^i e_i, \qquad a = a_1 e^1 + a_2 e^2 + a_3 e^3 = a_i e^i. \tag{1}$$

Recall also that we can find the covariant and contravariant components of the vector by taking dot products with the basis vectors and reciprocal basis vectors, respectively:
$$a_i = a \cdot e_i, \qquad a^i = a \cdot e^i. \tag{2}$$
Consider now another coordinate system $(\bar x^1, \bar x^2, \bar x^3)$, with the basis vectors
$$\bar e_i = \alpha_i^p e_p, \qquad \alpha_i^p = \bar e_i \cdot e^p. \tag{3}$$
Recall that the reciprocal basis vectors then transform via the inverse transformation (see the first set of notes):

$$\bar e^i = (\alpha^{-1})^i_p e^p, \qquad (\alpha^{-1})^i_p = e_p \cdot \bar e^i. \tag{4}$$
Recall also that covariant and contravariant components of a vector (a first order tensor) transform according to

$$\bar a_i = \alpha_i^p a_p, \qquad \bar a^i = (\alpha^{-1})^i_p a^p. \tag{5}$$
Notice that the transformation law for the covariant components involves the direct transformation $\alpha$, while the one for the contravariant components involves the inverse $\alpha^{-1}$. As we have seen

before, this is because $\bar a_i = a \cdot \bar e_i$, where $\bar e_i$ is related to $e_i$ via the direct transformation, while in $\bar a^i = a \cdot \bar e^i$, the reciprocal vector $\bar e^i$ transforms according to (4). In particular, the new and old coordinates of the same point, with position vector
$$r = x^1 e_1 + x^2 e_2 + x^3 e_3 = \bar x^1 \bar e_1 + \bar x^2 \bar e_2 + \bar x^3 \bar e_3,$$
are related by
$$\bar x^i = (\alpha^{-1})^i_p x^p, \qquad x^p = \alpha_i^p \bar x^i,$$
and thus we can also represent the direct and inverse transformation matrices in terms of partial derivatives of the old and new coordinates with respect to one another:
$$\alpha_i^p = \frac{\partial x^p}{\partial \bar x^i}, \qquad (\alpha^{-1})^i_p = \frac{\partial \bar x^i}{\partial x^p}. \tag{6}$$
Using this, we can rewrite (5) as
$$\bar a_i = \frac{\partial x^p}{\partial \bar x^i}\, a_p, \qquad \bar a^i = \frac{\partial \bar x^i}{\partial x^p}\, a^p. \tag{7}$$
Finally, recall that covariant and contravariant components are not independent. They are related by

$$a^i = g^{ik} a_k, \qquad a_i = g_{ik} a^k,$$

where we recall that $g_{ik} = e_i \cdot e_k$ and $g^{ik} = e^i \cdot e^k$ are the covariant and contravariant components of the metric tensor. We called this raising or lowering the index via the metric tensor.
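For concreteness, here is a minimal numerical sketch of these relations in numpy. The skew basis (the same one that reappears in Example 1 of Section 2) and the sample vector $a$ are illustrative choices, not part of the notes.

```python
import numpy as np

# Rows of E are the basis vectors e_1, e_2, e_3 (any non-degenerate choice).
E = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 1.]])

# Reciprocal basis: rows of R satisfy e^i . e_j = delta^i_j,
# i.e. R @ E.T = I, so R = inv(E.T).
R = np.linalg.inv(E.T)

# Covariant and contravariant metric components: g_ik = e_i.e_k, g^ik = e^i.e^k.
g_lo = E @ E.T
g_hi = R @ R.T
assert np.allclose(g_lo @ g_hi, np.eye(3))   # g_ik g^kj = delta_i^j

# Components of a sample vector a via the dot products (2).
a = np.array([1., 2., 3.])
a_co = E @ a        # a_i = a . e_i
a_contra = R @ a    # a^i = a . e^i

# Raising/lowering the index reproduces the other set of components.
assert np.allclose(g_hi @ a_co, a_contra)    # a^i = g^ik a_k
assert np.allclose(g_lo @ a_contra, a_co)    # a_i = g_ik a^k

# Either expansion (1) reconstructs the vector itself.
assert np.allclose(a_contra @ E, a)          # a = a^i e_i
assert np.allclose(a_co @ R, a)              # a = a_i e^i
print("covariant:", a_co, " contravariant:", a_contra)
```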

Remark. In the case of two rectangular coordinate systems with orthonormal bases that we considered earlier, we have $e^i = e_i$, $\bar e^i = \bar e_i$, $a_i = a^i$, and $\alpha$ becomes an orthogonal matrix, $\alpha^{-1} = \alpha^T$. Thus, in this case we have $\alpha_i^j = Q_{ij}$ and $(\alpha^{-1})^j_i = Q_{ji}$, where $Q_{ij} = \bar e_i \cdot e_j$ are the components of an orthogonal matrix.

Example 1. Consider the vector

$$a = x_1 x_2\, i_1 + x_2 x_3\, i_2 + x_1 x_3\, i_3,$$
where $\{i_1, i_2, i_3\}$ is the standard basis in the Cartesian coordinate system $(x_1, x_2, x_3)$.

a) Find the covariant components of $a$ in parabolic cylindrical coordinates $(v, w, z)$ defined by
$$x_1 = \frac{v^2 - w^2}{2}, \qquad x_2 = vw, \qquad x_3 = z.$$
b) Express its contravariant components in terms of the covariant ones.

Solution. a) The new basis is
$$\bar e_1 = \frac{\partial x_1}{\partial v}\, i_1 + \frac{\partial x_2}{\partial v}\, i_2 + \frac{\partial x_3}{\partial v}\, i_3 = v\, i_1 + w\, i_2,$$
$$\bar e_2 = \frac{\partial x_1}{\partial w}\, i_1 + \frac{\partial x_2}{\partial w}\, i_2 + \frac{\partial x_3}{\partial w}\, i_3 = -w\, i_1 + v\, i_2,$$
$$\bar e_3 = \frac{\partial x_1}{\partial z}\, i_1 + \frac{\partial x_2}{\partial z}\, i_2 + \frac{\partial x_3}{\partial z}\, i_3 = i_3.$$
The transformation matrix is
$$[\alpha_i^j] = [\bar e_i \cdot i_j] = \begin{pmatrix} v & w & 0 \\ -w & v & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$

The covariant components of $a$ in the new basis are $\bar a_i = \alpha_i^j a_j$, or

     1 2 2   1 2 2 2 2  a¯1 v w 0 2 (v − w )vw 2 (v − w )v w + vw z 1 2 2 2 2  a¯2  =  −w v 0   vwz  =  − 2 (v − w )vw + v wz  . 1 2 2 1 2 2 a¯3 0 0 1 2 (v − w )z 2 (v − w )z √ 2 2 b) Note that the new basis is orthogonal, with h1 = |¯e1| = v + w = |¯e2| = h2 and h3 = |¯e3| = 1. Thus the contravariant components of the 1 metric tensor areg ¯ij = 0 for i 6= j andg ¯11 =g ¯22 = andg ¯33 = 1. v2 + w2 Therefore, 1 1 a¯1 =g ¯11a¯ = a¯ , a¯1 =g ¯22a¯ = a¯ , a¯3 =g ¯33a¯ =a ¯ . 1 v2 + w2 1 2 v2 + w2 2 3 3

Example 2. Let $f(x^1, x^2, x^3)$ be a scalar field. If we change to new coordinates $\bar x^i = \bar x^i(x^1, x^2, x^3)$, we have, via the chain rule,
$$\frac{\partial f}{\partial \bar x^i} = \frac{\partial f}{\partial x^k} \frac{\partial x^k}{\partial \bar x^i},$$

which by the first equality in (6) yields
$$\frac{\partial f}{\partial \bar x^i} = \alpha_i^k\, \frac{\partial f}{\partial x^k}.$$
Thus, $v_i = \dfrac{\partial f}{\partial x^i}$ transform as covariant components of the vector (the gradient of $f$)
$$v = \frac{\partial f}{\partial x^i}\, e^i = \nabla f,$$
where $e^i$ are the reciprocal basis vectors associated with the coordinates $(x^1, x^2, x^3)$. Of course, the vector $v$ also has contravariant components
$$v^i = g^{ik} v_k = g^{ik}\, \frac{\partial f}{\partial x^k}, \qquad v = g^{ik}\, \frac{\partial f}{\partial x^k}\, e_i.$$
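A quick numerical illustration of this covariant transformation law: under an assumed constant linear change of coordinates $x = A\bar x$ (so that $\alpha_i^p = \partial x^p / \partial \bar x^i = A_{pi}$), the finite-difference gradient of a sample scalar field transforms according to (7). The field $f$ and the matrix $A$ below are illustrative assumptions.

```python
import numpy as np

def f(x):
    # Sample scalar field; any smooth f works.
    return x[0] * x[1] + np.sin(x[2])

def grad(func, x, h=1e-6):
    # Central-difference gradient, approximating df/dx^k.
    g = np.zeros(3)
    for k in range(3):
        dx = np.zeros(3)
        dx[k] = h
        g[k] = (func(x + dx) - func(x - dx)) / (2 * h)
    return g

A = np.array([[1., 1., 0.],
              [0., 1., -1.],
              [1., 0., 2.]])      # x = A @ x_bar, so dx^p/dx_bar^i = A[p, i]

x_bar = np.array([0.4, -0.2, 0.9])
x = A @ x_bar

v_old = grad(f, x)                # v_k = df/dx^k
f_bar = lambda xb: f(A @ xb)      # f expressed in the new coordinates
v_new = grad(f_bar, x_bar)        # v_bar_i = df/dx_bar^i

# Covariant law (7): v_bar_i = (dx^p/dx_bar^i) v_p = (A.T @ v_old)_i.
assert np.allclose(v_new, A.T @ v_old, atol=1e-5)
print("v_bar:", v_new)
```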

2 General higher order tensors

By analogy with the above transformation laws for covariant and contravariant components of a vector, we can now introduce a more general definition of a second order tensor, no longer restricted to rectangular coordinate systems:

Definition. A second order tensor in $\mathbb{R}^d$ is a quantity uniquely specified by $d^2$ numbers (its components). These components can be covariant ($A_{ij}$), contravariant ($A^{ij}$) or mixed ($A_i^{\cdot j}$, $A^i_{\cdot j}$), and they transform under the change of basis (3) according to
$$\bar A_{ij} = \alpha_i^p \alpha_j^q A_{pq} \tag{8}$$
$$\bar A^{ij} = (\alpha^{-1})^i_p (\alpha^{-1})^j_q A^{pq} \tag{9}$$
$$\bar A_i^{\cdot j} = \alpha_i^p (\alpha^{-1})^j_q A_p^{\cdot q} \tag{10}$$
$$\bar A^i_{\cdot j} = (\alpha^{-1})^i_p \alpha_j^q A^p_{\cdot q} \tag{11}$$
The little dot helps us denote the actual position of the index: e.g. in $A_i^{\cdot j}$ the index $j$ comes second. Since $A_i^{\cdot j} \neq A^j_{\cdot i}$ in general, writing $A_i^j$ is misleading, since we don't know which of the two is actually meant ($\delta_i^j = \delta_j^i$ is an exception). Note that in the transformation laws the covariant indices always require the direct transformation matrix $\alpha$, while the contravariant indices come along with the inverse transformation matrix $\alpha^{-1}$, just like

it was for the vector transformation laws (5). The indices in the transformation matrices are arranged so that the indices that are not summed over are in the same position as in the new component, while the summation is over the opposite indices. The only exception is Cartesian coordinates with orthonormal bases, where the reciprocal vectors coincide with the regular ones, and the covariant, contravariant and mixed components coincide for each pair of indices. Recalling that the transformation matrix can be represented as (6), we can also write (8)-(11) as
$$\bar A_{ij} = \frac{\partial x^p}{\partial \bar x^i} \frac{\partial x^q}{\partial \bar x^j}\, A_{pq}, \qquad \bar A^{ij} = \frac{\partial \bar x^i}{\partial x^p} \frac{\partial \bar x^j}{\partial x^q}\, A^{pq},$$
$$\bar A_i^{\cdot j} = \frac{\partial x^p}{\partial \bar x^i} \frac{\partial \bar x^j}{\partial x^q}\, A_p^{\cdot q}, \qquad \bar A^i_{\cdot j} = \frac{\partial \bar x^i}{\partial x^p} \frac{\partial x^q}{\partial \bar x^j}\, A^p_{\cdot q}. \tag{12}$$
This is how second order tensors are defined in some books. Similar to vectors, the covariant, contravariant and mixed components of a second order tensor are related to one another via the metric tensor, which raises or lowers the corresponding indices. We have

$$A_{ij} = g_{im} g_{jn} A^{mn} = g_{jn} A_i^{\cdot n} = g_{im} A^m_{\cdot j},$$
$$A^{ij} = g^{im} g^{jn} A_{mn} = g^{im} A_m^{\cdot j} = g^{jn} A^i_{\cdot n},$$
$$A_i^{\cdot j} = g^{jn} A_{in} = g_{im} A^{mj}, \qquad A^i_{\cdot j} = g^{im} A_{mj} = g_{jn} A^{in}. \tag{13}$$
We can now easily generalize this to write transformation laws for higher order tensors. For example,
$$\bar A_{ijk} = \alpha_i^p \alpha_j^q \alpha_k^r A_{pqr},$$
$$\bar A^{ijk} = (\alpha^{-1})^i_p (\alpha^{-1})^j_q (\alpha^{-1})^k_r A^{pqr},$$
$$\bar A^{ij}_{\cdot\cdot k} = (\alpha^{-1})^i_p (\alpha^{-1})^j_q \alpha_k^r A^{pq}_{\cdot\cdot r},$$
$$\bar A_{i\cdot k}^{\cdot j \cdot} = \alpha_i^p (\alpha^{-1})^j_q \alpha_k^r A_{p\cdot r}^{\cdot q \cdot},$$
and so on. Once again, we define a third order tensor as an object whose various components transform according to these rules, and this is what we

need to check to verify these are components of a tensor. The components of a third order tensor are again related by the metric tensor:

$$A_{ijk} = g_{im} A^m_{\cdot jk} = g_{im} g_{jn} A^{mn}_{\cdot\cdot k} = g_{im} g_{jn} g_{kl} A^{mnl}, \quad \text{etc.}$$
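The index gymnastics in (8)-(13) is easy to check numerically; the sketch below uses random illustrative data and numpy's einsum so that the index placement mirrors the formulas.

```python
import numpy as np

# Storage convention: al[i, p] = alpha_i^p and ali[i, p] = (alpha^{-1})^i_p,
# where (alpha^{-1})^i_p alpha_j^p = delta^i_j, i.e. ali = inv(al.T).
rng = np.random.default_rng(0)
E = rng.standard_normal((3, 3))       # rows: old basis vectors e_1, e_2, e_3
R = np.linalg.inv(E.T)                # rows: reciprocal basis, e^i . e_j = delta
al = rng.standard_normal((3, 3))      # an arbitrary invertible alpha_i^p
ali = np.linalg.inv(al.T)             # (alpha^{-1})^i_p

g_lo, g_hi = E @ E.T, R @ R.T         # g_ik and g^ik

A_hi = rng.standard_normal((3, 3))                   # contravariant A^pq
A_lo = np.einsum('im,jn,mn->ij', g_lo, g_lo, A_hi)   # A_ij = g_im g_jn A^mn
A_mixed = np.einsum('jn,in->ij', g_hi, A_lo)         # A_i^j = g^jn A_in

# Transformation laws (8), (9), (10):
A_lo_bar = np.einsum('ip,jq,pq->ij', al, al, A_lo)         # (8)
A_hi_bar = np.einsum('ip,jq,pq->ij', ali, ali, A_hi)       # (9)
A_mixed_bar = np.einsum('ip,jq,pq->ij', al, ali, A_mixed)  # (10)

# Consistency: raising both indices of the transformed covariant components
# with the transformed metric reproduces the transformed contravariant ones.
g_lo_bar = np.einsum('im,jn,mn->ij', al, al, g_lo)
g_hi_bar = np.linalg.inv(g_lo_bar)
assert np.allclose(np.einsum('im,jn,mn->ij', g_hi_bar, g_hi_bar, A_lo_bar),
                   A_hi_bar)
print("transformation laws consistent")
```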

Exercise. Write the transformation laws for the components $A_{ijkl}$, $A^{ijkl}$, $A_{ij}^{\cdot\cdot kl}$ and $A_{i\cdot\cdot l}^{\cdot jk\cdot}$ of a fourth order tensor and the relations between these components.

Example 1. Contravariant components of a tensor A in the basis

$$e_1 = (0, 1, 1), \qquad e_2 = (1, 0, 1), \qquad e_3 = (1, 1, 1)$$
are
$$[A^{ij}] = \begin{pmatrix} -1 & 2 & 0 \\ 2 & 0 & 3 \\ 0 & 3 & -2 \end{pmatrix}.$$

Find its mixed components $A^i_{\cdot j}$ and $A_i^{\cdot j}$ and covariant components $A_{ij}$.

Solution. We have $A^i_{\cdot j} = A^{im} g_{mj}$. The metric tensor is given by

 2 1 2  [gmj] = [em · ej] =  1 2 2  . 2 2 3

Therefore,

 −1 2 0   2 1 2   0 3 2  i im [A·j] = [A ][gmj] =  2 0 3   1 2 2  =  10 8 13  . 0 3 −2 2 2 3 −1 2 0

Next,

 2 1 2   −1 2 0   0 10 −1  ·j mj [Ai ] = [gim][A ] =  1 2 2   2 0 3  =  3 8 2  . 2 2 3 0 3 −2 2 13 0

and finally,

 0 10 −1   2 1 2   8 18 17  ·m [Aij] = [Ai ][gmj] =  3 8 2   1 2 2  =  18 23 28  . 2 13 0 2 2 3 17 28 30

Note that due to the symmetry of $[A^{ij}]$ and the metric tensor, the matrix $[A_{ij}]$ is also symmetric, while the matrices of the mixed components are not symmetric but are transposes of one another (this is not true in general).
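The matrix algebra of this example is straightforward to reproduce in numpy (a direct transcription of the solution above):

```python
import numpy as np

# Reproduces Example 1: mixed and covariant components from A^ij and g_mj.
A_hi = np.array([[-1., 2.,  0.],
                 [ 2., 0.,  3.],
                 [ 0., 3., -2.]])
E = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 1.]])      # rows: e_1, e_2, e_3
g = E @ E.T                       # g_mj = e_m . e_j

A_up_down = A_hi @ g              # A^i_j = A^im g_mj
A_down_up = g @ A_hi              # A_i^j = g_im A^mj
A_lo = A_down_up @ g              # A_ij = A_i^m g_mj

print(A_up_down)   # [[0 3 2], [10 8 13], [-1 2 0]]
print(A_lo)        # symmetric: [[8 18 17], [18 23 28], [17 28 30]]
```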

Example 2. In a rectangular coordinate system with an orthonormal basis $\{e_1, e_2, e_3\}$ the components of the second order tensor $A$ are

 2 −1 0  [Aij] =  −1 0 3  0 3 2

(observe that $A^{ij} = A^i_{\cdot j} = A_i^{\cdot j} = A_{ij}$ in this case). Find the covariant components of this tensor in the coordinate system with the basis

$$\bar e_1 = e_1 + e_2, \qquad \bar e_2 = e_2 - e_3, \qquad \bar e_3 = e_1 + 2 e_3.$$

Solution. Recalling that $\bar e_i = \alpha_i^k e_k$, we have
$$\alpha = [\alpha_i^k] = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & -1 \\ 1 & 0 & 2 \end{pmatrix}$$

(the rows are simply the components of $\bar e_i$ in the old basis). By (8), we have

 1 1 0   2 −1 0   1 0 1  ¯ T [Aij] = α[Apq]α =  0 1 −1   −1 0 3   1 1 0  1 0 2 0 3 2 0 −1 2  1 −1 3   1 0 1   0 −4 7  =  −1 −3 1   1 1 0  =  −4 −4 1  . 2 5 4 0 −1 2 7 1 10

Example 3. Show that $\dfrac{\partial v_i}{\partial x^k}$, where $v_i(x^1, x^2, x^3)$ are covariant components of a vector field, do not form a second order tensor.

Solution. By (7), $\bar v_i = \dfrac{\partial x^m}{\partial \bar x^i}\, v_m$, so
$$\frac{\partial \bar v_i}{\partial \bar x^k} = \frac{\partial}{\partial \bar x^k}\left( \frac{\partial x^m}{\partial \bar x^i}\, v_m \right) = \frac{\partial x^m}{\partial \bar x^i} \frac{\partial v_m}{\partial \bar x^k} + \frac{\partial^2 x^m}{\partial \bar x^k \partial \bar x^i}\, v_m = \frac{\partial x^m}{\partial \bar x^i} \frac{\partial x^n}{\partial \bar x^k} \frac{\partial v_m}{\partial x^n} + \frac{\partial^2 x^m}{\partial \bar x^k \partial \bar x^i}\, v_m,$$
where we used the chain rule to obtain the first term in the last equality. Now, without the second term, this would result in the transformation rule for covariant components $A_{ik} = \dfrac{\partial v_i}{\partial x^k}$ (see the first equality in (12)). The problem is that this term only vanishes if $\dfrac{\partial x^m}{\partial \bar x^i} = \text{const}$, i.e. if we restrict ourselves to rectangular or oblique coordinate systems. In general, however, this term is nonzero, so $\dfrac{\partial v_i}{\partial x^k}$ are not components of a (general) second order tensor. Later we will show that adding a suitable quantity to $\dfrac{\partial v_i}{\partial x^k}$ fixes the problem and yields a second order tensor.

3 Metric tensor

We are now in a position to show that the metric tensor is indeed a second order tensor, according to our definition. We need to check that its components transform according to the rules (8)-(11). Indeed, by (3) and the definition of the metric tensor in the old and new bases,

$$\bar g_{ij} = \bar e_i \cdot \bar e_j = \alpha_i^m e_m \cdot \alpha_j^n e_n = \alpha_i^m \alpha_j^n (e_m \cdot e_n) = \alpha_i^m \alpha_j^n g_{mn},$$
confirming (8). Likewise, using (4), we obtain

$$\bar g^{ij} = \bar e^i \cdot \bar e^j = (\alpha^{-1})^i_m e^m \cdot (\alpha^{-1})^j_n e^n = (\alpha^{-1})^i_m (\alpha^{-1})^j_n (e^m \cdot e^n) = (\alpha^{-1})^i_m (\alpha^{-1})^j_n g^{mn},$$
so (9) also holds. Finally,

$$\bar g_i^{\cdot j} = \bar e_i \cdot \bar e^j = \alpha_i^m e_m \cdot (\alpha^{-1})^j_n e^n = \alpha_i^m (\alpha^{-1})^j_n (e_m \cdot e^n) = \alpha_i^m (\alpha^{-1})^j_n g_m^{\cdot n},$$

so (10) holds. We do not need to check (11) because $g^i_{\cdot j} = g_i^{\cdot j} = \delta_i^j$ (why?). So far, we have shown that all components transform according to the rules.

To show that these are the components of the same tensor, we also need to verify the relations (13). Note that $e_i = (e_i \cdot e_m) e^m = g_{im} e^m$, so that

$$g_{ij} = e_i \cdot e_j = g_{im} e^m \cdot g_{jn} e^n = g_{im} g_{jn} g^{mn},$$
verifying the first identity in (13). Also,

$$g_{ij} = e_i \cdot e_j = g_{im} e^m \cdot e_j = g_{im} g^m_{\cdot j}$$
verifies another identity in (13). In a similar way, we can show that the other identities also hold.
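These identities are also easy to confirm numerically for an arbitrary (illustrative) basis:

```python
import numpy as np

# Numerical confirmation of g_ij = g_im g_jn g^mn and g_i^j = delta_i^j.
rng = np.random.default_rng(1)
E = rng.standard_normal((3, 3))     # rows: basis e_i (generic, hence invertible)
R = np.linalg.inv(E.T)              # rows: reciprocal basis e^i

g_lo = E @ E.T                      # g_ij = e_i . e_j
g_hi = R @ R.T                      # g^ij = e^i . e^j
g_mixed = E @ R.T                   # g_i^j = e_i . e^j

assert np.allclose(g_mixed, np.eye(3))                               # delta_i^j
assert np.allclose(g_lo, np.einsum('im,jn,mn->ij', g_lo, g_lo, g_hi))
print("metric identities verified")
```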

Exercise. Verify the other relations in (13) between the components of the metric tensor.

4 Tensor products

To write tensors using the direct notation instead of components, we need to introduce tensor products. Let $a = a_1 i_1 + a_2 i_2 + a_3 i_3$ and $b = b_1 i_1 + b_2 i_2 + b_3 i_3$ be two vectors in $\mathbb{R}^3$. Here $\{i_1, i_2, i_3\}$ is an orthonormal basis in a Cartesian coordinate system. Then the tensor product (also known as dyadic product and outer product) of the two vectors is the second order tensor

$$a \otimes b,$$
whose components in the same basis are $a_i b_j$, i.e. it is represented by the matrix obtained by the matrix product of the two vectors:
$$\begin{pmatrix} a_1 \\ a_2 \\ a_3 \end{pmatrix} \begin{pmatrix} b_1 & b_2 & b_3 \end{pmatrix} = \begin{pmatrix} a_1 b_1 & a_1 b_2 & a_1 b_3 \\ a_2 b_1 & a_2 b_2 & a_2 b_3 \\ a_3 b_1 & a_3 b_2 & a_3 b_3 \end{pmatrix}.$$

(We have shown in class that this is a second order tensor.) For example, the tensor product $a \otimes b$ of $a = i_1 + 2 i_2 + 3 i_3$ and $b = i_1 + i_2 + 2 i_3$ is represented by the matrix
$$\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} \begin{pmatrix} 1 & 1 & 2 \end{pmatrix} = \begin{pmatrix} 1 & 1 & 2 \\ 2 & 2 & 4 \\ 3 & 3 & 6 \end{pmatrix}.$$

Note that $b \otimes a = (a \otimes b)^T$; for instance, in the above example $b \otimes a$ has components $b_i a_j$, which form the matrix

 1   1 2 3     1  1 2 3 =  1 2 3  2 2 4 6 that is of a ⊗ b. In the same way we can define a ⊗ b ⊗ c, a third order tensor with Cartesian components aibjck, a fourth order tensor a ⊗ b ⊗ c ⊗ d, etc. An important property of the tensor product is that if we multiply it on the right by a vector, we get another vector given by

$$(a \otimes b) c = (b \cdot c)\, a. \tag{14}$$

Indeed, in component notation we have (using the summation convention; note the sum over $j$!)
$$(a_i b_j) c_j = (b_j c_j) a_i = (b \cdot c) a_i.$$
Similarly, if we multiply the tensor product on the left by a vector, we get

$$c \cdot (a \otimes b) = (c \cdot a)\, b,$$
because $c_i (a_i b_j) = (c_i a_i) b_j = (c \cdot a) b_j$. Other properties of the tensor product are
$$(\gamma a) \otimes b = a \otimes (\gamma b) = \gamma\, (a \otimes b),$$
where $\gamma$ is any scalar, and

$$a \otimes (b + c) = a \otimes b + a \otimes c, \qquad (a + b) \otimes c = a \otimes c + b \otimes c.$$
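The tensor product and the contraction properties (14) and $c \cdot (a \otimes b) = (c \cdot a) b$ can be illustrated in numpy with the example vectors from the text (the test vector $c$ is an arbitrary illustrative choice):

```python
import numpy as np

# Example vectors from the text: a = i1 + 2 i2 + 3 i3, b = i1 + i2 + 2 i3.
a = np.array([1., 2., 3.])
b = np.array([1., 1., 2.])
c = np.array([0.5, -1., 2.])   # arbitrary test vector

T = np.outer(a, b)             # components a_i b_j
assert np.allclose(np.outer(b, a), T.T)        # b (x) a = (a (x) b)^T

assert np.allclose(T @ c, np.dot(b, c) * a)    # (a (x) b) c = (b . c) a, eq. (14)
assert np.allclose(c @ T, np.dot(c, a) * b)    # c . (a (x) b) = (c . a) b
print(T)   # [[1 1 2], [2 2 4], [3 3 6]]
```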

5 Tensors as linear combinations of tensor products of basis vectors

Recall that we used covariant and contravariant components of a vector to represent it in terms of basis vectors and reciprocal basis vectors, as in (1). We can do the same for higher order tensors, only now there are more types of components, and instead of basis vectors we will use their tensor products.

For example, using the contravariant components of a second order tensor $A$, we can write it as
$$A = A^{ij}\, e_i \otimes e_j. \tag{15}$$
Note that this is a double sum, so what we really mean is $A = A^{11} e_1 \otimes e_1 + A^{12} e_1 \otimes e_2 + \cdots + A^{33} e_3 \otimes e_3$. Now, for each $i$ and $j$ the tensor product $e_i \otimes e_j$ is itself a second order tensor, so we are writing our tensor $A$ as a linear combination of such basis tensors. Observe also that

$$A^{ij} = e^i \cdot A e^j, \tag{16}$$
which is the analog of the second identity in (2) for contravariant components of a vector. Indeed, by (15) (changing the summation indices to $m$, $n$)

$$e^i \cdot A e^j = e^i \cdot A^{mn} (e_m \otimes e_n) e^j.$$

But by (14) we have

$$(e_m \otimes e_n) e^j = (e_n \cdot e^j)\, e_m = \delta_n^j\, e_m,$$
where in the last equality we used the property of the reciprocal bases. Thus,

$$e^i \cdot A e^j = e^i \cdot A^{mn} \delta_n^j e_m = e^i \cdot A^{mj} e_m = A^{mj} (e^i \cdot e_m) = A^{mj} \delta_m^i = A^{ij},$$
yielding (16). To see that the representation (15), (16) is consistent with our definition of a second order tensor, we must show that the transformation law (9) holds for (16). The new contravariant components are

$$\bar A^{ij} = \bar e^i \cdot A \bar e^j.$$

Substituting (4), we get

$$\bar A^{ij} = (\alpha^{-1})^i_p e^p \cdot A (\alpha^{-1})^j_q e^q = (\alpha^{-1})^i_p (\alpha^{-1})^j_q (e^p \cdot A e^q),$$
which by (16) yields (9). Thus, (16) indeed defines contravariant components of the second order tensor. Now, instead of using the original basis vectors, we can represent the same tensor in terms of the tensor products of the reciprocal basis vectors and covariant components $A_{ij}$:

$$A = A_{ij}\, e^i \otimes e^j, \qquad A_{ij} = e_i \cdot A e_j, \tag{17}$$

where the second identity is shown in the same way as (16) above (try it!) and is the analog of the first identity in (2) for covariant components of a vector. The new covariant components of $A$ are
$$\bar A_{ij} = \bar e_i \cdot A \bar e_j.$$

Substituting (3), we get

$$\bar A_{ij} = \alpha_i^p e_p \cdot A (\alpha_j^q e_q) = \alpha_i^p \alpha_j^q (e_p \cdot A e_q).$$

But $e_p \cdot A e_q = A_{pq}$, so we get (8), confirming that the representation (17) is consistent with our earlier definition. But why stop there? We can also mix the regular and reciprocal basis vectors and use the mixed components:

$$A = A_i^{\cdot j}\, e^i \otimes e_j, \qquad A_i^{\cdot j} = e_i \cdot A e^j \tag{18}$$
and
$$A = A^i_{\cdot j}\, e_i \otimes e^j, \qquad A^i_{\cdot j} = e^i \cdot A e_j. \tag{19}$$
The new mixed components are

$$\bar A_i^{\cdot j} = \bar e_i \cdot A \bar e^j, \qquad \bar A^i_{\cdot j} = \bar e^i \cdot A \bar e_j.$$

Substituting (3) and (4) into the first of these, we get

$$\bar A_i^{\cdot j} = \alpha_i^p e_p \cdot A (\alpha^{-1})^j_q e^q,$$
and recalling the second identity in (18) we obtain (10), as expected.

Exercise. Use (3), (4) and (19) to obtain the transformation law (11) for the second type of mixed components.
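A numerical sketch (with random illustrative data) of the representations (15)-(19): assemble $A$ from contravariant components and tensor products of basis vectors, then recover each type of component by sandwiching $A$ between the appropriate basis vectors.

```python
import numpy as np

rng = np.random.default_rng(2)
E = rng.standard_normal((3, 3))     # rows: basis e_i
R = np.linalg.inv(E.T)              # rows: reciprocal basis e^i

A_hi = rng.standard_normal((3, 3))  # contravariant components A^ij

# A = A^ij e_i (x) e_j, assembled as a 3x3 matrix of Cartesian components.
A = np.einsum('ij,ia,jb->ab', A_hi, E, E)

# Recover the components via (16), (17), (18).
assert np.allclose(np.einsum('ia,ab,jb->ij', R, A, R), A_hi)  # A^ij = e^i.(A e^j)
A_lo = np.einsum('ia,ab,jb->ij', E, A, E)                     # A_ij = e_i.(A e_j)
A_mix = np.einsum('ia,ab,jb->ij', E, A, R)                    # A_i^j = e_i.(A e^j)

# Consistency with the metric relations (13), e.g. A_i^j = g^jn A_in.
g_hi = R @ R.T
assert np.allclose(A_mix, np.einsum('jn,in->ij', g_hi, A_lo))
print("representations consistent")
```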

Higher order tensors can be written in terms of tensor products in a similar fashion. For instance, a third order tensor may be written as

$$A = A_{ijk}\, e^i \otimes e^j \otimes e^k = A^{ijk}\, e_i \otimes e_j \otimes e_k = A^{ij}_{\cdot\cdot k}\, e_i \otimes e_j \otimes e^k = A_{i\cdot k}^{\cdot j \cdot}\, e^i \otimes e_j \otimes e^k,$$
among the different possibilities. Again, the rule of thumb here is that summation is always done over the opposite indices. Note also that the order of

the indices in the components and in the tensor products they multiply must be kept the same. We can use the direct notation and the properties of tensor products to derive the relations between the different components of a tensor. For example, to see the first of the identities in (13), observe that

$$A_{ij} = e_i \cdot A e_j = e_i \cdot A^{mn} (e_m \otimes e_n) e_j = e_i \cdot A^{mn} (e_n \cdot e_j)\, e_m.$$

But $e_n \cdot e_j = g_{jn} = g_{nj}$, so that

$$A_{ij} = A^{mn} g_{jn}\, e_i \cdot e_m = g_{im} g_{jn} A^{mn},$$
because $e_i \cdot e_m = g_{im}$. To get the second identity in (13), just use a different representation of $A$:

$$A_{ij} = e_i \cdot A e_j = e_i \cdot A_m^{\cdot n} (e^m \otimes e_n) e_j = e_i \cdot A_m^{\cdot n} e^m (e_n \cdot e_j) = A_m^{\cdot n} g_{nj}\, e_i \cdot e^m = A_m^{\cdot n} g_{nj} \delta_i^m = A_i^{\cdot n} g_{nj} = g_{jn} A_i^{\cdot n}.$$

The other relations in (13) are derived the same way.
