Tensors
Lecture 7, Physics 411: Classical Mechanics II
September 12th, 2007

In electrodynamics, the implicit law governing the motion of particles is $F^\alpha = m \ddot{x}^\alpha$. This is also true, of course, for most of classical physics, and the details of the physical principle one is discussing are hidden in $F^\alpha$ and, potentially, its potential. That is what defines the interaction. In general relativity, the motion of particles will be described by
\[
\ddot{x}^\mu + \Gamma^\mu_{\ \alpha\beta}\, \dot{x}^\alpha\, \dot{x}^\beta = 0, \tag{7.1}
\]
and this will occur in a four-dimensional space-time, but that doesn't concern us for now. The point of the above is that it lacks a potential, and can be connected in a natural way to the metric.

7.1 Introduction in Two Dimensions

We will start with some basic examples in two dimensions for concreteness. Here we will always begin in a Cartesian-parametrized plane, with basis vectors $\hat{x}$ and $\hat{y}$.

7.1.1 Rotation

To begin, consider a simple rotation of the usual coordinate axes through an angle $\theta$ (counterclockwise), as shown in Figure 7.1.

[Figure 7.1: Two sets of axes, rotated through an angle $\theta$ with respect to each other.]

From the figure, we can easily relate the coordinates w.r.t. the new axes ($\bar{x}$ and $\bar{y}$) to the coordinates w.r.t. the "usual" axes ($x$ and $y$). Define $\ell = \sqrt{x^2 + y^2}$, and let $\psi$ be the angle between the position vector and the $\hat{x}$ axis (as in Figure 7.1); then the invariance of length allows us to write
\[
\begin{aligned}
\bar{x} &= \ell \cos(\psi - \theta) = \ell \cos\psi \cos\theta + \ell \sin\psi \sin\theta = x \cos\theta + y \sin\theta \\
\bar{y} &= \ell \sin(\psi - \theta) = \ell \sin\psi \cos\theta - \ell \cos\psi \sin\theta = y \cos\theta - x \sin\theta,
\end{aligned} \tag{7.2}
\]
which can be written in matrix form as
\[
\begin{pmatrix} \bar{x} \\ \bar{y} \end{pmatrix}
= \underbrace{\begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}}_{\equiv\, \mathbb{R} \,\dot{=}\, R^\alpha_{\ \beta}}
\begin{pmatrix} x \\ y \end{pmatrix}. \tag{7.3}
\]
If we think of infinitesimal displacements centered at the origin, then we would write $d\bar{x}^\alpha = R^\alpha_{\ \beta}\, dx^\beta$. Notice that in this case, as a matrix, the inverse of $\mathbb{R}$ is
\[
\mathbb{R}^{-1} \,\dot{=}\, \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}, \tag{7.4}
\]
and this is also equal to the transpose of $\mathbb{R}$, a defining property of "rotation" or "orthogonal" matrices: $\mathbb{R}^{-1} = \mathbb{R}^T$, in this case coming from the observation that one coordinate system's clockwise rotation is another's counterclockwise, and the symmetry properties of the trigonometric functions.

Scalar

This one is easy: a scalar "does not" transform under coordinate transformations. If we have a function $\phi(x, y)$ in the first coordinate system, then we have
\[
\bar{\phi}(\bar{x}, \bar{y}) = \phi\big(x(\bar{x}, \bar{y}),\, y(\bar{x}, \bar{y})\big), \tag{7.5}
\]
so that operationally, all we do is replace the $x$ and $y$ appearing in the definition of $\phi$ using the relations $x = \bar{x} \cos\theta - \bar{y} \sin\theta$ and $y = \bar{x} \sin\theta + \bar{y} \cos\theta$.
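As a quick numerical sketch of the rotation and scalar rules above (not part of the original notes), the Python snippet below builds $\mathbb{R}$ from (7.3), checks the orthogonality property (7.4), and confirms that the substitution rule (7.5) reproduces the value of a scalar at the same physical point. The angle, the sample point, and the test function $\phi(x, y) = xy + y^2$ are arbitrary, hypothetical choices for illustration.

```python
import numpy as np

theta = 0.3  # arbitrary rotation angle (radians), chosen for illustration

# Rotation matrix R^alpha_beta of (7.3): (xbar, ybar) = R @ (x, y)
R = np.array([[np.cos(theta),  np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

# Orthogonality (7.4): the inverse of R is its transpose
assert np.allclose(np.linalg.inv(R), R.T)

# Hypothetical test scalar phi(x, y) = x*y + y**2
def phi(x, y):
    return x * y + y**2

x, y = 1.3, -0.7                        # arbitrary point, original coordinates
xbar, ybar = R @ np.array([x, y])       # same point, rotated coordinates

# Scalar rule (7.5): phibar(xbar, ybar) is phi with x, y re-expressed via R^{-1} = R^T
x_back, y_back = R.T @ np.array([xbar, ybar])
assert np.isclose(phi(x_back, y_back), phi(x, y))
print("rotation and scalar checks pass")
```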
Vector (Contravariant)

With the fabulous success and ease of scalar transformations, we are led to a natural definition for an object that transforms as a vector. If we write a generic vector in the original coordinate system,
\[
\mathbf{f} = f^x(x, y)\, \hat{x} + f^y(x, y)\, \hat{y}, \tag{7.6}
\]
then we'd like the transformation to be defined as
\[
\bar{\mathbf{f}} = \bar{f}^x(\bar{x}, \bar{y})\, \hat{\bar{x}} + \bar{f}^y(\bar{x}, \bar{y})\, \hat{\bar{y}} = f^x(x, y)\, \hat{x} + f^y(x, y)\, \hat{y}, \tag{7.7}
\]
i.e. identical to the scalar case, but with the non-trivial involvement of the basis vectors. From Figure 7.1, we can easily see how to write the basis vectors $\hat{\bar{x}}$ and $\hat{\bar{y}}$ in terms of $\hat{x}$ and $\hat{y}$:
\[
\begin{aligned}
\hat{\bar{x}} &= \cos\theta\, \hat{x} + \sin\theta\, \hat{y} \\
\hat{\bar{y}} &= -\sin\theta\, \hat{x} + \cos\theta\, \hat{y},
\end{aligned} \tag{7.8}
\]
and using this, we can write $\bar{\mathbf{f}}$ in terms of the original basis vectors; the elements in front of these will then define $f^x$ and $f^y$ according to our target:
\[
\bar{\mathbf{f}} = \left(\bar{f}^x \cos\theta - \bar{f}^y \sin\theta\right) \hat{x} + \left(\bar{f}^x \sin\theta + \bar{f}^y \cos\theta\right) \hat{y}, \tag{7.9}
\]
so that we learn (slash demand) that
\[
\begin{aligned}
f^x &= \bar{f}^x \cos\theta - \bar{f}^y \sin\theta \\
f^y &= \bar{f}^x \sin\theta + \bar{f}^y \cos\theta,
\end{aligned} \tag{7.10}
\]
or, inverting, we have, now in matrix form:
\[
\begin{pmatrix} \bar{f}^x \\ \bar{f}^y \end{pmatrix} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} f^x \\ f^y \end{pmatrix}. \tag{7.11}
\]
Notice that the "components" can be written as $\bar{f}^\alpha = R^\alpha_{\ \beta}\, f^\beta$ from the above, and this leads to the usual statement that "a (contravariant) vector transforms 'like' the coordinates themselves" (more appropriately, like the coordinate differentials, which are actual vectors; recall $d\bar{x}^\alpha = R^\alpha_{\ \beta}\, dx^\beta$ from above). The rotation matrix and transformation is just one example, and the form is telling. Notice that (somewhat sloppily)
\[
d\bar{x}^\alpha = R^\alpha_{\ \beta}\, dx^\beta \;\longrightarrow\; \frac{d\bar{x}^\alpha}{dx^\beta} = R^\alpha_{\ \beta}, \tag{7.12}
\]
and so it is tempting to generalize the contravariant vector transformation law to include an arbitrary relation between "new" and "old" coordinates:
\[
\bar{f}^\alpha = \frac{d\bar{x}^\alpha}{dx^\beta}\, f^\beta, \tag{7.13}
\]
precisely what we defined for a contravariant vector originally.
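A minimal numerical sketch of the contravariant law (not part of the original notes; the angle and sample components are arbitrary choices): the snippet below transforms the components of a vector with $\mathbb{R}$ as in (7.11), then recombines the barred components with the barred basis vectors of (7.8) and checks that the original vector is recovered, which is the content of (7.7).

```python
import numpy as np

theta = 0.3  # arbitrary rotation angle (radians)

# Rotation matrix of (7.3)/(7.11): fbar^alpha = R^alpha_beta f^beta
R = np.array([[np.cos(theta),  np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

# Barred basis vectors of (7.8), written in the original Cartesian basis
xbar_hat = np.array([np.cos(theta),  np.sin(theta)])
ybar_hat = np.array([-np.sin(theta), np.cos(theta)])

# Components of a sample contravariant vector in the original basis
f = np.array([2.0, -1.0])

# Transform the components, then reassemble the vector in the barred basis;
# by (7.7) this must equal the original f^x xhat + f^y yhat.
fbar = R @ f
reassembled = fbar[0] * xbar_hat + fbar[1] * ybar_hat
assert np.allclose(reassembled, f)
print("contravariant vector check passes")
```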
7.1.2 Non-Orthogonal Axes

We will stick with linear transformations for a moment, but now allow the axes to be skewed; this will allow us to distinguish between the above contravariant transformation law and the covariant (one-form) transformation law. Consider the two coordinate systems shown in Figure 7.2.

[Figure 7.2: Original, orthogonal Cartesian axes, and a skewed set with a coordinate differential shown (parallel projection is used since differentials are contravariant).]

From the figure, we can relate the differentials $dx^\alpha$ to the $d\bar{x}^\alpha$:
\[
\begin{pmatrix} dx \\ dy \end{pmatrix} = \begin{pmatrix} 1 & \cos\theta \\ 0 & \sin\theta \end{pmatrix} \begin{pmatrix} d\bar{x} \\ d\bar{y} \end{pmatrix}, \tag{7.14}
\]
or, we can write this in the more standard form (inverting the above):
\[
\begin{pmatrix} d\bar{x} \\ d\bar{y} \end{pmatrix} = \underbrace{\begin{pmatrix} 1 & -\cot\theta \\ 0 & \frac{1}{\sin\theta} \end{pmatrix}}_{\equiv\, \mathbb{T} \,\dot{=}\, T^\alpha_{\ \beta}} \begin{pmatrix} dx \\ dy \end{pmatrix}, \tag{7.15}
\]
so that $d\bar{x}^\alpha = T^\alpha_{\ \beta}\, dx^\beta$, and the matrix $\mathbb{T}$ is the inverse of the matrix in (7.14).

Let us first check that the contravariant form $d\mathbf{x} = dx\, \hat{x} + dy\, \hat{y}$ transforms appropriately. We need the basis vectors in the skewed coordinate system; from the figure, these are $\hat{\bar{x}} = \hat{x}$ and $\hat{\bar{y}} = \cos\theta\, \hat{x} + \sin\theta\, \hat{y}$. Then
\[
\begin{aligned}
d\bar{\mathbf{x}} &= d\bar{x}\, \hat{\bar{x}} + d\bar{y}\, \hat{\bar{y}} \\
&= (dx - \cot\theta\, dy)\, \hat{x} + \frac{dy}{\sin\theta}\, (\cos\theta\, \hat{x} + \sin\theta\, \hat{y}) \\
&= dx\, \hat{x} + dy\, \hat{y},
\end{aligned} \tag{7.16}
\]
as expected.

Recall that we developed the metric for this type of situation previously. If we take the scalar length $dx^2 + dy^2$ and write it in terms of the $d\bar{x}$ and $d\bar{y}$ infinitesimals, then
\[
dx^2 + dy^2 = d\bar{x}^2 + d\bar{y}^2 + 2\, d\bar{x}\, d\bar{y} \cos\theta \,\dot{=}\, \begin{pmatrix} d\bar{x} & d\bar{y} \end{pmatrix} \underbrace{\begin{pmatrix} 1 & \cos\theta \\ \cos\theta & 1 \end{pmatrix}}_{\equiv\, \bar{g}_{\mu\nu}} \begin{pmatrix} d\bar{x} \\ d\bar{y} \end{pmatrix}, \tag{7.17}
\]
defining the metric. We have the relation $dx^\alpha\, g_{\alpha\beta}\, dx^\beta = d\bar{x}^\alpha\, \bar{g}_{\alpha\beta}\, d\bar{x}^\beta$, which tells us how the metric itself transforms. From the definition of the transformation, $d\bar{x}^\alpha = T^\alpha_{\ \beta}\, dx^\beta$, we have $\frac{d\bar{x}^\alpha}{dx^\beta} = T^\alpha_{\ \beta}$ as usual, but we can also write the inverse transformation, making use of the inverse of the matrix $\mathbb{T}$, which is just the matrix appearing in (7.14). Letting $\tilde{T}^\alpha_{\ \beta} \,\dot{\equiv}\, \mathbb{T}^{-1}$,
\[
dx^\alpha = \tilde{T}^\alpha_{\ \beta}\, d\bar{x}^\beta \;\longrightarrow\; \frac{dx^\alpha}{d\bar{x}^\beta} = \tilde{T}^\alpha_{\ \beta}. \tag{7.18}
\]
Now going back to the invariant,
\[
d\bar{x}^\alpha\, \bar{g}_{\alpha\beta}\, d\bar{x}^\beta = (T^\alpha_{\ \gamma}\, dx^\gamma)\, \bar{g}_{\alpha\beta}\, (T^\beta_{\ \rho}\, dx^\rho). \tag{7.19}
\]
For this expression to equal $dx^\alpha\, g_{\alpha\beta}\, dx^\beta$, we must have
\[
\bar{g}_{\alpha\beta} = \tilde{T}^\sigma_{\ \alpha}\, g_{\sigma\delta}\, \tilde{T}^\delta_{\ \beta}, \tag{7.20}
\]
which we can put in as a check:
\[
\begin{aligned}
d\bar{x}^\alpha\, \bar{g}_{\alpha\beta}\, d\bar{x}^\beta &= (T^\alpha_{\ \gamma}\, dx^\gamma)\, \bar{g}_{\alpha\beta}\, (T^\beta_{\ \rho}\, dx^\rho) \\
&= (T^\alpha_{\ \gamma}\, dx^\gamma)\, \tilde{T}^\sigma_{\ \alpha}\, g_{\sigma\delta}\, \tilde{T}^\delta_{\ \beta}\, (T^\beta_{\ \rho}\, dx^\rho) \\
&= \delta^\sigma_{\ \gamma}\, \delta^\delta_{\ \rho}\, g_{\sigma\delta}\, dx^\gamma\, dx^\rho \\
&= dx^\sigma\, g_{\sigma\delta}\, dx^\delta.
\end{aligned} \tag{7.21}
\]
The metric transformation law (7.20) is different from that of a vector: first of all, there are two indices, but more important, rather than transforming with $T^\alpha_{\ \beta}$, the covariant indices transform with the inverse of the transformation. This type of transformation is not obvious under rotations, since rotations leave the metric itself invariant ($\mathbb{R}^T\, \mathbb{g}\, \mathbb{R} = \mathbb{g}$), and we don't notice the transformation of the metric.

Covariant "vector" Transformation

If we generalize the above covariant transformation to non-linear coordinate relations, we define the covariant vector transformation as:
\[
\bar{f}_\alpha = \frac{dx^\beta}{d\bar{x}^\alpha}\, f_\beta. \tag{7.22}
\]
For a second-rank covariant tensor, like the metric $g_{\mu\nu}$, we just introduce a copy of the transformation for each index (the same is true for the contravariant transformation law):
\[
\bar{g}_{\mu\nu} = \frac{dx^\alpha}{d\bar{x}^\mu}\, \frac{dx^\beta}{d\bar{x}^\nu}\, g_{\alpha\beta}. \tag{7.23}
\]
Now the question becomes: for contravariant vectors, the model was $dx^\alpha$; what is the model covariant vector? The answer is "the gradient". Consider a scalar $\phi(x, y)$; we can write its partial derivatives w.r.t.
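A numerical sketch of the skewed-axes results above (not part of the original notes): the snippet below confirms that $\mathbb{T}$ of (7.15) inverts the matrix of (7.14), that the skewed metric of (7.17) reproduces the Cartesian length, that two copies of $\tilde{T} = \mathbb{T}^{-1}$ carry the Cartesian metric to $\bar{g}$ as in (7.20) and (7.23), and that the gradient of a scalar transforms with $\tilde{T}$ as in (7.22). The skew angle, the displacement, the sample point, and the test scalar $\phi(x, y) = x^2 + 3y$ are arbitrary illustrative choices, and the finite relations $x = \bar{x} + \bar{y}\cos\theta$, $y = \bar{y}\sin\theta$ used for the gradient check are read off from the linear map in (7.14).

```python
import numpy as np

theta = 0.8  # arbitrary skew angle (radians), 0 < theta < pi

# Matrix of (7.14): (dx, dy) = M @ (dxbar, dybar)
M = np.array([[1.0, np.cos(theta)],
              [0.0, np.sin(theta)]])

# T of (7.15): (dxbar, dybar) = T @ (dx, dy); it should invert M
T = np.array([[1.0, -1.0 / np.tan(theta)],
              [0.0,  1.0 / np.sin(theta)]])
assert np.allclose(T @ M, np.eye(2))
Ttilde = M  # T-tilde of (7.18) is the inverse of T, i.e. the matrix in (7.14)

# Cartesian metric and the skewed metric gbar of (7.17)
g = np.eye(2)
gbar = np.array([[1.0,           np.cos(theta)],
                 [np.cos(theta), 1.0]])

# Invariant length: dx^2 + dy^2 equals dxbar gbar dxbar for any displacement
d = np.array([0.7, -0.4])     # arbitrary (dx, dy)
dbar = T @ d                  # (dxbar, dybar)
assert np.isclose(d @ d, dbar @ gbar @ dbar)

# Metric law (7.20)/(7.23): gbar_{ab} = Ttilde^s_a g_{sd} Ttilde^d_b
assert np.allclose(Ttilde.T @ g @ Ttilde, gbar)

# Covariant (gradient) law (7.22): fbar_a = (dx^b / dxbar^a) f_b, i.e. fbar = Ttilde^T f.
# Hypothetical test scalar phi(x, y) = x**2 + 3*y, with Cartesian gradient (2x, 3).
xbar, ybar = 0.5, 1.2                       # arbitrary point, skewed coordinates
x, y = Ttilde @ np.array([xbar, ybar])      # same point, Cartesian coordinates
grad = np.array([2.0 * x, 3.0])             # (dphi/dx, dphi/dy)

# Gradient computed directly in the barred frame, from
# phibar(xbar, ybar) = (xbar + ybar*cos(theta))**2 + 3*ybar*sin(theta)
grad_bar = np.array([2.0 * (xbar + ybar * np.cos(theta)),
                     2.0 * (xbar + ybar * np.cos(theta)) * np.cos(theta)
                     + 3.0 * np.sin(theta)])
assert np.allclose(Ttilde.T @ grad, grad_bar)
print("skewed-axes checks pass")
```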