The rigid rotor in classical and quantum mechanics

Paul E. S. Wormer
Institute of Theoretical Chemistry, University of Nijmegen, Toernooiveld, 6525 ED Nijmegen, The Netherlands

Contents

1  Introduction
2  The mathematics of rotations in $\mathbb{R}^3$
3  The algebra of real antisymmetric matrices
4  The kinematics of a rigid body
5  Kinetic energy of a rigid rotor
6  The Euler equations
7  Quantization
8  Rigid rotor functions
9  The quantized energy levels of rigid rotors
10 Angular momenta and Lie derivatives
   10.1 Infinitesimal rotations of functions $f(\mathbf{r})$
   10.2 Infinitesimal rotations of functions of Euler angles

1 Introduction

The following text contains notes on the classical and quantum mechanical rigid rotor. The classical part is based on the books of H. Goldstein, Classical Mechanics [Addison-Wesley, Reading, MA, 1980, 2nd Ed.], and V. I. Arnold, Mathematical Methods of Classical Mechanics [Springer-Verlag, New York, 1989, 2nd Ed.]. In the following pages Goldstein's exposition is condensed, whereas Arnold's terse mathematical treatment is expanded. The quantum mechanical part is in the spirit of L. C. Biedenharn and J. D. Louck, Angular Momentum in Quantum Physics: Theory and Application [Addison-Wesley, Reading, MA, 1981].

2 The mathematics of rotations in $\mathbb{R}^3$

Consider a real $3\times 3$ matrix $\mathbf{R}$ with columns $\mathbf{r}_1, \mathbf{r}_2, \mathbf{r}_3$, i.e., $\mathbf{R} = (\mathbf{r}_1, \mathbf{r}_2, \mathbf{r}_3)$. The matrix $\mathbf{R}$ is orthogonal if
\[
\mathbf{r}_i \cdot \mathbf{r}_j = \delta_{ij}, \qquad i,j = 1,2,3.
\]
The matrix $\mathbf{R}$ is a proper rotation matrix if it is orthogonal and if $\mathbf{r}_1$, $\mathbf{r}_2$ and $\mathbf{r}_3$ form a right-handed set, i.e.,
\[
\mathbf{r}_i \times \mathbf{r}_j = \sum_{k=1}^{3} \varepsilon_{ijk}\, \mathbf{r}_k. \tag{1}
\]
Here $\varepsilon_{ijk}$ is the antisymmetric (Levi-Civita) tensor,
\[
\varepsilon_{123} = \varepsilon_{312} = \varepsilon_{231} = 1, \qquad
\varepsilon_{213} = \varepsilon_{321} = \varepsilon_{132} = -1, \tag{2}
\]
and $\varepsilon_{ijk} = 0$ if two or more indices are equal. The matrix $\mathbf{R}$ is an improper rotation matrix if its column vectors form a left-handed set, i.e.,
\[
\mathbf{r}_i \times \mathbf{r}_j = -\sum_{k=1}^{3} \varepsilon_{ijk}\, \mathbf{r}_k. \tag{3}
\]
Equations (1) and (3) can be condensed into one equation,
\[
\mathbf{r}_i \times \mathbf{r}_j = \det(\mathbf{R}) \sum_{k=1}^{3} \varepsilon_{ijk}\, \mathbf{r}_k, \tag{4}
\]
by virtue of the following lemma.

Lemma 1. The determinant of a proper rotation matrix is $1$ and of an improper rotation $-1$.

Proof. The determinant of a $3\times 3$ matrix $(\mathbf{a}, \mathbf{b}, \mathbf{c})$ can be written as $\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c})$. Now, for a proper rotation, we find by Eq. (1), remembering that the $\mathbf{r}_k$ are orthonormal,
\[
\mathbf{r}_1 \cdot (\mathbf{r}_2 \times \mathbf{r}_3) = \sum_k \varepsilon_{23k}\, \mathbf{r}_1 \cdot \mathbf{r}_k = \varepsilon_{231} = 1,
\]
and likewise we find $-1$ for an improper rotation by Eq. (3).

The Levi-Civita tensor allows the following compact notation for the vector product:
\[
(\mathbf{a} \times \mathbf{b})_i = \sum_{j,k} \varepsilon_{ijk}\, a_j b_k.
\]
For instance,
\[
(\mathbf{a} \times \mathbf{b})_2 = \varepsilon_{213}\, a_1 b_3 + \varepsilon_{231}\, a_3 b_1 = -a_1 b_3 + a_3 b_1.
\]

Theorem 1. A proper rotation matrix $\mathbf{R} = (\mathbf{r}_1, \mathbf{r}_2, \mathbf{r}_3)$ can be factorized as
\[
\mathbf{R} = \mathbf{R}_z(\omega_3)\, \mathbf{R}_y(\omega_2)\, \mathbf{R}_x(\omega_1) \qquad \text{(the ``xyz-parametrization'')}
\]
or also as
\[
\mathbf{R} = \mathbf{R}_z(\alpha)\, \mathbf{R}_y(\beta)\, \mathbf{R}_z(\gamma) \qquad \text{(the ``Euler parametrization'')},
\]
where
\[
\mathbf{R}_z(\varphi) := \begin{pmatrix} \cos\varphi & -\sin\varphi & 0 \\ \sin\varphi & \cos\varphi & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad
\mathbf{R}_y(\varphi) := \begin{pmatrix} \cos\varphi & 0 & \sin\varphi \\ 0 & 1 & 0 \\ -\sin\varphi & 0 & \cos\varphi \end{pmatrix}, \quad
\mathbf{R}_x(\varphi) := \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\varphi & -\sin\varphi \\ 0 & \sin\varphi & \cos\varphi \end{pmatrix}. \tag{5}
\]

Proof. We first prove the xyz-parametrization by describing an algorithm for the factorization of $\mathbf{R}$. Consider to that end
\[
\mathbf{R}_z(\omega_3)\, \mathbf{R}_y(\omega_2) =
\begin{pmatrix}
\cos\omega_3 \cos\omega_2 & -\sin\omega_3 & \cos\omega_3 \sin\omega_2 \\
\sin\omega_3 \cos\omega_2 & \cos\omega_3 & \sin\omega_3 \sin\omega_2 \\
-\sin\omega_2 & 0 & \cos\omega_2
\end{pmatrix}
=: (\mathbf{a}_1, \mathbf{a}_2, \mathbf{a}_3). \tag{6}
\]
Note that the multiplication by $\mathbf{R}_x(\omega_1)$ on the right does not affect the first column, so that $\mathbf{r}_1 = \mathbf{a}_1$. Solve $\omega_2$ and $\omega_3$ from the first column of $\mathbf{R}$,
\[
\mathbf{r}_1 = \begin{pmatrix} \cos\omega_3 \cos\omega_2 \\ \sin\omega_3 \cos\omega_2 \\ -\sin\omega_2 \end{pmatrix}.
\]
This is possible. First solve $\omega_2$ for $-\pi/2 \le \omega_2 \le \pi/2$ from
\[
\sin\omega_2 = -R_{31} \equiv -(\mathbf{r}_1)_3.
\]
Then solve $\omega_3$ for $0 \le \omega_3 \le 2\pi$ from
\[
\cos\omega_3 = \frac{R_{11}}{\cos\omega_2}, \qquad \sin\omega_3 = \frac{R_{21}}{\cos\omega_2}.
\]
This determines the vectors $\mathbf{a}_2$ and $\mathbf{a}_3$. Since $\mathbf{a}_1$, $\mathbf{a}_2$ and $\mathbf{a}_3$ are the columns of a proper rotation matrix [Eq. (6)], they form an orthonormal right-handed system. The plane spanned by $\mathbf{a}_2$ and $\mathbf{a}_3$ is orthogonal to $\mathbf{a}_1 \equiv \mathbf{r}_1$ and hence contains $\mathbf{r}_2$ and $\mathbf{r}_3$. Thus,
\[
(\mathbf{r}_2, \mathbf{r}_3) = (\mathbf{a}_2, \mathbf{a}_3)
\begin{pmatrix} \cos\omega_1 & -\sin\omega_1 \\ \sin\omega_1 & \cos\omega_1 \end{pmatrix}. \tag{7}
\]
Since $\mathbf{r}_2$, $\mathbf{a}_2$ and $\mathbf{a}_3$ are known unit vectors we can compute
\[
\mathbf{a}_2 \cdot \mathbf{r}_2 = \cos\omega_1, \qquad \mathbf{a}_3 \cdot \mathbf{r}_2 = \sin\omega_1. \tag{8}
\]
These equations give $\omega_1$ with $0 \le \omega_1 \le 2\pi$. Augment the matrix in Eq. (7) to $\mathbf{R}_x(\omega_1)$; then
\[
\mathbf{R} \equiv (\mathbf{r}_1, \mathbf{r}_2, \mathbf{r}_3) = (\mathbf{r}_1, \mathbf{a}_2, \mathbf{a}_3)\, \mathbf{R}_x(\omega_1)
= (\mathbf{a}_1, \mathbf{a}_2, \mathbf{a}_3)\, \mathbf{R}_x(\omega_1)
= \mathbf{R}_z(\omega_3)\, \mathbf{R}_y(\omega_2)\, \mathbf{R}_x(\omega_1).
\]
This concludes the proof of the xyz-parametrization.

The Euler parametrization is obtained by solving $\omega_2$ and $\omega_3$ from $\mathbf{r}_3 = \mathbf{a}_3$ and then considering
\[
(\mathbf{r}_1, \mathbf{r}_2) = (\mathbf{a}_1, \mathbf{a}_2)
\begin{pmatrix} \cos\omega_1 & -\sin\omega_1 \\ \sin\omega_1 & \cos\omega_1 \end{pmatrix} \tag{9}
\]
or
\[
\mathbf{a}_1 \cdot \mathbf{r}_1 = \cos\omega_1, \qquad \mathbf{a}_2 \cdot \mathbf{r}_1 = \sin\omega_1.
\]
Equation (9) can be written as
\[
(\mathbf{r}_1, \mathbf{r}_2, \mathbf{r}_3) = (\mathbf{a}_1, \mathbf{a}_2, \mathbf{r}_3)\, \mathbf{R}_z(\omega_1)
= \mathbf{R}_z(\omega_3)\, \mathbf{R}_y(\omega_2)\, \mathbf{R}_z(\omega_1),
\]
which proves the Euler parametrization.

Note. Some confusion exists about the Euler angles of an improper orthogonal matrix $\mathbf{S}$. One can write $\mathbf{S} = \mathbf{S}'\mathbf{R}$, where $\mathbf{R}$ is proper and has a unique Euler parametrization and $\mathbf{S}'$ is another improper rotation matrix. Different choices of $\mathbf{S}'$ are possible. Some workers choose $\mathbf{S}' = -\mathbf{1}$ (space inversion), while others choose a reflection, for instance in the $xy$ plane:
\[
\mathbf{S}' = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{pmatrix}.
\]
Since the choice of $\mathbf{S}'$ is usually implicit and clouded by physical arguments, it is not always clear that the choice is only a matter of convention. In any case, one needs an extra convention, added to the Euler convention, to uniquely parametrize an improper rotation matrix.

Yet another parametrization of proper rotation matrices, the $(\mathbf{n}, \varphi)$ parametrization, is useful. In order to introduce it, we first prove the existence of a rotation (invariant) axis.

Theorem 2 (Euler's theorem). A rotation matrix $\mathbf{R}$ has at least one invariant vector $\mathbf{n}$, i.e., $\mathbf{R}\,\mathbf{n} = \mathbf{n}$. If $\mathbf{R}$ has more than one invariant vector, $\mathbf{R} = \mathbf{1}$ (the unit matrix) and any vector is an invariant vector.

Proof. We show that the matrix $\mathbf{R}$ has an eigenvalue $\lambda = 1$.
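The factorization algorithm in the proof of Theorem 1 can be checked numerically. The sketch below (plain Python; the function names such as `factor_xyz` are hypothetical, introduced only for this illustration) recovers $(\omega_3, \omega_2, \omega_1)$ from a proper rotation matrix exactly as in the proof: $\omega_2$ from the $(3,1)$ element, $\omega_3$ from the remainder of the first column, and $\omega_1$ from the projections of Eq. (8).

```python
import math

def Rz(p):
    c, s = math.cos(p), math.sin(p)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def Ry(p):
    c, s = math.cos(p), math.sin(p)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def Rx(p):
    c, s = math.cos(p), math.sin(p)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def factor_xyz(R):
    """Recover (w3, w2, w1) with R = Rz(w3) Ry(w2) Rx(w1), as in Theorem 1."""
    # Step 1: sin(w2) = -R_31, with -pi/2 <= w2 <= pi/2.
    w2 = math.asin(-R[2][0])
    c2 = math.cos(w2)
    # Step 2: cos(w3) = R_11 / cos(w2), sin(w3) = R_21 / cos(w2).
    w3 = math.atan2(R[1][0] / c2, R[0][0] / c2)
    # Step 3: a2, a3 are the last two columns of Rz(w3) Ry(w2) [Eq. (6)];
    # project r2 onto them to get cos(w1) and sin(w1) [Eq. (8)].
    A = matmul(Rz(w3), Ry(w2))
    a2 = [A[i][1] for i in range(3)]
    a3 = [A[i][2] for i in range(3)]
    r2 = [R[i][1] for i in range(3)]
    w1 = math.atan2(dot(a3, r2), dot(a2, r2))
    return w3, w2, w1

# Round-trip check on an arbitrary proper rotation.
R = matmul(Rz(0.9), matmul(Ry(-0.4), Rx(1.7)))
w3, w2, w1 = factor_xyz(R)
R2 = matmul(Rz(w3), matmul(Ry(w2), Rx(w1)))
err = max(abs(R[i][j] - R2[i][j]) for i in range(3) for j in range(3))
print(err < 1e-12)  # True: the factorization reproduces R
```

Note that `math.atan2` returns angles in $(-\pi, \pi]$ rather than $[0, 2\pi]$; the two conventions agree modulo $2\pi$, so the reassembled product is unchanged.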
Since $\det(\mathbf{R})^{-1} = \det(\mathbf{R}^{-1}) = 1$, we find, using the rules $\det(\mathbf{A}^T) = \det(\mathbf{A})$ and $\det(\mathbf{A}\mathbf{B}) = \det(\mathbf{A})\det(\mathbf{B})$,
\[
\det(\mathbf{R} - \mathbf{1}) = \det\big((\mathbf{R} - \mathbf{1})^T\big) = \det(\mathbf{R}^T - \mathbf{1}) = \det(\mathbf{R}^{-1} - \mathbf{1})
= \det\big(-\mathbf{R}^{-1}(\mathbf{R} - \mathbf{1})\big) = \det(-\mathbf{1})\det(\mathbf{R}^{-1})\det(\mathbf{R} - \mathbf{1})
= -\det(\mathbf{R} - \mathbf{1}).
\]
Hence $\det(\mathbf{R} - \mathbf{1}) = -\det(\mathbf{R} - \mathbf{1})$, so that $\det(\mathbf{R} - \mathbf{1}) = 0$, and we conclude that the secular equation $\det(\mathbf{R} - \lambda\mathbf{1}) = 0$ has the root $\lambda = 1$. The corresponding eigenvector is $\mathbf{n}$.

From linear algebra we know the general result that an $m \times m$ matrix $\mathbf{A}$ has $m$ orthogonal eigenvectors if and only if it is normal, that is, if $\mathbf{A}\mathbf{A}^\dagger = \mathbf{A}^\dagger\mathbf{A}$. That is, a normal matrix is unitarily equivalent to a diagonal matrix. Its eigenvectors and eigenvalues may be complex. In the case at hand $\mathbf{R}$, which obviously is normal, is equivalent to a matrix of the form
\[
\begin{pmatrix} e^{i\phi} & 0 & 0 \\ 0 & e^{-i\phi} & 0 \\ 0 & 0 & 1 \end{pmatrix},
\]
because, as we just saw, it has at least one eigenvalue $1$. Further, the diagonal matrix is unitary since $\mathbf{R}$ is orthogonal. The diagonal elements of a diagonal unitary matrix lie on the unit circle in the complex plane. Finally, $\det(\mathbf{R}) = 1$ (the product of the diagonal elements), so that the two complex eigenvalues are each other's complex conjugates. The two corresponding eigenvectors are in general complex. The matrix $\mathbf{R}$ has real eigenvectors other than $\mathbf{n}$ only if $\phi = \pi$ or $\phi = 0$. For $\phi = \pi$ the eigenvectors change sign and are not invariant. For $\phi = 0$ we have the unit matrix. This proves Theorem 2.

Often one writes a proper rotation matrix as $\mathbf{R}(\mathbf{n}, \varphi)$, where the invariant vector $\mathbf{n}$ is the rotation axis and $\varphi$ is the angle of rotation around $\mathbf{n}$. It is not difficult to give an explicit expression for $\mathbf{R}(\mathbf{n}, \varphi)$. Consider to that end an arbitrary vector $\mathbf{r} \neq a\mathbf{n}$ in $\mathbb{R}^3$ ($a \in \mathbb{R}$), and decompose it into a component parallel to the invariant unit vector $\mathbf{n}$ and a component $\mathbf{x}_\perp$ perpendicular to it:
\[
\mathbf{r} = (\mathbf{r} \cdot \mathbf{n})\, \mathbf{n} + \mathbf{x}_\perp
\quad \text{with} \quad
\mathbf{x}_\perp = \mathbf{r} - (\mathbf{r} \cdot \mathbf{n})\, \mathbf{n}. \tag{10}
\]
The vectors $\mathbf{r}$, $\mathbf{x}_\perp$ and $\mathbf{n}$ are in one plane, while $\mathbf{y}_\perp \equiv \mathbf{n} \times \mathbf{r}$ is perpendicular to this plane. The vectors $\mathbf{n}$, $\mathbf{x}_\perp$ and $\mathbf{y}_\perp$ form a right-handed frame.
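Both halves of Theorem 2 can be checked numerically: $\det(\mathbf{R} - \mathbf{1})$ vanishes for any proper rotation, and an invariant axis exists. The sketch below (plain Python; all helper names are hypothetical) reads the axis off the antisymmetric part of $\mathbf{R}$, using the standard identity $\mathbf{R} - \mathbf{R}^T = 2\sin\varphi\, \mathbf{N}$, where $\mathbf{N}$ is the cross-product matrix of $\mathbf{n}$. This identity is not derived in the text and is assumed here; it fails only for $\varphi = 0, \pi$, where $\sin\varphi = 0$.

```python
import math

def rot(axis, p):
    # Elementary rotation matrices of Eq. (5).
    c, s = math.cos(p), math.sin(p)
    if axis == 'z':
        return [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    if axis == 'y':
        return [[c, 0, s], [0, 1, 0], [-s, 0, c]]
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

R = matmul(rot('z', 0.7), matmul(rot('y', 0.3), rot('x', -1.1)))

# det(R - 1) = 0, so lambda = 1 is an eigenvalue (Theorem 2).
RmI = [[R[i][j] - (1 if i == j else 0) for j in range(3)] for i in range(3)]
print(abs(det3(RmI)) < 1e-12)  # True

# Read the axis off the antisymmetric part of R (assumes sin(phi) != 0).
v = [R[2][1] - R[1][2], R[0][2] - R[2][0], R[1][0] - R[0][1]]
nrm = math.sqrt(sum(x * x for x in v))
n = [x / nrm for x in v]

# n is invariant: R n = n.
Rn = [sum(R[i][j] * n[j] for j in range(3)) for i in range(3)]
print(all(abs(Rn[i] - n[i]) < 1e-12 for i in range(3)))  # True
```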
The vector $\mathbf{n}$ has unit length by definition, and the vectors $\mathbf{x}_\perp$ and $\mathbf{y}_\perp$ both have the length $\sqrt{|\mathbf{r}|^2 - (\mathbf{n} \cdot \mathbf{r})^2}$ (which is not necessarily unity).
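The decomposition (10) and the frame $(\mathbf{n}, \mathbf{x}_\perp, \mathbf{y}_\perp)$ can be verified for concrete numbers. A minimal sketch (plain Python; the particular vectors are hypothetical, chosen only for illustration) checks that $\mathbf{x}_\perp$ and $\mathbf{y}_\perp$ are orthogonal and that both have length $\sqrt{|\mathbf{r}|^2 - (\mathbf{n}\cdot\mathbf{r})^2}$:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def norm(u):
    return math.sqrt(dot(u, u))

# Hypothetical unit axis n and vector r, chosen only for illustration.
n = [x / math.sqrt(14.0) for x in (1.0, 2.0, 3.0)]
r = [0.5, -1.0, 2.0]

# Eq. (10): split r into a part along n and a part x_perp orthogonal to n.
x_perp = [r[i] - dot(r, n) * n[i] for i in range(3)]
y_perp = cross(n, r)  # perpendicular to the plane of r, x_perp and n

# Both perpendicular components have length sqrt(|r|^2 - (n.r)^2).
expected = math.sqrt(dot(r, r) - dot(n, r) ** 2)
print(abs(norm(x_perp) - expected) < 1e-12)  # True
print(abs(norm(y_perp) - expected) < 1e-12)  # True
print(abs(dot(x_perp, n)) < 1e-12)           # True: x_perp orthogonal to n
print(abs(dot(x_perp, y_perp)) < 1e-12)      # True: and to y_perp
```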