Analytical Mechanics: Lagrangian and Hamiltonian Formalism
MATH-F-204 Analytical mechanics: Lagrangian and Hamiltonian formalism

Glenn Barnich
Physique Théorique et Mathématique, Université libre de Bruxelles and International Solvay Institutes
Campus Plaine C.P. 231, B-1050 Bruxelles, Belgique
Bureau 2.O6.217, E-mail: [email protected], Tel: 02 650 58 01.

The aim of the course is an introduction to Lagrangian and Hamiltonian mechanics. The fundamental equations, as well as standard applications of Newtonian mechanics, are derived from variational principles.

From a mathematical perspective, the course and exercises (i) constitute an application of differential and integral calculus (primitives, the inverse and implicit function theorems, graphs and functions); (ii) provide examples and explicit solutions of first and second order systems of differential equations; (iii) give an implicit introduction to differential manifolds and to realizations of Lie groups and algebras (changes of coordinates, parametrization of a rotation, the Galilean group, canonical transformations, one-parameter subgroups, infinitesimal transformations).

From a physical viewpoint, a good grasp of the material of the course is indispensable for a thorough understanding of the structure of quantum mechanics, both in its operatorial and path integral formulations. Furthermore, the discussion of Noether's theorem and Galilean invariance is done in a way that allows for a direct generalization to field theories with Poincaré or conformal invariance.

Contents

1 Extremisation, constraints and Lagrange multipliers
  1.1 Unconstrained extremisation
  1.2 Constraints and regularity conditions
  1.3 Constrained extremisation
2 Constrained systems and d'Alembert principle
  2.1 Holonomic constraints
  2.2 d'Alembert's theorem
  2.3 Non-holonomic constraints
  2.4 d'Alembert's principle
3 Hamilton and least action principles
4 Lagrangian mechanics
5 Applications of the Lagrangian formalism
6 Covariance of the Lagrangian formalism
  6.1 Change of dependent variables
  6.2 Change of the independent variable
  6.3 Combined change of variables
7 Noether's theorem and Galilean invariance
  7.1 Noether's first theorem
  7.2 Applications
    7.2.1 Explicit time independence and conservation of energy
    7.2.2 Cyclic variables and conservation of canonical momentum
    7.2.3 One-parameter groups of transformations
    7.2.4 One-parameter groups of transformations involving time
    7.2.5 Galilean boosts
8 Hamiltonian formalism
9 Canonical transformations
10 Dynamics in non-inertial frames
11 Charged particle in electromagnetic field
References

Chapter 1: Extremisation, constraints and Lagrange multipliers

The purpose of this chapter is to re-derive, in the finite-dimensional case, standard results on constrained extremisation, so as to motivate and clarify the analogous results needed in the functional case. It is based on chapters II.1, II.2 and II.5 of [8]. The discussion of constraints and regularity conditions is taken from chapter 1 of [11].

1.1 Unconstrained extremisation

The problem consists of finding the extrema (= stationary points) of a function $F(x^1, \dots, x^n)$ of $n$ variables $x^a$, $a = 1, \dots, n$, in the interior of a domain. We write the variation of such a function under variations $\delta x^a$ of the variables $x^a$ as

\[
\delta F = \frac{\partial F}{\partial x^a}\,\delta x^a \,. \tag{1.1}
\]

Here and throughout, we use the summation convention over repeated indices,

\[
\frac{\partial F}{\partial x^a}\,\delta x^a = \sum_{a=1}^{n} \frac{\partial F}{\partial x^a}\,\delta x^a \,.
\]

Since for a collection of functions $f_a(x^b)$,

\[
f_a\,\delta x^a = 0, \quad \forall\,\delta x^a \quad \Longrightarrow \quad f_a = 0 \,, \tag{1.2}
\]

it follows that the extrema $\bar{x}^a$ are determined by the vanishing of the gradient of $F$,

\[
\delta F = 0 \quad \Longleftrightarrow \quad \left.\frac{\partial F}{\partial x^a}\right|_{x^a = \bar{x}^a} = 0 \,. \tag{1.3}
\]

Recall further that the nature (maximum, minimum, saddle point) of each extremum is determined by the eigenvalues of the matrix

\[
\left.\frac{\partial^2 F}{\partial x^a \partial x^b}\right|_{\bar{x}} \,. \tag{1.4}
\]

1.2 Constraints and regularity conditions

Very often, one is interested in a problem of extremisation where the variables $x^a$ are subjected to constraints, that is to say, where $r$ independent relations

\[
G_m(x) = 0, \qquad m = 1, \dots, r \,, \tag{1.5}
\]

are supposed to hold. The standard regularity conditions on the functions $G_m$ are that the rank of the matrix $\frac{\partial G_m}{\partial x^a}$ is $r$ (locally, in a neighborhood of $G_m = 0$). This means that one may choose locally a new system of coordinates such that the $G_m$ are $r$ of the new coordinates, $x^a \to (q^\alpha, G_m)$, with $\alpha = 1, \dots, n-r$.

Lemma 1. A function $f$ that vanishes when the constraints hold can be written as a combination of the functions defining the constraints:

\[
f|_{G_m = 0} = 0 \quad \Longleftrightarrow \quad f = g^m G_m \,, \tag{1.6}
\]

for some coefficients $g^m(x)$.

That the condition is sufficient ($\Leftarrow$) is obvious. That it is necessary ($\Rightarrow$) is shown in the new coordinate system, where $f(q^\alpha, G_m = 0) = 0$ and thus

\[
f(q^\alpha, G_m) = \int_0^1 d\tau\, \frac{d}{d\tau} f(q^\alpha, \tau G_m)
= G_m \int_0^1 d\tau\, \frac{\partial f}{\partial G_m}(q^\alpha, \tau G_m) \,, \tag{1.7}
\]

so that $g^m = \int_0^1 d\tau\, \frac{\partial f}{\partial G_m}(q^\alpha, \tau G_m)$.

In the following, we will use the notation $f \approx 0$ for a function that vanishes when the constraints hold, so that the lemma becomes $f \approx 0 \Longleftrightarrow f = g^m G_m$.

Lemma 2. Suppose that the constraints hold and that one restricts oneself to variations $\delta x^a$ tangent to the constraint surface,

\[
G_m = 0, \qquad \frac{\partial G_m}{\partial x^a}\,\delta x^a \approx 0 \,. \tag{1.8}
\]

It follows that

\[
f_a\,\delta x^a \approx 0, \quad \forall\,\delta x^a \text{ satisfying (1.8)} \quad \Longleftrightarrow \quad f_a \approx \mu^m \frac{\partial G_m}{\partial x^a} \,, \tag{1.9}
\]

for some $\mu^m$.

NB: In this lemma, the use of $\approx$ means that the equalities in (1.8) and (1.9) are understood as holding when $G_m = 0$.
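Before turning to the proof of Lemma 2, the explicit formula for the coefficients $g^m$ in the proof of Lemma 1 can be checked symbolically in adapted coordinates. The following sketch is an illustration added here (not part of the original notes); it assumes sympy is available, works with a single constraint, and uses a hypothetical function $f$ chosen to vanish at $G = 0$:

```python
import sympy as sp

q, G, tau = sp.symbols('q G tau')

# Hypothetical function, written directly in adapted coordinates (q, G),
# chosen so that it vanishes on the constraint surface G = 0.
f = G*q + G**2

# Coefficient from the proof of Lemma 1: g = \int_0^1 d tau  df/dG (q, tau*G)
g = sp.integrate(sp.diff(f, G).subs(G, tau*G), (tau, 0, 1))

# Lemma 1 asserts that f = g*G identically; here g works out to q + G.
assert sp.simplify(f - g*G) == 0
```

The same computation goes through for any smooth $f$ with $f(q, 0) = 0$; the polynomial choice merely keeps the integral elementary.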
That the condition in Lemma 2 is sufficient ($\Leftarrow$) is again direct when contracting the expression for $f_a$ on the RHS of (1.9) with $\delta x^a$ and using the second equation of (1.8).

That the condition is necessary ($\Rightarrow$) follows by first subtracting from the LHS of (1.9) the combination $-\lambda^m \frac{\partial G_m}{\partial x^a}\,\delta x^a \approx 0$, with arbitrary $\lambda^m$, which gives

\[
\left(f_a - \lambda^m \frac{\partial G_m}{\partial x^a}\right)\delta x^a \approx 0 \,. \tag{1.10}
\]

Without loss of generality (by renaming some of the variables if necessary), we can assume that

\[
\left|\frac{\partial G_m}{\partial x^a}\right| \neq 0, \qquad \text{for } a = n-r+1, \dots, n \,. \tag{1.11}
\]

This implies that one may locally solve the constraints in terms of the $r$ last coordinates,

\[
G_m = 0 \quad \Longleftrightarrow \quad x^\Delta = X^\Delta(x^\alpha), \qquad \Delta = n-r+1, \dots, n, \quad \alpha = 1, \dots, n-r \,. \tag{1.12}
\]

In this case, one refers to the $x^\Delta$ as dependent and to the $x^\alpha$ as independent variables. This also implies that, locally, there exists an invertible matrix $M^{m\Delta}$ such that

\[
M^{m\Delta}\,\frac{\partial G_{m'}}{\partial x^\Delta} = \delta^m_{m'}, \qquad \frac{\partial G_m}{\partial x^\Delta}\, M^{m\Gamma} = \delta^\Gamma_\Delta \,. \tag{1.13}
\]

In terms of the various types of variables, (1.10) may be decomposed as

\[
\left(f_\Delta - \lambda^m \frac{\partial G_m}{\partial x^\Delta}\right)\delta x^\Delta
+ \left(f_\alpha - \lambda^m \frac{\partial G_m}{\partial x^\alpha}\right)\delta x^\alpha \approx 0 \,. \tag{1.14}
\]

Since the $\lambda^m$ are arbitrary, we are free to fix

\[
\lambda^m = M^{m\Gamma} f_\Gamma =: \mu^m \quad \Longleftrightarrow \quad f_\Delta = \mu^m \frac{\partial G_m}{\partial x^\Delta} \,, \tag{1.15}
\]

so that the first term of (1.14) vanishes.
We then remain with

\[
\left(f_\alpha - \mu^m \frac{\partial G_m}{\partial x^\alpha}\right)\delta x^\alpha \approx 0 \,. \tag{1.16}
\]

Since the variables $x^\alpha$ are unconstrained and the variations $\delta x^\alpha$ independent, this implies, as in (1.2), that

\[
f_\alpha \approx \mu^m \frac{\partial G_m}{\partial x^\alpha} \,. \tag{1.17}
\]

1.3 Constrained extremisation

Extremisation of a function $F(x^a)$ under $r$ constraints $G_m = 0$ can then be done in two equivalent ways.

Method 1: The first is to solve the constraints as in (1.12) and to extremise the function

\[
F^G(x^\alpha) = F(x^\alpha, X^\Delta(x^\beta)) \,. \tag{1.18}
\]

In terms of the unconstrained variables $x^\alpha$, we are back to an unconstrained extremisation problem, whose extrema are determined by

\[
\frac{\partial F^G}{\partial x^\alpha} = 0 \,. \tag{1.19}
\]

In order to relate this to the previous discussion and to the second method explained below, note that, by Lemma 1, $G_m = 0 \Longleftrightarrow x^\Delta - X^\Delta(x^\alpha) = 0$ implies

\[
x^\Delta - X^\Delta(x^\alpha) = g^{\Delta m} G_m \,, \tag{1.20}
\]

for $g^{\Delta m}(x^a)$ invertible. Differentiation with respect to $x^\Gamma$ yields

\[
\delta^\Delta_\Gamma = \frac{\partial g^{\Delta m}}{\partial x^\Gamma}\, G_m + g^{\Delta m}\,\frac{\partial G_m}{\partial x^\Gamma} \,. \tag{1.21}
\]

Hence, if the constraints are satisfied, i.e., if $G_m = 0$, $g^{\Delta m}$ is the inverse matrix to $\frac{\partial G_m}{\partial x^\Delta}$, $g^{\Delta m} \approx M^{\Delta m}$.

On the one hand,

\[
\frac{\partial F^G}{\partial x^\alpha}
= \left.\frac{\partial F}{\partial x^\alpha}\right|_{x^\Delta = X^\Delta}
+ \left.\frac{\partial F}{\partial x^\Gamma}\right|_{x^\Delta = X^\Delta} \frac{\partial X^\Gamma}{\partial x^\alpha} \,. \tag{1.22}
\]

On the other hand,

\[
\frac{\partial X^\Gamma}{\partial x^\alpha}
= \frac{\partial (X^\Gamma - x^\Gamma)}{\partial x^\alpha}
= -\frac{\partial g^{\Gamma m}}{\partial x^\alpha}\, G_m - g^{\Gamma m}\,\frac{\partial G_m}{\partial x^\alpha} \,, \tag{1.23}
\]

so that, if $G_m = 0$, or equivalently, if one substitutes $x^\Delta$ by $X^\Delta$, (1.19) may be written as

\[
\frac{\partial F}{\partial x^\alpha} \approx \mu^m \frac{\partial G_m}{\partial x^\alpha} \,, \qquad
\mu^m \approx \frac{\partial F}{\partial x^\Delta}\, g^{\Delta m} \,. \tag{1.24}
\]

Furthermore, if $G_m = 0$, one may also write

\[
\frac{\partial F}{\partial x^\Delta}
= \frac{\partial F}{\partial x^\Gamma}\,\delta^\Gamma_\Delta
\approx \frac{\partial F}{\partial x^\Gamma}\, g^{\Gamma m}\,\frac{\partial G_m}{\partial x^\Delta}
\approx \mu^m \frac{\partial G_m}{\partial x^\Delta} \,, \tag{1.25}
\]

by using (1.21) and the definition of $\mu^m$ in (1.24).
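Method 1 can be tried out on a concrete toy problem. The following sketch is an illustration added here (not part of the original notes); it assumes sympy is available and uses the hypothetical choices $F = x + y$ and $G = x^2 + y^2 - 1$ (the unit circle), solving the constraint for the dependent variable and then checking the multiplier relation (1.24) at the extremum:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)

F = x + y               # illustrative function to extremise
G = x**2 + y**2 - 1     # single constraint G = 0, with y as dependent variable

# Solve the constraint for the dependent variable, y = X(x), as in (1.12);
# the positive branch is selected by the assumption y > 0.
X = sp.solve(G, y)[0]

# Reduced function F^G of (1.18) and its unconstrained extremum, (1.19)
FG = F.subs(y, X)
xbar = sp.solve(sp.diff(FG, x), x)[0]
ybar = X.subs(x, xbar)

# Check (1.24): at the extremum, each component of the gradient of F is
# mu times the corresponding component of the gradient of G, with one mu.
mu_from_x = (sp.diff(F, x) / sp.diff(G, x)).subs({x: xbar, y: ybar})
mu_from_y = (sp.diff(F, y) / sp.diff(G, y)).subs({x: xbar, y: ybar})
assert sp.simplify(mu_from_x - mu_from_y) == 0
```

With these choices the extremum sits at $x = y = \sqrt{2}/2$; the point of the check is that a single $\mu$ works for both components, exactly as (1.24) asserts.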
Method 2: Instead of restricting oneself to the space of independent variables $x^\alpha$, one extends the space of variables $x^a$ by additional variables $\lambda^m$, called Lagrange multipliers, and one considers the unconstrained extremisation of the function

\[
F^\lambda(x^a, \lambda^m) = F(x^a) - \lambda^m G_m \,, \qquad \delta F^\lambda = 0 \,. \tag{1.26}
\]

By unconstrained extremisation, we understand that the variables $x^a, \lambda^m$ are considered as unconstrained at the outset, with independent variations $\delta x^a, \delta\lambda^m$. That this gives the same extrema can be seen as follows:

\[
0 = \delta F^\lambda
= \left(\frac{\partial F}{\partial x^a} - \lambda^m \frac{\partial G_m}{\partial x^a}\right)\delta x^a
- \delta\lambda^m\, G_m \tag{1.27}
\]

implies

\[
\frac{\partial F}{\partial x^a} = \lambda^m \frac{\partial G_m}{\partial x^a} \,, \qquad G_m = 0 \,. \tag{1.28}
\]

A variation of the second equation then implies that $\frac{\partial G_m}{\partial x^a}\,\delta x^a = 0$, while contraction of the first equation with $\delta x^a$ gives (1.10) of the proof of Lemma 2.
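Method 2 can be run on the same kind of toy problem without ever solving the constraint. The sketch below is an illustration added here (not part of the original notes); it assumes sympy and uses the hypothetical choices $F = x + y$ and $G = x^2 + y^2 - 1$, extremising $F^\lambda$ over the extended variables $(x, y, \lambda)$ so that the system (1.28) comes out directly:

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)

F = x + y               # illustrative function to extremise
G = x**2 + y**2 - 1     # single constraint G = 0 (unit circle)

# Extended function F^lambda of (1.26); x, y and lam are all unconstrained.
Flam = F - lam*G

# Independent variations of x, y and lam reproduce the system (1.28):
# dF/dx^a = lam * dG/dx^a together with G = 0.
eqs = [sp.diff(Flam, v) for v in (x, y, lam)]
sols = sp.solve(eqs, [x, y, lam], dict=True)

# Every solution of the extended problem lies on the constraint surface.
for s in sols:
    assert sp.simplify(G.subs(s)) == 0
```

Here the extended system has two solutions, $x = y = \lambda = \pm\sqrt{2}/2$, i.e., both constrained extrema of $F$ on the circle, recovered without choosing dependent variables — which is precisely the practical advantage of the multiplier method.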