
Hamiltonian Mechanics

December 5, 2012

1 Phase space

Phase space is a dynamical arena in which the number of independent dynamical variables is doubled, from the $n$ variables $q_i$, $i = 1, 2, \ldots, n$, to $2n$, by treating either the velocities or the momenta as independent variables. This has two important consequences. First, the equations of motion become first order differential equations instead of second order, so that the initial conditions are enough to specify a unique point in phase space. This means that, unlike the configuration space treatment, there is a unique solution to the equations of motion through each point. This permits some useful geometric techniques in the study of the system. Second, as we shall see, the set of transformations that preserve the equations of motion is enlarged. In configuration space, we are free to use $n$ general coordinates, $q_i$, for our description. In phase space, however, we have $2n$ coordinates. Even though transformations among these $2n$ coordinates are not completely arbitrary, there are far more allowed transformations. This large set of transformations allows us, in principle, to formulate a general solution to mechanical problems via the Hamilton-Jacobi equation.

1.1 Velocity phase space

While we will not be using velocity phase space here, it provides some motivation for our developments in the next sections. The formal presentation of Hamiltonian mechanics begins in Section 2. Suppose we have an action functional

\[
S = \int L\left(q_i, \dot q_j, t\right) dt
\]

dependent on $n$ dynamical variables, $q_i(t)$, and their time derivatives. We might instead treat $L(q_i, u_j, t)$ as a function of $2n$ dynamical variables: instead of treating the velocities as the time derivatives of the position variables, $(q_i, \dot q_i)$, we introduce $n$ velocities $u_i$ and treat them as independent. Then the variations of the velocities, $\delta u_i$, are also independent, and we end up with $2n$ equations. We then include $n$ constraints, restoring the relationship between $q_i$ and $\dot q_i$,

\[
S = \int \left[ L\left(q_i, u_j, t\right) + \sum_i \lambda_i \left( \dot q_i - u_i \right) \right] dt
\]

Variation of the original dynamical variables then results in

\begin{align*}
0 = \delta_q S &= \int \left( \frac{\partial L}{\partial q_i}\,\delta q_i + \lambda_i\,\delta\dot q_i \right) dt \\
&= \int \left( \frac{\partial L}{\partial q_i} - \dot\lambda_i \right) \delta q_i\, dt
\end{align*}

so that
\[
\dot\lambda_i = \frac{\partial L}{\partial q_i}
\]
For the velocities, we find

\[
0 = \delta_u S = \int \left( \frac{\partial L}{\partial u_i} - \lambda_i \right) \delta u_i\, dt
\]
so that
\[
\lambda_i = \frac{\partial L}{\partial u_i}
\]

and finally, varying the Lagrange multipliers, λi, we recover the constraints,

\[
u_i = \dot q_i
\]
We may eliminate the multipliers by differentiating the velocity equation,
\[
\frac{d}{dt}\left( \lambda_i \right) = \frac{d}{dt}\left( \frac{\partial L}{\partial u_i} \right)
\]
to find $\dot\lambda_i$, then substituting for $u_i$ and $\dot\lambda_i$ into the $q_i$ equation,
\begin{align*}
\dot\lambda_i &= \frac{d}{dt}\left( \frac{\partial L}{\partial \dot q_i} \right) \\
\frac{d}{dt}\left( \frac{\partial L}{\partial \dot q_i} \right) &= \frac{\partial L}{\partial q_i} \\
\frac{d}{dt}\left( \frac{\partial L}{\partial \dot q_i} \right) - \frac{\partial L}{\partial q_i} &= 0
\end{align*}
and we recover the Euler-Lagrange equations. If the kinetic energy is of the form $\sum_{i=1}^{n} \frac{1}{2} m u_i^2$, then the Lagrange multipliers are just the momenta,
\[
\lambda_i = \frac{\partial L}{\partial u_i} = m u_i = m \dot q_i
\]

1.2 Phase space

We can make the construction above more general by requiring the Lagrange multipliers to always be the conjugate momenta. Combining the constraint equation with the equation for $\lambda_i$, we have
\[
\lambda_i = \frac{\partial L}{\partial \dot q_i}
\]
We now define the conjugate momentum to be exactly this derivative,
\[
p_i \equiv \frac{\partial L}{\partial \dot q_i}
\]
Then the action becomes
\begin{align*}
S &= \int \left[ L\left(q_i, u_j, t\right) + \sum_i p_i \dot q_i - \sum_i p_i u_i \right] dt \\
&= \int \left[ L\left(q_i, u_j, t\right) - \sum_i p_i u_i + \sum_i p_i \dot q_i \right] dt
\end{align*}

For Lagrangians quadratic in the velocities, $\sum_i p_i \dot q_i = 2T$, so the first two terms become
\begin{align*}
L\left(q_i, u_j, t\right) - \sum_i p_i u_i &= L\left(q_i, \dot q_j, t\right) - \sum_i p_i \dot q_i \\
&= T - V - 2T \\
&= -\left( T + V \right)
\end{align*}

The negative of this quantity defines the Hamiltonian,
\[
H \equiv \sum_i p_i \dot q_i - L\left(q_i, \dot q_j, t\right)
\]

Then
\[
S = \int \left[ \sum_i p_i \dot q_i - H \right] dt
\]
This successfully eliminates the Lagrange multipliers from the formulation. The term "phase space" is generally reserved for this momentum phase space, spanned by the coordinates $q_i, p_j$.
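As a concrete illustration (my own addition, not part of the notes), the following sympy sketch carries out this elimination for a single degree of freedom with the hypothetical Lagrangian $L = \frac{1}{2}m\dot q^2 - V(q)$: it computes the conjugate momentum, inverts it for the velocity, and assembles $H = p\dot q - L$.

import sympy as sp

t, m, p, qdot = sp.symbols('t m p qdot', positive=True)
q = sp.Function('q')(t)
V = sp.Function('V')

# Lagrangian, with the velocity treated as the independent symbol qdot
L = sp.Rational(1, 2) * m * qdot**2 - V(q)

p_def = sp.diff(L, qdot)                         # p = dL/d(qdot) = m*qdot
qdot_of_p = sp.solve(sp.Eq(p, p_def), qdot)[0]   # invert: qdot = p/m

H = sp.simplify((p * qdot - L).subs(qdot, qdot_of_p))
print(H)                                         # p**2/(2*m) + V(q(t))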

1.2.1 The Legendre transformation

Notice that $H$ is, by definition, independent of the velocities, since
\begin{align*}
\frac{\partial H}{\partial \dot q_i} &= \frac{\partial}{\partial \dot q_i}\left( \sum_j p_j \dot q_j - L \right) \\
&= \sum_j p_j \delta_{ij} - \frac{\partial L}{\partial \dot q_i} \\
&= p_i - \frac{\partial L}{\partial \dot q_i} \\
&\equiv 0
\end{align*}

Therefore, the Hamiltonian is a function of $q_i$ and $p_i$ only. This is an example of a general technique called a Legendre transformation. Suppose we have a function $f$ depending on two independent variables $A$ and $B$, with partial derivatives
\[
\frac{\partial f}{\partial A} = P, \qquad \frac{\partial f}{\partial B} = Q
\]
Then the differential of $f$ is
\[
df = P\,dA + Q\,dB
\]
A Legendre transformation allows us to interchange variables to make either $P$ or $Q$ or both into the independent variables. For example, let $g \equiv f - PA$. Then

\begin{align*}
dg &= df - A\,dP - P\,dA \\
&= P\,dA + Q\,dB - A\,dP - P\,dA \\
&= Q\,dB - A\,dP
\end{align*}

so that $g$ actually only changes with $B$ and $P$, $g = g(B, P)$. Similarly, $h = f - QB$ is a function of $(A, Q)$ only, while $k = -(f - PA - QB)$ has $(P, Q)$ as independent variables. Explicitly,

\begin{align*}
dk &= -df + P\,dA + A\,dP + Q\,dB + B\,dQ \\
&= A\,dP + B\,dQ
\end{align*}

and we now have
\[
\frac{\partial k}{\partial P} = A, \qquad \frac{\partial k}{\partial Q} = B
\]

2 Hamilton's equations

The essential formalism of Hamilton's equations is as follows. We begin with the action

\[
S = \int L\left(q_i, \dot q_j, t\right) dt
\]
and define the conjugate momenta
\[
p_i \equiv \frac{\partial L}{\partial \dot q_i}
\]
and Hamiltonian
\[
H\left(q_i, p_j, t\right) \equiv \sum_j p_j \dot q_j - L\left(q_i, \dot q_j, t\right)
\]
Then the action may be written as

\[
S = \int \left[ \sum_j p_j \dot q_j - H\left(q_i, p_j, t\right) \right] dt
\]

where qi and pj are now treated as independent variables. Finding extrema of the action with respect to all 2n variables, we find:

\begin{align*}
0 = \delta_{q_k} S &= \int \left[ \frac{\partial \left( \sum_j p_j \dot q_j \right)}{\partial \dot q_k}\,\delta\dot q_k - \frac{\partial H}{\partial q_k}\,\delta q_k \right] dt \\
&= \int \left[ \sum_j p_j \delta_{jk}\,\delta\dot q_k - \frac{\partial H}{\partial q_k}\,\delta q_k \right] dt \\
&= \int \left[ p_k\,\delta\dot q_k - \frac{\partial H}{\partial q_k}\,\delta q_k \right] dt \\
&= \int \left[ -\dot p_k - \frac{\partial H}{\partial q_k} \right] \delta q_k\, dt
\end{align*}
so that
\[
\dot p_k = -\frac{\partial H}{\partial q_k}
\]
and

\begin{align*}
0 = \delta_{p_k} S &= \int \left[ \frac{\partial \left( \sum_j p_j \dot q_j \right)}{\partial p_k}\,\delta p_k - \frac{\partial H}{\partial p_k}\,\delta p_k \right] dt \\
&= \int \left[ \dot q_k\,\delta p_k - \frac{\partial H}{\partial p_k}\,\delta p_k \right] dt \\
&= \int \left[ \dot q_k - \frac{\partial H}{\partial p_k} \right] \delta p_k\, dt
\end{align*}
so that
\[
\dot q_k = \frac{\partial H}{\partial p_k}
\]

These are Hamilton's equations. Whenever the Legendre transformation between $L$ and $H$ and between $\dot q_k$ and $p_k$ is non-degenerate, Hamilton's equations,
\begin{align*}
\dot q_k &= \frac{\partial H}{\partial p_k} \\
\dot p_k &= -\frac{\partial H}{\partial q_k}
\end{align*}
form a system equivalent to the Euler-Lagrange or Newtonian equations.

2.1 Example: Newton's second law

Suppose the Lagrangian takes the form
\[
L = \frac{1}{2} m \dot x^2 - V(x)
\]
Then the conjugate momenta are
\[
p_i = \frac{\partial L}{\partial \dot x_i} = m \dot x_i
\]
and the Hamiltonian becomes
\begin{align*}
H\left(x_i, p_j, t\right) &\equiv \sum_j p_j \dot x_j - L\left(x_i, \dot x_j, t\right) \\
&= m \dot x^2 - \frac{1}{2} m \dot x^2 + V(x) \\
&= \frac{1}{2} m \dot x^2 + V(x) \\
&= \frac{1}{2m} p^2 + V(x)
\end{align*}
Notice that we must invert the relationship between the momenta and the velocities,
\[
\dot x_i = \frac{p_i}{m}
\]
then explicitly replace all occurrences of the velocity with appropriate combinations of the momentum. Hamilton's equations are:
\begin{align*}
\dot x_k &= \frac{\partial H}{\partial p_k} = \frac{p_k}{m} \\
\dot p_k &= -\frac{\partial H}{\partial x_k} = -\frac{\partial V}{\partial x_k}
\end{align*}
thereby reproducing the usual definition of momentum and Newton's second law.
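Because Hamilton's equations are first order, they are also convenient to integrate numerically. The sketch below is my own illustration (not from the notes), using the assumed potential $V(x) = \frac{1}{2}kx^2$ and a leapfrog step; the near-constancy of $H$ along the numerical trajectory reflects conservation of energy.

import numpy as np

m, k = 1.0, 4.0                          # assumed parameter values
dV = lambda x: k * x                     # V(x) = k x^2 / 2
H = lambda x, p: p**2 / (2 * m) + 0.5 * k * x**2

def leapfrog(x, p, dt, steps):
    # integrate xdot = dH/dp = p/m, pdot = -dH/dx = -dV/dx
    for _ in range(steps):
        p -= 0.5 * dt * dV(x)            # half kick
        x += dt * p / m                  # drift
        p -= 0.5 * dt * dV(x)            # half kick
    return x, p

x1, p1 = leapfrog(1.0, 0.0, dt=1e-3, steps=10_000)
print(H(1.0, 0.0), H(x1, p1))            # the two energies agree closely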

2.2 Example: Coupled oscillators

Suppose we have a coupled oscillator comprised of two identical pendula, each of length $l$ and mass $m$, connected by a light spring with spring constant $k$. Then for small displacements, the action is

\[
S = \int \left[ \frac{1}{2} m l^2 \left( \dot\theta_1^2 + \dot\theta_2^2 \right) - \frac{1}{2} k \left( l\sin\theta_1 - l\sin\theta_2 \right)^2 - mgl\left( 1 - \cos\theta_1 \right) - mgl\left( 1 - \cos\theta_2 \right) \right] dt
\]

which for small angles becomes approximately

\[
S = \int \left[ \frac{1}{2} m l^2 \left( \dot\theta_1^2 + \dot\theta_2^2 \right) - \frac{1}{2} k \left( l\theta_1 - l\theta_2 \right)^2 - \frac{1}{2} mgl \left( \theta_1^2 + \theta_2^2 \right) \right] dt
\]

The conjugate momenta are:
\begin{align*}
p_1 &= \frac{\partial L}{\partial \dot\theta_1} = m l^2 \dot\theta_1 \\
p_2 &= \frac{\partial L}{\partial \dot\theta_2} = m l^2 \dot\theta_2
\end{align*}

and the Hamiltonian is
\begin{align*}
H &= p_1 \dot\theta_1 + p_2 \dot\theta_2 - L \\
&= m l^2 \dot\theta_1^2 + m l^2 \dot\theta_2^2 - \frac{1}{2} m l^2 \left( \dot\theta_1^2 + \dot\theta_2^2 \right) + \frac{1}{2} k \left( l\theta_1 - l\theta_2 \right)^2 + \frac{1}{2} mgl \left( \theta_1^2 + \theta_2^2 \right) \\
&= \frac{1}{2} m l^2 \left( \dot\theta_1^2 + \dot\theta_2^2 \right) + \frac{1}{2} k \left( l\theta_1 - l\theta_2 \right)^2 + \frac{1}{2} mgl \left( \theta_1^2 + \theta_2^2 \right) \\
&= \frac{1}{2ml^2} \left( p_1^2 + p_2^2 \right) + \frac{1}{2} k \left( l\theta_1 - l\theta_2 \right)^2 + \frac{1}{2} mgl \left( \theta_1^2 + \theta_2^2 \right)
\end{align*}

Notice that we always eliminate the velocities and write the Hamiltonian as a function of the momenta, pi. Hamilton’s equations are:

\begin{align*}
\dot\theta_1 &= \frac{\partial H}{\partial p_1} = \frac{1}{ml^2} p_1 \\
\dot\theta_2 &= \frac{\partial H}{\partial p_2} = \frac{1}{ml^2} p_2 \\
\dot p_1 &= -\frac{\partial H}{\partial \theta_1} = -k l^2 \left( \theta_1 - \theta_2 \right) - mgl\,\theta_1 \\
\dot p_2 &= -\frac{\partial H}{\partial \theta_2} = k l^2 \left( \theta_1 - \theta_2 \right) - mgl\,\theta_2
\end{align*}

From here we may solve in any way that suggests itself. If we differentiate $\dot\theta_1$ again and use the third equation, we have
\begin{align*}
\ddot\theta_1 &= \frac{1}{ml^2} \dot p_1 \\
&= \frac{1}{ml^2} \left[ -k l^2 \left( \theta_1 - \theta_2 \right) - mgl\,\theta_1 \right] \\
&= -\frac{k}{m} \left( \theta_1 - \theta_2 \right) - \frac{g}{l} \theta_1
\end{align*}

Similarly, for $\theta_2$ we have
\[
\ddot\theta_2 = \frac{k}{m} \left( \theta_1 - \theta_2 \right) - \frac{g}{l} \theta_2
\]
Subtracting,
\begin{align*}
\ddot\theta_1 - \ddot\theta_2 &= -\frac{2k}{m} \left( \theta_1 - \theta_2 \right) - \frac{g}{l} \left( \theta_1 - \theta_2 \right) \\
\frac{d^2}{dt^2} \left( \theta_1 - \theta_2 \right) + \left( \frac{2k}{m} + \frac{g}{l} \right) \left( \theta_1 - \theta_2 \right) &= 0
\end{align*}
so that
\[
\theta_1 - \theta_2 = A \sin \sqrt{\frac{2k}{m} + \frac{g}{l}}\, t + B \cos \sqrt{\frac{2k}{m} + \frac{g}{l}}\, t
\]
Adding instead, we find
\[
\ddot\theta_1 + \ddot\theta_2 = -\frac{g}{l} \left( \theta_1 + \theta_2 \right)
\]
so that
\[
\theta_1 + \theta_2 = C \sin \sqrt{\frac{g}{l}}\, t + D \cos \sqrt{\frac{g}{l}}\, t
\]
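As a quick numerical cross-check (my own addition, with arbitrarily chosen parameter values), the sketch below integrates the four Hamilton equations with an RK4 step and verifies that a pure difference-mode initial condition oscillates at $\omega_- = \sqrt{2k/m + g/l}$.

import numpy as np

m, l, k, g = 1.0, 1.0, 0.5, 9.81                  # assumed parameter values

def rhs(s):
    th1, th2, p1, p2 = s
    return np.array([
        p1 / (m * l**2),                           # theta1_dot = dH/dp1
        p2 / (m * l**2),                           # theta2_dot = dH/dp2
        -k * l**2 * (th1 - th2) - m * g * l * th1, # p1_dot = -dH/dtheta1
         k * l**2 * (th1 - th2) - m * g * l * th2, # p2_dot = -dH/dtheta2
    ])

s = np.array([0.1, -0.1, 0.0, 0.0])                # pure difference-mode initial data
dt, steps = 1e-3, 10_000
diff = []
for _ in range(steps):
    k1 = rhs(s); k2 = rhs(s + 0.5 * dt * k1)
    k3 = rhs(s + 0.5 * dt * k2); k4 = rhs(s + dt * k3)
    s = s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    diff.append(s[0] - s[1])

t = dt * np.arange(1, steps + 1)
omega_minus = np.sqrt(2 * k / m + g / l)
print(np.allclose(diff, 0.2 * np.cos(omega_minus * t), atol=1e-4))   # True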

3 Formal developments

In order to fully appreciate the structure and uses of Hamiltonian mechanics, we make some formal developments. First, we write Hamilton's equations,
\begin{align*}
\dot x_k &= \frac{\partial H}{\partial p_k} \\
\dot p_k &= -\frac{\partial H}{\partial x_k}
\end{align*}
for $k = 1, \ldots, n$, in a different way. Define a unified name for our $2n$ coordinates,

\[
\xi_A = \left( x_i, p_j \right)
\]

for A = 1,..., 2n. That is, more explicitly,

\begin{align*}
\xi_i &= x_i \\
\xi_{n+i} &= p_i
\end{align*}

We may immediately write the left-hand side of both of Hamilton's equations at once as
\[
\dot\xi_A = \left( \dot x_i, \dot p_j \right)
\]

The right-hand side of the equations involves all of the terms
\[
\frac{\partial H}{\partial \xi_A} = \left( \frac{\partial H}{\partial x_i}, \frac{\partial H}{\partial p_j} \right)
\]
but there is a difference of a minus sign between the two equations, together with an interchange of $x_i$ and $p_i$. We handle this by introducing a matrix called the symplectic form,
\[
\Omega_{AB} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}
\]
where

\[
[1]_{ij} = \delta_{ij}
\]
is the $n \times n$ identity matrix. Then, using the summation convention, Hamilton's equations take the form of a single expression,
\[
\dot\xi_A = \Omega_{AB} \frac{\partial H}{\partial \xi_B}
\]
We may check this by writing it out explicitly,

\begin{align*}
\begin{pmatrix} \dot x_i \\ \dot p_{i'} \end{pmatrix}
&= \begin{pmatrix} 0 & \delta_{ij'} \\ -\delta_{i'j} & 0 \end{pmatrix}
\begin{pmatrix} \frac{\partial H}{\partial x_j} \\[4pt] \frac{\partial H}{\partial p_{j'}} \end{pmatrix} \\
&= \begin{pmatrix} \delta_{ij'} \frac{\partial H}{\partial p_{j'}} \\[4pt] -\delta_{i'j} \frac{\partial H}{\partial x_j} \end{pmatrix} \\
&= \begin{pmatrix} \frac{\partial H}{\partial p_i} \\[4pt] -\frac{\partial H}{\partial x_{i'}} \end{pmatrix}
\end{align*}

In the example above, we have $\xi_1 = \theta_1$, $\xi_2 = \theta_2$, $\xi_3 = p_1$ and $\xi_4 = p_2$. In terms of these, the Hamiltonian may be written as
\[
H = \frac{1}{2} H_{AB}\, \xi_A \xi_B, \qquad
H_{AB} = \begin{pmatrix}
kl^2 + mgl & -kl^2 & 0 & 0 \\
-kl^2 & kl^2 + mgl & 0 & 0 \\
0 & 0 & \frac{1}{ml^2} & 0 \\
0 & 0 & 0 & \frac{1}{ml^2}
\end{pmatrix}
\]
and with
\[
\frac{\partial H}{\partial \xi_C} = \frac{1}{2} H_{AB}\, \delta_{AC}\, \xi_B + \frac{1}{2} H_{AB}\, \xi_A\, \delta_{BC} = H_{CB}\, \xi_B
\]
Hamilton's equations are

\begin{align*}
\begin{pmatrix} \dot\xi_1 \\ \dot\xi_2 \\ \dot\xi_3 \\ \dot\xi_4 \end{pmatrix}
&= \begin{pmatrix}
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
-1 & 0 & 0 & 0 \\
0 & -1 & 0 & 0
\end{pmatrix}
\begin{pmatrix}
kl^2 + mgl & -kl^2 & 0 & 0 \\
-kl^2 & kl^2 + mgl & 0 & 0 \\
0 & 0 & \frac{1}{ml^2} & 0 \\
0 & 0 & 0 & \frac{1}{ml^2}
\end{pmatrix}
\begin{pmatrix} \xi_1 \\ \xi_2 \\ \xi_3 \\ \xi_4 \end{pmatrix} \\
&= \begin{pmatrix}
0 & 0 & \frac{1}{ml^2} & 0 \\
0 & 0 & 0 & \frac{1}{ml^2} \\
-kl^2 - mgl & kl^2 & 0 & 0 \\
kl^2 & -kl^2 - mgl & 0 & 0
\end{pmatrix}
\begin{pmatrix} \xi_1 \\ \xi_2 \\ \xi_3 \\ \xi_4 \end{pmatrix} \\
&= \begin{pmatrix}
\frac{1}{ml^2}\,\xi_3 \\
\frac{1}{ml^2}\,\xi_4 \\
-kl^2 \xi_1 - mgl\,\xi_1 + kl^2 \xi_2 \\
-kl^2 \xi_2 - mgl\,\xi_2 + kl^2 \xi_1
\end{pmatrix}
\end{align*}

so that
\[
\begin{pmatrix} \dot\xi_1 \\ \dot\xi_2 \\ \dot\xi_3 \\ \dot\xi_4 \end{pmatrix}
= \begin{pmatrix}
\frac{1}{ml^2}\,\xi_3 \\
\frac{1}{ml^2}\,\xi_4 \\
-kl^2 \left( \xi_1 - \xi_2 \right) - mgl\,\xi_1 \\
kl^2 \left( \xi_1 - \xi_2 \right) - mgl\,\xi_2
\end{pmatrix}
\]
as expected.
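The following is my own quick numeric check (not from the notes), with arbitrary parameter values: it builds $\Omega$ and $H_{AB}$ for the coupled pendula, verifies that $\Omega_{AB}\,\partial H/\partial\xi_B$ reproduces Hamilton's equations written out by hand, and confirms $\Omega^2 = -1$.

import numpy as np

m, l, k, g = 1.0, 1.0, 0.5, 9.81               # assumed parameter values

I2 = np.eye(2)
Omega = np.block([[np.zeros((2, 2)), I2], [-I2, np.zeros((2, 2))]])

# quadratic-form matrix H_AB with xi = (theta1, theta2, p1, p2)
H_AB = np.zeros((4, 4))
H_AB[:2, :2] = [[k*l**2 + m*g*l, -k*l**2], [-k*l**2, k*l**2 + m*g*l]]
H_AB[2:, 2:] = I2 / (m * l**2)

xi = np.array([0.1, -0.05, 0.2, 0.0])           # an arbitrary phase-space point
xidot = Omega @ (H_AB @ xi)                     # xi_dot = Omega dH/dxi, with dH/dxi = H_AB xi

th1, th2, p1, p2 = xi
by_hand = np.array([p1/(m*l**2), p2/(m*l**2),
                    -k*l**2*(th1 - th2) - m*g*l*th1,
                     k*l**2*(th1 - th2) - m*g*l*th2])

print(np.allclose(xidot, by_hand), np.array_equal(Omega @ Omega, -np.eye(4)))  # True True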

3.1 Properties of the symplectic form

We note a number of important properties of the symplectic form. First, it is antisymmetric,

\[
\Omega^t = -\Omega, \qquad \Omega_{AB} = -\Omega_{BA}
\]

and it squares to minus the 2n-dimensional identity,

\begin{align*}
\Omega^2 &= \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \\
&= \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix} \\
&= -\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = -1
\end{align*}

We also have

\[
\Omega^t = \Omega^{-1}
\]

since $\Omega^t = -\Omega$, and therefore $\Omega\,\Omega^t = \Omega\left( -\Omega \right) = -\Omega^2 = 1$. Since all components of $\Omega_{AB}$ are constant, it is also true that
\[
\partial_A \Omega_{BC} = \frac{\partial}{\partial \xi_A}\Omega_{BC} = 0
\]
This last condition does not hold in every basis, however. The defining properties of the symplectic form, necessary and sufficient to guarantee that it has the properties we require for Hamiltonian mechanics, are that it be a $2n \times 2n$ matrix satisfying two conditions at each point of phase space:

1. $\Omega^2 = -1$

2. $\partial_A \Omega_{BC} + \partial_B \Omega_{CA} + \partial_C \Omega_{AB} = 0$

The first of these is enough for there to exist a change of basis so that $\Omega_{AB} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$ at any given point, while the vanishing combination of derivatives ensures that this may be done at every point of phase space.

3.2 Conservation and cyclic coordinates

From the relationship between the Lagrangian and the Hamiltonian,

\[
H = p_i \dot x_i - L
\]

we see that
\[
\frac{\partial H}{\partial x_i} = -\frac{\partial L}{\partial x_i}
\]

If the coordinate $x_i$ is cyclic, $\frac{\partial L}{\partial x_i} = 0$, then the corresponding Hamilton equation reads
\[
\dot p_i = -\frac{\partial H}{\partial x_i} = 0
\]

and the conjugate momentum
\[
p_i = \frac{\partial L}{\partial \dot x_i}
\]
is conserved, so the relationship between cyclic coordinates and conserved quantities still holds. Since the Lagrangian depends only on $x_i$ and $\dot x_i$, and not on $p_i$, we also have a corresponding statement about the momenta. Suppose some momentum, $p_i$, is cyclic in the Hamiltonian,
\[
\frac{\partial H}{\partial p_i} = 0
\]
Then from either Hamilton's equations or from the relationship between the Hamiltonian and the Lagrangian, we immediately have
\[
\dot x_i = 0
\]

so that the coordinate $x_i$ is a constant of the motion. Suppose we have a cyclic coordinate, say $x_n$. Then the conserved momentum takes its initial value, $p_{n0}$, and the Hamiltonian is
\[
H = H\left( x_1, \ldots, x_{n-1};\, p_1, \ldots, p_{n-1}, p_{n0} \right)
\]
which is immediately a function of only the remaining $n-1$ coordinate-momentum pairs. This is simpler than the Lagrangian case, where constancy of $p_n$ makes no immediate simplification of the Lagrangian.

Example 1: As a simple example, consider the 2-dimensional Kepler problem, with Lagrangian
\[
L = \frac{1}{2} m \left( \dot r^2 + r^2 \dot\theta^2 \right) + \frac{GMm}{r}
\]
The coordinate $\theta$ is cyclic and therefore
\[
l = \frac{\partial L}{\partial \dot\theta} = m r^2 \dot\theta
\]
is conserved, but this quantity does not explicitly occur in the Lagrangian. However, the Hamiltonian is

\[
H = \frac{1}{2m}\left( p_r^2 + \frac{p_\theta^2}{r^2} \right) - \frac{GMm}{r}
\]

so $p_\theta = l$ is constant and we immediately have

\[
H = \frac{1}{2m}\left( p_r^2 + \frac{l^2}{r^2} \right) - \frac{GMm}{r}
\]
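As a small numerical illustration (my own addition, with made-up values for $G$, $M$, and $m$), integrating the full two-degree-of-freedom Hamilton equations alongside the reduced radial system with $p_\theta$ replaced by the constant $l$ gives the same radial motion, with $p_\theta$ fixed along the flow.

import numpy as np

G, M, m = 1.0, 1.0, 1.0                        # assumed parameter values
l = 1.2                                        # conserved angular momentum p_theta

def rk4(f, s, dt, steps):
    for _ in range(steps):
        k1 = f(s); k2 = f(s + 0.5*dt*k1)
        k3 = f(s + 0.5*dt*k2); k4 = f(s + dt*k3)
        s = s + dt * (k1 + 2*k2 + 2*k3 + k4) / 6
    return s

# full system: state = (r, theta, p_r, p_theta)
full = lambda s: np.array([s[2]/m, s[3]/(m*s[0]**2),
                           s[3]**2/(m*s[0]**3) - G*M*m/s[0]**2, 0.0])
# reduced radial system with p_theta -> l: state = (r, p_r)
reduced = lambda s: np.array([s[1]/m, l**2/(m*s[0]**3) - G*M*m/s[0]**2])

sf = rk4(full, np.array([1.0, 0.0, 0.1, l]), 1e-3, 5000)
sr = rk4(reduced, np.array([1.0, 0.1]), 1e-3, 5000)
print(np.isclose(sf[0], sr[0]), np.isclose(sf[3], l))   # True True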

Example 2 (problem 21a): Consider a flywheel of mass $M$ and radius $a$, with its center held fixed. A rod of length $a$ is attached to the perimeter, with its other end constrained to lie on a horizontal line through the center of the flywheel; that end is attached to the massless rod of a simple pendulum of length $l$ and mass $m$. Find the Hamiltonian. First, find $L$,
\[
T = \frac{1}{2} m v^2 + \frac{1}{2} I \dot\varphi^2
\]

where $\dot\varphi(t)$ is the angular velocity of the flywheel and
\begin{align*}
I &= \frac{M}{\pi a^2 d} \int \left( x^2 + y^2 + z^2 - z^2 \right) dx\, dy\, dz \\
&= \frac{M}{\pi a^2} \int \left( x^2 + y^2 \right) dx\, dy \\
&= \frac{M}{\pi a^2} \int \rho^3\, d\rho\, d\varphi \\
&= \frac{2\pi M}{\pi a^2} \frac{a^4}{4} \\
&= \frac{1}{2} M a^2
\end{align*}
while the velocity of the pendulum bob is the combination of the swinging, $l\dot\theta$, and the oscillatory motion of the suspension point, located at $x = 2a\cos\varphi$. With the position of $m$ given by
\begin{align*}
x &= 2a\cos\varphi + l\sin\theta \\
y &= l\cos\theta
\end{align*}
the velocity has components
\begin{align*}
\dot x &= -2a\dot\varphi\sin\varphi + l\dot\theta\cos\theta \\
\dot y &= -l\dot\theta\sin\theta
\end{align*}
Therefore,
\begin{align*}
T &= \frac{1}{2} m v^2 + \frac{1}{2} I \dot\varphi^2 \\
&= \frac{1}{2} m \left( \dot x^2 + \dot y^2 \right) + \frac{1}{4} M a^2 \dot\varphi^2 \\
&= \frac{1}{2} m \left( 4a^2\dot\varphi^2\sin^2\varphi - 4al\,\dot\varphi\dot\theta\sin\varphi\cos\theta + l^2\dot\theta^2 \right) + \frac{1}{4} M a^2 \dot\varphi^2
\end{align*}
and the potential is simply
\[
V = -mgl\cos\theta
\]
up to an arbitrary constant. Therefore,
\[
L = \frac{1}{2} m \left( 4a^2\dot\varphi^2\sin^2\varphi - 4al\,\dot\varphi\dot\theta\sin\varphi\cos\theta + l^2\dot\theta^2 \right) + \frac{1}{4} M a^2 \dot\varphi^2 + mgl\cos\theta
\]
To find the Hamiltonian, we first find the conjugate momenta,
\begin{align*}
p_\theta &= \frac{\partial L}{\partial \dot\theta} = m l^2 \dot\theta - 2mal\,\dot\varphi\sin\varphi\cos\theta \\
p_\varphi &= \frac{\partial L}{\partial \dot\varphi} = 4ma^2\dot\varphi\sin^2\varphi - 2mal\,\dot\theta\sin\varphi\cos\theta + \frac{1}{2} M a^2 \dot\varphi
\end{align*}
For small swings of the pendulum the factor of $\cos\theta$ in the kinetic cross terms may be set to one, so that $p_\theta \approx ml^2\dot\theta - 2mal\,\dot\varphi\sin\varphi$ and $p_\varphi \approx 4ma^2\dot\varphi\sin^2\varphi - 2mal\,\dot\theta\sin\varphi + \frac{1}{2}Ma^2\dot\varphi$. Next, find the Hamiltonian in terms of both velocities and momenta,
\begin{align*}
H &= \left( ml^2\dot\theta - 2mal\,\dot\varphi\sin\varphi \right)\dot\theta + \left( 4ma^2\dot\varphi\sin^2\varphi - 2mal\,\dot\theta\sin\varphi + \frac{1}{2}Ma^2\dot\varphi \right)\dot\varphi \\
&\quad - \frac{1}{2} m \left( 4a^2\dot\varphi^2\sin^2\varphi - 4al\,\dot\varphi\dot\theta\sin\varphi + l^2\dot\theta^2 \right) - \frac{1}{4} M a^2 \dot\varphi^2 - mgl\cos\theta \\
&= \frac{1}{2} m l^2 \dot\theta^2 - 2mal\,\dot\varphi\dot\theta\sin\varphi + 2ma^2\dot\varphi^2\sin^2\varphi + \frac{1}{4} M a^2 \dot\varphi^2 - mgl\cos\theta
\end{align*}

Finally, solve for the velocities and eliminate them from $H$. From $p_\theta$,
\[
\dot\theta = \frac{p_\theta}{ml^2} + \frac{2a\dot\varphi}{l}\sin\varphi
\]
and substituting into the expression for $p_\varphi$,
\begin{align*}
p_\varphi + 2mal\,\dot\theta\sin\varphi &= 4ma^2\dot\varphi\sin^2\varphi + \frac{1}{2}Ma^2\dot\varphi \\
p_\varphi + 2mal\left( \frac{p_\theta}{ml^2} + \frac{2a\dot\varphi}{l}\sin\varphi \right)\sin\varphi &= 4ma^2\dot\varphi\sin^2\varphi + \frac{1}{2}Ma^2\dot\varphi \\
p_\varphi + \frac{2a}{l}\,p_\theta\sin\varphi &= 4ma^2\dot\varphi\sin^2\varphi + \frac{1}{2}Ma^2\dot\varphi - 4ma^2\dot\varphi\sin^2\varphi \\
&= \frac{1}{2}Ma^2\dot\varphi
\end{align*}
so that
\[
\dot\varphi = \frac{2}{Ma^2}\,p_\varphi + \frac{4\sin\varphi}{Mal}\,p_\theta
\]
and then
\begin{align*}
\dot\theta &= \frac{p_\theta}{ml^2} + \frac{2a}{l}\,\dot\varphi\sin\varphi \\
&= \frac{p_\theta}{ml^2} + \frac{4\sin\varphi}{Mal}\,p_\varphi + \frac{8\sin^2\varphi}{Ml^2}\,p_\theta \\
&= \frac{1}{ml^2}\left( 1 + \frac{8m}{M}\sin^2\varphi \right)p_\theta + \frac{4\sin\varphi}{Mal}\,p_\varphi
\end{align*}
Since the kinetic energy is quadratic in the velocities, $T = \frac{1}{2}\left( p_\theta\dot\theta + p_\varphi\dot\varphi \right)$, so substituting the expressions above and collecting terms gives, finally,
\[
H = \frac{1}{2ml^2}\left( 1 + \frac{8m}{M}\sin^2\varphi \right)p_\theta^2 + \frac{4\sin\varphi}{Mal}\,p_\theta p_\varphi + \frac{1}{Ma^2}\,p_\varphi^2 - mgl\cos\theta
\]
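Because this inversion is easy to get wrong by hand, here is a sympy sketch (my own check, not part of the notes) that performs the same Legendre transformation automatically, using the $\cos\theta \approx 1$ form of the kinetic cross terms and comparing against the result above.

import sympy as sp

m, M, a, l, g = sp.symbols('m M a l g', positive=True)
th, ph, thd, phd, pth, pph = sp.symbols('theta phi thetadot phidot p_theta p_phi')

# Lagrangian with cos(theta) ~ 1 in the kinetic cross terms
L = (sp.Rational(1, 2)*m*(4*a**2*phd**2*sp.sin(ph)**2
                          - 4*a*l*phd*thd*sp.sin(ph) + l**2*thd**2)
     + sp.Rational(1, 4)*M*a**2*phd**2 + m*g*l*sp.cos(th))

# conjugate momenta, inverted (linearly) for the velocities
sol = sp.solve([sp.Eq(pth, sp.diff(L, thd)), sp.Eq(pph, sp.diff(L, phd))],
               [thd, phd], dict=True)[0]

H = sp.simplify((pth*thd + pph*phd - L).subs(sol))

H_expected = (1/(2*m*l**2)*(1 + 8*m*sp.sin(ph)**2/M)*pth**2
              + 4*sp.sin(ph)/(M*a*l)*pth*pph + pph**2/(M*a**2) - m*g*l*sp.cos(th))
print(sp.simplify(H - H_expected))   # 0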

4 Note

My computer has swallowed the next ten pages of notes. Instead of rewriting it all now, I am copying relevant sections of my Mechanics book. I'll try to fill in any missing details.

5 Phase space and the symplectic form

We now explore some of the properties of phase space and Hamilton’s equations.

One advantage of the Hamiltonian formulation is that there is now one equation for each initial condition. This gives the space of all $q$s and $p$s a uniqueness property that configuration space (the space spanned by the $q$s only) doesn't have. For example, consider a projectile launched from the origin. Knowing only this fact, we still don't know the path of the object – we need the initial velocity as well. As a result, many possible paths pass through each point of configuration space. By contrast, the initial point of a trajectory in phase space gives us both the initial position and the initial momentum. There can be only one path of the system that passes through that point.

Systems with any number of degrees of freedom may be handled in this way. If a system has $N$ degrees of freedom then its phase space is the $2N$-dimensional space of all possible values of both position and momentum. We define configuration space to be the space of all possible positions of the particles comprising the system, or the complete set of possible values of the degrees of freedom of the problem. Thus, configuration space is the $N$-dimensional space of all values of $q_i$. By momentum space, we mean the $N$-dimensional space of all possible values of all of the conjugate momenta. Hamilton's equations then consist of $2N$ first order differential equations for the motion in phase space.

We illustrate these points with the simple example of a one-dimensional simple harmonic oscillator. Let a mass, $m$, free to move in one direction, experience a Hooke's law restoring force, $F = -kx$. Solve Hamilton's equations and study the motion of the system in phase space. The Lagrangian for this system is
\begin{align*}
L &= T - V \\
&= \frac{1}{2} m \dot x^2 - \frac{1}{2} k x^2
\end{align*}
The conjugate momentum is just
\[
p = \frac{\partial L}{\partial \dot x} = m\dot x
\]
so the Hamiltonian is
\begin{align*}
H &= p\dot x - L \\
&= \frac{p^2}{m} - \frac{1}{2} m \dot x^2 + \frac{1}{2} k x^2 \\
&= \frac{p^2}{2m} + \frac{1}{2} k x^2
\end{align*}
Hamilton's equations are
\begin{align*}
\dot x &= \frac{\partial H}{\partial p} = \frac{p}{m} \\
\dot p &= -\frac{\partial H}{\partial x} = -kx \\
\frac{\partial H}{\partial t} &= -\frac{\partial L}{\partial t} = 0
\end{align*}
Note that Hamilton's equations are two first-order equations. From this point on the coupled linear equations
\begin{align*}
\dot p &= -kx \\
\dot x &= \frac{p}{m}
\end{align*}
may be solved in any of a variety of ways. Let's treat it as a matrix system,
\[
\frac{d}{dt} \begin{pmatrix} x \\ p \end{pmatrix}
= \begin{pmatrix} 0 & \frac{1}{m} \\ -k & 0 \end{pmatrix}
\begin{pmatrix} x \\ p \end{pmatrix} \tag{1}
\]
The matrix $M = \begin{pmatrix} 0 & \frac{1}{m} \\ -k & 0 \end{pmatrix}$ has eigenvalues $\pm i\omega$, where $\omega = \sqrt{\frac{k}{m}}$, and diagonalizes to
\[
\begin{pmatrix} -i\omega & 0 \\ 0 & i\omega \end{pmatrix} = A M A^{-1}
\]

where
\[
A = \frac{1}{2i\sqrt{km}} \begin{pmatrix} i\sqrt{km} & -1 \\ i\sqrt{km} & 1 \end{pmatrix}, \qquad
A^{-1} = \begin{pmatrix} 1 & 1 \\ -i\sqrt{km} & i\sqrt{km} \end{pmatrix}, \qquad
\omega = \sqrt{\frac{k}{m}}
\]
Therefore, multiplying eq. (1) on the left by $A$ and inserting $1 = A^{-1}A$,

\[
A \frac{d}{dt} \begin{pmatrix} x \\ p \end{pmatrix}
= A \begin{pmatrix} 0 & \frac{1}{m} \\ -k & 0 \end{pmatrix} A^{-1} A \begin{pmatrix} x \\ p \end{pmatrix} \tag{2}
\]
we get decoupled equations in the new variables:
\[
\begin{pmatrix} a \\ a^\dagger \end{pmatrix} = A \begin{pmatrix} x \\ p \end{pmatrix}
= \begin{pmatrix} \frac{1}{2}\left( x + \frac{ip}{\sqrt{km}} \right) \\[4pt] \frac{1}{2}\left( x - \frac{ip}{\sqrt{km}} \right) \end{pmatrix} \tag{3}
\]

The decoupled equations are
\[
\frac{d}{dt} \begin{pmatrix} a \\ a^\dagger \end{pmatrix}
= \begin{pmatrix} -i\omega & 0 \\ 0 & i\omega \end{pmatrix}
\begin{pmatrix} a \\ a^\dagger \end{pmatrix} \tag{4}
\]
or simply

\begin{align*}
\dot a &= -i\omega a \\
\dot a^\dagger &= i\omega a^\dagger
\end{align*}

with solutions

\begin{align*}
a &= a_0 e^{-i\omega t} \\
a^\dagger &= a_0^\dagger e^{i\omega t}
\end{align*}
The solutions for $x$ and $p$ may be written as
\begin{align*}
x &= x_0 \cos\omega t + \frac{p_0}{m\omega} \sin\omega t \\
p &= -m\omega x_0 \sin\omega t + p_0 \cos\omega t
\end{align*}

Notice that once we specify the initial point in phase space, (x0, p0) , the entire solution is determined. This solution gives a parameterized curve in phase space. To see what curve it is, note that

\begin{align*}
\frac{m^2\omega^2 x^2}{2mE} + \frac{p^2}{2mE}
&= \frac{m^2\omega^2 x^2}{p_0^2 + m^2\omega^2 x_0^2} + \frac{p^2}{p_0^2 + m^2\omega^2 x_0^2} \\
&= \frac{m^2\omega^2}{p_0^2 + m^2\omega^2 x_0^2}\left( x_0\cos\omega t + \frac{p_0}{m\omega}\sin\omega t \right)^2
+ \frac{1}{p_0^2 + m^2\omega^2 x_0^2}\left( -m\omega x_0 \sin\omega t + p_0\cos\omega t \right)^2 \\
&= \frac{m^2\omega^2 x_0^2}{p_0^2 + m^2\omega^2 x_0^2} + \frac{p_0^2}{p_0^2 + m^2\omega^2 x_0^2} \\
&= 1
\end{align*}

or
\[
m^2\omega^2 x^2 + p^2 = 2mE
\]
This describes an ellipse in the $xp$ plane. The larger the energy, the larger the ellipse, so the possible motions of the system give a set of nested, non-intersecting ellipses. Clearly, every point of the $xp$ plane lies on exactly one ellipse.

The phase space description of classical systems is equivalent to the configuration space solutions and is often easier to interpret, because more information is displayed at once. The price we pay for this is the doubled dimension – paths rapidly become difficult to plot. To offset this problem, we can use Poincaré sections – projections of the phase space plot onto subspaces that cut across the trajectories. Sometimes the patterns that occur on Poincaré sections show that the motion is confined to specific regions of phase space, even when the motion never repeats itself. These techniques allow us to study systems that are chaotic, meaning that the phase space paths through nearby points diverge rapidly.
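A short numerical illustration (my own, with assumed values for $m$ and $k$): evaluating $m^2\omega^2 x^2 + p^2$ along the analytic solution confirms that the trajectory stays on the single ellipse fixed by the initial energy.

import numpy as np

m, k = 1.0, 4.0                          # assumed parameter values
omega = np.sqrt(k / m)
x0, p0 = 0.3, 0.5                        # arbitrary initial point in phase space
E = p0**2 / (2 * m) + 0.5 * k * x0**2

t = np.linspace(0.0, 10.0, 500)
x = x0 * np.cos(omega * t) + (p0 / (m * omega)) * np.sin(omega * t)
p = -m * omega * x0 * np.sin(omega * t) + p0 * np.cos(omega * t)

ellipse = m**2 * omega**2 * x**2 + p**2
print(np.allclose(ellipse, 2 * m * E))   # True: the orbit lies on one ellipse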

Now consider the general case of $N$ degrees of freedom. Let
\[
\xi^A = \left( q^i, p_j \right)
\]

where A = 1,..., 2N. Then the 2N variables ξA provide a set of coordinates for phase space. We would like to write Hamilton’s equations in terms of these, thereby treating all 2N directions on an equal footing. In terms of ξA, we have

\[
\frac{d\xi^A}{dt} = \begin{pmatrix} \dot q^i \\ \dot p_j \end{pmatrix}
= \begin{pmatrix} \frac{\partial H}{\partial p_i} \\[4pt] -\frac{\partial H}{\partial q^j} \end{pmatrix}
= \Omega^{AB} \frac{\partial H}{\partial \xi^B}
\]

where the presence of ΩAB in the last step takes care of the difference in signs on the right. Here ΩAB is just the inverse of the symplectic form found from the curl of the dilatation, given by

\[
\Omega^{AB} = \begin{pmatrix} 0 & \delta^i_j \\ -\delta^j_i & 0 \end{pmatrix}
\]
Its occurrence in Hamilton's equations is an indication of its central importance in Hamiltonian mechanics. We may now write Hamilton's equations as

\[
\frac{d\xi^A}{dt} = \Omega^{AB} \frac{\partial H}{\partial \xi^B} \tag{5}
\]
Consider what happens to Hamilton's equations if we want to change to a new set of phase space coordinates, $\chi^A = \chi^A(\xi)$. Let the inverse transformation be $\xi^A(\chi)$. The time derivatives become

\[
\frac{d\xi^A}{dt} = \frac{\partial \xi^A}{\partial \chi^B} \frac{d\chi^B}{dt}
\]
while the right side becomes
\[
\Omega^{AB} \frac{\partial H}{\partial \xi^B} = \Omega^{AB} \frac{\partial \chi^C}{\partial \xi^B} \frac{\partial H}{\partial \chi^C}
\]
Equating these expressions,
\[
\frac{\partial \xi^A}{\partial \chi^B} \frac{d\chi^B}{dt} = \Omega^{AB} \frac{\partial \chi^D}{\partial \xi^B} \frac{\partial H}{\partial \chi^D}
\]

We multiply by the Jacobian matrix, $\frac{\partial \chi^C}{\partial \xi^A}$, to get
\begin{align*}
\frac{\partial \chi^C}{\partial \xi^A} \frac{\partial \xi^A}{\partial \chi^B} \frac{d\chi^B}{dt}
&= \frac{\partial \chi^C}{\partial \xi^A} \Omega^{AB} \frac{\partial \chi^D}{\partial \xi^B} \frac{\partial H}{\partial \chi^D} \\
\delta^C_B \frac{d\chi^B}{dt}
&= \frac{\partial \chi^C}{\partial \xi^A} \frac{\partial \chi^D}{\partial \xi^B} \Omega^{AB} \frac{\partial H}{\partial \chi^D}
\end{align*}
and finally
\[
\frac{d\chi^C}{dt} = \frac{\partial \chi^C}{\partial \xi^A} \frac{\partial \chi^D}{\partial \xi^B} \Omega^{AB} \frac{\partial H}{\partial \chi^D}
\]
Defining the symplectic form in the new coordinate system,
\[
\tilde\Omega^{CD} \equiv \frac{\partial \chi^C}{\partial \xi^A} \frac{\partial \chi^D}{\partial \xi^B} \Omega^{AB}
\]
we see that Hamilton's equations are entirely the same if the transformation leaves the symplectic form invariant,
\[
\tilde\Omega^{CD} = \Omega^{CD}
\]
Any linear transformation $M^A{}_B$ leaving the symplectic form invariant,
\[
\Omega^{AB} \equiv M^A{}_C M^B{}_D \Omega^{CD}
\]
is called a symplectic transformation. Coordinate transformations which are symplectic transformations at each point are called canonical. Therefore those functions $\chi^A(\xi)$ satisfying
\[
\frac{\partial \chi^C}{\partial \xi^A} \frac{\partial \chi^D}{\partial \xi^B} \Omega^{AB} \equiv \Omega^{CD}
\]
are canonical transformations. Canonical transformations preserve Hamilton's equations.

5.1 Poisson brackets

We may also write Hamilton's equations in terms of Poisson brackets. Recall that the Poisson bracket of any two dynamical variables $f$ and $g$ is given by
\[
\{f, g\} = \Omega^{AB} \frac{\partial f}{\partial \xi^A} \frac{\partial g}{\partial \xi^B}
\]
The importance of this product is that it too is preserved by canonical transformations. We see this as follows. Let $\xi^A$ be any set of phase space coordinates in which Hamilton's equations take the form of eq. (5), and let $f$ and $g$ be any two dynamical variables, that is, functions of these phase space coordinates, $\xi^A$. The Poisson bracket of $f$ and $g$ is given above. In a different set of coordinates, $\chi^A(\xi)$, we have
\begin{align*}
\{f, g\}' &= \Omega^{AB} \frac{\partial f}{\partial \chi^A} \frac{\partial g}{\partial \chi^B} \\
&= \Omega^{AB} \left( \frac{\partial \xi^C}{\partial \chi^A} \frac{\partial f}{\partial \xi^C} \right) \left( \frac{\partial \xi^D}{\partial \chi^B} \frac{\partial g}{\partial \xi^D} \right) \\
&= \left( \Omega^{AB} \frac{\partial \xi^C}{\partial \chi^A} \frac{\partial \xi^D}{\partial \chi^B} \right) \frac{\partial f}{\partial \xi^C} \frac{\partial g}{\partial \xi^D}
\end{align*}
Therefore, if the coordinate transformation is canonical, so that
\[
\frac{\partial \xi^C}{\partial \chi^A} \frac{\partial \xi^D}{\partial \chi^B} \Omega^{AB} = \Omega^{CD}
\]

then we have
\[
\{f, g\}' = \Omega^{CD} \frac{\partial f}{\partial \xi^C} \frac{\partial g}{\partial \xi^D} = \{f, g\}
\]
and the Poisson bracket is unchanged. We conclude that canonical transformations preserve all Poisson brackets.

An important special case of the Poisson bracket occurs when one of the functions is the Hamiltonian. In that case, we have
\begin{align*}
\{f, H\} &= \Omega^{AB} \frac{\partial f}{\partial \xi^A} \frac{\partial H}{\partial \xi^B} \\
&= \frac{\partial f}{\partial x^i} \frac{\partial H}{\partial p_i} - \frac{\partial f}{\partial p_i} \frac{\partial H}{\partial x^i} \\
&= \frac{\partial f}{\partial x^i} \frac{dx^i}{dt} + \frac{\partial f}{\partial p_i} \frac{dp_i}{dt} \\
&= \frac{df}{dt} - \frac{\partial f}{\partial t}
\end{align*}
or simply,
\[
\frac{df}{dt} = \{f, H\} + \frac{\partial f}{\partial t}
\]
This shows that as the system evolves classically, the total time rate of change of any dynamical variable is the sum of its Poisson bracket with the Hamiltonian and its partial time derivative. If a dynamical variable has no explicit time dependence, then $\frac{\partial f}{\partial t} = 0$ and the total time derivative is just the Poisson bracket with the Hamiltonian.

The coordinates now provide a special case. Since neither $x^i$ nor $p_i$ has any explicit time dependence, we have
\begin{align}
\frac{dx^i}{dt} &= \left\{ x^i, H \right\} \nonumber \\
\frac{dp_i}{dt} &= \left\{ p_i, H \right\} \tag{6}
\end{align}
and we can check this directly:
\begin{align*}
\frac{dx^i}{dt} = \left\{ x^i, H \right\}
&= \sum_{j=1}^{N} \left( \frac{\partial x^i}{\partial x^j} \frac{\partial H}{\partial p_j} - \frac{\partial x^i}{\partial p_j} \frac{\partial H}{\partial x^j} \right) \\
&= \sum_{j=1}^{N} \delta_{ij} \frac{\partial H}{\partial p_j} \\
&= \frac{\partial H}{\partial p_i}
\end{align*}
and
\begin{align*}
\frac{dp_i}{dt} = \left\{ p_i, H \right\}
&= \sum_{j=1}^{N} \left( \frac{\partial p_i}{\partial x^j} \frac{\partial H}{\partial p_j} - \frac{\partial p_i}{\partial p_j} \frac{\partial H}{\partial x^j} \right) \\
&= -\frac{\partial H}{\partial x^i}
\end{align*}
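A small sympy sketch (my own illustration, not from the notes) implementing the bracket $\{f, g\} = \sum_i \left( \frac{\partial f}{\partial x^i}\frac{\partial g}{\partial p_i} - \frac{\partial f}{\partial p_i}\frac{\partial g}{\partial x^i} \right)$ and checking, for the one-dimensional oscillator, that $\{x, H\} = p/m$, $\{p, H\} = -kx$ and $\{H, H\} = 0$:

import sympy as sp

x, p, m, k = sp.symbols('x p m k', positive=True)

def pb(f, g, coords=((x, p),)):
    # {f, g} = sum_i (df/dx_i dg/dp_i - df/dp_i dg/dx_i)
    return sum(sp.diff(f, q)*sp.diff(g, pj) - sp.diff(f, pj)*sp.diff(g, q)
               for q, pj in coords)

H = p**2/(2*m) + k*x**2/2

print(pb(x, H))                 # p/m    -> xdot = {x, H}
print(pb(p, H))                 # -k*x   -> pdot = {p, H}
print(sp.simplify(pb(H, H)))    # 0: H is conserved (no explicit time dependence)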

Notice that since the $q_i$ and $p_i$ are all independent, and do not depend explicitly on time,
\[
\frac{\partial q_i}{\partial p_j} = \frac{\partial p_i}{\partial q_j} = 0 = \frac{\partial q_i}{\partial t} = \frac{\partial p_i}{\partial t}
\]
Finally, we define the fundamental Poisson brackets. Suppose $x^i$ and $p_j$ are a set of coordinates on phase space such that Hamilton's equations hold in either the form of eqs. (6) or of eq. (5). Since they themselves are functions of $\left( x^m, p_n \right)$, they are dynamical variables and we may compute their Poisson brackets with one another. With $\xi^A = \left( x^m, p_n \right)$ we have

\begin{align*}
\left\{ x^i, x^j \right\}_\xi &= \Omega^{AB} \frac{\partial x^i}{\partial \xi^A} \frac{\partial x^j}{\partial \xi^B} \\
&= \sum_{m=1}^{N} \left( \frac{\partial x^i}{\partial x^m} \frac{\partial x^j}{\partial p_m} - \frac{\partial x^i}{\partial p_m} \frac{\partial x^j}{\partial x^m} \right) \\
&= 0
\end{align*}
for $x^i$ with $x^j$,

\begin{align*}
\left\{ x^i, p_j \right\}_\xi &= \Omega^{AB} \frac{\partial x^i}{\partial \xi^A} \frac{\partial p_j}{\partial \xi^B} \\
&= \sum_{m=1}^{N} \left( \frac{\partial x^i}{\partial x^m} \frac{\partial p_j}{\partial p_m} - \frac{\partial x^i}{\partial p_m} \frac{\partial p_j}{\partial x^m} \right) \\
&= \sum_{m=1}^{N} \delta^i_m \delta^m_j \\
&= \delta^i_j
\end{align*}

for $x^i$ with $p_j$, and finally
\begin{align*}
\left\{ p_i, p_j \right\}_\xi &= \Omega^{AB} \frac{\partial p_i}{\partial \xi^A} \frac{\partial p_j}{\partial \xi^B} \\
&= \sum_{m=1}^{N} \left( \frac{\partial p_i}{\partial x^m} \frac{\partial p_j}{\partial p_m} - \frac{\partial p_i}{\partial p_m} \frac{\partial p_j}{\partial x^m} \right) \\
&= 0
\end{align*}

for $p_i$ with $p_j$. The subscript $\xi$ on the bracket indicates that the partial derivatives are taken with respect to the coordinates $\xi^A = \left( x^i, p_j \right)$. We summarize these relations as

\[
\left\{ \xi^A, \xi^B \right\}_\xi = \Omega^{AB}
\]

We summarize the results of this subsection with a theorem: Let the coordinates ξA be canonical. Then a transformation χA (ξ) is canonical if and only if it satisfies the fundamental bracket relation

\[
\left\{ \chi^A, \chi^B \right\}_\xi = \Omega^{AB}
\]
For proof, note that the bracket on the left is defined by

\[
\left\{ \chi^A, \chi^B \right\}_\xi = \Omega^{CD} \frac{\partial \chi^A}{\partial \xi^C} \frac{\partial \chi^B}{\partial \xi^D}
\]

so in order for $\chi^A$ to satisfy the canonical bracket we must have

\[
\frac{\partial \chi^A}{\partial \xi^C} \frac{\partial \chi^B}{\partial \xi^D} \Omega^{CD} = \Omega^{AB} \tag{7}
\]

which is just the condition shown above for a coordinate transformation to be canonical. Conversely, suppose the transformation $\chi^A(\xi)$ is canonical and $\left\{ \xi^A, \xi^B \right\}_\xi = \Omega^{AB}$. Then eq. (7) holds and we have
\[
\left\{ \chi^A, \chi^B \right\}_\xi = \Omega^{CD} \frac{\partial \chi^A}{\partial \xi^C} \frac{\partial \chi^B}{\partial \xi^D} = \Omega^{AB}
\]
so $\chi^A$ satisfies the fundamental bracket relation.

In summary, each of the following statements is equivalent:

1. $\chi^A(\xi)$ is a canonical transformation.

2. $\chi^A(\xi)$ is a coordinate transformation of phase space that preserves Hamilton's equations.

3. $\chi^A(\xi)$ preserves the symplectic form, according to
\[
\frac{\partial \xi^C}{\partial \chi^A} \frac{\partial \xi^D}{\partial \chi^B} \Omega^{AB} = \Omega^{CD}
\]

4. $\chi^A(\xi)$ satisfies the fundamental bracket relations
\[
\left\{ \chi^A, \chi^B \right\}_\xi = \Omega^{AB}
\]

These bracket relations represent a set of integrability conditions that must be satisfied by any new set of canonical coordinates. When we formulate the problem of canonical transformations in these terms, it is not obvious what functions $q^i\left( x^j, p_j \right)$ and $\pi_i\left( x^j, p_j \right)$ will be allowed. Fortunately there is a simple procedure for generating canonical transformations, which we develop in the next section. We end this section with three examples of canonical transformations.

5.1.1 Example 1: Coordinate transformations

Let the new configuration space variables, $q^i$, be arbitrary functions of the spatial coordinates:
\[
q^i = q^i\left( x^j \right)
\]

and let $\pi_j$ be the momentum variables corresponding to $q^i$. Then $\left( q^i, \pi_j \right)$ satisfy the fundamental Poisson bracket relations iff:
\begin{align*}
\left\{ q^i, q^j \right\}_{x,p} &= 0 \\
\left\{ q^i, \pi_j \right\}_{x,p} &= \delta^i_j \\
\left\{ \pi_i, \pi_j \right\}_{x,p} &= 0
\end{align*}
Check each:
\begin{align*}
\left\{ q^i, q^j \right\}_{x,p} &= \sum_{m=1}^{N} \left( \frac{\partial q^i}{\partial x^m} \frac{\partial q^j}{\partial p_m} - \frac{\partial q^i}{\partial p_m} \frac{\partial q^j}{\partial x^m} \right) \\
&= 0
\end{align*}

since $\frac{\partial q^j}{\partial p_m} = 0$. For the second bracket,
\begin{align*}
\delta^i_j = \left\{ q^i, \pi_j \right\}_{x,p}
&= \sum_{m=1}^{N} \left( \frac{\partial q^i}{\partial x^m} \frac{\partial \pi_j}{\partial p_m} - \frac{\partial q^i}{\partial p_m} \frac{\partial \pi_j}{\partial x^m} \right) \\
&= \sum_{m=1}^{N} \frac{\partial q^i}{\partial x^m} \frac{\partial \pi_j}{\partial p_m}
\end{align*}

Since $q^i$ is independent of $p_m$, we can satisfy this only if
\[
\frac{\partial \pi_j}{\partial p_m} = \frac{\partial x^m}{\partial q^j}
\]
Integrating gives
\[
\pi_j = \frac{\partial x^n}{\partial q^j}\, p_n + c_j
\]

with cj an arbitrary constant. The presence of cj does not affect the value of the Poisson bracket. Choosing cj = 0, we compute the final bracket:

\begin{align*}
\left\{ \pi_i, \pi_j \right\}_{x,p}
&= \sum_{m=1}^{N} \left( \frac{\partial \pi_i}{\partial x^m} \frac{\partial \pi_j}{\partial p_m} - \frac{\partial \pi_i}{\partial p_m} \frac{\partial \pi_j}{\partial x^m} \right) \\
&= \sum_{m=1}^{N} \left( \frac{\partial^2 x^n}{\partial x^m \partial q^i}\, p_n\, \frac{\partial x^m}{\partial q^j} - \frac{\partial x^m}{\partial q^i} \frac{\partial^2 x^n}{\partial x^m \partial q^j}\, p_n \right) \\
&= \sum_{m=1}^{N} \left( \frac{\partial x^m}{\partial q^j} \frac{\partial}{\partial x^m} \frac{\partial x^n}{\partial q^i} - \frac{\partial x^m}{\partial q^i} \frac{\partial}{\partial x^m} \frac{\partial x^n}{\partial q^j} \right) p_n \\
&= \left( \frac{\partial}{\partial q^j} \frac{\partial x^n}{\partial q^i} - \frac{\partial}{\partial q^i} \frac{\partial x^n}{\partial q^j} \right) p_n \\
&= 0
\end{align*}

Therefore, the transformations

\begin{align*}
q^j &= q^j\left( x^i \right) \\
\pi_j &= \frac{\partial x^n}{\partial q^j}\, p_n + c_j
\end{align*}

constitute a canonical transformation for any functions $q^i(x)$. This means that the symmetry group of Hamilton's equations is at least as big as the symmetry group of the Euler-Lagrange equations.
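As an illustration (my own, not from the notes), the sympy sketch below takes the particular point transformation to plane polar coordinates, constructs $\pi_j = \frac{\partial x^n}{\partial q^j} p_n$ explicitly, and verifies the fundamental brackets with respect to the original coordinates.

import sympy as sp

x, y, px, py = sp.symbols('x y p_x p_y', real=True)
coords = ((x, px), (y, py))

def pb(f, g):
    # Poisson bracket with respect to (x, y, p_x, p_y)
    return sp.simplify(sum(sp.diff(f, q)*sp.diff(g, p) - sp.diff(f, p)*sp.diff(g, q)
                           for q, p in coords))

r, th = sp.sqrt(x**2 + y**2), sp.atan2(y, x)      # new coordinates q^i(x)
pi_r = (x*px + y*py) / sp.sqrt(x**2 + y**2)       # pi_j = (dx^n/dq^j) p_n
pi_th = x*py - y*px

print(pb(r, pi_r), pb(th, pi_th))                 # 1 1
print(pb(r, th), pb(r, pi_th), pb(pi_r, pi_th))   # 0 0 0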

5.1.2 Example 2: Interchange of x and p

The transformation

\begin{align*}
q^i &= p_i \\
\pi_i &= -x^i
\end{align*}

is canonical. We easily check the fundamental brackets:

\begin{align*}
\left\{ q^i, q^j \right\}_{x,p} &= \left\{ p_i, p_j \right\}_{x,p} = 0 \\
\left\{ q^i, \pi_j \right\}_{x,p} &= \left\{ p_i, -x^j \right\}_{x,p} \\
&= -\left\{ p_i, x^j \right\}_{x,p} \\
&= +\left\{ x^j, p_i \right\}_{x,p} \\
&= \delta^j_i \\
\left\{ \pi_i, \pi_j \right\}_{x,p} &= \left\{ -x^i, -x^j \right\}_{x,p} = 0
\end{align*}

Interchange of $x^i$ and $p_j$, with a sign, is therefore canonical. The Lagrangian framework does not include such a possibility, so Hamiltonian dynamics has a larger symmetry group than Lagrangian dynamics.
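Equivalently (my own quick numeric check, not from the notes), the constant Jacobian of this interchange satisfies the symplectic condition $M\,\Omega\,M^{t} = \Omega$:

import numpy as np

n = 2
Omega = np.block([[np.zeros((n, n)), np.eye(n)], [-np.eye(n), np.zeros((n, n))]])

# Jacobian of (q, pi) = (p, -x) with respect to (x, p)
M = np.block([[np.zeros((n, n)), np.eye(n)], [-np.eye(n), np.zeros((n, n))]])

print(np.array_equal(M @ Omega @ M.T, Omega))   # True: the interchange is canonical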

For our last example, we first show that the composition of two canonical transformations is also canonical. Let $\psi(\chi)$ and $\chi(\xi)$ both be canonical. Defining the composition transformation, $\psi(\xi) = \psi\left( \chi(\xi) \right)$, we compute

\begin{align*}
\Omega^{CD} \frac{\partial \psi^A}{\partial \xi^C} \frac{\partial \psi^B}{\partial \xi^D}
&= \Omega^{CD} \left( \frac{\partial \psi^A}{\partial \chi^E} \frac{\partial \chi^E}{\partial \xi^C} \right) \left( \frac{\partial \psi^B}{\partial \chi^F} \frac{\partial \chi^F}{\partial \xi^D} \right) \\
&= \left( \Omega^{CD} \frac{\partial \chi^E}{\partial \xi^C} \frac{\partial \chi^F}{\partial \xi^D} \right) \frac{\partial \psi^A}{\partial \chi^E} \frac{\partial \psi^B}{\partial \chi^F} \\
&= \Omega^{EF} \frac{\partial \psi^A}{\partial \chi^E} \frac{\partial \psi^B}{\partial \chi^F} \\
&= \Omega^{AB}
\end{align*}

so that ψ (χ (ξ)) is canonical.

5.1.3 Example 3: Momentum transformations

By the previous results, the composition of an arbitrary coordinate change with $x, p$ interchanges is canonical. Consider the effect of composing (a) an interchange, (b) a coordinate transformation, and (c) another interchange. For (a), let

\begin{align*}
q_1^i &= p_i \\
\pi^1_i &= -x^i
\end{align*}

Then for (b) we choose an arbitrary function of $q_1^i$:

\begin{align*}
Q^i &= Q^i\left( q_1^j \right) = Q^i\left( p_j \right) \\
P_i &= \frac{\partial q_1^n}{\partial Q^i}\,\pi^1_n = -\frac{\partial p_n}{\partial Q^i}\, x^n
\end{align*}

Finally, for (c), another interchange:

\begin{align*}
q^i &= P_i = -\frac{\partial p_n}{\partial Q^i}\, x^n \\
\pi_i &= -Q^i = -Q^i\left( p_j \right)
\end{align*}

This establishes that replacing the momenta by any $N$ independent functions of the momenta preserves Hamilton's equations.

5.2 Generating functions

There is a systematic approach to canonical transformations using generating functions. We will give a simple example of the technique. Given a system described by a Hamiltonian $H\left( x^i, p_j \right)$, we seek another Hamiltonian $H'\left( q^i, \pi_j \right)$ such that the equations of motion have the same form, namely

\begin{align*}
\frac{dx^i}{dt} &= \frac{\partial H}{\partial p_i} \\
\frac{dp_i}{dt} &= -\frac{\partial H}{\partial x^i}
\end{align*}

in the original system and
\begin{align*}
\frac{dq^i}{dt} &= \frac{\partial H'}{\partial \pi_i} \\
\frac{d\pi_i}{dt} &= -\frac{\partial H'}{\partial q^i}
\end{align*}
in the transformed variables. The principle of least action must hold for each pair:

\begin{align*}
S &= \int \left( p_i\, dx^i - H\, dt \right) \\
S' &= \int \left( \pi_i\, dq^i - H'\, dt \right)
\end{align*}
where $S$ and $S'$ differ by at most a constant. Correspondingly, the integrands may differ by the addition of a total differential, $df = \frac{df}{dt}dt$, since this will integrate to a surface term and therefore will not contribute to the variation. Notice that this corresponds exactly to a local dilatation, which produces a change

\[
W'_\alpha\, dx^\alpha = W_\alpha\, dx^\alpha - df = W_\alpha\, dx^\alpha - \frac{df}{dt}\, dt
\]
In general we may therefore write

\[
p_i\, dx^i - H\, dt = \pi_i\, dq^i - H'\, dt + df
\]
A convenient way to analyze the condition is to solve it for the differential $df$,

\[
df = p_i\, dx^i - \pi_i\, dq^i + \left( H' - H \right) dt
\]
For the differential of $f$ to take this form, it must be a function of $x^i$, $q^i$ and $t$, that is, $f = f\left( x^i, q^i, t \right)$. Therefore, the differential of $f$ is
\[
df = \frac{\partial f}{\partial x^i}\, dx^i + \frac{\partial f}{\partial q^i}\, dq^i + \frac{\partial f}{\partial t}\, dt
\]
Equating the two expressions for $df$, we match up terms to require
\begin{align}
p_i &= \frac{\partial f}{\partial x^i} \tag{8} \\
\pi_i &= -\frac{\partial f}{\partial q^i} \tag{9} \\
H' &= H + \frac{\partial f}{\partial t} \tag{10}
\end{align}
The first equation,
\[
p_i = \frac{\partial f\left( x^j, q^j, t \right)}{\partial x^i} \tag{11}
\]
gives $q^i$ implicitly in terms of the original variables, while the second determines $\pi_i$. Notice that we may pick any function $q^i = q^i\left( p_j, x^j, t \right)$. This choice fixes the form of $\pi_i$ by eq. (9), while eq. (10) gives the new Hamiltonian in terms of the old one. The function $f$ is the generating function of the transformation.
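For instance (my own illustration), the simple choice $f = \sum_i x^i q^i$ generates exactly the interchange of Example 2: eqs. (8) and (9) give $p_i = q^i$ and $\pi_i = -x^i$, with $H' = H$. A one-line sympy check:

import sympy as sp

x, q = sp.symbols('x q', real=True)
f = x * q                        # generating function f(x, q); no explicit t-dependence

p = sp.diff(f, x)                # eq. (8):  p  =  df/dx
pi = -sp.diff(f, q)              # eq. (9):  pi = -df/dq
print(p, pi)                     # q  -x   -> the interchange q = p, pi = -x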

6 General solution in Hamiltonian dynamics

We conclude with the crowning theorem of Hamiltonian dynamics: a proof that for any Hamiltonian dynamical system there exists a canonical transformation to a set of variables on phase space such that the paths of motion reduce to single points. Clearly, this theorem shows the power of canonical transformations! The theorem relies on describing solutions to the Hamilton-Jacobi equation, which we introduce first.

6.1 The Hamilton-Jacobi Equation

We have the following equations governing Hamilton's principal function:
\begin{align*}
\frac{\partial S}{\partial p_i} &= 0 \\
\frac{\partial S}{\partial x^i} &= p_i \\
\frac{\partial S}{\partial t} &= -H
\end{align*}

Since the Hamiltonian is a given function of the phase space coordinates and time, $H = H\left( x^i, p_i, t \right)$, we combine the last two equations:
\[
\frac{\partial S}{\partial t} = -H\left( x^i, p_i, t \right) = -H\left( x^i, \frac{\partial S}{\partial x^i}, t \right)
\]

This first order differential equation in $s + 1$ variables $\left( t, x^i;\ i = 1, \ldots, s \right)$ for the principal function $S$ is the Hamilton-Jacobi equation. Notice that the Hamilton-Jacobi equation has the same general form as the Schrödinger equation and is equally difficult to solve for all but special potentials. Nonetheless, we are guaranteed that a complete solution exists, and we will assume below that we can find it. Before proving our central theorem, we digress to examine the exact relationship between the Hamilton-Jacobi equation and the Schrödinger equation.

6.2 Quantum mechanics and the Hamilton-Jacobi equation

The Hamilton-Jacobi equation provides the most direct link between classical and quantum mechanics. There is considerable similarity between the Hamilton-Jacobi equation and the Schrödinger equation:
\begin{align*}
\frac{\partial S}{\partial t} &= -H\left( x^i, \frac{\partial S}{\partial x^i}, t \right) \\
i\hbar \frac{\partial \psi}{\partial t} &= H\left( \hat x^i, \hat p_i, t \right)\psi
\end{align*}
We make the relationship precise as follows. Suppose the Hamiltonian in each case is that of a single particle in a potential:
\[
H = \frac{p^2}{2m} + V(x)
\]
Write the quantum wave function as
\[
\psi = A e^{\frac{i}{\hbar}\varphi}
\]
The Schrödinger equation becomes

\begin{align*}
i\hbar \frac{\partial}{\partial t}\left( A e^{\frac{i}{\hbar}\varphi} \right)
&= -\frac{\hbar^2}{2m} \nabla^2 \left( A e^{\frac{i}{\hbar}\varphi} \right) + V A e^{\frac{i}{\hbar}\varphi} \\
i\hbar \frac{\partial A}{\partial t} e^{\frac{i}{\hbar}\varphi} - A e^{\frac{i}{\hbar}\varphi} \frac{\partial \varphi}{\partial t}
&= -\frac{\hbar^2}{2m} \nabla \cdot \left( e^{\frac{i}{\hbar}\varphi} \nabla A + \frac{i}{\hbar} A e^{\frac{i}{\hbar}\varphi} \nabla\varphi \right) + V A e^{\frac{i}{\hbar}\varphi} \\
&= -\frac{\hbar^2}{2m} e^{\frac{i}{\hbar}\varphi} \left( \frac{i}{\hbar} \nabla\varphi \cdot \nabla A + \nabla^2 A \right)
- \frac{\hbar^2}{2m} e^{\frac{i}{\hbar}\varphi} \left( \frac{i}{\hbar} \nabla A \cdot \nabla\varphi + \frac{i}{\hbar} A \nabla^2\varphi \right) \\
&\quad - \frac{\hbar^2}{2m} \left( \frac{i}{\hbar} \right)^2 e^{\frac{i}{\hbar}\varphi} \left( A\, \nabla\varphi \cdot \nabla\varphi \right)
+ V A e^{\frac{i}{\hbar}\varphi}
\end{align*}

Then cancelling the exponential,
\begin{align*}
i\hbar \frac{\partial A}{\partial t} - A \frac{\partial \varphi}{\partial t}
&= -\frac{i\hbar}{2m} \nabla\varphi \cdot \nabla A - \frac{\hbar^2}{2m} \nabla^2 A \\
&\quad - \frac{i\hbar}{2m} \nabla A \cdot \nabla\varphi - \frac{i\hbar}{2m} A \nabla^2\varphi \\
&\quad + \frac{1}{2m} A\, \nabla\varphi \cdot \nabla\varphi + V A
\end{align*}
Collecting by powers of $\hbar$,
\begin{align*}
\mathcal O\left( \hbar^0 \right) &: \quad -\frac{\partial \varphi}{\partial t} = \frac{1}{2m} \nabla\varphi \cdot \nabla\varphi + V \\
\mathcal O\left( \hbar^1 \right) &: \quad \frac{1}{A}\frac{\partial A}{\partial t} = -\frac{1}{2m}\left( \frac{2}{A} \nabla A \cdot \nabla\varphi + \nabla^2\varphi \right) \\
\mathcal O\left( \hbar^2 \right) &: \quad 0 = -\frac{\hbar^2}{2m} \nabla^2 A
\end{align*}
The zeroth order term is the Hamilton-Jacobi equation, with $\varphi = S$:
\begin{align*}
-\frac{\partial S}{\partial t} &= \frac{1}{2m} \nabla S \cdot \nabla S + V \\
&= \frac{1}{2m} p^2 + V(x)
\end{align*}
where $p = \nabla S$. Therefore, the Hamilton-Jacobi equation is the $\hbar \rightarrow 0$ limit of the Schrödinger equation.

6.3 Trivialization of the motion

We now seek a solution, in principle, to the complete mechanical problem. The solution is to find a canonical transformation that makes the motion trivial. Hamilton's principal function, the solution to the Hamilton-Jacobi equation, is the generating function of this canonical transformation. To begin, suppose we have a solution to the Hamilton-Jacobi equation of the form

\[
S = g\left( t, x_1, \ldots, x_s, \alpha_1, \ldots, \alpha_s \right) + A
\]
where the $\alpha_i$ and $A$ provide $s + 1$ constants describing the solution. Such a solution is called a complete integral of the equation, as opposed to a general integral, which depends on arbitrary functions. We will show below that a complete solution leads to a general solution. We use $S$ as a generating function. Our canonical transformation will take the variables $\left( x_i, p_i \right)$ to a new set of variables $\left( \beta^i, \alpha_i \right)$. Since $S$ depends on the old coordinates $x_i$ and the new momenta $\alpha_i$, we have the relations
\begin{align*}
p_i &= \frac{\partial S}{\partial x_i} \\
\beta_i &= \frac{\partial S}{\partial \alpha_i} \\
H' &= H + \frac{\partial S}{\partial t}
\end{align*}
Notice that the new Hamiltonian, $H'$, vanishes because the Hamilton-Jacobi equation is satisfied by $S$! With $H' = 0$, Hamilton's equations in the new canonical coordinates are simply
\begin{align*}
\frac{d\alpha_i}{dt} &= -\frac{\partial H'}{\partial \beta_i} = 0 \\
\frac{d\beta_i}{dt} &= \frac{\partial H'}{\partial \alpha_i} = 0
\end{align*}

with solutions

αi = const.

βi = const.

The system remains at the phase space point (αi, βi). To find the motion in the original coordinates as functions of time and the 2s constants of motion,

\[
x_i = x_i\left( t; \alpha_i, \beta_i \right)
\]
we can algebraically invert the $s$ equations

\[
\beta_i = \frac{\partial g\left( x_i, t, \alpha_i \right)}{\partial \alpha_i}
\]
The momenta may be found by differentiating the principal function,

\[
p_i = \frac{\partial S\left( x_i, t, \alpha_i \right)}{\partial x_i}
\]
Therefore, solving the Hamilton-Jacobi equation is the key to solving the full mechanical problem. Furthermore, we know that a solution exists because Hamilton's equations satisfy the integrability equation for $S$.

We note one further result. While we have made use of a complete integral to solve the mechanical problem, we may want a general integral of the Hamilton-Jacobi equation. The difference is that a complete integral of an equation in $s + 1$ variables depends on $s + 1$ constants, while a general integral depends on $s$ arbitrary functions. Fortunately, a complete integral of the equation can be used to construct a general integral, and there is no loss of generality in considering a complete integral. We see this as follows. A complete solution takes the form
\[
S = g\left( t, x_1, \ldots, x_s, \alpha_1, \ldots, \alpha_s \right) + A
\]

To find a general solution, think of the constant $A$ as a function of the other $s$ constants, $A\left( \alpha_1, \ldots, \alpha_s \right)$. Now replace each of the $\alpha_i$ by a function of the coordinates and time, $\alpha_i \rightarrow h_i\left( t, x_i \right)$. This makes $S$ depend on arbitrary functions, but we need to make sure it still solves the Hamilton-Jacobi equation. It will, provided the partials of $S$ with respect to the coordinates remain unchanged. In general, these partials are given by
\[
\frac{\partial S}{\partial x_i} = \left( \frac{\partial S}{\partial x_i} \right)_{h_i = \mathrm{const.}} + \left( \frac{\partial S}{\partial h_k} \right)_{x = \mathrm{const.}} \frac{\partial h_k}{\partial x_i}
\]
We therefore still have solutions provided
\[
\left( \frac{\partial S}{\partial h_k} \right)_{x = \mathrm{const.}} \frac{\partial h_k}{\partial x_i} = 0
\]

and since we want $h_k$ to be an arbitrary function of the coordinates, we demand
\[
\left( \frac{\partial S}{\partial h_k} \right)_{x = \mathrm{const.}} = 0
\]
Then
\[
\frac{\partial S}{\partial h_k} = \frac{\partial}{\partial h_k}\left( g\left( t, x_i, \alpha_i \right) + A\left( \alpha_i \right) \right) = 0
\]
and we have
\[
A\left( \alpha_1, \ldots, \alpha_s \right) = \mathrm{const.} - g
\]
This just makes $A$ into some specific function of $x_i$ and $t$. Since the partials with respect to the coordinates are the same, and we haven't changed the time dependence,
\[
S = g\left( t, x_1, \ldots, x_s, h_1, \ldots, h_s \right) + A\left( h_i \right)
\]
is a general solution to the Hamilton-Jacobi equation.

6.3.1 Example 1: Free particle

The simplest example is the case of a free particle, for which the Hamiltonian is

\[
H = \frac{p^2}{2m}
\]
and the Hamilton-Jacobi equation is
\[
\frac{\partial S}{\partial t} = -\frac{1}{2m}\left( S' \right)^2
\]
Let
\[
S = f(x) - Et
\]
Then $f(x)$ must satisfy
\[
\frac{df}{dx} = \sqrt{2mE}
\]
and therefore
\[
f(x) = \sqrt{2mE}\, x - c = \pi x - c
\]

where c is constant and we write the integration constant E in terms of the new (constant) momentum. Hamilton’s principal function is therefore

\[
S\left( x, \pi, t \right) = \pi x - \frac{\pi^2}{2m} t - c
\]
Then, for a generating function of this type we have
\begin{align*}
p &= \frac{\partial S}{\partial x} = \pi \\
q &= \frac{\partial S}{\partial \pi} = x - \frac{\pi}{m} t \\
H' &= H + \frac{\partial S}{\partial t} = H - E
\end{align*}
Because $E = H$, the new Hamiltonian, $H'$, is zero. This means that both $q$ and $\pi$ are constant. The solution for $x$ and $p$ follows immediately:
\begin{align*}
x &= q + \frac{\pi}{m} t \\
p &= \pi
\end{align*}

We see that the new canonical variables (q, π) are just the initial position and momentum of the motion, and therefore do determine the motion. The fact that knowing q and π is equivalent to knowing the full motion rests here on the fact that S generates motion along the classical path. In fact, given initial conditions (q, π), we can use Hamilton’s principal function as a generating function but treat π as the old momentum and x as the new coordinate to reverse the process above and generate x(t) and p.
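A short sympy check (my own addition): the principal function above satisfies the free-particle Hamilton-Jacobi equation identically, and inverting $q = \partial S/\partial\pi$ reproduces $x(t) = q + \pi t/m$.

import sympy as sp

x, t, m, piv, q, c = sp.symbols('x t m pi_ q c', positive=True)

S = piv*x - piv**2*t/(2*m) - c

hj = sp.diff(S, t) + sp.diff(S, x)**2/(2*m)       # dS/dt + (1/2m)(dS/dx)^2
print(sp.simplify(hj))                            # 0

x_of_t = sp.solve(sp.Eq(q, sp.diff(S, piv)), x)[0]
print(x_of_t)                                     # q + pi_*t/m (up to ordering)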

6.3.2 Example 2: Simple harmonic oscillator

For the simple harmonic oscillator, the Hamiltonian becomes

\[
H = \frac{p^2}{2m} + \frac{1}{2} k x^2
\]

and the Hamilton-Jacobi equation is
\[
\frac{\partial S}{\partial t} = -\left[ \frac{1}{2m}\left( S' \right)^2 + \frac{1}{2} k x^2 \right]
\]
Letting $S = f(x) - Et$ as before, $f(x)$ must satisfy
\[
\frac{df}{dx} = \sqrt{2m\left( E - \frac{1}{2} k x^2 \right)}
\]
and therefore
\begin{align*}
f(x) &= \int \sqrt{2m\left( E - \frac{1}{2} k x^2 \right)}\, dx \\
&= \int \sqrt{\pi^2 - mk x^2}\, dx
\end{align*}
where we have set $E = \frac{\pi^2}{2m}$. Now let $\sqrt{mk}\, x = \pi \sin y$. The integral is immediate:
\begin{align*}
f(x) &= \int \sqrt{\pi^2 - mk x^2}\, dx \\
&= \frac{\pi^2}{\sqrt{mk}} \int \cos^2 y\, dy \\
&= \frac{\pi^2}{2\sqrt{mk}} \left( y + \sin y \cos y \right)
\end{align*}
Hamilton's principal function is therefore
\begin{align*}
S\left( x, \pi, t \right) &= \frac{\pi^2}{2\sqrt{mk}} \left[ \sin^{-1}\left( \sqrt{mk}\,\frac{x}{\pi} \right) + \sqrt{mk}\,\frac{x}{\pi} \sqrt{1 - \frac{mk x^2}{\pi^2}} \right] - \frac{\pi^2}{2m} t - c \\
&= \frac{\pi^2}{2\sqrt{mk}} \sin^{-1}\left( \sqrt{mk}\,\frac{x}{\pi} \right) + \frac{x}{2}\sqrt{\pi^2 - mk x^2} - \frac{\pi^2}{2m} t - c
\end{align*}
and we may use it to generate the canonical change of variable. This time we have

\begin{align*}
p &= \frac{\partial S}{\partial x} \\
&= \frac{\pi}{2} \frac{1}{\sqrt{1 - \frac{mkx^2}{\pi^2}}} + \frac{1}{2}\sqrt{\pi^2 - mkx^2} + \frac{x}{2}\,\frac{-mkx}{\sqrt{\pi^2 - mkx^2}} \\
&= \frac{1}{\sqrt{\pi^2 - mkx^2}} \left( \frac{\pi^2}{2} + \frac{1}{2}\left( \pi^2 - mkx^2 \right) - \frac{1}{2} mkx^2 \right) \\
&= \sqrt{\pi^2 - mkx^2}
\end{align*}
\begin{align*}
q &= \frac{\partial S}{\partial \pi} \\
&= \frac{\pi}{\sqrt{mk}} \sin^{-1}\left( \sqrt{mk}\,\frac{x}{\pi} \right)
+ \frac{\pi^2}{2\sqrt{mk}} \frac{1}{\sqrt{1 - \frac{mkx^2}{\pi^2}}}\left( -\sqrt{mk}\,\frac{x}{\pi^2} \right)
+ \frac{x}{2}\,\frac{\pi}{\sqrt{\pi^2 - mkx^2}} - \frac{\pi}{m} t \\
&= \frac{\pi}{\sqrt{mk}} \sin^{-1}\left( \sqrt{mk}\,\frac{x}{\pi} \right) - \frac{\pi}{m} t
\end{align*}
\[
H' = H + \frac{\partial S}{\partial t} = H - E = 0
\]
The first equation relates $p$ to the energy and position, the second gives the new position coordinate $q$, and the third equation shows that the new Hamiltonian is zero. Hamilton's equations are trivial, so that $\pi$ and $q$ are constant, and we can invert the expression for $q$ to give the solution. Setting $\omega = \sqrt{\frac{k}{m}}$, the solution is

\begin{align*}
x(t) &= \frac{\pi}{m\omega} \sin\left( \frac{m\omega}{\pi} q + \omega t \right) \\
&= A \sin\left( \omega t + \phi \right)
\end{align*}

where

\begin{align*}
q &= A\phi \\
\pi &= Am\omega
\end{align*}

The new canonical coordinates therefore characterize the initial amplitude and phase of the oscillator.
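As a numerical sanity check (my own addition, with assumed values $m = 1$, $k = 4$), finite differences of the principal function above reproduce $p = \partial S/\partial x = \sqrt{\pi^2 - mkx^2}$ and satisfy the Hamilton-Jacobi equation $\partial S/\partial t + \frac{1}{2m}\left(\partial S/\partial x\right)^2 + \frac{1}{2}kx^2 = 0$.

import numpy as np

m, k = 1.0, 4.0                               # assumed parameter values
piv = 2.0                                     # the new (constant) momentum

def S(x, t):
    root = np.sqrt(m * k)
    return (piv**2 / (2 * root) * np.arcsin(root * x / piv)
            + 0.5 * x * np.sqrt(piv**2 - m * k * x**2)
            - piv**2 * t / (2 * m))

x, t, h = 0.3, 1.7, 1e-6                      # an arbitrary point with |x| < piv/sqrt(mk)
dSdx = (S(x + h, t) - S(x - h, t)) / (2 * h)
dSdt = (S(x, t + h) - S(x, t - h)) / (2 * h)

print(np.isclose(dSdx, np.sqrt(piv**2 - m * k * x**2)))                         # True
print(np.isclose(dSdt + dSdx**2 / (2 * m) + 0.5 * k * x**2, 0.0, atol=1e-6))    # True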

6.3.3 Example 3: One-dimensional particle motion

Now suppose a particle with one degree of freedom moves in a potential $U(x)$. Little is changed. The Hamiltonian becomes
\[
H = \frac{p^2}{2m} + U
\]
and the Hamilton-Jacobi equation is
\[
\frac{\partial S}{\partial t} = -\left[ \frac{1}{2m}\left( S' \right)^2 + U(x) \right]
\]
Letting $S = f(x) - Et$ as before, $f(x)$ must satisfy
\[
\frac{df}{dx} = \sqrt{2m\left( E - U(x) \right)}
\]
and therefore

\begin{align*}
f(x) &= \int \sqrt{2m\left( E - U(x) \right)}\, dx \\
&= \int \sqrt{\pi^2 - 2mU(x)}\, dx
\end{align*}

where we have set $E = \frac{\pi^2}{2m}$. Hamilton's principal function is therefore
\[
S\left( x, \pi, t \right) = \int \sqrt{\pi^2 - 2mU(x)}\, dx - \frac{\pi^2}{2m} t - c
\]
and we may use it to generate the canonical change of variable.

This time we have
\begin{align*}
p &= \frac{\partial S}{\partial x} = \sqrt{\pi^2 - 2mU(x)} \\
q &= \frac{\partial S}{\partial \pi} = \frac{\partial}{\partial \pi} \int_{x_0}^{x} \sqrt{\pi^2 - 2mU(x)}\, dx - \frac{\pi}{m} t \\
H' &= H + \frac{\partial S}{\partial t} = H - E = 0
\end{align*}
The first and third equations are as expected, while for $q$ we may interchange the order of differentiation and integration:
\begin{align*}
q &= \frac{\partial}{\partial \pi} \int \sqrt{\pi^2 - 2mU(x)}\, dx - \frac{\pi}{m} t \\
&= \int \frac{\partial}{\partial \pi} \sqrt{\pi^2 - 2mU(x)}\, dx - \frac{\pi}{m} t \\
&= \int \frac{\pi\, dx}{\sqrt{\pi^2 - 2mU(x)}} - \frac{\pi}{m} t
\end{align*}
To complete the problem, we need to know the potential. However, even without knowing $U(x)$ we can make sense of this result by combining the expression for $q$ above with our previous solution to the same problem. There, conservation of energy gives a first integral to Newton's second law,
\begin{align*}
E &= \frac{p^2}{2m} + U \\
&= \frac{1}{2} m \left( \frac{dx}{dt} \right)^2 + U
\end{align*}
so we arrive at the familiar quadrature
\[
t - t_0 = \int dt = \int_{x_0}^{x} \frac{m\, dx}{\sqrt{2m\left( E - U \right)}}
\]
Substituting into the expression for $q$,
\begin{align*}
q &= \int^{x} \frac{\pi\, dx}{\sqrt{\pi^2 - 2mU(x)}} - \frac{\pi}{m}\left( \int_{x_0}^{x} \frac{m\, dx}{\sqrt{2m\left( E - U \right)}} + t_0 \right) \\
&= \int^{x} \frac{\pi\, dx}{\sqrt{\pi^2 - 2mU(x)}} - \int_{x_0}^{x} \frac{\pi\, dx}{\sqrt{\pi^2 - 2mU(x)}} - \frac{\pi}{m} t_0 \\
&= \int^{x_0} \frac{\pi\, dx}{\sqrt{\pi^2 - 2mU(x)}} - \frac{\pi}{m} t_0
\end{align*}

We once again find that $q$ is a constant characterizing the initial configuration. Since $t_0$ is the time at which the position is $x_0$ and the momentum is $p_0$, we have the following relations:
\[
\frac{p^2}{2m} + U(x) = \frac{p_0^2}{2m} + U(x_0) = E = \mathrm{const.}
\]
and
\[
t - t_0 = \int_{x_0}^{x} \frac{dx}{\sqrt{\frac{2}{m}\left( E - U \right)}}
\]
which we may rewrite as
\[
t - \int^{x} \frac{dx}{\sqrt{\frac{2}{m}\left( E - U \right)}} = t_0 - \int^{x_0} \frac{dx}{\sqrt{\frac{2}{m}\left( E - U \right)}} = -\frac{m}{\pi}\, q = \mathrm{const.}
\]
