Contents

Contents i

List of Tables iii

List of Figures iii

4 Statistical Ensembles 1

4.1 References ...... 1

4.2 (µCE) ...... 2

4.2.1 The microcanonical distribution function ...... 2

4.2.2 Densityofstates ...... 3

4.2.3 Arbitrariness in the definition of S(E) ...... 6

4.2.4 Ultra-relativistic ideal gas ...... 6

4.2.5 Discrete systems ...... 6

4.3 The Quantum Mechanical Trace ...... 7

4.3.1 The densitymatrix ...... 7

4.3.2 Averaging the DOS ...... 8

4.3.3 Coherentstates ...... 9

4.4 Thermal Equilibrium ...... 11

4.4.1 Two systems in thermal contact ...... 11

4.4.2 Thermal, mechanical and chemical equilibrium ...... 13

4.4.3 Gibbs-Duhem relation ...... 13

i ii CONTENTS

4.5 Ordinary (OCE) ...... 14

4.5.1 Canonical distribution and partition function ...... 14

4.5.2 The difference between P (En) and Pn ...... 15 4.5.3 Additional remarks ...... 15

4.5.4 Averages within the OCE ...... 16

4.5.5 Entropyand free energy ...... 16

4.5.6 Fluctuations in the OCE ...... 17

4.5.7 Thermodynamics revisited ...... 18

4.5.8 Generalized susceptibilities ...... 19

4.6 (GCE) ...... 20

4.6.1 Grand canonical distribution and partition function ...... 20

4.6.2 and Gibbs-Duhem relation ...... 22

4.6.3 Generalized susceptibilities in the GCE ...... 23

4.6.4 Fluctuations in the GCE ...... 23

4.6.5 Gibbs ensemble ...... 24

4.7 Statistical Ensembles from Maximum Entropy ...... 25

4.7.1 µCE ...... 25

4.7.2 OCE ...... 25

4.7.3 GCE ...... 26

4.8 Ideal Gas ...... 27

4.8.1 Maxwell velocity distribution ...... 27

4.8.2 Equipartition ...... 29

4.9 Selected Examples ...... 30

4.9.1 Spins in an external magnetic field ...... 30

4.9.2 Negative (!) ...... 32

4.9.3 Adsorption ...... 33

4.9.4 Elasticity of wool ...... 34

4.9.5 Noninteracting dimers ...... 36 4.10 Statistical Mechanics of Molecular Gases ...... 37 4.10.1 Separation of translational and internal degrees of freedom ...... 37 4.10.2 Ideal gas law ...... 39 4.10.3 The internal coordinate partition function ...... 40 4.10.4 Rotations ...... 40 4.10.5 Vibrations ...... 43 4.10.6 Two-level systems : Schottky anomaly ...... 43 4.10.7 Electronic and nuclear excitations ...... 45 4.11 Appendix I : Additional Examples ...... 47 4.11.1 Three state system ...... 47 4.11.2 Spins and vacancies on a surface ...... 48 4.11.3 Fluctuating interface ...... 50 4.11.4 Dissociation of molecular hydrogen ...... 52

List of Tables

4.1 Rotational and vibrational of common molecules ...... 41 4.2 Nuclear angular momentum states for homonuclear diatomic molecules ...... 47

List of Figures

4.1 Complex integration contours for inverse Laplace transform ...... 4 C

iii iv LIST OF FIGURES

4.2 Asystem S in contact with a ‘world’ W ...... 8 4.3 Averaging the quantum mechanical discrete density of states ...... 9 4.4 Two systems in thermal contact ...... 12 4.5 Microscopic interpretation of the First Law of Thermodynamics ...... 19

4.6 Maxwell distribution of speeds ϕ(v/v0) ...... 28 4.7 Entropy versus energy for a spin system ...... 32 4.8 A crude model for the elasticity of wool ...... 34 4.9 Length versus temperature and force ...... 35 4.10 Noninteracting spin dimers on a lattice ...... 38 4.11 capacity per molecule as a function of temperature ...... 44 Chapter 4

Statistical Ensembles

4.1 References

– F. Reif, Fundamentals of Statistical and Thermal Physics (McGraw-Hill, 1987) This has been perhaps the most popular undergraduate text since it first appeared in 1967, and with good reason.

– A. H. Carter, Classical and Statistical Thermodynamics (Benjamin Cummings, 2000) A very relaxed treatment appropriate for undergraduate physics majors.

– D. V. Schroeder, An Introduction to Thermal Physics (Addison-Wesley, 2000) This is the best undergraduate thermodynamics book I’ve come across, but only 40% of the book treats statistical mechanics.

– C. Kittel, Elementary Statistical Physics (Dover, 2004) Remarkably crisp, though dated, this text is organized as a series of brief discussions of key con- cepts and examples. Published by Dover, so you can’t beat the price.

– M. Kardar, Statistical Physics of Particles (Cambridge, 2007) A superb modern text, with many insightful presentations of key concepts.

– M. Plischke and B. Bergersen, Equilibrium Statistical Physics (3rd edition, World Scientific, 2006) An excellent graduate level text. Less insightful than Kardar but still a good modern treatment of the subject. Good discussion of mean field theory.

– E. M. Lifshitz and L. P. Pitaevskii, Statistical Physics (part I, 3rd edition, Pergamon, 1980) This is volume 5 in the famous Landau and Lifshitz Course of Theoretical Physics. Though dated, it still contains a wealth of information and physical insight.

1 2 CHAPTER 4. STATISTICAL ENSEMBLES

d

4.2 Microcanonical Ensemble (µCE)

4.2.1 The microcanonical distribution function

We have seen how in an ergodic dynamical system, time averages can be replaced by aver- ages: ergodicity f(ϕ) = f(ϕ) , (4.1) ⇐⇒ t S where T 1 f(ϕ) = lim dt f ϕ(t) . (4.2) t T →∞ T Z 0  and f(ϕ) = dµ f(ϕ) δ E Hˆ (ϕ) dµ δ E Hˆ (ϕ) . (4.3) S − − Z Z   Here Hˆ (ϕ) = Hˆ (q, p) is the Hamiltonian, and where δ(x) is the Dirac δ-function1. Thus, averages are taken over a constant energy hypersurface which is a subset of the entire phase space.

We’ve also seen how any phase space distribution ̺(Λ1,...,Λk) which is a function of conserved quan- titied Λa(ϕ) is automatically a stationary (time-independent) solution to Liouville’s equation. Note that the microcanonical distribution,

̺ (ϕ)= δ E Hˆ (ϕ) dµ δ E Hˆ (ϕ) , (4.4) E − − Z   is of this form, since Hˆ (ϕ) is conserved by the dynamics. Linear and angular momentum conservation generally are broken by elastic scattering off the walls of the sample. So averages in the microcanonical ensemble are computed by evaluating the ratio

Tr A δ(E Hˆ ) A = − , (4.5) Tr δ(E Hˆ ) − where Tr means ‘trace’, which entails an integration over all phase space:

N 1 ddp ddq Tr A(q,p) i i A(q,p) . (4.6) ≡ N! (2π~)d Yi=1 Z Here N is the total number of particles and d is the dimension of physical space in which each particle moves. The factor of 1/N!, which cancels in the ratio between numerator and denominator, is present for

1We write the Hamiltonian as Hˆ (classical or quantum) in order to distinguish it from magnetic field, H. In chapter 2, we used the symbol H for enthalpy and wrote the magnetic field as H, but since enthalpy doesn’t get a mention in this chapter, we shift notation slightly. 4.2. MICROCANONICAL ENSEMBLE (µCE) 3

indistinguishable particles2. The normalization factor (2π~)−Nd renders the trace dimensionless. Again, this cancels between numerator and denominator. These factors may then seem arbitrary in the defini- tion of the trace, but we’ll see how they in fact are required from quantum mechanical considerations. So we now adopt the following metric for classical phase space integration:

N 1 ddp ddq dµ = i i . (4.7) N! (2π~)d Yi=1

4.2.2 Density of states

The denominator, D(E)= Tr δ(E Hˆ ) , (4.8) − is called the density of states. It has dimensions of inverse energy, such that

E+∆E D(E)∆E = dE′ dµ δ(E′ Hˆ )= dµ (4.9) − EZ Z E

Let us now compute D(E) for the nonrelativistic ideal gas. The Hamiltonian is

N p2 Hˆ (q,p)= i . (4.10) 2m Xi=1 We assume that the gas is enclosed in a region of volume V , and we’ll do a purely classical calculation, neglecting discreteness of its quantum spectrum. We must compute

N N 1 ddp ddq p2 D(E)= i i δ E i . (4.11) N! (2π~)d − 2m Z Yi=1  Xi=1 

We shall calculate D(E) in two ways. The first method utilizes the Laplace transform, Z(β):

∞ ˆ Z(β)= D(E) dE e−βE D(E)= Tr e−βH . (4.12) L ≡ Z   0 The inverse Laplace transform is then

c+i∞ dβ D(E)= −1 Z(β) eβE Z(β) , (4.13) L ≡ 2πi Z   c−i∞ 2More on this in chapter 5. 4 CHAPTER 4. STATISTICAL ENSEMBLES

Figure 4.1: Complex integration contours for inverse Laplace transform −1 Z(β) = D(E). When C L the product dN is odd, there is a branch cut along the negative Re β axis.   where c is such that the integration contour is to the right of any singularities of Z(β) in the complex β-plane. We then have

N d d 1 d x d p p2 Z(β)= i i e−β i /2m N! (2π~)d Yi=1 Z ∞ Nd (4.14) N N Nd/2 V dp 2 V m = e−βp /2m = β−Nd/2 . N!  2π~  N! 2π~2 −∞Z     The inverse Laplace transform is then

V N m Nd/2 dβ D(E)= eβE β−Nd/2 N! 2π~2 2πi   I C (4.15) 1 Nd−1 V N m Nd/2 E 2 = , N! 2π~2 Γ(Nd/2)   exactly as before. The integration contour for the inverse Laplace transform is extended in an infinite semicircle in the left half β-plane. When Nd is even, the function β−Nd/2 has a simple pole of order Nd/2 at the origin. When Nd is odd, there is a branch cut extending along the negative Re β axis, and the integration contour must avoid the cut, as shown in fig. 4.1. One can check that this results in the same expression above, i.e. we may analytically continue from even values of Nd to all positive values of Nd. For a general system, the Laplace transform, Z(β) = D(E) also is called the partition function. We L shall again meet up with Z(β) when we discuss the ordinary canonical ensemble.   Our final result, then, is 1 Nd−1 V N m Nd/2 E 2 D(E,V,N)= . (4.16) N! 2π~2 Γ(Nd/2)   4.2. MICROCANONICAL ENSEMBLE (µCE) 5

Here we have emphasized that the density of states is a function of E, V , and N. Using Stirling’s approximation, ln N!= N ln N N + 1 ln N + 1 ln(2π)+ N −1 , (4.17) − 2 2 O we may define the statistical entropy,  E V S(E,V,N) k ln D(E,V,N)= Nk φ , + (ln N) , (4.18) ≡ B B N N O   where E V d E V d m φ , = ln + ln + ln + 1+ 1 d . (4.19) N N 2 N N 2 dπ~2 2         Recall k = 1.3806503 10−16 erg/K is Boltzmann’s constant.  B ×

Second method

The second method invokes a mathematical trick. First, let’s rescale pα √2mE uα. We then have i ≡ i Nd V N √2mE 1 D(E)= dMu δ u2 + u2 + . . . + u2 1 . (4.20) N! h E 1 2 M − ! Z  Here we have written u = (u1, u2, . . . , uM ) with M = Nd as a M-dimensional vector. We’ve also used the rule δ(Ex)= E−1δ(x) for δ-functions. We can now write

M M−1 d u = u du dΩM , (4.21)

3 where dΩM is the M-dimensional differential solid angle. We now have our answer: Nd N √ 1 V 2m 2 Nd−1 1 D(E)= E 2 ΩNd . (4.22) N! h ! ·

What remains is for us to compute ΩM , the total solid angle in M dimensions. We do this by a nifty mathematical trick. Consider the integral ∞ 2 2 = dMu e−u = Ω du uM−1 e−u IM M Z Z0 ∞ (4.23) 1 1 2 M−1 −s 1 1 = 2 ΩM ds s e = 2 ΩM Γ 2 M , Z 0  where s = u2, and where ∞ Γ(z)= dt tz−1 e−t (4.24) Z0

3 1 2 1 1 The factor of 2 preceding ΩM in eqn. 4.22 appears because δ(u − 1) = 2 δ(u − 1) + 2 δ(u + 1). Since u = |u| ≥ 0, the second term can be dropped. 6 CHAPTER 4. STATISTICAL ENSEMBLES

is the Gamma function, which satisfies z Γ(z) = Γ(z + 1).4 On the other hand, we can compute in IM Cartesian coordinates, writing

∞ M 2 M = du e−u1 = √π . (4.25) IM  1  Z −∞    Therefore 2πM/2 Ω = . (4.26) M Γ(M/2)

2 Thus we obtain Ω2 = 2π, Ω3 = 4π, Ω4 = 2π , etc., the first two of which are familiar.

4.2.3 Arbitrariness in the definition of S(E)

Note that D(E) has dimensions of inverse energy, so one might ask how we are to take the logarithm of a dimensionful quantity in eqn. 4.18. We must introduce an energy scale, such as ∆E in eqn. 4.9, and define D˜ (E;∆E)= D(E)∆E and S(E;∆E) k ln D˜(E;∆E). The definition of statistical entropy ≡ B then involves the arbitrary parameter ∆E, however this only affects S(E) in an additive way. That is,

∆E S(E,V,N;∆E )= S(E,V,N;∆E )+ k ln 1 . (4.27) 1 2 B ∆E  2 

Note that the difference between the two definitions of S depends only on the ratio ∆E1/∆E2, and is independent of E, V , and N.

4.2.4 Ultra-relativistic ideal gas

Consider an ultrarelativistic ideal gas, with single particle dispersion ε(p)= cp. We then have

∞ N V N ΩN V N Γ(d) Ω N Z(β)= d dp pd−1 e−βcp = d . (4.28) N! hN d   N! cd hd βd Z0     E V The statistical entropy is S(E,V,N)= kB ln D(E,V,N)= NkB φ N , N , with E V E V Ω Γ(d) φ , = d ln + ln + ln d + (d + 1) (4.29) N N N N (dhc)d        

4.2.5 Discrete systems

For classical systems where the energy levels are discrete, the states of the system σ are labeled by a | i set of discrete quantities σ ,σ ,... , where each variable σ takes discrete values. The number of ways { 1 2 } i 4Note that for integer argument, Γ(k)=(k − 1)! 4.3. THE QUANTUM MECHANICAL TRACE 7

of configuring the system at fixed energy E is then

Ω(E,N)= δ , Hˆ (σ),E (4.30) σ X where the sum is over all possible configurations. Here N labels the total number of particles. For 1 example, if we have N spin- 2 particles on a lattice which are placed in a magnetic field H, so the indi- vidual particle energy is ε = µ Hσ, where σ = 1, then in a configuration in which N particles have i − 0 ± ↑ σ = +1 and N = N N particles have σ = 1, the energy is E = (N N )µ H. The number of i ↓ − ↑ i − ↓ − ↑ 0 configurations at fixed energy E is N N! Ω(E,N)= = , (4.31) N N E N E ↑ 2 2µ H ! 2 + 2µ H !   − 0 0 N E   since N↑/↓ = 2 2µ H . The statistical entropy is S(E,N)= kB ln Ω(E,N). ∓ 0

4.3 The Quantum Mechanical Trace

Thus far our understanding of ergodicity is rooted in the dynamics of classical mechanics. A Hamil- tonian flow which is ergodic is one in which time averages can be replaced by phase space averages using the microcanonical ensemble. What happens, though, if our system is quantum mechanical, as all systems ultimately are?

4.3.1 The density matrix

First, let us consider that our system S will in general be in contact with a world W . We call the union of S and W the universe, U = W S. Let N denote a quantum mechanical state of W , and let n ∪ denote a quantum mechanical state of S. Then the most general wavefunction we can write is of the form Ψ = Ψ N n . (4.32) N,n ⊗ N,n X

Now let us compute the expectation value of some operator ˆ which acts as the identity within W , ′ A meaning N ˆ N = Aˆ δ ′ , where Aˆ is the ‘reduced’ operator which acts within S alone. We then A NN have ∗ ′ Ψ ˆ Ψ = Ψ Ψ ′ ′ δ ′ n Aˆ n = Tr ̺ˆAˆ , (4.33) A N,n N ,n NN N,N ′ n,n′ X X  where ∗ ′ ̺ˆ = ΨN,n ΨN,n′ n n (4.34) N n,n′ X X is the density matrix. The time-dependence of ̺ˆ is easily found:

∗ ′ −iHt/ˆ ~ +iHt/ˆ ~ ̺ˆ(t)= ΨN,n ΨN,n′ n (t) n(t) = e ̺ˆ e , (4.35) N n,n′ X X

8 CHAPTER 4. STATISTICAL ENSEMBLES

Figure 4.2: A system S in contact with a ‘world’ W . The union of the two, universe U = W S, is said ∪ to be the ‘universe’. where Hˆ is the Hamiltonian for the system S. Thus, we find

∂̺ˆ i~ = H,ˆ ̺ˆ . (4.36) ∂t   Note that the density matrix evolves according to a slightly different equation than an operator in the Heisenberg picture, for which

~ ˆ ~ ∂Aˆ Aˆ(t)= e+iHt/ A e−iHt/ = i~ = A,ˆ Hˆ = H,ˆ Aˆ . (4.37) ⇒ ∂t −     For Hamiltonian systems, we found that the phase space distribution ̺(q,p,t) evolved according to the Liouville equation, i ∂̺/∂t = L ̺ , where the Liouvillian L is the differential operator

Nd ∂Hˆ ∂ ∂Hˆ ∂ L = i . (4.38) − (∂pj ∂qj − ∂qj ∂pj ) Xj=1

Accordingly, any distribution ̺(Λ1,...,Λk) which is a function of constants of the motion Λa(q,p) is a stationary solution to the Liouville equation: ∂t ̺(Λ1,...,Λk) = 0. Similarly, any quantum mechan- ical density matrix which commutes with the Hamiltonian is a stationary solution to eqn. 4.36. The corresponding microcanonical distribution is ̺ˆ = δ E Hˆ . E −  4.3.2 Averaging the DOS

If our quantum mechanical system is placed in a finite volume, the energy levels will be discrete, rather than continuous, and the density of states (DOS) will be of the form

D(E)= Tr δ E Hˆ = δ(E E ) , (4.39) − − l l  X 4.3. THE QUANTUM MECHANICAL TRACE 9

Figure 4.3: Averaging the quantum mechanical discrete density of states yields a continuous curve. where E are the eigenvalues of the Hamiltonian Hˆ . In the thermodynamic limit, V , and the { l} → ∞ discrete spectrum of kinetic energies remains discrete for all finite V but must approach the continuum result. To recover the continuum result, we average the DOS over a window of width ∆E:

E+∆E 1 D(E)= dE′ D(E′) . (4.40) ∆E EZ If we take the limit ∆E 0 but with ∆E δE, where δE is the spacing between successive quantized → ≫ levels, we recover a smooth function, as shown in fig. 4.3. We will in general drop the bar and refer to this function as D(E). Note that δE 1/D(E) = e−Nφ(ε,v) is (typically) exponentially small in the size ∼ of the system, hence if we took ∆E V −1 which vanishes in the thermodynamic limit, there are still ∝ exponentially many energy levels within an interval of width ∆E.

4.3.3 Coherent states

The quantum-classical correspondence is elucidated with the use of coherent states. Recall that the one- dimensional harmonic oscillator Hamiltonian may be written p2 Hˆ = + 1 mω2 q2 = ~ω a†a + 1 , (4.41) 0 2m 2 0 0 2  where a and a† are ladder operators satisfying a, a† = 1, which can be taken to be ∂ q   ∂ q a = ℓ + , a† = ℓ + , (4.42) ∂q 2ℓ − ∂q 2ℓ ~ with ℓ = ~/2mω . Note that q = ℓ (a + a†) and p = (a a†) . 0 2iℓ − The groundp state satisfies a ψ0(q) = 0, which yields

2 −1/4 −q2/4ℓ2 ψ0(q) = (2πℓ ) e . (4.43) 10 CHAPTER 4. STATISTICAL ENSEMBLES

The normalized coherent state z is defined as | i ∞ n 1 2 † 1 2 z z = e− 2 |z| eza 0 = e− 2 |z| n . (4.44) | i | i √ | i n=0 n! X The overlap of coherent states is given by

1 1 − |z |2 − |z |2 z¯ z z z = e 2 1 e 2 2 e 1 2 , (4.45) h 1 | 2 i hence different coherent states are not orthogonal. Despite this nonorthogonality, the coherent states allow a simple resolution of the identity,

d2z d2z d Rez d Imz 1= z z ; (4.46) 2πi | ih | 2πi ≡ π Z which is straightforward to establish. To gain some physical intuition about the coherent states, define Q iℓP z + (4.47) ≡ 2ℓ ~ and write z Q, P . One finds (exercise!) | i ≡ | i ~ ~ 2 2 ψ (q)= q z = (2πℓ2)−1/4 e−iPQ/2 eiPq/ e−(q−Q) /4ℓ , (4.48) Q,P h | i hence the coherent state ψQ,P (q) is a wavepacket Gaussianly localized about q = Q, but oscillating with average momentum P . For example, we can compute

Q, P q Q, P = z ℓ (a + a†) z = 2ℓ Re z = Q (4.49) ~ ~ Q, P p Q, P = z (a a †) z = Im z = P (4.50) 2iℓ − ℓ as well as

Q, P q2 Q, P = z ℓ2 (a + a†)2 z = Q2 + ℓ2 (4.51) ~2 ~2 Q, P p2 Q, P = z (a a† )2 z = P 2 + . (4.52) − 4ℓ2 − 4ℓ2

Thus, the root mean square fluctuations in the coherent state Q, P are | i ~ ~ m~ω ∆q = ℓ = , ∆p = = 0 , (4.53) s2mω0 2ℓ r 2 and ∆q ∆p = 1 ~. Thus we learn that the coherent state ψ (q) is localized in phase space, i.e. in both · 2 Q,P position and momentum. If we have a general operator Aˆ(q,p), we can then write

Q, P Aˆ(q,p) Q, P = A(Q, P )+ (~) , (4.54) O

4.4. THERMAL EQUILIBRIUM 11

where A(Q, P ) is formed from Aˆ(q,p) by replacing q Q and p P . → → Since d2z d Rez d Imz dQ dP = , (4.55) 2πi ≡ π 2π~ we can write the trace using coherent states as

∞ ∞ 1 Tr Aˆ = dQ dP Q, P Aˆ Q, P . (4.56) 2π~ Z Z −∞ −∞

~ We now can understand the origin of the factor 2π in the denominator of each (qi,pi) integral over classical phase space in eqn. 4.6.

Note that ω0 is arbitrary in our discussion. By increasing ω0, the states become more localized in q and more plane wave like in p. However, so long as ω0 is finite, the width of the coherent state in each direction is proportional to ~1/2, and thus vanishes in the classical limit.

4.4 Thermal Equilibrium

4.4.1 Two systems in thermal contact

Consider two systems in thermal contact, as depicted in fig. 4.4. The two subsystems #1 and #2 are free to exchange energy, but their respective volumes and particle numbers remain fixed. We assume the contact is made over a surface, and that the energy associated with that surface is negligible when compared with the bulk energies E1 and E2. Let the total energy be E = E1 + E2. Then the density of states D(E) for the combined system is

D(E)= dE D (E ) D (E E ) . (4.57) 1 1 1 2 − 1 Z The probability density for system #1 to have energy E1 is then D (E ) D (E E ) P (E )= 1 1 2 − 1 . (4.58) 1 1 D(E)

Note that P1(E1) is normalized: dE1 P1(E1) = 1. We now ask: what is the most probable value of E1? We find out by differentiating P (E ) with respect to E and setting the result to zero. This requires 1R 1 1 1 dP (E ) ∂ 0= 1 1 = ln P (E ) P (E ) dE ∂E 1 1 1 1 1 1 (4.59) ∂ ∂ = ln D1(E1)+ ln D2(E E1) . ∂E1 ∂E1 − We conclude that the maximally likely partition of energy between systems #1 and #2 is realized when ∂S ∂S 1 = 2 . (4.60) ∂E1 ∂E2 12 CHAPTER 4. STATISTICAL ENSEMBLES

Figure 4.4: Two systems in thermal contact.

This guarantees that S(E, E )= S (E )+ S (E E ) (4.61) 1 1 1 2 − 1 is a maximum with respect to the energy E1, at fixed total energy E. The temperature T is defined as 1 ∂S = , (4.62) T ∂E  V,N a result familiar from thermodynamics. The difference is now we have a more rigorous definition of the entropy. When the total entropy S is maximized, we have that T1 = T2. Once again, two systems in thermal contact and can exchange energy will in equilibrium have equal temperatures. According to eqns. 4.19 and 4.29, the of nonrelativistic and ultrarelativistic ideal gases in d space dimensions are given by

E V S = 1 Nd k ln + Nk ln + const. (4.63) NR 2 B N B N     E V S = Nd k ln + Nk ln + const. . (4.64) UR B N B N     1 Invoking eqn. 4.62, we then have ENR = 2 Nd kBT and EUR = Nd kBT .

We saw that the probability distribution P1(E1) is maximized when T1 = T2, but how sharp is the peak ∗ ∗ in the distribution? Let us write E1 = E1 +∆E1, where E1 is the solution to eqn. 4.59. We then have

2 2 ∗ ∗ 1 ∂ S1 2 1 ∂ S2 2 ln P1(E1 +∆E1)=ln P1(E1 )+ 2 (∆E1) + 2 (∆E1) + ... , (4.65) 2kB ∂E ∗ 2kB ∂E ∗ 1 E1 2 E2

where E∗ = E E∗. We must now evaluate 2 − 1 ∂2S ∂ 1 1 ∂T 1 = = = , (4.66) ∂E2 ∂E T −T 2 ∂E −T 2 C    V,N V where CV = ∂E/∂T V,N is the . Thus,

 2 2 ¯ ∗ −(∆E1) /2k T CV P1 = P1 e B , (4.67) 4.4. THERMAL EQUILIBRIUM 13

where C C ¯ V,1 V,2 CV = . (4.68) CV,1 + CV,2

The distribution is therefore a Gaussian, and the fluctuations in ∆E1 can now be computed:

(∆E )2 = k T 2 C¯ = (∆E ) = k T C¯ /k . (4.69) 1 B V ⇒ 1 RMS B V B q The individual heat capacities C and C scale with the volumes V and V , respectively. If V V , V,1 V,2 1 2 2 ≫ 1 then C C , in which case C¯ C . Therefore the RMS fluctuations in ∆E are proportional to V,2 ≫ V,1 V ≈ V,1 1 the square root of the system size, whereas E itself is extensive. Thus, the ratio (∆E ) /E V −1/2 1 1 RMS 1 ∝ scales as the inverse square root of the volume. The distribution P1(E1) is thus extremely sharp.

4.4.2 Thermal, mechanical and chemical equilibrium

1 We have dS V,N = T dE , but in general S = S(E,V,N). Equivalently, we may write E = E(S, V, N). The full differential of E(S, V, N) is then dE = T dS p dV +µ dN, with T = ∂E and p = ∂E − ∂S V,N − ∂V S,N ∂E and µ = ∂N S,V . As we shall discuss in more detail, p is the pressure and µ is the chemical potential . We may thus write the total differential dS as  1 p µ dS = dE + dV dN . (4.70) T T − T Employing the same reasoning as in the previous section, we conclude that entropy maximization for two systems in contact requires the following:

If two systems can exchange energy, then T = T . This is thermal equilibrium. • 1 2 If two systems can exchange volume, then p /T = p /T . This is mechanical equilibrium. • 1 1 2 2 If two systems can exchange particle number, then µ /T = µ /T . This is chemical equilibrium. • 1 1 2 2

4.4.3 Gibbs-Duhem relation

The energy E(S, V, N) is an extensive function of extensive variables, i.e. it is homogeneous of degree one in its arguments. Therefore E(λS, λV, λN)= λE, and taking the derivative with respect to λ yields ∂E ∂E ∂E E = S + V + N ∂S ∂V ∂N  V,N  S,N  S,V (4.71) = TS pV + µN . − Taking the differential of each side, using the Leibniz rule on the RHS, and plugging in dE = T dS − p dV + µ dN, we arrive at the Gibbs-Duhem relation5, S dT V dp + N dµ = 0 . (4.72) − 5See §2.7.4. 14 CHAPTER 4. STATISTICAL ENSEMBLES

This, in turn, says that any one of the intensive quantities (T,p,µ) can be written as a function of the other two, in the case of a single component system.

4.5 Ordinary Canonical Ensemble (OCE)

4.5.1 Canonical distribution and partition function

Consider a system S in contact with a world W , and let their union U = W S be called the ‘universe’. ∪ The situation is depicted in fig. 4.2. The volume VS and particle number NS of the system are held fixed, but the energy is allowed to fluctuate by exchange with the world W . We are interested in the limit N , N , with N N , with similar relations holding for the respective volumes and S → ∞ W → ∞ S ≪ W energies. We now ask what is the probability that S is in a state n with energy E . This is given by the | i n ratio

DW(EU En)∆E Pn = lim − ∆E→0 DU(EU)∆E (4.73) # of states accessible to W given that E = E = S n . total # of states in U

Then ln P = ln D (E E ) ln D (E ) n W U − n − U U ∂ ln D (E) = ln D (E ) ln D (E ) E W + . . . (4.74) W U − U U − n ∂E E=EU

α βE . ≡− − n The constant β is given by ∂ ln D (E) 1 β = W = . (4.75) ∂E kBT E=EU

−α −βEn Thus, we find Pn = e e . The constant α is fixed by the requirement that n Pn = 1:

1 ˆP P = e−βEn ,Z(T,V,N)= e−βEn = Tr e−βH . (4.76) n Z n X We’ve already met Z(β) in eqn. 4.12 – it is the Laplace transform of the density of states. It is also called the partition function of the system S. Quantum mechanically, we can write the ordinary canonical density matrix as e−βHˆ ̺ˆ = , (4.77) Tr e−βHˆ which is known as the Gibbs distribution. Note that ̺,ˆ Hˆ = 0, hence the ordinary canonical distribution is a stationary solution to the evolution equation for the density matrix. Note that the OCE is specified   by three parameters: T , V , and N. 4.5. ORDINARY CANONICAL ENSEMBLE (OCE) 15

4.5.2 The difference between P (En) and Pn

Let the total energy of the Universe be fixed at EU. The joint probability density P (ES, EW) for the system to have energy ES and the world to have energy EW is

P (E , E )= D (E ) D (E ) δ(E E E ) D (E ) , (4.78) S W S S W W U − S − W U U where  ∞ D (E )= dE D (E ) D (E E ) , (4.79) U U S S S W U − S −∞Z which ensures that dES dEW P (ES, EW) = 1. The probability density P (ES) is defined such that P (ES) dES is the (differential) probability for the system to have an energy in the range [ES, ES + dES]. The units −1 R R of P (ES) are E . To obtain P (ES), we simply integrate the joint probability density P (ES, EW) over all possible values of EW, obtaining

DS(ES) DW(EU ES) P (ES)= − , (4.80) DU(EU) as we have in eqn. 4.73. Suppose we wish to know the probability Pn that the system is in a particular state n with energy E . Clearly | i n

probability that ES [En, En +∆E] P (En)∆E DW(EU En) Pn = lim ∈ = = − . (4.81) ∆E→0 # of S states with E [E , E +∆E] D (E )∆E D (E ) S ∈ n n S n U U

4.5.3 Additional remarks

The formula of eqn. 4.73 is quite general and holds in the case where N /N = (1), so long as we are S W O in the thermodynamic limit, where the energy associated with the interface between S and W may be neglected. In this case, however, one is not licensed to perform the subsequent Taylor expansion, and 6 the distribution Pn is no longer of the Gibbs form. It is also valid for quantum systems , in which case we interpret P = n ̺ n as a diagonal element of the density matrix ̺ . The density of states functions n h | S| i S may then be replaced by

EU−En+∆E

SW(EU−En , ∆E) Tra ˆ DW(EU En)∆E e dE δ(E HW) − → ≡ W − E Z−En U (4.82) EU+∆E

SU(EU , ∆E) Tra ˆ DU(EU)∆E e dE δ(E HU) . → ≡ U − Z EU

The off-diagonal matrix elements of ̺S are negligible in the thermodynamic limit.

6See T.-C. Lu and T. Grover, arXiv 1709.08784. 16 CHAPTER 4. STATISTICAL ENSEMBLES

4.5.4 Averages within the OCE

To compute averages within the OCE,

n Aˆ n e−βEn Aˆ = Tr ̺ˆAˆ = n h | | i , (4.83) e−βEn P n  where we have conveniently taken the trace in a basisP of energy eigenstates. In the classical limit, we have 1 ˆ ˆ ˆ ̺(ϕ)= e−βH(ϕ) ,Z = Tr e−βH = dµ e−βH(ϕ) , (4.84) Z Z 1 N d d d with dµ = N! j=1(d qj d pj/h ) for identical particles (‘Maxwell-Boltzmann statistics’). Thus, Q dµ A(ϕ) e−βHˆ (ϕ) A = Tr (̺A)= . (4.85) h i −βHˆ (ϕ) R dµ e R 4.5.5 Entropy and free energy

The Boltzmann entropy is defined by

S = k Tr ̺ˆln̺ ˆ)= k P ln P . (4.86) − B − B n n n X The Boltzmann entropy and the statistical entropy S = kB ln D(E) are identical in the thermodynamic limit. We define the Helmholtz free energy F (T,V,N) as F (T,V,N)= k T ln Z(T,V,N) , (4.87) − B hence P = eβF e−βEn and ln P = βF βE . The entropy is then n n − n F Hˆ S = k P βF βE = + h i , (4.88) − B n − n − T T n X  which is to say F = E TS, where − Tr Hˆ e−βHˆ E = P E = (4.89) n n Tr −βHˆ n e X is the average energy. We also see that −βE ˆ E e n ∂ ∂ Z = Tr e−βH = e−βEn = E = n n = ln Z = βF . (4.90) ⇒ e−βEn −∂β ∂β n P n X  P Thus, F (T,V,N) is a Legendre transform of E(S, V, N), with dF = S dT p dV + µ dN , (4.91) − − which means ∂F ∂F ∂F S = , p = , µ =+ . (4.92) − ∂T − ∂V ∂N  V,N  T,N  T,V 4.5. ORDINARY CANONICAL ENSEMBLE (OCE) 17

4.5.6 Fluctuations in the OCE

In the OCE, the energy is not fixed. It therefore fluctuates about its average value E = Hˆ . Note that h i ∂E ∂E ∂2 ln Z = k T 2 = − ∂β B ∂T ∂β2 ˆ ˆ 2 Tr Hˆ 2 e−βH Tr Hˆ e−βH (4.93) = ˆ ˆ Tr e−βH − Tr e−βH ! = Hˆ 2 Hˆ 2 . − Thus, the heat capacity is related to the fluctuations in the energy, just as we saw at the end of 4.4: § ∂E 1 ˆ 2 ˆ 2 CV = = 2 H H (4.94) ∂T V,N kBT −     d For the nonrelativistic ideal gas, we found CV = 2 NkB, hence the ratio of RMS fluctuations in the energy to the energy itself is

ˆ 2 (∆H) k T 2 C 2 = B V = , (4.95) q ˆ d H NkBT Nd h i p2 r and the ratio of the RMS fluctuations to the mean value vanishes in the thermodynamic limit. The full distribution function for the energy is Tr δ( Hˆ ) e−βHˆ 1 P ( )= δ( Hˆ ) = E − = D( ) e−βE . (4.96) E E − Tr e−βHˆ Z E Thus, e−β[E−TS(E)] P ( )= , (4.97) E d ′ e−β[E′−TS(E′)] E where S( ) = k ln D( ) is the statistical entropy. Let’s write = E + δ , where E extremizes the E B E R E E combination TS( ), i.e. the solution to TS′(E) = 1, where the energy derivative of S is performed E − E at fixed volume V and particle number N. We now expand S(E + δ ) to second order in δ , obtaining E E 2 δ δ S(E + δ )= S(E)+ E 2E + . . . (4.98) E T − 2T C V ′′ ∂ 1 1 Recall that S (E)= ∂E T = T 2C . Thus, − V  (δ )2 TS( )= E TS(E)+ E + (δ )3 . (4.99) E− E − 2T CV O E  Applying this to both numerator and denominator of eqn. 4.97, we obtain7 (δ )2 P ( )= exp E2 , (4.100) E N "− 2kBT CV #

7In applying eqn. 4.99 to the denominator of eqn. 4.97, we shift E ′ by E and integrate over the difference δE ′ ≡ E ′ − E, retaining terms up to quadratic order in δE ′ in the argument of the exponent. 18 CHAPTER 4. STATISTICAL ENSEMBLES

where = (2πk T 2C )−1/2 is a normalization constant which guarantees d P ( ) = 1. Once again, N B V E E we see that the distribution is a Gaussian centered at = E, and of width (∆ ) = k T 2 C . This hEi R E RMS B V is a consequence of the Central Limit Theorem. p

4.5.7 Thermodynamics revisited

The average energy within the OCE is

E = EnPn , (4.101) n X and therefore dE = E dP + P dE =dQ ¯ dW¯ , (4.102) n n n n − n n X X where dQ¯ = E dP , dW¯ = P dE . (4.103) n n − n n n n X X −1 −En/k T Finally, from Pn = Z e B , we can write E = k T ln Z k T ln P , (4.104) n − B − B n with which we obtain

dQ¯ = En dPn n X = k T ln Z dP k T ln P dP − B n − B n n (4.105) n n X X = T d k P ln P = T dS . − B n n n  X  Note also that dW¯ = P dE − n n n X ∂E = P n dX − n ∂X i (4.106) n i ! X Xi ∂Hˆ = P n n dX F dX , − n ∂X i ≡ i i n,i i i X X so the generalized force Fi conjugate to the generalized displacement dXi is ∂E ∂Hˆ F = P n = . (4.107) i − n ∂X − ∂X n i i X   This is the force acting on the system8. In the chapter on thermodynamics, we defined the generalized force conjugate to X as y F . i i ≡− i 8In deriving eqn. 4.107, we have used the so-called Feynman-Hellman theorem of quantum mechanics: dhn|Hˆ |ni = hn| dHˆ |ni, if |ni is an energy eigenstate. 4.5. ORDINARY CANONICAL ENSEMBLE (OCE) 19

Figure 4.5: Microscopic, statistical interpretation of the First Law of Thermodynamics.

Thus we see from eqn. 4.102 that there are two ways that the average energy can change; these are depicted in the sketch of fig. 4.5. Starting from a set of energy levels E and probabilities P , we { n} { n} can shift the energies to E′ . The resulting change in energy (∆E) = W is identified with the work { n} I − done on the system. We could also modify the probabilities to P ′ without changing the energies. The { n} energy change in this case is the heat absorbed by the system: (∆E)II = Q. This provides us with a statistical and microscopic interpretation of the First Law of Thermodynamics.

4.5.8 Generalized susceptibilities

Suppose our Hamiltonian is of the form

Hˆ = Hˆ (λ)= Hˆ λ Qˆ , (4.108) 0 − ˆ ˆ where λ is an intensive parameter, such as magnetic field. Then Z(λ)= Tr e−β(H0−λQ) and

1 ∂Z 1 ˆ = β Tr Qˆ e−βH(λ) = β Qˆ . (4.109) Z ∂λ · Z h i   But then from Z = e−βF we have ∂F Q(λ, T )= Qˆ = . (4.110) h i − ∂λ  T 20 CHAPTER 4. STATISTICAL ENSEMBLES

Typically we will take Q to be an extensive quantity. We can now define the susceptibility χ as

1 ∂Q 1 ∂2F χ = = . (4.111) V ∂λ −V ∂λ2

The volume factor in the denominator ensures that χ is intensive. ˆ ˆ ˆ It is important to realize that we have assumed here that H0 , Q = 0, i.e. the ‘bare’ Hamiltonian H0 Qˆ and the operator commute. If they do not commute, then the response functions must be computed within a proper quantum mechanical formalism, which we shall not discuss here. ˆ ˆ ˆ Note also that we can imagine an entire family of observables Qk satisfying Qk , Qk′ = 0 and Hˆ , Qˆ = 0 k k′ 0 k , for all and . Then for the Hamiltonian      Hˆ (~λ)= Hˆ λ Qˆ , (4.112) 0 − k k Xk we have that ~ ˆ ∂F Qk(λ, T )= Qk = (4.113) h i − ∂λk T,N , λ   a k′=6 k and we may define an entire matrix of susceptibilities,

2 1 ∂Qk 1 ∂ F χkl = . (4.114) ≡ V ∂λl −V ∂λk ∂λl

4.6 Grand Canonical Ensemble (GCE)

4.6.1 Grand canonical distribution and partition function

Consider once again the situation depicted in fig. 4.2, where a system S is in contact with a world W , their union U = W S being called the ‘universe’. We assume that the system’s volume V is fixed, but ∪ S otherwise it is allowed to exchange energy and particle number with W . Hence, the system’s energy ES and particle number N will fluctuate. We ask what is the probability that S is in a state n with energy S | i En and particle number Nn. This is given by the ratio

DW(EU En ,NU Nn)∆E Pn = lim − − ∆E→0 DU(EU,NU)∆E (4.115) # of states accessible to W given that E = E and N = N = S n S n . total # of states in U 4.6. GRAND CANONICAL ENSEMBLE (GCE) 21

Then ln P = ln D (E E ,N N ) ln D (E ,N ) n W U − n U − n − U U U = ln D (E ,N ) ln D (E ,N ) W U U − U U U (4.116) ∂ ln D (E,N) ∂ ln D (E,N) E W N W + . . . n E=E n E=E − ∂E U − ∂N U N=N N=N U U

α βE + βµN . ≡− − n n The constants β and µ are given by ∂ ln D (E,N) 1 β = W = (4.117) E=E ∂E U kBT N=N U

∂ ln DW( E,N) µ = kBT . (4.118) E=E − ∂N U N=N U

The quantity µ has dimensions of energy and is called the chemical potential. Nota bene: Some texts define the ‘grand canonical Hamiltonian’ Kˆ as Kˆ Hˆ µNˆ . Thus, P = e−α e−β(En−µNn). Once again, ≡ − n the constant α is fixed by the requirement that n Pn = 1: 1 P P = e−β(En−µNn) n Ξ (4.119) ˆ Ξ(β,V,µ)= e−β(En−µNn) = Tr e−βK . n X Thus, the quantum mechanical grand canonical density matrix is given by e−βKˆ ̺ˆ = . (4.120) Tr e−βKˆ Note that ̺,ˆ Kˆ = 0. The quantity Ξ(T,V,µ) is called the grand partition function. It stands in relation to a corresponding free energy in the usual way:   Ξ(T,V,µ) e−βΩ(T,V,µ) Ω = k T ln Ξ , (4.121) ≡ ⇐⇒ − B where Ω(T,V,µ) is the grand potential, also known as the Landau free energy. The dimensionless quantity z eβµ is called the fugacity. ≡ If H,ˆ Nˆ = 0, the grand potential may be expressed as a sum over contributions from each N sector, viz.

  Ξ(T,V,µ)= eβµN Z(T,V,N) . (4.122) XN When there is more than one species, we have several chemical potentials µ , and accordingly we { a} define Kˆ = Hˆ µ Nˆ , (4.123) − a a a X ˆ with Ξ = Tr e−βK as before. 22 CHAPTER 4. STATISTICAL ENSEMBLES

4.6.2 Entropy and Gibbs-Duhem relation

In the GCE, the Boltzmann entropy is

S = k P ln P − B n n n X = k P βΩ βE + βµN − B n − n n (4.124) n X   Ω Hˆ µ Nˆ = + h i h i , − T T − T which says Ω = E TS µN , (4.125) − − where

Tr ˆ E = En Pn = ̺ˆH n X  (4.126) Tr ˆ N = Nn Pn = ̺ˆN . n X  Therefore, Ω(T,V,µ) is a double Legendre transform of E(S, V, N), with

dΩ = S dT p dV N dµ , (4.127) − − − which entails

∂Ω ∂Ω ∂Ω S = , p = ,N = . (4.128) − ∂T − ∂V − ∂µ  V,µ  T,µ  T,V

Since Ω(T,V,µ) is an extensive quantity, we must be able to write Ω = Vω(T,µ). We identify the function ω(T,µ) as the negative of the pressure:

∂Ω k T ∂Ξ 1 ∂E = B = n e−β(En−µNn) ∂V − Ξ ∂V Ξ ∂V T,µ n   X (4.129) ∂E = = p(T,µ) . ∂V −  T,µ

Therefore, Ω = pV , and p = p(T,µ) is an equation of state. This is consistent with the result from − thermodynamics that G = E TS + pV = µN. Taking the differential, we recover the Gibbs-Duhem − relation,

dΩ = S dT p dV N dµ = p dV V dp S dT V dp + N dµ = 0 . (4.130) − − − − − ⇒ − 4.6. GRAND CANONICAL ENSEMBLE (GCE) 23

4.6.3 Generalized susceptibilities in the GCE

We can appropriate the results from 4.5.8 and apply them, mutatis mutandis, to the GCE. Suppose we ˆ § ˆ ˆ ˆ ˆ ˆ ˆ have a family of observables Qk satisfying Qk , Qk′ = 0 and H0 , Qk = 0 and Na , Qk = 0 for all k, k′, and a. Then for the grand canonical Hamiltonian        Kˆ (~λ)= Hˆ µ Nˆ λ Qˆ , (4.131) 0 − a a − k k a X Xk we have that ~ ˆ ∂Ω Qk(λ, T )= Qk = (4.132) h i − ∂λk T,µ , λ   a k′(=6 k) and we may define the matrix of generalized susceptibilities,

2 1 ∂Qk 1 ∂ Ω χkl = = . (4.133) V ∂λl −V ∂λk ∂λl

4.6.4 Fluctuations in the GCE

Both energy and particle number fluctuate in the GCE. Let us compute the fluctuations in particle num- ber. We have Tr Nˆ e−β(Hˆ −µNˆ) 1 ∂ N = Nˆ = = ln Ξ . (4.134) h i Tr e−β(Hˆ −µNˆ) β ∂µ Therefore,

2 1 ∂N Tr Nˆ 2 e−β(Hˆ −µNˆ) Tr Nˆ e−β(Hˆ −µNˆ ) = ˆ ˆ ˆ ˆ β ∂µ Tr e−β(H−µN) − Tr e−β(H−µN) ! (4.135) = Nˆ 2 Nˆ 2 . −

Note now that 2 Nˆ 2 Nˆ k T ∂N k T − = B = B κ , (4.136) ˆ 2 N 2 ∂µ V T N  T,V where κT is the isothermal compressibility. Note: ∂N ∂(N,T,V ) ∂(N,T,V ) = = ∂µ ∂(µ, T, V ) − ∂(V,T,µ)  T,V 1 ∂(N,T,V ) ∂(N,T,p) ∂(V,T,p) ∂(N,T,µ) = (4.137) − ∂(N,T,p) · ∂(V,T,p) · z∂(N,T,µ}| ){ · ∂(V,T,µ) N 2 ∂V N 2 = = κ . − V 2 ∂p V T  T,N 24 CHAPTER 4. STATISTICAL ENSEMBLES

Thus, (∆N) k T κ RMS = B T , (4.138) N r V which again scales as V −1/2.

4.6.5 Gibbs ensemble

Let the system’s particle number N be fixed, but let it exchange energy and volume with the world W . Mutatis mutandis, we have

DW(EU En ,VU Vn)∆E ∆V Pn = lim lim − − . (4.139) ∆E→0 ∆V →0 DU(EU,VU)∆E ∆V Then ln P = ln D (E E ,V V ) ln D (E ,V ) n W U − n U − n − U U U = ln D (E ,V ) ln D (E ,V ) W U U − U U U (4.140) ∂ ln D (E,V ) ∂ ln D (E,V ) E W V W + . . . n E=E n E=E − ∂E U − ∂V U V =V V =V U U

α βE βp V . ≡− − n − n The constants β and p are given by

∂ ln D (E,V ) 1 β = W = (4.141) E=E ∂E U kBT V =V U

∂ ln DW(E,V ) p = kBT . (4.142) E=E ∂V U V =V U

The corresponding partition function is

∞ ˆ 1 Y (T, p, N)= Tr e−β(H+pV ) = dV e−βpV Z(T,V,N) e−βG(T,p,N) , (4.143) V0 ≡ Z0 −1 where V0 is a constant which has dimensions of volume. The factor V0 in front of the integral renders ′ ′ Y dimensionless. Note that G(V0 ) = G(V0)+ kBT ln(V0 /V0), so the difference is not extensive and can be neglected in the thermodynamic limit. In other words, it doesn’t matter what constant we choose for V0 since it contributes subextensively to G. Moreover, in computing averages, the constant V0 divides out in the ratio of numerator and denominator. Like the Helmholtz free energy, the Gibbs free energy G(T, p, N) is also a double Legendre transform of the energy E(S, V, N), viz. G = E TS + pV − (4.144) dG = S dT + V dp + µ dN , − 4.7. STATISTICAL ENSEMBLES FROM MAXIMUM ENTROPY 25

which entails ∂G ∂G ∂G S = ,V =+ , µ =+ . (4.145) − ∂T ∂p ∂N  p,N  T,N  T,p

4.7 Statistical Ensembles from Maximum Entropy

The basic principle: maximize the entropy,

S = k P ln P . (4.146) − B n n n X

4.7.1 µCE

We maximize S subject to the single constraint

C = P 1 = 0 . (4.147) n − n X We implement the constraint C = 0 with a Lagrange multiplier, λ¯ k λ, writing ≡ B S∗ = S k λ C , (4.148) − B and freely extremizing over the distribution P and the Lagrange multiplier λ. Thus, { n} δS∗ = δS k λδC k C δλ − B − B = k ln P +1+ λ δP k C δλ 0 . (4.149) − B n n − B ≡ n X h i We conclude that C = 0 and that ln P = 1+ λ , (4.150) n − and we fix λ by the normalization condition n Pn = 1. This gives Pn = 1/Ω, with P Ω= Θ(E +∆E E ) Θ(E E) , (4.151) − n n − n X i.e. the total number of energy states lying in the interval [E, E +∆E] .

4.7.2 OCE

We maximize S subject to the two constraints

C = P 1 = 0 , C = E P E = 0 . (4.152) 1 n − 2 n n − n n X X 26 CHAPTER 4. STATISTICAL ENSEMBLES

We now have two Lagrange multipliers. We write

2 S∗ = S k λ C , (4.153) − B j j Xj=1 and we freely extremize over P and C . We therefore have { n} { j} 2 δS∗ = δS k λ + λ E δP k C δλ − B 1 2 n n − B j j n j=1 X  X (4.154) 2 = k ln P +1+ λ + λ E δP k C δλ 0 . − B n 1 2 n n − B j j ≡ n X h i Xj=1 Thus, C1 = C2 = 0 and ln P = 1+ λ + λ E . (4.155) n − 1 2 n We define λ2 β and we fix λ1 by normalization. This yields  ≡ 1 ˆ P = e−βEn ,Z = e−βEn = Tr e−βH . (4.156) n Z n X

4.7.3 GCE

We maximize S subject to the three constraints

C = P 1 = 0 , C = E P E = 0 , C = N P N = 0 . (4.157) 1 n − 2 n n − 3 n n − n n n X X X We now have three Lagrange multipliers. We write

3 S∗ = S k λ C , (4.158) − B j j Xj=1 and hence 3 δS∗ = δS k λ + λ E + λ N δP k C δλ − B 1 2 n 3 n n − B j j n j=1 X  X (4.159) 3 = k ln P +1+ λ + λ E + λ N δP k C δλ 0 . − B n 1 2 n 3 n n − B j j ≡ n X h i Xj=1 Thus, C1 = C2 = C3 = 0 and ln P = 1+ λ + λ E + λ N . (4.160) n − 1 2 n 3 n We define λ β and λ βµ, and we fix λ by normalization. This yields 2 ≡ 3 ≡− 1 1 ˆ ˆ P = e−β(En−µNn) , Ξ = e−β(En−µNn) = Tr e−β(H−µN) . (4.161) n Ξ n X 4.8. IDEAL GAS STATISTICAL MECHANICS 27

4.8 Ideal Gas Statistical Mechanics

The ordinary canonical partition function for the ideal gas was computed in eqn. 4.14. We found

N d d 1 d x d p p2 Z(T,V,N)= i i e−β i /2m N! (2π~)d Yi=1 Z ∞ Nd (4.162) N N V dp 2 1 V = e−βp /2m = , ~ d N!  2π  N! λT −∞Z     where λT is the thermal wavelength: ~2 λT = 2π /mkBT . (4.163)

The physical interpretation of λT is that it is thep de Broglie wavelength for a particle of mass m which has a kinetic energy of kBT . In the GCE, we have

∞ Ξ(T,V,µ)= eβµN Z(T,V,N) N=0 X N (4.164) ∞ µ/k T µ/k T 1 V e B V e B = d = exp d . N! λT ! λT ! NX=1 −Ω/k T From Ξ = e B , we have the grand potential is

µ/k T d Ω(T,V,µ)= V k T e B λ . (4.165) − B T Since Ω = pV (see 4.6.2), we have  − §

−d µ/kBT p(T,µ)= kBT λT e . (4.166)

The number density can also be calculated:

N 1 ∂Ω −d µ/k T n = = = λ e B . (4.167) V −V ∂µ T  T,V

Combined, the last two equations recapitulate the ideal gas law, pV = NkBT .

4.8.1 Maxwell velocity distribution

The distribution function for momenta is given by

N 1 g(p)= δ(p p) . (4.168) N i − D Xi=1 E 28 CHAPTER 4. STATISTICAL ENSEMBLES

Figure 4.6: Maxwell distribution of speeds ϕ(v/v0). The most probable speed is vMAX = √2 v0. The 8 √ average speed is vAVG = π v0. The RMS speed is vRMS = 3 v0. q

Note that g(p) = δ(pi p) is the same for every particle, independent of its label i. We compute the −ˆ ˆ average A = Tr Ae−βH / Tr e−βH . Setting i = 1, all the integrals other than that over p divide out h i 1 between numerator and denominator. We then have  3 −βp2/2m d p δ(p p) e 1 g(p)= 1 1 − 3 −βp2/2m R d p1 e 1 (4.169) −3/2 −βp2/2m = (2πmkR BT ) e . Textbooks commonly refer to the velocity distribution f(v), which is related to g(p) by

f(v) d3v = g(p) d3p . (4.170)

Hence, 3/2 m −mv2/2k T f(v)= e B . (4.171) 2πk T  B  This is known as the Maxwell velocity distribution. Note that the distributions are normalized, viz.

d3p g(p)= d3vf(v) = 1 . (4.172) Z Z If we are only interested in averaging functions of v = v which are isotropic, then we can define the | | Maxwell speed distribution, f˜(v), as

3/2 2 m 2 −mv2/2k T f˜(v) = 4π v f(v) = 4π v e B . (4.173) 2πk T  B  4.8. IDEAL GAS STATISTICAL MECHANICS 29

∞ Note that f˜(v) is normalized according to dv f˜(v) = 1 . It is convenient to represent v in units of 0 R v0 = kBT/m, in which case

p 1 2 f˜(v)= ϕ(v/v ) , ϕ(s)= 2 s2 e−s /2 . (4.174) v 0 π 0 q The distribution ϕ(s) is shown in fig. 4.6. Computing averages, we have

∞ 2 C sk = ds sk ϕ(s) = 2k/2 Γ 3 + k . (4.175) k ≡ h i · √π 2 2 Z 0 

8 Thus, C0 = 1, C1 = π , C2 = 3, etc. The speed averages are q k T k/2 vk = C B . (4.176) k m  

Note that the average velocity is v = 0, but the average speed is v = 8k T/πm. The speed distribu- h i h i B tion is plotted in fig. 4.6. p

4.8.2 Equipartition

The Hamiltonian for ballistic (i.e. massive nonrelativistic) particles is quadratic in the individual com- ponents of each momentum pi. There are other cases in which a classical degree of freedom appears quadratically in Hˆ as well. For example, an individual normal mode ξ of a system of coupled oscillators has the Lagrangian L = 1 ξ˙2 1 ω2 ξ2 , (4.177) 2 − 2 0 where the dimensions of ξ are [ξ] = M 1/2L by convention. The Hamiltonian for this normal mode is then p2 Hˆ = + 1 ω2 ξ2 , (4.178) 2 2 0 from which we see that both the kinetic as well as potential energy terms enter quadratically into the Hamiltonian. The classical rotational kinetic energy is also quadratic in the angular momentum compo- nents. Let us compute the contribution of a single quadratic degree of freedom in Hˆ to the partition function. We’ll call this degree of freedom ζ – it may be a position or momentum or angular momentum – and ˆ ˆ 1 2 we’ll write its contribution to H as Hζ = 2 Kζ , where K is some constant. Integrating over ζ yields the following factor in the partition function:

∞ 1/2 2 2π dζ e−βKζ /2 = . (4.179) Kβ −∞Z   30 CHAPTER 4. STATISTICAL ENSEMBLES

The contribution to the Helmholtz free energy is then K ∆F = 1 k T ln , (4.180) ζ 2 B 2πk T  B  and therefore the contribution to the E is ∂ 1 ∆E = β ∆F = = 1 k T . (4.181) ζ ∂β ζ 2β 2 B  We have thus derived what is commonly called the equipartition theorem of classical statistical mechanics:

To each degree of freedom which enters the Hamiltonian quadratically is associated a contri- 1 bution 2 kBT to the internal energy of the system. This results in a concomitant contribution 1 of 2 kB to the heat capacity.

We now see why the internal energy of a classical ideal gas with f degrees of freedom per molecule is 1 1 E = 2 fNkBT , and CV = 2 NkB. This result also has applications in the theory of solids. The in a solid possess kinetic energy due to their motion, and potential energy due to the spring-like in- teratomic potentials which tend to keep the atoms in their preferred crystalline positions. Thus, for a three-dimensional crystal, there are six quadratic degrees of freedom (three positions and three mo- menta) per , and the classical energy should be E = 3NkBT , and the heat capacity CV = 3NkB. As we shall see, quantum mechanics modifies this result considerably at temperatures below the high- est normal mode (i.e. phonon) frequency, but the high temperature limit is given by the classical value

CV = 3νR (where ν = N/NA is the number of moles) derived here, known as the Dulong-Petit limit.

4.9 Selected Examples

4.9.1 Spins in an external magnetic field

Consider a system of N spins , each of which can be either up (σ = +1) or down (σ = 1). The s − Hamiltonian for this system is Ns Hˆ = µ H σ , (4.182) − 0 j Xj=1 ˆ where now we write H for the Hamiltonian, to distinguish it from the external magnetic field H, and µ0 is the magnetic moment per particle. We treat this system within the ordinary canonical ensemble. The partition function is ˆ Z = e−βH = ζNs , (4.183) · · · σ σ X1 XNs where ζ is the single particle partition function:

µ Hσ/k T µ0H ζ = e 0 B = 2 cosh . (4.184) kBT σX=±1   4.9. SELECTED EXAMPLES 31

The Helmholtz free energy is then

µ H F (T,H,N )= k T ln Z = N k T ln 2 cosh 0 . (4.185) s − B − s B k T "  B # The magnetization is ∂F µ H M = = N µ tanh 0 . (4.186) − ∂H s 0 k T  T,Ns  B  The energy is ∂ µ H E = βF = N µ H tanh 0 . (4.187) ∂β − s 0 k T  B   Hence, E = HM, which we already knew, from the form of Hˆ itself. − Each spin here is independent. The probability that a given spin has polarization σ is

eβµ0Hσ Pσ = βµ H −βµ H . (4.188) e 0 + e 0

The total probability is unity, i.e. P↑ + P↓ = 1, and the average polarization is a weighted average of σ = +1 and σ = 1 contributions: − µ H σ = P P = tanh 0 . (4.189) h i ↑ − ↓ k T  B  −2µ H/k T At low temperatures T µ H/k , we have P 1 e 0 B . At high temperatures T >µ H/k , ≪ 0 B ↑ ≈ − 0 B 1 σµ0H the two polarizations are equally likely, and Pσ 2 1+ k T . ≈ B   The isothermal magnetic susceptibility is defined as

1 ∂M µ2 µ H χ = = 0 sech2 0 . (4.190) T N ∂H k T k T s  T B  B  χ 2 (Typically this is computed per unit volume rather than per particle.) At H = 0, we have T = µ0/kBT , which is known as the Curie law.

Aside

The energy E = HM here is not the same quantity we discussed in our study of thermodynamics. − In fact, the thermodynamic energy for this problem vanishes! Here is why. To avoid confusion, we’ll need to invoke a new symbol for the thermodynamic energy, . Recall that the thermodynamic energy E is a function of extensive quantities, meaning = (S, M, N ). It is obtained from the free energy E E E s F (T,H,Ns) by a double Legendre transform:

(S, M, N )= F (T,H,N )+ TS + HM . (4.191) E s s 32 CHAPTER 4. STATISTICAL ENSEMBLES

Figure 4.7: When entropy decreases with increasing energy, the temperature is negative. Typically, kinetic degrees of freedom prevent this peculiarity from manifesting in physical systems.

Now from eqn. 4.185 we derive the entropy

∂F µ H µ H µ H S = = N k ln 2 cosh 0 N 0 tanh 0 . (4.192) − ∂T s B k T − s T k T "  B #  B  Thus, using eqns. 4.185 and 4.186, we obtain (S, M, N ) = 0. E s The potential confusion here arises from our use of the expression F (T,H,Ns). In thermodynamics, it is the Gibbs free energy G(T, p, N) which is a double Legendre transform of the energy: G = TS + pV . E− By analogy, with magnetic systems we should perhaps write G = TS HM, but in keeping with E − − many textbooks we shall use the symbol F and refer to it as the Helmholtz free energy. The quantity we’ve called E in eqn. 4.187 is in fact E = HM, which means = 0. The energy (S, M, N ) vanishes E− E E s here because the spins are noninteracting.

4.9.2 Negative temperature (!)

Consider again a system of N spins, each of which can be either up (+) or down ( ). Let N be the s − σ number of sites with spin σ, where σ = 1. Clearly N + N = N . We now treat this system within the ± + − s microcanonical ensemble. The energy of the system is E = HM, where H is an external magnetic field, and M = (N N ) µ − + − − 0 is the total magnetization. We now compute S(E) using the ordinary canonical ensemble. The number of ways of arranging the system with N+ up spins is N Ω= s , (4.193) N  + 4.9. SELECTED EXAMPLES 33

hence the entropy is S = k lnΩ = N k x ln x + (1 x) ln(1 x) (4.194) B − s B − − n o in the thermodynamic limit: N , N , x = N /N constant. Now the magnetization is s → ∞ + → ∞ + s M = (N N )µ = (2N N )µ , hence if we define the maximum energy E N µ H, then + − − 0 + − s 0 0 ≡ s 0 E M E E = = 1 2x = x = 0 − . (4.195) E0 −Ns µ0 − ⇒ 2E0

We therefore have

E E E E E + E E + E S(E,N )= N k 0 − ln 0 − + 0 ln 0 . (4.196) s − s B 2E 2E 2E 2E " 0   0   0   0 # We now have 1 ∂S ∂S ∂x N k E E = = = s B ln 0 − . (4.197) T ∂E ∂x ∂E 2E E + E  Ns 0  0  We see that the temperature is positive for E E < 0 and is negative for 0 < E E . − 0 ≤ ≤ 0 What has gone wrong? The answer is that nothing has gone wrong – all our calculations are perfectly correct. This system does exhibit the possibility of negative temperature. It is, however, unphysical in that we have neglected kinetic degrees of freedom, which result in an entropy function S(E,Ns) which is an increasing function of energy. In this system, S(E,Ns) achieves a maximum of Smax = Ns kB ln 2 1 at E = 0 (i.e. x = 2 ), and then turns over and starts decreasing. In fact, our results are completely consistent with eqn. 4.187 : the energy E is an odd function of temperature. Positive energy requires negative temperature! Another example of this peculiarity is provided in the appendix in 4.11.2. §

4.9.3 Adsorption

PROBLEM: A surface containing Ns adsorption sites is in equilibrium with a monatomic ideal gas. Atoms adsorbed on the surface have an energy ∆ and no kinetic energy. Each adsorption site can accommo- − date at most one atom. Calculate the fraction f of occupied adsorption sites as a function of the gas density n, the temperature T , the binding energy ∆, and physical constants. The grand partition function for the surface is

N s N −Ωsurf /kBT s j(µ+∆)/kBT Ξsurf = e = e j (4.198) Xj=0   µ/k T ∆/k T Ns = 1+ e B e B .

The fraction of occupied sites is 

µ/k T Nˆ 1 ∂Ω e B f = h surf i = surf = . µ/k T −∆/k T (4.199) Ns −Ns ∂µ e B + e B 34 CHAPTER 4. STATISTICAL ENSEMBLES

Figure 4.8: The monomers in wool are modeled as existing in one of two states. The low energy unde- formed state is A, and the higher energy deformed state is B. Applying tension induces more monomers to enter the B state.

Since the surface is in equilibrium with the gas, its fugacity z = exp(µ/kBT ) and temperature T are the same as in the gas. SOLUTION: −3 For a monatomic ideal gas, the single particle partition function is ζ = V λT , where λT = ~2 2π /mkBT is the thermal wavelength. Thus, the grand partition function, for indistinguishable par- ticles, is p −3 µ/kBT Ξgas = exp V λT e . (4.200)   The gas density is ˆ Ngas 1 ∂Ωgas −3 µ/k T n = h i = = λ e B . (4.201) V −V ∂µ T

µ/k T 3 We can now solve for the fugacity: z = e B = nλT . Thus, the fraction of occupied adsorption sites is

nλ3 f = T . (4.202) 3 −∆/kBT nλT + e

Interestingly, the solution for f involves the constant ~. It is always advisable to check that the solution makes sense in various limits. First of all, if the gas density tends to zero at fixed T and ∆, we have f 0. On the other hand, if n we have f 1, → → ∞ → which also makes sense. At fixed n and T , if the adsorption energy is ( ∆) , then once again − → −∞ f = 1 since every adsorption site wants to be occupied. Conversely, taking ( ∆) + results in − → ∞ n 0, since the energetic cost of adsorption is infinitely high. →

4.9.4 Elasticity of wool

Wool consists of interlocking protein molecules which can stretch into an elongated configuration, but reversibly so. This feature gives wool its very useful elasticity. Let us model a chain of these proteins by assuming they can exist in one of two states, which we will call A and B, with energies εA and εB and lengths ℓA and ℓB. The situation is depicted in fig. 4.8. We model these conformational degrees of freedom by a spin variable σ = 1 for each molecule, where σ = +1 in the A state and σ = 1 in the B ± − state. Suppose a chain consisting of N monomers is placed under a tension τ. We then have

N Hˆ = ε δ + ε δ . (4.203) A σj ,+1 B σj ,−1 Xj=1 h i 4.9. SELECTED EXAMPLES 35

Figure 4.9: Upper panel: length L(τ, T ) for kBT/ε˜ = 0.01 (blue), 0.1 (green), 0.5 (dark red), and 1.0 (red). Bottom panel: dimensionless force constant k/N(∆ℓ)2 versus temperature.

Similarly, the length is N Lˆ = ℓ δ + ℓ δ . (4.204) A σj ,+1 B σj ,−1 Xj=1 h i −K/kˆ T The Gibbs partition function is Y = Tr e B , with Kˆ = Hˆ τLˆ : − N Kˆ = ε˜ δ +ε ˜ δ , (4.205) A σj ,+1 B σj ,−1 Xj=1 h i where ε˜A εA τℓA and ε˜B εB τℓB. At τ = 0 the A state is preferred for each monomer, but when τ ∗≡ − ≡ − exceeds τ , defined by the relation ε˜A =ε ˜B, the B state is preferred. One finds ε ε τ ∗ = B − A . (4.206) ℓ ℓ B − A

Once again, we have a set of N noninteracting spins. The partition function is Y = ζN , where ζ is the ˆ single monomer partition function, ζ = Tr e−βh, where

hˆ =ε ˜ δ +ε ˜ δ (4.207) A σj ,1 B σj ,−1 36 CHAPTER 4. STATISTICAL ENSEMBLES

is the single “spin” Hamiltonian. Thus,

−βhˆ −βε˜ −βε˜ ζ = Tr e = e A + e B , (4.208)

It is convenient to define the differences

∆ε = ε ε , ∆ℓ = ℓ ℓ , ∆˜ε =ε ˜ ε˜ (4.209) B − A B − A B − A in which case the partition function Y is

N −Nβ ε˜ −β∆˜ε Y (T, τ, N)= e A 1+ e (4.210)

h i −∆˜ε/k T G(T, τ, N)= Nε˜ Nk T ln 1+ e B (4.211) A − B h i The average length is

∂G L = Lˆ = h i − ∂τ  T,N (4.212) N∆ℓ = Nℓ + . A (∆ε−τ∆ℓ)/k T e B + 1

The polymer behaves as a spring, and for small τ the spring constant is

∂τ 4k T ∆ε k = = B cosh2 . (4.213) ∂L N(∆ℓ)2 2k T τ=0  B 

The results are shown in fig. 4.9. Note that length increases with temperature for τ <τ ∗ and decreases with temperature for τ >τ ∗. Note also that k diverges at both low and high temperatures. At low T , the 1 energy gap ∆ε dominates and L = NℓA, while at high temperatures kBT dominates and L = 2 N(ℓA+ℓB).

4.9.5 Noninteracting spin dimers

Consider a system of noninteracting spin dimers as depicted in fig. 4.10. Each dimer contains two spins, and is described by the Hamiltonian
$$ \hat H_{\rm dimer} = -J\,\sigma_1\sigma_2 - \mu_0 H\,(\sigma_1 + \sigma_2) \ . \tag{4.214} $$
Here, $J$ is an interaction energy between the spins which comprise the dimer. If $J > 0$ the interaction is ferromagnetic, which prefers that the spins be aligned. That is, the lowest energy states are $|\!\uparrow\uparrow\,\rangle$ and $|\!\downarrow\downarrow\,\rangle$. If $J < 0$ the interaction is antiferromagnetic, which prefers that the spins be anti-aligned: $|\!\uparrow\downarrow\,\rangle$ and $|\!\downarrow\uparrow\,\rangle$.9

9 Nota bene we are concerned with classical spin configurations only – there is no superposition of states allowed in this model!

Suppose there are $N_d$ dimers. Then the OCE partition function is $Z = \zeta^{N_d}$, where $\zeta(T,H)$ is the single dimer partition function. To obtain $\zeta(T,H)$, we sum over the four possible states of the two spins, obtaining
$$ \zeta = {\rm Tr}\, e^{-\hat H_{\rm dimer}/k_BT} = 2\,e^{-J/k_BT} + 2\,e^{J/k_BT}\cosh\!\bigg( \frac{2\mu_0 H}{k_BT} \bigg) \ . $$
Thus, the free energy is

$$ F(T,H,N_d) = -N_d k_BT \ln 2 - N_d k_BT \ln\!\bigg[ e^{-J/k_BT} + e^{J/k_BT}\cosh\!\bigg( \frac{2\mu_0 H}{k_BT} \bigg) \bigg] \ . \tag{4.215} $$
The magnetization is

$$ M = -\frac{\partial F}{\partial H}\bigg|_{T,N_d} = 2N_d\,\mu_0 \cdot \frac{e^{J/k_BT}\,\sinh\!\big( \tfrac{2\mu_0 H}{k_BT} \big)}{e^{-J/k_BT} + e^{J/k_BT}\cosh\!\big( \tfrac{2\mu_0 H}{k_BT} \big)} \ . \tag{4.216} $$
It is instructive to consider the zero field isothermal susceptibility per spin,

$$ \chi_T = \frac{1}{2N_d}\,\frac{\partial M}{\partial H}\bigg|_{H=0} = \frac{\mu_0^2}{k_BT} \cdot \frac{2\,e^{J/k_BT}}{e^{J/k_BT} + e^{-J/k_BT}} \ . \tag{4.217} $$

The quantity $\mu_0^2/k_BT$ is simply the Curie susceptibility for noninteracting classical spins. Note that we correctly recover the Curie result when $J = 0$, since then the individual spins comprising each dimer are in fact noninteracting. For the ferromagnetic case, if $J \gg k_BT$, then we obtain
$$ \chi_T\,(J \gg k_BT) \approx \frac{2\mu_0^2}{k_BT} \ . \tag{4.218} $$
This has the following simple interpretation. When $J \gg k_BT$, the spins of each dimer are effectively locked in parallel. Thus, each dimer has an effective magnetic moment $\mu_{\rm eff} = 2\mu_0$. On the other hand, there are only half as many dimers as there are spins, so the resulting Curie susceptibility per spin is $\frac{1}{2}\times(2\mu_0)^2/k_BT$.

When $-J \gg k_BT$, the spins of each dimer are effectively locked in one of the two antiparallel configurations. We then have
$$ \chi_T\,(-J \gg k_BT) \approx \frac{2\mu_0^2}{k_BT}\, e^{-2|J|/k_BT} \ . \tag{4.219} $$
In this case, the individual dimers have essentially zero magnetic moment.
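The crossover between the Curie, ferromagnetic, and antiferromagnetic limits of eq. 4.217 is easy to check numerically. The following sketch (not part of the original notes, in units where $\mu_0 = k_B = 1$, an arbitrary choice) evaluates $\chi_T$ per spin:

    import numpy as np

    def chiT_per_spin(J, T, mu0=1.0, kB=1.0):
        """Zero-field susceptibility per spin, eq. 4.217 (units mu0 = kB = 1)."""
        b = 1.0/(kB*T)
        return (mu0**2/(kB*T)) * 2*np.exp(J*b)/(np.exp(J*b) + np.exp(-J*b))

    T = 1.0
    for J in [0.0, 5.0, -5.0]:
        print(J, T*chiT_per_spin(J, T))
    # J = 0 recovers the Curie value mu0^2/kB; J >> kB*T approaches twice that;
    # -J >> kB*T is exponentially suppressed, as in eqs. 4.218 and 4.219.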

4.10 Statistical Mechanics of Molecular Gases

4.10.1 Separation of translational and internal degrees of freedom

Figure 4.10: A model of noninteracting spin dimers on a lattice. Each red dot represents a classical spin for which $\sigma_j = \pm 1$.

The states of a noninteracting atom or molecule are labeled by its total momentum $\mathbf p$ and its internal quantum numbers, which we will simply write with a collective index $\alpha$, specifying rotational, vibrational, and electronic degrees of freedom. The single particle Hamiltonian is then
$$ \hat h = \frac{\mathbf p^2}{2m} + \hat h_{\rm int} \ , \tag{4.220} $$
with
$$ \hat h\,\big|\,\mathbf k\,,\,\alpha\,\big\rangle = \bigg( \frac{\hbar^2 k^2}{2m} + \varepsilon_\alpha \bigg) \big|\,\mathbf k\,,\,\alpha\,\big\rangle \ . \tag{4.221} $$

The partition function is
$$ \zeta = {\rm Tr}\, e^{-\beta\hat h} = \sum_{\mathbf p} e^{-\beta p^2/2m} \sum_j g_j\, e^{-\beta\varepsilon_j} \ . \tag{4.222} $$
Here we have replaced the internal label $\alpha$ with a label $j$ of energy eigenvalues, with $g_j$ being the degeneracy of the internal state with energy $\varepsilon_j$. To do the $\mathbf p$ sum, we quantize in a box of dimensions $L_1 \times L_2 \times \cdots \times L_d$, using periodic boundary conditions. Then
$$ \mathbf p = \bigg( \frac{2\pi\hbar n_1}{L_1}\,,\,\frac{2\pi\hbar n_2}{L_2}\,,\,\ldots\,,\,\frac{2\pi\hbar n_d}{L_d} \bigg) \ , \tag{4.223} $$
where each $n_i$ is an integer. Since the differences between neighboring quantized $\mathbf p$ vectors are very tiny, we can replace the sum over $\mathbf p$ by an integral:

$$ \sum_{\mathbf p} \longrightarrow \int\! \frac{d^d\!p}{\Delta p_1 \cdots \Delta p_d} \ , \tag{4.224} $$
where the volume in momentum space of an elementary rectangle is
$$ \Delta p_1 \cdots \Delta p_d = \frac{(2\pi\hbar)^d}{L_1 \cdots L_d} = \frac{(2\pi\hbar)^d}{V} \ . \tag{4.225} $$

Thus,
$$ \zeta = V\!\int\! \frac{d^d\!p}{(2\pi\hbar)^d}\; e^{-p^2/2mk_BT} \sum_j g_j\, e^{-\varepsilon_j/k_BT} = V\lambda_T^{-d}\,\xi \ , \tag{4.226} $$
$$ \xi(T) = \sum_j g_j\, e^{-\varepsilon_j/k_BT} \ . \tag{4.227} $$
Here, $\xi(T)$ is the internal coordinate partition function. The full $N$-particle ordinary canonical partition function is then
$$ Z_N = \frac{1}{N!} \bigg( \frac{V}{\lambda_T^d} \bigg)^{\!N} \xi^N(T) \ . \tag{4.228} $$
Using Stirling's approximation, we find the Helmholtz free energy $F = -k_BT\ln Z$ is
$$ F(T,V,N) = -Nk_BT \bigg[ \ln\!\bigg( \frac{V}{N\lambda_T^d} \bigg) + 1 + \ln\xi(T) \bigg] = -Nk_BT \bigg[ \ln\!\bigg( \frac{V}{N\lambda_T^d} \bigg) + 1 \bigg] + N\varphi(T) \ , \tag{4.229} $$
where
$$ \varphi(T) = -k_BT\,\ln\xi(T) \tag{4.230} $$
is the internal coordinate contribution to the single particle free energy. We could also compute the partition function in the Gibbs $(T,p,N)$ ensemble:

$$ Y(T,p,N) = e^{-\beta G(T,p,N)} = \frac{1}{V_0} \int\limits_0^\infty \!\! dV\; e^{-\beta pV}\, Z(T,V,N) = \bigg( \frac{k_BT}{pV_0} \bigg) \bigg( \frac{k_BT}{p\,\lambda_T^d} \bigg)^{\!N} \xi^N(T) \ . \tag{4.231} $$
Thus, in the thermodynamic limit,

$$ \mu(T,p) = \frac{G(T,p,N)}{N} = k_BT\,\ln\!\bigg( \frac{p\,\lambda_T^d}{k_BT} \bigg) - k_BT\,\ln\xi(T) = k_BT\,\ln\!\bigg( \frac{p\,\lambda_T^d}{k_BT} \bigg) + \varphi(T) \ . \tag{4.232} $$
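As a numerical cross-check of eqs. 4.228-4.229 (and of the ideal gas law derived in the next subsection), one can differentiate the translational free energy with respect to volume. This sketch is not part of the original notes; the particle mass and state point are arbitrary, and $\xi(T)$ is set to 1:

    import numpy as np

    hbar, kB = 1.0546e-34, 1.381e-23

    def lamT(T, m):
        return np.sqrt(2*np.pi*hbar**2/(m*kB*T))

    def F(T, V, N, m):
        """Helmholtz free energy, eq. 4.229, with xi(T) = 1 (no internal dof)."""
        return -N*kB*T*(np.log(V/(N*lamT(T, m)**3)) + 1.0)

    T, N, m = 300.0, 1e22, 6.6e-26      # illustrative: roughly an argon mass
    V, dV = 1e-3, 1e-9
    p_numeric = -(F(T, V+dV, N, m) - F(T, V-dV, N, m))/(2*dV)
    print(p_numeric, N*kB*T/V)          # the two agree: p = N kB T / V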

4.10.2 Ideal gas law

Since the internal coordinate contribution to the free energy is volume-independent, we have

$$ V = \frac{\partial G}{\partial p}\bigg|_{T,N} = \frac{Nk_BT}{p} \ , \tag{4.233} $$

and the ideal gas law applies. The entropy is
$$ S = -\frac{\partial G}{\partial T}\bigg|_{p,N} = Nk_B \bigg[ \ln\!\bigg( \frac{k_BT}{p\,\lambda_T^d} \bigg) + 1 + \tfrac{1}{2}d \bigg] - N\varphi'(T) \ , \tag{4.234} $$
and therefore the heat capacities are
$$ C_p = T\,\frac{\partial S}{\partial T}\bigg|_{p,N} = \big( \tfrac{1}{2}d + 1 \big)\,Nk_B - NT\,\varphi''(T) \ , \tag{4.235} $$
$$ C_V = T\,\frac{\partial S}{\partial T}\bigg|_{V,N} = \tfrac{1}{2}d\,Nk_B - NT\,\varphi''(T) \ . \tag{4.236} $$

Thus, any temperature variation in Cp must be due to the internal degrees of freedom.

4.10.3 The internal coordinate partition function

At energy scales of interest we can separate the internal degrees of freedom into distinct classes, writing
$$ \hat h_{\rm int} = \hat h_{\rm rot} + \hat h_{\rm vib} + \hat h_{\rm elec} \tag{4.237} $$
as a sum over internal Hamiltonians governing rotational, vibrational, and electronic degrees of freedom. Then
$$ \xi_{\rm int} = \xi_{\rm rot} \cdot \xi_{\rm vib} \cdot \xi_{\rm elec} \ . \tag{4.238} $$
Associated with each class of excitation is a characteristic temperature $\Theta$. Rotational and vibrational temperatures of a few common molecules are listed in table 4.1.

4.10.4 Rotations

Consider a class of molecules which can be approximated as an axisymmetric top. The rotational Hamiltonian is then
$$ \hat h_{\rm rot} = \frac{L_a^2 + L_b^2}{2I_1} + \frac{L_c^2}{2I_3} = \frac{\hbar^2 L(L+1)}{2I_1} + \bigg( \frac{1}{2I_3} - \frac{1}{2I_1} \bigg) L_c^2 \ , \tag{4.239} $$
where $\hat{\mathbf n}_{a,b,c}(t)$ are the principal axes, with $\hat{\mathbf n}_c$ the symmetry axis, and $L_{a,b,c}$ are the components of the angular momentum vector $\mathbf L$ about these instantaneous body-fixed principal axes. The components of $\mathbf L$ along space-fixed axes $\{x,y,z\}$ are written as $L^{x,y,z}$. Note that
$$ \big[ L^\mu , L_c \big] = \big[ L^\mu , \hat n_c^\nu L^\nu \big] = \hat n_c^\nu \big[ L^\mu , L^\nu \big] + \big[ L^\mu , \hat n_c^\nu \big]\, L^\nu = i\epsilon_{\mu\nu\lambda}\, \hat n_c^\nu L^\lambda + i\epsilon_{\mu\nu\lambda}\, \hat n_c^\lambda L^\nu = 0 \ , \tag{4.240} $$

    molecule     Theta_rot (K)         Theta_vib (K)
    H2           85.4                  6100
    N2           2.86                  3340
    H2O          13.7 , 21.0 , 39.4    2290 , 5180 , 5400

Table 4.1: Some rotational and vibrational temperatures of common molecules.

which is equivalent to the statement that $L_c = \hat{\mathbf n}_c \cdot \mathbf L$ is a rotational scalar. We can therefore simultaneously specify the eigenvalues of $\{ \mathbf L^2, L^z, L_c \}$, which form a complete set of commuting observables (CSCO)10. The eigenvalues of $L^z$ are $m\hbar$ with $m \in \{-L,\ldots,L\}$, while those of $L_c$ are $k\hbar$ with $k \in \{-L,\ldots,L\}$. There is a $(2L+1)$-fold degeneracy associated with the $L^z$ quantum number.

We assume the molecule is prolate, so that $I_3 < I_1$, and define the temperature scales
$$ \Theta \equiv \frac{\hbar^2}{2I_1 k_B} \quad , \quad \widetilde\Theta \equiv \frac{\hbar^2}{2I_3 k_B} \ , \tag{4.241} $$
with $\widetilde\Theta > \Theta$. We conclude that the rotational partition function for an axisymmetric molecule is given by
$$ \xi_{\rm rot}(T) = \sum_{L=0}^{\infty} (2L+1)\, e^{-L(L+1)\,\Theta/T} \sum_{k=-L}^{L} e^{-k^2 (\widetilde\Theta - \Theta)/T} \ . \tag{4.242} $$

In diatomic molecules, $I_3$ is extremely small, and $\widetilde\Theta \gg T$ at all relevant temperatures. Only the $k = 0$ term contributes to the partition sum, and we have
$$ \xi_{\rm rot}(T) = \sum_{L=0}^{\infty} (2L+1)\, e^{-L(L+1)\,\Theta/T} \ . \tag{4.243} $$
When $T \ll \Theta$, only the first few terms contribute, and
$$ \xi_{\rm rot}(T) = 1 + 3\,e^{-2\Theta/T} + 5\,e^{-6\Theta/T} + \ldots \tag{4.244} $$

In the high temperature limit, we have a slowly varying summand. The Euler-MacLaurin summation formula may be used to evaluate such a series:
$$ \sum_{k=0}^{n} F_k = \int\limits_0^n \!\! dk\, F(k) + \tfrac{1}{2}\big[ F(0) + F(n) \big] + \sum_{j=1}^{\infty} \frac{B_{2j}}{(2j)!} \Big[ F^{(2j-1)}(n) - F^{(2j-1)}(0) \Big] \ , \tag{4.245} $$
where $B_j$ is the $j^{\rm th}$ Bernoulli number, with
$$ B_0 = 1 \quad , \quad B_1 = -\tfrac{1}{2} \quad , \quad B_2 = \tfrac{1}{6} \quad , \quad B_4 = -\tfrac{1}{30} \quad , \quad B_6 = \tfrac{1}{42} \ . \tag{4.246} $$

10 Note that while we cannot simultaneously specify the eigenvalues of two components of $\mathbf L$ along axes fixed in space, we can simultaneously specify the components of $\mathbf L$ along one axis fixed in space and one axis rotating with the body. See Landau and Lifshitz, Quantum Mechanics, §103.

Thus,
$$ \sum_{k=0}^{\infty} F_k = \int\limits_0^\infty \!\! dx\, F(x) + \tfrac{1}{2}F(0) - \tfrac{1}{12}F'(0) + \tfrac{1}{720}F'''(0) + \ldots \ . \tag{4.247} $$
We have $F(x) = (2x+1)\,e^{-x(x+1)\,\Theta/T}$, for which $\int_0^\infty \! dx\, F(x) = \frac{T}{\Theta}$, hence
$$ \xi_{\rm rot} = \frac{T}{\Theta} + \frac{1}{3} + \frac{1}{15}\,\frac{\Theta}{T} + \frac{4}{315} \bigg( \frac{\Theta}{T} \bigg)^{\!2} + \ldots \ . \tag{4.248} $$
Recall that $\varphi(T) = -k_BT\,\ln\xi(T)$. We conclude that $\varphi_{\rm rot}(T) \approx -3k_BT\,e^{-2\Theta/T}$ for $T \ll \Theta$ and $\varphi_{\rm rot}(T) \approx -k_BT\,\ln(T/\Theta)$ for $T \gg \Theta$. We have seen that the internal coordinate contribution to the heat capacity is $\Delta C_V = -NT\varphi''(T)$. For diatomic molecules, then, this contribution is exponentially suppressed for $T \ll \Theta$, while for high temperatures we have $\Delta C_V = Nk_B$. One says that the rotational excitations are `frozen out' at temperatures much below $\Theta$. Including the first few terms, we have

$$ \Delta C_V\,(T \ll \Theta) = 12\,Nk_B \bigg( \frac{\Theta}{T} \bigg)^{\!2} e^{-2\Theta/T} + \ldots \tag{4.249} $$
$$ \Delta C_V\,(T \gg \Theta) = Nk_B \bigg\{ 1 + \frac{1}{45} \bigg( \frac{\Theta}{T} \bigg)^{\!2} + \frac{16}{945} \bigg( \frac{\Theta}{T} \bigg)^{\!3} + \ldots \bigg\} \ . \tag{4.250} $$

Note that $\Delta C_V$ overshoots its limiting value of $Nk_B$ and asymptotically approaches it from above. Special care must be taken in the case of homonuclear diatomic molecules, for then only even or odd $L$ states are allowed, depending on the total nuclear spin. This is discussed below in §4.10.7.

For polyatomic molecules, the moments of inertia generally are large enough that the molecule's rotations can be considered classically. We then have
$$ \varepsilon(L_a, L_b, L_c) = \frac{L_a^2}{2I_1} + \frac{L_b^2}{2I_2} + \frac{L_c^2}{2I_3} \ . \tag{4.251} $$

The classical rotational partition function is then
$$ \xi_{\rm rot}(T) = \frac{1}{g_{\rm rot}} \int\! \frac{dL_a\, dL_b\, dL_c\; d\phi\, d\theta\, d\psi}{(2\pi\hbar)^3}\; e^{-\varepsilon(L_a, L_b, L_c)/k_BT} \ , \tag{4.252} $$
where $(\phi, \theta, \psi)$ are the Euler angles. Recall $\phi \in [0, 2\pi]$, $\theta \in [0, \pi]$, and $\psi \in [0, 2\pi]$. The factor $g_{\rm rot}$ accounts for physically indistinguishable orientations of the molecule brought about by rotations, which can happen when more than one of the nuclei is the same. Carrying out the integrals, we obtain
$$ \xi_{\rm rot}(T) = \bigg( \frac{2k_BT}{\hbar^2} \bigg)^{\!3/2} \sqrt{\pi\, I_1 I_2 I_3} \ . \tag{4.253} $$
This leads to $\Delta C_V = \frac{3}{2}Nk_B$.

4.10.5 Vibrations

Vibrational frequencies are often given in units of inverse wavelength, such as ${\rm cm}^{-1}$, called a wavenumber. To convert to a temperature scale $T^*$, we write $k_BT^* = h\nu = hc/\lambda$, hence $T^* = (hc/k_B)\,\lambda^{-1}$, and we multiply by
$$ \frac{hc}{k_B} = 1.436\ {\rm K\cdot cm} \ . \tag{4.254} $$
For example, infrared absorption ($\sim 50\ {\rm cm}^{-1}$ to $10^4\ {\rm cm}^{-1}$) reveals that the `asymmetric stretch' mode of the H$_2$O molecule has a vibrational frequency of $\nu = 3756\ {\rm cm}^{-1}$. The corresponding temperature scale is $T^* = 5394$ K.

Vibrations are normal modes of oscillation. A single normal mode Hamiltonian is of the form
$$ \hat h = \frac{p^2}{2m} + \tfrac{1}{2}m\omega^2 q^2 = \hbar\omega\,\big( a^\dagger a + \tfrac{1}{2} \big) \ . \tag{4.255} $$
In general there are many vibrational modes, hence many normal mode frequencies $\omega_\alpha$. We then must sum over all of them, resulting in
$$ \xi_{\rm vib} = \prod_\alpha \xi_{\rm vib}^{(\alpha)} \ . \tag{4.256} $$
For each such normal mode, the contribution is
$$ \xi = \sum_{n=0}^{\infty} e^{-(n+\frac{1}{2})\hbar\omega/k_BT} = e^{-\hbar\omega/2k_BT} \sum_{n=0}^{\infty} \big( e^{-\hbar\omega/k_BT} \big)^n = \frac{e^{-\hbar\omega/2k_BT}}{1 - e^{-\hbar\omega/k_BT}} = \frac{1}{2\sinh(\Theta/2T)} \ , \tag{4.257} $$
where $\Theta = \hbar\omega/k_B$. Then

$$ \varphi = k_BT\,\ln\big[ 2\sinh(\Theta/2T) \big] = \tfrac{1}{2}k_B\Theta + k_BT\,\ln\big( 1 - e^{-\Theta/T} \big) \ . \tag{4.258} $$
The contribution to the heat capacity is
$$ \Delta C_V = Nk_B \bigg( \frac{\Theta}{T} \bigg)^{\!2} \frac{e^{\Theta/T}}{\big( e^{\Theta/T} - 1 \big)^2} = \begin{cases} Nk_B\,(\Theta/T)^2\,\exp(-\Theta/T) & (T \to 0) \\[4pt] Nk_B & (T \to \infty) \ . \end{cases} \tag{4.259} $$
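A sketch evaluating the single-mode vibrational heat capacity of eq. 4.259 (in units of $k_B$; not part of the original notes, and the choice of the N$_2$ value from table 4.1 is merely illustrative):

    import numpy as np

    def c_vib(T, Theta):
        """Single vibrational mode heat capacity per molecule, eq. 4.259, in kB."""
        x = Theta/T
        return x**2 * np.exp(x)/(np.exp(x) - 1.0)**2

    Theta = 3340.0                         # e.g. the N2 value from table 4.1
    for T in [300.0, 1000.0, 3000.0, 10000.0]:
        print(T, c_vib(T, Theta))
    # Exponentially small for T << Theta ('frozen out'), approaching 1 for T >> Theta.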

4.10.6 Two-level systems : Schottky anomaly

Consider now a two-level system, with energies $\varepsilon_0$ and $\varepsilon_1$. We define $\Delta \equiv \varepsilon_1 - \varepsilon_0$ and assume without loss of generality that $\Delta > 0$. The partition function is
$$ \zeta = e^{-\beta\varepsilon_0} + e^{-\beta\varepsilon_1} = e^{-\beta\varepsilon_0} \big( 1 + e^{-\beta\Delta} \big) \ . \tag{4.260} $$

Figure 4.11: Heat capacity per molecule as a function of temperature for (a) heteronuclear diatomic gases, (b) a single vibrational mode, and (c) a single two-level system.

The free energy is
$$ f = -k_BT\,\ln\zeta = \varepsilon_0 - k_BT\,\ln\big( 1 + e^{-\Delta/k_BT} \big) \ . \tag{4.261} $$
The entropy for a given two level system is then
$$ s = -\frac{\partial f}{\partial T} = k_B\,\ln\big( 1 + e^{-\Delta/k_BT} \big) + \frac{\Delta}{T} \cdot \frac{1}{e^{\Delta/k_BT} + 1} \ , \tag{4.262} $$
and the heat capacity is $c = T\,(\partial s/\partial T)$, i.e.
$$ c(T) = \frac{\Delta^2}{k_BT^2} \cdot \frac{e^{\Delta/k_BT}}{\big( e^{\Delta/k_BT} + 1 \big)^2} \ . \tag{4.263} $$

Thus,
$$ c\,(T \ll \Delta/k_B) = \frac{\Delta^2}{k_BT^2}\, e^{-\Delta/k_BT} \tag{4.264} $$
$$ c\,(T \gg \Delta/k_B) = \frac{\Delta^2}{4k_BT^2} \ . \tag{4.265} $$
We find that $c(T)$ has a characteristic peak at $T^* \approx 0.42\,\Delta/k_B$. The heat capacity vanishes in both the low temperature and high temperature limits. At low temperatures, the gap to the excited state is much greater than $k_BT$, and it is not possible to populate it and store energy. At high temperatures, both ground state and excited state are equally populated, and once again there is no way to store energy.

If we have a distribution of independent two-level systems, the heat capacity of such a system is a sum over the individual Schottky functions:

$$ C(T) = \sum_i \tilde c\,(\Delta_i/k_BT) = N \int\limits_0^\infty \!\! d\Delta\; P(\Delta)\; \tilde c\,(\Delta/k_BT) \ , \tag{4.266} $$
where $N$ is the total number of two level systems, $\tilde c(x) = k_B\, x^2\, e^x/(e^x + 1)^2$, and where $P(\Delta)$ is the normalized distribution function, which satisfies
$$ \int\limits_0^\infty \!\! d\Delta\; P(\Delta) = 1 \ . \tag{4.267} $$
If $P(\Delta) \propto \Delta^r$ for $\Delta \to 0$, then the low temperature heat capacity behaves as $C(T) \propto T^{1+r}$. Many amorphous or glassy systems contain such a distribution of two level systems, with $r \approx 0$ for glasses, leading to a linear low-temperature heat capacity. The origin of these two-level systems is not always so clear, but it is generally believed to be associated with local atomic configurations for which there are two low-lying states which are close in energy. The paradigmatic example is the mixed crystalline solid (KBr)$_{1-x}$(KCN)$_x$, which over the range $0.1 \lesssim x \lesssim 0.6$ forms an `orientational glass' at low temperatures. The two level systems are associated with different orientations of the cyanide (CN) dipoles.
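The location of the Schottky peak quoted above, $T^* \approx 0.42\,\Delta/k_B$, is easy to confirm numerically from eq. 4.263. A sketch (not part of the original notes; the grid resolution is arbitrary):

    import numpy as np

    def c_schottky(t):
        """Schottky heat capacity in units of kB, with t = kB*T/Delta (eq. 4.263)."""
        return (1.0/t**2)*np.exp(1.0/t)/(np.exp(1.0/t) + 1.0)**2

    t = np.linspace(0.05, 2.0, 20000)
    print(t[np.argmax(c_schottky(t))])   # ~0.42, the peak in units of Delta/kB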

4.10.7 Electronic and nuclear excitations

For a monatomic gas, the internal coordinate partition function arises due to electronic and nuclear degrees of freedom. Let's first consider the electronic degrees of freedom. We assume that $k_BT$ is small compared with energy differences between successive electronic shells. The atomic ground state is then computed by filling up the hydrogenic orbitals until all the electrons are used up. If the atomic number is a `magic number' ($A = 2$ (He), 10 (Ne), 18 (Ar), 36 (Kr), 54 (Xe), etc.) then the atom has all shells filled, and $L = 0$ and $S = 0$. Otherwise the last shell is partially filled and one or both of $L$ and $S$ will be nonzero. The atomic ground state configuration $^{2S+1}\!L_J$ is then determined by Hund's rules:

1. The LS multiplet with the largest $S$ has the lowest energy.

2. If the largest value of $S$ is associated with several multiplets, the multiplet with the largest $L$ has the lowest energy.

3. If an incomplete shell is not more than half-filled, then the lowest energy state has $J = |L - S|$. If the shell is more than half-filled, then $J = L + S$.

The last of Hund's rules distinguishes between the $(2S+1)(2L+1)$ states which result upon fixing $S$ and $L$ as per rules #1 and #2. It arises due to the atomic spin-orbit coupling, whose effective Hamiltonian may be written $\hat H = \Lambda\,\mathbf L \cdot \mathbf S$, where $\Lambda$ is the Russell-Saunders coupling. If the last shell is less than or equal to half-filled, then $\Lambda > 0$ and the ground state has $J = |L - S|$. If the last shell is more than half-filled, the coupling is inverted, i.e. $\Lambda < 0$, and the ground state has $J = L + S$.11

11 See e.g. §72 of Landau and Lifshitz, Quantum Mechanics, which, in my humble estimation, is the greatest physics book ever written.

The electronic contribution to $\xi$ is then
$$ \xi_{\rm elec} = \sum_{J=|L-S|}^{L+S} (2J+1)\, e^{-\Delta\varepsilon(L,S,J)/k_BT} \ , \tag{4.268} $$
where
$$ \Delta\varepsilon(L,S,J) = \tfrac{1}{2}\Lambda \Big[ J(J+1) - L(L+1) - S(S+1) \Big] \ . \tag{4.269} $$
At high temperatures, $k_BT$ is larger than the energy difference between the different $J$ multiplets, and we have $\xi_{\rm elec} \sim (2L+1)(2S+1)\,e^{-\beta\varepsilon_0}$, where $\varepsilon_0$ is the ground state energy. At low temperatures, a particular value of $J$ is selected, namely that determined by Hund's third rule, and we have $\xi_{\rm elec} \sim (2J+1)\,e^{-\beta\varepsilon_0}$. If, in addition, there is a nonzero nuclear spin $I$, then we also must include a factor $\xi_{\rm nuc} = (2I+1)$, neglecting the small hyperfine splittings due to the coupling of nuclear and electronic angular momenta.

For heteronuclear diatomic molecules, i.e. molecules composed from two different atomic nuclei, the internal partition function simply receives a factor of $\xi_{\rm elec} \cdot \xi_{\rm nuc}^{(1)} \cdot \xi_{\rm nuc}^{(2)}$, where the first term is a sum over molecular electronic states, and the second two terms arise from the spin degeneracies of the two nuclei. For homonuclear diatomic molecules, the exchange of nuclear centers is a symmetry operation, and does not represent a distinct quantum state. To correctly count the electronic states, we first assume that the total electronic spin is $S = 0$. This is generally a very safe assumption. Exchange symmetry now puts restrictions on the possible values of the molecular angular momentum $L$, depending on the total nuclear angular momentum $I_{\rm tot}$. If $I_{\rm tot}$ is even, then the molecular angular momentum $L$ must also be even. If the total nuclear angular momentum is odd, then $L$ must be odd. This is so because the molecular ground state configuration is $^1\Sigma_g^+$.12

The total number of nuclear states for the molecule is $(2I+1)^2$, of which some are even under nuclear exchange, and some are odd. The number of even states, corresponding to even total nuclear angular momentum, is written as $g_g$, where the subscript conventionally stands for the (mercifully short) German word gerade, meaning `even'. The number of odd (Ger. ungerade) states is written $g_u$. Table 4.2 gives the values of $g_{g,u}$ corresponding to half-odd-integer $I$ and integer $I$.

The final answer for the rotational component of the internal molecular partition function is then

$$ \xi_{\rm rot}(T) = g_g\,\zeta_g + g_u\,\zeta_u \ , \tag{4.270} $$
where
$$ \zeta_g = \sum_{L\ {\rm even}} (2L+1)\, e^{-L(L+1)\,\Theta_{\rm rot}/T} \quad , \quad \zeta_u = \sum_{L\ {\rm odd}} (2L+1)\, e^{-L(L+1)\,\Theta_{\rm rot}/T} \ . \tag{4.271} $$
For hydrogen, the molecules with the larger nuclear statistical weight are called orthohydrogen and those with the smaller statistical weight are called parahydrogen. For H$_2$, we have $I = \frac{1}{2}$, hence the ortho state has $g_u = 3$ and the para state has $g_g = 1$. In D$_2$, we have $I = 1$ and the ortho state has $g_g = 6$ while the para state has $g_u = 3$. In equilibrium, the ratio of ortho to para states is then
$$ \frac{N_{{\rm H}_2}^{\rm ortho}}{N_{{\rm H}_2}^{\rm para}} = \frac{g_u\,\zeta_u}{g_g\,\zeta_g} = \frac{3\,\zeta_u}{\zeta_g} \quad , \qquad \frac{N_{{\rm D}_2}^{\rm ortho}}{N_{{\rm D}_2}^{\rm para}} = \frac{g_g\,\zeta_g}{g_u\,\zeta_u} = \frac{2\,\zeta_g}{\zeta_u} \ . \tag{4.272} $$

12 See Landau and Lifshitz, Quantum Mechanics, §86.

    2I       g_g                 g_u
    odd      I(2I+1)             (I+1)(2I+1)
    even     (I+1)(2I+1)         I(2I+1)

Table 4.2: Number of even ($g_g$) and odd ($g_u$) total nuclear angular momentum states for a homonuclear diatomic molecule. $I$ is the ground state nuclear spin.

Incidentally, how do we derive the results in Tab. 4.2? The total nuclear angular momentum $I_{\rm tot}$ is the quantum mechanical sum of the two individual nuclear angular momenta, each of which is of magnitude $I$. From elementary addition of angular momenta, we have
$$ I \otimes I = 0 \oplus 1 \oplus 2 \oplus \cdots \oplus 2I \ . \tag{4.273} $$
The right hand side of the above equation lists all the possible multiplets. Thus, $I_{\rm tot} \in \{0, 1, \ldots, 2I\}$. Now let us count the total number of states with even $I_{\rm tot}$. If $2I$ is even, which is to say if $I$ is an integer, we have
$$ g_g^{(2I={\rm even})} = \sum_{n=0}^{I} \Big\{ 2\cdot(2n) + 1 \Big\} = (I+1)(2I+1) \ , \tag{4.274} $$
because the degeneracy of each multiplet is $2I_{\rm tot} + 1$. It follows that
$$ g_u^{(2I={\rm even})} = (2I+1)^2 - g_g = I\,(2I+1) \ . \tag{4.275} $$
On the other hand, if $2I$ is odd, which is to say $I$ is a half odd integer, then
$$ g_g^{(2I={\rm odd})} = \sum_{n=0}^{I-\frac{1}{2}} \Big\{ 2\cdot(2n) + 1 \Big\} = I\,(2I+1) \ . \tag{4.276} $$
It follows that
$$ g_u^{(2I={\rm odd})} = (2I+1)^2 - g_g = (I+1)(2I+1) \ . \tag{4.277} $$
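Both the counting of eqs. 4.274-4.277 and the ortho/para ratio of eq. 4.272 can be checked with a few lines of code. In this sketch (not part of the original notes) the function names are mine, twice the nuclear spin is passed as an integer, and $t = T/\Theta_{\rm rot}$:

    import numpy as np

    def g_even_odd(two_I):
        """Count two-nucleus spin states with even/odd I_tot (eq. 4.273)."""
        Itots = range(0, two_I + 1)          # I (x) I = 0 + 1 + ... + 2I
        g_even = sum(2*It + 1 for It in Itots if It % 2 == 0)
        g_odd  = sum(2*It + 1 for It in Itots if It % 2 == 1)
        return g_even, g_odd

    print(g_even_odd(1))    # H2 (I = 1/2): (1, 3)  -> gg = 1, gu = 3
    print(g_even_odd(2))    # D2 (I = 1):   (6, 3)  -> gg = 6, gu = 3

    def zeta(t, parity, Lmax=100):
        """zeta_g (parity = 0) or zeta_u (parity = 1), eq. 4.271, t = T/Theta_rot."""
        L = np.arange(parity, Lmax, 2)
        return np.sum((2*L + 1)*np.exp(-L*(L + 1)/t))

    t = 300.0/85.4                           # room temperature for H2 (table 4.1)
    print(3*zeta(t, 1)/zeta(t, 0))           # ortho/para ratio; tends to 3 at high T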

4.11 Appendix I : Additional Examples

4.11.1 Three state system

Consider a spin-1 particle where $\sigma = -1, 0, +1$. We model this with the single particle Hamiltonian
$$ \hat h = -\mu_0 H\sigma + \Delta\,(1 - \sigma^2) \ . \tag{4.278} $$
We can also interpret this as describing a spin if $\sigma = \pm 1$ and a vacancy if $\sigma = 0$. The parameter $\Delta$ then represents the vacancy formation energy. The single particle partition function is
$$ \zeta = {\rm Tr}\, e^{-\beta\hat h} = e^{-\beta\Delta} + 2\cosh(\beta\mu_0 H) \ . \tag{4.279} $$

With $N_{\rm s}$ distinguishable noninteracting spins (e.g. at different sites in a crystalline lattice), we have $Z = \zeta^{N_{\rm s}}$ and
$$ F \equiv N_{\rm s} f = -k_BT\,\ln Z = -N_{\rm s} k_BT\,\ln\Big[ e^{-\beta\Delta} + 2\cosh(\beta\mu_0 H) \Big] \ , \tag{4.280} $$
where $f = -k_BT\,\ln\zeta$ is the free energy of a single particle. Note that
$$ \hat n_V = 1 - \sigma^2 = \frac{\partial\hat h}{\partial\Delta} \tag{4.281} $$
$$ \hat m = \mu_0\,\sigma = -\frac{\partial\hat h}{\partial H} \tag{4.282} $$
are the vacancy number and magnetization, respectively. Thus,
$$ n_V = \langle \hat n_V \rangle = \frac{\partial f}{\partial\Delta} = \frac{e^{-\Delta/k_BT}}{e^{-\Delta/k_BT} + 2\cosh(\mu_0 H/k_BT)} \tag{4.283} $$
and
$$ m = \langle \hat m \rangle = -\frac{\partial f}{\partial H} = \frac{2\mu_0\,\sinh(\mu_0 H/k_BT)}{e^{-\Delta/k_BT} + 2\cosh(\mu_0 H/k_BT)} \ . \tag{4.284} $$

At weak fields we can compute
$$ \chi_T = \frac{\partial m}{\partial H}\bigg|_{H=0} = \frac{\mu_0^2}{k_BT} \cdot \frac{2}{2 + e^{-\Delta/k_BT}} \ . \tag{4.285} $$
We thus obtain a modified Curie law. At temperatures $T \ll \Delta/k_B$, the vacancies are frozen out and we recover the usual Curie behavior. At high temperatures, where $T \gg \Delta/k_B$, the low temperature result is reduced by a factor of $\frac{2}{3}$, which accounts for the fact that one third of the time the particle is in a nonmagnetic state with $\sigma = 0$.
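A sketch checking the modified Curie law of eq. 4.285 in the two limits just discussed (not part of the original notes; units $\mu_0 = k_B = 1$ and $\Delta = 1$ are arbitrary choices):

    import numpy as np

    def chiT(T, Delta, mu0=1.0, kB=1.0):
        """Zero-field susceptibility times T, eq. 4.285, in units mu0 = kB = 1."""
        return (mu0**2/kB) * 2.0/(2.0 + np.exp(-Delta/(kB*T)))

    Delta = 1.0
    for T in [0.05, 1.0, 20.0]:
        print(T, chiT(T, Delta))
    # -> 1 (ordinary Curie law) for T << Delta/kB, -> 2/3 for T >> Delta/kB.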

4.11.2 Spins and vacancies on a surface

PROBLEM: A collection of spin-$\frac{1}{2}$ particles is confined to a surface with $N$ sites. For each site, let $\sigma = 0$ if there is a vacancy, $\sigma = +1$ if there is a particle present with spin up, and $\sigma = -1$ if there is a particle present with spin down. The particles are non-interacting, and the energy for each site is given by $\varepsilon = -W\sigma^2$, where $-W < 0$ is the binding energy.

(a) Let $Q = N_\uparrow + N_\downarrow$ be the number of spins, and $N_0$ be the number of vacancies. The surface magnetization is $M = N_\uparrow - N_\downarrow$. Compute, in the microcanonical ensemble, the statistical entropy $S(Q,M)$.

(b) Let $q = Q/N$ and $m = M/N$ be the dimensionless particle density and magnetization density, respectively. Assuming that we are in the thermodynamic limit, where $N$, $Q$, and $M$ all tend to infinity, but with $q$ and $m$ finite, find the temperature $T(q,m)$. Recall Stirling's formula
$$ \ln(N!) = N\ln N - N + \mathcal{O}(\ln N) \ . $$

(c) Show explicitly that T can be negative for this system. What does negative T mean? What physical degrees of freedom have been left out that would avoid this strange property?

SOLUTION: There is a constraint on $N_\uparrow$, $N_0$, and $N_\downarrow$:
$$ N_\uparrow + N_0 + N_\downarrow = Q + N_0 = N \ . \tag{4.286} $$
The total energy of the system is $E = -WQ$.

(a) The number of states available to the system is
$$ \Omega = \frac{N!}{N_\uparrow!\; N_0!\; N_\downarrow!} \ . \tag{4.287} $$
Fixing $Q$ and $M$, along with the above constraint, is enough to completely determine $\{N_\uparrow, N_0, N_\downarrow\}$:
$$ N_\uparrow = \tfrac{1}{2}(Q+M) \quad , \quad N_0 = N - Q \quad , \quad N_\downarrow = \tfrac{1}{2}(Q-M) \ , \tag{4.288} $$
whence
$$ \Omega(Q,M) = \frac{N!}{\big[ \tfrac{1}{2}(Q+M) \big]!\; \big[ \tfrac{1}{2}(Q-M) \big]!\; (N-Q)!} \ . \tag{4.289} $$

The statistical entropy is $S = k_B\ln\Omega$:
$$ S(Q,M) = k_B\ln(N!) - k_B\ln\big[ \tfrac{1}{2}(Q+M) \big]! - k_B\ln\big[ \tfrac{1}{2}(Q-M) \big]! - k_B\ln\big[ (N-Q)! \big] \ . \tag{4.290} $$

(b) Now we invoke Stirling's rule,
$$ \ln(N!) = N\ln N - N + \mathcal{O}(\ln N) \ , \tag{4.291} $$
to obtain
$$ \begin{aligned} \ln\Omega(Q,M) &= N\ln N - N - \tfrac{1}{2}(Q+M)\ln\big[ \tfrac{1}{2}(Q+M) \big] + \tfrac{1}{2}(Q+M) \\ &\qquad - \tfrac{1}{2}(Q-M)\ln\big[ \tfrac{1}{2}(Q-M) \big] + \tfrac{1}{2}(Q-M) - (N-Q)\ln(N-Q) + (N-Q) \\ &= N\ln N - \tfrac{1}{2}Q\,\ln\big[ \tfrac{1}{4}(Q^2 - M^2) \big] - \tfrac{1}{2}M\,\ln\!\bigg( \frac{Q+M}{Q-M} \bigg) - (N-Q)\ln(N-Q) \ . \end{aligned} \tag{4.292} $$
Combining terms,
$$ \ln\Omega(Q,M) = -Nq\,\ln\Big[ \tfrac{1}{2}\sqrt{q^2 - m^2}\, \Big] - \tfrac{1}{2}Nm\,\ln\!\bigg( \frac{q+m}{q-m} \bigg) - N(1-q)\ln(1-q) \ , \tag{4.293} $$
where $Q = Nq$ and $M = Nm$. Note that the entropy $S = k_B\ln\Omega$ is extensive. The statistical entropy per site is thus
$$ s(q,m) = -k_B\, q\,\ln\Big[ \tfrac{1}{2}\sqrt{q^2 - m^2}\, \Big] - \tfrac{1}{2}k_B\, m\,\ln\!\bigg( \frac{q+m}{q-m} \bigg) - k_B\,(1-q)\ln(1-q) \ . \tag{4.294} $$
The temperature is obtained from the relation
$$ \frac{1}{T} = \frac{\partial S}{\partial E}\bigg|_{M} = \frac{1}{W}\,\frac{\partial s}{\partial q}\bigg|_{m} = \frac{k_B}{W}\,\ln(1-q) - \frac{k_B}{W}\,\ln\Big[ \tfrac{1}{2}\sqrt{q^2 - m^2}\, \Big] \ . \tag{4.295} $$
Thus,
$$ T = \frac{W/k_B}{\ln\Big[ 2(1-q)\big/\sqrt{q^2 - m^2}\, \Big]} \ . \tag{4.296} $$

(c) We have $0 \le q \le 1$ and $-q \le m \le q$, so $T$ is real (thank heavens!). But it is easy to choose $\{q, m\}$ such that $T < 0$. For example, when $m = 0$ we have $T = W/\big[ k_B\ln(2q^{-1} - 2) \big]$, and $T < 0$ for all $q \in \big( \frac{2}{3}, 1 \big)$. The reason for this strange state of affairs is that the entropy $S$ is bounded, and is not a monotonically increasing function of the energy $E$ (or the dimensionless quantity $Q$). The entropy is maximized for $N_\uparrow = N_0 = N_\downarrow = \frac{1}{3}N$, which says $m = 0$ and $q = \frac{2}{3}$. Increasing $q$ beyond this point (with $m = 0$ fixed) starts to reduce the entropy, and hence $(\partial S/\partial E) < 0$ in this range, which immediately gives $T < 0$. What we've left out are kinetic degrees of freedom, such as vibrations and rotations, whose energies are unbounded, and which result in an increasing $S(E)$ function.
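The sign change of $T$ at $q = \frac{2}{3}$ (for $m = 0$) can be seen directly by evaluating the result of part (c). This sketch is not part of the original notes; it simply tabulates the formula exactly as quoted above, in units $W = k_B = 1$:

    import numpy as np

    def T_of_q(q, W=1.0, kB=1.0):
        """Temperature at m = 0 from part (c): T = W / (kB * ln(2/q - 2))."""
        return W/(kB*np.log(2.0/q - 2.0))

    for q in [0.3, 0.5, 0.66, 0.7, 0.9]:
        print(q, T_of_q(q))
    # T changes sign as q passes 2/3, where the entropy is maximal.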

4.11.3 Fluctuating interface

Consider an interface between two dissimilar fluids. In equilibrium, in a uniform gravitational field, the denser fluid is on the bottom. Let $z = z(x,y)$ be the height of the interface between the fluids, relative to equilibrium. The potential energy is a sum of gravitational and surface tension terms, with
$$ U_{\rm grav} = \int\! d^2\!x \int\limits_0^{z} \! dz'\; \Delta\rho\, g\, z' \tag{4.297} $$
$$ U_{\rm surf} = \tfrac{1}{2}\sigma \int\! d^2\!x\; (\nabla z)^2 \ . \tag{4.298} $$
We won't need the kinetic energy in our calculations, but we can include it just for completeness. It isn't so clear how to model it a priori, so we will assume a rather general form
$$ T = \int\! d^2\!x \int\! d^2\!x'\; \tfrac{1}{2}\,\mu(\mathbf x, \mathbf x')\, \frac{\partial z(\mathbf x,t)}{\partial t}\, \frac{\partial z(\mathbf x',t)}{\partial t} \ . \tag{4.299} $$
We assume that the $(x,y)$ plane is a rectangle of dimensions $L_x \times L_y$. We also assume $\mu(\mathbf x, \mathbf x') = \mu\big( |\mathbf x - \mathbf x'| \big)$. We can then Fourier transform
$$ z(\mathbf x) = \big( L_x L_y \big)^{-1/2} \sum_{\mathbf k} z_{\mathbf k}\, e^{i\mathbf k\cdot\mathbf x} \ , \tag{4.300} $$

where the wavevectors $\mathbf k$ are quantized according to
$$ \mathbf k = \frac{2\pi n_x}{L_x}\,\hat{\mathbf x} + \frac{2\pi n_y}{L_y}\,\hat{\mathbf y} \ , \tag{4.301} $$
with integer $n_x$ and $n_y$, if we impose periodic boundary conditions (for calculational convenience). The Lagrangian is then
$$ L = \tfrac{1}{2} \sum_{\mathbf k} \Big[ \mu_{\mathbf k}\, |\dot z_{\mathbf k}|^2 - \big( g\,\Delta\rho + \sigma k^2 \big)\, |z_{\mathbf k}|^2 \Big] \ , \tag{4.302} $$
where
$$ \mu_{\mathbf k} = \int\! d^2\!x\; \mu\big( |\mathbf x| \big)\, e^{-i\mathbf k\cdot\mathbf x} \ . \tag{4.303} $$
Since $z(\mathbf x, t)$ is real, we have the relation $z_{-\mathbf k} = z^*_{\mathbf k}$, therefore the Fourier coefficients at $\mathbf k$ and $-\mathbf k$ are not independent. The canonical momenta are given by
$$ p_{\mathbf k} = \frac{\partial L}{\partial \dot z^*_{\mathbf k}} = \mu_{\mathbf k}\, \dot z_{\mathbf k} \quad , \quad p^*_{\mathbf k} = \frac{\partial L}{\partial \dot z_{\mathbf k}} = \mu_{\mathbf k}\, \dot z^*_{\mathbf k} \ . \tag{4.304} $$
The Hamiltonian is then
$$ \hat H = {\sum_{\mathbf k}}' \Big[ p_{\mathbf k}\, \dot z^*_{\mathbf k} + p^*_{\mathbf k}\, \dot z_{\mathbf k} \Big] - L \tag{4.305} $$
$$ \phantom{\hat H} = {\sum_{\mathbf k}}' \bigg[ \frac{|p_{\mathbf k}|^2}{\mu_{\mathbf k}} + \big( g\,\Delta\rho + \sigma k^2 \big)\, |z_{\mathbf k}|^2 \bigg] \ , \tag{4.306} $$
where the prime on the $\mathbf k$ sum indicates that only one of the pair $\{\mathbf k, -\mathbf k\}$ is to be included, for each $\mathbf k$.

We may now compute the ordinary canonical partition function:
$$ Z = {\prod_{\mathbf k}}' \int\! \frac{d^2\!p_{\mathbf k}\; d^2\!z_{\mathbf k}}{(2\pi\hbar)^2}\; e^{-|p_{\mathbf k}|^2/\mu_{\mathbf k} k_BT}\; e^{-(g\,\Delta\rho + \sigma k^2)\,|z_{\mathbf k}|^2/k_BT} = {\prod_{\mathbf k}}' \bigg( \frac{k_BT}{2\hbar} \bigg)^{\!2} \bigg( \frac{\mu_{\mathbf k}}{g\,\Delta\rho + \sigma k^2} \bigg) \ . \tag{4.307} $$
Thus,
$$ F = -k_BT \sum_{\mathbf k} \ln\!\bigg( \frac{k_BT}{2\hbar\,\Omega_{\mathbf k}} \bigg) \ , \tag{4.308} $$
where13
$$ \Omega_{\mathbf k} = \bigg( \frac{g\,\Delta\rho + \sigma k^2}{\mu_{\mathbf k}} \bigg)^{\!1/2} \tag{4.309} $$
is the normal mode frequency for surface oscillations at wavevector $\mathbf k$. For deep water waves, it is appropriate to take $\mu_{\mathbf k} = \Delta\rho\,\big/|\mathbf k|$, where $\Delta\rho = \rho_{\rm L} - \rho_{\rm G} \approx \rho_{\rm L}$ is the difference between the densities of water and air.

13 Note that there is no prime on the $\mathbf k$ sum for $F$, as we have divided the logarithm of $Z$ by two and replaced the half sum by the whole sum.

It is now easy to compute the thermal average
$$ \big\langle |z_{\mathbf k}|^2 \big\rangle = \int\! d^2\!z_{\mathbf k}\; |z_{\mathbf k}|^2\; e^{-(g\,\Delta\rho + \sigma k^2)\,|z_{\mathbf k}|^2/k_BT} \bigg/ \int\! d^2\!z_{\mathbf k}\; e^{-(g\,\Delta\rho + \sigma k^2)\,|z_{\mathbf k}|^2/k_BT} = \frac{k_BT}{g\,\Delta\rho + \sigma k^2} \ . \tag{4.310} $$
Note that this result does not depend on $\mu_{\mathbf k}$, i.e. on our choice of kinetic energy. One defines the correlation function
$$ C(\mathbf x) \equiv \big\langle z(\mathbf x)\, z(\mathbf 0) \big\rangle = \frac{1}{L_x L_y} \sum_{\mathbf k} \big\langle |z_{\mathbf k}|^2 \big\rangle\, e^{i\mathbf k\cdot\mathbf x} = \int\! \frac{d^2\!k}{(2\pi)^2}\; \frac{k_BT}{g\,\Delta\rho + \sigma k^2}\; e^{i\mathbf k\cdot\mathbf x} = \frac{k_BT}{4\pi\sigma} \int\limits_0^\infty \!\! dq\; \frac{e^{iq|\mathbf x|}}{\sqrt{q^2 + \xi^{-2}}} = \frac{k_BT}{4\pi\sigma}\, K_0\big( |\mathbf x|/\xi \big) \ , \tag{4.311} $$
where $\xi = \sqrt{\sigma/g\,\Delta\rho}$ is the correlation length, and where $K_0(z)$ is the Bessel function of imaginary argument. The asymptotic behavior of $K_0(z)$ for small $z$ is $K_0(z) \sim \ln(2/z)$, whereas for large $z$ one has $K_0(z) \sim (\pi/2z)^{1/2}\,e^{-z}$. We see that on large length scales the correlations decay exponentially, but on small length scales they diverge. This divergence is due to the improper energetics we have assigned to short wavelength fluctuations of the interface. Roughly speaking, it can be cured by imposing a cutoff on the integral, or by insisting that the shortest distance scale is a molecular diameter.
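The asymptotics of the correlation function quoted above are easy to verify with scipy's Bessel function. This is a sketch, not part of the original notes; the reduced parameter values are arbitrary, and the formula used is exactly eq. 4.311 as written:

    import numpy as np
    from scipy.special import k0

    kBT, sigma, xi = 1.0, 1.0, 1.0          # reduced units, arbitrary choices
    x = np.array([0.01, 0.1, 1.0, 5.0])
    C = kBT/(4*np.pi*sigma) * k0(x/xi)

    print(C)
    # small-x logarithmic divergence and large-x exponential decay:
    print(kBT/(4*np.pi*sigma) * np.log(2*xi/x))
    print(kBT/(4*np.pi*sigma) * np.sqrt(np.pi*xi/(2*x)) * np.exp(-x/xi))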

4.11.4 Dissociation of molecular hydrogen

Consider the reaction
$$ {\rm H} \;\rightleftharpoons\; {\rm p}^+ + {\rm e}^- \ . \tag{4.312} $$
In equilibrium, we have
$$ \mu_{\rm H} = \mu_{\rm p} + \mu_{\rm e} \ . \tag{4.313} $$
What is the relationship between the temperature $T$ and the fraction $x$ of hydrogen which is dissociated?

Let us assume a fraction $x$ of the hydrogen is dissociated. Then the densities of H, p, and e are
$$ n_{\rm H} = (1-x)\,n \quad , \quad n_{\rm p} = xn \quad , \quad n_{\rm e} = xn \ . \tag{4.314} $$
The partition function for $N$ particles of a given species is
$$ \zeta_N = \frac{g^N}{N!} \bigg( \frac{V}{\lambda_T^3} \bigg)^{\!N} e^{-N\varepsilon_{\rm int}/k_BT} \ , \tag{4.315} $$
where $g$ is the degeneracy and $\varepsilon_{\rm int}$ the internal energy for a given species. We have $\varepsilon_{\rm int} = 0$ for p and e, and $\varepsilon_{\rm int} = -\Delta$ for H, where $\Delta = e^2/2a_{\rm B} = 13.6$ eV, the binding energy of hydrogen. Neglecting hyperfine splittings14, we have $g_{\rm H} = 4$, while $g_{\rm e} = g_{\rm p} = 2$ because each has spin $S = \frac{1}{2}$.

14 The hyperfine splitting in hydrogen is on the order of $(m_{\rm e}/m_{\rm p})\,\alpha^4\, m_{\rm e} c^2 \sim 10^{-6}$ eV, which is on the order of 0.01 K. Here $\alpha = e^2/\hbar c$ is the fine structure constant.

Thus, the associated grand potentials are
$$ \Omega_{\rm H} = -g_{\rm H}\, V k_BT\, \lambda_{T,{\rm H}}^{-3}\, e^{(\mu_{\rm H} + \Delta)/k_BT} \tag{4.316} $$
$$ \Omega_{\rm p} = -g_{\rm p}\, V k_BT\, \lambda_{T,{\rm p}}^{-3}\, e^{\mu_{\rm p}/k_BT} \tag{4.317} $$
$$ \Omega_{\rm e} = -g_{\rm e}\, V k_BT\, \lambda_{T,{\rm e}}^{-3}\, e^{\mu_{\rm e}/k_BT} \ , \tag{4.318} $$
where
$$ \lambda_{T,a} = \sqrt{\frac{2\pi\hbar^2}{m_a k_BT}} \tag{4.319} $$
for species $a$. The corresponding number densities are
$$ n = -\frac{1}{V}\,\frac{\partial\Omega}{\partial\mu}\bigg|_{T,V} = g\,\lambda_T^{-3}\, e^{(\mu - \varepsilon_{\rm int})/k_BT} \ , \tag{4.320} $$
and the fugacity $z = e^{\mu/k_BT}$ of a given species is given by
$$ z = g^{-1}\, n\,\lambda_T^3\; e^{\varepsilon_{\rm int}/k_BT} \ . \tag{4.321} $$
We now invoke $\mu_{\rm H} = \mu_{\rm p} + \mu_{\rm e}$, which says $z_{\rm H} = z_{\rm p}\, z_{\rm e}$, or
$$ g_{\rm H}^{-1}\, n_{\rm H}\,\lambda_{T,{\rm H}}^{3}\, e^{-\Delta/k_BT} = \Big( g_{\rm p}^{-1}\, n_{\rm p}\,\lambda_{T,{\rm p}}^{3} \Big) \Big( g_{\rm e}^{-1}\, n_{\rm e}\,\lambda_{T,{\rm e}}^{3} \Big) \ , \tag{4.322} $$
which yields
$$ \bigg( \frac{x^2}{1-x} \bigg)\, n\,\tilde\lambda_T^3 = e^{-\Delta/k_BT} \ , \tag{4.323} $$
where $\tilde\lambda_T = \sqrt{2\pi\hbar^2/m^* k_BT}$, with $m^* = m_{\rm p} m_{\rm e}/m_{\rm H} \approx m_{\rm e}$. Note that
$$ \tilde\lambda_T = a_{\rm B}\,\sqrt{\frac{4\pi m_{\rm H}}{m_{\rm p}}}\,\sqrt{\frac{\Delta}{k_BT}} \ , \tag{4.324} $$

where $a_{\rm B} = 0.529$ Å is the Bohr radius. Thus, we have

$$ \bigg( \frac{x^2}{1-x} \bigg)\,(4\pi)^{3/2}\,\nu = \bigg( \frac{T}{T_0} \bigg)^{\!3/2} e^{-T_0/T} \ , \tag{4.325} $$
where $T_0 = \Delta/k_B = 1.578\times 10^5$ K and $\nu = n a_{\rm B}^3$. Consider for example a temperature $T = 3000$ K, for which $T_0/T = 52.6$, and assume that $x = \frac{1}{2}$. We then find $\nu = 1.69\times 10^{-27}$, corresponding to a density of $n = 1.14\times 10^{-2}\ {\rm cm}^{-3}$. At this temperature, the fraction of hydrogen atoms in their first excited (2s) state is $x' \sim e^{-T_0/2T} = 3.8\times 10^{-12}$. This is quite striking: half the hydrogen atoms are completely dissociated, which requires an energy of $\Delta$, yet the number in their first excited state, requiring energy $\frac{1}{2}\Delta$, is twelve orders of magnitude smaller. The student should reflect on why this can be the case.
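The numerical estimates in the last paragraph follow directly from eq. 4.325. A short sketch (not part of the original notes) that inverts the relation for the stated $x = \frac{1}{2}$ and $T = 3000$ K, then evaluates the excited-state estimate:

    import numpy as np

    T0 = 1.578e5                  # Delta/kB in Kelvin
    aB = 0.529e-8                 # Bohr radius in cm

    def nu_of_xT(x, T):
        """Solve eq. 4.325 for nu = n*aB^3 at given dissociation fraction x."""
        return (1.0 - x)/x**2 * (4.0*np.pi)**(-1.5) * (T/T0)**1.5 * np.exp(-T0/T)

    T, x = 3000.0, 0.5
    nu = nu_of_xT(x, T)
    print(nu, nu/aB**3)           # ~1.7e-27 and ~1e-2 cm^-3, as quoted above
    print(np.exp(-T0/(2*T)))      # ~3.8e-12, the 2s occupation estimate above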