Statistical Thermodynamics
Professor Dmitry Garanin
Statistical physics
May 17, 2021
I. PREFACE
Statistical physics considers systems of a large number of entities (particles) such as atoms, molecules, spins, etc. For these systems it is impossible, and even meaningless, to study the full microscopic dynamics. The only relevant information is, say, how many atoms have a particular energy; then one can calculate the observable thermodynamic quantities. That is, one has to know the distribution function of the particles over energies that defines the macroscopic properties. This gives the name statistical physics and defines the scope of this subject. The approach outlined above can be used both at and off equilibrium. The branch of physics studying non-equilibrium situations is called physical kinetics. In this course, we study only statistical physics at equilibrium. It turns out that at equilibrium the energy distribution function has an explicit general form, and the only remaining problem is to calculate the observables. The term statistical mechanics means the same as statistical physics; one can call it statistical thermodynamics as well. The formalism of statistical thermodynamics can be developed for both classical and quantum systems. Obtaining the energy distribution and calculating observables is simpler in the classical case. However, the very formulation of the method is more transparent within the quantum-mechanical formalism. In addition, the absolute value of the entropy, including its correct value at T → 0, can only be obtained in the quantum case. To avoid double work, we will consider only quantum statistical thermodynamics in this course, limiting ourselves to systems without interaction. The more general quantum results recover their classical forms in the classical limit.
II. MICROSTATES AND MACROSTATES
From quantum mechanics it follows that the states of a system do not change continuously (as in classical physics) but are quantized. There is a huge number of discrete quantum states i with corresponding energy values εi, the main parameter characterizing these states. In the absence of interaction, each particle has its own set of quantum states which it can occupy. For identical particles these sets of states are identical. One can think of boxes i into which particles are placed, Ni particles in the ith box. The particles can be distributed over the boxes in a number of different ways, corresponding to different microstates, in which the state i of each particle is specified. The information contained in the microstates is excessive, and the only meaningful information is provided by the numbers Ni that define the distribution of particles over their quantum states. These numbers Ni specify what in statistical physics is called a macrostate. If these numbers are known, the energy and other quantities of the system can be found. It should be noted that the statistical macrostate still contains more information than the macroscopic physical quantities that follow from it, as a distribution contains more information than an average over it. Each macrostate k, specified by the numbers Ni, can be realized by a number wk of microstates, the so-called thermodynamic probability. The latter is typically a large number, unlike the usual probability that ranges between 0 and 1. Redistributing the particles over the states i while keeping the same values of Ni generates different microstates within the same macrostate. The basic assumption of statistical mechanics is the equidistribution over microstates: each microstate within a macrostate is equally probable. Macrostates having a larger wk are more likely to be realized.
As we will see, for large systems, the thermodynamic probabilities of different macrostates vary in a wide range, and there is a state with the largest value of w that wins over all other macrostates. If the number of quantum states i is finite, the total number of microstates can be written as

Ω ≡ Σ_k wk,  (1)

the sum rule for thermodynamic probabilities. For an isolated system the number of particles N and the energy U are conserved, thus the numbers Ni satisfy the constraints

Σ_i Ni = N,  (2)

Σ_i Ni εi = U  (3)

that limit the variety of the allowed macrostates k.
III. TWO-STATE PARTICLES (COIN TOSSING)
A tossed coin can land in two positions: head up or tail up. Considering the coin as a particle, one can say that this particle has two “quantum” states, 1 corresponding to the head and 2 corresponding to the tail. If N coins are tossed, this can be considered as a system of N particles with two quantum states each. The microstates of the system are specified by the states occupied by each coin. As each coin has 2 states, there are in total
Ω = 2^N  (4)

microstates. The macrostates of this system are defined by the numbers of particles in each state, N1 and N2. These two numbers satisfy the constraint condition (2), i.e., N1 + N2 = N. Thus one can take, say, N1 as the number k labeling macrostates. The number of microstates in one macrostate (that is, the number of different microstates that belong to the same macrostate) is given by the binomial distribution
wN1 = N!/[N1!(N − N1)!] ≡ (N choose N1).  (5)
This formula can be derived as follows. We have to pick N1 particles to be in state 1; all others will be in state 2. How many ways are there to do this? The first “1” particle can be picked in N ways, the second one can be picked in N − 1 ways since we can choose from only N − 1 particles, the third “1” particle can be picked in N − 2 different ways, etc. Thus one obtains the number of different ways to pick the particles as

N × (N − 1) × (N − 2) × ... × (N − N1 + 1) = N!/(N − N1)!,  (6)

where the factorial is defined by
N! ≡ N × (N − 1) × ... × 2 × 1, 0! = 1. (7)
The recurrent definition of the factorial is
N! = N (N − 1)!, 0! = 1. (8)
The expression in Eq. (6) is not yet the thermodynamical probability wN1 because it contains multiple counting of the same microstates. The realizations, in which N1 “1” particles are picked in different orders, have been counted as different microstates, whereas they are the same microstate. To correct for the multiple counting, one has to divide by the number of permutations N1! of the N1 particles that yields Eq. (5). One can check that the condition (1) is satisfied,
Σ_{N1=0}^{N} wN1 = Σ_{N1=0}^{N} N!/[N1!(N − N1)!] = 2^N.  (9)
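These identities are easy to check numerically. A minimal sketch in plain Python (using `math.comb` for the binomial coefficient; the value N = 20 is an arbitrary choice) verifies the sum rule (9) and the symmetry of the distribution:

```python
from math import comb

N = 20  # number of coins; any modest value works

# w_{N1} = N! / (N1! (N - N1)!) is the number of microstates in macrostate N1
w = [comb(N, N1) for N1 in range(N + 1)]

# Sum rule (9): the thermodynamic probabilities add up to Omega = 2^N
assert sum(w) == 2**N

# The distribution is symmetric and peaks at N1 = N/2
assert w == w[::-1]
assert max(w) == w[N // 2]
```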
The thermodynamic probability wN1 has a maximum at N1 = N/2, half of the coins head and half of the coins tail. This macrostate is the most probable state. Indeed, as for an individual coin the probabilities to land head up and tail up are both equal to 0.5, this is what we expect. For large N the maximum of wN1 on N1 becomes sharp.
To prove that N1 = N/2 is the maximum of wN1, one can rewrite Eq. (5) in terms of the new variable p = N1 − N/2 as

wN1 = N!/[(N/2 + p)!(N/2 − p)!].  (10)
FIG. 1: The binomial distribution for an ensemble of N two-state systems becomes narrow and peaked at p ≡ N1 − N/2 = 0.
One can see that wN1 is symmetric around N1 = N/2, i.e., p = 0. Using Eq. (8), one obtains
wN/2±1 / wN/2 = (N/2)!(N/2)! / [(N/2 + 1)!(N/2 − 1)!] = (N/2)/(N/2 + 1) < 1,  (11)
one can see that N1 = N/2 is indeed the maximum of wN1. The binomial distribution is shown in Fig. 1 for three different values of N. As the argument, the variable p/pmax ≡ (N1 − N/2)/(N/2) is used so that one can put all the data on the same plot. One can see that in the limit of large N the binomial distribution becomes narrow and centered at p = 0, that is, N1 = N/2. In practical terms, this means that if the coin is tossed many times, significant deviations from the 50:50 ratio between the numbers of heads and tails will be extremely rare.
IV. STIRLING FORMULA AND THERMODYNAMIC PROBABILITY AT LARGE N
Analysis of expressions with large factorials is simplified by the Stirling formula
N! ≅ √(2πN) (N/e)^N.  (12)

In many important cases, the prefactor √(2πN) is irrelevant, as we will see below. With the Stirling formula substituted, Eq. (10) becomes
wN1 ≅ √(2πN) (N/e)^N / { √[2π(N/2 + p)] [(N/2 + p)/e]^(N/2+p) √[2π(N/2 − p)] [(N/2 − p)/e]^(N/2−p) }

= √(2/(πN)) [1 − (2p/N)²]^(−1/2) N^N / [(N/2 + p)^(N/2+p) (N/2 − p)^(N/2−p)]

= wN/2 / { √[1 − (2p/N)²] (1 + 2p/N)^(N/2+p) (1 − 2p/N)^(N/2−p) },  (13)

where
wN/2 ≅ √(2/(πN)) 2^N  (14)

is the maximal value of the thermodynamic probability. Eq. (13) can be expanded for |p| ≪ N. Since p enters both the bases and the exponents, one has to be careful and expand the logarithm of wN1 rather than wN1 itself. The square-root term in Eq. (13) can be discarded as it gives a negligible contribution of order p²/N². One obtains
ln wN1 ≅ ln wN/2 − (N/2 + p) ln(1 + 2p/N) − (N/2 − p) ln(1 − 2p/N)

≅ ln wN/2 − (N/2 + p)[2p/N − (1/2)(2p/N)²] − (N/2 − p)[−2p/N − (1/2)(2p/N)²]

≅ ln wN/2 − p − 2p²/N + p²/N + p − 2p²/N + p²/N

= ln wN/2 − 2p²/N  (15)

and thus

wN1 ≅ wN/2 exp(−2p²/N).  (16)

One can see that wN1 becomes very small if |p| ≳ √N, which for large N does not violate the applicability condition
|p| ≪ N. That is, wN1 is small in the main part of the interval 0 ≤ N1 ≤ N and is sharply peaked near N1 = N/2.
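The Gaussian form (16) can be confronted with the exact binomial distribution directly. In the sketch below (a minimal numerical check; the value N = 1000 is an arbitrary choice), the prefactor wN/2 is taken as the exact central binomial coefficient rather than from the Stirling approximation (14):

```python
from math import comb, exp

N = 1000
w_max = comb(N, N // 2)  # exact w_{N/2}

# Compare exact w_{N1} with the Gaussian w_{N/2} exp(-2 p^2 / N) for |p| << N
for p in (0, 5, 10, 20):
    exact = comb(N, N // 2 + p)
    gauss = w_max * exp(-2 * p**2 / N)
    # relative error stays small while |p| is much less than N
    assert abs(exact - gauss) / exact < 0.01
```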
V. MANY-STATE PARTICLES
The results obtained in Sec. III for two-state particles can be generalized for n-state particles. We are looking for the number of ways to distribute N particles over n boxes so that there are Ni particles in the ith box. That is, we look for the number of microstates in the macrostate described by the numbers Ni. The result is given by

w = N!/(N1! N2! ... Nn!) = N! / ∏_{i=1}^{n} Ni!.  (17)
This formula can be obtained by using Eq. (5) successively. The number of ways to put N1 particles in box 1 and the other N − N1 in other boxes is given by Eq. (5). Then the number of ways to put N2 particles in box 2 is given by a similar formula with N → N − N1 (there are only N − N1 particles left after N1 particles have been put in box 1) and N1 → N2. These numbers of ways should be multiplied. Then one considers box 3, etc., until the last box n. The resulting number of microstates is

w = N!/[N1!(N − N1)!] × (N − N1)!/[N2!(N − N1 − N2)!] × (N − N1 − N2)!/[N3!(N − N1 − N2 − N3)!] × ...
× (Nn−2 + Nn−1 + Nn)!/[Nn−2!(Nn−1 + Nn)!] × (Nn−1 + Nn)!/(Nn−1! Nn!) × Nn!/(Nn! 0!).  (18)

In this formula, all numerators except for the first one and all second factors in the denominators cancel each other, so that Eq. (17) follows.
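The successive-binomial argument behind Eq. (18) is easy to check by direct computation. A short Python sketch (with a made-up occupation list as the example macrostate) compares the telescoping product of binomial coefficients with the multinomial formula (17):

```python
from math import comb, factorial, prod

# A hypothetical macrostate: N = 10 particles in n = 4 boxes
occupations = [3, 2, 4, 1]
N = sum(occupations)

# Eq. (17): w = N! / (N1! N2! ... Nn!)
w_direct = factorial(N) // prod(factorial(Ni) for Ni in occupations)

# Eq. (18): fill the boxes one by one, multiplying binomial coefficients
w_successive = 1
remaining = N
for Ni in occupations:
    w_successive *= comb(remaining, Ni)
    remaining -= Ni

assert w_direct == w_successive == 12600
```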
VI. THERMODYNAMIC PROBABILITY AND ENTROPY
We have seen that different macrostates k can be realized by largely varying numbers of microstates wk. For large systems, N ≫ 1, the difference between different thermodynamic probabilities wk is tremendous, and there is a sharp maximum of wk at some value k = kmax. The main postulate of statistical physics is that in measurements on large systems, only the most probable macrostate, satisfying the constraints of Eqs. (2) and (3), makes a contribution. For instance, a macrostate of an ideal gas with all molecules in one half of the container is much less probable than the macrostate with the molecules equally distributed over the whole container. (Likewise, the state with all coins landed head up is much less probable than the state with half of the coins landed head up and the other half tail up.) For this reason, if the initial state is all molecules in one half of the container, then in the course of evolution the system will come to the most probable state with the molecules equally distributed over the whole container, and will stay in this state forever.
We have seen in thermodynamics that an isolated system, initially in a non-equilibrium state, evolves to the equilibrium state characterized by the maximal entropy. This leads to the idea that entropy S and thermodynamic probability w should be related, one being a monotonic function of the other. The form of this function can be found if one notices that entropy is additive while the thermodynamic probability is multiplicative. If a system consists of two subsystems that weakly interact with each other (which is almost always the case, as the intersubsystem interaction is limited to the small region near the interface between them), then S = S1 + S2 and w = w1w2. If one chooses (L. Boltzmann)
S = kB ln w, (19) then S = S1 + S2 and w = w1w2 are in accord since ln (w1w2) = ln w1 + ln w2. In Eq. (19) kB is the Boltzmann constant,
kB = 1.38 × 10⁻²³ J K⁻¹.  (20)
It will be shown that the statistically defined entropy above coincides with the thermodynamic entropy at equilibrium. On the other hand, the statistical entropy is well defined for nonequilibrium states as well, whereas the thermodynamic entropy is usually undefined off equilibrium.
VII. BOLTZMANN DISTRIBUTION AND CONNECTION WITH THERMODYNAMICS
In this section we obtain the distribution of particles over energy levels i as the most probable macrostate by maximizing its thermodynamic probability w. We label quantum states by the index i and use Eq. (17). The task is to find the maximum of w with respect to all Ni that satisfy the constraints (2) and (3). Practically it is more convenient to maximize ln w than w itself. Using the method of Lagrange multipliers, one searches for the maximum of the target function

Φ(N1, N2, ..., Nn) = ln w + α Σ_i Ni − β Σ_i εi Ni.  (21)
Here α and β are Lagrange multipliers with (arbitrary) signs chosen anticipating the final result. The maximum satisfies

∂Φ/∂Ni = 0,  i = 1, 2, ....  (22)
As we are interested in the behavior of macroscopic systems with Ni ≫ 1, the factorials can be simplified with the help of Eq. (12) that takes the form

ln N! ≅ N ln N − N + ln √(2πN).  (23)
Neglecting the relatively small last term in this expression, one obtains
Φ ≅ ln N! − Σ_j Nj ln Nj + Σ_j Nj + α Σ_j Nj − β Σ_j εj Nj.  (24)
The first term here is a constant and does not contribute to the derivatives ∂Φ/∂Ni. In the latter, the only contribution comes from the terms with j = i in the above expression. One obtains the equations

∂Φ/∂Ni = −ln Ni + α − βεi = 0  (25)

that yield
Ni = e^(α−βεi),  (26)

the Boltzmann distribution with yet undefined Lagrange multipliers α and β. The latter can be found from Eqs. (2) and (3) in terms of N and U. Summing Eq. (26) over i, one obtains
N = e^α Z,  α = ln(N/Z),  (27)

where

Z = Σ_i e^(−βεi)  (28)

is the so-called partition function (German Zustandssumme) that plays a major role in statistical physics. Then, eliminating α from Eq. (26) yields
Ni = (N/Z) e^(−βεi),  (29)

the Boltzmann distribution. After that, for the internal energy U one obtains
U = Σ_i εi Ni = (N/Z) Σ_i εi e^(−βεi) = −(N/Z) ∂Z/∂β  (30)

or

U = −N ∂ln Z/∂β.  (31)

This formula implicitly defines the Lagrange multiplier β as a function of U. The statistical entropy, Eq. (19), within the Stirling approximation becomes

S = kB ln w = kB (N ln N − Σ_i Ni ln Ni).  (32)
Inserting here Eq. (26) and α from Eq. (27), one obtains
S/kB = N ln N − Σ_i Ni (α − βεi) = N ln N − αN + βU = N ln Z + βU.  (33)

The statistical entropy depends only on the parameter β. Its differential is given by
dS = (dS/dβ) dβ = kB [N ∂ln Z/∂β + U + β dU/dβ] dβ = kB β dU.  (34)
Comparing this with the thermodynamic relation dU = T dS at constant volume, one identifies
β = 1/(kBT).  (35)
Now, with the help of dT/dβ = −kBT², one can represent the internal energy, Eq. (31), via the derivative with respect to T as
U = NkBT² ∂ln Z/∂T.  (36)

Eq. (33) becomes
S = NkB ln Z + U/T.  (37)

From here one obtains the statistical formula for the free energy
F = U − TS = −NkBT ln Z.  (38)
One can see that the partition function Z contains the complete information about the system's thermodynamics, since other quantities such as the pressure P follow from F. In particular, one can check that ∂F/∂T = −S.
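The whole chain (26)–(38) can be verified numerically for a concrete level scheme. The sketch below is a minimal check, assuming a hypothetical set of single-particle energies and units with kB = 1 (so that β = 1/T); it tests the constraint (2), Eq. (30) against the derivative formula (31), F = U − TS, and ∂F/∂T = −S:

```python
from math import exp, log

# Hypothetical single-particle levels (arbitrary energy units), k_B = 1
energies = [0.0, 1.0, 1.0, 2.5]   # a degenerate level is simply listed twice
N = 1000.0

def lnZ(T):
    # partition function (28) at temperature T (beta = 1/T)
    return log(sum(exp(-e / T) for e in energies))

def U(T):
    # U = -N dlnZ/dbeta, Eq. (31), by a central finite difference in beta
    b, h = 1.0 / T, 1e-6
    lnZb = lambda bb: log(sum(exp(-bb * e) for e in energies))
    return -N * (lnZb(b + h) - lnZb(b - h)) / (2 * h)

def S(T):
    return N * lnZ(T) + U(T) / T    # Eq. (37) with k_B = 1

def F(T):
    return -N * T * lnZ(T)          # Eq. (38)

T = 1.3
Z = exp(lnZ(T))
Ni = [N / Z * exp(-e / T) for e in energies]      # Boltzmann distribution (29)
assert abs(sum(Ni) - N) < 1e-9                    # constraint (2)
assert abs(sum(e * n for e, n in zip(energies, Ni)) - U(T)) < 1e-4  # Eq. (30)
assert abs(F(T) - (U(T) - T * S(T))) < 1e-8       # F = U - TS
h = 1e-5
assert abs((F(T + h) - F(T - h)) / (2 * h) + S(T)) < 1e-3   # dF/dT = -S
```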
VIII. QUANTUM STATES AND ENERGY LEVELS
A. Stationary Schrödinger equation
In the formalism of quantum mechanics, quantized states and their energies E are the solutions of the eigenvalue problem for a matrix or for a differential operator. In the latter case the problem is formulated as the so-called stationary Schrödinger equation
ĤΨ = EΨ,  (39)

where Ψ = Ψ(r) is a complex function called the wave function. The physical interpretation of the wave function is that |Ψ(r)|² gives the probability for a particle to be found near the space point r. As above, the number of measurements dN, of the total N measurements, in which the particle is found in the elementary volume d³r = dx dy dz around r is given by
dN = N |Ψ(r)|² d³r.  (40)
The wave function satisfies the normalization condition

1 = ∫ d³r |Ψ(r)|².  (41)
The operator Hˆ in Eq. (39) is the so-called Hamilton operator or Hamiltonian. For one particle, it is the sum of kinetic and potential energies
ˆp2 Hˆ = + U(r), (42) 2m where the classical momentum p is replaced by the operator ∂ ˆp = −i . (43) ~∂r The Schr¨odingerequation can be formulated both for single particles and the systems of particles. In this course we will restrict ourselves to single particles. In this case the notation ε will be used for single-particle energy levels instead of E. One can see that Eq. (39) is a second-order linear differential equation. It is an ordinary differential equation in one dimension and partial differential equation in two and more dimensions. If there is a potential energy, this is a linear differential equation with variale coefficients that can be solved analytically only in special cases. In the case U = 0 this is a linear differential equation with constant coefficients that is easy to solve analytically. An important component of the quantum formalism is boundary conditions for the wave function. In particular, for a particle inside a box with rigid walls the boundary condition is Ψ=0 at the walls, so that Ψ (r) joins smoothly with the value Ψ (r) = 0 everywhere outside the box. In this case it is also guaranteed that |Ψ(r)|2 is integrable and Ψ can be normalized according to Eq. (41). It turns out that the solution of Eq. (39) that satisfies the boundary conditions exists only for a discrete set of E values that are called eigenvalues. The corresponding Ψ are calles eigenfunctions, and all together is called eigenstates. Eigenvalue problems, both for matrices and differential operators, were known in mathematics before the advent of quantum mechanics. The creators of quantum mechanics, mainly Schr¨odingerand Heisenberg, realised that this mathematical formalism can accurately describe quantization of energy levels observed in experiments. Whereas Schr¨odingerformlated his famous Schr¨odingerequation, Heisenberg made a major contribution into description of quantum systems with matrices.
B. Energy levels of a particle in a box
As an illustration, consider a particle in a one-dimensional rigid box, 0 ≤ x ≤ L. In this case the momentum operator becomes

p̂ = −iħ d/dx  (44)

and Eq. (39) takes the form
−(ħ²/2m) d²Ψ(x)/dx² = εΨ(x)  (45)

and can be represented as
d²Ψ(x)/dx² + k²Ψ(x) = 0,  k² ≡ 2mε/ħ².  (46)

The solution of this equation satisfying the boundary conditions Ψ(0) = Ψ(L) = 0 has the form

Ψν(x) = A sin(kν x),  kν = πν/L,  ν = 1, 2, 3, ...,  (47)

where eigenstates are labeled by the index ν. The constant A following from Eq. (41) is A = √(2/L). The energy eigenvalues are given by
εν = ħ²kν²/(2m) = π²ħ²ν²/(2mL²).  (48)
One can see that the energy ε is quadratic in the momentum p = ħk (de Broglie relation), as it should be, but the energy levels are discrete because of the quantization. For very large boxes, L → ∞, the energy levels become quasicontinuous. The lowest-energy level with ν = 1 is called the ground state. For a three-dimensional box with sides Lx, Ly, and Lz one has to solve the Schrödinger equation
−(ħ²/2m) (∂²/∂x² + ∂²/∂y² + ∂²/∂z²) Ψ(x, y, z) = εΨ(x, y, z)  (49)

with similar boundary conditions. The solution factorizes and has the form

Ψνx,νy,νz(x, y, z) = A sin(kνx x) sin(kνy y) sin(kνz z),  kα = πνα/Lα,  να = 1, 2, 3, ...,  (50)

where α = x, y, z. The energy levels are
ε = ħ²k²/(2m) = ħ²(kx² + ky² + kz²)/(2m)  (51)

and are parametrized by the three quantum numbers να. The ground state is (νx, νy, νz) = (1, 1, 1). One can order the states by increasing ε and number them by the index j, the ground state being j = 1. If Lx = Ly = Lz = L, then
ενx,νy,νz = [π²ħ²/(2mL²)] (νx² + νy² + νz²)  (52)

and the same value of εj can be realized for different sets of νx, νy, and νz, for instance, (1, 5, 12), (5, 12, 1), etc. The number of different sets of (νx, νy, νz) having the same εj is called the degeneracy and is denoted gj. States with νx = νy = νz have gj = 1 and are called non-degenerate. If only two of the numbers νx, νy, and νz coincide, the degeneracy is gj = 3. If all numbers are different, gj = 3! = 6. If one sums over the energy levels parametrized by j, one has to multiply the summands by the corresponding degeneracies.
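The degeneracies gj for the cubic box can be found by brute-force enumeration. A small Python sketch counts how many triples (νx, νy, νz) share each value of νx² + νy² + νz², reproducing the cases just listed:

```python
from collections import Counter

# Degeneracies g_j for a particle in a cubic box: eps is proportional to
# nx^2 + ny^2 + nz^2, with nx, ny, nz = 1, 2, 3, ...
nmax = 12
g = Counter(nx**2 + ny**2 + nz**2
            for nx in range(1, nmax + 1)
            for ny in range(1, nmax + 1)
            for nz in range(1, nmax + 1))

assert g[3] == 1    # ground state (1,1,1): non-degenerate
assert g[6] == 3    # (2,1,1) and permutations: two numbers coincide
assert g[14] == 6   # (1,2,3) and permutations: all numbers different
```

Incidentally, such an enumeration also reveals accidental degeneracies at higher energies, where unrelated triples happen to give the same sum of squares.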
C. Density of states
For systems of a large size and thus very finely quantized states, one can define the density of states ρ(ε) as the number of energy levels dn in the interval dε, that is,
dn = ρ(ε)dε. (53)
It is convenient to start calculation of ρ(ε) by introducing the number of states dn in the “elementary volume” dνxdνydνz, considering νx, νy, and νz as continuous. The result obviously is
dn = dνx dνy dνz,  (54)

that is, the corresponding density of states is 1. Now one can rewrite the same number of states in terms of the wave vector k using Eq. (51),

dn = (V/π³) dkx dky dkz,  (55)

where V = LxLyLz is the volume of the box. After that one can go over to the number of states within the shell dk, as was done for the distribution function of the molecules over velocities. Taking into account that kx, ky, and kz are all positive, this shell is not a complete spherical shell but 1/8 of it. Thus
dn = (V/π³)(4π/8) k² dk.  (56)

The last step is to change the variable from k to ε using Eq. (51). With
k² = 2mε/ħ²,  k = √(2m/ħ²) √ε,  dk/dε = (1/2) √(2m/ħ²) (1/√ε),  (57)

one obtains Eq. (53) with
ρ(ε) = [V/(2π)²] (2m/ħ²)^(3/2) √ε.  (58)
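Equation (58) can be tested against direct state counting: the cumulative number of box states with energy below ε should approach ∫₀^ε ρ(ε′)dε′ for a large box. The sketch below assumes units with ħ²/(2m) = 1 and L = 1 (so that ε = π²(νx² + νy² + νz²) and the integral of (58) gives n(ε) = ε^(3/2)/(6π²)):

```python
from math import pi

# Units: hbar^2/(2m) = 1, box side L = 1, so eps = pi^2 (nx^2 + ny^2 + nz^2)
def n_counted(eps, nmax=40):
    """Directly count box states with energy below eps."""
    r2 = eps / pi**2
    return sum(1
               for nx in range(1, nmax + 1)
               for ny in range(1, nmax + 1)
               for nz in range(1, nmax + 1)
               if nx**2 + ny**2 + nz**2 <= r2)

def n_integrated(eps):
    """Integral of the density of states (58) in the same units."""
    return eps**1.5 / (6 * pi**2)

eps = 10000.0  # high enough that the spectrum is quasi-continuous
assert abs(n_counted(eps) / n_integrated(eps) - 1) < 0.1
```

The residual deviation is a surface correction of relative order 1/√ε, which shrinks as the box (or the energy) grows.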
IX. STATISTICAL THERMODYNAMICS OF THE IDEAL GAS
In this section we demonstrate how the results for the ideal gas, previously obtained within the molecular theory, follow from the more general framework of statistical physics in the preceding section. Consider an ideal gas in a sufficiently large container. In this case the energy levels of the system, see Eq. (51), are quantized so finely that one can introduce the density of states ρ (ε) defined by Eq. (53). The number of particles in quantum states within the energy interval dε is the product of the number of particles in one state N(ε) and the number of states dnε in this energy interval. With N(ε) given by Eq. (29) one obtains
dNε = N(ε) dnε = (N/Z) e^(−βε) ρ(ε) dε.  (59)

For finely quantized levels one can replace summation in Eq. (28) and similar formulas by integration as

Σ_i ... ⇒ ∫ dε ρ(ε) ...  (60)
Thus for the partition function (28) one has

Z = ∫ dε ρ(ε) e^(−βε).  (61)
For quantum particles in a rigid box, with the help of Eq. (58) one obtains

Z = [V/(2π)²] (2m/ħ²)^(3/2) ∫₀^∞ dε √ε e^(−βε) = [V/(2π)²] (2m/ħ²)^(3/2) √π/(2β^(3/2)) = V [mkBT/(2πħ²)]^(3/2).  (62)
Using the classical relation ε = mv2/2 and thus dε = mvdv, one can obtain the formula for the number of particles in the speed interval dv
dNv = Nf(v)dv, (63) where the speed distribution function f(v) is given by
f(v) = (1/Z) e^(−βε) ρ(ε) mv = (1/V) [2πħ²/(mkBT)]^(3/2) [V/(2π)²] (2m/ħ²)^(3/2) √ε mv e^(−βε)

= [m/(2πkBT)]^(3/2) 4πv² exp[−mv²/(2kBT)].  (64)

This result coincides with the Maxwell distribution function obtained earlier from the molecular theory of gases. The Planck constant ħ that links to quantum mechanics has disappeared from the final result.
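As a numerical sanity check of Eq. (64), one can integrate f(v) and verify its normalization and the average kinetic energy (3/2)kBT. The sketch below assumes hypothetical units with m = kBT = 1 and uses a crude midpoint rule:

```python
from math import pi, exp

m, kT = 1.0, 1.0  # hypothetical units: atom mass and k_B T set to 1

def f(v):
    # Maxwell speed distribution, Eq. (64)
    return (m / (2 * pi * kT))**1.5 * 4 * pi * v**2 * exp(-m * v**2 / (2 * kT))

# midpoint integration over 0 <= v <= vmax (the tail beyond is negligible)
dv, vmax = 1e-3, 12.0
vs = [(i + 0.5) * dv for i in range(int(vmax / dv))]
norm = sum(f(v) * dv for v in vs)
mean_ekin = sum(0.5 * m * v**2 * f(v) * dv for v in vs)

assert abs(norm - 1.0) < 1e-4            # f(v) is normalized
assert abs(mean_ekin - 1.5 * kT) < 1e-4  # average kinetic energy = (3/2) k_B T
```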
The internal energy of the ideal gas is its kinetic energy
U = N ε̄,  (65)
where ε̄ = m⟨v²⟩/2 is the average kinetic energy of an atom. The latter can be calculated with the help of the speed distribution function above, as was done in the molecular theory of gases. The result has the form

ε̄ = (f/2) kBT,  (66)

where in our case f = 3, corresponding to three translational degrees of freedom. The same result can be obtained from Eq. (31):

U = −N ∂ln Z/∂β = −N ∂/∂β ln{V [m/(2πħ²β)]^(3/2)} = (3/2) N ∂ln β/∂β = (3/2) N/β = (3/2) NkBT.  (67)
After that, the known result for the heat capacity CV = (∂U/∂T)V follows. The pressure P is given by the thermodynamic formula

P = −(∂F/∂V)_T.  (68)

With the help of Eqs. (38) and (62) one obtains

P = NkBT ∂ln Z/∂V = NkBT ∂ln V/∂V = NkBT/V,  (69)

which amounts to the equation of state of the ideal gas, PV = NkBT.
X. STATISTICAL THERMODYNAMICS OF HARMONIC OSCILLATORS
Consider an ensemble of N identical harmonic oscillators, each of them described by the Hamiltonian
Ĥ = p̂²/(2m) + kx²/2.  (70)

Here the momentum operator p̂ is given by Eq. (44) and k is the spring constant. This theoretical model can describe, for instance, vibrational degrees of freedom of diatomic molecules. In this case x is the elongation of the chemical bond between the two atoms, relative to the equilibrium bond length. The stationary Schrödinger equation (39) for a harmonic oscillator becomes

−(ħ²/2m) d²Ψ(x)/dx² + (kx²/2) Ψ(x) = εΨ(x).  (71)
The boundary conditions for this equation require that Ψ(±∞) = 0 and the integral in Eq. (41) converges. In contrast to Eq. (45), this is a linear differential equation with a variable coefficient. Such differential equations in general do not have solutions in terms of known functions. In some cases the solution can be expressed through special functions such as hypergeometric functions, Bessel functions, etc. The solution of Eq. (71) can be found, but this task belongs to quantum mechanics courses. The main result that we need here is that the energy eigenvalues of the harmonic oscillator have the form

εν = ħω0 (ν + 1/2),  ν = 0, 1, 2, ...,  (72)

where ω0 = √(k/m) is the frequency of oscillations. The energy level with ν = 0 is the ground state. The ground-state energy ε0 = ħω0/2 is not zero, as it would be for a classical oscillator. This quantum ground-state energy is called the zero-point energy. It is irrelevant in the calculation of the heat capacity of an ensemble of harmonic oscillators. The partition function of a harmonic oscillator is
Z = Σ_{ν=0}^∞ e^(−βεν) = e^(−βħω0/2) Σ_{ν=0}^∞ (e^(−βħω0))^ν = e^(−βħω0/2)/(1 − e^(−βħω0)) = 1/(e^(βħω0/2) − e^(−βħω0/2)) = 1/[2 sinh(βħω0/2)],  (73)

where the result for the geometric progression

1 + x + x² + x³ + ... = (1 − x)^(−1),  |x| < 1,  (74)

was used. The hyperbolic functions are defined by

sinh(x) ≡ (e^x − e^(−x))/2,  cosh(x) ≡ (e^x + e^(−x))/2,

tanh(x) ≡ sinh(x)/cosh(x),  coth(x) ≡ cosh(x)/sinh(x) = 1/tanh(x).  (75)

We will be using the formulas

[sinh(x)]′ = cosh(x),  [cosh(x)]′ = sinh(x)  (76)

and the limiting forms

sinh(x) ≅ { x, x ≪ 1;  e^x/2, x ≫ 1 },  tanh(x) ≅ { x, x ≪ 1;  1, x ≫ 1 }.  (77)
The internal (average) energy of the ensemble of oscillators is given by Eq. (31). This yields

U = −N ∂ln Z/∂β = N ∂ln sinh(βħω0/2)/∂β = [N/sinh(βħω0/2)] ∂sinh(βħω0/2)/∂β = N (ħω0/2) coth(βħω0/2)  (78)

or

U = N (ħω0/2) coth[ħω0/(2kBT)].  (79)

Another way of writing the internal energy is

U = Nħω0 [1/(e^(βħω0) − 1) + 1/2],  (80)

where the constant term with 1/2 is the zero-point energy. The limiting low- and high-temperature cases of this formula are

U ≅ { Nħω0/2, kBT ≪ ħω0;  NkBT, kBT ≫ ħω0 }.  (81)
In the low-temperature case, almost all oscillators are in their ground states, since e^(−βħω0) ≪ 1. Thus the main term that contributes to the partition function in Eq. (73) is ν = 0. Correspondingly, U is the sum of the ground-state energies of all oscillators. At high temperatures, the previously obtained classical result is reproduced, and the Planck constant ħ that links to quantum mechanics disappears. In this case, e^(−βħω0) is only slightly smaller than one, so that very many different ν contribute to Z in Eq. (73). One can then replace summation by integration, as was done above for the particle in a potential box; in that case one also obtained the classical results. One can see that the crossover between the two limiting cases corresponds to kBT ∼ ħω0. In the high-temperature limit kBT ≫ ħω0, many low-lying energy levels are populated. The top populated level ν* can be estimated from kBT ∼ ħω0 ν*, so that

ν* ∼ kBT/(ħω0) ≫ 1.  (82)

The probability to find an oscillator in states with ν ≫ ν* is exponentially small. The heat capacity is defined by

C = dU/dT = N (ħω0/2) {−1/sinh²[ħω0/(2kBT)]} [−ħω0/(2kBT²)] = NkB [ħω0/(2kBT)]² / sinh²[ħω0/(2kBT)].  (83)

This formula has the limiting cases

C ≅ { NkB [ħω0/(kBT)]² exp[−ħω0/(kBT)], kBT ≪ ħω0;  NkB, kBT ≫ ħω0 }.  (84)
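The exact oscillator results and their limits are easy to confront numerically. The sketch below assumes units ħω0 = kB = 1 (so β = 1/T) and checks Eq. (80) against the equivalent form (79), the limits (81), and the high-temperature heat capacity NkB:

```python
from math import exp, tanh

# Units: hbar*omega_0 = 1, k_B = 1; energies per oscillator (N = 1)
def U(T):
    # Eq. (80): U/N = 1/(e^{1/T} - 1) + 1/2
    return 1.0 / (exp(1.0 / T) - 1.0) + 0.5

# Equivalence of Eqs. (79) and (80): U/N = (1/2) coth(1/(2T))
assert abs(U(0.7) - 0.5 / tanh(1 / (2 * 0.7))) < 1e-9

# Low-temperature limit (81): U approaches the zero-point energy 1/2
assert abs(U(0.05) - 0.5) < 1e-8

# High-temperature limit (81): U approaches k_B T
assert abs(U(50.0) / 50.0 - 1.0) < 0.01

# High-temperature heat capacity approaches N k_B (finite difference)
h = 1e-3
C = (U(50.0 + h) - U(50.0 - h)) / (2 * h)
assert abs(C - 1.0) < 1e-3
```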
FIG. 2: Heat capacity of harmonic oscillators.
One can see that at high temperatures the heat capacity can be written as

C = (f/2) NkB,  (85)

where the effective number of degrees of freedom for an oscillator is f = 2. The explanation of the additional factor 2 is that the oscillator has not only kinetic but also potential energy, and the average values of these two energies are the same. Thus the total amount of energy in a vibrational degree of freedom doubles with respect to the translational and rotational degrees of freedom. At low temperatures the vibrational heat capacity above becomes exponentially small. One says that vibrational degrees of freedom are frozen out at low temperatures. The heat capacity of an ensemble of harmonic oscillators in the whole temperature range, Eq. (83), is plotted in Fig. 2. The average quantum number of an oscillator is given by
n ≡ ⟨ν⟩ = (1/Z) Σ_{ν=0}^∞ ν e^(−βεν).  (86)
Using Eq. (72), one can calculate this sum as follows
n = (1/Z) Σ_{ν=0}^∞ (1/2 + ν) e^(−βεν) − (1/Z) Σ_{ν=0}^∞ (1/2) e^(−βεν) = [1/(ħω0 Z)] Σ_{ν=0}^∞ εν e^(−βεν) − 1/2 = −[1/(ħω0 Z)] ∂Z/∂β − 1/2 = U/(Nħω0) − 1/2.  (87)
Finally, with the help of Eq. (80), one finds

n = 1/(e^(βħω0) − 1) = 1/(exp[ħω0/(kBT)] − 1).  (88)

This is the Bose-Einstein distribution that will be considered later. Its meaning is the following. Considering the oscillator as a “box” or “mode”, one can ask what is the average number of quanta (that is, “particles” or quasiparticles) in this box at a given temperature. The latter is given by the formula above. Substituting Eq. (88) into Eq. (80), one obtains a nice formula
U = Nħω0 (n + 1/2)  (89)

resembling Eq. (72). At low temperatures, kBT ≪ ħω0, the average quantum number n becomes exponentially small. This means that the oscillator is predominantly in its ground state, ν = 0. At high temperatures, kBT ≫ ħω0, Eq. (88) yields
n ≅ kBT/(ħω0),  (90)

which has the same order of magnitude as the top populated quantum level number ν* given by Eq. (82). The density of states ρ(ε) for the harmonic oscillator can easily be found. The number of levels dnν in the interval dν is
dnν = dν (91)
(that is, there is only one level in the interval dν = 1). Then, changing the variables with the help of Eq. (72), one finds the number of levels dnε in the interval dε as
dnε = (dν/dε) dε = (dε/dν)^(−1) dε = ρ(ε) dε,  (92)

where

ρ(ε) = 1/(ħω0).  (93)

To conclude this section, let us reproduce the classical high-temperature results by a simpler method. At high temperatures, many levels are populated and the distribution function e^(−βεν) only slightly changes from one level to the other. Indeed, its relative change is
[e^(−βεν) − e^(−βεν+1)]/e^(−βεν) = 1 − e^(−β(εν+1 − εν)) = 1 − e^(−βħω0) ≅ βħω0 = ħω0/(kBT) ≪ 1.  (94)

In this situation one can replace the summation in Eq. (73) by integration over ε using the density of states of Eq. (93):

Z = ∫₀^∞ dε ρ(ε) e^(−βε) = (1/ħω0) ∫₀^∞ dε e^(−βε) = 1/(ħω0 β) = kBT/(ħω0).  (95)

Also, one can integrate over ν, considering it as continuous and not using the density of states explicitly:

Z = ∫₀^∞ dν e^(−βεν) = ∫₀^∞ dε (dν/dε) e^(−βε) = (1/ħω0) ∫₀^∞ dε e^(−βε) = 1/(ħω0 β) = kBT/(ħω0).  (96)

The results of this calculation are in accord with Eq. (93).
Now the internal energy U is given by

U = −N ∂ln Z/∂β = −N ∂/∂β ln[1/(ħω0 β)] = N ∂ln β/∂β = N/β = NkBT,  (97)

which coincides with the second line of Eq. (81). In 1907 Albert Einstein proposed a theory of the thermal properties of solids based on the assumption of the existence therein of independent thermally excited harmonic oscillators. The essential point was that Einstein applied Max Planck's idea of quantization, extending it from electromagnetic radiation (see Chapter XX) to the vibrations of atoms and molecules in solids. Before Einstein's work, it was a puzzle why the heat capacity is not constant, as predicted by the classical theory (our high-temperature limit), but becomes small at low temperatures. Fig. 2 is the result of Einstein's theory. This was historically the third application of the idea of quantization, after the seminal 1900 work of Max Planck and Einstein's theory of the photoelectric effect of 1905. All three achievements were accomplished before quantum mechanics was finalized around 1927. Einstein's result at low temperatures is only qualitative, as the model ignores the interaction of the oscillations of different atoms. The accurate theory by Debye, see Chapter XII, leads to the dependence CV ∝ T³ in 3D solids at low temperatures. Still, Figs. 2 and 4 look pretty similar.
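The replacement of the sum (73) by the integral (95) at high temperature can also be seen by direct summation. The sketch below (units ħω0 = kB = 1, so β = 1/T) compares the exact geometric-sum result with the classical value kBT/(ħω0) = T:

```python
from math import exp

# Units: hbar*omega_0 = 1, k_B = 1; beta = 1/T
def Z_exact(T):
    # Eq. (73) in closed form: e^{-beta/2} / (1 - e^{-beta})
    b = 1.0 / T
    return exp(-b / 2) / (1 - exp(-b))

# High temperature: Z approaches the classical result k_B T / (hbar omega_0) = T
for T in (20.0, 100.0):
    assert abs(Z_exact(T) / T - 1.0) < 1e-3

# Low temperature: the nu = 0 term alone, e^{-beta/2}, dominates the sum
T = 0.1
assert abs(Z_exact(T) - exp(-1 / (2 * T))) / exp(-1 / (2 * T)) < 1e-4
```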
XI. STATISTICAL THERMODYNAMICS OF ROTATORS
The rotational kinetic energy of a rigid body, expressed with respect to the principal axes, reads E_{rot} = \frac{1}{2}I_1\omega_1^2 + \frac{1}{2}I_2\omega_2^2 + \frac{1}{2}I_3\omega_3^2. (98)
We will restrict our consideration to the axially symmetric body with I1 = I2 = I, since for the fully asymmetric body the quantum energy levels cannot be found analytically. In this case Erot can be rewritten in terms of the angular momentum components Lα = Iαωα , where α = 1, 2, 3, as follows
E_{rot} = \frac{L_1^2}{2I_1} + \frac{L_2^2}{2I_2} + \frac{L_3^2}{2I_3} = \frac{L^2}{2I} + \frac{1}{2}\left(\frac{1}{I_3} - \frac{1}{I}\right)L_3^2, (99)
where L^2 = L_1^2 + L_2^2 + L_3^2. In the important case of a diatomic molecule with two identical atoms, the center of mass is located midway between the two atoms, at the distance d/2 from each, where d is the distance between the atoms. The moment of inertia of the molecule with respect to the 3 axis going through both atoms is zero, I_3 = 0. The moments of inertia with respect to the 1 and 2 axes are equal to each other:
I_1 = I_2 = I = 2\times M\left(\frac{d}{2}\right)^2 = \frac{Md^2}{2}, (100) where M is the mass of the atom. In our case, I_3 = 0, there is no kinetic energy associated with rotation around the 3 axis. Then Eq. (99) requires L_3 = 0, that is, the angular momentum of a diatomic molecule is perpendicular to its axis. In quantum mechanics L and L_3 become operators, and the rotational energy above becomes the Hamiltonian, E_{rot} \Rightarrow \hat{H}_{rot}. The solution of the stationary Schrödinger equation yields the eigenvalues of \hat{L}^2 and \hat{L}_3 in terms of the quantum numbers l, m, and n:
\hat{L}^2 = \hbar^2 l(l+1), \quad l = 0, 1, 2, \ldots (101)
\hat{L}_3 = \hbar n, \quad m, n = -l, -l+1, \ldots, l-1, l. (102)
Thus, the energy eigenvalues are given by
\varepsilon_{l,n} = \frac{\hbar^2 l(l+1)}{2I} + \frac{\hbar^2}{2}\left(\frac{1}{I_3} - \frac{1}{I}\right)n^2 (103)
that follows by substitution of Eqs. (101) and (102) into Eq. (99). The quantum number m accounts for the 2l+1 different orientations of the vector L in space. As all these orientations are equivalent, the energy does not depend on m. Thus, there are at least 2l+1 degenerate energy levels. Further, the axis of the molecule can have 2l+1 different projections on \hat{L}, parametrized by the quantum number n. In the general case I_3 \neq I these states are non-degenerate. However, they become degenerate for the fully symmetric body, I_1 = I_2 = I_3 = I. In this case the energy eigenvalues become
\varepsilon_l = \frac{\hbar^2 l(l+1)}{2I}, (104)
and the degeneracy of the quantum level l is g_l = (2l+1)^2. For the diatomic molecule I_3 = 0, and the only acceptable value of n is n = 0. Thus one obtains Eq. (104) again, but with the degeneracy g_l = 2l+1. Below we will consider only the two cases: a) a fully symmetric body and b) a diatomic molecule. The rotational partition function for both of them is given by
Z = \sum_l g_l e^{-\beta\varepsilon_l}, \quad g_l = (2l+1)^\xi, \quad \xi = \begin{cases} 2, & \text{symmetric body} \\ 1, & \text{diatomic molecule.} \end{cases} (105)
In the general axially symmetric case I_1 = I_2 \neq I_3 with I_3 \neq 0, one has to perform a numerical summation over both l and n. However, even the sum over l above cannot be calculated analytically, so we will consider the limits of low and high temperatures.
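Eq. (105) is straightforward to sum on a computer. The sketch below (helper names and the level cutoff are illustrative) uses the dimensionless temperature t = k_BT/(\hbar^2/2I), so that \beta\varepsilon_l = l(l+1)/t, and obtains the heat capacity C_V/(Nk_B) by numerically differentiating U = t^2\,d\ln Z/dt, reproducing the high-temperature values (\xi+1)/2 discussed in this section:

```python
import math

def z_rot(t, xi, lmax=400):
    """Rotational partition function, Eq. (105): sum over l of
    (2l+1)^xi * exp(-l(l+1)/t), with t = kB*T/(hbar^2/(2I))."""
    return sum((2*l + 1)**xi * math.exp(-l*(l + 1)/t) for l in range(lmax))

def cv_rot(t, xi, h=1e-3):
    """CV/(N kB) from U = t^2 d(ln Z)/dt, by central differences."""
    u = lambda s: s*s*(math.log(z_rot(s + h, xi)) - math.log(z_rot(s - h, xi)))/(2*h)
    return (u(t + h) - u(t - h))/(2*h)

print(cv_rot(50.0, 1))   # diatomic molecule: approaches (xi+1)/2 = 1
print(cv_rot(50.0, 2))   # symmetric body: approaches 3/2
```

Summing at intermediate t instead of t = 50 traces out the full curve of Fig. 3, including the maximum of the heat capacity.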
FIG. 3: Rotational heat capacity of a fully symmetric body and a diatomic molecule.
In the low-temperature limit, most rotators are in their ground state l = 0, and very few are in the first excited state l = 1. Discarding all other values of l, one obtains
Z \cong 1 + 3^\xi \exp\left(-\frac{\beta\hbar^2}{I}\right) (106)
at low temperatures. The rotational internal energy is given by
U = -N\frac{\partial\ln Z}{\partial\beta} \cong 3^\xi N\frac{\hbar^2}{I}\exp\left(-\frac{\beta\hbar^2}{I}\right) = 3^\xi N\frac{\hbar^2}{I}\exp\left(-\frac{\hbar^2}{Ik_BT}\right) (107)
and it is exponentially small. The heat capacity is
C_V = \left(\frac{\partial U}{\partial T}\right)_V \cong 3^\xi Nk_B\left(\frac{\hbar^2}{Ik_BT}\right)^2\exp\left(-\frac{\hbar^2}{Ik_BT}\right) (108)
and it is also exponentially small. In the high-temperature limit, very many energy levels are thermally populated, so that in Eq. (105) one can neglect 1 in comparison with l and replace the summation over l by integration. With
g_l \cong (2l)^\xi, \quad \varepsilon_l \equiv \varepsilon \cong \frac{\hbar^2 l^2}{2I}, \quad \frac{\partial\varepsilon}{\partial l} = \frac{\hbar^2 l}{I}, \quad \frac{\partial l}{\partial\varepsilon} = \frac{I}{\hbar^2 l} = \sqrt{\frac{I}{2\hbar^2\varepsilon}} (109)
one has
Z \cong \int_0^\infty dl\, g_l e^{-\beta\varepsilon_l} \cong 2^\xi\int_0^\infty dl\, l^\xi e^{-\beta\varepsilon_l} = 2^\xi\int_0^\infty d\varepsilon\,\frac{\partial l}{\partial\varepsilon}\, l^\xi e^{-\beta\varepsilon} (110)
and further
Z \cong 2^\xi\sqrt{\frac{I}{2\hbar^2}}\int_0^\infty d\varepsilon\,\frac{1}{\sqrt{\varepsilon}}\left(\frac{2I\varepsilon}{\hbar^2}\right)^{\xi/2} e^{-\beta\varepsilon} = 2^{(3\xi-1)/2}\left(\frac{I}{\hbar^2}\right)^{(\xi+1)/2}\int_0^\infty d\varepsilon\,\varepsilon^{(\xi-1)/2} e^{-\beta\varepsilon} = 2^{(3\xi-1)/2}\left(\frac{I}{\beta\hbar^2}\right)^{(\xi+1)/2}\int_0^\infty dx\, x^{(\xi-1)/2} e^{-x} \propto \beta^{-(\xi+1)/2}. (111)
Now the internal energy is given by
U = -N\frac{\partial\ln Z}{\partial\beta} = N\frac{\xi+1}{2}\frac{\partial\ln\beta}{\partial\beta} = N\frac{\xi+1}{2}\frac{1}{\beta} = \frac{\xi+1}{2}Nk_BT, (112)
whereas the heat capacity is
C_V = \left(\frac{\partial U}{\partial T}\right)_V = \frac{\xi+1}{2}Nk_B. (113)
In these two formulas, f = \xi+1 is the number of degrees of freedom: f = 2 for a diatomic molecule and f = 3 for a fully symmetric rotator. This result confirms the principle of the equidistribution of energy over the degrees of freedom in the classical limit, which requires high enough temperatures. The all-temperature result for the heat capacity of rotators, based on the numerical summation in Eq. (105), is shown in Fig. 3. Surprisingly, there is a maximum of the heat capacity in the intermediate temperature region, which contrasts with the monotonic increase of the heat capacity of the harmonic oscillator shown in Fig. 2. To conclude this section, let us calculate the partition function classically at high temperatures. The calculation can be performed in the general case of all moments of inertia being different. The rotational energy, Eq. (98), can be written as
E_{rot} = \frac{L_1^2}{2I_1} + \frac{L_2^2}{2I_2} + \frac{L_3^2}{2I_3}. (114)
The dynamical variables here are the angular momentum components L_1, L_2, and L_3, so the partition function can be obtained by integration with respect to them:
Z_{class} = \int_{-\infty}^\infty dL_1\,dL_2\,dL_3\,\exp(-\beta E_{rot}) = \int_{-\infty}^\infty dL_1\exp\left(-\frac{\beta L_1^2}{2I_1}\right)\times\int_{-\infty}^\infty dL_2\exp\left(-\frac{\beta L_2^2}{2I_2}\right)\times\int_{-\infty}^\infty dL_3\exp\left(-\frac{\beta L_3^2}{2I_3}\right) = \sqrt{\frac{2I_1}{\beta}\frac{2I_2}{\beta}\frac{2I_3}{\beta}}\left(\int_{-\infty}^\infty dx\, e^{-x^2}\right)^3 \propto \beta^{-3/2}. (115)
In fact, Z_{class} contains some additional coefficients, for instance, from the integration over the orientations of the molecule. However, these coefficients are irrelevant in the calculation of the internal energy and the heat capacity. For the internal energy one obtains
U = -N\frac{\partial\ln Z}{\partial\beta} = \frac{3}{2}N\frac{\partial\ln\beta}{\partial\beta} = \frac{3}{2}Nk_BT (116)
that coincides with Eq. (112) with \xi = 2. In the case of the diatomic molecule the energy has the form
E_{rot} = \frac{L_1^2}{2I_1} + \frac{L_2^2}{2I_2}. (117)
Then the partition function becomes
Z_{class} = \int_{-\infty}^\infty dL_1\,dL_2\,\exp(-\beta E_{rot}) = \int_{-\infty}^\infty dL_1\exp\left(-\frac{\beta L_1^2}{2I_1}\right)\times\int_{-\infty}^\infty dL_2\exp\left(-\frac{\beta L_2^2}{2I_2}\right) = \sqrt{\frac{2I_1}{\beta}\frac{2I_2}{\beta}}\left(\int_{-\infty}^\infty dx\, e^{-x^2}\right)^2 \propto \beta^{-1}. (118)
This leads to
U = Nk_BT, (119)
in accordance with Eq. (112) with \xi = 1.
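The scaling Z_{class} \propto \beta^{-3/2} of Eq. (115) can be checked numerically: evaluate each Gaussian factor on a grid and differentiate \ln Z_{class} with respect to \beta. A sketch with illustrative parameter values (a simple Riemann sum; the function name is hypothetical):

```python
import math

def z_class(beta, inertias, half_width=60.0, n=4001):
    """Classical rotational partition function, Eq. (115): a product of
    one-dimensional Gaussian integrals over the components L_alpha."""
    z = 1.0
    h = 2.0 * half_width / (n - 1)
    for i_alpha in inertias:
        s = sum(math.exp(-beta * (-half_width + k*h)**2 / (2.0*i_alpha))
                for k in range(n))
        z *= h * s   # endpoint corrections are negligible for these tails
    return z

b, inertias = 1.0, (1.0, 2.0, 3.0)
dlnz = (math.log(z_class(1.01*b, inertias)) -
        math.log(z_class(0.99*b, inertias))) / (0.02*b)
print(-dlnz)   # -d(ln Z)/d(beta) = 3/(2*beta): close to 1.5 at beta = 1
```

Since \ln Z_{class} = const - (3/2)\ln\beta, the numerical derivative gives U per molecule, 3/(2\beta) = (3/2)k_BT, independently of the values of the moments of inertia.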
XII. STATISTICAL PHYSICS OF VIBRATIONS IN SOLIDS
Above in Sec. X we have studied the statistical mechanics of harmonic oscillators. For instance, a diatomic molecule behaves as an oscillator, the chemical bond between the atoms periodically changing its length with time in the classical description. For molecules consisting of N ≥ 2 atoms, vibrations become more complicated and, as a rule, involve all N atoms. Still, within the harmonic approximation (the potential energy expanded up to second-order terms in the deviations from the equilibrium positions), the classical equations of motion are linear and can be treated with matrix algebra. Diagonalizing the dynamical matrix of the system, one can find 3N − 6 linear combinations of atomic coordinates that dynamically behave as independent harmonic oscillators. Such collective vibrational modes of the system are called normal modes. For instance, a hypothetical triatomic molecule whose atoms are allowed to move only along a straight line has 2 normal modes. If the masses of the first and third atoms are the same, one of the normal modes corresponds to antiphase oscillations of the first and third atoms with the second (central) atom at rest. The other normal mode corresponds to in-phase oscillations of the first and third atoms with the central atom oscillating in antiphase to them. For more complicated molecules, normal modes can be found numerically. Normal modes can be considered quantum mechanically as independent quantum oscillators, as in Sec. X. Then the vibrational partition function of the molecule is a product of the partition functions of the individual normal modes, and the internal energy is a sum of the internal energies of these modes. The latter is a consequence of the independence of the normal modes. A solid with a well-defined crystal structure is translationally invariant, which simplifies finding the normal modes in spite of the large number of atoms N.
In the simplest case of one atom in the unit cell normal modes can be found immediately by making Fourier transformation of atomic deviations. The resulting normal modes are sound waves with different wave vectors k and polarizations (in the simplest case one longitudinal and two equivalent transverse modes). In the sequel, to illustrate basic principles, we will consider only one type of sound waves (say, the longitudinal wave) but we will count it three times. The results can be then generalized for different kinds of waves. For wave vectors small enough, there is the linear relation between the wave vector and the frequency of the oscillations
ωk = vk (120)
In this formula, \omega_k depends only on the magnitude k = 2\pi/\lambda and not on the direction of k, and v is the speed of sound. The acceptable values of k should satisfy the boundary conditions at the boundaries of the solid body. Obviously, the thermodynamics of a large body is insensitive to the shape of the body and to the exact form of these conditions. (Note that the body's surface can be rough, and then it is difficult to formulate a model of what exactly happens at the surface!) The simplest possibility is to consider a body of box shape with dimensions L_x, L_y, and L_z and use clamped boundary conditions, so that the atoms at the boundary cannot move. These boundary conditions are the same as in the quantum problem of a particle in a potential box. One obtains atomic displacements in the normal modes of the type u(x) = u_0\sin(k_x x), etc., and discrete values of the wave-vector components k_\alpha with \alpha = x, y, z:
k_\alpha = \pi\frac{\nu_\alpha}{L_\alpha}, \quad \nu_\alpha = 1, 2, 3, \ldots (121)
If the atoms are arranged in a simple cubic lattice with the lattice spacing a, then Lα = aNα, where Nα are the numbers of atoms (or unit cells) in each direction α. Then
k_\alpha = \frac{\pi}{a}\frac{\nu_\alpha}{N_\alpha}. (122)
Similarly to the problem of the particle in a box, one can introduce the frequency density of normal modes \rho(\omega) via the number of normal modes in the frequency interval d\omega,
dn_\omega = \rho(\omega)\,d\omega. (123)
Using Eq. (56) multiplied by the number of phonon modes, 3, and changing from k to \omega with the help of Eq. (120), one obtains
\rho(\omega) = \frac{3V}{2\pi^2 v^3}\omega^2. (124)
The total number of normal modes should be 3N:
3N = 3\sum_{\mathbf{k}} 1 = 3\sum_{\nu_x,\nu_y,\nu_z} 1. (125)
This equation splits into three separate equations,
N_\alpha = \sum_{\nu_\alpha=1}^{\nu_{\alpha,max}} 1 = \nu_{\alpha,max}. (126)
This defines the maximal value of \nu_\alpha and, according to Eq. (122), of k_\alpha, which happens to be k_{\alpha,max} = \pi/a for all \alpha. As the solid body is macroscopic and there are very many values of the allowed wave vectors, one can replace summation by integration and use the density of states:
\sum_{\mathbf{k}}\ldots \Rightarrow \int_0^{\omega_{max}} d\omega\,\rho(\omega)\ldots (127)
At large k, and thus large \omega, the linear dependence of Eq. (120) is no longer valid. Moreover, for models with elastic bonds on the lattice, at large k there are no purely longitudinal or transverse modes, and the mode frequency depends on the direction of the wave vector. Thus Eq. (124) becomes invalid as well. Still, one can make a crude approximation, assuming that Eqs. (120) and (124) are valid up to some maximal frequency of the sound waves in the body. The model based on this assumption is called the Debye model. The maximal frequency (the Debye frequency \omega_D) is defined by Eq. (125), which takes the form
3N = \int_0^{\omega_D} d\omega\,\rho(\omega). (128)
With the use of Eq. (124) one obtains
3N = \int_0^{\omega_D} d\omega\,\frac{3V}{2\pi^2 v^3}\omega^2 = \frac{V\omega_D^3}{2\pi^2 v^3}, (129)
wherefrom
\omega_D = \left(6\pi^2\right)^{1/3}\frac{v}{a}, (130)
where V/N = v_0 = a^3 is the unit-cell volume. It is convenient to rewrite Eq. (124) in terms of \omega_D as
\rho(\omega) = 9N\frac{\omega^2}{\omega_D^3}. (131)
The consideration up to this point has been completely classical. Now we will consider each k-mode as an independent harmonic oscillator. The quantum energy levels of the latter are given by Eq. (72). In our case, for a k-oscillator it becomes
\varepsilon_{\mathbf{k},\nu_{\mathbf{k}}} = \hbar\omega_{\mathbf{k}}\left(\nu_{\mathbf{k}} + \frac{1}{2}\right), \quad \nu_{\mathbf{k}} = 0, 1, 2, \ldots (132)
One quantum of energy \hbar\omega_{\mathbf{k}} in the k-mode is called a phonon. The quantum number \nu_{\mathbf{k}} is the number of phonons in the k-mode. Note the relation
\varepsilon = \hbar\omega (133)
between the energy of a quantum of any wave (phonons, photons) and its frequency. All the modes contribute additively to the internal energy, thus
U = 3\sum_{\mathbf{k}}\langle\varepsilon_{\mathbf{k},\nu_{\mathbf{k}}}\rangle = 3\sum_{\mathbf{k}}\hbar\omega_{\mathbf{k}}\left(\langle\nu_{\mathbf{k}}\rangle + \frac{1}{2}\right) = 3\sum_{\mathbf{k}}\hbar\omega_{\mathbf{k}}\left(n_{\mathbf{k}} + \frac{1}{2}\right), (134)
where, similarly to Eq. (88),
n_{\mathbf{k}} = \frac{1}{\exp(\beta\hbar\omega_{\mathbf{k}}) - 1}. (135)
Replacing summation by integration and rearranging Eq. (134), within the Debye model one obtains
U = U_0 + \int_0^{\omega_D} d\omega\,\rho(\omega)\frac{\hbar\omega}{\exp(\beta\hbar\omega) - 1}, (136)
where U_0 is the zero-temperature value of U due to the zero-point motion. The integral term in Eq. (136) can be calculated analytically at low and high temperatures.
FIG. 4: Temperature dependence of the phonon heat capacity. The low-temperature result C ∝ T^3 (dashed line) holds only for temperatures much smaller than the Debye temperature θ_D.
For k_BT \ll \hbar\omega_D, the integral converges at \omega \ll \omega_D, so that one can extend the upper limit of integration to \infty. One obtains
U = U_0 + \int_0^\infty d\omega\,\rho(\omega)\frac{\hbar\omega}{\exp(\beta\hbar\omega) - 1} = U_0 + \frac{9N\hbar}{\omega_D^3}\int_0^\infty d\omega\,\frac{\omega^3}{\exp(\beta\hbar\omega) - 1} = U_0 + \frac{9N}{(\hbar\omega_D)^3\beta^4}\int_0^\infty dx\,\frac{x^3}{e^x - 1} = U_0 + \frac{9N}{(\hbar\omega_D)^3\beta^4}\frac{\pi^4}{15} = U_0 + \frac{3\pi^4}{5}Nk_BT\left(\frac{k_BT}{\hbar\omega_D}\right)^3. (137)
This result can be written in the form
U = U_0 + \frac{3\pi^4}{5}Nk_BT\left(\frac{T}{\theta_D}\right)^3, (138)
where
θD = ~ωD/kB (139) is the Debye temperature. The heat capacity is given by
C_V = \left(\frac{\partial U}{\partial T}\right)_V = \frac{12\pi^4}{5}Nk_B\left(\frac{T}{\theta_D}\right)^3, \quad T \ll \theta_D. (140)
Although the Debye temperature enters Eqs. (138) and (140), these results are accurate and insensitive to the Debye approximation. For k_BT \gg \hbar\omega_D, the exponential in Eq. (136) can be expanded for \beta\hbar\omega \ll 1. It is, however, convenient to step back and do the same in Eq. (134). With the help of Eq. (125) one obtains
U = U_0 + 3\sum_{\mathbf{k}}\frac{\hbar\omega_{\mathbf{k}}}{\beta\hbar\omega_{\mathbf{k}}} = U_0 + 3k_BT\sum_{\mathbf{k}} 1 = U_0 + 3Nk_BT. (141)
Note that the phonon spectrum \omega_{\mathbf{k}}, which is due to the interactions in the system, has disappeared from the formula. For the heat capacity one then obtains
C_V = 3Nk_B, (142)
k_B per each of the approximately 3N vibrational modes. One can see that Eqs. (141) and (142) are accurate because they do not use the Debye model. The assumption of the Debye model is really essential at intermediate temperatures, where Eq. (136) is approximate. The temperature dependence of the phonon heat capacity following from Eq. (136) is shown in Fig. 4, together with its low-temperature asymptote, Eq. (140), and the high-temperature asymptote, Eq. (142). Because of the large numerical coefficient in Eq. (140), the law C_V ∝ T^3 in fact holds only for temperatures much smaller than the Debye temperature. The theory of quantized vibrations in solids developed by Debye improves upon the simpler Einstein model of independent oscillators.
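At intermediate temperatures the integral in Eq. (136) must be evaluated numerically. Differentiating it with respect to T gives the standard Debye heat-capacity integral, C_V/(3Nk_B) = 3t^3\int_0^{1/t} x^4 e^x/(e^x-1)^2\,dx with t = T/\theta_D, which interpolates between Eqs. (140) and (142). A sketch using a simple midpoint rule (the function name and grid size are illustrative):

```python
import math

def debye_cv(t, n=2000):
    """CV/(3*N*kB) in the Debye model: 3*t^3 times the integral over
    x in (0, 1/t) of x^4*e^x/(e^x - 1)^2, with t = T/theta_D."""
    xmax = 1.0 / t
    h = xmax / n
    s = 0.0
    for k in range(n):
        x = (k + 0.5) * h          # midpoint rule avoids the x = 0 endpoint
        ex = math.exp(x)
        s += x**4 * ex / (ex - 1.0)**2
    return 3.0 * t**3 * s * h

print(debye_cv(10.0))    # high T: tends to 1 (Dulong-Petit, CV = 3*N*kB)
print(debye_cv(0.02))    # low T: tends to (4*pi^4/5)*t^3, cf. Eq. (140)
```

Scanning t from 0 to about 1.5 with this function reproduces the curve of Fig. 4.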
XIII. SPINS IN MAGNETIC FIELD
Magnetism of solids is in most cases due to the intrinsic angular momentum of the electron, called spin. In quantum mechanics, spin is an operator represented by a matrix. The value of the electronic angular momentum is s = 1/2, so that it is described by a 2×2 matrix, and the quantum-mechanical eigenvalue of the square of the electron's angular momentum (in units of \hbar) is
\hat{s}^2 = s(s+1) = 3/4, (143)
cf. Eq. (101). The eigenvalues of \hat{s}_z, the projection of a free spin on any direction z, are given by
\hat{s}_z = m, \quad m = \pm 1/2, (144)
cf. Eq. (102). The spin of the electron can be interpreted as a circular motion of the charge inside this elementary particle. This results in the magnetic moment of the electron
µˆ = gµBˆs, (145)
where g = 2 is the so-called g-factor and \mu_B is the Bohr magneton, \mu_B = e\hbar/(2m_e) = 0.927\times 10^{-23} J/T. Note that the model of circularly moving charges inside the electron leads to the same result but with g = 1; the true value g = 2 follows from relativistic quantum theory. The energy (the Hamiltonian) of an electron spin in a magnetic field of induction B has the form
\hat{H} = -\hat{\mu}\cdot\mathbf{B} = -g\mu_B\hat{\mathbf{s}}\cdot\mathbf{B}, (146) the so-called Zeeman Hamiltonian. One can see that the minimum of the energy corresponds to the spin pointing in the direction of B. Choosing the z axis in the direction of B, one obtains \hat{H} = -g\mu_B\hat{s}_z B. (147) Thus the energy eigenvalues of the electronic spin in a magnetic field, with the help of Eq. (144), become
\varepsilon_m = -g\mu_B mB, (148) where, for the electron, m = \pm 1/2. In an atom, the spins of all electrons usually combine into a collective spin S that has integer or half-integer values and is described by a (2S+1)\times(2S+1) matrix. Instead of Eq. (146), one has
\hat{H} = -g\mu_B\hat{\mathbf{S}}\cdot\mathbf{B}. (149)
Eigenvalues of the projections of Sˆ on the direction of B are given by
m = −S, −S + 1,...,S − 1, S, (150) all together 2S + 1 different values, c.f. Eq. (101). The 2S + 1 different energy levels of the spin are given by Eq. (148). The thermodynamics of the spin is determined by the partition function
Z_S = \sum_{m=-S}^{S} e^{-\beta\varepsilon_m} = \sum_{m=-S}^{S} e^{my}, (151)
FIG. 5: Brillouin function BS (x) for different S.
where εm is given by Eq. (148) and
y \equiv \beta g\mu_B B = \frac{g\mu_B B}{k_BT}. (152)
By the change of the summation index to k = m + S, Eq. (151) can be reduced to a finite geometrical progression:
Z_S = e^{-Sy}\sum_{k=0}^{2S}\left(e^y\right)^k = e^{-Sy}\frac{e^{(2S+1)y} - 1}{e^y - 1} = \frac{e^{(S+1/2)y} - e^{-(S+1/2)y}}{e^{y/2} - e^{-y/2}} = \frac{\sinh[(S+1/2)y]}{\sinh(y/2)}. (153)
For S = 1/2, with the help of \sinh(2x) = 2\sinh(x)\cosh(x), one can transform Eq. (153) into
Z_{1/2} = 2\cosh(y/2). (154)
The partition function being found, one can easily obtain thermodynamic quantities. For instance, the free energy F is given by Eq. (38),
F = -Nk_BT\ln Z_S = -Nk_BT\ln\frac{\sinh[(S+1/2)y]}{\sinh(y/2)}. (155)
The average spin polarization is defined by
\langle S_z\rangle = \langle m\rangle = \frac{1}{Z}\sum_{m=-S}^{S} m\, e^{-\beta\varepsilon_m}. (156)
With the help of Eq. (151) one obtains
\langle S_z\rangle = \frac{1}{Z}\frac{\partial Z}{\partial y} = \frac{\partial\ln Z}{\partial y}. (157)
With the use of Eq. (153) and the derivatives
\sinh(x)' = \cosh(x), \quad \cosh(x)' = \sinh(x), (158)
one obtains
\langle S_z\rangle = b_S(y), (159)
where
b_S(y) = \left(S + \frac{1}{2}\right)\coth\left[\left(S + \frac{1}{2}\right)y\right] - \frac{1}{2}\coth\left(\frac{y}{2}\right). (160)
Usually the function b_S(y) is represented in the form b_S(y) = SB_S(Sy), where B_S(x) is the Brillouin function
B_S(x) = \left(1 + \frac{1}{2S}\right)\coth\left[\left(1 + \frac{1}{2S}\right)x\right] - \frac{1}{2S}\coth\left(\frac{x}{2S}\right) (161)
(see Fig. 5). One can see that b_S(\pm\infty) = \pm S and B_S(\pm\infty) = \pm 1. For S = 1/2, from Eq. (154) one obtains
\langle S_z\rangle = b_{1/2}(y) = \frac{1}{2}\tanh\frac{y}{2} (162)
and B_{1/2}(x) = \tanh x. In the opposite limit S \to \infty, the Brillouin function becomes the classical Langevin function:
B_\infty(x) \equiv L(x) = \coth x - \frac{1}{x}. (163)
Although atoms never have a really large spin, the classical approximation is very useful, as it can strongly simplify calculations, and it provides qualitatively reasonable results in most cases. At zero magnetic field, y = 0, the spin polarization should vanish. This is immediately seen in Eq. (162) but not in Eq. (159). To clarify the behavior of b_S(y) at small y, one can use \coth x \cong 1/x + x/3, which yields
b_S(y) \cong \left(S + \frac{1}{2}\right)\left[\frac{1}{(S+1/2)y} + \frac{(S+1/2)y}{3}\right] - \frac{1}{2}\left[\frac{2}{y} + \frac{y}{6}\right] = \frac{y}{3}\left[\left(S + \frac{1}{2}\right)^2 - \left(\frac{1}{2}\right)^2\right] = \frac{S(S+1)}{3}y. (164)
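The function b_S(y) of Eq. (160) is easy to code; the only care needed is at small y, where the two coth terms nearly cancel and the linear expansion of Eq. (164) should be used instead. A Python sketch (the helper names are illustrative):

```python
import math

def coth(x):
    return math.cosh(x) / math.sinh(x)

def b_S(y, S):
    """Spin average b_S(y), Eq. (160); for |y| << 1 the expansion
    b_S(y) ~ S(S+1)*y/3 of Eq. (164) avoids the 1/y cancellation."""
    if abs(y) < 1e-6:
        return S * (S + 1) * y / 3.0
    a = S + 0.5
    return a * coth(a * y) - 0.5 * coth(0.5 * y)

# Limits quoted in the text: b_S(+inf) = S, and b_{1/2}(y) = (1/2)tanh(y/2)
print(b_S(50.0, 2.5))                       # close to S = 2.5
print(b_S(0.2, 0.5), 0.5*math.tanh(0.1))    # the two values coincide
```

Evaluating b_S(y)/S against Sy for several S reproduces the Brillouin-function curves of Fig. 5.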
In physical units, this means
\langle S_z\rangle \cong \frac{S(S+1)}{3}\frac{g\mu_B B}{k_BT}, \quad g\mu_B B \ll k_BT. (165)
hµzi = gµB hSzi . (166) The magnetization of the sample M is defined as the magnetic moment per unit volume. If there is one magnetic atom per unit cell and all of them are uniformly magnetized, the magnetization reads
M = \frac{\langle\mu_z\rangle}{v_0} = \frac{g\mu_B}{v_0}\langle S_z\rangle, (167)
where v_0 is the unit-cell volume. The internal energy U of a system of N spins can be obtained from the general formula, Eq. (31). The calculation can, however, be avoided, since from Eq. (148) it simply follows that
U = N\langle\varepsilon_m\rangle = -Ng\mu_B B\langle m\rangle = -Ng\mu_B B\langle S_z\rangle, (168)
where \langle S_z\rangle is given by Eq. (159). In particular, at low magnetic fields (or high temperatures) one has
U \cong -N\frac{S(S+1)}{3}\frac{(g\mu_B B)^2}{k_BT}. (169)
The magnetic susceptibility per spin is defined by
\chi = \frac{\partial\langle\mu_z\rangle}{\partial B}. (170)
FIG. 6: Temperature dependence of the heat capacity of spins in a magnetic field for different values of S.
From Eqs. (166) and (159) one obtains
\chi = \frac{(g\mu_B)^2}{k_BT}b_S'(y), (171)
where
b_S'(y) \equiv \frac{db_S(y)}{dy} = -\left(\frac{S+1/2}{\sinh[(S+1/2)y]}\right)^2 + \left(\frac{1/2}{\sinh(y/2)}\right)^2. (172)
For S = 1/2, from Eq. (162) follows
b_{1/2}'(y) = \frac{1}{4\cosh^2(y/2)}. (173)
From Eq. (164) one obtains b_S'(0) = S(S+1)/3. Thus in the high-temperature limit (or for B = 0)
\chi = \frac{S(S+1)}{3}\frac{(g\mu_B)^2}{k_BT}, \quad k_BT \gg g\mu_B B. (174)
The susceptibility becomes small at high temperatures, as the directions of the spins are strongly disordered by thermal agitation and it is difficult to order them by applying a magnetic field. In the opposite limit y \gg 1, the function b_S'(y), and thus the susceptibility, also becomes small. The physical reason for this is that at low temperatures, k_BT \ll g\mu_B B, the spins are already strongly aligned by the magnetic field, \langle S_z\rangle \cong S, so that \langle S_z\rangle becomes hardly sensitive to small changes of B. As a function of temperature, \chi has a maximum at intermediate temperatures. The heat capacity C can be obtained from Eq. (168) as
C = \frac{\partial U}{\partial T} = -Ng\mu_B B\frac{\partial\langle S_z\rangle}{\partial T} = -Ng\mu_B B\, b_S'(y)\frac{\partial y}{\partial T} = Nk_B y^2 b_S'(y). (175)
As both the magnetic susceptibility and the heat capacity are expressed via the derivative of the Brillouin function, they are related. The relation has the form
\frac{C}{N\chi} = \frac{B^2}{T}. (176)
One can see that C also has a maximum at intermediate temperatures, y \sim 1. At high temperatures C decreases as 1/T^2,
C \cong Nk_B\frac{S(S+1)}{3}\left(\frac{g\mu_B B}{k_BT}\right)^2, (177)
and at low temperatures C becomes exponentially small, except for the case S = \infty (see Fig. 6). The finite value of the heat capacity in the limit T \to 0 contradicts the third law of thermodynamics, so the classical spin model fails in this limit.
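Eqs. (171), (175), and (177) can be cross-checked numerically. The sketch below evaluates b'_S(y) from Eq. (172) and the heat capacity per spin C/(Nk_B) = y^2 b'_S(y); at small y it reproduces the high-temperature 1/T^2 behavior (the helper names are illustrative):

```python
import math

def db_S(y, S):
    """Derivative b_S'(y) of the spin average, Eq. (172)."""
    a = S + 0.5
    return -(a / math.sinh(a * y))**2 + (0.5 / math.sinh(0.5 * y))**2

def c_spin(y, S):
    """Heat capacity per spin in units of kB, Eq. (175): y^2 * b_S'(y)."""
    return y * y * db_S(y, S)

S, y = 1.5, 0.05
print(c_spin(y, S))               # small y: close to S(S+1)/3 * y^2
print(S * (S + 1) * y * y / 3.0)  # the high-temperature form, Eq. (177)
```

Multiplying c_spin by T/(y^2 B^2) and comparing with \chi directly verifies the relation of Eq. (176), since both expressions reduce to the same b'_S(y).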
XIV. PHASE TRANSITIONS AND THE MEAN-FIELD APPROXIMATION
As explained in thermodynamics, with changing thermodynamic parameters such as temperature, pressure, etc., a system can undergo phase transitions. In first-order phase transitions, the chemical potentials µ of the two competing phases become equal at the phase-transition line, while on each side of this line they are unequal and only one of the phases is thermodynamically stable. A first-order transition is a switching between two different phases (as, for instance, between water and ice) and is thus abrupt. In contrast, second-order transitions are gradual: the so-called order parameter continuously grows from zero as the phase-transition line is crossed. Thermodynamic quantities are singular at second-order transitions. Phase transitions are complicated phenomena that arise due to the interaction between particles in many-particle systems in the thermodynamic limit N → ∞. It can be shown that thermodynamic quantities of finite systems are non-singular and, in particular, there are no second-order phase transitions in them. Up to now in this course we have studied systems without interaction. Including the latter requires an extension of the formalism of statistical mechanics that is done in Sec. XVI. In such an extended formalism, one has to calculate partition functions over the energy levels of the whole system that, in general, depend on the interaction. In some cases these energy levels are known exactly, but they are parametrized by many quantum numbers over which summation has to be carried out. In most cases, however, the energy levels are not known analytically, and their calculation is a huge quantum-mechanical problem. In both cases the calculation of the partition function is a very serious task. There are some models for which thermodynamic quantities have been calculated exactly, including models that possess second-order phase transitions.
For other models, approximate numerical methods have been developed that allow calculating phase-transition temperatures and critical indices. It has been shown that there are no phase transitions in one-dimensional systems with short-range interactions. It is most convenient to illustrate the physics of phase transitions by considering magnetic systems, i.e., the systems of spins introduced in Sec. XIII. The simplest forms of interaction between different spins are the Ising interaction -J_{ij}\hat{S}_{iz}\hat{S}_{jz}, including only the z components of the spins, and the Heisenberg interaction -J_{ij}\hat{\mathbf{S}}_i\cdot\hat{\mathbf{S}}_j. The exchange interaction J_{ij} follows from the quantum mechanics of atoms and is of electrostatic origin. In practice, in most cases there is only an interaction J between spins of neighboring atoms in the crystal lattice, because J_{ij} decreases exponentially with distance. For the Ising model, all energy levels of the whole system are known exactly, while for the Heisenberg model they are not. In the case J > 0, neighboring spins have the lowest interaction energy when they are collinear, and the state with all spins pointing in the same direction is the exact ground state of the system. For the Ising model, all spins in the ground state are parallel or antiparallel to the z axis, that is, the ground state is doubly degenerate. For the Heisenberg model, the spins can point in any direction in the ground state, so that the ground state has a continuous degeneracy. In both cases, Ising and Heisenberg, the ground state for J > 0 is ferromagnetic. The nature of the interaction between spins considered above suggests that at T → 0 the system should fall into its ground state, so that the thermodynamic averages of all spins approach their maximal values, |⟨Ŝ_i⟩| → S. With increasing temperature, excited levels of the system become populated and the average spin value decreases, |⟨Ŝ_i⟩| < S.
At high temperatures all energy levels become populated, so that neighboring spins can have all possible orientations with respect to each other. In this case, if there is no external magnetic field acting on the spins (see Sec. XIII), the average spin value should be exactly zero, because there are as many spins pointing in one direction as there are pointing in the other direction. This is why the high-temperature state is called the symmetric state. If now the temperature is lowered, there should be a phase-transition temperature T_C (the Curie temperature for ferromagnets) below which the order parameter (the average spin value) becomes nonzero. Below T_C the symmetry of the state is spontaneously broken. This state is called the ordered state. Note that if there is an applied magnetic field, it will break the symmetry by creating a nonzero spin average at all temperatures, so that there is no sharp phase transition, only a gradual increase of ⟨Ŝ_i⟩ as the temperature is lowered. Although the scenario outlined above looks persuasive, there are subtle effects that may preclude ordering and ensure ⟨Ŝ_i⟩ = 0 at all nonzero temperatures. As said above, ordering does not occur in one dimension (spin chains) for both the Ising and Heisenberg models. In two dimensions, the Ising model shows a phase transition, and an analytical solution of the problem exists and was found by Lars Onsager. However, the two-dimensional Heisenberg model does not order. In three dimensions, both the Ising and Heisenberg models show a phase transition, and no analytical solution is available.
A. The mean-field approximation for ferromagnets
While a rigorous solution of the problem of phase transitions is very difficult, there is a simple approximate solution that captures the physics of phase transitions as described above and provides qualitatively correct results in most cases, including three-dimensional systems. The idea of this approximate solution is to reduce the original many-spin problem to an effective self-consistent one-spin problem by considering one spin (say i) and replacing the other spins in the interaction by their average thermodynamic values, -J_{ij}\hat{\mathbf{S}}_i\cdot\hat{\mathbf{S}}_j \Rightarrow -J_{ij}\hat{\mathbf{S}}_i\cdot\langle\hat{\mathbf{S}}_j\rangle. One can see that now the interaction term has mathematically the same form as the term describing the interaction of a spin with the applied magnetic field, Eq. (149). Thus one can use the formalism of Sec. XIII to solve the problem analytically. This approach is called the mean-field approximation (MFA), or molecular-field approximation, and it was first suggested by Weiss for ferromagnets. The effective one-spin Hamiltonian within the MFA is similar to Eq. (149) and has the form
\hat{H} = -\hat{\mathbf{S}}\cdot\left(g\mu_B\mathbf{B} + Jz\langle\hat{\mathbf{S}}\rangle\right) + \frac{1}{2}Jz\langle\hat{\mathbf{S}}\rangle^2, (178)
where z is the number of nearest neighbors of a spin in the lattice (z = 6 for the simple cubic lattice). Here the last term ensures that the replacement \hat{\mathbf{S}} \Rightarrow \langle\hat{\mathbf{S}}\rangle in \hat{H} yields \langle\hat{H}\rangle = -\langle\hat{\mathbf{S}}\rangle\cdot g\mu_B\mathbf{B} - (1/2)Jz\langle\hat{\mathbf{S}}\rangle^2, which becomes the correct ground-state energy at T = 0, where |\langle\hat{\mathbf{S}}\rangle| = S. For the Heisenberg model, in the presence of a however weak applied field B, the ordering will occur in the direction of B. Choosing the z axis in this direction, one can simplify Eq. (178) to
\hat{H} = -\left(g\mu_B B + Jz\langle\hat{S}_z\rangle\right)\hat{S}_z + \frac{1}{2}Jz\langle\hat{S}_z\rangle^2. (179)
The same mean-field Hamiltonian is valid for the Ising model as well, if the magnetic field is applied along the z axis. Thus, within the MFA (but not in general!) the Ising and Heisenberg models are equivalent. The energy levels corresponding to Eq. (179) are
\varepsilon_m = -\left(g\mu_B B + Jz\langle\hat{S}_z\rangle\right)m + \frac{1}{2}Jz\langle\hat{S}_z\rangle^2, \quad m = -S, -S+1, \ldots, S-1, S. (180)
Now, after calculation of the partition function Z, Eq. (151), one arrives at the free energy per spin
F = -k_BT\ln Z = \frac{1}{2}Jz\langle\hat{S}_z\rangle^2 - k_BT\ln\frac{\sinh[(S+1/2)y]}{\sinh(y/2)}, (181)
where
y \equiv \frac{g\mu_B B + Jz\langle\hat{S}_z\rangle}{k_BT}. (182)
To find the actual value of the order parameter \langle\hat{S}_z\rangle at any temperature T and magnetic field B, one has to minimize F with respect to \langle\hat{S}_z\rangle, as explained in thermodynamics. One obtains the equation
0 = \frac{\partial F}{\partial\langle\hat{S}_z\rangle} = Jz\langle\hat{S}_z\rangle - Jz\, b_S(y), (183)
where b_S(y) is defined by Eq. (160). Rearranging terms, one arrives at the transcendental Curie-Weiss equation
\langle\hat{S}_z\rangle = b_S\left(\frac{g\mu_B B + Jz\langle\hat{S}_z\rangle}{k_BT}\right) (184)
FIG. 7: Graphic solution of the Curie-Weiss equation.
FIG. 8: Free energy of a ferromagnet vs ⟨Ŝ_z⟩ within the MFA. (Arbitrary vertical shift.)
that defines ⟨Ŝ_z⟩. In fact, this equation could be obtained in a shorter way by using the modified argument, Eq. (182), in Eq. (159). For B = 0, Eq. (184) has only the solution ⟨Ŝ_z⟩ = 0 at high temperatures. As b_S(y) has its maximal slope at y = 0, it is sufficient that this slope (with respect to ⟨Ŝ_z⟩) becomes smaller than 1 to exclude any solution other than ⟨Ŝ_z⟩ = 0. Using Eq. (164), one obtains that the only solution ⟨Ŝ_z⟩ = 0 is realized for T \geq T_C, where
T_C = \frac{S(S+1)}{3}\frac{Jz}{k_B} (185)
is the Curie temperature within the mean-field approximation. Below T_C the slope of b_S(y) with respect to ⟨Ŝ_z⟩ exceeds 1, so that there are three solutions for ⟨Ŝ_z⟩: one solution ⟨Ŝ_z⟩ = 0 and two symmetric solutions ⟨Ŝ_z⟩ ≠ 0 (see Fig. 7). The latter correspond to a lower free energy than the solution ⟨Ŝ_z⟩ = 0, thus they are thermodynamically stable (see Fig. 8). These solutions describe the ordered state below T_C. Slightly below T_C the value of ⟨Ŝ_z⟩ is still small and can be found by expanding b_S(y) up to y^3. In particular, for S = 1/2 one has
b_{1/2}(y) = \frac{1}{2}\tanh\frac{y}{2} \cong \frac{y}{4} - \frac{y^3}{48}. (186)
Using T_C = (1/4)Jz/k_B, and thus Jz = 4k_BT_C for S = 1/2, one can rewrite Eq. (184) with B = 0 in the form
\langle\hat{S}_z\rangle = \frac{T_C}{T}\langle\hat{S}_z\rangle - \frac{4}{3}\left(\frac{T_C}{T}\right)^3\langle\hat{S}_z\rangle^3. (187)
One of the solutions is ⟨Ŝ_z⟩ = 0, while the other two solutions are
\frac{\langle\hat{S}_z\rangle}{S} = \pm\frac{T}{T_C}\sqrt{3\left(1 - \frac{T}{T_C}\right)}, \quad S = \frac{1}{2}. (188)
As this result is only valid for T near T_C, one can discard the factor in front of the square root and simplify this formula to
\frac{\langle\hat{S}_z\rangle}{S} = \pm\sqrt{3\left(1 - \frac{T}{T_C}\right)}, \quad S = \frac{1}{2}. (189)
Although obtained near T_C, in this form the result is only off by the factor \sqrt{3} at T = 0. The singularity of the order parameter near T_C is a square root, so that for the magnetization critical index defined as
\langle\hat{S}_z\rangle \propto M \propto (T_C - T)^\beta (190)
one has \beta = 1/2. Results of the numerical solution of the Curie-Weiss equation with B = 0 for different S are shown in Fig. 9. Note that the magnetization is related to the spin average by Eq. (167). Let us now consider the magnetic susceptibility per spin \chi, defined by Eq. (170), above T_C. Linearizing Eq. (184) with the use of Eq. (164), one obtains
\langle \hat{S}_z \rangle = \frac{S(S+1)}{3}\,\frac{g\mu_B B + Jz\langle \hat{S}_z \rangle}{k_B T} = \frac{S(S+1)}{3}\,\frac{g\mu_B B}{k_B T} + \frac{T_C}{T}\langle \hat{S}_z \rangle.   (191)

From this equation one obtains
\langle \hat{S}_z \rangle = \frac{S(S+1)}{3}\,\frac{g\mu_B B}{k_B T}\,\frac{1}{1 - T_C/T} = \frac{S(S+1)}{3}\,\frac{g\mu_B B}{k_B (T - T_C)}.   (192)

Then the magnetic susceptibility can be calculated by differentiation:

\chi = \frac{\partial \langle \mu_z \rangle}{\partial B} = g\mu_B \frac{\partial \langle \hat{S}_z \rangle}{\partial B} = \frac{S(S+1)}{3}\,\frac{(g\mu_B)^2}{k_B (T - T_C)}.   (193)
In contrast to non-interacting spins, Eq. (174), the susceptibility diverges at T = T_C rather than at T = 0. The inverse susceptibility χ⁻¹ is a straight line crossing the T axis at T_C. In the theory of phase transitions, the critical index for the susceptibility, γ, is defined as
\chi \propto (T - T_C)^{-\gamma}.   (194)

One can see that γ = 1 within the MFA.
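The Curie-Weiss susceptibility (193) lends itself to a quick numerical check: solve the S = 1/2 Curie-Weiss equation at a small field above T_C and compare the finite-difference susceptibility with the analytical result. The sketch below is illustrative (not from the text) and assumes dimensionless units Jz = k_B = gμ_B = 1, in which T_C = 1/4 and χ = S(S+1)/[3(T − T_C)]:

```python
import math

def magnetization(B, T, tol=1e-14, max_iter=100000):
    """Fixed-point solution of m = (1/2) tanh((B + m)/(2T)) for S = 1/2."""
    m = 0.0
    for _ in range(max_iter):
        m_new = 0.5 * math.tanh((B + m) / (2.0 * T))
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

S = 0.5
Tc = S * (S + 1) / 3.0                       # = 1/4 in these units
T = 2.0 * Tc                                 # a temperature above T_C
dB = 1e-5                                    # small probing field
chi_numeric = magnetization(dB, T) / dB      # finite-difference susceptibility
chi_mfa = S * (S + 1) / (3.0 * (T - Tc))     # Eq. (193) with g*mu_B = k_B = 1
print(chi_numeric, chi_mfa)                  # the two agree closely
```

At T = 2T_C both numbers come out equal to 1 in these units, confirming the Curie-Weiss law.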
FIG. 9: Temperature dependences of normalized spin averages ⟨Ŝ_z⟩/S for S = 1/2, 5/2, ∞ obtained from the numerical solution of the Curie-Weiss equation.
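The numerical solution of the Curie-Weiss equation behind Fig. 9 can be sketched by simple fixed-point iteration. This is an illustrative sketch, not the author's code; it assumes dimensionless units Jz = k_B = 1 (so that T_C = S(S+1)/3 by Eq. (185)) and the standard explicit form of b_S(y), chosen to be consistent with b_{1/2}(y) = (1/2)tanh(y/2) and with the slope S(S+1)/3 at y = 0 used in the text:

```python
import math

def b_S(y, S):
    """b_S(y) = (S + 1/2) coth((S + 1/2) y) - (1/2) coth(y/2).

    Reproduces b_{1/2}(y) = (1/2) tanh(y/2) and the slope b_S'(0) = S(S+1)/3.
    """
    if abs(y) < 1e-5:                      # small-y branch avoids cancellation
        return S * (S + 1) * y / 3.0
    a = S + 0.5
    return a / math.tanh(a * y) - 0.5 / math.tanh(0.5 * y)

def solve_curie_weiss(T, S, tol=1e-12, max_iter=10000):
    """Iterate m -> b_S(m/T) at B = 0, starting from full polarization m = S."""
    m = S
    for _ in range(max_iter):
        m_new = b_S(m / T, S)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

S = 0.5
Tc = S * (S + 1) / 3.0                     # Eq. (185) in these units
print(solve_curie_weiss(0.5 * Tc, S))      # nonzero spontaneous value below T_C
print(solve_curie_weiss(1.5 * Tc, S))      # essentially zero above T_C
```

Sweeping T from 0 to above T_C and plotting m/S for several S reproduces the curves of Fig. 9.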
At T = T_C, there is no linear susceptibility, and the magnetization is a singular function of the magnetic field, which defines another critical index, δ. In this case, the Curie-Weiss equation can be expanded for small ⟨Ŝ_z⟩ and small B.
Using Eq. (186), at T = T_C one obtains
0 = \frac{g\mu_B B}{4 k_B T_C} - \frac{4}{3}\langle \hat{S}_z \rangle^3,   (195)

cf. Eq. (187). Thus,

\langle \hat{S}_z \rangle = \left(\frac{3 g\mu_B B}{16 k_B T_C}\right)^{1/3}, \qquad S = \frac{1}{2}.   (196)

The critical isotherm index δ is defined by
\langle \hat{S}_z \rangle \propto B^{1/\delta},   (197)

thus within the mean-field approximation δ = 3. More precise methods than the MFA yield smaller values of β and larger values of γ and δ, depending on the details of the model such as the number of interacting spin components (1 for the Ising model and 3 for the Heisenberg model) and the dimension of the lattice. Critical indices are insensitive to such factors as lattice structure and the value of the spin S. On the other hand, the value of T_C does not possess such universality and depends on all parameters of the problem. Accurate values of T_C are up to 25% lower than their mean-field values in three dimensions.
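The critical-isotherm index δ = 3 can be checked numerically: at T = T_C, multiplying the field by 2³ = 8 should double the magnetization. A minimal sketch for S = 1/2 in dimensionless units Jz = k_B = gμ_B = 1 (an assumption; then T_C = 1/4), solving the full Curie-Weiss equation rather than its cubic expansion:

```python
import math

def magnetization_at_Tc(B, Tc=0.25, tol=1e-13, max_iter=200000):
    """Solve m = (1/2) tanh((B + m)/(2 T_C)) by fixed-point iteration."""
    m = 0.5                                # start from full polarization
    for _ in range(max_iter):
        m_new = 0.5 * math.tanh((B + m) / (2.0 * Tc))
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

m1 = magnetization_at_Tc(1e-3)
m2 = magnetization_at_Tc(8e-3)             # field larger by a factor 2^3
print(m2 / m1)                             # close to 2 when delta = 3
```

The ratio comes out very close to 2, and m1 itself is close to the prediction (3B/4)^{1/3} of Eq. (196) in these units.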
XV. 1D ISING MODEL
The 1D Ising model is a chain of magnetic atoms with spins S = 1/2 interacting with their nearest neighbors in the chain via the coupling of their z-components. The Hamiltonian of the 1D Ising model reads
\hat{H} = -J\sum_{i=1}^{N-1} \hat{S}_{iz}\hat{S}_{i+1,z} - g\mu_B B \sum_{i=1}^{N} \hat{S}_{iz},   (198)

where N is the number of atoms in the chain, the exchange coupling J > 0 for the ferromagnetic interaction, and B is the magnetic field applied along the z-axis. The quantum energy levels of the Ising model are trivial and correspond to the values of Ŝ_{iz} equal to m_i = ±1/2. (In the presence of a transverse field or a coupling of other spin components the quantum problem becomes non-trivial.) Thus, the energy of the system divided by the temperature that enters the partition function can be written as
\beta E \equiv \frac{E}{k_B T} = -\theta\sum_{i=1}^{N-1} m_i m_{i+1} - \rho\sum_{i=1}^{N} m_i,   (199)

where we have introduced the dimensionless exchange and field parameters

\theta \equiv \beta J = \frac{J}{k_B T}, \qquad \rho \equiv \frac{g\mu_B B}{k_B T}.   (200)

One can calculate the partition function analytically if one considers the ring instead of the open chain. In the thermodynamic limit, there should be no difference between the two model variants, but for the ring the analytical calculation exists for any N. Thus to βE we add the term −θm_N m_1 (the last spin is coupled with the first one). The partition function for the Ising ring is given by

Z = \sum_{m_1,m_2,\ldots,m_N} \exp\left(\theta m_1 m_2 + \rho m_1 + \theta m_2 m_3 + \rho m_2 + \ldots + \theta m_N m_1 + \rho m_N\right).   (201)
It is convenient to make a trick that allows representing the partition function as a product of matrices. For this, we redistribute the field terms:

Z = \sum_{m_1,m_2,\ldots,m_N} \exp\left[\theta m_1 m_2 + \frac{\rho}{2}(m_1 + m_2) + \theta m_2 m_3 + \frac{\rho}{2}(m_2 + m_3) + \ldots + \theta m_N m_1 + \frac{\rho}{2}(m_N + m_1)\right].   (202)

Then,
Z = \sum_{m_1,m_2,\ldots,m_N} A_{m_1 m_2} A_{m_2 m_3} \ldots A_{m_{N-1} m_N} A_{m_N m_1} = \sum_{m_1} \left(\mathbf{A}^N\right)_{m_1 m_1} = \mathrm{Tr}\,\mathbf{A}^N   (203)

is the trace of the Nth power of the 2 × 2 matrix with the matrix elements

A_{mk} = \exp\left[\theta m k + \frac{\rho}{2}(m + k)\right], \qquad m, k = \pm\frac{1}{2},   (204)

that is,
\mathbf{A} = \begin{pmatrix} \exp(\theta/4 - \rho/2) & \exp(-\theta/4) \\ \exp(-\theta/4) & \exp(\theta/4 + \rho/2) \end{pmatrix}.   (205)
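The transfer-matrix identity Z = Tr A^N of Eq. (203) can be verified directly for a small ring by comparing it with the brute-force sum (201) over all 2^N spin configurations. A minimal sketch (an illustration, not from the text):

```python
import itertools, math

def Z_transfer(theta, rho, N):
    """Z = Tr A^N with A_{mk} = exp(theta*m*k + (rho/2)(m + k)), m, k = +-1/2."""
    s = (-0.5, 0.5)
    A = [[math.exp(theta * m * k + 0.5 * rho * (m + k)) for k in s] for m in s]
    M = [row[:] for row in A]
    for _ in range(N - 1):                 # build A^N by repeated multiplication
        M = [[sum(M[i][j] * A[j][l] for j in range(2)) for l in range(2)]
             for i in range(2)]
    return M[0][0] + M[1][1]               # trace

def Z_brute(theta, rho, N):
    """Direct sum of Eq. (201) over all 2^N spin configurations on the ring."""
    Z = 0.0
    for m in itertools.product((-0.5, 0.5), repeat=N):
        expo = theta * sum(m[i] * m[(i + 1) % N] for i in range(N))
        expo += rho * sum(m)
        Z += math.exp(expo)
    return Z

print(Z_transfer(1.2, 0.5, 8), Z_brute(1.2, 0.5, 8))   # the two agree
```

Each field term ρm_i is split between the two matrix factors containing m_i, which is exactly the redistribution made in Eq. (202).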
It can be easily shown that the trace of a matrix is invariant under a unitary transformation. If U is a unitary matrix, then the unitary transformation reads
\mathbf{B} = \mathbf{U}^{-1}\mathbf{A}\mathbf{U},   (206)

where the inverse unitary matrix U⁻¹ is just the transposed and complex conjugate of U. As one can make a cyclic permutation of matrices under the Tr operator, one can write