Professor Dmitry Garanin


May 17, 2021


Statistical physics considers systems of a large number of entities (particles) such as atoms, molecules, spins, etc. For such systems it is impossible, and even does not make sense, to study the full microscopic dynamics. The only relevant information is, say, how many atoms have a particular energy; then one can calculate the observable thermodynamic values. That is, one has to know the distribution function of the particles over energies that defines the macroscopic properties. This gives the name and defines the scope of this subject. The approach outlined above can be used both at and off equilibrium. The branch of physics studying non-equilibrium situations is called physical kinetics. In this course, we study only statistical physics at equilibrium. It turns out that at equilibrium the energy distribution function has an explicit general form and the only problem is to calculate the observables. The term statistical mechanics means the same as statistical physics. One can call it statistical thermodynamics as well. The formalism of statistical thermodynamics can be developed for both classical and quantum systems. The resulting energy distribution and calculation of observables is simpler in the classical case. However, the very formulation of the method is more transparent within the quantum-mechanical formalism. In addition, the absolute value of the entropy, including its correct value at T → 0, can only be obtained in the quantum case. To avoid double work, we will consider only quantum statistical thermodynamics in this course, limiting ourselves to systems without interaction. The more general quantum results recover their classical forms in the classical limit.


From quantum mechanics it follows that the states of the system do not change continuously (like in classical physics) but are quantized. There is a huge number of discrete quantum states i with corresponding energy values εi being the main parameter characterizing these states. In the absence of interaction, each particle has its own set of quantum states which it can occupy. For identical particles these sets of states are identical. One can think of boxes i into which particles are placed, Ni particles in the ith box. The particles can be distributed over the boxes in a number of different ways, corresponding to different microstates, in which the state i of each particle is specified. The information contained in the microstates is excessive, and the only meaningful information is provided by the numbers Ni that define the distribution of particles over their quantum states. These numbers Ni specify what in statistical physics is called a macrostate. If these numbers are known, the energy and other quantities of the system can be found. It should be noted that the statistical macrostate still contains more information than the macroscopic physical quantities that follow from it, as a distribution contains more information than an average over it. Each macrostate k, specified by the numbers Ni, can be realized by a number wk of microstates, the so-called thermodynamic probability. The latter is typically a large number, unlike the usual probability that changes between 0 and 1. Redistributing the particles over the states i while keeping the same values of Ni generates different microstates within the same macrostate. The basic assumption of statistical physics is the equidistribution over microstates. That is, each microstate within a macrostate is equally probable. Macrostates having a larger wk are more likely to be realized. As we will see, for large systems the thermodynamic probabilities of different macrostates vary in a wide range and there is a macrostate with the largest value of w that wins over all other macrostates.
If the number of quantum states i is finite, the total number of microstates can be written as

\Omega \equiv \sum_k w_k, \qquad (1)

the sum rule for thermodynamic probabilities. For an isolated system the number of particles N and the energy U are conserved, thus the numbers Ni satisfy the constraints

\sum_i N_i = N, \qquad (2)

\sum_i N_i \varepsilon_i = U \qquad (3)

that limit the variety of the allowed macrostates k.


A tossed coin can land in two positions: Head up or tail up. Considering the coin as a particle, one can say that this particle has two “quantum” states, 1 corresponding to the head and 2 corresponding to the tail. If N coins are tossed, this can be considered as a system of N particles with two quantum states each. The microstates of the system are specified by the states occupied by each coin. As each coin has 2 states, there are total

\Omega = 2^N \qquad (4)

microstates. The macrostates of this system are defined by the numbers of particles in each state, N1 and N2. These two numbers satisfy the constraint condition (2), i.e., N1 + N2 = N. Thus one can take, say, N1 as the number k labeling macrostates. The number of microstates in one macrostate (that is, the number of different microstates that belong to the same macrostate) is given by the binomial coefficient

w_{N_1} = \frac{N!}{N_1!(N-N_1)!} = \binom{N}{N_1}. \qquad (5)

This can be derived as follows. We have to pick N1 particles to be in the state 1; all others will be in the state 2. How many ways are there to do this? The first “1” particle can be picked in N ways, the second one can be picked in N − 1 ways since we can choose from N − 1 particles only. The third “1” particle can be picked in N − 2 different ways, etc. Thus the number of different ways to pick the particles is

N \times (N-1) \times (N-2) \times \ldots \times (N-N_1+1) = \frac{N!}{(N-N_1)!}, \qquad (6)

where the factorial is defined by

N! ≡ N × (N − 1) × ... × 2 × 1, 0! = 1. (7)

The recurrent definition of the factorial is

N! = N (N − 1)!, 0! = 1. (8)

The expression in Eq. (6) is not yet the thermodynamic probability wN1 because it contains multiple counting of the same microstates. The realizations in which the N1 “1” particles are picked in different orders have been counted as different microstates, whereas they are the same microstate. To correct for the multiple counting, one has to divide by the number N1! of permutations of the N1 picked particles, which yields Eq. (5). One can check that the condition (1) is satisfied,

\sum_{N_1=0}^{N} w_{N_1} = \sum_{N_1=0}^{N} \frac{N!}{N_1!(N-N_1)!} = 2^N. \qquad (9)

The thermodynamic probability wN1 has a maximum at N1 = N/2, half of the coins landing head up and half tail up. This macrostate is the most probable one. Indeed, as for an individual coin the probabilities to land head up and tail up are both equal to 0.5, this is what we expect. For large N the maximum of wN1 with respect to N1 becomes sharp.

To prove that N1 = N/2 is the maximum of wN1, one can rewrite Eq. (5) in terms of the new variable p = N1 − N/2 as

w_{N_1} = \frac{N!}{(N/2+p)!(N/2-p)!}. \qquad (10)

FIG. 1: The binomial distribution for an ensemble of N two-state systems becomes narrow and peaked at p ≡ N1 − N/2 = 0.

One can see that w_{N_1} is symmetric around N1 = N/2, i.e., p = 0. Using Eq. (8), one obtains

\frac{w_{N/2\pm 1}}{w_{N/2}} = \frac{(N/2)!\,(N/2)!}{(N/2+1)!\,(N/2-1)!} = \frac{N/2}{N/2+1} < 1, \qquad (11)

one can see that N1 = N/2 is indeed the maximum of wN1. The binomial distribution is shown in Fig. 1 for three different values of N. As the argument, the variable p/pmax ≡ (N1 − N/2) / (N/2) is used so that one can put all the data on the same plot. One can see that in the limit of large N the binomial distribution becomes narrow and centered at p = 0, that is, N1 = N/2. Practically this means that if the coin is tossed many times, significant deviations from the 50:50 relation between the numbers of heads and tails will be extremely rare.
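The sharpening of this peak is easy to check numerically. Below is a minimal sketch (Python, standard library only; the function name is our own) that evaluates the exact w_{N1} of Eq. (5), verifies the sum rule (9) and the position of the maximum, and measures what fraction of all 2^N microstates lies within |p| of order √N of N1 = N/2:

```python
from math import comb

def coin_macrostate_weights(N):
    """Exact thermodynamic probabilities w_{N1} = C(N, N1), Eq. (5)."""
    return [comb(N, n1) for n1 in range(N + 1)]

for N in (10, 100, 1000):
    w = coin_macrostate_weights(N)
    assert sum(w) == 2 ** N           # sum rule, Eq. (9)
    assert max(w) == w[N // 2]        # most probable macrostate: N1 = N/2
    half = int(N ** 0.5)              # window |p| <= sqrt(N)
    near = sum(w[N // 2 - half: N // 2 + half + 1]) / 2 ** N
    print(f"N={N}: fraction of microstates with |p| <= sqrt(N): {near:.3f}")
```

The printed fractions stay above 0.9 for all three values of N even though the window shrinks relative to N, illustrating that the relative width of the peak decreases as 1/√N.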


Analysis of expressions with large factorials is simplified by the Stirling formula

N! \cong \sqrt{2\pi N}\left(\frac{N}{e}\right)^N. \qquad (12)

In many important cases the prefactor \sqrt{2\pi N} is irrelevant, as we will see below. With the Stirling formula substituted, Eq. (10) becomes

w_{N_1} \cong \frac{\sqrt{2\pi N}\,(N/e)^N}{\sqrt{2\pi(N/2+p)}\,[(N/2+p)/e]^{N/2+p}\,\sqrt{2\pi(N/2-p)}\,[(N/2-p)/e]^{N/2-p}}
= \sqrt{\frac{2}{\pi N}}\,\frac{1}{\sqrt{1-\left(\frac{2p}{N}\right)^2}}\,\frac{N^N}{(N/2+p)^{N/2+p}(N/2-p)^{N/2-p}}
= \frac{w_{N/2}}{\sqrt{1-\left(\frac{2p}{N}\right)^2}\left(1+\frac{2p}{N}\right)^{N/2+p}\left(1-\frac{2p}{N}\right)^{N/2-p}}, \qquad (13)

where

w_{N/2} \cong \sqrt{\frac{2}{\pi N}}\,2^N \qquad (14)

is the maximal value of the thermodynamic probability. Eq. (13) can be expanded for |p| ≪ N. Since p enters both the bases and the exponents, one has to be careful and expand the logarithm of wN1 rather than wN1 itself. The square-root term in Eq. (13) can be discarded as it gives a negligible contribution of order p²/N². One obtains

\ln w_{N_1} \cong \ln w_{N/2} - \left(\frac{N}{2}+p\right)\ln\left(1+\frac{2p}{N}\right) - \left(\frac{N}{2}-p\right)\ln\left(1-\frac{2p}{N}\right)
\cong \ln w_{N/2} - \left(\frac{N}{2}+p\right)\left[\frac{2p}{N}-\frac{1}{2}\left(\frac{2p}{N}\right)^2\right] - \left(\frac{N}{2}-p\right)\left[-\frac{2p}{N}-\frac{1}{2}\left(\frac{2p}{N}\right)^2\right]
\cong \ln w_{N/2} - p - \frac{2p^2}{N} + \frac{p^2}{N} + p - \frac{2p^2}{N} + \frac{p^2}{N}
= \ln w_{N/2} - \frac{2p^2}{N} \qquad (15)

and thus

w_{N_1} \cong w_{N/2}\exp\left(-\frac{2p^2}{N}\right). \qquad (16)

One can see that w_{N_1} becomes very small if |p| \gtrsim \sqrt{N}, which for large N does not violate the applicability condition

|p| ≪ N. That is, wN1 is small in the main part of the interval 0 ≤ N1 ≤ N and is sharply peaked near N1 = N/2.
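The accuracy of Eqs. (14) and (16) can be probed directly against exact binomial coefficients. A minimal sketch (Python; the choice N = 1000 and the list of p values are arbitrary illustrative choices):

```python
from math import comb, exp, pi, sqrt

N = 1000
w_peak = comb(N, N // 2)

# Stirling prediction for the peak, Eq. (14): w_{N/2} ~ sqrt(2/(pi N)) 2^N
stirling = sqrt(2 / (pi * N)) * 2 ** N
assert abs(w_peak / stirling - 1) < 1e-3

# Gaussian form, Eq. (16): w_{N1} ~ w_{N/2} exp(-2 p^2 / N), p = N1 - N/2
for p in (0, 10, 20, 30):
    exact = comb(N, N // 2 + p)
    gauss = w_peak * exp(-2 * p * p / N)
    print(f"p={p}: gauss/exact = {gauss / exact:.4f}")
```

The printed ratios stay within a fraction of a percent of 1 even at p = 30, where w has already dropped by a factor of e^{1.8}, confirming that the Gaussian approximation holds throughout the peak region |p| ≲ √N.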


The results obtained in Sec. III for two-state particles can be generalized to n-state particles. We are looking for the number of ways to distribute N particles over n boxes so that there are Ni particles in the ith box. That is, we look for the number of microstates in the macrostate described by the numbers Ni. The result is

w = \frac{N!}{N_1!\,N_2!\ldots N_n!} = \frac{N!}{\prod_{i=1}^n N_i!}. \qquad (17)

This formula can be obtained by using Eq. (5) successively. The number of ways to put N1 particles in box 1 and the other N − N1 particles in the other boxes is given by Eq. (5). Then the number of ways to put N2 particles in box 2 is given by a similar formula with N → N − N1 (there are only N − N1 particles left after N1 particles have been put in box 1) and N1 → N2. These numbers of ways should be multiplied. Then one considers box 3, etc., until the last box n. The resulting number of microstates is

w = \frac{N!}{N_1!(N-N_1)!} \times \frac{(N-N_1)!}{N_2!(N-N_1-N_2)!} \times \frac{(N-N_1-N_2)!}{N_3!(N-N_1-N_2-N_3)!} \times \ldots \times \frac{(N_{n-2}+N_{n-1}+N_n)!}{N_{n-2}!(N_{n-1}+N_n)!} \times \frac{(N_{n-1}+N_n)!}{N_{n-1}!\,N_n!} \times \frac{N_n!}{N_n!\,0!}. \qquad (18)

In this formula, all numerators except for the first one and all second factors in the denominators cancel each other, so that Eq. (17) follows.
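Eq. (17) can be verified by brute force for a small system: enumerate all assignments of labeled particles to boxes and count those realizing a given macrostate. A sketch (Python; the helper name and the example macrostate (3, 2, 1) are our own choices):

```python
from itertools import product
from math import factorial, prod

def multinomial(counts):
    """Eq. (17): microstates realizing the macrostate (N_1, ..., N_n)."""
    return factorial(sum(counts)) // prod(factorial(n) for n in counts)

# Brute force: assign each of N = 6 labeled particles to one of n = 3 boxes
# and count assignments realizing the macrostate (N1, N2, N3) = (3, 2, 1).
target = (3, 2, 1)
N, n = sum(target), len(target)
count = sum(
    1
    for assignment in product(range(n), repeat=N)
    if tuple(assignment.count(box) for box in range(n)) == target
)
assert count == multinomial(target) == 60   # 6!/(3! 2! 1!) = 60
print(count)
```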


We have seen that different macrostates k can be realized by largely varying numbers of microstates wk. For large systems, N ≫ 1, the difference between different thermodynamic probabilities wk is tremendous, and there is a sharp maximum of wk at some value k = kmax. The main postulate of statistical physics is that in measurements on large systems, only the most probable macrostate, satisfying the constraints of Eqs. (2) and (3), makes a contribution. For instance, a macrostate of an ideal gas with all molecules in one half of the container is much less probable than the macrostate with the molecules equally distributed over the whole container. (Likewise, the state with all coins landed head up is much less probable than the state with half of the coins landed head up and the other half tail up.) For this reason, if the initial state is all molecules in one half of the container, then in the course of evolution the system will come to the most probable state with the molecules equally distributed over the whole container, and will stay in this state forever.

We have seen in thermodynamics that an isolated system, initially in a non-equilibrium state, evolves to the equilibrium state characterized by the maximal entropy. In this way one comes to the idea that the entropy S and the thermodynamic probability w should be related, one being a monotonic function of the other. The form of this function can be found if one notices that entropy is additive while the thermodynamic probability is multiplicative. If a system consists of two subsystems that weakly interact with each other (which is almost always the case, as the intersubsystem interaction is limited to the small region near the interface between them), then S = S1 + S2 and w = w1w2. If one chooses (L. Boltzmann)

S = k_B \ln w, \qquad (19)

then S = S1 + S2 and w = w1w2 are in accord since ln(w1w2) = ln w1 + ln w2. In Eq. (19) kB is the Boltzmann constant,

k_B = 1.38 \times 10^{-23}\ \mathrm{J\,K^{-1}}. \qquad (20)

It will be shown that the statistically defined entropy above coincides with the thermodynamic entropy at equilibrium. On the other hand, the statistical entropy is well defined for nonequilibrium states as well, whereas the thermodynamic entropy is usually undefined off equilibrium.


In this section we obtain the distribution of particles over energy levels i as the most probable macrostate by maximizing its thermodynamic probability w. We label quantum states by the index i and use Eq. (17). The task is to find the maximum of w with respect to all Ni that satisfy the constraints (2) and (3). Practically it is more convenient to maximize ln w than w itself. Using the method of Lagrange multipliers, one searches for the maximum of the target function

\Phi(N_1, N_2, \ldots, N_n) = \ln w + \alpha\sum_i N_i - \beta\sum_i \varepsilon_i N_i. \qquad (21)

Here α and β are Lagrange multipliers with (arbitrary) signs chosen anticipating the final result. The maximum satisfies

\frac{\partial\Phi}{\partial N_i} = 0, \qquad i = 1, 2, \ldots \qquad (22)

As we are interested in the behavior of macroscopic systems with Ni ≫ 1, the factorials can be simplified with the help of Eq. (12) that takes the form

\ln N! \cong N\ln N - N + \ln\sqrt{2\pi N}. \qquad (23)

Neglecting the relatively small last term in this expression, one obtains

\Phi \cong \ln N! - \sum_j N_j\ln N_j + \sum_j N_j + \alpha\sum_j N_j - \beta\sum_j \varepsilon_j N_j. \qquad (24)

The first term here is a constant and it won't contribute to the derivatives ∂Φ/∂Ni. In the latter, the only contribution comes from the terms with j = i in the above expression. One obtains the equations

\frac{\partial\Phi}{\partial N_i} = -\ln N_i + \alpha - \beta\varepsilon_i = 0 \qquad (25)

that yield

N_i = e^{\alpha - \beta\varepsilon_i}, \qquad (26)

the Boltzmann distribution with as-yet-undefined Lagrange multipliers α and β. The latter can be found from Eqs. (2) and (3) in terms of N and U. Summing Eq. (26) over i, one obtains

N = e^{\alpha} Z, \qquad \alpha = \ln(N/Z), \qquad (27)

where

Z = \sum_i e^{-\beta\varepsilon_i} \qquad (28)

is the so-called partition function (German Zustandssumme) that plays a major role in statistical physics. Then, eliminating α from Eq. (26) yields

N_i = \frac{N}{Z}\,e^{-\beta\varepsilon_i}, \qquad (29)

the Boltzmann distribution. After that, for the internal energy U one obtains

U = \sum_i \varepsilon_i N_i = \frac{N}{Z}\sum_i \varepsilon_i e^{-\beta\varepsilon_i} = -\frac{N}{Z}\frac{\partial Z}{\partial\beta} \qquad (30)

or

U = -N\frac{\partial\ln Z}{\partial\beta}. \qquad (31)

This formula implicitly defines the Lagrange multiplier β as a function of U. The statistical entropy, Eq. (19), within the Stirling approximation becomes

S = k_B\ln w = k_B\left(N\ln N - \sum_i N_i\ln N_i\right). \qquad (32)

Inserting here Eq. (26) and α from Eq. (27), one obtains

\frac{S}{k_B} = N\ln N - \sum_i N_i(\alpha - \beta\varepsilon_i) = N\ln N - \alpha N + \beta U = N\ln Z + \beta U. \qquad (33)

The statistical entropy depends only on the parameter β. Its differential is given by

dS = \frac{dS}{d\beta}\,d\beta = k_B\left(N\frac{\partial\ln Z}{\partial\beta} + U + \beta\frac{dU}{d\beta}\right)d\beta = k_B\beta\, dU. \qquad (34)

Comparing this with the thermodynamic relation dU = T dS at V = const, one identifies

\beta = \frac{1}{k_B T}. \qquad (35)

Now, with the help of dT/dβ = −k_B T², one can represent the internal energy, Eq. (31), via the derivative with respect to T as

U = N k_B T^2\frac{\partial\ln Z}{\partial T}. \qquad (36)

Eq. (33) becomes

S = N k_B\ln Z + \frac{U}{T}. \qquad (37)

From here one obtains the statistical formula for the free energy

F = U − TS = −NkBT ln Z. (38)

One can see that the partition function Z contains the complete information about the system's thermodynamics, since other quantities such as the pressure P follow from F. In particular, one can check that ∂F/∂T = −S.
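As a numerical illustration of this machinery, the chain Z → U → S → F of Eqs. (28)–(38) can be checked for an arbitrary level scheme. The sketch below (Python) uses a made-up three-level spectrum and units with kB = 1; both are our illustrative choices, not anything fixed by the text:

```python
from math import exp, log

eps = [0.0, 1.0, 2.5]          # hypothetical three-level spectrum
N, beta, kB = 1000.0, 1.0, 1.0  # units with kB = 1

Z = sum(exp(-beta * e) for e in eps)              # Eq. (28)
Ni = [N / Z * exp(-beta * e) for e in eps]        # Boltzmann, Eq. (29)
U = sum(e * n for e, n in zip(eps, Ni))           # Eq. (30)

# Eq. (31): U = -N dlnZ/dbeta, checked with a central finite difference
h = 1e-6
Zp = sum(exp(-(beta + h) * e) for e in eps)
Zm = sum(exp(-(beta - h) * e) for e in eps)
assert abs(U + N * (log(Zp) - log(Zm)) / (2 * h)) < 1e-4

# Eqs. (37)-(38): S = N kB ln Z + U/T and F = U - T S = -N kB T ln Z
T = 1.0 / (kB * beta)
S = N * kB * log(Z) + U / T
F = U - T * S
assert abs(F + N * kB * T * log(Z)) < 1e-9
print(f"Z={Z:.4f}  U={U:.2f}  S={S:.2f}  F={F:.2f}")
```

The finite-difference check of Eq. (31) works for any spectrum one substitutes into `eps`, which is the sense in which Z "contains the complete information".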


A. Stationary Schrödinger equation

In the formalism of quantum mechanics, quantized states and their energies E are the solutions of the eigenvalue problem for a matrix or for a differential operator. In the latter case the problem is formulated as the so-called stationary Schrödinger equation

\hat{H}\Psi = E\Psi, \qquad (39)

where Ψ = Ψ(r) is the complex function called the wave function. The physical interpretation of the wave function is that |Ψ(r)|² gives the probability density for a particle to be found near the point r. Namely, the number of measurements dN, of the total N measurements, in which the particle is found in the elementary volume d³r = dx dy dz around r is given by

dN = N |Ψ(r)|2 d3r. (40)

The wave function satisfies the normalization condition

1 = \int d^3 r\, |\Psi(\mathbf{r})|^2. \qquad (41)

The operator Hˆ in Eq. (39) is the so-called Hamilton operator or Hamiltonian. For one particle, it is the sum of kinetic and potential energies

\hat{H} = \frac{\hat{\mathbf{p}}^2}{2m} + U(\mathbf{r}), \qquad (42)

where the classical momentum p is replaced by the operator

\hat{\mathbf{p}} = -i\hbar\frac{\partial}{\partial\mathbf{r}}. \qquad (43)

The Schrödinger equation can be formulated both for single particles and for systems of particles. In this course we will restrict ourselves to single particles. In this case the notation ε will be used for single-particle energy levels instead of E. One can see that Eq. (39) is a second-order linear differential equation. It is an ordinary differential equation in one dimension and a partial differential equation in two and more dimensions. If there is a potential energy U(r), this is a linear differential equation with variable coefficients that can be solved analytically only in special cases. In the case U = 0 it is a linear differential equation with constant coefficients that is easy to solve analytically. An important component of the quantum formalism is the boundary conditions for the wave function. In particular, for a particle inside a box with rigid walls the boundary condition is Ψ = 0 at the walls, so that Ψ(r) joins smoothly with the value Ψ(r) = 0 everywhere outside the box. In this case it is also guaranteed that |Ψ(r)|² is integrable and Ψ can be normalized according to Eq. (41). It turns out that the solution of Eq. (39) satisfying the boundary conditions exists only for a discrete set of E values that are called eigenvalues. The corresponding Ψ are called eigenfunctions, and together they are called eigenstates. Eigenvalue problems, both for matrices and for differential operators, were known in mathematics before the advent of quantum mechanics. The creators of quantum mechanics, mainly Schrödinger and Heisenberg, realized that this mathematical formalism can accurately describe the quantization of energy levels observed in experiments. Whereas Schrödinger formulated his famous Schrödinger equation, Heisenberg made a major contribution to the description of quantum systems with matrices.

B. Energy levels of a particle in a potential box

As an illustration, consider a particle in a one-dimensional rigid box, 0 ≤ x ≤ L. In this case the momentum operator becomes

\hat{p} = -i\hbar\frac{d}{dx} \qquad (44)

and Eq. (39) takes the form

-\frac{\hbar^2}{2m}\frac{d^2}{dx^2}\Psi(x) = \varepsilon\Psi(x) \qquad (45)

and can be represented as

\frac{d^2}{dx^2}\Psi(x) + k^2\Psi(x) = 0, \qquad k^2 \equiv \frac{2m\varepsilon}{\hbar^2}. \qquad (46)

The solution of this equation satisfying the boundary conditions Ψ(0) = Ψ(L) = 0 has the form

\Psi_\nu(x) = A\sin(k_\nu x), \qquad k_\nu = \frac{\pi}{L}\nu, \qquad \nu = 1, 2, 3, \ldots, \qquad (47)

where the eigenstates are labeled by the index ν. The constant A following from Eq. (41) is A = \sqrt{2/L}. The energy eigenvalues are given by

\varepsilon_\nu = \frac{\hbar^2 k_\nu^2}{2m} = \frac{\pi^2\hbar^2\nu^2}{2mL^2}. \qquad (48)

One can see that the energy ε is quadratic in the momentum p = ħk (de Broglie relation), as it should be, but the energy levels are discrete because of the quantization. For very large boxes, L → ∞, the energy levels become quasicontinuous. The lowest-energy state with ν = 1 is called the ground state. For a three-dimensional box with sides Lx, Ly, and Lz one has to solve the Schrödinger equation

-\frac{\hbar^2}{2m}\left(\frac{d^2}{dx^2} + \frac{d^2}{dy^2} + \frac{d^2}{dz^2}\right)\Psi(\mathbf{r}) = \varepsilon\Psi(\mathbf{r}) \qquad (49)

with similar boundary conditions. The solution factorizes and has the form

\Psi_{\nu_x,\nu_y,\nu_z}(x,y,z) = A\sin(k_{\nu_x}x)\sin(k_{\nu_y}y)\sin(k_{\nu_z}z), \qquad k_\alpha = \frac{\pi}{L_\alpha}\nu_\alpha, \qquad \nu_\alpha = 1, 2, 3, \ldots, \qquad (50)

where α = x, y, z. The energy levels are

\varepsilon = \frac{\hbar^2 k^2}{2m} = \frac{\hbar^2\left(k_x^2 + k_y^2 + k_z^2\right)}{2m} \qquad (51)

and are parametrized by the three quantum numbers να. The ground state is (νx, νy, νz) = (1, 1, 1). One can order the states in increasing ε and number them by the index j, the ground state being j = 1. If Lx = Ly = Lz = L, then

\varepsilon_{\nu_x,\nu_y,\nu_z} = \frac{\pi^2\hbar^2}{2mL^2}\left(\nu_x^2 + \nu_y^2 + \nu_z^2\right) \qquad (52)

and the same value of εj can be realized for different sets of νx, νy, and νz, for instance, (1,5,12), (5,12,1), etc. The number of different sets of (νx, νy, νz) having the same εj is called the degeneracy and is denoted gj. States with νx = νy = νz have gj = 1 and are called non-degenerate. If only two of the numbers νx, νy, and νz coincide, the degeneracy is gj = 3. If all three numbers are different, gj = 3! = 6. If one sums over the energy levels parametrized by j, one has to multiply the summands by the corresponding degeneracies.
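These degeneracy rules can be confirmed by enumerating the quantum numbers directly. A sketch (Python; the cutoff nmax is an arbitrary choice):

```python
from collections import Counter
from itertools import product

# Degeneracies g_j for the cubic box, Eq. (52): eps proportional to
# nx^2 + ny^2 + nz^2, quantum numbers running over 1, 2, 3, ...
nmax = 12
levels = Counter(
    nx * nx + ny * ny + nz * nz
    for nx, ny, nz in product(range(1, nmax + 1), repeat=3)
)
assert levels[3] == 1     # (1,1,1): ground state, non-degenerate
assert levels[6] == 3     # (1,1,2): two equal quantum numbers, g = 3
assert levels[14] == 6    # (1,2,3): all different, g = 3! = 6
print(sorted(levels.items())[:8])
```

The enumeration also reveals "accidental" degeneracies beyond the permutation count: for instance, 27 = 3² + 3² + 3² = 1² + 1² + 5², so that level carries g = 1 + 3 = 4.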

C. Density of states

For systems of a large size and thus very finely quantized states, one can define the density of states ρ(ε) as the number of energy levels dn in the interval dε, that is,

dn = ρ(ε)dε. (53)

It is convenient to start calculation of ρ(ε) by introducing the number of states dn in the “elementary volume” dνxdνydνz, considering νx, νy, and νz as continuous. The result obviously is

dn = d\nu_x\, d\nu_y\, d\nu_z, \qquad (54)

that is, the corresponding density of states is 1. Now one can rewrite the same number of states in terms of the wave vector k using Eq. (51),

dn = \frac{V}{\pi^3}\, dk_x\, dk_y\, dk_z, \qquad (55)

where V = LxLyLz is the volume of the box. After that one can go over to the number of states within the spherical shell dk, as was done for the distribution function of the molecules over velocities. Taking into account that kx, ky, and kz are all positive, this shell is not a complete spherical shell but 1/8 of it. Thus

dn = \frac{V}{\pi^3}\,\frac{4\pi}{8}\,k^2 dk. \qquad (56)

The last step is to change the variable from k to ε using Eq. (51). With

k^2 = \frac{2m}{\hbar^2}\varepsilon, \qquad k = \sqrt{\frac{2m}{\hbar^2}}\sqrt{\varepsilon}, \qquad \frac{dk}{d\varepsilon} = \frac{1}{2}\sqrt{\frac{2m}{\hbar^2}}\frac{1}{\sqrt{\varepsilon}}, \qquad (57)

one obtains Eq. (53) with

\rho(\varepsilon) = \frac{V}{(2\pi)^2}\left(\frac{2m}{\hbar^2}\right)^{3/2}\sqrt{\varepsilon}. \qquad (58)
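Eq. (58) can be tested by comparing the exact count of box states below an energy E with the integral of ρ(ε). A sketch in convenient units (Python; ħ = 2m = L = 1 are our choices, so V = 1 and ε = π²(νx² + νy² + νz²) by Eq. (52)):

```python
from itertools import product
from math import pi

# Exact count of states with eps = pi^2 (nx^2+ny^2+nz^2) <= E, versus
# integral_0^E rho(eps) deps = (1/(2 pi)^2) (2/3) E^(3/2)  [from Eq. (58)].
E = pi ** 2 * 3000.0
nmax = int((E ** 0.5) / pi) + 1
exact = sum(
    1
    for n in product(range(1, nmax + 1), repeat=3)
    if pi ** 2 * (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) <= E
)
integral = (2.0 / 3.0) * E ** 1.5 / (2 * pi) ** 2
print(exact, round(integral))
assert abs(exact / integral - 1) < 0.05   # agrees up to surface corrections
```

The two numbers agree to a few percent; the residual difference is the surface correction of relative order 1/√E, which becomes negligible for a macroscopic box.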


In this section we demonstrate how the results for the ideal gas, previously obtained within the molecular theory, follow from the more general framework of statistical physics of the preceding section. Consider an ideal gas in a sufficiently large container. In this case the energy levels of the system, see Eq. (51), are quantized so finely that one can introduce the density of states ρ(ε) defined by Eq. (53). The number of particles in quantum states within the energy interval dε is the product of the number of particles in one state N(ε) and the number of states dnε in this energy interval. With N(ε) given by Eq. (29) one obtains

dN_\varepsilon = N(\varepsilon)\, dn_\varepsilon = \frac{N}{Z}\,e^{-\beta\varepsilon}\rho(\varepsilon)\, d\varepsilon. \qquad (59)

For finely quantized levels one can replace summation in Eq. (28) and similar sums by integration as

\sum_i \ldots \Rightarrow \int d\varepsilon\, \rho(\varepsilon)\ldots \qquad (60)

Thus for the partition function (28) one has

Z = \int d\varepsilon\, \rho(\varepsilon)\, e^{-\beta\varepsilon}. \qquad (61)

For quantum particles in a rigid box, with the help of Eq. (58) one obtains

Z = \frac{V}{(2\pi)^2}\left(\frac{2m}{\hbar^2}\right)^{3/2}\int_0^\infty d\varepsilon\,\sqrt{\varepsilon}\, e^{-\beta\varepsilon} = \frac{V}{(2\pi)^2}\left(\frac{2m}{\hbar^2}\right)^{3/2}\frac{\sqrt{\pi}}{2\beta^{3/2}} = V\left(\frac{mk_B T}{2\pi\hbar^2}\right)^{3/2}. \qquad (62)

Using the classical relation ε = mv2/2 and thus dε = mvdv, one can obtain the formula for the number of particles in the interval dv

dNv = Nf(v)dv, (63) where the speed distribution function f(v) is given by

f(v) = \frac{1}{Z}\,e^{-\beta\varepsilon}\rho(\varepsilon)\, mv = \frac{1}{V}\left(\frac{2\pi\hbar^2}{mk_B T}\right)^{3/2}\frac{V}{(2\pi)^2}\left(\frac{2m}{\hbar^2}\right)^{3/2}v\sqrt{\frac{m}{2}}\, mv\, e^{-\beta\varepsilon} = 4\pi v^2\left(\frac{m}{2\pi k_B T}\right)^{3/2}\exp\left(-\frac{mv^2}{2k_B T}\right). \qquad (64)

This result coincides with the Maxwell distribution function obtained earlier from the molecular theory of gases. Planck's constant ħ, which links to quantum mechanics, has disappeared from the final result.

The internal energy of the ideal gas is its kinetic energy,

U = N¯ε, (65)

ε̄ = mv̄²/2 being the average kinetic energy of a molecule. The latter can be calculated with the help of the speed distribution function above, as was done in the molecular theory of gases. The result has the form

\bar{\varepsilon} = \frac{f}{2}k_B T, \qquad (66)

where in our case f = 3, corresponding to the three translational degrees of freedom. The same result can be obtained from Eq. (31):

U = -N\frac{\partial\ln Z}{\partial\beta} = -N\frac{\partial}{\partial\beta}\ln\left[V\left(\frac{m}{2\pi\hbar^2\beta}\right)^{3/2}\right] = \frac{3}{2}N\frac{\partial\ln\beta}{\partial\beta} = \frac{3}{2}\frac{N}{\beta} = \frac{3}{2}Nk_B T. \qquad (67)

After that, the known result for the heat capacity CV = (∂U/∂T)V follows. The pressure P is given by the thermodynamic formula

P = -\left(\frac{\partial F}{\partial V}\right)_T. \qquad (68)

With the help of Eqs. (38) and (62) one obtains

P = Nk_B T\frac{\partial\ln Z}{\partial V} = Nk_B T\frac{\partial\ln V}{\partial V} = \frac{Nk_B T}{V}, \qquad (69)

which amounts to the equation of state of the ideal gas, PV = NkBT.
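Eqs. (67) and (69) can be reproduced numerically by differentiating ln Z of Eq. (62) with finite differences. A sketch (Python; the units m = ħ = kB = 1 and the values of N, V, β are arbitrary illustrative choices):

```python
from math import log, pi

# ln Z for the ideal gas, Eq. (62), in units m = hbar = kB = 1:
# ln Z = ln V + (3/2) ln[1/(2 pi beta)]
def lnZ(V, beta):
    return log(V) + 1.5 * log(1.0 / (2.0 * pi * beta))

N, V, beta, h = 100.0, 2.0, 0.5, 1e-6
T = 1.0 / beta

# Eq. (67): U = -N dlnZ/dbeta = (3/2) N kB T
U = -N * (lnZ(V, beta + h) - lnZ(V, beta - h)) / (2 * h)
assert abs(U - 1.5 * N * T) < 1e-3

# Eq. (69): P = N kB T dlnZ/dV = N kB T / V
P = N * T * (lnZ(V + h, beta) - lnZ(V - h, beta)) / (2 * h)
assert abs(P - N * T / V) < 1e-3
print(U, P)
```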


Consider an ensemble of N identical harmonic oscillators, each of them described by the Hamiltonian

\hat{H} = \frac{\hat{p}^2}{2m} + \frac{kx^2}{2}. \qquad (70)

Here the momentum operator p̂ is given by Eq. (44) and k is the spring constant. This theoretical model can describe, for instance, vibrational degrees of freedom of diatomic molecules. In this case x is the elongation of the bond between the two atoms, relative to the equilibrium bond length. The stationary Schrödinger equation (39) for a harmonic oscillator becomes

\left(-\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + \frac{kx^2}{2}\right)\Psi(x) = \varepsilon\Psi(x). \qquad (71)

The boundary conditions for this equation require that Ψ(±∞) = 0 and the integral in Eq. (41) converges. In contrast to Eq. (45), this is a linear differential equation with a variable coefficient. Such differential equations in general do not have solutions in terms of known functions. In some cases the solution can be expressed through special functions such as hypergeometric functions, Bessel functions, etc. The solution of Eq. (71) can be found, but this task belongs to quantum mechanics courses. The main result that we need here is that the energy eigenvalues of the harmonic oscillator have the form

\varepsilon_\nu = \hbar\omega_0\left(\nu + \frac{1}{2}\right), \qquad \nu = 0, 1, 2, \ldots, \qquad (72)

where ω0 = \sqrt{k/m} is the frequency of oscillations. The energy level with ν = 0 is the ground state. The ground-state energy ε0 = ħω0/2 is not zero, as it would be for a classical oscillator. This quantum ground-state energy is called the zero-point energy. It is irrelevant in the calculation of the heat capacity of an ensemble of harmonic oscillators. The partition function of a harmonic oscillator is

Z = \sum_{\nu=0}^{\infty}e^{-\beta\varepsilon_\nu} = e^{-\beta\hbar\omega_0/2}\sum_{\nu=0}^{\infty}\left(e^{-\beta\hbar\omega_0}\right)^\nu = \frac{e^{-\beta\hbar\omega_0/2}}{1 - e^{-\beta\hbar\omega_0}} = \frac{1}{e^{\beta\hbar\omega_0/2} - e^{-\beta\hbar\omega_0/2}} = \frac{1}{2\sinh(\beta\hbar\omega_0/2)}, \qquad (73)

where the result for the geometric progression

1 + x + x^2 + x^3 + \ldots = (1-x)^{-1}, \qquad x < 1, \qquad (74)

was used. The hyperbolic functions are defined by

\sinh(x) \equiv \frac{e^x - e^{-x}}{2}, \qquad \cosh(x) \equiv \frac{e^x + e^{-x}}{2},
\tanh(x) \equiv \frac{\sinh(x)}{\cosh(x)}, \qquad \coth(x) \equiv \frac{\cosh(x)}{\sinh(x)} = \frac{1}{\tanh(x)}. \qquad (75)

We will be using the formulas

\sinh(x)' = \cosh(x), \qquad \cosh(x)' = \sinh(x) \qquad (76)

and

\sinh(x) \cong \begin{cases} x, & x \ll 1 \\ e^x/2, & x \gg 1, \end{cases} \qquad \tanh(x) \cong \begin{cases} x, & x \ll 1 \\ 1, & x \gg 1. \end{cases} \qquad (77)

The internal (average) energy of the ensemble of oscillators is given by Eq. (31). This yields

U = -N\frac{\partial\ln Z}{\partial\beta} = N\frac{\partial\ln\sinh(\beta\hbar\omega_0/2)}{\partial\beta} = \frac{N}{\sinh(\beta\hbar\omega_0/2)}\frac{\partial\sinh(\beta\hbar\omega_0/2)}{\partial\beta} = N\frac{\hbar\omega_0}{2}\coth\left(\frac{\beta\hbar\omega_0}{2}\right) \qquad (78)

or

U = N\frac{\hbar\omega_0}{2}\coth\left(\frac{\hbar\omega_0}{2k_B T}\right). \qquad (79)

Another way of writing the internal energy is

U = N\hbar\omega_0\left(\frac{1}{e^{\beta\hbar\omega_0} - 1} + \frac{1}{2}\right), \qquad (80)

where the constant term with 1/2 is the zero-point energy. The limiting low- and high-temperature cases of this formula are

U \cong \begin{cases} N\hbar\omega_0/2, & k_B T \ll \hbar\omega_0 \\ Nk_B T, & k_B T \gg \hbar\omega_0. \end{cases} \qquad (81)

In the low-temperature case almost all oscillators are in their ground states, since e^{−βħω₀} ≪ 1. Thus the main term contributing to the partition function in Eq. (73) is ν = 0. Correspondingly, U is the sum of the ground-state energies of all oscillators. At high temperatures, the previously obtained classical result is reproduced, and Planck's constant ħ, which links to quantum mechanics, disappears. In this case e^{−βħω₀} is only slightly smaller than one, so that very many different ν contribute to Z in Eq. (73). In this case one can replace summation by integration, as was done above for the particle in a potential box; there, too, one obtained the classical results. One can see that the crossover between the two limiting cases corresponds to kBT ∼ ħω₀. In the high-temperature limit kBT ≫ ħω₀, many low-lying energy levels are populated. The top populated level ν* can be estimated from kBT ∼ ħω₀ν*, so that

\nu^* \sim \frac{k_B T}{\hbar\omega_0} \gg 1. \qquad (82)

The probability to find an oscillator in the states with ν ≫ ν* is exponentially small. The heat capacity is defined by

C = \frac{dU}{dT} = N\frac{\hbar\omega_0}{2}\left(-\frac{1}{\sinh^2[\hbar\omega_0/(2k_B T)]}\right)\left(-\frac{\hbar\omega_0}{2k_B T^2}\right) = Nk_B\left(\frac{\hbar\omega_0/(2k_B T)}{\sinh[\hbar\omega_0/(2k_B T)]}\right)^2. \qquad (83)

This formula has the limiting cases

C \cong \begin{cases} Nk_B\left(\frac{\hbar\omega_0}{k_B T}\right)^2\exp\left(-\frac{\hbar\omega_0}{k_B T}\right), & k_B T \ll \hbar\omega_0 \\ Nk_B, & k_B T \gg \hbar\omega_0. \end{cases} \qquad (84)
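The two limits of Eq. (84) can be checked against the full Eq. (83). A sketch (Python; units ħω₀ = kB = 1, so the dimensionless temperature T is measured in units of ħω₀/kB):

```python
from math import exp, sinh

# Heat capacity per oscillator, Eq. (83), in units hbar*omega0 = kB = 1:
# C/(N kB) = (x / sinh x)^2 with x = 1/(2T).
def c_per_oscillator(T):
    x = 1.0 / (2.0 * T)
    return (x / sinh(x)) ** 2

# High-T limit: classical value C = N kB (second line of Eq. (84))
assert abs(c_per_oscillator(100.0) - 1.0) < 1e-4
# Low-T limit: C ~ N kB (1/T)^2 exp(-1/T) (first line of Eq. (84))
T = 0.05
assert abs(c_per_oscillator(T) / ((1 / T) ** 2 * exp(-1 / T)) - 1) < 1e-3
print([round(c_per_oscillator(T), 4) for T in (0.1, 0.3, 1.0, 10.0)])
```

The printed values trace out the crossover around T ∼ 1 (that is, kBT ∼ ħω₀) shown in Fig. 2.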

FIG. 2: Heat capacity of harmonic oscillators.

One can see that at high temperatures the heat capacity can be written as

C = \frac{f}{2}Nk_B, \qquad (85)

where the effective number of degrees of freedom for an oscillator is f = 2. The explanation of the additional factor 2 is that the oscillator has not only kinetic but also potential energy, and the average values of these two energies are the same. Thus the total amount of energy in a vibrational degree of freedom doubles with respect to the translational and rotational degrees of freedom. At low temperatures the vibrational heat capacity above becomes exponentially small. One says that vibrational degrees of freedom are frozen out at low temperatures. The heat capacity of an ensemble of harmonic oscillators in the whole temperature range, Eq. (83), is plotted in Fig. 2. The average quantum number of an oscillator is given by

n \equiv \langle\nu\rangle = \frac{1}{Z}\sum_{\nu=0}^{\infty}\nu\, e^{-\beta\varepsilon_\nu}. \qquad (86)

Using Eq. (72), one can calculate this sum as follows

n = \frac{1}{Z}\sum_{\nu=0}^{\infty}\left(\frac{1}{2}+\nu\right)e^{-\beta\varepsilon_\nu} - \frac{1}{2Z}\sum_{\nu=0}^{\infty}e^{-\beta\varepsilon_\nu} = \frac{1}{\hbar\omega_0 Z}\sum_{\nu=0}^{\infty}\varepsilon_\nu e^{-\beta\varepsilon_\nu} - \frac{Z}{2Z} = -\frac{1}{\hbar\omega_0 Z}\frac{\partial Z}{\partial\beta} - \frac{1}{2} = \frac{1}{\hbar\omega_0}\frac{U}{N} - \frac{1}{2}. \qquad (87)

Finally, with the help of Eq. (80), one finds

n = \frac{1}{e^{\beta\hbar\omega_0} - 1} = \frac{1}{\exp[\hbar\omega_0/(k_B T)] - 1}. \qquad (88)

This is the Bose-Einstein distribution that will be considered later. Its meaning is the following. Considering the oscillator as a “box” or “mode”, one can ask what is the average number of quanta (that is, “particles” or excitations) in this box at a given temperature. The latter is given by the formula above. Substituting Eq. (88) into Eq. (80), one obtains a nice formula

U = N\hbar\omega_0\left(n + \frac{1}{2}\right) \qquad (89)

resembling Eq. (72). At low temperatures, kBT ≪ ħω₀, the average quantum number n becomes exponentially small. This means that the oscillator is predominantly in its ground state, ν = 0. At high temperatures, kBT ≫ ħω₀, Eq. (88) yields

n \cong \frac{k_B T}{\hbar\omega_0}, \qquad (90)

which has the same order of magnitude as the top populated quantum level number ν* given by Eq. (82). The density of states ρ(ε) for the harmonic oscillator can be easily found. The number of levels dnν in the interval dν is

dnν = dν (91)

(that is, there is only one level in the interval dν = 1). Then, changing the variables with the help of Eq. (72), one finds the number of levels dnε in the interval dε as

dn_\varepsilon = \frac{d\nu}{d\varepsilon}\,d\varepsilon = \left(\frac{d\varepsilon}{d\nu}\right)^{-1}d\varepsilon = \rho(\varepsilon)\, d\varepsilon, \qquad (92)

where

\rho(\varepsilon) = \frac{1}{\hbar\omega_0}. \qquad (93)

To conclude this section, let us reproduce the classical high-temperature results by a simpler method. At high temperatures, many levels are populated and the distribution function e^{−βεν} changes only slightly from one level to the next. Indeed, its relative change is

\frac{e^{-\beta\varepsilon_\nu} - e^{-\beta\varepsilon_{\nu+1}}}{e^{-\beta\varepsilon_\nu}} = 1 - e^{-\beta(\varepsilon_{\nu+1}-\varepsilon_\nu)} = 1 - e^{-\beta\hbar\omega_0} \cong 1 - (1 - \beta\hbar\omega_0) = \frac{\hbar\omega_0}{k_B T} \ll 1. \qquad (94)

In this situation one can replace the summation in Eq. (73) by integration over ε using the density of states of Eq. (93):

Z = \int_0^\infty d\varepsilon\, \rho(\varepsilon)\, e^{-\beta\varepsilon} = \frac{1}{\hbar\omega_0}\int_0^\infty d\varepsilon\, e^{-\beta\varepsilon} = \frac{1}{\hbar\omega_0\beta} = \frac{k_B T}{\hbar\omega_0}. \qquad (95)

Also, one can integrate over ν, considering it as continuous and not using the density of states explicitly:

Z = \int_0^\infty d\nu\, e^{-\beta\varepsilon} = \int_0^\infty d\varepsilon\,\frac{d\nu}{d\varepsilon}\, e^{-\beta\varepsilon} = \frac{1}{\hbar\omega_0}\int_0^\infty d\varepsilon\, e^{-\beta\varepsilon} = \frac{1}{\hbar\omega_0\beta} = \frac{k_B T}{\hbar\omega_0}. \qquad (96)

The result of this calculation is in accord with Eq. (93). Now the internal energy U is given by

U = -N\frac{\partial\ln Z}{\partial\beta} = -N\frac{\partial}{\partial\beta}\ln\frac{1}{\hbar\omega_0\beta} = N\frac{\partial\ln\beta}{\partial\beta} = \frac{N}{\beta} = Nk_B T, \qquad (97)

which coincides with the second line of Eq. (81). In 1907 Einstein proposed a theory of thermal properties of solids based on the assumption of the existence therein of independent thermally excited harmonic oscillators. The essential step was that Einstein applied Planck's idea of quantization, extending it from the electromagnetic radiation (see Chapter XX) to the vibrations of atoms and molecules in solids. Before Einstein's work, it was a puzzle why the heat capacity of solids is not constant, as predicted by the classical theory (our high-temperature limit), but becomes small at low temperatures. Fig. 2 is the result of Einstein's theory. This was historically the third application of the idea of quantization, after the seminal 1900 work of Max Planck and Einstein's theory of the photoeffect of 1905. All these three achievements were accomplished before quantum mechanics was finalized about 1927. Einstein's result at low temperatures is only qualitative, as the model ignores the interaction of the oscillations of different atoms. The accurate theory by Debye, see Chapter XII, leads to the dependence CV ∝ T³ in 3D solids at low temperatures. Still, Figs. 2 and 4 look pretty similar.
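Both the occupation number of Eq. (88) and the classical limit Z ≅ kBT/(ħω₀) of Eqs. (95)–(96) can be verified by direct summation over the levels of Eq. (72). A sketch (Python; units ħω₀ = kB = 1, and the truncation nu_max is our own choice, valid while β·nu_max ≫ 1):

```python
from math import exp

def oscillator_sums(beta, nu_max=2000):
    """Direct sums over the levels eps_nu = nu + 1/2 (units hbar*omega0 = 1)."""
    weights = [exp(-beta * (nu + 0.5)) for nu in range(nu_max)]
    Z = sum(weights)
    n_avg = sum(nu * w for nu, w in enumerate(weights)) / Z
    return Z, n_avg

# Eq. (88): <nu> equals the Bose-Einstein expression 1/(e^beta - 1)
beta = 0.7
Z, n_avg = oscillator_sums(beta)
assert abs(n_avg - 1.0 / (exp(beta) - 1.0)) < 1e-10

# Eqs. (95)-(96): at high temperature (small beta), Z -> kB T = 1/beta
beta = 0.01
Z, _ = oscillator_sums(beta)
assert abs(Z * beta - 1.0) < 0.01
print(Z)
```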


The rotational kinetic energy of a rigid body, expressed with respect to the principal axes, reads

E_{\mathrm{rot}} = \frac{1}{2}I_1\omega_1^2 + \frac{1}{2}I_2\omega_2^2 + \frac{1}{2}I_3\omega_3^2. \qquad (98)

We will restrict our consideration to the axially symmetric body with I1 = I2 = I, since for the fully asymmetric body the quantum energy levels cannot be found analytically. In this case Erot can be rewritten in terms of the angular-momentum components Lα = Iαωα, where α = 1, 2, 3, as follows:

E_{\mathrm{rot}} = \frac{L_1^2}{2I_1} + \frac{L_2^2}{2I_2} + \frac{L_3^2}{2I_3} = \frac{L^2}{2I} + \frac{1}{2}\left(\frac{1}{I_3} - \frac{1}{I}\right)L_3^2, \qquad (99)

where $L^2 = L_1^2 + L_2^2 + L_3^2$. In the important case of a diatomic molecule with two identical atoms, the center of mass is located midway between the two atoms, at the distance $d/2$ from each, where $d$ is the distance between the atoms. The moment of inertia of the molecule with respect to the 3 axis going through both atoms is zero, $I_3 = 0$. The moments of inertia with respect to the 1 and 2 axes are equal to each other:

\[ I_1 = I_2 = I = 2 \times M \left(\frac{d}{2}\right)^2 = \frac{1}{2} M d^2, \qquad (100) \]
where $M$ is the mass of one atom. In our case, $I_3 = 0$, there is no kinetic energy associated with rotation around the 3 axis. Then Eq. (99) requires $L_3 = 0$, that is, the angular momentum of a diatomic molecule is perpendicular to its axis. In quantum mechanics $\mathbf{L}$ and $L_3$ become operators, and the expression above becomes the Hamiltonian, $E_{\rm rot} \Rightarrow \hat{H}_{\rm rot}$. The solution of the stationary Schrödinger equation yields the eigenvalues of $\hat{L}^2$ and $\hat{L}_3$ in terms of three quantum numbers $l$, $m$, and $n$:

\[ \hat{L}^2 = \hbar^2 l(l+1), \qquad l = 0, 1, 2, \ldots \qquad (101) \]
\[ \hat{L}_3 = \hbar n, \qquad m, n = -l, -l+1, \ldots, l-1, l. \qquad (102) \]
Thus, the energy eigenvalues are given by

\[ \varepsilon_{l,n} = \frac{\hbar^2 l(l+1)}{2I} + \frac{\hbar^2}{2}\left(\frac{1}{I_3} - \frac{1}{I}\right) n^2 \qquad (103) \]
that follows by substitution of Eqs. (101) and (102) into Eq. (99). The quantum number $m$ accounts for the $2l+1$ different orientations of the vector $\mathbf{L}$ in space. As all these orientations are equivalent, the energy does not depend on $m$. Thus, each level is at least $(2l+1)$-fold degenerate. Further, the axis of the molecule can have $2l+1$ different projections on $\hat{\mathbf{L}}$, parametrized by the quantum number $n$. In the general case $I_3 \neq I$ these states are non-degenerate. However, they become degenerate for the fully symmetric body, $I_1 = I_2 = I_3 = I$. In this case the energy eigenvalues become
\[ \varepsilon_l = \frac{\hbar^2 l(l+1)}{2I}, \qquad (104) \]

and the degeneracy of the quantum level $l$ is $g_l = (2l+1)^2$. For the diatomic molecule $I_3 = 0$, and the only acceptable value of $n$ is $n = 0$. Thus one obtains Eq. (104) again but with the degeneracy $g_l = 2l+1$. Below we will consider only the two cases: a) the fully symmetric body and b) the diatomic molecule. The rotational partition function for both of them is given by
\[ Z = \sum_l g_l e^{-\beta\varepsilon_l}, \qquad g_l = (2l+1)^\xi, \qquad \xi = \begin{cases} 2, & \text{symmetric body} \\ 1, & \text{diatomic molecule.} \end{cases} \qquad (105) \]

In both cases $Z$ cannot be calculated analytically in general. In the general axially symmetric case $I_1 = I_2 \neq I_3$ with $I_3 \neq 0$, one has to perform a numerical summation over both $l$ and $n$. However, even the sum over $l$ above cannot be calculated analytically, so we will consider the limits of low and high temperatures.

FIG. 3: Rotational heat capacity of a fully symmetric body and a diatomic molecule.

In the low-temperature limit, most rotators are in their ground state $l = 0$, and very few are in the first excited state $l = 1$. Discarding all higher values of $l$, one obtains
\[ Z \cong 1 + 3^\xi \exp\left(-\frac{\beta\hbar^2}{I}\right) \qquad (106) \]
at low temperatures. The rotational internal energy is given by
\[ U = -N \frac{\partial \ln Z}{\partial \beta} \cong 3^\xi N \frac{\hbar^2}{I} \exp\left(-\frac{\beta\hbar^2}{I}\right) = 3^\xi N \frac{\hbar^2}{I} \exp\left(-\frac{\hbar^2}{I k_B T}\right) \qquad (107) \]
and it is exponentially small. The heat capacity is

\[ C_V = \left(\frac{\partial U}{\partial T}\right)_V = 3^\xi N k_B \left(\frac{\hbar^2}{I k_B T}\right)^2 \exp\left(-\frac{\hbar^2}{I k_B T}\right) \qquad (108) \]
and it is also exponentially small. In the high-temperature limit, very many energy levels are thermally populated, so that in Eq. (105) one can neglect 1 in comparison with $l$ and replace the summation over $l$ by integration. With

\[ g_l \cong (2l)^\xi, \qquad \varepsilon_l \equiv \varepsilon \cong \frac{\hbar^2 l^2}{2I}, \qquad \frac{\partial\varepsilon}{\partial l} = \frac{\hbar^2 l}{I}, \qquad \frac{\partial l}{\partial\varepsilon} = \frac{I}{\hbar^2 l} = \sqrt{\frac{I}{2\hbar^2\varepsilon}} \qquad (109) \]
one has
\[ Z \cong \int_0^\infty dl\, g_l\, e^{-\beta\varepsilon_l} \cong 2^\xi \int_0^\infty dl\, l^\xi e^{-\beta\varepsilon_l} = 2^\xi \int_0^\infty d\varepsilon\, \frac{\partial l}{\partial\varepsilon}\, l^\xi e^{-\beta\varepsilon} \qquad (110) \]
and further
\[ Z \cong 2^\xi \sqrt{\frac{I}{2\hbar^2}} \int_0^\infty d\varepsilon\, \frac{1}{\sqrt{\varepsilon}} \left(\frac{2I\varepsilon}{\hbar^2}\right)^{\xi/2} e^{-\beta\varepsilon} = 2^{(3\xi-1)/2} \left(\frac{I}{\hbar^2}\right)^{(\xi+1)/2} \int_0^\infty d\varepsilon\, \varepsilon^{(\xi-1)/2} e^{-\beta\varepsilon} \]
\[ = 2^{(3\xi-1)/2} \left(\frac{I}{\beta\hbar^2}\right)^{(\xi+1)/2} \int_0^\infty dx\, x^{(\xi-1)/2} e^{-x} \propto \beta^{-(\xi+1)/2}. \qquad (111) \]
Now the internal energy is given by
\[ U = -N \frac{\partial \ln Z}{\partial \beta} = N \frac{\xi+1}{2} \frac{\partial \ln\beta}{\partial\beta} = N \frac{\xi+1}{2} \frac{1}{\beta} = \frac{\xi+1}{2} N k_B T, \qquad (112) \]
whereas the heat capacity is
\[ C_V = \left(\frac{\partial U}{\partial T}\right)_V = \frac{\xi+1}{2} N k_B. \qquad (113) \]
In these two formulas, $f = \xi + 1$ is the number of rotational degrees of freedom, $f = 2$ for a diatomic molecule and $f = 3$ for a fully symmetric rotator. This result confirms the principle of equidistribution of energy over the degrees of freedom in the classical limit, which requires temperatures high enough. The all-temperature result for the heat capacity of rotators, based on the numerical summation in Eq. (105), is shown in Fig. 3. Surprisingly, there is a maximum of the heat capacity in the intermediate temperature region, in contrast to the monotonic increase of the heat capacity of the harmonic oscillator shown in Fig. 2. To conclude this section, let us calculate the partition function classically at high temperatures. The calculation can be performed in the general case of all moments of inertia being different. The rotational energy, Eq. (98), can be written as
\[ E_{\rm rot} = \frac{L_1^2}{2I_1} + \frac{L_2^2}{2I_2} + \frac{L_3^2}{2I_3}. \qquad (114) \]

The dynamical variables here are the angular-momentum components $L_1$, $L_2$, and $L_3$, so that the partition function can be obtained by integration with respect to them:
\[ Z_{\rm class} = \int_{-\infty}^\infty dL_1\, dL_2\, dL_3\, \exp(-\beta E_{\rm rot}) = \prod_{\alpha=1}^{3} \int_{-\infty}^\infty dL_\alpha \exp\left(-\frac{\beta L_\alpha^2}{2I_\alpha}\right) = \sqrt{\frac{2I_1}{\beta}\,\frac{2I_2}{\beta}\,\frac{2I_3}{\beta}} \left(\int_{-\infty}^\infty dx\, e^{-x^2}\right)^3 \propto \beta^{-3/2}. \qquad (115) \]

In fact, $Z_{\rm class}$ contains some additional coefficients, for instance, from the integration over the orientations of the molecule. However, these coefficients are irrelevant in the calculation of the internal energy and the heat capacity. For the internal energy one obtains
\[ U = -N \frac{\partial \ln Z_{\rm class}}{\partial \beta} = \frac{3}{2} N \frac{\partial \ln\beta}{\partial\beta} = \frac{3}{2} \frac{N}{\beta} = \frac{3}{2} N k_B T \qquad (116) \]
that coincides with Eq. (112) with $\xi = 2$. In the case of the diatomic molecule the energy has the form

\[ E_{\rm rot} = \frac{L_1^2}{2I_1} + \frac{L_2^2}{2I_2}. \qquad (117) \]
Then the partition function becomes
\[ Z_{\rm class} = \int_{-\infty}^\infty dL_1\, dL_2\, \exp(-\beta E_{\rm rot}) = \sqrt{\frac{2I_1}{\beta}\,\frac{2I_2}{\beta}} \left(\int_{-\infty}^\infty dx\, e^{-x^2}\right)^2 \propto \beta^{-1}. \qquad (118) \]
This leads to

\[ U = N k_B T, \qquad (119) \]
in accordance with Eq. (112) with $\xi = 1$.
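The all-temperature numerical summation of Eq. (105) behind Fig. 3 can be sketched as follows; this is a minimal illustration in reduced units $\hbar^2/I = k_B = 1$ (the units and the fluctuation formula $C = (\langle\varepsilon^2\rangle - \langle\varepsilon\rangle^2)/(k_BT^2)$ are choices of this sketch, not taken from the text):

```python
import numpy as np

def rot_heat_capacity(t, xi, lmax=200):
    """Heat capacity per rotator (units of k_B) from Z = sum_l g_l e^{-beta eps_l},
    Eq. (105), with eps_l = l(l+1)/2 in units hbar^2/I and t = k_B T I / hbar^2."""
    l = np.arange(lmax)
    g = (2 * l + 1) ** xi            # degeneracies, Eq. (105)
    e = l * (l + 1) / 2.0            # energy levels, Eq. (104)
    w = g * np.exp(-e / t)           # Boltzmann weights
    Z = w.sum()
    E = (w * e).sum() / Z            # <eps>
    E2 = (w * e**2).sum() / Z        # <eps^2>
    return (E2 - E**2) / t**2        # energy-fluctuation formula for C/k_B

# High-temperature limit, Eq. (113): C -> (xi+1)/2 per rotator
print(rot_heat_capacity(50.0, xi=1))   # diatomic molecule
print(rot_heat_capacity(50.0, xi=2))   # fully symmetric body
```

At intermediate $t$ the same function reproduces the maximum of $C_V$ visible in Fig. 3, and at low $t$ the exponentially small result of Eq. (108).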


Above in Sec. X we have studied the statistical mechanics of harmonic oscillators. For instance, a diatomic molecule behaves as an oscillator, the chemical bond between the atoms periodically changing its length with time in the classical description. For molecules consisting of more than two atoms, vibrations become more complicated and, as a rule, involve all $N$ atoms. Still, within the harmonic approximation (the potential energy expanded up to the second-order terms in deviations from the equilibrium positions), the classical equations of motion are linear and can be dealt with by matrix algebra. Diagonalizing the dynamical matrix of the system, one can find $3N - 6$ linear combinations of atomic coordinates that dynamically behave as independent harmonic oscillators. Such collective vibrational modes of the system are called normal modes. For instance, a hypothetical triatomic molecule whose atoms are allowed to move along a straight line has 2 normal modes. If the masses of the first and third atoms are the same, one of the normal modes corresponds to antiphase oscillations of the first and third atoms with the second (central) atom being at rest. The other corresponds to in-phase oscillations of the first and third atoms with the central atom oscillating in antiphase to them. For more complicated molecules, normal modes can be found numerically. Normal modes can be considered quantum mechanically as independent quantum oscillators, as in Sec. X. Then the vibrational partition function of the molecule is a product of the partition functions of the individual normal modes, and the internal energy is a sum of the internal energies of these modes. The latter is a consequence of the independence of the normal modes. A solid having a well-defined crystal structure is translationally invariant, which simplifies finding the normal modes in spite of the large number of atoms $N$. In the simplest case of one atom in the unit cell, the normal modes can be found immediately by Fourier transformation of the atomic deviations.
The resulting normal modes are sound waves with different wave vectors $\mathbf{k}$ and polarizations (in the simplest case, one longitudinal and two equivalent transverse modes). In the sequel, to illustrate the basic principles, we will consider only one type of sound waves (say, the longitudinal one) but we will count it three times. The results can then be generalized to different kinds of waves. For wave vectors small enough, there is a linear relation between the wave vector and the frequency of the oscillations,

\[ \omega_k = v k. \qquad (120) \]

In this formula, $\omega_k$ depends only on the magnitude $k = 2\pi/\lambda$ and not on the direction of $\mathbf{k}$, and $v$ is the speed of sound. The acceptable values of $\mathbf{k}$ should satisfy the boundary conditions at the boundaries of the solid body. Obviously, the thermodynamics of a large body is insensitive to the shape of the body and to the exact form of these conditions. (Note that the body's surface can be rough, and then it is difficult to formulate a model of what exactly happens at the surface!) The simplest possibility is to consider a body of box shape with dimensions $L_x$, $L_y$, and $L_z$ and use clamped boundary conditions, so that the atoms at the boundary cannot move. These boundary conditions are the same as in the quantum problem of a particle in a potential box. One obtains atomic displacements in the normal modes of the type $u(x) = u_0 \sin(k_x x)$, etc., and discrete values of the wave-vector components $k_\alpha$ with $\alpha = x, y, z$:

\[ k_\alpha = \pi \frac{\nu_\alpha}{L_\alpha}, \qquad \nu_\alpha = 1, 2, 3, \ldots \qquad (121) \]

If the atoms are arranged in a simple cubic lattice with the lattice spacing a, then Lα = aNα, where Nα are the numbers of atoms (or unit cells) in each direction α. Then

\[ k_\alpha = \frac{\pi}{a} \frac{\nu_\alpha}{N_\alpha}. \qquad (122) \]
Similarly to the problem of the particle in a box, one can introduce the frequency density of normal modes $\rho(\omega)$ via the number of normal modes in the frequency interval $d\omega$,

\[ dn_\omega = \rho(\omega)\, d\omega. \qquad (123) \]

Using Eq. (56) multiplied by the number of modes 3 and changing from $k$ to $\omega$ with the help of Eq. (120), one obtains
\[ \rho(\omega) = \frac{3V}{2\pi^2 v^3}\, \omega^2. \qquad (124) \]
The total number of normal modes should be $3N$:
\[ 3N = 3 \sum_{\mathbf{k}} 1 = 3 \sum_{\nu_x, \nu_y, \nu_z} 1. \qquad (125) \]

This equation splits into the three separate equations

\[ N_\alpha = \sum_{\nu_\alpha=1}^{\nu_{\alpha,\max}} 1 = \nu_{\alpha,\max}. \qquad (126) \]

This defines the maximal value of $\nu_\alpha$ and, according to Eq. (122), of $k_\alpha$, which happens to be $k_{\alpha,\max} = \pi/a$ for all $\alpha$. As the solid body is macroscopic and there are very many allowed values of the wave vector, one can replace summation by integration and use the density of states:
\[ \sum_{\mathbf{k}} \ldots \Rightarrow \int_0^{\omega_{\max}} d\omega\, \rho(\omega) \ldots \qquad (127) \]
At large $k$ and thus $\omega$, the linear dependence of Eq. (120) is no longer valid. Moreover, for models with elastic bonds on the lattice, at large $k$ there are no purely longitudinal and transverse modes, and the mode frequency depends on the direction of the wave vector. Thus Eq. (124) becomes invalid as well. Still, one can make a crude approximation assuming that Eqs. (120) and (124) are valid up to some maximal frequency of the sound waves in the body. The model based on this assumption is called the Debye model. The maximal frequency (the Debye frequency $\omega_D$) is defined by Eq. (125), which takes the form
\[ 3N = \int_0^{\omega_D} d\omega\, \rho(\omega). \qquad (128) \]
With the use of Eq. (124) one obtains
\[ 3N = \int_0^{\omega_D} d\omega\, \frac{3V}{2\pi^2 v^3}\, \omega^2 = \frac{V \omega_D^3}{2\pi^2 v^3}, \qquad (129) \]
wherefrom

\[ \omega_D = \left(6\pi^2\right)^{1/3} \frac{v}{a}, \qquad (130) \]

where $V/N = v_0 = a^3$ is the unit-cell volume. It is convenient to rewrite Eq. (124) in terms of $\omega_D$ as

\[ \rho(\omega) = 9N \frac{\omega^2}{\omega_D^3}. \qquad (131) \]
The consideration up to this point is completely classical. Now we will consider each $\mathbf{k}$-mode as an independent harmonic oscillator. The quantum energy levels of the latter are given by Eq. (72). In our case, for a $\mathbf{k}$-oscillator it becomes
\[ \varepsilon_{k,\nu_k} = \hbar\omega_k \left(\nu_k + \frac{1}{2}\right), \qquad \nu_k = 0, 1, 2, \ldots \qquad (132) \]

One quantum of the energy $\hbar\omega_k$ in the $\mathbf{k}$-mode is called a phonon. The quantum number $\nu_k$ is the number of phonons in the $\mathbf{k}$-mode. Note the relation

\[ \varepsilon = \hbar\omega \qquad (133) \]
between the energy of a quantum of any wave (phonons, photons) and its frequency. All the modes contribute additively to the internal energy, thus

\[ U = 3 \sum_{\mathbf{k}} \langle \varepsilon_{k,\nu_k} \rangle = 3 \sum_{\mathbf{k}} \hbar\omega_k \left(\langle \nu_k \rangle + \frac{1}{2}\right) = 3 \sum_{\mathbf{k}} \hbar\omega_k \left(n_k + \frac{1}{2}\right), \qquad (134) \]
where, similarly to Eq. (88),
\[ n_k = \frac{1}{\exp(\beta\hbar\omega_k) - 1}. \qquad (135) \]
Replacing summation by integration and rearranging Eq. (134), within the Debye model one obtains
\[ U = U_0 + \int_0^{\omega_D} d\omega\, \rho(\omega)\, \frac{\hbar\omega}{\exp(\beta\hbar\omega) - 1}, \qquad (136) \]
where $U_0$ is the zero-temperature value of $U$ due to the zero-point motion. The integral term in Eq. (136) can be calculated analytically at low and high temperatures.

FIG. 4: Temperature dependence of the phonon heat capacity. The low-temperature result $C \propto T^3$ (dashed line) holds only for temperatures much smaller than the Debye temperature $\theta_D$.

For $k_B T \ll \hbar\omega_D$, the integral converges at $\omega \ll \omega_D$, so that one can extend the upper limit of integration to $\infty$. One obtains
\[ U = U_0 + \int_0^\infty d\omega\, \rho(\omega)\, \frac{\hbar\omega}{\exp(\beta\hbar\omega) - 1} = U_0 + \frac{9N\hbar}{\omega_D^3} \int_0^\infty d\omega\, \frac{\omega^3}{\exp(\beta\hbar\omega) - 1} \]
\[ = U_0 + \frac{9N}{(\hbar\omega_D)^3 \beta^4} \int_0^\infty dx\, \frac{x^3}{e^x - 1} = U_0 + \frac{9N}{(\hbar\omega_D)^3 \beta^4} \frac{\pi^4}{15} = U_0 + \frac{3\pi^4}{5} N k_B T \left(\frac{k_B T}{\hbar\omega_D}\right)^3. \qquad (137) \]
This result can be written in the form

\[ U = U_0 + \frac{3\pi^4}{5} N k_B T \left(\frac{T}{\theta_D}\right)^3, \qquad (138) \]
where

\[ \theta_D = \hbar\omega_D / k_B \qquad (139) \]
is the Debye temperature. The heat capacity is given by

\[ C_V = \left(\frac{\partial U}{\partial T}\right)_V = \frac{12\pi^4}{5} N k_B \left(\frac{T}{\theta_D}\right)^3, \qquad T \ll \theta_D. \qquad (140) \]
Although the Debye temperature enters Eqs. (138) and (140), these results are accurate and insensitive to the Debye approximation. For $k_B T \gg \hbar\omega_D$, the exponential in Eq. (136) can be expanded for $\beta\hbar\omega \ll 1$. It is, however, more convenient to step back and do the same in Eq. (134). With the help of Eq. (125) one obtains

\[ U = U_0 + 3 \sum_{\mathbf{k}} \frac{\hbar\omega_k}{\beta\hbar\omega_k} = U_0 + 3 k_B T \sum_{\mathbf{k}} 1 = U_0 + 3N k_B T. \qquad (141) \]

Note that the phonon frequency $\omega_k$, which is due to the interactions in the system, has disappeared from the formula. For the heat capacity one then obtains

\[ C_V = 3N k_B, \qquad (142) \]
that is, $k_B$ per each of the approximately $3N$ vibrational modes. One can see that Eqs. (141) and (142) are accurate because they do not use the Debye model. The assumption of the Debye model is really essential at intermediate temperatures, where Eq. (136) is approximate. The temperature dependence of the heat capacity following from Eq. (136) is shown in Fig. 4, together with its low-temperature asymptote, Eq. (140), and the high-temperature asymptote, Eq. (142). Because of the large numerical coefficient in Eq. (140), the law $C_V \propto T^3$ in fact holds only for temperatures much smaller than the Debye temperature. The theory of quantized vibrations in solids developed by Debye improves on the more primitive Einstein model assuming independent oscillators.
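The Debye heat capacity following from Eq. (136) is easy to evaluate numerically at any temperature; a minimal sketch in the reduced variable $t = T/\theta_D$ (the quadrature scheme and function name are choices of this sketch):

```python
import math

def c_debye(t, n=4000):
    """Phonon heat capacity per atom in units of k_B within the Debye model:
    C/(N k_B) = 9 t^3 * int_0^{1/t} x^4 e^x / (e^x - 1)^2 dx,  t = T/theta_D,
    obtained by differentiating Eq. (136) with respect to T."""
    a, b = 1e-9, 1.0 / t
    h = (b - a) / n
    s = 0.0
    for i in range(n + 1):                     # trapezoidal rule
        x = a + i * h
        f = x**4 * math.exp(x) / math.expm1(x)**2
        s += f if 0 < i < n else f / 2
    return 9 * t**3 * s * h

print(c_debye(10.0))    # high T: approaches 3, the Dulong-Petit value, Eq. (142)
print(c_debye(0.02))    # low T: approaches (12 pi^4/5) t^3, Eq. (140)
```

Between these limits the same integral reproduces the smooth crossover shown in Fig. 4.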


Magnetism of solids is in most cases due to the intrinsic angular momentum of the electron that is called spin. In quantum mechanics, spin is an operator represented by a matrix. The value of the electronic angular momentum is $s = 1/2$, so that it is described by a $2\times 2$ matrix, and the quantum-mechanical eigenvalue of the square of the electron's angular momentum (in the units of $\hbar$) is
\[ \hat{\mathbf{s}}^2 = s(s+1) = 3/4, \qquad (143) \]
c.f. Eq. (101). Eigenvalues of $\hat{s}_z$, the projection of a free spin on any direction $z$, are given by

\[ \hat{s}_z = m, \qquad m = \pm 1/2, \qquad (144) \]
c.f. Eq. (102). The spin of the electron can be interpreted as rotation of the charge inside this particle. This results in the magnetic moment of the electron

\[ \hat{\boldsymbol{\mu}} = g\mu_B \hat{\mathbf{s}}, \qquad (145) \]

where $g = 2$ is the so-called $g$-factor and $\mu_B$ is the Bohr magneton, $\mu_B = e\hbar/(2m_e) = 0.927 \times 10^{-23}$ J/T. Note that the model of circularly moving charges inside the electron leads to the same result with $g = 1$; the true value $g = 2$ follows from the relativistic quantum mechanics. The energy (the Hamiltonian) of an electron spin in a magnetic field of induction $\mathbf{B}$ has the form

\[ \hat{H} = -\hat{\boldsymbol{\mu}} \cdot \mathbf{B} = -g\mu_B \hat{\mathbf{s}} \cdot \mathbf{B}, \qquad (146) \]
the so-called Zeeman Hamiltonian. One can see that the minimum of the energy corresponds to the spin pointing in the direction of $\mathbf{B}$. Choosing the $z$ axis in the direction of $\mathbf{B}$, one obtains
\[ \hat{H} = -g\mu_B \hat{s}_z B. \qquad (147) \]
Thus the energy eigenvalues of the electronic spin in a magnetic field, with the help of Eq. (144), become

\[ \varepsilon_m = -g\mu_B m B, \qquad (148) \]
where, for the electron, $m = \pm 1/2$. In an atom, the spins of all electrons usually combine into a collective spin $S$ that can take integer or half-integer values and is described by a $(2S+1)\times(2S+1)$ matrix. Instead of Eq. (146), one has

\[ \hat{H} = -g\mu_B \hat{\mathbf{S}} \cdot \mathbf{B}. \qquad (149) \]

Eigenvalues of the projection of $\hat{\mathbf{S}}$ on the direction of $\mathbf{B}$ are given by

\[ m = -S, -S+1, \ldots, S-1, S, \qquad (150) \]
all together $2S+1$ different values, c.f. Eq. (102). The $2S+1$ different energy levels of the spin are given by Eq. (148). The thermodynamics of the spin is determined by the partition function

\[ Z_S = \sum_{m=-S}^{S} e^{-\beta\varepsilon_m} = \sum_{m=-S}^{S} e^{my}, \qquad (151) \]

FIG. 5: Brillouin function BS (x) for different S.

where εm is given by Eq. (148) and

\[ y \equiv \beta g\mu_B B = \frac{g\mu_B B}{k_B T}. \qquad (152) \]
Eq. (151), by the change of the summation index to $k = m + S$, can be reduced to a finite geometrical progression:

\[ Z_S = e^{-Sy} \sum_{k=0}^{2S} (e^y)^k = e^{-Sy}\, \frac{e^{(2S+1)y} - 1}{e^y - 1} = \frac{e^{(S+1/2)y} - e^{-(S+1/2)y}}{e^{y/2} - e^{-y/2}} = \frac{\sinh[(S+1/2)y]}{\sinh(y/2)}. \qquad (153) \]
For $S = 1/2$, with the help of $\sinh(2x) = 2\sinh(x)\cosh(x)$, one can transform Eq. (153) to

\[ Z_{1/2} = 2\cosh(y/2). \qquad (154) \]
The partition function being found, one can easily obtain the thermodynamic quantities. For instance, the free energy $F$ is given by Eq. (38),
\[ F = -N k_B T \ln Z_S = -N k_B T \ln \frac{\sinh[(S+1/2)y]}{\sinh(y/2)}. \qquad (155) \]
The average spin is defined by

\[ \langle S_z \rangle = \langle m \rangle = \frac{1}{Z} \sum_{m=-S}^{S} m\, e^{-\beta\varepsilon_m}. \qquad (156) \]
With the help of Eq. (151) one obtains
\[ \langle S_z \rangle = \frac{1}{Z} \frac{\partial Z}{\partial y} = \frac{\partial \ln Z}{\partial y}. \qquad (157) \]
With the use of Eq. (153) and the derivatives

\[ [\sinh(x)]' = \cosh(x), \qquad [\cosh(x)]' = \sinh(x), \qquad (158) \]
one obtains

\[ \langle S_z \rangle = b_S(y), \qquad (159) \]
where
\[ b_S(y) = \left(S + \frac{1}{2}\right) \coth\left[\left(S + \frac{1}{2}\right) y\right] - \frac{1}{2} \coth\frac{y}{2}. \qquad (160) \]

Usually the function $b_S(y)$ is represented in the form $b_S(y) = S B_S(Sy)$, where $B_S(x)$ is the Brillouin function
\[ B_S(x) = \left(1 + \frac{1}{2S}\right) \coth\left[\left(1 + \frac{1}{2S}\right) x\right] - \frac{1}{2S} \coth\frac{x}{2S}, \qquad (161) \]

(see Fig. 5). One can see that $b_S(\pm\infty) = \pm S$ and $B_S(\pm\infty) = \pm 1$. For $S = 1/2$, from Eq. (154) one obtains
\[ \langle S_z \rangle = b_{1/2}(y) = \frac{1}{2} \tanh\frac{y}{2} \qquad (162) \]
and $B_{1/2}(x) = \tanh x$. In the opposite limit $S \to \infty$, the Brillouin function becomes the classical Langevin function:
\[ B_\infty(x) \equiv L(x) = \coth x - \frac{1}{x}. \qquad (163) \]
Although atoms never have a really large spin, the classical approximation is very useful, as it can strongly simplify calculations, and it provides qualitatively reasonable results in most cases. At zero magnetic field, $y = 0$, the spin polarization should vanish. This is immediately seen in Eq. (162) but not in Eq. (159). To clarify the behavior of $b_S(y)$ at small $y$, one can use $\coth x \cong 1/x + x/3$, which yields
\[ b_S(y) \cong \left(S + \frac{1}{2}\right)\left[\frac{1}{(S+\frac{1}{2})y} + \frac{(S+\frac{1}{2})y}{3}\right] - \frac{1}{2}\left[\frac{2}{y} + \frac{y}{6}\right] = \frac{y}{3}\left[\left(S + \frac{1}{2}\right)^2 - \left(\frac{1}{2}\right)^2\right] = \frac{S(S+1)}{3}\, y. \qquad (164) \]

In physical units, this means

\[ \langle S_z \rangle \cong \frac{S(S+1)}{3} \frac{g\mu_B B}{k_B T}, \qquad g\mu_B B \ll k_B T. \qquad (165) \]
The average magnetic moment can be obtained from Eq. (145):
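The chain Eq. (151) → Eq. (153) → Eq. (157) → Eq. (160) → Eq. (164) is easy to check numerically; a minimal sketch (the sample values of $S$ and $y$ are arbitrary choices):

```python
import math

def Z_S(S, y):
    """Closed form of the spin partition function, Eq. (153)."""
    return math.sinh((S + 0.5) * y) / math.sinh(0.5 * y)

def b_S(S, y):
    """<S_z> = b_S(y), Eq. (160); coth(x) = 1/tanh(x)."""
    return (S + 0.5) / math.tanh((S + 0.5) * y) - 0.5 / math.tanh(0.5 * y)

S, y = 2.5, 0.3

# direct sum over the 2S+1 levels m = -S ... S, Eq. (151)
Z_direct = sum(math.exp(m * y) for m in [-S + k for k in range(int(2 * S) + 1)])
print(Z_direct, Z_S(S, y))          # the two expressions agree

# <S_z> = d ln Z / dy, Eq. (157), reproduces the Brillouin result, Eq. (159)
h = 1e-6
dlnZ = (math.log(Z_S(S, y + h)) - math.log(Z_S(S, y - h))) / (2 * h)
print(dlnZ, b_S(S, y))

# small-y behavior, Eq. (164): b_S(y) ~= S(S+1)/3 * y
print(b_S(S, 1e-4) / 1e-4, S * (S + 1) / 3)
```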

\[ \langle \mu_z \rangle = g\mu_B \langle S_z \rangle. \qquad (166) \]
The magnetization of the sample $M$ is defined as the magnetic moment per unit volume. If there is one magnetic atom per unit cell and all of them are uniformly magnetized, the magnetization reads

\[ M = \frac{\langle \mu_z \rangle}{v_0} = \frac{g\mu_B}{v_0} \langle S_z \rangle, \qquad (167) \]
where $v_0$ is the unit-cell volume. The internal energy $U$ of a system of $N$ spins can be obtained from the general formula, Eq. (31). The calculation can, however, be avoided, since from Eq. (148) it simply follows that

\[ U = N \langle \varepsilon_m \rangle = -N g\mu_B B \langle m \rangle = -N g\mu_B B \langle S_z \rangle, \qquad (168) \]
where $\langle S_z \rangle$ is given by Eq. (159). In particular, at low magnetic fields (or high temperatures) one has

\[ U \cong -N \frac{S(S+1)}{3} \frac{(g\mu_B B)^2}{k_B T}. \qquad (169) \]
The magnetic susceptibility per spin is defined by

\[ \chi = \frac{\partial \langle \mu_z \rangle}{\partial B}. \qquad (170) \]

FIG. 6: Temperature dependence of the heat capacity of spins in a magnetic field, for different values of S.

From Eqs. (166) and (159) one obtains

\[ \chi = \frac{(g\mu_B)^2}{k_B T}\, b_S'(y), \qquad (171) \]
where

\[ b_S'(y) \equiv \frac{db_S(y)}{dy} = -\left(\frac{S + 1/2}{\sinh[(S+1/2)y]}\right)^2 + \left(\frac{1/2}{\sinh(y/2)}\right)^2. \qquad (172) \]

For $S = 1/2$, from Eq. (162) follows
\[ b_{1/2}'(y) = \frac{1}{4\cosh^2(y/2)}. \qquad (173) \]

From Eq. (164) one obtains $b_S'(0) = S(S+1)/3$. Thus in the high-temperature limit (or for $B = 0$)

\[ \chi = \frac{S(S+1)}{3} \frac{(g\mu_B)^2}{k_B T}, \qquad k_B T \gg g\mu_B B. \qquad (174) \]
The susceptibility becomes small at high temperatures, as the directions of the spins are strongly disordered by thermal agitation and it is difficult to order them by applying a magnetic field. In the opposite limit $y \gg 1$, the function $b_S'(y)$ and thus the susceptibility also become small. The physical reason for this is that at low temperatures, $k_B T \ll g\mu_B B$, the spins are already strongly aligned by the magnetic field, $\langle S_z \rangle \cong S$, so that $\langle S_z \rangle$ becomes hardly sensitive to small changes of $B$. As a function of temperature, $\chi$ has a maximum at intermediate temperatures. The heat capacity $C$ can be obtained from Eq. (168) as

\[ C = \frac{\partial U}{\partial T} = -N g\mu_B B \frac{\partial \langle S_z \rangle}{\partial T} = -N g\mu_B B\, b_S'(y)\, \frac{\partial y}{\partial T} = N k_B\, y^2 b_S'(y). \qquad (175) \]
As both the magnetic susceptibility and the heat capacity are expressed via the derivative of the Brillouin function, they are related. The relation has the form

\[ \frac{C}{N\chi} = \frac{B^2}{T}. \qquad (176) \]

One can see that $C$ also has a maximum at intermediate temperatures, $y \sim 1$. At high temperatures $C$ decreases as $1/T^2$:

\[ C \cong N k_B \frac{S(S+1)}{3} \left(\frac{g\mu_B B}{k_B T}\right)^2, \qquad (177) \]
and at low temperatures $C$ becomes exponentially small, except for the case $S = \infty$ (see Fig. 6). The finite value of the heat capacity in the limit $T \to 0$ contradicts the third law of thermodynamics. Thus the classical description fails in this limit.
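The relation between $C$ and $\chi$, Eq. (176), can be verified directly from Eqs. (171), (172), and (175); a sketch in units $g\mu_B = k_B = 1$ (these units, and the sample values of $S$, $B$, $T$, are assumptions of this illustration):

```python
import math

def b_S_prime(S, y):
    """b'_S(y), Eq. (172)."""
    return (0.5 / math.sinh(0.5 * y))**2 - ((S + 0.5) / math.sinh((S + 0.5) * y))**2

# units g*mu_B = k_B = 1, so y = B/T, Eq. (152)
S, B, T = 1.5, 0.7, 2.0
y = B / T
chi = b_S_prime(S, y) / T            # susceptibility per spin, Eq. (171)
C_per_spin = y**2 * b_S_prime(S, y)  # heat capacity per spin, Eq. (175)
print(C_per_spin / chi, B**2 / T)    # Eq. (176): the two coincide

# y -> 0 limit: b'_S(0) = S(S+1)/3, which gives the Curie law, Eq. (174)
print(b_S_prime(S, 1e-5), S * (S + 1) / 3)
```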


As explained in thermodynamics, with changing thermodynamic parameters such as temperature, pressure, etc., the system can undergo phase transitions. In first-order phase transitions, the chemical potentials µ of the two competing phases become equal at the transition line, while on each side of this line they are unequal and only one of the phases is thermodynamically stable. The first-order transition is a switching between two different phases (as, for instance, between water and ice) and is thus abrupt. In contrast, second-order transitions are gradual: the so-called order parameter continuously grows from zero as the phase-transition line is crossed. Thermodynamic quantities are singular at second-order transitions. Phase transitions are complicated phenomena that arise due to the interaction between particles in many-particle systems in the thermodynamic limit $N \to \infty$. It can be shown that thermodynamic quantities of finite systems are non-singular and, in particular, there are no second-order phase transitions in them. Up to now in this course we have studied systems without interaction. Including the latter requires an extension of the formalism of statistical mechanics that is done in Sec. XVI. In such an extended formalism, one has to calculate partition functions over the energy levels of the whole system that, in general, depend on the interaction. In some cases these energy levels are known exactly, but they are parametrized by many quantum numbers over which summation has to be carried out. In most cases, however, the energy levels are not known analytically, and their calculation is a huge quantum-mechanical problem. In both cases the calculation of the partition function is a very serious task. There are some models for which thermodynamic quantities have been calculated exactly, including models that possess second-order phase transitions. For other models, approximate numerical methods have been developed that allow calculating phase-transition temperatures and critical indices.
It was shown that there are no phase transitions in one-dimensional systems with short-range interactions. It is most convenient to illustrate the physics of phase transitions by considering magnetic systems, or the systems of spins introduced in Sec. XIII. The simplest forms of interaction between different spins are the Ising interaction $-J_{ij}\hat{S}_{iz}\hat{S}_{jz}$, including only the $z$ components of the spins, and the Heisenberg interaction $-J_{ij}\hat{\mathbf{S}}_i \cdot \hat{\mathbf{S}}_j$. The exchange coupling $J_{ij}$ follows from the quantum mechanics of atoms and is of electrostatic origin. In practice, in most cases there is only the interaction $J$ between spins of neighboring atoms in a crystal lattice, because $J_{ij}$ decreases exponentially with distance. For the Ising model, all energy levels of the whole system are known exactly, while for the Heisenberg model they are not. In the case $J > 0$, neighboring spins have the lowest interaction energy when they are collinear, and the state with all spins pointing in the same direction is the exact ground state of the system. For the Ising model, all spins in the ground state are parallel or antiparallel to the $z$ axis, that is, the ground state is double-degenerate. For the Heisenberg model, the spins can point in any direction in the ground state, so that the ground state has a continuous degeneracy. In both cases, Ising and Heisenberg, the ground state for $J > 0$ is ferromagnetic. The form of the interaction between spins considered above suggests that at $T \to 0$ the system should fall into its ground state, so that the thermodynamic averages of all spins approach their maximal values, $\langle \hat{\mathbf{S}}_i \rangle \to S$. With increasing temperature, excited levels of the system become populated and the average spin value decreases, $|\langle \hat{\mathbf{S}}_i \rangle| < S$. At high temperatures all energy levels become populated, so that neighboring spins can have all possible orientations with respect to each other. In this case, if there is no external magnetic field acting on the spins (see Sec.
XIII), the average spin value should be exactly zero, because there are as many spins pointing in one direction as there are pointing in the opposite direction. This is why the high-temperature state is called the symmetric state. If now the temperature is lowered, there should be a phase-transition temperature $T_C$ (the Curie temperature for ferromagnets) below which the order parameter (the average spin value) becomes nonzero. Below $T_C$ the symmetry of the state is spontaneously broken. This state is called the ordered state. Note that if there is an applied magnetic field, it will break the symmetry by creating a nonzero spin average at all temperatures, so that there is no sharp phase transition, only a gradual increase of $\langle \hat{\mathbf{S}}_i \rangle$ as the temperature is lowered. Although the scenario outlined above looks persuasive, there are subtle effects that may preclude ordering and ensure $\langle \hat{\mathbf{S}}_i \rangle = 0$ at all nonzero temperatures. As said above, ordering does not occur in one dimension (spin chains) for both the Ising and Heisenberg models. In two dimensions, the Ising model shows a phase transition, and the analytical solution of the problem exists and was found by Onsager. However, the two-dimensional Heisenberg model does not order. In three dimensions, both Ising and Heisenberg models show a phase transition, and no analytical solution is available.

A. The mean-field approximation for ferromagnets

While a rigorous solution of the problem of phase transitions is very difficult, there is a simple approximate solution that captures the physics of phase transitions as described above and provides qualitatively correct results in most cases, including three-dimensional systems. The idea of this approximate solution is to reduce the original many-spin problem to an effective self-consistent one-spin problem by considering one spin (say $i$) and replacing the other spins in the interaction by their average thermodynamic values, $-J_{ij}\hat{\mathbf{S}}_i \cdot \hat{\mathbf{S}}_j \Rightarrow -J_{ij}\hat{\mathbf{S}}_i \cdot \langle \hat{\mathbf{S}}_j \rangle$. One can see that now the interaction term has mathematically the same form as the term describing the interaction of a spin with the applied magnetic field, Eq. (149). Thus one can use the formalism of Sec. XIII to solve the problem analytically. This approach is called the mean-field approximation (MFA), or the molecular-field approximation, and it was first suggested by Weiss for ferromagnets. The effective one-spin Hamiltonian within the MFA is similar to Eq. (149) and has the form
\[ \hat{H} = -\hat{\mathbf{S}} \cdot \left(g\mu_B \mathbf{B} + Jz \langle \hat{\mathbf{S}} \rangle\right) + \frac{1}{2} Jz \langle \hat{\mathbf{S}} \rangle^2, \qquad (178) \]
where $z$ is the number of nearest neighbors for a spin in the lattice ($z = 6$ for the simple cubic lattice). Here the last term ensures that the replacement $\hat{\mathbf{S}} \Rightarrow \langle \hat{\mathbf{S}} \rangle$ in $\hat{H}$ yields $H = -\langle \hat{\mathbf{S}} \rangle \cdot g\mu_B \mathbf{B} - (1/2) Jz \langle \hat{\mathbf{S}} \rangle^2$, which becomes the correct ground-state energy at $T = 0$, where $|\langle \hat{\mathbf{S}} \rangle| = S$. For the Heisenberg model, in the presence of an arbitrarily weak applied field $\mathbf{B}$, the ordering will occur in the direction of $\mathbf{B}$. Choosing the $z$ axis in this direction, one can simplify Eq. (178) to
\[ \hat{H} = -\left(g\mu_B B + Jz \langle \hat{S}_z \rangle\right) \hat{S}_z + \frac{1}{2} Jz \langle \hat{S}_z \rangle^2. \qquad (179) \]
The same mean-field Hamiltonian is valid for the Ising model as well, if the magnetic field is applied along the $z$ axis. Thus, within the MFA (but not in general!) the Ising and Heisenberg models are equivalent. The energy levels corresponding to Eq. (179) are
\[ \varepsilon_m = -\left(g\mu_B B + Jz \langle \hat{S}_z \rangle\right) m + \frac{1}{2} Jz \langle \hat{S}_z \rangle^2, \qquad m = -S, -S+1, \ldots, S-1, S. \qquad (180) \]
Now, after calculation of the partition function $Z$, Eq. (151), one arrives at the free energy per spin

\[ F = -k_B T \ln Z = \frac{1}{2} Jz \langle \hat{S}_z \rangle^2 - k_B T \ln \frac{\sinh[(S+1/2)y]}{\sinh(y/2)}, \qquad (181) \]
where
\[ y \equiv \frac{g\mu_B B + Jz \langle \hat{S}_z \rangle}{k_B T}. \qquad (182) \]
To find the actual value of the order parameter $\langle \hat{S}_z \rangle$ at any temperature $T$ and magnetic field $B$, one has to minimize $F$ with respect to $\langle \hat{S}_z \rangle$, as explained in thermodynamics. One obtains the equation

\[ 0 = \frac{\partial F}{\partial \langle \hat{S}_z \rangle} = Jz \langle \hat{S}_z \rangle - Jz\, b_S(y), \qquad (183) \]
where $b_S(y)$ is defined by Eq. (160). Rearranging terms, one arrives at the transcendental Curie-Weiss equation
\[ \langle \hat{S}_z \rangle = b_S\!\left(\frac{g\mu_B B + Jz \langle \hat{S}_z \rangle}{k_B T}\right) \qquad (184) \]

FIG. 7: Graphic solution of the Curie-Weiss equation.

FIG. 8: Free energy of a ferromagnet vs $\langle \hat{S}_z \rangle$ within the MFA. (Arbitrary vertical shift.)

that defines $\langle \hat{S}_z \rangle$. In fact, this equation could be obtained in a shorter way by using the modified argument, Eq. (182), in Eq. (159). For $B = 0$, Eq. (184) has the only solution $\langle \hat{S}_z \rangle = 0$ at high temperatures. As $b_S(y)$ has its maximal slope at $y = 0$, it is sufficient that this slope (with respect to $\langle \hat{S}_z \rangle$) becomes smaller than 1 to exclude any solution other than $\langle \hat{S}_z \rangle = 0$. Using Eq. (164), one obtains that the only solution $\langle \hat{S}_z \rangle = 0$ is realized for $T \geq T_C$, where

\[ T_C = \frac{S(S+1)}{3} \frac{Jz}{k_B} \qquad (185) \]

is the Curie temperature within the mean-field approximation. Below $T_C$, the slope of $b_S(y)$ with respect to $\langle \hat{S}_z \rangle$ exceeds 1, so that there are three solutions for $\langle \hat{S}_z \rangle$: one solution $\langle \hat{S}_z \rangle = 0$ and two symmetric solutions $\langle \hat{S}_z \rangle \neq 0$ (see Fig. 7). The latter correspond to a lower free energy than the solution $\langle \hat{S}_z \rangle = 0$, thus they are thermodynamically stable (see Fig. 8). These solutions describe the ordered state below $T_C$. Slightly below $T_C$, the value of $\langle \hat{S}_z \rangle$ is still small and can be found by expanding $b_S(y)$ up to $y^3$. In particular, for $S = 1/2$ one has
\[ b_{1/2}(y) = \frac{1}{2} \tanh\frac{y}{2} \cong \frac{1}{4}\, y - \frac{1}{48}\, y^3. \qquad (186) \]

Using $T_C = (1/4)\, Jz/k_B$ and thus $Jz = 4 k_B T_C$ for $S = 1/2$, one can rewrite Eq. (184) with $B = 0$ in the form

\[ \langle \hat{S}_z \rangle = \frac{T_C}{T} \langle \hat{S}_z \rangle - \frac{4}{3} \left(\frac{T_C}{T}\right)^3 \langle \hat{S}_z \rangle^3. \qquad (187) \]
One of the solutions is $\langle \hat{S}_z \rangle = 0$, while the other two solutions are

\[ \frac{\langle \hat{S}_z \rangle}{S} = \pm \frac{T}{T_C} \sqrt{3\left(1 - \frac{T}{T_C}\right)}, \qquad S = \frac{1}{2}. \qquad (188) \]

As this result is only valid for $T$ near $T_C$, one can discard the factor $T/T_C$ in front of the square root and simplify this formula to
\[ \frac{\langle \hat{S}_z \rangle}{S} = \pm \sqrt{3\left(1 - \frac{T}{T_C}\right)}, \qquad S = \frac{1}{2}. \qquad (189) \]
Although obtained near $T_C$, in this form the result is only off by the factor $\sqrt{3}$ at $T = 0$. The singularity of the order parameter near $T_C$ is a square root, so that for the magnetization critical index $\beta$ defined by

\[ \langle \hat{S}_z \rangle \propto M \propto (T_C - T)^\beta \qquad (190) \]
one has $\beta = 1/2$. Results of the numerical solution of the Curie-Weiss equation with $B = 0$ for different $S$ are shown in Fig. 9. Note that the magnetization is related to the spin average by Eq. (167). Let us now consider the magnetic susceptibility per spin $\chi$, defined by Eq. (170), above $T_C$. Linearizing Eq. (184) with the use of Eq. (164), one obtains

\[ \langle \hat{S}_z \rangle = \frac{S(S+1)}{3} \frac{g\mu_B B + Jz \langle \hat{S}_z \rangle}{k_B T} = \frac{S(S+1)}{3} \frac{g\mu_B B}{k_B T} + \frac{T_C}{T} \langle \hat{S}_z \rangle. \qquad (191) \]
From this equation one obtains

\[ \langle \hat{S}_z \rangle = \frac{S(S+1)}{3} \frac{g\mu_B B}{k_B T} \frac{1}{1 - T_C/T} = \frac{S(S+1)}{3} \frac{g\mu_B B}{k_B (T - T_C)}. \qquad (192) \]
Then the magnetic susceptibility can be calculated by differentiation:
\[ \chi = \frac{\partial \langle \mu_z \rangle}{\partial B} = g\mu_B \frac{\partial \langle \hat{S}_z \rangle}{\partial B} = \frac{S(S+1)}{3} \frac{(g\mu_B)^2}{k_B (T - T_C)}. \qquad (193) \]

In contrast to the case of non-interacting spins, Eq. (174), the susceptibility diverges at $T = T_C$ rather than at $T = 0$. The inverse susceptibility $\chi^{-1}$ is a straight line crossing the $T$ axis at $T_C$. In the theory of phase transitions, the critical index for the susceptibility, $\gamma$, is defined as

\[ \chi \propto (T - T_C)^{-\gamma}. \qquad (194) \]
One can see that $\gamma = 1$ within the MFA.
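The numerical solution of the Curie-Weiss equation behind Fig. 9 can be sketched by fixed-point iteration; this is an illustration in units $g\mu_B = k_B = 1$ with $T_C = 1$ (these units, the iteration scheme, and the function names are assumptions of this sketch):

```python
import math

def b_S(S, y):
    """Eq. (160), with the small-y expansion Eq. (164) used near y = 0."""
    if abs(y) < 1e-5:
        return S * (S + 1) / 3.0 * y
    return (S + 0.5) / math.tanh((S + 0.5) * y) - 0.5 / math.tanh(0.5 * y)

def curie_weiss_sz(S, t, b=0.0, n_iter=20000):
    """Fixed-point solution of Eq. (184); units g*mu_B = k_B = 1 and T_C = 1,
    so that Jz = 3/(S(S+1)) by Eq. (185) and t = T/T_C, b = B."""
    Jz = 3.0 / (S * (S + 1))
    sz = S                                  # start from the saturated state
    for _ in range(n_iter):
        new = b_S(S, (b + Jz * sz) / t)
        if abs(new - sz) < 1e-12:
            break
        sz = new
    return sz

for t in (0.5, 0.9, 0.99, 1.1):
    print(t, curie_weiss_sz(0.5, t))
# slightly below T_C the solution follows Eq. (188):
# <S_z>/S ~= (T/T_C) * sqrt(3(1 - T/T_C)); above T_C it vanishes
```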

FIG. 9: Temperature dependences of the normalized spin averages $\langle \hat{S}_z \rangle / S$ for $S = 1/2,\ 5/2,\ \infty$, obtained from the numerical solution of the Curie-Weiss equation.

At $T = T_C$ there is no linear susceptibility, and the magnetization is a singular function of the magnetic field, which defines another critical index, $\delta$. In this case, the Curie-Weiss equation can be expanded for small $\langle \hat{S}_z \rangle$ and small $B$.

Using Eq. (186), at T = TC one obtains

3 gµBB 4 D E 0 = − Sˆz , (195) 4kBT 3 c.f. Eq. (187). Thus,  1/3 D E 3gµBB 1 Sˆz = S = . (196) 16kBT 2 The critical isotherm index δ is defined by

\[
\langle\hat S_z\rangle \propto B^{1/\delta}, \tag{197}
\]

thus within the mean-field approximation δ = 3. More precise methods than the MFA yield smaller values of β and larger values of γ and δ, depending on the details of the model such as the number of interacting spin components (1 for the Ising model and 3 for the Heisenberg model) and the dimension of the lattice. Critical indices are insensitive to such factors as the lattice structure and the value of the spin S. On the other hand, the value of T_C does not possess such universality and depends on all parameters of the problem. Accurate values of T_C are up to 25% lower than their mean-field values in three dimensions.
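As an illustration, the Curie-Weiss equation for S = 1/2 and B = 0 can be solved by fixed-point iteration; in reduced units t = T/T_C it reads x = (1/2) tanh(2x/t), where x = ⟨Ŝ_z⟩. The following is a sketch (not the code behind Fig. 9; the function name is ours):

```python
import math

def curie_weiss_s12(t, tol=1e-14, max_iter=100000):
    """Solve x = (1/2) tanh(2x/t) for the spontaneous spin average x = <S_z>.

    Here t = T/T_C is the reduced temperature (S = 1/2, B = 0).  Starting
    from full polarization x = 1/2 makes the iteration converge to the
    nontrivial root below T_C and to x = 0 above T_C.
    """
    x = 0.5
    for _ in range(max_iter):
        x_new = 0.5 * math.tanh(2.0 * x / t)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x
```

Below T_C the solution follows the square-root law of Eq. (189), while above T_C only the trivial solution x = 0 survives.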


The 1D Ising model is a chain of magnetic atoms with spins S = 1/2 interacting with their nearest neighbors in the chain via the coupling of their z-components. The Hamiltonian of the 1D Ising model reads

\[
\hat H = -J\sum_{i=1}^{N-1}\hat S_{iz}\hat S_{i+1,z} - g\mu_B B\sum_{i=1}^{N}\hat S_{iz}, \tag{198}
\]

where N is the number of atoms in the chain, the exchange coupling J > 0 for the ferromagnetic interaction, and B is the magnetic field applied along the z-axis. The quantum energy levels of the Ising model are trivial and correspond to the values of \(\hat S_{iz}\) equal to m_i = ±1/2. (In the presence of a transverse field or a coupling of other spin components the quantum problem becomes non-trivial.) Thus, the energy of the system divided by the temperature, which enters the partition function, can be written as

\[
\beta E \equiv \frac{E}{k_B T} = -\theta\sum_{i=1}^{N-1} m_i m_{i+1} - \rho\sum_{i=1}^{N} m_i, \tag{199}
\]

where we have introduced the dimensionless exchange and field parameters

\[
\theta \equiv \beta J = \frac{J}{k_B T},\qquad \rho \equiv \frac{g\mu_B B}{k_B T}. \tag{200}
\]

One can calculate the partition function analytically if one considers a ring instead of the open chain. In the thermodynamic limit there should be no difference between the two variants of the model, but for the ring the analytical calculation exists for any N. Thus to βE we add the term −θ m_N m_1 (the last spin is coupled with the first one). The partition function for the Ising ring is given by

\[
Z = \sum_{m_1,m_2,\ldots,m_N}\exp\left(\theta m_1 m_2 + \rho m_1 + \theta m_2 m_3 + \rho m_2 + \ldots + \theta m_N m_1 + \rho m_N\right). \tag{201}
\]

It is convenient to make a trick that allows representing the partition function as a product of matrices. For this, we redistribute the field terms:

\[
Z = \sum_{m_1,m_2,\ldots,m_N}\exp\left[\theta m_1 m_2 + \frac{\rho}{2}(m_1+m_2) + \theta m_2 m_3 + \frac{\rho}{2}(m_2+m_3) + \ldots + \theta m_N m_1 + \frac{\rho}{2}(m_N+m_1)\right]. \tag{202}
\]

Then,

\[
Z = \sum_{m_1,m_2,\ldots,m_N} A_{m_1 m_2}A_{m_2 m_3}\ldots A_{m_{N-1}m_N}A_{m_N m_1} = \sum_{m_1}\left(\mathbf A^N\right)_{m_1 m_1} = \mathrm{Tr}\,\mathbf A^N, \tag{203}
\]

the trace of the Nth power of the 2 × 2 matrix A with matrix elements

\[
A_{mk} = \exp\left[\theta mk + \frac{\rho}{2}(m+k)\right],\qquad m,k=\pm\frac12, \tag{204}
\]

that is,

\[
\mathbf A = \begin{pmatrix} e^{\theta/4-\rho/2} & e^{-\theta/4} \\ e^{-\theta/4} & e^{\theta/4+\rho/2} \end{pmatrix}. \tag{205}
\]

It can be easily shown that the trace of a matrix is invariant under a unitary transformation. If U is a unitary matrix, then the unitary transformation reads

\[
\mathbf B = \mathbf U^{-1}\mathbf A\mathbf U, \tag{206}
\]

where the inverse unitary matrix U⁻¹ is just the transposed and complex conjugate of U. As one can make a cyclic permutation of matrices under the Tr operator, one can write

\[
\mathrm{Tr}\,\mathbf A^N = \mathrm{Tr}\left(\mathbf A^N\mathbf U\mathbf U^{-1}\right) = \mathrm{Tr}\left(\mathbf U^{-1}\mathbf A^N\mathbf U\right) = \mathrm{Tr}\left[\left(\mathbf U^{-1}\mathbf A\mathbf U\right)\left(\mathbf U^{-1}\mathbf A\mathbf U\right)\ldots\left(\mathbf U^{-1}\mathbf A\mathbf U\right)\right] = \mathrm{Tr}\,\mathbf B^N. \tag{207}
\]

One can choose the unitary transformation that diagonalizes A; in this case

\[
\mathbf B = \begin{pmatrix}\lambda_1 & 0\\ 0 & \lambda_2\end{pmatrix},\qquad \mathbf B^N = \begin{pmatrix}\lambda_1^N & 0\\ 0 & \lambda_2^N\end{pmatrix}. \tag{208}
\]

Thus the problem reduces to calculating the eigenvalues of A above. The secular equation has the form

\[
0 = \begin{vmatrix} e^{\theta/4-\rho/2}-\lambda & e^{-\theta/4} \\ e^{-\theta/4} & e^{\theta/4+\rho/2}-\lambda \end{vmatrix} = \left(e^{\theta/4-\rho/2}-\lambda\right)\left(e^{\theta/4+\rho/2}-\lambda\right) - e^{-\theta/2} \tag{209}
\]

or

\[
\lambda^2 - e^{\theta/4}\left(e^{\rho/2}+e^{-\rho/2}\right)\lambda + e^{\theta/2} - e^{-\theta/2} = 0 \tag{210}
\]

or, finally,

\[
\lambda^2 - 2e^{\theta/4}\cosh\frac{\rho}{2}\,\lambda + 2\sinh\frac{\theta}{2} = 0. \tag{211}
\]

The solutions of this quadratic equation are

\[
\lambda_{1,2} = e^{\theta/4}\cosh\frac{\rho}{2} \pm \sqrt{e^{\theta/2}\cosh^2\frac{\rho}{2} - 2\sinh\frac{\theta}{2}} = e^{\theta/4}\cosh\frac{\rho}{2} \pm \sqrt{e^{\theta/2}\cosh^2\frac{\rho}{2} - e^{\theta/2} + e^{-\theta/2}} = e^{\theta/4}\left(\cosh\frac{\rho}{2} \pm \sqrt{\sinh^2\frac{\rho}{2} + e^{-\theta}}\right). \tag{212}
\]
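The eigenvalues (212) can be cross-checked against a brute-force evaluation of the partition function (201) for a small ring. The sketch below (function names ours) sums over all 2^N spin configurations and compares with λ₁^N + λ₂^N, which is the exact trace of the Nth power of the transfer matrix:

```python
import math
from itertools import product

def Z_brute(N, theta, rho):
    """Sum of Eq. (201) over all 2^N configurations m_i = +/- 1/2 of the ring."""
    Z = 0.0
    for m in product((-0.5, 0.5), repeat=N):
        # all N bonds of the ring, including the wrap-around term m_N m_1
        expo = sum(theta * m[i] * m[(i + 1) % N] + rho * m[i] for i in range(N))
        Z += math.exp(expo)
    return Z

def Z_eigen(N, theta, rho):
    """Z = lambda_1^N + lambda_2^N with the eigenvalues of Eq. (212)."""
    root = math.sqrt(math.sinh(rho / 2) ** 2 + math.exp(-theta))
    lam1 = math.exp(theta / 4) * (math.cosh(rho / 2) + root)
    lam2 = math.exp(theta / 4) * (math.cosh(rho / 2) - root)
    return lam1 ** N + lam2 ** N
```

The two routines agree to machine precision for any N, θ, ρ, since the transfer-matrix result is an identity, not an approximation.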

Now the partition function is

\[
Z = \lambda_1^N + \lambda_2^N \cong e^{N\theta/4}\left(\cosh\frac{\rho}{2} + \sqrt{\sinh^2\frac{\rho}{2} + e^{-\theta}}\right)^{N}, \tag{213}
\]

where we have discarded the smaller eigenvalue that makes a negligible contribution for large N. One can see that Z does not have any singularities, so that in the 1D Ising model there is no phase transition. The 1D Ising model with an arbitrary S can be solved in a similar way by diagonalizing a (2S + 1) × (2S + 1) matrix. However, this solution is cumbersome and, in general, can be done only numerically. There is no phase transition for any S, including S → ∞. All thermodynamic quantities can be obtained from Z. Let us first calculate the internal energy in zero field. In this case ρ = 0 and

\[
Z = e^{N\theta/4}\left(1+e^{-\theta/2}\right)^{N} = \left(e^{\theta/4}+e^{-\theta/4}\right)^{N} = 2^N\cosh^N\frac{\theta}{4}. \tag{214}
\]

The energy is given by

\[
U = -\frac{\partial\ln Z}{\partial\beta} = -J\frac{\partial\ln Z}{\partial\theta} = -NJ\frac{\partial\ln\cosh(\theta/4)}{\partial\theta} = -\frac{NJ}{4}\tanh\frac{\theta}{4} = -\frac{NJ}{4}\tanh\frac{J}{4k_B T}. \tag{215}
\]

In the limit T → 0 (θ → ∞) one has

\[
U = -\frac{NJ}{4} \tag{216}
\]

that can be obtained directly as the ground-state energy of the Hamiltonian, Eq. (198). In the opposite limit, k_B T ≫ J (θ ≪ 1), the energy behaves as

\[
U \cong -\frac{1}{16}NJ\theta = -\frac{N}{16}\frac{J^2}{k_B T}. \tag{217}
\]

The heat capacity is given by

\[
C = \frac{\partial U}{\partial T} = N k_B\left(\frac{J}{4k_B T}\right)^{2}\Big/\cosh^{2}\frac{J}{4k_B T}. \tag{218}
\]
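As a quick numerical cross-check of Eq. (218) (a sketch in units k_B = 1 with N = 1; names ours), one can differentiate the energy (215) by central finite differences and compare with the closed form:

```python
import math

def energy(T, J=1.0):
    """Eq. (215) per chain with N = 1 and k_B = 1."""
    return -J / 4 * math.tanh(J / (4 * T))

def heat_capacity(T, J=1.0):
    """Eq. (218): C = (J/(4T))^2 / cosh^2(J/(4T)) in the same units."""
    x = J / (4 * T)
    return (x / math.cosh(x)) ** 2

# central finite difference of the energy, accurate to O(h^2)
T, h = 0.7, 1e-5
C_num = (energy(T + h) - energy(T - h)) / (2 * h)
```

C_num agrees with heat_capacity(T) to the finite-difference accuracy O(h²).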

This is exponentially small at k_B T ≪ J and behaves as

\[
C = N k_B\left(\frac{J}{4k_B T}\right)^{2} \tag{219}
\]

for k_B T ≫ J. At k_B T ∼ J the heat capacity has a broad maximum. The magnetization of the Ising model per spin is given by the standard formula, Eq. (166). The spin polarization \(\langle\hat S_z\rangle\) is defined by

\[
\langle\hat S_z\rangle = \frac{1}{N}\left\langle\sum_{i=1}^{N} m_i\right\rangle = \frac{1}{N}\frac{1}{Z}\sum_{m_1,m_2,\ldots,m_N}\left(\sum_{i=1}^{N} m_i\right)\exp\left(\theta m_1 m_2 + \theta m_2 m_3 + \ldots + \theta m_N m_1 + \rho\sum_{i=1}^{N} m_i\right) \tag{220}
\]

that can be expressed as

\[
\langle\hat S_z\rangle = \frac{1}{N}\frac{1}{Z}\frac{\partial Z}{\partial\rho} = \frac{1}{N}\frac{\partial\ln Z}{\partial\rho}. \tag{221}
\]

From Eq. (213) one obtains

\[
\langle\hat S_z\rangle = \frac{\partial}{\partial\rho}\ln\left(\cosh\frac{\rho}{2}+\sqrt{\sinh^2\frac{\rho}{2}+e^{-\theta}}\right) = \frac{\dfrac12\sinh\dfrac{\rho}{2}+\dfrac{\sinh\frac{\rho}{2}\cosh\frac{\rho}{2}}{2\sqrt{\sinh^2\frac{\rho}{2}+e^{-\theta}}}}{\cosh\dfrac{\rho}{2}+\sqrt{\sinh^2\dfrac{\rho}{2}+e^{-\theta}}} \tag{222}
\]

that simplifies to

\[
\langle\hat S_z\rangle = \frac12\,\frac{\sinh\frac{\rho}{2}}{\sqrt{\sinh^2\frac{\rho}{2}+e^{-\theta}}}. \tag{223}
\]

One can see that the spin polarization vanishes in the limit B → 0 (ρ → 0); thus there is no spontaneous ordering in this model. In the high-field limit, ρ → ∞, one has \(\langle\hat S_z\rangle = 1/2\), the full alignment of the spins. The magnetic susceptibility per spin is given by

\[
\chi = \frac{\partial\langle\mu_z\rangle}{\partial B} = g\mu_B\frac{\partial\langle\hat S_z\rangle}{\partial B} = \frac{(g\mu_B)^2}{k_B T}\frac{\partial\langle\hat S_z\rangle}{\partial\rho}. \tag{224}
\]

Using the formula above, one obtains

\[
\chi = \frac{(g\mu_B)^2}{k_B T}\,\frac14\left[\frac{\cosh\frac{\rho}{2}}{\sqrt{\sinh^2\frac{\rho}{2}+e^{-\theta}}}-\frac{\sinh^2\frac{\rho}{2}\cosh\frac{\rho}{2}}{\left(\sinh^2\frac{\rho}{2}+e^{-\theta}\right)^{3/2}}\right] = \frac{(g\mu_B)^2}{k_B T}\,\frac{\cosh\frac{\rho}{2}}{4\left(\sinh^2\frac{\rho}{2}+e^{-\theta}\right)^{3/2}}\left[\sinh^2\frac{\rho}{2}+e^{-\theta}-\sinh^2\frac{\rho}{2}\right]. \tag{225}
\]

In zero field, ρ = 0, this becomes

\[
\chi = \frac{(g\mu_B)^2}{4k_B T}\,e^{\theta/2} = \frac{(g\mu_B)^2}{4k_B T}\exp\frac{J}{2k_B T}. \tag{226}
\]

One can see that the magnetic susceptibility exponentially diverges at T → 0, which is the most interesting result for the 1D Ising model. At high temperatures the exponential factor tends to 1, and the result is the susceptibility of noninteracting spins, a particular case of Eq. (174) for S = 1/2.
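The zero-field result (226) can be verified by numerically differentiating the polarization (223) with respect to ρ. The sketch below computes only the dimensionless part, i.e. χ k_B T/(gμ_B)² (names ours):

```python
import math

def s_z(theta, rho):
    """Spin polarization of the Ising ring, Eq. (223), in the limit N -> infinity."""
    return 0.5 * math.sinh(rho / 2) / math.sqrt(
        math.sinh(rho / 2) ** 2 + math.exp(-theta))

# central finite difference d<S_z>/d rho at rho = 0
theta, h = 2.0, 1e-6
chi_dimless = (s_z(theta, h) - s_z(theta, -h)) / (2 * h)
chi_expected = math.exp(theta / 2) / 4  # dimensionless part of Eq. (226)
```

The two values agree up to the finite-difference error, confirming the exponential growth of χ as θ = J/(k_B T) increases.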


In the previous chapters we considered statistical properties of systems of non-interacting and distinguishable particles. Assigning a quantum state to one such particle is independent from assigning the states to other particles; these processes are statistically independent. Particle 1 in state 1 and particle 2 in state 2, the microstate (1,2), and the microstate (2,1) are counted as different microstates within the same macrostate. However, quantum-mechanical particles such as electrons are indistinguishable from each other. Thus the microstates (1,2) and (2,1) should be counted as a single microstate: one particle in state 1 and one in state 2, without specifying which particle is in which state. This means that assigning quantum states to different particles, as we have done above, is not statistically independent. Accordingly, the method based on the maximization of the Boltzmann entropy of a large system, Eq. (19), becomes invalid. This changes the foundations of the statistical description of such systems. In addition to the indistinguishability of quantum particles, it follows from relativistic quantum theory that particles with half-integer spin, called fermions, cannot occupy a quantum state more than once. There is no place for more than one fermion in any quantum state, the so-called Pauli exclusion principle. An important example of a fermion is the electron. To the contrary, particles with integer spin, called bosons, can be put into a quantum state in unlimited numbers. Examples of bosons are ⁴He atoms and other atoms with an integer spin. Note that in the case of fermions, even for a large system, one cannot use the Stirling formula, Eq. (12), to simplify the combinatorial expressions, since the occupation numbers of quantum states can only be 0 and 1. This is an additional reason why the Boltzmann formalism does not work here.

Finally, the Boltzmann formalism of quantum statistics does not work for systems with interaction, because the quantum states of such systems are quantum states of the system as a whole rather than one-particle quantum states. To construct the statistical mechanics of systems of indistinguishable particles and/or systems with interaction, one has to consider a statistical ensemble of \(\mathcal N\) systems. The quantum states of a system are labeled by ξ, and the ensemble contains arbitrary numbers \(\mathcal N_\xi\) of systems in the states ξ. The total number of systems in the ensemble is

\[
\mathcal N = \sum_\xi \mathcal N_\xi. \tag{227}
\]

The number of particles in a system is not fixed; in the state ξ the system has N_ξ particles. We require that the average number of particles in a system and the average energy of a system over the ensemble are fixed:

\[
N = \frac{1}{\mathcal N}\sum_\xi N_\xi\,\mathcal N_\xi,\qquad U = \frac{1}{\mathcal N}\sum_\xi E_\xi\,\mathcal N_\xi. \tag{228}
\]

Do not confuse \(\mathcal N_\xi\) (the number of systems in the ensemble state ξ) with \(N_\xi\) (the number of particles in a system in the state ξ)! The number of ways W in which the ensemble of systems, specified by \(\mathcal N_\xi\), can be realized (i.e., the number of microstates or the thermodynamic probability) can be calculated in exactly the same way as was done above in the case of the Boltzmann statistics. As the systems we are considering are distinguishable, one obtains

\[
W = \frac{\mathcal N!}{\prod_\xi \mathcal N_\xi!}. \tag{229}
\]

Since the number of systems in the ensemble \(\mathcal N\), and thus all \(\mathcal N_\xi\), can be arbitrarily high, the actual state of the ensemble is the one maximizing W under the constraints of Eq. (228). Within the method of Lagrange multipliers, one has to maximize the target function

\[
\Phi(\mathcal N_1,\mathcal N_2,\cdots) = \ln W + \alpha\sum_\xi N_\xi\mathcal N_\xi - \beta\sum_\xi E_\xi\mathcal N_\xi \tag{230}
\]

[c.f. Eq. (21) and below]. Using the Stirling formula, Eq. (12), one obtains

\[
\Phi \cong \ln\mathcal N! - \sum_\xi \mathcal N_\xi\ln\mathcal N_\xi + \sum_\xi\mathcal N_\xi + \alpha\sum_\xi N_\xi\mathcal N_\xi - \beta\sum_\xi E_\xi\mathcal N_\xi. \tag{231}
\]

Maximizing with respect to \(\mathcal N_\xi\), considering \(\mathcal N\) as a constant, one obtains the equations

\[
\frac{\partial\Phi}{\partial\mathcal N_\xi} = -\ln\mathcal N_\xi + \alpha N_\xi - \beta E_\xi = 0 \tag{232}
\]

that yield

\[
\mathcal N_\xi = e^{\alpha N_\xi - \beta E_\xi}, \tag{233}
\]

the Gibbs distribution of the so-called grand canonical ensemble. The total number of systems \(\mathcal N\) is given by

\[
\mathcal N = \sum_\xi\mathcal N_\xi = \sum_\xi e^{\alpha N_\xi-\beta E_\xi} = \mathcal Z, \tag{234}
\]

where \(\mathcal Z\) is the partition function of the grand canonical ensemble. The coefficients α and β are defined from the conditions for the average number of particles and the average energy over the ensemble, Eq. (228). For the average number of particles, one obtains

\[
N = \frac{1}{\mathcal N}\sum_\xi N_\xi\mathcal N_\xi = \frac{1}{\mathcal Z}\sum_\xi N_\xi e^{\alpha N_\xi-\beta E_\xi} = \frac{\partial\ln\mathcal Z}{\partial\alpha}. \tag{235}
\]

The ensemble average of the internal energy becomes

\[
U = \frac{1}{\mathcal Z}\sum_\xi E_\xi e^{\alpha N_\xi-\beta E_\xi} = -\frac{\partial\ln\mathcal Z}{\partial\beta}. \tag{236}
\]

In contrast to Eq. (31), this formula does not contain N explicitly, since the average number of particles in a system is encapsulated in \(\mathcal Z\). We define the statistical entropy per system as

\[
S = \frac{1}{\mathcal N}k_B\ln W = \frac{k_B}{\mathcal N}\ln\frac{\mathcal N!}{\prod_\xi\mathcal N_\xi!}. \tag{237}
\]

Using the Stirling formula, Eq. (12), without the irrelevant prefactor, one obtains

\[
\frac{S}{k_B} = \ln\mathcal N - 1 - \frac{1}{\mathcal N}\sum_\xi\mathcal N_\xi\ln\mathcal N_\xi + \frac{1}{\mathcal N}\sum_\xi\mathcal N_\xi = \ln\mathcal N - \frac{1}{\mathcal N}\sum_\xi\mathcal N_\xi\ln\mathcal N_\xi. \tag{238}
\]

Inserting here the Gibbs distribution, Eq. (233), results in

\[
\frac{S}{k_B} = \ln\mathcal N - \frac{1}{\mathcal N}\sum_\xi\mathcal N_\xi\left(\alpha N_\xi-\beta E_\xi\right) = \ln\mathcal Z - \alpha N + \beta U. \tag{239}
\]

To clarify the physical meaning of the parameters α and β, one can calculate the differential dS with respect to these parameters and compare the result with the thermodynamic formula for systems with a variable number of particles at V = const,

\[
dS = \frac{1}{T}dU - \frac{\mu}{T}dN, \tag{240}
\]

where μ is the chemical potential. One obtains

\[
\frac{1}{k_B}\frac{\partial S}{\partial\alpha} = \frac{\partial\ln\mathcal Z}{\partial\alpha} - N - \alpha\frac{\partial N}{\partial\alpha} + \beta\frac{\partial U}{\partial\alpha} = -\alpha\frac{\partial N}{\partial\alpha} + \beta\frac{\partial U}{\partial\alpha},
\]
\[
\frac{1}{k_B}\frac{\partial S}{\partial\beta} = \frac{\partial\ln\mathcal Z}{\partial\beta} + U - \alpha\frac{\partial N}{\partial\beta} + \beta\frac{\partial U}{\partial\beta} = -\alpha\frac{\partial N}{\partial\beta} + \beta\frac{\partial U}{\partial\beta}. \tag{241}
\]

Combining these formulas and using

\[
dN = \frac{\partial N}{\partial\alpha}d\alpha + \frac{\partial N}{\partial\beta}d\beta,\qquad dU = \frac{\partial U}{\partial\alpha}d\alpha + \frac{\partial U}{\partial\beta}d\beta, \tag{242}
\]

one obtains

\[
dS = k_B\frac{1}{k_B}\frac{\partial S}{\partial\alpha}d\alpha + k_B\frac{1}{k_B}\frac{\partial S}{\partial\beta}d\beta = -k_B\alpha\,dN + k_B\beta\,dU. \tag{243}
\]

Comparing this formula with Eq. (240), one identifies

\[
\beta = \frac{1}{k_B T},\qquad \alpha = \frac{\mu}{k_B T} = \beta\mu. \tag{244}
\]

Thus Eq. (236) can be rewritten as

\[
U = -\frac{\partial\ln\mathcal Z}{\partial T}\frac{\partial T}{\partial\beta} = k_B T^2\frac{\partial\ln\mathcal Z}{\partial T}. \tag{245}
\]

Substituting the values of α and β into Eq. (239), one obtains

\[
-k_B T\ln\mathcal Z = U - TS - \mu N. \tag{246}
\]

Using the thermodynamic formula

\[
\mu N = G = U - TS + PV, \tag{247}
\]

one finally obtains

\[
-k_B T\ln\mathcal Z = -PV = \Omega, \tag{248}
\]

where Ω(T, V, μ) is the Ω-potential. Using the main thermodynamic relation for Ω, one can obtain all thermodynamic quantities from \(\ln\mathcal Z\).


The Gibbs distribution looks similar to the Boltzmann distribution, Eq. (26). However, it is a distribution of systems of particles in the grand canonical ensemble, rather than the distribution of particles over their individual energy levels. For systems of non-interacting particles, the individual energy levels i exist, and the distribution of particles over them can be found. For these systems, the state ξ is specified by the set {N_i} of the particles' population numbers satisfying

\[
N_\xi = \sum_i N_i,\qquad E_\xi = \sum_i\varepsilon_i N_i. \tag{249}
\]

States obtained by a redistribution of the particles over the one-particle states i do not count as new states; only the numbers N_i matter. This implies that within the approach of the grand canonical ensemble the particles are considered as indistinguishable. The average of N_i over the grand canonical ensemble is given by

\[
\overline N_i = \frac{1}{\mathcal Z}\sum_\xi N_i\,\mathcal N_\xi = \frac{1}{\mathcal Z}\sum_\xi N_i\, e^{\alpha N_\xi-\beta E_\xi}, \tag{250}
\]

where \(\mathcal Z\) is defined by Eq. (234). As both N_ξ and E_ξ are additive in N_i, see Eqs. (249), the summand of Eq. (234) is multiplicative:

\[
\mathcal Z = \sum_{N_1}e^{(\alpha-\beta\varepsilon_1)N_1}\times\sum_{N_2}e^{(\alpha-\beta\varepsilon_2)N_2}\times\ldots = \prod_j\mathcal Z_j. \tag{251}
\]

As a similar factorization occurs in Eq. (250), all factors except the one containing N_i cancel each other, leading to

\[
\overline N_i = \frac{1}{\mathcal Z_i}\sum_{N_i}N_i\, e^{(\alpha-\beta\varepsilon_i)N_i} = \frac{\partial\ln\mathcal Z_i}{\partial\alpha}. \tag{252}
\]

The partition functions for the quantum states i have different forms for bosons and fermions. For fermions, N_i takes only the values 0 and 1, thus

\[
\mathcal Z_i = 1 + e^{\alpha-\beta\varepsilon_i}. \tag{253}
\]

For bosons, N_i takes any value from 0 to ∞. Thus

\[
\mathcal Z_i = \sum_{N_i=0}^{\infty}e^{(\alpha-\beta\varepsilon_i)N_i} = \frac{1}{1-e^{\alpha-\beta\varepsilon_i}}. \tag{254}
\]

Now \(\overline N_i\) can be calculated from Eq. (252). Discarding the bar over N_i, one obtains

\[
N_i = \frac{1}{e^{\beta\varepsilon_i-\alpha}\pm 1} = \frac{1}{e^{\beta(\varepsilon_i-\mu)}\pm 1}, \tag{255}
\]

where (+) corresponds to fermions, (−) corresponds to bosons, and α was expressed via μ using Eq. (244). In the first case the distribution is called the Fermi-Dirac distribution and in the second case the Bose-Einstein distribution. Eq. (255) should be compared with the Boltzmann distribution, Eq. (26). The chemical potential μ is defined by the normalization condition

\[
N = \sum_i\frac{1}{e^{\beta(\varepsilon_i-\mu)}\pm 1}. \tag{256}
\]

Replacing summation by integration, this can be rewritten as

\[
N = \int_0^\infty d\varepsilon\,\rho(\varepsilon)f(\varepsilon,\mu),\qquad f(\varepsilon,\mu) = \frac{1}{e^{\beta(\varepsilon-\mu)}\pm 1}. \tag{257}
\]

The internal energy U is given by

\[
U = \sum_i\frac{\varepsilon_i}{e^{\beta(\varepsilon_i-\mu)}\pm 1} \Rightarrow \int_0^\infty d\varepsilon\,\rho(\varepsilon)\,\varepsilon f(\varepsilon,\mu). \tag{258}
\]

At high temperatures μ becomes large negative, so that −βμ ≫ 1. In this case one can neglect the ±1 in the denominator in comparison with the large exponential, and the Bose-Einstein and Fermi-Dirac distributions simplify to the Boltzmann distribution. Replacing summation by integration, one obtains

\[
N = e^{\beta\mu}\int_0^\infty d\varepsilon\,\rho(\varepsilon)e^{-\beta\varepsilon} = e^{\beta\mu}Z. \tag{259}
\]

Using the result for the one-particle partition function for particles in a box, Eq. (62), yields

\[
e^{-\beta\mu} = \frac{Z}{N} = \frac{1}{n}\left(\frac{mk_B T}{2\pi\hbar^2}\right)^{3/2} \gg 1 \tag{260}
\]

and our calculation is self-consistent. At low temperatures the ±1 in the denominator of Eq. (257) cannot be neglected, and the system is called degenerate. The Ω-potential is given by Eq. (248) that, upon substituting Eqs. (251), (253), and (254), becomes

\[
\Omega = -k_B T\ln\mathcal Z = -k_B T\sum_i\ln\mathcal Z_i = \mp k_B T\sum_i\ln\left(1\pm e^{\beta(\mu-\varepsilon_i)}\right), \tag{261}
\]

where the upper and lower signs correspond to the Fermi and Bose statistics, respectively. Replacing summation by integration, one obtains

\[
\Omega = \mp k_B T\int d\varepsilon\,\rho(\varepsilon)\ln\left(1\pm e^{\beta(\mu-\varepsilon)}\right). \tag{262}
\]

For the density of states in the form of a power of the energy, ρ(ε) = ρ₀ε^m, one can integrate by parts:

\[
\Omega = \mp k_B T\left.\frac{\rho_0\varepsilon^{m+1}}{m+1}\ln\left(1\pm e^{\beta(\mu-\varepsilon)}\right)\right|_0^\infty - \int d\varepsilon\,\frac{\rho_0\varepsilon^{m+1}}{m+1}\,\frac{1}{e^{\beta(\varepsilon-\mu)}\pm 1}. \tag{263}
\]

Here the first term vanishes, and the second term is proportional to the energy given by Eq. (258):

\[
\Omega = -\frac{U}{m+1}. \tag{264}
\]

For non-relativistic particles, m = 1/2, see Eq. (58), so that one obtains the relation

\[
PV = \frac{2}{3}U. \tag{265}
\]

In particular, for the Maxwell-Boltzmann gas U = (3/2)Nk_B T, so that the equation of state PV = Nk_B T results. For ultra-relativistic particles there is a linear relation between the energy and the momentum, as for phonons and photons. This implies m = 2, see Eq. (124), keeping in mind ε = ℏω. In this case one obtains the relation

\[
PV = \frac{1}{3}U. \tag{266}
\]


In the macroscopic limit N → ∞ and V → ∞, so that the concentration of particles n = N/V is constant, the energy levels of the system become so finely quantized that they are quasicontinuous. In this case the summation in Eq. (256) can be replaced by integration. However, it turns out that in 3D below some temperature T_B the ideal Bose gas undergoes the so-called Bose condensation: a macroscopic number of particles N₀ ∼ N falls into the ground state. These particles are called the Bose condensate. Setting the energy of the ground state to zero (which always can be done), from Eq. (257) one obtains

\[
N_0 = \frac{1}{e^{-\beta\mu}-1}. \tag{267}
\]

Resolving this for μ, one obtains

\[
\mu = -k_B T\ln\left(1+\frac{1}{N_0}\right) \cong -\frac{k_B T}{N_0}. \tag{268}
\]

Since N₀ is very large, one has practically μ = 0 below the Bose condensation temperature. Having μ = 0, one can easily calculate the number of particles in the excited states N_ex by integration:

\[
N_{\rm ex} = \int_0^\infty d\varepsilon\,\frac{\rho(\varepsilon)}{e^{\beta\varepsilon}-1}. \tag{269}
\]

The total number of particles is then

\[
N = N_0 + N_{\rm ex}. \tag{270}
\]

In 3D, using the density of states ρ(ε) given by Eq. (58), one obtains

\[
N_{\rm ex} = \frac{V}{(2\pi)^2}\left(\frac{2m}{\hbar^2}\right)^{3/2}\int_0^\infty d\varepsilon\,\frac{\sqrt{\varepsilon}}{e^{\beta\varepsilon}-1} = \frac{V}{(2\pi)^2}\left(\frac{2m}{\hbar^2}\right)^{3/2}\frac{1}{\beta^{3/2}}\int_0^\infty dx\,\frac{\sqrt{x}}{e^{x}-1}. \tag{271}
\]

The integral here is a number, a particular case of the general integral

\[
\int_0^\infty dx\,\frac{x^{s-1}}{e^x-1} = \Gamma(s)\zeta(s), \tag{272}
\]

where Γ(s) is the gamma function satisfying

\[
\Gamma(n+1) = n\Gamma(n) = n!, \tag{273}
\]
\[
\Gamma(1/2)=\sqrt{\pi},\qquad \Gamma(3/2)=\sqrt{\pi}/2, \tag{274}
\]

and ζ(s) is the Riemann zeta function

\[
\zeta(s)\equiv\sum_{k=1}^{\infty}k^{-s}, \tag{275}
\]

having the values

\[
\zeta(1)=\infty,\qquad\zeta(3/2)=2.612,\qquad\zeta(5/2)=1.341,\qquad\frac{\zeta(5/2)}{\zeta(3/2)}=0.5134. \tag{276}
\]

Thus Eq. (271) yields

\[
N_{\rm ex} = \frac{V}{(2\pi)^2}\left(\frac{2mk_B T}{\hbar^2}\right)^{3/2}\Gamma(3/2)\zeta(3/2), \tag{277}
\]

increasing with temperature. At T = T_B one has N_ex = N, that is,

\[
N = \frac{V}{(2\pi)^2}\left(\frac{2mk_B T_B}{\hbar^2}\right)^{3/2}\Gamma(3/2)\zeta(3/2), \tag{278}
\]

and the condensate disappears, N₀ = 0. From the equation above follows

\[
k_B T_B = \frac{\hbar^2}{2m}\left(\frac{(2\pi)^2\, n}{\Gamma(3/2)\zeta(3/2)}\right)^{2/3},\qquad n = \frac{N}{V}, \tag{279}
\]

that is, T_B ∝ n^{2/3}. In typical situations T_B < 0.1 K, a very low temperature that can be reached only by special methods such as laser cooling. Now, obviously, Eq. (277) can be rewritten as

\[
\frac{N_{\rm ex}}{N} = \left(\frac{T}{T_B}\right)^{3/2},\qquad T\le T_B. \tag{280}
\]

FIG. 10: Condensate fraction N0(T )/N and the fraction of excited particles Nex(T )/N for the ideal Bose-Einstein gas.

FIG. 11: Chemical potential μ(T) of the ideal 3D Bose-Einstein gas (μ = 0 for T < T_B; for T > T_B it was computed numerically by solving Eq. (282)). Dashed line: high-temperature asymptote corresponding to the Boltzmann statistics, Eq. (260).

The temperature dependence of the Bose-condensate is given by

\[
\frac{N_0}{N} = 1-\frac{N_{\rm ex}}{N} = 1-\left(\frac{T}{T_B}\right)^{3/2},\qquad T\le T_B. \tag{281}
\]
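The number Γ(3/2)ζ(3/2) in Eq. (277) and the condensate fraction (281) are easy to check numerically. In the sketch below (names ours), the substitution x = u² removes the integrable singularity of √x/(e^x − 1) at x = 0 before applying a midpoint rule:

```python
import math

def bose_integral_32():
    """Midpoint evaluation of the integral of sqrt(x)/(e^x - 1) = Gamma(3/2) zeta(3/2).

    With x = u^2 the integrand becomes 2u^2/(exp(u^2) - 1), regular at u = 0.
    """
    n, u_max = 100000, 8.0  # the tail beyond x = 64 is negligible
    h = u_max / n
    s = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        s += 2.0 * u * u / math.expm1(u * u)
    return s * h

def condensate_fraction(T, T_B):
    """Eq. (281): N_0/N = 1 - (T/T_B)^(3/2) for T <= T_B, and zero above T_B."""
    return max(0.0, 1.0 - (T / T_B) ** 1.5)
```

bose_integral_32() reproduces Γ(3/2)ζ(3/2) = (√π/2) × 2.612 ≈ 2.315, the constant entering T_B in Eq. (279).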

One can see that at T = 0 all bosons are in the Bose condensate, while at T = TB the Bose condensate disappears. Indeed, in the whole temperature range 0 ≤ T ≤ TB one has N0 ∼ N and our calculations using µ = 0 in this temperature range are self-consistent. The results above are shown in Fig. 10.

FIG. 12: Heat capacity CV (T ) of the ideal 3D Bose-Einstein gas.

Above T_B there is no Bose condensate; thus μ ≠ 0 and is defined by the equation

\[
N = \int_0^\infty d\varepsilon\,\frac{\rho(\varepsilon)}{e^{\beta(\varepsilon-\mu)}-1}. \tag{282}
\]

This is a nonlinear equation that has no general analytical solution. The result of its numerical solution is shown in Fig. 11. At high temperatures the Bose-Einstein (and also the Fermi-Dirac) distribution simplifies to the Boltzmann distribution, and μ can easily be found. Equation (260), with the help of Eq. (279), can be rewritten in the form

\[
e^{-\beta\mu} = \frac{1}{\zeta(3/2)}\left(\frac{T}{T_B}\right)^{3/2}\gg 1, \tag{283}
\]

so that the high-temperature limit implies T ≫ T_B. Let us now consider the energy of the ideal Bose gas. Since the energy of the condensate is zero, in the thermodynamic limit one has

\[
U = \int_0^\infty d\varepsilon\,\frac{\varepsilon\rho(\varepsilon)}{e^{\beta(\varepsilon-\mu)}-1} \tag{284}
\]

in the whole temperature range. At high temperatures, T ≫ T_B, the Boltzmann distribution applies, so that U is given by Eq. (67) and the heat capacity is a constant, C_V = (3/2)Nk_B. For T < T_B one has μ = 0, thus

\[
U = \int_0^\infty d\varepsilon\,\frac{\varepsilon\rho(\varepsilon)}{e^{\beta\varepsilon}-1} \tag{285}
\]

that in three dimensions becomes

\[
U = \frac{V}{(2\pi)^2}\left(\frac{2m}{\hbar^2}\right)^{3/2}\int_0^\infty d\varepsilon\,\frac{\varepsilon^{3/2}}{e^{\beta\varepsilon}-1} = \frac{V}{(2\pi)^2}\left(\frac{2m}{\hbar^2}\right)^{3/2}\Gamma(5/2)\zeta(5/2)\,(k_B T)^{5/2}. \tag{286}
\]

With the help of Eq. (278) this can be expressed in the form

\[
U = Nk_B T\left(\frac{T}{T_B}\right)^{3/2}\frac{\Gamma(5/2)\zeta(5/2)}{\Gamma(3/2)\zeta(3/2)} = \frac{3}{2}Nk_B T\left(\frac{T}{T_B}\right)^{3/2}\frac{\zeta(5/2)}{\zeta(3/2)},\qquad T\le T_B. \tag{287}
\]

The corresponding heat capacity is

\[
C_V = \left(\frac{\partial U}{\partial T}\right)_V = \frac{15}{4}Nk_B\left(\frac{T}{T_B}\right)^{3/2}\frac{\zeta(5/2)}{\zeta(3/2)}. \tag{288}
\]

To find the heat capacity at T > T_B, one has first to find μ(T) numerically from Eq. (282), then substitute it into Eq. (284) to find the energy, and then differentiate the result with respect to T. The result for the heat capacity of the 3D Bose gas in the whole temperature range is shown in Fig. 12. To calculate the pressure of the ideal Bose gas, one can start with the entropy that below T_B is given by

\[
S = \int_0^T dT'\,\frac{C_V(T')}{T'} = \frac{2}{3}C_V = \frac{5}{2}Nk_B\left(\frac{T}{T_B}\right)^{3/2}\frac{\zeta(5/2)}{\zeta(3/2)}. \tag{289}
\]

Differentiating the free energy

\[
F = U - TS = -Nk_B T\left(\frac{T}{T_B}\right)^{3/2}\frac{\zeta(5/2)}{\zeta(3/2)}, \tag{290}
\]

one obtains the pressure

\[
P = -\left(\frac{\partial F}{\partial V}\right)_T = -\left(\frac{\partial F}{\partial T_B}\right)_T\frac{\partial T_B}{\partial V} = -\left(-\frac{3}{2}\frac{F}{T_B}\right)\left(-\frac{2}{3}\frac{T_B}{V}\right) = -\frac{F}{V}, \tag{291}
\]

i.e., the equation of state

\[
PV = Nk_B T\left(\frac{T}{T_B}\right)^{3/2}\frac{\zeta(5/2)}{\zeta(3/2)},\qquad T\le T_B, \tag{292}
\]

to be compared with PV = Nk_B T at high temperatures. In fact, this result can be obtained immediately from the pressure-energy relation, Eq. (265). One can see that the pressure of the ideal Bose gas with the condensate contains an additional factor (T/T_B)^{3/2} < 1, the fraction of particles in the excited states. This is because the particles in the condensate are not thermally agitated and thus do not contribute to the pressure. Using Eq. (280), one can rewrite the equation of state of the 3D Bose gas at T ≤ T_B as

\[
PV = N_{\rm ex}(T)\,k_B T\,\frac{\zeta(5/2)}{\zeta(3/2)}. \tag{293}
\]

Other thermodynamic functions above can be rewritten in this way, too. Combining this with Eq. (277), one can see that the volume cancels out in the equation of state, and the pressure is a function of the temperature alone:

\[
P = P_B(T) \equiv \frac{\Gamma(3/2)\zeta(5/2)}{(2\pi)^2}\left(\frac{2mk_B T}{\hbar^2}\right)^{3/2}k_B T. \tag{294}
\]

This is the same behavior as that of a saturated vapor. One can see that one cannot define the heat capacity at constant pressure C_P, as one cannot change T keeping P = const. Also, one cannot define a meaningful compressibility or thermal-expansion coefficient for the Bose gas below T_B. If the external pressure exceeds the pressure of the gas above, the gas will be compressed to zero volume. In the opposite case, the gas will expand until the condensate disappears, and then the volume stabilizes at some finite value.


Properties of the Fermi gas [plus sign in Eq. (257)] are different from those of the Bose gas, because the exclusion principle prevents multiple occupancy of quantum states. As a result, no condensation into the ground state occurs at low temperatures. For a macroscopic system, the chemical potential can be found from the equation

\[
N = \int_0^\infty d\varepsilon\,\rho(\varepsilon)f(\varepsilon) = \int_0^\infty d\varepsilon\,\frac{\rho(\varepsilon)}{e^{\beta(\varepsilon-\mu)}+1} \tag{295}
\]

at all temperatures. This is a nonlinear equation for μ that in general can be solved only numerically. The result of the numerical solution for μ(T) is shown in Fig. 13. The Fermi-Dirac distribution function for different temperatures is shown in Fig. 14.

FIG. 13: Chemical potential µ(T ) of the ideal Fermi-Dirac gas. Dashed line: High-temperature asymptote corresponding to the Boltzmann statistics, Eq. (260). Dashed-dotted line: Low-temperature asymptote, Eq. (310).

FIG. 14: Fermi-Dirac distribution function at different temperatures.

In the limit T → 0 fermions fill a certain number of low-lying energy levels to minimize the total energy while obeying the exclusion principle. As we will see, the chemical potential of fermions is positive at low temperatures, μ > 0. For T → 0 (i.e., β → ∞) one has e^{β(ε−μ)} → 0 if ε < μ and e^{β(ε−μ)} → ∞ if ε > μ. Thus the distribution function becomes a step:

\[
f(\varepsilon) = \begin{cases} 1, & \varepsilon<\mu \\ 0, & \varepsilon>\mu. \end{cases} \tag{296}
\]

The zero-temperature value of μ, which we call the Fermi energy ε_F, is defined by the equation

\[
N = \int_0^{\varepsilon_F}d\varepsilon\,\rho(\varepsilon). \tag{297}
\]

FIG. 15: Heat capacity C(T ) of the ideal Fermi-Dirac gas.

The most important fermions are electrons, which have spin 1/2 and correspondingly a degeneracy of 2 because of the two states of the spin. In three dimensions, using Eq. (58) with an additional factor 2 for the degeneracy, one obtains

\[
N = \frac{2V}{(2\pi)^2}\left(\frac{2m}{\hbar^2}\right)^{3/2}\int_0^{\varepsilon_F}d\varepsilon\,\sqrt{\varepsilon} = \frac{2V}{(2\pi)^2}\left(\frac{2m}{\hbar^2}\right)^{3/2}\frac{2}{3}\,\varepsilon_F^{3/2}. \tag{298}
\]

From here one obtains

\[
\varepsilon_F = \frac{\hbar^2}{2m}\left(3\pi^2 n\right)^{2/3}. \tag{299}
\]

It is convenient also to introduce the Fermi temperature as

\[
k_B T_F = \varepsilon_F. \tag{300}
\]

One can see that T_F has the same structure as T_B defined by Eq. (279). In typical metals T_F ∼ 10⁵ K, so that at room temperature T ≪ T_F and the electron gas is degenerate. It is convenient to express the density of states in three dimensions, Eq. (58), in terms of ε_F:

\[
\rho(\varepsilon) = \frac{3}{2}\,\frac{N\sqrt{\varepsilon}}{\varepsilon_F^{3/2}}. \tag{301}
\]

Let us now calculate the internal energy U at T = 0:

\[
U = \int_0^{\varepsilon_F}d\varepsilon\,\rho(\varepsilon)\,\varepsilon = \frac{3}{2}\frac{N}{\varepsilon_F^{3/2}}\int_0^{\varepsilon_F}d\varepsilon\,\varepsilon^{3/2} = \frac{3}{2}\frac{N}{\varepsilon_F^{3/2}}\,\frac{2}{5}\,\varepsilon_F^{5/2} = \frac{3}{5}N\varepsilon_F. \tag{302}
\]

One cannot calculate the heat capacity C_V from this formula, as it requires taking into account small temperature-dependent corrections in U. This will be done later. One can calculate the pressure at low temperatures, since in this region the entropy S should be small and F = U − TS ≅ U. One obtains

\[
P = -\left(\frac{\partial F}{\partial V}\right)_{T=0} \cong -\frac{\partial U}{\partial V} = -\frac{3}{5}N\frac{\partial\varepsilon_F}{\partial V} = -\frac{3}{5}N\left(-\frac{2}{3}\frac{\varepsilon_F}{V}\right) = \frac{2}{5}\,n\varepsilon_F = \frac{2}{5}\,\frac{\hbar^2}{2m}\left(3\pi^2\right)^{2/3}n^{5/3}. \tag{303}
\]

To calculate the heat capacity at low temperatures, one has to find the temperature-dependent corrections to Eq. (302). We will need an integral of the general type

\[
M_\eta = \int_0^\infty d\varepsilon\,\varepsilon^\eta f(\varepsilon) = \int_0^\infty d\varepsilon\,\frac{\varepsilon^\eta}{e^{(\varepsilon-\mu)/(k_B T)}+1} \tag{304}
\]

that enters Eq. (295) for N and the similar equation for U. With the use of Eq. (301) one can write

\[
N = \frac{3}{2}\frac{N}{\varepsilon_F^{3/2}}M_{1/2}, \tag{305}
\]
\[
U = \frac{3}{2}\frac{N}{\varepsilon_F^{3/2}}M_{3/2}. \tag{306}
\]

It will be shown at the end of this section that for k_B T ≪ μ the expansion of M_η up to quadratic terms has the form

\[
M_\eta = \frac{\mu^{\eta+1}}{\eta+1}\left[1+\frac{\pi^2\eta(\eta+1)}{6}\left(\frac{k_B T}{\mu}\right)^2\right]. \tag{307}
\]
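The expansion (307) can be tested against a direct numerical evaluation of M_η, Eq. (304). The sketch below uses k_B = 1; the cutoff and step counts are our choices:

```python
import math

def M_eta_numeric(eta, mu, T, n=200000):
    """Midpoint evaluation of Eq. (304), cut off far beyond the Fermi step."""
    eps_max = mu + 60.0 * T  # the integrand is ~e^{-60} there, negligible
    h = eps_max / n
    s = 0.0
    for i in range(n):
        eps = (i + 0.5) * h
        s += eps ** eta / (math.exp((eps - mu) / T) + 1.0)
    return s * h

def M_eta_sommerfeld(eta, mu, T):
    """Low-temperature (Sommerfeld) expansion (307), valid for T << mu."""
    return mu ** (eta + 1) / (eta + 1) * (
        1.0 + math.pi ** 2 * eta * (eta + 1) / 6.0 * (T / mu) ** 2)
```

For η = 1/2, μ = 1 and T = 0.05 μ the two values agree to a relative accuracy of order (T/μ)⁴, as expected for an expansion truncated at the quadratic term.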

Now Eq. (305) takes the form

\[
\varepsilon_F^{3/2} = \mu^{3/2}\left[1+\frac{\pi^2}{8}\left(\frac{k_B T}{\mu}\right)^2\right] \tag{308}
\]

that defines μ(T) up to terms of order T²:

\[
\mu = \varepsilon_F\left[1+\frac{\pi^2}{8}\left(\frac{k_B T}{\mu}\right)^2\right]^{-2/3} \cong \varepsilon_F\left[1-\frac{\pi^2}{12}\left(\frac{k_B T}{\mu}\right)^2\right] \cong \varepsilon_F\left[1-\frac{\pi^2}{12}\left(\frac{k_B T}{\varepsilon_F}\right)^2\right] \tag{309}
\]

or, with the help of Eq. (300),

\[
\mu = \varepsilon_F\left[1-\frac{\pi^2}{12}\left(\frac{T}{T_F}\right)^2\right]. \tag{310}
\]

It is not surprising that the chemical potential decreases with temperature, because at high temperatures it takes large negative values, see Eq. (260). Equation (306) becomes

\[
U = \frac{3}{2}\frac{N}{\varepsilon_F^{3/2}}\,\frac{\mu^{5/2}}{5/2}\left[1+\frac{5\pi^2}{8}\left(\frac{k_B T}{\mu}\right)^2\right] \cong \frac{3}{5}N\frac{\mu^{5/2}}{\varepsilon_F^{3/2}}\left[1+\frac{5\pi^2}{8}\left(\frac{k_B T}{\varepsilon_F}\right)^2\right]. \tag{311}
\]

Using Eq. (310), one obtains

\[
U = \frac{3}{5}N\varepsilon_F\left[1-\frac{\pi^2}{12}\left(\frac{T}{T_F}\right)^2\right]^{5/2}\left[1+\frac{5\pi^2}{8}\left(\frac{T}{T_F}\right)^2\right] \cong \frac{3}{5}N\varepsilon_F\left[1-\frac{5\pi^2}{24}\left(\frac{T}{T_F}\right)^2\right]\left[1+\frac{5\pi^2}{8}\left(\frac{T}{T_F}\right)^2\right] \tag{312}
\]

that yields

\[
U = \frac{3}{5}N\varepsilon_F\left[1+\frac{5\pi^2}{12}\left(\frac{T}{T_F}\right)^2\right]. \tag{313}
\]

At T = 0 this formula reduces to Eq. (302). Now one obtains

\[
C_V = \left(\frac{\partial U}{\partial T}\right)_V = \frac{\pi^2}{2}Nk_B\frac{T}{T_F} \tag{314}
\]

that is small at T ≪ T_F. The numerically found heat capacity in the whole temperature range is shown in Fig. 15. In the high-temperature limit it tends to the classical result C_V = (3/2)Nk_B. Let us now derive Eq. (307). Integrating Eq. (304) by parts, one obtains

\[
M_\eta = \left.\frac{\varepsilon^{\eta+1}}{\eta+1}f(\varepsilon)\right|_0^\infty - \int_0^\infty d\varepsilon\,\frac{\varepsilon^{\eta+1}}{\eta+1}\,\frac{\partial f(\varepsilon)}{\partial\varepsilon}. \tag{315}
\]

The first term of this formula is zero. At low temperatures f(ε) is close to a step function, changing fast from 1 to 0 in the vicinity of ε = μ. Thus

\[
\frac{\partial f(\varepsilon)}{\partial\varepsilon} = \frac{\partial}{\partial\varepsilon}\,\frac{1}{e^{\beta(\varepsilon-\mu)}+1} = -\frac{\beta e^{\beta(\varepsilon-\mu)}}{\left(e^{\beta(\varepsilon-\mu)}+1\right)^2} = -\frac{\beta}{4\cosh^2\left[\beta(\varepsilon-\mu)/2\right]} \tag{316}
\]

has a sharp negative peak at ε = μ. To the contrary, ε^{η+1} is a slow function of ε that can be expanded in a Taylor series in the vicinity of ε = μ. Up to the second order one has

\[
\varepsilon^{\eta+1} = \mu^{\eta+1} + \left.\frac{\partial\varepsilon^{\eta+1}}{\partial\varepsilon}\right|_{\varepsilon=\mu}(\varepsilon-\mu) + \frac12\left.\frac{\partial^2\varepsilon^{\eta+1}}{\partial\varepsilon^2}\right|_{\varepsilon=\mu}(\varepsilon-\mu)^2 = \mu^{\eta+1} + (\eta+1)\mu^\eta(\varepsilon-\mu) + \frac12\,\eta(\eta+1)\mu^{\eta-1}(\varepsilon-\mu)^2. \tag{317}
\]

Introducing x ≡ β(ε − μ)/2 and formally extending the integration in Eq. (315) from −∞ to ∞, one obtains

\[
M_\eta = \frac{\mu^{\eta+1}}{\eta+1}\int_{-\infty}^{\infty}dx\left[\frac12+\frac{\eta+1}{\beta\mu}\,x+\frac{\eta(\eta+1)}{\beta^2\mu^2}\,x^2\right]\frac{1}{\cosh^2 x}. \tag{318}
\]

The contribution of the linear x term vanishes by symmetry. Using the integrals

\[
\int_{-\infty}^{\infty}\frac{dx}{\cosh^2 x} = 2,\qquad \int_{-\infty}^{\infty}\frac{x^2\,dx}{\cosh^2 x} = \frac{\pi^2}{6}, \tag{319}
\]

one arrives at Eq. (307).
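The two integrals (319) are easy to confirm numerically; a minimal sketch with a midpoint rule:

```python
import math

# Midpoint integration of sech^2(x) and x^2 sech^2(x) over [-L, L];
# both integrands decay like e^{-2|x|}, so L = 20 is effectively infinity.
n, L = 200000, 20.0
h = 2.0 * L / n
I0 = I2 = 0.0
for i in range(n):
    x = -L + (i + 0.5) * h
    w = h / math.cosh(x) ** 2  # sech^2(x) weight times the step
    I0 += w
    I2 += x * x * w
# I0 approximates 2, I2 approximates pi^2/6
```

Both values match the analytical results to high accuracy, confirming the coefficient π²/6 in the Sommerfeld expansion (307).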


The heat propagating away from hot objects is electromagnetic radiation. It was experimentally established that in a cavity within a body there is electromagnetic radiation at equilibrium with the body, whose intensity and average frequency increase with the temperature of the body. If a small hole is bored in the body, connecting the cavity with the outside space, a stream of electromagnetic radiation goes out and can be analyzed. The analysis of its intensity and frequency distribution led to the introduction of the concept of quantization in physics by Max Planck in 1900. The properties of the equilibrium electromagnetic radiation are similar to those of elastic vibrations in solids. Quanta of the latter are called phonons, while those of the former are called photons. So, instead of electromagnetic radiation one can speak of the photon gas. It is clear that the properties of the photon gas do not depend on the shape of the cavity (which also was proven experimentally). Thus, one can consider a cavity in the form of a box with the sizes L_x, L_y, and L_z, applying boundary conditions at the borders of the cavity. The most natural boundary conditions, plausible in the case of metals, require that the electric field in the electromagnetic wave be zero at the borders; otherwise a strong current will be induced. This boundary condition leads to normal modes in the form of standing plane waves with the wave vectors k = {k_x, k_y, k_z} given by

\[
k_\alpha = \frac{\pi}{L_\alpha}\nu_\alpha,\qquad \nu_\alpha = 1,2,3,\ldots,\qquad \alpha = x,y,z. \tag{320}
\]

This formula is similar to those for a quantum particle in a box, Eq. (50), and for a clamped elastic body, Eq. (121). An important difference with the elastic problem is that the values of the wave vectors are unlimited. The frequency density of the modes can be obtained in the same way as was done for the elastic waves. There are only two transverse polarizations of the electromagnetic waves instead of the three polarizations of the elastic waves. Taking this into account and using ω = ck, where c is the speed of light, one obtains

\[
\rho(\omega) = \frac{V}{\pi^2 c^3}\,\omega^2, \tag{321}
\]

where V = L_x L_y L_z is the volume of the cavity. In fact, the properties of the electromagnetic radiation (also called black-body radiation) were experimentally investigated before those of the elastic system. First, the problem was considered classically, and it was supposed that there is the energy k_B T in each mode. Multiplying the formula above by k_B T and dividing it by V, one obtains the spectral energy density

\[
\rho_E(\omega) = \frac{k_B T}{\pi^2 c^3}\,\omega^2, \tag{322}
\]

the Rayleigh-Jeans formula. The integral of Eq. (322) over the frequencies, with the upper bound equal to infinity, diverges, which is termed the ultraviolet catastrophe. Experiments, however, showed that this formula is valid only at low frequencies. The experimentally found spectral energy density has a maximum at some frequency, dependent on the temperature, and decreases exponentially at high frequencies. Trying to understand this, Max Planck came up with the idea of quantization of the energy of a harmonic oscillator, in this case of the energy of an electromagnetic wave. Within classical statistical physics, the energy of the oscillator obeys the Boltzmann statistics with continuous values of the oscillator's energy, and the partition function of the oscillator is given by

\[
Z = \int_0^\infty d\varepsilon\,e^{-\beta\varepsilon} = \frac{1}{\beta}. \tag{323}
\]

(There is no density of states ρ(ε) here, as a single oscillator is considered.) Then the oscillator's mean thermal energy becomes

\[
E = -\frac{\partial\ln Z}{\partial\beta} = \frac{1}{\beta} = k_B T \tag{324}
\]

that leads to the Rayleigh-Jeans formula, Eq. (322). Planck tried quantized values of the oscillator's energy, ε = ℏων with ν = 0, 1, 2, ..., and ω being the oscillator's frequency. In this case the partition function is a geometric progression:

Z = \sum_{\nu=0}^{\infty} e^{-\beta\hbar\omega\nu} = \frac{1}{1 - e^{-\beta\hbar\omega}} \qquad (325)

and the oscillator's thermal energy becomes

E = -\frac{\partial \ln Z}{\partial \beta} = \frac{\hbar\omega}{e^{\beta\hbar\omega} - 1}. \qquad (326)
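As a quick numerical sketch of Eqs. (325) and (326), in reduced units with \hbar\omega = 1 (so that β is dimensionless; the truncation at 200 terms is an arbitrary safe choice for this β), one can compare the truncated series with the closed forms:

```python
import math

# Check Eqs. (325)-(326) in reduced units hbar*omega = 1 (beta is then dimensionless):
# Z = sum_nu exp(-beta*nu) = 1/(1 - exp(-beta)),
# E = <nu> * hbar*omega = 1/(exp(beta) - 1).
beta = 0.7
nmax = 200                                     # truncation; exp(-0.7*200) is negligible
Z_sum = sum(math.exp(-beta * nu) for nu in range(nmax))
Z_closed = 1.0 / (1.0 - math.exp(-beta))
E_sum = sum(nu * math.exp(-beta * nu) for nu in range(nmax)) / Z_sum
E_closed = 1.0 / math.expm1(beta)              # expm1(x) = exp(x) - 1 without cancellation
print(Z_sum, Z_closed)                         # agree to machine precision
print(E_sum, E_closed)

# Classical limit: for beta*hbar*omega << 1, E -> k_B T = 1/beta.
beta_small = 0.01
print(1.0 / math.expm1(beta_small), 1.0 / beta_small)   # close to each other
```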

At high temperatures, \beta\hbar\omega \ll 1, the classical result E = k_B T is recovered: in this case e^{-\beta\hbar\omega} is close to one, and the summation can be replaced by integration. In general, one obtains Planck's formula

\rho_E(\omega) = \frac{\hbar}{\pi^2 c^3}\,\frac{\omega^3}{e^{\beta\hbar\omega} - 1} \qquad (327)

that perfectly describes the experiment. In particular, the maximum of the spectral energy density satisfies the transcendental equation

3\left(1 - e^{-\beta\hbar\omega}\right) = \beta\hbar\omega, \qquad (328)

thus \omega_{\max} \propto k_B T/\hbar. In this way Planck's constant was introduced into physics and could be measured. Planck empirically found the quantum energy levels of the harmonic oscillator before quantum mechanics was created. From Planck's finding it follows that the energy of the electromagnetic field is split into discrete portions, quanta, with the energy \hbar\omega. These quanta of electromagnetic energy are called photons. Photons and phonons are examples of quasiparticles: excitations having some properties of particles. The derivation above repeats that for the quantum harmonic oscillator, starting from Eq. (72). The only difference is that the zero-point energy is absent in Planck's derivation. However, as this energy cannot be released, it cannot be measured and has to be discarded in interpreting the energy of the black-body radiation. Integrating the spectral energy density over the frequencies, one obtains the energy of the photon gas:

U = u(T)\,V, \qquad u(T) = \int_0^\infty d\omega\, \rho_E(\omega) = \frac{\pi^2}{15 c^3 \hbar^3}\,(k_B T)^4 \qquad (329)

that is similar to Eq. (138) for the phonon gas in the elastic body.
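Two small numerical checks (a sketch in the reduced variable x = \beta\hbar\omega, not from the original text): the nontrivial root of the transcendental equation (328), which fixes the position of the maximum, and the dimensionless integral \int_0^\infty x^3/(e^x - 1)\,dx = \pi^4/15 that underlies Eq. (329):

```python
import math

# 1) Root of 3*(1 - exp(-x)) = x, Eq. (328) with x = beta*hbar*omega,
#    giving omega_max = x_max * k_B T / hbar (the Wien displacement law).
def f(x):
    return 3.0 * (1.0 - math.exp(-x)) - x

a, b = 1.0, 5.0                  # f(1) > 0, f(5) < 0, so the root is bracketed
for _ in range(100):             # plain bisection
    m = 0.5 * (a + b)
    if f(a) * f(m) <= 0:
        b = m
    else:
        a = m
x_max = 0.5 * (a + b)
print(x_max)                     # ~ 2.82, so omega_max ~ 2.82 k_B T / hbar

# 2) The integral behind Eq. (329): integral_0^inf x^3/(e^x - 1) dx = pi^4/15.
def integrand(x):
    return x**3 / math.expm1(x) if x > 0 else 0.0   # behaves as x^2 near x = 0

dx, x_upper = 1e-3, 50.0         # the exponential tail beyond x = 50 is negligible
n = int(x_upper / dx)
integral = sum(integrand(i * dx) for i in range(1, n)) * dx \
    + 0.5 * (integrand(0.0) + integrand(x_upper)) * dx   # trapezoidal rule
print(integral, math.pi**4 / 15)
```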

Let us calculate the pressure of the photon gas. To this end, the heat capacity is given by

C_V = \left(\frac{\partial U}{\partial T}\right)_V = \frac{4\pi^2 V k_B}{15 c^3 \hbar^3}\,(k_B T)^3. \qquad (330)

The entropy can be obtained by integration:

S = \int_0^T dT'\,\frac{C_V(T')}{T'} = \frac{1}{3}\,C_V = \frac{4\pi^2 V k_B}{45 c^3 \hbar^3}\,(k_B T)^3. \qquad (331)

Then the free energy becomes

F = U - TS = U - \frac{4}{3}\,U = -\frac{1}{3}\,U = -\frac{1}{3}\,u(T)\,V. \qquad (332)

Now, the equation of state is given by

P = -\left(\frac{\partial F}{\partial V}\right)_T = \frac{1}{3}\,u(T), \qquad (333)

in accordance with Eq. (266) for ultra-relativistic particles. One can see that the photon gas behaves as a saturated vapor: it can be compressed without any pressure increase because of condensation. Indeed, one can calculate the number of photons in the photon gas, and it turns out to be proportional to the volume and dependent on the temperature. Thus, the number of photons is not conserved, unlike that of ordinary particles. The pressure of the photons is small, about 2 × 10^{-6} N/m^2 at T = 300 K, much smaller than the atmospheric pressure of 10^5 N/m^2; still, it can be demonstrated in a high-school experiment.

The electromagnetic radiation at equilibrium can be considered in a different way, using the statistics of indistinguishable particles based on the grand canonical ensemble. In the description of harmonic oscillators, phonons, and photons above, we considered equidistant quantum energy levels of harmonic oscillators, or of normal modes that behave like harmonic oscillators, and we applied the Boltzmann statistics to find the numbers of the oscillators in each of their quantum states. The oscillators or normal modes are obviously distinguishable, so they are considered as such. Now we take another point of view and consider normal modes as quantum states, asking how many excitations (quasiparticles such as phonons or photons) are in each mode, that is, in each quantum state in the new sense. Obviously, the number of excitations in each state is unlimited, and the excitations are indistinguishable by definition. Thus, one can apply the method of the grand canonical ensemble and use the Bose-Einstein distribution for the numbers of excitations in each state. The total number of excitations N is obviously not conserved. Thus, in the grand canonical ensemble there is no constraint on the number of particles and hence no Lagrange parameter α and no chemical potential µ.
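For a concrete order of magnitude, a sketch evaluating Eqs. (329) and (333) with CODATA values of the constants; the photon number density n = (2\zeta(3)/\pi^2)(k_B T/\hbar c)^3, obtained by integrating the occupation numbers against the mode density of Eq. (321), is not written out in the text above, and \zeta(3) \approx 1.2020569 is Apery's constant:

```python
import math

# Radiation pressure and photon number density of the equilibrium photon gas:
# P = u(T)/3 with u(T) = pi^2 (k_B T)^4 / (15 c^3 hbar^3), Eqs. (329) and (333);
# n = (2 zeta(3)/pi^2) (k_B T / hbar c)^3 follows from integrating the Bose
# occupation against the mode density (zeta(3) ~ 1.2020569, Apery's constant).
hbar = 1.054571817e-34   # J s
kB = 1.380649e-23        # J/K
c = 2.99792458e8         # m/s
zeta3 = 1.2020569

T = 300.0
u = math.pi**2 * (kB * T)**4 / (15 * c**3 * hbar**3)     # energy density, J/m^3
P = u / 3                                                # pressure, N/m^2
n = (2 * zeta3 / math.pi**2) * (kB * T / (hbar * c))**3  # photons per m^3

print(P)   # about 2e-6 N/m^2, tiny compared with atmospheric pressure
print(n)   # about 5.5e14 photons per m^3
```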
As a result, one can use the Bose distribution function with µ = 0,

n(\omega) = \frac{1}{e^{\beta\hbar\omega} - 1}, \qquad (334)

for the number of excitations in the mode (state) with the frequency ω. This is Eq. (88), obtained for the harmonic oscillator by summing up the geometric progression. Multiplying this by the density of modes (now states) ρ(ω) of Eq. (321) for photons and by the energy of the mode (now state) \hbar\omega, one arrives at Planck's formula, Eq. (327). It should be stressed that the quasiparticle picture of the excited states of the modes is only possible if the excitation spectrum of the mode is equidistant. Then the energy of the mode is proportional to the number of excitations, and one can use the energy constraint in the form of Eqs. (3) or (249). If the energy levels are not equidistant, as is the case for the anharmonic oscillator, the rotator, or even a particle in a box, one cannot use these formulas for the energy within the quasiparticle description. Still, the approach based on the Boltzmann statistics works in these cases.