Thermal Physics

David Roundy

Contents

1 Spring 2020 Physics 441
  Remote learning; Introduction and course philosophy; Brief review of Energy and Entropy

2 Week 1: Gibbs entropy approach
  Microstates vs. macrostates; Probabilities of microstates; Energy as a constraint; Lagrange multipliers (for those who are curious); Maximizing entropy; Homework for week 1 (PDF)

3 Week 2: Entropy and Temperature (K&K 2, Schroeder 6)
  Quick version; Multiplicity of a paramagnet; Entropy of our spins; Thermal contact (probably skipped in class); Homework for week 2 (PDF)

4 Week 3: Boltzmann distribution and Helmholtz (K&K 3, Schroeder 6)
  Spring 2020: We skipped this section last week, and will skip this section this week; Internal energy; Pressure; Helmholtz free energy; Using the free energy; Ideal gas with just one atom; Ideal gas with multiple atoms; Homework for week 3 (PDF)

5 Week 4: Thermal radiation and Planck distribution (K&K 4, Schroeder 7.4)
  Harmonic oscillator; Summing over microstates; Black body radiation; Low temperature heat capacity; Homework for week 4 (PDF)

6 Week 5: Chemical potential and Gibbs distribution (K&K 9, Schroeder 7.1)
  Chemical potential; Gibbs factor and sum; Homework for week 5 (PDF)

7 Week 6: Ideal gas (K&K 6, Schroeder 6.7)
  Midterm on Monday; Motivation; and orbitals; Fermi-Dirac distribution; Bose-Einstein distribution; Entropy; Classical ideal gas; Homework for week 6 (PDF)

8 Week 7: Fermi and Bose gases (K&K 7, Schroeder 7)
  Density of (orbital) states; Finding the density of states; Using the density of states; Fermi gas at finite temperature; Bose gas; Homework for week 7 (PDF)

9 Week 8: Work, heat, and cycles (K&K 8, Schroeder 4)
  Heat and work; Homework for week 8 (PDF)

10 Week 9: Phase transformations (K&K 10, Schroeder 5.3)
  Coexistence; Clausius-Clapeyron; van der Waals; van der Waals and liquid-vapor phase transition; Examples of phase transitions; Landau theory; Homework for week 9 (PDF)

11 Review
  Equations to remember; Equations not to remember

12 Solutions
  Solution for week 1; Solution for week 2; Solution for week 3; Solution for week 4; Solution for week 5; Solution for week 6; Solution for week 7; Solution for week 8; Solution for week 9

Chapter 1

Spring 2020 Physics 441

Remote learning

Welcome to thermal physics! As most of you are likely aware, OSU will be employing remote teaching for at least the first few weeks of spring term. We are still in the process of determining exactly what the course will look like, and we will post updates as we learn more. Until then, below are some general things that we will try to follow:

• The course will be delivered remotely. You will need access to a computer/tablet/phone with Internet, and you will need to download OSU's video software, Zoom. (You might want to start practicing how to use Zoom now!) You can get help at https://learn.oregonstate.edu/keep-learning.

• Homework will be turned in through Canvas via Gradescope.

• The course will be delivered synchronously. You should plan to "attend" class at the scheduled time: MWF 12-1. I try to record class sessions for those who cannot attend, but it is best to plan on showing up.

• The course will be interactive, to the extent that is possible. You will be working in groups during the course using Zoom.

• I have set up a slack channel, #ph441, for discussion related to this class. You are welcome to use this channel as you would have used Weniger 304F to discuss homework with each other, or to ask questions about the class.

Office hours MWTRF 11:00-11:30. Office hours will be held via zoom (see the new canvas site) or slack. You may message me at any time, but I can't guarantee that I'll respond except during scheduled office hours.

Syllabus The syllabus is here.

Textbook An Introduction to Thermal Physics by Schroeder or Thermal Physics by Kittel and Kroemer. A textbook is not required, but either of these textbooks can provide an additional resource for learning the course material.

Course notes If you wish, you may download my entire course notes as a lengthy PDF file. Note that I do edit my notes as the class progresses, so you may want to re-download the notes after a while.

Homework Homework will be due via Gradescope on Wednesday of each week (but not the first week of class). You should be able to start each homework the week before it is due. See the syllabus for details on homework grading. You may use the solutions that I provide (or any other resource you wish to use) but at the end of each problem, please cite what resources you used (students you worked with, whether you looked at the solutions, etc). Note that verbatim copying from any source is plagiarism, and is not permitted.

Introduction and course philosophy

This is your second course in thermal physics. Energy and Entropy took a thermodynamics-first approach, with primary emphasis on how you could measure something, and only later introducing how you could predict it. I strongly support this approach, but it is not the most common approach to thermal physics.

I will teach this course in a more traditional order and approach. Since this is now your second course in thermal physics, this should balance things out. With your Energy and Entropy background, you should be able to make physical connections with the math more easily. By organizing this course in this different way, I hope to broaden and deepen your understanding of thermal physics, while at the same time showing you a wide range of different physical systems.

Teaching remotely

With the need to teach remotely this year, I'll have to make some decisions regarding how I provide content remotely. When taught in person, much of the time is spent lecturing, with small group activities taking possibly a third of class time. A challenging question is whether it is better to lecture real-time via Zoom, or to provide pre-recorded lectures that you can watch at your leisure. Each approach has a few advantages, and I think it's worth articulating my reasoning.

Firstly, you might wonder why I lecture so much in the first place, given the amount of research demonstrating the superiority of active engagement pedagogy. There are three major reasons for using lecture rather than student activities:

1. It is fast. If you have lots of content to cover, talking continuously without interruption is fastest. (The counter argument is that quickly covering material without students learning it is a waste of time...)

2. Physics lectures can be active engagement, provided students do the math in real time as the lecture is given. This works well for some students (including most of us who are now faculty). We can hope that in the senior year you are able to follow a lecture in real time.

3. Some things you can't figure out for yourself, and you just need to be told. (Counter argument: maybe you should read this content... but it's also true that some things you won't understand the first time you encounter them, so reading and being told beats just reading the text.)

So what are the advantages of pre-recorded lectures?

1. They can be polished a bit more.

2. You can watch and re-watch them whenever you like, including repeating portions that may be confusing.

3. We can devote more class time to activities.

And the advantages of live lectures?

1. You never know if I'll make a mistake (I frequently do), so you have some motivation to follow along as I talk, so you can catch those mistakes. This may increase the odds of you having an "active" lecture experience.

2. Because I have not polished the performance of the lecture, I will model for you how I reason about my math and how I look for mistakes that I have made.

3. You can ask questions as I give the lecture.

On the whole, I think live lectures will bring the greatest benefit. You can read my notes to get something a bit more polished. I try to ensure that everything I am planning to say is in the notes, so reading them should give roughly the same benefit as watching pre-recorded video.

We will also be attempting to use breakout rooms with the zoom whiteboards to do group work. I will adjust how much time is spent on group work based on our experience together. I am open to suggestions for how to improve the class.

Brief review of Energy and Entropy

Extensive/intensive (Schroeder 5.2)

If you consider two identical systems taken together (e.g. two cups of water, or two identical cubes of metal), each thermodynamic property either doubles or remains the same.

Extensive An extensive property, such as mass, will double when you've got twice as much stuff.

Intensive An intensive property, such as density, will be the same regardless of how much stuff you've got.

We care about extensivity and intensivity for several reasons. In one sense it functions like dimensions as a way to check our work. In another sense, it is a fundamental aspect of each measurable property, and once you are accustomed to this, you will feel very uncomfortable if you don't know whether it is extensive or intensive.

How to measure things

Volume Measure dimensions and compute it. (extensive)

Pressure Force per area. Can equalize if systems can exchange volume. (intensive) (Schroeder 1.2)

Temperature Find something that depends on temperature, and calibrate it. Alternatively use an ideal gas. Equalizes when systems are in contact. (intensive)

Energy Challenging... measure work and heat (e.g. by measuring power into a resistor). (extensive)

W = −∫ p dV    (1.1)

Entropy (extensive) Measure heat for a quasistatic process and find

ΔS = ∫ đQ/T    (1.2)

(Schroeder 3.2)

Derivatives Measure changes of one thing as the other changes, with the right stuff held fixed.

First Law (Energy conservation, Schroeder 1.4)

dU = đQ + đW    (1.3)

Second Law (Entropy increases, Schroeder 2.3)

ΔS_system + ΔS_environment ≥ 0    (1.4)

Thermodynamic identity (Schroeder 3.4)

dU = T dS − p dV    (1.5)

Total differentials

When we have a total differential, the things in front of the dS, dV, etc. are partial derivatives.

T = (∂U/∂S)_V    −p = (∂U/∂V)_S    (1.6)

Also, you can integrate along a path using a total differential, and can do linear algebra using total differential equations, e.g. substituting one for another. Fun example:

dS = (1/T) dU + (p/T) dV    (1.7)

1/T = (∂S/∂U)_V    p/T = (∂S/∂V)_U    (1.8)

(the second derivative here is known as the cyclic chain rule in a more general sense)

Thermodynamic potentials (Schroeder 1.6, 5.1)

Helmholtz free energy

F = U − TS    (1.9)
dF = dU − T dS − S dT    (1.10)
   = −S dT − p dV    (1.11)

Enthalpy

H = U + pV    (1.12)
dH = dU + p dV + V dp    (1.13)
   = T dS + V dp    (1.14)

Gibbs free energy

G = H − TS    (1.15)
  = U − TS + pV    (1.16)
dG = dH − T dS − S dT    (1.17)
   = −S dT + V dp    (1.18)
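Since each of these differentials follows mechanically from the thermodynamic identity, they are easy to verify symbolically. The following sketch is my addition (not part of the original notes); the names dS, dT, dV, dp are ordinary sympy symbols standing in for the differentials.

```python
# Symbolic check of the potential differentials above, starting from the
# thermodynamic identity dU = T dS - p dV.
import sympy

T, S, p, V = sympy.symbols('T S p V')
dS, dT, dV, dp = sympy.symbols('dS dT dV dp')

dU = T*dS - p*dV                      # thermodynamic identity (1.5)
dH = sympy.expand(dU + p*dV + V*dp)   # from H = U + pV
dF = sympy.expand(dU - T*dS - S*dT)   # from F = U - TS
dG = sympy.expand(dH - T*dS - S*dT)   # from G = H - TS

print(dH)   # T*dS + V*dp   -> matches (1.14)
print(dF)   # -S*dT - p*dV  -> matches (1.11)
print(dG)   # -S*dT + V*dp  -> matches (1.18)
```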

Statistical entropy (Schroeder 2.6, Problem 6.43)

Boltzmann formulation (microcanonical or for large N):

S(E) = k_B ln g(E)    (1.19)

Gibbs formulation (always true):

S(E) = −k_B ∑_i^{all states} P_i ln P_i    (1.20)

Boltzmann ratio (Schroeder 6.1)

P_i/P_j = e^{−(E_i − E_j)/k_B T}    (1.21)

P_i = e^{−E_i/k_B T} / ∑_j^{all states} e^{−E_j/k_B T}    (1.22)

Thermal averages (Schroeder 6.2)

The average value of any quantity is given by the weighted average

⟨X⟩ = ∑_i^{all states} P_i X_i    (1.23)
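As a concrete illustration (my own snippet, not part of the notes), here are the Boltzmann probabilities and a thermal average for a made-up three-level system; the energies and temperature are arbitrary choices.

```python
# Boltzmann probabilities and a thermal average for a toy three-level system.
import numpy as np

kB = 1.0                        # units where Boltzmann's constant is 1
E = np.array([-1.0, 0.0, 1.0])  # made-up energy eigenvalues
T = 2.0

boltz = np.exp(-E/(kB*T))       # Boltzmann factors e^{-E_i/kT}
Z = boltz.sum()                 # the normalizing sum in eq. (1.22)
P = boltz/Z                     # probabilities, eq. (1.22)

print(P.sum())                                  # 1.0: normalized
print(P[0]/P[1], np.exp(-(E[0]-E[1])/(kB*T)))   # Boltzmann ratio, eq. (1.21)
print((P*E).sum())                              # thermal average <E>, eq. (1.23)
```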

Chapter 2

Week 1: Gibbs entropy approach

The only reference in the text is (Schroeder Problem 6.43).

There are two different approaches for deriving the results of statistical mechanics. These results differ in what fundamental postulates are taken, but agree in the resulting predictions. The textbook takes a traditional microcanonical Boltzmann approach.

This week, before using that approach, we will reach the same results using the Gibbs formulation of the entropy (sometimes referred to as the "information theoretic entropy"), as advocated by Jaynes. Note that Boltzmann also used the more general Gibbs entropy, even though it doesn't appear on his tombstone.

Microstates vs. macrostates

You can think of a microstate as a quantum mechanical energy eigenstate. As you know from quantum mechanics, once you have specified an energy eigenstate, you know all that you can about the state of a system. Note that before quantum mechanics, this was more challenging. You can define a microstate classically, and people did so, but it was harder. In particular, the number of microstates classically is generally both infinite and non-countable, since any real number for position and velocity is possible. Quantum mechanics makes this all easier, since any finite (in size) system will have an infinite but countable number of microstates.

When you have a non-quantum mechanical system (or one that you want to treat classically), a microstate represents one of the "primitive" states of the system, in which you have specified all possible variables. In practice, it is common when doing this to specify what we might call a "mesostate", but call it a microstate. e.g. you might hear someone describe a microstate of a system of marbles in urns as defined by how many marbles of each color are in each urn. Obviously there are many quantum mechanical microstates corresponding to each of those states.

Small White Boards Write down a description of one particular macrostate.

A macrostate is a state of a system in which we have specified all the properties of the system that will affect any measurements we may care about. For instance, when defining a macrostate of a given gas or liquid, we could specify the internal energy, the number of molecules (or equivalently mass), and the volume. We need to specify all three properties (if we want to ask, for instance, for the entropy), because otherwise we won't have a unique answer. For different sorts of systems there are different ways that we can specify a macrostate. In this way, macrostates have a flexibility that real microstates do not. e.g. I could argue that the macrostate of a system of marbles in urns would be defined by the number of marbles of each color in each urn. After all, each macrostate would still correspond to many different energy eigenstates.
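A toy enumeration can make the distinction concrete. The snippet below is my addition (not from the notes): it lists all microstates of four two-state "marbles" (or spins) and groups them into macrostates labeled by how many are "up".

```python
# Enumerate microstates of four two-state objects, grouped by macrostate.
from itertools import product
from collections import Counter

microstates = list(product('ud', repeat=4))        # all 2**4 = 16 microstates
macro = Counter(state.count('u') for state in microstates)

for n_up, count in sorted(macro.items()):
    print(f"macrostate with {n_up} up: {count} microstates")
# prints counts 1, 4, 6, 4, 1 -- the same counting we will do
# for a paramagnet in week 2
```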

Probabilities of microstates

The name of the game in statistical mechanics is determining the probabilities of the various microstates, which we call {P_i}, where i represents a microstate. I will note here the term ensemble, which refers to a set of microstates with their associated probabilities. We define ensembles according to what constraints we place on the microstates, e.g. in this discussion we will constrain all microstates to have the same volume and number of particles, which defines the canonical ensemble. Next week/chapter we will discuss the microcanonical ensemble (which also constrains all microstates to have identical energy), and other ensembles will follow. Today's discussion, however, will be largely independent of which ensemble we choose to work with, which generally depends on what processes we wish to consider.

Normalization

The total probability of all microstates added up must be one.

∑_i^{all µstates} P_i = 1    (2.1)

This may seem obvious, but it is very easy to forget when lost in algebra!

From probabilities to observables

If we want to find the value that will be measured for a given observable, we will use the weighted average. For instance, the internal energy of a system is given by:

U = ∑_i^{all µstates} P_i E_i    (2.2)
  = ⟨E_i⟩    (2.3)

where E_i is the energy eigenvalue of a given microstate. The ⟨E_i⟩ notation simply denotes a weighted average of E. The subscript in this notation is optional.

This may seem wrong to you. In quantum mechanics, you were taught that the outcome of a measurement was always an eigenvalue of the observable, not the expectation value (which is itself an average). The difference is in how we are imagining performing a measurement, and what the size of the system is thought to be.

In contrast, imagine measuring the mass of a liter of water, for instance using a balance. While you are measuring its mass, there are water molecules leaving the glass (evaporating), and other water molecules from the air are entering the glass and condensing. The total mass is fluctuating as this occurs, far more rapidly than the scale can tip up or down. It reaches balance when the weights on the other side balance the average weight of the glass of water.

The processes of measuring pressure and energy are similar. There are continual fluctuations going on, as energy is going back and forth between your system and the environment, and the process of measurement (which is slow) will end up measuring the average.

In contrast, when you perform spectroscopy on a system, you do indeed see lines corresponding to discrete eigenvalues, even though you are using a macroscopic amount of light on what may be a macroscopic amount of gas. This is because each photon that is absorbed by the system will be absorbed by a single molecule (or perhaps by two that are in the process of colliding). Thus you don't measure averages in a direct way.

In thermal systems such as we are considering in this course, we will consider the kind of observable where the average value of that observable is what is measured. This is why statistics are relevant!

Energy as a constraint

Energy is one of the most fundamental concepts. When we describe a macrostate, we will (almost) always need to constrain the energy. For real systems, there are always an infinite number of microstates with no upper bound on energy. Since we never have infinite energy in our labs or kitchens, we know that there is a practical bound on the energy.

We can think of this as applying a mathematical constraint on the system: we specify a U, and this disallows any set of probabilities {P_i} that have a different U.

Small Group Question Consider a system that has just three microstates, with energies −ε, 0, and ε. Construct three sets of probabilities corresponding to U = 0.

I picked an easy U. Any "symmetric" distribution of probabilities will do. You probably chose something like:

E_i:   −ε    0    ε
P_i:    0    1    0
P_i:   1/2   0   1/2
P_i:   1/3  1/3  1/3

Question Given that each of these answers has the same U, how can we find the correct set of probabilities for this U? Vote on which you think most likely!

The most "mixed up" would be ideal. But how do we define mixed-up-ness? The "mixed-up-ness" of a probability distribution can be quantified via the Gibbs formulation of entropy:

S = −k ∑_i^{all µstates} P_i ln P_i    (2.4)
  = ∑_i^{all µstates} P_i (−k ln P_i)    (2.5)
  = ⟨−k ln P_i⟩    (2.6)

So entropy is a kind of weighted average of −ln P_i (which is also the average of ln(1/P_i)).

The Gibbs entropy expression (sometimes referred to as the information theory entropy, or Shannon entropy) can be shown to be the only possible entropy function (of {P_i}) that has a reasonable set of properties:

1. It must be extensive. If you subdivide your system into uncorrelated and noninteracting subsystems (or combine two noninteracting systems), the entropy must just add up. Solve problem 2 on the homework this week to show this. (Technically, it must be additive even if the systems interact, but that is more complicated.)

2. The entropy must be a continuous function of the probabilities {P_i}. Realistically, we want it to be analytic.

3. The entropy shouldn't change if we shuffle around the labels of our states, i.e. it should be symmetric.

4. When all microstates are equally likely, the entropy should be maximized.

5. When all microstates have zero probability except one, the entropy should be minimized.

Note The constant k is called Boltzmann's constant, and is sometimes written as k_B. Kittel and Kroemer prefer to set k_B = 1 in effect, and define σ as S/k_B to make this explicit. I will include k_B, but you can and should keep in mind that it is just a unit conversion constant. Note also that changing the base of the logarithm in effect just changes this constant.

How is this mixed up?

Small Group Question Compute S for each of the above probability distributions.

Answer

E_i:   −ε    0    ε     S/k_B
P_i:    0    1    0     0
P_i:   1/2   0   1/2    ln 2
P_i:   1/3  1/3  1/3    ln 3

You can see that if more states are probable, the entropy is higher. Or alternatively you could say that if the probability is more "spread out", the entropy is higher.

Maximize entropy

The correct distribution is that which maximizes the entropy of the system, its Gibbs entropy, subject to the appropriate constraints. Yesterday we tackled a pretty easy 3-level system with U = 0. If I had chosen a different energy, it would have been much harder to find the distribution that gave the highest entropy.
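A quick check of the table above (my snippet, not the author's): compute S/k = −∑ P ln P for the three candidate distributions, using the convention that 0 ln 0 = 0.

```python
# Gibbs entropy of the three candidate distributions with U = 0.
import math

def gibbs_entropy(P):
    # S/k = -sum P ln P, skipping zero-probability states (0 ln 0 = 0)
    return -sum(p*math.log(p) for p in P if p > 0)

for P in [(0, 1, 0), (0.5, 0, 0.5), (1/3, 1/3, 1/3)]:
    print(P, gibbs_entropy(P))   # 0, ln 2 = 0.693..., ln 3 = 1.098...
```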

With only three microstates and two constraints (total probability = 1 and average energy = U), we could just maximize the entropy by eliminating two probabilities and then setting a derivative to zero. But this wouldn't work if we had more than three microstates.

Small Group Question Find the probability distribution for our 3-state system that maximizes the entropy, given that the total energy is U.

Answer We have two constraints,

∑_i P_i = 1    (2.7)
∑_i P_i E_i = U    (2.8)

and we want to maximize

S = −k ∑_i P_i ln P_i.    (2.9)

Fortunately, we've only got three states, so we write down each sum explicitly, which will make things easier.

P_− + P_0 + P_+ = 1    (2.10)
−εP_− + εP_+ = U    (2.11)
P_+ = P_− + U/ε    (2.12)
P_− + P_0 + P_− + U/ε = 1    (2.13)
P_0 = 1 − 2P_− − U/ε    (2.14)

Now that we have all our probabilities in terms of P_−, we can simplify our entropy:

−S/k = P_− ln P_− + P_0 ln P_0 + P_+ ln P_+    (2.16)
     = P_− ln P_− + (P_− + U/ε) ln(P_− + U/ε) + (1 − 2P_− − U/ε) ln(1 − 2P_− − U/ε)    (2.17)

Now we can maximize this entropy by setting its derivative to zero!

−(1/k) dS/dP_− = 0    (2.18)
 = ln P_− + 1 − 2 ln(1 − 2P_− − U/ε) − 2 + ln(P_− + U/ε) + 1    (2.19)
 = ln P_− − 2 ln(1 − 2P_− − U/ε) + ln(P_− + U/ε)    (2.20)
 = ln( P_−(P_− + U/ε) / (1 − 2P_− − U/ε)² )    (2.21)
1 = P_−(P_− + U/ε) / (1 − 2P_− − U/ε)²    (2.22)

And now it is just a polynomial equation...

P_−(P_− + U/ε) = (1 − 2P_− − U/ε)²    (2.23)
P_−² + (U/ε)P_− = 1 − 4P_− − 2U/ε + 4P_−² + 4P_− U/ε + (U/ε)²    (2.24)

At this stage I'm going to stop. Clearly you could keep going and solve for P_− using the quadratic equation, but we wouldn't learn much from doing so. The point here is that we can solve for the three probabilities given the internal energy constraint. However, doing so is a major pain, and the result is not looking promising in terms of simplicity. There is a better way!

Lagrange multipliers (for those who are curious)

If you have a function of N variables, and want to apply a single constraint, one approach is to use the constraint to algebraically eliminate one of your variables. Then you can set the derivatives with respect to all remaining variables to zero to maximize. A nicer approach for maximization under constraints is the method of Lagrange multipliers.

The idea of Lagrange multipliers is to introduce an additional variable (called the Lagrange multiplier) rather than eliminating one.

This may seem counterintuitive, but it allows you to create a new function that can be maximized by setting its derivatives with respect to all N variables to zero, while still satisfying the constraint.

Suppose we have a situation where we want to maximize F under some constraints:

F = F(w, x, y, z)    (2.25)
f_1(w, x, y, z) = C_1    (2.26)
f_2(w, x, y, z) = C_2    (2.27)

We define a new variable L as follows:

L ≡ F + λ_1(C_1 − f_1(w, x, y, z)) + λ_2(C_2 − f_2(w, x, y, z))    (2.28)

Note that L = F provided the constraints are satisfied, since each constraint means that f(w, x, y, z) − C = 0. We then maximize L by setting its derivatives to zero:

(∂L/∂w)_{x,y,z} = 0    (2.29)
 = (∂F/∂w)_{x,y,z} − λ_1 ∂f_1/∂w − λ_2 ∂f_2/∂w    (2.30)
(∂L/∂x)_{w,y,z} = 0    (2.31)
 = (∂F/∂x)_{w,y,z} − λ_1 ∂f_1/∂x − λ_2 ∂f_2/∂x    (2.32)
(∂L/∂y)_{w,x,z} = 0    (2.33)
 = (∂F/∂y)_{w,x,z} − λ_1 ∂f_1/∂y − λ_2 ∂f_2/∂y    (2.34)
(∂L/∂z)_{w,x,y} = 0    (2.35)
 = (∂F/∂z)_{w,x,y} − λ_1 ∂f_1/∂z − λ_2 ∂f_2/∂z    (2.36)

This gives us four equations. But we need to keep in mind that we also have the two constraint equations:

f_1(w, x, y, z) = C_1    (2.37)
f_2(w, x, y, z) = C_2    (2.38)

We now have six equations and six unknowns, since λ_1 and λ_2 have also been added as unknowns, and thus we can solve all these equations simultaneously, which will give us the maximum under the constraint. We also get the λ values for free.

The meaning of the Lagrange multiplier

So far, this approach probably seems pretty abstract, and the Lagrange multiplier λ_i seems like a strange number that we just arbitrarily added in. Even were there no more meaning in the multipliers, this method would be a powerful tool for maximization (or minimization). However, as it turns out, the multiplier often (but not always) has deep physical meaning. Examining the Lagrangian L, we can see that

(∂L/∂C_1)_{w,x,y,z,C_2} = λ_1    (2.39)

so the multiplier is a derivative of the Lagrangian with respect to the corresponding constraint value. This doesn't seem too useful.

More importantly (and less obviously), we can now think about the original function we maximized, F, as a function (after maximization) of just C_1 and C_2. If we do this, then we find that

(∂F/∂C_1)_{C_2} = λ_1    (2.40)

I think this is incredibly cool! And it is a hint that Lagrange multipliers may be related to Legendre transforms.

Maximizing entropy

When maximizing the entropy, we need to apply two constraints. We must hold the total probability to 1, and we must fix the mean energy to be U:

∑_i P_i = 1    (2.41)
∑_i P_i E_i = U    (2.42)
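Before doing this analytically, it can be reassuring to see the constrained maximization done numerically. The following sketch is my addition (not part of the notes); the mean energy U = −0.2 is an arbitrary choice, and the solver is a generic constrained optimizer rather than anything specific to statistical mechanics.

```python
# Numerically maximize the entropy of the three-state system with
# energies (-1, 0, 1) subject to normalization and fixed mean energy U.
import numpy as np
from scipy.optimize import minimize

E = np.array([-1.0, 0.0, 1.0])
U = -0.2                                  # chosen mean-energy constraint

def neg_entropy(P):
    P = np.clip(P, 1e-12, 1.0)            # avoid log(0)
    return np.sum(P*np.log(P))            # this is -S/k

constraints = [
    {'type': 'eq', 'fun': lambda P: P.sum() - 1.0},   # eq. (2.41)
    {'type': 'eq', 'fun': lambda P: P @ E - U},       # eq. (2.42)
]
res = minimize(neg_entropy, x0=[1/3, 1/3, 1/3], method='SLSQP',
               bounds=[(0, 1)]*3, constraints=constraints)
P = res.x
print(P)
# For an exponential distribution P_i ~ e^{-beta E_i} with these energies,
# P_0**2 must equal P_- * P_+, so this is a quick test of the result below:
print(P[1]**2, P[0]*P[2])
```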

I'm going to call my Lagrange multipliers αk_B and βk_B so as to make all the Boltzmann constants go away.

L = S + αk_B(1 − ∑_i P_i) + βk_B(U − ∑_i P_i E_i)    (2.43)
  = −k_B ∑_i P_i ln P_i + αk_B(1 − ∑_i P_i) + βk_B(U − ∑_i P_i E_i)    (2.44)

where α and β are the two Lagrange multipliers. I've added here a couple of factors of k_B mostly to make the k_B in the entropy disappear. We want to maximize this, so we set its derivatives to zero:

∂L/∂P_i = 0    (2.45)
 = −k_B(ln P_i + 1) − k_B α − βk_B E_i    (2.46)
ln P_i = −1 − α − βE_i    (2.47)
P_i = e^{−1−α−βE_i}    (2.48)

So now we know the probabilities in terms of the two Lagrange multipliers, which already tells us that the probability of a given microstate is exponentially related to its energy. At this point, it is convenient to invoke the normalization constraint...

1 = ∑_i P_i    (2.49)
  = ∑_i e^{−1−α−βE_i}    (2.50)
  = e^{−1−α} ∑_i e^{−βE_i}    (2.51)
e^{1+α} = ∑_i e^{−βE_i}    (2.52)

where we define the normalization factor as

Z ≡ ∑_i^{all states} e^{−βE_i}    (2.54)

which is called the partition function. Putting this together, the probability is

P_i = e^{−βE_i}/Z    (2.55)
    = Boltzmann factor / partition function    (2.56)

At this point, we haven't yet solved for β, and to do so, we'd need to invoke the internal energy constraint:

U = ∑_i E_i P_i    (2.57)
U = ∑_i E_i e^{−βE_i} / Z    (2.58)

As it turns out, β = 1/(k_B T). This follows from my claim that the Lagrange multiplier is the partial derivative with respect to the constraint value

k_B β = (∂S/∂U)_{Normalization=1}    (2.59)

However, I did not prove this to you. I will leave demonstrating this as a homework problem.
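The internal energy constraint (2.57)-(2.58) fixes β only implicitly. The sketch below (my addition, not from the notes) inverts U(β) numerically for the same made-up three-level energies used earlier; it does not prove β = 1/kT, it just shows that a given U picks out a unique β.

```python
# Numerically invert the internal-energy constraint U(beta) = U_target.
import numpy as np
from scipy.optimize import brentq

E = np.array([-1.0, 0.0, 1.0])    # made-up energies

def U_of_beta(beta):
    w = np.exp(-beta*E)           # Boltzmann factors
    return (E*w).sum()/w.sum()    # eq. (2.58)

U_target = -0.2
beta = brentq(lambda b: U_of_beta(b) - U_target, -50, 50)
print(beta, U_of_beta(beta))      # the beta whose mean energy is U_target
```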

Homework for week 1 (PDF)

1. Energy, Entropy, and Probabilities The goal of this problem is to show that once we have maximized the entropy and found the microstate probabilities in terms of a Lagrange multiplier β, we can prove that β = 1/kT based on the statistical definitions of energy and entropy and the thermodynamic definition of temperature embodied in the thermodynamic identity.

The internal energy and entropy are each defined as a weighted average over microstates:

U = ∑_i E_i P_i    S = −k_B ∑_i P_i ln P_i    (2.60)

We saw in class that the probability of each microstate can be given in terms of a Lagrange multiplier β as

P_i = e^{−βE_i}/Z    Z = ∑_i^{all states} e^{−βE_i}    (2.61)

Put these probabilities into the above weighted averages in order to relate U and S to β. Then make use of the thermodynamic identity

dU = T dS − p dV    (2.62)

to show that β = 1/kT.

2. Gibbs entropy is extensive Consider two noninteracting systems A and B. We can either treat these systems as separate, or as a single combined system AB. We can enumerate all states of the combined system by enumerating all states of each separate system. The probability of the combined state (i_A, j_B) is given by P_{ij}^{AB} = P_i^A P_j^B. In other words, the probabilities combine in the same way as two dice rolls would, or the probabilities of any other uncorrelated events.

a) Show that the entropy of the combined system S_AB is the sum of the entropies of the two separate systems considered individually, i.e. S_AB = S_A + S_B. This means that entropy is extensive. Use the Gibbs entropy for this computation. You need make no approximation in solving this problem.

b) Show that if you have N identical noninteracting systems, their total entropy is NS_1, where S_1 is the entropy of a single system.

Note In real materials, we treat properties as being extensive even when there are interactions in the system. In this case, extensivity is a property of large systems, in which surface effects may be neglected.

3. Boltzmann probabilities Consider the three-state system with energies (−ε, 0, ε) that we discussed in class.

a) At infinite temperature, what are the probabilities of the three states being occupied? What is the internal energy U? What is the entropy S?

b) At very low temperature, what are the three probabilities?

c) What are the three probabilities at zero temperature? What is the internal energy U? What is the entropy S?

d) What happens to the probabilities if you allow the temperature to be negative?

Chapter 3

Week 2: Entropy and Temperature (K&K 2, Schroeder 6)

This week we will be following Chapter 2 of Kittel and Kroemer, which uses a microcanonical approach (or Boltzmann entropy approach) to relate entropy to temperature. This is an alternative derivation to the Gibbs approach we used last week, and it can be helpful to have seen both. In a few ways the Boltzmann approach is conceptually simpler, while there are a number of other ways in which the Gibbs approach is simpler.

Fundamental assumption

The difference between these two approaches is in what is considered the fundamental assumption. In the Gibbs entropy approach we assumed that the entropy was a "nice" function of the probabilities of microstates, which gave us the Gibbs formula. From there, we could maximize the entropy to find the probabilities under some set of constraints.

The Boltzmann approach makes what is perhaps a simpler assumption, which is that if only microstates with a given energy are permitted, then all of the microstates with that energy are equally probable. (This scenario with all microstates having the same energy is the microcanonical ensemble.) Thus the macrostate with the most corresponding microstates will be the most probable macrostate. The number of microstates corresponding to a given macrostate is called the multiplicity g(E, V). In this approach, multiplicity (which did not show up last week!) becomes a fundamentally important quantity, since the macrostate with the highest multiplicity is the most probable macrostate.

Outline of the week One or two topics per day:

1. Quick version showing the conclusions we will reach.

2. Finding the multiplicity of a paramagnet (Chapter 1).

3. (Probably skipped in class) Combining two non-interacting systems; defining temperature.

Quick version

This quick version will tell you all the essential physics results for the week, without proof. The beauty of statistical mechanics (whether following the text or using the information-theory approach of last week) is that you don't actually need to take, on either faith or experiment, the connection between the statistical theory and the empirical definitions used in thermodynamics.

Entropy

The multiplicity sounds sort of like entropy (since it is maximized), but the multiplicity is not extensive (nor intensive), because the number of microstates for two identical systems taken together is the square of the number of microstates available to one of the single systems. This naturally leads to the Boltzmann definition of the entropy, which is

S(E, V) = k_B ln g(E, V)    (3.1)

The logarithm converts the multiplicity into an extensive quantity, in a way that is directly analogous to the logarithm that appears in the Gibbs entropy.

For large systems (e.g. systems composed of ∼10²³ particles), the most probable configuration is essentially the same as any remotely probable configuration. This comes about for the same reason that if you flip 10²³ coins, you will get 5 × 10²² ± 10¹² heads. On an absolute scale, that's a lot of uncertainty in the number of heads that would show up, but on a fractional scale, you're pretty accurate if you assume that 50% of the flips will be heads.

Differentiable multiplicity The above assumes that g(E) is a differentiable function, which means that the number of microstates must be a continuous function of energy! This highlights one of the distinctions between the microcanonical approach and our previous (canonical) Gibbs approach.

In reality, we know from quantum mechanics that any system of finite size has a finite number of eigenstates within any given energy range, and thus g(E) cannot be either continuous or differentiable. Boltzmann, of course, did not know this, and assumed that there were an infinite number of microstates possible within any energy range, and would strictly speaking interpret g(E) in terms of a volume of phase space.

The resolution to this conundrum is to invoke large numbers, and to assume that we are averaging g(E) over a range of energies in which there are many, many states. For real materials with N ≈ 10²³, this assumption is pretty valid. Much of this chapter will involve learning to work with this large N assumption, and to use it to extract physically meaningful results. In the Gibbs approach this large N assumption was not needed.

As Kittel discusses towards the end of the chapter, we only really need to know g(E) up to some constant factor, since a constant factor in g becomes a constant additive change in S, which doesn't have any physical impact.

The "real" g(E) is a smoothed average over a range of energies. In practice, doing this can be confusing, and so we tend to focus on systems where the energy is always an integer multiple of some constant. Thus a focus on spins in a magnetic field, and harmonic oscillators.

Temperature

From Energy and Entropy (and last week), you will remember that dU = T dS − p dV, which tells us that T = (∂U/∂S)_V. If we assume that only states with one particular energy E have a non-zero probability of being occupied, then U = E, i.e. the thermodynamic internal energy is the same as the energy of any allowed microstate. Then we can replace U with E and conclude that

T = (∂E/∂S)_V    (3.2)
1/T = (∂S/∂E)_V    (3.3)
    = (∂ k_B ln g(E, V)/∂E)_V    (3.4)
    = k_B (1/g)(∂g/∂E)_V    (3.5)

From this perspective, it looks like our job is to learn to solve for g(E) and from that to find S(E), and once we have done those tasks we will know the temperature (and soon everything else).

Multiplicity of a paramagnet

So now the question becomes how to find the number of microstates that correspond to a given energy, g(E). Once we have this in an analytically tractable form, we can find everything else we might care for (with effort). This is essentially a counting problem, and much of what you need is introduced in Chapter 1. We will spend some class time going over one example of computing the multiplicity. Consider a paramagnetic system consisting of spin-1/2 particles that can be either up or down. Each spin has a magnetic moment in the ẑ direction of ±m, and we are interested in the total magnetic moment µ_tot, which is the sum of all the individual magnetic moments. Note that the magnetization M used in electromagnetism is just the total magnetic moment of the material divided by its volume.

M ≡ µ_tot/V    (3.6)

Note It is confusingly common to refer to the total magnetic moment as the magnetization. Given either a numerical value or an expression, it's usually easy to tell what you've got by checking the dimensions.

Small Group Question Work out how many ways a system of 4 spins can have any possible magnetization by enumerating all the microstates corresponding to each magnetization. Now find a mathematical expression that will tell you the multiplicity of a system with an even number N spins and just one ↑ spin. Then find the multiplicity for two ↑ spins, and for three ↑ spins. Now find a mathematical expression that will tell you the multiplicity of a system with an even number N spins and total magnetic moment µ_tot = 2sm, where s is an integer. We call s the spin excess, since N_↑ = N/2 + s. Alternatively, you could write your expression in terms of the number of up spins N_↑ and the number of down spins N_↓.

Answer We can enumerate all spin microstates:

µ_tot = −4m:  ↓↓↓↓                                 g = 1
µ_tot = −2m:  ↓↓↓↑ ↓↓↑↓ ↓↑↓↓ ↑↓↓↓                  g = 4
µ_tot = 0:    ↓↓↑↑ ↓↑↑↓ ↑↑↓↓ ↑↓↓↑ ↑↓↑↓ ↓↑↓↑        g = 6
µ_tot = +2m:  ↑↑↑↓ ↑↑↓↑ ↑↓↑↑ ↓↑↑↑                  g = 4
µ_tot = +4m:  ↑↑↑↑                                 g = 1

To generalize this to g(N, s), we need to come up with a systematic way to count the states that have the same spin excess s. Clearly if s = ±N/2, g = 1, since that means that all the spins are pointed the same way, and there is only one way to do that.

g(N, s = ±N/2) = 1    (3.7)

Now if we have just one spin going the other way, there are going to be N ways we could manage that:

g(N, s = ±(N/2 − 1)) = N    (3.8)

Now when we go to flip it so we have two spins up, there will be N − 1 ways to flip the second spin. But then, when we do this we will end up counting every possibility twice, which means that we will need to divide by two.

g(N, s = ±(N/2 − 2)) = N(N − 1)/2    (3.9)

When we get to adding the third ↑ spin, we'll have N − 2 spins to flip. But now we have to be even more careful, since for the same three up-spins, we have several ways to reach that microstate. In fact, we will need to divide by 6, or 3 × 2, to get the correct answer (as we can check for our four-spin example).

g(N, s = ±(N/2 − 3)) = N(N − 1)(N − 2)/3!    (3.10)

At this stage we can start to see the pattern, which comes out to

g(N, s) = N! / ((N/2 + s)! (N/2 − s)!)    (3.11)
        = N! / (N_↑! N_↓!)    (3.12)

Stirling's approximation

As you can see, we now have a bunch of factorials. Once we compute the entropy, we will have a bunch of logarithms of factorials.

N! = ∏_{i=1}^{N} i    (3.13)
ln N! = ln( ∏_{i=1}^{N} i )    (3.14)
      = ∑_{i=1}^{N} ln i    (3.15)

So you can see that the log of a factorial is a sum of logs. When the number of things being summed is large, we can approximate this sum with an integral. This may feel like a funny business, particularly for those of you who took my computational class, where we frequently used sums to approximate integrals! But the approximation can go both ways. In this case, if we approximate the sum as an integral we can find an analytic expression for the factorial:

ln N! = ∑_{i=1}^{N} ln i    (3.16)
      ≈ ∫_1^N ln x dx    (3.17)
      = [x ln x − x]_1^N    (3.18)
      = N ln N − N + 1    (3.19)

At this point, we should recognize that the 1 that we see is much smaller than the other two terms, and is actually likely to be wrong. Importantly, there is a larger error being made here, which we can see if we zoom into the upper end of our integral. We are missing ½ ln N. The reason is that our integral went precisely to N, but if we imagine a midpoint rule picture (or trapezoidal rule) we are missing half of that last point. This gives us:

ln N! ≈ (N + ½) ln N − N    (3.20)

We could find the constant term correctly (it is not 1), but that is more work, and even the ½ above is usually omitted when using Stirling's approximation, since it is much smaller than the others when N ≫ 0.

Entropy of our spins

I'm going to use a different approach than the text to find the entropy of this spin system when there are many spins and the spin excess is relatively small.

S = k ln g(N, s)    (3.21)
  = k ln( N! / ((N/2 + s)! (N/2 − s)!) )    (3.22)
  = k ln( N! / (N_↑! N_↓!) )    (3.23)
  = k ln( N! / ((h + s)! (h − s)!) )    (3.24)

At this point I'm going to define for convenience h ≡ N/2, just to avoid writing so many ½. I'm also going to focus on the s dependence of the entropy.

S/k = ln(N!) − ln(N_↑!) − ln(N_↓!)    (3.25)
    = ln N! − ln(h + s)! − ln(h − s)!    (3.26)
    = ln N! − ∑_{i=1}^{h+s} ln i − ∑_{i=1}^{h−s} ln i    (3.27)

At the last step, I wrote the log of the factorial as a sum of logs. This is still looking pretty hairy. So let's now consider the difference between the entropy with s and the entropy when s = 0 (which I will call here S_0 for compactness and convenience).

(S(s) − S_0)/k_B = −∑_{i=1}^{h+s} ln i − ∑_{i=1}^{h−s} ln i + ∑_{i=1}^{h} ln i + ∑_{i=1}^{h} ln i    (3.28)
 = −∑_{i=h+1}^{h+s} ln i + ∑_{j=h−s+1}^{h} ln j    (3.29)

where I have changed the sums to account for the difference between the sums with s and those without. At this stage, our indices are starting to feel a little inconvenient given the short range we are summing over, so let's redefine our index of summation so the sums will run up to s. In preparation for this, at the last step, I renamed one of my dummy indexes.

i = h + k    j = h + 1 − k    (3.30)

With these indexes, each sum can go from k = 1 to k = s, which will enable us to combine our sums into one.

(S − S_0)/k = −∑_{k=1}^{s} ln(h + k) + ∑_{k=1}^{s} ln(h + 1 − k)    (3.31)
 = ∑_{k=1}^{s} ( ln(h + 1 − k) − ln(h + k) )    (3.32)

At this point, if you're anything like me, you're thinking "I could turn that difference of logs into a log of a ratio!" Sadly, this doesn't turn out to help us. Instead, we are going to start trying to get the h out of the way in preparation for taking the limit as s ≪ h.

(S − S_0)/k = ∑_{k=1}^{s} ( ln h + ln(1 − (k − 1)/h) − ln h − ln(1 + k/h) )    (3.33)
 = ∑_{k=1}^{s} ( ln(1 − (k − 1)/h) − ln(1 + k/h) )    (3.34)

It is now time to make our first approximation: we assume s ≪ N, which means that s ≪ h. That enables us to simplify these logarithms drastically!

(S − S_0)/k ≈ ∑_{k=1}^{s} ( −(k − 1)/h − k/h )    (3.35)
 = −(2/h) ∑_{k=1}^{s} (k − ½)    (3.36)
 = −(4/N) ∑_{k=1}^{s} (k − ½)    (3.37)

Sum to integral

Now we have this sum to solve. You can find this sum either geometrically or with calculus. The calculus involves turning the sum into an integral. As you can see in the figure, the integral

∫_0^s x dx = s²/2    (3.38)

has the same value as the sum, since the area under the orange curve (which is the sum) is equal to the area under the blue curve (which is the integral).

Sum geometrically

The geometric way to solve this looks visually very much the same as the integral picture, but instead of computing the area from the straight line, we cut the stair-step area in "half" and fit the two pieces together such that they form a rectangle with width s/2 and height s.

Taken together, this tells us that when s ≪ N

S(N, s) ≈ S(N, s = 0) − k (4/N)(s²/2)    (3.39)
        = S(N, s = 0) − k 2s²/N    (3.40)

This means that the multiplicity is gaussian:

S = k ln g    (3.41)
g(N, s) = e^{S(N,s)/k}    (3.42)
        = e^{S(N,s=0)/k} e^{−2s²/N}    (3.43)
        = g(N, s = 0) e^{−2s²/N}    (3.44)

Thus the multiplicity (and thus probability) is peaked at s = 0 as a gaussian with width ∼√N. This tells us that the width of the peak increases as we increase N. However, the excess spin per particle decreases as ∼1/√N. So that means that our fractional polarization becomes far more sharply peaked as we increase N.

Thermal contact (probably skipped in class)

Suppose we put two systems in contact with one another. This means that energy can flow from one system to the other. We assume, however, that the contact between the two systems is weak enough that their energy eigenstates are unaffected. This is a bit of a contradiction you'll need to get used to: we treat our systems as non-interacting, but assume there is some energy transfer between them. The reasoning is that the interaction between them is very small, so that we can treat each system separately, but energy can still flow.

We ask the question: "How much energy will each system end up with after we wait for things to settle down?" The answer to this question is that energy will settle down in the way that maximizes the number of microstates.

Let us consider two simple systems: a 3-spin paramagnet, and a 4-spin paramagnet.

System A A system of 3 spins each with energy ±1. This system has the following multiplicity found from Pascal's triangle:

1
1 1
1 2 1
1 3 3 1

with the bottom row corresponding to energies −3, −1, 1, 3.

System B A system of 4 spins each with energy ±1. This system has the following multiplicity found from Pascal's triangle:

1
1 1
1 2 1
1 3 3 1
1 4 6 4 1

with the bottom row corresponding to energies −4, −2, 0, 2, 4.

Question What is the total number of microstates when you consider systems A and B together as a combined system?

Answer We need to multiply the numbers of microstates for each system separately, because for each microstate of A, it is possible to have B be in any of its microstates. So the total is 2³2⁴ = 128.

Since we have two separate systems here, it is meaningful to ask what the probability is for system A to have energy E_A, given that the combined system has energy E_AB.

Small group question What is the multiplicity of the combined system if the energy is 3, i.e. g_AB(E_AB = 3)?

Answer To solve this, we just need to multiply the multiplicities of the two systems and add up all the energy possibilities that total 3:

g_AB(E_AB = 3) = g_A(−1)g_B(4) + g_A(1)g_B(2) + g_A(3)g_B(0)    (3.45)
 = 3 · 1 + 3 · 4 + 1 · 6    (3.46)
 = 21    (3.47)

Small group question What is the probability that system A has energy 1, if the combined energy is 3?

Answer To solve this, we just need to multiply the multiplicities of the two systems, which we already found, and divide by the total number of microstates:

P(E_A = 1 | E_AB = 3) = g_A(1)g_B(2)/g_AB(3)    (3.48)
 = (3 · 4)/21    (3.49)
 = 4/7    (3.50)

which shows that this is the most probable distribution of energy between the two subsystems.

Given that these two systems are able to exchange energy, they ought to have the same temperature. To find the most probable energy partition between the two systems, we need to find the partition that maximizes the multiplicity of the combined system:

g_AB(E_A) = g_A(E_A) g_B(E_AB − E_A)    (3.51)
0 = dg_AB/dE_A    (3.52)
0 = g'_A g_B − g_A g'_B    (3.53)
g'_A/g_A = g'_B/g_B    (3.54)
(1/g_A(E_A)) ∂g_A(E_A)/∂E_A = (1/g_B(E_B)) ∂g_B(E_B)/∂E_B    (3.55)

This tells us that the "thing that becomes equal" when the two systems are in thermal contact is this strange ratio of the derivative of the multiplicity with respect to energy divided by the multiplicity itself. You may be able to recognize this as what is called a logarithmic derivative.

(∂/∂E_A) ln(g_A(E_A)) = (1/g_A(E_A)) ∂g_A(E_A)/∂E_A    (3.56)

thus we can conclude that when two systems are in thermal contact, the thing that equalizes is

β ≡ (∂ ln g/∂E)_V    (3.57)

At this stage, we haven't shown that β = 1/kT, but we have shown that it should be a function of T, since T is also a thing that is equalized when two systems are in thermal contact. By dimensional reasoning, you can recognize that this could be 1/kT, and we're just going to leave this at that.

Homework for week 2 (PDF)

1. Entropy and Temperature (K&K 2.1) Suppose g(E) = CE^{3N/2}, where C is a constant and N is the number of particles.

a) Show that E = (3/2) N k_B T.

b) Show that (∂²S/∂E²)_N is negative. This form of g(E) actually applies to a monatomic ideal gas.

2. Paramagnetism For a paramagnetic system, the entropy as a function of excess spin s is approximately given by

S(s) ≈ k_B log g(N, 0) − k_B 2s²/N    (3.58)

as will be/has been derived in class. The energy of a paramagnet is given by

E = −µ_tot B    (3.59)

and the total magnetization µ_tot is related to the spin excess as

µ_tot = 2ms    (3.60)

where m is the magnetic moment of a single spin. Find the total magnetization of the system as a function of the magnetic field B and temperature T, number of spins N, and magnetic moment m.

3. Quantum harmonic oscillator

a) Find the entropy of a set of N oscillators of frequency ω as a function of the total quantum number n. Use the multiplicity function:

g(N, n) = (N + n − 1)! / (n! (N − 1)!)    (3.61)

and assume that N ≫ 1. This means you can make the Stirling approximation that log N! ≈ N log N − N. It also means that N − 1 ≈ N.

b) Let E denote the total energy nℏω of the oscillators. Express the entropy as S(E, N). Show that the total energy at temperature T is

E = Nℏω / (e^{ℏω/kT} − 1)    (3.62)

This is the Planck result found the hard way. We will get to the easy way soon, and you will never again need to work with a multiplicity function like this.
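Before moving on, here is a quick numerical sanity check of this week's central result, eq. (3.44). The snippet is my addition (not part of the notes); the values of N and s are arbitrary choices, and log-gamma is used because the factorials themselves would overflow.

```python
# Compare the exact log-multiplicity of a paramagnet with the gaussian
# approximation ln g(N, s) = ln g(N, 0) - 2 s**2 / N of eq. (3.44).
from math import lgamma

def ln_g(N, s):
    # ln[ N! / ((N/2 + s)! (N/2 - s)!) ] via log-gamma
    return lgamma(N + 1) - lgamma(N/2 + s + 1) - lgamma(N/2 - s + 1)

N = 10000
for s in [0, 10, 50, 100]:
    exact = ln_g(N, s)
    gauss = ln_g(N, 0) - 2*s**2/N
    print(s, exact, gauss)   # agreement is excellent while s is much less than N
```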

Chapter 4

Week 3: Boltzmann distribution and Helmholtz (K&K 3, Schroeder 6)

This week we will be deriving the Boltzmann ratio and the Helmholtz free energy, continuing the microcanonical approach we started last week. Last week we saw that when two systems were considered together in a microcanonical picture, the energy of each system taken individually is not fixed. This provides our stepping stone to go from a microcanonical picture where all possible microstates have the same energy (and equal probability) to a canonical picture where all energies are possible and the probability of a given microstate being observed depends on the energy of that state.

Spring 2020: We skipped this section last week, and will skip this section this week

We ended last week by finding that the following quantity is equal in two systems in thermal equilibrium

β = (∂ ln g/∂E)_V    (4.1)

where g(E) is the multiplicity in the microcanonical ensemble. To more definitively connect this with temperature, we will again consider two systems in thermal equilibrium using a microcanonical ensemble, but this time we will make one of those two systems huge. In fact, it will be so huge that we can treat it using classical thermodynamics, i.e. we can conclude that the above equation applies, and we can assume that the temperature of this huge system is unaffected by the small change in energy that could happen due to differences in the small system.

Let us now examine the multiplicity of our combined system, making B be our large system:

g_AB(E) = ∑_{E_A} g_A(E_A) g_B(E_AB − E_A)    (4.2)

We can further find the probability of any particular energy being observed from

P_A(E_A|E_AB) = g_A(E_A) g_B(E_AB − E_A) / ∑_{E'_A} g_A(E'_A) g_B(E_AB − E'_A)    (4.3)

where we are counting how many microstates of the combined system have this particular energy in system A, and dividing by the total number of microstates of the combined system to create a probability. So far this is identical to what we had last week. The difference is that we are now claiming that system B is huge. This means that we can approximate g_B. Doing so, however, requires some care.

Warning wrong! We might be tempted to simply Taylor expand g_B:

g_B(E_AB − E_A) ≈ g_B(E_AB) − βg_B(E_AB)E_A + ···    (4.4)
 ≈ g_B(E_AB)(1 − βE_A)    (4.5)

This, however, would be wrong unless βE_A ≪ 1. One way to see that this expansion must have a limited range is that if βE_A ≥ 1 then we will end up with a negative multiplicity, which is meaningless. The trouble is that we only assumed that E_A was small enough not to change the temperature (or β), which does not mean that βE_A < 1. Thus this expansion is guaranteed to fail.

When we run into this problem, we can consider that ln g(E) is generally a smoother function than g(E). Based on the Central Limit Theorem, we expect g(E) to typically have a Gaussian shape, which is one of our analytic functions that is least well approximated by a polynomial. In contrast, ln g will be parabolic (to the extent that g is Gaussian), which makes it a prime candidate for a Taylor expansion.

Right way The right way to do this is to Taylor expand ln g (which will be the entropy), since the derivative of ln g is the thing that equilibrates, and thus we can assume that this derivative won't change much when we make a small change to a large system.

ln g_B(E_AB − E_A) ≈ ln g_B(E_AB) − βE_A + ···    (4.6)
g_B(E_AB − E_A) ≈ g_B(E_AB) e^{−βE_A}    (4.7)

Now we can plug this into the probability equation above to find that

P_A(E_A) = g_A(E_A) g_B(E_AB) e^{−βE_A} / ∑_{E'_A} g_A(E'_A) g_B(E_AB) e^{−βE'_A}    (4.8)
 = g_A(E_A) e^{−βE_A} / ∑_{E'_A} g_A(E'_A) e^{−βE'_A}    (4.9)

Now this looks a bit different than the probabilities we saw previously (two weeks ago), because this is the probability that we see an energy E_A, not the probability for a given microstate, and thus it has the factors of g_A, and it sums over energies rather than microstates. To find the probability of a given microstate, we just need to divide the probability of its energy by the number of microstates at that energy, i.e. drop the factor of g:

P_i^A = e^{−βE_i}/Z    (4.10)

Z = ∑_E^{all energies} g(E) e^{−βE}    (4.11)
  = ∑_i^{all µstates} e^{−βE_i}    (4.12)

This is all there is to show the Boltzmann probability distribution from the microcanonical picture: big system with little system, treat the big system thermodynamically, count microstates.

Note We still haven't shown (this time around) that β = 1/(k_B T). Right now β is just still a particular derivative that equalizes when two systems are in thermal equilibrium.
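To make this concrete, here is a small numerical illustration of my own (not from the notes): a 3-spin system A counted microcanonically together with a larger paramagnet B. The weights g_A(E_A) g_B(E_AB − E_A) fall off roughly exponentially in E_A, which is the claim of this section; the sizes and total energy are arbitrary choices.

```python
# Microcanonical counting for a small paramagnet A in contact with a
# larger paramagnet B, at fixed total energy.
from math import comb

def g_spins(N, E):
    # multiplicity of N spins of energy +/-1 with total energy E
    n_up = (E + N)//2
    return comb(N, n_up) if (E + N) % 2 == 0 and 0 <= n_up <= N else 0

NA, NB, E_total = 3, 101, -20          # arbitrary choices
weights = {EA: g_spins(NA, EA)*g_spins(NB, E_total - EA)
           for EA in range(-NA, NA + 1, 2)}
Z = sum(weights.values())
for EA, w in sorted(weights.items()):
    print(EA, w/Z)   # probabilities decay roughly exponentially with EA
```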

Internal energy

Now that we have the set of probabilities expressed again in terms of β, there are a few things we can solve for directly, namely any quantities that are directly defined from probabilities. Most specifically, the internal energy

U = ∑_i P_i E_i    (4.13)
  = ∑_i E_i e^{−βE_i}/Z    (4.14)
  = (1/Z) ∑_i E_i e^{−βE_i}    (4.15)

Now doing yet another summation will often feel tedious. There are a couple of ways to make this easier. The simplest is to examine the sum above and notice how very similar it is to the partition function itself. If you take a derivative of the partition function with respect to β, you will find:

(∂Z/∂β)_V = ∑_i e^{−βE_i}(−E_i)    (4.16)
 = −UZ    (4.17)
U = −(1/Z)(∂Z/∂β)_V    (4.18)
  = −(∂ ln Z/∂β)_V    (4.19)

Big Warning In this class, I do not want you beginning any solution (either homework or exam) with a formula for U in terms of Z! This step is not that hard, and you need to do it every time. What you need to remember is definitions, which in this case is how U comes from probability. The reasoning here is that I've all too often seen students who, years after taking thermal physics, can only remember that there is some expression for U in terms of Z. It is easier and more correct to remember that U is a weighted average of the energy.

Pressure

How do we compute pressure? So far, everything we have done has kept the volume fixed. Pressure tells us how the energy changes when we change the volume, i.e. how much work is done. From Energy and Entropy, we know that

dU = T dS − p dV    (4.20)
p = −(∂U/∂V)_S    (4.21)

So how do we find the pressure? We need to find the change in internal energy when we change the volume at fixed entropy.

Small white boards How do we keep the entropy fixed when changing the volume?

Answer Experimentally, we would avoid allowing any heating by insulating the system. Theoretically, this is less easy. When we consider the Gibbs entropy, if we could keep all the probabilities fixed while expanding, we would also fix the entropy! In quantum mechanics, we can show that such a process is possible using time-dependent perturbation theory. Under certain conditions, if you perturb a system sufficiently slowly, it will remain in the "same" eigenstate it was in originally. Although the eigenstate changes, and its energy changes, they do so continuously.

If we take a derivative of U with respect to volume while holding the probabilities fixed, we obtain the following result:

p = −(∂U/∂V)_S    (4.22)
  = −(∂/∂V ∑_i P_i E_i)_S    (4.23)
  = −∑_i P_i dE_i/dV    (4.24)
  = −∑_i (e^{−βE_i}/Z) dE_i/dV    (4.25)

So the pressure is just a weighted sum of derivatives of energy eigenvalues with respect to volume. We can apply the derivative trick to this also:

p = (1/(βZ))(∂Z/∂V)_β    (4.26)
  = (1/β)(∂ ln Z/∂V)_β    (4.27)

Now we have an expression in terms of ln Z and β.

Helmholtz free energy

We saw a hint above that U somehow relates to ln Z, which hinted that ln Z might be something special, and now ln Z also turns out to relate to the pressure somehow. Let's put this into thermodynamics language.¹

U = −(∂ ln Z/∂β)_V    (4.28)
d ln Z = −U dβ + (∂ ln Z/∂V)_β dV    (4.29)
d ln Z = −U dβ + βp dV    (4.30)

We can already see work (i.e. −p dV) showing up here. So now we're going to try a switch to a dU rather than a dβ, since we know something about dU.

d(βU) = U dβ + β dU    (4.31)
d ln Z = −(d(βU) − β dU) + βp dV    (4.32)
 = β dU − d(βU) + βp dV    (4.33)
β dU = d(ln Z + βU) − βp dV    (4.34)
dU = (1/β) d(ln Z + βU) − p dV    (4.35)

Comparing this result with the thermodynamic identity tells us that

S = k_B ln Z + U/T    (4.36)
F ≡ U − TS    (4.37)
  = U − T(k_B ln Z + U/T)    (4.38)
  = U − k_B T ln Z − U    (4.39)
  = −k_B T ln Z    (4.40)

That was a bit of a differentials slog, but it got us the same result for the Helmholtz free energy without assuming the Gibbs entropy S = −k ∑_i P_i ln P_i. It did, however, demonstrate a not-quite-contradiction, in that the expression we found for the entropy is not mathematically equal to the Boltzmann entropy. It approaches the same thing for large systems, although I won't prove that now.

F = −kT ln Z    (4.41)

In effect, this expression gives us a physical meaning for the partition function.

Small groups Consider a system with g eigenstates, each with energy E_0. What is the free energy?

Answer We begin by writing down the partition function

Z = ∑_i e^{−βE_i}    (4.42)
  = g e^{−βE_0}    (4.43)

Now we just need a log and we're done.

F = −kT ln Z    (4.44)
  = −kT ln(g e^{−βE_0})    (4.45)
  = −kT(ln g + ln e^{−βE_0})    (4.46)
  = E_0 − kT ln g    (4.47)

This is just what we would have concluded about the free energy if we had used the Boltzmann expression for the entropy in this microcanonical ensemble.

Waving our hands, we can understand F = −kT ln Z in two ways:

1. If there are more accessible microstates, Z is bigger, which means S is bigger and F must be more negative.

2. If we only consider the most probable energies, to find the energy from Z, we need the negative logarithm, and a kT to cancel out the β.

Memorizing This expression for the Helmholtz free energy I will absolutely support you using as a starting point for homework or exam answers, and it is well worth memorizing.

Using the free energy

Why the big deal with the free energy? One way to put it is that it is relatively easy to compute. The other is that once you have an analytic expression for the free energy, you can solve for pretty much anything else you want.

¹ Of course, we already talked last week about F = −kT ln Z, but that was done using the Gibbs entropy, which we're pretending we don't yet know...
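A quick numerical check of the "Small groups" result above (my addition, not part of the notes); the values of kT, g, and E_0 are arbitrary choices.

```python
# For g degenerate states of energy E0, F = -kT ln Z reduces to E0 - kT ln g.
import math

kT = 0.5
g, E0 = 4, 1.2

Z = g*math.exp(-E0/kT)            # eq. (4.43)
F = -kT*math.log(Z)               # eq. (4.44)
print(F, E0 - kT*math.log(g))     # the two values agree, eq. (4.47)
```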

That was a bit of a differentials slog, but got us the 1. If there are more accessible microstates, Z same result for the Helmholtz free energy without P is bigger, which means S is bigger and F assuming the Gibbs entropy S = −k i Pi ln Pi. It must be more negative. did, however, demonstrate a not-quite-contradiction, 2. If we only consider the most probable ener- in that the expression we found for the entropy is not gies, to find the energy from Z, we need the mathematically equal to the Boltmann entropy. It negative logarithm, and a kT to cancel out approaches the same thing for large systems, although the β. I won’t prove that now. Memorizing This expression for the Helmholtz free energy I will absolutely support you using as a Using the free energy starting point for homework or exam answers, and it is well worth memorizing. Why the big deal with the free energy? One way to F = −kT ln Z (4.41) put it is that it is relatively easy to compute. The other is that once you have an analytic expression 1Of course, we already talked last week about F = −kT ln Z, but that was done using the Gibbs entopy, which we’re pre- for the free energy, you can solve for pretty much tending we don’t yet know. . . anything else you want.
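As a sanity check, here is a tiny Python sketch of my own (the numbers are arbitrary) confirming that F = −kT ln Z reproduces E_0 − kT ln g for this degenerate system, and that the resulting entropy is k ln g:

```python
import numpy as np

k = 1.380649e-23   # J/K
g, E0, T = 5, 1.0e-21, 300.0   # made-up degeneracy, energy, temperature
beta = 1.0 / (k * T)

Z = g * np.exp(-beta * E0)
F = -k * T * np.log(Z)
U = E0                 # every microstate has the same energy
S = (U - F) / T

print(F, E0 - k * T * np.log(g))   # identical
print(S / k, np.log(g))            # S = k ln g, as the microcanonical picture says
```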

Recall:

F ≡ U − TS   (4.48)
dF = dU − S dT − T dS   (4.49)
   = −S dT − p dV   (4.50)
−S = (∂F/∂T)_V   (4.51)
−p = (∂F/∂V)_T   (4.52)

Thus by taking partial derivatives of F we can find S and p, as well as U with a little arithmetic. You have all seen the Helmholtz free energy before, so this shouldn't be much of a surprise. Practically, the Helmholtz free energy is why finding an analytic expression for the partition function is so valuable.

In addition to the "fundamental" physical parameters, we can also find response functions, such as heat capacity or compressibility, which are their derivatives. Of particular interest is the heat capacity at fixed volume. The heat capacity is vaguely defined as

C_V ≡~ (đQ/∂T)_V   (4.53)

by which I mean the amount of heat required to change the temperature by a small amount, divided by that small amount, while holding the volume fixed. The First Law tells us that the heat is equal to the change in internal energy, provided no work is done (i.e. holding volume fixed), so

C_V = (∂U/∂T)_V   (4.54)

which is a nice equation, but can be a nuisance because we often don't know U as a function of T, which is not one of its natural variables. We can also go back to our Energy and Entropy relationship between heat and entropy, where đQ = T dS, and use that to find the ratio that defines the heat capacity:

C_V = T (∂S/∂T)_V   (4.55)

Note that this could also have come from a manipulation of the previous derivative of the internal energy. However, the "heat" reasoning allows us to recognize that the heat capacity at constant pressure will have the same form when expressed as an entropy derivative. This expression is also convenient when we compute the entropy from the Helmholtz free energy, because we already know the entropy as a function of T.

Ideal gas with just one atom

Let us work out the free energy of a particle in a 3D box.

Small groups (5 minutes): Work out (or write down) the energy eigenstates for a particle confined to a cubical volume with side length L. You may either use periodic boundary conditions or an infinite square well. When you have done so, write down an expression for the partition function.

Answer: The energy is just the kinetic energy, given by

T = ℏ²|k⃗|²/2m   (4.56)

The allowed values of k⃗ are determined by the boundary conditions. If we choose periodic boundary conditions, then

k_x = n_x (2π/L)    n_x = any integer   (4.57)

and similarly for k_y and k_z, which gives us

E_{n_x n_y n_z} = (2π²ℏ²/mL²)(n_x² + n_y² + n_z²)   (4.58)

where n_x, n_y, and n_z take any integer values. If we chose the infinite square well boundary conditions instead, our integers would be positive values only, and the prefactor would differ by a factor of four.

From this point, we just need to sum over all states to find Z, and from that the free energy and everything else!

So how do we sum all these things up?

Z = Σ_{n_x=−∞}^{∞} Σ_{n_y=−∞}^{∞} Σ_{n_z=−∞}^{∞} e^{−β(2π²ℏ²/mL²)(n_x² + n_y² + n_z²)}   (4.59)
  = Σ_{n_x} Σ_{n_y} Σ_{n_z} e^{−β(2π²ℏ²/mL²)n_x²} e^{−β(2π²ℏ²/mL²)n_y²} e^{−β(2π²ℏ²/mL²)n_z²}   (4.60)
  = (Σ_{n_x} e^{−β(2π²ℏ²/mL²)n_x²}) (Σ_{n_y} e^{−β(2π²ℏ²/mL²)n_y²}) (Σ_{n_z} e^{−β(2π²ℏ²/mL²)n_z²})   (4.61)
  = (Σ_{n=−∞}^{∞} e^{−β(2π²ℏ²/mL²)n²})³   (4.62)

The last bit here basically looks a lot like separation of variables. Our energy separates into a sum of x, y and z portions (which is why we can use separation of variables for the quantum problem), but that also causes things to separate (into a product) when we compute the partition function.

This final sum here is now something we would like to approximate. If our box is reasonably big (and our temperature is not too low), we can assume that 4π²ℏ²/(k_B T mL²) ≪ 1, which is the classical limit. In this limit, the "thing in the exponential" hardly changes when we change n by 1, so we can reasonably replace this summation with an integral.

Note: You might have thought to use a power series expansion (which is a good instinct!), but in this case that won't work, because n gets arbitrarily large.

Z ≈ (∫_{−∞}^{∞} e^{−(2π²ℏ²/k_B T mL²) n²} dn)³   (4.63)

We can now do a u substitution to simplify this integral.

ξ = √(2π²ℏ²/(k_B T mL²)) n    dξ = √(2π²ℏ²/(k_B T mL²)) dn   (4.64)

This gives us a very easy integral.

Z = ((k_B T mL²/2π²ℏ²)^{1/2} ∫_{−∞}^{∞} e^{−ξ²} dξ)³   (4.65)
  = (k_B T mL²/2π²ℏ²)^{3/2} (∫_{−∞}^{∞} e^{−ξ²} dξ)³   (4.66)
  = V (k_B T m/2π²ℏ²)^{3/2} π^{3/2}   (4.67)
  = V (k_B T m/2πℏ²)^{3/2}   (4.68)

So there we have our partition function for a single atom in a big box. Let's go on to find exciting things! First off, let's give a name to the nasty fraction to the 3/2 power. It has dimensions of inverse volume, or number per volume, and it has ℏ in it (which makes it quantum), so let's call it n_Q, since I use n = N/V for number density.

n_Q ≡ (k_B T m/2πℏ²)^{3/2}   (4.69)

F = −kT ln Z   (4.70)
  = −kT ln(V n_Q)   (4.71)

This looks like (and is) a very simple formula, but you need to keep in mind that n_Q depends on temperature, so it's not quite as simple as it looks. Now that we have the Helmholtz free energy, we can solve for the entropy, pressure, and internal energy pretty quickly.
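If you want to see how good the sum-to-integral replacement is, here is a short Python sketch of my own (the helium mass, box size, and temperature are illustrative choices) comparing a brute-force sum over n with the closed form Z₁ = V n_Q:

```python
import numpy as np

hbar = 1.054571817e-34   # J s
kB = 1.380649e-23        # J/K
m = 6.6464731e-27        # mass of a helium-4 atom, kg
L = 1.0e-6               # box side length, m (so V = 1e-18 m^3)
T = 300.0

eps = 2 * np.pi**2 * hbar**2 / (m * L**2)   # energy scale in (4.58)
beta = 1.0 / (kB * T)

# Direct 1D sum, truncated once the terms are utterly negligible, then cubed
n = np.arange(-200000, 200001)
Z_sum = np.sum(np.exp(-beta * eps * n**2))**3

# Closed form V * n_Q from (4.68)
nQ = (m * kB * T / (2 * np.pi * hbar**2))**1.5
Z_integral = L**3 * nQ

print(Z_sum, Z_integral)   # agree to many digits in this classical limit
```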

Small groups: Solve for the entropy, pressure, and internal energy.

Answer:

S = −(∂F/∂T)_V   (4.72)
  = k ln(V n_Q) + V (kT/(V n_Q)) (dn_Q/dT)   (4.73)
  = k ln(V n_Q) + (kT/n_Q)(3/2)(n_Q/T)   (4.74)
  = k ln(V n_Q) + (3/2) k_B   (4.75)

You could find U by going back to the weighted average definition and using the derivative trick from the partition function, but with the free energy and entropy it is just algebra.

U = F + TS   (4.76)
  = −kT ln(V n_Q) + kT ln(V n_Q) + (3/2) k_B T   (4.77)
  = (3/2) k_B T   (4.78)

The pressure derivative gives a particularly simple result.

p = −(∂F/∂V)_T   (4.79)
  = kT/V   (4.80)

Ideal gas with multiple atoms

Extending from a single atom to several requires just a bit more subtlety. Naively, you could just argue that because we understand extensive and intensive quantities, we should be able to go from a single atom to N atoms by simply scaling all extensive quantities. That is almost completely correct (if done correctly). The entropy has an extra term (the "entropy of mixing"), which also shows up in the free energy. Note that while we may think of this "extra term" as an abstract counting thing, it is physically observable, provided we do the right kind of experiment (which turns out to need to involve changing N, so we won't discuss it in detail until we talk about changing N later).

There are a few different ways we could imagine putting N non-interacting atoms together. I will discuss a few here, starting from the simplest and moving up to the most complex.

Different atoms, same box: One option is to consider a single box with volume V that holds N different atoms, each of a different kind, but all with the same mass. In this case, each microstate of the system will consist of a microstate for each atom. Quantum mechanically, the wave function for the entire state with N atoms will separate, and will be a product of N single-particle states (or orbitals)

Ψ_microstate(r⃗_1, r⃗_2, ···, r⃗_N) = Π_{i=1}^{N} φ_{n_{xi} n_{yi} n_{zi}}(r⃗_i)   (4.81)

and the energy will just be a sum of different energies. The result of this will be that the partition function of the entire system will just be the product of the partition functions of all the separate non-interacting systems (which happen to all be equal). This is mathematically equivalent to what already happened with the three x, y and z portions of the partition function.

Z_N = Z_1^N   (4.82)
F_N = N F_1   (4.83)

This results in simply scaling all of our extensive quantities by N, except the volume, which didn't increase when we added more atoms.

This result sounds great, in that it seems to be perfectly extensive, but when we look more closely, we can see that it is actually not extensive!

F_N = −NkT ln(V n_Q)   (4.84)

If we double the size of our system, so N → 2N and V → 2V, you can see that the free energy does not simply double, because the V in the logarithm doubles while n_Q remains the same (because it is intensive). So there must be an error here, which turns out to be caused by having treated all the atoms as distinct. If each atom is a unique snowflake, then it doesn't quite make sense to expect the result to be extensive, since you aren't scaling up "interchangeable" things.

Identical atoms, but different boxes: We can also consider saying all atoms are truly identical, but each atom is confined into a different box, each with identical (presumably small) size. In this case, the same reasoning as we used above applies, but now we also scale the total volume

up by N. This is a more natural application of the idea of extensivity.

Z_N = Z_1^N   (4.85)
F_N = N F_1   (4.86)
V = N V_1   (4.87)

This is taking the idea of extensivity to an extreme: we keep saying that a system with half as much volume and half as many atoms is "half as much" until there is only one atom left. You would be right to be skeptical that putting one atom per box hasn't resulted in an error.

Identical atoms, same box: This is the picture for a real ideal gas. All of our atoms are the same, or perhaps some fraction are a different isotope, but who cares about that? Since they are all in the same box, we will want to write the many-atom wavefunction as a product of single-atom wavefunctions (sometimes called orbitals). Thus the wave function looks like our first option of "different atoms, same box", but we have fewer distinct microstates, since swapping the quantum numbers of two atoms doesn't change the microstate.

How to remove this duplication, which is sort of a fundamental problem when our business is counting microstates? Firstly, we will consider it vanishingly unlikely for two atoms to be in the same orbital (when we study Bose condensation, we will see this assumption breaking down). Then we need to figure out exactly how many times we counted each microstate, so we can correct our number of microstates (and our partition function). That number is equal to the number of permutations of N distinct numbers, which is N!, if we can assume that there is negligible probability that two atoms are in an identical state. Thus we have a corrected partition function

Z_N = (1/N!) Z_1^N   (4.88)
F_N = N F_1 + k_B T ln N!   (4.89)
    ≈ N F_1 + N k_B T (ln N − 1)   (4.90)
    = −NkT ln(V n_Q) + NkT(ln N − 1)   (4.91)
    = NkT (ln(N/(V n_Q)) − 1)   (4.92)
    = NkT (ln(n/n_Q) − 1)   (4.93)

This answer is extensive, because now we have a ratio of V and N in the logarithm. So yay. We now have the true free energy for an ideal gas at high enough temperature.

Small groups (10-15 minutes): Given this free energy, solve for S, U, and p.

Answer: This is very similar to what we did with just one atom, but now it will give us the true answer for the monatomic ideal gas.

S = −(∂F/∂T)_V   (4.94)
  = −Nk(ln(n/n_Q) − 1) − NkT (∂/∂T) ln(n/n_Q)   (4.95)
  = −Nk ln(n/n_Q) + Nk + NkT (∂ ln n_Q/∂T)   (4.96)
  = −Nk ln(n/n_Q) + Nk + (3/2)Nk   (4.97)
  = −Nk ln(n/n_Q) + (5/2)Nk   (4.98)

This is called the Sackur-Tetrode equation. The quantum mechanics shows up here (ℏ²/m), even though we took the classical limit, because the entropy of a truly classical ideal gas has no minimum value. So quantum mechanics sets the zero of entropy. Note that the zero of entropy is a bit tricky to measure experimentally (albeit possible). The zero of entropy is in fact set by the Third Law of Thermodynamics, which you probably haven't heard of.
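As a check that the Sackur-Tetrode equation really does describe nature, here is a small Python sketch of my own. It assumes argon behaves as a monatomic ideal gas at 300 K and 1 atm, and compares the prediction with the measured standard molar entropy of argon, which is about 155 J/(mol K):

```python
import numpy as np

hbar = 1.054571817e-34
kB = 1.380649e-23
NA = 6.02214076e23
m = 39.948 * 1.66053907e-27   # argon atomic mass in kg
T = 300.0
p = 101325.0                  # Pa

n = p / (kB * T)                                 # number density from pV = NkT
nQ = (m * kB * T / (2 * np.pi * hbar**2))**1.5   # quantum concentration
S_per_Nk = -np.log(n / nQ) + 2.5                 # Sackur-Tetrode, S/(N k)

print(S_per_Nk)            # roughly 18.6
print(S_per_Nk * kB * NA)  # molar entropy, about 155 J/(mol K)
```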

Now we can solve for the internal energy:

U = F + TS   (4.99)
  = NkT(ln(n/n_Q) − 1) − NkT ln(n/n_Q) + (5/2)NkT   (4.100)
  = (3/2) NkT   (4.101)

This is just the standard answer you're familiar with. You can notice that it doesn't have any quantum mechanics in it, because we took the classical limit.

The pressure is easier than the entropy, since the volume is only inside the log:

p = −(∂F/∂V)_T   (4.102)
  = NkT/V   (4.103)

This is the ideal gas law. Again, the quantum mechanics has vanished in the classical limit.

Homework for week 3 (PDF)

For each problem, please let me know how long the problem took, and what resources you used to solve it!

1. Free energy of a two state system (K&K 3.1, modified)

   a) Find an expression for the free energy as a function of T of a system with two states, one at energy 0 and one at energy ε.
   b) From the free energy, find expressions for the internal energy U and entropy S of the system.
   c) Plot the entropy versus T. Explain its asymptotic behavior as the temperature becomes high.
   d) Plot the S(T) versus U(T). Explain the maximum value of the energy U.

2. Magnetic susceptibility: Consider a paramagnet, which is a material with n spins per unit volume, each of which may be either "up" or "down". The spins have energy ±mB, where m is the magnetic dipole moment of a single spin, and there is no interaction between spins. The magnetization M is defined as the total magnetic moment divided by the total volume. Hint: each individual spin may be treated as a two-state system, which you have already worked with above.

   a) Find the Helmholtz free energy of a paramagnetic system (assume N total spins) and show that F/NkT is a function of only the ratio x ≡ mB/kT.
   b) Use the canonical ensemble (i.e. partition function and probabilities) to find an exact expression for the total magnetization M (which is the total dipole moment per unit volume) and the susceptibility

      χ ≡ (∂M/∂B)_T   (4.104)

      as a function of temperature and magnetic field for the model system of magnetic moments in a magnetic field. The result for the magnetization is

      M = nm tanh(mB/kT)   (4.105)

      where n is the number of spins per unit volume. (A figure in the notes shows a plot of this magnetization versus B field.)
   c) Show that the susceptibility is χ = nm²/kT in the limit mB ≪ kT.

3. Free energy of a harmonic oscillator: A one-dimensional harmonic oscillator has an infinite series of equally spaced energy states, with ε_n = nℏω, where n is an integer ≥ 0, and ω is the classical frequency of the oscillator. We have chosen the zero of energy at the state n = 0, which we can get away with here, but it is not actually the

zero of energy! To find the true energy we would have to add a (1/2)ℏω for each oscillator.

   a) Show that for a harmonic oscillator the free energy is

      F = k_B T log(1 − e^{−ℏω/k_B T})   (4.106)

      Note that at high temperatures such that k_B T ≫ ℏω we may expand the argument of the logarithm to obtain F ≈ k_B T log(ℏω/kT).
   b) From the free energy above, show that the entropy is

      S/k_B = (ℏω/kT)/(e^{ℏω/kT} − 1) − log(1 − e^{−ℏω/kT})   (4.107)

      This entropy is shown in the nearby figure, as well as the heat capacity. (The figures in the notes show the entropy and the heat capacity of a simple harmonic oscillator.)

4. Energy fluctuations (K&K 3.4, modified): Consider a system of fixed volume in thermal contact with a reservoir. Show that the mean square fluctuation in the energy of the system is

   ⟨(ε − ⟨ε⟩)²⟩ = k_B T² (∂U/∂T)_V   (4.108)

   Here U is the conventional symbol for ⟨ε⟩. Hint: Use the partition function Z to relate (∂U/∂T)_V to the mean square fluctuation. Also, multiply out the term (···)².

5. Quantum concentration (K&K 3.8): Consider one particle confined to a cube of side L; the concentration in effect is n = L⁻³. Find the kinetic energy of the particle when in the ground state. There will be a value of the concentration for which this zero-point quantum kinetic energy is equal to the temperature kT. (At this concentration the occupancy of the lowest orbital is of the order of unity; the lowest orbital always has a higher occupancy than any other orbital.) Show that the concentration n₀ thus defined is equal to the quantum concentration n_Q defined by (4.69):

   n_Q ≡ (MkT/2πℏ²)^{3/2}   (4.109)

   within a factor of the order of unity.

6. One-dimensional gas (K&K 3.11): Consider an ideal gas of N particles, each of mass M, confined to a one-dimensional line of length L. The particles have spin zero (so you can ignore spin) and do not interact with one another. Find the entropy at temperature T. You may assume that the temperature is high enough that k_B T is much greater than the ground state energy of one particle.

Chapter 5

Week 4: Thermal radiation and Planck distribution (K&K 4, Schroeder 7.4)

This week we will be tackling things that reduce to a bunch of simple harmonic oscillators. Any system that classically reduces to a set of normal modes, each with its own frequency, falls in this category. We will start with just an ordinary simple harmonic oscillator, and will move on to look at radiation (photons) and sound in solids (phonons).

Harmonic oscillator

You will recall that the energy eigenvalues of a single simple harmonic oscillator are given by

E_n = (n + 1/2)ℏω   (5.1)

Note: The text uses s rather than n for the quantum number, but that is annoying, and on the blackboard my s looks too much like my S, so I'll stick with n. The text also omits the zero-point energy. It does make the math simpler, but I think it's worth seeing how and when the zero-point energy disappears from the results.

We will begin by solving for the properties of a single simple harmonic oscillator at a given temperature. You already did this once using multiplicities, but it's easier this way.

Z = Σ_{n=0}^{∞} e^{−(n+1/2)βℏω}   (5.2)
  = e^{−βℏω/2} Σ_{n=0}^{∞} e^{−nβℏω}   (5.3)

Now the sum is actually a harmonic (or geometric) sum, which has a little trick to solve:

Z = e^{−βℏω/2} Σ_{n=0}^{∞} (e^{−βℏω})^n   (5.4)
ξ ≡ e^{−βℏω}   (5.5)

The trick involves multiplying the series by ξ and subtracting:

Ξ = Σ_{n=0}^{∞} ξ^n   (5.6)
  = 1 + ξ + ξ² + ···   (5.7)
ξΞ = ξ + ξ² + ···   (5.8)
Ξ − ξΞ = 1   (5.9)
Ξ = 1/(1 − ξ)   (5.10)

Thus we find that the partition function is simply

Z = e^{−βℏω/2}/(1 − e^{−βℏω})   (5.11)

This gives us the free energy

F = −kT ln Z   (5.12)
  = −kT ln(e^{−βℏω/2}/(1 − e^{−βℏω}))   (5.13)
  = (1/2)ℏω + kT ln(1 − e^{−βℏω})   (5.14)

We can see now that the ground state energy just ends up as a constant that we add to the free energy, which is what you probably would have guessed. Kittel was able to omit this constant simply by redefining the zero of energy.

Small groups: Solve for the high temperature limit of the free energy.

Answer: At high temperatures, βℏω ≪ 1, which means

e^{−βℏω} = 1 − βℏω + (1/2)(βℏω)² + ···   (5.15)
ln(1 − e^{−βℏω}) = ln(βℏω − (1/2)(βℏω)² + ···)   (5.16)
F ≈ (1/2)ℏω + kT ln(ℏω/kT)   (5.17)

So far this doesn't tell us much, but from it we can quickly tell the high temperature limits of the entropy and internal energy:

S ≈ −k ln(ℏω/kT) − kT (kT/ℏω)(−ℏω/kT²)   (5.18)
  = k (ln(kT/ℏω) + 1)   (5.19)

The entropy increases as we increase temperature, as it always must. The manner in which S increases logarithmically with temperature tells us that the number of accessible microstates must be proportional to kT/ℏω.

Returning to the general case, the entropy is

S = −(∂F/∂T)_V   (5.20)
  = −k ln(1 − e^{−βℏω}) + kT (e^{−βℏω}/(1 − e^{−βℏω})) (ℏω/kT²)   (5.21)
  = −k ln(1 − e^{−βℏω}) + (ℏω/T) (e^{−βℏω}/(1 − e^{−βℏω}))   (5.22)

Planck distribution

Finally, we can find the internal energy and the average quantum number (or number of "phonons"). The latter is known as the Planck distribution.

U = F + TS   (5.23)
  = (1/2)ℏω + ℏω e^{−βℏω}/(1 − e^{−βℏω})   (5.24)
  = (1/2 + e^{−βℏω}/(1 − e^{−βℏω})) ℏω   (5.25)
U = (⟨n⟩ + 1/2) ℏω   (5.26)
⟨n⟩ = e^{−βℏω}/(1 − e^{−βℏω})   (5.27)
    = 1/(e^{βℏω} − 1)   (5.28)

So far, all we've done is a straightforward application of the canonical formalism from last week: we computed a partition function, took a log, and from that found the entropy and internal energy.

Small groups: Solve for the high-temperature and low-temperature limits of the internal energy and/or the average number of quanta ⟨n⟩.

High temperature answer: First we consider kT ≫ ℏω, or βℏω ≪ 1. In this case, the exponential is going to be very close to one, and we can

use a power series approximation for it.

⟨n⟩ = 1/(e^{βℏω} − 1)   (5.29)
    ≈ 1/((1 + βℏω + (1/2)(βℏω)² + ···) − 1)   (5.30)
    = (kT/ℏω) 1/(1 + (1/2)βℏω + ···)   (5.31)
    = (kT/ℏω)(1 − (1/2)βℏω + ···)   (5.32)
    ≈ kT/ℏω − 1/2   (5.33)

The first term is our equipartition term: (1/2)kT each for the kinetic and potential energy. The second term is our next-order correction, which you need not necessarily include. There would be a next term, which would be proportional to 1/T, but we have omitted it.

Low temperature answer: At low temperature βℏω ≫ 1, and we would rather look at the other representation:

⟨n⟩ = 1/(e^{βℏω} − 1)   (5.34)
    = e^{−βℏω}/(1 − e^{−βℏω})   (5.35)

because now the exponentials are small (rather than large), which means we can expand the denominator as a power series.

⟨n⟩ = e^{−βℏω}(1 + e^{−βℏω} + ···)   (5.36)
    ≈ e^{−βℏω} + e^{−2βℏω}   (5.37)

Once again, I kept one more term than is absolutely needed. Clearly at low temperature we have a very low number of quanta, which should be no shock. I hope you all expected that the system would be in the ground state at very low temperature.
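Here is a brief Python sketch of my own, working in the dimensionless variable x = βℏω, which checks both limits of the Planck distribution numerically:

```python
import numpy as np

def n_avg(x):
    # <n> = 1/(e^x - 1); expm1 avoids roundoff error for small x
    return 1.0 / np.expm1(x)

for x in [0.01, 0.1]:          # high temperature: x << 1
    print(x, n_avg(x), 1/x - 0.5)            # compare with kT/(hbar w) - 1/2
for x in [5.0, 10.0]:          # low temperature: x >> 1
    print(x, n_avg(x), np.exp(-x) + np.exp(-2*x))   # compare with (5.37)
```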

Summing over microstates

I realized that we haven't spent much time talking about how to sum over microstates. Once you "get it," summing over microstates is very easy. Unfortunately, this makes it less obvious that it requires teaching, and I have a tendency to skim over this summation.

A nice example of this was the second homework, which involved the paramagnet again. You needed to find the partition function for N dipoles. After spending a week working with multiplicities, it would be very natural to take the definition

Z ≡ Σ_µ e^{−βE_µ}   (5.38)

and think of the µ as having something to do with spin excess, and to think that this sum should involve multiplicities. You can write a solution here using multiplicities and summing over all possible energies, but that is the hard way. The easy way only looks easy once you know how to do it. The easy way involves literally summing over every possible sequence of spins.

Z = Σ_{s_1=±1} Σ_{s_2=±1} ··· Σ_{s_N=±1} e^{−βE(s_1, s_2, ···, s_N)}   (5.39)

This may look messy, but things simplify when we consider the actual energy (unless we try to simplify that by expressing it in terms of N↑ or the spin excess).

E(s_1, s_2, ···, s_N) = −s_1 mB − s_2 mB − ··· − s_N mB   (5.40)

Now this may look pretty nasty, but it's actually beautiful, because each s_i has a separate term that is added together, which means that it separates! I'll use fewer words for a bit. . .

Z = Σ_{s_1=±1} Σ_{s_2=±1} ··· Σ_{s_N=±1} e^{β(s_1 mB + s_2 mB + ··· + s_N mB)}   (5.41)
  = Σ_{s_1=±1} Σ_{s_2=±1} ··· Σ_{s_N=±1} e^{βs_1 mB} e^{βs_2 mB} ···   (5.42)
  = Σ_{s_1=±1} Σ_{s_2=±1} ··· Σ_{s_N=±1} e^{βs_1 mB} e^{βs_2 mB} ··· e^{βs_N mB}   (5.43)
  = (Σ_{s_1=±1} e^{βs_1 mB}) ··· (Σ_{s_N=±1} e^{βs_N mB})   (5.44)
  = (Σ_{s=±1} e^{βsmB}) ··· (Σ_{s=±1} e^{βsmB})   (5.45)
  = (Σ_{s=±1} e^{βsmB})^N   (5.46)

The important steps above were

1. Writing the sum over states as a nested sum over every quantum number of the system.
2. Breaking the exponential into a product, which we can do because the energy is a sum of terms each of which involves just one quantum number.
3. Doing each sum separately, and finding the result as the product of all those sums.

Note that the final result here is a free energy that is just N times the free energy for a system consisting of a single spin. And thus we could alternatively do our computation for a system with a single spin, and then multiply everything that is extensive by N. The latter is a valid shortcut, but you should know why it gives a correct answer, and when (as when we have identical particles) you could run into trouble.
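The factoring above is easy to verify by brute force. Here is a short Python sketch of my own (N and βmB are arbitrary small choices) that literally sums over every spin sequence and compares with the factored result:

```python
import numpy as np
from itertools import product

N = 4
x = 0.3   # the dimensionless combination beta*m*B

# e^{-beta E} = e^{x (s1 + s2 + ... + sN)}, since E = -(s1+...+sN) m B
Z_brute = sum(np.exp(x * sum(s)) for s in product([+1, -1], repeat=N))
Z_factored = (2 * np.cosh(x))**N   # each single-spin sum is 2 cosh(x)

print(Z_brute, Z_factored)   # agree up to roundoff
```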

Black body radiation

Researchers in 1858-1860 realized that materials emitted light in strict proportion to their ability to absorb it, and hence a perfectly black body would emit the most radiation when heated. Planck and others realized that we should be able to use thermal physics to predict the spectrum of black body radiation. A key idea was to recognize that the light itself should be in thermal equilibrium.

One example of a "black body" is a small hole in a large box. Any light that goes into the hole will bounce around so many times before coming out of the hole that it will all be absorbed. This leads to the idea of studying the radiation in a closed box, which should match that of a black body when it is in thermal equilibrium.

Eigenstates of an empty box

So what are the properties of an empty box? Let's assume metal walls, and not worry too much about details of the boundary conditions, which shouldn't make much difference provided the box is pretty big. The reasoning is basically the same as for a particle in a 3D box: the waves must fit in the box. As for the particle in a box, we can choose either periodic boundary conditions or put nodes at the boundaries. I generally prefer periodic (which gives both positive and negative k⃗), rather than dealing with sine waves (which are superpositions of the above). A beautiful thing about periodic boundary conditions is that your set of k⃗ vectors is independent of the Hamiltonian, so this looks very much like the single atom in a box we did last week.

k_x = n_x (2π/L)    n_x = any integer   (5.47)

and similarly for k_y and k_z, which gives us

ω(k⃗) = c|k⃗|   (5.48)
ω_{n_x n_y n_z} = (2πc/L)√(n_x² + n_y² + n_z²)   (5.49)

where now we need to be extra careful to remember that in this expression n_x is not a number of photons, even though n is a number of photons. Fortunately, we will soon be done with our n_x, once we finish summing. The possible energies of a single mode are those of a simple harmonic oscillator, so for each of the (n_x, n_y, n_z) triples there is a different quantum number n, and an energy given by

E_n = nℏω   (5.50)

where technically there will also be a zero-point energy (like for the physical harmonic oscillator), but we won't want to include the zero-point energy for a couple of reasons. Firstly, it can't be extracted from the vacuum, so including it would make it harder to reason about the quantity of radiation leaking from a hole. Secondly, the total zero-point energy of the vacuum will be infinite, which makes it a nuisance.

In your homework, you will use a summation over all the normal modes to solve for the thermodynamic properties of the vacuum, and will show that

F = 8π (V(kT)⁴/h³c³) ∫₀^∞ ln(1 − e^{−ξ}) ξ² dξ   (5.51)
  = −(8π⁵/45) V(kT)⁴/(h³c³)   (5.52)
  = −(π²/45) V(kT)⁴/(ℏ³c³)   (5.53)

provided the box is big enough that ℏc/(LkT) ≪ 1. At first this looks freaky, because the free energy is always negative, while you know that the energy is always positive. This just means that entropy is dominating the free energy.

The entropy is given by

S = kV (32π⁵/45)(kT/hc)³   (5.54)
  = kV (4π²/45)(kT/ℏc)³   (5.55)

which is a comfortingly positive quantity, and the energy is

U/V = (8π⁵/15)(kT)⁴/(h³c³)   (5.56)
    = (π²/15)(kT)⁴/(ℏ³c³)   (5.57)

which is also nicely positive. Note also that these quantities are nicely extensive, as you would hope.

Knowing the thermodynamic properties of the vacuum is handy, but doesn't tell us yet about the properties of a black body. To do that we'll have to figure out how much of this radiation will escape through a little hole.

Stefan-Boltzmann law of radiation

To find the radiation power, we need to do a couple of things. One is to multiply the energy per volume by the speed of light, which would tell us the energy flux through a hole if all the energy were passing straight through that hole. However, there is an additional geometrical term we will need to find the actual magnitude of the power, since the radiation is travelling equally in all directions. This will give us another dimensionless factor.

If we define the velocity as ck̂ where c is the speed of light and k̂ is its direction, the power flux (or intensity) in the ẑ direction will be given by the energy density times the average value of the positive ẑ component of the velocity. When v_z < 0, the light doesn't come out of the hole at all. This average can be written as

I = (U/V) (1/4π) ∫₀^{2π} ∫₀^{π/2} v_z sin θ dθ dφ   (5.58)
  = (U/V) (1/4π) ∫₀^{2π} ∫₀^{π/2} c cos θ sin θ dθ dφ   (5.59)
  = (U/V)(c/2) ∫₀^1 ξ dξ   (5.60)
  = (U/V)(c/4)   (5.61)
  = −6π (kT)⁴/(h³c²) ∫₀^∞ ln(1 − e^{−ξ}) ξ² dξ   (5.62)

This is the famous Stefan-Boltzmann law of radiation. Since the constants are all mostly a nuisance, they are combined into the Stefan-Boltzmann constant:

I ≡ power radiated per area of surface   (5.63)
  = σ_B T⁴   (5.64)
σ_B = −6π (k_B⁴/h³c²) ∫₀^∞ ln(1 − e^{−ξ}) ξ² dξ   (5.65)

Side note: Why is this T⁴ law important for incandescent light bulbs? The resistivity of a metal increases with temperature. In a light bulb, if you have a bit of wire that is a bit hotter than the rest, its resistivity will be higher, and that will cause it to have more Joule heating, and get hotter. If nothing else came into play, we'd have a feedback loop, and the hot bit would just keep getting hotter, and having higher resistivity, until it vaporized. Boom. Fortunately, the power of light radiated from that hot bit of wire will increase faster than its resistivity goes up (because T⁴ is serious!), preventing the death spiral, and saving the day!
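If you are curious, the dimensionless integral in (5.65) can be evaluated numerically. The following Python sketch of my own does so and recovers the accepted value σ_B ≈ 5.670 × 10⁻⁸ W m⁻² K⁻⁴:

```python
import numpy as np
from scipy.integrate import quad

kB = 1.380649e-23
h = 6.62607015e-34
c = 2.99792458e8

# The dimensionless integral from (5.65); analytically it is -pi^4/45.
integral, _ = quad(lambda xi: np.log(1 - np.exp(-xi)) * xi**2, 0, np.inf)
sigma = -6 * np.pi * kB**4 / (h**3 * c**2) * integral

print(integral)   # about -2.1646, i.e. -pi^4/45
print(sigma)      # about 5.670e-8 W m^-2 K^-4
```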

Skipped: Kirchoff law and surface temperature

Planck radiation law

Having found the total power radiated, a fair question is how much of that power is at each possible frequency. This defines the black body spectrum. Each mode has an occupancy ⟨n⟩ that is the same as that of the harmonic oscillator from Monday. But the power radiated also depends on how many modes there are at a given frequency. This may be more clear if we solve for the internal energy in a different way:

U = Σ_j^modes ⟨n_j⟩ ℏω_j   (5.66)
  = Σ_j^modes ℏω_j/(e^{βℏω_j} − 1)   (5.67)
  ≈ ∫∫∫_{−∞}^{∞} ((hc/L)√(n_x² + n_y² + n_z²))/(e^{β(hc/L)√(n_x² + n_y² + n_z²)} − 1) dn_x dn_y dn_z   (5.68)
  = ∫₀^∞ ((hc/L)n)/(e^{β(hc/L)n} − 1) 4πn² dn   (5.69)

Now we can transform the integral from n to ω via ω_n = 2πcn/L.

U = (L/2πc)³ ∫₀^∞ (ℏω/(e^{βℏω} − 1)) 4πω² dω   (5.70)
  = (V/c³(2π)³) ∫₀^∞ (ℏω/(e^{βℏω} − 1)) 4πω² dω   (5.71)

At this point we can identify the internal energy per frequency as

dU_ω/dω = (V/c³2π²) ℏω³/(e^{βℏω} − 1)   (5.72)

which is proportional to the spectral density of power, which is proportional to ω³ at low frequency, and dies off exponentially at high frequency.

Low temperature heat capacity

Much of the same physics that we have considered here applies also to solids. One of the mysteries of classical statistical mechanics was why the heat capacity of an insulator drops at low temperatures. Experimentally, it was found that

C_p ∝ T³   (5.73)

at low temperatures, which was pretty mysterious. Why would solids have their energy drop off like this? Based on the equipartition theorem, classically the heat capacity should be independent of temperature, provided all the terms in the energy are quadratic.

Einstein model

Einstein proposed that we view a solid as a whole bunch of harmonic oscillators, all with the same frequency (for simplicity). In this case, the internal energy is given by

U = Nℏω/(e^{βℏω} − 1) + (1/2)Nℏω   (5.74)

Small groups: What is the heat capacity at low temperatures according to this picture? Roughly sketch this by hand.

Answer: At low temperature, the exponential on the bottom is crazy big, so we can see that the internal energy will be

U ≈ Nℏω e^{−βℏω}   (5.75)
C_V = (∂U/∂T)_V   (5.76)
    ≈ Nk_B (ℏω/kT)² e^{−βℏω}   (5.77)

This dies off very quickly as the temperature reaches kT ≪ ℏω. We call it "exponential" scaling, but it's exponential in β, not T. This happens (we see exponential low-T scaling in the heat capacity) whenever we have a finite energy gap between the ground state and the first excited state.

Debye theory

For smallish k⃗, the frequency of a phonon is proportional to |k⃗|, with the speed of sound as the proportionality constant. We won't have time for this in class, but the reasoning is very similar.

The key thing Debye realized was that unlike light, the phonon k⃗ has a maximum value, which is determined by the number of atoms per unit volume, since one wavelength of sound must include at least a couple of atoms. This means that when we compute the internal energy of a crystal full of phonons, there is a maximum k value, and thus a maximum value for √(n_x² + n_y² + n_z²) and a maximum value for the frequency, which we call ω_D. The internal energy is thus

U = (L/v_s 2π)³ ∫₀^{ω_D} (ℏω/(e^{βℏω} − 1)) 4πω² dω   (5.78)
  = (V/v_s³(2π)³) ∫₀^{ω_D} (ℏω/(e^{βℏω} − 1)) 4πω² dω   (5.79)

At low temperature we can RUN OUT OF TIME PREPARING FOR LECTURE!
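Here is a short Python sketch of my own showing the Einstein-model heat capacity per oscillator, C_V/Nk = x²eˣ/(eˣ − 1)² with x = ℏω/kT, which is exact for (5.74); the last column is the approximate low-temperature form from (5.77):

```python
import numpy as np

def cv_einstein(x):
    # exact Einstein heat capacity per oscillator, in units of k
    return x**2 * np.exp(x) / np.expm1(x)**2

for x in [0.1, 1.0, 5.0, 20.0]:
    print(x, cv_einstein(x), x**2 * np.exp(-x))
```

At x ≪ 1 the exact value approaches 1 (equipartition), while at x ≫ 1 the two columns agree and both collapse exponentially, which is the freeze-out discussed above.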

Homework for week 4 (PDF)

1. N Radiation in an empty box: As discussed in class, we can consider a black body as a large box with a small hole in it. If we treat the large box as a metal cube with side length L and metal walls, the frequency of each normal mode will be given by:

   ω_{n_x n_y n_z} = (πc/L)√(n_x² + n_y² + n_z²)   (5.80)

   where each of n_x, n_y, and n_z will have positive integer values. This simply comes from the fact that a half wavelength must fit in the box. There is an additional quantum number for polarization, which has two possible values, but does not affect the frequency. Note that in this problem I'm using different boundary conditions from what I use in class. It is worth learning to work with either set of quantum numbers.

   Each normal mode is a harmonic oscillator, with energy eigenstates E_n = nℏω, where we will not include the zero-point energy (1/2)ℏω, since that energy cannot be extracted from the box. (See the Casimir effect for an example where the zero-point energy of photon modes does have an effect.)

   Note: This is a slight approximation, as the boundary conditions for light are a bit more complicated. However, for large n values this gives the correct result.

   a) Show that the free energy is given by

      F = 8π (V(kT)⁴/h³c³) ∫₀^∞ ln(1 − e^{−ξ}) ξ² dξ   (5.81)
        = −(8π⁵/45) V(kT)⁴/(h³c³)   (5.82)
        = −(π²/45) V(kT)⁴/(ℏ³c³)   (5.83)

      provided the box is big enough that ℏc/(LkT) ≪ 1. Note that you may end up with a slightly different dimensionless integral that numerically evaluates to the same result, which would be fine. I also do not expect you to solve this definite integral analytically; a numerical confirmation is fine. However, you must manipulate your integral until it is dimensionless and has all the dimensionful quantities removed from it!
   b) Show that the entropy of this box full of photons at temperature T is

      S = kV (32π⁵/45)(kT/hc)³   (5.84)
        = kV (4π²/45)(kT/ℏc)³   (5.85)

   c) Show that the internal energy of this box full of photons at temperature T is

      U/V = (8π⁵/15)(kT)⁴/(h³c³)   (5.86)
          = (π²/15)(kT)⁴/(ℏ³c³)   (5.87)

2. Surface temperature of the earth (K&K 4.5): Calculate the temperature of the surface of the Earth on the assumption that as a black body in thermal equilibrium it reradiates as much thermal radiation as it receives from the Sun. Assume also that the surface of the Earth is at a constant temperature over the day-night cycle. Use the Sun's surface temperature T = 5800 K, the Sun's radius R = 7 × 10¹⁰ cm, and the Earth-Sun distance of 1.5 × 10¹³ cm.

3. N Pressure of thermal radiation (modified from K&K 4.6): We discussed in class that

   p = −(∂F/∂V)_T   (5.88)

   Use this relationship to show that

   a) p = −Σ_j ⟨n_j⟩ ℏ (dω_j/dV)   (5.89)

      where ⟨n_j⟩ is the number of photons in the mode j;
   b) Solve for the relationship between pressure and internal energy.

4. Heat shields (K&K 4.8): A black (nonreflective) plane at high temperature T_h is parallel to a cold black plane at temperature T_c. The net energy flux density (i.e. power transferred per unit area) in vacuum between the two planes is J_U = σ_B(T_h⁴ − T_c⁴), where σ_B is the Stefan-Boltzmann constant defined in (5.65). A third black plane is inserted between the other two and is allowed to come to a steady state temperature T_m. Find T_m in terms of T_h and T_c, and show that the net energy flux density is cut in half because of the presence of this plane. This is the principle of the heat shield and is widely used to reduce radiant heat transfer. Comment: The result for N independent heat shields floating in temperature between the planes T_u and T_l is that the net energy flux density is J_U = σ_B(T_h⁴ − T_c⁴)/(N + 1).

5. Heat capacity of vacuum

   a) Solve for the heat capacity of a vacuum, given the above, and assuming that photons represent all the energy present in vacuum.
   b) Compare the heat capacity of vacuum at room temperature with the heat capacity of an equal volume of water.

Chapter 6

Week 5: Chemical potential and Gibbs distribution (K&K 9, Schroeder 7.1)

This week we'll be looking at scenarios where the number of particles in a system changes. We could technically always manage to solve problems without doing such a thing, but allowing N to change is often a lot easier, just as letting the energy change made things easier. In both cases, we enable ourselves to consider a smaller system, which tends to be both conceptually and mathematically simpler.

Small white boards (3 minutes): Talk with your neighbor for a moment about how you expect the density of the atmosphere to vary with altitude.

You can see the obvious fact that potential energy will affect density, and hence pressure. We will be generalizing the idea of potential energy into what is called chemical potential.

The atmosphere

Let's talk about the atmosphere for a moment. Each atom in the atmosphere has a potential energy. We can solve this problem using the canonical ensemble as we have learned. We will consider just one atom, but now with gravitational potential energy as well as kinetic energy. This time around we'll do this classically rather than quantum mechanically. We can work out the probability of this atom having any particular momentum and position.

P_1(p⃗, r⃗) = e^{−β(p²/2m + mgz)}/Z_1   (6.1)
           = e^{−βp²/2m} e^{−βmgz}/Z_1   (6.2)

This tells us that the probability of this atom being at any height drops exponentially with height. If we extend this to many atoms, clearly the density must drop exponentially with height. This week we'll be looking at easier approaches to explain this sort of phenomenon.

Chemical potential

Imagine for a moment what happens if you allow just two systems to exchange particles as well as energy. Clearly they will exchange particles for a while, and then things will settle down. If we hold them at fixed temperature, their combined Helmholtz free energy will be minimized. (The next bit explains why.)

Minimizing free energy: Let's consider a system

that is in thermal contact with its surroundings, so the two remain at the same temperature. The system has a fixed volume, so no work is done, and the energy transferred by heating Q is equal to the change in internal energy ∆U. Let's further assume that the surroundings are big enough that the temperature doesn't change even when the system heats its surroundings, or vice versa. We can apply the Second Law in this case:

∆S_sys + ∆S_surr ≥ 0   (6.3)

Now the change in entropy of the surroundings will just be −Q/T, where Q is the energy transferred to the system by heating. This means that

∆S_sys − Q/T ≥ 0   (6.4)
∆S_sys − ∆U_sys/T ≥ 0   (6.5)
T ∆S_sys − ∆U_sys ≥ 0   (6.6)
∆U_sys − T ∆S_sys ≤ 0   (6.7)
∆F_sys ≤ 0   (6.8)

This means that the Helmholtz free energy of the system can only decrease under these conditions. Thus when it reaches equilibrium (where it cannot change any further) the Helmholtz free energy must be at a minimum.

The fact that F is minimized means that the derivative of the Helmholtz free energy with respect to N must be equal on both sides of the box. This defines the chemical potential, which is the thing that becomes equal when two systems are placed in diffusive equilibrium (i.e. particles may be exchanged).

µ = (∂F/∂N)_{T,V}   (6.9)

This expands our total differential of the free energy

dF = −S dT − p dV + µ dN   (6.10)

which also expands our understanding of the thermodynamic identity

dU = T dS − p dV + µ dN   (6.11)

We can now interpret this extra term as a new way to change the internal energy of a system, by adding or removing particles. This differential also gives us a new partial derivative definition for the chemical potential

µ = (∂U/∂N)_{S,V}   (6.12)

The chemical potential expands our set of thermodynamic variables, and allows all sorts of nice excitement. Specifically, we now have three extensive variables that the internal energy depends on, as well as their derivatives: the temperature, pressure, and chemical potential.

Note: In general, there is one chemical potential for each kind of particle, thus the word "chemical" in chemical potential. Thus the "three" I discuss is actually a bit flexible.

Internal and external chemical potential

The chemical potential is in fact very much like potential energy. We can distinguish between external chemical potential, which is basically ordinary potential energy, and internal chemical potential, which is the chemical potential that we compute as a property of a material. We'll do a fair amount of computing of the internal chemical potential this week, but keep in mind that the total chemical potential is what becomes equal in systems that are in equilibrium. The total chemical potential at the top of the atmosphere is equal to the chemical potential at the bottom. If it were not, then atoms would diffuse from one place to the other.

Ideal gas chemical potential

Recall that the Helmholtz free energy of an ideal gas is given by

F = N F_1 + k_B T ln N!   (6.13)
  = −N k_B T ln(V (mk_B T/2πℏ²)^{3/2}) + k_B T N(ln N − 1)   (6.14)
  = −N k_B T ln(V n_Q) + k_B T N(ln N − 1)   (6.15)
  = NkT (ln(N/(V n_Q)) − 1)   (6.16)

n_Q ≡ (mk_B T/2πℏ²)^{3/2},    n ≡ N/V   (6.17)

Small groups: Find the chemical potential of the ideal gas.

Answer: To find the chemical potential, we just need to take a derivative.

µ = (∂F/∂N)_{V,T}   (6.18)
  = k_B T ln(N/(V n_Q))   (6.19)
  = k_B T ln(n/n_Q)   (6.20)

where the number density n is given by n ≡ N/V. This equation can be solved to find the density in terms of the chemical potential:

n = n_Q e^{βµ}   (6.21)

This might remind you of the Boltzmann relation. In fact, it's very closely related to the Boltzmann relation. We do want to keep in mind that the µ above is the internal chemical potential. The total chemical potential is given by the sum of the internal chemical potential and the external chemical potential, and that total is what is equalized between systems that are in diffusive contact.

µ_tot = µ_int + mgz   (6.22)
      = k_B T ln(n/n_Q) + mgz   (6.23)

We can solve for the density now, as a function of position.

k_B T ln(n/n_Q) = µ_tot − mgz   (6.24)
n = n_Q e^{β(µ_tot − mgz)}   (6.25)

This is just telling us the same result we already knew, which is that the density must drop exponentially with height.

Interpreting the chemical potential

The chemical potential can be challenging to understand intuitively, for myself as well as for you. The ideal gas expression

n = n_Q e^{βµ}   (6.26)

can help with this. This tells us that the density increases as we increase the chemical potential. Particles spontaneously flow from high chemical potential to low chemical potential, just like heat flows from high temperature to low. This fits with the idea that at high µ the density is high, since I expect particles to naturally flow from a high density region to a low density region.

The distinction between internal and external chemical potential allows us to reason about systems like the atmosphere. Where the external chemical potential is high (at high altitude), the internal chemical potential must be lower, and there is lower density. This is because particles have already fled the high-µ region to happier locations closer to the Earth.
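To put numbers on this, here is a small Python sketch of my own; it models the atmosphere as isothermal N₂ at 290 K, which is of course a simplification, and evaluates the scale height kT/mg and the resulting density ratios:

```python
import numpy as np

kB = 1.380649e-23
m = 28 * 1.66053907e-27   # mass of an N2 molecule, kg
g = 9.8                   # m/s^2
T = 290.0

scale_height = kB * T / (m * g)
print(scale_height / 1000)   # about 8.8 km

for z in [0.0, 5e3, 10e3, 20e3]:                 # heights in meters
    print(z, np.exp(-m * g * z / (kB * T)))      # fraction of sea-level density
```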

Gibbs factor and sum

Let's consider how we maximize entropy when we allow not just microstates with different energy, but also microstates with different number of particles. The problem is the same as what we dealt with the first week. We want to maximize the entropy, but need to fix the total probability, the average energy, and now the average number.

⟨N⟩ = N = Σ_i P_i N_i   (6.27)
⟨E⟩ = U = Σ_i P_i E_i   (6.28)
1 = Σ_i P_i   (6.29)

To solve for the probability P_i we will want to maximize the entropy S = −k Σ_i P_i ln P_i subject to the above constraints. Like what I did the first week of class, we will need to use Lagrange multipliers. The Lagrangian which we want to maximize will look like

L = −k Σ_i P_i ln P_i + kα(1 − Σ_i P_i) + kβ(U − Σ_i P_i E_i) + kγ(N − Σ_i P_i N_i)   (6.30)

Small groups: Solve for the probabilities P_i that maximize this Lagrangian, subject to the above constraints. Eliminate α from the expression for probability, so you will end up with probabilities that depend on the other two Lagrange multipliers, one of which is our usual β, while the other one we will relate to chemical potential.

Answer: We maximize L by setting its derivatives equal to zero.

0 = −(1/k) ∂L/∂P_i   (6.31)
  = ln P_i + 1 + α + βE_i + γN_i   (6.32)
P_i = e^{−1−α−βE_i−γN_i}   (6.33)

Now as before we'll want to apply the normalization constraint.

1 = Σ_i P_i   (6.34)
  = Σ_i e^{−1−α−βE_i−γN_i}   (6.35)
  = e^{−1−α} Σ_i e^{−βE_i−γN_i}   (6.36)
e^{−1−α} = 1/(Σ_i e^{−βE_i−γN_i})   (6.37)

Thus we find that the probability of a given microstate is

P_i = e^{−βE_i−γN_i}/𝒵   (6.38)
𝒵 ≡ Σ_i e^{−βE_i−γN_i}   (6.39)

where we will call the new quantity 𝒵 the grand partition function or Gibbs sum.

We have already identified β as 1/kT, but what is this γ? It is a dimensionless quantity. We expect that γ will relate to a derivative of the entropy with respect to N (since it is the Lagrange multiplier for the N constraint). We can figure this out by examining the newly expanded total differential of entropy:

dU = T dS − p dV + µ dN   (6.40)
dS = (1/T) dU + (p/T) dV − (µ/T) dN   (6.41)

Small groups: I'd like you to repeat your first ever homework problem in this class, but now with the N-changing twist. Given the above set of probabilities, along with the Gibbs entropy S = −k Σ P_i ln P_i, find the total differential of entropy in terms of dU and dN, keeping in mind that V is inherently held fixed by holding the energy eigenvalues fixed. Equate this total differential to the dS above to identify β and γ with thermodynamic quantities.

Answer:

S = −k Σ_i P_i ln P_i   (6.42)
  = −k Σ_i P_i ln(e^{−βE_i−γN_i}/𝒵)   (6.43)
  = −k Σ_i P_i (−βE_i − γN_i − ln 𝒵)   (6.44)
  = kβU + kγN + k ln 𝒵   (6.45)

Now we can zap this with d to find its derivatives:

dS = kβ dU + kU dβ + kγ dN + kN dγ + k (d𝒵/𝒵)   (6.46)

Now we just need to find d𝒵. . .

d𝒵 = (∂𝒵/∂β) dβ + (∂𝒵/∂γ) dγ   (6.47)
   = −Σ_i E_i e^{−βE_i−γN_i} dβ − Σ_i N_i e^{−βE_i−γN_i} dγ   (6.48)
   = −U𝒵 dβ − N𝒵 dγ   (6.49)

Putting dS together gives

dS = kβ dU + kγ dN   (6.50)
   = (1/T) dU − (µ/T) dN   (6.51)

Thus, we conclude that

kβ = 1/T    kγ = −µ/T   (6.52)
β = 1/kT    γ = −βµ   (6.53)

Actual Gibbs sum (or grand sum)

Putting this interpretation for γ into our probabilities, we find the Gibbs factor and Gibbs sum (or grand sum or grand partition function) to be:

P_j = e^{−β(E_j − µN_j)}/𝒵   (6.54)
𝒵 ≡ Σ_i e^{−β(E_i − µN_i)}   (6.55)

where you must keep in mind that the sums are over all microstates (including states with different N). We can go back to our expressions for internal energy and number

U = Σ_i P_i E_i   (6.56)
  = (1/𝒵) Σ_i E_i e^{−β(E_i − µN_i)}   (6.57)
N = Σ_i P_i N_i   (6.58)
  = (1/𝒵) Σ_i N_i e^{−β(E_i − µN_i)}   (6.59)

We can now use the derivative trick to relate U and N to the Gibbs sum 𝒵, should we so desire.

Small groups: Work out the partial derivative tricks to compute U and N from the grand sum. Hint: You will need to take derivatives of 𝒵 with respect to β and µ.

Answer: Let's start by exploring the derivative with respect to β, which worked so nicely with the partition function.

(1/𝒵)(∂𝒵/∂β) = −(1/𝒵) Σ_i (E_i − µN_i) e^{−β(E_i − µN_i)}   (6.60)
             = −U + µN   (6.61)

Now let's examine a derivative with respect to µ.

(1/𝒵)(∂𝒵/∂µ) = (1/𝒵) Σ_i (βN_i) e^{−β(E_i − µN_i)}   (6.62)
             = βN   (6.63)

Arranging these to find N and U is not hard.
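Here is a quick numerical sketch of my own (the list of microstates is invented purely for illustration) confirming the derivative trick for ⟨N⟩ against the direct weighted average:

```python
import numpy as np

kT = 1.0
mu = -0.5
# toy microstates: energies and particle numbers, arbitrary choices
E = np.array([0.0, 1.0, 1.5, 2.0])
Nn = np.array([0, 1, 1, 2])

def Zg(mu):
    # grand sum over the toy microstates, as in (6.55)
    return np.sum(np.exp(-(E - mu * Nn) / kT))

P = np.exp(-(E - mu * Nn) / kT) / Zg(mu)
N_direct = np.sum(P * Nn)

# N = kT d(ln Zg)/d(mu), from (6.63), by centered finite difference
dmu = 1e-6
N_trick = kT * (np.log(Zg(mu + dmu)) - np.log(Zg(mu - dmu))) / (2 * dmu)

print(N_direct, N_trick)   # agree to ~1e-9
```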

Small groups: Show that

(∂N/∂µ)_{T,V} > 0   (6.64)

Answer:

N = Σ_i N_i P_i   (6.65)
  = kT (1/𝒵)(∂𝒵/∂µ)_β   (6.66)

So the derivative we seek will be

(∂N/∂µ)_{T,V} = Σ_i N_i (∂P_i/∂µ)_β   (6.67)
  = Σ_i N_i (βN_i P_i − (P_i/𝒵)(∂𝒵/∂µ)_β)   (6.68)
  = Σ_i N_i (βN_i P_i − β⟨N⟩P_i)   (6.69)

We can simplify the notation by expressing things in terms of averages, since we've got sums of P_i times something.

  = β ⟨N(N − ⟨N⟩)⟩   (6.70)
  = β (⟨N²⟩ − ⟨N⟩²)   (6.71)
  = β ⟨(N − ⟨N⟩)²⟩   (6.72)

This is positive, because it is an average of something squared. The last step is a common step when examining variances of distributions, and relies on the fact that ⟨N − ⟨N⟩⟩ = 0.

Euler's homogeneous function theorem

There is a nice theorem we can use to better understand the chemical potential, and how it relates to the Gibbs free energy. This involves reasoning about how internal energy changes when all the extensive variables are changed simultaneously, and connects with Euler's homogeneous function theorem.

Suppose we have a glass that we will slowly pour water into. We will define our "system" to be all the water in the glass. The glass is open, so the pressure remains constant. Since the water is at room temperature (and let's just say the room humidity is 100%, to avoid thinking about evaporation), the temperature remains constant as well.

Small white boards: What is the initial internal energy (and entropy and volume and N) of the system? i.e. when there is not yet any water in the glass. . .

Answer: Since these are extensive quantities, they must all be zero when there is no water in the glass.

Small white boards: Suppose the water is added at a rate dN/dt. Suppose you know the values of N, S, V, and U for a given amount of room temperature water (which we can call N_0, S_0, etc.). Find the rate of change of these quantities.

Answer: Because these are extensive quantities, they must all be increasing in equal proportion

dV/dt = (V_0/N_0) dN/dt   (6.73)
dS/dt = (S_0/N_0) dN/dt   (6.74)
dU/dt = (U_0/N_0) dN/dt   (6.75)

This tells us that differential changes to each of these quantities must be related in the same way, for this process of pouring in more identical water. And we can drop the 0 subscript, since the ratio of quantities is the same regardless of how much water we have.

dV = (V_0/N_0) dN   (6.76)
dV = (V/N) dN   (6.77)
dU = (U/N) dN   (6.78)

Thus given the thermodynamic identity

dU = T dS − p dV + µ dN   (6.79)
(U/N) dN = T (S/N) dN − p (V/N) dN + µ dN   (6.80)
U = TS − pV + µN   (6.81)

This is both crazy and awesome. It feels very counterintuitive, and you might be wondering why we didn't tell you this way back in Energy and Entropy to save you all this trouble with derivatives. The answer is that it is usually not directly all that helpful, since we now have a closed-form expression for U in terms of six mutually dependent variables! So you can't use this form in order to evaluate derivatives (much).

This expression is however very helpful in terms of understanding the chemical potential. Consider the

Gibbs free energy:

G ≡ U − TS + pV   (6.82)
  = µN   (6.83)

which tells us that the chemical potential is just the Gibbs free energy per particle. If we have several chemical species, this expression just becomes

G = Σ_i µ_i N_i   (6.84)

so each chemical potential is a partial Gibbs free energy per molecule. This explains why the chemical potential is seldom discussed in chemistry courses: they spend all their time talking about the Gibbs free energy, which just turns out to be the same thing as the chemical potential.

Side note: There is another interesting thing we can do with the relationship

U = TS − pV + µN   (6.85)

and that involves zapping it with d. This tells us that

dU = T dS + S dT − p dV − V dp + µ dN + N dµ   (6.86)

which looks downright weird, since it's got twice as many terms as we normally see. This tells us that the extra terms must add to zero:

0 = S dT − V dp + N dµ   (6.87)

This relationship (called the Gibbs-Duhem equation) tells us just how T, p and µ must change in order to keep our extensive quantities extensive and our intensive quantities intensive.

Chemistry

Chemical equilibrium is somewhat different than the diffusive equilibrium that we have considered so far. In diffusive equilibrium, two systems can exchange particles, and the two systems at equilibrium must have equal chemical potentials. In chemistry, particles can be turned into other particles, so we have a more complicated scenario, but it still involves changing the number of particles in a system. In chemical equilibrium, when a given reaction is in equilibrium the sum of the chemical potentials of the reactants must be equal to the sum of the chemical potentials of the products.

An example may help. Consider for instance making water from scratch:

2H₂ + O₂ → 2H₂O   (6.88)

In this case, in equilibrium

2µ_{H₂} + µ_{O₂} = 2µ_{H₂O}   (6.89)

We can take this simple equation, and turn it into an equation involving activities, which is productive if you think of an activity as being something like a concentration (and if you care about equilibrium concentrations):

e^{β(2µ_{H₂O} − 2µ_{H₂} − µ_{O₂})} = 1   (6.90)
λ²_{H₂O}/(λ_{O₂} λ²_{H₂}) = 1   (6.91)

Now this looks sort of like the law of mass action, except that our equilibrium constant is 1. To get to the more familiar law of mass action, we need to introduce (a caricature of) the chemistry version of activity. The thing in square brackets is actually a relative activity, not a concentration as is often taught in introductory classes (and was considered correct prior to the late nineteenth century). It is only proportional to concentration to the extent that the substance obeys the ideal gas relationship between chemical potential and concentration. Fortunately, this is satisfied for just about anything at low concentration. For solvents (and dense materials like a solid reactant or product) the chemical potential doesn't (appreciably) change as the reaction proceeds, so it is normally omitted from the mass action equation. When I was taught this in a chemistry class back in the nineties, I was taught that the "concentration" of such a substance was dimensionless and had value 1.

Specifically, we define the thing in square brackets as

[H₂O] ≡ n*_{H₂O} e^{β(µ_{H₂O} − µ*_{H₂O})}   (6.92)
      = n*_{H₂O} λ_{H₂O}/λ*_{H₂O}   (6.93)

where n* is a reference concentration, and µ* is the chemical potential of the fluid at that reference density. Using this notation, we can solve for the activity

λ_{H₂O} = λ*_{H₂O} [H₂O]/n*_{H₂O}   (6.94)

So now we can rewrite our weird mass action equation from above

([H₂O] λ*_{H₂O}/n*_{H₂O})² / (([O₂] λ*_{O₂}/n*_{O₂})([H₂] λ*_{H₂}/n*_{H₂})²) = 1   (6.95)

and then we can solve for the equilibrium constant for the reaction

[H₂O]²/([O₂][H₂]²) = ((n*_{H₂O})²/(n*_{O₂}(n*_{H₂})²)) λ*_{O₂}(λ*_{H₂})²/(λ*_{H₂O})²   (6.96)
  = ((n*_{H₂O})²/(n*_{O₂}(n*_{H₂})²)) e^{β(µ*_{O₂} + 2µ*_{H₂} − 2µ*_{H₂O})}   (6.97)
  = ((n*_{H₂O})²/(n*_{O₂}(n*_{H₂})²)) e^{−β∆G*}   (6.98)

where at the last step I defined ∆G* as the difference in Gibbs free energy between products and reactants, and used the fact that the chemical potential is the Gibbs free energy per particle.

This expression for the chemical equilibrium constant is the origin of the intuition that a reaction will go forward if the Gibbs free energy of the products is lower than that of the reactants.

I hope you found this little side expedition into chemistry interesting. I find fascinating where these fundamental chemistry relations come from, and also that the relationship between concentrations arises from an ideal gas approximation! Which is why it is only valid in the limit of low concentration, and why the solvent is typically omitted from the equilibrium constant, since its activity is essentially fixed.

Homework for week 5 (PDF)

Figure 6.1: Centrifugal Force by Randall Munroe, at xkcd.

1. Centrifuge (K&K 5.1): A circular cylinder of radius R rotates about the long axis with angular velocity ω. The cylinder contains an ideal gas of atoms of mass M at temperature T. Find an expression for the dependence of the concentration n(r) on the radial distance r from the axis, in terms of n(0) on the axis. Take µ as for an ideal gas.

2. Potential energy of gas in gravitational field (K&K 5.3): Consider a column of atoms each of mass M at temperature T in a uniform

gravitational field g. Find the thermal average potential energy per atom. The thermal average kinetic energy is independent of height. Find the total heat capacity per atom. The total heat capacity is the sum of contributions from the kinetic energy and from the potential energy. Take the zero of the gravitational energy at the bottom h = 0 of the column. Integrate from h = 0 to h = ∞. You may assume the gas is ideal.

3. Gibbs sum for a two level system (Modified from K&K 5.6)

   a) Consider a system that may be unoccupied with energy zero, or occupied by one particle in either of two states, one of energy zero and one of energy ε. Find the Gibbs sum for this system in terms of the activity λ ≡ e^{βµ}. Note that the system can hold a maximum of one particle.
   b) Solve for the thermal average occupancy of the system in terms of λ.
   c) Show that the thermal average occupancy of the state at energy ε is

      ⟨N(ε)⟩ = λe^{−ε/kT}/𝒵   (6.99)

   d) Find an expression for the thermal average energy of the system.
   e) Allow the possibility that the orbitals at 0 and at ε may each be occupied by one particle at the same time; show that

      𝒵 = 1 + λ + λe^{−ε/kT} + λ²e^{−ε/kT}   (6.100)
        = (1 + λ)(1 + λe^{−ε/kT})   (6.101)

      Because 𝒵 can be factored as shown, we have in effect two independent systems.

4. Carbon monoxide poisoning (K&K 5.8): In carbon monoxide poisoning the CO replaces the O₂ adsorbed on hemoglobin (Hb) molecules in the blood. To show the effect, consider a model for which each adsorption site on a heme may be vacant or may be occupied either with energy ε_A by one molecule O₂ or with energy ε_B by one molecule CO. Let N fixed heme sites be in equilibrium with O₂ and CO in the gas phases at concentrations such that the activities are λ(O₂) = 1 × 10⁻⁵ and λ(CO) = 1 × 10⁻⁷, all at body temperature 37°C. Neglect any spin multiplicity factors.

   a) First consider the system in the absence of CO. Evaluate ε_A such that 90 percent of the Hb sites are occupied by O₂. Express the answer in eV per O₂.
   b) Now admit the CO under the specified conditions. Find ε_B such that only 10% of the Hb sites are occupied by O₂.

Chapter 7

Week 6: Ideal gas (K&K 6, Schroeder 6.7)

Midterm on Monday

Topics are everything through week 4, including week 3 homework, which was due in week 4. Problems should be similar to homework problems, but designed to be completed in class. The exam will be closed notes. You should be able to remember the fundamental equations:

dU = T dS − p dV   (7.1)
F = U − TS   (7.2)
dF = −S dT − p dV   (7.3)
P_i = e^{−βE_i}/Z   (7.4)
Z = Σ_i e^{−βE_i}   (7.5)
U = Σ_i E_i P_i   (7.6)
F = −kT ln Z   (7.7)
S = −k Σ_i P_i ln P_i   (7.8)

If you need a property of a particular system (the ideal gas, the simple harmonic oscillator), it will be given to you. There is no need, for instance, to remember the Stefan-Boltzmann law or the Planck distribution.
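Since these few equations really are all you need, it may help to see them in action together. Here is a small Python sketch of my own (a toy example, not from the notes) that takes an arbitrary list of microstate energies and computes Z, the probabilities, U, F, and S from equations (7.4)-(7.8), then checks that F = U − TS:

```python
import numpy as np

def canonical(E, kT):
    """Thermal averages for a list of microstate energies E at temperature kT."""
    beta = 1/kT
    Z = np.sum(np.exp(-beta*E))      # eq. (7.5)
    P = np.exp(-beta*E)/Z            # eq. (7.4)
    U = np.sum(E*P)                  # eq. (7.6)
    F = -kT*np.log(Z)                # eq. (7.7)
    S = -np.sum(P*np.log(P))         # eq. (7.8), here in units of k
    return Z, P, U, F, S

E = np.array([0.0, 1.0, 1.0, 2.0])   # hypothetical energy levels
kT = 1.0
Z, P, U, F, S = canonical(E, kT)
print(np.isclose(F, U - kT*S))       # eq. (7.2): F = U - TS, with S in units of k
```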

Motivation

You may recall that when we solved for the free energy of an ideal gas, we had a fair amount of work to sum over all possible sets of quantum numbers for each atom, and then to remove the double-counting due to the fact that our atoms were identical. We had a similar issue when dealing with photon modes and blackbody radiation, but in that case one approach was to treat each mode as a separate system, and then just sum over all the modes separately, without ever needing to find the partition function of all the modes taken together.

This week we will be looking at how we can treat each orbital (i.e. possible quantum state for a single non-interacting particle) as a separate system (which may or may not be occupied). This can only work when we work in the grand canonical ensemble, but it will greatly simplify our understanding of such systems.

Quantum mechanics and orbitals

Kittel uses the term orbital to refer to an energy eigenstate (or wave function) of a one-particle system. How do things differ when we have more than one particle?

Suppose we have three particles (and ignore spin for a moment). The wave function would be written as

Ψ(r⃗₁, r⃗₂, r⃗₃, ···). This function in general has nothing to do with any single-particle orbitals. Orbitals arise when we consider a Hamiltonian in which there are no interactions between particles:

Ĥ = p̂₁²/2m + V(r⃗₁) + p̂₂²/2m + V(r⃗₂) + ···   (7.10)

When our Hamiltonian is separable in this way (i.e. the particles don't interact, and there are no terms that involve both r⃗₁ and r⃗₂), we can use separation of variables in the solution, and we obtain a wave function that is a product of orbitals:

|i₁, i₂, i₃, ···⟩ ≐ φ_{i1}(r⃗₁) φ_{i2}(r⃗₂) φ_{i3}(r⃗₃) ···   (7.11)

Assuming the potential and mass are the same for every particle, these orbitals are eigenstates of the following single-particle eigenvalue equation:

(p̂²/2m + V(r⃗)) φ_i(r⃗) = ε_i φ_i(r⃗)   (7.12)

There is a catch, however, which arises if the particles are truly indistinguishable (as is the case for electrons, protons, atoms of the same isotope, etc.). In this case, there is a symmetry which means that permuting the labels of our particles cannot change any probabilities:

|Ψ(r⃗₁, r⃗₂, r⃗₃, ···)|² = |Ψ(r⃗₂, r⃗₁, r⃗₃, ···)|²   (7.13)
                      = |Ψ(r⃗₂, r⃗₃, r⃗₁, ···)|²   (7.14)

The simple product we wrote above doesn't have this symmetry, and thus while it is an eigenfunction of our eigenvalue equation, it cannot represent the state of a real system of identical particles. Fortunately, this is pretty easy to resolve: permuting the labels doesn't change the energy, so we have a largish degenerate subspace in which to work. We are simply required to take a linear combination of these product states which does have the necessary symmetry.

The above equation, while true, does not tell us what happens to the wave function when we do a permutation, only to its magnitude. As it turns out, there are two types of symmetry possible: bosons and fermions.

Fermions

Fermions are particles with half-integer spin, such as electrons and protons. Fermions are antisymmetric when we exchange the labels of any two particles.

Ψ(r⃗₁, r⃗₂, r⃗₃, ···) = −Ψ(r⃗₂, r⃗₁, r⃗₃, ···)   (7.15)

This formula is Pauli's exclusion principle.

This isn't a quantum class, so I won't say much more, but we do need to connect with the orbitals picture. When we have non-interacting fermions, their energy eigenstates can be written using a Slater determinant, which is just a convenient way to write the proper antisymmetric linear combination of all possible product states with the same set of orbitals:

Ψ_{i1 i2 i3 ···}(r⃗₁, r⃗₂, r⃗₃, ···) = (1/√N!) det | φ_{i1}(r⃗₁)  φ_{i1}(r⃗₂)  φ_{i1}(r⃗₃)  ··· |
                                                | φ_{i2}(r⃗₁)  φ_{i2}(r⃗₂)  φ_{i2}(r⃗₃)  ··· |   (7.16)
                                                | φ_{i3}(r⃗₁)  φ_{i3}(r⃗₂)  φ_{i3}(r⃗₃)  ··· |
                                                |      ⋮            ⋮            ⋮          |

This relies on the properties of a determinant, which changes sign if you swap two rows or two columns. This means that if two of your orbitals are the same, the result will be zero, so the "occupancy" of any orbital is either 0 or 1. Note that the factor of 1/√N! is required in order to ensure that the wave function is normalized provided the orbitals are orthonormal.

Bosons

Bosons have integer spin, and differ from fermions in that their sign does not change when you interchange particles.

Ψ(r⃗₁, r⃗₂, r⃗₃, ···) = Ψ(r⃗₂, r⃗₁, r⃗₃, ···)   (7.17)

The wavefunction for noninteracting bosons looks very much like the Slater determinant above, only with a special version of the determinant that has all + signs. The bosons can have as many particles as they want in a given orbital. In the limiting case where all particles are in the same orbital, a single product of orbitals satisfies the required symmetry.
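Equation (7.16) is easy to play with numerically, since a determinant is just a library call. Here is a minimal sketch of my own (the one-dimensional particle-in-a-box orbitals are just a convenient hypothetical choice, not anything from the text):

```python
import numpy as np
from math import factorial

def box_orbital(i, L=1.0):
    """Particle-in-a-box orbital phi_i(x); any orthonormal set would do."""
    return lambda x: np.sqrt(2/L)*np.sin(i*np.pi*x/L)

def slater(orbitals, positions):
    """Antisymmetrized wavefunction of eq. (7.16) via a determinant."""
    M = np.array([[phi(r) for r in positions] for phi in orbitals])
    return np.linalg.det(M)/np.sqrt(factorial(len(orbitals)))

phis = [box_orbital(1), box_orbital(2), box_orbital(3)]
psi = slater(phis, [0.2, 0.5, 0.7])
psi_swapped = slater(phis, [0.5, 0.2, 0.7])      # exchange two particles
print(np.isclose(psi, -psi_swapped))              # antisymmetric: True
print(slater([box_orbital(1)]*2, [0.2, 0.5]))     # repeated orbital: 0.0
```

The two printed checks are exactly the two determinant properties used in the text: swapping two particles flips the sign, and a doubly occupied orbital annihilates the state.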

Fermi-Dirac distribution

Let us now consider a set of non-interacting fermions. These fermions have a Hamiltonian with a set of single-particle energy eigenvalues ε_i. How do we find the probability of any given many-body microstate? As always, the probability of any given microstate is given by the Boltzmann distribution, but given that our particles are non-interacting, we'd prefer to deal with just one at a time. As it turns out, dealing with one particle at a time is not really possible, but in a grand canonical ensemble we can deal with a single orbital at a time with much greater ease. We can think of each orbital as a separate system, and ask how many particles it has! Particles can now be exchanged between orbitals just like they were between systems last week.

Small groups Work out the grand partition function for a single orbital with energy ε_i that may be occupied by a fermion.

Answer Now that we are thinking of an orbital as a system, we can pretty easily write down all the possible states of that system: it is either occupied or unoccupied. The latter case has 0 energy, and also N = 0, while the former case has energy ε and N = 1. Summing over these gives the Gibbs sum

Z = Σ_{all µstates} e^{−β(ε_i − µN_i)}   (7.18)
  = 1 + e^{−β(ε−µ)}   (7.19)

Note that the same statistics would apply to a state for a classical particle if there were an infinite energy required to have two particles in the same state. The physics here is a system that can hold either zero or one particles, and there are various ways you could imagine that happening.

Small groups Find the energy and the average occupancy (⟨N⟩) of the orbital.

Answer If we want to find ⟨N⟩ of the system, we can do that in the usual way:

⟨N⟩ = Σ_i N_i P_i   (7.20)
    = (0 + e^{−β(ε−µ)})/Z   (7.21)
    = e^{−β(ε−µ)}/(1 + e^{−β(ε−µ)})   (7.22)
    = 1/(1 + e^{β(ε−µ)})   (7.23)

Finding the energy is basically the same, since the energy is proportional to the occupancy:

⟨E⟩ = Σ_i E_i P_i   (7.24)
    = (0 + ε e^{−β(ε−µ)})/Z   (7.25)
    = ε⟨N⟩   (7.26)

The average occupancy of an orbital is called the Fermi-Dirac function, and is normally written as:

f(ε) = 1/(e^{β(ε−µ)} + 1)   (7.27)

Whenever you are looking at non-interacting fermions, f(ε) will be very helpful.

Small groups Sketch the Fermi-Dirac function.

When talking about electrons, we often refer to the chemical potential µ as the Fermi level. Kittel also defines the Fermi energy ε_F as the Fermi level when the temperature is zero, i.e.

ε_F ≡ µ(T = 0)   (7.28)

At zero temperature, all the orbitals with energy less than ε_F are occupied, while all the orbitals with higher energy are unoccupied.
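Equation (7.27) is also easy to explore numerically. Here is a quick sketch of my own (the Fermi level and temperature are hypothetical values) that evaluates the Fermi-Dirac function and checks the result you will show on this week's homework, that −∂f/∂ε at ε = µ equals 1/(4kT):

```python
import numpy as np

k = 8.617e-5  # Boltzmann constant in eV/K

def fermi_dirac(eps, mu, T):
    """Average occupancy of an orbital at energy eps, eq. (7.27)."""
    return 1/(np.exp((eps - mu)/(k*T)) + 1)

mu, T = 4.0, 300.0                      # hypothetical Fermi level (eV), temperature
print(fermi_dirac(mu, mu, T))           # exactly 0.5 at the Fermi level
d = 1e-6                                 # numerical slope at eps = mu
slope = (fermi_dirac(mu+d, mu, T) - fermi_dirac(mu-d, mu, T))/(2*d)
print(-slope, 1/(4*k*T))                 # these agree: 1/(4kT)
```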

Actual electrons You might (or might not) be wondering how we can talk about electrons as non-interacting particles. After all, they are charged particles, which naturally repel each other rather strongly. Indeed, a Slater determinant is a terrible approximation for an energy eigenstate for any many-electron system. So why are we bothering talking about orbitals and the Fermi-Dirac distribution that relies on orbitals being an actual thing?

I'm not going to thoroughly explain this, but rather just give a few hints about why what we're doing might be reasonable. The key idea is that what we are really interested in is the behavior of excited states of our many-body system. (The ground state is also very interesting, e.g. if you want to study vibrations or phonons, but not in terms of the thermal behavior of the electrons themselves.) Fortunately, even though the electrons really do interact with one another very strongly, it is possible to construct a picture of elementary excitations that treats these excitations as not interacting with one another. In this kind of a picture, what we are talking about are called quasiparticles. These represent an excitation of the many-body state. And it turns out that in many cases (particularly for solids) we can represent a given excited state of the many-body system as a sum of the energy of a bunch of non-interacting quasiparticles. When this breaks down, we invent new names like exciton to represent an excitation in which more than one quasiparticle are interacting.

Bose-Einstein distribution

The same ideas apply to bosons as to fermions: we can treat each orbital as a separate system in the grand canonical ensemble. In this case, however, the occupancy N can have any (non-negative) value.

Small groups Solve for the Gibbs sum for an orbital with energy ε, and solve for the ⟨N⟩ for a single orbital occupied by bosons.

Answer The Gibbs sum will be

Z = Σ_{N=0}^∞ e^{−β(Nε − µN)}   (7.29)
  = Σ_{N=0}^∞ e^{−βN(ε−µ)}   (7.30)
  = Σ_{N=0}^∞ (e^{−β(ε−µ)})^N   (7.31)

This looks suspiciously like a simple harmonic oscillator. The same harmonic summation trick applies, and we see that

Z = 1 + e^{−β(ε−µ)} + (e^{−β(ε−µ)})² + ···   (7.32)
e^{−β(ε−µ)} Z = e^{−β(ε−µ)} + (e^{−β(ε−µ)})² + ···   (7.33)

Subtracting the two gives

(1 − e^{−β(ε−µ)}) Z = 1   (7.34)
Z = 1/(1 − e^{−β(ε−µ)})   (7.35)

Solving for the average occupancy ⟨N⟩ is again more tedious than for a fermion:

⟨N⟩ = Σ_i N_i P_i   (7.36)
    = (1/Z) Σ_{N=0}^∞ N e^{−β(ε−µ)N}   (7.37)
    = (1/Z)(1/β)(∂Z/∂µ)   (7.38)
    = (1 − e^{−β(ε−µ)}) · e^{−β(ε−µ)}/(1 − e^{−β(ε−µ)})²   (7.39)
    = e^{−β(ε−µ)}/(1 − e^{−β(ε−µ)})   (7.40)

f(ε) = 1/(e^{β(ε−µ)} − 1)   (7.41)
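If the geometric-series manipulation above felt slippery, it is easy to verify numerically. This is a sketch of my own (the values of β, ε, and µ are hypothetical) that truncates the Gibbs sum at large N and compares it with the closed forms (7.35) and (7.41):

```python
import numpy as np

beta, eps, mu = 1.0, 1.0, -0.5       # hypothetical values with eps > mu
x = beta*(eps - mu)
N = np.arange(0, 200)                 # truncated sum; terms decay like e^(-xN)
Z_sum = np.sum(np.exp(-x*N))
N_avg = np.sum(N*np.exp(-x*N))/Z_sum
print(Z_sum, 1/(1 - np.exp(-x)))      # eq. (7.35)
print(N_avg, 1/(np.exp(x) - 1))       # eq. (7.41)
```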

This turns out to be just the Planck distribution we already saw, only with a chemical potential as reference. Why does this bosonic system look like a simple harmonic oscillator? Since the particles are non-interacting, we have the same set of energy eigenvalues, which is to say an equally spaced series of states. This is conversely related to why we can describe solutions to the simple harmonic oscillator as bosonic phonons.

Small groups Sketch the Bose-Einstein distribution function.

This expression, the Bose-Einstein distribution, tells us that at low temperatures, we could end up seeing a lot of particles in low energy states (if there are any eigenvalues below µ), in contrast to the Fermi-Dirac distribution, which never sees more than one particle per state.

Entropy

Small groups Find the entropy of a single orbital that may hold a fermion.

Answer We begin with the probabilities of the two microstates:

P₀ = 1/Z        P₁ = e^{−β(ε−µ)}/Z   (7.42)

where

Z = 1 + e^{−β(ε−µ)}   (7.43)

Now we just find the entropy using S = −k Σ_i P_i ln P_i. Using ln P₀ = −ln Z and ln P₁ = −β(ε − µ) − ln Z, this works out to

S = k ln Z + (ε − µ) f(ε)/T

Classical ideal gas

We are now prepared to talk about a gas in the classical limit. In the classical limit, there is no difference in behavior between fermions and bosons. This happens when the probability of finding a particle in a particular orbital is ≪ 1. And this happens when β(ε − µ) ≫ 1 for all orbitals, i.e. when µ is very negative. When this is the case, both the Fermi-Dirac distribution and the Bose-Einstein distribution become identical.

f_FD(ε) = 1/(e^{β(ε−µ)} + 1) ≈ e^{−β(ε−µ)}   (7.44)
f_BE(ε) = 1/(e^{β(ε−µ)} − 1) ≈ e^{−β(ε−µ)}   (7.45)

In this limit (which is the low-density limit), the system will behave as a classical ideal gas.

A reasonable question is, "what is the chemical potential?" We already handled this, but can now look at this answer in terms of orbitals and the classical distribution function. (Note: classical distribution function is a bit of a misnomer in this context, as it defines how many particles are in a given quantum mechanical orbital.)

⟨N⟩ = Σ_orbitals f(ε_i)   (7.46)
    = Σ_orbitals e^{−β(ε_i−µ)}   (7.47)
    = e^{βµ} Σ_orbitals e^{−βε_i}   (7.48)
    = e^{βµ} Z₁   (7.49)
N = e^{βµ} n_Q V   (7.50)

where Z₁ is the partition function for a single particle in a box, which we derived a few weeks ago to be n_Q V, where n_Q ≡ (mkT/2πℏ²)^{3/2}. Thus we can once again find the expression we found last week,

e^{βµ} = (1/n_Q)(N/V) = n/n_Q   (7.51)

We can solve for the chemical potential

µ = kT (ln N − ln V − (3/2) ln(kT) + (3/2) ln(2πℏ²/m))   (7.52)

Thus it decreases as the volume increases or as the temperature increases.
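As a sanity check on equation (7.51), here is a short sketch of my own that computes n_Q and µ for helium at room temperature and atmospheric pressure (my choice of example, not from the notes):

```python
import numpy as np

hbar = 1.0546e-34   # J s
k    = 1.3807e-23   # J/K
m    = 6.6465e-27   # mass of a helium atom, kg

T, p = 300.0, 1.013e5                     # room temperature, atmospheric pressure
n = p/(k*T)                               # number density from the ideal gas law
nQ = (m*k*T/(2*np.pi*hbar**2))**1.5       # quantum concentration
mu = k*T*np.log(n/nQ)                     # eq. (7.51) solved for mu

print(n/nQ)           # ~6e-7, deep in the classical limit
print(mu/1.602e-19)   # chemical potential in eV, about -0.37 eV
```

The occupancy of any orbital is of order n/n_Q here, which is tiny, confirming that the classical limit is an excellent approximation for ordinary gases.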

We can further find the free energy by integrating the chemical potential. This is again redundant with the approach we already used to solve for this. Remember that

dF = −S dT − p dV + µ dN   (7.53)
µ = (∂F/∂N)_{V,T}   (7.54)

Note that this must be an integral at fixed V and T:

F = ∫₀^N µ dN   (7.55)
  = ∫₀^N kT (ln N − ln V − ln n_Q) dN   (7.56)
  = kT (N ln N − N − N ln V − N ln n_Q)   (7.57)
  = NkT (ln(n/n_Q) − 1)   (7.58)

Small groups Solve for the entropy of the ideal gas (from this free energy).

Answer This one is relatively easy.

−S = (∂F/∂T)_{V,N}   (7.59)
   = Nk (ln(n/n_Q) − 1) − NkT (1/n_Q)(dn_Q/dT)   (7.60)
   = Nk (ln(n/n_Q) − 1) − (3/2) Nk   (7.61)
−S = Nk (ln(n/n_Q) − 5/2)   (7.62)
S = Nk (ln(n_Q/n) + 5/2)   (7.63)

This expression for the entropy is known as the Sackur-Tetrode equation.

Small groups Solve for the pressure of the ideal gas (from the free energy).

Answer

p = −(∂F/∂V)_{T,N}   (7.64)
  = NkT/V   (7.65)

That was pretty easy, once we saw that n_Q was independent of volume. This expression is known as the ideal gas law.

Small groups Solve for the internal energy of the ideal gas.

Answer

U = F + TS   (7.66)
  = (3/2) NkT   (7.67)

Also pretty familiar.

Small groups Solve for the heat capacity at constant volume of the ideal gas.

Answer

C_V = (∂U/∂T)_{V,N}   (7.68)
    = T (∂S/∂T)_{V,N}   (7.69)
    = (3/2) Nk   (7.70)

Small groups Solve for the heat capacity at constant pressure of the ideal gas.

Answer This one requires one (small) step more. We have to convert the volume into a pressure in the free energy expression.

C_p = T (∂S/∂T)_{p,N}   (7.71)
    = T (∂[Nk(ln(n_Q/n) + 5/2)]/∂T)_{p,N}   (7.72)
    = NkT (∂ ln(n_Q/n)/∂T)_{p,N}   (7.73)
    = NkT (∂ ln(n_Q V/N)/∂T)_{p,N}   (7.74)
    = NkT (∂ ln(n_Q kT/p)/∂T)_{p,N}   (7.75)
    = NkT (∂ [ln n_Q + ln kT − ln p]/∂T)_{p,N}   (7.76)

At this point we peek inside and see that n_Q ∝ T^{3/2} and can complete the derivative

C_p = (5/2) Nk   (7.77)

This has been a series of practice computations involving the ideal gas. The results are useful for some of your homework, and the process of finding these properties is something you will need to know for the final exam. Ultimately, pretty much everything comes down to summing and integrating to find partition functions, and then taking derivatives (and occasional integrals) to find everything else.
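The Sackur-Tetrode equation (7.63) is one of the few results here you can check against tabulated data. A quick sketch of my own (argon at standard conditions is my choice of test case):

```python
import numpy as np

hbar, k = 1.0546e-34, 1.3807e-23
m = 6.6335e-26       # mass of an argon atom, kg

def sackur_tetrode(N, V, T):
    """Entropy of a monatomic ideal gas, eq. (7.63)."""
    nQ = (m*k*T/(2*np.pi*hbar**2))**1.5
    return N*k*(np.log(nQ*V/N) + 2.5)

N_A = 6.022e23
V = 0.0224           # one mole at STP occupies about 22.4 L
print(sackur_tetrode(N_A, V, 273.15))   # ~153 J/K, close to the measured value
```

The agreement with the measured standard entropy of argon (about 155 J/mol/K) is a famous early success of quantum statistical mechanics, since ℏ appears explicitly in the formula.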

Homework for week 6 (PDF)

1. Derivative of Fermi-Dirac function (K&K 6.1) Show that −∂f/∂ε evaluated at the Fermi level ε = µ has the value 1/(4kT). Thus the lower the temperature, the steeper the slope of the Fermi-Dirac function.

2. Symmetry of filled and vacant orbitals (K&K 6.2) Show that

f(µ + δ) = 1 − f(µ − δ)   (7.78)

Thus the probability that an orbital δ above the Fermi level is occupied is equal to the probability that an orbital δ below the Fermi level is vacant. A vacant orbital is sometimes known as a hole.

3. Distribution function for double occupancy statistics (K&K 6.3) Let us imagine a new mechanics in which the allowed occupancies of an orbital are 0, 1, and 2. The values of the energy associated with these occupancies are assumed to be 0, ε, and 2ε, respectively.

a) Derive an expression for the ensemble average occupancy ⟨N⟩, when the system composed of this orbital is in thermal and diffusive contact with a reservoir at temperature T and chemical potential µ.

b) Return now to the usual quantum mechanics, and derive an expression for the ensemble average occupancy of an energy level which is doubly degenerate; that is, two orbitals have the identical energy ε. If both orbitals are occupied the total energy is 2ε. How does this differ from part (a)?

4. Entropy of mixing (Modified from K&K 6.6) Suppose that a system of N atoms of type A is placed in diffusive contact with a system of N atoms of type B at the same temperature and volume.

a) Show that after diffusive equilibrium is reached the total entropy is increased by 2Nk ln 2. The entropy increase 2Nk ln 2 is known as the entropy of mixing.

b) If the atoms are identical (A = B), show that there is no increase in entropy when diffusive contact is established. The difference has been called the Gibbs paradox.

c) Since the Helmholtz free energy is lower for the mixed AB than for the separated A and B, it should be possible to extract work from the mixing process. Construct a process that could extract work as the two gases are mixed at fixed temperature. You will probably need to use walls that are permeable to one gas but not the other.

Note This course has not yet covered work, but it was covered in Energy and Entropy, so you may need to stretch your memory to finish part (c).

5. Ideal gas in two dimensions (K&K 6.12)

a) Find the chemical potential of an ideal monatomic gas in two dimensions, with N atoms confined to a square of area A = L². The spin is zero.

b) Find an expression for the energy U of the gas.

c) Find an expression for the entropy σ. The temperature is kT.

6. Ideal gas calculations (K&K 6.14) Consider one mole of an ideal monatomic gas at 300K and 1 atm. First, let the gas expand isothermally and reversibly to twice the initial volume; second, let

this be followed by an isentropic expansion from twice to four times the original volume.

a) How much heat (in joules) is added to the gas in each of these two processes?

b) What is the temperature at the end of the second process?

c) Suppose the first process is replaced by an irreversible expansion into a vacuum, to a total volume twice the initial volume. What is the increase of entropy in the irreversible expansion, in J/K?

Chapter 8

Week 7: Fermi and Bose gases (K&K 7, Schroeder 7)

This week we will look at Fermi and Bose gases. These consist of noninteracting fermions or bosons. There is no point studying these at high temperatures and/or low densities, since that is just where they are identical to the classical ideal gas, which we covered last week. So we'll be low-temperature all week. What happens at low temperatures, and where do we see these gases in real life?

The Fermi gas is most widely seen in metals and semiconductors. In both cases, the electrons (or possibly holes) can be sufficiently dense that "low temperature" corresponds to room temperature or even far above. Now, you might wonder in what insane world it makes sense to think of the electrons in a metal as "noninteracting." If so, you could read my little note about "actual electrons" towards the end of the section on the Fermi-Dirac distribution. In any case, it is reasonable and useful to treat metals as a non-interacting Fermi gas. Room temperature is pretty low, as it turns out, from the perspective of the electrons in a metal, and it's not hard to get things colder than room temperature.

Bose gases at effectively low temperatures are less commonly found, and thus in some ways are more cool. Partly this is because there are fewer candidate particles: you need to look at atoms with integer spin, such as ⁴He. The "new" quantum thing that Bose gases do is to condense at low temperatures. This condensate is similar to a superfluid, but not the same thing. It is also analogous to superconductivity, but again, not the same thing. The first actual Bose-Einstein condensate wasn't formed until 1995, out of rubidium atoms at 170 nanokelvins. So "low temperature" in this case was actually pretty chilly.

Density of (orbital) states

We have found ourselves often writing summations over all the orbitals, such as

N = Σ_{nx} Σ_{ny} Σ_{nz} f(ε_{nx ny nz})   (8.1)
  = ∫∫∫ f(ε_{nx ny nz}) dnx dny dnz   (8.2)
  = ∫₀^∞ f(ε(n)) 4πn² dn   (8.3)

Then we make the integral dimensionless, etc. This can be tedious to do over and over again. In the classical limit we can often use a derivative trick to write an answer as a derivative of a sum we have already solved, but that doesn't work at low temperatures (i.e. the quantum limit). There is another approach, which is to solve for the density of states and then use that. (Note that while it is called "density of states" it is more accurately described as a density of orbitals, since it refers to the solutions to the one-particle problem.)

The density of states is the number of orbitals per unit energy at a given energy. So basically it does two things for us. First, it turns the 3D integral into a 1D integral. Secondly, it converts from an "n" integral into an energy integral. This isn't as nice as a dimensionless integral, but we can still do that ourselves later.

We use a density of states by converting

Σ_{nx} Σ_{ny} Σ_{nz} F(ε_{nx ny nz}) = ∫ dε F(ε) D(ε)   (8.4)

where F is any function of the orbital energy ε. You can see why it is convenient, particularly because the density of states is often itself a very simple function.

Finding the density of states

Kittel gives a method for finding the density of states which involves first integrating to find the number of states under a given energy ε, and then taking a derivative of that. This is a perfectly fine approach, but I think a simpler method involves just using a Dirac δ-function.

D(ε) = Σ_{all orbitals} δ(ε_i − ε)   (8.5)

where you do need to be certain to turn the summation correctly into an integral before making use of the δ-function.

Small groups Solve for the density of states of an electron gas (or of any other spin-1/2 gas, which will have the same expression). You need to know that

ε_{nx ny nz} = (ℏ²π²/2mL²)(nx² + ny² + nz²)   (8.6)

where nx and the others range from 1 to ∞. This corresponds to hard-wall boundary conditions, since we're putting a half-wavelength in the box. You should also keep in mind that each combination of nx, ny, and nz will correspond to two orbitals, one for each possible spin state. I should also perhaps warn you that when integrating δ-functions you will always want to perform a change of variables such that the integration variable is present inside the δ-function and is not multiplied by anything.

Answer

D(ε) = 2 Σ_{nx=1}^∞ Σ_{ny} Σ_{nz} δ((ℏ²π²/2mL²)(nx² + ny² + nz²) − ε)   (8.7)
     = 2 ∫∫∫₀^∞ δ((ℏ²π²/2mL²)(nx² + ny² + nz²) − ε) dnx dny dnz   (8.8)
     = (2/8) ∫∫∫_{−∞}^∞ δ((ℏ²π²/2mL²)(nx² + ny² + nz²) − ε) dnx dny dnz   (8.9)

At this point I have converted to an integral over all "space". Now we'll switch into spherical coordinates before doing a change of variables. Actually, it is useful (although I don't expect students to come up with this) to switch into k-space before going into an energy integral. With k⃗ ≡ (π/L) n⃗,

D(ε) = 2 (L/2π)³ ∫∫∫_{−∞}^∞ δ(ℏ²k²/2m − ε) d³k   (8.10)

I hope that k⃗ as a vector feels more comfortable to you than a vector of quantum numbers n⃗. In any case, we now want to do a change of variables into spherical coordinates

D(ε) = 2 (L/2π)³ ∫₀^∞ δ(ℏ²k²/2m − ε) 4πk² dk   (8.11)

At this point I'd like to pause and point out that an integral over momenta will always end up looking basically like this (except in some cases the ε(k) will be different), and this is an acceptable starting point for your solutions on homework or an exam. If we have fewer dimensions, we may have an area or line element rather than a volume element in k space, and we would have fewer factors of L/2π. Now we can do a change of variables into an energy variable:

ε = ℏ²k²/2m        dε = (ℏ²k/m) dk   (8.12)
k² = 2mε/ℏ²        k dk = (m/ℏ²) dε   (8.13)

And now putting all these things into our integral we find

D(ε) = (V/π²) ∫₀^∞ δ(ε′ − ε) √(2mε′/ℏ²) (m/ℏ²) dε′   (8.14)
     = (V/2π²) (2m/ℏ²)^{3/2} ∫₀^∞ δ(ε′ − ε) √ε′ dε′   (8.15)

And now our integral is in a form where we can finally make use of the delta function! If we had not transformed it in this way (as I suspect is a common error), we would get something with incorrect dimensions!

D(ε) = (V/2π²) (2m/ℏ²)^{3/2} ε^{1/2}   (8.16)

And that is the density of states for a non-relativistic particle in 3 dimensions. For your homework, you will get to solve for the properties of a highly relativistic particle, which has the same density of states as a photon (apart from a factor of two due to polarization, and any factor due to spin).

Common error A very common error, which I made myself when writing these notes, is to forget the factor of two due to spin. Of course, if you have a spin-3/2 fermion, it would be a factor of four. One advantage of using a density of states is that it already includes this factor, so you no longer need to remember it!

Using the density of states

Once we have the density of states, we can solve for various interesting properties of quantum (or classical) gases. The easiest thing to do is a Fermi gas at zero temperature, since the Fermi-Dirac function turns into a step function at that limit. We can start by solving for the Fermi energy of a Fermi gas, which is equal to the chemical potential when the temperature is zero. We do this by solving for the number assuming we know ε_F, and then backwards solving for ε_F. I will do a couple of extra steps here to remind you how this relates to what we did last week.

N = Σ_{all orbitals} f(ε_i)   (8.17)
  = Σ_{all orbitals} 1/(e^{β(ε_i−µ)} + 1)   (8.18)
  = ∫₀^∞ D(ε) 1/(e^{β(ε−µ)} + 1) dε   (8.19)
  = ∫₀^{ε_F} D(ε) dε   (8.20)

In the last step, I made the assumption that T = 0, so I could turn the Fermi-Dirac function into a step function, which simply changes the bounds of the integral. It is all right to start with this assumption, when doing computations at zero temperature. Now I'll put in the density of states.

N = ∫₀^{ε_F} (V/2π²)(2m/ℏ²)^{3/2} ε^{1/2} dε   (8.21)
  = (V/2π²)(2m/ℏ²)^{3/2} ∫₀^{ε_F} ε^{1/2} dε   (8.22)
  = (V/2π²)(2m/ℏ²)^{3/2} (2/3) ε_F^{3/2}   (8.23)

Now we can just solve for the Fermi energy!

ε_F = (3π² (N/V) (ℏ²/2m)^{3/2})^{2/3}   (8.24)
    = (ℏ²/2m)(3π² N/V)^{2/3}   (8.25)

This is the energy of the highest occupied orbital in the gas, when the temperature is zero. As you will see, many of the properties of a metal (which is the Fermi gas that you use on an everyday basis) depend fundamentally on the Fermi energy.
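Equation (8.25) is easy to evaluate for a real metal. Here is a quick sketch of my own, using the conduction electron density of copper (my choice of example; the density value is the standard one electron per atom):

```python
import numpy as np

hbar, m_e, eV = 1.0546e-34, 9.109e-31, 1.602e-19

def fermi_energy(n):
    """Eq. (8.25): Fermi energy for electron number density n (per m^3)."""
    return hbar**2/(2*m_e)*(3*np.pi**2*n)**(2/3)

n_cu = 8.47e28                 # conduction electrons per m^3 in copper
eF = fermi_energy(n_cu)
print(eF/eV)                   # about 7.0 eV
print(eF/1.3807e-23)           # Fermi temperature, about 8e4 K
```

A Fermi temperature of order 10⁵ K is the quantitative reason that "room temperature is pretty low" for the electrons in a metal.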

For this reason, we also like to define other properties of electrons at the Fermi energy: momentum, velocity (technically speed, but it is called Fermi velocity), and even "temperature".

k_F = (3π² N/V)^{1/3}   (8.26)
p_F = ℏ k_F   (8.27)
    = ℏ (3π² N/V)^{1/3}   (8.28)
v_F = p_F/m   (8.29)
    = (ℏ/m)(3π² N/V)^{1/3}   (8.30)
T_F = ε_F/k_B   (8.31)

The text contains a table of properties of metals at the Fermi energy for a number of simple metals. I don't expect you to remember them, but it's worth having them down somewhere so you can check the reasonableness of an answer from time to time. Basically, they all come down to

ε_F ∼ 4 eV (with ∼ ×2 variation)   (8.32)
k_F ∼ 10⁸ cm⁻¹   (8.33)
v_F ∼ 10⁸ cm s⁻¹   (8.34)

Biographical sidenote My PhD advisor insisted that I memorize these numbers (and a few more) prior to my oral exam. He said that experimentalists think that theorists don't know anything about the real world, and hence it is important to be able to estimate things. Sure enough, on my exam I had to estimate the frequency of radiation that passes unabsorbed through a typical superconductor (which is in the microwave band).

Units sidenote My advisor also insisted that I memorize these results in cgs units rather than SI (i.e. mks) units, since that is what any faculty would be comfortable with. You may have noticed that Kittel preferentially uses cgs units, although not exclusively. Berkeley (where Kittel taught) was one of the last hotbeds of cgs units, and as a result all of my physics courses used Gaussian units (which is synonymous with cgs).

Before we move on, it is worth showing you how we can simplify the density of states now that we know what the Fermi energy is:

D(ε) = (V/2π²)(2m/ℏ²)^{3/2} ε^{1/2}   (8.35)
     = (3/2) N ε_F^{−3/2} ε^{1/2}   (8.36)

There is nothing particularly deep here, but this is somewhat more compact, and often the factors of ε_F will end up canceling out.

Small groups Solve for the internal energy at zero temperature of a Fermi gas.

Answer We just need to do a different integral.

U = ∫₀^{ε_F} D(ε) ε dε   (8.37)
  = (3/2) N ε_F^{−3/2} ∫₀^{ε_F} ε^{1/2} ε dε   (8.38)
  = (3/2) N ε_F^{−3/2} (2/5) ε_F^{5/2}   (8.39)
  = (3/5) N ε_F   (8.40)

We generally find when looking at Fermi gases that things with dimensions of energy end up proportional to the Fermi energy. The N we could also have predicted, in order to end up with an extensive internal energy.

Fermi gas at finite temperature

The Fermi gas is more exciting (and more. . . thermal?) when the temperature is not precisely zero. Let's start with the heat capacity at low temperatures, which is one area where metals inherently differ from semiconductors and insulators.

We are looking at a metal with n electrons per unit volume, at temperature T, where kT ≪ ε_F. We are looking to find out the heat capacity C_V.

Small whiteboards How would you approach this?

Answers Remember that

C_V = (∂U/∂T)_{V,N} = T (∂S/∂T)_{V,N}   (8.41)

which means that we need either S or U in order to find the heat capacity at fixed volume. We could do either, but given what we know about the electron gas, U is easier to find.

We can find U by integrating with the density of states and the Fermi-Dirac distribution. This is a new variant of our usual

U = Σ_{all µstates} P_i E_i   (8.42)

In this case, we will instead sum over all orbitals the energy contribution of each orbital, again in effect treating each orbital as a separate system.

U = Σ_{all orbitals} f(ε_i) ε_i   (8.43)
  = ∫ ε f(ε) D(ε) dε   (8.44)

Remember that for an electron gas

D(ε) = (V/2π²)(2m/ℏ²)^{3/2} ε^{1/2}   (8.45)
     = (3N/2) √ε / ε_F^{3/2}   (8.46)

Sometimes one or the other of these may be more convenient.

Figure: Fermi function for zero and finite temperature.

Hand-waving version of heat capacity

We can begin with a hand-waving version of solving for the heat capacity. We look at the Fermi-Dirac function at both finite and zero temperature, and we can note that the red and blue shaded areas, representing the probability of orbitals being unoccupied below ε_F and the probability of excited orbitals above ε_F being occupied, are equal (this was your homework).

Figure: Fermi function times density of states.

To find the energy, of course, we need the Fermi-Dirac function times the density of states. You might think that the red and blue areas will now be unequal, since we are multiplying the blue region by a larger density of states than the red region. However, provided the number of electrons is fixed (as is usual), the chemical potential must shift such that the two areas are equal.

So how do we find the heat capacity? We can work out a rough equation for the internal energy change (relative to zero), and then take a derivative. Now the width of the red and blue regions is ∼ kT. We know this from your first homework problem last week, where you showed that the slope at the chemical potential is 1/(4kT). A steeper slope means a proportionately wider region that is neither zero nor one.

U(T) − U(0) ∼ (# electrons excited)(∆energy)   (8.47)
            ∼ (D(ε_F) kT)(kT)   (8.48)
C_V = (∂U/∂T)_{N,V}   (8.49)
    ∼ D(ε_F) k² T   (8.50)

which tells us that the heat capacity vanishes at low temperatures, and is proportional to T, which is a stark contrast to insulators, for which C_V ∝ T³ as predicted by the Debye model.

Heat capacity without so much waving

To find the heat capacity more carefully, we could set up this integral, noting that the Fermi-Dirac function is the only place where temperature dependence arises:

C_V = (∂U/∂T)_{N,V}   (8.51)
    = ∫₀^∞ ε D(ε) (∂f/∂T) dε   (8.52)
    = ∫₀^∞ ε D(ε) [(ε − ε_F) e^{β(ε−ε_F)} / (e^{β(ε−ε_F)} + 1)²] (1/kT²) dε   (8.53)

where in the last stage I assumed that the chemical potential would not be changing significantly over our (small) temperature range. An interesting question is what the shape of ∂f/∂T is. The exponential on top causes it to drop exponentially at ε − ε_F ≫ kT, while the exponential on the bottom causes it to drop at low energies where ε_F − ε ≫ kT. This makes it sharply peaked, provided kT ≪ ε_F, which can justify evaluating the density of states at the Fermi energy. We can also for the same reason set the bounds on the integral to be all energies

C_V ≈ (D(ε_F)/kT²) ∫_{−∞}^∞ ε (ε − ε_F) e^{β(ε−ε_F)} / (e^{β(ε−ε_F)} + 1)² dε   (8.54)

Now we have an integral that we would love to make dimensionless, but which has an annoying ε that does not have ε_F subtracted from it. Let's look at evenness and oddness. The ratio does not look either very even or very odd, but we can make it do so by multiplying by e^{−β(ε−ε_F)} top and bottom.

C_V = (D(ε_F)/kT²) ∫_{−∞}^∞ ε (ε − ε_F) / [(e^{β(ε−ε_F)} + 1)(e^{−β(ε−ε_F)} + 1)] dε   (8.55)
    = (D(ε_F)/kT²) ∫_{−∞}^∞ ε (ε − ε_F) / [e^{β(ε−ε_F)} + e^{−β(ε−ε_F)} + 2] dε   (8.56)

Now we can do a change of variables

ξ = β(ε − ε_F)        dξ = β dε   (8.57)

This makes our integral almost dimensionless:

C_V = (D(ε_F)/kT²) ∫_{−∞}^∞ (kTξ + ε_F)(kTξ) [1/(e^ξ + e^{−ξ} + 2)] kT dξ   (8.58)

The ε_F term drops out, because it multiplies an odd integrand, leaving

C_V = D(ε_F) k² T ∫_{−∞}^∞ ξ²/(e^ξ + e^{−ξ} + 2) dξ   (8.59)

So here is our answer, expressed in terms of a dimensionless integral. Wolfram alpha tells me this integral is π²/3.

Bose gas

f(ε) = 1/(e^{β(ε−µ)} − 1)   (8.60)

Figure: Bose function for finite temperature.

The divergence of the Bose-Einstein distribution means that µ must be always less than the minimum orbital energy, i.e. µ < 0. As before, the total number is given by

N = ∫₀^∞ f(ε) D(ε) dε   (8.61)
  = (V ···) ∫₀^∞ f(ε) ε^{1/2} dε   (8.62)

where in the second step I assumed a three-dimensional gas, and omitted the various constants in the density of states for brevity.

Figure: Bose function times density of states.

What happens when kT gets low enough that quantum mechanics matters, i.e. such that n_Q ≈ n? Alternatively, we can ask ourselves what happens as N becomes large, such that n ≥ n_Q? If we fix T, we can increase N only by shifting µ to the right, which increases f(ε) at every energy. But there is a limit! If we try to make N too big, we will find that even if we set µ = 0, we don't get enough particles. What does that mean? Surely there cannot be a maximum value to N?!

No, the D(ε) integral we are using accounts for all of the states except for one! The ground state doesn't show up, because k⃗ = 0, so ε = 0 (or if you like non-periodic boundary conditions, when nx = ny = nz = 1, and ε = 3ℏ²π²/2mL²). This state isn't counted in the integral, and all the extra bosons end up in it. If you like, µ doesn't become zero, but rather approaches the ground state energy. And as it does, the number of particles occupying the ground state diverges. We can find the number of particles in the ground state by subtracting the number of particles in all the excited states from the total number of particles:

N_ground = N − ∫₀^∞ f(ε, µ = 0) D(ε) dε   (8.63)

When N_ground > 0 according to this formula, we say that we have a Bose-Einstein condensate. The temperature at which this transition happens (for a fixed density) is the Bose-Einstein transition temperature.
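If you don't trust Wolfram alpha (or me), the dimensionless integral in (8.59) takes a couple of lines to check. A sketch of my own, using scipy:

```python
import numpy as np
from scipy.integrate import quad

# the dimensionless heat-capacity integral from eq. (8.59); the integrand
# is even, so integrate the positive half and double it
val, err = quad(lambda x: x**2/(np.exp(x) + np.exp(-x) + 2), 0, 60)
print(2*val, np.pi**2/3)    # both about 3.2899
```

This confirms C_V = (π²/3) D(ε_F) k² T, the standard linear-in-T electronic heat capacity of a metal.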

Your last homework this week is to solve for this transition temperature.

Experimental observation

The first observation of Bose condensation was in 1995, and involved a couple of thousand Rb atoms at lower than 170 nK. They were cooled using first laser cooling and then evaporative cooling.

Room-temperature Bose-Einstein condensate

One student asked about some recent experiments demonstrating a room-temperature Bose-Einstein condensate (BEC). I don't think these "condensates" are really a BEC in the normal sense, since they are not composed of matter, and they are not in thermal equilibrium. I have not read the papers themselves, and can believe that it is new and exciting, but I am not comfortable calling it a BEC. The particles being condensed are quasiparticles (polaritons) which are basically photons that are strongly interacting with matter.

Homework for week 7 (PDF)

1. Energy of a relativistic Fermi gas (Slightly modified from K&K 7.2) For electrons with an energy ε ≫ mc², where m is the mass of the electron, the energy is given by ε ≈ pc where p is the momentum. For electrons in a cube of volume V = L³ the momentum takes the same values as for a non-relativistic particle in a box.

a) Show that in this extreme relativistic limit the Fermi energy of a gas of N electrons is given by

ε_F = ℏπc (3n/π)^{1/3}   (8.64)

where n ≡ N/V is the number density.

b) Show that the total energy of the ground state of the gas is

U₀ = (3/4) N ε_F   (8.65)

2. Pressure and entropy of degenerate Fermi gas (K&K 7.3)

a) Show that a Fermi electron gas in the ground state exerts a pressure

p = ((3π²)^{2/3}/5) (ℏ²/m) (N/V)^{5/3}   (8.66)

In a uniform decrease of the volume of a cube every orbital has its energy raised: the energy of each orbital is proportional to 1/L² or to 1/V^{2/3}.

b) Find an expression for the entropy of a Fermi electron gas in the region kT ≪ ε_F. Notice that S → 0 as T → 0.

3. Mass-radius relationship for white dwarfs (Modified from K&K 7.6) Consider a white dwarf of mass M and radius R. The dwarf consists of ionized hydrogen, thus a bunch of free electrons and protons, each of which are fermions. Let the electrons be degenerate but nonrelativistic; the protons are nondegenerate.

a) Show that the order of magnitude of the gravitational self-energy is −GM²/R, where G is the gravitational constant. (If the mass density is constant within the sphere of radius R, the exact potential energy is −(3/5)GM²/R).

b) Show that the order of magnitude of the kinetic energy of the electrons in the ground state is

ℏ²N^{5/3}/(mR²) ≈ ℏ²M^{5/3}/(m M_H^{5/3} R²)   (8.67)

where m is the mass of an electron and M_H is the mass of a proton.

c) Show that if the gravitational and kinetic energies are of the same order of magnitude (as required by the virial theorem of mechanics), M^{1/3} R ≈ 10²⁰ g^{1/3} cm.

d) If the mass is equal to that of the Sun (2 × 10³³ g), what is the density of the white dwarf?

e) It is believed that pulsars are stars composed of a cold degenerate gas of neutrons (i.e. neutron stars). Show that for a neutron star M^{1/3} R ≈ 10¹⁷ g^{1/3} cm. What is the value of the radius for a neutron star with a mass equal to that of the Sun? Express the result in km.

4. Fluctuations in a Fermi gas (K&K 7.11) Show for a single orbital of a fermion system that

⟨(∆N)²⟩ = ⟨N⟩(1 − ⟨N⟩)   (8.68)

if ⟨N⟩ is the average number of fermions in that orbital. Notice that the fluctuation vanishes for orbitals with energies far enough from the chemical potential µ so that ⟨N⟩ = 1 or ⟨N⟩ = 0.

5. Einstein condensation temperature (Roundy problem) Starting from the density of free particle orbitals per unit energy range

D(ε) = (V/4π²)(2M/ℏ²)^{3/2} ε^{1/2}   (8.69)

show that the lowest temperature at which the total number of atoms in excited states is equal to the total number of atoms is

T_E = (1/k_B)(ℏ²/2M) ( (N/V) 4π² / ∫₀^∞ (√ξ/(e^ξ − 1)) dξ )^{2/3}   (8.70)

The infinite sum may be numerically evaluated to be 2.612. Note that this counts the atoms in excited states, since the density of states includes all the states except the ground state.

Note: This problem is solved in the text itself. I intend to discuss Bose-Einstein condensation in class, but will not derive this result.
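For the curious, the quoted 2.612 can be checked in a couple of lines. Expanding 1/(e^ξ − 1) = Σ_n e^{−nξ} turns the integral in (8.70) into Γ(3/2) Σ_n n^{−3/2}, and the sum is ζ(3/2). A sketch of my own verifying this numerically:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, zeta

I, _ = quad(lambda x: np.sqrt(x)/np.expm1(x), 0, np.inf)
print(I, gamma(1.5)*zeta(1.5, 1))   # both about 2.315
print(I/gamma(1.5))                  # 2.612..., the quoted infinite sum
```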

Chapter 9

Week 8: Work, heat, and cycles (K&K 8, Schroeder 4)

This week we will be zooming through chapter 8 of Kittel and Kroemer. Chapter 8 covers heat and work, which you learned about during Energy and Entropy. Hopefully this will be a bit of review and catch-up time, before we move on to phase transitions.

Heat and work

As we reviewed in week 1, heat and work for a quasistatic process are given by

Q = ∫ T dS   (9.1)
W = − ∫ p dV   (9.2)

But we can often make use of the First Law in order to avoid computing both of these (if we know how to find the internal energy):

∆U = Q + W   (9.3)

Carnot cycle

We have a monatomic ideal gas, and you can use any of its properties that we have worked out in class. We can begin with what you saw in Energy and Entropy

pV = NkT   (9.4)
U = (3/2) NkT   (9.5)

and we can add to that the results from this class:

S = Nk (ln(n_Q/n) + 5/2)   (9.6)
F = NkT (ln(n/n_Q) − 1)   (9.7)
n = n_Q e^{βµ}   (9.8)
n_Q ≡ (mkT/2πℏ²)^{3/2}   (9.9)

Let us consider a simple cycle in which we start with the gas at temperature T_C.

1. Adiabatically compress the gas until it reaches temperature T_H.

2. Expand a gas to double its volume (the volume it reached at the end of the last step) at fixed temperature T_H.

3. Expand the gas at fixed entropy until its temperature reaches T_C.

4. Finally go back to the original volume at fixed temperature T_C.

Small groups Solve for the heat and work on each of these steps. In addition find the total work done.

Answer We can solve this problem most easily by working out the heat at each step.

1. Since the process is adiabatic, Q₁ = 0. To find the work, we just need to know ∆U = (3/2)Nk∆T. So the work must be W = (3/2)Nk(T_H − T_C).

2. Now we are increasing the volume, which will change the entropy. Since the temperature is fixed, Q = T∆S, and we can find ∆S easily enough from the Sackur-Tetrode entropy: ∆S = Nk ln 2. Since the internal energy doesn't change, the heat and work are opposite. Q = −W = NkT_H ln 2.

3. Now we are again not changing the entropy, and thus not heating the system, so W = ∆U, and the work done is equal and opposite of the work done on step #1: W = (3/2)Nk(T_C − T_H).

4. This will be like step 2, but now the temperature is different, and the sign of the change is different. But more challenging is that we don't yet know the initial or final volumes for this process (or more importantly the ratio of them). To find that, we're going to have to figure out how much the volume changed in steps 1 and 3. Working this out requires that we give a name to one of our volumes, and for convenience I'll call the volume at the end of step 1 V₁, which means the volume at the end of step 2 is 2V₁.

To find the volume at the beginning of step 1 (which I'll call V₀), we need to identify the volume which at temperature T_C will have the same entropy as volume V₁ at T_H.

S₀ = S₁   (9.11)
Nk(ln(n_{Q0}/n₀) + 5/2) = Nk(ln(n_{Q1}/n₁) + 5/2)   (9.12)
n_{Q0}/n₀ = n_{Q1}/n₁   (9.13)
(mkT_C/2πℏ²)^{3/2} V₀/N = (mkT_H/2πℏ²)^{3/2} V₁/N   (9.14)
V₀ = (T_H/T_C)^{3/2} V₁   (9.15)

Now we need to find the volume at the beginning of step 4, right after the adiabatic expansion. Once again, we need to identify the volume that has the same entropy at temperature T_C as the system with volume 2V₁ and temperature T_H. The math is identical, with just a factor of two difference:

V₃ = 2V₁ (T_H/T_C)^{3/2}   (9.16)

So we can see that the isothermal compression (of the cold gas) is again cutting the volume in half. Since we are compressing the gas (rather than expanding) the work is positive while the heat is negative: Q = −W = NkT_C ln(1/2) = −NkT_C ln 2.

Putting these all together, the total work done is

W = NkT_H ln 2 − NkT_C ln 2   (9.17)
  = Nk(T_H − T_C) ln 2   (9.18)

This is not the easy way to find the total work, which is to apply the First Law to the entire cycle. But this serves as an example to help you to see how knowing a few more equations of state can make it a bit easier to solve traditional "thermodynamics" problems. In this case, knowing the entropy made it relatively simple to identify the curves of the adiabats.
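Here is a minimal numerical version of this cycle, a sketch of my own with hypothetical reservoir temperatures, which confirms the total work (9.18) and anticipates the efficiency computed below:

```python
import numpy as np

N_k = 6.022e23*1.3807e-23      # Nk for one mole of gas, J/K
TH, TC = 400.0, 300.0          # hypothetical reservoir temperatures, K

Q_H = N_k*TH*np.log(2)         # heat in, hot isothermal expansion (step 2)
Q_C = -N_k*TC*np.log(2)        # heat out, cold isothermal compression (step 4)
W = Q_H + Q_C                  # adiabatic works cancel; First Law over the cycle
print(W, N_k*(TH - TC)*np.log(2))   # eq. (9.18)
print(W/Q_H, 1 - TC/TH)             # both 0.25: the Carnot efficiency
```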

Efficiency of an engine

If we are interested in this as a heat engine, we have to ask what we put into it. This diagram shows where energy and entropy go. The engine itself (our ideal gas in this case) returns to its original state after one cycle, so it doesn't have any changes. However, we have a hot place (where the temperature is T_H, which has lost energy due to heating our engine as it expanded in step 2), and a cool place at T_C, which got heated up when we compressed our gas at step 4. In addition, over the entire cycle some work was done.

Figure: Carnot engine energy and entropy flow diagram.

The entropy lost by the hot place is the same as the entropy gained by the cold place, because the Carnot engine is reversible.

The energy we put in is all the energy needed to keep the hot side hot, which is the Q for step 2.

Q_H = NkT_H ln 2   (9.19)

The efficiency is the ratio of what we get out to what we put in, which gives us

ε = W/Q_H   (9.20)
  = Nk(T_H − T_C) ln 2 / (NkT_H ln 2)   (9.21)
  = 1 − T_C/T_H   (9.22)

and this is just the famous Carnot efficiency.

Note I could have made this an easier problem if I had changed the statement to expand at fixed temperature until the entropy changed by a given ∆S. Then we would not have had to use the Sackur-Tetrode equation at all, and our result would have been true for any material, not just an ideal gas!

We could also have run this whole cycle in reverse. That would look like the next figure. This is how a refrigerator works. If you had an ideal refrigerator and an ideal engine with equal capacity, you could operate them both between the inside and outside of a room to achieve nothing. The engine could precisely power the refrigerator such that no net heat is exchanged between the room and its environment.

Figure: Carnot fridge energy and entropy flow diagram.

Naturally, we cannot create an ideal Carnot engine or an ideal Carnot refrigerator, because in practice a truly reversible engine would never move. However, it is also very useful to know these fundamental limits, which can guide real heat engines (e.g. coal or nuclear power plants, some solar power plants) and refrigerators or air conditioners. Another use of this ideal picture is that of a heat pump, which is a refrigerator in which you cool the outside in order to heat your house (or anything else). A heat pump can thus be more efficient than an ordinary heater. Just looking at the diagram for a Carnot fridge, you can see that the heat in the hot location exceeds the work done, precisely because it also cools down the cold place.

Homework for week 8 (PDF)

1. Heat pump (K&K 8.1)

a) Show that for a reversible heat pump the energy required per unit of heat delivered inside the building is given by the Carnot efficiency:

W/Q_H = η_C = (T_H − T_C)/T_H   (9.23)

What happens if the heat pump is not reversible?

b) Assume that the electricity consumed by a reversible heat pump must itself be generated by a Carnot engine operating between the even hotter temperature T_HH and the cold (outdoors) temperature T_C. What is the ratio Q_HH/Q_H of the heat consumed at T_HH (i.e. fuel burned) to the heat delivered at T_H (in the house we want to heat)? Give numerical values for T_HH = 600K; T_H = 300K; T_C = 270K.

c) Draw an energy-entropy flow diagram for the combination heat engine-heat pump,

similar to Figures 8.1, 8.2 and 8.4 in the text (or the equivalent but sloppier figures in the course notes). However, in this case we will involve no external work at all, only energy and entropy flows at three temperatures, since the work done is all generated from heat.

2. Photon Carnot engine (Modified from K&K 8.3) In our week on radiation, we saw that the Helmholtz free energy of a box of radiation at temperature T is

F = −8π V(kT)⁴/(h³c³) · π⁴/45   (9.24)

From this we also found the internal energy and entropy

U = 24π (kT)⁴/(h³c³) (π⁴/45) V   (9.25)
S = 32π k V (kT/hc)³ (π⁴/45)   (9.26)

Given these results, let us consider a Carnot engine that uses an empty metallic piston (i.e. a photon gas).

a) Given T_H and T_C, as well as V₁ and V₂ (the two volumes at T_H), determine V₃ and V₄ (the two volumes at T_C).

b) What is the heat Q_H taken up and the work done by the gas during the first isothermal expansion? Are they equal to each other, as for the ideal gas?

c) Does the work done on the two isentropic stages cancel each other, as for the ideal gas?

d) Calculate the total work done by the gas during one cycle. Compare it with the heat taken up at T_H and show that the energy conversion efficiency is the Carnot efficiency.

3. Light bulb in a refrigerator (K&K 8.7) A 100W light bulb is left burning inside a Carnot refrigerator that draws 100W. Can the refrigerator cool below room temperature?

Chapter 10

Week 9: Phase transformations (K&K 10, Schroeder 5.3)

We will be ending this class by looking at phase transformations, such as the transformation from liquid to solid, or from liquid to gas. The existence of phase transformations, which are ubiquitous in nature, requires interactions between particles, which up to now we have neglected. Hence, we will be reverting to a thermodynamics approach, since incorporating interactions into statistical mechanics is not so easy.

One of the key aspects for most phase transformations is coexistence. It is possible to have both ice and liquid water in equilibrium with each other, coexisting happily. The existence of coexistence in fact breaks some assumptions that we have made. For instance, starting way back in Energy and Entropy, we have assured you that you could describe the state of a system using practically any pair of state variables (or triple, now that we include N). However, if ice and water are coexisting, then there must be an ambiguity, because at that temperature and pressure the system could be either ice or water, which are different!

Figure: Phase diagram of water (high resolution), from Wikipedia.

For your online edification (probably not much in class), I include here a phase diagram of water, which includes not only the liquid, vapor and solid phases, but also a dozen or so different crystal phases that you can reach at some combination of high pressure or low temperature.

A phase diagram of an ordinary pure material will have two interesting points, and three interesting lines. The two interesting points are the triple point (at which solid, liquid, and vapor can all coexist), and the critical point, at which the distinction between liquid and vapor vanishes. The three lines represent coexistence between solid and gas (or vapor), coexistence between liquid and gas, and coexistence between liquid and solid.

Coexistence

To understand a phase transformation, we first need to understand the state of coexistence.

Question If we view the liquid and solid here as two separate systems that are in equilibrium with each other, what can you tell me about those two systems?

Answer They must be at the same temperature (since they can exchange energy), they must be at the same pressure (since they can exchange volume), and least obvious they must be at the same chemical potential, since they can exchange molecules.

The first two properties define why we can draw the coexistence as a line on a pressure-temperature diagram, since when the two phases coexist they must have the same pressure and temperature. If we drew

a volume-temperature diagram, the coexisting phases would not lie at the same point. The final property, that the chemical potentials must be identical, may seem obvious in retrospect. This also means that the Gibbs free energy per particle of the two phases must be equal (since this is equal to the chemical potential).

Clausius-Clapeyron

When you look at the phase diagram in its usual pressure versus temperature representation, you can now think of the lines as representing the points where two chemical potentials are equal (e.g. the chemical potential of water and ice). A natural question would be whether you could predict the slopes of these curves. Or alternatively, does knowing the slopes of these curves tell you anything about the materials in question?

We can begin by considering two very close points on the liquid-vapor curve, separated by dp and dT. We know that

µ_g(T, p) = µ_ℓ(T, p)   (10.1)
µ_g(T + dT, p + dp) = µ_ℓ(T + dT, p + dp)   (10.2)

We can now expand the small difference in terms of differentials

µ_g(T, p) + (∂µ_g/∂T)_{p,N} dT + (∂µ_g/∂p)_{T,N} dp = µ_ℓ(T, p) + (∂µ_ℓ/∂T)_{p,N} dT + (∂µ_ℓ/∂p)_{T,N} dp   (10.3)

We can now collect the two differentials and find their ratio.

((∂µ_g/∂T)_{p,N} − (∂µ_ℓ/∂T)_{p,N}) dT = ((∂µ_ℓ/∂p)_{T,N} − (∂µ_g/∂p)_{T,N}) dp   (10.4)

Thus the derivative of the coexistence curve is given by

dp/dT = ((∂µ_g/∂T)_{p,N} − (∂µ_ℓ/∂T)_{p,N}) / ((∂µ_ℓ/∂p)_{T,N} − (∂µ_g/∂p)_{T,N})   (10.5)
      = −((∂µ_g/∂T)_{p,N} − (∂µ_ℓ/∂T)_{p,N}) / ((∂µ_g/∂p)_{T,N} − (∂µ_ℓ/∂p)_{T,N})   (10.6)

Small groups Find an expression for these derivatives to express this ratio in terms of thermal variables that are more comfortable. You will want to make use of the fact we derived a few weeks ago, which says that the chemical potential is the Gibbs free energy per particle, where G = U − TS + pV.

Answer

G = U − TS + pV   (10.7)
  = µN   (10.8)
dG = dU − T dS − S dT + p dV + V dp   (10.9)
   = −S dT + V dp + µ dN   (10.10)
µ dN + N dµ = −S dT + V dp + µ dN   (10.11)
dµ = −(S/N) dT + (V/N) dp   (10.12)

From this differential we can see that

−S/N = (∂µ/∂T)_{p,N}   (10.13)
V/N = (∂µ/∂p)_{T,N}   (10.14)

Thus we can put these into the ratios above, and we will find that the Ns will cancel, and the minus sign on the entropy will cancel the minus sign that was out front.

dp/dT = (S_g/N_g − S_ℓ/N_ℓ) / (V_g/N_g − V_ℓ/N_ℓ)   (10.15)

This looks like a bit of a nuisance having all these N values on the bottom. It looks cleaner if we just define s ≡ S/N as the specific

entropy per atom) and v ≡ V/N as the specific volume (or volume per atom). Thus we have

dp/dT = (s_g − s_ℓ)/(v_g − v_ℓ)   (10.16)

This is the famous Clausius-Clapeyron equation, and is true for any phase coexistence curve in the pressure-temperature phase diagram.

We can further expand this by interpreting the change in entropy as a latent heat. If the entropy changes discontinuously, since the phase transformation happens entirely at a single temperature, we can use the relationship between heat and entropy to find that

Q = T δS   (10.17)

We call the heat needed to change from one phase to another the latent heat L, which gives us that

dp/dT = L/(T ∆V)   (10.18)

This equation can be a bit tricky to use right, since you could get the direction of ∆V wrong. The one with entropy and volume is easier, since as long as both changes are in the same direction (vapor minus liquid or vice versa) it is still correct.

From Clausius-Clapeyron we can see that so long as the volume increases as the entropy also increases, the coexistence curve will have a positive slope.

Question When would the slope ever be negative? It requires a high-entropy phase that also has lower volume!

Answer Ice and water! Water has higher entropy, but also has lower volume than ice (i.e. is more dense). This is backwards from most other materials, and causes the melting curve to slope up and to the left for ice.
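The ice-water case makes a nice numerical check of equation (10.18). A sketch of my own, using standard handbook values for the latent heat and specific volumes of ice and water:

```python
# melting of ice: the high-entropy phase (water) has the smaller volume,
# so eq. (10.18) gives a negative slope
L = 3.34e5                  # latent heat of melting, J/kg
T = 273.15                  # melting temperature, K
dv = 1.000e-3 - 1.091e-3    # specific volume of water minus ice, m^3/kg
print(L/(T*dv))             # about -1.3e7 Pa/K: roughly -130 atm per kelvin
```

The huge magnitude of this slope is why the ice melting line on the phase diagram looks nearly vertical, and the negative sign is the "up and to the left" described above.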

 n V   F = −NkT ln Q + 1 (10.19) van der Waals ideal N

When we talk about phase transformations, we require This free energy depends on both volume and temper- some sort of system in which there are interactions ature (also N, but let’s keep that fixed). The temper- between particles, since that is what leads to a phase ature dependence is out front and in nQ. The volume

71 dependence is entirely in the logarithm. When we this case, in order to derive (or motivate?) the van der add the repulsive interaction, we can wave our hands Waals equation, our reference would be the system a bit and argue that the effect of repulsion is to keep with repulsion only, and the perturbation would be atoms from sitting too close to one another, and that the attraction between our atoms. We want to solve results in each atom having less volume it could be this purely classically, since we don’t know how to placed in. The volume available for a given atom will solve the energy eigenvalue equation with interactions be the total volume V , minus the volume occupied between particles included. by all the other atoms, which we can call Nb where N is the number of atoms, and b is the excluded vol- ume per atom. You might argue (correctly) that the Classically, we would begin by writing down energy, excluded volume should be (N − 1)b, but we will be and then we would work out the partition function by working in the limit of N  1 and can ignore that summing over all possible microstates in the canonical fine distinction. Making this substitution gives us ensemble. A logarithm would then tell us the free energy. The energy will be  n (V − Nb)  F = −NkT ln Q + 1 with repulsion N (10.20) all atoms atom pairs X p2 1 X E = i + U(|~r − ~r |) (10.21) 2m 2 i j This free energy is going to be higher than the ideal i ij gas free energy, because we are making the logarithm lesser, but there is a minus sign out front. That is good, because we would hardly expect including where U(r) is an attractive pair potential, which is repulsion to lower the free energy. to say, a potential energy of interaction between each In your homework you will (incidentally) show that pair of atoms. The first term is the kinetic energy this free energy gives an internal energy that is identi- (and is the same for the ideal gas), while the second cal to the ideal gas free energy, which bears out what term is a potential energy (and is zero for the ideal I said earlier about repulsion affecting the entropy gas). The partition function is then rather than the energy. Z Z Z 1 3 3 3 Adding attraction: mean field theory Z = d r1 d r2 ··· d rN N!

 p2  When we want to add in attraction to the free energy, Z Z Z −β P i + 1 P U(|~r −~r |) 3 3 3 2m 2 i j the approach we will use is called mean field the- d p1 d p2 ··· d pN e ory. I prefer to talk about it as first-order thermo- (10.22) dynamic perturbation theory. (Actually, mean  p2  Z Z Z P i field theory is often more accurately described as a 1 −β 2m = d3p d3p ··· d3p e poor approximation to first-order perturbation the- N! 1 2 N ory, as it is common in mean-field theory to ignore Z Z Z 1 P 3 3 3 −β( U(|~ri−~rj |)) any correlations in the reference fluid.) You know d r1 d r2 ··· d rN e 2 perturbation theory from quantum mechanics, but (10.23) the fundamental ideas can be applied to any theory, including statistical mechanics. The fundamental idea of perturbation theory is to At this point I will go ahead and split this parti- break your Hamiltonian into two terms: one that you tion function into two factors, an ideal gas partition are able to solve, and a second term that is small. In function plus a correction factor that depends on the

72 potential energy of interaction. potential was always small, but the repulsive part of the potential is not small. But we’ll ignore that for  p2  N Z Z Z P i V −β 2m now. Including it properly would be doing this right, Z = d3p d3p ··· d3p e N! 1 2 N but instead we’ll use the approach that leads to the Z Z Z van der Waals equation of state. To continue. . . 1 3 3 3 −β 1 P U(|~r −~r |) d r d r ··· d r e ( 2 i j ) V N 1 2 N (10.24) Fexcess = −kT ln Zconf (10.35) Z Z  βN 2 Z d3r  1 3 3 −β 1 P U(|~r −~r |) = Z d r ··· d r e ( 2 i j ) = −kT ln 1 − U(r) (10.36) ideal V N 1 N 2 V (10.25) βN 2 Z d3r ≈ kT U(r) (10.37) = ZidealZconfigurational (10.26) 2 V N 2 Z d3r Now we can express the free energy! = U(r) (10.38) 2 V F = −kT ln Z (10.27) N 2 ≡ − a (10.39) = −kT ln(ZidealZconfigurational) (10.28) V

= Fideal − kT ln Zconfigurational (10.29) 1 R 3 where I’ve defined a ≡ − 2 d rU(r). The minus sign here is to make a a positive quantity, given that So we just need to approximate this excess free energy U(r) < 0. Putting this together with the ideal gas (beyond the ideal gas free energy). Let’s get to the free energy modified to include a simple repulsion approximation bit. term, we have the van der Waals free energy: 3 3 Z d r Z d r 1 P 1 N −β( U(|~ri−~rj |)) 2 Zconfig = ··· e 2  n (V − Nb)  N V V F = −NkT ln Q + 1 − a vdW N V (10.30) (10.40)   Z 3 Z 3 d r1 d rN X β ≈ ··· 1 − U(r ) V V  2 ij  van der Waals equation of state ij (10.31) Small groups Solve for the van der Waals pressure, as a function of N, V , and T (and of course, also At this point I have used a power series approximation a and b). on the exponentials, under the assumption that our Answer attraction is sufficiently small. Now we can write  ∂F  this sum in a simpler manner, taking account of the p = − (10.41) symmetry between the different particles. ∂V T,N 2 Z 3 Z 3 NkT N β X d r1 d rN = − a (10.42) Zconfig = 1 − ··· U(rij) (10.32) V − Nb V 2 2 V V ij This equation is the van der Waals equation of state, βN 2 Z d3r Z d3r = 1 − 1 2 U(r ) (10.33) which is often rewritten to look like: 2 V V 12  2  βN 2 Z d3r N = 1 − U(r) (10.34) p + a (V − Nb) = NkT (10.43) 2 V V 2 At this stage, I’ve gotten things way simpler. Note as you can see it is only slightly modified from the also, that I did something wrong. I assumed that the ideal gas law, provided a  p and Nb  V .

73 van der Waals and liquid-vapor We now see that there are just two “constants” to deal with, kT , and a each of which have dimensions phase transition b b2 of pressure. The former, of course, depends on tem- kT b Let’s start by looking at the pressure as a function of perature, and the ratio between them (i.e. a ) will volume according to the van der Waals equation: fully determine the shape of our pressure curve (in terms of density). NkT N 2 p = − a (10.44) V − Nb V 2 The van der Waals pressure for a few temperatures. kT N 2 = − a (10.45) Clearly something interesting is happening at low V − b V 2 N temperatures. This is a phase transition. But how do Clearly the pressure will diverge as the volume is we find out what the density (or equivalently, volume) decreased towards Nb, which puts a lower bound on of the liquid and solid are? You already know that the the volume. This reflects the fact that each atom takes pressure, temperature and chemical potential must all b volume, so you can’t compress the fluid smaller than be equal when two phases are in coexistence. From that. At larger volumes, the pressure will definitely this plot we can identify triples of densities where the be positive and decreasing, since the attractive term temperature and pressure are both identical. Which dies off faster than the first term. However, if a is corresponds to the actual phase transition? sufficiently large (or T is sufficiently small), we may find that the second term dominates when the volume Question How might you determine from this van is not too large. der Waals equation of state or free energy where the phase transformation happens? We can also rewrite this pressure to express it in terms Answer As before, we need to have identical pres- N of the number density n ≡ V , which I find a little sure, temperature and chemical potential. So we more intuitive than imagining the volume changing: need to check which of these equal pressure states have the same chemical potential. kT 2 p = 1 − n a (10.46) n − b n = kT − n2a (10.47) 1 − nb Common tangent So this tells us that as we increase the density from Most approaches require us to work with the zero, the pressure will begin by increasing linearly. Helmholtz free energy rather than the pressure equa- It will end by approaching infinity as the density ap- tion. If we plot the Helmholtz free energy versus 1 volume (with number fixed) the pressure is the nega- proaches b . In between, the attractive term may or may not cause the pressure to do something interest- tive slope. We also need to ensure that the chemical ing. potential (or Gibbs free energy) is identical at the two points. The van der Waals pressure for a few temperatures. This equation is kind of nice, but it’s still pretty con- The van der Waals free energy. fusing because it has three different constants (other than n) in it. We can reduce that further by rewriting it in terms of the packing fraction η ≡ nb, which is the fraction of the volume that is filled with atoms. G = F + pV (10.49)   kT η a ∂F p = − η2 (10.48) = F − V (10.50) b 1 − η b2 ∂V N,T

74 So let us set the Gibbs free energies and pressures a bit better by thinking a bit about the differen- equal for two points: tial of G:

p1 = p2 (10.51) dG = −SdT + V dp (10.56)  ∂F  − = same for each (10.52) ∂V This tells us that the slope of the G versus p curve N,T (at fixed temperature) is just the volume of the G1 = G2 (10.53) system. Since the volume can vary continuously F1 + pV1 = F2 + pV2 (10.54) (at least in the Helmholtz free energy we con- structed), this slope must continuously change F1 − F2 = p (V2 − V1) (10.55) as we follow the path. That explains why we So for two points to have the same Gibbs free energy, have pointy points, since the slope must be the their Helmholtz free energy (at fixed temperature) same on both sides of the curve. The points thus must pass through a line with slope equal to the represent the states where the pressure has an negative of the pressure. If those two points also have extremum, as we change the volume. In between that pressure as their (negative) slope, then they have those two extrema is the range where increasing both equal slope and equal chemical potential, and volume causes the pressure to increase. These are our two coexisting states. This is the common states are mechanically unstable, and thus cannot tangent construction. be observed. The common tangent construction is very commonly used when looking at multiple crystal structures, when Examples of phase transitions you don’t even know which ones are stable in the first place. I’d like to spend just a bit of time talking about the Note The common tangent construction also works wide variety of different phase transitions that can and when we plot F versus n or N. do happen, before we discuss how these transitions can be understood in a reasonably unified way through Gibbs free energy Landau theory.

Another approach to solve for coexistence points is Liquid-vapor to plot the Gibbs free energy versus pressure, each of which can be computed easily from the Helmholtz The liquid-vapor transition is what we just discussed. free energy. When we plot the Gibbs free energy The only fundamental difference between liquid and versus pressure, we find that there is a crossing and a vapor is the density of the fluid. (abrupt) little loop. This loop corresponds to metastable and unstable states, and the crossing point is where the Melting/freezing two phases (liquid and vapor, in our case) coexist. Melting and freezing is similar to the liquid-vapor The van der Waals Gibbs free energy. transition, with the difference however, that there As we increase the temperature, we will find that this cannot be a critical point, since we cannot go from little loop becomes smaller, as the liquid and vapor solid to liquid without a phase transition. (abrupt) densities get closer and closer. The critical point is where it disappears. Sublimation Why does G look like this? We had a good ques- Sublimation is very much like melting. Its major tion about what the “points” represent in the difference happens because of the difference between Gibbs free energy curve. We can understand this a gas and a liquid, which means that there is no

75 temperature low enough that there will not be a gas complicated in terms of their cause and phase dia- in equilibirum with a solid at low pressure. (abrupt) gram. (continuous)

Solid/solid Superfluidity Solid-solid phase transitions are interesting in that A superfluid (and helium 4 is the classic example) has different solid phases have different crystal symme- zero at low temperatures. For helium this tries which make it both possible and reasonable to transition temperature is 2.17K. (continuous) compute (and possibly even observe) properties for different phases at the same density and pressure. (abrupt) Bose-Einstein condensation The transition to having a macroscopic occupation in Ferromagnetism the ground state in a gas of bosons is another phase A ferromagnetic material (such as iron or nickel) will transition. (continuous) spontaneously magnetize itself, although the magne- tized regions do break into domains. When the mate- Mixing of binary systems rial is heated above a given temperature (called the Curie temperature) it is no longer ferromagnetic, but In binary systems (e.g. salt and water, or an alloy of instead behaves as an ordinary paramagnetic material. nickel and iron) there are many of the same phase This is therefore a phase transitions. (continuous) transitions (e.g. liquid/gas/solid), but now we have an additional parameter which is the fraction of each Ferroelectrics component in the phase. Kittel and Kroemer have a whole chapter on this kind of phase transition. A ferroelectric material is a material that has a sponta- neous electric dipole polarization at low temperatures. It behaves very much like an electrical analogue of a Landau theory ferromagnetic material. (continuous) There are so many kinds of phase transitions, you Antiferromagnetism might wonder whether they are all different, or if we can understand them in the same (or a similar) way. An antiferromagnetic material (such as nickel oxide) Landau came up with an approach that allows us to will have different atoms with oppositely polarized view the whole wide variety of phase transitions in a spin. This is less easy to observe by elementary school unified manner. children than ferromagnetism, but is also a distinct phase, with a phase transition in which the spins The key idea is to identify an order parameter ξ, become disordered. (continuous) which allows us to distinguish the two phases. This order parameter ideally should also be something that Superconductivity has interactions we can control through some sort of an external field. Examples of order parameters: A superconductor at low temperatures has zero elec- trical resistivity. At higher temperature it is (for Liquid-vapor volume or density ordinary superconductors) an ordinary metal. Lead Ferromagnetism magnetization is a classic example of a superconductor, and has a Ferroelectrics electric polarization density transition temperature of 7.19K. You see high-Tc su- Superconductivity or superfluidity quantum perconductors in demos more frequently, which have mechanical amplitude (including phase) transition temperatures up to 134K, but are more Binary mixtures fraction of components

76 The key idea of Landau is to express a Helmholtz free the free energy by setting its derivative to zero. energy as a function of the order parameter: ∂F  L = 0 (10.61) FL(ξ, T ) = U(ξ, T ) − TS(ξ, T ) (10.57) ∂ξ T 3 = α(T − T0)ξ + g4(T0)ξ (10.62) Now at a given temperature there is an equilibrium value for the order parameter ξ0, which is determined This has two solutions:

by minimizing the free energy, and this equilibrium 2 α ξ = 0 ξ = (T0 − T ) (10.63) order parameter defines the actual Helmholtz free g4(T0) energy. If T > T0 there is only one (real) solution, which is F (T ) = FL(ξ0,T ) ≤ FL(ξ, T ) (10.58) that the order parameter is zero. Thus when T > T0, we can see that So far this hasn’t given us much. Landau theory becomes powerful is when we expand the free energy F (T ) = g0(T ) (10.64) as a power series in the order parameter (and later as exactly, since ξ = 0 causes all the other terms in the a power series in temperature). Landau free energy to vanish.

In contrast, when T < T0, there are two solutions that A continuous phase transition p are minima (± (T0 − T )α/g4(T0)), and one maxi- To make things concrete, let us assume an order pa- mum at ξ = 0. In this case the order parameter rameter with inversion symmetry, such as magnetiza- continuously (but not smoothly) goes to zero. This tells us that the free energy at low temperatures will tion or electrical polarization. This means that FL must be an even function of ξ, so we can write that be given by 2 1 1 α 2 F (ξ, T ) = g (T ) + g (T )ξ2 + g (T )ξ4 + ··· F (T ) = g0(T ) − (T − T0) (10.65) L 0 2 2 4 4 4g4(T0) (10.59) Small groups Solve for the entropy of this system when the temperature is near T . The entire temperature dependence is now hidden in 0 Answer We can find the entropy from the free energy the coefficients of the power series. A simple example by considering its total differential where we could have a phase transition, would be if the sign of g2 changed at some temperature T0. In dF = −SdT − pdV (10.66) this case, we could do a power series expansion of our coefficients around T0, and we would have something which tells us that like: ∂F  −S = (10.67) 1 1 ∂T F (ξ, T ) = g (T ) + α(T − T )ξ2 + g (T )ξ4 V L 0 2 0 4 4 0 (10.60) Let’s start by finding the entropy for T < T0: 2 dg0 α where I am ignoring the temperature dependence of S< = − − (T0 − T ) (10.68) dT 2g4(T0) g4, under the assumption that it doesn’t do anything too fancy near T0. I’m leaving g0(T ) alone, because When the temperature is high, this is easier: it causes no trouble, and will be useful later. I’m also going to assume that α and g (T ) are positive. Now dg0 4 0 S = − (10.69) we can solve for the order parameter that minimizes < dT

77 This tells us that the low-temperature phase has Note that this has four solutions. Two have ξ < 0, an extra-low entropy relative to what it would and show up because our free energy is even. One have had without the phase transition. How- of the other solutions is a local maximum, and ever, the entropy is continuous, which means that the final solution is a local minimum. For this to there is no latent heat associated with this phase have a real solution, we would need for the thing transition, which is called a continuous phase in the square root to be positive, which means transition. An older name for this kind of phase 2 transition (used in the text) is a second order g4(T ) ≥ 4g6α(T − T0) (10.75) phase transition. Currently “continuous” is It would be tempting to take this as an equality prefered for describing phase transitions with no when we are at the phase transition. However, latent heat, because they are not always actually that is just the point at which there is a local second order as is this example. minimum, but we are looking for a global mini- Examples of continuous phase transitions include fer- mum (other than ξ = 0). This global minimum romagnets and superconductors. will require that

An abrupt phase transition FL(ξ > 0) < FL(ξ = 0) (10.76) To get an abrupt phase transition with a nonzero which leads us to conclude that latent heat (as for melting or boiling), we need to 1 1 1 α(T − T )ξ2 − |g (T )|ξ4 + g ξ6 < 0 consider a scenario where g4 < 0 and g6 > 0. This 2 0 4 4 6 6 gives us two competing local minima at different val- (10.77) ues for the order parameter. (Note that an abrupt phase transition is also known as a first order phase We can plug in the criterion for an extremum in transition.) the free energy at nonzero ξ to find: 1 2 4 2 1 4 1 6 1 2 1 4 1 6 |g4(T )|ξ − g6ξ ξ − |g4(T )|ξ + g6ξ < 0 FL = g0(T ) + α(T − T0)ξ − |g4(T )|ξ + g6ξ + ··· 2 4 6 2 4 6 (10.78) (10.70) 1 4 1 6 4 |g4(T )|ξ − 3 g6ξ < 0 Small groups if we have time Find the solutions (10.79) for the order parameter, and in particular find a 1 1 criterion for the phase transition to happen. |g (T )| − g ξ2 < 0 4 4 3 6 Answer We want to find minima of our free en- (10.80) ergy. . . At this point we would want to make use of the ∂FL = 0 (10.71) solution for ξ2 above that used the quadratic ∂ξ 3 5 equation. We would then have eliminated ξ from = α(T − T0)ξ − |g4(T )|ξ + g6ξ (10.72) the equation, and could solve for a relationship One solution is ξ = 0. Otherwise, between |g4(T )|, g6, and α(T − T0). 0 = α(T − T ) − |g (T )|ξ2 + g ξ4 (10.73) 0 4 6 Homework for week 9 (PDF) which is just a quadratic. It has solutions when 1. Vapor pressure equation (David) Consider a |g (T )| ± pg (T )2 − 4g α(T − T ) ξ2 = 4 4 6 0 phase transformation between either solid or liq- 2g6 uid and gas. Assume that the volume of the (10.74) gas is way bigger than that of the liquid or

78 solid, such that ∆V ≈ Vg. Furthermore, as- sume that the ideal gas law applies to the gas phase. Note: this problem is solved in the textbook, in the section on the Clausius- Clapeyron equation. dp a) Solve for dT in terms of the pressure of the vapor and the latent heat L and the temperature. b) Assume further that the latent heat is roughly independent of temperature. In- tegrate to find the vapor pressure itself as a function of temperature (and of course, the latent heat). Note that this is a rather coarse approximation, since the latent heat of water varies by about 10% between 0◦C and 100◦C. Still, you will see a pretty cool result, that is roughly accurate (and good enough for the problems below). 2. Entropy, energy, and enthalpy of van der Waals gas (K&K 9.1) In this entire problem, keep results to first order in the van der Waals correction terms a and $b. a) Show that the entropy of the van der Waals gas is

 n (V − Nb) 5 S = Nk ln Q + N 2 (10.81)

b) Show that the energy is

3 N 2a U = NkT − (10.82) 2 V c) Show that the enthalpy H ≡ U + pV is

5 N 2bkT N 2a H(T,V ) = NkT + − 2 2 V V (10.83) 5 2Nap H(T, p) = NkT + Nbp − 2 kT (10.84)

dT 3. Calculation of dp for water (K&K 9.2) Cal- culate based on the Clausius-Clapeyron equation

Figure 10.1: Effects of High Altitude by Randall 79Munroe, at xkcd. dT the value of dp near p = 1atm for the liquid-vapor equilibrium of water. The heat of vaporization at 100◦C is 2260J g−1. Express the result in kelvin/atm. 4. Heat of vaporization of ice (Modified K&K 9.3) The pressure of water vapor over ice is 518 Pa at −2◦C. The vapor pressure of water at its triple point is 611 Pa, at 0.01◦C (see Wikipedia water data page). Estimate in J mol−1 the heat of vaporization of ice just under freezing. How does this compare with the heat of vaporization of water?

80 Chapter 11

Review

Topics are everything that has been covered on home- Heat and work You should remember the expres- work. Problems should be similar to homework prob- sions for differential heat and work lems, but short enough to be completed during the exam. The exam will be closed notes. You should be dQ = T dS (11.6) able to remember the fundamental equations. dW = −pdV (11.7)

and you should be able to use these expressions Equations to remember fluently, including integrating to find total heat or work, or solving for entropy given heat: Most of the equations I expect you to remember date dQ back from Energy and Entropy, with a few exceptions. dS = (11.8) T Thermodynamic identity The thermodynamic identity, including the chemical potential: Efficiency You should know that efficiency is defined as “what you get out” divided by “what you put dU = T dS − pdV + µdN (11.1) in”, and that for a heat engine this comes down to You should be able from this to extract relation- ships such as µ = ∂U  . W ∂N S,V  = net (11.9) Thermodynamic potentials You need to know QH the Helmholtz and Gibbs free energies. Entropy You should remember the Gibbs expression F = U − TS (11.2) for entropy in terms of probability. G = U − TS + pV (11.3) X S = −k Pi ln Pi (11.10) dF = −SdT − pdV + µdN (11.4) i dG = −SdT + V dp + µdN (11.5) Boltzmann probability You should be comfort- You don’t need to remember their differentials, able with the Boltzmann probability, able to but you do need to be able to find them quickly predict properties of systems using them. and use them, e.g. to find out how µ relates to F e−βEi as a derivative. I’ll point out that by remember- Pi = (11.11) ing how to find the differentials, you also don’t Z X need to remember the sign of U − TS, since you Z = e−βEi (11.12) can figure it out from the thermodynamic identity i by making the T dS term cancel. F = −kT ln Z (11.13)

81 Derivative trick You may need to remember the and should be able to use them to make predic- derivative trick for turning a summation into a tions for properties of non-interacting systems of derivative of another summation in order to com- fermions and bosons. This also requires remem- plete a problem. More particularly, I want you bering how to reason about orbitals as essentially not to use an expression for U in terms of Z that independent systems within the grand canonical comes from the derivative trick, without writing ensemble. You should remember that the Planck down the three lines of math (or so) required to distribution for photons (or phonons) is the same show that it is true. as the Bose-Einstein distribution, but with µ = 0. Thermal averages You should remember that the This comes about because photons and phonons internal energy is given by a weighted average: are bosons, but are a special kind of boson that X has no conservation of particle number. U = EiPi (11.14) Density of states You should remember how to use i a density of states together with the above dis- And similarly for other variables, such as N in tributions to find properties of a system of non- the grand canonical ensemble. interacting fermions or bosons Chemical potential You should remember that the chemical potential is the Gibbs free energy per Z particle. hX(ε)i = D(ε)f(ε)X(ε)dε (11.20) G µ = (11.15) N You should also be able to make a distinction As special cases of this, you should be able to find between internal and external chemical potential N (or given N find µ) or the internal energy. We to solve problems such as finding the density as a had a few homeworks where you found entropy function of altitude (or in a centrifuge), if I give from the density of states, but I think that was a you the expression for the chemical potential of bit too challenging/confusing to put on the final an ideal gas (or other fluid). exam. Gibbs factor and sum You should be comfortable Conditions for coexistence You should remember with the Gibbs sum and finding probabilities in that when two phases are in coexistence, their the grand canonical ensemble. temperatures, pressures, and chemical potentials must be identical, and you should be able to e−β(Ei−µNi) make use of this. P = (11.16) i Z X Z = e−β(Ei−µNi) (11.17) i Incidentally, in class we didn’t cover the grand potential (or grand free energy), but that is what Equations not to remember you get if you try to find a free energy using the Gibbs sum like the partition function. If you need a property of a particular system (the ideal Fermi-Dirac, Bose-Einstein, and Planck distributionsgas, the simple harmonic oscillator), it will be given You should remember these distributions to you. There is no need, for instance, to remember 1 the Stefan-Boltzmann law or the Planck distribution. f (ε) = (11.18) FD eβ(ε−µ) + 1 1 Heat capacity I do not expect you to remember the fBE(ε) = (11.19) eβ(ε−µ) − 1 definition of heat capacity (although you proba-

82 bly will remember it). but you should remember what an efficiency is, and should be able to pretty quickly solve for   ∂S the net work and high-temperature heat for a CV = T (11.21) ∂T V,N Carnot engine by looking at it in T /S space. (Or ∂U  similarly for a Carnot refridgerator.) = (11.22) Density of states for particular systems You ∂T V,N need not remember any expression for the   ∂S density of states e.g. for a gas. But given such an Cp = T (11.23) ∂T p,N expression, you should be able to make use of it. Fermi energy You need not remember any particu- I do expect you to be able to make use of these lar expression for the Fermi energy of a particular equations when given. Similarly, you should be system, but should be able to make use of an ex- able to show that the two expressions for CV are pression for the Fermi energy of a system. equal, using the thermodynamic identity. Enthalpy If I give you the expression for enthalpy (U + pV ) you should be able to work with it, but since we didn’t touch it in class, I don’t expect you to remember what it is. Any property of an ideal gas I don’t expect you to remember any property of an ideal gas, includ- ing its pressure (i.e. ideal gas law), free energy, entropy, internal energy, or chemical potential. You should be comfortable with these expressions, however, and if I provide them should be able to make use of them. Stefan-Boltzmann equation You should be able to make use of the expression that

4 I = σBT (11.24)

where I is the power radiated per area of surface, but need not remember this. Clausius-Clapeyron equation You should be able to make use of dp s − s = g ` (11.25) dT vg − v`

but I don’t expect you to remember this. You should also be able to convert between this ex- pression and the one involving latent heat using your knowledge of heat and entropy. Carnot efficiency You need not remember the Carnot efficiency

T  = 1 − C (11.26) TH

83 Chapter 12

Solutions

Here are the solutions for all the homework problems. you are doing in each step. Although these solutions are available even before homework is due, I recommend that you do your best solve the homework problems without checking the solutions. I would encourage you to go so far as to not check the solutions until after you have turned in your homework. This will enable you to practice determining if your answer is correct without knowing Solution for week 1 what the correct answer is. This is what you will have to do after you have graduated (or on the exam)! PDF version of solutions In any case, please include for each homework problem that you solve an explanation of what resources you made use of, whether it be help from other students, 1. Energy, Entropy, and Probabilities these solutions, or some other resource. Please explain in each case how you used the resource. e.g. did you look at the solutions to confirm that your answer was To begin, as the question prompts, we will stick correct, or did you look at the solutions to see how to the probabilities into our expressions for U and start the problem? Your grade on the homework will S. If you knew how everything was going to work not be diminished, even if you essentially copied the out, you could only stick them into the ln Pi, but solution. I’ll act as though I haven’t solved this N  1 I would also appreciate it if you let me know where times before. you got stuck on a given problem. I may address your difficulty in class, or I may choose to change the homework problem for next year. X e−βEi U = E (12.1) i Z Please note that verbatim copying of any solu- i tion (whether it be these, or a homework from X e−βEi e−βEi  S = −k ln (12.2) another student) is plagiarism, and is not per- B Z Z mitted. If you wish to copy a solution, please i use it as a guide for how to do the steps, but perform each step on your own, and ideally add some words of your own explaining what At this point, we can see that there is a chance

84 to simplify the ln. Let’s begin by lumping together the two dβ terms. They look suspiciously similar to our previous e−βEi X −βEi   expression for U, which is unshocking, since Eq. S = −kB ln e − ln Z (12.3) Z 12.9 showed that U was inversely proportional to i β. X e−βEi = −kB (−βEi − ln Z) (12.4)   Z dS S ln Z dβ 1 i dU = − − − dZ kBβ kBβ β β βZ X e−βEi = k (βE + ln Z) (12.5) (12.12) B Z i i dS U 1 = − dβ − dZ (12.13) X e−βEi X e−βEi k β β βZ = k βE + k ln Z B B Z i B Z i i Let’s start by unpacking this dZ: (12.6) X −βEi P P = 1 dZ = e (−Eidβ) (12.14) U i i ¨* > i −βEi¨ −βEi X e ¨ X e X = kBβ ¨ Ei + kB ln Z  −βEi ¨ Z  Z = −dβ e Ei (12.15) ¨i i i (12.7) X e−βEi = −Zdβ E (12.16) S = kBβU + kB ln Z (12.8) Z i S 1 i U = − ln Z (12.9) = −ZUdβ (12.17) kBβ β Yay for recognizing something we have computed I’ll note that I would normally not show so many before! Now let’s put this back in our dU. steps above in my own work, but am being extra dS U 1 thorough in this solution. dU = − dβ − (−ZUdβ) (12.18) kBβ β βZ We are now at the point where we can start think- dS ing thermo, and make use of the thermodynamic = (12.19) k β identity. To do that, let us “zap with d” to see B what dU is in terms of dS and dβ, etc. Whew! Everything cancelled (as it had to, but one algebra error would mess this all up. . . ), and 0 dU = T dS − p¨dV¨* (12.10) we are left with a simple expression that does not have a dβ in sight! This is good, because we dS S ln Z 1 = − dβ + dβ − dZ argued before that with volume held fixed k β k β2 β2 βZ B B 1 (12.11) dU = T dS = dS (12.20) kBβ So far, this may not look promising to you, but 1 T = (12.21) perseverence pays off! kBβ 1 Note I threw out the dV because our statistical β = (12.22) formalism only includes states with a given kBT volume. Including the volume dependence is So we have just proven what you all knew about not complicated, but it requires us to take the relationship between β and T . This is valu- derivatives of Ei with respect to volume, able, because it establishes the connection be- which is a nuisance we can live without for tween the theoretical Lagrange multiplier and now. the temperature defined in thermodynamics.

85 The text uses a different (microcanonical) ap- above. Then we repeat N times, to show proach to establishing the connection between that SN = NS1. There are other ways to the statistical approach and the temperature that show this, e.g. by repeatedly dividing the we define in thermodynamics. system (SN = SN/2 + SN/2). 2. Gibbs entropy is extensive 3. Boltzmann probabilities a) To begin, we remember the relationship be- a) At infinite temperature β = 0, which makes tween probabilities given in the problem: computing probabilities easy: they are all equal. Thus the probabilities are each 1/3. P AB = P AP B (12.23) ij i j The internal energy is given by the average This means that the probability of finding of the energies of the microstates, which in system A in state i while system B is in this case gives us zero. state j is just the product of the separate X U = E P (12.31) probabilities. Now the entropy is i i i all states 1 1 1 X = − + 0 · +  = 0 (12.32) SAB = Pα ln Pα (12.24) 3 3 3 α states of A states of B The entropy is given by the Gibbs expression X X A B A B = Pi Pj ln Pi Pj X S = −k Pi ln Pi (12.33) i j (12.25) i 1 1 1  states of A states of B = −k ln(1/3) + ln(1/3) + ln(1/3) X X 3 3 3 = P AP B ln P A + ln P B i j i j (12.34) i j (12.26) = k ln 3 (12.35) X X X X = P AP B ln P A + P AP B ln P B i j i i j j b) At very low temperatures β  1. Remem- i j i j ber that the probabilities are given by (12.27) −βEi X A A X B X A X B B e = Pi ln Pi Pj + Pi Pj ln Pj Pi = (12.36) i j i j Z β −β (12.28) Z = e + 1 + e (12.37) S S ¨* A 1 1 We can see¨* thatB our “small quantity” for ¨  > > ¨ −β ¨ ! ! a power¨ series should be e , since that X A ¨ A X B X A X B¨ B = P¨i ln Pi  Pj  + Pi  Pis¨j theln P smallj  thing in the partition function. ¨¨   ¨ ¨ i  j  i ¨¨j We can start with the ground state, which (12.29) we expect to be overwhelmingly occupied: = SA + SB (12.30) eβ P = (12.38) 0 eβ + 1 + e−β b) At this stage, we can basically use just words 1 = (12.39) to solve this. We consider the system of N 1 + e−β + e−2β identical subsystems as a combination of  2 −β −2β −¨¨β −2β a single one and N − 1 subsystems, thus ≈ 1 − e + e + ¨e + e + ··· SN = S1 + SN−1 from what we showed (12.40)

86 At the last step, we used a power series I could understand saying that P2 ≈ 0, but approximation for 1/(1−z). We now need to ideally you should give a nonzero answer for gather terms so that we keep all terms to the each probability when asked about very low same order. In this case the best option is temperatures, because none of them are ex- to keep all terms up to e−2β, since that way actly zero. If you have an experimant that we will be able to account for the occupation measures P2 (perhaps state 2 has a distinc- of the highest energy state. Keeping these tive property you can observe), then you terms gives will not find it to be zero at any tempera- ture (unless you have poor resolution), and −β −3β P0 ≈ 1 − e + O e (12.41) it is best to show how it scales. becaue the e−2β terms cancel each other c) At zero temperature, the the above answers out. Thus the ground state will be almost simplify. The lowest energy state is 100% 100% occupied. When we look at the other occupied, and the system has zero proba- two states we will get exponentially smaller bility of being in either of the two higher probabilities: energy states. 1 The internal energy at zero temperature is P = (12.42) thus −, since the system is definitely in the 1 eβ + 1 + e−β ground state. 1 = e−β (12.43) 1 + e−β + e−2β The entropy is maybe a bit tricky, depend- −β ing on whether you remember the value of = e P0 (12.44) −β −β zero times log of zero (which is a bit tricky ≈ e 1 − e (12.45) because it is zero times infinity). I’ll go = e−β − e−2β (12.46) through this in detail here, but you could give a shorter answer (so long as it’s clear). The middle state with zero energy is less X occupied by precisely a factor of e−β. We S = −k Pi ln Pi (12.50) could have predicted this from the Boltz- i mann ratio.  :0 :0 :0 = −k 1 ln 1 +0 ln 0 +0 ln 0 (12.51) e−β P = (12.47) = 0 (12.52) 2 eβ + 1 + e−β −β = e P1 (12.48) The latter two cases could be confusing, since ln 0 = −∞. We resolve this by us- ≈ e−2β (12.49) ing L’Hopital’s rule. And the high energy state is hardly occu- ln P ∞ lim P ln P = lim = (12.53) pied at all, the same factor smaller than the P →0 P →0 1 P ∞ previous state. d ln P = lim dP (12.54) This solution kept all terms that were at P →0 d 1 −2β dP P least order e for each probability, which 1 resulted in a set of probabilities that add up = lim P (12.55) P →0 1 to one. It would also have been reasonable P 2 −β to answer that P0 ≈ 1 and P1 ≈ e , and = lim P (12.56) then discuss that actually the probability of P →0 being in the ground state is not precisely 1. = 0 (12.57)

87 You don’t need to do this more than once In the last two steps there, we made use of yourself, but you do need to remember this properties of log. If these are not obvious to result. you, you absolutely must take the time to review the properties of logarithms. They So the point is that when the temperature are absolutely critical to this course! is zero, the entropy is also zero. This will always happen if the ground state is not 1  ∂S  degenerate. = (12.63) T ∂E V,N d) If we allow the temperature to be negative, 3 1 = NkB (12.64) then higher energy states will be more prob- 2 E able than lower energy states. If the energy 3 E = NkBT (12.65) is small and negative (which was not speci- 2 fied in the question), then the system will Yay. almost always be in the + energy state. b) We just need to take one more derivative, Another behavior with negative tempera- since we already found ∂S  in part (a). tures for this system is that U > 0. For ∂E V,N positive temperatures, the internal energy  ∂2S  3 1 only approaches zero as the temperature = − Nk (12.66) ∂E2 2 B E2 gets very high. If the temperature becomes V,N negative, the energy can exceed zero. For < 0, (12.67) other systems, of course, this will not be the case, but this will be true for any system in where in the last step we only needed to which the energy states are symmetrically assume that N > 0 (natural for the number arranged around zero. of particles) and that the energy E is real (which it always must be). Thus ends the solution. Solution for week 2 Because the second derivative of the entropy is always negative, the first derivative is PDF version of solutions monotonic, which means that the tempera- 1. Entropy and Temperature (K&K 2.1) ture (which is positive) will always increase if you increase the energy of the system and a) We begin by finding the entropy given the vice versa. provided multiplicity. 2. Paramagnetism

S(E,N) = kB log g(E,N) (12.58) We are looking to solve for µtot = 2ms. Since we   want to know the magnetization as a function of = k log CE3N/2 (12.59) B temperature, we’re going to have to solve for the    = k log C + log E3N/2 temperature, which requires us to know S(E) so B we can find 1 = ∂S . (12.60) T ∂E 3 To convert from S(s) to S(E) we need to relate = kB log C + NkB log E the energy to the excess spin s. This relies on 2 (12.61) the energy expression (12.62) E = −B2sm (12.68)

88 which is given. At this point, it is a simple sub- negative in order to maintain a positive tem- E stitution of s = − 2mB : perature. This relates to the energy of a single spin always being either positive or 2 2 − E  negative, with equal and opposite options. S(E) = S − k 2mB (12.69) 0 B N This problem illustrates a weird phe- E2 nomenon: if the energy is positive, then = S0 − kB 2 2 (12.70) 2m B N we must conclude that the temperature is To determine 1/kT , we just need to take a deriva- negative. Furthermore, the temperature dis- tive: continuously passes from ∞ to −∞ as the energy passes through zero. There are differ- 1  ∂S  = (12.71) ent interpretations of these “negative tem- T ∂E V perature” states. You cannot reach them by E heating a system (adding energy via heat- = −k (12.72) B m2B2N ing), and they cannot be exist in equilibrium 1 E if the system has contact with any quantity = − 2 2 (12.73) kBT m B N of material that can have kinetic energy. So I consider these to be unphysical (or non- At this point we just have a bit of algebra to equilibrium) states. Since temperature is an finish the problem. equilibrium proprty, I would not say that E a negative temperature is physically mean- µ = − (12.74) tot B ingful. That said, it is pretty common to 1  m2B2N  make an analogy that can be made to popu- = − − (12.75) lation inversion in a laser, which is a pretty B kT interesting highly non-equilibrium system. m2NB = (12.76) kT 3. Quantum harmonic oscillator (K&K 2.3) As you would expect (if you remember magnetic a) Given the multiplicity, we just need to take susceptibility from electromagnetism), the mag- a logarithm, and simplify. netization is proportional to the magnetic field. Dimensionally, you can recognize NmB as an S(N, n) = k log g(N, n) (12.77) extensive energy, and kT as an intensive energy, (N + n − 1)! = k log (12.78) and m as an intensive magentic moment. So the n!(N − 1)! result is an extensive magnetic moment, as it should be. = k (log(N + n + 1)! − log n! − log(N − 1)!) (12.79) Of interest This relationship is very different ≈ k ((N + n + 1) log(N + n + 1) − n log n − (N − 1) log(N − 1)) than the one we saw in the previous prob- (12.80) lem! Previously, we saw the temperature being proportional to the internal energy, ≈ k ((N + n) log(N + n) − n log n − N log N) and here we see it as inversely proportional, (12.81) meaning that as the energy approaches zero  N + n N + n = k N log + n log the temperature becomes infinite. N n We also previously had an energy that was (12.82) positive. Here we have a negative sign, = k (N log (1 + n/N) + n log (1 + N/n)) which suggests that the energy should be (12.83)

89 You need not simplify your answer this far, Well, didn’t that simplify down nicely? but it is good to get practice simplifying The key was to multiply the first term by answers, particularly involving logarithms. N~ω/E so that it shared a denominator In particular, it is usually helpful at this with the last term (and ended up being equal point in a computation to verify that the and opposite). entropy is indeed extensive. Both N (the Solving for E is not bad at all, now, we’ll number of oscillators) and n (the sum of just take an exponential of both sides: all the quantum numbers of all the oscilla-

tors) are extensive quantities. Thus n/N ~ω N~ω e kT = 1 + (12.92) and N/n are intensive, which is good be- E

cause otherwise we could not add them to N~ω ~ω = e kT − 1 (12.93) 1. Each term is now clearly extensive, and E the entropy behaves as we expect. N ω E = ~ (12.94) ~ω b) Now we want to find E(T ), which will re- e kT − 1 quire us to find S(E) (via simple substitu- Note As I mentioned in the homework, this tion of n = E/~ω) and T from a derivative is the hard way to solve this problem. of that. That said, it wasn’t actually particu-  E  E  N ω  larly hard, you just need to be comfort- S(E) = Nk log 1 + + Nk log 1 + ~ N~ω N~ω E able doing algebra with logarithms, and (12.84) simplifying annoying ratios. Now we just have a derivative to take, and then a mess of algebra to simplify. Solution for week 3

PDF version of solutions 1  ∂S  = (12.85) 1. Free energy of a two state system (K&K T ∂E N,V 3.1, modified) Nk 1 = (12.86) a) The partition function of this two-state sys- 1 + E N~ω N~ω tem is very simple:   1 N~ω + Nk log 1 + (12.87) all states N ω E X ~ Z = e−βEs (12.95) E 1 N ω − Nk ~ (12.88) s N ω N~ω E2 0 −βε ~ 1 + E = e + e (12.96) And now to simplify. . . = 1 + e−βε (12.97) ω 1  N ω  N~ω ~ = + log 1 + ~ − E Now the free energy is just a log of this: kT 1 + E E 1 + N~ω N~ω E (12.89) F = −kT log Z (12.98) −βε N~ω   N~ω = −kT log 1 + e (12.99) E N~ω E = + log 1 + − − ε  N~ω E N~ω = −kT log 1 + e kT (12.100) 1 + E 1 + E (12.90) We can ask ourselves if this simplifies in   N~ω any limits, and the easiest one is the low- = log 1 + (12.91) − ε E temperature limit where e kT  1. In this

90 limit, the free energy is given by trick” or some other memorized formula in-

− ε volving partition function, but in this class F ≈ −kT e kT (12.101) I want you to always go back to the physics. b) To solve for the internal energy and entropy Plot of entropy vs. temperature we can make use of the definition of the free c) Here is a nice plot of the entropy versus energy the thermodynamic identity: temperature. As you can see, the entropy F ≡ U − TS (12.102) asymptotes to a maximum value of k log 2 as the temperature increases (the dotted dF = dU − T dS − SdT (12.103) line is at that value). This is reasonable = T dS − pdV − T dS − SdT (12.104) because there are only two microstates pos- = −SdT − pdV (12.105) sible, so the maximum possible entropy is k log 2. You can think of the Boltzmann which tells us that we can find the entropy formulation with its log of the number of mi- by taking a derivative: crostates. At high temperatures the system ∂F  approaches this maximum entropy state, in S = − (12.106) which both states are equally probable. ∂T V  kT   Plot of entropy vs. energy − kT  − kT = k log 1 + e + −  e 2 1 + e kT kT d) Now let us look at this plot of the entropy (12.107) as a function of internal energy. The first   1 − kT  thing you can note (and that the problem = k log 1 + e +  T 1 + e kT asks about) is that the inernal energy only (12.108) 1 goes up to 2 . This may be counterintuitive, since the maximum energy of the system is I note here that entropy has dimensions of . The reason the internal energy maxes energy per temperature, and so do both of out halfway there is because no matter how my terms, so we’re looking good so far. It hot the system gets, it will never occupy the is also worth checking that our entropy is higher energy state with greater probability positive, which it should always be. In this than the lower energy state, so we can never case each term is always positive so we are get it to have more than a 50% probability good. Interesting note: the first time I of being in that state with energy . solved this I lost the minus sign, and things did not make sense! Now to find If we had approached this problem from U I just need to add TS to my free energy. a microcanonical perspective, where we choose the energy and then solve for the U = F + TS (12.109) entropy and temperature, we could have 0 : specified an internal energy greater than − ε  = −kT log1 + e kT (12.110) 1 , and would have found that the entropy  2 : 0 decreases for these energies, and that the − 1   kT  temperature therefore is negative. This is + kTlog 1 + e +    1 + e kT discussed in Schroeder 3.3 and K&K Ap- 1 =   (12.111) pendix E. 1 + e kT 2. Magnetic susceptibility I will point out that you could have solved for the internal energy using the “derivative a) Before anything else, I’ll define the energy

91 of a single spin to be given by: don’t know the derivative of a tanh, since I couldn’t remember it myself. E± = ∓mB (12.112) (12.113)  ∂  eβmB − eβ(−m)B χ = nm βmB β(−m)B where the ± refers to the direction of the ∂B T e + e spin. We just need to find the partition (12.122) function (and thus free energy) for a single 2 ! eβmB + e−βmB eβmB − e−βmB spin and then multiply that free energy by = nm βmB −βmB − 2 βm N to find a total free energy. e + e (eβmB + e−βmB) (12.123) βmB −βmB Z1 = e + e (12.114) 2 ! nm2 eβmB − e−βmB F = −NkT ln Z (12.115) = 1 − kT (eβmB + e−βmB)2 βmB −βmB = −NkT ln e + e (12.116) (12.124) 2 2 This gives us the free energy, and clearly nm2 eβmB + e−βmB − eβmB − e−βmB F = is only a function of βmB. This tells 2 NkT kT (eβmB + e−βmB) us that the once we know the behavior at (12.125) one temperature and all magnetic fields we nm2 4 actually know the behavior at all tempera- = (12.126) tures. kT (eβmB + e−βmB)2 b) To find the magnetization (which is a per volume quantity), we start by finding the It’s not necessary to fully simplify this an- average magnetization of a single spin: swer, but it is helpful to practice making your answers as pretty as possible, as it hmi = mP+ − mP− (12.117) tends to make it easier to understand and easier to reason about. eβE+ − eβE− = m (12.118) eβE+ + eβE− Looking at this solution, you can see that eβmB − eβ(−m)B = m (12.119) the susceptibility is always positive (as it βmB β(−m)B e + e must be), and that it is proportional to the = m tanh (βmB) (12.120) density of spins as it must be (*interesting note: when I solved this at first I omitted To find the magnetization M, we just need the factor of n and only when doing this to know the total dipole moment per unit reasoning caught the error). You can see volume, which is just the mean dipole mo- that at large B field (either positive or neg- ment of a single spin times the number of ative) the susceptibility vanishes (because spins per unit volume. Thus everything is already pointing the right way). mB  You can see that the susceptibility is an even M = n hmi = nm tanh (12.121) function of B, which reflects the symmetry kT of the system under a redefinition of the as given in the problem. To find the suscep- “up” direction. tibility, we now just need to take a deriva- tive of this thing with respect to B while c) To solve this at high temperatures, we just holding temperature fixed. I’ll assume you need to do a power series expansion of χ for

92 small values of βmB. Thus is algebra:

−β ω e±βmB ≈ 1 ± βmB (12.127) e ~ Z = Z − 1 (12.133) −β ω nm2 4 1 − e ~ Z = 1 (12.134) χ ≈ kT 2 1 (1 + βmB + 1 − βmB) Z = (12.135) (12.128) 1 − e−β~ω nm2 F = −kT ln Z (12.136) = (12.129) kT = kT ln 1 − e−β~ω (12.137) and we can see that this is accurate even to first order in B. In fact the symmetry of where in the last step I used the properties the system prevents χ from having any odd of a logarithm to put the denoninator on terms in its power series. You might wonder top. why m shows up squared. That is because Harmonic oscillator entropy vs. temperature one factor of m comes from the magnetic moment causing the spins to align with the b) To solve for the entropy, we just need to magnetic field, while the other factor comes remember (or derive) the total differential from the fact that the total dipole moment of the free energy: is also proportional to m. dF = −SdT − pdV + µdN (12.138) 3. Free energy of a harmonic oscillator ∂F  S = − (12.139) a) We start as usual by writing down the par- ∂T V,N tition function, from which finding the free energy is easy. So let’s start taking a derivative!   ∞ −β ω kT −β ω ~ω X S = −k ln 1 − e ~ − −e ~ Z = e−βn~ω (12.130) 1 − e−β~ω kT 2 n=0 (12.140)

Here we need a little trick, which is how where the trickiest part was getting the right you do a geometric sum. The trick involves number of factors of −1 in the second term. multiplying by e−β~ω (with no n) on both ω e−β~ω sides of the above. This gives us S = −k ln 1 − e−β~ω + ~ T 1 − e−β~ω ∞ (12.141) X e−β~ωZ = e−β(n+1)~ω (12.131) −β ω ~ω 1 n=0 = −k ln 1 − e ~ + T eβ~ω − 1 (12.142) Now we can shift n by 1 in this infinite sum:

∞ which is the same as the equation in the X e−β~ωZ = e−βn~ω (12.132) problem. You can look at the plot to see n=1 what’s going on. At temperatures much less than ~ω/k the entropy rapidly approaches Now we can observe that the right hand zero, because the oscillator is essentially al- side is just the original expression for Z ways in the ground state. If you plot further missing the n = 0 term, after which the rest out, you can see that the entropy increases

93 without bound. At high temperatures, you we’ll need to involve the Boltzmann factor). can work out that the entropy varies log-     ∂U X ∂Pi arithmically with temperature (as demon- = E (12.149) ∂T i ∂T strated in the figure), which you could have V i V predicted if you remembered the equiparti- X  ∂ e−βEi  = E (12.150) tion theorem (or if I had taught it already?). i ∂T Z i V

 −βEi −βEi  X e Ei e ∂Z = E − 4. Energy fluctuations There are two ways to i Z kT 2 Z2 ∂T i approach this problem: from the right or from (12.151) the left. I will show how to approach it from the X e−βEi E2 X e−βEi ∂Z left, but you could alternatively start by taking = i − E Z kT 2 i Z2 ∂T a derivative of U and work from there. i i (12.152)

We begin by writing down an expression for the At this point we can interpret a couple of the sums mean energy (or internal energy). physically, and then cope with the derivative of the partition function (which you have probably seen before). X U = hεi = E P (12.143) ∂U  hε2i 1 ∂Z i i = − U (12.153) i 2 ∂T V kT Z ∂T So let’s do that derivative of the partition func- Now let’s look at the variation that we are looking tion. for: 1 ∂Z 1 X ∂e−βEi = (12.154) Z ∂T Z ∂T i X 1 X Ei (ε − U)2 = (E − U)2P (12.144) = e−βEi (12.155) i i Z kT 2 i i X 2 X X 2 1 e−βEi = Ei Pi − 2 EiUPi + U Pi X = 2 Ei (12.156) i i i kT Z i (12.145) U X 2 2 2 = 2 (12.157) = Ei Pi − 2U + U (12.146) kT i Thus we see that X 2 2 = Ei Pi − U (12.147) ∂U  hε2i U 2 = − (12.158) i ∂T kT 2 kT 2 = hε2i − hεi2 (12.148) V (ε − U)2 = (12.159) kT 2 So far we haven’t done anything thermal, we have And thus we have shown that the fluctuations in just used the properties of weighted averages. the energy are proportional to the heat capacity CV as we were told to show. One can find a similar relationship for the fluctuation of any Now let’s start working from the right, since it’s thermal variable, e.g. the fluctuation-dissipation not obvious where to go from here (except that theorem (which we will not cover in class).

5. Quantum concentration (K&K 3.8)

We need to start by finding the formula for the ground state energy of a particle in a box. It's all right if you just remember this, but it also isn't hard to figure out without doing any complicated math or boundary conditions. The particle needs to have a half-wavelength of L in each direction in order to fit in the box, thus λ = 2L for each direction. This means that the wave vector is given by k_x = k_y = k_z = ±2π/2L. Of course, the particle isn't in a traveling wave, but rather in a superposition of the ± versions of these traveling waves (i.e. a standing wave). The kinetic energy is given by

KE = p²/2M    (12.160)
= ℏ²(k_x² + k_y² + k_z²)/2M    (12.161)
= 3ℏ²π²/(2ML²)    (12.162)

Now as instructed we set the kinetic energy to kT and solve for the density, given by n = 1/L³.

kT = 3ℏ²π²/(2ML²)    (12.163)
= 3ℏ²π²/(2M n^{-2/3})    (12.164)
n^{-2/3} = 3ℏ²π²/(2MkT)    (12.165)
n = (2MkT/(3ℏ²π²))^{3/2}    (12.166)

As predicted, this differs from n_Q by a factor of (4/(3π))^{3/2}, which is of order unity. Its value is around 0.3.

Let's remind ourselves: this is the density at which quantum stuff becomes really important at a given temperature. In this problem, we showed that the density n_Q is basically the same as the density of a single particle that is confined enough that its kinetic energy is the same as the temperature.

6. One-dimensional gas (K&K 3.11)

Consider an ideal gas of N particles, each of mass M, confined to a one-dimensional line of length L. Find the entropy at temperature T. The particles have spin zero.

To find the entropy at temperature T we need first to consider the energy eigenstates of this system. We could use either periodic boundary conditions or a particle-in-a-box potential to confine the particles. The kinetic energy of a plane wave is given by

E_k = ℏ²k²/2m    (12.167)

For a particle in a box, a half-integer number of wavelengths must fit in the box:

L = nλ/2    (12.168)
= (n/2)(2π/k)    (12.169)
k_n = πn/L    (12.170)

Thus, the energy is given by

E_n = (ℏ²/2m)(πn/L)²    (12.171)
= π²ℏ²n²/(2mL²)    (12.172)

No, you don't need to derive this, but in my opinion it is easier to derive than to remember. If I were not writing up solutions, I would have done several of the steps above in my head.

Now that we have our energy, we can start thinking about how to find the partition function for a single particle. We can get away with this because the particles are non-interacting, so the total energy is just a sum of the energies of each individual particle.

Z₁ = ∑_{n=1}^∞ e^{-βE_n}    (12.173)
= ∑_{n=1}^∞ e^{-βπ²ℏ²n²/2mL²}    (12.174)

This is the point where we typically step back and tell ourselves that k_BT ≫ π²ℏ²/2mL² (because the distance L is macroscopic, and we aren't looking at crazy-low temperatures), which means that we can turn the sum into an integral:

Z₁ ≈ ∫₀^∞ e^{-βπ²ℏ²n²/2mL²} dn    (12.175)

The smart move here is to do a u substitution, to get all the ugly stuff out of our integral.

u = √(β/2m) (πℏ/L) n,   du = √(β/2m) (πℏ/L) dn    (12.176)

This gives us a simple gaussian integral:

Z₁ ≈ (L/πℏ) √(2mkT) ∫₀^∞ e^{-u²} du    (12.177)

The value of the gaussian integral here doesn't have any particular physical impact, since it is just a dimensionless numerical constant. It does, of course, impact the numerical value.

Gaussian integrals: You are welcome to look up the value of integrals like this, or memorize the answer (I always just remember that it's got a √π in it, which doesn't help much). I'll show you here how to find the value of a gaussian integral, which is a nice trick to be aware of.

I_G ≡ ∫₀^∞ e^{-u²} du    (12.178)
= ½ ∫_{-∞}^∞ e^{-u²} du    (12.179)
(2I_G)² = ( ∫_{-∞}^∞ e^{-u²} du )²    (12.180)
= ( ∫_{-∞}^∞ e^{-x²} dx )( ∫_{-∞}^∞ e^{-y²} dy )    (12.181)
= ∫_{-∞}^∞ ∫_{-∞}^∞ e^{-(x²+y²)} dx dy    (12.182)
= ∫₀^∞ ∫₀^{2π} e^{-r²} r dφ dr    (12.183)
= 2π ∫₀^∞ e^{-r²} r dr    (12.184)
ξ = r²,   dξ = 2r dr    (12.185)
(2I_G)² = 2π ∫₀^∞ e^{-ξ} dξ/2    (12.186)
= π    (12.187)
I_G = √π/2    (12.188)

The trick was simply to square the integral, and then move from Cartesian to polar coordinates. This only works because we are integrating to ∞.

Back to our original task, we have now found that

Z₁ ≈ √(πmkT/2) L/(πℏ)    (12.189)

To find the entropy, we will want to construct the Helmholtz free energy. We will need the entire partition function, which will have the N! in it to avoid double-counting states, since these are indistinguishable particles.

Z = Z₁^N/N!    (12.190)
F = −k_BT log Z    (12.191)
= −k_BT log(Z₁^N/N!)    (12.192)
= −k_BT (N log Z₁ − log N!)    (12.193)
≈ −k_BT (N log Z₁ − N log N + N)    (12.194)
= −Nk_BT (log Z₁ − log N + 1)    (12.195)
= −Nk_BT ( log(√(πmkT/2) L/(πℏ)) − log N + 1 )    (12.196)
= −Nk_BT ( log(√(mkT/2πℏ²) L/N) + 1 )    (12.197)

Here we are at a good point to check whether our answer makes sense. Firstly, we can check that F is indeed extensive. It is, since it is proportional to N, and the only other extensive quantities in it are the L and N in the logarithm, and they form a ratio, which is therefore intensive. We can also check dimensions. We know that ℏ²k²/2m is an energy, which means that ℏ² has dimensions of energy times distance squared. The kT cancels the energy, and the square root turns the resulting one over distance squared into an inverse distance, which happily cancels with the L, so the argument of our logarithm is dimensionless.

Now we will get to the answer pretty soon! Recall that:

dF = −S dT − p dV    (12.199)

(although the second term should have a dL for a 1D system) which means that

−S = (∂F/∂T)_{L,N}    (12.200)
S = −(∂F/∂T)_{L,N}    (12.201)
= Nk_B ( log(√(mkT/2πℏ²) L/N) + 1 ) + Nk_BT (1/2T)    (12.202)
= Nk_B ( log(√(mkT/2πℏ²) L/N) + 3/2 )    (12.203)

This is our final answer. It looks shockingly like the entropy of a 3D ideal gas, right down to the quantum length scale (which is no longer a quantum density), commonly called the "thermal de Broglie wavelength."

Nasty logs: In computing the derivative of a nasty logarithm (which I did in my head above), you can use the following shortcut, provided you remember the properties of logarithms:

log(√(mkT/2πℏ²) (L/N)) = log(√(mkT/2πℏ²)) + log(L/N)    (12.204)
= ½ log(mkT/2πℏ²) + log(L/N)    (12.205)
= ½ log(T) + ½ log(mk/2πℏ²) + log(L/N)    (12.206)

Now you can take a derivative of this, which is way easier than a derivative of the whole mess, and clearly must give you the same answer. There are other ways to do it, but I find this particularly simple, and it has the advantage that it's the same kind of manipulation you're doing all the time anyhow, just to simplify your results. If you do this in your head (or on scratch paper), you can immediately discard any of the terms that will vanish when you take a derivative of them, which makes it much simpler than what I wrote down.
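If you're curious how good the sum-to-integral approximation for Z₁ really is, here is a quick numerical sketch (my addition, in made-up units with ℏ = m = kT = 1 and a "macroscopic" L):

```python
import numpy as np

hbar = m = kT = 1.0   # arbitrary units; only beta*hbar^2*pi^2/(2*m*L^2) matters
L = 1000.0            # "macroscopic" box: level spacing is tiny compared to kT

n = np.arange(1, 100_000)
E_n = np.pi**2 * hbar**2 * n**2 / (2 * m * L**2)
Z1_sum = np.exp(-E_n / kT).sum()              # direct sum over eigenstates

Z1_integral = L * np.sqrt(m * kT / (2 * np.pi * hbar**2))   # eq. (12.189)

print(Z1_sum, Z1_integral)   # agree to about a part in a thousand
```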

Solution for week 4

PDF version of solutions

1. Radiation in an empty box

a) To find the free energy, we'll first want to find the partition function, so we can take a log of it. That will involve summing over all the possible microstates of the entire system. This can be a little confusing, and there are indeed several ways that you could approach this other than the one I show here. Please feel free to try something else, and talk with me about whether it is also correct.

Summing over microstates will be similar to what we did for the ideal gas, and you've seen things like this before, but I think it's worth talking through again in detail. A microstate is defined by a single quantum number n_{n_x n_y n_z} for each set of n_x, n_y, n_z (plus polarization, which I'll ignore for now). The sum over all microstates then becomes

Z = ∑_{n_{111}=0}^∞ ∑_{n_{112}=0}^∞ ∑_{n_{113}=0}^∞ ⋯ ∑_{n_{∞∞∞}=0}^∞ e^{-βℏ(ω_{111}n_{111} + ω_{112}n_{112} + ω_{113}n_{113} + ⋯ + ω_{∞∞∞}n_{∞∞∞})}    (12.207)

So we've got an infinite number of nested sums (one sum for each mode), each going from 0 → ∞, and an exponential with an infinite number of n's added together. The energy separates (the modes are independent, or the photons don't interact with each other), which means that the sums separate.

Z = ∑_{n_{111}=0}^∞ e^{-βℏω_{111}n_{111}} ∑_{n_{112}=0}^∞ e^{-βℏω_{112}n_{112}} ⋯ ∑_{n_{∞∞∞}=0}^∞ e^{-βℏω_{∞∞∞}n_{∞∞∞}}    (12.208)

Each of these sums can now be done independently...

Z = ( ∑_{n_{111}=0}^∞ e^{-βℏω_{111}n_{111}} )( ∑_{n_{112}=0}^∞ e^{-βℏω_{112}n_{112}} ) ⋯ ( ∑_{n_{∞∞∞}=0}^∞ e^{-βℏω_{∞∞∞}n_{∞∞∞}} )    (12.209)

and this turns our nested sums into a product of sums. We can now write down this product explicitly in a simpler way, and we can plug in the expression for ω_{n_x n_y n_z}. Note also that for every k⃗, there are two polarizations of photon, so we need to square it all. I left that out above, because I thought we had enough indices to worry about.

Z = ( ∏_{n_x=1}^∞ ∏_{n_y=1}^∞ ∏_{n_z=1}^∞ ∑_{n=0}^∞ e^{-β(hc/2L)√(n_x²+n_y²+n_z²) n} )²    (12.210)

Fortunately, the inmost sum over n we have already solved in class (we recognize it as a geometric sum). Thus we get

Z = ( ∏_{n_x=1}^∞ ∏_{n_y=1}^∞ ∏_{n_z=1}^∞ 1/(1 − e^{-(βhc/2L)√(n_x²+n_y²+n_z²)}) )²    (12.211)

This still probably doesn't leave you dancing with joy. Fortunately, there are quite a few simplifications left. Before we start simplifying further, let's go ahead and look at the Helmholtz free energy, which will turn our products into more familiar summations.

F = −kT ln Z    (12.212)
= −2kT ln ∏_{n_x=1}^∞ ∏_{n_y} ∏_{n_z} 1/(1 − e^{-(βhc/2L)√(n_x²+n_y²+n_z²)})    (12.213)
= 2kT ∑_{n_x=1}^∞ ∑_{n_y} ∑_{n_z} ln(1 − e^{-(βhc/2L)√(n_x²+n_y²+n_z²)})    (12.214)

This is starting to look more friendly. As usual (and as the problem instructs us), we'd like to take a large-volume approximation, which will mean that βhc/L ≪ 1. In this limit, we can turn our summation into an integration.

F ≈ 2kT ∭₀^∞ ln(1 − e^{-(βhc/2L)√(n_x²+n_y²+n_z²)}) dn_x dn_y dn_z    (12.215)

This integral is begging to be done in spherical coordinates, and I'll just define n ≡ √(n_x²+n_y²+n_z²) for that integral. I'll also divide by 8 and do the integral over all "n" space, instead of just the positive-n space.

F ≈ (kT/4) ∭_{-∞}^∞ ln(1 − e^{-(βhc/2L)√(n_x²+n_y²+n_z²)}) dn_x dn_y dn_z    (12.216)
= (kT/4) ∫₀^∞ ln(1 − e^{-(βhc/2L)n}) 4πn² dn    (12.217)
= 8πkT (LkT/hc)³ ∫₀^∞ ln(1 − e^{-ξ}) ξ² dξ    (12.218)
= 8π V(kT)⁴/(h³c³) ∫₀^∞ ln(1 − e^{-ξ}) ξ² dξ    (12.219)

At this point we have pretty much solved for the free energy of a vacuum at temperature T. We know it should scale as T⁴, and that it should scale with volume. The latter should have been obvious, since it's the only way that the free energy could be extensive. You might worry about the definite integral, but it is just a dimensionless number! Yes, it matters if we want to know precisely how much radiation to expect from a black body, but we could solve this integral numerically, or we could have just done an experimental measurement to see what this number is. Wolfram Alpha tells us that this number is −2.165. You could tell it should be negative because the thing in the ln is always less than one, meaning the ln is always negative. You might be weirded out that the free energy density is negative, but this just means it is dominated by entropy, since the entropy term is always negative.

Extra fun: Can any of you find an elegant solution to the definite integral below? If so, please share it with me, and I'll share it with the class.

∫₀^∞ ln(1 − e^{-ξ}) ξ² dξ = −π⁴/45    (12.220)

I don't know how to prove this except numerically.

b) We can solve for the entropy straight off:

S = −(∂F/∂T)_V    (12.221)
= −32πkV (kT/hc)³ ∫₀^∞ ln(1 − e^{-ξ}) ξ² dξ    (12.222)

c) We also want to know the internal energy

U = F + TS    (12.223)
= (8 − 32)π V(kT)⁴/(h³c³) ∫₀^∞ ln(1 − e^{-ξ}) ξ² dξ    (12.224)
U/V = −24π (kT)⁴/(h³c³) ∫₀^∞ ln(1 − e^{-ξ}) ξ² dξ    (12.225)

So the internal energy scales the same as the free energy, but is positive, as we would expect.
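Verifying that dimensionless number takes two lines (my addition); scipy happily handles the integrable singularity at ξ = 0:

```python
import numpy as np
from scipy.integrate import quad

value, _ = quad(lambda x: np.log(1 - np.exp(-x)) * x**2, 0, np.inf)
print(value, -np.pi**4 / 45)   # both are about -2.1646
```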

2. Surface temperature of the earth

This problem comes down to balancing the radiation absorbed by the earth with the radiation emitted by the earth. Interestingly, the answer won't change if we drop the assumption that the earth is a perfect black body, so long as its absorption is independent of frequency (which isn't true). The assumption that it remains at constant temperature over day and night is also a bit weak, but fractionally the variation of temperature is actually relatively small.

The total energy radiated by the sun is

P = σ_B T⁴ 4πR²    (12.226)

Now, the fraction f of that power that is absorbed by the Earth is given by

f = (cross-sectional area of earth)/(area of sphere with Earth's orbit)    (12.227)
= πR_E²/(4π AU²)    (12.228)

Okay, now we just need the energy radiated by the earth:

P_E = σ_B T_E⁴ 4πR_E²    (12.229)

Setting the energy absorbed by the earth equal to the energy radiated by the earth gives

σ_B T⁴ 4πR² (πR_E²/(4π AU²)) = σ_B T_E⁴ 4πR_E²    (12.230)
T_E⁴ = (R²/(4 AU²)) T⁴    (12.231)
= ¼ (R/AU)² T⁴    (12.232)
T_E = √(R/(2 AU)) T    (12.233)
= 5800 K √(7×10¹⁰ cm / 3×10¹³ cm)    (12.234)
≈ 280 K    (12.235)

This is a bit cold, but when you consider the approximations it isn't a bad estimate of the Earth's temperature. This neglects the power from radioactivity, and also the greenhouse effect (which is a violation of the assumption that the absorption and emission have the same proportion at all wavelengths).

3. Pressure of thermal radiation

Okay, let's begin with the free energy from class:

F = 8π V(kT)⁴/(h³c³) ∫₀^∞ ln(1 − e^{-ξ}) ξ² dξ    (12.236)

Since this is proportional to volume, its derivative with respect to volume is really easy.

p = −8π (kT)⁴/(h³c³) ∫₀^∞ ln(1 − e^{-ξ}) ξ² dξ    (12.237)

Unfortunately, we've already done our summation over all the modes, so this didn't help us as much as we might have hoped for part (a). To get an expression in terms of the modes, we need to go back to the expression for free energy that had ∑_{n_x}∑_{n_y}∑_{n_z} and recognize that as a sum over modes.

F = kT ∑_j ln(1 − e^{-βℏω_j})    (12.238)

Now we can take our derivative and hope to get an expression involving photons in modes:

p = −(∂F/∂V)_T    (12.239)
= −kT ∑_j (−e^{-βℏω_j}/(1 − e^{-βℏω_j})) (−βℏ dω_j/dV)    (12.240)
= −∑_j ℏ (e^{-βℏω_j}/(1 − e^{-βℏω_j})) dω_j/dV    (12.241)
= −∑_j ⟨n_j⟩ ℏ dω_j/dV    (12.242)

So yay. It worked out as we were told, which is also reasonably intuitive: the pressure is just the total pressure from all the photons.

Now we want to find the actual pressure, and relate it to the internal energy. We can do this two ways. One is to use the above expression for total pressure and compare with U from class. This is totally correct and fine. I'll use a different approach here, since it may be less obvious, and may give insight. We'll take this expression we just found, and see how the mode frequencies change with volume. Recall from class that:

ω_{n_x n_y n_z} = (2πc/L) √(n_x²+n_y²+n_z²)    (12.243)
= (2πc/V^{1/3}) √(n_x²+n_y²+n_z²)    (12.244)

Thus we have that

dω_{n_x n_y n_z}/dV = −(1/3) ω_{n_x n_y n_z}/V    (12.245)

Putting this into our pressure, we see that

p = −∑_j ⟨n_j⟩ ℏ dω_j/dV    (12.246)
= −∑_j ⟨n_j⟩ ℏ (−(1/3) ω_j/V)    (12.247)
= (1/3V) ∑_j ⟨n_j⟩ ℏω_j    (12.248)
= U/(3V)    (12.249)

which tells us that the pressure is one third of the internal energy per volume. You might want a dimension check: work (which is an energy) is dW = p dV, which will remind you that pressure is energy per volume. If you have doubts, you could remind yourself that pressure is force per area, but force is energy per distance (going back to the definition of work).

The factor of 1/3 comes from the fact that we're in three dimensions, plus the linear dispersion relation for light.

4. Heat shields

The middle plane is going to absorb the same rate of energy from the hot side that it gives to the cold side. The net transfer from hot to middle (per area) will be

J_h = σ_B(T_h⁴ − T_m⁴)    (12.250)

while the transfer from middle to cold will be

J_c = σ_B(T_m⁴ − T_c⁴)    (12.251)

Setting these equal tells us that

T_h⁴ − T_m⁴ = T_m⁴ − T_c⁴    (12.252)
2T_m⁴ = T_h⁴ + T_c⁴    (12.253)
T_m = ((T_h⁴ + T_c⁴)/2)^{1/4}    (12.254)

Now we can see that the power transferred per area is given by

J_h = σ_B(T_h⁴ − T_m⁴)    (12.255)
= σ_B(T_h⁴ − (T_h⁴ + T_c⁴)/2)    (12.256)
= σ_B(T_h⁴ − T_c⁴)/2    (12.257)

which is precisely ½ of the power that would have been transferred without the heat shield.
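Here is a short numerical sketch of problems 2 and 4 (my addition, using the same rounded numbers as in the solutions; the 600 K and 300 K shield temperatures are arbitrary examples):

```python
# Problem 2: radiative balance for the Earth.
T_sun = 5800.0   # K
R_sun = 7e10     # cm
AU = 1.5e13      # cm
T_earth = T_sun * (R_sun / (2 * AU)) ** 0.5
print(T_earth)   # about 280 K

# Problem 4: equilibrium temperature of a single heat shield.
T_h, T_c = 600.0, 300.0
T_m = ((T_h**4 + T_c**4) / 2) ** 0.25
print(T_m)       # about 509 K: between T_c and T_h, but closer to T_h
```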

5. Heat capacity of vacuum

a) We can begin with either the entropy or the internal energy, since we know that

C_V = (∂U/∂T)_V    (12.258)
= T (∂S/∂T)_V    (12.259)

Now from the first problem, we know that

U/V = π²(kT)⁴/(15ℏ³c³)    (12.260)

so let us just start with this. We just need a temperature derivative.

C_V = (4π²/15) V k (kT)³/(ℏ³c³)    (12.261)

b) At this point we just need to plug in numbers. I will do a cubic centimeter since I have a solid intuition for 1 mL. It's the amount of liquid vitamin D to give a toddler. Because I know room temperature in eV, I'll be using some idiosyncratic units. I'd encourage you to use whichever units you have the most familiarity with. To start, let's find the ratio

kT/ℏc = 25×10⁻³ eV / (6.6×10⁻¹⁶ eV·s · 3×10¹⁰ cm/s)    (12.262)
≈ 1.26×10³ cm⁻¹    (12.263)

where you really shouldn't trust all my digits given how I rounded things. Now since π² ≈ 10...

C_V ≈ (4·10/15) (1 cm³) (1.26×10³ cm⁻¹)³ k_B    (12.264)
≈ 10⁻¹³ J/K    (12.265)

In comparison, the heat capacity of one mL of water is 4.2 J/K (you can easily find this using Google). So the vacuum indeed has a very low heat capacity.

Solution for week 5

PDF version of solutions

1. Centrifuge

The key concept here is to recognize that when working with a rotating system such as a centrifuge, there is classically a centrifugal potential energy. This energy serves as the external chemical potential, and allows us to solve for the properties of the gas by setting the total chemical potential equal everywhere, and solving for the internal chemical potential, which we can relate to the concentration.

[Figure 12.1: Centrifugal Force by Randall Munroe, at xkcd.]

First we need the centrifugal potential. You may remember this from Central Forces, but you can also solve for it if you remember how to derive the centripetal acceleration. This comes from the second derivative of the displacement of an object moving in a circle.

r⃗(t) = R cos(ωt) x̂ + R sin(ωt) ŷ    (12.266)
d²r⃗/dt² = −ω² r⃗    (12.267)
= F⃗/m    (12.268)

So the centrifugal force is mω²r⃗ (which is outward, opposite the centripetal force). The centrifugal work is the integral of the centrifugal force, which gives us a centrifugal potential energy V = −½mω²r². The potential is negative because the force is outwards.

Now that we have a potential energy, we can find the internal chemical potential from the total:

µ_tot = µ_int − ½mω²r²,   µ_int = µ_tot + ½mω²r²    (12.269)

Finally, we need to use the expression for the chemical potential of an ideal gas

µ_int = k_BT ln(n/n_Q)    (12.270)

or alternatively

n = n_Q e^{βµ_int}    (12.271)

Now we can just plug in our µ_int(r) to find n(r):

n(r) = n_Q e^{β(µ_tot + ½mω²r²)}    (12.272)

We were asked to find the ratio between n(r) and n(0) in order to avoid having to solve for µ_tot or to specify something banal like the total number of molecules.

n(r)/n(0) = n_Q e^{β(µ_tot + ½mω²r²)} / (n_Q e^{βµ_tot})    (12.273)
= e^{β½mω²r²}    (12.274)

As you would expect, the density is higher at larger radii, since the centrifugal force compresses atoms near the edge.

2. Potential energy of gas in gravitational field

We can begin by writing down (from class notes) the expression for the (internal) chemical potential of an ideal gas.

n = n_Q e^{βµ_int}    (12.275)

In this case the external potential is linear

µ_ext = Mgh    (12.276)

The internal chemical potential is the total minus the external, telling us that

n(h) = n_Q e^{β(µ_tot − Mgh)}    (12.277)
= n(0) e^{−βMgh}    (12.278)

We can find the average potential energy by finding the total potential energy and dividing by the number of atoms.

⟨V⟩ = ∫ Mgh n(h) dh / ∫ n(h) dh    (12.279)
= Mg ∫₀^∞ h e^{−βMgh} dh / ∫₀^∞ e^{−βMgh} dh    (12.280)
= (Mg/(βMg)) ∫₀^∞ ξ e^{−ξ} dξ / ∫₀^∞ e^{−ξ} dξ    (12.281)
= kT (1!/0!)    (12.282)
= kT    (12.283)

So that is interesting: our potential energy per atom is just kT, as if we had two degrees of freedom according to equipartition (which we don't). In this case, the equipartition theorem doesn't apply, because the potential is not quadratic.

My favorite integral: I'll just mention that I used here (twice!) my very favorite definite integral:

∫₀^∞ uⁿ e^{−u} du = n!    (12.284)

You can prove this using integration by parts a few times, if need be. But it's really handy to remember this. It comes up very often when working on the hydrogen atom, for instance. And yes, I learned this from an integral table.
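If you don't feel like doing the integration by parts, the claim is easy to spot-check numerically (my addition):

```python
import numpy as np
from math import factorial
from scipy.integrate import quad

for n in range(5):
    value, _ = quad(lambda u: u**n * np.exp(-u), 0, np.inf)
    print(n, value, factorial(n))   # the integral reproduces n!
```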

To find the heat capacity we can start by writing down the internal energy by adding the kinetic and potential energies:

U = (3/2)NkT + NkT    (12.285)
= (5/2)NkT    (12.286)

Then we can find the heat capacity by taking a temperature derivative, noting that the volume is essentially held constant (perhaps infinite, since the column of gas extends upward with no bound?).

C_V = (∂U/∂T)_{V,N}    (12.287)
= (5/2)Nk    (12.288)

and thus the heat capacity per molecule is just

c_V = (5/2)k    (12.289)

3. Gibbs sum for a two level system

Okay, we have three microstates, which I'll call 0, 1, and 2.

N₀ = 0    (12.290)
N₁ = N₂ = 1    (12.291)
E₀ = E₁ = 0    (12.292)
E₂ = ε    (12.293)

Now we remind ourselves that the activity λ is defined by

λ ≡ e^{βµ}    (12.294)

a) The Gibbs sum is just

Z = ∑_i e^{−β(E_i − µN_i)}    (12.295)
= ∑_i e^{β(µN_i − E_i)}    (12.296)
= ∑_i λ^{N_i} e^{−βE_i}    (12.297)
= 1 + λ + λe^{−βε}    (12.298)

b) The average occupancy of the system is given by

⟨N⟩ = ∑_i P_i N_i    (12.299)
= ∑_i N_i λ^{N_i} e^{−βE_i}/Z    (12.300)
= (λ + λe^{−βε})/(1 + λ + λe^{−βε})    (12.301)

c) This is just the probability of the state being at energy ε, which is

P_i = e^{−β(E_i − µN_i)}/Z    (12.302)
P = λe^{−βε}/Z    (12.303)

The average number in this state is equal to the probability.

d) The thermal average energy is even easier: since the energies are zero and ε, the average is just the probability of energy ε times ε.

⟨E⟩ = ε λe^{−βε}/Z    (12.304)

e) Now we're adding one more microstate to the system, which is a microstate with E = ε and N = 2. Our Gibbs sum will just have this one additional term in it.

Z = ∑_i λ^{N_i} e^{−βE_i}    (12.305)
= 1 + λ + λe^{−βε} + λ²e^{−βε}    (12.306)
= (1 + λ)(1 + λe^{−βε})    (12.307)

The separation now comes about because we can now separate the first orbital from the second, and the energy and number are both the sum of the value for the first orbital plus the value for the second orbital.

4. Carbon monoxide poisoning

The main idea here is that because the oxygen and carbon monoxide are in equilibrium with air, we can determine the activities (or equivalently the chemical potentials) of the molecules from the air.

a) We are looking for probabilities of occupancy, so as usual let's start with a Gibbs sum. Right now we only have oxygen, so

Z = 1 + λ_{O₂} e^{−βε_A}    (12.308)
P_{O₂} = λ_{O₂} e^{−βε_A} / (1 + λ_{O₂} e^{−βε_A})    (12.309)
= 1/(1 + e^{βε_A}/λ_{O₂})    (12.310)

We are working to solve for ε_A here...

1/P_{O₂} = 1 + e^{βε_A}/λ_{O₂}    (12.311)
e^{βε_A} = λ_{O₂} (1/P_{O₂} − 1)    (12.312)
ε_A = kT ln(λ_{O₂} (1/P_{O₂} − 1))    (12.313)
= 26.7 meV × ln(10⁻⁵ (1/0.9 − 1))    (12.314)
= 26.7 meV × (−13.7)    (12.315)
= −366 meV    (12.316)

where I used k_B = 8.617×10⁻² meV K⁻¹ and body temperature T = 310.15 K to find kT in meV. This binding energy is quite high, more than a third of an eV! Covalent bonds tend to be a few eV in strength, but they don't reverse without significant effort, whereas it's really important for oxygen to spontaneously unbind from hemoglobin.

b) When we add in carbon monoxide, our hemoglobin will have three possible states, so it will look a heck of a lot like our last homework problem.

Z = 1 + λ_{O₂} e^{−βε_A} + λ_{CO} e^{−βε_B}    (12.317)

We are now asking how strongly carbon monoxide has to bind in order to allow only 10% of the hemoglobin to be occupied by oxygen. So we are again going to be looking at the probability of oxygen occupying the hemoglobin

P_{O₂} = λ_{O₂} e^{−βε_A} / (1 + λ_{O₂} e^{−βε_A} + λ_{CO} e^{−βε_B})    (12.318)

We are looking to isolate ε_B now, since we are told everything else.

1 + λ_{O₂} e^{−βε_A} + λ_{CO} e^{−βε_B} = λ_{O₂} e^{−βε_A}/P_{O₂}    (12.319)

And moving everything to one side gives

λ_{CO} e^{−βε_B} = λ_{O₂} (1/P_{O₂} − 1) e^{−βε_A} − 1    (12.320)
e^{−βε_B} = (λ_{O₂}/λ_{CO}) (1/P_{O₂} − 1) e^{−βε_A} − 1/λ_{CO}    (12.321)

and at long last

ε_B = −kT ln( (λ_{O₂}/λ_{CO}) (1/P_{O₂} − 1) e^{−βε_A} − 1/λ_{CO} )    (12.322)
= −26.7 meV × ln(10² × 9 e^{13.7} − 10⁷)    (12.323)
= −26.7 meV × ln(7.9×10⁸)    (12.324)
= −26.7 meV × 20.5    (12.325)
= −547 meV    (12.326)

So the carbon monoxide doesn't need to be much favored over oxygen energetically (in terms of ratio) in order to crowd out almost all the oxygen, even though there is way less carbon monoxide available. Of course, it is not ratios of energies that matter here so much as energy differences, and that is about 7kT, which is hardly a small difference.
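The arithmetic here is easy to fumble, so here is a small numerical sketch (my addition) reproducing both binding energies from the formulas above:

```python
import numpy as np

kT = 8.617e-2 * 310.15        # meV, at body temperature
lam_O2, lam_CO = 1e-5, 1e-7   # activities fixed by equilibrium with air
P_a, P_b = 0.9, 0.1           # oxygen occupancies in parts (a) and (b)

eps_A = kT * np.log(lam_O2 * (1 / P_a - 1))
print(eps_A)                  # about -366 meV

eps_B = -kT * np.log((lam_O2 / lam_CO) * (1 / P_b - 1) * np.exp(-eps_A / kT)
                     - 1 / lam_CO)
print(eps_B)                  # about -547 meV
```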

Solution for week 6

PDF version of solutions

1. Derivative of Fermi-Dirac function

This just comes down to math on the Fermi function. I'm going to use Newton's notation for the derivative (which I rarely use) because it makes it a bit easier to specify at which value we are evaluating the derivative:

f(ε) ≡ 1/(e^{β(ε−µ)} + 1)    (12.327)
f′(ε) = −(e^{β(ε−µ)} + 1)⁻² e^{β(ε−µ)} β    (12.328)
= −(1/kT) e^{β(ε−µ)}/(e^{β(ε−µ)} + 1)²    (12.329)

Now I will plug in to find f′(µ):

f′(µ) = −(1/kT) e⁰/(e⁰ + 1)²    (12.330)
= −(1/kT) (1/(1 + 1)²)    (12.331)
= −1/(4kT)    (12.332)

so the Fermi function falls off at the chemical potential with a slope of magnitude 1/4kT.

2. Symmetry of filled and vacant orbitals

We are interested in f(µ ± δ), so I'll start by just handling both versions at the same time.

f(µ ± δ) = 1/(e^{β((µ±δ)−µ)} + 1)    (12.333)
= 1/(e^{β(±δ)} + 1)    (12.334)
= 1/(e^{±βδ} + 1)    (12.335)

So we can see that these two things look pretty similar, but they don't look like they should add to one. To show that, I'll add the two together and then use the old multiply-top-and-bottom trick.

f(µ − δ) + f(µ + δ) = 1/(1 + e^{−βδ}) + 1/(1 + e^{βδ})    (12.336)
= e^{βδ}/((1 + e^{−βδ}) e^{βδ}) + 1/(1 + e^{βδ})    (12.337)
= e^{βδ}/(e^{βδ} + 1) + 1/(1 + e^{βδ})    (12.338)
= (1 + e^{βδ})/(1 + e^{βδ})    (12.339)
= 1    (12.340)

We could subtract to get the form that Kittel has, but I think it is more intuitive to think of the two values as adding to one. Thus if one is high, the other must be low.

3. Distribution function for double occupancy statistics

a) This is much like we did in class for the fermions. We will solve for the Gibbs sum, and then for ⟨N⟩.

Z = 1 + e^{−β(ε−µ)} + e^{−β(2ε−2µ)}    (12.341)
= 1 + e^{−β(ε−µ)} + e^{−2β(ε−µ)}    (12.342)

The occupancy is given by

⟨N⟩ = (0·1 + 1·e^{−β(ε−µ)} + 2·e^{−2β(ε−µ)}) / (1 + e^{−β(ε−µ)} + e^{−2β(ε−µ)})    (12.343)
= (e^{−β(ε−µ)} + 2e^{−2β(ε−µ)}) / (1 + e^{−β(ε−µ)} + e^{−2β(ε−µ)})    (12.344)
= (1 + 2e^{−β(ε−µ)}) / (e^{β(ε−µ)} + 1 + e^{−β(ε−µ)})    (12.345)

b) If we have two fermion energy levels with the same energy ε, the total occupancy of the two will just be the sum of their individual occupancies.

⟨N⟩ = f(ε) + f(ε)    (12.346)
= 2/(1 + e^{β(ε−µ)})    (12.347)

It's kind of hard to say how it differs from (a), they are so dissimilar. The low energy (β(ε − µ) ≪ −1) behavior has a different exponential scaling. At high temperatures (or equivalently, ε = µ) both systems have a total occupancy of 1. At high energies, once again the "double-occupancy statistics" occupation has the same exponential behavior, but half as many occupants. This is because at high energies the "double occupied" state becomes irrelevant relative to the "single occupied" state. In the Fermi-Dirac statistics, since they are different orbitals, each orbital contributes the same amount to the occupancy.

4. Entropy of mixing

a) We are considering two sets of atoms with the same temperature and volume. Initially they are separate, and in the end they will be equally mixed (according to intuition: mixed gasses don't unmix, and everything is equal). So we can just use the Sackur-Tetrode entropy for the two scenarios. The key idea that is not obvious here is that we can compute the entropy of each gas separately and add them together even when they are mixed! We will call the initial volume V₀, and the initial entropy S₀. I'll define each gas to have a distinct n_Q, since they presumably have different masses. I'll just use N for the number of each gas type.

S₀/(Nk) = ln(n_{QA} V₀/N) + 5/2 + ln(n_{QB} V₀/N) + 5/2    (12.348)
= ln(n_{QA} n_{QB} (V₀/N)²) + 5    (12.349)

We can rewrite this to look like a single ideal gas with the geometric mean of the two n_Q's, with twice the volume and number:

S₀/(2Nk) = ln(√(n_{QA} n_{QB}) V₀/N) + 5/2    (12.350)

So the total initial entropy would be just twice the individual entropy if the two gasses had the same mass. We now seek the final, mixed entropy S_f. For this, each gas will have a volume of 2V₀.

S_f/(Nk) = ln(n_{QA} 2V₀/N) + 5/2 + ln(n_{QB} 2V₀/N) + 5/2    (12.351)
= ln(n_{QA} n_{QB} (2V₀)²/N²) + 5    (12.352)
= 2 ln(√(n_{QA} n_{QB}) 2V₀/N) + 5    (12.353)
= 2 ln(√(n_{QA} n_{QB}) V₀/N) + 5 + 2 ln 2    (12.354)
S_f/(2Nk) = ln(√(n_{QA} n_{QB}) V₀/N) + 5/2 + ln 2    (12.355)

The challenging step here is to explain why we can just treat each gas individually when they are mixed. One reason is that the two gasses don't interact, so we have separation of variables. Moreover, since they are different kinds of particles, they don't occupy the same orbitals (or you could say the overall wavefunction is just a product of the two individual wave functions A and B).

Another argument we could make would be a thought experiment. Suppose we had two boxes, one of which was entirely permeable to atoms A, but held B in, and the other of which was entirely permeable to B but held in A. With these boxes, without doing any work, we could separate the mixed gasses into unmixed gasses, each with the same volume of 2V₀. Thus the mixed gasses must have the same free energy as two unmixed gasses with twice the volume. By this reasoning, what we are seeing is not the entropy of mixing, but rather the entropy of expansion. (But it is called entropy of mixing, so that is what we call it.)

b) Now we consider a different final entropy, when the particles are identical. From our thermodynamic perspective, this should have the same entropy as the two unmixed (but identical) gasses, just by the extensivity reasoning we always use. But it's worth seeing that happen mathematically. Note that we now have one gas with volume 2V₀ and number 2N.

S_{AA}/(2Nk) = ln(n_Q 2V₀/(2N)) + 5/2    (12.356)
S_{AA}/(Nk) = 2 ln(n_Q V₀/N) + 5    (12.357)

This is the same as S₀, provided the two n_Q's are the same.

The "Gibbs paradox" here is just that if you view this scenario classically (with no wavefunctions), it is not obvious why the behavior of distinguishable and indistinguishable objects should differ. If I have a bunch of billiard balls, writing numbers on them shouldn't affect the pressure they exert on the walls of a container, for instance. The resolution of the classical paradox is to note that as long as the numbers you draw on the balls do not impact their behavior in an experiment, you will predict the same outcome for your experiment whether or not you view the particles as indistinguishable. True, they have a different entropy, but without quantum mechanics (which makes explicit the difference between indistinguishable and distinguishable particles in terms of the wave function) there is no absolute definition of entropy, since there is no unique way to count microstates.

c) I would extract work from the system using a special wall separating the two boxes, which is permeable to A but not B, and another wall that has the inverse permeability. One of these walls will feel a pressure to the right, but not the left, and the other will have the inverse pressure difference. We can then slowly move the two permeable walls apart from one another, and the pressure difference will do work. If we insulate the box, the temperature of the gas will drop due to the First Law (i.e. energy conservation). If we don't insulate the boxes (and go sufficiently slowly), we will cool the room a tiny bit as energy flows into the box through heating.
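A tiny numerical illustration of part (a) (my addition, with arbitrary quantum concentrations and entropies measured in units of k): the mixed entropy exceeds the unmixed entropy by exactly 2Nk ln 2, independent of the two masses.

```python
import numpy as np

nQA, nQB = 3.7, 12.9   # arbitrary quantum concentrations (different masses)
V0, N = 2.0, 1.0       # arbitrary volume and particle number

def sackur_tetrode(nQ, V, N):
    """Ideal gas entropy in units of k."""
    return N * (np.log(nQ * V / N) + 2.5)

S_unmixed = sackur_tetrode(nQA, V0, N) + sackur_tetrode(nQB, V0, N)
S_mixed = sackur_tetrode(nQA, 2 * V0, N) + sackur_tetrode(nQB, 2 * V0, N)

print(S_mixed - S_unmixed, 2 * N * np.log(2))   # identical
```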

5. Ideal gas in two dimensions

a) This requires us to use the eigenstates of the 2D box. We can go ahead and use non-periodic boundary conditions, which gives single-particle energy eigenvalues of

ε_{n_x n_y} = (ℏ²π²/2m)(n_x² + n_y²)/L²    (12.358)

where n_x goes from 1 to ∞, since we have to fit a half-integer number of wavelengths in the box with side length L. We can assume this is a low-density system in the classical limit, since it is described as an ideal gas. Thus we can say that the occupancy of each orbital will be

f(ε) = e^{−β(ε−µ)}    (12.359)

Thus we can add up to find N:

N = ∑_{n_x=1}^∞ ∑_{n_y=1}^∞ f(ε_{n_x n_y})    (12.360)
= ∑∑ e^{−β(ε_{n_x n_y} − µ)}    (12.361)
= ∑∑ e^{−β((ℏ²π²/2m)(n_x²+n_y²)/L² − µ)}    (12.362)
N e^{−βµ} = ∑∑ e^{−βℏ²π²(n_x²+n_y²)/2mL²}    (12.363)
≈ ∫₀^∞∫₀^∞ e^{−βℏ²π²(n_x²+n_y²)/2mL²} dn_x dn_y    (12.364)

Now let's do a change of variables into a dimensionless argument, and let's also change the limits to go down to −∞ and divide by a factor of 2 (per integral).

N e^{−βµ} = ¼ ∫_{−∞}^∞∫_{−∞}^∞ e^{−βℏ²π²(n_x²+n_y²)/2mL²} dn_x dn_y    (12.365)

At this point we want to do a substitution that turns our integral into a dimensionless one. It's a little weird defining x and y as dimensionless coordinates, but it's compact, and it will naturally transition us into polar.

x = √(βℏ²π²/2mL²) n_x,   y = √(βℏ²π²/2mL²) n_y    (12.366)

This gives us, in cartesian coordinates,

N e^{−βµ} = ¼ (2mL²/βℏ²π²) ∬ e^{−(x²+y²)} dx dy    (12.367)
= (mL²/2π²βℏ²) ∬ e^{−(x²+y²)} dx dy    (12.368)

Now we could go into polar coordinates, or we could use the fact that each of these two integrals gives √π. I'll use the latter approach.

N e^{−βµ} = mL²/(2πβℏ²)    (12.369)

Now we can go about solving for the chemical potential (writing A = L² for the area):

−βµ = ln(Am/(N 2πβℏ²))    (12.370)
µ = −kT ln(Am/(N 2πβℏ²))    (12.371)
= kT ln(N 2πβℏ²/(Am))    (12.372)

b) There are a few ways we could solve for the internal energy of the 2D ideal gas. The one most suggested by this chapter would be to add up the probability of each orbital being occupied times the energy of that orbital.

U = ∑∑ ε_{n_x n_y} f(ε_{n_x n_y})    (12.373)
= ∑∑ ε_{n_x n_y} e^{−β(ε_{n_x n_y} − µ)}    (12.374)

At this point we could recognize that there is a derivative trick we can do, since this sum looks so very similar to the sum for N we had earlier. (Note, this problem is also very much solvable by just doing the integral, which isn't much harder than the previous portion.) We can see that

(∂N/∂β)_µ = ∑∑ (µ − ε_{n_x n_y}) e^{−β(ε_{n_x n_y} − µ)}    (12.375)
= µN − U    (12.376)

And thus that

U = µN − (∂N/∂β)_µ    (12.377)

Now we can use our expression for N from before to compute U.

N = (mA/2πβℏ²) e^{βµ}    (12.378)
(∂N/∂β)_µ = −(mA/2πβ²ℏ²) e^{βµ} + (mA/2πβℏ²) µ e^{βµ}    (12.379)
= −NkT + Nµ    (12.380)

Thus the internal energy is

U = µN − N(µ − kT)    (12.381)
= NkT    (12.382)

This matches the equipartition expectation, which would be ½kT per degree of freedom, which in this case is two degrees of freedom per atom.

c) For the entropy of this system we can just add up the entropy of each orbital. This uses a perspective where we recognize each orbital as a separate non-interacting system, with only two relevant eigenstates: occupied or unoccupied. The probability of being occupied is e^{−β(ε−µ)}, so the probability of not being occupied is 1 minus that.

S_orbital = −k ∑_{all microstates of orbital} P_i ln P_i    (12.383)
= −k e^{−β(ε−µ)} ln(e^{−β(ε−µ)}) − k(1 − e^{−β(ε−µ)}) ln(1 − e^{−β(ε−µ)})    (12.384)
= ((ε−µ)/T) e^{−β(ε−µ)} − k(1 − e^{−β(ε−µ)}) ln(1 − e^{−β(ε−µ)})    (12.385)

Here we can use that in the classical limit the occupancy of every orbital is small. Thus the exponential is small, and we can approximate the log.

S_orbital ≈ ((ε−µ)/T) e^{−β(ε−µ)} − k(1 − e^{−β(ε−µ)})(−e^{−β(ε−µ)})    (12.386)
≈ ((ε−µ)/T) e^{−β(ε−µ)} + k e^{−β(ε−µ)}    (12.387)
= (k + (ε−µ)/T) e^{−β(ε−µ)}    (12.388)

In the second approximation, I dropped the term that was proportional to the occupancy squared, since it was much smaller than the other terms. Now to find the total entropy, we can add up the entropy of all the orbitals.

S = ∑∑ (k + (ε_{n_x n_y} − µ)/T) e^{−β(ε_{n_x n_y} − µ)}    (12.389)
= Nk + U/T − Nµ/T    (12.390)
= 2Nk − Nµ/T    (12.391)
= 2Nk − Nk ln(N 2πβℏ²/(Am))    (12.392)
= Nk ( ln(AmkT/(N 2πℏ²)) + 2 )    (12.393)

This looks vaguely like the Sackur-Tetrode equation, but with a number-per-area density rather than a volume density, and a 2 where there would otherwise be a 5/2.

Note: You can also solve for the entropy by finding the free energy as we did in class, and then taking its derivative. That is almost certainly an easier approach.
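Here is a numerical sketch of parts (a) and (b) (my addition, in units where ℏ = m = k = 1, with an arbitrary box size and particle number chosen to keep the gas classical): it fixes the activity so that the occupancies sum to N, then confirms U ≈ NkT by direct summation over orbitals.

```python
import numpy as np

hbar = m = kT = 1.0
L = 200.0          # box side, large enough to be classical
N = 50.0           # particle number (keeps every occupancy << 1)

n = np.arange(1, 800)
eps = (hbar**2 * np.pi**2 / (2 * m * L**2)) * (n[:, None]**2 + n[None, :]**2)

boltz = np.exp(-eps / kT)
activity = N / boltz.sum()          # e^{beta mu}, fixed by the N equation

U = activity * (eps * boltz).sum()  # sum of eps * occupancy over all orbitals
print(U / (N * kT))                 # close to 1, i.e. U = N k T
```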

6. Ideal gas calculations

a) The easy one is the second process: Q₂ = 0, since the process is adiabatic. For the first process, we could either use the change in entropy, the change in free energy, or we could integrate the work. In the latter two cases, we would also invoke the First Law, to argue that the work done on the system must equal the change in internal energy plus the energy added to it by heating. Let's just go with the ∆S approach.

đQ = T dS    (12.394)
Q = ∫ T dS    (12.395)
= T ∫ dS    (12.396)
= T ∆S    (12.397)
= NkT ( ln(n_Q/n_f) + 5/2 − ln(n_Q/n_i) − 5/2 )    (12.398)
= NkT ln(n_i/n_f)    (12.399)
= NkT ln 2    (12.400)

where in the last step I used the fact that the final density was ½ of the initial density.

b) Finding the temperature at the end of the second process requires finding the state with four times the original volume that has the same entropy as twice the original volume. The text finds a relationship between p and V for an adiabatic expansion involving pV^γ, but knowing that result is less useful than deriving it. We have at least two ways to derive this relationship. One would be to use the ideal gas law combined with the internal energy (3/2)NkT and to make use of energy conservation. Since we have recently derived the Sackur-Tetrode equation for the entropy of an ideal gas, we may as well use that.

S = Nk ( ln(n_Q/n) + 5/2 )    (12.401)

We just need to set the two entropies to be equal before and after expansion, keeping in mind that N doesn't change either.

Nk ( ln(n_{Qi}/n_i) + 5/2 ) = Nk ( ln(n_{Qf}/n_f) + 5/2 )    (12.402)
n_{Qi} V_i/N = n_{Qf} V_f/N    (12.403)
n_{Qi} V_i = n_{Qf} V_f    (12.404)
T_i^{3/2} V_i = T_f^{3/2} V_f    (12.405)
T_f = 2^{−2/3} T_i    (12.406)
≈ 189 K    (12.407)

I'll note that you could have skipped a few steps in solving this. But once again, you really need to always keep in mind that n_Q depends on temperature!

c) The increase of entropy of a system in an irreversible process is the same as for a reversible process with the same starting and ending conditions. In this case, an irreversible expansion into vacuum will do no work (since it moves nothing other than the gas itself), which means that it will not change the internal energy (unless energy is transferred by heating). Since for a monatomic ideal gas U = (3/2)NkT, keeping the internal energy fixed means the temperature also remains fixed: there won't be any heating, and the temperature will certainly stay fixed. Thus we can work out the change in entropy using the Sackur-Tetrode equation again.

S_f − S_i = Nk ln(n_Q/n_f) − Nk ln(n_Q/n_i)    (12.408)
= Nk ln(n_i/n_f)    (12.409)
= Nk ln 2    (12.410)

We could also have obtained this by integrating ∆S = ∫ đQ/T for a reversible isothermal expansion, as I think you did in Energy and Entropy.
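For the record, the number quoted in part (b) assumes the gas starts at 300 K (which I believe is what the problem specifies); then:

```python
T_i = 300.0                  # K, assumed initial temperature
T_f = 2 ** (-2 / 3) * T_i    # entropy fixed while the volume doubles
print(T_f)                   # about 189 K
```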

Solution for week 7

PDF version of solutions

1. Energy of a relativistic Fermi gas

There are a couple of ways you could go through this problem. One would be to just integrate to find the Fermi energy, and then to integrate to find the internal energy. It's not bad done that way. The other way, which I'll demonstrate, is to first solve for the density of states, and then use that to find the Fermi energy and U.

D(ε) = 2 (L/2π)³ ∭_{−∞}^∞ δ(ε(k) − ε) d³k    (12.411)
= 2 (L/2π)³ ∫₀^∞ δ(ℏck − ε) 4πk² dk    (12.412)

Note that the factors of two above are for the spin degeneracy. Now changing variables to an energy:

ε̃ = ℏck,   dε̃ = ℏc dk    (12.413)

And we get

D(ε) = 8π (L/2π)³ (1/ℏc)³ ∫₀^∞ δ(ε̃ − ε) ε̃² dε̃    (12.414)
= 8π (L/(2πℏc))³ ε²    (12.415)
= V ε²/(π²ℏ³c³)    (12.416)

a) Solving for the Fermi energy comes down to solving for N.

N = ∫₀^{ε_F} D(ε) dε    (12.417)
= (V/π²ℏ³c³) ∫₀^{ε_F} ε² dε    (12.418)
= (V/π²ℏ³c³) (ε_F³/3)    (12.419)
ε_F = (3π² ℏ³c³ N/V)^{1/3}    (12.420)
= (3π² n)^{1/3} ℏc    (12.421)

just as the problem says. The dimensions are energy because n^{1/3} is an inverse length, which when multiplied by c gives an inverse time; ℏ is energy times time, so we get an energy as we expect.

b) The internal energy at zero temperature (which is the total energy of the ground state) just requires us to integrate the density of states times energy.

U = ∫₀^{ε_F} D(ε) ε dε    (12.422)
= (V/π²ℏ³c³) ∫₀^{ε_F} ε³ dε    (12.423)
= (V/π²ℏ³c³) (ε_F⁴/4)    (12.424)
= ( (V/π²ℏ³c³)(ε_F³/3) ) (3/4) ε_F    (12.425)
= (3/4) N ε_F    (12.426)

The trickiest step was looking back at our previous expression for N to substitute.
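A quick numerical check of both parts (my addition, in arbitrary units with ℏ = c = 1 and a made-up N and V): integrating the density of states recovers N, and U/(Nε_F) comes out to 3/4.

```python
import numpy as np
from scipy.integrate import quad

hbar = c = 1.0
V, N = 1.0, 1000.0

eps_F = (3 * np.pi**2 * N / V) ** (1 / 3) * hbar * c   # eq. (12.421)

def D(eps):
    return V * eps**2 / (np.pi**2 * hbar**3 * c**3)

N_check, _ = quad(D, 0, eps_F)
U, _ = quad(lambda e: D(e) * e, 0, eps_F)

print(N_check)          # recovers N = 1000
print(U / (N * eps_F))  # 0.75, i.e. U = (3/4) N eps_F
```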

2. Pressure and entropy of a degenerate Fermi gas

a) As we saw before (when working with the radiation pressure of a vacuum?) the pressure is given by the thermal average value of the derivative of the energy eigenvalues.

p = −(∂U/∂V)_{S,N}    (12.427)
= ∑_i P_i (−∂E_i/∂V)    (12.428)

The usual challenge here is that fixed temperature is not the same thing as fixed entropy. In this case, when T = 0, we know that the probabilities are all predetermined (via the Fermi-Dirac distribution), and we can just take a simple derivative of the energy we derived in class.

U = (3/5) N ε_F    (12.429)
= (3/5) N (ℏ²/2m) (3π² N/V)^{2/3}    (12.430)
p = −(∂U/∂V)_{S,N}    (12.431)
= (2/3)(1/V) (3/5) N (ℏ²/2m) (3π² N/V)^{2/3}    (12.432)
= (2/5)(N/V) ε_F    (12.433)
= (1/5)(ℏ²/m) 3^{2/3} π^{4/3} (N/V)^{5/3}    (12.434)

This agrees with the expression given in the problem itself, so yay.

b) The entropy of a Fermi gas, when kT ≪ ε_F. We can start with the general form of entropy:

S = −k ∑_{all µstates} P_i ln P_i    (12.435)

We will begin by first finding the entropy of a single orbital, and then adding up the entropy of all the orbitals. One orbital has only two microstates, occupied and unoccupied, which correspondingly have probabilities f(ε) and 1 − f(ε). Before we go any farther, it is worth simplifying the latter.

1 − f = 1 − 1/(e^{β(ε−µ)} + 1)    (12.436)
= (e^{β(ε−µ)} + 1)/(e^{β(ε−µ)} + 1) − 1/(e^{β(ε−µ)} + 1)    (12.437)
= e^{β(ε−µ)}/(e^{β(ε−µ)} + 1)    (12.438)
= 1/(e^{−β(ε−µ)} + 1)    (12.439)

The entropy corresponding to a single orbital thus is

S_orbital = −k( f ln f + (1 − f) ln(1 − f) )    (12.440)
= −k ( (1/(e^{β(ε−µ)}+1)) ln(1/(e^{β(ε−µ)}+1)) + (1/(e^{−β(ε−µ)}+1)) ln(1/(e^{−β(ε−µ)}+1)) )    (12.441)
= (k/(e^{β(ε−µ)}+1)) ln(e^{β(ε−µ)}+1) + (k/(e^{−β(ε−µ)}+1)) ln(e^{−β(ε−µ)}+1)    (12.442)

[Plot of entropy vs. eigenenergy]

This is inherently symmetric as we change the sign of ε − µ, which makes sense given what we know about the Fermi-Dirac distribution. It is less obvious in this form that the entropy does the right thing (which is to approach zero) when |ε − µ| ≫ kT. We expect the entropy to go to zero in this case, and one term very obviously goes to zero, but the other requires a bit more thinking. A simple approach is to plot the entropy, as above, which demonstrates that the entropy does indeed vanish at energies far from the Fermi level.

Using this expression for the entropy of a single orbital, we can solve for the entropy of the whole gas. At the second step below we will make use of the fact that kT ≪ ε_F, which means that the entropy looks very much like a Dirac δ-function that hasn't been properly normalized.

S = ∫₀^∞ D(ε) S_orbital(ε) dε    (12.443)
≈ D(ε_F) ∫_{−∞}^∞ S_orbital(ε) dε    (12.444)
= D(ε_F) ∫_{−∞}^∞ ( (k/(e^{β(ε−µ)}+1)) ln(e^{β(ε−µ)}+1) + (k/(e^{−β(ε−µ)}+1)) ln(e^{−β(ε−µ)}+1) ) dε    (12.445)

This looks nasty, but we can make it dimensionless, and it'll just be a number!

ξ = β(ε − µ),   dξ = β dε    (12.446)

which gives us

S = D(ε_F) k² T ∫_{−∞}^∞ ( ln(e^ξ + 1)/(e^ξ + 1) + ln(e^{−ξ} + 1)/(e^{−ξ} + 1) ) dξ    (12.447)

Now the last bit is just a number, which happens to be finite.

3. White dwarf

a) Showing that something is a given order of magnitude can be both tricky and confusing. The potential energy is exactly given by

U = −(1/2) ∫d³r ∫d³r′ G ρ(r⃗)ρ(r⃗′)/|r⃗ − r⃗′|    (12.448)

as you learned in static fields, where ρ(r⃗) is the mass density. Unfortunately, we don't know what the mass density is as a function of position, or how that function depends on the mass of the white dwarf.

An adequate if not satisfying line of reasoning is to say that the integral above must scale as M², since increasing M will either increase ρ or increase the volume over which the mass is spread. The denominator |r⃗ − r⃗′| is going to on average scale as the radius R. Thus it makes sense that the potential energy would be about ∼ −GM²/R. Another approach here would have been to use dimensional analysis to argue that the energy must be this. Alternatively, you could have assumed a uniform mass density, and then argued that the actual energy must be of a similar order of magnitude.

b) The kinetic energy of the electrons is the kinetic energy of the Fermi gas, which in class we showed to be

KE ∼ N ε_F    (12.449)
∼ N (ℏ²/2m) (N/V)^{2/3}    (12.450)
∼ ℏ² N^{5/3}/(m R²)    (12.451)

where m is the mass of the electron. Then we can reason that the number of electrons is equal to the number of protons, and if the star is made of hydrogen the total mass of the star is equal to the total mass of its protons.

N ≈ M/M_H    (12.452)
KE ∼ (ℏ²/m) (M/M_H)^{5/3} (1/R²)    (12.453)

c) At this stage it is worth motivating the virial theorem from mechanics, which basically says that the magnitude of the average potential energy of a bound system (which is bound by a power law force) is about the same as the average of its kinetic energy. This makes sense in that the thing that is holding a bound state together is the potential energy, while the thing that is pulling it apart is the kinetic energy. If they aren't in balance, then something weird must be going on. BTW, this virial theorem also applies quite well to the quantum mechanical hydrogen atom.

All right, so

GM²/R ∼ (ℏ²/m) (M^{5/3}/M_H^{5/3}) (1/R²)    (12.454)
R ∼ (ℏ²/(m M_H^{5/3} G)) M^{−1/3}    (12.455)

At this point we just need to plug in numbers.

d) Again we need to plug in numbers.

R ∼ (ℏ²/(m M_H^{5/3} G)) M^{−1/3}    (12.456)
R³ ∼ (ℏ²/(m M_H^{5/3} G))³ (1/M)    (12.457)
ρ = M/((4π/3) R³)    (12.458)
∼ M² (m M_H^{5/3} G/ℏ²)³    (12.459)

Plug in numbers.

e) All that changes is that our degenerate gas has mass M_N ≈ M_H, and our total mass is now M = N M_N ≈ N M_H.

GM²/R ∼ (ℏ²/M_H) (M/M_H)^{5/3} (1/R²)    (12.460)
R ∼ (ℏ²/(M_H^{8/3} G)) M^{−1/3}    (12.461)
= ℏ²/(M^{1/3} M_H^{8/3} G)    (12.462)

Plug in numbers to find the neutron star radius in kilometers when its mass is one solar mass.

R ∼ (10⁻²⁷ g cm²/s)² / ( (10³³ g)^{1/3} (10⁻²⁴ g)^{8/3} (10⁻⁷ cm³ g⁻¹ s⁻²) )    (12.463)
∼ 10⁶ cm ∼ 10 km    (12.464)

That's what I call a small-town star!

4. Fluctuations in the Fermi gas

We are looking here at a single orbital, and asking what is the variance of the occupancy number.

⟨(∆N)²⟩ = ⟨(N − ⟨N⟩)²⟩    (12.465)
= ∑_i P_i (N_i − ⟨N⟩)²    (12.466)

Now this single orbital has only two possible states: occupied and unoccupied! So we can write this down pretty quickly, using the probabilities of those two states, which are f and 1 − f. We also note that ⟨N⟩ = f, so we're going to have f all over the place.

⟨(∆N)²⟩ = P₁(1 − f)² + P₀(0 − f)²    (12.467)
= f(1 − 2f + f²) + (1 − f)f²    (12.468)
= f − f²    (12.469)
= ⟨N⟩(1 − ⟨N⟩)    (12.470)

This tells us that there is no variation in occupancy when the occupancy reaches 0 or 1. In retrospect that is obvious. If there is definitely an electron there, then we aren't uncertain about whether there is an electron there.
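Circling back to the entropy of the degenerate Fermi gas: the dimensionless number left over in (12.447) can be found numerically (my addition). It comes out to π²/3, which gives the standard low-temperature result S = (π²/3) D(ε_F) k² T.

```python
import numpy as np
from scipy.integrate import quad

def orbital_entropy(x):
    """Entropy of one orbital, in units of k, with x = beta*(eps - mu)."""
    return (np.log(np.exp(x) + 1) / (np.exp(x) + 1)
            + np.log(np.exp(-x) + 1) / (np.exp(-x) + 1))

value, _ = quad(orbital_entropy, -40, 40)   # integrand dies off exponentially
print(value, np.pi**2 / 3)                  # both about 3.2899
```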

5. Einstein condensation temperature

I'm going to be sloppier on this solution, because this is done in the textbook, and I'm still quite sick. The idea is to set the chemical potential to 0, which is its maximum value, and integrate to find the number of atoms not in the ground state, N_E, which is normally essentially equal to the total number of atoms.

N_E = ∫₀^∞ D(ε) f(ε) dε    (12.471)
= (V/4π²) (2M/ℏ²)^{3/2} ∫₀^∞ ε^{1/2} (1/(e^{βε} − 1)) dε    (12.472)

Naturally at this stage we will want to use a change of variables to take the physics out of the integral, as we typically do.

N_E = (V/4π²) (2M/ℏ²)^{3/2} (kT)^{3/2} ∫₀^∞ (√ξ/(e^ξ − 1)) dξ    (12.473)

Now we can simply solve for T_E.

T_E = (1/k_B) (ℏ²/2M) ( (N/V) 4π² / ∫₀^∞ (√ξ/(e^ξ − 1)) dξ )^{2/3}    (12.474)

Solution for week 8

PDF version of solutions

1. Heat pump

a) To approach this, I'll essentially do a quick re-derivation of the Carnot efficiency, based around the idea that the process is reversible. Since it is a cycle, the state of the system doesn't change, only that of the environment. The hot side in this case gets hotter, while the cool side gets cooler, and the entropy change of each must be equal and opposite.

∆S_H = Q_H/T_H    (12.475)
∆S_C = −Q_C/T_C    (12.476)
∆S_H + ∆S_C = 0    (12.477)
Q_H/T_H = Q_C/T_C    (12.478)
Q_C/Q_H = T_C/T_H    (12.479)

Note that I'm defining all heats and works to be positive, and taking their direction into account (which explains the minus sign in ∆S_C). Now to find the amount of work we had to do, we just need to use energy conservation (i.e. the First Law). The energy inputs to the heat pump are our work W and the heat from the cool side Q_C. The energy output is just Q_H.

W + Q_C = Q_H    (12.480)
W/Q_H = 1 − Q_C/Q_H    (12.481)
= 1 − T_C/T_H    (12.482)

which is just the Carnot efficiency, as predicted. Note however, that in this case this efficiency is not "what we get out divided by what we put in," but rather the inverse of that. So in this case, when T_C ≪ T_H, we have a very inefficient heat pump, since we hardly get any "free" energy.

If the heat pump is not reversible, as always, things look worse: we will need more work to get the same amount of heating. Note here that there are two possible interpretations of the word "reversible." What is meant in the question is that the entropy of the pump and its surroundings doesn't change. From a practical perspective, a heat pump may be described as "reversible" when it can also function as an air conditioner in the summer (as most do).

b) Now we have an engine driving a heat pump. The work output from the engine must equal the work input of the pump. We can recall from class (or reproduce with reasoning like that above) that

W/Q_HH = 1 − T_C/T_HH    (12.483)

Now we just need to eliminate the work to find how much heat input at the very high temperature, Q_HH, we need in order to get a given amount of heat in our home, Q_H.

Q_HH/Q_H = (W/Q_H)/(W/Q_HH)    (12.484)
= (1 − T_C/T_H)/(1 − T_C/T_HH)
= (T_HH/T_H)(T_H − T_C)/(T_HH − T_C)    (12.485)

For the three temperatures requested, this comes out to

Q_HH/Q_H = (600/300)(300 − 270)/(600 − 270)    (12.486)
= 2 · 30/330    (12.487)
= 2/11    (12.488)
≈ 0.18    (12.489)

So you save about a factor of five in fuel by not just burning it in your home to heat it, but instead using it to power a heat pump. Yay. And this is with it freezing outside, and uncomfortably warm inside!

c) Here is a pretty picture illustrating where the energy and entropy come and go. The heats all came from the above computations, while the entropies came from dividing each heat by its temperature. For the energy plot I made a distinction between the heat added or removed by the heat pump (left) and the engine (right). For the entropy plot I just lumped those together.

[Sankey diagram of energy and entropy in the engine-heat-pump combination]

2. Photon Carnot engine

a) The engine starts at T_H and V₁ and first expands isothermally to V₂. Then it expands adiabatically to T_C and V₃. Next it is compressed at fixed temperature to V₄, from which it is compressed adiabatically back to V₁. So we can identify V₃ and V₄ as the two volumes that have the same entropy at T_C as V₂ and V₁ have at T_H.

S₃ = S₂    (12.490)
32πkV₃ (kT_C/hc)³ (π⁴/45) = 32πkV₂ (kT_H/hc)³ (π⁴/45)    (12.491)
V₃ T_C³ = V₂ T_H³    (12.492)
V₃ = V₂ (T_H/T_C)³    (12.493)
V₄ = V₁ (T_H/T_C)³    (12.494)

b) Now we could integrate to find the work, but the easy approach is to find the heat from T_H ∆S, and then use the First Law to find the work.

Q_H = ∫ T dS    (12.495)
= T_H ∆S    (12.496)
= kT_H · 32π(V₂ − V₁) (kT_H/hc)³ (π⁴/45)    (12.497)
= 32π(V₂ − V₁) (kT_H)⁴ π⁴/(45 h³c³)    (12.498)

Now using the First Law...

∆U = Q_H + W_H    (12.499)
24π(V₂ − V₁) (kT_H)⁴ π⁴/(45 h³c³) = 32π(V₂ − V₁) (kT_H)⁴ π⁴/(45 h³c³) + W_H    (12.500)
W_H = −8π (kT_H)⁴ π⁴/(45 h³c³) (V₂ − V₁)    (12.501)

This tells us that the photon gas does work as it expands, like the ideal gas does, but unlike the ideal gas, the work done is considerably less than the heat absorbed by the gas, since its internal energy increases significantly.
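Circling back to the heat pump for a moment, here is the part (b) arithmetic as a one-line check (my addition):

```python
T_HH, T_H, T_C = 600.0, 300.0, 270.0   # K: flame, home, outdoors

Q_HH_over_Q_H = (T_HH / T_H) * (T_H - T_C) / (T_HH - T_C)
print(Q_HH_over_Q_H)   # 2/11, about 0.18
```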

c) To find the work on each adiabatic stage just requires finding ∆U, since Q = 0 for any isentropic (or adiabatic) process. The first adiabatic stage goes from V₂ to V₃ while the temperature changes from T_H to T_C. The internal energy change is thus

∆U₃₂ = W₂    (12.502)
= 24π (k⁴π⁴/(45 h³c³)) (V₃ T_C⁴ − V₂ T_H⁴)    (12.503)
= 24π (k⁴π⁴/(45 h³c³)) ( V₂ (T_H/T_C)³ T_C⁴ − V₂ T_H⁴ )    (12.504)
= 24π (k⁴π⁴/(45 h³c³)) V₂ T_H³ (T_C − T_H)    (12.505)

So the system is losing energy (thus doing work) as we adiabatically expand it down to lower temperature. At the other end, going from V₄ and T_C to V₁ and T_H, we have

∆U₁₄ = W₄    (12.506)
= 24π (k⁴π⁴/(45 h³c³)) (V₁ T_H⁴ − V₄ T_C⁴)    (12.507)
= 24π (k⁴π⁴/(45 h³c³)) ( V₁ T_H⁴ − V₁ (T_H/T_C)³ T_C⁴ )    (12.508)
= 24π (k⁴π⁴/(45 h³c³)) V₁ T_H³ (T_H − T_C)    (12.509)

So as normal we do work while compressing the photon gas adiabatically, and the sign of the work is opposite because we are compressing rather than expanding. However, the amount of work in this case is not equal and opposite to the amount of work done when the gas was adiabatically expanded. This is because V₁ ≠ V₂, which causes the internal energies to be different in the two cases. The ideal gas is unique in that its energy is independent of its density.

d) For the total work, I will add up all four little works. It's a little annoying to do it this way, because I could have just used energy conservation and the fact that my system is a cycle to say that the total work must be equal and opposite to the total heat. I'm completely fine with you doing that, but I think there is some pedagogical utility in seeing that doing all the works individually gives us the same answer, even though we don't see the same detailed cancellation that happens with the ideal gas. Oh, but I'm noticing that I haven't yet computed W_C. I think you can see that it'll be exactly like W_H, only with the temperatures and volumes replaced with their appropriate values. This gives us for W_C:

W_C = −8π (kT_C)⁴ π⁴/(45 h³c³) (V₄ − V₃)    (12.510)
= −8π (kT_C)⁴ π⁴/(45 h³c³) (V₁ − V₂)(T_H/T_C)³    (12.511)
= 8π (k⁴π⁴/(45 h³c³)) (V₂ − V₁) T_H³ T_C    (12.512)

Plugging this in, we can see that:

W = W_H + W₂ + W_C + W₄    (12.513)
= −8π (kT_H)⁴π⁴/(45 h³c³) (V₂ − V₁) + 24π (k⁴π⁴/(45 h³c³)) V₂ T_H³ (T_C − T_H)
  + 8π (k⁴π⁴/(45 h³c³)) (V₂ − V₁) T_H³ T_C + 24π (k⁴π⁴/(45 h³c³)) V₁ T_H³ (T_H − T_C)    (12.514–12.515)
= −8π (k⁴π⁴/(45 h³c³)) T_H³ (V₂ − V₁)(T_H − T_C) − 24π (k⁴π⁴/(45 h³c³)) T_H³ (V₂ − V₁)(T_H − T_C)    (12.516)
= −32π (k⁴π⁴/(45 h³c³)) T_H³ (V₂ − V₁)(T_H − T_C)    (12.517)

I'll take the ratio that I want now. Almost everything will cancel.

W/Q_H = −32π (k⁴π⁴/(45 h³c³)) T_H³ (V₂ − V₁)(T_H − T_C) / ( 32π (V₂ − V₁)(kT_H)⁴ π⁴/(45 h³c³) )    (12.518)
= −(T_H − T_C)/T_H    (12.519)
= −(1 − T_C/T_H)    (12.520)

This is just the Carnot efficiency from class with a minus sign. This sign came from the convention that positive work means work added to a system, which I used in this solution (and is convenient when using the First Law), but differs from the standard convention when discussing engines, where all signs are taken to be positive.

3. Light bulb in a refrigerator

We have a 100 W light bulb in a refrigerator that draws 100 W of power. This means the work done by the refrigerator per second is 100 W. Now we need to ask how efficient the refrigerator can be, to see how much it can cool its inside. The refrigerator operates between two temperatures: the inside (which I'll call T_C) and the room (which I'll call T_H). Energy conservation tells us that

Q_C + W = Q_H    (12.521)

where I've taken the usual convention (for this kind of problem) where all signs are positive, so Q_C is the magnitude of heat drawn from the inside, W is the work done, and Q_H is the amount of heat dumped into the room. If this is a reversible cycle, then the change in entropy of the room must be equal and opposite to the change in entropy of the inside of the fridge. That means that

Q_C/T_C = Q_H/T_H    (12.522)
Q_H = Q_C T_H/T_C    (12.523)

If we have an irreversible fridge, the entropy of the room plus fridge can only go up, which would mean less cooling in the fridge (since the entropy of the inside is going down). Putting these equations together, we can see that

Q_C + W = Q_C T_H/T_C    (12.524)
W = Q_C (T_H/T_C − 1)    (12.525)
Q_C = W/(T_H/T_C − 1)    (12.526)
Q_C/W = 1/(T_H/T_C − 1)    (12.527)

This tells us the efficiency of our refrigerator. As long as this efficiency is greater than 1, our fridge can out-cool the light bulb. So when is this equal to one?

1/(T_H/T_C − 1) = 1    (12.528)
T_H/T_C − 1 = 1    (12.529)
T_H = 2T_C    (12.530)

So our fridge can indeed cool its insides below room temperature even with the light bulb (who ever put a 100 W light bulb in a fridge?!), and could in fact (in principle) cool it down to something like 150 K, which would be crazy cold. Of course, the poor insulation would prevent that, as well as the capabilities of the pumps and refrigerant. Is this the answer you were expecting for this problem? I can tell you that it was not the answer I was expecting. Kind of crazy.

Solution for week 9

PDF version of solutions

1. Vapor pressure equation

a) To solve for dp/dT we will begin with the Clausius-Clapeyron equation derived in class.

dp/dT = L/(T(V_g − V_ℓ))    (12.531)
= L/(T V_g)    (12.532)
V_g = NkT/p    (12.533)
dp/dT = Lp/(NkT²)    (12.534)

b) Now we will assume L is a constant, and solve for the vapor pressure p(T). The key will be to put p and T on separate sides of the equation so we can integrate.

dp/dT = Lp/(NkT²)    (12.535)
(1/p) dp/dT = L/(NkT²)    (12.536)
∫_{T₀}^T (1/p)(dp/dT) dT = ∫_{T₀}^T L/(NkT²) dT    (12.537)
∫_{p₀}^p dp/p = (L/Nk) ∫_{T₀}^T dT/T²    (12.538)
ln(p/p₀) = −(L/Nk)(1/T − 1/T₀)    (12.539)

Now we can solve for p!

p = ( p₀ e^{L/NkT₀} ) e^{−L/NkT}    (12.540)

We could clump all the stuff in parentheses into a big constant without any loss, but I kind of like making it explicit that if we know that (p₀, T₀) is on the coexistence curve then we have a closed solution here. Note again that this makes an assumption about L being independent of temperature that is not entirely accurate.
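Here is a minimal sketch (my addition) of the closed-form coexistence curve from part (b). The reference point and latent heat are rough placeholder values for water, with L/N the latent heat per molecule; don't take the digits too seriously.

```python
import numpy as np

k = 1.381e-23                          # J/K
L_per_N = 2256e3 * 0.018 / 6.022e23    # J per molecule, rough latent heat of water
p0, T0 = 1.013e5, 373.0                # Pa, K: assumed point on the coexistence curve

def vapor_pressure(T):
    # p = p0 * exp(L/(N k T0)) * exp(-L/(N k T)), eq. (12.540)
    return p0 * np.exp(L_per_N / k * (1 / T0 - 1 / T))

for T in (330.0, 350.0, 373.0):
    print(T, vapor_pressure(T))        # rises steeply toward the boiling point
```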

2. Entropy, energy, and enthalpy of van der Waals gas

We will begin with the free energy of the van der Waals gas:

F = −NkT ( ln(n_Q(V − Nb)/N) + 1 ) − N²a/V    (12.541)

a) We can find the entropy as usual by taking a derivative of the free energy.

S = −(∂F/∂T)_V    (12.542)
= Nk ( ln(n_Q(V − Nb)/N) + 1 ) + NkT (1/n_Q)(dn_Q/dT)    (12.543)
= Nk ( ln(n_Q(V − Nb)/N) + 1 ) + NkT (1/n_Q)(3/2)(n_Q/T)    (12.544)
= Nk ( ln(n_Q(V − Nb)/N) + 5/2 )    (12.545)

In the penultimate (second-to-last) step, I used the fact that n_Q ∝ T^{3/2}.

b) We can find the internal energy from F = U − TS now that we know the entropy.

U = F + TS    (12.546)
= −NkT ( ln(n_Q(V−Nb)/N) + 1 ) − N²a/V + NkT ( ln(n_Q(V−Nb)/N) + 5/2 )    (12.547)
= (3/2)NkT − N²a/V    (12.548)

which looks like the monatomic ideal gas internal energy plus a correction term, which depends on the density of the fluid.

c) To find the enthalpy we just need the pressure. We could find it using a derivative of the free energy, but that was done in class and in the text, so we needn't duplicate it.

H = U + pV    (12.549)
p = NkT/(V − Nb) − N²a/V²    (12.550)
H = (3/2)NkT − N²a/V + ( NkT/(V − Nb) − N²a/V² ) V    (12.551)
= (3/2)NkT − 2N²a/V + NkT/(1 − Nb/V)    (12.552)
≈ (3/2)NkT − 2N²a/V + NkT (1 + Nb/V)    (12.553)
= (5/2)NkT − 2N²a/V + N²bkT/V    (12.554)
H(T,p) = (5/2)NkT + Nbp − 2Nap/kT    (12.555)

In the approximate step, we were just doing a power series expansion, since Nb/V ≪ 1. Now we just want to express the enthalpy in terms of pressure, since it is usually used at fixed pressure. That requires us to replace volume with pressure. Solving for volume in terms of pressure is a slight nuisance.

(p + N²a/V²)(V − Nb) = NkT    (12.556)
pV − pNb + N²a/V ≈ NkT    (12.557)
pV² − pNbV + N²a ≈ NkTV    (12.558)
pV² − (pNb + NkT)V + N²a = 0    (12.559)

Now we can use the quadratic equation.

V = ( pNb + NkT ± √((pNb + NkT)² − 4pN²a) ) / (2p)    (12.560)
≈ ( pNb + NkT ± √((NkT)² + 2pN²bkT − 4pN²a) ) / (2p)    (12.561)
= ( pNb + NkT ± NkT √(1 + (2pbkT − 4pa)/(kT)²) ) / (2p)    (12.562)
≈ ( pNb + NkT + NkT (1 + ½(2pbkT − 4pa)/(kT)²) ) / (2p)    (12.563)
= ( pNb + 2NkT + (pNbkT − 2pNa)/kT ) / (2p)    (12.564)
= ( 2pNb + 2NkT − 2pNa/kT ) / (2p)    (12.565)
= NkT/p + Nb − Na/kT    (12.566)

What a nuisance. Each approximation I made eliminated a term that had two or more factors of a or b, which are taken to be small quantities (albeit with dimensions). Note that the first term is just what we get from the ideal gas law. The rest is the first-order correction to the volume. Now that we have an expression for V in terms of p and T:

H = (5/2)NkT + (N²bkT − 2N²a)/V    (12.567)
= (5/2)NkT + (N²bkT − 2N²a)/(NkT/p + Nb − Na/kT)    (12.568)
= (5/2)NkT + (N²bkT − 2N²a)(p/NkT) ( 1/(1 + (Nb − Na/kT)p/NkT) )    (12.569)
≈ (5/2)NkT + (N²bkT − 2N²a)(p/NkT)(1 − (b − a/kT)p/kT)    (12.570)

Okay, this is admittedly looking a little hairy. But remember that we only need to keep terms that are linear in a and b, so that actually just kills our correction term entirely. And after all that work!

H ≈ (5/2)NkT + (N²bkT − 2N²a) p/(NkT)    (12.571)
= (5/2)NkT + N (p/kT)(bkT − 2a)    (12.572)
= (5/2)NkT + Np (b − 2a/kT)    (12.573)

which matches the expected answer. So yay. In retrospect, we could have simplified the solving-for-the-volume bit drastically, if I had noticed that V only occurred in H in a ratio with a small quantity, and thus we didn't need to keep the first order terms; we only needed the zero order term, which would have eliminated most of the work. You are welcome to argue this in your solution, but you should try to argue it well. Otherwise, you'd just want to do all the same tedium I did.

3. Calculation of dT/dp for water (K&K 9.2)

We begin with the Clausius-Clapeyron equation. However, to do this we have to ensure that we get everything in the right units, and such that the volume we divide by corresponds to the same quantity of water as the latent heat on top. Let's say we have one gram of water (which is nice up top); we need the volume of a gram of steam.

∆V ≈ V_g    (12.574)
= NkT/p    (12.575)

I'll start by finding the number of molecules in a

c) To find the enthalpy we just need the pressure. We could find it using a derivative of the free energy, but that was done in class and in the text, so we needn't duplicate it.
\begin{align}
H &= U + pV \tag{12.549}\\
p &= \frac{NkT}{V - Nb} - \frac{N^2}{V^2}a \tag{12.550}\\
H &= \frac{3}{2}NkT - \frac{N^2a}{V} + \left(\frac{NkT}{V - Nb} - \frac{N^2}{V^2}a\right)V \tag{12.551}\\
&= \frac{3}{2}NkT - 2\frac{N^2a}{V} + \frac{NkT}{1 - \frac{Nb}{V}} \tag{12.552}\\
&\approx \frac{3}{2}NkT - 2\frac{N^2a}{V} + NkT\left(1 + \frac{Nb}{V}\right) \tag{12.553}\\
&= \frac{5}{2}NkT - 2\frac{N^2a}{V} + \frac{N^2bkT}{V} \tag{12.554}
\end{align}
In the approximate step, we were just doing a power series expansion, since $\frac{Nb}{V} \ll 1$. Now we just want to express the enthalpy in terms of pressure, since it is usually used at fixed pressure. That requires us to replace volume with pressure, and will give us
$$H(T, p) = \frac{5}{2}NkT + Nbp - \frac{2Nap}{kT} \tag{12.555}$$
Solving for volume in terms of pressure is a slight nuisance.
\begin{align}
\left(p + \frac{N^2}{V^2}a\right)(V - Nb) &= NkT \tag{12.556}\\
pV - pNb + \frac{N^2a}{V} &\approx NkT \tag{12.557}\\
pV^2 - pNbV + N^2a &= NkTV \tag{12.558}\\
pV^2 - (pNb + NkT)V + N^2a &= 0 \tag{12.559}
\end{align}
Now we can use the quadratic equation.
\begin{align}
V &= \frac{pNb + NkT \pm \sqrt{(pNb + NkT)^2 - 4pN^2a}}{2p} \tag{12.560}\\
&\approx \frac{pNb + NkT \pm \sqrt{(NkT)^2 + 2pN^2bkT - 4pN^2a}}{2p} \tag{12.561}\\
&= \frac{pNb + NkT \pm NkT\sqrt{1 + \frac{2pbkT - 4pa}{(kT)^2}}}{2p} \tag{12.562}\\
&\approx \frac{pNb + NkT + NkT\left(1 + \frac{1}{2}\frac{2pbkT - 4pa}{(kT)^2}\right)}{2p} \tag{12.563}\\
&= \frac{pNb + 2NkT + \frac{pNbkT - 2pNa}{kT}}{2p} \tag{12.564}\\
&= \frac{2pNb + 2NkT - \frac{2pNa}{kT}}{2p} \tag{12.565}\\
&= \frac{NkT}{p} + Nb - \frac{Na}{kT} \tag{12.566}
\end{align}
What a nuisance. Each approximation I made eliminated a term that had two or more factors of $a$ or $b$, which are taken to be small quantities (albeit with dimensions). Note that the first term is just what we get from the ideal gas law. The rest is the first-order correction to the volume. Now that we have an expression for $V$ in terms of $p$ and $T$,
\begin{align}
H &= \frac{5}{2}NkT + \frac{N^2bkT - 2N^2a}{V} \tag{12.567}\\
&= \frac{5}{2}NkT + \frac{N^2bkT - 2N^2a}{\frac{NkT}{p} + Nb - \frac{Na}{kT}} \tag{12.568}\\
&= \frac{5}{2}NkT + \frac{p}{NkT}\,\frac{N^2bkT - 2N^2a}{1 + \left(Nb - \frac{Na}{kT}\right)\frac{p}{NkT}} \tag{12.569}\\
&\approx \frac{5}{2}NkT + \frac{p}{NkT}\left(N^2bkT - 2N^2a\right)\left(1 - \left(Nb - \frac{Na}{kT}\right)\frac{p}{NkT}\right) \tag{12.570}
\end{align}
Okay, this is admittedly looking a little hairy. But remember that we only need to keep terms that are linear in $a$ and $b$, so that actually just kills our correction term entirely. And after all that work!
\begin{align}
H &\approx \frac{5}{2}NkT + \frac{p}{NkT}\left(N^2bkT - 2N^2a\right) \tag{12.571}\\
&= \frac{5}{2}NkT + N\frac{p}{kT}\left(bkT - 2a\right) \tag{12.572}\\
&= \frac{5}{2}NkT + Np\left(b - \frac{2a}{kT}\right) \tag{12.573}
\end{align}
which matches the expected answer, (12.555). So yay. In retrospect, we could have simplified the solving-for-the-volume bit drastically, had I noticed that $V$ only occurred in $H$ in a ratio with a small quantity: we didn't need to keep the first-order terms, only the zero-order term, which would have eliminated most of the work. You are welcome to argue this in your solution, but you should try to argue it well. Otherwise, you'd just want to do all the same tedium I did.
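Since this problem involves a fair amount of by-hand differentiating and expanding, here is a sketch that double-checks all three parts. Two assumptions to note: the constant $C$ in $n_Q = CT^{3/2}$ is a placeholder (only the temperature dependence matters for the derivatives), and the numeric check of the enthalpy uses made-up reduced units with $N = k = 1$ and small $a$ and $b$.

```python
import sympy as sp
import numpy as np

# Parts a) and b): symbolic check that S = -dF/dT and U = F + TS come out
# as in (12.545) and (12.548). C is a placeholder prefactor in n_Q.
N, V, T, a, b, k, C = sp.symbols('N V T a b k C', positive=True)
n_Q = C * T**sp.Rational(3, 2)
F = -N*k*T*(sp.log(n_Q*(V - N*b)/N) + 1) - N**2*a/V
S = -sp.diff(F, T)
U = sp.simplify(F + T*S)
print(sp.simplify(S - N*k*(sp.log(n_Q*(V - N*b)/N) + sp.Rational(5, 2))))  # 0
print(sp.simplify(U - (sp.Rational(3, 2)*N*k*T - N**2*a/V)))               # 0

# Part c): numeric spot-check of (12.573) in reduced units (N = k = 1).
# The full van der Waals equation, (p + a N^2/V^2)(V - N b) = N k T, is a
# cubic in V: p V^3 - (p N b + N k T) V^2 + N^2 a V - N^3 a b = 0.
Nn, kn, Tn, p, an, bn = 1.0, 1.0, 1.0, 0.01, 0.01, 0.01
roots = np.roots([p, -(p*Nn*bn + Nn*kn*Tn), Nn**2*an, -Nn**3*an*bn])
Vg = max(r.real for r in roots if abs(r.imag) < 1e-9)  # gas-like (largest) root
H_exact = 1.5*Nn*kn*Tn - Nn**2*an/Vg + p*Vg            # H = U + pV, exactly
H_linear = 2.5*Nn*kn*Tn + Nn*p*(bn - 2*an/(kn*Tn))     # equation (12.573)
print(H_exact, H_linear)  # agree, up to terms of second order in a and b
```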

3. Calculation of $dT/dp$ for water (K&K 9.2)

We begin with the Clausius-Clapeyron equation. However, to do this we have to ensure that we get everything in the right units, and such that the volume we divide by corresponds to the same quantity of water as the latent heat on top. Let's say we have one gram of water (which is nice up top); then we need the volume of a gram of steam.
\begin{align}
\Delta V &\approx V_g \tag{12.574}\\
&= \frac{NkT}{p} \tag{12.575}
\end{align}
I'll start by finding the number of molecules in a gram of water:
\begin{align}
N &= \frac{1\text{ g}}{18\text{ g mol}^{-1}}N_A \tag{12.576}\\
&= \frac{1\text{ g}}{18\text{ g mol}^{-1}}\,6.0221\times 10^{23} \tag{12.577}\\
&\approx 3.35\times 10^{22} \tag{12.578}
\end{align}
Since one atmosphere is $\sim 10^5$ pascals, we have
\begin{align}
\Delta V &\approx \frac{3.35\times 10^{22}\,(1.38\times 10^{-23}\text{ J K}^{-1})(373\text{ K})}{10^5\text{ J m}^{-3}} \tag{12.579}\\
&\approx 1.7\times 10^{-3}\text{ m}^3 \tag{12.580}
\end{align}
Just as a check, we may as well verify that $V_g \gg V_l$. I know that liquid water has a density of about $1\text{ g cm}^{-3}$, which makes the volume of the vapor about two thousand times larger than that of the liquid, so we're okay ignoring the liquid volume, yay. Putting this together, we can find the slope we are asked for.
\begin{align}
\frac{dp}{dT} &= \frac{L}{T\Delta V} \tag{12.581}\\
&= \frac{2260\text{ J g}^{-1}}{(373\text{ K})(1.7\times 10^{-3}\text{ m}^3\text{ g}^{-1})} \tag{12.582}\\
&= 3530\,\frac{\text{Pa}}{\text{K}} \tag{12.583}
\end{align}
Now we were asked for this in units of atmospheres per kelvin, which gives us a change of five orders of magnitude.
$$\frac{dp}{dT} = 3.5\times 10^{-2}\text{ atm K}^{-1} \tag{12.584}$$
Actually, we were asked for the inverse of this, presumably so we'd have a number greater than one.
$$\frac{dT}{dp} = 28\text{ K atm}^{-1} \tag{12.585}$$
That tells us that if the liquid-vapor coexistence curve were a straight line, it would drop to zero vapor pressure at 72°C, which emphasizes how much curvature there is in the coexistence curve.
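Since the unit conversions here are easy to fumble, here is the same computation as a few lines of Python; every number is taken from the solution above.

```python
N_A = 6.0221e23       # molecules per mole
k = 1.38e-23          # J/K
N = N_A / 18.0        # molecules in one gram of water (18 g/mol)
T = 373.0             # K, the boiling point
p = 1.0e5             # Pa, roughly one atmosphere
L = 2260.0            # J/g, latent heat of vaporization

dV = N * k * T / p    # volume of a gram of steam, ~1.7e-3 m^3
dpdT = L / (T * dV)   # ~3.5e3 Pa/K
dTdp = p / dpdT       # ~28 K/atm, using the same rough 1e5 Pa per atmosphere
print(dV, dpdT, dTdp)
```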

123 This isn’t all that accurate. The answer per gram can be compared with the latent heat of vaporization given in the last problem, and you can see that it’s higher for the ice, but only about twice as high, which reflects the fact that the liquid still has most of the same hydrogen bonds that hold the solid ice together.
