
Energy and Entropy
Physics 423

David Roundy

Spring 2014 Draft 3/17/2014

Contents

1 Monday: Lab 1: Heat and Temperature
2 Tuesday: First and Second Laws
3 Wednesday: Second Law lab
4 Thursday: Heat and work
5 Friday:
6 Monday:
7 Tuesday: Lab 2: rubber band
8 Wednesday:
9 Thursday: Thermodynamics practice
10 Friday: Black bodies and statistics
11 Monday: Statistical approach
12 Tuesday: From statistics to thermodynamics
13 Wednesday: Statistical mechanics of air
14 Thursday: Statistical mechanics of air
15 Friday:

1 Monday: Lab 1: Heat and Temperature

This class is called Energy and Entropy. You've already learned quite a bit about energy, but entropy is probably quite new to you. Thermodynamics is a field that involves making experimental measurements of bulk substances, and using theory to connect those measurements with other measurements. We will begin this course by making measurements of how much energy is needed to heat up (and melt) a sample by a certain amount.

Now that you all have some data being collected, let's talk about how you will be analyzing it. Many thermodynamic measurements are measurements of derivatives. The temperature you can measure directly, and the power of the heater (which is the energy dissipated into the water per unit time) we can work out directly. From the two of those, we will work out the heat capacity. Write on board:
$$``C_p" = \left(\frac{\bar{d}Q}{\partial T}\right)_p$$
The heat (called $Q$) is the amount of energy which is thermally transferred, just like work is the amount of energy which is mechanically transferred. I write the derivative funny because $Q$ is not a function of $T$. The $p$ subscript just means we're keeping the pressure constant. Another quantity we'll be looking at is the entropy. We'll spend much of this course talking about what entropy "really is", but for now, just know that you can measure entropy by measuring heat and integrating. Write on board:
$$\Delta S = \int \frac{\bar{d}Q_{\text{reversible}}}{T}$$

In 1819, shortly after Dalton had introduced the concept of atomic weight in 1808, Dulong and Petit observed that if they measured the specific heat per unit mass of a variety of solids, and divided by the atomic weights of those solids, the resulting per-atom specific heat was essentially constant. This is the Dulong-Petit law, although we have since given a name to

that constant, which is $3R$ or $3k_B$, depending on whether the relative atomic mass (atomic weight) or the absolute atomic mass is used. This law isn't precisely true, and isn't always true, and is never true at low temperatures. But it captures some physics that we will later call the equipartition theorem. We will write the Dulong-Petit law as:

$$C_p = 3Nk_B \tag{1.1}$$
where $N$ is the total number of atoms.
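As a quick sanity check of the law (my own illustration, not part of the lecture), here is a short Python snippet comparing the Dulong-Petit prediction with copper's measured room-temperature specific heat:

```python
# Dulong-Petit prediction for the per-gram heat capacity of copper:
# c = 3 R / M, with M the molar mass.
R = 8.314      # J/(mol K), gas constant
M_Cu = 63.55   # g/mol, molar mass of copper

c_predicted = 3 * R / M_Cu
print(f"Dulong-Petit prediction for copper: {c_predicted:.3f} J/(g K)")
# Prints about 0.392 J/(g K); the measured room-temperature value is
# roughly 0.385 J/(g K), so the law does quite well here.
```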

2 Tuesday: First and Second Laws

Thermodynamic quantities can be divided according to how they scale with the quantity of material present. This division is sort of like dimensional analysis, and allows us to catch mistakes and reason about how properties behave. Extensive properties are those that are proportional to the amount of material present. Intensive properties are independent of the quantity present. It helps to imagine dividing your (homogeneous) system into two, and asking whether the quantity you're looking at is divided into two, e.g. imagine only "counting" half of this glass of water. As an example, $V$ is extensive: if you've got twice as much stuff, it's got twice the volume.

Every time you come up with an answer in this course, you should ask yourself whether the result is extensive or intensive, and whether this is what you expected. It’s like checking that the dimensions match up!

We distinguish in thermodynamics between a system and its surroundings. The system is the thing we’re measuring or predicting the properties of, and the surroundings is everything else. Which is which will depend on what experiment we’re doing.

The first law of thermodynamics simply states that energy is conserved (or is a substance, to use Aristotle’s terminology). But it is useful to look at those two non-state variables work and heat. Both are changes in energy of a system, so we can write the first law as

$$\Delta U = Q + W \tag{2.1}$$

where $U$ is the internal energy of the system, $Q$ is the energy added to the system by heating, and $W$ is the work done on the system (or the energy added to the system by working). It is more convenient mathematically, however, to have a framework for talking about infinitesimal changes in the total energy... Write on board:
$$dU = \bar{d}Q + \bar{d}W$$

The second law of thermodynamics clarifies this rule, and extends it to cases where there might be other things going on, e.g. in the case of a refrigerator. The second law involves the change in entropy, which I defined for you yesterday:

$$\Delta S = \int \frac{\bar{d}Q_{\text{reversible}}}{T} \tag{2.2}$$
The Second Law of Thermodynamics simply states that for any possible process, the change in entropy of a system plus its surroundings is either positive or zero. Write on board:
$$\Delta S_{\text{system}} + \Delta S_{\text{surroundings}} \ge 0$$

In 2011, I ad libbed quite a bit after this.

a) Fast vs. slow

b) quasistatic vs. reversible vs. irreversible vs. spontaneous

c) adiabatic and its various meanings (including isentropic)

The internal energy is clearly a state function, and thus its differential must be an exact differential.

$$dU = \;? \tag{2.3}$$
$$= \bar{d}Q + \bar{d}W \tag{2.4}$$
$$= \bar{d}Q - p\,dV \quad \text{only when change is quasistatic} \tag{2.5}$$
I spent some time on the quasistatic work being $\bar{d}W = -p\,dV$. What is this $\bar{d}Q$? As it turns out, we can define a state function $S$ called entropy, and so long as a process is done reversibly

$$\bar{d}Q = T\,dS \quad \text{only when change is quasistatic} \tag{2.6}$$

so finally we find out that Write on board:
$$dU = T\,dS - p\,dV$$
The fact that the $T$ in this equation is actually the physical temperature was originally an experimental observation. At this point, the entropy $S$ is just some weird heat-related state function.

As we learned last week, heat capacity is the amount of energy required to raise the temperature of an object by a small amount.
$$C \sim \frac{\bar{d}Q}{\partial T} \tag{2.7}$$
$$\bar{d}Q = C\,dT \quad \text{At constant what?} \tag{2.8}$$
If we hold the volume constant, then we can see from the first law that
$$dU = \bar{d}Q - p\,dV \tag{2.9}$$
and since $dV = 0$ for a constant-volume process,
$$C_V = \left(\frac{\partial U}{\partial T}\right)_V \tag{2.10}$$
But we didn't measure $C_V$ on Monday, since we didn't hold the volume of the water constant. Instead we measured $C_p$, but what is that? To distinguish between different sorts of heat capacities, we need to specify the sort of path used. So, for instance, we could write
$$\bar{d}Q = T\,dS \tag{2.11}$$

$$\bar{d}Q = C_\alpha\,dT + \,?\,d\alpha \tag{2.12}$$

$$T\,dS = C_\alpha\,dT + \,?\,d\alpha \tag{2.13}$$
$$dS = \frac{C_\alpha}{T}\,dT + \frac{?}{T}\,d\alpha \tag{2.14}$$
$$C_\alpha = T\left(\frac{\partial S}{\partial T}\right)_\alpha \tag{2.15}$$
This may look like an overly-tricky derivative, so let's go through the first law and check that we got it right in a few cases. I'll do the $C_V$ case. We already know that
$$dU = \bar{d}Q - p\,dV \tag{2.16}$$
$$C_V = \left(\frac{\partial U}{\partial T}\right)_V \tag{2.17}$$
$$= \left(\frac{\partial U}{\partial S}\right)_V\left(\frac{\partial S}{\partial T}\right)_V \tag{2.18}$$
$$= T\left(\frac{\partial S}{\partial T}\right)_V \tag{2.19}$$
where the second step just uses the ordinary chain rule.

We can find a change in entropy from the heat capacity quite easily:
$$\Delta S = \int \frac{\bar{d}Q_{\text{QS}}}{T} \tag{2.20}$$
$$= \int \frac{C(T)}{T}\,dT \tag{2.21}$$
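For a concrete sense of equation (2.21): if the heat capacity is nearly constant, the integral gives $\Delta S = C\ln(T_2/T_1)$. Here is a minimal sketch in Python (my own illustration, with made-up numbers rather than lab data), for 1 kg of water warmed from 20°C to 80°C:

```python
import numpy as np

# Entropy change for heating 1 kg of water, treating C as constant
# so that dS = C dT / T integrates to a logarithm.
C = 4.18 * 1000            # J/K, total heat capacity of 1 kg of water
T1, T2 = 293.15, 353.15    # K; temperatures must be absolute

dS = C * np.log(T2 / T1)
print(f"Delta S = {dS:.0f} J/K")   # about 780 J/K
```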

3 Wednesday: Second Law lab

We can view the melting of the ice in the warm water as a two step process, assuming that the ice completely melts. In the first step, the ice completely melts, while the water

[Diagram: step 1, ice at 0°C in water at $T_i$; heat $Q_1$ melts the ice, leaving melted ice water at 0°C and water at $T_m$. Step 2, heat $Q_2$ brings both to the common final temperature $T_f$.]

cools down. This involves the water heating the ice, transferring an amount of energy which we will call $Q_1$. After this step, the melted ice water is still at 0°C. In the second step, the cold water (which was formerly ice) is warmed up by the warmer water, until finally they end up at the same temperature. In this case, we will call the heat $Q_2$. Of course, the real ice doesn't undergo these two distinct steps, since it melts gradually, and the melted water doesn't wait to start warming up, but immediately begins step two. What we are doing here is envisioning a simpler (and quasistatic) path that the system could have taken to reach the same final state. We have diagrammed three unknowns, $Q_1$, $Q_2$ and $T_f$. To solve for these quantities, we will need to invoke known material properties. The entire experiment is done at fixed pressure, rather than fixed volume. If we examine the first law:

$$\Delta U = Q + W \tag{3.1}$$

or the thermodynamic identity

$$dU = T\,dS - p\,dV \tag{3.2}$$

we can see that the internal energy $U$ isn't easy to relate to the heats $Q_1$ and $Q_2$, since we don't know how much work is done as the ice melts (or as the water changes temperature). You have seen Legendre transforms briefly, one of which gives the enthalpy:

$$H = U + pV \tag{3.3}$$
$$dH = dU + p\,dV + V\,dp \tag{3.4}$$
$$= T\,dS + V\,dp \tag{3.5}$$

From the total derivative of the enthalpy, you can see that if we hold pressure fixed, then

$$\Delta H = \int T\,dS \tag{3.6}$$
$$= Q \tag{3.7}$$

So the change in enthalpy (at fixed pressure) is equal to the amount of energy transferred by heating. The value $Q_1$ is determined by the energy needed to melt your ice at fixed pressure, which is

$$Q_1 = \Delta H_f \tag{3.8}$$

$$= \left(333\ \mathrm{J/g}\right)M_{\text{ice}} \tag{3.9}$$

An interesting question is how different the answer would be if we were to hold the volume fixed. To find this, we simply need to know the change in volume as ice is melted.

$$\rho_{\text{ice}} = 0.9167\ \mathrm{g/mL} \tag{3.10}$$
$$\rho_{\text{water}} = 0.99984\ \mathrm{g/mL}\ \text{at}\ 0^\circ\mathrm{C} \tag{3.11}$$
$$\Delta V = \left(\frac{1}{\rho_{\text{water}}} - \frac{1}{\rho_{\text{ice}}}\right)M \tag{3.12}$$
$$= -0.091\,M\ \mathrm{mL/g} \tag{3.13}$$
$$H = U + pV \tag{3.14}$$
$$\Delta H = \Delta U + p\,\Delta V \tag{3.15}$$
$$\Delta U = \Delta H - p\,\Delta V \tag{3.16}$$
$$= \left(333\ \mathrm{J/g} + 0.101\ \mathrm{J/mL}\times0.091\ \mathrm{mL/g}\right)M \tag{3.17}$$
$$= 333.0\ \mathrm{J/g}\;M \tag{3.18}$$
Note that since ice contracts when it melts, $\Delta V$ is negative, so $\Delta U$ comes out very slightly larger than $\Delta H$.
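Just to check the arithmetic, here is a quick illustration of my own, using the same numbers:

```python
# Fixed-volume vs fixed-pressure energy to melt ice, per gram.
rho_ice = 0.9167       # g/mL
rho_water = 0.99984    # g/mL at 0 C
p = 0.101              # J/mL, i.e. atmospheric pressure (101 kPa)
dH = 333.0             # J/g, enthalpy of fusion

dV = 1 / rho_water - 1 / rho_ice   # mL/g, about -0.091: ice contracts on melting
dU = dH - p * dV                   # J/g
print(f"dV = {dV:+.3f} mL/g, dU = {dU:.2f} J/g")
# dU differs from dH by only about 0.01 J/g.
```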

So those of you who pointed out that it isn't a big effect are right. If we were examining vaporization, the effect would be very large, since the change in volume is so large. In any case, you should always use the appropriate and correct formulation, which in this case means using the enthalpy of fusion to find the heat required to melt ice at fixed pressure.¹

¹I should perhaps also note that the change in volume as water freezes or ice melts is a very strong effect, in terms of breaking things...

Given that we know $Q_1$, we can solve to find $T_m$ using the heat capacity, which you were told to assume is constant over this temperature range. In reality, it varies from 4.219 J/gK to 4.18 J/gK over the temperature range studied (see the NIST website). I should probably also note that $c_V$ ranges from 4.217 J/gK to 3.991 J/gK over the same range. So using $c_V$ instead of $c_p$ wouldn't have given a huge error, but the error would be considerably greater than that which we made by assuming that $c_p$ is independent of temperature. Anyhow, for a fixed-pressure process, looking at the water being cooled:

$$C_p = T\left(\frac{\partial S}{\partial T}\right)_p \tag{3.19}$$
$$= \left(\frac{\partial H}{\partial T}\right)_p \tag{3.20}$$
$$-Q_1 = \Delta H \tag{3.21}$$

$$= \int C_p\,dT \tag{3.22}$$
$$= C_p\,\Delta T \tag{3.23}$$
$$= C_p\left(T_m - T_i\right) \tag{3.24}$$
$$T_m = T_i - \frac{Q_1}{C_{p,\text{water}}} \tag{3.25}$$
$$= T_i - \frac{\Delta H_f}{C_{p,\text{water}}} \tag{3.26}$$
where I made use of the fact that the water is losing the energy $Q_1$ which is gained by the ice. For step 2, we have a simpler situation, where water is being warmed, and other water is being cooled.

$$Q_2 = C_{p,\text{ice}}\left(T_f - 0^\circ\mathrm{C}\right) \tag{3.28}$$
$$-Q_2 = C_{p,\text{water}}\left(T_f - T_m\right) \tag{3.29}$$
$$C_{p,\text{ice}}\left(T_f - 0^\circ\mathrm{C}\right) = C_{p,\text{water}}\left(T_m - T_f\right) \tag{3.30}$$
And you can solve this.
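To see how the numbers work out, here is a minimal sketch (my own, with hypothetical masses rather than your lab data) of the whole two-step solution:

```python
# Two-step melting analysis: 30 g of ice dropped into 200 g of warm water.
c_p = 4.18           # J/(g K), specific heat of water, taken constant
dH_f = 333.0         # J/g, enthalpy of fusion of ice
m_ice, m_water = 30.0, 200.0   # g (made-up values)
T_i = 40.0           # C, initial temperature of the warm water

# Step 1: the warm water supplies Q1 to melt the ice.
Q1 = dH_f * m_ice
T_m = T_i - Q1 / (c_p * m_water)

# Step 2: melted ice water at 0 C and water at T_m equilibrate:
# m_ice c_p (T_f - 0) = m_water c_p (T_m - T_f)
T_f = m_water * T_m / (m_ice + m_water)
print(f"T_m = {T_m:.1f} C, T_f = {T_f:.1f} C")  # about 28.1 C and 24.4 C
```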

Changes in entropy I'll just briefly go over the change in entropy of the ice water here. For your homework, you'll also need to solve questions 2.4, 2.5 and 2.6 from yesterday's lab (questions handed out today), which handle this. You will need to solve for the change of entropy for each of the sub-processes. In every case, the equation you will need is:
$$\Delta S = \int \frac{\bar{d}Q_{\text{quasistatic}}}{T} \tag{3.31}$$

The challenge will simply be to figure out what the infinitesimal heat $\bar{d}Q$ is for that process, and what the temperature is, provided the process was done quasistatically. Entropy is a state function, so you don't need to do the integral over the path that was actually taken (which was not quasistatic), but instead need to look at each change and imagine that it was done quasistatically instead. Processes that are done quasistatically and involve heating can be a bit confusing, but are actually really simple. You simply need to imagine that you had a pretty good insulator between the two systems that are exchanging energy by heating. This will slow down the process, and ensure that each side remains at a uniform and well-defined temperature throughout the process.

4 Thursday: Heat and work

5 Friday:

We've already looked at the second law of thermodynamics:
$$\Delta S_{\text{system}} + \Delta S_{\text{surroundings}} \ge 0 \tag{5.1}$$
As it turns out there are a couple of other equivalent ways to state this law. The Kelvin formulation states that no process is possible in which the sole result is the absorption of heat from a reservoir and its complete conversion into work. The Clausius formulation states that no process is possible whose sole result is the transfer of heat from a body of lower temperature to a body of higher temperature. These formulations make it clear that if you could violate the Second Law, you could become filthy rich.

A heat engine is a device that converts heat into work. I will diagram heat engines as displayed in this picture.

[Diagram: a hot sink at $T_h$ on top and a cold sink at $T_c$ on the bottom, with the engine between them; heat $Q_h$ flows from the hot sink into the engine, heat $Q_c$ flows from the engine into the cold sink, and work $W$ comes out the side.]

The heat engine contains several parts.

• At the top and bottom are hot and cold heat sinks. The operator of the engine has to keep these two heat sinks at fixed temperature, which means burning fuel to warm up the hot sink, and using something like a radiator to keep the cold sink cool.

• In the middle of the picture is the engine itself, which will contain some sort of a working substance that is (most likely) alternately heated and cooled.

• There is some amount of heat $Q_h$ transferred from the hot sink to the engine, and some other amount of heat $Q_c$ (both taken to be positive numbers) transferred from the engine to the cold sink. By the first law, the difference between these must be the work:
$$W = Q_h - Q_c$$

It may seem like heat engines (and steam engines) are a bit old-fashioned, but about 80% (according to Wikipedia) of electric power in the world is generated by steam turbines¹, which are simple heat engines. So it's not really just a 19th-century application, although it was pretty well understood in the 19th century.

¹This consists of all coal-burning plants and nuclear power plants, and I'm not sure what else... almost certainly geothermal.

Efficiency The efficiency in general is what you get out divided by what you put in. In this case, what we have to put in is the heat added to the hot reservoir $Q_h$, and what we get out is the work $W$, so
$$\eta = \frac{W}{Q_h} \tag{5.2}$$
$$= \frac{Q_h - Q_c}{Q_h} \tag{5.3}$$
$$= 1 - \frac{Q_c}{Q_h} \tag{5.4}$$
So clearly, we'd like to minimize the amount of heat sent to the cold sink. This also has environmental advantages. If each step is done reversibly, then we could run a heat engine in reverse, and have a refrigerator. Thus we do work on the fridge, and cool off the cold sink, while warming up the hot sink. The efficiency of any reversible heat engine must be the same as the efficiency of any reversible refrigerator. We can see this by using a heat engine to drive a refrigerator. By choice, the work done by the engine is the same as the work done on the fridge. If the efficiencies of the fridge and engine differ, then there will be a net transfer of heat either from hot sink to cold sink or from cold sink to hot sink. The former would be reasonable and natural, but the latter would be crazy, which means that the fridge cannot be more efficient than the engine. However, if both fridge and engine are reversible, then if the fridge is less efficient than the engine, we could run the thing in reverse, and get the crazy situation happening again, in which nothing changes except that heat is transferred from a cold place to a hot place... and that just isn't natural! "So we can only conclude that every possible reversible heat engine must have the same efficiency!"

6 Monday:

Legendre Transforms revisited The first week of class, we discussed the Legendre transforms. This began with the thermodynamic identity (given here with the possibility of the number of molecules $N$ changing):

$$dU = T\,dS - p\,dV + \mu\,dN \tag{6.1}$$
I pointed out that this makes $U$ a natural function of $S$ and $V$, and means that derivatives of $U$ are simple when $S$ or $V$ is held constant. While there are times when it's easy to fix or adjust $S$ or $V$, these times are relatively rare. Often it is easier to control $T$ or $p$. For these situations, the Legendre transforms make things easier, by giving us a set of thermodynamic potentials that are naturally functions of $T$ or $p$.

$$F = U - TS \quad \text{(Helmholtz free energy)} \tag{6.2}$$
$$dF = dU - T\,dS - S\,dT \tag{6.3}$$
$$= -S\,dT - p\,dV + \mu\,dN \tag{6.4}$$
$$H = U + pV \quad \text{(Enthalpy)} \tag{6.5}$$
$$dH = T\,dS + V\,dp + \mu\,dN \tag{6.6}$$
$$G = U - TS + pV \quad \text{(Gibbs free energy)} \tag{6.7}$$
$$dG = -S\,dT + V\,dp + \mu\,dN \tag{6.8}$$
$$\Phi = U - TS - \mu N \quad \text{(Grand potential or grand free energy)} \tag{6.9}$$
$$d\Phi = -S\,dT - p\,dV - N\,d\mu \tag{6.10}$$

What is the enthalpy, Helmholtz free energy and Gibbs free energy? This is easiest to see in a couple of simple cases. Suppose you want to know how much energy is needed to melt some ice or boil some water at constant pressure. For one thing, you'd need to know the change in internal energy $U$, but that's not all. Since the volume changes, you also will need to push aside some air, and that's going to require some work... in fact, it'll require $p\,\Delta V$ of work. So the total energy we'll need to boil the water or melt the ice will have to account for that work as well. In one case, it'll be easier than we think,

and in the other case it'll be harder. The change in enthalpy will give us the answer in either case. Now suppose we have some steam and we want to use it to do some work. How much work can we get out of it? Obviously, the internal energy is relevant here, since energy must be conserved. But suppose we want to achieve this work at constant temperature (so we won't have to insulate things)? If we're keeping things at constant temperature, then there is heating going on... so the amount of work can't be the same as the change in internal energy. How much work can we do? The answer is given by the change in Helmholtz free energy. Finally we get to the Gibbs free energy. As you might gather, it's helpful when you want to keep both temperature and pressure constant. You might wonder what you could possibly be changing, if you keep both pressure and temperature constant. In general, you can phrase such a change as one in which the number of molecules $N$ changes. One possibility is that you could be doing a phase transition, like melting ice at zero centigrade and atmospheric pressure, in which case the number of ice molecules is decreasing and the number of water molecules is increasing. This is possible because the Gibbs free energies of ice and water are the same at that temperature and pressure. Another possibility is that you're undergoing a chemical reaction... which is another sort of a transition. Reactions or phase transitions are reversible (or "in equilibrium") when the change in Gibbs free energy is zero. If it's negative, they happen spontaneously, and if it's positive they don't happen at all.

Maxwell relations Last week (or the week before last?), we learned that mixed partial derivatives are the same, regardless of the order in which we take the derivative, so

$$\left(\frac{\partial}{\partial x}\left(\frac{\partial f}{\partial y}\right)_x\right)_y = \left(\frac{\partial}{\partial y}\left(\frac{\partial f}{\partial x}\right)_y\right)_x \tag{6.12}$$
$$\frac{\partial^2 f}{\partial x\,\partial y} = \frac{\partial^2 f}{\partial y\,\partial x} \tag{6.13}$$
As you know, in thermodynamics, partial derivatives are often physical quantities, things we can measure. In such a case, their derivatives also may be measurable (and important) quantities. From each thermodynamic potential, we can find one Maxwell relation (or two, or three, if you count ones involving changing $N$ or $\mu$). You need not memorize Maxwell relations, but you do need to know the thermodynamic identity, and should be able to quickly find the total derivatives of the other thermodynamic potentials. From any given thermodynamic potential, you do need to understand how to find a Maxwell relation. Tomorrow, you will do a lab in which you will use a Maxwell relation to measure a change in entropy, without having to do any calorimetry.
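If you'd like to see a mixed-partials check in action, here is a small symbolic sketch (my own illustration, not part of the lecture). It keeps only the volume-dependent part of an ideal gas's Helmholtz free energy, which is enough to verify the Maxwell relation that follows from $dF = -S\,dT - p\,dV$:

```python
import sympy as sp

T, V, N, k = sp.symbols('T V N k', positive=True)

# Volume-dependent part of the ideal-gas Helmholtz free energy.
F = -N * k * T * sp.log(V)

S = -sp.diff(F, T)   # S = -(dF/dT)_V
p = -sp.diff(F, V)   # p = -(dF/dV)_T

# Maxwell relation: (dS/dV)_T should equal (dp/dT)_V.
print(sp.simplify(sp.diff(S, V) - sp.diff(p, T)))  # prints 0
```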

7 Tuesday: Lab 2: rubber band

8 Wednesday:

About integrating experimental data... I will discuss this in the context of the first lab (melting ice with a resistor), but it is also relevant to the lab you will be turning in on Friday. You were asked to find the entropy by integrating

$$\Delta S = \int_{t_1}^{t_2} \frac{\bar{d}Q}{T} \tag{8.1}$$
$$= \int_{t_1}^{t_2} \frac{P(t)\,dt}{T} \tag{8.2}$$
$$= \int_{t_1}^{t_2} \frac{1}{T(t)}\,P\,dt \tag{8.3}$$
where $P$ is the power output by your resistor. As an aside, you needed to be sure to use $T$ in kelvins. If it ever makes a difference, you always need to use kelvins, as the zero value of Celsius is arbitrary. Many of you fit a line to the temperature curve, and then analytically integrated that line, which gave a log. This is all right in this case, since the fit is quite good, but isn't a general solution, since $C_p$ will depend on $T$ in general. Fortunately, there is actually an even easier way to do the integral, that is also more general: you can just perform the integral numerically!

$$\Delta S = \int_{t_1}^{t_2} \frac{1}{T(t)}\,P\,dt \tag{8.4}$$
$$= \sum_t \frac{P}{T(t)}\,\Delta t \tag{8.5}$$
This is a sum that you can easily do on your spreadsheet, for instance. You would need to be a bit careful if your points were irregularly spaced (as they will be for your rubber band lab, where you do a different integral), in which case you might need to use the trapezoidal method to perform your integrals. But ultimately, a definite integral is just the area under a curve, and when you're dealing with experimental data, finding the area under the curve is a pretty simple calculation.
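The same calculation is just as easy outside a spreadsheet. Here is a minimal sketch in Python (illustrative: the heater power, sampling interval, and temperature ramp are made up, not your lab values), showing both the simple sum and the trapezoidal version:

```python
import numpy as np

# Stand-in data: a 15 W heater sampled every 10 s, with the water
# warming slowly. Temperatures must be in kelvins!
P = 15.0                         # W
t = np.arange(0.0, 600.0, 10.0)  # s
T = 293.0 + 0.02 * t             # K

# Direct Riemann sum of dS = P dt / T(t):
dS_sum = np.sum(P / T) * 10.0

# Trapezoidal rule, which also handles irregularly spaced points:
integrand = P / T
dS_trap = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))

print(f"sum: {dS_sum:.3f} J/K, trapezoid: {dS_trap:.3f} J/K")
```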

Review of thermodynamics and math concepts

Total differential

$$dU = \left(\frac{\partial U}{\partial S}\right)_V dS + \left(\frac{\partial U}{\partial V}\right)_S dV \tag{8.6}$$

• We can interpret a total differential to find an expression for a partial derivative.

• We can substitute one total differential into another and do linear algebra.

• We can integrate to find a finite change.

Mixed partial derivatives

$$\left(\frac{\partial}{\partial V}\left(\frac{\partial U}{\partial S}\right)_V\right)_S = \left(\frac{\partial}{\partial S}\left(\frac{\partial U}{\partial V}\right)_S\right)_V = \frac{\partial^2 U}{\partial S\,\partial V} \ne \left(\frac{\partial}{\partial S}\left(\frac{\partial U}{\partial V}\right)_S\right)_p \tag{8.7}$$

First law

$$dU = \bar{d}Q + \bar{d}W \qquad \Delta U = Q + W \tag{8.8}$$

Second law

$$\Delta S_{\text{system}} + \Delta S_{\text{surroundings}} \ge 0 \tag{8.9}$$

Measuring entropy

$$\Delta S = \int \frac{\bar{d}Q_{\text{quasistatic}}}{T} \tag{8.10}$$
$$C_\alpha = T\left(\frac{\partial S}{\partial T}\right)_\alpha \tag{8.11}$$

Legendre transforms

$$dU = T\,dS - p\,dV \qquad U = \text{internal energy} \tag{8.12}$$
$$dF = -S\,dT - p\,dV \qquad F = U - TS \quad \text{Helmholtz free energy} \tag{8.13}$$
$$dH = T\,dS + V\,dp \qquad H = U + pV \quad \text{Enthalpy} \tag{8.14}$$
$$dG = -S\,dT + V\,dp \qquad G = U - TS + pV \quad \text{Gibbs free energy} \tag{8.15}$$
From each of the above total differentials, you can construct a Maxwell relation from the mixed partial derivative.

9 Thursday: Thermodynamics practice

10 Friday: Black bodies and statistics

Let's consider the following system. Here we've got two objects in an insulated environment, surrounded by vacuum, so the only way they can exchange energy is through electromagnetic radiation. Further, let's assume that each object is perfectly black, so that it absorbs any radiation incident on it. We will define $Q_1$ and $Q_2$ to be the energy radiated by objects 1 and 2 respectively during a given (short) period of time. We'll assume any energy radiated by one object is absorbed by the other. Also, assume that these objects are large enough that their temperatures are not changing much.

[Diagram: two black bodies at temperatures $T_1$ and $T_2$ inside an insulated box, exchanging radiated energies $Q_1$ and $Q_2$.]

Differing temperatures First, let us suppose $T_1 > T_2$. What can we say about $Q_1$ and $Q_2$ on the basis of the Second Law?

$$\Delta S_1 = \frac{Q_2 - Q_1}{T_1} \tag{10.1}$$
$$\Delta S_2 = \frac{Q_1 - Q_2}{T_2} \tag{10.2}$$
The second law tells us that

$$\Delta S_1 + \Delta S_2 \ge 0 \tag{10.3}$$
$$\frac{Q_2 - Q_1}{T_1} + \frac{Q_1 - Q_2}{T_2} \ge 0 \tag{10.4}$$
$$\left(Q_2 - Q_1\right)\left(\frac{1}{T_1} - \frac{1}{T_2}\right) \ge 0 \tag{10.5}$$
$$Q_2 - Q_1 \le 0 \tag{10.6}$$
$$Q_2 \le Q_1 \tag{10.7}$$
where the inequality flips in the last two steps because $T_1 > T_2$ makes $\frac{1}{T_1} - \frac{1}{T_2}$ negative.

Equal temperatures Now suppose $T_1 = T_2$; what can we say about $Q_1$ and $Q_2$? They must be the same. It's a little tricky to understand this, since when the two temperatures are the same, the combined change in entropy of the two systems is zero, even if $Q_1 \ne Q_2$. However, you can easily see that if $Q_1 \ne Q_2$, then soon the two temperatures would not be the same, and then the heat would be flowing the wrong way.

A grey body Finally, let us suppose that we replace the right-hand object with one that only absorbs two thirds of incident radiation and reflects the other one third. How much will such a body radiate? We can answer this question by considering the two systems at equal temperatures. We know that two systems at equal temperatures cannot heat one another up. The amount radiated by the left-hand body cannot be changed by the change in the right-hand object. However, $\frac13$ of that energy will be reflected back and reabsorbed by the left-hand object. Since anything radiated by the right-hand object must also be absorbed by the left-hand object, we conclude that it must only radiate $\frac23$ of what it would have radiated were it truly black. This tells us that if a body is a poor absorber of radiation (i.e. is light-colored) it will also be a poor thermal emitter of radiation. Interestingly, our results must hold even if we put a filter that reflects all but one frequency in between the two objects (you do need to convince yourself that such a filter is possible), which means that at every frequency the absorption must be proportional to the emission, with the same proportionality constant!

Efficiency of solar power On the subject of light, let’s consider the thermodynamic limit on the efficiency of solar power (e.g. photovoltaic cells). The sun’s temperature is around 6000K. Our temperature is around 300K. If we want to convert energy from the sun into electric energy, we are limited by thermodynamics. Given these two temperatures, the maximum efficiency for a solar cell would be

$$\eta_{\max} = 1 - \frac{T_C}{T_H} \tag{10.8}$$
$$= 1 - \frac{300\ \mathrm{K}}{6000\ \mathrm{K}} \tag{10.9}$$
$$= 95\% \tag{10.10}$$

For photovoltaic cells, this is not an important limitation (because we can’t even approach it due to practical issues).

A statistical approach So far in this class, you have learned classical thermodynamics. Starting next week, we will be studying statistical mechanics. Thermodynamics may look "theoretical" because it involves a lot of math, but ultimately it is an experimental science. Thermodynamics puts severe (and interesting) constraints on equations of state, but can never tell us what the equations of state actually are. Similarly, thermodynamics can allow us to measure one quantity and use it to predict the result of a very different measurement. But it could never give us the ideal gas law, or the internal energy of an ideal gas.

Statistical mechanics is the theoretical counterpart of thermodynamics. It's how we can predict thermodynamic quantities from first principles. It also allows us to use thermodynamic measurements to extract microscopic properties of a system. From quantum mechanics, you know that given a Hamiltonian describing a system, you could (in principle) solve for all the possible eigenstates and their energies. But how can you know which of those states a given system will be in? And given that state, how can you predict the result of interactions of the system with its surroundings, when you don't know the Hamiltonian or eigenstates of the surroundings? These are the questions that are answered by statistical mechanics.

Inputs to stat mech: Energies and eigenstates of the Hamiltonian. We can actually get much of what interests us out of just the energies, just as we could compute all the thermodynamic properties from U(S,V ), if only we knew what it was... or from G(T, p), as you did in your homework.

Output of stat mech: Probabilities (at a given temperature) of each energy eigenstate; $U$, $S$, $p$, $H$ and all thermodynamic functions. Statistical mechanics is awkward, so we will mostly want to use thermodynamics approaches when we can, e.g. if we know $U$ and $T$ and $S$, we can just use $F = U - TS$.

Large numbers In macroscopic systems, there are many atoms and molecules, typically around $10^{23}$. As a result, we have no hope of actually examining every possible eigenstate, nor could we practically determine the precise microstate of the system. Instead, we need to examine how likely various states are. Average properties become extremely well-defined when many things are averaged. If I flip one coin, I'll get 50% heads, but with a pretty large uncertainty. When I flip 100 coins, I get 50 heads $\pm$ 10 coins. If I flip $10^{22}$ coins, I will get $5\times10^{21}$ heads $\pm\,10^{11}$. This is a lot of uncertainty in the total number of heads, but a very small uncertainty in the fraction of coins that will end up being heads.

11 Monday: Statistical approach

The fairness function

The primary quantity in statistical mechanics is the probability $P_i$ of finding the system in eigenstate $i$. Once we know the probability of each eigenstate for any given state, we will be able to compute every thermodynamic property of the system. The approach we are going to use is to state that the probabilities are those which maximize the fairness (or minimize the bias). So we need to define a fairness function $\mathcal{F}$ that we can maximize. First, let's talk about some properties the fairness function should satisfy.

a) it should be continuous

b) it should be symmetric

$$\mathcal{F}(P_1, P_2, P_3, \ldots) = \mathcal{F}(P_3, P_2, P_1, \ldots) \tag{11.1}$$

c) it should be minimum when $P_2 = P_3 = \ldots = 0$ and $P_1 = 1$
$$\mathcal{F}(1, 0, 0, \ldots) = \text{minimum} \tag{11.2}$$

d) it should be maximum when $P_1 = P_2 = P_3 = \cdots$
$$\mathcal{F}(P, P, P, \ldots) = \text{maximum} \tag{11.3}$$

e) "Addition rule": if I have two uncorrelated systems, then their fairnesses should add (extensivity!!!). This corresponds to the following:

$$\mathcal{F}(P_A, P_B) + \mathcal{F}(P_1, P_2, P_3) = \mathcal{F}(P_AP_1, P_AP_2, P_AP_3, P_BP_1, P_BP_2, P_BP_3) \tag{11.4}$$
There aren't many functions which satisfy all these rules! Write on board:
$$\text{Fairness} = \mathcal{F} = -k\sum_i^{\text{all states}} P_i\ln P_i$$
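To get a feel for this function, here is a tiny numeric sketch (my own illustration, with $k = 1$): the fairness is zero for a certain outcome and largest for equal probabilities.

```python
import numpy as np

def fairness(P, k=1.0):
    """-k * sum(P ln P), using the convention that 0 ln 0 = 0."""
    P = np.asarray(P, dtype=float)
    nz = P > 0
    return -k * np.sum(P[nz] * np.log(P[nz]))

print(fairness([1, 0, 0, 0]))              # 0.0: maximally unfair
print(fairness([0.7, 0.1, 0.1, 0.1]))      # about 0.94
print(fairness([0.25, 0.25, 0.25, 0.25]))  # about 1.39 = ln(4): maximally fair
```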

This particular function satisfies all these constraints. It is continuous, symmetric, minimum when maximally unfair and maximum when maximally fair. Continuous and symmetric are reasonably obvious. Let's show $\mathcal{F}$ is minimum when maximally unfair. This is when one $P_i = 1$ and the rest are zero.

$$\lim_{P\to0} P\ln P = 0\times\infty \tag{11.5}$$
$$= \lim_{P\to0}\frac{\ln P}{\frac{1}{P}} \tag{11.6}$$
$$= \lim_{P\to0}\frac{\frac{1}{P}}{-\frac{1}{P^2}} \tag{11.7}$$
$$= 0 \tag{11.8}$$

We next consider the contribution from the $P = 1$ term, but that's easy, since $\ln 1 = 0$, so that term is also zero. Since $P\ln P$ can never be positive for $0 \le P \le 1$, we can see that the maximally unfair situation has minimum fairness, as it should. Next, let's consider the maximally-fair situation, where $P_1 = P_2 = \cdots = \frac{1}{N}$. To demonstrate that this is maximum fairness is more tricky, and will be addressed next, when we will go about maximizing the fairness.

Demonstrating extensivity Consider two systems A and B, which you can think of as dice or quantum mechanical systems with several eigenstates. Each system has a set of possible states $i$. If the two are uncorrelated, then we can describe their separate probabilities as $P_i^A$ and $P_j^B$. What is the probability $P_{ij}$ of finding A in state $i$ and B in state $j$?

$$P_{ij} = P_i^AP_j^B \tag{11.9}$$
Suppose the two systems are correlated, and I know that the probability of finding the systems in states $i$ and $j$ respectively is $P_{ij}$ for any states $i$ and $j$. How would I find the probability of finding A in state $n$, regardless of the state of B?

$$P_n^A = \sum_j P_{nj} \tag{11.10}$$
Given the above two uncorrelated systems, A and B, we can show that the fairness of the combined system is equal to the sum of the fairnesses of the two separate systems. This means that $\mathcal{F}$ is extensive.
$$\mathcal{F}_A = -k\sum_i P_i^A\ln P_i^A \tag{11.11}$$
$$\mathcal{F}_B = -k\sum_j P_j^B\ln P_j^B \tag{11.12}$$
$$\mathcal{F}_{AB} = -k\sum_{ij} P_{ij}\ln\left(P_{ij}\right) \tag{11.13}$$
$$= -k\sum_{ij} P_i^AP_j^B\ln\left(P_i^AP_j^B\right) \tag{11.14}$$
$$= -k\sum_{ij} P_i^AP_j^B\left(\ln P_i^A + \ln P_j^B\right) \tag{11.15}$$
$$= -k\left(\sum_{ij} P_i^AP_j^B\ln P_i^A\right) - k\left(\sum_{ij} P_i^AP_j^B\ln P_j^B\right) \tag{11.16}$$
$$= -k\left(\sum_i P_i^A\ln P_i^A\right)\left(\sum_j P_j^B\right) - k\left(\sum_j P_j^B\ln P_j^B\right)\left(\sum_i P_i^A\right) \tag{11.17}$$
$$= -k\sum_i P_i^A\ln P_i^A - k\sum_j P_j^B\ln P_j^B \tag{11.18}$$
$$= \mathcal{F}_A + \mathcal{F}_B \tag{11.19}$$

Maximizing the fairness I have introduced the fairness function, and argued that the fairness was maximized. Now we’ll look at how we go about maximizing it.

Usually, (analytically) we maximize functions by setting their derivatives equal to zero. So we could maximize the fairness by
$$\frac{\partial\mathcal{F}}{\partial P_i} = 0 \tag{11.20}$$
$$= -k_B\left(\ln P_i + 1\right) \tag{11.21}$$

Using the formula for the fairness function, what can this tell us about $P_i$? It doesn't make much sense at all... it means $P_i = e^{-1}$. :( There is a problem with this, which is that $P_i$ can't take just any values, because these are probabilities, and they have to add up to one. This is a constraint, and to solve a constrained maximization, we use the method of Lagrange multipliers. We first define a Lagrangian¹:

$$\mathcal{L} = \mathcal{F} + \alpha k_B\left(1 - \sum_i P_i\right) \tag{11.22}$$

¹The term Lagrangian means different things in different fields. In this case, we aren't using the normal Physics meaning for Lagrangian, but rather the definition from optimization theory, since we are optimizing.

Note that since the added term should be zero, we haven't changed the thing we want to maximize. Now we maximize this in the same way, but we've got some extra terms that show up in our derivatives. We could, by the way, obtain our constraint by maximizing over $\alpha$ (the Lagrange multiplier) as well as the probabilities $P_i$. When we maximize $\mathcal{L}$, we find
$$\frac{\partial\mathcal{L}}{\partial P_i} = -k_B\left(\ln P_i + 1\right) - \alpha k_B \tag{11.23}$$
$$= 0 \tag{11.24}$$

$$\ln P_i + 1 = -\alpha \tag{11.25}$$
$$P_i = e^{-1-\alpha} \tag{11.26}$$
This tells us that all the states are equally probable. To find the actual probabilities, we would need to also apply the constraint:

$$0 = 1 - \sum_i P_i \tag{11.27}$$
$$= 1 - \sum_i e^{-1-\alpha} \tag{11.28}$$
$$= 1 - Ne^{-1-\alpha} \tag{11.29}$$
$$e^{-1-\alpha} = \frac{1}{N} \tag{11.30}$$
$$P_i = \frac{1}{N} \tag{11.31}$$
which tells us that the probability of each state is equal to one over the total number of states. This makes some degree of sense: if all states are equally probable, then it does make sense that the probability of each state must be one in $N$. However, this doesn't really make much physical sense, since it means states with a huge energy are just as likely as states with a very small energy. The reason is because we haven't yet taken into account the energy. We did, however, succeed in demonstrating that the maximum fairness occurs when all states are equally probable, as promised.

Weighted averages Most thermodynamic quantities can be expressed as weighted averages over all possible eigenstates (or microstates). For instance, the internal energy is given by:

$$U = \sum_i P_iE_i \tag{11.33}$$
Note that this will probably not be an eigenvalue of the energy, but that's okay. The energy eigenvalues are so close for the total energy of a macroscopic object that we couldn't

distinguish them anyhow. Any thermodynamic quantity that is defined for a microstate will be computed using precisely this sort of average: this will also cover magnetization and pressure, for instance, or the intensity of electromagnetic radiation at a given frequency.

Internal energy constraint If the energy of a system is actually constrained (as it generally is), then we should be applying a second constraint, besides the one that allows us to normalize our probabilities.

$$\mathcal{L} = -k_B\sum_i P_i\ln P_i + \alpha k_B\left(1 - \sum_i P_i\right) + \beta k_B\left(U - \sum_i P_iE_i\right) \tag{11.34}$$
where $\alpha$ and $\beta$ are the two Lagrange multipliers. We want to maximize this, so we set its derivatives to zero:
$$\frac{\partial\mathcal{L}}{\partial P_i} = 0 \tag{11.35}$$
$$= -k_B\left(\ln P_i + 1\right) - k_B\alpha - \beta k_BE_i \tag{11.36}$$
$$\ln P_i = -1 - \alpha - \beta E_i \tag{11.37}$$
At this point, it is convenient to invoke the normalization constraint...

$$\sum_i P_i = 1 \tag{11.38}$$
$$1 = \sum_i e^{-1-\alpha-\beta E_i} \tag{11.39}$$
$$1 = e^{-1-\alpha}\sum_i e^{-\beta E_i} \tag{11.40}$$
$$e^{1+\alpha} = \sum_i e^{-\beta E_i} \tag{11.41}$$

Write on board:
$$Z \equiv \sum_i^{\text{all states}} e^{-\beta E_i}$$
Write on board:
$$P_i = \frac{e^{-\beta E_i}}{Z}$$

$$P_i = \frac{\text{Boltzmann factor}}{\text{partition function}} \tag{11.43}$$

At this point, we haven't yet solved for $\beta$, and to do so, we'd need to invoke the internal energy constraint:

$$U = \sum_i E_iP_i \tag{11.44}$$
$$U = \frac{\sum_i E_ie^{-\beta E_i}}{Z} \tag{11.45}$$
The partition function is a particularly useful quantity. Physically, it is nothing more than the normalization factor needed in order to compute probabilities, but in practice, finding that normalization is typically the hardest part of a calculation (once you have found all the energy eigenvalues, that is). One interesting question is whether the partition function is intensive or extensive. To examine that question, we will look at the partition function of two combined, uncorrelated systems, similar to what we examined earlier.

$$Z_A = \sum_i e^{-\beta E_i^A} \tag{11.46}$$
$$Z_B = \sum_j e^{-\beta E_j^B} \tag{11.47}$$
$$Z_{AB} = \sum_{ij} e^{-\beta\left(E_i^A + E_j^B\right)} \tag{11.48}$$
$$= \sum_{ij} e^{-\beta E_i^A}e^{-\beta E_j^B} \tag{11.49}$$
$$= \sum_i\sum_j e^{-\beta E_i^A}e^{-\beta E_j^B} \tag{11.50}$$
$$= \left(\sum_i e^{-\beta E_i^A}\right)\left(\sum_j e^{-\beta E_j^B}\right) \tag{11.51}$$
$$= Z_AZ_B \tag{11.52}$$
So the partition function of two uncorrelated systems is multiplied rather than added. This means that the log of the partition function is itself extensive! It will turn out to be a thermodynamic state function that you have already encountered, as we will see tomorrow. One consequence of this logarithmic extensivity is that if you have $N$ identical non-interacting systems with uncorrelated probabilities, you can write their combined partition function as

$$Z = Z_1^N \tag{11.54}$$
where $Z_1$ is the partition function of a single non-interacting system.
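To make the machinery concrete, here is a minimal numeric sketch (my own illustration, with a made-up energy splitting) of the Boltzmann probabilities and internal energy for a single two-state system:

```python
import numpy as np

kB = 1.380649e-23              # J/K
E = np.array([0.0, 1.0e-21])   # J: ground state and one excited state (made up)
T = 300.0                      # K

beta = 1.0 / (kB * T)
boltzmann = np.exp(-beta * E)  # Boltzmann factors
Z = boltzmann.sum()            # partition function
P = boltzmann / Z              # probabilities P_i = e^{-beta E_i} / Z
U = (P * E).sum()              # internal energy, U = sum_i P_i E_i

print(f"Z = {Z:.4f}, P = {np.round(P, 3)}, U = {U:.3e} J")
```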

12 Tuesday: From statistics to thermodynamics

Let’s talk a bit about fairness. We used the fairness to find the probabilities of being in the various eigenstates, by assuming that the “fairest” distribution would prevail. If you bring two separate systems together and allow them to equilibrate, then you would expect that the net fairness would either remain the same or would increase. This sounds a little like entropy in the second law, in that the net entropy of system plus surroundings can increase or stay the same, but cannot decrease. The maximum value of the fairness for a given system (which is the value it will have in equilibrium) is its entropy.

Let's look at the maximum value of the fairness (a.k.a. entropy), which is

$$\mathcal{F} = -k\sum_i P_i\ln P_i \tag{12.1}$$
$$U = \sum_i P_iE_i \tag{12.2}$$
$$P_i = \frac{e^{-\beta E_i}}{Z} \tag{12.3}$$
"On your big white boards, solve for the fairness as a function of $U$, $\beta$ and $Z$, i.e. eliminate the sum over $i$."

$$\mathcal{F}_{\max} = -k_B\sum_i P_i\ln P_i \tag{12.4}$$
$$= -k_B\sum_i\frac{e^{-\beta E_i}}{Z}\ln\left(\frac{e^{-\beta E_i}}{Z}\right) \tag{12.5}$$
$$= -k_B\sum_i\frac{e^{-\beta E_i}}{Z}\left(-\beta E_i - \ln Z\right) \tag{12.6}$$
$$= k_B\beta\sum_i\frac{E_ie^{-\beta E_i}}{Z} + k_B\ln Z\sum_i\frac{e^{-\beta E_i}}{Z} \tag{12.7}$$
$$\mathcal{F}_{\max} = k_B\beta U + k_B\ln Z \tag{12.8}$$

At this point, we may want to solve for $U$ again, to get yet another relationship for $U$:

$$U = \frac{\mathcal{F}_{\max}}{k_B\beta} - \frac{\ln Z}{\beta} \tag{12.9}$$

We saw before that ln Z was extensive, so we can now conclude that β is intensive. From which it is also clear that entropy is extensive (which we already knew). Since we believe that

$$S = \mathcal{F}_{\max}, \tag{12.10}$$
let's see what else we can extract from Equation 12.9 for $U$. We also know that

$$dU = T\,dS - p\,dV \tag{12.11}$$
$$T = \left(\frac{\partial U}{\partial S}\right)_V \tag{12.12}$$
Since we have an equation for $U$ in terms of $S$, we just need to figure out how to hold $V$ constant, and we'll know what $T$ is! What does it mean to hold $V$ constant? It hasn't shown up in any of our statistical equations. If we change the volume, we will change the energy eigenvalues, so if we hold $V$ constant (and in general, do no work) then the energy eigenvalues are fixed. So until we explicitly add states with different volumes, $V$ is held constant in our calculations, and thus we should be able to evaluate the derivative of $U$ with respect to $S$ at fixed $V$ to find the temperature.

$$U = \frac{S}{k_B\beta} - \frac{\ln Z}{\beta} \tag{12.13}$$
$$T = \left(\frac{\partial U}{\partial S}\right)_V \tag{12.14}$$
$$= \frac{1}{k\beta} - \frac{S}{k\beta^2}\left(\frac{\partial\beta}{\partial S}\right)_V + \frac{\ln Z}{\beta^2}\left(\frac{\partial\beta}{\partial S}\right)_V - \frac{1}{Z\beta}\left(\frac{\partial Z}{\partial S}\right)_V \tag{12.15}$$
$$= \frac{1}{k\beta} - \frac{S}{k\beta^2}\left(\frac{\partial\beta}{\partial S}\right)_V + \frac{\ln Z}{\beta^2}\left(\frac{\partial\beta}{\partial S}\right)_V - \frac{1}{Z\beta}\left(\frac{\partial Z}{\partial\beta}\right)_V\left(\frac{\partial\beta}{\partial S}\right)_V \tag{12.16}$$
$$= \frac{1}{k\beta} + \frac{1}{\beta}\left(-\frac{S}{k\beta} + \frac{\ln Z}{\beta} - \frac{1}{Z}\left(\frac{\partial Z}{\partial\beta}\right)_V\right)\left(\frac{\partial\beta}{\partial S}\right)_V \tag{12.17}$$
$$= \frac{1}{k\beta} + \frac{1}{\beta}\left(-\frac{S}{k\beta} + \frac{\ln Z}{\beta} + U\right)\left(\frac{\partial\beta}{\partial S}\right)_V \tag{12.18}$$
$$= \frac{1}{k\beta} \tag{12.19}$$
$$\beta = \frac{1}{kT} \tag{12.20}$$

where in step 12.18, we used the equation for $U$, Equation 12.9, which saved us the tedium of evaluating $\left(\frac{\partial\beta}{\partial S}\right)_V$. By inserting this definition for $\beta$ into Equation 12.9, we can see that

$$U = TS - kT\ln Z \tag{12.21}$$
$$-kT\ln Z = U - TS \tag{12.22}$$
$$F = -k_BT\ln Z \tag{12.23}$$
So it turns out that the log of the partition function just about gives us the Helmholtz free energy! :) This is often a bit more useful than our expression for $U$, since we know the derivatives of $F$ with regard to $T$:

$$dF = -S\,dT - p\,dV \tag{12.24}$$
So we could conveniently evaluate $S$ by taking a derivative of the Helmholtz free energy, which would take us in a bit of a circle, but would allow us to express $S$ directly in terms of the partition function $Z$. Once we have computed $F(T,V)$ using statistical mechanics (which really only requires that we evaluate the partition function), we can use ordinary thermodynamics to compute all other thermodynamic quantities!
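As a small numeric sketch of that workflow (my own illustration, reusing the hypothetical two-level system from the last chapter): compute $Z$, get $F = -k_BT\ln Z$, and then take a numerical temperature derivative to recover $S$ and $U$.

```python
import numpy as np

kB = 1.380649e-23
E = np.array([0.0, 1.0e-21])   # J, made-up two-level spectrum

def F(T):
    """Helmholtz free energy from the partition function."""
    Z = np.exp(-E / (kB * T)).sum()
    return -kB * T * np.log(Z)

T, dT = 300.0, 0.01
S = -(F(T + dT) - F(T - dT)) / (2 * dT)  # S = -(dF/dT)_V, centered difference
U = F(T) + T * S                         # from F = U - TS
print(f"F = {F(T):.3e} J, S = {S:.3e} J/K, U = {U:.3e} J")
```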

13 Wednesday: Statistical mechanics of air

I’m going to quickly review and introduce the energy eigenvalues for some simple quantum mechanical problems. For each of the following, I will sketch out the potential, then sketch the wavefunctions and the spacing of the energy levels.

Particle in a box The first problem you handled was a particle in an infinite square well potential:

[Figure: the infinite square well, with the first few wavefunctions and their energies (in units of $\frac{\hbar^2\pi^2}{2mL^2}$) plotted against $x/L$.]

$$\mathcal{H} = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} \tag{13.1}$$
$$E_n = \frac{\hbar^2\pi^2n^2}{2mL^2} \tag{13.2}$$

where $n \ge 1$. We could solve the same problem in three dimensions, and the three coordinates would separate (i.e. we could use separation of variables), and we would have:

$$\mathcal{H} = -\frac{\hbar^2}{2m}\nabla^2 \tag{13.3}$$
$$= -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2} - \frac{\hbar^2}{2m}\frac{\partial^2}{\partial y^2} - \frac{\hbar^2}{2m}\frac{\partial^2}{\partial z^2} \tag{13.4}$$
$$E_{n_xn_yn_z} = \frac{\hbar^2\pi^2\left(n_x^2 + n_y^2 + n_z^2\right)}{2mL^2} \tag{13.5}$$

Rigid rotor The next moderately simple problem is the rigid rotator. In this case, the only energy in the Hamiltonian is the angular kinetic energy:

$$\mathcal{H} = \frac{\hat{L}^2}{2I} \tag{13.6}$$
$$E_{lm} = \frac{\hbar^2l(l+1)}{2I} \tag{13.7}$$

Physics 423 31 Wednesday 4/24/2014 Simple harmonic oscillator 3.0 Finally, we have a simple problem that 2.5 hasn’t yet come up in the paradigms, which is the simple harmonic ) 2.0 hω oscillator. In this case we have both kinetic and potential energy: (¯ 1.5

E 1.0 h¯2 ∂2 mω2 0.5 = − + 0 x2 (13.8) 0.0 2 2 1 0 1 2 H 2m ∂x 2 − − ¯h 1 x mω En = n + hω¯ 0 (13.9) 2 q    Of course, you also studied the atom, but its solution is less general than those we’ve listed here. Any diatomic behaves like a rigid rotator and a simple harmonic oscillator, and like a particle in a box, too!

Diatomic ideal gas Let's consider a diatomic ideal gas, such as nitrogen. In this case, the energy levels of a single molecule are given by the sum of the translational kinetic energy, rotational kinetic energy and vibrational energy, both kinetic and potential:
$$E^{(1)}_{n_xn_yn_zn_vlm} = \frac{\hbar^2\pi^2\left(n_x^2 + n_y^2 + n_z^2\right)}{2mL^2} + \frac{\hbar^2l(l+1)}{2I} + \left(n_v + \tfrac12\right)\hbar\omega_0 \tag{13.10}$$
That's an awful lot of quantum numbers, and that's just one molecule, and we're neglecting any possible electronic excited states, anharmonicity or coupling of rotation with vibration! How does this change when we've got $N$ molecules all confined in the same box? We've already talked about how energies relate when we combine systems:

$$E_{\text{tot}} = \sum_i^N E^{(1)}_i \tag{13.11}$$
where I've left out all the quantum numbers, since there are so very many. If we want to know the internal energy, we'll need to sum over every possible state, with the probability of that particular state. To do this, we'll need to know the partition function, so let's start with that.
$$Z = \sum^{\text{all states}} e^{-\beta E_{\text{this state}}} \tag{13.12}$$

$$= \sum_{n_{x1}n_{y1}n_{z1}n_{v1}l_1m_1,\,n_{x2}n_{y2}n_{z2}n_{v2}l_2m_2,\,\cdots} e^{-\beta\left(E^{(1)}_{n_{x1}n_{y1}n_{z1}n_{v1}l_1m_1} + E^{(2)}_{n_{x2}n_{y2}n_{z2}n_{v2}l_2m_2} + \cdots\right)} \tag{13.13}$$
...except that this isn't quite right. We can't distinguish between the different molecules... when we swap two of them in the sum, we're really talking about the same state! We can fix this double-counting by dividing by an $N!$, which gives us:
$$Z = \frac{1}{N!}\sum_{n_{x1}n_{y1}n_{z1}n_{v1}l_1m_1,\,n_{x2}n_{y2}n_{z2}n_{v2}l_2m_2,\,\cdots} e^{-\beta\left(E^{(1)}_{n_{x1}n_{y1}n_{z1}n_{v1}l_1m_1} + E^{(2)}_{n_{x2}n_{y2}n_{z2}n_{v2}l_2m_2} + \cdots\right)} \tag{13.14}$$
$$= \frac{1}{N!}\sum_{n_{x1}n_{y1}n_{z1}n_{v1}l_1m_1,\,n_{x2}n_{y2}n_{z2}n_{v2}l_2m_2,\,\cdots} e^{-\beta E^{(1)}_{n_{x1}n_{y1}n_{z1}n_{v1}l_1m_1}}e^{-\beta E^{(2)}_{n_{x2}n_{y2}n_{z2}n_{v2}l_2m_2}}\cdots \tag{13.15}$$
$$= \frac{1}{N!}\left(\sum_{n_{x1}n_{y1}n_{z1}n_{v1}l_1m_1} e^{-\beta E^{(1)}_{n_{x1}n_{y1}n_{z1}n_{v1}l_1m_1}}\right)\left(\sum_{n_{x2}n_{y2}n_{z2}n_{v2}l_2m_2} e^{-\beta E^{(2)}_{n_{x2}n_{y2}n_{z2}n_{v2}l_2m_2}}\right)\cdots \tag{13.16}$$
$$= \frac{1}{N!}\left(\sum_{n_xn_yn_zn_vlm} e^{-\beta E^{(1)}_{n_xn_yn_zn_vlm}}\right)^N \tag{13.17}$$
$$= \frac{1}{N!}\left(\sum_{n_xn_yn_zn_vlm} e^{-\beta\left(\frac{\hbar^2\pi^2\left(n_x^2+n_y^2+n_z^2\right)}{2mL^2} + \frac{\hbar^2l(l+1)}{2I} + \left(n_v+\frac12\right)\hbar\omega_0\right)}\right)^N \tag{13.18}$$
$$= \frac{1}{N!}\left(\sum_{n_xn_yn_zn_vlm} e^{-\beta\frac{\hbar^2\pi^2\left(n_x^2+n_y^2+n_z^2\right)}{2mL^2}}e^{-\beta\frac{\hbar^2l(l+1)}{2I}}e^{-\beta\left(n_v+\frac12\right)\hbar\omega_0}\right)^N \tag{13.19}$$
$$= \frac{1}{N!}\left(\sum_{n_xn_yn_z} e^{-\beta\frac{\hbar^2\pi^2\left(n_x^2+n_y^2+n_z^2\right)}{2mL^2}}\sum_{n_v} e^{-\beta\left(n_v+\frac12\right)\hbar\omega_0}\sum_{lm} e^{-\beta\frac{\hbar^2l(l+1)}{2I}}\right)^N \tag{13.20}$$
$$= \frac{1}{N!}\left(\sum_{n_xn_yn_z} e^{-\beta\frac{\hbar^2\pi^2\left(n_x^2+n_y^2+n_z^2\right)}{2mL^2}}\right)^N\left(\sum_{n_v} e^{-\beta\left(n_v+\frac12\right)\hbar\omega_0}\right)^N\left(\sum_{lm} e^{-\beta\frac{\hbar^2l(l+1)}{2I}}\right)^N \tag{13.21}$$
Now, if we were computing the internal energy $U$, we'd be able to do something very similar:

$$U = \sum_i P_iE_i \tag{13.22}$$
$$= \frac{1}{N!}\sum_{n_{x1}n_{y1}n_{z1}n_{v1}l_1m_1,\,n_{x2}n_{y2}n_{z2}n_{v2}l_2m_2,\,\cdots} P_{\ldots}E_{\text{tot}} \tag{13.23}$$
$$= \frac{1}{N!}\sum_{n_{x1}n_{y1}n_{z1}n_{v1}l_1m_1,\,n_{x2}n_{y2}n_{z2}n_{v2}l_2m_2,\,\cdots} P_{\ldots}\left(E^{(1)}_{n_{x1}n_{y1}n_{z1}n_{v1}l_1m_1} + E^{(2)}_{n_{x2}n_{y2}n_{z2}n_{v2}l_2m_2} + \cdots\right) \tag{13.24}$$
$$= \frac{N}{N!}\sum_{n_{x1}n_{y1}n_{z1}n_{v1}l_1m_1,\,n_{x2}n_{y2}n_{z2}n_{v2}l_2m_2,\,\cdots} P_{\ldots}E^{(1)}_{n_{x1}n_{y1}n_{z1}n_{v1}l_1m_1} \tag{13.25}$$
$$= N\sum_{n_xn_yn_zn_vlm} P_{n_xn_yn_zn_vlm}E^{(1)}_{n_xn_yn_zn_vlm} \tag{13.26}$$
$$= N\sum_{n_xn_yn_zn_vlm} P_{n_xn_yn_zn_vlm}\left(\frac{\hbar^2\pi^2\left(n_x^2+n_y^2+n_z^2\right)}{2mL^2} + \frac{\hbar^2l(l+1)}{2I} + \left(n_v+\tfrac12\right)\hbar\omega_0\right) \tag{13.27}$$
$$= N\left(\sum_{n_xn_yn_z} P_{n_xn_yn_z}\frac{\hbar^2\pi^2\left(n_x^2+n_y^2+n_z^2\right)}{2mL^2} + \sum_{lm} P_{lm}\frac{\hbar^2l(l+1)}{2I} + \sum_{n_v} P_{n_v}\left(n_v+\tfrac12\right)\hbar\omega_0\right) \tag{13.28}$$

So you can see, if we can work out the average translational kinetic energy, rotational energy and vibrational energy of a molecule in this gas, then we’ll easily have the total internal energy of this system just by adding everything up and multiplying by N.

14 Thursday: Statistical mechanics of air

Limiting cases We began, yesterday, to look at the high- and low-temperature limiting cases of the internal energy from various terms in the Hamiltonian. Today, we will continue that activity, but I want to begin by talking a bit about what we mean by limiting cases, and how we find the results. Let's examine the following function. The point of examining a limiting case is to understand a function, which is often too complicated to understand without simplification. Limiting cases often allow us to make predictions that are valid over a wide range of input parameters.

[Figure: a function $f(T)$ plotted against $T/T_0$, rising from zero at $T = 0$ and becoming linear in $T$ at high temperature.]

In the plot above, you can see that the behavior at low temperature is different from that at high temperature. At high temperature, the function is clearly proportional to temperature. To find this out from its formula (and to find the constant of proportionality), we would need to examine the limiting case. It is also clear from the plot that the function goes to zero at zero temperature, but it isn't clear precisely how it does so. It clearly isn't linear, but is it quadratic? cubic? By finding the functional form in the limit of low and high temperature, we are able to create very simple functions that accurately describe this function at most temperatures.

The standard approach to solving for a limiting case is to use a Taylor expansion, and you should have learned this method in the first paradigm. The procedure is to first identify a small dimensionless quantity, and then rewrite your function in terms of that quantity. You then write down the Taylor expansion of that quantity, and keep just the first couple of non-zero terms. The Taylor expansion method fails if you encounter a function whose derivatives are undefined when your small quantity is zero. When you encounter such a case, you need to look for an alternative small quantity, or perhaps a hierarchy of sizes, where you can demonstrate that certain terms are much smaller than the others. Finally, there is a question of how to handle limiting cases of summations, which can foil Taylor expansions by causing your "small" term to not always be small. If the difference between neighboring terms in a sum can be shown to be small, a sum may be converted to an integral, which may be easier to perform. This typically happens in the high temperature limit, in which systems behave classically, and quantization becomes unimportant.

15 Friday:

Yesterday we worked out the internal energy per molecule of a diatomic gas associated with translational kinetic energy, rotational kinetic energy, and vibrational energy (which has both a kinetic and potential component). For each case (except translation), you considered both the low- and high-temperature limits. In each case you had sums that looked like

$$\sum_i\left(\text{something}\right)e^{-\beta E_i} \tag{15.1}$$
and $\beta\left(E_1 - E_0\right)$ was either large or small. For the high-temperature limits, you needed to convert summations into integrals, which was a reasonable approximation because the change of the thing being summed (summand?) was small as you changed the quantum numbers by one, so treating it as a continuum was okay. For the low-temperature limit, you had an easier scenario, as all the Boltzmann factors were very small compared with the ground state's. So you could just truncate the sum after a few terms.

Translation at high T When looking at the translational kinetic energy at high temperature, we can see that our small quantity is $\frac{\beta\hbar^2\pi^2}{2mL^2}$. Because this quantity is small, our exponential isn't going to change much as $n_x$ etc. make small changes, so we can turn the summation into an integration.

$$\frac{U}{N} = \sum_{n_xn_yn_z}\frac{\hbar^2\pi^2\left(n_x^2+n_y^2+n_z^2\right)}{2mL^2}\,\frac{1}{Z}e^{-\beta\frac{\hbar^2\pi^2\left(n_x^2+n_y^2+n_z^2\right)}{2mL^2}} \tag{15.2}$$
$$\approx \int_0^\infty\!\!\int_0^\infty\!\!\int_0^\infty\frac{\hbar^2\pi^2\left(n_x^2+n_y^2+n_z^2\right)}{2mL^2}\,\frac{1}{Z}e^{-\beta\frac{\hbar^2\pi^2\left(n_x^2+n_y^2+n_z^2\right)}{2mL^2}}\,dn_x\,dn_y\,dn_z \tag{15.3}$$
$$= \frac{\hbar^2\pi^2}{2mL^2}\frac{1}{Z}\int_0^\infty\!\!\int_0^\infty\!\!\int_0^\infty\left(n_x^2+n_y^2+n_z^2\right)e^{-\beta\frac{\hbar^2\pi^2\left(n_x^2+n_y^2+n_z^2\right)}{2mL^2}}\,dn_x\,dn_y\,dn_z \tag{15.4}$$
At this point, we want to make a substitution that will remove the "physics" from the integral. This is a very helpful trick, which you should always try for if you can manage it. When you've got a definite integral that goes to $\infty$, you can often perform a change of variables to a dimensionless variable which removes all physical quantities from the integral. Once you've done this, the integral is just some number, which is either going to be 0, or something close to 1 (in order of magnitude).
$$u_x = \sqrt{\beta}\,\frac{\hbar\pi}{\sqrt{2m}\,L}\,n_x \tag{15.5}$$
$$\frac{U}{N} = \frac{\hbar^2\pi^2}{2mL^2}\frac{1}{Z}\left(\frac{\sqrt{2m}\,L\sqrt{k_BT}}{\hbar\pi}\right)^5\int_0^\infty\!\!\int_0^\infty\!\!\int_0^\infty\left(u_x^2+u_y^2+u_z^2\right)e^{-\left(u_x^2+u_y^2+u_z^2\right)}\,du_x\,du_y\,du_z \tag{15.6}$$
$$= \left(\frac{\sqrt{2m}\,L}{\hbar\pi}\right)^3\left(k_BT\right)^{5/2}\frac{1}{Z}\int_0^\infty\!\!\int_0^\infty\!\!\int_0^\infty\left(u_x^2+u_y^2+u_z^2\right)e^{-\left(u_x^2+u_y^2+u_z^2\right)}\,du_x\,du_y\,du_z \tag{15.7}$$
At this point we've extracted the physics from the integral. It's clearly not zero, and it also isn't infinite, so it's just some number that we can work out later. But we still need $Z$...

$$Z = \sum_{n_xn_yn_z} e^{-\beta\frac{\hbar^2\pi^2\left(n_x^2+n_y^2+n_z^2\right)}{2mL^2}} \tag{15.8}$$
$$\approx \int_0^\infty\!\!\int_0^\infty\!\!\int_0^\infty e^{-\beta\frac{\hbar^2\pi^2\left(n_x^2+n_y^2+n_z^2\right)}{2mL^2}}\,dn_x\,dn_y\,dn_z \tag{15.9}$$
$$u_x = \sqrt{\frac{\beta}{2m}}\,\frac{\hbar\pi}{L}\,n_x \tag{15.10}$$
$$Z = \left(\frac{\sqrt{2mk_BT}\,L}{\hbar\pi}\right)^3\int_0^\infty\!\!\int_0^\infty\!\!\int_0^\infty e^{-\left(u_x^2+u_y^2+u_z^2\right)}\,du_x\,du_y\,du_z \tag{15.11}$$
$$= \left(\frac{\sqrt{2mk_BT}\,L}{\hbar\pi}\right)^3\left(\int_0^\infty e^{-u^2}\,du\right)^3 \tag{15.12}$$
Once again, we've extracted the physics from the integral, leaving a dry, dimensionless husk. In this case, I cleaned that husk up a bit, so it'll be a bit more compact. Putting these together (with a minimum of simplification), we get:

$$\frac{U}{N} = \frac{\left(\frac{\sqrt{2m}\,L}{\hbar\pi}\right)^3\left(k_BT\right)^{5/2}\,3\int_0^\infty u^2e^{-u^2}\,du\left(\int_0^\infty e^{-u^2}\,du\right)^2}{\left(\frac{\sqrt{2mk_BT}\,L}{\hbar\pi}\right)^3\left(\int_0^\infty e^{-u^2}\,du\right)^3} \tag{15.13}$$
$$= 3k_BT\,\frac{\int_0^\infty u^2e^{-u^2}\,du}{\int_0^\infty e^{-u^2}\,du} \tag{15.14}$$
$$= \frac32k_BT \tag{15.15}$$
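If you don't remember the values of those Gaussian integrals, the dimensionless ratio is easy to check numerically (a quick illustration of my own):

```python
import numpy as np
from scipy.integrate import quad

# Check the ratio in U/N = 3 kT * I2 / I0 for the translational energy.
I2, _ = quad(lambda u: u**2 * np.exp(-u**2), 0, np.inf)
I0, _ = quad(lambda u: np.exp(-u**2), 0, np.inf)
print(I2 / I0)  # 0.5, so U/N = (3/2) kT
```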

Rotation at low T Low $T$ in the context of rotation means that $\frac{2Ik_BT}{\hbar^2} \ll 1$, which means that $\frac{\beta\hbar^2}{2I} \gg 1$, which means that the exponentials in the partition function or internal energy are small quantities, each one much smaller than the last (by a factor of around $e^{-\beta\hbar^2/2I}$, which is way less than one). So we can just keep the first few terms.

$$Z = \sum_{l=0}^\infty\sum_{m=-l}^l e^{-\beta\frac{\hbar^2l(l+1)}{2I}} \tag{15.16}$$
$$\approx 1 + 3e^{-\beta\frac{\hbar^2}{I}} \tag{15.17}$$
Here the factor of 3 came from the three possible $m$ values for $l = 1$.

$$\frac{U}{N} = \sum_{l=0}^\infty\sum_{m=-l}^l\frac{\hbar^2l(l+1)}{2I}\,\frac{e^{-\beta\frac{\hbar^2l(l+1)}{2I}}}{Z} \tag{15.18}$$
$$\approx \frac{\hbar^2}{I}\,\frac{3e^{-\beta\frac{\hbar^2}{I}}}{Z} \tag{15.19}$$
$$\approx 3\frac{\hbar^2}{I}e^{-\beta\frac{\hbar^2}{I}} \tag{15.20}$$

You should notice here that the internal energy drops exponentially (like $e^{-X/T}$) to zero at low temperature. This is a universal property of systems with a "gap," which is to say, with a finite energy difference between the ground state and the first excited state. At low temperatures, such systems are almost exclusively in their ground state and first excited state (and any degenerate friends), which simplifies the computation considerably!

Rotation at high T At high temperatures, we can convert the summation to an integration, keeping in mind the need to track the $m$ degeneracy (which gives us a factor of $2l + 1$).

$$Z = \sum_{l=0}^\infty\sum_{m=-l}^l e^{-\beta\frac{\hbar^2l(l+1)}{2I}} \tag{15.22}$$
$$= \sum_{l=0}^\infty\left(2l+1\right)e^{-\beta\frac{\hbar^2l(l+1)}{2I}} \tag{15.23}$$
$$\approx \int_0^\infty\left(2l+1\right)e^{-\beta\frac{\hbar^2l(l+1)}{2I}}\,dl \tag{15.24}$$
As it turns out, we can relatively easily do this integral. However, the "+1" terms are insignificant, since $l$ is integrated up to infinity, and the large $l$ contribution dominates. So we can once again do a change of variables that will remove all the physics from the integral:

$$Z \approx \int_0^\infty 2l\,e^{-\beta\frac{\hbar^2l^2}{2I}}\,dl \tag{15.25}$$
$$u = \frac{\hbar l}{\sqrt{2Ik_BT}} \tag{15.26}$$
$$Z = \frac{4Ik_BT}{\hbar^2}\int_0^\infty u\,e^{-u^2}\,du \tag{15.27}$$

The integral is easy to do, but there’s no urgent need to do so: we have already taken the physics out of the integral.

$$\frac{U}{N} \approx \frac{1}{Z}\int_0^\infty\frac{\hbar^2l^2}{2I}\,2l\,e^{-\beta\frac{\hbar^2l^2}{2I}}\,dl \tag{15.29}$$
$$= \frac{1}{Z}\,\frac{\hbar^2}{2I}\,\frac{2Ik_BT}{\hbar^2}\,\frac{4Ik_BT}{\hbar^2}\int_0^\infty u^3e^{-u^2}\,du \tag{15.30}$$
$$= \frac{\frac{\hbar^2}{2I}\,\frac{2Ik_BT}{\hbar^2}\,\frac{4Ik_BT}{\hbar^2}\int_0^\infty u^3e^{-u^2}\,du}{\frac{4Ik_BT}{\hbar^2}\int_0^\infty u\,e^{-u^2}\,du} \tag{15.31}$$
$$= k_BT\,\frac{\int_0^\infty u^3e^{-u^2}\,du}{\int_0^\infty u\,e^{-u^2}\,du} \tag{15.32}$$
$$= k_BT \tag{15.33}$$
Here we can see much of the essential physics by recognizing that the energy is proportional to $k_BT$ without performing the integrals.

Harmonic oscillator at high T For the harmonic oscillator, I’ll demonstrate a different approach, since I think showing the same approach for the third time in a row is a bit boring.

$$Z = \sum_{n=0}^\infty e^{-\beta\left(n+\frac12\right)\hbar\omega_0} \tag{15.34}$$
$$= e^{-\beta\frac12\hbar\omega_0}\sum_{n=0}^\infty e^{-\beta n\hbar\omega_0} \tag{15.35}$$
$$= e^{-\beta\frac12\hbar\omega_0}\sum_{n=0}^\infty\left(e^{-\beta\hbar\omega_0}\right)^n \tag{15.36}$$
This is just a geometric series, so we can solve it using the standard trick, where I'll call the series $s$:

$$s \equiv \sum_{n=0}^\infty\left(e^{-\beta\hbar\omega_0}\right)^n \tag{15.37}$$
$$e^{-\beta\hbar\omega_0}s = e^{-\beta\hbar\omega_0}\sum_{n=0}^\infty\left(e^{-\beta\hbar\omega_0}\right)^n \tag{15.38}$$
$$= \sum_{n=1}^\infty\left(e^{-\beta\hbar\omega_0}\right)^n \tag{15.39}$$
$$= \left(\sum_{n=0}^\infty\left(e^{-\beta\hbar\omega_0}\right)^n\right) - 1 \tag{15.40}$$
$$e^{-\beta\hbar\omega_0}s = s - 1 \tag{15.41}$$
$$1 = \left(1 - e^{-\beta\hbar\omega_0}\right)s \tag{15.42}$$
$$s = \frac{1}{1 - e^{-\beta\hbar\omega_0}} \tag{15.43}$$
$$Z = \frac{e^{-\beta\frac12\hbar\omega_0}}{1 - e^{-\beta\hbar\omega_0}} \tag{15.44}$$
So that's nice. Of course, we still want to find the energy. To do this, we can employ yet another trick (although it's not so hard to do in the high-temperature limit the same way we solved the previous problem). We can recognize that

$$U = \sum_i P_iE_i \tag{15.45}$$
$$= \sum_i\frac{e^{-\beta E_i}}{Z}E_i \tag{15.46}$$
$$= \frac{1}{Z}\sum_i E_ie^{-\beta E_i} \tag{15.47}$$
$$Z = \sum_i e^{-\beta E_i} \tag{15.48}$$
$$\left(\frac{\partial Z}{\partial\beta}\right)_{E_i} = -\sum_i E_ie^{-\beta E_i} \tag{15.49}$$
$$U = -\frac{\left(\frac{\partial Z}{\partial\beta}\right)_{E_i}}{Z} \tag{15.50}$$
So once we have the partition function, we could just take a derivative to find the internal energy. So for the simple harmonic oscillator, we have:

$$\left(\frac{\partial Z}{\partial\beta}\right)_{E_i} = -\frac12\hbar\omega_0Z - \frac{e^{-\beta\frac12\hbar\omega_0}}{\left(1 - e^{-\beta\hbar\omega_0}\right)^2}e^{-\beta\hbar\omega_0}\hbar\omega_0 \tag{15.51}$$
$$= -\frac12\hbar\omega_0Z - \frac{e^{-\beta\hbar\omega_0}}{1 - e^{-\beta\hbar\omega_0}}\hbar\omega_0Z \tag{15.52}$$
$$= -\frac12\hbar\omega_0Z - \frac{\hbar\omega_0}{e^{\beta\hbar\omega_0} - 1}Z \tag{15.53}$$
$$\frac{U}{N} = \frac12\hbar\omega_0 + \frac{\hbar\omega_0}{e^{\beta\hbar\omega_0} - 1} \tag{15.54}$$

U 1 hω¯ 0 = hω¯ 0 + (15.55) N 2 eβ¯hω0 1 1 − hω¯ hω¯ + 0 (15.56) ≈ 2 0 1 + βhω¯ + 1 (βhω¯ )2 1 0 2 0 − 1 1 = hω¯ 0 + kBT 1 (15.57) 2 1 + 2 βhω¯ 0 1 1 hω¯ + k T 1 βhω¯ (15.58) ≈ 2 0 B − 2 0   = kBT (15.59)

This tells us that like the rotational energy, the vibrational energy approaches $k_BT$ per molecule at high temperatures. There is a general rule which occurs in the classical limit (which is the high-temperature limit), that any quadratic term in the energy ends up providing $\frac12k_BT$ to the internal energy. This is called the equipartition theorem. Since there are three translational degrees of freedom ($v^2$ in each direction), the kinetic energy gives us $\frac32k_BT$. There are two ways to rotate a diatomic molecule, which gives us an additional $k_BT$. And finally, the vibration has both kinetic and potential energy, which each provide half of the final $k_BT$.

Harmonic oscillator at low T We could solve for the low $T$ limit of the vibrational internal energy the same way we did for the rotational, but instead let's take the limit of the exact solution, just to be different. Here $e^{\beta\hbar\omega_0} \gg 1$:
$$\frac{U}{N} = \frac12\hbar\omega_0 + \frac{\hbar\omega_0}{e^{\beta\hbar\omega_0} - 1} \tag{15.60}$$
$$\approx \frac12\hbar\omega_0 + \hbar\omega_0e^{-\beta\hbar\omega_0} \tag{15.61}$$
$$= \hbar\omega_0\left(\frac12 + e^{-\beta\hbar\omega_0}\right) \tag{15.62}$$
Once again (as in low $T$ rotation), we've got a gap, so we see an exponential suppression of the internal energy at low temperature. The only differences are that we don't have a degenerate excited state, and our ground-state energy isn't zero.
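To see both limits of the exact result (15.54) at once, here is a tiny numeric sketch (my own illustration, measuring temperature in units of $\hbar\omega_0/k_B$):

```python
import numpy as np

hw = 1.0   # set hbar*omega_0 = 1, so kT is in units of hbar*omega_0

def u_exact(kT):
    """Exact oscillator energy: hw/2 + hw/(e^{hw/kT} - 1)."""
    return 0.5 * hw + hw / np.expm1(hw / kT)  # expm1(x) = e^x - 1

for kT in [0.1, 1.0, 10.0, 100.0]:
    print(f"kT = {kT:6.1f}: U = {u_exact(kT):9.4f} "
          f"(low-T limit 0.5, high-T limit {kT})")
```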

Summary of diatomic gas We examined three energy scales in the diatomic gas: there is one energy scale each for translation, rotation and vibration. The translational energy scale depends on the density, but typically is in the millikelvin or microkelvin range. Below this temperature is where Bose-Einstein condensation occurs. This is a bit tricky (it depends on whether our molecules are fermions or bosons), so we didn't cover it. The rotational energy

scale is often in the microwave region. For hydrogen I think it was around 80 K or so, but it'll be lower for more massive elements.

The third law of thermodynamics The entropy of any perfect crystal at zero temperature is zero.

$$S = -k_B\sum_i P_i\ln P_i \tag{15.63}$$
"When is this zero?" It's equivalent to saying that any perfect crystal at zero temperature is in a non-degenerate ground state. It doesn't sound so deep when looked at from a stat-mech viewpoint, but when you look at it from a thermodynamic standpoint, it's close to amazing. It means that if you...

• Start with oxygen and graphite at zero temperature.

• Gradually warm them up.

• Allow the graphite to slowly burn, producing CO$_2$. We have to do this at just the right temperature, so that the process is reversible.

• Slowly cool down the resulting CO$_2$ to zero temperature.

If we integrate
$$\Delta S = \int\frac{\bar{d}Q_{\text{reversible}}}{T} \tag{15.64}$$
we find no change in the entropy! This means that we really can tabulate the absolute entropy of any material we like. All we have to do to figure out the zero of entropy is cool it down to zero kelvin.

Thermodynamics

• State functions (and recognize what isn't a state function).

• Be able to integrate a total differential to find a change of a state function.

• Use the following if you can find a quasistatic path from your initial to final state, and can work out the heating along that path. If it's an isothermal path, the integral can be quite easy!
$$\Delta S = \int\frac{\bar{d}Q_{\text{qs}}}{T} \tag{15.65}$$

• Understand heat capacity

• First law

• Second law

• Thermodynamic identity: $dU = T\,dS - p\,dV$

• Legendre transforms

• Be able to find and use Maxwell relations

• Be able to read and use $pV$ or $TS$ diagrams

• Derivative relations (but I won't go crazy!)

• The ideal gas law you do not need to memorize. It will be provided if you really need it.

• Common sense may be required. If you increase pressure at fixed temperature, volume drops because you're compressing it, etc.

Statistical mechanics

• Work out probabilities and relative probabilities of being in different eigenstates.

• Find the partition function, internal energy, and other thermodynamic quantities, given a set of energy states.

• Find low-T and high-T limits.
