
Matthew Schwartz, Spring 2019

Lecture 13:

1 Introduction

Fermions satisfy the Pauli exclusion principle: no two can occupy the same state. This makes fermionic systems act very differently from bosonic systems. We introduced the non-interacting Fermi gas in Lecture 10, and will continue to discuss it here. We assume the fermions do not interact with each other. This means that the single-particle states of a system are not affected by which states are occupied. So we will fix the states, then fill them from the bottom up, then see what happens when we heat the system up. You can think of the possible states as a bunch of little shelves onto which we stack the fermions one by one. The exclusion principle implies that once a state is occupied, it cannot be occupied again: the occupancy number of each state is either 0 or 1. This leads to the single-particle grand partition function of state $i$

("i )n ("i ) = e¡ ¡ = 1 + e¡ ¡ (1) Zi n=0;1 X and grand free energy for state i:

1 1 (" )  = ln = ln[1 + e¡ i¡ ] (2) i ¡ Zi ¡ The occupation number for state i we then found to be given by (3)

$$\langle n_i \rangle = -\frac{\partial \Phi_i}{\partial \mu} = f(\varepsilon_i) = \frac{1}{e^{\beta(\varepsilon_i-\mu)}+1} \qquad (3)$$

This is known as the Fermi-Dirac distribution or Fermi function, $f(\varepsilon)$. As $T \to 0$ the function approaches a step function where all the states below $\mu$ are occupied and the states above $\mu$ are unoccupied. The value of $\mu$ at $T=0$ is called the Fermi energy and denoted by $\varepsilon_F$. Sometimes the Fermi energy is also called the Fermi level. At any $T > 0$, some fermions will have energies higher than $\varepsilon_F$, and therefore some states with energy less than $\varepsilon_F$ will not be occupied. We call the unoccupied states holes. You can picture a finite-temperature Fermi gas as having fermions constantly bouncing out of lower-energy states into higher-energy states, leaving holes, then other fermions falling out of excited states into those holes. The electron/hole picture gives a useful way to think about finite-temperature metals and gives a powerful way to understand the physics of semiconductors, as we will see in this and the next lecture. We call the temperature corresponding to the Fermi energy the Fermi temperature:

"F TF (4)  kB


When the temperature of a system is well below the Fermi temperature, $T \ll T_F$, then $\mu \approx \varepsilon_F$ and the Fermi distribution looks very much like a step function, $f(\varepsilon) \approx \theta(\varepsilon_F - \varepsilon)$. In such situations, the lowest states are almost all occupied and we say the system is highly degenerate. In a degenerate system, quantum effects are important.

Generally, Fermi temperatures in metals are very high (for example, $T_F$ for electrons in copper is around 80,000 K, as we will see below). So high, in fact, that it is generally impossible to heat a system above the Fermi temperature without changing the nature of the system (i.e. melting the metal). Thus, quantum statistics are always important for electrons in metals.

The typical picture of a highly degenerate fermionic system is that only a relatively small set of energy levels is relevant, namely those near $\varepsilon_F$. Energy levels with $\varepsilon \ll \varepsilon_F$ require a lot of energy, of order $\varepsilon_F$, to be excited into an empty state, since the first available state is above $\varepsilon_F$. This is nearly impossible when $T \ll T_F$. The set of low-energy states that do not participate in the thermal activity are called the Fermi sea. The Fermi sea can be very deep (a lot of states), but its depth is largely irrelevant to the thermal properties of the material. Similarly, the levels with energies much higher than $\varepsilon_F$ are impossible to excite when $T \ll T_F$. So only those levels near $\varepsilon_F$ participate. Thus degenerate gases have something very close to a symmetry between states above and below $\varepsilon_F$: a symmetry between electrons and holes. We'll use this picture to understand some properties of metals.

2 Free electron gas

The possible single-particle states of the electrons in a free electron gas are the same as for a particle in a box. The allowed wavenumbers are

$$\vec{k} = \frac{\pi}{L}\vec{n}, \qquad \vec{n} = \text{triplet of whole numbers} \qquad (5)$$

In the non-relativistic limit the energies are determined by the usual dispersion relation

$$\varepsilon_n = \frac{\hbar^2 \vec{k}^2}{2m_e} = \frac{\hbar^2\pi^2}{2m_e L^2}n^2 \qquad (6)$$

with $n = |\vec{n}|$. So

$$n = \frac{L}{\pi}\sqrt{\frac{2m_e}{\hbar^2}}\sqrt{\varepsilon}, \qquad dn = \frac{L}{2\pi}\sqrt{\frac{2m_e}{\hbar^2}}\frac{d\varepsilon}{\sqrt{\varepsilon}} \qquad (7)$$

For electrons, there are two spins. So we compute the density of states via

$$\sum_{\vec{n}} \;\to\; 2\int \frac{1}{8}\, 4\pi n^2\, dn = \int \frac{V}{2\pi^2}\left(\frac{2m_e}{\hbar^2}\right)^{3/2}\sqrt{\varepsilon}\, d\varepsilon \qquad (8)$$

As before, the $\frac{1}{8}$ accounts for the modes only counting the first octant (i.e. the $\vec{n}$ are whole numbers) while the integral over the sphere includes all 8 octants. We'll also study the relativistic and ultrarelativistic limits when we discuss stars in Lecture 15, but in metals the electrons are generally non-relativistic, as we will check shortly. Thus $g(\varepsilon) = \frac{V}{2\pi^2}\left(\frac{2m_e}{\hbar^2}\right)^{3/2}\sqrt{\varepsilon}$. This scaling $g(\varepsilon)\propto\sqrt{\varepsilon}$ is the same as for the Bose gas from last lecture, only the constant in $g(\varepsilon)$ is different.
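The counting in Eq. (8) can be checked by brute force; here is a minimal sketch in Python (the cutoff n_max is an arbitrary choice):

```python
import numpy as np

# Count single-particle states (2 spins, n_i positive whole numbers) with |n| <= n_max
# and compare with the continuum result 2 * (1/8) * (4*pi/3) * n_max**3 = (pi/3) n_max^3.
n_max = 40
n = np.arange(1, n_max + 1)
nx, ny, nz = np.meshgrid(n, n, n, indexing="ij")
count = 2 * np.count_nonzero(nx**2 + ny**2 + nz**2 <= n_max**2)

print(count, np.pi / 3 * n_max**3)  # agree to a few percent; surface corrections shrink as n_max grows
```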

1; " < "F Recall that as T 0, the Fermi function becomes a step function: n" . The Fermi ! h i ! 0; " > "F  energy "F is the at T = 0 or equivalently the energy below which all the states are occupied at T = 0. So in particular, the number of of states at T = 0 is

3/2 "F 3/2 1 T =0 V 2me V 2me 2 3/2 N = n" g(")d" 2 2 p"d" = 2 2 "F (9) 0 h i !!!!!!!!!!!!! 2 ~ 0 2 ~ 3 Z   Z   Sommerfeld for metals 3

Therefore

$$\varepsilon_F = \frac{\hbar^2}{2m_e}\left(3\pi^2\frac{N}{V}\right)^{2/3} \qquad (10)$$

Using this, it's convenient to write the density of states as

V 2m 3/2 3N g(") = e p" = p" (11) 22 ~2 3/2   2"F

The energy of the Fermi gas at zero temperature is

3N "F 3 E = E(T = 0) = "p"d" = N" (12) 0 3/2 5 F 2"F Z0

So the average energy of each electron is $\frac{E}{N} = \frac{3}{5}\varepsilon_F$.

An important concept in electron gases, or Fermi gases in general, is that of degeneracy pressure. Pressure is defined as $P = -\frac{\partial E}{\partial V}$. Normally we associate pressure with the kinetic motion of molecules bombarding the walls of a container. This exerts a force on the walls, so that if the volume increases, the energy would go down. Because the force is due to thermal motion, this ordinary pressure is absent at $T=0$. Degeneracy pressure, on the other hand, is a contribution to $-\frac{\partial E}{\partial V}$ in Fermi gases that persists even at $T=0$.

The way to understand degeneracy pressure is by thinking about how the energy levels change as the volume is shrunk, for example under isothermal compression. For a classical gas, isothermal compression does not change the energy of the gas (i.e. $E = C_V T$ is volume independent). What it does is increase the density of molecules, so the number of collisions on the wall of a container goes up, and hence the pressure increases ($P = \frac{N}{V}k_B T$). For a Bose gas, as the volume shrinks, all the energy levels go up, but the occupancies of different levels adjust so that the total energy is the same. Thus, Bose gases behave essentially like classical gases when compressed. For a degenerate Fermi gas, when the volume goes down, the occupancies of states cannot adjust. Instead, the energy levels go up and the total energy simply increases. This contribution to $-\frac{\partial E}{\partial V}$ is called degeneracy pressure.

To see that this pressure really is different from kinetic motion on container walls, consider the $T=0$ limit. As we saw before, $T=0$ is often a good approximation when $T \ll T_F$. Since we have been working at $T=0$ already, all we have to do is differentiate Eq. (12):

$$P = -\frac{\partial E}{\partial V} = -\frac{\partial}{\partial V}\left[\frac{3}{5}N\varepsilon_F\right] = -\frac{3}{5}N\frac{\partial}{\partial V}\left[\frac{\hbar^2}{2m_e}\left(3\pi^2\frac{N}{V}\right)^{2/3}\right] = \frac{2}{3}\frac{E_0}{V} \qquad (13)$$

In this limit, the classical pressure goes to zero but the degeneracy pressure persists. Degeneracy pressure is essential to understand white dwarfs and neutron stars, as we'll see in Lecture 15.
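Eq. (13) (and the bulk modulus used below in Eq. (17)) can be checked symbolically; here is a minimal sketch with sympy, where the symbols simply stand for the quantities in Eq. (10):

```python
import sympy as sp

N, V, hbar, me = sp.symbols('N V hbar m_e', positive=True)

eps_F = hbar**2 / (2 * me) * (3 * sp.pi**2 * N / V) ** sp.Rational(2, 3)  # Eq. (10)
E0 = sp.Rational(3, 5) * N * eps_F                                        # Eq. (12)

P = -sp.diff(E0, V)                                   # degeneracy pressure
print(sp.simplify(P - sp.Rational(2, 3) * E0 / V))    # -> 0, i.e. P = 2 E0 / (3 V), Eq. (13)

B = -V * sp.diff(P, V)                                # bulk modulus B = -V dP/dV
print(sp.simplify(B - sp.Rational(10, 9) * E0 / V))   # -> 0, i.e. B = 10 E0 / (9 V), cf. Eq. (17)
```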

3 Sommerfeld free electron model for metals

Sommerfeld proposed that the electrons in a metal can be described by a free electron gas. The basic idea is that each atom in a metal has some number of loosely bound electrons that it contributes to the gas. The alkali metals (first column of the periodic table, e.g. potassium) are monovalent, contributing one electron each. Some metals contribute more than one electron per atom (we'll see why in Lecture 15). The weakly bound electrons, called the valence electrons, are assumed to be free at leading order, so they just bounce around like particles in a box, with the size of the box determined by the size of the metal. This is called the free electron model.

Before getting into the predictions of the free electron model, it's worth a few comments about why on earth this model should be at all reasonable. At first thought, it seems crazy that we could both ignore the attractive interactions of the electrons to the positively charged nuclei and ignore the repulsive Coulomb interactions among the electrons themselves. On second thought, however, we realize that because of the exclusion principle, electrons like to stay away from each other, so the electron-electron forces may indeed be small. As for the nuclei, we can't forget that metals have not only fairly high nuclear charge ($Z=29$ for copper), but also a whole lot of other electrons more tightly bound to the nucleus than the electron being contributed to the electron gas. Thus the valence electrons are forced to be far away from the nuclei (by the exclusion principle) and moreover, the other electrons screen the nuclear charge. This justification of the free electron model, with some more quantitative details, is known as Fermi liquid theory. The bottom line is that with these considerations, the free electron model at least does not seem horribly wrong, and we can start to study it.

"F First, let's compute the Fermi energy "F and fermi temperature TF = in the free electron kB model. Consider copper, which has a density  = 9.0 g and an atomic weight of 63.6 g , so the cm3 mol number density of is n = 9.0 g /63.6 g = 0.14 mol . Treating copper as monovalent (Cu is cm3 mol cm3 [Ar]3d104s1 as we explain next lecture), this is also the number density of valence electrons. So n = 0.14 mol = 8.4 1028 electrons . The Fermi energy is then e cm3  m3 2 ~ 2 2/3 "F = (3 n) = 7 eV (14) 2me

Note that this is much less than the electron rest mass $m_e c^2 = 511\,\text{keV}$, so the non-relativistic limit is justified. The Fermi energy is also much higher than $k_B T = 25\,\text{meV}$ at room temperature. The corresponding Fermi temperature is

"F TF = = 82000K (15) kB

Compare this to room temperature, $T_0 = 298\,\text{K}$: we see that $\frac{T_F}{T_0} \approx 270$. Since the Fermi temperature is much higher than room temperature, copper at room temperature is highly degenerate and quantum effects dominate.

Let's next look at the degeneracy pressure in a metal. From Eq. (13) we can write

$$P = \frac{2}{3}\frac{E_0}{V} = \frac{2}{5}\frac{N}{V}\varepsilon_F = \frac{2}{5}\frac{N}{V}k_B T_F \qquad (16)$$

Now the Fermi temperature in metals is always well above room temperature, as in Eq. (15), $\frac{T_F}{T_0} \approx 270$. The density of metals is also typically much higher than in gases, by a factor of 1000 or so. Thus the degeneracy pressure is of order $1000 \times 270 \times 1\,\text{atm} = 10^5\,\text{atm}$, which is enormous! The electrons really, really want to get out of the metal; in a larger volume, their energy levels would go down and the degeneracy pressure would be relieved. Fortunately, the electrons cannot leave, since the outward force is compensated by strong attractive forces holding the metal together. Thus we are not going to be able to measure degeneracy pressure directly in a metal without understanding those (rather complicated) attractive forces.

Instead of pressure, a more reasonable quantity to look at is the bulk modulus, $B = -V\frac{dP}{dV}\big|_T$. The bulk modulus measures how much a solid will compress when a pressure is applied. It tells how springy a material is. Indeed, as mentioned in Lecture 11, the bulk modulus determines the speed of sound in a solid, as $c_s = \sqrt{\frac{dP}{d\rho}} = \sqrt{\frac{B}{\rho}}$ with $\rho$ the density. Thus $B$ plays the role of the spring constant $k$ in Hooke's law. In solids, this constant should be determined by the restoring force when atoms are moved, which is due to the electrons. Thus we expect that $B$ should be computable from a good model of electrons in solids.

For a free electron gas at T = 0,

$$B = -V\frac{dP}{dV} = V\frac{d^2}{dV^2}\left[\frac{3}{5}N\frac{\hbar^2}{2m_e}\left(3\pi^2\frac{N}{V}\right)^{2/3}\right] = \frac{2}{3}\frac{N}{V}\varepsilon_F = \frac{10}{9}\frac{E_0}{V} = \frac{5}{3}P \qquad (17)$$

Let's compute this for potassium. Solid potassium has a valence electron number density of $n = 1.4\times 10^{28}\,\frac{1}{\text{m}^3}$. This gives a Fermi energy of $\varepsilon_F = 3.4\times 10^{-19}\,\text{J}$ and a bulk modulus of $B = 3.2\times 10^{9}\,\frac{\text{N}}{\text{m}^2}$. The experimental value is $B = 3.1\times 10^{9}\,\frac{\text{N}}{\text{m}^2}$. Thus our prediction from the free electron gas is in excellent agreement with the measured value. Equivalently, using the density of potassium, $\rho = 856\,\frac{\text{kg}}{\text{m}^3}$, we predict a speed of sound $c_s = \sqrt{\frac{B}{\rho}} = 1933\,\frac{\text{m}}{\text{s}}$; the measured value is $c_s = 2000\,\frac{\text{m}}{\text{s}}$. Apparently, the stiffness of metals is determined by the degeneracy pressure of the electrons within them. This is rather a profound conclusion: when I squeeze a penny in my hand, the penny is fighting back not with the classical Coulomb force, but with the quantum Pauli exclusion principle.
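A minimal numerical sketch of the potassium estimate just made (the electron density and mass density are the values quoted above):

```python
from scipy import constants as c

n_e = 1.4e28          # valence-electron density of solid potassium, 1/m^3 (from the text)
rho = 856.0           # mass density of potassium, kg/m^3 (from the text)

eps_F = c.hbar**2 / (2 * c.m_e) * (3 * c.pi**2 * n_e) ** (2 / 3)   # Eq. (10)
P = 2 / 5 * n_e * eps_F          # degeneracy pressure, Eq. (16), with N/V = n_e
B = 5 / 3 * P                    # bulk modulus, Eq. (17)
c_s = (B / rho) ** 0.5           # speed of sound, c_s = sqrt(B / rho)

print(f"eps_F = {eps_F:.2e} J")      # ~ 3.4e-19 J
print(f"B     = {B:.2e} N/m^2")      # ~ 3.2e9 N/m^2
print(f"c_s   = {c_s:.0f} m/s")      # ~ 1900 m/s, close to the measured 2000 m/s
```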

1 g(") 3N p  N = (" ) d" = 3/2 3/2 Li3/2( e ) (18) 0 e + 1 ¡2 ¡ Z ¡ 2"F ! where Li3/2 is a polylogarithm function. Thus, we get an equation relating  to "F at a given T : 4 ( " )3/2 = Li ( e ) (19) ¡3p F 3/2 ¡

Unfortunately, the polylogarithm of an exponential is really hard to work with. So let us start by examining it numerically. At room temperature for copper, $\beta\varepsilon_F = \frac{T_F}{T_0} \approx 270$, and Eq. (19) gives $\mu = 0.9999\,\varepsilon_F$. At the melting temperature of copper, $T_{\text{melt}} = 1358\,\text{K}$, Eq. (19) gives $\mu = 0.9996\,\varepsilon_F$. Thus in the entire range for which copper is a solid, $T=0$ to $T = T_{\text{melt}}$, $\mu$ only varies from $1.0\,\varepsilon_F$ to $0.9996\,\varepsilon_F$. This justifies the common practice of using the chemical potential and Fermi energy interchangeably when studying metals.
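Eq. (19) can be solved numerically with any routine that knows the polylogarithm; here is a minimal sketch using mpmath (taking $T_F = 82{,}000$ K for copper from Eq. (15)):

```python
import mpmath as mp

T_F = 82000.0                 # Fermi temperature of copper, K (Eq. (15))

def mu_over_epsF(T):
    """Solve Eq. (19): -(4/(3 sqrt(pi))) (beta*eps_F)^(3/2) = Li_{3/2}(-e^(beta*mu))."""
    x = T_F / T               # beta * eps_F
    lhs = -4 / (3 * mp.sqrt(mp.pi)) * x ** 1.5
    bmu = mp.findroot(lambda y: mp.polylog(1.5, -mp.exp(y)) - lhs, x)  # solve for beta*mu
    return bmu / x            # mu / eps_F

print(mu_over_epsF(298.0))    # essentially 1 at room temperature
print(mu_over_epsF(1358.0))   # still within a few parts in 10^4 of eps_F at the melting point
```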

3N 1 "p" 15pE0  E = 3/2 (" ) d" = ¡ 5/2 Li5/2( e ) = (20) 0 e + 1 8( " ) ¡ 2"F Z ¡ F

where E = 3 N" as in Eq. (12) and  = " is used for the plot. This is a smooth function of T 0 5 F F near T = 0 despite the nasty polylogarithm in the analytic formula. Before doing a careful small T expansion, let's think a little about what we expect. The key to small T behavior of the energy is understanding how many states can be excited. Because N is so large, the energy levels near the are essentially continuous: there are generally a lot of electrons and a lot of states. Most of the electrons are in the Fermi sea, and it takes a lot of energy to excite an electron out of deep in the sea. Since kBT TF for any metal, only the highest energy electrons have any hope of being excited. All of the thermal activity comes from a small number of electrons close to the Fermi surface moving to states slightly above it. The electrons deep in the sea would require energies of order "F kBT to move into a free state, for which there T /T  are exponentially small probabilities, P e¡ F . In other words, the states that are involved in the thermal activity are essentially onlythose with energies " k T " " + k T F ¡ B . . F B 6 Section 3

At a temperature $T$, some states will be excitable and gain energy $\sim k_B T$. So the total energy should be $E \approx E_0 + N_{\text{excitable}}\, k_B T$. This scaling is the essential content of the equipartition theorem. As a check on this logic, for a bosonic gas we found the density of states to scale like $g(\varepsilon)\propto\varepsilon^2$, so the total number of excitable states are those with $\varepsilon < k_B T$, which is $N_{\text{excitable}} \approx \int_0^{k_B T} g(\varepsilon)\, d\varepsilon \propto T^3$. Thus $E \propto T^4$ for bosons, as we found through explicit calculation for the photon gas. For a fermionic gas, only those states with energies $\varepsilon$ between $\varepsilon_F - k_B T$ and $\varepsilon_F + k_B T$ can be excited, so $N_{\text{excitable}}\propto k_B T$ and therefore $E \approx E_0 + \text{const}\times k_B^2 T^2$. Plugging in $E_0$ from Eq. (12) we get

$$E = \frac{3}{5}N\varepsilon_F + aN\frac{(k_B T)^2}{\varepsilon_F} + \cdots \qquad (21)$$

for some constant $a$. The factor of $\varepsilon_F$ in the denominator of the $T^2$ term was included by dimensional analysis: it is the only scale that could possibly appear. Eq. (21) is in qualitative agreement with Eq. (20). This heuristic argument is helpful to understand the quadratic dependence on $T$; however, it cannot give us the coefficient $a$. We now turn to computing this coefficient $a$.

3.1 Small T expansion

In this section, we will discuss how to expand the chemical potential and total energy at small $T$. These quantities are determined by Eqs. (18) and (20):

N = 1g(")f(")d"; E = 1"g(")f(") (22) Z0 Z0 3N 1 with g(") = 3/2 p" and f(") = (" ) the Fermi distribution. Although we can do the integrals 2"F e ¡ + 1 analytically, the resulting polylogaritms are not easy to expand around T = 0. If course, you can just look up the polylogarithm limits, but we will learn more about the system by nessing the small T expansion using some tricks developed by Sommerfeld. We'll start with the chemical potential, then do the energy. Don't worry too much if you don't follow everything here, we'll do the calculation in a slicker way in Section 3.2. The reason I include this calculation is that it introduces some useful tricks which you can add to your toolbox to use in other contexts. The rst trick is to integrate by parts:

$$N = \frac{3N}{2\varepsilon_F^{3/2}}\int_0^\infty f(\varepsilon)\sqrt{\varepsilon}\, d\varepsilon = \underbrace{\frac{3N}{2\varepsilon_F^{3/2}}\left[\frac{2}{3}\varepsilon^{3/2}f(\varepsilon)\right]_0^\infty}_{=0} - \frac{3N}{2\varepsilon_F^{3/2}}\int_0^\infty \frac{2}{3}[f'(\varepsilon)]\,\varepsilon^{3/2}\, d\varepsilon \qquad (23)$$

The first term on the right vanishes at both boundaries, so it is zero. Then we have

$$1 = \frac{1}{\varepsilon_F^{3/2}}\int_0^\infty [-f'(\varepsilon)]\,\varepsilon^{3/2}\, d\varepsilon \qquad (24)$$

The derivative of the Fermi distribution is essentially a Gaussian centered on $\mu$ with width $\sim k_B T$:

1 f 0(") = = (25) ¡ 2 1 + cosh( (" )) ¡ Sommerfeld free electron model for metals 7

It is not quite a Gaussian, and it is restricted to $\varepsilon > 0$, but it is very nearly Gaussian, as you can see from the plot.

The second trick is to observe that when $T \ll T_F$, the support of $f'(\varepsilon)$ is in the region of energies $|\varepsilon - \mu| \lesssim k_B T$. In this region we can Taylor expand the function multiplying $f'(\varepsilon)$ in Eq. (24) around $\varepsilon = \mu$, giving

$$1 = \frac{1}{\varepsilon_F^{3/2}}\int_0^\infty [-f'(\varepsilon)]\left[\mu^{3/2} + \frac{3}{2}\sqrt{\mu}\,(\varepsilon-\mu) + \frac{3}{8\sqrt{\mu}}(\varepsilon-\mu)^2 + \cdots\right] d\varepsilon \qquad (26)$$

The third trick is to note that since $f'(\varepsilon)$ is exponentially suppressed at small $\varepsilon$, we can extend the lower limit of integration to $-\infty$. Once the integral is from $-\infty$ to $\infty$, the integral is much easier to do. Changing variables to $x = \beta(\varepsilon-\mu)$ makes the integrals doable by Mathematica:

$$1 = \frac{\mu^{3/2}}{\varepsilon_F^{3/2}}\int_{-\infty}^{\infty}\frac{dx}{2+2\cosh(x)}\left[1 + \frac{3}{2\beta\mu}x + \frac{3}{8\beta^2\mu^2}x^2 + \cdots\right] = \frac{\mu^{3/2}}{\varepsilon_F^{3/2}}\left[1 + \frac{\pi^2}{8\beta^2\mu^2} + \cdots\right] \qquad (27)$$
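The moments of $-f'$ appearing in Eq. (27) are easy to verify numerically; a minimal sketch (the only nontrivial one is $\int x^2\, dx/(2+2\cosh x) = \pi^2/3$):

```python
import numpy as np
from scipy.integrate import quad

w = lambda x: 1.0 / (2.0 + 2.0 * np.cosh(x))    # -f'(eps) d(eps) in the variable x = beta*(eps - mu)

# the integrand dies off like e^(-|x|), so |x| < 50 is effectively (-inf, inf)
print(quad(w, -50, 50)[0])                      # -> 1.0
print(quad(lambda x: x * w(x), -50, 50)[0])     # -> 0.0 (odd integrand)
print(quad(lambda x: x**2 * w(x), -50, 50)[0])  # -> 3.289... = pi^2/3, giving the pi^2/(8 beta^2 mu^2) term
print(np.pi**2 / 3)
```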

Now we can solve perturbatively for $\mu$. Substituting $\mu = \varepsilon_F + aT^2 + \cdots$ into Eq. (27) and expanding to order $T^2$ leads us directly to

$$\mu = \varepsilon_F\left[1 - \frac{\pi^2}{12}\frac{k_B^2 T^2}{\varepsilon_F^2} + \cdots\right] \qquad (28)$$

As a check on this, plugging in $T = 1358\,\text{K}$ (the melting temperature of copper), this gives $\mu = 0.9997\,\varepsilon_F$, in excellent agreement with our exact numerical calculation.

We can do a similar expansion for the energy. The only difference is that instead of integrating the Fermi function against $g(\varepsilon)$ we integrate against $\varepsilon\, g(\varepsilon)$. After integrating by parts and dropping the boundary term, the equation for the total energy, Eq. (20), becomes

$$E = \frac{N}{\varepsilon_F^{3/2}}\int_0^\infty [-f'(\varepsilon)]\left[\frac{3}{5}\mu^{5/2} + \frac{3}{2}\mu^{3/2}(\varepsilon-\mu) + \frac{9}{8}\sqrt{\mu}\,(\varepsilon-\mu)^2 + \cdots\right] d\varepsilon \qquad (29)$$

$$= N\frac{\mu^{3/2}}{\varepsilon_F^{3/2}}\int_{-\infty}^{\infty}\frac{dx}{2+2\cosh(x)}\left[\frac{3}{5}\mu + \frac{3}{2\beta}x + \frac{9}{8\beta^2\mu}x^2 + \cdots\right] \qquad (30)$$

$$= N\frac{\mu^{3/2}}{\varepsilon_F^{3/2}}\left[\frac{3}{5}\mu + \frac{3\pi^2}{8\beta^2\mu} + \cdots\right] \qquad (31)$$

Substituting the form of $\mu$ in Eq. (28), we find that to second order in $T$

$$E = \frac{3}{5}N\varepsilon_F + N\frac{\pi^2}{4}\frac{(k_B T)^2}{\varepsilon_F} + \cdots \qquad (32)$$

Thus the unknown coefficient $a$ in Eq. (21) is now known: $a = \frac{\pi^2}{4}$.

What did we learn from all these tricks? First of all, we got the answer $a = \frac{\pi^2}{4}$ without having to expand $\mathrm{Li}_{5/2}(-e^{\beta\mu})$. The second thing we learned was that the expansion was difficult mainly because of the $\varepsilon > 0$ constraint: we couldn't just integrate $\int_{-\infty}^{\infty}\varepsilon f(\varepsilon)\, d\varepsilon$ from the start, since the integrand blows up as $\varepsilon \to -\infty$, so we had to do this integration-by-parts trick to get the integral in a form where we could extend the limits of integration. The physical reason that the extension works is that the very low energy states and the very high energy states are irrelevant, as we anticipated. In fact, there's a more physical and less mathematical way to isolate the relevant states, called the electron/hole picture, which we will discuss next.
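As a numerical cross-check of the expansion, one can compare the exact polylogarithm expression in Eq. (20) with Eq. (32); a minimal sketch with mpmath, using $T/T_F = 0.01$ as an example:

```python
import mpmath as mp

t = 0.01                                  # example value of T / T_F, well below 1
x = 1 / t                                 # beta * eps_F

mu = 1 - mp.pi**2 / 12 * t**2             # mu / eps_F from Eq. (28)
# exact energy, Eq. (20), in units of E0 = (3/5) N eps_F
E_exact = -15 * mp.sqrt(mp.pi) / (8 * x**2.5) * mp.polylog(2.5, -mp.exp(x * mu))
# small-T expansion, Eq. (32), in the same units: E/E0 = 1 + (5 pi^2 / 12) (T/T_F)^2
E_expand = 1 + 5 * mp.pi**2 / 12 * t**2

print(E_exact, E_expand)                  # both ~ 1.000411; they differ only at O((T/T_F)^4)
```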

3.2 Electrons and holes

The electron and hole picture gives a quick way to understand the leading temperature dependence of the energy. Since $T \ll T_F$ for any metal, or equivalently, $k_B T \ll \varepsilon_F$, it is very hard to excite electrons deep in the Fermi sea. As mentioned around Eq. (21), only states within $\sim k_B T$ of $\varepsilon_F$ can contribute to the energy shift $E - E_0$ and to the heat capacity. Thus overall, both excited states (above $\mu$) and states below $\mu$ that do not have electrons in them (holes) are rare.

The idea of the electron/hole picture is to replace the original Fermi gas, with its inert Fermi sea, by a symmetric thermal system of electron excitations and holes. Of course, in reality the electrons and holes are related, since each electron excitation comes from a hole. However, with a large number $N$ of electrons, we can treat the electrons and holes as their own statistical mechanical system and ignore the (very small) correlations among them.

To construct the gas, we first define the zero point of energy in the new system to be at the Fermi level. Now, each excited electron in the original system in a state with energy $\varepsilon > \mu$ contributes positive energy $\delta_e = \varepsilon - \mu > 0$ to this new gas. Each hole, from a state of energy $\varepsilon < \mu$ in the original system, represents the absence of a state with negative energy $\varepsilon - \mu < 0$. Thus each hole contributes positive energy $\delta_h = \mu - \varepsilon$ to the new gas. The positive energy can be understood if we think about an excitation of the original gas from energy $\mu - \delta_h$ to $\mu + \delta_e$. This excitation contributes a total of $\delta_h + \delta_e$ to the energy, with $\delta_e$ coming from the electron and $\delta_h$ from the hole.

Recall that the probabilities of occupation for each state are independent in a Fermi gas (this is the main reason we use the grand canonical ensemble at fixed $\mu$ rather than the canonical ensemble at fixed $N$). So the probability of finding an electron with excitation energy $\delta_e = \varepsilon - \mu$ is

fe(e) = f(") = f( + e) (33)

1 where f(") = (" ) is the Fermi function. e ¡ + 1

What is the probability of finding a hole of energy $\delta_h$? Since any state has either a hole or an electron, the probability of finding a state of energy $\varepsilon$ to be empty is $1 - f(\varepsilon)$. Since $\varepsilon = \mu - \delta_h$ for holes, the probability of finding a hole with energy $\delta_h$ is then

$$f_h(\delta_h) = 1 - f(\mu - \delta_h) \qquad (34)$$

Now a useful property of the Fermi distribution $f(\varepsilon) = \frac{1}{e^{\beta(\varepsilon-\mu)}+1}$ is that it is symmetric around its midpoint at $\varepsilon = \mu$. More precisely,

$$f(\mu+\delta) = 1 - f(\mu-\delta) \qquad (35)$$

And therefore

$$\underbrace{f_h(\delta)}_{\text{prob. of hole with energy }\delta>0} = 1 - f(\mu-\delta) = f(\mu+\delta) = \underbrace{f_e(\delta)}_{\text{prob. of electron with energy }\delta>0} \qquad (36)$$

Thus holes and electrons have identical probability distributions! This gives us an easy way to calculate the total energy: we just calculate the energy of the electron excitations and double it. The electron excitation contributions are like the original electron contributions but shifted to $\delta = 0$. In this region, the density of states is approximately constant:

$$g(\mu+\delta) \approx g(\mu-\delta) \approx g(\varepsilon_F) = \frac{3N}{2\varepsilon_F} \qquad (37)$$

So, the contribution to the total energy from the electron excitations above $\mu$ is

$$E_e = \int_\mu^\infty (\varepsilon-\mu)\,g(\varepsilon)f(\varepsilon)\, d\varepsilon \approx g(\varepsilon_F)\int_0^\infty \frac{\delta}{e^{\beta\delta}+1}\, d\delta = \frac{3N}{2\varepsilon_F}\frac{\pi^2}{12\beta^2} = \frac{\pi^2 N}{8\varepsilon_F}k_B^2 T^2 \qquad (38)$$

The hole contribution is identical

$$E_h = \int_0^\mu (\mu-\varepsilon)\,g(\varepsilon)\,[1-f(\varepsilon)]\, d\varepsilon \approx g(\varepsilon_F)\int_0^\infty \frac{\delta}{e^{\beta\delta}+1}\, d\delta = \frac{3N}{2\varepsilon_F}\frac{\pi^2}{12\beta^2} = \frac{\pi^2 N}{8\varepsilon_F}k_B^2 T^2 \qquad (39)$$

The only additional approximation we must make for holes is to take the upper limit of integration on $\delta$ to $\infty$, since $\varepsilon > 0$ means $\delta_h < \mu$. Due to the exponential suppression at large $\delta$, taking the upper limit to $\infty$ has essentially no effect on the integral.
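The integral used in Eqs. (38) and (39), $\int_0^\infty \delta\, d\delta/(e^{\beta\delta}+1) = \frac{\pi^2}{12\beta^2}$, is quick to verify; a minimal sketch:

```python
import numpy as np
from scipy.integrate import quad

# In the variable u = beta*delta, the electron (or hole) energy integral becomes
# integral_0^inf u/(e^u + 1) du, which should equal pi^2/12 = 0.8224...
val, _ = quad(lambda u: u / (np.exp(u) + 1.0), 0, 50)   # the tail beyond u = 50 is negligible
print(val, np.pi**2 / 12)
```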

Thus the total energy is

$$E = E_0 + E_e + E_h = E_0 + \frac{\pi^2 N}{4\varepsilon_F}k_B^2 T^2 \qquad (40)$$

This is in perfect agreement with Eq. (32). (To get subleading terms in Eq. (32) you can include subleading terms in Eqs. (37)-(39).)

This calculation was certainly simpler than Sommerfeld's from Section 3.1 (although you may not have believed this one without that one as a check). The two are actually equivalent and make the same approximations. In this version, we sidestepped the fact that the energy of the holes is bounded by $\delta_h < \varepsilon_F$ by setting the hole contribution equal to the electron contribution and integrating to $\infty$. In the Sommerfeld calculation, we integrated by parts and dropped the boundary terms to avoid the $\varepsilon > 0$ limit of integration. The electron/hole picture is nice not only because it is simpler but because it gives us a powerful language for discussing fermionic systems. This language is particularly useful for solid-state physics, as we'll see in the next lecture.

3.3 Heat capacity

Having computed the temperature dependence of the energy of a free electron gas, it is now trivial to find the heat capacity:

$$C_V^{\text{electrons}} = \frac{\partial E}{\partial T} = \frac{\pi^2}{2}Nk_B\frac{T}{T_F} + \cdots \qquad (41)$$

We computed this as the leading term in the $\frac{T}{T_F}$ expansion, but since $T_F \gtrsim 50{,}000\,\text{K}$ for metals, this expansion can work for any temperature a metal can have, up to the melting point. In particular, it holds even if $T \gg 298\,\text{K}$.

We can compare this to the heat capacity due to phonons (oscillations of the atoms), for example through the law of Dulong and Petit: $C_V^{\text{phonons}} \approx 3Nk_B$. Recall that we derived this behavior in Lecture 11 as the high-temperature limit of the Debye model. Typical Debye temperatures for metals are around 100 K, so $T \gg T_D \approx 100\,\text{K}$ is reasonable at room temperature. Note that high temperature for phonons ($T \gg T_D$) is consistent with low temperature for electrons ($T \ll T_F$), and so we can use both limits to see

$$\frac{C_V^{\text{electrons}}}{C_V^{\text{phonons}}} = \frac{\pi^2}{6}\frac{T}{T_F} \ll 1 \qquad (42)$$

This explains why the electrons do not contribute appreciably to the total heat capacity: $C_V = C_V^{\text{phonons}} + C_V^{\text{electrons}} \approx C_V^{\text{phonons}}$. Historically, there was a puzzle for some time about why the law of Dulong and Petit should be correct, since the equipartition theorem says that electrons should contribute to the heat capacity just like the atoms. We now understand that because of Fermi-Dirac statistics the valence electrons have very high energy compared to room temperature, so only a small fraction of the electrons can be thermally excited (those near the Fermi level). Thus, at room temperature the electronic contribution is very small, and this historical puzzle is thereby resolved by quantum statistics.

At very low temperatures $T \ll T_D$, the law of Dulong and Petit no longer applies. In the Debye model we found that for $T \ll T_D$

$$C_V^{\text{Debye}} = \frac{12\pi^4}{5}Nk_B\left(\frac{T}{T_D}\right)^3 \qquad (43)$$

where the formula for the Debye temperature is

$$T_D = \frac{\hbar\omega_D}{k_B} = \frac{\hbar}{k_B}c_s\left(6\pi^2\frac{N}{V}\right)^{1/3} \qquad (44)$$

with $c_s$ the speed of sound. Thus the full heat capacity at low temperature is the sum of the electronic and phononic parts:

$$C_V = C_V^{\text{phonons}} + C_V^{\text{electrons}} = \frac{\pi^2}{2}Nk_B\frac{T}{T_F} + \frac{12\pi^4}{5}Nk_B\left(\frac{T}{T_D}\right)^3 \qquad (45)$$

For copper, $T_D = 315\,\text{K}$ and $T_F = 80{,}000\,\text{K}$. Thus we have to go to very low temperatures to see the electronic contribution. The two contributions are comparable when

$$\frac{T}{T_F} \approx \left(\frac{T}{T_D}\right)^3 \;\Longrightarrow\; T \approx T_D\sqrt{\frac{T_D}{T_F}} \qquad (46)$$

For copper this is $T \approx 19\,\text{K}$.

We can see the electronic contribution by measuring the heat capacity at small $T$ and plotting $\frac{C_V}{T}$ as a function of $T^2$:

$$\frac{C_V}{T} = \frac{\pi^2}{2}Nk_B\frac{1}{T_F} + \frac{12\pi^4}{5}Nk_B\frac{1}{T_D^3}T^2 \qquad (47)$$
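A minimal numerical sketch of Eqs. (46) and (47) for copper, per mole of atoms, using the $T_D = 315$ K and $T_F = 80{,}000$ K quoted above (one valence electron per atom assumed):

```python
import numpy as np
from scipy import constants as c

T_D, T_F = 315.0, 80000.0     # Debye and Fermi temperatures of copper (from the text)
NkB = c.N_A * c.k             # N k_B per mole of (monovalent) atoms

# rough crossover estimate, Eq. (46): T ~ T_D * sqrt(T_D / T_F)
print(f"crossover T ~ {T_D * np.sqrt(T_D / T_F):.0f} K")   # ~ 20 K

# C_V / T = gamma + A T^2, the two coefficients of Eq. (47)
gamma = np.pi**2 / 2 * NkB / T_F         # electronic term (y-intercept)
A = 12 * np.pi**4 / 5 * NkB / T_D**3     # phonon term (slope against T^2)
print(f"gamma = {gamma*1e3:.2f} mJ/(mol K^2), A = {A*1e3:.3f} mJ/(mol K^4)")
for T in (1.0, 5.0, 20.0):
    print(f"T = {T:4.1f} K: C_V/T = {(gamma + A*T**2)*1e3:.2f} mJ/(mol K^2)")
```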

The $y$-intercept of such a plot gives the electronic contribution and the slope gives the phonon contribution. Here is such a plot for copper:

Figure 1. Heat capacity of copper at low temperatures compared to the prediction from the Sommerfeld free electron gas model.

Measurements like this demonstrate an electronic contribution in good agreement with the prediction from the Sommerfeld free electron model.

An obvious limitation of the free electron model is that it is the same model for any metal. Thus it cannot tell us what is a conductor, what is an insulator, or account for any of the interesting structure of the elements in the periodic table. To do that we need to make some improvements.

4 Nearly free electron gas

To improve on the free electron model, we can note that each atom contributes one electron to the gas. If the atoms were far apart, the electrons would be in orbitals around their atoms. There is thus some kind of attractive potential $V(r)$ trying to keep each electron bound to one atom. As atoms move closer together, their potentials start to overlap. In the limit that the potentials overlap a lot, the sum of the potentials can be very weakly position dependent:

Figure 2. When atoms are far apart (left), the atomic potentials do not overlap. When atoms become close (right), potentials overlap with the net result of a nearly free potential: the sum of the potentials is nearly flat.

Most metals have a very regular lattice structure. More precisely, these potentials are periodic, meaning that there exists a vector $\vec{a}$ such that $V(\vec{x}+\vec{a}) = V(\vec{x})$. It is interesting therefore to consider a weak periodic potential as a model of the atomic system. This is known as the nearly free electron model. It is hard to calculate properties of periodic potentials in three dimensions exactly, but in one dimension we can actually find some exact solutions that are qualitatively similar to 3D systems and to real metals.

In one dimension, we are interested in finding the energies of a system with a periodic potential. By periodic potential, we mean that the full potential is the sum over potentials $V_1(x)$ representing atoms, shifted by the lattice spacing $a$. That is, the potential has the form

$$V(x) = \sum_{n=-\infty}^{\infty} V_1(x - na) \qquad (48)$$

Summing $n$ from $-\infty$ to $\infty$ is an approximation, but it allows us to find a nice simple form for the result. Infinite periodic potentials have been studied in great detail and a lot of general results are known about them. For example, the energy eigenstates $\psi(x)$ satisfy a kind of periodicity of their own:

$$\psi(x+a) = e^{ika}\,\psi(x), \qquad \psi(x) = e^{ikx}p(x) \qquad (49)$$

This general result is known as Bloch's theorem. Here $a$ is the lattice spacing, $k$ is the wavenumber, and $p(x)$ is a function with the periodicity of the lattice, i.e. $p(x+a) = p(x)$ in this case. These solutions are known as Bloch waves. Bloch's theorem holds in any number of dimensions. An exact result about periodic potentials in one dimension following from Bloch's theorem is that the energies $\varepsilon$ for a given $k$ satisfy

2me" cos 2 a +  ~ = cos (ka) (50) q t  j j 12 Section 4

This is an implicit formula for the dispersion relation $\varepsilon(k)$ of the system. The quantities $|t|$ and $\phi$ are the magnitude and phase of a single object $t = |t|e^{i\phi}$. This $t$ is exactly the transmission coefficient for scattering past a single potential $V_1(x)$. The appearance of $t$ can be understood because the waves at a given $k$ have to scatter past each individual potential. We're not going to derive Eq. (50), but the derivation can be found in some solid state physics books (e.g. Hook and Hall Chapter 4 or Ashcroft and Mermin Chapter 8).

To be concrete, consider a special case where the potentials are square wells. This case is known as the Kronig-Penney model and an exact solution is known. The solution is particularly simple in the limit where the width of the wells goes to zero and the strength of the wells to infinity, so that each potential is a $\delta$-function, i.e. $V_1(x) = \lambda\,\delta(x)$. In this case, the magnitude and phase of the transmission coefficient are given by

$$|t| = \cos\phi = \frac{1}{\sqrt{1 + \dfrac{m_e^2\lambda^2}{\hbar^4 k^2}}} \qquad (51)$$

You can find the derivation of the Kronig-Penney model in many places, including Wikipedia (although, as always, be very cautious about the accuracy of Wikipedia articles). We're not going to derive it here.

Let's look at the energies of the Kronig-Penney model as a function of $k$. We do this by finding solutions to Eq. (50) using the expressions in Eq. (51) for $|t|$ and $\phi$. As usual, we begin by finding the solutions numerically. The result looks like this:

Figure 3. Allowed and forbidden regions in the Kronig-Penney model. Only the bands for $k > 0$ are shown. There are mirror-image bands for $k < 0$.

In this figure, you can see that the energies break up into a series of bands. The energy difference between the top of one band and the bottom of the next is known as the band gap: as $k$ is increased, there are certain energies that never appear, and these are the energies in the band gaps. The Kronig-Penney model is the simplest exactly solvable model where the band structure appears. Bands are critical to understanding properties of metals and semiconductors.

We can see fairly easily from the general result in Eq. (50) why there are bands, i.e. why there are not solutions for every $\varepsilon$. Because $t$ is a transmission coefficient, $|t| < 1$, so the left-hand side of Eq. (50) can exceed 1 in magnitude while the right-hand side cannot. Whenever the cosine on the left is bigger in magnitude than $|t|$, there can be no solution. This happens around the regions where $k = \frac{\pi}{a}, \frac{2\pi}{a}, \frac{3\pi}{a}, \ldots$ The band gaps are centered around these regions. The wavenumbers $k = \frac{n\pi}{a}$ correspond to periodic eigenfunctions that sit right on the $\delta$-functions (if the $\delta$-functions are repulsive) or right in between them (if the $\delta$-functions are attractive). Electrons move out of the least attractive region.

As the potential is removed, $V \to 0$, we should recover the free-particle solutions. You can actually see this from the figure: as the gaps are removed, the bands merge into a parabola. The parabola is none other than the dispersion relation for the free electron gas, Eq. (6): $\varepsilon = \frac{\hbar^2 k^2}{2m_e}$. To see this analytically, note that with the potential removed, the transmission coefficient is $t = 1$. From Eq. (50) we can see that when $|t| = 1$ and $\phi = 0$, then $\sqrt{\frac{2m_e\varepsilon}{\hbar^2}} = k$, agreeing with the dispersion relation for the free electron gas.
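Here is a minimal numerical sketch of how a band picture like Figure 3 can be generated from Eqs. (50) and (51). Everything is in dimensionless units, and the potential strength c is an arbitrary illustrative choice:

```python
import numpy as np

# Kronig-Penney bands for V1(x) = lam * delta(x) (repulsive, lam > 0).  Work with
# u = sqrt(2 m_e eps) a / hbar, so eps = u^2 in units of hbar^2 / (2 m_e a^2), and let
# c = m_e * lam * a / hbar^2 set the dimensionless strength of each delta function.
c = 3.0                                    # illustrative strength

u = np.linspace(1e-3, 4 * np.pi, 4000)
abs_t = 1.0 / np.sqrt(1.0 + (c / u) ** 2)  # |t| from Eq. (51)
phi = -np.arctan(c / u)                    # phase of t, consistent with cos(phi) = |t|
lhs = np.cos(u + phi) / abs_t              # left-hand side of Eq. (50)

allowed = np.abs(lhs) <= 1.0               # Eq. (50) has a real Bloch wavenumber k here
ka = np.where(allowed, np.arccos(np.clip(lhs, -1, 1)), np.nan)  # k*a in [0, pi]; plot u**2 vs ka for Fig. 3

edges = np.flatnonzero(np.diff(allowed.astype(int)))            # band edges
print("band edges at eps ~", np.round(u[edges] ** 2, 1))
```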

Although this model was only one-dimensional, periodic potentials predict bands also in three dimensions, and bands are observed experimentally in metals. What is critically important in determining properties of the solid is whether the Fermi level is in the middle of a band or in a band gap. For example, suppose each potential corresponds to one atom and each atom donates one electron to the nearly-free electron gas. In this case, there are $N$ electrons and the maximum $k$ allowed (in 1D) is $k = N\frac{\pi}{L}$ with $L$ the size of the system. Using $L = Na$ we see $k \leqslant \frac{\pi}{a}$, which, matching onto Figure 3, would put the highest occupied electron level right at the edge of the band. Remember though that electrons have two spin states, so each level is actually two-fold degenerate. This means that if each atom donates one electron, the lowest band will actually be only half filled.

With a half-filled band the electrons can easily be excited, gaining energy and momentum. If you put an electric field on such a system, the electrons can adjust, picking up momentum in one direction:

Figure 4. The thin curve in these plots shows the band: how the energy levels $E$ depend on wavenumber $k$. The thick curve shows the levels that are occupied. For metals in which atoms donate one electron each, half the band is filled. With a half-filled band, an applied electric field can easily make the energy levels adjust, showing that such materials are conductors. Insulators have a completely filled band and cannot easily adjust to an applied field.

Thus monovalent (one electron donated per atom) materials (most metals) are conductors. That metals conduct is closely related to their shininess: the conduction electrons move freely, so they easily absorb and re-emit radiation. If each atom donates two electrons into the band, the band would be completely filled. In that case, a tiny amount of energy cannot excite the electrons and the material does not conduct electricity well. Such materials are known as insulators, or more precisely band insulators.

In three dimensions, the relationship $\varepsilon(\vec{k}) = \varepsilon_F$ defines a surface instead of two points. This surface is called the Fermi surface. The Fermi surface is an abstract surface in momentum space, not the real surface of a metal in position space. It describes the boundary between occupied and unoccupied energy levels at $T=0$. For the free electron gas, where $\varepsilon = \frac{\hbar^2\vec{k}^2}{2m_e}$, this is a surface of constant $|\vec{k}|$, namely a sphere. In metals such as sodium, which have very weakly bound valence electrons so the free electron gas model is an excellent approximation, the Fermi surface is indeed spherical. For other metals, the Fermi surface can have a different shape. Here are the Fermi surfaces for sodium, copper and graphene:

Figure 5. Fermi surfaces for sodium (left), copper (middle) and graphene (right).

In copper, the things that look like holes in the Fermi surface are not really holes but rather the regions where $\frac{d\varepsilon}{dk}$ blows up. This is the 3D analog of the 1D band-gap region in the Kronig-Penney model: when you go from one band to the next, $\varepsilon$ jumps as $k$ changes by a little bit, so $\frac{d\varepsilon}{dk}$ is large.

Graphene is a 2D sheet of carbon. So the values of $\vec{k}$ with $\varepsilon(\vec{k}) = \varepsilon_F$ are not a set of points, as in 1D, or a surface, as in 3D, but a curve. The curve is given by the intersection of a plane (the energy plane) with the dispersion surface. In the figure, the Fermi energy is indicated by the red plane. We see that as $\varepsilon \to \varepsilon_F$ the allowed $k$ go to points along the surface of a cone. These cones are called Dirac cones. In an ultrarelativistic electron gas, the dispersion relation is $\varepsilon(\vec{k}) = \hbar c|\vec{k}|$. Because of the absolute value, the ultrarelativistic dispersion relation has the same cuspy behavior and Dirac cone structure we can see in the graphene Fermi surface. Thus electrons in graphene near the cusp act like ultrarelativistic particles, even though their energies can be $\varepsilon \lesssim \varepsilon_F \ll m_e c^2$. These effectively massless excitations give graphene some of its most interesting properties: it is the world's most conductive material. Normal metals have to move massive electrons around, but graphene can move around effectively massless ones. Graphene is also 200 times stronger than steel. By the way, graphene was discovered in 2004 by Geim and Novoselov by peeling scotch tape off a block of graphite (you make graphene every time you write with a pencil). Geim and Novoselov won the Nobel Prize for their discovery in 2010.

5 Summary

We have seen the free electron gas give a number of predictions:

• Bulk modulus of metals in agreement with data

• Heat capacity of metals in agreement with data

To compute the heat capacity we had to integrate by parts in order to expand for $T \ll T_F$ to get an analytic answer. We also saw that the same answer resulted much more quickly once we developed the treatment of the free electron gas as a gas of electrons and holes with a symmetric spectrum around the Fermi level.

A step beyond the free electron gas is the nearly free electron gas. Our treatment of the nearly-free gas was somewhat heuristic. The required calculations are not hard, and are covered in any solid-state physics book, but they are beyond what we can do in one lecture. The important qualitative results we found were that

• The addition of periodic potentials generates a band structure

• The band structure can explain insulators and conductors

Unfortunately, it is very difficult to connect the (nearly) free electron gas to actual metals. That is, we cannot determine which materials will conduct and which will insulate.

To understand real materials, a more flexible method is the tight-binding model. The tight-binding model takes as its starting point for the expansion the energy levels of individual atoms. It brings them together to show how the electrons get distributed through the solid. The bands emerge in the tight-binding model from energy levels of atomic orbitals (4s, 3d, etc.). The tight-binding model follows from first studying molecular orbitals and molecular bonding, which is the approach we take in the next lecture.