Lecture Notes for Solid State Physics (3rd Year Course 6) Hilary Term 2012
© Professor Steven H. Simon, Oxford University
January 9, 2012
Short Preface to My Second Year Lecturing This Course
Last year was my first year teaching this course. In fact, it was my first experience teaching any undergraduate course. I admit that I learned quite a bit from the experience. The good news is that the course was viewed mostly as a success, even by the tough measure of student reviews. I particularly would like to thank that student who wrote on his or her review that I deserve a raise — and I would like to encourage my department chair to post this review on his wall and refer to it frequently. With luck, the second iteration of the course will be even better than the first. Having learned so much from teaching the course last year, I hope to improve it even further for this year. One of the most important things I learned was how much students appreciate a clear, complete, and error-free set of notes. As such, I am spending quite a bit of time reworking these notes to make them as perfect as possible. Repeating my plea from last year, if you can think of ways that these notes (or this course) could be further improved (correction of errors or whatnot) please let me know. The next generation of students will certainly appreciate it and that will improve your Karma.
Oxford, United Kingdom, January 2012.
Preface
When I was an undergraduate, I thought solid state physics (a sub-genre of condensed matter physics) was perhaps the worst subject that any undergraduate could be forced to learn – boring and tedious, “squalid state” as it was commonly called1. How much would I really learn about the universe by studying the properties of crystals? I managed to avoid taking this course altogether. My opinion at the time was not a reflection of the subject matter, but rather was a reflection of how solid state physics was taught. Given my opinion as an undergraduate, it is a bit ironic that I have become a condensed matter physicist. But once I was introduced to the subject properly, I found that condensed matter was my favorite subject in all of physics – full of variety, excitement, and deep ideas. Many many physicists have come to this same conclusion. In fact, condensed matter physics is by far the largest single subfield of physics (the annual meeting of condensed matter physicists in the United States attracts over 6000 physicists each year!). Sadly a first introduction to the topic can barely scratch the surface of what constitutes the broad field of condensed matter. Last year when I was told that a new course was being prepared to teach condensed matter physics to third year Oxford undergraduates, I jumped at the opportunity to teach it. I felt that it must be possible to teach a condensed matter physics course that is just as interesting and exciting as any other course that an undergraduate will ever take. It must be possible to convey the excitement of real condensed matter physics to the undergraduate audience. I hope I will succeed in this task. You can judge for yourself. The topics I was asked to cover (being given little leeway in choosing the syllabus) are not atypical for a solid state physics course. In fact, the new condensed matter syllabus is extremely similar to the old Oxford B2 syllabus – the main changes being the removal of photonics and device physics. 
A few other small topics, such as superconductivity and point-group symmetries, are now nonexaminable or have been removed altogether. A few other topics (thermal expansion, chemical bonding) are now added by mandate of the IOP2. At any rate, the changes to the old B2 syllabus are generally minor, so I recommend that Oxford students use the old B2 exams as a starting point for figuring out what it is they need to study as the exams approach. In fact, I have used precisely these old exams to figure out what I need to teach. Since the same group of people will be setting the exams this year as set them last year, this seems like a good idea. As with most exams at Oxford, one starts to see patterns in terms of what type of questions are asked year after year. The lecture notes contained here are designed to cover exactly this crucial material. I realize that these notes contain a lot of material, and for this I apologize. However, this is the minimum set of notes that covers all of the topics that have shown up on old B2 exams. The actual lectures for this course will try to cover everything in these notes, but a few of the less crucial pieces will necessarily be glossed over in the interest of time. Many of these topics are covered well in standard solid state physics references that one might find online, or in other books. The reason I am giving these lectures (and not just telling students to go read a standard book) is because condensed matter/solid-state is an enormous subject — worth many years of lectures — and one needs a guide to decide what subset of topics
are most important (at least in the eyes of the examination committee). I believe that the lectures contained here give depth in some topics, and gloss over other topics, so as to reflect the particular topics that are deemed important at Oxford. These topics may differ a great deal from what is deemed important elsewhere. In particular, Oxford is extremely heavy on scattering theory (x-ray and neutron diffraction) compared with most solid state courses or books that I have seen. But on the other hand, Oxford does not appear to believe in group representations (which resulted in my elimination of point group symmetries from the syllabus). I cannot emphasize enough that there are many many extremely good books on solid-state and condensed matter physics already in existence. There are also many good resources online (including the rather infamous "Britney Spears' guide to semiconductor physics" — which is tongue-in-cheek about Britney Spears3, but actually is a very good reference about semiconductors). I will list here some of the books that I think are excellent, and throughout these lecture notes, I will try to point you to references that I think are helpful.

1 This jibe against solid state physics can be traced back to the Nobel Laureate Murray Gell-Mann, discoverer of the quark, who famously believed that there was nothing interesting in any endeavor but particle physics. Interestingly, he now studies complexity — a field that mostly arose from condensed matter.
2 We can discuss elsewhere whether or not we should pay attention to such mandates in general — although these particular mandates do not seem so odious.
• States of Matter, by David L. Goodstein, Dover. Chapter 3 of this book is a very brief but well written and easy to read description of much of what we will need to cover (but not all, certainly). The book is also published by Dover, which means it is super-cheap in paperback. Warning: it uses cgs units rather than SI units, which is a bit annoying.

• Solid State Physics, 2nd ed, by J. R. Hook and H. E. Hall, Wiley. This is frequently the book that students like the most. It is a first introduction to the subject and is much more introductory than Ashcroft and Mermin.

• The Solid State, by H. M. Rosenberg, OUP. This slightly more advanced book was written a few decades ago to cover what was the solid state course at Oxford at that time. Some parts of the course have since changed, but other parts are well covered in this book.

• Solid-State Physics, 4th ed, by H. Ibach and H. Luth, Springer-Verlag. Another very popular book on the subject, with quite a bit of information in it. More advanced than Hook and Hall.

• Solid State Physics, by N. W. Ashcroft and D. N. Mermin, Holt-Sanders. This is the standard complete introduction to solid state physics. It has many many chapters on topics we won't be studying, and goes into great depth on almost everything. It may be a bit overwhelming to try to use this as a reference because of information overload, but it has good explanations of almost everything. On the whole, this is my favorite reference. Warning: also uses cgs units.

• Introduction to Solid State Physics, 8th ed, by Charles Kittel4, Wiley. This is a classic text. It gets mixed reviews by some as being unclear on many matters. It is somewhat more complete than Hook and Hall, less so than Ashcroft and Mermin. Its selection of topics and organization may seem a bit strange in the modern era.

• The Basics of Crystallography and Diffraction, 3rd ed, by C. Hammond, OUP. This book has historically been part of the syllabus, particularly for the scattering theory part of the course.
I don’t like it much.
3 This guide was written when Ms. Spears was just a popular young performer and not the complete train wreck that she appears to be now.
4 Kittel happens to be my dissertation-supervisor's dissertation-supervisor's dissertation-supervisor's dissertation-supervisor, for whatever that is worth.
• Structure and Dynamics, by M. T. Dove, Oxford University Press. This is a more advanced book that covers scattering in particular. It is used in the 4th-year Condensed Matter option course.

• Magnetism in Condensed Matter, by Stephen Blundell, OUP. Well written advanced material on the magnetism part of the course. It is used in the 4th-year Condensed Matter option course.

• Band Theory and Electronic Properties of Solids, by John Singleton, OUP. More advanced material on electrons in solids. Also used in the 4th-year Condensed Matter option course.

• Solid State Physics, by G. Burns, Academic. Another more advanced book. Some of its descriptions are short but very good.
I will remind my reader that these notes are a first draft. I apologize that they do not cover the material uniformly. In some places I have given more detail than in others, depending mainly on my enthusiasm level at the particular time of writing. I hope to go back and improve the quality as much as possible. Updated drafts will hopefully be appearing. Perhaps this pile of notes will end up as a book, perhaps it will not; that is not my point. My point is to write something that will be helpful for this course. If you can think of ways that these notes could be improved (correction of errors or whatnot) please let me know. The next generation of students will certainly appreciate it and that will improve your Karma.
Oxford, United Kingdom, January 2011.
Acknowledgements
Needless to say, I pilfered a fair fraction of the content of this course from parts of other books (mostly mentioned above). The authors of these books put great thought and effort into their writing. I am deeply indebted to these giants who have come before me. Additionally, I have stolen many ideas about how this course should be taught from the people who have taught the course (and similar courses) at Oxford in years past. Most recently this includes Mike Glazer, Andrew Boothroyd, and Robin Nicholas. I am also very thankful for all the people who have helped me proofread, correct, and otherwise tweak these notes and the homework problems. These include in particular Mike Glazer, Alex Hearmon, Simon Davenport, Till Hackler, Paul Stubley, Stephanie Simmons, Katherine Dunn, and Joost Slingerland. Finally, I thank my father for helping proofread and improve these notes... and for a million other things.

Contents
1 About Condensed Matter Physics
  1.1 What is Condensed Matter Physics
  1.2 Why Do We Study Condensed Matter Physics?

I Physics of Solids without Considering Microscopic Structure: The Early Days of Solid State

2 Specific Heat of Solids: Boltzmann, Einstein, and Debye
  2.1 Einstein's Calculation
  2.2 Debye's Calculation
    2.2.1 About Periodic (Born-Von-Karman) Boundary Conditions
    2.2.2 Debye's Calculation Following Planck
    2.2.3 Debye's "Interpolation"
    2.2.4 Some Shortcomings of the Debye Theory
  2.3 Summary of Specific Heat of Solids
  2.4 Appendix to this Chapter: ζ(4)

3 Electrons in Metals: Drude Theory
  3.1 Electrons in Fields
    3.1.1 Electrons in an Electric Field
    3.1.2 Electrons in Electric and Magnetic Fields
  3.2 Thermal Transport
  3.3 Summary of Drude Theory

4 More Electrons in Metals: Sommerfeld (Free Electron) Theory
  4.1 Basic Fermi-Dirac Statistics
  4.2 Electronic Heat Capacity
  4.3 Magnetic Spin Susceptibility (Pauli Paramagnetism)
  4.4 Why Drude Theory Works so Well
  4.5 Shortcomings of the Free Electron Model
  4.6 Summary of (Sommerfeld) Free Electron Theory

II Putting Materials Together

5 What Holds Solids Together: Chemical Bonding
  5.1 General Considerations about Bonding
  5.2 Ionic Bonds
  5.3 Covalent Bond
    5.3.1 Particle in a Box Picture
    5.3.2 Molecular Orbital or Tight Binding Theory
  5.4 Van der Waals, Fluctuating Dipole Forces, or Molecular Bonding
  5.5 Metallic Bonding
  5.6 Hydrogen Bonds
  5.7 Summary of Bonding (Pictorial)

6 Types of Matter

III Toy Models of Solids in One Dimension

7 One Dimensional Model of Compressibility, Sound, and Thermal Expansion

8 Vibrations of a One Dimensional Monatomic Chain
  8.1 First Exposure to the Reciprocal Lattice
  8.2 Properties of the Dispersion of the One Dimensional Chain
  8.3 Quantum Modes: Phonons
  8.4 Crystal Momentum
  8.5 Summary of Vibrations of the One Dimensional Monatomic Chain

9 Vibrations of a One Dimensional Diatomic Chain
  9.1 Diatomic Crystal Structure: Some Useful Definitions
  9.2 Normal Modes of the Diatomic Solid
  9.3 Summary of Vibrations of the One Dimensional Diatomic Chain

10 Tight Binding Chain (Interlude and Preview)
  10.1 Tight Binding Model in One Dimension
  10.2 Solution of the Tight Binding Chain
  10.3 Introduction to Electrons Filling Bands
  10.4 Multiple Bands
  10.5 Summary of Tight Binding Chain

IV Geometry of Solids

11 Crystal Structure
  11.1 Lattices and Unit Cells
  11.2 Lattices in Three Dimensions
  11.3 Summary of Crystal Structure

12 Reciprocal Lattice, Brillouin Zone, Waves in Crystals
  12.1 The Reciprocal Lattice in Three Dimensions
    12.1.1 Review of One Dimension
    12.1.2 Reciprocal Lattice Definition
    12.1.3 The Reciprocal Lattice as a Fourier Transform
    12.1.4 Reciprocal Lattice Points as Families of Lattice Planes
    12.1.5 Lattice Planes and Miller Indices
  12.2 Brillouin Zones
    12.2.1 Review of One Dimensional Dispersions and Brillouin Zones
    12.2.2 General Brillouin Zone Construction
  12.3 Electronic and Vibrational Waves in Crystals in Three Dimensions
  12.4 Summary of Reciprocal Space and Brillouin Zones

V Neutron and X-Ray Diffraction

13 Wave Scattering by Crystals
  13.1 The Laue and Bragg Conditions
    13.1.1 Fermi's Golden Rule Approach
    13.1.2 Diffraction Approach
    13.1.3 Equivalence of Laue and Bragg Conditions
  13.2 Scattering Amplitudes
    13.2.1 Systematic Absences and More Examples
  13.3 Methods of Scattering Experiments
    13.3.1 Advanced Methods (interesting and useful but you probably won't be tested on this)
    13.3.2 Powder Diffraction (you will almost certainly be tested on this!)
  13.4 Still More about Scattering
    13.4.1 Variant: Scattering in Liquids and Amorphous Solids
    13.4.2 Variant: Inelastic Scattering
    13.4.3 Experimental Apparatus
  13.5 Summary of Diffraction

VI Electrons in Solids

14 Electrons in a Periodic Potential
  14.1 Nearly Free Electron Model
    14.1.1 Degenerate Perturbation Theory
  14.2 Bloch's Theorem
  14.3 Summary of Electrons in a Periodic Potential

15 Insulator, Semiconductor, or Metal
  15.1 Energy Bands in One Dimension: Mostly Review
  15.2 Energy Bands in Two (or More) Dimensions
  15.3 Tight Binding
  15.4 Failures of the Band-Structure Picture of Metals and Insulators
  15.5 Band Structure and Optical Properties
    15.5.1 Optical Properties of Insulators and Semiconductors
    15.5.2 Direct and Indirect Transitions
    15.5.3 Optical Properties of Metals
    15.5.4 Optical Effects of Impurities
  15.6 Summary of Insulators, Semiconductors, and Metals

16 Semiconductor Physics
  16.1 Electrons and Holes
    16.1.1 Drude Transport: Redux
  16.2 Adding Electrons or Holes with Impurities: Doping
    16.2.1 Impurity States
  16.3 Statistical Mechanics of Semiconductors
  16.4 Summary of Statistical Mechanics of Semiconductors

17 Semiconductor Devices
  17.1 Band Structure Engineering
    17.1.1 Designing Band Gaps
    17.1.2 Non-Homogeneous Band Gaps
    17.1.3 Summary of the Examinable Material
  17.2 p-n Junction

VII Magnetism and Mean Field Theories

18 Magnetic Properties of Atoms: Para- and Dia-Magnetism
  18.1 Basic Definitions of Types of Magnetism
  18.2 Atomic Physics: Hund's Rules
    18.2.1 Why Moments Align
  18.3 Coupling of Electrons in Atoms to an External Field
  18.4 Free Spin (Curie or Langevin) Paramagnetism
  18.5 Larmor Diamagnetism
  18.6 Atoms in Solids
    18.6.1 Pauli Paramagnetism in Metals
    18.6.2 Diamagnetism in Solids
    18.6.3 Curie Paramagnetism in Solids
  18.7 Summary of Atomic Magnetism; Paramagnetism and Diamagnetism

19 Spontaneous Order: Antiferro-, Ferri-, and Ferro-Magnetism
  19.1 (Spontaneous) Magnetic Order
    19.1.1 Ferromagnets
    19.1.2 Antiferromagnets
    19.1.3 Ferrimagnetism
  19.2 Breaking Symmetry
    19.2.1 Ising Model
  19.3 Summary of Magnetic Orders

20 Domains and Hysteresis
  20.1 Macroscopic Effects in Ferromagnets: Domains
    20.1.1 Disorder and Domain Walls
    20.1.2 Disorder Pinning
    20.1.3 The Bloch/Néel Wall
  20.2 Hysteresis in Ferromagnets
    20.2.1 Single-Domain Crystallites
    20.2.2 Domain Pinning and Hysteresis
  20.3 Summary of Domains and Hysteresis in Ferromagnets

21 Mean Field Theory
  21.1 Mean Field Equations for the Ferromagnetic Ising Model
  21.2 Solution of Self-Consistency Equation
    21.2.1 Paramagnetic Susceptibility
    21.2.2 Further Thoughts
  21.3 Summary of Mean Field Theory

22 Magnetism from Interactions: The Hubbard Model
  22.1 Ferromagnetism in the Hubbard Model
    22.1.1 Hubbard Ferromagnetism Mean Field Theory
    22.1.2 Stoner Criterion
  22.2 Mott Antiferromagnetism in the Hubbard Model
  22.3 Summary of the Hubbard Model
  22.4 Appendix: The Hubbard Model for the Hydrogen Molecule

23 Magnetic Devices

Indices
  Index of People
  Index of Topics

Chapter 1
About Condensed Matter Physics
This chapter is just my personal take on why this topic is interesting. It seems unlikely to me that any exam would ask you why you study this topic, so you should probably consider this section to be not examinable. Nonetheless, you might want to read it to figure out why you should think this course is interesting if that isn’t otherwise obvious.
1.1 What is Condensed Matter Physics
Quoting Wikipedia:

"Condensed matter physics is the field of physics that deals with the macroscopic and microscopic physical properties of matter. In particular, it is concerned with the 'condensed' phases that appear whenever the number of constituents in a system is extremely large and the interactions between the constituents are strong. The most familiar examples of condensed phases are solids and liquids, which arise from the electromagnetic forces between atoms."
The term "condensed matter", being more general than just solid state, was coined and promoted by Nobel Laureate Philip W. Anderson.
1.2 Why Do We Study Condensed Matter Physics?
There are several very good answers to this question:
1. Because it is the world around us
Almost all of the physical world that we see is in fact condensed matter. Questions such as

• why are metals shiny and why do they feel cold?
• why is glass transparent?
• why is water a fluid, and why does fluid feel wet?
• why is rubber soft and stretchy?

are all in the domain of condensed matter physics. In fact, almost every question you might ask about the world around you, short of asking about the sun or stars, is probably related to condensed matter physics in some way.
2. Because it is useful Over the last century our command of condensed matter physics has enabled us humans to do remarkable things. We have used our knowledge of physics to engineer new materials and exploit their properties to change our world and our society completely. Perhaps the most remarkable example is how our understanding of solid state physics enabled new inventions exploiting semiconductor technology, which enabled the electronics industry, which enabled computers, iPhones, and everything else we now take for granted.
3. Because it is deep The questions that arise in condensed matter physics are as deep as those you might find anywhere. In fact, many of the ideas that are now used in other fields of physics can trace their origins to condensed matter physics. A few examples for fun:
• The famous Higgs boson, which the LHC is searching for, is no different from a phenomenon that occurs in superconductors (the domain of condensed matter physicists). The Higgs mechanism, which gives mass to elementary particles, is frequently called the "Anderson-Higgs" mechanism, after the condensed matter physicist Phil Anderson (the same guy who coined the term "condensed matter") who described much of the same physics before Peter Higgs, the high energy theorist.

• The ideas of the renormalization group (Nobel Prize to Kenneth Wilson in 1982) were developed simultaneously in both high-energy and condensed matter physics.

• The ideas of topological quantum field theories, while invented by string theorists as theories of quantum gravity, have been discovered in the laboratory by condensed matter physicists!

• In the last few years there has been a mass exodus of string theorists applying black-hole physics (in N dimensions!) to phase transitions in real materials. The very same structures exist in the lab that are (maybe!) somewhere out in the cosmos!
That this type of physics is deep is not just my opinion. The Nobel committee agrees with me. During this course we will discuss the work of no fewer than 50 Nobel laureates! (See the index of scientists at the end of this set of notes).
4. Because reductionism doesn't work
{begin rant} People frequently have the feeling that if you continually ask "what is it made of" you learn more about something. This approach to knowledge is known as reductionism. For example, asking what water is made of, someone may tell you it is made from molecules, then molecules are made of atoms, atoms of electrons and protons, protons of quarks, and quarks are made of who-knows-what. But none of this information tells you anything about why water is wet, about why protons and neutrons bind to form nuclei, why the atoms bind to form water, and so forth. Understanding physics inevitably involves understanding how many objects all interact with each other. And this is where things get difficult very
quickly. We understand the Schroedinger equation extremely well for one particle, but the Schroedinger equations for four or more particles, while in principle solvable, in practice are never solved because they are too difficult — even for the world's biggest computers. Physics involves figuring out what to do then. How are we to understand how many quarks form a nucleus, or how many electrons and protons form an atom, if we cannot solve the many particle Schroedinger equation? Even more interesting is the possibility that we understand very well the microscopic theory of a system, but then we discover that macroscopic properties emerge from the system that we did not expect. My personal favorite example is that when one puts together many electrons (each with charge −e) one can sometimes find new particles emerging, each having one third the charge of an electron!1 Reductionism would never uncover this — it misses the point completely. {end rant}

5. Because it is a Laboratory
Condensed matter physics is perhaps the best laboratory we have for studying quantum physics and statistical physics. Those of us who are fascinated by what quantum mechanics and statistical mechanics can do often end up studying condensed matter physics, which is deeply grounded in both of these topics. Condensed matter is an infinitely varied playground for physicists to test strange quantum and statistical effects. I view this entire course as an extension of what you have already learned in quantum and statistical physics. If you enjoyed those courses, you will likely enjoy this as well. If you did not do well in those courses, you might want to go back and study them again, because many of the same ideas will arise here.
1 Yes, this truly happens. The Nobel Prize in 1998 was awarded to Dan Tsui, Horst Stormer, and Bob Laughlin for the discovery of this phenomenon, known as the fractional quantum Hall effect.

Part I
Physics of Solids without Considering Microscopic Structure: The Early Days of Solid State
Chapter 2
Specific Heat of Solids: Boltzmann, Einstein, and Debye
Our story of condensed matter physics starts around the turn of the last century. It was well known (and you should remember from last year) that the heat capacity1 of a monatomic (ideal) gas is Cv = 3kB/2 per atom, with kB being Boltzmann's constant. The statistical theory of gases described why this is so. As far back as 1819, however, it had also been known that for many solids the heat capacity is given by2
C = 3kB per atom, or C = 3R per mole, which is known as the law of Dulong-Petit3. While this law is not always correct, it is frequently close to true. For example, at room temperature we have the values shown in Table 2.1. With the exception of diamond, the law C/R = 3 seems to hold extremely well at room temperature, although at lower temperatures all materials start to deviate from this law, and typically
1 We will almost always be concerned with the heat capacity C per atom of a material. Multiplying by Avogadro's number gives the molar heat capacity or heat capacity per mole. The specific heat (denoted often as c rather than C) is the heat capacity per unit mass. However, the phrase "specific heat" is also used loosely to describe the molar heat capacity, since they are both intensive quantities (as compared to the total heat capacity, which is extensive — i.e., proportional to the amount of mass in the system). We will try to be precise with our language, but one should be aware that frequently things are written in non-precise ways and you are left to figure out what is meant. For example, really we should say Cv per atom = 3kB/2 rather than Cv = 3kB/2 per atom, and similarly we should say C per mole = 3R. To be more precise I really would have liked to title this chapter "Heat Capacity Per Atom of Solids" rather than "Specific Heat of Solids". However, for over a century people have talked about the "Einstein Theory of Specific Heat" and "Debye Theory of Specific Heat" and it would have been almost scandalous to not use this wording.
2 Here I do not distinguish between Cp and Cv because they are very close to the same. Recall that $C_p - C_v = V T \alpha^2/\beta_T$, where $\beta_T$ is the isothermal compressibility and $\alpha$ is the coefficient of thermal expansion. For a solid, $\alpha$ is relatively small.
3 Both Pierre Dulong and Alexis Petit were French chemists. Neither is remembered for much else besides this law.
Material    C/R
Aluminum    2.91
Antimony    3.03
Copper      2.94
Gold        3.05
Silver      2.99
Diamond     0.735
Table 2.1: Heat Capacities of Some Solids
C drops rapidly below some temperature. (And for diamond when the temperature is raised, the heat capacity increases towards 3R as well, see Fig. 2.2 below). In 1896 Boltzmann constructed a model that accounted for this law fairly well. In his model, each atom in the solid is bound to neighboring atoms. Focusing on a single particular atom, we imagine that atom as being in a harmonic well formed by the interaction with its neighbors. In such a classical statistical mechanical model, the heat capacity of the vibration of the atom is 3kB per atom, in agreement with Dulong-Petit. (Proving this is a good homework assignment that you should be able to answer with your knowledge of statistical mechanics and/or the equipartition theorem). Several years later in 1907, Einstein started wondering about why this law does not hold at low temperatures (for diamond, “low” temperature appears to be room temperature!). What he realized is that quantum mechanics is important! Einstein’s assumption was similar to that of Boltzmann. He assumed that every atom is in a harmonic well created by the interaction with its neighbors. Further he assumed that every atom is in an identical harmonic well and has an oscillation frequency ω (known as the “Einstein” frequency). The quantum mechanical problem of a simple harmonic oscillator is one whose solution we know. We will now use that knowledge to determine the heat capacity of a single one dimensional harmonic oscillator. This entire calculation should look familiar from your statistical physics course.
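Boltzmann's classical result can be checked numerically (this sketch is my own illustration, not part of the notes): for a classical 3D harmonic oscillator the partition function is $Z \propto (\beta\hbar\omega)^{-3}$, and differentiating $\ln Z$ twice by finite differences recovers the Dulong-Petit value $C = 3k_B$. Natural units $\hbar = \omega = k_B = 1$ are assumed.

```python
import math

hbar = omega = kB = 1.0  # natural units

def lnZ_classical_3d(beta):
    # Classical 3D harmonic oscillator: Z = (beta*hbar*omega)^(-3)
    return -3.0 * math.log(beta * hbar * omega)

def avg_energy(T, dbeta=1e-6):
    # <E> = -d(lnZ)/d(beta), central difference; should give 3*kB*T
    beta = 1.0 / (kB * T)
    return -(lnZ_classical_3d(beta + dbeta) - lnZ_classical_3d(beta - dbeta)) / (2 * dbeta)

def heat_capacity(T, dT=1e-4):
    # C = d<E>/dT, central difference
    return (avg_energy(T + dT) - avg_energy(T - dT)) / (2 * dT)

print(heat_capacity(1.0))  # ≈ 3, i.e. 3*kB per atom (Dulong-Petit)
```

The same finite-difference machinery works unchanged for the quantum partition function of the next section, which is what makes this a useful warm-up.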
2.1 Einstein’s Calculation
In one dimension, the eigenstates of a single harmonic oscillator are
\[ E_n = \hbar\omega\,(n + 1/2) \]

with ω the frequency of the harmonic oscillator (the "Einstein frequency"). The partition function is then4
\[ Z_{1D} = \sum_{n \geq 0} e^{-\beta\hbar\omega(n+1/2)} = \frac{e^{-\beta\hbar\omega/2}}{1 - e^{-\beta\hbar\omega}} = \frac{1}{2\sinh(\beta\hbar\omega/2)} \]

4 We will very frequently use the standard notation β = 1/(kBT).
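As a quick sanity check (my own illustration, not part of the notes), the geometric series for $Z_{1D}$ can be summed directly and compared with the closed forms, using the dimensionless combination $\beta\hbar\omega$:

```python
import math

beta_hw = 0.5  # the dimensionless combination beta*hbar*omega
# Direct (truncated) sum of the geometric series; terms decay as e^{-n*beta_hw}
Z_sum = sum(math.exp(-beta_hw * (n + 0.5)) for n in range(200))
# The two equivalent closed forms derived in the text
Z_geom = math.exp(-beta_hw / 2) / (1 - math.exp(-beta_hw))
Z_sinh = 1.0 / (2.0 * math.sinh(beta_hw / 2))
print(Z_sum, Z_geom, Z_sinh)  # all three agree
```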
The expectation of energy is then
\[ \langle E \rangle = -\frac{1}{Z}\frac{\partial Z}{\partial \beta} = \frac{\hbar\omega}{2}\coth\!\left(\frac{\beta\hbar\omega}{2}\right) = \hbar\omega\left[n_B(\beta\hbar\omega) + \frac{1}{2}\right] \qquad (2.1) \]

where nB is the Bose occupation factor5

\[ n_B(x) = \frac{1}{e^x - 1} \]

This result is easy to interpret: the mode ω is an excitation that is excited on average nB times, or equivalently there is a "boson" orbital which is "occupied" by nB bosons. Differentiating the expression for energy we obtain the heat capacity for a single oscillator,
\[ C = \frac{\partial \langle E \rangle}{\partial T} = k_B (\beta\hbar\omega)^2 \frac{e^{\beta\hbar\omega}}{(e^{\beta\hbar\omega} - 1)^2} \]
Note that the high temperature limit of this expression gives C = kB (check this if it is not obvious!). Generalizing to the three-dimensional case,

\[ E_{n_x,n_y,n_z} = \hbar\omega\left[(n_x + 1/2) + (n_y + 1/2) + (n_z + 1/2)\right] \]

and

\[ Z_{3D} = \sum_{n_x,n_y,n_z \geq 0} e^{-\beta E_{n_x,n_y,n_z}} = [Z_{1D}]^3 \]

resulting in $\langle E_{3D} \rangle = 3 \langle E_{1D} \rangle$, so correspondingly we obtain

\[ C = 3 k_B (\beta\hbar\omega)^2 \frac{e^{\beta\hbar\omega}}{(e^{\beta\hbar\omega} - 1)^2} \]

Plotted, this looks like Fig. 2.1.
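The two limits of this formula can be made concrete with a short script (a sketch of mine, not from the notes; I write the Einstein temperature as T_E = ℏω/kB, so that x = T_E/T plays the role of βℏω):

```python
import math

def einstein_C(T, T_E):
    """Einstein heat capacity per atom, in units of kB.

    T_E = hbar*omega/kB is the Einstein temperature, so
    x = T_E/T is the dimensionless combination beta*hbar*omega.
    """
    x = T_E / T
    return 3.0 * x**2 * math.exp(x) / (math.exp(x) - 1.0)**2

print(einstein_C(100.0, 1.0))  # high T: approaches 3 (Dulong-Petit)
print(einstein_C(0.05, 1.0))   # low T: exponentially small
```

Evaluating at a few temperatures traces out exactly the curve of Fig. 2.1: the heat capacity rises monotonically from (exponentially close to) zero toward the Dulong-Petit plateau of 3kB per atom.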
5 Satyendra Bose worked out the idea of Bose statistics in 1924, but could not get it published until Einstein lent his support to the idea.
[Figure 2.1: Einstein heat capacity per atom in three dimensions, plotted as C/(3kB) versus kBT/(ℏω).]
Note that in the high temperature limit kBT ≫ ℏω we recover the law of Dulong-Petit: 3kB heat capacity per atom. However, at low temperature (T ≪ ℏω/kB) the degrees of freedom "freeze out", the system gets stuck in the ground state eigenstate, and the heat capacity vanishes rapidly. Einstein's theory reasonably accurately explained the behavior of the heat capacity as a function of temperature with only a single fitting parameter, the Einstein frequency ω. (Sometimes this frequency is quoted in terms of the Einstein temperature, ℏω = kBTEinstein.) In Fig. 2.2 we show Einstein's original comparison to the heat capacity of diamond. For most materials, the Einstein frequency ω is low compared to room temperature, so the Dulong-Petit law holds fairly well (room temperature being relatively high compared to the Einstein frequency). However, for diamond, ω is high compared to room temperature, so the heat capacity is lower than 3R at room temperature. The reason diamond has such a high Einstein frequency is that the bonding between atoms in diamond is very strong and the atomic mass is relatively low (hence a high oscillation frequency $\omega = \sqrt{\kappa/m}$, with κ a spring constant and m the mass). These strong bonds also result in diamond being an exceptionally hard material. Einstein's result was remarkable, not only in that it explained the temperature dependence
of the heat capacity, but more importantly it told us something fundamental about quantum mechanics. Keep in mind that Einstein obtained this result 19 years before the Schroedinger equation was discovered!^6

Figure 2.2: Plot of Molar Heat Capacity of Diamond from Einstein's Original 1907 paper. The fit is to the Einstein theory. The x-axis is k_B T in units of ℏω and the y-axis is C in units of cal/(K-mol). In these units, 3R ≈ 5.96.
2.2 Debye’s Calculation
Einstein's theory of specific heat was extremely successful, but still there were clear deviations from the predicted equation. Even in the plot in his first paper (Fig. 2.2 above) one can see that at low temperature the experimental data lies above the theoretical curve^7. This deviation turns out to be rather important! In fact, it was known that at low temperatures most materials have a heat capacity that is proportional to T^3. (Metals also have a very small additional term proportional to T, which we will discuss later in section 4.2. Magnetic materials may have other additional terms as well^8. Nonmagnetic insulators have only the T^3 behavior.) At any rate, Einstein's formula at low temperature is exponentially small in T, not agreeing at all with the actual experiments. In 1912 Peter Debye^9 discovered how to better treat the quantum mechanics of oscillations of atoms, and managed to explain the T^3 specific heat. Debye realized that oscillation of atoms is the same thing as sound, and sound is a wave, so it should be quantized the same way as Planck quantized light waves. Besides the fact that the speed of light is much faster than that of sound, there is only one minor difference between light and sound: for light there are two polarizations for each wavevector k, whereas for sound there are three modes for each k (a longitudinal mode, where the atomic motion is in the same direction as k, and two transverse modes, where the motion is perpendicular to k; light has only the transverse modes). For simplicity of presentation here we will assume that the transverse and longitudinal modes have the same velocity, although in truth the longitudinal
6 Einstein was a pretty smart guy.
7 Although perhaps not obvious, this deviation turns out to be real, and not just experimental error.
8 We will discuss magnetism in part VII.
9 Peter Debye later won a Nobel prize in Chemistry for something completely different.

velocity is usually somewhat greater than the transverse velocity^10. We now repeat essentially what was Planck's calculation for light. This calculation should also look familiar from your statistical physics course. First, however, we need some preliminary information about waves:
2.2.1 About Periodic (Born-Von-Karman) Boundary Conditions
Many times in this course we will consider waves with periodic or "Born-Von-Karman" boundary conditions. It is easiest to describe this first in one dimension. Here, instead of having a one-dimensional sample of length L with actual ends, we imagine that the two ends are connected together, making the sample into a circle. The periodic boundary condition means that any wave e^{ikr} in this sample is required to have the same value at a position r as it has at r + L (we have gone all the way around the circle). This then restricts the possible values of k to be
k = 2πn/L

for n an integer. If we are ever required to sum over all possible values of k, for large enough L we can replace the sum with an integral, obtaining^11
Σ_k → (L/2π) ∫_{−∞}^{∞} dk

A way to understand this mapping is to note that the spacing between allowed points in k-space is 2π/L, so the integral ∫dk can be replaced by a sum over k points times the spacing between the points. In three dimensions, the story is extremely similar. For a sample of size L^3, we identify opposite ends of the sample (wrapping the sample up into a hypertorus!) so that if you go a distance L in any direction, you get back to where you started^12. As a result, our k values can only take values

k = (2π/L)(n_1, n_2, n_3)

for integer values of n_i, so here each k point now occupies a volume of (2π/L)^3. Because of this discretization of values of k, whenever we have a sum over all possible k values we obtain
Σ_k → (L^3/(2π)^3) ∫ dk

with the integral over all three dimensions of k-space (this is what we mean by the bold dk). One might think that wrapping the sample up into a hypertorus is very unnatural compared to considering a system with real boundary conditions. However, these boundary conditions tend to simplify calculations quite a bit, and most physical quantities you might measure could be measured far from the boundaries of the sample anyway and would then be independent of what you do with the boundary conditions.

10 We have also assumed the sound velocity to be the same in every direction, which need not be true in real materials. It is not too hard to include anisotropy into Debye's theory as well.
11 In your previous courses you may have used particle-in-a-box boundary conditions, where instead of plane waves e^{i2πnr/L} you used particle-in-a-box wavefunctions of the form sin(nπr/L). This gives you instead

Σ_k → (L/π) ∫_0^{∞} dk

which will inevitably result in the same physical answers as for the periodic boundary condition case. All calculations can be done either way, but periodic Born-Von-Karman boundary conditions are almost always simpler.
12 Such boundary conditions are very popular in video games. It may also be possible that our universe has such boundary conditions, a notion known as the doughnut universe. Data collected by the Cosmic Background Explorer (led by Nobel Laureates John Mather and George Smoot) and its successor the Wilkinson Microwave Anisotropy Probe appear consistent with this structure.
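The replacement of the sum over allowed k by an integral is easy to verify numerically. The sketch below (our own illustration; the sample length L and the Gaussian test function are arbitrary choices) sums a smooth function over the one-dimensional grid k = 2πn/L and compares with (L/2π) times the exact integral:

```python
import numpy as np

# Periodic boundary conditions on a ring of length L allow k = 2*pi*n/L.
# For a smooth f(k), sum_k f(k) -> (L/2pi) * integral f(k) dk for large L.
L = 50.0
n = np.arange(-2000, 2001)            # enough points that f has decayed
k = 2.0 * np.pi * n / L               # allowed k values
f = np.exp(-k**2)                     # arbitrary smooth test function

k_sum = f.sum()
k_integral = (L / (2.0 * np.pi)) * np.sqrt(np.pi)   # exact: ∫ e^{-k^2} dk = √π

print(k_sum, k_integral)   # agree to high accuracy
```

The agreement improves as L grows, because the spacing 2π/L between allowed k points shrinks.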
2.2.2 Debye’s Calculation Following Planck
Debye decided that the oscillation modes were waves with frequencies ω(k) = v|k| with v the sound velocity, and that for each k there should be three possible oscillation modes, one for each direction of motion. Thus he wrote an expression entirely analogous to Einstein's expression (compare to Eq. 2.1)
⟨E⟩ = 3 Σ_k ℏω(k) [n_B(βℏω(k)) + 1/2]
    = 3 (L^3/(2π)^3) ∫ dk ℏω(k) [n_B(βℏω(k)) + 1/2]
Each excitation mode is a boson of frequency ω(k) and it is occupied on average n_B(βℏω(k)) times. By spherical symmetry, we may convert the three-dimensional integral to a one-dimensional integral

∫ dk → 4π ∫_0^{∞} k^2 dk

(recall that 4πk^2 is the area of the surface of a sphere^13 of radius k) and we also use k = ω/v to obtain

⟨E⟩ = 3 (4πL^3/(2π)^3) ∫_0^{∞} ω^2 dω (1/v^3)(ℏω) [n_B(βℏω) + 1/2]

It is convenient to replace nL^3 = N where n is the density of atoms. We then obtain
⟨E⟩ = ∫_0^{∞} dω g(ω)(ℏω) [n_B(βℏω) + 1/2]    (2.2)

where the density of states is given by
g(ω) = N 12πω^2/((2π)^3 n v^3) = N 9ω^2/ω_d^3    (2.3)

where

ω_d^3 = 6π^2 n v^3    (2.4)

This frequency will be known as the Debye frequency, and below we will see why we chose to define it this way with the factor of 9 removed. The meaning of the density of states^14 here is that the total number of oscillation modes with frequencies between ω and ω + dω is given by g(ω) dω. Thus the interpretation of Eq. 2.2 is
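As a quick numerical sanity check of Eq. 2.3 (our own addition, not part of the original text; we work in units where N = ω_d = 1), integrating the density of states from 0 up to the Debye frequency counts exactly 3N oscillation modes:

```python
import numpy as np

# g(w) = 9*N*w**2 / wd**3 (Eq. 2.3), in units where N = wd = 1.
N, wd = 1.0, 1.0
w = np.linspace(0.0, wd, 100_001)
g = 9.0 * N * w**2 / wd**3

# Trapezoid rule for the integral of g from 0 to wd.
modes = np.sum((g[1:] + g[:-1]) / 2.0) * (w[1] - w[0])

print(modes)   # ≈ 3.0 = 3N
```

This is why the factor of 9 in g(ω) goes together with the definition ω_d^3 = 6π^2 n v^3.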
13 Or to be pedantic, ∫ dk → ∫_0^{2π} dφ ∫_0^{π} dθ sin θ ∫ k^2 dk, and performing the angular integrals gives 4π.
14 We will encounter the concept of density of states many times, so it is a good idea to become comfortable with it!

simply that we should count how many modes there are per frequency (given by g), then multiply by the expected energy per mode (compare to Eq. 2.1), and finally integrate over all frequencies. This result, Eq. 2.2, for the quantum energy of the sound waves is strikingly similar to Planck's result for the quantum energy of light waves, only we have replaced 2/c^3 by 3/v^3 (replacing the 2 light modes by 3 sound modes). The other change from Planck's classic result is the +1/2 that we obtain as the zero point energy of each oscillator^15. At any rate, this zero point energy gives us a contribution which is temperature independent^16. Since we are concerned with C = ∂⟨E⟩/∂T, this term will not contribute and we will separate it out. We thus obtain

⟨E⟩ = (9Nℏ/ω_d^3) ∫_0^{∞} dω ω^3/(e^{βℏω} − 1) + T-independent constant

By defining a variable x = βℏω this becomes

⟨E⟩ = (9Nℏ/(ω_d^3 (βℏ)^4)) ∫_0^{∞} dx x^3/(e^x − 1) + T-independent constant

The nasty integral just gives some number^17; in fact the number is π^4/15. Thus we obtain

⟨E⟩ = 9N (k_B T)^4/(ℏω_d)^3 · (π^4/15) + T-independent constant

Notice the similarity to Planck's derivation of the T^4 energy of photons. As a result, the heat capacity is

C = ∂⟨E⟩/∂T = N k_B ((k_B T)^3/(ℏω_d)^3) (12π^4/5) ∼ T^3

This correctly obtains the desired T^3 specific heat. Furthermore, the prefactor of T^3 can be calculated in terms of known quantities such as the sound velocity and the density of atoms. Note that the Debye frequency in this equation is sometimes replaced by a temperature
ℏω_d = k_B T_Debye

known as the Debye temperature, so that this equation reads

C = ∂⟨E⟩/∂T = N k_B (T^3/T_Debye^3) (12π^4/5)
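The value π^4/15 of the nasty integral is easy to confirm numerically. The following sketch (our own check; the grid spacing and the upper cutoff at x = 60 are arbitrary choices, justified because the integrand decays like x^3 e^{−x}) evaluates it with the trapezoid rule:

```python
import numpy as np

# Check that int_0^inf x^3 / (e^x - 1) dx = pi^4 / 15.
x = np.linspace(1e-8, 60.0, 2_000_001)   # tail beyond 60 is negligible
integrand = x**3 / np.expm1(x)           # expm1 is accurate near x = 0
dx = x[1] - x[0]
value = np.sum((integrand[1:] + integrand[:-1]) / 2.0) * dx  # trapezoid rule

print(value, np.pi**4 / 15.0)   # both ≈ 6.4939
```

The same number appears in Planck's black-body calculation, for exactly the same mathematical reason.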
15 Planck should have gotten this energy as well, but he didn't know about zero-point energy; in fact, since it was long before quantum mechanics was fully understood, Debye didn't actually have this term either.
16 Temperature independent and also infinite. Handling infinities like this is something that gives mathematicians nightmares, but physicists do it happily when they know that the infinity is not really physical. We will see below in section 2.2.3 how this infinity gets properly cut off by the Debye frequency.
17 If you wanted to evaluate the nasty integral, the strategy is to reduce it to the famous Riemann zeta function. We start by writing

∫_0^{∞} dx x^3/(e^x − 1) = ∫_0^{∞} dx x^3 e^{−x}/(1 − e^{−x}) = ∫_0^{∞} dx x^3 e^{−x} Σ_{n=0}^{∞} e^{−nx} = Σ_{n=1}^{∞} ∫_0^{∞} dx x^3 e^{−nx} = 3! Σ_{n=1}^{∞} 1/n^4

The resulting sum is a special case of the famous Riemann zeta function, defined as ζ(p) = Σ_{n=1}^{∞} n^{−p}, where here we are concerned with the value of ζ(4). Since the zeta function is one of the most important functions in all of mathematics^18, one can just look up its value in a table to find that ζ(4) = π^4/90, thus giving us the above stated result that the nasty integral is 3! π^4/90 = π^4/15. However, in the unlikely event that you were stranded on a desert island and did not have access to a table, you could even evaluate this sum explicitly, which we do in the appendix to this chapter.
18 One of the most important unproven conjectures in all of mathematics is known as the Riemann hypothesis and is concerned with determining for which values of p one has ζ(p) = 0. The hypothesis was written down in 1859 by Bernhard Riemann (the same guy who invented Riemannian geometry, crucial to general relativity) and has defied proof ever since. The Clay Mathematics Institute has offered one million dollars for a successful proof.
2.2.3 Debye’s “Interpolation”
Unfortunately, now Debye has a problem. In the expression derived above, the heat capacity is proportional to T^3 up to arbitrarily high temperature. We know, however, that the heat capacity should level off to 3k_B N at high T. Debye understood that the problem with his approximation is that it allows an infinite number of sound wave modes, up to arbitrarily large k. This would imply more sound wave modes than there are atoms in the entire system. Debye guessed (correctly) that really there should be only as many modes as there are degrees of freedom in the system. We will see in sections 8-12 below that this is an important general principle. To fix this problem, Debye decided to not consider sound waves above some maximum frequency ω_cutoff, with this frequency chosen such that there are exactly 3N sound wave modes in the system (3 dimensions of motion times N particles). We thus define ω_cutoff via
3N = ∫_0^{ω_cutoff} dω g(ω)    (2.5)

We correspondingly rewrite Eq. 2.2 for the energy (dropping the zero point contribution) as
⟨E⟩ = ∫_0^{ω_cutoff} dω g(ω) ℏω n_B(βℏω)    (2.6)

Note that at very low temperature this cutoff does not matter at all, since for large β the Bose factor n_B will very rapidly go to zero at frequencies well below the cutoff frequency anyway. Let us now check that this cutoff gives us the correct high temperature limit. For high temperature

n_B(βℏω) = 1/(e^{βℏω} − 1) → k_B T/(ℏω)

Thus in the high temperature limit, invoking Eqs. 2.5 and 2.6 we obtain
⟨E⟩ = k_B T ∫_0^{ω_cutoff} dω g(ω) = 3 k_B T N

yielding the Dulong-Petit high temperature heat capacity C = ∂⟨E⟩/∂T = 3k_B N, i.e., 3k_B per atom. For completeness, let us now evaluate our cutoff frequency,
3N = ∫_0^{ω_cutoff} dω g(ω) = 9N ∫_0^{ω_cutoff} dω ω^2/ω_d^3 = 3N ω_cutoff^3/ω_d^3

We thus see that the correct cutoff frequency is exactly the Debye frequency ω_d. Note that k = ω_d/v = (6π^2 n)^{1/3} (from Eq. 2.4) is on the order of the inverse interatomic spacing of the solid. More generally (in the neither high nor low temperature limit) one has to evaluate the integral 2.6, which cannot be done analytically. Nonetheless it can be done numerically and then can be compared to actual experimental data as shown in Fig. 2.3. It should be emphasized that the Debye theory makes predictions without any free parameters, as compared to the Einstein theory which had the unknown Einstein frequency ω as a free fitting parameter.
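The numerical evaluation of Eq. 2.6 with the cutoff at ω_d can be sketched as follows (our own illustration; we differentiate Eq. 2.6 analytically before integrating, work in units of N k_B, and use the dimensionless temperature t = T/T_Debye as a convenience):

```python
import numpy as np

def debye_heat_capacity(t, npts=200_001):
    """Debye heat capacity in units of N*k_B, with t = T / T_Debye.

    C/(N k_B) = 9 t^3 * int_0^{1/t} x^4 e^x / (e^x - 1)^2 dx,
    obtained by differentiating Eq. 2.6 with respect to T."""
    xd = 1.0 / t                              # upper limit x_d = T_Debye / T
    x = np.linspace(1e-8, xd, npts)
    integrand = x**4 * np.exp(x) / np.expm1(x)**2
    dx = x[1] - x[0]
    integral = np.sum((integrand[1:] + integrand[:-1]) / 2.0) * dx
    return 9.0 * t**3 * integral

print(debye_heat_capacity(10.0))              # high T: ≈ 3 (Dulong-Petit)
print(debye_heat_capacity(0.05))              # low T: the T^3 law
print((12.0 * np.pi**4 / 5.0) * 0.05**3)      # these last two agree
```

Plotting this function gives the smooth interpolation between the T^3 regime and the Dulong-Petit plateau that is compared to experiment in Fig. 2.3.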
2.2.4 Some Shortcomings of the Debye Theory
While Debye's theory is remarkably successful, it does have a few shortcomings.