Lecture 7: Canonical Ensemble

Physics 127a: Class Notes

Lecture 7: Canonical Ensemble – Simple Examples

The canonical partition function provides the standard route to calculating the thermodynamic properties of macroscopic systems, one of the important tasks of statistical mechanics:

Hamiltonian → $Q_N = \sum_j e^{-\beta E_j}$ → free energy $A(T,V,N)$ → etc.    (1)

A number of simple examples illustrate this type of calculation and provide useful physical insight into the behavior of more realistic systems. The following are simple because they are collections of noninteracting objects, which makes the enumeration of states easy.

Harmonic Oscillators

Classical

The Hamiltonian for one oscillator in one space dimension is

$H(x,p) = \frac{p^2}{2m} + \frac{1}{2} m \omega_0^2 x^2$    (2)

with $m$ the mass of the particle and $\omega_0$ the frequency of the oscillator. The partition function for one oscillator is

$Q_1 = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} \exp\left[-\beta\left(\frac{p^2}{2m} + \frac{1}{2} m \omega_0^2 x^2\right)\right] \frac{dx\,dp}{h}$.    (3)

The integrations over the Gaussian functions are precisely as in the ideal gas, so that

$Q_1 = \frac{1}{h}\left(\frac{2\pi m}{\beta}\right)^{1/2}\left(\frac{2\pi}{\beta m \omega_0^2}\right)^{1/2} = \frac{kT}{\hbar\omega_0}$,    (4)

introducing $\hbar = h/2\pi$ for convenience. For $N$ independent oscillators

$Q_N = (Q_1)^N = \left(\frac{kT}{\hbar\omega_0}\right)^N$    (5)

and then

$A = NkT \ln\frac{\hbar\omega_0}{kT}$,    (6)

$U = NkT$,    (7)

$S = Nk\left[\ln\frac{kT}{\hbar\omega_0} + 1\right]$,    (8)

$\mu = kT \ln\frac{\hbar\omega_0}{kT}$.    (9)

Equation (7) is an example of the general equipartition theorem: each coordinate or momentum appearing as a quadratic term in the Hamiltonian (e.g. $p^2/2m$, $Kx^2/2$) contributes $\frac{1}{2}kT$ to the average energy in the classical limit. The proof is an obvious generalization of the integrations done in Eq. (4); see Pathria §3.7 for a more formal proof. For oscillators in 3 space dimensions, replace $N$ by $3N$ in the above expressions.

Quantum

The quantum calculation is very easy in this case. The energy levels of a single, one-dimensional harmonic oscillator are

$E_j = \left(j + \tfrac{1}{2}\right)\hbar\omega_0$    (10)

so that

$Q_1 = \sum_j e^{-\beta(j+1/2)\hbar\omega_0}$    (11)

$\;\;\;\; = \frac{e^{-\beta\hbar\omega_0/2}}{1 - e^{-\beta\hbar\omega_0}} = \frac{1}{2\sinh(\beta\hbar\omega_0/2)}$.    (12)

For $N$ one-dimensional oscillators $Q_N = (Q_1)^N$, from which the thermodynamic behavior follows:

$A = NkT \ln\left[2\sinh(\beta\hbar\omega_0/2)\right] = N\left[\tfrac{1}{2}\hbar\omega_0 + kT \ln\left(1 - e^{-\beta\hbar\omega_0}\right)\right]$,    (13)

$U = \tfrac{1}{2}N\hbar\omega_0 \coth(\beta\hbar\omega_0/2) = N\hbar\omega_0\left[\frac{1}{2} + \frac{1}{e^{\beta\hbar\omega_0} - 1}\right]$.    (14)

[Figure: Specific heat of $N$ one-dimensional harmonic oscillators, scaled to $Nk$, as a function of temperature (scaled to $\hbar\omega_0/k$).]

It is interesting to focus on the specific heat

$C = \frac{\partial U}{\partial T} = Nk(\beta\hbar\omega_0)^2 \frac{e^{\beta\hbar\omega_0}}{\left(e^{\beta\hbar\omega_0} - 1\right)^2}$.    (15)

For $T \to \infty$, $\beta \to 0$, the exponentials can be expanded and we find the classical, equipartition result $C = Nk$. For any finite temperature the specific heat is reduced below the classical result, and at low temperatures $kT \ll \hbar\omega_0$ the exponentials are large and $C \simeq Nk(\hbar\omega_0/kT)^2\, e^{-\hbar\omega_0/kT}$, so that the specific heat is exponentially small. The results are plotted above. These results, with $N \to 3N$, are the Einstein model for the specific heat of the phonons in a solid.
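As a quick numerical check of Eqs. (14)–(15), the following is a minimal sketch (assuming NumPy and Matplotlib are available; the function name is illustrative, not from the notes). It evaluates the Einstein specific heat per oscillator and confirms the classical limit $C/Nk \to 1$ at high temperature and the exponential suppression at low temperature.

```python
import numpy as np
import matplotlib.pyplot as plt

# Work in units where k = 1 and hbar*omega_0 = 1, so t = kT/(hbar*omega_0).
def einstein_specific_heat(t):
    """C/(N k) for N one-dimensional quantum oscillators, Eq. (15)."""
    x = 1.0 / t                          # x = beta * hbar * omega_0
    return x**2 * np.exp(x) / np.expm1(x)**2

t = np.linspace(0.05, 2.0, 400)
plt.plot(t, einstein_specific_heat(t))
plt.xlabel(r"$kT/\hbar\omega_0$")
plt.ylabel(r"$C/Nk$")
plt.title("Einstein model specific heat")
plt.show()

# High T -> classical equipartition value 1; low T -> exponentially small.
print(einstein_specific_heat(np.array([10.0, 0.1])))  # ~[0.999, 0.0045]
```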
Paramagnetism

Consider $N$ magnetic moments $\mu$ in an applied magnetic field $B$. There is competition between the magnetic energy, of size $\mu B$, which tends to align the moments along the field, and the thermal fluctuations.

Classical vector spins

The Hamiltonian is

$H = -\sum_{i=1}^{N} \vec{\mu}_i \cdot \vec{B} = -\mu B \sum_{i=1}^{N} \cos\theta_i$    (16)

taking the field $\vec{B}$ to be in the $z$ direction, with $\theta_i$ the polar angle of the $i$th moment. This gives the partition function $Q_N = (Q_1)^N$ with

$Q_1 = \int d\Omega\, e^{\beta\mu B \cos\theta}$    (17)

with

$\int d\Omega = \int_0^{2\pi} d\phi \int_0^{\pi} \sin\theta\, d\theta$    (18)

the integral over all angles.

The average $z$ magnetic moment is $\langle M_z\rangle = N\langle\mu_z\rangle$ with

$\langle\mu_z\rangle = \frac{\int d\Omega\, \mu\cos\theta\, e^{\beta\mu B\cos\theta}}{\int d\Omega\, e^{\beta\mu B\cos\theta}} = kT\frac{\partial \ln Q_1}{\partial B}$.    (19)

The one-moment partition function is easily evaluated,

$Q_1 = 2\pi\int_{-1}^{1} d(\cos\theta)\, e^{\beta\mu B\cos\theta} = 4\pi\,\frac{\sinh(\beta\mu B)}{\beta\mu B}$,    (20)

so that

$\langle\mu_z\rangle = \mu\, L(\beta\mu B)$,    (21)

with $L$ the Langevin function

$L(x) = \coth x - \frac{1}{x}$.    (22)

Note that $L(x\to\infty)\to 1$, and for small $x$

$L(x) \simeq \frac{x}{3} - \frac{x^3}{45} + \cdots$.    (23)

For large temperatures or small fields (small $\beta\mu B$)

$\langle M_z\rangle \simeq \frac{N\mu^2 B}{3kT}$.    (24)

The coefficient of this linear increase with a small applied field is the magnetic susceptibility, $\chi = \partial\langle M_z\rangle/\partial B|_{B\to 0}$, so that

$\chi = \frac{N\mu^2}{3kT}$.    (25)

This $T^{-1}$ susceptibility is known as a Curie susceptibility.

Ising model

This might correspond to a quantum spin-$\frac{1}{2}$ ($S = \frac{1}{2}$) system, in which each spin has only two possible orientations, or a classical spin with strong, uniaxial, crystalline anisotropy. The Hamiltonian is

$H = -\sum_i \mu_i B$    (26)

with $\mu_i = \pm\mu$ the magnetic moment of the $i$th spin. We assume there is no interaction between different spins. There are only two states for a single spin, so the calculation of $Q_1$ is very easy:

$Q_1 = e^{\beta\mu B} + e^{-\beta\mu B} = 2\cosh(\mu B/kT)$.    (27)

Since the spins are non-interacting, $Q_N = (Q_1)^N$, and so

$A(N,T) = -NkT\ln\left[2\cosh(\mu B/kT)\right]$,    (28)

$U = -N\mu B\tanh(\mu B/kT)$.    (29)

[Figure: Specific heat scaled to $Nk$ as a function of temperature scaled to $\mu B/k$ for $N$ moments $\mu$ in a field $B$.]

The specific heat is

$C = Nk\left(\frac{\mu B}{kT}\right)^2 \frac{1}{\cosh^2(\mu B/kT)}$    (30)

which is plotted in the figure. The specific heat is proportional to $T^{-2}$ at high temperatures and exponentially small at low temperatures. In between is a peak at $kT \simeq \mu B$, known as a Schottky anomaly. Since we can also understand the specific heat as $C = T\,\partial S/\partial T$, we identify the anomaly with the decrease in entropy as the moments become ordered along the field.

[Figure: Magnetization scaled to $N\mu$ as a function of temperature scaled to $\mu B/k$ for $N$ spin-$\frac{1}{2}$ objects.]

We are also interested in the average magnetic moment $\langle M_z\rangle = N\langle\mu_z\rangle$, with

$\langle\mu_z\rangle = \frac{\sum_{\mu_i=\pm\mu} \mu_i\, e^{\beta\mu_i B}}{\sum_{\mu_i=\pm\mu} e^{\beta\mu_i B}}$,    (31)

where the sum runs over the two states of a single moment. Just as in calculating the average energy, we see this is conveniently written

$\langle M_z\rangle = kT\frac{\partial}{\partial B}\ln Q_N = -\frac{\partial A}{\partial B}$    (32)

(with the partial derivatives at constant $N, T$). This gives

$\langle M_z\rangle = N\mu\tanh\frac{\mu B}{kT}$.    (33)

At large field or low temperature we get saturation, $\langle M_z\rangle \simeq N\mu$, whereas at small fields or high temperature the behavior is linear in the field, $\langle M_z\rangle \simeq N\mu^2 B/kT$. The susceptibility is

$\chi = \frac{N\mu^2}{kT}$,    (34)

again of Curie form. Pathria §3.9 studies the case of arbitrary $S$ quantum spins.
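A short numerical illustration of the two-level results (again a sketch assuming NumPy; the helper names are mine, not from the notes): it evaluates the Schottky specific heat of Eq. (30), locating its peak near $kT \approx \mu B$, and checks that the magnetization of Eq. (33) crosses over from the Curie form $N\mu^2 B/kT$ at high temperature to saturation $N\mu$ at low temperature.

```python
import numpy as np

# Units: k = 1, mu = 1, B = 1, so t = kT/(mu*B).
def schottky_c(t):
    """C/(N k) for N two-state moments, Eq. (30)."""
    x = 1.0 / t                     # x = mu*B/(k*T)
    return x**2 / np.cosh(x)**2

def magnetization(t):
    """<M_z>/(N mu), Eq. (33)."""
    return np.tanh(1.0 / t)

t = np.linspace(0.05, 5.0, 2000)
c = schottky_c(t)
print("Schottky peak at kT/(mu B) ~", t[np.argmax(c)])   # ~0.83, i.e. kT ~ mu*B
print("C/Nk at the peak           ~", c.max())           # ~0.44

# Curie law vs saturation for the scaled magnetization:
print(magnetization(np.array([20.0, 0.05])))  # ~[0.05 = muB/kT, 1.0 (saturated)]
```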