03. Entropy: Boltzmann, Gibbs, Shannon
03. Boltzmann Entropy, Gibbs Entropy, Shannon Information.

I. Entropy in Statistical Mechanics.

• Goal: To explain the behavior of macroscopic systems in terms of the dynamical laws governing their microscopic constituents.
  - In particular: To provide a micro-dynamical explanation of the 2nd Law.

1. Boltzmann's Approach. (Ludwig Boltzmann, 1844-1906)

• Consider different "macrostates" of a gas. Why does the gas prefer to be in the thermodynamic equilibrium macrostate, i.e., the macrostate with constant thermodynamic properties (temperature, volume, pressure, etc.)?

• Suppose the gas consists of N identical particles governed by Hamilton's equations of motion (the micro-dynamics).

Def. 1. A microstate X of a gas is a specification of the position (3 values) and momentum (3 values) of each of its N particles.

  - Let Ω = phase space = the 6N-dim space of all possible microstates.
  - Let ΩE = the region of Ω consisting of all microstates with constant energy E.

[Figure: the Hamiltonian dynamics maps an initial microstate Xi in ΩE to a final microstate Xf. Can the 2nd Law be explained by recourse to this dynamics?]

Def. 2. A macrostate Γ of a gas is a specification of the gas in terms of macroscopic properties (pressure, temperature, volume, etc.).

• Relation between microstates and macrostates: macrostates supervene on microstates!
  - To each microstate there corresponds exactly one macrostate.
  - Many distinct microstates can correspond to the same macrostate.

• So: ΩE is partitioned into a finite number of regions corresponding to macrostates, with each microstate X belonging to exactly one macrostate Γ(X).

[Figure: ΩE partitioned into macrostate regions; the microstate X lies inside its macrostate Γ(X).]

Boltzmann's Claim: The equilibrium macrostate Γeq is vastly larger than any other macrostate (so it contains the vast majority of possible microstates).

Def. 3. The Boltzmann Entropy SB(Γ(X)) is a measure of the size of Γ(X), defined by:
    SB(Γ(X)) = k log|Γ(X)|,   where |Γ(X)| is the volume of Γ(X).

And: SB(Γ(X)) obtains its maximum value for Γeq.

[Figure: ΩE almost entirely filled by Γeq, with very small non-equilibrium macrostates at its edges.]

Thus: SB increases over time because, for any initial microstate Xi, the dynamics will map Xi into Γeq very quickly, and then keep it there for an extremely long time.

Two Ways to Explain the Approach to Equilibrium:

(a) Appeal to Typicality (Goldstein 2001)

Claim: A system approaches equilibrium because equilibrium microstates are typical and nonequilibrium microstates are atypical.

• Why? For large N, ΩE is almost entirely filled up with equilibrium microstates. Hence they are "typical".
  - But: What is it about the dynamics that evolves atypical states to typical states? "If a system is in an atypical microstate, it does not evolve into an equilibrium microstate just because the latter is typical." (Frigg 2009)
  - Need to identify properties of the dynamics that guarantee that atypical states evolve into typical states.
  - And: Need to show that these properties are typical.
  - Ex: If the dynamics is chaotic (in an appropriate sense), then (under certain conditions) any initial microstate Xi will quickly be mapped into Γeq and remain there for long periods of time. (Frigg 2009)

(b) Appeal to Probabilities

Claim: A system approaches equilibrium because it evolves from states of lower toward states of higher probability, and the equilibrium state is the state of highest probability.

• Associate probabilities with macrostates: the larger the macrostate, the greater the probability of finding a microstate in it.

"In most cases, the initial state will be a very unlikely state. From this state the system will steadily evolve towards more likely states until it has finally reached the most likely state, i.e., the state of thermal equilibrium."

Task: Make this a bit more precise (Boltzmann's combinatorial argument)...
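Before making the argument precise, Boltzmann's Claim can be illustrated in a minimal toy model: each of N labeled particles sits in the left or right half of a box, a macrostate is the number n of particles in the left half, and |Γ(n)| is the number of microstates realizing n. The model and the numbers (N = 50, k = 1) are illustrative assumptions, not from the text.

```python
from math import comb, log

N = 50  # toy gas of N labeled particles, each in the left or right half

# Macrostate: n = number of particles in the left half.
# |Γ(n)| = C(N, n) microstates realize it; S_B(n) = k log|Γ(n)| with k = 1.
sizes = {n: comb(N, n) for n in range(N + 1)}
S_B = {n: log(sizes[n]) for n in sizes}

eq = max(sizes, key=sizes.get)  # the equilibrium macrostate
print(eq)                       # -> 25: the even split maximizes |Γ(n)| and S_B

# The near-equilibrium macrostates contain the vast majority of microstates:
frac = sum(sizes[n] for n in range(20, 31)) / 2**N
print(frac > 0.8)               # -> True
```

Even at N = 50 the ten macrostates nearest the even split already hold most of the 2^50 microstates; for thermodynamic N (~10^23) the dominance of Γeq is overwhelming.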
• Start with the 6-dim phase space Ωµ of a single particle. (ΩE = N copies of Ωµ; a point in Ωµ = a single-particle microstate.)
• Partition Ωµ into ℓ cells w1, w2, ..., wℓ of size δw.
• A state of an N-particle system is then given by N points in Ωµ.

Def. 4. An arrangement is a specification of which points lie in which cells.

Def. 5. A distribution is a specification of how many points (regardless of which ones) lie in each cell. It takes the form (n1, n2, ..., nℓ), where nj = # of points in wj.

[Figure: cells w1, w2, w3 of Ωµ. Arrangement #1: state of P6 in w1, state of P89 in w3, etc. Arrangement #2: state of P89 in w1, state of P6 in w3, etc. Both correspond to the same distribution, e.g. (1, 0, 2, 0, 1, 1, ...).]

• Note: More than one arrangement can correspond to the same distribution.

• How many arrangements G(Di) are compatible with a given distribution Di = (n1, n2, ..., nℓ)?

• Answer:

    G(Di) = N!/(n1! n2! ··· nℓ!)

  This is the number of ways to arrange N distinguishable objects into ℓ bins with capacities n1, n2, ..., nℓ. (Recall: n! = n(n − 1)(n − 2)···1 = # of ways to arrange n distinguishable objects, and 0! = 1.)

Check: Let D1 = (N, 0, ..., 0) and D2 = (N − 1, 1, 0, ..., 0).
  - G(D1) = N!/N! = 1. (Only one way for all N particles to be in w1.)
  - G(D2) = N!/(N − 1)! = N(N − 1)(N − 2)···1/(N − 1)(N − 2)···1 = N. (There are N different ways w2 could have one point in it; namely, if P1 was in it, or if P2 was in it, or if P3 was in it, etc.)
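The formula for G(Di) and the two check cases can be verified directly; a small sketch (the particle numbers are illustrative, chosen so that N = 5):

```python
from math import factorial

def num_arrangements(dist):
    """G(D) = N!/(n1! n2! ... nl!): the number of arrangements
    compatible with the distribution D = (n1, ..., nl)."""
    N = sum(dist)
    G = factorial(N)
    for n in dist:
        G //= factorial(n)   # exact integer division; each factor divides N!
    return G

# The two check cases from the text, with N = 5 particles:
print(num_arrangements((5, 0, 0, 0)))      # D1 = (N, 0, ..., 0)    -> 1
print(num_arrangements((4, 1, 0, 0)))      # D2 = (N-1, 1, 0, ...)  -> 5 = N

# The distribution from the figure, (1, 0, 2, 0, 1, 1): G = 5!/2! = 60
print(num_arrangements((1, 0, 2, 0, 1, 1)))
```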
"The probability of this distribution [Di] is then given by the number of permutations of which the elements of this distribution are capable, that is by the number [G(Di)]. As the most probable distribution, i.e., as the one corresponding to thermal equilibrium, we again regard that distribution for which this expression is maximal..."

• Again: The probability of a distribution Di is given by G(Di).

• And: Each distribution Di corresponds to a macrostate ΓDi. Why? Because a system's macroscopic properties (volume, pressure, temp, etc.) only depend on how many particles are in particular microstates, and not on which particles are in which microstates.

• What is the size of this macrostate?
  - A point in ΩE corresponds to an arrangement of points in Ωµ.
  - The size of a macrostate ΓDi in ΩE is given by the number of points it contains (the number of arrangements compatible with Di) multiplied by a volume element of ΩE.
  - A volume element of ΩE is given by N copies of a volume element δw of Ωµ.

• So: The size of ΓDi is

    |ΓDi| = (# of arrangements compatible with Di) × (volume element of ΩE) = G(Di) δw^N

In other words: The probability G(Di) of a distribution Di is proportional to the size of its corresponding macrostate ΓDi.
  - The equilibrium macrostate, being the largest, is the most probable; and a system evolves from states of low probability to states of high probability.
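This proportionality can be seen concretely by enumerating every distribution of a small toy system: G(Di) is maximal for the even distribution, and the G(Di) sum to ℓ^N, so G(Di)/ℓ^N is a genuine probability over distributions. The choices N = 6 and ℓ = 3 are illustrative.

```python
from math import factorial
from itertools import product

N, l = 6, 3  # toy system: N labeled particles, l cells

def G(dist):
    """Number of arrangements compatible with distribution dist."""
    out = factorial(sum(dist))
    for n in dist:
        out //= factorial(n)
    return out

# All distributions (n1, n2, n3) with n1 + n2 + n3 = N:
dists = [d for d in product(range(N + 1), repeat=l) if sum(d) == N]

best = max(dists, key=G)
print(best)          # -> (2, 2, 2): the even distribution is most probable
print(G(best))       # -> 90 = 6!/(2! 2! 2!)

# The counts G(Di) sum to l**N (multinomial theorem), so G(Di)/l**N
# is a normalized probability over distributions:
print(sum(G(d) for d in dists) == l**N)   # -> True
```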
• The Boltzmann entropy of ΓDi is given by:

    SB(ΓDi) = k log(G(Di) δw^N)
            = k log(G(Di)) + Nk log(δw)
            = k log(G(Di)) + const.

  So SB is a measure of how large a macrostate is, and thus of how probable the corresponding distribution of microstates is.

Other formulations of SB:

• Using Stirling's approximation, log n! ≈ n log n − n, and n1 + ... + nℓ = N:

    SB(ΓDi) = k log(G(Di)) + const.
            = k log(N!/(n1! n2! ··· nℓ!)) + const.
            = k log(N!) − k log(n1!) − ... − k log(nℓ!) + const.
            ≈ (Nk log N − kN) − (n1k log n1 − kn1) − ... − (nℓk log nℓ − knℓ) + const.
            = −k ∑j nj log nj + const.

  (SB in terms of the occupation numbers nj; the linear terms cancel since ∑j nj = N, and Nk log N is absorbed into the constant.)

• Let pj = nj/N = the probability of finding a randomly chosen microstate in cell wj. Then:

    SB(ΓDi) = −Nk ∑j pj log pj + const.

  (SB in terms of the microstate probabilities pj.) The biggest value of SB is for the distribution Di for which the pj's are all 1/ℓ, i.e., the nj's are all N/ℓ (the equilibrium distribution).

Relation between Boltzmann Entropy SB and Thermodynamic Entropy ST

• First: Let's derive the Maxwell-Boltzmann equilibrium distribution.

• Assume: A system of N weakly interacting particles described by:

    ∑j nj = N,    ∑j εj nj = U

  where nj = # of states in cell wj, N = total # of particles, εj = energy of a microstate in wj, and U = total internal energy. (The weakly interacting assumption means the total internal energy is just the sum of the energies of the individual particles.)

• Recall:  SB(nj) = (Nk log N − kN) − k ∑j (nj log nj − nj) + const.

• So:

    d/dnj SB = 0 − k d/dnj (nj log nj − nj) + 0
             = −k (log nj + nj/nj − 1)
             = −k log nj

• Or:  dSB = −k ∑j log nj dnj

• Now: SB takes its maximum value for the values nj* that solve

    dSB = −k ∑j log nj* dnj = 0,

  where the small changes to SB are due only to small changes dnj, subject to the constraints ∑j dnj = 0 and ∑j εj dnj = 0 (fixed N and U).
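The derivation breaks off here, but its target can be checked numerically. In a three-cell toy model the two constraints (fixed N and U) leave one free parameter; scanning it shows that SB ~ −∑j nj log nj is maximized by geometrically decreasing occupation numbers, the discrete form of the Maxwell-Boltzmann shape nj ∝ exp(−βεj). The cell energies (0, 1, 2) and the values of N and U are illustrative assumptions, not from the text.

```python
from math import log

# Three cells with energies (0, 1, 2); N and U are illustrative choices.
N, U = 100.0, 80.0

def S(ns):
    """S_B up to the factor k and an additive constant: -sum n_j log n_j."""
    return -sum(n * log(n) for n in ns)

best, best_S = None, float("-inf")
for i in range(1, 40000):
    n3 = i * 0.001            # scan the one remaining free parameter
    n2 = U - 2.0 * n3         # keeps total energy at U = 0*n1 + 1*n2 + 2*n3
    n1 = N - n2 - n3          # keeps particle number at N
    if n1 > 0 and n2 > 0:
        s = S((n1, n2, n3))
        if s > best_S:
            best, best_S = (n1, n2, n3), s

n1, n2, n3 = best
# At the maximum, n1 : n2 : n3 is a geometric progression (the energies are
# equally spaced), i.e. n2**2 = n1*n3 -- the Maxwell-Boltzmann shape:
print(abs(n2 * n2 - n1 * n3) / (n2 * n2) < 1e-2)   # -> True
```

A finer grid or a Lagrange-multiplier solution gives the same maximizer; the scan is just the cheapest way to see that the constrained maximum of SB has the exponential form the truncated derivation is heading toward.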