Dirty tricks for statistical mechanics

Martin Grant Physics Department, McGill University

© MG, August 2004, version 0.91

Preface

These are lecture notes for PHYS 559, Advanced Statistical Mechanics, which I've taught at McGill for many years. I'm intending to tidy this up into a book, or rather the first half of a book. This half is on equilibrium; the second half would be on dynamics. These were handwritten notes which were heroically typed by Ryan Porter over the summer of 2004, and may someday be properly proof-read by me. Even better, maybe someday I will revise the reasoning in some of the sections. Some of it can be argued better, but it was too much trouble to rewrite the handwritten notes. I am also trying to come up with a good book title, amongst other things. The two titles I am thinking of are "Dirty tricks for statistical mechanics" and "Valhalla, we are coming!". Clearly, more thinking is necessary, and suggestions are welcome.

While these lecture notes have gotten longer and longer until they are almost self-sufficient, it is always nice to have real books to look over. My favorite modern text is "Lectures on Phase Transitions and the Renormalization Group", by Nigel Goldenfeld (Addison-Wesley, Reading, Mass., 1992). This is referred to several times in the notes. Other nice texts are "Statistical Physics", by L. D. Landau and E. M. Lifshitz (Addison-Wesley, Reading, Mass., 1970), particularly Chaps. 1, 12, and 14; "Statistical Mechanics", by S.-K. Ma (World Scientific, Philadelphia, 1986), particularly Chaps. 12, 13, and 28; and "Hydrodynamic Fluctuations, Broken Symmetry, and Correlation Functions", by D. Forster (W. A. Benjamin, Reading, Mass., 1975), particularly Chap. 2. These are all great books, with Landau's being the classic. Unfortunately, except for Goldenfeld's book, they're all a little out of date. Some other good books are mentioned in passing in the notes.

Finally, at the present time, while I figure out what I am doing, you can read these notes as many times as you like, make as many copies as you like, and refer to them as much as you like.
But please don't sell them to anyone, or use these notes without attribution.

Contents

1 Motivation

2 Review of Stat Mech Ideas

3 Independence of Parts
  3.1 Self Averaging
  3.2 Central Limit Theorem
  3.3 Correlations in Space

4 Toy Fractals
  4.1 Von Koch Snowflake
  4.2 Sierpinski Triangle
  4.3 Correlations in Self-Similar Objects

5 Molecular Dynamics

6 Monte Carlo and the Master Equation

7 Review of Thermodynamics

8 Statistical Mechanics

9 Fluctuations
  9.1 Thermodynamic Fluctuations

10 Fluctuations of Surfaces
  10.1 Lattice Models of Surfaces
  10.2 Continuum Model of Surface
  10.3 Impossibility of Phase Coexistence in d=1
  10.4 Numbers for d=3

11 Broken Symmetry and Correlation Functions
  11.1 Proof of Goldstone's Theorem

12 Scattering
  12.1 Scattering from a Flat Interface
  12.2 Roughness and Diffuseness
  12.3 Scattering From Many Flat Randomly–Oriented Surfaces

13 Fluctuations and Crystalline Order

14 Ising Model
  14.1 Lattice–Gas Model
  14.2 One–dimensional Ising Model
  14.3 Mean–Field Theory

15 Landau Theory and Ornstein–Zernike Theory
  15.1 Landau Theory
  15.2 Ornstein–Zernike Theory of Correlations

16 Scaling Theory
  16.1 Scaling With ξ

17 Renormalization Group
  17.1 ε–Expansion RG

Chapter 1

Motivation

Why is a coin toss random? Statistical mechanics: weighting for averages is by "equal a priori probability." So, all many-body quantum states with the same energy (for a given fixed volume) have equal weight for calculating probabilities. This is a mouthful, but at least it gives the right answer. There are two microscopic states, HEADS and TAILS, which are energetically equivalent. So the result is random. However nice it is to get the right answer, and however satisfying it is to have a prescription for doing other problems, this is somewhat unsatisfying. It is worth going a little deeper to understand how microscopic states (HEADS or TAILS) become macroscopic observations (randomly HEADS or TAILS). To focus our thinking it is useful to do an experiment. Take a coin and flip it, always preparing it as heads. If you flip it at least 10–20 cm in the air, the result is random. If you flip it less than this height, the result is not random. Say you make N attempts, and the number of heads is n_heads and the number of tails is n_tails, for different flipping heights h. Then you obtain something like this:

Figure 1.1: Bias of Flips With Height h

There are a lot of other things one could experimentally measure. In fact, my cartoon should look exponential, for reasons we will see later on in the course. Now we have something to think about: why is the coin toss biased for flips of small height? In statistical mechanics language we would say, "why is the coin toss correlated with its initial state?" Remember the coin was always prepared as heads. (Of course, if it was prepared randomly, why flip it?) There is a length scale over which those correlations persist. For our example, this scale is ∼ 2–3 cm, which we call ξ, the correlation length. The tailing off of the graph above follows

Correlations ∼ e^{−h/ξ}

which we need more background to show. But if we did a careful experiment, we could show this is true for large h. It is sensible that

ξ ≥ coin diameter

as a little thinking will clarify. This is still unsatisfying though. It leads to the argument that coin tosses are random because we flip the coin much higher than its correlation length. Also, we learn that true randomness requires ξ/h → 0, which is analogous to requiring an infinite system in the thermodynamic limit. [In fact, in passing, note that for ξ/h = O(1) the outcome of the flip (a preponderance of heads) does not reflect the microscopic states (HEADS and TAILS equivalent). To stretch a bit, this is related to broken symmetry in another context. That is, the macroscopic state has a lower symmetry than the microscopic state. This happens in phase transitions, where ξ/L = O(1) because ξ ≈ L as L → ∞, where L is the edge length of the system. It also happens in quantum mesoscopic systems, where ξ/L = O(1) because L ≈ ξ and L is somewhat small. This latter case is more like the present example of coin tossing.] But what about Newton's laws? Given an initial configuration, and a precise force, we can calculate the trajectory of the coin, if we neglect inhomogeneities in the air. Why, then, is the toss random? The reason is that your thumb cannot apply a precise force for flipping to within one correlation length of height. The analogous physical problem is that macroscopic operating conditions (a fixed pressure or energy, for example) do not determine microscopic states. Many such microscopic states are consistent with one macroscopic state. These ideas, correlations on a microscopic scale, broken symmetries, and correlation lengths, will come up throughout the rest of the course. The rest of the course follows this scenario:

• Fluctuations and correlations: what can we do without statistical mechanics?

• What can we do with thermodynamics and statistical mechanics?

• Detailed study of interfaces.

• Detailed study of phase transitions.

The topics we will cover are as current as any issue of Physical Review E. Just take a look. We'll begin the proper course with a review of, and a reflection on, some basic ideas of statistical mechanics.

Chapter 2

Review of Stat Mech Ideas

In statistical physics, we make a few assumptions, and obtain a distribution to do averages. A famous assumption is that of "equal a priori probability." By this it is meant that all microscopic states, consistent with a particular fixed total energy E and number of particles N, are equally likely. This gives the microcanonical ensemble. It can then be checked that thermodynamic quantities have exceedingly small fluctuations, and that one can derive, from the microcanonical ensemble, the canonical ensemble, where temperature T and volume V are fixed. It is this latter ensemble which is most commonly used. Microscopic states consistent with T and V are not equally likely; they are weighted with the probability factor

e^{−E_state/k_B T}   (2.1)

where E_state is the energy of the state, and k_B is Boltzmann's constant. The probability of being in a state is

ρ = (1/Z) e^{−E_state/k_B T}   (2.2)

where the partition function is

Z = Σ_{states} e^{−E_state/k_B T}   (2.3)

The connection to thermodynamics is from the Helmholtz free energy

F = E − TS   (2.4)

where S is the entropy, and

e^{−F/k_B T} = Z   (2.5)

Note that since F is extensive, that is F ∼ O(N), we have

Z = (e^{−f/k_B T})^N

where F = fN. This exponential factor of N arises from the sum over states, which is typically

Σ_{states} 1 = e^{O(N)}   (2.6)

For example, for a magnetic dipole, which can have each constituent up or down, there are evidently

(dipole #1) × (dipole #2) × … × (dipole #N) = 2 × 2 × … × 2 = 2^N = e^{N ln 2}   (2.7)

total states. The total number of states for N particles in a volume V is

V^N/N! ≈ (V/N)^N e^N = e^{N ln(V/N) + N}   (2.8)

where the factor N! accounts for indistinguishability. These numbers are huge in a way which cannot be imagined. Our normal scales sit somewhere between microscopic and astronomical. Microscopic sizes are O(Å), with times O(10^{−12} s). Astronomical sizes, like the size and age of the universe, are O(10^{30} m) and O(10^{18} s). The ratio of these lengths is

10^{30} m / 10^{−10} m ∼ 10^{40}   (2.9)

This is in no way comparable to (!)

10^{10^{23}}   (2.10)

In fact, it seems a little odd (and unfair) that a theorist must average over e^N states to explain what an experimentalist does in one experiment.

Chapter 3

Independence of Parts

It is worth thinking about experiments a bit to make this clear. Say an experimentalist measures the speed of sound in a sample, as shown.

Figure 3.1: Experiment 1

The experimentalist obtains some definite value of c, the speed of sound. Afterwards, the experimentalist might cut up the sample into many parts so that his or her friends can do the same experiment. That is as shown in Fig. 3.2.

Figure 3.2: Experiment 2

Each of these new groups also measures the sound speed as shown. It would seem that the statistics improve because 4 experiments are now done?! In fact one can continue breaking up the system like this, as shown in Fig. 3.3.

Figure 3.3: Experiment 3

Of course, 1 experiment on the big system must be equivalent to many experiments on individual parts of a big system, as long as those individual parts are independent. By this it is meant that a system "self-averages" all its independent parts. The number of independent parts is (to use the same notation as the number of particles)

N = O(L³/ξ³)   (3.1)

for a system of edge length L. The new quantity introduced here is the correlation length ξ, the scale over which things are correlated, so they are not independent. The correlation length must be at least as large as an atom, so

ξ ≥ O(Å)   (3.2)

and indeed usually ξ ≈ O(Å). Hence for a macroscopic length, the number of independent points is

N ∼ O(10²³)   (3.3)

of course. Now, each independent part has a few degrees of freedom, such as quantum numbers. Hence, the total number of states in a typical system is

(∫dq₁)(∫dq₂) … (∫dq_N)

where ∫dq_i runs over the quantum states of part i. Say ∫dq_i = 4 for an example, so that the total number of states is

4^N = e^{O(N)}   (3.4)

as it should be. Now, however, we have a feeling for the origin of some of the assumptions entering statistical physics. We can go a little farther with this, and prove the central–limit theorem of probability theory.
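This counting is easy to make concrete. A minimal numerical sketch (the value of 4 states per part is just the example above; any fixed q gives the same structure), showing that the logarithm of the total number of states grows linearly with N, which is all that e^{O(N)} means:

```python
import math

def log_num_states(num_parts, states_per_part=4):
    """ln(total states) for independent parts: ln(q**N) = N ln q = O(N)."""
    return num_parts * math.log(states_per_part)

# Doubling the number of parts doubles the exponent: e^{O(N)} counting.
print(log_num_states(10), log_num_states(20))
```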

3.1 Self Averaging

Consider an extensive quantity X. If a system consists of N independent parts as shown, then

Figure 3.4: N independent parts, i = 1, 2, …, N

X = Σ_{i=1}^{N} X_i   (3.5)

where X_i is the value of X in independent part i. The average value of X is

⟨X⟩ = Σ_{i=1}^{N} ⟨X_i⟩   (3.6)

But since each ⟨X_i⟩ = O(1), independent of N, we have

⟨X⟩ = O(N)   (3.7)

It is handy to introduce the intensive quantity

x = X/N   (3.8)

then

⟨x⟩ = O(1)

This is not very amazing yet. Before continuing, it is worth giving an example of what we mean by the average. For example, it might be a microscopically long, but macroscopically short, time average

⟨X⟩ = (1/τ) ∫_τ dt X(t)

as drawn. As also indicated, the quantity ⟨(X − ⟨X⟩)²⟩^{1/2} is a natural one to

Figure 3.5: Time Average

consider. Let the deviation of X from its average value be

ΔX = X − ⟨X⟩   (3.9)

Then clearly

⟨ΔX⟩ = ⟨X⟩ − ⟨X⟩ = 0

as is clear from the cartoon. The handier quantity is

⟨(ΔX)²⟩ = ⟨(X − ⟨X⟩)²⟩
        = ⟨X² − 2X⟨X⟩ + ⟨X⟩²⟩   (3.10)
        = ⟨X²⟩ − ⟨X⟩²

Giving the condition, in passing,

⟨X²⟩ ≥ ⟨X⟩²   (3.11)

since ⟨(ΔX)²⟩ ≥ 0. Splitting the system into independent parts, again, gives

⟨(ΔX)²⟩ = Σ_{i=1}^{N} Σ_{j=1}^{N} ⟨ΔX_i ΔX_j⟩

Now using that i may or may not equal j,

⟨(ΔX)²⟩ = Σ_{i=j} ⟨(ΔX_i)²⟩ + Σ_{i≠j} ⟨(ΔX_i)(ΔX_j)⟩   (3.12)

But box i is independent of box j, so

⟨ΔX_i ΔX_j⟩ = ⟨ΔX_i⟩⟨ΔX_j⟩ = 0 × 0 = 0   (3.13)

and

⟨(ΔX)²⟩ = Σ_{i=1}^{N} ⟨(ΔX_i)²⟩

or

⟨(ΔX)²⟩ = O(N)   (3.14)

Also, the intensive x = X/N satisfies

⟨(Δx)²⟩ = O(1/N)   (3.15)

This implies that fluctuations in quantities are very small in systems of many (N → ∞) independent parts. That is, because

X ≈ ⟨X⟩ {1 + O(⟨(ΔX)²⟩^{1/2}/⟨X⟩)}
  ≈ ⟨X⟩ {1 + O(1/N^{1/2})}   (3.16)

So, in essence, X ≈ ⟨X⟩. The result

⟨(ΔX)²⟩/⟨X⟩² = O(1/N)   (3.17)

that relative fluctuations are very small, if N is large, is called the self-averaging lemma. Note that, since N = V/ξ³,

⟨(ΔX)²⟩/⟨X⟩² = O(ξ³/V)   (3.18)

so fluctuations measure microscopic correlations, and become appreciable for ξ³ ∼ O(V). This occurs in quantum mesoscopic systems (where V is small), and phase transitions (where ξ is large), for example.
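The 1/N scaling of ⟨(Δx)²⟩ is easy to check numerically. A minimal sketch (not from the notes; the "parts" here are hypothetical uniform random numbers on [0, 1), each with variance 1/12):

```python
import random

def var_intensive(num_parts, samples=20000, seed=1):
    """Estimate <(dx)^2> for x = X/N, with X a sum of N independent,
    uniform O(1) parts; the self-averaging lemma predicts 1/(12 N)."""
    rng = random.Random(seed)
    xs = []
    for _ in range(samples):
        big_x = sum(rng.random() for _ in range(num_parts))
        xs.append(big_x / num_parts)
    mean = sum(xs) / samples
    return sum((x - mean) ** 2 for x in xs) / samples

# Quadrupling N cuts the variance of the intensive x by about 4.
print(var_intensive(10), var_intensive(40))
```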

3.2 Central Limit Theorem

We will now prove (to a physicist's rigor) the central–limit theorem for systems of many independent parts. In the same manner as above, we will calculate all the moments, such as the nth moment

⟨(ΔX)ⁿ⟩   (3.19)

or at least their order with N. For simplicity we will assume that all the odd moments vanish,

⟨(ΔX)^{2n+1}⟩ = 0

for integer n. This means the distribution ρ is symmetric about the mean ⟨X⟩, as shown.

Figure 3.6: Symmetric distribution functions ρ(x)

It turns out that one can show that the distribution function is symmetric as (N → ∞), but let's just assume it so the ideas in the algebra are clear. It can be shown (see below) that we can then reconstruct the distribution ρ(X) if we know all the even moments

⟨(ΔX)^{2n}⟩

for positive integer n. We know n = 1, so let's do n = 2, as we did ⟨(ΔX)²⟩ above.

⟨(ΔX)⁴⟩ = Σ_{i=1}^{N} Σ_{j=1}^{N} Σ_{k=1}^{N} Σ_{l=1}^{N} ⟨ΔX_i ΔX_j ΔX_k ΔX_l⟩
        = Σ_{i=j=k=l} ⟨(ΔX_i)⁴⟩ + 3 Σ_{i≠j} ⟨(ΔX_i)²(ΔX_j)²⟩ + terms with a lone ⟨ΔX_i⟩ = 0   (3.20)
        = O(N) + O(N²)

So as (N → ∞),

⟨(ΔX)⁴⟩ = 3 Σ_{i=1}^{N} Σ_{j=1}^{N} ⟨(ΔX_i)²⟩⟨(ΔX_j)²⟩

or

⟨(ΔX)⁴⟩ = 3⟨(ΔX)²⟩² = O(N²)   (3.21)

Note that the largest term (as N → ∞) came from pairing up all the ΔX_i's appropriately. A little thought makes it clear that the largest term in ⟨(ΔX)^{2n}⟩ will result from this sort of pairing, giving

⟨(ΔX)^{2n}⟩ = const. × ⟨(ΔX)²⟩ⁿ = O(Nⁿ)   (3.22)

We will now determine the const. in Eq. 3.22. Note that

⟨(ΔX)^{2n}⟩ = ⟨ΔX ΔX ΔX … ΔX ΔX⟩

There are (2n − 1) ways to pair off the first ΔX. After those two ΔX's are gone, there are then (2n − 3) ways to pair off the next free ΔX. And so on, until finally, we get to the last two ΔX's, which have 1 way to pair off. Hence

⟨(ΔX)^{2n}⟩ = (2n − 1)(2n − 3)(2n − 5)…(1) ⟨(ΔX)²⟩ⁿ ≡ (2n − 1)!! ⟨(ΔX)²⟩ⁿ

or more conventionally,

⟨(ΔX)^{2n}⟩ = [(2n)!/(2ⁿ n!)] ⟨(ΔX)²⟩ⁿ   (3.23)

Now we know all the even moments in terms of the first two moments ⟨X⟩ and ⟨(ΔX)²⟩. We can reconstruct the distribution with a trick. First, let us be explicit about ρ. By an average we shall mean

⟨Q(X)⟩_X = ∫_{−∞}^{∞} dX ρ(X) Q(X)   (3.24)

where Q(X) is some quantity. Note that

⟨δ(X − X′)⟩_{X′} = ρ(X)   (3.25)

where δ(X) is the Dirac delta function! Recall the Fourier representation

δ(X − X′) = (1/2π) ∫_{−∞}^{∞} dk e^{−ik(X−X′)}   (3.26)

Averaging over X′ gives

ρ(X) = ⟨δ(X − X′)⟩_{X′} = (1/2π) ∫_{−∞}^{∞} dk e^{−ikX} ⟨e^{ikX′}⟩_{X′}   (3.27)

In statistics, ⟨e^{ikX}⟩ is called the characteristic function. But we can now expand the exponential (for convenience, let ⟨X⟩ = 0 for now; one only has to add and subtract it in e^{−ik(X−X′)} = e^{−ik(ΔX−ΔX′)}, but it makes the algebra messier) as

⟨e^{ikX′}⟩ = Σ_{n=0}^{∞} [(ik)ⁿ/n!] ⟨(X′)ⁿ⟩

So that we have,

ρ(X) = (1/2π) ∫_{−∞}^{∞} dk e^{−ikX} Σ_{n=0}^{∞} [(ik)ⁿ/n!] ⟨Xⁿ⟩   (3.28)

where the prime superscript has been dropped. Note that Eq. 3.28 means that ρ(X) can be reconstructed from all the ⟨Xⁿ⟩, in general. For our case, all the odd moments vanish, so

ρ(X) = (1/2π) ∫_{−∞}^{∞} dk e^{−ikX} Σ_{n=0}^{∞} [(−k²)ⁿ/(2n)!] ⟨X^{2n}⟩

But ⟨X^{2n}⟩ = [(2n)!/(2ⁿn!)] ⟨X²⟩ⁿ from above, so

ρ(X) = (1/2π) ∫_{−∞}^{∞} dk e^{−ikX} Σ_{n=0}^{∞} (1/n!) (−k²⟨X²⟩/2)ⁿ

which is the exponential again, so

ρ(X) = (1/2π) ∫_{−∞}^{∞} dk e^{−ikX} e^{−k²⟨X²⟩/2}   (3.29)

(In passing, it is worth remembering that ⟨e^{ikX}⟩ = e^{−k²⟨X²⟩/2} for a Gaussian distribution.) The integral can be done by completing the square, adding and subtracting −X²/(2⟨X²⟩) in the exponent, so that

ρ(X) = (1/2π) e^{−X²/(2⟨X²⟩)} ∫_{−∞}^{∞} dk e^{−(k⟨X²⟩^{1/2}/2^{1/2} + iX/(2^{1/2}⟨X²⟩^{1/2}))²}
     = (1/2π) e^{−X²/(2⟨X²⟩)} (2/⟨X²⟩)^{1/2} ∫_{−∞}^{∞} du e^{−u²}

using an obvious substitution. Hence, on putting ⟨X⟩ as non-zero again,

ρ(X) = [1/√(2π⟨(ΔX)²⟩)] e^{−(X−⟨X⟩)²/(2⟨(ΔX)²⟩)}   (3.30)

which is the Gaussian distribution. This is the central–limit theorem for a system of many independent parts. Its consequences are most transparent if one deals with the intensive quantity x = X/N; then recall

⟨x⟩ = O(1)
⟨(Δx)²⟩ = O(1/N)   (3.31)

Letting

a² ≡ N⟨(Δx)²⟩   (3.32)

where a² = O(1), we have, up to normalization,

ρ(x) ∝ e^{−N(x−⟨x⟩)²/(2a²)}   (3.33)

so the width of the Gaussian is exceedingly small, and it almost represents a spike.
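The pairing count behind Eqs. 3.21–3.23 can be tested numerically: for X a sum of many independent parts, ⟨(ΔX)⁴⟩/⟨(ΔX)²⟩² should approach 3, the Gaussian value. A sketch, with the parts taken as hypothetical fair ±1 spins (a single spin gives ratio 1, far from Gaussian):

```python
import random

def moment_ratio(num_parts, samples=20000, seed=2):
    """<(dX)^4>/<(dX)^2>^2 for X = sum of fair +-1 spins (<X> = 0, so
    dX = X); the central limit theorem predicts 3 as num_parts grows."""
    rng = random.Random(seed)
    dxs = [sum(rng.choice((-1, 1)) for _ in range(num_parts))
           for _ in range(samples)]
    m2 = sum(d ** 2 for d in dxs) / samples
    m4 = sum(d ** 4 for d in dxs) / samples
    return m4 / m2 ** 2

print(moment_ratio(1), moment_ratio(100))  # 1.0, then close to 3
```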

Figure 3.7: Gaussian (Width Exaggerated): ρ(x) of width O(1/N^{1/2}) around ⟨x⟩ = O(1)

Figure 3.8: Gaussian (Width (sort of) to scale)

Which is to say again that fluctuations are small, and average values are well defined for a system of many (N → ∞) independent parts.

3.3 Correlations in Space

The results so far apply to a large system, and perhaps it is not surprising that the global fluctuations in a large system are small. These results for the global fluctuations have implications for the correlations of quantities, though. Say the intensive variable x has different values at points r⃗ in space. (Precisely, let dr⃗, the volume element, be microscopically large, but macroscopically small.) Consider the correlation function

⟨Δx(r⃗) Δx(r⃗′)⟩ ≡ C(r⃗, r⃗′)   (3.34)

We can say a few things about C(r⃗, r⃗′) from general considerations.

1. If space is translationally invariant, then the origin of the coordinate system for r⃗ and r⃗′ is arbitrary. It can be placed at point r⃗ or r⃗′. The physical length of interest is (r⃗ − r⃗′) or (r⃗′ − r⃗). Hence

C(r⃗, r⃗′) = C(r⃗ − r⃗′) = C(r⃗′ − r⃗)   (3.35)

Figure 3.9: Translational invariance (only r⃗ − r⃗′ is important)

and, of course,

⟨Δx(r⃗) Δx(r⃗′)⟩ = ⟨Δx(r⃗ − r⃗′) Δx(0)⟩   (3.36)

etc.

2. If space is isotropic, then the direction of Δx(r⃗) from Δx(r⃗′) is unimportant, and only the magnitude of the distance |r⃗ − r⃗′| is physically relevant. Hence

C(r⃗, r⃗′) = C(|r⃗ − r⃗′|)   (3.37)

Figure 3.10: Isotropy (only |r⃗ − r⃗′| is important)

So what was a function of 6 numbers (r⃗, r⃗′) is now a function of only 1, |r⃗ − r⃗′|.

3. Usually correlations fall off if points are arbitrarily far apart. That is,

⟨Δx(r → ∞) Δx(0)⟩ = ⟨Δx(r → ∞)⟩⟨Δx(0)⟩ = 0

or

C(r → ∞) = 0   (3.38)

The scale of the correlation is set by the correlation length. Now we know

C(|r⃗ − r⃗′|) = ⟨Δx(r⃗) Δx(r⃗′)⟩   (3.39)

subject to these conditions. A constraint on C(|r⃗ − r⃗′|) is provided by our previous result

⟨(Δx)²⟩ = O(1/N)

Figure 3.11: Typical correlation function C(r) vs. r

for a global fluctuation. Clearly,

Δx = ∫dr⃗ Δx(r⃗) / ∫dr⃗ = (1/V) ∫dr⃗ Δx(r⃗)   (3.40)

So we have

(1/V²) ∫dr⃗ ∫dr⃗′ ⟨Δx(r⃗) Δx(r⃗′)⟩ = ⟨(Δx)²⟩ = O(1/N)   (3.41)

or

∫dr⃗ ∫dr⃗′ C(r⃗ − r⃗′) = O(V²/N)

Let R⃗ = r⃗ − r⃗′ and R⃗′ = (r⃗ + r⃗′)/2. The Jacobian of the transformation is unity, and we obtain

∫dR⃗′ ∫dR⃗ C(R⃗) = O(V²/N) = V ∫dR⃗ C(R⃗)   (3.42)

So, letting R⃗ → r⃗,

∫dr⃗ C(r⃗) = O(V/N)

But N = V/ξ³, or in d dimensions, N = L^d/ξ^d, so

∫dr⃗ C(r⃗) = O(ξ³)

or

∫dr⃗ ⟨Δx(r⃗) Δx(0)⟩ = O(ξ³)   (3.43)

for a system of N → ∞ independent parts. This provides a constraint on correlations. It is a neat result in Fourier space. Let

Ĉ(k⃗) ≡ ∫dr⃗ e^{ik⃗·r⃗} C(r⃗)

and

C(r⃗) = ∫ [dk⃗/(2π)^d] e^{−ik⃗·r⃗} Ĉ(k⃗)   (3.44)

and

⟨|Δx̂(k⃗)|²⟩ ≡ ∫dr⃗ e^{ik⃗·r⃗} ⟨Δx(r⃗) Δx(0)⟩

Hence the constraint of Eq. 3.43 is

lim_{k→0} Ĉ(k⃗) = O(ξ³)

or

lim_{k→0} ⟨|Δx̂(k⃗)|²⟩ = O(ξ³)   (3.45)
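As a check on this sum rule, take the exponential correlation function quoted below (Eq. 3.46): in d = 3, ∫d³r e^{−r/ξ} = 4π ∫₀^∞ dr r² e^{−r/ξ} = 8πξ³, which is indeed O(ξ³). A quick numerical sketch (standard library only):

```python
import math

def corr_integral(xi, rmax_factor=40.0, steps=100000):
    """Trapezoidal estimate of the 3d integral of exp(-r/xi), i.e.
    4*pi*r**2*exp(-r/xi) dr from 0 to ~infinity; exact value 8*pi*xi**3."""
    rmax = rmax_factor * xi
    dr = rmax / steps
    total = 0.0
    for i in range(steps + 1):
        r = i * dr
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight * 4.0 * math.pi * r * r * math.exp(-r / xi)
    return total * dr

# Doubling xi multiplies the integral by 8: it is O(xi**3).
print(corr_integral(1.0), 8 * math.pi)
```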

Many functions satisfy the constraint of Eq. 3.43 (see Fig. 3.11). It turns out that usually

⟨Δx(r⃗) Δx(0)⟩ ∼ e^{−r/ξ}   (3.46)

for large r, which satisfies Eq. 3.43. This is obtained from analyticity of (Ĉ(k))^{−1} near k = 0. It can be argued,

1/Ĉ(k) ≈ (const.) + (const.) k² + …

so that

⟨|Δx̂(k)|²⟩ = const./(const. + k²)

near k = 0. Fourier transformation gives the result of Eq. 3.46 above. (Note: In the previous notation with boxes, as in Fig. 3.4, we had

ΔX = Σ_i ΔX_i

or

Δx = (1/N) Σ_i ΔX_i

This means

(1/V) ∫dr⃗ Δx(r⃗) = (1/N) Σ_i ΔX_i   (3.47)

This is satisfied by

Δx(r⃗) = (V/N) Σ_{i=1}^{N} δ(r⃗ − r⃗_i) ΔX_i

where r⃗_i is a vector pointing to box i, and ΔX_i = ΔX(r⃗_i).) Let me emphasize again that all these results follow from a system having N → ∞ independent parts. To recapitulate, for x = X/N, where X is extensive, N = V/ξ³ is the number of independent parts, V is the volume, and ξ is the correlation length:

⟨x⟩ = O(1)   (3.7)

⟨(Δx)²⟩ = O(1/N)   (3.15)

ρ(x) ∝ e^{−(x−⟨x⟩)²/(2⟨(Δx)²⟩)}   (3.30)

∫dr⃗ ⟨Δx(r⃗) Δx(0)⟩ = O(ξ³)   (3.43)

The purpose of this course is to examine the cases where V/ξ³ = O(1) and these results break down! It is worthwhile to give some toy examples now where, in particular, Eq. 3.43 breaks down.

Chapter 4

Toy Fractals

Toy fractals are mathematical constructs that seem to exist in fractional dimensions between 1 and 2, or 2 and 3, or what have you. They can provide a useful starting point for thinking about real systems where V/ξ³ = O(1), or, if V = L³, ξ/L = O(1). We'll briefly discuss self-affine, or surface, fractals, and volume fractals.

Figure 4.1: Self–Affine Fractal

This is a self-affine fractal, where the surface is very jagged, or, as is said in physics, rough. The self-affine fractal has the property that its perimeter length P (or surface area in three dimensions) satisfies

P ∼ L^{d_s}, as L → ∞   (4.1)

where d_s is the surface self-affine fractal exponent. For a circle of diameter L,

P_circle = πL

while for a square of width L,

P_square = 4L


In fact, for run-of-the-mill surfaces in d dimensions

P_nonfractal ∼ L^{d−1}   (4.2)

Usually

d_s > d − 1   (4.3)

I'll give an example below.

Figure 4.2: Volume Fractal

This is a volume fractal; note all the holes with missing stuff. The fractal has the property that its mass M satisfies

M ∼ L^{d_f} as L → ∞   (4.4)

where d_f is the fractal dimension. For a circle of diameter L, M = (π/4)L², in units of the density, while for a square M = L². For run-of-the-mill objects

M_nonfractal ∼ L^d   (4.5)

in d dimensions. Usually

d_f < d   (4.6)

Note that since the mass density is M/L^d, this implies that

DENSITY OF A FRACTAL ∼ 1/L^{d−d_f} → 0   (4.7)

as L → ∞, implying fractals float (!?). Think about this. Two toy examples show that d_s > d − 1 and d_f < d are possible.

4.1 Von Koch Snowﬂake

To draw the von Koch snowflake, follow the steps shown.

Figure 4.3: Von Koch Snowflake

To find the self-affine fractal exponent for the perimeter, imagine I had (somehow) drawn one of these fractals out to a very, very large number of steps. Say the smallest straight link is length l. We'll calculate the perimeter as we go through increasing numbers of links in a large complex fractal. Hence

Figure 4.4:

P_n = 4ⁿ l

when

L_n = 3ⁿ l   (4.8)

Taking the logarithm,

ln(P/l) = n ln 4

and

ln(L/l) = n ln 3

Dividing these into each other,

ln(P/l)/ln(L/l) = ln 4/ln 3

or

P/l = (L/l)^{d_s}   (4.9)

where

d_s = ln 4/ln 3 ≈ 1.26 > d − 1 = 1
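The exponent can be read off numerically from the recursion above: at step n the perimeter is 4ⁿl and the span is 3ⁿl, so the ratio of logarithms is the same at every step. A minimal sketch:

```python
import math

def koch_exponent(n, l=1.0):
    """Self-affine exponent d_s from the step-n von Koch perimeter
    P = 4**n * l and overall size L = 3**n * l."""
    perimeter = 4.0 ** n * l
    size = 3.0 ** n * l
    return math.log(perimeter / l) / math.log(size / l)

print(koch_exponent(5))  # ln 4 / ln 3 = 1.2618...
```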

It is common in the physics literature to write

P(L) = w(L) L^{d−1}   (4.10)

where w(L) is the width or thickness of a rough interface, and

w(L) ∼ L^x, as L → ∞   (4.11)

where x is the roughening exponent, and clearly (x ≈ 0.26 for the snowflake),

d_s = x + d − 1   (4.12)

4.2 Sierpinski Triangle

To draw the Sierpinski triangle, follow the steps shown.

Figure 4.5: Sierpinski Triangle

To find the fractal exponent for the mass, imagine I had (somehow) drawn one of these fractals out to a very large number of steps. Say the smallest filled triangle had mass m and volume equal to v. Then we'll calculate the mass as we go through increasing numbers of triangles. Hence

Figure 4.6:

M_n = 3ⁿ m

when

V_n = 4ⁿ v   (4.13)

or, since V_n = (1/2) L_n² and v = (1/2) l², we have

L_n² = 4ⁿ l², or L_n = 2ⁿ l

Taking logarithms,

ln(M/m) = n ln 3

and

ln(L/l) = n ln 2

or, taking the ratio,

ln(M/m)/ln(L/l) = ln 3/ln 2

and

M/m = (L/l)^{d_f}   (4.14)

where

d_f = ln 3/ln 2 ≈ 1.585   (4.15)

So d_f < d, where d = 2 here.
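The same numerical game works here, and also shows the vanishing density of Eq. 4.7: at step n the mass is 3ⁿ (in units of the smallest triangle) while the covered area grows as (2ⁿ)², so the density falls as (3/4)ⁿ. A sketch:

```python
import math

def sierpinski(n):
    """Step-n Sierpinski triangle: mass M = 3**n small triangles, edge
    L = 2**n.  Returns (fractal dimension estimate, density M / L**2)."""
    mass = 3.0 ** n
    size = 2.0 ** n
    return math.log(mass) / math.log(size), mass / size ** 2

for n in (4, 8, 12):
    print(sierpinski(n))  # d_f stays at ln 3 / ln 2 = 1.5849...; density -> 0
```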

This is not the common notation in the physics literature. Below, but near, a second-order phase transition, the excess density satisfies

n ∝ |T − T_c|^β

where T_c is the critical temperature. However, the correlation length satisfies

ξ ∝ |T − T_c|^{−ν}

Or, since ξ's largest value is limited by the edge length L of the entire system,

ξ ∼ L ∼ |T − T_c|^{−ν}

Using this above gives

n ∝ [L^{−1/ν}]^β

or, for the excess density near T_c,

n ∝ 1/L^{β/ν}

where β and ν are critical exponents. But, for a fractal,

n ∝ 1/L^{d−d_f}

so

d_f = d − β/ν   (4.16)

where β/ν ≈ 0.415 for the Sierpinski snowflake. For a real system (in the Ising universality class in two dimensions),

x = 1/2, so d_s = 3/2, and β/ν = 1/8, so d_f = 15/8

Figure 4.7:

4.3 Correlations in Self-Similar Objects

Figure 4.8:

A striking thing about a fractal is that it "looks the same" on different length scales. This is to say that your eye sees similar (or, for these toy fractals, identical) correlations if you look on length scales r or r/b, where b > 1. So far as density correlations go, let

C(r) = ⟨Δn(r) Δn(0)⟩

(where the average is, say, over all orientations in space for convenience). Then the evident self-similarity of fractals implies

C(r) ∝ C(r/b)

or

C(r) = f(b) C(r/b)   (4.17)

It is easy to show that

f(b) = b^{−p}   (4.18)

where p is a constant. Take the derivative with respect to b of Eq. 4.17, letting

r* ≡ r/b   (4.19)

Then

dC(r)/db = 0 = (df/db) C(r*) + f (dC/dr*)(dr*/db)

or, since dr*/db = −r*/b,

(df/db) C(r*) = f (r*/b)(dC/dr*)

and

d ln f(b)/d ln b = d ln C(r*)/d ln r*   (4.20)

because one side only depends on b, and the other only on r*. Hence

d ln f(b)/d ln b = const. ≡ −p   (4.21)

C(r) = b−pC(r/b) (4.23)

(up to a multiplicative constant). If one lets b = 1 + ǫ, ǫ << 1 (4.24)

r C(r) = (1 + ǫ)−p C( ) 1 + ǫ ∂C (1 pǫ) C(r) ǫr + ... ≈ − { − ∂r } Terms of order ǫ give ∂C p C(r) = r − ∂r The solution of which is 1 C p (4.25) ∝ r Hence, self similarity implies homogeneous correlation functions

C(r) = b−p C(r/b) and power–law correlation 1 C(r) (4.26) ∝ rp Usually, these apply for large r. The sign of p chosen to make C(r ) 0. Note that this is quite diﬀerent from what occurs for a system→ of ∞ many→ 4.3. CORRELATIONS IN SELF-SIMILAR OBJECTS 29

independent parts, where (we said)

C(r) ∼ e^{−r/ξ}

Figure 4.9: Decay of Correlations in a Self–Similar Object (C(r) ∼ 1/r^p)

Figure 4.10: Decay of Correlations in a System of Many Independent Parts (C(r) ∼ e^{−r/ξ})

The restriction

∫dr⃗ C(r) = O(ξ^d)

in d dimensions is evidently unsatisfied for

C(r) ∼ 1/r^p

if d > p, for then the integral diverges! Usually,

p = 2 − η   (4.27)

and η ≥ 0, so d > p in d = 2 and 3. So, fractals, self-affine (rough) surfaces, and second-order phase transition fluctuations do not satisfy the central limit theorem. In fact, rough surfaces and phase transitions will be the focus for the rest of the course.

Chapter 5

Molecular Dynamics

At this point, one might say: computers are so big and fierce, why not simply solve the microscopic problem completely! This is the approach of molecular dynamics. It is a very important numerical technique, which is very, very conservative. It is used by physicists, chemists, and biologists. Imagine a classical solid, liquid, or gas composed of many atoms or molecules. Each atom has a position r⃗_i and a velocity v⃗_i (or momentum m v⃗_i).

Figure 5.1: Particles i and j, with velocities v⃗_i and v⃗_j, separated by |r⃗_i − r⃗_j|

The Hamiltonian is (where m is the atom's mass)

H = Σ_{i=1}^{N} m v⃗_i²/2 + V(r⃗_1, … r⃗_N)

Usually, the total potential can be well approximated as the sum of pairwise forces, i.e.,

V(r⃗_1, … r⃗_N) = Σ_{i>j} V(|r⃗_i − r⃗_j|)

which only depends on the distance between atoms. This potential usually has

a form like that drawn, where the depth of the well must be

v

ǫ r r0

Figure 5.2:

ǫ (100′s K)k ∼ B and

r (a few A˙ ′s) o ∼ This is because, in a vapor phase, the density is small and the well is unimpor- tant; when the well becomes important, the atoms condense into a liquid phase. Hence ǫ boiling point of a liquid, and ro is the separation of atoms in a liquid. Of course,∼ say the mass of an atom is

m 10 10−27kg ∼ × r 2A˙ o ∼ then the liquid density is

m 10 10−27kg ρ = × 3 3 ∼ ro 10A˙ 10−27 (103gm) = × (10−8cm)3 ρ 1gm/cm3 ∼ of course. To convince oneself that quantum eﬀects are unimportant, simlpy estimate the de Broglie wavelength ¯h λ ∼ √mkBT 1A/˙ √T ∼ where T is measured in Kelvin. So this eﬀect is unimportant for most liq- uids. The counter example is Helium. In any case, quantum eﬀects are usually 33 marginal, and lead to no quantitative diﬀerences of importance. This is sim- ply because the potentials are usually parameterized ﬁts such as the Lennard - Jones potential σ σ V (r) = 4ǫ[( )12 ( )6] r − r where

r = 21/6σ 1.1σ o ≈ and

V (r ) = ǫ o − by construction. For Argon, a good “ﬁt” is

ro = 3.4A˙ ǫ = 120K

Nummerically it is convenient that V (2ro) << ǫ, and is practically zero. Hence one usually considers

Lennard – Jones , r < 2r V (r) = o (0 , r > 2ro

1/6 where ro = 2 σ. Lennard-Jones has a run–of–the–mill phase diagram which is representative of Argon or Neon, as well as most simple run–of–the–mill pure substances. Simpler potentials like the square well also have the same properties.

P T s l and l v s

v Coexistance

T ρ

Figure 5.3: Leonnard Jousium (The solid phase is fcc as I recall) 34 CHAPTER 5. MOLECULAR DYNAMICS

But the hard sphere potential has a more simple–minded phase diagram. This

V , r<σ ∞ V = ǫ ,σ

Figure 5.4:

V , r<σ V = ∞ (0 , r>σ

r where,

P T s l and s l coex.

T ρ

Figure 5.5: is because of the lack of an attractive interaction. Finally, lets get to the molecular dynamics method. Newton’s equations (or Hamilton’s) are, d~r i = ~v dt i

and

\frac{d\vec v_i}{dt} = \vec a_i = \frac{\vec F_i}{m} = -\frac{1}{m} \frac{\partial}{\partial \vec r_i} \sum_{\substack{j=1 \\ (j \ne i)}}^{N} V(|\vec r_i - \vec r_j|)

Clearly, we can simply solve these numerically given all the positions and velocities initially, i.e. \{\vec r_i(t=0), \vec v_i(t=0)\}. A very simple (in fact too simple) way to do this is by

\frac{d\vec r_i}{dt} \approx \frac{\vec r_i(t + \Delta t) - \vec r_i(t)}{\Delta t}

for small ∆t. This gives

\vec r_i(t + \Delta t) = \vec r_i(t) + \Delta t\, \vec v_i(t)

and

\vec v_i(t + \Delta t) = \vec v_i(t) + \frac{\Delta t}{m} \vec F_i(t)

where \vec F_i(t) is determined by \{\vec r_i(t)\} only. A better (but still simple) method is due to Verlet. It should be remembered that the discretization method has a little bit of black magic in it. Verlet notes that,

\vec r_i(t \pm \Delta t) = \vec r_i(t) \pm \Delta t\, \vec v_i(t) + \frac{(\Delta t)^2}{2} \vec a_i(t) + \ldots

where \vec a_i = \vec F_i/m = d^2\vec r_i/dt^2. Adding the two expansions (the odd-order terms cancel) gives

\vec r_i(t + \Delta t) = 2\vec r_i(t) - \vec r_i(t - \Delta t) + \frac{(\Delta t)^2}{m} \vec F_i(t) + \mathcal{O}((\Delta t)^4)

and use the simple rule

\vec v_i(t) = \frac{\vec r_i(t + \Delta t) - \vec r_i(t - \Delta t)}{2\Delta t}
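The scheme can be sketched in a few lines (a toy illustration, not from the notes; the test particle sits in a harmonic well, where the exact answer is known):

```python
import numpy as np

def verlet(x0, v0, accel, dt, steps):
    """Position Verlet: x(t+dt) = 2x(t) - x(t-dt) + dt^2 a(t).
    One extra set of positions (x_prev) must be kept."""
    x_prev = x0 - dt * v0 + 0.5 * dt ** 2 * accel(x0)  # start-up half step
    x = x0
    xs = [x0]
    for _ in range(steps):
        x_next = 2.0 * x - x_prev + dt ** 2 * accel(x)
        x_prev, x = x, x_next
        xs.append(x)
    return np.array(xs)

# Harmonic oscillator a = -x with x(0)=1, v(0)=0: exact solution x(t) = cos t.
xs = verlet(x0=1.0, v0=0.0, accel=lambda x: -x, dt=0.01, steps=1000)
print(abs(xs[-1] - np.cos(10.0)))   # small discretization error
```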

That's all. You have to keep one extra set of r_i's, but it is the common method, and works well. To do statistical mechanics, you do time averages, after the system equilibrates. Or ensemble averages over different initial conditions. The ensemble is microcanonical since energy is fixed. Usually one solves it for a fixed volume, using periodic boundary conditions. A common practice is to do a

Figure 5.6: Periodic boundary conditions.

"canonical" ensemble by fixing temperature through the sum rule

\frac{1}{2} m \langle v^2 \rangle = \frac{1}{2} k_B T

or,

T = \frac{m \langle v^2 \rangle}{k_B}

This is done by constantly rescaling the magnitude of the velocities to be

|v| \equiv \langle v^2 \rangle^{1/2} \equiv \sqrt{k_B T / m}

What about microscopic reversibility and the arrow of time? Recall the original equations

\frac{\partial \vec r_i}{\partial t} = \vec v_i

\frac{\partial \vec v_i}{\partial t} = -\frac{1}{m} \frac{\partial}{\partial \vec r_i} \sum_{\substack{j \\ (j \ne i)}} V(|\vec r_i - \vec r_j|)

The equations are invariant under time reversal t → -t if

\vec r \to \vec r, \qquad \vec v \to -\vec v

How then can entropy increase? This is a deep and uninteresting question. Entropy does increase; it's a law of physics. The real question is to calculate how fast entropy increases, e.g., to calculate a viscosity or a diffusion constant, which set the time scale over which entropy increases. Nevertheless, here's a neat way to fool around with this a bit. Take a gas of N atoms in a box of size L^d where energy is fixed such that the average equilibrium separation between atoms is

s_{eq} = \mathcal{O}((3 \text{ or } 4)\, r_0)

where r_0 is a parameter giving the rough size of the atoms. Now initialize the atoms in a corner of the box with

s(t=0) = \mathcal{O}(r_0)

or smaller (see fig. 5.7). Now, at time t = later, let all \vec v \to -\vec v. By microscopic reversibility you should get fig. 5.8. Try it! You get fig. 5.9. It is round-off

Figure 5.7: "Entropy" vs. time: atoms initialized in a corner at t = 0 spread out by t = later, reaching equilibrium.

Figure 5.8: What reversibility predicts: reversing all velocities at t = later returns the system to its initial state at t = 2·later, so the "entropy" drops back down from 1.

Figure 5.9: What you actually get: the "entropy" rises to 1 and stays there.

Figure 5.9: error, but try to improve your precision. You will always get the same thing. Think about it. The second law is a law, not someone’s opinion. 38 CHAPTER 5. MOLECULAR DYNAMICS Chapter 6

Chapter 6

Monte Carlo and the Master Equation

An important description of nonequilibrium statistical mechanics is provided by the master equation. This equation also provides the motivation for the Monte Carlo method, the most important numerical method in many-body physics. The master equation, despite the pretentious name, is phenomenology following from a few reasonable assumptions. Say a system is defined by a state \{S\}. This state might, for example, be all the spins ±1 that an Ising model has at each site. That is, \{S\} are all the states appearing in the partition function

Z = \sum_{\{S\}} e^{-E\{S\}/k_B T}     (6.1)

We wish to know the probability of being in some state \{S\} at a time t,

p(\{S\}, t) = ?     (6.2)

from knowledge of probabilities of states at earlier times. In the canonical ensemble (fixed T, fixed number of particles),

P_{eq}(\{S\}) \propto e^{-E\{S\}/k_B T}     (6.3)

of course. For simplicity, let's drop the brackets and let

\{S\} \equiv S     (6.4)

As time goes on, the probability of being in state S increases because one makes transitions into this state from other (presumably nearby in some sense) states S'. Say the probability of going from S' to S in a unit time is

W(S, S') \equiv W(S \leftarrow S')     (6.5)

(read the arrow as "from"). This is a transition rate, as shown in fig. 6.1. Hence

Figure 6.1: Transition rates W(S, S') and W(S', S) between states S' and S.

W(S, S')\, P(S')     (6.6)

are contributions increasing P(S). However, P(S) decreases because one makes transitions out of S to the states S' with probability

W(S', S) \equiv W(S' \leftarrow S)     (6.7)

So

W(S', S)\, P(S)     (6.8)

are contributions decreasing P(S) as time goes on. The master equation incorporates both these processes as

\frac{\partial P(S, t)}{\partial t} = \sum_{S'} \left[ W(S, S')\, P(S', t) - W(S', S)\, P(S, t) \right]     (6.9)

which can be written as

\frac{\partial P(S, t)}{\partial t} = \sum_{S'} L(S, S')\, P(S', t)     (6.10)

where the inappropriately named Liouvillian is

L(S, S') = W(S, S') - \left( \sum_{S''} W(S'', S) \right) \delta_{S, S'}     (6.11)

Note that the master equation

1. has no memory of events further back in time than the diﬀerential element “dt”. This is called a Markov process

2. is linear.

3. is not time reversal invariant as t → -t (since it is first order in time).

In equilibrium,

\frac{\partial P_{eq}(S)}{\partial t} = 0     (6.12)

since we know this to be the definition of equilibrium. Hence, in Eq. 6.9,

W(S, S')\, P_{eq}(S') = W(S', S)\, P_{eq}(S)     (6.13)

which is called detailed balance of transitions or fluctuations. For the canonical ensemble, therefore,

\frac{W(S, S')}{W(S', S)} = e^{-(E(S) - E(S'))/k_B T}     (6.14)

This gives a surprisingly simple requirement on the transitions W: it is necessary (but not sufficient) that they satisfy Eq. 6.14. In fact, if we are only concerned with equilibrium properties, any W which satisfies detailed balance is allowable. We need only pick a convenient W! The two most common W's used in numerical work are the Metropolis rule

W_{Metropolis}(S, S') = \begin{cases} e^{-\Delta E/k_B T}, & \Delta E > 0 \\ 1, & \Delta E \le 0 \end{cases}     (6.15)

and the Glauber rule

W_{Glauber}(S, S') = \frac{1}{2}\left(1 - \tanh \frac{\Delta E}{2 k_B T}\right)     (6.16)

where

\Delta E = E(S) - E(S')     (6.17)

Let us quickly check that W_{Metropolis} satisfies detailed balance. If E(S) > E(S'),

\frac{W(S, S')}{W(S', S)} = \frac{e^{-(E(S) - E(S'))/k_B T}}{1} = e^{-\Delta E/k_B T}

Now if E(S) < E(S′)

\frac{W(S, S')}{W(S', S)} = \frac{1}{e^{-(E(S') - E(S))/k_B T}} = e^{-\Delta E/k_B T}

where ∆E = E(S) - E(S') as above. So detailed balance works. You can check yourself that the Glauber rule works. The form of the rules is shown in figures 6.2 and 6.3. To do numerical work via Monte Carlo, we simply make transitions from state \{S'\} → \{S\} using the probability W(\{S\}, \{S'\}). A last bit of black magic is that one usually wants the states \{S'\} and \{S\} to be "close". For something like the Ising model, a state \{S\} is close to \{S'\} if the two states differ by no more than one spin flip, as drawn in fig. 6.4. Let us do this explicitly for the two-dimensional Ising model in zero field, the same thing that Onsager solved!

E_{STATE} = -J \sum_{\langle ij \rangle} S_i S_j     (6.18)
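Both rules, and the detailed-balance checks just done, fit in a few lines (a sketch, not from the notes; beta stands for 1/k_B T, and the energies are arbitrary test values):

```python
import math

def w_metropolis(dE, beta):
    """Metropolis rule, Eq. 6.15, with dE = E(S) - E(S')."""
    return math.exp(-beta * dE) if dE > 0 else 1.0

def w_glauber(dE, beta):
    """Glauber rule, Eq. 6.16."""
    return 0.5 * (1.0 - math.tanh(0.5 * beta * dE))

# Detailed balance, Eq. 6.14: W(S,S')/W(S',S) = exp(-beta*(E(S) - E(S'))).
beta, E1, E2 = 0.7, 1.3, -0.4        # arbitrary illustrative values
for w in (w_metropolis, w_glauber):
    ratio = w(E1 - E2, beta) / w(E2 - E1, beta)
    print(abs(ratio - math.exp(-beta * (E1 - E2))))   # ~0 for both rules
```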

Figure 6.2: Metropolis rule W vs. ∆E at low, moderate, and high T.

Figure 6.3: Glauber rule W vs. ∆E at low, moderate, and high T; W → 1/2 for all ∆E at high T.

Figure 6.4: A single spin flip takes the source state \{S'\} to the target state \{S\}; the outcome occurs with probability W(\{S\}, \{S'\}).

Figure 6.5: The five cases for flipping a spin (Metropolis rule): Case 1, prob = e^{-8J/k_B T}; Case 2, prob = e^{-4J/k_B T}; Cases 3, 4, and 5, prob = 1.

where S_i = ±1 are the spins on sites i = 1, 2, ..., N (not to be confused with our notation for a state \{S\}!), and ⟨ij⟩ means only nearest neighbours are summed over. The positive constant J is the coupling interaction. If only one spin can possibly flip, note there are only five cases to consider, as shown in fig. 6.5. To get the probabilities (Metropolis), for Case 1:

E_{Before} = -4J, \quad E_{After} = +4J, \quad \Delta E = 8J, \quad W(S, S') = \text{Prob. of flip} = e^{-8J/k_B T}

For Case 2:

E_{Before} = -2J, \quad E_{After} = +2J, \quad \Delta E = 4J, \quad W(S, S') = \text{Prob. of flip} = e^{-4J/k_B T}

For Case 3:

E_{Before} = 0, \quad E_{After} = 0, \quad \Delta E = 0, \quad W(S, S') = \text{Prob. of flip} = 1

For Case 4:

E_{Before} = +2J, \quad E_{After} = -2J, \quad \Delta E = -4J, \quad W(S, S') = \text{Prob. of flip} = 1

For Case 5:

E_{Before} = +4J, \quad E_{After} = -4J, \quad \Delta E = -8J, \quad W(S, S') = \text{Prob. of flip} = 1

This is easy if W = 1; if W = e^{-4J/k_B T}, then you have to know the temperature in units of J/k_B. Let's say, for example,

W = 0.1

Then the spin flips 10% of the time. You can enforce this by comparing W = 0.1 to a random number distributed uniformly between 0 and 1. If the random number is less than or equal to W, then you flip the spin. Here is an outline of a computer code. If you start at a large temperature and

Initialize: How many spins N; how long to average; set time t = 0. How hot? (give T = (...) J/k_B). How much field H? Give all N spins some initial value (such as S_i = 1). Use periodic boundary conditions.

Grinding: Do time = 0 to maximum time. Do i = 1 to N: pick a spin at random, check if it flips, and update the system. Doing this N times is one Monte Carlo step (mcs), the unit of time: each spin is flipped once on average per N updates.

Output: Calculate magnetization per spin ⟨M⟩ = (1/N) ⟨Σ_i S_i⟩ and energy per spin ⟨E/J⟩ = -(1/N) ⟨Σ_{⟨ij⟩} S_i S_j⟩, averaging over time (after throwing away some early times).

Figure 6.6: Outline of a Monte Carlo code.

Figure 6.7: Magnetization M vs. time t (in Monte Carlo steps; one unit of time corresponds to attempting N spin flips) at high T > T_c: M relaxes from 1 to zero.

Figure 6.8: Magnetization vs. time at low T: after a transient, one time-averages to get M.

Figure 6.9: ⟨M⟩ vs. T: ⟨M⟩ ∼ |T - T_c|^{\beta} with β = 1/8, and T_c ≈ 2.269 J/k_B.

Figure 6.10: Energy per spin ⟨E/J⟩ vs. T, rising from -2 at low T through T_c.

initially S_i = 1, the magnetization quickly relaxes to zero (see fig. 6.7). Below T_c, the magnetization quickly relaxes to its equilibrium value (see fig. 6.8). Doing this for many temperatures, for a large enough lattice, gives fig. 6.9. Doing this for the energy per spin gives fig. 6.10, which is pretty boring until one plots the heat capacity per spin

C \equiv \frac{1}{J} \frac{\partial \langle E \rangle}{\partial T}

Try it. You'll get very close to Onsager's results with fairly small systems

Figure 6.11: Heat capacity per spin vs. T: C ∼ ln|T - T_c|, diverging at T_c ≈ 2.269 J/k_B.

N = 10 × 10, or 100 × 100, over a thousand or so time steps, and you'll do much better than the mean field theory we did in class. As an exercise, think how you would do this for the solid-on-solid model

Z = \sum_{\{States\}} e^{-E_{State}/k_B T}

where

E_{State} = J \sum_{i=1}^{N} |h_i - h_{i+1}|

where h_i are heights at sites i.
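One way to do the exercise is with the same Metropolis machinery (a sketch, not from the notes; the trial move of raising or lowering one randomly chosen column by one unit is an assumption, though a standard choice):

```python
import numpy as np

def sos_mc(L=50, T=1.0, J=1.0, steps=2000, seed=0):
    """Metropolis dynamics for the solid-on-solid model
    E = J * sum_i |h_i - h_{i+1}|, periodic boundaries, kB = 1."""
    rng = np.random.default_rng(seed)
    h = np.zeros(L, dtype=int)                # start from a flat interface
    for _ in range(steps):
        i = rng.integers(L)
        dh = rng.choice((-1, 1))              # trial: move column i up or down
        left, right = h[(i - 1) % L], h[(i + 1) % L]
        dE = J * (abs(h[i] + dh - left) + abs(h[i] + dh - right)
                  - abs(h[i] - left) - abs(h[i] - right))
        if dE <= 0 or rng.random() <= np.exp(-dE / T):
            h[i] += dh
    return h

h = sos_mc()
print(np.mean(np.abs(np.diff(h))))   # mean step size of the rough interface
```

At higher T the accepted moves roughen the interface more, which is the physics the later chapter on surface fluctuations will quantify.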

Chapter 7

Review of Thermodynamics

There are a lot of good thermodynamics books, and a lot of bad ones too. The ones I like are "Heat and Thermodynamics" by Zemansky and Dittman (but beware their mediocre discussion of statistical mechanics, and their confusing discussion of phase transitions), and Callen's book, "Thermostatistics" (which is overly formal). A thermodynamic state of a system is described by a finite number of variables, usually at least two. One needs at least two because of conservation laws for energy E and number of particles N. So one needs at least (E, N) to specify a system, or two independent variables which could give (E, N). Some systems need more than two, and for some systems it is a useful approach to only consider one variable, but for now we will consider that two are enough. There is a deep question lurking here. Thermodynamics describes systems in the thermodynamic limit: N → ∞, volume V → ∞, such that the number density n = N/V = const. Each classical particle has, in three dimensions, 6 independent degrees of freedom describing its position and momentum. So there are not 2, but ∼ 10^{23} degrees of freedom! This is a very deep question. Many people have worried about it. But let's take the physics view. We need at least two variables, so let's use two. Experimentally, this describes thermodynamic systems on large length and time scales, far from, for example, phase transitions.

Consider a pure system which can be in a gas, liquid, or simple solid phase. It can be described by temperature T, pressure P, and entropy S, in addition to E, V, and N. Temperature is a "good" thermodynamic variable by the zeroth law of thermodynamics (two systems in thermal equilibrium with a third system are in thermal equilibrium with each other). Entropy is a "good" thermodynamic variable by the second law of thermodynamics (the entropy of an isolated system never decreases).

Figure 7.1: Our system, described by T, P, S, E, V, N, ..., of which 2 are independent.

Variables can be classified by whether they are

extensive or intensive. Say one considers a system of two independent parts, as drawn in fig. 7.2.

Figure 7.2: A system with two independent parts.

An extensive variable satisﬁes

Vtotal = V1 + V2

Etotal = E1 + E2

Stotal = S1 + S2 while an intensive variable satisﬁes

T = T1 = T2

P = P1 = P2 From mechanics, the work done on a system to displace it by dx is

-F\,dx

where F is the force. Acting on an area, we have

đW = -P\,dV

where đW is a small amount of work, P the pressure, and dV the differential volume change. Two new concepts arise in thermodynamics: temperature, which is something systems in thermal equilibrium share; and heat, which is a form of energy exchanged between systems at different temperatures. The law of energy conservation (first law of thermodynamics) is

dE = đW + đQ

where dE is the change in internal energy, đW the work done on the system, and đQ the heat transferred. The second law of thermodynamics is

đQ = T\,dS

where S is entropy. Heat always flows from the hot system to the cold system. In fig. 7.3, both sides change due to transfer of heat Q. For the hot side,

Figure 7.3: System with hot and cold parts exchanging heat Q.

(\Delta S)_{HOT} = -\frac{Q}{T_{HOT}}

for the cold side

(\Delta S)_{COLD} = +\frac{Q}{T_{COLD}}

Hence

(\Delta S)_{TOTAL} = Q\left[\frac{1}{T_{COLD}} - \frac{1}{T_{HOT}}\right]

So

(\Delta S)_{TOTAL} \ge 0

This is the usual way the second law is presented. Note that this is the only law of physics that distinguishes past and future. As an aside, it is interesting to note that S must be a "good" thermodynamic variable, or else the second law is violated. If we assume the opposite (that two distinct states have the same S) we can create a perpetual motion machine, as drawn in fig. 7.4. Blobbing up the 1st and 2nd laws gives

Figure 7.4: Perpetual motion machine: a cycle in the P-V plane between an isotherm and an adiabat connecting two states with the same (T, S), converting heat Q into work W.

dE = -P\,dV + T\,dS

where only 2 out of E, P, V, T, and S are independent. This is the most important and useful result of thermodynamics. To get further, we need equations of state, which relate the variables. Within thermodynamics, these are determined experimentally. For example, the ideal gas equation of state is

P = P(V, T) = N k_B T / V

and E = E(T), where usually E = CT and C is a constant determined experimentally. Then

C\,dT = -\frac{N k_B T}{V}\,dV + T\,dS

So, for an adiabatic process (wherein no heat is exchanged, so dS = 0), a change in volume causes a change in temperature. In general, equations of state are not known. Instead one obtains differential equations of state

dP = \left(\frac{\partial P}{\partial V}\right)_T dV + \left(\frac{\partial P}{\partial T}\right)_V dT

and

dE = \left(\frac{\partial E}{\partial V}\right)_T dV + \left(\frac{\partial E}{\partial T}\right)_V dT

Clearly, if we know all the (T, V) dependence of these derivatives, such as (∂E/∂T)_V, we know the equations of state. In fact, most work focuses on the derivatives. Some of these are so important they are given their own names:

Isothermal compressibility: \kappa_T = -\frac{1}{V}\left(\frac{\partial V}{\partial P}\right)_T

Adiabatic compressibility: \kappa_S = -\frac{1}{V}\left(\frac{\partial V}{\partial P}\right)_S

Volume expansivity: \alpha = \frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_P

Heat capacity at constant volume: C_V = T\left(\frac{\partial S}{\partial T}\right)_V

Heat capacity at constant pressure: C_P = T\left(\frac{\partial S}{\partial T}\right)_P
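As a quick sanity check of the definitions (a sketch, not from the notes, using the ideal gas equation of state V = N k_B T / P, for which κ_T = 1/P and α = 1/T exactly; the finite-difference step is arbitrary):

```python
# Finite-difference check of kappa_T and alpha for the ideal gas.
N_kB = 1.0                       # work in units where N * kB = 1
def V(T, P):
    return N_kB * T / P

T, P, h = 300.0, 2.0, 1e-6
kappa_T = -(V(T, P + h) - V(T, P - h)) / (2 * h) / V(T, P)
alpha = (V(T + h, P) - V(T - h, P)) / (2 * h) / V(T, P)

print(kappa_T)   # -> 1/P = 0.5
print(alpha)     # -> 1/T ~ 0.00333
```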

Of course, \kappa_T = \kappa_T(T, V), etc. These results rest on such firm foundations, and follow from such simple principles, that any microscopic theory must be consistent with them. Furthermore, a microscopic theory should be able to address the unanswered questions of thermodynamics, e.g., calculate \kappa_T or C_V from first principles. These quantities can have very striking behavior. Consider the phase diagram of a pure substance (fig. 7.5). The lines denote 1st order phase transitions. The dot denotes a continuous phase transition, often called second order.

Figure 7.5: Solid, liquid, and vapour phases of a pure substance in the P-T plane.

Figure 7.6: The same phase diagram, with the pressure P_c of the continuous transition marked, crossing the transitions at temperatures T_1 and T_2.

Imagine preparing a system at the pressure corresponding to the continuous transition, and increasing temperature from zero through that transition (see fig. 7.6). Figure 7.7 shows the behavior of \kappa_T and C_V: at the first-order transition, \kappa_T has a delta-function divergence while C_V has a cusp. At the continuous transition, both C_V and \kappa_T have power-law divergences. We will spend a lot of time on continuous, so-called second order transitions in the course.

Figure 7.7: \kappa_T and C_V along the path P ≡ P_c, showing the features at T_1 and T_2.

Chapter 8

Statistical Mechanics

We will develop statistical mechanics with "one eye" on thermodynamics. We know that matter is made up of a large number of atoms, and so we expect thermodynamic quantities to be averages of these atomic properties. To do the averages, we need a probability weight. We will assume that, in an isolated system, the probability of being in a state is a function of the entropy alone. This is because, in thermodynamics, the equilibrium state is the one with the maximum entropy. Hence, the probability weight ρ = ρ(S). To get the form of ρ we need to use: 1. S is extensive. 2. S is maximized in equilibrium. 3. S is defined up to a constant.

Since S is extensive, two independent systems of respective entropies S1 and S2 have total entropy (S1 + S2) (see ﬁg. 8.1). But

Figure 8.1: Two independent systems, 1 and 2.

\rho(S_{total}) = \rho(S_1)\,\rho(S_2)

since they are independent (and note I will not worry about normalization for now). Since S is extensive, we have

ρ(S1 + S2) = ρ(S1)ρ(S2)

or

\ln \rho(S_1 + S_2) = \ln \rho(S_1) + \ln \rho(S_2)

This is of the form

f(S_1 + S_2) = f(S_1) + f(S_2)

where f is some function. The solution is clearly

f(S) = AS

where A is a constant, so

\rho = e^{AS}

But S is defined up to a constant, so

\rho = B e^{S/k_B}, \qquad A \equiv 1/k_B

where the constant B is for normalization, and our earlier carelessness with normalization has no consequence. This is the probability weight for the microcanonical ensemble, since S = S(E, V) in thermodynamics from

T dS = dE + P dV

That is, it is the ensemble for given (E, V). The more handy ensemble, for given (T, V), is the canonical one. Split the system into a reservoir and a system. They are still independent, but the reservoir is much bigger than the system (fig. 8.2).

Figure 8.2: A small system, at fixed T and V, in contact with a much larger reservoir.

Then

S_{reservoir}(E_{reservoir}) = S(E_{total} - E) \approx S(E_{total}) - \left(\frac{\partial S}{\partial E}\right) E

and

S_{reservoir}(E_{reservoir}) = \text{const} - \frac{E}{T}

So, for the canonical ensemble

\rho_{state} = \text{const}\; e^{-E_{state}/k_B T}

The normalization factor is

Z = \sum_{states} e^{-E_{state}/k_B T}

The connection to thermodynamics is from the Helmholtz free energy F = E - TS, where

Z = e^{-F/k_B T}

We haven't shown this, but it is done in many books.

Chapter 9

Fluctuations

Landau and Lifshitz are responsible for "organizing" the theory of fluctuations. They attribute the theory to Einstein in their book on statistical physics. I am following their treatment. First, let's do toy systems, where entropy only depends on one variable, x. The probability weight is

\rho \sim e^{S(x)/k_B}

so the probability of being in x → x + dx is

\text{const}\; e^{S(x)/k_B}\, dx

where the constant gives normalization. At equilibrium, S is maximized, and x is at its mean value ⟨x⟩, as drawn in fig. 9.1. Near the maximum we have

Figure 9.1: Entropy S(x), maximized at x = ⟨x⟩.

S(x) = S(\langle x\rangle) + \frac{\partial S}{\partial x}\bigg|_{x=\langle x\rangle} (x - \langle x\rangle) + \frac{1}{2} \frac{\partial^2 S}{\partial x^2}\bigg|_{x=\langle x\rangle} (x - \langle x\rangle)^2 + \ldots

But S(⟨x⟩) is just a constant, ∂S/∂⟨x⟩ = 0, and ∂²S/∂⟨x⟩² is negative definite, since S is a maximum. Hence,

\frac{S(x)}{k_B} = -\frac{1}{2\sigma^2} (x - \langle x\rangle)^2 + \text{const}

where

\sigma^2 \equiv -k_B / (\partial^2 S/\partial \langle x\rangle^2)

So the probability of being in x → x + dx is

\text{const.}\; e^{-(x - \langle x\rangle)^2/2\sigma^2}

which is simply a Gaussian distribution. Since it is sharply peaked at x = ⟨x⟩, we will let integrals vary from x = -∞ to x = ∞. Then

\langle (x - \langle x\rangle)^2 \rangle = \sigma^2

i.e.,

\langle (x - \langle x\rangle)^2 \rangle = \frac{-k_B}{\partial^2 S/\partial \langle x\rangle^2}

This is called a fluctuation-dissipation relation of the first kind. Fluctuations in x are related to changes in entropy. They are also called correlation-response relations. As an aside, the "second kind" of fluctuation-dissipation relations turn up in nonequilibrium theory. Since S is extensive, we have

\langle (\Delta x)^2 \rangle = \mathcal{O}(1/N)

again, if x is intensive. Note that from before we said that

\langle (\Delta x)^2 \rangle = \mathcal{O}(\xi^3/L^3)

so that fluctuations involve microscopic correlation lengths. Now we have

\frac{-k_B}{\partial^2 S/\partial \langle x\rangle^2} = \mathcal{O}(\xi^3/L^3)

So thermodynamic derivatives are determined by microscopic correlations. This is why we need statistical mechanics to calculate thermodynamic derivatives. We need more degrees of freedom than x to do anything all that interesting, but we have enough to do something. Consider isothermal toys. These are things like balls, pencils, ropes, and pendulums, in thermal equilibrium. They have a small number of degrees of freedom. For constant temperature, using thermodynamics,

T\,\Delta S = \Delta E - W

where ∆E ≡ 0 for simple systems, and W is the work done to the system.

Hence ∆S = -W/T. Since entropy is only defined up to a constant, this is all we need. This is how it works. Consider a simple pendulum (fig. 9.2). The amount of

Figure 9.2: Simple pendulum of mass m, length l, displaced an angle θ, raising the bob a height x.

work done by the thermal background to raise the pendulum a height x is

mgx where g is the gravitational acceleration. From the picture

x = l - l\cos\theta \approx l\,\frac{\theta^2}{2}

so

W = \frac{mgl\theta^2}{2}

and S = -\frac{mgl\theta^2}{2T} + \text{const}, so, since ⟨θ⟩ = 0, we have

\langle \theta^2 \rangle = \frac{k_B T}{mgl}

This is a very small fluctuation for a pendulum like in the physics labs. When the mass is quite small, the fluctuations are appreciable. This is the origin, for example, of fluctuations of small particles in water, called Brownian motion, which was figured out by Einstein. There are many other isothermal toys.
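To see just how small, plug in lab-scale numbers (illustrative values, not from the notes: a 100 g bob on a 1 m string at room temperature):

```python
import math

kB = 1.380649e-23                    # Boltzmann constant, J/K
m, g, l, T = 0.1, 9.8, 1.0, 300.0    # kg, m/s^2, m, K

theta_rms = math.sqrt(kB * T / (m * g * l))
print(theta_rms)    # ~6.5e-11 rad: utterly invisible in a physics lab
```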

Figure 9.3: Time for a balanced pencil to fall (of order 5 s) as a function of temperature T.

Landau and Lifshitz give several examples with solutions. My favorite, not given in Landau's book though, is the pencil balanced on its tip. It is interesting to estimate how long it takes to fall as a function of temperature (at zero temperature, one needs to use the uncertainty principle).

9.1 Thermodynamic Fluctuations

We can use this approach to calculate the corrections to thermodynamics. Again imagine we have two weakly interacting systems making up a larger system. Due to a ﬂuctuation, there is a change in entropy of the total system due to each subsystem: “1”, the system we are interested in, and “2”, the reservoir

Figure 9.4: System "1" inside reservoir "2", with N fixed and common T, P.

\Delta S = \Delta S_1 + \Delta S_2 = \Delta S_1 + \frac{dE_2 + P\,dV_2}{T}

using thermodynamics for the reservoir. But when the reservoir increases its energy, the system decreases its energy, and the same for the volume. However, they have the same P (for mechanical equilibrium) and T (for thermal equilibrium). So we have

\Delta S = \Delta S_1 - \frac{\Delta E_1}{T} - \frac{P}{T}\Delta V_1

The probability weight is e∆S/kB , so we have

\text{Probability} \propto e^{-\frac{1}{k_B T}(\Delta E - T\Delta S + P\Delta V)}

where I have dropped the "1" subscript. There are still only 2 relevant variables, so, for example:

\Delta E = \Delta E(S, V) = \frac{\partial E}{\partial S}\Delta S + \frac{\partial E}{\partial V}\Delta V + \frac{1}{2}\left[\frac{\partial^2 E}{\partial S^2}(\Delta S)^2 + 2\frac{\partial^2 E}{\partial S \partial V}\Delta S\,\Delta V + \frac{\partial^2 E}{\partial V^2}(\Delta V)^2\right]

to second order. By ∂E/∂V, I mean (∂E/∂V)_S, but it is already pretty messy notation. Rearranging:

\Delta E = \frac{\partial E}{\partial S}\Delta S + \frac{\partial E}{\partial V}\Delta V + \frac{1}{2}\left[\Delta\!\left(\frac{\partial E}{\partial S}\right)\Delta S + \Delta\!\left(\frac{\partial E}{\partial V}\right)\Delta V\right]

But

\left(\frac{\partial E}{\partial S}\right)_V = T

and

\left(\frac{\partial E}{\partial V}\right)_S = -P

so we have

\Delta E = T\Delta S - P\Delta V + \frac{1}{2}(\Delta T\,\Delta S - \Delta P\,\Delta V)

The term in brackets gives the first-order correction to thermodynamics. In the probability weight,

\text{Probability} \propto e^{-(\Delta T\,\Delta S - \Delta P\,\Delta V)/2k_B T}

There are still only two independent variables, so, for example,

\Delta S = \left(\frac{\partial S}{\partial T}\right)_V \Delta T + \left(\frac{\partial S}{\partial V}\right)_T \Delta V = \frac{C_V}{T}\Delta T + \left(\frac{\partial P}{\partial T}\right)_V \Delta V

Likewise

\Delta P = \left(\frac{\partial P}{\partial T}\right)_V \Delta T + \left(\frac{\partial P}{\partial V}\right)_T \Delta V = \left(\frac{\partial P}{\partial T}\right)_V \Delta T - \frac{1}{\kappa_T V}\Delta V

Putting these expressions into the probability weight, and simplifying, gives

\text{Probability} \propto e^{-\frac{C_V}{2k_B T^2}(\Delta T)^2 - \frac{1}{2k_B T \kappa_T V}(\Delta V)^2}

Evidently the probabilities for fluctuations in T and V are independent, since

\text{Probability} = \text{Prob}(\Delta T) \cdot \text{Prob}(\Delta V)

Also, all the averages are Gaussians, so by inspection:

\langle (\Delta T)^2 \rangle = \frac{k_B T^2}{C_V}, \qquad \langle (\Delta V)^2 \rangle = k_B T \kappa_T V

and

\langle \Delta T\,\Delta V \rangle = 0

If we had chosen ∆S and ∆P as our independent variables, we would have obtained

\text{Probability} \propto e^{-\frac{1}{2k_B C_P}(\Delta S)^2 - \frac{\kappa_S V}{2k_B T}(\Delta P)^2}

so that

\langle (\Delta S)^2 \rangle = k_B C_P, \qquad \langle (\Delta P)^2 \rangle = \frac{k_B T}{\kappa_S V}, \qquad \langle \Delta S\,\Delta P \rangle = 0

Higher-order moments are determined by the Gaussian distributions. So, for example,

\langle (\Delta S)^4 \rangle = 3 \langle (\Delta S)^2 \rangle^2 = 3 (k_B C_P)^2
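This Gaussian moment relation is easy to verify by sampling (a sketch, not from the notes; sigma plays the role of ⟨(∆S)²⟩^{1/2}):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0
x = sigma * rng.standard_normal(1_000_000)   # Gaussian fluctuations, <x> = 0

m2 = np.mean(x ** 2)    # estimates sigma^2
m4 = np.mean(x ** 4)    # estimates 3 * sigma^4
print(m4 / m2 ** 2)     # close to 3
```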

Note that any thermodynamic quantity is a function of only two variables. Hence, its fluctuations can be completely determined from, for example, ⟨(∆T)²⟩, ⟨(∆V)²⟩, and ⟨∆T ∆V⟩. For example,

\Delta Q = \Delta Q(T, V) = \left(\frac{\partial Q}{\partial T}\right)_V \Delta T + \left(\frac{\partial Q}{\partial V}\right)_T \Delta V

so

\langle (\Delta Q)^2 \rangle = \left(\frac{\partial Q}{\partial T}\right)_V^2 \langle (\Delta T)^2 \rangle + \underbrace{\ldots \langle \Delta T\,\Delta V \rangle}_{=\,0} + \left(\frac{\partial Q}{\partial V}\right)_T^2 \langle (\Delta V)^2 \rangle

and

\langle (\Delta Q)^2 \rangle = \left(\frac{\partial Q}{\partial T}\right)_V^2 \frac{k_B T^2}{C_V} + \left(\frac{\partial Q}{\partial V}\right)_T^2 k_B T \kappa_T V

Check this, for example, with Q = S or Q = P. Note the form of these relations is

\langle (\Delta x)^2 \rangle = \mathcal{O}(1/N)

for intensive variables, and

\langle (\Delta X)^2 \rangle = \mathcal{O}(N)

for extensive variables. Also note the explicit relationship between corrections to thermodynamics (⟨(∆T)²⟩), thermodynamic derivatives (C_V = T(∂S/∂T)_V), and microscopic correlations (1/N = ξ³/L³). The most important relationship is

\langle (\Delta V)^2 \rangle = k_B T \kappa_T V

although not in this form. Consider the number density n = N/V. Since N is fixed,

\Delta n = \Delta \frac{N}{V} = -\frac{N}{V^2}\Delta V

Hence

\langle (\Delta V)^2 \rangle = \frac{V^4}{N^2} \langle (\Delta n)^2 \rangle

so,

\langle (\Delta n)^2 \rangle = \frac{N^2}{V^4} k_B T \kappa_T V

or

\langle (\Delta n)^2 \rangle = \frac{n^2 k_B T \kappa_T}{V}

for the fluctuations in the number density. This corresponds to the total fluctuation of number density in our system. That is, if ∆n(\vec r) is the fluctuation of number density at a point in space \vec r,

\Delta n = \frac{1}{V} \int d\vec r\, \Delta n(\vec r)

(Sorry for the confusing-looking notation.) Hence

\langle (\Delta n)^2 \rangle = \frac{1}{V^2} \int\!\!\int d\vec r\, d\vec r\,' \, \langle \Delta n(\vec r)\,\Delta n(\vec r\,') \rangle

Again, by translational invariance and isotropy of space,

\langle \Delta n(\vec r)\,\Delta n(\vec r\,') \rangle = C(|\vec r - \vec r\,'|)

In the same way as done earlier, we have

\langle (\Delta n)^2 \rangle = \frac{1}{V} \int d\vec r\, C(r)

where the dummy integration variable "\vec r" is "\vec r - \vec r\,'", of course. But ⟨(∆n)²⟩ = n² k_B T κ_T / V, so we have the thermodynamic "sum rule":

\int d\vec r\, C(r) \equiv \int d\vec r\, \langle \Delta n(\vec r)\,\Delta n(0) \rangle = n^2 k_B T \kappa_T

If

\hat C(k) \equiv \int d\vec r\, e^{i\vec k\cdot\vec r}\, C(\vec r)

we have

\lim_{k\to 0} \hat C(k) = n^2 k_B T \kappa_T

The quantity \hat C(k) is called the structure factor. It is directly observable by x-ray, neutron, light, etc. scattering. For a liquid, one typically obtains something like fig. 9.5. The point at k = 0 is fixed by the result above.
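For an ideal gas, κ_T = 1/(n k_B T), so the sum rule reduces to ⟨(∆N)²⟩ = N in a subvolume: Poisson statistics. That limit is easy to check by sampling (a sketch, not from the notes; uncorrelated points in 2D stand in for ideal-gas atoms):

```python
import numpy as np

rng = np.random.default_rng(0)
npts = 100_000                      # ideal-gas "atoms" in a unit box
pts = rng.random((npts, 2))         # uncorrelated positions

# Count atoms in a 40 x 40 grid of cells and compare variance to mean.
counts, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=40)
ratio = counts.var() / counts.mean()
print(ratio)    # close to 1, the Poisson value <(dN)^2>/<N> = 1
```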

Figure 9.5: Typical structure factor Ĉ(k) for a liquid; the peaks correspond to short-scale order, and the k = 0 value is fixed at n² k_B T κ_T.

Chapter 10

Fluctuations of Surfaces

A striking fact is that, although most natural systems are inhomogeneous, with complicated form and structure, almost all systems in thermodynamic equilibrium must have no structure, by Gibbs' phase rule. As an example, consider a pure system which can be in the homogeneous phases of solid, liquid, or vapour. Each phase can be described by a Gibbs energy g_s(T, P), g_l(T, P), or g_v(T, P). In a phase diagram, the system chooses its lowest Gibbs free energy at each point (T, P). Hence, the only possibilities for inhomogeneous phases are along lines and points:

g_s(T, P) = g_l(T, P) gives the solidification line,

g_l(T, P) = g_v(T, P) gives the vaporization line, while

g_s(T, P) = g_l(T, P) = g_v(T, P) gives the triple point of solid-liquid-vapour coexistence.

Figure 10.1: Phase diagram of a pure substance in the P-T plane, with solid, liquid, and vapour regions.

Inhomogeneous states exist on lines in the phase diagram, a set of measure zero compared to the area taken up by the P –T plane.


Much modern work in condensed-matter physics concerns the study of inhomogeneous states. To get a basis for this, we shall consider the simplest such states: equilibrium coexisting phases, separated by a surface.

Figure 10.2: Phase diagram (P-T plane, with the critical point at (T_c, P_c)) and a cartoon of the system: liquid below, vapour above, separated by a surface.

As everyone knows, small droplets are spheres, while much larger droplets are locally flat. This is the same for liquid-vapour systems, oil-water systems, magnetic domain walls, as well as many other systems. The reason for this is simple thermodynamics. A system with a surface has extra free energy proportional to the area of that surface. Using the Helmholtz free energy now, for convenience,

F = (\text{Bulk Energy}) + \sigma \int d\vec x     (10.1)

where the positive constant σ is the surface tension, and the integral is the surface area. Note that the extra free

where the positive constant σ is the surface tension.| {z } Note that the extra free

Figure 10.3: Cartoon of the system: bulk energy plus surface energy σ∫d\vec x, with coordinates \vec x in the surface, y perpendicular to it, and edge length L.

energy due to the surface is minimized by minimizing the surface area. The coordinate system is introduced in the cartoon above. Note that y is perpendicular to the surface, while \vec x is a (d-1)-dimensional vector parallel to the surface. Furthermore, the edge length of the system is L, so its total volume is

V = L^d

while the total surface area is

A = L^{d-1}

Equation 10.1, ∆F = σL^{d-1}, is enough to get us started at describing fluctuations of a surface at a non-zero temperature. Of course, Eq. 10.1 is thermodynamics, while in statistical mechanics we need a probability function, or equivalently, the partition function

Z = \sum_{\{states\}} e^{-E_{state}/k_B T}     (10.2)

to calculate such fluctuations. Strangely enough, using Eq. 10.1 we can construct models for E_{state} in Eq. 10.2! This sounds suspicious since E_{state} should be microscopic, but it is a common practice. We shall make the decision to limit our investigations to long-length-scale phenomena. So it is natural to expect that phenomena on small length scales, like the mass of a quark or the number of gluons, do not affect our calculations. In particle physics they call this renormalizability: if we consider length scales

r > a few Angstroms

all the physics on scales below this only gives rise to coupling constants entering our theory on those larger length scales. The main such coupling constant we shall consider is simply the radius of an atom

r_0 \approx 5 Å     (10.3)

so that we consider length scales

r \ge r_0

Usually this appears as a restriction, and introduction of a coupling constant, in wavenumber k, so that we consider

k \le \Lambda

where Λ ≡ 2π/r_0. In particle physics this is called an ultraviolet cutoff. I emphasize that for us Λ or r_0 is a physically real cutoff giving the smallest length scale on which our theory is well defined. This is not too dramatic so far. An important realization in statistical mechanics is that many microscopic systems can give rise to the same behaviour on long length scales. In the simplest sense, this means that large-length-scale physics is independent of the mass of the gluon. We can choose any convenient mass we want, and get the same large-length-scale behaviour. In fact, it is commonplace to go much much further than this, and simply invent microscopic models (which are completely unrelated to the true microscopics in question!) which are consistent with thermodynamics. The rule of thumb is to construct models which are easy to solve, but still retain the correct long-length-scale physics.

This insight, which has dramatically changed condensed-matter physics over the last twenty years, is called "universality". Many microscopic models have the same long-length-scale physics, and so are said to be in the same "universality class". The notion of a universality class can be made precise in the context of the theory of phase transitions. Even without precision, however, it is commonly applied throughout modern condensed-matter physics!

10.1 Lattice Models of Surfaces

To get a sense of this, let us reconsider our system of interest, an interface. We will construct a "microscopic" model, as shown. We have split the \vec x and y axes

y

~x

Physical Surface Microscopic modeled abstraction

Figure 10.4: into discrete elements. Instead of a continuous variable where

−L/2 ≤ x ≤ L/2 and −L/2 ≤ y ≤ L/2   (10.4)

say (in d = 2), we have a discrete sum along the x direction

i = −L/2, ..., −3, −2, −1, 0, 1, 2, 3, ..., L/2

(where L is measured in units of r_o, the small–length–scale cutoff), and a similar discrete sum along the y direction

j = −L/2, ..., −3, −2, −1, 0, 1, 2, 3, ..., L/2   (10.5)

Actually, for greater convenience, let the index along the x axis vary as

i = 1, 2, 3, ..., L   (10.6)

All the possible states of the system then correspond to the height of the interface at each point i, called h_i, where h_i is an integer −L/2 ≤ h_i ≤ L/2, as shown in fig. 10.5. Hence the sum in the partition function is

Σ_{states} = Σ_{{h_i}} = ∏_{i=1}^{L} Σ_{h_i = −L/2}^{L/2}   (10.7)

Figure 10.5: A state of the lattice model: columns i = 1, 2, 3, 4, ..., L with heights, e.g., h_1 = 0, h_2 = 1, h_3 = 0, h_4 = 1, h_5 = 4.

This is a striking simplification of the microscopics of a true liquid–vapour system, but as L → ∞, the artificial constraint imposed by the discrete i, j grid will become unimportant. Now let us invent a "microscopic" energy E_state = E{h_i} which seems physically reasonable, and consistent with the free energy ∆F = σL^{d−1}. Physically, we want to discourage surfaces with too much area, since the free energy is minimized by a flat surface. It is natural for a microscopic energy to act with a microscopic range.

Figure 10.6: Configurations discouraged by E_state (large local height differences) and encouraged by E_state (locally flat).

So we enforce flatness, by enforcing local flatness:

E_state^{local} = +(h_i − h_{i+1})²   (10.8)

So the energy increases if a system does not have a locally flat interface. In total

E_state = J Σ_{i=1}^{L} (h_i − h_{i+1})²   (10.9)

where J is a positive constant. This is called the discrete Gaussian solid–on–solid model. Equally good models(!) are the generalized solid–on–solid models

E_state^{(n)} = J Σ_{i=1}^{L} |h_i − h_{i+1}|^n   (10.10)

where n = 1 is the solid–on–solid model, and n = 2 is the discrete Gaussian model. With Eq. 10.9, the partition function is

Z = ∏_i Σ_{h_i} exp( −(J/k_B T) Σ_{i′=1}^{L} |h_{i′} − h_{i′+1}|² )   (10.11)

This is for d = 2, but the generalization to d = 3 is easy. Note that

Z = Z(k_B T / J, L)   (10.12)

One can straightforwardly solve Z numerically. Analytically it is a little tough. It turns out to be easier to solve for the continuum version of Eq. 10.11, which we shall consider next.
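Since Z depends only on k_B T/J and L, Eq. 10.12 is easy to check by brute force for a tiny system. The sketch below (a minimal illustration, not from the text) enumerates all states of the discrete Gaussian solid–on–solid model, Eq. 10.11, for a handful of sites; the periodic boundary h_{L+1} = h_1 and the height cutoff M are assumptions made to keep the state space finite.

```python
import itertools
import math

def sos_partition_function(T_over_J, L=4, M=2):
    """Brute-force Z for the discrete Gaussian solid-on-solid model,
    Eq. 10.11: E/J = sum_i (h_i - h_{i+1})^2, heights h_i in {-M,...,M}.
    Periodic boundary h_{L+1} = h_1 is assumed for illustration."""
    Z = 0.0
    for h in itertools.product(range(-M, M + 1), repeat=L):
        E_over_J = sum((h[i] - h[(i + 1) % L]) ** 2 for i in range(L))
        Z += math.exp(-E_over_J / T_over_J)
    return Z
```

As T/J → 0 only the 2M + 1 flat interfaces survive, so Z → 2M + 1; raising T/J lets rough states contribute and Z grows toward the total number of states, the same competition between energy and entropy that runs through this chapter.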

10.2 Continuum Model of Surface

Instead of using a discrete mesh, let's do everything in the continuum limit. Let h(~x) be the height of the interface at any point ~x, as shown in fig. 10.7.

Figure 10.7: The interface y = h(~x) above the ~x plane.

This is a useful description if there are no overhangs or bubbles, which cannot be described by y = h(~x): these give a multivalued h(~x), as shown in fig. 10.8.

Figure 10.8: An overhang and a bubble, both of which make h(~x) multivalued.

Of course, although it was not mentioned above, the "lattice model" (fig. 10.5) also could not have overhangs or bubbles. The states the system can have are any value of h(~x) at any point ~x. The continuum analogue of Eq. 10.7 is

Σ_{states} = Σ_{h(~x)} = ∏_{~x} ∫ dh(~x) ≡ ∫ D h(·)   (10.13)

where the last step just shows some common notation for ∏_{~x} ∫ dh(~x). Things are a little subtle because ~x is a continuous variable, but we can work it through. One other weaselly point is that the sum over states is dimensionless, while ∏_{~x} ∫ dh(~x) has dimensions of ∏_{~x} (length). This constant factor of dimensionality does not affect our results, so we shall ignore it. For the energy of a state, we shall simply make it proportional to the amount of area in a state h(~x), as shown in fig. 10.7.

E_state{h(~x)} ∝ (Area of surface y = h(~x))

or

E_state = σ (L′)^{d−1}   (10.14)

where (L′)^{d−1} is the area of the surface y = h(~x), and σ is a positive constant. It is worth stopping for a second to realize that this is consistent with our definition of excess free energy ∆F = σL^{d−1}. In fact, it almost seems a little too consistent, being essentially equal to the excess free energy! This blurring of the distinction between microscopic energies and macroscopic free energies is common in modern work. It means we are considering a "coarse–grained" or mesoscopic description. Consider a simple example

e^{−F/k_B T} ≡ Z = Σ_{h} e^{−E_h/k_B T}

where h labels a state. Now say one has a large number of states which have the same energy E_h. Call these states, with the same E_h, H. Hence we can rewrite

Σ_{h} e^{−E_h/k_B T} = Σ_{H} g_H e^{−E_H/k_B T}

where

g_H = # of states h for state H

Now let the local entropy be

S_H = k_B ln g_H

So we have

e^{−F/k_B T} = Σ_{H} e^{−F_H/k_B T}

where F_H = E_H − T S_H. This example is partially simple since we only partially summed states with the same energy. Nevertheless, the same logic applies if one does a partial sum over states which might have different energies. This long digression is essentially to rationalize the common notation (instead of Eq. 10.14) of

E_state = F_state = σ (L′)^{d−1}   (10.15)

Let us now calculate (L′)^{d−1} for a given surface y = h(~x).

Figure 10.9: A surface element of length √((dh)² + (dx)²).

As shown in fig. 10.9, each element (in d = 2) of the surface is

dL′ = √((dh)² + (dx)²) = dx √(1 + (dh/dx)²)   (10.16)

In d dimensions one has

d^{d−1}L′ = d^{d−1}x √(1 + (∂h/∂~x)²)   (10.17)

Integrating over the entire surface gives

(L′)^{d−1} = ∫ d^{d−1}x √(1 + (∂h/∂~x)²)

so that

E_state = σ ∫ d^{d−1}~x √(1 + (∂h/∂~x)²)   (10.18)

Of course, we expect

(∂h/∂~x)² << 1   (10.19)

(that is, that the surface is fairly flat), so expanding to lowest order gives

E_state = σL^{d−1} + (σ/2) ∫ d^{d−1}~x (∂h/∂~x)²   (10.20)

where the second term is the extra surface area due to fluctuations. The partition function is therefore,

Z = Σ_{states} e^{−E_state/k_B T}

or,

Z = e^{−σL^{d−1}/k_B T} ∏_{~x} ∫ dh(~x) exp( −(σ/2k_B T) ∫ d^{d−1}~x (∂h/∂~x)² )   (10.21)

This looks quite formidable now, but it is straightforward to evaluate. In particle physics it is called a free–field theory. Note the similarity to the discrete partition function of Eq. 10.11. The reason Eq. 10.21 is easy to evaluate is that it simply involves a large number of Gaussian integrals

∫ dh e^{−a h²}

⟨h(~x)⟩ and ⟨h(~x) h(~x′)⟩

All properties can be determined from knowledge of these two moments! However, space is translationally invariant and isotropic in the ~x plane (the plane of the interface), so properties involving two points do not depend upon their

Figure 10.10: Two points ~x_1 and ~x_2 in the ~x plane, separated by |~x_1 − ~x_2|.

exact positions, only the distance between them. That is

⟨h(~x)⟩ = const.  and  ⟨h(~x_1) h(~x_2)⟩ = G(|~x_1 − ~x_2|)   (10.22)

For convenience, we can choose

⟨h⟩ = 0   (10.23)

then our only task is to evaluate the correlation function

G(x) = ⟨h(~x) h(0)⟩ = ⟨h(~x + ~x′) h(~x′)⟩   (10.24)

To get G(x), we use the partition function from Eq. 10.21, from which the probability of a state is

ρ_state ∝ exp( −(σ/2k_B T) ∫ d^{d−1}~x (∂h/∂~x)² )   (10.25)

The subtle thing here is the gradients, which mix the probability of h(~x) with that of a very nearby h(~x′). Eventually this means we will use Fourier transforms. To see this mixing, consider

∫ d~x (∂h/∂~x)² = ∫ d~x [ (∂/∂~x) ∫ d~x′ h(~x′) δ(~x − ~x′) ]²
 = ∫ d~x ∫ d~x′ ∫ d~x′′ h(~x′) h(~x′′) ( (∂/∂~x) δ(~x − ~x′) ) · ( (∂/∂~x) δ(~x − ~x′′) )

using simple properties of the Dirac delta function. Now relabel the integration variables,

~x → ~x′′ ,  ~x′ → ~x ,  ~x′′ → ~x′

since they are just dummy indices, giving

= ∫ d~x ∫ d~x′ ∫ d~x′′ h(~x) h(~x′) ( (∂/∂~x′′) δ(~x′′ − ~x) ) · ( (∂/∂~x′′) δ(~x′′ − ~x′) )
= ∫ d~x ∫ d~x′ ∫ d~x′′ h(~x) h(~x′) (∂/∂~x) · (∂/∂~x′) ( δ(~x′′ − ~x) δ(~x′′ − ~x′) )
= ∫ d~x ∫ d~x′ h(~x) h(~x′) (∂/∂~x) · (∂/∂~x′) δ(~x − ~x′)
= ∫ d~x ∫ d~x′ h(~x) h(~x′) [ −(∂²/∂~x²) δ(~x − ~x′) ]

(you can do this by parts integration, and there will be terms involving surfaces at ∞, which all vanish). Hence,

∫ d^{d−1}~x (∂h/∂~x)² = ∫ d^{d−1}~x ∫ d^{d−1}~x′ h(~x) M(~x − ~x′) h(~x′)   (10.26)

where the interaction matrix,

M(~x) = −(∂²/∂~x²) δ(~x)   (10.27)

which has the form shown in fig. 10.11. This "Mexican hat" interaction matrix appears in many different contexts. It causes h(~x) to interact with h(~x′). In Fourier space, gradients ∂/∂~x become numbers, or rather wave numbers ~k. We will use this to simplify our project. It is convenient to introduce continuous

Figure 10.11: The "Mexican hat" interaction matrix M(~x) = −(∂²/∂~x²) δ(~x).

and discrete Fourier transforms. One or another is more convenient at a given time. We have,

h(~x) = ∫ (d^{d−1}~k/(2π)^{d−1}) e^{i~k·~x} ĥ(~k)   (Continuous)
h(~x) = (1/L^{d−1}) Σ_{~k} e^{i~k·~x} ĥ_k   (Discrete)   (10.28)

where

ĥ(~k) = ∫ d^{d−1}~x e^{−i~k·~x} h(~x) = ĥ_k   (10.29)

Closure requires

δ(~x) = ∫ (d^{d−1}~k/(2π)^{d−1}) e^{i~k·~x} = (1/L^{d−1}) Σ_{~k} e^{i~k·~x}   (10.30)

and

δ(~k) = ∫ (d^{d−1}~x/(2π)^{d−1}) e^{i~k·~x}

but

δ_{~k, 0} = (1/L^{d−1}) ∫ d^{d−1}~x e^{i~k·~x}   (10.31)

is the Kronecker delta,

δ_{~k, 0} = 1 if ~k = 0, and 0 otherwise   (10.32)

not the Dirac delta function. We have used the density of states in ~k space as (L/2π)^{d−1}, so that the discrete → continuum limit is

Σ_{~k} → (L/2π)^{d−1} ∫ d^{d−1}~k

and

δ_{~k, 0} → (2π/L)^{d−1} δ(~k)   (10.33)

Using the Fourier transforms we can simplify the form of E_state in Eq. 10.20. Consider

∫ d^{d−1}~x (∂h/∂~x)²
 = ∫ d~x ( (∂/∂~x) ∫ (d~k/(2π)^{d−1}) e^{i~k·~x} ĥ(~k) )²
 = ∫ d~x ∫ (d~k/(2π)^{d−1}) ∫ (d~k′/(2π)^{d−1}) (−~k·~k′) ĥ(~k) ĥ(~k′) e^{i(~k+~k′)·~x}
 = ∫ (d~k/(2π)^{d−1}) ∫ (d~k′/(2π)^{d−1}) (−~k·~k′) ĥ(~k) ĥ(~k′) [ ∫ d~x e^{i(~k+~k′)·~x} ]
 = ∫ (d~k/(2π)^{d−1}) k² ĥ(~k) ĥ(−~k)

after doing the ~k′ integral. But h(~x) is real, so ĥ*(~k) = ĥ(−~k) (from Eq. 10.29), and we have

∫ d^{d−1}~x (∂h/∂~x)² = ∫ (d^{d−1}~k/(2π)^{d−1}) k² |ĥ(~k)|²   (Continuum)

or

∫ d^{d−1}~x (∂h/∂~x)² = (1/L^{d−1}) Σ_{~k} k² |ĥ_~k|²   (Discrete)   (10.34)

Hence the modes ĥ_~k are uncoupled in Fourier space; the matrix M(~x) has been replaced by the number k² (see Eqs. 10.26 and 10.27). Therefore we have

E_state = σL^{d−1} + (σ/2L^{d−1}) Σ_{~k} k² |ĥ_~k|²   (10.35)
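Eq. 10.34 is easy to verify numerically in d = 2 (a one-dimensional interface): sample h(x) on a periodic grid, compute ∫dx (∂h/∂x)² directly, and compare with (1/L) Σ_k k² |ĥ_k|² from a discrete Fourier transform. A minimal sketch (the particular test profile h(x) is an arbitrary choice):

```python
import numpy as np

L, N = 10.0, 256                      # interface length, grid points
x = np.linspace(0.0, L, N, endpoint=False)
dx = L / N
a = 2 * np.pi / L

# A smooth periodic test profile and its exact derivative
h = np.sin(a * x) + 0.3 * np.cos(3 * a * x)
dh = a * np.cos(a * x) - 0.9 * a * np.sin(3 * a * x)

# Real-space side of Eq. 10.34
lhs = np.sum(dh**2) * dx

# Fourier side: h_k = integral dx e^{-ikx} h(x), then (1/L) sum_k k^2 |h_k|^2
h_k = np.fft.fft(h) * dx
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
rhs = np.sum(k**2 * np.abs(h_k) ** 2) / L
```

For a band-limited profile like this one the discrete transform is exact, so lhs and rhs agree to machine precision, showing that the matrix M(~x) really has been replaced by the number k².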

For the partition function we need to sum out all the states Σ_{states} using the Fourier modes ĥ_k rather than h(~x). This is a little subtle because h(~x) is real while ĥ_~k is complex. Of course not all the components are independent, since ĥ_k = ĥ*_{−k}, so we double count each mode. To do the sum properly we have

Σ_{states} = ∏_{~x} ∫ dh(~x) = ∏′_{~k} ∫ d²ĥ_~k   (10.36)

where

d²ĥ_k = d(Re ĥ_k) d(Im ĥ_k)

and ∏′_{~k} denotes restriction to half of ~k space, to avoid double counting. For the most part, this prime plays no role in the algebra below. Recall that

Figure 10.12: The restricted product ∏′ integrates over half of ~k space; for example, k_y > 0 only.

the only nontrivial moment is

G(x) = ⟨h(~x)h(0)⟩ = ⟨h(~x + ~x′)h(~x′)⟩

from Eq. 10.23 above. This has a simple consequence for the Fourier mode correlation function ⟨ĥ(~k) ĥ(~k′)⟩. Note

⟨ĥ(~k) ĥ(~k′)⟩ = ∫ d^{d−1}~x ∫ d^{d−1}~x′ e^{−i~k·~x − i~k′·~x′} ⟨h(~x) h(~x′)⟩
 = ∫ d~x ∫ d~x′ e^{−i~k·~x − i~k′·~x′} G(~x − ~x′)

Let,

~y = ~x − ~x′ ,  ~y′ = (~x + ~x′)/2 ,  so that  ~x = ~y′ + ~y/2 ,  ~x′ = ~y′ − ~y/2

so that d~x d~x′ = d~y d~y′, and

⟨ĥ(~k) ĥ(~k′)⟩ = ∫ d~y e^{−i(~k−~k′)·~y/2} G(y) × ∫ d~y′ e^{−i(~k+~k′)·~y′}

so,

⟨ĥ(~k) ĥ(~k′)⟩ = ∫ d~y e^{−i~k·~y} G(y) (2π)^{d−1} δ(~k + ~k′)
 = [ ∫ d~x e^{−i~k·~x} G(~x) ] (2π)^{d−1} δ(~k + ~k′) ,  letting ~y → ~x

or

⟨ĥ(~k) ĥ(~k′)⟩ = Ĝ(~k) (2π)^{d−1} δ(~k + ~k′)   (Continuum)
⟨ĥ(~k) ĥ(~k′)⟩ = Ĝ_~k L^{d−1} δ_{~k+~k′, 0}   (Discrete)   (10.37)

Hence, in Fourier space, the only non–zero Fourier correlation is

⟨ĥ_~k ĥ_{−~k}⟩ = ⟨|ĥ_~k|²⟩

which is directly related to Ĝ_~k. Using the probability

ρ_state ∝ exp( −(σ/2k_B T L^{d−1}) Σ_{~k} k² |ĥ_k|² )

we have,

⟨|ĥ_~k|²⟩ = [ ∏′_{~k′} ∫ d²ĥ_{k′} |ĥ_~k|² exp( −(σ/2k_B T L^{d−1}) Σ_{~k′′} k′′² |ĥ_{k′′}|² ) ] / [ ∏′_{~k′} ∫ d²ĥ_{k′} exp( −(σ/2k_B T L^{d−1}) Σ_{~k′′} k′′² |ĥ_{k′′}|² ) ]   (10.38)

Except for the integrals involving ĥ_~k, everything in the numerator and denominator cancels out. We have to be careful with real and imaginary parts though. Consider

Σ_k k² |ĥ_k|² = Σ_k k² ĥ_k ĥ_{−k}

For the particular term involving ĥ_{15}, for example,

(15)² ĥ_{15} ĥ_{−15} + (−15)² ĥ_{−15} ĥ_{15} = 2 (15)² ĥ_{15} ĥ_{−15}

giving a factor of 2. In any case,

⟨|ĥ_~k|²⟩ = [ ∫ d²ĥ_~k |ĥ_k|² e^{−(2σ k²/2k_B T L^{d−1}) |ĥ_k|²} ] / [ ∫ d²ĥ_k e^{−(2σ k²/2k_B T L^{d−1}) |ĥ_k|²} ]

(These steps are given in Goldenfeld's book in Section 6.3, p. 174.) Now let

ĥ_~k = R e^{iθ}

so that

⟨|ĥ_k|²⟩ = [ ∫₀^∞ R dR R² e^{−R²/a} ] / [ ∫₀^∞ R dR e^{−R²/a} ]

where

a = (k_B T/σk²) L^{d−1}   (10.39)

A little fiddling gives

⟨|ĥ_k|²⟩ = a = (k_B T/σk²) L^{d−1}
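The "little fiddling" is the elementary ratio of Gaussian moments ∫₀^∞ R³ e^{−R²/a} dR / ∫₀^∞ R e^{−R²/a} dR = a (the substitution u = R²/a gives a²/2 over a/2). A quick numerical sanity check, with an arbitrary value of a chosen for the sketch:

```python
import numpy as np

a = 0.37                              # arbitrary positive constant for the check
R = np.linspace(0.0, 40.0 * np.sqrt(a), 400001)
dR = R[1] - R[0]

def integrate(f):
    """Simple trapezoid rule on the uniform grid R."""
    return dR * (f.sum() - 0.5 * (f[0] + f[-1]))

weight = np.exp(-R**2 / a)
mean_R2 = integrate(R**3 * weight) / integrate(R * weight)
```

The ratio mean_R2 reproduces a to high accuracy, in agreement with Eq. 10.39.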

Hence we obtain, from Eq. 10.37,

⟨ĥ(~k) ĥ(~k′)⟩ = Ĝ(~k) (2π)^{d−1} δ(~k + ~k′)

where

Ĝ(~k) = (k_B T/σ) (1/k²)  and  G(x) = ⟨h(~x)h(0)⟩   (10.40)

From this correlation function we will obtain the width of the surface. We will

Figure 10.13: W, the width of the surface.

define the width to be the root–mean–square of the height fluctuations averaged over the ~x plane. That is

w² = (1/L^{d−1}) ∫ d^{d−1}~x ⟨(h(~x) − ⟨h(~x)⟩)²⟩   (10.41)

which is overcomplicated for our case, since ⟨h(~x)⟩ = 0, and ⟨h(~x)h(~x)⟩ = G(0), from above. So we have the simpler form

w² = G(0)   (10.42)

From the definition of the Fourier transform,

G(0) = ∫ (d^{d−1}~k/(2π)^{d−1}) (k_B T/σk²)   (10.43)

which has a potential divergence as ~k → 0, or ~k → ∞. We shall always restrict our integrations so that

2π/L ≤ |~k| ≤ Λ   (10.44)

where the lower limit is set by the system size and the upper limit by the lattice constant. Then it is easy to do the integral of Eq. 10.43. First in d = 2,

G(0) = (k_B T/σ) (1/2π) ∫_{−Λ}^{Λ} dk (1/k²) = (k_B T/πσ) ∫_{2π/L}^{Λ} dk (1/k²)

and

G(0) = (k_B T/2π²σ) L ,  d = 2

so

w = √(k_B T/2π²σ) L^{1/2} ,  d = 2   (10.45)

Similarly, in d = 3,

G(0) = (k_B T/σ) (1/4π²) ∫_{2π/L}^{Λ} (2π k dk/k²)

and

G(0) = (k_B T/2πσ) ln (LΛ/2π) ,  d = 3

so,

w = √(k_B T/2πσ) (ln (LΛ/2π))^{1/2} ,  d = 3   (10.46)

It is easy to obtain the L dependence as the dimension of space varies. One obtains,

w = undefined ,  d ≤ 1
w ∼ L^X ,  1 < d < 3
w ∼ (ln L)^{1/2} ,  d = 3
w = const. ,  d > 3   (10.47)

where

X = (3 − d)/2   (10.48)

is called the roughening exponent. The fact that w is undefined for d ≤ 1 is explained below. Eq. 10.47 implies the interface is rough, with a width which increases with L, for d ≤ 3. However, note that the width (for d > 1) is not large compared to the system

Figure 10.14: The width W ∼ L^{1/2} in d = 2 remains small compared to the system size L as L → ∞.

itself, as L → ∞:

w/L = 1/L^{1−X} = 1/L^{(d−1)/2}   (10.49)

Hence the width is small for d > 1. In d = 1, it appears the width is comparable to the system size! This means the interface — and indeed phase coexistence itself — does not exist in d = 1. This is one signature of a lower critical dimension, at and below which phase coexistence cannot occur at any nonzero temperature. In general, one defines the roughening exponent X by

w ∼ L^X   (10.50)

as L → ∞. This is equivalent to the definition of the surface correlation exponent η_s,

Ĝ(k) ∼ 1/k^{2−η_s}   (10.51)

as k → 0. (Here η_s = 0, of course.) The two exponents are related by the formula

X = (3 − d)/2 − η_s/2   (10.52)

The surface turns out to be self–affine, in the way we discussed earlier for the Von Koch snowflake, since correlation functions like Ĝ(~k) are power–law like. The self–affine dimension of the rough surface is

d_s = X + d − 1   (10.53)

To obtain the correlation function in real space is a little awkward because of the divergence of G(0) as L → ∞. For convenience, consider

g(x) ≡ ⟨(h(~x) − h(0))²⟩   (10.54)

Multiplying this out gives

g(x) = 2(⟨h²⟩ − ⟨h(~x)h(0)⟩)

This is simply

g(x) = 2(G(0) − G(x)) = 2(w² − G(x))   (10.55)

The interpretation of g(x) from Eq. 10.54 is simple: it is the (squared) difference in the heights between two points a distance x apart, as shown in fig. 10.15. Not surprisingly, we expect (for large L)

g(x → L) = 2w²   (10.56)

since G(L) = ⟨h(L) h(0)⟩ = ⟨h(L)⟩⟨h(0)⟩ = 0, because the points 0 and L are so far apart as to be uncorrelated. In any case, we have

g(x) = 2 (k_B T/σ) ∫ (d^{d−1}~k/(2π)^{d−1}) (1 − e^{i~k·~x})/k²
 = 2 (k_B T/σ) ∫ (d^{d−1}~k/(2π)^{d−1}) (1 − cos ~k·~x)/k²   (10.57)

Figure 10.15: Two points on the surface, at heights h(~x) and h(~x′); √(g(~x − ~x′)) is their typical height difference.

Consider d = 2:

g(x) = 2 (k_B T/σ) (1/2π) ∫_{−Λ}^{Λ} dk (1 − cos kx)/k²
 = 2 (k_B T/πσ) ∫_{2π/L}^{Λ} dk (1 − cos kx)/k²

Let u = kx:

g(x) = 2 (k_B T/πσ) x ∫_{2πx/L}^{Λx} du (1 − cos u)/u²

Let Λ → ∞ and L → ∞ in the integral (for convenience):

g(x) = 2 (k_B T/πσ) x ∫₀^∞ du (1 − cos u)/u²

The remaining integral is π/2, so,

g(x) = (k_B T/σ) x ,  d = 2   (10.58)
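The step used above is the classical result ∫₀^∞ (1 − cos u)/u² du = π/2 (integration by parts reduces it to the Dirichlet integral ∫₀^∞ (sin u/u) du = π/2). A quick numerical check, where the cutoff U, the grid, and the 1/U tail correction are choices made for the sketch:

```python
import numpy as np

U = 2000.0                           # truncation of the upper limit
N = 4_000_000
du = U / N
u = du * np.arange(1, N + 1)         # grid u = du, 2*du, ..., U
f = (1.0 - np.cos(u)) / u**2         # integrand; it tends to 1/2 as u -> 0

# Trapezoid on [du, U], plus the small-u piece (f ~ 1/2 there),
# plus the truncated tail, which is approximately 1/U.
integral = du * (f.sum() - 0.5 * (f[0] + f[-1])) + 0.5 * du + 1.0 / U
```

The result agrees with π/2 to a few parts in 10⁷, the residual error coming from the oscillatory part of the neglected tail.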

In d = 3,

g(x) = 2 (k_B T/σ) (1/4π²) ∫₀^{2π} dθ ∫_{2π/L}^{Λ} k dk (1 − cos(kx cos θ))/k²

This is a little more involved, and I will just quote the answer:

g(x) = (k_B T/4πσ) ln xΛ ,  d = 3   (10.59)

In general, one obtains

g(x) = undefined ,  d = 1
g(x) ∼ x^{2X} ,  1 < d < 3
g(x) ∼ ln x ,  d = 3
g(x) = const. ,  d > 3   (10.60)

where, again, X = (3 − d)/2. This confirms our earlier claim that correlations are power–law like. (There is a subtlety here involving the limits which deserves a minute of time. In the limit x → L one has g ∼ L^{2X}, but the constant is not the same as the 2w² derived above. Reconsider the integral of Eq. 10.58:

g(x) = 2 (k_B T/πσ) x ∫_{2πx/L}^{Λx} du (1 − cos u)/u²

Now just let Λ = ∞. Rewriting gives

g(x) = 2 (k_B T/πσ) x { ∫₀^∞ du (1 − cos u)/u² − ∫₀^{2πx/L} du (1 − cos u)/u² }
 = 2 (k_B T/πσ) x { π/2 − ∫₀^{2πx/L} du (1 − cos u)/u² }

or, for d = 2,

g(x, L) = (k_B T/σ) x f(x/L)   (10.61)

where

f(x) = 1 − (2/π) ∫₀^{2πx} du (1 − cos u)/u²   (10.62)

Note that

f(x → ∞) = 0  but  f(x → 0) = 1

This is called a scaling function. End of digression.)

Figure 10.16: The scaling function f(x/L), decreasing from f = 1 at x/L = 0 toward f = 0 as x/L → ∞.
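The scaling function of Eq. 10.62 is easy to tabulate numerically, and it shows exactly the limits quoted above. A small sketch (the grid size is an arbitrary choice):

```python
import numpy as np

def f_scaling(s, n=2_000_000):
    """Scaling function of Eq. 10.62:
    f(s) = 1 - (2/pi) * integral_0^{2*pi*s} du (1 - cos u)/u^2."""
    upper = 2.0 * np.pi * s
    if upper == 0.0:
        return 1.0
    du = upper / n
    u = du * np.arange(1, n + 1)
    g = (1.0 - np.cos(u)) / u**2
    # trapezoid on [du, upper], plus the small-u piece where g -> 1/2
    integral = du * (g.sum() - 0.5 * (g[0] + g[-1])) + 0.5 * du
    return 1.0 - (2.0 / np.pi) * integral
```

Evaluating f at a few arguments confirms a monotonic decrease from 1 (for x << L) toward 0 (for x >> L).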

We are pretty well done now, except we shall calculate the “renormalized” surface tension from the partition function. Recall from Eq. 10.21

Z = e^{−σL^{d−1}/k_B T} ∏_{~x} ∫ dh(~x) exp( −(σ/2k_B T) ∫ d^{d−1}~x (∂h/∂~x)² )

Switching to Fourier space using Eqs. 10.34 and 10.36 gives

Z = e^{−σL^{d−1}/k_B T} ∏′_{~k} ∫ d²ĥ_k exp( −(σ/2k_B T L^{d−1}) Σ_{~k} k² |ĥ_k|² ) ≡ e^{−F/k_B T}   (10.63)

So calculating Z gives the thermodynamic free energy F. We will call the "renormalized", or more properly the thermodynamic, surface tension σ*:

σ* ≡ (∂F/∂A)_{T,V,N}

or,

σ* = (∂F/∂L^{d−1})_{T,V,N}   (10.64)

Going back to Eq. 10.63,

Z = e^{−σL^{d−1}/k_B T} ∏′_{~k} ∫ d²ĥ_~k e^{−(σ k²/2k_B T L^{d−1}) |ĥ_k|²}

since all modes are independent. But

∫ d²ĥ_~k = ∫_{−∞}^{∞} d(Re ĥ_~k) ∫_{−∞}^{∞} d(Im ĥ_~k)

or, if ĥ_~k = R e^{iθ},

∫ d²ĥ_k = ∫₀^{2π} dθ ∫₀^∞ R dR   (10.65)

Hence

Z = e^{−σL^{d−1}/k_B T} ∏′_{~k} ∫₀^{2π} dθ ∫₀^∞ R dR e^{−(σ k²/2k_B T L^{d−1}) R²}

Let u = (σ k²/2k_B T L^{d−1}) R², so

Z = e^{−σL^{d−1}/k_B T} ∏′_{~k} 2π [ 1/( 2 (σ k²/2k_B T L^{d−1}) ) ] ∫₀^∞ du e^{−u}
 = e^{−σL^{d−1}/k_B T} ∏′_{~k} (2π k_B T L^{d−1}/σ k²)
 = e^{−σL^{d−1}/k_B T} exp( Σ′_{~k} ln (2π k_B T L^{d−1}/σ k²) )
 = exp( −σL^{d−1}/k_B T + Σ′_{~k} ln (2π k_B T L^{d−1}/σ k²) )

Z = exp( −σL^{d−1}/k_B T + (1/2) Σ_{~k} ln (2π k_B T L^{d−1}/σ k²) )   (10.66)

(the restricted sum over half of ~k space becoming half the unrestricted sum). But Z = exp(−F/k_B T), up to a constant, so

F = σL^{d−1} − (k_B T/2) Σ_{~k} ln (2π k_B T L^{d−1}/σ k²)   (10.67)

is the free energy due to the surface. The second term gives the contribution due to fluctuations. To take the derivative with respect to ∂/∂L^{d−1}, it is important to note that the ~k's get shifted with L, and that it is the combination (kL) which is not a function of L. So to take the derivative, we rewrite

Figure 10.17: A mode of wavelength 2π/k shifts to 2π/k′ as L → L + dL.

F = σL^{d−1} − (k_B T/2) Σ_{~k} ln (2π k_B T L^{d+1}/σ (kL)²)

or,

F = σL^{d−1} − (k_B T/2) Σ_{~k} ln (L^{d−1})^{(d+1)/(d−1)} − (k_B T/2) Σ_{~k} ln (2π k_B T/σ (kL)²)

or,

F = σL^{d−1} − (k_B T/2) ((d+1)/(d−1)) Σ_{~k} (ln L^{d−1}) − (k_B T/2) Σ_{~k} ln (2π k_B T/σ (kL)²)   (10.68)

The last term is independent of L^{d−1}, so taking the derivative (∂F/∂L^{d−1})_{T,V,N} = σ* gives,

σ* = σ − (k_B T/2) ((d+1)/(d−1)) (1/L^{d−1}) Σ_{~k}   (Discrete)

or,

σ* = σ − (k_B T/2) ((d+1)/(d−1)) ∫ (d^{d−1}~k/(2π)^{d−1})   (Continuum)   (10.69)

which is the renormalized surface tension. Note that fluctuations decrease the value of σ to the lower σ*.

In d = 2:  ∫ (d^{d−1}~k/(2π)^{d−1}) = (1/2π) ∫_{−Λ}^{Λ} dk = Λ/π   (10.70)

In d = 3:  ∫ (d^{d−1}~k/(2π)^{d−1}) = (1/4π²) ∫₀^{2π} dθ ∫₀^{Λ} k dk = πΛ²/4π² = Λ²/4π   (10.71)

In d = 1:  ∫ (d^{d−1}~k/(2π)^{d−1}) = 1   (10.72)

σ* = σ − (3k_B T/2π) Λ ,  d = 2

σ* = σ − (k_B T/4π) Λ² ,  d = 3

and since d = 1 is odd, let d = 1 + ǫ, for very small ǫ, and

σ* = σ − k_B T/ǫ ,  d = 1 + ǫ ,  ǫ << 1   (10.73)

(It is worth noting that this is the first term in the d = 1 + ǫ expansion, where we have only considered the leading term in E_state = σ ∫ d^{d−1}~x √(1 + (∂h/∂~x)²) ≈ σL^{d−1} + (σ/2) ∫ d^{d−1}~x (∂h/∂~x)² + ...)

10.3 Impossibility of Phase Coexistence in d=1

Note that, again, the renormalized surface tension vanishes as d → 1 (as ǫ → 0). In our simple calculation σ* actually becomes negative infinity at d = 1. In fact, the lack of phase coexistence in d = 1 is a theorem due to Landau and Peierls, which is worth quickly proving. (See Goldenfeld's book, p. 46.) A homogeneous phase has an energy

E_Bulk ∝ +L^d   (10.74)

Putting in a surface gives an extra energy

E_Surface ∝ +L^{d−1}   (10.75)

So the extra energy one gets due to adding a surface is

∆E_{1 Surface} ∝ +L^{d−1}   (10.76)

In d = 1,

∆E_{1 Surface} ∝ +O(1)

Figure 10.18: Two coexisting phases with one interface: E_bulk ∝ L^d, E_surface ∝ L^{d−1}.

Basically, this looks bad for surfaces. Since the energy increases, it looks like one wants to have the fewest possible surfaces, so the picture in fig. 10.18 is the stable one: two coexisting phases with one well–defined surface. However, the system's state is chosen by free energy minimization, not energy minimization, where

F = E − TS

Surfaces are good for entropy because they can occur anywhere along the y axis, as shown. The total number of such ways is L along the y axis. Of course there

Figure 10.19: The interface can sit at any of L positions along the y axis: one, or another, or another, etc.

are all rotations and upside–down interfaces, so that in the end there are L^d ways. The entropy is

∆S = k_B ln (number of states)

so we obtain, for a surface,

∆S = +k_B ln L^d

or,

∆S = +k_B d (ln L)   (10.77)

which is the increase in entropy due to having a surface. Hence, the total change in the free energy due to a surface is

∆F = +O(L^{d−1}) − k_B T O(ln L)   (10.78)
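The competition in Eq. 10.78 can be made concrete with a two-line function (the unit choices k_B = 1 and an O(1) surface-energy prefactor c are assumptions of the sketch):

```python
import math

def delta_F(L, d, T=1.0, c=1.0):
    """Free-energy change for adding one interface, Eq. 10.78:
    surface energy ~ c * L^(d-1), entropy gain ~ k_B * d * ln L (k_B = 1)."""
    return c * L ** (d - 1) - T * d * math.log(L)
```

For any large L, delta_F is positive in d = 2 (one interface is stable), but negative and ever more so as L grows in d = 1 (interfaces proliferate, destroying coexistence).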

So, for d > 1, L → ∞, the free energy increases if a surface is added, but for d = 1 (at T > 0) the free energy decreases if a surface is added! This means coexisting phases are stable for d > 1, but in d = 1 they are unstable. Since one interface decreases F, we can add another and another and another, and keep on decreasing F until there are no longer coexisting phases (of bulk energy E ∼ L^d as L → ∞), just a mishmash of tiny regions of phase fluctuations. Hence, for systems with short–ranged forces, at T > 0, there can be no phase coexistence in d = 1. (Short–ranged forces were assumed when we said the surface energy E_Surface ∼ L^{d−1}.)

10.4 Numbers for d=3

Some of these things may seem a little academic, since we have constantly done everything in arbitrary d. Our results for d = 2 apply to films adsorbed on liquid surfaces and to ledges on crystal surfaces, for example. Experiments have

Figure 10.20: Experimental d = 2 systems: films adsorbed on liquid surfaces and ledges on crystal surfaces.

P s l Pc Critical point

triple point v T Tc

Figure 10.21: Phase Diagram for example, the behavior along the liquid-vapour coexistance line. Near the triple point of a typical ﬂuid, the surface tension is such that

k T B t 1A˙ (10.82) r 2πσ ≈ where Tt is the triple point temperature. Similarly, one roughly has Λ 10A˙ (10.83) 2π ≈ Putting these in Eq. 10.79 above implies

L w = 1A˙(ln )1/2 (10.84) 10A˙ Putting in numbers is amusing. The width divergence is very weak.

w = 2.6 Å for L = 10000 Å (light wavelength)
w = 4.3 Å for L = 10 cm (coffee cup)
w = 5.9 Å for L = 1000 km (ocean)
w = 9.5 Å for L = 10^{30} m (universe)

Of course, in a liquid–vapour system, gravity plays a role. (Certainly it plays a role if one had a liquid–vapour system the size of the universe.) The extra energy due to gravity is

E_Gravity = (1/2) ∫ d^{d−1}~x (ρ_l − ρ_v) g h²(x)

where ρ is the mass density of the liquid or vapour, and g is the acceleration due to gravity. This changes our results in a way you should work out for yourself. Putting in numbers for the renormalized surface tension gives the result

σ*(T = T_t) ≈ σ (0.3)

which is a much bigger effect! Note that σ* vanishes (assuming σ ≠ σ(T)) at T = T*, as T is increased. It is tempting, although a little too rough, to interpret

T* = T_c

Putting in numbers gives T* ≈ 1.4 T_t, which is T_c to the right order. More precisely, it is known that

σ* = const. (1 − T/T_c)^μ

for T → T_c, where μ ≈ 1.24 in d = 3. So our theory, and (over)–interpretation, gives μ = 1 instead. This is not too bad, considering we only went to lowest order in the E_state = σ ∫ d~x √(1 + (∂h/∂~x)²) expansion, and did not consider bulk properties at all.

Chapter 11

Broken Symmetry and Correlation Functions

A major concept in modern work is symmetry. These results for surfaces are an example of more general properties of correlations in systems with broken symmetries. By a broken symmetry, we mean the following. Note that in the partition function

Z = Σ_{states} e^{−E_state/k_B T}

the energy of a state has a "symmetry",

E_state(h + const.) = E_state(h)   (11.1)

since

E_state = σL^{d−1} + (σ/2) ∫ d^{d−1}~x (∂h/∂~x)²

Naively, since the sum Σ_{states} does not break this symmetry, we would expect it to be respected by any quantity obtained from Z. The two main quantities we have obtained are the moments ⟨h⟩ and ⟨hh⟩. As we found before,

⟨h⟩ = 0   (11.2)

In fact we could have the average value be any result, without changing our algebra. Note that h's expectation value is a definite number. In particular

⟨h + const.⟩ ≠ ⟨h⟩

This is a common signature of a broken symmetry variable: its correlations and averages do not respect the full symmetry of Z or E_state. It is also instructive to think of the symmetry of Eq. 11.1 in a more abstract way. Note that homogeneous systems, whether liquid or vapour, are translationally invariant and rotationally invariant. However, a system with a surface,


Figure 11.1: A system with an interface.

as drawn in fig. 11.1, only has translational invariance and isotropy in the ~x plane of the interface; there is no invariance in the y direction. Hence the system with a surface has a lower symmetry (a broken symmetry) compared to the liquid or vapour system. This perfect symmetry of a homogeneous system is lost precisely because h, the broken symmetry variable, has a definite value, like ⟨h⟩ = 0. It is instructive to see that the fact that Z and E_state have this symmetry implies the form of the correlation functions of the broken symmetry variable h. Consider d = 2 only. Since a surface can be anywhere for a given system (it only has a definite value for a given system), it can be in different places for different systems. In a very large system, part of ⟨h⟩ might have one value, while far far

Figure 11.2: Far–apart regions of a large system, where ⟨h⟩ can take different values.

away on the x axis it might want to have another value. The free energy cost of such a fluctuation, if it involves very large length scales (equivalent to k → 0), is infinitesimal:

∆E_state(k) = (σ/2) k² |ĥ_k|²

as k → 0. Indeed, if one has a very very large system encompassing the four shown in fig. 11.2, the four subsystems behave like independent systems which

Figure 11.3: Surface of a very big system.

each have a definite value for ⟨h⟩. Since the energy cost of such fluctuations is infinitesimal as k → 0, we can set up long–wavelength sloshing modes with infinitesimal k_B T. Roughly,

k_B T/2 = (σ/2) k² |ĥ_k|²

and

⟨|ĥ_k|²⟩ ∼ (k_B T/σ) (1/k²)

as we recall. These sloshing modes are capillary waves, driven by thermal noise. But we need not know the form of E_state to obtain ⟨|ĥ_k|²⟩ ∼ 1/k². We only need to know that h breaks a symmetry of Z and E_state. As shown in fig. 11.3, fluctuations occur, say ±∆y, over a long distance ∆x. The values of ∆y (which could be ∼ Å) and ∆x (which could be ∼ meters) are not important for the argument. Let us invent a model where, after one goes a distance ∆x to the right, one goes up or down ∆y, at random, as shown in fig. 11.4. Of

Figure 11.4: Interface as a random walk (in units of ∆x and ∆y), stepping between liquid and vapour.

course, this is a one–dimensional random walk along the y axis in "time" x. The root–mean–square deviation of y from its initial value is

y_RMS ∼ x^{1/2}

for large "time" x. Of course y_RMS is related to the correlation function (g(x))^{1/2} from Eq. 10.54, so we have

g(x) ∼ x

Fourier transforming, and fiddling via Eqs. 10.55 and 10.56, gives Ĝ(k) ∼ 1/k² for small k, so that

⟨|ĥ_k|²⟩ ∼ 1/k²

as expected. Our argument was only in d = 2, but it can be generalized to other dimensions. Note that it was the fact that h was a broken symmetry variable which gave rise to the 1/k²; no special details of the interaction were necessary. Before obtaining the general result, note that the system reacts to a symmetry-breaking variable by "shaking it" in such a way as to try and restore the

Figure 11.5: A symmetry-breaking variable ends up having its fluctuations (w ∼ L^X) try to restore the symmetry.

broken symmetry. The rough width of a surface is this attempt to restore the full translational invariance in the y direction, as shown in fig. 11.5. In fact, in d = 1, the fluctuations are so strong that the full symmetry is restored, and one has translational invariance in the y direction (since coexisting phases cannot exist). In 1 < d ≤ 3, the fluctuations do not restore the symmetry, but they make the surface rough, or diffuse, as L → ∞. In d > 3, fluctuations do not affect the flat interface. Finally, it should be noted that the cartoon on p. 64 is self–similar and self–affine, in the same sense as the von Koch snowflake we discussed earlier. Note the important new ingredient: broken symmetry ⇔ power–law correlations ⇔ self–similar structure.
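The random-walk argument above is easy to check by direct simulation (the step counts and trial numbers are arbitrary choices for the sketch):

```python
import math
import random

def rms_deviation(n_steps, n_walks=5000, seed=7):
    """RMS deviation of a 1D random walk after n_steps unit steps,
    modelling the interface height y versus 'time' x (fig. 11.4)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        y = sum(rng.choice((-1, 1)) for _ in range(n_steps))
        total += y * y
    return math.sqrt(total / n_walks)
```

Quadrupling the "time" x doubles y_RMS, i.e. y_RMS ∼ x^{1/2}, and hence g(x) = y_RMS² ∼ x, which is all the broken-symmetry argument needs.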

11.1 Proof of Goldstone’s Theorem

We shall now prove Goldstone's theorem. The theorem is: if B is a broken symmetry variable, where B(~r) is B's local value (~r is a field point in d dimensions), then

⟨B̂(~k) B̂(~k′)⟩ = (2π)^d δ(~k + ~k′) Ĝ(k)

where

lim_{k→0} Ĝ(k) → ∞

and usually

Ĝ(k) ∝ 1/k²

We will give not so much a proof of this as an indication of how to work it out for whatever system one is interested in. To make things easier, say

⟨B⟩ = 0

If this is not true, redefine B ⇒ B − ⟨B⟩. Assuming one has identified B as an important broken symmetry variable, then the free energy's dependence on B can be found by a Taylor expansion:

F(B) = F(0) + (∂F/∂B) B + (1/2)(∂²F/∂B²) B² + ...

But F(0) is just a constant, and ∂F/∂B = 0 at equilibrium, so

F(B) = (1/2)(∂²F/∂B²) B² + ...   (11.3)

Of course, by F we mean E_state or F_state in the sense of Eq. 10.15. Now, it turns out that it is important to deal with the local value of B(~r), or the Fourier component B̂(~k). If we use the local value for the expansion in Eq. 11.3 we have,

F(B) = (1/2) ∫ d^d~r ∫ d^d~r′ (∂²F/∂B(~r)∂B(~r′)) B(~r) B(~r′)   (11.4)

which looks worse than it is. Since ∂²F/∂B(~r)∂B(~r′) is a thermodynamic derivative of some kind, in a system which is (we assume) rotationally invariant and translationally invariant,

∂²F/∂B(~r)∂B(~r′) ≡ C(~r, ~r′) = C(~r − ~r′)   (11.5)

In Fourier space

F(B) = (1/2) ∫ (d^d~k/(2π)^d) ∫ (d^d~k′/(2π)^d) (∂²F/∂B̂(~k)∂B̂(~k′)) B̂(~k) B̂(~k′)   (11.6)

But translational invariance gives

∂²F/∂B̂(~k)∂B̂(~k′) = (2π)^d δ(~k + ~k′) Ĉ(k)   (11.7)

and, if B(~x) is real, so B̂*(−k) = B̂(k), we obtain (finally)

F(B) = (1/2) ∫ (d^d~k/(2π)^d) Ĉ(k) |B̂(k)|²   (Continuum)
F(B) = (1/2L^d) Σ_{~k} Ĉ_k |B̂_k|²   (Discrete)   (11.8)

We can obtain the form of Ĉ(~k) from expanding close to ~k = 0:

Ĉ(~k) = (Const.) + (Const.) k² + (Const.) k⁴ + ...   (11.9)

where we have assumed everything is analytic near k = 0. Hence terms like ~k·~k = k² and (~k·~k)² = k⁴ appear, but not nonanalytic terms like (~k·~k)|~k| = k³ (which has the absolute value sign). Finally, we use the fact that B breaks a symmetry of F. Hence F can have no dependence on B in the thermodynamic limit ~k = 0. Therefore

$$\lim_{k\to 0}\hat C(k) = 0 \qquad (11.10)$$
and, if we have analyticity,

$$\hat C(k) = \text{const.}\,k^2 \qquad (11.11)$$
for a broken-symmetry variable near $\vec k = 0$. This gives

$$F(B) = \text{const.}\sum_{\vec k} k^2\,|\hat B_k|^2 \qquad (11.12)$$
This is the same as Eq. 10.35, so we have

$$\langle\hat B(\vec k)\,\hat B(\vec k')\rangle = (2\pi)^d\delta(\vec k + \vec k')\,\hat G(k)$$
where
$$\hat G(k) \propto \frac{1}{k^2}, \qquad \vec k \to 0$$
as claimed. If $\hat C(k)$ is not analytic, as assumed in Eq. 11.9, we get the weaker condition
$$\lim_{k\to 0}\hat G(k) \to \infty$$
Finally, it is amusing to note that if $B$ is not a broken-symmetry variable, from Eq. 11.9 we have

$$F(B) = \text{const.}\sum_{\vec k}(k^2 + \xi^{-2})\,|\hat B_k|^2$$
where $\xi^{-2}$ is a constant. This gives $\langle\hat B(\vec k)\hat B(\vec k')\rangle = (2\pi)^d\delta(\vec k + \vec k')\hat G(k)$, where now $\hat G(k) \propto 1/(k^2 + \xi^{-2})$ as $k \to 0$. Fourier transforming gives
$$\langle B(\vec r)B(0)\rangle = G(r) \propto e^{-r/\xi}$$
as $r \to \infty$, as we anticipated earlier when correlations were discussed.

Chapter 12

Scattering

Figure 12.1: Scattering geometry: incident and final wavevectors $\vec k_i$ and $\vec k_f$.

Scattering experiments evidently involve the change in momentum of some incident particle or radiation by a sample. The momentum transferred to the sample (in units of $\hbar$) is
$$\vec q = \vec k_i - \vec k_f \qquad (12.1)$$
by conservation of momentum. This, along with the intensity of the scattered radiation
$$I(\vec q) \qquad (12.2)$$
is measured at the detector. We shall only consider elastic scattering, where the energy $\omega$ (in units of $\hbar$) is unchanged. Furthermore, we are implicitly considering the first Born approximation, and thereby neglecting multiple scattering. The scattered radiation might be an electromagnetic field. That is,
$$I(\vec q) \propto |\hat E(q)|^2 \qquad (12.3)$$
where $\hat E$ is the electric field. Clearly the electric field variations with $\vec q$ are due to variations in the dielectric properties of the sample, i.e.,

$$\hat E(q) \propto \delta\hat\epsilon(q) \qquad (12.4)$$
where $\delta\hat\epsilon$ is the dielectric variability. Now, if $\delta\hat\epsilon$ is a function of only a small number of local thermodynamic variables, like density and temperature,

$$\delta\hat\epsilon = \left(\frac{\partial\epsilon}{\partial\rho}\right)\delta\hat\rho + \left(\frac{\partial\epsilon}{\partial T}\right)\delta\hat T \qquad (12.5)$$

Usually $\partial\epsilon/\partial T$ is exceedingly small, so we have

$$I(q) \propto |\hat\rho(q)|^2 \qquad (12.6)$$
A similar argument can be constructed if the scattering is not electromagnetic, giving the same result. We'll now focus on $|\hat\rho(q)|^2$, letting
$$S(q) \equiv |\hat\rho(q)|^2 \qquad (12.7)$$
be the structure factor. I should note that, although I've skimmed over this, the details giving Eq. 12.6 are often experimentally important. But that's the experimentalists' job; ours is to figure out $S(q)$. We will deal with two cases for $\hat\rho(q)$:

1. $\rho$ corresponds to the order parameter, a local density or local concentration.

2. The order parameter is not the local density, but instead corresponds to a sublattice concentration or magnetization, or to a Bragg peak’s intensity.

The first of these is easier. Consider a system of uniform density (we'll do interfaces later); then
$$\langle\rho(x)\rho(0)\rangle = \rho^2 \qquad (12.8)$$
of course, so the Fourier transform gives

$$S(q) \propto \delta(q) \qquad (12.9)$$
A cartoon (fig. 12.2) may make this clear. The second case is more subtle. An example is the sublattice concentration of a binary alloy. Say the atoms are A and B in the alloy, and that locally A wants to be neighbours with B, but not with another A. This makes the ordered state look like a checker-board, as shown in fig. 12.3. Assuming the scattering is different from A and B atoms, this sort of order can be seen by looking at $S(q)$. However, it must be realized that the ordering of, say, the A atoms is on a sublattice, which has twice the lattice spacing of the original system, as shown in fig. 12.4. Hence the density of A atoms is uniform on a sublattice with twice the lattice constant of the original system, and the

Figure 12.2: A uniform, boring system: $\langle\rho(x)\rho(0)\rangle = \rho^2$, so $S(q) \propto \delta(q)$; the scattering is the shaded region at $q = 0$.

Figure 12.3: Perfectly ordered A-B alloy (checker-board).

Figure 12.4: Sublattice 1 (shaded) has lattice constant $2a$, twice that of the original lattice (sublattice 2 is unshaded).

Figure 12.5: Peaks in $S(q)$ at wavevectors $\pm q_0$ along the $q_x$ or $q_y$ axes.

scattering will show peaks not at $\vec q = 0$ but at $\vec q = \vec q_0$ corresponding to that structure:
$$S(\vec q) \propto \sum_{\vec q_0}\delta(\vec q - \vec q_0) \qquad (12.10)$$
where $q_0 = 2\pi/(2a)$, and $a$ is the original lattice spacing. The same thing happens in a crystal, where Bragg peaks form at specific positions in $\vec q$ space. By monitoring the height and width of such peaks, the degree of order can be determined. A further complication is that usually the sample will not be so well aligned with the incident radiation that one gets spots on the $q_x$ and $q_y$ axes as shown. Instead they appear at some random orientation (fig. 12.6). This might seem trivial (one could just realign the sample), except that often one has many different crystallite orientations scattering simultaneously, so that all the orientations indicated above are smeared into a ring of radius $q_0$.

Figure 12.6: Bragg spots appearing at random orientations in the $(q_x, q_y)$ plane.
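Since this is just Fourier analysis, the sublattice peak at $q_0 = 2\pi/(2a)$ is easy to see numerically. The following is a quick sketch (not from the notes; the lattice size is an arbitrary choice), using a perfect checker-board of A atoms:

```python
import numpy as np

# Sketch: scattering from a perfectly ordered A-B checker-board alloy on
# an N x N square lattice with spacing a = 1.  The A-atom density is 1 on
# one sublattice and 0 on the other, so S(q) = |rho_hat(q)|^2 should peak
# at the sublattice wavevector q0 = 2*pi/(2a), i.e. at (pi, pi), not q = 0.
N = 32
x, y = np.indices((N, N))
rho_A = ((x + y) % 2).astype(float)   # A atoms occupy one sublattice

S = np.abs(np.fft.fft2(rho_A)) ** 2
S[0, 0] = 0.0                         # discard the trivial q = 0 (bulk) peak

# All remaining weight sits at FFT index (N/2, N/2), i.e. q = (pi, pi):
peak = np.unravel_index(np.argmax(S), S.shape)
print(peak)                           # (16, 16) for N = 32
```

A disordered (random) occupation of the same lattice would instead spread this weight diffusely over all $\vec q$, which is how the degree of order shows up in the peak height.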

Figure 12.7: Average over many crystallites: a ring of radius $q_0$.

12.1 Scattering from a Flat Interface

Consider a flat interface without roughening, as drawn in fig. 12.8. For simplicity we will usually consider an interface embedded in dimension $d = 2$.

Figure 12.8: A flat interface at $y = 0$; grey-scale density $\rho(y)$, equal to 0 below the interface and 1 above.

If the density $\rho$ is uniform, it can be written as

$$\rho(x,y) = A\,\theta(y) + B \qquad (12.11)$$
where $\theta(y)$ is the Heaviside step function, and $A$ and $B$ are constants. For simplicity, we will let $A = 1$ and $B = 0$ from now on:

$$\rho(x,y) = \theta(y) \qquad (12.12)$$

To get $S(q)$ we need the Fourier transform of $\theta$. Let

$$\theta = H + C$$
where $C$ is a to-be-determined constant. Taking a derivative gives
$$\frac{d\theta}{dy} = \frac{dH}{dy} + 0$$
or,
$$\frac{dH}{dy} = \delta(y)$$
Fourier transforming gives
$$\hat H(k_y) = \frac{1}{ik_y}$$
so that
$$\hat\theta(k_y) = \frac{1}{ik_y} + C\,\delta(k_y)$$
To determine $C$, note that

$$\theta(y) + \theta(-y) = 1$$
so

$$\hat\theta(k_y) + \hat\theta(-k_y) = \delta(k_y)$$
Hence $C = 1/2$ and,
$$\hat\theta(k_y) = \frac{1}{ik_y} + \frac{1}{2}\delta(k_y) \qquad (12.13)$$
The two-dimensional Fourier transform of $\rho(x,y) = \theta(y)$ is therefore
$$\hat\rho(\vec k) = \frac{\delta(k_x)}{ik_y} + \frac{1}{2}\delta(\vec k) \qquad (12.14)$$
From this we obtain the scattering from a flat interface at $y = 0$ as the modulus,

$$S(k) = |\hat\rho(k)|^2 \propto \frac{\delta(k_x)}{k_y^2} \qquad (12.15)$$
where we have neglected the trivial bulk delta-function factor. If the unit vector normal to the interface is $\hat n$ and that along the surface is $\hat n_\perp$, then
$$S(k) \propto \frac{\delta(\vec k\cdot\hat n_\perp)}{(\vec k\cdot\hat n)^2} \qquad (12.16)$$
As a cartoon (fig. 12.10), the scattering appears as a streak pointed in the

Figure 12.9: The interface normal $\hat n$ ($= \hat y$ here) and the in-plane direction $\hat n_\perp$ ($= \hat x$).

Figure 12.10: Scattering from an interface at $y = 0$: a streak along $k_x = 0$ in the grey scale of $S(k)$.

direction of $\hat n$ ($\hat y$ in fig. 12.10).
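The $\delta(k_x)/k_y^2$ streak is easy to check numerically. A minimal sketch (not from the notes; a periodic grid stands in for Eq. 12.13, and the grid size is arbitrary):

```python
import numpy as np

# Sketch: DFT of a sharp step theta(y) on a periodic grid.  The intensity
# |theta_hat(k)|^2 is proportional to 1/sin^2(pi k/N), which is ~ 1/k^2
# for small k: the discrete analogue of the 1/ky^2 falloff in Eq. 12.15.
N = 1024
step = np.zeros(N)
step[N // 2:] = 1.0                # theta(y): 0 for y < 0, 1 for y >= 0

S = np.abs(np.fft.fft(step)) ** 2

# Even nonzero harmonics vanish (the periodic step necessarily has two
# interfaces, which interfere); compare odd harmonics, where S ~ 1/k^2:
ratio = S[1] / S[3]
print(ratio)                       # close to (3/1)^2 = 9
```

In $d = 2$ the same power law lives only on the $k_x = 0$ column, which is the streak in fig. 12.10.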

Figure 12.11: A line defect at $y = 0$: grey-scale density, sharply peaked on the line $y = 0$.

Let's also consider the scattering from a line defect, which is similar, but has important differences. In this case,
$$\rho(x,y) = \delta(y) \qquad (12.17)$$
Then
$$\hat\rho = \delta(k_x) \qquad (12.18)$$
and
$$S(k) \propto \delta(k_x) \qquad (12.19)$$
or, if the unit vector normal to the surface is $\hat n$, and that along the surface is $\hat n_\perp$,
$$S(k) \propto \delta(\vec k\cdot\hat n_\perp) \qquad (12.20)$$
for a line defect.

Figure 12.12: Scattering from a line defect at $y = 0$: a streak along $k_x = 0$ in the grey scale of $S(k)$.

Now, if $\rho$ is not the order parameter (like the second case discussed above), so that the density doesn't vary across the interface, but an order parameter corresponding to sublattice concentration or crystalline orientation does, we have to incorporate the wavenumber $\vec k_0$ corresponding to that structure. For simplicity, we will assume that all one has to do is modulate the structure up to that wavenumber. That is, for an interface
$$S(k) \propto \frac{\delta((\vec k - \vec k_0)\cdot\hat n_\perp)}{|(\vec k - \vec k_0)\cdot\hat n|^2} \qquad (12.21)$$
while for a line defect
$$S(k) \propto \delta((\vec k - \vec k_0)\cdot\hat n_\perp) \qquad (12.22)$$

12.2 Roughness and Diffuseness

So far this is only geometry. And even as geometry, we have not considered a diffuse interface. Before doing roughness, let's consider a diffuse interface, as shown in fig. 12.13. One example of this is

Figure 12.13: A diffuse interface of width $\xi$, with no roughness.

$$\rho_{\text{diffuse}}(x,y) \sim \tanh(y/\xi) \qquad (12.23)$$
so that
$$S(k) = |\hat\rho_{\text{diffuse}}(k)|^2 \sim \frac{\delta(k_x)}{k_y^2}\left[1 + \mathcal O\big((\xi k_y)^2\big)\right] \qquad (12.24)$$
for an interface at $y = 0$. So the diffuseness only affects scattering at $k_x = 0$

and large $k_y$.

Figure 12.14: Extra scattering due to diffuseness, at large $k_y$.

On the other hand, let us now consider a rough interface, with no diffuseness, where the interface is given by

$$y = h(x) \qquad (12.25)$$
and
$$\rho_{\text{Rough}}(x,y) = \theta(y - h(x)) \qquad (12.26)$$

Figure 12.15: A very rough interface, with no diﬀuseness

If h(x) involves a small variation, we can accordingly expand to obtain

$$\rho_{\text{Rough}} \approx \theta(y) + h(x)\,\delta(y) + \dots \qquad (12.27)$$
since $d\theta/dy = \delta(y)$. Taking the Fourier transform and squaring this gives

$$S(k) = |\hat\rho_{\text{Rough}}(k)|^2 \sim \frac{\delta(k_x)}{k_y^2} + |\hat h(k_x)|^2 \qquad (12.28)$$
Averaging over thermal fluctuations using

$$\langle|\hat h(k_x)|^2\rangle \sim \frac{1}{k_x^{2-\eta}} \qquad (12.29)$$
which defines $\eta$, gives

$$S_{\text{Rough}}(\vec k) \sim \frac{\delta(k_x)}{k_y^2} + \frac{T}{k_x^{2-\eta}} \qquad (12.30)$$
where $T$ is a constant. It would appear, then, that one could deconvolute the

Figure 12.16: Extra scattering due to roughness, spread along $k_x$.

geometry $\delta(k_x)/k_y^2$ from the diffuseness $\mathcal O\big((\xi k_y)^2\big)$ and from the roughness $1/k_x^{2-\eta}$, since they have different signatures in $k$ space. For excellent samples this is in fact true, but if there are many randomly oriented interfaces present, these signatures are smeared out and washed away.

12.3 Scattering From Many Flat Randomly–Oriented Surfaces

Evidently the scattering from many randomly-oriented surfaces gives a pinwheel-like cluster of streaks, as shown in fig. 12.17.

Figure 12.17: Angular average of the streaks from many interface orientations, giving a $1/k^p$ ring pattern.

Angularly averaging over all orientations of the interface gives
$$S = \frac{\int d\hat n\, |\hat\rho_k|^2}{\int d\hat n} \qquad (12.31)$$
where $\hat n$ is the unit vector normal to the surface. We will write

$$\hat n = -\sin\theta\,\hat x + \cos\theta\,\hat y \qquad (12.32)$$
and
$$\hat n_\perp = \cos\theta\,\hat x + \sin\theta\,\hat y \qquad (12.33)$$
with
$$\vec k = k\cos\beta\,\hat x + k\sin\beta\,\hat y \qquad (12.34)$$
Then, for scattering from a surface, where $|\hat\rho_k|^2 = \delta(\vec k\cdot\hat n_\perp)/|\vec k\cdot\hat n|^2$, we have (in $d = 2$)
$$\begin{aligned}
S &= \frac{1}{2\pi}\int_0^{2\pi} d\theta\, \frac{\delta[k\cos\beta\cos\theta + k\sin\beta\sin\theta]}{|-k\cos\beta\sin\theta + k\sin\beta\cos\theta|^2}\\
&= \frac{1}{2\pi}\frac{1}{k^3}\int_0^{2\pi} d\theta\, \frac{\delta[\cos(\theta - \beta)]}{|\sin(\theta - \beta)|^2}\\
&= \frac{1}{2\pi}\frac{1}{k^3}\int_0^{2\pi} d\theta\, \frac{\delta[\cos\theta]}{|\sin\theta|^2}
\end{aligned}$$
But $\cos\theta = 0$ at $\theta = \pi/2,\ 3\pi/2$, where $|\sin\theta|^2 = 1$, so
$$S_{\text{Surfaces}} = \frac{1}{\pi}\frac{1}{k^3} \qquad (12.35)$$
Similarly, in arbitrary dimension one has
$$S_{\text{Surfaces}} \propto \frac{1}{k^{d+1}} \qquad (12.36)$$
for scattering from a surface. For scattering from a line defect, which has $|\hat\rho(k)|^2 \propto \delta(k_x)$, one similarly obtains
$$S_{\text{line defects}} \propto \frac{1}{k^{d-1}} \qquad (12.37)$$
We shall call these results Porod's law. They involve only geometry. The case where $\rho$ is not the order parameter, and a modulated structure exists, is more subtle. If the wavenumber of that structure is $\vec k_0$, then this introduces a new angle via

$$\vec k_0 = k_0\cos\alpha\,\hat x + k_0\sin\alpha\,\hat y \qquad (12.38)$$
Now there are two possible averages.

Figure 12.18: The angles $\alpha$, $\beta$, and $\theta$.

1. $\alpha$ or $\beta$: average over crystallite orientation, or over angles in a non-angularly-resolved detector. These are equivalent (look at the cartoon for a few seconds), and so we only need to do one of these two averages.

2. $\theta$: average over surface orientation.

If we average over $\alpha$ or $\beta$, for a fixed $\hat n$, it is easy to anticipate the answer, as shown in fig. 12.19: in the cartoon, the regions around $(k_x = k_0,\ k_y = 0)$ retain the singularity of the original $\delta(k_x - k_0)/(k_y - k_0)^2$.

Figure 12.19: Average over $\alpha$ or $\beta$ for an interface with $\hat n = \hat y$.

Figure 12.20: Average over the surface orientation $\theta$: the peak at $k_0$ becomes a ring.

This is basically a detector problem, so instead we will consider averaging over $\theta$ first. In fact it is obvious what the result of such an average is: it must be the previous result, but shifted from $\vec k = 0$ to $\vec k = \vec k_0$. That is,
$$S_{\text{surface}}(k) \propto \frac{1}{|\vec k - \vec k_0|^{d+1}}, \qquad S_{\text{line}}(k) \propto \frac{1}{|\vec k - \vec k_0|^{d-1}} \qquad (12.39)$$
If a further average is done over crystallite orientation, we can take these expressions and angularly average them over the angle between $\vec k$ and $\vec k_0$. Clearly after averaging we will have

$$\bar S = S\!\left(\frac{|\vec k| - |\vec k_0|}{|\vec k_0|}\right) \equiv S(\Delta k) \qquad (12.40)$$
First, let
$$\phi = \beta - \alpha \qquad (12.41)$$
so that
$$(\vec k - \vec k_0)^2 = k^2 - 2kk_0\cos\phi + k_0^2$$
or, on using
$$k \equiv k_0(\Delta k + 1) \qquad (12.42)$$
we have
$$(\vec k - \vec k_0)^2 = k_0^2\left\{(\Delta k)^2 + 2(1 - \cos\phi)(\Delta k + 1)\right\} \qquad (12.43)$$
Hence in two dimensions, for a surface, we have

$$\bar S(\Delta k) = \frac{1}{2\pi k_0^3}\int_0^{2\pi} d\phi\, \frac{1}{\left|(\Delta k)^2 + 2(1 - \cos\phi)(\Delta k + 1)\right|^{3/2}} \qquad (12.44)$$
which is some weird integral. We will work out its asymptotic limits $\Delta k \gg 1$ and $\Delta k \ll 1$. Firstly, if $\Delta k \gg 1$, then

$$\bar S(\Delta k) = \frac{1}{2\pi k_0^3}\frac{1}{(\Delta k)^3}\int_0^{2\pi} d\phi + \dots$$
or (in arbitrary $d$)
$$\bar S(\Delta k) \sim \frac{1}{(\Delta k)^{d+1}}, \qquad \Delta k \gg 1 \qquad (12.45)$$

If $\Delta k \ll 1$, consider

$$\bar S(\Delta k \to 0) \stackrel{?}{=} \frac{1}{2\pi k_0^3}\int_0^{2\pi} d\phi\, \frac{1}{|2(1 - \cos\phi)|^{3/2}}$$
which diverges due to the $\phi = 0$ (and $\phi = 2\pi$) contribution. Oops. So we had better keep a little $\Delta k$. For convenience, we will also expand $\phi$ around 0:

$$\bar S(\Delta k) \approx \frac{1}{2\pi k_0^3}\int_0^{2\pi} d\phi\, \frac{1}{\left|(\Delta k)^2 + \phi^2\right|^{3/2}} + \dots$$
Now let $u = \phi/\Delta k$:

$$\bar S(\Delta k) = \frac{1}{2\pi k_0^3}\frac{1}{(\Delta k)^2}\int_0^{2\pi/\Delta k \to \infty} du\, \frac{1}{|1 + u^2|^{3/2}}$$
so,
$$\bar S(\Delta k) \sim \frac{1}{(\Delta k)^2} \quad \text{as } \Delta k \to 0 \qquad (12.46)$$
for all dimensions $d$. If one does this in $d$ dimensions with

$$S = \frac{1}{|\vec k - \vec k_0|^{\gamma}} \qquad (12.47)$$
then

$$\bar S = \begin{cases} \dfrac{1}{(\Delta k)^{\gamma - (d-1)}}, & \dfrac{k - k_0}{k_0} \ll 1\\[10pt] \dfrac{1}{(\Delta k)^{\gamma}}, & \dfrac{k - k_0}{k_0} \gg 1 \end{cases} \qquad (12.48)$$

provided $\gamma - (d-1) > 0$. For the case $\gamma = d - 1$, one obtains

$$\bar S \sim \begin{cases} -\ln(k - k_0), & \dfrac{k - k_0}{k_0} \ll 1\\[10pt] \dfrac{1}{(\Delta k)^{d-1}}, & \dfrac{k - k_0}{k_0} \gg 1 \end{cases} \qquad (12.49)$$
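The two limits of Eq. 12.48 can be checked by brute-force quadrature of Eq. 12.44. A sketch (not from the notes; midpoint rule with $k_0 = 1$, and the test values of $\Delta k$ are arbitrary):

```python
import numpy as np

# Sketch: midpoint-rule evaluation of the angular average, Eq. 12.44,
#   S_bar(dk) = (1/2 pi) Int dphi [dk^2 + 2(1 - cos phi)(dk + 1)]^(-3/2),
# checking S_bar ~ 1/dk^2 for dk << 1 (Eq. 12.46) and
# S_bar ~ 1/dk^3 = 1/dk^(d+1) with d = 2 for dk >> 1 (Eq. 12.45).
def S_bar(dk, n=2_000_000):
    phi = (np.arange(n) + 0.5) * (2 * np.pi / n)
    f = (dk**2 + 2.0 * (1.0 - np.cos(phi)) * (dk + 1.0)) ** -1.5
    return f.sum() / n            # includes the 1/(2 pi) prefactor

small = S_bar(0.01) / S_bar(0.02)   # expect ~ 2^2 = 4
large = S_bar(100.0) / S_bar(200.0) # expect ~ 2^3 = 8
print(small, large)
```

Halving $\Delta k$ in each regime recovers the advertised power laws to a few percent; the corrections are the subleading terms dropped in the asymptotic expansions above.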

It should be noted that the actual scattering is given by a weird boring integral

like Eq. 12.44, which is pretty close to an elliptic integral. Finally, note the big mess of power laws, and we have not considered diffuseness or roughness! Everything is just geometry.

Figure 12.21: The angularly averaged scattering: a ring at $k_0$, with $\bar S \sim 1/|k - k_0|^2$ near the center of the peak and $\sim 1/|k - k_0|^{d+1}$ near the tails.

Chapter 13

Fluctuations and Crystalline Order

Usually it is safer to apply Goldstone's theorem by "re-deriving" it for whatever system one is interested in. However, to give an interesting and concrete example, consider crystalline order in a solid. The density $\rho$ clearly depends on

positions $\vec r$, as shown in a regular crystal.

Figure 13.1: Density $\rho = \rho(\vec r)$: atoms on a lattice.

A crystal, although it looks symmetric in a common-sense way, actually has very little symmetry compared to a liquid or a vapour. A liquid or a vapour is isotropic in all directions of space, and translationally invariant in all directions. A crystalline solid has a very restricted symmetry. In figure 13.1, the system is invariant only under (e.g.)
$$\vec r \to \vec r + r_0(n\hat x + m\hat y) \qquad (13.1)$$
where $n$ and $m$ are integers (positive or negative), and $r_0$ is the distance between atoms. Clearly then a crystal has very low symmetry, and, indeed, breaks the translational and rotational invariance of space! This means there should be a broken-symmetry variable associated with this, which satisfies Goldstone's theorem. Clearly this variable is associated with the $\vec r$ dependence of
$$\rho(\vec r) \qquad (13.2)$$


Figure 13.2: Invariance only for $\vec r \to \vec r + \vec r'$, with $\vec r' = r_0(+1\hat x - 1\hat y)$ shown here.

Let us then introduce a variable $\vec u(\vec r)$ which describes the local variations in the density. That is, due to some fluctuation,

$$\vec r \to \vec r + \vec u(\vec r)$$
and
$$\rho(\vec r) \to \rho(\vec r + \vec u(\vec r)) \qquad (13.3)$$
where $\vec u(\vec r)$ is the local displacement vector, as shown in fig. 13.3.

Figure 13.3: Jiggled atoms due to thermal fluctuations: $\vec u(\vec r)$ connects the equilibrium averaged position to the actual position due to the fluctuation.

Figure 13.4: Aside: what does it mean to be translationally invariant? $\rho(\vec r) \to \rho(\vec r + \vec u)$; $\vec u(\vec r)$ is the local variable corresponding to the broken symmetry.

A definite value for $\langle\vec u(\vec r)\rangle$ ($= 0$, say) breaks the symmetry of space (translational and rotational invariance). By construction
$$\langle\vec u(\vec r)\rangle = 0 \qquad (13.4)$$
In Fourier space, if all directions of fluctuation are equally likely,
$$\langle\hat{\vec u}(\vec k)\,\hat{\vec u}(\vec k')\rangle \propto (2\pi)^d\delta(\vec k + \vec k')\left(\frac{1}{k^2}\right) \qquad (13.5)$$
It is a little hard to see what this does to the density, since $\vec u$ appears inside $\rho(\vec r + \vec u(\vec r))$. The thing to do is to expand in a Fourier series; to get an idea, consider $d = 1$. Atoms don't look like cosines, but this gives the sense of it:
$$\rho(x) \propto (\cos Gx + 1)$$

Figure 13.5: Rough depiction of a one-dimensional variation of density via a cosine, with period $2\pi/G$.

More precisely,
$$\rho(x) = \sum_G e^{iGx}\,\hat a_G \qquad (13.6)$$
where there might be several wavenumbers $G$ giving the density, and $\hat a_G$ is a Fourier coefficient giving the atomic form.

Figure 13.6: Better depiction of a one-dimensional variation, with period $2\pi/G$. But the cosine is essentially the right idea.

In arbitrary dimension we write

$$\rho(\vec r) = \sum_{\vec G} e^{i\vec G\cdot\vec r}\,\hat a_{\vec G} \qquad (13.7)$$
where the $\vec G$ are called the reciprocal lattice vectors. Then a fluctuation is represented by
$$\rho(\vec r + \vec u(\vec r)) = \sum_{\vec G} e^{i\vec G\cdot(\vec r + \vec u(\vec r))}\,\hat a_{\vec G} \qquad (13.8)$$

$$\langle\rho(\vec r + \vec u(\vec r))\rangle = \sum_{\vec G} e^{i\vec G\cdot\vec r}\,\hat a_{\vec G}\,\langle e^{i\vec G\cdot\vec u(\vec r)}\rangle \qquad (13.9)$$
To calculate the average we use the fact that $\vec u(\vec r)$ is a Gaussian variable. Therefore

$$\langle e^{i\vec G\cdot\vec u}\rangle = e^{-\frac{1}{2}\langle(\vec G\cdot\vec u)^2\rangle} = e^{-\frac{1}{2}\vec G\vec G\,:\,\langle\vec u\vec u\rangle} \qquad (13.10)$$
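Equation 13.10 is the standard Gaussian-average identity, and it is easy to verify by sampling. A quick sketch (not from the notes; one Cartesian component of $\vec u$ is used, and the values of $G$ and $\langle u^2\rangle$ are arbitrary):

```python
import numpy as np

# Sketch: Monte Carlo check of <exp(i G u)> = exp(-G^2 <u^2> / 2) for a
# zero-mean Gaussian displacement u.  G and sigma are illustrative values.
rng = np.random.default_rng(0)
G, sigma = 2.0, 0.7                    # <u^2> = sigma^2
u = rng.normal(0.0, sigma, size=2_000_000)

lhs = np.exp(1j * G * u).mean().real   # imaginary part averages to zero
rhs = np.exp(-0.5 * G**2 * sigma**2)
print(lhs, rhs)                        # agree to sampling error
```

The identity holds for Gaussian variables only; it is exactly this Gaussianity of $\vec u$ that lets the Debye-Waller-type factors below be computed in closed form.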

But we expect $\vec u$ to be correlated with $\vec u$ only in the same direction; that is,
$$\langle u_\alpha u_\beta\rangle \propto \delta_{\alpha\beta}$$
so
$$\langle e^{-i\vec G\cdot\vec u}\rangle \approx e^{-\frac{1}{2}G^2\langle u^2\rangle}$$
But, from Eq. 13.5, we have

$$\langle u^2\rangle = \int\frac{d^dk}{(2\pi)^d}\,\frac{1}{k^2}$$
or
$$\langle u^2\rangle = \begin{cases}\text{const.}, & d = 3\\ \ln L, & d = 2\\ L, & d = 1\end{cases} \qquad (13.11)$$
So in $d = 3$
$$\langle\rho\rangle = \text{const.}\sum_{\vec G} e^{i\vec G\cdot\vec r}\,\hat a_{\vec G} \qquad (13.12)$$
where the constant, $\exp(-\frac{1}{2}G^2\,\text{const.})$, is called the Debye-Waller factor. In $d = 2$ we have

$$\langle\rho\rangle = \sum_{\vec G} e^{i\vec G\cdot\vec r}\,\hat a_G\, e^{-\frac{1}{2}G^2\ln L} = \sum_{\vec G} e^{i\vec G\cdot\vec r}\,\hat a_G\, L^{-\text{const.}\,G^2/2} \qquad (13.13)$$
which is a marginal case. In $d = 1$ we have

$$\langle\rho\rangle = \sum_{\vec G} e^{i\vec G\cdot\vec r}\,\hat a_G\, e^{-\frac{1}{2}G^2L} \qquad (13.14)$$
which is a large effect, much greater than the periodicity, as $L \to \infty$. This is the content of the original Landau-Peierls theorem, which states that a one-dimensional variation in density $\rho(x)$ (whether crystal structure or phase coexistence) is impossible: the only $G$ which can survive in Eq. 13.14 as $L \to \infty$ is $G = 0$, which gives $\rho(x) \to$ constant, due to $\vec u(\vec r)$ fluctuations. It is more revealing to do these things for the scattering intensity $I(k)$, where

$$I(k) \propto |\hat\rho_k|^2 \qquad (13.15)$$

(Scattering is from coherent density fluctuations.) From Eq. 13.7,

$$\hat\rho_k = \int d^dr\, e^{-i\vec k\cdot\vec r}\Big\{\sum_{\vec G} e^{i\vec G\cdot\vec r}\,\hat a_{\vec G}\Big\} = \sum_{\vec G}\hat a_G\int d^dr\, e^{-i(\vec k - \vec G)\cdot\vec r}$$
or
$$\hat\rho_k = \sum_{\vec G}\hat a_G\,\delta(\vec k - \vec G) \qquad (13.16)$$
so,
$$\hat\rho_k^* = \sum_{\vec G}\hat a_G^*\,\delta(\vec k - \vec G)$$
and
$$|\hat\rho_k|^2 = \sum_{\vec G,\vec G'}\hat a_G\,\hat a_{G'}^*\,\delta(\vec k - \vec G)\,\delta(\vec k - \vec G')$$
or,
$$|\hat\rho_k|^2 \propto \sum_{\vec G}|\hat a_G|^2\,\delta(\vec k - \vec G) \qquad (13.17)$$
where $|\hat a_G|^2$ is an atomic form factor. This gives well-defined peaks where $\vec k = \vec G$, as shown in fig. 13.7. When thermal fluctuations are present,

Figure 13.7: $I_k = |\hat\rho_k|^2$: scattering gives sharp peaks at the various $\vec G$'s, if thermal fluctuations are absent.

$$\hat\rho_k = \int d^d\vec r\, e^{-i\vec k\cdot\vec r}\Big(\sum_{\vec G} e^{i\vec G\cdot\vec r}\,\hat a_G\, e^{i\vec G\cdot\vec u(\vec r)}\Big)$$
or,
$$\hat\rho_k = \sum_{\vec G}\hat a_G\int d^d\vec r\, e^{-i(\vec k - \vec G)\cdot\vec r}\, e^{i\vec G\cdot\vec u(\vec r)} \qquad (13.18)$$
so
$$\hat\rho_k^* = \sum_{\vec G'}\hat a_{G'}^*\int d^d\vec r'\, e^{i(\vec k - \vec G')\cdot\vec r'}\, e^{-i\vec G'\cdot\vec u(\vec r')}$$

Hence

$$\langle|\hat\rho_k|^2\rangle = \sum_{\vec G,\vec G'}\hat a_G\,\hat a_{G'}^*\int d^d\vec r\int d^d\vec r'\, e^{-i(\vec k - \vec G)\cdot\vec r}\, e^{i(\vec k - \vec G')\cdot\vec r'}\,\langle e^{i\vec G\cdot\vec u(\vec r) - i\vec G'\cdot\vec u(\vec r')}\rangle \qquad (13.19)$$
This is the same Gaussian average as before, so we can simply use

$$\langle e^{i\vec G\cdot\vec u(\vec r) - i\vec G'\cdot\vec u(\vec r')}\rangle = f(\vec r - \vec r') \qquad (13.20)$$
and then,

$$\langle|\hat\rho_k|^2\rangle = \sum_{\vec G,\vec G'}\hat a_G\,\hat a_{G'}^*\int d^d\vec r\, e^{-i(\vec k - \vec G)\cdot\vec r}\,(2\pi)^d\delta(\vec G - \vec G')\,f(\vec r) \qquad (13.21)$$
where

$$f(\vec r) = \langle e^{i\vec G\cdot\vec u(\vec r) - i\vec G\cdot\vec u(0)}\rangle = e^{-\frac{G^2}{2}\langle(u(\vec r) - u(0))^2\rangle} = \text{const.}\; e^{-G^2\langle u(\vec r)u(0)\rangle} \qquad (13.22)$$
so,
$$\langle|\hat\rho_k|^2\rangle = \text{const.}\sum_{\vec G}|\hat a_{\vec G}|^2\int d^dr\, e^{-i(\vec k - \vec G)\cdot\vec r}\, e^{-G^2\langle u(\vec r)u(0)\rangle} \qquad (13.23)$$
The correlation function $\langle u(\vec r)u(0)\rangle$ can be worked out as before (knowing $\langle|\hat u(k)|^2\rangle \sim 1/k^2$):
$$\langle u(\vec r)u(0)\rangle = \begin{cases}\text{const.}, & d = 3\\ \ln r, & d = 2\\ r, & d = 1\end{cases} \qquad (13.24)$$

In $d = 3$,
$$\langle|\hat\rho_k|^2\rangle = \text{const.}\sum_{\vec G}|\hat a_G|^2\,\delta(\vec k - \vec G)$$
with the same delta-function peaks. In $d = 2$, the marginal case,

$$\langle|\hat\rho_k|^2\rangle = \text{const.}\sum_{\vec G}|\hat a_G|^2\int d^dr\, e^{-i(\vec k - \vec G)\cdot\vec r}\, e^{-\text{const.}\ln r} = \text{const.}\sum_{\vec G}|\hat a_G|^2\int d^dr\, \frac{e^{-i(\vec k - \vec G)\cdot\vec r}}{r^{\text{const.}}}$$
This is the Fourier transform of a power:

$$= \text{const.}\sum_{\vec G}|\hat a_G|^2\,\frac{1}{|\vec k - \vec G|^{d - \text{const.}}}$$

Figure 13.8: Delta-function singularities of $I(k)$ at the various $\vec G$'s in $d = 3$.

or, letting $d - \text{const.} \equiv 2 - \eta$, we have

$$\langle|\hat\rho_k|^2\rangle = \text{const.}\sum_{\vec G}|\hat a_G|^2\,\frac{1}{|\vec k - \vec G|^{2-\eta}} \qquad (13.25)$$
in $d = 2$. The thermal fluctuations change the delta-function singularities to power-law singularities! Sometimes these are called $\eta$-line shapes (eta-line shapes).

Figure 13.9: Power-law singularities of $I(k)$ at the various $\vec G$'s in $d = 2$.

In d = 1, we have

$$\langle|\hat\rho_k|^2\rangle = \text{const.}\sum_{\vec G}|\hat a_G|^2\int d^dr\, e^{-i(\vec k - \vec G)\cdot\vec r}\, e^{-\text{const.}\,r}$$
This is the Fourier transform of an exponential. But
$$\int d^dr\, e^{i\vec k'\cdot\vec r}\, e^{-r/\xi} \propto \frac{1}{(k')^2 + \xi^{-2}}$$
for small $k'$. So, for $d = 1$,
$$\langle|\hat\rho_k|^2\rangle = \text{const.}\sum_{\vec G}|\hat a_G|^2\,\frac{1}{(\vec k - \vec G)^2 + \xi^{-2}} \qquad (13.26)$$
where $\xi^{-2}$ is a constant. The thermal fluctuations now remove all singularities at $\vec k = \vec G$! This is another sign that $\rho(x)$ is impossible, and there can be no

Figure 13.10: No singularities of $I(k)$ at the $\vec G$'s in $d = 1$.

one-dimensional crystalline order (or one-dimensional phase coexistence). There are two other things associated with broken symmetry which I will not discuss, but which you can look up for yourself.

1. Broken symmetries are usually associated with a new (gapless, soft, or acoustic) mode. For surfaces, these propagating modes are capillary waves. For crystalline solids, they are shear–wave phonons.

2. Broken symmetries are associated with a "rigidity". For crystalline solids, this is simply their rigidity: it takes a lot of atoms to break a symmetry of nature, and they all have to work together.

Chapter 14

Ising Model

There are many phenomena where a large number of constituents interact to give some global behavior. A magnet, for example, is composed of many interacting dipoles. A liquid is made up of many atoms. A computer program has many interacting commands. The economy, the psychology of road rage, plate tectonics, and many other things all involve phenomena where the global behavior is due to interactions between a large number of entities. One way or another, almost all such systems end up being posed as an "Ising model" of some kind. It is so common that it is almost a joke.

The Ising model has a large number of spins, each of which is individually in microscopic state $+1$ or $-1$. The value of a spin at a site is determined by the spin's (usually short-range) interaction with the other spins in the model. Since I have explained it so vaguely, the first, and major, import of the Ising model is evident: it is the simplest nontrivial model of an interacting system. (The only simpler model would have only one possible microscopic state for each spin. I leave it as an exercise for the reader to find the exact solution of this trivial model.) Because of this, whenever one strips away the complexities of some interesting problem down to the bone, what is left is usually an Ising model of some form.

In condensed-matter physics, a second reason has been found to use the simplified-to-the-bone Ising model. For many problems where one is interested in large-length-scale phenomena, such as continuous phase transitions, the additional complexities can be shown to be unimportant. This is called universality. This idea has been so successful in the study of critical phenomena that it has been tried (or simply assumed to be true) in many other contexts.

For now, we will think of the Ising model as applied to anisotropic magnets. The magnet is made up of individual dipoles which can orient up, corresponding to spin $+1$, or down, corresponding to spin $-1$. We neglect tilting of the spins,

Figure 14.1: Up arrow: spin $S = +1$; down arrow: spin $S = -1$.

such as $\nwarrow$, or $\nearrow$, or $\searrow$, and so on. For concreteness, consider dimension $d = 2$, putting all the spins on a square lattice, where there are $i = 1, 2, 3, \dots N$ spins.

Figure 14.2: Possible configurations for an $N = 4^2 = 16$ system.

In the partition function, the sum over states is

$$\sum_{\{\text{States}\}} = \sum_{s_1 = -1}^{+1}\sum_{s_2 = -1}^{+1}\dots\sum_{s_N = -1}^{+1} = \prod_{i=1}^{N}\sum_{s_i = -1}^{+1}$$
where each sum has only two terms, of course. The energy in the Boltzmann factor $e^{-E_{\text{State}}/k_BT}$ must now be prescribed. Since we want to model a magnet, where all the spins are $+1$ or $-1$ by and large, we make the spin of site $i$ want to orient the same way as its neighbors. That is, for site $i$,

$$\text{Energy of site } i = \sum_{\text{neighbors } j \text{ of } i}(S_i - S_j)^2 = \sum_{\text{neighbors } j \text{ of } i}\big(S_i^2 - 2S_iS_j + S_j^2\big)$$
But $S_i^2 = 1$ always, so up to a constant (additive and multiplicative),

$$\text{Energy of site } i = -\sum_{\text{neighbors } j \text{ of } i} S_i S_j$$
On a square lattice, the neighbors of $i$ are the spins surrounding it (fig. 14.3). As said before, the idea is that the microscopic constituents, the spins, don't know how big the system is, so they interact locally. The choice of the 4 spins

Figure 14.3: Four nearest neighbors on a square lattice: the middle spin is $i$; its 4 neighbors are to the north, south, east, and west.

above, below, and to the right and left as "nearest neighbors" is a convention. The spins on the diagonals (NW, NE, SE, SW) are called "next-nearest neighbors". It doesn't actually matter how many are included, so long as the interaction is local. The total energy for all spins is

$$E_{\text{State}} = -J\sum_{\langle ij\rangle} S_i S_j$$
where $J$ is a positive interaction constant, and $\langle ij\rangle$ is a sum over all $i$ and $j = 1, 2, \dots N$, provided $i$ and $j$ are nearest neighbors, with no double counting. Note
$$\sum_{\langle ij\rangle} = \frac{4}{2}N \quad \text{(square lattice, no double counting)}$$
If there are $q$ nearest neighbors, corresponding to a different interaction, a different lattice (say hexagonal), or another dimension of space,
$$\sum_{\langle ij\rangle} = \frac{q}{2}N$$
So the largest (negative) energy is $E = -\frac{qJ}{2}N$. If there is a constant external magnetic field $H$, which tends to align spins, the total energy is

$$E_{\text{State}} = -J\sum_{\langle ij\rangle} S_i S_j - H\sum_i S_i$$
and the partition function is
$$Z = \prod_{i=1}^{N}\sum_{S_i = -1}^{+1} e^{\left(\frac{J}{k_BT}\right)\sum_{\langle jk\rangle} S_jS_k + \left(\frac{H}{k_BT}\right)\sum_i S_i} = Z\!\left(N, \frac{J}{k_BT}, \frac{H}{k_BT}\right)$$
As described above, the number of nearest neighbors is $q = 4$ for a two-dimensional square lattice. For three dimensions, $q = 6$ on a simple cubic lattice. The one-dimensional Ising model has $q = 2$.
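The energy above is simple to put into code. A minimal sketch (not from the notes; a periodic square lattice is assumed, and summing each site's right and down neighbours counts every bond exactly once):

```python
import numpy as np

# Sketch: E_State = -J sum_<ij> S_i S_j - H sum_i S_i on a periodic
# square lattice.  Pairing each site with its right and down neighbours
# visits every nearest-neighbour bond once (no double counting).
def ising_energy(S, J=1.0, H=0.0):
    bonds = (S * np.roll(S, 1, axis=0)).sum() + (S * np.roll(S, 1, axis=1)).sum()
    return -J * bonds - H * S.sum()

# Fully aligned state: E = -(q/2) J N = -2JN on the square lattice (q = 4).
S = np.ones((4, 4))
print(ising_energy(S))   # -32.0 for N = 16 spins, J = 1
```

Flipping every spin leaves this energy unchanged at $H = 0$, which is exactly the $S_i \leftrightarrow -S_i$ symmetry discussed next.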

N +1 J H ( ) P Sj Sk+( ) P Sj Z = e kB T hjki kB T i i=1 Y S1X=−1 J H = Z(N, , ) kBT kBT As described above, the sum over q nearest neighbors is q = 4 for a two– dimensional square lattice. For three dimensions, q = 6 on a simple cubic lattice. The one–dimensional Ising model has q = 2. 124 CHAPTER 14. ISING MODEL

The one-dimensional model was solved by Ising for his doctoral thesis (we will solve it below). The two-dimensional model was solved by Onsager, for $H = 0$, in one of the premiere works of mathematical physics. We will not give that solution; you can read it in Landau and Lifshitz's "Statistical Physics". They give a readable, physical, but still difficult treatment. The three-dimensional Ising model has not been solved. The most interesting result of the Ising model is that it exhibits broken symmetry for $H = 0$. In the absence of a field, the energy of a state, and the partition function, have an exact symmetry

$$S_i \leftrightarrow -S_i$$

(The group is $Z_2$.) Hence (?)
$$\langle S_i\rangle \leftrightarrow -\langle S_i\rangle$$
and
$$\langle S_i\rangle \stackrel{?}{=} 0$$
In fact, for dimensions $d = 2$ and 3 this is not true. The exact result in $d = 2$ is surprisingly simple: the magnetization per spin is

$$m = \begin{cases}\left(1 - \mathrm{cosech}^4\,\dfrac{2J}{k_BT}\right)^{1/8}, & T < T_c\\[10pt] 0, & T > T_c\end{cases}$$
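This formula is simple to evaluate. A sketch (not from the notes; units $J = k_B = 1$, and the sample temperatures are arbitrary):

```python
import numpy as np

# Sketch: Onsager-Yang magnetization per spin (units J = kB = 1):
#   m(T) = (1 - cosech^4(2/T))^(1/8) below Tc, 0 above,
# with Tc fixed by cosech(2/Tc) = 1, i.e. sinh(2/Tc) = 1.
def m_onsager(T):
    c = 1.0 / np.sinh(2.0 / T)      # cosech(2J/kB T)
    return (1.0 - c**4) ** 0.125 if c < 1.0 else 0.0

Tc = 2.0 / np.arcsinh(1.0)          # = 2 / ln(1 + sqrt 2)
print(round(Tc, 3))                 # 2.269
print(round(m_onsager(1.0), 3))     # 0.999: nearly saturated well below Tc
print(m_onsager(3.0))               # 0.0 above Tc
```

The tiny exponent $1/8$ is why $m$ stays nearly saturated until quite close to $T_c$ and then drops abruptly, as sketched in fig. 14.4.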

The critical temperature $T_c \approx 2.269\,J/k_B$ is where $\left(1 - \mathrm{cosech}^4\,\frac{2J}{k_BT}\right)^{1/8}$ vanishes. A rough-and-ready understanding of this transition is easy to get. Consider the

Figure 14.4: Two-dimensional square-lattice Ising model magnetization per spin, $m = \langle S_i\rangle$, dropping from 1 to 0 at $T_c$.

Helmholtz free energy
$$F = E - TS$$
where $S$ is now the entropy. At low temperatures, $F$ is minimized by minimizing $E$. The largest negative value $E$ has is $-\frac{q}{2}JN$, when all spins are aligned $+1$ or $-1$. So at low temperatures (and rigorously at $T = 0$), $\langle S_i\rangle \to +1$ or $\langle S_i\rangle \to -1$. At high temperatures, entropy maximization minimizes $F$. Clearly entropy is maximized by having the spins helter-skelter random: $\langle S_i\rangle = 0$. The transition temperature between $\langle S_i\rangle = 1$ and $\langle S_i\rangle = 0$ should be when the temperature is comparable to the energy per spin, $k_BT = \mathcal O(\frac{q}{2}J)$, which is in the right ball park. There is no hand-waving explanation for the abrupt increase of the magnetization per spin $m = \langle S_i\rangle$ at $T_c$. Note it is not like a paramagnet, which shows a gradual decrease in magnetization with increasing temperature (in the presence of an external field $H$). One can think of it (or

Figure 14.5: A paramagnet in a field $H > 0$: $m = \langle S_i\rangle$ decreases gradually with $T$; no broken symmetry.

at least remember it) this way: at $T_c$, one breaks a fundamental symmetry of the system. In some sense, one breaks a symmetry of nature. One can't break such a symmetry in a tentative, wishy-washy way. It requires a firm, determined hand. That is, an abrupt change. The phase diagram for the Ising model (or indeed for any magnet) looks like fig. 14.6. It is very convenient that $H$ is so intimately related with the broken

Figure 14.6: Phase diagram of the Ising model. In the $(T, H)$ plane, only on the line $H = 0$, $T < T_c$ is there spontaneous magnetization (and coexisting phases); in the $(T, m)$ plane there is a forbidden region for a single phase, between $-1$ and $+1$, below $T_c$.

symmetry of the Ising model. Hence it is very easy to understand that broken symmetry is for $H = 0$, not $H \neq 0$. Things are more subtle in the lattice-gas version of the Ising model.

14.1 Lattice–Gas Model

In the lattice-gas model, one still has a lattice labeled $i = 1, 2, \dots N$, but now there is a concentration variable $c_i = 0$ or 1 at each site. The simplest model of interaction is the lattice-gas energy

$$E^{LG} = -\vartheta\sum_{\langle ij\rangle} c_i c_j - \mu\sum_i c_i$$
where $\vartheta$ is an interaction constant and $\mu$ is the constant chemical potential. One can think of this as a model of a binary alloy (where $c_i$ is the local concentration of one of the two species), or of a liquid-vapour system (where $c_i$ is the local density). This is very similar to the Ising model,

$$E^{\text{Ising}} = -J\sum_{\langle ij\rangle} S_i S_j - H\sum_i S_i$$
To show they are exactly the same, let
$$S_i = 2c_i - 1$$
Then
$$\begin{aligned}
E^{\text{Ising}} &= -J\sum_{\langle ij\rangle}(2c_i - 1)(2c_j - 1) - H\sum_i(2c_i - 1)\\
&= -4J\sum_{\langle ij\rangle} c_i c_j + 4J\sum_{\langle ij\rangle} c_i - 2H\sum_i c_i + \text{const.}\\
&= -4J\sum_{\langle ij\rangle} c_i c_j + 4J\,\frac{q}{2}\sum_i c_i - 2H\sum_i c_i + \text{const.}
\end{aligned}$$
And

$$E^{\text{Ising}} = -4J\sum_{\langle ij\rangle} c_i c_j - 2(H - Jq)\sum_i c_i + \text{const.}$$
Since the additive constant is unimportant, this is the same as $E^{LG}$, provided $\vartheta = 4J$ and

$$\mu = 2(H - Jq)$$
Note that the phase transition here is not at some convenient point like $\mu = 0$; it is at $H = 0$, which is the awkward point $\mu = -2Jq$. This is one reason why the liquid-gas transition is sometimes said, incorrectly, to involve no change of symmetry (the $Z_2$ group), in contrast to the Ising model. But the external field must have a carefully tuned value to get that possibility.
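The mapping can be checked by brute force: under $S_i = 2c_i - 1$, with $\vartheta = 4J$ and $\mu = 2(H - Jq)$, the two energies must differ by the same constant for every configuration. A sketch on a small ring, $q = 2$ (the values of $N$, $J$, $H$ are arbitrary illustrative choices):

```python
import itertools

# Sketch: verify E_Ising - E_LG is configuration independent on a ring.
N, J, H = 6, 1.0, 0.3
theta, mu = 4.0 * J, 2.0 * (H - J * 2)     # q = 2 on a ring

def e_ising(S):
    return -J * sum(S[i] * S[(i + 1) % N] for i in range(N)) - H * sum(S)

def e_lg(c):
    return -theta * sum(c[i] * c[(i + 1) % N] for i in range(N)) - mu * sum(c)

diffs = {round(e_ising([2 * ci - 1 for ci in c]) - e_lg(c), 9)
         for c in itertools.product([0, 1], repeat=N)}
print(diffs)   # a single number: the additive constant H*N - J*N_bonds
```

All $2^6$ configurations give the same difference, which is exactly the unimportant additive constant dropped above.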

Figure 14.7: The lattice-gas transition in the $(T, \mu)$ plane: coexistence on the line $\mu = -2Jq$ below $T_c$, with $\langle c\rangle \approx 1$ on one side and $\langle c\rangle \approx 0$ on the other. Compare the real liquid-gas transition in the $(T, P)$ and (density, $T$) planes, with its critical point at $(T_c, P_c)$.

14.2 One–dimensional Ising Model

It is easy to solve the one-dimensional Ising model. Of course, we already know that it cannot exhibit phase coexistence, from the theorem we proved earlier. Hence we expect
$$\langle S_i\rangle = m = 0$$
for all $T > 0$, where $m$ is the magnetization per spin. The result and the method are quite useful, though.

$$Z = \prod_{i=1}^{N}\sum_{S_i = -1}^{+1} e^{\frac{J}{k_BT}\sum_{j=1}^{N} S_jS_{j+1} + \frac{H}{k_BT}\sum_{j=1}^{N} S_j}$$

Note we have simplified the $\sum_{\langle ij\rangle}$ for one dimension. We need, however, a prescription for the spin $S_{N+1}$. We will use periodic boundary conditions, so that $S_{N+1} \equiv S_1$. This is called the Ising ring. You can also solve it, with a little

Figure 14.8: The Ising ring: sites $1, 2, 3, \dots, N$ with periodic boundary conditions.

more work, with other boundary conditions. Let the coupling constant be
$$k \equiv J/k_BT$$
and let $h = H/k_BT$ be the external field. Then

$$Z(k, h, N) = \prod_{i=1}^{N}\sum_{S_i = -1}^{+1} e^{\sum_{j=1}^{N}(kS_jS_{j+1} + hS_j)}$$
or, more symmetrically,

$$Z = \prod_{i=1}^{N}\sum_{S_i = -1}^{+1} e^{\sum_j\left[kS_jS_{j+1} + \frac{1}{2}h(S_j + S_{j+1})\right]} = \prod_{i=1}^{N}\sum_{S_i = -1}^{+1}\prod_{j=1}^{N} e^{kS_jS_{j+1} + \frac{1}{2}h(S_j + S_{j+1})}$$
Let the matrix elements of $P$ be defined by

$$\langle S_j|P|S_{j+1}\rangle \equiv e^{kS_jS_{j+1} + \frac{1}{2}h(S_j + S_{j+1})}$$
There are only four elements:

$$\begin{aligned}
\langle +1|P|+1\rangle &= e^{k+h}\\
\langle -1|P|-1\rangle &= e^{k-h}\\
\langle +1|P|-1\rangle &= e^{-k}\\
\langle -1|P|+1\rangle &= e^{-k}
\end{aligned}$$
Or, in matrix form,
$$P = \begin{bmatrix} e^{k+h} & e^{-k}\\ e^{-k} & e^{k-h}\end{bmatrix}$$
The partition function is

$$Z = \prod_{i=1}^{N}\sum_{S_i = -1}^{+1}\langle S_1|P|S_2\rangle\langle S_2|P|S_3\rangle\dots\langle S_N|P|S_1\rangle = \sum_{S_1 = -1}^{+1}\langle S_1|P\Big(\sum_{S_2 = -1}^{+1}|S_2\rangle\langle S_2|\Big)P\Big(\sum_{S_3 = -1}^{+1}|S_3\rangle\langle S_3|\Big)\dots P|S_1\rangle$$
But, clearly, the identity operator is

$$1 \equiv \sum_S |S\rangle\langle S|$$
so,
$$Z = \sum_{S_1 = -1}^{+1}\langle S_1|P^N|S_1\rangle = \mathrm{Trace}(P^N)$$

Say the eigenvalues of $P$ are $\lambda_+$ and $\lambda_-$. Then
$$Z = \lambda_+^N + \lambda_-^N$$
Finally, say $\lambda_+ > \lambda_-$. Hence as $N \to \infty$,
$$Z = e^{-F/k_BT} = \lambda_+^N$$
and
$$F = -Nk_BT\ln\lambda_+$$
It remains only to find the eigenvalues of $P$. We get them from
$$\det(\lambda - P) = 0$$
This gives
$$(\lambda - e^{k+h})(\lambda - e^{k-h}) - e^{-2k} = 0$$
or,
$$\lambda^2 - 2\lambda\underbrace{\left(\frac{e^{k+h} + e^{k-h}}{2}\right)}_{e^k\cosh h} + e^{2k} - e^{-2k} = 0$$
The solutions are
$$\lambda_\pm = \frac{2e^k\cosh h \pm \sqrt{4e^{2k}\cosh^2 h - 4e^{2k} + 4e^{-2k}}}{2} = e^k\left[\cosh h \pm \sqrt{\sinh^2 h + e^{-4k}}\right]$$
So,
$$\lambda_+ = e^k\left(\cosh h + \sqrt{\sinh^2 h + e^{-4k}}\right)$$
is the larger eigenvalue. The free energy is
$$F = -Nk_BT\left[k + \ln\left(\cosh h + \sqrt{\sinh^2 h + e^{-4k}}\right)\right]$$
where $k = J/k_BT$ and $h = H/k_BT$. The magnetization per spin is
$$m = -\frac{1}{N}\frac{\partial F}{\partial H} = -\frac{1}{Nk_BT}\frac{\partial F}{\partial h}$$
Taking the derivative and simplifying, one obtains
$$m(h, k) = \frac{\sinh h}{\sqrt{\sinh^2 h + e^{-4k}}}$$
As anticipated, $m(h = 0, k) = 0$. There is magnetization for $h \neq 0$, although there is no broken symmetry, of course. There is a phase transition at $T = 0$.
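The transfer-matrix result can be checked against direct enumeration for small $N$. A sketch (not from the notes; the parameter values are arbitrary):

```python
import itertools
import numpy as np

# Sketch: Z for the Ising ring from the transfer matrix,
# Z = Tr P^N = lam_+^N + lam_-^N, checked against brute-force
# enumeration over all 2^N states.  k = J/kB T, h = H/kB T.
k, h, N = 0.4, 0.25, 8

P = np.array([[np.exp(k + h), np.exp(-k)],
              [np.exp(-k),    np.exp(k - h)]])
lam = np.linalg.eigvalsh(P)               # P is real symmetric
Z_tm = (lam ** N).sum()

Z_enum = sum(np.exp(sum(k * S[j] * S[(j + 1) % N] + h * S[j] for j in range(N)))
             for S in itertools.product([1, -1], repeat=N))
print(Z_tm, Z_enum)                       # the two agree

# Magnetization per spin from the exact formula above:
m = np.sinh(h) / np.sqrt(np.sinh(h) ** 2 + np.exp(-4 * k))
print(m)                                  # nonzero for h != 0
```

The enumeration costs $2^N$ terms while the transfer matrix costs a $2\times 2$ eigenvalue problem, which is the whole point of the method.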

Figure 14.9: Magnetization of the 1-d Ising model versus field $H$, at low and high $T$.

14.3 Mean–Field Theory

There is a very useful approximate treatment of the Ising model. We will now work out the simplest such "mean–field" theory, due in this case to Bragg and Williams. The idea is that each spin is influenced by the local magnetization, or equivalently the local field, of its surrounding neighbors. In a magnetized, or demagnetized, state, that local field does not fluctuate very much. Hence, the idea is to replace the local field, determined by a local magnetization, with an average field, determined by the average magnetization. "Mean" field means average field, not nasty field. It is easy to evaluate the partition function if there is only a field, and no interactions. This is a (very simple) paramagnet. Its energy is

$$E_{\text{State}} = -H \sum_i S_i$$
and,

$$Z = \prod_i \sum_{S_i = \pm 1} e^{\frac{H}{k_B T}\sum_j S_j} = \Big(\sum_{S_1 = \pm 1} e^{\frac{H}{k_B T} S_1}\Big)\cdots\Big(\sum_{S_N = \pm 1} e^{\frac{H}{k_B T} S_N}\Big) = \Big(\sum_{S = \pm 1} e^{\frac{H}{k_B T} S}\Big)^N$$
so,

$$Z = \Big(2\cosh\frac{H}{k_B T}\Big)^N$$
and
$$F = -N k_B T \ln\Big(2\cosh\frac{H}{k_B T}\Big)$$
so, since
$$m = -\frac{1}{N}\,\frac{\partial F}{\partial H}$$
then
$$m = \tanh\frac{H}{k_B T}$$
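This very simple paramagnet can be checked directly; a minimal sketch (my own, not part of the notes) just does the two-state Boltzmann average:

```python
import math

def paramagnet_m(H_over_kBT):
    """Magnetization per spin of the noninteracting paramagnet: two-state average."""
    w_up = math.exp(H_over_kBT)    # Boltzmann weight of S = +1
    w_dn = math.exp(-H_over_kBT)   # Boltzmann weight of S = -1
    return (w_up - w_dn) / (w_up + w_dn)
```

The average reproduces $m = \tanh(H/k_B T)$ identically, since $(e^x - e^{-x})/(e^x + e^{-x}) = \tanh x$.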

For the interacting Ising model we need to find the effective local field $H_i$ at site $i$.

Figure 14.10: Noninteracting paramagnet solution, $m = \tanh(H/k_B T)$, for $H > 0$.

The Ising model is

$$E_{\text{State}} = -J\sum_{\langle ij\rangle} S_i S_j - H\sum_i S_i$$
We want to replace this — in the partition-function weighting — with

$$E_{\text{State}} = -\sum_i H_i S_i$$
Clearly
$$H_i = -\frac{\partial E}{\partial S_i}$$
Thus, from the Ising model,
$$-\frac{\partial E}{\partial S_i} = J\sum_{\langle jk\rangle}(S_j \delta_{k,i} + \delta_{j,i} S_k) + H = 2J \sum_{\substack{j \text{ neighbors of } i \\ \text{(no double counting)}}} S_j + H$$

Now let’s get rid of the 2 and the double–counting restriction. Then

$$H_i = J \sum_{j \text{ neighbors of } i} S_j + H$$
is the effective local field. To do mean–field theory we let

$$H_i \approx \langle H_i \rangle$$
where

$$\langle H_i \rangle = J \sum_{j \text{ neighbors}} \langle S_j \rangle + H$$
but all spins are the same on average, so

$$\langle H_i \rangle = qJ \langle S \rangle + H$$
where $q$ is the number of nearest neighbors, and the weighting of states is given by

$$E_{\text{State}} = -\underbrace{(qJ\langle S\rangle + H)}_{\text{mean field, } \langle H\rangle} \sum_i S_i$$
For example, the magnetization is

$$m = \langle S \rangle = \frac{\sum_{S = \pm 1} S\, e^{\langle H\rangle S / k_B T}}{\sum_{S = \pm 1} e^{\langle H\rangle S / k_B T}}$$
where $\langle S\rangle = \langle S_1\rangle$, and contributions due to $S_2, S_3, \dots$ cancel from top and bottom. So
$$m = \tanh\frac{\langle H\rangle}{k_B T}$$
or

$$m = \tanh\frac{qJm + H}{k_B T}$$
This gives a critical temperature

$$k_B T_c = qJ$$
as we will now see. Consider $H = 0$. Then
$$m = \tanh\frac{m}{T/T_c}$$
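The self-consistent equation is easy to solve numerically. Here is a minimal sketch (the function name and the fixed-point iteration scheme are my own choices, not from the notes):

```python
import numpy as np

def mean_field_m(T_over_Tc, tol=1e-12, max_iter=100000):
    """Solve m = tanh(m / (T/Tc)) at H = 0 by fixed-point iteration, from m = 1."""
    m = 1.0
    for _ in range(max_iter):
        m_new = np.tanh(m / T_over_Tc)
        if abs(m_new - m) < tol:
            return m_new
        m = m_new
    return m
```

Above $T_c$ the iteration collapses to $m = 0$; below $T_c$ it converges to the spontaneous magnetization, which vanishes on approach to $T_c$ as worked out below.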

Close to $T_c$, $m$ is close to zero, so we can expand the tanh function:
$$m = \frac{m}{T/T_c} - \frac{1}{3}\frac{m^3}{(T/T_c)^3} + \dots$$

One solution is $m = 0$. This is stable above $T_c$, but unstable below $T_c$.

Rearranging

$$m^2 = 3\Big(\frac{T}{T_c}\Big)^2\Big(1 - \frac{T}{T_c}\Big) + \dots$$

Or,

$$m = \sqrt{3}\,\frac{T}{T_c}\Big(1 - \frac{T}{T_c}\Big)^{1/2}$$
for small $m$, $T < T_c$.

Close to $T_c$, mean field gives
$$m \propto (1 - T/T_c)^{1/2}$$
a parabola near $T_c$, as drawn in fig. 14.12 (within mean field theory).

Figure 14.11: $m \propto (1 - T/T_c)^{1/2}$ close to $T_c$.

Figure 14.12: Parabola near $m = 0$, $T = T_c$, $H = 0$. The "classical" exponent $\beta = 1/2$ is the same as a parabola.

This is interesting, and useful, and easy to generalize, but only qualitatively correct. Note
$$k_B T_c = qJ = \begin{cases} 2J, & d = 1 \\ 4J, & d = 2 \\ 6J, & d = 3 \end{cases}$$

The result for $d = 1$ is wildly wrong, since the true $T_c = 0$. In $d = 2$, $k_B T_c \approx 2.27 J$, so the estimate is poor. Also, if we take the exact result from Onsager's solution,

$m \propto (1 - T/T_c)^{1/8}$, so $\beta = 1/8$ in $d = 2$. In three dimensions $k_B T_c \approx 4.5 J$, and estimates are $\beta = 0.313...$. Strangely enough, for $d \geq 4$, $\beta = 1/2$ (although there are logarithmic corrections in $d = 4$)! For $T \equiv T_c$, but $H \neq 0$, we have
$$m = \tanh\Big(m + \frac{H}{k_B T_c}\Big)$$
or

$$H = k_B T_c\,(\tanh^{-1} m - m)$$
Expanding gives

$$H = k_B T_c\Big(m + \frac{1}{3}m^3 + \dots - m\Big)$$
so
$$H = \frac{k_B T_c}{3}\, m^3$$
for $m$ close to zero, at $T = T_c$. This defines another critical exponent:
$$H \propto m^\delta$$
for $T = T_c$, where $\delta = 3$ in mean–field theory.

Figure 14.13: Shape of the critical isotherm at $T = T_c$: $H \propto m^3$, a cubic, in classical theory.

The susceptibility is
$$\chi_T \equiv \Big(\frac{\partial m}{\partial H}\Big)_T$$
This is like the compressibility in liquid–gas systems,
$$\kappa_T = -\frac{1}{V}\Big(\frac{\partial V}{\partial P}\Big)_T$$
Note the correspondence between density and magnetization per spin, and between pressure and field. The susceptibility "asks the question": how easy is it to magnetize something with some small field? A large $\chi_T$ means the material is very "susceptible" to being magnetized.

Also, as an aside, the “sum rule” for liquids

$$\int d\vec{r}\, \langle \Delta n(r)\, \Delta n(0)\rangle = n^2 k_B T \kappa_T$$
has an analog for magnets:

$$\int d\vec{r}\, \langle \Delta m(r)\, \Delta m(0)\rangle \propto \chi_T$$
We do not need the proportionality constant. From above, we have
$$m = \tanh\frac{k_B T_c\, m + H}{k_B T}$$
or,
$$H = k_B T \tanh^{-1} m - k_B T_c\, m$$
and
$$\Big(\frac{\partial H}{\partial m}\Big)_T = k_B T\, \frac{1}{1 - m^2} - k_B T_c$$
so
$$\chi_T \propto \frac{1}{T - T_c}$$
close to $T_c$. Or
$$\chi_T \propto |T - T_c|^{-\gamma}$$
where $\gamma = 1$. This is true for $T \to T_c^+$, or $T \to T_c^-$.

Figure 14.14: The susceptibility $\chi_T$ diverges near $T_c$.

To get the energy, and so the heat capacity, consider

$$\langle E \rangle = \Big\langle -J \sum_{\langle ij\rangle} S_i S_j - H \sum_i S_i \Big\rangle$$
All the spins are independent, in mean–field theory, so

$$\langle S_i S_j \rangle = \langle S_i\rangle \langle S_j\rangle$$
Hence
$$\langle E \rangle = -\frac{Jq}{2} N m^2 - HNm$$

The specific heat is
$$c = \frac{1}{N}\Big(\frac{\partial \langle E\rangle}{\partial T}\Big)_N$$
Near $T_c$, and at $H = 0$, we have $m^2 \approx 3(T/T_c)^2(1 - T/T_c)$, so
$$c = \begin{cases} \frac{3}{2}\, k_B, & T \to T_c^- \\[2pt] 0, & T \to T_c^+ \end{cases}$$

If we include temperature dependence away from $T_c$, the specific heat looks more like a sawtooth, as drawn in fig. 14.15.

Figure 14.15: Discontinuity in $C$ in mean field theory.

This discontinuity in $c$, a second derivative of the free energy, is one reason continuous phase transitions were called "second order". Now it is known that this is an artifact of mean field theory. It is written as
$$c \propto |T - T_c|^{-\alpha}$$
where the critical exponent $\alpha = 0$ (disc.). To summarize, near the critical point,

$$m \sim (T_c - T)^\beta, \quad T < T_c,\ H = 0$$
$$\chi_T \sim |T - T_c|^{-\gamma}, \quad H = 0$$
$$H \sim m^\delta, \quad T = T_c$$
$$c \sim |T - T_c|^{-\alpha}, \quad H = 0$$

The exponents are:

             mean field   d = 2 (exact)   d = 3 (num)   d ≥ 4
    α        0 (disc.)    0 (log)         0.125...      0 (disc.)
    β        1/2          1/8             0.313...      1/2
    γ        1            7/4             1.25...       1
    δ        3            15              5.0...        3

Something is happening at $d = 4$. We will see later that mean–field theory works for $d \geq 4$. For now we will leave the Ising model behind.

Figure 14.16: Variation of the critical exponent $\beta$ with spatial dimension $d$: no transition for $d = 1$, nontrivial exponents for $d = 2, 3$, and the mean-field value $\beta = 1/2$ for $d \geq 4$.

Chapter 15

Landau Theory and Ornstein–Zernike Theory

Nowadays, it is well established that, near the critical point, one can either solve the Ising model
$$E = -J\sum_{\langle ij\rangle} S_i S_j - H\sum_i S_i$$
with
$$Z = \prod_i \sum_{S_i = \pm 1} e^{-E/k_B T}$$
or the $\psi^4$ field theory

$$F = \int d^d x\, \Big\{\frac{K}{2}(\nabla\psi)^2 + \frac{r}{2}\psi^2 + \frac{u}{4}\psi^4\Big\} - H\int d^d x\, \psi(\vec{x})$$
where $K$, $u$, and $H$ are constants, and

$$r \propto T - T_c$$
with the partition function

$$Z = \int \mathcal{D}\psi(\cdot)\, e^{-F}$$
where the $k_B T$ factor has been set to $k_B T_c$, and incorporated into $K$, $r$, $u$ and $H$ (this is OK near $T_c$). Look back in the notes for the discussion of the discrete Gaussian model versus the continuous drumhead model of interfaces. Although it may not look like it, the field theory model is a very useful simplification of the Ising model, wherein no important features have been lost.


To motivate their similarities, consider the following. First, the mapping between pieces involving the external field only requires getting used to notation:
$$\text{Ising Model: } -H\sum_{i=1}^{N} S_i \quad\Longleftrightarrow\quad \psi^4 \text{ Field Theory: } -H\int d^d x\, \psi(\vec{x})$$
One is simply the continuum limit of the other. The square gradient term is likewise straightforward, as we now see:
$$\frac{J}{2}\sum_{i=1}^{N}\ \sum_{j \text{ neighbors of } i}(S_i - S_j)^2 \quad\Longleftrightarrow\quad K\int d^d x\, (\nabla\psi)^2$$
using $S_i^2 = 1$, so that the left side is $\text{const.} - J\sum_{\langle ij\rangle} S_i S_j$. The terms harder to see are the $\int d^d\vec{x}\,\big(\frac{r}{2}\psi^2 + \frac{u}{4}\psi^4\big)$ terms. Let us define
$$f(\psi) = \frac{r}{2}\psi^2 + \frac{u}{4}\psi^4$$
This is the free energy, per unit volume, for a homogeneous system in the $\psi^4$ field theory (that is, where $\vec\nabla\psi \equiv 0$), in the absence of an external field. Note
$$r > 0 \text{ for } T > T_c, \qquad r < 0 \text{ for } T < T_c$$

Figure 15.1: "Free energy"/volume $f(\psi)$, with no $\vec\nabla\psi$ and no $H$: a single well for $T > T_c$, a double well for $T < T_c$. This is similar to Van der Waals. (In fact, the whole $\psi^4$ theory is originally due to Van der Waals.)

Note that, since $\psi$ is determined by the minimum of $F$, this single or double well restricts $\psi$ to be within a small range. Otherwise, $\psi$ could have any value, since
$$\int \mathcal{D}\psi(\cdot) = \prod_{\vec{x}} \int_{-\infty}^{\infty} d\psi(\vec{x})$$

These unphysical values of $\psi$ are suppressed in the free energy. In the Ising model, the restriction is $S_i = \pm 1$. We can lift this restriction with a Kronecker delta:

$$\prod_i \sum_{S_i = -1}^{+1} = \prod_i \sum_{S_i = -\infty}^{\infty} \delta_{S_i^2,\,1} = \prod_i \sum_{S_i = -\infty}^{\infty} e^{-\ln(1/\delta_{S_i^2,\,1})}$$
So, although this is awkward and artificial, it is sort of the same thing. Expanding it out, for small $S_i$,
$$-\ln(1/\delta_{S_i^2,\,1}) \approx -S_i^2 - \mathcal{O}(S_i^4)$$
the double–well form. This can be done in a very precise way near $T_c$. Here I am just trying to give an idea of what is going on. So the $\psi^4$ theory is similar to the Ising model, where $\psi(\vec{x})$ is the local magnetization. We will be concerned with the critical exponents $\alpha$, $\beta$, $\gamma$, $\delta$, $\eta$, and $\nu$, defined as follows:

$$F(T) \sim |T - T_c|^{2-\alpha}, \quad H = 0$$

so that
$$C = \frac{\partial^2 F}{\partial T^2} \sim |T - T_c|^{-\alpha}$$
$$\psi \sim (T_c - T)^\beta, \quad H = 0,\ T < T_c$$
$$\chi_T \sim |T - T_c|^{-\gamma}, \quad H = 0$$
$$\psi \sim H^{1/\delta}, \quad T = T_c$$
$$C(x) \sim \frac{1}{x^{d-2+\eta}}, \quad T = T_c,\ H = 0$$

$$\xi \sim |T - T_c|^{-\nu}, \quad H = 0$$
This is the first time we have mentioned $\eta$ and $\nu$ in this context. Landau motivates the $\psi^4$ theory in a more physical way. He asserts that a continuous phase transition involves going from a disordered phase, with lots of symmetry, to an ordered phase, with a lower symmetry. To describe the transition we need a variable, called the order parameter, which is nonzero in the ordered phase only. For example, in a magnet, the order parameter is the magnetization. In general, identifying the order parameter, and the symmetry

broken at $T_c$, can be quite subtle. This will take us too far afield, so we will only consider a scalar order parameter, where the symmetry broken is $Z_2$, that is, where $\psi \leftrightarrow -\psi$ symmetry is spontaneously broken. (Note that a magnetic spin confined to dimension $d = 2$, wherein all directions are equivalent, has an order parameter $\vec\psi = \psi_x\hat{x} + \psi_y\hat{y}$; the group is $U(1)$. A spin in $d = 3$ has $\vec\psi = \psi_x\hat{x} + \psi_y\hat{y} + \psi_z\hat{z}$, if all directions are equivalent; the group is $O(3)$, I think.) So Landau argues that the free energy describing the transition must be an analytic function of $\psi$. Hence

$$F = \int d^d x\, f(\psi)$$
where
$$f(\psi) = \frac{r}{2}\psi^2 + \frac{u}{4}\psi^4 + \dots$$
No odd terms appear, so that the

$$\psi \leftrightarrow -\psi$$
symmetry is preserved. In the event an external field breaks the symmetry, another term is added to the free energy, namely

$$-H\int d^d x\, \psi$$
At high temperatures, in the disordered phase, we want $\psi = 0$. Hence

$$r > 0 \text{ for } T > T_c$$
However, at low temperatures, in the ordered phase, we must have $\psi \neq 0$. Hence
$$r < 0 \text{ for } T < T_c$$
(To make sure everything is still well defined, we need $u > 0$. There is no need for $u$ to have any special dependence on $T$ near $T_c$, so we take it to be a constant. Or, if you prefer:
$$u(T) \approx u(T_c) + \underbrace{(\text{Const.})(T - T_c) + \dots}_{\text{unimportant}}$$
near $T_c$. So we simply set $u$ to be constant.) To find $r$'s evident $T$ dependence, we Taylor expand:

$$r = r_0(T - T_c) + \dots$$
keeping only the lowest–order term, where $r_0 > 0$. Finally, we know that spatial fluctuations should be suppressed. Expanding in powers of $\vec\nabla$ and $\psi$, the lowest-order nontrivial term is
$$\frac{1}{2} K \int d^d r\, |\vec\nabla\psi|^2$$
where $K > 0$. Similar terms, like $\int \psi\nabla^2\psi$, turn into this after integration by parts. Note that this term means inhomogeneities ($\vec\nabla\psi \neq 0$) add free energy. Now we're done. The Landau free energy is

$$F = \int d^d x\, \Big\{f(\psi) + \frac{K}{2}|\vec\nabla\psi|^2\Big\} - H\int d^d x\, \psi$$
with
$$f(\psi) = \frac{r}{2}\psi^2 + \frac{u}{4}\psi^4$$
In high–energy physics, this is called a $\psi^4$ field theory. The $\nabla^2$ term gives kinetic energy; $r \propto$ mass of the particle; and $\psi^4$ gives interactions. Van der Waals wrote down the same free energy (the case with $K \equiv 0$ is what is given in the textbooks). He was thinking of liquid–gas transitions, so $\psi = n - n_c$, where $n$ is the density and $n_c$ is the density at the critical point. Finally, from the theory of thermodynamic fluctuations, we should expect that (in the limit in which the square gradient and quartic terms can be neglected)

$$r \propto 1/\kappa_T$$
or equivalently
$$r \propto 1/\chi_T$$
It is worth emphasizing that the whole machinery of quadratic, quartic, and square gradient terms is necessary. For example, note that the critical point is defined thermodynamically as

$$\Big(\frac{\partial P}{\partial V}\Big)_{T_c} = 0$$
and

$$\Big(\frac{\partial^2 P}{\partial V^2}\Big)_{T_c} = 0$$
in a liquid–gas system. In a magnet:

$$\Big(\frac{\partial H}{\partial m}\Big)_{T_c} = 0, \quad\text{and}\quad \Big(\frac{\partial^2 H}{\partial m^2}\Big)_{T_c} = 0$$
Trying to find the equation of state for $T \equiv T_c$ in a liquid, we expand:
$$P(n, T_c) = P_c + \Big(\frac{\partial P}{\partial n}\Big)_{T_c}(n - n_c) + \frac{1}{2}\Big(\frac{\partial^2 P}{\partial n^2}\Big)_{T_c}(n - n_c)^2 + \frac{1}{6}\Big(\frac{\partial^3 P}{\partial n^3}\Big)_{T_c}(n - n_c)^3 + \dots$$
For a magnet,

$$H(m, T_c) = \underbrace{H_c}_{0} + \underbrace{\Big(\frac{\partial H}{\partial m}\Big)_{T_c}}_{0}(m - m_c) + \frac{1}{2}\Big(\frac{\partial^2 H}{\partial m^2}\Big)_{T_c} m^2 + \frac{1}{6}\Big(\frac{\partial^3 H}{\partial m^3}\Big)_{T_c} m^3 + \dots$$
where $H_c = 0$ and $m_c = 0$ for the magnet.

So, at the critical point for both systems — from thermodynamics and analyticity only — we have
$$(P - P_c) \propto (n - n_c)^\delta$$
and
$$H \propto m^\delta$$
where $\delta = 3$. Experimentally, $\delta \approx 5.0$ in three dimensions, while $\delta = 15$ exactly in $d = 2$. So we have a big, subtle problem to solve!

15.1 Landau Theory

We now solve for the homogeneous states of the system. In that case $\vec\nabla\psi = 0$, so we need only consider

$$F/V = f(\psi) - H\psi = \frac{r_0}{2}(T - T_c)\psi^2 + \frac{u}{4}\psi^4 - H\psi$$
For $H = 0$, we find the solution by minimizing $F/V$ (or $f$):

$$\frac{\partial f}{\partial \psi} = 0 = r_0(T - T_c)\psi + u\psi^3$$

There are 3 solutions

$$\psi = 0 \quad (\text{stable for } T > T_c)$$

$$\psi = \pm\sqrt{\frac{r_0(T_c - T)}{u}} \quad (\text{stable for } T < T_c)$$
(To check stability, consider $\partial^2 f/\partial\psi^2$.) For $H \neq 0$, only one solution is stable for $T < T_c$. Note that the stable solution gives $\psi \propto (T_c - T)^{1/2}$, so $\beta = 1/2$. One also finds the susceptibility

$$\chi_T \propto |T - T_c|^{-1}$$
so $\gamma = 1$.

Also, at $T = T_c$, this gives
$$H \propto \psi^3$$
so $\delta = 3$. Finally, note that (at $H = 0$)
$$f = \frac{1}{2} r\psi^2 + \frac{1}{4} u\psi^4$$
but
$$\psi^2 = \begin{cases} 0, & T > T_c \\[2pt] -r/u, & T < T_c \end{cases}$$
so

$$f = \begin{cases} 0, & T > T_c \\[2pt] -\dfrac{1}{4}\dfrac{r^2}{u}, & T < T_c \end{cases}$$
Since $r \propto (T - T_c)$, $f \propto (T - T_c)^2$ below $T_c$; the second temperature derivative, and so the specific heat, is then discontinuous at $T_c$. In summary, Landau theory gives:

$$\alpha = 0 \text{ (disc.)}, \quad \beta = 1/2, \quad \gamma = 1, \quad \delta = 3$$
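The homogeneous Landau solution can be checked by minimizing $f$ numerically. A small sketch (the grid minimization and the parameter values $r_0 = u = 1$, $T_c = 1$ are my own choices):

```python
import numpy as np

def landau_psi(T, Tc=1.0, r0=1.0, u=1.0):
    """Minimize f(psi) = (r/2) psi^2 + (u/4) psi^4 on a grid; r = r0 (T - Tc)."""
    r = r0 * (T - Tc)
    psi = np.linspace(-2.0, 2.0, 400001)
    f = 0.5 * r * psi**2 + 0.25 * u * psi**4
    return abs(psi[np.argmin(f)])  # magnitude of the minimizing psi

# Below Tc the minimum sits at |psi| = sqrt(r0 (Tc - T) / u); above Tc it is at psi = 0.
```

Halving $T_c - T$ by a factor of 4 halves $|\psi|$, the numerical signature of $\beta = 1/2$.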

15.2 Ornstein–Zernike Theory of Correlations

Ornstein and Zernike give a physically plausible approach to calculating correlation functions. It works well (except too near the critical point). In a bulk phase, say $T \gg T_c$ (and we shall only consider $H = 0$), $\psi$ has an average value of zero:
$$\langle\psi\rangle = 0$$
So we expect $\psi^4 \ll \psi^2$, and, above $T_c$,
$$F \approx \int d^d x \Big[\frac{K}{2}(\nabla\psi)^2 + \frac{r}{2}\psi^2\Big]$$

Far below $T_c$, we expect $\psi$ to be close to its average value
$$\langle\psi\rangle = \pm\sqrt{-\frac{r}{u}}$$
where $r = r_0(T - T_c)$ is negative below $T_c$. Let
$$\Delta\psi \equiv \psi - \langle\psi\rangle$$
Then
$$\vec\nabla\psi = \vec\nabla(\Delta\psi + \langle\psi\rangle) = \vec\nabla(\Delta\psi)$$
Also,
$$\psi^2 = (\Delta\psi + \langle\psi\rangle)^2 = (\Delta\psi)^2 + 2\langle\psi\rangle\,\Delta\psi + \langle\psi\rangle^2$$
and

$$\psi^4 = (\Delta\psi + \langle\psi\rangle)^4 = (\Delta\psi)^4 + 4\langle\psi\rangle(\Delta\psi)^3 + 6\langle\psi\rangle^2(\Delta\psi)^2 + 4\langle\psi\rangle^3\Delta\psi + \langle\psi\rangle^4$$
Hence, grouping terms by powers of $\Delta\psi$,
$$\frac{r}{2}\psi^2 + \frac{u}{4}\psi^4 = \text{const.} + (r\langle\psi\rangle + u\langle\psi\rangle^3)\Delta\psi + \Big(\frac{r}{2} + \frac{3u}{2}\langle\psi\rangle^2\Big)(\Delta\psi)^2 + \mathcal{O}\big((\Delta\psi)^3\big)$$
But $\langle\psi\rangle^2 = -r/u$, so the linear term vanishes and
$$\frac{r}{2}\psi^2 + \frac{u}{4}\psi^4 = \text{const.} - r(\Delta\psi)^2 + \mathcal{O}\big((\Delta\psi)^3\big)$$
But $(\Delta\psi)^3 \ll (\Delta\psi)^2$, so, below $T_c$,
$$F \approx \int d^d x\, \Big[\frac{K}{2}(\nabla\Delta\psi)^2 + |r|(\Delta\psi)^2\Big]$$
where we have used the fact that $r$ is negative below $T_c$ to write $r = -|r|$. Both approximate free energies are pretty much the same as the free energy for the interface model:
$$\frac{\sigma}{2}\int d^{d-1}x\, \Big(\frac{\partial h}{\partial x}\Big)^2$$
They can be solved exactly the same way. The definitions of Fourier transforms are the same (except for $d^{d-1}x \to d^d x$, and $L^{d-1} \to L^d$, for example), and the arguments concerning translational invariance of space, and isotropy of space, still hold. Let
$$C(x) \equiv \langle\psi(x)\,\psi(0)\rangle$$
Then we have (putting the $k_B T$ back in $e^{-F/k_B T}$)
$$\hat{C}(k) = \frac{k_B T}{K k^2 + r}, \quad T > T_c$$
and, since
$$\langle\Delta\psi(x)\,\Delta\psi(0)\rangle = \langle\psi(x)\,\psi(0)\rangle - \frac{|r|}{u}$$
using $\langle\psi\rangle^2 = |r|/u$, we have
$$\hat{C}(k) = \frac{|r|}{u}(2\pi)^d\delta(k) + \frac{k_B T}{K k^2 + 2|r|}, \quad T < T_c$$

$$G(x) = \begin{cases} \dfrac{\xi}{2}\, e^{-x/\xi}, & d = 1 \\[4pt] \dfrac{1}{2\pi}\, K_0(x/\xi), & d = 2 \\[4pt] \dfrac{1}{4\pi x}\, e^{-x/\xi}, & d = 3 \end{cases}$$
for example. Recall that the modified Bessel function of zeroth order satisfies $K_0(y \to 0) = -\ln y$, and $K_0(y \to \infty) = (\pi/2y)^{1/2} e^{-y}$. This is more than we need. In discussions of critical phenomena, most books state that if
$$\hat{G}(k) \propto \frac{1}{k^2 + \xi^{-2}}$$
then
$$G(x) \propto \frac{1}{x^{d-2}}\, e^{-x/\xi}$$
The reason for the seeming sloppiness is that proportionality constants are unimportant, and it is only long–length–scale phenomena we are interested in. Indeed, we should only trust our results for large $x$, or equivalently small $k$ (since we did a gradient expansion in $\vec\nabla\psi$). In fact, so far as the Fourier transforms go, here is what we can rely on: if
$$\hat{G}(k) \propto \frac{1}{k^2 + \xi^{-2}}$$
then

$$G(x) \propto e^{-x/\xi}$$
unless $\xi = \infty$, in which case
$$G(x) \propto \frac{1}{x^{d-2}}$$
for large $x$. Using these results, we finally obtain the correlation function, up to a constant of proportionality,

$$C(x) = \begin{cases} e^{-x/\xi}, & T > T_c \\[2pt] \dfrac{1}{x^{d-2}}, & T = T_c \\[2pt] \langle\psi\rangle^2 + \mathcal{O}(e^{-x/\xi}), & T < T_c \end{cases}$$
where

$$\xi = \begin{cases} \sqrt{K/r}, & T > T_c \\[2pt] \sqrt{K/(2|r|)}, & T < T_c \end{cases}$$
Note that

$$C(x) = \langle\psi(x)\,\psi(0)\rangle = \langle\psi\rangle^2 + \mathcal{O}(e^{-x/\xi})$$
is a rewriting of

$$\langle\Delta\psi(x)\,\Delta\psi(0)\rangle \propto e^{-x/\xi}$$

In cartoon form: figure 15.2. Note the correlation length satisfies
$$\xi \sim \frac{1}{\sqrt{|r|}}$$

Figure 15.2: Correlation length $\xi \sim 1/\sqrt{|r|}$, or $\xi \sim |T - T_c|^{-1/2}$, diverging at $T_c$.

above and below Tc, so

$$\xi \sim |T - T_c|^{-1/2}$$
in Ornstein–Zernike theory. The correlation function becomes power–law–like at $T_c$.
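The Fourier-transform statements above are easy to verify numerically. A quick sketch (my own check, not from the notes) inverts $\hat{G}(k) = 1/(k^2 + \xi^{-2})$ in $d = 1$ by direct quadrature and compares against the exact $(\xi/2)\,e^{-|x|/\xi}$:

```python
import numpy as np

def G_d1_numeric(x, xi, k_max=400.0, n=400000):
    """G(x) = (1/pi) * integral_0^inf cos(k x) / (k^2 + xi^-2) dk, by trapezoid rule."""
    k = np.linspace(0.0, k_max, n)
    integrand = np.cos(k * x) / (k**2 + xi**-2)
    dk = k[1] - k[0]
    return (integrand.sum() - 0.5 * (integrand[0] + integrand[-1])) * dk / np.pi

# Exact answer in d = 1: (xi / 2) * exp(-|x| / xi).
```

The truncation of the $k$ integral at `k_max` costs an error of order $1/(\pi\, k_{\max})$, which is why the agreement below is at the $10^{-3}$ level rather than machine precision.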

Figure 15.3: Correlation function $C(x)$ above, at, and below $T_c$.

$$C(x) = \frac{1}{x^{d-2}}, \quad T = T_c \ (H = 0)$$
Since
$$\xi \sim |T - T_c|^{-\nu}$$
and
$$C(x) \sim \frac{1}{x^{d-2+\eta}}$$
we have $\nu = 1/2$ and $\eta = 0$. This completes the list of critical exponents. In summary:

             Mean Field Ising   Landau + O–Z   d = 2 Ising   d = 3 Ising   d ≥ 4 Ising
    α        0 (disc.)          0 (disc.)      0 (log)       0.125...      0
    β        1/2                1/2            1/8           0.313...      1/2
    γ        1                  1              7/4           1.25...       1
    δ        3                  3              15            5.0...        3
    η        0                  0              1/4           0.041...      0
    ν        1/2                1/2            1             0.638...      1/2

So something new is needed.

Chapter 16

Scaling Theory

Historically, the field of phase transitions was in a state of turmoil in the 1960's. Sensible, down–to–earth theories like the mean–field approach, Landau's approach, and Ornstein–Zernike theory were not working. A lot of work was put into refining these approaches, with limited success.

For example, the Bragg–Williams mean field theory is quite crude. Refinements can be made by incorporating more local structure. Pathria's book on Statistical Mechanics reviews some of this work. Consider the specific heat of the two–dimensional Ising model. Since Onsager's solution, this has been known exactly. It obeys $C \sim \ln|T - T_c|$ near $T_c$. The refinements make for much better predictions of $C$ away from $T_c$. At $T_c$, all mean field theories give $C$ to be discontinuous. Comparing to the exact result shows this as well. (This

Figure 16.1: Specific heat $C$ versus $T$ for crude, better, and still better mean field theories; all mean field theories give a discontinuous $C$ at $T_c$.


is taken from Stanley's book, "Introduction to Phase Transitions and Critical Phenomena".)

Figure 16.2: Specific heat of the two-dimensional Ising model, exact versus mean-field refinements.

Experiments consistently did not (and do not, of course) agree with Landau's prediction for the liquid–gas or binary-liquid case. Landau gives
$$n - n_c \sim (T_c - T)^\beta$$
with $\beta = 1/2$. Three–dimensional experiments gave $\beta \approx 0.3$ (usually quoted as $1/3$ historically) for many liquids. The most famous paper is by Guggenheim in J. Chem. Phys. 13, 253 (1945).

Figure 16.3:

This is taken from Stanley's book again. The phenomenon at $T_c$, where $\xi \to \infty$, is striking.

Figure 16.4:

Here I show some (quite old) simulations of the Ising model from Stanley's book. Note the "fractal" structure near $T_c$. You can easily program this yourself if you are interested. Nowadays, a PC is much, much faster than one needs to study the two–dimensional Ising model. Strange things are also seen experimentally. For a fluid, imagine we heat it through $T_c$, fixing the pressure so that it corresponds to phase coexistence (i.e., along the liquid–vapor line in the P–T diagram as drawn). The system looks quite different depending on temperature.

Figure 16.5: P–T diagram of a fluid; we heat along the liquid–vapor coexistence line through the critical point.

It is transparent, except for the meniscus separating liquid and vapour, for $T < T_c$; near $T_c$ it becomes a milky-looking, opaque mess; and it is transparent again, with no meniscus, for $T > T_c$.

Figure 16.6: Appearance of the fluid heated along the coexistence line: transparent liquid–vapor coexistence below $T_c$, a milky-looking opaque mess near $T_c$, a transparent gas above $T_c$.

Here is a list of all the independent relations between exponents

$$\alpha + 2\beta + \gamma = 2$$
$$\gamma = \nu(2 - \eta)$$
$$\gamma = \beta(\delta - 1)$$
and
$$\nu d = 2 - \alpha$$
So it turns out there are only two independent exponents. The last equation is called the "hyperscaling" relation. It is the only one involving the spatial dimension $d$. The others are called "scaling" relations. This is because the derivation of them requires scaling arguments.
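These relations can be checked against the exponent tables given earlier. A minimal sketch (my own, using the exactly known $d = 2$ Ising exponents and the mean-field values):

```python
def check_scaling(alpha, beta, gamma, delta, eta, nu, d):
    """True if the three scaling relations and hyperscaling all hold."""
    ok = abs(alpha + 2 * beta + gamma - 2) < 1e-9   # Rushbrooke
    ok &= abs(gamma - nu * (2 - eta)) < 1e-9        # Fisher
    ok &= abs(gamma - beta * (delta - 1)) < 1e-9    # Widom
    ok &= abs(nu * d - (2 - alpha)) < 1e-9          # hyperscaling
    return ok

# d = 2 Ising (exact): alpha=0, beta=1/8, gamma=7/4, delta=15, eta=1/4, nu=1 -> all hold.
# Mean field (alpha=0, beta=1/2, gamma=1, delta=3, eta=0, nu=1/2): the scaling
# relations hold for any d, but hyperscaling only at d = 4.
```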

16.1 Scaling With ξ

Note that near $T_c$, $\xi$ diverges. Also, correlation functions become power–law–like at $T_c$, according to the approximate Ornstein–Zernike theory as well as the exact result for the Ising model in two dimensions. So let us make the bold assertion that, since $\xi \to \infty$ at $T_c$, the only observable information on a large length scale involves quantities proportional to $\xi$. That is, for example, all lengths scale with $\xi$. Let us now derive the hyperscaling relation. The free energy is extensive, and has units of energy, so
$$F = k_B T\, \frac{V}{\xi^d}$$
One can think of $V/\xi^d$ as the number of independent parts of the system. But

$$\xi \sim |T - T_c|^{-\nu}$$
so
$$F \sim |T - T_c|^{\nu d}$$
But the singular behavior in $F$ is from the heat-capacity exponent $\alpha$ via

$$F \sim |T - T_c|^{2-\alpha}$$
Hence
$$\nu d = 2 - \alpha$$
This back–of–the–envelope argument is the fully rigorous (or as rigorous as it gets) argument! The hyperscaling relation works for all systems, except that the mean field solutions must have $d = 4$. To derive the three scaling relations, we assert that correlations are scale invariant:
$$C(x, T, H) = \frac{1}{x^{d-2+\eta}}\, \tilde{C}(x/\xi,\; H\xi^y)$$

Note that, since blocks of spins are correlated on the scale $\xi$, only a little $H$ is needed to magnetize the system. So the effect of $H$ is magnified by $\xi$. The factor is $\xi^y$, where $y$ must be determined. In a finite–size system we would have

$$C(x, T, H, L) = \frac{1}{x^{d-2+\eta}}\, \tilde{C}(x/\xi,\; H\xi^y,\; L/\xi)$$
In any case, in the relation above,

$$\xi \sim |T - T_c|^{-\nu}$$
For ease of writing, let
$$t \equiv |T - T_c|$$
so that $\xi \sim t^{-\nu}$. Then we have
$$C(x, T, H) = \frac{1}{x^{d-2+\eta}}\, \tilde{C}(x t^\nu,\; H/t^\Delta)$$
where $\Delta = y\nu$ is called the "gap exponent". But the magnetic susceptibility satisfies the sum rule

$$\chi_T \propto \int d^d x\, C(x, T, H) = \int d^d x\, \frac{1}{x^{d-2+\eta}}\, \tilde{C}(x t^\nu, H/t^\Delta) = (t^\nu)^{-2+\eta} \int d^d(x t^\nu)\, \frac{\tilde{C}(x t^\nu, H/t^\Delta)}{(x t^\nu)^{d-2+\eta}}$$
so,
$$\chi_T(T, H) = t^{(-2+\eta)\nu}\, \tilde{X}(H/t^\Delta)$$
where
$$\tilde{X}\Big(\frac{H}{t^\Delta}\Big) \equiv \int_{-\infty}^{\infty} d^d(x t^\nu)\, \frac{1}{(x t^\nu)^{d-2+\eta}}\, \tilde{C}(x t^\nu, H/t^\Delta)$$
But
$$\chi_T(T, H = 0) = t^{-\gamma}$$
Hence
$$\gamma = (2 - \eta)\nu$$
called Fisher's law. Now note
$$\frac{\partial\psi}{\partial H}(T, H) = \chi_T(T, H)$$
so, integrating, we have the indefinite integral

$$\psi(T, H) = \int dH\, \chi_T(T, H) = \int dH\, t^{-\gamma}\, \tilde{X}\Big(\frac{H}{t^\Delta}\Big) = t^{\Delta-\gamma}\int d\Big(\frac{H}{t^\Delta}\Big)\, \tilde{X}\Big(\frac{H}{t^\Delta}\Big) = t^{\Delta-\gamma}\, \tilde\psi(H/t^\Delta)$$

But
$$\psi(T, H = 0) = t^\beta$$
So
$$\beta = \Delta - \gamma$$
and the gap exponent is
$$\Delta = \beta + \gamma$$
or equivalently, $y = (\beta + \gamma)/\nu$. So we have

$$\psi(T, H) = t^\beta\, \tilde\psi\Big(\frac{H}{t^{\beta+\gamma}}\Big)$$
Here is a neat trick for changing the dependence "out front" from $t$ to $H$. We don't know what the function $\tilde\psi$ is. All we know is that it is a function of $H/t^{\beta+\gamma}$. Any factor of $H/t^{\beta+\gamma}$ that we multiply or divide into $\tilde\psi$ leaves another unknown function, which is still a function of only $H/t^{\beta+\gamma}$. In particular, a crafty choice can cancel the $t^\beta$ factor:

$$\psi(T, H) = t^\beta\Big(\frac{H}{t^{\beta+\gamma}}\Big)^{\frac{\beta}{\beta+\gamma}}\Big(\frac{H}{t^{\beta+\gamma}}\Big)^{-\frac{\beta}{\beta+\gamma}}\, \tilde\psi\Big(\frac{H}{t^{\beta+\gamma}}\Big)$$

We call the last two factors on the right–hand side $\tilde{\tilde\psi}$, and obtain

$$\psi(T, H) = H^{\beta/(\beta+\gamma)}\, \tilde{\tilde\psi}\Big(\frac{H}{t^{\beta+\gamma}}\Big)$$

Note that $\tilde{\tilde\psi}(H/t^{\beta+\gamma}) \equiv (H/t^{\beta+\gamma})^{-\beta/(\beta+\gamma)}\, \tilde\psi(H/t^{\beta+\gamma})$. Now,

$$\psi(T_c, H) = H^{1/\delta}$$
is the shape of the critical isotherm, so $\delta = 1 + \gamma/\beta$, or

$$\gamma = \beta(\delta - 1)$$
called the Widom scaling law.
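The scaling form $\psi = t^\beta\,\tilde\psi(H/t^{\beta+\gamma})$ can be seen directly in mean-field theory. Below is a sketch (my own construction, not from the notes) that solves the mean-field equation of state $m = \tanh(m/\theta + h)$, with $\theta = T/T_c$ slightly above 1, and checks that $m/t^\beta$ depends on $h/t^{\beta+\gamma}$ alone, with $\beta = 1/2$ and $\gamma = 1$:

```python
import numpy as np

def solve_m(theta, h, tol=1e-14, max_iter=200000):
    """Solve m = tanh(m/theta + h) by fixed-point iteration (theta = T/Tc > 1)."""
    m = 0.5
    for _ in range(max_iter):
        m_new = np.tanh(m / theta + h)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

def scaled_m(t, h_scaled):
    """m / t^beta at field h = h_scaled * t^(beta+gamma), with beta=1/2, gamma=1."""
    return solve_m(1.0 + t, h_scaled * t**1.5) / t**0.5

# For small t, scaled_m(t, c) depends (to leading order) only on c: data collapse.
```

Shrinking $t$ by a factor of four changes the scaled magnetization by well under a percent, while the raw magnetization changes by a factor of two: that is the collapse.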

Note that ψ = ∂F/∂H, so

$$F = \int dH\, \psi = \int dH\, t^\beta\, \tilde\psi\Big(\frac{H}{t^{\beta+\gamma}}\Big) = t^{2\beta+\gamma}\int d\Big(\frac{H}{t^{\beta+\gamma}}\Big)\, \tilde\psi\Big(\frac{H}{t^{\beta+\gamma}}\Big)$$
or,
$$F(T, H) = t^{2\beta+\gamma}\, \tilde{F}\Big(\frac{H}{t^{\beta+\gamma}}\Big)$$
But
$$F(T, 0) = t^{2-\alpha}$$
so $2 - \alpha = 2\beta + \gamma$, or
$$\alpha + 2\beta + \gamma = 2$$
which is Rushbrooke's law. The form for the free energy is
$$F(T, H) = t^{2-\alpha}\, \tilde{F}\Big(\frac{H}{t^{\beta+\gamma}}\Big)$$
Note that
$$\lim_{H\to 0} \frac{\partial^n F}{\partial H^n} = t^{2-\alpha}\, t^{-n(\beta+\gamma)}$$
or
$$F^{(n)} = t^{2-\alpha-n(\beta+\gamma)}$$
So successive derivatives of $F$ are separated by a constant "gap" exponent $(\beta + \gamma)$.

$$y_n = 2 - \alpha - n(\beta + \gamma)$$

Figure 16.7: Gap exponent: $y_n$ versus $n$ ($n = 0$ free energy, $n = 1$ magnetization, $n = 2$ susceptibility), with successive values separated by the constant gap $\beta + \gamma$.

Finally, we will give another argument for the hyperscaling relation. It is not as compelling as the one above, but it is interesting.

Near $T_c$, we assert that fluctuations in the order parameter are of the same order as $\psi$ itself. Hence
$$\frac{\langle(\Delta\psi)^2\rangle}{\langle\psi\rangle^2} = \mathcal{O}(1)$$
This is the weak part of the argument, but it is entirely reasonable. Clearly

$$\langle\psi\rangle^2 \sim t^{2\beta}$$
To get $\langle(\Delta\psi)^2\rangle$, recall the sum rule
$$\int d^d x\, \langle\Delta\psi(x)\,\Delta\psi(0)\rangle = \chi_T$$
The integral is only nonzero for $x \leq \xi$. In that range,
$$\langle\Delta\psi(x)\,\Delta\psi(0)\rangle \sim \mathcal{O}\big(\langle(\Delta\psi)^2\rangle\big)$$
Hence
$$\int d^d x\, \langle\Delta\psi(x)\,\Delta\psi(0)\rangle = \xi^d\, \langle(\Delta\psi)^2\rangle = \chi_T$$
or,
$$\langle(\Delta\psi)^2\rangle = \chi_T\, \xi^{-d}$$
But

$$\chi_T \sim t^{-\gamma}$$
and

$$\xi \sim t^{-\nu}$$
so,

$$\langle(\Delta\psi)^2\rangle = t^{\nu d - \gamma}$$
The relative fluctuation therefore obeys

$$\frac{\langle(\Delta\psi)^2\rangle}{\langle\psi\rangle^2} = t^{\nu d - \gamma - 2\beta}$$
But if, as we argued,
$$\frac{\langle(\Delta\psi)^2\rangle}{\langle\psi\rangle^2} = \mathcal{O}(1)$$
we have
$$\nu d = \gamma + 2\beta$$
which is the hyperscaling relation. (To put it into the same form as before, use the Rushbrooke scaling law $\alpha + 2\beta + \gamma = 2$. This gives $\nu d = 2 - \alpha$.)

As a general feature,
$$\frac{\langle(\Delta\psi)^2\rangle}{\langle\psi\rangle^2}$$
measures how important fluctuations are. We discussed this at the beginning of the course. Mean field theory works best when fluctuations are small. So we say that mean–field theory is correct (or at least self–consistent) if its predictions give
$$\frac{\langle(\Delta\psi)^2\rangle}{\langle\psi\rangle^2} \ll 1$$
This is called the Ginzburg criterion for the correctness of mean–field theory. But from above,
$$\frac{\langle(\Delta\psi)^2\rangle}{\langle\psi\rangle^2} = t^{\nu d - \gamma - 2\beta}$$
Putting in the results from mean–field theory ($\nu = 1/2$, $\gamma = 1$, $\beta = 1/2$) gives

$$\frac{\langle(\Delta\psi)^2\rangle}{\langle\psi\rangle^2} = t^{\frac{1}{2}(d-4)}$$

Hence, mean–field theory is self-consistent for $d > 4$ as $T \to T_c$ ($t \to 0$). Mean–field theory doesn't work for $d < 4$. This $d = 4$ is called the upper critical dimension: the dimension at which fluctuations can be neglected.

Chapter 17

Renormalization Group

The renormalization group is a theory that builds on scaling ideas. It gives a way to calculate critical exponents. The idea is that phase-transition properties depend only on a few long-wavelength properties. These are the symmetry of the order parameter and the dimension of space, for systems without long–range forces or quenched impurities. We can ignore the short-length-scale properties, but we have to do that "ignoring" in a controlled, careful way. That is what the renormalization group does. The basic ideas were given by Kadanoff in 1966, in his "block spin" treatment of the Ising model. A general method, free of inconsistencies, was figured out by Wilson (who won the Nobel prize for his work). Wilson and Fisher figured out a very useful implementation of the renormalization group in the $\epsilon = 4 - d$ expansion, where the small parameter $\epsilon$ was eventually extrapolated to unity, and so $d \to 3$. At a critical point, Kadanoff reasoned that spins in the Ising model act together. So, he argued, one could solve the partition function recursively, by progressively coarse–graining on scales smaller than $\xi$. From the recursion relation describing scale changes, one could solve the Ising model. This would only be a solution on large length scales, but that is what is of interest for critical phenomena. The partition function is

$$Z(N, K, h) = \sum_{\{\text{States}\}} e^{K\sum_{\langle ij\rangle} S_i S_j + h\sum_i S_i}$$
where $K = J/k_B T$ and $h = H/k_B T$. Now, let us (imagine we) reorganize all of these sums as follows.

1. Divide the lattice into blocks of size $b \times b$ (we'll let $b = 2$ in the cartoon below).

2. Each block will have an internal sum done on it leading to a block spin


$S = \pm 1$. For example, the majority–rule transformation is
$$S_i^{(m+1)} = \begin{cases} +1, & \sum_{i\in b^d} S_i > 0 \\[2pt] -1, & \sum_{i\in b^d} S_i < 0 \\[2pt] \pm 1 \text{ randomly}, & \sum_{i\in b^d} S_i = 0 \end{cases}$$

Clearly, if $N$ is the number of spins originally, $N/b^d$ is the number of spins after blocking. It is easy to see that this is a good idea. Any feature correlated on a finite length scale $l$ before blocking becomes $l/b$ after blocking once. Blocking $m$ times means this feature is now on the small scale $l/b^m$. In particular, consider the correlation length $\xi$. The renormalization group (RG) equation for $\xi$ is

$$\xi^{(m+1)} = \xi^{(m)}/b$$

If the RG transformation has a ﬁxed point (denoted by *), which we always assume is true here,

$$\xi^* = \xi^*/b$$

The only solutions are

$$\xi^* = \begin{cases} 0, & \text{trivial fixed points: } T = 0,\ T = \infty \\[2pt] \infty, & \text{nontrivial fixed point: } T = T_c \end{cases}$$

The critical fixed point is unstable, while the $T = 0$ and $T = \infty$ fixed points

are stable.

Figure 17.1: $\xi$ versus $T$; arrows show the flow under the RG, away from $T_c$ toward the stable $T = 0$ and $T = \infty$ fixed points.

Note that the coarse–graining eliminates information on small scales in a controlled way. Consider ﬁgure 17.2. Let us consider a blocking transformation

Figure 17.2: A system at $T > T_c$ with a few fluctuations becomes, after blocking, a renormalized system with fewer fluctuations (effectively lower $T$).

(fig. 17.3). We know the states in the new renormalized system, $S_i^{(m+1)} = \pm 1$,

Figure 17.3: The original system (RG level $m$) is blocked with $b = 2$ into a renormalized configuration (RG level $m + 1$).

for the renormalized spins on sites

$$i = 1, 2, \dots, N^{(m+1)}$$

Note, $N^{(m+1)} = N^{(m)}/b^d$. In the cartoon, $N^{(m+1)} = 16$, $b^d = 2^2$, and $N^{(m)} = 64$. This is all pretty clear: we can easily label all the states in the sum for $Z$. But how do the spins interact after renormalization? It is clear that, if the original system has short–ranged interactions, the renormalized system will as well. At this point, historically, Kadanoff simply said that if $E^{(m)}_{\text{State}}$ was the Ising model, so was $E^{(m+1)}_{\text{State}}$. The coupling constants $K^{(m)}$ and $h^{(m)}$ would depend on the level of transformation, but that's all. This is not correct, as we will see shortly, but the idea and approach are dead on target.

In any case, assuming this is true

$$Z^{(m)}(N^{(m)}, K^{(m)}, h^{(m)}) = \sum_{\{\text{States}\}} e^{K^{(m)}\sum_{\langle ij\rangle} S_i^{(m)} S_j^{(m)} + h^{(m)}\sum_i S_i^{(m)}}$$
But
$$Z^{(m)} = e^{-N^{(m)} f^{(m)}(h^{(m)},\, t^{(m)})}$$
where $t^{(m)}$ is the reduced temperature obtained from $K^{(m)}$, and $f^{(m)}$ is the free energy per unit volume. But we must have $Z^{(m)} = Z^{(m+1)}$, since we obtain $Z^{(m+1)}$ by doing some of the sums in $Z^{(m)}$. Hence

$$N^{(m)} f^{(m)}(h^{(m)}, t^{(m)}) = N^{(m+1)} f^{(m+1)}(h^{(m+1)}, t^{(m+1)})$$
or,
$$f^{(m)}(h^{(m)}, t^{(m)}) = b^{-d} f^{(m+1)}(h^{(m+1)}, t^{(m+1)})$$
The only reason $h$ and $t$ vary under the RG is the scale change $b$. We assume

$$t^{(m+1)} = b^A t^{(m)}$$

$$h^{(m+1)} = b^B h^{(m)}$$

So (and dropping the annoying superscripts from t and h) we have

$$f^{(m)}(h, t) = b^{-d} f^{(m+1)}(h b^B,\; t b^A)$$

Now we use Kadanoff's bold, and unfortunately incorrect, assertion that $f^{(m)}$ and $f^{(m+1)}$ are both simply the free energy of the same Ising model. Hence

$$f(h, t) = b^{-d} f(h b^B,\; t b^A)$$

This is one form of the scaling assumption, which is due to Widom. Widom argued that the free energy density was a homogeneous function. This allows for the possibility of nontrivial exponents (albeit built into the description), and one can derive the scaling relations. For example, the factor b is not ﬁxed by our arguments. Let

$$t b^A \equiv 1$$
Then
$$f(h, t) = t^{d/A} f(h t^{-B/A}, 1)$$
or,
$$f(h, t) = t^{d/A}\, \tilde{f}(h t^{-B/A})$$
which is of the same form as the scaling relation previously obtained for $F$.
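The logic here is just that of a generalized homogeneous function. As a sketch (with a toy analytic scaling function of my own choosing), one can verify numerically that any $f$ built in the scaled form satisfies $f(h,t) = b^{-d} f(h b^B, t b^A)$ for every $b$:

```python
def f_scaling(h, t, d=3.0, A=1.5, B=2.0):
    """A free energy built in the scaling form f = t^(d/A) * g(h * t^(-B/A)),
    with a toy analytic scaling function g."""
    g = lambda x: 1.0 / (1.0 + x * x)
    return t**(d / A) * g(h * t**(-B / A))
```

Algebraically, $f(h b^B, t b^A) = (t b^A)^{d/A}\, g\big(h b^B (t b^A)^{-B/A}\big) = b^d\, t^{d/A}\, g(h t^{-B/A}) = b^d f(h, t)$, so the homogeneity relation holds exactly, for any $b$.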

Figure 17.4: $\dfrac{E^{(m)}}{k_B T} = -K^{(m)}\sum_{\langle ij\rangle} S_i^{(m)} S_j^{(m)}$ versus $\dfrac{E^{(m+1)}}{k_B T} = -K^{(m+1)}\sum_{\langle ij\rangle} S_i^{(m+1)} S_j^{(m+1)} + (\text{longer-range interactions})$.

Figure 17.5: Nearest-neighbor interaction shown by a solid line; next-nearest-neighbor interaction (along the diagonal) shown by a wavy line.

Kadanoff also presented results for the correlation function, but we'll skip that. Let us consider the interactions between spins again, before and after application of the RG. For simplicity, let $H = 0$. The original system has interactions with its nearest neighbors, as drawn in fig. 17.4. The blocked spin treats 4 spins as interacting together, with identical strength, with the other blocked spins. The nearest-neighbor interaction is drawn as a cartoon in fig. 17.5. Note that the direct part of the nearest–neighbor interaction to the right in the renormalized system has 4 spins interacting equivalently with 4 other spins. There are 16 pairs, which are replaced by one global effective interaction; all contributions are treated equally in the block-spin nearest-neighbor coupling $K^{(m+1)}$. Those 16 pairs have:

2 pairs separated by 1 lattice constant
6 pairs separated by 2 lattice constants
6 pairs separated by 3 lattice constants
2 pairs separated by 4 lattice constants

Let's compare this to the next–nearest-neighbor interaction, along the diagonals. There are again 16 pairs, all treated equally in the block-spin representation, which have:

Figure 17.6: Nearest-neighbor versus next–nearest-neighbor block interactions; all contributions are treated equally in the block-spin representation.

1 pair separated by 2 lattice constants
4 pairs separated by 3 lattice constants
6 pairs separated by 4 lattice constants
4 pairs separated by 5 lattice constants
1 pair separated by 6 lattice constants

Let us call the block–spin next–nearest-neighbor coupling $K^{(m+1)}_{nn}$, which lumps all of this into one effective interaction. It is clear, from looking at the interactions underlying the block spins, that $K^{(m+1)}$ (nearest neighbor) and $K^{(m+1)}_{nn}$ (next-nearest neighbor) are quite similar, although it looks like $K^{(m+1)}_{nn} < K^{(m+1)}$. However, Kadanoff's step of setting $K^{(m+1)}_{nn} \equiv 0$ looks very drastic; in fact, with the benefit of hindsight, we know it is too drastic. One can actually numerically track what happens to the Ising model under the block–spin renormalization group. It turns out that longer–range interactions between two spins are generated (as we have described in detail above), as well as three-spin interactions (like $\sum S_i S_j S_k$), and other such interactions. Although the interactions remain short-ranged, they become quite complicated. It is useful to consider the flow of the energy of a state under application of the RG. For $H = 0$, for the Ising model, the phase diagram is as in fig. 17.7. For

Figure 17.7: Usual Ising model, $H = 0$.

For the Ising model with only next–nearest neighbor interactions ($J \equiv 0$) one gets the same phase diagram at $H = 0$ (fig. 17.8). The phase diagram for an Ising

(Axis of fig. 17.8: $K_{nn} = J_{nn}/k_BT$, running from 0 at infinite $T$, through the critical point $(K_{nn})_c$, to low $T$.)

Figure 17.8: Ising model, no nearest–neighbor interaction.

model with both $K$ and $K_{nn}$, namely
$$\frac{E}{k_BT} = -K \sum_{\langle ij \rangle} S_i S_j - K_{nn} \sum_{\langle\langle ij \rangle\rangle} S_i S_j$$
for $H = 0$, is drawn in fig. 17.9. The flow under RG is hence as drawn in

Figure 17.9: Phase diagram in the $(K, K_{nn})$ plane: a line of second–order transitions runs from $(K_c, 0)$ to $(0, (K_{nn})_c)$, separating the ordered (low $T$) phase from the disordered (infinite $T$) phase near the origin.

fig. 17.10.

Figure 17.10: Critical fixed point of the RG transformation in the $(K, K_{nn})$ plane.

Of course there are many other axes on this graph, corresponding to different interactions. One should also note that the interactions developed by the RG depend on the RG transformation itself. For example, $b = 3$ would give a different sequence of interactions, and a different fixed point, than $b = 2$. But, it turns out, the exponents would all be the same. This is an aspect of universality: all models with the same symmetry and spatial dimension share the same critical exponents.

The formal way to understand these ideas was figured out by Wilson. The RG transform takes us from one set of coupling constants $K^{(m)}$ to another $K^{(m+1)}$ via a length–scale change of $b$ entering a recursion relation:
$$K^{(m+1)} = R(K^{(m)})$$
At the fixed point,
$$K^* = R(K^*)$$
Let us now imagine we are close enough to the fixed point that we can linearize around it. In principle this is clear. In practice, because of all the coupling constants generated on our way to the fixed point, it is a nightmare. Usually it is done by numerical methods, or by the $\epsilon = 4 - d$ expansion described later.

Using subscripts, which correspond to diﬀerent coupling constants

$$K_\alpha^{(m+1)} - K_\alpha^* = R_\alpha(K^{(m)}) - R_\alpha(K^*)$$
or,
$$\delta K_\alpha^{(m+1)} = \sum_\beta M_{\alpha\beta}\, \delta K_\beta^{(m)}$$
where
$$M_{\alpha\beta} = \left(\frac{\partial R_\alpha}{\partial K_\beta}\right)^*$$
is the linearized RG transform through which "$b$" appears, and

$$\delta K = K - K^*$$

All that is left to do is linear algebra to diagonalize Mαβ. Before doing this, let us simply pretend that Mαβ is already diagonal. Then we can see everything clearly. If

$$M = \begin{pmatrix} \lambda^{(1)} & & 0 \\ & \lambda^{(2)} & \\ 0 & & \ddots \end{pmatrix}$$
the transform becomes
$$\delta K_\alpha^{(m+1)} = \lambda^{(\alpha)}\, \delta K_\alpha^{(m)}$$
The form of $\lambda^{(\alpha)}$ is easy to get. It is clear that acting with the RG on scale $b$, followed by acting again with $b'$, is the same as acting once with $b \cdot b'$. Hence:
$$M(b)\,M(b') = M(b\,b')$$
somewhat schematically, or

$$\lambda(b)\,\lambda(b') = \lambda(b\,b')$$

The solution of this is $\lambda = b^y$. The renormalization group transformation becomes

$$\delta K_\alpha^{(m+1)} = b^{y_\alpha}\, \delta K_\alpha^{(m)}$$

If $y_\alpha > 0$, $\delta K_\alpha$ is relevant, getting bigger with the RG.
If $y_\alpha = 0$, $\delta K_\alpha$ is marginal, staying unchanged with the RG.
If $y_\alpha < 0$, $\delta K_\alpha$ is irrelevant, getting smaller with the RG.

The singular part of the free energy near the fixed point satisfies
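The three cases can be seen directly by iterating $\delta K \to b^{y}\,\delta K$; a minimal sketch (the values of $b$, $y$, and the starting $\delta K$ are arbitrary illustrative choices):

```python
def iterate(dK, y, b=2.0, n=10):
    """Apply the linearized RG n times: dK -> b**y * dK at each step."""
    for _ in range(n):
        dK *= b ** y
    return dK

dK0 = 0.01
relevant   = iterate(dK0, y=0.5)    # grows under repeated RG steps
marginal   = iterate(dK0, y=0.0)    # stays unchanged
irrelevant = iterate(dK0, y=-0.5)   # shrinks away
print(relevant, marginal, irrelevant)
assert relevant > dK0 and marginal == dK0 and irrelevant < dK0
```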

$$f^*(\delta K_1, \delta K_2, \ldots) = b^{-d}\, f^*\!\left(b^{y_1}\delta K_1,\, b^{y_2}\delta K_2,\, \ldots\right)$$

This is the homogeneous scaling form, where we now know how to calculate the $y_\alpha$'s (in principle), and we know why (in principle) only two exponents are necessary to describe the Ising model (presumably the remainder are irrelevant). In practice, the problem is to find the fixed point. A neat trick is the $\epsilon = 4 - d$ expansion. From our discussion above of the Ginzburg criterion, mean–field theory works for $d > 4$ (it is marginal, with logarithmic corrections, in $d = 4$ itself). In fact we then expect there to be a very easy–to–find, easy–to–fiddle–with fixed point in $d \ge 4$. If we are lucky, this fixed point can be analytically continued to $d < 4$ in a controllable $\epsilon = 4 - d$ expansion. We will do this below. I say "lucky" because the fixed point associated with mean–field theory is not always related to the fixed point we are after (usually it is then a fixed point related to a new phase transition at even higher dimensions, i.e., it corresponds to a lower critical dimension). Finally, let us now return to the linear algebra:

$$\delta K_\alpha^{(m+1)} = \sum_\beta M_{\alpha\beta}\, \delta K_\beta^{(m)}$$
since $M_{\alpha\beta}$ is rarely diagonal. We will assume it is symmetric to make things easier. (If not, we have to define right and left eigenvectors. This is done in Goldenfeld's book.) We define an orthonormal, complete set of eigenvectors by

$$\sum_\beta M_{\alpha\beta}\, e_\beta^{(\gamma)} = \lambda^{(\gamma)}\, e_\alpha^{(\gamma)}$$
where the $\lambda^{(\gamma)}$ are the eigenvalues and the $e^{(\gamma)}$ are the eigenvectors. We expand $\delta K$ via
$$\delta K_\alpha = \sum_\gamma \delta\hat{K}_\gamma\, e_\alpha^{(\gamma)}$$
or,
$$\delta\hat{K}_\gamma = \sum_\alpha \delta K_\alpha\, e_\alpha^{(\gamma)}$$
where the $\delta\hat{K}$ are the "scaling fields", a linear combination of the original interactions. Now we act on the linearized RG transform with $\sum_\alpha e_\alpha^{(\gamma)}(\ldots)$. This gives
$$\sum_\alpha e_\alpha^{(\gamma)}\, \delta K_\alpha^{(m+1)} = \sum_\alpha e_\alpha^{(\gamma)} \sum_\beta M_{\alpha\beta}\, \delta K_\beta^{(m)}$$
It is a little confusing, but the superscripts on the $e$'s have nothing to do with the superscripts on the $\delta K$'s. We get
$$\delta\hat{K}_\gamma^{(m+1)} = \lambda^{(\gamma)} \sum_\beta e_\beta^{(\gamma)}\, \delta K_\beta^{(m)}$$
so,
$$\delta\hat{K}_\gamma^{(m+1)} = \lambda^{(\gamma)}\, \delta\hat{K}_\gamma^{(m)}$$
This is what we had (for $\delta K$'s rather than $\delta\hat{K}$'s) a few pages back. Everything follows the same from there.

17.1 ǫ–Expansion RG

Reconsider
$$F[\psi] = \int d^dx\, \left\{ \frac{K}{2}(\nabla\psi)^2 + \frac{r}{2}\psi^2 + \frac{u}{4}\psi^4 \right\}$$
near $T_c$, where $K$ and $u$ are constants and

$$r \propto T - T_c$$

Near $T_c$ we can conveniently consider
$$\frac{F}{k_BT} = \int d^dx\, \left\{ \frac{K}{2}(\nabla\psi)^2 + \frac{r}{2}\psi^2 + \frac{u}{4}\psi^4 \right\}$$
by absorbing the $k_BT$, which is nonsingular, within $K$, $r$, and $u$. We can remove the constant $K$ by absorbing it into the length scale, or by letting $r \to r/K$, $F \to F/K$, etc., so that
$$\frac{F}{k_BT} = \int d^dx\, \left\{ \frac{1}{2}(\nabla\psi)^2 + \frac{r}{2}\psi^2 + \frac{u}{4}\psi^4 \right\}$$
Now we can easily construct dimensional arguments for all coefficients. Clearly

$$\left[\frac{F}{k_BT}\right] = 1$$
where $[\ldots]$ means the dimension of the quantity. Hence,

$$\left[\int d^dx\, (\nabla\psi)^2\right] = 1$$
or

$$[L^{d-2}\psi^2] = 1, \quad \text{since } [x] = L$$
or,

$$[\psi] = \frac{1}{L^{\frac{d}{2}-1}}$$

Similarly,

$$\left[\int d^dx\, \frac{r}{2}\psi^2\right] = 1$$

$$\left[L^d\, r\, \frac{1}{L^{d-2}}\right] = 1$$
or
$$[r] = \frac{1}{L^2}$$
Finally,

$$\left[\int d^dx\, \frac{u}{4}\psi^4\right] = 1$$
so
$$\left[L^d\, u\, \frac{1}{L^{2d-4}}\right] = 1$$
or
$$[u] = \frac{1}{L^{4-d}}$$
Eventually we will be changing scale by a length rescaling factor, so we need to find out how these quantities are affected by a simple change in scale. These are called engineering dimensions. Note the following:
$$[\psi^2] = \frac{1}{L^{d-2}}$$
But at $T_c$,
$$\langle \psi^2 \rangle = \frac{1}{x^{d-2+\eta}}$$
where $\eta$ is nonzero! Also, since $r \propto T - T_c$, near $T_c$
$$[T - T_c] = \frac{1}{L^2}$$
but $[\xi] = L$, so
$$\xi = (T - T_c)^{-1/2}$$
near $T_c$. But
$$\xi = (T - T_c)^{-\nu}$$
So our results of scaling theory give
$$[\psi]_{\text{after avg.}} = \frac{1}{L^{(d-2+\eta)/2}} = \frac{1}{L^{\frac{d}{2}-1}} \cdot \frac{1}{L^{\eta/2}}$$
and
$$[r]_{\text{after avg.}} = \frac{1}{L^{1/\nu}} = \frac{1}{L^2} \cdot \frac{1}{L^{\frac{1}{\nu}-2}}$$
The extra length which appears on averaging is the lattice constant $a \approx 5\,\text{Å}$. Hence
$$\langle \psi^2 \rangle = \frac{a^\eta}{x^{d-2+\eta}}$$
and $\eta/2$ is called $\psi$'s anomalous dimension, while $1/\nu - 2$ is the anomalous dimension of $r$. By doing the averaging, carefully taking account of changes in scale by $L$, we can calculate $\nu$ and $\eta$ as well as other quantities.
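The engineering dimensions above all follow from one piece of power counting: with $[\psi] = 1/L^{d/2-1}$ and $[d^dx] = L^d$, the coefficient of a $\psi^n$ term has dimension $1/L^{d - n(d/2-1)}$. A minimal sketch checking this against the values quoted in these notes (including the $\psi^6$ and $\psi^8$ couplings used later in the power–counting note):

```python
from fractions import Fraction

def coupling_exponent(n, d):
    """Exponent p in [c_n] = 1/L**p for the coefficient c_n of psi**n,
    using [psi] = 1/L**(d/2 - 1) and [d^d x] = L**d."""
    return d - n * (Fraction(d) / 2 - 1)

# Compare against the closed forms quoted in the text, for several d:
for d in [Fraction(2), Fraction(3), Fraction(4), Fraction(5)]:
    assert coupling_exponent(2, d) == 2            # [r] = 1/L^2
    assert coupling_exponent(4, d) == 4 - d        # [u] = 1/L^(4-d)
    assert coupling_exponent(6, d) == 2 * (3 - d)  # [v] = 1/L^(2(3-d))
    assert coupling_exponent(8, d) == 8 - 3 * d    # [w] = 1/L^(8-3d)

# u becomes dimensionless (marginal) exactly at d = 4:
print(coupling_exponent(4, Fraction(4)))  # → 0
```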

In this spirit, let us measure quantities in units of their engineering dimensions. Let
$$\psi' = \frac{\psi}{[\psi]} = \frac{\psi}{1/L^{\frac{d}{2}-1}}$$
and
$$x' = \frac{x}{[x]} = \frac{x}{L}$$
so that (dropping the $k_BT$ under $F$),
$$F = \int d^dx'\, \left\{ \frac{1}{2}(\nabla'\psi')^2 + \frac{rL^2}{2}\psi'^2 + \frac{uL^{4-d}}{4}\psi'^4 \right\}$$
We can of course let
$$r' = \frac{r}{[r]} = \frac{r}{1/L^2} = rL^2$$
and
$$u' = \frac{u}{[u]} = \frac{u}{1/L^{4-d}} = uL^{4-d}$$
so we have
$$F = \int d^dx'\, \left\{ \frac{1}{2}(\nabla'\psi')^2 + \frac{r'}{2}\psi'^2 + \frac{u'}{4}\psi'^4 \right\}$$
Equivalently, since we are interested in the behaviour at $T_c$, let $rL^2 \equiv 1$ (choosing the scale by the distance from $T_c$); then

$$F = \int d^dx'\, \left\{ \frac{1}{2}(\nabla'\psi')^2 + \frac{1}{2}\psi'^2 + \frac{u''}{4}\psi'^4 \right\}$$
where
$$u'' = u\left(\frac{1}{r}\right)^{\frac{4-d}{2}}$$
Note that $u'$ is negligible as $L \to \infty$ for $d > 4$, and, more revealingly, $u''$ is negligible as $T \to T_c$ for $d > 4$. This is the Ginzburg criterion again; note the equivalence between $L \to \infty$ and $T \to T_c$ (i.e., $r \to 0$).

It is worthwhile to consider how the coefficient of the quartic term (with its engineering dimension removed) varies as the scale $L$ is varied. Consider the "beta function":
$$-L\frac{du'}{dL} = -(4-d)L^{4-d}u = -(4-d)u'$$

Figure 17.11: $-L\,du'/dL$ vs. $u'$. For $d > 4$ the initial value of $u'$ is decreased as $L$ increases; $d = 4$ is marginal; for $d < 4$ the initial value of $u'$ is increased as $L$ increases.

The direction of the arrows is from
$$\frac{du'}{d\ln L} = -(d-4)\,u', \quad d > 4$$
$$\frac{du'}{d\ln L} = (4-d)\,u', \quad d < 4$$
Note that $u' = 0$ is a stable fixed point for $d > 4$ and an unstable fixed point for $d < 4$. This sort of picture implicitly assumes $u'$ is small. The renormalization group will give us a more elaborate flow, similar to
$$-L\frac{du'}{dL} = -(4-d)\,u' + \mathcal{O}(u'^2)$$
so that the flow near the fixed pt. determines the dimension of a quantity.
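The flow $du'/d\ln L = (4-d)\,u'$ can be iterated numerically to see the two behaviours; a minimal sketch (the step size and initial value are arbitrary choices):

```python
def flow(u0, d, ln_L_max=10.0, steps=1000):
    """Euler-integrate du'/d(ln L) = (4 - d) u' starting from ln L = 0."""
    u, h = u0, ln_L_max / steps
    for _ in range(steps):
        u += h * (4 - d) * u
    return u

u_above = flow(0.1, d=5)  # d > 4: u' shrinks toward the stable fixed point u* = 0
u_below = flow(0.1, d=3)  # d < 4: u' grows; u* = 0 is unstable
print(u_above, u_below)
assert u_above < 0.1 < u_below
```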

Figure 17.12: $-L\,du'/dL$ vs. $u'$. For $d > 4$, $u' = 0$ is the only fixed point; for $d < 4$ a new Wilson fixed point appears at $u' > 0$.

Clearly
$$-L\frac{du'}{dL} = (\text{dim. of } u)\, u'$$
and similarly
$$-L\frac{dr'}{dL} = (\text{dim. of } r)\, r'$$
So the anomalous dimension can be determined by the "flow" near the fixed point.

The analysis above is OK for $d > 4$, and only good for $u'$ close to zero for $d < 4$. To get some idea of the behavior below 4 dimensions, we stay close to $d = 4$ and expand in $\epsilon = 4 - d$. (See the note on irrelevant couplings below.) So consider
$$F = \int d^dx\, \left\{ \frac{K}{2}(\nabla\psi)^2 + \frac{r}{2}\psi^2 + \frac{u}{4}\psi^4 \right\}$$
We will find:
$$\begin{array}{lcc}
 & \epsilon = 1\ (d = 3) & \text{Exact}\ (d = 3) \\
\alpha = \epsilon/6 & 0.17 & 0.125 \\
\beta = \tfrac12 - \tfrac{\epsilon}{6} & 0.33 & 0.313 \\
\gamma = 1 + \epsilon/6 & 1.17 & 1.25 \\
\delta = 3 + \epsilon & 4 & 5.0 \\
\nu = \tfrac12 + \epsilon/12 & 0.58 & 0.638 \\
\eta = 0 + \mathcal{O}(\epsilon^2) & 0 & 0.041
\end{array}$$
(Note: Irrelevant couplings. Consider
$$\int d^dx\, \left[ \frac{1}{2}(\nabla\psi)^2 + \frac{r}{2}\psi^2 + \frac{u}{4}\psi^4 + \frac{v}{6}\psi^6 + \frac{w}{8}\psi^8 \right]$$
As before,

$$[\psi] = \frac{1}{L^{\frac{d}{2}-1}}, \quad [r] = \frac{1}{L^2}, \quad [u] = \frac{1}{L^{4-d}}$$

But similarly,

$$[v] = \frac{1}{L^{2(3-d)}}, \quad [w] = \frac{1}{L^{8-3d}}$$
so,

$$F = \int d^dx'\, \left[ \frac{1}{2}(\nabla'\psi')^2 + \frac{rL^2}{2}\psi'^2 + \frac{uL^{4-d}}{4}\psi'^4 + \frac{vL^{2(3-d)}}{6}\psi'^6 + \frac{wL^{8-3d}}{8}\psi'^8 \right]$$
Hence $v$ and $w$ look quite unimportant (at least for the Gaussian fixed point) near $d = 4$. This is called power counting. For the remainder of these notes we will simply assume $v$ and $w$ are always irrelevant.)

We will do a scale transformation in Fourier space,

$$\psi(x) = \int_{|k|<\Lambda} \frac{d^dk}{(2\pi)^d}\, e^{i\vec{k}\cdot\vec{x}}\, \hat{\psi}_k$$
where (before we start) $\Lambda = \pi/a$, since we shall need the lattice constant to get anomalous dimensions. We will continually apply the RG to integrate out

Figure 17.13: The domain of nonzero $\hat{\psi}_k$, $|k| < \Lambda$, is reduced to $|k| < \Lambda/b$ by the RG; a scale transformation by a factor of $b$ then takes it back to $|k| < \Lambda$.

small length–scale behavior which, in Fourier space, means integrating out large–wavenumber behaviour, as shown schematically in fig. 17.13. Let

$$\psi(x) = \bar{\psi}(x) + \delta\psi(x)$$
Here $\psi$ contains all degrees of freedom (after several transformations), on $|k| < \Lambda$; $\bar{\psi}$ contains the degrees of freedom of the "slow" modes, on $|k| < \Lambda/b$; and $\delta\psi$ contains the degrees of freedom of the "fast" modes, on $\Lambda/b < |k| < \Lambda$, to be integrated out. Also, for convenience,
$$\Lambda_b = \Lambda - \delta\Lambda$$
so that the scale factor
$$b = \frac{\Lambda}{\Lambda - \delta\Lambda} \ge 1$$
Then we explicitly have

$$\bar{\psi}(x) = \int_{|k|<\Lambda-\delta\Lambda} \frac{d^dk}{(2\pi)^d}\, e^{i\vec{k}\cdot\vec{x}}\, \hat{\psi}_k$$
and
$$\delta\psi(x) = \int_{\Lambda-\delta\Lambda<|k|<\Lambda} \frac{d^dk}{(2\pi)^d}\, e^{i\vec{k}\cdot\vec{x}}\, \hat{\psi}_k$$
Schematically, refer to fig. 17.14. This is called the "momentum–shell" RG. We do this in momentum space rather than real space because it is convenient for factors of
$$\int d^dx\, (\nabla\psi)^2 = \int \frac{d^dk}{(2\pi)^d}\, k^2\, |\hat{\psi}_k|^2$$
(the left side couples different $\psi(x)$'s; the Fourier modes on the right are uncoupled)

Figure 17.14: The modes of $\psi(x)$ ($|k| < \Lambda$), of $\bar{\psi}(x)$ ($|k| < \Lambda - \delta\Lambda$), and of $\delta\psi(x)$ (the shell $\Lambda - \delta\Lambda < |k| < \Lambda$).

which are trivial in momentum space. The size of the momentum shell is

$$\int_{\Lambda-\delta\Lambda<|k|<\Lambda} \frac{d^dk}{(2\pi)^d} = a\Lambda^{d-1}\delta\Lambda$$
where $a$ is a number that we will later set to unity for convenience, although $a(d=3) = 1/2\pi^2$ and $a(d=2) = 1/2\pi$. Let us now integrate out the modes $\delta\psi$:

$$e^{-\bar{F}[\bar{\psi}]} \equiv \sum_{\text{states }\delta\psi} e^{-F[\bar{\psi}+\delta\psi]}$$
where $\bar{F}[\bar{\psi}]$ involves only the scales $|k| < \Lambda - \delta\Lambda$, the sum is over the shell scales, and $F[\bar{\psi}+\delta\psi]$ involves all scales $|k| < \Lambda$. Essentially we are, very slowly, solving the partition function by integrating out the degrees of freedom at large $k$, bit by bit. We will only do this once, and thus obtain recursion relations for how to change scales.

$$F[\bar{\psi}+\delta\psi] = \int d^dx\, \left\{ \frac{1}{2}\left(\nabla(\bar{\psi}+\delta\psi)\right)^2 + \frac{r}{2}[\bar{\psi}+\delta\psi]^2 + \frac{u}{4}[\bar{\psi}+\delta\psi]^4 \right\}$$

We will expand this out, ignoring odd terms in $\bar{\psi}$ (which must give zero contribution, since they change the symmetry of $F$), constant terms (which only shift the zero of the free energy), and terms of order higher than quartic like $\psi^6$ and $\psi^8$ (which are irrelevant near $d = 4$ by power counting, as we did above).

In that case,
$$\frac{1}{2}\left[\nabla(\bar{\psi}+\delta\psi)\right]^2 = \frac{1}{2}(\nabla\bar{\psi})^2 + \frac{1}{2}(\nabla\delta\psi)^2 + \text{odd term}$$
$$\frac{r}{2}[\bar{\psi}+\delta\psi]^2 = \frac{r}{2}\bar{\psi}^2 + \frac{r}{2}(\delta\psi)^2 + \text{odd term}$$
$$\frac{u}{4}[\bar{\psi}+\delta\psi]^4 = \frac{u}{4}\bar{\psi}^4\left[1 + \frac{\delta\psi}{\bar{\psi}}\right]^4 = \frac{u}{4}\bar{\psi}^4\left[1 + 4\frac{\delta\psi}{\bar{\psi}} + 6\frac{(\delta\psi)^2}{\bar{\psi}^2} + 4\frac{(\delta\psi)^3}{\bar{\psi}^3} + \frac{(\delta\psi)^4}{\bar{\psi}^4}\right]$$
$$= \frac{u}{4}\bar{\psi}^4 + \frac{3u}{2}\bar{\psi}^2(\delta\psi)^2 + \mathcal{O}\left((\delta\psi)^4\right) + \text{odd terms}$$
We will neglect $(\delta\psi)^4$, since these modes have a small contribution to the $k = 0$ behavior. This can be self–consistently checked. Hence
$$F[\bar{\psi}+\delta\psi] = F[\bar{\psi}] + \int d^dx\, \left[\frac{1}{2}(\nabla\delta\psi)^2 + \frac{1}{2}\left(r + 3u\bar{\psi}^2\right)(\delta\psi)^2\right]$$
(where $F[\bar{\psi}]$ involves only the scales $|k| < \Lambda - \delta\Lambda$)

But
$$\delta\psi = \int_{\Lambda-\delta\Lambda<|\vec{k}|<\Lambda} \frac{d^dk}{(2\pi)^d}\, e^{i\vec{k}\cdot\vec{x}}\, \hat{\psi}_k$$
so
$$(\nabla\delta\psi)^2 = \Lambda^2(\delta\psi)^2$$
for a thin shell, and
$$F[\bar{\psi}+\delta\psi] = F[\bar{\psi}] + \int d^dx\, \frac{1}{2}\left(\Lambda^2 + r + 3u\bar{\psi}^2\right)(\delta\psi)^2$$
and we have,

$$e^{-\bar{F}[\bar{\psi}]} = e^{-F[\bar{\psi}]} \sum_{\text{states }\delta\psi} e^{-\int d^dx\, \frac{\Lambda^2 + r + 3u\bar{\psi}^2}{2}(\delta\psi)^2}$$
where
$$\sum_{\text{states }\delta\psi} = \prod_{\text{all }\delta\psi\text{ states}} \int d\,\delta\psi(x)$$
The integrals are trivially Gaussian in $\delta\psi$, so

$$e^{-\bar{F}[\bar{\psi}]} = e^{-F[\bar{\psi}]} \prod_x \prod_{\text{total no. of }\delta\psi\text{ states}} \sqrt{\frac{2\pi}{\Lambda^2 + r + 3u\bar{\psi}^2}}$$
We know the total number of $\delta\psi$ states because we know the total number of modes in Fourier space in the shell:

$$\int_{\Lambda-\delta\Lambda<|k|<\Lambda} \frac{d^dk}{(2\pi)^d} = a\Lambda^{d-1}\delta\Lambda$$
so
$$\prod_x \prod_{\text{total no.}} (\ldots) \to e^{\left(\int dx\right)\left(a\Lambda^{d-1}\delta\Lambda\right)\ln(\ldots)}$$
and
$$e^{-\bar{F}[\bar{\psi}]} = e^{-F[\bar{\psi}]}\, e^{-\int d^dx\, a\Lambda^{d-1}\delta\Lambda\, \frac{1}{2}\ln\left(\Lambda^2 + r + 3u\bar{\psi}^2\right) + \text{const.}}$$
Now consider
$$\frac{1}{2}\ln\left[\Lambda^2 + r + 3u\bar{\psi}^2\right] = \frac{1}{2}\ln\left(1 + \frac{r}{\Lambda^2} + \frac{3u}{\Lambda^2}\bar{\psi}^2\right) + \text{const.}$$

We expand using $\ln(1+\epsilon) = \epsilon - \frac{\epsilon^2}{2} + \frac{\epsilon^3}{3} - \ldots$:
$$\frac{1}{2}\left\{\frac{r}{\Lambda^2} + \frac{3u}{\Lambda^2}\bar{\psi}^2 - \frac{1}{2}\left(\frac{r}{\Lambda^2} + \frac{3u}{\Lambda^2}\bar{\psi}^2\right)^2 + \ldots\right\} = \frac{1}{2}\left(\frac{3u}{\Lambda^2} - \frac{3ur}{\Lambda^4}\right)\bar{\psi}^2 + \frac{1}{4}\left(\frac{-9u^2}{\Lambda^4}\right)\bar{\psi}^4 + \text{const.}$$
(the terms $\frac{r}{\Lambda^2}$ and $\frac{1}{2}\left(\frac{r}{\Lambda^2}\right)^2$ are constants). So that
$$\bar{F}[\bar{\psi}] = F[\bar{\psi}] + \int d^dx\, \left\{\frac{1}{2}\left(\frac{3u}{\Lambda^2} - \frac{3ur}{\Lambda^4}\right)\left(a\Lambda^{d-1}\delta\Lambda\right)\bar{\psi}^2 + \frac{1}{4}\left(\frac{-9u^2}{\Lambda^4}\right)\left(a\Lambda^{d-1}\delta\Lambda\right)\bar{\psi}^4\right\}$$
so,
$$\bar{F}[\bar{\psi}] = \int d^dx\, \left\{\frac{1}{2}(\nabla\bar{\psi})^2 + \frac{\bar{r}}{2}\bar{\psi}^2 + \frac{\bar{u}}{4}\bar{\psi}^4\right\}$$
where
$$\bar{r} = r + \left(\frac{3u}{\Lambda^2} - \frac{3ur}{\Lambda^4}\right)\left(a\Lambda^{d-1}\delta\Lambda\right)$$
$$\bar{u} = u + \left(\frac{-9u^2}{\Lambda^4}\right)\left(a\Lambda^{d-1}\delta\Lambda\right)$$
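The $\bar{\psi}^2$ and $\bar{\psi}^4$ coefficients extracted above can be checked by expanding $\ln(1+x)$ with $x = r/\Lambda^2 + (3u/\Lambda^2)\bar{\psi}^2$ as a truncated polynomial in $\bar{\psi}^2$; a minimal sketch using exact rational arithmetic (the values of $r$, $u$, $\Lambda$ are arbitrary test choices):

```python
from fractions import Fraction as F

r, u, Lam2 = F(1, 10), F(1, 20), F(1)  # test values; Lam2 plays the role of Λ²

# x = r/Λ² + (3u/Λ²) p, written as a polynomial in p = ψ̄²: [const, p, p², ...]
x = [r / Lam2, 3 * u / Lam2]

def pmul(a, b):
    """Multiply two polynomials in p (coefficient lists)."""
    out = [F(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

# (1/2) ln(1 + x) ≈ (1/2)(x - x²/2), truncated at the x² term
x2 = pmul(x, x)
series = [F(0)] * 3
for i, c in enumerate(x):
    series[i] += c / 2
for i, c in enumerate(x2):
    if i < 3:
        series[i] -= c / 4

# Compare with the coefficients quoted in the text:
assert series[1] == F(1, 2) * (3 * u / Lam2 - 3 * u * r / Lam2**2)  # ψ̄² term
assert series[2] == F(1, 4) * (-9 * u**2 / Lam2**2)                 # ψ̄⁴ term
print(series[1], series[2])
```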

But $\bar{r} = r(\Lambda - \delta\Lambda)$, etc., so
$$\frac{r(\Lambda-\delta\Lambda) - r(\Lambda)}{\delta\Lambda} = \left(\frac{3u}{\Lambda^2} - \frac{3ur}{\Lambda^4}\right)a\Lambda^{d-1}$$
and
$$\frac{u(\Lambda-\delta\Lambda) - u(\Lambda)}{\delta\Lambda} = \left(\frac{-9u^2}{\Lambda^4}\right)a\Lambda^{d-1}$$
or,
$$-\frac{1}{a}\frac{\partial r}{\partial\Lambda} = \left(\frac{3u}{\Lambda^2} - \frac{3ur}{\Lambda^4}\right)\Lambda^{d-1}$$
$$-\frac{1}{a}\frac{\partial u}{\partial\Lambda} = \left(\frac{-9u^2}{\Lambda^4}\right)\Lambda^{d-1}$$

Set $a = 1$ (or let $u \to u/a$), and set $\Lambda = 1/L$, to obtain
$$-L\frac{\partial r}{\partial L} = -3uL^{2-d} + 3urL^{4-d}$$
$$-L\frac{\partial u}{\partial L} = 9u^2L^{4-d}$$
We need to include the trivial change in length due to the overall rescaling of space, with engineering dimensions $[r] = 1/L^2$, $[u] = 1/L^{4-d}$, by letting $r' = rL^2$, $u' = uL^{4-d}$. Note
$$\frac{dr}{dL} = \frac{d(r'/L^2)}{dL} = \left(\frac{dr'}{dL} - \frac{2r'}{L}\right)\frac{1}{L^2}$$
so
$$-L\frac{dr}{dL} = \left(-L\frac{dr'}{dL} + 2r'\right)\frac{1}{L^2}$$
and
$$-L\frac{du}{dL} = \left(-L\frac{du'}{dL} + (4-d)u'\right)\frac{1}{L^{4-d}}$$
But
$$-3uL^{2-d} = -3u'\frac{L^{2-d}}{L^{4-d}} = -\frac{3u'}{L^2}$$
and

$$3urL^{4-d} = 3u'r'\frac{L^{4-d}}{L^{4-d}L^2} = \frac{3u'r'}{L^2}$$
and

$$9u^2L^{4-d} = 9u'^2\frac{L^{4-d}}{(L^{4-d})^2} = \frac{9u'^2}{L^{4-d}}$$
Hence we obtain
$$-L\frac{dr'}{dL} = -2r' - 3u' + 3u'r'$$
and
$$-L\frac{du'}{dL} = -(4-d)u' + 9u'^2$$
which are the recursion relations, to first order. Evidently there is still a trivial Gaussian fixed point $r^* = u^* = 0$, but there is a nontrivial "Wilson–Fisher" fixed point which is stable for $d < 4$! From the $u$ equation (using $\epsilon = 4 - d$),
$$u^* = \frac{\epsilon}{9}$$
and in the $r$ equation (using $u^* \sim r^* \sim \epsilon$ with $\epsilon \ll 1$), $2r^* = -3u^*$, so
$$r^* = -\frac{\epsilon}{6}$$
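As a sanity check, the recursion relations can be handled numerically: the truncated $u$ equation fixes $u^*$ exactly, the $r$ equation then gives $r^*$, and forward iteration in $\ln L$ shows $u'$ really running into $u^*$. A minimal sketch (the value of $\epsilon$, the step size, and the starting coupling are arbitrary choices):

```python
epsilon = 0.1  # epsilon = 4 - d, taken small so O(epsilon^2) terms are tiny

# Recursion relations from the text, written as L d/dL (flow toward large L):
#   L dr'/dL = 2 r' + 3 u' - 3 u' r'
#   L du'/dL = epsilon u' - 9 u'^2
# The u' equation alone fixes u* = epsilon/9; r* then solves
# 2 r* + 3 u* - 3 u* r* = 0.
u_star = epsilon / 9
r_star = -3 * u_star / (2 - 3 * u_star)
print(u_star, r_star)
assert abs(r_star - (-epsilon / 6)) < epsilon**2  # agrees with -eps/6 to O(eps^2)

# u' genuinely flows into u* under forward Euler iteration in ln L:
u, h = 0.5, 0.01
for _ in range(20000):
    u += h * (epsilon * u - 9 * u**2)
assert abs(u - u_star) < 1e-6
```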

Figure 17.15: $-L\,du'/dL$ vs. $u'$ for $d > 4$, $d = 4$, and $d < 4$, showing the Gaussian fixed point at $u' = 0$ (stable for $d > 4$) and the Wilson–Fisher fixed point at $u^*$ (stable for $d < 4$).

Linearizing around this fixed point gives the dimensions of $r$ and $u$ as the fixed point is approached. Let $\delta r = r' - r^*$, $\delta u = u' - u^*$. In the $u$ equation,
$$-L\frac{d\,\delta u}{dL} = -\epsilon u^* - \epsilon\,\delta u + 9(u^*)^2\left(1 + \frac{2\,\delta u}{u^*}\right) = \left[18u^* - \epsilon\right]\delta u$$
$$-L\frac{d\,\delta u}{dL} = \epsilon\,\delta u \quad \text{(Wilson–Fisher fixed pt.)}$$
So the dimension of $u$ is $\epsilon$. In the $r$ equation,

$$-L\frac{d\,\delta r}{dL} = -2r^* - 3u^* - 2\,\delta r + 3u^*\,\delta r$$
but $\delta r \gg \delta u$ as $L \to \infty$, so,
$$-L\frac{d\,\delta r}{dL} = (-2 + 3u^*)\,\delta r$$
or,

$$-L\frac{d\,\delta r}{dL} = -2\left(1 - \frac{\epsilon}{6}\right)\delta r \quad \text{(Wilson–Fisher fixed pt., stable for } d < 4\text{)}$$
So the dimension of $r$ is $-2(1 - \epsilon/6)$.

$$-L\frac{d\,\delta u}{dL} = -\epsilon\,\delta u$$
$$-L\frac{d\,\delta r}{dL} = -2\,\delta r$$
giving the dimensions of $r$ and $u$ there.
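The dimensions at both fixed points can also be read off numerically by finite–differencing the flow around each fixed point and taking the eigenvalues of the resulting $2\times 2$ Jacobian; a minimal sketch (the value of $\epsilon$ and the difference step are arbitrary test choices):

```python
epsilon = 0.1  # = 4 - d

def flow(r, u):
    """-L d/dL of (r', u'), from the recursion relations in the text."""
    return (-2*r - 3*u + 3*u*r, -epsilon*u + 9*u**2)

def jacobian(r, u, h=1e-7):
    """Central finite-difference Jacobian of flow at (r, u)."""
    f1p, f2p = flow(r + h, u); f1m, f2m = flow(r - h, u)
    g1p, g2p = flow(r, u + h); g1m, g2m = flow(r, u - h)
    return [[(f1p - f1m)/(2*h), (g1p - g1m)/(2*h)],
            [(f2p - f2m)/(2*h), (g2p - g2m)/(2*h)]]

def eigvals2(m):
    """Eigenvalues of a 2x2 matrix (assumed real here), sorted."""
    tr = m[0][0] + m[1][1]
    det = m[0][0]*m[1][1] - m[0][1]*m[1][0]
    disc = (tr*tr - 4*det) ** 0.5
    return sorted([(tr - disc)/2, (tr + disc)/2])

# Gaussian fixed point: dimensions -2 and -epsilon
lo, hi = eigvals2(jacobian(0.0, 0.0))
assert abs(lo - (-2)) < 1e-4 and abs(hi - (-epsilon)) < 1e-4

# Wilson-Fisher fixed point: dimensions -2(1 - eps/6) and +eps
u_star = epsilon / 9
r_star = -3*u_star / (2 - 3*u_star)
lo, hi = eigvals2(jacobian(r_star, u_star))
print(lo, hi)
assert abs(lo - (-2*(1 - epsilon/6))) < 1e-4 and abs(hi - epsilon) < 1e-4
```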

To get the physical exponents, recall

$$[r] = \frac{1}{L^{1/\nu}}$$

Hence
$$\nu = \frac12 \quad \text{(Gaussian fixed pt.)}$$
$$\nu = \frac12\left(1 + \frac{\epsilon}{6}\right) \quad \text{(Wilson–Fisher fixed pt.)}$$
To get another exponent we use
$$[\psi] = \frac{1}{L^{\frac{d}{2}-1+\eta/2}}$$
or, more precisely,
$$[\psi] = \frac{1}{L^{\frac{d}{2}-1+\eta/2}}\, f\!\left(rL^{1/\nu},\, uL^{-y_u}\right)$$
where
$$y_u = \begin{cases} -\epsilon, & \text{Gaussian} \\ \epsilon, & \text{Wilson–Fisher} \end{cases}$$
is the dimension of $u$. Choosing the scale

$$rL^{1/\nu} \equiv 1$$
gives
$$[\psi] = r^{\frac{\nu}{2}(d-2+\eta)}\, f\!\left(1,\, ur^{\nu y_u}\right)$$
For the Wilson–Fisher fixed point this is straightforward. Since $\nu y_u > 0$, the term $ur^{\nu y_u}$ is irrelevant as $r \to 0$. Hence
$$[\psi] = r^{\frac{\nu}{2}(d-2+\eta)}\, f(1, 0) \equiv \text{const.}\; r^\beta$$
Now it turns out $\eta = 0 + \mathcal{O}(\epsilon^2)$ (which can be calculated later via scaling relations), so,
$$\beta = \frac12\left(1 + \frac{\epsilon}{6}\right)\frac{2-\epsilon}{2} = \frac12\left(1 + \frac{\epsilon}{6}\right)\left(1 - \frac{\epsilon}{2}\right)$$
or
$$\beta = \frac12\left(1 - \frac{\epsilon}{3}\right) \quad \text{(Wilson–Fisher fixed pt.)}$$
The Gaussian fixed point is more tricky (although academic, since it is for $d > 4$). We again obtain
$$[\psi] = r^{\frac{\nu}{2}(d-2+\eta)}\, f\!\left(1,\, ur^{\nu y_u}\right)$$
but now $\nu = \frac12$, $y_u = -\epsilon = d - 4$, $\eta = 0$, so
$$[\psi] = r^{\frac{d-2}{4}}\, f\!\left(1,\, ur^{\frac{d-4}{2}}\right)$$

This looks like $\beta = \frac{d-2}{4}$! Not the well–known $\beta = 1/2$. The problem is that $u$ is irrelevant, but dangerous, for $T < T_c$:

$$[\psi]_{u\to 0} = \frac{r^{\frac{d-2}{4}}}{\left(ur^{\frac{d-4}{2}}\right)^{1/2}} = \frac{r^{\frac{d-2}{4} - \frac{d-4}{4}}}{u^{1/2}} = \left(\frac{r}{u}\right)^{1/2}$$
So, $\beta = 1/2$ for the Gaussian fixed point. Hence for $d \ge 4$ the Gaussian fixed point is stable and $\alpha = 0$, $\beta = 1/2$, $\delta = 3$, $\gamma = 1$, $\eta = 0$, $\nu = \frac12$ (the remainder are obtained carefully with the scaling relations). For $d < 4$, the Wilson–Fisher fixed point is stable, and since $\alpha + 2\beta + \gamma = 2$, $\gamma = \nu(2-\eta)$, $\gamma = \beta(\delta-1)$, and $\nu d = 2 - \alpha$, we have:
$$\begin{array}{lcc}
 & \epsilon = 1 & \text{Ising } d = 3 \\
\alpha = \frac{\epsilon}{6} & 0.17 & 0.125\ldots \\
\beta = \frac12\left(1 - \frac{\epsilon}{3}\right) & 0.33 & 0.313\ldots \\
\gamma = 1 + \frac{\epsilon}{6} & 1.17 & 1.25\ldots \\
\delta = 3\left(1 + \frac{\epsilon}{3}\right) & 4 & 5.0\ldots \\
\nu = \frac12\left(1 + \frac{\epsilon}{6}\right) & 0.58 & 0.638\ldots \\
\eta = 0 + \mathcal{O}(\epsilon^2) & 0 & 0.041\ldots
\end{array}$$
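The $\epsilon$–expansion entries in this table can be checked against the four scaling relations to first order in $\epsilon$; a minimal sketch ($\epsilon$ is taken small so that the $\mathcal{O}(\epsilon^2)$ residuals are visibly smaller):

```python
eps = 1e-3
d = 4 - eps

# One-loop exponents from the table above:
alpha = eps / 6
beta  = 0.5 * (1 - eps / 3)
gamma = 1 + eps / 6
delta = 3 * (1 + eps / 3)
nu    = 0.5 * (1 + eps / 6)
eta   = 0.0  # + O(eps^2)

# Each relation should hold up to O(eps^2):
tol = 10 * eps**2
assert abs(alpha + 2*beta + gamma - 2) < tol  # Rushbrooke
assert abs(gamma - nu * (2 - eta)) < tol      # Fisher
assert abs(gamma - beta * (delta - 1)) < tol  # Widom
assert abs(nu * d - (2 - alpha)) < tol        # Josephson (hyperscaling)
print("all four scaling relations hold to O(eps^2)")
```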