
A Molecular Modeler’s Guide to Statistical Mechanics

Course notes for BIOE575

Daniel A. Beard Department of Bioengineering University of Washington Box 3552255 [email protected] (206) 685 9891

April 11, 2001

Contents

1 Basic Principles and the Microcanonical Ensemble ...... 2
   1.1 Classical Laws of Motion ...... 2
   1.2 Ensembles and Thermodynamics ...... 3
      1.2.1 An Ensemble of Particles ...... 3
      1.2.2 Microscopic Thermodynamics ...... 4
      1.2.3 Formalism for Classical Systems ...... 7
   1.3 Example Problem: Classical Ideal Gas ...... 8
   1.4 Example Problem: Quantum Ideal Gas ...... 10

2 Canonical Ensemble and Equipartition ...... 15
   2.1 The Canonical Distribution ...... 15
      2.1.1 A Derivation ...... 15
      2.1.2 Another Derivation ...... 16
      2.1.3 One More Derivation ...... 17
   2.2 More Thermodynamics ...... 19
   2.3 Formalism for Classical Systems ...... 20
   2.4 Equipartition ...... 20
   2.5 Example Problem: Harmonic Oscillators and Blackbody Radiation ...... 21
      2.5.1 Classical Oscillator ...... 22
      2.5.2 Quantum Oscillator ...... 22
      2.5.3 Blackbody Radiation ...... 23
   2.6 Example Application: Poisson-Boltzmann Theory ...... 24
   2.7 Brief Introduction to the … ...... 25

3 Brownian Motion, Fokker-Planck Equations, and the Fluctuation-Dissipation Theorem ...... 27
   3.1 One-Dimensional Langevin Equation and Fluctuation-Dissipation Theorem ...... 27
   3.2 Fokker-Planck Equation ...... 29
   3.3 Brownian Motion of Several Particles ...... 30
   3.4 Fluctuation-Dissipation and Brownian Dynamics ...... 32

Chapter 1

Basic Principles and the Microcanonical Ensemble

The first part of this course will consist of an introduction to the basic principles of statistical mechanics (or statistical physics), the set of theoretical techniques used to understand microscopic systems and how microscopic behavior is reflected on the macroscopic scale. In the later parts of the course we will see how the tool set of statistical mechanics is key in its application to molecular modeling. Along the way in our development of basic theory we will uncover the principles of thermodynamics. This may come as a surprise to those familiar with the classical engineering paradigm in which the laws of thermodynamics appear as if from the brain of Jove (or from the brain of some wise old professor of engineering). This is not the case. In fact, thermodynamics arises naturally from basic principles. So with this foreshadowing in mind we begin by examining the classical laws of motion.¹

1.1 Classical Laws of Motion

Recall Newton’s famous second law of motion, often expressed as F = ma, where F is the force acting to accelerate a particle of mass m with acceleration a. For a collection of N particles located at Cartesian positions x_1, x_2, …, x_N, the law of motion becomes

    m_i \ddot{\mathbf{x}}_i = \mathbf{F}_i,    (1.1.1)

where F_i are the forces acting on the particles i = 1, 2, …, N.² We shall see that in the absence of external fields or dissipation the Newtonian equation of motion preserves total energy:

    E = K + U = \frac{1}{2} \sum_{i=1}^{N} m_i\, \dot{\mathbf{x}}_i \cdot \dot{\mathbf{x}}_i + U(\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_N),    (1.1.2)

where U is some potential energy function with F_i = −∂U/∂x_i, and K is the kinetic energy.

¹This course will be concerned primarily with classical physics. Much of the material presented will be applicable to quantum mechanical systems, and occasionally such references will be made.

²A note on notation: Throughout these notes vectors are denoted by bold lower case letters (e.g. x_i, F_i). The notation \dot{\mathbf{x}}_i denotes the time derivative of x_i, i.e., dx_i/dt, and \ddot{\mathbf{x}}_i = d²x_i/dt².

Another way to pose the classical law of motion is the Hamiltonian formulation, defined in terms of the particle positions q and momenta p = m\dot{q}. It is convenient to adopt the notation q for positions and p for momenta, and to consider the scalar quantities q_k and p_k, which denote the entries of the vectors q and p. For a collection of N particles, q ∈ R^{3N} and p ∈ R^{3N} are the collective position and momentum vectors listing all 3N entries.

The so-called Hamiltonian function is an expression of the total energy of a system:

    H(\mathbf{q}, \mathbf{p}) = \sum_{k=1}^{3N} \frac{p_k^2}{2 m_k} + U(q_1, q_2, \ldots, q_{3N}).    (1.1.3)

Hamilton’s equations of motion are written as:

    \dot{q}_k = \frac{\partial H}{\partial p_k}    (1.1.4)

    \dot{p}_k = -\frac{\partial H}{\partial q_k}.    (1.1.5)

Hamilton’s equations are equivalent to Newton’s:

    \dot{q}_k = p_k / m_k, \qquad \dot{p}_k = -\frac{\partial U}{\partial q_k} = F_k.    (1.1.6)

So why bother with Hamilton when we are already familiar with Newton? The reason is that the Hamiltonian formulation is often convenient. For example, starting from the Hamiltonian formulation, it is straightforward to prove energy conservation:

    \frac{dH}{dt} = \sum_{k=1}^{3N} \left( \frac{\partial H}{\partial q_k}\,\dot{q}_k + \frac{\partial H}{\partial p_k}\,\dot{p}_k \right) = \sum_{k=1}^{3N} \left( \frac{\partial H}{\partial q_k}\frac{\partial H}{\partial p_k} - \frac{\partial H}{\partial p_k}\frac{\partial H}{\partial q_k} \right) = 0.    (1.1.7)
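Equation (1.1.7) can be checked numerically. The sketch below (a one-dimensional harmonic oscillator with unit mass and spring constant — an illustrative choice, not an example from the text) integrates Hamilton's equations with the standard velocity-Verlet scheme and confirms that H stays essentially constant:

```python
# Minimal sketch: integrate Hamilton's equations for a 1D harmonic
# oscillator, H = p^2/(2m) + k*q^2/2, and monitor energy conservation.
# All parameter values here are arbitrary choices for illustration.
m, k = 1.0, 1.0
dt, n_steps = 0.01, 10000

def force(q):
    return -k * q                      # F = -dU/dq

def hamiltonian(q, p):
    return p * p / (2 * m) + 0.5 * k * q * q

q, p = 1.0, 0.0                        # initial condition
E0 = hamiltonian(q, p)

for _ in range(n_steps):
    p += 0.5 * dt * force(q)           # half-kick:  dp/dt = -dH/dq
    q += dt * p / m                    # drift:      dq/dt = +dH/dp
    p += 0.5 * dt * force(q)           # half-kick

drift = abs(hamiltonian(q, p) - E0) / E0
print(f"relative energy drift after {n_steps} steps: {drift:.2e}")
```

The velocity-Verlet update is built directly from the structure of Eqs. (1.1.4)-(1.1.5), which is one practical reason molecular dynamics codes favor the Hamiltonian formulation.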

1.2 Ensembles and Thermodynamics

With our review of the equations of classical mechanics complete, we undertake our study of statistical physics with an introduction to the concepts of statistical thermodynamics. In this section thermodynamics will be briefly introduced as a consequence of the interaction of ensembles of large numbers of particles. The material loosely follows Chapter 1 of Pathria’s Statistical Mechanics [3], and additional information can be found in that text.

1.2.1 An Ensemble of Particles

Consider a collection of N particles confined to a volume V, with total energy E. A system of this sort is often referred to as an NVE system, as N, V, and E are the three thermodynamic variables that are held fixed. [In general three variables are necessary to define the thermodynamic state of a system. Other thermodynamic properties, such as temperature for example, cannot be assigned in an NVE ensemble without changing at least one of the variables N, V, or E.] We will refer to the thermodynamic state as the macrostate of the system.

For a given macrostate, there is likely to be a large number of possible microstates, which correspond to different microscopic configurations of the particles in the system. According to the principles of quantum mechanics there is a finite fixed number of microscopic states that can be adopted by our NVE system. We denote this number of states as Ω(N, V, E).³ For a classical system, the microstates are of course not discrete and the number of possible states for a fixed NVE ensemble is in general not finite. To see this imagine a system of a single particle (N = 1) travelling in an otherwise empty box of volume V. There are no external force fields acting on the particle so its total energy is E = \frac{1}{2} m\, \dot{\mathbf{x}} \cdot \dot{\mathbf{x}}. The particle could be found in any location within the box, and its velocity could be directed in any direction without changing the thermodynamic macrostate defined by the fixed values of N, V, and E. Thus there are an infinite number of allowable states. Let us temporarily ignore this fact and move on with the discussion based on a finite (yet undeniably large) Ω(N, V, E). This should not bother those of us familiar with quantum mechanics. For classical applications we shall see that bookkeeping of the state space for classical systems is done as an integration of the continuous state space rather than a discrete sum as employed in quantum statistical mechanics.

At this point don’t worry about how you might go about computing Ω(N, V, E), or how Ω might depend on N, V, and E for particular systems. We’ll address these issues later. For now just appreciate that the quantity Ω(N, V, E) exists for an NVE system.

1.2.2 Microscopic Thermodynamics

Consider two such so-called NVE systems, denoted system 1 and system 2, having macrostates defined by (N_1, V_1, E_1) and (N_2, V_2, E_2), respectively.

Figure 1.1: Two NVE systems, (N_1, V_1, E_1) and (N_2, V_2, E_2), in thermal contact.

Next, bring the two systems into thermal contact (see Fig. 1.1). By thermal contact we mean that the systems are allowed to exchange energy, but nothing else. That is, E_1 and E_2 may change, but N_1, N_2, V_1, and V_2 remain fixed. Of course the total energy remains fixed as well, that is,

    E_1 + E_2 = E    (1.2.8)

if the two systems interact only with one another. Now we introduce a fundamental postulate of statistical mechanics: At any time, system 1 is equally likely to be in any one of its Ω_1 microstates and system 2 is equally likely to be in any one of its Ω_2 microstates (more on this assumption later). Given this assumption, the composite system is equally likely to be in any one of its Ω(E_1, E_2) possible microstates. The number Ω(E_1, E_2) can be expressed as the multiplication:

    \Omega(E_1, E_2) = \Omega_1(E_1)\, \Omega_2(E_2).    (1.2.9)

³The number Ω(N, V, E) corresponds to the number of independent solutions to the Schrödinger equation that the system can adopt for a given eigenvalue E of the Hamiltonian.


Next we look for the value of E_1 (or equivalently, E_2) for which the number of microstates Ω(E_1, E_2) achieves its maximum value. We will call this achievement equilibrium, or more specifically thermal equilibrium. The assumption here is that physical systems naturally move from improbable macrostates to more probable macrostates.⁴ Due to the large numbers with which we deal on the macro-level (N ~ 10²³), the most probable macrostate is orders of magnitude more probable than even closely related macrostates. That means that for equilibrium we must maximize Ω(E_1, E_2) = Ω_1(E_1) Ω_2(E_2) under the constraint that the sum E_1 + E_2 = E remains constant.

At the maximum ∂Ω/∂E_1 = 0, or

    \frac{\partial}{\partial E_1}\left[ \Omega_1(E_1)\,\Omega_2(E_2) \right] = \frac{\partial \Omega_1}{\partial E_1}\,\Omega_2 + \Omega_1\,\frac{\partial \Omega_2}{\partial E_2}\,\frac{\partial E_2}{\partial E_1} = 0,    (1.2.10)

where (E_1*, E_2*) denotes the maximum point. Since ∂E_2/∂E_1 = −1 from Eq. (1.2.8), Equation (1.2.10) reduces to:

    \frac{1}{\Omega_1}\frac{\partial \Omega_1}{\partial E_1}\bigg|_{E_1 = E_1^*} = \frac{1}{\Omega_2}\frac{\partial \Omega_2}{\partial E_2}\bigg|_{E_2 = E_2^*},    (1.2.11)

which is equivalent to

    \frac{\partial \ln \Omega_1}{\partial E_1}\bigg|_{E_1 = E_1^*} = \frac{\partial \ln \Omega_2}{\partial E_2}\bigg|_{E_2 = E_2^*}.    (1.2.12)

To generalize, for any number of systems in equilibrium thermal contact,

    \frac{\partial \ln \Omega}{\partial E} = \beta = \text{constant}    (1.2.13)

for each system. Let us pause and think for a moment: From our experience, what do we know about systems in equilibrium thermal contact? One thing that we know is that they should have the same temperature. Most people have an intuitive understanding of what temperature is. At least we can often gauge whether or not two objects are of equal or different temperatures. You might even know of a few ways to measure temperature. But do you have a precise physical definition of temperature?

It turns out that the constant β is related to the temperature T via

    \beta = \frac{1}{k_B T},    (1.2.14)

where k_B is Boltzmann’s constant. Therefore the temperature of the NVE ensemble is expressed as

    T = \left[ k_B\, \frac{\partial \ln \Omega(N, V, E)}{\partial E} \right]^{-1}.    (1.2.15)

Until now some readers may have had a murky and vague mental picture of what the thermodynamic variable temperature represents. And now we all have a murky and vague mental picture of what temperature represents. Hopefully the picture will become more clear as we proceed.
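The maximization of Ω_1 Ω_2 can be made concrete with a toy model (an assumed form for illustration, not from the text): give each subsystem the ideal-gas-like density of states Ω_i(E_i) ∝ E_i^{3N_i/2} and scan all splits of a fixed total energy.

```python
import math

# Toy model (assumed): Omega_i(E_i) ∝ E_i^(3*N_i/2) for each subsystem.
N1, N2, E_total = 100, 300, 1.0

def log_omega(n, e):
    """ln Omega for one subsystem, up to an additive constant."""
    return 1.5 * n * math.log(e)

# Scan candidate splits E1 and keep the one maximizing ln[Omega1 * Omega2]
candidates = [k / 1000 * E_total for k in range(1, 1000)]
e1_star = max(candidates,
              key=lambda e1: log_omega(N1, e1) + log_omega(N2, E_total - e1))

# At the maximum the two d(ln Omega)/dE values (the betas) coincide,
# exactly as the equilibrium condition demands:
beta1 = 1.5 * N1 / e1_star
beta2 = 1.5 * N2 / (E_total - e1_star)
print(e1_star, beta1, beta2)
```

The most probable split puts energy in proportion to the particle numbers (E_1* = 0.25 here), i.e. the subsystem with three times the particles holds three times the energy, and the two "temperatures" agree.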

Next consider that systems 1 and 2 are not only in thermal contact, but also their volumes are allowed to change in such a way that the total volume V = V_1 + V_2 remains constant. For this example imagine that a flexible wall separates the two chambers – the wall flexes to allow the volumes to adjust, but the particles are not allowed to pass. Thus N_1 and N_2 remain fixed. For such a system we find that maximizing Ω(V_1, V_2) yields

    \frac{\partial}{\partial V_1}\left[ \Omega_1(V_1)\,\Omega_2(V_2) \right] = \frac{\partial \Omega_1}{\partial V_1}\,\Omega_2 + \Omega_1\,\frac{\partial \Omega_2}{\partial V_2}\,\frac{\partial V_2}{\partial V_1} = 0,    (1.2.16)

or

    \frac{1}{\Omega_1}\frac{\partial \Omega_1}{\partial V_1} = \frac{1}{\Omega_2}\frac{\partial \Omega_2}{\partial V_2},    (1.2.17)

or

    \frac{\partial \ln \Omega_1}{\partial V_1}\bigg|_{V_1 = V_1^*} = \frac{\partial \ln \Omega_2}{\partial V_2}\bigg|_{V_2 = V_2^*},    (1.2.18)

or

    \frac{\partial \ln \Omega}{\partial V} = \eta = \text{constant}.    (1.2.19)

⁴Again, the term macrostate refers to the thermodynamic state of the composite system, defined by the variables N_1, V_1, E_1 and N_2, V_2, E_2. A more probable macrostate will be one that corresponds to more possible microstates than a less probable macrostate.

We shall see that the parameter η is related to pressure, as you might expect. But first we have one more case to consider, that is mass equilibration. For this case, imagine that the partition between the chambers is perforated and particles are permitted to freely travel from one system to the next. The equilibrium statement for this system is

    \frac{\partial \ln \Omega}{\partial N} = \gamma = \text{constant}.    (1.2.20)

To summarize, we have the following:

1. Thermal (Temperature) Equilibrium: ∂ ln Ω/∂E = β.

2. Volume (Pressure) Equilibrium: ∂ ln Ω/∂V = η.

3. Number (Concentration) Equilibrium: ∂ ln Ω/∂N = γ.

How do these relationships apply to the macroscopic world with which we are familiar? Recall the fundamental expression from thermodynamics:

    dE = T\,dS - P\,dV + \mu\,dN,    (1.2.21)

which tells us how to relate changes in the energy E to changes in the entropy S, volume V, and number of particles N, occurring at temperature T, pressure P, and chemical potential μ. Equation (1.2.21) arose as an empiricism which relates the three intrinsic thermodynamic properties T, P, and μ to the three extrinsic properties E, V, and N. In developing this relationship, it was necessary to introduce a novel idea, entropy, which we will try to make some sense of below.

For constant V and N Equation (1.2.21) gives us

    \left( \frac{\partial S}{\partial E} \right)_{V,N} = \frac{1}{T}.    (1.2.22)

Going back to Equation (1.2.13) we see that

    S = k_B \ln \Omega,    (1.2.23)

which makes sense if we think of entropy as a measure of the total disorder in a system. The greater the number of possible states, the greater the entropy. For pressure and chemical potential we find the following relationships:

For constant E and N we arrive at

    \left( \frac{\partial S}{\partial V} \right)_{E,N} = \frac{P}{T}, \quad\text{or}\quad \frac{\partial \ln \Omega}{\partial V} = \eta \quad\text{and}\quad \eta = \frac{P}{k_B T}.    (1.2.24)

For constant E and V we obtain

    \left( \frac{\partial S}{\partial N} \right)_{E,V} = -\frac{\mu}{T}, \quad\text{or}\quad \frac{\partial \ln \Omega}{\partial N} = \gamma \quad\text{and}\quad \gamma = -\frac{\mu}{k_B T}.    (1.2.25)

For completeness we repeat:

    \left( \frac{\partial S}{\partial E} \right)_{V,N} = \frac{1}{T}, \quad\text{or}\quad \frac{\partial \ln \Omega}{\partial E} = \beta \quad\text{and}\quad \beta = \frac{1}{k_B T}.    (1.2.26)

Through Eqs. (1.2.24)-(1.2.26) the intrinsic thermodynamic parameters familiar to our everyday experience – temperature, pressure, and chemical potential – are related to the microscopic world of N, V, and E. The key to this translation is the formula S = k_B ln Ω. As Pathria puts it, this formula “provides a bridge between the microscopic and the macroscopic” [3].

After introducing such powerful theory it is compulsory that we work out some example problems in the following sections. But I recommend that readers tackling this subject matter for the first time pause to appreciate what they have learned so far. By asserting that entropy (the most mysterious property to arise in thermodynamics) is simply proportional to the log of the number of accessible microstates, we have derived direct relationships between the microscopic and the macroscopic worlds.

Before moving on to the application problems I should point out one more thing about the number Ω – that is, its name. The quantity Ω is commonly referred to as the microcanonical partition function, a partition function being a statistically weighted sum over the possible states of a system. Since Ω is a non-biased enumeration of the microstates, we refer to it as microcanonical. Similarly, another name for the NVE ensemble is the microcanonical ensemble. Later we will meet the canonical (NVT) and grand canonical (μVT) ensembles.

1.2.3 Formalism for Classical Systems

The microcanonical partition function for a classical system is proportional to the volume of phase space accessible to the system. For a system of N particles the phase space is a 6N-dimensional space encompassing the 3N position variables q_k and the 3N momentum variables p_k, and the partition function is proportional to the integral:

    \Omega(N, V, E) \propto \int \delta\big( H(q_1, \ldots, q_{3N}, p_1, \ldots, p_{3N}) - E \big)\, dq_1 \cdots dq_{3N}\, dp_1 \cdots dp_{3N},    (1.2.27)

or using vector shorthand

    \Omega(N, V, E) \propto \int \delta\big( H(\mathbf{q}, \mathbf{p}) - E \big)\, d^{3N}q\, d^{3N}p,    (1.2.28)

where the notation d^{3N} reminds us that the integration is over 3N-dimensional space. In Eqs. (1.2.27)-(1.2.28) the delta function restricts the integration to the constant energy hypersurface defined by H(q, p) = E = constant. [In general we won’t be integrating this difficult-looking delta function directly. Just think of it as a mathematical shorthand for restricting the phase space to a constant-energy subspace.]

We notice that Equation (1.2.28) lacks a constant of proportionality that allows us to replace the proportionality symbol with the equality symbol and compute Ω(N, V, E). This constant comes from relating a given volume of the classical phase space to a discrete number of quantum microstates. It turns out that this constant of proportionality is 1/(N! h^{3N}), where h is Planck’s constant. Thus

    \Omega(N, V, E) = \frac{1}{N!\, h^{3N}} \int' d^{3N}q\, d^{3N}p,    (1.2.29)

where the integration ∫′ is over the subspace defined by H(q, p) = E.

From where does the constant 1/(N! h^{3N}) come? We know from quantum mechanics that when we specify the position of a particle with increasing certainty, we have to allow its momentum to lose coherence. Similarly, when we specify the momentum with increasing certainty, the position loses coherence. If we consider Δq and Δp to be the fundamental uncertainties in position and momenta, then Planck’s constant tells us how these uncertainties depend upon one another:

    h \approx \Delta p\, \Delta q.    (1.2.30)

Thus the minimal discrete volume element of phase space is approximately h for a single particle in one dimension, or h^{3N} when there are 3N degrees of freedom. This explains (heuristically at least) the factor of h^{3N}. From where does the N! come? We shall see when we enumerate the quantum states of the ideal gas that the indistinguishability of the particles further reduces the partition function by a factor of N!, which fixes Ω as the number of distinguishable microstates.

1.3 Example Problem: Classical Ideal Gas

A system of noninteracting monatomic particles is referred to as the ideal gas. For such a system the kinetic energy is the only contribution to the Hamiltonian, H = \sum_{k=1}^{3N} p_k^2 / 2m, and

    \Omega(N, V, E) = \frac{1}{N!\, h^{3N}} \int d^{3N}q \int' d^{3N}p,    (1.3.31)

where ∫ d^{3N}q represents integration over the volume of the container. [The integral can be split into q and p components because H(p) does not depend on particle positions in any way.] Therefore

    \Omega(N, V, E) = \frac{V^N}{N!\, h^{3N}} \int' d^{3N}p.    (1.3.32)

It turns out that knowing Ω ∝ V^N is enough to derive the ideal gas law:

    \frac{P}{T} = k_B \frac{\partial \ln \Omega}{\partial V} = \frac{k_B N}{V},    (1.3.33)

or

    PV = N k_B T = nRT,    (1.3.34)

where R = k_B N_A is the gas constant, n is the number of particles in moles, and N_A is Avogadro’s number. For other properties (like energy and entropy) we need to do something with the ∫′ integral in Equation (1.3.32).

We approach this integral by first noticing that the constant energy surface \sum_{k=1}^{3N} p_k^2 = 2mE defines a sphere of radius R = (2mE)^{1/2} in 3N-dimensional space. We can find the volume and surface area of such a sphere from a handbook of mathematical functions. In R^{3N} the volume and surface area of a sphere are given by:

    V_{3N}(R) = \frac{\pi^{3N/2} R^{3N}}{(3N/2)!} \quad\text{and}\quad S_{3N}(R) = \frac{2\, \pi^{3N/2} R^{3N-1}}{(3N/2 - 1)!}.    (1.3.35)

[One may wonder what to do in the case where 3N/2 is non-integral. Specifically, how would one define the (3N/2)! and (3N/2 − 1)! factorials? We could use the gamma functions Γ(3N/2 + 1) and Γ(3N/2), where Γ is a generalization of the factorial function: \Gamma(x+1) = \int_0^\infty t^x e^{-t}\, dt. It turns out that Γ(x) = (x − 1)! for integer x > 0. So in the above equations for surface area and volume we are using the generalized factorial function x! = Γ(x + 1), which is the same as the regular factorial function for non-negative integer arguments.]
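Both claims in the bracketed remark are easy to verify numerically (Python's math.gamma implements Γ):

```python
import math

# Gamma generalizes the factorial: Gamma(n + 1) == n! at non-negative
# integers, and it fills in half-integer arguments such as (3N/2)! for odd N.
for n in range(6):
    assert math.gamma(n + 1) == math.factorial(n)

def sphere_volume(d, R):
    """Volume of a d-dimensional sphere, pi^(d/2) R^d / (d/2)!, via Gamma."""
    return math.pi ** (d / 2) * R ** d / math.gamma(d / 2 + 1)

# Sanity checks against the familiar low-dimensional formulas
print(sphere_volume(2, 1.0))   # area of the unit disk: pi
print(sphere_volume(3, 1.0))   # volume of the unit ball: 4*pi/3
```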


Returning to the task at hand: we wish to evaluate the integral ∫′ d^{3N}p over the constant energy surface in R^{3N} defined by \sum_{k=1}^{3N} p_k^2 = 2mE. One way to do this is to take

    \int' d^{3N}p = S_{3N}\big( (2mE)^{1/2} \big),    (1.3.36)

which gives us

    \Omega(N, V, E) = \frac{3N}{E}\, \frac{V^N}{N!\, (3N/2)!} \left( \frac{2\pi m E}{h^2} \right)^{3N/2}.    (1.3.37)

Taking the ln of this function, we will employ Stirling’s approximation, that ln N! ≈ N ln N − N, for large N. Thus

    \ln \Omega(N, V, E) \approx N \ln\!\left[ V \left( \frac{2\pi m E}{h^2} \right)^{3/2} \right] - N \ln N + N - \frac{3N}{2} \ln \frac{3N}{2} + \frac{3N}{2} + \ln 3N - \ln E.    (1.3.38)

In the limit of large N, we know that the first three terms grow faster than the last two. So combining the N terms and keeping only terms of order N ln N and N results in

    \ln \Omega(N, V, E) \approx N \ln\!\left[ \frac{V}{N} \left( \frac{4\pi m E}{3N h^2} \right)^{3/2} \right] + \frac{5N}{2},    (1.3.39)

or the entropy

    S(N, V, E) \approx N k_B \left\{ \ln\!\left[ \frac{V}{N} \left( \frac{4\pi m E}{3N h^2} \right)^{3/2} \right] + \frac{5}{2} \right\}.    (1.3.40)
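Stirling's approximation, used to reach Equation (1.3.40), discards a correction of order ln N; its relative error therefore vanishes for thermodynamically large particle numbers, as a quick numerical check shows:

```python
import math

# Relative error of Stirling's approximation ln(N!) ≈ N*ln(N) - N.
# math.lgamma(N + 1) returns ln(N!) exactly, without overflow.
for N in (10, 100, 1000, 10000):
    exact = math.lgamma(N + 1)
    stirling = N * math.log(N) - N
    print(N, f"{(exact - stirling) / exact:.2e}")
```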

Using our thermodynamic definition of temperature,

    \frac{1}{T} = \left( \frac{\partial S}{\partial E} \right)_{V,N} = \frac{3 N k_B}{2 E},    (1.3.41)

or

    E = \frac{3}{2} N k_B T \quad\text{and}\quad \frac{E}{N} = \frac{3}{2} k_B T.    (1.3.42)

As you can see, the internal energy is proportional to the temperature and, as expected, to the number of particles. Inserting E = (3/2) N k_B T into Equation (1.3.40), we get:

    S(N, V, T) = N k_B \left\{ \ln\!\left[ \frac{V}{N} \left( \frac{2\pi m k_B T}{h^2} \right)^{3/2} \right] + \frac{5}{2} \right\},    (1.3.43)

which is the Sackur-Tetrode equation for entropy.

We should note that if, instead of taking the integral ∫′ d^{3N}p to be the surface area of the constant-energy sphere, we had allowed the energy to vary within some small range, we would have arrived at the same results. In fact we shall see that, for the quantum mechanical ideal gas, that is precisely what we will have to do.
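As a numerical sanity check of Eq. (1.3.43) (the gas species and conditions below are my choice, not the text's), the Sackur-Tetrode formula lands close to the measured standard molar entropy of argon, roughly 155 J/(mol K) near room temperature:

```python
import math

kB = 1.380649e-23      # Boltzmann's constant, J/K
h  = 6.62607015e-34    # Planck's constant, J*s
NA = 6.02214076e23     # Avogadro's number, 1/mol

def sackur_tetrode(m, T, V_per_N):
    """Entropy per particle in units of kB, Eq. (1.3.43)."""
    return math.log(V_per_N * (2 * math.pi * m * kB * T / h**2) ** 1.5) + 2.5

# Argon at T = 300 K and atmospheric pressure, so V/N = kB*T/P
m_Ar = 39.948e-3 / NA                      # mass of one argon atom, kg
T, P = 300.0, 101325.0
S_molar = sackur_tetrode(m_Ar, T, kB * T / P) * kB * NA
print(f"molar entropy of argon: {S_molar:.1f} J/(mol K)")
```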

1.4 Example Problem: Quantum Ideal Gas

As we saw for the classical ideal gas, analysis of the quantum mechanical ideal gas will hinge on the enumeration of the partition function, and not on the analysis of the underlying equations of motion. Nevertheless, it is necessary to introduce some quantum mechanical ideas to understand the ideal gas from the perspective of quantum mechanics. It will be worthwhile to go through this exercise to appreciate how statistical mechanics naturally applies to the discrete states observed in

quantum systems.

First we must find the quantum mechanical state, or wave function ψ(x, y, z), of a single particle living in an otherwise empty box. The equation describing the shape of the constant-energy wave function for a single particle in the presence of no potential field is

    -\frac{h^2}{8 \pi^2 m} \left( \frac{\partial^2 \psi}{\partial x^2} + \frac{\partial^2 \psi}{\partial y^2} + \frac{\partial^2 \psi}{\partial z^2} \right) = E_1\, \psi(x, y, z).    (1.4.44)

We solve Equation (1.4.44), a form of Schrödinger’s equation, with the condition that ψ = 0 on the walls of the container. The constant energy E_1 (the subscript “1” reminds us that this is the energy of a single particle) is an eigenvalue of the Hamiltonian operator on the left hand side

of Equation (1.4.44). Under these conditions the single-particle wave function has the form:

    \psi(x, y, z) = \left( \frac{2}{L} \right)^{3/2} \sin\frac{n_x \pi x}{L}\, \sin\frac{n_y \pi y}{L}\, \sin\frac{n_z \pi z}{L},    (1.4.45)

where n_x, n_y, and n_z can be any of the positive integers (1, 2, 3, …). Here the box is assumed to be a cube with sides of length L. The energy E_1 is related to these numbers via

    E_1 = \frac{h^2}{8 m L^2} \left( n_x^2 + n_y^2 + n_z^2 \right).    (1.4.46)
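The state counting implied by Eq. (1.4.46) can be done by brute force for a single particle; a short sketch (cutoff chosen arbitrarily) tabulates how many (n_x, n_y, n_z) triples share each value of n_x² + n_y² + n_z²:

```python
from collections import Counter

# Degeneracy of each single-particle level, indexed by the dimensionless
# energy E* = nx^2 + ny^2 + nz^2; positive integers up to an arbitrary cutoff.
nmax = 20
degeneracy = Counter(nx * nx + ny * ny + nz * nz
                     for nx in range(1, nmax + 1)
                     for ny in range(1, nmax + 1)
                     for nz in range(1, nmax + 1))

# The spectrum is discrete and irregular: one state at E* = 3, none at
# E* = 4 or 5, three at E* = 6, and so on.
for e_star in sorted(degeneracy)[:6]:
    print(e_star, degeneracy[e_star])
```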

If the energy E_1 is fixed then the number of possible quantum states is equal to the number of sets {n_x, n_y, n_z} for which

    n_x^2 + n_y^2 + n_z^2 = \frac{8 m V^{2/3}}{h^2} E_1,    (1.4.47)

where V^{2/3} = L². For a system of N noninteracting particles, we have N such sets of three integers, and the energy is the sum of the energies from each particle:

    E^* \equiv \frac{8 m V^{2/3}}{h^2} E = \sum_{i=1}^{3N} n_i^2,    (1.4.48)

where E now represents the total energy of the system and E* is a nondimensionalization of the energy.

The similarities between the classical ideal gas and Equation (1.4.48) are striking. As in the classical system, the constant energy condition limits the quantum phase space to the surface of a sphere in a 3N-dimensional space. The important difference is that for the quantum mechanical system the phase space is discrete because the {n_i} are integers. This discrete nature of the phase space means that Ω(N, V, E) can be more difficult to pin down than it was for the classical case. To see this imagine the regularly spaced lattice in 3N-dimensional space which is defined by the set of positive integers {n_i}. The number Ω(N, V, E) is equal to the number of lattice points which fall on the surface of the sphere defined by Equation (1.4.48) – this number is an irregular function of (N, V, E). As an illustration, return to the single particle case. There is one possible quantum state for E_1 = 3h²/(8mV^{2/3}) and three possible states for E_1 = 6h²/(8mV^{2/3}). Yet there are no possible states for energies falling between these two energies. Thus the distinct microstates can be difficult to enumerate. We shall see that as E and N become large, the discrete spectrum becomes more regular and smooth and easier to handle.

Consider the number Φ(N, V, E), which we define to be the number of microstates with energy less than or equal to E. In the limit of large E and large N, Φ(N, V, E) is equal to the volume of the “positive compartment” of a 3N-dimensional sphere. Recalling Equation (1.3.35) gives

    \Phi(E^*) = \left( \frac{1}{2} \right)^{3N} \frac{\pi^{3N/2} (E^*)^{3N/2}}{(3N/2)!}.    (1.4.49)

[The factor (1/2)^{3N} comes from limiting Φ(E*) to the volume spanned by the positive values of the {n_i}.] Plugging in E* = 8mV^{2/3}E/h² results in

    \Phi(N, V, E) = \frac{V^N}{(3N/2)!} \left( \frac{2\pi m E}{h^2} \right)^{3N/2}.    (1.4.50)

Next we calculate Ω(N, V, E) from Φ(N, V, E) by assuming that the energy varies over some small range Δ, where Δ ≪ E. The enumeration of microstates within this energy range can be calculated as

    \Omega(N, V, E; \Delta) \approx \Delta\, \frac{\partial \Phi(N, V, E)}{\partial E},    (1.4.51)

which is valid for small Δ (relative to E). From Equation (1.4.50), we have

    \frac{\partial \Phi}{\partial E} = \frac{3N}{2}\, \frac{\Phi}{E},    (1.4.52)

and thus

    \Omega(N, V, E; \Delta) = \frac{3N}{2}\, \frac{\Delta}{E}\, \Phi,    (1.4.53)

and

    \ln \Omega(N, V, E; \Delta) = N \ln\!\left[ V \left( \frac{2\pi m E}{h^2} \right)^{3/2} \right] - \ln (3N/2)! + \ln \frac{3N}{2} + \ln \frac{\Delta}{E}.    (1.4.54)

As for the classical ideal gas, we keep the terms of order N and order N ln N, which grow much faster than ln N and the constant terms. Thus

    \ln \Omega(N, V, E) \approx N \ln\!\left[ V \left( \frac{4\pi m E}{3N h^2} \right)^{3/2} \right] + \frac{3N}{2}.    (1.4.55)

From Equation (1.4.55) we could derive the thermodynamics of the system, just as we did for the classical ideal gas. However we notice that the entropy, which is given by

    S(N, V, E) = k_B \ln \Omega \approx N k_B \left\{ \ln\!\left[ V \left( \frac{4\pi m E}{3N h^2} \right)^{3/2} \right] + \frac{3}{2} \right\},    (1.4.56)

is not equivalent to the Sackur-Tetrode expression, Equation (1.3.43). [The difference is a factor of 1/N! in the partition function, which is precisely the factor that we added to the classical partition function, Equation (1.2.29), with no solid justification.]

In fact, one might notice that the entropy according to Equation (1.4.56) is not an extensive measure! If we increase the volume, energy, and number of particles by some fixed proportion, then the entropy will not increase by the same proportion. What have we done wrong? How can we recover the missing factor of 1/N!?

To justify this extra factor, we need to consider that the particles making up the ideal gas system are not only identical, they are also indistinguishable. We label the possible states that a given particle can be in as state 1, state 2, etc., and denote the number of particles that exist in each state at a given instant as a_1, a_2, etc. Thus there are a_1 particles in state 1, a_2 particles in state 2, and so on. Since the particles are indistinguishable, we can rearrange the particles of the system (by switching the states of the individual particles) in any way we like: as long as the numbers {a_i} remain unchanged, the microstate of the system is unchanged. The number of ways the particles can be rearranged is given by

    \frac{N!}{a_1!\, a_2! \cdots}.

Introducing another assumption – that the temperature is high enough that the number of possible microstates of a single particle is so fantastically large that each possible single-particle state is represented by, at most, one particle – then a_1! a_2! ⋯ = 1 (because each a_i is either 1 or 0). Thus we need to correct the partition function by a factor of 1/N!, and as a result Equation (1.4.55) reduces to Equation (1.3.39).

Problems

1. (Warm-up Problem) Invert Equation (1.3.40) to produce an equation for E(S, V, N). Using this equation and our basic thermodynamic definitions, derive the pressure-volume law (the ideal gas law). How does this compare with Equation (1.3.34)?

2. (Particle in a box) Verify that Equation (1.4.45) is a solution to Equation (1.4.44). Evaluate the integral \int\!\int\!\int \psi^2\, dx\, dy\, dz for the one-particle system. What are the 6 lowest possible energies of this system? For each of the 6 lowest energies count the number of corresponding quantum states. Are the energy levels equally spaced? Does the number of quantum states increase monotonically with E?

3. (Gas of finite size particles) Consider a gas of particles that do not interact in any way except that each particle occupies a finite volume v_0 which cannot be overlapped by other particles. What consequences does this imply for the ideal gas law? [Hint: return to the relationship Ω(V) ∝ ∫ d^{3N}q. You might try assuming that each particle is a solid sphere.] Plot P vs. V for both the ideal gas law and the P-V relationship for the finite-volume particles. (Use T = 300 K and n = 1 mole.) Discuss the following questions: Where do the curves differ? Where are they the same? Why?

4. (Phase space of simple harmonic oscillator) Consider a system made up of a single particle of mass m attached to a linear spring, with spring constant k_s. One end of the spring is attached to the particle, the other is fixed in space, and the particle is free to move in one dimension, x. What is the Hamiltonian H(q, p) for this system? Plot the phase space for H(q, p) = E. Find an expression for the entropy S(E) of this system. You can assume that the energy varies over some small range Δ, Δ ≪ E. Using 1/T = ∂S/∂E, derive an expression for the “temperature” of this system. We saw that the ideal gas has an internal energy of (3/2)k_B T per particle. How does the energy as a function of temperature for the simple harmonic oscillator compare to that for the ideal gas? Does it make sense to calculate the “temperature” of a one-particle system? Why or why not?

Chapter 2

Canonical Ensemble and Equipartition

In Chapter 1 we studied the statistical properties of a large number of particles interacting within the microcanonical ensemble – a closed system with fixed number of particles, volume, and inter- nal energy. While the microcanonical ensemble theory is sound and useful, the canonical ensemble (which fixes the number of particles, volume, and temperature while allowing the energy to vary) proves more convenient than the microcanonical for numerous applications. For example, consider a solution of macromolecules stored in a test tube. We may wish to understand the conformations adopted by the individual molecules. However each molecule exchanges energy with its environ- ment, as undoubtedly does the entire system of the solution and its container. If we focus our attention on a smaller subsystem (say one molecule) we adopt a canonical treatment in which variations in energy and other properties are governed by the statistics of an ensemble at a fixed thermodynamic temperature.

2.1 The Canonical Distribution

2.1.1 A Derivation

Our study of the canonical ensemble begins by treating a large reservoir thermally coupled to a smaller system using the microcanonical approach. The energy of the heat reservoir is denoted E_r, and the energy of the smaller subsystem, E. The total system is assumed closed and the total energy is fixed: E_r + E = E_total = constant.

This system is illustrated in Fig. (2.1). For a given energy E of the subsystem, the reservoir can obtain Ω_r(E_total − E) microstates, where Ω_r is the microcanonical partition function of the reservoir. According to our standard assumption that the probability of a state is proportional to the number of microstates available:

    P(E) ∝ Ω_r(E_total − E).   (2.1.1)

We take the logarithm of the microcanonical partition function and expand about E_total:

    ln Ω_r(E_total − E) = ln Ω_r(E_total) − E [∂ ln Ω_r/∂E_r]_{E_r = E_total} + (E²/2) [∂² ln Ω_r/∂E_r²]_{E_r = E_total} − ⋯   (2.1.2)

Figure 2.1: A system with energy E thermally coupled to a large heat reservoir with energy E_r.

For large reservoirs (E_r ≫ E) the higher-order terms in Equation (2.1.2) vanish and we have

    ln P(E) = −E/(k_B T) + constant,   (2.1.3)

where we have used the microcanonical definition of thermodynamic temperature, 1/T = k_B ∂ ln Ω/∂E. Thus

    P(E) ∝ e^{−βE},   (2.1.4)

where β = 1/(k_B T) has been defined previously. Equation (2.1.4) is the central result of canonical ensemble theory. It tells us how the probability of a given state of a system depends on its energy.
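As a quick numerical illustration of Equation (2.1.4) (a sketch of my own, not from the notes; the three-level system and its energies are invented for the example), the relative populations of a set of energy levels follow directly from their Boltzmann factors:

```python
import math

def canonical_probabilities(energies, kT):
    """Return P(E_i) proportional to exp(-E_i/kT), normalized over the listed states."""
    weights = [math.exp(-E / kT) for E in energies]
    Q = sum(weights)  # canonical partition function restricted to these states
    return [w / Q for w in weights]

# Hypothetical three-level system with energies measured in units of kT
probs = canonical_probabilities([0.0, 1.0, 2.0], kT=1.0)
print(probs)  # lower-energy states are exponentially more probable
```

Each successive level here is less probable by a factor of e, exactly as the exponential in Equation (2.1.4) dictates.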

2.1.2 Another Derivation

A second approach to the canonical distribution, found in Feynman's lecture notes on statistical mechanics [1], is also based on the central idea from microcanonical ensemble theory that the probability of a microstate is proportional to the number of available microstates Ω. Thus

    P(E_1)/P(E_2) = Ω(E_total − E_1)/Ω(E_total − E_2),   (2.1.5)

where again E_total is the total energy of the system and a heat reservoir to which the system is coupled. The energies E_1 and E_2 are possible energies of the system, and Ω is the microcanonical partition function for the reservoir. (The subscript r has been dropped.)

Next Feynman makes use of the fact that energy is defined only up to an additive constant. In other words, there is no absolute energy value, and we can always add a constant, say Δ, so long as we add the same Δ to all relevant values of energy. Without changing its physical meaning, Equation (2.1.5) can be modified:

    P(E_1)/P(E_2) = Ω(E_total + Δ − E_1)/Ω(E_total + Δ − E_2).   (2.1.6)

Next we define the function f(E) ≡ Ω(E_total − E). Equating the right-hand sides of Eqs. (2.1.5) and (2.1.6) results in

    Ω(E_total − E_1) Ω(E_total + Δ − E_2) = Ω(E_total − E_2) Ω(E_total + Δ − E_1),   (2.1.7)

or

    f(E_1) f(E_2 − Δ) = f(E_2) f(E_1 − Δ).   (2.1.8)

Equation (2.1.8) is uniquely solved by:

    f(E) = C e^{−βE},   (2.1.9)

where C is some constant. Therefore the probability of a given energy is proportional to e^{−βE}, which is the result from Section 2.1.1. To take the analysis one step further we can normalize the probability:

    P(E_i) = e^{−βE_i}/Q,   (2.1.10)

where

    Q = Σ_i e^{−βE_i}   (2.1.11)

is the canonical partition function, and Equation (2.1.10) defines the canonical distribution function. [Feynman doesn't go on to say why β = 1/(k_B T); we will see why later.] Summation in Equation (2.1.11) is over all possible microstates. Equation (2.1.11) is equation #1 on the first page of Feynman's notes on statistical mechanics [1]. Feynman calls Equation (2.1.10) the "summit of statistical mechanics, and the entire subject is either the slide-down from this summit...or the climb-up." The climb took us a little bit longer than it takes Feynman, but we got here just the same.

2.1.3 One More Derivation

Since the canonical distribution function is the summit, it may be instructive to scale the peak once more from a different route. In particular we seek a derivation that stands on its own and does not rely on the microcanonical theory introduced earlier.

Consider a collection of M identical systems which are thermally coupled and thus share energy at a constant temperature. If we label the possible states of the system i = 1, 2, 3, … and denote the energies of these obtainable microstates as E_i, then the total number of systems M is equal to the summation

    M = Σ_i n_i,   (2.1.12)

where the n_i are the numbers of systems which correspond to microstate i. The total energy of the ensemble can be computed as

    Σ_i n_i E_i = M U,   (2.1.13)

where U is the average internal energy of the systems in the ensemble.

Eqs. (2.1.12) and (2.1.13) represent constraints on the ways microstates can be distributed amongst the members of the ensemble. Analogous to our study of microcanonical statistics, here we assume that the probability of obtaining a given set {n_i} of numbers of systems in each microstate is proportional to the number of ways this set can be obtained. Imagine the numbers n_i to represent bins that count the number of systems in a given state. Since the systems are identical, they can be shuffled about the bins as long as the numbers n_i remain fixed. The number of possible ways to shuffle the states about the bins is given by:

    W({n_i}) = M!/(n_1! n_2! ⋯).   (2.1.14)

One way to arrive at the canonical distribution is via maximizing the number W under the constraints imposed by Eqs. (2.1.12) and (2.1.13). At the maximum value,

    (∇W)·δn = 0,   (2.1.15)

where the operator ∇ is (∂/∂n_1, ∂/∂n_2, …), and δn is a vector which represents a direction allowed by the constraints.

[The occasional mathematician will point out the hazards of taking the derivative of a discontinuous function with respect to a discontinuous variable. Easy-going types will be satisfied with the explanation that for astronomically large numbers of possible states, the function W and the variables {n_i} are effectively continuous. Sticklers for mathematical rigor will have to find satisfaction elsewhere.]

We can maximize the number W by using the method of Lagrange multipliers. Again, it is convenient to work with the logarithm of the number W, which allows us to apply Stirling's approximation:

    ln W = M ln M − Σ_i n_i ln n_i.   (2.1.16)

This equation is maximized by setting

    ∇ ln W − α ∇(Σ_i n_i) − β ∇(Σ_i n_i E_i) = 0,   (2.1.17)

where α and β are the unknown Lagrange multipliers. The second two terms in this equation are the gradients of the constraint functions. Evaluating Equation (2.1.17) results in:

    −ln n_i − 1 − α − β E_i = 0,   (2.1.18)

in which the entries of the gradients in Equation (2.1.17) are entirely uncoupled. Thus Equation (2.1.18) gives us a straightforward expression for the optimal n_i:

    n_i = C e^{−βE_i},   (2.1.19)

where the unknown constants C and β can be obtained by returning to the constraints.

The probability of a given state follows from the first constraint (2.1.12):

    P_i = n_i/M = e^{−βE_i} / Σ_j e^{−βE_j},   (2.1.20)

which is by now familiar as the canonical distribution function. As you might guess, the parameter β will once again turn out to be 1/(k_B T) when we examine the thermodynamics of the canonical ensemble.

[Note that the above derivation assumed that the set of numbers {n_i} assumes the most probable distribution, i.e., maximizes W. For a more rigorous approach which directly evaluates the expected values of n_i, see Section 3.2 of Pathria [3].]

2.2 More Thermodynamics

With the canonical distribution function defined according to Equation (2.1.20), we can calculate the expected value of a property of a canonical system:

    ⟨A⟩ = Σ_i A_i e^{−βE_i}/Q,   (2.2.21)

where ⟨A⟩ is the expected value of some observable property A, and A_i is the value of A corresponding to the i-th state. For example, the internal energy of a system in the canonical ensemble is defined as the expected, or average, value of E:

    U = ⟨E⟩ = Σ_i E_i e^{−βE_i}/Q = −(1/Q) ∂Q/∂β = −∂ ln Q/∂β.   (2.2.22)

The Helmholtz free energy A is defined as A = U − TS, and incremental changes in A can be related to changes in internal energy, temperature, and entropy by dA = dU − T dS − S dT. Substituting our basic thermodynamic accounting for the internal energy, dU = T dS − p dV, results in:

    dA = −S dT − p dV.   (2.2.23)

Thus S = −(∂A/∂T)_V, and the internal energy U = A + TS can be expressed as:

    U = A − T (∂A/∂T)_V = −T² (∂(A/T)/∂T)_V = (∂(A/T)/∂(1/T))_V.   (2.2.24)

We can equate Eqs. (2.2.22) and (2.2.24) by setting β = 1/(kT). [So far we have still not shown that k is the same constant (Boltzmann's constant) that we introduced in Chapter 1; here k is assumed to be some undetermined constant.] The Helmholtz free energy can then be calculated directly from the canonical partition function:

    A = −kT ln Q.   (2.2.25)

How do we equate the constant k of this chapter to Boltzmann's constant of the previous chapter? We know that the probability of a given state i in the canonical ensemble is given by:

    P_i = e^{−βE_i}/Q.   (2.2.26)

Next we take the expected value of the logarithm of this quantity:

    ⟨ln P_i⟩ = −β⟨E⟩ − ln Q = −βU + βA = β(A − U).   (2.2.27)

[You might think that in the study of statistical mechanics, we are terribly eager to take logarithms of every last quantity that we derive, perhaps with no a priori justification. Of course, the justification is sound in hindsight. So when in doubt in statistical mechanics, try taking a logarithm. Maybe something useful will appear!]

A useful relationship follows from Equation (2.2.27). Since A − U = −TS, we have S = −k⟨ln P_i⟩. The expected value of ln P_i is straightforward to evaluate:

    S = −k Σ_i P_i ln P_i.   (2.2.28)

From this equation, we can make a connection to the microcanonical ensemble, and the k_B from Chapter 1. In a microcanonical ensemble, each state is equally likely. Therefore P_i = 1/Ω, and Equation (2.2.28) becomes

    S = −k Σ_i (1/Ω) ln(1/Ω) = k ln Ω,   (2.2.29)

which should look familiar. Thus the k of Chapter 2 is identical to the k_B of Chapter 1, Boltzmann's constant.
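The identity U = −∂ ln Q/∂β in Equation (2.2.22) is easy to check numerically (my own sketch; the two-level system is invented for the example): a finite-difference derivative of ln Q should match the direct ensemble average Σ_i E_i P_i.

```python
import math

def lnQ(beta, energies):
    """Logarithm of the canonical partition function."""
    return math.log(sum(math.exp(-beta * E) for E in energies))

def mean_energy(beta, energies):
    """Direct ensemble average U = sum(E_i * P_i)."""
    Q = sum(math.exp(-beta * E) for E in energies)
    return sum(E * math.exp(-beta * E) for E in energies) / Q

energies = [0.0, 1.0]  # hypothetical two-level system
beta, h = 1.3, 1e-6
U_direct = mean_energy(beta, energies)
U_deriv = -(lnQ(beta + h, energies) - lnQ(beta - h, energies)) / (2 * h)
print(U_direct, U_deriv)  # the two should agree
```

For a two-level system the exact value is e^{−β}/(1 + e^{−β}), which both expressions reproduce.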

2.3 Formalism for Classical Systems

As in the construction of the classical microcanonical partition function, in defining the canonical partition function for classical systems we make use of the correction factor described in Chapter 1 which relates the volume of classical phase space to a distinct number of microstates. An elementary volume of classical phase space dp dq = dp_1 ⋯ dp_{3N} dq_1 ⋯ dq_{3N} is assumed to correspond to dp dq/(N! h^{3N}) distinguishable microstates. The partition function becomes:

    Q = (1/(N! h^{3N})) ∫ e^{−βH(p,q)} dp dq,   (2.3.30)

and mean values of a physical property A are expressed as:

    ⟨A⟩ = ∫ A(p,q) e^{−βH(p,q)} dp dq / ∫ e^{−βH(p,q)} dp dq.   (2.3.31)

2.4 Equipartition

The study of molecular systems often makes use of the equipartition theorem, which describes the correlation structure of the variables of a Hamiltonian system in the canonical ensemble. Recall that the classical Hamiltonian of a system is a function of 3N independent momentum and 3N position coordinates. We denote these coordinates by x_i and seek to evaluate the ensemble average:

    ⟨x_i ∂H/∂x_j⟩ = ∫ x_i (∂H/∂x_j) e^{−βH} dx / ∫ e^{−βH} dx,   (2.4.32)

where the integration is over all possible values of the 6N coordinates. The Hamiltonian depends on the internal coordinates, although the dependence is not explicitly stated in Equation (2.4.32).

Using integration by parts in the numerator to carry out the integration over the x_j coordinate produces:

    ⟨x_i ∂H/∂x_j⟩ = ∫ { −(1/β) [x_i e^{−βH}]_{x_j = x_j^−}^{x_j = x_j^+} + (1/β) ∫ (∂x_i/∂x_j) e^{−βH} dx_j } dx^{(j)} / ∫ e^{−βH} dx,   (2.4.33)

where the integration over dx^{(j)} indicates integration over all coordinates excluding x_j. The notation x_j^− and x_j^+ indicates the extreme values accessible to the coordinate x_j. Thus for a momentum coordinate these extreme values would be ±∞, while for a position coordinate the extreme values would come from the boundaries of the container. In either case, the first term of the numerator in Equation (2.4.33) vanishes because the Hamiltonian is expected to become infinite at the extreme values of the coordinates.

Equation (2.4.33) can be further simplified by noting that since the coordinates are independent, ∂x_i/∂x_j = δ_ij, where δ_ij is the usual Kronecker delta function. [δ_ij = 1 for i = j; δ_ij = 0 for i ≠ j.] After simplification we are left with

    ⟨x_i ∂H/∂x_j⟩ = k_B T δ_ij,   (2.4.34)

which is the general form of the equipartition theorem for classical systems. It should be noted that this theorem is only valid when all coordinates of the system can be freely and independently excited, which may not always be the case for certain systems at low temperatures. So we should keep in mind that the equipartition theorem is rigorously true only in the limit of high temperature.

Equipartition tells us that for any coordinate x_i, ⟨x_i ∂H/∂x_i⟩ = k_B T. Applying this theorem to a momentum coordinate p_i, we find,

    ⟨p_i ∂H/∂p_i⟩ = ⟨p_i q̇_i⟩ = k_B T.   (2.4.35)

[Remember the basic formulation of Hamiltonian mechanics: q̇_i = ∂H/∂p_i, ṗ_i = −∂H/∂q_i.] Similarly,

    ⟨q_i ṗ_i⟩ = −k_B T.   (2.4.36)

From Equation (2.4.35), we see that the average kinetic energy associated with the i-th coordinate is ⟨p_i²/2m⟩ = k_B T/2. For a three-dimensional system, the average kinetic energy of each particle is specified by 3 k_B T/2. If the potential energy of the Hamiltonian is a quadratic function of the coordinates, then each degree of freedom will contribute k_B T/2, on average, to the internal energy of the system.
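Equipartition is easy to verify by sampling (a sketch in arbitrary units with k_B T = m = 1, my choice of units): under the canonical distribution a momentum coordinate is Gaussian with variance m k_B T, so the sampled average of p²/2m should approach k_B T/2.

```python
import random, math

random.seed(0)
kT, m, n = 1.0, 1.0, 200_000
# p is Boltzmann-distributed: P(p) proportional to exp(-p^2/(2 m kT)), i.e. Gaussian
samples = [random.gauss(0.0, math.sqrt(m * kT)) for _ in range(n)]
mean_ke = sum(p * p / (2 * m) for p in samples) / n
print(mean_ke)  # should be close to kT/2 = 0.5
```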

2.5 Example Problem: Harmonic Oscillators and Blackbody Radiation

A classic problem in statistical mechanics is that of blackbody radiation: what is the equilibrium energy spectrum associated with a cavity of a given volume and temperature?

2.5.1 Classical Oscillator

The vibrational modes of a simple material can be approximated by modeling the material as a collection of simple harmonic oscillators, with Hamiltonian (for the case of classical mechanics):

    H = Σ_{i=1}^{N} [ p_i²/(2m) + m ω₀² q_i²/2 ],   (2.5.37)

where each of the N identical oscillators vibrates with one degree of freedom. The natural frequency of the oscillators is denoted by ω₀. The partition function for such a system is expressed as:

    Q = (1/(N! h^N)) ∫ exp{ −β Σ_{i=1}^{N} [ p_i²/(2m) + m ω₀² q_i²/2 ] } dp dq,   (2.5.38)

which is a product of N single-particle partition functions:

    Q = (1/N!) [ (1/h) ∫∫ exp{ −β [ p²/(2m) + m ω₀² q²/2 ] } dp dq ]^N.   (2.5.39)

Using the identity for Gaussian integrals, ∫_{−∞}^{∞} e^{−a x²} dx = (π/a)^{1/2}, Equation (2.5.39) is reduced to

    Q = (1/N!) [ (1/h) (2πm/β)^{1/2} (2π/(β m ω₀²))^{1/2} ]^N,   (2.5.40)

or

    Q = (1/N!) [ k_B T/(ħ ω₀) ]^N.   (2.5.41)

Remember that the factor 1/N! corrects for the fact that the particles in the system are indistinguishable. If the particles in the system are distinguishable, then the partition function is given by:

    Q = [ k_B T/(ħ ω₀) ]^N,   (2.5.42)

which is the single-particle partition function raised to the N-th power.
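The single-oscillator factor in Equation (2.5.41) can be checked numerically (a sketch in units where ħ = m = ω₀ = k_B T = 1 and h = 2πħ; these unit choices are mine): a midpoint quadrature of (1/h)∫∫ e^{−βH} dp dq should reproduce k_B T/(ħω₀) = 1.

```python
import math

hbar, m, w0, kT = 1.0, 1.0, 1.0, 1.0
beta, h = 1.0 / kT, 2 * math.pi * hbar

def gauss_integral(a, L=10.0, n=20_000):
    """Midpoint rule for the integral of exp(-a x^2) over [-L, L]."""
    dx = 2 * L / n
    return sum(math.exp(-a * (-L + (i + 0.5) * dx) ** 2) for i in range(n)) * dx

# H = p^2/2m + m w0^2 q^2/2 separates, so the phase-space integral factorizes
Q1 = gauss_integral(beta / (2 * m)) * gauss_integral(beta * m * w0**2 / 2) / h
print(Q1, kT / (hbar * w0))  # numerical value vs. k_B T / (hbar * w0)
```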

2.5.2 Quantum Oscillator

The one-dimensional Schrödinger wave equation for a particle in a harmonic potential is:

    −(ħ²/2m) d²ψ/dx² + (m ω₀² x²/2) ψ(x) = E ψ(x),   (2.5.43)

where the constant ħ is equal to h/2π, ω₀ is the angular frequency associated with the classical oscillator, and E is the energy eigenvalue of the Schrödinger operator. This equation has, for quantum numbers n = 0, 1, 2, …, energy values of E_n = (n + 1/2) ħω₀ = ħω₀/2, 3ħω₀/2, 5ħω₀/2, …. The so-called Planck oscillator excludes the n = 0 eigenvalue. [For a complete analysis and associated wave functions, see any introductory quantum physics text, such as French and Taylor [2].]

Thus the single-particle partition function is given by (for the Schrödinger oscillator):

    Q₁ = Σ_{n=0}^{∞} e^{−β(n + 1/2)ħω₀},   (2.5.44)

which can be simplified:

    Q₁ = e^{−βħω₀/2} / (1 − e^{−βħω₀}).   (2.5.45)

For N distinguishable oscillators the partition function becomes

    Q = Q₁^N = e^{−Nβħω₀/2} / (1 − e^{−βħω₀})^N.   (2.5.46)

From our thermodynamic analysis, we calculate the internal energy of the N-particle system as

    U = −∂ ln Q/∂β = N [ ħω₀/2 + ħω₀/(e^{βħω₀} − 1) ].   (2.5.47)

The Planck analysis of this system (excluding the zero-point-energy n = 0 eigenvalue) results in a mean energy per oscillator of:

    ⟨ε⟩ = U/N = ħω₀/(e^{βħω₀} − 1).   (2.5.48)
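A quick numerical comparison of the Planck and Schrödinger mean energies per oscillator (a sketch with ħω₀ = 1, my choice of units): at high temperature both approach the classical equipartition value k_B T, while at low temperature the Planck energy vanishes and the Schrödinger energy approaches the zero-point value ħω₀/2.

```python
import math

hw = 1.0  # hbar * w0 in arbitrary units

def planck(kT):
    """Mean energy per oscillator, Equation (2.5.48)."""
    return hw / (math.exp(hw / kT) - 1.0)

def schrodinger(kT):
    """Mean energy per oscillator from Equation (2.5.47)."""
    return hw / 2 + planck(kT)

for kT in (0.1, 1.0, 10.0, 100.0):
    print(kT, planck(kT), schrodinger(kT))  # classical value would be kT
```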

2.5.3 Blackbody Radiation

Consider a large box or cavity with side lengths L_x, L_y, and L_z, in which radiation is reflected off the six internal walls. [It is assumed that radiation is absorbed and emitted by the container, resulting in thermal equilibrium of the photons.] In this cavity, a given frequency ν corresponds to wavenumber k = ν/c, where c is the speed of light and k is wavenumber measured in units of inverse length. Wavenumbers obtainable in the rectangular cavity are specified by the Cartesian components k_x = n_x/(2L_x), k_y = n_y/(2L_y), and k_z = n_z/(2L_z), where n_x, n_y, and n_z are integers and k² = k_x² + k_y² + k_z². Angular frequency, expressed in terms of the integers n_x, n_y, and n_z, is:

    ω₀ = 2πc [ (n_x/2L_x)² + (n_y/2L_y)² + (n_z/2L_z)² ]^{1/2}.   (2.5.49)

The total number of modes corresponding to a given frequency range can be calculated from the integral (in the continuous limit):

    ∫∫∫ dn_x dn_y dn_z.   (2.5.50)

Changing integration variables to f_x = n_x/(2L_x), f_y = n_y/(2L_y), f_z = n_z/(2L_z) yields

    8 L_x L_y L_z ∫∫∫ df_x df_y df_z.   (2.5.51)

Evaluating this integral over the positive octant of the sphere of radius ω/(2πc) gives the number of modes with frequency less than ω:

    8 L_x L_y L_z (1/8)(4π/3)(ω/2πc)³ = V ω³/(6π²c³),   (2.5.52)

and the number with frequencies between ω and ω + dω is

    (V ω²/(2π²c³)) dω,   (2.5.53)

where V = L_x L_y L_z is the volume of the cavity. Multiplying by a factor of 2, for the two possible polarizations of a given mode, we obtain:

    (V ω²/(π²c³)) dω   (2.5.54)

for the number of obtainable states for a photon of frequency between ω and ω + dω. Multiplying by the Planck expression for the mean energy per oscillator, we get

    dU = (V ω²/(π²c³)) · ħω/(e^{βħω} − 1) dω,   (2.5.55)

the radiation energy (sum total of the energy of the photons) in the frequency range.

2.6 Example Application: Poisson-Boltzmann Theory

As an example application of canonical ensemble theory to biomolecular systems, we next consider the distribution of ions around a solvated charged macromolecule. If the electric field E can be expressed as the gradient of a potential, E = −∇φ, then Gauss' law can be expressed

    ∇·[ε(r)∇φ(r)] = −ρ(r),   (2.6.56)

where ε(r) is the position-dependent permittivity, and ρ(r) is the charge density. This electrostatic approximation is valid if the length scale of the system is much smaller than the wavelengths of the electromagnetic radiation.

We can split the charge density ρ(r) into two contributions:

    ρ(r) = ρ_f(r) + ρ_s(r),   (2.6.57)

where ρ_f(r) is the charge density associated with the ionized residues on the macromolecule, and ρ_s(r) is the charge density of the salt ions surrounding the molecule. For a mono-monovalent salt the mobile ions, distributed according to Boltzmann statistics (thermal equilibrium canonical distribution), have a mean-field charge density of

    ρ_s(r) = e N_A C_s [ e^{−eφ(r)/k_B T} − e^{eφ(r)/k_B T} ],   (2.6.58)

where C_s is the bulk concentration of the salt, N_A is Avogadro's number, and e is the elementary charge. This distribution assumes that ions interact with one another only through the electrostatic field, and thus is strictly valid only in the limit of dilute solutions.

The two terms on the right-hand side of Equation (2.6.58) correspond to concentrations of positive and negative valence ions. Substitution of Equation (2.6.58) into Gauss' law leads to the Poisson-Boltzmann equation:

    ∇·[ε(r)∇φ(r)] − 2 e N_A C_s sinh[eφ(r)/k_B T] = −ρ_f(r),   (2.6.59)

a nonlinear partial differential equation for the electrostatic potential surrounding a macromolecule. Once the electrostatic potential is calculated, the ion concentration field is straightforwardly provided by Equation (2.6.58).
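For small potentials, linearizing Equation (2.6.59) (sinh u ≈ u) gives exponential screening with the Debye length 1/κ, where κ² = 2 e² N_A C_s/(ε k_B T). A sketch of the arithmetic (the 0.15 M concentration, 298 K, and ε = 80 ε₀ are illustrative choices of mine, not values from the notes):

```python
import math

e    = 1.602176634e-19   # elementary charge, C
NA   = 6.02214076e23     # Avogadro's number, 1/mol
kB   = 1.380649e-23      # Boltzmann's constant, J/K
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

def debye_length(C_molar, T=298.0, eps_r=80.0):
    """Debye length 1/kappa for a mono-monovalent salt; C_molar in mol/L."""
    C = C_molar * 1000.0  # convert to mol/m^3
    kappa2 = 2 * e**2 * NA * C / (eps_r * eps0 * kB * T)
    return 1.0 / math.sqrt(kappa2)

print(debye_length(0.15))  # roughly 0.8 nm at physiological ionic strength
```

Note the 1/√C scaling: diluting the salt a hundredfold lengthens the screening distance tenfold.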

2.7 Brief Introduction to the Grand Canonical Ensemble

Grand canonical ensemble theory is the statistical treatment of a system which exchanges not only energy, but also particles, with its environment in thermal equilibrium. Derivation of the basic probability distribution for the grand canonical ensemble is similar to that of the canonical ensemble, except that both E and N are treated as statistically varying quantities. The resulting probability distribution is of the form:

    P_{N,i} = e^{βμN} e^{−βE_{N,i}} / Ξ(V, T, μ),   (2.7.60)

where each state is specified by a number of particles N and an energy E_{N,i}.

The grand partition function Ξ is defined by summation over all N and i states:

    Ξ(V, T, μ) = Σ_N Σ_i e^{βμN} e^{−βE_{N,i}},   (2.7.61)

which is often written in a form like:

    Ξ(V, T, μ) = Σ_N z^N Σ_i e^{−βE_{N,i}},   (2.7.62)

where z = e^{βμ} is called the fugacity (the tendency to be unstable or fly away, from the Latin fugere, meaning to flee, according to the Oxford English Dictionary [5]).
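As a minimal illustration of Eqs. (2.7.60)-(2.7.62) (my own sketch; the single binding site with energy −ε₀ is invented for the example), consider a site that is either empty (N = 0, E = 0) or occupied (N = 1, E = −ε₀). The grand partition function has just two terms, and the mean occupancy follows from ⟨N⟩ = z ∂ln Ξ/∂z:

```python
import math

def occupancy(mu, eps0, kT):
    """Mean occupancy of a single binding site in the grand canonical ensemble."""
    beta = 1.0 / kT
    z = math.exp(beta * mu)               # fugacity z = e^{beta mu}
    Xi = 1.0 + z * math.exp(beta * eps0)  # Xi = sum over N=0 (E=0) and N=1 (E=-eps0)
    return z * math.exp(beta * eps0) / Xi # equals z * dlnXi/dz

print(occupancy(mu=0.0, eps0=0.0, kT=1.0))  # symmetric case gives 1/2
```

Raising the chemical potential μ of the particle reservoir drives the occupancy toward 1, a Langmuir-type saturation curve.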

Problems

1. (Derivation of canonical ensemble) Show that Equation (2.1.8) is uniquely solved by Equation (2.1.9).

2. (Simple harmonic oscillators and blackbody radiation) Compare the classical oscillator with the Schrödinger and Planck oscillators. (a) What is the energy per oscillator in the canonical ensemble for the classical case? Which oscillators (if any) obey equipartition? For those that do not, is there a limiting case in which equipartition is valid? [Hints: plot U/N as a function of temperature. Perhaps Taylor expansions of these expressions will be helpful.] (b) From Equation (2.5.55) obtain a nondimensional expression for energy per unit frequency spectrum, and plot the nondimensional energy distribution of blackbody radiation versus nondimensional frequency βħω. At what frequency does the spectral energy distribution obtain a maximum?

3. (Electrical double layer) Consider a one-dimensional model of a metal electrode/electrolyte solution interface. The potential in the solution is governed by the Poisson-Boltzmann equation:

    d²ψ/dx² = [2 e² N_A C_s/(ε k_B T)] sinh ψ.

(a) Show that the above equation is the Poisson-Boltzmann equation in terms of the dimensionless potential ψ = eφ/(k_B T). Show that this equation can be linearized as

    d²ψ/dx² = κ² ψ.

(b) Evaluate the Debye length (1/κ) for the cases of 0.1 M and 0.0001 M solutions of NaCl.

(c) Using the boundary condition

    dψ/dx |_{x=0} = −eσ/(ε k_B T)

(where σ is the charge density on the surface of the electrode), find ψ(x). Plot the concentrations of Na⁺ and Cl⁻ as functions of x (assume a positively charged electrode).

4. (Donnan equilibrium) Consider a gel which carries a certain concentration C_m(r) of immobile charges and is immersed in an aqueous solution. The bulk solution carries mono-monovalent mobile ions of concentrations C_+(r) and C_−(r). Away from the gel, the concentration of the salt ions achieves the bulk concentration, denoted C_∞. What is the difference in electrical potential between the bulk solution and the interior of the gel? [Hint: assume that inside the gel, the overall concentration of negative salt ions balances the immobile gel charge concentration.]

Chapter 3

Brownian Motion, Fokker-Planck Equations, and the Fluctuation-Dissipation Theorem

Armed with our understanding of the basic principles of microscopic thermodynamics, we are finally ready to examine the motions of microscopic particles. In particular, we will study these motions from the perspective of stochastic equations, in which random processes are used to approximate thermal interactions between the particles and their environment.

3.1 One-Dimensional Langevin Equation and Fluctuation-Dissipation Theorem

Consider the following Langevin equation for the one-dimensional motion of a particle:

    m v̇(t) + ζ v(t) = f(y(t)) + f_r(t),   (3.1.1)

where m is the mass of the particle, ζ is the coefficient of friction, f(y) is the systematic (deterministic) force acting on the particle, and f_r is a random process used to induce thermal fluctuations in the energy of the particle. Equation (3.1.1) can be thought of as Newton's second law with three forces acting on the particle: viscous damping, random thermal noise, and a systematic force.

Equation (3.1.1) can be factored

    d/dt [ v(t) e^{ζt/m} ] = (1/m) [ f(y(t)) + f_r(t) ] e^{ζt/m},   (3.1.2)

and has the general solution:

    v(t) = v(0) e^{−ζt/m} + (1/m) ∫₀ᵗ [ f(y(s)) + f_r(s) ] e^{−ζ(t−s)/m} ds.   (3.1.3)

Using angled brackets ⟨·⟩ to denote averaging over trajectories, we can calculate the covariance in particle velocity:

    ⟨v(t₁)v(t₂)⟩ = v²(0) e^{−ζ(t₁+t₂)/m}
        + (v(0)/m) e^{−ζt₂/m} ∫₀^{t₁} ⟨f(y(s₁)) + f_r(s₁)⟩ e^{−ζ(t₁−s₁)/m} ds₁
        + (v(0)/m) e^{−ζt₁/m} ∫₀^{t₂} ⟨f(y(s₂)) + f_r(s₂)⟩ e^{−ζ(t₂−s₂)/m} ds₂
        + (1/m²) ∫₀^{t₁} ∫₀^{t₂} ⟨f_r(s₁) f_r(s₂)⟩ e^{−ζ(t₁−s₁)/m} e^{−ζ(t₂−s₂)/m} ds₁ ds₂.   (3.1.4)

Assuming that the deterministic forces have zero averages, Equation (3.1.4) can be further simplified:

    ⟨v(t₁)v(t₂)⟩ = v²(0) e^{−ζ(t₁+t₂)/m} + (1/m²) ∫₀^{t₁} ∫₀^{t₂} ⟨f_r(s₁) f_r(s₂)⟩ e^{−ζ(t₁+t₂−s₁−s₂)/m} ds₁ ds₂.   (3.1.5)

And if we assume that the random force is a white noise process, then its correlation can be described by:

    ⟨f_r(s₁) f_r(s₂)⟩ = Γ δ(s₁ − s₂).   (3.1.6)

Integrating Equation (3.1.5) over s₁ we obtain:

    ⟨v(t₁)v(t₂)⟩ = v²(0) e^{−ζ(t₁+t₂)/m} + (Γ/m²) ∫₀^{t₂} θ(t₁ − s₂) e^{−ζ(t₁+t₂−2s₂)/m} ds₂,   (3.1.7)

where θ(t) is the step function defined by

    θ(t) = 1 for t > 0;  θ(t) = 1/2 for t = 0;  θ(t) = 0 for t < 0.   (3.1.8)

Finally, integration of Equation (3.1.7) yields (for t₁ ≥ t₂):

    ⟨v(t₁)v(t₂)⟩ = v²(0) e^{−ζ(t₁+t₂)/m} + (Γ/(2mζ)) [ e^{−ζ(t₁−t₂)/m} − e^{−ζ(t₁+t₂)/m} ].   (3.1.9)

To obtain the mean kinetic energy, we take t₁ = t₂ = t:

    ⟨v²(t)⟩ = v²(0) e^{−2ζt/m} + (Γ/(2mζ)) [ 1 − e^{−2ζt/m} ],   (3.1.10)

which approaches

    ⟨v²(t)⟩ = Γ/(2mζ)   (3.1.11)

at equilibrium. From equipartition we have ⟨v²⟩ = k_B T/m. So

    ⟨f_r(s₁) f_r(s₂)⟩ = 2 ζ k_B T δ(s₁ − s₂),   (3.1.12)

which is a statement of the fluctuation-dissipation theorem. To obtain thermal equilibrium, the strength of the random (thermal) noise must be proportional to the frictional damping constant, as prescribed by Equation (3.1.12).

3.2 Fokker-Planck Equation

Imagine integrating a stochastic differential equation such as Equation (3.1.1) a number of times so that the noisy trajectories converge into a probability density of states. Considering an N-dimensional problem with the vector x(t) representing the state space, we denote the probability distribution as P(x, t) and introduce Ψ(x, t; x′, t + τ) as the probability of transition from state x at time t to x′ at time t + τ.

From the transition probability it follows:

    P(x′, t + τ) = ∫ Ψ(x, t; x′, t + τ) P(x, t) dx.   (3.2.13)

Expanding the transition probability as a power series in the increment x(t + τ) − x(t), we obtain

    Ψ(x, t; x′, t + τ) = ⟨δ(x′ − x(t + τ))⟩
        = δ(x′ − x) − ⟨x_i(t + τ) − x_i(t)⟩ (∂/∂x′_i) δ(x′ − x)
          + (1/2) ⟨[x_i(t + τ) − x_i(t)][x_j(t + τ) − x_j(t)]⟩ (∂²/∂x′_i ∂x′_j) δ(x′ − x) + ⋯,   (3.2.14)

where the convention of summation over the repeated indices i, j is implied. Noting that

    (∂/∂x′_i) δ(x′ − x) = −(∂/∂x_i) δ(x′ − x),   (3.2.15)

we obtain

    Ψ(x, t; x′, t + τ) = δ(x′ − x) + ⟨x_i(t + τ) − x_i(t)⟩ (∂/∂x_i) δ(x′ − x)
        + (1/2) ⟨[x_i(t + τ) − x_i(t)][x_j(t + τ) − x_j(t)]⟩ (∂²/∂x_i ∂x_j) δ(x′ − x) + ⋯,   (3.2.16)

or, substituting into Equation (3.2.13),

    P(x′, t + τ) = ∫ [ δ(x′ − x) + ⟨x_i(t + τ) − x_i(t)⟩ (∂/∂x_i) δ(x′ − x) + ⋯ ] P(x, t) dx.   (3.2.17)

By successively integrating by parts, we can move the derivatives off of the delta functions:

    P(x′, t + τ) = ∫ δ(x′ − x) { P(x, t) − (∂/∂x_i)[⟨x_i(t + τ) − x_i(t)⟩ P(x, t)]
        + (1/2)(∂²/∂x_i ∂x_j)[⟨(x_i(t + τ) − x_i(t))(x_j(t + τ) − x_j(t))⟩ P(x, t)] + ⋯ } dx.   (3.2.18)

Equation (3.2.18) integrates to

    P(x, t + τ) = P(x, t) − (∂/∂x_i)[⟨x_i(t + τ) − x_i(t)⟩ P]
        + (1/2)(∂²/∂x_i ∂x_j)[⟨(x_i(t + τ) − x_i(t))(x_j(t + τ) − x_j(t))⟩ P] + ⋯,   (3.2.19)

or, in the limit τ → 0,

    ∂P(x, t)/∂t = lim_{τ→0} (1/τ) { −(∂/∂x_i)[⟨x_i(t + τ) − x_i(t)⟩ P]
        + (1/2)(∂²/∂x_i ∂x_j)[⟨(x_i(t + τ) − x_i(t))(x_j(t + τ) − x_j(t))⟩ P] + ⋯ }.   (3.2.20)

Defining the Kramers-Moyal coefficients [4] as

    D_i^{(1)}(x, t) = lim_{τ→0} (1/τ) ⟨x_i(t + τ) − x_i(t)⟩,
    D_{ij}^{(2)}(x, t) = lim_{τ→0} (1/2τ) ⟨[x_i(t + τ) − x_i(t)][x_j(t + τ) − x_j(t)]⟩,   (3.2.21)

we obtain the Fokker-Planck equation for P(x, t):

    ∂P(x, t)/∂t = [ −(∂/∂x_i) D_i^{(1)}(x, t) + (∂²/∂x_i ∂x_j) D_{ij}^{(2)}(x, t) ] P(x, t).   (3.2.22)

In the following section we evaluate the Kramers-Moyal expansion coefficients for a nonlinear N-dimensional stochastic differential equation.
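The Kramers-Moyal coefficients in Equation (3.2.21) can be estimated directly from short-time increments of a simulated trajectory (a sketch; the linear drift h(x) = −x and constant g = 1 are invented test functions, and the factor √(2τ) reflects noise normalized so that ⟨Γ(t)Γ(t′)⟩ = 2δ(t − t′), as in Section 3.3):

```python
import random, math

random.seed(2)
h = lambda x: -x   # hypothetical drift function
g = 1.0            # hypothetical (constant) noise amplitude
x0, tau, n = 1.0, 1e-3, 200_000

# Repeat one Euler step from x0: dx = h(x0) tau + g sqrt(2 tau) N(0,1)
dx = [h(x0) * tau + g * math.sqrt(2 * tau) * random.gauss(0, 1) for _ in range(n)]

D1 = sum(dx) / (n * tau)                      # drift: should approach h(x0) = -1
D2 = sum(d * d for d in dx) / (n * 2 * tau)   # diffusion: should approach g^2 = 1
print(D1, D2)
```

With constant g the drift correction term g ∂g/∂x of the next section vanishes, so D1 reduces to h(x0) here.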

3.3 Brownian Motion of Several Particles



. ¡ . . .

n n n n

§ § § §

1 © 1F  1” 

Consider the general nonlinear Langevin equation for several variables 1 :

 

R F¡ B. . ! £ S. . .

n n n n n

§ § §

1 1 ¤ 1 1ÄÑ 1

½ (3.3.23)

M.

n 1

where Ñ are uncorrelated white noise processes distributed according to:

$ $

Í. . %u¡ > . G . %u¡

n n n n n

(

Ž

Ñ ©-1‘Ñ  1 ¸ ¸ © D1f 1 Ñ (3.3.24)

In Equation (3.3.23) we use the Einstein convention of summation over repeated indices. Thus the



> .

n

§ 1 matrix ¤ describes the covariance structure of the random forces acting on .

Equation (3.3.23)has the general solution:

Ÿ



'

 

¡  B. ‘. !– G ‘. . ! > . . .

n n n n n n n m n

™ ™ ™ ™ ™ ™

§ § § §

1 ½ 1 1D 1 ¤ 1 1ÄÑ 1’

¶ (3.3.25) '

Chapter 3 – Thermal Motions 31

 > ¤

We can expand this integral by expanding the functions ½ and

I

  

. . ! B. . ¡ B. . !  . G .

n n n n n n n n

™ ™ ™ ™ ™

§ § § § §

I

½ 1D 1  ½ 1 1 ½ 1 1 © 1 © 1’

§

“

©

I

  

> . . ¡ £ M. . !  . G . > . . !

n n n n n n n n

™ ™ ™ ™ ™

§ § § § §

I

1 1 ¤ 1 1 © 1 © 1B’ ¤ 1D 1 

¤ (3.3.26)

§

“ ©

and inserting these expansions into Equation (3.3.25):

x_{i}(t+\Delta t) - x_{i}(t) = \int_{t}^{t+\Delta t} h_{i}(\{x(t)\},t')\,dt' + \int_{t}^{t+\Delta t} \frac{\partial h_{i}}{\partial x_{k}}\left[x_{k}(t')-x_{k}(t)\right] dt'

\qquad + \int_{t}^{t+\Delta t} g_{ij}(\{x(t)\},t')\,\Gamma_{j}(t')\,dt' + \int_{t}^{t+\Delta t} \frac{\partial g_{ij}}{\partial x_{k}}\left[x_{k}(t')-x_{k}(t)\right]\Gamma_{j}(t')\,dt' + \cdots   (3.3.27)

We can expand the $\left[x_{k}(t')-x_{k}(t)\right]$ terms in the above equation recursively using Equation (3.3.27):

x_{k}(t') - x_{k}(t) = \int_{t}^{t'} h_{k}(\{x(t)\},t'')\,dt'' + \int_{t}^{t'} g_{kl}(\{x(t)\},t'')\,\Gamma_{l}(t'')\,dt'' + \cdots   (3.3.28)

To compute $\langle x_{i}(t+\Delta t)-x_{i}(t) \rangle$ in the limit $\Delta t \to 0$, we insert Equation (3.3.28) into Equation (3.3.27), average over trajectories, and retain terms of first order:

\langle x_{i}(t+\Delta t) - x_{i}(t) \rangle = \Delta t\, h_{i}(\{x(t)\},t) + \frac{\partial g_{ij}}{\partial x_{k}} \int_{t}^{t+\Delta t}\!\!\int_{t}^{t'} g_{kl}(\{x(t)\},t'')\, \langle \Gamma_{l}(t'')\,\Gamma_{j}(t') \rangle\, dt''\,dt'   (3.3.29)

where terms of order $\Delta t^{2}$ are not shown. To evaluate the integral in Equation (3.3.29), we use the identity:

\int_{t}^{t+\Delta t}\!\!\int_{t}^{t'} g_{kl}(\{x(t)\},t'')\, 2\,\delta_{lj}\,\delta(t''-t')\, dt''\,dt' = \Delta t\, g_{kj}(\{x(t)\},t)   (3.3.30)

and we get:

D^{(1)}_{i}(\{x\},t) = h_{i}(\{x\},t) + g_{kj}(\{x\},t)\,\frac{\partial g_{ij}(\{x\},t)}{\partial x_{k}}   (3.3.31)

for the drift coefficients.

Evaluating $\langle [x_{i}(t+\Delta t)-x_{i}(t)][x_{j}(t+\Delta t)-x_{j}(t)] \rangle$, the only term that survives averaging and the limit $\Delta t \to 0$ is

\langle [x_{i}(t+\Delta t)-x_{i}(t)][x_{j}(t+\Delta t)-x_{j}(t)] \rangle = \int_{t}^{t+\Delta t}\!\!\int_{t}^{t+\Delta t} g_{ik}(\{x(t)\},t')\,g_{jl}(\{x(t)\},t'')\, 2\,\delta_{kl}\,\delta(t'-t'')\, dt'\,dt'' = 2\,\Delta t\, g_{ik}\,g_{jk}   (3.3.32)

and



 

D^{(2)}_{ij}(\{x\},t) = g_{ik}(\{x\},t)\,g_{jk}(\{x\},t)   (3.3.33)

All higher-order coefficients are zero:

D^{(n)}_{i_{1} \cdots i_{n}}(\{x\},t) = \frac{1}{n!} \lim_{\Delta t \to 0} \frac{1}{\Delta t} \left\langle \left[x_{i_{1}}(t+\Delta t)-x_{i_{1}}(t)\right] \cdots \left[x_{i_{n}}(t+\Delta t)-x_{i_{n}}(t)\right] \right\rangle = 0 \quad \text{for } n \ge 3   (3.3.34)

The Fokker-Planck equation corresponding to Equation (3.3.23) is:

\frac{\partial P(\{x\},t)}{\partial t} = -\frac{\partial}{\partial x_{i}}\left[ h_{i}(\{x\},t) + g_{kj}(\{x\},t)\,\frac{\partial g_{ij}(\{x\},t)}{\partial x_{k}} \right] P(\{x\},t) + \frac{\partial^{2}}{\partial x_{i}\,\partial x_{j}}\left[ g_{ik}(\{x\},t)\,g_{jk}(\{x\},t)\,P(\{x\},t) \right]   (3.3.35)

3.4 Fluctuation-Dissipation and Brownian Dynamics

To examine how the fluctuation-dissipation theorem arises for the general nonlinear Langevin equation, we first examine the simple one-dimensional problem described by:

\dot{x} = h(x) + g\,\Gamma(t), \qquad \langle \Gamma(t)\,\Gamma(t') \rangle = 2\,\delta(t-t')   (3.4.36)

If the systematic term is proportional to the gradient of a potential, $h(x) = -\frac{1}{\zeta}\frac{dU}{dx}$, then the Fokker-Planck equation is expressed as:

\frac{\partial P(x,t)}{\partial t} = \frac{1}{\zeta}\frac{\partial}{\partial x}\left[ \frac{dU}{dx}\,P(x,t) \right] + g^{2}\,\frac{\partial^{2} P(x,t)}{\partial x^{2}}   (3.4.37)



If we further assume that $P(x) \propto e^{-U(x)/k_{B}T}$ at equilibrium, then

-\frac{1}{\zeta}\frac{dU}{dx}\,P_{eq} = g^{2}\,\frac{dP_{eq}}{dx} = -\frac{g^{2}}{k_{B}T}\frac{dU}{dx}\,P_{eq}   (3.4.38)

It is straightforward to show that this condition is satisfied if $g^{2} = k_{B}T/\zeta$, which is the fluctuation-dissipation theorem for this one-dimensional Brownian motion.
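As a numerical illustration (added here, not part of the original text), take the harmonic potential $U(x) = \frac{1}{2}kx^{2}$ and simulate $\dot{x} = -(1/\zeta)\,dU/dx + g\,\Gamma(t)$ with the fluctuation-dissipation choice $g^{2} = k_{B}T/\zeta$. The stationary variance should then obey equipartition, $\langle x^{2} \rangle = k_{B}T/k$. A Python sketch assuming NumPy, with arbitrarily chosen parameter values:

```python
import numpy as np

rng = np.random.default_rng(2)

kBT, zeta, k = 1.0, 1.0, 2.0       # units chosen so that kB*T = 1
g = np.sqrt(kBT / zeta)            # fluctuation-dissipation: g^2 = kBT/zeta
dt, steps = 1e-2, 400_000

# Euler-Maruyama for xdot = -(k*x)/zeta + g*Gamma(t), with
# <Gamma(t)Gamma(t')> = 2 delta(t-t')  =>  step noise variance 2*dt.
x = 0.0
xs = np.empty(steps)
for n in range(steps):
    x += -(k * x / zeta) * dt + g * rng.normal(0.0, np.sqrt(2.0 * dt))
    xs[n] = x

var = np.mean(xs[steps // 10:] ** 2)   # discard the initial transient
print(var)                             # should be near kBT/k = 0.5
```

If $g$ is chosen to violate $g^{2} = k_{B}T/\zeta$, the simulated variance converges to $g^{2}\zeta/k$ instead, i.e. the system equilibrates at the wrong temperature.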

This relationship generalizes for the following $N$-dimensional case:

\dot{x}_{i} = h_{i}(\{x\}) + g_{ij}\,\Gamma_{j}(t)   (3.4.39)

\langle \Gamma_{i}(t)\,\Gamma_{j}(t') \rangle = 2\,\delta_{ij}\,\delta(t-t')   (3.4.40)

h_{i}(\{x\}) = -\zeta^{-1}_{ij}\,\frac{\partial U(\{x\})}{\partial x_{j}}   (3.4.41)

to

g_{ik}\,g_{jk} = k_{B}T\,\zeta^{-1}_{ij}   (3.4.42)



In the above equations, $U(\{x\})$ is the potential energy function, and $\zeta$ is the frictional interaction matrix, which determines the hydrodynamic interactions between the particles in the system. Thus the covariance structure of the random forcing is proportional to the hydrodynamic interaction matrix. Given a diffusion matrix $D = k_{B}T\,\zeta^{-1}$, generation of the random force vector requires the calculation of $g$, the factorization of $D$. In fact, for Brownian dynamics simulations, this factorization represents the computational bottleneck that demands the majority of CPU resources.
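To make the factorization step concrete (an illustrative addition, using a made-up 3x3 matrix), the standard approach is a Cholesky decomposition $D = g\,g^{T}$; applying the lower-triangular factor $g$ to a vector of independent unit Gaussians then produces random displacements with the required covariance. A Python sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical diffusion matrix for three coupled coordinates
# (symmetric and positive definite, as a hydrodynamic D must be).
D = np.array([[1.0, 0.3, 0.1],
              [0.3, 1.0, 0.3],
              [0.1, 0.3, 1.0]])

# Cholesky factorization D = g g^T.  This O(N^3) operation is the
# bottleneck referred to in the text for large Brownian dynamics systems.
g = np.linalg.cholesky(D)

# Correlated random forces for one step dt: apply g to independent
# Gaussians of variance 2*dt (noise convention of Eq. (3.3.24)).
dt, samples = 1e-3, 200_000
xi = rng.normal(0.0, np.sqrt(2.0 * dt), size=(3, samples))
dx = g @ xi

# The empirical covariance of the displacements recovers 2*D*dt.
C = dx @ dx.T / (samples * 2.0 * dt)
print(np.round(C, 2))
```

Any factor satisfying $g\,g^{T} = D$ works equally well; Cholesky is simply the cheapest exact choice for a dense positive-definite $D$.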

Problems

1. (Brownian Motion) Consider the coupled Langevin equations:

m\,\ddot{x} = -\nabla U - \zeta\,\dot{x} + \sigma\,\Gamma(t)

where $m$ is a diagonal matrix of particle masses, $\zeta$ is the non-diagonal frictional interaction matrix, and the noise term $\Gamma$ is correlated according to:

\langle \Gamma(t)\,\Gamma^{T}(t') \rangle = 2\,I\,\delta(t-t')

where $I$ is the identity matrix. The fluctuation-dissipation theorem tells us that:

\sigma\,\sigma^{T} = k_{B}T\,\zeta

(a) Brownian Motion If we ignore inertia in the above system we get

\dot{x} = -\frac{1}{k_{B}T}\,D\,\nabla U + b\,\Gamma(t)

where $D = k_{B}T\,\zeta^{-1}$ is the diffusion matrix. Show that $b\,b^{T} = D$. Given a diffusion matrix $D$, how would you calculate its factor $b$?

(b) Brownian Dynamics Algorithm Under what circumstances do you think the inertial system will reduce to the over-damped (non-inertial) system? Assuming that the random and systematic forces remain constant over a time step of length $\Delta t$, find an expression for $x(t+\Delta t)$ in terms of $x(t)$. [Hint: The equations can be decoupled by considering the eigen decomposition of $m^{-1}\zeta$. Proceed by finding a matrix-vector equivalent to Equation (3.1.3).] Show that under a certain highly-damped limit, this expression reduces to the Brownian equation

\Delta x(t) = \frac{D\,\Delta t}{k_{B}T}\left[ \text{systematic force} + \text{random force} \right]

2. (Numerical Methods) Devise a numerical propagation scheme for the Brownian equation.

Bibliography

[1] R. P. Feynman. Statistical Mechanics. A Set of Lectures. Addison-Wesley, Reading, MA, 1998.

[2] A. P. French and E. F. Taylor. An Introduction to Quantum Physics. W. W. Norton and Co., New York, NY, 1978.

[3] R. K. Pathria. Statistical Mechanics. Butterworth-Heinemann, Oxford, UK, second edition, 1996.

[4] H. Risken. The Fokker-Planck Equation: Methods of Solution and Applications. Springer-Verlag, Berlin, second edition, 1989.

[5] J. A. Simpson and E. S. C. Weiner, editors. Oxford English Dictionary Online, http://oed.com. Oxford University Press, Oxford, UK, second edition, 2001.
