Based on CHEM_ENG 404 at Northwestern University

TABLE OF CONTENTS

1 A Brief Review of Classical Thermodynamics
1.1 Prelude
1.2 Terminology
1.3 Zeroth Law of Thermodynamics
1.4 First Law of Thermodynamics
1.4.1 Definition of Internal Energy
1.4.2 Heat and Work
1.4.3 Definition of Enthalpy
1.4.4 Definition of Heat Capacity
1.4.5 Ideal Gas Assumptions
1.5 Calculating First-Law Quantities in Closed Systems
1.5.1 Starting Point
1.5.2 Reversible, Isobaric Process
1.5.3 Reversible, Isochoric Process
1.5.4 Reversible, Isothermal Process
1.5.5 Reversible, Adiabatic Process
1.5.6 Irreversible, Adiabatic Expansion into a Vacuum
1.6 The Carnot Cycle
1.7 Second Law of Thermodynamics
1.8 Calculating Second-Law Quantities in Closed Systems
1.8.1 Reversible, Adiabatic Processes
1.8.2 Reversible, Isothermal Processes
1.8.3 Reversible, Isobaric Processes
1.8.4 Reversible, Isochoric Processes
1.8.5 Reversible Phase Change at Constant 푇 and 푃
1.8.6 Irreversible Processes for Ideal Gases
1.8.7 Entropy Change of Mixing
1.8.8 Expansion into a Vacuum
1.9 Derived Thermodynamic Quantities
1.9.1 Exact Differentials
1.9.2 Mathematical Relations
1.9.3 Homogeneous Function Theorem
1.10 Thermodynamic Relations
2 Fundamentals of Statistical Mechanics
2.1 Brief Review of Probability
2.2 Brief Overview of Averages: an example
2.3 Microscopic Definition of Entropy
2.4 Modeling Spins on a Lattice Site
2.5 Modeling a Rubber Band
3 The Canonical Ensemble
3.1 The Partition Function
3.2 Thermodynamics from 푍
3.3 Thermodynamics from 푍: an example
3.4 Translational Partition Function
3.5 Rotational Partition Function
3.6 Vibrational Partition Function
3.7 Electronic Partition Function
3.8 Factorizing the Partition Function
3.9 Fluctuations in the Canonical Ensemble
4 Hamiltonian Mechanics
4.1 Formalism
4.2 Practice Problems
4.2.1 Classical Spring Model
4.2.2 Classical Ideal Gas
5 Chemical Equilibrium
5.1 Conditions of Chemical Equilibrium
5.2 Calculating Equilibrium Constants
5.3 ......
5.3.1 Definition
5.3.2 Connection to Thermodynamics
6 Thermodynamics of Biopolymers
6.1 Mean Size of Random Walk Macromolecule
6.2 Random Walk: Diffusion
6.3 Analogy to Polymers
6.4 Entropic Spring Model
6.5 Entropic Restoring Force
6.6 Force as a Function of Extension
6.7 Freely Jointed Chain
6.8 Worm-Like Chain
6.9 1D Harmonic Chain


1 A BRIEF REVIEW OF CLASSICAL THERMODYNAMICS 1.1 PRELUDE This section on classical thermodynamics is not meant to be a thorough review. It does not cover much of the underlying theory. It simply provides a refresher of content typically covered in a standard undergraduate thermodynamics course and is used just to provide context for the statistical mechanical content discussed in later sections.

1.2 TERMINOLOGY In order to accurately and precisely discuss various aspects of thermodynamics, it is essential to have a well-defined vernacular. As such, a list of some foundational concepts and their definitions is given below. Other terms will be defined as they are introduced.

• Universe – all measured space
• System – the space of interest
• Surroundings – the space outside the system
• Boundary – what separates the system from the surroundings
• Open System – a system that can have both mass and energy flowing across the boundary
• Isolated System – a system that can have neither mass nor energy flowing across the boundary
• Closed System – a system that can have energy but not mass flowing across the boundary
• Extensive Property – a property that depends on the size of the system
• Intensive Property – a property that does not depend on the size of the system
• State – the condition in which one finds a system at any given time (defined by its intensive properties)
• Process – what brings the system from one state to another
• State Function – a quantity that depends only on the current state of a system
• Path Function – a quantity that depends on the path taken
• Adiabatic Process – a process that has no heat transfer (푄 = 0)
• Rigid System – a system that does not allow for mechanical work (푊 = 0)
• Permeable System – a system that allows species transport
• Isothermal Process – a process that has a constant temperature (Δ푇 = 0)
• Isobaric Process – a process that has a constant pressure (Δ푃 = 0)
• Isochoric Process – a process that has a constant volume (Δ푉 = 0)
• Isenthalpic Process – a process that has a constant enthalpy (Δ퐻 = 0)
• Isentropic Process – a process that has a constant entropy (Δ푆 = 0)
• Diathermal Process – a process that allows heat transfer (푄 ≠ 0)
• Moveable System – a system that allows for mechanical work (푊 ≠ 0)
• Impermeable System – a system that does not allow species transport
• Mechanical Equilibrium – no pressure difference between system and surroundings
• Thermal Equilibrium – no temperature difference between system and surroundings
• Chemical Equilibrium – no tendency for a species to change phases or chemically react
• Thermodynamic Equilibrium – a system that is in mechanical, thermal, and chemical equilibrium
• Phase Equilibrium – a system with more than one phase present that is in thermal and mechanical equilibrium between the phases such that each phase has no tendency to change
• Chemical Reaction Equilibrium – a system undergoing chemical reactions with no more tendency to react


1.3 ZEROTH LAW OF THERMODYNAMICS The zeroth law of thermodynamics states: if two systems are separately in thermal equilibrium with a third, then they must also be in thermal equilibrium with each other. This is a seemingly trivial statement but one that must be established for the rest of thermodynamics to hold rigorously. It essentially provides an operational definition of temperature (i.e. the thing that must be the same between the three systems for them to be in thermal equilibrium).

1.4 FIRST LAW OF THERMODYNAMICS 1.4.1 DEFINITION OF INTERNAL ENERGY The first law of thermodynamics states that the total quantity of energy in the universe is constant (in other words, energy cannot be created or destroyed, although it can change forms). That being said, it is not convenient to consider the entire universe when doing a thermodynamic calculation. We can break this statement down into the region we are interested in (i.e. the system) and the rest of the universe (i.e. the surroundings). This allows us to restate the first law of thermodynamics as: the energy change of the system must equal the energy transferred across its boundaries from the surroundings. Energy can be transferred as heat, 푄, or by work, 푊, for a closed system. Written mathematically then, Δ푈 = 푄 + 푊, where Δ푈 is a state function we call the change in internal energy (of the system). In differential form, this is 푑푈 = 훿푄 + 훿푊, where the 훿 operator is used in place of the 푑 operator to denote an inexact differential (i.e. the differential change of a path function).

1.4.2 HEAT AND WORK Heat, 푄, is defined as the amount of energy transferred to a system via a temperature gradient. All other forms of energy transfer are considered work, denoted 푊. Work can be supplied by many means, but the most common in chemical systems is via external pressure over a volume change such that

푊 = − ∫ 푃 푑푉.

This is often referred to as 푃푉 (or mechanical) work. In this context, a positive value of 푊 is defined as the energy transferred from the surroundings to the system. The same sign-convention is chosen for heat.

1.4.3 DEFINITION OF ENTHALPY For mechanical work, we know that 훿푊 = −푃 푑푉, so 푑푈 = 훿푄 − 푃 푑푉. At constant pressure, 푃 푑푉 = 푑(푃푉), so we can then state

푑푈 = 훿푄푃 − 푑(푃푉), where the subscript on heat denotes constant pressure. Solving for 훿푄푃,

훿푄푃 = 푑(푈 + 푃푉).


This 푈 + 푃푉 term will appear frequently in thermodynamics and refers to a quantity called enthalpy, 퐻, defined as 퐻 ≡ 푈 + 푃푉.

This means that at constant 푃, 푑퐻 = 훿푄푃. In addition, at constant 푉, no work is done, so 푑푈 = 훿푄푉. Enthalpy and internal energy therefore each describe how heat changes a system in different conditions. They differ in value because at constant 푃, some heat will go into performing work instead.

1.4.4 DEFINITION OF HEAT CAPACITY The heat capacity, 퐶, is defined as the ratio of heat added to the corresponding temperature rise,

퐶 ≡ 훿푄/푑푇

More often, the heat capacity at a constant set of conditions is defined for a given substance. For instance, since 훿푄 = 푑푈 at constant volume, the constant-volume heat capacity, 퐶푉, is

퐶푉 ≡ (휕푈/휕푇)푉

Similarly, since 훿푄 = 푑퐻 at constant pressure, the constant-pressure heat capacity, 퐶푃, is

퐶푃 ≡ (휕퐻/휕푇)푃

It should be noted that 퐶푃 > 퐶푉 since – for a given 훿푄 – some of that heat must go into work, causing a smaller temperature change.

1.4.5 IDEAL GAS ASSUMPTIONS For an ideal monatomic gas, it can be shown that

푈 = (3/2)푛푅푇

and since 퐻 = 푈 + 푃푉,

퐻 = (5/2)푛푅푇

This then also means that

퐶푉 = (3/2)푛푅

and

퐶푃 = (5/2)푛푅

As we can see from the above expressions, the following useful relationship exists for ideal gases:

퐶푃 − 퐶푉 = 푛푅
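These relationships can be sanity-checked numerically. A minimal sketch (the choice of 푛 = 1 mol and 푇 = 300 K is purely illustrative):

```python
# Sketch: first-law quantities for an ideal monatomic gas.
R = 8.314  # J/(mol K)

def monatomic_ideal_gas(n, T):
    """Return U, H, C_V, C_P for n moles of an ideal monatomic gas at T (K)."""
    U = 1.5 * n * R * T   # U = (3/2) n R T
    H = 2.5 * n * R * T   # H = U + PV = U + nRT = (5/2) n R T
    C_V = 1.5 * n * R
    C_P = 2.5 * n * R
    return U, H, C_V, C_P

U, H, C_V, C_P = monatomic_ideal_gas(n=1.0, T=300.0)
assert abs((C_P - C_V) - 1.0 * R) < 1e-12  # C_P - C_V = nR
```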


1.5 CALCULATING FIRST-LAW QUANTITIES IN CLOSED SYSTEMS 1.5.1 STARTING POINT When calculating first-law quantities in closed systems for reversible processes, it is best to always start with the following three equations, which are always true:

푊 = − ∫ 푃 푑푉

Δ푈 = 푄 + 푊

Δ퐻 = Δ푈 + Δ(푃푉)

If ideal gas conditions can be assumed, then the heat capacities are constant and

퐶푃 − 퐶푉 = 푛푅

Δ퐻 = Δ퐻(푇)

Δ푈 = Δ푈(푇)

1.5.2 REVERSIBLE, ISOBARIC PROCESS Since pressure is constant: 푊 = −푃Δ푉 We then have the following relationships for enthalpy:

푄푃 = Δ퐻

Δ퐻 = ∫ 퐶푃 푑푇

Δ퐻 = Δ푈 + 푃Δ푉

1.5.3 REVERSIBLE, ISOCHORIC PROCESS Since volume is constant: 푊 = 0 We then have the following relationships for the internal energy:

푄푉 = Δ푈

Δ푈 = ∫ 퐶푉 푑푇

Δ퐻 = Δ푈 + 푉Δ푃

1.5.4 REVERSIBLE, ISOTHERMAL PROCESS If one is dealing with an ideal gas, Δ푈 and Δ퐻 are only functions of temperature, so Δ푈 = Δ퐻 = 0 Due to the fact that Δ푈 = 푄 + 푊, 푄 = −푊


For an ideal gas, substitute 푃 = 푛푅푇/푉 into the expression for work and integrate to get

푊 = −푛푅푇 ln(푉2/푉1) = 푛푅푇 ln(푃2/푃1)
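As a quick numerical check of this result (the amount, temperature, and volumes below are made up for illustration):

```python
import math

R = 8.314  # J/(mol K)

def isothermal_work(n, T, V1, V2):
    """Work done ON n moles of ideal gas in a reversible isothermal
    change from V1 to V2: W = -nRT ln(V2/V1)."""
    return -n * R * T * math.log(V2 / V1)

# Illustrative numbers: 1 mol expanding from 1 L to 2 L at 300 K.
W = isothermal_work(1.0, 300.0, 1e-3, 2e-3)
Q = -W  # since dU = 0 for an isothermal ideal gas, Q = -W
```

On expansion 푊 comes out negative (the system does work on the surroundings) and 푄 is positive, consistent with the sign convention above.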

1.5.5 REVERSIBLE, ADIABATIC PROCESS By definition the heat exchange is zero, so: 푄 = 0 Due to the fact that Δ푈 = 푄 + 푊, 푊 = Δ푈 The following relationships can also be derived for a system with constant heat capacity:

푇2/푇1 = (푉1/푉2)^(푅/퐶푉)

(푃1/푃2)^(푅/퐶푃) = 푇1/푇2

푃1푉1^(퐶푃/퐶푉) = 푃2푉2^(퐶푃/퐶푉)

This means that

푊 = Δ푈 = Δ(푃푉)/(퐶푃/퐶푉 − 1) = 푛푅Δ푇/(퐶푃/퐶푉 − 1)
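These adiabatic relations can be exercised in a short sketch, assuming an ideal monatomic gas (molar 퐶푉 = 3푅/2, so 퐶푃/퐶푉 = 5/3); the state values are illustrative:

```python
R = 8.314  # J/(mol K)

def adiabatic_final_T(T1, V1, V2, Cv_molar=1.5 * R):
    """T2/T1 = (V1/V2)**(R/C_V) for a reversible adiabat with constant C_V."""
    return T1 * (V1 / V2) ** (R / Cv_molar)

def adiabatic_work(n, T1, T2, gamma=5.0 / 3.0):
    """W = dU = nR(T2 - T1)/(gamma - 1)."""
    return n * R * (T2 - T1) / (gamma - 1.0)

T2 = adiabatic_final_T(300.0, 1e-3, 2e-3)  # expansion: the gas cools
W = adiabatic_work(1.0, 300.0, T2)         # W < 0: the system does work
```

As a consistency check, 푊 should equal Δ푈 = 푛퐶푉Δ푇 computed directly.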

1.5.6 IRREVERSIBLE, ADIABATIC EXPANSION INTO A VACUUM For this case, 푄 = 푊 = Δ푈 = Δ퐻 = 0

1.6 THE CARNOT CYCLE A thermodynamic cycle always returns the system to the state it was in initially, meaning the net change in every state function over the cycle is zero. For a Carnot cycle, there are four stages, as outlined in the figure below.


Since the net change in all state functions is zero for the cycle, we know that

Δ푈cycle = Δ퐻cycle = 0

Due to the First Law of Thermodynamics,

−푊net = 푄net

The net work and the net heat can be computed by summing the individual work and heat from each of the four processes. For a Carnot cycle, the net work is negative. The following relationships apply to the Carnot cycle:

푃2/푃1 = 푃3/푃4

and

푄퐻/푄퐶 = −푇퐻/푇퐶

The efficiency of the Carnot cycle is defined as

휂 ≡ net work/heat absorbed = |푊net|/푄퐻 = 1 − 푇퐶/푇퐻

The efficiency of the Carnot cycle run in reverse (i.e. a Carnot refrigerator) is characterized by the coefficient of performance, given by

COP = 푄퐶/|푊net| = 푇퐶/(푇퐻 − 푇퐶)
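A minimal sketch of the two figures of merit (the reservoir temperatures are arbitrary):

```python
def carnot_efficiency(T_hot, T_cold):
    """eta = 1 - T_C/T_H for a Carnot engine (temperatures in kelvin)."""
    return 1.0 - T_cold / T_hot

def carnot_cop(T_hot, T_cold):
    """COP = T_C/(T_H - T_C) for a Carnot refrigerator."""
    return T_cold / (T_hot - T_cold)

eta = carnot_efficiency(500.0, 300.0)  # 0.4
cop = carnot_cop(500.0, 300.0)         # 1.5
```

Note that the efficiency depends only on the reservoir temperatures, and that the COP can exceed 1.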


1.7 SECOND LAW OF THERMODYNAMICS Recall that in the equation 푑푈 = 훿푄 + 훿푊, we were able to replace 훿푊 by a function of only state variables. It would be ideal if the same could be done for 훿푄 so that 푑푈 could be calculated from only state functions as well. This can be achieved with the second law of thermodynamics. I will start with the definition and then follow with the mathematical explanation. The second law of thermodynamics states that the entropy, 푆, of the universe increases for all real, irreversible processes (and does not change for reversible processes). In other words,

Δ푆univ ≥ 0, where

Δ푆univ ≡ Δ푆system + Δ푆surroundings

However, to understand this law, we must first define entropy, which we can do by finding the integrating factor that turns 훿푄 into an exact differential. We begin with 훿푄 = 푑푈 + 푃 푑푉. Assuming an ideal gas, we can then say

훿푄 = (3/2)푛푅 푑푇 + (푛푅푇/푉) 푑푉

If we were to integrate this expression, we would have an 푛푅푇 ln(푉푓/푉푖) term on the right-hand side. The factor of 푇 makes this term entirely dependent on the path. However, if we divide both sides by 푇, we get

훿푄/푇 = (3/2)푛푅 푑푇/푇 + (푛푅/푉) 푑푉

Integrating from an initial to final state,

∫훿푄/푇 = (3/2)푛푅 ln(푇푓/푇푖) + 푛푅 ln(푉푓/푉푖)

We then see that 1/푇 is an integrating factor such that 훿푄/푇 is an exact differential, as it is now independent of path. We can define this exact differential as entropy,

푑푆 ≡ 훿푄/푇

1.8 CALCULATING SECOND-LAW QUANTITIES IN CLOSED SYSTEMS 1.8.1 REVERSIBLE, ADIABATIC PROCESSES Since the process is reversible and there is no heat transfer1,

Δ푆 = 0, Δ푆surr = 0, Δ푆univ = 0

1.8.2 REVERSIBLE, ISOTHERMAL PROCESSES Since temperature is constant,

1 It will be tacitly assumed any quantity without a subscript refers to that of the system.


Δ푆 = 푄rev/푇

If the ideal gas assumption can be made, then Δ푈 = 0 such that 푄rev = −푊rev = ∫푃 푑푉. Plug in the ideal gas law to get

Δ푆 = −푛푅 ln(푃2/푃1)

Since all reversible processes have no change in the entropy of the universe (i.e. Δ푆univ = 0), we can say that Δ푆surr = −Δ푆.

1.8.3 REVERSIBLE, ISOBARIC PROCESSES Since 훿푄푃 = 푑퐻 = 퐶푃 푑푇 for isobaric processes,

Δ푆 = ∫(퐶푃/푇) 푑푇

Since all reversible processes have no change in the entropy of the universe (i.e. Δ푆univ = 0), we can say that Δ푆surr = −Δ푆.

1.8.4 REVERSIBLE, ISOCHORIC PROCESSES Since 훿푄푉 = 푑푈 = 퐶푉 푑푇 for isochoric processes,

Δ푆 = ∫(퐶푉/푇) 푑푇

Since all reversible processes have no change in the entropy of the universe (i.e. Δ푆univ = 0), we can say that Δ푆surr = −Δ푆.

1.8.5 REVERSIBLE PHASE CHANGE AT CONSTANT 푇 AND 푃 In this case, 푄rev is the latent heat of the phase transition. As such,

Δ푆 = 푄푃/푇 = Δ퐻transition/푇

Since all reversible processes have no change in the entropy of the universe (i.e. Δ푆univ = 0), we can say that Δ푆surr = −Δ푆.

1.8.6 IRREVERSIBLE PROCESSES FOR IDEAL GASES A general expression can be written to describe the entropy change of an ideal gas. Two equivalent expressions are:

Δ푆 = ∫(퐶푉/푇) 푑푇 + 푛푅 ln(푉2/푉1)

and

Δ푆 = ∫(퐶푃/푇) 푑푇 − 푛푅 ln(푃2/푃1)

In order to find the entropy change of the universe, one must think about the conditions of the problem statement. If the real process is adiabatic, then 푄surr = 0 and Δ푆surr = 0, such that Δ푆univ = Δ푆. If the real process is isothermal, note that 푄 = −푊 from the First Law of Thermodynamics (i.e. Δ푈 = 0) and that, due to conservation of energy, 푄surr = −푄. Once 푄surr is known, simply use Δ푆surr = 푄surr/푇. The entropy change of the universe is then Δ푆univ = Δ푆 + Δ푆surr. If the ideal gas approximation cannot be made, try splitting the process up into hypothetical, reversible pathways that may be easier to calculate.
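A sketch of the general expression, assuming a monatomic ideal gas for the constant heat capacity (the state values are illustrative):

```python
import math

R = 8.314  # J/(mol K)

def delta_S_ideal_gas(n, T1, T2, V1, V2, Cv_molar=1.5 * R):
    """dS = n C_V ln(T2/T1) + n R ln(V2/V1), assuming constant C_V
    (the default C_V is the monatomic value, 3R/2)."""
    return n * Cv_molar * math.log(T2 / T1) + n * R * math.log(V2 / V1)

# Isothermal doubling of the volume of 1 mol: dS = nR ln 2 > 0
dS_sys = delta_S_ideal_gas(1.0, 300.0, 300.0, 1e-3, 2e-3)
```

Because Δ푆 is a state function, the same expression applies whether the actual path was reversible or not; only Δ푆surr depends on how the process was carried out.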

1.8.7 ENTROPY CHANGE OF MIXING If we assume that we are mixing different inert, ideal gases then the entropy of mixing is

Δ푆mix = 푅 ∑ 푛푖 ln(푉푓/푉푖)

For an ideal gas at constant 푇 and 푃 then

Δ푆mix = −푅 ∑ 푛푖 ln(푃푖/푃tot) = −푅 ∑ 푛푖 ln(푥푖)

where 푃푖 is the partial pressure of species 푖 and 푥푖 is the mole fraction of species 푖.
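The mole-fraction form is easy to evaluate; a sketch for an equimolar binary mixture:

```python
import math

R = 8.314  # J/(mol K)

def delta_S_mix(moles):
    """dS_mix = -R sum(n_i ln x_i) for inert ideal gases at constant T, P."""
    n_tot = sum(moles)
    return -R * sum(n * math.log(n / n_tot) for n in moles if n > 0)

# Equimolar binary mixture (1 mol + 1 mol): dS_mix = 2R ln 2 > 0
dS = delta_S_mix([1.0, 1.0])
```

Since every 푥푖 < 1, each logarithm is negative and Δ푆mix is always positive: mixing ideal gases is spontaneous.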

1.8.8 EXPANSION INTO A VACUUM For expansion into a vacuum (otherwise known as a Joule expansion),

Δ푆sys = Δ푆univ = 푛푅 ln(푉2/푉1) = −푛푅 ln(푃2/푃1)

1.9 DERIVED THERMODYNAMIC QUANTITIES 1.9.1 EXACT DIFFERENTIALS The change in any intensive thermodynamic property of interest, 푧, can be written in terms of partial derivatives of the two independent intensive properties, 푥 and 푦, as shown below:

푑푧 = (휕푧/휕푥)푦 푑푥 + (휕푧/휕푦)푥 푑푦

When 푧(푥, 푦) is a smooth function, the mixed second derivatives are equal, such that the following (referred to as the Euler reciprocity relation) holds:

휕²푧/휕푥휕푦 = 휕²푧/휕푦휕푥

1.9.2 MATHEMATICAL RELATIONS The following general relationship, known as the triple product rule, is true:

(휕푥/휕푦)푧 (휕푦/휕푧)푥 (휕푧/휕푥)푦 = −1

For a function 휙 = 휙(푥, 푦) and a given constraint 푧,

(휕휙/휕푥)푧 = (휕휙/휕푥)푦 + (휕휙/휕푦)푥 (휕푦/휕푥)푧


1.9.3 HOMOGENEOUS FUNCTION THEOREM A homogeneous function of order 푘 is one where 푓(휆푎, 휆푏, 휆푐) = 휆^푘 푓(푎, 푏, 푐). Therefore, all extensive variables are homogeneous functions of degree 1. Euler's homogeneous function theorem states that for a homogeneous function 푓 of degree 푘 = 1, i.e. 푓(휆풙) = 휆푓(풙), the following must hold

푓(풙) = 풙 ⋅ grad(푓(풙))

Applying this to the definition of entropy, for example, yields

푆(푁, 푉, 푈) = (푁, 푉, 푈) ⋅ grad(푆(푁, 푉, 푈)) = 푁(휕푆/휕푁)푉,푈 + 푉(휕푆/휕푉)푁,푈 + 푈(휕푆/휕푈)푁,푉

1.10 THERMODYNAMIC RELATIONS The measured properties of a system are 푃, 푉, 푇, and composition. The fundamental thermodynamic properties are 푈 and 푆, as previously discussed. There are also derived thermodynamic properties, one of which is 퐻. There are two other convenient derived properties: 퐹, the Helmholtz free energy, and 퐺, the Gibbs free energy. The derived thermodynamic properties have the following relationships:

푑퐻 ≡ 푑푈 + 푑(푃푉)

푑퐹 ≡ 푑푈 − 푑(푇푆)

푑퐺 ≡ 푑퐻 − 푑(푇푆)

The fundamental thermodynamic potentials are given by

푑푈 = 푇 푑푆 − 푃 푑푉 + ∑ 휇푖 푑푁푖 푖

푑퐻 = 푇 푑푆 + 푉 푑푃 + ∑ 휇푖 푑푁푖 푖

푑퐹 = −푃 푑푉 − 푆 푑푇 + ∑ 휇푖 푑푁푖 푖

푑퐺 = 푉 푑푃 − 푆 푑푇 + ∑푖 휇푖 푑푁푖

With these expressions, one can derive the following relationships:

(휕푈/휕푆)푉,푁 = 푇    (휕푈/휕푉)푆,푁 = −푃

(휕퐻/휕푆)푃,푁 = 푇    (휕퐻/휕푃)푆,푁 = 푉

(휕퐹/휕푇)푉,푁 = −푆    (휕퐹/휕푉)푇,푁 = −푃

(휕퐺/휕푇)푃,푁 = −푆    (휕퐺/휕푃)푇,푁 = 푉


(휕푆/휕푉)푁,푈 = 푃/푇    (휕푆/휕푁)푉,푈 = −휇/푇

휇푖 = (휕푈/휕푁푖)푉,푆,푁푗≠푖 = (휕퐻/휕푁푖)푃,푆,푁푗≠푖 = (휕퐹/휕푁푖)푇,푉,푁푗≠푖 = (휕퐺/휕푁푖)푇,푃,푁푗≠푖

The major Maxwell relations are

(휕푇/휕푉)푆 = −(휕푃/휕푆)푉    (휕푇/휕푃)푆 = (휕푉/휕푆)푃

(휕푆/휕푉)푇 = (휕푃/휕푇)푉    (휕푆/휕푃)푇 = −(휕푉/휕푇)푃

One can also write the heat capacities in terms of 푇 and 푆:

퐶푉 = (휕푈/휕푇)푉 = 푇(휕푆/휕푇)푉    퐶푃 = (휕퐻/휕푇)푃 = 푇(휕푆/휕푇)푃

It is useful to know the following identities:

훽 ≡ (1/푉)(휕푉/휕푇)푃    휅 ≡ −(1/푉)(휕푉/휕푃)푇

where 훽 and 휅 are the thermal expansion coefficient and isothermal compressibility, respectively. The Gibbs-Helmholtz equations are

퐻 = −푇²(휕(퐺/푇)/휕푇)푃 = (휕(퐺/푇)/휕(1/푇))푃

푈 = −푇²(휕(퐹/푇)/휕푇)푉 = (휕(퐹/푇)/휕(1/푇))푉

퐺 = −푉²(휕(퐹/푉)/휕푉)푇 = (휕(퐹/푉)/휕(1/푉))푇
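One of the Maxwell relations, (휕푆/휕푉)푇 = (휕푃/휕푇)푉, can be verified numerically for an ideal gas, where both sides equal 푛푅/푉. A sketch using centered finite differences (the state values are illustrative):

```python
import math

R, n = 8.314, 1.0  # J/(mol K); 1 mol, illustrative

def S(T, V):
    """Ideal-gas entropy up to an additive constant: n C_V ln T + n R ln V."""
    return n * 1.5 * R * math.log(T) + n * R * math.log(V)

def P(T, V):
    """Ideal gas law."""
    return n * R * T / V

# Maxwell relation (dS/dV)_T = (dP/dT)_V, checked by centered differences
T0, V0 = 300.0, 1e-3
hV, hT = 1e-9, 1e-4
dS_dV = (S(T0, V0 + hV) - S(T0, V0 - hV)) / (2 * hV)
dP_dT = (P(T0 + hT, V0) - P(T0 - hT, V0)) / (2 * hT)
# both sides equal n R / V
```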


2 FUNDAMENTALS OF STATISTICAL MECHANICS 2.1 BRIEF REVIEW OF PROBABILITY When dealing with mutually exclusive events, the probability of either event happening is given by

푝푖푗 = 푝푖 + 푝푗 When dealing with independent events, the probability that both occur is given by

푝푖푗 = 푝푖푝푗

The number of arrangements of 푛 objects of which 푝 are one kind, 푞 another, 푟 another (and so on) is given by

# arrangements = 푛!/(푝! 푞! 푟! …)

We now consider the number of ways of making a choice when the order of choice is important (i.e. a permutation). The number of permutations of 푟 objects out of 푛 total objects is given by

푛P푟 = 푛!/(푛 − 푟)!

We now consider the number of ways of making a choice when the order of choice is not important (i.e. a combination). The number of combinations of 푟 objects out of 푛 total objects is given by

푛C푟 = 푛!/(푟!(푛 − 푟)!)
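Python's standard library covers these counts directly (math.perm and math.comb, available in Python 3.8+); the arrangements helper below is a small convenience written for this example:

```python
from math import comb, perm, factorial

# Permutations: ordered choices of r from n -> n!/(n-r)!
assert perm(5, 2) == 20
# Combinations: unordered choices -> n!/(r!(n-r)!)
assert comb(5, 2) == 10

def arrangements(counts):
    """Arrangements of n objects with repeated kinds: n!/(p! q! r! ...)."""
    n = sum(counts)
    out = factorial(n)
    for c in counts:
        out //= factorial(c)
    return out

assert arrangements([2, 1]) == 3  # e.g. AAB, ABA, BAA
```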

2.2 BRIEF OVERVIEW OF AVERAGES: AN EXAMPLE Consider a model system that has an energy given by 퐸 = (1/2)푘푥². Find a distribution for 푝(푥) and use it to calculate ⟨푥⟩, ⟨푥²⟩, and ⟨퐸⟩. We know that the probability is defined as

푝 = 푒^(−푘푥²/(2푘퐵푇))/푍

푝 = 푒^(−푘푥²/(2푘퐵푇)) / ∫₀^∞ 푒^(−푘푥²/(2푘퐵푇)) 푑푥 = √(2푘/(휋푘퐵푇)) 푒^(−푘푥²/(2푘퐵푇))

The value of ⟨푥⟩ can be found from

⟨푥⟩ = ∫₀^∞ 푥푝(푥) 푑푥 = √(2푘퐵푇/(휋푘))

The value of ⟨푥²⟩ can be found from


⟨푥²⟩ = ∫₀^∞ 푥²푝(푥) 푑푥 = 푘퐵푇/푘

The value of ⟨퐸⟩ can be found from

⟨퐸⟩ = ⟨(1/2)푘푥²⟩ = (1/2)푘⟨푥²⟩ = (1/2)푘퐵푇

2.3 MICROSCOPIC DEFINITION OF ENTROPY Boltzmann's hypothesis stated that entropy is related to some function of the number of quantum states, 푊. In other words, 푆 = 휙(푊), where 휙(푊) is some unknown function of 푊. To determine this function, we consider the following argument. Imagine two systems, 퐴 and 퐵, which are not interacting (and therefore independent of one another). Their entropies are

푆퐴 = 휙(푊퐴)

푆퐵 = 휙(푊퐵) as would be expected. Since entropy is an extensive quantity, we can define the total entropy as simply

푆퐴퐵 = 푆퐴 + 푆퐵 Since the two systems are independent, the total number of quantum states is given by

푊퐴퐵 = 푊퐴푊퐵 Therefore,

휙(푊퐴퐵) = 휙(푊퐴푊퐵) = 휙(푊퐴) + 휙(푊퐵) The solution to the above equation

휙(푊) = 푘퐵 ln(푊) where 푘퐵 is some constant (which we know now to be Boltzmann’s constant). As such, we see that

푆 = 푘퐵 ln(푊) To illustrate the utility of this equation, consider 푁 non-interacting molecules moving within a volume 푉. We can imagine specifying the position of each molecule by subdividing the total volume into individual volumes of size Δ푉. The number of ways to place a particular molecule in the volume is 푉 푊 = Δ푉 In words, if this box had a volume of 100 L, and we had partitions of size 10 L, there are then of course 10 ways to place a single molecule in the box. The number of ways of arranging 푁 molecules in the box is

푊푁 = (푉/Δ푉)^푁

since each molecule goes in independently of the rest. Therefore, the entropy is

푆 = 푁푘퐵 ln(푉/Δ푉)

Of course, this relies on the arbitrarily defined volume size Δ푉. This is not a concern, however, because 푆 is always defined with respect to a given reference state. If we take an entropy change (with the same Δ푉), we will find that

Δ푆 = 푆푓 − 푆푖 = 푁푘퐵 ln(푉푓/푉푖)

and the arbitrarily defined Δ푉 term vanishes. We can also get the pressure of the system now because we know that

푃 = 푇(휕푆/휕푉)푈 = 푁푘퐵푇/푉

which is just the ideal gas law and also gives us a clear definition of Boltzmann's constant by relating the equation to 푃푉 = 푛푅푇. The relationship is simply 푘퐵 = 푅/푁퐴.
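A sketch of this counting argument, showing that entropy differences are independent of the arbitrary cell size Δ푉 and that numerically differentiating 푆 recovers the ideal gas law (one mole of molecules is assumed for illustration):

```python
import math

kB = 1.380649e-23  # J/K
N = 6.022e23       # number of molecules (one mole, illustrative)
T = 300.0          # K

def S(V, dV=1e-30):
    """Counting entropy S = N kB ln(V/dV); dV is the arbitrary cell size."""
    return N * kB * math.log(V / dV)

# Entropy *changes* do not depend on the arbitrary dV:
dS1 = S(2e-3, 1e-30) - S(1e-3, 1e-30)
dS2 = S(2e-3, 1e-27) - S(1e-3, 1e-27)

# P = T (dS/dV)_U recovers the ideal gas law P = N kB T / V:
V0, h = 1e-3, 1e-9
P = T * (S(V0 + h) - S(V0 - h)) / (2 * h)
```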

2.4 MODELING SPINS ON A LATTICE SITE Suppose there are 푁 particles placed on lattice sites with each particle having a magnetic moment given by 휇. Each particle’s spin can be either spin up (i.e. spin +1/2) or spin down (i.e. spin -1/2). If the spin points up, the particle has an energy given by −휀 (i.e. −휇퐵) and if the spin points down the particle has an energy of +휀 (i.e. +휇퐵). If we define the number of particles that are spin-up as 푛1, then the number of particles with spin down is simply 푛2 = 푁 − 푛1. The total energy is then

푈 = −푛1휀 + (푁 − 푛1)휀 = 푁휀 − 2푛1휀

The number of ways of choosing 푛1 spin-up particles out of a total 푁 particles is given by

푊 = 푁!/(푛1!(푁 − 푛1)!)

The spin system's entropy is then

푆 = 푘퐵 ln(푊) = 푘퐵 ln(푁!/(푛1!(푁 − 푛1)!))

This can be easily evaluated, but the problem is that a large lattice will have large values of 푁! (and potentially 푛1!). This can make calculations quite difficult due to the sheer size of these numbers. As such, a useful approximation is known as Stirling's approximation, given by

ln(푁!) ≈ 푁 ln(푁) − 푁

If we employ this approximation, we get

푆 ≈ −푁푘퐵((푛1/푁) ln(푛1/푁) + (1 − 푛1/푁) ln(1 − 푛1/푁))
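Stirling's approximation is easy to sanity-check; math.lgamma(n + 1) gives ln(n!) exactly without overflow:

```python
import math

def stirling(n):
    """ln(n!) ~ n ln n - n."""
    return n * math.log(n) - n

n = 10**6
exact = math.lgamma(n + 1)   # ln(n!) computed without forming n!
approx = stirling(n)
rel_err = abs(exact - approx) / exact
# The relative error is tiny for large n and shrinks as n grows.
```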

We know from the equation 푈 = 푁휀 − 2푛1휀 that we can write


푛1/푁 = (1 − 푈/(푁휀))/2 = (1 − 푥)/2

where 푥 ≡ 푈/(푁휀). This allows us to rewrite the expression for entropy as

푆 ≈ −푁푘퐵(((1 + 푥)/2) ln((1 + 푥)/2) + ((1 − 푥)/2) ln((1 − 푥)/2))

We recall that we can define the temperature as

1/푇 = (휕푆/휕푈)푉 = (휕푆/휕푥)푉 (푑푥/푑푈) = (휕푆/휕푥)푉 (1/(푁휀))

As such,

1/푇 = (푘퐵/(2휀)) ln((1 − 푥)/(1 + 푥))

which is equivalent to saying

푥 = −tanh(휀/(푘퐵푇))

We can convert this back to be in terms of 푈 to get the equation of state

푈 = −푁휀 tanh(휀/(푘퐵푇))

We know that 푈 = −푛1휀 + 푛2휀. As such,

푛1 − 푛2 = −푈/휀

This means that

푛1 − 푛2 = 푁 tanh(휀/(푘퐵푇))

This is the difference in populations as a function of measurable parameters. In electromagnetism, there is also the quantity known as magnetization, which is the magnetic moment per unit volume. The magnetization is simply

푀 = 휇(푛1 − 푛2)/푉 = (휇푁/푉) tanh(휇퐵/(푘퐵푇))

where 퐵 is the magnetic induction field.
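A sketch of the equation of state, checking its two limits (units where 휀 = 1 are assumed):

```python
import math

def U_spins(N, eps, kBT):
    """Equation of state U = -N eps tanh(eps / kBT) for N two-state spins."""
    return -N * eps * math.tanh(eps / kBT)

# High T: populations equalize, so U -> 0.
# Low T: all spins align with the field, so U -> -N eps.
assert abs(U_spins(1000, 1.0, 1e6)) < 1e-2
assert abs(U_spins(1000, 1.0, 1e-2) + 1000.0) < 1e-6
```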

2.5 MODELING A RUBBER BAND We now consider a simple model of a rubber band, which we model as a collection of links that lie in the +푧 or −푧 direction. The work done on the rubber band when it is extended by an amount 푑ℓ is 퐹 푑ℓ, where 퐹 is the tension in the band. Therefore, the change in energy is given by

푑푈 = 푇 푑푆 + 퐹 푑ℓ

From this relation, we can say


퐹 푑ℓ = 푑푈 − 푇푑푆

With 푆 = 푘퐵 ln(푊), we can say

퐹/푇 = −푘퐵 (휕 ln(푊)/휕ℓ)푈

We will define 푛+ as the number of links in the +푧 direction and 푛− as the number of links in the −푧 direction. For a chain of 푁 links, 푊 is given by

푊 = 푁!/(푛+! 푛−!) = 푁!/(푛+!(푁 − 푛+)!)

We can then say

ln(푊) = ln(푁!) − ln(푛+!) − ln((푁 − 푛+)!)

which becomes the following via Stirling's approximation:

ln(푊) ≈ −푁((푛+/푁) ln(푛+/푁) + (1 − 푛+/푁) ln(1 − 푛+/푁))

The total extension is

ℓ = (푛+ − 푛−)푑 = (2푛+ − 푁)푑

where 푑 is the length of each link. This means we can write

푛+/푁 = (1 + ℓ/(푁푑))/2 = (1 + 푥)/2

where 푥 ≡ ℓ/(푁푑). This makes our expression for ln(푊) become

ln(푊) = −푁(((1 + 푥)/2) ln((1 + 푥)/2) + ((1 − 푥)/2) ln((1 − 푥)/2))

The tension is also rewritten in terms of 푥 as

퐹/푇 = −푘퐵 (휕 ln(푊)/휕푥)푈 (푑푥/푑ℓ)

where 푑푥/푑ℓ = 1/(푁푑), so

퐹/푇 = −(푘퐵/(푁푑)) (휕 ln(푊)/휕푥)

We can plug in our expression for ln(푊) to get

퐹/푇 = (푘퐵/(2푑)) ln((1 + 푥)/(1 − 푥)) = (푘퐵/(2푑)) ln((푁푑 + ℓ)/(푁푑 − ℓ))

For small values of ℓ/(푁푑), we get


퐹 ≈ 푘퐵푇ℓ/(푁푑²)

Therefore, we see that the tension is proportional to 푇ℓ. Consequently, for a rubber band under constant tension, the length will decrease upon heating (and vice versa).
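A sketch comparing the exact force-extension law to its linearized (entropic spring) form; the chain parameters are arbitrary and 푘퐵푇 is set to 1:

```python
import math

def tension(kBT, ell, N, d):
    """Exact entropic tension F = (kBT/2d) ln((Nd + l)/(Nd - l))."""
    return (kBT / (2 * d)) * math.log((N * d + ell) / (N * d - ell))

def tension_linear(kBT, ell, N, d):
    """Small-extension limit F ~ kBT l / (N d^2)."""
    return kBT * ell / (N * d * d)

# Hypothetical chain: N = 1000 links of length d = 1 (arbitrary units).
F_exact = tension(1.0, 10.0, 1000, 1.0)
F_lin = tension_linear(1.0, 10.0, 1000, 1.0)
# At l/(Nd) = 0.01 the two agree very closely; the exact law diverges
# as l approaches the full contour length Nd.
```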


3 THE CANONICAL ENSEMBLE 3.1 THE PARTITION FUNCTION Let us consider a system with multiple energetic states, each occurring with a different probability. We wish to be able to describe the thermodynamic behavior of this system at constant temperature, volume, and number of particles (this is referred to as a canonical ensemble and sometimes an NVT ensemble). The probability of being in a state 푖 out of all possible states 푗 is given by

푝푖 = 푊푖 / ∑푗 푊푗

When dealing with energies, we can say that

푊푖 = 푒^(−퐸푖/(푘퐵푇))

Therefore,

푝푖 = 푒^(−퐸푖/(푘퐵푇)) / ∑푗 푒^(−퐸푗/(푘퐵푇))

We define the partition function of a particle to be

푍 ≡ ∑푗 푒^(−퐸푗/(푘퐵푇))

such that

푝푖 = 푒^(−퐸푖/(푘퐵푇))/푍

For a system of 푁 distinguishable particles,

푍푁 = 푍^푁

For a system of 푁 indistinguishable particles,

푍푁 = 푍^푁/푁!

for large 푁. When the probabilities are not equal, the entropy is given by the formula

푆 = −푘퐵 ∑푖 푝푖 ln(푝푖)
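A minimal sketch of canonical probabilities and the Gibbs entropy formula for a hypothetical two-level system (units where 푘퐵 = 1):

```python
import math

def boltzmann_probs(energies, kBT):
    """p_i = exp(-E_i/kBT)/Z for a canonical ensemble."""
    weights = [math.exp(-E / kBT) for E in energies]
    Z = sum(weights)
    return [w / Z for w in weights]

def gibbs_entropy(probs, kB=1.0):
    """S = -kB sum(p_i ln p_i)."""
    return -kB * sum(p * math.log(p) for p in probs if p > 0)

# Two-level system at very high T: p -> 1/2 each, so S -> kB ln 2.
p = boltzmann_probs([0.0, 1.0], kBT=1e6)
S = gibbs_entropy(p)
```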

3.2 THERMODYNAMICS FROM 푍 We now have entropy expressed in terms of probabilities. Recall that


푝푖 = 푒^(−퐸푖/(푘퐵푇))/푍

Let's take the natural logarithm of this expression:

ln(푝푖) = −퐸푖/(푘퐵푇) − ln(푍)

Plugging this into the expression for entropy yields

푆 = 푘퐵 ∑푖 푝푖 (퐸푖/(푘퐵푇) + ln(푍))

The average internal energy is defined as

푈̅ = ∑푖 푝푖퐸푖

so

푆 = 푈̅/푇 + 푘퐵 ln(푍)

which can be rewritten as

푈̅ − 푇푆 = −푘퐵푇 ln(푍) We know that 퐹 = 푈 − 푇푆, so

퐹 = −푘퐵푇 ln(푍)

From the previously derived thermodynamic relations relating the free energy to 푃 and 푆,

푃 = −(휕퐹/휕푉)푇 = 푘퐵푇 (휕 ln(푍)/휕푉)푇

푆 = −(휕퐹/휕푇)푉 = 푘퐵 (휕(푇 ln(푍))/휕푇)푉

푈̅ = 퐹 + 푇푆 = 푘퐵푇² (휕 ln(푍)/휕푇)푉

We can calculate 퐶푉 as well:

퐶푉 = 푇(휕푆/휕푇)푉 = −푇(휕²퐹/휕푇²)푉 = 푘퐵푇 (휕²(푇 ln(푍))/휕푇²)푉

Essentially, once we know the partition function, we can calculate the entropy and free energy, which make it possible to calculate most other thermodynamic quantities of interest.
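The chain 푍 → 퐹 → 푈̅ → 푆 can be exercised numerically for a two-level system, using a centered finite difference for 휕ln(푍)/휕푇 (units with 푘퐵 = 1; the energies are illustrative):

```python
import math

kB = 1.0  # work in units where kB = 1

def lnZ(T, energies):
    return math.log(sum(math.exp(-E / (kB * T)) for E in energies))

def thermo_from_Z(T, energies, h=1e-6):
    """F = -kBT lnZ; U = kB T^2 d(lnZ)/dT; S = (U - F)/T."""
    F = -kB * T * lnZ(T, energies)
    dlnZ_dT = (lnZ(T + h, energies) - lnZ(T - h, energies)) / (2 * h)
    U = kB * T * T * dlnZ_dT
    S = (U - F) / T
    return F, U, S

F, U, S = thermo_from_Z(1.0, [0.0, 1.0])
# Cross-check: the exact two-level result is U = e^(-1)/(1 + e^(-1)).
```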


3.3 THERMODYNAMICS FROM 푍: AN EXAMPLE Suppose that 퐸푖 is a function of 푉. Show that 푃 = −푁⟨휕퐸/휕푉⟩. We start with the partition function for 푁 indistinguishable particles,

푍푁 = 푍1^푁/푁! = (∑푖 exp(−퐸푖/(푘퐵푇)))^푁 / 푁!

The Helmholtz free energy is

퐹 = −푘퐵푇 ln(푍푁) = −푁푘퐵푇 ln(∑푖 exp(−퐸푖/(푘퐵푇))) + 푘퐵푇(푁 ln(푁) − 푁)

The pressure is then

푃 = −(휕퐹/휕푉)푇 = 푁푘퐵푇 (휕 ln(∑푖 exp(−퐸푖/(푘퐵푇)))/휕푉)푇

This is difficult to compute directly, so we will use the identity

휕 ln(푦(푥))/휕푥 = (1/푦(푥)) 휕푦(푥)/휕푥

so

푃 = (푁푘퐵푇/푍1)(휕(∑푖 exp(−퐸푖/(푘퐵푇)))/휕푉)푇 = (푁푘퐵푇/푍1) ∑푖 exp(−퐸푖/(푘퐵푇))(−1/(푘퐵푇))(휕퐸푖/휕푉)푇 = −(푁/푍1) ∑푖 exp(−퐸푖/(푘퐵푇))(휕퐸푖/휕푉)푇

We note that

푝푖 = exp(−퐸푖/(푘퐵푇))/푍1

so

푃 = −푁 ∑푖 푝푖 (휕퐸푖/휕푉)푇 = −푁⟨휕퐸/휕푉⟩

3.4 TRANSLATIONAL PARTITION FUNCTION Recall from quantum mechanics the particle-in-a-box solution to the Schrodinger wave equation,

퐸푛 = ℏ²푛²휋²/(2푚퐿²)

We can write the partition function for one particle in the box as the following:


푍1 = ∑푛₌₁^∞ 푒^(−퐸푛/(푘퐵푇)) = ∑푛₌₁^∞ 푒^(−훾푛²) ≈ ∫₀^∞ 푒^(−훾푛²) 푑푛 = 퐿/휆퐷

where

훾 ≡ ℏ²휋²/(2푚퐿²푘퐵푇)

휆퐷 = (2휋ℏ²/(푚푘퐵푇))^(1/2)

This only applies for sufficiently small 훾, so that the summation can be computed as an integral. The 휆퐷 term is known as the thermal de Broglie wavelength. To scale this up to a 3D particle in a box, the energy term is modified to become

퐸푛 = (ℏ²휋²/(2푚퐿²))(푛푥² + 푛푦² + 푛푧²)

such that the partition function for a single particle in the 3D box becomes

푍1 = 퐿³(푚푘퐵푇/(2휋ℏ²))^(3/2) = 퐿³/휆퐷³

It is important to note that this is the translational partition function for a molecule. In other words,

푍trans = 푉(푚푘퐵푇/(2휋ℏ²))^(3/2) = 푉/휆퐷³

where 푉 is 퐿³ for a box. From this, all other thermodynamic quantities can be derived. For instance,

퐹1 = −푘퐵푇 ln(푍1) = −푘퐵푇(ln(푉) + (3/2) ln(푚푘퐵푇/(2휋ℏ²)))

However, if we want to write the Helmholtz free energy for an 푁-particle system (where all particles are indistinguishable), we cannot simply multiply 퐹1 by 푁. We need to instead use the statement that, for large 푁, 푍푁 = (푍1)^푁/푁! for 푁 indistinguishable particles.
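A sketch evaluating 휆퐷; argon at 300 K is used as an illustrative case, where 휆퐷 comes out near 1.6 × 10⁻¹¹ m, far below typical interatomic spacings, which is why classical (Boltzmann) statistics suffice:

```python
import math

hbar = 1.054571817e-34  # J s
kB = 1.380649e-23       # J/K
u = 1.66053906660e-27   # atomic mass unit, kg

def thermal_wavelength(m, T):
    """Thermal de Broglie wavelength: lambda_D = sqrt(2 pi hbar^2 / (m kB T))."""
    return math.sqrt(2 * math.pi * hbar**2 / (m * kB * T))

m_Ar = 39.95 * u              # argon atomic mass
lam = thermal_wavelength(m_Ar, 300.0)
```

Lighter particles and lower temperatures give larger 휆퐷, which is why quantum effects appear first for hydrogen and helium at low 푇.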

3.5 ROTATIONAL PARTITION FUNCTION The rotational partition function comes from the rigid rotor quantum mechanical model, which has energy levels of

퐸ℓ = ℏ²ℓ(ℓ + 1)/(2퐼)

where 퐼 is the moment of inertia and ℓ is the rotational quantum number. The moment of inertia is defined as

퐼 = ∑푖 푚푖푟푖²

where 푟푖 is the distance of atom 푖 from the axis of rotation. For a diatomic molecule, the moment of inertia can be conveniently expressed as

퐼 = (푚1푚2/(푚1 + 푚2))푑² = 휇푑²

where 휇 is the reduced mass and 푑 is the distance between the two atoms. For a linear, symmetric molecule (e.g. CO2), the moment of inertia is 퐼 = 2푚O푟CO², where 푚O is the mass of the oxygen atom and 푟CO is the C-O bond length. For a triatomic molecule, such as the D-D-H transition state, the moment of inertia would approximately be 퐼 = 푚D푟DD² + 푚H푟HD² (this assumes the center of the molecule is the center of mass, which is a fairly reasonable approximation). With this definition, the rotational partition function is

푍rot = ∑ℓ₌₀^∞ (2ℓ + 1)푒^(−ℏ²ℓ(ℓ+1)/(2퐼푘퐵푇)) = ∑ℓ₌₀^∞ (2ℓ + 1)푒^(−퐵ℓ(ℓ+1)/(푘퐵푇))

where 퐵 is the rotational constant, defined as

퐵 ≡ ℏ²/(2퐼)

For 퐵 ≪ 푘퐵푇 (the "high temperature limit"), we can approximate the summation as an integral to get

푍rot ≈ ∫₀^∞ (2ℓ + 1)푒^(−퐵ℓ(ℓ+1)/(푘퐵푇)) 푑ℓ = 푘퐵푇/퐵

Technically though, this formula needs to take into account the symmetry of the molecule and therefore should be

푍rot ≈ 푘퐵푇/(휎퐵)

where 휎 is the symmetry number (the number of ways a molecule can be oriented in indistinguishable ways). For nonlinear molecules in the high temperature limit, the equation becomes

푍rot ≈ (√휋/휎)((푘퐵푇)³/(퐴퐵퐶))^(1/2)

where 퐴, 퐵, and 퐶 are the three rotational constants (one for each principal axis, resulting from the different moments of inertia about each axis).

If we are instead operating in the regime of 퐵 ≫ 푘퐵푇 (the “low temperature limit”), we cannot approximate the sum as an integral. However, we can truncate the series without loss of significant accuracy. As such,

푍rot ≈ 1 + 3푒^(−2퐵/(푘퐵푇)) + ⋯

In either limit, we can use this definition of the rotational partition function to find the Helmholtz free energy and therefore obtain the other thermodynamic parameters of interest.
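The high-temperature approximation can be checked against the direct sum; for 퐵/(푘퐵푇) = 0.001 (and 휎 = 1) the two agree to a few parts in 10⁴:

```python
import math

def Z_rot_sum(B_over_kBT, lmax=2000):
    """Direct sum Z_rot = sum over l of (2l+1) exp(-B l(l+1)/kBT).
    lmax is chosen large enough that remaining terms underflow to zero."""
    return sum((2 * l + 1) * math.exp(-B_over_kBT * l * (l + 1))
               for l in range(lmax))

x = 0.001            # B/kBT, deep in the high-temperature regime
Z_exact = Z_rot_sum(x)
Z_highT = 1.0 / x    # Z_rot ~ kBT/B with sigma = 1
```

The small residual difference is the Euler-Maclaurin correction to replacing the sum by an integral; it grows as 퐵/(푘퐵푇) increases, which is why the low-temperature regime requires the truncated series instead.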

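The two limits are easy to sanity-check numerically. A minimal sketch (not part of the notes; the dimensionless ratio \(x = B/k_B T\) is an illustrative variable) compares the direct sum with the limiting formulas:

```python
import math

def z_rot_sum(x, lmax=1000):
    """Direct sum of Z_rot, with x = B/(k_B T)."""
    return sum((2*l + 1) * math.exp(-x * l * (l + 1)) for l in range(lmax))

# High-temperature limit (B << k_B T): Z_rot ~ k_B T / B = 1/x
x = 0.01
assert abs(z_rot_sum(x) - 1/x) / (1/x) < 0.05

# Low-temperature limit (B >> k_B T): truncated series 1 + 3 exp(-2x)
x = 5.0
assert abs(z_rot_sum(x) - (1 + 3*math.exp(-2*x))) < 1e-4
```

The high-temperature sum exceeds \(k_B T/B\) by roughly \(1/3\) (a known Euler-Maclaurin correction), which is why the comparison uses a loose 5% tolerance.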

3.6 VIBRATIONAL PARTITION FUNCTION

The vibrational energy levels come from the quantum mechanical harmonic oscillator model, described by

\[ E_n = \hbar\omega \left( n + \frac{1}{2} \right) \]

where \(\omega\) is the angular frequency and \(n\) is the vibrational quantum number. The partition function is then

\[ Z_{\mathrm{vib}} = \sum_{n=0}^{\infty} e^{-\hbar\omega(n+1/2)/k_B T} = \frac{e^{-\hbar\omega/2k_B T}}{1 - e^{-\hbar\omega/k_B T}} \]

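A quick numerical sketch of this geometric-series result (not from the notes; \(x = \hbar\omega/k_B T\) is an illustrative dimensionless variable):

```python
import math

def z_vib(x):
    """Single-mode vibrational partition function, x = hbar*omega/(k_B T)."""
    return math.exp(-x / 2) / (1 - math.exp(-x))

# High-temperature limit (hbar*omega << k_B T): Z_vib -> k_B T/(hbar*omega) = 1/x
x = 0.01
assert abs(z_vib(x) - 1 / x) / (1 / x) < 0.01

# Low-temperature limit (hbar*omega >> k_B T): only the zero-point term survives
x = 20.0
assert abs(z_vib(x) - math.exp(-x / 2)) < 1e-12
```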
In the low temperature limit of ℏ휔 ≫ 푘퐵푇, we must use this form of the vibrational partition function. In the high temperature limit of ℏ휔 ≪ 푘퐵푇, we can approximate the partition function as simply 푘퐵푇/ℏ휔. The aforementioned discussion is only valid for a system with one vibrational mode. More complicated systems can have multiple vibrational modes such that it is more accurate to say

\[ Z_{\mathrm{vib}} = \prod_j \frac{e^{-\hbar\omega_j/2k_B T}}{1 - e^{-\hbar\omega_j/k_B T}} \]

3.7 ELECTRONIC PARTITION FUNCTION

The electronic partition function can be described by

\[ Z_{\mathrm{el}} = \sum_i g_i\, e^{-E_i/k_B T} \]

where \(g_i\) is the degeneracy and \(E_i\) is the electronic energy above the ground state.

3.8 FACTORIZING THE PARTITION FUNCTION

Since the total energy of a molecular system is the sum of translational, rotational, vibrational, and electronic components, the molecular partition function is the product of the individual components. In other words,

\[ Z_1 = Z_{\mathrm{trans}} Z_{\mathrm{vib}} Z_{\mathrm{rot}} Z_{\mathrm{el}} \]

The partition functions derived in the previous sections are only applicable to a single molecule. If we wish to scale the system up to \(N\) indistinguishable molecules, the partition function is simply

\[ Z_N = \frac{(Z_{\mathrm{trans}})^N}{N!}\, Z_{\mathrm{vib}}^N Z_{\mathrm{rot}}^N Z_{\mathrm{el}}^N \]

With this, the free energy can be written as

\[ F = -k_B T \ln(Z_N) = -k_B T \left( N\ln Z_{\mathrm{trans}} + N\ln Z_{\mathrm{vib}} + N\ln Z_{\mathrm{rot}} + N\ln Z_{\mathrm{el}} - \ln N! \right) \]

We then see that when the energies add, the partition functions multiply, and the free energies of each component add (and therefore so do the other thermodynamic functions).

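This "energies add, partition functions multiply, free energies add" statement can be illustrated in a few lines (the single-molecule partition-function values below are illustrative placeholders, not from the text):

```python
import math

# Illustrative single-molecule partition-function values (k_B T = 1 units)
Z_trans, Z_rot, Z_vib, Z_el = 1e24, 100.0, 1.2, 1.0
kT = 1.0

Z1 = Z_trans * Z_rot * Z_vib * Z_el
F1 = -kT * math.log(Z1)

# Free energy of the product equals the sum of the component free energies
F_components = sum(-kT * math.log(z) for z in (Z_trans, Z_rot, Z_vib, Z_el))
assert abs(F1 - F_components) < 1e-9
```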

3.9 FLUCTUATIONS IN THE CANONICAL ENSEMBLE

In the canonical ensemble, which has constant \(N, V, T\) in each system, the energy of each system can fluctuate around the average value. We wish to describe the size of these fluctuations. If we define our internal energy as \(U \equiv \langle E \rangle\), then we can state

\[ U = \sum_i E_i p_i = \frac{\sum_i E_i e^{-\beta E_i}}{\sum_i e^{-\beta E_i}} \]

where \(\beta \equiv 1/k_B T\). We can then state that

\[ \left( \frac{\partial U}{\partial \beta} \right)_V = \frac{-\sum_i E_i^2 e^{-\beta E_i}}{\sum_i e^{-\beta E_i}} + \frac{\left( \sum_i E_i e^{-\beta E_i} \right)^2}{\left( \sum_i e^{-\beta E_i} \right)^2} \]

We immediately recognize that

\[ \frac{-\sum_i E_i^2 e^{-\beta E_i}}{\sum_i e^{-\beta E_i}} = -\langle E^2 \rangle, \qquad \frac{\left( \sum_i E_i e^{-\beta E_i} \right)^2}{\left( \sum_i e^{-\beta E_i} \right)^2} = \langle E \rangle^2 \]

such that

\[ \langle E^2 \rangle - \langle E \rangle^2 = -\left( \frac{\partial U}{\partial \beta} \right)_V \]

We can substitute back in for \(\beta\) to get

\[ \left( \frac{\partial U}{\partial \beta} \right)_V = \left( \frac{\partial U}{\partial T} \right)_V \left( \frac{\partial T}{\partial \beta} \right) = \left( \frac{\partial U}{\partial T} \right)_V \left( -k_B T^2 \right) = C_V \left( -k_B T^2 \right) \]

such that

\[ \langle E^2 \rangle - \langle E \rangle^2 = C_V k_B T^2 \]

We then see that \(C_V\) describes the fluctuations in energy of the canonical ensemble.

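The fluctuation identity can be verified numerically for any discrete level scheme. A sketch for a hypothetical two-level system (energies and temperature are illustrative, with \(k_B = 1\)):

```python
import math

def moments(energies, beta):
    """Canonical <E> and <E^2> for a discrete set of energy levels."""
    weights = [math.exp(-beta * e) for e in energies]
    z = sum(weights)
    e1 = sum(e * w for e, w in zip(energies, weights)) / z
    e2 = sum(e * e * w for e, w in zip(energies, weights)) / z
    return e1, e2

levels, T = [0.0, 1.0], 0.7   # illustrative two-level system
e1, e2 = moments(levels, 1.0 / T)
var = e2 - e1**2

# C_V = dU/dT by central finite difference
h = 1e-5
u_plus, _ = moments(levels, 1.0 / (T + h))
u_minus, _ = moments(levels, 1.0 / (T - h))
cv = (u_plus - u_minus) / (2 * h)

# <E^2> - <E>^2 = C_V k_B T^2  (k_B = 1 here)
assert abs(var - cv * T**2) < 1e-6
```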

4 HAMILTONIAN MECHANICS

4.1 FORMALISM

We are often used to dealing with Newtonian mechanics, where momentum is related to force via

\[ \dot{p} \equiv \frac{dp}{dt} = F \]

If the force is a function of the three spatial variables, then this is a set of second order differential equations in those coordinates whose solution is a function of time. For \(N\) particles, there are \(3N\) second order differential equations with \(6N\) initial conditions. To simplify this formalism, we can make use of Lagrangian and Hamiltonian mechanics. In Lagrangian mechanics, we let \(T\) be the kinetic energy of a particle. In Cartesian coordinates,

\[ T(\dot{x}, \dot{y}, \dot{z}) = \frac{m}{2}\left( \dot{x}^2 + \dot{y}^2 + \dot{z}^2 \right) \]

We also deal with a potential energy, denoted \(V\). Newton's equation of motion can be written as

\[ m\ddot{x} = -\frac{\partial V}{\partial x} \]

with analogous equations in \(y\) and \(z\). Now, we introduce a new function called the Lagrangian:

\[ L(x, y, z, \dot{x}, \dot{y}, \dot{z}) \equiv T(\dot{x}, \dot{y}, \dot{z}) - V(x, y, z) \]

The generalized momentum is then

\[ \frac{\partial L}{\partial \dot{x}} = \frac{\partial T}{\partial \dot{x}} = m\dot{x} = p_x \]

The generalized force is then

\[ \frac{\partial L}{\partial x} = -\frac{\partial V}{\partial x} \]

Newton's equations can therefore be written as

\[ \frac{d}{dt}\left( \frac{\partial L}{\partial \dot{x}} \right) = \frac{\partial L}{\partial x} \]

We now wish to introduce transformed coordinates so that we are not restricted to the Cartesian system. Let us consider \(q_1\), \(q_2\), and \(q_3\) as the transformed spatial coordinates. Lagrange's equations then become

\[ \frac{d}{dt}\left( \frac{\partial L}{\partial \dot{q}_j} \right) = \frac{\partial L}{\partial q_j} \]

where \(j = 1, 2, 3\). For \(N\) particles, there are still \(3N\) second order differential equations and \(6N\) initial conditions, but the equations have the same form in any coordinate system. To reduce the second order differential equations to a set of first order differential equations, we introduce the Hamiltonian approach. Let us define a generalized momentum by


\[ p_j = \frac{\partial L}{\partial \dot{q}_j} \]

where \(j = 1, 2, \ldots, 3N\). Then we can define the Hamiltonian function for a system containing one particle (for simplicity) as

\[ H(p_1, p_2, p_3, q_1, q_2, q_3) = \sum_{j=1}^{3} p_j \dot{q}_j - L(\dot{q}_1, \dot{q}_2, \dot{q}_3, q_1, q_2, q_3) \]

We can write the kinetic energy as

\[ T = \sum_{j=1}^{3N} a_j(q_1, q_2, \ldots, q_{3N})\, \dot{q}_j^2 \]

where the \(a_j\) are functions of the generalized coordinates. If the potential energy is a function only of the generalized coordinates, then \(p_j\) can be written as

\[ p_j = \frac{\partial L}{\partial \dot{q}_j} = \frac{\partial T}{\partial \dot{q}_j} = 2 a_j \dot{q}_j \]

This then lets us write \(H = T + V\), such that the Hamiltonian is just the sum of the kinetic and potential energies. If the Lagrangian is not an explicit function of time, then the time-derivative of the Hamiltonian is zero. Let us begin with the differential of \(H\):

\[ dH = \sum_j \dot{q}_j\, dp_j + \sum_j p_j\, d\dot{q}_j - \sum_j \frac{\partial L}{\partial \dot{q}_j}\, d\dot{q}_j - \sum_j \frac{\partial L}{\partial q_j}\, dq_j \]

We can use the previously derived equations (\(p_j = \partial L/\partial \dot{q}_j\) and, from Lagrange's equations, \(\dot{p}_j = \partial L/\partial q_j\)) to then state

\[ dH = \sum_j \dot{q}_j\, dp_j - \sum_j \dot{p}_j\, dq_j \]

The total derivative of the Hamiltonian is

\[ dH = \sum_j \left( \frac{\partial H}{\partial p_j} \right) dp_j + \sum_j \left( \frac{\partial H}{\partial q_j} \right) dq_j \]

Therefore, comparing the last two equations, it is clear that

\[ \frac{\partial H}{\partial p_j} = \dot{q}_j \]

and

\[ \frac{\partial H}{\partial q_j} = -\dot{p}_j \]

with \(j = 1, 2, \ldots, 3N\) for \(N\) particles. The last two expressions are Hamilton's equations of motion. For \(N\) particles, there are \(6N\) first order differential equations and \(6N\) initial conditions. In particular, Hamilton's equations of motion describe the motion of a particle through a phase space composed of the \(p_j\) and \(q_j\) coordinates. For a classical continuous system, the canonical partition function for one particle is

\[ Z_1 = \frac{1}{h^3} \iiint dp_i\, dp_j\, dp_k \iiint dq_i\, dq_j\, dq_k \exp\left( -\frac{H}{k_B T} \right) \]

Here, the \(i, j, k\) subscripts represent the three spatial coordinates in a given coordinate system. Note that if we are dealing with a system in fewer than three dimensions, the volume integral (and the number of differential terms) would change accordingly. In addition, recall that \(q\) is just a generalized position variable. So, if we are operating in Cartesian coordinates, the above is simply

\[ Z_1 = \frac{1}{h^3} \iiint dp_x\, dp_y\, dp_z \iiint dx\, dy\, dz \exp\left( -\frac{H}{k_B T} \right) \]

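Hamilton's equations reduce the dynamics to coupled first order ODEs that are straightforward to integrate numerically. A minimal sketch (not from the notes) for a one-dimensional harmonic Hamiltonian \(H = p^2/2m + cq^2/2\), with illustrative \(m\) and \(c\), using a symplectic leapfrog step:

```python
m, c = 1.0, 4.0          # illustrative mass and spring constant
q, p = 1.0, 0.0          # initial conditions
dt, steps = 1e-3, 10000

E0 = p*p/(2*m) + c*q*q/2
for _ in range(steps):
    # Leapfrog update of Hamilton's equations:
    p -= 0.5 * dt * c * q      # p_dot = -dH/dq = -c q
    q += dt * p / m            # q_dot =  dH/dp = p/m
    p -= 0.5 * dt * c * q

E = p*p/(2*m) + c*q*q/2
assert abs(E - E0) < 1e-5      # H is (nearly) conserved along the trajectory
```

The leapfrog scheme is chosen because it respects the phase-space structure of Hamilton's equations, so the energy error stays bounded over long trajectories.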
4.2 PRACTICE PROBLEMS

4.2.1 CLASSICAL SPRING MODEL

Consider a one-dimensional spring, whose Hamiltonian can be expressed as

\[ H = \frac{p^2}{2m} + \frac{cq^2}{2} \]

Find the partition function of one particle whose energy follows this expression.

We start with the classical, continuous partition function for one particle:

\[ Z_1 = \frac{1}{h^3} \iiint dp_i\, dp_j\, dp_k \iiint dq_i\, dq_j\, dq_k \exp\left( -\frac{H}{k_B T} \right) \]

We are only operating in one dimension, so we can more simply say

\[ Z_1 = \frac{1}{h} \int dp \int dq \exp\left( -\frac{H}{k_B T} \right) \]

Plugging in the Hamiltonian,

\[ Z_1 = \frac{1}{h} \int dp \int dq \exp\left( -\frac{p^2/2m + cq^2/2}{k_B T} \right) \]

We can split up the exponential into parts:

\[ Z_1 = \frac{1}{h} \int \exp\left( -\frac{p^2}{2mk_B T} \right) dp \int \exp\left( -\frac{cq^2}{2k_B T} \right) dq \]

These Gaussian integrals have the solution

\[ Z_1 = \frac{1}{h} \sqrt{\frac{2\pi k_B T}{c}} \sqrt{2\pi m k_B T} = \frac{2\pi k_B T}{h} \sqrt{\frac{m}{c}} \]

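The closed-form answer can be checked against direct numerical quadrature of the two Gaussian integrals (a sketch, with illustrative \(m\), \(c\), \(T\) in units where \(h = k_B = 1\)):

```python
import math

m, c, T = 1.0, 2.0, 1.5
h, kT = 1.0, T

# Midpoint-rule quadrature over a box wide enough that the Gaussians have decayed
n, pmax, qmax = 2000, 30.0, 30.0
dp, dq = 2*pmax/n, 2*qmax/n
Zp = sum(math.exp(-((-pmax + (i+0.5)*dp)**2) / (2*m*kT)) for i in range(n)) * dp
Zq = sum(math.exp(-c*((-qmax + (i+0.5)*dq)**2) / (2*kT)) for i in range(n)) * dq
Z_numeric = Zp * Zq / h

Z_exact = 2*math.pi*kT/h * math.sqrt(m/c)
assert abs(Z_numeric - Z_exact) / Z_exact < 1e-3
```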

4.2.2 CLASSICAL IDEAL GAS

Consider an ideal gas consisting of \(N\) particles that obey classical statistics. The energy of a particle is \(\varepsilon\) and is proportional to the magnitude of the momentum, \(p\), via \(\varepsilon = c|p|\). Find the equation of state and energy of the gas. Consider the particles as structureless.

Let us first begin with one particle. If the particle is structureless, we are only considering translational contributions to the partition function. We start with the classical, continuous partition function for one particle:

\[ Z_1 = \frac{1}{h^3} \iiint dp_i\, dp_j\, dp_k \iiint dq_i\, dq_j\, dq_k \exp\left( -\frac{H}{k_B T} \right) \]

We recognize that we are dealing in three dimensions, so these integrals are triple integrals. We also know that the Hamiltonian is a representation of the energy, so we can substitute \(\varepsilon\) for \(H\). This allows us to say

\[ Z_1 = \frac{1}{h^3} \iiint dp_x\, dp_y\, dp_z \iiint dx\, dy\, dz \exp\left( -\frac{\varepsilon}{k_B T} \right) \]

Since the momentum components range from \(-\infty\) to \(+\infty\), those are the bounds of the triple integral over the \(dp_i\) terms. For the spatial triple integral, we can simply recognize that integrating over the three spatial coordinates gives the volume of the system, \(V\), so

\[ Z_1 = \frac{V}{h^3} \iiint_{-\infty}^{\infty} dp_x\, dp_y\, dp_z \exp\left( -\frac{\varepsilon}{k_B T} \right) \]

Plugging in for the energy,

\[ Z_1 = \frac{V}{h^3} \iiint_{-\infty}^{\infty} dp_x\, dp_y\, dp_z \exp\left( -\frac{c|p|}{k_B T} \right) \]

To evaluate this integral, we transform it into spherical coordinates in momentum space:

\[ Z_1 = \frac{V}{h^3} \int_0^{2\pi} \int_0^{\pi} \int_0^{\infty} \exp\left( -\frac{cr}{k_B T} \right) r^2 \sin\theta\, dr\, d\theta\, d\phi \]

since \(|p| = \sqrt{p_x^2 + p_y^2 + p_z^2} = r\). This can be integrated to yield

\[ Z_1 = \frac{8\pi V}{h^3} \left( \frac{k_B T}{c} \right)^3 \]

Scaling this up to \(N\) particles,

\[ Z_N = \frac{Z_1^N}{N!} = \frac{1}{N!} \left( \frac{8\pi V}{h^3} \left( \frac{k_B T}{c} \right)^3 \right)^N \]

The free energy is


\[ F = -k_B T \ln(Z_N) = -k_B T \left( N \ln\left( \frac{8\pi V}{h^3} \left( \frac{k_B T}{c} \right)^3 \right) - \ln(N!) \right) \]

Applying Stirling's approximation, \(\ln(N!) \approx N\ln N - N\), this becomes

\[ F = -k_B T N \left( \ln\left( 8\pi \frac{V}{N} \right) + 3\ln\left( \frac{k_B T}{hc} \right) + 1 \right) \]

We can then note that

\[ P = -\left( \frac{\partial F}{\partial V} \right)_T = \frac{k_B T N}{V} = \frac{nRT}{V} \]

which is the ideal gas law. To find the energy of the gas, note that \(U = F + TS\). Therefore, we first need the entropy. This is

\[ S = -\left( \frac{\partial F}{\partial T} \right)_V = k_B N \left( \ln\left( 8\pi \frac{V}{N} \right) + 3\ln\left( \frac{k_B T}{hc} \right) + 1 \right) + 3k_B N \]

Therefore,

\[ U = 3k_B T N = 3nRT \]

We do not get the typical ideal gas answer of \(U = \frac{3}{2}nRT\) because this Hamiltonian is linear, rather than quadratic, in the momentum, so the usual equipartition count of three quadratic momentum terms does not apply.

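The result \(\langle \varepsilon \rangle = 3k_B T\) for \(\varepsilon = c|p|\) can be confirmed by averaging over the Boltzmann-weighted momentum-space shell (a sketch with illustrative constants, \(k_B = 1\) units):

```python
import math

c, kT = 1.0, 2.0   # illustrative constants

# Radial quadrature in momentum space; the weight is r^2 exp(-c r / kT)
n, rmax = 20000, 400.0
dr = rmax / n
num = den = 0.0
for i in range(n):
    r = (i + 0.5) * dr
    w = r*r * math.exp(-c*r/kT)
    num += c*r * w     # energy c|p| times the weight
    den += w
avg_energy = num / den

assert abs(avg_energy - 3*kT) / (3*kT) < 1e-3   # <epsilon> = 3 k_B T
```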

5 CHEMICAL EQUILIBRIUM

5.1 CONDITIONS OF CHEMICAL EQUILIBRIUM

We now consider a simple equilibrium reaction of \(A \leftrightarrow B\), such as isomerization or the folding of a protein. We know from kinetics that

\[ K = \frac{[B]_{eq}}{[A]_{eq}} \]

where \(K\) is the chemical equilibrium constant. We wish to relate chemical equilibrium to thermodynamics. At constant \(T\) and \(V\),

\[ dF = -S\,dT - P\,dV + \mu_A\, dn_A + \mu_B\, dn_B \]

becomes

\[ dF = \mu_A\, dn_A + \mu_B\, dn_B \]

If we define \(\xi\) as the extent of reaction such that \(d\xi = -dn_A = +dn_B\), then

\[ dF = (\mu_B - \mu_A)\, d\xi \]

such that

\[ \left( \frac{\partial F}{\partial \xi} \right)_{T,V} = \mu_B - \mu_A \]

At equilibrium, we know that this derivative must be zero, and therefore equilibrium in this system is defined by \(\mu_A = \mu_B\). We could have repeated the same procedure at constant \(T\) and \(P\). As such,

\[ dG = -S\,dT + V\,dP + \mu_A\, dn_A + \mu_B\, dn_B \]

becomes

\[ dG = \mu_A\, dn_A + \mu_B\, dn_B \]

such that

\[ \left( \frac{\partial G}{\partial \xi} \right)_{T,P} = \mu_B - \mu_A \]

We can also express the chemical potential of a species \(i\) as

\[ \mu_i = \mu_i^\circ + k_B T \ln\left( \frac{p_i}{P} \right) \]

5.2 CALCULATING EQUILIBRIUM CONSTANTS

This is effectively a mini-summary of Sections 3.4-3.8. From kinetics, we know that we can express the equilibrium constant as

\[ K_C = \exp\left( -\frac{\Delta U}{k_B T} \right) \prod_i Z_i^{\nu_i} \]

where \(\Delta U\) represents the energy change of the reaction. Note that if you have bond dissociation energies (BDE), then \(\Delta U = \mathrm{BDE}_{\mathrm{bonds\ broken}} - \mathrm{BDE}_{\mathrm{bonds\ formed}}\) in order for the sign convention to work out, since BDEs are tabulated as positive quantities. We recall that the partition function can be factorized, so each species' partition function is the product of the various components. Recalling from before, the translational partition function per unit volume is

\[ \frac{Z_{\mathrm{trans}}}{V} = \frac{1}{\lambda_D^3} = \left( \frac{2\pi m k_B T}{h^2} \right)^{3/2} \]

In calculating \(K\), you must use \(Z_{\mathrm{trans}}/V\) to get the units correct. Common values of \(Z_{\mathrm{trans}}/V\) are about \(10^{24}\ \mathrm{cm}^{-3}\). The rotational partition function is different depending on the shape of the molecule. For a linear molecule,

\[ Z_{\mathrm{rot}} = \frac{8\pi^2 I k_B T}{\sigma h^2} = \frac{T}{\sigma \theta_{\mathrm{rot}}} \]

where \(\theta_{\mathrm{rot}}\) is the rotational "temperature". For a nonlinear molecule,

\[ Z_{\mathrm{rot}} = \frac{8\pi^2 \left( 8\pi^3 I_1 I_2 I_3 \right)^{1/2} (k_B T)^{3/2}}{h^3 \sigma} \]

In these equations, \(\sigma\) represents the symmetry number and is determined by the number of spatial orientations of the subject molecule that are identical. For easy reference, it has a value of 2 for linear molecules with a center of symmetry and 1 for linear molecules without a center of symmetry. The quantity \(I\) is the moment of inertia, and for the nonlinear case \(I_1, I_2, I_3\) are the three principal moments. The moment of inertia is defined as

\[ I = \sum_i m_i r_i^2 \]

where \(r_i\) is the distance to the axis of rotation. For a diatomic molecule, the moment of inertia is \(I = \mu R^2\), where \(\mu\) is the reduced mass and \(R\) is the distance between the two atoms. For a linear, symmetric molecule like CO2, the moment of inertia is \(I = 2m_{\mathrm{O}} r_{\mathrm{CO}}^2\), where \(m_{\mathrm{O}}\) is the mass of the oxygen atom and \(r_{\mathrm{CO}}\) is the C-O bond length. For a triatomic linear molecule, such as a D-D-H transition state, the moment of inertia would approximately be \(I = m_{\mathrm{D}} r_{\mathrm{DD}}^2 + m_{\mathrm{H}} r_{\mathrm{HD}}^2\) (this assumes the center of the molecule is the center of mass, which is a reasonable approximation). The value of \(Z_{\mathrm{rot}}\) is unitless and approximately \(10^2-10^4\) for linear molecules and \(10^3-10^6\) for nonlinear molecules. The vibrational partition function is given by

\[ Z_{\mathrm{vib}} = \prod_i^{n} \left( 1 - \exp\left( -\frac{h\nu_i}{k_B T} \right) \right)^{-1} = \prod_i^{n} \left( 1 - \exp\left( -\frac{\theta_{\mathrm{vib},i}}{T} \right) \right)^{-1} \]

where \(n\) is the number of vibrational degrees of freedom, \(\nu_i\) is the vibrational frequency from IR or Raman spectroscopy, and \(\theta_{\mathrm{vib}}\) is the vibrational "temperature". The value of \(Z_{\mathrm{vib}}\) is unitless and approximately 1 to 10. Note that spectra normally yield wavenumbers with units of inverse length. To convert a wavenumber \(\tilde{\nu}_i\) to a frequency, use \(\nu_i = c\tilde{\nu}_i\).

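The wavenumber-to-\(Z_{\mathrm{vib}}\) workflow can be sketched directly (the mode wavenumbers below are illustrative placeholders, not from the text; the physical constants are standard SI values):

```python
import math

h = 6.62607015e-34       # Planck constant, J s
kB = 1.380649e-23        # Boltzmann constant, J/K
c_light = 2.99792458e10  # speed of light in cm/s, so cm^-1 wavenumbers give nu in s^-1

def z_vib(wavenumbers_cm, T):
    """Z_vib = prod_i (1 - exp(-h nu_i / kB T))^-1 with nu_i = c * nutilde_i."""
    z = 1.0
    for nutilde in wavenumbers_cm:
        x = h * c_light * nutilde / (kB * T)
        z *= 1.0 / (1.0 - math.exp(-x))
    return z

# Illustrative wavenumbers (cm^-1) loosely resembling a small molecule's modes
z = z_vib([1300.0, 700.0, 2300.0], 300.0)
assert 1.0 < z < 10.0   # Z_vib is of order 1-10, as noted above
```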

Finally, the electronic partition function is given by

\[ Z_{\mathrm{el}} = \sum_i g_i \exp\left( -\frac{\varepsilon_i}{k_B T} \right) \]

where \(g_i\) is the degeneracy and \(\varepsilon_i\) is the electronic energy above the ground state.

5.3 GRAND CANONICAL ENSEMBLE 5.3.1 DEFINITION In the canonical ensemble, we assumed constant NVT conditions. We now wish to consider the case where we can let the number of particles fluctuate, which is at constant 휇VT conditions instead.

The probability to find a member of the ensemble that contains 푁 particles and is in an energy state 퐸푖(푁, 푉) is given by

\[ P_i = \frac{e^{-(E_i - \mu N_i)/k_B T}}{\Xi} \]

where \(\Xi\) is the grand canonical partition function, given by

\[ \Xi = \sum_{\mathrm{states}} e^{-(E - \mu N)/k_B T} \]

It is important to note that this sum is over all possible microstates. That includes all combinations of energy levels and \(N\). As such, the grand canonical partition function (for one type of particle) can be restated as

\[ \Xi = \sum_N \sum_i e^{-(E_i - \mu N)/k_B T} \]

where the inner sum is over all possible energy states, and the outer sum is over all possible numbers of particles. Since the exponential can be split into the product of two exponentials, it is also worthwhile to note that

\[ \Xi = \sum_N Z_N\, e^{\mu N/k_B T} \]

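As a minimal illustration of \(\Xi = \sum_N Z_N e^{\mu N/k_B T}\), consider a hypothetical single adsorption site that is either empty (\(N=0\), \(Z_0 = 1\)) or holds one particle with energy \(\varepsilon\) (\(N=1\), \(Z_1 = e^{-\varepsilon/k_B T}\)); the numbers are illustrative, not from the text:

```python
import math

def xi_and_n(eps, mu, kT):
    """Grand partition function and <N> for a single two-state site."""
    xi = 1.0 + math.exp(-eps/kT) * math.exp(mu/kT)
    n_avg = math.exp((mu - eps)/kT) / xi
    return xi, n_avg

# <N> = kT * d(ln Xi)/d(mu): check by central finite difference
eps, mu, kT = 0.5, -0.2, 1.0
xi, n_avg = xi_and_n(eps, mu, kT)
d = 1e-6
xi_p, _ = xi_and_n(eps, mu + d, kT)
xi_m, _ = xi_and_n(eps, mu - d, kT)
n_fd = kT * (math.log(xi_p) - math.log(xi_m)) / (2*d)
assert abs(n_avg - n_fd) < 1e-8
```

This anticipates the thermodynamic relation for \(\bar{N}\) derived in the next subsection.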
5.3.2 CONNECTION TO THERMODYNAMICS

We recall that the entropy is given by

\[ S = -k_B \sum_i p_i \ln(p_i) \]

Plugging in our probability expression gets us

\[ S = \frac{\bar{U}}{T} - \frac{\mu \bar{N}}{T} + k_B \ln(\Xi) \]

Therefore, we define a new state function called the grand potential:

\[ \Phi \equiv \bar{U} - TS - \mu \bar{N} \]


The definition implies that

\[ d\Phi = d\bar{U} - d(\mu \bar{N}) - d(TS) \]

If we note that \(dU = T\,dS - P\,dV + \mu\,dN\), then

\[ d\Phi = -P\,dV - \bar{N}\,d\mu - S\,dT \]

This allows us to state that

\[ S = -\left( \frac{\partial \Phi}{\partial T} \right)_{V,\mu} = k_B \left( \frac{\partial (T\ln\Xi)}{\partial T} \right)_{V,\mu} \]

\[ P = -\left( \frac{\partial \Phi}{\partial V} \right)_{T,\mu} = k_B T \left( \frac{\partial \ln\Xi}{\partial V} \right)_{T,\mu} \]

\[ \bar{N} = -\left( \frac{\partial \Phi}{\partial \mu} \right)_{T,V} = k_B T \left( \frac{\partial \ln\Xi}{\partial \mu} \right)_{T,V} \]

It is also clear from our definitions of the grand canonical partition function and the grand potential that

\[ \Phi = -k_B T \ln(\Xi) \]

\[ PV = k_B T \ln(\Xi) \]


6 THERMODYNAMICS OF BIOPOLYMERS

6.1 MEAN SIZE OF RANDOM WALK MACROMOLECULE

Let us start by considering a lattice model with a lattice constant \(\ell\). The value of \(\ell\) represents the distance over which a polymer bends. We can then relate the contour length of the polymer (i.e. the length of the polymer when fully outstretched), \(L\), and the number of steps taken, \(N\), via

\[ N = \frac{L}{\ell} \]

The end-to-end distance of the polymer is given by \(\vec{r}\). Let \(g_N(\vec{r})\) equal the total number of walks on the lattice that start at the origin and end at \(\vec{r}\). This implies that \(\sum_{\vec{r}} g_N(\vec{r})\) is the total number of possible configurations of the polymer. We will define this as

\[ \sum_{\vec{r}} g_N(\vec{r}) = g_{\mathrm{tot}} = \gamma^N \]

It can be shown that \(\gamma = 2\) for 1D, \(\gamma = 4\) for 2D, and \(\gamma = 6\) for 3D. Before we get \(Z\), let us try to get the size of the polymer, given by \(\sqrt{\langle \vec{r}^2 \rangle}\). If we define the position vector as

\[ \vec{r} = \vec{\ell}_1 + \vec{\ell}_2 + \cdots + \vec{\ell}_N \]

then

\[ \langle \vec{r}^2 \rangle = \left\langle \left( \sum_{i=1}^N \vec{\ell}_i \right)^2 \right\rangle = \left\langle \sum_{i=1}^N \sum_{j=1}^N \vec{\ell}_i \cdot \vec{\ell}_j \right\rangle = \sum_{i=1}^N \langle \vec{\ell}_i^2 \rangle + \sum_{i \neq j}^N \langle \vec{\ell}_i \cdot \vec{\ell}_j \rangle \]

Since every step is independent of all steps that precede and follow it, the second term on the right-hand side is zero. In addition, \(\langle \vec{\ell}_i^2 \rangle = \ell^2\) since \(|\vec{\ell}_i| = \ell\). Therefore,

\[ \langle \vec{r}^2 \rangle = N\ell^2 \]

The polymer size, \(R_0\), is then

\[ R_0 = \sqrt{\langle \vec{r}^2 \rangle} = \ell\sqrt{N} = \sqrt{L\ell} \]

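The scaling \(\langle \vec{r}^2 \rangle = N\ell^2\) is easy to reproduce with a brute-force lattice simulation (a sketch; walk length and sample count are illustrative):

```python
import random

random.seed(0)

def end_to_end_sq(N, ell=1.0):
    """3D cubic-lattice random walk of N steps; returns |r|^2."""
    x = y = z = 0.0
    for _ in range(N):
        axis, sign = random.randrange(3), random.choice((-1.0, 1.0))
        if axis == 0:
            x += sign * ell
        elif axis == 1:
            y += sign * ell
        else:
            z += sign * ell
    return x*x + y*y + z*z

N, walks = 100, 5000
mean_r2 = sum(end_to_end_sq(N) for _ in range(walks)) / walks
# <r^2> = N ell^2, i.e. R0 = ell * sqrt(N)
assert abs(mean_r2 - N) / N < 0.1
```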
6.2 RANDOM WALK: DIFFUSION

Let \(p_0(\vec{r}, t)\) be the probability for a diffusing random-walker particle to be at a distance \(\vec{r}\) at a time \(t\). It can be shown that the probability is

\[ p_0 = \frac{e^{-\vec{r}^2/4Dt}}{(4\pi Dt)^{3/2}} \]

To find \(\langle \vec{r}^2 \rangle\), we note that the following statement is true (so long as the probability function is normalized such that it integrates to 1 over all space):

\[ \langle \vec{r}^2 \rangle = \int_{-\infty}^{\infty} \vec{r}^2\, p_0(\vec{r})\, d\vec{r} \]

Plugging in our probability,

\[ \langle \vec{r}^2 \rangle = \int_{-\infty}^{\infty} \vec{r}^2\, \frac{e^{-\vec{r}^2/4Dt}}{(4\pi Dt)^{3/2}}\, d\vec{r} \]

This would be a nightmare to actually integrate. We can note the following identity:

\[ \int_{-\infty}^{\infty} e^{-\alpha \vec{r}^2}\, d\vec{r} = \left( \frac{\pi}{\alpha} \right)^{3/2} \]

To make our exponential in \(\langle \vec{r}^2 \rangle\) look like the above integral, we note that \(\alpha = 1/4Dt\). Substituting this in yields the following (with the constant factor pulled out in front of the integral):

\[ \langle \vec{r}^2 \rangle = \frac{1}{(4\pi Dt)^{3/2}} \int_{-\infty}^{\infty} \vec{r}^2\, e^{-\alpha \vec{r}^2}\, d\vec{r} \]

This almost looks like the tabulated integral, except there is an extra factor of \(\vec{r}^2\) inside the integrand. We can handle this easily by noting that

\[ \vec{r}^2 e^{-\alpha \vec{r}^2} = -\frac{d}{d\alpha}\left( e^{-\alpha \vec{r}^2} \right) \]

Therefore, we can say

\[ \langle \vec{r}^2 \rangle = \frac{1}{(4\pi Dt)^{3/2}} \left( -\frac{d}{d\alpha} \int_{-\infty}^{\infty} e^{-\alpha \vec{r}^2}\, d\vec{r} \right) = \frac{1}{(4\pi Dt)^{3/2}} \left( -\frac{d}{d\alpha} \left( \frac{\pi}{\alpha} \right)^{3/2} \right) = \frac{1}{(4\pi Dt)^{3/2}} \frac{3}{2} \frac{\pi^{3/2}}{\alpha^{5/2}} = 6Dt \]

Therefore, \(\sqrt{\langle \vec{r}^2 \rangle} \sim \sqrt{t}\).

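The \(\langle \vec{r}^2 \rangle = 6Dt\) result can be confirmed by radial quadrature of the Gaussian propagator (a sketch with illustrative \(D\) and \(t\)):

```python
import math

D, t = 0.8, 2.0   # illustrative diffusion constant and time

# <r^2> = int_0^inf r^2 * p0(r) * 4 pi r^2 dr for the 3D Gaussian p0
n, rmax = 20000, 50.0
dr = rmax / n
norm = (4*math.pi*D*t) ** 1.5
total = 0.0
for i in range(n):
    r = (i + 0.5) * dr
    p0 = math.exp(-r*r / (4*D*t)) / norm
    total += r*r * p0 * 4*math.pi*r*r * dr

assert abs(total - 6*D*t) / (6*D*t) < 1e-3
```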
6.3 ANALOGY TO POLYMERS

Using the previous solution, we make an analogy to polymers. For diffusion, we dealt with time (\(t\)), whereas for polymers we deal with a number of segments \(N\) (defined by \(t/\Delta t\), where \(\Delta t\) is a discrete time step). It can therefore be shown that the size is proportional to \(\sqrt{N}\) (instead of \(\sqrt{t}\)) for polymers. From our expressions \(\langle \vec{r}^2 \rangle = N\ell^2\) and

\[ \langle \vec{r}^2 \rangle = 6Dt \]

we can equate them to yield

\[ D = \frac{\ell^2 N}{6t} = \frac{\ell^2}{6\Delta t} \]

For polymers, the probability can be expressed as the following, as discussed in the next section:

\[ p = \left( \frac{3}{2\pi N\ell^2} \right)^{3/2} e^{-3\vec{r}^2/2N\ell^2} \]


6.4 ENTROPIC SPRING MODEL

Using the previous Gaussian chain formulation, find the probability that the two ends of a polymer chain are at a distance \(\delta\) or smaller from each other.

To do this, we begin by recalling that

\[ p = \left( \frac{3}{2\pi N\ell^2} \right)^{3/2} e^{-3\vec{r}^2/2N\ell^2} \]

We essentially want to calculate the probability that the two ends are within a sphere of radius \(\delta\), so

\[ p_0 = \int_0^{2\pi} \int_0^{\pi} \int_0^{\delta} r^2\, p \sin\theta\, dr\, d\theta\, d\phi \]

This becomes

\[ p_0 = 4\pi \int_0^{\delta} r^2\, p\, dr \]

or

\[ p_0 = 4\pi \left( \frac{3}{2\pi N\ell^2} \right)^{3/2} \int_0^{\delta} r^2 e^{-3r^2/2N\ell^2}\, dr \]

If we can assume that \(\delta \ll \sqrt{N}\ell\), then the exponential is approximately 1 and therefore

\[ p_0 = 4\pi \left( \frac{3}{2\pi N\ell^2} \right)^{3/2} \frac{\delta^3}{3} = \left( \frac{6}{\pi N^3} \right)^{1/2} \left( \frac{\delta}{\ell} \right)^3 \]

6.5 ENTROPIC RESTORING FORCE

For polymers, the probability of being in a given state is the number of ways to get a given \(\vec{r}\) divided by the total number of configurations. We shall denote this as follows:

\[ p(\vec{r}) = \frac{g_N(\vec{r})}{\gamma^N} \]

Therefore, using the identity \(N\ell^2 = L\ell\), we can find \(g_N(\vec{r})\) to get

\[ g_N(\vec{r}) = \gamma^N \left( \frac{3}{2\pi L\ell} \right)^{3/2} e^{-3\vec{r}^2/2L\ell} \]

We can now calculate the force associated with a given end-to-end distance. The Helmholtz free energy is given by \(F = U - TS\), where \(U\) is just the internal energy of the system, which is a constant. Since

\[ S = k_B \ln(g_N(\vec{r})) \]

we get

\[ F = U - k_B T \ln(g_N(\vec{r})) \]

Since \(U\) is a constant, we can say

\[ F = \mathrm{const} + \frac{3k_B T}{2L\ell}\, \vec{r}^2 = \mathrm{const} + \frac{1}{2}\kappa \vec{r}^2 \]

where \(\kappa\) is the entropic spring constant, given by

\[ \kappa = \frac{3k_B T}{L\ell} \]

The restoring force for a stretched polymer is driven by entropy.

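To get a feel for the magnitudes, \(\kappa = 3k_B T/(L\ell)\) can be evaluated with hypothetical numbers loosely at the scale of a micron-long biopolymer (all values below are illustrative, not from the notes); note that the entropic stiffness grows linearly with temperature:

```python
kB = 1.380649e-23   # J/K
T = 300.0           # K
L = 1e-6            # contour length, m (illustrative)
ell = 1e-7          # bending length scale, m (illustrative)

kappa = 3 * kB * T / (L * ell)   # entropic spring constant, N/m
force = kappa * 50e-9            # restoring force at 50 nm extension, N (femtonewton scale)

# Doubling T doubles the stiffness: the restoring force is entropic in origin
kappa_hot = 3 * kB * (2 * T) / (L * ell)
assert abs(kappa_hot - 2 * kappa) < 1e-18
```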
6.6 FORCE AS A FUNCTION OF EXTENSION

We can use the machinery of statistical mechanics, in particular the canonical ensemble, to determine the force and other macroscopic properties. Recall that the canonical partition function is

\[ Z = \sum_n e^{-E_n/k_B T} \]

For this discussion, we will rearrange our definition of \(Z\) to instead read

\[ Z = \sum_E g_N(E)\, e^{-E/k_B T} \]

This is now a sum over all allowable energies, and \(g_N(E)\) is the energy level degeneracy (known as the density of states). If \(E\) is thought of as the work done to move one end of the polymer from the origin to a position \(\vec{r}\), then

\[ E = -\vec{F} \cdot \vec{r} \]

Plugging this in yields

\[ Z = \sum_{\vec{r}} g_N(\vec{r})\, e^{\vec{F} \cdot \vec{r}/k_B T} \]

We recognize that we can substitute \(\kappa\) into \(g_N(\vec{r})\), remembering the equivalence \(N\ell^2 = L\ell\). As such,

\[ g_N(\vec{r}) = \gamma^N \left( \frac{3}{2\pi L\ell} \right)^{3/2} e^{-3\vec{r}^2/2N\ell^2} = \gamma^N \left( \frac{3}{2\pi L\ell} \right)^{3/2} e^{-3\vec{r}^2/2L\ell} = \gamma^N \left( \frac{3}{2\pi L\ell} \right)^{3/2} e^{-\kappa \vec{r}^2/2k_B T} \]

Plugging in for \(g_N\) yields

\[ Z = \sum_{\vec{r}} \gamma^N \left( \frac{3}{2\pi L\ell} \right)^{3/2} e^{-\kappa \vec{r}^2/2k_B T}\, e^{\vec{F} \cdot \vec{r}/k_B T} = \mathrm{const} \cdot \sum_{\vec{r}} e^{-\kappa \vec{r}^2/2k_B T + \vec{F} \cdot \vec{r}/k_B T} \]

Converting this to an integral in the continuum limit,


\[ Z = \mathrm{const} \cdot \int_{-\infty}^{\infty} e^{-\kappa \vec{r}^2/2k_B T + \vec{F} \cdot \vec{r}/k_B T}\, d\vec{r} \]

Let us assume that \(\vec{F}\) only acts in the \(\hat{x}\) direction, so \(F_y = F_z = 0\). Then, if we write the integral in Cartesian coordinates (and denote \(|\vec{F}| = F_x \equiv \digamma\)),

\[ Z = \mathrm{const} \cdot \left( \int_{-\infty}^{\infty} e^{-\kappa x^2/2k_B T + \digamma x/k_B T}\, dx \int_{-\infty}^{\infty} e^{-\kappa y^2/2k_B T}\, dy \int_{-\infty}^{\infty} e^{-\kappa z^2/2k_B T}\, dz \right) \]

This can be integrated by using the following identity (in addition to the one already mentioned before):

\[ \int_{-\infty}^{\infty} e^{-\alpha x^2 + \beta x}\, dx = \sqrt{\frac{\pi}{\alpha}}\, e^{\beta^2/4\alpha} \]

Employing this identity yields

\[ Z = \mathrm{const} \cdot \left( \frac{2\pi k_B T}{\kappa} \right)^{3/2} e^{\digamma^2/2\kappa k_B T} \]

We now wish to obtain an expression for \(\langle x \rangle\). To do so, we note that

\[ \langle x \rangle = k_B T \frac{\partial \ln(Z)}{\partial \digamma} \]

To see why this is true takes a bit of foresight. The following is a brief explanation. Recall that the average value of a quantity \(\xi\) has the form

\[ \langle \xi \rangle = \frac{\int \xi\, e^{-E(\xi)/k_B T}}{\int e^{-E(\xi)/k_B T}} \]

We want to find an expression that has this form. We begin with our expression for the partition function in its integral formulation:

\[ Z = \mathrm{const} \cdot \left( \int_{-\infty}^{\infty} e^{-\kappa x^2/2k_B T + \digamma x/k_B T}\, dx \int_{-\infty}^{\infty} e^{-\kappa y^2/2k_B T}\, dy \int_{-\infty}^{\infty} e^{-\kappa z^2/2k_B T}\, dz \right) \]

Now, we take the derivative with respect to \(\digamma\) and divide by \(Z\) to get

\[ \frac{1}{Z} \frac{\partial Z}{\partial \digamma} = \frac{1}{Z} \left[ \mathrm{const} \cdot \frac{1}{k_B T} \int_{-\infty}^{\infty} x\, e^{-\kappa x^2/2k_B T + \digamma x/k_B T}\, dx \int_{-\infty}^{\infty} e^{-\kappa y^2/2k_B T}\, dy \int_{-\infty}^{\infty} e^{-\kappa z^2/2k_B T}\, dz \right] \]

We see that the integral in the \(x\) direction now has an \(x\) term multiplying the exponential. That looks just like the definition of the average value quoted above! This means that

\[ \langle x \rangle \sim \frac{1}{Z} \frac{\partial Z}{\partial \digamma} \]

or, put another way,

\[ \langle x \rangle \sim \frac{\partial \ln(Z)}{\partial \digamma} \]

with the constant of proportionality being \(k_B T\). It is important to note that we do not include the lumped constant term in the expression for \(\langle x \rangle\), since it is also present in \(Z\) and will therefore cancel. We now calculate \(\langle x \rangle\):

\[ \ln(Z) = \ln(\mathrm{const}) + \frac{3}{2} \ln\left( \frac{2\pi k_B T}{\kappa} \right) + \frac{\digamma^2}{2\kappa k_B T} \]

Therefore,

\[ \frac{\partial \ln(Z)}{\partial \digamma} = \frac{\digamma}{\kappa k_B T} \]

This then means

\[ \langle x \rangle = k_B T \frac{\partial \ln(Z)}{\partial \digamma} = \frac{\digamma}{\kappa} = \frac{1}{3} \frac{L\ell}{k_B T}\, \digamma \]

Therefore, if we plot \(\langle x \rangle / L\) as a function of \(\digamma \ell / k_B T\), we get a line with a slope of one-third. While this is true experimentally for small to moderate values of \(\digamma\), the polymer obviously cannot extend infinitely – it must level off at high \(\digamma\) and approach the contour length. This is where the following models come into play.

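The identity \(\langle x \rangle = k_B T\, \partial \ln Z / \partial \digamma\) is easy to check by finite differences on the Gaussian-chain \(\ln Z\) (a sketch; \(\kappa\) and the force are illustrative, in \(k_B T = 1\) units):

```python
import math

kT, kappa = 1.0, 2.5   # illustrative values

def ln_z(f):
    """ln Z for the Gaussian chain (additive constant dropped)."""
    return 1.5 * math.log(2 * math.pi * kT / kappa) + f * f / (2 * kappa * kT)

f, d = 0.7, 1e-6
x_fd = kT * (ln_z(f + d) - ln_z(f - d)) / (2 * d)   # numerical derivative
assert abs(x_fd - f / kappa) < 1e-8                  # <x> = F/kappa, linear in force
```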
6.7 FREELY JOINTED CHAIN

In the freely jointed chain model, we model a polymer by discrete segments connected by hinges. We discretize the polymer curve with tangents \(\vec{t}_j\), each with magnitude equal to the persistence length, \(|\vec{t}_j| = \ell\). We begin by writing the partition function for the system. We only include the configurational part (i.e. any factors of \(1/h\) and any momentum integral terms are neglected) because we really care about ratios of partition functions, and constants will cancel. The energy is now

\[ E = -\vec{F} \cdot \sum_{j=1}^{N} \vec{t}_j \]

The configurational partition function is then

\[ Z = \int d\Omega_1 \int d\Omega_2 \cdots \int d\Omega_N\, e^{\vec{F} \cdot \sum_{j=1}^N \vec{t}_j / k_B T} \]

where

\[ \int d\Omega_j = \int_0^{2\pi} d\phi_j \int_0^{\pi} d\theta_j \sin(\theta_j) \]

To explain what's going on here: we are doing our position integrals over a spherical coordinate space, but only the angular terms matter, since each segment is assumed to have the same length. The exponential in the definition of the canonical partition function usually contains the negative of the Hamiltonian. In this case,

the energy is simply the (negative of the) force times the distance, which is now this tangent vector. We can note from the definition of the dot product that

\[ \vec{F} \cdot \vec{t}_j = \digamma \ell \cos(\theta_j) \]

This then means we can rewrite the partition function as

\[ Z = \int d\Omega_1 \int d\Omega_2 \cdots \int d\Omega_N\, e^{\digamma \sum_{j=1}^N \ell \cos(\theta_j)/k_B T} \]

We can rewrite this without the summation in the exponential (since a sum in an exponential can be rewritten as a product of exponentials):

\[ Z = \prod_{j=1}^{N} \int d\Omega_j\, e^{\digamma \ell \cos(\theta_j)/k_B T} \]

Substituting in for our generalized angular coordinates yields

\[ Z = \prod_{j=1}^{N} \left[ \int_0^{2\pi} d\phi_j \int_0^{\pi} d\theta_j \sin(\theta_j)\, e^{\digamma \ell \cos(\theta_j)/k_B T} \right] \]

That is going to get messy, so let's evaluate \(Z_1\) (\(N = 1\)) first (or we could note that the two integrals yield constant terms, so we just raise their product to the power of \(N\) at the end). That is,

\[ Z_1 = \int_0^{2\pi} d\phi_j \int_0^{\pi} d\theta_j \sin(\theta_j)\, e^{\digamma \ell \cos(\theta_j)/k_B T} \]

The integral over \(d\phi_j\) is just \(2\pi\), so that can be factored out. The remaining integral is a bit harder, but can be done via \(u\)-substitution to yield

\[ \int_0^{\pi} d\theta_j \sin(\theta_j)\, e^{\digamma \ell \cos(\theta_j)/k_B T} = \frac{2k_B T}{\digamma \ell} \sinh\left( \frac{\digamma \ell}{k_B T} \right) \]

Therefore,

\[ Z_1 = \frac{4\pi k_B T}{\digamma \ell} \sinh\left( \frac{\digamma \ell}{k_B T} \right) \]

Scaling this up to \(N\) (distinguishable) segments yields

\[ Z = \left( \frac{4\pi k_B T}{\digamma \ell} \sinh\left( \frac{\digamma \ell}{k_B T} \right) \right)^N \]

If we are stretching the polymer in the \(\hat{x}\) direction, we again wish to find \(\langle x \rangle\), so we can use the result from before that

\[ \langle x \rangle = k_B T \frac{\partial \ln(Z)}{\partial \digamma} \]


If we really want to prove this to ourselves, though, we begin with our partition function in integral form as

\[ Z = \int d\Omega_1 \int d\Omega_2 \cdots \int d\Omega_N\, e^{\digamma \sum_{j=1}^N \ell \cos(\theta_j)/k_B T} \]

Then we realize that if we take a derivative with respect to \(\digamma\) and divide by \(Z\), we get the following:

\[ \frac{1}{Z} \frac{\partial Z}{\partial \digamma} = \frac{1}{Z} \left[ \frac{1}{k_B T} \int d\Omega_1 \int d\Omega_2 \cdots \int d\Omega_N \left( \sum_{j=1}^{N} \ell \cos(\theta_j) \right) e^{\digamma \sum_{j=1}^N \ell \cos(\theta_j)/k_B T} \right] \]

That looks like an average of the position and supports the conclusion that

\[ \langle x \rangle = k_B T \frac{\partial \ln(Z)}{\partial \digamma} \]

We start with

\[ \ln(Z) = N \ln\left( \frac{4\pi k_B T}{\digamma \ell} \sinh\left( \frac{\digamma \ell}{k_B T} \right) \right) \]

We then note that

\[ \frac{d}{d\xi} \ln\left( \frac{\alpha \sinh(\beta\xi)}{\xi} \right) = \beta \coth(\beta\xi) - \frac{1}{\xi} \]

To use this identity, we note that \(\alpha = 4\pi k_B T/\ell\), \(\beta = \ell/k_B T\), and \(\xi = \digamma\), such that \(\langle x \rangle\) becomes

\[ \langle x \rangle = N\ell \left( \coth\left( \frac{\digamma \ell}{k_B T} \right) - \frac{k_B T}{\digamma \ell} \right) \]

We know that \(L = N\ell\), so

\[ \langle x \rangle = L \left( \coth\left( \frac{\digamma \ell}{k_B T} \right) - \frac{k_B T}{\digamma \ell} \right) \]

We want to understand the limiting behavior. As \(\xi \to 0\), \(\coth(\xi) \to \frac{1}{\xi} + \frac{\xi}{3} + O(\xi^3)\). Therefore, in the limit of \(\digamma \to 0\) we get

\[ \langle x \rangle = L \left( \frac{k_B T}{\digamma \ell} + \frac{1}{3} \frac{\digamma \ell}{k_B T} - \frac{k_B T}{\digamma \ell} \right) \]

or

\[ \frac{\langle x \rangle}{L} = \frac{1}{3} \frac{\digamma \ell}{k_B T} \]

so the initial slope is still one-third, which is good. As \(\xi \to \infty\), \(\coth(\xi) \to 1\), so if we let \(\digamma \to \infty\), then \(\langle x \rangle \to L\), and therefore the polymer length approaches the contour length at high force, which is also good. Experimentally, this is pretty good, but there is another improvement we need to make. At high force, the deviation from full extension, \(L - \langle x \rangle\), should decay as \(1/\sqrt{\digamma}\), not \(1/\digamma\). As such, we go to the next more complicated model.

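Both limits of the freely jointed chain result can be checked numerically (a sketch; \(\xi = \digamma\ell/k_B T\) is the dimensionless force):

```python
import math

def fjc_extension(xi):
    """<x>/L for the freely jointed chain, xi = F*ell/(k_B T)."""
    return 1.0 / math.tanh(xi) - 1.0 / xi

# Low-force limit: slope one-third
xi = 1e-3
assert abs(fjc_extension(xi) - xi / 3) < 1e-6

# High-force limit: extension approaches the contour length (as 1 - 1/xi)
assert fjc_extension(50.0) > 0.97
```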

6.8 WORM-LIKE CHAIN

In the worm-like chain model, we no longer let \(|\vec{t}_j| = \ell\), but instead use some microscopic length scale \(a\) such that \(|\vec{t}_j| = a\), with \(a \ll \ell\). We also add resistance to bending, such that the tangents want to be parallel. The energy function is now the same as in the freely jointed chain, but with an energy "bonus" for adjacent tangents in the same direction (and a penalty if they are not parallel):

\[ E = -\tilde{K} \sum_{j=1}^{N-1} \vec{t}_j \cdot \vec{t}_{j+1} - \vec{F} \cdot \sum_{j=1}^{N} \vec{t}_j \]

with \(\tilde{K} > 0\) so that the tangents want to align locally. We then note that

\[ -\tilde{K} \sum_{j=1}^{N-1} \vec{t}_j \cdot \vec{t}_{j+1} = \frac{1}{2}\tilde{K} \sum_{j=1}^{N-1} \left( \vec{t}_{j+1} - \vec{t}_j \right)^2 - N\tilde{K} \]

such that

\[ E = \frac{1}{2}\tilde{K} \sum_{j=1}^{N-1} \left( \vec{t}_{j+1} - \vec{t}_j \right)^2 - N\tilde{K} - \vec{F} \cdot \sum_{j=1}^{N} \vec{t}_j \]

We note that \((\vec{t}_{j+1} - \vec{t}_j)^2\) looks like a derivative term: it looks like \((d\vec{t}/dj)^2\), where \(j\) is just a counting variable whose finite difference is 1. If we define an arc length, \(s\), that describes position along the chain, then we can write \(\vec{t}\) in terms of \(s\) instead of \(j\). It can then be noted that \(a = ds/dj\) and \(\vec{t} = a\vec{n}\), where \(\vec{n}\) is a unit tangent vector. As such, we will say

\[ \left( \vec{t}_{j+1} - \vec{t}_j \right)^2 = \left( \frac{d\vec{t}}{dj} \right)^2 = \left( \frac{d\vec{t}}{ds} \frac{ds}{dj} \right)^2 = a^2 \left( \frac{d\vec{t}}{ds} \right)^2 = a^4 \left( \frac{d\vec{n}}{ds} \right)^2 \]

Plugging this into the energy expression yields

\[ E = \frac{1}{2}\tilde{K} \sum_{j=1}^{N} a^4 \left( \frac{d\vec{n}}{ds} \right)^2 - N\tilde{K} - \vec{F} \cdot \sum_{j=1}^{N} \vec{t}_j \]

We now convert our sums into integrals in the continuum limit via the following thought process. We start by noting that, since \(\Delta j\) is 1,

\[ \sum_j f = \int f\, dj \]

Then we know that \(a = ds/dj\), so \(dj = \frac{1}{a}\, ds\). As such,

\[ \sum_j f = \int f\, dj = \frac{1}{a} \int f\, ds \]

Plugging this in yields

\[ E = \frac{1}{2}\tilde{K} \frac{1}{a} \int_0^L a^4 \left( \frac{d\vec{n}}{ds} \right)^2 ds - N\tilde{K} - \vec{F} \cdot \int_0^L \vec{n}(s)\, ds \]

If we define \(K \equiv \tilde{K} a^3\), then

\[ E = \frac{1}{2}K \int_0^L \left( \frac{d\vec{n}}{ds} \right)^2 ds - N\tilde{K} - \vec{F} \cdot \int_0^L \vec{n}(s)\, ds \]

The energy is now a function of \(\vec{n}\), which is a function of \(s\), so \(E\) is said to be a functional. Although it is not obvious yet, we can simply drop the \(N\tilde{K}\) term, since it is a constant and will be divided out at a later point. With this adjustment,

\[ E = \frac{1}{2}K \int_0^L \left( \frac{d\vec{n}}{ds} \right)^2 ds - \vec{F} \cdot \int_0^L \vec{n}(s)\, ds \]

We can split this up into a bending and a force component, where \(E = E_{\mathrm{bending}} + E_{\mathrm{force}}\) with

\[ E_{\mathrm{bending}} = \frac{1}{2}K \int_0^L \left( \frac{d\vec{n}}{ds} \right)^2 ds \]

and

\[ E_{\mathrm{force}} = -\vec{F} \cdot \int_0^L \vec{n}(s)\, ds \]

We consider pulling the polymer in the \(\hat{z}\) direction, such that \(\vec{F}\) is parallel to \(\hat{z}\). We note that \(\vec{n}\) is a unit vector (whose magnitude must be unity). As such, we can write it in terms of its components as follows:

\[ \vec{n}(s) = \begin{pmatrix} n_x(s) \\ n_y(s) \\ \sqrt{1 - n_x^2 - n_y^2} \end{pmatrix} \]

For the remainder of the derivation, we consider large forces only (since this is the regime we wish to make more accurate – the one-third slope in the low-force regime is already accurate). We note the following Taylor expansion:

\[ \sqrt{1 - \xi} \approx 1 - \frac{1}{2}\xi + O(\xi^2) \]

We apply this expansion to our expression for \(\vec{n}(s)\), letting \(\xi = n_x^2 + n_y^2\) and keeping only the first two terms, to arrive at

\[ \vec{n}(s) \approx \begin{pmatrix} n_x(s) \\ n_y(s) \\ 1 - \frac{1}{2}\left( n_x^2 + n_y^2 \right) \end{pmatrix} \]

We want to get the derivative term in our \(E_{\mathrm{bending}}\) expression. As such,


\[ \left( \frac{d\vec{n}}{ds} \right)^2 = \left( \frac{dn_x}{ds} \right)^2 + \left( \frac{dn_y}{ds} \right)^2 + O\!\left( n_x^4 + n_y^4 \right) \]

which we will simplify to just

\[ \left( \frac{d\vec{n}}{ds} \right)^2 = \left( \frac{dn_x}{ds} \right)^2 + \left( \frac{dn_y}{ds} \right)^2 \]

This then makes \(E_{\mathrm{bending}}\) become

\[ E_{\mathrm{bending}} = \frac{1}{2}K \int_0^L \left[ \left( \frac{dn_x}{ds} \right)^2 + \left( \frac{dn_y}{ds} \right)^2 \right] ds \]

The \(E_{\mathrm{force}}\) term can now be expressed with our approximation for \(\vec{n}(s)\); only the \(n_z\) component survives, since we are pulling in the \(\hat{z}\) direction only:

\[ E_{\mathrm{force}} = -F \int_0^L \left[ 1 - \frac{1}{2}\left( n_x^2 + n_y^2 \right) \right] ds = -FL + \frac{1}{2}F \int_0^L \left[ n_x^2 + n_y^2 \right] ds \]

This means

\[ E = \frac{1}{2}K \int_0^L \left[ \left( \frac{dn_x}{ds} \right)^2 + \left( \frac{dn_y}{ds} \right)^2 \right] ds + \frac{1}{2}F \int_0^L \left[ n_x^2 + n_y^2 \right] ds - FL \]

The \(-FL\) term is a constant that will also be cancelled later on. We now have an expression for the energy that we can use to find the partition function. Since \(E\) is a functional of \(\vec{n}(s)\), we must perform a path integral, denoted by the \(\mathcal{D}\) operator, when evaluating the partition function (shown below for reference):

\[ Z = \int e^{-E[\vec{n}(s)]/k_B T}\, \mathcal{D}[\vec{n}(s)] \]

This is going to be very difficult to compute, so before we tackle it, we convert our energy expression to Fourier space. Returning to our energy expression, we can write \(n_x(s)\) (or \(n_y(s)\), for that matter) as

\[ n_x(s) = \frac{1}{L} \sum_q e^{iqs}\, \hat{n}_x(q) \]

for \(0 \leq s \leq L\). We now assume periodic boundary conditions, such that \(n_x(s) = n_x(s + L)\). This turns out to be a rather reasonable assumption. It then means that

\[ \frac{1}{L} \sum_q e^{iqs}\, \hat{n}_x(q) = \frac{1}{L} \sum_q e^{iq(s+L)}\, \hat{n}_x(q) \]

For this to be true, \(e^{iqL} = 1\). Therefore,

\[ q = \frac{2\pi}{L} m \]

where \(m = 0, \pm 1, \pm 2, \ldots\). We recognize that a constraint on \(n_x(s)\) is that it must be real, such that \(n_x(s) = n_x^*(s)\), where the asterisk denotes the complex conjugate. Therefore,

\[ \frac{1}{L} \sum_q e^{iqs}\, \hat{n}_x(q) = \frac{1}{L} \sum_q e^{-iqs}\, \hat{n}_x^*(q) \]

Relabeling \(q \to -q\) in the second sum, this implies that \(\hat{n}_x^*(q) = \hat{n}_x(-q)\). With these expressions, we can plug our expansion for \(n_x(s)\) into the integral in the bending energy term, which as a reminder is

\[ E_{\mathrm{bending}} = \frac{1}{2}K \int_0^L \left[ \left( \frac{dn_x}{ds} \right)^2 + \left( \frac{dn_y}{ds} \right)^2 \right] ds \]

Differentiating the Fourier series term by term,

\[ \left( \frac{dn_x}{ds} \right)^2 = \left[ \frac{1}{L} \sum_q iq\, e^{iqs}\, \hat{n}_x(q) \right] \left[ \frac{1}{L} \sum_k ik\, e^{iks}\, \hat{n}_x(k) \right] \]

Then

\[ \int_0^L \left( \frac{dn_x}{ds} \right)^2 ds = -\frac{1}{L^2} \sum_q \sum_k qk \int_0^L e^{i(q+k)s}\, ds\, \hat{n}_x(q)\, \hat{n}_x(k) \]

We note that

\[ \int_0^L e^{i(q+k)s}\, ds = L\, \delta_{q+k,0} \]

The notation \(\delta_{q+k,0}\) means it has a value of 1 when \(q = -k\) and 0 when \(q \neq -k\). Therefore, we can write our double sum as a single sum by setting \(k = -q\) (the \(\delta_{q+k,0}\) collapses the double sum and kills off the exponential):

\[ \int_0^L \left( \frac{dn_x}{ds} \right)^2 ds = \frac{1}{L} \sum_q q^2\, \hat{n}_x(q)\, \hat{n}_x(-q) \]

We then note that

\[ \hat{n}_x(q)\, \hat{n}_x^*(q) = \left( \mathrm{Re}(\hat{n}_x) \right)^2 + \left( \mathrm{Im}(\hat{n}_x) \right)^2 \equiv |\hat{n}_x(q)|^2 \]

From the reality condition derived above, we can also state that

\[ \hat{n}_x(q)\, \hat{n}_x(-q) = |\hat{n}_x(q)|^2 \]

We can now use this to state


퐿 푑푛 2 1 ∫ ( 푥) 푑푠 = ∑ 푞2|푛̂ (푞)|2 푑푠 퐿 푥 0 푞 This mean our bending term is the following, when we plug in the above term (and repeat the process for 푛푦:

$$E_\text{bending} = \frac{1}{2}\frac{K}{L}\sum_q q^2 \left(|\hat{n}_x(q)|^2 + |\hat{n}_y(q)|^2\right)$$

We now wish to find the force term, which we recall was:

$$E_\text{force} = -FL + \frac{F}{2}\int_0^L \left(n_x^2 + n_y^2\right) ds$$

We note that

$$n_x^2 = \frac{1}{L^2}\left[\sum_q e^{iqs}\, \hat{n}_x(q)\right]\left[\sum_k e^{iks}\, \hat{n}_x(k)\right]$$

Then

$$\int_0^L n_x^2\, ds = \frac{1}{L^2}\sum_q \sum_k \int_0^L e^{i(q+k)s}\, ds\; \hat{n}_x(q)\, \hat{n}_x(k)$$

This again can be simplified via the delta function such that

$$\int_0^L n_x^2\, ds = \frac{1}{L}\sum_q |\hat{n}_x(q)|^2$$

Therefore,

$$E_\text{force} = -FL + \frac{1}{2}\frac{F}{L}\sum_q \left(|\hat{n}_x(q)|^2 + |\hat{n}_y(q)|^2\right)$$

Our total energy expression is then

$$E = \frac{1}{2}\frac{K}{L}\sum_q q^2 \left(|\hat{n}_x(q)|^2 + |\hat{n}_y(q)|^2\right) + \frac{1}{2}\frac{F}{L}\sum_q \left(|\hat{n}_x(q)|^2 + |\hat{n}_y(q)|^2\right) - FL$$

This can be rewritten as

$$E = \frac{1}{2L}\sum_q \left((Kq^2 + F)\,|\hat{n}_x(q)|^2 + (Kq^2 + F)\,|\hat{n}_y(q)|^2\right) - FL$$

We now have an energy expression, written in Fourier space, that we can use to evaluate the partition function. Our partition function once again is


$$Z = \int e^{-E[\vec{n}(s)]/k_B T}\, \mathcal{D}[\vec{n}(s)]$$

In Fourier space, we can rewrite the path integral as

$$\int \mathcal{D}[\vec{n}(s)] = \int_{-\infty}^{\infty} d[\mathrm{Re}(\hat{n}_x(q))] \int_{-\infty}^{\infty} d[\mathrm{Im}(\hat{n}_x(q))] \int_{-\infty}^{\infty} d[\mathrm{Re}(\hat{n}_y(q))] \int_{-\infty}^{\infty} d[\mathrm{Im}(\hat{n}_y(q))]$$

(one such set of integrals for each mode $q$). We then use this to write our partition function as

$$Z = \prod_{q \ge 0} \int_{-\infty}^{\infty} d[\mathrm{Re}(\hat{n}_x(q))] \int_{-\infty}^{\infty} d[\mathrm{Im}(\hat{n}_x(q))] \int_{-\infty}^{\infty} d[\mathrm{Re}(\hat{n}_y(q))] \int_{-\infty}^{\infty} d[\mathrm{Im}(\hat{n}_y(q))]\; e^{-E_q/k_B T}$$

where the product over $q \ge 0$ is to prevent double-counting (the $q$ and $-q$ amplitudes are not independent). Note that I have written $E_q$ instead of $E$, which is what allows me to introduce the product notation. We want to find the average extension length, $\langle r \rangle$, if we pull in the $\hat{z}$ direction. As such,

$$\langle r \rangle = \hat{z} \cdot \left\langle \sum_j \vec{t}_j \right\rangle$$

We note from before that $\vec{t} = a\vec{n}$ and that $\sum_j f \approx \frac{1}{a}\int f\, ds$, so

$$\langle r \rangle = \left\langle \int_0^L n_z(s)\, ds \right\rangle = \int_0^L \langle n_z(s) \rangle\, ds$$
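The replacement $\sum_j f \approx \frac{1}{a}\int f\, ds$ used above is just a Riemann-sum statement; a quick sketch with an arbitrary smooth test function and an illustrative segment length:

```python
# Compare the sum over closely spaced points, Σ_j f(s_j), with (1/a) ∫_0^L f ds.
# Test function f(s) = sin(s) and parameter values are illustrative.
import math

a = 0.001                     # small segment length (assumed)
L = 2.0
n = int(L / a)

riemann = sum(math.sin(j * a) for j in range(n))
integral_over_a = (1.0 - math.cos(L)) / a   # (1/a) * ∫_0^L sin(s) ds

print(riemann, integral_over_a)   # agree up to O(a) corrections
```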

For large $F$, $\langle n_z(s)\rangle$ is essentially independent of $s$, so

$$\langle r \rangle = L \langle n_z \rangle$$

We can plug in our expression for $n_z$ to get

$$\frac{\langle r \rangle}{L} = \left\langle \sqrt{1 - n_x^2 - n_y^2} \right\rangle \approx 1 - \frac{1}{2}\left(\langle n_x^2 \rangle + \langle n_y^2 \rangle\right)$$

We need to evaluate $\langle n_x^2 \rangle$ and $\langle n_y^2 \rangle$. We start by noting that

$$\langle n_x^2(s) \rangle = \frac{1}{L^2}\sum_q \sum_k e^{i(q+k)s}\, \langle \hat{n}_x(q)\, \hat{n}_x(k) \rangle$$

We're a step closer, but we now need to evaluate $\langle \hat{n}_x(q)\, \hat{n}_x(k) \rangle$. I will temporarily switch to a notation of $q_1$ and $q_2$ instead of $q$ and $k$. As such,

$$\langle \hat{n}_x(q_1)\, \hat{n}_x(q_2) \rangle = \frac{\int e^{-E[\vec{n}(s)]/k_B T}\, \hat{n}_x(q_1)\, \hat{n}_x(q_2)\, \mathcal{D}[\vec{n}(s)]}{\int e^{-E[\vec{n}(s)]/k_B T}\, \mathcal{D}[\vec{n}(s)]}$$

This integral can be written out as follows:


$$\langle \hat{n}_x(q_1)\, \hat{n}_x(q_2) \rangle = \frac{\prod_{q \ge 0} \int_{-\infty}^{\infty} d[\mathrm{Re}(\hat{n}_x(q))] \int_{-\infty}^{\infty} d[\mathrm{Im}(\hat{n}_x(q))] \int_{-\infty}^{\infty} d[\mathrm{Re}(\hat{n}_y(q))] \int_{-\infty}^{\infty} d[\mathrm{Im}(\hat{n}_y(q))]\; \hat{n}_x(q_1)\, \hat{n}_x(q_2)\, e^{-E_q/k_B T}}{\prod_{q \ge 0} \int_{-\infty}^{\infty} d[\mathrm{Re}(\hat{n}_x(q))] \int_{-\infty}^{\infty} d[\mathrm{Im}(\hat{n}_x(q))] \int_{-\infty}^{\infty} d[\mathrm{Re}(\hat{n}_y(q))] \int_{-\infty}^{\infty} d[\mathrm{Im}(\hat{n}_y(q))]\; e^{-E_q/k_B T}}$$

It may not be immediately clear how to simplify this further. However, let us have a brief side-thought. In general, if we wanted to calculate $\langle x^2 \rangle$ for an energy $E = x^2 + y^2 + z^2$, then we could say

$$\langle x^2 \rangle = \frac{\int dx \int dy \int dz\; x^2\, e^{-(x^2+y^2+z^2)/k_B T}}{\int dx \int dy \int dz\; e^{-(x^2+y^2+z^2)/k_B T}}$$

However, since the exponential can be split into the product $e^{-x^2/k_B T}\, e^{-y^2/k_B T}\, e^{-z^2/k_B T}$, the $y$ and $z$ integrals cancel between numerator and denominator, such that

$$\langle x^2 \rangle = \frac{\int dx\; x^2\, e^{-x^2/k_B T}}{\int dx\; e^{-x^2/k_B T}}$$

This result can be extended to our worm-like chain problem. We note that our average involves only the $q_1$ and $q_2$ modes, so we can cancel all the integrals over the other modes. This makes the problem much more tractable. Doing so, and plugging in $E_q$,

$$\langle \hat{n}_x(q_1)\, \hat{n}_x(q_2) \rangle = \frac{\int_{-\infty}^{\infty} d[\mathrm{Re}(\hat{n}_x(q_1))] \int_{-\infty}^{\infty} d[\mathrm{Im}(\hat{n}_x(q_1))] \int_{-\infty}^{\infty} d[\mathrm{Re}(\hat{n}_x(q_2))] \int_{-\infty}^{\infty} d[\mathrm{Im}(\hat{n}_x(q_2))]\; \hat{n}_x(q_1)\, \hat{n}_x(q_2)\, e^{-\frac{(Kq_1^2+F)|\hat{n}_x(q_1)|^2}{2Lk_B T}}\, e^{-\frac{(Kq_2^2+F)|\hat{n}_x(q_2)|^2}{2Lk_B T}}}{\int_{-\infty}^{\infty} d[\mathrm{Re}(\hat{n}_x(q_1))] \int_{-\infty}^{\infty} d[\mathrm{Im}(\hat{n}_x(q_1))] \int_{-\infty}^{\infty} d[\mathrm{Re}(\hat{n}_x(q_2))] \int_{-\infty}^{\infty} d[\mathrm{Im}(\hat{n}_x(q_2))]\; e^{-\frac{(Kq_1^2+F)|\hat{n}_x(q_1)|^2}{2Lk_B T}}\, e^{-\frac{(Kq_2^2+F)|\hat{n}_x(q_2)|^2}{2Lk_B T}}}$$

We then recall the result of the periodic boundary condition: that $q_1 = -q_2$, which we will just call $q$. As such, we can simplify the equation further because $\hat{n}_x(q_1)\, \hat{n}_x(q_2) = \hat{n}_x(q_1)\, \hat{n}_x(-q_1) = |\hat{n}_x(q)|^2 = \left(\mathrm{Re}(\hat{n}_x(q))\right)^2 + \left(\mathrm{Im}(\hat{n}_x(q))\right)^2$, and also by writing our integrals only in terms of this single $q$. Substituting this in,

$$\langle \hat{n}_x(q_1)\, \hat{n}_x(q_2) \rangle = \frac{\int_{-\infty}^{\infty} d[\mathrm{Re}(\hat{n}_x(q))] \int_{-\infty}^{\infty} d[\mathrm{Im}(\hat{n}_x(q))]\; \left[\left(\mathrm{Re}(\hat{n}_x(q))\right)^2 + \left(\mathrm{Im}(\hat{n}_x(q))\right)^2\right] e^{-\frac{(Kq^2+F)\left[\left(\mathrm{Re}(\hat{n}_x(q))\right)^2 + \left(\mathrm{Im}(\hat{n}_x(q))\right)^2\right]}{2Lk_B T}}}{\int_{-\infty}^{\infty} d[\mathrm{Re}(\hat{n}_x(q))] \int_{-\infty}^{\infty} d[\mathrm{Im}(\hat{n}_x(q))]\; e^{-\frac{(Kq^2+F)\left[\left(\mathrm{Re}(\hat{n}_x(q))\right)^2 + \left(\mathrm{Im}(\hat{n}_x(q))\right)^2\right]}{2Lk_B T}}}$$

This expression can actually be easily evaluated. We note that

$$\frac{\int_{-\infty}^{\infty} x^2\, e^{-\alpha x^2}\, dx}{\int_{-\infty}^{\infty} e^{-\alpha x^2}\, dx} = \frac{1}{2\alpha}$$

To apply this to our expression, we note the following. In general,


$$\frac{\iint dx\, dy\; (x^2+y^2)\, e^{-\alpha(x^2+y^2)}}{\iint dx\, dy\; e^{-\alpha(x^2+y^2)}} = \frac{2\iint dx\, dy\; x^2\, e^{-\alpha x^2}\, e^{-\alpha y^2}}{\iint dx\, dy\; e^{-\alpha x^2}\, e^{-\alpha y^2}} = \frac{2\int dx\; x^2\, e^{-\alpha x^2}}{\int dx\; e^{-\alpha x^2}} = \frac{1}{\alpha}$$
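These Gaussian averages are easy to confirm numerically; the sketch below checks the one-dimensional ratio $1/(2\alpha)$ for an arbitrary $\alpha$ (the two-dimensional case is just twice this):

```python
# Check  ∫ x^2 e^{-a x^2} dx / ∫ e^{-a x^2} dx = 1/(2a)  on a wide, fine grid.
import numpy as np

alpha = 1.7                                  # illustrative value
x = np.linspace(-20.0, 20.0, 200001)
w = np.exp(-alpha * x ** 2)

ratio = np.sum(x ** 2 * w) / np.sum(w)       # grid spacing cancels in the ratio
print(ratio, 1.0 / (2.0 * alpha))            # the two values agree
```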

In our expression, $\alpha = \frac{Kq^2+F}{2Lk_B T}$, $x = \mathrm{Re}(\hat{n}_x(q))$, and $y = \mathrm{Im}(\hat{n}_x(q))$. Therefore,

$$\langle \hat{n}_x(q_1)\, \hat{n}_x(q_2) \rangle = \delta_{q_1+q_2,0}\, \frac{2Lk_B T}{Kq^2 + F}$$

Switching back to our $q$ and $k$ nomenclature,

$$\langle \hat{n}_x(q)\, \hat{n}_x(k) \rangle = \delta_{q+k,0}\, \frac{2Lk_B T}{Kq^2 + F}$$

where the $\delta_{q+k,0}$ enforces $q = -k$. We can therefore finally plug this in to get

$$\langle n_x^2(s) \rangle = \frac{1}{L}\sum_q \frac{2k_B T}{Kq^2 + F}$$

For large $L$, we can convert the summation to an integral. The process to do so is below, noting that $q = 2\pi m/L$ so $\Delta q = 2\pi/L$:

$$\sum_q f = \frac{L}{2\pi}\sum_q f\, \Delta q = \frac{L}{2\pi}\int f\, dq$$

to get

$$\langle n_x^2(s) \rangle = \frac{1}{2\pi}\int_{-\infty}^{\infty} \frac{2k_B T}{Kq^2 + F}\, dq$$

Of course, we could repeat this whole procedure for $\langle n_y^2(s) \rangle$. Plugging these two expressions into $\langle r \rangle$ yields

$$\frac{\langle r \rangle}{L} = 1 - \frac{1}{2}\left[\frac{1}{2\pi}\int_{-\infty}^{\infty} \frac{2k_B T}{Kq^2 + F}\, dq + \frac{1}{2\pi}\int_{-\infty}^{\infty} \frac{2k_B T}{Kq^2 + F}\, dq\right]$$

or

$$\frac{\langle r \rangle}{L} = 1 - \frac{1}{2\pi}\int_{-\infty}^{\infty} \frac{2k_B T}{Kq^2 + F}\, dq$$

This is a contour integral (due to the poles at $q = \pm i\sqrt{F/K}$). It can be computed, and we arrive at

$$\frac{\langle r \rangle}{L} = 1 - \frac{k_B T}{\sqrt{KF}}$$

for the high-force limit of the worm-like chain model.
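We can sanity-check the contour-integral value numerically by truncating the integral at a large cutoff. The parameter values below are arbitrary (with $k_B T = 1$):

```python
# Compare (1/2π) ∫ 2 k_B T/(K q^2 + F) dq, evaluated numerically on a truncated
# grid, with the closed form k_B T/sqrt(K F) (the correction term in <r>/L).
import numpy as np

kBT, K, F = 1.0, 10.0, 4.0        # illustrative parameters

q = np.linspace(-2000.0, 2000.0, 2000001)
integrand = 2.0 * kBT / (K * q ** 2 + F)
correction = np.sum(integrand) * (q[1] - q[0]) / (2.0 * np.pi)

print(correction)                 # numerical value of the integral / 2π
print(kBT / np.sqrt(K * F))       # contour-integral result
```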


6.9 1D HARMONIC CHAIN

Consider a one-dimensional harmonic chain of length $L$, defined with respect to lattice sites spaced a distance $\ell$ apart. Each particle in the chain sits in a potential well centered on a site (the sites are at positions $\ldots, x_{m-1}, x_m, x_{m+1}, \ldots$) and is connected to its neighbors by springs. The actual particle position is then $r_m = x_m + u_m$, where $u_m$ is the displacement from $x_m$. The energy is given as

$$E = \frac{B'}{2}\sum_m (u_{m+1} - u_m)^2 + \frac{C'}{2}\sum_m u_m^2$$

We note that in the continuum limit,

$$(u_{m+1} - u_m)^2 = \left(\frac{du}{dm}\right)^2$$

We, however, want to write $du/dx$ instead. So, noting that $dx/dm = \ell$,

$$(u_{m+1} - u_m)^2 = \left(\frac{du}{dm}\right)^2 = \left(\frac{du}{dx}\frac{dx}{dm}\right)^2 = \ell^2\left(\frac{du}{dx}\right)^2$$

Plugging into the left-hand term gets

$$E = \frac{B'\ell^2}{2}\sum_m \left(\frac{du}{dx}\right)^2 + \frac{C'}{2}\sum_m u_m^2$$

We now convert the summations to integrals (each sum picks up a factor of $dm = dx/\ell$) such that

$$E = \frac{B'\ell}{2}\int_0^L \left(\frac{du}{dx}\right)^2 dx + \frac{C'}{2\ell}\int_0^L u^2\, dx$$

We can define $B \equiv B'\ell$ and $C \equiv C'/\ell$ to get

$$E = \frac{B}{2}\int_0^L \left(\frac{du}{dx}\right)^2 dx + \frac{C}{2}\int_0^L u^2\, dx$$

We define the Fourier expansion

$$u(x) = \frac{1}{L}\sum_q e^{iqx}\, \hat{u}(q)$$

We shall address the left-hand term first. The squared derivative can be written in terms of Fourier modes as

$$\left(\frac{du}{dx}\right)^2 = \left[\frac{1}{L}\sum_q (iq)\, e^{iqx}\, \hat{u}(q)\right]\left[\frac{1}{L}\sum_k (ik)\, e^{ikx}\, \hat{u}(k)\right]$$

Then


$$\int_0^L \left(\frac{du}{dx}\right)^2 dx = -\frac{1}{L^2}\sum_q \sum_k qk \int_0^L e^{i(q+k)x}\, dx\; \hat{u}(q)\, \hat{u}(k)$$

We note that

$$\int_0^L e^{i(q+k)x}\, dx = L\, \delta_{q+k,0}$$

This kills off one of the summations to lead to

$$\int_0^L \left(\frac{du}{dx}\right)^2 dx = \frac{1}{L}\sum_q q^2\, \hat{u}(q)\, \hat{u}(-q)$$

From the reality of $u(x)$ (as in the worm-like chain derivation), we can also state that

$$\hat{u}(q) \cdot \hat{u}(-q) = |\hat{u}(q)|^2$$

We can now use this to state

$$\int_0^L \left(\frac{du}{dx}\right)^2 dx = \frac{1}{L}\sum_q q^2\, |\hat{u}(q)|^2$$

We now apply a similar procedure to the right-hand term. We note that

$$u^2 = \frac{1}{L^2}\left[\sum_q e^{iqx}\, \hat{u}(q)\right]\left[\sum_k e^{ikx}\, \hat{u}(k)\right]$$

Then

$$\int_0^L u^2\, dx = \frac{1}{L^2}\sum_q \sum_k \int_0^L e^{i(q+k)x}\, dx\; \hat{u}(q)\, \hat{u}(k)$$

This again can be simplified via the delta function such that

$$\int_0^L u^2\, dx = \frac{1}{L}\sum_q |\hat{u}(q)|^2$$

Therefore,

$$E = \frac{B}{2}\frac{1}{L}\sum_q q^2\, |\hat{u}(q)|^2 + \frac{C}{2}\frac{1}{L}\sum_q |\hat{u}(q)|^2$$

or

$$E = \frac{1}{2L}\sum_q |\hat{u}(q)|^2\, (Bq^2 + C)$$
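As a check of this Fourier-space form of the energy, the sketch below builds $u(x)$ from a couple of hypothetical mode amplitudes (with $\hat{u}(-q) = \hat{u}^*(q)$ so $u$ is real) and compares the real-space energy with the mode sum. All parameter values are illustrative:

```python
# Compare E = (B/2)∫(du/dx)^2 dx + (C/2)∫u^2 dx (real space, trapezoid rule)
# with E = (1/2L) Σ_q (B q^2 + C)|û(q)|^2 (Fourier space).
import numpy as np

B, C, L = 2.0, 3.0, 4.0
x = np.linspace(0.0, L, 80001)
dx = x[1] - x[0]

amps = {1: 0.4 - 0.1j, 3: 0.2 + 0.3j}   # hypothetical û(q), q = 2*pi*m/L

u = np.zeros_like(x)
dudx = np.zeros_like(x)
mode_sum = 0.0
for m, uhat in amps.items():
    q = 2.0 * np.pi * m / L
    u += (2.0 / L) * np.real(uhat * np.exp(1j * q * x))        # +q and -q terms
    dudx += (2.0 / L) * np.real(1j * q * uhat * np.exp(1j * q * x))
    mode_sum += 2.0 * (B * q ** 2 + C) * abs(uhat) ** 2        # ±q both counted

def trap(y):
    return (np.sum(y) - 0.5 * (y[0] + y[-1])) * dx             # trapezoid rule

E_real = 0.5 * B * trap(dudx ** 2) + 0.5 * C * trap(u ** 2)
E_fourier = mode_sum / (2.0 * L)
print(E_real, E_fourier)   # the two agree
```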


As in the worm-like chain derivation, we can define the partition function (and therefore the free energy) as

$$Z = \int e^{-E/k_B T}\, \mathcal{D}[u(x)]$$

This becomes the following in Fourier space

$$Z = \prod_{q \ge 0} \int_{-\infty}^{\infty} d[\mathrm{Re}(\hat{u}(q))] \int_{-\infty}^{\infty} d[\mathrm{Im}(\hat{u}(q))]\; e^{-E_q/k_B T}$$

where, as before, the product over $q \ge 0$ avoids double-counting and $E_q$ is the contribution of mode $q$. If we wanted to continue as in the worm-like chain derivation, we could find $\langle u^2(x) \rangle$. Taking the result from the worm-like chain derivation,

$$\langle u^2(x) \rangle = \frac{1}{L^2}\sum_q \sum_k e^{i(q+k)x}\, \langle \hat{u}(q)\, \hat{u}(k) \rangle$$

We therefore need to evaluate $\langle \hat{u}(q)\, \hat{u}(k) \rangle$, which can be done via

$$\langle \hat{u}(q)\, \hat{u}(k) \rangle = \frac{\int e^{-E/k_B T}\, \hat{u}(q)\, \hat{u}(k)\, \mathcal{D}[u]}{\int e^{-E/k_B T}\, \mathcal{D}[u]}$$

As we found in the worm-like chain derivation, this just becomes

$$\langle \hat{u}(q)\, \hat{u}(k) \rangle = \delta_{q+k,0}\, \frac{1}{2\alpha} = \delta_{q+k,0}\, \frac{1}{2\left(\frac{Bq^2 + C}{2Lk_B T}\right)} = \delta_{q+k,0}\, \frac{Lk_B T}{Bq^2 + C}$$

Plugging this in gets

$$\langle u^2(x) \rangle = \frac{1}{L}\sum_q \frac{k_B T}{Bq^2 + C}$$

For large $L$, we can convert the summation to an integral. We note that $\Delta q = 2\pi/L$, so

$$\langle u^2(x) \rangle = \frac{1}{2\pi}\int_{-\infty}^{\infty} \frac{k_B T}{Bq^2 + C}\, dq$$
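As an aside, the large-$L$ replacement of the mode sum by this integral can be checked directly; the sketch below compares the two over the same truncated range of $q$, with arbitrary illustrative parameters ($k_B T = 1$):

```python
# Compare the discrete mode sum (1/L) Σ_q k_B T/(B q^2 + C), q = 2*pi*m/L,
# with the continuum value (1/2π) ∫ k_B T/(B q^2 + C) dq on the same range.
import numpy as np

kBT, B, C, L = 1.0, 2.0, 5.0, 1000.0    # illustrative parameters, L large

m = np.arange(-200000, 200001)
q = 2.0 * np.pi * m / L
mode_sum = np.sum(kBT / (B * q ** 2 + C)) / L

qi = np.linspace(q[0], q[-1], 4000001)   # much finer grid for the integral
integral = np.sum(kBT / (B * qi ** 2 + C)) * (qi[1] - qi[0]) / (2.0 * np.pi)

print(mode_sum, integral)   # agree for large L
```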

This is a contour integral with poles at $q = \pm i\sqrt{C/B}$. As such,

$$\langle u^2(x) \rangle = \frac{k_B T}{2\sqrt{BC}}$$

which is analogous to the worm-like chain solution. Now let us assume we wish to find the free energy. For this, we go back to

$$Z = \int e^{-E/k_B T}\, \mathcal{D}[u(x)]$$


We recall that $Z = e^{-F/k_B T}$, so

$$F = -k_B T \ln\left(\int e^{-E/k_B T}\, \mathcal{D}[u(x)]\right)$$

We recall that

$$E = \frac{1}{2L}\sum_q |\hat{u}(q)|^2\, (Bq^2 + C)$$

We must restrict the summation to $q \ge 0$ (again to avoid double-counting). We write $E$ in two parts, one at $q = 0$ and one at $q > 0$, as follows:

$$E = \gamma(0)\, |\hat{u}(0)|^2 + \sum_{q>0} 2\gamma(q)\, |\hat{u}(q)|^2$$

where

$$\gamma(q) \equiv \frac{Bq^2 + C}{2L}$$

The factor of 2 in the $q > 0$ term of the expression for $E$ accounts for the equal contributions of $+q$ and $-q$. This then means our expression for $F$ becomes

$$F = -k_B T \ln\left(\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-\frac{\gamma(0)|\hat{u}(0)|^2}{k_B T}}\, d[\mathrm{Re}(\hat{u}(0))]\, d[\mathrm{Im}(\hat{u}(0))] \prod_{q>0} \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-\frac{2\gamma(q)|\hat{u}(q)|^2}{k_B T}}\, d[\mathrm{Re}(\hat{u}(q))]\, d[\mathrm{Im}(\hat{u}(q))]\right)$$

We can note the following integral:

$$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-\alpha(x^2+y^2)}\, dx\, dy = \frac{\pi}{\alpha}$$

Therefore,

$$F = -k_B T \ln\left(\frac{\pi k_B T}{\gamma(0)} \prod_{q>0} \frac{\pi k_B T}{2\gamma(q)}\right)$$
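The two-dimensional Gaussian integral used in this step can be verified directly on a truncated grid (the value of $\alpha$ is arbitrary):

```python
# Check  ∫∫ e^{-a(x^2 + y^2)} dx dy = π/a  on a finite grid wide enough
# that the Gaussian has decayed to ~0 at the edges.
import numpy as np

alpha = 0.8
x = np.linspace(-12.0, 12.0, 1501)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x)

value = np.sum(np.exp(-alpha * (X ** 2 + Y ** 2))) * h * h
print(value, np.pi / alpha)   # the two values agree
```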

Plugging back in for $\gamma$ yields

$$F = -k_B T \ln\left(\frac{2\pi k_B T L}{C} \prod_{q>0} \frac{\pi k_B T L}{Bq^2 + C}\right)$$

Therefore,

$$F = -k_B T \left[\ln\left(\frac{2\pi k_B T L}{C}\right) + \sum_{q>0} \ln\left(\frac{\pi k_B T L}{Bq^2 + C}\right)\right]$$


If we want the expression for $F$ in the limit $L \to \infty$, we can obtain it as follows. We first note that since $q = 2\pi m/L$, large $L$ implies that the spacing between the $q$'s is infinitesimal, so we can convert the summation to an integral (as we have done many times before). Therefore, noting that $\Delta q = 2\pi/L$,

$$F = -k_B T \left[\ln\left(\frac{2\pi k_B T L}{C}\right) + \frac{L}{2\pi}\int_{q>0} \ln\left(\frac{\pi k_B T L}{Bq^2 + C}\right) dq\right]$$

We can set the lower bound on the integral to $q \approx 0$, since doing so does not greatly affect the integral. Therefore,

$$F = -k_B T \left[\ln\left(\frac{2\pi k_B T L}{C}\right) + \frac{L}{2\pi}\int_0^{\infty} \ln\left(\frac{\pi k_B T L}{Bq^2 + C}\right) dq\right]$$
