hep-lat/9409003   6 Sep 1994

Critical Exponents, Hyperscaling and Universal Amplitude Ratios
for Two- and Three-Dimensional Self-Avoiding Walks

Bin Li*
Department of Physics, New York University, Washington Place, New York, NY, USA
Internet: BLI@ML.COM

Neal Madras
Department of Mathematics and Statistics, York University, Keele Street, North York, Ontario M3J 1P3, CANADA
Internet: MADRAS@NEXUS.YORKU.CA

Alan D. Sokal
Department of Physics, New York University, Washington Place, New York, NY, USA
Internet: SOKAL@NYU.EDU

* Formerly with the Department of Physics, New York University; now with the Debt and Equity Markets Group, Merrill Lynch, World Financial Center, New York, NY, USA.

KEY WORDS: Self-avoiding walk; polymer; critical exponent; hyperscaling; universal amplitude ratio; second virial coefficient; interpenetration ratio; renormalization group; two-parameter theory; Monte Carlo; pivot algorithm; Karp-Luby algorithm.

Abstract

We make a high-precision Monte Carlo study of two- and three-dimensional self-avoiding walks (SAWs) of length up to 80000 steps, using the pivot algorithm and the Karp-Luby algorithm. We study the critical exponents ν and 2Δ_4 − γ, as well as several universal amplitude ratios; in particular, we make an extremely sensitive test of the hyperscaling relation dν = 2Δ_4 − γ. In two dimensions, we confirm the predicted exponent ν = 3/4 and the hyperscaling relation, and we estimate the universal ratios ⟨R_g²⟩/⟨R_e²⟩, ⟨R_m²⟩/⟨R_e²⟩ and Ψ* (with confidence limits). In three dimensions, we estimate ν = 0.5877 ± 0.0006 with a correction-to-scaling exponent Δ_1 = 0.56 ± 0.03 (subjective confidence limits). This value for ν agrees excellently with the field-theoretic renormalization-group prediction, but there is some discrepancy for Δ_1. Earlier Monte Carlo estimates of ν, which were ≈ 0.592, are now seen to be biased by corrections to scaling. We estimate the universal ratios ⟨R_g²⟩/⟨R_e²⟩ and Ψ*; since Ψ* > 0, hyperscaling holds. The approach to Ψ* is from above, contrary to the prediction of the two-parameter renormalization-group theory. We critically reexamine this theory, and explain where the error lies.

Contents

1  Introduction
   1.1  The Problem of Hyperscaling
   1.2  Which Quantities are Universal?
   1.3  Plan of this Paper

2  Background and Notation
   2.1  The Self-Avoiding Walk (SAW): A Review
   2.2  The Pivot Algorithm: A Review

3  Algorithms for Counting Overlaps
   3.1  Generalities
   3.2  Notation
   3.3  Deterministic Algorithms
   3.4  Monte Carlo Algorithms: Generalities
   3.5  Hit-or-Miss Monte Carlo Algorithm
   3.6  Barrett Algorithm: Theory
   3.7  Karp-Luby Algorithm: Theory
   3.8  Scaling Theory
   3.9  Barrett and Karp-Luby Algorithms: Numerical Results

4  Numerical Results
   4.1  Two Dimensions
   4.2  Three Dimensions

5  Discussion
   5.1  Comparison with Previous Numerical Studies
        5.1.1  Two Dimensions
        5.1.2  Three Dimensions
   5.2  The Sign of Approach to Ψ*
   5.3  Prospects for Future Work

A  Some Geometrical Theorems
   A.1  Theorems and Proofs
   A.2  Application to SAWs

B  Adequacy of Thermalization in the Pivot Algorithm

C  Some Statistical Subtleties

D  Remarks on the Field-Theoretic Estimates of Universal Amplitude Ratios

1  Introduction

The self-avoiding walk (SAW) is a well-known lattice model of a polymer molecule in a good solvent. Its equivalence to the n → 0 limit of the n-vector model has also made it an important test case in the theory of critical phenomena.

In this paper we report the results of an extensive Monte Carlo study of two- and three-dimensional SAWs of length up to 80000 steps, using the pivot algorithm.^1 We make a high-precision determination of the critical exponents ν and 2Δ_4 − γ, as well as several universal amplitude ratios. In particular, we make an extremely sensitive test of the hyperscaling relation dν = 2Δ_4 − γ, which plays a central role in the general theory of critical phenomena (Section 1.1).

Our results have also led us to reexamine critically the conventional theory of polymer molecules, the so-called two-parameter renormalization-group theory. Indeed, such a reexamination is unavoidable, as our Monte Carlo data are inconsistent with this theory as it has heretofore been applied. But this is because, as we explain in Section 1.2, the theory has heretofore been applied incorrectly. These points were first made three years ago by Nickel, in an important but apparently underappreciated paper; they have recently been extended by one of us.

^1 These computations were carried out, over a period of years, on a variety of RISC workstations. The total CPU time was several CPU-years, but we have by now lost track of exactly how many.

1.1  The Problem of Hyperscaling

One of the key unsolved problems in the theory of critical phenomena has been the status of the so-called hyperscaling relations: scaling laws in which the spatial dimension d appears explicitly. These relations have long been known to rest on a much more tenuous physical basis than the other scaling laws. Indeed, it has been understood since the early 1970s that hyperscaling should not hold for systems above their upper critical dimension d_u: for d > d_u the critical exponents are expected to be those of mean-field theory, and these exponents satisfy the hyperscaling relations only at d = d_u.^2 For models in an n-vector universality class (including the SAW), d_u equals 4. It has generally been believed that hyperscaling should hold in dimensions d < d_u, but in our opinion there is no particularly compelling justification for such a belief, although the claim itself is probably correct. Hyperscaling in the three-dimensional Ising model is the subject of a controversy that has been raging for years, and which is still not completely settled.^3 We remark that hyperscaling is also of interest in quantum field theory, where it is equivalent to the nontriviality of the continuum limit for a strongly-coupled φ⁴ field theory.

Although the hyperscaling relations appear naively to be ineluctable consequences of the renormalization-group approach to critical phenomena, closer examination reveals a mechanism by which hyperscaling can fail: the so-called dangerous irrelevant variables.^4 But the much more difficult question of whether this violation actually occurs in a given model can be resolved only by detailed calculation. Unfortunately, a direct analytical test of hyperscaling appears to be possible only at, or in the immediate neighborhood of, a Gaussian fixed point: that is, for asymptotically free theories, or for small d_u − d or large n.

We note that the real-space RG and field-theoretic RG frameworks, as typically used in approximate calculations, implicitly assume the hyperscaling relations, so they cannot be used to test hyperscaling.

It is therefore of interest to make an unbiased numerical test of hyperscaling, working directly from first principles. One approach is series extrapolation, which affords a direct test of universality and scaling laws, including hyperscaling.

^2 This belief has now been confirmed by rigorous proofs of the failure of hyperscaling for the Ising model in dimension d > 4, the self-avoiding walk in dimension d ≥ 5, and spread-out percolation in dimension d > 6.

^3 See the references on series-extrapolation work and on Monte Carlo work.

^4 This mechanism was proposed independently by Fisher and by Wegner and Riedel in the early 1970s. For further discussion, see also Ma (Section VII), Amit and Peliti, Fisher (Appendix D), and van Enter, Fernández and Sokal.

Series extrapolation gives numerical results of apparently very high accuracy: the claimed (subjective) error bars on critical exponents are small, comparable to those of the best alternative calculational schemes. However, as is inherent in any extrapolation method, the results obtained depend critically on the assumptions made about the singularity structure of the exact function, notably the nature of the confluent singularity (if any). Indeed, estimates by different methods from the same series sometimes differ among themselves by several times their claimed error bars. This, together with systematic differences between lattices of the same dimension, accounts for much of the controversy over hyperscaling. Quite a few extra terms would be needed to resolve these discrepancies in a convincing manner. Unfortunately, the computer time required to evaluate the series coefficients grows exponentially with the number of terms desired, while the extrapolation error is proportional to some inverse power of the number of terms (the power depends on the details of the correction-to-scaling terms and the extrapolation method).

In a Monte Carlo study, by contrast, one aims to probe directly the regime where the correlation length ξ is large. For SAWs this corresponds to a chain length N ≫ 1. The method affords a direct test of universality and scaling laws, including hyperscaling. In practice, however, it has been extremely difficult to obtain good data in the neighborhood of the critical point. There are two essential difficulties: finite system size and critical slowing-down. For spin models and lattice field theories, these two factors together imply that the CPU time needed to obtain one "effectively independent" sample grows as L^d ξ^z ∼ ξ^{d+z} (taking lattice size L ∼ ξ), where d is the spatial dimension of the system and z is the dynamic critical exponent of the Monte Carlo algorithm.^5 This situation may be alleviated somewhat by a new finite-size-scaling technique that yields accurate estimates of infinite-volume quantities from Monte Carlo data on lattice sizes L ≪ ξ. The situation for SAWs is rather more favorable: one can simulate a SAW directly in infinite space, with no finite-size corrections or L^d factor in the CPU time. There is, to be sure, critical slowing-down; but vast progress has been made over the last decade or so in inventing new and more efficient algorithms for simulating the SAW. In particular, using the pivot algorithm, one can generate an "effectively independent" N-step SAW (at least as regards global observables) in a CPU time of order N. This is the best possible order of magnitude, since it takes a time of order N merely to write down an N-step walk. Since N ∼ ξ^{1/ν}, this corresponds to a CPU time of order ξ^{1/ν}, which, if dν > 1, is better than the ξ^{d+z} of a spin system even if z = 0. So the SAW is a uniquely favorable laboratory for studying the problem of hyperscaling.

^5 Conventional local algorithms have z ≈ 2, while the new collective-mode algorithms can have z ≈ 1, and in some cases even z ≈ 0.

1.2  Which Quantities are Universal?

Over the past four decades, various mathematical models have been employed to describe the behavior of linear polymer molecules in a good solvent.^6 Among these models are the self-avoiding walk, the bead-rod model, and the continuum Edwards model. The detailed behavior depends on the specific model chosen, just as the detailed behavior of real polymer molecules depends on the particular chemical structure of the polymer and solvent, and on the temperature. However, it has long been understood that some aspects of polymer behavior become universal in the long-chain limit N → ∞, where N is the number of monomers in the chain. Unfortunately, there has been considerable confusion about which quantities are universal and which are not. In this subsection we summarize recent work of Nickel and one of us, which clarifies this issue. For further discussion, see Section 5.2 below.

Standard renormalization-group (RG) arguments predict that the mean-square end-to-end distance ⟨R_e²⟩, the mean-square radius of gyration ⟨R_g²⟩ and the second virial coefficient A_2^{(mol)} ≡ N_Avogadro B_2^{(N,N)} / (N M_monomer)² of any real or model polymer chain should have the asymptotic behavior

    ⟨R_e²⟩     =  A_{R_e} N^{2ν} [ 1 + b_{R_e}^{(1)} N^{−Δ_1} + ... ]
    ⟨R_g²⟩     =  A_{R_g} N^{2ν} [ 1 + b_{R_g}^{(1)} N^{−Δ_1} + ... ]
    A_2^{(mol)} =  A_A N^{dν} [ 1 + b_A^{(1)} N^{−Δ_1} + ... ]

as N → ∞, where d is the spatial dimension.^7 The critical exponents ν and Δ_1 are universal. The amplitudes A_{R_e}, A_{R_g}, A_A, b_{R_e}^{(1)}, b_{R_g}^{(1)}, b_A^{(1)} are nonuniversal; in fact, even the signs of the correction-to-scaling amplitudes b_{R_e}^{(1)}, b_{R_g}^{(1)}, b_A^{(1)} and their various combinations (such as b_A^{(1)} − (d/2) b_{R_g}^{(1)}) are nonuniversal. However, the RG theory also predicts that the dimensionless amplitude ratios A_{R_g}/A_{R_e} and A_A/A_{R_e}^{d/2} are universal, as are the correction ratios b_{R_g}^{(1)}/b_{R_e}^{(1)} and b_A^{(1)}/b_{R_e}^{(1)}.

So there is no reason why the correction-to-scaling amplitudes should have any particular sign. In the continuum Edwards model, the effective exponents ν_eff,e(N) ≡ (1/2) d log⟨R_e²⟩ / d log N and ν_eff,g(N) ≡ (1/2) d log⟨R_g²⟩ / d log N, and the interpenetration ratio Ψ_N ∝ A_2^{(mol)} / ⟨R_g²⟩^{d/2} (defined precisely in Section 2.1), all approach their asymptotic values from below; that is, b_{R_e}^{(1)} > 0, b_{R_g}^{(1)} > 0 and b_Ψ^{(1)} < 0. On the other hand, high-precision Monte Carlo data on lattice self-avoiding walks (see Section 4 below), as well as other published data, show clearly that these quantities approach their asymptotic values from above; and the same occurs in the bead-rod model with sufficiently large bead diameter. Indeed, this behavior is almost obvious qualitatively: short self-avoiding walks behave roughly like hard spheres; only at larger N does one see the "softer" excluded volume (smaller Ψ) characteristic of a fractal object. In any case, all these models are in excellent agreement for the leading universal quantities A_{R_g}/A_{R_e}, A_A/A_{R_e}^{d/2} and Ψ*, and they are in rough agreement for the universal correction-to-scaling quantities b_{R_g}^{(1)}/b_{R_e}^{(1)}, b_A^{(1)}/b_{R_e}^{(1)} and b_Ψ^{(1)}/b_{R_e}^{(1)}.

^6 Here "good solvent" means that we work at any fixed temperature strictly above the theta temperature for the given polymer-solvent pair.

^7 In the formula for A_2^{(mol)} we have assumed, for simplicity, that the hyperscaling relation dν = 2Δ_4 − γ is valid.

It is thus misguided to analyze the experimental data in the good-solvent regime by attempting to match the real polymer molecules to the continuum Edwards model via the correspondence z_Edwards = a N^{1/2}, where a is an empirically determined scale factor depending on the polymer, solvent and temperature:^8 the continuum Edwards model can predict only the universal quantities. Indeed, there is evidence that real polymers in a sufficiently good solvent behave like self-avoiding walks, i.e. they approach Ψ* from above; in this case they cannot be matched to any value of z_Edwards. This behavior has heretofore been considered paradoxical; in fact, it is quite natural. Huber and Stockmayer attributed this behavior to the effects of chain stiffness. In fact, as pointed out by Nickel, chain stiffness is quite irrelevant here, as the effect occurs also for perfectly flexible chains such as self-avoiding walks or the bead-rod model.

These points have been made previously by Nickel. Similar comments have been made, with regard to liquid-gas critical points, by Liu and Fisher.

In summary, the error of all two-parameter theories is to fail to distinguish correctly which quantities are universal and which are nonuniversal. In particular, the modern two-parameter theory begins from one special model (the continuum Edwards model) and assumes incorrectly that it can describe certain aspects of polymer behavior (e.g. the sign of approach to Ψ*) which in reality are nonuniversal.

Remark. A very different limiting behavior is obtained if we take simultaneously N → ∞ and T → T_θ such that x ≡ N^φ (T − T_θ) remains fixed, for a suitable crossover exponent φ. In a separate work, one of us has argued that it is precisely this universal crossover scaling behavior, in an infinitesimal region just above the theta temperature, that is described by the continuum Edwards model.

^8 See, for example, the published comparisons between theory and experiment.

1.3  Plan of this Paper

The plan of this paper is as follows: In Section 2 we review the needed background information about the self-avoiding walk and the pivot algorithm. In Section 3 we analyze several algorithms for computing the second virial coefficient; this section can be skipped by readers whose main interest is in the results. In Section 4 we present and analyze our Monte Carlo data for self-avoiding walks in two and three dimensions. In Section 5 we compare our results with previous work, discuss further the interpretation of the sign of approach to Ψ*, and discuss prospects for the future. In Appendix A we prove some geometric bounds for subsets of Z^d; as a corollary, we prove hyperscaling for SAWs in dimension d = 2. In Appendix B we discuss the problem of ensuring adequate thermalization in the pivot algorithm. In Appendix C we discuss some subtleties involved in the statistical analysis of our data. In Appendix D we make a few remarks on the field-theoretic calculations of universal amplitude ratios.

2  Background and Notation

2.1  The Self-Avoiding Walk (SAW): A Review

In this section we review briefly the basic facts and conjectures about the SAW that will be used in the remainder of the paper. Let L be some regular d-dimensional lattice. Then an N-step self-avoiding walk (SAW) ω on L is a sequence of distinct points ω_0, ω_1, ..., ω_N in L such that each point is a nearest neighbor of its predecessor. For simplicity we shall restrict attention to the simple hypercubic lattice Z^d; similar ideas apply, with minor alterations, to other regular lattices. We assume all walks to begin at the origin (ω_0 = 0) unless stated otherwise, and we let S_N be the set of all N-step SAWs starting at the origin and ending anywhere.

First we define the quantities relating to the number (or "entropy") of SAWs. Let c_N [resp. c_N(x)] be the number of N-step SAWs on Z^d starting at the origin and ending anywhere [resp. ending at x]. Then c_N and c_N(x) are believed to have the asymptotic behavior

    c_N ≈ μ^N N^{γ−1}
    c_N(x) ≈ μ^N N^{α_sing−2}        (x ≠ 0 fixed)

as N → ∞; here μ is called the connective constant of the lattice, and γ and α_sing are critical exponents. The critical exponents are believed to be universal among lattices of a given dimension d. For rigorous results concerning the asymptotic behavior of c_N and c_N(x), see the references.
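To make the quantity c_N concrete, here is a minimal enumeration sketch (our own illustration, not part of the paper's methodology): it counts N-step SAWs on Z² by depth-first search. The function name and structure are ours; exhaustive enumeration is feasible only for small N, since c_N grows like μ^N.

```python
# Exhaustive enumeration of c_N, the number of N-step self-avoiding
# walks on Z^2 starting at the origin.

def count_saws(n_steps):
    """Return c_N for the square lattice Z^2."""
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]

    def extend(head, visited, remaining):
        if remaining == 0:
            return 1
        total = 0
        for dx, dy in steps:
            nxt = (head[0] + dx, head[1] + dy)
            if nxt not in visited:           # self-avoidance constraint
                visited.add(nxt)
                total += extend(nxt, visited, remaining - 1)
                visited.remove(nxt)
        return total

    return extend((0, 0), {(0, 0)}, n_steps)

print([count_saws(n) for n in range(1, 7)])   # [4, 12, 36, 100, 284, 780]
```

The first few values (4, 12, 36, 100, ...) grow roughly geometrically, consistent with the asymptotic form c_N ≈ μ^N N^{γ−1}.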

Next we define several measures of the size of an N-step SAW:^9

The squared end-to-end distance

    R_e² ≡ (ω_N − ω_0)²

The squared radius of gyration

    R_g² ≡ (1/(N+1)) Σ_{i=0}^{N} (ω_i − ω̄)²                              (a)
         = (1/(N+1)) Σ_{i=0}^{N} ω_i²  −  [ (1/(N+1)) Σ_{i=0}^{N} ω_i ]²   (b)
         = (1/(2(N+1)²)) Σ_{i,j=0}^{N} (ω_i − ω_j)²                        (c)

where ω̄ denotes the centroid of the walk.

The mean-square distance of a monomer from the endpoints

    R_m² ≡ (1/(2(N+1))) Σ_{i=0}^{N} [ ω_i² + (ω_i − ω_N)² ]

^9 Some other measures of the size of a SAW will be defined in Appendix A.
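As a concrete illustration (ours, with a hypothetical function name), the three size observables can be computed directly from their definitions for any walk given as a list of lattice points:

```python
# Size observables for an N-step walk omega_0, ..., omega_N on Z^d,
# transcribed directly from the definitions in the text.

def size_observables(walk):
    """walk: list of lattice points (tuples), with walk[0] the origin."""
    n1 = len(walk)                      # N + 1 points
    d = len(walk[0])

    def sq_dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))

    # Squared end-to-end distance
    r_e2 = sq_dist(walk[-1], walk[0])
    # Squared radius of gyration: mean squared distance from the centroid
    centroid = tuple(sum(p[a] for p in walk) / n1 for a in range(d))
    r_g2 = sum(sq_dist(p, centroid) for p in walk) / n1
    # Mean-square distance of a monomer from the two endpoints
    r_m2 = sum(sq_dist(p, walk[0]) + sq_dist(p, walk[-1])
               for p in walk) / (2 * n1)
    return r_e2, r_g2, r_m2

# Example: a 3-step "rod" along the x-axis
print(size_observables([(0, 0), (1, 0), (2, 0), (3, 0)]))   # (9, 1.25, 3.5)
```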

We then consider the mean values ⟨R_e²⟩_N, ⟨R_g²⟩_N and ⟨R_m²⟩_N in the probability distribution which gives equal weight to each N-step SAW. Very little has been proven rigorously about these mean values, but they are believed to have the asymptotic behavior

    ⟨R_e²⟩_N, ⟨R_g²⟩_N, ⟨R_m²⟩_N  ≈  N^{2ν}

as N → ∞, where ν is another (universal) critical exponent. Moreover, the amplitude ratios

    A_N ≡ ⟨R_g²⟩_N / ⟨R_e²⟩_N
    B_N ≡ ⟨R_m²⟩_N / ⟨R_e²⟩_N

are expected to approach universal values in the limit N → ∞.^10 Indeed, the full probability distribution of ω_N is expected to scale as

    c_N(x)/c_N  ≈  N^{−dν} f(x N^{−ν})

as N → ∞, for a suitable scaling function f (also universal, modulo a single nonuniversal scale factor); and f is expected to be rotation-invariant.^11 All these beliefs can be subsumed in the even more general assertion that the probability distribution of the SAW, with lengths rescaled by N^ν, converges weakly as N → ∞ to some well-defined probability measure on a space of continuum chains.^12

Finally, let c_{N1,N2} be the number of pairs (ω^{(1)}, ω^{(2)}) such that ω^{(1)} is an N1-step SAW starting at the origin, ω^{(2)} is an N2-step SAW starting anywhere, and ω^{(1)} and ω^{(2)} have at least one point in common (i.e. ω^{(1)} ∩ ω^{(2)} ≠ ∅). Equivalently, we can write c_{N1,N2} in terms of walks that both start at the origin:

    c_{N1,N2} = Σ_{ω^{(1)} ∈ S_{N1}} Σ_{ω^{(2)} ∈ S_{N2}} T(ω^{(1)}, ω^{(2)})

where

    T(ω^{(1)}, ω^{(2)}) ≡ #{ x ∈ Z^d : ω^{(1)} ∩ (x + ω^{(2)}) ≠ ∅ }

is the number of translates of ω^{(2)} that somewhere intersect ω^{(1)}. It is believed that

    c_{N1,N2} ≈ μ^{N1+N2} (N1 N2)^{(2Δ_4 + γ − 2)/2} g(N1/N2)

as N1, N2 → ∞, where Δ_4 is yet another universal critical exponent and g is a universal scaling function.

^10 Sometimes the notation ⟨R_g²⟩/⟨R_e²⟩ is used instead.

^11 Actually, this scaling form is claimed to hold only for |x| of order N^ν. The precise statement is therefore that the limit

    f(y) ≡ lim_{N→∞} N^{dν} c_N(N^ν y) / c_N

exists for each y ≠ 0, and that f(y) > 0.

^12 Very recently, Hara and Slade have proven that the SAW in dimension d ≥ 5 converges weakly to Brownian motion when N → ∞ with lengths rescaled by C N^{1/2}, for a suitable (nonuniversal) constant C. It follows from this that the scaling relations hold with ν = 1/2, and also that the amplitude ratios have the limiting values A = 1/6, B = 1/2. Earlier, Slade had proven these results for sufficiently high dimension d.

The quantity c_{N1,N2} is closely related to the second virial coefficient. To see this, consider a rather general theory in which "molecules" of various types interact. Let the molecules of type i have a set S_i of internal states, so that the complete state of such a molecule is given by a pair (x, s), where x ∈ Z^d is its position and s ∈ S_i is its internal state. Let us assign Boltzmann weights (or fugacities) W_i(s) to the internal states, normalized so that Σ_{s ∈ S_i} W_i(s) = 1; and let us assign an interaction energy V_{ij}((x,s), (x',s')) [x, x' ∈ Z^d; s ∈ S_i, s' ∈ S_j] to a molecule of type i at (x,s) interacting with one of type j at (x',s'). Then the second virial coefficient between a molecule of type i and one of type j is

    B_2^{(ij)} = (1/2) Σ_{x' ∈ Z^d} Σ_{s ∈ S_i} Σ_{s' ∈ S_j} W_i(s) W_j(s') [ 1 − e^{−V_{ij}((0,s),(x',s'))} ]

In the SAW case, the types are the different lengths N, the internal states are the conformations ω ∈ S_N starting at the origin, the Boltzmann weights are W_N(ω) = 1/c_N for each ω ∈ S_N, and the interaction energies are hard-core repulsions:

    V_{N1 N2}((x, ω), (x', ω')) = { +∞   if (x + ω) ∩ (x' + ω') ≠ ∅
                                  {  0    otherwise

It follows immediately that

    B_2^{(N1,N2)} = c_{N1,N2} / (2 c_{N1} c_{N2})
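Note that T(ω^{(1)}, ω^{(2)}), and hence B_2, is directly computable: ω^{(1)} meets the translate x + ω^{(2)} exactly when x = a − b for some site a of ω^{(1)} and b of ω^{(2)}, so T is just the number of distinct site differences. The following sketch (our illustration; the function name is ours) uses this observation, taking time of order N1·N2; the paper's Section 3 develops more refined deterministic and Monte Carlo algorithms for the same quantity.

```python
def overlap_count(walk1, walk2):
    """T(omega1, omega2): the number of lattice translates x such that
    walk1 and x + walk2 share at least one site.  walk1 meets x + walk2
    iff x = a - b for some a in walk1, b in walk2, so T is the number of
    distinct differences a - b."""
    diffs = {tuple(ai - bi for ai, bi in zip(a, b))
             for a in walk1 for b in walk2}
    return len(diffs)

# Two 1-step walks on Z^2, both starting at the origin
w1 = [(0, 0), (1, 0)]
w2 = [(0, 0), (0, 1)]
print(overlap_count(w1, w2))   # 4
```

Here T = 4 saturates the trivial bound T ≤ (N1+1)(N2+1), as it must for two rigid rods in "general position".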

The second virial coefficient B_2^{(N1,N2)} is a measure of the excluded volume between a pair of SAWs. It is useful to define a dimensionless quantity by normalizing B_2^{(N1,N2)} by some measure of the size of these SAWs. Theorists prefer ⟨R_e²⟩^{d/2} as the measure of size, while experimentalists prefer ⟨R_g²⟩^{d/2}, since it can be measured by light scattering. We thus define the "theorists'" interpenetration ratio

    Ψ̄_N ≡ 2 (d/(2π))^{d/2} B_2^{(N,N)} / ⟨R_e²⟩_N^{d/2}

and the usual interpenetration ratio

    Ψ_N ≡ 2 (d/(12π))^{d/2} B_2^{(N,N)} / ⟨R_g²⟩_N^{d/2}

(for simplicity we consider only N1 = N2 ≡ N). The numerical prefactors are a convention that arose historically, for reasons not worth explaining here. Crudely speaking, Ψ measures the degree of "hardness" of a SAW in its interactions with other SAWs. A useful standard of comparison is the hard sphere of radius r and constant density, for which

    B_2 = 2^{d−1} V_d r^d ,        ⟨R_g²⟩ = (d/(d+2)) r²

(where V_d is the volume of the unit ball in R^d), and hence Ψ_hard sphere is a pure dimension-dependent number of order unity.

Inserting the asymptotic formulas for B_2^{(N,N)} and ⟨R_g²⟩_N into the definition of Ψ_N, we see that

    Ψ_N ∼ N^{2Δ_4 − γ − dν}

as N → ∞. We can therefore distinguish three a priori possibilities:

(a) 2Δ_4 − γ > dν, so that Ψ_N → ∞ as N → ∞. This behavior cannot occur unless typical SAWs have a very strange "porcupine-like" shape, which is quite implausible.^13

(b) 2Δ_4 − γ = dν. In the simplest case this means that Ψ_N → Ψ* > 0 as N → ∞, i.e. typical SAWs exclude each other (within a constant factor) like hard spheres. This behavior is called hyperscaling. However, the relation 2Δ_4 − γ = dν is also consistent with a logarithmic violation of hyperscaling, i.e. Ψ_N ∼ (log N)^{−p} → 0 as N → ∞ for some power p > 0.

(c) 2Δ_4 − γ < dν, so that Ψ_N → 0 as N → ∞. This is a power-law violation of hyperscaling: typical SAWs exclude each other infinitely more weakly than hard spheres.

A very beautiful heuristic argument concerning hyperscaling for SAWs was given by des Cloizeaux. Note first that Ψ measures, roughly speaking, the probability of intersection of two independent SAWs that start a distance of order ⟨R_g²⟩^{1/2} ∼ N^ν apart. Now, we can interpret a long SAW as an object with fractal dimension 1/ν. Two independent such objects will "generically" intersect if and only if the sum of their fractal dimensions is at least as large as the dimension of the ambient space. So we expect lim_{N→∞} Ψ_N to be nonzero if and only if

    2/ν ≥ d ,   i.e.   dν ≤ 2 .

Since it is believed that

    ν = 1                  for d = 1
    ν = 3/4                for d = 2
    ν ≈ 0.588              for d = 3  [this paper]
    ν = 1/2 × logs         for d = 4
    ν = 1/2                for d ≥ 5

we see that

    dν < 2                 for d ≤ 3
    dν = 2 × logs          for d = 4
    dν > 2                 for d ≥ 5

Therefore, we expect hyperscaling for d ≤ 3, a logarithmic violation of hyperscaling for d = 4, and a power-law violation of hyperscaling for d ≥ 5.

^13 Modulo some reasonable assumptions, this behavior can in fact be rigorously excluded: see the theorems and accompanying equations in Appendix A.

One half of this heuristic argument can be proven rigorously. It is easy to see that

    c_{N1,N2} ≤ (N1+1)(N2+1) c_{N1} c_{N2}

so that

    B_2^{(N1,N2)} ≤ (N1+1)(N2+1)/2

or, in terms of critical exponents,

    2Δ_4 − γ ≤ 2 .

It follows that dν > 2 implies 2Δ_4 − γ < dν, i.e. the power-law violation of hyperscaling. This is now proven rigorously to occur for d ≥ 5.

In the polymer-physics literature it is usually taken for granted that hyperscaling holds in dimension d < 4. But in our opinion, hyperscaling is a deep property that needs to be tested, not assumed.

We remark that dimension d = 2 is a different case: here hyperscaling can be proven rigorously, modulo some reasonable assumptions on the scaling of individual SAWs. We present this proof in Appendix A; it is the analogue for SAWs of Aizenman's proof of hyperscaling for two-dimensional Ising models with finite-range ferromagnetic interaction. The underlying geometric idea is that SAWs in the plane cannot avoid intersecting each other.

Finally, we need to make some comments about corrections to scaling. Clearly, the asymptotic formulas given above are only the leading term in a large-N asymptotic expansion. According to renormalization-group theory, the mean value of any global observable O behaves, as N → ∞, as

    ⟨O⟩_N = A N^p [ 1 + a_1/N + a_2/N² + ...
                      + b_0/N^{Δ_1} + b_1/N^{Δ_1+1} + b_2/N^{Δ_1+2} + ...
                      + c_0/N^{Δ_2} + c_1/N^{Δ_2+1} + c_2/N^{Δ_2+2} + ... ]

Thus, in addition to "analytic" corrections to scaling of the form a_k N^{−k}, there are "nonanalytic" corrections to scaling of the form b_k N^{−(Δ_1+k)}, c_k N^{−(Δ_2+k)}, and so forth, as well as more complicated terms (not shown) which have the general form const × N^{−(k_1 Δ_1 + k_2 Δ_2 + l)}, where k_1, k_2 and l are nonnegative integers. The leading exponent p and the correction-to-scaling exponents Δ_1 < Δ_2 < ... are universal (p of course depends on the observable in question, but the Δ_i do not). Please note that the exponents Δ_1, Δ_2, ... have no relation whatsoever to the gap exponent Δ_4 defined above; the notation used here is standard but unfortunate. The various amplitudes, both leading and subleading, are all nonuniversal. However, ratios of the corresponding amplitudes (A, b_0 and c_0, but not the a_k or the higher b_k, c_k) for different observables are universal.

Remark. The names of the critical exponents γ, α_sing and Δ_4 are chosen by analogy with the corresponding exponents in ferromagnetic spin systems. Indeed, the generating functions of self-avoiding walks

    χ(β) ≡ Σ_{N=0}^{∞} c_N β^N
    G(x; β) ≡ Σ_{N=0}^{∞} c_N(x) β^N
    u_4(β) ≡ − Σ_{N1,N2=0}^{∞} c_{N1,N2} β^{N1+N2}

are equal to the susceptibility, spin-spin correlation function and fourth cumulant in the n-vector model analytically continued to n → 0. In particular, if x is a nearest neighbor of the origin, then G(x; β) is essentially the energy E, up to an additive and multiplicative constant. The quantity

    ξ(β) ≡ [ Σ_x |x|² G(x; β) / Σ_x G(x; β) ]^{1/2}

is the second-moment correlation length. Inserting the asymptotic formulas for c_N, c_N(x) and c_{N1,N2} into these sums, we obtain the leading behavior

    χ(β) ∼ (1 − β/β_c)^{−γ}
    G(x; β) ∼ regular terms + (1 − β/β_c)^{1−α_sing}        (x ≠ 0 fixed)
    ξ(β) ∼ (1 − β/β_c)^{−ν}
    u_4(β) ∼ −(1 − β/β_c)^{−(γ+2Δ_4)}

as β approaches the critical point β_c = 1/μ. Note in particular that α_sing is the exponent for the singular part of the specific heat C_H ∼ dE/dβ; the exponent for the full specific heat is max(α_sing, 0). If the hyperscaling relation dν = 2Δ_4 − γ holds without multiplicative logarithmic corrections, then the renormalized coupling constant g ≡ −u_4 / (χ² ξ^d) tends to a nonzero limiting value g* as β ↑ β_c; so hyperscaling (without multiplicative logarithmic corrections) can be interpreted as the non-Gaussianness ("nontriviality") of the scaling-limit quantum field theory.

2.2  The Pivot Algorithm: A Review

The pivot algorithm was invented in 1969 by Lal, reinvented in 1985 by MacDonald et al., and again reinvented a short time later by Madras. The pivot algorithm is the most efficient algorithm currently known for simulating SAWs in the fixed-N, variable-x ensemble. Here we summarize briefly the relevant features of the algorithm; more details can be found in the literature.

The elementary move of the pivot algorithm is as follows: Choose at random a pivot point ω_k along the walk (0 ≤ k ≤ N−1); choose at random an element g of the symmetry group of the lattice (a rotation or reflection or a combination thereof); then apply g to the part of the walk subsequent to the pivot point (namely ω_{k+1}, ..., ω_N), using ω_k as the temporary origin. That is, the proposed new walk is

    ω'_i = ω_i                        for 0 ≤ i ≤ k
    ω'_i = ω_k + g(ω_i − ω_k)         for k+1 ≤ i ≤ N

The walk ω' is accepted if it is self-avoiding; otherwise it is rejected, and the old walk ω is counted once more in the sample. It is easy to see that this algorithm satisfies detailed balance for the standard equal-weight SAW distribution. Ergodicity is less obvious, but it can be proven.
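A minimal sketch of one elementary move on Z² (our illustration; a production implementation would use the hash-table tricks described below) might look like:

```python
import random

# One elementary move of the pivot algorithm for a SAW on Z^2.  The
# lattice symmetry g is drawn from the seven non-identity elements of
# the symmetry group of the square, written as 2x2 integer matrices.

SYMMETRIES = [
    ((0, -1), (1, 0)), ((-1, 0), (0, -1)), ((0, 1), (-1, 0)),   # rotations
    ((1, 0), (0, -1)), ((-1, 0), (0, 1)),                       # axis reflections
    ((0, 1), (1, 0)), ((0, -1), (-1, 0)),                       # diagonal reflections
]

def pivot_move(walk):
    """Attempt one pivot move; return the proposed walk if it is
    self-avoiding, else the old walk (counted once more in the sample)."""
    n = len(walk) - 1                    # number of steps N
    k = random.randrange(n)              # pivot point, 0 <= k <= N-1
    g = random.choice(SYMMETRIES)
    px, py = walk[k]                     # temporary origin
    proposal = list(walk[:k + 1])
    for (x, y) in walk[k + 1:]:
        dx, dy = x - px, y - py
        proposal.append((px + g[0][0] * dx + g[0][1] * dy,
                         py + g[1][0] * dx + g[1][1] * dy))
    if len(set(proposal)) == len(proposal):   # self-avoidance check
        return proposal
    return walk
```

Note that the returned walk always has the same length and starting point as the input, and is always self-avoiding, whether or not the proposal was accepted.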

At first thought this seems to be a terrible algorithm: for N large, nearly all the proposed moves will get rejected. In fact, this latter statement is true, but the hasty conclusion drawn from it is radically false. The acceptance fraction f does indeed go to zero as N → ∞, roughly like N^{−p}; empirically it is found that the exponent p is ≈ 0.19 in d = 2 and ≈ 0.11 in d = 3. But this means that roughly once every N^p moves one gets an acceptance. And the pivot moves are very radical: one might surmise that after very few accepted moves the SAW will have reached an "essentially new" configuration. One conjectures, therefore, that the autocorrelation time τ of the pivot algorithm behaves as ∼ N^p. Things are in fact somewhat more subtle (see the next paragraph), but, roughly speaking and modulo a possible logarithm, this conjecture appears to be true. On the other hand, a careful analysis of the computational complexity of the pivot algorithm (see also below) shows that one "accepted" move can be produced in a computer time of order N. Combining these two facts, we conclude that one "effectively independent" sample can be produced in a computer time of order N (or perhaps N log N).

Let's look more closely. Suppose we know that the acceptance fraction f in the pivot algorithm behaves as f ∼ N^{−p} as N → ∞. Then, as argued above, after a few successful pivots (i.e. a time of order 1/f ∼ N^p) the global conformation of the walk should have reached an "essentially new" state. Thus we expect that, for observables A which measure the global properties of the walk (such as R_e², R_g² or R_m²), the autocorrelation time τ_{int,A} (see Appendix C) should be a few times 1/f. This is confirmed numerically (Section 4). On the other hand, it is important to recognize that local observables (such as the angle between the 17th and 18th steps of the walk) may evolve a factor of N more slowly than global observables. For example, the observable mentioned in the preceding sentence changes only when ω_17 serves as a successful pivot point, and this happens, on average, only once every N/f attempted moves. Thus, for local observables A, we expect τ_{int,A} to be of order N/f. General properties of reversible Markov chains then imply that the exponential autocorrelation time τ_exp must be of at least this order; and if we have not overlooked any slow modes in the system, then τ_exp should be of exactly this order. Finally, even the global observables are unlikely to be precisely orthogonal to the slowest mode, so it is reasonable to expect that τ_{exp,A} be of order N/f for these observables too. In other words, for global observables A, we expect the autocorrelation function ρ_{AA}(t) to have an extremely-slowly-decaying tail which, however, contributes little to the area under the curve. This behavior is illustrated by the exact solution of the pivot dynamics for the case of ordinary random walk, and by numerical calculations for the SAW (see Appendix C).

Computational complexity. A very important issue in any algorithm, but especially in a nonlocal one, is the CPU time per iteration. By using a hash table, the self-avoidance of a proposed new walk can be checked in a time of order N. But one can do even better: by starting the checking at the pivot point and working outwards, failures can be detected in a mean time of order N^{1−p}. The mean CPU time per successful pivot is therefore of order N^{1−p} for each of ∼ N^p failures, plus N for the one success, or ∼ N in all. Combining this with the observations made previously, we conclude that one "effectively independent" sample (as regards global observables) can be produced in a computer time of order N.

Initialization. There are two main approaches:

Equilibrium start. Generate the initial configuration by dimerization; then the Markov chain is in equilibrium from the beginning, and no data need be discarded. This approach is feasible, and recommended, at least up to $N$ of order a few thousand. There is no harm in spending even days of CPU time on this step, provided that this time is small compared to the rest of the run; after all, the algorithm need only be initialized once.

Thermalization. Start in an arbitrary initial configuration, and then discard the first $n_{disc} \gtrsim \tau_{exp}$ iterations. This is painful, because $\tau_{exp}$ is a factor $\sim N$ larger than $\tau_{int,A}$ for global observables $A$; thus, for very large $N$, the CPU time of the algorithm could end up being dominated by the thermalization. Nevertheless, one must resist the temptation to cut corners here, as even a small initialization bias can lead to systematically erroneous results, especially if the statistical error is small (see Appendix B for striking evidence of this). Some modest gain can probably be obtained by using closer-to-equilibrium initial configurations, but it is still prudent to take $n_{disc}$ at least several times $N/f$.

Initialization will become a more important issue in the future, as faster computers permit simulations at ever-larger chain lengths.

Algorithms for Counting Overlaps

In this section we discuss algorithms for computing the "excluded volume" between a given pair of SAWs; this is the key step in a Monte Carlo study of the second virial coefficient. This section can be skipped by readers whose main interest is in the results rather than the algorithms.

Generalities

Let $\omega^{(1)}$ and $\omega^{(2)}$ be, respectively, $N_1$-step and $N_2$-step SAWs, and define $T(\omega^{(1)},\omega^{(2)})$ to be the number of translates of $\omega^{(2)}$ which somewhere intersect $\omega^{(1)}$:

  T(\omega^{(1)},\omega^{(2)}) \;=\; \#\{x \in Z^d :\; \omega^{(1)} \cap (\omega^{(2)}+x) \neq \emptyset\} \;=\; \#(\omega^{(1)} - \omega^{(2)}) ,

where $A - B \equiv \{y - z :\; y \in A,\, z \in B\}$. The expected value of $T$, averaging $\omega^{(1)}$ and $\omega^{(2)}$ over independent walks uniformly distributed in $S_{N_1}$ and $S_{N_2}$, is $\langle T \rangle_{N_1,N_2} \equiv c_{N_1}^{-1} c_{N_2}^{-1} \sum_{\omega^{(1)}} \sum_{\omega^{(2)}} T(\omega^{(1)},\omega^{(2)})$. This quantity has the asymptotic behavior

  \langle T \rangle_{N_1,N_2} \;\approx\; (N_1+N_2)^{2\Delta_4-\gamma}\, g(N_1/N_2) ,

where $g$ is a scaling function. It is thus possible to estimate the critical exponent $2\Delta_4-\gamma$ by running two independent pivot algorithms and measuring $T(\omega^{(1)},\omega^{(2)})$. Typically one would run at $N_1 = N_2 = N$ for a sequence of values of $N$. In particular, this allows a direct Monte Carlo test of the hyperscaling relation $d\nu = 2\Delta_4 - \gamma$. Note that an independent measurement of $\gamma$ is not needed.

The efficient determination of $T(A_1,A_2) \equiv \#(A_1 - A_2)$ for a specified pair of sets $A_1, A_2 \subset Z^d$ is a very interesting and nontrivial problem in computer science. We see two broad approaches:

(1) Deterministic algorithms, which compute $T(A_1,A_2)$ exactly.

(2) Monte Carlo algorithms, which produce an unbiased (or almost unbiased) estimate of $T(A_1,A_2)$.

In the latter case, the statistical fluctuations in the auxiliary ("inner-loop") Monte Carlo process would be added to those in the main Monte Carlo program; but this is acceptable provided that the former are not too large compared to the latter (see below).

We shall discuss first the deterministic algorithms for computing $T(A_1,A_2)$, and then the Monte Carlo algorithms.

Notation

We shall denote by $N_1$ (resp. $N_2$) the number of points in the set $A_1$ (resp. $A_2$). We also write $N_{min} = \min(N_1,N_2)$ and $N_{max} = \max(N_1,N_2)$. (An $N$-step self-avoiding walk has $N+1$ points.)

Now fix a pair of sets $A_1, A_2 \subset Z^d$, and write

  S \;\equiv\; S(A_1,A_2) \;=\; A_1 - A_2 \;=\; \{y - z :\; y \in A_1,\, z \in A_2\} .

Our goal is to compute $T(A_1,A_2) = \#S$. An important role is played by the function

  \nu(x) \;\equiv\; \#\{(y,z) :\; y \in A_1,\, z \in A_2,\, y - z = x\} \;=\; (\chi_{A_1} * \widetilde{\chi}_{A_2})(x) ,

where $\chi_{A_1}$ and $\chi_{A_2}$ are the indicator functions of the sets $A_1$ and $A_2$, respectively, $\widetilde{\chi}_{A_2}(z) \equiv \chi_{A_2}(-z)$, and $*$ denotes convolution. Clearly

  \nu(x) \;\geq\; 1 \quad for \; x \in S ; \qquad \nu(x) \;\leq\; N_{min} \quad for \; all \; x .

We also write

  I(x) \;=\; \begin{cases} 1 & if \; x \in S \\ 0 & if \; x \notin S \end{cases}

Note that

  T(A_1,A_2) \;=\; \sum_{x \in Z^d} I(x) \;=\; \sum_{x \in S} 1 ,
  \sum_{x \in Z^d} \nu(x) \;=\; \sum_{x \in S} \nu(x) \;=\; N_1 N_2 .

For future reference we define also

  U(A_1,A_2) \;\equiv\; \sum_{x \in S} \frac{1}{\nu(x)} .

This observable has little intrinsic interest, but it will play an important role in two of the Monte Carlo algorithms. It is not hard to see that^{14}

  N_1 + N_2 - 1 \;\leq\; T(A_1,A_2) \;\leq\; N_1 N_2 ,
  \frac{T(A_1,A_2)^2}{N_1 N_2} \;\leq\; U(A_1,A_2) \;\leq\; T(A_1,A_2) .

Moreover, we shall prove in Appendix A that

  U(A_1,A_2) \;\geq\; c \log(N_1 N_2)

for a universal constant $c > 0$.

We shall use $\langle\,\cdot\,\rangle$ to denote expectation with respect to some probability distribution on pairs of sets $(A_1,A_2)$, e.g. the equal-weight ensemble on the space $S_{N_1} \times S_{N_2}$ of pairs of SAWs. We shall use $E(\,\cdot\,)$ to denote expectation with respect to some inner-loop Monte Carlo algorithm.
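As a concrete check on these definitions, the quantities $S$, $\nu$, $T$ and $U$ and the bounds just stated can be computed by brute force for small point sets. The following Python sketch (ours, not from the original paper; the tiny 2-d point sets are hypothetical stand-ins for SAWs) merely illustrates the definitions:

```python
def overlap_stats(A1, A2):
    """Brute-force S = A1 - A2, the multiplicities nu(x), T = #S and U = sum 1/nu."""
    nu = {}
    for y in A1:
        for z in A2:
            x = tuple(a - b for a, b in zip(y, z))
            nu[x] = nu.get(x, 0) + 1
    T = len(nu)                            # T(A1,A2) = number of distinct differences
    U = sum(1.0 / v for v in nu.values())
    return nu, T, U

# toy 2-d point sets (hypothetical, not SAWs)
A1 = [(0, 0), (1, 0), (1, 1)]
A2 = [(0, 0), (0, 1)]
nu, T, U = overlap_stats(A1, A2)
N1, N2 = len(A1), len(A2)
assert sum(nu.values()) == N1 * N2         # sum_x nu(x) = N1 N2
assert N1 + N2 - 1 <= T <= N1 * N2         # bounds on T
assert T * T / (N1 * N2) <= U <= T         # bounds on U
```

The assertions check exactly the identities and inequalities displayed above.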

Deterministic Algorithms

Here we introduce some deterministic algorithms for computing $\nu(x)$, $I(x)$, $T(A_1,A_2)$ and/or $U(A_1,A_2)$. These algorithms can be employed either stand-alone or as building blocks for the Monte Carlo algorithms to be introduced later.

Suppose first that we want to compute $\nu(x)$ for a single value of $x$. This can be done as follows: write all the points of $A_1$ into a hash table; then examine sequentially each of the points $z \in A_2$, inquiring whether $y = x + z$ belongs to $A_1$, and incrementing a counter if it does. Clearly this requires a CPU time of order $N_1 + N_2$.

For computing $I(x)$, the algorithm can be streamlined by stopping as soon as one finds $\nu(x) \geq 1$.
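In Python, this single-$x$ algorithm might be sketched as follows (our illustration, not the authors' code; a built-in set plays the role of the hash table):

```python
def nu_single(A1, A2, x):
    """nu(x) in time O(N1 + N2): hash A1 once, then scan A2 asking
    whether y = x + z lies in A1."""
    table = set(A1)
    count = 0
    for z in A2:
        y = tuple(xj + zj for xj, zj in zip(x, z))
        if y in table:
            count += 1
    return count

def I_single(A1, A2, x):
    """I(x): like nu(x), but stop as soon as one pair is found."""
    table = set(A1)
    return int(any(tuple(xj + zj for xj, zj in zip(x, z)) in table for z in A2))
```

For repeated calls, the hash table would of course be built once outside the loop.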

Suppose next that we want to compute $\nu(x)$ for several different values of $x$, say $r$ of them. Then the foregoing algorithm requires a CPU time of order $N_1 + r N_2$. But since the answer is invariant under the interchange $A_1 \leftrightarrow A_2$, $x \to -x$, it now pays to choose $A_2$ to be the smaller of the two sets. As a result, the CPU time is of order $N_{max} + r N_{min}$.

Suppose finally that we want to compute $\nu(x)$ or $I(x)$ for all $x$; a special case of this is to compute $T(A_1,A_2)$ or $U(A_1,A_2)$. This can be done by examining sequentially each of the pairs $y \in A_1$, $z \in A_2$, and writing the points $x = y - z$ into a hash table equipped with an auxiliary "count" field. Clearly this requires a CPU time of order $N_1 N_2$; the CPU time for subsequently utilizing the counts $\{\nu(x)\}$ is of order $T(A_1,A_2) \leq N_1 N_2$. If all one wants is $\{I(x)\}$, then the "count" field can be dispensed with; it suffices to know which sites $x$ are hit at least once.

^{14} The two upper bounds are trivial. The lower bound on $U$ is the Schwarz inequality. The lower bound on $T$ is proven as follows: let $y_\star$ (resp. $z_\star$) be the lexicographically smallest element of $A_1$ (resp. $A_2$). Then $y_\star - A_2$ and $A_1 - z_\star$ are subsets of $S$ of cardinalities $N_2$ and $N_1$, respectively, with only one point in common, namely $y_\star - z_\star$. QED

An alternative algorithm for computing $\nu(x)$ for all $x$ can be based on the Fast Fourier Transform (FFT). Let $\sigma_j$ ($1 \leq j \leq d$) be the extension of $A_1$ in the $j^{th}$ coordinate direction, i.e. the difference between the maximum and minimum values of $y_j$ for $y \in A_1$; let $\tau_j$ be the corresponding extension for $A_2$. Now let $k_j$ be the least integer such that $2^{k_j} \geq \sigma_j + \tau_j + 1$. Then, if we place $A_1$ and $A_2$ in a periodic box $B$ of size $2^{k_1} \times 2^{k_2} \times \cdots \times 2^{k_d}$, we can compute the convolution of $\chi_{A_1}$ and $\widetilde{\chi}_{A_2}$ in the box $B$ without distortion by the periodic boundary conditions; this gives the full set of counts $\{\nu(x)\}$, and thus also $\{I(x)\}$, $T(A_1,A_2)$ and $U(A_1,A_2)$. The convolution can be carried out by the FFT in a time of order $V \log V$, where

  V \;=\; 2^{k_1 + \cdots + k_d} \;\asymp\; \prod_{j=1}^{d} (\sigma_j + \tau_j) .
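As an illustration (ours, not from the original paper), here is a small NumPy sketch of the FFT approach; for simplicity the box side in each direction is taken to be exactly $\sigma_j + \tau_j + 1$, the minimal size avoiding wrap-around, rather than the power-of-two sizes $2^{k_j}$ that a classical radix-2 FFT would require:

```python
import numpy as np

def all_counts_fft(A1, A2):
    """All multiplicities nu(x) at once, as the cross-correlation of the two
    indicator functions, computed with FFTs on a wrap-around-free periodic box."""
    A1 = np.asarray(A1); A2 = np.asarray(A2)
    ext = (A1.max(0) - A1.min(0)) + (A2.max(0) - A2.min(0)) + 1   # sigma+tau+1
    chi1 = np.zeros(tuple(ext)); chi2 = np.zeros(tuple(ext))
    for p in A1 - A1.min(0):
        chi1[tuple(p)] = 1.0
    for p in A2 - A2.min(0):
        chi2[tuple(p)] = 1.0
    # circular cross-correlation chi1 with chi2; the box is large enough
    # that distinct difference vectors x never collide
    corr = np.fft.ifftn(np.fft.fftn(chi1) * np.conj(np.fft.fftn(chi2))).real
    return np.rint(corr).astype(int)

nu = all_counts_fft([(0, 0), (1, 0), (1, 1)], [(0, 0), (0, 1)])
assert int(nu.sum()) == 3 * 2          # sum of counts = N1*N2
T = int((nu > 0).sum())                # number of distinct differences
```

The toy point sets here are hypothetical; a production code would pad to FFT-friendly sizes and exploit real-input transforms.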

Let us now estimate the performance of these methods as stand-alone algorithms for computing $T(\omega^{(1)},\omega^{(2)})$, where $\omega^{(1)}$ and $\omega^{(2)}$ are self-avoiding walks. The hash-table algorithm computes $T(\omega^{(1)},\omega^{(2)})$ in a CPU time of order $N_1 N_2$, i.e. of order $N^2$ if $N_1 \approx N_2 \approx N$. By contrast, we expect that one effectively independent pair $(\omega^{(1)},\omega^{(2)})$ can be produced by the pivot algorithm in a CPU time of order $N_1 + N_2$ if $\tau_{int,T} \sim N^p$, or in any case not much greater. So this algorithm would spend more time analyzing the data than producing it, and the overall computational complexity per effectively independent sample would be increased from $N$ to $N^2$, thereby nullifying the advantage of the pivot algorithm over previous algorithms.

The FFT algorithm computes $T(\omega^{(1)},\omega^{(2)})$ in a CPU time of order $V \log V$, where $V$ is given above. For typical SAWs we have $V \sim N^{d\nu}$ when $N_1 \approx N_2 \approx N$, so presumably we have

  \langle V \log V \rangle \;\sim\; N^{d\nu} \log N .

In the usual situation $N_1 \approx N_2 \approx N$, this method is asymptotically better than the hash-table algorithm if $d\nu < 2$, i.e. if $d \leq 3$. However, it is unlikely to be better in practice, except for very large $N$. And the behavior is still vastly worse than the time of order $N$ for generating the walks, except in $d = 1$.

It may be possible to devise deterministic algorithms which are more efficient than either of these elementary ones; we leave this as an exercise for interested computer scientists.

Monte Carlo Algorithms: Generalities

Fix a pair of sets $A_1, A_2$, and suppose that we use some Monte Carlo algorithm to provide an unbiased estimate $Z$ of $T(A_1,A_2)$. We will thus have

  E(Z|A_1,A_2) \;=\; T(A_1,A_2) ,
  var(Z|A_1,A_2) \;\equiv\; E(Z^2|A_1,A_2) - [E(Z|A_1,A_2)]^2 \;\equiv\; V(A_1,A_2) .

Here the first equation expresses the unbiasedness of the inner-loop Monte Carlo algorithm, while the second defines its conditional variance. We shall compute the functional $V(A_1,A_2)$ for each of the Monte Carlo algorithms we introduce below.

Now let us call the Monte Carlo subroutine $R$ times for the given pair $(A_1,A_2)$, and average the results: $\overline{Z} = R^{-1} \sum_{i=1}^{R} Z_i$. Obviously we have

  E(\overline{Z}|A_1,A_2) \;=\; T(A_1,A_2) ,
  var(\overline{Z}|A_1,A_2) \;=\; R^{-1}\, V(A_1,A_2) .

Now suppose that we generate a random pair $(A_1,A_2)$ from some probability distribution, e.g. SAWs $\omega^{(1)}, \omega^{(2)}$ from the equal-weight distribution on $S_{N_1} \times S_{N_2}$. Clearly $\overline{Z}$ is an unbiased estimator of $\langle T \rangle$, i.e.

  \langle \overline{Z} \rangle \;=\; \langle E(\overline{Z}|A_1,A_2) \rangle \;=\; \langle T \rangle ,

where $\langle\,\cdot\,\rangle$ denotes expectation in the given probability distribution. The variance of $\overline{Z}$ is a sum of two terms:

  var(\overline{Z}) \;=\; var(T) + \langle var(\overline{Z}|A_1,A_2) \rangle
                    \;=\; \left[ \langle T^2 \rangle - \langle T \rangle^2 \right] + R^{-1} \langle V \rangle .

The first term is the fluctuation of $T(A_1,A_2)$ from one pair of sets to another; the second term is the mean (over pairs of sets) of the fluctuation (conditional variance) in the inner Monte Carlo subroutine.

The mean CPU time for the computation of $\overline{Z}$ is $\langle T_{CPU} \rangle = a + bR$: here $a$ is the mean CPU time for generating a pair of effectively independent sets $(A_1,A_2)$ from the desired ensemble, plus any setup time associated with the inner-loop Monte Carlo algorithm, while $b$ is the mean additional CPU time per iteration of the inner-loop Monte Carlo algorithm. The goal is to minimize the variance-time product

  \langle T_{CPU} \rangle \, var(\overline{Z}) \;=\; \left[ b\, var(T) \right] R + \left[ a \langle V \rangle \right] R^{-1} + a\, var(T) + b \langle V \rangle ,

since this quantity, divided by the total CPU time, equals the variance of our final estimate. Hence the optimal choice of $R$ is

  R_{opt} \;=\; \left( \frac{a \langle V \rangle}{b\, var(T)} \right)^{1/2} ,

and the variance-time product is then

  \langle T_{CPU} \rangle \, var(\overline{Z}) \Big|_{opt} \;=\; \left[ (a\, var(T))^{1/2} + (b \langle V \rangle)^{1/2} \right]^2 .

Of course, $R$ must be a positive integer, and so the true $R_{opt}$ is obtained by rounding the right-hand side up or down. This subtlety can be ignored if $R_{opt}$ is large, but may be significant otherwise. In particular, the deterministic inner-loop algorithms have $V = 0$; but in this case $R_{opt} = 1$, rather than $0$ as the formula claims, and $\langle T_{CPU} \rangle \, var(\overline{Z}) \big|_{opt} = (a + b)\, var(T)$.

CPU opt

In the remainder of this section, we will assume that the CPU time for generating the sets $A_1$ and $A_2$ is of order $N_1 + N_2$. Clearly this is a lower bound, since it takes a time of order $N_1 + N_2$ simply to write down the two sets. On the other hand, for our application to SAWs, $A_1$ and $A_2$ will be generated by the pivot algorithm, which generates an effectively independent SAW (as regards global observables) in a CPU time of order $N$ (see above). We expect that the observable $T(\omega^{(1)},\omega^{(2)})$ is indeed "global" in the sense that $\tau_{int,T} \sim N^p$, where $p$ is the acceptance-fraction exponent.

Hit-or-Miss Monte Carlo Algorithm

Let $\sigma_j^+$ (resp. $\sigma_j^-$) be the maximum (resp. minimum) value of the $j^{th}$ coordinate among the points in $A_1$, so that

  B_1 \;=\; [\sigma_1^-, \sigma_1^+] \times \cdots \times [\sigma_d^-, \sigma_d^+]

is the smallest rectangular parallelepiped containing $A_1$. Let $\tau_j^+$ and $\tau_j^-$ be the corresponding values for $A_2$, and $B_2$ the corresponding box. It follows that

  B \;\equiv\; B_1 - B_2 \;=\; [\sigma_1^- - \tau_1^+,\, \sigma_1^+ - \tau_1^-] \times \cdots \times [\sigma_d^- - \tau_d^+,\, \sigma_d^+ - \tau_d^-]

is a parallelepiped which is guaranteed to contain all the points of $S = A_1 - A_2$.^{15} Therefore, $T(A_1,A_2) = \#S = \sum_{x \in B} I(x)$ can be computed by the trivial "hit-or-miss" Monte Carlo method: pick a point $x \in B$ at random, and compute $I(x)$ by the deterministic algorithm described above; then output $Z \equiv \#B \cdot I(x)$, where $\#B = \prod_{j=1}^{d} (\sigma_j^+ - \sigma_j^- + \tau_j^+ - \tau_j^- + 1)$. Clearly $I(x)$ is a binomial random variable of mean $p = \#S / \#B$, so that

  E(I(x)) \;=\; \frac{\#S}{\#B} , \qquad var(I(x)) \;=\; \frac{\#S}{\#B} \left( 1 - \frac{\#S}{\#B} \right) .

Hence $Z = \#B \cdot I(x)$ is an unbiased estimator of $\#S$, and its variance is

  V_{hit-or-miss}(A_1,A_2) \;\equiv\; var(Z) \;=\; \#S \, (\#B - \#S) .

The CPU time for $R$ iterations of this algorithm is of order $T_{CPU} \sim N_{max} + R N_{min}$: we put the larger of the two sets $A_1, A_2$ in the hash table once, and then each time we compute $I(x)$ by looping over the smaller of the two sets. The CPU time for generating the two sets is, by assumption, also of order $N_{max}$. Therefore, in the notation above, we have $a \sim N_{max}$ and $b \sim N_{min}$.

^{15} In fact, it is the smallest such parallelepiped, since for each index $j$ there is a point in $A_1 - A_2$ with $j^{th}$ coordinate equal to $\sigma_j^+ - \tau_j^-$, and another point with $j^{th}$ coordinate equal to $\sigma_j^- - \tau_j^+$.
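A minimal Python sketch of one hit-or-miss sample (our illustration, with hypothetical 2-d toy sets standing in for SAWs):

```python
import random

def hit_or_miss_sample(A1, A2, rng):
    """One hit-or-miss sample Z = #B * I(x), with x uniform in the box B
    that is guaranteed to contain S = A1 - A2."""
    table = set(A1)
    d = len(A1[0])
    lo = [min(y[j] for y in A1) - max(z[j] for z in A2) for j in range(d)]
    hi = [max(y[j] for y in A1) - min(z[j] for z in A2) for j in range(d)]
    volB = 1
    for a, b in zip(lo, hi):
        volB *= b - a + 1
    x = tuple(rng.randint(a, b) for a, b in zip(lo, hi))
    hit = any(tuple(x[j] + z[j] for j in range(d)) in table for z in A2)
    return volB if hit else 0

rng = random.Random(0)
A1 = [(0, 0), (1, 0), (1, 1)]
A2 = [(0, 0), (0, 1)]
est = sum(hit_or_miss_sample(A1, A2, rng) for _ in range(20000)) / 20000
# unbiased for T(A1,A2) = 5 here; statistical error is of order 0.02
assert abs(est - 5.0) < 0.2
```

For these toy sets the box $B$ has volume 6, so each sample is either 0 or 6.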

Barrett Algorithm: Theory

Barrett has proposed the following Monte Carlo algorithm, which gives an unbiased estimate of $T(A_1,A_2)$:

1. Choose at random $y \in A_1$ and $z \in A_2$. Set $x = y - z$.

2. Compute $\nu(x)$, using the deterministic algorithm described above. (Note that by construction we have $x \in S$, and hence $\nu(x) \geq 1$.)

3. Output $Y \equiv N_1 N_2 / \nu(x)$.
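These three steps can be sketched in Python as follows (ours, not the authors' code; the deterministic computation of $\nu(x)$ is inlined, and the toy sets are hypothetical):

```python
import random

def barrett_sample(A1, A2, rng):
    """One Barrett sample Y = N1*N2 / nu(x), with x = y - z and y, z uniform."""
    y = rng.choice(A1)
    z = rng.choice(A2)
    x = tuple(a - b for a, b in zip(y, z))
    table = set(A1)
    nu = sum(1 for w in A2
             if tuple(xj + wj for xj, wj in zip(x, w)) in table)
    return len(A1) * len(A2) / nu      # nu >= 1 since x is in S by construction

rng = random.Random(0)
A1 = [(0, 0), (1, 0), (1, 1)]
A2 = [(0, 0), (0, 1)]
est = sum(barrett_sample(A1, A2, rng) for _ in range(20000)) / 20000
assert abs(est - 5.0) < 0.2            # E(Y) = T(A1,A2) = 5 here
```

For these sets $\nu(x)$ is 1 or 2, so each sample $Y$ is either 6 or 3.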

The analysis of this algorithm is easy. In Step 1 we choose the vector $x$ with probability

  Prob(x) \;=\; \frac{\nu(x)}{N_1 N_2} .

It follows that

  E(Y) \;=\; \sum_{x \in S} Prob(x)\, Y(x) \;=\; T(A_1,A_2) ,
  E(Y^2) \;=\; \sum_{x \in S} Prob(x)\, Y(x)^2 \;=\; \sum_{x \in S} \frac{N_1 N_2}{\nu(x)} \;=\; N_1 N_2\, U(A_1,A_2) ,

and hence

  V_{Barrett}(A_1,A_2) \;\equiv\; var(Y) \;=\; N_1 N_2\, U(A_1,A_2) - T(A_1,A_2)^2 .

The CPU time for $R$ iterations of the Barrett algorithm is of order $T_{CPU} \sim N_{max} + R N_{min}$: we put the larger of the two sets $A_1, A_2$ in the hash table once, and then each time we compute $\nu(x)$ by looping over the smaller of the two sets. The CPU time for generating the two sets is, by assumption, also of order $N_{max}$. Therefore we have $a \sim N_{max}$ and $b \sim N_{min}$.

Karp-Luby Algorithm: Theory

Karp and Luby have devised an elegant Monte Carlo algorithm for estimating $T(A_1,A_2)$ (and somewhat more general combinatorial problems). The Karp-Luby algorithm goes as follows:

1. Choose at random $y \in A_1$ and $z \in A_2$. Set $x = y - z$. Set $t = 1$.

2. Choose at random $y' \in A_1$.

3. If $z' \equiv y' - x \in A_2$, then go to Step 4. Otherwise, increment $t$ by 1 and go to Step 2.

4. Output $Z \equiv N_2\, t$.
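In Python, the four steps might be sketched as follows (our illustration, not the authors' code; the toy sets are hypothetical):

```python
import random

def karp_luby_sample(A1, A2, rng):
    """One Karp-Luby sample Z = N2 * t, where t counts the trials needed to
    find y' in A1 with y' - x in A2."""
    set2 = set(A2)
    y = rng.choice(A1)                       # Step 1
    z = rng.choice(A2)
    x = tuple(a - b for a, b in zip(y, z))
    t = 1
    while True:
        yp = rng.choice(A1)                  # Step 2
        zp = tuple(a - b for a, b in zip(yp, x))
        if zp in set2:                       # Step 3
            return len(A2) * t               # Step 4
        t += 1

rng = random.Random(0)
A1 = [(0, 0), (1, 0), (1, 1)]
A2 = [(0, 0), (0, 1)]
est = sum(karp_luby_sample(A1, A2, rng) for _ in range(40000)) / 40000
assert abs(est - 5.0) < 0.3                  # E(Z) = T(A1,A2) = 5 here
```

Note that no hash lookup over the whole of $A_2$ per trial is needed beyond a single membership test; this is the source of the speedup analyzed below.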

This algorithm can be understood as a randomized version of the Barrett algorithm. Step 1 is identical in the two algorithms, and it selects the vector $x$ with probability

  Prob(x) \;=\; \frac{\nu(x)}{N_1 N_2} .

Then $t$, the number of trials of Steps 2 and 3 needed to find a $y'$ such that $y' - x \in A_2$, is (conditioned on $x$) a random variable with a geometric distribution:

  Prob(t = k \,|\, x) \;=\; \left( 1 - \frac{\nu(x)}{N_1} \right)^{k-1} \frac{\nu(x)}{N_1} \qquad for \; k \geq 1 .

Hence the conditional expectations of $Z = N_2\, t$ are

  E(Z|x) \;=\; \frac{N_1 N_2}{\nu(x)} ,
  E(Z^2|x) \;=\; \frac{2 N_1^2 N_2^2}{\nu(x)^2} - \frac{N_1 N_2^2}{\nu(x)} .

(Of course, this makes sense only for $x \in S$.) Thus, Steps 2-4 of the Karp-Luby algorithm produce a random quantity $Z$ whose mean value, conditional on $x$, is precisely the deterministic quantity $Y = N_1 N_2/\nu(x)$ of the Barrett algorithm. It follows that the unconditional expectations are

  E(Z) \;=\; \sum_{x \in S} Prob(x)\, E(Z|x) \;=\; T(A_1,A_2) ,
  E(Z^2) \;=\; \sum_{x \in S} Prob(x)\, E(Z^2|x) \;=\; \sum_{x \in S} \left( \frac{2 N_1 N_2}{\nu(x)} - N_2 \right)
         \;=\; 2 N_1 N_2\, U(A_1,A_2) - N_2\, T(A_1,A_2) ,

and hence

  V_{Karp-Luby}(A_1,A_2) \;\equiv\; var(Z) \;=\; 2 N_1 N_2\, U(A_1,A_2) - N_2\, T(A_1,A_2) - T(A_1,A_2)^2 .

For future reference we note the following inequality:

  \frac{N_1 - 1}{N_1 + N_2 - 1}\; N_1 N_2\, U(A_1,A_2) \;\leq\; V_{Karp-Luby}(A_1,A_2) \;\leq\; 2 N_1 N_2\, U(A_1,A_2) .

[Proof: The upper bound is trivial. To prove the lower bound, use $T \geq N_1 + N_2 - 1$ and $T^2 \leq N_1 N_2 U$ to deduce $N_2 T + T^2 \leq (1 + \frac{N_2}{N_1+N_2-1})\, N_1 N_2 U$.] This means that $V_{Karp-Luby}$ is of the same order of magnitude as its first term, namely $2 N_1 N_2 U$, except perhaps when $N_1 \ll N_2$. An alternative, and often sharper, lower bound on $V_{Karp-Luby}$ can be obtained by noting that

  N_2\, T \;\leq\; \left( \frac{N_2}{N_1} \right)^{1/2} (c \log N_{max})^{-1/2}\; N_1 N_2\, U ,

where $c$ is the constant in the Appendix A bound $U \geq c \log N_{max}$. [Proof: From the inequalities above we have $T^2 \leq N_1 N_2 U$, while from Appendix A we have $c \log N_{max} \leq U$. Now take the geometric mean of these two bounds.] Therefore we have

  \left[ 2 - \left( \frac{N_2}{N_1} \right)^{1/2} (c \log N_{max})^{-1/2} \right] N_1 N_2\, U(A_1,A_2) \;-\; T(A_1,A_2)^2 \;\leq\; V_{Karp-Luby}(A_1,A_2) .

The CPU time for one execution of Steps 2 and 3 is essentially constant, so the expected CPU time for one iteration of the Karp-Luby algorithm is of order $E(t) = T(A_1,A_2)/N_2$. In addition, there is an initial CPU time of order $N_2$ to place the elements of $A_2$ in a hash table. Finally, we should remember the time of order $N_1 + N_2$ for generating $A_1$ and $A_2$ in the first place. The expected CPU time for $R$ iterations of the Karp-Luby algorithm, plus generating $A_1$ and $A_2$, is thus

  T_{CPU} \;\sim\; N_{max} \;+\; R\, \frac{T(A_1,A_2)}{N_2} .

Therefore we have $a \sim N_{max}$ and $b \sim \langle T \rangle / N_2$.

Scaling Theory

In this section we consider the scaling theory of the three Monte Carlo algorithms (hit-or-miss, Barrett and Karp-Luby) in the case where $A_1$ and $A_2$ are independent random SAWs of lengths $N_1$ and $N_2$, respectively, and $N_1 \approx N_2 \approx N \to \infty$. In each case we need to compute (or guess heuristically) the scaling behavior of $var(\overline{Z})$ and $\langle T_{CPU} \rangle$ as $N \to \infty$. A good figure of demerit for an algorithm is the mean CPU time needed to estimate $\langle T \rangle$ with a relative variance of order 1; this time is

  \frac{ \langle T_{CPU} \rangle \, var(\overline{Z}) \big|_{opt} }{ \langle T \rangle^2 } .

For the hit-or-miss algorithm we must study the scaling of $\langle T \rangle$, $\langle T^2 \rangle$ and $\langle T \cdot \#B \rangle$. For the Barrett and Karp-Luby algorithms we must study the scaling of $\langle T \rangle$, $\langle T^2 \rangle$ and $\langle U \rangle$, as well as their various combinations.

As discussed above, we expect that $\langle T \rangle$ scales as

  \langle T \rangle \;\sim\; N^{p_T} ,

where

  p_T \;=\; 2\Delta_4 - \gamma .

We further expect that the probability distribution of $T$ will, after rescaling by $N^{p_T}$, approach a nontrivial limiting distribution (here "nontrivial" means that the distribution is not a delta function). Therefore we expect that

  \langle T^2 \rangle \;\sim\; var(T) \;\sim\; N^{2 p_T} .

On the other hand, it is reasonable to expect that

  \langle U \rangle \;\sim\; N^{p_U}

for some a priori unknown exponent $p_U$. From the inequalities above we know that $2 p_T - 2 \leq p_U \leq p_T$, and we will be interested in knowing whether these inequalities are strict or not.

We now look at the individual algorithms:

Hit-or-Miss Algorithm. As just discussed, we expect

  \langle T \rangle \;\sim\; N^{p_T} , \qquad var(T) \;\sim\; N^{2 p_T} ,

with $p_T = 2\Delta_4 - \gamma$. On the other hand, for typical pairs of SAWs we clearly have $\#B \sim N^{d\nu}$, so we expect

  \langle T \cdot \#B \rangle \;\sim\; N^{d\nu + p_T} .

In general we have $d\nu \geq p_T$; but even when $d\nu = p_T$ (i.e. hyperscaling holds), it seems intuitively clear that the ratio $\langle T^2 \rangle / \langle T \cdot \#B \rangle$ will stay well below 1, except in dimension $d = 1$. Therefore, radical cancellations in $\langle V_{hit-or-miss} \rangle$ are excluded, and we expect

  \langle V_{hit-or-miss} \rangle \;\sim\; N^{d\nu + p_T} .

It follows that

  var(\overline{Z}) \;\sim\; N^{2 p_T} + R^{-1} N^{d\nu + p_T} .

On the other hand, the CPU time is obviously

  \langle T_{CPU} \rangle \;\sim\; N + RN .

The variance-time product is thus

  \langle T_{CPU} \rangle \, var(\overline{Z}) \;\sim\; N^{1+2 p_T} + N^{1+d\nu+p_T} R^{-1} + N^{1+2 p_T} R + N^{1+d\nu+p_T} .

There are two cases:

(a) If the hyperscaling relation $d\nu = p_T$ holds, then $R_{opt} \sim 1$. (We expect that this is the case for $d < 4$.) With this choice of $R$ we have $var(\overline{Z}) \sim \langle T \rangle^2$ and $\langle T_{CPU} \rangle \sim N$. In other words, we can achieve a relative variance of order 1 in a CPU time of order $N$.

(b) If $d\nu > p_T$, then $R_{opt} \sim N^{(d\nu - p_T)/2}$. We expect that this is the case for $d > 4$; more precisely, for $d > 4$ we expect that $\Delta_4 = 3/2$ and $\gamma = 1$, in which case $R_{opt} \sim N^{(d-4)/4}$. With this choice of $R$ we have $var(\overline{Z}) \sim \langle T \rangle^2 N^{(d\nu-p_T)/2}$ and $\langle T_{CPU} \rangle \sim N^{1+(d\nu-p_T)/2}$, which implies a variance-time product of order $N^{1+d\nu-p_T} \langle T \rangle^2$. In fact, a variance-time product of this order can be achieved with any $R$ in the range $1 \leq R \leq N^{d\nu - p_T}$; the optimal value $R_{opt}$ lies at the geometric mean of these two extremes. Hence a relative variance of order 1 requires a CPU time of order $N^{1+d\nu-p_T}$, i.e. order $N^{1+(d-4)/2}$ for $d > 4$.

Barrett and Karp-Luby Algorithms. The behavior of the Barrett and Karp-Luby algorithms is determined by the scaling behavior of $\langle T \rangle$, $\langle T^2 \rangle$ and $\langle U \rangle$ as $N \to \infty$. From the variance formulae above we have

  \langle V_{Barrett} \rangle \;=\; N_1 N_2 \langle U \rangle \left[ 1 - \frac{\langle T^2 \rangle}{N_1 N_2 \langle U \rangle} \right] ,
  \langle V_{Karp-Luby} \rangle \;\approx\; 2 N_1 N_2 \langle U \rangle \left[ 1 - \frac{\langle T^2 \rangle}{2 N_1 N_2 \langle U \rangle} \right] ,

where we write $a \approx b$ to denote that $a/b \to 1$ as $N \to \infty$ (the term $N_2 \langle T \rangle$ is of lower order and has been dropped). This focusses attention on the ratio

  \rho_{N_1,N_2} \;\equiv\; \frac{\langle T^2 \rangle_{N_1,N_2}}{N_1 N_2\, \langle U \rangle_{N_1,N_2}} ,

which by the inequalities above satisfies $0 \leq \rho_{N_1,N_2} \leq 1$, and in particular on

  \rho \;\equiv\; \lim_{N \to \infty} \rho_{N,N} .

For simplicity we consider here only $N_1 = N_2 = N$, but the same principles obviously apply at any fixed ratio $N_1/N_2$. There are then three possible cases:

(a) $\rho = 0$. In this case $\langle V_{Barrett} \rangle \approx N^2 \langle U \rangle$ and $\langle V_{Karp-Luby} \rangle \approx 2 N^2 \langle U \rangle$.

(b) $0 < \rho < 1$. In this case $\langle V_{Barrett} \rangle \approx (1-\rho) N^2 \langle U \rangle$ and $\langle V_{Karp-Luby} \rangle \approx (2-\rho) N^2 \langle U \rangle$.

(c) $\rho = 1$. In this case $\langle V_{Barrett} \rangle = o(N^2 \langle U \rangle)$, and its exact scaling is subtle. Of course, we still have $\langle V_{Karp-Luby} \rangle \approx N^2 \langle U \rangle$.

Case (a) corresponds to the exponent $p_U$ being strictly greater than $2 p_T - 2$, while cases (b) and (c) correspond to $p_U = 2 p_T - 2$. We believe that in fact case (c) never occurs: although it is possible to have $U \approx T^2/(N_1 N_2)$ for special pairs $(A_1,A_2)$ (e.g. perpendicular rods), it seems quite implausible that such behavior could occur, even in the limit $N \to \infty$, after averaging over all pairs of SAWs.

Below we shall present numerical data showing clearly that $\rho < 1$ in dimensions $d = 2, 3$; what is less clear is whether $\rho$ is zero or nonzero. For dimension $d = 4$ we expect that $\rho > 0$. For $d > 4$ we expect that $\langle T \rangle \sim N^2$,^{16} from which it follows that $\langle U \rangle \sim N^2$^{17} and hence $\rho > 0$. On the other hand, we believe, as noted above, that $\rho = 1$ is impossible.

Assuming that $\rho < 1$, it follows that $\langle V_{Barrett} \rangle$ and $\langle V_{Karp-Luby} \rangle$ are both of order $N^2 \langle U \rangle$. Therefore,

  var(\overline{Z}) \;\sim\; \langle T^2 \rangle + R^{-1} N^2 \langle U \rangle

for both algorithms. On the other hand, the CPU times are

  \langle T_{CPU} \rangle_{Barrett} \;\sim\; N + NR ,
  \langle T_{CPU} \rangle_{Karp-Luby} \;\sim\; N + N^{-1} \langle T \rangle R .

The variance-time products are thus

  \langle T_{CPU} \rangle \, var(\overline{Z}) \big|_{Barrett}
    \;\sim\; N \langle T^2 \rangle + N^3 \langle U \rangle R^{-1} + N \langle T^2 \rangle R + N^3 \langle U \rangle ,
  \langle T_{CPU} \rangle \, var(\overline{Z}) \big|_{Karp-Luby}
    \;\sim\; N \langle T^2 \rangle + N^3 \langle U \rangle R^{-1} + N^{-1} \langle T \rangle \langle T^2 \rangle R + N \langle U \rangle \langle T \rangle .

Recalling now the inequality $\langle T^2 \rangle \leq N^2 \langle U \rangle$, we see that the dominant term for the Barrett algorithm is the fourth one, provided that $R \leq N^2 \langle U \rangle / \langle T^2 \rangle$; and in this case we have $\langle T_{CPU} \rangle \, var(\overline{Z}) \sim N^3 \langle U \rangle$. The optimal value is at the geometric mean of this range, i.e. $R_{opt} \sim N \langle U \rangle^{1/2} / \langle T \rangle$. We conclude that a relative variance of order 1 requires a CPU time of order $N^3 \langle U \rangle / \langle T \rangle^2$. This is at least of order $N$, but it may be larger in case $\langle U \rangle \gg \langle T \rangle^2 / N^2$, i.e. in case $p_U > 2 p_T - 2$.

Recalling next the inequality $U \leq T$, we see that the dominant term for the Karp-Luby algorithm is the first one, provided that $N^2 \langle U \rangle / \langle T^2 \rangle \leq R \leq N^2 / \langle T \rangle$; and in this case we have $\langle T_{CPU} \rangle \, var(\overline{Z}) \sim N \langle T^2 \rangle$. The optimal value is at the geometric mean of this range, i.e. $R_{opt} \sim N^2 \langle U \rangle^{1/2} / \langle T \rangle^{3/2}$. We conclude that a relative variance of order 1 requires a CPU time of order $N$.

Comparing the analyses of the Barrett and Karp-Luby algorithms, we see that the effect of the additional randomization in the Karp-Luby algorithm is to reduce drastically the mean CPU time per iteration ($\sim T/N_2$ versus $\sim N_{min}$), while only modestly increasing the variance. The Karp-Luby algorithm is thus superior to the Barrett algorithm whenever $\langle T \rangle \ll N^2$, as we expect occurs in dimension $d < 4$; the two algorithms are of the same order whenever $\langle U \rangle \sim \langle T \rangle \sim N^2$, as we expect occurs in dimension $d > 4$.

^{16} This is rigorously proven for $d > 4$, modulo the problem of translating generating-function results into fixed-$N$ results. See the Theorem, and the Remark following it.

^{17} Proof: $\langle U \rangle \geq \langle T^2 \rangle / (N_1 N_2) \geq \langle T \rangle^2 / (N_1 N_2)$, by the inequalities above and the Schwarz inequality, while $\langle U \rangle \leq \langle T \rangle$. Thus, if $\langle T \rangle \sim N^2$, we have also $\langle U \rangle \sim N^2$.

Barrett and Karp-Luby Algorithms: Numerical Results

In this section we report results of a rather crude Monte Carlo study of $\langle T \rangle$, $\langle T^2 \rangle$, $var(T)$, $\langle U \rangle$, $\langle V_{Barrett} \rangle$ and $\langle V_{Karp-Luby} \rangle$ for SAWs in dimensions $d = 2$ and $d = 3$, taking $N_1 = N_2 = N$. The goal is to estimate the exponents $p_T$ and $p_U$ and the various amplitudes in the scaling relations

  \langle T \rangle \;\approx\; A_T \, N^{p_T} ,
  \langle T^2 \rangle \;\approx\; A_{T^2} \, N^{2 p_T} ,
  var(T) \;\approx\; A_{var(T)} \, N^{2 p_T} ,
  \langle U \rangle \;\approx\; A_U \, N^{p_U} ,
  \langle V_{Barrett} \rangle \;\approx\; A_{V_{Barrett}} \, N^{2 + p_U} ,
  \langle V_{Karp-Luby} \rangle \;\approx\; A_{V_{Karp-Luby}} \, N^{2 + p_U} .

In Table (see original) we report our Monte Carlo estimates for two-dimensional self-avoiding walks at several values of $N$. These were obtained by using the dimerization algorithm to generate pairs of independent SAWs $(\omega^{(1)},\omega^{(2)})$, and then using the deterministic algorithm to compute $T(\omega^{(1)},\omega^{(2)})$ and $U(\omega^{(1)},\omega^{(2)})$. Unfortunately, some of the observables lack error bars; this is our fault. The fit to $\langle T \rangle$ gives a reasonable $\chi^2$ provided that we discard the data point at the lowest value of $N$. The resulting estimate is not far from the expected value $p_T = d\nu$; the small discrepancy can quite plausibly be attributed to corrections to scaling. We will make a much more careful estimate of $\langle T \rangle$ and its exponent $p_T$ below. The fits to $\langle T^2 \rangle$ and $var(T)$ are also consistent with this value of $p_T$, although the lack of error bars prevents making this quantitative. The fit to $\langle V_{Karp-Luby} \rangle$ gives a reasonable $\chi^2$ only if we discard the data points at the two lowest values of $N$; this yields one estimate of $p_U$. If we use the same data points in the fits to $\langle U \rangle$ and $\langle V_{Barrett} \rangle$, we get two further estimates of $p_U$, unfortunately without error bars. These estimates of $p_U$ are thus in rather mediocre agreement, and it is far from clear whether $p_U$ is strictly greater than $2 p_T - 2$. Another way of looking at this is to study the ratio $\rho_{N,N} = \langle T^2 \rangle / (N^2 \langle U \rangle)$, which decreases slowly with $N$; it is far from clear whether this ratio is tending to zero or to a nonzero value as $N \to \infty$. Clearly, data at much larger values of $N$ would be needed to resolve this question definitively. Unfortunately, such data are very time-consuming to obtain, because the deterministic algorithm for computing $T$ and $U$ takes a CPU time of order $N^2$. Regarding the reverse inequality, it seems clear that $p_U$ is strictly smaller than $p_T$.

In Table (see original) we report our Monte Carlo estimates for three-dimensional self-avoiding walks. Here we used the pivot algorithm combined with the deterministic algorithm. For all observables we can get a reasonable $\chi^2$ provided that we discard the data points at the two lowest values of $N$. The fits to $\langle T \rangle$, $\langle T^2 \rangle$ and $var(T)$ yield consistent estimates of $p_T$. Again, the errors here are purely statistical; they do not take account of systematic errors arising from corrections to scaling. We will make a much more careful estimate of $\langle T \rangle$ and $p_T$ below. The fits to $\langle U \rangle$, $\langle V_{Barrett} \rangle$ and $\langle V_{Karp-Luby} \rangle$ yield three estimates of $p_U$. Again, the agreement is mediocre, and it is far from clear whether $p_U$ is strictly greater than $2 p_T - 2$ (later we will obtain a better estimate of $p_T$). Otherwise put, the ratio $\rho_{N,N} = \langle T^2 \rangle / (N^2 \langle U \rangle)$ decreases with $N$, and it is far from clear whether it is tending to zero or to a nonzero value as $N \to \infty$. Again, data at much larger values of $N$ would be useful.

It is worth remarking that, in both the two-dimensional and three-dimensional cases, $var(T)$ scales the same way as $\langle T \rangle^2$ but is numerically much smaller. This means that the probability distribution of $T$ is very narrow. Otherwise put, while $N$-step SAWs vary radically among themselves in size and shape, the overlap $T(\omega^{(1)},\omega^{(2)})$ between two of them is remarkably constant. Presumably this is because the operation of forming $\omega^{(1)} - \omega^{(2)}$ "fills in the holes" in the individual walks: while $\omega^{(1)}$ and $\omega^{(2)}$ are fractals, $\omega^{(1)} - \omega^{(2)}$ is "semi-solid", roughly like Swiss cheese.

Numerical Results

Two Dimensions

In Table (see original) we present our data for $\langle R_e^2 \rangle_N$, $\langle R_g^2 \rangle_N$, $\langle R_m^2 \rangle_N$, $\langle T \rangle_{N,N}$ and the pivot-algorithm acceptance fraction $f$, for SAWs in dimension $d = 2$ over a wide range of $N$. Most of these SAWs were generated using the pivot algorithm, using either dimerization or straight rods for initialization (see Appendix B for a discussion of the adequacy of thermalization); run lengths were of order $10^6$ pivots, subsequent to thermalization. However, some data at small $N$ were generated by pure dimerization (of order $10^5$ independent pairs of SAWs per run). The overlap $T(\omega^{(1)},\omega^{(2)})$ was in most cases estimated using the Karp-Luby algorithm; however, some runs at small $N$ used the deterministic algorithm. Some subtleties concerning the correct determination of error bars on data generated by the pivot algorithm are discussed in Appendix C.

Early versions of our program had a bug, in which SAWs were pivoted only at sites $k \geq 1$, i.e. never at the starting point. This is harmless for single SAWs, thanks to the lattice symmetries; but for pairs of SAWs it means that the relative orientation of the initial steps of the two SAWs is never altered by the algorithm. This causes a slight bias in the estimates of $\langle T \rangle$, especially for small $N$. However, we believe that this bias is completely negligible (i.e. much less than our statistical errors) for the values of $N$ treated here; we have been unable to detect any systematic differences between runs with and without this bug. Therefore, instead of throwing away the tainted data, we have simply indicated by an $^{a}$ in the Table those values of $N$ for which some (not necessarily all) of the data suffers from the bug. Caveat lector!

A further Table shows the resulting values for the universal amplitude ratios $\langle R_g^2 \rangle / \langle R_e^2 \rangle$ and $\langle R_m^2 \rangle / \langle R_e^2 \rangle$ and the interpenetration ratio. The error bars here are determined using the triangle inequality; they are probably overestimates by a modest factor.

Log-log graphs of $\langle R_e^2 \rangle$, $\langle R_g^2 \rangle$, $\langle R_m^2 \rangle$, $\langle T \rangle$ and $f$ versus $N$ are so straight that there is nothing to be gained by reproducing them here. We fit each of these quantities to the Ansatz $A N^{power}$, where $power = 2\nu$ for the three squared radii, $2\Delta_4 - \gamma$ for $\langle T \rangle$, and $-p$ for $f$, by performing weighted least-squares regressions of their logarithms against $\log N$, using the a priori error bars on the raw data points to determine both the weights and the error bars. As a precaution against corrections to scaling, we performed the fit with a lower cutoff $N \geq N_{min}$, and we tried all possible values of $N_{min}$. The $\chi^2$ value (sum of squares of normalized deviations from the regression line) can serve as a test of goodness of fit. Let us define the significance level as the probability that $\chi^2$ would exceed the observed value, assuming the correctness of the power-law model and of the raw-data error bars; this probability can be read off the $\chi^2$ distribution with DF $= n - 2$ degrees of freedom, where $n$ is the number of data points used in the fit. An abnormally large value of $\chi^2$ (i.e. a very small significance level) may indicate either that the pure power-law Ansatz is incorrect (e.g. due to corrections to scaling), or else that the claimed error bars on the raw data are too small; further investigation would be necessary to determine which of these is the true cause.^{18} An abnormally small value of $\chi^2$ (a level very close to 1) probably indicates that the claimed error bars on the raw data are too large.
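The weighted log-log regression described here is straightforward to implement; the following sketch (ours, using synthetic data lying exactly on a power law) shows the weights $(y/\sigma_y)^2$ coming from the a priori error bars and the resulting $\chi^2$:

```python
import math

def weighted_powerlaw_fit(N, y, yerr):
    """Weighted least squares for log y = log A + p log N.
    Weights are 1/var(log y), with var(log y) ~ (yerr/y)^2. Returns (A, p, chi2)."""
    x = [math.log(n) for n in N]
    ly = [math.log(v) for v in y]
    w = [(v / e) ** 2 for v, e in zip(y, yerr)]
    Sw = sum(w)
    xb = sum(wi * xi for wi, xi in zip(w, x)) / Sw
    lyb = sum(wi * li for wi, li in zip(w, ly)) / Sw
    p = (sum(wi * (xi - xb) * (li - lyb) for wi, xi, li in zip(w, x, ly))
         / sum(wi * (xi - xb) ** 2 for wi, xi in zip(w, x)))
    logA = lyb - p * xb
    chi2 = sum(wi * (li - logA - p * xi) ** 2 for wi, xi, li in zip(w, x, ly))
    return math.exp(logA), p, chi2

# synthetic data lying exactly on y = 2 N^{3/2}, with 1% error bars
N = [100, 200, 400, 800]
y = [2.0 * n ** 1.5 for n in N]
A, p, chi2 = weighted_powerlaw_fit(N, y, [0.01 * v for v in y])
assert abs(p - 1.5) < 1e-9 and abs(A - 2.0) < 1e-9 and chi2 < 1e-12
```

With real data one would repeat the fit for increasing $N_{min}$ and convert each $\chi^2$ into a significance level, as in the text.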

The exponents estimated from fits to $\langle R_e^2 \rangle$, $\langle R_g^2 \rangle$, $\langle R_m^2 \rangle$ and $\langle T \rangle$ are plotted as a function of $N_{min}$ in a Figure; the exponent estimated from the fit to $f$ is plotted in a second Figure. Looking at the $\chi^2$ values in these fits, we find statistically significant corrections to scaling in $\langle R_e^2 \rangle$ and $\langle R_g^2 \rangle$, in $\langle T \rangle$, and in $f$ only at the smallest values of $N_{min}$. Using in each case the next larger value of $N_{min}$, we find:

[Fit results at the chosen $N_{min}$: for $\langle R_e^2 \rangle$, $\langle R_g^2 \rangle$ and $\langle R_m^2 \rangle$, the estimated exponent $2\nu$ and amplitude $A$; for $\langle T \rangle$, the estimated exponent $2\Delta_4 - \gamma$ and amplitude $A$; for $f$, the estimated exponent $p$ and amplitude $A$; each with the $\chi^2$, DF and confidence level of the fit.]

Error bars are one standard deviation. The results are in excellent agreement with the believed exact value $\nu = 3/4$ (although some very slight corrections to scaling clearly remain) and with the hyperscaling relation $d\nu = 2\Delta_4 - \gamma$.

2 2 2

For hR i hR i hR i and hT i b etter estimates of the nonuniversal amplitude

e g m

A can b e obtained by imp osing the exp onents and simply

4

18 2

Note also that an abnormally large change in as N is increased by one step that is a

min

2

drop in by an amount signals that the data p oint in question diers from the regression

line by several standard deviations This could indicate that the corrections to scaling at this value

2

of N are signicant even though the overall which is dominated by contributions from larger

N may lo ok reasonable

32

tting observableN to a constant For all four observables this ratio declines

very slightly with N by N it is within error bars of its apparent asymptotic

value Taking N we obtain

min

⟨R_e²⟩/N^{3/2}:  A = …   (χ² = …,  DF = …,  level = …)
⟨R_g²⟩/N^{3/2}:  A = …   (χ² = …,  DF = …,  level = …)
⟨R_m²⟩/N^{3/2}:  A = …   (χ² = …,  DF = …,  level = …)
⟨T⟩/N^{3/2}:     A = …   (χ² = …,  DF = …,  level = …)
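The constant fit used here is an ordinary weighted average. A minimal sketch (with purely hypothetical numbers standing in for the paper's data) is:

```python
import numpy as np

def fit_constant(y, sigma):
    """Weighted least-squares fit of data y +/- sigma to a constant.
    Returns (estimate, standard error, chi^2, degrees of freedom)."""
    y = np.asarray(y, dtype=float)
    w = 1.0 / np.asarray(sigma, dtype=float) ** 2
    a = np.sum(w * y) / np.sum(w)        # weighted mean
    err = np.sqrt(1.0 / np.sum(w))       # its standard error
    chisq = float(np.sum(w * (y - a) ** 2))
    return a, err, chisq, len(y) - 1

# Purely hypothetical ratio data observable/N**(3/2) at four values of N:
a, err, chisq, dof = fit_constant([0.771, 0.772, 0.770, 0.771],
                                  [0.001, 0.001, 0.001, 0.001])
```

The χ² returned here, together with DF, yields the "level" quoted for each fit.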

Another approach is to fit observable/N^{3/2} to a + bN^{−Δ}, with a, b and Δ all variable.[19] For ⟨R_m²⟩, for which we have data only at N ≤ …, we are unable to obtain a decent fit with any value of N_min available to us; the corrections to scaling at these N are too weak. For ⟨R_e²⟩, ⟨R_g²⟩ and ⟨T⟩ we obtain reasonable fits even for N_min = …:

⟨R_e²⟩/N^{3/2}:  a = …,  b = …,  Δ = …   (χ² = …,  DF = …,  level = …)
⟨R_g²⟩/N^{3/2}:  a = …,  b = …,  Δ = …   (χ² = …,  DF = …,  level = …)
⟨T⟩/N^{3/2}:     a = …,  b = …,  Δ = …   (χ² = …,  DF = …,  level = …)

Of course, the exponents Δ produced by the above fits should not be taken too seriously: they could well be phenomenological "effective exponents" that summarize the combined effects of two or more correction-to-scaling terms (e.g., the leading non-analytic correction N^{−Δ₁} plus the analytic correction N^{−1}) over some particular range of N. These fits are nevertheless useful in providing simple interpolation

[19] The purpose of writing here (N/…)^{−Δ} instead of N^{−Δ} is to reduce the correlation between the estimates of b and Δ. Of course, the value … can equally well be replaced by any reasonable value which is roughly in the middle of the N values represented by the data points.

formulae that summarize our data within error bars:

⟨R_e²⟩ = N^{3/2} [ … + … N^{−0.657} ]
⟨R_g²⟩ = N^{3/2} [ … + … N^{−0.557} ]
⟨T⟩   = N^{3/2} [ … + … N^{−0.581} ]

In particular, we can use these formulae to compare our data with other workers' data at different values of N (but only for N ≥ …).
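A fit of this three-parameter correction-to-scaling form can be sketched as follows. The pivot value N0 = 1000 and all coefficients below are illustrative stand-ins, not the paper's values; the sketch assumes numpy and scipy.

```python
import numpy as np
from scipy.optimize import curve_fit

N0 = 1000.0  # pivot near the middle of the N range (illustrative choice)

def ansatz(N, a, b, delta):
    # observable / N**(3/2) = a + b * (N / N0)**(-delta)
    return a + b * (N / N0) ** (-delta)

# Synthetic "data" generated from known parameters (not the paper's):
N = np.array([200.0, 500.0, 1000.0, 2000.0, 5000.0, 10000.0, 20000.0])
y = ansatz(N, 0.77, -0.05, 0.6)
sigma = np.full_like(N, 1e-4)

(a, b, delta), _ = curve_fit(ansatz, N, y, p0=[0.8, -0.1, 0.5], sigma=sigma)
```

Writing the correction as (N/N0)^{−Δ} rather than N^{−Δ}, as the footnote explains, decorrelates the estimates of b and Δ and makes the nonlinear fit much better behaved.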

The upshot is that the corrections to scaling are quite weak in the two-dimensional self-avoiding walk: we can barely see them at our level of precision. A serious study of corrections to scaling in this model would therefore require much higher statistics than are available here, at large but not-too-large values of N (say, N ≤ …). We have begun such a study, and will report the results separately.

The corrections to scaling on the amplitude ratios ⟨R_g²⟩/⟨R_e²⟩, ⟨R_m²⟩/⟨R_e²⟩ and Ψ are even weaker than on the original observables; this is because the original corrections to scaling all have the same sign. The two ratios of radii have no statistically significant corrections to scaling at our level of precision, even if one considers the true error bars to be … of those reported in Table …. The interpenetration ratio Ψ does have noticeable corrections to scaling at N ≤ …, but these corrections are very small (see Figure …). Fitting the data for N ≥ N_min to a constant, we obtain:

⟨R_g²⟩/⟨R_e²⟩ = …   (N_min = …,  χ² = …,  DF = …,  level = …)
⟨R_m²⟩/⟨R_e²⟩ = …   (N_min = …,  χ² = …,  DF = …,  level = …)
Ψ = …               (N_min = …,  χ² = …,  DF = …,  level = …)

The "theorists'" interpenetration ratio Ψ̄, defined in (…), is Ψ̄ = ….

Three Dimensions

In Table … we present our data for ⟨R_e²⟩_N, ⟨R_g²⟩_N, ⟨T⟩_N and the pivot-algorithm acceptance fraction f_N, for SAWs in dimension d = 3 in the range … ≤ N ≤ …. Most of these SAWs were generated using the pivot algorithm, using either dimerization or straight rods for initialization (see Appendix B for a discussion of the adequacy of thermalization); run lengths were between 10^6 and 10^7 pivots subsequent to thermalization. However, some data at N ≤ … were generated by pure dimerization (between 10^5 and … × 10^5 independent pairs of SAWs per run). The overlap T^{(1)(2)} was in most cases estimated using the Karp–Luby algorithm (Section …) with R = …; however, some runs at N ≤ … used a deterministic algorithm (Section …). Data tainted by the bug concerning ⟨T⟩ are again indicated by a … (see Section …).
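The pivot moves used to generate these walks can be sketched as follows. This is a minimal illustration on the simple-cubic lattice, not the authors' production code: it uses only a handful of the 48 octahedral lattice symmetries and a naive set-based self-avoidance check.

```python
import random

# A few representative simple-cubic lattice symmetries (the full
# octahedral group has 48 elements); each maps (x, y, z) to its image.
SYMMETRIES = [
    lambda x, y, z: (-x, -y, z),   # 180-degree rotation about z
    lambda x, y, z: (y, -x, z),    # 90-degree rotation about z
    lambda x, y, z: (x, z, -y),    # 90-degree rotation about x
    lambda x, y, z: (z, y, -x),    # 90-degree rotation about y
]

def pivot_move(walk):
    """Attempt one pivot move; return the new walk, or the old walk
    if the proposal is rejected as non-self-avoiding."""
    k = random.randrange(1, len(walk) - 1)     # interior pivot site
    g = random.choice(SYMMETRIES)
    px, py, pz = walk[k]
    tail = []
    for (x, y, z) in walk[k + 1:]:
        # Apply the symmetry, centred at the pivot site, to the tail.
        dx, dy, dz = g(x - px, y - py, z - pz)
        tail.append((px + dx, py + dy, pz + dz))
    proposal = walk[: k + 1] + tail
    if len(set(proposal)) == len(proposal):    # self-avoidance check
        return proposal
    return walk

# Initialize with a straight rod, one of the initializations used above:
walk = [(i, 0, 0) for i in range(101)]
for _ in range(1000):
    walk = pivot_move(walk)
```

Since each symmetry is an orthogonal lattice map, every accepted proposal is again a nearest-neighbour self-avoiding walk of the same length; only the acceptance fraction f and the autocorrelation behaviour depend on N.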

Table … shows the results for the universal amplitude ratios ⟨R_g²⟩/⟨R_e²⟩ and Ψ. The error bars are determined using the triangle inequality; they are probably overestimates by a factor of ≈ ….

We began by fitting ⟨R_e²⟩, ⟨R_g²⟩, ⟨T⟩ and f to the pure power-law Ansatz A N^{power}. The exponents estimated from fits to ⟨R_e²⟩, ⟨R_g²⟩ and ⟨T⟩ are plotted as a function of N_min in Figure …; the exponent estimated from the fit to f is plotted in Figure …. Very strong corrections to scaling are apparent for all these observables: the exponent estimates do not appear to stabilize until one takes N_min ≈ … or more. Nevertheless, the estimates of ν do appear to be converging to a common value; in particular, hyperscaling appears to be satisfied.[20]

Another view of these same fits is presented in Figure …: here we plot the estimate of ν versus N_min^{−0.5}. The idea is that the correction-to-scaling exponent Δ₁ is predicted by RG calculations to be in the vicinity of 0.5 (see Section …); if this prediction is correct, then each set of estimates should fall roughly on a straight line (at least asymptotically as N_min → ∞). The data are consistent with this prediction, but the large fluctuations make clear that it will be difficult to get accurate estimates of Δ₁ by Monte Carlo.

Next we tried fitting ⟨R_e²⟩, ⟨R_g²⟩ and ⟨T⟩ to the Ansatz A N^{power}(1 + B N^{−Δ₁}), where power = 2ν for the squared radii and 3ν for ⟨T⟩ (we now assume hyperscaling). For each choice of fixed exponents ν and Δ₁, we determine A and B by weighted least squares and record the resulting χ²; we then ask which pairs (ν, Δ₁) lead to an acceptable χ². Here "acceptable" is taken to mean a significance level ≥ … for DF = n − 2 degrees of freedom, where n is the number of data points; this corresponds roughly to confidence limits of one standard deviation.
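The key computational point in this scan is that, once the exponents are fixed, the Ansatz A N^{power}(1 + B N^{−Δ₁}) is linear in the coefficients (c1, c2) = (A, A·B), so each grid point reduces to a weighted linear least-squares problem. A sketch with synthetic data (all numbers illustrative, not the paper's) might look like:

```python
import numpy as np
from scipy.stats import chi2

# Synthetic data drawn from the Ansatz with (2*nu, Delta1) = (1.1754, 0.56)
# and 0.1% error bars; illustrative only.
N = np.array([100.0, 200.0, 500.0, 1000.0, 2000.0, 5000.0])
y = 1.2 * N ** 1.1754 * (1 - 0.1 * N ** -0.56)
sigma = 1e-3 * y

def chisq(p, delta):
    """Weighted linear least squares for (c1, c2) = (A, A*B) at fixed
    exponents; returns the resulting chi^2."""
    X = np.column_stack([N ** p, N ** (p - delta)])
    w = 1.0 / sigma
    coef = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)[0]
    r = (y - X @ coef) / sigma
    return float(r @ r)

acceptable = []
for nu in (0.586, 0.5877, 0.589):
    for delta in (0.46, 0.56, 0.66):
        level = chi2.sf(chisq(2 * nu, delta), len(N) - 2)
        if level >= 0.16:            # roughly a one-standard-deviation cut
            acceptable.append((nu, delta))
```

The set of (ν, Δ₁) pairs surviving the significance-level cut is the "swath of acceptable pairs" referred to below.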

The results for ⟨R_g²⟩ and ⟨T⟩ are shown in Figure …. The result for ⟨R_e²⟩ is similar to that for ⟨R_g²⟩, but the band of allowed values is wider. For N_min = … there are no pairs (ν, Δ₁) that are satisfactory for both ⟨R_g²⟩ and ⟨T⟩. This means that at such small N, one needs more than a single correction-to-scaling term to fit both observables in a way compatible with hyperscaling and universality. On the other hand, for larger N_min one obtains a swath of acceptable pairs (ν, Δ₁), all contained in the ranges … ≤ ν ≤ … and … ≤ Δ₁ ≤ …. From this analysis it seems difficult to obtain much precision on Δ₁, but it does suggest that ν = … (subjective confidence limits).

Now suppose we impose ν = … and fit observable/N^{power} (where power = 2ν for the squared radii and 3ν for ⟨T⟩) to a + bN^{−Δ}, with a, b and Δ all variable. For N_min = … the estimates of Δ from these three fits are not consistent

[20] Note that the rigorous inequality dν ≥ 2Δ₄ − γ (see Theorem A.… and equations (A.…) in Appendix A) means that the limiting value of the upper pair of curves must be above or equal to the limiting value of the lower curve.

(this might be guessed from Figure …). However, for N_min = … they are consistent, and yield Δ = ….

If we now return to the Ansatz A N^{power}(1 + B N^{−Δ₁}) and take ν = … and Δ₁ = …, we obtain reasonable fits in ⟨R_e²⟩ for N_min ≥ …, in ⟨R_g²⟩ for N_min ≥ …, and in ⟨T⟩ for N_min ≥ …. Taking N_min = …, we obtain:

⟨R_e²⟩:  A = …,  B = …   (χ² = …,  DF = …,  level = …)
⟨R_g²⟩:  A = …,  B = …   (χ² = …,  DF = …,  level = …)
⟨T⟩:     A = …,  B = …   (χ² = …,  DF = …,  level = …)

At the very least, these fits provide simple interpolation formulae that summarize our data within error bars (but for N ≥ … only):

⟨R_e²⟩ = … N^{1.1754} + … N^{0.6154}
⟨R_g²⟩ = … N^{1.1754} + … N^{0.6154}
⟨T⟩   = … N^{1.7631} + … N^{1.2031}

If we take these fits seriously, we obtain the results

b^{(1)}_{R_g} / b^{(1)}_{R_e} = … ,    b^{(1)}_Ψ / b^{(1)}_{R_e} = …

for the universal ratios of correction-to-scaling amplitudes.

Another way to study the corrections to scaling is to look at an amplitude ratio which is known to tend to a nonzero constant as N → ∞, and fit it to a + bN^{−Δ} with a, b and Δ all variable. Two candidates are ⟨R_g²⟩/⟨R_e²⟩ and (assuming hyperscaling) Ψ. Unfortunately, the corrections to scaling in ⟨R_g²⟩/⟨R_e²⟩ are too weak to yield much information on Δ (see Table …): the amplitude b is within … of zero, and the error bars on Δ are enormous. However, we can obtain a reasonable estimate of the universal limiting value ⟨R_g²⟩/⟨R_e²⟩ = a. Note first that the unusually small χ² confirms our belief that the true error bars on ⟨R_g²⟩_N/⟨R_e²⟩_N are roughly … of those reported in Table …. If so, then the error bars on a, b and Δ in Table … should also be reduced by a factor of …. Making this adjustment, we conclude that

⟨R_g²⟩/⟨R_e²⟩ = … ± … ± … ;

here the first error bar represents systematic error due to uncontrolled corrections to scaling (subjective confidence limits), and the second error bar represents the adjusted statistical error (classical confidence limits).

The analysis of Ψ is much more favorable for estimating Δ, as the corrections to scaling are rather strong (Figure …). This is because the corrections to scaling on ⟨T⟩ and ⟨R_g²⟩ have opposite signs, as is evident from Figure …. A sample of the results is shown in Table …. Note that the estimates of b and Δ are quite stable as N_min is increased, and that the error bar on Δ is remarkably small. The unusually small χ² confirms our belief that the true error bars on Ψ are roughly … of those reported in Table …; if so, then the error bars on b and Δ should also be reduced by a factor of …. The excellent fit can be seen graphically in Figure …: Ψ is amazingly close to a linear function of N^{−0.56}. This confirms our expectation that the correction-to-scaling exponent Δ₁ is approximately 0.5, and suggests that it may be slightly higher, around 0.56; a fair estimate would be Δ₁ = … (subjective confidence limits). On the other hand, the Δ produced by the above fit could well be a phenomenological "effective exponent" that summarizes the combined effects of two or more correction-to-scaling terms (e.g., the leading correction N^{−Δ₁} plus the analytic correction N^{−1}) over some particular range of N. The only way to sort this out would be to use a larger N_min together with improved statistics. In any case, we can estimate the universal limiting value Ψ = ….

We have again made the factor-of-… adjustment in the statistical error bar. The "theorists'" interpenetration ratio Ψ̄, defined in (…), is Ψ̄ = …. Since hyperscaling is satisfied, this limiting value is nonzero.

Discussion

Comparison with Previous Numerical Studies

Two Dimensions

We first compared our raw data with those of other studies, making direct comparisons where the values of N match, and using the interpolation formulae (…) at other values of N.[21] We find excellent agreement, with one perplexing exception: when compared with the extremely precise data of …, our values for ⟨R_e²⟩ and ⟨R_g²⟩ are about … low at N = … and slightly low at N = …. Interestingly, these are precisely the runs that were performed

[21] Warning: We believe that the theoretical premises of … are erroneous, in that the authors fail to distinguish correctly which quantities are universal and which are nonuniversal; see Section … above. Nevertheless, the Monte Carlo data in … are useful.

using pure dimerization, while the runs at N ≥ … used the pivot algorithm. It is therefore conceivable that our dimerization program has a subtle bug which causes a very small error, perhaps one that decreases with N. However, we have been unable to find any such bug, and the small discrepancy could well be a statistical fluke.

Our estimates of the universal ratios ⟨R_g²⟩/⟨R_e²⟩ and ⟨R_m²⟩/⟨R_e²⟩ agree perfectly with the best published estimates, and have roughly the same precision. In particular, they confirm the beautiful conformal-invariance prediction of Cardy and Saleur (as corrected by …):

⟨R_g²⟩/⟨R_e²⟩ = … ,    ⟨R_m²⟩/⟨R_e²⟩ = …

Much more precise data will be available soon.

To our knowledge, there are no previous series-extrapolation or Monte Carlo estimates of Ψ for two-dimensional polymers.[22] But we can compare our value with field-theoretic estimates:

Ψ = …   Monte Carlo, square lattice, N ≤ … (this work)
Ψ = …   Edwards model through order ε², naive sum (Appendix D below)
Ψ = …   Edwards model through order ε², exploiting the d = 1 value (Appendix D below)

This last estimate is amazingly close to the correct value; it would be interesting to know whether this close agreement is an accident.

Three Dimensions

We first compared our raw data with those of other studies, making direct comparisons where the values of N match and using the interpolation formulae (…) at other values of N.[23] We find excellent agreement. We also compared our raw data with Nickel's interpolation-extrapolation formulae (equations … and …), which are based on exact enumeration of short chains combined with high-precision Monte Carlo data at N ≤ …. The agreement for ⟨R_e²⟩ and ⟨R_g²⟩ is excellent: most of the points differ by less than …, about … differ by between … and …, and none differ by more than …. However, for ⟨T⟩ there are some modest discrepancies: for N = … and N = …, our values are all between … and … lower than Nickel's, while for N ≥ … about half of our values are between … and … higher than Nickel's. We do not know whether either of these discrepancies is real, or what might be its cause. We suspect that it arises simply from statistical errors in Nickel's raw data for ⟨T⟩ (which might be several times as large as our errors); these would induce statistical errors in the coefficients of his

[22] Table … (see Section … below) gives the values of Ψ_N for N ≤ … from exact enumeration. But this series is much too short to be usefully extrapolated.

[23] Warning: Reference … uses an unconventional definition of ⟨R_g²⟩: it is … times ours. Note also that the data of references … lack error bars.

extrapolation formula, which would in turn induce correlated statistical errors at nearby values of N.

Next we compare our estimates of the critical exponents ν and Δ₁ with previous work:

ν = …             Monte Carlo, simple cubic lattice, N ≤ … (this work)
ν = …             Monte Carlo, simple cubic and BCC lattices, N ≤ …
ν = …             Monte Carlo, simple cubic lattice, N ≤ …
ν = …             Monte Carlo, simple cubic lattice, N ≤ …
ν = …             series extrapolation, various lattices
ν = …, Δ₁ = …     RG, n = 0 field theory[24]
ν = …, Δ₁ = …     RG, n = 0 field theory, g-expansion[24]
ν = …, Δ₁ = …     Edwards model through order z⁶[25]
ν = …, Δ₁ = …     Edwards model through order z⁶[25]

(all Monte Carlo error bars are one standard deviation). It is clear in retrospect that the earlier Monte Carlo estimates, based on shorter walks than those used here, were biased upwards due to corrections to scaling. This effect is seen very clearly in Figure …. Moreover, if we truncate our own data to lie in the same range of N as the previous studies, the resulting estimates of ν are almost identical to the quoted ones. We have now done what we believe is a careful analysis of the corrections to scaling, and we have obtained reasonably good control over them. We therefore think that the current estimate is correct within its claimed error bar. But we could be wrong! We do not know why the series-extrapolation estimates are also high, but it could arise from the same effect: perhaps the walks probed in these analyses (up to … steps) are simply not long enough to permit an adequate analysis of corrections to scaling, even using the most sophisticated differential-approximant methods.
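The upward bias from fitting short walks with a pure power law can be illustrated numerically. The model below uses the scaling form with a negative correction amplitude, with all parameter values purely illustrative; the point is only the mechanism, not the paper's numbers.

```python
import numpy as np

# Illustrative scaling form y = N**(2*nu) * (1 + b * N**(-delta));
# a negative b makes the effective exponent larger at small N.
nu_true, b, delta = 0.5877, -0.3, 0.56
N = np.logspace(1.5, 5, 30)              # N from about 30 to 100000
y = N ** (2 * nu_true) * (1 + b * N ** (-delta))

def fitted_nu(Nw, yw):
    """Pure power-law fit: half the slope of log y versus log N."""
    slope = np.polyfit(np.log(Nw), np.log(yw), 1)[0]
    return slope / 2.0

nu_short = fitted_nu(N[N < 1000], y[N < 1000])    # short-walk window
nu_long = fitted_nu(N[N > 10000], y[N > 10000])   # long-walk window
```

The short-walk window yields a visibly inflated exponent, while the long-walk window is already close to the true value; this is exactly the truncation effect described above.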

[24] The best estimate of g* is g* = …. Comparison of references … suggests that the uncertainty in g* may be of order …; this would add an extra uncertainty of order … to ν.

[25] The term of order z⁷ has recently been obtained, but the extended series has not yet, to our knowledge, been analyzed.

The amazing fact is that the Monte Carlo estimates of ν have stabilized at almost exactly the value predicted by the field-theoretic calculations, in their various equivalent forms (n = 0 field theory or Edwards model). The very high accuracy of the field-theoretic calculations is rather surprising to us, since this method is susceptible to serious (and quite possibly undetectable) systematic errors arising from a confluent singularity at the RG fixed point. The weakness of this effect may be related to the apparent fact that the confluent exponents Δ₁ and Δ₂ are both very close to an integer (namely, they are ≈ … and ≈ …).

On the other hand, for the confluent exponent Δ₁ the agreement between our Monte Carlo estimates and the field-theoretic predictions is not so good. Our Monte Carlo data for Ψ can be fit amazingly well, all the way down to N = …, by the Ansatz a + bN^{−Δ} with Δ = … (statistical errors only). This exponent differs considerably from the field-theoretic predictions, which all lie at Δ₁ ≈ …. We do not know which of these estimates (if either) is the correct one; indeed, there are good reasons to be distrustful of both. On the one hand, our Monte Carlo estimate could well be a phenomenological "effective exponent" that summarizes the combined effects of two or more correction-to-scaling terms (e.g., the leading correction N^{−Δ₁} plus the analytic correction N^{−1}) over some particular range of N. To test this, we would need to go to larger N_min and obtain significantly improved statistics. As things stand, our data do not rule out the field-theoretic prediction Δ₁ ≈ …, provided that one includes suitable subleading correction-to-scaling terms. On the other hand, the field-theoretic prediction for Δ₁ should also be taken with a grain of salt: it arises from the slope of the β function at the fixed point g = g*, and as Nickel showed long ago, this β function has, in addition to the desired term linear in g − g*, also non-analytic terms proportional to non-integer powers of g* − g (see Section … below for further discussion). But the numerical methods currently employed to extrapolate the perturbation series from g = 0 to g = g* assume, contrary to fact, that β(g) is regular at g = g*. The presence of terms (g* − g)^p with p non-integer could well lead to systematic errors in estimates of the slope β′(g*). It is therefore worth considering the possibility that the true Δ₁ is indeed closer to … than to …, and that the error bars on the field-theoretic estimates may be over-optimistic.

Let us also make a brief comparison with the experimental results on polymers in a good solvent; a more detailed analysis will appear elsewhere. The most systematic data on the static scaling behavior of high-molecular-weight polymers in a good solvent were obtained in the 1970s by three Japanese groups, using polystyrene in benzene. These data were reanalyzed, shortly after the appearance of the RG predictions for ν, by Cotton. After making corrections for polydispersity, Cotton obtains the value ν = …, in good agreement with the RG prediction, and now in good agreement with the Monte Carlo data as well. Unfortunately, we believe that this experimental value is unreliable, for the following reasons:

The samples of Yamamoto et al. have an unknown polydispersity; but this polydispersity is certainly not zero, as Cotton's analysis implicitly assumed.

The measurements of Fukuda et al. were apparently afflicted by a serious systematic error, of magnitude ranging from … to …, arising from the way that the solution concentration was measured (see footnote 26).

The data from the three different laboratories cover almost-non-overlapping ranges of molecular weight. As a result, the combined analysis of the three sets of data is highly susceptible to the effects of small systematic discrepancies in absolute calibration between the three laboratories, as well as to the fact that the three laboratories used slightly different temperatures (… °C for … and … °C for …).

The raw data lack error bars. As a result, it is impossible to distinguish corrections to scaling (or systematic experimental errors) from statistical errors. This distinction is crucial to extracting a reliable value for ν, as our analysis of Monte Carlo data has demonstrated.

A new generation of experiments, using modern ultra-sensitive light-scattering instrumentation and an optimal statistical analysis, could prove to be very exciting.

Next, let us compare our estimates of the universal ratio ⟨R_g²⟩/⟨R_e²⟩ with previous work:

⟨R_g²⟩/⟨R_e²⟩ = …   Monte Carlo, simple cubic lattice, N ≤ … (this work)
⟨R_g²⟩/⟨R_e²⟩ = …   Monte Carlo, simple cubic lattice, N ≤ …
⟨R_g²⟩/⟨R_e²⟩ = …   Monte Carlo, body-centered-cubic lattice, N ≤ …
⟨R_g²⟩/⟨R_e²⟩ = …   Monte Carlo, simple cubic lattice, N ≤ …
⟨R_g²⟩/⟨R_e²⟩ = …   Monte Carlo, simple cubic lattice, N ≤ …
⟨R_g²⟩/⟨R_e²⟩ = …   Monte Carlo, simple cubic lattice, N ≤ …
⟨R_g²⟩/⟨R_e²⟩ = …   Edwards model through order z⁴[27]

(all Monte Carlo error bars are one standard deviation). The agreement is excellent.

It is worth noting that ⟨R_g²⟩/⟨R_e²⟩ is non-monotonic as a function of N: for small N (N ≤ …), exact enumeration shows that ⟨R_g²⟩/⟨R_e²⟩ is considerably above its limiting value, but for larger N our data (e.g., Table …) show unequivocally that ⟨R_g²⟩/⟨R_e²⟩ is below its limiting value, and our fit

[26] Unfortunately, it will not suffice simply to replace the measurements of Fukuda et al. by those of Utiyama et al. on a subset of their samples, because these latter measurements may suffer from a systematic error of their own, arising from the extrapolation to zero scattering angle (see footnote …).

[27] There also exists a calculation in the Edwards model to second order in ε = 4 − d; but unfortunately this expansion is too ill-behaved to be extrapolated reliably to d = 3 (see p. …).

in Table … suggests that, as N → ∞, the approach to the limiting value is also from below. It would be interesting to know whether series-extrapolation techniques can sense this non-monotonicity (which lies beyond the present enumerations) and predict approximately the correct limiting value.

Next, let us compare our estimates of the limiting interpenetration ratio Ψ with previous work:

Ψ = …   Monte Carlo, simple cubic lattice, N ≤ … (this work)
Ψ = …   Monte Carlo, simple cubic lattice, N ≤ …
Ψ = …   Edwards model through order z², Padé (Appendix D below)
Ψ = …   Edwards model through order ε², naive sum (Appendix D below)
Ψ = …   Edwards model through order ε², exploiting d = 1 (Appendix D below)

(all Monte Carlo error bars are one standard deviation).[28] Our estimate is thus consistent with the very recent Monte Carlo estimate of Nickel et al., but is about … times as precise. The Edwards-model renormalization-group estimates are all of the right order of magnitude, but most of them are not terribly close to the correct answer. This is hardly surprising, in view of the very short perturbation series on which these estimates are based. The ε-expansion result augmented by exact information at d = 1 is, however, amazingly close to the correct value, both in d = 2 and in d = 3. It would be useful to obtain a better understanding of whether this is a coincidence or not, perhaps by calculating the O(ε³) term in ….

Finally, it is worth noting that the experimental values for Ψ also lie in the range … (Section …; Sections … and …) or, more optimistically, …; they average to ≈ …. However, the experimental measurements of Ψ are subject to all the problems noted earlier, as well as the danger of additional serious systematic errors arising from curvature in the extrapolation to zero concentration. This is a particularly severe problem for older studies, which used higher concentrations in order to compensate for the less-sensitive light-scattering instrumentation then available. Again, new experiments would be highly desirable.

As explained in Section …, the ratios of correction-to-scaling amplitudes b^{(1)}_{R_g}/b^{(1)}_{R_e} and b^{(1)}_Ψ/b^{(1)}_{R_e} are also universal. So ideally we would like to measure these amplitude ratios and compare them with the Edwards-model prediction b^{(1)}_{R_g}/b^{(1)}_{R_e} = …. Unfortunately, the correction-to-scaling amplitudes are extremely sensitive to the choice of exponents ν and Δ₁. Therefore, the only sensible comparison that can be made is to impose the same values of ν and Δ₁ as are used in …, namely ν = … and Δ₁ = …, and then perform a weighted least-squares fit to estimate the amplitudes. Unfortunately, the resulting estimates of the correction-to-scaling amplitudes are highly unstable as a function of N_min. If we take N_min = …, we obtain b^{(1)}_{R_e} = … and b^{(1)}_{R_g} = …, hence b^{(1)}_{R_g}/b^{(1)}_{R_e} = …, not a very useful estimate.

[28] Note also that Table … (see Section … below) gives the values of Ψ_N for N ≤ … from exact enumeration. But this series is much too short to be usefully extrapolated.

Finally, we can compare the estimates of the pivot-algorithm acceptance-fraction exponent p:

p = … (or perhaps lower)   Monte Carlo, simple cubic lattice, N ≤ … (this work)
p = …                      Monte Carlo, simple cubic lattice, N ≤ …
p = …                      Monte Carlo, tetrahedral lattice, N ≤ …
p = …                      Monte Carlo, simple cubic lattice, N ≤ …

There is a slight discrepancy between our results and the very careful work of Zifferer; we do not understand its origin. It is conceivable (though in our opinion unlikely) that the critical exponent for the pivot-algorithm acceptance fraction might vary from one three-dimensional lattice to another. More likely, the magnitude and even the sign of the corrections to scaling may vary radically between lattices.

The Sign of the Approach to Ψ*

For several decades, most work on the behavior of long-chain polymer molecules in dilute solution has been based on the two-parameter theory, in one or another of its variants: traditional (Flory-type),[29] pseudo-traditional (modified Flory-type),[30] or modern (continuous-chain-type).[31] All two-parameter theories predict that, in the limit of zero concentration, the mean-square end-to-end distance ⟨R_e²⟩, the mean-square radius of gyration ⟨R_g²⟩ and the interpenetration ratio Ψ depend on the degree of polymerization N (or equivalently on the molecular weight M = N M_monomer) according to

monomer

2

i AN F hR bN a

R

e

e

2

hR i AN F bN b

R

g

g

F bN c

where F F F are claimed to b e universal functions which each sp ecic two

R R

e g

parameter theory should predict and A and b are nonuniversal scale factors de

p ending on the p olymer solvent and temp erature but indep endent of N The

2 2 d 2d2

conventional notation is F F F z and z bN in h

R R

e g

R S S

spatial dimension d Moreover virtually all the theories and in particular the

mo dern continuouschainbased theories predict that F is a monotone increas

ing and concave function of its argument bN which approaches a limiting value

for d as bN

[29] See Yamakawa, Sections … (pp. …) and parts of Sections … (pp. …). See also des Cloizeaux and Jannink, Section … (pp. …).

[30] See Yamakawa, most of Section … (pp. …) and parts of Sections … (pp. …). See also Domb and Barrett.

[31] These theories take as their starting point the Edwards model of a weakly self-avoiding continuous chain. The Edwards model is also equivalent to the continuum φ⁴ field theory with n = 0 components. See des Cloizeaux and Jannink for a detailed treatment of the Edwards model.

But our Monte Carlo data for the self-avoiding walk (Figures … and …) show precisely the opposite behavior: Ψ is a decreasing (and convex) function of N, which approaches a limiting value Ψ* (= … in d = 2, = … in d = 3) as N → ∞. The same behavior was found by Nickel. The decrease of Ψ with N is strong in d = 2, and weak but noticeable in d = 3.

In retrospect, this behavior is heuristically almost obvious. Short self-avoiding walks behave roughly like hard spheres, i.e., Ψ is on the same order of magnitude as the hard-sphere value (see Table …). On the other hand, long self-avoiding walks are "fractal" objects, i.e., "thinner" than hard spheres, so one expects Ψ* < Ψ_hard-sphere. This is manifestly true in dimension d = …, where …, and in dimension d = 4 − ε, where Ψ* = O(ε); it is natural to expect, and we now confirm, that it is true also in d = 3 and, to a lesser extent, in d = 2. Of course, in d = 1 SAWs are hard spheres (i.e., hard rods), so Ψ* = Ψ_hard-sphere.[32] The monotonic decrease of Ψ*/Ψ_hard-sphere as a function of d is shown in the last line of Table …. If one now conjectures the simplest behavior, namely that Ψ_N is a monotonic function of N, it follows that in dimension 1 < d < 4, Ψ_N must approach its limiting value from above.

There is also experimental evidence that for real polymers in a sufficiently good solvent, the approach to Ψ* is from above, contrary to the two-parameter theory. This behavior was considered to be a perplexing "anomalous" effect, and various purported explanations were advanced.

The correct explanation, in our opinion, was given three years ago by Nickel (see also …): theories of two-parameter type are simply wrong. Indeed, they are wrong not merely because they make incorrect predictions, but for a more fundamental reason: they purport to make "universal" predictions for quantities that are not in fact universal.[33] Two-parameter theories predict, among other things, that Ψ is a universal function of the expansion factor α_S² ≡ ⟨R_g²⟩/⟨R_g²⟩_θ; in particular, Ψ is claimed to depend on molecular weight and temperature only through the particular combination α_S²(M, T). This prediction is quite simply incorrect, both for model systems and for real polymers. Indeed, even the sign of the deviation of Ψ from the limiting value Ψ* is not universal.

All this has a very simple renormalization-group explanation, so it is surprising that it was not noticed earlier. As mentioned already in Section …, standard RG arguments predict, for any real or model polymer chain, the asymptotic behavior

⟨R_e²⟩ = A_{R_e} N^{2ν} [1 + b^{(1)}_{R_e} N^{−Δ₁} + …]   (a)
⟨R_g²⟩ = A_{R_g} N^{2ν} [1 + b^{(1)}_{R_g} N^{−Δ₁} + …]   (b)
Ψ = Ψ* [1 + b^{(1)}_Ψ N^{−Δ₁} + …]                        (c)

[32] In d = 1, Ψ_N is essentially constant with N: it differs from Ψ* only by corrections of order N^{−1}, arising from the discreteness of the chain. The exact formula is Ψ_N = ….

[33] More precisely, two-parameter theories in which the scale factor A is independent of temperature; in … these are termed "strong" two-parameter theories.

as N → ∞ at fixed temperature T > T_θ. The critical exponents ν and Δ₁ are universal. The amplitudes A_{R_e}, A_{R_g}, b^{(1)}_{R_e}, b^{(1)}_{R_g}, b^{(1)}_Ψ are nonuniversal; in fact, even the signs of the correction-to-scaling amplitudes b^{(1)}_{R_e}, b^{(1)}_{R_g}, b^{(1)}_Ψ are nonuniversal. However, the RG theory also predicts that the dimensionless amplitude ratios A_{R_g}/A_{R_e}, b^{(1)}_{R_g}/b^{(1)}_{R_e} and b^{(1)}_Ψ/b^{(1)}_{R_e} are universal.
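The content of this prediction is easy to illustrate numerically: the exponent Δ₁ and the limiting value Ψ* are universal, but the sign of the correction amplitude b^{(1)}_Ψ is not, so different microscopic families can approach the same Ψ* from opposite sides. All parameter values in this sketch are illustrative only.

```python
import numpy as np

PSI_STAR, DELTA1 = 0.25, 0.5      # illustrative "universal" numbers

def psi(N, b1):
    # Leading correction-to-scaling form; b1 is the nonuniversal amplitude.
    return PSI_STAR * (1.0 + b1 * N ** (-DELTA1))

N = np.array([100.0, 1000.0, 10000.0, 100000.0])
family_above = psi(N, +0.8)   # positive b1: approach from above (as for SAWs)
family_below = psi(N, -0.8)   # negative b1: approach from below
```

Both sequences converge to the same universal limit at the same universal rate, yet a two-parameter theory, which builds in a single universal scaling function, can reproduce at most one of the two signs of approach.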

Recently, however, several papers have appeared which attempt to explain the observed approach to Ψ* from above in terms of either an alleged "second branch" of the renormalized field theory, with renormalized coupling constant g greater than the fixed-point value g*, or an "extended" two-parameter theory containing an extra parameter. We confess that we have been unable to understand the conceptual basis of these papers. Here is our attempt to clarify what is going on.

We believe that critical phenomena are best understood in a Wilson-type (or Wilson-de Gennes-type) renormalization-group framework. By this we mean an RG map R acting on the infinite-dimensional space of Hamiltonians for a field theory (or polymer model) with some fixed ultraviolet cutoff (e.g., living on a lattice); the map R acts by integrating out the high-momentum (short-wavelength) degrees of freedom.[34] We wish to contrast this with a "field-theoretic" RG, which acts in a finite-dimensional space of continuum ("renormalized") field theories or polymer models, i.e., models with the ultraviolet cutoff already taken to infinity, by spatial dilation. A few paragraphs from now we will explain the field-theoretic RG in terms of the Wilson RG; and we will see that the reverse is not possible. But for the moment, let us simply proceed with the Wilson approach.

Figure …, taken from …, shows part of the Wilson renormalization-group flow for a cutoff field theory (or polymer model) in dimension d. Here H* is the Gaussian fixed point, which for d = 3 is also the fixed point controlling the theta-solvent behavior, while H*_GS is the nontrivial good-solvent (Wilson-Fisher) fixed point. (Please ignore for now the curves M_s, M_u and C.) More precisely, Figure … shows the flow on the critical surface, corresponding to correlation length ξ = ∞ in the cutoff field theory (or chain length N = ∞ in the polymer model). Noncritical models (i.e., ξ < ∞ or N < ∞) lie above the plane of the page. Both H* and H*_GS have unstable ("relevant") directions coming out of the page, and these trajectories lead to the infinite-temperature fixed point, which has ξ = 0 (or N = 0).

We must now distinguish two very different limiting situations in polymer theory:

(I) N → ∞ at fixed temperature T, where either (a) T > T_θ, (b) T = T_θ, or (c) T < T_θ.

34

For example for a continuousspace eld theory with ultraviolet cuto the map R could b e

integration over eld comp onents with momenta in the shell jpj for a lattice eld

theory R could b e passage to blo ck spins for a discrete p olymer chain R could b e decimation

of o ddnumbered sites along the chain

(II) N → ∞, T → T_θ with x ≡ N^φ (T − T_θ) fixed, where φ is a suitable crossover exponent.

Roughly speaking, case Ia corresponds to the good-solvent regime, while case II corresponds to the crossover regime near the theta point.[35]

Case Ia applies to any oneparameter family F of Hamiltonians parametrized by

the chain length N that transversally intersects the critical surface ie the plane

of the page at some p oint within the domain of attraction of H ie anywhere to

GS

the right of M For example the family F could intersect the critical surface at

s

P Q or R among many other places Then the critical exp onents and universal

amplitude ratios cf asso ciated with the limit N in the family

F are completely determined by the RG ow in an innitesimal neighborho o d of

H they are therefore the same for al l families intersecting the critical surface

GS

within the domain of attraction of H universality On the other hand the

GS

nonuniversal amplitudes arise from the entire history of the RG ow that takes F

they are therefore dierent for dierent families F For example a family to H

GS

crossing the critical surface at P or R would approach from b elow while a family

crossing at Q would approach from ab ove The nonuniversal amplitudes cannot

b e predicted except through detailed knowledge of the microscopic physics ie of

the family F In particular they cannot b e predicted by any coarsegrained

theory such as a renormalized eld theory or indeed by any simple mathematical

mo del

As will be explained below, the manifold $M_u$ (which extends also out of the plane of the page) corresponds to the continuum Edwards model, with the plane of the page corresponding to $z = \infty$. It is important to note that $M_u$ has no special status with respect to the good-solvent fixed point $H^*_{GS}$: it is merely one of many trajectories whose intersection with the critical surface is attracted to $H^*_{GS}$. The universal properties of polymer chains in a good solvent (case Ia) can indeed be extracted from the limit $z \to \infty$ of the continuum Edwards model, but they can be extracted equally well from the limit $N \to \infty$ of any family $F$ whose intersection with the critical surface is attracted to $H^*_{GS}$. One family may be more convenient for computation than another, but all have the same conceptual status.

Let us now explain the relation between the Wilson RG and continuum field theories. Let $H^*$ be any critical fixed point (for the moment it doesn't matter whether $H^*$ is Gaussian), and let $M_s$ (resp. $M_u$) be the stable (resp. unstable) manifold of $H^*$.[36] Continuum limits are obtained by taking a sequence of initial Hamiltonians $H_n$ approaching the stable manifold, and rescaling lengths by suitable factors. This rescaling is equivalent to applying the map $\mathcal{R}$ a suitable number $\ell_n$ of times. The low-energy effective Hamiltonians $\widetilde{H}_n \equiv \mathcal{R}^{\ell_n} H_n$ then tend to the unstable manifold (see Figure). Continuum field theories $F$ are thus in one-to-one correspondence with Hamiltonians $H$ on the unstable manifold: the correlation functions of $F$ at momenta $|p| \ll \Lambda$ are equal to the correlation functions of $H$ with cutoff $\Lambda$. This point of view has been emphasized by Wilson and Kogut and others.

[35] However, it is crucial to understand that cases I and II refer to families of limiting paths in the $(T,N)$ plane, not to regions or domains of the finite-$(T,N)$ plane. Failure to appreciate this distinction, which may at first seem rather pedantic, can lead to apparent paradoxes regarding the sign of the correction-to-scaling terms.

[36] For simplicity, let us assume that there are no marginal operators. The presence of marginal operators, such as the $\varphi^6$ operator at the Gaussian fixed point in $d = 3$, does not affect the fundamental conclusions, but merely induces multiplicative logarithmic corrections.

In particular, the Gaussian fixed point $H^*$ has, in dimension $d < 4$, a two-dimensional unstable manifold $M_u$: the two unstable directions correspond to $\varphi^2$ and $\varphi^4$ interactions, or, in polymer language, to a chain-length fugacity and a two-body self-interaction.[37] So $M_u$ corresponds to the manifold of superrenormalizable continuum $\varphi^4$ field theories, or, in polymer language, to the continuum Edwards model.

We can now understand the connection between the Wilson and field-theoretic renormalization groups.[38] The homogeneous field-theoretic RG equations describe how a family of continuum field theories is mapped into itself under spatial dilation. On the other hand, the continuum field theories are in one-to-one correspondence with Hamiltonians on the unstable manifold, and this correspondence takes spatial dilation into the RG map $\mathcal{R}$. Thus, the field-theoretic RG is nothing other than the Wilson RG restricted to the unstable manifold and then rewritten in terms of renormalized parameters.

Having understood this, we can now evaluate the attempts to explain the "wrong" sign of approach to $\Psi^*$ (or the analogue in liquid-gas critical points) in terms of an alleged second branch of the renormalized field theory, located at $g > g^*$.[39] The trouble is that, as far as we know, no such branch exists. As explained above, continuum field theories correspond to the unstable manifolds of critical fixed points. Thus, the alleged second branch could exist only if there were, lying somewhere to the right of $H^*_{GS}$, an as-yet-unknown critical fixed point $H^*_{new}$, part of whose unstable manifold is attracted to $H^*_{GS}$. There is, to our knowledge, no evidence whatsoever for the existence of such a fixed point.[40]

[37] We ignore here the marginal operator $(\nabla\varphi)^2$, which corresponds to a physically trivial rescaling of field strengths.

[38] This connection was worked out by Kupiainen and Sokal, and was quite possibly known to others as well; to our knowledge, however, it first appeared publicly only in the reference cited. Very similar ideas appeared earlier in work of Hughes and Liu.

[39] Of course, no such explanation is needed, because we have already given a complete and straightforward explanation in terms of the Wilson RG. But there would be no harm in giving an alternate explanation of the same phenomenon, provided that this explanation is correct.

[40] The nonexistence of such a branch of continuum field theories seems to be recognized by Krüger and Schäfer, who note (abstract, and text) that the strong-coupling branch implicitly relies on the existence of a finite segment size (ultraviolet cutoff). But, on the other hand, they insist that renormalized field theory can be used for $g > g^*$. We do not understand how these two statements can be reconciled. Perhaps Krüger and Schäfer want to study the cutoff field theories to the right of $H^*_{GS}$. But there is no distinguished one-parameter family of such theories (i.e. the putative extension of $M_u$); there is simply the infinite-dimensional space of all cutoff theories, with complicated Hamiltonians as well as simple ones, lying to the right of $H^*_{GS}$, all of which have

The conventional field-theoretic RG approach makes another error, in assuming that the $\beta$ function describing the field-theoretic RG flow is regular at $g = g^*$. In fact, as pointed out long ago by Nickel, the $\beta$ function should be expected to contain nonanalytic terms like $(g^* - g)^{1 + 1/\Delta_1}$ and $(g^* - g)^{\Delta_2/\Delta_1}$, and many others. This can easily be understood in our Wilson-type RG framework. The field-theoretic $\beta$ function describes the Wilson RG flow restricted to the unstable manifold $M_u$ and rewritten in terms of the renormalized parameter $g$. Now, $M_u$ has no special status at $H^*_{GS}$: like every other RG trajectory,[41] it approaches $H^*_{GS}$ tangent to the leading irrelevant direction; but, barring a miraculous coincidence, $M_u$ also has nonzero components, with respect to the nonlinear scaling fields at $H^*_{GS}$, in all of the subleading irrelevant directions. This induces nonanalytic terms $(g^* - g)^{\Delta_2/\Delta_1}, (g^* - g)^{\Delta_3/\Delta_1}, \ldots$ in the $\beta$ function. In addition, the analytic corrections to scaling at $H^*_{GS}$ induce nonanalytic terms like $(g^* - g)^{1 + 1/\Delta_1}, (g^* - g)^{1 + 2/\Delta_1}, \ldots$ in the $\beta$ function.

In summary, the field-theoretic $\beta$ function is nonanalytic at $g = g^*$, and is, as far as we know, not defined at all for $g > g^*$. These facts cannot be understood by manipulation of formal expressions within renormalized field theory; but they can be understood when the renormalized field theory is placed within the Wilson-type RG context.

Finally, this framework allows us to understand the special role played by the continuum Edwards model. This model has no special status at $H^*_{GS}$, but it does have special status at $H^*$: it is the unstable manifold $M_u$. For this reason, the continuum Edwards model describes the universal crossover scaling behavior in an infinitesimal region just above the theta temperature, namely the limit $N \to \infty$, $T \to T_\theta$ with $x \equiv N^{\phi}\,(T - T_\theta)$ fixed (case II above), where $\phi = 1/2$ for $d > 3$ and $\phi = 1/2$ modulo a logarithm for $d = 3$.[42] That is, the continuum Edwards model controls the behavior of any two-parameter family $F$ of Hamiltonians that transversally intersects the critical surface in some curve $C$, which in turn transversally intersects $M_s$ (see Figure). Since the theta point is beyond the scope of the present paper, we refer the reader to the literature for details.

(Footnote 40, continued:) equivalent conceptual status. The same objection applies to the extended two-parameter theory of Chen and Noolandi. Perhaps their extra parameter is intended to correspond to the coefficient of the leading irrelevant coupling at $H^*_{GS}$; but in that case it is merely an inordinately complicated restatement of the foregoing, and moreover it neglects the second and higher irrelevant couplings. Of course, one could consider the manifold corresponding to the vanishing of the second and higher irrelevant couplings (with respect to the nonlinear scaling fields at $H^*_{GS}$); but this manifold, on the left side of $H^*_{GS}$, does not coincide with $M_u$. Finally, we believe that the same objection applies to the second half of Nickel's paper, in connection with his recursion model. He seems to realize this, as he describes the upper branch in his simplified recursion model as "quasi-universal".

[41] Except for a measure-zero set of exceptional trajectories.

[42] By the latter expression we mean that in $d = 3$ the correct scaling variable is $x = N^{1/2} (\log N)^{4/11} (T - T_\theta)$.

Prospects for Future Work

Let us close this paper by mentioning briefly some interesting areas for future work.

Higher virial coefficients can be studied by the same methods that we have used here to study the second virial coefficient. For example, the third virial coefficient between molecules of types $i$, $j$, $k$ is

  $B_3^{(ijk)} \;=\; -\frac{1}{3} \sum_{s \in S_i} \sum_{\substack{s' \in S_j \\ x' \in Z^d}} \sum_{\substack{s'' \in S_k \\ x'' \in Z^d}} W_i(s)\, W_j(s')\, W_k(s'') \left[ e^{-V_{ij}((0,s),(x',s'))} - 1 \right] \left[ e^{-V_{ik}((0,s),(x'',s''))} - 1 \right] \left[ e^{-V_{jk}((x',s'),(x'',s''))} - 1 \right]$

(compare the analogous formula for $B_2$), where the interaction energy $V$ is given as usual. The ratio $B_3^{(NNN)} / \bigl[ B_2^{(NN)} \bigr]^2$ is expected to be universal for polymers in a good solvent, and it would be interesting to know its value. At present, only a crude field-theoretic estimate, based on first-order perturbation theory, is available.[43] It should be noted that this quantity plays a crucial role in the extraction of the second virial coefficient from experimental light-scattering data, due to the necessity of extrapolating to zero concentration.

The hit-or-miss algorithm (Section ) can easily be generalized to compute the $n$-th virial coefficient, given $n$ independent SAW samples. Moreover, the efficiency should remain reasonably good whenever hyperscaling holds. We do not know whether the Barrett and Karp-Luby algorithms can be generalized to virial coefficients of order $n \ge 3$. Finally, the deterministic Fourier method (Section ) can be applied to the third virial coefficient, and to at least some of the graphs for virial coefficients of order $n \ge 4$, namely those graphs that can be decomposed into series and parallel connections. For example, to compute $B_3$ one first uses the Fourier method to compute $\nu_{12}(x)$ and thus $I_{12}(x)$, and likewise for the pairs 13 and 23; one then computes the sum $\sum_{x',x''} I_{12}(x')\, I_{13}(x'')\, I_{23}(x'' - x')$ by passing again to Fourier space.
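To make the bookkeeping behind the Fourier method concrete, here is a small illustrative sketch (ours, not the authors' code): for finite sets $A, B \subset Z^2$, the overlap count $\nu(x) = |A \cap (B+x)|$ is the cross-correlation of the two indicator functions, so it can be accumulated for all translates $x$ at once. The direct pair loop below is $O(|A|\,|B|)$; an FFT of the indicator functions would reduce this to quasi-linear time in the bounding box.

```python
from collections import Counter

def overlap_counts(A, B):
    """nu(x) = |A intersect (B + x)| for every translate x, as a Counter.
    This is the cross-correlation of the indicator functions of A and B:
    each pair (a, b) contributes 1 to the translate x = a - b."""
    nu = Counter()
    for ax, ay in A:
        for bx, by in B:
            nu[(ax - bx, ay - by)] += 1
    return nu

# Example: two 2-site "rods" on the x axis.
A = {(0, 0), (1, 0)}
B = {(0, 0), (1, 0)}
nu = overlap_counts(A, B)
# Only the translates (0,0) and (+-1,0) give a nonempty intersection:
# nu[(0,0)] == 2, nu[(1,0)] == 1, nu[(-1,0)] == 1
```

Note that `sum(nu.values())` always equals $|A|\,|B|$, while `len(nu)` is the translate count $T(A,B)$ used below.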

Another interesting extension of our work would be to study the Domb-Joyce model of weakly self-repelling walks, in which each self-intersection is penalized by a relative weight $e^{-\lambda}$ ($0 \le \lambda \le \infty$); obviously this model interpolates between ordinary random walks ($\lambda = 0$) and SAWs ($\lambda = \infty$). This model can be studied in two very different limits:

(Ia) $N \to \infty$ at fixed $\lambda > 0$;

(II) $N \to \infty$, $\lambda \to 0$ with $x \equiv N^{2 - d/2}\, \lambda$ fixed.
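For concreteness, the Domb-Joyce weight of a given walk can be computed directly from the site-visit multiplicities. This is a minimal sketch of ours, under the convention that each pair of steps occupying the same site costs a factor $e^{-\lambda}$:

```python
import math
from collections import Counter

def domb_joyce_weight(walk, lam):
    """Weight exp(-lam * m), where m is the number of pairs i < j
    with walk[i] == walk[j] (the number of self-intersections)."""
    counts = Counter(walk)
    m = sum(c * (c - 1) // 2 for c in counts.values())
    return math.exp(-lam * m)
```

For $\lambda = 0$ every walk has weight 1 (ordinary random walk); as $\lambda \to \infty$ only walks with no self-intersection (SAWs) retain nonzero weight, recovering the interpolation described above.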

Case Ia is the good-solvent regime, and the universal quantities should take exactly the same values as for SAWs, for any $\lambda > 0$. On the other hand, the nonuniversal quantities will manifestly be $\lambda$-dependent: we expect the approach to $\Psi^*$ to be from below for small $\lambda$ and from above for large $\lambda$. In particular, there will be some intermediate value $\lambda^*$ for which the leading correction to scaling will vanish. In the figure, this would be achieved e.g. by a family of Hamiltonians that crosses the critical surface about halfway between $P$ and $Q$. Of course, subleading corrections to scaling will still be present, but these will be suppressed by $N^{-\Delta_2}$, which decays much more rapidly than $N^{-\Delta_1}$. By studying the Domb-Joyce model systematically in the $(\lambda, N)$ plane, it should be possible to locate $\lambda^*$ empirically, and to exploit this knowledge to obtain estimates of critical exponents and other universal quantities that are less contaminated by the effects of corrections to scaling. (We thank Jim Barrett and Bernie Nickel for this observation.) This is in the same spirit as the analyses of series expansions by Chen, Fisher and Nickel, who found that, by varying an irrelevant parameter and imposing universality, they could reduce the sensitivity of exponent estimates to the effects of corrections to scaling.

Case II in the Domb-Joyce model should be given (rather trivially) by the continuum Edwards model. This fact is hardly in doubt, but the results could serve as a useful nonperturbative check on the reliability of the extrapolations to large $z$ in the Edwards model. This is a warm-up for the problem of the crossover scaling behavior near the theta point.

The deepest and most difficult extension of this work would be to SAWs with nearest-neighbor attraction, for the purpose of studying the crossover scaling behavior near the theta point. In dimension $d = 3$, the crossover scaling functions are predicted to be given exactly (modulo two nonuniversal scale factors) by the continuum Edwards model. In dimension $d = 2$, there is as yet no theoretical prediction for the crossover scaling functions, but there are some predictions for critical exponents. It is a highly nontrivial problem to develop Monte Carlo algorithms that work well near the theta point; the pivot algorithm taken alone does not work terribly well in this regime.

[43] See des Cloizeaux and Jannink, where this ratio is expressed in terms of their functions $h(z)$ and $g(z)$ (compare their equations).

A  Some Geometrical Theorems

In this appendix we prove some geometrical bounds on $T(A,B)$, $|A \cap B|$ and $U(A,B)$ that played a role in earlier sections of this paper. When averaged over pairs of independent SAWs of length $N$, these bounds show that

(a) $\langle T \rangle \le \mathrm{const} \times N^{d\nu}$;

(b) $\langle T \rangle \ge \mathrm{const} \times N^{2\nu}$;

(c) $\langle U \rangle \ge \mathrm{const} \times \log N$;

modulo the usual assumptions on the scaling of individual SAWs. In particular, bounds (a) and (b) together prove (modulo these assumptions) hyperscaling for SAWs in dimension $d = 2$.

A.1  Theorems and Proofs

Let us begin by defining several measures of the size of a set $A \subset Z^d$.

The span in the $\alpha$-th coordinate direction:

  $S_\alpha(A) \;\equiv\; \max_{x \in A} e_\alpha \cdot x \;-\; \min_{x \in A} e_\alpha \cdot x,$

where $e_\alpha$ is the unit vector along the $\alpha$ axis ($1 \le \alpha \le d$).

The Euclidean diameter:

  $\mathrm{diam}_2(A) \;\equiv\; \max_{x, x' \in A} \|x - x'\|_2,$

where $\|\cdot\|_2$ denotes the Euclidean norm.

The sup-norm diameter:

  $\mathrm{diam}_\infty(A) \;\equiv\; \max_{x, x' \in A} \|x - x'\|_\infty \;=\; \max_{1 \le \alpha \le d} S_\alpha(A),$

where $\|x\|_\infty \equiv \max_{1 \le \alpha \le d} |e_\alpha \cdot x|$.

Let $R$ be a positive real number. A set $A \subset Z^d$ is said to be $R$-connected if, for all pairs $x, x' \in A$, there exists a sequence $x = x_0, x_1, \ldots, x_n = x'$ of points in $A$ such that $\|x_i - x_{i-1}\|_2 \le R$ for $1 \le i \le n$. A set is said to be connected if it is $\sqrt{2}$-connected. (Note that this allows diagonal as well as nearest-neighbor connections.)

Finally, we let $\Phi_d(R)$ be the maximum number of lattice points in a closed Euclidean ball of radius $R$, namely

  $\Phi_d(R) \;\equiv\; \sup_{x_0 \in \mathbb{R}^d} \bigl| \{ x \in Z^d \colon \|x - x_0\|_2 \le R \} \bigr|.$

Note that $\Phi_d(R) = v_d R^d + O(R^{d-1})$ as $R \to \infty$, where $v_d = \pi^{d/2} / \Gamma(1 + d/2)$ is the volume of the unit ball in $\mathbb{R}^d$.
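For a ball centered at a lattice point (a lower bound on the supremum over centers $x_0$), the count in the definition of $\Phi_d(R)$ is easy to enumerate. A sketch of ours in $d = 2$:

```python
import math

def lattice_ball_count(R, center=(0.0, 0.0)):
    """Number of points of Z^2 within Euclidean distance R of `center`.
    Scanning centers over a fine grid would approximate the supremum
    defining Phi_2(R); a lattice-point center already gives a lower bound."""
    cx, cy = center
    r = int(math.ceil(R)) + 1
    count = 0
    for x in range(int(math.floor(cx)) - r, int(math.floor(cx)) + r + 1):
        for y in range(int(math.floor(cy)) - r, int(math.floor(cy)) + r + 1):
            if (x - cx) ** 2 + (y - cy) ** 2 <= R ** 2:
                count += 1
    return count

# v_2 R^2 = pi R^2 is the leading behavior: for R = 20 the count at a
# lattice-point center is 1257, within O(R) of pi * 400 ~ 1256.6.
```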

Theorem A.1  Let $A, B \subset Z^d$. Then

  $T(A,B) \;\le\; \prod_{\alpha=1}^{d} \bigl[ S_\alpha(A) + S_\alpha(B) + 1 \bigr]$   (A.1)

  $T(A,B) \;\le\; \Phi_d\bigl( \mathrm{diam}_2(A) + \mathrm{diam}_2(B) \bigr)$   (A.2)

Theorem A.2  Let $A, B$ be $R$-connected subsets of $Z^d$ ($d \ge 2$). Let $G_d \subset O(d)$ be the orthogonal symmetry group of $Z^d$, so that $|G_d| = 2^d\, d!$. Then

  $\frac{1}{|G_d|} \sum_{g \in G_d} T(A, gB) \;\ge\; \frac{1}{2R^2}\, \mathrm{diam}_\infty(A)\, \mathrm{diam}_\infty(B)$   (A.3)

Theorem A.3  Let $A, B \subset Z^d$ be nonempty finite sets. Then

  $U(A,B) \;\ge\; 2 \sum_{k=1}^{N_{min}-1} \frac{1}{k} \;+\; \frac{N_{max} - N_{min} + 1}{N_{min}}$   (A.4a)

  $U(A,B) \;\ge\; \log N_{max} + \log N_{min}$   (A.4b)

where $N_{min} \equiv \min(|A|,|B|)$ and $N_{max} \equiv \max(|A|,|B|)$. The first inequality is the best possible in terms of $N_{min}$ and $N_{max}$.
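The rod configuration that saturates the first inequality is easy to check numerically. The sketch below (our illustration, not part of the proof) computes $U(A,B) = \sum_x 1/\nu(x)$ directly from the definition for two collinear rods and compares it with the right-hand side of (A.4a):

```python
from collections import Counter

def U(A, B):
    """U(A,B) = sum, over translates x with A intersect (B+x) nonempty,
    of 1/|A intersect (B+x)|.  Here A and B are sets of integers (1-d sites)."""
    nu = Counter()
    for a in A:
        for b in B:
            nu[a - b] += 1
    return sum(1.0 / v for v in nu.values())

N1, N2 = 9, 5                 # N_max and N_min
A = set(range(N1))            # rod of N1 consecutive sites
B = set(range(N2))            # rod of N2 consecutive sites
lhs = U(A, B)
rhs = 2 * sum(1.0 / k for k in range(1, N2)) + (N1 - N2 + 1) / N2
# For parallel rods the bound (A.4a) holds with equality: lhs == rhs.
```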

Proof of Theorem A.1.  These bounds follow immediately from the fact that the difference set $A - B \equiv \{x - y \colon x \in A,\, y \in B\}$ can be contained in a rectangular parallelepiped (or a sphere) of the indicated size.

Proof of Theorem A.2.  Let us consider the case $d = 2$ first. Fix $g \in G_2$. There exist $I \ge 1$ and $x_1, \ldots, x_I \in A$ such that $\|x_i - x_{i-1}\|_2 \le R$ for $2 \le i \le I$, and $\|x_I - x_1\|_2 \ge \mathrm{diam}_\infty(A)$. Similarly, there exist $J \ge 1$ and $y_1, \ldots, y_J \in gB$ such that $\|y_j - y_{j-1}\|_2 \le R$ for $2 \le j \le J$, and $\|y_J - y_1\|_2 \ge \mathrm{diam}_\infty(gB) = \mathrm{diam}_\infty(B)$. Then a lemma of Aizenman implies that

  $T(A, gB) \;\ge\; \frac{1}{R^2}\, \|x_I - x_1\|_2\, \|y_J - y_1\|_2\, |\sin \angle(x_I - x_1,\, y_J - y_1)| \;\ge\; \frac{1}{R^2}\, \mathrm{diam}_\infty(A)\, \mathrm{diam}_\infty(B)\, |\sin \angle(x_I - x_1,\, y_J - y_1)|,$

where $\angle(v,w)$ is the angle that the vector $w$ makes with respect to the vector $v$. Let $g_+ \in G_2$ be the operator of rotation by $90^\circ$. Then, for any vectors $v$ and $w$, $\angle(v, g_+ w) = \angle(v,w) + 90^\circ$, so

  $|\sin \angle(v,w)| + |\sin \angle(v, g_+ w)| \;=\; |\sin \angle(v,w)| + |\cos \angle(v,w)| \;\ge\; 1.$

From this we see that

  $T(A, gB) + T(A, g_+ g B) \;\ge\; \frac{1}{R^2}\, \mathrm{diam}_\infty(A)\, \mathrm{diam}_\infty(B).$

The theorem for $d = 2$ now follows by averaging over all $g \in G_2$ and dividing both sides by 2.

Now consider the case $d \ge 3$. Fix $g \in G_d$. Choose indices $\alpha$ and $\beta$ so that $S_\alpha(A) = \mathrm{diam}_\infty(A)$ and $S_\beta(gB) = \mathrm{diam}_\infty(B)$. If $\alpha \ne \beta$, let $P$ be the orthogonal projection of $Z^d$ onto the $\alpha$-$\beta$ coordinate plane; if $\alpha = \beta$, let $\beta'$ be any element of $\{1, \ldots, d\} \setminus \{\alpha\}$, and let $P$ be the orthogonal projection onto the $\alpha$-$\beta'$ coordinate plane. Let $h_+ \in G_d$ be the rotation by $90^\circ$ in this coordinate plane, which leaves all the other coordinates fixed; note that $P h_+ = g_+ P$, where $g_+ \in G_2$ was defined above. Projection cannot increase the number of distinct difference vectors, so $T(A, gB) \ge T(PA, P g B)$; moreover, $PA$ and $P g B$ are again $R$-connected, and their sup-norm diameters are at least $\mathrm{diam}_\infty(A)$ and $\mathrm{diam}_\infty(B)$, respectively. We now use the result for $d = 2$:

  $T(A, gB) + T(A, h_+ g B) \;\ge\; T(PA, P g B) + T(PA, P h_+ g B) \;\ge\; \frac{1}{R^2}\, \mathrm{diam}_\infty(PA)\, \mathrm{diam}_\infty(P g B) \;\ge\; \frac{1}{R^2}\, \mathrm{diam}_\infty(A)\, \mathrm{diam}_\infty(B).$

The theorem now follows by averaging, as in the $d = 2$ case.

We remark that some strengthened versions of Theorem A.2, while quite plausible at first sight, are wrong. For example, we at first thought that, for connected sets (and in particular walks) in $Z^2$,

  $T(A,B) \;\ge\; \mathrm{const} \times S_1(A)\, S_2(B).$

This is false, since if $A$ and $B$ are both $N$-step self-avoiding walks from $(0,0)$ to approximately $(N/2, N/2)$ that stay in a diagonal strip $|x_1 - x_2| \le \mathrm{const}$, then $T(A,B) \le \mathrm{const} \times N$ while $S_1(A)\, S_2(B) \approx N^2/4$. Thus one must average over rotations in some way. We had also conjectured that, for any sets $A, B \subset Z^2$,

  $T(A,B) + T(A, g_+ B) \;\ge\; C\, |\mathrm{Proj}_1 A|\, |\mathrm{Proj}_2 B|,$

where $\mathrm{Proj}_\alpha$ is the projection onto the $x_\alpha$ coordinate axis, $g_+$ is rotation by $90^\circ$, and $C$ is a constant that is independent of the connectedness $R$. A counterexample to this conjecture was found for us by G.L. O'Brien, as follows. Let $M$ be a large integer, and let

  $A \;=\; B \;=\; \{ (kM + l,\; lM - k) \colon 1 \le k, l \le M \}.$

This is a large sparse square, slightly tilted. Then $|\mathrm{Proj}_1 A| = |\mathrm{Proj}_2 B| = M^2$, but $T(A,B) + T(A, g_+ B) \le \mathrm{const} \times M^2$. We still do not know whether the conjectured bound holds if its left-hand side is replaced by the rotation average $\frac{1}{|G_2|} \sum_{g \in G_2} T(A, gB)$ appearing in (A.3) with $d = 2$.

Proof of Theorem A.3.  Throughout this proof we shall assume, without loss of generality, that $N_1 \equiv |A| \ge N_2 \equiv |B|$.

To show that the first inequality is the best possible, let $A$ (resp. $B$) consist of $N_1$ (resp. $N_2$) consecutive sites on the $x_1$ axis, so that $A$ and $B$ are parallel rods. Then it is easy to check that the first inequality holds as an equality.

Let $v$ be a vector in $\mathbb{R}^d$ such that the inner-product mapping $x \mapsto v \cdot x$ is one-to-one on $A \cup B$. (For example, the coordinates of $v$ could be $d$ irrational numbers that are linearly independent over the rationals.) Denote the elements of $A$ by $a_1, \ldots, a_{N_1}$, ordered so that $v \cdot a_i < v \cdot a_{i+1}$ for every $i$; similarly, denote the elements of $B$ by $b_1, \ldots, b_{N_2}$, ordered so that $v \cdot b_j < v \cdot b_{j+1}$ for every $j$.

We shall use the following observation:

Observation 1.  If $a_i - b_{N_2} = a_l - b_m$ for some $i, l, m$, then $l \le i$.

To prove this, note that the hypothesis implies that $a_l - a_i = b_m - b_{N_2}$, and so $v \cdot a_l - v \cdot a_i = v \cdot b_m - v \cdot b_{N_2} \le 0$. Therefore $v \cdot a_l \le v \cdot a_i$, which implies $l \le i$. The proof of the following observation is exactly analogous:

Observation 2.  If $a_{N_1} - b_j = a_l - b_m$ for some $j, l, m$, then $m \le j$.

We now claim that, for any $i$,

  $\nu(a_i - b_{N_2}) \;\le\; i.$   (A.5a)

To prove this, observe that $\nu(x)$ equals the number of distinct $a_l \in A$ such that $a_l - x \in B$. But if $a_l - x \in B$ with $x = a_i - b_{N_2}$, then $a_i - b_{N_2} = a_l - b_m$ for some $m$, so $l \le i$ by Observation 1. This proves (A.5a). Next we can use Observation 2 in an analogous way to prove that, for any $j$,

  $\nu(a_{N_1} - b_j) \;\le\; j.$   (A.5b)

Now define the following subsets of $A - B$:

  $S_1 \;\equiv\; \{ a_i - b_{N_2} \colon 1 \le i < N_2 \}$

  $S_2 \;\equiv\; \{ a_{N_1} - b_j \colon 1 \le j < N_2 \}$

  $S_3 \;\equiv\; \{ a_k - b_{N_2} \colon N_2 \le k \le N_1 \}$

It follows from Observation 1 that $S_1$ and $S_2$ are disjoint, and from Observation 2 that $S_2$ and $S_3$ are disjoint; trivially, $S_1$ and $S_3$ are disjoint. Therefore

  $U(A,B) \;\ge\; \sum_{x \in S_1} \frac{1}{\nu(x)} + \sum_{x \in S_2} \frac{1}{\nu(x)} + \sum_{x \in S_3} \frac{1}{\nu(x)} \;\ge\; 2 \sum_{k=1}^{N_2 - 1} \frac{1}{k} \;+\; \frac{N_1 - N_2 + 1}{N_2},$

where the last inequality uses (A.5a), (A.5b) and the trivial bound $\nu(x) \le N_2$. This proves (A.4a). Finally, a simple comparison of the Riemann sum to the integral shows that $\sum_{k=1}^{n-1} 1/k \ge \log n$; together with $(N_1 - N_2 + 1)/N_2 \ge \log(N_1/N_2)$ (which follows from $x - 1 \ge \log x$ applied to $x = N_1/N_2$), this yields (A.4b).

A.2  Application to SAWs

Now let us apply these bounds to the case in which $A$ and $B$ are independent SAWs $\omega^{(1)}, \omega^{(2)}$ of length $N$. We first note that the spans $S_1, \ldots, S_d$ and the diameters $\mathrm{diam}_2$ and $\mathrm{diam}_\infty$ of either one of these SAWs are expected to scale like $N^\nu$, in the sense that

  $\bigl\langle S_1^{k_1} \cdots S_d^{k_d}\, (\mathrm{diam}_2)^{l}\, (\mathrm{diam}_\infty)^{m} \bigr\rangle \;\approx\; N^{\nu (k_1 + \cdots + k_d + l + m)}$   (A.6)

for any exponents $k_1, \ldots, k_d, l, m \ge 0$. In particular, this holds if the probability distribution of the SAW, with lengths rescaled by $N^\nu$, converges weakly to some probability measure on a space of continuum chains, with respect to which the spans have finite nonzero moments.

Assuming such scaling, it follows that $\langle T \rangle_N / N^{d\nu}$ is bounded as $N \to \infty$. Proof: By Theorem A.1 we have

  $T(\omega^{(1)}, \omega^{(2)}) \;\le\; \prod_{\alpha=1}^{d} \bigl[ S_\alpha(\omega^{(1)}) + S_\alpha(\omega^{(2)}) + 1 \bigr],$

and from this and (A.6) it easily follows that

  $\langle T(\omega^{(1)}, \omega^{(2)}) \rangle \;\equiv\; \frac{1}{c_N^2} \sum_{\omega^{(1)}, \omega^{(2)} \in \mathcal{S}_N} T(\omega^{(1)}, \omega^{(2)}) \;\le\; \mathrm{const} \times N^{d\nu}.$   (A.7)

Q.E.D. Assuming also the usual scaling for the radius of gyration, it follows that $\Psi_N$ is bounded as $N \to \infty$.

Theorem A.2 also implies that

  $\langle T(\omega^{(1)}, \omega^{(2)}) \rangle \;\ge\; \mathrm{const} \times N^{2\nu}$   (A.8)

in any dimension $d$. This may be deduced as follows. Since $\mathcal{S}_N$, the set of all $N$-step SAWs which start at the origin, is invariant under lattice symmetries, we have

  $\langle T(\omega^{(1)}, \omega^{(2)}) \rangle \;=\; \frac{1}{c_N^2} \sum_{\omega^{(1)}, \omega^{(2)} \in \mathcal{S}_N} \frac{1}{|G_d|} \sum_{g \in G_d} T(\omega^{(1)}, g\, \omega^{(2)}) \;\ge\; \frac{1}{c_N^2} \sum_{\omega^{(1)}, \omega^{(2)} \in \mathcal{S}_N} \frac{1}{2}\, \mathrm{diam}_\infty(\omega^{(1)})\, \mathrm{diam}_\infty(\omega^{(2)}) \;=\; \frac{1}{2}\, \bigl\langle \mathrm{diam}_\infty\, \omega \bigr\rangle_N^2,$

using Theorem A.2 with $R = 1$ (a SAW is 1-connected); combining this with (A.6) gives (A.8).

Lastly, we observe that the scaling assumption (A.6) implies that hyperscaling holds in two dimensions. Indeed, equations (A.7) and (A.8) together imply that

  $\langle T(\omega^{(1)}, \omega^{(2)}) \rangle \;\approx\; N^{2\nu}$  when $d = 2$.   (A.9)

B  Adequacy of Thermalization in the Pivot Algorithm

As pointed out in Section , the initialization of the pivot algorithm is a highly nontrivial issue. For not-too-large $N$ (up to a few thousand, at least) one can use the dimerization algorithm to produce a perfect equilibrium start. However, for very large $N$, dimerization is unfeasible, and it is necessary to thermalize the system by discarding the first $n_{disc} \gg \tau_{exp} \sim N/f$ iterations. But this is painful, because $\tau_{exp}$ is a factor $\sim N$ larger than $\tau_{int,A}$ for global observables $A$, and for very large $N$ the CPU time of the algorithm could end up being dominated by the thermalization. One is therefore tempted to cut corners in the choice of $n_{disc}$. In this appendix we want to illustrate how acquiescing to this temptation can lead to disaster, in the form of large systematic errors.

The figure shows the temporal history of one of our pivot-algorithm runs at very large $N$. The initial configuration was a pair of parallel rods. We averaged the observables $R_e^2$ and $T$ over bins of $10^5$ pivot iterations each, and plotted the resulting averages for the first bins of the run. Clearly, severe initialization bias is present in $R_e^2$ and in $T$ until at least a time of order $10^5$. (Here we used the following seat-of-the-pants criterion: if the value at bin $n$ is larger than all subsequent bins and smaller than all preceding bins, then severe initialization bias is present at time $n$.) Significant initialization bias, enough to cause a systematic error comparable to the very small statistical error, could well be present at times twice or three times this. So, to be safe, we discarded in this case the first $\sim 10^6$ iterations; the total run length was a few times $10^6$ iterations, so a little less than half of the run was discarded.

In general, we expect the thermalization to require a time proportional to $\tau_{exp} \sim N/f$, where $f$ is the pivot-algorithm acceptance fraction. From the run shown in the figure, we infer that severe initialization bias is present until at least a time of order $N/f$. Therefore, to be safe, we have always discarded at least a generous multiple of $N/f$ iterations (except, of course, for runs using a dimerized start). We believe that this rule is sufficiently conservative to render the systematic errors arising from inadequate thermalization much smaller than the statistical errors.
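The seat-of-the-pants criterion above is trivial to automate. A sketch (our illustration, not the code used for the runs):

```python
def last_biased_bin(bins):
    """Return the largest index n such that bins[n] is strictly larger than
    every subsequent bin and strictly smaller than every preceding bin,
    i.e. the latest time at which severe initialization bias (a monotone
    approach from above) is still visible.  Returns -1 if no bin qualifies."""
    last = -1
    for n in range(len(bins)):
        if all(bins[m] < bins[n] for m in range(n + 1, len(bins))) and \
           all(bins[m] > bins[n] for m in range(n)):
            last = n
    return last

# A synthetic decaying-then-noisy series: bias is visible through index 3,
# after which the series fluctuates around its plateau.
series = [9.0, 7.0, 5.0, 4.0, 3.0, 3.2, 2.9, 3.1, 3.0]
```

A safe discard would then be a small multiple of the returned index (in bin units), in the spirit of the "twice or three times" margin used above.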

C  Some Statistical Subtleties

Let $A_1, \ldots, A_n$ be the time series for some observable $A$ in equilibrium. As is by now standard, the error bar on the sample mean $\bar{A} \equiv n^{-1} \sum_{i=1}^n A_i$ is

  $\mathrm{stddev}(\bar{A}) \;=\; \left[ \frac{2\, \tau_{int,A}\, C_{AA}(0)}{n} \right]^{1/2},$   (C.1)

where

  $C_{AA}(t) \;\equiv\; \langle A_s A_{s+t} \rangle - \langle A \rangle^2,$   (C.2)

  $\rho_{AA}(t) \;\equiv\; C_{AA}(t) / C_{AA}(0),$   (C.3)

  $\tau_{int,A} \;\equiv\; \frac{1}{2} \sum_{t=-\infty}^{\infty} \rho_{AA}(t).$   (C.4)

Since $\tau_{int,A}$ and $C_{AA}$ are not known, they must be estimated from the time series: we define

  $\hat{C}_{AA}(t) \;\equiv\; \frac{1}{n - |t|} \sum_{i=1}^{n - |t|} (A_i - \bar{A})(A_{i+|t|} - \bar{A}),$   (C.5)

  $\hat\rho_{AA}(t) \;\equiv\; \hat{C}_{AA}(t) / \hat{C}_{AA}(0),$   (C.6)

  $\hat\tau_{int,A} \;\equiv\; \frac{1}{2} \sum_{t=-M}^{M} \hat\rho_{AA}(t),$   (C.7)

where we must still choose the window $M$.

One reasonable approach to choosing $M$ is the automatic windowing algorithm: choose $M$ to be the smallest integer such that $M \ge c\, \hat\tau_{int,A}(M)$, for a suitable window factor $c$. If $\rho_{AA}(t)$ were roughly a pure exponential, then it would suffice to take e.g. $c = 5$, since $e^{-5} \ll 1$. However, for global observables $A$ in the pivot algorithm, $\rho_{AA}(t)$ is expected to be very slowly decaying: after a brief initial decay, one expects $\rho_{AA}(t) \sim t^{-q}$ with some power $q < 1$, up until a time of order $\tau_{exp} \sim N/f$, after which time $\rho_{AA}(t)$ decays rapidly. This is the behavior exhibited by the exact solution of the pivot algorithm for the ordinary random walk, where $\rho_{AA}(t) \sim t^{-q}$ in the intermediate region (Section ). We confirmed this behavior empirically for the self-avoiding walk by making an extremely long ($n \approx 10^8$ pivots) simulation: the sample autocorrelation functions $\hat\rho_{AA}(t)$ for $A = R_e^2$, $R_g^2$, $T_{KarpLuby,R=20}$ are shown in a log-log plot in the figure. For $R_e^2$ there is a wide intermediate region of $t$ in which the log-log plot is roughly straight, yielding $\rho_{AA}(t) \sim t^{-q}$. For $R_g^2$ there is more curvature, but the slope $q$ is in the same ballpark. For $T_{KarpLuby,R=20}$, in contrast, the curvature is so great that we are unable to identify an intermediate region with clear $t^{-q}$ behavior. In any case, we have rough empirical confirmation of the theoretically predicted behavior: we are unable to pin down the precise value of $q$, but clearly $q < 1$.

Because of this very slow decay, the automatic windowing algorithm leads to significant underestimates of $\tau_{int,A}$, even when the window factor $c$ is taken quite large. We therefore defined a modified estimator by extrapolating $\hat\rho_{AA}(t)$ proportionally to $1/t$ in the region beyond the window, i.e. $\rho_{AA}(t) \approx M \hat\rho_{AA}(M)/t$ for $t \ge M$, and cutting off the sum at a much larger time $M'$ of order $\tau_{exp}$:

  $\tilde\tau_{int,A} \;=\; \frac{1}{2} \sum_{t=-M}^{M} \hat\rho_{AA}(t) \;+\; M\, \hat\rho_{AA}(M)\, \log(M'/M).$   (C.8)

Since $M' \gg M$, we have approximated the sum over the extrapolated tail by an integral. Here $M$ is defined, as before, by the automatic windowing algorithm. In the absence of any precise knowledge of $\tau_{exp}$, we took $M' = N/f$; luckily, $\tilde\tau_{int,A}$ is not very sensitive to the choice of $M'$, because of the logarithmic dependence and because $\hat\rho_{AA}(M) \ll 1$.
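The estimators (C.5)-(C.8) and the automatic-windowing rule can be sketched as follows (our illustration; `c` is the window factor and `M_cutoff` plays the role of $M' \approx N/f$):

```python
import math

def tau_int_windowed(series, c=5.0, M_cutoff=None):
    """Automatic-windowing estimate of tau_int, as in (C.7); if M_cutoff is
    given, also apply the modified estimator (C.8), which extrapolates
    rho(t) ~ M*rho(M)/t out to M_cutoff."""
    n = len(series)
    mean = sum(series) / n
    def C(t):                       # hat-C_AA(t), normalized as in (C.5)
        return sum((series[i] - mean) * (series[i + t] - mean)
                   for i in range(n - t)) / (n - t)
    C0 = C(0)
    rho = lambda t: C(t) / C0
    def tau_hat(M):                 # (1/2) * sum_{t=-M}^{M} rho(t)
        return 0.5 + sum(rho(t) for t in range(1, M + 1))
    # automatic windowing: smallest M with M >= c * tau_hat(M)
    M = 1
    while M < c * tau_hat(M) and M < n - 1:
        M += 1
    tau = tau_hat(M)
    if M_cutoff is None:
        return tau
    # modified estimator (C.8): add the extrapolated-tail contribution
    return tau + M * rho(M) * math.log(M_cutoff / M)
```

For a nearly uncorrelated series, $\hat\rho(t) \approx 0$ for $t \ge 1$ and both estimates are close to $1/2$; for slowly decaying $\hat\rho(t)$ the tail term supplies the correction that the bare window misses.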

In the table we show the results for a typical run, for the observables $A = R_e^2$, $R_g^2$, $R_m^2$, $T_{KarpLuby,R=150}$, as a function of the window factor $c$. Time is measured in units of pivots. Note that:

(a) The standard windowing estimates $\hat\tau_{int,A}$ are substantially smaller than the modified estimates $\tilde\tau_{int,A}$, even at the largest $c$ shown.

(b) The estimates $\hat\tau_{int,A}$ are slowly but relentlessly increasing as a function of $c$; it is perfectly plausible that they will double or triple by the time $c$ reaches infinity.

(c) The estimates $\tilde\tau_{int,A}$ are roughly stable as a function of $c$ once $c$ is moderately large. However, they do show some fluctuation, because they are based on extrapolation from a single noisy point $\hat\rho_{AA}(M)$.

We therefore chose, as our final estimate, the average of $\tilde\tau_{int,A}$ over the eleven values of $c$ considered; by taking the average, we reduce the fluctuations mentioned in (c).

This whole procedure is, of course, inelegant and ad hoc. But it does work reasonably well: we expect that the resulting estimates of $\tau_{int,A}$ are accurate to within a modest relative error. This is not good enough for a serious study of dynamic critical behavior, but it is good enough for our present purpose, which is merely to set error bars on the static quantities $\langle A \rangle$. In the future, we hope to devise better methods for analyzing time series with slow decay of the autocorrelation function.

D  Remarks on the Field-Theoretic Estimates of Universal Amplitude Ratios

The critical exponents and universal amplitude ratios associated with polymers in a good solvent can be extracted from any family of theories which intersects transversally the domain of attraction (stable manifold) of the good-solvent fixed point $H^*_{GS}$. One such family, which has no special status at $H^*_{GS}$ but is computationally convenient, is the continuum Edwards model or, what is equivalent, the $n \to 0$ limit of the continuum $\varphi^4$ field theory. This model becomes critical (i.e. crosses the stable manifold of $H^*_{GS}$) when its bare coupling constant (self-avoidance parameter) $z$ tends to $\infty$.

There are two main approaches to the quantitative study of the continuum Edwards model in the limit $z \to \infty$:

1. Perturbation expansion in the coupling constant $z$, at fixed dimension $d = 2$ or $d = 3$. In this case the problem is to estimate the asymptotic behavior as $z \to \infty$ from the first few terms of a perturbation expansion around $z = 0$.

2. Expansion in $\epsilon = 4 - d$. Here the critical exponents and limiting amplitude ratios, which correspond to the limit $z \to \infty$, can be obtained directly from a suitable renormalization-group analysis. The problem is then to extrapolate to $\epsilon = 1$ ($d = 3$) or $\epsilon = 2$ ($d = 2$).

In this appendix we want to summarize the results for the universal amplitude ratios $\langle R_g^2 \rangle / \langle R_e^2 \rangle$, $\Psi$ and $\Psi_R$ which have been obtained by each of these methods, and to make some comments on their extrapolation.

Perturbation expansion at fixed dimension $d$.  Let $\alpha_R^2(z)$, $\alpha_S^2(z)$ and $h(z)$ be the conventional expansion and second-virial factors of the continuum Edwards model. One then has

  $\langle R_g^2 \rangle / \langle R_e^2 \rangle \;=\; \frac{1}{6}\, \alpha_S^2(z) / \alpha_R^2(z),$   (D.1)

  $\Psi \;=\; \mathrm{const}_d \times z\, h(z) / \alpha_S^d(z),$   (D.2)

  $\Psi_R \;=\; \mathrm{const}_d \times z\, h(z) / \alpha_R^d(z).$   (D.3)

The known perturbation series in $d = 3$ are as follows:

  $\alpha_R^2(z) \;=\; 1 + \tfrac{4}{3} z - 2.075385\, z^2 + 6.296880\, z^3 - 25.057251\, z^4 + O(z^5),$   (D.4)

  $\alpha_S^2(z) \;=\; 1 + \tfrac{134}{105} z + \cdots,$   (D.5)

  $h(z) \;=\; 1 + \cdots.$   (D.6)

Crude estimates of the limiting ratio $[\langle R_g^2 \rangle / \langle R_e^2 \rangle]^*$ can be obtained from the $[1/1]$ and $[2/2]$ Padé approximants to the series for $6 \langle R_g^2 \rangle / \langle R_e^2 \rangle = \alpha_S^2(z)/\alpha_R^2(z)$, evaluated at $z = \infty$. More sophisticated extrapolations have been performed by Shanes and Nickel; their estimate and our Monte Carlo value are quoted in the main text.
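The $[1/1]$ Padé approximant used here is elementary to construct: given a series $f(z) = 1 + c_1 z + c_2 z^2 + O(z^3)$, matching $(1 + a_1 z)/(1 + b_1 z)$ term by term gives $b_1 = -c_2/c_1$ and $a_1 = c_1 + b_1$, with limiting value $a_1/b_1$ as $z \to \infty$. A sketch with illustrative (not the actual) coefficients:

```python
def pade11(c1, c2):
    """[1/1] Pade approximant of 1 + c1*z + c2*z**2 + O(z**3).
    Returns (a1, b1) with f(z) ~ (1 + a1*z) / (1 + b1*z)."""
    b1 = -c2 / c1
    a1 = c1 + b1
    return a1, b1

c1, c2 = -0.4, 0.1          # hypothetical series coefficients
a1, b1 = pade11(c1, c2)
# Consistency check: re-expanding (1 + a1 z)/(1 + b1 z)
# gives 1 + (a1 - b1) z + (b1**2 - a1*b1) z**2 + ...
assert abs((a1 - b1) - c1) < 1e-12
assert abs((b1 * b1 - a1 * b1) - c2) < 1e-12
limit = a1 / b1             # the z -> infinity value of the approximant
```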

Likewise, crude estimates of $\Psi^*$ and $\Psi_R^*$ can be obtained from the $[1/1]$ Padé approximants to the series (D.2)/(D.3), yielding the estimates $\Psi^*_{[1/1]}$ and $\Psi^*_{R[1/1]}$; our Monte Carlo values are quoted in the main text.

We can also try the direct renormalization approach of des Cloizeaux, Conte and Jannink: define the effective exponent

  $\nu_R(z) \;\equiv\; \frac{1}{2} + \frac{1}{4}\, z\, \frac{d}{dz} \log \alpha_R^2(z),$

which approaches the limiting value $\nu$ as $z \to \infty$; reexpress $\langle R_g^2 \rangle / \langle R_e^2 \rangle$, $\Psi$ and $\Psi_R$ as functions of $\nu_R$; and extrapolate these series to $z = \infty$ using the best available estimate of $\nu$. If we carry out this last step by the most naive method imaginable, namely straight evaluation of the truncated polynomial at $\nu_R = \nu$, we obtain values of $\Psi^*$ and $\Psi_R^*$ that are not bad at all for such a short series.

Expansion in $\epsilon = 4 - d$.  The limiting universal ratios $[\langle R_g^2 \rangle / \langle R_e^2 \rangle]^*$, $\Psi^*$ and $\Psi_R^*$ can be evaluated in dimension $d = 4 - \epsilon$ in powers of $\epsilon$;[44] the known results extend through order $\epsilon^2$ (the expansions of $\Psi^*$ and $\Psi_R^*$ contain logarithmic terms at that order). Evaluating these truncated series at $\epsilon = 1$ yields definite numerical predictions for all three quantities. We remark that the Padé approximants for all three quantities have poles at moderate values of $\epsilon$, and so are unreliable for the values of $\epsilon$ of interest here.

Des Cloizeaux has suggested augmenting these expansions by enforcing the known exact values at $d = 1$ ($\epsilon = 3$), which are $[\langle R_g^2 \rangle / \langle R_e^2 \rangle]^* = 1/12$ together with the corresponding exact values of $\Psi^*$ and $\Psi_R^*$.[45] This produces cubic polynomials in $\epsilon$, which match the $\epsilon$-expansions through second order and pass through the exact $d = 1$ values; evaluating these cubics at $\epsilon = 1$ yields modified predictions for the three ratios.

For $\langle R_g^2 \rangle / \langle R_e^2 \rangle$, this modification has actually worsened the agreement with the Monte Carlo value, both in $d = 3$ and in $d = 2$. On the other hand, for $\Psi^*$ and $\Psi_R^*$ the modified $\epsilon$-expansion prediction is amazingly close to the correct value, both in $d = 3$ and in $d = 2$. It would be useful to obtain a better understanding of whether or not this is a coincidence, perhaps by calculating the $O(\epsilon^3)$ terms.

[44] The $\epsilon$-expansion for $\Psi^*$ is taken from the literature, as is the expansion for $\Psi_R^*$; in those works $\Psi_R$ is (up to normalization) the renormalized coupling constant usually called $g$. The expansion for $\langle R_g^2 \rangle / \langle R_e^2 \rangle$ can then be derived from these two; alternatively, it can be found as the $n \to 0$ limit of the corresponding $O(n)$-model result.

[45] See also the related discussion in des Cloizeaux and Jannink.

Acknowledgments

We wish to thank Jim Barrett, Sergio Caracciolo, Bertrand Duplantier, Michael Fisher, Peter Grassberger, Tony Guttmann, Bernie Nickel, George O'Brien, Enzo Orlandini, Andrea Pelissetto, Carla Tesi and Stu Whittington for many helpful conversations and correspondence. The computations reported here were carried out on various IBM RS…, H… Sparcstation, Silicon Graphics Crimson and Convex C machines at New York University.

The authors' research was supported in part by an operating grant from the Natural Sciences and Engineering Research Council of Canada (N.M.), U.S. Department of Energy contract DE-FG…ER… (A.D.S.), U.S. National Science Foundation grants DMS-… and DMS-… (A.D.S.), and by a New York University Research Challenge Fund grant (A.D.S.). Acknowledgment is also made to the donors of the Petroleum Research Fund, administered by the American Chemical Society, for partial support of this research under grants AC… and ACBC… (A.D.S.). During part of the period of this research, B.L. was an American Chemical Society-Petroleum Research Fund Fellow.

References

N. Madras and G. Slade, The Self-Avoiding Walk (Birkhäuser, Boston-Basel-Berlin).
P.G. de Gennes, Phys. Lett. A.
J. des Cloizeaux, J. Physique.
M. Daoud, J.P. Cotton, B. Farnoux, G. Jannink, G. Sarma, H. Benoit, R. Duplessix, C. Picot and P.G. de Gennes, Macromolecules.
V.J. Emery, Phys. Rev. B.
C. Aragão de Carvalho, S. Caracciolo and J. Fröhlich, Nucl. Phys. B [FS].
R. Fernández, J. Fröhlich and A.D. Sokal, Random Walks, Critical Phenomena, and Triviality in Quantum Field Theory (Springer-Verlag, Berlin-Heidelberg-New York).
M. Lal, Molec. Phys.
B. MacDonald, N. Jan, D.L. Hunter and M.O. Steinitz, J. Phys. A: Math. Gen.
N. Madras and A.D. Sokal, J. Stat. Phys.
K.F. Freed, Renormalization Group Theory of Macromolecules (Wiley, New York).
J. des Cloizeaux and G. Jannink, Polymers in Solution: Their Modelling and Structure (Oxford University Press, New York).
B.G. Nickel, Macromolecules.
A.D. Sokal, Static scaling behavior of high-molecular-weight polymers in dilute solution: A re-examination, NYU preprint NYU-TH-…, May …, hep-lat@ftp.scri.fsu.edu; rejected five times by Phys. Rev. Lett. A slightly abridged and revised version of this paper will appear in Europhys. Lett.
A.D. Sokal, Fundamental problems in the static scaling behavior of high-molecular-weight polymers in dilute solution. I. Critique of two-parameter theories, in preparation.

B. Widom, J. Chem. Phys.
M.E. Fisher, Rep. Prog. Phys.
G. Stell, J. Chem. Phys.
N.S. Snider, J. Chem. Phys.
B. Widom, Physica.
M.E. Fisher, in Collective Properties of Physical Systems (…th Nobel Symposium), ed. B. Lundqvist and S. Lundqvist (Academic Press, New York-London).
C.K. Hall, J. Stat. Phys.
T. Hara and G. Slade, Commun. Math. Phys.
T. Hara and G. Slade, Reviews in Math. Phys.
T. Hara and G. Slade, Commun. Math. Phys.
T. Hara, Prob. Th. and Rel. Fields.
W.J. Camp, D.M. Saul, J.P. Van Dyke and M. Wortis, Phys. Rev. B.
G.A. Baker Jr., Phys. Rev. B.
B.G. Nickel and B. Sharpe, J. Phys. A.
J.J. Rehr, J. Phys. A, L.
J. Zinn-Justin, J. Physique.
B. Nickel, Physica A.
B.G. Nickel, in Phase Transitions (Cargèse lectures), ed. M. Lévy, J.C. Le Guillou and J. Zinn-Justin (Plenum, New York-London).
R. Roskies, Phys. Rev. B.
R.Z. Roskies, Phys. Rev. B.
J. Zinn-Justin, J. Physique.
G.A. Baker Jr. and J.M. Kincaid, J. Stat. Phys.
J.H. Chen, M.E. Fisher and B.G. Nickel, Phys. Rev. Lett.
J. Adler, M. Moshe and V. Privman, Phys. Rev. B.
M. Ferer and M.J. Velgakis, Phys. Rev. B.
M.E. Fisher and J.H. Chen, J. Physique.
A.J. Guttmann, Phys. Rev. B.
R. Schrader and E. Tränkle, J. Stat. Phys.
B.A. Freedman and G.A. Baker Jr., J. Phys. A, L.
M.N. Barber, R.B. Pearson, D. Toussaint and J.L. Richardson, Phys. Rev. B.
K. Binder, M. Nauenberg, V. Privman and A.P. Young, Phys. Rev. B.
A. Hoogland, A. Compagner and H.W.J. Blöte, Physica A.
J. Glimm and A. Jaffe, Ann. Inst. Henri Poincaré A.
R. Schrader, Phys. Rev. B.
R. Schrader, Commun. Math. Phys.
A.D. Sokal, Ann. Inst. Henri Poincaré A.

G.A. Baker Jr., in Phase Transitions and Critical Phenomena, Vol. …, ed. C. Domb and J.L. Lebowitz (Academic Press, London).
M.E. Fisher, in Renormalization Group in Critical Phenomena and Quantum Fields, ed. J.D. Gunton and M.S. Green (Temple University, Philadelphia).
F.J. Wegner and E.K. Riedel, Phys. Rev. B.
S.-K. Ma, Modern Theory of Critical Phenomena (Benjamin, Reading, MA).
D.J. Amit and L. Peliti, Ann. Phys.
M.E. Fisher, in Critical Phenomena (Stellenbosch), Lecture Notes in Physics, ed. F.J.W. Hahne (Springer-Verlag, Berlin-Heidelberg-New York), pp. ….
A.C.D. van Enter, R. Fernández and A.D. Sokal, J. Stat. Phys.
K. Gawedzki and A. Kupiainen, Commun. Math. Phys.
K. Gawedzki and A. Kupiainen, Nucl. Phys. B [FS].
K. Gawedzki and A. Kupiainen, Commun. Math. Phys.
J. Feldman, J. Magnen, V. Rivasseau and R. Sénéor, Commun. Math. Phys.
K. Gawedzki and A. Kupiainen, Commun. Math. Phys.
K. Gawedzki and A. Kupiainen, J. Stat. Phys.
K. Gawedzki and A. Kupiainen, Commun. Math. Phys.
G. Felder, Commun. Math. Phys.
T. Niemeijer and J.M.J. van Leeuwen, in Phase Transitions and Critical Phenomena, Vol. …, ed. C. Domb and M.S. Green (Academic Press, London-New York-San Francisco).
R.H. Swendsen, in Phase Transitions (Cargèse lectures), ed. M. Lévy, J.C. Le Guillou and J. Zinn-Justin (Plenum, New York-London).
E. Brézin, J.C. Le Guillou and J. Zinn-Justin, in Phase Transitions and Critical Phenomena, Vol. …, ed. C. Domb and M.S. Green (Academic Press, London-New York-San Francisco).
D.S. Gaunt and A.J. Guttmann, in Phase Transitions and Critical Phenomena, Vol. …, ed. C. Domb and M.S. Green (Academic Press, London).
A.J. Guttmann, in Phase Transitions and Critical Phenomena, Vol. …, ed. C. Domb and J.L. Lebowitz (Academic Press, London).
J. Adler, M. Moshe and V. Privman, in Structures and Processes, ed. G. Deutscher, R. Zallen and J. Adler (Israel Physical Society).
V. Privman, J. Phys. A.
M.N. Barber, in Phase Transitions and Critical Phenomena, Vol. …, ed. C. Domb and J.L. Lebowitz (Academic Press, London).
J.L. Cardy, ed., Finite-Size Scaling (North-Holland, Amsterdam).
V. Privman, ed., Finite Size Scaling and Numerical Simulation of Statistical Systems (World Scientific, Singapore).
A.D. Sokal, Monte Carlo Methods in Statistical Mechanics: Foundations and New Algorithms, Cours de Troisième Cycle de la Physique en Suisse Romande (Lausanne, June …).
A.D. Sokal, Nucl. Phys. B (Proc. Suppl.).
A.D. Sokal, in Quantum Fields on the Computer, ed. M. Creutz (World Scientific, Singapore).

S. Caracciolo, R.G. Edwards, S.J. Ferreira, A. Pelissetto and A.D. Sokal, Finite-size scaling at ξ/L ≫ 1, in preparation.
A.D. Sokal, in Monte Carlo and Molecular Dynamics Simulations in Polymer Science, ed. K. Binder (Oxford University Press, New York), to appear.
A. Baumgärtner and K. Binder, J. Chem. Phys.
S.F. Edwards, Proc. Phys. Soc. (London).
S.R.S. Varadhan, appendix to the article of K. Symanzik, in Local Quantum Theory, ed. R. Jost (Academic Press, New York-London).
J. Westwater, Commun. Math. Phys.
J. Westwater, Commun. Math. Phys.
A. Bovier, G. Felder and J. Fröhlich, Nucl. Phys. B [FS].
V. Privman, P.C. Hohenberg and A. Aharony, in Phase Transitions and Critical Phenomena, Vol. …, ed. C. Domb and J.L. Lebowitz (Academic Press, London-San Diego).
M. Muthukumar and B.G. Nickel, J. Chem. Phys.
J. des Cloizeaux, R. Conte and G. Jannink, J. Physique Lett., L.
M. Muthukumar and B.G. Nickel, J. Chem. Phys.
A.J. Barrett and B.G. Nickel, private communication.
H. Fujita and T. Norisuye, Macromolecules.
K. Huber and W.H. Stockmayer, Macromolecules.
H. Fujita, Macromolecules.
H. Fujita, Polymer Solutions (Elsevier, Amsterdam).
A.J. Liu and M.E. Fisher, J. Stat. Phys.
T. Hara, G. Slade and A.D. Sokal, J. Stat. Phys.
G. Slade, Commun. Math. Phys.
G. Slade, Ann. Probab.
G. Slade, J. Phys. A: Math. Gen., L.
G.E. Uhlenbeck and G.W. Ford, in Studies in Statistical Mechanics, Vol. …, ed. J. de Boer and G.E. Uhlenbeck (North-Holland, Amsterdam).
J. des Cloizeaux, private communication, cited in E. Brézin, in Order and Fluctuation in Equilibrium and Nonequilibrium Statistical Mechanics (…th Solvay Conference), ed. G. Nichols, G. Dewel and J.W. Turner (Wiley-Interscience, New York).
J.C. Le Guillou and J. Zinn-Justin, Phys. Rev. B.
J.C. Le Guillou and J. Zinn-Justin, J. Physique Lett., L.
J.C. Le Guillou and J. Zinn-Justin, J. Physique.

D.B. Murray and B.G. Nickel, Revised estimates for critical exponents for the continuum n-vector model in 3 dimensions, University of Guelph preprint.
B. Nienhuis, Phys. Rev. Lett.
B. Nienhuis, J. Stat. Phys.
D.S. McKenzie and C. Domb, Proc. Phys. Soc. (London).
M. Aizenman, Commun. Math. Phys.
F.J. Wegner, Phys. Rev. B.
H.E. Stanley, Introduction to Phase Transitions and Critical Phenomena (Oxford University Press, Oxford).
N. Madras, A. Orlitsky and L.A. Shepp, J. Stat. Phys.
G. Zifferer, Macromolecules.
N. Eizenberg and J. Klafter, J. Chem. Phys.
D.E. Knuth, The Art of Computer Programming, Vol. … (Addison-Wesley, Reading, Massachusetts), Section ….
T.H. Cormen, C.E. Leiserson and R.L. Rivest, Introduction to Algorithms (MIT Press/McGraw-Hill, Cambridge, MA-New York), Chapter ….
K. Suzuki, Bull. Chem. Soc. Japan.
G. Zifferer, Molec. Simul.
S. Redner and P.J. Reynolds, J. Phys. A.
A. Berretti and A.D. Sokal, J. Stat. Phys.
A.J. Barrett and B.G. Nickel, private communication. A precursor of this algorithm can be found in A.J. Barrett, Macromolecules, Section ….
R.M. Karp and M. Luby, in …th IEEE Symposium on Foundations of Computer Science (IEEE, New York), pp. ….
R.M. Karp, M. Luby and N. Madras, J. Algorithms.

S.D. Silvey, Statistical Inference (Chapman and Hall, London).
S. Caracciolo, A.J. Guttmann, B. Li, A. Pelissetto and A.D. Sokal, Correction-to-scaling exponents for two-dimensional self-avoiding walks, in preparation.
D.C. Rapaport, J. Phys. A, L.
S. Caracciolo, A. Pelissetto and A.D. Sokal, J. Phys. A, L.
A.J. Barrett, M. Mansfield and B.C. Benesch, Macromolecules.
J.L. Cardy and H. Saleur, J. Phys. A, L.
A.J. Guttmann, S. Merrilees and A.D. Sokal, unpublished.
D.C. Rapaport, J. Phys. A.
J. Dayantis and J.F. Palierne, J. Chem. Phys.
L.A. Johnson, A. Monge and R.A. Friesner, J. Chem. Phys.
F. Shanes and B.G. Nickel, Calculation of the radius of gyration for a linear flexible polymer chain with excluded volume interaction, J. Chem. Phys., to appear.
J. Dayantis and J.F. Palierne, Phys. Rev. B.
A.J. Guttmann, J. Phys. A.
B.G. Nickel, Physica A.
K.E. Newman and E.K. Riedel, Phys. Rev. B.
A.D. Sokal, Fundamental problems in the static scaling behavior of high-molecular-weight polymers in dilute solution. II. Critical review of the experimental literature, in preparation.
A. Yamamoto, M. Fujii, G. Tanaka and H. Yamakawa, Polym. J.
M. Fukuda, M. Fukutomi, Y. Kato and T. Hashimoto, J. Polym. Sci.: Polym. Phys. Ed.
Y. Miyaki, Y. Einaga and H. Fujita, Macromolecules.
J.C. Le Guillou and J. Zinn-Justin, Phys. Rev. Lett.
J.P. Cotton, J. Physique Lett., L.
H. Utiyama, S. Utsumi, Y. Tsunashima and M. Kurata, Macromolecules.
B. Appelt and G. Meyerhoff, Macromolecules.
H.R. Haller, C. Destor and D.S. Cannell, Rev. Sci. Instrum.
B. Chu, R. Xu, T. Maeda and H.S. Dhadwal, Rev. Sci. Instrum.
K.B. Strawbridge, F.R. Hallett and J. Watton, Can. J. Appl. Spectroscopy.
A.D. Sokal, Optimal statistical analysis of static light-scattering data from dilute polymer solutions, in preparation.
M. Benhamou and G. Mahoux, J. Physique Lett., L.
C. Domb and F.T. Hioe, J. Chem. Phys.
M. van Prooyen and B.G. Nickel, The second virial coefficient for self-avoiding walks on a lattice, in preparation.

P.J. Flory, Principles of Polymer Chemistry (Cornell University Press, Ithaca, NY).
H. Yamakawa, Modern Theory of Polymer Solutions (Harper and Row, New York).
P.G. de Gennes, Scaling Concepts in Polymer Physics (Cornell Univ. Press, Ithaca, NY).
C. Domb and A.J. Barrett, Polymer.
J.F. Douglas and K.F. Freed, Macromolecules.
J.F. Douglas and K.F. Freed, J. Phys. Chem.
Z.Y. Chen and J. Noolandi, J. Chem. Phys.
Z.Y. Chen and J. Noolandi, Macromolecules.
B. Krüger and L. Schäfer, Long polymer chains in good solvent: Beyond the universal limit, Universität Essen preprint (late …).
L. Schäfer, On the sign of correction to scaling amplitudes: Field-theoretic considerations and results for self-repelling walks, Universität Essen preprint.
C. Bagnuls and C. Bervillier, Phys. Rev. B.
K.G. Wilson and J. Kogut, Phys. Reports C.
K. Gawedzki and A. Kupiainen, in Critical Phenomena, Random Systems, Gauge Theories (Les Houches, Part I), ed. K. Osterwalder and R. Stora (North-Holland, Amsterdam), pp. ….
J. Polchinski, Nucl. Phys. B.
J. Hughes and J. Liu, Nucl. Phys. B.
S. Weinberg, Phys. Rev. D.
J.C. Collins and A.J. Macfarlane, Phys. Rev. D.
S.W. MacDowell, Phys. Rev. D.
B. Duplantier, J. Physique.
B. Duplantier, J. Chem. Phys.
B. Duplantier, Phys. Rev. A.
C. Domb and G.S. Joyce, J. Phys. C.
B. Duplantier and H. Saleur, Phys. Rev. Lett.
S. Caracciolo, G. Ferraro, A. Pelissetto and A.D. Sokal, work in progress.
E. Orlandini, M.C. Tesi and S.G. Whittington, private communication.
G. Tanaka and K. Šolc, Macromolecules.
J. des Cloizeaux, J. Physique.
J.F. Douglas and K.F. Freed, Macromolecules.

Table: Quantities relevant to the Barrett and Karp-Luby algorithms, as a function of N (N₁ = N₂ = N), for two-dimensional self-avoiding walks; the columns are N, ⟨T⟩, ⟨T²⟩, var T, ⟨U₄⟩, ⟨V₄⟩_Barrett and ⟨V₄⟩_Karp-Luby. Error bars are one standard deviation.

Table: Quantities relevant to the Barrett and Karp-Luby algorithms, as a function of N (N₁ = N₂ = N), for three-dimensional self-avoiding walks; the columns are N, ⟨T⟩, ⟨T²⟩, var T, ⟨U₄⟩, ⟨V₄⟩_Barrett and ⟨V₄⟩_Karp-Luby. Error bars are one standard deviation.

Table: The results of our runs in dimension d = 2; the columns are N, ⟨R_e²⟩, ⟨R_g²⟩, ⟨R_m²⟩, ⟨T⟩ and f. Errors are one standard deviation. The superscript "a" indicates a possible minor bug in measurement (see text).

Table: Universal amplitude ratios in dimension d = 2; the columns are N, ⟨R_g²⟩/⟨R_e²⟩ and ⟨R_m²⟩/⟨R_e²⟩. Errors are one standard deviation, based on the triangle inequality. The superscript "a" indicates a possible minor bug in measurement (see text).

Table: The results of our runs in dimension d = 3; the columns are N, ⟨R_e²⟩, ⟨R_g²⟩, ⟨T⟩ and f. Errors are one standard deviation. The superscript "a" indicates a possible minor bug in measurement (see text).

Table: Universal amplitude ratios in dimension d = 3; the columns are N and ⟨R_g²⟩/⟨R_e²⟩. Errors are one standard deviation, based on the triangle inequality. The superscript "a" indicates a possible minor bug in measurement (see text).

Table: Fits ⟨R_g²⟩/⟨R_e²⟩ = a + bN^{−Δ}, with a, b, Δ all variable, for …-dimensional SAWs; for each N_min we report a, b, Δ, χ², DF and confidence level. The true error bars are probably … of those indicated here (see text).

Table: Fits Ψ = a + bN^{−Δ}, with a, b, Δ all variable, for …-dimensional SAWs; for each N_min we report a, b, Δ, χ², DF and confidence level. The true error bars are probably … of those indicated here (see text).
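Fits of the form a + bN^{−Δ} are nonlinear only through Δ; for each trial Δ the problem is linear in (a, b). A minimal sketch of this two-stage procedure (grid scan over Δ plus linear least squares; the data below are synthetic, not the paper's):

```python
def fit_correction(N, y, deltas):
    """Fit y ~ a + b * N**(-delta).  For each trial delta the model is
    linear in (a, b), so solve that subproblem via the normal equations
    and keep the delta with the smallest sum of squared residuals."""
    best = None
    for delta in deltas:
        x = [n ** (-delta) for n in N]
        m = len(x)
        sx, sy = sum(x), sum(y)
        sxx = sum(xi * xi for xi in x)
        sxy = sum(xi * yi for xi, yi in zip(x, y))
        b = (m * sxy - sx * sy) / (m * sxx - sx * sx)
        a = (sy - b * sx) / m
        sse = sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y))
        if best is None or sse < best[0]:
            best = (sse, a, b, delta)
    return best  # (sse, a, b, delta)

# Synthetic data generated with a = 0.16, b = 0.5, delta = 1.0:
N = [10, 20, 40, 80, 160]
y = [0.16 + 0.5 * n ** -1.0 for n in N]
sse, a, b, delta = fit_correction(N, y, [0.5, 1.0, 1.5])
print(a, b, delta)
```

A real analysis would of course also propagate the Monte Carlo error bars into χ², as the tables above do.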

Table: Ψ_N for short chains (N = 1, …, 7), rounded to four decimal places, from exact enumerations in dimensions d = …, along with our Monte Carlo values for Ψ* and the hard-sphere values Ψ_hard-sphere.

Table: Standard windowing estimate τ̂_int,A and modified estimate τ̃_int,A, as a function of the window factor c, for the observables A = R_e², R_g², R_m² and A = T_{Karp-Luby, R/150}, for SAWs on the square lattice at N = …. Time is measured in units of pivots.
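The windowing estimate τ̂_int,A in the table truncates the autocorrelation sum at a self-consistent window of width c·τ̂_int. A minimal, illustrative version of that standard procedure (not the modified estimator, which is defined in the text):

```python
import random

def tau_int(x, c=6.0):
    """Windowed estimate of the integrated autocorrelation time:
    tau = 1/2 + sum_t rho(t), truncating the sum at the first lag t
    with t >= c * tau (self-consistent window)."""
    n = len(x)
    mu = sum(x) / n
    var = sum((v - mu) ** 2 for v in x) / n
    if var == 0.0:
        return 0.5

    def rho(t):  # normalized autocorrelation at lag t
        cov = sum((x[i] - mu) * (x[i + t] - mu) for i in range(n - t))
        return cov / ((n - t) * var)

    tau, t = 0.5, 1
    while t < n // 2:
        tau += rho(t)
        if t >= c * tau:
            break
        t += 1
    return tau

# For independent draws, tau should come out close to 1/2:
random.seed(0)
print(tau_int([random.random() for _ in range(4000)]))
```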

Figure: Estimated exponent 2ν from pure power-law fits to ⟨R_e²⟩, ⟨R_g²⟩ and ⟨R_m²⟩, and the estimated exponent from a pure power-law fit to ⟨T⟩, plotted versus N_min. Note the good agreement with the believed exact value ν = 3/4 and with the hyperscaling relation dν = 2Δ₄ − γ as soon as N_min ≳ ….

Figure: Estimated exponent p from a pure power-law fit to the pivot-algorithm acceptance fraction f, plotted versus N_min.

Figure: Interpenetration ratio Ψ versus N for two-dimensional SAWs. Note that Ψ varies little for N ≳ … and is constant within error bars for N ≳ ….

Figure: Estimated exponent 2ν from pure power-law fits to ⟨R_e²⟩ and ⟨R_g²⟩, and the estimated exponent from a pure power-law fit to ⟨T⟩, plotted versus N_min. Note the very strong corrections to scaling, which lead to erroneous exponent estimates unless one takes N_min ≳ ….

Figure: Estimated exponent p from a pure power-law fit to the pivot-algorithm acceptance fraction f, plotted versus N_min.

Figure: Estimated exponent 2ν from pure power-law fits to ⟨R_e²⟩ and ⟨R_g²⟩, and the estimated exponent from a pure power-law fit to ⟨T⟩, plotted versus N_min^{−0.5}. Note the very roughly linear behavior, in agreement with the belief that Δ₁ ≈ 0.5.

Figure: Pairs (power, Δ₁) for which the fit to the Ansatz A N^{power} + B N^{power−Δ₁} produces an acceptable χ² at the … significance level (i.e., one standard deviation). Observables are ⟨R_g²⟩ and ⟨T⟩.

Figure: Interpenetration ratio Ψ versus N for three-dimensional SAWs. Note that Ψ is a decreasing and convex function of N, in flagrant disagreement with the prediction of the two-parameter renormalization-group theory.

Figure: Interpenetration ratio Ψ versus N^{−0.56} for three-dimensional SAWs. The regression line is Ψ = … + … N^{−0.56}. Note the excellent linearity of the plot.

Figure: Wilson-de Gennes-type renormalization-group flow on the critical surface. H_G (resp. H_GS) is the Gaussian (resp. good-solvent) fixed point; M_s (resp. M_u) is the stable (resp. unstable) manifold of H_GS. Case Ia: models in the good-solvent regime may have correction-to-scaling amplitudes that are either negative (P, R) or positive (Q). Case II: the initial Hamiltonians H^(n) approach the stable manifold, while the low-energy effective Hamiltonians H_eff^(n) = R^n H^(n) approach the unstable manifold.

Figure: Averages of (a) R_e² and (b) T over bins of width … iterations, for the pivot algorithm on the simple cubic lattice at N = …, with parallel-rod start.

Figure: Log-log plot of the sample autocorrelation function ρ̂_AA(t) for (a) A = R_e², (b) A = R_g² and (c) A = T_{Karp-Luby, R/20}, for the pivot algorithm on the square lattice at N = ….