Computational Physics 1

Module PH3707 (10 Credits)

by Roger Stewart

Version date: Monday, 7 July 2008

Copyright © 2003 Dennis Dunn.

Contents

INTRODUCTION TO THE MODULE

1 ANALYSIS OF WAVEFORMS
1.1 Objectives
1.2 Fourier Analysis
1.2.1 Why bother?
1.2.2 Aperiodic functions
1.3 The Numerical Methods
1.4 Diffusion Equation
1.4.1 Exercises

2 EIGENVALUES AND EIGENVECTORS OF MATRICES
2.1 Objectives
2.2 Eigenvalues and eigenvectors of real symmetric or hermitian matrices
2.3 A Matrix Eigenvalue Package
2.4 Schrödinger Equation
2.4.1 Harmonic Oscillator
2.4.2 Spherically Symmetric 3D Systems

3 RANDOM PROCESSES
3.1 Objectives
3.2 Introduction
3.3 Random Number Generators
3.3.1 The basic algorithm
3.4 Intrinsic Subroutine
3.4.1 Different number ranges
3.4.2 Testing your random generator
3.5 Monte Carlo Integration
3.5.1 Hit and Miss Method
3.5.2 Sampling Method
3.6 Nuclear Decay Chains
3.6.1 Analytic approach
3.6.2 Computer
3.7 Exercises

4 MONTE CARLO SIMULATION
4.1 OBJECTIVES
4.2 EQUILIBRIUM AND FLUCTUATIONS
4.3 MONTE CARLO – THE PRINCIPLES
4.4 The Metropolis Monte Carlo Algorithm
4.5 The Ising Model
4.6 The Ising Model and the Monte Carlo Algorithm
4.7 Rationale
4.8 MONTE CARLO SIMULATIONS – IN ACTION
4.9 Order Parameter - Magnetisation
4.10 Temperature Scan (Annealing and Quenching)
4.11 MATHEMATICAL APPENDIX
4.11.1 Fluctuations
4.11.2 Metropolis Monte Carlo Algorithm and Principle of Detailed Balance
4.11.3 Susceptibility and Fluctuations
4.12 EXERCISES

Index

INTRODUCTION TO THE MODULE

Version date: Monday, 7 July 2008

Introduction

In this module you will be taught techniques employed in computational science and, in particular, computational physics, using the Fortran 95 language. The module consists of four 'computer experiments', each of which must be completed within a specified time. Each 'computer experiment' is described in a separate chapter of this manual and contains a series of exercises for you to complete. You should work alone and should keep a detailed record of the work in a logbook, which must be submitted for assessment at the end of each experiment.

For three of the four projects there will be a supervised laboratory session each week, together with further unsupervised sessions. There will be no supervised sessions for the fourth project: it is an exercise in independent learning.

The Salford Fortran 95 compiler will be used in this course; it may be started by double-clicking on the Plato icon in the "Programming - Salford Fortran 95" program group. An "FTN95 Help" facility is supplied with this software and can be found within the same program group. This help facility includes details of the standard Fortran 95 commands as well as the compiler-specific graphics features. All of the programs needed during this course may be downloaded from the Part 3 - PH3707 Computational Physics page on the department's web-server (www.rdg.ac.uk/physicsnet).

Web Site Information

In addition to all the chapters and programs required for this course, there are links to other useful sites, including a description of programming style; a description of computational science in general and Fortran programming in particular; a tutorial for Fortran 90; and a description of object-oriented programming.

References

Programming in Fortran 90/95
By J S Morgan and J L Schonfelder
Published by N.A. Software, 2002. 316 pages. $15
This can be ordered online from www.fortran.com

Fortran 95 Handbook
By Jeanne Adams, Walt Brainerd, Jeanne Martin, Brian Smith, and Jerry Wagener
Published by MIT Press, 1997. 710 pages. $55.00

Fortran 95/2003 Explained
By Michael Metcalf and John Reid
Oxford University Press. ISBN 0-19-852693-8. $35.00

Fortran 90 for Scientists and Engineers
By Brian Hahn
Published by Arnold. £19.99

Fortran 90/95 for Scientists and Engineers
By Stephen J. Chapman
McGraw-Hill, 1998. ISBN 0-07-011938-4. $68.00

Numerical Recipes in Fortran 90
By William Press, Saul Teukolsky, William Vetterling, and Brian Flannery
Published by Cambridge University Press, 1996. 550 pages. $49.00

A more complete list of reference texts is held at
http://www.fortran.com/fortran/Books/books.html
where books can be ordered directly.

Logbooks

You must keep an accurate and complete record of the work in a logbook: the logbook is what is assessed. In the logbook you should answer all the questions asked in the text, include copies of the programs with explanations of how they work, and record details of the program inputs and of the output created by the programs. On completion of each chapter you should write a brief summary of what has been achieved in the project.

I am often asked what should go in the logbook. It is difficult to give a precise answer to this since each computer experiment is different, but as a guide it should have:

• a complete record of your computer experiment;
• sufficient detail for someone else to understand what you were doing and why; and
• sufficient detail for someone else to be able to repeat the computer experiment.

In particular it should also include:

• program listings;
• a description of any changes you made to the programs. If you have made a succession of changes, you should not reproduce the complete program each time but simply specify what changes you have made and why you have made them;
• the data entered and results obtained from your programs (or, if there are a very large number of results, a summary of these results);
• a comment on each set of results. You will lose marks if you simply record masses of computer output without commenting on its significance;
• descriptions of any new techniques you have learned.

It worked!

A statement in your logbook of the form "I tried the program and it worked" will not be looked on favorably. Ask yourself:

• What inputs did you provide to the program?
• What output did you obtain?
• What evidence do you have that this output is correct?

Program Testing

It is always safe to assume that a program you have written is wrong in some way. If you have made some error with the programming language then the compiler will tell you about this, although it may not always tell you precisely what is wrong. However, it may be the case that what you have told the computer to do is not really what you intended. Computers have no intelligence: you need to be very precise in your instructions.

Every program should be tested. You do this by giving the program input data for which you know, or can easily calculate, the result. If possible your own calculation should be via a different method from the one the computer is using: the method you have told the computer to use may be incorrect. Only when the program has passed your tests should you start to use it to give you new results.

Module Assessment

The module comprises four computational projects. A record of each project must be kept in a logbook and the logbook submitted for assessment by the specified deadline. The final assessment will be based on the best three project marks.

Each project will be marked out of 20. A detailed marking scheme is given in this manual for each project. Guidelines on the assessment are given below.

Late Submissions

If a project is submitted up to one calendar week after the original deadline, 2 marks will be deducted. I am prepared to mark any project that is more than one week late, provided it is submitted by the last day of the Spring Term; however, for such a project 4 marks will be deducted.

Extensions & Extenuating Circumstances

If you have a valid reason for not being able to complete a project by the specified deadline then you should:

• inform the lecturer as soon as possible; and
• complete an Extension of Deadlines Form. The form can be obtained from the School Office (Physics 210).

If you believe that there has been some non-academic problem that you encountered during the module (medical or family problems, for example) you should complete an Extenuating Circumstances Form, again obtainable from the School Office, so that the Director of Teaching & Learning and the Examiners can consider this.

Feedback

In addition to comments written in your logbook by the assessor during marking, feedback on the projects will be provided by a class discussion and, when appropriate, by individual discussion with the lecturer. There will be no feedback, apart from the mark, on late submissions.

Assessment Guidelines

This module is assessed solely by continuous assessment. Each project (which corresponds to one chapter of the manual) is assessed as follows. The depth of understanding and level of achievement will be assessed taking into account the following three categories:

1. Completion of the project (0-17 marks)
• Completeness of the record
• Description and justification of all actions
• Following the documented instructions, answering questions and performing derivations, etc.

2. Summary (0-3 marks)
• Review of objectives
• Summary of achievements
• Retrospective comments on the effectiveness of the exercises

3. Bonus for extra work (0-2 marks)
• Any exceptional computational work beyond the requirements stated
• An exceptional depth of analysis
• An outstanding physical insight

I should point out that bonus marks are only rarely awarded and, in any case, the total mark for a project cannot exceed 20. Unfinished work will be marked pro rata, unless there are extenuating circumstances.

If you are unable to attend the laboratory session you should inform the lecturer, Dr R J Stewart, by telephone (0118 378 8536) or by email ([email protected]).

Plagiarism

In any learning process you should make use of whatever resources are available. In this course, I hope, the lecturer and postgraduate assistant will be valuable resources. Your fellow students may also be useful resources and I encourage you to discuss the projects with them.

However, at the end of these discussions you should then write your own program (or program modification). It is completely unacceptable to copy someone else's program (or results). This is a form of cheating and will be dealt with as such. I should point out that such copying (even if slightly disguised) is very easy to detect.

Time Management

There is ample time for you to complete each of these projects provided you manage your time sensibly. You should aim to spend at least six hours per week on each project: that is, a total of about 18 hours per project.

Each project is divided into a number of 'exercises' and each of these exercises is allocated a mark. This mark is approximately proportional to the time that I expect you to spend on the exercise. You should therefore be able to allocate your time accordingly: it is not sensible to spend half of the available time on an exercise which is worth only a quarter of the total marks.

Each of the projects below is given a deadline. You should not take this as a target: you should set yourself a target well before the deadline.

Projects

• Random Processes [Deadline: noon, Wednesday Week 3, Autumn Term]
• Analysis of Waveforms [Deadline: noon, Wednesday Week 6, Autumn Term]
• Eigenvalues & Eigenvectors [Deadline: noon, Wednesday Week 9, Autumn Term]
• Monte Carlo Simulation [Deadline: noon, Wednesday Week 2, Spring Term]

The 'Monte Carlo Simulation' project will be unsupervised: it is an exercise in independent learning. Nevertheless the lecturer and demonstrator will be available for consultation.

Chapter 1

ANALYSIS OF WAVEFORMS

Version Date: Wednesday, 5 September, 2007 at 13:05

1.1. Objectives

In this chapter the main elements of Fourier analysis are reviewed and the methods are applied to some basic waveforms. On completion of this chapter students will have utilized simple numerical techniques for performing Fourier analysis; studied the convergence of Fourier series and how this is affected by discontinuities in the function; and investigated the best choice of Fourier coefficients in a finite series. Fourier techniques will be applied to the solution of the diffusion equation.

1.2. Fourier Analysis

Fourier's theorem states that any well-behaved (or physical) periodic waveform f(x) with period L may be expressed as the series

f(x) = \sum_{r=-\infty}^{\infty} F_r \exp(i k_r x)    (1.1)

where the wave-vector k_r is given by

k_r = \frac{2\pi r}{L}    (1.2)

The complex Fourier coefficients F_r are given by

F_r = \frac{1}{L} \int f(x) \exp(-i k_r x)\, dx    (1.3)

Here the integral is over any complete period (e.g. x = 0 to x = L, or x = -L/2 to x = L/2). The rth component of the sum in (1.1) corresponds to a harmonic wave with spatial frequency r/L and hence a wavelength L/r.

Normally, in Physics, f(x) is a real function. In this case the Fourier coefficients have the following symmetry property:

F_r^* = F_{-r}    (1.4)

An important exception to this is the case of quantum mechanics, where the wavefunctions are normally complex; the above symmetry does not apply in that case.

In numerical work we can only deal with series with a finite number of terms. Suppose the finite series f^{[M]}(x) is used to approximate the function f(x):

f^{[M]}(x) = \sum_{r=-M}^{M} F_r \exp(i k_r x)    (1.5)

The mean-square error involved in making this approximation is, by definition,

E^{[M]} = \frac{1}{L} \int_0^L \left[ f(x) - f^{[M]}(x) \right]^2 dx    (1.6)

and this can be written (after quite a bit of manipulation!) as

E^{[M]} = \sum_{r=-\infty}^{+\infty} |F_r|^2 - \sum_{r=-M}^{M} |F_r|^2    (1.7)

This latter form shows that the mean-square error decreases monotonically as a function of M (i.e. E^{[M+1]} \le E^{[M]}): f^{[M]}(x) converges to f(x) as more terms are added to the series. It can also be shown that the mean-square error E^{[M]} (for any fixed M) is minimized by using the Fourier coefficients as calculated through equation (1.3).
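The monotonic decrease of E^{[M]} is easy to check numerically. The following is an illustrative sketch in Python/NumPy rather than the Fortran 95 used in the module, with an assumed square-wave test function; it evaluates the coefficients and the error directly from the sampled data:

```python
import numpy as np

# Sketch: truncation error E^[M] of eq. (1.6) for a square wave of period
# L = 1, computed directly on a sample grid.  Assumed test function.
L, N = 1.0, 1000
x = np.arange(N) * L / N
f = np.where(x < L / 2, 1.0, -1.0)              # square wave

def coeff(r):
    # Fourier coefficient F_r, eq. (1.3) approximated by a plain sum
    return np.sum(f * np.exp(-2j * np.pi * r * x / L)) / N

def error(M):
    # mean-square error E^[M], eq. (1.6), on the sample grid
    fM = sum(coeff(r) * np.exp(2j * np.pi * r * x / L) for r in range(-M, M + 1))
    return np.mean((f - fM.real) ** 2)

errs = [error(M) for M in range(1, 20)]
# E^[M] never increases as M grows, in line with eq. (1.7)
```

The even-M steps leave the error unchanged (a square wave has only odd harmonics), which is why the decrease is monotone but not strict.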

1.2.1. Why bother?

It is not entirely clear from the above equations what has been gained by expressing the function f(x) as a series (1.1) or (1.5). In Physics we often need to evaluate the derivative (or second derivative, etc.) of a function: if the function has been expressed as a Fourier series then this is a trivial operation. For example, the Jth derivative of f(x), using (1.1), is

\frac{d^J f(x)}{dx^J} = \sum_{r=-\infty}^{\infty} (i k_r)^J \left[ F_r \exp(i k_r x) \right]    (1.8)

We can similarly write expressions for integrals of f(x).
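Spectral differentiation is a one-line operation once the coefficients are known. Here is an illustrative Python/NumPy sketch (not a course program; the FFT is simply a fast way of evaluating the coefficient sums) of equation (1.8) with J = 1:

```python
import numpy as np

# Sketch: J = 1 spectral differentiation via eq. (1.8), applied to
# f(x) = sin(2*pi*x/L) on one period.
L, N = 1.0, 256
x = np.arange(N) * L / N
f = np.sin(2 * np.pi * x / L)

F = np.fft.fft(f) / N                               # coefficients F_r
ik = 2j * np.pi * np.fft.fftfreq(N, d=1.0 / N) / L  # i * k_r for each mode
dfdx = np.real(np.fft.ifft(ik * F) * N)             # resummed series (1.8)

exact = (2 * np.pi / L) * np.cos(2 * np.pi * x / L)
# dfdx agrees with the analytic derivative to machine precision,
# because this waveform is band-limited
```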

1.2.2. Aperiodic functions

The instances, in Physics, of genuinely periodic functions are exceedingly rare. However, there are still many applications of the above theory.

Suppose a function f(x) either exists only in a finite range 0 ≤ x ≤ L or is known only in this finite range. We can construct a periodic function simply by making a periodic repetition of the finite-range function with repeat period L, and apply the theory to this (artificial) periodic function. It is in this form that Fourier analysis is normally applied in Physics.

1.3. The Numerical Methods

We now consider how to calculate the coefficients F_r. In general we can approximate an integral by means of the trapezoidal rule. The essence of this is shown in figure (1.1).


Figure 1.1. Trapezoidal Integration

The integrand is divided into N equal intervals of size L/N and the integrand is approximated by a sequence of straight-line segments. The function f(x) is evaluated at the positions x_s = sL/N. Note that N needs to be quite large to ensure accuracy; in the program you will use, N has been taken to be 1000. The result of this procedure applied to (1.3) is

F_r = \frac{1}{N} \left[ \frac{f(0) + f(L)}{2} + \sum_{s=1}^{N-1} f\left(\frac{sL}{N}\right) \exp\left(-i\, \frac{2\pi rs}{N}\right) \right]    (1.9)

We can simplify this by defining an array

f_s = f\left(\frac{sL}{N}\right), \quad s = 1, \ldots, N-1; \qquad f_0 = \frac{f(0) + f(L)}{2}    (1.10)

Using this array gives the result for the coefficients as

F_r = \frac{1}{N} \sum_{s=0}^{N-1} f_s \exp\left(-i\, \frac{2\pi rs}{N}\right)    (1.11)

This approximate procedure for the integrals predicts the coefficients for r < N/2; it fails to correctly predict the coefficients for r ≥ N/2. That is, it fails to predict the Fourier components with spatial frequency greater than N/2L and wavelengths less than 2L/N. In fact, if the function f(x) has no spatial frequency greater than N/2L, the Sampling Theorem tells us that the expression (1.11) is exact.

There is a technical problem in evaluating the exponentials in (1.11) or (1.5). In Fortran we can only evaluate exp(iα) if α is not too large: in practice, less than about 70 in magnitude. (You might think this is a deficiency of Fortran, but you should be aware that no other language has complex exponentials built in.) We can surmount this difficulty by evaluating the exponentials in the form

\exp\left(-i\, \frac{2\pi rs}{N}\right) = \exp\left(-i\, \frac{2\pi\, \mathrm{MOD}(rs, N)}{N}\right)    (1.12)

That is, we have replaced rs by MOD(rs, N), the Fortran function which gives the remainder when rs is divided by N. This procedure works because (rs - MOD(rs, N))/N is an integer and because

\exp(-i\, 2\pi m) = 1    (1.13)

for any integer m. The argument of the exponential on the right-hand side of (1.12) is then a small quantity (in fact, less in magnitude than 2π).
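As a concrete illustration of (1.11) together with the argument-reduction trick (1.12), here is a Python/NumPy sketch (the course programs are in Fortran 95; the waveform is an assumed band-limited example chosen so that the exact coefficients are known):

```python
import numpy as np

# Sketch of eq. (1.11) with the reduction (1.12).  In Python the reduction
# is not strictly needed, but it mirrors the Fortran MOD(rs, N) approach.
L, N = 1.0, 1000
s = np.arange(N)
x = s * L / N
fs = np.sin(2 * np.pi * x / L) + 0.5 * np.cos(6 * np.pi * x / L)
# (this waveform is periodic, so f(L) = f(0) and the endpoint average
#  of eq. (1.10) reduces to the plain sample f_0 = f(0))

def fourier_coeff(r):
    # eq. (1.11), with rs reduced modulo N as in eq. (1.12)
    phase = -2j * np.pi * ((r * s) % N) / N
    return np.sum(fs * np.exp(phase)) / N

# exact values here: F_{+1} = -i/2, F_{-1} = +i/2, F_{+3} = F_{-3} = 1/4
```

Because this waveform contains no spatial frequency above N/2L, the computed coefficients match the exact ones to machine precision, as the Sampling Theorem remark above predicts.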

If we only need to evaluate the original function f at the discrete points x_s = sL/N then the formula (1.5) simplifies to

f_s = \sum_{r=-M}^{M} F_r \exp\left(i\, \frac{2\pi rs}{N}\right)    (1.14)

which is very similar to the expression for the Fourier coefficients (1.11), and the exponential is calculated in the same way.

1.4. Diffusion Equation

I now look at the use of Fourier analysis in solving differential equations.

In thermal equilibrium (and in the absence of external forces) gases and liquids have uniform densities. If a gas or liquid is prepared with a high density in a localized region then this excess density will quickly spread out until uniformity is restored: this process is called diffusion. If f(x, t) denotes the deviation of the density from equilibrium then the evolution of this quantity with time is determined by the diffusion equation:

\frac{\partial f(x,t)}{\partial t} = D\, \frac{\partial^2 f(x,t)}{\partial x^2}    (1.15)

D is the diffusion constant.

Now suppose that I use equation (1.5) for f(x, t) but where the coefficients are functions of time:

f(x,t) = \sum_{r=-M}^{M} F_r(t) \exp(i k_r x)    (1.16)

Inserting this expression in the diffusion equation gives the following result for the coefficients:

F_r(t) = \exp\left(-D k_r^2 t\right) F_r(0)    (1.17)

This can be used to determine the density at any later time. If I assume this fluid is contained in a region 0 ≤ x ≤ L, then k_r = 2πr/L as in (1.2) and the above equation becomes

F_r(t) = \exp\left(-\frac{4\pi^2 r^2 D\, t}{L^2}\right) F_r(0)    (1.18)

The complete prescription for solving the diffusion problem is:

• Fourier analyse the initial (t = 0) density function; that is, calculate the Fourier coefficients F_r(0) using equation (1.11);
• evaluate the coefficients at time t using equation (1.17);
• calculate the density at time t by inserting these Fourier coefficients into (1.5) with M = (N − 1)/2.
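The three-step prescription can be sketched as follows. This is an illustrative Python/NumPy version, not the Fourier.f95 program used in the course; the FFT is used as a fast evaluation of the discrete sums (1.11) and (1.14), and the initial profile is the Gaussian of exercise 2 below:

```python
import numpy as np

# Sketch of the three-step diffusion prescription (Python stand-in for
# the course's Fortran program).
L, N, D = 1.0, 1000, 1.0
x = np.arange(N) * L / N
sigma = L / 80
f0 = np.exp(-(x - L / 2) ** 2 / (2 * sigma ** 2))   # initial density deviation

def density_at(t):
    F0 = np.fft.fft(f0) / N                  # step 1: F_r(0), cf. eq. (1.11)
    r = np.fft.fftfreq(N, d=1.0 / N)         # integer mode numbers r
    Ft = F0 * np.exp(-4 * np.pi**2 * r**2 * D * t / L**2)   # step 2: eq. (1.18)
    return np.real(np.fft.ifft(Ft) * N)      # step 3: resum, cf. eq. (1.14)

f_later = density_at(L**2 / (2000 * D))
# diffusion conserves the integrated density (the r = 0 mode is
# untouched by the decay factor) and lowers the peak
```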

Now attempt the exercises.

1.4.1. Exercises

1. [10 Marks]

(a) Obtain a copy of the program Fourier.f95 and run it. The program sets up various waveforms and plots them in the graphics window. Make sure you understand how it works, and record its main features in your log-book. Remember that the graph-plotting routine also writes to the clipboard, so that copies of the graphs can be pasted into other documents if required.

(b) Write a subroutine to calculate the coefficients F_r for r ≤ (N−1)/2 using equation (1.11). Write the results to a data file. Compare your numerical results with the analytical solutions to equation (1.3) for one of the waveforms.

(c) Check that the symmetry property (1.4) is satisfied by the Fourier coefficients. What symmetry do the square and triangular waveforms have, and how is this related to the values of their Fourier coefficients? What about the ramp wave?

(d) Use your calculated values of F_r to reconstruct the approximation f^{[M]} in equation (1.14), and plot this curve alongside the original waveform. The value of M should be ≤ (N−1)/2. Note that the subroutine 'plot graph' provided will plot all the curves stored in f(npoints, ngraphs).

(e) Investigate the convergence of the series (that is, gradually increase M and observe what happens); does that of the triangular wave converge faster than those of the square or ramp waves? Also investigate the Gibbs overshoot phenomenon observed in Fourier series for discontinuous waveforms.

(f) Calculate the mean-square error in equation (1.7). Since N is large, a good approximation to the right-hand side is

\sum_{r=-(N-1)/2}^{(N-1)/2} |F_r|^2 - \sum_{r=-M}^{M} |F_r|^2

(g) Show that E^{[M]} monotonically decreases with M.

Remember to keep an accurate record of your work in your log-book.

2. [7 Marks]

Diffusion: assume that in a diffusion problem the initial density is

f(x, 0) = \exp\left(-\frac{(x - L/2)^2}{2\sigma^2}\right), \quad \sigma = \frac{L}{80}

Determine the densities at times

t = \frac{L^2}{2000\,D}, \quad \frac{2L^2}{2000\,D}, \quad \frac{4L^2}{2000\,D}

Notice that at t = 0 the required function is exactly that described as "gaussian" in the program.

Remember that you must finish your work on this chapter by writing a summary in your laboratory notebook. This should summarize in about 300 words what you have learnt and whether the objectives of this chapter have been met.

Chapter 2

EIGENVALUES AND EIGENVECTORS OF MATRICES

Version Date: Thursday, 30 August, 2007 at 10:20

2.1. Objectives

In this chapter you will investigate eigenvalue equations and eigenvalue packages for solving such equations. You will be provided with a subroutine for finding all the eigenvalues and eigenvectors of a real symmetric matrix, and also an eigenvalue package which finds a few eigenvalues and eigenvectors.

In the first set of exercises you will check the results produced by the package against direct calculations (for small matrices). In the second part of the project you will use the packages to investigate eigenvalues and eigenvectors of the Schrödinger equation.

2.2. Eigenvalues and eigenvectors of real symmetric or hermitian matrices

An eigenvalue equation is

A v^{(k)} = \lambda^{(k)} v^{(k)}    (2.1)

where A is an n × n matrix, v^{(k)} is the kth eigenvector (an n × 1 column vector), and \lambda^{(k)} is the kth eigenvalue. The complete expression of the above equation is

\sum_{s=1}^{n} A_{rs}\, v_s^{(k)} = \lambda^{(k)} v_r^{(k)}, \quad r = 1, \ldots, n    (2.2)

I shall only consider matrices that are real and symmetric or complex and hermitian. In the first case the matrices have the symmetry

A_{rs} = A_{sr}

and in the second case

A_{rs} = A_{sr}^*

These are the types of matrices required for most physical problems. Such matrices have very special properties:

• all the eigenvalues are real;
• (if the matrices are n × n) there are n eigenvectors that are mutually orthogonal, and these form a complete set.

Mutually orthogonal means that the eigenvectors satisfy

\sum_{r=1}^{n} \left(v_r^{(k)}\right)^* v_r^{(k')} = 0, \quad k \ne k'    (2.3)

It is conventional to normalize the eigenvectors, that is, to choose them to satisfy

\sum_{r=1}^{n} \left(v_r^{(k)}\right)^* v_r^{(k)} = 1    (2.4)

The completeness of the eigenvectors means that any column vector can be constructed as a sum of the eigenvectors (with appropriate coefficients). That is, any column vector b can be written as

b_r = \sum_{k=1}^{n} \alpha^{(k)} v_r^{(k)}    (2.5)

The required set of coefficients \alpha^{(k)} can be evaluated by

\alpha^{(k)} = \sum_{r=1}^{n} \left(v_r^{(k)}\right)^* b_r    (2.6)

In order to emphasize that the properties of hermitian (or real symmetric) matrices are not shared by more general matrices, consider a space of 2 × 2 matrices and 2 × 1 column vectors. A very simple, but not symmetric, 2 × 2 matrix is

\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}

This matrix has the eigenvalue λ = 0 and only one eigenvector,

\begin{pmatrix} 1 \\ 0 \end{pmatrix}

Clearly this one eigenvector cannot be used to generate an arbitrary 2 × 1 column vector. You should verify the above properties of this asymmetric real matrix.

The eigenvalue equation (2.1) can be written, entirely in matrix form, as

A V = V D    (2.7)

where V is an n × n matrix which is made up from the n column vectors v^{(1)}, v^{(2)}, \ldots, v^{(n)}, and D is a diagonal matrix with diagonal entries \lambda^{(1)}, \lambda^{(2)}, \ldots, \lambda^{(n)}. If the eigenvectors are normalized according to (2.4) then the eigenvector matrix V satisfies the equations

V^\dagger V = I = V V^\dagger    (2.8)

where I denotes the n × n unit matrix and V^\dagger denotes the hermitian conjugate of V:

\left(V^\dagger\right)_{rs} = V_{sr}^*    (2.9)

For real matrices this is just the transpose.

This property (2.8) of the eigenvector matrices can be used to represent the original matrix A as

A = V D V^\dagger    (2.10)

This is very useful in numerical computations because it provides a very severe test of the numerical method: use the numerical procedure to calculate the eigenvalue and eigenvector matrices D and V; then use (2.10) to reconstruct A. If this reconstruction does not agree with the original matrix (to within some required accuracy) then the procedure is at fault.

Numerical techniques for finding eigenvalues and eigenvectors of complex hermitian matrices are a straightforward development of those used for real symmetric matrices.
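The reconstruction test (2.10) takes only a few lines. Here is an illustrative sketch using Python/NumPy, where numpy.linalg.eigh stands in for the course's Fortran SymEig routine (it likewise returns the eigenvalues in ascending order):

```python
import numpy as np

# Sketch of the reconstruction test A = V D V^dagger of eq. (2.10),
# applied to a randomly generated real symmetric test matrix.
rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B + B.T                               # real symmetric by construction

evals, V = np.linalg.eigh(A)              # entries of D, and the matrix V
A_rebuilt = V @ np.diag(evals) @ V.T      # for a real V, dagger = transpose
ok = np.allclose(A_rebuilt, A)            # the "severe test" of the text
```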

2.3. A Matrix Eigenvalue Package

The program SymEigTest, which is on the Physics intranet, contains the module SymmetricEigensystems. This is specifically for real, symmetric matrices; however, the techniques used could easily be converted to deal with complex hermitian matrices. The SymmetricEigensystems module contains several subroutines that you will use:

• SymEig: finds all the eigenvalues and (optionally) all the eigenvectors of an n × n real, symmetric matrix.
• TrdQRL: finds all the eigenvalues and (optionally) all the eigenvectors of a tridiagonal n × n real, symmetric matrix.
• TrdEig: finds the eigenvalues in a certain, specified, range and (optionally) the corresponding eigenvectors of a tridiagonal n × n real, symmetric matrix.

A tridiagonal symmetric matrix has non-zero elements only on the main diagonal and the two adjacent diagonals:

d1 n1 0  0  0  0  ...
n1 d2 n2 0  0  0  ...
0  n2 d3 n3 0  0  ...
0  0  n3 d4 n4 0  ...
0  0  0  n4 d5 n5 ...
0  0  0  0  n5 d6 ...
...

In the case of a tridiagonal matrix there is no essential difference in speed between TrdQRL and SymEig. However, TrdQRL only requires the two non-zero leading diagonals to be stored, which is a great advantage for large matrices. For example, if we are finding the eigenvalues of a 10000 × 10000 tridiagonal matrix then SymEig requires 10^8 matrix elements to be stored (even if most of them are zero) whereas TrdQRL requires only 19999 elements.

Again, in the case of large matrices, both TrdQRL and SymEig provide too much information: for a 10000 × 10000 matrix there are 100,000,000 elements in the eigenvector matrix. TrdEig allows the user to investigate just a few eigenvectors.

These three subroutines are used as follows:

• CALL SymEig(A, eval, evec)
A is the n × n matrix; eval is an n × 1 array containing the eigenvalues in ascending order; evec is the n × n eigenvector matrix.

• CALL TrdQRL(Ad, An, eval, evec)
Ad is an n × 1 array containing the main diagonal elements of the matrix, Ad(j) = A(j, j); An is an (n−1) × 1 array containing the leading upper diagonal elements, An(j) = A(j, j+1) = A(j+1, j); eval is an n × 1 array containing the eigenvalues; evec is the n × n eigenvector matrix.
Note: some care is required in using this subroutine because on input evec needs to be set equal to the unit matrix.

• CALL TrdEig(Ad, An, Lower, Upper, NumEig, eval, evec)
Ad and An are as for TrdQRL; Lower and Upper define the range in which the eigenvalues are required; NumEig is the number of eigenvalues (and eigenvectors) found in this range; eval is an n × 1 array containing the eigenvalues in ascending order; evec is the n × NumEig eigenvector matrix.
If there are more than a certain number of eigenvalues in the range, the program will complain. The maximum number is set to 30 in the subroutine; if you really need more than this, change the variable max_num_eig.

One of the advantages of Fortran lies in the availability of good-quality, well-tested 'libraries' of subroutines. Most physicists make use of such subroutines and incorporate them in their programs. I produced the SymmetricEigensystems module by modifying (to my own needs) subroutines from the well-known library package LAPACK (Linear Algebra Package).
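For comparison, here is an illustrative sketch of the same storage-saving idea using Python/SciPy, whose eigh_tridiagonal routine plays the roles of TrdQRL (all eigenvalues) and TrdEig (only the eigenvalues in a specified range). The course itself uses the Fortran module, and the matrix here is an arbitrary test case with known analytic eigenvalues:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Sketch: like TrdQRL/TrdEig, eigh_tridiagonal stores only the two
# diagonals Ad and An, never the full n x n matrix.
n = 1000
Ad = 2.0 * np.ones(n)         # main diagonal, Ad(j) = A(j, j)
An = -1.0 * np.ones(n - 1)    # leading upper diagonal, An(j) = A(j, j+1)

# TrdQRL-like call: every eigenvalue and eigenvector
evals, evecs = eigh_tridiagonal(Ad, An)

# TrdEig-like call: only the eigenvalues lying in a chosen window
sub_vals, sub_vecs = eigh_tridiagonal(Ad, An, select='v',
                                      select_range=(0.0, 0.001))
```

For this particular matrix the exact eigenvalues are 2 − 2 cos(kπ/(n+1)), k = 1, ..., n, which gives an independent check of the kind recommended in the Program Testing section.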


2.4. Schrödinger Equation

The time-independent Schrödinger Equation for a one-dimensional system is

$$-\frac{\hbar^2}{2m}\frac{d^2\Psi(x)}{dx^2} + V(x)\,\Psi(x) = E\,\Psi(x) \qquad (2.11)$$

This is an eigenvalue equation, with E as the eigenvalue and Ψ(x) as the eigenfunction. I want to show how this differential eigenvalue equation can be expressed as a matrix eigenvalue equation.

The second derivative in (2.11) can be calculated (approximately) as

$$\frac{d^2\Psi(x)}{dx^2} = \frac{\Psi(x+\Delta x) + \Psi(x-\Delta x) - 2\Psi(x)}{\Delta x^2} \qquad (2.12)$$

where ∆x is some suitably small distance.

I now define

$$\Psi_n = \Psi(n\Delta x); \qquad V_n = V(n\Delta x) \qquad (2.13)$$

where n is an integer.

The differential equation can then be written as

$$-\frac{\hbar^2}{2m\,\Delta x^2}\left(\Psi_{n+1} + \Psi_{n-1} - 2\Psi_n\right) + V_n\Psi_n = E\,\Psi_n \qquad (2.14)$$

I can further simplify this by using dimensionless variables. I choose to measure distances in terms of some basic length a and energies in terms of the basic energy $\hbar^2/2ma^2$. In terms of these dimensionless variables, this equation becomes

$$-\frac{1}{\Delta x^2}\left(\Psi_{n+1} + \Psi_{n-1} - 2\Psi_n\right) + V_n\Psi_n = E\,\Psi_n \qquad (2.15)$$

If the potential V is reasonably well-behaved, the wavefunctions go to zero as x → ±∞. Hence there must be a choice for n (remember x = n∆x) beyond which the eigenfunction is (approximately) zero. I call this value N.

Then I can define a column vector Ψ with elements Ψ₋N, ..., Ψ_N and a tridiagonal matrix H. The diagonal elements of H are

$$H_{n,n} = \frac{2}{\Delta x^2} + V_n \qquad (2.16)$$

and the non-zero off-diagonal elements are

$$H_{n,n+1} = H_{n+1,n} = -\frac{1}{\Delta x^2} \qquad (2.17)$$

In terms of this (2N + 1) × (2N + 1) matrix the eigenvalue equation is

$$H\Psi = E\,\Psi \qquad (2.18)$$

2.4.1. Harmonic Oscillator

The potential for a harmonic oscillator can be written as

$$V(x) = \tfrac{1}{2}m\omega^2 x^2$$

and if I choose the unit of distance to be

$$a = \sqrt{\frac{\hbar}{m\omega}}$$

then the unit of energy is

$$\tfrac{1}{2}\hbar\omega$$

and the potential is

$$V_n = n^2\,\Delta x^2$$

This is a useful test problem because the exact results are known: the exact eigenvalues are 1, 3, 5, 7, ....

In order to solve numerically to a reasonable accuracy we need to choose

• ∆x ≪ 1
• N∆x ≫ 1

In order to calculate the first few eigenvalues, ∆x = 0.001 and N = 10000 should be sufficient to give fairly accurate results. However, you may need to experiment with these values.
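The scheme of equations (2.15)–(2.18) can be checked quickly in Python (an illustration only; the module itself uses the Fortran eigenvalue subroutines). The grid here is deliberately coarser than the ∆x = 0.001, N = 10000 suggested above, to keep the dense matrix small:

```python
import numpy as np

dx = 0.05
N = 200                        # grid points n = -N ... N, so x covers ±10
n = np.arange(-N, N + 1)
V = (n * dx) ** 2              # dimensionless harmonic potential V_n = n^2 dx^2

# Tridiagonal Hamiltonian from equations (2.16) and (2.17)
H = np.diag(2.0 / dx**2 + V)
H += np.diag(-np.ones(2 * N) / dx**2, 1)
H += np.diag(-np.ones(2 * N) / dx**2, -1)

evals = np.linalg.eigvalsh(H)  # eigenvalues in ascending order
print(evals[:4])               # close to the exact values 1, 3, 5, 7
```

Even at this coarse spacing the first few eigenvalues agree with the exact dimensionless values 1, 3, 5, 7 to a few parts in 10⁴, because the finite-difference error scales as ∆x².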

2.4.2. Spherically Symmetric 3D Systems

In a spherically symmetric 3D system the wavefunction in spherical polar coordinates can be written as

$$\frac{1}{r}\,\Psi(r)\,Y_{lm}(\theta,\phi) \qquad (2.19)$$

where the Y_{lm} are spherical harmonic functions; l and m are the angular momentum quantum numbers. The radial function Ψ(r) satisfies the same equation as (2.11) except that a term

$$\frac{\hbar^2\,l(l+1)}{2mr^2}$$

needs to be added to the potential; and, of course, r is positive.

For the case of the 3D harmonic oscillator, the eigenvalue equation can still be written as (2.18) except that the matrix indices now run from 1 → N and the potential is

$$V_n = n^2\,\Delta x^2 + \frac{l(l+1)}{n^2\,\Delta x^2}$$

In order to calculate the first few eigenvalues, for small values of l (l = 0, 1, 2), ∆x = 0.001 and N = 10000 should be sufficient.

In the case of the hydrogen atom, if we use the Bohr radius as the unit of distance, the corresponding potential is

$$V_n = -\frac{2}{n\,\Delta x} + \frac{l(l+1)}{n^2\,\Delta x^2}$$

Exercises

1. [3 marks] (a) Solve the eigenvalue equations for the 2 × 2 asymmetric matrix

$$\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$$

Show the steps in the process in detail. Show that it does not have a complete set of eigenvectors.

(b) Repeat the calculation for the symmetric matrix

$$\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$$

Discuss the differences.

(c) Determine the eigenvalue and eigenvector matrices D and V for the matrix in (b) and show that equation (2.10) is satisfied.

2. [4 marks]

(a) Download the program SymEigTest.F95; make a working copy of this with a different name.

The program constructs a random n × n symmetric matrix, with n initially set to 10. It then calls SymEig to find all the eigenvalues and eigenvectors. (In fact it calls this 100 times, just to make the computer time long enough to measure accurately!) Then it uses equation (2.10) to attempt to reconstruct the original matrix. It finally calculates the rms error in the reconstructed matrix (by comparing it to the original).

Run the program with matrix sizes 10, 20, ..., 100. Record the results. Deduce how the time to operate the subroutine depends on n.

(b) Modify the program so that the random symmetric matrix is now a random symmetric tridiagonal matrix. Repeat the above calculations.

(c) Next modify the program to use the TrdQRL subroutine, which is specifically intended for tridiagonal symmetric matrices. How does the performance compare with that of SymEig?

3. [4 marks]

Modify the program so as to make use of the subroutine TrdEig. Determine the lowest four eigenvalues and corresponding eigenvectors of the 1D harmonic oscillator and plot the eigenvectors.

4. [6 marks]

(a) Modify the program so as to be able to treat radial functions for spherically symmetric 3D systems. Test this by finding the first four eigenvalues for l = 0, l = 1 and for l = 2. Plot the lowest four eigenfunctions for l = 2.

(b) Set up the eigenvalue equations for the hydrogen atom; choose the Bohr radius to be the unit of distance. In this case suitable sizes of the parameters are: ∆x = 0.001 and N = 200000. Yes! You really are going to find eigenvalues of a 200000 × 200000 matrix. You may need to experiment with the parameters ∆x and N.

Find the first four eigenvalues for l = 0, l = 1 and for l = 2 and then plot the lowest four eigenfunctions for l = 2.

Note: in these dimensionless units the Coulomb potential is −2/(n∆x) and the lowest eigenvalue should be −1.

Remember that you must finish your work on this chapter by writing an abstract in your laboratory notebooks. This abstract should summarise in about 300 words what you have learnt and whether the objectives of this chapter have been met.

Chapter 3

RANDOM PROCESSES

Version Date: Thursday, 30 August, 2007 at 11:21


3.1. Objectives

This chapter provides an introduction to random processes in physics. On completion, you will be familiar with the random number generator in FORTRAN 95, and will have gained experience in using it in two applications. You will also be ready to tackle later chapters that develop computational studies of random systems.


3.2. Introduction

It is convenient to describe models of physical processes as either deterministic or random. An obvious example of the former is planetary motion and its description via Newton's equations of motion: given the positions and momenta of the particles in the system at time t, we can predict the values of all the positions and momenta at a later time t′. Even the solution of Schrödinger's equation is in a sense deterministic: we can predict the time evolution of the wave function in a deterministic way even though the wave function itself carries a much more restricted amount of information than a classical picture provides.

The obvious example in physics of a theory based on randomness at the microscopic level is statistical mechanics. There may well be deterministic processes taking place, but they do not concern us because we can only observe the net effect of a vast number of such processes, and this is much more amenable to description on a statistical basis. But a statistical basis does not only concern statistical mechanical (and thermodynamic) systems. Many physical systems are inherently disordered and defy a simple deterministic analysis: the passage of a liquid through a porous membrane (oil through shale, for example), electrical breakdown in dielectrics, the intertwining of polymer chains, and galaxy formation are some examples of random processes.

Statistical mechanics uses concepts like entropy, partition functions, Boltzmann, Fermi or Bose statistics and so on to describe the net effect of random processes. In computer simulations, one actually models the microscopic random processes themselves. To model randomness, we need to have something to provide the element of chance - like a coin to toss, or a dice to throw. Of course, in computing, we do not use coins or dice but rather random number generators to inject the statistics of chance, and we start by seeing how they work.
3.3. Random Number Generators

3.3.1. The basic algorithm

Random number generators are more precisely known as pseudo-random number generators. The sequence of numbers they produce can be predicted if the algorithm is known, but there should be no correlations between the numbers along the sequence. In practice, the sequence will repeat itself, but the period of the cycle should be longer than the equivalent scale of the process one wants to simulate. Random number generators are based on the algorithm

$$x_{n+1} = (a\,x_n + c) \bmod m$$

where x_{n+1} and x_n are respectively the (n + 1)th and nth numbers in the sequence. The starting integer in the sequence, x₀, is called the seed. All the quantities in the expression, including the numbers themselves and the constants a, c, m, are integers. y = z mod m means that y is the remainder left after dividing z by m. For example, (27 mod 5) equals 2.

⇒ Now go to Exercise 1
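The recurrence is only a few lines of code. Here is a sketch in Python (the module's own programs are in Fortran 95), using the example parameters from Exercise 1, for which the text gives the expected sequence 1, 7, 5, 3, repeating:

```python
def lcg(a, c, m, x0, count):
    """Generate `count` terms of the sequence x_{n+1} = (a*x_n + c) mod m."""
    xs, x = [], x0
    for _ in range(count):
        xs.append(x)
        x = (a * x + c) % m
    return xs

print(lcg(5, 2, 8, 1, 8))  # → [1, 7, 5, 3, 1, 7, 5, 3]
```

With m = 8 the period can be at most 8; here it is 4, which is exactly why practical generators use a huge modulus such as the m = 2⁴⁸ quoted below.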

You will see, from the Exercise, that with a judicious choice of parameters we can produce a pseudo-random sequence of integers i such that 0 ≤ i < m (note that 0 appears in the sequence but m does not). Usually real random numbers r between 0 and 1 are required. This can be done using the algorithm with a final step r = REAL(i)/REAL(m), such that 0 ≤ r < 1.

In practice the numbers m, a and c are chosen to give a large range of integers and a large period (before the sequence starts to repeat). A random number generator that I most often use has

$$m = 2^{48} \qquad a = 33952834046453$$

3.4. Intrinsic Subroutine

All computing systems have a built-in random number generator (usually based on more than one of the basic generators just studied) that has been optimized, and one would normally use that. For FORTRAN 95, the simplest use is as follows:

CALL RANDOM_NUMBER(r)

r is the generated random number (with 0 ≤ r < 1), and r must be declared as REAL (or better still REAL (KIND=...)). You can also declare r as a one-dimensional real array; in this case the subroutine returns a (different) random number in each element of the array.

If you repeat a run of a program containing this call, the same set of random numbers is produced. This is not what is usually required, and it can be overcome by seeding the random number generator at the start of the program using the system clock.

The standard random number generator has a seed which is an array of several integers. The 'several' can be different for different compilers. For Salford 'several' is actually 1; in the Lahey Fortran compiler it is 4; and in the free GNU gfortran compiler it is 8.

It is good practice to write the code so that it works on any compiler. The following example shows how to do this.

INTEGER :: j, k, count
REAL (KIND=DP) :: r
INTEGER, ALLOCATABLE :: seed(:)

CALL RANDOM_SEED(SIZE=k)
ALLOCATE(seed(k))
DO j = 1, k
   CALL SYSTEM_CLOCK(count)
   seed(j) = count + j*j
END DO

CALL RANDOM_SEED(PUT=seed)

WRITE(*, *) 'Random number seed = ', seed

CALL RANDOM_NUMBER(r)

The first CALL of RANDOM_SEED finds out the size of the seed array and puts this size into the integer k. The second call puts the correct seed array into the random number generator. Note that count (the current value of the system clock) is an integer and that DP denotes what kind of real numbers you are using.

It is sensible to write out the seed value just in case you do want to rerun the program with exactly the same set of random numbers.

3.4.1. Different number ranges

Often we want to generate random numbers over a range different from 0 to 1. This is straightforward. If we want a real random number x between −2 and +2, for example, this can be obtained from the random number generator output r using the statement: x = −2.0 + 4.0*r. Generally, use the expression x = a + (b − a)*r if the required range is a to b.

We have to be a little more careful with integers. Suppose we were simulating the throw of a dice (and needed to generate a random integer from the set 1, 2, ..., 6). If d is the required random integer, we can use the statement: d = INT(6*r) + 1. Calculation of INT(6*r) gives one of the integers 0, 1, ..., 5.

3.4.2. Testing your random generator

The random generator is supposed to generate a random number r which is uniformly distributed in the range 0 ≤ r < 1. If this is so then we can easily calculate the following average mathematically

$$\langle r^p\rangle = \frac{1}{p+1}$$

where p is any integer. We could then check the average computationally by generating N random numbers r₁ ... r_N and then forming the average

$$\langle r^p\rangle = \frac{1}{N}\sum_{k=1}^{N} r_k^p$$

When programming this it is not necessary to define an array for the random numbers! For large N the two averages – theoretical and computational – should be very nearly the same, and the difference between them should reduce as N gets larger.

Note that for a large value of N it is not sensible to use the standard REAL variables since these give only about 1 part in 10⁶ accuracy.

A slightly more complicated test checks whether there is any correlation between neighbouring random numbers generated. Suppose we have two independent random variables r and s; then mathematically we have

$$\langle r^p s^q\rangle = \frac{1}{(p+1)(q+1)}$$

We then calculate this average using our computer random number generators. Generate N pairs of random numbers (r₁, s₁), ..., (r_N, s_N) and then form the average

$$\langle r^p s^q\rangle = \frac{1}{N^2}\sum_{j=1}^{N}\sum_{k=1}^{N} r_j^p\, s_k^q$$

Again, when programming this it should not be necessary to define an array for the random numbers!

There are more sophisticated tests, but these simple tests should show if something is wrong with the generator.
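The moment test above is easily sketched in Python (an illustration; the exercise asks for a Fortran program using RANDOM_NUMBER). The generator is seeded here so the run is reproducible:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000
r = rng.random(N)              # N uniform random numbers in [0, 1)

# Compare <r^p> with the theoretical value 1/(p+1) for a few p
for p in (1, 2, 6, 12):
    print(p, (r**p).mean(), 1 / (p + 1))
```

For N = 100,000 the computed and theoretical averages typically agree to about three decimal places, and the agreement improves as N grows, as the text predicts. (Note this version does store the numbers in an array for brevity; the running-sum approach the text recommends avoids that.)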

⇒ Now go to Exercise 2


3.5. Monte Carlo Integration

There is a wide range of calculations in Computational Physics that rely on the use of a random number generator. Generally they are known as Monte Carlo techniques. We will start by looking at how they are applied to integration. There are two approaches to choose from: the 'hit and miss' method and the 'sampling' method.

3.5.1. Hit and Miss Method

Here is an analogy to help explain the method. An experimental way to measure the area of the treble twenty on a dart board is to throw the darts at the board at random; if N60 is the number hitting the treble twenty and N is the total number of darts thrown, then the area A60 of the treble twenty is given by the equation A60 = A ∗ N60/N, where A is the total area of the board.

⇒ Now go to Exercises 3(a) and (b)
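The same idea works for any region whose bounding box we know. As a sketch (not the dartboard or π exercise itself), here is hit-and-miss in Python for the area under f(x) = e⁻ˣ on [0, 1], whose exact value is 1 − 1/e ≈ 0.632:

```python
import math
import random

random.seed(7)                 # seeded so the run is reproducible
N, hits = 100_000, 0
for _ in range(N):
    x = random.random()        # random point in the unit square
    y = random.random()
    if y < math.exp(-x):       # "hit": point lies below the curve
        hits += 1

area = hits / N                # (area of bounding square) * hits/N; square has area 1
print(area, 1 - math.exp(-1))
```

The statistical error of the estimate falls off as 1/√N, so each extra decimal place of accuracy costs a factor of 100 in the number of "darts".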

3.5.2. Sampling Method

This method can be summarised by the equation

$$\int_a^b f(x)\,dx = \frac{(b-a)}{N}\sum_{i=1}^{N} f(x_i)$$

Choose N random numbers x_i in the range a < x_i < b, calculate f(x_i) for each, take the average, and then multiply by the integration range (b − a). It is similar to Simpson's rule, but in that case the values of x_i are evenly distributed. We have seen already how to generate random numbers in the range a to b. The extension to a multidimensional integral is easy. For example, in two dimensions, write

$$\int_a^b dx \int_c^d dy\; f(x,y) = \frac{(b-a)(d-c)}{N}\sum_{i=1}^{N} f(x_i, y_i)$$

In this case choose N pairs of random numbers (x_i, y_i) and go through a similar procedure. The extension to an arbitrary number of dimensions is straightforward.

Generally, from the point of view of accuracy, it is better to use 'conventional' methods like Simpson's rule for integrals in low dimensions and Monte Carlo methods for high dimensions. If you have a body with an awkward shape, however, Monte Carlo methods are useful even at low dimensionality.

Why are Monte Carlo methods better in higher dimensions? If we pick n random numbers for our integration the error is proportional to n^{−1/2}, independent of the number of dimensions. For the trapezoidal approximation and for Simpson's rule the errors are proportional to n^{−1/d} and n^{−2/d} respectively. Here n is the number of strips the integration range is divided into, and d is the dimensionality. Increasing n is more effective at reducing errors in Monte Carlo than in the trapezoidal rule for d > 2. In comparison with Simpson's rule, Monte Carlo wins out for d > 4.

⇒ Now go to Exercises 3(c) and (d)
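A minimal sketch of the sampling method in Python (an illustration; the integrand here is chosen for the example and is not the one in Exercise 3). We estimate ∫₀^π sin x dx, whose exact value is 2:

```python
import math
import random

random.seed(3)                           # seeded so the run is reproducible
a, b, N = 0.0, math.pi, 100_000
total = 0.0
for _ in range(N):
    x = a + (b - a) * random.random()    # random x in (a, b)
    total += math.sin(x)

estimate = (b - a) * total / N           # (b - a) times the average of f
print(estimate)                          # close to 2
```

Compare this with the hit-and-miss sketch: sampling uses every random point (no point is "wasted" as a miss), which usually gives a smaller statistical error for the same N.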

3.6. Nuclear Decay Chains

Now let us consider a nuclear decay sequence with a number of different daughter products:

$${}^{234}_{92}\mathrm{U} \;\xrightarrow{250{,}000\,\mathrm{yr}}\; {}^{230}_{90}\mathrm{Th} + \alpha$$
$${}^{230}_{90}\mathrm{Th} \;\xrightarrow{80{,}000\,\mathrm{yr}}\; {}^{226}_{88}\mathrm{Ra} + \alpha$$
$${}^{226}_{88}\mathrm{Ra} \;\xrightarrow{1{,}620\,\mathrm{yr}}\; {}^{222}_{86}\mathrm{Rn} + \alpha$$
$${}^{222}_{86}\mathrm{Rn} \;\xrightarrow{\text{fast}}\; {}^{206}_{82}\mathrm{Pb}$$

The times shown are half-lives in years. ²³⁴U (Z = 92) is produced from ²³⁸U by a decay with a half-life of 4.5 × 10⁹ years, which is so long compared with the half-lives in the above chain that we can ignore this factor in the change of the number of ²³⁴U nuclei. The decay of ²²²Rn to the stable ²⁰⁶Pb is by a chain of disintegrations and takes place very rapidly on the time-scale being considered here; T₁/₂ for ²²²Rn is about 4 days. Therefore we can in effect consider the decay of Ra to be directly to the stable Pb isotope.

The half-life of a nucleus is defined as the time taken for a half of the nuclei in a large population to decay. For an individual nucleus the decay really is a random process, with only the probability of decay defined. We can use either an analytic method or a computer simulation to describe the system.

3.6.1. Analytic approach

In the analytic approach we deal with the statistical averages of the numbers of each type of particle. Because we are considering statistical averages, these quantities are not, of course, integers.

Let N₁(t) be the statistical average of the number of ²³⁴U nuclei at time t, N₂(t) that of the ²³⁰Th nuclei, N₃(t) that of the ²²⁶Ra nuclei, and N₄(t) that of the ²⁰⁶Pb nuclei.

We take the initial condition (at t = 0) to be N₁ = N; N₂ = N₃ = N₄ = 0.

At a general time t, N₁(t) + N₂(t) + N₃(t) + N₄(t) = N, i.e. the total number of nuclei is conserved.

The rate equations for the chain of decays are:

$$\frac{dN_1}{dt} = -\lambda_1 N_1 \qquad \frac{dN_2}{dt} = \lambda_1 N_1 - \lambda_2 N_2$$

$$\frac{dN_3}{dt} = \lambda_2 N_2 - \lambda_3 N_3 \qquad \frac{dN_4}{dt} = \lambda_3 N_3$$

where the decay constant λ = ln 2 / T₁/₂ = 0.693 / T₁/₂.

The analytic solution to these equations with the given initial conditions is

$$N_1(t) = N \exp(-\lambda_1 t)$$

$$N_2(t) = \frac{\lambda_1 N}{\lambda_2 - \lambda_1}\left[\exp(-\lambda_1 t) - \exp(-\lambda_2 t)\right]$$

$$N_3(t) = \lambda_1 \lambda_2 N\left[\frac{\exp(-\lambda_1 t)}{(\lambda_1-\lambda_2)(\lambda_1-\lambda_3)} + \frac{\exp(-\lambda_2 t)}{(\lambda_2-\lambda_3)(\lambda_2-\lambda_1)} + \frac{\exp(-\lambda_3 t)}{(\lambda_3-\lambda_1)(\lambda_3-\lambda_2)}\right]$$

$$N_4(t) = N - \left\{N_1(t) + N_2(t) + N_3(t)\right\}$$
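As a quick cross-check of these analytic solutions, one can integrate the rate equations directly with small Euler steps and compare. This is an illustration in Python, under the assumption that the module's analytic calculation is done in the supplied Fortran program:

```python
import math

half_lives = (250_000.0, 80_000.0, 1_620.0)          # years, from the chain
l1, l2, l3 = (math.log(2) / T for T in half_lives)   # decay constants
N = 1.0                                              # initial amount (any scale)

def analytic(t):
    """The solutions N1(t), N2(t), N3(t), N4(t) quoted above."""
    n1 = N * math.exp(-l1 * t)
    n2 = l1 * N / (l2 - l1) * (math.exp(-l1 * t) - math.exp(-l2 * t))
    n3 = l1 * l2 * N * (
        math.exp(-l1 * t) / ((l1 - l2) * (l1 - l3))
        + math.exp(-l2 * t) / ((l2 - l3) * (l2 - l1))
        + math.exp(-l3 * t) / ((l3 - l1) * (l3 - l2)))
    return n1, n2, n3, N - (n1 + n2 + n3)

# Euler integration of the rate equations with a 1-year time step
n1, n2, n3 = N, 0.0, 0.0
dt, t_end = 1.0, 100_000.0
for _ in range(int(t_end / dt)):
    n1, n2, n3 = (n1 - l1 * n1 * dt,
                  n2 + (l1 * n1 - l2 * n2) * dt,
                  n3 + (l2 * n2 - l3 * n3) * dt)

a1, a2, a3, _ = analytic(t_end)
print(n1 - a1, n2 - a2, n3 - a3)   # all differences are tiny
```

The two calculations agree to better than one part in a thousand, confirming the signs and denominators in the Bateman-type solution for N₃.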

3.6.2. Computer simulation

We are now going to take a different approach in which we try to simulate the random physical processes. We can regard a nucleus as existing in one of 4 states as defined above:

State 1 ≡ U; state 2 ≡ Th; state 3 ≡ Ra; state 4 ≡ Pb

The probability P_i that a nucleus in state i decays within a time interval ∆t to the next state (i + 1) is given by P_i = λ_i ∆t. When it reaches state 4 there is no further decay, of course. We suppose that ∆t is small enough for the possibility of double decays such as

U → Th → Ra

within ∆t to be negligible.

We can simulate the decay process for one nucleus by choosing a random number in the range 0 to 1 and comparing this with P_i. If the random number is less than P_i then the decay takes place; if not, the nucleus remains in the same state. This trick is very common in computer simulations of random processes.

We start with N nuclei in state 1, and simulate the decay of each one of the nuclei in a succession of time intervals ∆t.

In such a computer simulation the numbers of each type of particle are of course integers, as they are in the real physical case.

⇒ Now go to Exercise 4

3.7. Exercises

1. [2 Marks]

Check on paper that you understand what is happening. Take a = 5, c = 2, m = 8, and x₀ = 1 and generate the first few random numbers. You should find they start as follows: 1, 7, 5, 3, and then repeat. Now go through the same process with the following sets of numbers: (a = 3, c = 4, m = 8, x₀ = 1) and then (a = 5, c = 1, m = 8, x₀ = 1). Record your sequences of numbers.

2. [4 Marks]

(a) Write a program which tests the built-in random number generator by calculating the averages ⟨r^p⟩ for a few values of p in the range 1 to 12. Do this for the cases where the number of random numbers generated, N, is 100,000, 1,000,000 and 10,000,000. Record your test results.

(b) Modify the program to calculate the averages ⟨r^p s^q⟩, using the built-in random number generator, for values of p, q in the range 1 to 12. Use a double summation to do this;

$$\langle r^p s^q\rangle = \frac{1}{N^2}\sum_{j=1}^{N}\sum_{k=1}^{N} r_j^p\, s_k^q$$

where N is 1000 and 10,000. Record your test results.

3. [5 Marks]

(a) We will use the random number generator to calculate π. Consider a circle of radius 1 unit and centre at the origin (0,0); it just fits in a square with corners at the points (−1,−1), (−1,+1), (+1,−1), (+1,+1). Now generate a pair of random numbers (x, y) each between −1 and +1. They are inside the square. What is the condition for them to be inside the circle as well? Repeat this till you have generated a total of N points. If Nc points were also inside the circle, the ratio of the area of the circle to that of the square is Nc/N – but the area of the square is 4, so the area of the circle is given by 4Nc/N. But since we know the answer is πr² and r = 1, we have a way of determining π. Write a program to do this. You should aim for this kind of accuracy: π = 3.14 ± 0.01.

(b) Although we expressed it as an evaluation of π, the last exercise was really a calculation of the area of a circle. We know that the area of a circle of radius r is given by A = πr². The analogous quantity in 3 dimensions is the volume of a sphere, V = (4/3)πr³. What is the equivalent quantity in 4 dimensions? Presumably the 'hypervolume' of a 4D 'hypersphere' of radius r is given by H = Cr⁴. Make (very minor) modifications to your circle program to calculate C. [Exact result is C = π²/2.]

(c) The mean energy (kinetic) of an atom of a Boltzmann gas of non-interacting atoms moving in

1 dimension is given by

$$E = I_1/I_2 \quad\text{where}\quad I_1 = \int_{-\infty}^{\infty} \frac{p^2}{2m}\,\exp\!\left(-p^2/2mkT\right) dp \quad\text{and}\quad I_2 = \int_{-\infty}^{\infty} \exp\!\left(-p^2/2mkT\right) dp$$

p is the momentum. With a change of variables, α = p/√(2mkT), this can be rewritten as

$$E = kT\,(J_1/J_2) \quad\text{where}\quad J_1 = \int_0^{\infty} \alpha^2 \exp(-\alpha^2)\,d\alpha \quad\text{and}\quad J_2 = \int_0^{\infty} \exp(-\alpha^2)\,d\alpha$$

Write a program that employs the sampling method to calculate J₁ and J₂ and thus the coefficient J₁/J₂. You will have to cut off the upper limit of the integrals at some value b. Increase the values of b and N until you have a result that is accurate to 2 decimal places, but estimate a reasonable value of b by hand before you start computing (for what value of α does the integrand become small?). Is your result what you expect?

(d) Now calculate the mean energy (translational kinetic energy + vibrational potential energy) of an ideal gas of diatomic molecules confined to 1 dimension. It is a 3-variable problem - the momenta p₁ and p₂ of the 2 atoms of the molecule and their displacement x from the equilibrium separation. Besides kinetic energy terms p₁²/2m and p₂²/2m we have a potential term µω²x²/2, where µ is the reduced mass and ω is the natural frequency. If we again do a change of variables, β = x√(µω²/2kT), we can write an expression for E as in the previous case, but now

$$J_1 = \int_0^{\infty}\! d\alpha_1 \int_0^{\infty}\! d\alpha_2 \int_0^{\infty}\! d\beta\; \left(\alpha_1^2 + \alpha_2^2 + \beta^2\right) \exp\!\left[-\left(\alpha_1^2 + \alpha_2^2 + \beta^2\right)\right]$$

$$J_2 = \int_0^{\infty}\! d\alpha_1 \int_0^{\infty}\! d\alpha_2 \int_0^{\infty}\! d\beta\; \exp\!\left[-\left(\alpha_1^2 + \alpha_2^2 + \beta^2\right)\right]$$

What is the coefficient of kT in this case? What result did you expect to obtain?

4. [6 Marks]

Compile and execute the program Nuclear_Decay.f95. This does the analytic part of the calculation and prints out the results every 50,000 years up to 2,000,000 years. It asks you to input the number of nuclei N, and a time step dt. Use 1.0 for dt for the moment (dt is not relevant until you write the Monte Carlo code). Parts of the program relevant only for the Monte Carlo part are indicated by comments. When you have understood the program you can go on to the Monte Carlo part.

An array nr(1:4) has been set up for you which holds the number of nuclei of each type. nr(2:4) have been initialized to zero and nr(1) to the total number of nuclei. That is, initially each nucleus is of type 1. Your task is to write a subroutine for the MC calculation and incorporate it in the program. The program is set up so that your subroutine can be called each time step to calculate the updated values of nr. In each time step you have to consider each nucleus and determine whether it decays, using the criterion given in the text. Notice that the sum

$$\sum_{j=1}^{4} nr(j)$$

should remain constant.

First choose a reasonable time step and justify your choice. Now run your program with various numbers of atoms in your sample, say 100, 1000, 10000 and 100000. What sort of value of N (number in the program) do you need to use for the analytic and MC results to be similar? Display and comment on the results that you obtain.


Chapter 4

MONTE CARLO SIMULATION

Version Date: Wednesday, 1 October 2003


4.1. OBJECTIVES

The Metropolis Monte Carlo algorithm is one of the most important techniques in Computational Physics for dealing with random systems; it also provides a way of including temperature in the modelling. The main objective of this chapter is to introduce Monte Carlo methods. Inevitably, modelling is done on systems with a small number of particles, whereas in real systems we are dealing with ∼10²³ particles. Fluctuations about the mean become a dominant feature in small systems, and it is important that we understand size effects. Developing a feeling for this subject is the first priority of this chapter. A simple model contains the main features.


4.2. EQUILIBRIUM AND FLUCTUATIONS

Suppose we have two boxes, one of which contains a certain number of molecules of a gas and in the other is a vacuum. If these boxes are joined then gas will flow from one to the other until equilibrium is reached (a state of uniform density).

To define fully a state of the system we have to specify the position and momentum of each molecule of the gas. Let us investigate the approach to equilibrium with a drastic simplification. We will concern ourselves only with which box a molecule is in, and ignore details about positions and velocities.

Suppose there are N molecules in total and, at each instant, N_L are in the left hand box and N_R are in the right hand one (of course, N_L + N_R = N). The table below describes the possible situations for N = 6. There are N + 1 states of the system, distinguished by the number of molecules in each box.

State  N_L  N_R  No. of configurations, Ω  Prob. L to R  Prob. R to L  ln Ω
1      6    0    1                         1             0             0.000
2      5    1    6                         5/6           1/6           1.792
3      4    2    15                        2/3           1/3           2.708
4      3    3    20                        1/2           1/2           2.996
5      2    4    15                        1/3           2/3           2.708
6      1    5    6                         1/6           5/6           1.792
7      0    6    1                         0             1             0.000

The number of configurations (or microstates) associated with a particular state is given by Ω = N!/(N_L! N_R!). In the above example, Ω of state 2 is 6 because any of the 6 molecules could be the one in the right hand box (molecules here are treated as classical – they are distinguishable). The entropy S of a particular state is given by S/k_B = ln Ω.

As the system evolves, suppose one molecule moves from one box to the other in each time step. It is reasonable to take the probability that it will be a left to right move as N_L/N and for a right to left move as N_R/N. It is clear from the table that the tendency will be a move toward state 4 (the equilibrium state - the one of maximum entropy), but there will certainly be fluctuations about this position.

Even if we are in the equilibrium state, there is a chance that fluctuations could lead us in a few time steps into a state where all the molecules are back in one of the boxes. The probability (see table) that this occurs in just three time steps is (1/2) × (1/3) × (1/6) = 1/36. This is not extremely long odds, but think what is the likelihood of a similar situation occurring for larger values of N.

The program boxes.f95 simulates the above model. You can enter N and the number of time-steps; you can select to start with the particles equally distributed or all in the left hand box; you can also choose graphical output. The random number generator decides on a L to R or R to L move. Look at the program and make sure you understand what it does.

⇒ Go to Exercise 1(a)

Now let us try to get something more quantitative about the fluctuations from the simulations. The variance σ² is defined as:

$$\sigma^2 = \langle N_L^2\rangle - \langle N_L\rangle^2$$

and σ provides us with a measure of the size of the fluctuations. The averages (denoted by angular brackets) are taken over the time period of the simulation. The ratio σ/⟨N_L⟩ is an informative way to express the behaviour.

⇒ Go to Exercise 1(b)

You will have observed in Exercise 1(a) that if N is small, then quite frequently you find all of the particles in one of the boxes; by contrast, if N is large, this dramatic departure from equilibrium is an extremely rare event. We can make this observation more quantitative. Let us assume that the probability that there are N_L particles in the left-hand box is given by a normal (Gaussian) distribution with width σ:

$$P(N_L) = \left(\sigma\sqrt{2\pi}\right)^{-1} \exp\!\left[-\left(N_L - \langle N_L\rangle\right)^2 \Big/ \left(2\sigma^2\right)\right]$$

Given the values of ⟨N_L⟩ and σ (see Appendix), we could argue that the probability of finding all or none of the particles in one of the boxes is

$$2\sqrt{2/\pi N}\,\exp(-N/2)$$

Then, the number of times in a run that we will find all the particles in one box is the product of this probability and the number of time-steps. Note, this argument is only a rough one, but it should give an order of magnitude estimate.

⇒ Go to Exercise 1(c)
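The two-box dynamics described above can be sketched in a few lines of Python (an illustration only; boxes.f95 is the program the exercises use). At equilibrium ⟨N_L⟩ = N/2, and for a binomial distribution the variance should come out near N/4:

```python
import random

random.seed(11)                   # seeded so the run is reproducible
N, steps = 100, 100_000
NL = N // 2                       # start from the equally-distributed state
s1 = s2 = 0.0
for _ in range(steps):
    if random.random() < NL / N:  # L-to-R move with probability NL/N ...
        NL -= 1
    else:                         # ... otherwise an R-to-L move
        NL += 1
    s1 += NL
    s2 += NL * NL

mean = s1 / steps
var = s2 / steps - mean * mean    # sigma^2 = <NL^2> - <NL>^2
print(mean, var)                  # near N/2 = 50 and N/4 = 25
```

The measured σ/⟨N_L⟩ is then about (√N/2)/(N/2) = 1/√N, which is why the dramatic all-in-one-box fluctuation becomes invisible for large N.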


4.3. MONTE CARLO SIMULATIONS – THE PRINCIPLES

We have to find a way to introduce temperature into a simulation. In statistical mechanics we can calculate a thermodynamic average of some quantity A by performing a weighted sum over all configurations (microstates) of the system:

<A> = Z⁻¹ Σ_s A_s exp(−E_s/k_BT)

where A_s is the value taken by A in microstate s, and Z is the partition function

Z = Σ_s exp(−E_s/k_BT)

In a computer simulation, what we would like to do is perform a trajectory through phase space in such a way that microstate s is visited with probability exp(−E_s/k_BT)/Z. Averaging A over the trajectory will then reproduce the same <A> that we get in the statistical mechanics calculation (canonical ensemble).

There is no unique way of doing this, but one of the most widely used is the Metropolis Monte Carlo algorithm.

4.4. The Metropolis Monte Carlo Algorithm

The algorithm is best described by way of an example. Suppose we have a system of N spins (elementary magnets), each of which can point up or down. There are 2^N microstates of the system; a microstate is determined by specifying each spin (up or down) of the system. We assume that the spins interact with each other in some way, so that the energy associated with any microstate is known. A trajectory through phase space governed by the algorithm is generated as follows.

(i) Choose one of the 2^N microstates in which to start.
(ii) Pick one of the spins at random (using the random number generator).
(iii) Consider the new microstate obtained by reversing the direction of the selected spin; calculate the change in energy ΔE that occurs if the system is allowed to jump to the new microstate.
(iv) If there is a decrease in energy, ΔE ≤ 0, move to the new microstate (ie flip the selected spin).
(v) If there is an increase in energy, ΔE > 0, make the move with probability exp(−ΔE/k_BT); ie on some occasions, when ΔE > 0, the jump is made, while on others the system remains unchanged.
(vi) Repeat the process from (ii) until enough data is collected.
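The steps above can be sketched in a few lines of Python (the module's own programs are in Fortran 95; the function names and the energy_change callback here are illustrative assumptions). The interaction enters only through the ΔE calculation:

```python
import math
import random

def metropolis_sweep(spins, beta, energy_change, rng):
    """One sweep of steps (ii)-(v): attempt len(spins) single-spin flips.
    energy_change(spins, i) must return Delta E for flipping spin i."""
    n = len(spins)
    for _ in range(n):
        i = rng.randrange(n)                       # (ii) pick a spin at random
        delta_e = energy_change(spins, i)          # (iii) energy change on flipping
        # (iv)-(v): accept if Delta E <= 0, otherwise with prob exp(-beta*dE)
        if delta_e <= 0.0 or rng.random() < math.exp(-beta * delta_e):
            spins[i] = -spins[i]
    return spins

# Toy check: independent spins in a unit field, for which Delta E = 2*S_i
# and <S> should settle near tanh(beta), about 0.76 at beta = 1.
rng = random.Random(1)
spins = [rng.choice((-1, 1)) for _ in range(1000)]
for _ in range(100):
    metropolis_sweep(spins, 1.0, lambda s, i: 2.0 * s[i], rng)
print(sum(spins) / len(spins))
```

The same routine serves any model once energy_change encodes the interactions; the toy field-only case is used here only because its exact answer is known.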

The example used for illustration is a simple discrete one in which each entity (a spin) has only two possible states. We could equally well apply the principle to an assemblage of particles (in a fluid, say). In that case step (ii) would be to pick a particle at random, and step (iii) would involve calculating the change in potential energy if its position were changed randomly by a small amount.

Note that steps (iv) and (v) are what define the Metropolis algorithm. One could use alternative recipes that would still provide a valid simulation of the canonical ensemble (see Mathematical Appendix 2 for the condition that has to be fulfilled). The Metropolis method is the most widely used, however, and we will not consider other choices.

4.5. The Ising Model

The spin system used in the introduction to the Metropolis algorithm is known as the Ising model. The energy of a pair of spins is generally written as −J if they are parallel and +J if they are antiparallel. A positive J provides a simple model for ferromagnetism and a negative one for antiferromagnetism. The table summarises the situation for a single pair of spins, for which there are 2² microstates. If we represent a spin numerically as S = +1 (up) and S = −1 (down), then the energy of the pair, S₁ and S₂, can be written E = −J S₁S₂.

Microstate   Spins   Energy
1            ↑↑      −J
2            ↑↓      +J
3            ↓↑      +J
4            ↓↓      −J
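The table (and the formula E = −J S₁S₂) can be checked by enumerating the four microstates directly; this Python fragment is purely illustrative:

```python
from itertools import product

J = 1.0  # ferromagnetic coupling
for microstate, (s1, s2) in enumerate(product((+1, -1), repeat=2), start=1):
    energy = -J * s1 * s2          # -J for parallel spins, +J for antiparallel
    print(microstate, s1, s2, energy)
```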

Although the Ising model was originally set up to study the transition from ferromagnetism to paramagnetism as temperature is increased, it has much wider application. We could, for example, use it to describe a binary alloy (made up of atomic species A and B): we use spin up to represent type A and spin down to represent type B. Then if J > 0, atoms like to have their own sort as neighbours and, if J < 0, they prefer the other type as neighbours.

4.6. The Ising Model and the Monte Carlo Algorithm

Let us fill in a little more detail about the implementation of the algorithm (see the working program ising.f95). Suppose the spins lie on a lattice (square in 2 dimensions, cubic in 3 dimensions). Each spin has 4 (in 2D) or 6 (in 3D) neighbours (the program is for 2D).

Declare an array, spin(:, :), the elements of which can take values ±1. The arguments of the array label its position coordinates. Then initialise the array (step 1 of the MC algorithm). The 3 common choices are programmed.

Now choose a spin at random (step 2). Let us call it spin i, defined by its coordinates (x, y). The energy associated with it and its neighbours is −Σ_j J_ij S_i S_j, where j is summed over the neighbours. The change in energy (step 3) on reversing the sign of S_i is therefore ΔE = 2 Σ_j J_ij S_i S_j. If the spin flip is made in step 4, S_i → −S_i.

In the calculations, only the dimensionless ratio J/k_BT is important. Usually in programming, J is set equal to 1 and 'temperature' is a number that we can vary. If we wanted to convert the 'temperature' in the program to real units we would multiply it by J/k_B.

Periodic boundary conditions are usually employed to reduce finite-size effects – otherwise the spins on the edges of the lattice would have fewer than 4 (in 2D) neighbours. This is done by adding an extra row of spins to each edge – each spin is constrained to have the same value as the one just inside the lattice on the opposite side.

4.7. Rationale

The reason for doing simulations, of course, is that we are trying to work out the behaviour of macroscopic systems containing of the order of 10^23 spins or particles, and there are only a few models for which exact statistical mechanical solutions are obtainable. Hopefully we can simulate systems with N large enough that we can deduce (or extrapolate to) what happens in the macroscopic case (even though N is many orders of magnitude smaller than 10^23).

You might well ask, if we have to be satisfied with a reasonably modest value of N, why not do an exact calculation for that size of system? Considering a spin system will provide the answer. For an exact calculation one would have to consider 2^N states and perform thermodynamic averages over them. For a Monte Carlo simulation you would consider perhaps 1000 × N Monte Carlo steps. If 1000 × N < 2^N it is more cost effective to do a Monte Carlo calculation. Check at what value of N the cross-over occurs and MC becomes more efficient (you will find it is between 13 and 14 – really very small). Even for a fairly modest N, the value of 2^N rapidly becomes too large for an exact calculation.

A Monte Carlo simulation employs the principle of importance sampling. At low temperatures, the low-energy states dominate in a thermodynamic average – and the phase-space excursion is primarily through these microstates. Indeed, in a real macroscopic system the high-energy states will rarely get visited at low temperatures – perhaps on a time scale of the order of the age of the Universe, or longer! At very high temperatures, on the contrary, all microstates are more or less equally accessible – but a fairly coarse-grained average will do, as long as representative microstates are visited according to their relative density.

The fact that simulations are done on finite systems has to be borne in mind.
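As an aside, the cross-over claimed above is quickly verified by finding the first N for which 2^N exceeds 1000 × N:

```python
n = 1
while 1000 * n >= 2 ** n:   # exact enumeration still cheaper than the MC run
    n += 1
print(n)  # → 14: from N = 14 onwards the Monte Carlo run is cheaper
```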
There are ways of extrapolating to large systems from a series of simulations on small systems over a range of different sizes. Even if we do not want to go to such extra sophistication, we generally do have a knowledge of what the effects of finite size are; we have seen one example of this already in the calculation of σ/<N_L> in the previous section.

4.8. MONTE CARLO SIMULATIONS – IN ACTION

Because MC simulations are such an important technique, we have looked at the principles in some detail. Now let us put them into practice.

⇒ Go to Exercise 2(a)

4.9. Order Parameter - Magnetisation

The magnetisation per spin S at a particular instant in time is defined as

S = N⁻¹ Σ_r S_r

where the sum is over all N spins of the lattice. The average over the period of the simulation, <S>, is our definition of the magnetisation. We expect it to be 1 at very low temperatures and to fall as temperature is increased, going to zero when T reaches T_C (the model exhibits a Curie temperature). For this reason it is a convenient measure of the order in the system – and is sometimes called the order parameter. It is also possible to study the fluctuations in the magnetisation: <S²> − <S>². The fluctuations are largest at temperatures near T_C. They are also related to the susceptibility (the ease with which the system responds to a magnetic field) – see Appendix.
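The bookkeeping for <S> and its fluctuation needs only two running sums over the samples recorded during a run; a Python sketch (the function name is illustrative):

```python
def order_parameter_stats(samples):
    """Return <S> and the fluctuation <S^2> - <S>^2 over a run,
    given the magnetisation-per-spin values sampled during it."""
    n = len(samples)
    mean = sum(samples) / n
    mean_sq = sum(s * s for s in samples) / n
    return mean, mean_sq - mean * mean

mean, fluct = order_parameter_stats([1.0, 0.9, 1.0, 0.9])
print(mean, fluct)   # about 0.95 and 0.0025 for this toy set of samples
```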

⇒ Go to Exercise 2(b)

4.10. Temperature Scan (Annealing and Quenching)

If you want to look at several temperatures in a simulation, it is usually more efficient to do them all in a single run. The spin configuration at the end of the simulation at one temperature provides the input for the simulation at the next temperature. The procedure increases efficiency because it reduces the time to settle to equilibrium compared with making a new start at each temperature.

There is another problem, which you might have noticed, that can be avoided by this technique. At low temperatures, you might have expected that your picture would be all red or all blue (fully ordered). For larger samples at T = 0.5, say, it is more likely that you will see a big blue and a big red area. Early on, one part of the sample started ordering one way while the other began with the opposite orientation; neither could win. This is what happens in reality. If you cool something very fast – 'quenching' – it does not have time to adjust, and different parts get locked into positions that are not necessarily the most favourable energetically for the system as a whole. A slow cooling schedule (annealing) will give them time to adjust.

⇒ Go to Exercise 2(c)
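The structure of such a schedule can be sketched as follows. Everything here is illustrative: a deliberately trivial single-spin-in-a-field update stands in for the real Ising sweep, so that the sketch is self-contained.

```python
import math
import random

rng = random.Random(2)

def sweep(spins, beta):
    """One Metropolis sweep for independent spins in a unit field
    (Delta E = 2*S_i); a stand-in for the real Ising update."""
    for _ in range(len(spins)):
        i = rng.randrange(len(spins))
        delta_e = 2.0 * spins[i]
        if delta_e <= 0.0 or rng.random() < math.exp(-beta * delta_e):
            spins[i] = -spins[i]

spins = [rng.choice((-1, 1)) for _ in range(500)]   # initialise once only
results = []
for t in [4.0, 3.0, 2.5, 2.0, 1.5, 1.0, 0.5]:       # cooling schedule
    beta = 1.0 / t
    for _ in range(50):                              # settle at this temperature
        sweep(spins, beta)
    samples = []
    for _ in range(200):                             # then restart the averaging
        sweep(spins, beta)
        samples.append(sum(spins) / len(spins))
    results.append((t, sum(samples) / len(samples)))
```

The final configuration at each temperature is simply carried into the next pass of the loop; that reuse is the whole point of annealing.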

4.11. MATHEMATICAL APPENDIX

4.11.1. Fluctuations

If Ω^N_{N_L} is the number of configurations of the state with N_L molecules in the left-hand box when N molecules are present, then

<N_L> = Z⁻¹ Σ_{N_L=0}^{N} N_L Ω^N_{N_L}

<N_L²> = Z⁻¹ Σ_{N_L=0}^{N} N_L² Ω^N_{N_L}

Z = Σ_{N_L=0}^{N} Ω^N_{N_L}

where

Ω^N_{N_L} = N! / [N_L! (N − N_L)!]

Now Ω^N_r is also the coefficient that appears in the binomial expansion of (1 + x)^N:

S_N = (1 + x)^N = Σ_{r=0}^{N} Ω^N_r x^r

Setting x = 1, we obtain Z = 2^N (there are 2 configurations of each particle and there are N of them).

Then, differentiating:

dS_N/dx = N(1 + x)^{N−1} = Σ_{r=0}^{N} Ω^N_r r x^{r−1}

and again setting x = 1 and comparing with the expression for <N_L>:

<N_L> = Z⁻¹ N 2^{N−1} = N/2

A further related differentiation yields

d/dx (x dS_N/dx) = N(1 + x)^{N−1} + N(N − 1) x (1 + x)^{N−2} = Σ_{r=0}^{N} Ω^N_r r² x^{r−1}

and following an analogous procedure for <N_L²> we obtain

<N_L²> = Z⁻¹ [N 2^{N−1} + N(N − 1) 2^{N−2}] = N/2 + N(N − 1)/4

From the definition of the variance, σ² = <N_L²> − <N_L>², we obtain

σ² = N/4   and   σ = √N / 2

and so finally

σ / <N_L> = 1/√N
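These closed forms can be checked directly against the binomial sums; a short Python verification (the function name is illustrative):

```python
from math import comb, sqrt

def box_statistics(n):
    """Exact <N_L> and sigma from the binomial weights Omega^N_{N_L} = C(n, k)."""
    z = sum(comb(n, k) for k in range(n + 1))                    # = 2**n
    mean = sum(k * comb(n, k) for k in range(n + 1)) / z         # = n/2
    mean_sq = sum(k * k * comb(n, k) for k in range(n + 1)) / z  # = n/2 + n(n-1)/4
    return mean, sqrt(mean_sq - mean * mean)

mean, sigma = box_statistics(100)
print(mean, sigma, sigma / mean)   # 50.0, 5.0 and 1/sqrt(100) = 0.1
```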

4.11.2. Metropolis Monte Carlo Algorithm and Principle of Detailed Balance

An important relation that has to be satisfied in simulations of the sort we are considering is called the Principle of Detailed Balance; it can be written as

P(i → j) exp(−E_i/k_BT) = P(j → i) exp(−E_j/k_BT)

where P(i → j) is the probability that the system, if it is in microstate i, will make a transition to microstate j; E_i is the energy of the system when in microstate i. Since the probability of the system being in microstate i is given by Z⁻¹ exp(−E_i/k_BT), we can see that the left-hand side of the equation gives the rate at which transitions from i to j occur, and the right-hand side describes the reverse process. The detailed balance equation is the condition for equilibrium.

For a computer simulation, the condition on any P(i → j) that is used is that it must satisfy the detailed balance equation. We can see that the Metropolis algorithm does. Suppose that E_i > E_j. Then, according to the algorithm, P(i → j) = 1 and P(j → i) = exp[−(E_i − E_j)/k_BT], which is entirely consistent with the detailed balance requirement.
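The same check can be made numerically for any pair of energies; the values in this Python fragment are arbitrary, chosen only so that E_i > E_j:

```python
import math

def p_accept(delta_e, beta):
    """Metropolis acceptance probability: min(1, exp(-beta * Delta E))."""
    return min(1.0, math.exp(-beta * delta_e))

beta, e_i, e_j = 1.0, 2.0, 0.5                             # illustrative energies
lhs = p_accept(e_j - e_i, beta) * math.exp(-beta * e_i)    # P(i->j) exp(-E_i/kT)
rhs = p_accept(e_i - e_j, beta) * math.exp(-beta * e_j)    # P(j->i) exp(-E_j/kT)
print(lhs, rhs)   # the two sides agree
```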

4.11.3. Susceptibility and Fluctuations

We can write the mean value of the spin at an arbitrary site i as

<S_i> = Z⁻¹ Σ S_i exp[−β(E − S_i H)]

where

Z = Σ exp[−β(E − S_i H)]

and β = 1/k_BT. The sum is over all configurations (with energy E) of the system; we include a magnetic field H and show the contribution to the energy arising from the effect of H on the particular spin. The susceptibility χ is defined as

χ = d<S>/dH

d[Z <S_i>]/dH = <S_i> Σ βS_i exp[−β(E − S_i H)] + Z d<S_i>/dH = Σ βS_i² exp[−β(E − S_i H)]

Now dividing both sides by Z leads us to

d<S_i>/dH = β (<S_i²> − <S_i>²)

<S_i> is the same for all sites, so we can write

χ = (<S²> − <S>²) / k_BT
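As a sanity check on this relation, consider a single free spin in a field, for which <S> = tanh(βH) exactly and S² = 1. A numerical derivative of <S> should then match β(<S²> − <S>²); this Python fragment is illustrative:

```python
import math

def mean_spin(beta, h):
    """Exact <S> for a single spin in a field h: tanh(beta*h)."""
    return math.tanh(beta * h)

beta, h, dh = 0.7, 0.3, 1e-6
chi_derivative = (mean_spin(beta, h + dh) - mean_spin(beta, h - dh)) / (2 * dh)
m = mean_spin(beta, h)
chi_fluctuation = beta * (1.0 - m * m)     # beta*(<S^2> - <S>^2), since S^2 = 1
print(chi_derivative, chi_fluctuation)     # the two estimates agree
```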

4.12. EXERCISES

1 [8 Marks]

(a). Run the program with the graphics option to get a feel for what happens. Compare N=10, tstep=200 with N=100, tstep=5000, for example. Choose other values as well. Describe your observations.

(b). Add some code to the program to calculate <N_L>, <N_L²>, σ, and the ratio σ/<N_L>. You will need to calculate values averaged over the run; do several runs for a particular set of parameters (the clock will ensure different random number sequences); start with the configuration of equally filled boxes. If you are doing the averages for very large samples or over long time periods, be careful about generating very large numbers. If you are working with integers, try to estimate how big an integer your program is likely to produce. The default KIND for integers in the Fortran 95 system used in this module is 3, which means integers in the range −2³¹ to 2³¹ − 1 are allowed. If you find you are going above this range, convert to floating point for calculating averages. Compare the results from your simulations with the 'theoretical' values from the Mathematical Appendix. In particular, investigate how σ/<N_L> depends on N. You can speed up the calculation by not displaying the graphical output.

(c). Add some more lines to the code to count the number of times in a run that all or none of the particles are in the left-hand box. Sometimes this configuration never occurs, so write some code to evaluate the largest and smallest values of N_L that occur in a run. Compare your simulation with the rough theory. You should be able to do the investigations for N up to around 20. Use the rough formula to estimate the probability of the rare event occurring for N = 50, and calculate how long you would have to sit in front of the computer to observe it.

2. [9 Marks]

(a). The program ising.f95 allows you to input lattice size, number of Monte Carlo steps per spin, temperature, and the sign of the coupling J. You can choose from three initial configurations, and you can select graphical output (up and down spins are distinguished by red and blue circles). You can also request a metafile for hardcopy output. The colour output is relatively slow: use it to get a feel for what is happening, but switch it off for quantitative calculations.

First study the program and make sure that you understand what it does. Do a number of runs for different parameters and comment qualitatively on what happens. Examine values of temperature 0.5, 2.0, 3.0, and lattices of side 16 and 64. Monte Carlo steps per spin in the range 200-500 should be adequate at this stage. It may help your comments to note that the model is expected to show a Curie temperature T_C = 2.27 J/k_B (2.27 in the program units).

You can also try the other options.

(b). As a first step to more quantitative results, add some code to the program to monitor the magnetisation. Calculate the total spin S_tot = Σ_r S_r immediately after initialisation. Then, each time the Metropolis step produces a spin flip, update S_tot by +2 or −2.

Now average the magnetisation over the simulation run, and also find how the magnetisation fluctuates.
That is, calculate both <S> and <S²> − <S>².

If the simulation starts far from equilibrium, you should let the system settle down until it is fluctuating about its equilibrium behaviour before you start your averaging. For example, you might use 1000 MC steps per spin, but let it run for 200 steps per spin before you start averaging (ie disregard the first 20% of the run).

The total number of MC steps (see program) is mcs = mcsps*n. It is not necessary to average the spin over all of the mcs steps; you could save time by including in your average values taken every n MC steps, for example. Referring to ising.f95, the suggestion is to average values taken at the point indicated by a comment, and to ignore values for i < mcsps/5.

Make an evaluation of <S> over a range of temperatures (say 1.0, 2.0, 3.0, and a few around where you expect T_C to be). You should do several runs at each temperature. How do the magnetisation and the fluctuations vary with temperature?

(c). Rather than inputting temperature for each run, add a temperature loop so that your program performs an annealing schedule. The configuration you have at the end of one temperature loop is the starting configuration for the next. Just restart your averaging process. Do your annealing schedule over 10 to 15 temperatures, starting at 4.0 and going down to 0.5. Make your temperature steps closer around T_C.

The first thing you should observe is uniform magnetisation (all red or all blue) in the low-temperature regime.

Obtain data for <S> and χ over the temperature range and plot it as a graph. Make this calculation as accurate as possible. You should be able to do a 64 × 64 lattice with 2000 MC steps per spin in a reasonable time (if you don't display graphical output). Do several runs. Perhaps the PC will be fast enough for you to do runs for larger samples or for more MC steps.
