Computational Physics 2

Administrative

Lecturer: me

Lectures: Thursdays 12 noon - 1pm
Labs: Thursdays 2pm - 4pm

Mark distribution:
Continuous assessment (= quizzes + assignments) −→ 30%
Exam −→ 70%

MP468P Project (dual-department students): Oct–May

Overview of lecture slides 00

1 Content/overview of module

2 The unix/ command line

3 Programming language(s) — choice of python

4 Changing landscape of computational physics

5 Things not covered

6 The practice of scientific computing

7 You should...

Computational Physics 2: Content

Module topics

Random numbers and stochastic processes

Monte Carlo methods

Linear algebra (linear sets of equations, matrix decompositions, eigenvalues)

Minimisation / Optimisation

Partial differential equations (PDEs) + ODE boundary value problems

“Soft skills”: unix/linux command-line, python

Computational Physics 2: Specialties

This module is a bit different...

Components of: Math (random processes, probability, PDE’s), Computer Science (algorithms), ‘Lab’ work

May turn out to be the most useful subject for your future

The most ‘modern’ module. (Numerics with python — less than 20 years old)

Some aspects will become outdated in a few years. (Programming tools, workflow, etc. Not the principles.)

The command line

Why unix systems, why command line?

Serious computing generally done on unix/linux machines

Serious computing usually done remotely on multiple machines/cores

at high-performance computing facilities (e.g., ICHEC in Ireland)
wherever you have access to multiple cpu’s or gpu’s

Difficult to work remotely without command-line knowledge!

The command line

Common commands

Basic: ls, cd, cp, mv, rm, mkdir, less, grep, diff, cat, top, ps, kill

Slightly less basic: ssh, find, awk, tar, sort, gzip, unzip, chmod, chown, tail, head

Combinations

Piping the results of one command (program) into another:
ls | sort
ls -l | sort -g -k 5
cat *.tex | grep -i perturbative

Redirection: output of one command sent to a file:
cat file1.txt file2.txt > outfile.txt

Computer languages other than python

Should you learn other programming languages?

C? julia? Fortran? Mathematica? Matlab/octave? Maple?

SageMath? R? html? java? javascript? C++? Go?

lisp? php? perl? bash? awk & sed? Ruby? C#?

Pascal? COBOL? Basic? Assembly language? POV-Ray?

Computer languages

Low-level vs High-level

Low-level → closer to machine; programmer implements many details; speed and control at the expense of programmer time.

High-level → closer to human; ≈ scripting languages; programmer uses libraries and pre-defined data structures.

Nowadays, low-level ≈ compiled; high-level ≈ interpreted.

Low-level languages (low to high):
Machine language
Assembly language
C / Fortran

High-level languages (low to high):
python / julia / matlab
Mathematica / Maple
awk / bash / perl

Using python for computational physics

Python: the best programming language ever (?)

Widely used — lots of information easily available

Easy to learn.
Interpreted, not compiled: don’t have to worry about variable types.
Libraries available for many common (and specialized) tasks.

Most relevant for us: numpy, ...

Speed does not matter nowadays for many tasks.

Many tasks done by external, non-python libraries. Example: matrix eigensolvers call compiled library routines.

Counting starts at 0 instead of 1, like a proper computer language.

Using python for computational physics

Python: a terrible choice of programming language

Widely used — lots of junk information and incompetent users.
Slow. Very, very slow. Crawling slow. Interpreted, not compiled.
Sometimes speed actually matters.

E.g., Monte Carlo calculations.
To speed up critical parts of your code, you might have to write those parts in C/Fortran. −→ two-language problem

Designed originally for computer people, not for physicists or for numerical work. We are secondary citizens in the python world. Sometimes this shows :-(
Counting starts at 0 instead of 1, an insult to people who deal with matrices and vectors.

Alternatives to python

Matlab/octave
Pros: Designed for numerical work. Just-In-Time compiler makes matlab faster than python. Octave freely available. Packages less chaotic than for python.
Cons: Matlab needs expensive license. Octave slower. Not a proper programming language.

C/C++
Pros: Fast if coded correctly.
Cons: Have to learn a (much) more complicated language than python. Compilation necessary, so the development cycle is slower. Memory allocations by hand. Not as many convenient predefined data structures. Using libraries is a more involved process.

Alternatives to python, continued

Fortran
Pros: Designed explicitly for numerical work. Fast if coded correctly.
Cons: Compilation cycle. Not used much outside numerics.

julia
Pros: Designed explicitly for numerical work. Aims to solve the two-language problem: fast to develop and fast to run. Aims to overcome deficits of python.
Cons: Still new, and changing/growing. E.g., libraries currently even more chaotic than python’s.

Alternatives to python, continued further

Mathematica/Maple
Pros: Combination of numerical and symbolic capabilities.
Cons: Not free or open-source. Expensive license. Not general-purpose programming languages.

R
Pros: Great for statistics. Great graphics package.
Cons: Slow. Not as suitable for non-statistical tasks.

Changing landscape of computational physics

Algorithms and principles

Mostly stable, but some things change. Example: gradient descent.
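As a reminder of the principle itself, plain gradient descent fits in a few lines (a minimal sketch; the fixed step size and the quadratic test function are illustrative choices, and the modern variants are what keeps changing):

```python
def grad_descent(grad, x0, step=0.1, n_iter=100):
    """Minimise a function by repeatedly stepping against its gradient."""
    x = x0
    for _ in range(n_iter):
        x = x - step * grad(x)
    return x

# Minimise f(x) = (x - 3)^2, whose gradient is 2(x - 3).
x_min = grad_descent(lambda x: 2 * (x - 3), x0=0.0)
print(x_min)  # converges to 3.0
```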

Programming practices (and fashions)

Rapid change: be warned (and be prepared).
python was considered unacceptably slow for numerics until ∼2005.
double precision was considered unacceptably slow for numerics.
integer division differs between python2 and python3.
GPU usage increasingly unavoidable. :-(
For scientific usage, python might be replaced by julia soon(ish).

Things not covered in MP468C

Many, many aspects of computing!!
Graph algorithms, advanced data structures, adaptive numerical integration, extrapolation, finite element methods, ....
Serious applications of computers in physics: quantum Monte Carlo, molecular dynamics, density functional theory, ....
Statistical data analysis, machine learning.
Parallel computing, GPU computing, cloud computing.
Other programming languages/paradigms: Matlab/octave, mathematica, julia, C, ...
Many python features: objects/classes, making packages, ...

The practice of computational physics

Do’s and don’ts

Don’t guess what a command/package does. Look it up. E.g., if you use np.arange(2,10), first read its doc.
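For instance, np.arange’s endpoint behaviour is exactly the kind of thing to check rather than guess (a quick sanity check, assuming numpy is installed):

```python
import numpy as np

# np.arange(start, stop) EXCLUDES the stop value -- easy to get wrong if guessed.
a = np.arange(2, 10)
print(a)       # [2 3 4 5 6 7 8 9]
print(len(a))  # 8 values, not 9
```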

Looking up documentation: use reliable sources.

Start coding first, think later? Please please please don’t!!!

First calculate by hand (on paper) whatever is needed. When possible: write out what you need to code as pseudocode or as an algorithm.

Practice — writing out algorithms

Example (from wikipedia page on Metropolis-Hastings)

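The cited algorithm can be sketched in a few lines of python; the choices below (a standard-normal target via its log-density, a symmetric uniform random-walk proposal, the step size) are illustrative, not the slide’s own:

```python
import math
import random

def metropolis_hastings(log_target, x0, n_samples, step=1.0, seed=1):
    """Random-walk Metropolis-Hastings: propose x' = x + U(-step, step),
    accept with probability min(1, target(x') / target(x))."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_samples):
        x_new = x + rng.uniform(-step, step)
        # Symmetric proposal, so the acceptance ratio is just the target ratio.
        log_alpha = min(0.0, log_target(x_new) - log_target(x))
        if rng.random() < math.exp(log_alpha):
            x = x_new
        samples.append(x)
    return samples

# Illustrative target: standard normal, via its unnormalised log-density.
samples = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_samples=20000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(mean, var)  # mean near 0, variance near 1
```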

Writing out algorithms


Example from Higham, Accuracy & Stability of Numerical Algorithms

Power method for finding eigenvalues
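A minimal power-iteration sketch in python (the 2×2 test matrix, the normalisation choice, and the iteration count are illustrative assumptions, not Higham’s exact formulation; a unique dominant eigenvalue is assumed):

```python
def power_method(matvec, v0, n_iter=100):
    """Power iteration: repeatedly apply the matrix and renormalise.
    The normalising factor converges to the dominant eigenvalue."""
    v = v0[:]
    lam = 0.0
    for _ in range(n_iter):
        w = matvec(v)
        lam = max(w, key=abs)       # estimate: largest-magnitude component
        v = [wi / lam for wi in w]  # renormalise so that component is 1
    return lam, v

# Illustrative 2x2 symmetric matrix with eigenvalues 3 and 1.
A = [[2.0, 1.0], [1.0, 2.0]]
matvec = lambda v: [sum(A[i][k] * v[k] for k in range(2)) for i in range(2)]
lam, v = power_method(matvec, [1.0, 0.0])
print(lam)  # converges to the dominant eigenvalue, 3.0
```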


Writing out algorithms


Another example from Higham, Accuracy & Stability of Numerical Algorithms


Computing the QR decomposition of an n × n matrix A, using a Gram–Schmidt-like method.
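A sketch of classical Gram–Schmidt QR in python (assuming numpy and a full-rank A; in serious work modified Gram–Schmidt or Householder reflections are preferred for numerical stability, which is Higham’s point):

```python
import numpy as np

def gram_schmidt_qr(A):
    """Classical Gram-Schmidt: orthogonalise the columns of A into Q,
    collecting the projection coefficients in upper-triangular R."""
    n = A.shape[1]
    Q = np.zeros_like(A, dtype=float)
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].astype(float)          # copy of column j
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]    # component along earlier q_i
            v -= R[i, j] * Q[:, i]         # subtract that component
        R[j, j] = np.linalg.norm(v)
        Q[:, j] = v / R[j, j]
    return Q, R

# Illustrative full-rank 3x3 matrix.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
Q, R = gram_schmidt_qr(A)
print(np.allclose(Q @ R, A))  # True: Q orthogonal, R upper triangular
```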


Writing out algorithms

Example from Kreyszig, Advanced Engineering Mathematics (Sec. 21.1, Methods for First-Order ODEs):

... call briefly the Runge–Kutta method. It is shown in Table 21.3. We see that in each step we first compute four auxiliary quantities k1, k2, k3, k4 and then the new value y_{n+1}. The method is well suited to the computer because it needs no special starting procedure, makes light demand on storage, and repeatedly uses the same straightforward computational procedure. It is numerically stable. Note that, if f depends only on x, this method reduces to Simpson’s rule of integration (Sec. 19.5). Note further that k1, ..., k4 depend on n and generally change from step to step.

Table 21.3 Classical Runge–Kutta Method of Fourth Order

ALGORITHM RUNGE–KUTTA (f, x0, y0, h, N).

This algorithm computes the solution of the initial value problem y′ = f(x, y), y(x0) = y0 at equidistant points

(9) x1 = x0 + h, x2 = x0 + 2h, ..., xN = x0 + Nh;

here f is such that this problem has a unique solution on the interval [x0, xN] (see Sec. 1.7).

INPUT: Function f, initial values x0, y0, step size h, number of steps N

OUTPUT: Approximation y_{n+1} to the solution y(x_{n+1}) at x_{n+1} = x0 + (n+1)h, where n = 0, 1, ..., N−1

For n = 0, 1, ..., N−1 do:
    k1 = h f(x_n, y_n)
    k2 = h f(x_n + h/2, y_n + k1/2)
    k3 = h f(x_n + h/2, y_n + k2/2)
    k4 = h f(x_n + h, y_n + k3)
    x_{n+1} = x_n + h
    y_{n+1} = y_n + (1/6)(k1 + 2 k2 + 2 k3 + k4)
    OUTPUT x_{n+1}, y_{n+1}
End
Stop

End RUNGE–KUTTA
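The table translates nearly line-for-line into python; a minimal sketch (the test problem y′ = y, y(0) = 1 is an illustrative choice, not Kreyszig’s):

```python
def runge_kutta(f, x0, y0, h, N):
    """Classical fourth-order Runge-Kutta for y' = f(x, y), y(x0) = y0."""
    x, y = x0, y0
    for _ in range(N):
        k1 = h * f(x, y)
        k2 = h * f(x + h / 2, y + k1 / 2)
        k3 = h * f(x + h / 2, y + k2 / 2)
        k4 = h * f(x + h, y + k3)
        x = x + h
        y = y + (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return x, y

# Illustrative test problem: y' = y, y(0) = 1, so y(1) = e = 2.71828...
x, y = runge_kutta(lambda x, y: y, x0=0.0, y0=1.0, h=0.1, N=10)
print(y)  # agrees with e to about six digits
```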

Footnote: Named after the German mathematicians KARL RUNGE (Sec. 19.4) and WILHELM KUTTA (1867–1944). Runge [Math. Annalen 46 (1895), 167–178], the German mathematician KARL HEUN (1859–1929) [Zeitschr. Math. Phys. 45 (1900), 23–38], and Kutta [Zeitschr. Math. Phys. 46 (1901), 435–453] developed various similar methods. Theoretically, there are infinitely many fourth-order methods using four function values per step. The method in Table 21.3 is most popular from a practical viewpoint because of its “symmetrical” form and its simple coefficients. It was given by Kutta.

Writing out algorithms

Another example from Kreyszig, Adv. Eng. Math. (Chap. 20, Numeric Linear Algebra):

(4) A = I + L + U   (a_jj = 1)

where I is the n × n unit matrix and L and U are, respectively, lower and upper triangular matrices with zero main diagonals. If we substitute (4) into Ax = b, we have

Ax = (I + L + U)x = b.

Taking Lx and Ux to the right, we obtain, since Ix = x,

(5) x = b − Lx − Ux.

Remembering from (3) in Example 1 that below the main diagonal we took “new” approximations and above the main diagonal “old” ones, we obtain from (5) the desired iteration formulas

(6) x^{(m+1)} = b − Lx^{(m+1)} − Ux^{(m)}   (a_jj = 1; the Lx term uses “new” values, the Ux term “old” ones)

where x^{(m)} = [x_j^{(m)}] is the mth approximation and x^{(m+1)} = [x_j^{(m+1)}] is the (m+1)st approximation. In components this gives the formula in line 1 in Table 20.2. The matrix A must satisfy a_jj ≠ 0 for all j. In Table 20.2 our assumption a_jj = 1 is no longer required, but is automatically taken care of by the factor 1/a_jj in line 1.
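The componentwise form of (6), spelled out in Table 20.2 below, can be sketched in python as follows (a minimal sketch: a_jj ≠ 0 is assumed, convergence is assumed, e.g. for a diagonally dominant A, and the 2×2 test system is illustrative):

```python
def gauss_seidel(A, b, x0, eps, N):
    """Gauss-Seidel sweep: components below the diagonal use already
    updated ("new") values, components above it use "old" ones."""
    n = len(b)
    x = x0[:]
    for _ in range(N):
        x_old = x[:]
        for j in range(n):
            # x[k] already holds the new value for k < j, the old one for k > j.
            s = sum(A[j][k] * x[k] for k in range(n) if k != j)
            x[j] = (b[j] - s) / A[j][j]
        if max(abs(x[j] - x_old[j]) for j in range(n)) < eps * max(abs(v) for v in x):
            return x
    raise RuntimeError("no solution satisfying the tolerance after N steps")

# Illustrative, diagonally dominant system with exact solution x = [1, 2].
x = gauss_seidel([[4.0, 1.0], [1.0, 3.0]], [6.0, 7.0], [0.0, 0.0], 1e-10, 100)
print(x)  # approximately [1.0, 2.0]
```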

Table 20.2 Gauss–Seidel Iteration

ALGORITHM GAUSS–SEIDEL (A, b, x^(0), ε, N)

This algorithm computes a solution x of the system Ax = b, given an initial approximation x^(0), where A = [a_jk] is an n × n matrix with a_jj ≠ 0, j = 1, ..., n.

INPUT: A, b, initial approximation x^(0), tolerance ε > 0, maximum number of iterations N

OUTPUT: Approximate solution x^(m) = [x_j^(m)] or failure message that x^(N) does not satisfy the tolerance condition

For m = 0, ..., N − 1, do:
    For j = 1, ..., n, do:
1       x_j^{(m+1)} = (1/a_jj) [ b_j − Σ_{k=1}^{j−1} a_jk x_k^{(m+1)} − Σ_{k=j+1}^{n} a_jk x_k^{(m)} ]
    End
2   If max_j |x_j^{(m+1)} − x_j^{(m)}| < ε |x_j^{(m+1)}| then OUTPUT x^{(m+1)}. Stop
    [Procedure completed successfully]
End
OUTPUT: “No solution satisfying the tolerance condition obtained after N iteration steps.” Stop
[Procedure completed unsuccessfully]

End GAUSS–SEIDEL

How you can help (yourself and me)

Would help if you...

Keep learning python and numpy intricacies — read sections of the official documentation (or a good book) as bedtime reading.

Install a linux/unix shell (a bash shell) on your own machine.