
Contents

Introduction: Bibliography; Physical Constants
1. Series: Arithmetic and geometric progressions; Convergence of series: the ratio test; Convergence of series: the comparison test; Binomial expansion; Taylor and Maclaurin series; Power series with real variables; Integer series; Plane wave expansion
2. Vector Algebra: Scalar product; Equation of a line; Equation of a plane; Vector product; Scalar triple product; Vector triple product; Non-orthogonal basis; Summation convention
3. Matrix Algebra: Unit matrices; Products; Transpose matrices; Inverse matrices; Determinants; 2×2 matrices; Product rules; Orthogonal matrices; Solving sets of linear simultaneous equations; Hermitian matrices; Eigenvalues and eigenvectors; Commutators; Hermitian algebra; Pauli spin matrices
4. Vector Calculus: Notation; Identities; Grad, Div, Curl and the Laplacian; Transformation of integrals
5. Complex Variables: Complex numbers; De Moivre's theorem; Power series for complex variables
6. Trigonometric Formulae: Relations between sides and angles of any plane triangle; Relations between sides and angles of any spherical triangle
7. Hyperbolic Functions: Relations of the functions; Inverse functions
8. Limits
9. Differentiation
10. Integration: Standard forms; Standard substitutions; Integration by parts; Differentiation of an integral; Dirac δ-'function'; Reduction formulae
11. Differential Equations: Diffusion (conduction) equation; Wave equation; Legendre's equation; Bessel's equation; Laplace's equation; Spherical harmonics
12. Calculus of Variations
13. Functions of Several Variables: Taylor series for two variables; Stationary points; Changing variables: the chain rule; Changing variables in surface and volume integrals – Jacobians
14. Fourier Series and Transforms: Fourier series; Fourier series for other ranges; Fourier series for odd and even functions; Complex form of Fourier series; Discrete Fourier series; Fourier transforms; Convolution theorem; Parseval's theorem; Fourier transforms in two dimensions; Fourier transforms in three dimensions
15. Laplace Transforms
16. Numerical Analysis: Finding the zeros of equations; Numerical integration of differential equations; Central difference notation; Approximating to derivatives; Interpolation: Everett's formula; Numerical evaluation of definite integrals
17. Treatment of Random Errors: Range method; Combination of errors
18. Statistics: Mean and Variance; Probability distributions; Weighted sums of random variables; Statistics of a data sample x_1, . . . , x_n; Regression (least squares fitting)

Introduction

This Mathematical Formulae handbook has been prepared in response to a request from the Physics Consultative Committee, with the hope that it will be useful to those studying physics. It is to some extent modelled on a similar document issued by the Department of Engineering, but obviously reflects the particular interests of physicists. There was discussion as to whether it should also include physical formulae such as Maxwell's equations, etc., but a decision was taken against this, partly on the grounds that the book would become unduly bulky, but mainly because, in its present form, clean copies can be made available to candidates in exams. There has been wide consultation among the staff about the contents of this document, but inevitably some users will seek in vain for a formula they feel strongly should be included. Please send suggestions for amendments to the Secretary of the Teaching Committee, and they will be considered for incorporation in the next edition. The Secretary will also be grateful to be informed of any (equally inevitable) errors which are found. This book was compiled by Dr John Shakeshaft and typeset originally by Fergus Gallagher, and currently by Dr Dave Green, using the TeX typesetting package. Version 1.5, December 2005.

Bibliography

Abramowitz, M. & Stegun, I.A., Handbook of Mathematical Functions, Dover, 1965.
Gradshteyn, I.S. & Ryzhik, I.M., Table of Integrals, Series and Products, Academic Press, 1980.
Jahnke, E. & Emde, F., Tables of Functions, Dover, 1986.
Nordling, C. & Österman, J., Physics Handbook, Chartwell-Bratt, Bromley, 1980.
Spiegel, M.R., Mathematical Handbook of Formulas and Tables (Schaum's Outline Series, McGraw-Hill, 1968).

Physical Constants

Based on the “Review of Particle Properties”, Barnett et al., 1996, Physical Review D, 54, p1, and “The Fundamental Physical Constants”, Cohen & Taylor, 1997, Physics Today, BG7. (The figures in parentheses give the 1-standard-deviation uncertainties in the last digits.)

speed of light in a vacuum        c      2.997 924 58 × 10^8 m s^−1 (by definition)
permeability of a vacuum          µ_0    4π × 10^−7 H m^−1 (by definition)
permittivity of a vacuum          ε_0    1/µ_0 c^2 = 8.854 187 817 . . . × 10^−12 F m^−1
elementary charge                 e      1.602 177 33(49) × 10^−19 C
Planck constant                   h      6.626 075 5(40) × 10^−34 J s
h/2π                              ℏ      1.054 572 66(63) × 10^−34 J s
Avogadro constant                 N_A    6.022 136 7(36) × 10^23 mol^−1
unified atomic mass constant      m_u    1.660 540 2(10) × 10^−27 kg
mass of electron                  m_e    9.109 389 7(54) × 10^−31 kg
mass of proton                    m_p    1.672 623 1(10) × 10^−27 kg
Bohr magneton eh/4πm_e            µ_B    9.274 015 4(31) × 10^−24 J T^−1
molar gas constant                R      8.314 510(70) J K^−1 mol^−1
Boltzmann constant                k_B    1.380 658(12) × 10^−23 J K^−1
Stefan–Boltzmann constant         σ      5.670 51(19) × 10^−8 W m^−2 K^−4
gravitational constant            G      6.672 59(85) × 10^−11 N m^2 kg^−2

Other data
acceleration of free fall         g      9.806 65 m s^−2 (standard value at sea level)

1. Series

Arithmetic and Geometric progressions

A.P.   S_n = a + (a + d) + (a + 2d) + · · · + [a + (n − 1)d] = (n/2)[2a + (n − 1)d]

G.P.   S_n = a + ar + ar^2 + · · · + ar^{n−1} = a (1 − r^n)/(1 − r),   S_∞ = a/(1 − r)   for |r| < 1

(These results also hold for complex series.)

Convergence of series: the ratio test

S_n = u_1 + u_2 + u_3 + · · · + u_n converges as n → ∞ if   lim_{n→∞} |u_{n+1}/u_n| < 1

Convergence of series: the comparison test

If each term in a series of positive terms is less than the corresponding term in a series known to be convergent, then the given series is also convergent.

Binomial expansion

(1 + x)^n = 1 + nx + [n(n − 1)/2!] x^2 + [n(n − 1)(n − 2)/3!] x^3 + · · ·

If n is a positive integer the series terminates and is valid for all x: the term in x^r is ^nC_r x^r, where ^nC_r ≡ n!/[r!(n − r)!] is the number of different ways in which an unordered sample of r objects can be selected from a set of n objects without replacement. When n is not a positive integer, the series does not terminate: the infinite series is convergent for |x| < 1.

Taylor and Maclaurin Series

If y(x) is well-behaved in the vicinity of x = a then it has a Taylor series,

y(x) = y(a + u) = y(a) + u (dy/dx) + (u^2/2!)(d^2y/dx^2) + (u^3/3!)(d^3y/dx^3) + · · ·

where u = x − a and the differential coefficients are evaluated at x = a. A Maclaurin series is a Taylor series with a = 0,

y(x) = y(0) + x (dy/dx) + (x^2/2!)(d^2y/dx^2) + (x^3/3!)(d^3y/dx^3) + · · ·

Power series with real variables

e^x = 1 + x + x^2/2! + · · · + x^n/n! + · · ·                                  valid for all x

ln(1 + x) = x − x^2/2 + x^3/3 − · · · + (−1)^{n+1} x^n/n + · · ·               valid for −1 < x ≤ 1

cos x = (e^{ix} + e^{−ix})/2 = 1 − x^2/2! + x^4/4! − x^6/6! + · · ·            valid for all values of x

sin x = (e^{ix} − e^{−ix})/2i = x − x^3/3! + x^5/5! − · · ·                    valid for all values of x

tan x = x + x^3/3 + 2x^5/15 + · · ·                                            valid for −π/2 < x < π/2

tan^{−1} x = x − x^3/3 + x^5/5 − · · ·                                         valid for −1 ≤ x ≤ 1

sin^{−1} x = x + (1/2)(x^3/3) + (1·3/2·4)(x^5/5) + · · ·                       valid for −1 < x < 1
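As a quick illustration of how rapidly these Maclaurin series converge for moderate |x|, the Python sketch below (standard library only; the truncation order N = 10 and the test value x = 0.7 are arbitrary choices, not part of the handbook) compares truncated sums with the library functions.

# Minimal sketch: truncated Maclaurin series versus math.exp and math.sin.
import math

def exp_series(x, N=10):
    # e^x = sum_{n=0}^{N} x^n / n!
    return sum(x**n / math.factorial(n) for n in range(N + 1))

def sin_series(x, N=10):
    # sin x = sum_{k=0}^{N} (-1)^k x^(2k+1) / (2k+1)!
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1) for k in range(N + 1))

x = 0.7
print(exp_series(x), math.exp(x))   # agree to ~1e-12 for |x| of order 1
print(sin_series(x), math.sin(x))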

Integer series

∑_{n=1}^{N} n = 1 + 2 + 3 + · · · + N = N(N + 1)/2

∑_{n=1}^{N} n^2 = 1^2 + 2^2 + 3^2 + · · · + N^2 = N(N + 1)(2N + 1)/6

∑_{n=1}^{N} n^3 = 1^3 + 2^3 + 3^3 + · · · + N^3 = N^2(N + 1)^2/4 = [1 + 2 + 3 + · · · + N]^2

∑_{n=1}^{∞} (−1)^{n+1}/n = 1 − 1/2 + 1/3 − 1/4 + · · · = ln 2          [see expansion of ln(1 + x)]

∑_{n=1}^{∞} (−1)^{n+1}/(2n − 1) = 1 − 1/3 + 1/5 − 1/7 + · · · = π/4    [see expansion of tan^{−1} x]

∑_{n=1}^{∞} 1/n^2 = 1 + 1/4 + 1/9 + 1/16 + · · · = π^2/6

∑_{n=1}^{N} n(n + 1)(n + 2) = 1·2·3 + 2·3·4 + · · · + N(N + 1)(N + 2) = N(N + 1)(N + 2)(N + 3)/4

This last result is a special case of the more general formula,

∑_{n=1}^{N} n(n + 1)(n + 2) . . . (n + r) = N(N + 1)(N + 2) . . . (N + r)(N + r + 1)/(r + 2).

Plane wave expansion

exp(ikz) = exp(ikr cos θ) = ∑_{l=0}^{∞} (2l + 1) i^l j_l(kr) P_l(cos θ),

where P_l(cos θ) are Legendre polynomials (see section 11) and j_l(kr) are spherical Bessel functions, defined by

j_l(ρ) = √(π/2ρ) J_{l+1/2}(ρ),

with J_l(x) the Bessel function of order l (see section 11).
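The expansion can be spot-checked numerically at a single point. The sketch below assumes scipy is available; the truncation at l = 40 and the values kr = 5, θ = 0.7 are arbitrary illustrative choices.

# Minimal sketch: plane-wave expansion truncated at l_max = 40.
import numpy as np
from scipy.special import spherical_jn, eval_legendre

kr, theta = 5.0, 0.7
lhs = np.exp(1j * kr * np.cos(theta))
rhs = sum((2*l + 1) * 1j**l * spherical_jn(l, kr) * eval_legendre(l, np.cos(theta))
          for l in range(41))
print(lhs, rhs)   # the two values agree to many decimal places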

2. Vector Algebra

If i, j, k are orthonormal vectors and A = A_x i + A_y j + A_z k then |A|^2 = A_x^2 + A_y^2 + A_z^2. [Orthonormal vectors ≡ orthogonal unit vectors.]

Scalar product

A · B = |A| |B| cos θ,   where θ is the angle between the vectors

A · B = A_x B_x + A_y B_y + A_z B_z = [ A_x  A_y  A_z ] [ B_x ; B_y ; B_z ]

Scalar multiplication is commutative: A · B = B · A.

Equation of a line

A point r ≡ (x, y, z) lies on a line passing through a point a and parallel to vector b if

r = a + λb

with λ a real number.

Equation of a plane

A point r ≡ (x, y, z) is on a plane if either

(a) r · d̂ = |d|, where d is the normal from the origin to the plane, or

(b) x/X + y/Y + z/Z = 1, where X, Y, Z are the intercepts on the axes.

Vector product

A × B = n̂ |A| |B| sin θ, where θ is the angle between the vectors and n̂ is a unit vector normal to the plane containing A and B in the direction for which A, B, n̂ form a right-handed set of axes.

A × B in determinant form:

| i    j    k   |
| A_x  A_y  A_z |
| B_x  B_y  B_z |

A × B in matrix form:

[  0    −A_z   A_y ] [ B_x ]
[  A_z   0    −A_x ] [ B_y ]
[ −A_y   A_x   0   ] [ B_z ]

Vector multiplication is not commutative: A × B = −B × A.

Scalar triple product

              | A_x  A_y  A_z |
A × B · C  =  | B_x  B_y  B_z |  =  A · B × C  =  −A × C · B,   etc.
              | C_x  C_y  C_z |

Vector triple product

A × (B × C) = (A · C)B − (A · B)C,   (A × B) × C = (A · C)B − (B · C)A

Non-orthogonal basis

A = A_1 e_1 + A_2 e_2 + A_3 e_3

A_1 = e_1′ · A   where   e_1′ = (e_2 × e_3)/[e_1 · (e_2 × e_3)]

Similarly for A_2 and A_3.

Summation convention

a = a_i e_i   implies summation over i = 1 . . . 3

a · b = a_i b_i

(a × b)_i = ε_{ijk} a_j b_k   where ε_{123} = 1;   ε_{ijk} = −ε_{ikj}

ε_{ijk} ε_{klm} = δ_{il} δ_{jm} − δ_{im} δ_{jl}
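The ε–δ identity above is easy to verify by brute force. The sketch below assumes numpy is available and builds the Levi-Civita symbol explicitly.

# Minimal sketch: numerical check of eps_ijk eps_klm = delta_il delta_jm - delta_im delta_jl.
import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0   # even permutations
    eps[i, k, j] = -1.0  # odd permutations

delta = np.eye(3)

lhs = np.einsum('ijk,klm->ijlm', eps, eps)                     # sum over k
rhs = np.einsum('il,jm->ijlm', delta, delta) - np.einsum('im,jl->ijlm', delta, delta)
print(np.allclose(lhs, rhs))   # True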

3. Matrix Algebra

Unit matrices

The unit matrix I of order n is a square matrix with all diagonal elements equal to one and all off-diagonal elements zero, i.e., (I)_{ij} = δ_{ij}. If A is a square matrix of order n, then AI = IA = A. Also I = I^{−1}.

I is sometimes written as I_n if the order needs to be stated explicitly.

Products

If A is an (n × l) matrix and B is an (l × m) matrix then the product AB is defined by

(AB)_{ij} = ∑_{k=1}^{l} A_{ik} B_{kj}

In general AB ≠ BA.

Transpose matrices

If A is a matrix, then the transpose matrix A^T is such that (A^T)_{ij} = (A)_{ji}.

Inverse matrices

If A is a square matrix with non-zero determinant, then its inverse A^{−1} is such that AA^{−1} = A^{−1}A = I.

(A^{−1})_{ij} = (transpose of cofactor of A_{ij})/|A|

where the cofactor of A_{ij} is (−1)^{i+j} times the determinant of the matrix A with the i-th row and j-th column deleted.

Determinants

If A is a square matrix then the determinant of A, |A| (≡ det A), is defined by

|A| = ∑_{i,j,k,...} ε_{ijk...} A_{1i} A_{2j} A_{3k} . . .

where the number of suffixes is equal to the order of the matrix.

2 × 2 matrices

If A = [ a  b ; c  d ] then

|A| = ad − bc,   A^T = [ a  c ; b  d ],   A^{−1} = (1/|A|) [ d  −b ; −c  a ]

Product rules

(AB . . . N)^T = N^T . . . B^T A^T

(AB . . . N)^{−1} = N^{−1} . . . B^{−1} A^{−1}   (if individual inverses exist)

|AB . . . N| = |A| |B| . . . |N|   (if individual matrices are square)

Orthogonal matrices

An orthogonal matrix Q is a square matrix whose columns q_i form a set of orthonormal vectors. For any orthogonal matrix Q,

Q^{−1} = Q^T,   |Q| = ±1,   Q^T is also orthogonal.

Solving sets of linear simultaneous equations

If A is square then Ax = b has a unique solution x = A^{−1}b if A^{−1} exists, i.e., if |A| ≠ 0.

If A is square then Ax = 0 has a non-trivial solution if and only if |A| = 0.

An over-constrained set of equations Ax = b is one in which A has m rows and n columns, where m (the number of equations) is greater than n (the number of variables). The best solution x (in the sense that it minimizes the error |Ax − b|) is the solution of the n equations A^T A x = A^T b. If the columns of A are orthonormal vectors then x = A^T b.
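For the over-constrained case, a minimal numerical sketch (assuming numpy is available; the 4 × 2 matrix A and the vector b are arbitrary illustrative data) solves the normal equations A^T A x = A^T b and compares with a library least-squares routine.

# Minimal sketch: least-squares solution via the normal equations.
import numpy as np

A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])  # m = 4 equations, n = 2 unknowns
b = np.array([0.1, 0.9, 2.1, 2.9])

x_normal = np.linalg.solve(A.T @ A, A.T @ b)       # solution of A^T A x = A^T b
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)    # reference least-squares solution
print(np.allclose(x_normal, x_lstsq))              # True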

Hermitian matrices

The Hermitian conjugate of A is A† = (A*)^T, where A* is a matrix each of whose components is the complex conjugate of the corresponding components of A. If A = A† then A is called a Hermitian matrix.

Eigenvalues and eigenvectors

The n eigenvalues λ_i and eigenvectors u_i of an n × n matrix A are the solutions of the equation Au = λu. The eigenvalues are the zeros of the polynomial of degree n, P_n(λ) = |A − λI|. If A is Hermitian then the eigenvalues λ_i are real and the eigenvectors u_i are mutually orthogonal. |A − λI| = 0 is called the characteristic equation of the matrix A.

Tr A = ∑_i λ_i,   also   |A| = ∏_i λ_i.

If S is a symmetric matrix, Λ is the diagonal matrix whose diagonal elements are the eigenvalues of S, and U is the matrix whose columns are the normalized eigenvectors of S, then

U^T S U = Λ   and   S = U Λ U^T.

If x is an approximation to an eigenvector of A then x^T A x/(x^T x) (Rayleigh's quotient) is an approximation to the corresponding eigenvalue.
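A short numerical sketch of these statements for a symmetric matrix, assuming numpy is available (the 3 × 3 matrix and the perturbation are arbitrary examples):

# Minimal sketch: spectral decomposition and Rayleigh quotient for a symmetric matrix.
import numpy as np

S = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

lam, U = np.linalg.eigh(S)                               # eigenvalues, orthonormal eigenvectors
print(np.allclose(U.T @ S @ U, np.diag(lam)))            # U^T S U = Lambda
print(np.isclose(np.trace(S), lam.sum()))                # Tr S = sum of eigenvalues
print(np.isclose(np.linalg.det(S), lam.prod()))          # |S| = product of eigenvalues

# Rayleigh quotient of an approximate eigenvector approximates the eigenvalue
x = U[:, -1] + 0.05 * np.random.default_rng(0).standard_normal(3)
print(x @ S @ x / (x @ x), lam[-1])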

Commutators

[A, B] ≡ AB − BA

[A, B] = −[B, A]

[A, B]† = [B†, A†]

[A + B, C] = [A, C] + [B, C]

[AB, C] = A[B, C] + [A, C]B

[A, [B, C]] + [B, [C, A]] + [C, [A, B]] = 0

Hermitian algebra

b† = (b_1*, b_2*, . . .)

Hermiticity:
    Matrix form:    b* · A · c = (A b)* · c
    Operator form:  ∫ ψ* O φ = ∫ (Oψ)* φ
    Bra-ket form:   ⟨ψ|O|φ⟩

Eigenvalues, λ real:
    Matrix form:    A u_i = λ_(i) u_i
    Operator form:  O ψ_i = λ_(i) ψ_i
    Bra-ket form:   O|i⟩ = λ_i |i⟩

Orthogonality (i ≠ j):
    Matrix form:    u_i · u_j = 0
    Operator form:  ∫ ψ_i* ψ_j = 0
    Bra-ket form:   ⟨i|j⟩ = 0

Completeness:
    Matrix form:    b = ∑_i u_i (u_i · b)
    Operator form:  φ = ∑_i ψ_i ∫ ψ_i* φ
    Bra-ket form:   φ = ∑_i |i⟩⟨i|φ⟩

Rayleigh–Ritz, lowest eigenvalue λ_0:
    Matrix form:    λ_0 ≤ (b* · A · b)/(b* · b)
    Operator form:  λ_0 ≤ (∫ ψ* O ψ)/(∫ ψ* ψ)
    Bra-ket form:   λ_0 ≤ ⟨ψ|O|ψ⟩/⟨ψ|ψ⟩

Pauli spin matrices

σ_x = [ 0  1 ; 1  0 ],   σ_y = [ 0  −i ; i  0 ],   σ_z = [ 1  0 ; 0  −1 ]

σ_x σ_y = iσ_z,   σ_y σ_z = iσ_x,   σ_z σ_x = iσ_y,   σ_x σ_x = σ_y σ_y = σ_z σ_z = I
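These products are easily checked numerically; the sketch below assumes numpy is available.

# Minimal sketch: verify the Pauli-matrix products quoted above.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

print(np.allclose(sx @ sy, 1j * sz))       # sigma_x sigma_y = i sigma_z
print(np.allclose(sy @ sz, 1j * sx))       # sigma_y sigma_z = i sigma_x
print(np.allclose(sx @ sx, np.eye(2)))     # sigma_x sigma_x = I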

4. Vector Calculus

Notation

φ is a scalar function of a set of position coordinates. In Cartesian coordinates φ = φ(x, y, z); in cylindrical polar coordinates φ = φ(ρ, ϕ, z); in spherical polar coordinates φ = φ(r, θ, ϕ); in cases with radial symmetry φ = φ(r).

A is a vector function whose components are scalar functions of the position coordinates: in Cartesian coordinates A = iA_x + jA_y + kA_z, where A_x, A_y, A_z are independent functions of x, y, z.

In Cartesian coordinates ∇ ('del') ≡ i ∂/∂x + j ∂/∂y + k ∂/∂z ≡ ( ∂/∂x , ∂/∂y , ∂/∂z )

grad φ = ∇φ,   div A = ∇ · A,   curl A = ∇ × A

Identities

grad(φ_1 + φ_2) ≡ grad φ_1 + grad φ_2     div(A_1 + A_2) ≡ div A_1 + div A_2

grad(φ_1 φ_2) ≡ φ_1 grad φ_2 + φ_2 grad φ_1

curl(A_1 + A_2) ≡ curl A_1 + curl A_2

div(φA) ≡ φ div A + (grad φ) · A,     curl(φA) ≡ φ curl A + (grad φ) × A

div(A_1 × A_2) ≡ A_2 · curl A_1 − A_1 · curl A_2

curl(A_1 × A_2) ≡ A_1 div A_2 − A_2 div A_1 + (A_2 · grad)A_1 − (A_1 · grad)A_2

div(curl A) ≡ 0,     curl(grad φ) ≡ 0

curl(curl A) ≡ grad(div A) − div(grad A) ≡ grad(div A) − ∇^2 A

grad(A_1 · A_2) ≡ A_1 × (curl A_2) + (A_1 · grad)A_2 + A_2 × (curl A_1) + (A_2 · grad)A_1

Grad, Div, Curl and the Laplacian

Conversion to Cartesian coordinates:
    Cylindrical:  x = ρ cos ϕ,  y = ρ sin ϕ,  z = z
    Spherical:    x = r cos ϕ sin θ,  y = r sin ϕ sin θ,  z = r cos θ

Vector A:
    Cartesian:    A_x i + A_y j + A_z k
    Cylindrical:  A_ρ ρ̂ + A_ϕ ϕ̂ + A_z ẑ
    Spherical:    A_r r̂ + A_θ θ̂ + A_ϕ ϕ̂

Gradient ∇φ:
    Cartesian:    (∂φ/∂x) i + (∂φ/∂y) j + (∂φ/∂z) k
    Cylindrical:  (∂φ/∂ρ) ρ̂ + (1/ρ)(∂φ/∂ϕ) ϕ̂ + (∂φ/∂z) ẑ
    Spherical:    (∂φ/∂r) r̂ + (1/r)(∂φ/∂θ) θ̂ + (1/(r sin θ))(∂φ/∂ϕ) ϕ̂

Divergence ∇ · A:
    Cartesian:    ∂A_x/∂x + ∂A_y/∂y + ∂A_z/∂z
    Cylindrical:  (1/ρ) ∂(ρA_ρ)/∂ρ + (1/ρ) ∂A_ϕ/∂ϕ + ∂A_z/∂z
    Spherical:    (1/r^2) ∂(r^2 A_r)/∂r + (1/(r sin θ)) ∂(A_θ sin θ)/∂θ + (1/(r sin θ)) ∂A_ϕ/∂ϕ

Curl ∇ × A:
    Cartesian:    determinant with rows (i, j, k), (∂/∂x, ∂/∂y, ∂/∂z), (A_x, A_y, A_z)
    Cylindrical:  (1/ρ) × determinant with rows (ρ̂, ρϕ̂, ẑ), (∂/∂ρ, ∂/∂ϕ, ∂/∂z), (A_ρ, ρA_ϕ, A_z)
    Spherical:    (1/(r^2 sin θ)) × determinant with rows (r̂, rθ̂, r sin θ ϕ̂), (∂/∂r, ∂/∂θ, ∂/∂ϕ), (A_r, rA_θ, r sin θ A_ϕ)

Laplacian ∇^2 φ:
    Cartesian:    ∂^2φ/∂x^2 + ∂^2φ/∂y^2 + ∂^2φ/∂z^2
    Cylindrical:  (1/ρ) ∂/∂ρ (ρ ∂φ/∂ρ) + (1/ρ^2) ∂^2φ/∂ϕ^2 + ∂^2φ/∂z^2
    Spherical:    (1/r^2) ∂/∂r (r^2 ∂φ/∂r) + (1/(r^2 sin θ)) ∂/∂θ (sin θ ∂φ/∂θ) + (1/(r^2 sin^2 θ)) ∂^2φ/∂ϕ^2

Transformation of integrals

L = the distance along some curve 'C' in space and is measured from some fixed point.
S = a surface
τ = a volume contained by a specified surface
t̂ = the unit tangent to C at the point P
n̂ = the unit outward-pointing normal
A = some vector function
dL = the vector element of curve (= t̂ dL)
dS = the vector element of surface (= n̂ dS)

Then   ∫_C A · t̂ dL = ∫_C A · dL

and when A = ∇φ,

∫_C (∇φ) · dL = ∫_C dφ

Gauss’s Theorem (Divergence Theorem)

When S defines a closed region having a volume τ

∫_τ (∇ · A) dτ = ∫_S (A · n̂) dS = ∫_S A · dS

also

∫_τ (∇φ) dτ = ∫_S φ dS,     ∫_τ (∇ × A) dτ = ∫_S (n̂ × A) dS

Stokes's Theorem

When C is closed and bounds the open surface S,

∫_S (∇ × A) · dS = ∫_C A · dL

also

∫_S (n̂ × ∇φ) dS = ∫_C φ dL

Green's Theorem

∫_S ψ ∇φ · dS = ∫_τ ∇ · (ψ ∇φ) dτ = ∫_τ [ ψ ∇^2 φ + (∇ψ) · (∇φ) ] dτ

Green's Second Theorem

∫_τ (ψ ∇^2 φ − φ ∇^2 ψ) dτ = ∫_S [ ψ(∇φ) − φ(∇ψ) ] · dS

5. Complex Variables

Complex numbers

The complex number z = x + iy = r(cos θ + i sin θ) = r e^{i(θ+2nπ)}, where i^2 = −1 and n is an arbitrary integer. The real quantity r is the modulus of z and the angle θ is the argument of z. The complex conjugate of z is z* = x − iy = r(cos θ − i sin θ) = r e^{−iθ};   zz* = |z|^2 = x^2 + y^2

(cos θ + i sin θ)^n = e^{inθ} = cos nθ + i sin nθ

Power series for complex variables.

e^z = 1 + z + z^2/2! + · · · + z^n/n! + · · ·        convergent for all finite z

sin z = z − z^3/3! + z^5/5! − · · ·                   convergent for all finite z

cos z = 1 − z^2/2! + z^4/4! − · · ·                   convergent for all finite z

ln(1 + z) = z − z^2/2 + z^3/3 − · · ·                 principal value of ln(1 + z)

This last series converges both on and within the circle |z| = 1 except at the point z = −1.

tan^{−1} z = z − z^3/3 + z^5/5 − · · ·

This last series converges both on and within the circle |z| = 1 except at the points z = ±i.

(1 + z)^n = 1 + nz + [n(n − 1)/2!] z^2 + [n(n − 1)(n − 2)/3!] z^3 + · · ·

This last series converges both on and within the circle |z| = 1 except at the point z = −1.

6. Trigonometric Formulae

cos^2 A + sin^2 A = 1     sec^2 A − tan^2 A = 1     cosec^2 A − cot^2 A = 1

sin 2A = 2 sin A cos A     cos 2A = cos^2 A − sin^2 A     tan 2A = 2 tan A/(1 − tan^2 A)

sin(A ± B) = sin A cos B ± cos A sin B          cos A cos B = [cos(A + B) + cos(A − B)]/2

cos(A ± B) = cos A cos B ∓ sin A sin B          sin A sin B = [cos(A − B) − cos(A + B)]/2

tan(A ± B) = (tan A ± tan B)/(1 ∓ tan A tan B)    sin A cos B = [sin(A + B) + sin(A − B)]/2

sin A + sin B = 2 sin[(A + B)/2] cos[(A − B)/2]      cos^2 A = (1 + cos 2A)/2

sin A − sin B = 2 cos[(A + B)/2] sin[(A − B)/2]      sin^2 A = (1 − cos 2A)/2

cos A + cos B = 2 cos[(A + B)/2] cos[(A − B)/2]      cos^3 A = (3 cos A + cos 3A)/4

cos A − cos B = −2 sin[(A + B)/2] sin[(A − B)/2]     sin^3 A = (3 sin A − sin 3A)/4

Relations between sides and angles of any plane triangle

In a plane triangle with angles A, B, and C and sides opposite a, b, and c respectively,

a/sin A = b/sin B = c/sin C = diameter of circumscribed circle

a^2 = b^2 + c^2 − 2bc cos A

a = b cos C + c cos B

cos A = (b^2 + c^2 − a^2)/(2bc)

tan[(A − B)/2] = [(a − b)/(a + b)] cot(C/2)

area = (1/2)ab sin C = (1/2)bc sin A = (1/2)ca sin B = √[s(s − a)(s − b)(s − c)],   where s = (a + b + c)/2

In a spherical triangle with angles A, B, and C and sides opposite a, b, and c respectively,

sin a/sin A = sin b/sin B = sin c/sin C

cos a = cos b cos c + sin b sin c cos A

cos A = −cos B cos C + sin B sin C cos a

7. Hyperbolic Functions

cosh x = (e^x + e^{−x})/2 = 1 + x^2/2! + x^4/4! + · · ·     valid for all x

sinh x = (e^x − e^{−x})/2 = x + x^3/3! + x^5/5! + · · ·     valid for all x

cosh ix = cos x     cos ix = cosh x
sinh ix = i sin x   sin ix = i sinh x

tanh x = sinh x/cosh x     sech x = 1/cosh x
coth x = cosh x/sinh x     cosech x = 1/sinh x

cosh^2 x − sinh^2 x = 1

For large positive x:   cosh x ≈ sinh x → e^x/2,   tanh x → 1

For large negative x:   cosh x ≈ −sinh x → e^{−x}/2,   tanh x → −1

Relations of the functions

sinh(−x) = −sinh x     sech(−x) = sech x
cosh(−x) = cosh x      cosech(−x) = −cosech x
tanh(−x) = −tanh x     coth(−x) = −coth x

sinh x = 2 tanh(x/2)/[1 − tanh^2(x/2)] = tanh x/√(1 − tanh^2 x)

cosh x = [1 + tanh^2(x/2)]/[1 − tanh^2(x/2)] = 1/√(1 − tanh^2 x)

tanh x = √(1 − sech^2 x)       sech x = √(1 − tanh^2 x)

coth x = √(cosech^2 x + 1)     cosech x = √(coth^2 x − 1)

sinh(x/2) = √[(cosh x − 1)/2]     cosh(x/2) = √[(cosh x + 1)/2]

tanh(x/2) = (cosh x − 1)/sinh x = sinh x/(cosh x + 1)

sinh(2x) = 2 sinh x cosh x     tanh(2x) = 2 tanh x/(1 + tanh^2 x)

cosh(2x) = cosh^2 x + sinh^2 x = 2 cosh^2 x − 1 = 1 + 2 sinh^2 x

sinh(3x) = 3 sinh x + 4 sinh^3 x     cosh 3x = 4 cosh^3 x − 3 cosh x

tanh(3x) = (3 tanh x + tanh^3 x)/(1 + 3 tanh^2 x)

sinh(x ± y) = sinh x cosh y ± cosh x sinh y

cosh(x ± y) = cosh x cosh y ± sinh x sinh y

tanh(x ± y) = (tanh x ± tanh y)/(1 ± tanh x tanh y)

sinh x + sinh y = 2 sinh[(x + y)/2] cosh[(x − y)/2]     cosh x + cosh y = 2 cosh[(x + y)/2] cosh[(x − y)/2]

sinh x − sinh y = 2 cosh[(x + y)/2] sinh[(x − y)/2]     cosh x − cosh y = 2 sinh[(x + y)/2] sinh[(x − y)/2]

cosh x ± sinh x = [1 ± tanh(x/2)]/[1 ∓ tanh(x/2)] = e^{±x}

tanh x ± tanh y = sinh(x ± y)/(cosh x cosh y)

coth x ± coth y = ±sinh(x ± y)/(sinh x sinh y)

Inverse functions

sinh^{−1}(x/a) = ln[(x + √(x^2 + a^2))/a]     for −∞ < x < ∞

cosh^{−1}(x/a) = ln[(x + √(x^2 − a^2))/a]     for x ≥ a

tanh^{−1}(x/a) = (1/2) ln[(a + x)/(a − x)]     for x^2 < a^2

coth^{−1}(x/a) = (1/2) ln[(x + a)/(x − a)]     for x^2 > a^2

sech^{−1}(x/a) = ln[ a/x + √(a^2/x^2 − 1) ]    for 0 < x ≤ a

cosech^{−1}(x/a) = ln[ a/x + √(a^2/x^2 + 1) ]  for x ≠ 0

8. Limits

n^c x^n → 0 as n → ∞ if |x| < 1 (any fixed c)

x^n/n! → 0 as n → ∞ (any fixed x)

(1 + x/n)^n → e^x as n → ∞,     x ln x → 0 as x → 0

If f(a) = g(a) = 0 then   lim_{x→a} f(x)/g(x) = f′(a)/g′(a)   (l'Hôpital's rule)

9. Differentiation

(uv)′ = u′v + uv′,     (u/v)′ = (u′v − uv′)/v^2

(uv)^(n) = u^(n)v + n u^(n−1)v^(1) + · · · + ^nC_r u^(n−r)v^(r) + · · · + uv^(n)     (Leibniz Theorem)

where ^nC_r ≡ n!/[r!(n − r)!]

d/dx (sin x) = cos x                 d/dx (sinh x) = cosh x
d/dx (cos x) = −sin x                d/dx (cosh x) = sinh x
d/dx (tan x) = sec^2 x               d/dx (tanh x) = sech^2 x
d/dx (sec x) = sec x tan x           d/dx (sech x) = −sech x tanh x
d/dx (cot x) = −cosec^2 x            d/dx (coth x) = −cosech^2 x
d/dx (cosec x) = −cosec x cot x      d/dx (cosech x) = −cosech x coth x

10. Integration

Standard forms

∫ x^n dx = x^{n+1}/(n + 1) + c     for n ≠ −1

∫ (1/x) dx = ln x + c              ∫ ln x dx = x(ln x − 1) + c

∫ e^{ax} dx = (1/a) e^{ax} + c     ∫ x e^{ax} dx = e^{ax}(x/a − 1/a^2) + c

∫ x ln x dx = (x^2/2)(ln x − 1/2) + c

∫ 1/(a^2 + x^2) dx = (1/a) tan^{−1}(x/a) + c

∫ 1/(a^2 − x^2) dx = (1/a) tanh^{−1}(x/a) + c = (1/2a) ln[(a + x)/(a − x)] + c     for x^2 < a^2

∫ 1/(x^2 − a^2) dx = −(1/a) coth^{−1}(x/a) + c = (1/2a) ln[(x − a)/(x + a)] + c     for x^2 > a^2

∫ x/(x^2 ± a^2)^n dx = −1/[2(n − 1)(x^2 ± a^2)^{n−1}] + c     for n ≠ 1

∫ x/(x^2 ± a^2) dx = (1/2) ln(x^2 ± a^2) + c

∫ 1/√(a^2 − x^2) dx = sin^{−1}(x/a) + c

∫ 1/√(x^2 ± a^2) dx = ln[ x + √(x^2 ± a^2) ] + c

∫ x/√(x^2 ± a^2) dx = √(x^2 ± a^2) + c

∫ √(a^2 − x^2) dx = (1/2)[ x√(a^2 − x^2) + a^2 sin^{−1}(x/a) ] + c

∫_0^∞ 1/[(1 + x) x^p] dx = π cosec pπ     for p < 1

∫_0^∞ cos(x^2) dx = ∫_0^∞ sin(x^2) dx = (1/2)√(π/2)

∫_{−∞}^{∞} exp(−x^2/2σ^2) dx = σ√(2π)

∫_{−∞}^{∞} x^n exp(−x^2/2σ^2) dx = 1 × 3 × 5 × · · · × (n − 1) σ^{n+1} √(2π)  for n ≥ 2 and even;  = 0  for n ≥ 1 and odd

∫ sin x dx = −cos x + c                       ∫ sinh x dx = cosh x + c
∫ cos x dx = sin x + c                        ∫ cosh x dx = sinh x + c
∫ tan x dx = −ln(cos x) + c                   ∫ tanh x dx = ln(cosh x) + c
∫ cosec x dx = ln(cosec x − cot x) + c        ∫ cosech x dx = ln[tanh(x/2)] + c
∫ sec x dx = ln(sec x + tan x) + c            ∫ sech x dx = 2 tan^{−1}(e^x) + c
∫ cot x dx = ln(sin x) + c                    ∫ coth x dx = ln(sinh x) + c

∫ sin mx sin nx dx = sin(m − n)x/[2(m − n)] − sin(m + n)x/[2(m + n)] + c     if m^2 ≠ n^2

∫ cos mx cos nx dx = sin(m − n)x/[2(m − n)] + sin(m + n)x/[2(m + n)] + c     if m^2 ≠ n^2

Standard substitutions

If the integrand is a function of:     substitute:

(a^2 − x^2) or √(a^2 − x^2)     x = a sin θ or x = a cos θ
(x^2 + a^2) or √(x^2 + a^2)     x = a tan θ or x = a sinh θ
(x^2 − a^2) or √(x^2 − a^2)     x = a sec θ or x = a cosh θ

If the integrand is a rational function of sin x or cos x or both, substitute t = tan(x/2) and use the results:

sin x = 2t/(1 + t^2)     cos x = (1 − t^2)/(1 + t^2)     dx = 2 dt/(1 + t^2).

If the integrand is of the form: substitute:

∫ dx/[(ax + b)√(px + q)]             px + q = u^2

∫ dx/[(ax + b)√(px^2 + qx + r)]      ax + b = 1/u

Integration by parts

∫_a^b u dv = [uv]_a^b − ∫_a^b v du

Differentiation of an integral

If f(x, α) is a function of x containing a parameter α and the limits of integration a and b are functions of α then

d/dα ∫_{a(α)}^{b(α)} f(x, α) dx = f(b, α) db/dα − f(a, α) da/dα + ∫_{a(α)}^{b(α)} ∂f(x, α)/∂α dx.

Special case,

d/dx ∫_a^x f(y) dy = f(x).

Dirac δ-'function'

δ(t − τ) = (1/2π) ∫_{−∞}^{∞} exp[iω(t − τ)] dω.

If f(t) is an arbitrary function of t then ∫_{−∞}^{∞} δ(t − τ) f(t) dt = f(τ).

δ(t) = 0 if t ≠ 0,   also   ∫_{−∞}^{∞} δ(t) dt = 1

Reduction formulae

Factorials

n! = n(n − 1)(n − 2) . . . 1,   0! = 1.

Stirling's formula for large n:   ln(n!) ≈ n ln n − n.

For any p > −1,   ∫_0^∞ x^p e^{−x} dx = p ∫_0^∞ x^{p−1} e^{−x} dx = p!.   (−1/2)! = √π,   (1/2)! = √π/2,   etc.

For any p, q > −1,   ∫_0^1 x^p (1 − x)^q dx = p! q!/(p + q + 1)!

Trigonometrical

If m, n are integers,

∫_0^{π/2} sin^m θ cos^n θ dθ = [(m − 1)/(m + n)] ∫_0^{π/2} sin^{m−2}θ cos^n θ dθ = [(n − 1)/(m + n)] ∫_0^{π/2} sin^m θ cos^{n−2}θ dθ

and can therefore be reduced eventually to one of the following integrals

∫_0^{π/2} sin θ cos θ dθ = 1/2,   ∫_0^{π/2} sin θ dθ = 1,   ∫_0^{π/2} cos θ dθ = 1,   ∫_0^{π/2} dθ = π/2.

Other

If I_n = ∫_0^∞ x^n exp(−αx^2) dx   then   I_n = [(n − 1)/(2α)] I_{n−2},   I_0 = (1/2)√(π/α),   I_1 = 1/(2α).

11. Differential Equations

Diffusion (conduction) equation

∂ψ/∂t = κ ∇^2 ψ

Wave equation

∇^2 ψ = (1/c^2) ∂^2ψ/∂t^2

Legendre’s equation

(1 − x^2) d^2y/dx^2 − 2x dy/dx + l(l + 1)y = 0,

solutions of which are Legendre polynomials P_l(x), where P_l(x) = [1/(2^l l!)] (d/dx)^l (x^2 − 1)^l (Rodrigues' formula), so

P_0(x) = 1,   P_1(x) = x,   P_2(x) = (1/2)(3x^2 − 1),   etc.

Recursion relation

P_l(x) = (1/l) [ (2l − 1) x P_{l−1}(x) − (l − 1) P_{l−2}(x) ]
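The recursion relation translates directly into a few lines of code, starting from P_0(x) = 1 and P_1(x) = x. The sketch below uses only the standard library; x = 0.3 and l_max = 4 are arbitrary illustrative choices.

# Minimal sketch: Legendre polynomials by the recursion relation above.
def legendre(l_max, x):
    P = [1.0, x]                      # P_0, P_1
    for l in range(2, l_max + 1):
        P.append(((2*l - 1) * x * P[l-1] - (l - 1) * P[l-2]) / l)
    return P

x = 0.3
print(legendre(4, x))
print(0.5 * (3 * x**2 - 1))           # matches P_2(x) from the explicit formula above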

Orthogonality

∫_{−1}^{1} P_l(x) P_{l′}(x) dx = [2/(2l + 1)] δ_{ll′}

Bessel's equation

x^2 d^2y/dx^2 + x dy/dx + (x^2 − m^2)y = 0,

solutions of which are Bessel functions J_m(x) of order m.

Series form of Bessel functions of the first kind

J_m(x) = ∑_{k=0}^{∞} (−1)^k (x/2)^{m+2k} / [k! (m + k)!]     (integer m).

The same general form holds for non-integer m > 0.
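A truncated form of this series can be compared against a library Bessel function; the sketch below assumes scipy is available and truncates at k = 20 (m = 2 and x = 3.0 are arbitrary choices).

# Minimal sketch: truncated series for J_m versus scipy's Bessel function.
import math
from scipy.special import jv

m, x = 2, 3.0
series = sum((-1)**k * (x / 2)**(m + 2*k) / (math.factorial(k) * math.factorial(m + k))
             for k in range(21))
print(series, jv(m, x))   # the two values agree closely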

Laplace's equation

∇^2 u = 0

If expressed in two-dimensional polar coordinates (see section 4), a solution is

u(ρ, ϕ) = [ Aρ^n + Bρ^{−n} ] [ C exp(inϕ) + D exp(−inϕ) ]

where A, B, C, D are constants and n is a real integer.

If expressed in three-dimensional polar coordinates (see section 4), a solution is

u(r, θ, ϕ) = [ Ar^l + Br^{−(l+1)} ] P_l^{|m|}(cos θ) [ C sin mϕ + D cos mϕ ]

where l and m are integers with l ≥ |m| ≥ 0; A, B, C, D are constants;

P_l^{|m|}(cos θ) = sin^{|m|} θ [d/d(cos θ)]^{|m|} P_l(cos θ)

is the associated Legendre polynomial.

P_l^0(1) = 1.

If expressed in cylindrical polar coordinates (see section 4), a solution is

u(ρ, ϕ, z) = J_m(nρ) [ A cos mϕ + B sin mϕ ] [ C exp(nz) + D exp(−nz) ]

where m and n are integers; A, B, C, D are constants.

Spherical harmonics

The normalized solutions Y_l^m(θ, ϕ) of the equation

[ (1/sin θ) ∂/∂θ ( sin θ ∂/∂θ ) + (1/sin^2 θ) ∂^2/∂ϕ^2 ] Y_l^m + l(l + 1) Y_l^m = 0

are called spherical harmonics, and have values given by

Y_l^m(θ, ϕ) = √[ (2l + 1)/4π × (l − |m|)!/(l + |m|)! ] P_l^{|m|}(cos θ) e^{imϕ} × { (−1)^m for m ≥ 0;  1 for m < 0 }

i.e.,   Y_0^0 = √(1/4π),   Y_1^0 = √(3/4π) cos θ,   Y_1^{±1} = ∓√(3/8π) sin θ e^{±iϕ},   etc.

Orthogonality

∫_{4π} Y_l^{m*} Y_{l′}^{m′} dΩ = δ_{ll′} δ_{mm′}

12. Calculus of Variations

The condition for I = ∫_a^b F(y, y′, x) dx to have a stationary value is

∂F/∂y = (d/dx)(∂F/∂y′),   where y′ = dy/dx.

This is the Euler–Lagrange equation.

13. Functions of Several Variables

If φ = f(x, y, z, . . .) then ∂φ/∂x implies differentiation with respect to x keeping y, z, . . . constant.

dφ = (∂φ/∂x) dx + (∂φ/∂y) dy + (∂φ/∂z) dz + · · ·   and   δφ ≈ (∂φ/∂x) δx + (∂φ/∂y) δy + (∂φ/∂z) δz + · · ·

where x, y, z, . . . are independent variables. ∂φ/∂x is also written as (∂φ/∂x)_{y,...} when the variables kept constant need to be stated explicitly.

If φ is a well-behaved function then ∂^2φ/∂x∂y = ∂^2φ/∂y∂x, etc.

If φ = f(x, y),

(∂φ/∂x)_y = 1/(∂x/∂φ)_y,     (∂x/∂y)_φ (∂y/∂φ)_x (∂φ/∂x)_y = −1.

Taylor series for two variables

If φ(x, y) is well-behaved in the vicinity of x = a, y = b then it has a Taylor series

φ(x, y) = φ(a + u, b + v) = φ(a, b) + u ∂φ/∂x + v ∂φ/∂y + (1/2!)[ u^2 ∂^2φ/∂x^2 + 2uv ∂^2φ/∂x∂y + v^2 ∂^2φ/∂y^2 ] + · · ·

where x = a + u, y = b + v and the differential coefficients are evaluated at x = a, y = b.

Stationary points

A function φ = f(x, y) has a stationary point when ∂φ/∂x = ∂φ/∂y = 0. Unless ∂^2φ/∂x^2 = ∂^2φ/∂y^2 = ∂^2φ/∂x∂y = 0, the following conditions determine whether it is a minimum, a maximum or a saddle point.

Minimum:   ∂^2φ/∂x^2 > 0, or ∂^2φ/∂y^2 > 0,   and   (∂^2φ/∂x^2)(∂^2φ/∂y^2) > (∂^2φ/∂x∂y)^2

Maximum:   ∂^2φ/∂x^2 < 0, or ∂^2φ/∂y^2 < 0,   and   (∂^2φ/∂x^2)(∂^2φ/∂y^2) > (∂^2φ/∂x∂y)^2

Saddle point:   (∂^2φ/∂x^2)(∂^2φ/∂y^2) < (∂^2φ/∂x∂y)^2

If ∂^2φ/∂x^2 = ∂^2φ/∂y^2 = ∂^2φ/∂x∂y = 0 the character of the turning point is determined by the next higher derivative.

Changing variables: the chain rule

If φ = f (x, y, . . .) and the variables x, y, . . . are functions of independent variables u, v, . . . then

∂φ/∂u = (∂φ/∂x)(∂x/∂u) + (∂φ/∂y)(∂y/∂u) + · · ·

∂φ/∂v = (∂φ/∂x)(∂x/∂v) + (∂φ/∂y)(∂y/∂v) + · · ·

etc.

Changing variables in surface and volume integrals – Jacobians

If an area A in the x, y plane maps into an area A′ in the u, v plane then

∫_A f(x, y) dx dy = ∫_{A′} f(u, v) J du dv     where     J = | ∂x/∂u  ∂x/∂v |
                                                             | ∂y/∂u  ∂y/∂v |

The Jacobian J is also written as ∂(x, y)/∂(u, v). The corresponding formula for volume integrals is

∫_V f(x, y, z) dx dy dz = ∫_{V′} f(u, v, w) J du dv dw     where now     J = | ∂x/∂u  ∂x/∂v  ∂x/∂w |
                                                                             | ∂y/∂u  ∂y/∂v  ∂y/∂w |
                                                                             | ∂z/∂u  ∂z/∂v  ∂z/∂w |

14. Fourier Series and Transforms

Fourier series

If y(x) is a function defined in the range −π ≤ x ≤ π then

y(x) ≈ c_0 + ∑_{m=1}^{M} c_m cos mx + ∑_{m=1}^{M′} s_m sin mx

where the coefficients are

c_0 = (1/2π) ∫_{−π}^{π} y(x) dx

c_m = (1/π) ∫_{−π}^{π} y(x) cos mx dx     (m = 1, . . . , M)

s_m = (1/π) ∫_{−π}^{π} y(x) sin mx dx     (m = 1, . . . , M′)

with convergence to y(x) as M, M′ → ∞ for all points where y(x) is continuous.

Fourier series for other ranges

Variable t, range 0 ≤ t ≤ T (i.e., a periodic function of time with period T, frequency ω = 2π/T):

y(t) ≈ c_0 + ∑ c_m cos mωt + ∑ s_m sin mωt

where

c_0 = (ω/2π) ∫_0^T y(t) dt,   c_m = (ω/π) ∫_0^T y(t) cos mωt dt,   s_m = (ω/π) ∫_0^T y(t) sin mωt dt.

Variable x, range 0 ≤ x ≤ L:

y(x) ≈ c_0 + ∑ c_m cos(2mπx/L) + ∑ s_m sin(2mπx/L)

where

c_0 = (1/L) ∫_0^L y(x) dx,   c_m = (2/L) ∫_0^L y(x) cos(2mπx/L) dx,   s_m = (2/L) ∫_0^L y(x) sin(2mπx/L) dx.
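As a numerical illustration of the coefficient formulae for the range −π ≤ x ≤ π above, the sketch below (assuming numpy is available; the test function y(x) = x and the sampling density are arbitrary choices) recovers s_m = 2(−1)^{m+1}/m and c_m ≈ 0 for the odd function y(x) = x.

# Minimal sketch: Fourier coefficients of y(x) = x on (-pi, pi) by a Riemann sum.
import numpy as np

x = np.linspace(-np.pi, np.pi, 200001)
dx = x[1] - x[0]
y = x

for m in range(1, 5):
    s_m = np.sum(y * np.sin(m * x)) * dx / np.pi
    c_m = np.sum(y * np.cos(m * x)) * dx / np.pi
    print(m, s_m, 2 * (-1)**(m + 1) / m, c_m)   # s_m -> 2(-1)^(m+1)/m, c_m -> 0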

Fourier series for odd and even functions

If y(x) is an odd (anti-symmetric) function [i.e., y(−x) = −y(x)] defined in the range −π ≤ x ≤ π, then only sines are required in the Fourier series and s_m = (2/π) ∫_0^π y(x) sin mx dx. If, in addition, y(x) is symmetric about x = π/2, then the coefficients s_m are given by s_m = 0 (for m even), s_m = (4/π) ∫_0^{π/2} y(x) sin mx dx (for m odd). If y(x) is an even (symmetric) function [i.e., y(−x) = y(x)] defined in the range −π ≤ x ≤ π, then only constant and cosine terms are required in the Fourier series and c_0 = (1/π) ∫_0^π y(x) dx, c_m = (2/π) ∫_0^π y(x) cos mx dx. If, in addition, y(x) is anti-symmetric about x = π/2, then c_0 = 0 and the coefficients c_m are given by c_m = 0 (for m even), c_m = (4/π) ∫_0^{π/2} y(x) cos mx dx (for m odd).

[These results also apply to Fourier series with more general ranges provided appropriate changes are made to the limits of integration.]

Complex form of Fourier series

If y(x) is a function defined in the range −π ≤ x ≤ π then

y(x) ≈ ∑_{m=−M}^{M} C_m e^{imx},     C_m = (1/2π) ∫_{−π}^{π} y(x) e^{−imx} dx

with m taking all integer values in the range ±M. This approximation converges to y(x) as M → ∞ under the same conditions as the real form.

For other ranges the formulae are:

Variable t, range 0 ≤ t ≤ T, frequency ω = 2π/T:

y(t) = ∑_{m=−∞}^{∞} C_m e^{imωt},     C_m = (ω/2π) ∫_0^T y(t) e^{−imωt} dt.

Variable x′, range 0 ≤ x′ ≤ L:

y(x′) = ∑_{m=−∞}^{∞} C_m e^{i2mπx′/L},     C_m = (1/L) ∫_0^L y(x′) e^{−i2mπx′/L} dx′.

Discrete Fourier series

If y(x) is a function defined in the range −π ≤ x ≤ π which is sampled in the 2N equally spaced points x_n = nπ/N [n = −(N − 1) . . . N], then

y(x_n) = c_0 + c_1 cos x_n + c_2 cos 2x_n + · · · + c_{N−1} cos(N − 1)x_n + c_N cos Nx_n
       + s_1 sin x_n + s_2 sin 2x_n + · · · + s_{N−1} sin(N − 1)x_n + s_N sin Nx_n

where the coefficients are

c_0 = (1/2N) ∑ y(x_n)

c_m = (1/N) ∑ y(x_n) cos mx_n     (m = 1, . . . , N − 1)

c_N = (1/2N) ∑ y(x_n) cos Nx_n

s_m = (1/N) ∑ y(x_n) sin mx_n     (m = 1, . . . , N − 1)

s_N = (1/2N) ∑ y(x_n) sin Nx_n

each summation being over the 2N sampling points x_n.

Fourier transforms

If y(t) is a function defined in the range −∞ ≤ t ≤ ∞ then the Fourier transform ŷ(ω) is defined by the equations

y(t) = (1/2π) ∫_{−∞}^{∞} ŷ(ω) e^{iωt} dω,     ŷ(ω) = ∫_{−∞}^{∞} y(t) e^{−iωt} dt.

If ω is replaced by 2πf, where f is the frequency, this relationship becomes

y(t) = ∫_{−∞}^{∞} ŷ(f) e^{i2πft} df,     ŷ(f) = ∫_{−∞}^{∞} y(t) e^{−i2πft} dt.

If y(t) is symmetric about t = 0 then

y(t) = (1/π) ∫_0^∞ ŷ(ω) cos ωt dω,     ŷ(ω) = 2 ∫_0^∞ y(t) cos ωt dt.

If y(t) is anti-symmetric about t = 0 then

y(t) = (1/π) ∫_0^∞ ŷ(ω) sin ωt dω,     ŷ(ω) = 2 ∫_0^∞ y(t) sin ωt dt.

Specific cases

y(t) = a for |t| ≤ τ;  = 0 for |t| > τ     ('Top Hat'),     ŷ(ω) = 2a sin(ωτ)/ω ≡ 2aτ sinc(ωτ)

where sinc(x) = sin(x)/x

y(t) = a(1 − |t|/τ) for |t| ≤ τ;  = 0 for |t| > τ     ('Saw-tooth'),     ŷ(ω) = (2a/ω^2τ)(1 − cos ωτ) = aτ sinc^2(ωτ/2)

y(t) = exp(−t^2/t_0^2)     (Gaussian),     ŷ(ω) = t_0 √π exp(−ω^2 t_0^2/4)

y(t) = f(t) e^{iω_0 t}     (modulated function),     ŷ(ω) = f̂(ω − ω_0)

y(t) = ∑_{m=−∞}^{∞} δ(t − mτ)     (sampling function),     ŷ(ω) = ∑_{n=−∞}^{∞} δ(ω − 2πn/τ)

Convolution theorem

If z(t) = ∫_{−∞}^{∞} x(τ) y(t − τ) dτ = ∫_{−∞}^{∞} x(t − τ) y(τ) dτ ≡ x(t) ∗ y(t)     then     ẑ(ω) = x̂(ω) ŷ(ω).

Conversely, the transform of the product xy is x̂ ∗ ŷ.

Parseval's theorem

∫_{−∞}^{∞} y*(t) y(t) dt = (1/2π) ∫_{−∞}^{∞} ŷ*(ω) ŷ(ω) dω     (if ŷ is normalised as in the definition above)

Fourier transforms in two dimensions

V̂(k) = ∫ V(r) e^{−ik·r} d^2r
      = 2π ∫_0^∞ r V(r) J_0(kr) dr     if azimuthally symmetric

Fourier transforms in three dimensions

V̂(k) = ∫ V(r) e^{−ik·r} d^3r
      = (4π/k) ∫_0^∞ V(r) r sin kr dr     if spherically symmetric

V(r) = [1/(2π)^3] ∫ V̂(k) e^{ik·r} d^3k

Examples

V(r)                 V̂(k)
1/(4πr)              1/k^2
e^{−λr}/(4πr)        1/(k^2 + λ^2)
∇V(r)                ik V̂(k)
∇^2 V(r)             −k^2 V̂(k)

15. Laplace Transforms

If y(t) is a function defined for t ≥ 0, the Laplace transform ȳ(s) is defined by the equation

ȳ(s) = L{y(t)} = ∫_0^∞ e^{−st} y(t) dt

Function y(t) (t > 0)                         Transform ȳ(s)

δ(t)                                          1                          Delta function
θ(t)                                          1/s                        Unit step function
t^n                                           n!/s^{n+1}
t^{1/2}                                       (1/2)√(π/s^3)
t^{−1/2}                                      √(π/s)
e^{−at}                                       1/(s + a)
sin ωt                                        ω/(s^2 + ω^2)
cos ωt                                        s/(s^2 + ω^2)
sinh ωt                                       ω/(s^2 − ω^2)
cosh ωt                                       s/(s^2 − ω^2)
e^{−at} y(t)                                  ȳ(s + a)
y(t − τ) θ(t − τ)                             e^{−sτ} ȳ(s)
t y(t)                                        −dȳ/ds
dy/dt                                         s ȳ(s) − y(0)
d^n y/dt^n                                    s^n ȳ(s) − s^{n−1} y(0) − s^{n−2} (dy/dt)_0 − · · · − (d^{n−1}y/dt^{n−1})_0
∫_0^t y(τ) dτ                                 ȳ(s)/s
∫_0^t x(τ) y(t − τ) dτ = ∫_0^t x(t − τ) y(τ) dτ      x̄(s) ȳ(s)          Convolution theorem

[Note that if y(t) = 0 for t < 0 then the Fourier transform of y(t) is ŷ(ω) = ȳ(iω).]
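One entry of the table can be spot-checked by direct quadrature; the sketch below assumes scipy is available, and the values s = 2, ω = 3 are arbitrary illustrative choices.

# Minimal sketch: check L{sin(omega t)} = omega/(s^2 + omega^2) numerically.
import numpy as np
from scipy.integrate import quad

s, omega = 2.0, 3.0
value, _ = quad(lambda t: np.exp(-s * t) * np.sin(omega * t), 0, np.inf)
print(value, omega / (s**2 + omega**2))   # both ~0.2308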


16. Numerical Analysis

Finding the zeros of equations

If the equation is y = f(x) and x_n is an approximation to the root then either

x_{n+1} = x_n − f(x_n)/f′(x_n)     (Newton)

or,

x_{n+1} = x_n − [ (x_n − x_{n−1})/(f(x_n) − f(x_{n−1})) ] f(x_n)     (Linear interpolation)

are, in general, better approximations.
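Both iterations are straightforward to code; the sketch below uses only the standard library and applies them to f(x) = x^2 − 2 (the test function, starting values and iteration counts are arbitrary choices).

# Minimal sketch: Newton's method and the linear-interpolation (secant) rule.
def newton(f, fprime, x, steps=6):
    for _ in range(steps):
        x = x - f(x) / fprime(x)
    return x

def secant(f, x0, x1, steps=10):        # the "linear interpolation" rule above
    for _ in range(steps):
        x0, x1 = x1, x1 - (x1 - x0) / (f(x1) - f(x0)) * f(x1)
    return x1

f = lambda x: x**2 - 2
print(newton(f, lambda x: 2*x, 1.0))    # ~1.41421356...
print(secant(f, 1.0, 2.0))              # same root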

Numerical integration of differential equations

If dy/dx = f(x, y) then

y_{n+1} = y_n + h f(x_n, y_n)   where h = x_{n+1} − x_n     (Euler method)

Putting y*_{n+1} = y_n + h f(x_n, y_n)     (improved Euler method)

then y_{n+1} = y_n + h[ f(x_n, y_n) + f(x_{n+1}, y*_{n+1}) ]/2
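A minimal sketch of both steps, applied to the test problem dy/dx = −y, y(0) = 1 integrated to x = 1 with step h = 0.1 (all arbitrary choices; the exact answer is e^−1).

# Minimal sketch: Euler and improved Euler steps for dy/dx = f(x, y).
import math

def euler(f, x, y, h, n):
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

def improved_euler(f, x, y, h, n):
    for _ in range(n):
        y_star = y + h * f(x, y)                        # predictor
        y += 0.5 * h * (f(x, y) + f(x + h, y_star))     # corrector
        x += h
    return y

f = lambda x, y: -y
print(euler(f, 0.0, 1.0, 0.1, 10), improved_euler(f, 0.0, 1.0, 0.1, 10), math.exp(-1))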

Central difference notation

If y(x) is tabulated at equal intervals of x, where h is the interval, then

δy_{n+1/2} = y_{n+1} − y_n   and   δ^2 y_n = δy_{n+1/2} − δy_{n−1/2}

Approximating to derivatives

(dy/dx)_n ≈ (y_{n+1} − y_n)/h ≈ (y_n − y_{n−1})/h ≈ (δy_{n+1/2} + δy_{n−1/2})/2h     where h = x_{n+1} − x_n

(d^2y/dx^2)_n ≈ (y_{n+1} − 2y_n + y_{n−1})/h^2 = δ^2 y_n/h^2

Interpolation: Everett's formula

y(x) = y(x_0 + θh) ≈ θ̄ y_0 + θ y_1 + (1/3!) θ̄(θ̄^2 − 1) δ^2 y_0 + (1/3!) θ(θ^2 − 1) δ^2 y_1 + · · ·

where θ is the fraction of the interval h (= x_{n+1} − x_n) between the sampling points and θ̄ = 1 − θ. The first two terms represent linear interpolation.

Numerical evaluation of definite integrals

Trapezoidal rule

The interval of integration is divided into n equal sub-intervals, each of width h; then

∫_a^b f(x) dx ≈ h [ (1/2) f(a) + f(x_1) + · · · + f(x_j) + · · · + (1/2) f(b) ]

where h = (b − a)/n and x_j = a + jh.

Simpson's rule

The interval of integration is divided into an even number (say 2n) of equal sub-intervals, each of width h = (b − a)/2n; then

∫_a^b f(x) dx ≈ (h/3) [ f(a) + 4f(x_1) + 2f(x_2) + 4f(x_3) + · · · + 2f(x_{2n−2}) + 4f(x_{2n−1}) + f(b) ]

Gauss's integration formulae

These have the general form   ∫_{−1}^{1} y(x) dx ≈ ∑_{i=1}^{n} c_i y(x_i)

For n = 2:   x_i = ±0.5773;   c_i = 1, 1   (exact for any cubic).

For n = 3:   x_i = −0.7746, 0.0, 0.7746;   c_i = 0.555, 0.888, 0.555   (exact for any quintic).
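The sketch below compares the trapezoidal rule, Simpson's rule and the two-point Gauss formula on the test integral ∫_{−1}^{1} e^x dx = e − e^{−1} (the integrand and the number of sub-intervals are arbitrary choices; standard library only).

# Minimal sketch: the three quadrature rules above on int_{-1}^{1} exp(x) dx.
import math

f = math.exp
exact = math.e - 1 / math.e

n = 8                                   # number of sub-intervals (even, for Simpson)
h = 2.0 / n
xs = [-1.0 + j * h for j in range(n + 1)]
trap = h * (0.5 * f(xs[0]) + sum(f(x) for x in xs[1:-1]) + 0.5 * f(xs[-1]))
simp = (h / 3) * (f(xs[0]) + f(xs[-1])
                  + 4 * sum(f(xs[j]) for j in range(1, n, 2))
                  + 2 * sum(f(xs[j]) for j in range(2, n, 2)))

# Two-point Gauss rule: x_i = +-1/sqrt(3) ~ +-0.5773, c_i = 1
gauss2 = f(-1 / math.sqrt(3)) + f(1 / math.sqrt(3))

print(exact, trap, simp, gauss2)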

17. Treatment of Random Errors

Sample mean   x̄ = (1/n)(x_1 + x_2 + · · · + x_n)

Residual:   d = x − x̄

Standard deviation of sample:   s = (1/√n)(d_1^2 + d_2^2 + · · · + d_n^2)^{1/2}

Standard deviation of distribution:   σ ≈ [1/√(n − 1)] (d_1^2 + d_2^2 + · · · + d_n^2)^{1/2}

Standard deviation of mean:   σ_m = σ/√n = [1/√(n(n − 1))] (d_1^2 + d_2^2 + · · · + d_n^2)^{1/2} = [1/√(n(n − 1))] [ ∑ x_i^2 − (1/n)(∑ x_i)^2 ]^{1/2}

Result of n measurements is quoted as x̄ ± σ_m.
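A minimal sketch of these definitions for a small set of readings (standard library only; the readings are arbitrary illustrative numbers):

# Minimal sketch: sample standard deviation, distribution estimate, and error on the mean.
import math

x = [9.7, 10.1, 9.9, 10.3, 10.0]
n = len(x)
mean = sum(x) / n
d = [xi - mean for xi in x]

s = math.sqrt(sum(di**2 for di in d) / n)               # standard deviation of sample
sigma = math.sqrt(sum(di**2 for di in d) / (n - 1))     # estimate for the distribution
sigma_m = sigma / math.sqrt(n)                          # standard deviation of the mean

print(f"{mean} +/- {sigma_m}")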

Range method

A quick but crude method of estimating σ is to find the range r of a set of n readings, i.e., the difference between the largest and smallest values, then

σ ≈ r/√n.

This is usually adequate for n less than about 12.

Combination of errors

If Z = Z(A, B, . . .) (with A, B, etc. independent) then

(σ_Z)^2 = (∂Z/∂A)^2 σ_A^2 + (∂Z/∂B)^2 σ_B^2 + · · ·

So if

(i)   Z = A ± B ± C,     (σ_Z)^2 = (σ_A)^2 + (σ_B)^2 + (σ_C)^2

(ii)  Z = AB or A/B,     (σ_Z/Z)^2 = (σ_A/A)^2 + (σ_B/B)^2

(iii) Z = A^m,           σ_Z/Z = m σ_A/A

(iv)  Z = ln A,          σ_Z = σ_A/A

(v)   Z = exp A,         σ_Z/Z = σ_A

18. Statistics

Mean and Variance

A random variable X has a distribution over some subset x of the real numbers. When the distribution of X is discrete, the probability that X = xi is Pi. When the distribution is continuous, the probability that X lies in an interval δx is f (x)δx, where f (x) is the probability density function.

Mean   µ = E(X) = ∑ P_i x_i   or   ∫ x f(x) dx.

Variance   σ^2 = V(X) = E[(X − µ)^2] = ∑ P_i (x_i − µ)^2   or   ∫ (x − µ)^2 f(x) dx.

Probability distributions

Error function:   erf(x) = (2/√π) ∫_0^x e^{−y^2} dy

Binomial:   f(x) = ^nC_x p^x q^{n−x}   where q = (1 − p),   µ = np,   σ^2 = npq,   p < 1.

Poisson:   f(x) = (µ^x/x!) e^{−µ},   and σ^2 = µ

Normal:   f(x) = [1/(σ√(2π))] exp[ −(x − µ)^2/(2σ^2) ]

Weighted sums of random variables

If W = aX + bY then E(W) = aE(X) + bE(Y). If X and Y are independent then V(W) = a^2 V(X) + b^2 V(Y).

Statistics of a data sample x_1, . . . , x_n

Sample mean   x̄ = (1/n) ∑ x_i

Sample variance   s^2 = (1/n) ∑ (x_i − x̄)^2 = (1/n) ∑ x_i^2 − x̄^2 = E(x^2) − [E(x)]^2

Regression (least squares fitting)

To fit a straight line by least squares to n pairs of points (x_i, y_i), model the observations by y_i = α + β(x_i − x̄) + ε_i, where the ε_i are independent samples of a random variable with zero mean and variance σ^2.

Sample statistics:   s_x^2 = (1/n) ∑ (x_i − x̄)^2,   s_y^2 = (1/n) ∑ (y_i − ȳ)^2,   s_xy^2 = (1/n) ∑ (x_i − x̄)(y_i − ȳ).

Estimators:   α̂ = ȳ,   β̂ = s_xy^2/s_x^2;   E(Y at x) = α̂ + β̂(x − x̄);   σ̂^2 = [n/(n − 2)] × (residual variance),

where residual variance = (1/n) ∑ { y_i − α̂ − β̂(x_i − x̄) }^2 = s_y^2 − s_xy^4/s_x^2.

Estimates for the variances of α̂ and β̂ are σ̂^2/n and σ̂^2/(n s_x^2).

Correlation coefficient:   ρ̂ = r = s_xy^2/(s_x s_y).
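The estimators α̂ and β̂ are easy to compute directly; the sketch below assumes numpy is available and uses five arbitrary (x, y) pairs, comparing against a library fit.

# Minimal sketch: least-squares estimators for y = alpha + beta (x - xbar).
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
n = len(x)

xbar, ybar = x.mean(), y.mean()
sx2 = ((x - xbar)**2).mean()                  # s_x^2
sxy = ((x - xbar) * (y - ybar)).mean()        # s_xy^2

alpha_hat = ybar                              # intercept of the centred model
beta_hat = sxy / sx2                          # slope
print(alpha_hat, beta_hat)
print(np.polyfit(x - xbar, y, 1))             # [beta_hat, alpha_hat] from numpy, for comparison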
