The University of Michigan Industry Program of the College of Engineering

Notes on Mathematics For Engineers

Prepared by the American Nuclear Society. University of Michigan Student Branch

October, 1959

IP-394

Acknowledgments

These notes were prepared by the University of Michigan Student Branch of the American Nuclear Society, Committee on Math Notes, which included R.W. Albrecht, J.M. Carpenter, D.L. Galbraith, E.H. Klevans, and R.J. Mack, all students in the University of Michigan Department of Nuclear Engineering. Assistance was provided by Professors William Kerr and Paul Zweifel.

The translation to LaTeX was done by Alison Chistopherson (class of 2013), with the encouragement of Alex Bielajew, over the Winter and Summer of 2012. Other than this paragraph, everything else is a verbatim transcription of the original document.

Foreword

This set of notes has been compiled with one primary objective in mind: to provide, in one volume, a handy reference for a large number of the commonly-used mathematical formulae, and to do so consistently with respect to notation, definition, and normalization. Many of us keep these results available to us in an excessive number of references, in which the notation or normalization varies, or formulae are so spread out that they are difficult to find, and their use is time-consuming. Short explanations are included, with some examples, to serve two purposes: first, to recall to the user some of the ideas which may have slipped his mind since his detailed study of the material; second, for those who have never studied the material, to make its use at least plausible, and to help in his study of references. No claim can be made that all results anyone ever uses are here, but it is hoped that a sufficient quantity of material is included to make necessary only infrequent use of other references, except for tables, etc. for elementary work. Of course, the user may find it desirable to add some pages of his own. Finally, it is recommended that those unfamiliar with the theory at any point not blindly apply the formulae herein, for this is risky business at best; a text should be studied (and, of course, understood) first.

Contents

1 Orthogonal Functions
    1.1 Importance of Orthogonal Functions
    1.2 Generation of Orthogonal Functions
    1.3 Use of Orthogonal Functions
    1.4 Operational Properties of Some Common Sets of Orthogonal Functions
    1.5 Fourier Series, Range −ℓ ≤ x ≤ ℓ
        1.5.1 Boundary Value Problem Satisfied by Sines and Cosines
        1.5.2 Orthogonal Set
        1.5.3 Expansion in Fourier Series
        1.5.4 Normalization Factors
        1.5.5 Orthonormal Set
        1.5.6 Fourier Series, Range 0 ≤ x ≤ L
        1.5.7 Expansion in Half-range Series
        1.5.8 Full Range Series

2 Legendre Polynomials
    2.1 Generating Function
    2.2 Recurrence Relations
    2.3 Differential Equation Satisfied by Pℓ(x)
    2.4 Rodrigues' Formula
    2.5 Normalizing Factor
    2.6 Orthogonality
    2.7 Expansion in Legendre Polynomials
    2.8 Normalized Legendre Polynomials
    2.9 Expansion in Normalized Polynomials
        2.9.1 A Few Low-Degree Legendre Polynomials and Respective Norms
        2.9.2 Integral Representation of Pℓ(x)
        2.9.3 Bounds on Pℓ(x)

3 Associated Legendre Functions
    3.1 Definition
    3.2 Recurrence Relations
    3.3 Differential Equation Satisfied by Pℓᵐ(x)
    3.4 Expression of Pℓ(cos Θ) in terms of Pℓᵐ(x)
    3.5 Normalizing Factor

4 Spherical Harmonics
    4.1 Definition of Yℓᵐ(Ω)
    4.2 Expression of Pℓ(cos Θ) in terms of the Spherical Harmonics
    4.3 Orthonormality of Yℓᵐ(Ω)
    4.4 Expansion in Spherical Harmonics
    4.5 Differential Equation Satisfied by Yℓᵐ(Ω)
    4.6 Some Low-order Spherical Harmonics
    4.7 A Useful Relationship

5 Laguerre Polynomials
    5.1 Derivative Definition
    5.2 Generating Function
    5.3 Differential Equation Satisfied by Lₙᵅ(x)
    5.4 Orthogonality, Range 0 ≤ x ≤ ∞
    5.5 Expansion in Laguerre Polynomials
    5.6 Expansion of xᵐ in Laguerre Polynomials
    5.7 Recurrence Relations

6 Bessel Functions
    6.1 Differential Equation Satisfied by Bessel Functions
    6.2 General Solutions
    6.3 Series Representation
    6.4 Properties of Bessel Functions
    6.5 Generating Function
    6.6 Recursion Formulae
    6.7 Differential Formulae
    6.8 Orthogonality, Range
    6.9 Expansion in Bessel Functions
    6.10 Bessel Integral Form

7 Modified Bessel Functions
    7.1 Differential Equation Satisfied
    7.2 General Solutions
    7.3 Relation of Modified Bessel to Bessel
    7.4 Properties of Iₙ

8 The Laplace Transformation
    8.1 Introduction
        8.1.1 Description
        8.1.2 Definition
        8.1.3 Existence Conditions
        8.1.4 Analyticity
        8.1.5 Theorems
        8.1.6 Further Properties
    8.2 Examples
        8.2.1 Solving Simultaneous Equations
        8.2.2 Electric Circuit Example
        8.2.3 Transfer Functions
    8.3 Inverse Transformations
        8.3.1 Heaviside Methods
        8.3.2 The Inversion Integral
    8.4 Tables of Transforms
        8.4.1 Analyticity
        8.4.2 Cauchy's Integral Formula
        8.4.3 Regular and Singular Points
    8.5 References

9 Fourier Transforms
    9.1 Definitions
        9.1.1 Basic Definitions
        9.1.2 Range of Definition
        9.1.3 Existence Conditions
    9.2 Fundamental Properties
        9.2.1 Transforms of Derivatives
        9.2.2 Relations Among Infinite Range Transforms
        9.2.3 Transforms of Functions of Two Variables
        9.2.4 Fourier Exponential Transforms of Functions of Three Variables
    9.3 Summary of Fourier Transform Formulae
        9.3.1 Finite Transforms
        9.3.2 Infinite Transforms
    9.4 Types of Problems to which Fourier Transforms May be Applied
    9.5 Inversion of Fourier Transforms
        9.5.1 Finite Range
        9.5.2 Infinite Range
        9.5.3 Inversion of Fourier Exponential Transforms
    9.6 Table of Transforms
    9.7 References

10 Miscellaneous Identities, Definitions, Functions, and Notations
    10.1 Leibnitz Rule
    10.2 General Solution of First Order Linear Differential Equations
    10.3 Identities in Vector Analysis
    10.4 Coordinate Systems
    10.5 Index Notation
    10.6 Examples of Use of Index Notation
    10.7 The Dirac Delta Function
    10.8 Gamma Function
    10.9 Error Function

11 Notes and Conversion Factors
    11.1 Electrical Units
        11.1.1 Electrostatic CGS System
        11.1.2 Electromagnetic CGS System
        11.1.3 Practical System
    11.2 Tables of Transforms
        11.2.1 Energy Relationships
    11.3 Physical Constants and Conversion Factors; Dimensional Analysis

Chapter 1

Orthogonal Functions

In general, orthogonal functions arise in the solution of certain boundary-value problems. The use of the properties of orthogonal functions may often greatly simplify and systematize the solution to such a problem, in addition to providing a natural way of making approximate solutions.

Let us first make clear the concept of orthogonality. We begin at what may seem an improbable starting point. Two vectors, A and B, in a three-dimensional space are said to be "orthogonal" if the dot (inner) product A · B vanishes:
\[
\mathbf{A}\cdot\mathbf{B} = A_1B_1 + A_2B_2 + A_3B_3 = \sum_{i=1}^{3} A_iB_i = 0.
\]
This is easily generalized to a space with more than three dimensions. In an n-dimensional space the concept of orthogonality is unchanged, except that then the sum is over n terms AᵢBᵢ, and we have for orthogonality
\[
\mathbf{A}\cdot\mathbf{B} = A_1B_1 + A_2B_2 + \cdots + A_nB_n = \sum_{i=1}^{n} A_iB_i = 0.
\]
Now one may think of the components of a vector, A₁, A₂, A₃, as the values of a (real) function at three values of its argument; say A₁ = f(r₁), A₂ = f(r₂), A₃ = f(r₃), or, in terms which will make our efforts here more clear, Aᵢ = f(rᵢ) (rᵢ = r₁, r₂, r₃). That is, r has the values r₁, r₂, r₃, and to get Aᵢ, put rᵢ in f(r). We may now think of r as having any number, say n, possible values in some range, so that f(r) evaluated at the various r's generates an n-dimensional vector. The step to considering a function as an infinitely-many-dimensional vector is now a natural one; we allow n to increase without bound, r taking all values in its range.

Let r have some range a ≤ r ≤ b in which it takes on n values such that rⱼ − rⱼ₋₁ = ∆rⱼ, and suppose two such n-dimensional vectors f(rⱼ) and g(rⱼ) are thus generated. The inner product of f with g is generalized as
\[
\sum_{j=1}^{n} f(r_j)\,g(r_j)\,\Delta r_j.
\]
Above, for A · B, rⱼ takes on only the discrete values 1, 2, 3; so ∆r is always unity in that simple case. Now let us consider the case as n → ∞, where r takes all values between a and b. The inner product, if it exists, is then
\[
\lim_{n\to\infty}\sum_{j=1}^{n} f(r_j)\,g(r_j)\,\Delta r_j.
\]
With proper restrictions on ∆rⱼ (max ∆rⱼ → 0) and on the range of r (a ≤ r ≤ b), this is just the limit occurring in the definition of the ordinary integral. Thus we say that the inner product of f(r) with g(r), which is often denoted (f, g), is
\[
(f,g) = \int_a^b f(r)\,g(r)\,dr.
\]
Of course, the range could be infinite. This discussion constitutes the generalization of the dot, or inner, product to functions. Two functions are then by definition orthogonal over the range a ≤ r ≤ b when
\[
(f,g) = \int_a^b f(r)\,g(r)\,dr = 0.
\]
As we shall see, this definition is subject to generalization by the inclusion of a "weight function" p(r), with which
\[
(f,g) = \int_a^b p(r)\,f(r)\,g(r)\,dr.
\]
Here f and g are said to be orthogonal "with respect to weight function p" over the range a ≤ r ≤ b. The function p(r) may of course be unity.

Only the function f(r) ≡ 0, a ≤ r ≤ b, is orthogonal to itself. In general, we denote the inner product of a function with itself as
\[
(f,f) = \int_a^b f^2(r)\,dr = N_f^2,
\]
and call it the norm of the function.

Orthogonal sets may or may not possess two other properties, normality and completeness. A set of orthogonal functions {Uᵢ(u)} is said to be normal or orthonormal if Nᵢ² = 1 for all i.

A set of functions {Uᵢ(u)} orthogonal on the interval u₁ ≤ u ≤ u₂ is complete if there exists no other function orthogonal to all the Uᵢ on the same interval, with respect to the same weight function, if one is involved.
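A small numerical sketch (added in this transcription, not part of the original notes) of the limit-of-sums construction above: the inner product (f, g) approximated as a Riemann sum over f(rⱼ)g(rⱼ)∆rⱼ, here for f(r) = sin(πr) and g(r) = cos(πr) on −1 ≤ r ≤ 1 with weight p(r) = 1. These two functions are orthogonal, and (f, f) = 1.

```python
import math

def inner_product(f, g, a, b, n=20000, p=lambda r: 1.0):
    """Riemann-sum (midpoint rule) approximation of (f, g) = integral of p*f*g over [a, b]."""
    dr = (b - a) / n
    total = 0.0
    for j in range(n):
        r = a + (j + 0.5) * dr  # midpoint of the j-th subinterval
        total += p(r) * f(r) * g(r) * dr
    return total

f = lambda r: math.sin(math.pi * r)
g = lambda r: math.cos(math.pi * r)

orth = inner_product(f, g, -1.0, 1.0)   # should vanish: f and g are orthogonal
norm2 = inner_product(f, f, -1.0, 1.0)  # (f, f) = N_f^2 = 1 for sin(pi*r) on [-1, 1]
```

As n grows the sum approaches the integral definition of (f, g) given above.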

1.1 Importance of Orthogonal Functions

The importance of orthogonal sets in mathematical physics may perhaps be indicated by further considerations of their analogs: orthogonal coordinate vectors. It is true that any N-dimensional vector may be defined in terms of its components along N coordinates, provided that no more than two of the reference coordinates are coplanar. But if the reference coordinates are orthogonal, e.g., Cartesian coordinates, then the equations take a particularly simple form. The situation is somewhat similar when it is desired to expand a function in terms of a set of other functions – it is much simpler if the set is orthogonal.

Completeness is another important property. It is apparent that no two reference axes will suffice for the definition of a vector in 3-dimensional space. The set of two reference axes is not complete in ordinary space, since a third coordinate can be added which is orthogonal to both of them. Addition of this coordinate makes the set complete. The situation with orthogonal functions is exactly analogous. Some authors define a complete set as a set in terms of which any other function defined on the same interval can be expressed.

Some more common sets of orthogonal functions are the sines and cosines, Bessel functions, Legendre polynomials, associated Legendre functions, spherical harmonics, and Laguerre polynomials, operational properties of which are listed in these notes.

1.2 Generation of Orthogonal Functions

In the mathematical formulation of physical problems, one often encounters partial differential equations or integro-differential equations (which contain not only derivatives of functions but also integrals of functions), with which are associated a set of boundary conditions. If the equation and its boundary conditions are such as to be "separable" in one of the variables, one may attempt to apply the method of "separation of variables". Suppose we (admittedly rather abstractly) represent our equation
\[
\mathcal{M}\{F(u,v,w,\cdots)\} = 0 \tag{1.2.1}
\]
where \(\mathcal{M}\{\ \}\) is an operator involving the variables u, v, w, ⋯, applied to the function F(u, v, w, ⋯). For example, \(\mathcal{M}\{\ \}\) might be something like
\[
\frac{\partial^2}{\partial u^2} + v + \int_{w_1}^{w_2} K(w,w')\,dw' + \Delta^2
\]
so that when \(\mathcal{M}\{\ \}\) is applied to a function F the equation appears
\[
\mathcal{M}\{F\} = \frac{\partial^2 F}{\partial u^2} + vF + \int_{w_1}^{w_2} K(w,w')\,F(u,v,w')\,dw' + \Delta^2 F = 0.
\]
The process of separation proceeds as follows. One attempts to find a variable, say u, such that if it is assumed that F(u, v, w, ⋯) may be written
\[
F(u,v,w,\cdots) = U(u)\,\varphi(v,w,\cdots)
\]
then Equation (1.2.1) can be written as follows:
\[
\mathcal{M}\{F\} = \mathcal{M}\{U(u)\varphi(v,w)\} = 0. \tag{1.2.2}
\]
Now let
\[
\Upsilon\{U(u)\} = \Phi\{\varphi(v,w)\}.
\]
Here, Υ is an operator involving only u, and Φ is an operator involving only v, w, ⋯. Returning to the example, assume

F (u,v,w)= U(u)φ(v,w).

Then,

\[
\begin{aligned}
\mathcal{M}\{F\} &= \mathcal{M}\{U\varphi\} \\
&= \frac{\partial^2 (U\varphi)}{\partial u^2} + vU\varphi + \Delta^2 U\varphi + \int_{w_1}^{w_2} K(w,w')\,U\varphi\,dw' \\
&= \varphi\,\frac{d^2U}{du^2} + vU\varphi + \Delta^2 U\varphi + U\int_{w_1}^{w_2} K(w,w')\,\varphi(v,w')\,dw' \\
&= 0.
\end{aligned}
\]
We can, in this case, if Uφ ≠ 0, divide by Uφ to get
\[
\frac{1}{U}\frac{d^2U}{du^2} + v + \frac{1}{\varphi}\int_{w_1}^{w_2} K(w,w')\,\varphi(v,w')\,dw' + \Delta^2 = 0.
\]
This can be rearranged to obtain the following result:
\[
\frac{1}{U}\frac{d^2U}{du^2} + \Delta^2 = -v - \frac{1}{\varphi}\int_{w_1}^{w_2} K(w,w')\,\varphi(v,w')\,dw'.
\]
In this case the separation has been successful, for on the left are functions of u only, while on the right stand only functions of v and w. Now suppose we were to vary u, fixing v and w. Then the right side would not change, since it does not involve u; the left side, a function of u alone, must therefore be a constant. We therefore state this fact:
\[
\frac{1}{U}\frac{d^2U}{du^2} + \Delta^2 = \mu^2 = -v - \frac{1}{\varphi}\int_{w_1}^{w_2} K(w,w')\,\varphi(v,w')\,dw'
\]
where µ² is a constant called the "separation constant". We may choose it at our discretion. We now have two equations, where only one existed before:

\[
\begin{aligned}
0 &= \frac{d^2U}{du^2} + (\Delta^2 - \mu^2)U \\
0 &= (v + \mu^2)\varphi + \int_{w_1}^{w_2} K(w,w')\,\varphi(v,w')\,dw'.
\end{aligned}
\]
The boundary conditions on F in the variable u become conditions on U alone: since F = Uφ and φ ≠ 0, we can divide by φ to get, for example,
\[
\begin{aligned}
U(u_1) &= 0 \\
\frac{dU}{du}(u_2) &= U(u_2).
\end{aligned}
\]
These equations are separated; they involve neither v nor w. By the process of separation of variables we have, from the original equation involving u, v, and w, generated a new set of equations, some of which (those involving u) constitute a complete problem:
\[
\left\{
\begin{aligned}
0 &= \frac{d^2U}{du^2} + (\Delta^2 - \mu^2)U \\
0 &= U(u_1) \\
\frac{dU}{du}(u_2) &= U(u_2)
\end{aligned}
\right.
\qquad\qquad
0 = (v + \mu^2)\varphi + \int_{w_1}^{w_2} K(w,w')\,\varphi(v,w')\,dw'.
\]
This was our objective in applying the method of separation of variables. The process may be repeated on the remaining equation or performed on another variable.

Now if, after separation, the u-equation can be put in the form
\[
\frac{d}{du}\left[r(u)\frac{dU}{du}\right] - \bigl[q(u) + \mu\,p(u)\bigr]U = 0, \tag{1.2.3}
\]
and the boundary conditions assume the form
\[
\begin{cases}
a_1U(u_1) + a_2U'(u_1) + \alpha_1U(u_2) + \alpha_2U'(u_2) = 0 \\
b_1U(u_1) + b_2U'(u_1) + \beta_1U(u_2) + \beta_2U'(u_2) = 0
\end{cases} \tag{1.2.4}
\]
where a, b, α, and β are constants (some may be zero), then the system of differential equations and boundary conditions is called a "Sturm-Liouville system"¹. It will be noted that the Sturm-Liouville system is very general, and includes many important equations as special cases, for example the wave equation with those boundary conditions which are commonly applied.

This system under quite general conditions generates a complete set of orthogonal functions, one for each of an infinite, discrete set of values of the parameter µ. One finds the values of µ, called "eigenvalues", for which solutions exist, and the solution functions corresponding to these eigenvalues, called "eigenfunctions". If the eigenvalues are µ₁, µ₂, ⋯ and the corresponding eigenfunctions are U₁(u), U₂(u), ⋯, then in general the functions are orthogonal over the range u₁ to u₂ with respect to the weight function p(u):
\[
\int_{u_1}^{u_2} p(u)\,U_i(u)\,U_j(u)\,du = N_i^2\,\delta_{ij}. \tag{1.2.5}
\]
Here δᵢⱼ is the Kronecker delta, and p(u) is the same as in equation (1.2.3). One must take care to find all possible eigenvalues; when the equation and boundary conditions are written exactly as in (1.2.3) and (1.2.4), they are all real. When all eigenvalues are found, the set of eigenfunctions is complete, and any function reasonably well behaved between u₁ and u₂ may be represented in terms of them. Say we seek an expansion of a function f(u) in terms of our eigenfunctions,

\[
f(u) = \sum_i f_iU_i(u)
\]
¹pronounced LEE-oo-vil, NOT Loo-i-vil

where the fᵢ are a set of constants. Multiply on the right and left by Uⱼ(u)p(u) and integrate with respect to u over the range u₁ to u₂:

\[
\begin{aligned}
\int_{u_1}^{u_2} p(u)\,U_j(u)\,f(u)\,du &= \int_{u_1}^{u_2} p(u)\,U_j(u)\sum_i f_iU_i(u)\,du \\
&= \sum_i f_i\int_{u_1}^{u_2} p(u)\,U_j(u)\,U_i(u)\,du.
\end{aligned}
\]
As a consequence of the orthogonality of the Uᵢ(u)'s, defined in equation (1.2.5), this becomes
\[
\sum_i f_iN_i^2\,\delta_{ij} = f_jN_j^2.
\]
We then solve for the fⱼ's:
\[
f_j = \frac{1}{N_j^2}\int_{u_1}^{u_2} p(u)\,U_j(u)\,f(u)\,du = \frac{1}{N_j^2}\,(U_j, f).
\]
In order to justify the switching of the order of summation and integration here, and to guarantee the existence of (Uⱼ, f), we usually require that f(u) be absolutely integrable, i.e., that
\[
\int_{u_1}^{u_2} |f(u)|\,du
\]
exists.

It is to be noted that the outline of procedure above requires the solution of an ordinary differential equation, perhaps not an easy task, but one hopes not as difficult as the problem of solving the partial differential equation. Very often the eigenfunctions which fit a given problem are known, and so this process can be bypassed. Different sets of eigenfunctions have different sets of operational properties, or sets of relationships between members, which may be found useful. We note in conclusion that sets of orthogonal functions are generated by other means than by Sturm-Liouville systems; by sets of differential equations and boundary conditions which are not of Sturm-Liouville type, and by integral equations, to mention two.
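A concrete sketch of this recipe (added in this transcription; the example problem is an assumption, not from the original notes): the Sturm-Liouville system U″ + µ²U = 0, U(0) = U(1) = 0 has eigenfunctions Uₙ(u) = sin(nπu) with weight p(u) = 1 and norm Nₙ² = 1/2. We expand f(u) = u(1 − u) and compare the partial sum against f itself.

```python
import math

def simpson(func, a, b, n=2000):
    """Composite Simpson's rule (n must be even)."""
    h = (b - a) / n
    s = func(a) + func(b)
    s += 4 * sum(func(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(func(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

f = lambda u: u * (1.0 - u)
N2 = 0.5  # norm of sin(n*pi*u) on [0, 1]

def coeff(n):
    """f_n = (1/N_n^2) (U_n, f), as in the text above."""
    return simpson(lambda u: math.sin(n * math.pi * u) * f(u), 0.0, 1.0) / N2

def partial_sum(u, terms=25):
    return sum(coeff(n) * math.sin(n * math.pi * u) for n in range(1, terms + 1))
```

The computed coefficients match the analytic values fₙ = 4(1 − (−1)ⁿ)/(nπ)³, and a few terms already reproduce f well, illustrating the rapid convergence discussed below.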

1.3 Use of Orthogonal Functions

Let us consider a linear partial differential equation, outlining the elimination of one variable from the equation. Under rather general conditions, we may expand F in an infinite series of orthogonal functions which we assume are known:
\[
F(u,v,w,\cdots) = \sum_{i=0}^{\infty} f_i(v,w,\cdots)\,U_i(u)
\]
where
\[
f_i(v,w,\cdots) = \frac{1}{N_i^2}\int_{u_1}^{u_2} p(u)\,F(u,v,w,\cdots)\,U_i(u)\,du.
\]
The formula for the coefficient fᵢ follows immediately from multiplying the first equation by p(u)Uᵢ(u) and integrating. Let the original partial differential equation be represented abstractly as
\[
\mathcal{L}\{F(u,v,w,\cdots)\} = S(u,v,w,\cdots)
\]
where \(\mathcal{L}\) is a linear differential operator and \(\mathcal{L}\{F\}\) merely represents that part of the differential equation that involves F. Assume F to be expanded in a series in fᵢUᵢ; multiply the equation by p(u)Uⱼ(u) and integrate over u from u₁ to u₂:
\[
\begin{aligned}
\mathcal{L}\{F\} &= S \\
\mathcal{L}\Bigl\{\sum_i f_iU_i\Bigr\} &= S \\
\int_{u_1}^{u_2} p(u)\,U_j(u)\,\mathcal{L}\Bigl\{\sum_i f_iU_i\Bigr\}\,du &= \int_{u_1}^{u_2} S\,p(u)\,U_j(u)\,du = N_j^2\,S_j(v,w,\cdots)
\end{aligned}
\]
where
\[
S_j = \frac{1}{N_j^2}\int_{u_1}^{u_2} S(u,v,w,\cdots)\,p(u)\,U_j(u)\,du
\]
so that
\[
S(u,v,w,\cdots) = \sum_j^{\infty} S_jU_j.
\]
Now, by using the operational properties of the Uᵢ's, one reduces the equations (an infinite set, one for each j) to a set in the fᵢ's and Sᵢ's. The particular steps taken depend upon the exact nature of the operator \(\mathcal{L}\), and the set of equations may be coupled, i.e., f's with several indices may appear in the same equation (for example, fᵢ₋₁, fᵢ, fᵢ₊₁). These equations do not involve derivatives with respect to the variable u, and we have gained in this respect. But we have to contend with the infinite set of equations.

All is not lost at this point, for, as it turns out, the series for F (called a generalized Fourier series because of the manner in which the coefficients of the Uᵢ's are chosen in the expansion for F) is the most rapidly convergent series possible in the Uᵢ's. Thus solving for only the coefficients of the leading terms in the series may enable us to obtain a satisfactory approximation to F. Also, very often, we are interested only in one or two of the fᵢ's on physical or other grounds.

1.4 Operational Properties of Some Common Sets of Orthogonal Functions

There follow now a few pages on which are outlined, rather concisely, some basic operational properties of some commonly occurring sets of orthogonal functions. The familiar set of sines and cosines used in the construction of Fourier series is included as an example. The lists of properties contained in these notes are by no means complete, though they may suffice for the solution of many problems. The references listed with each section give detailed derivations, more extensive lists of properties, more discussion of the method and its limitations, or examples of the use of orthogonal functions. It is recommended that one unfamiliar with these functions read in some of these references, in order to avoid the pitfalls of using mathematics beyond the realm of its applicability. These notes have been assembled mainly for reference. Of special interest may be the table in Margenau and Murphy, page 254, which lists twelve special cases of the Sturm-Liouville equation with the name of the orthogonal set which satisfies each one.

1.5 Fourier Series, Range −ℓ ≤ x ≤ ℓ

1.5.1 Boundary Value Problem Satisfied by Sines and Cosines

\[
\frac{d^2f}{dx^2} + k^2f = 0 \qquad k \text{ real}
\]
\[
f(-\ell) = f(\ell) \qquad\qquad f'(-\ell) = f'(\ell)
\]

1.5.2 Orthogonal Set

\[
\sin\left(\frac{n\pi x}{\ell}\right);\ \cos\left(\frac{n\pi x}{\ell}\right) \qquad -\ell \le x \le \ell,\quad n = 0, 1, 2, \ldots
\]

1.5.3 Expansion in Fourier Series

\[
F(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left\{a_n\cos\left(\frac{n\pi x}{\ell}\right) + b_n\sin\left(\frac{n\pi x}{\ell}\right)\right\}
\]

\[
a_n = \frac{1}{\ell}\int_{-\ell}^{\ell} F(x)\cos\left(\frac{n\pi x}{\ell}\right)dx \qquad n = 0, 1, 2, \ldots
\]
\[
b_n = \frac{1}{\ell}\int_{-\ell}^{\ell} F(x)\sin\left(\frac{n\pi x}{\ell}\right)dx \qquad n = 1, 2, \ldots
\]

1.5.4 Normalization Factors

\[
N_n^2(\cos) = N_n^2(\sin) = \ell \qquad n = 1, 2, \ldots
\]
\[
N_0^2(\cos) = 2\ell
\]

1.5.5 Orthonormal Set

\[
\frac{1}{\sqrt{2\ell}},\quad \frac{1}{\sqrt{\ell}}\sin\left(\frac{n\pi x}{\ell}\right),\quad \frac{1}{\sqrt{\ell}}\cos\left(\frac{n\pi x}{\ell}\right) \qquad -\ell \le x \le \ell,\quad n = 1, 2, \ldots
\]

1.5.6 Fourier Series, Range 0 ≤ x ≤ L

One may wish to expand a function defined on an interval 0 ≤ x ≤ L in a Fourier series. One may choose at his discretion either of two ways, expanding either in a series of sines or one of cosines. The sine-series expansion yields an odd function and the cosine series yields an even function when the series is considered as a continuation of the function outside the range 0 ≤ x ≤ L. An "odd function" is one such that F(x) = −F(−x) and an "even function" is one such that F(x) = F(−x). Even functions are symmetric about the y axis. Of course, either series represents the function on the interval 0 ≤ x ≤ L. Note that on this range, either sin(nπx/L), n = 1, 2, …, or cos(nπx/L), n = 0, 1, 2, …, forms a complete set; thus the two possible expansions.

1.5.7 Expansion in Half-range Series

\[
F(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n\cos\left(\frac{n\pi x}{L}\right)
\]

\[
F(x) = \sum_{n=1}^{\infty} b_n\sin\left(\frac{n\pi x}{L}\right)
\]

\[
a_n = \frac{2}{L}\int_0^L F(x)\cos\left(\frac{n\pi x}{L}\right)dx \qquad n = 0, 1, 2, \ldots
\]
\[
b_n = \frac{2}{L}\int_0^L F(x)\sin\left(\frac{n\pi x}{L}\right)dx \qquad n = 1, 2, \ldots
\]

1.5.8 Full Range Series

Consider the coefficients in a full-range expansion for an even function, i.e., F(x) such that F(x) = F(−x).
\[
\begin{aligned}
a_n &= \frac{1}{\ell}\int_{-\ell}^{\ell} F(x)\cos\left(\frac{n\pi x}{\ell}\right)dx \\
&= \frac{1}{\ell}\left[\int_{-\ell}^{0} F(x)\cos\left(\frac{n\pi x}{\ell}\right)dx + \int_{0}^{\ell} F(x)\cos\left(\frac{n\pi x}{\ell}\right)dx\right].
\end{aligned}
\]
Putting −x in for x in the first integral,
\[
a_n = \frac{1}{\ell}\left[-\int_{\ell}^{0} F(-x)\cos\left(\frac{-n\pi x}{\ell}\right)dx + \int_{0}^{\ell} F(x)\cos\left(\frac{n\pi x}{\ell}\right)dx\right].
\]
Use F(−x) = F(x), cos(−nπx/ℓ) = cos(nπx/ℓ), and switch the limits of integration:
\[
\begin{aligned}
a_n &= \frac{1}{\ell}\left[\int_{0}^{\ell} F(x)\cos\left(\frac{n\pi x}{\ell}\right)dx + \int_{0}^{\ell} F(x)\cos\left(\frac{n\pi x}{\ell}\right)dx\right] \\
&= \frac{2}{\ell}\int_{0}^{\ell} F(x)\cos\left(\frac{n\pi x}{\ell}\right)dx \ne 0 \text{ in general.}
\end{aligned}
\]
Similarly for the sine coefficients:
\[
\begin{aligned}
b_n &= \frac{1}{\ell}\int_{-\ell}^{\ell} F(x)\sin\left(\frac{n\pi x}{\ell}\right)dx \\
&= \frac{1}{\ell}\left[\int_{-\ell}^{0} F(x)\sin\left(\frac{n\pi x}{\ell}\right)dx + \int_{0}^{\ell} F(x)\sin\left(\frac{n\pi x}{\ell}\right)dx\right].
\end{aligned}
\]
Putting −x in for x,
\[
b_n = \frac{1}{\ell}\left[-\int_{\ell}^{0} F(-x)\sin\left(\frac{-n\pi x}{\ell}\right)dx + \int_{0}^{\ell} F(x)\sin\left(\frac{n\pi x}{\ell}\right)dx\right].
\]
Using F(−x) = F(x), sin(−nπx/ℓ) = −sin(nπx/ℓ), and switching the limits of integration,
\[
b_n = \frac{1}{\ell}\left[-\int_{0}^{\ell} F(x)\sin\left(\frac{n\pi x}{\ell}\right)dx + \int_{0}^{\ell} F(x)\sin\left(\frac{n\pi x}{\ell}\right)dx\right] = 0 \qquad \forall\, n \in \mathbb{N}.
\]
Thus, the coefficients of the sine terms in the full range expansion of an even function are all zero; the cosine coefficients do not, of course, vanish for all n. The situation is much the same with respect to coefficients in the full range expansion of odd functions, except that in this case it is the coefficients of the cosine terms which vanish, for n = 0, 1, 2, . . . .
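A numerical check of this claim (added in this transcription; the example function is an assumption): for the even function F(x) = x² on −ℓ ≤ x ≤ ℓ with ℓ = 1, all sine coefficients bₙ vanish, while the cosine coefficients aₙ = 4(−1)ⁿ/(nπ)² do not.

```python
import math

def simpson(func, a, b, n=2000):
    """Composite Simpson's rule (n must be even)."""
    h = (b - a) / n
    s = func(a) + func(b)
    s += 4 * sum(func(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(func(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

l = 1.0
F = lambda x: x * x  # even: F(x) = F(-x)

def a_n(n):
    return simpson(lambda x: F(x) * math.cos(n * math.pi * x / l), -l, l) / l

def b_n(n):
    return simpson(lambda x: F(x) * math.sin(n * math.pi * x / l), -l, l) / l
```

The symmetric integration range makes every bₙ integrand odd, so the bₙ cancel to roundoff, exactly as the derivation above predicts.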

Chapter 2

Legendre Polynomials

2.1 Generating Function

If
\[
H(x,y) = \frac{1}{\sqrt{1 - 2xy + y^2}} = \sum_{\ell=0}^{\infty} P_\ell(x)\,y^\ell
\]
then
\[
P_\ell(x) = \frac{1}{\ell!}\left.\frac{\partial^\ell}{\partial y^\ell}H(x,y)\right|_{y=0}.
\]

2.2 Recurrence Relations

1. \((\ell+1)P_{\ell+1}(x) - (2\ell+1)\,xP_\ell(x) + \ell P_{\ell-1}(x) = 0\)

2. \(\ell P_{\ell-1}(x) - P'_\ell(x) + xP'_{\ell-1}(x) = 0\)

2.3 Differential Equation Satisfied by Pℓ(x)

1. \((1-x^2)P''_\ell(x) - 2xP'_\ell(x) + \ell(\ell+1)P_\ell(x) = 0 \qquad \ell \in \mathbb{Z}\)

Very often x = cos θ.

2. \[
\frac{d^2P_\ell}{dx^2} + \frac{\ell(\ell+1)(1-x^2) + 1}{(1-x^2)^2}\,P_\ell = 0
\]

2.4 Rodrigues' Formula

\[
P_\ell(x) = \frac{1}{2^\ell\,\ell!}\frac{d^\ell}{dx^\ell}(x^2-1)^\ell
\]

2.5 Normalizing Factor

The norm of Pℓ(x) is:
\[
N_\ell^2 = \int_{-1}^{1}\bigl(P_\ell(x)\bigr)^2\,dx = \frac{2}{2\ell+1}
\]

2.6 Orthogonality

\[
\int_{-1}^{1} P_\ell(x)\,P_{\ell'}(x)\,dx = \delta_{\ell\ell'}\,N_\ell^2
\]
where δ_{ℓℓ′} is the Kronecker delta, defined as
\[
\delta_{\ell\ell'} = \begin{cases} 0, & \ell \ne \ell' \\ 1, & \ell = \ell'. \end{cases}
\]

2.7 Expansion in Legendre Polynomials

Any function f(x) which is defined over the range −1 ≤ x ≤ 1, and which is absolutely integrable over this range, may be expanded in an infinite series of Legendre polynomials:
\[
f(x) = \sum_{\ell=0}^{\infty} f_\ell P_\ell(x)
\]
where
\[
f_\ell = \frac{2\ell+1}{2}\int_{-1}^{1} f(x)\,P_\ell(x)\,dx = \frac{1}{N_\ell^2}\int_{-1}^{1} f(x)\,P_\ell(x)\,dx.
\]

2.8 Normalized Legendre Polynomials

\[
\bar{P}_\ell(x) = \frac{P_\ell(x)}{N_\ell}
\]

2.9 Expansion in Normalized Polynomials

\[
g(x) = \sum_{\ell=0}^{\infty} g_\ell\,\bar{P}_\ell(x)
\]
where
\[
g_\ell = \int_{-1}^{1} g(x)\,\bar{P}_\ell(x)\,dx.
\]

2.9.1 A Few Low-Degree Legendre Polynomials and Respective Norms

\[
\begin{aligned}
P_0(x) &= 1 & N_0^2 &= 2 \\
P_1(x) &= x & N_1^2 &= \tfrac{2}{3} \\
P_2(x) &= \tfrac{1}{2}(3x^2 - 1) & N_2^2 &= \tfrac{2}{5} \\
P_3(x) &= \tfrac{1}{2}(5x^3 - 3x) & N_3^2 &= \tfrac{2}{7} \\
P_4(x) &= \tfrac{1}{8}(35x^4 - 30x^2 + 3) & N_4^2 &= \tfrac{2}{9} \\
P_5(x) &= \tfrac{1}{8}(63x^5 - 70x^3 + 15x) & N_5^2 &= \tfrac{2}{11} \\
P_6(x) &= \tfrac{1}{16}(231x^6 - 315x^4 + 105x^2 - 5) & N_6^2 &= \tfrac{2}{13}
\end{aligned}
\]
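Cross-checks of the table above (added in this transcription, not part of the original): the entries can be regenerated from recurrence relation 1 of Section 2.2 (Bonnet's formula), and each norm recovered by integrating Pℓ(x)² over [−1, 1].

```python
import math

# the listed polynomials, lowest degree first
P = [
    lambda x: 1.0,
    lambda x: x,
    lambda x: (3 * x**2 - 1) / 2,
    lambda x: (5 * x**3 - 3 * x) / 2,
    lambda x: (35 * x**4 - 30 * x**2 + 3) / 8,
    lambda x: (63 * x**5 - 70 * x**3 + 15 * x) / 8,
    lambda x: (231 * x**6 - 315 * x**4 + 105 * x**2 - 5) / 16,
]

def legendre(l, x):
    """P_l(x) from (l+1) P_{l+1}(x) = (2l+1) x P_l(x) - l P_{l-1}(x)."""
    p_prev, p = 1.0, x  # P_0, P_1
    if l == 0:
        return p_prev
    for k in range(1, l):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

def norm_squared(l, n=4000):
    """Composite Simpson's rule for the integral of P_l(x)^2 over [-1, 1]."""
    h = 2.0 / n
    s = P[l](-1.0) ** 2 + P[l](1.0) ** 2
    s += 4 * sum(P[l](-1.0 + (2 * k - 1) * h) ** 2 for k in range(1, n // 2 + 1))
    s += 2 * sum(P[l](-1.0 + 2 * k * h) ** 2 for k in range(1, n // 2))
    return s * h / 3
```

Both checks agree with the table: the recurrence reproduces each listed polynomial, and the integrals recover Nℓ² = 2/(2ℓ+1).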

2.9.2 Integral Representation of Pℓ(x)

\[
P_\ell(x) = \frac{1}{\pi}\int_0^{\pi}\Bigl[x + \sqrt{x^2-1}\,\cos\varphi\Bigr]^\ell\,d\varphi
\]
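For |x| < 1 the square root above is imaginary, so the integrand is complex while the integral itself is real. A numerical sketch of this representation (added in this transcription) using complex arithmetic and the trapezoid rule:

```python
import cmath
import math

def legendre_integral(l, x, n=4000):
    """P_l(x) = (1/pi) * integral over [0, pi] of (x + sqrt(x^2 - 1) cos(phi))^l dphi."""
    root = cmath.sqrt(complex(x * x - 1.0))
    h = math.pi / n
    # trapezoid rule: half-weight endpoints (cos(0) = 1, cos(pi) = -1)
    total = 0.5 * ((x + root) ** l + (x - root) ** l)
    total += sum((x + root * math.cos(k * h)) ** l for k in range(1, n))
    return (total * h / math.pi).real
```

The imaginary parts cancel, and the result matches the table values (e.g. P₃(1/2) = −7/16) as well as points outside [−1, 1], where the integrand is real.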

2.9.3 Bounds on Pℓ(x)

For −1 ≤ x ≤ 1,
\[
|P_\ell(x)| \le 1 \qquad \forall\,\ell \in \mathbb{Z},\ \ell \ge 0.
\]

References on Legendre Polynomials

1. D. Jackson; "Fourier Series and Orthogonal Polynomials", Carus Math. Monographs, Number 6, 1941, pp. 45-68.

2. H. Margenau and G. M. Murphy; “The Mathematics of Physics and Chemistry”, D. Van Nostrand, First Edition, 1943, pp. 94-109.

3. A. G. Webster; “Partial Differential Equations of Math. Phys.”, G.E. Stechert and Company, 1927, pp. 302-320.

4. R.V. Churchill; "Fourier Series and Boundary Value Problems", McGraw Hill, 1941, pp. 175-201. (For discussion of the concept of orthogonality, see Chap. 3.)

5. E. Jahnke and F. Emde; “Tables of Functions”, Dover Publications, 1945, pp. 107-125. (Lists some properties and tabulates functions).

Chapter 3

Associated Legendre Functions

3.1 Definition

1. \[
P_\ell^m(x) = (1-x^2)^{\frac{|m|}{2}}\,\frac{d^{|m|}}{dx^{|m|}}P_\ell(x) \qquad (0 \le m \le \ell)
\]
2. \[
P_\ell^m(x) = \frac{(1-x^2)^{\frac{m}{2}}}{2^\ell\,\ell!}\,\frac{d^{\ell+m}}{dx^{\ell+m}}(x^2-1)^\ell
\]

3.2 Recurrence Relations

1. \[
P_\ell^{m+1}(x) - \frac{2mx}{\sqrt{1-x^2}}\,P_\ell^m(x) + \bigl[\ell(\ell+1) - m(m-1)\bigr]P_\ell^{m-1}(x) = 0
\]
2. \[
xP_\ell^m(x) = \frac{(\ell+m)P_{\ell-1}^m(x) + (\ell-m+1)P_{\ell+1}^m(x)}{2\ell+1}
\]
3. \[
\sqrt{1-x^2}\,P_\ell^m(x) = \frac{P_{\ell+1}^{m+1}(x) - P_{\ell-1}^{m+1}(x)}{2\ell+1}
\]
4. \[
P_\ell^{m+1}(x) = \frac{2m}{\sqrt{1-x^2}\,(2\ell+1)}\Bigl[(\ell+m)P_{\ell-1}^m(x) + (\ell-m+1)P_{\ell+1}^m(x)\Bigr] - \bigl[\ell(\ell+1) - m(m-1)\bigr]P_\ell^{m-1}(x)
\]

3.3 Differential Equation Satisfied by Pℓᵐ(x)

\[
(1-x^2)\frac{d^2}{dx^2}P_\ell^m(x) - 2x\frac{d}{dx}P_\ell^m(x) + \left[\ell(\ell+1) - \frac{m^2}{1-x^2}\right]P_\ell^m(x) = 0
\]

Since Pℓᵐ(x) is defined on the interval −1 ≤ x ≤ 1, in physical applications Pℓᵐ(x) is often associated with an angle θ through the relation x = cos θ. Then the equation satisfied by Pℓᵐ(x) may be found in the following form:
\[
\frac{d^2}{d\theta^2}P_\ell^m(x) + \cot\theta\,\frac{d}{d\theta}P_\ell^m(x) + \left[\ell(\ell+1) - \frac{m^2}{\sin^2\theta}\right]P_\ell^m(x) = 0
\]
or
\[
\frac{1}{\sin\theta}\frac{d}{d\theta}\left[\sin\theta\,\frac{d}{d\theta}P_\ell^m(x)\right] + \left[\ell(\ell+1) - \frac{m^2}{\sin^2\theta}\right]P_\ell^m(x) = 0.
\]
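A spot check of the x-form equation (added in this transcription, not part of the original notes): from Definition 1 of Section 3.1, P₂¹(x) = (1 − x²)^½ d/dx P₂(x) = 3x√(1 − x²). Below we verify numerically that this function satisfies the differential equation above, approximating the derivatives by central finite differences.

```python
import math

def p21(x):
    """P_2^1(x) = 3 x sqrt(1 - x^2), from the derivative definition of Section 3.1."""
    return 3.0 * x * math.sqrt(1.0 - x * x)

def ode_residual(x, h=1e-5):
    """(1 - x^2) y'' - 2x y' + [l(l+1) - m^2/(1 - x^2)] y  for l = 2, m = 1."""
    y = p21(x)
    y1 = (p21(x + h) - p21(x - h)) / (2 * h)
    y2 = (p21(x + h) - 2 * y + p21(x - h)) / (h * h)
    return (1 - x * x) * y2 - 2 * x * y1 + (6.0 - 1.0 / (1 - x * x)) * y
```

The residual vanishes (to the accuracy of the difference scheme) away from the endpoint singularities at x = ±1.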

3.4 Expression of Pℓ(cos Θ) in terms of Pℓᵐ(x)

1. Let θ₁, φ₁ and θ₂, φ₂ respectively denote the polar and azimuthal angles of two lines passing through the origin. Then Θ, the angle between these two lines, is given by
\[
\cos\Theta = \cos\theta_1\cos\theta_2 + \sin\theta_1\sin\theta_2\cos(\varphi_1 - \varphi_2).
\]
With these definitions, Pℓ(cos Θ) may be expressed as follows:
\[
P_\ell(\cos\Theta) = P_\ell(\cos\theta_1)P_\ell(\cos\theta_2) + 2\sum_{m=1}^{\ell}\frac{(\ell-m)!}{(\ell+m)!}\,P_\ell^m(\cos\theta_1)\,P_\ell^m(\cos\theta_2)\cos\bigl[m(\varphi_1 - \varphi_2)\bigr]
\]
2. If Pℓᵐ(x) is defined in a slightly different manner that allows negative values for m,
\[
P_\ell^m(x) = (1-x^2)^{\frac{|m|}{2}}\,\frac{d^{|m|}}{dx^{|m|}}P_\ell(x) \qquad (|m| \le \ell)
\]
then the expansion may be written as follows:
\[
P_\ell(\cos\Theta) = \sum_{m=-\ell}^{\ell}\frac{(\ell-|m|)!}{(\ell+|m|)!}\,P_\ell^m(\cos\theta_1)\,P_\ell^m(\cos\theta_2)\cos\bigl[m(\varphi_1 - \varphi_2)\bigr].
\]

3.5 Normalizing Factor

\[
(N_\ell^m)^2 = \int_{-1}^{1}\bigl[P_\ell^m(x)\bigr]^2\,dx = \frac{2}{2\ell+1}\cdot\frac{(\ell+m)!}{(\ell-m)!}
\]

References

1. H. Margenau and G. M. Murphy; "The Mathematics of Physics and Chemistry", D. Van Nostrand, First Edition, 1943.

17 2. A. G. Webster; “Partial Differential Equations of Math. Phys.”, G.E. Stechert and Company, 1927, pp. 302-320.

3. D. K. Holmes and R.V. Meghreblian, “Notes on Reactor Analysis, Part II, Theory”, U.S.A.E.C. Document CF-4-7-88 (Part II), August 1955, pp. 164-165.

4. E. Jahnke and F. Emde; “Tables of Functions”, Dover Publications, 1945, pp. 107-125. (Lists some properties and tabulates functions).

Chapter 4

Spherical Harmonics

4.1 Definition of Yℓᵐ(Ω)

The spherical harmonics are a complete, orthonormal set of complex functions of two variables, defined on the unit sphere. Below, the vector symbol Ω will be used to denote a pair of variables θ, φ, here taken to be, respectively, the polar and azimuthal angles specifying a point on the unit sphere with reference to a coordinate system at its center. With these conventions, the functions Yℓᵐ(Ω) are defined as:

\[
Y_\ell^m(\Omega) = \left[\frac{2\ell+1}{4\pi}\,\frac{(\ell-|m|)!}{(\ell+|m|)!}\right]^{\frac{1}{2}} P_\ell^m(\cos\theta)\,e^{im\varphi}.
\]
In terms of µ = cos θ,
\[
Y_\ell^m(\Omega) = \left[\frac{2\ell+1}{4\pi}\,\frac{(\ell-|m|)!}{(\ell+|m|)!}\right]^{\frac{1}{2}} \frac{(1-\mu^2)^{\frac{|m|}{2}}}{2^\ell\,\ell!}\,\frac{d^{\ell+|m|}}{d\mu^{\ell+|m|}}(\mu^2-1)^\ell\,e^{im\varphi}.
\]
Note: \(Y_\ell^m = (-1)^m Y_\ell^{-m*}\), where * denotes the complex conjugate.

4.2 Expression of Pℓ(cos Θ) in terms of the Spherical Harmonics

Define θ₁, φ₁ (i.e. Ω₁) and θ₂, φ₂ (i.e. Ω₂) as the polar and azimuthal angles specifying two points on the unit sphere, with respect to a coordinate system at its center. Denote by Θ the angle between the lines drawn from each point to the origin of the coordinates. Then:

\[
P_\ell(\cos\Theta) = \frac{4\pi}{2\ell+1}\sum_{m=-\ell}^{\ell} Y_\ell^m(\Omega_1)\,Y_\ell^{m*}(\Omega_2)
\]
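Numerical verification of this relation for ℓ = 1 (added in this transcription), using the low-order harmonics of Section 4.6 in the conventions of these notes (which carry no extra (−1)ᵐ phase on the positive-m harmonics):

```python
import cmath
import math

def Y1(m, theta, phi):
    """The l = 1 spherical harmonics of Section 4.6 (conventions of these notes)."""
    if m == 0:
        return complex(math.sqrt(3 / (4 * math.pi)) * math.cos(theta))
    return math.sqrt(3 / (8 * math.pi)) * math.sin(theta) * cmath.exp(1j * m * phi)

def lhs(theta1, phi1, theta2, phi2):
    """P_1(cos Theta) = cos Theta, the angle between the two directions."""
    return (math.cos(theta1) * math.cos(theta2)
            + math.sin(theta1) * math.sin(theta2) * math.cos(phi1 - phi2))

def rhs(theta1, phi1, theta2, phi2):
    s = sum(Y1(m, theta1, phi1) * Y1(m, theta2, phi2).conjugate() for m in (-1, 0, 1))
    return (4 * math.pi / 3) * s.real
```

The m = ±1 terms pair up into the sin θ₁ sin θ₂ cos(φ₁ − φ₂) part of cos Θ, so both sides agree for arbitrary angles.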

4.3 Orthonormality of Yℓᵐ(Ω)

\[
\int_\Omega Y_j^k(\Omega)\,Y_\ell^{m*}(\Omega)\,d\Omega = \delta_{j\ell}\,\delta_{km}
\]
where the integral over the vector Ω indicates a double integration over the full ranges of θ and φ: 0 ≤ θ ≤ π, 0 ≤ φ ≤ 2π.

4.4 Expansion in Spherical Harmonics

Any function, perhaps complex, of the variables θ and φ (i.e. Ω), absolutely integrable over θ and φ, can be expanded in terms of the functions Yℓᵐ:

\[
F(\Omega) = \sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{\ell} F_\ell^m\,Y_\ell^m(\Omega)
\]
where
\[
F_\ell^m = \int_\Omega F(\Omega)\,Y_\ell^{m*}(\Omega)\,d\Omega.
\]

4.5 Differential Equation Satisfied by Yℓᵐ(Ω)

\[
\frac{\partial}{\partial\theta}\left(\sin\theta\,\frac{\partial Y}{\partial\theta}\right) + \frac{1}{\sin\theta}\frac{\partial^2 Y}{\partial\varphi^2} + \ell(\ell+1)\sin\theta\,Y = 0
\]
Assume Y = Φ(φ)Θ(θ) and let
\[
\frac{\partial^2 Y}{\partial\varphi^2} = -m^2\,Y.
\]
Imposing the condition that Y be bounded at cos θ = ±1 makes ℓ an integer. Imposing that Y be single-valued in φ makes m an integer.
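A quick numerical check (added in this transcription) that the solutions so obtained are orthonormal in the sense of Section 4.3: for Y₁⁰ = √(3/4π) cos θ (listed in the next section), the integral of |Y₁⁰|² over the sphere, with dΩ = sin θ dθ dφ, is 1.

```python
import math

def norm_squared_y10(n=2000):
    """Integral of |Y_1^0|^2 over the unit sphere, by the midpoint rule in theta."""
    # no phi dependence, so the phi integration contributes a factor 2*pi
    h = math.pi / n
    total = 0.0
    for k in range(n):
        theta = (k + 0.5) * h
        y = math.sqrt(3 / (4 * math.pi)) * math.cos(theta)
        total += y * y * math.sin(theta) * h
    return 2 * math.pi * total
```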

20 4.6 Some Low-order Spherical Harmonics

\[
Y_0^0(\Omega) = \frac{1}{\sqrt{4\pi}}
\]
\[
Y_1^{-1}(\Omega) = \sqrt{\frac{3}{8\pi}}\,\sin\theta\,e^{-i\varphi}
\qquad
Y_1^0(\Omega) = \sqrt{\frac{3}{4\pi}}\,\cos\theta
\qquad
Y_1^1(\Omega) = \sqrt{\frac{3}{8\pi}}\,\sin\theta\,e^{i\varphi}
\]

4.7 A Useful Relationship

If the vector Ω is considered to represent a point on the unit sphere, its components can be represented as follows:

Ωx = sin θ cos ϕ

Ωy = sin θ sin ϕ

Ωz = cos θ

If a new set of components is constructed as follows:
\[
\begin{aligned}
\Omega_{-1} &= \frac{1}{\sqrt{2}}(\Omega_x - i\Omega_y) = \frac{1}{\sqrt{2}}\sin\theta\,(\cos\varphi - i\sin\varphi) \\
\Omega_0 &= \Omega_z = \cos\theta \\
\Omega_1 &= \frac{1}{\sqrt{2}}(\Omega_x + i\Omega_y) = \frac{1}{\sqrt{2}}\sin\theta\,(\cos\varphi + i\sin\varphi)
\end{aligned}
\]
then these are easily seen to be expressible in terms of the ℓ = 1 spherical harmonics (see Section 4.6):

\[
\Omega_{-1} = \sqrt{\frac{4\pi}{3}}\,Y_1^{-1}
\qquad
\Omega_0 = \sqrt{\frac{4\pi}{3}}\,Y_1^0
\qquad
\Omega_1 = \sqrt{\frac{4\pi}{3}}\,Y_1^1
\]

References

1. H. Margenau and G. M. Murphy; "The Mathematics of Physics and Chemistry", D. Van Nostrand, First Edition, 1943.

21 2. A. G. Webster; “Partial Differential Equations of Math. Phys.”, G.E. Stechert and Company, 1927, pp. 302-320.

3. E. Jahnke and F. Emde; “Tables of Functions”, Dover Publications, 1945, pp. 107-125. (Lists some properties and tabulates functions).

4. D. K. Holmes and R.V. Meghreblian, “Notes on Reactor Analysis, Part II, Theory”, U.S.A.E.C. Document CF-4-7-88 (Part II), August 1955, pp. 164-165.

5. L. I. Schiff, “Quantum Mechanics”, 2nd Edition, McGraw Hill, 1955, p. 73.

6. Whittaker and Watson; “Modern Analysis”, 4th Edition, Cambridge University Press (1927), pp. 391-396.

Chapter 5

Laguerre Polynomials

5.1 Derivative Definition

$$L_n^{(\alpha)}(x)=(-1)^n\,x^{-\alpha}e^{x}\,\frac{d^n}{dx^n}\left(x^{\alpha+n}e^{-x}\right)$$

5.2 Generating Function

$$H(x,t)=(1-t)^{-(\alpha+1)}\,e^{-\frac{xt}{1-t}}=\sum_{n=0}^{\infty}\frac{(-1)^n}{n!}\,L_n^{(\alpha)}(x)\,t^n$$
Thus

$$L_n^{(\alpha)}(x)=(-1)^n\,\frac{d^n}{dt^n}\left[(1-t)^{-(\alpha+1)}\,e^{-\frac{xt}{1-t}}\right]_{t=0}$$

5.3 Differential Equation Satisfied by $L_n^{(\alpha)}(x)$

$$x\,\frac{d^2y}{dx^2}+(\alpha-x+1)\,\frac{dy}{dx}+ny=0\qquad n=0,1,2,\ldots$$

5.4 Orthogonality, Range $0\le x\le\infty$

$$\int_0^{\infty}x^{\alpha}e^{-x}\,L_m^{(\alpha)}(x)\,L_n^{(\alpha)}(x)\,dx=N_n^2\,\delta_{mn}$$

where

$$N_n^2=(-1)^n\,n!\,\Gamma(n+\alpha+1)$$

5.5 Expansion in Laguerre Polynomials

$$F(x)=\sum_{n=0}^{\infty}f_n\,L_n^{(\alpha)}(x)$$
where

$$f_n=\frac{1}{N_n^2}\int_0^{\infty}x^{\alpha}e^{-x}\,F(x)\,L_n^{(\alpha)}(x)\,dx$$

5.6 Expansion of $x^m$ in Laguerre Polynomials

$$x^m=\sum_{n=0}^{m}\frac{(-1)^n\,m!\,\Gamma(\alpha+m+1)}{n!\,\Gamma(n+\alpha+1)}\,L_n^{(\alpha)}(x)$$

5.7 Recurrence Relations

a. $(x-2n-\alpha-1)\,L_n^{(\alpha)}(x)=L_{n+1}^{(\alpha)}(x)+n(n+\alpha)\,L_{n-1}^{(\alpha)}(x)$

b. $L_{n+1}^{(\alpha)\prime}(x)=(n+1)\left[L_n^{(\alpha)\prime}(x)-L_n^{(\alpha)}(x)\right]$
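Recurrence (a) can be checked numerically. The sketch below (Python, standard library only; not from the original notes) assumes the correspondence $L_n^{(\alpha)}=(-1)^n n!\,L_n^{(\alpha),\text{modern}}$ implied by the generating function of 5.2, with the modern-convention polynomial computed by its standard three-term recurrence:

```python
from math import factorial

def laguerre_modern(n, alpha, x):
    # standard three-term recurrence for the modern-convention L_n^(alpha)
    L0, L1 = 1.0, 1.0 + alpha - x
    if n == 0:
        return L0
    for k in range(1, n):
        L0, L1 = L1, ((2 * k + 1 + alpha - x) * L1 - (k + alpha) * L0) / (k + 1)
    return L1

def L(n, alpha, x):
    # these notes' normalization (an inference from the generating function in 5.2)
    return (-1) ** n * factorial(n) * laguerre_modern(n, alpha, x)

alpha, x, n = 0.5, 1.7, 4
lhs = (x - 2 * n - alpha - 1) * L(n, alpha, x)
rhs = L(n + 1, alpha, x) + n * (n + alpha) * L(n - 1, alpha, x)
```

Under that normalization the two sides agree to machine precision.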

Chapter 6

Bessel Functions

6.1 Differential Equation Satisfied by Bessel Functions

1. $$\frac{d^2y}{dx^2}+\frac{1}{x}\frac{dy}{dx}+\left(1-\frac{\nu^2}{x^2}\right)y=0$$

or

2. $$\frac{1}{x}\frac{d}{dx}\left(x\frac{dy}{dx}\right)+\left(1-\frac{\nu^2}{x^2}\right)y=0$$
(An extensive listing of other equations satisfied by Bessel functions is given in Reference 2.)

6.2 General Solutions

Assuming that $A$ and $B$ are arbitrary constants, the general solutions to the above equations are:

$$y=A\,J_\nu(x)+B\,J_{-\nu}(x)\qquad(\nu\ \text{non-integral})$$
$$y=A\,J_n(x)+B\,N_n(x)\qquad(n\ \text{integral})$$

where $J_n(x)$ are the Bessel functions of the first kind of order $n$ and $N_n(x)$ are the Neumann¹ functions, the Bessel functions of the second kind. The Neumann functions are also frequently represented by the symbol $Y_n(x)$. For non-integral $\nu$,

$$N_\nu(x)=\frac{J_\nu(x)\cos(\nu\pi)-J_{-\nu}(x)}{\sin(\nu\pi)}$$

¹In reference 4, these are called Weber functions.

For $\nu=n$, $n\in\mathbb{Z}$, the above expression reduces to²
$$N_n(x)=\frac{2}{\pi}J_n(x)\log\frac{x}{2}-\frac{1}{\pi}\sum_{r=0}^{\infty}(-1)^r\,\frac{\left(\frac{x}{2}\right)^{n+2r}}{r!\,(n+r)!}\bigl[F(r)+F(n+r)\bigr]-\frac{1}{\pi}\sum_{r=0}^{n-1}\frac{(n-r-1)!}{r!}\left(\frac{x}{2}\right)^{2r-n}$$
where $F(r)=\sum_{s=1}^{r}\frac{1}{s}$. A third function which sometimes finds use is the Hankel function, or Bessel function of the third kind. There are two such functions, defined by
$$H_\nu^{(1)}(x)=J_\nu(x)+iN_\nu(x)\qquad(\nu\ \text{unrestricted})$$
$$H_\nu^{(2)}(x)=J_\nu(x)-iN_\nu(x)$$
Then, for integers, we see that a solution to Bessel's equation will be

$$y=A_1H_n^{(1)}(x)+A_2H_n^{(2)}(x)$$

where $A_1$ and $A_2$ are arbitrary constants that may be complex. These functions bear the same relation to the Bessel functions $J_\nu(x)$ and $N_\nu(x)$ as the functions $e^{\pm i\nu x}$ bear to $\cos\nu x$ and $\sin\nu x$. They satisfy the same differential equation and recursion relations as $J_\nu(x)$. Their importance results from the fact that they alone vanish for an infinite complex argument, viz. $H^{(1)}$ if the imaginary part of the argument is positive, $H^{(2)}$ if it is negative, i.e.,
$$\lim_{r\to\infty}H^{(1)}(re^{i\theta})=0,\qquad\lim_{r\to\infty}H^{(2)}(re^{-i\theta})=0,\qquad 0\le\theta\le\pi.$$
From the above equations, we can also write
$$J_\nu(x)=\frac{1}{2}\left[H_\nu^{(2)}(x)+H_\nu^{(1)}(x)\right]\qquad(\nu\ \text{unrestricted})$$
$$N_\nu(x)=\frac{i}{2}\left[H_\nu^{(2)}(x)-H_\nu^{(1)}(x)\right]$$

For Bessel Functions of the first kind of order n, a series representation is given as follows:

$$J_n(x)=\sum_{r=0}^{\infty}\frac{(-1)^r\,x^{2r+n}}{2^{2r+n}\,r!\,\Gamma(1+n+r)}$$

²This is shown in reference 8, pg. 577.

6.4 Properties of Bessel Functions

1. $J_n(x)=(-1)^n\,J_{-n}(x)$
2. $J_n(x)=(-1)^n\,J_n(-x)$
3. Bounds on $J_n(x)$ for $n\in\{\mathbb{Z}:n\ge 0\}$ and $x\in\{\mathbb{R}:x\ge 0\}$:
   a. $|J_n(x)|\le 1$
   b. $\left|\dfrac{d^k}{dx^k}J_n(x)\right|\le 1\qquad k=1,2,\ldots$

4. Limits for $J_n(x)$ for $n\in\{\mathbb{Z}:n\ge 0\}$:
   a. $\lim_{n\to\infty}J_n(x)=0$
   b. $\lim_{x\to\infty}J_n(x)=0$
   c. $\lim_{x\to 0}J_n(x)=\dfrac{x^n}{2^n\,n!}$
5. Limits for $N_n(x)$ for $n\in\{\mathbb{Z}:n\ge 0\}$ and $x\in\mathbb{R}$:
   a. $\lim_{x\to\infty}N_n(x)=0$
   b. $\lim_{x\to 0}N_n(x)=-\dfrac{(n-1)!}{\pi}\left(\dfrac{2}{x}\right)^n\qquad n\ge 1$
   c. $\lim_{x\to 0}N_0(x)=\dfrac{2}{\pi}\log\dfrac{\gamma x}{2}\qquad(\gamma=1.781)$
6. Graphs of $J_n(x)$ and $N_n(x)$ (figures not reproduced).

6.5 Generating Function

$$\exp\left[\frac{x}{2}\left(t-\frac{1}{t}\right)\right]=\sum_{n=-\infty}^{\infty}J_n(x)\,t^n\qquad(n\ \text{integral})$$

6.6 Recursion Formulae

a. $2J_n'(x)=J_{n-1}(x)-J_{n+1}(x)$

b. $\dfrac{2n}{x}J_n(x)=J_{n-1}(x)+J_{n+1}(x)$

c. $xJ_n'(x)=xJ_{n-1}(x)-nJ_n(x)=nJ_n(x)-xJ_{n+1}(x)$
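The recursion formulae can be spot-checked against the series of 6.3. A minimal sketch (Python, standard library; the helper name `J` is illustrative, not from the text):

```python
from math import factorial

def J(n, x, terms=40):
    # Bessel series of 6.3: sum of (-1)^r x^(2r+n) / (2^(2r+n) r! (n+r)!)
    return sum((-1) ** r * x ** (2 * r + n) /
               (2 ** (2 * r + n) * factorial(r) * factorial(n + r))
               for r in range(terms))

x, n = 2.3, 3

# (b): (2n/x) J_n = J_{n-1} + J_{n+1}
lhs_b = 2 * n / x * J(n, x)
rhs_b = J(n - 1, x) + J(n + 1, x)

# (a): 2 J_n'(x) = J_{n-1} - J_{n+1}, with the derivative by central difference
h = 1e-6
lhs_a = 2 * (J(n, x + h) - J(n, x - h)) / (2 * h)
rhs_a = J(n - 1, x) - J(n + 1, x)
```

Both identities hold to within the accuracy of the truncated series and the finite difference.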

6.7 Differential Formulae

a. $\dfrac{d}{dx}\left[x^n J_n(x)\right]=x^n J_{n-1}(x)$

b. $\dfrac{d}{dx}\left[x^{-n}J_n(x)\right]=-x^{-n}J_{n+1}(x)$

6.8 Orthogonality, Range $0\le x\le c$

For $0\le x\le c$,
1. $$\int_0^c x\,J_n(\lambda_{nj}x)\,J_n(\lambda_{nk}x)\,dx=\delta_{jk}\,N_{nj}^2$$
where $\lambda_{nj}$ are the positive roots of the equation $J_n(\lambda c)=0$ and $N_{nj}^2=\dfrac{c^2}{2}\left[J_{n+1}(\lambda_{nj}c)\right]^2$.
2. $$\int_0^c x\,J_n(\lambda_{n\ell}x)\,J_n(\lambda_{nm}x)\,dx=\delta_{\ell m}\,M_{n\ell}^2$$
where $\lambda_{n\ell}$ are the positive roots of the equation $(\lambda c)J_n'(\lambda c)=-hJ_n(\lambda c)$, or its equivalent $(n+h)J_n(\lambda c)-\lambda cJ_{n+1}(\lambda c)=0$, where $h$ is some constant $>0$ and $n=0,1,2,\ldots$, and
$$M_{n\ell}^2=\left[\frac{\lambda_{n\ell}^2c^2+h^2-n^2}{2\lambda_{n\ell}^2}\right]\left[J_n(\lambda_{n\ell}c)\right]^2$$

6.9 Expansion in Bessel Functions

$$f(x)=\sum_{j=0}^{\infty}A_j\,J_n(\lambda_{nj}x)$$
where $A_j$ will be represented in either of the two following ways:
a. $$A_j=\frac{2}{c^2\left[J_{n+1}(\lambda_{nj}c)\right]^2}\int_0^c x\,f(x)\,J_n(\lambda_{nj}x)\,dx$$
when $J_n(\lambda_{nj}c)=0$

b. $$A_j=\frac{2\lambda_{nj}^2}{\left(\lambda_{nj}^2c^2+h^2-n^2\right)\left[J_n(\lambda_{nj}c)\right]^2}\int_0^c x\,f(x)\,J_n(\lambda_{nj}x)\,dx$$
when $\lambda cJ_n'(\lambda c)=-hJ_n(\lambda c)$
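The first orthogonality relation of 6.8 can be verified numerically for $n=0$, $c=1$, using the first two positive roots of $J_0$ ($2.404826\ldots$ and $5.520078\ldots$). A sketch (Python, standard library; Simpson quadrature and the series of 6.3):

```python
from math import factorial

def J(n, x, terms=40):
    # Bessel series of 6.3
    return sum((-1) ** r * x ** (2 * r + n) /
               (2 ** (2 * r + n) * factorial(r) * factorial(n + r))
               for r in range(terms))

def simpson(f, a, b, m=2000):
    h = (b - a) / m
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, m))
    return s * h / 3

lam1, lam2, c = 2.4048255577, 5.5200781103, 1.0   # first two roots of J_0

cross = simpson(lambda x: x * J(0, lam1 * x) * J(0, lam2 * x), 0.0, c)  # should vanish
norm = simpson(lambda x: x * J(0, lam1 * x) ** 2, 0.0, c)
norm_formula = c ** 2 / 2 * J(1, lam1 * c) ** 2                         # N^2 of 6.8
```

The cross integral is near zero and the diagonal integral matches the closed-form norm $N_{01}^2$.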

6.10 Bessel Integral Form

$$J_n(x)=\frac{1}{\pi}\int_0^{\pi}\cos(n\theta-x\sin\theta)\,d\theta\qquad(n=0,1,2,\ldots)$$
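The integral form of 6.10 and the series of 6.3 can be compared directly; a sketch (Python, standard library; Simpson rule on the smooth integrand):

```python
from math import factorial, cos, sin, pi

def J_series(n, x, terms=40):
    # power series of 6.3
    return sum((-1) ** r * x ** (2 * r + n) /
               (2 ** (2 * r + n) * factorial(r) * factorial(n + r))
               for r in range(terms))

def J_integral(n, x, m=2000):
    # Bessel integral form of 6.10, Simpson rule on [0, pi] (m even)
    h = pi / m
    f = lambda th: cos(n * th - x * sin(th))
    s = f(0) + f(pi) + sum((4 if k % 2 else 2) * f(k * h) for k in range(1, m))
    return s * h / (3 * pi)
```

For moderate arguments the two representations agree to high accuracy.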

Chapter 7

Modified Bessel Functions

7.1 Differential Equation Satisfied

$$\frac{d^2y}{dx^2}+\frac{1}{x}\frac{dy}{dx}-\left(1+\frac{\nu^2}{x^2}\right)y=0$$

7.2 General Solutions

$$y=A\,I_\nu(x)+B\,I_{-\nu}(x)\qquad(\nu\ \text{non-integral})$$

$$y=A\,I_n(x)+B\,K_n(x)\qquad(n\ \text{integral})$$
where $A$ and $B$ are arbitrary constants,

In(x) = Modified Bessel Function of the first kind of order n

$K_n(x)$ = Modified Bessel Function of the second kind of order $n$. For non-integral $\nu$,
$$K_\nu(x)=\frac{\pi}{2}\,\frac{I_{-\nu}(x)-I_\nu(x)}{\sin(\nu\pi)}=\frac{\pi}{2}\,i^{\nu+1}H_\nu^{(1)}(ix)=\frac{\pi}{2}\,i^{-\nu-1}H_\nu^{(2)}(-ix).$$
For integers,

$$\frac{2}{\pi}K_n(x)=\frac{2}{\pi}(-1)^{n+1}I_n(x)\log\frac{x}{2}+\frac{1}{\pi}\sum_{r=0}^{n-1}(-1)^r\,\frac{(n-r-1)!}{r!}\left(\frac{x}{2}\right)^{2r-n}+(-1)^n\,\frac{1}{\pi}\sum_{r=0}^{\infty}\frac{\left(\frac{x}{2}\right)^{n+2r}}{r!\,(n+r)!}\bigl[F(r)+F(n+r)\bigr]$$
where

$$F(r)=\sum_{s=1}^{r}\frac{1}{s}$$

7.3 Relation of Modified Bessel to Bessel

For ν unrestricted,

$$I_\nu(x)=i^{-\nu}J_\nu(ix)=\frac{i^{-\nu}}{2}\left[H_\nu^{(1)}(ix)+H_\nu^{(2)}(ix)\right]=\frac{x^\nu}{\Gamma(\nu+1)\,2^\nu}\sum_{r=0}^{\infty}\frac{1}{r!\,(\nu+1)_r}\left(\frac{x^2}{4}\right)^r=\sum_{r=0}^{\infty}\frac{1}{\Gamma(r+\nu+1)\,r!}\left(\frac{x}{2}\right)^{2r+\nu}$$
where $(\alpha)_r=\alpha(\alpha+1)\cdots(\alpha+r-1)$ is the rising factorial.
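The relation $I_n(x)=i^{-n}J_n(ix)$ can be checked for integer order by evaluating both sides from their power series, the right-hand side in complex arithmetic. A sketch (Python, standard library):

```python
from math import factorial

def I(n, x, terms=40):
    # modified Bessel series of 7.3: sum of (x/2)^(2r+n) / (r! (r+n)!)
    return sum((x / 2) ** (2 * r + n) / (factorial(r) * factorial(r + n))
               for r in range(terms))

def J(n, z, terms=40):
    # Bessel series of 6.3, complex argument allowed
    return sum((-1) ** r * z ** (2 * r + n) /
               (2 ** (2 * r + n) * factorial(r) * factorial(n + r))
               for r in range(terms))

n, x = 3, 1.9
lhs = I(n, x)
rhs = (1j) ** (-n) * J(n, 1j * x)   # i^{-n} J_n(ix)
```

The right-hand side comes out real (to roundoff) and equal to $I_n(x)$.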

7.4 Properties of In

1. $I_{-n}(x)=I_n(x)$
2. $2I_n'(x)=I_{n-1}(x)+I_{n+1}(x)$
3. $\dfrac{2n}{x}I_n(x)=I_{n-1}(x)-I_{n+1}(x)$
4. $xI_n'(x)=xI_{n-1}(x)-nI_n(x)$
5. $xI_n'(x)=nI_n(x)+xI_{n+1}(x)$
6. $I_0'(x)=I_1(x)$

A few comments on the notation of Jahnke and Emde:

1. A general cylindrical function $Z_p(x)$ is defined on page 144 by $Z_p(x)=c_1J_p(x)+c_2N_p(x)$ for $p\in\mathbb{Z}$, where $c_1,c_2$ are arbitrary (real or complex) constants. Thus $Z_p(x)$ can apply to $J_p(x)$ by letting $c_2=0$, to $N_p(x)$ for $c_1=0$, and to $H_p(x)$ by other choices of the constants. All formulae on the pages following use this definition of $Z_p(x)$.

2. The function $I_p(x)$ is not listed as such, but is found as $i^{-p}J_p(ix)$ on pages 224-229.

3. The function $K_p(x)$ is not listed, but $\frac{2}{\pi}K_n(x)=i^{n+1}H_n^{(1)}(ix)=i^{-n-1}H_n^{(2)}(-ix)$. The functions $iH_0^{(1)}(ix)=-iH_0^{(2)}(-ix)$ and $-H_1^{(2)}(-ix)$ are tabulated on pages 236-243.
4. This reference is full of extremely interesting, beautiful, and helpful pictures of many functions, almost suitable for hanging in the living room.

References

1. G.N. Watson; “A Treatise on the Theory of Bessel Functions”, 2nd Edition, Cambridge University Press, 1944, (exhaustive treatment)

2. E. Jahnke and F. Emde; “Tables of Functions”, Dover Publications, 1945, pp. 107-125. (Lists some properties and tabulates functions).

3. R.V. Churchill; “Fourier Series and Boundary Value Problems”, McGraw Hill, 1941, pp. 175-201. (for discussion of concept of orthogonality, see Chap. 3).

4. I.N. Sneddon; “Special Functions of Mathematical Physics and Chemistry” University Math. Texts, 1956 (thorough discussion for practical use. However, his use of subscript is not always clear or general.)

5. D. Jackson; “Fourier Series and Orthogonal Polynomials”, Carus Math. Monographs, Number 6, 1941, pp. 45-68.

6. Whittaker and Watson; “Modern Analysis”, 4th Edition, Cambridge University press, 1958 (more rigorous development)

7. N.W. McLachlan; “Bessel Functions for Engineers”, 2nd Edition, Oxford, Clarendon Press, 1955.

8. H. and B.S. Jeffrets; “Mathematical Physics”, 3rd Edition, Cambridge University Press, 1955, (compact, but rigorous presentation. They use a different notation, but it is clearly defined.)

32 Chapter 8

The Laplace Transformation

8.1 Introduction

8.1.1 Description The Laplace transformation permits many relatively complicated operations upon a func- tion, such as differentiation and integration for instance, to be replaced by simpler algebraic operations, such as multiplication or division, upon the transform. It is analogous to the way in which such operations as multiplication and division are replaced by simpler pro- cesses of addition and subtraction when we work not with numbers themselves but with their logarithms.

8.1.2 Definition The Laplace transformation applied to a function f(t) associates a function of a new variable with f(t). This function of s is denoted by f(t) or where no confusion will result, L simply by (f) or F(s); and the transform is defined by: L

∞ st (f) f(t)e− dt L ≡ Z0 8.1.3 Existence Conditions For a Laplace transformation of f(t) to exist and for f(t) to be recoverable from its transform, it is sufficient that f(t) be of exponential order, i.e. that there should exist a st constant, s, such that the product e− f(t) is bounded for all values of t greater than some finite number T ; and that f(t) should| be| at least piecewise continuous over every finite interval 0 t T , for any finite number T . More formally, ≤ ≤ st M,s,T : e− f(t) M t > T ∃ | | ≤ ∀

33 These conditions are usually met by functions occurring in physical problems. The num- at ber a is called the exponential order of f(t). If a number a exists such that e− f(t) is bounded, then f is said to be of exponential order. | |

8.1.4 Analyticity If f(t) is piecewise continuous and of exponential order a, the transform of f(t), i.e., F(s), is an analytic function of s for Re(s)>s. Also, it is true that for Re(s)>s, limx F (s)=0 →∞ and limy F (s) = 0 when s=x+iy. →∞

8.1.5 Theorems Theorem I (Linearity) The Laplace transform is a sum of functions is the sum of the transforms of the individual functions. (f + g)= (f)+ (g) L L L Theorem II (Linearity) The Laplace transform of a constant times a function is the constant times the transform of the function. (cf)= c (f) L L Theorem III (Basic Operational Property) If f(t) is a continuous function of exponential order whose derivative is at least piecewise + continuous over every finite interval 0 t t2, and limt 0+ f(t)= f(0 ), then the Laplace transform of the derivative of f(t) is given≤ ≤ by →

+ (f ′)= s f(0 ) L { − and

2 + + (f ′′)= s sf(0 ) f ′(0 ). L { − − The latter, of course, requires an extension of the continuity of f(t) and its derivatives to include f ′′(t), and may be formally shown by partial integration. More generally, if f(t) and dnf its first n-1 derivatives are continuous and dtn is piecewise continuous, then

n d f n n 1 + n 2 + (n 1) + = s (f) s − f(0 ) s − f ′(0 ) . . . f − (0 ) L dtn L − − − −  

34 Theorem IV (Transforms of Integrals) If f(t) is of exponential order and at least piecewise continuous, then the transform of t a f(t)dt is given by t 1 1 0 R f(t)dt = (f)+ f(t)dt L sL s "Za # Za 8.1.6 Further Properties Below, let us assume all functions of the variable t are piecewise continuous, 0 t T , and of exponential order as t approaches infinity. Then the following theorems hold:≤ ≤

Theorem V

estf(t) = F (s a) L −   Theorem VI

If

f(t b), t b fb(t)= − ≥ (0, t

Then

bs f (t) = e− F (s) L b   Theorem VII (Convolution)

t f(t τ)f(τ)dt = F (s) G(s) L − ∗ "Z0 # t = g(t τ)f(τ)dt L − "Z0 #

35 Theorem VIII (Derivatives of Transforms)

dF (s) tf(t) = L − ds More generally,   dnF tnf(t) =( 1)n L − dsn   8.2 Examples

8.2.1 Solving Simultaneous Equations The problem is to solve for y from the simultaneous equations: t y′ + y +3 zdt = cos t + 3 sin t Z0 2y′ +3z′ +6z =0 y = 3, z =2 0 − 0 Taking the Laplace transform of each equation, we have: t (y′)+ (y)+3 zdt = (cos t)+3 (sin t) L L L L L "Z0 # 2 (y′)+3 (z′)+6 (z)=0 L L L This system of equations can be reduced as follows by using the theorems and definitions from the previous sections: 3 s 3 (y)+3 + (y)+ (z)= + L L sL s2 +1 s2 +1 2 s (y)+3 +3 s (z) 2 +6 (z)=0 L L − L Collecting the terms and transposing, we have  3 s +3 (s + 1) (y)+ (z)= 3 L sL s2 +1 − 2s (y)+3(s + 2) (z)=0 L L The two original integro-differential equations have thus been reduced to two linear algebraic equations in (y) and (z). Applying Cramer’s rule and solving for (y) since it is y which L L L we want; s+3 3 3 s2+1 − s s s s+3  0 3( + 2) 3( + 2) s2+1 3 (y)= = − 3   L (s + 1) 3s(s + 3) s 2s 3(s + 2)

36 Where 3s(s+3) is the determinant of the matrix in the denominator and 3(s+2) s+3 3 s2+1 − is the determinant of the matrix in the numerator. (y) can also be written as   L s +2 s +2 (y)= 3 . L s(s2 + 1) − s(s + 3)   Now applying the method of partial fractions, 2 1 2s 1 2 (y)= + − + L s s2 +1 − s +3 s     s 1 1 = 2 + − s2 +1 s2 +1 − s +3   And, finding the inverse in the table of transforms, which are tables relating functions of s to the corresponding functions of t, and will be found in section IV of this paper, 3t y = 2 cos t + sin t e− (t> 0) − − It should be noted that one of the inherent characteristics of solving differential equations by the use of Laplace transforms is that the initial conditions are included in the solution.

8.2.2 Electric Circuit Example Since Laplace transforms are widely used in the determination of the transient response of electric circuits, suppose a resistor R, inductor L, DC voltage source E, and an open switch are all connected in series. At time t=0, the switch closes. Find the equation of current flow after the switch is closed.

1. Kirchoff’s Voltage Law di E = iR + L dt 2. Laplace Transformation E = I(s)R + LsI(s) Lf(0+) s − t =0 i =0 f(0+)=0 ⇒ ⇒ 3. Solving for I(s) E I(s)= s(R + Ls) 4. Taking the Inverse

E Rt i(t)= 1 e− L R −   37 8.2.3 Transfer Functions For certain control functions, and for representing the dynamic behavior of various devices such as reactors, heat exchangers, etc., it is advantageous to use a “transfer function” because of the convenience in manipulation it provides. The transfer functions of many elements of a system, when strung together in a block diagram, represent a convenient way of writing complicated system equations. The transfer function of a system may be defined as the ratio of the output to the input of the system in transform (s) space. The conditions for using transfer functions are:

1. Initial condition operator =0.

2. No loading between transfer functions.

3. Transfer function satisfies existence conditions for Laplace transformation.

4. Linear system.

Example of Transfer Function

Consider an input voltage source ei in series with a resistor R and an inductor with inductance L. Suppose the potential drop across the inductor is e0. The objective is to find the transfer function E0 (s). Ei

1. Equations

di e = R + L i i dt di e = L 0 dt

2. Transforms

Ei(s)= RI(s)+ sLI(s)

E0(s)= sLI(s)

3. Solving for the Transfer Function

E sLI(s) sL sτ 0 (s)= = = Ei (R + sL)I(s) R + sL 1+ sτ

L (where τ = R is the circuit’s time constant.)

38 8.3 Inverse Transformations

8.3.1 Heaviside Methods When solving equations by the Laplace transform technique, it is frequently the most difficult part of the procedure to invert the transformed solution for F(s) into the desired function f(t). A simple way of making this inversion, but unfortunately a method only applicable to special cases, is to reduce the answer to a number of simple expressions by using partial fractions, and then applying the Heaviside theorems as outlined below:

Theorem I

1 p(s) If y(t)= − , where p(s) and q(s) are polynomials, and the order of q(s) is greater L q(s) than the order ofh p(s),i then the term in y(t) corresponding to an unrepeated linear factor (s a) of q(s) is p(a) eat or p(a) eat, where Q(s) is the product of all factors of q(s) except for − q!(a) Q(a) (s a). − Example If (f(t)) = s2+2 , then what is f(t)? L s(s+1)(s+2)

a. Rootsofthedenominatorare s =0,s = 1,s = 2 − − b. p(s)= s2 +2 3 2 2 c. q(s)= s +3s +2s; q′(s)=3s +6s +2 p(0) = 2; p(1) = 3,p( 2)=6 − d. q′(0) = 2, q′( 1) = 1, q′( 2)=2 − − − 2 0 3 t 6 2t t 2t e. f(t)= e + e− + e− =1 3e− +3e− 2 1 2 − − Theorem II

1 p(s) If y(t)= − where p(s) and q(s) are polynomials and the order of q(s) is greater L q(s) than the order ofh p(s),i then the terms in y(t) corresponding to the repeated linear factor (s a)r of q(s) are: − φ(r 1)(a) φ(r 2)(a) t φ (a)tr 2 tr 1 eat − + − + . . . + ′ − + φ(a) − , (r 1)! (r 2)! 1! 1!(r 2)! (r 1)! " − − − − # where φ(s) is the quotient of p(s) and all the factors of q(s) except for (s a)r. −

39 Example If (f(t)) = s+3 , then what is f(t)? L (s+2)2(s+1)

s+3 (s+1) (s+3) 2 − a. φ(s)= s+1 ; φ′(s)= (s+1)2 = (s+1)− 2 b. φ( 2) = 1; φ′( 2) = 2 The terms− in f−(t) corresponding− − to (s + 2)2 are:

2t 2 1t 2t e− − + − = e− (s + t). 1! 0!1! −   Then, as in the example from Theorem I; p(s)=8+3; q(s)= s3 +5s2 +8s +4 2 q′(s)=3s + 10s +8

p( 1)=2 q′( 1)=1. − − Thus,

2t t f(t)= e− (2 + t)+2e− − Theorem III

1 p(s) If y(t)= − , where p(s) and q(s) are polynomials and the order of q(s) is greater L q(s) than the order ofh p(si), then the terms in y(t) corresponding to an unrepeated quadratic 2 2 e−at factor (s + a) + b of q(s) are b (φi cos bt + φr sin bt) where φr and φi are respectively, the real and imaginary parts of φ( a+ib), and φ(s) is the quotient of p(s) and all the factors   of q(s) except (s + a)2 + b2 . −   Example If (f(t)) = s , then what is f(t)? L (s+2)2(s2+2s+2) a. Considering the linear factor as in the example of Theorem II s s2 +2 φ(s)= ; φ′(s)= − (s2 +2s + 2) (s2 +2s + 2)2 1 φ( 2) = 1; φ′( 2) = − − − −2 b. Considering the quadratic factor;

s2 +2s +2=(s + 1)2 +12 s φ(s)= . (s + 2)2

40 Therefore,

1+ i φ( a + ib)= φ( 1+ i)= − 2 − − ( 1+ i)+2 − 1+ i 1+ i 1 i = − = −  = + . (1 + i)2 2i 2 2

1 2 e−t(cos t+sin t) So φr = φi = 2 and the terms in f(t) corresponding to (s +2s + 2) are 2 . By adding the two partial inverses, we get the following answer:

2t t (1+2t)e− e− (cos t + sin t) f(t)= − + 2 2 8.3.2 The Inversion Integral When the function cannot be reduced to a form amenable to inversion by tables of transforms or Heaviside methods, there remains a most powerful method for the evaluation of inverse transformations. The inversion is given by an integral in the complex s-plane,

γ+i 1 ∞ f(t)= estF (s)ds 2πi γ i Z − ∞ where γ is some real number chosen such that F(s) is analytic (see Appendix A) for Re(s) γ, ≥ and the Cauchy principle value of the integral is to be taken, i.e.

1 γ+iB f(t)= lim eStF (s)ds 2πi B γ iB →∞ Z − Let us illustrate the formal origin of the inversion integral in the following way. In the complex plane, let φ(x) be a function of z, analytic on the line x=γ, and in the entire half plane R to the right of this line. Moreover, let φ(z) approach zero uniformly as z becomes | | infinite through this half plane. Then s0 is any point in the half plane R, we can choose a semi-circular contour c, composed of c1 and c2, as shown below, and apply Cauchy’s integral formula, (see Appendix B) 1 φ(z) Φ(s )= dz 0 2πi z s Ic − 0 Here, φ(z) is analytic within and on the boundary c of a simply connected region R and s0 is any point in the interior of R integration around c in a positive sense). Thus, Cauchy’s integral formula yields

γ ib 1 φ(z) 1 φ(z) 1 − φ(z) φ(s)= dz = dz + dz 2πi z s 2πi z s 2πi z s Ic − Zc2 − Zγ+ib −

41 Now, for values of z on the path of integration, c2, and for b sufficiently large, z a b a γ b a | − | ≥ −| − | ≥ −| | φ(z) φ(z) dz | | dz z s z s ⇒ c2 ≤ c2 | | Z − Z | − | M dz ≤ b s | | −| | Zc2 bM = π b s −| | b where M is the maximum value of φ(z) on c2. As b , the fraction b s 1, and M approaches zero. Hence, | | → ∞ −| | → φ(z) lim dz =0 b z s →∞ Zc2 − and the contour integral reduces to γ ib γ+i 1 − φ(z) 1 ∞ φ(z) φ(s) = lim dz = dz b 2πi γ+ib z s 2πi γ i z s →∞ Z − Z − ∞ − Now taking the inverse transform of the above equation,

γ+i 1 1 1 ∞ φ(z) f(t)= − φ(s) = − dz L { } L (2πi γ i z s ) Z − ∞ − γ+i 1 ∞ 1 φ(z) = − dz 2πi γ i L z s Z − ∞   γ+i − 1 ∞ 1 1 = φ(z) − dz 2πi γ i L z s Z − ∞   γ+i − 1 ∞ = φ(z)etzdz 2πi γ i Z − ∞ Since from our table of transforms

1 1 zt − = e L s z  −  Our final equation after switching from z to s as dummy variable in the last integral is just the inversion integral which we were establishing. γ+i 1 1 ∞ st − φ(z) = f(t)= φ(z)e ds L 2πi γ i Z − ∞ At this point, it would be advantageous to know how to evaluate the integral on the right. According to residue theorem, the integral of estφ(s) around a path enclosing the isolated singular points s)1,s , ,s , of estφ(s) has the value 2 ··· n 2πi φ1(t)+ ρ2(t)+ . . . + ρn(t)   42 st where ρn(t) is the residue of e φ(s) at s=sn. For discussion of residues, see Appendix C; for singular points, Appendix D. Let the path of integration be made up of the line segments γ ib, γ + ib, and c . Then, − 3 N 1 γ+ib 1 φ(s)estds + φ(s)estds = ρ (t) 2πi 2πi n γ ib c3 n=1 Z − Z X If the second integral around c vanishes for b , as often happens, we are led to the 3 → ∞ immediate result that N 1 − φ(s) = f(t)= ρ (t) L n n=1  X Note that in the formal derivation of the inversion formula, we assumed that φ(s) (and st therefore e φ(s)) is analytic for s γ, and that lims φ(s) =0 in that plane. In our →∞ discussion of the residue form of the≥ inversion, we work| in th|e left half-plane. This is because Laplace transforms have the property that they are analytic in a right half-plane, and that in that plane, lims φ(s) =0. →∞ Questions of the validity of the| above| procedures, alterations of contour, and applications to problems are not dealt with here, as they are presented in detail in the references.

43 8.4 Tables of Transforms

F(s) f(t) 1 Unit impulse at t=0, δ(t)

s Unit doublet impulse at t=0, δ2(t) 1 s Unit step at t=0, u(t) 1 at s+a e− 1 e−at e−bt (s+a)(s+b) b−a − s+c (c a)e−at (c b)e−bt − − − (s+a)(s+b) b a − s+c c c a at c b bt s(s+a)(s+b) ab + a(a− b) e− + b(b− a) e− − − 1 e−at e−bt e−ct (s+a)(s+b)(s+c) (c a)(b a) + (a b)(c b) + (a c)(b c) − − − − − − s+d (d a)e−at (d b)e−bt (d c)e−ct − − − (s+a)(s+b)(s+c) (c a)(b a) + (a b)(c b) + (a c)(b c) − − − − − − s2+es+d (a2 ea+d)e−at (b2 eb+d)e−bt (c2 ec+d)e−ct − − − (s+a)(s+b)(s+c) (c a)(b a) + (a b)(c b) + (a c)(b c) − − − − − − 1 sin bt a2+b2 b s+d √d2+b2 1 b s2+b2 b sin (bt + ϕ) ϕ = tan− d s   s2+b2 cos(bt)

44 F(s) f(t)

1 e−at (s+a)2+b2 b sin (bt)

s+a at (s+a)2+b2 e− cos(bt)

e−at√(d a)2+b2 s+d − 1 b (s+a)2+b2 b sin (bt + ϕ) ϕ = tan− d a −   1 1 e−at 1 b 2 2 2 2 + sin (bt ϕ) ϕ = tan− b = a + b s (s+a)2+b2 b b0b a 0 [ ] 0 − − −at 2 2   s+d d e √(d a) +b 1 b 1 b 2 2 2 2 + − sin (bt + ϕ) ϕ = tan− tan− b = a + b s (s+a)2+b2 b b0b d a a 0 [ ] 0 − − −     1 sinh (bt) s2 b2 b − s s2 b2 cosh (bt) − 1 tn−1 N sn (n 1)! n − ∀ ∈ ν−1 1 t ν > 0 sν Γ(ν) ∀ −at 1 e +at 1 s2(s+a) a2 − s+d d a at d a d s2(s+a) a−2 e− + a t + a−2

45 F(s) f(t)

s+d at (d a)t +1 e− (s+a)2 − 1  1 (at+1)e−at − s(s+a)2 a2

s+d d a d d at + − t e− s(s+a)2 a2 a − a2 h i s2+bs+d d ba a2 d a2 d at s(s+a)2 a2 + −a − t + a−2 e− h i  1 1 t 1 sin (bt) s2(s2+b2) b2 − b3 1 1 (cos (bt) 1) + 1 t2 s3(s2+b2) b4 − 2b2 1 1 1 s2(s2 b2) b3 sinh (bt) b2 t − − 1 1 1 2 s3(s2 b2) b4 (cosh (bt) 1) 2b2 t − − − 1 1 sin (bt) bt cos(bt) (s2+b2)2 2b3 − s 1  (s2+b2)2 2b t sin (bt)

s2 1 (s2+b2)2 2b sin (bt)+ bt cos(bt)

s2+b2  (s2+b2)2 t cos(bt)

1 e−at 2 3 sin (bt) bt cos(bt) [(s+a)2+b2] 2b −

s+a te−at  2 sin (bt) [(s+a)2+b2] 2b

(s+a)2 b2 at − 2 te− cos(bt) [(s+a)2+b2]

−t s e 1 (t t )u(t t ) s2 − 1 − 1 −t s (t1s+1)e 1 tu(t t ) s2 − 1 (t2s2+2t s+2)e−t1s 1 1 t2u(t t ) s3 − 1

46 8.4.1 Analyticity Let ω be a single valued complex function of z such that ω = f(z) = u(x, y)+ iv(x, y) where u and v are real functions. The definition of the limit of f(z) as z approaches z0 and the theorems on limits of sums, products, and quotients correspond to those in the theory of functions of a real variable. The neighborhoods involved are not two-dimensional; however; and the condition lim f(z)= u0 + iv0 z z0 → is satisfied if and only if the two-dimensional limits of the real functions u(x, y) and v(x, y) as x x ,y y have the values u and v respectively. Also, f(z) is continuous when → 0 → 0 0 0 z = z0 is and only if u(x, y) and v(x, y) are both continuous at x0,y0). The derivative of ω at a point z is:

dω ∆ω f(z + ∆z) f(z) = f ′(z) = lim = lim − , dz ∆z 0 z ∆z 0 z → ∆ → ∆ provided that this limit exists (it must be independent of direction). Now suppose one chooses a path on which ∆y = 0 so that ∆z = ∆x. Then, since ∆ω = ∆u + i∆v,

dω ∆u ∆v ∂u ∂v = lim + i = + i dz ∆x 0 ∆x ∆x ∂x ∂x →   The case when ∆x = 0 so that ∆z = i∆y is similar. dω ∆u ∆v ∂u ∂v = lim + i = + i dz ∆y 0 ∆y ∆y ∂y ∂y →   Equating the real and imaginary parts of the above equations, since we insist that the derivative must be independent of direction, we get ∂u ∂v ∂v ∂u = ; = . ∂x ∂y ∂x −∂y These are known as the “Cauchy-Reimann conditions”. Now, the definition of analyticity is that “a function f(z) is said to be analytic at a point z0 is its derivative f ′(z) exists at every point of some neighborhood of z0”. And, it is necessary and sufficient that f(z)= u + iv satisfies the Cauchy-Reimann conditions in order for the function to have a derivative at point z.

8.4.2 Cauchy’s Integral Formula Theorem I

If f(z) is analytic at all points within and on a closed curve c, then c f(z)dz =0. H Proof:

47 f(z)dz = (u + iv)(dx + idy)= (udx vdy)+ i (vdx + udy). − Ic Ic Ic Ic Applying Green’s lemma to each integral, ∂v ∂u ∂u ∂v f(z)dz = dxdy + i dxdy. −∂x − ∂y −∂x − ∂y Ic Z ZR   Z ZR   Due to analyticity, the integrands on the right vanish identically, giving

f(z)dz =0. QED Zc Theorem II

If f(z) is analytic within and on the boundary c of a simply connected region R and if z0 is any point in the interior of R, then f(z ) = 1 f(z) dz where the integration around c 0 2πi c z z0 is in the positive sense. − H

Proof:

Let c0 be a circle with center at z0 whose radius ρ is sufficiently small such that c0 lies entirely within R. The function f(z) is analytic everywhere within R, hence f(z) is analytic z z0 − everywhere within R except at z = z0. By the “principle of deformation of contours” (see any complex variable book), we have that

f(z) f(z) f(z )+ f(z) f(z ) dz = dz = 0 − 0 dz z z z z z z Ic − 0 Ic0 − 0 Ic0 − 0 1 f(z) f(z ) = f(z ) dz + − 0 dz. 0 z z z z Ic0 − 0 Ic0 − 0 Considering the first integral 1 dz and letting z z = reiθ, dz = rieiθdθ, we get the c0 z z0 0 following: − − H 2π rieiθ 2π dθ = i dθ =2πi. reiθ Z0 Z0 With that, we may also observe the following:

f(z) f(z ) f(z) f(z ) 1. − 0 dz | − 0 | dz z z z z c0 0 ≤ c0 0 | | Z − I | − |

2. On c0, z z0 = ρ − 3. f(z) f(z ) < ǫ provided that z z = ρ < δ (Due to the continuity of f(z)). | − 0 | | − 0|

48 Choosing ρ to be less than δ, we write f(z) f(z ) ǫ ǫ ǫ − 0 dz < dz = dz = 2πρ =2πǫ. z z ρ ρ ρ c0 0 c0 | | c0 | | I − I I

Since the integral on the left is independent of ǫ, yet cannot exceed 2πǫ, which can be made arbitrarily small, it follows that the absolute value of the integral is zero. We therefore have that f(z) dz = f(z )2πi +0 or, z z 0 Ic0 − 0 1 f(z) f(z )= dz. QED 0 2πi z z Ic0 − 0 This is known as “Cauchy’s integral formula”.

Calculation of Residues Laurent Series Theorem I:

If f(z) is analytic throughout the closed region R, bounded by two concentric circles c1 and c2, then at any point in the annular ring bounded by the circles, f(z) can be represented by a series of the following form where a is the common center of the circles:

∞ f(z)= a (z a)n n − n= X−∞ 1 f(ω) a = dω. n 2πi (ω a)n+1 Ic − Each integral is taken in the counter-clockwise direction around any curve c lying within the annulus and encircling its inner boundary (for proof see any complex variable book). This series is called the Laurent series.

Residues

1 The coefficient, a 1, of the term (z a)− in the Laurent expansion of a function f(z), − is related to the integral of the function− through the formula 1 a 1 = f(z)dz. − 2πi Ic 1 In particular, the coefficient of (z a)− in the expansion of f(z) about an “isolated singular − point” is called the “residue” of f(z) at that point. If we consider a simply closed curve c containing in its interior a number of isolated

singularities of a function f(z), then it can be shown that c f(z)dz =2πi [r1 + r2 + + rn] where r ,r , ,r are the residues of f(z) at the singular points within c. ··· 1 2 ··· n H 49 Determination of Residues The determination of residues by the use of series expansions is often quite tedious. An alternative procedure for a simple or first order pole at z = a can be obtained by writing a 1 f(z)= − + a + a (z a)+ z a 0 1 − ··· − and multiplying by z a to get − 2 (z a)f(z)= a 1 + a0(z a)+ a1(z a) + − − − − ··· and letting z a to get → a 1 = lim (z a)f(z) . − z a → −   A general formula for the residue at a pole of order m is

m 1 d − m (m 1)!a 1 = lim (z a) f(z) . − z a dzm 1 − → − −   For polynomials, the method of residues reduces to the Heaviside method for finding inverse Laplace transforms.

8.4.3 Regular and Singular Points If ω = f(z) possesses a derivative at z z and at every point in some neighborhood of − 0 z0, then f(z) is said to be “analytic” at z = z0, and z0 is called a “regular point” of the function. If a function is analytic at some point in every neighborhood of a point z0, but not at z0, then z0 is called a “singular point” of the function. If a function is analytic at all points except z0, in some neighborhood of z0, then z0 is an “isolated singular point”. About an isolated singular point z0, a function always has a Laurent series representation

A 2 A 1 f(z)= . . . + − + − + A + A (z z )+ . . . (z z )2 z z 0 1 − 0 − 0 − 0 where 0 < z z < γ and γ is the radius of the neighborhood in which f(z) is analytic | − 0| 0 0 except at z . This series of negative powers of (z z ) is called the “principle part” of f(z) 0 − 0 about the isolated singular point z0. The point z0 is an “essential singular point” of f(z) if the principle part has an infinite number of non-vanishing terms. It is a “pole of order m” if A m = 0 and A n = 0 when n > m. It is called a “simple pole” when m = 1. − 6 − 8.5 References

1. Churchill; “Operational Mathematics” - A complete discourse on operational methods, well written and presented.

50 2. Wylie; “Advanced Engineering Mathematics” - A concise review of the highlights of transformation calculus, both Fourier and Laplace transforms.

3. Murphy; “Basic Automatic Control Theory” - An exposition of Laplace transforms with many good electrical engineering examples. A fairly complete table of Laplace transforms.

4. Erdelyi, A., et al; “Tables of Integral Transforms”, Vols. I and II, McGraw Hill, New York (1954). Very extensive compilation of transforms of many kinds.

5. Gardner and Barnes; “Transients in Linear Systems”, Extensive tables of transforms of ratios of polynomials.

51 Chapter 9

Fourier Transforms

9.1 Definitions

9.1.1 Basic Definitions In addition to the Laplace transform there exists another commonly-used set of trans- forms called Fourier transforms. At least five different Fourier transforms may be distin- guished. Their definitions are as follows:

Finite Range Cosine Transform π [n]= C [f] f(x)cos(nx)dx (n =0, 1, 2,...) Fc n ≡ Z0 Finite Range Sine Transform π [n]= S [f] f(x) sin (nx)dx (n =1, 2, 3,...) Fs n ≡ Z0 Infinite Range Cosine Transform

2 ∞ [r]= C [f] f(x)cos(rx)dx (0 r ) Fc r ≡ π ≤ ≤∞ r Z0 Infinite Range Sine Transform

2 ∞ [r]= S [f] f(x) sin (rx)dx (0 r ) Fs r ≡ π ≤ ≤∞ r Z0 Infinite Range Exponential Transform

1 ∞ [r]= [f] f(x)eirxdx ( r ) Fe Er ≡ 2π −∞ ≤ ≤∞ r Z−∞ 9.1.2 Range of Definition In the infinite range transforms, the transform variable is continuous; in the case of the finite range transforms, the variable takes only positive integer values or zero. Considering

52 the range of integration used in the definition of each transform, we see that the finite range transforms apply to functions defined on a finite interval, the infinite range sine and cosine transforms to functions defined on a semi-infinite interval, while the exponential transform applies to functions defined on the infinite interval.

9.1.3 Existence Conditions

As an existence condition for all these transforms, it is customarily required that the function be absolutely integrable over the range, i.e., that $\int |f(x)|\,dx$ exists. Note that although for the derivations to follow the more stringent conditions of continuity or sectional continuity are imposed upon the function, absolute integrability is all that is required in the general case.
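These definitions lend themselves to direct numerical spot-checks. The sketch below (not part of the notes' formulary; the midpoint-rule integrator and the test function $f(x) = x$ are arbitrary illustrative choices) evaluates the finite range sine transform and compares it with the closed form $S_n[x] = \pi(-1)^{n+1}/n$, obtained by integration by parts:

```python
import math

def finite_sine_transform(f, n, steps=20000):
    """S_n[f] = integral_0^pi f(x) sin(nx) dx, via the midpoint rule."""
    h = math.pi / steps
    return h * sum(f((k + 0.5) * h) * math.sin(n * (k + 0.5) * h)
                   for k in range(steps))

# For f(x) = x, integration by parts gives S_n = pi * (-1)**(n+1) / n.
for n in range(1, 6):
    exact = math.pi * (-1) ** (n + 1) / n
    assert abs(finite_sine_transform(lambda x: x, n) - exact) < 1e-6
```

The same routine, with the kernel and limits changed, serves for any of the five transforms above.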

9.2 Fundamental Properties

9.2.1 Transforms of Derivatives

Consider the finite range cosine transform of the derivative $f'$ of the function $f$,

\[ C_n[f'] = \int_0^\pi f'(x)\cos(nx)\,dx. \]
Integrating by parts,
\[ C_n[f'] = \Big[f(x)\cos(nx)\Big]_0^\pi + n\int_0^\pi f(x)\sin(nx)\,dx = f(\pi)\cos(n\pi) - f(0^+) + nS_n[f]. \]
Since $n$ is an integer,
\[ C_n[f'] = (-1)^n f(\pi) - f(0^+) + nS_n[f]. \]
Consider also,
\[ S_n[f'] = \int_0^\pi f'(x)\sin(nx)\,dx = \Big[f(x)\sin(nx)\Big]_0^\pi - n\int_0^\pi f(x)\cos(nx)\,dx = -nC_n[f]. \]

Now take for $f$, $f = g'$; and by iteration we get the following:
\[ C_n[g''] = (-1)^n g'(\pi) - g'(0^+) + nS_n[g'] = (-1)^n g'(\pi) - g'(0^+) - n^2 C_n[g]. \]
Similarly,
\[ S_n[g''] = -nC_n[g'] = -n\big[(-1)^n g(\pi) - g(0)\big] - n^2 S_n[g] = n\big[g(0) - (-1)^n g(\pi)\big] - n^2 S_n[g]. \]
Now consider the infinite range cosine transform

\[ C_r[f'] = \sqrt{\tfrac{2}{\pi}}\int_0^\infty f'(x)\cos(rx)\,dx, \]

and again integrating by parts, and assuming $\lim_{x\to\infty} f(x) = 0$, which is a consequence of our condition of absolute integrability, we get

\[ C_r[f'] = \sqrt{\tfrac{2}{\pi}}\big[-f(0)\big] + r\sqrt{\tfrac{2}{\pi}}\int_0^\infty f(x)\sin(rx)\,dx = -\sqrt{\tfrac{2}{\pi}}\,f(0) + rS_r[f]. \]
Likewise,
\[ S_r[f'] = \sqrt{\tfrac{2}{\pi}}\int_0^\infty f'(x)\sin(rx)\,dx = -r\sqrt{\tfrac{2}{\pi}}\int_0^\infty f(x)\cos(rx)\,dx = -rC_r[f]. \]
Iterating once, we find
\[ C_r[f''] = -\sqrt{\tfrac{2}{\pi}}\,f'(0) + rS_r[f'] = -\sqrt{\tfrac{2}{\pi}}\,f'(0) - r^2 C_r[f]. \]
Similarly,
\[ S_r[f''] = -rC_r[f'] = r\sqrt{\tfrac{2}{\pi}}\,f(0) - r^2 S_r[f]. \]

Finally, consider
\[ \mathcal{E}_r[f'] = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty f'(x)\,e^{irx}\,dx \]
and assuming $\lim_{|x|\to\infty} f(x) = 0$,
\[ \mathcal{E}_r[f'] = -\frac{ir}{\sqrt{2\pi}}\int_{-\infty}^\infty f(x)\,e^{irx}\,dx = -ir\,\mathcal{E}_r[f]. \]
Iterating, it follows that
\[ \mathcal{E}_r[f''] = -r^2\,\mathcal{E}_r[f]. \]

In each case, we have assumed continuity for $f'$ and $f''$ in order to perform the indicated parts integrations. One may proceed with the iterations, obtaining relations involving transforms of higher derivatives. Further properties are derivable with similar ease, the procedure usually involving an integration by parts.
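The finite-range relation $C_n[f'] = (-1)^n f(\pi) - f(0^+) + nS_n[f]$ can be checked numerically for a sample function; $f(x) = x^2$ below is an arbitrary illustrative choice, and the quadrature routine is only a sketch:

```python
import math

def ftrans(g, kernel, n, steps=20000):
    """Finite transform: integral_0^pi g(x) * kernel(n x) dx (midpoint rule)."""
    h = math.pi / steps
    return h * sum(g((k + 0.5) * h) * kernel(n * (k + 0.5) * h)
                   for k in range(steps))

f = lambda x: x * x        # test function
fp = lambda x: 2.0 * x     # its derivative
for n in range(1, 5):
    lhs = ftrans(fp, math.cos, n)                                    # C_n[f']
    rhs = (-1) ** n * f(math.pi) - f(0.0) + n * ftrans(f, math.sin, n)
    assert abs(lhs - rhs) < 1e-5
```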

9.2.2 Relations Among Infinite Range Transforms

It is interesting to note some relations among the infinite range transforms. Recalling Euler’s identity $e^{irx} = \cos(rx) + i\sin(rx)$, we find that
\begin{align*}
\mathcal{E}_r[f] &= \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty f(x)\cos(rx)\,dx + \frac{i}{\sqrt{2\pi}}\int_{-\infty}^\infty f(x)\sin(rx)\,dx \\
&= \frac{1}{\sqrt{2\pi}}\int_{-\infty}^0 f(x)\cos(rx)\,dx + \frac{1}{\sqrt{2\pi}}\int_0^\infty f(x)\cos(rx)\,dx \\
&\quad + \frac{i}{\sqrt{2\pi}}\int_{-\infty}^0 f(x)\sin(rx)\,dx + \frac{i}{\sqrt{2\pi}}\int_0^\infty f(x)\sin(rx)\,dx \\
&= \frac{1}{\sqrt{2\pi}}\int_0^\infty f(-x)\cos(rx)\,dx + \frac{1}{\sqrt{2\pi}}\int_0^\infty f(x)\cos(rx)\,dx \\
&\quad + \frac{i}{\sqrt{2\pi}}\int_0^\infty f(x)\sin(rx)\,dx - \frac{i}{\sqrt{2\pi}}\int_0^\infty f(-x)\sin(rx)\,dx
\end{align*}
or
\[ \mathcal{E}_r[f] = \frac{1}{2}\Big\{ C_r[f(-x)] + C_r[f(x)] + iS_r[f(x)] - iS_r[f(-x)] \Big\}, \]
which is not very interesting except when $f(x)$ is either even or odd on the infinite interval; if even, i.e. $f(x) = f(-x)$, then the exponential transform reduces to the cosine transform; if odd, i.e. $f(x) = -f(-x)$, then the exponential transform reduces to the sine transform, with a factor $\sqrt{-1}$.

9.2.3 Transforms of Functions of Two Variables

The transforms may also be used with functions of two or more variables; for example, if $f$ is a function of $x$ and $y$, defined for $0 \le x \le \pi$, $0 \le y \le \pi$, then
\begin{align*}
S_m[f] &= \int_0^\pi f(x,y)\sin(mx)\,dx \\
S_n[f] &= \int_0^\pi f(x,y)\sin(ny)\,dy \\
S_{m,n}[f] &= \int_0^\pi S_m[f]\sin(ny)\,dy = \int_0^\pi S_n[f]\sin(mx)\,dx \\
&= \int_0^\pi\!\!\int_0^\pi f(x,y)\sin(mx)\sin(ny)\,dx\,dy.
\end{align*}
Furthermore,

\[ S_{m,n}\left[\frac{\partial^2 f}{\partial x^2}\right] = m\Big\{ S_n[f(0,y)] - (-1)^m S_n[f(\pi,y)] \Big\} - m^2 S_{m,n}[f] \]
so that if

\[ f(0,y) = f(\pi,y) = f(x,0) = f(x,\pi) = 0, \]
then

\begin{align*}
S_{m,n}\left[\frac{\partial^2 f}{\partial x^2}\right] &= -m^2 S_{m,n}[f] \\
S_{m,n}\left[\frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}\right] &= -(m^2+n^2)\,S_{m,n}[f].
\end{align*}

Similar formulae may be derived for $C_{m,n}$, and extensions can be worked out in analogy to the single-variable properties. These transforms of more than one variable amount to transforms of transforms, obtained by taking the transform of the function with respect to a single variable, and subsequently taking the transform of this transformed function with respect to another variable. If the boundary conditions in the various dimensions are not all of the same type, more than one type of transformation may be used; in fact, one fairly common combination is the Fourier plus the Laplace transformation.
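The two-variable relation $S_{m,n}[\partial^2 f/\partial x^2 + \partial^2 f/\partial y^2] = -(m^2+n^2)S_{m,n}[f]$ can be spot-checked for a function vanishing on the boundary of the square. The test function $f = x(\pi-x)\,y(\pi-y)$ and the crude double-midpoint quadrature below are illustrative choices only:

```python
import math

def sine2d(g, m, n, steps=600):
    """S_{m,n}[g]: double integral of g(x,y) sin(mx) sin(ny) over [0,pi]^2."""
    h = math.pi / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        sx = math.sin(m * x)
        total += sx * sum(g(x, (j + 0.5) * h) * math.sin(n * (j + 0.5) * h)
                          for j in range(steps))
    return total * h * h

# f vanishes on the boundary of the square, as the relation requires
f = lambda x, y: x * (math.pi - x) * y * (math.pi - y)
lap = lambda x, y: -2.0 * y * (math.pi - y) - 2.0 * x * (math.pi - x)

m, n = 1, 3
lhs = sine2d(lap, m, n)
rhs = -(m * m + n * n) * sine2d(f, m, n)
assert abs(lhs - rhs) < 0.05
```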

9.2.4 Fourier Exponential Transforms of Functions of Three Variables

Consider a function of three variables, $f(x_1, x_2, x_3)$, piecewise continuous and absolutely integrable over the infinite range with respect to each variable. We may apply the exponential transform with respect to each variable, defining the three-times transformed function

\[ \mathcal{E}_k[f] = g(k_1,k_2,k_3) = \frac{1}{(2\pi)^{3/2}}\int_{-\infty}^\infty\!\!\int_{-\infty}^\infty\!\!\int_{-\infty}^\infty e^{i(k_1x_1+k_2x_2+k_3x_3)}\, f(x_1,x_2,x_3)\,dx_1\,dx_2\,dx_3. \]
Using vector notation, this may be rewritten as follows:

\[ \mathcal{E}_{\mathbf{k}}[f] = g(\mathbf{k}) = \frac{1}{(2\pi)^{3/2}}\int_{\mathbf{x}} e^{i\mathbf{k}\cdot\mathbf{x}}\, f(\mathbf{x})\,d^3x \]
where $\mathbf{k}$ has the components $k_1, k_2, k_3$, $\mathbf{x}$ has the components $x_1, x_2, x_3$, $d^3x = dx_1\,dx_2\,dx_3$, and the integration is to be taken over the full range, $-\infty$ to $\infty$, of each variable. The inverse transformation gives back $f(\mathbf{x})$,

\[ f(\mathbf{x}) = \frac{1}{(2\pi)^{3/2}}\int_{\mathbf{k}} e^{-i\mathbf{k}\cdot\mathbf{x}}\, g(\mathbf{k})\,d^3k. \]

Properties: suppose $\mathcal{E}_{\mathbf{k}}[f] = g(\mathbf{k})$ and $\mathcal{E}_{\mathbf{k}}[\mathbf{F}] = \mathbf{G}(\mathbf{k})$; then

1. $\mathcal{E}_{\mathbf{k}}[\nabla f] = -i\mathbf{k}\,g(\mathbf{k})$
2. $\mathcal{E}_{\mathbf{k}}[\nabla\cdot\mathbf{F}] = -i\mathbf{k}\cdot\mathbf{G}(\mathbf{k})$
3. $\mathcal{E}_{\mathbf{k}}[\nabla\times\mathbf{F}] = -i\mathbf{k}\times\mathbf{G}(\mathbf{k})$
4. $\mathcal{E}_{\mathbf{k}}[\nabla^2 f] = -k^2\,g(\mathbf{k})$

From a glance at formulae 1 to 4, we see that under this transformation the vector operator $\nabla$ operating on a function transforms into the vector $-i\mathbf{k}$ times the transformed function.

9.3 Summary of Fourier Transform Formulae

9.3.1 Finite Transforms

Functions defined on any finite range can be transformed into functions defined on the interval $0 \le x \le \pi$.

1. Definitions

\begin{align*}
\text{a. } & S_n[y] = \sqrt{\tfrac{2}{\pi}}\int_0^\pi y(x)\sin(nx)\,dx && n = 1, 2, \ldots \\
\text{b. } & C_n[y] = \sqrt{\tfrac{2}{\pi}}\int_0^\pi y(x)\cos(nx)\,dx && n = 0, 1, \ldots
\end{align*}

2. Inversions $(0 \le x \le \pi)$
\begin{align*}
\text{a. } & y(x) = \sqrt{\tfrac{2}{\pi}}\sum_{n=1}^\infty S_n[y]\sin(nx) \\
\text{b. } & y(x) = \sqrt{\tfrac{2}{\pi}}\sum_{n=1}^\infty C_n[y]\cos(nx) + \tfrac{1}{\sqrt{2\pi}}\,C_0[y]
\end{align*}

3. Transforms of Derivatives

\begin{align*}
\text{a. } & S_n[y'] = -nC_n[y] && n = 1, 2, \ldots \\
\text{b. } & C_n[y'] = nS_n[y] + \sqrt{\tfrac{2}{\pi}}\big[(-1)^n y(\pi) - y(0)\big] && n = 0, 1, 2, \ldots \\
\text{c. } & S_n[y''] = -n^2 S_n[y] + n\sqrt{\tfrac{2}{\pi}}\big[y(0) - (-1)^n y(\pi)\big] && n = 1, 2, \ldots \\
\text{d. } & C_n[y''] = -n^2 C_n[y] + \sqrt{\tfrac{2}{\pi}}\big[(-1)^n y'(\pi) - y'(0)\big] && n = 0, 1, 2, \ldots
\end{align*}
(Note that for b and c the function must be known on the boundaries, and for d the derivative must be known on the boundaries.)

4. Transforms of Integrals
\begin{align*}
\text{a. } & S_n\left[\int_0^x y(\xi)\,d\xi\right] = \frac{1}{n}C_n[y] - \frac{(-1)^n}{n}C_0[y] && n = 1, 2, \ldots \\
\text{b. } & C_n\left[\int_0^x y(\xi)\,d\xi\right] = -\frac{1}{n}S_n[y] && n = 1, 2, \ldots \\
\text{c. } & C_0\left[\int_0^x y(\xi)\,d\xi\right] = \pi C_0[y] - C_0[xy]
\end{align*}

5. Convolution Properties

a. Define the convolution of $f(x)$, $g(x)$ for $-\pi \le x \le \pi$:
\[ p * q \equiv \int_{-\pi}^{\pi} p(x-\xi)\,q(\xi)\,d\xi = q * p \]

b. Transforms of Convolutions. Define extensions of $f(x)$, where $f(x)$ is defined for $0 \le x \le \pi$:
\begin{align*}
\text{Odd extension: } & f_1(-x) = -f_1(x), \quad f_1(x+2\pi) = f_1(x) \\
\text{Even extension: } & f_2(-x) = f_2(x), \quad f_2(x+2\pi) = f_2(x)
\end{align*}
\begin{align*}
1.\ & 2S_n[f]\,S_n[g] = -C_n[f_1 * g_1] \\
2.\ & 2S_n[f]\,C_n[g] = S_n[f_1 * g_2] \\
3.\ & 2C_n[f]\,C_n[g] = C_n[f_2 * g_2]
\end{align*}

6. Derivatives of Transforms
\begin{align*}
\text{a. } & \frac{d}{dn}S_n[y] = C_n[xy] \\
\text{b. } & \frac{d}{dn}C_n[y] = -S_n[xy]
\end{align*}

(Here the differentiated transforms must be in a form valid for $n$ as a continuous variable instead of only for integral $n$.)
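With $n$ treated as continuous, property 6a, $\frac{d}{dn}S_n[y] = C_n[xy]$, holds for any smooth $y$; the normalization factor cancels from both sides, so the unnormalized transform suffices in this sketch (the test function $y = e^{-x}$ and step sizes are arbitrary):

```python
import math

def trans(y, kernel, nu, steps=20000):
    """Unnormalized finite transform: integral_0^pi y(x) * kernel(nu x) dx."""
    h = math.pi / steps
    return h * sum(y((k + 0.5) * h) * kernel(nu * (k + 0.5) * h)
                   for k in range(steps))

y = lambda x: math.exp(-x)
nu, d = 2.0, 1e-5
# central difference in the (continuous) transform variable nu
deriv = (trans(y, math.sin, nu + d) - trans(y, math.sin, nu - d)) / (2 * d)
target = trans(lambda x: x * y(x), math.cos, nu)   # C_nu[x y]
assert abs(deriv - target) < 1e-6
```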

9.3.2 Infinite Transforms

It must be true that $\int_0^\infty |y(x)|\,dx$ or $\int_{-\infty}^\infty |y(x)|\,dx$ exists.

1. Definitions
\begin{align*}
\text{a. } & S_r[y] = \sqrt{\tfrac{2}{\pi}}\int_0^\infty y(x)\sin(rx)\,dx && r \ge 0 \\
\text{b. } & C_r[y] = \sqrt{\tfrac{2}{\pi}}\int_0^\infty y(x)\cos(rx)\,dx && r \ge 0 \\
\text{c. } & \mathcal{E}_r[y] = \tfrac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty y(x)\,e^{irx}\,dx && -\infty \le r \le \infty
\end{align*}

2. Inversions

\begin{align*}
\text{a. } & y(x) = \sqrt{\tfrac{2}{\pi}}\int_0^\infty S_r[y]\sin(rx)\,dr = S_x\big[S_r[y]\big] && x > 0 \\
\text{b. } & y(x) = \sqrt{\tfrac{2}{\pi}}\int_0^\infty C_r[y]\cos(rx)\,dr = C_x\big[C_r[y]\big] && x > 0 \\
\text{c. } & y(x) = \tfrac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty \mathcal{E}_r[y]\,e^{-irx}\,dr = \mathcal{E}_{-x}\big[\mathcal{E}_r[y]\big] = \mathcal{E}_x\big[\mathcal{E}_{-r}[y]\big]
\end{align*}
(The Cauchy principal value of the integral is to be taken.)

3. Transforms of Derivatives
\begin{align*}
\text{a. } & S_r[y'] = -rC_r[y] \\
\text{b. } & C_r[y'] = rS_r[y] - \sqrt{\tfrac{2}{\pi}}\,y(0) \\
\text{c. } & \mathcal{E}_r[y'] = -ir\,\mathcal{E}_r[y] \\
\text{d. } & S_r[y''] = -r^2 S_r[y] + r\sqrt{\tfrac{2}{\pi}}\,y(0) \\
\text{e. } & C_r[y''] = -r^2 C_r[y] - \sqrt{\tfrac{2}{\pi}}\,y'(0) \\
\text{f. } & \mathcal{E}_r[y''] = -r^2\,\mathcal{E}_r[y] \\
\text{g. } & \mathcal{E}_r\left[\frac{d^n y}{dx^n}\right] = (-ir)^n\,\mathcal{E}_r[y]
\end{align*}

4. Transforms of Integrals
\begin{align*}
\text{a. } & S_r\left[\int_0^x y(\xi)\,d\xi\right] = \frac{1}{r}\,C_r[y] \\
\text{b. } & C_r\left[\int_0^x y(\xi)\,d\xi\right] = -\frac{1}{r}\,S_r[y]
\end{align*}
(In a and b, $\int_0^x y(\xi)\,d\xi$ must be piecewise continuous and approach zero as $x \to \infty$.)
\[ \text{c. } \quad \mathcal{E}_r\left[\int_c^x y(\xi)\,d\xi\right] = \frac{i}{r}\,\mathcal{E}_r[y] \]
(Where $c$ is any lower limit and $\int_{-\infty}^x y(\xi)\,d\xi$ must be piecewise continuous and approach zero as $x \to \infty$.)
\begin{align*}
\text{d. } & S_r\left[\int_0^x\!\!\int_0^\lambda y(\xi)\,d\xi\,d\lambda\right] = -\frac{1}{r^2}\,S_r[y] \\
\text{e. } & C_r\left[\int_0^x\!\!\int_0^\lambda y(\xi)\,d\xi\,d\lambda\right] = -\frac{1}{r^2}\,C_r[y] \\
\text{f. } & \mathcal{E}_r\left[\int_0^x\!\!\int_0^\lambda y(\xi)\,d\xi\,d\lambda\right] = -\frac{1}{r^2}\,\mathcal{E}_r[y]
\end{align*}

5. Some Relations Between Transforms, for real $y(x)$

a. With $y(x)$ regarded as zero for $x < 0$,
\[ C_r[y] + iS_r[y] = 2\,\mathcal{E}_r[y], \qquad \text{i.e.} \qquad C_r[y] = 2\,\Re\,\mathcal{E}_r[y], \quad S_r[y] = 2\,\Im\,\mathcal{E}_r[y]. \]

b. For $y(x) = y(-x)$, with $y(x) = O(e^{-\epsilon x})$, $\epsilon > 0$,
\[ \mathcal{E}_r[y] = 2\,\mathcal{L}[y], \]

(where the Laplace transform variable is taken as $ir$.)

6. Convolution Properties

\begin{align*}
\text{a. } & 2S_r[f]\,S_r[g] = C_r\left[\int_0^\infty g(\xi)\big\{f(x+\xi) - f_1(x-\xi)\big\}\,d\xi\right] \\
\text{b. } & 2S_r[f]\,C_r[g] = S_r\left[\int_0^\infty g(\xi)\big\{f(x+\xi) + f_1(x-\xi)\big\}\,d\xi\right] \\
& \phantom{2S_r[f]\,C_r[g]} = C_r\left[\int_0^\infty f(\xi)\big\{g_2(x-\xi) - g(x+\xi)\big\}\,d\xi\right] \\
\text{c. } & 2C_r[f]\,C_r[g] = C_r\left[\int_0^\infty g(\xi)\big\{f_2(x-\xi) + f(x+\xi)\big\}\,d\xi\right]
\end{align*}
where the extensions are defined by
\[ y_1(-x) = -y(x); \qquad y_2(-x) = y(x), \qquad \forall x. \]
\[ \text{d. } \quad \mathcal{E}_r[f]\,\mathcal{E}_r[g] = \mathcal{E}_r\left[\int_{-\infty}^\infty f(\xi)\,g(x-\xi)\,d\xi\right] \]

7. Derivatives of Transforms
\begin{align*}
\text{a. } & \frac{d}{dr}S_r[y] = C_r[xy] \\
\text{b. } & \frac{d}{dr}C_r[y] = -S_r[xy] \\
\text{c. } & \frac{d}{dr}\mathcal{E}_r[y] = i\,\mathcal{E}_r[xy]
\end{align*}

9.4 Types of Problems to which Fourier Transforms May be Applied

General Discussion

It is to our great advantage to have some inkling as to just which transform to use and where to use it. We have noted that finite-range transforms are useful on functions defined over a finite range, $\mathcal{F}_c[r]$ and $\mathcal{F}_s[r]$ are useful on functions defined over semi-infinite intervals, and $\mathcal{F}_e[r]$ on functions defined over the infinite range. Nevertheless, there is still more to be said.

First, it goes almost without saying that if it can be avoided, it is undesirable to introduce an unknown quantity into an equation. Now, if an equation in $f$ defined over $0 \le x \le \pi$ contains a differential operator which one wishes to reduce, say $\frac{d^2f}{dx^2}$, and $f(\pi)$ and $f(0)$ are known, while $f'(\pi)$ and $f'(0)$ are not, then clearly $S_n$ is the best option, because in using it we introduce $f(\pi)$ and $f(0)$ and need not know the value of $f'$ at any point. We would not use $C_n$, because $f'(0)$ and $f'(\pi)$ are unknown and could not be solved for until much later in the work. On the other hand, if $f'(0)$ and $f'(\pi)$ are known, then one uses $C_n$ for the same reason. The situation is similar with respect to the infinite range transforms; use $\mathcal{F}_s[r]$ to reduce $\frac{d^2f}{dx^2}$ when $f(0)$ is known, and use $\mathcal{F}_c[r]$ when $f'(0)$ is known. No such question arises with respect to $\mathcal{F}_e[r]$.

We have noted at the start that the functions to which the Fourier transforms are applicable are usually required to be absolutely integrable. This kind of knowledge of a function is usually evident from the physical meaning of the function, before the function itself is known. The Laplace transform, on the other hand, merely requires that the function be of exponential order, i.e., $|f(x)| < Me^{\alpha x}$ for some $M, \alpha \in \mathbb{R}$.

Example

Consider the following steady-state heat conduction problem in a medium with no internal heat generation. Suppose we have a two-dimensional slab which is semi-infinite along the $y$-axis ($0 \le y \le \infty$) and finite along the $x$-axis ($0 \le x \le \pi$). The left face at $x = 0$ is insulated for $y > a$ but has a heat flux $q$ for $0 < y < a$:
\begin{align*}
& \nabla^2 T = 0 \\
& \frac{\partial T}{\partial x}(0,y) = \begin{cases} -q/k & 0 < y < a \\ 0 & y > a \end{cases} \\
& \frac{\partial T}{\partial x}(\pi,y) = 0 \\
& T(x,0) = T_0
\end{align*}
We propose to do the problem by the method of Fourier transforms, but intuitively we know $\lim_{y\to\infty} T \equiv \bar{T} \ne 0$ and that therefore the transform of $T$ does not exist. However, the function $T - \bar{T} \equiv \Theta$ is such that $\lim_{y\to\infty}\Theta = \lim_{y\to\infty} T - \bar{T} = \bar{T} - \bar{T} = 0$, and the transform may (in fact, does) exist. Let us, therefore, substitute $T = \Theta + \bar{T}$ into the above problem to obtain the following:
\begin{align*}
& \nabla^2 \Theta = 0 \\
& \frac{\partial \Theta}{\partial x}(0,y) = \begin{cases} -q/k & 0 < y < a \\ 0 & y > a \end{cases} \\
& \frac{\partial \Theta}{\partial x}(\pi,y) = 0 \\
& \Theta(x,0) + \bar{T} = T_0 \;\Rightarrow\; \Theta(x,0) = T_0 - \bar{T} \equiv \Theta_0
\end{align*}

The structure of the problem is not essentially changed, except that now $\Theta_0$ is not known, since $\bar{T}$ is not known.

We must reduce the operators $\frac{\partial^2\Theta}{\partial x^2}$ and $\frac{\partial^2\Theta}{\partial y^2}$. In $x$, $\frac{\partial\Theta}{\partial x}(0,y)$ and $\frac{\partial\Theta}{\partial x}(\pi,y)$ are known. Thus, a finite range Fourier cosine transform is a good option here. In $y$, we know that $\Theta(x,0) = \Theta_0$ and that $\lim_{y\to\infty}\Theta = 0$. Therefore, an infinite range Fourier sine transform is a good option. We shall denote the $x$-transformed functions by the superscript $fn$ and the $y$-transformed functions by the superscript $F$. Recall that
\[ \left(\frac{\partial^2\Theta}{\partial x^2}\right)^{fn} = -n^2\Theta^{fn} + (-1)^n\frac{\partial\Theta}{\partial x}(\pi,y) - \frac{\partial\Theta}{\partial x}(0,y) \]
and
\[ \left(\frac{\partial^2\Theta}{\partial y^2}\right)^{F} = -r^2\Theta^{F} + r\,\Theta(x,0) \]

It is irrelevant in which order the transformations are applied or inverted, although one order may prove nicer than another. Let us transform first with respect to x.

\begin{align*}
0 &= -n^2\Theta^{fn} + (-1)^n\frac{\partial\Theta}{\partial x}(\pi,y) - \frac{\partial\Theta}{\partial x}(0,y) + \frac{d^2\Theta^{fn}}{dy^2} \\
\frac{\partial\Theta}{\partial x}(0,y) &= \begin{cases} -q/k & 0 < y < a \\ 0 & y > a \end{cases} \\
\frac{\partial\Theta}{\partial x}(\pi,y) &= 0 \\
\Theta^{fn}(0) &= \begin{cases} \pi\Theta_0 & n = 0 \\ 0 & n = 1, 2, \ldots \end{cases}
\end{align*}

Then with respect to $y$ (Churchill, pg. 300, formula 3):

\begin{align*}
0 &= -n^2\Theta^{fnF} + (-1)^n\frac{\partial\Theta^F}{\partial x}(\pi,y) - \frac{\partial\Theta^F}{\partial x}(0,y) - r^2\Theta^{fnF} + r\,\Theta^{fn}(0) \\
\frac{\partial\Theta^F}{\partial x}(0,y) &= -\frac{q}{k}\,\frac{1-\cos(ar)}{r}
\end{align*}

(See Erdélyi, pg. 63, formula 1.)

\[ \frac{\partial\Theta^F}{\partial x}(\pi,y) = 0, \qquad \Theta^{fn}(0) = 0 \quad n = 1, 2, 3, \ldots \]

Making substitutions yields the single algebraic equation:
\begin{align*}
0 &= -(n^2+r^2)\,\Theta^{fnF} + \frac{q}{k}\,\frac{1-\cos(ar)}{r}; && n = 1, 2, \ldots \\
0 &= -r^2\Theta^{f0F} + \frac{q}{k}\,\frac{1-\cos(ar)}{r} + \pi\Theta_0 r; && n = 0
\end{align*}
Solving for $\Theta^{fnF}$:
\begin{align*}
\Theta^{f0F} &= \frac{q}{k}\,\frac{1-\cos(ar)}{r^3} + \frac{\pi\Theta_0}{r}; && n = 0 \\
\Theta^{fnF} &= \frac{q}{k}\,\frac{1-\cos(ar)}{r(r^2+n^2)}; && n = 1, 2, \ldots
\end{align*}
We propose to invert first with respect to $r$, but we would run into difficulties for $n = 0$. Let us, therefore, integrate the $x$-transformed equations directly for $n = 0$ to get $\Theta^{f0}$. We have
\begin{align*}
& \frac{d^2\Theta^{f0}}{dy^2} + \frac{q}{k} = 0 \qquad 0 < y < a \\
& \frac{d^2\Theta^{f0}}{dy^2} = 0 \qquad\qquad y > a \\
& \Theta^{f0}(0) = \pi\Theta_0, \qquad \lim_{y\to\infty}\Theta^{f0} = 0.
\end{align*}
Integrating,
\[ \Theta^{f0} = \begin{cases} -\dfrac{q}{2k}\,y^2 + C_1 y + C_2 & 0 < y < a \\[4pt] C_3 y + C_4 & y > a \end{cases} \]
Now
\[ \lim_{y\to\infty}\Theta^{f0} = 0 \;\Rightarrow\; C_3 = C_4 = 0. \]
Also,

\[ \Theta^{f0}(0) = \pi\Theta_0 = C_2. \]

It is necessary to cook up another condition to get $C_1$. In a problem of this type, we must require $\frac{\partial\Theta}{\partial y}$ and $\Theta$ to be continuous; therefore $\frac{d\Theta^{f0}}{dy}$ and $\Theta^{f0}$ are continuous. Applying these conditions at $y = a$,
\begin{align*}
\frac{d\Theta^{f0}}{dy}(a^-) = \frac{d\Theta^{f0}}{dy}(a^+): \quad & 0 = -\frac{qa}{k} + C_1; \qquad C_1 = \frac{qa}{k} \\
\Theta^{f0}(a^-) = \Theta^{f0}(a^+): \quad & 0 = -\frac{qa^2}{2k} + \frac{qa^2}{k} + \pi\Theta_0.
\end{align*}
Somewhat surprisingly, applying this last condition yields
\[ \Theta_0 = -\frac{qa^2}{2\pi k}. \]

\[ \Theta^{f0} = \begin{cases} -\dfrac{q}{2k}\,(y-a)^2 & y \le a \\[4pt] 0 & y \ge a \end{cases} \]
We have $\Theta^{f0}$. Let us now invert $\Theta^{fnF}$ to get $\Theta^{fn}$.

\[ \Theta^{fnF} = \frac{q}{k}\,\frac{1-\cos(ar)}{r(n^2+r^2)} \qquad n = 1, 2, \ldots \]

The inverse of $\dfrac{1}{r(n^2+r^2)}$ is $\dfrac{1}{n^2}\left(1 - e^{-ny}\right)$ (See Erdélyi, pg. 65, formula 20).

Also, a property of the Fourier sine and cosine transforms is

\[ \mathcal{F}^{-1}\big[g(r)\cos(ar)\big] = \frac{1}{2}\big[G(y+a) + G(y-a)\big]. \]
It is also true that for the sine transform, if

\[ \mathcal{F}^{-1}[g] = G(y), \]
then
\[ G(-y) = -G(y). \]
Therefore,

\begin{align*}
\mathcal{F}^{-1}\left[\frac{1-\cos(ar)}{r(r^2+n^2)}\right]
&= \frac{1}{n^2}\left(1-e^{-ny}\right) - \frac{1}{2n^2}\left[\left(1-e^{-n(y+a)}\right) + \left(1-e^{-n(y-a)}\right)\right] && y > a \\
&= \frac{e^{-ny}}{n^2}\big(\cosh(na) - 1\big) && y > a \\[6pt]
\mathcal{F}^{-1}\left[\frac{1-\cos(ar)}{r(r^2+n^2)}\right]
&= \frac{1}{n^2}\left(1-e^{-ny}\right) - \frac{1}{2n^2}\left[\left(1-e^{-n(y+a)}\right) - \left(1-e^{-n(a-y)}\right)\right] && y < a \\
&= \frac{1}{n^2}\left(1-e^{-ny}-e^{-na}\sinh(ny)\right) && y < a
\end{align*}
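The tabulated pair used here, $\frac{1}{r(n^2+r^2)} \leftrightarrow \frac{1}{n^2}(1-e^{-ny})$ under the $(2/\pi)$-normalized sine inversion, can itself be spot-checked by direct numerical integration. The truncation point and step count below are arbitrary illustrative choices:

```python
import math

def sine_invert(g, y, rmax=200.0, steps=200000):
    """(2/pi) * integral_0^rmax g(r) sin(ry) dr, midpoint rule (truncated)."""
    h = rmax / steps
    return (2.0 / math.pi) * h * sum(
        g((k + 0.5) * h) * math.sin((k + 0.5) * h * y) for k in range(steps))

n, y = 2, 1.3
val = sine_invert(lambda r: 1.0 / (r * (n * n + r * r)), y)
exact = (1.0 - math.exp(-n * y)) / n**2
assert abs(val - exact) < 1e-3
```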

Now, lacking a known inversion to invert with respect to $n$, we use the series form
\[ \Theta = \frac{1}{\pi}\Theta^{f0} + \frac{2}{\pi}\sum_{n=1}^\infty \Theta^{fn}\cos(nx) \]
\[ \Theta = \begin{cases} -\dfrac{q}{2\pi k}(y-a)^2 + \dfrac{2q}{\pi k}\displaystyle\sum_{n=1}^\infty \dfrac{1-e^{-ny}-e^{-na}\sinh(ny)}{n^2}\cos(nx) & y < a \\[12pt] \dfrac{2q}{\pi k}\displaystyle\sum_{n=1}^\infty \dfrac{e^{-ny}}{n^2}\big(\cosh(na)-1\big)\cos(nx) & y > a \end{cases} \]
and recall that
\begin{align*}
T &= \Theta + \bar{T} \\
T_0 &= \Theta_0 + \bar{T} = -\frac{qa^2}{2\pi k} + \bar{T} \\
\bar{T} &= T_0 + \frac{qa^2}{2\pi k} \\
T &= \Theta + T_0 + \frac{qa^2}{2\pi k}
\end{align*}

9.5 Inversion of Fourier Transforms

9.5.1 Finite Range

Inversions of the finite range transforms are easily seen to be a consequence of the completeness and orthogonality of the cosines in the case of the cosine transform, and of the sines in the case of the sine transform, on the interval of integration. Indeed, one sees that the integrals defining $C_n$ and $S_n$ are just the Fourier coefficients for expansions of $f$ in a cosine series or a sine series. Thus, their inversions are given by
\begin{align*}
f(x) &= C_n^{-1}[C_n] = \frac{1}{\pi}C_0[f] + \frac{2}{\pi}\sum_{n=1}^\infty C_n[f]\cos(nx) && (0 \le x \le \pi) \\
f(x) &= S_n^{-1}[S_n] = \frac{2}{\pi}\sum_{n=1}^\infty S_n[f]\sin(nx) && (0 \le x \le \pi).
\end{align*}
Two facts, though obvious, should be noted with regard to these transforms. If the function is defined over some range other than $0 \le x \le \pi$, say $0 \le x \le L$, there arises no difficulty, since one can define a new variable $\xi = \frac{\pi x}{L}$. This way, $x = L \Rightarrow \xi = \pi$, and $f(x) = g(\xi) = f\!\left(\frac{L}{\pi}\xi\right)$. If the function is extended out of the range $0 \le x \le \pi$, the inversion of the cosine transform is the even extension, i.e.,
\[ C_n^{-1}\big[C_n\big](x) = C_n^{-1}\big[C_n\big](-x), \]
while the inversion of the sine transform is the odd extension, i.e.,
\[ S_n^{-1}\big[S_n\big](x) = -S_n^{-1}\big[S_n\big](-x). \]

9.5.2 Infinite Range

Inversions of the infinite range transforms follow from the Fourier integral theorem in various forms. The inversion of the cosine transform, for example, arises from the formula

\[ f(x) = \frac{2}{\pi}\int_0^\infty \cos(rx)\int_0^\infty f(y)\cos(ry)\,dy\,dr = \sqrt{\tfrac{2}{\pi}}\int_0^\infty \cos(rx)\,\sqrt{\tfrac{2}{\pi}}\int_0^\infty f(y)\cos(ry)\,dy\,dr. \]
The interior integral is just what we above defined as $C_r[f]$. Therefore,
\[ f(x) = C_r^{-1}[C_r] = \sqrt{\tfrac{2}{\pi}}\int_0^\infty C_r[f]\cos(rx)\,dr. \]
The other inversions follow immediately in the same way from other forms of the Fourier integral theorem. It is to our great advantage to note that, with the normalization factor $\sqrt{2/\pi}$ or $\frac{1}{\sqrt{2\pi}}$ inserted as above, the inversion integral is just the transform of the transform, i.e.,
\[ f(x) = C_r^{-1}[C_r] = C_x\big[C_r[f]\big]. \]
Similar formulae for the sine and exponential transforms are
\begin{align*}
f(x) &= S_r^{-1}[S_r] = S_x\big[S_r[f]\big] \\
f(x) &= \mathcal{E}_r^{-1}[\mathcal{E}_r] = \mathcal{E}_{-x}\big[\mathcal{E}_r[f]\big].
\end{align*}
Knowing this fact doubles the utility of a table of transforms, since it can be used backwards as well as forwards. That is, given a transform one wishes to invert, one may first look for it among tabulated transformed functions; not finding an inversion there, one may equally well look for his transform among the tabulated functions. If it is found there, the inversion of the given transform is the transform of the tabulated function. There are tables of both the finite range transforms and the infinite range transforms, useful for the purpose of inverting these transforms. However, this is just one way of obtaining an inversion (the easiest, of course). In the case of the finite range transforms, where the inversion is a Fourier series, if one does not know how to sum it or get the inversion in closed form, then the truncated series is a useful approximation to the inverse. In the case of the infinite range transforms, the inversion integral is subject to evaluation by the methods of complex integration and residue theory.
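The "transform of the transform" property can be demonstrated concretely: for $f(x) = e^{-x}$ the infinite range cosine transform has the closed form $\sqrt{2/\pi}/(1+r^2)$, and transforming that again recovers $e^{-x}$. The truncated quadrature below is only an illustrative sketch:

```python
import math

SQ = math.sqrt(2.0 / math.pi)

def cosine_transform(g, r, xmax=60.0, steps=60000):
    """sqrt(2/pi) * integral_0^xmax g(x) cos(rx) dx, midpoint rule (truncated)."""
    h = xmax / steps
    return SQ * h * sum(g((k + 0.5) * h) * math.cos(r * (k + 0.5) * h)
                        for k in range(steps))

# C_r[e^{-x}] = sqrt(2/pi)/(1+r^2); taking the cosine transform of this
# transform should give back e^{-x}:
g = lambda r: SQ / (1.0 + r * r)
for x in (0.5, 1.0, 2.0):
    assert abs(cosine_transform(g, x) - math.exp(-x)) < 1e-2
```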

9.5.3 Inversion of Fourier Exponential Transforms

We have seen that if the transform of a function $F(x)$ is
\[ f(r) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty F(x)\,e^{irx}\,dx, \]
then (under the proper conditions on $F$) the function can be recovered from its transform through the inversion formula:
\[ F(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty f(r)\,e^{-irx}\,dr. \]
Note that in the inversion formula, $r$ is a real variable. Let us change the variable $r$ to a new (complex) variable, $ir = s$, and let $\phi(s) = f(r)$. Then
\[ \phi(s) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty F(x)\,e^{sx}\,dx, \]
and, since $i\,dr = ds$, the inversion is
\[ F(x) = -\frac{i}{\sqrt{2\pi}}\int_c \phi(s)\,e^{-sx}\,ds, \]
where $c$ is the path of integration in the complex $s$-plane (along the $iy$ axis at $x = 0$) and the Cauchy principal value is to be taken, such that
\[ F(x) = -\frac{i}{\sqrt{2\pi}}\lim_{\beta\to\infty}\int_{-i\beta}^{i\beta} \phi(s)\,e^{-sx}\,ds. \]
Suppose $\phi(s)$ is analytic in the left half plane $\Re(s) < 0$, except at a finite number of isolated singular points $s_n$. Let us close the path $c_\beta$, $-\beta \le \Im(s) \le \beta$, with a semicircle in the left half-plane, choosing $\beta$ so large as to include all finite singular points in the plane $\Re(s) < 0$. Therefore, $c_\beta$ is a vertical line along the imaginary axis, extending from $-i\beta$ to $i\beta$, and $c_\beta'$ is a semicircle with radius $\beta$ which intersects the real axis at $x = -\beta$ and the imaginary axis at $iy = i\beta$ and $iy = -i\beta$. Every singular point is contained within this half circle. By Cauchy’s residue theorem, we then have that

\[ \int_{c_\beta} e^{-sx}\phi(s)\,ds + \int_{c_\beta'} e^{-sx}\phi(s)\,ds = 2\pi i\sum_{j=1}^k \rho_j \]
where $\rho_j$ denotes the residue of $e^{-sx}\phi(s)$ at the singular point $s_j$, and we have assumed that there are $k$ such singular points.

Since we have hypothesized that $\beta$ be so large that $c_\beta'$ includes all finite singular points in the left half plane, in the limit as $\beta\to\infty$ the right side remains constant, and we have the following:
\[ \lim_{\beta\to\infty}\int_{c_\beta} e^{-sx}\phi(s)\,ds + \lim_{\beta\to\infty}\int_{c_\beta'} e^{-sx}\phi(s)\,ds = 2\pi i\sum_{j=1}^k \rho_j \]

or
\[ \lim_{\beta\to\infty}\int_{-i\beta}^{i\beta} e^{-sx}\phi(s)\,ds = 2\pi i\sum_{j=1}^k \rho_j - \lim_{\beta\to\infty}\int_{c_\beta'} e^{-sx}\phi(s)\,ds. \]
Thus,
\begin{align*}
F(x) &= -\frac{i}{\sqrt{2\pi}}\lim_{\beta\to\infty}\int_{-i\beta}^{i\beta} e^{-sx}\phi(s)\,ds \\
&= \sqrt{2\pi}\sum_{j=1}^k \rho_j + \frac{i}{\sqrt{2\pi}}\lim_{\beta\to\infty}\int_{c_\beta'} e^{-sx}\phi(s)\,ds.
\end{align*}
Many times, the limit on the right hand side is zero, or is easy to evaluate, so that the above formula is a useful tool for inverting the transform. Reasoning in similar fashion, but completing the path with a semicircle in the right half plane $\Re(s) > 0$, we obtain a similar formula:
\[ F(x) = -\sqrt{2\pi}\sum_{k=1}^{k'} \rho_k' - \frac{i}{\sqrt{2\pi}}\lim_{\beta\to\infty}\int_{c_\beta''} e^{-sx}\phi(s)\,ds \]
where the curve $c_\beta''$ is another semicircular path similar to $c_\beta'$ except that it extends out into the positive real axis (rather than the negative real axis). Hence, this curve intersects the real axis at $x = \beta$. Another way to visualize the curve is to realize that combining semicircle $c_\beta'$ with semicircle $c_\beta''$ results in a full circle with radius $\beta$ centered at the origin of the complex plane. For $x < 0$, one may find that the limit

\[ \lim_{\beta\to\infty}\int_{c_\beta'} e^{-sx}\phi(s)\,ds = 0, \]
so that the first formula becomes
\[ F(x) = \sqrt{2\pi}\sum_{j=1}^k \rho_j. \]

(Recall that the $\rho_j$ are residues at singular points in the left half-plane $\Re(s) < 0$.) Again, one may find that for $x > 0$ the limit
\[ \lim_{\beta\to\infty}\int_{c_\beta''} e^{-sx}\phi(s)\,ds = 0, \]
so that the second formula becomes
\[ F(x) = -\sqrt{2\pi}\sum_{j=1}^{k'} \rho_j'. \]
At $x = 0$,

\[ F(0) = \frac{1}{2}\big[F(0^+) + F(0^-)\big] \]
where
\[ F(0^+) = \lim_{\epsilon\to 0} F(\epsilon), \qquad F(0^-) = \lim_{\epsilon\to 0} F(-\epsilon). \]

9.6 Table of Transforms

In the following tables the finite transforms are taken over $0 \le x \le a$: $S_n(y) = \int_0^a y(x)\sin\frac{n\pi x}{a}\,dx$ and $C_n(y) = \int_0^a y(x)\cos\frac{n\pi x}{a}\,dx$.
\[
\begin{array}{ll}
y(x) & S_n(y) \\[6pt]
1 & \dfrac{a}{n\pi}\big[1-(-1)^n\big] \\[8pt]
x & \dfrac{a^2}{n\pi}(-1)^{n+1} \\[8pt]
x^2 & \dfrac{a^3}{n\pi}(-1)^{n+1} - 2\left(\dfrac{a}{n\pi}\right)^3\big[1-(-1)^n\big] \\[8pt]
e^{cx} & \dfrac{n\pi a}{n^2\pi^2+c^2a^2}\big[1-(-1)^n e^{ca}\big] \\[8pt]
\sin(\omega x) & \dfrac{a}{2} \quad (n = \tfrac{\omega a}{\pi}); \qquad \dfrac{n\pi a}{n^2\pi^2-\omega^2a^2}(-1)^{n+1}\sin(\omega a) \quad (n \ne \tfrac{\omega a}{\pi}) \\[8pt]
\cos(\omega x) & 0 \quad (n = \tfrac{\omega a}{\pi}); \qquad \dfrac{n\pi a}{n^2\pi^2-\omega^2a^2}\big[1-(-1)^n\cos(\omega a)\big] \quad (n \ne \tfrac{\omega a}{\pi}) \\[8pt]
\sinh(cx) & \dfrac{n\pi a}{n^2\pi^2+c^2a^2}(-1)^{n+1}\sinh(ca) \\[8pt]
\cosh(cx) & \dfrac{n\pi a}{n^2\pi^2+c^2a^2}\big[1-(-1)^n\cosh(ca)\big] \\[8pt]
a-x & \dfrac{a^2}{n\pi} \\[8pt]
x(a-x) & 2\left(\dfrac{a}{n\pi}\right)^3\big[1-(-1)^n\big] \\[8pt]
\dfrac{\sin(\omega(a-x))}{\sin(\omega a)} & \dfrac{n\pi a}{n^2\pi^2-\omega^2a^2} \quad (n \ne \tfrac{\omega a}{\pi}) \\[8pt]
\dfrac{\sinh(c(a-x))}{\sinh(ca)} & \dfrac{n\pi a}{n^2\pi^2+c^2a^2}
\end{array}
\]

\[
\begin{array}{ll}
y(x) & C_n(y) \\[6pt]
1 & a \quad (n=0); \qquad 0 \quad (n = 1, 2, \ldots) \\[8pt]
x & \dfrac{a^2}{2} \quad (n=0); \qquad \dfrac{a^2}{n^2\pi^2}\big[(-1)^n-1\big] \quad (n = 1, 2, \ldots) \\[8pt]
x^2 & \dfrac{a^3}{3} \quad (n=0); \qquad \dfrac{2a^3}{n^2\pi^2}(-1)^n \quad (n = 1, 2, \ldots) \\[8pt]
e^{cx} & \dfrac{a^2c}{n^2\pi^2+c^2a^2}\big[(-1)^n e^{ca}-1\big] \\[8pt]
\sin(\omega x) & 0 \quad (n = \tfrac{\omega a}{\pi}); \qquad \dfrac{a^2\omega}{n^2\pi^2-\omega^2a^2}\big[(-1)^n\cos(\omega a)-1\big] \quad (n \ne \tfrac{\omega a}{\pi}) \\[8pt]
\cos(\omega x) & \dfrac{a}{2} \quad (n = \tfrac{\omega a}{\pi}); \qquad \dfrac{a^2\omega}{n^2\pi^2-\omega^2a^2}(-1)^{n+1}\sin(\omega a) \quad (n \ne \tfrac{\omega a}{\pi}) \\[8pt]
\sinh(cx) & \dfrac{a^2c}{n^2\pi^2+c^2a^2}\big[(-1)^n\cosh(ca)-1\big] \\[8pt]
\cosh(cx) & \dfrac{a^2c}{n^2\pi^2+c^2a^2}(-1)^n\sinh(ca) \\[8pt]
(x-a)^2 & \dfrac{a^3}{3} \quad (n=0); \qquad \dfrac{2a^3}{n^2\pi^2} \quad (n = 1, 2, \ldots) \\[8pt]
\dfrac{\cos(\omega(a-x))}{\sin(\omega a)} & \dfrac{a^2\omega}{\omega^2a^2-n^2\pi^2} \quad (n \ne \tfrac{\omega a}{\pi}) \\[8pt]
\dfrac{\cosh(c(a-x))}{\sinh(ca)} & \dfrac{a^2c}{n^2\pi^2+c^2a^2}
\end{array}
\]
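Any entry of these tables can be spot-checked by direct quadrature. The sketch below checks the $S_n$ entry for $e^{cx}$, with the convention $S_n(y) = \int_0^a y(x)\sin\frac{n\pi x}{a}\,dx$ implied by the table entries; the values of $a$ and $c$ are arbitrary:

```python
import math

def Sn(y, n, a, steps=20000):
    """Finite sine transform: integral_0^a y(x) sin(n pi x / a) dx."""
    h = a / steps
    return h * sum(y((k + 0.5) * h) * math.sin(n * math.pi * (k + 0.5) * h / a)
                   for k in range(steps))

a, c = 2.0, 0.7
for n in range(1, 5):
    table = n * math.pi * a * (1 - (-1) ** n * math.exp(c * a)) \
            / (n**2 * math.pi**2 + c**2 * a**2)
    assert abs(Sn(lambda x: math.exp(c * x), n, a) - table) < 1e-5
```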

For a few additional transforms of this type, see Churchill or Sneddon (References 1 and 2). For transforms of the form $\int_{-\infty}^\infty y(x)e^{inx}\,dx$, $\int_0^\infty y(x)\sin(nx)\,dx$, and $\int_0^\infty y(x)\cos(nx)\,dx$, see Erdelyi (Reference 3).

9.7 References

1. Sneddon, Ian N., “Fourier Transforms”, McGraw Hill, 1951.
2. Churchill, Ruel V., “Operational Mathematics”, McGraw Hill, 1958.
3. Erdelyi, “Tables of Integral Transforms”, Volume I (Bateman Manuscript Project), McGraw Hill.

Chapter 10

Miscellaneous Identities, Definitions, Functions, and Notations

10.1 Leibnitz Rule

If
\[ f(x) = \int_{a(x)}^{b(x)} g(x,y)\,dy, \]
then
\[ \frac{d}{dx}f(x) = g[x,b(x)]\,b'(x) - g[x,a(x)]\,a'(x) + \int_{a(x)}^{b(x)} \frac{\partial g(x,y)}{\partial x}\,dy. \]

10.2 General Solution of First Order Linear Differential Equations

The objective is to solve for $y(x)$, which satisfies the following differential equation:
\[ f(x) = \frac{dy}{dx} + a(x)\,y \]
with boundary condition
\[ y(x_0) = y_0. \]

The procedure is to find an integrating factor. Define $h$ such that
\[ \frac{dh}{dx} = a(x). \]
Thus
\[ h = \int_c^x a(x')\,dx'. \]

d(ehy) dy dh dy = eh + ehy = eh + eha(x)y = ehf(x). dx dx dx dx

Then
\[ \int_{y_0e^{h_0}}^{ye^h} d(e^{h'}y) = \int_{x_0}^x e^{h'} f(x')\,dx' \]
where
\[ h_0 = \int_c^{x_0} a(x')\,dx', \qquad h' = \int_c^{x'} a(x'')\,dx''. \]
Thus,
\[ ye^h - y_0e^{h_0} = \int_{x_0}^x e^{h'} f(x')\,dx'. \]
Hence
\[ y = y_0e^{h_0-h} + \int_{x_0}^x e^{h'-h} f(x')\,dx'. \]
Recalling that
\[ h = \int_c^x a(x')\,dx', \]
\[ h' - h = \int_c^{x'} a(x'')\,dx'' - \int_c^x a(x'')\,dx'' = -\int_{x'}^x a(x'')\,dx''. \]

Finally,
\[ y = y_0e^{h_0-h} + \int_{x_0}^x f(x')\,e^{-\int_{x'}^x a(x'')\,dx''}\,dx'. \]
(Note that the constant $c$ appearing as a lower limit in the integral of the integrating factor is not a boundary condition: it disappears in the final solution.)
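The final formula can be checked numerically: implement it with simple quadrature for an arbitrary choice of $a(x)$ and $f(x)$, and confirm by finite differences that the result satisfies $f(x) = \frac{dy}{dx} + a(x)y$. All routines below are illustrative sketches, not part of the notes:

```python
import math

def ode_solution(a, f, x0, y0, x, steps=400):
    """y(x) = y0 e^{-A(x0,x)} + int_{x0}^{x} f(t) e^{-A(t,x)} dt, A(u,v) = int_u^v a."""
    def A(u, v, m=200):                      # midpoint quadrature of a
        h = (v - u) / m
        return h * sum(a(u + (k + 0.5) * h) for k in range(m))
    h = (x - x0) / steps
    integral = h * sum(f(x0 + (k + 0.5) * h) * math.exp(-A(x0 + (k + 0.5) * h, x))
                       for k in range(steps))
    return y0 * math.exp(-A(x0, x)) + integral

# Check that the formula satisfies f(x) = y' + a(x) y for a(x) = x, f(x) = 1:
a, f = (lambda t: t), (lambda t: 1.0)
x, d = 1.0, 1e-4
y = ode_solution(a, f, 0.0, 1.0, x)
yp = (ode_solution(a, f, 0.0, 1.0, x + d) - ode_solution(a, f, 0.0, 1.0, x - d)) / (2 * d)
assert abs(yp + a(x) * y - f(x)) < 1e-3
```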

10.3 Identities in Vector Analysis

Below, bold quantities are vectors, $\nabla$ is the vector differential operator $\hat{\imath}\frac{\partial}{\partial x} + \hat{\jmath}\frac{\partial}{\partial y} + \hat{k}\frac{\partial}{\partial z}$, and $\hat{\imath}, \hat{\jmath}, \hat{k}$ are unit vectors in the $x$, $y$, and $z$ directions, respectively.

1. $\mathbf{a}\cdot\mathbf{b}\times\mathbf{c} = \mathbf{b}\cdot\mathbf{c}\times\mathbf{a} = \mathbf{c}\cdot\mathbf{a}\times\mathbf{b}$
2. $\mathbf{a}\times(\mathbf{b}\times\mathbf{c}) = \mathbf{b}(\mathbf{a}\cdot\mathbf{c}) - \mathbf{c}(\mathbf{a}\cdot\mathbf{b})$
3. $(\mathbf{a}\times\mathbf{b})\cdot(\mathbf{c}\times\mathbf{d}) = \mathbf{a}\cdot\mathbf{b}\times(\mathbf{c}\times\mathbf{d}) = \mathbf{a}\cdot\big[\mathbf{c}(\mathbf{b}\cdot\mathbf{d}) - \mathbf{d}(\mathbf{b}\cdot\mathbf{c})\big] = (\mathbf{a}\cdot\mathbf{c})(\mathbf{b}\cdot\mathbf{d}) - (\mathbf{a}\cdot\mathbf{d})(\mathbf{b}\cdot\mathbf{c})$
4. $(\mathbf{a}\times\mathbf{b})\times(\mathbf{c}\times\mathbf{d}) = (\mathbf{a}\times\mathbf{b}\cdot\mathbf{d})\,\mathbf{c} - (\mathbf{a}\times\mathbf{b}\cdot\mathbf{c})\,\mathbf{d}$
5. $\nabla(\phi+\psi) = \nabla\phi + \nabla\psi$
6. $\nabla(\phi\psi) = \phi\nabla\psi + \psi\nabla\phi$
7. $\nabla(\mathbf{a}\cdot\mathbf{b}) = (\mathbf{a}\cdot\nabla)\mathbf{b} + (\mathbf{b}\cdot\nabla)\mathbf{a} + \mathbf{a}\times(\nabla\times\mathbf{b}) + \mathbf{b}\times(\nabla\times\mathbf{a})$
8. $\nabla\cdot(\mathbf{a}+\mathbf{b}) = \nabla\cdot\mathbf{a} + \nabla\cdot\mathbf{b}$
9. $\nabla\times(\mathbf{a}+\mathbf{b}) = \nabla\times\mathbf{a} + \nabla\times\mathbf{b}$
10. $\nabla\cdot(\phi\mathbf{a}) = \mathbf{a}\cdot\nabla\phi + \phi\nabla\cdot\mathbf{a}$
11. $\nabla\times(\phi\mathbf{a}) = (\nabla\phi)\times\mathbf{a} + \phi\nabla\times\mathbf{a}$
12. $\nabla\cdot(\mathbf{a}\times\mathbf{b}) = \mathbf{b}\cdot\nabla\times\mathbf{a} - \mathbf{a}\cdot\nabla\times\mathbf{b}$
13. $\nabla\times(\mathbf{a}\times\mathbf{b}) = \mathbf{a}(\nabla\cdot\mathbf{b}) - \mathbf{b}(\nabla\cdot\mathbf{a}) + (\mathbf{b}\cdot\nabla)\mathbf{a} - (\mathbf{a}\cdot\nabla)\mathbf{b}$
14. $\nabla\times\nabla\times\mathbf{a} = \nabla(\nabla\cdot\mathbf{a}) - \nabla^2\mathbf{a}$
15. $\nabla\times\nabla\phi = 0$
16. $\nabla\cdot\nabla\times\mathbf{a} = 0$

If $\mathbf{r} = \hat{\imath}x + \hat{\jmath}y + \hat{k}z$,

17. $\nabla\cdot\mathbf{r} = 3; \qquad \nabla\times\mathbf{r} = 0$
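Identities 1 and 2 are easy to confirm numerically for arbitrary vectors; the small helper functions and the numeric vectors below are illustrative only:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

a = [1.0, -2.0, 3.0]
b = [4.0, 0.5, -1.0]
c = [-2.0, 1.0, 5.0]

# identity 2 (the "BAC-CAB" rule): a x (b x c) = b(a.c) - c(a.b)
lhs = cross(a, cross(b, c))
rhs = [b[i] * dot(a, c) - c[i] * dot(a, b) for i in range(3)]
assert all(abs(l - r) < 1e-12 for l, r in zip(lhs, rhs))

# identity 1: the scalar triple product is invariant under cyclic permutation
assert abs(dot(a, cross(b, c)) - dot(c, cross(a, b))) < 1e-12
```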

If $V$ represents a volume bounded by a closed surface $S$ with unit vector $\hat{n}$ normal to $S$ and directed positive outwards, then

18. $\displaystyle\iiint_V \nabla\phi\,dV = \iint_S \phi\,\hat{n}\,dS$
19. $\displaystyle\iiint_V \nabla\cdot\mathbf{a}\,dV = \iint_S \mathbf{a}\cdot\hat{n}\,dS$ \quad (Gauss’ Theorem)
20. $\displaystyle\iiint_V f\,\nabla\cdot\mathbf{g}\,dV = \iint_S f\,\mathbf{g}\cdot\hat{n}\,dS - \iiint_V \mathbf{g}\cdot\nabla f\,dV$
21. $\displaystyle\iiint_V \mathbf{g}\cdot\nabla f\,dV = \iint_S f\,\mathbf{g}\cdot\hat{n}\,dS - \iiint_V f\,\nabla\cdot\mathbf{g}\,dV$
22. $\displaystyle\iiint_V \nabla\times\mathbf{a}\,dV = \iint_S \hat{n}\times\mathbf{a}\,dA$
23. $\displaystyle\iiint_V \big(\varphi\nabla^2\psi - \psi\nabla^2\varphi\big)\,dV = \iint_S (\varphi\nabla\psi - \psi\nabla\varphi)\cdot\hat{n}\,dS$ \quad (Green’s Theorem)

If $S$ is an unclosed surface bounded by contour $C$, and $d\mathbf{s}$ is an increment of length along $C$, then

24. $\displaystyle\iint_S (\hat{n}\times\nabla\phi)\,dA = \oint_C \phi\,d\mathbf{s}$
25. $\displaystyle\iint_S (\nabla\times\mathbf{a})\cdot\hat{n}\,dA = \oint_C \mathbf{a}\cdot d\mathbf{s}$ \quad (Stokes’ Theorem)

10.4 Coordinate Systems

Cartesian Coordinates
\begin{align*}
\nabla &= \hat{\imath}\frac{\partial}{\partial x} + \hat{\jmath}\frac{\partial}{\partial y} + \hat{k}\frac{\partial}{\partial z} \\
\nabla^2\phi &= \frac{\partial^2\phi}{\partial x^2} + \frac{\partial^2\phi}{\partial y^2} + \frac{\partial^2\phi}{\partial z^2} \\
d^3r &= dx\,dy\,dz
\end{align*}

Cylindrical Coordinates
\begin{align*}
\nabla &= \hat{\rho}_0\frac{\partial}{\partial\rho} + \hat{\theta}_0\frac{1}{\rho}\frac{\partial}{\partial\theta} + \hat{k}\frac{\partial}{\partial z} \\
\nabla^2\varphi &= \frac{1}{\rho}\frac{\partial}{\partial\rho}\left(\rho\frac{\partial\varphi}{\partial\rho}\right) + \frac{1}{\rho^2}\frac{\partial^2\varphi}{\partial\theta^2} + \frac{\partial^2\varphi}{\partial z^2} \\
d^3r &= \rho\,d\rho\,d\theta\,dz
\end{align*}

Spherical Coordinates
\begin{align*}
\nabla &= \hat{r}_0\frac{\partial}{\partial r} + \hat{\theta}_0\frac{1}{r}\frac{\partial}{\partial\theta} + \hat{\varphi}_0\frac{1}{r\sin\theta}\frac{\partial}{\partial\varphi} \\
\nabla^2\psi &= \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial\psi}{\partial r}\right) + \frac{1}{r^2\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\,\frac{\partial\psi}{\partial\theta}\right) + \frac{1}{r^2\sin^2\theta}\frac{\partial^2\psi}{\partial\varphi^2} \\
d^3r &= r^2\sin\theta\,dr\,d\theta\,d\varphi
\end{align*}
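The radial part of the spherical Laplacian can be sanity-checked by finite differences: for $\psi = r^2$ (i.e. $x^2+y^2+z^2$) it must give the Cartesian value 6, and for the harmonic function $\psi = 1/r$ it must vanish away from the origin. The step size and sample radius below are arbitrary:

```python
def radial_laplacian(psi, r, h=1e-4):
    """(1/r^2) d/dr (r^2 dpsi/dr), via central differences (radial functions only)."""
    def flux(rr):   # r^2 * dpsi/dr at rr
        return rr * rr * (psi(rr + h) - psi(rr - h)) / (2 * h)
    return (flux(r + h) - flux(r - h)) / (2 * h) / (r * r)

# psi = r^2 = x^2 + y^2 + z^2, whose Cartesian Laplacian is 6 everywhere
assert abs(radial_laplacian(lambda r: r * r, 1.7) - 6.0) < 1e-4
# psi = 1/r is harmonic away from the origin
assert abs(radial_laplacian(lambda r: 1.0 / r, 1.7)) < 1e-4
```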

10.5 Index Notation

A short note will be given on this notation, which greatly simplifies certain mathematical problems (to mention one advantage). A convention is adopted here which is sufficient for working with rectangular Cartesian coordinates. For more general coordinate systems, a more elaborate convention is needed (see references 1, 2, and 3). Consider a simple example which illustrates the utility and application of the index notation. Suppose one has a set of three equations

u = axx + ayy + azz

v = bxx + byy + bzz

w = cxx + cyy + czz

By defining
\begin{align*}
& u = u_1, \quad v = u_2, \quad w = u_3; \\
& x = x_1, \quad y = x_2, \quad z = x_3; \\
& a_x = a_{11}, \quad a_y = a_{12}, \quad a_z = a_{13}; \\
& b_x = a_{21}, \quad b_y = a_{22}, \quad b_z = a_{23}; \\
& c_x = a_{31}, \quad c_y = a_{32}, \quad c_z = a_{33};
\end{align*}
these may be written

\begin{align*}
u_1 &= a_{11}x_1 + a_{12}x_2 + a_{13}x_3 \\
u_2 &= a_{21}x_1 + a_{22}x_2 + a_{23}x_3 \\
u_3 &= a_{31}x_1 + a_{32}x_2 + a_{33}x_3
\end{align*}
or
\[ u_1 = \sum_{\alpha=1}^3 a_{1\alpha}x_\alpha, \qquad u_2 = \sum_{\alpha=1}^3 a_{2\alpha}x_\alpha, \qquad u_3 = \sum_{\alpha=1}^3 a_{3\alpha}x_\alpha \]
or
\[ u_i = \sum_{\alpha=1}^3 a_{i\alpha}x_\alpha \qquad (i = 1, 2, 3) \]
To this point we have effected considerable simplification of the original equations. By the introduction of the “summation convention”, we can go even further. Notice that there are two kinds of indices on the right: $i$, which occurs once in the product, and $\alpha$, which occurs twice. Index $i$ is called a “single-occurring” index; $\alpha$ is called a “double-occurring” index. The convention to be introduced is:

1. Doubly-occurring indices are to be given all possible values and the results summed within the equation.
2. Single-occurring indices are to be assigned one value in an equation, but as many equations are to be generated as there are available values for the index.

Thus from 1, we may drop the summation symbol, and by 2, we may drop the parentheses denoting values for $i$. With the summation convention in force, we have $u_i = a_{i\alpha}x_\alpha$, which unambiguously represents the original equations, if $\alpha$ and $i$ have the same range, which we shall assume. A nice but unnecessary finishing touch can be put on the convention which seems to make things clearer in the work: for all singly-occurring indices use lower-case Roman letters; for all doubly-occurring indices, use lower-case Greek letters. We remark that it is possible to have any number of indices on a quantity.
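The summation convention amounts to nothing more than an implied loop over each repeated index. Written out explicitly for $u_i = a_{i\alpha}x_\alpha$ (the numeric values of $a$ and $x$ below are arbitrary):

```python
# u_i = a_{i alpha} x_alpha: the repeated index alpha is summed, while the
# free index i labels one equation per value.
a = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0],
     [7.0, 8.0, 9.0]]
x = [1.0, -1.0, 2.0]

u = [sum(a[i][alpha] * x[alpha] for alpha in range(3)) for i in range(3)]
assert u == [5.0, 11.0, 17.0]
```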

10.6 Examples of Use of Index Notation

A. Some Handy Symbols

1. $\delta_{ij} = \begin{cases} 1 & i = j \\ 0 & i \ne j \end{cases}$

This is the Kronecker delta.

2. $\varepsilon_{ijk} = \begin{cases} +1 & i \ne j,\ i \ne k,\ j \ne k,\quad i,j,k \text{ in cyclic order} \\ -1 & i \ne j,\ i \ne k,\ j \ne k,\quad i,j,k \text{ in anticyclic order} \\ \phantom{+}0 & i = j,\ i = k,\text{ or } j = k \end{cases}$

This is called the Levi-Civita tensor density.

B. Some Relationships Expressed in Index Notation

1. Dot Product (a scalar): $\mathbf{a}\cdot\mathbf{b} = a_\alpha b_\alpha$

2. Cross Product (a vector; consider the $i$th component): $(\mathbf{a}\times\mathbf{b})_i = \varepsilon_{i\alpha\beta}\,a_\alpha b_\beta = -\varepsilon_{i\beta\alpha}\,b_\beta a_\alpha$

3. Triple Product (a scalar): $\mathbf{a}\cdot\mathbf{b}\times\mathbf{c} = a_\alpha\varepsilon_{\alpha\beta\gamma}b_\beta c_\gamma$

4. Gradient ($\varphi$ is a scalar): $(\nabla\varphi)_i = \dfrac{\partial\varphi}{\partial x_i}$

5. Divergence ($\mathbf{V}$ is a vector): $\nabla\cdot\mathbf{V} = \dfrac{\partial V_\alpha}{\partial x_\alpha}$

6. Curl ($\mathbf{V}$ is a vector): $(\nabla\times\mathbf{V})_i = \varepsilon_{i\alpha\beta}\dfrac{\partial V_\beta}{\partial x_\alpha}$

7. Laplacian: $\nabla^2\phi = \dfrac{\partial^2\phi}{\partial x_\alpha\,\partial x_\alpha}$

8. $\dfrac{\partial x_i}{\partial x_j} = \delta_{ij}$

C. Some Identities in $\delta_{ij}$ and $\varepsilon_{ijk}$ in 3-D Space

1. $\delta_{\alpha\alpha} = 3$
2. $\varepsilon_{\alpha\beta\gamma}\varepsilon_{\alpha\beta\gamma} = 6$
3. $\varepsilon_{i\alpha\beta}\varepsilon_{j\alpha\beta} = 2\delta_{ij}$
4. $\varepsilon_{ij\beta}\varepsilon_{k\ell\beta} = \delta_{ik}\delta_{j\ell} - \delta_{i\ell}\delta_{jk}$
5. $\varepsilon_{ijk}\varepsilon_{\ell mn} = \delta_{i\ell}(\delta_{jm}\delta_{kn} - \delta_{jn}\delta_{km}) - \delta_{im}(\delta_{j\ell}\delta_{kn} - \delta_{jn}\delta_{k\ell}) + \delta_{in}(\delta_{j\ell}\delta_{km} - \delta_{jm}\delta_{k\ell})$
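Identities 1 through 4 can be verified by brute force over all index values. The closed-form expression for $\varepsilon_{ijk}$ used below (indices running 0 to 2) is a standard device, not from the notes:

```python
def delta(i, j):
    return 1 if i == j else 0

def eps(i, j, k):
    """Levi-Civita symbol for indices in {0, 1, 2} (a standard closed form)."""
    return (i - j) * (j - k) * (k - i) // 2

R = range(3)
assert sum(delta(a, a) for a in R) == 3                                    # identity 1
assert sum(eps(a, b, c) ** 2 for a in R for b in R for c in R) == 6        # identity 2
assert all(sum(eps(i, a, b) * eps(j, a, b) for a in R for b in R) == 2 * delta(i, j)
           for i in R for j in R)                                          # identity 3
assert all(sum(eps(i, j, b) * eps(k, l, b) for b in R)
           == delta(i, k) * delta(j, l) - delta(i, l) * delta(j, k)
           for i in R for j in R for k in R for l in R)                    # identity 4
```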

10.7 The Dirac Delta Function

The Dirac Delta “function” symbolizes an integration operation and in this sense is not strictly a function (in the interpretation of Professor R. V. Churchill). It cannot be the end result of a calculation, but is meaningful only if an integration is to be carried out over its argument. We define the $\delta$-“function” as follows:
\begin{align*}
& \delta(x) \equiv 0, \quad x \ne 0 \\
& \int_{-\epsilon}^{\epsilon} \delta(x)\,dx = 1, \quad \epsilon > 0 \\
& \int_{-\epsilon}^{\epsilon} f(x)\,\delta(x)\,dx = f(0)
\end{align*}
Very often it is convenient to think of the $\delta$-“function” as a function zero everywhere except where its argument is zero, but which is so large at that point where its argument vanishes that its integral over any region of which that point is an interior point is equal to unity. Mathematicians shudder at the idea of the $\delta$-“function”, but physicists have used them for years (carefully), finding them of great utility. Schiff’s “Quantum Mechanics” lists some properties of the Dirac $\delta$:
\begin{align*}
& \delta(x) = \delta(-x) \\
& \delta'(x) = -\delta'(-x) \qquad \left(\delta'(x) = \frac{d\delta(x)}{dx}\right) \\
& x\,\delta(x) = 0 \\
& x\,\delta'(x) = -\delta(x) \\
& \delta(ax) = \frac{1}{a}\,\delta(x) \qquad a > 0 \\
& \delta(x^2-a^2) = \frac{1}{2a}\big[\delta(x-a) + \delta(x+a)\big] \qquad a > 0 \\
& \int \delta(a-x)\,\delta(x-b)\,dx = \delta(a-b) \\
& f(x)\,\delta(x-a) = f(a)\,\delta(x-a)
\end{align*}
Professor Churchill uses as a $\delta$-“function” the operation

\[ \lim_{h\to 0}\int_{-\epsilon}^{\epsilon} f(x)\,U(h,x)\,dx; \qquad \epsilon > 0 \]
where $U(h,x)$ is the function
\[ U(h,x) = \begin{cases} \dfrac{1}{h} & -\dfrac{h}{2} \le x \le \dfrac{h}{2} \\[4pt] 0 & x < -\dfrac{h}{2} \text{ or } x > \dfrac{h}{2} \end{cases} \]
Here, note that the limit is taken after integration. Other such representations are common, like that given by Schiff:

\[ \delta(x) = \lim_{g\to\infty} \frac{\sin(gx)}{\pi x} \]
which means not that the limit is to be taken exactly as shown, but rather that it is taken after integration, i.e., with this representation:
\begin{align*}
\int_{-\epsilon}^{\epsilon} \delta(x)\,dx &= \lim_{g\to\infty}\int_{-\epsilon}^{\epsilon} \frac{\sin(gx)}{\pi x}\,dx \\
\int_{-\epsilon}^{\epsilon} f(x)\,\delta(x)\,dx &= \lim_{g\to\infty}\int_{-\epsilon}^{\epsilon} f(x)\,\frac{\sin(gx)}{\pi x}\,dx
\end{align*}
Schiff gives still another representation in terms of an integral:
\begin{align*}
\delta(x) &= \frac{1}{2\pi}\int_{-\infty}^\infty e^{ix\xi}\,d\xi = \lim_{g\to\infty}\frac{1}{2\pi}\int_{-g}^{g} e^{ix\xi}\,d\xi \\
&= \lim_{g\to\infty}\frac{1}{2\pi}\,\frac{2\sin(gx)}{x} = \lim_{g\to\infty}\frac{\sin(gx)}{\pi x}
\end{align*}
thus
\[ \int_{-\epsilon}^{\epsilon} f(x)\,\delta(x)\,dx = \int_{-\epsilon}^{\epsilon}\int_{-\infty}^{\infty} \frac{1}{2\pi}\,e^{ix\xi}\,f(x)\,d\xi\,dx \]

10.8 Gamma Function

A. Definitions

\[ \Gamma(x) = \int_0^\infty e^{-t}\,t^{x-1}\,dt \]

B. Properties
\begin{align*}
\text{a. } & \Gamma(x+1) = x\,\Gamma(x) \\
\text{b. } & \Gamma(n) = (n-1)! \quad (\Gamma(1) = 1), \quad n \in \mathbb{N} \\
\text{c. } & \Gamma(x)\,\Gamma(1-x) = \frac{\pi}{\sin(\pi x)} \\
\text{d. } & \Gamma(x)\,\Gamma\!\left(x+\tfrac{1}{2}\right) = 2^{1-2x}\sqrt{\pi}\,\Gamma(2x) \\
\text{e. } & \frac{\Gamma(1-b)}{\Gamma(1-a)} = \frac{1-a}{1-b}\cdot\frac{2-a}{2-b}\cdot\frac{3-a}{3-b}\cdots \\
\text{f. } & \Gamma\!\left(\tfrac{1}{2}\right) = \sqrt{\pi}
\end{align*}
Since by a, one may reduce $\Gamma(x)$ to a product involving $\Gamma$ of some number between 1 and 2, a handy table for calculations is one like that found at the end of the Chemical Rubber Integral Tables, for $\Gamma(x)$, $1 \le x \le 2$.

References

1. H. Margenau and G.M. Murphy; “The Mathematics of Physics and Chemistry”, D. Van Nostrand Company, Inc., New York, 1956, pp. 93-98.

2. Whittaker and Watson, “Modern Analysis”, 4th Edition, Cambridge University Press (1927), Chapter VII.
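The $\Gamma$-function properties of 10.8 can be confirmed against the standard library's gamma function at a few arbitrary points; this is merely an illustrative check:

```python
import math

# c: reflection formula, Gamma(x) Gamma(1-x) = pi / sin(pi x)
for x in (0.25, 0.4, 0.7):
    assert abs(math.gamma(x) * math.gamma(1 - x) - math.pi / math.sin(math.pi * x)) < 1e-10

# d: duplication formula, Gamma(x) Gamma(x + 1/2) = 2**(1-2x) sqrt(pi) Gamma(2x)
for x in (0.5, 1.3, 2.0):
    lhs = math.gamma(x) * math.gamma(x + 0.5)
    rhs = 2 ** (1 - 2 * x) * math.sqrt(math.pi) * math.gamma(2 * x)
    assert abs(lhs - rhs) < 1e-10

# b and f: Gamma(n) = (n-1)!, Gamma(1/2) = sqrt(pi)
assert abs(math.gamma(5) - 24.0) < 1e-9
assert abs(math.gamma(0.5) - math.sqrt(math.pi)) < 1e-12
```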

10.9 Error Function

\begin{align*}
\mathrm{erf}(x) &= \frac{2}{\sqrt{\pi}}\int_0^x e^{-\lambda^2}\,d\lambda \\
\mathrm{erfc}(x) &= 1 - \mathrm{erf}(x) = \frac{2}{\sqrt{\pi}}\int_x^\infty e^{-\lambda^2}\,d\lambda
\end{align*}
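The error function can also be computed from its Maclaurin series, $\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}}\sum_{k\ge 0} \frac{(-1)^k x^{2k+1}}{k!\,(2k+1)}$, which converges for all $x$ (this series is standard analysis, not from the notes); the sketch below checks it against the standard library:

```python
import math

def erf_series(x, terms=40):
    """erf(x) = (2/sqrt(pi)) * sum_{k>=0} (-1)^k x^(2k+1) / (k! (2k+1))."""
    s, term = 0.0, x          # term holds (-1)^k x^(2k+1) / k!
    for k in range(terms):
        s += term / (2 * k + 1)
        term *= -x * x / (k + 1)
    return 2.0 / math.sqrt(math.pi) * s

for x in (0.3, 1.0, 2.0):
    assert abs(erf_series(x) - math.erf(x)) < 1e-12
    assert abs((1.0 - erf_series(x)) - math.erfc(x)) < 1e-12
```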

Chapter 11

Notes and Conversion Factors

11.1 Electrical Units

11.1.1 Electrostatic CGS System

1. The electrostatic cgs unit of charge, sometimes called the escoulomb or statcoulomb, is that “point” charge which repels an “equal” point charge at a distance of 1 cm in a vacuum, with a force of 1 dyne.

2. The electrostatic cgs unit of field strength is that field in which 1 escoulomb experiences a force of 1 dyne. It, therefore, is 1 dyne per escoulomb.

3. The electrostatic cgs unit of potential difference (or esvolt) is the difference of potential between two points such that 1 erg of work is done in carrying 1 escoulomb from one point to the other. It is 1 erg per escoulomb.

11.1.2 Electromagnetic CGS System

1. The unit magnetic pole is a "point" pole which repels an equal pole at a distance of 1 cm, in a vacuum, with a force of 1 dyne.

2. The unit magnetic field strength, the oersted, is that field in which a unit pole experiences a force of 1 dyne. It therefore is 1 dyne per unit pole.

3. The absolute unit of current (or abampere) is that current which, in a circular wire of 1-cm radius, produces a magnetic field of strength $2\pi$ dynes per unit pole at the center of the circle. One abampere approximately equals $3\times10^{10}$ esamperes, or 10 amperes.

4. The electromagnetic cgs unit of charge (or abcoulomb) is the quantity of electricity passing in 1 second through any cross section of a conductor carrying a steady current of 1 abampere. One abcoulomb equals 10 coulombs.

5. The electromagnetic cgs unit of potential difference (or abvolt) is a potential difference between two points such that 1 erg of work is done in transferring 1 abcoulomb from one point to the other. One abvolt $=10^{-8}$ volt, which is approximately $\frac{1}{3}\times10^{-10}$ esvolt.

11.1.3 Practical System

11.2 Tables of Transforms

Quantity                    Practical     Electrostatic cgs         Electromagnetic cgs
Charge                      1 coulomb     $3\times10^9$ escoulombs  $\frac{1}{10}$ abcoulomb
Current                     1 ampere      $3\times10^9$ esamperes   $\frac{1}{10}$ abampere
Potential Difference        1 volt        $\frac{1}{300}$ esvolt    $10^8$ abvolts
Electrical Field Strength   1 volt/cm     $\frac{1}{300}$ dyne/escoulomb   $10^8$ abvolts/cm
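The conversion factors in the table above can be collected in a small Python sketch (the dictionary names and layout are illustrative, not standard; $3\times10^9$ is the rounded factor used throughout this chapter):

```python
# Factors multiplying a quantity in practical units to give its value
# in the electrostatic (esu) or electromagnetic (emu) cgs system.
TO_ESU = {
    "charge": 3e9,           # 1 coulomb  = 3e9 escoulombs
    "current": 3e9,          # 1 ampere   = 3e9 esamperes
    "potential": 1.0 / 300,  # 1 volt     = 1/300 esvolt
    "field": 1.0 / 300,      # 1 volt/cm  = 1/300 dyne/escoulomb
}
TO_EMU = {
    "charge": 0.1,           # 1 coulomb  = 1/10 abcoulomb
    "current": 0.1,          # 1 ampere   = 1/10 abampere
    "potential": 1e8,        # 1 volt     = 1e8 abvolts
    "field": 1e8,            # 1 volt/cm  = 1e8 abvolts/cm
}

def convert(value, quantity, system):
    # system is "esu" or "emu".
    table = TO_ESU if system == "esu" else TO_EMU
    return value * table[quantity]

print(convert(1.0, "potential", "esu"))  # 1 volt in esvolts
print(convert(5.0, "current", "emu"))    # 5 amperes in abamperes
```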

11.2.1 Energy Relationships

• 1 esvolt $\times$ 1 escoulomb = 1 erg

• 1 abvolt $\times$ 1 abcoulomb = 1 erg

• 1 volt $\times$ 1 coulomb = $10^7$ ergs = 1 joule

• The electron volt equals the work done when an electron is moved from one point to another differing in potential by 1 volt.

• 1 electron volt = $4.80\times10^{-10}$ escoulomb $\times \frac{1}{300}$ esvolt = $1.60\times10^{-12}$ erg

11.3 Physical Constants and Conversion Factors; Dimensional Analysis

See http://physics.nist.gov/cuu/Constants/index.html.
