Physics 129b: Integral Equations
051012 F. Porter
Revision 150928 F. Porter

1 Introduction

The integral equation problem is to find the solution to:

    h(x) f(x) = g(x) + \lambda \int_a^b k(x, y) f(y) \, dy.    (1)

We are given functions h(x), g(x), k(x, y), and wish to determine f(x). The quantity \lambda is a parameter, which may be complex in general. The bivariate function k(x, y) is called the kernel of the integral equation. We shall assume that h(x) and g(x) are defined and continuous on the interval a ≤ x ≤ b, and that the kernel is defined and continuous on a ≤ x ≤ b and a ≤ y ≤ b. Here we will concentrate on the problem for real variables x and y. The functions may be complex-valued, although we will sometimes simplify the discussion by considering real functions. However, many of the results can be generalized in fairly obvious ways, such as relaxation to piecewise continuous functions, and generalization to multiple dimensions.

There are many resources for further reading on this subject. Some of the popular ones among physicists include the "classic" texts by Mathews and Walker, Courant and Hilbert, Whittaker and Watson, and Margenau and Murphy, as well as the newer texts by Arfken, and by Riley, Hobson, and Bence.

2 Integral Transforms

If h(x) = 0, we can take \lambda = -1 without loss of generality and obtain the integral equation:

    g(x) = \int_a^b k(x, y) f(y) \, dy.    (2)

This is called a Fredholm equation of the first kind, or an integral transform. Particularly important examples of integral transforms include the Fourier transform and the Laplace transform, which we now discuss.

2.1 Fourier Transforms

A special case of a Fredholm equation of the first kind is

    a = -\infty,    (3)
    b = +\infty,    (4)
    k(x, y) = \frac{1}{\sqrt{2\pi}} e^{-ixy}.    (5)

This is known as the Fourier transform:

    g(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-ixy} f(y) \, dy.    (6)

Note that the kernel is complex in this case.
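As a concrete check of Eq. (6) (a numerical sketch, not part of the original notes; the grid and function choice are illustrative assumptions), recall the standard-table fact that with this symmetric convention the Gaussian f(y) = e^{-y^2/2} is its own Fourier transform, g(x) = e^{-x^2/2}. A simple Riemann sum on a wide grid reproduces this to high accuracy, since f falls off rapidly:

```python
import numpy as np

# Numerical sketch of Eq. (6): approximate
#   g(x) = (1/sqrt(2 pi)) * integral of e^{-i x y} f(y) dy
# by a Riemann sum.  For f(y) = exp(-y^2/2), the exact answer
# is g(x) = exp(-x^2/2) (the Gaussian is its own transform).
dy = 0.01
y = np.arange(-20.0, 20.0, dy)          # wide grid; f is negligible at the ends
f = np.exp(-y**2 / 2)

x = np.linspace(-3.0, 3.0, 13)
g = np.array([np.sum(np.exp(-1j * xv * y) * f) * dy
              for xv in x]) / np.sqrt(2 * np.pi)

# compare with the known transform
print(np.max(np.abs(g - np.exp(-x**2 / 2))))   # very small
```

The sum converges rapidly here precisely because the Gaussian is one of the "well-behaved" functions discussed below: the oscillatory kernel is tamed by the fast decay of f.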
The solution to this equation is given by:

    f(y) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{ixy} g(x) \, dx.    (7)

We'll forego rigor here and give the "physicist's" demonstration of this, substituting Eq. (7) into Eq. (6):

    g(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-ixy} \, dy \int_{-\infty}^{\infty} e^{ix'y} g(x') \, dx'    (8)
         = \frac{1}{2\pi} \int_{-\infty}^{\infty} g(x') \, dx' \int_{-\infty}^{\infty} e^{i(x'-x)y} \, dy    (9)
         = \int_{-\infty}^{\infty} g(x') \delta(x - x') \, dx'    (10)
         = g(x).    (11)

Here, we have used the fact that the Dirac "delta-function" may be written

    \delta(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{ixy} \, dy.    (12)

The reader is encouraged to demonstrate this, if s/he has not done so before.

It is instructive to notice that the Fourier transform may be regarded as a limit of the Fourier series. Let f(x) be expanded in a Fourier series in a box of size [-L/2, L/2]:

    f(x) = \sum_{n=-\infty}^{\infty} a_n e^{2\pi i n x / L}.    (13)

We have chosen periodic boundary conditions here: f(L/2) = f(-L/2). The expansion coefficients a_n may be determined for any given f(x) using the orthogonality relations:

    \frac{1}{L} \int_{-L/2}^{L/2} e^{2\pi i n x / L} e^{-2\pi i m x / L} \, dx = \delta_{mn}.    (14)

Hence,

    a_n = \frac{1}{L} \int_{-L/2}^{L/2} f(x) e^{-2\pi i n x / L} \, dx.    (15)

Now consider taking the limit as L \to \infty. In this limit, the summation goes over to a continuous integral. Let y = 2\pi n / L and g(y) = L a_n / \sqrt{2\pi}. Then, using dn = (L / 2\pi) \, dy,

    f(x) = \lim_{L \to \infty} \sum_{n=-\infty}^{\infty} a_n e^{2\pi i n x / L}    (16)
         = \lim_{L \to \infty} \sum_{n=-\infty}^{\infty} \frac{\sqrt{2\pi}}{L} g(y) e^{ixy}    (17)
         = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{ixy} g(y) \, dy.    (18)

Furthermore:

    g(y) = \frac{L a_n}{\sqrt{2\pi}} = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(x) e^{-ixy} \, dx.    (19)

We thus verify our earlier statements, including the \delta-function equivalence, assuming our limit procedure is acceptable.

Suppose now that f(y) is an even function, f(-y) = f(y). Then,

    g(x) = \frac{1}{\sqrt{2\pi}} \left[ \int_{-\infty}^{0} e^{-ixy} f(y) \, dy + \int_{0}^{\infty} e^{-ixy} f(y) \, dy \right]    (20)
         = \frac{1}{\sqrt{2\pi}} \int_{0}^{\infty} \left[ e^{ixy} + e^{-ixy} \right] f(y) \, dy    (21)
         = \sqrt{\frac{2}{\pi}} \int_{0}^{\infty} f(y) \cos xy \, dy.    (22)

This is known as the Fourier cosine transform. It may be observed that the transform g(x) will also be an even function, and the solution for f(y) is:

    f(y) = \sqrt{\frac{2}{\pi}} \int_{0}^{\infty} g(x) \cos xy \, dx.    (23)

Similarly, if f(y) is an odd function, we have the Fourier sine transform:

    g(x) = \sqrt{\frac{2}{\pi}} \int_{0}^{\infty} f(y) \sin xy \, dy,    (24)

where a factor of -i has been absorbed.
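The cosine-transform pair can be verified numerically (a sketch with an assumed test function, not from the original notes): a standard table result gives that f(y) = e^{-y} has Fourier cosine transform g(x) = \sqrt{2/\pi} / (1 + x^2), since \int_0^\infty e^{-y} \cos xy \, dy = 1/(1 + x^2). A midpoint-rule evaluation of Eq. (22) reproduces this:

```python
import numpy as np

# Midpoint-rule check of the Fourier cosine transform, Eq. (22),
# for f(y) = exp(-y).  Exact: g(x) = sqrt(2/pi) / (1 + x^2).
dy = 1e-3
y = (np.arange(40000) + 0.5) * dy      # midpoint grid on [0, 40]
f = np.exp(-y)                         # truncation error ~ e^{-40}, negligible

x = np.array([0.0, 0.5, 1.0, 2.0])
g = np.array([np.sqrt(2 / np.pi) * np.sum(f * np.cos(xv * y)) * dy
              for xv in x])

print(np.max(np.abs(g - np.sqrt(2 / np.pi) / (1 + x**2))))   # small
```

Applying the inverse pair, Eq. (23), to this g(x) would recover e^{-y} in the same way.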
The solution for f(y) is

    f(y) = \sqrt{\frac{2}{\pi}} \int_{0}^{\infty} g(x) \sin xy \, dx.    (25)

Let us briefly make some observations concerning an approach to a more rigorous discussion. Later we shall see that if the kernel k(x, y) satisfies conditions such as square-integrability on [a, b], then convenient behavior is achieved for the solutions of the integral equation. However, in the present case, we not only have |a|, |b| \to \infty, but the kernel e^{ixy} nowhere approaches zero. Thus, great care is required to ensure valid results. We may deal with this difficult situation by starting with a set of functions which are themselves sufficiently well-behaved (e.g., approach zero rapidly as |x| \to \infty) that the behavior of the kernel is mitigated. For example, in quantum mechanics we may construct our Hilbert space of acceptable wave functions on R^3 by starting with a set S of functions f(x) where:

1. f(x) \in C^\infty, that is, f(x) is an infinitely differentiable complex-valued function on R^3.

2. \lim_{|x| \to \infty} |x|^n d(x) = 0 for all n, where d(x) is any partial derivative of f. That is, f and its derivatives fall off faster than any power of |x|.

We could approach the proof of the Fourier inverse theorem with more rigor than our limit of a series as follows: First, consider that subset of S consisting of Gaussian functions. Argue that any function in S may be approximated arbitrarily closely by a series of Gaussians. Then note that the S functions form a pre-Hilbert space (also known as a Euclidean space). Add the completion to get a Hilbert space, and show that the theorem remains valid.

The Fourier transform appears in many physical situations via its connection with waves, for example:

    \Re \, e^{ixy} = \cos xy.    (26)

In electronics we use the Fourier transform to translate "time domain" problems into "frequency domain" problems, with xy \to \omega t.
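The discrete analogue of this time-to-frequency translation is the FFT. As an illustration (an assumed example with arbitrary parameters, not from the original notes), a pure 5 Hz sinusoid sampled for one second appears in the frequency domain as a single spike at 5 Hz:

```python
import numpy as np

# Time domain -> frequency domain with the discrete Fourier transform.
fs = 256                            # sampling rate, samples per second
t = np.arange(fs) / fs              # one second of sample times
signal = np.sin(2 * np.pi * 5 * t)  # a 5 Hz sine wave

spectrum = np.abs(np.fft.rfft(signal))          # magnitude spectrum
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)  # frequency axis in Hz

print(freqs[np.argmax(spectrum)])   # peak sits at 5.0 Hz
```

Because the signal completes an integer number of cycles in the sampling window, all of its power lands in a single frequency bin, which is the discrete counterpart of the \delta-function in Eq. (12).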
An LCR circuit is just a complex impedance for a given frequency, hence the integral-differential time-domain problem is translated into an algebraic problem in the frequency domain. In quantum mechanics the position-space wave functions are related to momentum-space wave functions via the Fourier transform.

2.1.1 Example: RC circuit

Suppose we wish to determine the "output" voltage V_o(t) in the simple circuit of Fig. 1. The time domain problem requires solving the equation:

    V_o(t) = \frac{1}{R_1 C} \int_{-\infty}^{t} V_i(t') \, dt' - \frac{1}{C} \left( \frac{1}{R_1} + \frac{1}{R_2} \right) \int_{-\infty}^{t} V_o(t') \, dt'.    (27)

This is an integral equation, which we will encounter in Section 5.2 as a "Volterra's equation of the second kind".

[Figure 1: A simple RC circuit problem.]

If V_i(t) is a sinusoidal waveform of a fixed frequency \omega, the circuit elements may be replaced by complex impedances:

    R_1 \to Z_1 = R_1    (28)
    R_2 \to Z_2 = R_2    (29)
    C \to Z_C = \frac{1}{i \omega C}.    (30)

Then it is a simple matter to solve for V_o(t):

    V_o(t) = V_i(t) \frac{1}{1 + \frac{R_1}{R_2} (1 + i \omega R_2 C)},    (31)

if V_i(t) = \sin(\omega t + \phi), and where it is understood that the real part is to be taken.

Students usually learn how to obtain the result in Eqn. 31 long before they know about the Fourier transform. However, it is really the result in the frequency domain according to the Fourier transform. That is:

    \hat{V}_o(\omega) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} V_o(t) e^{-i \omega t} \, dt    (32)
                      = \hat{V}_i(\omega) \frac{1}{1 + \frac{R_1}{R_2} (1 + i \omega R_2 C)}.    (33)

We are here using the "hat" notation to indicate the integral transform of the unhatted function. The answer to the problem for general (not necessarily sinusoidal) input V_i(t) is then:

    V_o(t) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \hat{V}_o(\omega) e^{i \omega t} \, d\omega    (34)
           = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \hat{V}_i(\omega) \frac{e^{i \omega t}}{1 + \frac{R_1}{R_2} (1 + i \omega R_2 C)} \, d\omega.    (35)

2.2 Laplace Transforms

The Laplace transform is an integral transform of the form:

    F(s) = \int_{0}^{\infty} f(x) e^{-sx} \, dx.    (36)

The "solution" for f(x) is:

    f(x) = \frac{1}{2\pi i} \int_{c - i\infty}^{c + i\infty} F(s) e^{sx} \, ds,    (37)

where x > 0. This transform can be useful for some functions where the Fourier transform does not exist.
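A simple instance (a numerical sketch of my own, not from the original notes): the constant function f(x) = 1 has no Fourier transform in the ordinary sense, since the integral fails to converge, but its Laplace transform exists for \Re s > 0 and, by Eq. (36), equals 1/s. A midpoint-rule evaluation confirms this:

```python
import numpy as np

# Midpoint-rule check of Eq. (36) for f(x) = 1:
#   F(s) = integral_0^inf e^{-sx} dx = 1/s   for Re(s) > 0.
dx = 1e-3
x = (np.arange(40000) + 0.5) * dx   # midpoint grid on [0, 40]

svals = np.array([0.5, 1.0, 2.0])
F = np.array([np.sum(np.exp(-s * x)) * dx for s in svals])

print(np.max(np.abs(F - 1.0 / svals)))   # small
```

The truncation of the upper limit at x = 40 is harmless because e^{-sx} has decayed to at most e^{-20} there for the s values used.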
Problems at x \to +\infty are removed by multiplying by e^{-cx}, where c is a positive real number. Then the problem at -\infty is repaired by multiplying by the unit step function \theta(x):

    \theta(x) \equiv \begin{cases} 1 & \text{if } x > 0, \\ 1/2 & \text{if } x = 0, \\ 0 & \text{if } x < 0. \end{cases}    (38)

Thus, we have

    g(y) = \int_{-\infty}^{\infty} f(x) \theta(x) e^{-cx} e^{-ixy} \, dx    (39)
         = \int_{0}^{\infty} f(x) e^{-cx} e^{-ixy} \, dx,    (40)

where we have by convention also absorbed the 1/\sqrt{2\pi}.
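Equation (40) says the Laplace transform is just the Fourier transform of \theta(x) f(x) e^{-cx}, i.e., F(s) evaluated at s = c + iy. For f(x) = 1 this gives g(y) = 1/(c + iy), which a direct sum verifies (a numerical sketch with assumed parameters, not from the original notes):

```python
import numpy as np

# Check of Eq. (40) for f(x) = 1:
#   g(y) = integral_0^inf e^{-cx} e^{-ixy} dx = 1/(c + iy),
# i.e., the Laplace transform F(s) of f evaluated at s = c + iy.
dx = 1e-3
x = (np.arange(40000) + 0.5) * dx   # midpoint grid on [0, 40]
c = 1.0                             # any positive real number works

yvals = np.array([0.0, 1.0, 3.0])
g = np.array([np.sum(np.exp(-(c + 1j * yv) * x)) * dx for yv in yvals])

print(np.max(np.abs(g - 1.0 / (c + 1j * yvals))))   # small
```

Note that at y = 0 this reduces to the previous check, F(c) = 1/c, as it must.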