HILBERT SPACE APPLICATIONS IN INTEGRAL EQUATIONS

A Thesis

Presented to

The Faculty of the Department of Mathematics

California State University, Los Angeles

In Partial Fulfillment

of the Requirements for the Degree

Master of Science

in Mathematics

By

Misha M. Avetisyan

December 2013

© 2013

Misha Avetisyan

ALL RIGHTS RESERVED


The thesis of Misha M. Avetisyan is approved.

Borislava Gutarts

Derek Chang

Michael Hoffman, Committee Chair

Grant Fraser, Department Chair

California State University, Los Angeles

December 2013


ABSTRACT

Hilbert Space Applications in Integral Equations

By

Misha M. Avetisyan

The purpose of this paper is to describe some applications of the theory of Hilbert spaces to integral equations. The main goal is to illustrate possible applications of techniques developed in the theory, and to include the standard classification of the important integral equations (Volterra, Fredholm, integro-differential, singular, and Abel's integral equations) and their solvability. Most available treatments of the subject are abstract, and many rest on comprehensive theories such as topological methods. This paper uses some recent developments in the solution of integral equations without drawing on the extensive mathematical theory, and leaves out abstract and comprehensive methods. The aim is to provide both a systematic exposition of the basic ideas and results of Hilbert space theory and functional analysis, and an introduction to various methods of solution of integral equations. Several examples with solutions are included in each section.


ACKNOWLEDGMENTS

I wish to express my deep gratitude to my thesis advisor, Dr. Hoffman, for his encouragement, advice, and guidance during the preparation of this thesis. I took with him the three most valuable classes, in applied linear analysis and advanced complex analysis, and he also agreed to be my thesis advisor. I am most thankful to Professor Borislava Gutarts and Professor Derek Chang for serving as committee members. I am thankful to my math teachers, who made learning mathematics interesting. Special thanks to Professor Melisa Hendrata, with whom I took five classes related to numerical analysis, linear programming, and optimization, which helped me a great deal in my thesis.

I wish to thank my wife Arusyak and my son Gurgen for their valuable support during the course of the thesis.

This thesis is dedicated to the loving memory of my father Gurgen Poghosi Avetisyan.


TABLE OF CONTENTS

Abstract ...... iv

Acknowledgments...... v

Chapter

1. Historical Introduction ...... 1

2. Hilbert Space ...... 4

2.1 Euclidean Space ...... 4

2.2 Hilbert Space of Sequences ...... 6

2.3 Function Space ...... 11

3. Abstract Hilbert Space H ...... 15

3.1 Properties of Hilbert Space ...... 15

3.2 Properties of Orthonormal Systems ...... 19

4. Linear Operators in Hilbert Spaces ...... 24

4.1 Linear Integral Operators ...... 24

4.1.1 Norm of an Integral Operator ...... 26

4.1.2 Hermitian Adjoint ...... 28

4.2 Bounded Linear Operators ...... 29

4.3 Completely Continuous Operators ...... 34

5. Linear Integral Equations ...... 40

5.1 Classification of Linear Integral Equations ...... 40

5.1.1 Fredholm Linear Integral Equations ...... 40

5.1.2 Volterra Linear Integral Equations ...... 41

5.1.3 Integro-Differential Equations ...... 42

5.1.4 Singular Integral Equations ...... 42

5.2 Solution of an Integral Equation ...... 43

5.3 Converting Volterra Equation to ODE ...... 44

5.4 Converting Initial Value Problem to Volterra Equation ...... 45

5.5 Converting Boundary Value Problem to Fredholm Equation...... 49

6. Fredholm Integral Equations...... 52

6.1 Introduction ...... 52

6.2 The Decomposition Method ...... 53

6.3 The Direct Computation Method ...... 57

6.4 The Successive Approximations Method ...... 59

6.5 The Method of Successive Substitutions ...... 60

6.6 Homogeneous Fredholm Equations ...... 62

7. Volterra Integral Equations ...... 65

7.1 Introduction ...... 65

7.2 The Adomian Decomposition Method...... 65

7.3 The Series Solution Method...... 70

7.4 Successive Approximations Method ...... 73

7.5 The Method of Successive Substitutions ...... 75

7.6 Volterra Integral Equations of the First Kind ...... 77

8. Integro-Differential Equations ...... 79

8.1 Introduction ...... 79

8.2 Fredholm Integro-Differential Equations ...... 80

8.2.1 The Direct Computation Method ...... 80

8.2.2 The Adomian Decomposition Method...... 82


8.2.3 Converting to Fredholm Integral Equations ...... 86

8.3 Volterra Integro-Differential Equations ...... 88

8.3.1 The Series Solution Method...... 88

8.3.2 The Decomposition Method ...... 91

8.3.3 Converting to Volterra Integral Equations ...... 93

9. Singular Integral Equations...... 96

9.1 Definitions...... 96

9.2 Abel’s Problem ...... 97

References ...... 103

Appendices

A. of Irrational Functions ...... 105

B. The Gamma Function \( \Gamma(x) \) ...... 106


CHAPTER 1

Historical Introduction

We begin with a historical survey. The name integral equation [10], [13], [18], for any equation in which the unknown function \( \phi(x) \) occurs under an integral sign, was introduced by du Bois-Reymond in 1888. In 1782 Laplace used the integral

\[ f(x) = \int_0^\infty e^{-xs}\,\phi(s)\,ds \qquad (1.1) \]

to solve differential equations. In 1822 Fourier found the formulas

\[ f(x) = \sqrt{\frac{2}{\pi}} \int_0^\infty \sin(xs)\,\phi(s)\,ds, \qquad (1.2) \]

\[ \phi(s) = \sqrt{\frac{2}{\pi}} \int_0^\infty \sin(xs)\,f(x)\,dx \qquad (1.3) \]

and

\[ f(x) = \sqrt{\frac{2}{\pi}} \int_0^\infty \cos(xs)\,\phi(s)\,ds, \qquad (1.4) \]

\[ \phi(s) = \sqrt{\frac{2}{\pi}} \int_0^\infty \cos(xs)\,f(x)\,dx. \qquad (1.5) \]

The integral equations (1.2) and (1.4) can be solved for \( \phi \) in terms of the known function \( f(x) \) by (1.3) and (1.5).

In 1826 Abel solved the integral equation having the form

\[ f(x) = \int_a^x (x-s)^{-\alpha}\,\phi(s)\,ds, \qquad (1.6) \]

where \( f(x) \) is a continuous function satisfying \( f(a) = 0 \), and \( 0 < \alpha < 1 \).

An integral equation of the type

\[ \phi(x) = f(x) + \lambda \int_0^x k(x-s)\,\phi(s)\,ds, \qquad (1.7) \]

in which the unknown function \( \phi(x) \) occurs both outside and within the integral and the variable \( x \) appears as the upper limit of the integral, was obtained by Poisson in 1826 in the theory of magnetism. He solved it by expanding \( \phi(s) \) in powers of the parameter \( \lambda \) without proving the convergence of this series. In 1837 Liouville proved the convergence of such a series.

The determination of the function \( \Psi \) having given values over a certain boundary surface \( S \) and satisfying Laplace's equation \( \nabla^2 \Psi = 0 \) within the region enclosed by \( S \) was shown by Neumann in 1870 to be equivalent to the solution of an integral equation. His approach is similar to the procedure used by Poisson and Liouville, and leads to a good method of successive approximations.

In 1896 Volterra gave the first general solution of the class of linear integral equations bearing his name, with the variable \( x \) appearing as the upper limit of the integral.

A more general class of linear integral equations having the form

\[ \phi(x) = f(x) + \int_a^b K(x,s)\,\phi(s)\,ds, \qquad (1.8) \]

which includes Volterra's class of integral equations as the special case given by \( K(x,s) = 0 \) for \( s > x \), was first discussed by Fredholm in 1900. He employed an approach similar to that introduced by Volterra in 1884. In this method the Fredholm equation (1.8) is regarded as the limiting form as \( n \to \infty \) of the set of \( n \) linear algebraic equations

\[ \phi(x_r) = f(x_r) + \sum_{s=1}^{n} K(x_r, x_s)\,\phi(x_s)\,\delta_n \qquad (r = 1, \ldots, n), \qquad (1.9) \]

where \( \delta_n = (b-a)/n \) and \( x_r = a + r\,\delta_n \). The solution of these equations can be obtained, and Fredholm verified by direct substitution in the integral equation (1.8) that his limiting formula as \( n \to \infty \) gave the true solution.
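Fredholm's limiting process (1.9) translates directly into a finite linear system that a few lines of code can solve. The sketch below assumes numpy; the kernel \( K(x,s) = xs/2 \), the interval \( [0,1] \), and the forcing term \( f(x) = x \) are illustrative choices (not taken from the text) for which substituting \( \phi(x) = cx \) gives the exact solution \( \phi(x) = 1.2x \).

```python
import numpy as np

def fredholm_solve(K, f, a, b, n):
    """Approximate phi = f + int_a^b K(x,s) phi(s) ds by the n algebraic
    equations (1.9): phi(x_r) = f(x_r) + sum_s K(x_r,x_s) phi(x_s) delta_n,
    with delta_n = (b-a)/n and nodes x_r = a + r*delta_n."""
    delta = (b - a) / n
    x = a + delta * np.arange(1, n + 1)
    A = np.eye(n) - delta * K(x[:, None], x[None, :])
    return x, np.linalg.solve(A, f(x))

# Illustrative separable kernel K(x,s) = x*s/2 on [0,1] with f(x) = x;
# the exact solution is phi(x) = 1.2*x.
x, phi = fredholm_solve(lambda x, s: x * s / 2, lambda x: x, 0.0, 1.0, 1000)
print(abs(phi[-1] - 1.2))   # small discretization error
```

As \( n \) grows the discrete solution converges to the true \( \phi \), which is exactly the limiting process Fredholm used.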

In the remaining chapters of this paper we shall give a discussion of the general theory of linear integral equations as developed by Volterra, Fredholm, Hilbert, and Schmidt. For this purpose we need to introduce the concept of a Hilbert space [10], [17], [22]. This is a generalization of ordinary three-dimensional Euclidean space to a linear vector space of infinite dimensions which, for the subject of integral equations, is chosen to be the complete linear space composed of square integrable functions having a distance property defined in terms of an inner product. The whole theory was initiated by the work of D. Hilbert (1912) on integral equations.

To explain the meanings of these terms we consider Euclidean space first, then the Hilbert space of sequences and function space.

3

CHAPTER 2

Hilbert Space

2.1 Euclidean Space

In three-dimensional Euclidean space each point is represented by coordinates \( (x_1, x_2, x_3) \) forming the components of the position vector \( \mathbf{x} \). The vector \( \lambda\mathbf{x} \) has coordinates \( (\lambda x_1, \lambda x_2, \lambda x_3) \), and the vector sum \( \mathbf{x} + \mathbf{y} \) of two vectors \( \mathbf{x}, \mathbf{y} \) has coordinates \( (x_1 + y_1, x_2 + y_2, x_3 + y_3) \). These obey the properties of a linear vector space.

The scalar product of two vectors \( \mathbf{x}, \mathbf{y} \) is defined as

\[ (\mathbf{x}, \mathbf{y}) = x_1 y_1 + x_2 y_2 + x_3 y_3. \qquad (2.1.1) \]

The vectors are orthogonal, i.e. at right angles, if \( (\mathbf{x}, \mathbf{y}) = 0 \).

We have

\[ (\mathbf{x}, \mathbf{x}) = x_1^2 + x_2^2 + x_3^2 \ge 0, \]

where \( (\mathbf{x}, \mathbf{x}) = 0 \) if and only if \( \mathbf{x} \) is the zero vector \( \mathbf{0} \) with coordinates \( (0,0,0) \).

The magnitude or norm of a vector \( \mathbf{x} \) is given by

\[ \|\mathbf{x}\| = \sqrt{(\mathbf{x}, \mathbf{x})} = \sqrt{x_1^2 + x_2^2 + x_3^2} < \infty. \qquad (2.1.2) \]

The vector is said to be normalized if \( \|\mathbf{x}\| = 1 \).

The distance between two points specified by the vectors \( \mathbf{x}, \mathbf{y} \) is given by \( \|\mathbf{x} - \mathbf{y}\| \). Since the length of a side of a triangle is less than or equal to the sum of the lengths of the other two sides, we have the triangle inequality

\[ \|\mathbf{x} - \mathbf{y}\| \le \|\mathbf{x}\| + \|\mathbf{y}\|. \qquad (2.1.3) \]

Suppose that \( \mathbf{x}, \mathbf{y}, \mathbf{z} \) are linearly independent vectors, so that \( \mu\mathbf{x} + \gamma\mathbf{y} + \beta\mathbf{z} \) is not the zero vector \( \mathbf{0} \) except when \( \mu = \gamma = \beta = 0 \). We can use them to construct an orthogonal and normalized, i.e. orthonormal, set of three vectors \( \mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3 \). The vector \( \mathbf{e}_1 = \mathbf{x}/\|\mathbf{x}\| \) is normalized; then

\[ \mathbf{y}_1 = \mathbf{y} - (\mathbf{y}, \mathbf{e}_1)\,\mathbf{e}_1 \]

is orthogonal to \( \mathbf{e}_1 \), and \( \mathbf{e}_2 = \mathbf{y}_1/\|\mathbf{y}_1\| \) is normalized. Further,

\[ \mathbf{z}_1 = \mathbf{z} - (\mathbf{z}, \mathbf{e}_1)\,\mathbf{e}_1 - (\mathbf{z}, \mathbf{e}_2)\,\mathbf{e}_2 \]

is orthogonal to \( \mathbf{e}_1 \) and \( \mathbf{e}_2 \), while \( \mathbf{e}_3 = \mathbf{z}_1/\|\mathbf{z}_1\| \) is also normalized.

The three orthogonal unit vectors \( \mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3 \) are said to form a basis, since any vector \( \mathbf{a} \) of the three-dimensional space can be expressed as the linear combination

\[ \mathbf{a} = (\mathbf{a}, \mathbf{e}_1)\,\mathbf{e}_1 + (\mathbf{a}, \mathbf{e}_2)\,\mathbf{e}_2 + (\mathbf{a}, \mathbf{e}_3)\,\mathbf{e}_3. \]
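The construction above is the Gram-Schmidt process, and it can be sketched numerically; numpy and the three sample vectors are assumptions for illustration.

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize linearly independent vectors exactly as in the text:
    e1 = x/||x||, y1 = y - (y,e1)e1, e2 = y1/||y1||, and so on."""
    basis = []
    for v in vectors:
        # subtract the projections (v, e)e onto the vectors built so far;
        # np.vdot(e, v) equals the inner product (v, e) of the text
        v1 = v - sum(np.vdot(e, v) * e for e in basis)
        basis.append(v1 / np.linalg.norm(v1))
    return basis

e1, e2, e3 = gram_schmidt([np.array([1.0, 1.0, 0.0]),
                           np.array([1.0, 0.0, 1.0]),
                           np.array([0.0, 1.0, 1.0])])
print(np.vdot(e1, e2), np.linalg.norm(e3))  # ~0 and 1
```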

The foregoing vector algebra can be extended to an n-dimensional space whose points are specified by an ordered set of n complex numbers \( (x_1, x_2, \ldots, x_n) \) denoted by the vector \( \mathbf{x} \). The inner product of two vectors \( \mathbf{x}, \mathbf{y} \) is now defined as

\[ (\mathbf{x}, \mathbf{y}) = \sum_{r=1}^{n} x_r \bar{y}_r = \overline{(\mathbf{y}, \mathbf{x})}, \qquad (2.1.4) \]

while the norm of the vector \( \mathbf{x} \) is given by

\[ \|\mathbf{x}\| = \sqrt{(\mathbf{x}, \mathbf{x})} = \sqrt{\sum_{r=1}^{n} |x_r|^2}. \qquad (2.1.5) \]

An orthonormal set of n vectors \( \mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_n \) forms a basis of the n-dimensional space, and an arbitrary vector \( \mathbf{a} \) in the space can be expressed as the linear combination

\[ \mathbf{a} = \sum_{r=1}^{n} (\mathbf{a}, \mathbf{e}_r)\,\mathbf{e}_r, \]

where the \( a_r = (\mathbf{a}, \mathbf{e}_r) \) \( (r = 1, \ldots, n) \) are the components of the vector \( \mathbf{a} \) with respect to the basis vectors. The only vector which is orthogonal to every vector of the basis is the zero vector \( \mathbf{0} \), and so the orthonormal set \( \mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_n \) is said to span the n-dimensional space. If we take \( \mathbf{e}_1 = (1,0,0,\ldots,0) \), \( \mathbf{e}_2 = (0,1,0,\ldots,0) \), \( \ldots \), \( \mathbf{e}_n = (0,0,0,\ldots,1) \), we see that \( \mathbf{a} \) is the vector specified by \( (a_1, a_2, \ldots, a_n) \), where \( a_r = (\mathbf{a}, \mathbf{e}_r) \).

2.2 Hilbert Space of Sequences

By a natural generalization of a finite dimensional space, we can consider an infinite dimensional space [10], [13] and [14] whose points are represented by vectors \( \mathbf{x} \) having components, or coordinates, given by the infinite sequence of complex numbers \( \{x_r\} = (x_1, x_2, \ldots, x_r, \ldots) \) satisfying

\[ \sum_{r=1}^{\infty} |x_r|^2 < \infty. \qquad (2.2.1) \]

The scalar product, or inner product, of two vectors \( \mathbf{x} \) and \( \mathbf{y} \) is given by

\[ (\mathbf{x}, \mathbf{y}) = \sum_{r=1}^{\infty} x_r \bar{y}_r = \overline{(\mathbf{y}, \mathbf{x})}. \qquad (2.2.2) \]

Then we have \( 0 \le (\mathbf{x}, \mathbf{x}) < \infty \), where \( (\mathbf{x}, \mathbf{x}) = 0 \) if and only if \( \mathbf{x} \) is the zero vector \( \mathbf{0} \) whose components all vanish.

Also we define the norm \( \|\mathbf{x}\| \) of a vector \( \mathbf{x} \) by the formula

\[ \|\mathbf{x}\| = \sqrt{(\mathbf{x}, \mathbf{x})} = \sqrt{\sum_{r=1}^{\infty} |x_r|^2} < \infty. \qquad (2.2.3) \]

Thus \( \|\mathbf{x}\| = 0 \) if and only if \( \mathbf{x} = \mathbf{0} \).

Further we let \( \lambda\mathbf{x} \) be the vector with components \( \{\lambda x_r\} \), so that \( \|\lambda\mathbf{x}\| = |\lambda| \|\mathbf{x}\| \), and let the sum \( \mathbf{x} + \mathbf{y} \) of two vectors \( \mathbf{x}, \mathbf{y} \) be the vector having components \( \{x_r + y_r\} \), as in the case of a finite dimensional space.

Now, by the Cauchy inequality

\[ \sum_{r=1}^{\infty} |x_r||y_r| \le \sqrt{\left(\sum_{r=1}^{\infty} |x_r|^2\right)\left(\sum_{r=1}^{\infty} |y_r|^2\right)} \qquad (2.2.4) \]

and the inequality

\[ \left|\sum_{r=1}^{\infty} x_r \bar{y}_r\right| \le \sum_{r=1}^{\infty} |x_r||y_r|, \qquad (2.2.5) \]

we obtain Schwarz's inequality

\[ |(\mathbf{x}, \mathbf{y})| \le \|\mathbf{x}\| \|\mathbf{y}\|. \qquad (2.2.6) \]

This is the generalization to infinite sequences of the corresponding result in three-dimensional Euclidean space, which follows from \( (\mathbf{x}, \mathbf{y}) = \|\mathbf{x}\| \|\mathbf{y}\| \cos\alpha \), where \( \alpha \) is the angle between the vectors \( \mathbf{x} \) and \( \mathbf{y} \).

Hence

\[ \|\mathbf{x} + \mathbf{y}\|^2 = \sum_{r=1}^{\infty} |x_r + y_r|^2 = \sum_{r=1}^{\infty} |x_r|^2 + \sum_{r=1}^{\infty} |y_r|^2 + \sum_{r=1}^{\infty} (x_r \bar{y}_r + \bar{x}_r y_r) \le (\|\mathbf{x}\| + \|\mathbf{y}\|)^2 < \infty. \]

This shows that the sum \( \mathbf{x} + \mathbf{y} \) satisfies the condition

\[ \sum_{r=1}^{\infty} |x_r + y_r|^2 < \infty \qquad (2.2.7) \]

and also yields the triangle inequality

\[ \|\mathbf{x} + \mathbf{y}\| \le \|\mathbf{x}\| + \|\mathbf{y}\|, \qquad (2.2.8) \]

which can be rewritten in the form (2.1.3), by reversing the sign of \( \mathbf{y} \), as

\[ \|\mathbf{x} - \mathbf{y}\| \le \|\mathbf{x}\| + \|\mathbf{y}\|. \]

The real number

\[ d(\mathbf{x}, \mathbf{y}) = \|\mathbf{x} - \mathbf{y}\| = \sqrt{\sum_{r=1}^{\infty} |x_r - y_r|^2} \qquad (2.2.9) \]

represents the distance between two points characterized by the vectors \( \mathbf{x} \) and \( \mathbf{y} \). Clearly \( \|\mathbf{x}\| \) is the distance of the point \( \mathbf{x} \) from the origin given by the zero vector \( \mathbf{0} \).

A sequence of vectors \( \{\mathbf{x}_n\} \) converges strongly, or "in norm," to a limit vector \( \mathbf{x} \) if, given any \( \varepsilon > 0 \), there exists \( N \) such that for \( n > N \) we have \( \|\mathbf{x}_n - \mathbf{x}\| < \varepsilon \). Strong convergence is denoted by \( \mathbf{x}_n \to \mathbf{x} \).

If \( \mathbf{x}_n \to \mathbf{x} \) we have, using the triangle inequality,

\[ \|\mathbf{x}_n - \mathbf{x}_m\| = \|\mathbf{x}_n - \mathbf{x} + \mathbf{x} - \mathbf{x}_m\| \le \|\mathbf{x}_n - \mathbf{x}\| + \|\mathbf{x} - \mathbf{x}_m\| < \varepsilon \]

for large \( n, m \). A sequence \( \{\mathbf{x}_n\} \) satisfying \( \|\mathbf{x}_n - \mathbf{x}_m\| < \varepsilon \) for large \( n \) and \( m \) is known as a Cauchy sequence.

A sequence of points \( \{\mathbf{x}_n\} \) in a Hilbert space \( \mathbf{H} \) is said to converge weakly to a point \( \mathbf{x} \) in \( \mathbf{H} \) if

\[ (\mathbf{x}_n, \mathbf{y}) \to (\mathbf{x}, \mathbf{y}) \]

for all \( \mathbf{y} \) in \( \mathbf{H} \). Here \( (\cdot\,,\cdot) \) is understood to be the inner product on the Hilbert space.

Weak convergence is in contrast to strong convergence, or convergence in the norm, which is defined by \( \|\mathbf{x}_n - \mathbf{x}\| \to 0 \), where \( \|\mathbf{x}\| = \sqrt{(\mathbf{x}, \mathbf{x})} \) is the norm of \( \mathbf{x} \). The notion of weak convergence defines a topology on \( \mathbf{H} \), called the weak topology on \( \mathbf{H} \). In other words, the weak topology is the topology generated by the bounded linear functionals on \( \mathbf{H} \). It follows from Schwarz's inequality that the weak topology is weaker than the norm topology. Therefore convergence in norm implies weak convergence, while the converse is not true in general. However, if \( (\mathbf{x}_n, \mathbf{y}) \to (\mathbf{x}, \mathbf{y}) \) for each \( \mathbf{y} \) and \( \|\mathbf{x}_n\| \to \|\mathbf{x}\| \), then we have \( \|\mathbf{x}_n - \mathbf{x}\| \to 0 \) as \( n \to \infty \).

On the level of operators, a bounded operator \( T \) is also continuous in the weak topology. If \( \mathbf{x}_n \to \mathbf{x} \) weakly, then for all \( \mathbf{y} \),

\[ (T\mathbf{x}_n, \mathbf{y}) = (\mathbf{x}_n, T^*\mathbf{y}) \to (\mathbf{x}, T^*\mathbf{y}) = (T\mathbf{x}, \mathbf{y}). \]

Now we consider a sequence \( \{e_n\} \) which is orthonormal, that is,

푛 (푒 , ) = .

푛 푚 푚푛 Where equals one if = and zero푒 푒 otherwise.훿 We claim that if the sequence is

푚푛 infinite,훿 then it converges푚 weakly푛 to zero. A simple proof is as follows. For , we

have 푥 ∈ 푯

|( , )| ( ), 2 2 ′ � 푒푛 푥 ≤ ‖푥‖ 퐵푒푠푠푒푙 푠 푖푛푒푞푢푎푙푖푡푦 푛 where equality holds when { } is a Hilbert space basis. Therefore

푛 푒|( , )| 0 . . ( , ) 0. 2 푛 푛 Uniform Bounded Theorem푒 푥 (UBT)→ for푖 푒Hilbert푒 푥 space:→ If { } is a sequence of ∞ 푛 푛=1 vectors in , such that the numerical sequence = ( , ) is bounded푦 for each in .

푛 푛 The sequence푯 { } is bounded in norm. 푡 푥 푦 푥 푯 ∞ 푛 푛=1 UBT for푦 Hilbert space: If there are for each in constants such that

푥 |( , )| , 푥 푯 푀

푛 푥 then there is a single constant such푥 that푦 ≤ 푀 ∀푛

퐵 .

‖푦푛‖ ≤ 퐵 푎푙푙 푛 9

Corollary: weakly convergent sequence is bounded.

If then ( , ) ( , ) , this is a number and any convergent sequence of 푤 푛 푛 numbers푦 → 푦 is bounded.푥 푦 So,→ the푥 sequence푦 ∀푥 {( , )} is bounded in for each . By UBT the

푛 sequence { } is bounded in norm. 푥 푦 푭 푥
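The weak convergence of an orthonormal sequence to zero can be made concrete in the sequence space itself. In the sketch below, which assumes numpy, the unit vectors \( \mathbf{e}_n \) pick out the nth coordinate, and the illustrative square-summable sequence \( x_r = 1/r \) is an assumption, not an example from the text.

```python
import numpy as np

# The unit vectors e_n = (0,...,0,1,0,...) are orthonormal, and
# (e_n, x) is just the n-th coordinate x_n of x.  For a square-summable
# x these inner products tend to 0, so e_n -> 0 weakly, although
# ||e_n|| = 1 for every n (no strong convergence).
N = 100_000                      # truncation of the infinite sequence
x = 1.0 / np.arange(1, N + 1)    # x_r = 1/r, sum of squares converges

def inner_with_x(n):
    return x[n - 1]              # (e_n, x) = x_n  (1-indexed)

print(inner_with_x(1), inner_with_x(1000))   # 1.0 then 0.001
print(np.sum(x**2))                          # approaches pi^2/6
```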

Now we demonstrate that every Cauchy sequence has a limit vector \( \mathbf{x} \) in the space. Suppose that \( \mathbf{e}_1 = (1,0,0,\ldots), \mathbf{e}_2 = (0,1,0,\ldots), \ldots \) are vectors in the infinite dimensional space. They form a basis which spans the space, and the arbitrary vector \( \mathbf{a} = (a_1, a_2, \ldots, a_r, \ldots) \) can be expressed as

\[ \mathbf{a} = \sum_{r=1}^{\infty} a_r \mathbf{e}_r, \]

where \( a_r = (\mathbf{a}, \mathbf{e}_r) \) and \( \mathbf{e}_r \) is the rth unit vector satisfying \( \|\mathbf{e}_r\| = 1 \).

Then we have, using Schwarz's inequality,

\[ |(\mathbf{x}_n, \mathbf{e}_r) - (\mathbf{x}_m, \mathbf{e}_r)| = |(\mathbf{x}_n - \mathbf{x}_m, \mathbf{e}_r)| \le \|\mathbf{x}_n - \mathbf{x}_m\| < \varepsilon \]

for large \( n \) and \( m \). It follows that the sequence of numbers \( (\mathbf{x}_n, \mathbf{e}_r) = x_r^{(n)} \) is a Cauchy sequence and approaches a limiting value \( x_r \) \( (r = 1, 2, \ldots) \) as \( n \to \infty \). For large \( n, m \) we have

\[ \|\mathbf{x}_n - \mathbf{x}_m\|^2 = \sum_{r=1}^{\infty} |x_r^{(n)} - x_r^{(m)}|^2 < \varepsilon \]

and so for every \( k \)

\[ \sum_{r=1}^{k} |x_r^{(n)} - x_r^{(m)}|^2 < \varepsilon. \]

Hence, in the limit as \( m \to \infty \) we obtain

\[ \sum_{r=1}^{k} |x_r^{(n)} - x_r|^2 < \varepsilon \]

and since this is true for every \( k \) we get

\[ \|\mathbf{x}_n - \mathbf{x}\|^2 = \sum_{r=1}^{\infty} |x_r^{(n)} - x_r|^2 < \varepsilon. \]

But

\[ \sqrt{\sum_{r=1}^{\infty} |x_r|^2} = \|\mathbf{x}\| = \|(\mathbf{x} - \mathbf{x}_n) + \mathbf{x}_n\| \le \|\mathbf{x} - \mathbf{x}_n\| + \|\mathbf{x}_n\| < \varepsilon + \|\mathbf{x}_n\|, \]

and so we have

\[ \sum_{r=1}^{\infty} |x_r|^2 < \infty. \]

Thus \( \mathbf{x} = (x_1, x_2, \ldots, x_r, \ldots) \) belongs to the space and \( \mathbf{x}_n \to \mathbf{x} \) as \( n \to \infty \). A space in which \( \mathbf{x}_n \to \mathbf{x} \) whenever \( \{\mathbf{x}_n\} \) is a Cauchy sequence is called complete. The space of sequences described above is an example of a Hilbert space.

2.3 Function Space

We consider the function space [10], [13] composed of all continuous complex functions \( f(x) \) of a real variable \( x \), defined on the interval \( a \le x \le b \), which are square integrable and thus satisfy the condition

\[ \int_a^b |f(x)|^2\,dx < \infty. \qquad (2.3.1) \]

The inner product of two functions \( f(x) \) and \( g(x) \) is given by

\[ (f, g) = \int_a^b f(x)\,\overline{g(x)}\,dx, \qquad (2.3.2) \]

and we define the norm of the function \( f(x) \) as

\[ \|f\| = \sqrt{(f, f)}. \qquad (2.3.3) \]

Next we establish the important inequality named after Schwarz. We have

\[ \int_a^b \left| f(x) - \frac{(f,g)}{(g,g)}\,g(x) \right|^2 dx \ge 0, \qquad (2.3.4) \]

so that

\[ (f,f) - 2\,\frac{|(f,g)|^2}{(g,g)} + \frac{|(f,g)|^2}{(g,g)} \ge 0, \]

i.e.

\[ (f,f)(g,g) \ge |(f,g)|^2. \qquad (2.3.5) \]

Hence

\[ \|f\| \|g\| \ge |(f,g)|; \qquad (2.3.6) \]

this is Schwarz's inequality for square integrable functions.

Also

\[ (\|f\| + \|g\|)^2 = \|f\|^2 + \|g\|^2 + 2\|f\|\|g\| \ge (f,f) + (g,g) + 2|(f,g)| \]

by Schwarz's inequality. But

\[ 2|(f,g)| \ge (f,g) + \overline{(f,g)} \]

and so

\[ (\|f\| + \|g\|)^2 \ge (f,f) + (g,g) + (f,g) + (g,f) = (f+g, f+g). \]

Hence we have

\[ \|f\| + \|g\| \ge \|f + g\|, \qquad (2.3.7) \]

which is the triangle inequality for functions, known as Minkowski's inequality.
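Both inequalities are easy to check numerically with a discretized \( L^2 \) inner product. The grid, the trapezoid quadrature weights, and the functions \( f \) and \( g \) below are illustrative assumptions, not examples from the text.

```python
import numpy as np

# Discretized L^2([0,1]) inner product (f,g) = int f(x) conj(g(x)) dx.
x = np.linspace(0.0, 1.0, 2001)
w = np.full(x.size, x[1] - x[0])
w[0] = w[-1] = (x[1] - x[0]) / 2      # trapezoid quadrature weights
f, g = np.sin(2 * np.pi * x), np.exp(x)

def inner(u, v):
    return np.sum(w * u * np.conj(v))

def norm(u):
    return np.sqrt(inner(u, u).real)

print(abs(inner(f, g)) <= norm(f) * norm(g))   # Schwarz (2.3.6)
print(norm(f + g) <= norm(f) + norm(g))        # Minkowski (2.3.7)
```

Since the discretized form is itself a genuine inner product on the grid values, both inequalities hold exactly for the discrete sums as well.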

Two functions \( f(x) \) and \( g(x) \) belonging to the function space are said to be orthogonal if

\[ (f, g) = \int_a^b f(x)\,\overline{g(x)}\,dx = 0, \qquad (2.3.1.1) \]

and the function \( f(x) \) is normalized if

\[ \|f\| = 1. \qquad (2.3.1.2) \]

We consider a set of sectionally continuous complex functions \( \phi_1(x), \phi_2(x), \ldots, \phi_r(x), \ldots \) satisfying the orthonormality condition

\[ (\phi_r, \phi_s) = \int_a^b \phi_r(x)\,\overline{\phi_s(x)}\,dx = \delta_{rs}, \qquad (2.3.1.3) \]

where \( \delta_{rs} \) is the Kronecker delta symbol

\[ \delta_{rs} = \begin{cases} 1 & (r = s) \\ 0 & (r \ne s). \end{cases} \qquad (2.3.1.4) \]

Such a set of functions is called orthonormal.

An orthonormal system of functions [6], [13] is said to form a basis, or a complete system, if and only if the sole function which is orthogonal to every member \( \phi_r(x) \) of the system is the null function, which vanishes throughout the interval \( a \le x \le b \) except at a finite number of points. We note here that a complete system of functions should not be confused with a complete space.


CHAPTER 3

Abstract Hilbert Space \( H \)

3.1 Properties of Hilbert Space

We now abstract the common axioms that all Hilbert spaces must satisfy [6], [10], [13], [14].

A Hilbert space \( \mathbf{H} \) is a linear vector space possessing a distance function, or metric, which is given by an inner product, and which is complete with respect to that metric.

A linear vector space consists of points, or vectors, forming an Abelian group and permitting multiplication by complex numbers \( \lambda \) from the field \( \mathbf{C} \).

An Abelian group has an internal law of composition satisfying the commutative law

\[ f + g = g + f \qquad (3.1.1) \]

and the associative law

\[ f + (g + h) = (f + g) + h, \qquad (3.1.2) \]

having a zero element \( 0 \) such that

\[ 0 + f = f + 0 = f, \qquad (3.1.3) \]

and an inverse element \( -f \) corresponding to each element \( f \) of the set such that

\[ f + (-f) = (-f) + f = 0. \qquad (3.1.4) \]

The multiplication by the field of complex numbers satisfies

\[ 1 \cdot f = f, \qquad (3.1.5) \]

\[ 0 \cdot f = 0, \qquad (3.1.6) \]

\[ (\lambda\mu)f = \lambda(\mu f), \qquad (3.1.7) \]

and satisfies the distributive law with respect to the elements \( f, g \),

\[ \lambda(f + g) = \lambda f + \lambda g, \qquad (3.1.8) \]

and the distributive law with respect to the numbers \( \lambda, \mu \),

\[ (\lambda + \mu)f = \lambda f + \mu f. \qquad (3.1.9) \]

휆 휇 푓 휆푓 휇푓 The inner product of two elements , is a complex number denoted by ( , ) satisfying

푓 푔 푓 푔 ( , ) = ( , ), (3.1.10)

푓 푔 ��푔���푓�� ( , ) = ( , ), (3.1.11)

휆푓 푔 휆 푓 푔 ( + , ) = ( , ) + ( , ). (3.1.12)

푓1 푓2 푔 푓1 푔 푓2 푔 It follows at once that ( , ) = ( , ) so that ( , ) is a real number. We shall assume

that 푓 푓 ��푓���푓�� 푓 푓

( , ) 0 (3.1.13)

푓 푓 ≥ and further that ( , ) = 0 if and only if = 0.

푓 푓 푓 Then we have also that

( , ) = ( , ) = ( , ) = ( , ) (3.1.14)

푓 휆푔 ��휆푔�����푓�� 휆̅ ��푔���푓�� 휆̅ 푓 푔 16

( , + ) = ( + , ) = ( , ) + ( , ) = ( , ) + ( , ).

1 2 1 2 1 2 1 2 푓 푔 푔 � �푔 � � � � � � 푔� � � � � 푓 � � � � 푔� � � � �푓 � � � � 푔� � � � �푓 � � 푓 푔 푓 푔 (3.1.15)

The norm of the element \( f \) is defined as

\[ \|f\| = \sqrt{(f, f)}. \qquad (3.1.16) \]

The norm \( \|f\| = 0 \) if and only if \( f = 0 \), and \( \|\lambda f\| = |\lambda| \|f\| \).

Now \( (f + \lambda g, f + \lambda g) \ge 0 \), and so taking \( \lambda = -(f, g)/(g, g) \) we obtain Schwarz's inequality

\[ \|f\| \|g\| \ge |(f, g)|, \qquad (3.1.17) \]

and we have the triangle inequality

\[ \|f\| + \|g\| \ge \|f + g\|. \qquad (3.1.18) \]

We now define a distance function \( d(f, g) \) in terms of the norm according to the formula

\[ d(f, g) = \|f - g\|. \qquad (3.1.19) \]

This satisfies the conditions required of a distance between two points \( f \) and \( g \), namely

(i) \( d(f, g) = d(g, f) \),
(ii) \( d(f, g) \ge 0 \),
(iii) \( d(f, g) = 0 \) if and only if \( f = g \),
(iv) \( d(f, g) \le d(f, h) + d(h, g) \). \qquad (3.1.20)

The last condition follows from the triangle inequality for the norm:

\[ \|f - g\| = \|(f - h) + (h - g)\| \le \|f - h\| + \|h - g\|. \]

We say that a sequence of elements \( \{f_n\} \) converges strongly to a limit element \( f \) if, given any \( \varepsilon > 0 \), there exists an \( N \) such that for \( n > N \) we have \( \|f_n - f\| < \varepsilon \). Strong convergence is denoted by \( f_n \to f \).

If \( f_n \to f \) we have

\[ \|f_n - f_m\| = \|(f_n - f) + (f - f_m)\| \le \|f_n - f\| + \|f - f_m\| < \varepsilon \]

for sufficiently large \( n, m \). A sequence \( \{f_n\} \) of elements satisfying \( \|f_n - f_m\| < \varepsilon \) for large \( n, m \) is known as a Cauchy sequence.

A Hilbert space is complete; that is, every Cauchy sequence converges to a limit vector in the space.

The theory of Hilbert spaces was initiated by David Hilbert (1862-1943) in his 1912 work on quadratic forms in infinitely many variables, which he applied to the theory of integral equations. John von Neumann first formulated an axiomatic theory of Hilbert spaces and developed the modern theory of operators on Hilbert spaces.

A complete inner product space is called a Hilbert space [6], [10], [13]. Since \( \mathbf{C} \) is complete, it is a Hilbert space; \( \mathbf{C}^N \) is a Hilbert space; \( l^2 \) is a Hilbert space. Every inner product space is a normed space, with convergence defined by the norm.

Definition 3.1.1: (Strong Convergence). A sequence \( \{x_n\} \) of vectors in an inner product space \( E \) is called strongly convergent to a vector \( x \in E \) if \( \|x_n - x\| \to 0 \) as \( n \to \infty \).

Definition 3.1.2: (Weak Convergence). A sequence \( \{x_n\} \) of vectors in an inner product space \( E \) is called weakly convergent to a vector \( x \in E \) if \( (x_n, y) \to (x, y) \) as \( n \to \infty \), for every \( y \in E \).

Theorem 3.1.3: A strongly convergent sequence is weakly convergent.

Proof. Suppose \( \|x_n - x\| \to 0 \) as \( n \to \infty \). By Schwarz's inequality,

\[ |(x_n - x, y)| \le \|x_n - x\| \|y\| \to 0 \quad \text{as } n \to \infty, \]

and so \( (x_n - x, y) \to 0 \) as \( n \to \infty \), for every \( y \in E \).

3.2 Properties of Orthonormal Systems

Using the principle of mathematical induction, we can generalize the Pythagorean formula [6] as follows:

Theorem 3.2.1: (Pythagorean Formula). If \( x_1, \ldots, x_n \) are orthogonal vectors in an inner product space, then

\[ \left\| \sum_{k=1}^{n} x_k \right\|^2 = \sum_{k=1}^{n} \|x_k\|^2. \qquad (3.2.1) \]

Proof. If \( x_1 \perp x_2 \), then \( \|x_1 + x_2\|^2 = \|x_1\|^2 + \|x_2\|^2 \). Thus the theorem is true for \( n = 2 \). Assume now that (3.2.1) holds for \( n - 1 \), that is,

\[ \left\| \sum_{k=1}^{n-1} x_k \right\|^2 = \sum_{k=1}^{n-1} \|x_k\|^2. \]

Set \( x = \sum_{k=1}^{n-1} x_k \) and \( y = x_n \). Clearly \( x \perp y \). Thus

\[ \left\| \sum_{k=1}^{n} x_k \right\|^2 = \|x + y\|^2 = \|x\|^2 + \|y\|^2 = \sum_{k=1}^{n-1} \|x_k\|^2 + \|x_n\|^2 = \sum_{k=1}^{n} \|x_k\|^2. \]

This proves the theorem.

Theorem 3.2.2: (Bessel's Equality and Inequality). Let \( x_1, \ldots, x_n \) be an orthonormal set of vectors in an inner product space \( E \). Then, for every \( x \in E \), we have

\[ \left\| x - \sum_{k=1}^{n} (x, x_k)\,x_k \right\|^2 = \|x\|^2 - \sum_{k=1}^{n} |(x, x_k)|^2 \qquad (3.2.2) \]

and

\[ \sum_{k=1}^{n} |(x, x_k)|^2 \le \|x\|^2. \qquad (3.2.3) \]

Proof. In view of the Pythagorean formula (3.2.1), we have

\[ \left\| \sum_{k=1}^{n} \alpha_k x_k \right\|^2 = \sum_{k=1}^{n} \|\alpha_k x_k\|^2 = \sum_{k=1}^{n} |\alpha_k|^2 \]

for arbitrary complex numbers \( \alpha_1, \ldots, \alpha_n \). Hence

\[ \left\| x - \sum_{k=1}^{n} \alpha_k x_k \right\|^2 = \left( x - \sum_{k=1}^{n} \alpha_k x_k,\; x - \sum_{k=1}^{n} \alpha_k x_k \right) \]

\[ = \|x\|^2 - \left( x, \sum_{k=1}^{n} \alpha_k x_k \right) - \left( \sum_{k=1}^{n} \alpha_k x_k, x \right) + \sum_{k=1}^{n} |\alpha_k|^2 \|x_k\|^2 \]

\[ = \|x\|^2 - \sum_{k=1}^{n} \bar{\alpha}_k (x, x_k) - \sum_{k=1}^{n} \alpha_k \overline{(x, x_k)} + \sum_{k=1}^{n} \alpha_k \bar{\alpha}_k \]

\[ = \|x\|^2 - \sum_{k=1}^{n} |(x, x_k)|^2 + \sum_{k=1}^{n} |(x, x_k) - \alpha_k|^2. \qquad (3.2.4) \]

In particular, if \( \alpha_k = (x, x_k) \), this result yields (3.2.2). From (3.2.2) it follows that

\[ 0 \le \|x\|^2 - \sum_{k=1}^{n} |(x, x_k)|^2, \]

which gives (3.2.3). The proof is complete.

Remarks. 1. Note that expression (3.2.4) is minimized by taking \( \alpha_k = (x, x_k) \). This choice of \( \alpha_k \) minimizes \( \|x - \sum_{k=1}^{n} \alpha_k x_k\| \), and thus it provides the best approximation of \( x \) by a linear combination of the vectors \( \{x_1, \ldots, x_n\} \).
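Bessel's equality and the best-approximation remark can both be verified numerically. The sketch below assumes numpy; the orthonormal vectors are obtained from a QR factorization of random illustrative data, not from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
q, _ = np.linalg.qr(rng.normal(size=(6, 3)))   # columns: orthonormal x_1, x_2, x_3
x = rng.normal(size=6)

coeffs = q.T @ x                   # the inner products (x, x_k)
proj = q @ coeffs                  # sum_k (x, x_k) x_k

# Bessel's equality (3.2.2): both sides agree
lhs = np.linalg.norm(x - proj) ** 2
rhs = np.linalg.norm(x) ** 2 - np.sum(coeffs ** 2)
print(lhs - rhs)                   # ~0

# any other coefficients alpha_k give a worse approximation, by (3.2.4)
alpha = coeffs + 0.1 * rng.normal(size=3)
worse = np.linalg.norm(x - q @ alpha) ** 2
print(worse >= lhs)                # True
```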

2. If \( \{x_n\} \) is an infinite orthonormal sequence of vectors in an inner product space \( E \), then from (3.2.3), by letting \( n \to \infty \), we obtain

\[ \sum_{k=1}^{\infty} |(x, x_k)|^2 \le \|x\|^2. \qquad (3.2.5) \]

This shows that the series \( \sum_{k=1}^{\infty} |(x, x_k)|^2 \) converges for every \( x \in E \). In other words, the sequence \( \{(x, x_n)\} \) is an element of \( l^2 \). We can say that an orthonormal sequence in \( E \) induces a mapping from \( E \) into \( l^2 \). The expansion

\[ x \sim \sum_{n=1}^{\infty} (x, x_n)\,x_n \qquad (3.2.6) \]

is called a generalized Fourier series of \( x \) with respect to the orthonormal sequence \( \{x_n\} \). This set of coefficients gives the best approximation. In general we do not know whether the series in (3.2.6) is convergent. As the next theorem shows, the completeness of \( E \) ensures the convergence.

Theorem 3.2.3: Let \( \{x_n\} \) be an orthonormal sequence in a Hilbert space \( \mathbf{H} \) and let \( \{\alpha_n\} \) be a sequence of complex numbers. Then the series \( \sum_{n=1}^{\infty} \alpha_n x_n \) converges if and only if \( \sum_{n=1}^{\infty} |\alpha_n|^2 < \infty \), and in that case

\[ \left\| \sum_{n=1}^{\infty} \alpha_n x_n \right\|^2 = \sum_{n=1}^{\infty} |\alpha_n|^2. \qquad (3.2.7) \]

Proof. For every \( m > k > 0 \), we have

\[ \left\| \sum_{n=k}^{m} \alpha_n x_n \right\|^2 = \sum_{n=k}^{m} |\alpha_n|^2. \qquad (3.2.8) \]

If \( \sum_{n=1}^{\infty} |\alpha_n|^2 < \infty \), then, by (3.2.8), the sequence \( s_m = \sum_{n=1}^{m} \alpha_n x_n \) is a Cauchy sequence. This implies convergence of the series \( \sum_{n=1}^{\infty} \alpha_n x_n \), because of the completeness of \( \mathbf{H} \).

Conversely, if the series \( \sum_{n=1}^{\infty} \alpha_n x_n \) converges, then the same formula (3.2.8) implies the convergence of \( \sum_{n=1}^{\infty} |\alpha_n|^2 \), because the sequence of numbers \( \sigma_m = \sum_{n=1}^{m} |\alpha_n|^2 \) is a Cauchy sequence in \( \mathbf{R} \).

To obtain (3.2.7) it is enough to take \( k = 1 \) and let \( m \to \infty \) in (3.2.8).

Definition 3.2.1: (Complete Sequence). An orthonormal sequence \( \{x_n\} \) in a Hilbert space \( \mathbf{H} \) is said to be complete if for every \( x \in \mathbf{H} \) we have

\[ x = \sum_{n=1}^{\infty} (x, x_n)\,x_n. \qquad (3.2.9) \]

Since the right side of (3.2.9) is a series, the equality means

\[ \lim_{n \to \infty} \left\| x - \sum_{k=1}^{n} (x, x_k)\,x_k \right\| = 0, \]

where \( \|\cdot\| \) is the norm in \( \mathbf{H} \). For example, if \( \mathbf{H} = \mathbf{L}^2([-\pi, \pi]) \) and \( \{f_n\} \) is an orthonormal sequence in \( \mathbf{H} \), then by

\[ f = \sum_{n=1}^{\infty} (f, f_n)\,f_n \]

we mean

\[ \lim_{n \to \infty} \int_{-\pi}^{\pi} \left| f(t) - \sum_{k=1}^{n} \alpha_k f_k(t) \right|^2 dt = 0, \qquad \alpha_k = \int_{-\pi}^{\pi} f(t)\,\overline{f_k(t)}\,dt. \]

This, in general, does not imply the pointwise convergence \( f(x) = \sum_{n=1}^{\infty} (f, f_n)\,f_n(x) \).
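The \( \mathbf{L}^2([-\pi,\pi]) \) example can be made concrete. In the sketch below, which assumes numpy, \( f(t) = t \) and the orthonormal functions \( f_n(t) = \sin(nt)/\sqrt{\pi} \) are illustrative choices; the \( L^2 \) error of the partial sums decreases as more terms are taken, even though the partial sums do not converge pointwise at the endpoints.

```python
import numpy as np

t = np.linspace(-np.pi, np.pi, 4001)
w = np.full(t.size, t[1] - t[0])
w[0] = w[-1] = (t[1] - t[0]) / 2        # trapezoid quadrature weights
f = t                                    # the function being expanded

def l2_error(n_terms):
    """L^2 distance between f and its generalized Fourier partial sum."""
    approx = np.zeros_like(t)
    for n in range(1, n_terms + 1):
        fn = np.sin(n * t) / np.sqrt(np.pi)   # orthonormal f_n
        alpha = np.sum(w * f * fn)            # (f, f_n)
        approx += alpha * fn
    return np.sqrt(np.sum(w * (f - approx) ** 2))

print(l2_error(5), l2_error(50))   # the error decreases
```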

The following theorem gives an important characterization of complete orthonormal sequences [6], [10], [17].

Theorem 3.2.4: An orthonormal sequence \( \{x_n\} \) in a Hilbert space \( \mathbf{H} \) is complete if and only if the condition \( (x, x_n) = 0 \) for all \( n \in \mathbf{N} \) implies \( x = 0 \).

Proof. Suppose \( \{x_n\} \) is complete in \( \mathbf{H} \). Then every \( x \in \mathbf{H} \) has the representation

\[ x = \sum_{n=1}^{\infty} (x, x_n)\,x_n. \]

Thus, if \( (x, x_n) = 0 \) for every \( n \in \mathbf{N} \), then \( x = 0 \).

Conversely, suppose the condition \( (x, x_n) = 0 \) for all \( n \) implies \( x = 0 \). Let \( x \) be an element of \( \mathbf{H} \). Define

\[ y = \sum_{n=1}^{\infty} (x, x_n)\,x_n. \]

The sum \( y \) exists in \( \mathbf{H} \) by (3.2.5) and Theorem 3.2.3. Since, for every \( n \in \mathbf{N} \),

\[ (x - y, x_n) = (x, x_n) - \left( \sum_{k=1}^{\infty} (x, x_k)\,x_k,\; x_n \right) = (x, x_n) - \sum_{k=1}^{\infty} (x, x_k)(x_k, x_n) = (x, x_n) - (x, x_n) = 0, \]

we have \( x - y = 0 \), and hence

\[ x = \sum_{n=1}^{\infty} (x, x_n)\,x_n. \]

CHAPTER 4

Linear Operators in Hilbert Spaces

In an integral equation the unknown function occurs under an integral sign and

thus, if the functions involved belong to a Hilbert space, it is clear that we have to deal

with integral operators acting on a Hilbert space of functions.

Linear operators (or transformations) [6], [10], [17] on a normed vector space are widely used to represent physical quantities. The most important operators include differential, integral, and matrix operators. The major objective of this chapter is to

introduce linear operators in Hilbert spaces with some attention to different kinds of

operators and their basic properties. Although linear mappings may be from a vector space \( E_1 \) into a vector space \( E_2 \), we are interested mostly in the case when \( E_1 = E_2 = E \), where \( E \) is a Hilbert space or a subspace of a Hilbert space. However, in order to avoid unduly difficult concepts, we shall suppose that our functions and kernels are square integrable, without usually specifying the sense in which the integrals are to be performed.

It is useful to place linear integral operators in a more general context, so this

chapter will conclude with an introduction to the theory of linear operators in an abstract

Hilbert space.

4.1 Linear Integral Operators

We consider the linear integral operator \( \mathbf{K} \) given by

\[ \mathbf{K} = \int_a^b K(x, s)\,ds, \qquad (4.1.1) \]

where \( K(x, s) \) is a square integrable kernel. This is an abbreviation for

\[ (\mathbf{K}\phi)(x) = \int_a^b K(x, s)\,\phi(s)\,ds, \qquad (4.1.2) \]

where \( \phi(s) \) is a square integrable function; in symbolic form,

\[ \Psi = \mathbf{K}\phi. \qquad (4.1.3) \]

The operator \( \mathbf{K} \) is linear since

\[ \mathbf{K}(\lambda_1 \phi_1 + \lambda_2 \phi_2) = \lambda_1 \mathbf{K}\phi_1 + \lambda_2 \mathbf{K}\phi_2, \qquad (4.1.4) \]

where \( \lambda_1, \lambda_2 \) are constants and \( \phi_1, \phi_2 \) are square integrable functions.

If

\[ \mathbf{L} = \int_a^b L(x, s)\,ds \qquad (4.1.5) \]

is a second integral operator, we have

\[ \chi = \mathbf{L}\Psi = \mathbf{L}(\mathbf{K}\phi), \qquad (4.1.6) \]

where

\[ \chi(x) = \int_a^b L(x, t)\,dt \int_a^b K(t, s)\,\phi(s)\,ds = \int_a^b P(x, s)\,\phi(s)\,ds \qquad (4.1.7) \]

and

\[ P(x, s) = \int_a^b L(x, t)\,K(t, s)\,dt, \qquad (4.1.8) \]

that is,

\[ \chi = \mathbf{P}\phi, \qquad (4.1.9) \]

where \( \mathbf{P} = \mathbf{L}\mathbf{K} \) is the integral operator with kernel \( P(x, s) \).

Integral operators satisfy the associative law

\[ \mathbf{M}(\mathbf{L}\mathbf{K}) = (\mathbf{M}\mathbf{L})\mathbf{K} \qquad (4.1.10) \]

and the distributive laws

\[ \mathbf{M}(\mathbf{L} + \mathbf{K}) = \mathbf{M}\mathbf{L} + \mathbf{M}\mathbf{K}, \qquad (4.1.11) \]

\[ (\mathbf{L} + \mathbf{K})\mathbf{M} = \mathbf{L}\mathbf{M} + \mathbf{K}\mathbf{M}, \qquad (4.1.12) \]

but, in general, linear operators, and integral operators in particular, do not satisfy the commutative law.

Using the associative law we see that

\[ \mathbf{K}^m \mathbf{K}^n = \mathbf{K}^{m+n}, \qquad (\mathbf{K}^m)^n = \mathbf{K}^{mn} \qquad (m, n \ge 1), \qquad (4.1.13) \]

where

\[ \mathbf{K}^n = \int_a^b K_n(x, s)\,ds \qquad (4.1.14) \]

and \( K_n(x, s) \) is the iterated kernel.

4.1.1 Norm of an Integral Operator

If \( K(x, s) \) is a square integrable kernel, its (Hilbert-Schmidt) norm is defined by

\[ \|\mathbf{K}\|_2 = \left[ \int_a^b \int_a^b |K(x, s)|^2\,dx\,ds \right]^{1/2}. \qquad (4.1.1.1) \]

Then if \( \phi(s) \) is a square integrable function and \( \Psi(x) \) is given by (4.1.2), using Schwarz's inequality we have

\[ |\Psi(x)|^2 \le \int_a^b |K(x, s)|^2\,ds \int_a^b |\phi(s)|^2\,ds, \]

which yields

\[ \int_a^b |\Psi(x)|^2\,dx \le \int_a^b \int_a^b |K(x, s)|^2\,dx\,ds \int_a^b |\phi(s)|^2\,ds, \]

so that

\[ \|\mathbf{K}\phi\| = \|\Psi\| \le \|\mathbf{K}\|_2 \|\phi\| < \infty, \qquad (4.1.1.2) \]

and thus \( \Psi(x) \) is square integrable. It follows that \( \|\mathbf{K}\| \le \|\mathbf{K}\|_2 \), where \( \|\mathbf{K}\| \) is the operator norm and \( \|\mathbf{K}\|_2 \) is the Hilbert-Schmidt norm in \( \mathbf{L}^2 \) of the operator \( \mathbf{K} \).

When = 0 then = 0 so ( ) = 0 for all so that vanishes

2 ‘almost everywhere’,‖푲‖ for all square‖훹 integrabl‖ e‖ functions퐾ϕ ‖ , and is휑 called a null푲ϕ operator.

Also, if ( , ) and ( , ) are square integrableϕ kernels,푲 then ( , ) given by

(4.1.8) is square 퐿integrable푥 푡 since,퐾 푡 푠by Schwarz’s inequality, 푃 푥 푠

| ( , )| | ( , )| | ( , )| 푏 푏 2 2 2 푃 푥 푠 ≤ �푎 퐿 푥 푡 푑푡 �푎 퐾 푡 푠 푑푡 so that

| ( , )| | ( , )| | ( , )| . 푏 푏 푏 푏 푏 푏 2 2 2 �푎 �푎 푃 푥 푠 푑푥푑푠 ≤ �푎 �푎 퐿 푥 푡 푑푥푑푡 �푎 �푎 퐾 푡 푠 푑푡푑푠 For left hand side we have

| ( , )| = ( | ( , )| ) 푏 푏 푏 푏 2 2 �푠=푎 �푥=푎 푃 푥 푠 푑푥푑푠 �푎 �푎 푃 푥 푠 푑푥 푑푠 ( ( | ( , )| )( | ( , )| ) ) 푏 푏 푏 푏 2 2 ≤ �푠=푎 �푥=푎 �푡=푎 퐿 푥 푡 푑푡 �푡=푎 퐾 푡 푥 푑푡 푑푥 푑푠 = ( | ( , )| ) ( | ( , )| ) 푏 푏 푏 푏 2 2 �푠=푎 � �푡=푎 퐾 푡 푠 푑푡 �푥=푎 �푡=푎 퐿 푥 푡 푑푡 푑푥� 푑푠 = | ( , )| | ( , )| 푏 푏 푏 푏 2 2 ��푥=푎 �푡=푎 퐿 푥 푡 푑푡푑푥� ��푠=푎 �푡=푎 퐾 푡 푠 푑푡푑푠� = | ( , )| | ( , )| , 푏 푏 푏 푏 2 2 ��푎 �푎 퐿 푥 푡 푑푡푑푥� ��푎 �푎 퐾 푡 푠 푑푡푑푠� giving

< . (4.1.1.3)

2 2 2 Putting = we see that‖ 푳푲‖ ≤ ‖푳‖ ‖푲 ‖and in∞ general 2 2 2 2 푳 푲 ‖푲 ‖ ≤ ‖ 푲 ‖ ( = 1,2,3, … ) (4.1.1.4) 푛 푛 2 2 ‖푲 ‖ ≤ ‖푲‖ 푛
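The bound (4.1.1.2) is easy to see numerically. In the sketch below, the separable kernel $K(x,s) = xs$ on $[0,1]$ and the function $\phi(s) = 1$ are illustrative choices, not taken from the text; for this kernel $\|\mathbf{K}\|_2 = 1/3$ exactly.

```python
# Check ||K phi|| <= ||K||_2 ||phi||  (4.1.1.2)  by the midpoint rule
# for the illustrative kernel K(x,s) = x*s on [0,1] with phi(s) = 1.
import math

n = 400
h = 1.0 / n
xs = [(i + 0.5) * h for i in range(n)]

K = lambda x, s: x * s
phi = lambda s: 1.0

hs_norm = math.sqrt(h * h * sum(K(x, s) ** 2 for x in xs for s in xs))
phi_norm = math.sqrt(h * sum(phi(s) ** 2 for s in xs))
Kphi = [h * sum(K(x, s) * phi(s) for s in xs) for x in xs]
Kphi_norm = math.sqrt(h * sum(v * v for v in Kphi))

print(hs_norm)    # ~ 1/3, the exact Hilbert-Schmidt norm of x*s
print(Kphi_norm)  # ~ sqrt(1/12) ≈ 0.2887, which is indeed <= (1/3)*1
```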


4.1.2 Hermitian Adjoint

The Hermitian adjoint of a kernel $K(x,s)$ is defined to be the kernel

$$K^*(x,s) = \overline{K(s,x)}. \qquad (4.1.2.1)$$

We see that

$$(\lambda\mathbf{K})^* = \bar{\lambda}\mathbf{K}^*, \qquad \mathbf{K}^{**} = \mathbf{K}. \qquad (4.1.2.2)$$

Also

$$(LK)^*(x,s) = \overline{(LK)(s,x)} = \int_a^b \overline{L(s,t)}\,\overline{K(t,x)}\,dt = \int_a^b K^*(x,t)L^*(t,s)\,dt = (K^*L^*)(x,s),$$

and so

$$(\mathbf{L}\mathbf{K})^* = \mathbf{K}^*\mathbf{L}^*. \qquad (4.1.2.3)$$

Further, using the definition of the inner product, we have

$$(\mathbf{K}\phi, \Psi) = \int_a^b (\mathbf{K}\phi)(x)\overline{\Psi(x)}\,dx = \int_{x=a}^{b}\!\int_{t=a}^{b} K(x,t)\phi(t)\overline{\Psi(x)}\,dt\,dx$$

$$= \int_{t=a}^{b}\phi(t)\,\overline{\left(\int_a^b \overline{K(x,t)}\,\Psi(x)\,dx\right)}\,dt = \int_{t=a}^{b}\phi(t)\,\overline{\left(\int_a^b K^*(t,x)\Psi(x)\,dx\right)}\,dt = (\phi, \mathbf{K}^*\Psi), \qquad (4.1.2.4)$$

where

$$(\mathbf{K}^*\Psi)(t) = \int_a^b K^*(t,x)\Psi(x)\,dx, \qquad K^*(t,x) = \overline{K(x,t)}.$$

If $\mathbf{K}^* = \mathbf{K}$, the kernel is called Hermitian or self adjoint.

4.2 Bounded Linear Operators

So far we have been concerned with linear integral operators acting on a space of square integrable functions which, if chosen to be all the functions which are square integrable in the Lebesgue sense, would form a Hilbert space of functions $L^2$.

We shall now turn our attention to the case of an abstract Hilbert space, acted on by bounded linear operators.

Consider a non-empty subset $D$ of an abstract Hilbert space $H$ such that if $f, g \in D$ then $\lambda f + \mu g \in D$, where $\lambda, \mu$ are arbitrary complex numbers; then $D$ is called a linear manifold. We note that a linear manifold must contain the zero element, since $f + (-1)f = 0$.

Suppose that to each element $f \in D$ we assign an element $\mathbf{K}f \in H$; then $\mathbf{K}$ maps $D$ onto $\Delta$ and is called an operator with domain $D$, where

$$\Delta = \mathrm{range}(\mathbf{K}) = \{\mathbf{K}f : f \in D\}.$$

If

$$\mathbf{K}(\lambda f + \mu g) = \lambda\mathbf{K}f + \mu\mathbf{K}g \qquad (4.2.1)$$

for $f, g \in D$ and complex numbers $\lambda$ and $\mu$, then $\Delta$ is also a linear manifold and $\mathbf{K}$ is a linear operator with domain $D$ and range $\Delta$.

A linear operator $\mathbf{K}$ having the Hilbert space $H$ as domain is bounded if there exists a constant $C \ge 0$ such that $\|\mathbf{K}f\| \le C\|f\|$ for all $f \in H$.

The norm $\|\mathbf{K}\|$ of the bounded linear operator $\mathbf{K}$ is defined as the smallest possible value of $C$. Thus $\|\mathbf{K}\|$ is the least upper bound, or supremum, of $\|\mathbf{K}f\|/\|f\|$, that is,

$$\|\mathbf{K}\| = \sup_{f \in H}\frac{\|\mathbf{K}f\|}{\|f\|}, \qquad (4.2.2)$$

and so

$$\|\mathbf{K}f\| \le \|\mathbf{K}\|\|f\|. \qquad (4.2.3)$$

This is a generalization of the inequality (4.1.1.2) for the linear integral operator (4.1.1) with square integrable kernel. Clearly (4.1.1) is a bounded linear operator with $\|\mathbf{K}\| \le \|\mathbf{K}\|_2$.

The linear operator $\mathbf{K}$ has an adjoint operator $\mathbf{K}^*$ defined so that

$$(\mathbf{K}f, g) = (f, \mathbf{K}^*g) \qquad (4.2.4)$$

for all $f, g \in H$. If $\mathbf{K} = \mathbf{K}^*$ the operator is self adjoint or Hermitian.

Now by Schwarz's inequality and (4.2.3), we have

$$|(\mathbf{K}f, g)| \le \|\mathbf{K}f\|\|g\| \le \|\mathbf{K}\|\|f\|\|g\|, \qquad (4.2.5)$$

and also

$$|(f, \mathbf{K}^*g)| \le \|\mathbf{K}^*\|\|f\|\|g\|. \qquad (4.2.6)$$

We can write

$$\|\mathbf{K}^*f\|^2 = (\mathbf{K}^*f, \mathbf{K}^*f) = (\mathbf{K}\mathbf{K}^*f, f) \le \|\mathbf{K}\mathbf{K}^*f\|\|f\| \le \|\mathbf{K}\|\|\mathbf{K}^*f\|\|f\|.$$

Dividing both sides of this inequality by $\|\mathbf{K}^*f\|$ we get

$$\|\mathbf{K}^*f\| \le \|\mathbf{K}\|\|f\|.$$

This shows that

$$\|\mathbf{K}^*\| \le \|\mathbf{K}\|.$$

Now we use $\mathbf{K} = \mathbf{K}^{**}$ to get

$$\|\mathbf{K}\| = \|(\mathbf{K}^*)^*\| \le \|\mathbf{K}^*\|.$$

We have the inequality in both directions, so

$$\|\mathbf{K}^*\| = \|\mathbf{K}\| \qquad (4.2.7)$$

and $\mathbf{K}^*$ is also a bounded linear operator.

Any bounded operator $\mathbf{K}$ is a continuous linear operator which transforms a strongly convergent sequence $\{f_n\}$ into a strongly convergent sequence $\{\mathbf{K}f_n\}$. We have

$$\|\mathbf{K}f_n - \mathbf{K}f\| = \|\mathbf{K}(f_n - f)\| \le \|\mathbf{K}\|\|f_n - f\|,$$

so that if $f_n \to f$ as $n \to \infty$ then

$$\|\mathbf{K}f_n - \mathbf{K}f\| \to 0,$$

that is, $\mathbf{K}f_n$ is strongly convergent to $\mathbf{K}f$.

Actually every continuous linear operator $\mathbf{K}$ with domain $H$ is bounded. For otherwise there would exist a sequence $\{f_n\}$ such that $\|\mathbf{K}f_n\| \ge n\|f_n\|$, so that $\|\mathbf{K}g_n\| \ge 1$ where $g_n = f_n/(n\|f_n\|)$; but $g_n \to 0$ as $n \to \infty$, which is contrary to the hypothesis that $\mathbf{K}$ is continuous.


We see that the linear integral operator (4.1.1), with a square integrable kernel, is continuous.

If $\mathbf{K}$ and $\mathbf{L}$ are two bounded linear operators which map $H$ onto itself, then the product operator $\mathbf{L}\mathbf{K}$ corresponds to

$$(\mathbf{L}\mathbf{K})f = \mathbf{L}(\mathbf{K}f).$$

Then

$$\|\mathbf{L}\mathbf{K}f\| \le \|\mathbf{L}\|\|\mathbf{K}f\| \le \|\mathbf{L}\|\|\mathbf{K}\|\|f\|,$$

so that

$$\|\mathbf{L}\mathbf{K}\| \le \|\mathbf{L}\|\|\mathbf{K}\| \qquad (4.2.8)$$

and therefore $\mathbf{L}\mathbf{K}$ is a bounded operator.

Also

$$(\mathbf{L}\mathbf{K}f, g) = (\mathbf{K}f, \mathbf{L}^*g) = (f, \mathbf{K}^*\mathbf{L}^*g),$$

and hence

$$(\mathbf{L}\mathbf{K})^* = \mathbf{K}^*\mathbf{L}^*. \qquad (4.2.9)$$

Lastly, it is interesting to observe that the norms of bounded linear operators $\mathbf{K}$ and $\mathbf{L}$ satisfy the triangle inequality. For, using the triangle inequality for the elements of Hilbert space, we have

$$\|\mathbf{K}f + \mathbf{L}f\| \le \|\mathbf{K}f\| + \|\mathbf{L}f\| \le (\|\mathbf{K}\| + \|\mathbf{L}\|)\|f\|,$$

so that

$$\|\mathbf{K} + \mathbf{L}\| \le \|\mathbf{K}\| + \|\mathbf{L}\|. \qquad (4.2.10)$$
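On the finite-dimensional Hilbert space $\mathbb{C}^n$ every linear operator is a matrix, and the operator norm (4.2.2) is the largest singular value. The sketch below, with arbitrary illustrative matrices not taken from the text, estimates that norm by power iteration on $A^T A$ and checks (4.2.8) and (4.2.10).

```python
# Operator-norm inequalities on R^3: matrices as bounded operators.

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    Bt = transpose(B)
    return [[sum(a * b for a, b in zip(row, col)) for col in Bt] for row in A]

def matadd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def opnorm(A, iters=200):
    # power iteration on A^T A: returns the largest singular value of A
    v = [1.0] * len(A[0])
    for _ in range(iters):
        w = matvec(transpose(A), matvec(A, v))
        n = sum(x * x for x in w) ** 0.5
        v = [x / n for x in w]
    w = matvec(transpose(A), matvec(A, v))
    return sum(x * y for x, y in zip(v, w)) ** 0.5

K = [[1.0, 2.0, 0.0], [0.0, 1.0, 3.0], [1.0, 0.0, 1.0]]   # illustrative
L = [[2.0, 0.0, 1.0], [1.0, 1.0, 0.0], [0.0, 2.0, 1.0]]

print(opnorm(matmul(L, K)) <= opnorm(L) * opnorm(K) + 1e-9)   # (4.2.8): True
print(opnorm(matadd(K, L)) <= opnorm(K) + opnorm(L) + 1e-9)   # (4.2.10): True
```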


Suppose that $\phi_1, \phi_2, \ldots, \phi_r, \ldots$ is a complete orthonormal system, or basis, in $H$. Then the matrix with elements

$$k_{rs} = (\mathbf{K}\phi_r, \phi_s) \qquad (4.2.1.1)$$

is called the kernel matrix of the bounded operator $\mathbf{K}$.

Introducing the generalized Fourier coefficients

$$x_r = (f, \phi_r), \qquad y_s = (g, \phi_s), \qquad (4.2.1.2)$$

where $f, g \in H$, we have

$$f = \sum_{r=1}^{\infty} x_r\phi_r, \qquad g = \sum_{s=1}^{\infty} y_s\phi_s, \qquad (4.2.1.3)$$

and

$$\mathbf{K}f = \sum_{r=1}^{\infty} x_r\mathbf{K}\phi_r = \sum_{r=1}^{\infty}\sum_{s=1}^{\infty} x_r k_{rs}\phi_s,$$

so that $(\mathbf{K}f, g)$ has the bilinear form

$$(\mathbf{K}f, g) = \sum_{r=1}^{\infty}\sum_{s=1}^{\infty} x_r k_{rs}\bar{y}_s. \qquad (4.2.1.4)$$

Since $\mathbf{K}$ is bounded it follows that

$$|(\mathbf{K}f, g)| \le \|\mathbf{K}\|\|f\|\|g\| = \|\mathbf{K}\|\left(\sum_{r=1}^{\infty}|x_r|^2\right)^{1/2}\left(\sum_{s=1}^{\infty}|y_s|^2\right)^{1/2}. \qquad (4.2.1.5)$$

Hence

$$\left|\sum_{r=1}^{\infty}\sum_{s=1}^{\infty} x_r k_{rs}\bar{y}_s\right| \le \|\mathbf{K}\|\left(\sum_{r=1}^{\infty}|x_r|^2\right)^{1/2}\left(\sum_{s=1}^{\infty}|y_s|^2\right)^{1/2}, \qquad (4.2.1.6)$$

and thus the bilinear form (4.2.1.4) is also bounded.

4.3 Completely Continuous Operators

If a sequence is strongly convergent it is also weakly convergent. For, by Schwarz's inequality, we have

$$|(f_n - f, g)| \le \|f_n - f\|\|g\|,$$

and thus if $\|f_n - f\| \to 0$ as $n \to \infty$ then $(f_n, g) \to (f, g)$ as $n \to \infty$. However, the converse need not be true: we may have $f_n \xrightarrow{w} f$ but $f_n \nrightarrow f$ as $n \to \infty$.

Let us suppose that $\mathbf{K}$ is a bounded linear operator in $H$. Then

$$|(\mathbf{K}(f_n - f), g)| \le \|\mathbf{K}(f_n - f)\|\|g\| \le \|\mathbf{K}\|\|f_n - f\|\|g\|,$$

and

$$\|\mathbf{K}f_n - \mathbf{K}f\| = \|\mathbf{K}(f_n - f)\| \le \|\mathbf{K}\|\|f_n - f\|.$$

So $f_n \to f$ strongly implies $\mathbf{K}f_n \to \mathbf{K}f$ strongly; that is, a bounded operator $\mathbf{K}$ is continuous: $\|f_n - f\| \to 0$ implies $\|\mathbf{K}f_n - \mathbf{K}f\| \to 0$.

The operator $\mathbf{K}$ is called completely continuous (or compact) in $H$ if it maps weakly convergent sequences into strongly convergent ones, i.e. if $f_n \xrightarrow{w} f$ implies $\|\mathbf{K}f_n - \mathbf{K}f\| \to 0$ as $n \to \infty$.

Every completely continuous operator is bounded: a completely continuous operator is in particular a continuous operator, and this means that it is bounded.

If the kernel matrix $(k_{rs})$ of a bounded linear operator $\mathbf{K}$ satisfies

$$\sum_{r=1}^{\infty}\sum_{s=1}^{\infty}|k_{rs}|^2 < \infty, \qquad (4.3.1)$$

then $\mathbf{K}$ is completely continuous. For we have, using (4.2.1.4), that

$$(\mathbf{K}(f_n - f), \phi_s) = \sum_{r=1}^{\infty}\bigl(x_r^{(n)} - x_r\bigr)k_{rs}, \qquad (4.3.2)$$

where $x_r^{(n)}$ and $x_r$ are the Fourier coefficients of $f_n$ and $f$ respectively. This gives, using Cauchy's inequality,

$$\|\mathbf{K}(f_n - f)\|^2 \le \sum_{s=1}^{m}|(\mathbf{K}(f_n - f), \phi_s)|^2 + \sum_{s=m+1}^{\infty}\sum_{r=1}^{\infty}|k_{rs}|^2\,\|f_n - f\|^2 \qquad (4.3.3)$$

for any $m$; the choice of $m$ will be made below.

First we observe that (4.3.1) makes $\mathbf{K}$ bounded. Since $\{\phi_s\}$ is a complete orthonormal system,

$$\|\mathbf{K}(f_n - f)\|^2 = \sum_{s=1}^{\infty}|(\mathbf{K}(f_n - f), \phi_s)|^2 = \sum_{s=1}^{\infty}\left|\sum_{r=1}^{\infty}\bigl(x_r^{(n)} - x_r\bigr)k_{rs}\right|^2 \le \sum_{s=1}^{\infty}\left(\sum_{r=1}^{\infty}\bigl|x_r^{(n)} - x_r\bigr|^2\right)\left(\sum_{r=1}^{\infty}|k_{rs}|^2\right)$$

by Cauchy-Schwarz, and this equals

$$\sum_{s=1}^{\infty}\|f_n - f\|^2\sum_{r=1}^{\infty}|k_{rs}|^2 = \|f_n - f\|^2\sum_{s=1}^{\infty}\sum_{r=1}^{\infty}|k_{rs}|^2.$$

Taking square roots gives

$$\|\mathbf{K}(f_n - f)\| \le \Bigl(\sum_{s}\sum_{r}|k_{rs}|^2\Bigr)^{1/2}\|f_n - f\|.$$

This shows that $\mathbf{K}$ is bounded, and hence continuous, and that the operator norm satisfies

$$\|\mathbf{K}\|_{op} \le \Bigl(\sum_{s}\sum_{r}|k_{rs}|^2\Bigr)^{1/2}.$$

It follows that $\mathbf{K}^*$ exists and is continuous. We shall also use the fact that weakly convergent sequences are bounded, so that there is a finite $B$ with $\|f_n - f\|^2 < B$ for all $n$.

Now suppose $f_n \xrightarrow{w} f$ and let $\varepsilon > 0$. As in (4.3.3), split the sum into two parts:

$$\|\mathbf{K}f_n - \mathbf{K}f\|^2 = \sum_{s=1}^{m}|(\mathbf{K}(f_n - f), \phi_s)|^2 + \sum_{s=m+1}^{\infty}|(\mathbf{K}(f_n - f), \phi_s)|^2 \le \sum_{s=1}^{m}|(\mathbf{K}(f_n - f), \phi_s)|^2 + \|f_n - f\|^2\sum_{s=m+1}^{\infty}\sum_{r=1}^{\infty}|k_{rs}|^2.$$

Since $\sum_{s=1}^{\infty}\sum_{r=1}^{\infty}|k_{rs}|^2$ converges to a finite sum, we can select $m$ large enough so that

$$\sum_{s=m+1}^{\infty}\sum_{r=1}^{\infty}|k_{rs}|^2 < \frac{\varepsilon^2}{B}$$

(the 'tail' of a convergent infinite series is small). Now we have

$$\|\mathbf{K}f_n - \mathbf{K}f\|^2 \le \sum_{s=1}^{m}|(\mathbf{K}(f_n - f), \phi_s)|^2 + B\,\frac{\varepsilon^2}{B} = \sum_{s=1}^{m}\bigl|\bigl(f_n - f,\,\mathbf{K}^*\phi_s\bigr)\bigr|^2 + \varepsilon^2.$$

Now we use the weak convergence $m$ times: for each $s = 1, 2, \ldots, m$ there is an $N_s$ such that

$$n \ge N_s \implies \bigl|\bigl(f_n - f,\,\mathbf{K}^*\phi_s\bigr)\bigr|^2 < \frac{\varepsilon^2}{m}.$$

If $n \ge \max\{N_1, N_2, \ldots, N_m\}$ then

$$\|\mathbf{K}f_n - \mathbf{K}f\|^2 \le \sum_{s=1}^{m}\frac{\varepsilon^2}{m} + \varepsilon^2 = 2\varepsilon^2.$$

This can be done for each $\varepsilon > 0$, so $\|\mathbf{K}f_n - \mathbf{K}f\| \to 0$ as required. This shows that $f_n \xrightarrow{w} f$ implies $\mathbf{K}f_n \to \mathbf{K}f$ strongly; that is, $\mathbf{K}$ is completely continuous as claimed.

Any finite dimensional linear operator is completely continuous since, for this case, the double sum on the left-hand side of (4.3.1) contains only a finite number of terms.

As an example of a completely continuous operator we consider the integral operator $\mathbf{K}$ over the space of square integrable functions [6], [10], [17] given by

$$g(x) = \int_a^b K(x,s)f(s)\,ds, \qquad (a \le x \le b), \qquad (4.3.1.1)$$

where the kernel is square integrable and satisfies

$$\int_a^b |K(x,s)|^2\,ds < \infty \quad (a \le x \le b), \qquad \int_a^b |K(x,s)|^2\,dx < \infty \quad (a \le s \le b), \qquad \int_a^b\!\!\int_a^b |K(x,s)|^2\,dx\,ds < \infty. \qquad (4.3.1.2)$$

By Schwarz's inequality we have

$$|g(x)|^2 \le \int_a^b |K(x,s)|^2\,ds\int_a^b |f(s)|^2\,ds,$$

and so

$$\int_a^b |g(x)|^2\,dx \le \int_a^b\!\!\int_a^b |K(x,s)|^2\,ds\,dx\int_a^b |f(s)|^2\,ds,$$

that is,

$$\|g\|^2 \le \int_a^b\!\!\int_a^b |K(x,s)|^2\,ds\,dx\,\|f\|^2. \qquad (4.3.1.3)$$

Hence

$$\|\mathbf{K}\| \le \left(\int_a^b\!\!\int_a^b |K(x,s)|^2\,ds\,dx\right)^{1/2} = \|\mathbf{K}\|_2, \qquad (4.3.1.4)$$

since the norm of $\mathbf{K}$ is the least upper bound of $\|g\|/\|f\|$.

Now consider any complete orthonormal system $\phi_1(x), \phi_2(x), \ldots, \phi_r(x), \ldots$ in the Hilbert space $L^2$ of functions defined over $a \le x \le b$. We let

$$k_{rs} = \int_a^b\!\!\int_a^b \overline{\phi_s(x)}\,K(x,t)\phi_r(t)\,dx\,dt, \qquad (4.3.1.5)$$

$$x_r = \int_a^b f(t)\overline{\phi_r(t)}\,dt, \qquad (4.3.1.6)$$

and

$$y_s = \int_a^b g(x)\overline{\phi_s(x)}\,dx. \qquad (4.3.1.7)$$

Then

$$y_s = \int_a^b\!\!\int_a^b K(x,t)f(t)\overline{\phi_s(x)}\,dx\,dt. \qquad (4.3.1.8)$$

But

$$\int_a^b K(x,t)\overline{\phi_s(x)}\,dx = \sum_{r=1}^{\infty} k_{rs}\overline{\phi_r(t)} \qquad (4.3.1.9)$$

and

$$f(t) = \sum_{r=1}^{\infty} x_r\phi_r(t) \qquad (4.3.1.10)$$

for almost all values of $t$, that is, except for a set of values of $t$ of Lebesgue measure zero; hence

$$y_s = \sum_{r=1}^{\infty} x_r k_{rs}. \qquad (4.3.1.11)$$

Also

$$\int_a^b\left|\int_a^b K(x,t)\overline{\phi_s(x)}\,dx\right|^2 dt = \sum_{r=1}^{\infty}|k_{rs}|^2 \qquad (4.3.1.12)$$

and, using Parseval's formula,

$$\int_a^b |K(x,t)|^2\,dx = \sum_{s=1}^{\infty}\left|\int_a^b K(x,t)\overline{\phi_s(x)}\,dx\right|^2. \qquad (4.3.1.13)$$

Consequently

$$\int_a^b\!\!\int_a^b |K(x,t)|^2\,dx\,dt = \sum_{r=1}^{\infty}\sum_{s=1}^{\infty}|k_{rs}|^2, \qquad (4.3.1.14)$$

and so the matrix representation of the integral operator $\mathbf{K}$ with square integrable kernel satisfies the condition (4.3.1), which ensures that $\mathbf{K}$ is completely continuous.

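The identity (4.3.1.14) can be tested numerically. In the sketch below, the separable kernel $K(x,t) = xt$ on $[0,1]$ and the orthonormal sine basis $\phi_n(x) = \sqrt{2}\sin(n\pi x)$ are illustrative choices, not taken from the text; for this kernel the double integral of $|K|^2$ is exactly $1/9$.

```python
# Truncated check of (4.3.1.14) for K(x,t) = x*t with the sine basis.
# Since K is separable, k_rs = a_r * a_s with a_n = ∫ t phi_n(t) dt,
# so sum_{r,s} |k_rs|^2 = (sum_n a_n^2)^2, which should approach 1/9.
import math

n_quad, n_basis = 1500, 300
h = 1.0 / n_quad
ts = [(i + 0.5) * h for i in range(n_quad)]

def a(n):   # (t, phi_n) by the midpoint rule
    return h * sum(t * math.sqrt(2) * math.sin(n * math.pi * t) for t in ts)

coeffs = [a(n) for n in range(1, n_basis + 1)]
partial = sum(c * c for c in coeffs) ** 2     # truncated sum of |k_rs|^2
print(partial)   # close to 1/9 ≈ 0.1111
```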

CHAPTER 5

Linear Integral Equations

5.1 Classification of Linear Integral Equations

Integral equations arise naturally in physics, chemistry, biology and engineering applications modeled by initial value problems for a finite interval [a, b]. More details about the sources and origins of integral equations can be found in [8] and [11].

The most frequently used linear integral equations fall under two main classes, namely Fredholm and Volterra integral equations, together with two related types of integral equations. In particular, the four types are:

1. Fredholm integral equations

2. Volterra integral equations

3. Integro-Differential equations

4. Singular integral equations.

5.1.1 Fredholm Linear Integral Equations

The standard form of a Fredholm linear integral equation is

$$\phi(x)u(x) = f(x) + \lambda\int_a^b K(x,t)u(t)\,dt, \qquad (a \le x, t \le b), \qquad (5.1.1.1)$$

where the kernel of the integral equation $K(x,t)$ and the function $f(x)$ are given in advance, and $\lambda$ is a parameter. The equation (5.1.1.1) is called linear because the unknown function $u(x)$ under the integral sign occurs linearly, i.e. the power of $u(x)$ is one.

The value of $\phi(x)$ gives rise to the following kinds of Fredholm linear integral equations:

1. When $\phi(x) = 0$, equation (5.1.1.1) becomes

$$f(x) + \lambda\int_a^b K(x,t)u(t)\,dt = 0, \qquad (5.1.1.2)$$

and is called a Fredholm integral equation of the first kind.

2. When $\phi(x) = 1$, equation (5.1.1.1) becomes

$$u(x) = f(x) + \lambda\int_a^b K(x,t)u(t)\,dt, \qquad (5.1.1.3)$$

and is called a Fredholm integral equation of the second kind.

5.1.2 Volterra Linear Integral Equations

The standard form of a Volterra linear integral equation, where the limits of integration are functions of $x$ rather than constants, is

$$\phi(x)u(x) = f(x) + \lambda\int_a^x K(x,t)u(t)\,dt. \qquad (5.1.2.1)$$

As in Fredholm equations, Volterra integral equations fall under two kinds:

1. When $\phi(x) = 0$, equation (5.1.2.1) becomes

$$f(x) + \lambda\int_a^x K(x,t)u(t)\,dt = 0, \qquad (5.1.2.2)$$

and is called a Volterra integral equation of the first kind.

2. When $\phi(x) = 1$, equation (5.1.2.1) becomes

$$u(x) = f(x) + \lambda\int_a^x K(x,t)u(t)\,dt, \qquad (5.1.2.3)$$

and is called a Volterra integral equation of the second kind.

For both 5.1.1 and 5.1.2, case 2 in fact covers every equation with $\phi(x) \ne 0$, since we can divide through by $\phi(x)$ and obtain an equation of the second kind with a modified kernel,

$$u(x) = \frac{f(x)}{\phi(x)} + \lambda\int_a^b \frac{K(x,t)}{\phi(x)}\,u(t)\,dt.$$

Two other types of integral equations, related to the two main classes of Fredholm and Volterra integral equations, arise in many science and engineering applications.

5.1.3 Integro-Differential Equations

Several phenomena in physics and biology [11] and [19] give rise to integro-differential equations. In an integro-differential equation, the unknown function $u(x)$ occurs on one side as an ordinary derivative and appears on the other side under the integral sign. The following are examples of integro-differential equations:

$$u''(x) = -x + \int_0^x (x - t)u(t)\,dt, \qquad u(0) = 0, \; u'(0) = 1, \qquad (5.1.3.1)$$

$$u'(x) = 2 - \frac{1}{4}x + \int_0^1 xtu(t)\,dt, \qquad u(0) = 1. \qquad (5.1.3.2)$$

Equation (5.1.3.1) is related to Volterra integral equations and is called a Volterra integro-differential equation. Equation (5.1.3.2) is related to Fredholm integral equations and is called a Fredholm integro-differential equation.

5.1.4 Singular Integral Equations

An integral equation of the first kind or of the second kind is called singular if the lower limit, the upper limit, or both limits of integration are infinite. Examples are:

$$u(x) = 3x + 7\int_0^{\infty}\sin(x - t)u(t)\,dt, \qquad (5.1.4.1)$$

$$u(x) = \frac{1}{4}x - \int_{-\infty}^{0}\cos(x + t)u(t)\,dt, \qquad (5.1.4.2)$$

$$u(x) = 5x + 2 + \frac{1}{4}\int_{-\infty}^{\infty}(x + t)^2 u(t)\,dt. \qquad (5.1.4.3)$$

Examples of a second kind of singular integral equation are given by

$$x^2 = \int_0^x \frac{1}{\sqrt{x - t}}\,u(t)\,dt, \qquad (5.1.4.4)$$

$$x = \int_0^x \frac{u(t)}{(x - t)^{\alpha}}\,dt, \qquad 0 < \alpha < 1, \qquad (5.1.4.5)$$

where the singular behavior has resulted from the kernel $K(x,t)$ becoming infinite as $t \to x$.

5.2 Solution of an Integral Equation

A solution of an integral equation or an integro-differential equation on the interval of integration is a function $u(x)$ which satisfies the given equation. In other words, if the given solution is substituted into the right-hand side of the equation, the output of this direct substitution must yield the left-hand side; i.e. we verify that the given function satisfies the equation under discussion. Consider the following example.

Example 1: Show that $u(x) = e^x$ is a solution of the Volterra integral equation

$$u(x) = 1 + \int_0^x u(t)\,dt.$$

Substituting $u(x) = e^x$ in the right-hand side yields

$$RHS = 1 + \int_0^x e^t\,dt = e^x = u(x) = LHS.$$
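The same verification can be done numerically: substituting $u(x) = e^x$ into the right-hand side and evaluating the integral by quadrature should reproduce $u(x)$ at every test point. A minimal sketch using Simpson's rule:

```python
# Numerical verification of Example 1: RHS = 1 + ∫_0^x e^t dt ≈ e^x.
import math

def simpson(f, a, b, n=200):            # composite Simpson's rule, n even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

u = math.exp
xs = [0.5, 1.0, 2.0]
errs = [abs(1 + simpson(u, 0, x) - u(x)) for x in xs]
print(errs)   # tiny at every test point
```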


5.3 Converting Volterra Equation to Ordinary Differential Equation

Now we present the technique that converts Volterra integral equations of the second kind to equivalent differential equations. This is achieved by applying the useful Leibnitz rule for differentiating an integral:

$$\frac{d}{dx}\int_{\alpha(x)}^{\beta(x)} G(x,t)\,dt = G\bigl(x, \beta(x)\bigr)\frac{d\beta}{dx} - G\bigl(x, \alpha(x)\bigr)\frac{d\alpha}{dx} + \int_{\alpha(x)}^{\beta(x)}\frac{\partial G}{\partial x}\,dt, \qquad (5.3.1)$$

which holds provided $G(x,t)$ and $\partial G/\partial x$ are continuous functions in the domain $D$ of the $xt$-plane that contains the rectangular region $R$, $a \le x \le b$, $t_0 \le t \le t_1$, and $\alpha(x)$ and $\beta(x)$ are functions having continuous derivatives for $a < x < b$. We note that the Leibnitz rule is presented in most calculus books, and our goal here is to use the rule rather than to prove it.
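The rule (5.3.1) is easy to check numerically. In the sketch below, the choices $G(x,t) = (x-t)^2$, $\alpha(x) = x/2$, $\beta(x) = x$ are illustrative (not from the text); here $\int_{x/2}^{x}(x-t)^2\,dt = x^3/24$, so both sides of (5.3.1) equal $x^2/8$.

```python
# Compare a finite-difference derivative of F(x) = ∫_{x/2}^{x} (x-t)^2 dt
# with the right-hand side of the Leibnitz rule (5.3.1).

def simpson(f, a, b, n=200):
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

G = lambda x, t: (x - t) ** 2
F = lambda x: simpson(lambda t: G(x, t), x / 2, x)

x, h = 1.0, 1e-5
lhs = (F(x + h) - F(x - h)) / (2 * h)                # d/dx of the integral
rhs = (G(x, x) * 1 - G(x, x / 2) * 0.5               # boundary terms
       + simpson(lambda t: 2 * (x - t), x / 2, x))   # ∫ dG/dx dt
print(lhs, rhs)   # both ≈ x**2 / 8 = 0.125
```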

Here is an example.

Example 1. Find the initial value problem equivalent to the integral equation

$$u(x) = x^4 + x^2 + 2\int_0^x (x - t)^2 u(t)\,dt.$$

Differentiating both sides and using the Leibnitz rule yields

$$u'(x) = 4x^3 + 2x + 4\int_0^x (x - t)u(t)\,dt,$$

$$u''(x) = 12x^2 + 2 + 4\int_0^x u(t)\,dt,$$

$$u'''(x) = 24x + 4u(x).$$

The initial conditions are obtained by substituting $x = 0$ into both sides of the integral equation and its first two derivatives; we find $u(0) = u'(0) = 0$ and $u''(0) = 2$, so the equivalent third order initial value problem is

$$u'''(x) - 4u(x) = 24x, \qquad u(0) = u'(0) = 0, \; u''(0) = 2.$$

5.4 Converting Initial Value Problem to Volterra Equation.

In this section, we show the method that converts an initial value problem to an equivalent Volterra integral equation. Before outlining the method, we recall the transformation formula

$$\int_0^x\!\!\int_0^{x_1}\!\!\int_0^{x_2}\!\cdots\!\int_0^{x_{n-1}} f(x_n)\,dx_n\cdots dx_1 = \frac{1}{(n-1)!}\int_0^x (x - t)^{n-1}f(t)\,dt, \qquad (5.4.1)$$

which converts any multiple integral to a single integral. The formulas

$$\int_0^x\!\!\int_0^x f(t)\,dt\,dt = \int_0^x (x - t)f(t)\,dt, \qquad (5.4.2)$$

$$\int_0^x\!\!\int_0^x\!\!\int_0^x f(t)\,dt\,dt\,dt = \frac{1}{2!}\int_0^x (x - t)^2 f(t)\,dt \qquad (5.4.3)$$

are the two special cases of (5.4.1) that transform double and triple integrals, respectively, into single integrals. Here we prove the formula (5.4.2) that converts a double integral to a single integral.

We take the right-hand side of (5.4.2) as

$$I(x) = \int_0^x (x - t)f(t)\,dt = x\int_0^x f(t)\,dt - \int_0^x tf(t)\,dt. \qquad (5.4.4)$$

Differentiating both sides of (5.4.4) and using the Leibnitz rule, we obtain

$$I'(x) = \frac{dI}{dx} = \int_0^x f(t)\,dt + xf(x) - xf(x) = \int_0^x f(t)\,dt, \qquad (5.4.5)$$

assuming $f$ is continuous. Integrating both sides of (5.4.5) from $0$ to $x$, and noting that $I(0) = 0$, we get

$$I(x) = \int_0^x I'(s)\,ds = \int_0^x\left(\int_0^s f(t)\,dt\right)ds, \qquad (5.4.6)$$

that is,

$$\int_0^x\left(\int_0^s f(t)\,dt\right)ds = \int_0^x (x - t)f(t)\,dt.$$

Equating the right-hand sides of (5.4.4) and (5.4.6) thus completes the proof of this special case of (5.4.1).

Now we use the technique to convert an initial value problem to an equivalent Volterra integral equation. Consider the third order initial value problem

$$y'''(x) + p(x)y''(x) + q(x)y'(x) + r(x)y(x) = g(x) \qquad (5.4.7)$$

subject to the initial conditions

$$y(0) = \alpha, \quad y'(0) = \beta, \quad y''(0) = \gamma, \qquad \text{where } \alpha, \beta, \gamma \text{ are constants}. \qquad (5.4.8)$$

The functions $p(x)$, $q(x)$ and $r(x)$ must have enough continuous derivatives to justify the computations below, especially any integration by parts, and have Taylor expansions about the origin. We assume that $g(x)$ is continuous throughout the interval of discussion. To transform (5.4.7) into an equivalent Volterra integral equation we set

$$y'''(x) = u(x). \qquad (5.4.9)$$

Integrating both sides of (5.4.9) from $0$ to $x$, we find

$$y''(x) - y''(0) = \int_0^x u(t)\,dt, \qquad (5.4.10)$$

or equivalently

$$y''(x) = \gamma + \int_0^x u(t)\,dt. \qquad (5.4.11)$$

To obtain $y'(x)$ we integrate both sides of (5.4.11) from $0$ to $x$ to find that

$$y'(x) = \beta + \gamma x + \int_0^x\!\!\int_0^x u(t)\,dt\,dt. \qquad (5.4.12)$$

We integrate both sides of (5.4.12) from $0$ to $x$ to obtain

$$y(x) = \alpha + \beta x + \frac{1}{2}\gamma x^2 + \int_0^x\!\!\int_0^x\!\!\int_0^x u(t)\,dt\,dt\,dt. \qquad (5.4.13)$$

Using the conversion formulas (5.4.2) and (5.4.3) to reduce the double and triple integrals in (5.4.12) and (5.4.13), respectively, to single integrals yields

$$y'(x) = \beta + \gamma x + \int_0^x (x - t)u(t)\,dt, \qquad (5.4.14)$$

$$y(x) = \alpha + \beta x + \frac{1}{2}\gamma x^2 + \frac{1}{2}\int_0^x (x - t)^2 u(t)\,dt. \qquad (5.4.15)$$

Substituting (5.4.9), (5.4.11), (5.4.14) and (5.4.15) into (5.4.7) leads to the following Volterra integral equation of the second kind:

$$u(x) = f(x) + \int_0^x K(x,t)u(t)\,dt, \qquad (5.4.16)$$

where

$$K(x,t) = -\left[p(x) + q(x)(x - t) + \frac{1}{2!}\,r(x)(x - t)^2\right] \qquad (5.4.17)$$

and

$$f(x) = g(x) - \left[\gamma p(x) + \beta q(x) + \alpha r(x) + \gamma xq(x) + r(x)\left(\beta x + \frac{1}{2}\gamma x^2\right)\right]. \qquad (5.4.18)$$

The following example will be used to illustrate this technique.


Example 1. Find the Volterra integral equation equivalent to the initial value problem

$$y''' - y'' - y' + y = 0, \qquad y(0) = 2, \; y'(0) = 0, \; y''(0) = 2.$$

First we set $y''' = u(x)$. Integrating both sides of this equation from $0$ to $x$, we get

$$y''(x) - y''(0) = \int_0^x u(t)\,dt \implies y''(x) = 2 + \int_0^x u(t)\,dt.$$

Integrating both sides from $0$ to $x$ again, we get

$$y'(x) - y'(0) = \int_0^x\!\!\int_0^x u(t)\,dt\,dt \implies y'(x) = 2x + \int_0^x\!\!\int_0^x u(t)\,dt\,dt.$$

Integrating a third time from $0$ to $x$, we get

$$y(x) - 2 = x^2 + \int_0^x\!\!\int_0^x\!\!\int_0^x u(t)\,dt\,dt\,dt.$$

Using (5.4.2) and (5.4.3) to reduce the multiple integrals, we obtain

$$y'(x) = 2x + \int_0^x (x - t)u(t)\,dt,$$

$$y(x) = 2 + x^2 + \frac{1}{2}\int_0^x (x - t)^2 u(t)\,dt.$$

Substituting into $y''' - y'' - y' + y = 0$, we get

$$u(x) = 2x - x^2 + \int_0^x\left[1 + (x - t) - \frac{1}{2}(x - t)^2\right]u(t)\,dt.$$

This is the equivalent Volterra integral equation. Solving it gives $u = y'''$; integrating three times (using the relations above) then recovers $y$, which is the alternative approach to solving the initial value problem.


5.5 Converting Boundary Value Problem to Fredholm Equation

The procedure of reducing a boundary value problem to a Fredholm integral equation is complicated and rarely used. The method is similar to the technique that reduces an initial value problem to a Volterra integral equation, with the exception that we are given boundary conditions. It seems practical to illustrate this method by applying it to an example rather than proving it.

Example 1. Find a Fredholm integral equation equivalent to the boundary value problem

$$y''(x) + y(x) = x, \qquad 0 < x < \pi,$$

subject to the boundary conditions $y(0) = 1$, $y(\pi) = \pi - 1$.

First, we set

$$y''(x) = u(x).$$

Integrating both sides from $0$ to $x$ gives

$$\int_0^x y''(t)\,dt = \int_0^x u(t)\,dt,$$

or equivalently

$$y'(x) = y'(0) + \int_0^x u(t)\,dt.$$

Integrating both sides from $0$ to $x$ again and using the boundary condition $y(0) = 1$, we find

$$y(x) = 1 + xy'(0) + \int_0^x (x - t)u(t)\,dt,$$

where we converted the double integral to a single integral as discussed before. Substituting $x = \pi$ in both sides, we find

$$y(\pi) = \pi - 1 = 1 + \pi y'(0) + \int_0^{\pi}(\pi - t)u(t)\,dt.$$

Solving for $y'(0)$ we get

$$y'(0) = \frac{1}{\pi}\left((\pi - 2) - \int_0^{\pi}(\pi - t)u(t)\,dt\right).$$

Substituting back yields

$$y(x) = 1 + \frac{x}{\pi}\left((\pi - 2) - \int_0^{\pi}(\pi - t)u(t)\,dt\right) + \int_0^x (x - t)u(t)\,dt.$$

The final substitution, $u(x) = y''(x) = x - y(x)$, gives

$$u(x) = x - 1 - \frac{x}{\pi}\left((\pi - 2) - \int_0^{\pi}(\pi - t)u(t)\,dt\right) - \int_0^x (x - t)u(t)\,dt.$$

Using the identity

$$\int_0^{\pi}(\cdot) = \int_0^x (\cdot) + \int_x^{\pi}(\cdot),$$

we get

$$u(x) = \frac{2x - \pi}{\pi} + \frac{x}{\pi}\int_0^x (\pi - t)u(t)\,dt + \frac{x}{\pi}\int_x^{\pi}(\pi - t)u(t)\,dt - \int_0^x (x - t)u(t)\,dt,$$

and, after simple calculations combining the integrals with the same limits,

$$u(x) = \frac{2x - \pi}{\pi} - \int_0^x \frac{t(x - \pi)}{\pi}\,u(t)\,dt - \int_x^{\pi}\frac{x(t - \pi)}{\pi}\,u(t)\,dt.$$

Finally, the desired Fredholm integral equation of the second kind is

$$u(x) = \frac{2x - \pi}{\pi} - \int_0^{\pi}K(x,t)u(t)\,dt,$$

where the kernel $K(x,t)$ is defined by

$$K(x,t) = \begin{cases} \dfrac{t(x - \pi)}{\pi}, & 0 \le t \le x, \\[2mm] \dfrac{x(t - \pi)}{\pi}, & x \le t \le \pi. \end{cases}$$

As before, the solution of the integral equation gives $u = y''$; to recover $y$ we integrate twice, first computing the value of $y'(0)$, needed to do this, from the partial result above.
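As a numerical sanity check not in the text: one solution of this boundary value problem is $y = x + \cos x$ (the problem has others, since $\sin x$ satisfies the homogeneous equation with zero boundary values), so $u = y'' = -\cos x$; the sketch verifies that this $u$ satisfies the derived Fredholm equation.

```python
# Verify u(x) = -cos x against u(x) = (2x - pi)/pi - ∫_0^pi K(x,t) u(t) dt,
# splitting the integral at t = x where the kernel has a kink.
import math

def simpson(f, a, b, n=400):
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

pi = math.pi
u = lambda t: -math.cos(t)
kernel = lambda x, t: t * (x - pi) / pi if t <= x else x * (t - pi) / pi

errs = []
for x in (0.3, pi / 2, 2.8):
    integral = (simpson(lambda t: kernel(x, t) * u(t), 0, x)
                + simpson(lambda t: kernel(x, t) * u(t), x, pi))
    rhs = (2 * x - pi) / pi - integral
    errs.append(abs(rhs - u(x)))
print(errs)   # tiny at every test point
```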


CHAPTER 6

Fredholm Integral Equations

6.1 Introduction

This chapter studies the nonhomogeneous Fredholm integral equations of the second kind, of the form

$$u(x) = f(x) + \lambda\int_a^b K(x,t)u(t)\,dt, \qquad a \le x \le b, \qquad (6.1.1)$$

where $K(x,t)$ is the kernel and $\lambda$ is a parameter. Here we focus our attention on degenerate or separable kernels. The standard form of a degenerate or separable kernel is

$$K(x,t) = \sum_{k=1}^{n} g_k(x)h_k(t). \qquad (6.1.2)$$

Examples of separable kernels are $x - t$, $x + t$, $xt$, $x^2 - 3xt + t^2$, etc. Non-separable kernels can be approximated by expanding them using Taylor's expansion; the partial sums of the Taylor series are separable kernels. The kernel is said to be square integrable in both $x$ and $t$ in the square $a \le x \le b$, $a \le t \le b$ if the following regularity condition is satisfied:

$$\int_a^b\!\!\int_a^b |K(x,t)|^2\,dx\,dt < \infty. \qquad (6.1.3)$$

This condition gives rise to the development of the solution of the Fredholm integral equation (6.1.1). Here we state, without proof, a uniqueness result related to the so-called Fredholm Alternative Theorem, which relates the solutions of Fredholm integral equations; for more details about the regularity condition and the theorem, the reader is referred to [5], [8], [9] and [11]. If the kernel $K(x,t)$ is real, continuous and bounded in the square $a \le x \le b$, $a \le t \le b$, i.e. if

$$|K(x,t)| \le M, \qquad a \le x \le b \text{ and } a \le t \le b, \qquad (6.1.4)$$

and if $f(x) \ne 0$ is continuous in $a \le x \le b$, then a sufficient condition guaranteeing that (6.1.1) has a unique solution is

$$|\lambda|\,M(b - a) < 1. \qquad (6.1.5)$$

This comes from the Banach fixed point theorem: the transform

$$(Tu)(x) = f(x) + \lambda\int_a^b K(x,t)u(t)\,dt$$

is a contraction with respect to the uniform norm on $C([a,b])$, and so a unique fixed point is guaranteed.

A continuous solution to the Fredholm integral equation may exist [15] even though the condition (6.1.5) is not satisfied.

6.2 The Decomposition Method

Adomian [2] recently developed the so-called Adomian decomposition method, or simply the decomposition method. The method was introduced by Adomian in his recent books [2] and [3] and several related papers, [1] and [4] for example. This method provides the solution in a series form, and it can be applied to linear and nonlinear integral equations. In the decomposition method we express the solution $u(x)$ of the integral equation (6.1.1) in a series form defined by

$$u(x) = \sum_{n=0}^{\infty} u_n(x). \qquad (6.2.1)$$

Substituting the decomposition (6.2.1) into both sides of (6.1.1) yields

$$\sum_{n=0}^{\infty} u_n(x) = f(x) + \lambda\int_a^b K(x,t)\left(\sum_{n=0}^{\infty} u_n(t)\right)dt, \qquad (6.2.2)$$

or equivalently

$$u_0(x) + u_1(x) + u_2(x) + \cdots = f(x) + \lambda\int_a^b K(x,t)u_0(t)\,dt + \lambda\int_a^b K(x,t)u_1(t)\,dt + \lambda\int_a^b K(x,t)u_2(t)\,dt + \cdots. \qquad (6.2.3)$$

The components $u_0, u_1, u_2, \ldots$ of the unknown function $u(x)$ are determined if we set

$$u_0(x) = f(x), \qquad (6.2.4)$$

$$u_1(x) = \lambda\int_a^b K(x,t)u_0(t)\,dt, \qquad (6.2.5)$$

$$u_2(x) = \lambda\int_a^b K(x,t)u_1(t)\,dt, \qquad (6.2.6)$$

and so on. The solution of (6.1.1) can thus be written in a recursive manner as

$$u_0(x) = f(x), \qquad (6.2.7)$$

$$u_{n+1}(x) = \lambda\int_a^b K(x,t)u_n(t)\,dt, \qquad n \ge 0. \qquad (6.2.8)$$

It is important to note that the series obtained for $u(x)$ frequently gives the exact solution.

The following example uses the decomposition method.

Example 1. We consider the Fredholm integral equation

$$u(x) = e^x - 1 + \int_0^1 tu(t)\,dt.$$

Applying the decomposition technique as discussed before, we find

$$u_0(x) = e^x - 1,$$

$$u_1(x) = \int_0^1 tu_0(t)\,dt = \int_0^1 t(e^t - 1)\,dt = \frac{1}{2},$$

$$u_2(x) = \int_0^1 tu_1(t)\,dt = \int_0^1 \frac{t}{2}\,dt = \frac{1}{4}.$$

We get the solution in a series form given by

$$u(x) = e^x - 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = e^x - 1 + \frac{1}{2}\left(1 + \frac{1}{2} + \frac{1}{4} + \cdots\right),$$

where $1 + \frac{1}{2} + \frac{1}{4} + \cdots$ is an infinite geometric series with sum $s = \frac{a_1}{1 - r} = \frac{1}{1 - \frac{1}{2}} = 2$, so for the solution we get

$$u(x) = e^x.$$

For easier calculations there is a modified decomposition method for cases where $f(x)$ consists of a polynomial, or a combination of polynomials and other trigonometric or transcendental functions. In this modified method, we simply split the given function $f(x)$ into two parts defined by

$$f(x) = f_1(x) + f_2(x), \qquad (6.2.1.1)$$

where $f_1(x)$ consists of one term of $f(x)$ in many problems, or two terms for other cases, and $f_2(x)$ includes the remaining terms of $f(x)$. The integral equation (6.1.1) becomes

$$u(x) = f_1(x) + f_2(x) + \lambda\int_a^b K(x,t)u(t)\,dt, \qquad a \le x \le b. \qquad (6.2.1.2)$$


Substituting the decomposition given by (6.2.1) into both sides of (6.2.1.2) and using the first few terms of the expansion, we obtain

$$u_0(x) + u_1(x) + u_2(x) + \cdots = f_1(x) + f_2(x) + \lambda\int_a^b K(x,t)u_0(t)\,dt + \lambda\int_a^b K(x,t)u_1(t)\,dt + \lambda\int_a^b K(x,t)u_2(t)\,dt + \cdots. \qquad (6.2.1.3)$$

The modified decomposition method works if we set

$$u_0(x) = f_1(x),$$

$$u_1(x) = f_2(x) + \lambda\int_a^b K(x,t)u_0(t)\,dt,$$

$$u_2(x) = \lambda\int_a^b K(x,t)u_1(t)\,dt,$$

$$u_3(x) = \lambda\int_a^b K(x,t)u_2(t)\,dt,$$

and so on. This scheme for the determination of the components $u_0(x), u_1(x), u_2(x), \ldots$ of the solution $u(x)$ of (6.1.1) can be written in a recursive manner as

$$u_0(x) = f_1(x), \qquad (6.2.1.4)$$

$$u_1(x) = f_2(x) + \lambda\int_a^b K(x,t)u_0(t)\,dt, \qquad (6.2.1.5)$$

$$u_{n+1}(x) = \lambda\int_a^b K(x,t)u_n(t)\,dt, \qquad n \ge 1. \qquad (6.2.1.6)$$

In many problems we need to use $u_0(x)$ and $u_1(x)$ only.

The following example illustrates the modified decomposition scheme.


Example 2. We consider the Fredholm integral equation

$$u(x) = \tan^{-1}x + \left(\ln 2 - \frac{\pi}{2}\right)x + 2\int_0^1 xu(t)\,dt.$$

Applying the modified decomposition method, we first split the function $f(x)$ into

$$f_1(x) = \tan^{-1}x \quad \text{and} \quad f_2(x) = \left(\ln 2 - \frac{\pi}{2}\right)x.$$

Therefore, we set

$$u_0(x) = \tan^{-1}x,$$

and we get

$$u_1(x) = \left(\ln 2 - \frac{\pi}{2}\right)x + 2\int_0^1 xu_0(t)\,dt = \left(\ln 2 - \frac{\pi}{2}\right)x + 2x\int_0^1 \tan^{-1}t\,dt = 0,$$

since $\int_0^1 \tan^{-1}t\,dt = \frac{\pi}{4} - \frac{1}{2}\ln 2$. The components $u_n(x) = 0$ for $n \ge 1$, and the exact solution is

$$u(x) = \tan^{-1}x.$$

6.3 The Direct Computation Method

We next introduce an efficient method for solving Fredholm integral equations of the second kind,

$$u(x) = f(x) + \lambda\int_a^b K(x,t)u(t)\,dt.$$

Our attention will be focused on separable or degenerate kernels $K(x,t)$, expressed in the form defined by (6.1.2). For simplicity we assume a single term kernel, expressed as

$$K(x,t) = g(x)h(t). \qquad (6.3.1)$$

The equation (6.1.1) then becomes

$$u(x) = f(x) + \lambda g(x)\int_a^b h(t)u(t)\,dt. \qquad (6.3.2)$$

We denote the integral on the right-hand side by

$$\alpha = \int_a^b h(t)u(t)\,dt. \qquad (6.3.3)$$

It follows that equation (6.3.2) becomes

$$u(x) = f(x) + \lambda\alpha g(x). \qquad (6.3.4)$$

The solution $u(x)$ is determined upon evaluating the constant $\alpha$, which can be done by substituting (6.3.4) into (6.3.3). This approach [21] is different from other techniques: the direct computation method determines the exact solution in a closed form. Depending on the structure of the kernel, the method gives rise to a system of algebraic equations, where we need to evaluate more than one constant.

This technique is illustrated in the next example.

Example 1. We will use the direct computation method to solve the Fredholm integral

equation

u(x) = x\sin x - x + \int_0^{\pi/2} x\,u(t)\,dt.

We set

\alpha = \int_0^{\pi/2} u(t)\,dt.

If u is a solution, then

u(x) = x\sin x - x + \alpha x,

so

\alpha = \int_0^{\pi/2} \left( t\sin t - t + \alpha t \right) dt = 1 - \frac{\pi^2}{8} + \alpha\,\frac{\pi^2}{8},

which gives \alpha = 1. Substituting \alpha = 1 into u(x) = x\sin x - x + \alpha x, we get the exact solution

u(x) = x\sin x.


If the separable kernel contains two terms, the technique leads to a system of two linear equations in two unknown coefficients; for a kernel with n terms we obtain n equations in n unknowns.
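The \alpha-equation in the example above can also be solved numerically as a quick check. A minimal sketch, not part of the thesis (the Simpson helper is my own):

```python
import math

# Direct computation method for u(x) = x sin x - x + \int_0^{pi/2} x u(t) dt.
# Substituting u(x) = x sin x - x + alpha*x into alpha = \int u(t) dt gives the
# linear equation alpha = A + alpha*B with
#     A = \int_0^{pi/2} (t sin t - t) dt,   B = \int_0^{pi/2} t dt.

def simpson(g, a, b, n=2000):        # composite Simpson rule, n even
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if k % 2 else 2) * g(a + k * h) for k in range(1, n))
    return s * h / 3.0

a, b = 0.0, math.pi / 2
A = simpson(lambda t: t * math.sin(t) - t, a, b)
B = simpson(lambda t: t, a, b)
alpha = A / (1.0 - B)                # alpha(1 - B) = A
```

The computed value is \alpha = 1, matching the hand computation and confirming u(x) = x\sin x.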

6.4 The Successive Approximations Method

In this method we replace the unknown function under the integral sign of the

Fredholm integral equation of the second kind

u(x) = f(x) + \lambda \int_a^b K(x,t)\,u(t)\,dt, \quad a \le x \le b,   (6.4.1)

by any selected real-valued function u_0(x), a \le x \le b. The first approximation u_1(x) of u(x) and the second approximation u_2(x) of u(x) are defined by

u_1(x) = f(x) + \lambda \int_a^b K(x,t)\,u_0(t)\,dt,   (6.4.2)

u_2(x) = f(x) + \lambda \int_a^b K(x,t)\,u_1(t)\,dt.   (6.4.3)

This process can be continued in the same manner to obtain the n-th approximation:

u_0(x) = \text{any selected real-valued function},

u_n(x) = f(x) + \lambda \int_a^b K(x,t)\,u_{n-1}(t)\,dt, \quad n \ge 1.   (6.4.4)

The most commonly selected functions for u_0(x) are 0, 1, or x. In the limit, the solution u(x) is obtained by

u(x) = \lim_{n\to\infty} u_n(x),   (6.4.5)

so that the solution u(x) is independent of the choice of u_0(x).
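The iteration (6.4.4) is easy to carry out on a grid. The following is a minimal sketch, not the thesis's own computation; the test equation is hypothetical, chosen so that the iteration converges (its exact solution is u(x) = x + 1/2):

```python
import numpy as np

# Successive approximations (6.4.4) for the hypothetical equation
#     u(x) = x + (1/2) \int_0^1 u(t) dt,   exact solution u(x) = x + 1/2.
n = 201
x = np.linspace(0.0, 1.0, n)
w = np.full(n, x[1] - x[0]); w[0] *= 0.5; w[-1] *= 0.5   # trapezoid weights

u = np.zeros(n)                       # u0 = 0 (any selected function works)
for _ in range(60):                   # u_n = f + lambda * \int K u_{n-1}, K = 1
    u = x + 0.5 * np.sum(w * u)

err = float(np.max(np.abs(u - (x + 0.5))))
```

The error err shrinks geometrically with each pass, illustrating the convergence to the limit (6.4.5).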

The successive approximations method will be illustrated by the following example.

Example 1. Consider the Fredholm integral equation


u(x) = e^x + e^{-1} \int_0^1 u(t)\,dt.

As indicated above, we can select any real-valued function for the zeroth approximation; we set

u_0(x) = 0.

Substituting to obtain u_1(x), u_2(x), and u_3(x), we get

u_1(x) = e^x + e^{-1} \int_0^1 u_0(t)\,dt = e^x,

u_2(x) = e^x + e^{-1} \int_0^1 e^t\,dt = e^x + 1 - e^{-1},

u_3(x) = e^x + 1 - e^{-2}.

We obtain the n-th component

u_n(x) = e^x + 1 - e^{-(n-1)}, \quad n \ge 1.

The solution u(x) is given by

u(x) = \lim_{n\to\infty} u_n(x)

= \lim_{n\to\infty} \left( e^x + 1 - e^{-(n-1)} \right) = e^x + 1.

6.5 The Method of Successive Substitutions

This method introduces the solution of integral equations in a series form through

evaluating single and multiple integrals. In this method, we set x = t and t = t_1 in the Fredholm integral equation

u(x) = f(x) + \lambda \int_a^b K(x,t)\,u(t)\,dt, \quad a \le x \le b,   (6.5.1)

to obtain


u(t) = f(t) + \lambda \int_a^b K(t,t_1)\,u(t_1)\,dt_1.   (6.5.2)

Replacing u(t) in the right-hand side of (6.5.1) by its value given by (6.5.2) yields

u(x) = f(x) + \lambda \int_a^b K(x,t)\,f(t)\,dt + \lambda^2 \int_a^b K(x,t) \int_a^b K(t,t_1)\,u(t_1)\,dt_1\,dt.   (6.5.3)

Substituting x = t_1 and t = t_2 in (6.5.1) we obtain

u(t_1) = f(t_1) + \lambda \int_a^b K(t_1,t_2)\,u(t_2)\,dt_2.   (6.5.4)

Substituting the value of u(t_1) obtained in (6.5.4) into the right-hand side of (6.5.3) leads to

u(x) = f(x) + \lambda \int_a^b K(x,t)\,f(t)\,dt
+ \lambda^2 \int_a^b \int_a^b K(x,t)\,K(t,t_1)\,f(t_1)\,dt_1\,dt
+ \lambda^3 \int_a^b \int_a^b \int_a^b K(x,t)\,K(t,t_1)\,K(t_1,t_2)\,u(t_2)\,dt_2\,dt_1\,dt.   (6.5.5)

The general series form for u(x) can be written as

u(x) = f(x) + \lambda \int_a^b K(x,t)\,f(t)\,dt
+ \lambda^2 \int_a^b \int_a^b K(x,t)\,K(t,t_1)\,f(t_1)\,dt_1\,dt
+ \lambda^3 \int_a^b \int_a^b \int_a^b K(x,t)\,K(t,t_1)\,K(t_1,t_2)\,f(t_2)\,dt_2\,dt_1\,dt
+ \cdots   (6.5.6)

We note that the series solution converges uniformly on the interval [a,b] if |\lambda| M (b-a) < 1, where |K(x,t)| \le M. This follows from the contraction mapping principle (the Banach fixed point theorem), the proof of which appears in the texts [15], [17], [19] and others. The substitution of u(x) occurs several times through the integrals; this is why it is called the method of successive substitutions. The technique is illustrated by the following example.

Example 1. We solve the following Fredholm integral equation

u(x) = \frac{11}{6}\,x + \frac{1}{4} \int_0^1 x t\,u(t)\,dt,

by using the method of successive substitutions.

Substituting \lambda = \frac{1}{4}, f(x) = \frac{11}{6}x, and K(x,t) = xt into (6.5.6) yields

u(x) = \frac{11}{6}\,x + \frac{1}{4} \int_0^1 \frac{11}{6}\,x t^2\,dt + \frac{1}{16} \int_0^1 \int_0^1 \frac{11}{6}\,x t^2 t_1^2\,dt_1\,dt + \cdots
     = \frac{11}{6}\,x \left( 1 + \frac{1}{12} + \frac{1}{144} + \cdots \right),

where 1 + \frac{1}{12} + \frac{1}{144} + \cdots is an infinite geometric series with sum s = \frac{1}{1 - 1/12} = \frac{12}{11}. This gives the exact solution

u(x) = 2x.
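The partial sums of the Neumann series (6.5.6) can also be formed numerically. A minimal sketch for this example (grid size and helper names are my own choices):

```python
import numpy as np

# Successive substitutions (Neumann series) for
#     u(x) = (11/6) x + (1/4) \int_0^1 x t u(t) dt,   exact solution 2x.
n = 201
x = np.linspace(0.0, 1.0, n)
w = np.full(n, x[1] - x[0]); w[0] *= 0.5; w[-1] *= 0.5   # trapezoid weights

term = 11.0 * x / 6.0                # the f(x) term of the series
u = term.copy()
for _ in range(30):                  # add lambda*\int K f, lambda^2*\int\int K K f, ...
    term = 0.25 * x * np.sum(w * x * term)
    u += term
```

Each new term is 1/12 of the previous one, so the partial sums converge rapidly to the exact solution 2x, mirroring the geometric series above.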

6.6 Homogeneous Fredholm Equations

In this section we study the homogeneous Fredholm equation with separable kernel given by

u(x) = \lambda \int_a^b K(x,t)\,u(t)\,dt.   (6.6.1)

The trivial solution u(x) = 0 is always a solution of the homogeneous Fredholm equation. Our goal will be focused on finding nontrivial solutions if they exist. To get these solutions


we use the direct computational method that was used for nonhomogeneous Fredholm integral equations. Consider a one term kernel given by

K(x,t) = g(x)\,h(t),   (6.6.2)

so that (6.6.1) becomes

u(x) = \lambda g(x) \int_a^b h(t)\,u(t)\,dt.   (6.6.3)

Using the direct computation method we set

\alpha = \int_a^b h(t)\,u(t)\,dt,   (6.6.4)

so that (6.6.3) becomes

u(x) = \lambda\alpha\,g(x).   (6.6.5)

Substituting (6.6.5) into (6.6.4) we obtain

\alpha = \lambda\alpha \int_a^b h(t)\,g(t)\,dt,   (6.6.6)

or equivalently, for \alpha \ne 0,

1 = \lambda \int_a^b h(t)\,g(t)\,dt,   (6.6.7)

which gives a numerical value for \lambda \ne 0. The nonzero values of \lambda that result from solving the algebraic system of equations are called the eigenvalues of the kernel. Substituting these values of \lambda in (6.6.5) gives the eigenfunctions of the equation, which are the nontrivial solutions of (6.6.1).

The following example will be used to explain the technique introduced above and the concept of eigenvalues and eigenfunctions.

Example 1. Solve the homogeneous Fredholm integral equation with a two-term kernel


u(x) = \lambda \int_0^1 (6x - 2t)\,u(t)\,dt.

This equation can be rewritten as

u(x) = 6\lambda\alpha\,x - \lambda\beta,

where \alpha and \beta are defined by

\alpha = \int_0^1 u(t)\,dt, \qquad \beta = 2 \int_0^1 t\,u(t)\,dt.

Substituting u(x) in the formulas for \alpha and \beta, we get

\alpha = \int_0^1 (6\lambda\alpha t - \lambda\beta)\,dt, \qquad \beta = 2 \int_0^1 t\,(6\lambda\alpha t - \lambda\beta)\,dt.

Thus

\alpha = 3\lambda\alpha - \lambda\beta, \qquad \beta = 4\lambda\alpha - \lambda\beta,

or

(1 - 3\lambda)\alpha + \lambda\beta = 0,
-4\lambda\alpha + (1 + \lambda)\beta = 0.

A nontrivial solution requires

\det \begin{pmatrix} 1 - 3\lambda & \lambda \\ -4\lambda & 1 + \lambda \end{pmatrix} = (1 - 3\lambda)(1 + \lambda) + 4\lambda^2 = 0.

Solving the quadratic equation for \lambda we get \lambda_1 = \lambda_2 = 1. Then substituting into the algebraic equations, we get \beta = 2\alpha. The eigenfunctions corresponding to \lambda_1 = \lambda_2 = 1 are given by u_1(x) = u_2(x) = 6\alpha x - 2\alpha. Thus \lambda = 1 is the only eigenvalue, and the corresponding eigenspace is one-dimensional, spanned by the eigenfunction

u(x) = 3x - 1.
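The determinant condition and the eigenfunction above are easy to verify programmatically. A small sketch (the Simpson helper is my own, not part of the thesis):

```python
# Determinant condition for the two-term kernel:
#     (1 - 3*lam)(1 + lam) + 4*lam**2 = lam**2 - 2*lam + 1 = (lam - 1)**2,
# so lam = 1 is a double eigenvalue, and the system then forces beta = 2*alpha.
det = lambda lam: (1 - 3 * lam) * (1 + lam) + 4 * lam ** 2
print(det(1.0))                       # 0.0

# Check numerically that u(x) = 6x - 2 (alpha = 1, beta = 2) satisfies
# u(x) = \int_0^1 (6x - 2t) u(t) dt; Simpson's rule is exact for quadratics.
def simpson(g, a, b, n=100):
    h = (b - a) / n
    return (g(a) + g(b) + sum((4 if k % 2 else 2) * g(a + k * h)
                              for k in range(1, n))) * h / 3

u = lambda x: 6 * x - 2
for xx in (0.0, 0.5, 1.0):
    print(abs(u(xx) - simpson(lambda t: (6 * xx - 2 * t) * u(t), 0, 1)) < 1e-9)
```

Since u(x) = 6x - 2 = 2(3x - 1), this confirms that the eigenspace is spanned by 3x - 1.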


CHAPTER 7

Volterra Integral Equations

7.1 Introduction

In this chapter we introduce the nonhomogeneous Volterra integral equation of the second kind of the form

u(x) = f(x) + \lambda \int_0^x K(x,t)\,u(t)\,dt,   (7.1.1)

where K(x,t) is the kernel and \lambda is a parameter. As indicated earlier, the upper limit of integration for Volterra integral equations [20] is a function of x and not a constant as in Fredholm integral equations. The kernel here will be assumed separable. We want to determine the solution u(x) by applying various methods.

7.2 The Adomian Decomposition Method

Adomian developed the decomposition method, which has proved to

work for all types of differential, integral and integro-differential equations, linear or

nonlinear [1], [2], [3], [4].

The decomposition method establishes the solution in the form of a power series. In this method u(x) is decomposed into components that will be determined, given by the series form

u(x) = \sum_{n=0}^{\infty} u_n(x),   (7.2.1)

with u_0 identified as all terms outside the integral sign, i.e.

u_0(x) = f(x).   (7.2.2)

Substituting (7.2.1) into (7.1.1) yields


\sum_{n=0}^{\infty} u_n(x) = f(x) + \lambda \int_0^x K(x,t) \left( \sum_{n=0}^{\infty} u_n(t) \right) dt,

which, writing out a few terms of the expansion, gives

u_0(x) + u_1(x) + u_2(x) + \cdots = f(x) + \lambda \int_0^x K(x,t)\,u_0(t)\,dt + \lambda \int_0^x K(x,t)\,u_1(t)\,dt + \lambda \int_0^x K(x,t)\,u_2(t)\,dt + \cdots   (7.2.3)

The components u_0(x), u_1(x), u_2(x), u_3(x), \ldots of the unknown function u(x) are determined if we set

u_0(x) = f(x),   (7.2.4)

u_1(x) = \lambda \int_0^x K(x,t)\,u_0(t)\,dt,   (7.2.5)

u_2(x) = \lambda \int_0^x K(x,t)\,u_1(t)\,dt,   (7.2.6)

u_3(x) = \lambda \int_0^x K(x,t)\,u_2(t)\,dt,   (7.2.7)

and so on. This scheme for the determination of the components of the solution can be written recursively as

u_0(x) = f(x),   (7.2.8)

u_{n+1}(x) = \lambda \int_0^x K(x,t)\,u_n(t)\,dt, \quad n \ge 0.   (7.2.9)


The solution u(x) is then determined in series form by

u(x) = \sum_{n=0}^{\infty} u_n(x).

The series obtained for u(x) frequently provides the exact solution. It is important to note that even a few terms of the series usually provide a higher accuracy level of approximation of the solution than other numerical techniques.

It is important to indicate that the decomposition method provides the solution of a wide class of equations in the form of a series with easily computable components. Applications have shown very fast convergence of the series solution.

The following illustrative example will be discussed to explain the decomposition method.

Example 1. We consider the Volterra integral equation

u(x) = 4x + 2x^2 - \int_0^x u(t)\,dt.

Applying the decomposition technique we find u_0(x), u_1(x), u_2(x), u_3(x) by

u_0(x) = 4x + 2x^2,

u_1(x) = -\int_0^x (4t + 2t^2)\,dt = -\left( 2x^2 + \frac{2}{3}\,x^3 \right),

u_2(x) = -\int_0^x u_1(t)\,dt = \frac{2}{3}\,x^3 + \frac{1}{6}\,x^4,

u_3(x) = -\int_0^x u_2(t)\,dt = -\left( \frac{1}{6}\,x^4 + \frac{1}{30}\,x^5 \right).

For the solution in series form we get

u(x) = 4x + 2x^2 - \left( 2x^2 + \frac{2}{3}\,x^3 \right) + \frac{2}{3}\,x^3 + \frac{1}{6}\,x^4 - \left( \frac{1}{6}\,x^4 + \frac{1}{30}\,x^5 \right) + \cdots

The solution is

u(x) = 4x.
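The recursion (7.2.8)–(7.2.9) can be carried out on a grid with a cumulative quadrature for the variable upper limit. A minimal sketch for this example (helper names are my own):

```python
import numpy as np

# Adomian decomposition for u(x) = 4x + 2x^2 - \int_0^x u(t) dt, exact 4x:
# u0 = f, u_{n+1}(x) = -\int_0^x u_n(t) dt, summed over a few components.
n = 401
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]

def cumint(v):
    """Cumulative trapezoid: \int_0^x v(t) dt sampled on the grid."""
    out = np.zeros_like(v)
    out[1:] = np.cumsum((v[1:] + v[:-1]) * dx / 2.0)
    return out

comp = 4.0 * x + 2.0 * x ** 2        # u0 = f
u = comp.copy()
for _ in range(25):                  # (7.2.9) with lambda = -1, K = 1
    comp = -cumint(comp)
    u += comp
```

The components decay like x^n/n!, so a modest number of terms reproduces the exact solution 4x, in line with the self-cancelling (noise) terms seen in the hand computation.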

The series solution is usually employed for numerical approximation, and the more terms we obtain, the more accurate the approximation of the solution. Even though the decomposition method has proved powerful and reliable, it can be used in a more effective manner, which we call the modified decomposition method. The volume of calculation is reduced by evaluating only the first two components u_0(x) and u_1(x). The modified technique works for specific problems where f(x) consists of at least two terms.

It is important to note that the modified decomposition method, which was

introduced before for the Fredholm integral equations, is also applicable here. In Volterra

integral equations where f(x) consists of a polynomial, or a combination of polynomials and trigonometric or transcendental functions, the modified decomposition method works well. In this case we decompose f(x) into two parts:

f(x) = f_1(x) + f_2(x),   (7.2.1.1)

u(x) = f_1(x) + f_2(x) + \lambda \int_0^x K(x,t)\,u(t)\,dt.   (7.2.1.2)

Using a few terms of the expansions we obtain

u_0(x) + u_1(x) + u_2(x) + \cdots = f_1(x) + f_2(x) + \lambda \int_0^x K(x,t)\,u_0(t)\,dt + \lambda \int_0^x K(x,t)\,u_1(t)\,dt + \lambda \int_0^x K(x,t)\,u_2(t)\,dt + \cdots   (7.2.1.3)

We assign f_1(x) only to the component u_0(x), while f_2(x) is added to the component u_1(x). We set

u_0(x) = f_1(x),   (7.2.1.4)

u_1(x) = f_2(x) + \lambda \int_0^x K(x,t)\,u_0(t)\,dt,   (7.2.1.5)

u_2(x) = \lambda \int_0^x K(x,t)\,u_1(t)\,dt,   (7.2.1.6)

u_3(x) = \lambda \int_0^x K(x,t)\,u_2(t)\,dt,   (7.2.1.7)

and so on. The components u_0(x), u_1(x), u_2(x), u_3(x), \ldots of the solution are thus given recursively by

u_0(x) = f_1(x),   (7.2.1.8)

u_1(x) = f_2(x) + \lambda \int_0^x K(x,t)\,u_0(t)\,dt,   (7.2.1.9)

u_{n+1}(x) = \lambda \int_0^x K(x,t)\,u_n(t)\,dt, \quad n \ge 1.   (7.2.1.10)

The following example illustrates how to obtain the solution of a Volterra integral equation by using the modified decomposition method.


Example 2. Solve the following Volterra integral equation by using the modified

decomposition method

u(x) = \cos x + \left( 1 - e^{\sin x} \right) x + x \int_0^x e^{\sin t}\,u(t)\,dt.

Using this method, we first decompose the function f(x) into

f_1(x) = \cos x,

f_2(x) = \left( 1 - e^{\sin x} \right) x.

We find u_0(x) and u_1(x) by

u_0(x) = \cos x,

u_1(x) = \left( 1 - e^{\sin x} \right) x + x \int_0^x e^{\sin t}\,u_0(t)\,dt
       = \left( 1 - e^{\sin x} \right) x + x \int_0^x e^{\sin t}\cos t\,dt
       = \left( 1 - e^{\sin x} \right) x - \left( 1 - e^{\sin x} \right) x = 0.

The exact solution is

u(x) = \cos x.

It is clear that only two components need to be calculated to determine the exact solution.

7.3 The Series Solution Method

Now we introduce the series solution method, a practical method for solving the

Volterra integral equation with variable limits of integration

( ) = ( ) + ( , ) ( ) (7.3.1) 푥

푢 푥 푓 푥 휆 �0 퐾 푥 푡 푢 푡 푑푡 where ( , ) is the kernel of the integral equation, and is parameter. In this method we will follow퐾 푥 a푡 parallel approach to the method of the series휆 solution that usually applied in solving an ordinary differential equation around ordinary point. The method applicable

70

when ( ) is an analytic function, i. e. ( ) has a Taylor expansion around = 0 so,

( ) can푢 푥 be expressed by a series expansion푢 푥 [21] given by 푥

푢 푥 ( ) = ∞ , (7.3.2) 푛 푢 푥 � 푎푛푥 푛=0 where the coefficients are constants that will be determined. Substituting (7.3.2) into

푛 both sides of (7.3.1) yields푎

= ( ) + ( , ) , (7.3.3) ∞ 푥 ∞ 푛 푛 푛 푛 � 푎 푥 푓 푥 휆 �0 퐾 푥 푡 �� 푎 푡 � 푑푡 by using few terms푛= 0of the expansions in both sides, we푛=0 find

+ + + + = ( ) + ( , ) , 푥 2 3 0 1 2 3 0 푎 푎 푥 푎 푥 푎 푥 ⋯ 푓 푥 휆 �0 퐾 푥 푡 푎 푑푡 + ( , ) , 푥 1 휆 �0 퐾 푥 푡 푎 푡푑푡 + ( , ) , 푥 2 2 휆 �0 퐾 푥 푡 푎 푡 푑푡 + ( , ) , 푥 3 3 휆 �0 퐾 푥 푡 푎 푡 푑푡 + (7.3.4)

We write the Taylor expansion for ( ) and evaluate⋯ the first few integrals. Then we equate the coefficients of like powers of푓 푥 in both sides, so we find , , , … .

1 2 3 Substituting these coefficients , 0, gi푥ves the solution in a series푎 form.푎 푎This may

푛 lead to a solution in a closed form푎 if푛 the≥ expansion obtained is a Taylor expansion to a well-known elementary function.

The following example illustrates the series solution method.

Example 1. Use the series solution method to solve

71

( ) = 1 + ( ) ( ) . 푥

푢 푥 �0 푡 − 푥 푢 푡 푑푡 Substituting ( ) by the series

푢 푥 ( ) = ∞ , 푛 푢 푥 � 푎푛푥 into both sides of the equation leads to 푛=0

= 1 + ( ) , ∞ 푥 ∞ 푛 푛 푛 푛 � 푎 푥 �0 푡 − 푥 �� 푎 푡 � 푑푡 which gives 푛=0 푛=0

= 1 + . ∞ 푥 ∞ ∞ 푛 푛+1 푛 � 푎푛푥 � �� 푎푛푡 − 푥 � 푎푛푡 � 푑푡 푛=0 0 푛=0 푛=0 Evaluating the integrals on the right-hand side that involves terms of the form , 0 푛 yields 푡 푛 ≥

1 ∞ = 1 ∞ , ( + 1)( + 2) 푛 푛+2 � 푎푛푥 − � 푎푛푥 푛=0 푛=0 푛 푛 or equivalently

1 1 1 + + + + = 1 + 2! 3! 12 2 3 2 3 4 푎0 푎1푥 푎2푥 푎3푥 ⋯ − 푎0푥 − 푎1푥 − 푎2푥 ⋯ Equating the coefficients of like powers of in both sides we find

푥 1 1 = 1, = 0, = , = 0, = , 2! 4! 푎0 푎1 푎2 − 푎3 푎4 and generally

= ( 1) , 0, ( )! for 푛 1 푎2푛 − 2푛 푛 ≥ = 0, for 0.

2푛+1 푎 ( ) = 1 푛 ≥+ + , We find the solution in series form ! ! ! 1 2 1 4 1 6 푢 푥 72 − 2 푥 4 푥 − 6 푥 ⋯

the solution in a closed form ( ) = .

7.4 Successive푢 푥 Approximation푐표푠푥 s Method

The method of successive approximations used before for handling Fredholm integral equations will be implemented here to solve Volterra integral equation. In this method we replace the unknown function ( ) under integral sign of the Volterra equation 푢 푥

( ) = ( ) + ( , ) ( ) (7.4.1) 푥

푢 푥 푓 푥 휆 �0 푘 푥 푡 푢 푡 푑푡 by any real valued continuous function ( ), called the zeroth approximation. This

0 substitution will give the first approximation푢 푥 ( ) by

푢1 푥 ( ) = ( ) + ( , ) ( ) . (7.4.2) 푥 1 0 푢 푥 푓 푥 휆 �0 푘 푥 푡 푢 푡 푑푡 The second approximation obtained by replacing ( ) in (7.4.2) by ( ) obtained

0 1 above, hence we find 푢 푥 푢 푥

( ) = ( ) + ( , ) ( ) . (7.4.3) 푥 2 1 푢 푥 푓 푥 휆 �0 푘 푥 푡 푢 푡 푑푡 This process can be continued to obtain the -th approximation. So, we have

( ) = 푛

0( ) = ( ) + ( , ) ( ) , 1. 푢 푥 푎푛푦 푠푒푙푒푐푡푖푣푒푥 푟푒푎푙 푣푎푙푢푒푑 푓푢푛푐푡푖표푛 � 푛 푛−1 푢 푥 푓 푥 휆 �0 푘 푥 푡 푢 푡 푑푡 푛 ≥ The most commonly selected functions for ( ) are 0, 1, or . The successive

0 approximation are leading to a solution ( 푢) 푥 푥

( ) = lim푢 푥 ( ). (7.4.4)

푛 푢 푥 푛→∞ 푢 푥 of the equation (7.4.1)

73

So the solution ( ) will be independent of the choice of ( ) if we know for some

0 series such as in푢 the푥 contraction mapping principle that the푢 solution푥 is unique. In this case the solution is given in a series form

( ) = lim ( ∞ ( )).

푛 푢 푥 푛→∞ � 푢 푥 푛=0 The zeroth approximation is not defined and given by a selective real valued function.

To illustrate this method we solve the following example.

Example 1. Solve the Volterra integral equation

( ) = + ( ) ( ) 푥 푢 푥 푥 � 푡 − 푥 푢 푡 푑푡 by the successive approximations method. We0 first select any real valued function for the zeroth approximation, hence we set ( ) = 0.

푢0 푥 ( ) = + ( ) ( ) , 푥 1 0 푢 푥 푥 �0 푡 − 푥 푢 푡 푑푡 ( ) = .

푢1 푥 푥 ( ) = + ( ) , 푥 푢2 푥 푥 � 푡 − 푥 푡푑푡 10 ( ) = , 3! 3 2 푢 푥 푥 − 1 푥 1 ( ) = + , 3! 5! 3 5 푢3 푥 푥 − 푥 푥

( ) = 푛 ( 1) , 1. (2 2푘−1)! 푘−1 푥 푢푛 푥 � − 푛 ≥ 푘=1 푘 − The solution ( ) is given by

푛 푢 푥 ( ) = lim ( ),

푛 푢 푥 푛→∞ 푢 푥

74

= lim ( 푛 ( 1) ), (2 2푘−1)! 푘−1 푥 푛→∞ � − 푘=1 푘 − = .

7.5 The Method of Successive푠푖푛푥 Substitutions

The technique to be used here is completely identical to that we used before. In this method, we set = and = in the Volterra integral equation

푥 푡 푡 푡1 ( ) = ( ) + ( , ) ( ) , (7.5.1) 푥 푢 푥 푓 푥 휆 � 푘 푥 푡 푢 푡 푑푡 to obtain 0

( ) = ( ) + ( , ) ( ) . (7.5.2) 푡 1 1 1 푢 푡 푓 푡 휆 �0 푘 푡 푡 푢 푡 푑푡 Replacing ( ) at the right-hand side of (7.5.1) by obtained value given by (7.5.2) yields

푢 푡 ( ) = ( ) + ( , ) ( ) 푥

푢 푥 푓 푥 휆 �0 푘 푥 푡 푓 푡 푑푡 + ( , ) ( , ) ( ) . (7.5.3) 푥 푥 2 1 1 1 휆 �0 퐾 푥 푡 �0 퐾 푡 푡 푢 푡 푑푡 푑푡 Substituting = and = in (7.5.1) we obtain

푥 푡1 푡 푡2 ( ) = ( ) + ( , ) ( ) . (7.5.4) 푡1 1 1 1 2 2 2 푢 푡 푓 푡 휆 �0 퐾 푡 푡 푢 푡 푑푡 Substituting the value of ( ) obtained in (7.5.4) into the right hand side of (7.5.3) leads

1 to 푢 푡

( ) = ( ) + ( , ) ( ) 푥

푢 푥 푓 푥 휆 �0 퐾 푥 푡 푓 푡 푑푡 + ( , ) ( , ) ( ) 푥 푡 2 1 1 1 휆 �0 �0 퐾 푥 푡 퐾 푡 푡 푓 푡 푑푡 푑푡 75

+ ( , ) ( , ) ( , ) ( ) . 푥 푡 푡1 3 1 1 2 2 2 1 휆 �0 �0 �0 퐾 푥 푡 퐾 푡 푡 퐾 푡 푡 푢 푡 푑푡 푑푡 푑푡 (7.5.5)

The general series form for ( ) can be rewritten as

푢 푥 ( ) = ( ) + ( , ) ( ) 푥

푢 푥 푓 푥 휆 �0 푘 푥 푡 푓 푡 푑푡 + ( , ) ( , ) ( ) 푥 푡 2 1 1 1 휆 �0 �0 퐾 푥 푡 퐾 푡 푡 푓 푡 푑푡 푑푡 + ( , ) ( , ) ( , ) ( ) . 푥 푡 푡1 3 1 1 2 2 2 1 휆 �0 �0 �0 퐾 푥 푡 퐾 푡 푡 퐾 푡 푡 푓 푡 푑푡 푑푡 푑푡 + (7.5.6)

In this method the unknown⋯ function ( ) is substituted by the given function

( ) that makes the evaluation of the multiple푢 integrals푥 easily computable. This process

occurs푓 푥 several times through the integrals and this is why it is called the method of

successive substitutions. The technique will be illustrated by solving the last example in

this new format.

Example 1. We solve the following Volterra integral equation

( ) = ( ) ( ) , 푥

푢 푥 푥 − �0 푥 − 푡 푢 푡 푑푡 by using the method of successive substitutions. Substituting = 1, ( ) = , and

( , ) = ( ) into (7.5.6) we obtain 휆 − 푓 푥 푥

퐾 푥 푡 푥 − 푡 ( ) = ( ) + ( ) ( ) + , 푥 푥 푡 1 1 1 푢 푥 푥 − �0 푥 − 푡 푡푑푡 �0 �0 푥 − 푡 푡 − 푡 푡 푑푡 푑푡 ⋯ or equivalently

( ) = ( ) + ( )( ) + , 푥 푥 푡 2 2 1 1 1 푢 푥 푥 − �0 푥푡 − 푡 푑푡 �0 �0 푥 − 푡 푡푡 − 푡 푑푡 푑푡 ⋯ 76

therefore, we obtain the solution in a series form

1 1 ( ) = + + , 3! 5! 3 5 푢 푥 푥 − 푥 푥 ⋯ or in a closed form

( ) =

upon using the Taylor expansion for 푢 푥. We 푠푖푛푥get the same solution as before.

7.6 Volterra Integral푠푖푛푥 Equations of the First Kind

In this section we will study the Volterra integral equation of the first kind with separable kernel given by

( ) = ( , ) ( ) . (7.6.1) 푥 푓 푥 � 퐾 푥 푡 푢 푡 푑푡 It is important to note that the Volterra integral0 equation of the first kind can be handled by reducing this equation to Volterra equation of the second kind. This goal can be accomplished by differentiating both sides of (7.6.1) with respect to to obtain

푥 ( ) = ( , ) ( ) + ( , ) ( ) (7.6.2) 푥 ′ 푥 푓 푥 퐾 푥 푥 푢 푥 �0 퐾 푥 푡 푢 푡 푑푡 by using Leibnitz rule. If ( , ) 0 in the interval of discussion, then dividing both

sides of (7.6.2) by ( , )퐾 yields푥 푥 ≠

퐾 푥 푥 ( ) 1 ( ) = ( , ) ( ) , (7.6.3) (′ , ) ( , ) 푥 푓 푥 푥 푢 푥 − �0 퐾 푥 푡 푢 푡 푑푡 a Volterra integral equation of the퐾 second푥 푥 kind.퐾 푥The푥 case in which the kernel ( , ) = 0,

leads to a complicated behavior of the problem that will not be investigated here.퐾 푥 푥

To solve (7.6.3) we select any method that we discussed before. The technique of

differentiating both sides of Volterra integral equation of the first kind, verifying that

77

( , ) 0, reducing to Volterra integral equation of the second kind and solving the resulting퐾 푥 푥 ≠ equation will be illustrated by discussing the following example.

Example 1. Find the solution of the Volterra equation of the first kind

5 + = (5 + 3 3 ) ( ) . 푥 2 3 푥 푥 �0 푥 − 푡 푢 푡 푑푡 Note that ( , ) = 5 + 3 3 , therefore ( , ) = 5 0, and differentiating, gives

퐾 푥 푡 푥 − 푡 퐾 푥 푥 ≠ 10 + 3 = 5 ( ) + 3 ( ) , 푥 2 푥 푥 푢 푥 � 푢 푡 푑푡 or equivalently 0

3 1 ( ) = 2 + 3 ( ) , 5 5 푥 2 푢 푥 푥 푥 − �0 푢 푡 푑푡 We prefer to use the modified decomposition method. We set ( ) = 2 , which gives

3 3 푢0 푥 푥 ( ) = 2 = 0. 5 5 푥 2 1 푢 푥 푥 − �0 푡푑푡 For ( ) = 0, 2. Exact solution is ( ) = 2 .

푛 푢 푥 푛 ≥ 푢 푥 푥

78

CHAPTER 8

Integro-Differential Equations

8.1 Introduction

In this chapter we shall be concerned with the integro-differential equations where both differential and integral operators will appear in same equation. This type of equations was introduced by Volterra [5], [6] and [20] in 1900. These equations came up in his research work on population growth. More details about the sources where these equations arise can be found in physics, biology and engineering applications as well as in advanced integral equations books such as [7], [11], [12] and [16]. In integro- differential equations the unknown function ( ) and one or more its derivatives such as

( ), ( ), … appear outside and under the푢 integral푥 sign as well. ′ ′′ 푢The푥 following푢 푥 are examples of linear integro-differential equations:

( ) = ( ) , (0) = 0, (8.1.1) 1 ′ 푥−푡 푢 푥 푥 − �0 푒 푢 푡 푑푡 푢 ( ) = + ( ) , (0) = 1, (0) = 1, (8.1.2) 1 ′′ 푥 ′ ′ 푢 푥 푒 − 푥 �0 푥푡푢 푡 푑푡 푢 푢 ( ) = ( ) ( ) , (0) = 0, (8.1.3) 푥 ′ 푢 푥 푥 − �0 푥 − 푡 푢 푡 푑푡 푢 ( ) = + ( ) ( ) , (0) = 0, (0) = 1. (8.1.4) 푥 ′′ ′ 푢 푥 −푥 � 푥 − 푡 푢 푡 푑푡 푢 푢 − Of these equations (8.1.1) and0 (8.1.2) are Fredholm integro-differential equations, which

are linear, and equations (8.1.3) and (8.1.4) are the Volterra integro-differential

equations. Our concern in this paper will be focused only on the linear integro-differential equations. The initial conditions are needed to determine the constants of integration.

79

8.2 Fredholm Integro-Differential Equations

In this section we will discuss methods used to solve Fredholm integro- differential equations. We focus our concern on the equations that involve separable kernels where the kernel ( , ) can be expressed as a finite sum of the form

퐾 푥 푡 ( , ) = 푛 ( ) ( ). (8.2.1)

퐾 푥 푡 � 푔푘 푥 ℎ푘 푡 푘=1 We will make our analysis on a one term kernel ( , ) of the form

( , ) = ( ) ( )퐾, 푥 푡 (8.2.2) and this can be generalized for퐾 other푥 푡 cases.푔 푥 Theℎ 푡 non-separable kernel can be approximated by a separable kernel by using the Taylor expansion for the kernel involved. Then we have the exact solution or an approximation to the solution with the highest desirable accuracy. We first start with the most practical method.

8.2.1 The Direct Computation Method

This method has been introduced in previous chapter. The standard form to the

Fredholm Integro-Differential Equation given by

( )( ) = ( ) + ( , ) ( ) , ( )(0) = , 0 ( 1), (8.2.1.1) 1 푛 푘 푘 푢 푥 푓 푥 �0 퐾 푥 푡 푢 푡 푑푡 푢 푏 ≤ 푘 ≤ 푛 − where ( )( ) indicates the -th derivative of ( ) with respect to and are constants 푛 푘 that define푢 the푥 proper initial 푛conditions. Substitute푢 푥 (8.2.2) into (8.2.1.1)푥 to푏 get

( )( ) = ( ) + ( ) ( ) ( ) , (0) = , 0 ( 1). (8.2.1.2) 1 푛 푘 푢 푥 푓 푥 푔 푥 � ℎ 푡 푢 푡 푑푡 푢 푏푘 ≤ 푘 ≤ 푛 − We set 0

= ( ) ( ) . (8.2.1.3) 1

훼 �0 ℎ 푡 푢 푡 푑푡

80

With defined in (8.2.1.3), the equation (8.2.1.2) can be written by

훼 ( )( ) = ( ) + ( ). (8.2.1.4) 푛 We need to determine푢 the푥 constant푓 푥 to훼푔 evaluate푥 the exact solution ( ). We

integrate both sides times from 0 to , and훼 by using the given initial conditions푢 푥 we

obtain ( ) given by푛 푥

푢 푥 ( ) = ( ; ), (8.2.1.5)

Where ( ; ) is the result푢 푥 derive푝 d푥 from훼 integrating equation (8.2.1.4) and by using the given푝 initi푥 훼al condition. Substituting (8.2.1.5) into right side of (8.2.1.3),

integrating and solving the resulting equation leads to determination of . The exact

solution of (8.2.1.1) follows immediately upon substituting the resulting훼 value of into

(8.2.1.5). To give a clear view of the technique, we illustrate the method by solving훼 the

following example.

Example 1. Solve the Fredholm integro-differential equation

1 ( ) = 1 + ( ) , (0) = 0, 3 1 ′ 푢 푥 − 푥 푥 �0 푡푢 푡 푑푡 푢 by using direct computation method.

The equation may be written in the form

1 ( ) = 1 + , (0) = 0 3 ′ 푢 푥 − 푥 훼푥 푢 Where the constant is defined by

훼 = ( ) , 1

훼 �0 푡푢 푡 푑푡 Integrating both sides of ( ) from 0 to and using initial condition we obtain ′ 푢 푥 푥 1 ( ) = + . 2 6 훼 2 푢 푥 푥 � − � 푥 81

Substitute this into the integral defining , integrate, and solve for to find

훼 1 훼 = . 3 훼 So, the exact solution is

( ) = .

8.2.2 The Adomian Decomposition Method푢 푥 푥

This method has been introduced already for handling Fredholm integral equations. In this section we will show how it can be implemented to determine a series solution to the Fredholm integro-differential equations. As in the last section, consider the

Fredholm integro-differential equation given by

( )( ) = ( ) + ( , ) ( ) , ( )(0) = , 0 ( 1) (8.2.2.1) 1 푛 푘 푘 푢 푥 푓 푥 �0 퐾 푥 푡 푢 푡 푑푡 푢 푏 ≤ 푘 ≤ 푛 − where ( )( ) indicates the -th derivative of ( ) with respect to x and are 푛 푘 constants푢 that푥 give the initial 푛conditions. Substituting푢 푥 ( , ) = ( ) ( ) 푏 into (8.2.2.1)

we obtain 퐾 푥 푡 푔 푥 ℎ 푡

( )( ) = ( ) + ( ) ( ) ( ) . (8.2.2.2) 1 푛 푢 푥 푓 푥 푔 푥 � ℎ 푡 푢 푡 푑푡 In an operator form, the equation (8.2.2.2) can be written0 as

( ) = ( ) + ( ) ( ) ( ) , (8.2.2.3) 1

퐿푢 푥 푓 푥 푔 푥 �0 ℎ 푡 푢 푡 푑푡 where the differential operator is given by

퐿 = . (8.2.2.4) 푛 푑 퐿 푛 Here is an invertible operator, therefore 푑the푥 integral operator is an -fold −1 integration퐿 operator and may be considered as definite integrals퐿 from 0 to푛 for each

푥 82

integral. Applying to both sides of (8.2.2.3), yields −1 퐿 1 1 ( ) = + + + + + ( ( )) 2! ( 1)! 2 푛−1 −1 푢 푥 푏0 푏1푥 푏2푥 ⋯ 푏푛−1푥 퐿 푓 푥 푛 − + ( ) ( ) ( ) . (8.2.2.5) 1 −1 ��0 ℎ 푡 푢 푡 푑푡� 퐿 �푔 푥 � We have integrated (8.2.2.2) times from 0 to and using the initial conditions at every

step of integration. It is important푛 to indicate that푥 the equation (8.2.2.5) is a standard

Fredholm integral equation. In the decomposition method we define the solution ( ) of

(8.2.2.1) in a series form given by 푢 푥

( ) = ∞ ( ). (8.2.2.6)

푢 푥 � 푢푛 푥 푛=0 Substituting (8.2.2.6) into both sides of (8.2.2.5) we get

1 ∞ ( ) = 푛−1 + ( ( )) ! 푘 −1 � 푢푛 푥 � 푏푘푥 퐿 푓 푥 푛=0 푘=0 푘 + ( ) ( ) ( ) , (8.2.2.7) 1 ∞ −1 �� ℎ 푡 �� 푢푛 푡 � 푑푡� 퐿 �푔 푥 � 0 푛=0 or equivalently

1 ( ) + ( ) + ( ) + = 푛−1 + ( ( )) ! 푘 −1 푢0 푥 푢1 푥 푢2 푥 ⋯ � 푏푘푥 퐿 푓 푥 푘=0 푘 +( ( ) ( ) ) ( ( )) 1 −1 0 �0 ℎ 푡 푢 푡 푑푡 퐿 푔 푥 +( ( ) ( ) ) ( ( )) 1 −1 1 �0 ℎ 푡 푢 푡 푑푡 퐿 푔 푥 +( ( ) ( ) ) ( ( )) 1 −1 2 �0 ℎ 푡 푢 푡 푑푡 퐿 푔 푥

83

+ (8.2.2.8)

The components ( ), ( ), ⋯( ), ( ), … of the unknown function ( ) are

0 1 2 3 determined in a recursive푢 푥 푢 manner,푥 푢 in푥 a 푢similar푥 fashion as discussed before,푢 if푥 we set

1 ( ) = 푛−1 + ( ) , (8.2.2.9) ! 푘 −1 푢0 푥 � 푏푘푥 퐿 �푓 푥 � 푘=0 푘 ( ) = ( ) ( ) ( ) , (8.2.2.10) 1 −1 1 0 푢 푥 ��0 ℎ 푡 푢 푡 푑푡� 퐿 �푔 푥 � ( ) = ( ) ( ) ( ) , (8.2.2.11) 1 −1 2 1 푢 푥 ��0 ℎ 푡 푢 푡 푑푡� 퐿 �푔 푥 � ( ) = ( ) ( ) ( ) , (8.2.2.12) 1 −1 3 2 푢 푥 ��0 ℎ 푡 푢 푡 푑푡� 퐿 �푔 푥 � and so on. For the determination of the components ( ), ( ), ( ), ( ), … of the

0 1 2 3 solution ( ) in general can be written in a recursive푢 relationship푥 푢 푥 by푢 푥 푢 푥

푢 푥 1 ( ) = 푛−1 + ( ) , (8.2.2.13) ! 푘 −1 푢0 푥 � 푏푘푥 퐿 �푓 푥 � 푘=0 푘 ( ) = ( ) ( ) ( ) , 0. (8.2.2.14) 1 −1 푛+1 푛 푢 푥 ��0 ℎ 푡 푢 푡 푑푡� 퐿 �푔 푥 � 푛 ≥ The solution ( ) is immediately determined with these components calculated.

The series obtained for푢 푥 ( ) frequently provides the exact solution as will be illustrated later. 푢 푥

In some problems, where a closed form is not easy to find, we can use the series form obtained to approximate the solution. It can be shown [6] that a few terms of the series derived by decomposition method usually provide a highly accurate approximation. The decomposition method avoids massive computational work and difficulties that arise from other methods.

84

The exact solution of any integral equation or integro-differential equations may

be obtained by considering the first two components and only. If we observe the

0 1 appearance of like terms in both components with opposite푢 signs,푢 then by cancelling these terms, the remaining non-cancelled terms of may in some cases provide the

0 exact solution. The self-cancelling terms between the푢 components and are called

0 1 the noise terms. The other terms in other components will vanish in푢 the limit푢 if the noise

terms occurred in ( ) and ( ). However, if the exact solution was not attainable by

0 1 using this phenomenon,푢 푥 then 푢we 푥should continue determining other components of ( )

to get a closed form solution or an approximate solution. In the following we discuss푢 one푥

example which illustrates the decomposition scheme where we will examine the

phenomena of the self-cancelling noise terms as well.

Example 1. Solve the following Fredholm integro-differential equation

1 1 ( ) = + 휋 ( ) , (0) = 0, 4 4 2 ′ 푢 푥 푐표푠푥 푥 − �0 푥 푡 푢 푡 푑푡 푢 by using the decomposition method.

Integrating both sides of the equation from 0 to , since (0) = 0 and 0 = 0 gives

1 1푥 / 푢 푠푖푛 ( ) = + ( ) . 8 8 휋 2 2 2 푢 푥 푠푖푛푥 푥 − 푥 � 푡푢 푡 푑푡 Using the decomposition technique we decompose the0 solution into a series form given

by

( ) = ∞ ( ).

푢 푥 � 푢푛 푥 Substituting into both sides yields 푛=0

85

1 1 / ∞ ( ) = + ∞ ( ) , 8 8 휋 2 2 2 푛 푛 � 푢 푥 푠푖푛푥 푥 − 푥 �0 푡 �� 푢 푡 � 푑푡 or equivalently 푛=0 푛=0

1 ( ) + ( ) + ( ) + = + 8 2 푢0 푥 푢1 푥 푢2 푥 ⋯ 푠푖푛푥 푥 1 / ( ( ) ) 8 휋 2 2 0 − 푥 �0 푡푢 푡 푑푡 1 / ( ( ) ) 8 휋 2 2 1 − 푥 �0 푡푢 푡 푑푡 1 / ( ( ) ) 8 휋 2 2 2 − 푥 �0 푡푢 푡 푑푡 + .

We set ⋯

1 ( ) = + 8 2 푢0 푥 푠푖푛푥 푥 which gives

1 1 1 ( ) = 휋 + = . 8 2 8 8 164 2 2 2 휋 2 1 3 푢 푥 − 푥 �0 푡 �푠푖푛푡 푡 � 푑푡 − 푥 − 푥 Considering ( ) and ( ) we see that the two identical terms appear in 1 2 0 1 8 these components with푢 opposite푥 푢 signs.푥 Cancelling these terms, and substituting푥 the remaining non cancelled term in ( ) it satisfies the given equation lead to

0 푢 푥 ( ) =

this is the exact solution in closed form.푢 푥 푠푖푛푥

8.2.3 Converting to Fredholm Integral Equations

In this section we discuss a technique that reduces a Fredholm integro-differential equation to an equivalent Fredholm integral equation. This is done by integrating both sides of the integro-differential equation from 0 to x as many times as the order of the derivative involved, using the given initial conditions at each integration. The technique is applicable only if the Fredholm integro-differential equation involves the unknown function u(x) alone, and not any of its derivatives, under the integral sign. The resulting Fredholm integral equation can then be solved by the decomposition method, the direct computation method, the successive approximation method, or the method of successive substitutions. To give a clear overview of this method we solve the example of Section 8.2.1.

Example 1. Solve the following Fredholm integro-differential equation

u'(x) = 1 - \frac{1}{3}x + x \int_0^1 t\,u(t)\,dt, \qquad u(0) = 0,

by converting it to a standard Fredholm integral equation.

Integrating both sides from 0 to x and using the initial condition we get

u(x) = x - \frac{1}{3!}x^2 + \frac{1}{2!}x^2 \int_0^1 t\,u(t)\,dt.

This is a Fredholm integral equation, and we choose the successive approximation method to solve it. We set the zeroth approximation

u_0(x) = x.

This gives the first approximation

u_1(x) = x - \frac{1}{3!}x^2 + \frac{1}{2!}x^2 \int_0^1 t^2\,dt = x - \frac{1}{3!}x^2 + \frac{1}{3!}x^2 = x.

Since

u_1(x) = x = u_0(x),

continuing in this way gives

u_n(x) = x \quad \text{for all } n.

Accordingly,

u(x) = \lim_{n \to \infty} u_n(x) = \lim_{n \to \infty} x = x,

and this is the same solution we obtained before.
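The successive approximations above are easy to mimic numerically. In the sketch below (an illustration, not the thesis's computation verbatim), each sweep recomputes the scalar \int_0^1 t\,u(t)\,dt with a trapezoidal rule and rebuilds the next iterate; every iterate reproduces u(x) = x up to quadrature error:

```python
def trapezoid(f, a, b, n=1000):
    # Composite trapezoidal rule on [a, b].
    h = (b - a) / n
    return h * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + i * h) for i in range(1, n)))

def sweep(u):
    # One successive-approximation step for
    # u(x) = x - x^2/3! + (x^2/2!) * ∫_0^1 t u(t) dt.
    c = trapezoid(lambda t: t * u(t), 0.0, 1.0)
    return lambda x: x - x ** 2 / 6 + (x ** 2 / 2) * c

u = lambda x: x              # zeroth approximation u_0(x) = x
for _ in range(5):           # u_1, u_2, ... should all reproduce x
    u = sweep(u)

err = max(abs(u(0.1 * k) - 0.1 * k) for k in range(11))
```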

8.3 Volterra Integro-Differential Equations

In this section we present a method to handle Volterra integro-differential equations. We focus on equations that involve separable kernels of the form

K(x,t) = \sum_{k=1}^{n} g_k(x)\,h_k(t). \qquad (8.3.1)

We consider the case where the kernel K(x,t) is a product of functions g(x) and h(t), that is, K(x,t) = g(x)h(t); the other cases can be generalized in the same manner. A non-separable kernel can be approximated by a separable kernel by using the Taylor expansion of the kernel involved. We begin with the most practical method, the series solution method.

8.3.1 The Series Solution Method

We consider the standard form of the Volterra integro-differential equation given by

u^{(n)}(x) = f(x) + \int_0^x K(x,t)\,u(t)\,dt, \qquad u^{(k)}(0) = b_k, \quad 0 \le k \le n-1, \qquad (8.3.1.1)

where u^{(n)}(x) indicates the n-th derivative of u(x) with respect to x, and the b_k are constants that define the initial conditions. Substituting K(x,t) = g(x)h(t) into (8.3.1.1) we get

u^{(n)}(x) = f(x) + g(x) \int_0^x h(t)\,u(t)\,dt, \qquad u^{(k)}(0) = b_k, \quad 0 \le k \le n-1. \qquad (8.3.1.2)

As before, we follow an analogy parallel to that for ordinary differential equations around an ordinary point. We assume that the solution is an analytic function, so that it can be represented by a series expansion

u(x) = \sum_{k=0}^{\infty} a_k x^k, \qquad (8.3.1.3)

where the a_k are constants, the first of which are determined by the initial conditions, so that

a_0 = u(0), \quad a_1 = u'(0), \quad a_2 = \frac{1}{2!} u''(0), \qquad (8.3.1.4)

and so on, depending on the number of initial conditions. Substituting (8.3.1.3) into both sides of (8.3.1.2) yields

\left( \sum_{k=0}^{\infty} a_k x^k \right)^{(n)} = f(x) + g(x) \int_0^x h(t) \left( \sum_{k=0}^{\infty} a_k t^k \right) dt. \qquad (8.3.1.5)

Equation (8.3.1.5) can be evaluated easily when we only have to integrate terms of the form t^n, n \ge 0. The next step is to write the Taylor expansion of f(x), evaluate the resulting integrals, and then equate the coefficients of like powers of x on both sides of the equation. This leads to a determination of the coefficients a_0, a_1, a_2, \ldots of the series, which may give a solution in closed form. The following example illustrates the series solution method for a Volterra integro-differential equation.

Example 1. Solve the following Volterra integro-differential equation

u''(x) = x \cosh x - \int_0^x t\,u(t)\,dt, \qquad u(0) = 0, \quad u'(0) = 1,

by using the series solution method. Notice that h(t) = t, so we will be integrating terms of the form t^n.

Substituting u(x) by the series

u(x) = \sum_{n=0}^{\infty} a_n x^n

into both sides of the equation and using the Taylor expansion of \cosh x, we obtain

\sum_{n=2}^{\infty} n(n-1)\,a_n x^{n-2} = x \sum_{k=0}^{\infty} \frac{x^{2k}}{(2k)!} - \int_0^x t \left( \sum_{n=0}^{\infty} a_n t^n \right) dt.

Using the initial conditions yields a_0 = 0 and a_1 = 1. Evaluating the integrals, which involve only terms of the form t^n, n \ge 0, and writing a few terms on each side yields

2a_2 + 6a_3 x + 12a_4 x^2 + 20a_5 x^3 + \cdots = x \left( 1 + \frac{x^2}{2!} + \frac{x^4}{4!} + \cdots \right) - \left( \frac{1}{3}x^3 + \frac{a_2}{4}x^4 + \cdots \right).

Equating the coefficients of like powers of x on both sides, we find

a_2 = 0, \quad a_3 = \frac{1}{3!}, \quad a_4 = 0,

and in general

a_{2n} = 0 \quad \text{for } n \ge 0,

and

a_{2n+1} = \frac{1}{(2n+1)!} \quad \text{for } n \ge 0.

We find the solution u(x) in the series form

u(x) = x + \frac{1}{3!}x^3 + \frac{1}{5!}x^5 + \frac{1}{7!}x^7 + \cdots,

and in closed form the solution is

u(x) = \sinh x.
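Equating coefficients as above gives a recursion: (m+2)(m+1)\,a_{m+2} equals 1/(m-1)! when m is odd (from the x\cosh x term) minus a_{m-2}/m when m \ge 2 (from the integral term). A short exact-arithmetic sketch of that recursion (an illustration, not part of the thesis):

```python
from fractions import Fraction
from math import factorial

# Coefficients a_n of u(x) = Σ a_n x^n for
# u''(x) = x cosh x - ∫_0^x t u(t) dt,  u(0) = 0, u'(0) = 1.
N = 12
a = [Fraction(0)] * (N + 1)
a[1] = Fraction(1)                       # a_0 = u(0) = 0, a_1 = u'(0) = 1
for m in range(N - 1):
    # Coefficient of x^m from x cosh x = Σ x^{2k+1}/(2k)!  (odd powers only).
    drive = Fraction(1, factorial(m - 1)) if m % 2 == 1 else Fraction(0)
    # Coefficient of x^m from ∫_0^x t u(t) dt = Σ a_n x^{n+2}/(n+2).
    feedback = a[m - 2] / m if m >= 2 else Fraction(0)
    a[m + 2] = (drive - feedback) / ((m + 2) * (m + 1))
```

The computed coefficients match the pattern a_{2n} = 0, a_{2n+1} = 1/(2n+1)!, i.e. the Taylor coefficients of \sinh x.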


8.3.2 The Decomposition Method

The decomposition method and the modified decomposition method were discussed earlier. In this section we show how this method can be implemented to determine series solutions of Volterra integro-differential equations. A standard form of the Volterra integro-differential equation is

u^{(n)}(x) = f(x) + \int_0^x K(x,t)\,u(t)\,dt, \qquad u^{(k)}(0) = b_k, \quad 0 \le k \le n-1. \qquad (8.3.2.1)

We can find u(x) by integrating both sides of (8.3.2.1) from 0 to x as many times as the order of the derivative involved. We obtain

u(x) = \sum_{k=0}^{n-1} \frac{b_k}{k!} x^k + L^{-1}(f(x)) + L^{-1}\left( \int_0^x K(x,t)\,u(t)\,dt \right), \qquad (8.3.2.2)

where the sum \sum_{k=0}^{n-1} \frac{b_k}{k!} x^k is obtained by using the initial conditions, and L^{-1} is the n-fold integration operator. Now we apply the decomposition method by writing the solution u(x) of (8.3.2.2) as a decomposition series

u(x) = \sum_{n=0}^{\infty} u_n(x). \qquad (8.3.2.3)

Substituting (8.3.2.3) into both sides of (8.3.2.2) we get

\sum_{n=0}^{\infty} u_n(x) = \sum_{k=0}^{n-1} \frac{b_k}{k!} x^k + L^{-1}(f(x)) + L^{-1}\left( \int_0^x K(x,t) \left( \sum_{n=0}^{\infty} u_n(t) \right) dt \right), \qquad (8.3.2.4)

or equivalently

u_0(x) + u_1(x) + u_2(x) + \cdots = \sum_{k=0}^{n-1} \frac{b_k}{k!} x^k + L^{-1}(f(x)) + L^{-1}\left( \int_0^x K(x,t)\,u_0(t)\,dt \right) + L^{-1}\left( \int_0^x K(x,t)\,u_1(t)\,dt \right) + L^{-1}\left( \int_0^x K(x,t)\,u_2(t)\,dt \right) + \cdots. \qquad (8.3.2.5)

The components u_0(x), u_1(x), u_2(x), u_3(x), \ldots of the unknown function u(x) are determined in a recursive manner, in a similar way as discussed before, if we set

u_0(x) = \sum_{k=0}^{n-1} \frac{b_k}{k!} x^k + L^{-1}(f(x)), \qquad (8.3.2.6)

u_1(x) = L^{-1}\left( \int_0^x K(x,t)\,u_0(t)\,dt \right), \qquad (8.3.2.7)

u_2(x) = L^{-1}\left( \int_0^x K(x,t)\,u_1(t)\,dt \right), \qquad (8.3.2.8)

and so on. The scheme for the solution u(x) of equation (8.3.2.1) can therefore be written as

u_0(x) = \sum_{k=0}^{n-1} \frac{b_k}{k!} x^k + L^{-1}(f(x)), \qquad (8.3.2.9)

u_{n+1}(x) = L^{-1}\left( \int_0^x K(x,t)\,u_n(t)\,dt \right), \quad n \ge 0. \qquad (8.3.2.10)

The series obtained for u(x) can provide the exact solution in closed form. The following example will illustrate this technique.

Example 1. Solve the following Volterra integro-differential equation

u''(x) = x + \int_0^x (x-t)\,u(t)\,dt, \qquad u(0) = 0, \quad u'(0) = 1,

by using the decomposition method.

Applying the two-fold integration operator

L^{-1}(\cdot) = \int_0^x \int_0^x (\cdot)\,dx\,dx

to both sides of the equation, that is, integrating twice from 0 to x and using the given initial conditions, yields

u(x) = x + \frac{1}{3!}x^3 + L^{-1}\left( \int_0^x (x-t)\,u(t)\,dt \right).

Following the decomposition scheme (8.3.2.9) and (8.3.2.10) we find

u_0(x) = x + \frac{1}{3!}x^3,

u_1(x) = L^{-1}\left( \int_0^x (x-t)\,u_0(t)\,dt \right) = \frac{1}{5!}x^5 + \frac{1}{7!}x^7,

u_2(x) = L^{-1}\left( \int_0^x (x-t)\,u_1(t)\,dt \right) = \frac{1}{9!}x^9 + \frac{1}{11!}x^{11}.

Combining these components, we get the solution u(x) in the series form

u(x) = x + \frac{1}{3!}x^3 + \frac{1}{5!}x^5 + \frac{1}{7!}x^7 + \frac{1}{9!}x^9 + \frac{1}{11!}x^{11} + \cdots,

and this leads to

u(x) = \sinh x,

the exact solution in closed form.
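Because every component here is a polynomial, the recursion (8.3.2.9)–(8.3.2.10) can be carried out exactly: \int_0^x (x-t)t^k\,dt = x^{k+2}/((k+1)(k+2)), and the two-fold operator L^{-1} sends x^m to x^{m+2}/((m+1)(m+2)), so degree k feeds degree k+4. A small exact-arithmetic sketch (an illustration, not the thesis's computation):

```python
from fractions import Fraction

def next_component(c):
    # c maps degree -> coefficient of u_n.  Apply ∫_0^x (x-t) t^k dt
    # = x^{k+2}/((k+1)(k+2)), then the two-fold integration L^{-1}.
    return {k + 4: v / ((k + 1) * (k + 2) * (k + 3) * (k + 4))
            for k, v in c.items()}

u0 = {1: Fraction(1), 3: Fraction(1, 6)}   # u_0 = x + x^3/3!
u1 = next_component(u0)                    # expected: x^5/5! + x^7/7!
u2 = next_component(u1)                    # expected: x^9/9! + x^11/11!
```

Each pass reproduces the next pair of Taylor coefficients of \sinh x, exactly as in the example.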

8.3.3 Converting to Volterra Integral Equations

We can easily convert a Volterra integro-differential equation to an equivalent Volterra integral equation, provided the kernel is a difference kernel of the form K(x,t) = K(x-t). This is done by integrating both sides of the equation and using the initial conditions. To perform the conversion to a regular Volterra integral equation we use the formulas that convert multiple integrals to a single integral. The two formulas

\int_0^x \int_0^x u(t)\,dt\,dt = \int_0^x (x-t)\,u(t)\,dt, \qquad (8.3.3.1)

\int_0^x \int_0^x \int_0^x u(t)\,dt\,dt\,dt = \frac{1}{2!} \int_0^x (x-t)^2\,u(t)\,dt, \qquad (8.3.3.2)

transform double and triple integrals, respectively, into a single integral.
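The first reduction formula can be spot-checked numerically. The sketch below (an illustration with an arbitrarily chosen integrand u = \cos) compares a naive nested quadrature of the double integral with the single-integral form:

```python
import math

def trapezoid(f, a, b, n=400):
    # Composite trapezoidal rule on [a, b].
    h = (b - a) / n
    return h * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + i * h) for i in range(1, n)))

def double_integral(u, x):
    # ∫_0^x ( ∫_0^s u(t) dt ) ds computed as a nested quadrature.
    return trapezoid(lambda s: trapezoid(u, 0.0, s), 0.0, x)

def reduced(u, x):
    # The reduction formula: ∫_0^x (x - t) u(t) dt.
    return trapezoid(lambda t: (x - t) * u(t), 0.0, x, n=4000)

gap = abs(double_integral(math.cos, 1.3) - reduced(math.cos, 1.3))
```

The two evaluations agree to quadrature accuracy, as the formula predicts.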

To give a clear overview of this method we discuss the following example.

Example 1. Solve the following Volterra integro-differential equation

u'(x) = 2 - \frac{1}{4}x^2 + \frac{1}{4} \int_0^x u(t)\,dt, \qquad u(0) = 0,

by converting it to a standard Volterra integral equation.

Integrating both sides from 0 to x and using the initial condition, we obtain

u(x) = 2x - \frac{1}{12}x^3 + \frac{1}{4} \int_0^x \int_0^x u(t)\,dt\,dt,

which, by the formula converting a double integral to a single integral, gives

u(x) = 2x - \frac{1}{12}x^3 + \frac{1}{4} \int_0^x (x-t)\,u(t)\,dt.

This is a standard Volterra integral equation and can be solved by the decomposition method. We set

u_0(x) = 2x - \frac{1}{12}x^3,

which gives

u_1(x) = \frac{1}{4} \int_0^x (x-t) \left( 2t - \frac{1}{12}t^3 \right) dt = \frac{1}{12}x^3 - \frac{1}{960}x^5.

The term \frac{1}{12}x^3 appears with opposite signs in the components u_0(x) and u_1(x). Cancelling this noise term from u_0(x) and verifying that the remaining term satisfies the given equation shows that

u(x) = 2x

is the exact solution.
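The noise-term cancellation above can be reproduced in exact arithmetic by applying the integral operator termwise, using \int_0^x (x-t)t^k\,dt = x^{k+2}/((k+1)(k+2)) scaled by the leading 1/4. A sketch for illustration:

```python
from fractions import Fraction

def apply_kernel(c):
    # Termwise (1/4) ∫_0^x (x - t) t^k dt = (1/4) x^{k+2} / ((k+1)(k+2)),
    # acting on a polynomial stored as {degree: coefficient}.
    return {k + 2: Fraction(1, 4) * v / ((k + 1) * (k + 2))
            for k, v in c.items()}

u0 = {1: Fraction(2), 3: Fraction(-1, 12)}   # u_0 = 2x - x^3/12
u1 = apply_kernel(u0)                        # expected: x^3/12 - x^5/960
noise_sum = u0[3] + u1[3]                    # the ±x^3/12 noise terms cancel
```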


CHAPTER 9

Singular Integral Equations

9.1 Definitions

An integral equation is called a singular integral equation if one or both limits of integration become infinite, or if the kernel K(x,t) of the equation becomes infinite at one or more points in the interval of integration. For example, the integral equation of the first kind

f(x) = \lambda \int_{\alpha(x)}^{\beta(x)} K(x,t)\,u(t)\,dt \qquad (9.1.1)

or the integral equation of the second kind

u(x) = f(x) + \lambda \int_{\alpha(x)}^{\beta(x)} K(x,t)\,u(t)\,dt \qquad (9.1.2)

is called singular if the lower limit \alpha(x), the upper limit \beta(x), or both limits of integration are infinite. Examples of the first type of singular equations are the following:

u(x) = 1 + e^{-x} - \int_0^{\infty} u(t)\,dt, \qquad (9.1.3)

F(\lambda) = \int_{-\infty}^{\infty} e^{-i\lambda x}\,u(x)\,dx, \qquad (9.1.4)

L[u(x)] = \int_0^{\infty} e^{-\lambda x}\,u(x)\,dx. \qquad (9.1.5)

The integral equations (9.1.4) and (9.1.5) are the Fourier transform and the Laplace transform of the function u(x), respectively. In addition, these equations are Fredholm integral equations of the first kind with kernels K(x,t) = e^{-i\lambda x} and K(x,t) = e^{-\lambda x}. It is important to note that the Laplace and Fourier transforms are used for solving ordinary and partial differential equations with constant coefficients.

Examples of the second type of singular integral equations are given by

x^2 = \int_0^x \frac{1}{\sqrt{x-t}}\,u(t)\,dt, \qquad (9.1.6)

x = \int_0^x \frac{1}{(x-t)^{\alpha}}\,u(t)\,dt, \quad 0 < \alpha < 1, \qquad (9.1.7)

u(x) = 1 + 2\sqrt{x} - \int_0^x \frac{1}{\sqrt{x-t}}\,u(t)\,dt, \qquad (9.1.8)

where the singular behavior is attributed to the kernel K(x,t) becoming infinite as t \to x. Integral equations of the forms (9.1.6) and (9.1.7) are called Abel's problem and generalized Abel's integral equations, respectively. These singular integral equations are among the earliest integral equations established, by the Norwegian mathematician Niels Abel in 1823.
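For instance, the Laplace transform (9.1.5) can be approximated by truncating the infinite upper limit. A rough sketch (an illustration only, with the truncation point chosen so the discarded tail is negligible) that recovers L[1](\lambda) = 1/\lambda and L[x](\lambda) = 1/\lambda^2:

```python
import math

def laplace(u, lam, T=60.0, n=100000):
    # Truncated transform ∫_0^T e^{-lam x} u(x) dx via the trapezoidal rule;
    # the discarded tail is O(e^{-lam T}), negligible here.
    h = T / n
    f = lambda x: math.exp(-lam * x) * u(x)
    return h * (0.5 * f(0.0) + 0.5 * f(T) + sum(f(i * h) for i in range(1, n)))

L_one = laplace(lambda x: 1.0, 2.0)   # L[1](λ) = 1/λ, here 0.5
L_x = laplace(lambda x: x, 2.0)       # L[x](λ) = 1/λ², here 0.25
```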

9.2 Abel’s Problem

Abel in 1823 investigated the motion of a particle that slides down along a smooth unknown curve, in a vertical plane, under the influence of the gravitational field. It is assumed that the particle starts from rest at the point P, with vertical elevation x, slides along the unknown curve, to the lowest point on the curve where the vertical distance is = 0. The total time of descent T from the 푂highest point to the lowest point on the curve푥 is given in advance, and dependent on the elevation , hence expressed by

= ( ). 푥 (9.2.1)

Assuming that the curve of motion between푇 ℎ 푥the point and has an arc length , then

the velocity at a point on the curve, between and 푃, is given푂 by 푠

푄 푃 푂 = 2 ( ) (9.2.2) 푑푠 −� 푔 푥 − 푡 where is a variable coordinate defines푑푇 the vertical distance of the point , and is the

acceleration푡 of gravity assumed to be constant on the scale of the problem푄 . Integrating푔

97

both sides of (9.2.2) gives

= . (9.2.3) 푃 2 ( ) 푑푠 푇 − �푂 Setting � 푔 푥 − 푡

= ( ) , (9.2.4)

and using (9.2.1) we find the equation푑푠 푢of 푡motion푑푡 of the sliding particle

1 ( ) = ( ) . (9.2.5) 푥

푓 푥 �0 푢 푡 푑푡 We point out that ( ) is a predetermined√푥 function− 푡 that depends on the elevation and given by 푓 푥 푥

( ) = 2 ( ), (9.2.6)

where is the gravitational constant,푓 푥 and� 푔( ℎ) is푥 the time of descent from the highest

point to푔 the lowest point on the curve. Theℎ main푥 goal of Abel’s problem is to determine

the unknown function ( ) under the integral sign that will define the equation of the

curve. Notice that Abel’s푢 푥 integral equation is a Volterra integral equation of the first kind

with singular kernel. The kernel in (9.2.5) is

1 ( , ) = , (9.2.7)

퐾 푥 푡 which shows that the kernel (9.2.7) is singular√푥 −in푡 that

( , ) . (9.2.8)

Taking Laplace transforms of퐾 both푥 푡 sides→ ∞ of (9푎푠.2.5) leads푡 → 푥 to

[ ( )] = [ ( )] [ / ] −1 2 퐿 푓 푥 퐿 푢 푥 퐿 푥1 2 = [ ( )] , (9.2.9) 훤 � � 퐿 푢 푥 1 2 푧 98

where is the gamma function. In Appendix B, the definition of the gamma function and

훤 some of the relations related to it are given. Notice that = , the equation (9.2.9) 1 2 becomes 훤 � � √휋

/ [ ( )] = [ ( )], (9.2.10) 1 2 푧 퐿 푢 푥 퐿 푓 푥 which can be rewritten by √휋

[ ( )] = [ ( )] . (9.2.11) 1 − 푧 2 퐿 푢 푥 �√휋 푧 퐿 푓 푥 � Setting 휋

( ) = ( ) ( ) , (9.2.12) 푥 1 − 2 ℎ 푥 �0 푥 − 푡 푓 푡 푑푡 into (9.2.11) yields

[ ( )] = [ ( )], (9.2.13) 푧 퐿 푢 푥 퐿 ℎ 푥 this gives 휋

1 [ ( )] = [ ( )], (9.2.14) ′ 퐿 푢 푥 퐿 ℎ 푥 upon using the fact 휋

[ ( )] = [ ( )]. (9.2.15) ′ Appling to both sides of (9.2.14)퐿 ℎ 푥yields 푧퐿easilyℎ 푥 calculable formula −1 퐿 1 ( ) ( ) = , (9.2.16) 푥 푑 푓 푡 푢 푥 � 푑푡 that will be used for the determination휋 푑푥of the0 √so푥lution.− 푡 Appendix A supplies a helpful tool for evaluating the integrals involved in (9.2.16).

The procedure of using the formula (9.2.16) that determines the solution of Abel’s problem (9.2.5) will be illustrated by the following example.


Example 1. We consider the following Abel's problem:

\pi = \int_0^x \frac{1}{\sqrt{x-t}}\,u(t)\,dt.

Substituting f(x) = \pi into formula (9.2.16) for u(x) gives

u(x) = \frac{1}{\pi} \frac{d}{dx} \int_0^x \frac{\pi}{\sqrt{x-t}}\,dt = \frac{d}{dx} \int_0^x \frac{dt}{\sqrt{x-t}}.

Making the substitution y = x - t, we obtain

u(x) = \frac{d}{dx}\left( 2\sqrt{x} \right) = \frac{1}{\sqrt{x}}.
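The result can be spot-checked numerically: the recovered u(t) = 1/\sqrt{t} should reproduce f(x) = \pi for every x > 0. The midpoint rule below (a rough illustration; the midpoints avoid evaluating the integrand at its singular endpoints) converges despite the inverse-square-root singularities:

```python
import math

def abel_lhs(u, x, n=200000):
    # Midpoint rule for ∫_0^x u(t)/sqrt(x - t) dt; midpoints avoid the
    # singularities of the integrand at t = 0 and t = x.
    h = x / n
    return h * sum(u((i + 0.5) * h) / math.sqrt(x - (i + 0.5) * h)
                   for i in range(n))

# u(t) = 1/sqrt(t) should give back f(x) = π at every x.
vals = [abel_lhs(lambda t: 1.0 / math.sqrt(t), x) for x in (0.5, 1.0, 2.0)]
```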

Abel introduced the more general singular integral equation

f(x) = \int_0^x \frac{1}{(x-t)^{\alpha}}\,u(t)\,dt, \quad 0 < \alpha < 1, \qquad (9.2.1.1)

known as the generalized Abel's integral equation. Abel's problem discussed above is the special case of the generalized equation with \alpha = \frac{1}{2}. To determine a practical formula for the solution u(x) of (9.2.1.1), and hence for Abel's problem, we apply the Laplace transform to both sides of equation (9.2.1.1). This yields

L[f(x)] = L[u(x)]\,L[x^{-\alpha}] = \frac{\Gamma(1-\alpha)}{z^{1-\alpha}}\,L[u(x)], \qquad (9.2.1.2)

where \Gamma is the gamma function. Equation (9.2.1.2) can be written as

L[u(x)] = \frac{z}{\Gamma(\alpha)\,\Gamma(1-\alpha)}\,\Gamma(\alpha)\,z^{-\alpha}\,L[f(x)], \qquad (9.2.1.3)

or equivalently

L[u(x)] = \frac{z}{\Gamma(\alpha)\,\Gamma(1-\alpha)}\,L[g(x)], \qquad (9.2.1.4)

where

g(x) = \int_0^x (x-t)^{\alpha-1}\,f(t)\,dt. \qquad (9.2.1.5)

Accordingly, equation (9.2.1.4) can be written as

L[u(x)] = \frac{\sin(\alpha\pi)}{\pi}\,L[g'(x)], \qquad (9.2.1.6)

upon using the identities

L[g'(x)] = z\,L[g(x)] \qquad (9.2.1.7)

and

\Gamma(\alpha)\,\Gamma(1-\alpha) = \frac{\pi}{\sin(\alpha\pi)}, \qquad (9.2.1.8)

from the theory of Laplace transforms and from Appendix B, respectively. Applying L^{-1} to both sides of (9.2.1.6) yields the easily calculable formula for determining the solution:

u(x) = \frac{\sin(\alpha\pi)}{\pi} \frac{d}{dx} \int_0^x \frac{f(t)}{(x-t)^{1-\alpha}}\,dt, \quad 0 < \alpha < 1. \qquad (9.2.1.9)

We first integrate the integral on the right-hand side of (9.2.1.9) by parts to obtain

\int_0^x \frac{f(t)}{(x-t)^{1-\alpha}}\,dt = \left[ -\frac{1}{\alpha} f(t)\,(x-t)^{\alpha} \right]_0^x + \frac{1}{\alpha} \int_0^x (x-t)^{\alpha}\,f'(t)\,dt = \frac{x^{\alpha}}{\alpha} f(0) + \frac{1}{\alpha} \int_0^x (x-t)^{\alpha}\,f'(t)\,dt. \qquad (9.2.1.10)

Differentiating both sides of (9.2.1.10) and using the Leibniz rule yields

\frac{d}{dx} \int_0^x \frac{f(t)}{(x-t)^{1-\alpha}}\,dt = \frac{f(0)}{x^{1-\alpha}} + \int_0^x \frac{f'(t)}{(x-t)^{1-\alpha}}\,dt. \qquad (9.2.1.11)

Substituting (9.2.1.11) into (9.2.1.9) yields the desired formula

u(x) = \frac{\sin(\alpha\pi)}{\pi} \left( \frac{f(0)}{x^{1-\alpha}} + \int_0^x \frac{f'(t)}{(x-t)^{1-\alpha}}\,dt \right), \quad 0 < \alpha < 1, \qquad (9.2.1.12)

which will be used to determine the solution of the generalized Abel's equation, and of the standard Abel's problem as well.

The following example shows how one can use (9.2.1.12) in solving Abel’s equations.

Example 2. Solve the following Abel's problem:

\pi x = \int_0^x \frac{1}{\sqrt{x-t}}\,u(t)\,dt.

In this example f(x) = \pi x, hence f(0) = 0 and f'(x) = \pi. Also \alpha = \frac{1}{2}, so that \sin(\alpha\pi) = 1. Using formula (9.2.1.12) and the first formula from Appendix A, we obtain

u(x) = \frac{1}{\pi} \int_0^x \frac{\pi}{\sqrt{x-t}}\,dt = 2\sqrt{x}.
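Formula (9.2.1.12) is straightforward to implement for any \alpha once a quadrature that tolerates the (x-t)^{\alpha-1} singularity is chosen. The sketch below (an illustration using a midpoint rule) recovers u(x) = 2\sqrt{x} from f(x) = \pi x at a sample point:

```python
import math

def abel_solution(f, fprime, alpha, x, n=200000):
    # Formula (9.2.1.12):
    # u(x) = sin(απ)/π * [ f(0)/x^{1-α} + ∫_0^x f'(t)/(x-t)^{1-α} dt ],
    # with the integral evaluated by a midpoint rule.
    h = x / n
    integral = h * sum(fprime((i + 0.5) * h) / (x - (i + 0.5) * h) ** (1 - alpha)
                       for i in range(n))
    return math.sin(alpha * math.pi) / math.pi * (f(0.0) / x ** (1 - alpha)
                                                  + integral)

# f(x) = πx with α = 1/2 should recover u(x) = 2√x; at x = 0.25, u = 1.
u_val = abel_solution(lambda x: math.pi * x, lambda x: math.pi, 0.5, 0.25)
```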



APPENDIX A

Integrals of Irrational Functions

Integrals involving the function \frac{t^n}{\sqrt{x-t}}, \quad n = 0, 1, 2, 3, \ldots

1. \int_0^x \frac{1}{\sqrt{x-t}}\,dt = 2\sqrt{x}.

2. \int_0^x \frac{t}{\sqrt{x-t}}\,dt = \frac{4}{3}\,x^{3/2}.

3. \int_0^x \frac{t^2}{\sqrt{x-t}}\,dt = \frac{16}{15}\,x^{5/2}.

4. \int_0^x \frac{t^3}{\sqrt{x-t}}\,dt = \frac{32}{35}\,x^{7/2}.

5. \int_0^x \frac{t^4}{\sqrt{x-t}}\,dt = \frac{256}{315}\,x^{9/2}.

6. \int_0^x \frac{t^5}{\sqrt{x-t}}\,dt = \frac{512}{693}\,x^{11/2}.

7. \int_0^x \frac{t^6}{\sqrt{x-t}}\,dt = \frac{2048}{3003}\,x^{13/2}.

8. \int_0^x \frac{t^7}{\sqrt{x-t}}\,dt = \frac{4096}{6435}\,x^{15/2}.

9. \int_0^x \frac{t^8}{\sqrt{x-t}}\,dt = \frac{65536}{109395}\,x^{17/2}.
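All of these entries are instances of the Beta-integral evaluation \int_0^x t^n (x-t)^{-1/2}\,dt = \frac{\Gamma(n+1)\,\Gamma(1/2)}{\Gamma(n+3/2)}\,x^{n+1/2}, which can be checked against the table with the standard library's gamma function (a verification sketch, not part of the original appendix):

```python
import math

def coeff(n):
    # Beta-integral value: ∫_0^x t^n (x-t)^{-1/2} dt = coeff(n) * x^{n+1/2}.
    return math.gamma(n + 1) * math.gamma(0.5) / math.gamma(n + 1.5)

table = [2, 4 / 3, 16 / 15, 32 / 35, 256 / 315, 512 / 693,
         2048 / 3003, 4096 / 6435, 65536 / 109395]
max_err = max(abs(coeff(n) - table[n]) for n in range(9))
```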


APPENDIX B

The Gamma Function \Gamma(x)

1. \Gamma(x) = \int_0^{\infty} t^{x-1} e^{-t}\,dt.

2. \Gamma(x+1) = x\,\Gamma(x).

3. \Gamma(1) = 1, \quad \Gamma(n+1) = n! \text{ for } n \text{ a nonnegative integer}.

4. \Gamma(x)\,\Gamma(1-x) = \frac{\pi}{\sin(\pi x)}.

5. \Gamma\!\left(\frac{3}{2}\right) = \frac{1}{2}\sqrt{\pi}.

6. \Gamma\!\left(\frac{1}{2}\right) = \sqrt{\pi}.

7. \Gamma\!\left(-\frac{1}{2}\right) = -2\sqrt{\pi}.
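The identities above can be spot-checked with the standard library's math.gamma (a quick verification sketch, not part of the original appendix):

```python
import math

sqrt_pi = math.sqrt(math.pi)
checks = [
    abs(math.gamma(0.5) - sqrt_pi),              # Γ(1/2) = √π
    abs(math.gamma(1.5) - 0.5 * sqrt_pi),        # Γ(3/2) = √π/2
    abs(math.gamma(-0.5) + 2.0 * sqrt_pi),       # Γ(-1/2) = -2√π
    abs(math.gamma(6) - math.factorial(5)),      # Γ(n+1) = n!
    abs(math.gamma(0.3) * math.gamma(0.7)
        - math.pi / math.sin(0.3 * math.pi)),    # reflection formula
]
worst = max(checks)
```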
