
ON STOCHASTIC INTEGRATION FOR DISTRIBUTION THEORY

A Dissertation Submitted to the Graduate Faculty of the Louisiana State University and Agricultural and Mechanical College in partial fulfillment of the requirements for the degree of Doctor of Philosophy in The Department of Mathematics

by
Said K. Ngobi
B., Makerere University, Kampala, Uganda, 1986
M.S., Louisiana State University, 1990
December 2000

Acknowledgments

This dissertation would not have seen the light of day without several significant contributions. The most important of them, without any doubt, came from my advisor, Professor Hui-Hsiung Kuo, who continuously dedicated his time and attention to ensure that I finished my work. I am grateful for his advice, guidance, patience and mentorship. I thank Professor Padmanabhan Sundar for introducing me to the field of stochastic integration and Dr. George Cochran for proofreading my dissertation. I give special thanks to my committee members for being patient.

I thank all the people in the Department of Mathematics for providing me with a pleasant working environment. A special thanks to Professor Leonard Richardson who granted me an assistantship to come from Uganda and study at LSU. To my fellow graduate students, I am very grateful for all your friendship and support.

I wish to extend a very special thank you to my family in Uganda, for your encouragement and support which were strong enough to overcome the tremendous distance of separation. To Nasser, Zaharah and Faisal, I will always remember you and I dedicate this work to my late Father.

Finally, a heartfelt thank you to Monica, Ishmael and Shereef, who gave me a lot of emotional support.

Table of Contents

Acknowledgments

Abstract

Chapter 1. Introduction

Chapter 2. Background
  2.1 Concepts and Notations
  2.2 Hermite Polynomials, Wick Tensors, and Multiple Wiener Integrals
  2.3 The White Noise Differential Operator
  2.4 Characterization Theorems and Convergence of Generalized Functions
  2.5 Pettis and Bochner Integrals
  2.6 White Noise Integrals
  2.7 Donsker's Delta Function

Chapter 3. A Generalization of the Itô Formula
  3.1 The Itô Integral
  3.2 The Itô Formula
  3.3 An Extension of the Itô Integral
  3.4 An Anticipating Itô Formula

Chapter 4. A Generalization of the Clark-Ocone Formula
  4.1 Representation of Brownian Functionals by Stochastic Integrals
  4.2 A Generalized Clark-Ocone Formula
  4.3 The Formula for the Hermite Brownian Functional
  4.4 The Formula for Donsker's Delta Function
  4.5 Generalization to Compositions with Tempered Distributions

References

Vita

Abstract

This thesis consists of two parts, each part concentrating on a different problem from the theory of stochastic integration. Chapter 1 contains the introduction, explaining the results in this dissertation in general terms. We use the infinite-dimensional space $\mathcal{S}'(\mathbb{R})$, which is the dual of the Schwartz space $\mathcal{S}(\mathbb{R})$, endowed with the Gaussian measure $\mu$. The space $(L^2)$ is defined as $(L^2) \equiv L^2(\mathcal{S}'(\mathbb{R}), \mu)$, and our results are based on the Gel'fand triple $(\mathcal{S})_\beta \subset (L^2) \subset (\mathcal{S})_\beta^*$. The necessary preliminary background in white noise analysis is elaborated in Chapter 2.

In the first part of the dissertation we present a generalization of the Itô formula to anticipating processes in the white noise framework; this is contained in Chapter 3. We first introduce an extension of the Itô integral to anticipating processes, called the Hitsuda-Skorokhod integral. As a motivation for our first main result, we then use the anticipating Itô formula obtained in [21] for processes of the type $\theta(X(t), B(c))$, where $X(t)$ is a Wiener integral, $B(t) = \langle \cdot, 1_{[0,t)} \rangle$ is Brownian motion, and $a \le c \le b$. In our result we generalize the formula in [21] to processes of the form $\theta(X(t), F)$, where $X(t)$ is a Wiener integral and $F \in \mathcal{W}_{1/2}$; the space $\mathcal{W}_{1/2}$ is a special subspace of $(L^2)$.

Chapter 4 contains the second part of our work. We first state the Clark-Ocone (C-O) formula in the white noise framework. Before the main result, we verify the formula for Brownian functionals using a very important white noise tool, the S-transform; previous proofs of this formula have not used this method. As our second main result, we extend the formula to generalized Brownian functionals of the form $\Phi = \sum_{n=0}^{\infty} \langle :\cdot^{\otimes n}:, f_n \rangle$, where $f_n \in L^2(\mathbb{R})^{\hat\otimes n}$. The formula for such $\Phi$ takes on the same form as that for Brownian functionals. As examples, we verify the formula for the Hermite Brownian functional $:B(t)^n:_t$ using the Itô formula, and we compute the C-O formula for Donsker's delta function using the result for the Hermite Brownian functional. Finally, the formula is generalized to compositions of tempered distributions with Brownian motion, i.e., to functionals of the form $f(B(t))$, where $f \in \mathcal{S}'(\mathbb{R})$ and $B(t) = \langle \cdot, 1_{[0,t]} \rangle$ is Brownian motion.

Chapter 1. Introduction

In the first part of this dissertation we prove, in the white noise framework, the anticipating Itô formula

$$\theta(X(t), F) = \theta(X(a), F) + \int_a^t \partial_s^* \Big( f(s)\, \frac{\partial \theta}{\partial x}(X(s), F) \Big)\, ds + \frac{1}{2} \int_a^t f(s)^2\, \frac{\partial^2 \theta}{\partial x^2}(X(s), F)\, ds + \int_a^t f(s)\, (\partial_s F)\, \frac{\partial^2 \theta}{\partial x \partial y}(X(s), F)\, ds, \quad (1.1)$$

where $f \in L^\infty([a, b])$, $X(t) = \int_a^t f(s)\, dB(s)$ is a Wiener integral, $F \in \mathcal{W}_{1/2}$, $\theta \in C_b^2(\mathbb{R}^2)$, and the integral $\int_a^t \partial_s^* \big( f(s)\, \frac{\partial \theta}{\partial x}(X(s), F) \big)\, ds$ is a Hitsuda-Skorokhod integral as introduced by Hitsuda [9] and Skorokhod [31].

As our second result, we study the Clark-Ocone representation formula [4], [27] for Brownian functionals $F \in \mathcal{W}_{1/2}$, given by

$$F = E(F) + \int_T E(\partial_t F \mid \mathcal{F}_t)\, dB(t). \quad (1.2)$$

Since $E(\partial_t F \mid \mathcal{F}_t)$ is nonanticipating, equation (1.2) can be rewritten as

$$F = E(F) + \int_T \partial_t^*\, E(\partial_t F \mid \mathcal{F}_t)\, dt. \quad (1.3)$$

We verify this equation for Brownian functionals using the S-transform. Then we extend this representation to generalized functions $\Phi$ in the Kondratiev-Streit space satisfying $\Phi = \sum_{n=0}^{\infty} \langle :\cdot^{\otimes n}:, f_n \rangle$, $f_n \in L^2(\mathbb{R})^{\hat\otimes n}$. Since the Hermite Brownian functional plays a very important role in white noise analysis, we verify its Clark-Ocone representation using the Itô formula. As a specific example of a generalized Brownian functional that meets our conditions, we compute the Clark-Ocone formula for Donsker's delta function, which is

$$\delta(B(t) - a) = \frac{1}{\sqrt{2\pi t}}\, e^{-\frac{a^2}{2t}} + \int_{-\infty}^{t} \frac{1}{\sqrt{2\pi}}\, \frac{a - B(s)}{(t-s)^{3/2}}\, e^{-\frac{(a - B(s))^2}{2(t-s)}}\, dB(s). \quad (1.4)$$

This formula relies heavily on the representation for the Hermite Brownian functional. Finally, we generalize the Clark-Ocone formula to compositions of tempered distributions with Brownian motion, i.e., to generalized Brownian functionals of the form $f(B(t))$, where $f \in \mathcal{S}'(\mathbb{R})$. As an observation, another generalization of the Clark-Ocone formula was obtained in [6] for regular generalized functions in the space $(\mathcal{S})_1^*$, which is larger than our space $(\mathcal{S})_\beta^*$, $0 \le \beta < 1$; but we have a stronger result than the one in [6].

Chapter 2. Background

In this chapter, the basic concepts and notations from white noise analysis of interest to our work are introduced. The proofs of the necessary theorems are not presented here, and the interested reader is provided with the relevant references.

2.1 Concepts and Notations

Let $E$ be a real separable Hilbert space with norm $|\cdot|_0$, and let $A$ be a densely defined self-adjoint operator on $E$ whose eigenvalues $\{\lambda_n\}_{n \ge 1}$ satisfy the following conditions:

1. $1 < \lambda_1 \le \lambda_2 \le \lambda_3 \le \cdots$;

2. $\sum_{n=1}^{\infty} \lambda_n^{-2} < \infty$.

For any $p \ge 0$, let $E_p$ be the completion of $E$ with respect to the norm $|f|_p := |A^p f|_0$. We note that $E_p$ is a Hilbert space with the norm $|\cdot|_p$ and that $E_p \subset E_q$ for any $p \ge q$. The second condition on the eigenvalues implies that the inclusion map $i: E_{p+1} \to E_p$ is a Hilbert-Schmidt operator.

Let

$\mathcal{E}$ = the projective limit of $\{E_p;\ p \ge 0\}$,

$\mathcal{E}'$ = the dual space of $\mathcal{E}$.

Then the space $\mathcal{E} = \bigcap_{p \ge 0} E_p$, equipped with the topology given by the family $\{|\cdot|_p\}_{p \ge 0}$ of semi-norms, is a nuclear space. Hence $\mathcal{E} \subset E \subset \mathcal{E}'$ is a Gel'fand triple with the following continuous inclusions:

$$\mathcal{E} \subset E_q \subset E_p \subset E \subset E_p' \subset E_q' \subset \mathcal{E}', \qquad q \ge p \ge 0.$$

We used the Riesz Representation Theorem to identify the dual of $E$ with itself.

It can be shown that for all $p \ge 0$, the dual space $E_p'$ is isomorphic to $E_{-p}$, which is the completion of the space $E$ with respect to the norm $|f|_{-p} = |A^{-p} f|_0$.

Minlos's theorem allows us to define a unique probability measure $\mu$ on the Borel subsets of $\mathcal{E}'$ with the property that for all $f \in \mathcal{E}$, the random variable $\langle \cdot, f \rangle$ is normally distributed with mean $0$ and variance $|f|_0^2$. We use $\langle \cdot, \cdot \rangle$ to denote the duality between $\mathcal{E}'$ and $\mathcal{E}$. This means that the characteristic functional of $\mu$ is given by

$$\int_{\mathcal{E}'} e^{i \langle x, \xi \rangle}\, d\mu(x) = e^{-\frac{1}{2} |\xi|_0^2}, \qquad \forall\, \xi \in \mathcal{E}. \quad (2.1)$$

The probability space $(\mathcal{E}', \mu)$ is called the white noise space. The space $L^2(\mathcal{E}', \mu)$ will be denoted by $(L^2)$; i.e., $(L^2)$ is the set of functions $\varphi: \mathcal{E}' \to \mathbb{C}$ such that $\varphi$ is measurable and $\int_{\mathcal{E}'} |\varphi(x)|^2\, d\mu(x) < \infty$. If we denote by $E_c$ the complexification of $E$, the Wiener-Itô Theorem allows us to associate to each $\varphi \in (L^2)$ a unique sequence $\{f_n\}_{n \ge 0}$, $f_n \in E_c^{\hat\otimes n}$, and express $\varphi$ as $\varphi = \sum_{n=0}^{\infty} I_n(f_n)$, where $I_n(f_n)$ is a multiple Wiener integral of order $n$ (see [11]). This decomposition is similar to what is referred to as the "Fock space decomposition," as shown in [26].

The $(L^2)$-norm $\|\varphi\|_0$ of $\varphi$ is given by

$$\|\varphi\|_0^2 = \sum_{n=0}^{\infty} n!\, |f_n|_0^2,$$

where $|\cdot|_0$ denotes the $E_c^{\hat\otimes n}$-norm induced from the norm $|\cdot|_0$ on $E$. For any $p \ge 0$, let $|\cdot|_p$ be the $E_{p,c}^{\hat\otimes n}$-norm induced from the norm $|\cdot|_p$ on $E_p$, and define

$$\|\varphi\|_p = \Big( \sum_{n=0}^{\infty} n!\, |f_n|_p^2 \Big)^{1/2}.$$

Let

$$(\mathcal{E}_p) = \{\varphi \in (L^2);\ \|\varphi\|_p < \infty\}.$$

If $0 < p \le q$, then $(\mathcal{E}_q) \subset (\mathcal{E}_p)$, with the property that for any $q \ge 0$ there exists $p > q$ such that the inclusion map $I_{p,q}: (\mathcal{E}_p) \hookrightarrow (\mathcal{E}_q)$ is a Hilbert-Schmidt operator and $\|I_{p,q}\|_{HS}^2 \le (1 - \|i_{p,q}\|_{HS}^2)^{-1}$, where $i_{p,q}$ is the inclusion map from $E_p$ into $E_q$ noted earlier.

Analogous to the way $\mathcal{E}$ was defined, we also define

$(\mathcal{E})$ = the projective limit of $\{(\mathcal{E}_p);\ p \ge 0\}$,

$(\mathcal{E})^*$ = the dual space of $(\mathcal{E})$.

With the above result, $(\mathcal{E}) = \bigcap_{p \ge 0} (\mathcal{E}_p)$, with the topology generated by the family $\{\|\cdot\|_p;\ p \ge 0\}$ of norms. It is a nuclear space, forming the infinite-dimensional Gel'fand triple $(\mathcal{E}) \subset (L^2) \subset (\mathcal{E})^*$. Moreover, we have the following continuous inclusions:

$$(\mathcal{E}) \subset (\mathcal{E}_q) \subset (\mathcal{E}_p) \subset (L^2) \subset (\mathcal{E}_{-p}) \subset (\mathcal{E}_{-q}) \subset (\mathcal{E})^*, \qquad q \ge p \ge 0.$$

The elements in $(\mathcal{E})$ are called test functions on $\mathcal{E}'$, while the elements in $(\mathcal{E})^*$ are called generalized functions on $\mathcal{E}'$. The bilinear pairing between $(\mathcal{E})$ and $(\mathcal{E})^*$ is denoted by $\langle\langle \cdot, \cdot \rangle\rangle$. If $\varphi \in (L^2)$ and $\psi \in (\mathcal{E})$, then $\langle\langle \varphi, \psi \rangle\rangle = (\varphi, \overline{\psi})$, where $(\cdot, \cdot)$ is the inner product on the complex Hilbert space $(L^2)$.

An element $\varphi \in (\mathcal{E})$ has a unique representation $\varphi = \sum_{n=0}^{\infty} I_n(f_n)$, $f_n \in \mathcal{E}_c^{\hat\otimes n}$, with the norm

$$\|\varphi\|_p^2 = \sum_{n=0}^{\infty} n!\, |f_n|_p^2 < \infty, \qquad \forall\, p \ge 0.$$

Similarly, an element $\phi \in (\mathcal{E})^*$ can be written as $\phi = \sum_{n=0}^{\infty} I_n(F_n)$, $F_n \in (\mathcal{E}_c')^{\hat\otimes n}$, with the norm

$$\|\phi\|_{-p}^2 = \sum_{n=0}^{\infty} n!\, |F_n|_{-p}^2.$$

The bilinear pairing between $\phi$ and $\varphi$ is then represented as

$$\langle\langle \phi, \varphi \rangle\rangle = \sum_{n=0}^{\infty} n!\, \langle F_n, f_n \rangle.$$

Kondratiev and Streit (see Chapter 4 in [21]) constructed a wider Gel'fand triple than the one above in the following way.

Let $0 \le \beta < 1$ be a fixed number. For each $p \ge 0$, define

$$\|\varphi\|_{p,\beta} = \Big( \sum_{n=0}^{\infty} (n!)^{1+\beta}\, |(A^p)^{\otimes n} f_n|_0^2 \Big)^{1/2}$$

and let

$$(\mathcal{E}_p)_\beta = \{\varphi \in (L^2);\ \|\varphi\|_{p,\beta} < \infty\}.$$

It should be noted here that $(\mathcal{E}_0)_\beta \ne (L^2)$ unless $\beta = 0$. Similarly to the above setting, for any $p \ge q \ge 0$ we have $(\mathcal{E}_p)_\beta \subset (\mathcal{E}_q)_\beta$, and the inclusion map $(\mathcal{E}_{p+\frac{\alpha}{2}})_\beta \hookrightarrow (\mathcal{E}_p)_\beta$ is a Hilbert-Schmidt operator for some positive constant $\alpha$. Let

$(\mathcal{E})_\beta$ = the projective limit of $\{(\mathcal{E}_p)_\beta;\ p \ge 0\}$,

$(\mathcal{E})_\beta^*$ = the dual space of $(\mathcal{E})_\beta$.

Then $(\mathcal{E})_\beta$ is a nuclear space, and we have the Gel'fand triple $(\mathcal{E})_\beta \subset (L^2) \subset (\mathcal{E})_\beta^*$ with the following continuous inclusions:

$$(\mathcal{E})_\beta \subset (\mathcal{E}_p)_\beta \subset (L^2) \subset (\mathcal{E}_p)_\beta^* \subset (\mathcal{E})_\beta^*, \qquad p \ge 0,$$

where the norm on $(\mathcal{E}_p)_\beta^*$ is given by

$$\|\varphi\|_{-p,-\beta} = \Big( \sum_{n=0}^{\infty} (n!)^{1-\beta}\, |(A^{-p})^{\otimes n} f_n|_0^2 \Big)^{1/2}.$$

Thus we have the following relationships with the Gel'fand triple first discussed:

$$(\mathcal{E})_\beta \subset (\mathcal{E}) \subset (L^2) \subset (\mathcal{E})^* \subset (\mathcal{E})_\beta^*.$$

An even wider Gel'fand triple was introduced in [5] to create what is known as the Cochran-Kuo-Sengupta (CKS) space; the interested reader is referred to that paper for its construction.

2.2 Hermite Polynomials, Wick Tensors, and Multiple Wiener Integrals

The Hermite polynomial of degree $n$ with parameter $\sigma^2$ is defined by

$$:x^n:_{\sigma^2} = (-\sigma^2)^n\, e^{\frac{x^2}{2\sigma^2}}\, D_x^n\, e^{-\frac{x^2}{2\sigma^2}}.$$

These polynomials have a generating function given by

$$\sum_{n=0}^{\infty} \frac{t^n}{n!}\, :x^n:_{\sigma^2} = e^{tx - \frac{1}{2} \sigma^2 t^2}. \quad (2.2)$$

The following formulas are also helpful:

$$:x^n:_{\sigma^2} = \sum_{k=0}^{[n/2]} \binom{n}{2k} (2k-1)!!\, (-\sigma^2)^k\, x^{n-2k}, \quad (2.3)$$

$$x^n = \sum_{k=0}^{[n/2]} \binom{n}{2k} (2k-1)!!\, \sigma^{2k}\, :x^{n-2k}:_{\sigma^2}, \quad (2.4)$$

where $(2k-1)!! = (2k-1)(2k-3) \cdots 3 \cdot 1$, with the convention $(-1)!! = 1$.

The trace operator is the element $\tau \in (\mathcal{E}_c')^{\hat\otimes 2}$ defined by

$$\langle \tau, \xi \otimes \eta \rangle = \langle \xi, \eta \rangle, \qquad \xi, \eta \in \mathcal{E}_c.$$

Let $x \in \mathcal{E}'$. The Wick tensor $:x^{\otimes n}:$ of an element $x$ is defined as

$$:x^{\otimes n}: = \sum_{k=0}^{[n/2]} \binom{n}{2k} (2k-1)!!\, (-1)^k\, x^{\otimes(n-2k)}\, \hat\otimes\, \tau^{\otimes k},$$

where $\tau$ is the trace operator. The following formula, analogous to equation (2.4), is also important for Wick tensors:

$$x^{\otimes n} = \sum_{k=0}^{[n/2]} \binom{n}{2k} (2k-1)!!\, :x^{\otimes(n-2k)}:\, \hat\otimes\, \tau^{\otimes k}.$$

For $x \in \mathcal{E}'$ and $\xi \in \mathcal{E}$, the following equalities related to Wick tensors hold:

$$\langle :x^{\otimes n}:, \xi^{\otimes n} \rangle = :\langle x, \xi \rangle^n:_{|\xi|_0^2},$$

$$\big\| \langle :\cdot^{\otimes n}:, \xi^{\otimes n} \rangle \big\|_0 = \sqrt{n!}\, |\xi|_0^n.$$

In order to make mathematical computations concerning multiple Wiener integrals easier, they are expressed in terms of Wick tensors. This is achieved via two statements as follows (see Theorem 5.4 in [21]).

1. Let $h_1, h_2, \ldots \in E$ be orthogonal and let $n_1 + n_2 + \cdots = n$. Then for almost all $x \in \mathcal{E}'$, we have

$$\langle :x^{\otimes n}:, h_1^{\otimes n_1} \hat\otimes h_2^{\otimes n_2} \hat\otimes \cdots \rangle = :\langle x, h_1 \rangle^{n_1}:_{|h_1|_0^2}\; :\langle x, h_2 \rangle^{n_2}:_{|h_2|_0^2} \cdots.$$

2. Let $f \in E_c^{\hat\otimes n}$. Then for almost all $x \in \mathcal{E}'$,

$$I_n(f)(x) = \langle :x^{\otimes n}:, f \rangle,$$

where $I_n(f)(x)$ is a multiple Wiener integral of order $n$.

With this relationship, we are able to write test functions and generalized functions in terms of Wick tensors as follows. Any element $\varphi \in (L^2)$ can be expressed as

$$\varphi(x) = \sum_{n=0}^{\infty} \langle :x^{\otimes n}:, f_n \rangle, \quad \mu\text{-a.e. } x \in \mathcal{E}',\ f_n \in E_c^{\hat\otimes n}.$$

Suppose $\varphi \in (\mathcal{E})$. Since $\mathcal{E}_c^{\hat\otimes n}$ is dense in $E_c^{\hat\otimes n}$, we have

$$\varphi(x) = \sum_{n=0}^{\infty} \langle :x^{\otimes n}:, f_n \rangle, \qquad f_n \in \mathcal{E}_c^{\hat\otimes n}.$$

For an element $\phi \in (\mathcal{E}_p)^*$, it follows that

$$\phi(x) = \sum_{n=0}^{\infty} \langle :x^{\otimes n}:, F_n \rangle, \qquad F_n \in (\mathcal{E}_p')^{\hat\otimes n}.$$

2.3 The White Noise Differential Operator

Let $y \in \mathcal{E}'$ and $\varphi = \langle :x^{\otimes n}:, f \rangle \in (\mathcal{E})_\beta$. It can be shown that the directional derivative of $\varphi$ in the direction $y$ exists and is given by

$$\lim_{t \to 0} \frac{\varphi(x + ty) - \varphi(x)}{t} = n\, \langle :x^{\otimes(n-1)}:, y \hat\otimes_1 f \rangle,$$

where $y \hat\otimes_1 \cdot : E_c^{\hat\otimes n} \to E_c^{\hat\otimes(n-1)}$ is the unique continuous linear map such that

$$y \hat\otimes_1 g^{\otimes n} = \langle y, g \rangle\, g^{\otimes(n-1)}, \qquad g \in E_c.$$

This shows that the function $\varphi$ has Gâteaux derivative $D_y \varphi$. For general $\varphi(x) = \sum_{n=0}^{\infty} \langle :x^{\otimes n}:, f_n \rangle$, $f_n \in \mathcal{E}_c^{\hat\otimes n}$, we define an operator $D_y$ on $(\mathcal{E})_\beta$ as

$$D_y \varphi(x) \equiv \sum_{n=1}^{\infty} n\, \langle :x^{\otimes(n-1)}:, y \hat\otimes_1 f_n \rangle.$$

It can be shown that $D_y$ is a continuous linear operator on $(\mathcal{E})_\beta$ (see Section 9.1 in [21]). By the duality between $(\mathcal{E})_\beta^*$ and $(\mathcal{E})_\beta$, the adjoint operator $D_y^*$ of $D_y$ can be defined by

$$\langle\langle D_y^* \Phi, \varphi \rangle\rangle = \langle\langle \Phi, D_y \varphi \rangle\rangle, \qquad \Phi \in (\mathcal{E})^*,\ \varphi \in (\mathcal{E}).$$

Now let $E$ be the Schwartz space $\mathcal{S}(\mathbb{R})$ of all infinitely differentiable functions $f: \mathbb{R} \to \mathbb{R}$ such that for all $n, k \in \mathbb{N}$,

$$\sup_{x \in \mathbb{R}} \Big| x^k \Big( \frac{d}{dx} \Big)^n f(x) \Big| < \infty.$$

If we take $y = \delta_t$, the Dirac delta function at $t$, then:

1. $\partial_t \equiv D_{\delta_t}$ is called the white noise differential operator, the Hida differential operator, or the annihilation operator;

2. $\partial_t^* \equiv D_{\delta_t}^*$ is called the creation operator; and

3. for $\varphi \in (\mathcal{S})_\beta$, $\dot{B}(t)\varphi \equiv \partial_t \varphi + \partial_t^* \varphi$ is called white noise multiplication.

2.4 Characterization Theorems and Convergence of Generalized Functions

Let $h \in E_c$. The renormalization $:e^{\langle \cdot, h \rangle}:$ of $e^{\langle \cdot, h \rangle}$ is defined by

$$:e^{\langle \cdot, h \rangle}: = \sum_{n=0}^{\infty} \frac{1}{n!} \langle :\cdot^{\otimes n}:, h^{\otimes n} \rangle. \quad (2.5)$$

Moreover,

$$:e^{\langle \cdot, h \rangle}: = e^{\langle \cdot, h \rangle - \frac{1}{2} \langle h, h \rangle}. \quad (2.6)$$
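In one dimension, pairing against a single fixed direction reduces (2.5) and (2.6) to the generating function (2.2): $\sum_{n=0}^{\infty} \frac{t^n}{n!} :x^n:_{\sigma^2} = e^{tx - \sigma^2 t^2 / 2}$. A quick numeric check of a truncated sum, assuming the standard Hermite recurrence $:x^{n+1}:_{\sigma^2} = x\,:x^n:_{\sigma^2} - n\sigma^2\,:x^{n-1}:_{\sigma^2}$ (not stated in the text), might look like:

```python
import math

def wick_powers(x, n_max, s2):
    # :x^k:_{sigma^2} for k = 0..n_max via the Hermite recurrence
    hs = [1.0, x]
    for m in range(1, n_max):
        hs.append(x * hs[-1] - m * s2 * hs[-2])
    return hs[:n_max + 1]

x, t, s2, N = 1.2, 0.7, 2.0, 60
hs = wick_powers(x, N, s2)
partial = sum(t ** n / math.factorial(n) * hs[n] for n in range(N + 1))
closed = math.exp(t * x - 0.5 * s2 * t ** 2)
# the partial sum and the closed form agree to high accuracy
```

The truncation converges rapidly because the terms decay roughly like $(t\sigma)^n / \sqrt{n!}$.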

If $\xi \in \mathcal{E}_c$, then $:e^{\langle \cdot, \xi \rangle}: \in (\mathcal{E})_\beta$ for all $0 \le \beta < 1$. If $y \in \mathcal{E}_c'$, then $:e^{\langle \cdot, y \rangle}: \in (\mathcal{E})_\beta^*$. For such $y$ and $\xi$ we have

$$\langle\langle :e^{\langle \cdot, y \rangle}:, :e^{\langle \cdot, \xi \rangle}: \rangle\rangle = e^{\langle y, \xi \rangle}. \quad (2.7)$$

The renormalized exponential functions $\{ :e^{\langle \cdot, \xi \rangle}: \mid \xi \in \mathcal{E}_c \}$ are linearly independent and span a dense subspace of $(\mathcal{E})_\beta$.

For all $\Phi \in (\mathcal{E})_\beta^*$, the S-transform $S\Phi$ of $\Phi$ is defined to be the function on $\mathcal{E}_c$ given by

$$S\Phi(\xi) = \langle\langle \Phi, :e^{\langle \cdot, \xi \rangle}: \rangle\rangle, \qquad \xi \in \mathcal{E}_c. \quad (2.8)$$

The S-transform is an injective function because the exponential functions span a dense subspace of $(\mathcal{E})_\beta$. By using the S-transform, we can extend the operator $\partial_t$ from $(\mathcal{S})_\beta$ to the space $(\mathcal{S})_\beta^*$ as follows.

For $\Phi \in (\mathcal{S})_\beta^*$, let $F(\xi) = S\Phi(\xi)$ and let $F'(\xi; t)$ be the functional derivative of $F$ at $\xi$. The function $F$ is said to have the first variation if it satisfies the condition

$$F(\xi + \delta\xi) = F(\xi) + \delta F(\xi) + o(\delta\xi), \qquad \xi, \delta\xi \in \mathcal{S}(\mathbb{R}),$$

where $\delta F(\xi)$ is given by

$$\delta F(\xi) = \int_{\mathbb{R}} F'(\xi; t)\, \delta\xi(t)\, dt.$$

Definition 2.1 (see [20]). Suppose that $F'(\xi; t)$, where $\xi \in \mathcal{S}(\mathbb{R})$, is the S-transform of some generalized function $\Phi$. Then we define $\partial_t \Phi$ to be the generalized function with S-transform $F'(\xi; t)$, i.e.,

$$S(\partial_t \Phi)(\xi) = F'(\xi; t), \qquad \xi \in \mathcal{S}(\mathbb{R}).$$

Example. The Hermite Brownian functional gives us the following:

$$\Phi = \langle :\cdot^{\otimes n}:, 1_{[0,t]}^{\otimes n} \rangle,$$

$$F(\xi) = S\Phi(\xi) = \langle 1_{[0,t]}, \xi \rangle^n = \Big( \int_0^t \xi(u)\, du \Big)^n,$$

$$\delta F(\xi) = n\, \langle 1_{[0,t]}, \xi \rangle^{n-1} \langle 1_{[0,t]}, \delta\xi \rangle = \int_{\mathbb{R}} n\, 1_{[0,t]}(u)\, \langle 1_{[0,t]}, \xi \rangle^{n-1}\, \delta\xi(u)\, du,$$

$$F'(\xi; u) = n\, 1_{[0,t]}(u)\, \langle 1_{[0,t]}, \xi \rangle^{n-1},$$

$$\partial_t \Phi = n\, \langle :\cdot^{\otimes(n-1)}:, 1_{[0,t]}^{\otimes(n-1)} \rangle.$$

The main purpose of the S-transform is to change an otherwise infinite-dimensional object into a finite-dimensional one. By so doing, ordinary calculus can be applied in the finite-dimensional setting before the result is finally converted back (if possible) into the infinite-dimensional setup. A good example is the solution of white noise stochastic differential equations: as was done in Chapter 13 of [21], the S-transform is first applied to the white noise differential equation, thereby producing an ordinary differential equation, which is solved before the result is converted back to the white noise setup. It is therefore necessary to have unique characterization theorems for both test and generalized functions, which we will state without proof.

The interested reader is referred to [21], Chapter 8. We start by characterizing generalized functions in the following theorem; the theorem after it then characterizes test functions. In both cases, the S-transform plays a very important role.

Theorem 2.2. Let $\Phi \in (\mathcal{E})_\beta^*$. Then its S-transform $F = S\Phi$ satisfies the conditions:

(a) For any $\xi$ and $\eta$ in $\mathcal{E}_c$, the function $F(z\xi + \eta)$ is an entire function of $z \in \mathbb{C}$.

(b) There exist nonnegative constants $K$, $a$, and $p$ such that

$$|F(\xi)| \le K \exp\Big[ a\, |\xi|_p^{\frac{2}{1-\beta}} \Big], \qquad \forall\, \xi \in \mathcal{E}_c.$$

Conversely, suppose a function $F$ defined on $\mathcal{E}_c$ satisfies the above two conditions. Then there exists a unique $\Phi \in (\mathcal{E})_\beta^*$ such that $F = S\Phi$, and for any $q$ satisfying the condition $e^2 \big( \frac{2a}{1-\beta} \big)^{1-\beta} \|A^{-(q-p)}\|_{HS}^2 < 1$, the following inequality holds:

$$\|\Phi\|_{-q,-\beta} \le K \Big( 1 - e^2 \Big( \frac{2a}{1-\beta} \Big)^{1-\beta} \|A^{-(q-p)}\|_{HS}^2 \Big)^{-1/2}.$$

Theorem 2.3. Let $F$ be a function on $\mathcal{E}_c$ satisfying the conditions:

(a) For any $\xi$ and $\eta$ in $\mathcal{E}_c$, the function $F(z\xi + \eta)$ is an entire function of $z \in \mathbb{C}$.

(b) There exist positive constants $K$, $a$, and $p$ such that

$$|F(\xi)| \le K \exp\Big[ a\, |\xi|_{-p}^{\frac{2}{1+\beta}} \Big], \qquad \forall\, \xi \in \mathcal{E}_c.$$

Then there exists a unique $\varphi \in (\mathcal{E})_\beta$ such that $F = S\varphi$. In fact, $\varphi \in (\mathcal{E}_q)_\beta$ for any $q \in [0, \infty)$ satisfying the condition $e^2 \big( \frac{2a}{1+\beta} \big)^{1+\beta} \|A^{-(p-q)}\|_{HS}^2 < 1$, and

$$\|\varphi\|_{q,\beta} \le K \Big( 1 - e^2 \Big( \frac{2a}{1+\beta} \Big)^{1+\beta} \|A^{-(p-q)}\|_{HS}^2 \Big)^{-1/2}.$$

In the next theorem we state the necessary and sufficient conditions, in terms of the S-transform, for determining the convergence of generalized functions. For a proof, see Theorem 8.6 in [21].

Theorem 2.4. Let $\Phi_n \in (\mathcal{E})_\beta^*$ and $F_n = S\Phi_n$. Then $\Phi_n$ converges strongly in $(\mathcal{E})_\beta^*$ if and only if the following conditions are satisfied:

(a) $\lim_{n \to \infty} F_n(\xi)$ exists for each $\xi \in \mathcal{E}_c$.

(b) There exist nonnegative constants $K$, $a$, and $p$, independent of $n$, such that

$$|F_n(\xi)| \le K \exp\Big[ a\, |\xi|_p^{\frac{2}{1-\beta}} \Big], \qquad \forall\, n \in \mathbb{N},\ \xi \in \mathcal{E}_c.$$

2.5 Pettis and Bochner Integrals

Let $(M, \mathcal{B}, m)$ be a sigma-finite measure space and let $X$ be a Banach space with norm $\|\cdot\|$. A function $f: M \to X$ is called weakly measurable if $\langle w, f(\cdot) \rangle$ is measurable for each $w \in X^*$ (with the understanding that $\langle \cdot, \cdot \rangle$ denotes the bilinear pairing of $X^*$ and $X$). A weakly measurable $X$-valued function $f$ on $M$ is called Pettis integrable if it satisfies the following conditions:

(1) For any $w \in X^*$, $\langle w, f(\cdot) \rangle \in L^1(M)$.

(2) For any $E \in \mathcal{B}$, there exists an element $J_E \in X$ such that

$$\langle w, J_E \rangle = \int_E \langle w, f(u) \rangle\, dm(u), \qquad \forall\, w \in X^*.$$

This $J_E$ is unique; it is denoted by $(P)\int_E f(u)\, dm(u)$ and is called the Pettis integral of $f$ on $E$. Note that if $X$ is reflexive, then $f$ is Pettis integrable if and only if $f$ is weakly measurable and $\langle w, f(\cdot) \rangle \in L^1(M)$ for each $w \in X^*$.

Let $f: M \to X$ be a countably-valued function given by $f = \sum_{k=1}^{\infty} x_k 1_{E_k}$, with the sets $E_k \in \mathcal{B}$ disjoint. Then $f$ is called Bochner integrable if

$$\sum_{k=1}^{\infty} \|x_k\|\, m(E_k) < \infty.$$

For any $E \in \mathcal{B}$, the Bochner integral of $f$ on $E$ is defined by

$$(B)\int_E f(u)\, dm(u) = \sum_{k=1}^{\infty} x_k\, m(E \cap E_k).$$

A function $f: M \to X$ is said to be almost separably-valued if there exists a set

$E_0$ such that $m(E_0) = 0$ and the set $f(M \setminus E_0)$ is separable. Let $f: M \to X$ be a weakly measurable and almost separably-valued function. Then the function $\|f(\cdot)\|$ is measurable. Consider a sequence $\{f_n\}$ of weakly measurable functions such that $f_n$ converges almost everywhere. Then the limit function $f = \lim_{n \to \infty} f_n$ is also weakly measurable. Moreover, if the sequence $\{f_n\}$ is also countably-valued, then both functions $f$ and $f - f_n$ are weakly measurable and almost separably-valued; hence $\|f\|$ and $\|f - f_n\|$ are measurable. In view of the above, a general function $f: M \to X$ is called Bochner integrable if there exists a sequence $\{f_n\}$ of countably-valued Bochner integrable functions such that

1. $\lim_{n \to \infty} f_n = f$ almost everywhere, and

2. $\lim_{n \to \infty} \int_M \|f(u) - f_n(u)\|\, dm(u) = 0$.

From condition 2 it follows that for any $E \in \mathcal{B}$, the sequence $\int_E f_n(u)\, dm(u)$, $n \ge 1$, is Cauchy in $X$. The Bochner integral of $f$ on $E$ is then defined as

$$(B)\int_E f(u)\, dm(u) = \lim_{n \to \infty} \int_E f_n(u)\, dm(u).$$

Condition (2) also allows us to state the following criterion for Bochner integrability: a function $f: M \to X$ is Bochner integrable if and only if the following conditions are satisfied:

1. $f$ is weakly measurable,

2. $f$ is almost separably-valued, and

3. $\int_M \|f(u)\|\, dm(u) < \infty$.

It is observed that if $f$ is Bochner integrable, then it is also Pettis integrable, and the two integrals are equal:

$$(P)\int_E f(u)\, dm(u) = (B)\int_E f(u)\, dm(u), \qquad \forall\, E \in \mathcal{B}.$$

The converse is not true; see the examples in [21], Section 13.2.

2.6 White Noise Integrals

A white noise integral is an integral whose integrand takes values in the space $(\mathcal{E})_\beta^*$ of generalized functions. As an example, consider the integral $\int_0^t e^{-c(t-s)}\, :\dot{B}(s)^2:\, ds$, where $\dot{B}$ is white noise; in this case the integrand is an $(\mathcal{S})_\beta^*$-valued measurable function on $[0, t]$. A second example is the integral $\int_0^t \partial_s^* \Phi(s)\, ds$, where $\Phi$ is also $(\mathcal{S})_\beta^*$-valued. If $\int_0^t \partial_s^* \Phi(s)\, ds$ is a random variable in $(L^2)$, then the white noise integral is called a Hitsuda-Skorokhod integral. This integral will be defined in detail in the next section and will be our center of focus. So, in general, if $(\mathcal{E})_\beta \subset (L^2) \subset (\mathcal{E})_\beta^*$ is a Gel'fand triple and $\Phi$ is an $(\mathcal{E})_\beta^*$-valued function on a measurable space $(M, \mathcal{B}, m)$, a white noise integral is of the type

$$\int_E \Phi(u)\, dm(u), \qquad E \in \mathcal{B}.$$

Although $(\mathcal{E})_\beta^*$ is not a Banach space, these integrals can be defined in the Pettis or Bochner sense by the use of the S-transform.

White noise integrals in the Pettis sense:

We need to define $\int_E \Phi(u)\, dm(u)$ as the generalized function in $(\mathcal{E})_\beta^*$ that satisfies

$$S\Big( \int_E \Phi(u)\, dm(u) \Big)(\xi) = \int_E S(\Phi(u))(\xi)\, dm(u), \qquad \xi \in \mathcal{E}_c.$$

In particular, if $\Phi(u)$ is replaced by $\partial_u^* \Phi(u)$, we have

$$S\Big( \int_E \partial_u^* \Phi(u)\, dm(u) \Big)(\xi) = \int_E \xi(u)\, S(\Phi(u))(\xi)\, dm(u), \qquad \xi \in \mathcal{E}_c. \quad (2.9)$$

The above two equations call for the following conditions on the function $\Phi$:

(a) $S(\Phi(u))(\xi)$ is measurable for any $\xi \in \mathcal{E}_c$.

(b) $S(\Phi(\cdot))(\xi) \in L^1(M)$ for any $\xi \in \mathcal{E}_c$.

(c) For any $E \in \mathcal{B}$, the function $\int_E S(\Phi(u))(\cdot)\, dm(u)$ is the S-transform of a generalized function in $(\mathcal{E})_\beta^*$. (This can be verified by using the characterization theorem for generalized functions.)

The statement in (c) can be rewritten as

$$\Big\langle\Big\langle \int_E \Phi(u)\, dm(u),\, :e^{\langle \cdot, \xi \rangle}: \Big\rangle\Big\rangle = \int_E \langle\langle \Phi(u), :e^{\langle \cdot, \xi \rangle}: \rangle\rangle\, dm(u), \qquad \xi \in \mathcal{E}_c.$$

Since the linear span of the set $\{ :e^{\langle \cdot, \xi \rangle}: \mid \xi \in \mathcal{E}_c \}$ is dense in $(\mathcal{E})_\beta$, the above equation implies that

$$\Big\langle\Big\langle \int_E \Phi(u)\, dm(u),\, \varphi \Big\rangle\Big\rangle = \int_E \langle\langle \Phi(u), \varphi \rangle\rangle\, dm(u), \qquad \varphi \in (\mathcal{E})_\beta. \quad (2.10)$$

In terms of the S-transform, Pettis integrability can be characterized using the following theorem; for a proof, see Section 13.4 of [21].

Theorem 2.5. Suppose a function $\Phi: M \to (\mathcal{E})_\beta^*$ satisfies the following conditions:

(1) $S(\Phi(\cdot))(\xi)$ is measurable for any $\xi \in \mathcal{E}_c$.

(2) There exist nonnegative numbers $K$, $a$, and $p$ such that

$$\int_M |S(\Phi(u))(\xi)|\, dm(u) \le K \exp\Big[ a\, |\xi|_p^{\frac{2}{1-\beta}} \Big], \qquad \xi \in \mathcal{E}_c.$$

Then $\Phi$ is Pettis integrable and, for any $E \in \mathcal{B}$,

$$S\Big( \int_E \Phi(u)\, dm(u) \Big)(\xi) = \int_E S(\Phi(u))(\xi)\, dm(u), \qquad \xi \in \mathcal{E}_c.$$

White noise integrals in the Bochner sense:

Since the space $(\mathcal{E})_\beta^*$ is not a Banach space, the definition of the Bochner integral from the last section cannot be used directly to define the white noise integral $\int_E \Phi(u)\, dm(u)$. As it turns out, we need much stronger conditions than those used in defining white noise integrals in the Pettis sense. As observed earlier, $(\mathcal{E})_\beta^* = \bigcup_{p \ge 0} (\mathcal{E}_p)_\beta^*$, and each of the spaces $(\mathcal{E}_p)_\beta^*$ is a separable Hilbert space. With this in mind, the white noise integral $\int_E \Phi(u)\, dm(u)$ can be defined in the Bochner sense in the following way.

Let $\Phi: M \to (\mathcal{E})_\beta^*$. Then $\Phi$ is Bochner integrable if it satisfies the following conditions:

1. $\Phi$ is weakly measurable;

2. there exists $p \ge 0$ such that $\Phi(u) \in (\mathcal{E}_p)_\beta^*$ for almost all $u \in M$ and $\|\Phi(\cdot)\|_{-p,-\beta} \in L^1(M)$.

If $\Phi$ is Bochner integrable, then we have

$$\Big\| \int_M \Phi(u)\, dm(u) \Big\|_{-p,-\beta} \le \int_M \|\Phi(u)\|_{-p,-\beta}\, dm(u).$$

The following theorem gives conditions for Bochner integrability in terms of the S-transform and helps estimate the norm of the integral of $\Phi$; see [21], Section 13.5.

Theorem 2.6. Let $\Phi: M \to (\mathcal{E})_\beta^*$ be a function satisfying the conditions:

1. $S(\Phi(\cdot))(\xi)$ is measurable for any $\xi \in \mathcal{E}_c$;

2. there exist $p \ge 0$, nonnegative functions $L \in L^1(M)$ and $b \in L^\infty(M)$, and an $m$-null set $E_0$ such that

$$|S(\Phi(u))(\xi)| \le L(u) \exp\Big[ b(u)\, |\xi|_p^{\frac{2}{1-\beta}} \Big], \qquad \forall\, \xi \in \mathcal{E}_c,\ u \in E_0^c.$$

Then $\Phi$ is Bochner integrable and $\int_M \Phi(u)\, dm(u) \in (\mathcal{E}_q)_\beta^*$ for any $q > p$ such that

$$e^2 \Big( \frac{2 \|b\|_\infty}{1-\beta} \Big)^{1-\beta} \|A^{-(q-p)}\|_{HS}^2 < 1, \quad (2.11)$$

where $\|b\|_\infty$ is the essential supremum of $b$. It turns out that for such $q$,

$$\Big\| \int_M \Phi(u)\, dm(u) \Big\|_{-q,-\beta} \le \|L\|_1 \Big( 1 - e^2 \Big( \frac{2 \|b\|_\infty}{1-\beta} \Big)^{1-\beta} \|A^{-(q-p)}\|_{HS}^2 \Big)^{-1/2}. \quad (2.12)$$

An example worth noting is the following. Let $F \in \mathcal{S}'(\mathbb{R})$ and $f \in L^2(\mathbb{R})$, $f \ne 0$. Then $F(\langle \cdot, f \rangle)$ is a generalized function, and if the Fourier transform $\hat{F} \in L^\infty(\mathbb{R})$, then $F(\langle \cdot, f \rangle)$ is represented as a white noise integral by

$$F(\langle \cdot, f \rangle) = \frac{1}{\sqrt{2\pi}} \int_{\mathbb{R}} e^{iu \langle \cdot, f \rangle}\, \hat{F}(u)\, du. \quad (2.13)$$

In this case $\Phi(u) = e^{iu \langle \cdot, f \rangle} \hat{F}(u)$, $u \in \mathbb{R}$, and it satisfies the conditions in Theorem 2.5 for Pettis integrability.

2.7 Donsker's Delta Function

Consider the Gel'fand triple $\mathcal{S}(\mathbb{R}) \subset L^2(\mathbb{R}) \subset \mathcal{S}'(\mathbb{R})$. For $F \in \mathcal{S}'(\mathbb{R})$ and non-zero $f \in L^2(\mathbb{R})$, it is known that the composition $F(\langle \cdot, f \rangle)$ is a generalized Brownian functional [17], [21]. In particular, let $F = \delta_a$ be the Dirac delta function at $a$. Then, for $f = 1_{[0,t]}$ and $\langle \cdot, 1_{[0,t]} \rangle = B(t)$ a Brownian motion, the function $\delta_a(B(t))$ is a generalized Brownian functional, called Donsker's delta function. Sometimes $\delta_a(B(t))$ is written as $\delta(B(t) - a)$. Some of the applications of Donsker's delta function include, among others:

(a) Solving partial differential equations

(b) White noise representation of Feynman integrals

(c) Computing local times $L(t, x) = \int_0^t \delta(B(s) - x)\, ds$.

Donsker's delta function $\delta(B(t) - a)$ can be expressed in two ways. First, as a white noise integral in the Pettis sense:

$$\delta(B(t) - a) = \frac{1}{2\pi} \int_{\mathbb{R}} e^{iu(B(t) - a)}\, du.$$

Note that this is a special case of equation (2.13) with $F = \delta_a$ and $f = 1_{[0,t]}$.

Second, in terms of its Wiener-Itô decomposition:

$$\delta(B(t) - a) = \frac{1}{\sqrt{2\pi t}}\, e^{-\frac{a^2}{2t}} \sum_{n=0}^{\infty} \frac{1}{n!\, t^n}\, :a^n:_t\, \langle :\cdot^{\otimes n}:, 1_{[0,t]}^{\otimes n} \rangle = \frac{1}{\sqrt{2\pi t}}\, e^{-\frac{a^2}{2t}} \sum_{n=0}^{\infty} \frac{1}{n!\, t^n}\, :a^n:_t\, :B(t)^n:_t.$$

This representation uses the fact that $\langle :\cdot^{\otimes n}:, 1_{[0,t]}^{\otimes n} \rangle = :B(t)^n:_t$. In general, for $F \in \mathcal{S}'(\mathbb{R})$, the Wiener-Itô decomposition of $F(B(t))$ is given by

$$F(B(t)) = \frac{1}{\sqrt{2\pi t}} \sum_{n=0}^{\infty} \frac{1}{n!\, t^n}\, \langle F, \xi_{n,t} \rangle\, :B(t)^n:_t, \quad (2.14)$$

where $\xi_{n,t}(x) = :x^n:_t\, e^{-\frac{x^2}{2t}}$ is a function in $\mathcal{S}(\mathbb{R})$ and $\langle \cdot, \cdot \rangle$ is the pairing between $\mathcal{S}'(\mathbb{R})$ and $\mathcal{S}(\mathbb{R})$ (see [22]). Another representation for Donsker's delta function, the Clark-Ocone representation formula, is the subject of the second part of this dissertation; the decomposition above will be very important in obtaining that result.
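The coefficients in (2.14) rest on the orthogonality relation $E[:B(t)^n:_t\, :B(t)^m:_t] = \delta_{nm}\, n!\, t^n$, the one-dimensional shadow of the $(L^2)$-norm formula of Section 2.1. A Monte Carlo sketch of this relation (our own illustration, using $B(t) \sim N(0, t)$ and the low-order Hermite polynomials $:x^2:_t = x^2 - t$ and $:x^3:_t = x^3 - 3tx$):

```python
import random, math

random.seed(1)
t, n_samples = 1.0, 200000
s22 = s23 = s33 = 0.0
for _ in range(n_samples):
    b = random.gauss(0.0, math.sqrt(t))  # B(t) ~ N(0, t)
    h2 = b * b - t                       # :x^2:_t
    h3 = b ** 3 - 3 * t * b              # :x^3:_t
    s22 += h2 * h2
    s23 += h2 * h3
    s33 += h3 * h3
e22 = s22 / n_samples   # ~ 2! * t^2 = 2
e23 = s23 / n_samples   # ~ 0 (orthogonality of different orders)
e33 = s33 / n_samples   # ~ 3! * t^3 = 6
```

With $t = 1$ the three sample averages settle near $2$, $0$, and $6$, as the relation predicts.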

Chapter 3. A Generalization of the Itô Formula

This chapter contains the first result of this dissertation. In the first two sections we define the Itô integral and some related concepts; section two contains an overview of the ordinary Itô formula. In section three we introduce the problem, together with the machinery involved in solving it, while the fourth section contains the main result.

3.1 The Itô Integral

Definition 3.1. Let $T \subset \mathbb{R}$ and let $(\Omega, \mathcal{F}, P)$ be a probability space. A stochastic process $X(t, \omega)$ is a measurable mapping $X: T \times \Omega \to \mathbb{R}$ such that:

(a) For each $\omega$, $X(\cdot, \omega)$ is a sample path; i.e., for every fixed $\omega$ (hence for every observation), $X(\cdot, \omega)$ is an $\mathbb{R}$-valued function defined on $T$.

(b) For each $t$, $X(t, \cdot)$ is a random variable.

For a given $X$, we let $X(t)$ denote the random variable $X(t, \cdot)$.

Definition 3.2. A Brownian motion $B(t)$ is a stochastic process satisfying the following conditions:

1. $P\{B(0) = 0\} = 1$.

2. For any $0 < t < s$, $B(s) - B(t) \sim N(0, s-t)$; i.e.,

$$P\{B(s) - B(t) \le x\} = \frac{1}{\sqrt{2\pi(s-t)}} \int_{-\infty}^{x} e^{-\frac{y^2}{2(s-t)}}\, dy.$$

3. $B(t)$ has independent increments; i.e., for any $0 < t_1 < \cdots < t_n$, the random variables $B(t_1), B(t_2) - B(t_1), \ldots, B(t_n) - B(t_{n-1})$ are independent.

4. $P\{\omega \mid B(\cdot, \omega) \text{ is continuous}\} = 1$.
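Properties 1-4 suggest the standard way to simulate Brownian motion on a grid: start at $0$ and add independent $N(0, \Delta t)$ increments. A minimal sketch (illustrative only; the names and parameters are ours):

```python
import random, math

random.seed(0)
n_paths, n_steps, T = 20000, 50, 1.0
dt = T / n_steps

end_vals = []    # samples of B(T)
mart_vals = []   # samples of B(T)^2 - T
for _ in range(n_paths):
    b = 0.0
    for _ in range(n_steps):
        b += random.gauss(0.0, math.sqrt(dt))  # independent N(0, dt) increment
    end_vals.append(b)
    mart_vals.append(b * b - T)

mean_B = sum(v for v in end_vals) / n_paths        # ~ 0, since B(T) ~ N(0, T)
var_B = sum(v * v for v in end_vals) / n_paths     # ~ T
mean_M = sum(v for v in mart_vals) / n_paths       # ~ 0
```

The last estimate, $E[B(T)^2 - T] \approx 0$, is consistent with the fact, shown later in this section, that $B(t)^2 - t$ is a martingale.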

Definition 3.3. A filtration on $T \subset \mathbb{R}$ is an increasing family $\{\mathcal{F}_t\}$ of sub-$\sigma$-fields of $\mathcal{F}$; i.e., $\mathcal{F}_s \subset \mathcal{F}_t$ for all $s \le t$, $s, t \in T$.

Let $B(t)$ be a Brownian motion. Take a filtration $\{\mathcal{F}_t;\ a \le t \le b\}$ such that:

(1) $B(t)$ is $\mathcal{F}_t$-measurable;

(2) for any $s \le t$, $B(t) - B(s)$ is independent of $\mathcal{F}_s$.

Definition 3.4. A stochastic process $f(t, \omega)$ with filtration $\{\mathcal{F}_t\}$ is called nonanticipating if for any $t \in [a, b]$, $f(t, \cdot)$ is measurable with respect to $\mathcal{F}_t$.

As examples, the processes $B(t)$, $B(t)^2$, and $B(t)^2 - t$ are nonanticipating.

The well-known Itô integral is a stochastic integral of the form $\int_a^b f(t, \omega)\, dB(t, \omega)$. It is defined for any stochastic process $f(t, \omega)$ satisfying the following conditions:

(1) $f$ is nonanticipating;

(2) $\int_a^b |f(t, \cdot)|^2\, dt < \infty$ almost surely (i.e., almost all sample paths are in $L^2(a, b)$).

For the definition of a stochastic integral, see [1], [25].
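The nonanticipating requirement is what makes the left endpoint the natural evaluation point in the approximating sums $\sum_i f(t_i)\,(B(t_{i+1}) - B(t_i))$ of the Itô integral. For the integrand $f = B$ itself, a discretized path obeys the exact algebraic identity $2 \sum_i B(t_i)\, \Delta B_i = B(T)^2 - \sum_i (\Delta B_i)^2$, and $\sum_i (\Delta B_i)^2 \to T$ (the quadratic variation) as the mesh shrinks, which recovers the classical value $\int_0^T B\, dB = \frac{1}{2}(B(T)^2 - T)$. A sketch (our own illustration, not from the text):

```python
import random, math

random.seed(2)
n_steps, T = 100000, 1.0
dt = T / n_steps

b = 0.0         # running value of B
ito_sum = 0.0   # left-endpoint Riemann sum for int B dB
quad_var = 0.0  # sum of squared increments
for _ in range(n_steps):
    db = random.gauss(0.0, math.sqrt(dt))
    ito_sum += b * db     # evaluate the integrand at the left endpoint
    quad_var += db * db
    b += db

# pathwise identity: 2*ito_sum + quad_var == b*b (up to float rounding),
# and quad_var is close to T for a fine mesh
```

Evaluating at the right endpoint instead would shift the sum by exactly the quadratic variation, which is the source of the extra second-order term in the Itô formula below.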

When $f$ is a deterministic function in $L^2(a, b)$, it is obviously nonanticipating, and in this case $\int_a^b f(t)\, dB(t)$ is known as the Wiener integral.

Definition 3.5. A stochastic process $M(t)$ is called a martingale with respect to a filtration $\{\mathcal{F}_t\}$ if $E(M(t) \mid \mathcal{F}_s) = M(s)$ for all $s \le t$.

Examples:

1. A Brownian motion $B(t)$ is a martingale with respect to the filtration $\mathcal{F}_t$ generated by $\{B(s);\ s \le t\}$, because if $s \le t$, then
$$E[B(t) \mid \mathcal{F}_s] = E[B(t) - B(s) + B(s) \mid \mathcal{F}_s] = E[B(t) - B(s) \mid \mathcal{F}_s] + E[B(s) \mid \mathcal{F}_s] = 0 + B(s).$$
Here we have used the facts that $E[B(t) - B(s) \mid \mathcal{F}_s] = 0$, since $B(t) - B(s)$ is independent of $\mathcal{F}_s$, and that $E[B(s) \mid \mathcal{F}_s] = B(s)$, since $B(s)$ is $\mathcal{F}_s$-measurable.

2. Let $M(t) = B(t)^2 - t$. Then $M(t)$ is a martingale. To see this, let $s \le t$. Using the fact that $E[B(t)B(s) \mid \mathcal{F}_s] = B(s)E[B(t) \mid \mathcal{F}_s] = B(s)^2$, we have
$$
\begin{aligned}
E[M(t) - M(s) \mid \mathcal{F}_s] &= E[B(t)^2 - B(s)^2 - (t-s) \mid \mathcal{F}_s] \\
&= E[B(t)^2 - (2B(s)^2 - B(s)^2) - (t-s) \mid \mathcal{F}_s] \\
&= E[B(t)^2 - 2B(t)B(s) + B(s)^2 - (t-s) \mid \mathcal{F}_s] \\
&= E[(B(t) - B(s))^2 - (t-s) \mid \mathcal{F}_s] \\
&= (t-s) - (t-s) = 0.
\end{aligned}
$$
This shows that $E(M(t) \mid \mathcal{F}_s) = M(s)$, and so $B(t)^2 - t$ is a martingale.

If the conditions above in the definition of the Itô integral are replaced with the following:

(a) $f$ is non-anticipating;

(b) $E\left(\int_a^b |f(t,\cdot)|^2\, dt\right) < \infty$ (i.e., $f \in L^2([a,b] \times \Omega)$),

then condition (b) is stronger than condition (2) above, and the Itô integral defined using conditions (a) and (b) is a martingale.
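The martingale property of $M(t) = B(t)^2 - t$ derived above can also be probed by simulation. The sketch below (sample sizes and the test function $\tanh$ are our choices) uses the equivalent orthogonality statement $E[(M(t) - M(s))\, g(B(s))] = 0$ for bounded $g$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000
s, t = 0.4, 1.0

Bs = rng.normal(0.0, np.sqrt(s), n)            # B(s) ~ N(0, s)
Bt = Bs + rng.normal(0.0, np.sqrt(t - s), n)   # independent increment

Ms = Bs**2 - s
Mt = Bt**2 - t
# E[M(t) - M(s) | F_s] = 0 makes the increment orthogonal to any
# bounded function of B(s); tanh serves as an arbitrary test function.
err = np.mean((Mt - Ms) * np.tanh(Bs))
```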

3.2 The Itô Formula

In this section we briefly describe the well-known Itô formula [1], [25].

Definition 3.6. Let $[a,b]$ be a fixed interval. An Itô process is a stochastic process of the form
$$X(t) = X(a) + \int_a^t f(s)\, dB(s) + \int_a^t g(s)\, ds, \quad t \in [a,b], \tag{3.1}$$
where $X(a)$ is $\mathcal{F}_a$-measurable, $f$ is non-anticipating with $\int_a^b |f(s)|^2\, ds < \infty$ almost surely, and $g$ is non-anticipating with $\int_a^b |g(s)|\, ds < \infty$ almost surely.

Take an Itô process $X(t)$ as defined by equation (3.1) and let $\theta(t,x)$ be a function of class $C^2$ in $x$ and $C^1$ in $t$. Then the well-known Itô formula states that
$$\theta(t, X(t)) = \theta(a, X(a)) + \int_a^t \frac{\partial\theta}{\partial x}\big(s, X(s)\big) f(s)\, dB(s) + \int_a^t \left[\frac{\partial\theta}{\partial s}\big(s, X(s)\big) + \frac{\partial\theta}{\partial x}\big(s, X(s)\big) g(s) + \frac{1}{2}\frac{\partial^2\theta}{\partial x^2}\big(s, X(s)\big) f(s)^2\right] ds \tag{3.2}$$
for all $t \in [a,b]$. Observe that $\theta(t, X(t))$ is also an Itô process.

In comparison with ordinary calculus, the chain rule says that for such a function

$\theta$ and a deterministic $C^1$ function $G$,
$$\theta(t, G(t)) = \theta(a, G(a)) + \int_a^t \frac{\partial\theta}{\partial s}\big(s, G(s)\big)\, ds + \int_a^t \frac{\partial\theta}{\partial x}\big(s, G(s)\big)\, G'(s)\, ds. \tag{3.3}$$
Comparing equations (3.2) and (3.3), we see a significant difference: in equation (3.2) there is an extra term $\frac{1}{2}\int_a^t \frac{\partial^2\theta}{\partial x^2}(s, X(s))\, f(s)^2\, ds$, which is often called the correction term.
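The correction term is easy to observe numerically. In the following sketch (discretization and sample sizes are our choices), left-endpoint Riemann sums of $\int_0^1 B\, dB$ converge to $\frac{1}{2}(B(1)^2 - 1)$, not to the value $\frac{1}{2}B(1)^2$ that the ordinary chain rule would predict:

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_steps = 20_000, 1_000
dt = 1.0 / n_steps
dB = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
B = np.hstack([np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)])

# Left-endpoint (Ito) sums of int_0^1 B dB on each path.
ito_sum = np.sum(B[:, :-1] * dB, axis=1)
# Ito's formula with theta(x) = x^2/2, f = 1 predicts
# int_0^1 B dB = (B(1)^2 - 1)/2; the "-1" is the correction term.
predicted = 0.5 * (B[:, -1] ** 2 - 1.0)
rms_err = np.sqrt(np.mean((ito_sum - predicted) ** 2))
```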

Examples:

1. Let $X(t) = B(t)$ and $\theta(t,x) = f(x)$, a $C^2$-function. Itô's formula in this case is given by
$$f(B(t)) = f(B(a)) + \int_a^t f'(B(s))\, dB(s) + \frac{1}{2}\int_a^t f''(B(s))\, ds.$$

2. Let $X(t) = B(t)$ and let $\theta(x) = x^n$. Then
$$B(t)^n = B(a)^n + n\int_a^t B(s)^{n-1}\, dB(s) + \frac{n(n-1)}{2}\int_a^t B(s)^{n-2}\, ds.$$

In particular, put $t = b$ and $n = 3$ to obtain
$$\int_a^b B(t)^2\, dB(t) = \frac{1}{3}\left\{B(b)^3 - B(a)^3 - 3\int_a^b B(t)\, dt\right\}.$$

For $n = 2$ we have
$$\int_a^b B(t)\, dB(t) = \frac{1}{2}\{B(b)^2 - B(a)^2 - (b-a)\}.$$

3.3 An Extension of the Itô Integral

The property that the integrands in Itô processes be non-anticipating is needed in order to apply the Itô formula discussed in Section 3.2 above. However, in many cases this essential condition is not satisfied. Take the example of the stochastic integral $\int_0^1 B(1)\, dB(t)$. For $t < 1$, $B(1)$ is clearly anticipating, so $\int_0^1 B(1)\, dB(t)$ cannot be defined as an Itô integral.

A number of extensions of the Itô integral exist. One such extension is due to Itô himself: in [12] he extended the integral to integrands which may be anticipating. In particular, he showed that $\int_0^1 B(1)\, dB(t) = B(1)^2$. In [9] a special type of integral, called the Hitsuda-Skorokhod integral (see Definition 3.7 below), was introduced as a motivation to obtain an Itô-type formula for functions such as $\theta(B(t), B(c))$, $t < c$, for a $C^2$-function $\theta$. (We note here that $B(c)$, $t < c$, is not $\mathcal{F}_t$-measurable.) As shown in Theorem 3.8 below, this integral is also an extension of the Itô integral.

Consider the Gel'fand triple $(\mathcal{S})_\beta \subset (L^2) \subset (\mathcal{S})_\beta^*$ which comes from the Gel'fand triple $\mathcal{S}(\mathbb{R}) \subset L^2(\mathbb{R}) \subset \mathcal{S}'(\mathbb{R})$, as described in Section 2.1. Suppose $\Phi : [a,b] \to (\mathcal{S})_\beta^*$ is Pettis integrable. Then the function $t \mapsto \partial_t^*\Phi(t)$ is also Pettis integrable and equation (2.9) holds. Also, if $\Phi : [a,b] \to (\mathcal{S})_\beta^*$ is Bochner integrable, then the function $t \mapsto \partial_t^*\Phi(t)$ is also Bochner integrable; and because Bochner integrability implies Pettis integrability, equation (2.9) still holds. We now define a particular extension of the Itô integral which forms the basis of our result.

Definition 3.7. Let $\varphi : [a,b] \to (\mathcal{S})_\beta^*$ be Pettis integrable. The white noise integral $\int_a^b \partial_t^*\varphi(t)\, dt$ is called the Hitsuda-Skorokhod integral of $\varphi$ if it is a random variable in $(L^2)$.

Consider a stochastic process $\varphi(t)$ in the space $L^2([a,b] \times \mathcal{S}'(\mathbb{R}))$ which is non-anticipating. The Itô integral $\int_a^b \varphi(t)\, dB(t)$ for the process $\varphi(t)$ can be expressed as a white noise integral in the Pettis sense. The following theorem, due to Kubo and Takenaka [15] (see also [21], Theorem 13.12, for a proof), implies that the Hitsuda-Skorokhod integral is an extension of the Itô integral to processes $\varphi(t)$ which may be anticipating. The example following the theorem throws some light on the difference between Itô's extension and the Hitsuda-Skorokhod integral.

Theorem 3.8. Let $\varphi(t)$ be non-anticipating with $\int_a^b \|\varphi(t)\|_0^2\, dt < \infty$. Then the function $\partial_t^*\varphi(t)$, $t \in [a,b]$, is Pettis integrable and
$$\int_a^b \partial_t^*\varphi(t)\, dt = \int_a^b \varphi(t)\, dB(t), \tag{3.4}$$
where the right-hand side is the Itô integral of $\varphi$.

Example 3.9. Let $\varphi(t) = B(1)$, $t \le 1$. It was observed earlier that by Itô's extension, $\int_0^1 B(1)\, dB(t) = B(1)^2$. However, for the Hitsuda-Skorokhod integral, $\int_0^1 \partial_t^* B(1)\, dt = B(1)^2 - 1$. This equality can be verified by using the S-transform in the following way.

Since $B(1) = \langle\cdot, 1_{[0,1)}\rangle$, we have $(SB(1))(\xi) = \int_0^1 \xi(s)\, ds$. Now, by the use of equation (2.9),

$$S\left(\int_0^1 \partial_t^* B(1)\, dt\right)(\xi) = \int_0^1 \xi(t)\,(SB(1))(\xi)\, dt = \int_0^1\int_0^1 \xi(t)\xi(s)\, dt\, ds = S\big\langle :\cdot^{\otimes 2}:,\ 1_{[0,1)}^{\otimes 2}\big\rangle(\xi).$$
Therefore, $\int_0^1 \partial_t^* B(1)\, dt = \langle :\cdot^{\otimes 2}:,\ 1_{[0,1)}^{\otimes 2}\rangle$. We can apply the results from Section 2.2 concerning Wick tensors to get
$$\int_0^1 \partial_t^* B(1)\, dt = {:}\langle\cdot, 1_{[0,1)}\rangle^2{:} = \langle\cdot, 1_{[0,1)}\rangle^2 - 1 = B(1)^2 - 1.$$
We clearly see a difference between the integral $\int_0^1 B(1)\, dB(t)$ as defined by Itô and the Hitsuda-Skorokhod integral of $B(1)$.
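One concrete way to see the difference: a Hitsuda-Skorokhod integral always has expectation zero (its S-transform vanishes at $\xi = 0$), whereas Itô's extension $B(1)^2$ does not. A small Monte Carlo sketch (the sample size is our choice):

```python
import numpy as np

rng = np.random.default_rng(3)
B1 = rng.normal(0.0, 1.0, 1_000_000)   # B(1) ~ N(0, 1)

ito_ext = B1**2          # Ito's extension:       int_0^1 B(1) dB    = B(1)^2
skorokhod = B1**2 - 1.0  # Hitsuda-Skorokhod: int_0^1 d*_t B(1) dt = B(1)^2 - 1

m_ito, m_sk = ito_ext.mean(), skorokhod.mean()   # ~1 and ~0 respectively
```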

From now on, the space $L^2([a,b]; (L^2))$ will be identified with the Hilbert space $L^2([a,b] \times \mathcal{S}')$.

Example 3.10. It is not always true that for $\varphi \in L^2([a,b]; (L^2))$ the white noise integral $\int_a^b \partial_t^*\varphi(t)\, dt$ is a Hitsuda-Skorokhod integral. Consider the following example. Let
$$\varphi(t) = \sum_{n=1}^\infty \frac{1}{n\sqrt{n!}}\, \big\langle :\cdot^{\otimes n}:,\ 1_{[0,1)}^{\otimes n}\big\rangle$$
(a case where $\varphi$ is constant in $t$). Applying the S-transform to $\int_0^1 \partial_t^*\varphi(t)\, dt$ as in the example above gives
$$
\begin{aligned}
S\left(\int_0^1 \partial_t^*\varphi(t)\, dt\right)(\xi) &= \sum_{n=1}^\infty \frac{1}{n\sqrt{n!}} \int_0^1 \xi(t)\, S\big\langle :\cdot^{\otimes n}:,\ 1_{[0,1)}^{\otimes n}\big\rangle(\xi)\, dt \\
&= \sum_{n=1}^\infty \frac{1}{n\sqrt{n!}}\, \langle 1_{[0,1)}, \xi\rangle\, \langle 1_{[0,1)}^{\otimes n}, \xi^{\otimes n}\rangle = \sum_{n=1}^\infty \frac{1}{n\sqrt{n!}}\, \langle 1_{[0,1)}^{\otimes(n+1)}, \xi^{\otimes(n+1)}\rangle \\
&= S\left(\sum_{n=1}^\infty \frac{1}{n\sqrt{n!}}\, \big\langle :\cdot^{\otimes(n+1)}:,\ 1_{[0,1)}^{\otimes(n+1)}\big\rangle\right)(\xi).
\end{aligned}
$$
Therefore,
$$\int_0^1 \partial_t^*\varphi(t)\, dt = \sum_{n=1}^\infty \frac{1}{n\sqrt{n!}}\, \big\langle :\cdot^{\otimes(n+1)}:,\ 1_{[0,1)}^{\otimes(n+1)}\big\rangle.$$

Now, computing the $(L^2)$-norms of $\varphi(t)$ and $\int_0^1 \partial_t^*\varphi(t)\, dt$, we have
$$\int_0^1 \|\varphi(t)\|_0^2\, dt = \int_0^1 \sum_{n=1}^\infty \frac{n!}{n^2\, n!}\, |1_{[0,1)}|_0^{2n}\, dt = \sum_{n=1}^\infty \frac{1}{n^2} < \infty,$$
while
$$\left\|\int_0^1 \partial_t^*\varphi(t)\, dt\right\|_0^2 = \sum_{n=1}^\infty \frac{(n+1)!}{n^2\, n!}\, |1_{[0,1)}|_0^{2(n+1)} = \sum_{n=1}^\infty \frac{n+1}{n^2} = \infty.$$
Thus, even though $\varphi \in L^2([0,1]; (L^2))$, the integral $\int_0^1 \partial_t^*\varphi(t)\, dt$ is not a Hitsuda-Skorokhod integral. This calls for some restrictions on the stochastic process $\varphi(t)$ in order for the white noise integral $\int_0^1 \partial_t^*\varphi(t)\, dt$ to be a Hitsuda-Skorokhod integral. First we define an operator and a subspace of $(L^2)$ which will be very important in our main result.

Definition 3.11. For $\varphi = \sum_{n=0}^\infty \langle :\cdot^{\otimes n}:, f_n\rangle$, we define
$$N\varphi = \sum_{n=1}^\infty n\, \langle :\cdot^{\otimes n}:, f_n\rangle.$$
The operator $N$ is called the number operator. Moreover, the power $N^r$, $r \in \mathbb{R}$, of the number operator is defined in the following way: for $\varphi = \sum_{n=0}^\infty \langle :\cdot^{\otimes n}:, f_n\rangle$,
$$N^r\varphi = \sum_{n=1}^\infty n^r\, \langle :\cdot^{\otimes n}:, f_n\rangle.$$
For any $r \in \mathbb{R}$, $N^r$ is a continuous linear operator from $(\mathcal{S})_\beta$ into itself and from $(\mathcal{S})_\beta^*$ into itself. Let $\mathcal{W}^{1/2}$ be the Sobolev space of order $\frac{1}{2}$ for the Gel'fand triple $(\mathcal{S})_\beta \subset (L^2) \subset (\mathcal{S})_\beta^*$. In other words, for $[a,b] \subset \mathbb{R}^+$, $\mathcal{W}^{1/2}$ will denote the set of $\varphi \in (L^2)$ such that $(\partial_t\varphi)_{t\in[a,b]} \in L^2([a,b]; (L^2))$. The norm on $\mathcal{W}^{1/2}$ is defined as
$$\|\varphi\|_{1/2}^2 = \big\|(N+1)^{1/2}\varphi\big\|_0^2 = \left\|\sum_{n=0}^\infty (n+1)^{1/2}\langle :\cdot^{\otimes n}:, f_n\rangle\right\|_0^2 = \sum_{n=0}^\infty n!(n+1)|f_n|_0^2 = \sum_{n=0}^\infty n!\,|f_n|_0^2 + \sum_{n=0}^\infty n!\, n\, |f_n|_0^2 = \|\varphi\|_0^2 + \|N^{1/2}\varphi\|_0^2.$$
Now, Theorem 9.27 in [21] gives the fact that if $N^{1/2}\varphi \in (L^2)$ and $t \in [a,b]$, then $\|N^{1/2}\varphi\|_0^2 = \int_a^b \|\partial_t\varphi\|_0^2\, dt$. Hence
$$\|\varphi\|_{1/2}^2 = \|\varphi\|_0^2 + \int_a^b \|\partial_t\varphi\|_0^2\, dt,$$
and so
$$\mathcal{W}^{1/2} \equiv \{\varphi \in (L^2);\ \|\varphi\|_{1/2} < \infty\}.$$
For more information on Sobolev spaces and the number operator, see [23], [21].

The following theorem gives the desired condition on the function $\varphi(t)$ in order for $\int_a^b \partial_t^*\varphi(t)\, dt$ to be a Hitsuda-Skorokhod integral. This condition is determined by the number operator $N$, and the space $\mathcal{W}^{1/2}$ plays a major role. The proof of this condition can be found in [21], Theorem 13.16.

Theorem 3.12. Let $\varphi \in L^2([a,b]; \mathcal{W}^{1/2})$. Then $\int_a^b \partial_t^*\varphi(t)\, dt$ is a Hitsuda-Skorokhod integral and
$$\left\|\int_a^b \partial_t^*\varphi(t)\, dt\right\|_0^2 = \int_a^b \|\varphi(t)\|_0^2\, dt + \int_a^b\int_a^b \big(\big(\partial_t\varphi(s),\ \partial_s\varphi(t)\big)\big)_0\, ds\, dt, \tag{3.5}$$
where $((\cdot,\cdot))_0$ is the inner product on $(L^2)$. Moreover,
$$\left|\int_a^b\int_a^b \big(\big(\partial_t\varphi(s),\ \partial_s\varphi(t)\big)\big)_0\, ds\, dt\right| \le \int_a^b \|N^{1/2}\varphi(t)\|_0^2\, dt. \tag{3.6}$$

As a remark, when $\varphi \in L^2([a,b]; (L^2))$ is non-anticipating, the inner product $((\partial_t\varphi(s), \partial_s\varphi(t)))_0 = 0$ for almost all $(s,t) \in [a,b]^2$. Therefore, when computing the $(L^2)$-norm of the integral by using equation (3.5), we obtain the following very useful relation between the norms:
$$\left\|\int_a^b \partial_t^*\varphi(t)\, dt\right\|_0^2 = \int_a^b \|\varphi(t)\|_0^2\, dt = \left\|\int_a^b \varphi(t)\, dB(t)\right\|_0^2.$$
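For nonanticipating integrands, the norm relation above is the classical Itô isometry, which can be checked by simulation. A sketch with $\varphi(t) = B(t)$ on $[0,1]$ (discretization parameters are our choices); here $E\int_0^1 B(t)^2\, dt = 1/2$:

```python
import numpy as np

rng = np.random.default_rng(4)
n_paths, n_steps = 100_000, 400
dt = 1.0 / n_steps
dB = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
B = np.hstack([np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)])

# Nonanticipating integrand phi(t) = B(t): the identity reduces to
# E (int_0^1 phi dB)^2 = E int_0^1 phi^2 dt.
lhs = np.mean(np.sum(B[:, :-1] * dB, axis=1) ** 2)
rhs = np.mean(np.sum(B[:, :-1] ** 2, axis=1) * dt)
```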

The Hitsuda-Skorokhod integral is related to two other extensions of the Itô integral: the forward and backward integrals. For $\varphi \in L^2([a,b]; \mathcal{W}^{1/2})$, let $\partial_{t^+}\varphi(t)$ and $\partial_{t^-}\varphi(t)$ be the right-hand and left-hand white noise derivatives of $\varphi$, respectively (see [21], Definition 13.25). The forward integral of $\varphi$ is defined as
$$\int_a^b \varphi(t)\, dB(t^+) \equiv \int_a^b \partial_{t^+}\varphi(t)\, dt + \int_a^b \partial_t^*\varphi(t)\, dt,$$
provided both integrals on the right-hand side are random variables in $(L^2)$. In particular,
$$\int_0^1 B(1)\, dB(t^+) = B(1)^2,$$
which agrees with $\int_0^1 B(1)\, dB(t) = B(1)^2$ as shown by Itô in [12]. Similarly, the backward integral of $\varphi$ is defined as
$$\int_a^b \varphi(t)\, dB(t^-) \equiv \int_a^b \partial_{t^-}\varphi(t)\, dt + \int_a^b \partial_t^*\varphi(t)\, dt.$$

3.4 An Anticipating Itô Formula

In this section we present our main result for this part of the dissertation. It is a particular generalization of the ordinary Itô formula to a class of functions that are anticipating.

Let $B(t)$ be a Brownian motion given by $B(t) = \langle\cdot, 1_{[0,t)}\rangle$. For a $C^2$-function $\theta$, a simple case of the Itô formula is given by
$$\theta(B(t)) = \theta(B(a)) + \int_a^t \theta'(B(s))\, dB(s) + \frac{1}{2}\int_a^t \theta''(B(s))\, ds,$$
where $0 \le a \le t$. If we assume that $\theta(B(\cdot)),\ \theta'(B(\cdot)),\ \theta''(B(\cdot)) \in L^2([a,b]; (L^2))$, then Theorem 3.8 enables us to write the above equality as

$$\theta(B(t)) = \theta(B(a)) + \int_a^t \partial_s^*\theta'(B(s))\, ds + \frac{1}{2}\int_a^t \theta''(B(s))\, ds, \tag{3.7}$$
where $\int_a^t \partial_s^*\theta'(B(s))\, ds$ is a Hitsuda-Skorokhod integral. In [21], the following two generalizations of the white noise version of the Itô formula in equation (3.7) were considered:

(a) $\theta(X(t), B(c))$ for a $C^2$-function $\theta$ and a Wiener integral $X(t)$, $t \le c$;

(b) $\theta(B(t))$ with a generalized function $\theta \in \mathcal{S}'(\mathbb{R})$.

The main tool in the proofs of the formulas obtained for these two generalizations is the S-transform. The result for generalization (a) is stated below as a theorem; as explained earlier, our new formula will use this particular generalization. See [21], Theorem 13.21, for a complete proof.

Theorem 3.13. Let $0 \le a \le c \le b$. Let $X(t) = \int_a^t f(s)\, dB(s)$ be a Wiener integral with $f \in L^\infty([a,b])$, and let $\theta(x,y)$ be a $C^2$-function on $\mathbb{R}^2$ such that
$$\theta(X(\cdot), B(c)), \quad \frac{\partial^2\theta}{\partial x^2}(X(\cdot), B(c)), \quad \frac{\partial^2\theta}{\partial x\partial y}(X(\cdot), B(c))$$
are all in $L^2([a,b]; (L^2))$. Then for any $a \le t \le b$, the integral
$$\int_a^t \partial_s^*\left(f(s)\,\frac{\partial\theta}{\partial x}(X(s), B(c))\right) ds$$
is a Hitsuda-Skorokhod integral and the following equalities hold in $(L^2)$:

(1) for $a \le t \le c$,
$$\theta(X(t), B(c)) = \theta(X(a), B(c)) + \int_a^t \partial_s^*\left(f(s)\,\frac{\partial\theta}{\partial x}(X(s), B(c))\right) ds + \int_a^t \left(\frac{1}{2} f(s)^2\, \frac{\partial^2\theta}{\partial x^2}(X(s), B(c)) + f(s)\,\frac{\partial^2\theta}{\partial x\partial y}(X(s), B(c))\right) ds;$$

(2) for $c < t \le b$,
$$\theta(X(t), B(c)) = \theta(X(a), B(c)) + \int_a^t \partial_s^*\left(f(s)\,\frac{\partial\theta}{\partial x}(X(s), B(c))\right) ds + \frac{1}{2}\int_a^t f(s)^2\, \frac{\partial^2\theta}{\partial x^2}(X(s), B(c))\, ds + \int_a^c f(s)\,\frac{\partial^2\theta}{\partial x\partial y}(X(s), B(c))\, ds.$$

In [29], Nicolas Privault developed a generalization similar to (a) above, for processes of the form $Y(t) = \theta(X(t), F)$, where $F$ is a smooth random variable depending on the whole trajectory of $(B(t))_{t\in\mathbb{R}^+}$ and $(X(t))_{t\in\mathbb{R}^+}$ is an adapted Itô process. His proof relies on the expression of infinitesimal time changes on Brownian functionals using the Gross Laplacian. For our generalization in this section, $X(t)$ is taken to be a Wiener integral, while the function $F$ is chosen to be in the space $\mathcal{W}^{1/2}$. The resulting formula is similar to the one obtained in [29], except for the method of computation: we use a limiting process based on Theorem 3.13 above in order to maintain the use of the S-transform, as in [21].

The following is the main result.

Theorem 3.14. Let $f \in L^\infty([a,b])$ and let $X(t) = \int_a^t f(s)\, dB(s)$ be a Wiener integral. Let $F \in \mathcal{W}^{1/2}$ and $\theta \in C_b^2(\mathbb{R}^2)$ be such that
$$\theta(X(\cdot), F), \quad \frac{\partial^2\theta}{\partial x^2}(X(\cdot), F), \quad \frac{\partial^2\theta}{\partial x\partial y}(X(\cdot), F)$$
are all in $L^2([a,b]; (L^2))$. Then for any $a \le t \le b$, the integral
$$\int_a^t \partial_s^*\left(f(s)\,\frac{\partial\theta}{\partial x}(X(s), F)\right) ds$$
is a Hitsuda-Skorokhod integral and the following equality holds in $(L^2)$:
$$\theta(X(t), F) = \theta(X(a), F) + \int_a^t \partial_s^*\left(f(s)\,\frac{\partial\theta}{\partial x}(X(s), F)\right) ds + \frac{1}{2}\int_a^t f(s)^2\,\frac{\partial^2\theta}{\partial x^2}(X(s), F)\, ds + \int_a^t f(s)(\partial_s F)\,\frac{\partial^2\theta}{\partial x\partial y}(X(s), F)\, ds.$$

The proof is presented in the following steps. First it will be shown that the above formula holds for $F = B(c)$, $0 \le a \le c \le b$, and that it coincides with the formula in Theorem 3.13. Secondly, a special choice of $F$ will be made in the following way. We know that the span of the set $\{:e^{\langle\cdot,g\rangle}:\ ;\ g \in L^2(\mathbb{R})\}$ is dense in $(L^2)$. If we let $g_1, g_2, \ldots, g_k \in L^2(\mathbb{R})$ and
$$F_N = \lambda_1 \sum_{m=1}^N \frac{1}{m!}\big\langle :\cdot^{\otimes m}:,\ g_1^{\otimes m}\big\rangle + \lambda_2 \sum_{m=1}^N \frac{1}{m!}\big\langle :\cdot^{\otimes m}:,\ g_2^{\otimes m}\big\rangle + \cdots + \lambda_k \sum_{m=1}^N \frac{1}{m!}\big\langle :\cdot^{\otimes m}:,\ g_k^{\otimes m}\big\rangle,$$
then, as $N \to \infty$, $F_N \to F$ in $(L^2)$, where $F = \lambda_1 :e^{\langle\cdot,g_1\rangle}: + \lambda_2 :e^{\langle\cdot,g_2\rangle}: + \cdots + \lambda_k :e^{\langle\cdot,g_k\rangle}:$. In our proof we shall assume that $F_N \to F$ in $\mathcal{W}^{1/2}$. Also, each $g$ will take the form $g = \sum_{j=1}^k \alpha_j 1_{[0,c_j)}$, $k \in \mathbb{N}$, $\alpha_j, c_j \in \mathbb{R}$. The formula will then be generalized to $\theta(X(t), F_N)$ with $F_N$ chosen as above. An extension to $\theta(X(t), F)$ with general $F \in \mathcal{W}^{1/2}$ will be achieved via a limiting process. The proof follows.

Proof. In the proof of Theorem 3.13 in [21], which uses the S-transform, two cases were treated separately: (1) $a \le t \le c$, and (2) $c < t \le b$.

Now suppose that $F = B(c)$, $0 \le a \le c \le b$. We claim that the formula in the above theorem is correct when $F$ is replaced with $B(c)$. To see this, it is enough to show that $B(c) \in \mathcal{W}^{1/2}$. We proceed by computing the norm of $B(c)$ in the space $\mathcal{W}^{1/2}$. Indeed, since $B(c) = \langle\cdot, 1_{[0,c)}\rangle$, we have
$$\partial_s B(c) = 1_{[0,c)}(s)$$
and
$$\|B(c)\|_0^2 = \big\|\langle\cdot, 1_{[0,c)}\rangle\big\|_0^2 = |1_{[0,c)}|_0^2 = \int_{\mathbb{R}} 1_{[0,c)}(s)\, ds = \int_0^c ds = c.$$
Therefore,
$$\|B(c)\|_{\mathcal{W}^{1/2}}^2 = \|B(c)\|_0^2 + \int_a^b \|\partial_s B(c)\|_0^2\, ds = c + (b-a).$$
Hence $B(c) \in \mathcal{W}^{1/2}$. Now, since $\partial_s B(c) = 1_{[0,c)}(s)$, we have
$$\int_a^t f(s)(\partial_s B(c))\,\frac{\partial^2\theta}{\partial x\partial y}(X(s), B(c))\, ds = \begin{cases} \displaystyle\int_a^t f(s)\,\frac{\partial^2\theta}{\partial x\partial y}(X(s), B(c))\, ds & \text{if } a \le t \le c, \\[2ex] \displaystyle\int_a^c f(s)\,\frac{\partial^2\theta}{\partial x\partial y}(X(s), B(c))\, ds & \text{if } c < t \le b. \end{cases}$$
Thus the two cases (1) and (2) of Theorem 3.13 are put together as one formula, as required by the above theorem.

In general, suppose $a \le c_1 \le c_2 \le \cdots \le c_p \le b$. Let $\tilde\theta(x, y_1, \ldots, y_p)$ be a function defined on $\mathbb{R}^{p+1}$ and of class $C^2$. Then we have the following formula, which is simply a generalization of the one above:
$$
\begin{aligned}
\tilde\theta(X(t), B(c_1), \ldots, B(c_p)) ={}& \tilde\theta(X(a), B(c_1), \ldots, B(c_p)) + \int_a^t \partial_s^*\left(f(s)\,\frac{\partial\tilde\theta}{\partial x}\big(X(s), B(c_1), \ldots, B(c_p)\big)\right) ds \\
&+ \frac{1}{2}\int_a^t f(s)^2\,\frac{\partial^2\tilde\theta}{\partial x^2}\big(X(s), B(c_1), \ldots, B(c_p)\big)\, ds \\
&+ \int_a^t f(s)\sum_{j=1}^p (\partial_s B(c_j))\,\frac{\partial^2\tilde\theta}{\partial x\partial y_j}\big(X(s), B(c_1), \ldots, B(c_p)\big)\, ds.
\end{aligned}
$$

For a suitable function $G : \mathbb{R}^p \to \mathbb{R}$ of class $C^2$, we can transform the function $\tilde\theta$ to coincide with $\theta$ as a function defined on $\mathbb{R}^2$ in the following way: $\tilde\theta\big(X(t), B(c_1), \ldots, B(c_p)\big) = \theta\big(X(t), G(B(c_1), \ldots, B(c_p))\big) = \theta(x, y)$. With this transformation, and using the chain rule, the above integral equation takes the form
$$
\begin{aligned}
\theta(X(t), G(B(c_1), \ldots, B(c_p))) ={}& \theta(X(a), G(B(c_1), \ldots, B(c_p))) + \int_a^t \partial_s^*\left(f(s)\,\frac{\partial\theta}{\partial x}\big(X(s), G(B(c_1), \ldots, B(c_p))\big)\right) ds \\
&+ \frac{1}{2}\int_a^t f(s)^2\,\frac{\partial^2\theta}{\partial x^2}\big(X(s), G(B(c_1), \ldots, B(c_p))\big)\, ds \\
&+ \int_a^t f(s)\sum_{j=1}^p (\partial_s B(c_j))\,\frac{\partial^2\theta}{\partial x\partial y}\big(X(s), G(B(c_1), \ldots, B(c_p))\big)\, \frac{\partial}{\partial y_j} G(B(c_1), \ldots, B(c_p))\, ds.
\end{aligned}
$$

Now let
$$F_N = \lambda_1 \sum_{m=1}^N \frac{1}{m!}\Big\langle :\cdot^{\otimes m}:,\ \Big(\sum_{j=1}^{p^{(1)}} \alpha_j^{(1)} 1_{[0,c_j^{(1)})}\Big)^{\otimes m}\Big\rangle + \lambda_2 \sum_{m=1}^N \frac{1}{m!}\Big\langle :\cdot^{\otimes m}:,\ \Big(\sum_{j=1}^{p^{(2)}} \alpha_j^{(2)} 1_{[0,c_j^{(2)})}\Big)^{\otimes m}\Big\rangle + \cdots + \lambda_k \sum_{m=1}^N \frac{1}{m!}\Big\langle :\cdot^{\otimes m}:,\ \Big(\sum_{j=1}^{p^{(k)}} \alpha_j^{(k)} 1_{[0,c_j^{(k)})}\Big)^{\otimes m}\Big\rangle.$$
We know that for any $n \in \mathbb{N}$,
$$\big\langle :\cdot^{\otimes n}:,\ 1_{[0,c)}^{\otimes n}\big\rangle = \sum_{k=0}^{[n/2]} \binom{n}{2k}(2k-1)!!\,(-c)^k\, B(c)^{n-2k} = \sum_{i=1}^m a_i B(c)^i$$
for some constants $a_i$ and $m$, where some summands could be zero. This is a polynomial in $B(c)$. We can similarly write $F_N$ as a polynomial in Brownian motion, as follows:
$$F_N = \lambda_1 \sum_{m_1^{(1)}\cdots m_p^{(1)}} a_{m_1^{(1)}\cdots m_p^{(1)}}\, B(c_1^{(1)})^{m_1^{(1)}}\cdots B(c_p^{(1)})^{m_p^{(1)}} + \lambda_2 \sum_{m_1^{(2)}\cdots m_p^{(2)}} a_{m_1^{(2)}\cdots m_p^{(2)}}\, B(c_1^{(2)})^{m_1^{(2)}}\cdots B(c_p^{(2)})^{m_p^{(2)}} + \cdots + \lambda_k \sum_{m_1^{(k)}\cdots m_p^{(k)}} a_{m_1^{(k)}\cdots m_p^{(k)}}\, B(c_1^{(k)})^{m_1^{(k)}}\cdots B(c_p^{(k)})^{m_p^{(k)}},$$
where again some coefficients $a_{m_1^{(j)}\cdots m_p^{(j)}}$ could be zero for some $j$. Suppose $F_N$ is restricted to only one summand out of the $k$ summands above; i.e., let us suppose that
$$F_N = \lambda_1 \sum_{m_1^{(1)}\cdots m_p^{(1)}} a_{m_1^{(1)}\cdots m_p^{(1)}}\, B(c_1^{(1)})^{m_1^{(1)}}\cdots B(c_p^{(1)})^{m_p^{(1)}}. \tag{3.8}$$
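The combinatorial Wick-power expansion used in building (3.8) is a polynomial identity, so it can be checked exactly against the standard Wick-power recursion $:x^{m+1}:_c = x\,{:}x^m{:}_c - mc\,{:}x^{m-1}{:}_c$ for Wick powers with variance $c$. A sketch (all function names are ours):

```python
from math import comb, isclose

def double_fact(m):
    """(2k-1)!! for odd m, with the convention (-1)!! = 1."""
    out = 1
    for j in range(1, m + 1, 2):
        out *= j
    return out

def wick_power_explicit(x, n, c):
    """:x^n:_c via the combinatorial sum quoted in the text."""
    return sum(comb(n, 2 * k) * double_fact(2 * k - 1) * (-c) ** k
               * x ** (n - 2 * k) for k in range(n // 2 + 1))

def wick_power_recursive(x, n, c):
    """:x^n:_c via the recursion :x^{m+1}: = x :x^m: - m c :x^{m-1}:."""
    if n == 0:
        return 1.0
    p_prev, p = 1.0, x
    for m in range(1, n):
        p_prev, p = p, x * p - m * c * p_prev
    return p

vals = [(wick_power_explicit(x, n, c), wick_power_recursive(x, n, c))
        for x in (-1.3, 0.0, 2.1) for n in range(7) for c in (0.5, 1.0)]
```

For $n = 2$, $c = 1$ this reproduces the identity $:B(1)^2: = B(1)^2 - 1$ used in Example 3.9.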

Then we see that the function $F_N$ is a good choice for our composition. That is, we let $F_N = G\big(B(c_1^{(1)}), \ldots, B(c_p^{(1)})\big)$, so that
$$\theta\big(X(t), G(B(c_1^{(1)}), \ldots, B(c_p^{(1)}))\big) = \theta\Big(X(t),\ \lambda_1 \sum_{m_1^{(1)}\cdots m_p^{(1)}} a_{m_1^{(1)}\cdots m_p^{(1)}}\, B(c_1^{(1)})^{m_1^{(1)}}\cdots B(c_p^{(1)})^{m_p^{(1)}}\Big).$$
Moreover,
$$\frac{\partial}{\partial y_j} G\big(B(c_1^{(1)}), \ldots, B(c_p^{(1)})\big) = \lambda_1 \sum_{m_1^{(1)}\cdots m_p^{(1)}} a_{m_1^{(1)}\cdots m_p^{(1)}}\, B(c_1^{(1)})^{m_1^{(1)}}\cdots m_j^{(1)} B(c_j^{(1)})^{m_j^{(1)}-1}\cdots B(c_p^{(1)})^{m_p^{(1)}}.$$
Therefore,
$$
\begin{aligned}
\theta(X(t), F_N) ={}& \theta(X(a), F_N) + \int_a^t \partial_s^*\left(f(s)\,\frac{\partial\theta}{\partial x}(X(s), F_N)\right) ds + \frac{1}{2}\int_a^t f(s)^2\,\frac{\partial^2\theta}{\partial x^2}(X(s), F_N)\, ds \\
&+ \int_a^t f(s)\sum_{j=1}^p (\partial_s B(c_j^{(1)}))\,\frac{\partial^2\theta}{\partial x\partial y}(X(s), F_N)\, \lambda_1 \sum_{m_1^{(1)}\cdots m_p^{(1)}} a_{m_1^{(1)}\cdots m_p^{(1)}}\, B(c_1^{(1)})^{m_1^{(1)}}\cdots m_j^{(1)} B(c_j^{(1)})^{m_j^{(1)}-1}\cdots B(c_p^{(1)})^{m_p^{(1)}}\, ds.
\end{aligned}
$$

Now, by the product rule,
$$\partial_s(\varphi\psi) = (\partial_s\varphi)\psi + \varphi(\partial_s\psi).$$
Incorporating this rule into the last summand of the equation above, we obtain
$$f(s)\sum_{j=1}^p \frac{\partial^2\theta}{\partial x\partial y}(X(s), F_N)\, \lambda_1 \sum_{m_1^{(1)}\cdots m_p^{(1)}} a_{m_1^{(1)}\cdots m_p^{(1)}}\, B(c_1^{(1)})^{m_1^{(1)}}\cdots m_j^{(1)} B(c_j^{(1)})^{m_j^{(1)}-1}\cdots B(c_p^{(1)})^{m_p^{(1)}}\, \partial_s B(c_j^{(1)}) = f(s)\,\frac{\partial^2\theta}{\partial x\partial y}(X(s), F_N)\,(\partial_s F_N).$$
Then, by linearity, the above result can be extended to all $k$ summands in the original expression for $F_N$. Hence for $F_N \in \mathcal{W}^{1/2}$ the formula in our theorem is true; i.e.,
$$\theta(X(t), F_N) = \theta(X(a), F_N) + \int_a^t \partial_s^*\left(f(s)\,\frac{\partial\theta}{\partial x}(X(s), F_N)\right) ds + \frac{1}{2}\int_a^t f(s)^2\,\frac{\partial^2\theta}{\partial x^2}(X(s), F_N)\, ds + \int_a^t f(s)(\partial_s F_N)\,\frac{\partial^2\theta}{\partial x\partial y}(X(s), F_N)\, ds.$$
As stated earlier, by assumption $F_N \to F$ as $N \to \infty$ in the $\mathcal{W}^{1/2}$-norm, where
$$F = \lambda_1 :e^{\langle\cdot,g_1\rangle}: + \lambda_2 :e^{\langle\cdot,g_2\rangle}: + \cdots + \lambda_k :e^{\langle\cdot,g_k\rangle}:$$
with
$$g_n = \sum_{j=1}^{p^{(n)}} \alpha_j^{(n)} 1_{[0,c_j^{(n)})}, \quad 1 \le n \le k.$$
Therefore, there exists a subsequence $\{F_{N_k}\}_{k\ge1} \subset \{F_N\}_{N\ge1}$ such that $F_{N_k} \to F$ as $k \to \infty$ almost surely. For such a subsequence, the following is true:
$$\theta(X(t), F_{N_k}) = \theta(X(a), F_{N_k}) + \int_a^t \partial_s^*\left(f(s)\,\frac{\partial\theta}{\partial x}(X(s), F_{N_k})\right) ds + \frac{1}{2}\int_a^t f(s)^2\,\frac{\partial^2\theta}{\partial x^2}(X(s), F_{N_k})\, ds + \int_a^t f(s)(\partial_s F_{N_k})\,\frac{\partial^2\theta}{\partial x\partial y}(X(s), F_{N_k})\, ds.$$

We claim that the following convergences hold almost surely as $k \to \infty$:

(i) $\theta(X(t), F_{N_k}) \to \theta(X(t), F)$;

(ii) $\theta(X(a), F_{N_k}) \to \theta(X(a), F)$;

(iii) $\displaystyle\frac{1}{2}\int_a^t f(s)^2\,\frac{\partial^2\theta}{\partial x^2}(X(s), F_{N_k})\, ds \to \frac{1}{2}\int_a^t f(s)^2\,\frac{\partial^2\theta}{\partial x^2}(X(s), F)\, ds$;

(iv) $\displaystyle\int_a^t f(s)(\partial_s F_{N_k})\,\frac{\partial^2\theta}{\partial x\partial y}(X(s), F_{N_k})\, ds \to \int_a^t f(s)(\partial_s F)\,\frac{\partial^2\theta}{\partial x\partial y}(X(s), F)\, ds$;

(v) $\displaystyle\int_a^t \partial_s^*\left(f(s)\,\frac{\partial\theta}{\partial x}(X(s), F_{N_k})\right) ds \to \int_a^t \partial_s^*\left(f(s)\,\frac{\partial\theta}{\partial x}(X(s), F)\right) ds$.

Proof of (i) and (ii): By the continuity of $\theta$, since $F_{N_k} \to F$ a.s., these two convergences also hold almost surely.

Proof of (iii): Let $\omega \in \mathcal{S}'(\mathbb{R})$ be fixed. Since $F_{N_k}(\omega)$ converges, it is a bounded sequence; i.e., there exists $M > 0$ such that $|F_{N_k}(\omega)| \le M$ for all $k$. Therefore, the function $\frac{\partial^2\theta}{\partial x^2} : [a,b] \times [-M,M] \to \mathbb{R}$ is continuous on a compact set and hence uniformly continuous. Thus, given $\epsilon > 0$, there exists $\delta > 0$ such that whenever $|x_1 - x_2|^2 + |y_1 - y_2|^2 < \delta$, we have
$$\left|\frac{\partial^2\theta}{\partial x^2}(x_1, y_1) - \frac{\partial^2\theta}{\partial x^2}(x_2, y_2)\right| < \frac{2\epsilon}{\|f\|_\infty^2\,(t-a)}. \tag{3.9}$$
Also, by the convergence of $\{F_{N_k}\}_{k\ge1}$, there exists a number $N(\epsilon)$ depending on $\epsilon$ such that
$$k \ge N(\epsilon) \implies |F_{N_k}(\omega) - F(\omega)| < \delta,$$
which implies that
$$\left|\frac{1}{2}\int_a^t f(s)^2\left(\frac{\partial^2\theta}{\partial x^2}(X(s), F_{N_k}(\omega)) - \frac{\partial^2\theta}{\partial x^2}(X(s), F(\omega))\right) ds\right| \le \frac{\|f\|_\infty^2}{2}\int_a^t \left|\frac{\partial^2\theta}{\partial x^2}(X(s), F_{N_k}(\omega)) - \frac{\partial^2\theta}{\partial x^2}(X(s), F(\omega))\right| ds < \frac{\|f\|_\infty^2}{2}\cdot\frac{2\epsilon}{\|f\|_\infty^2\,(t-a)}\cdot(t-a) = \epsilon,$$
and so (iii) is proved.

Proof of (iv): It is known that in a Hilbert space $H$, if $x_n \to x$ and $y_n \to y$, then $\langle x_n, y_n\rangle \to \langle x, y\rangle$, where $\langle\cdot,\cdot\rangle$ is the inner product on $H$. Taking $H = L^2([a,t])$ with Lebesgue measure, it follows that
$$\int_a^t f(s)(\partial_s F_{N_k}(\omega))\,\frac{\partial^2\theta}{\partial x\partial y}(X(s), F_{N_k}(\omega))\, ds \equiv \Big\langle \partial_\cdot F_{N_k}(\omega),\ f(\cdot)\,\frac{\partial^2\theta}{\partial x\partial y}(X(\cdot), F_{N_k}(\omega))\Big\rangle$$
and
$$\int_a^t f(s)(\partial_s F(\omega))\,\frac{\partial^2\theta}{\partial x\partial y}(X(s), F(\omega))\, ds \equiv \Big\langle \partial_\cdot F(\omega),\ f(\cdot)\,\frac{\partial^2\theta}{\partial x\partial y}(X(\cdot), F(\omega))\Big\rangle.$$
By the same uniform continuity argument used above in proving (iii), as $k \to \infty$ we see that
$$f(\cdot)\,\frac{\partial^2\theta}{\partial x\partial y}(X(\cdot), F_{N_k}(\omega)) \to f(\cdot)\,\frac{\partial^2\theta}{\partial x\partial y}(X(\cdot), F(\omega))$$
in $L^2([a,t])$.

Now, $F_{N_k} \to F$ in $\mathcal{W}^{1/2}$. Hence $F_{N_k} \to F$ in $(L^2)$ and $\int_a^t \|\partial_s F_{N_k} - \partial_s F\|_0^2\, ds \to 0$ as $k \to \infty$. Therefore an interchange of integrals is justified and the following is true:
$$\int_{\mathcal{S}'(\mathbb{R})}\int_a^t |\partial_s F_N(\omega) - \partial_s F(\omega)|^2\, ds\, d\mu(\omega) = \int_a^t \int_{\mathcal{S}'(\mathbb{R})} |\partial_s F_N(\omega) - \partial_s F(\omega)|^2\, d\mu(\omega)\, ds = \int_a^t \|\partial_s F_N - \partial_s F\|_{(L^2)}^2\, ds \to 0.$$
Let
$$H_{N_k}(\omega) = \int_a^t |\partial_s F_{N_k}(\omega) - \partial_s F(\omega)|^2\, ds.$$
Then $\{H_{N_k}(\omega)\}_{k\ge1}$ is a sequence of positive functions and, by the above result, $H_{N_k} \to 0$ in $L^1(\mu)$. Therefore,
$$\partial_\cdot F_{N_k}(\omega) \to \partial_\cdot F(\omega)$$
in $L^2([a,t])$, and so (iv) is true.

Proof of (v): We estimate the $(L^2)$-norm of the difference of the two integrals and claim that it goes to zero. For convenience, write
$$\varphi_{N_k}(s) = f(s)\,\frac{\partial\theta}{\partial x}(X(s), F_{N_k}(\omega)) \quad\text{and}\quad \varphi(s) = f(s)\,\frac{\partial\theta}{\partial x}(X(s), F(\omega)).$$
Then, by Theorem 3.12, we have
$$\left\|\int_a^t \partial_s^*\varphi_{N_k}\, ds - \int_a^t \partial_s^*\varphi\, ds\right\|_0^2 = \int_a^t \|\varphi_{N_k} - \varphi\|_0^2\, ds + \int_a^t\int_a^t \big(\big(\partial_s\varphi_{N_k}(u) - \partial_s\varphi(u),\ \partial_u\varphi_{N_k}(s) - \partial_u\varphi(s)\big)\big)_0\, du\, ds, \tag{3.10}$$

where $((\cdot,\cdot))_0$ is the inner product on $(L^2)$.

Since $F_{N_k} \to F$ in $(L^2)$, we can choose a subsequence along which the convergence is almost sure. Thus, for such a subsequence, there exists $\Omega_0$ with $P\{\Omega_0\} = 1$ such that for each fixed $\omega \in \Omega_0$ we have $F_{N_k}(\omega) \to F(\omega)$. Then, by the Mean Value Theorem, for each $s$,
$$\left|\frac{\partial\theta}{\partial x}\big(X(s), F_{N_k}(\omega)\big) - \frac{\partial\theta}{\partial x}\big(X(s), F(\omega)\big)\right| = \left|\frac{\partial^2\theta}{\partial x\partial y}\big(X(s), \xi(\omega)\big)\right| \cdot \big|F_{N_k}(\omega) - F(\omega)\big|,$$
where $\xi(\omega)$ is between $F_{N_k}(\omega)$ and $F(\omega)$. Since $\theta \in C_b^2(\mathbb{R}^2)$, we have $\left|\frac{\partial^2\theta}{\partial x\partial y}(X(s), \xi(\omega))\right| < C$ for some constant $C$. So
$$\left\|\frac{\partial\theta}{\partial x}(X(s), F_{N_k}) - \frac{\partial\theta}{\partial x}(X(s), F)\right\|_0 \le C\,\|F_{N_k} - F\|_0.$$
As noted earlier, the convergence of the subsequence $\{F_{N_k}\}_{k\ge1}$ implies that for some fixed constant $M > 0$, the quantity $\|F_{N_k}(\omega) - F(\omega)\|_0 \le M$ for all $k$. Therefore, using the fact that $F_{N_k} \to F$ in $(L^2)$, we have that
$$\|\varphi_{N_k} - \varphi\|_0 \le \|f\|_\infty\, C\, \|F_{N_k} - F\|_0 \to 0,$$
and also that
$$\|\varphi_{N_k} - \varphi\|_0 \le \|f\|_\infty\, C\, \|F_{N_k} - F\|_0 \le \|f\|_\infty\, C M.$$
Since this bound is independent of both $s$ and $k$, by the Bounded Convergence Theorem, as $k \to \infty$,
$$\int_a^t \|\varphi_{N_k} - \varphi\|_0^2\, ds \to 0.$$

The convergence of the second summand on the right-hand side of equation (3.10) goes as follows:
$$
\begin{aligned}
\left|\int_a^t\int_a^t \big(\big(\partial_s\varphi_{N_k}(u) - \partial_s\varphi(u),\ \partial_u\varphi_{N_k}(s) - \partial_u\varphi(s)\big)\big)_0\, du\, ds\right| &\le \int_a^t\int_a^t \|\partial_s\varphi_{N_k}(u) - \partial_s\varphi(u)\|_0\, \|\partial_u\varphi_{N_k}(s) - \partial_u\varphi(s)\|_0\, du\, ds \\
&\le \frac{1}{2}\int_a^t\int_a^t \big(\|\partial_s\varphi_{N_k}(u) - \partial_s\varphi(u)\|_0^2 + \|\partial_u\varphi_{N_k}(s) - \partial_u\varphi(s)\|_0^2\big)\, du\, ds \\
&= \int_a^t\int_a^t \|\partial_u\varphi_{N_k}(s) - \partial_u\varphi(s)\|_0^2\, du\, ds \\
&\le \int_a^t\left(\int_{\mathbb{R}} \|\partial_u\varphi_{N_k}(s) - \partial_u\varphi(s)\|_0^2\, du\right) ds.
\end{aligned}
$$

The number operator $N$ can also be expressed as $N = \int_{\mathbb{R}} \partial_s^*\partial_s\, ds$ (see [21]). Hence, for any $\varphi \in L^2([a,b]; \mathcal{W}^{1/2})$ we have
$$\|N^{1/2}\varphi\|_0^2 = ((N\varphi, \varphi))_0 = \int_{\mathbb{R}} ((\partial_s^*\partial_s\varphi, \varphi))_0\, ds = \int_{\mathbb{R}} ((\partial_s\varphi, \partial_s\varphi))_0\, ds = \int_{\mathbb{R}} \|\partial_s\varphi\|_0^2\, ds.$$
Therefore, by this result,
$$\int_a^t\left(\int_{\mathbb{R}} \|\partial_u\varphi_{N_k}(s) - \partial_u\varphi(s)\|_0^2\, du\right) ds = \int_a^t \big\|N^{1/2}\big(\varphi_{N_k}(s) - \varphi(s)\big)\big\|_0^2\, ds.$$

But $N^{1/2}$ is a continuous linear operator from $\mathcal{W}^{1/2}$ into $(L^2)$. Therefore, by the same Mean Value Theorem argument, since $F_{N_k} \to F$ also in $\mathcal{W}^{1/2}$ by our original assumption, we have that
$$\left|\int_a^t\int_a^t \big(\big(\partial_s\varphi_{N_k}(u) - \partial_s\varphi(u),\ \partial_u\varphi_{N_k}(s) - \partial_u\varphi(s)\big)\big)_0\, du\, ds\right| \to 0$$
as $k \to \infty$.

It has thus been shown that the convergence in (v) is true in the $(L^2)$-norm. We then pick a further subsequence of the subsequence $\{F_{N_k}\}_{k\ge1}$, also denoted by $\{F_{N_k}\}_{k\ge1}$, such that the convergence holds almost surely. This proves (v) and completes the proof of the theorem.

Chapter 4. A Generalization of the Clark-Ocone Formula

In this second part of the dissertation we show that the well-known Clark-Ocone formula [4], [27] can be extended to generalized Brownian (white noise) functionals that meet certain criteria, and that the formula takes on the same form as the classical one. We look at the specific example of Donsker's delta function, which is a generalized Brownian functional.

In Section 4.1 we first state the Clark-Ocone formula as it is applied to Brownian functionals in [4], [27] that meet certain conditions. We then state the white noise version of the formula and verify it for functionals in the space $\mathcal{W}^{1/2}$. In Section 4.2, we verify the formula for a generalized Brownian functional of the form $\Phi = \sum_{n=0}^\infty \langle :\cdot^{\otimes n}:, F_n\rangle$, $F_n \in L^2(\mathbb{R}^{\hat\otimes n})$. In Section 4.3, we use the Itô formula to verify the Clark-Ocone formula for the Hermite Brownian functional $:B(t)^n:_t$. This is important in view of the fact that Donsker's delta function has a representation in terms of the Hermite Brownian functional, as shown in equation (2.14). In Section 4.4, we compute the formula for Donsker's delta function as a specific example of the formula obtained in Section 4.2, using the result of Section 4.3. It should be pointed out that in [6] the same result was obtained by a totally different approach; our derivation is purely computational and much simpler. In Section 4.5 we extend the formula to generalized Brownian functionals of the form $F = f(B(t))$, where $f$ is a tempered distribution; i.e., $f \in \mathcal{S}'(\mathbb{R})$.

4.1 Representation of Brownian Functionals by Stochastic Integrals

Consider a probability space $(\Omega, \mathcal{F}, P)$. Let $B(t)$ be the Brownian motion $B(t) = \langle\cdot, 1_{[0,t)}\rangle$, $0 \le t \le 1$, and let $\mathcal{F}_t = \sigma\{B(s) \mid 0 \le s \le t\}$ be the filtration it generates. In [4], [27] the following results are considered. If $F(B(t))$ is any finite functional of Brownian motion, then it can be represented as a stochastic integral. This representation is not unique. However, if $E(F^2(B(\cdot))) < \infty$, then according to martingale representation theory, $F$ does have a unique representation as the sum of a constant and an Itô stochastic integral,
$$F = E(F) + \int_0^1 \phi(t)\, dB(t), \tag{4.1}$$
where the process $\phi$ belongs to the space $L^2([0,1] \times \Omega; \mathbb{R})$ and is $\mathcal{F}_t$-measurable.

The following specific class of Brownian functionals was considered. If $F$ is Fréchet-differentiable and satisfies certain technical regularity conditions, then $F$ has an explicit expression as a stochastic integral in which the integrand consists of conditional expectations of the Fréchet differential. It is this explicit representation of the integrand that gives rise to the Clark-Ocone formula. In the white noise setup, if we replace Fréchet differentiability with white noise differentiation, then we need the condition $F \in \mathcal{W}^{1/2}$ for the result to have meaning. The formula can then be presented in two ways, as shown below, and we will verify each case separately. In the statement of the theorem, we shall assume that $T \subset \mathbb{R}$. The second form of the formula will be easier to verify using the S-transform.

Theorem 4.1 (The Clark-Ocone Formula). Let $\mathcal{W}^{1/2}$ be the Sobolev space from the Gel'fand triple $(\mathcal{S})_\beta \subset (L^2) \subset (\mathcal{S})_\beta^*$. Suppose $B(t)$ is the Brownian motion given by $B(t) = \langle\cdot, 1_{[0,t)}\rangle$, $t \in T$, with $\mathcal{F}_t = \sigma\{B(s) \mid 0 \le s \le t\}$ the filtration it generates. Let $F \in \mathcal{W}^{1/2}$ be a Brownian functional. Then the Clark-Ocone representation formula for such an $F$ is given by
$$F = E(F) + \int_T E(\partial_t F \mid \mathcal{F}_t)\, dB(t). \tag{4.2}$$
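As a concrete sanity check of formula (4.2), take $T = [0,1]$ and $F = B(1)^2$: then $\partial_t F = 2B(1)$ for $t \in [0,1)$ and $E(\partial_t F \mid \mathcal{F}_t) = 2B(t)$, so the formula predicts $B(1)^2 = 1 + \int_0^1 2B(t)\, dB(t)$. A numerical sketch (discretization and sample sizes are our choices):

```python
import numpy as np

rng = np.random.default_rng(5)
n_paths, n_steps = 50_000, 1_000
dt = 1.0 / n_steps
dB = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
B = np.hstack([np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)])

# F = B(1)^2: E(F) = 1 and E(d_t F | F_t) = 2 B(t), so Clark-Ocone
# predicts  B(1)^2 = 1 + int_0^1 2 B(t) dB(t)  pathwise.
F = B[:, -1] ** 2
rhs = 1.0 + np.sum(2.0 * B[:, :-1] * dB, axis=1)
rms_err = np.sqrt(np.mean((F - rhs) ** 2))
```

The pathwise RMS discrepancy shrinks like $\sqrt{2\,dt}$ as the time grid is refined.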

We can rewrite the stochastic integral in the above equation as
$$\int_T E(\partial_t F \mid \mathcal{F}_t)\, dB(t) = \int_T \partial_t E(\partial_t F \mid \mathcal{F}_t)\, dt + \int_T \partial_t^* E(\partial_t F \mid \mathcal{F}_t)\, dt.$$
But $\partial_t E(\partial_t F \mid \mathcal{F}_t) = 0$, since $E(\partial_t F \mid \mathcal{F}_t)$ is non-anticipating (see [21], Lemma 13.11). Therefore, another formulation of equation (4.2) is
$$F = E(F) + \int_T \partial_t^* E(\partial_t F \mid \mathcal{F}_t)\, dt, \tag{4.3}$$
where $\partial_t$ is the white noise differential operator, $\partial_t^*$ is its adjoint, and the integral in equation (4.3) is regarded as a white noise integral in the Pettis sense.

Such formulas are very useful in the determination of hedging portfolios. Another application is in determining the integrand process in the stochastic integral representation of Brownian martingales.

We start by verifying the representation (4.2). The space $\mathcal{W}^{1/2}$ can be represented as
$$\mathcal{W}^{1/2} = \left\{F \in (L^2)\ \Big|\ F = \sum_{n=0}^\infty \langle :\cdot^{\otimes n}:, f_n\rangle,\ f_n \in L^2(\mathbb{R})^{\hat\otimes n},\ \sum_{n=1}^\infty n\, n!\, |f_n|_0^2 < \infty\right\}.$$
This is so because, by Theorem 9.27 in [21], $\int_T \|\partial_t F\|_0^2\, dt = \sum_{n=1}^\infty n\, n!\, |f_n|_0^2$. As a first observation, we need to make sure that the integral in equation (4.2) is well defined, by computing its $(L^2)$-norm. We will assume that $F$ is real-valued.

Otherwise $F = F_1 + iF_2$ if $F$ is complex-valued, and the two pieces are treated separately. Using the Itô isometry and Jensen's inequality, we have
$$E\left(\int_T E(\partial_t F \mid \mathcal{F}_t)\, dB(t)\right)^2 = \int_T E\big(E(\partial_t F \mid \mathcal{F}_t)^2\big)\, dt \le \int_T E\big(E((\partial_t F)^2 \mid \mathcal{F}_t)\big)\, dt = \int_T E\big((\partial_t F)^2\big)\, dt \le \int_{\mathbb{R}} E\big((\partial_t F)^2\big)\, dt = \int_{\mathbb{R}} \|\partial_t F\|_0^2\, dt < \infty.$$
So the integral is well defined as an $(L^2)$ function.

$F$ will take the form $F = \sum_{n=0}^\infty \langle :\cdot^{\otimes n}:, f_n\rangle$ with $f_n \in L^2(\mathbb{R}^{\hat\otimes n})$. The Wick tensor $\varphi = \langle :\cdot^{\otimes n}:, f_n\rangle$ is referred to as the $n$th-order homogeneous chaos. It is simply another name for the multiple Wiener integral $I_n(f_n)$ of degree $n$ for the function $f_n$. We first need to verify the formula for $\varphi$; the following lemma will be useful in the verification. Recall that in integral form, $I_n(f_n)$ can be written as $I_n(f_n) = \int_{\mathbb{R}^n} f_n(t_1, \ldots, t_n)\, dB(t_1)\cdots dB(t_n)$. We will use this second form to verify the first version of the formula.

Lemma 4.2. Let $\varphi_n = I_n(f)$ be the multiple Wiener integral of degree $n$ with $f \in L^2(\mathbb{R}^{\hat\otimes n})$. Let $B(t)$ be the Brownian motion given by $B(t) = \langle\cdot, 1_{[0,t)}\rangle$, $t \in \mathbb{R}$, with $\mathcal{F}_t = \sigma\{B(s) \mid 0 \le s \le t\}$ the filtration it generates. Then
$$E(\varphi_n \mid \mathcal{F}_t) = I_n\big(f\, 1_{(-\infty,t]}^{\otimes n}\big).$$
In integral notation, for $\varphi_n = \int_{\mathbb{R}^n} f(t_1, \ldots, t_n)\, dB(t_1)\cdots dB(t_n)$, we have
$$E(\varphi_n \mid \mathcal{F}_t) = \int_{(-\infty,t]^n} f(t_1, \ldots, t_n)\, dB(t_1)\cdots dB(t_n).$$

Proof. Let $n \ge 1$ be fixed and let $A_1, A_2, \ldots, A_k$ be pairwise disjoint intervals in $\mathbb{R}$. Denote by $\varepsilon_n$ the set of special step functions of the form
$$f_n(t_1, t_2, \ldots, t_n) = \sum_{i_1, \ldots, i_n = 1}^k a_{i_1\cdots i_n}\, 1_{A_{i_1}\times\cdots\times A_{i_n}}(t_1, \ldots, t_n),$$
with the property that the coefficients $a_{i_1\cdots i_n}$ are zero if any two of the indices $i_1, \ldots, i_n$ are equal. In other words, $f_n$ should vanish on the rectangles that intersect any diagonal subspace $\{t_i = t_j,\ i \ne j\}$. Assume $A_{i_j}$ is of the form $(t_{i_j}, t_{i_j+1}]$. Then

$I_n(f_n)$ can be expressed in the form
$$I_n(f_n) = \sum_{i_1, \ldots, i_n = 1}^k a_{i_1\cdots i_n}\big(B(t_{i_2}) - B(t_{i_1})\big)\cdots\big(B(t_{i_n}) - B(t_{i_{n-1}})\big) = \sum_{i_1, \ldots, i_n = 1}^k a_{i_1\cdots i_n}\,\langle\cdot, 1_{(t_{i_1},t_{i_2}]}\rangle\cdots\langle\cdot, 1_{(t_{i_{n-1}},t_{i_n}]}\rangle.$$
By linearity, it suffices to take $f_n = 1_{A_1\times\cdots\times A_n}$, where the $A_i$'s are mutually disjoint intervals in $\mathbb{R}$ of the form $A_i = (t_i, t_{i+1}]$. In this case we obtain the following representation:

\begin{align*}
E(I_n(f_n)\,|\,\mathcal{F}_t)
&= E\big(\langle\cdot,1_{(t_1,t_2]}\rangle\cdots\langle\cdot,1_{(t_n,t_{n+1}]}\rangle \,\big|\, \mathcal{F}_t\big) \\
&= E\Big(\prod_{i=1}^n \big(\langle\cdot,1_{(t_i,t_{i+1}]\cap(-\infty,t]}\rangle + \langle\cdot,1_{(t_i,t_{i+1}]\cap(t,\infty)}\rangle\big) \,\Big|\, \mathcal{F}_t\Big) \\
&= \prod_{i=1}^n \langle\cdot,1_{(t_i,t_{i+1}]\cap(-\infty,t]}\rangle \\
&= I_n\big(1_{(A_1\cap(-\infty,t])\times\cdots\times(A_n\cap(-\infty,t])}\big) \\
&= I_n\big(1_{A_1\times\cdots\times A_n}\,1^{\otimes n}_{(-\infty,t]}\big)
= I_n\big(f_n\,1^{\otimes n}_{(-\infty,t]}\big) \\
&= \int_{(-\infty,t]^n} 1_{A_1\times\cdots\times A_n}\,dB(t_1)\cdots dB(t_n).
\end{align*}
We then conclude that the formula is also true for general $f \in L^2(\mathbb{R})^{\hat\otimes n}$ because the set $\varepsilon_n$ is dense in $L^2(\mathbb{R})^{\hat\otimes n}$. This completes the proof of the lemma.

We are now ready to verify the formula for the $n$th order homogeneous chaos. Let $\varphi = \langle {:}\cdot^{\otimes n}{:}, f_n\rangle$, $f_n \in L^2(\mathbb{R})^{\hat\otimes n}$, $n \ge 1$, be the $n$th order homogeneous chaos. Then, in integral form,
\begin{align*}
\varphi &= \int_{\mathbb{R}^n} f_n(t_1,\dots,t_n)\,dB(t_1)\cdots dB(t_n) \\
&= n! \int_{\mathbb{R}} \int_{-\infty}^{t_n}\cdots\int_{-\infty}^{t_2} f_n(t_1,\dots,t_n)\,dB(t_1)\cdots dB(t_n) \\
&= n! \int_{\mathbb{R}} \Big(\int_{-\infty}^{t_n}\cdots\int_{-\infty}^{t_2} f_n(t_1,\dots,t_n)\,dB(t_1)\cdots dB(t_{n-1})\Big)\,dB(t_n) \\
&= n! \int_{\mathbb{R}} \frac{1}{(n-1)!}\Big(\int_{(-\infty,t]^{n-1}} f_n(t_1,\dots,t_{n-1},t)\,dB(t_1)\cdots dB(t_{n-1})\Big)\,dB(t) \\
&= n \int_{\mathbb{R}} \Big(\int_{(-\infty,t]^{n-1}} f_n(t_1,\dots,t_{n-1},t)\,dB(t_1)\cdots dB(t_{n-1})\Big)\,dB(t).
\end{align*}
Now since $\partial_t\varphi = n\,\langle {:}\cdot^{\otimes(n-1)}{:}, f_n(\cdot,t)\rangle$, the lemma above gives
\[
E(\partial_t\varphi\,|\,\mathcal{F}_t) = n\int_{(-\infty,t]^{n-1}} f_n(t_1,\dots,t_{n-1},t)\,dB(t_1)\cdots dB(t_{n-1}),
\]
and so
\[
\varphi = \int_{\mathbb{R}} E(\partial_t\varphi\,|\,\mathcal{F}_t)\,dB(t).
\]
If we have a finite sum $F_N = \sum_{n=0}^N \langle {:}\cdot^{\otimes n}{:}, f_n\rangle$, then
\[
F_N = f_0 + \sum_{n=1}^N \langle {:}\cdot^{\otimes n}{:}, f_n\rangle
= E(F_N) + \int_{\mathbb{R}} E(\partial_t F_N\,|\,\mathcal{F}_t)\,dB(t).
\]
Now for general $F = \sum_{n=0}^\infty \langle {:}\cdot^{\otimes n}{:}, f_n\rangle \in \mathcal{W}^{1/2}$, we have the following:
\[
F = \sum_{n=0}^N \langle {:}\cdot^{\otimes n}{:}, f_n\rangle + \sum_{n=N+1}^\infty \langle {:}\cdot^{\otimes n}{:}, f_n\rangle
= F_N + F^{(N+1)}
= E(F_N) + \int_{\mathbb{R}} E(\partial_t F_N\,|\,\mathcal{F}_t)\,dB(t) + F^{(N+1)},
\]
where $F^{(N+1)}$ denotes the tail of the series. We note that $E(F_N) = E(F)$ and $F_N \longrightarrow F$ in $(L^2)$, because $\|F - F_N\|_0^2 = \sum_{n=N+1}^\infty n!\,|f_n|_0^2 \longrightarrow 0$ as $N \longrightarrow \infty$. We claim that $\int_{\mathbb{R}} E(\partial_t F_N|\mathcal{F}_t)\,dB(t) \longrightarrow \int_{\mathbb{R}} E(\partial_t F|\mathcal{F}_t)\,dB(t)$ in $(L^2)$ as $N \longrightarrow \infty$. In fact all the convergences also hold in the space $\mathcal{W}^{1/2}$, which is indeed our main concern; for ease of computation, however, we work in $(L^2)$, because of the way the two norms are related to each other. We now estimate the $(L^2)$-norm

of the difference between these two quantities, obtaining the following:
\begin{align*}
\Big\| \int_{\mathbb{R}} E(\partial_t F|\mathcal{F}_t)\,dB(t) - \int_{\mathbb{R}} E(\partial_t F_N|\mathcal{F}_t)\,dB(t) \Big\|_0^2
&= E\Big\{\Big(\int_{\mathbb{R}} E\big(\partial_t(F - F_N)\,\big|\,\mathcal{F}_t\big)\,dB(t)\Big)^2\Big\} \\
&= E\Big\{\Big(\int_{\mathbb{R}} E\big(\partial_t F^{(N+1)}\,\big|\,\mathcal{F}_t\big)\,dB(t)\Big)^2\Big\} \\
&= \int_{\mathbb{R}} E\big[\big(E(\partial_t F^{(N+1)}\,|\,\mathcal{F}_t)\big)^2\big]\,dt \\
&\le \int_{\mathbb{R}} E\big[E\big((\partial_t F^{(N+1)})^2\,\big|\,\mathcal{F}_t\big)\big]\,dt \\
&= \int_{\mathbb{R}} E\big[(\partial_t F^{(N+1)})^2\big]\,dt
= \int_{\mathbb{R}} \|\partial_t F^{(N+1)}\|_0^2\,dt \\
&= \sum_{n=N+1}^\infty n\,n!\,|f_n|_0^2 \longrightarrow 0.
\end{align*}
Hence, for general $F \in \mathcal{W}^{1/2}$ we can decompose $F$ as in equation (4.2).

We now verify the representation in equation (4.3). The main tool for the verification is the $S$-transform. First we prove that the integral $\int_T \partial_t^* E(\partial_t F|\mathcal{F}_t)\,dt$ is a white noise integral defined in the Pettis sense. We need to show that the three conditions in Section 2.9 are satisfied. We take $\Phi : \mathbb{R} \longrightarrow (\mathcal{S})_\beta^*$, where $\Phi(t) = \partial_t^* E(\partial_t F|\mathcal{F}_t)$. We note that $t \mapsto S(\Phi(t))(\xi)$ is measurable because
\begin{align*}
S(\Phi(t))(\xi) &= S\big(\partial_t^* E(\partial_t F|\mathcal{F}_t)\big)(\xi)
= \xi(t)\,S\big(E(\partial_t F|\mathcal{F}_t)\big)(\xi) \\
&= \xi(t) \sum_{n=1}^\infty n \int_{-\infty}^t\!\cdots\!\int_{-\infty}^t f_n(t,s_1,\dots,s_{n-1})\,\xi(s_1)\cdots\xi(s_{n-1})\,ds_1\cdots ds_{n-1},
\end{align*}
which is a product of measurable functions. Hence the function $\partial_t^* E(\partial_t F|\mathcal{F}_t)$ is weakly measurable, which settles the first of the three necessary conditions.

The second condition is to show that $t \mapsto \langle\!\langle \Phi(t), \varphi\rangle\!\rangle \in L^1(\mathbb{R})$ for all $\varphi \in (\mathcal{S})_\beta$. Let $\varphi = \sum_{n=0}^\infty \langle {:}\cdot^{\otimes n}{:}, g_n\rangle \in (\mathcal{S})_\beta$. Then
\begin{align*}
\int_{\mathbb{R}} \big|\langle\!\langle \partial_t^* E(\partial_t F|\mathcal{F}_t), \varphi\rangle\!\rangle\big|\,dt
&= \int_{\mathbb{R}} \big|\langle\!\langle E(\partial_t F|\mathcal{F}_t), \partial_t\varphi\rangle\!\rangle\big|\,dt \\
&\le \int_{\mathbb{R}} \|E(\partial_t F|\mathcal{F}_t)\|_0\,\|\partial_t\varphi\|_0\,dt
\le \int_{\mathbb{R}} \|\partial_t F\|_0\,\|\partial_t\varphi\|_0\,dt \\
&\le \sqrt{\int_{\mathbb{R}} \|\partial_t F\|_0^2\,dt}\,\sqrt{\int_{\mathbb{R}} \|\partial_t\varphi\|_0^2\,dt}.
\end{align*}
We know that $\int_{\mathbb{R}} \|\partial_t F\|_0^2\,dt < \infty$ by assumption. To show that $\int_{\mathbb{R}} \|\partial_t\varphi\|_0^2\,dt < \infty$, it is enough to show that $\sum_{n=1}^\infty n\,n!\,|g_n|_0^2 < \infty$. We see that for any $p \ge 0$,
\[
\sum_{n=1}^\infty n\,n!\,|g_n|_0^2 \le \sum_{n=1}^\infty n!\,2^n\,\lambda_1^{-2pn}\,|g_n|_p^2.
\]
If we choose $p$ large enough that $2\lambda_1^{-2p} < 1$, we see that $\sum_{n=1}^\infty 2^n\lambda_1^{-2pn}\,n!\,|g_n|_p^2 \le \sum_{n=1}^\infty n!\,|g_n|_p^2 = \|\varphi\|_p^2 < \infty$. Therefore the second condition has been satisfied. The third condition in Section 2.9 is implied by the above two conditions, as shown in [21]. Hence the integral $\int_T \partial_t^* E(\partial_t F|\mathcal{F}_t)\,dt$ is defined in the Pettis sense.

We now claim that $F = E(F) + \int_T \partial_t^* E(\partial_t F|\mathcal{F}_t)\,dt$ for all $F \in \mathcal{W}^{1/2}$. We prove this first for the exponential functions $F = F_\eta = \sum_{n=0}^\infty \frac{1}{n!}\langle {:}\cdot^{\otimes n}{:}, \eta^{\otimes n}\rangle$, $\eta \in L^2(\mathbb{R})$. First we check that $F_\eta \in \mathcal{W}^{1/2}$:
\[
\sum_{n=1}^\infty n\,n!\,\Big|\frac{\eta^{\otimes n}}{n!}\Big|_0^2
= \sum_{n=1}^\infty \frac{n}{n!}\,|\eta|_0^{2n}
= |\eta|_0^2 \sum_{n=1}^\infty \frac{1}{(n-1)!}\,|\eta|_0^{2(n-1)}
= |\eta|_0^2\,e^{|\eta|_0^2} < \infty.
\]

Now, to show that $F_\eta = E(F_\eta) + \int_{\mathbb{R}} \partial_t^* E(\partial_t F_\eta|\mathcal{F}_t)\,dt$, we check that the $S$-transforms of the two sides are equal:
\begin{align*}
S\Big(E(F_\eta) + \int_{\mathbb{R}} \partial_t^* E(\partial_t F_\eta|\mathcal{F}_t)\,dt\Big)(\xi)
&= S(E(F_\eta))(\xi) + S\Big(\int_{\mathbb{R}} \partial_t^* E(\partial_t F_\eta|\mathcal{F}_t)\,dt\Big)(\xi) \\
&= 1 + \int_{\mathbb{R}} \xi(t)\,S\big(E(\partial_t F_\eta|\mathcal{F}_t)\big)(\xi)\,dt \\
&= 1 + \int_{\mathbb{R}} \xi(t)\,S\big(E(\eta(t)F_\eta|\mathcal{F}_t)\big)(\xi)\,dt \\
&= 1 + \int_{\mathbb{R}} \xi(t)\eta(t)\,S\big(E(F_\eta|\mathcal{F}_t)\big)(\xi)\,dt \\
&= 1 + \int_{\mathbb{R}} \xi(t)\eta(t)\,S\Big(\sum_{n=0}^\infty \frac{1}{n!}\big\langle {:}\cdot^{\otimes n}{:},\ \big(\eta 1_{(-\infty,t]}\big)^{\otimes n}\big\rangle\Big)(\xi)\,dt \\
&= 1 + \int_{\mathbb{R}} \xi(t)\eta(t)\,S\big(F_{\eta 1_{(-\infty,t]}}\big)(\xi)\,dt \\
&= 1 + \int_{\mathbb{R}} \xi(t)\eta(t)\,\big\langle\!\big\langle F_{\eta 1_{(-\infty,t]}},\ F_\xi\big\rangle\!\big\rangle\,dt \\
&= 1 + \int_{\mathbb{R}} \xi(t)\eta(t)\,e^{\langle \eta 1_{(-\infty,t]},\,\xi\rangle}\,dt \\
&= 1 + \int_{\mathbb{R}} \xi(t)\eta(t)\,e^{\int_{-\infty}^t \eta(s)\xi(s)\,ds}\,dt \\
&= 1 + \int_{\mathbb{R}} \frac{d}{dt}\Big(e^{\int_{-\infty}^t \eta(s)\xi(s)\,ds}\Big)\,dt \\
&= 1 + \Big[e^{\int_{-\infty}^t \eta(s)\xi(s)\,ds}\Big]_{-\infty}^{\infty}
= 1 + e^{\int_{-\infty}^{\infty} \eta(s)\xi(s)\,ds} - 1 \\
&= e^{\langle \eta,\xi\rangle}
= \langle\!\langle F_\eta, F_\xi\rangle\!\rangle
= S(F_\eta)(\xi).
\end{align*}

This then verifies that $F_\eta = E(F_\eta) + \int_{\mathbb{R}} \partial_t^* E(\partial_t F_\eta|\mathcal{F}_t)\,dt$. We now have to show that the span of the set $\{F_\eta,\ \eta \in L^2(\mathbb{R})\}$ is dense in $\mathcal{W}^{1/2}$. Let $F = \sum_n \langle {:}\cdot^{\otimes n}{:}, f_n\rangle \in \mathcal{W}^{1/2}$, $f_n \in L^2(\mathbb{R})^{\hat\otimes n}$. It is enough to show that each $\langle {:}\cdot^{\otimes n}{:}, f_n\rangle$ can be approximated in the $\mathcal{W}^{1/2}$-norm by exponential functions. Now each $f_n$ can be approximated by linear combinations of $\eta^{\otimes n}$, $\eta \in L^2(\mathbb{R})$, by the polarization identity. Thus $\langle {:}\cdot^{\otimes n}{:}, f_n\rangle$ is approximated by $\langle {:}\cdot^{\otimes n}{:}, \eta^{\otimes n}\rangle$. But we have the equation
\[
\langle {:}\cdot^{\otimes n}{:},\ \eta^{\otimes n}\rangle = \frac{d^n}{dt^n}\,F_{t\eta}\Big|_{t=0},
\]
and $\frac{d^n}{dt^n}F_{t\eta}\big|_{t=0}$ is in the closure of the span of exponential functions (it is a limit of a limit of finite linear combinations of exponential functions). Therefore there exists a sequence $\{F_n\}_{n\ge 1}$ in the span of $\{F_\eta,\ \eta \in L^2(\mathbb{R})\}$ such that $F_n \longrightarrow F$ in $\mathcal{W}^{1/2}$. Since $E(F_n) \longrightarrow E(F)$, it remains to show that $\int_{\mathbb{R}} \partial_t^* E(\partial_t F_n|\mathcal{F}_t)\,dt \longrightarrow \int_{\mathbb{R}} \partial_t^* E(\partial_t F|\mathcal{F}_t)\,dt$. For this it is enough to show that the convergence holds weakly. Let $\varphi \in (\mathcal{S})_\beta$. Then we have
\begin{align*}
\Big|\Big\langle\!\Big\langle \int_{\mathbb{R}} \partial_t^* E\big(\partial_t(F_n - F)\,\big|\,\mathcal{F}_t\big)\,dt,\ \varphi \Big\rangle\!\Big\rangle\Big|
&= \Big|\int_{\mathbb{R}} \big\langle\!\big\langle \partial_t^* E\big(\partial_t(F_n - F)\,\big|\,\mathcal{F}_t\big),\ \varphi\big\rangle\!\big\rangle\,dt\Big| \\
&\le \int_{\mathbb{R}} \big|\big\langle\!\big\langle E\big(\partial_t(F_n - F)\,\big|\,\mathcal{F}_t\big),\ \partial_t\varphi\big\rangle\!\big\rangle\big|\,dt \\
&\le \int_{\mathbb{R}} \big\|E\big(\partial_t(F_n - F)\,\big|\,\mathcal{F}_t\big)\big\|_0\,\|\partial_t\varphi\|_0\,dt \\
&\le \sqrt{\int_{\mathbb{R}} \|\partial_t(F_n - F)\|_0^2\,dt}\,\sqrt{\int_{\mathbb{R}} \|\partial_t\varphi\|_0^2\,dt}
\longrightarrow 0.
\end{align*}
This then completes the verification of formula (4.3).

4.2 A Generalized Clark-Ocone Formula

This is the main result of the second part of the dissertation. As pointed out earlier, extending the Clark-Ocone formula to generalized Brownian functionals that meet certain restrictions yields a formula of the same form as equation (4.2); for ease of computation, however, we will verify the form given in equation (4.3). For our result, we start by laying down the restrictions that the generalized Brownian functional has to satisfy.

Let $\Phi \in (\mathcal{S})_\beta^*$ with the property that $\Phi = \sum_{n=0}^\infty \langle {:}\cdot^{\otimes n}{:}, f_n\rangle$, $f_n \in L^2(\mathbb{R})^{\hat\otimes n}$, $\partial_t\Phi$ exists in $(\mathcal{S})_\beta^*$, and $\int_{\mathbb{R}} \|\partial_t\Phi\|_{-p,-\beta}^2\,dt < \infty$. We first need to extend the definition of the conditional expectation $E(\partial_t\Phi\,|\,\mathcal{F}_t)$.

Definition 4.3. Let $\Phi \in (\mathcal{S})_\beta^*$ satisfy the above property. For each $t$, we define the conditional expectation of $\partial_t\Phi$ with respect to $\mathcal{F}_t$ to be the generalized function whose $S$-transform is given by
\[
S\big(E(\partial_t\Phi\,|\,\mathcal{F}_t)\big)(\xi)
:= \sum_{n=1}^\infty n \int_{-\infty}^t\!\cdots\!\int_{-\infty}^t f_n(t,s_1,\dots,s_{n-1})\,\xi(s_1)\cdots\xi(s_{n-1})\,ds_1\cdots ds_{n-1}.
\]

Of course, for such a definition to be valid, $S(E(\partial_t\Phi|\mathcal{F}_t))(\xi)$ must satisfy the conditions in the characterization theorem for generalized functions introduced in Chapter 2. That is, we need to check that for $\xi, \eta \in \mathcal{S}_c(\mathbb{R})$ and $z \in \mathbb{C}$ the function $S(E(\partial_t\Phi|\mathcal{F}_t))(z\xi+\eta)$ is an entire function of $z$, and that the growth condition is satisfied. Let us start by showing that the function in question is entire.

(a) Let $\xi, \eta \in \mathcal{S}_c(\mathbb{R})$ and take any $z \in \mathbb{C}$. Then we have
\begin{align*}
S\big(E(\partial_t\Phi\,|\,\mathcal{F}_t)\big)(z\xi+\eta)
&= \sum_{n=1}^\infty n \int_{-\infty}^t\!\cdots\!\int_{-\infty}^t f_n(t,s_1,\dots,s_{n-1})\,(z\xi+\eta)(s_1)\cdots(z\xi+\eta)(s_{n-1})\,ds_1\cdots ds_{n-1} \\
&= z\sum_{n=1}^\infty n \int_{-\infty}^t\!\cdots\!\int_{-\infty}^t f_n(t,s_1,\dots,s_{n-1})\,\xi(s_1)\cdots\xi(s_{n-1})\,ds_1\cdots ds_{n-1} \\
&\qquad + \sum_{n=1}^\infty n \int_{-\infty}^t\!\cdots\!\int_{-\infty}^t f_n(t,s_1,\dots,s_{n-1})\,\eta(s_1)\cdots\eta(s_{n-1})\,ds_1\cdots ds_{n-1}.
\end{align*}
Now, by the Fundamental Theorem of Calculus, the two series in the last equality are complex differentiable. Therefore the first condition has been satisfied.

(b) The growth condition:
\begin{align*}
\big|S\big(E(\partial_t\Phi\,|\,\mathcal{F}_t)\big)(\xi)\big|
&= \Big|\sum_{n=1}^\infty n \int_{-\infty}^t\!\cdots\!\int_{-\infty}^t f_n(t,s_1,\dots,s_{n-1})\,\xi(s_1)\cdots\xi(s_{n-1})\,ds_1\cdots ds_{n-1}\Big| \\
&\le \Big|\sum_{n=1}^\infty n\,\big\langle f_n(t,\cdot),\ \xi^{\otimes(n-1)}\big\rangle\Big|
= \big|\langle\!\langle \partial_t\Phi,\ {:}e^{\langle\cdot,\xi\rangle}{:}\rangle\!\rangle\big| \\
&\le \|\partial_t\Phi\|_{-p,-\beta}\,\big\|{:}e^{\langle\cdot,\xi\rangle}{:}\big\|_{p,\beta} \\
&\le \|\partial_t\Phi\|_{-p,-\beta}\ 2^{\frac{\beta}{2(1-\beta)}} \exp\Big[(1-\beta)\,2^{-\frac{1}{1-\beta}}\,|\xi|_p^{\frac{2}{1-\beta}}\Big] \\
&= K \exp\big[a\,|\xi|_p^{2/(1-\beta)}\big], \qquad \forall\,\xi \in \mathcal{S}_c(\mathbb{R}),
\end{align*}
where $K = 2^{\beta/2(1-\beta)}\,\|\partial_t\Phi\|_{-p,-\beta}$ and $a = (1-\beta)2^{-1/(1-\beta)}$; see Theorem 5.7 in [21].

Therefore there exists a generalized function in the space $(\mathcal{S})_\beta^*$ for which the given expression is its $S$-transform, and that function is by definition $E(\partial_t\Phi\,|\,\mathcal{F}_t)$.

We now claim that the integral $\int_{\mathbb{R}} \partial_t^* E(\partial_t\Phi|\mathcal{F}_t)\,dt$ is defined as a white noise integral in the Pettis sense. The argument is similar to the $\mathcal{W}^{1/2}$ case considered earlier. It suffices to show that $t \mapsto \langle\!\langle \partial_t^* E(\partial_t\Phi|\mathcal{F}_t), \varphi\rangle\!\rangle \in L^1(\mathbb{R})$ for all $\varphi \in (\mathcal{S})_\beta$, since the measurability condition is obvious by the previous argument. Let $\varphi = \sum_{n=1}^\infty \langle {:}\cdot^{\otimes n}{:}, g_n\rangle \in (\mathcal{S})_\beta$. Then
\begin{align*}
\int_{\mathbb{R}} \big|\langle\!\langle \partial_t^* E(\partial_t\Phi|\mathcal{F}_t), \varphi\rangle\!\rangle\big|\,dt
&= \int_{\mathbb{R}} \big|\langle\!\langle E(\partial_t\Phi|\mathcal{F}_t), \partial_t\varphi\rangle\!\rangle\big|\,dt \\
&\le \int_{\mathbb{R}} \|E(\partial_t\Phi|\mathcal{F}_t)\|_{-p,-\beta}\,\|\partial_t\varphi\|_{p,\beta}\,dt
\le \int_{\mathbb{R}} \|\partial_t\Phi\|_{-p,-\beta}\,\|\partial_t\varphi\|_{p,\beta}\,dt \\
&\le \sqrt{\int_{\mathbb{R}} \|\partial_t\Phi\|_{-p,-\beta}^2\,dt}\,\sqrt{\int_{\mathbb{R}} \|\partial_t\varphi\|_{p,\beta}^2\,dt}.
\end{align*}
By assumption, $\int_{\mathbb{R}} \|\partial_t\Phi\|_{-p,-\beta}^2\,dt < \infty$. Choose any $q > p \ge 0$ such that $4\lambda_1^{-2(q-p)} < 1$. We then have
\begin{align*}
\int_{\mathbb{R}} \|\partial_t\varphi\|_{p,\beta}^2\,dt
&= \sum_{n=1}^\infty n^2\,((n-1)!)^{1+\beta} \int_{\mathbb{R}} |g_n(t,\cdot)|_p^2\,dt
\le \sum_{n=1}^\infty n^2\,(n!)^{1+\beta} \int_{\mathbb{R}} |g_n(t,\cdot)|_p^2\,dt \\
&= \sum_{n=1}^\infty n^2\,(n!)^{1+\beta}\,|g_n|_p^2
\le \sum_{n=1}^\infty 2^{2n}\,(n!)^{1+\beta}\,\lambda_1^{-2(q-p)n}\,|g_n|_q^2
\le \sum_{n=1}^\infty (n!)^{1+\beta}\,|g_n|_q^2 = \|\varphi\|_q^2 < \infty.
\end{align*}

Hence $\int_{\mathbb{R}} \partial_t^* E(\partial_t\Phi|\mathcal{F}_t)\,dt$ is defined as a white noise integral in the Pettis sense. The next theorem then establishes the second version of the Clark-Ocone formula, given in equation (4.3) above.

Theorem 4.4. Let $\Phi \in (\mathcal{S})_\beta^*$ with $\Phi = \sum_{n=0}^\infty \langle {:}\cdot^{\otimes n}{:}, f_n\rangle$, $f_n \in L^2(\mathbb{R})^{\hat\otimes n}$. Assume that $\partial_t\Phi$ exists in $(\mathcal{S})_\beta^*$ and $\int_{\mathbb{R}} \|\partial_t\Phi\|_{-p,-\beta}^2\,dt < \infty$ for some $p \ge 0$. Then $\Phi$ can be represented as
\[
\Phi = E(\Phi) + \int_{\mathbb{R}} \partial_t^* E(\partial_t\Phi\,|\,\mathcal{F}_t)\,dt. \tag{4.4}
\]

Proof. First let $\Phi_N = \sum_{n=0}^N \langle {:}\cdot^{\otimes n}{:}, f_n\rangle$. Calculating its $S$-transform yields the following:

\begin{align*}
S(\Phi_N)(\xi) &= \langle\!\langle \Phi_N,\ {:}e^{\langle\cdot,\xi\rangle}{:}\rangle\!\rangle
= \sum_{n=0}^N \langle f_n, \xi^{\otimes n}\rangle \\
&= f_0 + \sum_{n=1}^N \int_{\mathbb{R}^n} f_n(s_1,\dots,s_n)\,\xi(s_1)\cdots\xi(s_n)\,ds_1\cdots ds_n \\
&= f_0 + \sum_{n=1}^N n \int_{\mathbb{R}} \int_{-\infty}^t\!\cdots\!\int_{-\infty}^t f_n(t,s_1,\dots,s_{n-1})\,\xi(t)\xi(s_1)\cdots\xi(s_{n-1})\,ds_1\cdots ds_{n-1}\,dt \\
&= f_0 + \int_{\mathbb{R}} \xi(t)\Big(\sum_{n=1}^N n \int_{-\infty}^t\!\cdots\!\int_{-\infty}^t f_n(t,s_1,\dots,s_{n-1})\,\xi(s_1)\cdots\xi(s_{n-1})\,ds_1\cdots ds_{n-1}\Big)\,dt \\
&= f_0 + \int_{\mathbb{R}} \xi(t)\,S\big(E(\partial_t\Phi_N|\mathcal{F}_t)\big)(\xi)\,dt \\
&= S(E(\Phi_N))(\xi) + S\Big(\int_{\mathbb{R}} \partial_t^* E(\partial_t\Phi_N|\mathcal{F}_t)\,dt\Big)(\xi) \\
&= S\Big(E(\Phi_N) + \int_{\mathbb{R}} \partial_t^* E(\partial_t\Phi_N|\mathcal{F}_t)\,dt\Big)(\xi).
\end{align*}
This then verifies the decomposition $\Phi_N = E(\Phi_N) + \int_{\mathbb{R}} \partial_t^* E(\partial_t\Phi_N|\mathcal{F}_t)\,dt$. Now for $\Phi = \sum_{n=0}^\infty \langle {:}\cdot^{\otimes n}{:}, f_n\rangle$ with $f_n \in L^2(\mathbb{R})^{\hat\otimes n}$, we have $\Phi_N \longrightarrow \Phi$ in $(\mathcal{S})_\beta^*$ and $E(\Phi_N) \longrightarrow E(\Phi)$ as $N \longrightarrow \infty$. We only need to check that

\[
\int_{\mathbb{R}} \partial_t^* E(\partial_t\Phi_N\,|\,\mathcal{F}_t)\,dt \longrightarrow \int_{\mathbb{R}} \partial_t^* E(\partial_t\Phi\,|\,\mathcal{F}_t)\,dt.
\]
We will use Theorem 2.3 to show this convergence, by verifying the following two conditions.

(a) Let $F_N = S(\Phi_N)$. We claim that $\lim_{N\to\infty} F_N(\xi)$ exists. We have
\[
F_N(\xi) = S(E(\Phi_N))(\xi) + S\Big(\int_{\mathbb{R}} \partial_t^* E(\partial_t\Phi_N|\mathcal{F}_t)\,dt\Big)(\xi)
= f_0 + \int_{\mathbb{R}} \xi(t)\,S\big(E(\partial_t\Phi_N|\mathcal{F}_t)\big)(\xi)\,dt.
\]
Now, we note that as $N \longrightarrow \infty$ we obtain the pointwise limit of the integrand in the above integral because
\begin{align*}
S\big(E(\partial_t\Phi_N|\mathcal{F}_t)\big)(\xi)
&= \sum_{n=1}^N n \int_{-\infty}^t\!\cdots\!\int_{-\infty}^t f_n(t,s_1,\dots,s_{n-1})\,\xi(s_1)\cdots\xi(s_{n-1})\,ds_1\cdots ds_{n-1} \\
&\longrightarrow \sum_{n=1}^\infty n \int_{-\infty}^t\!\cdots\!\int_{-\infty}^t f_n(t,s_1,\dots,s_{n-1})\,\xi(s_1)\cdots\xi(s_{n-1})\,ds_1\cdots ds_{n-1}.
\end{align*}

Moreover, an attempt to obtain a bound for $\big|S(E(\partial_t\Phi_N|\mathcal{F}_t))(\xi)\big|$ yields the following estimate:
\begin{align*}
\big|S\big(E(\partial_t\Phi_N|\mathcal{F}_t)\big)(\xi)\big|
&\le \Big|\sum_{n=1}^N n \int_{-\infty}^t\!\cdots\!\int_{-\infty}^t f_n(t,s_1,\dots,s_{n-1})\,\xi(s_1)\cdots\xi(s_{n-1})\,ds_1\cdots ds_{n-1}\Big| \\
&\le \big|\langle\!\langle \partial_t\Phi,\ {:}e^{\langle\cdot,\xi\rangle}{:}\rangle\!\rangle\big|
\le \|\partial_t\Phi\|_{-p,-\beta}\,\big\|{:}e^{\langle\cdot,\xi\rangle}{:}\big\|_{p,\beta}.
\end{align*}
The bound obtained above is integrable because
\[
\int_{\mathbb{R}} \|\partial_t\Phi\|_{-p,-\beta}\,\big\|{:}e^{\langle\cdot,\xi\rangle}{:}\big\|_{p,\beta}\,dt
= \big\|{:}e^{\langle\cdot,\xi\rangle}{:}\big\|_{p,\beta} \int_{\mathbb{R}} \|\partial_t\Phi\|_{-p,-\beta}\,dt < \infty,
\]
since $\int_{\mathbb{R}} \|\partial_t\Phi\|_{-p,-\beta}^2\,dt < \infty$ by the assumption in the theorem.

Hence, by the Lebesgue Dominated Convergence Theorem, $\lim_{N\to\infty} S(\Phi_N)(\xi) = f_0 + \int_{\mathbb{R}} \xi(t)\,\lim_{N\to\infty} S(E(\partial_t\Phi_N|\mathcal{F}_t))(\xi)\,dt$, and so $\lim_{N\to\infty} F_N(\xi)$ exists.

(b) We claim that $F_N(\xi)$ satisfies the growth condition:
\begin{align*}
|F_N(\xi)| &= |S(\Phi_N)(\xi)|
\le |f_0| + \int_{\mathbb{R}} |\xi(t)|\,\big|S\big(E(\partial_t\Phi_N|\mathcal{F}_t)\big)(\xi)\big|\,dt \\
&\le |f_0| + \int_{\mathbb{R}} |\xi(t)|\,\|\partial_t\Phi\|_{-p,-\beta}\,\exp\big[a\,|\xi|_p^{2/(1-\beta)}\big]\,dt \\
&\le |f_0| + \exp\big[a\,|\xi|_p^{2/(1-\beta)}\big]\,\sqrt{\int_{\mathbb{R}} |\xi(t)|^2\,dt}\,\sqrt{\int_{\mathbb{R}} \|\partial_t\Phi\|_{-p,-\beta}^2\,dt}.
\end{align*}
But $\int_{\mathbb{R}} \|\partial_t\Phi\|_{-p,-\beta}^2\,dt < \infty$, and $\int_{\mathbb{R}} |\xi(t)|^2\,dt < \infty$ since $\xi \in \mathcal{S}(\mathbb{R}) \subset L^2(\mathbb{R})$. If we let $K = \sqrt{\int_{\mathbb{R}} |\xi(t)|^2\,dt \int_{\mathbb{R}} \|\partial_t\Phi\|_{-p,-\beta}^2\,dt}$, we get the desired bound for the growth condition, independent of $N$, for every $\xi \in \mathcal{S}_c(\mathbb{R})$. We now conclude that $\int_{\mathbb{R}} \partial_t^* E(\partial_t\Phi_N|\mathcal{F}_t)\,dt \longrightarrow \int_{\mathbb{R}} \partial_t^* E(\partial_t\Phi|\mathcal{F}_t)\,dt$, since $S(E(\partial_t\Phi_N|\mathcal{F}_t))(\xi) \longrightarrow S(E(\partial_t\Phi|\mathcal{F}_t))(\xi)$ by the above argument. This then completes the verification of the extended formula (4.4).

4.3 The Formula for the Hermite Brownian Functional

In this section we verify the Clark-Ocone formula for the Hermite Brownian functional using the Itô formula.

Lemma 4.5. Let $B(t)$, $t > 0$, be a Brownian motion. Then the Hermite Brownian functional ${:}B(t)^n{:}_t$ is a martingale.

Proof. Let $\lambda \in \mathbb{R}$ and set $Y_\lambda(t) = e^{\lambda B(t) - \frac{1}{2}\lambda^2 t}$. We first show that the process $\{Y_\lambda(t),\ t \ge 0\}$ is a martingale (see [8]). First we show integrability. We observe that $Y_\lambda(t)$ is $\mathcal{F}_t$-measurable. Moreover,
\begin{align*}
E[Y_\lambda(t)] &= \frac{1}{\sqrt{2\pi t}}\,e^{-\frac{1}{2}\lambda^2 t} \int_{\mathbb{R}} e^{\lambda y - \frac{y^2}{2t}}\,dy
= e^{-\frac{1}{2}\lambda^2 t}\,\frac{1}{\sqrt{2\pi t}} \int_{\mathbb{R}} e^{-\frac{1}{2t}(y^2 - 2t\lambda y)}\,dy \\
&= e^{-\frac{1}{2}\lambda^2 t}\,\frac{1}{\sqrt{2\pi t}} \int_{\mathbb{R}} e^{-\frac{1}{2t}(y - t\lambda)^2}\,e^{\frac{t\lambda^2}{2}}\,dy
= e^{-\frac{1}{2}\lambda^2 t}\,e^{\frac{1}{2}\lambda^2 t} = 1,
\end{align*}
which shows integrability.

Now let $t > s$. We can write
\[
Y_\lambda(t) = e^{\lambda B(s) - \frac{1}{2}\lambda^2 s}\,e^{\lambda(B(t)-B(s)) - \frac{1}{2}\lambda^2(t-s)}
= Y_\lambda(s)\,e^{\lambda(B(t)-B(s)) - \frac{1}{2}\lambda^2(t-s)}.
\]
Since $B(t) - B(s)$ is independent of any $\mathcal{F}_s$-measurable function, we have
\[
E[Y_\lambda(t)\,|\,\mathcal{F}_s] = Y_\lambda(s)\,E\big[e^{\lambda(B(t)-B(s)) - \frac{1}{2}\lambda^2(t-s)}\big] = Y_\lambda(s)
\]
almost surely, by the integrability computation above, the increment $B(t)-B(s)$ having the same distribution as $B(t-s)$.
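The two facts just proved, $E[Y_\lambda(t)] = 1$ and the martingale property, are easy to corroborate numerically. The following Monte Carlo sketch is an illustration only (sample size and parameter values are arbitrary); it tests the martingale property in its weak form $E[Y_\lambda(t)\,g(B(s))] = E[Y_\lambda(s)\,g(B(s))]$, using $g = \cos$ as a generic $\mathcal{F}_s$-measurable weight:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, s, t, n = 0.7, 0.4, 1.0, 400_000

Bs = rng.normal(0.0, np.sqrt(s), n)            # samples of B(s)
Bt = Bs + rng.normal(0.0, np.sqrt(t - s), n)   # B(t) = B(s) + independent increment

def Y(b, u):
    # Y_lambda(u) = exp(lambda * B(u) - lambda^2 * u / 2)
    return np.exp(lam * b - 0.5 * lam**2 * u)

mean_Y = np.mean(Y(Bt, t))           # should be close to 1
g = np.cos(Bs)                       # an F_s-measurable weight
mart_lhs = np.mean(Y(Bt, t) * g)     # E[Y(t) g(B(s))]
mart_rhs = np.mean(Y(Bs, s) * g)     # E[Y(s) g(B(s))]
print(mean_Y, mart_lhs, mart_rhs)
```

Both comparisons hold up to Monte Carlo error, reflecting exactly the independence-of-increments argument used in the proof.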

We have the generating function for ${:}B(t)^n{:}_t$ given by
\[
\sum_{n=0}^\infty \frac{\lambda^n}{n!}\,{:}B(t)^n{:}_t = e^{\lambda B(t) - \frac{1}{2}\lambda^2 t}. \tag{4.5}
\]

Taking the conditional expectation with respect to $\mathcal{F}_s$ ($s < t$) on both sides of (4.5) and using the martingale property of $Y_\lambda$, we obtain
\[
\sum_{n=0}^\infty \frac{\lambda^n}{n!}\,E\big({:}B(t)^n{:}_t\,\big|\,\mathcal{F}_s\big) = \sum_{n=0}^\infty \frac{\lambda^n}{n!}\,{:}B(s)^n{:}_s.
\]
By comparing coefficients of $\lambda^n$, we get
\[
E\big({:}B(t)^n{:}_t\,\big|\,\mathcal{F}_s\big) = {:}B(s)^n{:}_s,
\]
which completes the proof of the lemma.
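In terms of the probabilists' Hermite polynomials $\mathrm{He}_n$, the Wick powers used here can be written as ${:}x^n{:}_t = t^{n/2}\,\mathrm{He}_n(x/\sqrt{t})$, and the generating function (4.5) can then be checked numerically. The sketch below is an illustration only (the truncation order and evaluation point are arbitrary); it compares a partial sum of (4.5) with the exponential:

```python
from math import exp, factorial, sqrt

from numpy.polynomial.hermite_e import hermeval

def wick_power(x, n, t):
    """:x^n:_t = t^(n/2) * He_n(x / sqrt(t)), He_n the probabilists' Hermite polynomial."""
    return t ** (n / 2) * hermeval(x / sqrt(t), [0.0] * n + [1.0])

lam, x, t, N = 0.8, 1.3, 2.0, 40
partial_sum = sum(lam**n / factorial(n) * wick_power(x, n, t) for n in range(N))
closed_form = exp(lam * x - 0.5 * lam**2 * t)
print(partial_sum, closed_form)   # the partial sum matches the exponential
```

The factorial in the denominator makes the series converge rapidly, so a modest truncation already reproduces the closed form to near machine precision.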

Next we verify the formula for the Hermite Brownian functional ${:}B(t)^n{:}_t$.

Theorem 4.6. Let $t \in [0,1]$ and let $B(t)$ be a Brownian motion. Then $E\big(\partial_s\,{:}B(t)^n{:}_t\,\big|\,\mathcal{F}_s\big) = n\,{:}B(s)^{n-1}{:}_s\,1_{[0,t]}(s)$, and the Clark-Ocone representation formula for ${:}B(t)^n{:}_t$ is given by
\[
{:}B(t)^n{:}_t = n\int_0^t {:}B(s)^{n-1}{:}_s\,dB(s). \tag{4.6}
\]

Proof. Consider the function $\theta(t,x) = {:}x^n{:}_t$. Then, by using the generating function

\[
\sum_{n=0}^\infty \frac{\lambda^n}{n!}\,{:}x^n{:}_t = e^{\lambda x - \frac{1}{2}\lambda^2 t},
\]
the following formulas can easily be verified:
\[
\frac{\partial}{\partial t}\,\theta(t,x) = -\frac{1}{2}\,n(n-1)\,{:}x^{n-2}{:}_t, \qquad
\frac{\partial}{\partial x}\,\theta(t,x) = n\,{:}x^{n-1}{:}_t, \qquad
\frac{\partial^2}{\partial x^2}\,\theta(t,x) = n(n-1)\,{:}x^{n-2}{:}_t.
\]
By the Itô formula,
\[
d\theta(t,B(t)) = \frac{\partial}{\partial x}\theta(t,B(t))\,dB(t) + \Big(\frac{1}{2}\,\frac{\partial^2}{\partial x^2}\theta(t,B(t)) + \frac{\partial}{\partial t}\theta(t,B(t))\Big)\,dt.
\]

Putting it in integral form,
\begin{align*}
{:}B(t)^n{:}_t &= \int_0^t \frac{\partial}{\partial x}\theta(s,B(s))\,dB(s) + \int_0^t \Big(\frac{1}{2}\,\frac{\partial^2}{\partial x^2}\theta(s,B(s)) + \frac{\partial}{\partial s}\theta(s,B(s))\Big)\,ds \\
&= n\int_0^t {:}B(s)^{n-1}{:}_s\,dB(s) + \int_0^t \frac{1}{2}\,n(n-1)\,{:}B(s)^{n-2}{:}_s\,ds - \int_0^t \frac{1}{2}\,n(n-1)\,{:}B(s)^{n-2}{:}_s\,ds \\
&= n\int_0^t {:}B(s)^{n-1}{:}_s\,dB(s).
\end{align*}

Since $E[{:}B(t)^n{:}_t] = 0$, it remains to be shown that
\[
\int_0^1 E\big(\partial_s\,{:}B(t)^n{:}_t\,\big|\,\mathcal{F}_s\big)\,dB(s) = n\int_0^t {:}B(s)^{n-1}{:}_s\,dB(s). \tag{4.7}
\]
We can first write ${:}B(t)^n{:}_t$ in terms of Wick tensors as
\[
{:}B(t)^n{:}_t = \big\langle {:}\cdot^{\otimes n}{:},\ 1_{[0,t]}^{\otimes n}\big\rangle.
\]
We then obtain
\[
\partial_s\,{:}B(t)^n{:}_t = n\,\big\langle {:}\cdot^{\otimes(n-1)}{:},\ 1_{[0,t]}^{\otimes(n-1)}\big\rangle\,1_{[0,t]}(s)
= n\,{:}B(t)^{n-1}{:}_t\,1_{[0,t]}(s).
\]
Since ${:}B(t)^n{:}_t$ is a martingale, as noted earlier, it follows that
\[
E\big(\partial_s\,{:}B(t)^n{:}_t\,\big|\,\mathcal{F}_s\big) = n\,{:}B(s)^{n-1}{:}_s\,1_{[0,t]}(s),
\]
and so
\[
\int_0^1 E\big(\partial_s\,{:}B(t)^n{:}_t\,\big|\,\mathcal{F}_s\big)\,dB(s)
= \int_0^1 n\,{:}B(s)^{n-1}{:}_s\,1_{[0,t]}(s)\,dB(s)
= n\int_0^t {:}B(s)^{n-1}{:}_s\,dB(s),
\]
which then verifies the Clark-Ocone formula for ${:}B(t)^n{:}_t$.
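Formula (4.6) can also be checked pathwise by simulation. For $n = 3$ one has the explicit Wick powers ${:}x^3{:}_t = x^3 - 3tx$ and ${:}x^2{:}_s = x^2 - s$, so along a single simulated Brownian path the left side of (4.6) should match an Euler-scheme approximation of the stochastic integral on the right. The following sketch is an illustration only (step count and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n_steps, T = 100_000, 1.0
dt = T / n_steps

dB = rng.normal(0.0, np.sqrt(dt), n_steps)
B = np.concatenate([[0.0], np.cumsum(dB)])
s = np.linspace(0.0, T, n_steps + 1)

# Left side of (4.6) for n = 3: :B(T)^3:_T = B(T)^3 - 3 T B(T)
lhs = B[-1]**3 - 3 * T * B[-1]

# Right side: 3 * int_0^T :B(s)^2:_s dB(s), left-endpoint Euler sum
rhs = 3 * np.sum((B[:-1]**2 - s[:-1]) * dB)
print(lhs, rhs)   # agree up to the Euler discretization error
```

The agreement is pathwise, not just in expectation, which is exactly what the Itô-formula computation above asserts: the drift terms cancel identically.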

4.4 The Formula for Donsker's Delta Function

As a specific example of the results obtained in Section 4.2 above, we consider Donsker's delta function. For $\delta_a$ the Dirac delta function at $a$ and $B(t)$ a Brownian motion, Donsker's delta function $\delta(B(t)-a)$ is a generalized Brownian functional with the following Wiener-Itô decomposition ([18]):
\[
\delta(B(t)-a) = \frac{1}{\sqrt{2\pi t}}\,e^{-\frac{a^2}{2t}} \sum_{n=0}^\infty \frac{1}{n!\,t^n}\,{:}a^n{:}_t\,\big\langle {:}\cdot^{\otimes n}{:},\ 1_{[0,t]}^{\otimes n}\big\rangle. \tag{4.8}
\]

Theorem 4.7. Let $\delta(B(t)-a)$ be Donsker's delta function. Then the Clark-Ocone representation formula for $\delta(B(t)-a)$ is given by:

\[
\delta(B(t)-a) = \frac{1}{\sqrt{2\pi t}}\,e^{-\frac{a^2}{2t}} + \frac{1}{\sqrt{2\pi}} \int_{-\infty}^t \frac{a - B(s)}{(t-s)^{3/2}}\,e^{-\frac{(a-B(s))^2}{2(t-s)}}\,dB(s). \tag{4.9}
\]

Proof. Since $\langle {:}\cdot^{\otimes n}{:},\ 1_{[0,t]}^{\otimes n}\rangle = {:}B(t)^n{:}_t$, equation (4.8) can also be written as
\[
\delta(B(t)-a) = \frac{1}{\sqrt{2\pi t}}\,e^{-\frac{a^2}{2t}} \sum_{n=0}^\infty \frac{1}{n!\,t^n}\,{:}a^n{:}_t\,{:}B(t)^n{:}_t. \tag{4.10}
\]
We now know from Theorem 4.6 that ${:}B(t)^n{:}_t = n\int_{-\infty}^t {:}B(s)^{n-1}{:}_s\,dB(s)$. Hence
\begin{align*}
\delta(B(t)-a) &= \frac{1}{\sqrt{2\pi t}}\,e^{-\frac{a^2}{2t}} + \frac{1}{\sqrt{2\pi t}}\,e^{-\frac{a^2}{2t}} \sum_{n=1}^\infty \frac{1}{n!\,t^n}\,{:}a^n{:}_t\,{:}B(t)^n{:}_t \\
&= \frac{1}{\sqrt{2\pi t}}\,e^{-\frac{a^2}{2t}} + \frac{1}{\sqrt{2\pi t}}\,e^{-\frac{a^2}{2t}} \sum_{n=1}^\infty \frac{1}{n!\,t^n}\,{:}a^n{:}_t\;n\int_{-\infty}^t {:}B(s)^{n-1}{:}_s\,dB(s) \\
&= \frac{1}{\sqrt{2\pi t}}\,e^{-\frac{a^2}{2t}} + \int_{-\infty}^t \frac{1}{\sqrt{2\pi t}}\,e^{-\frac{a^2}{2t}} \sum_{n=1}^\infty \frac{1}{(n-1)!\,t^n}\,{:}a^n{:}_t\,{:}B(s)^{n-1}{:}_s\,dB(s).
\end{align*}

The following two formulas can be used to simplify the integrand in the last equality:
\[
\sum_{n=0}^\infty \frac{t^n}{n!}\,{:}x^n{:}_1\,{:}y^n{:}_1 = \frac{1}{\sqrt{1-t^2}}\,\exp\Big[-\frac{t^2x^2 - 2txy + t^2y^2}{2(1-t^2)}\Big], \qquad |t| < 1; \tag{4.11}
\]
\[
{:}(\sigma u)^n{:}_{\sigma^2} = \sigma^n\,{:}u^n{:}_1. \tag{4.12}
\]
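Formula (4.11) is a form of Mehler's formula; with ${:}x^n{:}_1 = \mathrm{He}_n(x)$, the probabilists' Hermite polynomial, it can be checked numerically. The sketch below is an illustration only (the truncation order and test point are arbitrary); it compares a partial sum of the series with the closed form:

```python
from math import exp, factorial, sqrt

from numpy.polynomial.hermite_e import hermeval

def he(x, n):
    """Probabilists' Hermite polynomial He_n(x), i.e. :x^n:_1."""
    return hermeval(x, [0.0] * n + [1.0])

t, x, y, N = 0.6, 0.9, -1.4, 80
series = sum(t**n / factorial(n) * he(x, n) * he(y, n) for n in range(N))
closed = exp(-(t**2 * x**2 - 2*t*x*y + t**2 * y**2) / (2 * (1 - t**2))) / sqrt(1 - t**2)
print(series, closed)
```

Since the terms decay geometrically for $|t| < 1$, the partial sum converges quickly to the closed form.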

From the second formula above, we get
\[
{:}u^n{:}_1 = \frac{1}{\sigma^n}\,{:}(\sigma u)^n{:}_{\sigma^2}.
\]
Therefore,
\[
\sum_{n=0}^\infty \frac{t^n}{n!}\,\frac{1}{\sigma^n}\,\frac{1}{\lambda^n}\,{:}(\sigma x)^n{:}_{\sigma^2}\,{:}(\lambda y)^n{:}_{\lambda^2}
= \frac{1}{\sqrt{1-t^2}}\,\exp\Big[-\frac{t^2x^2 - 2txy + t^2y^2}{2(1-t^2)}\Big].
\]
Put $\sigma x = u$, $\lambda y = v$. Then
\[
\sum_{n=0}^\infty \frac{t^n}{n!}\,\frac{1}{\sigma^n}\,\frac{1}{\lambda^n}\,{:}u^n{:}_{\sigma^2}\,{:}v^n{:}_{\lambda^2}
= \frac{1}{\sqrt{1-t^2}}\,\exp\Big[-\frac{\frac{t^2}{\sigma^2}u^2 - 2t\,\frac{u}{\sigma}\frac{v}{\lambda} + \frac{t^2}{\lambda^2}v^2}{2(1-t^2)}\Big].
\]
Let $t = \tau$. Then we obtain
\[
\sum_{n=0}^\infty \frac{\tau^n}{n!}\,\frac{1}{\sigma^n}\,\frac{1}{\lambda^n}\,{:}u^n{:}_{\sigma^2}\,{:}v^n{:}_{\lambda^2}
= \frac{1}{\sqrt{1-\tau^2}}\,\exp\Big[-\frac{\frac{\tau^2}{\sigma^2}u^2 - 2\tau\,\frac{uv}{\sigma\lambda} + \frac{\tau^2}{\lambda^2}v^2}{2(1-\tau^2)}\Big].
\]
Put $\sigma^2 = t$, $\lambda^2 = s$ to obtain
\[
\sum_{n=0}^\infty \frac{\tau^n}{n!}\,\frac{1}{(\sqrt{t}\,)^n}\,\frac{1}{(\sqrt{s}\,)^n}\,{:}u^n{:}_t\,{:}v^n{:}_s
= \frac{1}{\sqrt{1-\tau^2}}\,\exp\Big[-\frac{\frac{\tau^2}{t}u^2 - \frac{2\tau}{\sqrt{ts}}uv + \frac{\tau^2}{s}v^2}{2(1-\tau^2)}\Big]. \tag{4.13}
\]

Differentiating both sides of equation (4.13) in $v$, we obtain
\[
\sum_{n=1}^\infty \frac{\tau^n}{(n-1)!}\,\frac{1}{(\sqrt{ts}\,)^n}\,{:}u^n{:}_t\,{:}v^{n-1}{:}_s
= \frac{1}{\sqrt{1-\tau^2}}\;\frac{\frac{2\tau}{\sqrt{ts}}u - \frac{2\tau^2}{s}v}{2(1-\tau^2)}\,
\exp\Big[-\frac{\frac{\tau^2}{t}u^2 - \frac{2\tau}{\sqrt{ts}}uv + \frac{\tau^2}{s}v^2}{2(1-\tau^2)}\Big]. \tag{4.14}
\]

Substitute $a$ for $u$, $B(s)$ for $v$, and $\frac{\sqrt{s}}{\sqrt{t}}$ for $\tau$ in equation (4.14) to get
\[
\sum_{n=1}^\infty \frac{1}{(n-1)!\,t^n}\,{:}a^n{:}_t\,{:}B(s)^{n-1}{:}_s
= \frac{\sqrt{t}\,(a - B(s))}{(t-s)^{3/2}}\,
\exp\Big[-\frac{\frac{s}{t}a^2 - 2aB(s) + B(s)^2}{2(t-s)}\Big].
\]

We then get
\[
e^{-\frac{a^2}{2t}} \sum_{n=1}^\infty \frac{1}{(n-1)!\,t^n}\,{:}a^n{:}_t\,{:}B(s)^{n-1}{:}_s
= \frac{\sqrt{t}\,(a - B(s))}{(t-s)^{3/2}}\,
\exp\Big[-\frac{a^2}{2t} - \frac{\frac{s}{t}a^2 - 2aB(s) + B(s)^2}{2(t-s)}\Big].
\]
The exponent in the above equation reduces as follows:
\begin{align*}
-\frac{a^2}{2t} - \frac{\frac{s}{t}a^2 - 2aB(s) + B(s)^2}{2(t-s)}
&= -\frac{a^2}{2t} - \frac{s a^2}{2t(t-s)} - \frac{-2aB(s) + B(s)^2}{2(t-s)} \\
&= -\frac{(t-s)a^2 + s a^2}{2t(t-s)} - \frac{-2aB(s) + B(s)^2}{2(t-s)} \\
&= -\frac{a^2}{2(t-s)} - \frac{-2aB(s) + B(s)^2}{2(t-s)}
= -\frac{(a - B(s))^2}{2(t-s)}.
\end{align*}
Therefore
\[
e^{-\frac{a^2}{2t}} \sum_{n=1}^\infty \frac{1}{(n-1)!\,t^n}\,{:}a^n{:}_t\,{:}B(s)^{n-1}{:}_s
= \frac{\sqrt{t}\,(a - B(s))}{(t-s)^{3/2}}\,\exp\Big[-\frac{(a-B(s))^2}{2(t-s)}\Big].
\]
Hence
\begin{align*}
\delta(B(t)-a) &= \frac{1}{\sqrt{2\pi t}}\,e^{-\frac{a^2}{2t}} + \frac{1}{\sqrt{2\pi t}} \int_{-\infty}^t e^{-\frac{a^2}{2t}} \sum_{n=1}^\infty \frac{1}{(n-1)!\,t^n}\,{:}a^n{:}_t\,{:}B(s)^{n-1}{:}_s\,dB(s) \\
&= \frac{1}{\sqrt{2\pi t}}\,e^{-\frac{a^2}{2t}} + \frac{1}{\sqrt{2\pi t}} \int_{-\infty}^t \frac{\sqrt{t}\,(a-B(s))}{(t-s)^{3/2}}\,\exp\Big[-\frac{(a-B(s))^2}{2(t-s)}\Big]\,dB(s) \\
&= \frac{1}{\sqrt{2\pi t}}\,e^{-\frac{a^2}{2t}} + \frac{1}{\sqrt{2\pi}} \int_{-\infty}^t \frac{a-B(s)}{(t-s)^{3/2}}\,\exp\Big[-\frac{(a-B(s))^2}{2(t-s)}\Big]\,dB(s),
\end{align*}
which gives us the desired result and completes the proof.
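The exponent simplification carried out in the proof above is a purely algebraic identity, so it can be spot-checked at random parameter values. The sketch below is an illustration only; it evaluates both sides at a few random points with $0 < s < t$, where the variable `B` stands for the value $B(s)$:

```python
import random

random.seed(3)
for _ in range(5):
    t = random.uniform(1.0, 2.0)
    s = random.uniform(0.0, t - 0.1)
    a = random.gauss(0.0, 1.0)
    B = random.gauss(0.0, 1.0)   # stands for the value B(s)

    lhs = -a**2 / (2*t) - ((s/t) * a**2 - 2*a*B + B**2) / (2 * (t - s))
    rhs = -(a - B)**2 / (2 * (t - s))
    print(abs(lhs - rhs))   # zero up to floating-point roundoff
```

Combining the $a^2$ terms over the common denominator $2t(t-s)$ collapses them to $a^2/(2(t-s))$, which is the whole content of the reduction.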

4.5 Generalization to Compositions with Tempered Distributions

In this section we generalize the Clark-Ocone formula to Brownian functionals of the form $f(B(t))$, where $f \in \mathcal{S}'(\mathbb{R})$. We first state a theorem which verifies that $f(B(t))$ is indeed a generalized function. For the proof see [21], [22].

Theorem 4.8. Let $f \in \mathcal{S}'(\mathbb{R})$. Then $f(B(t)) \equiv \langle f, \delta(B(t) - (\cdot))\rangle$ is a generalized Brownian functional with the following series representation:
\[
f(B(t)) = \frac{1}{\sqrt{2\pi t}} \sum_{n=0}^\infty \frac{1}{n!\,t^n}\,\langle f, \xi_{n,t}\rangle\,{:}B(t)^n{:}_t, \tag{4.15}
\]
where $\xi_{n,t}(x) = {:}x^n{:}_t\,e^{-\frac{x^2}{2t}}$ is a function in $\mathcal{S}(\mathbb{R})$ and $\langle\cdot,\cdot\rangle$ is the pairing between $\mathcal{S}'(\mathbb{R})$ and $\mathcal{S}(\mathbb{R})$.

Theorem 4.9. Let $f \in \mathcal{S}'(\mathbb{R})$. Then the Brownian functional $f(B(t))$ has the following Clark-Ocone representation formula:
\[
f(B(t)) = \frac{1}{\sqrt{2\pi t}} \int_{\mathbb{R}} f(x)\,e^{-\frac{x^2}{2t}}\,dx
+ \int_{-\infty}^t \frac{1}{\sqrt{2\pi(t-s)}} \Big(\int_{\mathbb{R}} f'(x)\,e^{-\frac{(x-B(s))^2}{2(t-s)}}\,dx\Big)\,dB(s). \tag{4.16}
\]

Proof. For $f \in \mathcal{S}'(\mathbb{R})$ and $\xi_{n,t}(x) = {:}x^n{:}_t\,e^{-\frac{x^2}{2t}} \in \mathcal{S}(\mathbb{R})$, the pairing $\langle f, \xi_{n,t}\rangle$ is the integral $\int_{\mathbb{R}} f(x)\,{:}x^n{:}_t\,e^{-\frac{x^2}{2t}}\,dx$. Therefore,
\begin{align*}
f(B(t)) &= \frac{1}{\sqrt{2\pi t}} \sum_{n=0}^\infty \frac{1}{n!\,t^n}\,\langle f, \xi_{n,t}\rangle\,{:}B(t)^n{:}_t \\
&= \frac{1}{\sqrt{2\pi t}} \sum_{n=0}^\infty \frac{1}{n!\,t^n}\Big(\int_{\mathbb{R}} f(x)\,{:}x^n{:}_t\,e^{-\frac{x^2}{2t}}\,dx\Big)\,{:}B(t)^n{:}_t \\
&= \frac{1}{\sqrt{2\pi t}} \int_{\mathbb{R}} f(x)\,e^{-\frac{x^2}{2t}}\,dx
+ \frac{1}{\sqrt{2\pi t}} \sum_{n=1}^\infty \frac{1}{n!\,t^n}\Big(\int_{\mathbb{R}} f(x)\,{:}x^n{:}_t\,e^{-\frac{x^2}{2t}}\,dx\Big)\,{:}B(t)^n{:}_t.
\end{align*}

Let $K$ denote the second summand in the last equality above. Then we obtain
\begin{align*}
K &= \frac{1}{\sqrt{2\pi t}} \sum_{n=1}^\infty \frac{1}{n!\,t^n} \int_{-\infty}^t \Big(\int_{\mathbb{R}} f(x)\,{:}x^n{:}_t\,e^{-\frac{x^2}{2t}}\,dx\Big)\,n\,{:}B(s)^{n-1}{:}_s\,dB(s) \\
&= \frac{1}{\sqrt{2\pi t}} \int_{-\infty}^t \int_{\mathbb{R}} f(x)\Big(\sum_{n=1}^\infty \frac{1}{(n-1)!\,t^n}\,{:}x^n{:}_t\,{:}B(s)^{n-1}{:}_s\Big)\,e^{-\frac{x^2}{2t}}\,dx\,dB(s).
\end{align*}

But
\[
\sum_{n=1}^\infty \frac{1}{(n-1)!\,t^n}\,{:}x^n{:}_t\,{:}B(s)^{n-1}{:}_s\,e^{-\frac{x^2}{2t}}
= \frac{\sqrt{t}\,(x - B(s))}{(t-s)^{3/2}}\,e^{-\frac{(x-B(s))^2}{2(t-s)}}.
\]

Therefore,
\[
K = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^t \Big(\int_{\mathbb{R}} f(x)\,\frac{x - B(s)}{(t-s)^{3/2}}\,e^{-\frac{(x-B(s))^2}{2(t-s)}}\,dx\Big)\,dB(s).
\]
Applying integration by parts to $K$ yields the following:
\begin{align*}
K &= \frac{1}{\sqrt{2\pi}} \int_{-\infty}^t \frac{1}{\sqrt{t-s}} \Big(\int_{\mathbb{R}} f(x)\,\frac{x - B(s)}{t-s}\,e^{-\frac{(x-B(s))^2}{2(t-s)}}\,dx\Big)\,dB(s) \\
&= \frac{1}{\sqrt{2\pi}} \int_{-\infty}^t \frac{1}{\sqrt{t-s}} \Big(-\int_{\mathbb{R}} f(x)\,\frac{d}{dx}\,e^{-\frac{(x-B(s))^2}{2(t-s)}}\,dx\Big)\,dB(s) \\
&= \int_{-\infty}^t \frac{1}{\sqrt{2\pi(t-s)}} \Big(\int_{\mathbb{R}} f'(x)\,e^{-\frac{(x-B(s))^2}{2(t-s)}}\,dx\Big)\,dB(s).
\end{align*}
Hence
\[
f(B(t)) = \frac{1}{\sqrt{2\pi t}} \int_{\mathbb{R}} f(x)\,e^{-\frac{x^2}{2t}}\,dx
+ \int_{-\infty}^t \frac{1}{\sqrt{2\pi(t-s)}} \Big(\int_{\mathbb{R}} f'(x)\,e^{-\frac{(x-B(s))^2}{2(t-s)}}\,dx\Big)\,dB(s),
\]
which is the Clark-Ocone formula for $f(B(t))$, for which
\[
E[f(B(t))] = \frac{1}{\sqrt{2\pi t}} \int_{\mathbb{R}} f(x)\,e^{-\frac{x^2}{2t}}\,dx
\quad\text{and}\quad
E(\partial_s f(B(t))\,|\,\mathcal{F}_s) = \frac{1}{\sqrt{2\pi(t-s)}} \int_{\mathbb{R}} f'(x)\,e^{-\frac{(x-B(s))^2}{2(t-s)}}\,dx.
\]
This finishes the proof.
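For smooth $f$ the deterministic term of (4.16) is just $E[f(B(t))]$, and this is easy to corroborate numerically. The sketch below is an illustration only; it uses the test function $f = \cos$, for which $E[\cos(B(t))] = e^{-t/2}$, and compares a Monte Carlo estimate with a quadrature of the Gaussian integral:

```python
import numpy as np

rng = np.random.default_rng(4)
t = 0.8

# Monte Carlo estimate of E[f(B(t))] with f = cos
mc = np.mean(np.cos(rng.normal(0.0, np.sqrt(t), 1_000_000)))

# Deterministic term of (4.16): (2*pi*t)^(-1/2) * integral of f(x) exp(-x^2/(2t)) dx
x = np.linspace(-10.0, 10.0, 200_001)
dx = x[1] - x[0]
quad = np.sum(np.cos(x) * np.exp(-x**2 / (2*t))) * dx / np.sqrt(2*np.pi*t)

print(mc, quad, np.exp(-t/2))   # all three approximate E[cos(B(t))]
```

The integration range $[-10, 10]$ is wide enough that the Gaussian tail is negligible, so a simple Riemann sum already matches the closed form to many digits.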

References

[1] L. Arnold, Stochastic Differential Equations, John Wiley and Sons, 1971.

[2] R. Buckdahn, Skorohod's Integral and Linear Stochastic Differential Equations, Preprint (1987).

[3] R. Buckdahn, Anticipating Linear Stochastic Differential Equations, Lecture Notes in Control and Information Sciences 136 (1989) 18 - 23, Springer-Verlag.

[4] J. M. C. Clark, The Representation of Functionals of Brownian Motion by Stochastic Integrals, The Annals of Mathematical Statistics, Vol. 41, No. 4 (1970) 1282 - 1295.

[5] W. G. Cochran, H.-H. Kuo, A. Sengupta, A new class of white noise generalized functions, Infinite Dimensional Analysis, Quantum Probability and Related Topics 1 (1998) 43 - 67.

[6] M. de Faria, M. J. Oliveira, and L. Streit, A Generalized Clark-Ocone Formula, Preprint (1999).

[7] I. M. Gel'fand and N. Y. Vilenkin, Generalized Functions, Vol. 4, Academic Press, 1964.

[8] T. Hida, Brownian Motion. Springer-Verlag, 1980.

[9] M. Hitsuda, Formula for Brownian partial derivatives, Second Japan-USSR Symp. Probab. Th. 2 (1972) 111 - 114.

[10] T. Hida, H. Kuo, J. Potthoff, L. Streit, White Noise: An Infinite Dimensional Calculus, Kluwer Academic Publishers, 1993.

[11] K. Itô, Multiple Wiener Integral, J. Math. Soc. Japan 3 (1951) 157 - 169.

[12] K. Itô, Extension of Stochastic Integrals, Proc. Intern. Symp. Stochastic Differential Equations (1978) 95 - 109, Kinokuniya.

[13] I. Kubo and S. Takenaka, Calculus on Gaussian white noise I, Proc. Japan Acad. 56A (1980) 376 - 380.

[14] I. Kubo and S. Takenaka, Calculus on Gaussian white noise II, Proc. Japan Acad. 56A (1980) 411 - 416.

[15] I. Kubo and S. Takenaka, Calculus on Gaussian white noise III, Proc. Japan Acad. 57A (1981) 433 - 437.

[16] I. Kubo and S. Takenaka, Calculus on Gaussian white noise IV, Proc. Japan Acad. 58A (1982) 186 - 189.

[17] I. Kubo, Itô formula for generalized Brownian functionals, Lecture Notes in Control and Information Scis. 49 (1983) 156 - 166, Springer-Verlag.

[18] H. H. Kuo, Donsker’s delta function as a generalized Brownian Functional and its Application, Lecture Notes in Control and Information Sciences, 49 167 - 178, Springer-Verlag New York, 1983.

[19] H. H. Kuo, Gaussian Measures in Banach Spaces, Lecture Notes in Mathematics 463 (1975), Springer-Verlag.

[20] H. H. Kuo, Infinite Dimensional Stochastic Analysis, Lecture Notes in Mathematics, Nagoya University, No. 11.

[21] H. H. Kuo, White Noise Distribution Theory, Probability and Stochastics Series, CRC Press, Inc., 1996.

[22] G. Kallianpur and H. Kuo, Regularity Property of Donsker's delta function, Appl. Math. Optim. 12 (1984) 89 - 95.

[23] D. Nualart, The Malliavin Calculus and Related Topics, Probability and its Applications, Springer-Verlag, 1995.

[24] D. Nualart and E. Pardoux, Stochastic Calculus with anticipating integrands, Probability Theory and Related Fields 78 (1988), 535 - 581.

[25] B. Oksendal, Stochastic Differential Equations: An Introduction with Applications, Fourth Edition, Springer-Verlag, 1995.

[26] N. Obata, White Noise Calculus and Fock Space, Lecture Notes in Math. 1577, Springer-Verlag (1994).

[27] D. Ocone, Malliavin Calculus and Stochastic integral representation of functionals of diffusion processes, Stochastics 12 (1984), 161 - 185.

[28] E. Pardoux and P. Protter, Stochastic Volterra Equations with Anticipating Coefficients, Annals of Probability 18 (1990), 1635–1655.

[29] N. Privault, An Extension of the Hitsuda-Itô Formula, Equipe d'Analyse et Probabilités, Université d'Evry-Val d'Essonne, Boulevard des Coquibus, France, 1997.

[30] M. Reed and B. Simon, Methods of Modern Mathematical Physics I: Functional Analysis, Academic Press, 1972.

[31] A. V. Skorokhod, On a generalization of a stochastic integral, Theory Probab. Appl. 20 (1975) 219–233.

[32] S. Watanabe, Some refinements of Donsker's delta function, Pitman Research Notes in Math. Series 310 (1994), 308 - 324, Longman Scientific and Technical.

[33] N. Wiener, The homogeneous chaos, American J. Math. 60 (1938) 897 - 936.

Vita

Said Kalema Ngobi was born in Uganda on December 30, 1960. He finished his undergraduate studies and earned a bachelor’s degree in statistics and applied economics at Makerere University, Uganda, in 1986. He came to Louisiana State

University and earned a master of science degree in 1990. He then went back to

Uganda and worked as a Lecturer at Makerere University up to 1995, when he came back to L.S.U. to pursue a doctoral degree. The degree of Doctor of Philosophy in mathematics will be awarded in December 2000.
