Louisiana State University — LSU Digital Commons
LSU Doctoral Dissertations, Graduate School, 2011

A New Theory of Stochastic Integration
Anuwat Sae-Tang
Louisiana State University and Agricultural and Mechanical College, [email protected]


Recommended Citation: Sae-Tang, Anuwat, "A new theory of stochastic integration" (2011). LSU Doctoral Dissertations. 893. https://digitalcommons.lsu.edu/gradschool_dissertations/893

A NEW THEORY OF STOCHASTIC INTEGRATION

A Dissertation Submitted to the Graduate Faculty of the Louisiana State University and Agricultural and Mechanical College in partial fulfillment of the requirements for the degree of Doctor of Philosophy in The Department of Mathematics

by
Anuwat Sae-Tang
B.Sc. (Hons.), Chulalongkorn University, Thailand, 2005
M.S., Louisiana State University, 2008
August 2011

Acknowledgments

This dissertation would not have been possible without many valuable contributions. First of all, I would like to express my deep gratitude and indebtedness to my dissertation advisor, Professor Hui-Hsiung Kuo, for his help, guidance, and support.

In particular, I really appreciate the consideration and patience he has shown me throughout these years. He is not only an outstanding researcher but also an excellent teacher. He always encourages students to do their best and is genuinely concerned about their understanding. It was absolutely my great pleasure to be a student under his supervision.

I would also like to thank Professors William Adkins, Mark Davidson, Richard Litherland, Padmanabhan Sundar, and Martin Tzanov for their kindness in serving as members of my final examination committee.

My thanks also go to the faculty and staff members of the LSU Department of Mathematics for their help and the very warm environment during my whole time at LSU. I thank all my fellow graduate students for plenty of mathematical discussions, many useful ideas, and their help. I would like to give my deepest appreciation to Benedykt Szozda. He has helped me very much with this dissertation research, and I have enjoyed countless discussions with him regarding the new Ayed-Kuo stochastic integral. I am also indebted to him for proofreading my dissertation and providing many useful comments and suggestions.

It is also my pleasure to thank the Thai community in Louisiana for their support, care, and kind hospitality. In particular, my special gratitude goes to Ms. Kanya Arntson, who has always kindly treated me as her son throughout my time in the U.S. My gratitude also goes to my first roommate, Dr. Keng Wiboonton, who helped me settle down and made me feel at home from the first day I arrived at LSU.

Without financial support from Thailand, my dream of pursuing a Ph.D. degree would have been impossible. I would like to take this opportunity to thank the Office of the Higher Education Commission for providing me with a "Higher Educational Strategic Scholarships for Frontier Research Network" (SFR) scholarship to complete this Ph.D. degree. I would also like to mention all "Office of Educational Affairs" (OEA) staff at the Royal Thai Embassy in Washington, DC, for their hard work in helping me solve many difficult problems during my years of study at LSU.

This paragraph is dedicated to all of my past teachers and my friends in Thailand. I would like to thank all my former teachers and regard all of them as part of my success. Several of them made a strong impact on my life. Among them, let me mention Chutatip Kaewsiri and Rattana Pongtiwattanakul, who were my class advisors in secondary school. I am also thankful to my undergraduate advisor, Professor Kritsana Neammanee, who introduced me to the field of probability. My acknowledgements also go to all my Thai friends for their help, inspiration, and support in completing my work.

Finally, words cannot express my sense of indebtedness to my parents, my three lovely sisters, and all my other relatives for their unconditional love, support, and encouragement. This work is dedicated to them in appreciation of their understanding. I am very glad to see that their hope is finally fulfilled.

Table of Contents

Acknowledgments

Abstract

Chapter 1: Introduction

Chapter 2: Background
  2.1 Brownian Motion
  2.2 Conditional Expectation
  2.3 Martingales

Chapter 3: The Itô Theory of Stochastic Integration
  3.1 Itô Integrals
  3.2 Riemann Sums and Stochastic Integrals
  3.3 Itô's Formula

Chapter 4: The New Stochastic Integral
  4.1 Motivation
  4.2 New Ideas of Ayed and Kuo
  4.3 Simple Itô's Formula for the New Stochastic Integral

Chapter 5: Some Properties of the New Stochastic Integral
  5.1 Near-Martingales
  5.2 Itô Isometry of the New Stochastic Integral
  5.3 Some Formulas for the Integral Computation
  5.4 Generalized Itô's Formula for the New Stochastic Integral

References

Vita

Abstract

In this dissertation, we focus mainly on the further study of the new stochastic integral introduced by Ayed and Kuo [1] in 2008. Several properties of this new stochastic integral are obtained. We first introduce the concept of a near-martingale for non-adapted stochastic processes. This concept is a generalization of the martingale property of adapted stochastic processes in the Itô theory. We prove a special case of the Itô isometry for the stochastic integral of certain instantly independent processes defined in [1]. We obtain some formulas expressing the new stochastic integral in terms of Itô integrals and Riemann integrals. Several generalized versions of Itô's formula for the new stochastic integral (obtained by Ayed and Kuo in [1] and [2]) are given. We also provide some examples to illustrate the ideas.

Chapter 1
Introduction

The theory of stochastic integration, also known as Itô calculus, was first introduced by Kiyoshi Itô in 1942. Itô's original motivation was the construction of Markov diffusion processes directly from their infinitesimal generators by solving stochastic differential equations [9].

The Itô theory of stochastic integration has been extensively studied and widely applied in many scientific fields. The most well-known application is the Black-Scholes model in finance, for which Merton and Scholes won the 1997 Nobel Prize in Economics.

In the Itô theory, the stochastic integral $\int_a^b f(t,\omega)\,dB(t)$ is only defined for an adapted (or non-anticipating) integrand $f(t)$. It is natural to ask how one can define a stochastic integral for a non-adapted integrand. For instance, in 1976, Itô [7] considered the stochastic integral $\int_0^t B(1)\,dB(s)$ for $0 \le t \le 1$. Note that this integral is not defined as an Itô integral because $B(1)$ is not adapted to the filtration $\{\mathcal{F}_t;\, 0 \le t \le 1\}$ with $\mathcal{F}_t = \sigma\{B(s);\, s \le t\}$.

In 1972, Hitsuda [5, 6] defined a stochastic integral for non-adapted integrands and obtained a special case of Itô's formula for anticipating stochastic integrals.

His idea was then used by Skorokhod [17] in 1975 to define such stochastic integrals via the homogeneous chaos expansion. Their stochastic integral can be defined in terms of differentiation. Since 1972, there have been several approaches to defining stochastic integrals for non-adapted integrands, e.g., Buckdahn [3], León and Protter [12], Nualart and Pardoux [13], Pardoux and Protter [15].

In 2008, Ayed and Kuo [1, 2] proposed a new approach to defining stochastic integrals of non-adapted integrands. They first introduced the class of instantly independent stochastic processes, which can be regarded as a counterpart of the adapted stochastic processes in the Itô theory. They then define the stochastic integral of a linear combination of products of instantly independent and adapted processes to be the limit in probability of its Riemann sums. Their main idea is to use the right endpoints of subintervals as the evaluation points for instantly independent processes in the Riemann sum, whereas the left endpoints are still used for adapted processes, as in the Itô theory.

In this dissertation, we will further study this new Ayed-Kuo integral. In Chapter 2, we give a brief review of some basic notions and background from probability theory.

Chapter 3 is devoted to a brief review of the Itô theory of stochastic integration.

In Chapter 4, we describe the idea of the new stochastic integral introduced by Ayed and Kuo [1, 2]. We will describe the motivation behind this new idea, the definition and some examples of the new integral, and a special case of the Itô formula for the new stochastic integrals.

In Chapter 5, we present the main results of this dissertation. Several properties of the Ayed-Kuo integral are obtained. We will introduce a new class of stochastic processes called "near-martingales," which play an important role for stochastic processes associated with the new stochastic integrals of instantly independent stochastic processes. We will prove the Itô isometry for the new integral of a certain class of instantly independent processes, and several formulas for expressing the new stochastic integral in terms of Itô and Riemann integrals. Finally, we will generalize the Itô formula obtained by Ayed and Kuo in [1] and [2].

Chapter 2
Background

In this chapter, we present some basic material from probability theory that will be needed in the dissertation. For more details, see [9].

2.1 Brownian Motion

We will begin this section with the definition of stochastic processes.

Definition 2.1. A stochastic process is a collection of random variables $\{X_t\}_{t \in T}$ defined on a probability space $(\Omega, \mathcal{F}, P)$, where $T$ is an index set.

In other words, a stochastic process can be regarded as a measurable function $X(t, \omega)$ defined on the product space $T \times \Omega$ such that

1. for each $t \in T$, $X(t, \cdot)$ is a random variable;

2. for each fixed $\omega \in \Omega$, $X(\cdot, \omega)$ is a (measurable) function of $t$ (called a sample path).

Remark 2.2. The index set $T$ usually represents time. In the discrete case, $T$ is a subset of $\mathbb{N}$, whereas in the continuous case, $T$ is an interval in $\mathbb{R}$, e.g., $T = [0, \infty)$.

Remark 2.3. We sometimes denote $X(t, \omega)$ by $X_t(\omega)$, $X(t)$, or $X_t$ for convenience.

Example 2.4. Let $\{X_k\}_{k=1}^{\infty}$ be a sequence of independent identically distributed random variables. For each $n \in \mathbb{N}$, define $S_n$ to be the random variable
$$S_n = X_1 + X_2 + \cdots + X_n.$$
Then $\{S_n;\, n \in \mathbb{N}\}$ is a stochastic process, and it is called a random walk. (Here $T = \mathbb{N}$, which is a discrete set, so $S_n$ is an example of a discrete stochastic process.)

One of the most well-known and useful continuous stochastic processes in probability theory is Brownian motion (sometimes called the Wiener process). It is a mathematical model describing the random movement of a small particle suspended in a liquid or gas. It was named after the English botanist Robert Brown, who observed how pollen grains move irregularly when suspended in water.

Nowadays, this mathematical model has real-world applications in many different fields, such as physics and economics. It is also used to model stock market fluctuations. Moreover, Brownian motion has many well-known properties. For example, it is a Markov process, a Gaussian process, a Lévy process, and a martingale.

Let $(\Omega, \mathcal{F}, P)$ be a probability space. Below we give the mathematical definition of Brownian motion.

Definition 2.5. A stochastic process $\{B(t, \omega);\, t \ge 0\}$ is called a Brownian motion if it satisfies the following conditions:

1. $P\{\omega;\, B(0, \omega) = 0\} = 1$. (We say $B(0) = 0$ almost surely.)

2. For any $0 \le s < t$, the random variable $B(t) - B(s)$ is normally distributed with mean $0$ and variance $t - s$, i.e., for any $a < b$,
$$P\{a \le B(t) - B(s) \le b\} = \frac{1}{\sqrt{2\pi(t-s)}} \int_a^b e^{-x^2/2(t-s)}\,dx.$$

3. $B(t, \omega)$ has independent increments, that is, for any $0 \le t_1 < t_2 < \cdots < t_n$, the random variables
$$B(t_1),\; B(t_2) - B(t_1),\; \ldots,\; B(t_n) - B(t_{n-1})$$
are independent.

4. Almost all sample paths of $B(t, \omega)$ are continuous functions, i.e.,
$$P\{\omega;\, B(\cdot, \omega) \text{ is continuous}\} = 1.$$
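The four conditions above translate directly into a simulation recipe: starting from $B(0) = 0$, add independent mean-zero Gaussian increments whose variance equals the time step. The following sketch (ours, not from the text; the function name and step counts are arbitrary choices) generates a discretized path:

```python
import random

def brownian_path(t_max=1.0, n_steps=1000, seed=0):
    """Discretized Brownian path on [0, t_max]: B(0) = 0 (condition 1),
    increments are independent (condition 3) and N(0, dt)-distributed
    (condition 2); joining the sampled points continuously stands in
    for the path continuity of condition 4."""
    rng = random.Random(seed)
    dt = t_max / n_steps
    path = [0.0]
    for _ in range(n_steps):
        path.append(path[-1] + rng.gauss(0.0, dt ** 0.5))
    return path
```

For instance, the empirical variance of the increments of `brownian_path()` should be close to the step size `dt`.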

Example 2.6. Let $X_{\delta,h}(t)$ be a random walk starting at $0$ with jumps $h$ and $-h$ equally likely at times $\delta, 2\delta, 3\delta, \ldots$ (See Section 1.2 of [9].)

Assume that $h^2 = \delta$. Then, for each $t \ge 0$, it can be shown that the limit
$$B(t) = \lim_{\delta \to 0} X_{\delta,\sqrt{\delta}}(t)$$
is a Brownian motion.
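Example 2.6 can be probed numerically: with $h = \sqrt{\delta}$, each jump contributes variance $\delta$, so $X_{\delta,\sqrt{\delta}}(t)$ has variance $(t/\delta)\cdot\delta = t$, matching $\operatorname{Var} B(t) = t$. A hedged sketch (names and sample sizes are illustrative, not from the text):

```python
import random

def scaled_walk(t, delta, seed):
    """One sample of X_{delta, sqrt(delta)}(t): jumps of +/- sqrt(delta),
    equally likely, at times delta, 2*delta, ..."""
    rng = random.Random(seed)
    h = delta ** 0.5
    return sum(h if rng.random() < 0.5 else -h
               for _ in range(int(t / delta)))

def empirical_variance(t=1.0, delta=0.01, n=2000):
    """Sample variance of X_{delta, sqrt(delta)}(t) over n independent
    walks; should be close to t."""
    xs = [scaled_walk(t, delta, seed=k) for k in range(n)]
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / n
```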

Example 2.7. In this example, we provide a construction of Brownian motion. Let $C[0,1]$ be the Banach space of real-valued continuous functions $g$ on $[0,1]$ with $g(0) = 0$, equipped with the norm $\|g\|_\infty = \sup_{t \in [0,1]} |g(t)|$.

Note that a cylindrical set (or cylinder set) $A$ of $C[0,1]$ is a set of the form
$$A = \big\{g \in C[0,1];\, \big(g(t_1), g(t_2), \ldots, g(t_n)\big) \in U\big\} \tag{2.1}$$
where $0 < t_1 < t_2 < \cdots < t_n \le 1$ and $U$ is a Borel subset of $\mathbb{R}^n$.

Let $\mathcal{R}$ denote the collection of all cylindrical subsets of $C[0,1]$. The collection $\mathcal{R}$ defined in this way is a field, but it is not a $\sigma$-field.

Consider the mapping $\mu_0$ defined on $\mathcal{R}$ and given by
$$\mu_0(A) = \int_U \prod_{i=1}^n \frac{1}{\sqrt{2\pi(t_i - t_{i-1})}}\, \exp\!\left(-\frac{(u_i - u_{i-1})^2}{2(t_i - t_{i-1})}\right) du_1\, du_2 \cdots du_n,$$
where $A$ is as in Equation (2.1) and $t_0 = u_0 = 0$. It can easily be seen that $\mu_0$ is a finitely additive mapping from the field $\mathcal{R}$ into $[0,1]$. It was also proved by Norbert Wiener in 1923 that the mapping $\mu_0$ is $\sigma$-additive. (For more details of the proof, see reference [18].) Thus, by Carathéodory's extension theorem, $\mu_0$ can be extended to a $\sigma$-additive measure $\mu$ defined on $\sigma(\mathcal{R})$, the $\sigma$-field generated by $\mathcal{R}$. (This extension is unique since $\mu_0(C[0,1]) = 1$, so $\mu_0 : \mathcal{R} \to [0,1]$ is $\sigma$-finite.) Moreover, it turns out that the $\sigma$-field $\sigma(\mathcal{R})$ is the same as the Borel field $\mathcal{B}(C[0,1])$ of $C[0,1]$. Thus, $\big(C[0,1], \mathcal{B}(C[0,1]), \mu\big)$ is a probability space. It is called the Wiener space, and the measure $\mu$ is called the Wiener measure. Under this construction, we can show that the stochastic process
$$B(t, g) = g(t), \quad 0 \le t \le 1, \quad g \in C[0,1],$$
is, in fact, a Brownian motion. (Here $\Omega = C[0,1]$ and $T = [0,1]$.)

Let $B(t)$ be a Brownian motion. The following theorems give some well-known properties of $B(t)$.

Theorem 2.8. For $s, t \ge 0$, we have $E\big[B(s)B(t)\big] = \min\{s, t\}$.

Theorem 2.9. (Translation invariance) For fixed $t_0 \ge 0$, the stochastic process $\tilde{B}(t) = B(t + t_0) - B(t_0)$ is also a Brownian motion.

The above property says that a Brownian motion "restarted" at any moment is also a Brownian motion.
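The covariance formula of Theorem 2.8 can be checked by Monte Carlo, building $B(t)$ from $B(s)$ plus an independent increment. A sketch (ours; the sample size and tolerance are arbitrary):

```python
import random

def estimate_cov(s, t, n=4000, seed=1):
    """Monte Carlo estimate of E[B(s)B(t)] for s < t, using
    B(s) ~ N(0, s) and the independent increment
    B(t) - B(s) ~ N(0, t - s)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        bs = rng.gauss(0.0, s ** 0.5)
        bt = bs + rng.gauss(0.0, (t - s) ** 0.5)
        total += bs * bt
    return total / n
```

For example, `estimate_cov(0.3, 0.7)` should be close to $\min\{0.3, 0.7\} = 0.3$.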

Theorem 2.10. (Rescaling invariance) For any real number $\lambda > 0$, the stochastic process $\tilde{B}(t) = \frac{1}{\sqrt{\lambda}}\, B(\lambda t)$ is also a Brownian motion.

Theorem 2.11. The sample path of a Brownian motion is nowhere differentiable almost surely.

Theorem 2.12. (Quadratic variation of Brownian motion) Let $\Delta_n = \{a = t_0 < t_1 < \cdots < t_n = b\}$ be a partition of the interval $[a,b]$ and let $\|\Delta_n\| = \max_{1 \le i \le n}(t_i - t_{i-1})$. Then
$$\sum_{i=1}^n \big(B(t_i) - B(t_{i-1})\big)^2 \longrightarrow b - a \quad \text{in } L^2(\Omega) \text{ as } \|\Delta_n\| \to 0.$$
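Before the proof, here is a quick numerical illustration of Theorem 2.12 (a sketch of ours; the step count is an arbitrary choice): the sum of squared increments over a fine uniform partition of $[0,1]$ should be close to $1$.

```python
import random

def quadratic_variation(a=0.0, b=1.0, n_steps=20000, seed=2):
    """Sum of squared Brownian increments over a uniform partition of
    [a, b]; by Theorem 2.12 this is close to b - a when the mesh is small."""
    rng = random.Random(seed)
    dt = (b - a) / n_steps
    return sum(rng.gauss(0.0, dt ** 0.5) ** 2 for _ in range(n_steps))
```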

Since we will be using this property of Brownian motion quite often in Chapters 4 and 5, we provide a short proof of this theorem.

Proof. Note that $b - a = \sum_{i=1}^n (t_i - t_{i-1})$. Let
$$\Phi_n = \sum_{i=1}^n \Big[\big(B(t_i) - B(t_{i-1})\big)^2 - (t_i - t_{i-1})\Big] = \sum_{i=1}^n X_i, \tag{2.2}$$
where $X_i = \big(B(t_i) - B(t_{i-1})\big)^2 - (t_i - t_{i-1})$.

To show that $\sum_{i=1}^n \big(B(t_i) - B(t_{i-1})\big)^2$ converges to $b - a$ in $L^2(\Omega)$, it suffices to show that $E(\Phi_n^2) \to 0$ as $\|\Delta_n\| \to 0$. Thus consider
$$\Phi_n^2 = \sum_{i,j=1}^n X_i X_j = \sum_{i=1}^n X_i^2 + \sum_{i \ne j} X_i X_j.$$
Therefore,
$$E\big(\Phi_n^2\big) = \sum_{i=1}^n E\big(X_i^2\big) + \sum_{i \ne j} E(X_i X_j). \tag{2.3}$$
For $i \ne j$, we have $E(X_i X_j) = 0$ since $B(t)$ has independent increments and $E\big[(B(t) - B(s))^2\big] = |t - s|$.

On the other hand, note that $E\big[(B(t) - B(s))^4\big] = 3|t - s|^2$. Then
$$\begin{aligned} E\big(X_i^2\big) &= E\Big[\big(B(t_i) - B(t_{i-1})\big)^4 - 2(t_i - t_{i-1})\big(B(t_i) - B(t_{i-1})\big)^2 + (t_i - t_{i-1})^2\Big] \\ &= 3(t_i - t_{i-1})^2 - 2(t_i - t_{i-1})^2 + (t_i - t_{i-1})^2 \\ &= 2(t_i - t_{i-1})^2. \end{aligned}$$

Therefore, from Equation (2.3), we have
$$\begin{aligned} E\big(\Phi_n^2\big) &= \sum_{i=1}^n 2(t_i - t_{i-1})^2 \\ &\le 2\|\Delta_n\| \sum_{i=1}^n (t_i - t_{i-1}) \\ &= 2(b - a)\|\Delta_n\| \to 0 \quad \text{as } \|\Delta_n\| \to 0. \end{aligned}$$

This shows that $\Phi_n$ converges to $0$ in $L^2(\Omega)$. Thus from Equation (2.2), we see that $\sum_{i=1}^n \big(B(t_i) - B(t_{i-1})\big)^2$ converges to $b - a$ in $L^2(\Omega)$. $\square$

Moreover, by similar arguments, one can also prove the following theorem.

Theorem 2.13. Let $\Delta_n = \{a = t_0 < t_1 < \cdots < t_n = b\}$ be a partition of the interval $[a,b]$. Then $\sum_{i=1}^n \big(B(t_i) - B(t_{i-1})\big)^3$ converges to $0$ in $L^2(\Omega)$ as $\|\Delta_n\| \to 0$.

2.2 Conditional Expectation

Here we present an important concept in probability theory called the conditional expectation.

Definition 2.14. Let $(\Omega, \mathcal{F}, P)$ be a probability space and $X \in L^1(\Omega, \mathcal{F})$, that is, $E|X| = \int_\Omega |X|\,dP < \infty$. Suppose $\mathcal{G}$ is a $\sigma$-field such that $\mathcal{G} \subset \mathcal{F}$. The conditional expectation of $X$ given $\mathcal{G}$ is defined to be the unique random variable $Y$ (up to $P$-measure $1$) satisfying the following conditions:

(1) $Y$ is $\mathcal{G}$-measurable;

(2) $\int_A Y\,dP = \int_A X\,dP$ for all $A \in \mathcal{G}$.

We usually use the notation $E[X|\mathcal{G}]$, $E(X|\mathcal{G})$, or $E\{X|\mathcal{G}\}$ to denote this random variable $Y$.

Remark 2.15. The Radon-Nikodym theorem guarantees the existence and uniqueness of the conditional expectation.

Theorem 2.16. (Radon-Nikodym Theorem) Let $(\Omega, \mathcal{F}, P)$ be a measure space. Let $\mu$ be a signed measure on $\mathcal{F}$. (This means that $\mu : \mathcal{F} \to (-\infty, \infty)$ is a $\sigma$-additive measure such that $\mu(\emptyset) = 0$.) Suppose that $\mu$ is absolutely continuous with respect to $P$ (i.e., $\mu(A) = 0$ for any $P$-null set $A$). Then there exists a unique integrable function $f$ such that
$$\mu(A) = \int_A f\,dP \quad \text{for any } A \in \mathcal{F}.$$

An important difference between $E(X|\mathcal{G})$ and $E(X)$ one should notice is that the conditional expectation $E(X|\mathcal{G})$ is a random variable, whereas the (regular) expectation $E(X)$ is just a real number.
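On a finite sample space, conditions (1) and (2) of Definition 2.14 pin down $E[X|\mathcal{G}]$ explicitly: when $\mathcal{G}$ is generated by a partition, the conditional expectation averages $X$ over each atom. A small illustrative sketch (the coin-flip example is ours, not from the text):

```python
def conditional_expectation(x, p, atoms):
    """E[X | G] on a finite sample space, where G is generated by the
    partition `atoms`: on each atom, Y equals the probability-weighted
    average of X, so Y is constant on atoms (G-measurable, condition (1))
    and integrates like X over every atom (condition (2))."""
    y = {}
    for atom in atoms:
        mass = sum(p[w] for w in atom)
        avg = sum(x[w] * p[w] for w in atom) / mass
        for w in atom:
            y[w] = avg
    return y

# Two fair coin flips; X = number of heads; G = information in the first flip.
outcomes = ["HH", "HT", "TH", "TT"]
p = {w: 0.25 for w in outcomes}
x = {w: w.count("H") for w in outcomes}
y = conditional_expectation(x, p, atoms=[["HH", "HT"], ["TH", "TT"]])
```

Here $E[X|\mathcal{G}]$ takes the value $1.5$ when the first flip is heads and $0.5$ otherwise: a random variable, not a number, exactly as remarked above.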

Next, we list some properties of the conditional expectation.

Theorem 2.17. Let $(\Omega, \mathcal{F}, P)$ be a probability space and $X \in L^1(\Omega, \mathcal{F})$. Suppose that $\mathcal{G}$ is a $\sigma$-field such that $\mathcal{G} \subset \mathcal{F}$. Then each of the following properties holds almost surely.

(a) $E\big(E[X|\mathcal{G}]\big) = EX$.

(b) If $X$ is $\mathcal{G}$-measurable, then $E[X|\mathcal{G}] = X$.

(c) If $X$ and $\mathcal{G}$ are independent, then $E[X|\mathcal{G}] = EX$.

(d) If $Y$ is $\mathcal{G}$-measurable and $E|XY| < \infty$, then $E[XY|\mathcal{G}] = Y\,E[X|\mathcal{G}]$.

(e) (Tower property) If $\mathcal{H}$ is a sub-$\sigma$-field of $\mathcal{G}$, then $E[X|\mathcal{H}] = E\big[E[X|\mathcal{G}]\,\big|\,\mathcal{H}\big]$.

(f) If $X, Y \in L^1(\Omega)$ and $X \le Y$, then $E[X|\mathcal{G}] \le E[Y|\mathcal{G}]$.

(g) $\big|E[X|\mathcal{G}]\big| \le E\big[|X|\,\big|\,\mathcal{G}\big]$.

(h) $E[aX + bY|\mathcal{G}] = aE[X|\mathcal{G}] + bE[Y|\mathcal{G}]$ for any $a, b \in \mathbb{R}$ and $X, Y \in L^1(\Omega)$.

(i) (Conditional Jensen's inequality) Let $X \in L^1(\Omega)$ and suppose $\varphi$ is a convex function on $\mathbb{R}$. Then
$$\varphi\big(E[X|\mathcal{G}]\big) \le E\big[\varphi(X)\,\big|\,\mathcal{G}\big].$$

(j) (Conditional monotone convergence theorem) Let $0 \le X_1 \le X_2 \le \cdots \le X_n \le \cdots$ and assume that $X = \lim_{n\to\infty} X_n$ in $L^1(\Omega)$. Then
$$E[X|\mathcal{G}] = \lim_{n\to\infty} E[X_n|\mathcal{G}].$$

(k) (Conditional Fatou's lemma) Let $X_n \ge 0$, $X_n \in L^1(\Omega)$, $n = 1, 2, \ldots$, and assume that $\liminf_{n\to\infty} X_n \in L^1(\Omega)$. Then
$$E\Big[\liminf_{n\to\infty} X_n \,\Big|\, \mathcal{G}\Big] \le \liminf_{n\to\infty} E[X_n|\mathcal{G}].$$

(l) (Conditional Lebesgue dominated convergence theorem) Assume that $|X_n| \le Y$, $Y \in L^1(\Omega)$, and $X = \lim_{n\to\infty} X_n$ exists almost surely. Then
$$E[X|\mathcal{G}] = \lim_{n\to\infty} E[X_n|\mathcal{G}].$$

2.3 Martingales

As mentioned before, one of the useful properties of Brownian motion is that it is a martingale. Below we provide more details about the concept of martingales, beginning with the definitions of a filtration and of the adaptedness of a stochastic process.

Definition 2.18. Let $(\Omega, \mathcal{F}, P)$ be a probability space and let $T$ be an interval in $\mathbb{R}$ or the set of positive integers. A filtration on $T$ is an increasing family $\{\mathcal{F}_t;\, t \in T\}$ of sub-$\sigma$-fields of $\mathcal{F}$. In other words, for any $s, t \in T$, the inequality $s \le t$ implies $\mathcal{F}_s \subset \mathcal{F}_t \subset \mathcal{F}$.

Next, we define the adaptedness of a stochastic process. This concept is very important for the study of the new stochastic integral, which we will discuss later in Chapters 4 and 5.

Definition 2.19. A stochastic process $X_t$, $t \in T$, is said to be adapted to the filtration $\{\mathcal{F}_t;\, t \in T\}$ if for each $t \in T$, the random variable $X_t$ is $\mathcal{F}_t$-measurable.

Remark 2.20. We assume that all $\sigma$-fields $\mathcal{F}_t$ are complete. This means that if $A \in \mathcal{F}_t$ with $P(A) = 0$, then for any subset $B$ of $A$, $B \in \mathcal{F}_t$.

Now we are ready to define a martingale.

Definition 2.21. Let $\{X_t;\, t \in T\}$ be a stochastic process with $E|X_t| < \infty$ for all $t \in T$. Then $X_t$ is called a martingale with respect to the filtration $\{\mathcal{F}_t;\, t \in T\}$ if

(1) $X_t$ is adapted to the filtration $\{\mathcal{F}_t;\, t \in T\}$, and

(2) for $s \le t$ in $T$, $E[X_t|\mathcal{F}_s] = X_s$ almost surely.

Remark 2.22. If the filtration is not specified, it is understood to be the one given by $\mathcal{F}_t = \sigma\{X(s);\, s \le t\}$, the $\sigma$-field generated by all $X_s$ with $s \le t$.

Remark 2.23. If the equality in condition (2) of the definition of a martingale is replaced by $\ge$ (or $\le$), then $X_t$ is called a submartingale (or supermartingale) with respect to $\{\mathcal{F}_t\}$.

Example 2.24. Let $T = \{1, 2, \ldots\}$ and let $\{X_k\}_{k=1}^{\infty}$ be a sequence of independent Bernoulli random variables given by $P(X_k = 1) = P(X_k = -1) = \frac{1}{2}$. Hence $E(X_k) = 1 \cdot \frac{1}{2} + (-1) \cdot \frac{1}{2} = 0$ for each $k$.

Then the random walk $S_n = X_1 + \cdots + X_n$ is a martingale with respect to the filtration $\{\mathcal{F}_n;\, n \in T\}$, where $\mathcal{F}_n = \sigma\{S_m;\, m \le n\} = \sigma\{X_1, X_2, \ldots, X_n\}$.

Proof. Consider, for any $m < n$,
$$E(S_n|\mathcal{F}_m) = E\Big(\sum_{i=1}^n X_i \,\Big|\, \mathcal{F}_m\Big) = \sum_{i=1}^m E(X_i|\mathcal{F}_m) + \sum_{j=m+1}^n E(X_j|\mathcal{F}_m). \tag{2.4}$$
For each $1 \le i \le m$, $X_i$ is $\mathcal{F}_m$-measurable, which implies that $E(X_i|\mathcal{F}_m) = X_i$. Also, $X_j$ is independent of $\mathcal{F}_m$ for each $m+1 \le j \le n$, so $E(X_j|\mathcal{F}_m) = E(X_j)$ for all $m+1 \le j \le n$. Thus Equation (2.4) becomes
$$E(S_n|\mathcal{F}_m) = \sum_{i=1}^m X_i + \sum_{j=m+1}^n E(X_j) = S_m + 0 = S_m.$$
Therefore, $S_n$ is a martingale with respect to the filtration $\{\mathcal{F}_n;\, n \in T\}$. $\square$
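The identity $E[S_n|\mathcal{F}_m] = S_m$ can also be probed by Monte Carlo through the defining property of conditional expectation: $E[S_n \mathbf{1}_A] = E[S_m \mathbf{1}_A]$ for events $A \in \mathcal{F}_m$. A sketch of ours with $A = \{X_1 = +1\}$ (all sizes are arbitrary choices):

```python
import random

def restricted_means(m=3, n=10, trials=20000, seed=3):
    """Monte Carlo estimates of E[S_m ; A] and E[S_n ; A] for the
    event A = {X_1 = +1}, which lies in F_m; their equality is the
    defining property of E[S_n | F_m] = S_m for the +/-1 random walk."""
    rng = random.Random(seed)
    acc_m = acc_n = 0.0
    for _ in range(trials):
        steps = [1 if rng.random() < 0.5 else -1 for _ in range(n)]
        if steps[0] == 1:                  # the event A, decided by F_1
            acc_m += sum(steps[:m])
            acc_n += sum(steps)
    return acc_m / trials, acc_n / trials
```

Both estimates should be close to $E[S_m \mathbf{1}_A] = \frac{1}{2}$, since on $A$ only the first step has nonzero mean.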

Example 2.25. The Brownian motion $B(t)$ is a martingale (as we stated at the beginning of this section).

Proof. For $s \le t$, consider
$$E\big[B(t)\big|\mathcal{F}_s\big] = E\big[B(t) - B(s) + B(s)\,\big|\,\mathcal{F}_s\big] = E\big[B(t) - B(s)\,\big|\,\mathcal{F}_s\big] + E\big[B(s)\big|\mathcal{F}_s\big]. \tag{2.5}$$
Since $B(t) - B(s)$ is independent of $\mathcal{F}_s$ and $B(s)$ is $\mathcal{F}_s$-measurable, Equation (2.5) becomes
$$E\big[B(t)\big|\mathcal{F}_s\big] = E\big[B(t) - B(s)\big] + B(s) = 0 + B(s) = B(s).$$
Thus $B(t)$ is a martingale (with respect to $\{\mathcal{F}_t\}$, where $\mathcal{F}_t = \sigma\{B(s);\, s \le t\}$). $\square$

Example 2.26. Let $B(t)$ be a Brownian motion. Then $B(t)^2$ is a submartingale.

Proof. For $s \le t$,
$$E\big[B(t)^2\big|\mathcal{F}_s\big] = E\big[\big(B(t) - B(s) + B(s)\big)^2\,\big|\,\mathcal{F}_s\big] = E\big[\big(B(t) - B(s)\big)^2\,\big|\,\mathcal{F}_s\big] + 2E\big[B(s)\big(B(t) - B(s)\big)\,\big|\,\mathcal{F}_s\big] + E\big[B(s)^2\big|\mathcal{F}_s\big]. \tag{2.6}$$
Again, since $B(t) - B(s)$ is independent of $\mathcal{F}_s$ and $B(s)$ is $\mathcal{F}_s$-measurable, Equation (2.6) becomes
$$\begin{aligned} E\big[B(t)^2\big|\mathcal{F}_s\big] &= E\big[\big(B(t) - B(s)\big)^2\big] + 2B(s)E\big[B(t) - B(s)\,\big|\,\mathcal{F}_s\big] + B(s)^2 \\ &= (t - s) + 2B(s)E\big[B(t) - B(s)\big] + B(s)^2 \\ &= (t - s) + 0 + B(s)^2 \\ &\ge B(s)^2. \end{aligned}$$
Thus $B(t)^2$ is a submartingale (with respect to the filtration $\{\mathcal{F}_t\}$, where $\mathcal{F}_t = \sigma\{B(s);\, s \le t\}$). $\square$

Chapter 3
The Itô Theory of Stochastic Integration

In this chapter, we review the stochastic integral in the Itô theory and its properties. Namely, we consider integrals of the form
$$\int_a^b f(t, \omega)\,dB(t, \omega),$$
where $B(t)$ is a Brownian motion and $f$ is an adapted stochastic process. We then state the relationship between the stochastic integral and the limit of its Riemann sums. The well-known Itô formula is presented at the end.

Note that the discussion here will be very brief. For more details, the reader can refer to the book [9].

3.1 Itô Integrals

Let $B(t)$ be a fixed Brownian motion, let $[a,b]$ be a finite interval in $\mathbb{R}$, and let $\{\mathcal{F}_t;\, a \le t \le b\}$ be a filtration such that

(1) for each $t$, $B(t)$ is $\mathcal{F}_t$-measurable;

(2) for any $s \le t$, the random variable $B(t) - B(s)$ is independent of the $\sigma$-field $\mathcal{F}_s$.

Remark 3.1. The natural Brownian filtration $\mathcal{F}_t^B = \sigma\{B(s);\, s \le t\}$ is an example of a filtration satisfying the above conditions.

An Itô integral is a stochastic integral of the form
$$I(f) = \int_a^b f(t, \omega)\,dB(t),$$
where $f(t, \omega)$ is a stochastic process such that $f(t)$ is adapted to the filtration $\{\mathcal{F}_t\}$ and
$$\int_a^b E|f(t, \omega)|^2\,dt < \infty.$$

For convenience, let $L^2_{ad}\big([a,b] \times \Omega\big)$ denote the space of all stochastic processes $f(t, \omega)$, $a \le t \le b$, satisfying the two conditions above. Note that by Fubini's theorem, the condition $\int_a^b E|f(t, \omega)|^2\,dt < \infty$ can also be stated as
$$E \int_a^b |f(t, \omega)|^2\,dt < \infty.$$
In other words, if $f \in L^2_{ad}\big([a,b] \times \Omega\big)$, the stochastic integral $\int_a^b f(t, \omega)\,dB(t)$ is an Itô integral. For convenience, we sometimes denote it by $\int_a^b f(t)\,dB(t)$. For details on how to rigorously define the Itô integral, the reader should refer to the book [9]. Only the properties of Itô integrals that will be used later are presented here.

Theorem 3.2. (Linearity of the Itô integral) For $a, b \in \mathbb{R}$ and $f, g \in L^2_{ad}\big([a,b] \times \Omega\big)$,
$$I(af + bg) = aI(f) + bI(g).$$

The following theorem gives the mean and variance of the Itô integral.

Theorem 3.3. (Zero expectation and Itô isometry) Suppose $f \in L^2_{ad}\big([a,b] \times \Omega\big)$. Then the Itô integral $I(f) = \int_a^b f(t)\,dB(t)$ is a random variable with $E\big[I(f)\big] = 0$ and
$$E\big[|I(f)|^2\big] = \int_a^b E|f(t)|^2\,dt. \tag{3.1}$$
By this theorem, the map $I : L^2_{ad}\big([a,b] \times \Omega\big) \to L^2(\Omega)$ is an isometry. Equation (3.1) is often referred to as the "Itô isometry."
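The isometry (3.1) can be illustrated numerically with left-endpoint Riemann sums (the discretization anticipates Theorem 3.17 below). For the integrand $f(t) = B(t)^2$ on $[0,1]$, treated in Example 3.4 below, the right-hand side is $\int_0^1 3t^2\,dt = 1$. A hedged sketch of ours; the sample counts are arbitrary:

```python
import random

def ito_integral_b2(n_steps=200, seed=0):
    """One left-endpoint Riemann-sum approximation of the Ito integral
    of B(t)^2 over [0, 1]."""
    rng = random.Random(seed)
    dt = 1.0 / n_steps
    b = total = 0.0
    for _ in range(n_steps):
        db = rng.gauss(0.0, dt ** 0.5)
        total += b * b * db        # integrand evaluated at the left endpoint
        b += db
    return total

def mean_and_variance(trials=8000):
    """Sample mean and variance over independent replications; should be
    close to 0 and to the Ito-isometry value 1, respectively."""
    xs = [ito_integral_b2(seed=k) for k in range(trials)]
    m = sum(xs) / trials
    return m, sum((x - m) ** 2 for x in xs) / trials
```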

Example 3.4. Consider $f(t, \omega) = B(t, \omega)^2$. Then $f(t)$ is adapted to the filtration $\{\mathcal{F}_t;\, a \le t \le b\}$ and
$$\int_a^b E\big|B(t)^2\big|^2\,dt = \int_a^b E|B(t)|^4\,dt = \int_a^b 3t^2\,dt = b^3 - a^3 < \infty.$$
Thus $f(t) \in L^2_{ad}\big([a,b] \times \Omega\big)$, so $\int_a^b f(t)\,dB(t) = \int_a^b B(t)^2\,dB(t)$ is an Itô integral. Also, by Theorem 3.3, $\int_a^b B(t)^2\,dB(t)$ is a random variable with mean $0$ and variance $\int_a^b E|B(t)^2|^2\,dt = b^3 - a^3$.

Example 3.5. The stochastic integral $\int_0^1 \sqrt{t}\,e^{B(t)}\,dB(t)$ is an Itô integral because $\sqrt{t}\,e^{B(t)}$ is adapted to the filtration $\{\mathcal{F}_t;\, 0 \le t \le 1\}$ and
$$E\big|\sqrt{t}\,e^{B(t)}\big|^2 = E\big[t\,e^{2B(t)}\big] = t\,E\big[e^{2B(t)}\big] = t\,e^{\frac{1}{2}(2)^2 t} = t\,e^{2t},$$
which implies that $\int_0^1 E|\sqrt{t}\,e^{B(t)}|^2\,dt = \int_0^1 t\,e^{2t}\,dt = \frac{1 + e^2}{4} < \infty$. Moreover, by Theorem 3.3, $\int_0^1 \sqrt{t}\,e^{B(t)}\,dB(t)$ is a random variable with mean $0$ and variance $\int_0^1 E|\sqrt{t}\,e^{B(t)}|^2\,dt = \frac{1 + e^2}{4}$.

Example 3.6. Consider $f(t) = \operatorname{sgn} B(t)$. Since
$$\int_a^b E\big|\operatorname{sgn} B(t)\big|^2\,dt = \int_a^b E(1)\,dt = b - a < \infty,$$
it follows that $f(t) = \operatorname{sgn} B(t) \in L^2_{ad}\big([a,b] \times \Omega\big)$. By Theorem 3.3, the random variable $\int_a^b \operatorname{sgn} B(t)\,dB(t)$ has mean $0$ and variance $\int_a^b E|\operatorname{sgn} B(t)|^2\,dt = b - a$.

Suppose that $f \in L^2_{ad}\big([a,b] \times \Omega\big)$. Then for any $t \in [a,b]$, $\int_a^t E|f(s)|^2\,ds \le \int_a^b E|f(s)|^2\,ds < \infty$. So $f \in L^2_{ad}\big([a,t] \times \Omega\big)$ and the integral $\int_a^t f(s)\,dB(s)$ is well defined. Consider the stochastic process given by

$$X_t = \int_a^t f(s)\,dB(s), \quad a \le t \le b.$$
Note that, by Theorem 3.3, we have
$$E|X_t|^2 = E\Big|\int_a^t f(s)\,dB(s)\Big|^2 = \int_a^t E|f(s)|^2\,ds \le \int_a^b E|f(s)|^2\,ds < \infty.$$
Then, by the Cauchy-Schwarz inequality, $E|X_t| \le \big(E|X_t|^2\big)^{1/2} < \infty$. Hence, for each $t$, the random variable $X_t$ is integrable and we can consider its martingale property.

The next two theorems state the martingale and continuity properties of the stochastic process $X_t$, which is called the stochastic process associated with $f$.

Theorem 3.7. (Martingale property) Let $f \in L^2_{ad}\big([a,b] \times \Omega\big)$. Then the stochastic process
$$X_t = \int_a^t f(s)\,dB(s), \quad a \le t \le b,$$
is a martingale with respect to the filtration $\{\mathcal{F}_t;\, a \le t \le b\}$.

Example 3.8. The stochastic processes $\int_a^t B(s)^2\,dB(s)$ and $\int_a^t \sqrt{s}\,e^{B(s)}\,dB(s)$ are martingales with respect to $\{\mathcal{F}_t\}$.

Theorem 3.9. (Continuity property) Suppose $f \in L^2_{ad}\big([a,b] \times \Omega\big)$. Then the stochastic process
$$X_t = \int_a^t f(s)\,dB(s), \quad a \le t \le b,$$
is continuous, i.e., almost all its sample paths are continuous functions on $[a,b]$.

In particular, consider the case when $f(t)$ is just a deterministic function (it does not depend on $\omega$) in $L^2[a,b]$, the Hilbert space of real-valued square integrable functions on $[a,b]$. Observe that $f(t)$ is $\{\mathcal{F}_t\}$-adapted and $\int_a^b E|f(t)|^2\,dt = \int_a^b |f(t)|^2\,dt$ is finite because $f \in L^2[a,b]$. Therefore, $f$ is also in $L^2_{ad}\big([a,b] \times \Omega\big)$. Thus, by the above discussion, we have the following inclusion:
$$L^2[a,b] \subset L^2_{ad}\big([a,b] \times \Omega\big).$$
In other words, for any $f \in L^2[a,b]$, the integral $\int_a^b f(t)\,dB(t)$ is also an Itô integral. Note that when $f \in L^2[a,b]$, we call the stochastic integral $\int_a^b f(t)\,dB(t)$ a Wiener integral.

Example 3.10. Since $f(t) = t \sin\frac{1}{t} \in L^2[0,1]$, it follows that $\int_0^1 t \sin\frac{1}{t}\,dB(t)$ is a Wiener integral.

Since all Wiener integrals are Itô integrals, all of the above properties of Itô integrals (Theorem 3.2 - Theorem 3.9) apply to Wiener integrals as well. However, the Wiener integral has a special property that general Itô integrals do not have. For $f(t, \omega) \in L^2_{ad}\big([a,b] \times \Omega\big)$, we know only that the Itô integral $\int_a^b f(t, \omega)\,dB(t)$ is a random variable with mean $0$ and variance $\int_a^b E|f(t)|^2\,dt$. However, if $f \in L^2[a,b]$, we know more than that. The Wiener integral $\int_a^b f(t)\,dB(t)$ is not just any random variable, but a Gaussian random variable with mean $0$ and variance $\int_a^b |f(t)|^2\,dt$, as stated in the following theorem.

Theorem 3.11. For each $f \in L^2[a,b]$, the Wiener integral $\int_a^b f(t)\,dB(t)$ is a Gaussian random variable with mean $0$ and variance $\|f\|^2_{L^2[a,b]} = \int_a^b |f(t)|^2\,dt$.
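The reason behind Theorem 3.11 is already visible at the discrete level: for a deterministic integrand, the Riemann sum is a linear combination of independent Gaussian increments, hence exactly Gaussian at every discretization level. A sketch of ours with $f(t) = t$ on $[0,1]$, for which the variance is $\int_0^1 t^2\,dt = \frac{1}{3}$ (sizes are illustrative):

```python
import random

def wiener_integral_t(n_steps=100, seed=0):
    """Riemann-sum approximation of the Wiener integral of f(t) = t
    over [0, 1]: a finite linear combination of independent Gaussians,
    hence exactly Gaussian."""
    rng = random.Random(seed)
    dt = 1.0 / n_steps
    return sum((i * dt) * rng.gauss(0.0, dt ** 0.5) for i in range(n_steps))

def sample_mean_var(trials=8000):
    """Sample mean and variance over independent replications; should be
    close to 0 and 1/3, respectively (up to discretization bias)."""
    xs = [wiener_integral_t(seed=k) for k in range(trials)]
    m = sum(xs) / trials
    return m, sum((x - m) ** 2 for x in xs) / trials
```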

Example 3.12. Since $\int_1^2 |t^2|^2\,dt = \frac{2^5}{5} - \frac{1}{5} = \frac{31}{5} < \infty$, we have $f(t) = t^2 \in L^2[1,2]$. Then, by Theorem 3.11, the Wiener integral $\int_1^2 t^2\,dB(t)$ is a Gaussian random variable with mean $0$ and variance $\int_1^2 t^4\,dt = \frac{31}{5}$.

Note that the Itô integral can be extended to a larger class of integrands. Also called an Itô integral, it can be defined for stochastic processes $f(t, \omega)$ satisfying the following conditions:

(a) $f(t)$ is adapted to the filtration $\{\mathcal{F}_t\}$;

(b) $\int_a^b |f(t)|^2\,dt < \infty$ almost surely.

We will use the notation $\mathcal{L}_{ad}\big(\Omega, L^2[a,b]\big)$ to denote the space of all stochastic processes $f(t, \omega)$ satisfying conditions (a) and (b) above.

If $f \in L^2_{ad}\big([a,b] \times \Omega\big)$, then $f$ is adapted with respect to $\{\mathcal{F}_t\}$ and $E\int_a^b |f(t)|^2\,dt = \int_a^b E\big(|f(t)|^2\big)\,dt < \infty$. It follows that $\int_a^b |f(t)|^2\,dt < \infty$ almost surely, which implies $f \in \mathcal{L}_{ad}\big(\Omega, L^2[a,b]\big)$. Therefore, we have a larger class of integrands $f(t, \omega)$ for the stochastic integral $\int_a^b f(t)\,dB(t)$. Namely,
$$L^2_{ad}\big([a,b] \times \Omega\big) \subset \mathcal{L}_{ad}\big(\Omega, L^2[a,b]\big).$$
The essential difference between these two spaces is the possible lack of integrability (with respect to $\omega$) of the integrand $f(t, \omega)$ for $f \in \mathcal{L}_{ad}\big(\Omega, L^2[a,b]\big)$, which one can see more clearly in the following example.

Example 3.13. Consider the stochastic process $f(t) = e^{B(t)^2}$. Note that
$$E\big(|f(t)|^2\big) = E\big[e^{2B(t)^2}\big] = \int_{-\infty}^{\infty} e^{2x^2}\, \frac{1}{\sqrt{2\pi t}}\, e^{-\frac{x^2}{2t}}\,dx = \frac{1}{\sqrt{2\pi t}} \int_{-\infty}^{\infty} e^{-\frac{x^2}{2}\left(\frac{1-4t}{t}\right)}\,dx = \begin{cases} \dfrac{1}{\sqrt{1-4t}}, & \text{if } 0 < t < \frac{1}{4}, \\[4pt] \infty, & \text{if } t \ge \frac{1}{4}, \end{cases}$$
so that $\int_0^1 E\big(|f(t)|^2\big)\,dt = \infty$ and hence $f \notin L^2_{ad}\big([0,1] \times \Omega\big)$. On the other hand, since almost all sample paths of $B(t)$ are continuous,
$$\int_0^1 |f(t)|^2\,dt = \int_0^1 e^{2B(t)^2}\,dt < \infty \quad \text{almost surely},$$
so $f \in \mathcal{L}_{ad}\big(\Omega, L^2[0,1]\big)$.

Hence $f(t) = e^{B(t)^2}$ is an example of a stochastic process that belongs to $\mathcal{L}_{ad}\big(\Omega, L^2[0,1]\big)$ but does not belong to $L^2_{ad}\big([0,1] \times \Omega\big)$.

As discussed before, a stochastic process $f \in \mathcal{L}_{ad}\big(\Omega, L^2[a,b]\big)$ may lack the integrability property. So the stochastic integral $\int_a^b f(t)\,dB(t)$ is a random variable that may have infinite expectation.

In general, for $f \in \mathcal{L}_{ad}\big(\Omega, L^2[a,b]\big)$, the associated stochastic process
$$X_t = \int_a^t f(s)\,dB(s)$$
may not have finite expectation. Thus it may fail to be a martingale. However, there is another concept, involving stopping times, called the local martingale, and we have a theorem similar to Theorem 3.7 stating that $X_t = \int_a^t f(s)\,dB(s)$ is a local martingale if $f \in \mathcal{L}_{ad}\big(\Omega, L^2[a,b]\big)$.

Theorem 3.14. Let $f \in \mathcal{L}_{ad}\big(\Omega, L^2[a,b]\big)$. Then the stochastic process
$$X_t = \int_a^t f(s)\,dB(s), \quad a \le t \le b,$$
is a local martingale with respect to the filtration $\{\mathcal{F}_t;\, a \le t \le b\}$.

See [9] for a more detailed discussion of stopping times and local martingales.

Example 3.15. Let $f(t) = e^{B(t)^2}$. We have already seen in Example 3.13 that $f \in \mathcal{L}_{ad}\big(\Omega, L^2[0,1]\big)$. So, by Theorem 3.14, the stochastic process
$$X_t = \int_0^t e^{B(s)^2}\,dB(s), \quad 0 \le t \le 1,$$
is a local martingale but not a martingale.

We also have an analogue of Theorem 3.9.

Theorem 3.16. Let $f \in \mathcal{L}_{ad}\big(\Omega, L^2[a,b]\big)$. Then the stochastic process
$$X_t = \int_a^t f(s)\,dB(s), \quad a \le t \le b,$$
has a continuous realization.

3.2 Riemann Sums and Stochastic Integrals

In this section, we review the relationship between the stochastic integrals of continuous adapted stochastic processes and the convergence of their Riemann-like sums evaluated at the left endpoints of the subintervals.

Note that if $f(t)$ is a continuous $\{\mathcal{F}_t\}$-adapted stochastic process, then $f \in \mathcal{L}_{ad}\big(\Omega, L^2[a,b]\big)$. Moreover, its stochastic integral $\int_a^b f(t)\,dB(t)$ can be expressed as a limit of its Riemann-like sums evaluated at the left endpoints.

Theorem 3.17. Suppose $f$ is a continuous $\{\mathcal{F}_t\}$-adapted stochastic process. Then $f \in \mathcal{L}_{ad}\big(\Omega, L^2[a,b]\big)$ and
$$\int_a^b f(t)\,dB(t) = \lim_{\|\Delta_n\| \to 0} \sum_{i=1}^n f(t_{i-1})\big(B(t_i) - B(t_{i-1})\big) \quad \text{in probability},$$
where $\Delta_n = \{a = t_0 < t_1 < \cdots < t_n = b\}$ is a partition of $[a,b]$ and $\|\Delta_n\| = \max_{1 \le i \le n}(t_i - t_{i-1})$.

The proof of this theorem can be found in [9].
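As a numerical illustration of Theorem 3.17 (a sketch of ours with arbitrary sizes): for $f = B$ on $[0,1]$, the left-endpoint sum along a single simulated path should approach $\frac{1}{2}\big(B(1)^2 - 1\big)$, the closed form derived in Example 3.19 below.

```python
import random

def left_sum_vs_closed_form(n_steps=4000, seed=5):
    """Left-endpoint Riemann sum for the integral of B dB over [0, 1]
    along one simulated path, together with the closed form
    (B(1)^2 - 1)/2 derived in Example 3.19."""
    rng = random.Random(seed)
    dt = 1.0 / n_steps
    b = total = 0.0
    for _ in range(n_steps):
        db = rng.gauss(0.0, dt ** 0.5)
        total += b * db            # f(t_{i-1}) * (B(t_i) - B(t_{i-1}))
        b += db
    return total, 0.5 * (b * b - 1.0)
```

The gap between the two returned values is $\frac{1}{2}\big(1 - \sum (\Delta B)^2\big)$, which shrinks as the mesh refines, by the quadratic variation of Theorem 2.12.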

More specifically, if $f \in L^2_{ad}\big([a,b] \times \Omega\big)$, we know that the Itô integral $\int_a^b f(t)\,dB(t)$ belongs to $L^2(\Omega)$, and we have an analogue of Theorem 3.17 for $f \in L^2_{ad}\big([a,b] \times \Omega\big)$ with a stronger mode of convergence. Namely, the above Riemann sums of $f$ converge in $L^2(\Omega)$ instead of just in probability, but this requires the slightly stronger assumption that $E\big[f(t)f(s)\big]$ is a continuous function of $t$ and $s$.

Theorem 3.18. Suppose $f \in L^2_{ad}\big([a,b] \times \Omega\big)$ and assume that $E\big[f(t)f(s)\big]$ is a continuous function of $t$ and $s$. Then
$$\int_a^b f(t)\,dB(t) = \lim_{\|\Delta_n\| \to 0} \sum_{i=1}^n f(t_{i-1})\big(B(t_i) - B(t_{i-1})\big) \quad \text{in } L^2(\Omega),$$
where $\Delta_n = \{a = t_0 < t_1 < \cdots < t_n = b\}$ is a partition of $[a,b]$.

Example 3.19. We will evaluate the integral $\int_0^t B(s)\,dB(s)$ for $t \ge 0$.

Let $\Delta_n = \{0 = s_0 < s_1 < \cdots < s_n = t\}$ be a partition of $[0,t]$. Note that $E[B(t)B(s)] = \min\{s,t\}$, which is a continuous function of $s$ and $t$. Then Theorem 3.18 yields that $\int_0^t B(s)\,dB(s)$ is the $L^2$-limit of the sum
$$\sum_{i=1}^n B(s_{i-1})\big(B(s_i) - B(s_{i-1})\big).$$
To find this limit in $L^2(\Omega)$, consider
$$B(s_i)^2 = \big(B(s_i) - B(s_{i-1})\big)^2 + 2B(s_{i-1})\big(B(s_i) - B(s_{i-1})\big) + B(s_{i-1})^2,$$
which can be rewritten as
$$B(s_{i-1})\big(B(s_i) - B(s_{i-1})\big) = \frac12\big(B(s_i)^2 - B(s_{i-1})^2\big) - \frac12\big(B(s_i) - B(s_{i-1})\big)^2.$$
Thus, we have
$$\sum_{i=1}^n B(s_{i-1})\big(B(s_i) - B(s_{i-1})\big) = \frac12\big(B(t)^2 - B(0)^2\big) - \frac12\sum_{i=1}^n \big(B(s_i) - B(s_{i-1})\big)^2.$$
By the quadratic variation of Brownian motion (Theorem 2.12), the last sum converges to $t$ in $L^2(\Omega)$. So we conclude that
$$\sum_{i=1}^n B(s_{i-1})\big(B(s_i) - B(s_{i-1})\big) \longrightarrow \frac12\big(B(t)^2 - t\big) \quad \text{in } L^2(\Omega) \text{ as } \|\Delta_n\| \to 0.$$
Therefore,
$$\int_0^t B(s)\,dB(s) = \frac12\big(B(t)^2 - t\big).$$

3.3 Itô's Formula

In ordinary calculus, we deal with deterministic functions. One of the rules for differentiation that we use most often is the chain rule.

The chain rule states that if $f$ and $g$ are differentiable, then the composite function $f \circ g$ is also differentiable, with derivative
$$\frac{d}{dt}(f \circ g)(t) = \frac{d}{dt} f\big(g(t)\big) = f'\big(g(t)\big)g'(t),$$
which can be rewritten in integral form as
$$f\big(g(t)\big) - f\big(g(a)\big) = \int_a^t f'\big(g(s)\big)g'(s)\,ds. \tag{3.2}$$
On the other hand, in Itô calculus we deal with stochastic processes, which are random functions. For example, if $g(t)$ is the stochastic process $B(t)$, then we have the composite function $f\big(B(t)\big)$. Note that almost all sample paths of the Brownian motion $B(t)$ are nowhere differentiable, so the expression $\frac{d}{dt}f\big(B(t)\big) = f'\big(B(t)\big)B'(t)$ has absolutely no meaning.

However, by rewriting $B'(s)\,ds$ as the integrator $dB(s)$ of the Itô integral, we obtain a similar integral version of the chain rule in the Itô calculus. It is called Itô's formula (or sometimes Itô's lemma). Later, we will see that Itô's formula can be generalized to larger classes of stochastic processes.

Here, we present several versions of the Itô formula without proofs. For details of the proofs, the reader may consult [9].

Let $B(t)$, $a \le t \le b$, be a Brownian motion. We begin with the simplest form of the Itô formula.

Theorem 3.20. Let $f$ be a $C^2$-function; that is, $f$ is twice differentiable and $f''$ is continuous. Then
$$f\big(B(t)\big) - f\big(B(a)\big) = \int_a^t f'\big(B(s)\big)\,dB(s) + \frac12\int_a^t f''\big(B(s)\big)\,ds, \tag{3.3}$$
where the first integral on the right-hand side is an Itô integral as defined in Section 3.1, and the second integral is a Riemann integral for each sample path of $B(s)$.

Remark 3.21. Compared to Equation (3.2), the Itô formula in Equation (3.3) has an extra term $\frac12\int_a^t f''\big(B(s)\big)\,ds$. This is a consequence of the nonzero quadratic variation of Brownian motion, which is the main point distinguishing Itô calculus from ordinary calculus.

Example 3.22. Let $f(x) = x^2$. Then, by Theorem 3.20, we get
$$B(t)^2 - B(a)^2 = 2\int_a^t B(s)\,dB(s) + (t - a).$$
Thus we have
$$\int_a^t B(s)\,dB(s) = \frac12\big(B(t)^2 - B(a)^2 - (t - a)\big). \tag{3.4}$$
Putting $a = 0$, Equation (3.4) reduces to
$$\int_0^t B(s)\,dB(s) = \frac12\big(B(t)^2 - t\big), \tag{3.5}$$
which is exactly the same as what we obtained in Example 3.19.
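As a hedged numerical illustration (ours, not part of the dissertation), Theorem 3.20 can be sanity-checked on a simulated path. The sketch below uses $f(x) = x^4$, a choice of ours with $f'(x) = 4x^3$ and $\frac12 f''(x) = 6x^2$, so the formula predicts $B(t)^4 = 4\int_0^t B(s)^3\,dB(s) + 6\int_0^t B(s)^2\,ds$; the residual of the discretized identity should shrink as the partition is refined.

```python
import random

def ito_formula_residual(n, t=1.0, seed=0):
    """Discretized check of Theorem 3.20 with f(x) = x^4:
    B(t)^4 should equal 4*int_0^t B^3 dB + 6*int_0^t B^2 ds."""
    rng = random.Random(seed)
    dt = t / n
    B = 0.0
    ito_sum = 0.0       # left-endpoint sum approximating int B^3 dB
    riemann_sum = 0.0   # left-endpoint sum approximating int B^2 ds
    for _ in range(n):
        dB = rng.gauss(0.0, dt ** 0.5)
        ito_sum += B ** 3 * dB
        riemann_sum += B ** 2 * dt
        B += dB
    return B ** 4 - (4 * ito_sum + 6 * riemann_sum)

print(abs(ito_formula_residual(200000)))  # residual shrinks as n grows
```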

Example 3.23. We will evaluate the integral $\int_0^t B(s)^4\,dB(s)$. To evaluate this integral using Itô's formula, we let $f(x) = \frac{x^5}{5}$, so that $f'(x) = x^4$. Hence, by Theorem 3.20, we have
$$\frac{B(t)^5}{5} - 0 = \int_0^t B(s)^4\,dB(s) + \frac12\int_0^t 4B(s)^3\,ds.$$
Thus, we have
$$\int_0^t B(s)^4\,dB(s) = \frac{B(t)^5}{5} - 2\int_0^t B(s)^3\,ds.$$
Next, we give a more general Itô formula. Namely, we consider a function $f(t,x)$ of two variables $t$ and $x$ and set $x = B(t)$ to get a stochastic process $f\big(t,B(t)\big)$. Observe that $t$ appears in two places: once as a variable of $f$, and once inside the Brownian motion. For the first $t$ we can apply ordinary calculus; however, we need Itô calculus to take care of the second $t$ in $B(t)$. The following theorem is a slightly generalized Itô formula for the function $f(t,x)$.

Theorem 3.24. Let $f(t,x)$ be a continuous function with continuous partial derivatives $\frac{\partial f}{\partial t}$, $\frac{\partial f}{\partial x}$, and $\frac{\partial^2 f}{\partial x^2}$. Then
$$f\big(t,B(t)\big) - f\big(a,B(a)\big) = \int_a^t \frac{\partial f}{\partial x}\big(s,B(s)\big)\,dB(s) + \int_a^t \Big[\frac{\partial f}{\partial t}\big(s,B(s)\big) + \frac12\,\frac{\partial^2 f}{\partial x^2}\big(s,B(s)\big)\Big]\,ds.$$
Again, the first integral is an Itô integral and the second is a Riemann integral for each sample path of the Brownian motion $B(t)$.

Example 3.25. We will find $\int_0^t sB(s)\,dB(s)$ using Itô's formula. First, let $f(t,x) = \frac{tx^2}{2}$, so that $\frac{\partial f}{\partial x}(t,x) = tx$. Then
$$\frac{\partial f}{\partial x}(t,x) = tx, \qquad \frac{\partial^2 f}{\partial x^2}(t,x) = t, \qquad \frac{\partial f}{\partial t}(t,x) = \frac{x^2}{2}.$$

By Theorem 3.24, we have
$$\frac{tB(t)^2}{2} - 0 = \int_0^t sB(s)\,dB(s) + \int_0^t \Big[\frac{B(s)^2}{2} + \frac12\,s\Big]\,ds = \int_0^t sB(s)\,dB(s) + \frac12\int_0^t B(s)^2\,ds + \frac{t^2}{4}.$$
Thus, we obtain
$$\int_0^t sB(s)\,dB(s) = \frac12\,tB(t)^2 - \frac{t^2}{4} - \frac12\int_0^t B(s)^2\,ds.$$

Example 3.26. Let $f(t,x) = x^3 - t$. Note that
$$\frac{\partial f}{\partial x}(t,x) = 3x^2, \qquad \frac{\partial^2 f}{\partial x^2}(t,x) = 6x, \qquad \frac{\partial f}{\partial t}(t,x) = -1.$$

Thus, by Theorem 3.24, we have
$$B(t)^3 - t - \big(B(a)^3 - a\big) = \int_a^t 3B(s)^2\,dB(s) + \int_a^t \Big[-1 + \frac12\cdot 6B(s)\Big]\,ds = 3\int_a^t B(s)^2\,dB(s) - (t - a) + 3\int_a^t B(s)\,ds,$$
which can be simplified to
$$\int_a^t B(s)^2\,dB(s) = \frac13\big(B(t)^3 - B(a)^3\big) - \int_a^t B(s)\,ds.$$
Putting $a = 0$, we obtain
$$\int_0^t B(s)^2\,dB(s) = \frac{B(t)^3}{3} - \int_0^t B(s)\,ds. \tag{3.6}$$
Before we move to the most general form of Itô's formula, let us introduce a special class of stochastic processes called "Itô processes."

Recall that $\mathcal{L}_{ad}(\Omega, L^2[a,b])$ is the space consisting of all $\mathcal{F}_t$-adapted stochastic processes $f(t)$ such that $\int_a^b |f(t)|^2\,dt < \infty$ almost surely. We also denote by $\mathcal{L}_{ad}(\Omega, L^1[a,b])$ the class of all $\mathcal{F}_t$-adapted stochastic processes $f(t)$ such that $\int_a^b |f(t)|\,dt < \infty$ almost surely.

Definition 3.27. An Itô process is a stochastic process of the form
$$X_t = X_a + \int_a^t f(s)\,dB(s) + \int_a^t g(s)\,ds, \qquad a \le t \le b, \tag{3.7}$$
where $X_a$ is $\mathcal{F}_a$-measurable, $f \in \mathcal{L}_{ad}(\Omega, L^2[a,b])$, and $g \in \mathcal{L}_{ad}(\Omega, L^1[a,b])$.

A more convenient way to write Equation (3.7) is the following "stochastic differential":
$$dX_t = f(t)\,dB(t) + g(t)\,dt, \qquad a \le t \le b. \tag{3.8}$$
One must note that this expression has no meaning by itself because of the nondifferentiability of the Brownian paths. The differential form in (3.8) is just a symbolic expression for the integral equation in (3.7).

Example 3.28. Let $\theta$ be a $C^2$-function on $[a,b]$. By the Itô formula (Equation (3.3)), we have
$$\theta\big(B(t)\big) = \theta\big(B(a)\big) + \int_a^t \theta'\big(B(s)\big)\,dB(s) + \frac12\int_a^t \theta''\big(B(s)\big)\,ds, \qquad a \le t \le b.$$
Then $X_t = \theta\big(B(t)\big)$ is an Itô process. In other words, every $C^2$-function of the Brownian motion is an Itô process.

The following theorem is the most general form of the Itô formula, namely, the Itô formula for functions of Itô processes.

Theorem 3.29. Let $X_t$ be an Itô process given by
$$X_t = X_a + \int_a^t f(s)\,dB(s) + \int_a^t g(s)\,ds, \qquad a \le t \le b.$$
Suppose that $\theta(t,x)$ is a continuous function with continuous partial derivatives $\frac{\partial\theta}{\partial t}$, $\frac{\partial\theta}{\partial x}$, and $\frac{\partial^2\theta}{\partial x^2}$. Then $\theta(t,X_t)$ is also an Itô process, and
$$\theta(t,X_t) = \theta(a,X_a) + \int_a^t \frac{\partial\theta}{\partial x}(s,X_s)f(s)\,dB(s) + \int_a^t \Big[\frac{\partial\theta}{\partial t}(s,X_s) + \frac{\partial\theta}{\partial x}(s,X_s)g(s) + \frac12\,\frac{\partial^2\theta}{\partial x^2}(s,X_s)f(s)^2\Big]\,ds. \tag{3.9}$$
One of the best ways to obtain Equation (3.9) is through the Taylor expansion in stochastic differential form together with the following useful "Itô table":

$$\begin{array}{c|cc} \times & dB(t) & dt \\ \hline dB(t) & dt & 0 \\ dt & 0 & 0 \end{array}$$

TABLE 3.1. Itô table.
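The Itô table lends itself to a tiny symbolic implementation. The following sketch (our illustration, not from [9]) encodes a stochastic differential as a dict of coefficients on $dB$ and $dt$ and multiplies two differentials according to Table 3.1, recovering $(dX_t)^2 = f(t)^2\,dt$ for $dX_t = f\,dB + g\,dt$.

```python
def ito_mult(d1, d2):
    """Multiply two stochastic differentials using the Ito table:
    dB*dB = dt, dB*dt = dt*dB = dt*dt = 0.
    A differential is a dict {'dB': coeff, 'dt': coeff}."""
    return {'dB': 0.0, 'dt': d1['dB'] * d2['dB']}

# dX = f dB + g dt with sample coefficients f = 3, g = 7:
dX = {'dB': 3.0, 'dt': 7.0}
print(ito_mult(dX, dX))  # {'dB': 0.0, 'dt': 9.0}, i.e. (dX)^2 = f^2 dt
```

Note how the $g\,dt$ part contributes nothing to the product, exactly as the table dictates.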

First, we apply the Taylor expansion of $\theta(t,X_t)$ to first order in $dt$ and to second order in $dX_t$ to get
$$d\theta(t,X_t) = \frac{\partial\theta}{\partial t}(t,X_t)\,dt + \frac{\partial\theta}{\partial x}(t,X_t)\,dX_t + \frac12\,\frac{\partial^2\theta}{\partial x^2}(t,X_t)\,(dX_t)^2. \tag{3.10}$$
Then we apply the above Itô table to get $(dX_t)^2 = f(t)^2\,dt$. Finally, we obtain
$$\begin{aligned} d\theta(t,X_t) &= \frac{\partial\theta}{\partial t}(t,X_t)\,dt + \frac{\partial\theta}{\partial x}(t,X_t)\big(f(t)\,dB(t) + g(t)\,dt\big) + \frac12\,\frac{\partial^2\theta}{\partial x^2}(t,X_t)f(t)^2\,dt \\ &= \frac{\partial\theta}{\partial x}(t,X_t)f(t)\,dB(t) + \Big[\frac{\partial\theta}{\partial t}(t,X_t) + \frac{\partial\theta}{\partial x}(t,X_t)g(t) + \frac12\,\frac{\partial^2\theta}{\partial x^2}(t,X_t)f(t)^2\Big]\,dt. \end{aligned}$$

Example 3.30. Let $f \in \mathcal{L}_{ad}(\Omega, L^2[0,1])$. Consider the Itô process
$$X_t = \int_0^t f(s)\,dB(s) - \frac12\int_0^t f(s)^2\,ds, \qquad 0 \le t \le 1.$$
We will show that $Y_t = e^{X_t}$ is a solution of the stochastic differential equation
$$dY_t = f(t)Y_t\,dB(t), \quad 0 \le t \le 1, \qquad Y_0 = 1. \tag{3.11}$$
First note that $Y_0 = e^{X_0} = e^0 = 1$. Now we apply the Taylor expansion as in Equation (3.10) to $\theta(x) = e^x$ to second order, and use the Itô table to get
$$\begin{aligned} dY_t = d\theta(X_t) &= \theta'(X_t)\,dX_t + \frac12\,\theta''(X_t)\,(dX_t)^2 \\ &= e^{X_t}\Big(f(t)\,dB(t) - \frac12 f(t)^2\,dt\Big) + \frac12\,e^{X_t}f(t)^2\,dt \\ &= e^{X_t}f(t)\,dB(t) = f(t)Y_t\,dB(t). \end{aligned}$$
Thus we conclude that $Y_t = e^{X_t}$ is a solution of the stochastic differential equation (3.11).
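As an illustration of Example 3.30 (ours, not in the original), take the constant integrand $f(t) = 1$, so that $X_t = B(t) - t/2$ and $Y_t = e^{B(t) - t/2}$. The sketch below integrates $dY = Y\,dB$ with a simple Euler-Maruyama scheme along a simulated path and compares the result with this closed form; the step count and seed are arbitrary choices.

```python
import random
import math

def exponential_sde_gap(n, seed=0):
    """Euler-Maruyama for dY = Y dB, Y_0 = 1 (i.e., f(t) = 1 in (3.11)),
    compared with the closed form Y_t = exp(B(t) - t/2) on the same path."""
    rng = random.Random(seed)
    dt = 1.0 / n
    B, Y = 0.0, 1.0
    for _ in range(n):
        dB = rng.gauss(0.0, dt ** 0.5)
        Y += Y * dB          # Euler-Maruyama step for dY = Y dB
        B += dB
    return abs(Y - math.exp(B - 0.5))   # gap at t = 1

print(exponential_sde_gap(100000))  # gap shrinks as the step size decreases
```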

Moreover, the Itô formula can be generalized to the multi-dimensional case. We will not state that formula precisely here; for the multi-dimensional Itô formula, see the book [9].

As a special case of the multi-dimensional Itô formula, one obtains the following useful Itô product formula.

Let $X_t$ and $Y_t$ be two Itô processes defined for $a \le t \le b$. Then the integral form of the Itô product formula is given by
$$X_t Y_t = X_a Y_a + \int_a^t Y_s\,dX_s + \int_a^t X_s\,dY_s + \int_a^t dX_s\,dY_s. \tag{3.12}$$
The Itô product formula can also be expressed in differential form as
$$d(X_t Y_t) = Y_t\,dX_t + X_t\,dY_t + (dX_t)(dY_t). \tag{3.13}$$
Note that the Itô product formula in Equation (3.13) looks similar to the product rule in ordinary calculus, but it contains the extra term $(dX_t)(dY_t)$.

More specifically, suppose $X_t$ and $Y_t$ are given by
$$dX_t = f(t)\,dB(t) + \xi(t)\,dt \quad \text{and} \quad dY_t = g(t)\,dB(t) + \eta(t)\,dt, \qquad a \le t \le b.$$
Then, by Equation (3.12) and the Itô table, we have
$$\begin{aligned} X_t Y_t &= X_a Y_a + \int_a^t Y_s\big(f(s)\,dB(s) + \xi(s)\,ds\big) + \int_a^t X_s\big(g(s)\,dB(s) + \eta(s)\,ds\big) \\ &\qquad + \int_a^t \big(f(s)\,dB(s) + \xi(s)\,ds\big)\big(g(s)\,dB(s) + \eta(s)\,ds\big) \\ &= X_a Y_a + \int_a^t f(s)Y_s\,dB(s) + \int_a^t \xi(s)Y_s\,ds + \int_a^t g(s)X_s\,dB(s) + \int_a^t \eta(s)X_s\,ds + \int_a^t f(s)g(s)\,ds \\ &= X_a Y_a + \int_a^t \big[f(s)Y_s + g(s)X_s\big]\,dB(s) + \int_a^t \big[\xi(s)Y_s + \eta(s)X_s + f(s)g(s)\big]\,ds. \end{aligned}$$
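A quick numerical check of the product formula (our illustration, not from the dissertation): take $X_t = t$ and $Y_t = B(t)$, so $f = 0$, $\xi = 1$, $g = 1$, $\eta = 0$, the cross term $f(s)g(s)$ vanishes, and we get the integration-by-parts identity $tB(t) = \int_0^t s\,dB(s) + \int_0^t B(s)\,ds$.

```python
import random

def parts_gap(n, seed=0):
    """Check of the product formula with X_t = t, Y_t = B(t):
    since dX dY = 0 here, t*B(t) = int_0^t s dB(s) + int_0^t B(s) ds at t = 1."""
    rng = random.Random(seed)
    dt = 1.0 / n
    B, s = 0.0, 0.0
    s_dB = 0.0   # left-endpoint sum for int s dB
    B_ds = 0.0   # left-endpoint sum for int B ds
    for _ in range(n):
        dB = rng.gauss(0.0, dt ** 0.5)
        s_dB += s * dB
        B_ds += B * dt
        B += dB
        s += dt
    return abs(1.0 * B - (s_dB + B_ds))

print(parts_gap(100000))  # near zero for a fine partition
```

In fact the discrete gap here is exactly $\Delta t\,|B(1)|$, the telescoping remainder of the cross terms $\Delta s\,\Delta B$.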

Chapter 4
The New Stochastic Integral

In Chapter 3, all stochastic integrals $I(f) = \int_a^b f(t)\,dB(t)$ were defined only for $\{\mathcal{F}_t\}$-adapted stochastic processes $f(t)$. In this chapter, we explain Ayed and Kuo's new idea for defining the new stochastic integral. This new integral can be regarded as an extension of the Itô integral because it is also defined for some non-adapted integrands. More details about this new idea can be found in [1] and [2].

Throughout this chapter, let $B(t)$, $t \ge 0$, be a Brownian motion and let $\mathcal{F}_t = \sigma\{B(s);\ s \le t\}$.

4.1 Motivation

We begin this section with some history of anticipating stochastic integration. Then we review some previous approaches to defining stochastic integrals for non-adapted integrands, before turning to Ayed and Kuo's new idea.

In 1976, Itô raised a very interesting question at the International Symposium on Stochastic Differential Equations in Kyoto. The question was how one should define $\int_0^1 B(1)\,dB(t)$, which is not an Itô integral because the integrand $B(1)$ is not adapted to $\{\mathcal{F}_t;\ t \ge 0\}$.

In [7], Itô provided his idea for defining this anticipating stochastic integral. His idea is to enlarge the filtration so that the integrand $B(1)$ becomes adapted. Namely, let $\mathcal{G}_t = \sigma\{\mathcal{F}_t, B(1)\}$, the $\sigma$-field generated by $\mathcal{F}_t$ and $B(1)$. By doing this, the stochastic process $B(t)$ is no longer a Brownian motion with respect to the new filtration $\{\mathcal{G}_t\}$.

However, the process $B(t)$ can still be decomposed as
$$B(t) = \Big(B(t) - \int_0^t \frac{B(1) - B(u)}{1 - u}\,du\Big) + \int_0^t \frac{B(1) - B(u)}{1 - u}\,du,$$
which shows that $B(t)$ is a quasimartingale with respect to the filtration $\{\mathcal{G}_t\}$. He then defines $\int_0^1 B(1)\,dB(t)$ as a stochastic integral with respect to a quasimartingale. The result of this approach is
$$\int_0^1 B(1)\,dB(t) = B(1)^2. \tag{4.1}$$
Note that this stochastic integral fails to have zero expectation, one of the important properties of Itô integrals.

After that, there were many approaches to defining stochastic integrals with non-adapted integrands. One of the notable approaches is via white noise theory. For those interested in white noise theory, see the book [8] for more details.

Although the white noise approach provides an extension of the stochastic integral to non-adapted integrands, this approach to anticipating stochastic integration has several disadvantages:

• It requires a large amount of complicated background from white noise theory.

• The computation of the white noise integral involving the S-transform is usually very difficult, even for very simple examples.

• There is no available characterization theorem for generalized functions to be realized as random variables in $L^p(\mathcal{S}'(\mathbb{R}), \mu)$ for some $p > 1$; i.e., we do not know when a white noise integral is a random variable and when it is just a generalized function.

• It lacks a probabilistic interpretation; e.g., it is unknown how to deal with convergence in probability in terms of the S-transform.

More details on the extension of Itô integrals to anticipating stochastic processes via this approach can be found in Chapter 13 of [8].

Nevertheless, in 2008, Ayed and Kuo introduced a new approach to defining stochastic integrals with anticipating integrands. Their idea is not only much simpler than the white noise approach, but it also has a probabilistic interpretation. We provide more details about their idea in the next section.

4.2 New Ideas of Ayed and Kuo

In this section, we describe Ayed and Kuo's idea of how to define the new stochastic integrals. Namely, this new stochastic integral is defined for a special class of anticipating stochastic processes. Inspired by Itô's original idea for defining $\int_0^1 B(1)\,dB(t)$, their idea came from a very simple observation: instead of decomposing the integrator $B(t)$ as in Itô's approach, they focused on decomposing the integrands. The following simple examples demonstrate such decompositions.

Example 4.1. The anticipating stochastic process $B(1)$ can be decomposed as
$$B(1) = \big(B(1) - B(t)\big) + B(t).$$

Example 4.2. Consider another anticipating stochastic process, $B(1)^2$. This anticipating integrand can be decomposed as
$$B(1)^2 = \big(B(1) - B(t)\big)^2 + 2B(t)\big(B(1) - B(t)\big) + B(t)^2. \tag{4.2}$$

Example 4.3. For $n \in \mathbb{N}$, it follows from the binomial theorem that $B(1)^n$ can be decomposed as
$$B(1)^n = \big(B(1) - B(t) + B(t)\big)^n = \sum_{k=0}^n \binom{n}{k}\big(B(1) - B(t)\big)^k B(t)^{n-k}.$$

Example 4.4. The stochastic process $e^{B(1)}$ can be written as
$$e^{B(1)} = e^{B(1) - B(t)}\,e^{B(t)}. \tag{4.3}$$

In each of the above examples, the decomposition is a linear combination of products of an adapted part (e.g., $B(t)$, $B(t)^k$, $e^{B(t)}$) and an anticipating part with a special property (e.g., $B(1) - B(t)$, $\big(B(1) - B(t)\big)^k$, $e^{B(1) - B(t)}$). This special property inspired Ayed and Kuo to define a new class of stochastic processes, called "instantly independent" processes.

Definition 4.5. A stochastic process $\varphi(t)$, $a \le t \le T$, is said to be instantly independent with respect to the filtration $\{\mathcal{F}_t\}$ if $\varphi(t)$ and $\mathcal{F}_t$ are independent for each $t$.

Example 4.6. For $0 \le t \le 1$, the following stochastic processes are all instantly independent with respect to $\{\mathcal{F}_t\}$: (i) $B(1) - B(t)$; (ii) $[B(1) - B(t)]^n$, $n \in \mathbb{N}$; (iii) $e^{B(1) - B(t)}$; and (iv) $\int_t^1 h(s)\,dB(s)$, where $h(s)$ is a deterministic function in $L^2[0,1]$.

A useful property of this class of stochastic processes is given in Lemma 4.7.

Lemma 4.7. If a stochastic process $\varphi(t)$ is both adapted and instantly independent with respect to the filtration $\{\mathcal{F}_t\}$, then $\varphi(t)$ must be a deterministic function.

By this lemma, one can regard the class of instantly independent stochastic processes as a counterpart to the class of adapted stochastic processes in the Itô theory.

From the above examples, we see that many anticipating stochastic processes can be decomposed into a sum of products of Itô parts (adapted processes) and counterparts (instantly independent processes). This turns out to be a key idea of Ayed and Kuo's approach to defining the new stochastic integral. Namely, to use their idea to evaluate an anticipating stochastic integral, one needs to

1. keep the filtration $\{\mathcal{F}_t\}$ and the integrator $B(t)$ the same;

2. decompose the integrand into a sum of products of adapted stochastic processes and instantly independent stochastic processes;

3. evaluate each stochastic integral of a product of an adapted stochastic process and an instantly independent stochastic process.

Hence the next question we need to answer is how one can define a stochastic integral of a product of an adapted process and an instantly independent process.

Recall from Theorem 3.17 in the previous chapter that the stochastic integral of a continuous $\{\mathcal{F}_t\}$-adapted process can be defined as the limit in probability of its Riemann-like sums evaluated at the left endpoints.

Another key idea of this new approach is motivated by Theorem 3.17. Namely, Ayed and Kuo define the stochastic integral of a process that is a product of an adapted stochastic process and an instantly independent stochastic process as the limit in probability of its Riemann sums with fixed evaluation points. More specifically, they use the left endpoints as the evaluation points for the adapted parts and evaluate the instantly independent parts at the right endpoints of the subintervals.

Definition 4.8. For an adapted stochastic process $f(t)$ and an instantly independent stochastic process $\varphi(t)$, the new stochastic integral of $f(t)\varphi(t)$ is defined to be the limit
$$I(f\varphi) = \int_a^b f(t)\varphi(t)\,dB(t) = \lim_{\|\Delta_n\| \to 0} \sum_{i=1}^n f(t_{i-1})\varphi(t_i)\big(B(t_i) - B(t_{i-1})\big),$$
provided that the limit exists in probability.

Remark 4.9. Note that, by the above definition, if $f(t)$ is continuous and $\varphi(t) = 1$, then
$$I(f) = \int_a^b f(t)\,dB(t) = \lim_{\|\Delta_n\| \to 0} \sum_{i=1}^n f(t_{i-1})\big(B(t_i) - B(t_{i-1})\big).$$
We see that this new integral reduces to the Itô integral by Theorem 3.17 of the previous chapter. This is why it can be regarded as an extension of the Itô integral.

In general, for a stochastic process $F(t) = \sum_{k=1}^N f_k(t)\varphi_k(t)$, where the $f_k(t)$ are adapted and the $\varphi_k(t)$ are instantly independent, the stochastic integral $\int_a^b F(t)\,dB(t)$ is defined as follows.

Definition 4.10. Let $F(t)$ be a stochastic process given by
$$F(t) = \sum_{k=1}^N f_k(t)\varphi_k(t),$$
where the $f_k(t)$ are adapted and the $\varphi_k(t)$ are instantly independent. Then the stochastic integral of $F(t)$ is defined to be
$$\int_a^b F(t)\,dB(t) = \sum_{k=1}^N \int_a^b f_k(t)\varphi_k(t)\,dB(t),$$
where, for each $k$, the integral $\int_a^b f_k(t)\varphi_k(t)\,dB(t)$ is defined as in Definition 4.8.

Next, let us provide some examples to show how this idea works. More examples can be found in [1] and [2]. First, we begin with the simplest case, $f(t) = 1$.

Example 4.11. We will find $\int_0^1 \big(B(1) - B(t)\big)\,dB(t)$.

Let $\Delta_n = \{0 = t_0 < t_1 < \cdots < t_n = 1\}$ be a partition of $[0,1]$. Since the integrand is instantly independent, on each subinterval $[t_{i-1}, t_i]$ we take the right endpoint $t_i$ as the evaluation point to form a Riemann-like sum. So the stochastic integral $\int_0^1 \big(B(1) - B(t)\big)\,dB(t)$ is defined to be the following limit in probability:
$$\begin{aligned} \int_0^1 \big(B(1) - B(t)\big)\,dB(t) &= \lim_{\|\Delta_n\| \to 0} \sum_{i=1}^n \big(B(1) - B(t_i)\big)\big(B(t_i) - B(t_{i-1})\big) \\ &= B(1)\lim_{\|\Delta_n\| \to 0} \sum_{i=1}^n \big(B(t_i) - B(t_{i-1})\big) - \lim_{\|\Delta_n\| \to 0} \sum_{i=1}^n B(t_i)\big(B(t_i) - B(t_{i-1})\big) \\ &= B(1)^2 - \lim_{\|\Delta_n\| \to 0} \sum_{i=1}^n \Big[\big(B(t_i) - B(t_{i-1})\big)^2 + B(t_{i-1})\big(B(t_i) - B(t_{i-1})\big)\Big] \\ &= B(1)^2 - \lim_{\|\Delta_n\| \to 0} \sum_{i=1}^n \big(B(t_i) - B(t_{i-1})\big)^2 - \lim_{\|\Delta_n\| \to 0} \sum_{i=1}^n B(t_{i-1})\big(B(t_i) - B(t_{i-1})\big) \\ &= B(1)^2 - 1 - \int_0^1 B(t)\,dB(t). \end{aligned}$$
The last integral $\int_0^1 B(t)\,dB(t)$ is an Itô integral, since $B(t)$ is adapted. Hence, by the decomposition in Example 4.1 and Definition 4.10, we have
$$\begin{aligned} \int_0^1 B(1)\,dB(t) &= \int_0^1 \big(B(1) - B(t)\big)\,dB(t) + \int_0^1 B(t)\,dB(t) \\ &= \Big(B(1)^2 - 1 - \int_0^1 B(t)\,dB(t)\Big) + \int_0^1 B(t)\,dB(t) \\ &= B(1)^2 - 1. \end{aligned}$$
Note that, using this new approach, the stochastic integral $\int_0^1 B(1)\,dB(t)$ is not the same as the one obtained using Itô's idea in Equation (4.1). However, it coincides with the Hitsuda-Skorokhod integral defined via the white noise approach (see Example 13.14 of [8]). In general, the new stochastic integrals have expectation $0$, as for the Itô integral in Theorem 3.3. This property is presented in Theorem 4.12 below.
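The computation in Example 4.11 can be mirrored numerically. The sketch below (our illustration, not from [1] or [2]) forms the mixed-endpoint Riemann sum for the decomposition $B(1) = \big(B(1) - B(t)\big) + B(t)$, evaluating the instantly independent part at right endpoints and the adapted part at left endpoints, and compares the result with $B(1)^2 - 1$; the partition size and seed are arbitrary choices.

```python
import random

def new_integral_gap(n, seed=0):
    """Mixed-endpoint Riemann sum for int_0^1 B(1) dB(t) in the sense of
    Ayed and Kuo, compared with the closed form B(1)^2 - 1."""
    rng = random.Random(seed)
    dt = 1.0 / n
    B = [0.0]
    for _ in range(n):
        B.append(B[-1] + rng.gauss(0.0, dt ** 0.5))
    B1 = B[-1]
    total = 0.0
    for i in range(1, n + 1):
        dB = B[i] - B[i - 1]
        # anticipating factor B(1) - B(t) at the right endpoint,
        # adapted factor B(t) at the left endpoint
        total += ((B1 - B[i]) + B[i - 1]) * dB
    return abs(total - (B1 ** 2 - 1.0))

print(new_integral_gap(100000))  # small for a fine partition
```

Here the discrete gap reduces to $\big|1 - \sum_i (\Delta B_i)^2\big|$, which vanishes by the quadratic variation of Brownian motion.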

Theorem 4.12. Let $f(t)$, $a \le t \le b$, be adapted and let $\varphi(t)$, $a \le t \le b$, be instantly independent. Then the new stochastic integral of $f(t)\varphi(t)$ has zero expectation; that is,
$$E\big[I(f\varphi)\big] = E\Big[\int_a^b f(t)\varphi(t)\,dB(t)\Big] = 0.$$
The proof of this theorem can be found in [1].

Below we provide more examples of direct computations of the new integrals.

Example 4.13. We will evaluate the stochastic integral $\int_0^t B(s)\big(B(1) - B(s)\big)\,dB(s)$ for $0 \le t \le 1$. For $0 \le t \le 1$, $f(t) = B(t)$ is adapted and $\varphi(t) = B(1) - B(t)$ is instantly independent. Let $\Delta_n = \{0 = s_0 < s_1 < \cdots < s_n = t\}$ be a partition of $[0,t]$. By Definition 4.8, the integral is the limit in probability of the sum
$$\begin{aligned} \sum_{i=1}^n B(s_{i-1})\big(B(1) - B(s_i)\big)\big(B(s_i) - B(s_{i-1})\big) &= B(1)\sum_{i=1}^n B(s_{i-1})\big(B(s_i) - B(s_{i-1})\big) - \sum_{i=1}^n B(s_{i-1})B(s_i)\big(B(s_i) - B(s_{i-1})\big) \\ &= B(1)\sum_{i=1}^n B(s_{i-1})\big(B(s_i) - B(s_{i-1})\big) \\ &\quad - \sum_{i=1}^n B(s_{i-1})\big(B(s_i) - B(s_{i-1})\big)^2 - \sum_{i=1}^n B(s_{i-1})^2\big(B(s_i) - B(s_{i-1})\big). \end{aligned}$$
As $\|\Delta_n\| \to 0$, the above three sums converge in probability to $\int_0^t B(s)\,dB(s)$, $\int_0^t B(s)\,ds$, and $\int_0^t B(s)^2\,dB(s)$, respectively. Therefore,
$$\int_0^t B(s)\big(B(1) - B(s)\big)\,dB(s) = B(1)\int_0^t B(s)\,dB(s) - \int_0^t B(s)\,ds - \int_0^t B(s)^2\,dB(s). \tag{4.4}$$
Since both $\int_0^t B(s)\,dB(s)$ and $\int_0^t B(s)^2\,dB(s)$ are Itô integrals, we can apply the regular Itô formula (Theorem 3.20) to evaluate them (see Equation (3.5) in Example 3.22 and Equation (3.6) in Example 3.26). So, for $0 \le t \le 1$, we have
$$\begin{aligned} \int_0^t B(s)\big(B(1) - B(s)\big)\,dB(s) &= \frac12\,B(1)\big(B(t)^2 - t\big) - \int_0^t B(s)\,ds - \Big(\frac{B(t)^3}{3} - \int_0^t B(s)\,ds\Big) \\ &= \frac12\,B(1)\big(B(t)^2 - t\big) - \frac{B(t)^3}{3}. \end{aligned}$$
Note that when we put $t = 1$, this stochastic integral becomes
$$\int_0^1 B(s)\big(B(1) - B(s)\big)\,dB(s) = \frac12\,B(1)\big(B(1)^2 - 1\big) - \frac{B(1)^3}{3} = \frac16\big(B(1)^3 - 3B(1)\big),$$
which again coincides with the Hitsuda-Skorokhod integral defined in the white noise approach (see Example 2.5 in Chapter 2 of [14]).

Example 4.14. We will evaluate the integral $\int_0^t \big(B(1) - B(s)\big)^2\,dB(s)$ for $0 \le t \le 1$. Let $\Delta_n = \{0 = s_0 < s_1 < \cdots < s_n = t\}$ be a partition of $[0,t]$. For convenience, let $\Delta B_i := B(s_i) - B(s_{i-1})$. Then, by definition, the integral $\int_0^t \big(B(1) - B(s)\big)^2\,dB(s)$ is the limit in probability of the sum
$$\begin{aligned} \sum_{i=1}^n \big(B(1) - B(s_i)\big)^2 \Delta B_i &= \sum_{i=1}^n \big(B(1)^2 - 2B(1)B(s_i) + B(s_i)^2\big)\Delta B_i \\ &= B(1)^2\sum_{i=1}^n \Delta B_i - 2B(1)\sum_{i=1}^n B(s_i)\Delta B_i + \sum_{i=1}^n B(s_i)^2\Delta B_i \\ &= B(1)^2\big(B(t) - B(0)\big) - 2B(1)\sum_{i=1}^n \big(\Delta B_i + B(s_{i-1})\big)\Delta B_i + \sum_{i=1}^n \big(\Delta B_i + B(s_{i-1})\big)^2\Delta B_i \\ &= B(1)^2 B(t) - 2B(1)\sum_{i=1}^n (\Delta B_i)^2 - 2B(1)\sum_{i=1}^n B(s_{i-1})\Delta B_i \\ &\quad + \sum_{i=1}^n (\Delta B_i)^3 + 2\sum_{i=1}^n B(s_{i-1})(\Delta B_i)^2 + \sum_{i=1}^n B(s_{i-1})^2\Delta B_i. \end{aligned}$$
As $\|\Delta_n\| \to 0$, the above five sums converge in probability to $t$, $\int_0^t B(s)\,dB(s)$, $0$, $\int_0^t B(s)\,ds$, and $\int_0^t B(s)^2\,dB(s)$, respectively. So, for $0 \le t \le 1$, we have
$$\int_0^t \big(B(1) - B(s)\big)^2\,dB(s) = B(1)^2 B(t) - 2B(1)t - 2B(1)\int_0^t B(s)\,dB(s) + 2\int_0^t B(s)\,ds + \int_0^t B(s)^2\,dB(s). \tag{4.5}$$
Also, by Equations (3.5) and (3.6), Equation (4.5) becomes
$$\begin{aligned} \int_0^t \big(B(1) - B(s)\big)^2\,dB(s) &= B(1)^2 B(t) - 2B(1)t - 2B(1)\cdot\frac12\big(B(t)^2 - t\big) + 2\int_0^t B(s)\,ds + \frac{B(t)^3}{3} - \int_0^t B(s)\,ds \\ &= B(1)^2 B(t) - B(1)t - B(1)B(t)^2 + \int_0^t B(s)\,ds + \frac{B(t)^3}{3}. \end{aligned}$$

Example 4.15. We will evaluate the stochastic integral $\int_0^t B(1)^2\,dB(s)$ for $t \ge 0$. First consider $0 \le t \le 1$; we decompose the integrand $B(1)^2$ as in Equation (4.2):

$$B(1)^2 = \big(B(1) - B(t)\big)^2 + 2B(t)\big(B(1) - B(t)\big) + B(t)^2.$$
Then, by Definition 4.10, we have
$$\int_0^t B(1)^2\,dB(s) = \int_0^t \big(B(1) - B(s)\big)^2\,dB(s) + 2\int_0^t B(s)\big(B(1) - B(s)\big)\,dB(s) + \int_0^t B(s)^2\,dB(s).$$
From Equations (4.4) and (4.5), we get
$$\begin{aligned} \int_0^t B(1)^2\,dB(s) &= B(1)^2 B(t) - 2B(1)t - 2B(1)\int_0^t B(s)\,dB(s) + 2\int_0^t B(s)\,ds + \int_0^t B(s)^2\,dB(s) \\ &\quad + 2\Big(B(1)\int_0^t B(s)\,dB(s) - \int_0^t B(s)\,ds - \int_0^t B(s)^2\,dB(s)\Big) + \int_0^t B(s)^2\,dB(s). \end{aligned}$$
Therefore,
$$\int_0^t B(1)^2\,dB(s) = B(1)^2 B(t) - 2B(1)t, \qquad 0 \le t \le 1.$$
Next, consider $t > 1$. Note that $B(1)^2$ is $\mathcal{F}_t$-measurable for all $t \ge 1$. Then, for each $t > 1$, $\int_1^t B(1)^2\,dB(s)$ is an Itô integral, so
$$\int_1^t B(1)^2\,dB(s) = B(1)^2\int_1^t dB(s) = B(1)^2\big(B(t) - B(1)\big).$$
Therefore, when $t > 1$, we have
$$\begin{aligned} \int_0^t B(1)^2\,dB(s) &= \int_0^1 B(1)^2\,dB(s) + \int_1^t B(1)^2\,dB(s) \\ &= B(1)^3 - 2B(1) + B(1)^2\big(B(t) - B(1)\big) \\ &= B(1)^2 B(t) - 2B(1). \end{aligned}$$
Thus
$$\int_0^t B(1)^2\,dB(s) = \begin{cases} B(1)^2 B(t) - 2B(1)t & \text{if } 0 \le t \le 1, \\ B(1)^2 B(t) - 2B(1) & \text{if } t > 1. \end{cases}$$

Example 4.16. Consider the integral $\int_0^t e^{B(2)}\,dB(s)$ for $t \ge 0$. When $0 \le t \le 2$, $e^{B(2)}$ can be rewritten as in Equation (4.3):

$$e^{B(2)} = e^{B(2) - B(t)}\,e^{B(t)}.$$
Let $\Delta_n = \{0 = s_0 < s_1 < \cdots < s_n = t\}$ be a partition of $[0,t]$. By Definition 4.8, the integral is the limit in probability of the sum
$$\begin{aligned} \sum_{i=1}^n e^{B(s_{i-1})} e^{B(2) - B(s_i)}\big(B(s_i) - B(s_{i-1})\big) &= e^{B(2)}\sum_{i=1}^n e^{-(B(s_i) - B(s_{i-1}))}\big(B(s_i) - B(s_{i-1})\big) \\ &= e^{B(2)}\sum_{i=1}^n \Big[1 - \big(B(s_i) - B(s_{i-1})\big) + \frac12\big(B(s_i) - B(s_{i-1})\big)^2 - \cdots\Big]\big(B(s_i) - B(s_{i-1})\big) \\ &= e^{B(2)}\Big[B(t) - \sum_{i=1}^n \big(B(s_i) - B(s_{i-1})\big)^2 + \frac12\sum_{i=1}^n \big(B(s_i) - B(s_{i-1})\big)^3 - \cdots\Big]. \end{aligned}$$
As $\|\Delta_n\| \to 0$, the sum $\sum_{i=1}^n \big(B(s_i) - B(s_{i-1})\big)^2$ converges to $t$, and all the higher-order sums converge to $0$ in probability. Hence we have
$$\int_0^t e^{B(2)}\,dB(s) = e^{B(2)}\big(B(t) - t\big), \qquad 0 \le t \le 2.$$
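As a numerical illustration (ours, not from the dissertation) of the result just obtained, the sketch below forms the mixed-endpoint Riemann sum for $\int_0^1 e^{B(2)}\,dB(s)$ using the factorization $e^{B(2)} = e^{B(s)}\,e^{B(2) - B(s)}$ and compares it with $e^{B(2)}\big(B(1) - 1\big)$; the gap is reported after scaling by $e^{-B(2)}$ so that it does not depend on the magnitude of $e^{B(2)}$ on the simulated path.

```python
import random
import math

def exp_integral_gap(n, seed=0):
    """Mixed-endpoint Riemann sum for int_0^1 exp(B(2)) dB(s) via the
    factorization exp(B(2)) = exp(B(s)) * exp(B(2)-B(s)), compared with
    exp(B(2)) * (B(1) - 1).  Returns the gap scaled by exp(-B(2))."""
    rng = random.Random(seed)
    dt = 1.0 / n
    # simulate B on [0, 2] with 2n steps so that B(2) is available
    B = [0.0]
    for _ in range(2 * n):
        B.append(B[-1] + rng.gauss(0.0, dt ** 0.5))
    B1, B2 = B[n], B[-1]
    total = 0.0
    for i in range(1, n + 1):   # partition of [0, 1]
        dB = B[i] - B[i - 1]
        # adapted factor exp(B(s)) at the left endpoint,
        # instantly independent factor exp(B(2)-B(s)) at the right endpoint
        total += math.exp(B[i - 1]) * math.exp(B2 - B[i]) * dB
    return abs(total / math.exp(B2) - (B1 - 1.0))

print(exp_integral_gap(50000))  # small for a fine partition
```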

Next, consider $t > 2$. Since $e^{B(2)}$ is $\mathcal{F}_t$-measurable for all $t \ge 2$, for each $t > 2$ the integral $\int_2^t e^{B(2)}\,dB(s)$ is an Itô integral, and so
$$\int_2^t e^{B(2)}\,dB(s) = e^{B(2)}\int_2^t dB(s) = e^{B(2)}\big(B(t) - B(2)\big).$$
Therefore, for $t > 2$, we have
$$\begin{aligned} \int_0^t e^{B(2)}\,dB(s) &= \int_0^2 e^{B(2)}\,dB(s) + \int_2^t e^{B(2)}\,dB(s) \\ &= e^{B(2)}\big(B(2) - 2\big) + e^{B(2)}\big(B(t) - B(2)\big) \\ &= e^{B(2)}\big(B(t) - 2\big). \end{aligned}$$
That is,
$$\int_0^t e^{B(2)}\,dB(s) = \begin{cases} e^{B(2)}\big(B(t) - t\big) & \text{if } 0 \le t \le 2, \\ e^{B(2)}\big(B(t) - 2\big) & \text{if } t > 2. \end{cases}$$

4.3 Simple Itô Formula for the New Stochastic Integral

In [1], Ayed and Kuo provide a version of Itô's formula for the new integral. We give it in the following theorem.

Theorem 4.17. Let $f(x)$ and $\varphi(x)$ be $C^2$-functions and $\theta(x,y) = f(x)\varphi(y - x)$. Then the following equality holds for $a \le t \le T$:
$$\theta\big(B(t),B(T)\big) = \theta\big(B(a),B(T)\big) + \int_a^t \frac{\partial\theta}{\partial x}\big(B(s),B(T)\big)\,dB(s) + \int_a^t \Big[\frac12\,\frac{\partial^2\theta}{\partial x^2}\big(B(s),B(T)\big) + \frac{\partial^2\theta}{\partial x\,\partial y}\big(B(s),B(T)\big)\Big]\,ds. \tag{4.6}$$
Observe that the last integral of Equation (4.6) contains both the Itô correction term $\frac12\,\frac{\partial^2\theta}{\partial x^2}\big(B(s),B(T)\big)$ for the Itô part and the correction term $\frac{\partial^2\theta}{\partial x\,\partial y}\big(B(s),B(T)\big)$ for the counterpart.

The following corollary gives the Itô formula in the case $t > T$.

Corollary 4.18. Let $f(x)$ and $\varphi(x)$ be $C^2$-functions and $\theta(x,y) = f(x)\varphi(y - x)$. Then the following equality holds for $t > T$:
$$\theta\big(B(t),B(T)\big) = \theta\big(B(a),B(T)\big) + \int_a^t \frac{\partial\theta}{\partial x}\big(B(s),B(T)\big)\,dB(s) + \frac12\int_a^t \frac{\partial^2\theta}{\partial x^2}\big(B(s),B(T)\big)\,ds + \int_a^T \frac{\partial^2\theta}{\partial x\,\partial y}\big(B(s),B(T)\big)\,ds.$$
Note that the correction term $\frac{\partial^2\theta}{\partial x\,\partial y}\big(B(s),B(T)\big)$ due to the counterpart is integrated only up to $T$. The proofs of Theorem 4.17 and Corollary 4.18 can be found in [1].

The following example shows how one can use the above Itô formula to evaluate the new stochastic integral.

Example 4.19. We will evaluate the integral $\int_0^t e^{B(2)}\,dB(s)$ for $t \ge 0$ using the above Itô formula. Let $\theta(x,y) = xe^y$. Then we have
$$\frac{\partial\theta}{\partial x}(x,y) = e^y, \qquad \frac{\partial^2\theta}{\partial x^2}(x,y) = 0, \qquad \frac{\partial^2\theta}{\partial x\,\partial y}(x,y) = e^y.$$
Note that $\theta(x,y) = xe^y = xe^x\,e^{y - x} = f(x)\varphi(y - x)$, where $f(x) = xe^x$ and $\varphi(x) = e^x$ are $C^2$-functions. Therefore, by the Itô formula in Theorem 4.17, we have for $0 \le t \le 2$:
$$B(t)e^{B(2)} = B(0)e^{B(2)} + \int_0^t e^{B(2)}\,dB(s) + \int_0^t \Big[\frac12\cdot 0 + e^{B(2)}\Big]\,ds = 0 + \int_0^t e^{B(2)}\,dB(s) + \int_0^t e^{B(2)}\,ds,$$
which implies
$$\int_0^t e^{B(2)}\,dB(s) = B(t)e^{B(2)} - e^{B(2)}t = e^{B(2)}\big(B(t) - t\big), \qquad 0 \le t \le 2.$$
For $t > 2$, we can apply Corollary 4.18; that is,
$$B(t)e^{B(2)} = B(0)e^{B(2)} + \int_0^t e^{B(2)}\,dB(s) + \frac12\int_0^t 0\,ds + \int_0^2 e^{B(2)}\,ds = 0 + \int_0^t e^{B(2)}\,dB(s) + 2e^{B(2)}.$$
So
$$\int_0^t e^{B(2)}\,dB(s) = B(t)e^{B(2)} - 2e^{B(2)} = e^{B(2)}\big(B(t) - 2\big), \qquad t \ge 2.$$
Therefore,
$$\int_0^t e^{B(2)}\,dB(s) = \begin{cases} e^{B(2)}\big(B(t) - t\big) & \text{if } 0 \le t \le 2, \\ e^{B(2)}\big(B(t) - 2\big) & \text{if } t > 2, \end{cases}$$
and we obtain the same result as by computing the integral directly from the definition in Example 4.16.

Chapter 5
Some Properties of the New Stochastic Integral

In this chapter, we present the main results of this dissertation. These results are collected from my joint work with Kuo and Szozda during the last two years of my studies. The selected results are: a description of a new class of stochastic processes called "near-martingales"; the Itô isometry for a special case of stochastic integrals of instantly independent stochastic processes; some formulas for the computation of the new integrals; and some generalized Itô formulas for the new integrals, which are much easier to use than the one given by Ayed and Kuo in Theorem 4.17.

5.1 Near-Martingales

In the Itô theory, one of the most well-known and widely used properties of a stochastic process is the martingale property. Recall from Definition 2.21 that a martingale with respect to the filtration $\{\mathcal{F}_t\}$ is an $\{\mathcal{F}_t\}$-adapted stochastic process with $E|X_t| < \infty$ and $E(X_t \mid \mathcal{F}_s) = X_s$ for any $s \le t$. So we see that this property makes sense only for adapted stochastic processes.

In the new theory, we mainly use instantly independent stochastic processes and products of an adapted process and an instantly independent process, which in general are not adapted. Therefore, it is natural to ask whether there is a property, similar to the martingale property, that still makes sense for non-adapted stochastic processes.

Obviously, in order to obtain such a property, the assumption that $X_t$ is $\{\mathcal{F}_t\}$-adapted must be removed, and we consider only the condition $E(X_t \mid \mathcal{F}_s) = X_s$ for any $s \le t$.

Suppose $E(X_t \mid \mathcal{F}_s) = X_s$ for any $s \le t$. Note that condition (1) in the definition of conditional expectation (Definition 2.14) implies the $\mathcal{F}_s$-measurability of $X_s$ for all $s$. This means that the $\{\mathcal{F}_t\}$-adaptedness of $X_t$ already follows from condition (2), $E(X_t \mid \mathcal{F}_s) = X_s$, of Definition 2.21.

However, the condition $E(X_t \mid \mathcal{F}_s) = X_s$ for any $s \le t$ can be rewritten as
$$E\{X_t - X_s \mid \mathcal{F}_s\} = 0, \qquad \forall\, s \le t. \tag{5.1}$$
Observe that the expression in Equation (5.1) still makes sense for non-adapted stochastic processes, including the instantly independent stochastic processes of the new theory. Therefore, the property in Equation (5.1) becomes our new concept, called a "near-martingale."

Remark 5.1. Note that the condition $E(X_t \mid \mathcal{F}_s) = X_s$ for any $s \le t$ implies Equation (5.1), but the converse is not true in general.

Definition 5.2. Let $X_t$ be a stochastic process with $E|X_t| < \infty$ for all $t$. Then $X_t$ is a near-martingale with respect to a filtration $\{\mathcal{F}_t\}$ if
$$E\{X_t - X_s \mid \mathcal{F}_s\} = 0, \qquad \forall\, s \le t. \tag{5.2}$$
Remark 5.3. By Remark 5.1, every martingale is a near-martingale.

Remark 5.4. If $X_t$ is a near-martingale with respect to the filtration $\{\mathcal{F}_t\}$ and is adapted to $\{\mathcal{F}_t\}$, then it is a martingale with respect to $\{\mathcal{F}_t\}$. This is the reason we call this class of stochastic processes near-martingales.

Next, we prove a theorem providing necessary and sufficient conditions for an instantly independent process to be a near-martingale.

Theorem 5.5. Suppose $\varphi(t)$ is instantly independent with respect to a filtration $\{\mathcal{F}_t\}$ and $E|\varphi(t)| < \infty$ for all $t$. Then $\varphi(t)$ is a near-martingale with respect to $\{\mathcal{F}_t\}$ if and only if $E[\varphi(t)] = E[\varphi(s)]$ for all $s$ and $t$, i.e., $E[\varphi(t)]$ is constant.

Proof. Let $s \le t$. By the tower property of conditional expectation, we have
$$E\{\varphi(t) - \varphi(s) \mid \mathcal{F}_s\} = E\{\varphi(t) \mid \mathcal{F}_s\} - E\{\varphi(s) \mid \mathcal{F}_s\} = E\{E[\varphi(t) \mid \mathcal{F}_t] \mid \mathcal{F}_s\} - E\{\varphi(s) \mid \mathcal{F}_s\}. \tag{5.3}$$
Since $\varphi(t)$ is instantly independent with respect to $\{\mathcal{F}_t\}$, Equation (5.3) becomes
$$E\{\varphi(t) - \varphi(s) \mid \mathcal{F}_s\} = E\{E[\varphi(t)] \mid \mathcal{F}_s\} - E[\varphi(s)] = E[\varphi(t)] - E[\varphi(s)].$$
Therefore, we conclude that $\varphi(t)$ is a near-martingale with respect to $\{\mathcal{F}_t\}$ if and only if $E[\varphi(t)] = E[\varphi(s)]$ for all $s$ and $t$.

The following are some simple examples of stochastic processes that are near- martingales and some stochastic processes that are not near-martingales with re- spect to the filtration ; a t T where = σ B(s); a s t . {Ft ≤ ≤ } Ft { ≤ ≤ } Example 5.6. For each k N, the stochastic process ∈

2k+1 X = B(T ) B(t) , a t T, t − ≤ ≤

2k+1 is instantly independent to ; a t T and E B(T ) B(t) = 0. {Ft ≤ ≤ } − 2k+1 Therefore, by Theorem 5.5, X = B(T ) B(t) is a near-martingale with t − respect to . {Ft} 2 Example 5.7. On the other hand, for a t T , E B(T ) B(t) = T ≤ ≤ − − t which is not a constant. Thus it follows from Theorem 5.5 that the instantly

2 independent stochastic process Y = B(T ) B(t) is not a near-martingale with t − respect to the filtration . {Ft} 2 However, for a t T , the stochastic processes Y = B(T ) B(t) (T t) ≤ ≤ t − − − 2 and Y = B(T ) B(t) + t are near-martingales with respect to since their t − {Ft} expectations are constants not depending on t.

Example 5.8. The instantly independent stochastic process $Z_t = e^{B(T)-B(t)}$ is not a near-martingale with respect to $\{\mathcal{F}_t\}$ since $E\bigl[e^{B(T)-B(t)}\bigr] = e^{\frac{1}{2}(T-t)}$ is not a constant, but the stochastic process $Z_t = e^{B(T)-B(t)-\frac{1}{2}(T-t)}$ is a near-martingale with respect to $\{\mathcal{F}_t\}$ (since $E[Z_t] = 1$).

Next, consider the case of a product of an adapted process and an instantly independent process. We also have a theorem giving sufficient conditions for such a product to be a near-martingale.

Theorem 5.9. Let $\{\mathcal{F}_t\}$ be a filtration. Assume that $f(t)$ and $\varphi(t)$ are stochastic processes such that

1. $f(t)$ is a martingale with respect to $\{\mathcal{F}_t\}$;

2. $\varphi(t)$ is instantly independent with respect to $\{\mathcal{F}_t\}$ and $E[\varphi(t)]$ is constant;

3. $E|f(t)\varphi(t)| < \infty$ for all $t$.

Then $\theta(t) = f(t)\varphi(t)$ is a near-martingale with respect to $\{\mathcal{F}_t\}$.

Remark 5.10. Observe that

1. by Theorem 5.5, condition (2) implies that $\varphi(t)$ is a near-martingale;

2. Theorem 5.9 fails to be true if we merely assume that $\varphi(t)$ is a near-martingale without the instantaneous independence of $\varphi(t)$.

Proof. Let $s \le t$. Since $f(s)$ is measurable with respect to $\mathcal{F}_s$, we have
$$E\bigl[\theta(t)-\theta(s)\bigm|\mathcal{F}_s\bigr] = E\bigl[f(t)\varphi(t)\bigm|\mathcal{F}_s\bigr] - E\bigl[f(s)\varphi(s)\bigm|\mathcal{F}_s\bigr] = E\bigl[f(t)\varphi(t)\bigm|\mathcal{F}_s\bigr] - f(s)\,E\bigl[\varphi(s)\bigm|\mathcal{F}_s\bigr].$$
Since $\mathcal{F}_s \subset \mathcal{F}_t$, we have $E[f(t)\varphi(t)\mid\mathcal{F}_s] = E\bigl[E[f(t)\varphi(t)\mid\mathcal{F}_t]\bigm|\mathcal{F}_s\bigr]$. Thus, using the instantaneous independence of $\varphi(s)$, we have
$$E\bigl[\theta(t)-\theta(s)\bigm|\mathcal{F}_s\bigr] = E\Bigl[E\bigl[f(t)\varphi(t)\bigm|\mathcal{F}_t\bigr]\Bigm|\mathcal{F}_s\Bigr] - f(s)\,E[\varphi(s)]. \eqno(5.4)$$
Also, by the $\mathcal{F}_t$-measurability of $f(t)$ and the instantaneous independence of $\varphi(t)$, Equation (5.4) becomes
$$E\bigl[\theta(t)-\theta(s)\bigm|\mathcal{F}_s\bigr] = E\Bigl[f(t)\,E\bigl[\varphi(t)\bigm|\mathcal{F}_t\bigr]\Bigm|\mathcal{F}_s\Bigr] - f(s)\,E[\varphi(s)] = E\bigl[f(t)\,E[\varphi(t)]\bigm|\mathcal{F}_s\bigr] - f(s)\,E[\varphi(s)] = E[\varphi(t)]\,E\bigl[f(t)\bigm|\mathcal{F}_s\bigr] - f(s)\,E[\varphi(s)].$$
Note that $f(t)$ is a martingale with respect to $\{\mathcal{F}_t\}$ and $E[\varphi(t)]$ is constant. Therefore,
$$E\bigl[\theta(t)-\theta(s)\bigm|\mathcal{F}_s\bigr] = E[\varphi(t)]\,f(s) - f(s)\,E[\varphi(s)] = 0.$$
Hence the stochastic process $\theta(t) = f(t)\varphi(t)$ is a near-martingale with respect to the filtration $\{\mathcal{F}_t\}$.
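Theorem 5.9 can be probed numerically. The defining property $E[\theta(t)-\theta(s)\mid\mathcal{F}_s]=0$ cannot be computed directly by simulation, but it implies $E[(\theta(t)-\theta(s))\,g] = 0$ for every integrable $\mathcal{F}_s$-measurable $g$, which we can check for a few test variables. The sketch below uses $f(t)=B(t)$ and $\varphi(t)=B(T)-B(t)$ (the process of Example 5.11 below); all parameters are illustrative.

```python
import numpy as np

# Probe of Theorem 5.9 for theta(t) = B(t)(B(T) - B(t)).
rng = np.random.default_rng(1)
n_paths = 200_000
s, t, T = 0.3, 0.7, 1.0

# Simulate B at the three times via independent increments.
B_s = rng.normal(0.0, np.sqrt(s), n_paths)
B_t = B_s + rng.normal(0.0, np.sqrt(t - s), n_paths)
B_T = B_t + rng.normal(0.0, np.sqrt(T - t), n_paths)

diff = B_t * (B_T - B_t) - B_s * (B_T - B_s)   # theta(t) - theta(s)

# E[diff | F_s] = 0 forces E[diff * g] = 0 for F_s-measurable g.
for g in (np.ones(n_paths), B_s, np.sin(B_s)):
    print(np.mean(diff * g))                   # each close to 0
```
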

Example 5.11. The stochastic process $B(t)\bigl(B(T)-B(t)\bigr)$, $a \le t \le T$, is a near-martingale with respect to $\{\mathcal{F}_t\}$.

Pardoux and Protter (in [15]) have defined backward and forward filtrations as follows.

Definition 5.12. 1. We will say that a family $\{\mathcal{G}_t\}$ of sub-$\sigma$-fields is a forward filtration if $\mathcal{G}_s \subseteq \mathcal{G}_t$ for $0 \le s \le t$;

2. We will say that a family $\{\mathcal{G}^{(t)}\}$ of sub-$\sigma$-fields is a backward filtration if $\mathcal{G}^{(s)} \supseteq \mathcal{G}^{(t)}$ for $0 \le s \le t$.

Remark 5.13. The usual and widely used definition of a filtration in Definition 2.18 is actually a forward filtration in this sense.

By dealing with backward filtrations, we can also define a near-martingale with respect to a backward filtration, as in Definition 5.14.

Definition 5.14. Let $E|X_t| < \infty$ for all $t$. We say that $X_t$ is a near-martingale with respect to a backward filtration $\{\mathcal{F}^{(t)}\}$ if
$$E\bigl[X_t - X_s \bigm| \mathcal{F}^{(t)}\bigr] = 0, \quad \forall\, s \le t. \eqno(5.5)$$

By arguments similar to those in the proofs of Theorem 5.5 and Theorem 5.9, we can also prove the following two theorems.

Theorem 5.15. Suppose $\varphi(t)$ is instantly independent with respect to a backward filtration $\{\mathcal{F}^{(t)}\}$ and $E|\varphi(t)| < \infty$ for all $t$. Then $\varphi(t)$ is a near-martingale with respect to the backward filtration $\{\mathcal{F}^{(t)}\}$ if and only if $E[\varphi(t)] = E[\varphi(s)]$ for all $s$ and $t$, i.e., $E[\varphi(t)]$ is constant.

Proof. Let $s \le t$. Consider
$$E\bigl[\varphi(t)-\varphi(s)\bigm|\mathcal{F}^{(t)}\bigr] = E\bigl[\varphi(t)\bigm|\mathcal{F}^{(t)}\bigr] - E\bigl[\varphi(s)\bigm|\mathcal{F}^{(t)}\bigr]. \eqno(5.6)$$
Since $\mathcal{F}^{(t)} \subset \mathcal{F}^{(s)}$, by the tower property and the instantaneous independence of $\varphi$, Equation (5.6) becomes
$$E\bigl[\varphi(t)-\varphi(s)\bigm|\mathcal{F}^{(t)}\bigr] = E\bigl[\varphi(t)\bigm|\mathcal{F}^{(t)}\bigr] - E\Bigl[E\bigl[\varphi(s)\bigm|\mathcal{F}^{(s)}\bigr]\Bigm|\mathcal{F}^{(t)}\Bigr] = E[\varphi(t)] - E\bigl[E[\varphi(s)]\bigm|\mathcal{F}^{(t)}\bigr] = E[\varphi(t)] - E[\varphi(s)].$$
Therefore, we conclude that $\varphi(t)$ is a near-martingale with respect to the backward filtration $\{\mathcal{F}^{(t)}\}$ if and only if $E[\varphi(t)] = E[\varphi(s)]$ for all $s$ and $t$.

Theorem 5.16. Let $\{\mathcal{F}^{(t)}\}$ be a backward filtration. Assume that

1. $f(t)$ is a martingale with respect to $\{\mathcal{F}^{(t)}\}$;

2. $\varphi(t)$ is instantly independent with respect to $\{\mathcal{F}^{(t)}\}$ and $E[\varphi(t)]$ is constant;

3. $E|f(t)\varphi(t)| < \infty$ for all $t$.

Then $\theta(t) = f(t)\varphi(t)$ is a near-martingale with respect to $\{\mathcal{F}^{(t)}\}$.

Remark 5.17. Condition (1), that $f(t)$ is a martingale with respect to the backward filtration $\{\mathcal{F}^{(t)}\}$, means that $f(t)$ is $\mathcal{F}^{(t)}$-measurable for all $t$ and $E\bigl[f(s)\bigm|\mathcal{F}^{(t)}\bigr] = f(t)$ for any $s \le t$.

Proof. Let $s \le t$. Since $f(t)$ is measurable with respect to $\mathcal{F}^{(t)}$, we have
$$E\bigl[\theta(t)-\theta(s)\bigm|\mathcal{F}^{(t)}\bigr] = E\bigl[f(t)\varphi(t) - f(s)\varphi(s)\bigm|\mathcal{F}^{(t)}\bigr] = f(t)\,E\bigl[\varphi(t)\bigm|\mathcal{F}^{(t)}\bigr] - E\bigl[f(s)\varphi(s)\bigm|\mathcal{F}^{(t)}\bigr]. \eqno(5.7)$$
Since $\mathcal{F}^{(t)} \subset \mathcal{F}^{(s)}$, we have $E\bigl[f(s)\varphi(s)\mid\mathcal{F}^{(t)}\bigr] = E\bigl[E[f(s)\varphi(s)\mid\mathcal{F}^{(s)}]\bigm|\mathcal{F}^{(t)}\bigr]$. So Equation (5.7) becomes
$$E\bigl[\theta(t)-\theta(s)\bigm|\mathcal{F}^{(t)}\bigr] = f(t)\,E[\varphi(t)] - E\Bigl[f(s)\,E\bigl[\varphi(s)\bigm|\mathcal{F}^{(s)}\bigr]\Bigm|\mathcal{F}^{(t)}\Bigr] = f(t)\,E[\varphi(t)] - E\bigl[f(s)\,E[\varphi(s)]\bigm|\mathcal{F}^{(t)}\bigr] = f(t)\,E[\varphi(t)] - E[\varphi(s)]\,E\bigl[f(s)\bigm|\mathcal{F}^{(t)}\bigr].$$
Note that $f(t)$ is a martingale with respect to the backward filtration $\{\mathcal{F}^{(t)}\}$ and $E[\varphi(t)]$ is constant. Therefore,
$$E\bigl[\theta(t)-\theta(s)\bigm|\mathcal{F}^{(t)}\bigr] = f(t)\,E[\varphi(t)] - E[\varphi(s)]\,f(t) = 0.$$
Hence the stochastic process $\theta(t) = f(t)\varphi(t)$ is a near-martingale with respect to the backward filtration $\{\mathcal{F}^{(t)}\}$.

Now, another theorem in Itô theory related to the martingale property that we should consider is the one about the martingale property of the associated stochastic process $X_t = \int_a^t f(s)\,dB(s)$, which is defined for $f \in L^2_{\mathrm{ad}}\bigl(\Omega\times[a,b]\bigr)$. (See Theorem 3.7.)

In the new theory, we can also define stochastic processes associated with the new integrals. Namely, for continuous functions $f$ and $\varphi$, we define $X_t$ to be the stochastic process
$$X_t = \int_a^t f\bigl(B(s)\bigr)\,\varphi\bigl(B(T)-B(s)\bigr)\,dB(s), \quad a \le t \le T, \eqno(5.8)$$
and we will prove, under some conditions, that $X_t$ is a near-martingale with respect to the (forward) filtration $\{\mathcal{F}_t\}$ where $\mathcal{F}_t = \sigma\{B(s);\, a \le s \le t\}$. (See Theorem 5.18.)

Moreover, we introduce another associated stochastic process defined by
$$Y^{(t)} = \int_t^T f\bigl(B(s)\bigr)\,\varphi\bigl(B(T)-B(s)\bigr)\,dB(s), \quad a \le t \le T, \eqno(5.9)$$
and, under the same conditions, we can also show that the stochastic process $Y^{(t)}$ is a near-martingale with respect to the same filtration.

For $a \le t \le T$, let $\mathcal{F}_t = \sigma\{B(s);\, a \le s \le t\}$. The next two theorems establish the near-martingale property (with respect to the filtration $\{\mathcal{F}_t\}$) of $X_t$ and $Y^{(t)}$.

Theorem 5.18. Let $f(x)$ and $\varphi(x)$ be continuous functions. Let
$$X_t = \int_a^t f\bigl(B(s)\bigr)\,\varphi\bigl(B(T)-B(s)\bigr)\,dB(s), \quad a \le t \le T,$$
and assume that $E|X_t| < \infty$ for all $a \le t \le T$. Then $X_t$ is a near-martingale with respect to the forward filtration $\{\mathcal{F}_t\}$.

Proof. Let $a \le s \le t \le T$. Consider
$$X_t - X_s = \int_s^t f\bigl(B(u)\bigr)\,\varphi\bigl(B(T)-B(u)\bigr)\,dB(u).$$
Let $\Delta_n = \{s = t_0 < t_1 < \cdots < t_n = t\}$ be a partition of $[s,t]$ and set $\Delta B_i = B(t_i)-B(t_{i-1})$. Then
$$E\bigl[X_t - X_s \bigm|\mathcal{F}_s\bigr] = \lim_{\|\Delta_n\|\to 0}\sum_{i=1}^n E\bigl[f\bigl(B(t_{i-1})\bigr)\,\varphi\bigl(B(T)-B(t_i)\bigr)\,\Delta B_i \bigm|\mathcal{F}_s\bigr]. \eqno(5.10)$$
By the tower property of conditional expectation, the fact that $f\bigl(B(t_{i-1})\bigr)$ is $\mathcal{F}_{t_{i-1}}$-measurable, and the independence of $\Delta B_i$ from $\mathcal{F}_{t_{i-1}}$, we have for each $1 \le i \le n$,
$$\begin{aligned}
E\bigl[f(B(t_{i-1}))\,\varphi(B(T)-B(t_i))\,\Delta B_i \bigm|\mathcal{F}_s\bigr]
&= E\Bigl[f(B(t_{i-1}))\,\Delta B_i\,E\bigl[\varphi(B(T)-B(t_i))\bigm|\mathcal{F}_{t_i}\bigr]\Bigm|\mathcal{F}_s\Bigr] \\
&= E\bigl[\varphi(B(T)-B(t_i))\bigr]\,E\bigl[f(B(t_{i-1}))\,\Delta B_i \bigm|\mathcal{F}_s\bigr] \\
&= E\bigl[\varphi(B(T)-B(t_i))\bigr]\,E\bigl[f(B(t_{i-1}))\,E[\Delta B_i \mid \mathcal{F}_{t_{i-1}}]\bigm|\mathcal{F}_s\bigr] = 0,
\end{aligned}$$
since $\varphi\bigl(B(T)-B(t_i)\bigr)$ is independent of $\mathcal{F}_{t_i}$ and $E[\Delta B_i \mid \mathcal{F}_{t_{i-1}}] = E[\Delta B_i] = 0$. Thus, by Equation (5.10), we conclude that $X_t$ is a near-martingale with respect to $\{\mathcal{F}_t\}$.

Theorem 5.19. Let $f(x)$ and $\varphi(x)$ be continuous functions. Let
$$Y^{(t)} = \int_t^T f\bigl(B(s)\bigr)\,\varphi\bigl(B(T)-B(s)\bigr)\,dB(s), \quad a \le t \le T,$$
and assume that $E|Y^{(t)}| < \infty$ for all $a \le t \le T$. Then $Y^{(t)}$ is a near-martingale with respect to the forward filtration $\{\mathcal{F}_t\}$.

Proof. Note that for $a \le s \le t \le T$,
$$Y^{(t)} - Y^{(s)} = \int_t^T f\bigl(B(u)\bigr)\varphi\bigl(B(T)-B(u)\bigr)\,dB(u) - \int_s^T f\bigl(B(u)\bigr)\varphi\bigl(B(T)-B(u)\bigr)\,dB(u) = -\int_s^t f\bigl(B(u)\bigr)\varphi\bigl(B(T)-B(u)\bigr)\,dB(u) = -(X_t - X_s),$$
where $X_t$ is as in Theorem 5.18. Therefore,
$$E\bigl[Y^{(t)} - Y^{(s)}\bigm|\mathcal{F}_s\bigr] = E\bigl[-(X_t-X_s)\bigm|\mathcal{F}_s\bigr] = -E\bigl[X_t - X_s\bigm|\mathcal{F}_s\bigr] = 0.$$
Thus $Y^{(t)}$ is a near-martingale with respect to $\{\mathcal{F}_t\}$.
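Theorem 5.18 can be probed by simulation, discretizing the new integral exactly as in the Riemann sums of the proof: the adapted factor $f(B(t_{i-1}))$ at the left endpoint and the instantly independent factor $\varphi(B(T)-B(t_i))$ at the right endpoint. As before, we test consequences of the near-martingale property against a few $\mathcal{F}_s$-measurable variables; the choices $f(x)=x$, $\varphi(y)=y$, and all parameters are illustrative.

```python
import numpy as np

# Discretized probe of Theorem 5.18 with X_t = int_0^t B(u)(B(T)-B(u)) dB(u).
rng = np.random.default_rng(2)
n_paths, n_steps, T = 20_000, 100, 1.0
dt = T / n_steps
i_s, i_t = 40, 80                        # s = 0.4, t = 0.8

dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.cumsum(dB, axis=1)                # B at grid points dt, 2dt, ..., T
B_left = np.hstack([np.zeros((n_paths, 1)), B[:, :-1]])
B_T = B[:, -1][:, None]

# New-integral Riemann sums: left endpoint for B(u), right for B(T)-B(u).
X = np.cumsum(B_left * (B_T - B) * dB, axis=1)
diff = X[:, i_t - 1] - X[:, i_s - 1]
B_s = B[:, i_s - 1]

print(np.mean(diff), np.mean(diff * B_s))   # both close to 0
```
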

Now, we turn our attention to the backward filtration. For $a \le t \le T$, let
$$\mathcal{F}^{(t)} = \sigma\{B(T)-B(s);\, t \le s \le T\},$$
the $\sigma$-field generated by $B(T)-B(s)$ for all $t \le s \le T$.

Obviously, $\{\mathcal{F}^{(t)};\, a \le t \le T\}$ is a backward filtration. It can also be shown that the stochastic process $X_t$ defined in Equation (5.8) and $Y^{(t)}$ defined in Equation (5.9) are both near-martingales with respect to this backward filtration $\{\mathcal{F}^{(t)}\}$.

Theorem 5.20. Let $f(x)$ and $\varphi(x)$ be continuous functions. Then the stochastic process $X_t = \int_a^t f\bigl(B(s)\bigr)\,\varphi\bigl(B(T)-B(s)\bigr)\,dB(s)$, $a \le t \le T$, is a near-martingale with respect to the backward filtration $\{\mathcal{F}^{(t)}\}$.

Proof. Let $a \le s < t \le T$ and let $\Delta_n = \{s = t_0 < t_1 < \cdots < t_n = t\}$ be a partition of $[s,t]$. Then
$$E\bigl[X_t - X_s \bigm|\mathcal{F}^{(t)}\bigr] = \lim_{\|\Delta_n\|\to 0}\sum_{i=1}^n E\bigl[f\bigl(B(t_{i-1})\bigr)\,\varphi\bigl(B(T)-B(t_i)\bigr)\,\Delta B_i \bigm|\mathcal{F}^{(t)}\bigr], \eqno(5.11)$$
and it is enough to show that for each $1 \le i \le n$,
$$E\bigl[f\bigl(B(t_{i-1})\bigr)\,\varphi\bigl(B(T)-B(t_i)\bigr)\,\Delta B_i \bigm|\mathcal{F}^{(t)}\bigr] = 0.$$
Also note that $\Delta B_i$ is $\mathcal{F}^{(t_{i-1})}$-measurable since
$$\Delta B_i = B(t_i) - B(t_{i-1}) = \bigl(B(T)-B(t_{i-1})\bigr) - \bigl(B(T)-B(t_i)\bigr) \in \mathcal{F}^{(t_{i-1})}.$$
By the tower property of conditional expectation and the $\mathcal{F}^{(t_{i-1})}$-measurability of $\varphi\bigl(B(T)-B(t_i)\bigr)$ and $\Delta B_i$, we have
$$E\bigl[f(B(t_{i-1}))\,\varphi(B(T)-B(t_i))\,\Delta B_i \bigm|\mathcal{F}^{(t)}\bigr] = E\Bigl[E\bigl[f(B(t_{i-1}))\,\varphi(B(T)-B(t_i))\,\Delta B_i \bigm|\mathcal{F}^{(t_{i-1})}\bigr]\Bigm|\mathcal{F}^{(t)}\Bigr] = E\Bigl[\varphi(B(T)-B(t_i))\,\Delta B_i\,E\bigl[f(B(t_{i-1}))\bigm|\mathcal{F}^{(t_{i-1})}\bigr]\Bigm|\mathcal{F}^{(t)}\Bigr]. \eqno(5.12)$$
Note that for each $u > t_{i-1}$, $B(T)-B(u)$ is independent of the $\sigma$-field $\mathcal{F}_{t_{i-1}}$. This implies the independence of the $\sigma$-fields $\mathcal{F}^{(t_{i-1})}$ and $\mathcal{F}_{t_{i-1}}$. Since $f\bigl(B(t_{i-1})\bigr)$ is $\mathcal{F}_{t_{i-1}}$-measurable, it follows that $f\bigl(B(t_{i-1})\bigr)$ is independent of $\mathcal{F}^{(t_{i-1})}$, so $E\bigl[f(B(t_{i-1}))\mid\mathcal{F}^{(t_{i-1})}\bigr] = E\bigl[f(B(t_{i-1}))\bigr]$. Since $E\bigl[f(B(t_{i-1}))\bigr]$ is a deterministic constant, it is $\mathcal{F}^{(t)}$-measurable, and Equation (5.12) becomes
$$E\bigl[f(B(t_{i-1}))\,\varphi(B(T)-B(t_i))\,\Delta B_i \bigm|\mathcal{F}^{(t)}\bigr] = E\bigl[f(B(t_{i-1}))\bigr]\,E\bigl[\varphi(B(T)-B(t_i))\,\Delta B_i \bigm|\mathcal{F}^{(t)}\bigr].$$
Applying the tower property with $\mathcal{F}^{(t)} \subset \mathcal{F}^{(t_i)}$,
$$E\bigl[\varphi(B(T)-B(t_i))\,\Delta B_i \bigm|\mathcal{F}^{(t)}\bigr] = E\Bigl[E\bigl[\varphi(B(T)-B(t_i))\,\Delta B_i \bigm|\mathcal{F}^{(t_i)}\bigr]\Bigm|\mathcal{F}^{(t)}\Bigr] = E\Bigl[\varphi(B(T)-B(t_i))\,E\bigl[\Delta B_i \bigm|\mathcal{F}^{(t_i)}\bigr]\Bigm|\mathcal{F}^{(t)}\Bigr]. \eqno(5.13)$$
Since, for each $1 \le i \le n$, $\mathcal{F}^{(t_i)}$ is generated by $B(T)-B(u)$ for $t_i \le u \le T$, it follows that $\Delta B_i = B(t_i)-B(t_{i-1})$ is independent of $\mathcal{F}^{(t_i)}$. Thus Equation (5.13) becomes
$$E\bigl[f(B(t_{i-1}))\,\varphi(B(T)-B(t_i))\,\Delta B_i \bigm|\mathcal{F}^{(t)}\bigr] = E\bigl[f(B(t_{i-1}))\bigr]\,E[\Delta B_i]\,E\bigl[\varphi(B(T)-B(t_i))\bigm|\mathcal{F}^{(t)}\bigr] = 0,$$
since $E[\Delta B_i] = 0$. Therefore, by Equation (5.11), we conclude that $X_t$ is a near-martingale with respect to the backward filtration $\{\mathcal{F}^{(t)}\}$.

Theorem 5.21. Let $f(x)$ and $\varphi(x)$ be continuous functions. The stochastic process $Y^{(t)} = \int_t^T f\bigl(B(s)\bigr)\,\varphi\bigl(B(T)-B(s)\bigr)\,dB(s)$, defined for $a \le t \le T$, is a near-martingale with respect to the backward filtration $\{\mathcal{F}^{(t)}\}$.

Proof. From the proof of Theorem 5.19, we have already seen that $Y^{(t)} - Y^{(s)} = -(X_t - X_s)$. Also, in the proof of Theorem 5.20, we have that $E\bigl[X_t - X_s \mid \mathcal{F}^{(t)}\bigr] = 0$. Therefore,
$$E\bigl[Y^{(t)} - Y^{(s)} \bigm|\mathcal{F}^{(t)}\bigr] = E\bigl[-(X_t - X_s)\bigm|\mathcal{F}^{(t)}\bigr] = -E\bigl[X_t - X_s\bigm|\mathcal{F}^{(t)}\bigr] = 0.$$
Thus $Y^{(t)}$ is a near-martingale with respect to the backward filtration $\{\mathcal{F}^{(t)}\}$.

5.2 Itô Isometry of the New Stochastic Integral

Recall that in Theorem 3.3 of Section 3.1 we saw the zero expectation and the Itô isometry of Itô integrals. Comparing these properties with the new integral, we have already seen in Theorem 4.17 of Chapter 4 that the zero expectation of the new integrals still holds.

In this section, we investigate the Itô isometry of the new integral. Unfortunately, we do not yet have the Itô isometry for new stochastic integrals in full generality; we have it only for a special class of new integrals. More specifically, we will prove the Itô isometry for the new stochastic integral of an instantly independent process of the form $\varphi\bigl(B(T)-B(t)\bigr)$, where $\varphi(x)$ is an analytic function on $\mathbb{R}$.

We need the following two lemmas in order to prove this Itô isometry.

Lemma 5.22. For $a \le t \le T$, we have
$$E\biggl[\int_a^t \bigl(B(T)-B(s)\bigr)^m dB(s)\int_a^t \bigl(B(T)-B(s)\bigr)^n dB(s)\biggr] = \begin{cases} \dfrac{(m+n-1)!!}{\frac{m+n}{2}+1}\Bigl[(T-a)^{\frac{m+n}{2}+1} - (T-t)^{\frac{m+n}{2}+1}\Bigr] & \text{if } m+n \text{ is even}; \\[1ex] 0 & \text{if } m+n \text{ is odd}. \end{cases} \eqno(5.14)$$

Proof. Let $\tau, \sigma$ be any positive real numbers. Using the Taylor expansion of $e^x$, we have the equality
$$e^{\tau(B(T)-B(t))} = \sum_{m=0}^\infty \frac{\tau^m}{m!}\bigl(B(T)-B(t)\bigr)^m.$$

So
$$\int_a^t e^{\tau(B(T)-B(s))}\,dB(s) = \sum_{m=0}^\infty \frac{\tau^m}{m!}\int_a^t \bigl(B(T)-B(s)\bigr)^m\,dB(s). \eqno(5.15)$$

Consider $E\bigl[\int_a^t e^{\tau(B(T)-B(s))}\,dB(s)\int_a^t e^{\sigma(B(T)-B(s))}\,dB(s)\bigr]$. By Equation (5.15), we can express this expectation as
$$E\biggl[\int_a^t e^{\tau(B(T)-B(s))}dB(s)\int_a^t e^{\sigma(B(T)-B(s))}dB(s)\biggr] = \sum_{n=0}^\infty\sum_{m=0}^\infty \frac{\tau^m\sigma^n}{m!\,n!}\,E\biggl[\int_a^t \bigl(B(T)-B(s)\bigr)^m dB(s)\int_a^t \bigl(B(T)-B(s)\bigr)^n dB(s)\biggr]. \eqno(5.16)$$

Next, we use the Itô formula for the new integrals to express this expectation in another way. We first apply Theorem 4.17 to $\theta(x,y) = -\frac{1}{\tau}e^{\tau(y-x)}$ to obtain
$$\int_a^t e^{\tau(B(T)-B(s))}\,dB(s) = \frac{1}{\tau}e^{\tau(B(T)-B(a))} - \frac{1}{\tau}e^{\tau(B(T)-B(t))} - \frac{\tau}{2}\int_a^t e^{\tau(B(T)-B(s))}\,ds. \eqno(5.17)$$

By Equation (5.17), the expectation $E\bigl[\int_a^t e^{\tau(B(T)-B(s))}\,dB(s)\int_a^t e^{\sigma(B(T)-B(s))}\,dB(s)\bigr]$ can also be expressed as
$$E\biggl[\Bigl(\frac{1}{\tau}e^{\tau(B(T)-B(a))} - \frac{1}{\tau}e^{\tau(B(T)-B(t))} - \frac{\tau}{2}\int_a^t e^{\tau(B(T)-B(s))}ds\Bigr)\Bigl(\frac{1}{\sigma}e^{\sigma(B(T)-B(a))} - \frac{1}{\sigma}e^{\sigma(B(T)-B(t))} - \frac{\sigma}{2}\int_a^t e^{\sigma(B(T)-B(s))}ds\Bigr)\biggr]. \eqno(5.18)$$

Multiplying the two expressions under the expectation, we obtain nine terms. It is then straightforward to compute each expectation term by term. Combining them all together, we find that the expectation in Equation (5.18) simplifies to
$$\frac{2}{(\tau+\sigma)^2}\Bigl[e^{\frac{1}{2}(T-a)(\tau+\sigma)^2} - e^{\frac{1}{2}(T-t)(\tau+\sigma)^2}\Bigr] = \frac{2}{(\tau+\sigma)^2}\biggl[\sum_{n=0}^\infty\frac{(T-a)^n}{2^n n!}(\tau+\sigma)^{2n} - \sum_{n=0}^\infty\frac{(T-t)^n}{2^n n!}(\tau+\sigma)^{2n}\biggr] = \sum_{n=1}^\infty\frac{(T-a)^n - (T-t)^n}{2^{n-1}\,n!}(\tau+\sigma)^{2(n-1)} = \sum_{n=0}^\infty\frac{(T-a)^{n+1} - (T-t)^{n+1}}{2^n\,(n+1)!}(\tau+\sigma)^{2n}. \eqno(5.19)$$
For convenience, let $C_n = \frac{(T-a)^{n+1}-(T-t)^{n+1}}{2^n(n+1)!}$. Applying the binomial theorem to $(\tau+\sigma)^{2n}$, the right-hand side of Equation (5.19) can be rewritten as
$$\sum_{n=0}^\infty C_n(\tau+\sigma)^{2n} = \sum_{n=0}^\infty C_n\sum_{k=0}^{2n}\binom{2n}{k}\sigma^k\tau^{2n-k} = \sum_{n=0}^\infty\sum_{k=0}^{2n} C_n\binom{2n}{k}\sigma^k\tau^{2n-k}.$$
Next we switch the order of the summations to get

$$\sum_{n=0}^\infty\sum_{k=0}^{2n} C_n\binom{2n}{k}\sigma^k\tau^{2n-k} = \sum_{k=0}^\infty\,\sum_{n=\lceil k/2\rceil}^\infty C_n\binom{2n}{k}\sigma^k\tau^{2n-k}.$$
Then we split the summation over $k$ into two parts, according to whether $k$ is odd or even. Hence we have
$$\sum_{k=0}^\infty\,\sum_{n=\lceil k/2\rceil}^\infty C_n\binom{2n}{k}\sigma^k\tau^{2n-k} = \sum_{j=0}^\infty\sum_{n=j+1}^\infty C_n\binom{2n}{2j+1}\sigma^{2j+1}\tau^{2n-2j-1} + \sum_{j=0}^\infty\sum_{n=j}^\infty C_n\binom{2n}{2j}\sigma^{2j}\tau^{2n-2j}.$$
Now we make substitutions in the two inner summations: let $m = 2n-2j-1$ in the first summation and $m = 2n-2j$ in the second. Then $m$ runs over $1, 3, 5, \ldots$ (all positive odd numbers) in the first summation and over $0, 2, 4, \ldots$ (all nonnegative even numbers) in the second. So the above summation becomes
$$\sum_{j=0}^\infty\sum_{m\ \mathrm{odd}} C_{\frac{m+2j+1}{2}}\binom{m+2j+1}{2j+1}\sigma^{2j+1}\tau^m + \sum_{j=0}^\infty\sum_{m\ \mathrm{even}} C_{\frac{m+2j}{2}}\binom{m+2j}{2j}\sigma^{2j}\tau^m = \sum_{\substack{n\ \mathrm{odd}\\ m\ \mathrm{odd}}} C_{\frac{m+n}{2}}\binom{m+n}{n}\sigma^n\tau^m + \sum_{\substack{n\ \mathrm{even}\\ m\ \mathrm{even}}} C_{\frac{m+n}{2}}\binom{m+n}{n}\sigma^n\tau^m,$$
writing $n = 2j+1$ and $n = 2j$, respectively. Finally, we conclude that

$$E\biggl[\int_a^t e^{\tau(B(T)-B(s))}dB(s)\int_a^t e^{\sigma(B(T)-B(s))}dB(s)\biggr] = \sum_{\substack{n\ \mathrm{even}\\ m\ \mathrm{even}}} C_{\frac{m+n}{2}}\binom{m+n}{n}\tau^m\sigma^n + \sum_{\substack{n\ \mathrm{odd}\\ m\ \mathrm{odd}}} C_{\frac{m+n}{2}}\binom{m+n}{n}\tau^m\sigma^n. \eqno(5.20)$$
By comparing the coefficients of $\tau^m\sigma^n$ in Equation (5.20) with those in Equation (5.16), and noting that $m, n$ being both odd or both even is equivalent to $m+n$ being even, we have
$$E\biggl[\int_a^t \bigl(B(T)-B(s)\bigr)^m dB(s)\int_a^t \bigl(B(T)-B(s)\bigr)^n dB(s)\biggr] = \begin{cases} n!\,m!\,\binom{m+n}{n}\,C_{\frac{m+n}{2}} & \text{if } m, n \text{ are both odd or both even}; \\ 0 & \text{otherwise} \end{cases}$$
$$= \begin{cases} \dfrac{(m+n)!}{2^{\frac{m+n}{2}}\bigl(\frac{m+n}{2}+1\bigr)!}\Bigl[(T-a)^{\frac{m+n}{2}+1} - (T-t)^{\frac{m+n}{2}+1}\Bigr] & \text{if } m+n \text{ is even}; \\[1ex] 0 & \text{if } m+n \text{ is odd} \end{cases} \;=\; \begin{cases} \dfrac{(m+n-1)!!}{\frac{m+n}{2}+1}\Bigl[(T-a)^{\frac{m+n}{2}+1} - (T-t)^{\frac{m+n}{2}+1}\Bigr] & \text{if } m+n \text{ is even}; \\[1ex] 0 & \text{if } m+n \text{ is odd}. \end{cases}$$
Thus we complete the proof of Lemma 5.22.

Lemma 5.23. For any $a \le t \le T$ and $m, n \in \{0, 1, 2, \ldots\}$ we have
$$E\biggl[\int_a^t \bigl(B(T)-B(s)\bigr)^n dB(s)\int_a^t \bigl(B(T)-B(s)\bigr)^m dB(s)\biggr] = \int_a^t E\Bigl[\bigl(B(T)-B(s)\bigr)^{n+m}\Bigr]\,ds. \eqno(5.21)$$

Proof. First, recall that if $X$ is normally distributed with mean $0$ and variance $\sigma^2$, the $k$-th moment of $X$ is given by
$$E\bigl(X^k\bigr) = \begin{cases} 0 & \text{if } k \text{ is odd}, \\ \sigma^k (k-1)!! & \text{if } k \text{ is even}. \end{cases}$$
Since $B(T)-B(s)$ is normally distributed with mean $0$ and variance $T-s$, $E\bigl[(B(T)-B(s))^{n+m}\bigr]$ is given by
$$E\Bigl[\bigl(B(T)-B(s)\bigr)^{n+m}\Bigr] = \begin{cases} 0 & \text{if } m+n \text{ is odd}; \\ (T-s)^{\frac{m+n}{2}}(m+n-1)!! & \text{if } m+n \text{ is even}. \end{cases}$$
Therefore,
$$\int_a^t E\Bigl[\bigl(B(T)-B(s)\bigr)^{n+m}\Bigr]\,ds = \begin{cases} (m+n-1)!!\displaystyle\int_a^t (T-s)^{\frac{m+n}{2}}\,ds & \text{if } m+n \text{ is even}; \\ 0 & \text{if } m+n \text{ is odd} \end{cases} = \begin{cases} \dfrac{(m+n-1)!!}{\frac{m+n}{2}+1}\Bigl[(T-a)^{\frac{m+n}{2}+1} - (T-t)^{\frac{m+n}{2}+1}\Bigr] & \text{if } m+n \text{ is even}; \\[1ex] 0 & \text{if } m+n \text{ is odd}, \end{cases}$$
which is exactly the right-hand side of Equation (5.14) in Lemma 5.22. Therefore, Equation (5.21) holds.

Now, we are ready to prove the Itô isometry mentioned above.

Theorem 5.24. Suppose that $\varphi(x)$ is an analytic function on $\mathbb{R}$ such that $E\int_a^T \varphi\bigl(B(T)-B(t)\bigr)^2\,dt < \infty$ and $\int_a^T \varphi\bigl(B(T)-B(t)\bigr)\,dB(t)$ exists. Then for $a \le t \le T$,
$$E\biggl[\Bigl(\int_a^t \varphi\bigl(B(T)-B(s)\bigr)\,dB(s)\Bigr)^{\!2}\biggr] = E\int_a^t \varphi\bigl(B(T)-B(s)\bigr)^2\,ds. \eqno(5.22)$$

Proof. Here, we give only an informal derivation of Equation (5.22). Because of the analyticity of $\varphi$, for $x \in \mathbb{R}$, $\varphi(x)$ can be expanded as
$$\varphi(x) = \sum_{n=0}^\infty \frac{\varphi^{(n)}(0)}{n!}\,x^n,$$
where $\varphi^{(n)}(x)$ is the $n$-th derivative of $\varphi(x)$. We first consider the left-hand side of Equation (5.22):
$$E\biggl[\Bigl(\int_a^t \varphi\bigl(B(T)-B(s)\bigr)\,dB(s)\Bigr)^{\!2}\biggr] = \sum_{n=0}^\infty\sum_{m=0}^\infty \frac{\varphi^{(n)}(0)\,\varphi^{(m)}(0)}{n!\,m!}\,E\biggl[\int_a^t \bigl(B(T)-B(s)\bigr)^n dB(s)\int_a^t \bigl(B(T)-B(s)\bigr)^m dB(s)\biggr]. \eqno(5.23)$$
On the other hand, consider the right-hand side of Equation (5.22):
$$E\int_a^t \varphi\bigl(B(T)-B(s)\bigr)^2\,ds = \sum_{n=0}^\infty\sum_{m=0}^\infty \frac{\varphi^{(n)}(0)\,\varphi^{(m)}(0)}{n!\,m!}\int_a^t E\Bigl[\bigl(B(T)-B(s)\bigr)^{n+m}\Bigr]\,ds. \eqno(5.24)$$
So, Equation (5.22) holds by Lemma 5.23.
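The isometry (5.22) can also be checked by Monte Carlo for a concrete $\varphi$. The sketch below takes $\varphi(x) = e^x$ on $[a,T] = [0,1]$ with $t = 1$, where both sides of (5.22) equal $\tfrac{1}{2}(e^2-1) \approx 3.19$ (this is the variance computed analytically in Example 5.25 below); the discretization and sample size are arbitrary.

```python
import numpy as np

# Monte Carlo check of the isometry (5.22) for phi(x) = e^x on [0, 1].
rng = np.random.default_rng(4)
n_paths, n_steps = 100_000, 50
dt = 1.0 / n_steps

dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.cumsum(dB, axis=1)
B_1 = B[:, -1][:, None]

# New integral: exp(B(1)-B(s)) evaluated at the right endpoint of each step.
I = np.sum(np.exp(B_1 - B) * dB, axis=1)
print(np.mean(I**2), (np.e**2 - 1) / 2)   # the two values should be close
```
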

Example 5.25. We will find the mean and variance of the integral $\int_0^1 e^{B(1)-B(t)}\,dB(t)$. Note that the new integral always has mean zero by Theorem 4.12 in Chapter 4. Thus,
$$E\biggl[\int_0^1 e^{B(1)-B(t)}\,dB(t)\biggr] = 0.$$
Also, since $\varphi(x) = e^x$ is analytic on $\mathbb{R}$, by Theorem 5.24 its variance is given by
$$E\biggl[\Bigl(\int_0^1 e^{B(1)-B(s)}\,dB(s)\Bigr)^{\!2}\biggr] = E\int_0^1 \bigl(e^{B(1)-B(s)}\bigr)^2\,ds = \int_0^1 E\bigl[e^{2(B(1)-B(s))}\bigr]\,ds = \int_0^1 e^{\frac{1}{2}(2)^2(1-s)}\,ds = \int_0^1 e^{2(1-s)}\,ds = \frac{1}{2}\bigl(e^2 - 1\bigr).$$

5.3 Some Formulas for the Integral Computation

In this section, we prove two theorems that express the new integral of certain stochastic processes in terms of Itô integrals and Riemann integrals.

Theorem 5.26. Let $f(t)$ be a stochastic process adapted to $\{\mathcal{F}_t\}$ and let $\varphi(x)$ be an analytic function on $\mathbb{R}$. Assume that $\int_a^T f(t)\,\varphi\bigl(B(T)-B(t)\bigr)\,dB(t)$ exists. Then for $a \le t \le T$,
$$\int_a^t f(s)\,\varphi\bigl(B(T)-B(s)\bigr)\,dB(s) = \sum_{n=0}^\infty (-1)^n\,\frac{\varphi^{(n)}\bigl(B(T)\bigr)}{n!}\biggl[\int_a^t f(s)B(s)^n\,dB(s) + n\int_a^t f(s)B(s)^{n-1}\,ds\biggr], \eqno(5.25)$$
where $\varphi^{(n)}(x)$ is the $n$-th derivative of $\varphi(x)$.

Proof. Let $a \le t \le T$ and let $\Delta_n = \{a = s_0 < s_1 < \cdots < s_n = t\}$ be a partition of $[a,t]$. Then
$$\begin{aligned}
\int_a^t f(s)\,\varphi\bigl(B(T)-B(s)\bigr)\,dB(s)
&= \lim_{\|\Delta_n\|\to 0}\sum_{i=1}^n f(s_{i-1})\,\varphi\bigl(B(T)-B(s_i)\bigr)\,\Delta B_i \\
&= \lim_{\|\Delta_n\|\to 0}\sum_{i=1}^n f(s_{i-1})\sum_{m=0}^\infty \frac{\varphi^{(m)}(0)}{m!}\bigl(B(T)-B(s_i)\bigr)^m\,\Delta B_i \\
&= \lim_{\|\Delta_n\|\to 0}\sum_{i=1}^n f(s_{i-1})\sum_{m=0}^\infty\sum_{k=0}^m \frac{(-1)^k\,\varphi^{(m)}(0)}{(m-k)!\,k!}\,B(T)^{m-k}\,B(s_i)^k\,\Delta B_i.
\end{aligned}$$
Writing $B(s_i) = B(s_{i-1}) + \Delta B_i$ and expanding by the binomial theorem,
$$B(s_i)^k\,\Delta B_i = \Bigl[B(s_{i-1})^k + k\,B(s_{i-1})^{k-1}\Delta B_i + \cdots + (\Delta B_i)^k\Bigr]\Delta B_i = B(s_{i-1})^k\,\Delta B_i + k\,B(s_{i-1})^{k-1}(\Delta B_i)^2 + \cdots + (\Delta B_i)^{k+1}. \eqno(5.26)$$
Since all the terms with $(\Delta B_i)^n$, $n \ge 3$, converge to $0$ in probability, the limit becomes
$$\lim_{\|\Delta_n\|\to 0}\sum_{i=1}^n f(s_{i-1})\sum_{m=0}^\infty\sum_{k=0}^m \frac{(-1)^k\,\varphi^{(m)}(0)}{(m-k)!\,k!}\,B(T)^{m-k}\Bigl[B(s_{i-1})^k\,\Delta B_i + k\,B(s_{i-1})^{k-1}(\Delta B_i)^2\Bigr] \eqno(5.27)$$
$$= \sum_{m=0}^\infty\sum_{k=0}^m \frac{(-1)^k\,\varphi^{(m)}(0)}{(m-k)!\,k!}\,B(T)^{m-k}\biggl[\int_a^t f(s)B(s)^k\,dB(s) + k\int_a^t f(s)B(s)^{k-1}\,ds\biggr]. \eqno(5.28)$$
Next we change the order of summation. The right-hand side of Equation (5.28) becomes
$$\sum_{k=0}^\infty \frac{(-1)^k}{k!}\biggl[\int_a^t f(s)B(s)^k\,dB(s) + k\int_a^t f(s)B(s)^{k-1}\,ds\biggr]\sum_{m=k}^\infty \frac{\varphi^{(m)}(0)}{(m-k)!}\,B(T)^{m-k} = \sum_{k=0}^\infty \frac{(-1)^k\,\varphi^{(k)}\bigl(B(T)\bigr)}{k!}\biggl[\int_a^t f(s)B(s)^k\,dB(s) + k\int_a^t f(s)B(s)^{k-1}\,ds\biggr].$$
Thus, the formula in Equation (5.25) is obtained, and the proof is complete.
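Formula (5.25) can be spot-checked pathwise: for a polynomial $\varphi$ the series terminates, so both sides are directly computable from one simulated path. The sketch below uses the illustrative choices $\varphi(y) = y^2$ and $f(t) = B(t)$, discretizing the new integral as in the proof (adapted factor at the left endpoint, instantly independent factor at the right endpoint) and the Itô integrals at left endpoints.

```python
import numpy as np

# Pathwise check of (5.25) for phi(y) = y^2, f(t) = B(t), [a, T] = [0, 1]:
#   LHS = int_0^t B(s)(B(1)-B(s))^2 dB(s)
#   RHS = B(1)^2 I0 - 2 B(1)(I1 + R0) + I2 + 2 R1,
# where I_k = int_0^t B(s)^(k+1) dB(s) (Ito) and R_k = int_0^t B(s)^(k+1) ds.
rng = np.random.default_rng(5)
n_paths, n_steps, t_frac = 1_000, 1_000, 0.6
m = int(n_steps * t_frac)
dt = 1.0 / n_steps

dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.cumsum(dB, axis=1)
Bl = np.hstack([np.zeros((n_paths, 1)), B[:, :-1]])   # left endpoints
B1 = B[:, -1]

lhs = np.sum(Bl[:, :m] * (B1[:, None] - B[:, :m])**2 * dB[:, :m], axis=1)

I0 = np.sum(Bl[:, :m] * dB[:, :m], axis=1)
I1 = np.sum(Bl[:, :m]**2 * dB[:, :m], axis=1)
I2 = np.sum(Bl[:, :m]**3 * dB[:, :m], axis=1)
R0 = np.sum(Bl[:, :m], axis=1) * dt
R1 = np.sum(Bl[:, :m]**2, axis=1) * dt
rhs = B1**2 * I0 - 2 * B1 * (I1 + R0) + I2 + 2 * R1

print(np.mean((lhs - rhs)**2))   # mean-square discrepancy, near 0
```
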

Example 5.27. Suppose $f(t)$ is an $\{\mathcal{F}_t\}$-adapted stochastic process. Let us evaluate the stochastic integral $\int_0^t f(s)\,e^{B(1)-B(s)}\,dB(s)$ for $0 \le t \le 1$.

First, let $\varphi(x) = e^x$. Of course, $\varphi$ is analytic on $\mathbb{R}$ and $\varphi^{(n)}(x) = e^x$ for every $n = 0, 1, 2, \ldots$ Therefore, by the formula in Theorem 5.26, for $0 \le t \le 1$ we have
$$\begin{aligned}
\int_0^t f(s)\,e^{B(1)-B(s)}\,dB(s)
&= \sum_{n=0}^\infty (-1)^n\,\frac{e^{B(1)}}{n!}\biggl[\int_0^t f(s)B(s)^n\,dB(s) + n\int_0^t f(s)B(s)^{n-1}\,ds\biggr] \\
&= e^{B(1)}\biggl[\int_0^t f(s)\sum_{n=0}^\infty\frac{\bigl(-B(s)\bigr)^n}{n!}\,dB(s) - \int_0^t f(s)\sum_{n=1}^\infty\frac{\bigl(-B(s)\bigr)^{n-1}}{(n-1)!}\,ds\biggr] \\
&= e^{B(1)}\biggl[\int_0^t f(s)\,e^{-B(s)}\,dB(s) - \int_0^t f(s)\,e^{-B(s)}\,ds\biggr].
\end{aligned}$$
Thus, for $0 \le t \le 1$, we have the expression
$$\int_0^t f(s)\,e^{B(1)-B(s)}\,dB(s) = e^{B(1)}\biggl[\int_0^t f(s)\,e^{-B(s)}\,dB(s) - \int_0^t f(s)\,e^{-B(s)}\,ds\biggr].$$
The formula (5.25) in Theorem 5.26 is used to evaluate the new stochastic integral of a product of an adapted stochastic process and an instantly independent stochastic process of the form $\varphi\bigl(B(T)-B(t)\bigr)$, $a \le t \le T$, where $\varphi$ is an analytic function on $\mathbb{R}$. In fact, by using the same theorem, we are able to prove another formula to evaluate the anticipating stochastic integral of the form

$$\int_0^t f(s)\,\varphi\bigl(B(T)\bigr)\,dB(s), \quad a \le t \le T,$$
where $f(t)$ is an adapted stochastic process and $\varphi(x)$ is an analytic function on $\mathbb{R}$.

Theorem 5.28. Let $f(t)$ be adapted with respect to $\{\mathcal{F}_t\}$ and let $\varphi(x)$ be an analytic function on $\mathbb{R}$. Assume that $\int_a^T f(t)\,\varphi\bigl(B(T)\bigr)\,dB(t)$ exists. Then for $a \le t \le T$,
$$\int_a^t f(s)\,\varphi\bigl(B(T)\bigr)\,dB(s) = \sum_{k=0}^\infty\sum_{n=0}^\infty \frac{(-1)^n\,\varphi^{(n+k)}\bigl(B(T)\bigr)}{n!\,k!}\biggl[\int_a^t f(s)B(s)^{n+k}\,dB(s) + n\int_a^t f(s)B(s)^{n+k-1}\,ds\biggr], \eqno(5.29)$$
where $\varphi^{(n)}$ is the $n$-th derivative of $\varphi$.

Proof. First note that $\varphi$ is analytic, so for any $x, y \in \mathbb{R}$ we have the expansion $\varphi(x+y) = \sum_{k=0}^\infty \frac{\varphi^{(k)}(x)}{k!}\,y^k$. Therefore, writing $B(T) = \bigl(B(T)-B(s)\bigr) + B(s)$, we can write
$$\int_a^t f(s)\,\varphi\bigl(B(T)\bigr)\,dB(s) = \int_a^t f(s)\sum_{k=0}^\infty \frac{\varphi^{(k)}\bigl(B(T)-B(s)\bigr)}{k!}\,B(s)^k\,dB(s) = \sum_{k=0}^\infty \frac{1}{k!}\int_a^t f(s)B(s)^k\,\varphi^{(k)}\bigl(B(T)-B(s)\bigr)\,dB(s). \eqno(5.30)$$
Since $f(s)B(s)^k$ is adapted and $\varphi^{(k)}$ is still analytic on $\mathbb{R}$, we apply Theorem 5.26, with $f(s)B(s)^k$ in place of $f(s)$ and $\varphi^{(k)}$ in place of $\varphi$, to the integral on the right-hand side of (5.30). This yields
$$\int_a^t f(s)\,\varphi\bigl(B(T)\bigr)\,dB(s) = \sum_{k=0}^\infty\frac{1}{k!}\sum_{n=0}^\infty (-1)^n\,\frac{\bigl(\varphi^{(k)}\bigr)^{(n)}\bigl(B(T)\bigr)}{n!}\biggl[\int_a^t f(s)B(s)^{k+n}\,dB(s) + n\int_a^t f(s)B(s)^{k+n-1}\,ds\biggr] = \sum_{k=0}^\infty\sum_{n=0}^\infty \frac{(-1)^n\,\varphi^{(n+k)}\bigl(B(T)\bigr)}{n!\,k!}\biggl[\int_a^t f(s)B(s)^{n+k}\,dB(s) + n\int_a^t f(s)B(s)^{n+k-1}\,ds\biggr].$$

Example 5.29. Suppose $f(t)$ is $\{\mathcal{F}_t\}$-adapted. For $0 \le t \le 2$, let us evaluate the integral $\int_0^t f(s)\,e^{B(2)}\,dB(s)$ by using the formula in Theorem 5.28.

Let $0 \le t \le 2$. As in Example 5.27, let $\varphi(x) = e^x$. Then $\varphi^{(n+k)}(x) = e^x$ for all non-negative integers $n, k$. By Theorem 5.28, we have
$$\int_0^t f(s)\,e^{B(2)}\,dB(s) = \sum_{k=0}^\infty\sum_{n=0}^\infty (-1)^n\,\frac{e^{B(2)}}{n!\,k!}\biggl[\int_0^t f(s)B(s)^{n+k}\,dB(s) + n\int_0^t f(s)B(s)^{n+k-1}\,ds\biggr].$$
That is,
$$\begin{aligned}
\int_0^t f(s)\,e^{B(2)}\,dB(s)
&= e^{B(2)}\sum_{n=0}^\infty \frac{(-1)^n}{n!}\biggl[\int_0^t f(s)B(s)^n\sum_{k=0}^\infty\frac{B(s)^k}{k!}\,dB(s) + n\int_0^t f(s)B(s)^{n-1}\sum_{k=0}^\infty\frac{B(s)^k}{k!}\,ds\biggr] \\
&= e^{B(2)}\biggl[\int_0^t f(s)\,e^{B(s)}\sum_{n=0}^\infty\frac{\bigl(-B(s)\bigr)^n}{n!}\,dB(s) - \int_0^t f(s)\,e^{B(s)}\sum_{n=1}^\infty\frac{\bigl(-B(s)\bigr)^{n-1}}{(n-1)!}\,ds\biggr] \\
&= e^{B(2)}\biggl[\int_0^t f(s)\,dB(s) - \int_0^t f(s)\,ds\biggr].
\end{aligned}$$
Thus for $0 \le t \le 2$, we have the equality
$$\int_0^t f(s)\,e^{B(2)}\,dB(s) = e^{B(2)}\biggl[\int_0^t f(s)\,dB(s) - \int_0^t f(s)\,ds\biggr].$$
A special case of the above equation, with $f(t) = 1$, is given in Example 4.16 of Chapter 4.

5.4 Generalized Itô's Formula for the New Stochastic Integral

In this section, we generalize the Itô formula for the new stochastic integral obtained by Ayed and Kuo (Theorem 4.17) in Chapter 4.

Recall that Theorem 4.17 states that if $\theta(x,y) = f(x)\varphi(y-x)$, where $f$ and $\varphi$ are $C^2$-functions on $\mathbb{R}$, then for $a \le t \le T$,
$$d\theta\bigl(B(t),B(T)\bigr) = \frac{\partial\theta}{\partial x}\bigl(B(t),B(T)\bigr)\,dB(t) + \biggl[\frac{1}{2}\,\frac{\partial^2\theta}{\partial x^2}\bigl(B(t),B(T)\bigr) + \frac{\partial^2\theta}{\partial x\,\partial y}\bigl(B(t),B(T)\bigr)\biggr]\,dt.$$

The assumption that $\theta(x,y)$ must be of the form $f(x)\varphi(y-x)$ makes the theorem somewhat difficult to apply. Let us illustrate this with a very simple example.

Example 5.30. We will evaluate the integral $\int_0^t B(1)\,dB(s)$ for $0 \le t \le 1$ by using Theorem 4.17.

Here we want $\frac{\partial\theta}{\partial x}(x,y) = y$, so we let $\theta(x,y) = xy$. We see that $\theta(x,y)$ is not of the form $f(x)\varphi(y-x)$, so we are not able to apply Theorem 4.17 to this function directly. However, it can be rewritten as
$$\theta(x,y) = xy = x(y-x) + x^2 = \theta_1(x,y) + \theta_2(x,y),$$
where $\theta_1(x,y) = x(y-x)$ and $\theta_2(x,y) = x^2$. With $x = B(t)$ and $y = B(1)$, it follows that
$$d\theta\bigl(B(t),B(1)\bigr) = d\theta_1\bigl(B(t),B(1)\bigr) + d\theta_2\bigl(B(t),B(1)\bigr).$$
That is,
$$d\bigl(B(t)B(1)\bigr) = d\Bigl(B(t)\bigl(B(1)-B(t)\bigr)\Bigr) + d\bigl(B(t)^2\bigr). \eqno(5.31)$$
Now, $\theta_1(x,y) = x(y-x)$ is of the form $f(x)\varphi(y-x)$, so we can apply Theorem 4.17 to $\theta_1$. Since
$$\frac{\partial\theta_1}{\partial x}(x,y) = y - 2x, \quad \frac{\partial^2\theta_1}{\partial x^2}(x,y) = -2, \quad \frac{\partial^2\theta_1}{\partial x\,\partial y}(x,y) = 1,$$
by Theorem 4.17, we have
$$d\Bigl(B(t)\bigl(B(1)-B(t)\bigr)\Bigr) = \bigl(B(1)-2B(t)\bigr)\,dB(t) + \Bigl[\frac{1}{2}(-2) + 1\Bigr]\,dt = \bigl(B(1)-2B(t)\bigr)\,dB(t). \eqno(5.32)$$
Note that we can compute $d\bigl(B(t)^2\bigr)$ easily by using the regular Itô formula in Theorem 3.20 with $f(x) = x^2$. So
$$d\bigl(B(t)^2\bigr) = 2B(t)\,dB(t) + \frac{1}{2}(2)\,dt = 2B(t)\,dB(t) + dt. \eqno(5.33)$$
From Equations (5.32) and (5.33), Equation (5.31) becomes
$$d\bigl(B(t)B(1)\bigr) = \bigl(B(1)-2B(t)\bigr)\,dB(t) + 2B(t)\,dB(t) + dt = B(1)\,dB(t) + dt.$$
Thus
$$B(t)B(1) = \int_0^t B(1)\,dB(s) + \int_0^t ds = \int_0^t B(1)\,dB(s) + t,$$
which implies that
$$\int_0^t B(1)\,dB(s) = B(1)B(t) - t, \quad 0 \le t \le 1.$$
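This conclusion can be verified pathwise from the Riemann-sum definition of the new integral: decompose $B(1) = \bigl(B(1)-B(s)\bigr) + B(s)$ and take the instantly independent part at the right endpoint and the adapted part at the left endpoint of each subinterval. The parameters in the sketch below are illustrative.

```python
import numpy as np

# Pathwise check of Example 5.30: int_0^t B(1) dB(s) = B(1)B(t) - t.
rng = np.random.default_rng(6)
n_paths, n_steps, t_frac = 2_000, 1_000, 0.6
m = int(n_steps * t_frac)
dt = 1.0 / n_steps

dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.cumsum(dB, axis=1)
Bl = np.hstack([np.zeros((n_paths, 1)), B[:, :-1]])   # left endpoints
B1 = B[:, -1]

# (B(1)-B(s_i)) at the right endpoint + B(s_{i-1}) at the left endpoint.
integral = np.sum(((B1[:, None] - B[:, :m]) + Bl[:, :m]) * dB[:, :m], axis=1)
closed_form = B1 * B[:, m - 1] - t_frac

print(np.mean((integral - closed_form)**2))   # near 0
```
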

We see that it takes many steps to evaluate even a simple stochastic integral. It is therefore desirable to adjust the assumed form of $\theta(x,y)$ so that the theorem becomes easier to use. The two theorems below provide Itô formulas for functions $\theta(x,y)$ of the form $f(x)\varphi(y)$. We not only adjust the form of $\theta(x,y)$, but also allow $x$ and $y$ to be evaluated at more general stochastic processes. Namely, instead of $B(t)$, we let $x$ be evaluated at an Itô integral; and instead of $B(T)$, we allow $y$ to be evaluated at a stochastic process $Y^{(t)} = \int_t^T h(s)\,dB(s)$ or at a random variable $Y^{(a)} = \int_a^T h(s)\,dB(s)$, where $h$ is a deterministic function. The first theorem is a generalized Itô formula for the case when the first variable $x$ is evaluated at an Itô integral $X_t$ and the second variable $y$ is evaluated at $Y^{(t)} = \int_t^T h(s)\,dB(s)$.

Theorem 5.31. Let $X_t = \int_a^t \xi(s)\,dB(s)$, $a \le t \le T$, where $\xi(t) \in L^2_{\mathrm{ad}}\bigl([a,T]\times\Omega\bigr)$, and let $Y^{(t)} = \int_t^T h(s)\,dB(s)$, $a \le t \le T$, where $h(t) \in L^2[a,T]$. Let $\theta(x,y) = f(x)\varphi(y)$, where $f$ and $\varphi$ are both $C^2$-functions on $\mathbb{R}$. Then for $a \le t \le T$,
$$\theta\bigl(X_t,Y^{(t)}\bigr) = \theta\bigl(X_a,Y^{(a)}\bigr) + \int_a^t \frac{\partial\theta}{\partial x}\bigl(X_s,Y^{(s)}\bigr)\,dX_s + \frac{1}{2}\int_a^t \frac{\partial^2\theta}{\partial x^2}\bigl(X_s,Y^{(s)}\bigr)\,(dX_s)^2 + \int_a^t \frac{\partial\theta}{\partial y}\bigl(X_s,Y^{(s)}\bigr)\,dY^{(s)} - \frac{1}{2}\int_a^t \frac{\partial^2\theta}{\partial y^2}\bigl(X_s,Y^{(s)}\bigr)\,\bigl(dY^{(s)}\bigr)^2, \eqno(5.34)$$
or equivalently,
$$\theta\bigl(X_t,Y^{(t)}\bigr) - \theta\bigl(X_a,Y^{(a)}\bigr) = \int_a^t \frac{\partial\theta}{\partial x}\bigl(X_s,Y^{(s)}\bigr)\,\xi(s)\,dB(s) + \frac{1}{2}\int_a^t \frac{\partial^2\theta}{\partial x^2}\bigl(X_s,Y^{(s)}\bigr)\,\xi^2(s)\,ds - \int_a^t \frac{\partial\theta}{\partial y}\bigl(X_s,Y^{(s)}\bigr)\,h(s)\,dB(s) - \frac{1}{2}\int_a^t \frac{\partial^2\theta}{\partial y^2}\bigl(X_s,Y^{(s)}\bigr)\,h^2(s)\,ds. \eqno(5.35)$$

With $\xi(t) = 1$ and $h(t) = 1$, $X_t$ becomes $B(t)$ and $Y^{(t)}$ becomes $B(T)-B(t)$. We have the following corollary.

Corollary 5.32. Let $\theta(x,y) = f(x)\varphi(y)$ where $f$ and $\varphi$ are both $C^2$-functions on $\mathbb{R}$. Then for $a \le t \le T$,
$$\theta\bigl(B(t),B(T)-B(t)\bigr) - \theta\bigl(B(a),B(T)-B(a)\bigr) = \int_a^t \frac{\partial\theta}{\partial x}\bigl(B(s),B(T)-B(s)\bigr)\,dB(s) + \frac{1}{2}\int_a^t \frac{\partial^2\theta}{\partial x^2}\bigl(B(s),B(T)-B(s)\bigr)\,ds - \int_a^t \frac{\partial\theta}{\partial y}\bigl(B(s),B(T)-B(s)\bigr)\,dB(s) - \frac{1}{2}\int_a^t \frac{\partial^2\theta}{\partial y^2}\bigl(B(s),B(T)-B(s)\bigr)\,ds. \eqno(5.36)$$

Example 5.33. Let us evaluate the integral $\int_0^t B(s)\bigl(B(1)-B(s)\bigr)\,dB(s)$ for $0 \le t \le 1$ by using the above corollary.

First, we let $\theta(x,y) = \frac{x^2}{2}\,y$ so that we have $\frac{\partial\theta}{\partial x}(x,y) = xy$. It follows that
$$\frac{\partial^2\theta}{\partial x^2}(x,y) = y, \quad \frac{\partial\theta}{\partial y}(x,y) = \frac{x^2}{2}, \quad \frac{\partial^2\theta}{\partial y^2}(x,y) = 0.$$
Therefore, by Corollary 5.32, we have
$$\frac{1}{2}B(t)^2\bigl(B(1)-B(t)\bigr) - \frac{1}{2}B(0)^2\bigl(B(1)-B(0)\bigr) = \int_0^t B(s)\bigl(B(1)-B(s)\bigr)\,dB(s) + \frac{1}{2}\int_0^t \bigl(B(1)-B(s)\bigr)\,ds - \int_0^t \frac{B(s)^2}{2}\,dB(s) - 0.$$
That is, for $0 \le t \le 1$,
$$\int_0^t B(s)\bigl(B(1)-B(s)\bigr)\,dB(s) = \frac{1}{2}B(t)^2\bigl(B(1)-B(t)\bigr) - \frac{1}{2}\int_0^t \bigl(B(1)-B(s)\bigr)\,ds + \frac{1}{2}\int_0^t B(s)^2\,dB(s) = \frac{1}{2}B(t)^2\bigl(B(1)-B(t)\bigr) - \frac{1}{2}tB(1) + \frac{1}{2}\int_0^t B(s)\,ds + \frac{1}{2}\int_0^t B(s)^2\,dB(s). \eqno(5.37)$$
Note from Example 3.26 that $\int_0^t B(s)^2\,dB(s) = \frac{B(t)^3}{3} - \int_0^t B(s)\,ds$, so Equation (5.37) becomes
$$\int_0^t B(s)\bigl(B(1)-B(s)\bigr)\,dB(s) = \frac{1}{2}B(1)B(t)^2 - \frac{1}{2}B(t)^3 - \frac{1}{2}tB(1) + \frac{1}{2}\int_0^t B(s)\,ds + \frac{1}{2}\Bigl(\frac{B(t)^3}{3} - \int_0^t B(s)\,ds\Bigr) = \frac{1}{2}B(1)B(t)^2 - \frac{1}{2}tB(1) - \frac{B(t)^3}{3} = \frac{1}{2}B(1)\bigl(B(t)^2 - t\bigr) - \frac{B(t)^3}{3}.$$
Observe that this result coincides with the one obtained by direct computation in Example 4.13.
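The closed form in Example 5.33 can likewise be checked pathwise against the Riemann-sum definition of the new integral (left endpoint for the adapted factor, right endpoint for the instantly independent one); the parameters below are illustrative.

```python
import numpy as np

# Pathwise check of Example 5.33:
#   int_0^t B(s)(B(1)-B(s)) dB(s) = (1/2) B(1)(B(t)^2 - t) - B(t)^3 / 3.
rng = np.random.default_rng(7)
n_paths, n_steps, t_frac = 2_000, 1_000, 0.6
m = int(n_steps * t_frac)
dt = 1.0 / n_steps

dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.cumsum(dB, axis=1)
Bl = np.hstack([np.zeros((n_paths, 1)), B[:, :-1]])   # left endpoints
B1, Bt = B[:, -1], B[:, m - 1]

integral = np.sum(Bl[:, :m] * (B1[:, None] - B[:, :m]) * dB[:, :m], axis=1)
closed_form = 0.5 * B1 * (Bt**2 - t_frac) - Bt**3 / 3

print(np.mean((integral - closed_form)**2))   # near 0
```
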

Furthermore, using an idea similar to the one in the above example, we can compute the integral $\int_a^t f\bigl(B(s)\bigr)\bigl(B(T)-B(s)\bigr)\,dB(s)$, $a \le t \le T$, for any $C^2$-function $f(x)$ on $\mathbb{R}$ with an antiderivative $F(x)$, by simply letting $\theta(x,y) = yF(x)$.

The next theorem is another generalized version of the Itô formula for the new stochastic integral. The difference between this version and the previous one (Theorem 5.31) is the evaluation of the second variable of the function $\theta$: instead of being evaluated at the stochastic process $Y^{(t)} = \int_t^T h(s)\,dB(s)$, the variable $y$ of $\theta(x,y)$ is evaluated at the random variable $Y^{(a)} = \int_a^T h(s)\,dB(s)$.

Theorem 5.34. Let $X_t = \int_a^t \xi(s)\,dB(s)$, $a \le t \le T$, where $\xi(t) \in L^2_{\mathrm{ad}}\bigl([a,T]\times\Omega\bigr)$, and let $Y^{(t)} = \int_t^T h(s)\,dB(s)$, $a \le t \le T$, where $h(t) \in L^2[a,T]$. Let $\theta(x,y) = f(x)\varphi(y)$, where $f$ is a $C^2$-function on $\mathbb{R}$ and $\varphi$ is an analytic function on $\mathbb{R}$. Then for $a \le t \le T$,
$$\theta\bigl(X_t,Y^{(a)}\bigr) = \theta\bigl(X_a,Y^{(a)}\bigr) + \int_a^t \frac{\partial\theta}{\partial x}\bigl(X_s,Y^{(a)}\bigr)\,dX_s + \frac{1}{2}\int_a^t \frac{\partial^2\theta}{\partial x^2}\bigl(X_s,Y^{(a)}\bigr)\,(dX_s)^2 - \int_a^t \frac{\partial^2\theta}{\partial x\,\partial y}\bigl(X_s,Y^{(a)}\bigr)\,(dX_s)(dY^{(s)}),$$
or equivalently,
$$\theta\bigl(X_t,Y^{(a)}\bigr) - \theta\bigl(X_a,Y^{(a)}\bigr) = \int_a^t \frac{\partial\theta}{\partial x}\bigl(X_s,Y^{(a)}\bigr)\,\xi(s)\,dB(s) + \frac{1}{2}\int_a^t \frac{\partial^2\theta}{\partial x^2}\bigl(X_s,Y^{(a)}\bigr)\,\xi^2(s)\,ds + \int_a^t \frac{\partial^2\theta}{\partial x\,\partial y}\bigl(X_s,Y^{(a)}\bigr)\,\xi(s)h(s)\,ds.$$
Again, with $\xi(t) = 1$ and $h(t) = 1$, we also have the following corollary.

Corollary 5.35. Let $\theta(x,y) = f(x)\varphi(y)$, where $f$ is a $C^2$-function on $\mathbb{R}$ and $\varphi$ is an analytic function on $\mathbb{R}$. Then for $a \le t \le T$,
$$\theta\bigl(B(t),B(T)-B(a)\bigr) - \theta\bigl(B(a),B(T)-B(a)\bigr) = \int_a^t \frac{\partial\theta}{\partial x}\bigl(B(s),B(T)-B(a)\bigr)\,dB(s) + \frac{1}{2}\int_a^t \frac{\partial^2\theta}{\partial x^2}\bigl(B(s),B(T)-B(a)\bigr)\,ds + \int_a^t \frac{\partial^2\theta}{\partial x\,\partial y}\bigl(B(s),B(T)-B(a)\bigr)\,ds.$$
One very useful application of Corollary 5.35 is that we can easily evaluate stochastic integrals of the form
$$\int_0^t g\bigl(B(T)\bigr)\,dB(s), \quad 0 \le t \le T,$$
where $g$ is an analytic function on $\mathbb{R}$. Namely, we let $\theta(x,y) = xg(y)$ so that $\frac{\partial\theta}{\partial x}(x,y) = g(y)$. It follows that
$$\frac{\partial^2\theta}{\partial x^2}(x,y) = 0, \quad \frac{\partial^2\theta}{\partial x\,\partial y}(x,y) = g'(y).$$
Then by Corollary 5.35 with $a = 0$ (so that $B(T)-B(a) = B(T)$), for $0 \le t \le T$ we have
$$B(t)\,g\bigl(B(T)\bigr) - B(0)\,g\bigl(B(T)\bigr) = \int_0^t g\bigl(B(T)\bigr)\,dB(s) + 0 + \int_0^t g'\bigl(B(T)\bigr)\,ds.$$
This implies that for $0 \le t \le T$,
$$\int_0^t g\bigl(B(T)\bigr)\,dB(s) = B(t)\,g\bigl(B(T)\bigr) - \int_0^t g'\bigl(B(T)\bigr)\,ds = B(t)\,g\bigl(B(T)\bigr) - t\,g'\bigl(B(T)\bigr). \eqno(5.38)$$

Example 5.36. Let us use Equation (5.38) to evaluate the integral $\int_0^t B(1)^2\,dB(s)$ for $0 \le t \le 1$. We let $g(x) = x^2$ and $T = 1$. By Equation (5.38), we obtain
$$\int_0^t B(1)^2\,dB(s) = B(t)\,B(1)^2 - t\cdot 2B(1) = B(1)^2 B(t) - 2B(1)\,t, \quad 0 \le t \le 1,$$
which is exactly the same result as in Example 4.15. Note that this method is much easier than the direct computation in Example 4.15 because we do not need to compute as many integrals.
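Example 5.36 can also be verified pathwise. Following the expansion behind Theorem 5.28, the anticipating integrand $B(1)^2$ is discretized on each subinterval as $\bigl(B(1)-B(s_i)\bigr)^2 + 2B(s_{i-1})\bigl(B(1)-B(s_i)\bigr) + B(s_{i-1})^2$, with instantly independent parts at the right endpoint and adapted parts at the left endpoint; parameters are illustrative.

```python
import numpy as np

# Pathwise check of Example 5.36: int_0^t B(1)^2 dB(s) = B(1)^2 B(t) - 2B(1)t.
rng = np.random.default_rng(8)
n_paths, n_steps, t_frac = 2_000, 1_000, 0.5
m = int(n_steps * t_frac)
dt = 1.0 / n_steps

dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.cumsum(dB, axis=1)
Bl = np.hstack([np.zeros((n_paths, 1)), B[:, :-1]])   # left endpoints
B1, Bt = B[:, -1], B[:, m - 1]

inc = B1[:, None] - B[:, :m]                           # right-endpoint values
integrand = inc**2 + 2 * Bl[:, :m] * inc + Bl[:, :m]**2
integral = np.sum(integrand * dB[:, :m], axis=1)
closed_form = B1**2 * Bt - 2 * B1 * t_frac

print(np.mean((integral - closed_form)**2))   # near 0
```
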

Moreover, Corollary 5.35 can also be used to evaluate stochastic integrals of the form
$$\int_0^t \psi\bigl(B(s)\bigr)\,g\bigl(B(T)\bigr)\,dB(s), \quad 0 \le t \le T,$$
given that $g$ is analytic on $\mathbb{R}$ and $\psi$ is a $C^2$-function with antiderivative $\Psi$.

This can be done by letting $\theta(x,y) = \Psi(x)\,g(y)$. Then it follows that
$$\frac{\partial\theta}{\partial x}(x,y) = \psi(x)\,g(y), \quad \frac{\partial^2\theta}{\partial x^2}(x,y) = \psi'(x)\,g(y), \quad \frac{\partial^2\theta}{\partial x\,\partial y}(x,y) = \psi(x)\,g'(y).$$
Therefore, by Corollary 5.35 with $a = 0$ and $0 \le t \le T$, we have
$$\Psi\bigl(B(t)\bigr)\,g\bigl(B(T)\bigr) = \Psi\bigl(B(0)\bigr)\,g\bigl(B(T)\bigr) + \int_0^t \psi\bigl(B(s)\bigr)\,g\bigl(B(T)\bigr)\,dB(s) + \frac{1}{2}\int_0^t \psi'\bigl(B(s)\bigr)\,g\bigl(B(T)\bigr)\,ds + \int_0^t \psi\bigl(B(s)\bigr)\,g'\bigl(B(T)\bigr)\,ds.$$
This can be rewritten as
$$\begin{aligned}
\int_0^t \psi\bigl(B(s)\bigr)\,g\bigl(B(T)\bigr)\,dB(s)
&= \Psi\bigl(B(t)\bigr)\,g\bigl(B(T)\bigr) - \Psi\bigl(B(0)\bigr)\,g\bigl(B(T)\bigr) - \frac{g\bigl(B(T)\bigr)}{2}\int_0^t \psi'\bigl(B(s)\bigr)\,ds - \int_0^t \psi\bigl(B(s)\bigr)\,g'\bigl(B(T)\bigr)\,ds \\
&= g\bigl(B(T)\bigr)\biggl[\Psi\bigl(B(t)\bigr) - \Psi\bigl(B(0)\bigr) - \frac{1}{2}\int_0^t \psi'\bigl(B(s)\bigr)\,ds\biggr] - \int_0^t \psi\bigl(B(s)\bigr)\,g'\bigl(B(T)\bigr)\,ds \\
&= g\bigl(B(T)\bigr)\int_0^t \psi\bigl(B(s)\bigr)\,dB(s) - \int_0^t \psi\bigl(B(s)\bigr)\,g'\bigl(B(T)\bigr)\,ds,
\end{aligned}$$
where the last equality follows from the regular Itô formula in Theorem 3.20.

Note that the specific case $g(x) = x$ and $T = 1$ is provided in Example 2.7 of [1]. Namely,
$$\int_0^t B(1)\,\psi\bigl(B(s)\bigr)\,dB(s) = B(1)\int_0^t \psi\bigl(B(s)\bigr)\,dB(s) - \int_0^t \psi\bigl(B(s)\bigr)\,ds, \quad 0 \le t \le 1.$$

References

[1] W. Ayed and H.-H. Kuo, An extension of the Itô integral, Communications on Stochastic Analysis 2 (2008), no. 3, 323–333.

[2] W. Ayed and H.-H. Kuo, An extension of the Itô integral: Toward a general theory of stochastic integration, Theory of Stochastic Processes 16(32) (2010), no. 1, 17–28.

[3] R. Buckdahn, Anticipated Girsanov transformations, Probability Theory and Related Fields 89 (1991), 211-238.

[4] J. N. Esunge, White Noise Methods for Anticipating Stochastic Differential Equations, PhD Thesis, Louisiana State University, 2008.

[5] M. Hitsuda, Formula for Brownian partial derivatives, Second Japan-USSR Symposium Probab. Th. 2 (1972), 111–114.

[6] M. Hitsuda, Formula for Brownian partial derivatives, Publ. Faculty of Integrated Arts and Sciences, Hiroshima University, Series III, Vol. 4 (1978), 1–15.

[7] K. Itô, Extension of stochastic integrals, Proc. Intern. Symp. on Stochastic Differential Equations, K. Itô (Ed.), Kinokuniya (1978), 95–109.

[8] H.-H. Kuo, White Noise Distribution Theory, CRC Press, Boca Raton, FL, 1996.

[9] H.-H. Kuo, Introduction to Stochastic Integration, Universitext, Springer, New York, 2006.

[10] H.-H. Kuo, A. Sae-Tang, and B. Szozda, A stochastic integral for adapted and instantly independent stochastic processes, Preprint (2010).

[11] S. K. Lee, On Moment Conditions for the , PhD Thesis, Louisiana State University, 2006.

[12] J. A. León and P. Protter, Some formulas for anticipative Girsanov transformations, in: Chaos Expansion, Multiple Wiener-Itô Integrals and Their Applications (1994), 267–291, Probab. Stochastics Ser., CRC Press, Boca Raton, FL.

[13] D. Nualart and E. Pardoux, Stochastic calculus with anticipating integrands, Probability Theory and Related Fields 78 (1988), 535–581.

[14] G. Di Nunno, B. Øksendal and F. Proske, Malliavin Calculus for Lévy Processes with Applications to Finance, Universitext, Springer, New York, 2009.

[15] E. Pardoux and P. Protter, A two-sided stochastic integral and its calculus, Probability Theory and Related Fields 76 (1987), 15–49.

[16] S. M. Ross, Introduction to Probability Models, Academic Press, Burlington, MA, 2010.

[17] A. V. Skorokhod, On a generalization of a stochastic integral, Theory Probab. Appl. 20 (1975) 219–233.

[18] N. Wiener, Differential space, J. Math. Phys. 2 (1923), 131–174.

Vita

Anuwat Sae-Tang was born on March 20, 1983, in Bangkok, Thailand. He finished his undergraduate studies at Chulalongkorn University in March 2005. In August 2006, he came to Louisiana State University to pursue graduate studies in mathematics, and he earned a Master of Science degree in mathematics from Louisiana State University in May 2008. He is currently a candidate for the degree of Doctor of Philosophy in mathematics, which will be awarded in August 2011.
