BEN-GURION UNIVERSITY OF THE NEGEV THE FACULTY OF NATURAL SCIENCES DEPARTMENT OF

GENERALIZED SPACE ANALYSIS AND STOCHASTIC INTEGRATION WITH RESPECT TO A CLASS OF GAUSSIAN STATIONARY INCREMENT PROCESSES

THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE MASTER OF SCIENCES DEGREE

ALON KIPNIS

UNDER THE SUPERVISION OF: PROFESSOR DANIEL ALPAY

MAY 2012 ABSTRACT

We extend the ideas in the basis of Hida’s white noise space into the case where the fundamental has a non-white spectrum. In particular, we show that a Skorokhod-Hitsuda with respect to this process, which obeys the Wick-Itˆo rules, can be naturally defined in this new setting. We use the spectral representation of the process to define a Fourier integral on L2(R). The Bochner-Minlos theorem is then applied to a characteristic functional on the Schwartz space of rapidly decreasing functions defined in terms of this operator, to obtain a probability measure on the topological dual of the Schwartz space, the space of tempered distributions. In the thus obtained we define the counterpart of the S-transform, and use it to define the stochastic integral and prove an associated Itˆoformula. We demonstrate an application of our stochastic integration approach to formulate and solve an optimal stochastic control problem. CONTENTS

1. Introduction ...... 4

Part I Preliminaries 6

2. Background ...... 7 2.1 Countably-Normed spaces...... 7 2.2 Countably-Hilbert spaces...... 8 2.3 Nuclear spaces...... 9 2.4 Gelfand triples...... 11 2.5 The space of Schwartz Functions and its Dual...... 11 2.6 Abstract Gaussian Hilbert Spaces...... 12

3. Hida’s White Noise space ...... 14 3.1 The Wiener-Hermite Chaos Expansion...... 15 3.2 Spaces of stochastic test functions and generalized functions...... 16 3.3 The S-Transform...... 18 3.4 Stochastic Integration in the White Noise Space...... 18

Part II Results 22

4. Introduction ...... 23

5. The m Noise Space ...... 27 5.1 Definition...... 27

5.2 The process Bm ...... 31

5.3 The Sm transform...... 34

6. Stochastic integration in Wm ...... 40 6.1 Itˆo’sformula...... 44 Contents 3

6.2 Relation to other white-noise extensions of Wick-Itˆointegral...... 47

7. Application in optimal control ...... 50 7.1 Solution...... 53 7.2 Simulation...... 54

8. Conclusions ...... 56 1. INTRODUCTION

In many mathematical models of real world systems some parameter values may fluc- tuate and vary in time or space in such a way that seems random to us. One way of dealing with this randomness is to replace the values of these parameters by some kind of average and hope that this will give a good approximation to the original one. One problem of this approach is that even if we assume that the model obtained by averaging is reasonable, we might still want to know how do the small fluctuations of the parameters actually affect on the solution. In addition, it may also be that the actual fluctuations of the parameter values affect the solution in such a way that the averaged model is not even near to be a reliable description of what is actually happening. This has motivated the development of stochastic integration theory and associated by K. Itˆoin the 40’s of the previous century [30]. This stochastic integration theory is based on the Brownian Motion, and can be defined with respect to any continuous semi-. A few other stochastic have been presented since Itˆo,for various other classes of stochastic processes. The Skorokhod-Hitsuda integral was initially introduced in [44] and [26] as a non adapted version of the Itˆointegral, which satisfies similar calculus rules as that of Itˆo.The satisfies the regular calculus rules as that of the , but is regularly defined for a narrower class of integrand [45, 46]. In this work we are interested in stochastic integration with respect to processes that are not necessarily semi Martingales. These processes are useful in, e.g., modeling of time- dependent phenomena in , in information theory, in telecommunication and in a host of applications. Some attempts have been made to extend the definition of Itˆo’sintegral to general stationary increments processes, a special attention was given to the case of the fractional Brownian motion [14, 38, 28,7].

White noise is regarded as a zero mean stationary stochastic process which is independent at different times. The function of such process must vanish anywhere outside zero, but in order for such a process to have a physical meaning as a random signal, the variance of the process must be unbounded, and hence such process does not exist 1. Introduction 5 in the ordinary sense. The increment process of the Brownian motion, normalized by the length of the increment, seems to obtain the properties of the white noise described above as the length of the increment approaches zero. This is the reason why white noise is informally regarded as the time derivative of the Brownian motion which has nowhere differentiable sample path with probability one. Due to the above features, white noise it is often used in system model equations as an idealization of random noise arises in nature, such as the roar of a jet engine, or the noise disturbing the transmission of a communication system. As of today, there are several approaches to stochastic integration. An intuitive approach is to define such an integral directly with white noise as part of the integrand. This re- quires the building of a rigorous mathematical theory of white noise, such as the one introduced by T. Hida in 1975 [22]. His idea was to realize non-linear functions on a as functions of white noise. During the last three decades the theory of white noise has evolved into an infinite dimensional distribution theory.

In this work we consider Gaussian stationary increment processes and extend Hida’s white noise space theory to a wide family of such processes. In particular we introduce stochastic integration theory with respect to these processes based on the Skorokhod- Hitsuda integral, which can be useful in modeling systems in which the underlying noise has a non-white spectrum, namely a colored noise. Our main tool is a version of the S transform adapted to our new setting. The S transform is an elementary transformation in the white noise space which allows a rather simple definition of the Itˆointegral and other important results in the white noise space settings. The fact that this transfor- mation can be naturally extended suggests that our new integration theory is a natural extension of the Itˆointegral. In the present thesis, I show that a Wick-Itˆostochastic integral, with respect to any sta- tionary increment , can be naturally defined using an associated family of Fourier integral operators and some ideas taken from Hida’s white noise space theory. In other words, this is an extension of Hida’s white noise space theory to the case of non- white noises. In particular, this integration theory generalizes many works in stochastic integration with respect to fractional Brownian motion done in the recent years [10].

The white noise space theory is an elegant example of the combination of many de- velopments in functional analysis to the study of stochastic dynamics in probability. The first part of this dissertation is devoted to review Hida’s white noise space theory and the relevant notions in functional analysis. New results are included in the second part. Part I

PRELIMINARIES 2. BACKGROUND

In this chapter we describe the background concerning the concepts needed in the con- struction of the white noise space and its extension in this work. The main references for this chapter are [19] for countably-normed spaces; [20] for countably-Hilbert spaces, nuclear spaces and Gel’fand triples; [42] for The space of Schwartz functions and [31] for abstract Gaussian Hilbert spaces.

2.1 Countably-Normed spaces

Definition 2.1.1. Two norms k · k1 and k · k2, defined in a vector space V , will be called comparable if there is constant C such that the inequality

kvk1 ≤ Ckvk2

holds for all v ∈ V .

In the above definition the norm k · k2 is considered to be stronger then k · k1 in the sense

that every Cauchy sequence with respect to the k · k2 is also a Cauchy sequence with

respect to k · k1.

Definition 2.1.2. Two norms k · k1 and k · k2, defined in a vector space V , will be called

compatible if every Cauchy sequence {vn}n≥1 ⊂ V in both norms that converges to the zero element with respect to one of the norms, also converges to the zero element with respect to the second.

If two comparable and compatible norms ,k · k1 and k · k2 such that k · k1 ≤ Ck · k2, are

defined in a space V , then the completion V1 and V2 of V with respect to these norms may be considered to have the following relationship:

V1 ⊃ V2 ⊃ V.

If the two norms k · k1 and k · k2 are compatible but not comparable, we can introduce a 2. Background 8

third norm k · k3 defined by

kvk3 = max {kvk1, kvk2} .

It is easy to verify that k · k3 is comparable and compatible with the other two norms.

Thus, given any family {k · kn}n≥1 of norms on V we may assume they satisfy the relation

k · k1 ≤ C2k · k2 ≤ ..., and the completions of V with respect to each of the norms satisfy

V1 ⊃ V2 ⊃ ... ⊃ Vn ⊃ ... ⊃ V.

For a vector space V with a countable system of norms {k · kn}n≥1 we can define a topology by the following system of neighborhoods of zero,

Uk, = {v : kvk1 < , kvk2 < , ..., kvkk < } , k ∈ N,  > 0.

We note that the topology defined in this way coincides with the topology in V defined by the metric ∞ X kvkn kvk = 2−n . 1 + kvk n=1 n Definition 2.1.3. A vector space V in which a topology is defined by a countable familiy of compatible norms is called a countably normed space.

2.2 Countably-Hilbert spaces

Suppose we are given a countably normed space V in which the topology is defined by p a countable set of inner product norms kvkn = (v, v)n. The space V will be called a countably Hilbert space if it is complete relative to the stated countably-normed topology.

Note that also in this case, any system of scalar products (·, ·)n, n = 1, 2, ... in V can be 0 replaced by a new system of scalar products (·, ·)n which does not alter the topology in V , by setting n 0 X (v, w)n = (v, w)k , v, w ∈ V. k=1 This new system has the property that

0 0 (v, v)1 ≤ (v, v)2 ≤ ..., (2.1) 2. Background 9

so without losing generality we can further assume that given a countably Hilbert space V and a system of norms in it, condition (2.1) is satisfied.

Let Vn denote the completion of the space V relative to the norm k · kn. So each Vn is a Hilbert space and from the completeness of V it follows that

∞ \ V = Vn. n=1

It is not hard to show that the dual space V 0 of a countably Hilbert space V equals

∞ 0 [ 0 V = Vn. n=1

2.3 Nuclear spaces

Let V be a countably-Hilbert space associated with an increasing sequence {k · kn}n = p (·, ·) of Hilbert norms. Denote Vn the completion of V with respect to the norm k · kn. In each of these spaces the set of elements of V is an everywhere dense set. By hypothesis, if m ≤ n then (v, v)m ≤ (v, v)n ∀v ∈ V . From this it follows that the function maps en element v ∈ V from Vn to Vm (i.e. the same element v considered in two different spaces) is a continuous function of an everywhere dense set in Vm, so it can be extended to a n p n p continuous linear operator Tm : Vn −→ Vm. Note that Tm = TmTn if m ≤ n ≤ p.

Definition 2.3.1. A countably Hilbert space V is called nuclear, if for any m there is n an n such that the operator Tm is nuclear, i.e. has the form

∞ n X Tmv = λk (v, vk)n wk, v ∈ Vn, k=1

where {vk} and {wk} are orthonormal systems of vectors in the spaces Vn and Vm respec- P∞ tively, λk > 0 and k=1 λk < ∞.

Every nuclear space is perfect, i.e. every bounded closed set in a nuclear space V is compact. From this follows the following properties of a nuclear space V :

1. V is separable.

2. V is complete relative to weak convergence.

3. Both in V and its dual V 0, strong and weak convergence coincide. 2. Background 10

4. V 0 is perfect relative to the topology of weak and strong convergence.

It follows that the σ-field on V 0 generated by the three topologies(weak, strong and inductive limit) are all the same. This σ-field is regarded as the Borel field of V and denoted B (V ). Two of the most important properties of nuclear spaces are given below.

Theorem 2.3.2 (Bochner-Minlos [20]). Let V be a real nuclear space. Let C : V −→ C be a complex valued function on V satisfying:

1. C is continuous.

2. C(0) = 1.

3. C is a positive function in the sense that for any

z1, ..., zn ∈ C and v1, ..., vn ∈ V ,

n X zizjC (vi − vj) ≥ 0. i,j=1

Then there exist a unique probability measure P on V 0 such that Z C(v) = eihx,vidP (x), v ∈ V. V 0

Note that the above theorem is not true for a general real separable infinite dimensional − 1 kvk2 Hilbert space H. To see this, take C(v) = e 2 H and let {e1, e2, ...} be an orthonormal basis for H. Then Z i(x,e ) − 1 e n dP (x) = e 2 . H

But for every x ∈ H,(x, en) −→ 0 as n −→ ∞. Another important fact about nuclear spaces is the following abstract kernel theorem.

Theorem 2.3.3 (Schwartz’s kernel theorem [20]). Let V be a nuclear space associated with an increasing sequence {k · kn} of norms and let Vn be the completion of V with respect to k · kn. Suppose F : V × V → C is a bilinear continuous functional. Then there 0 exist n, p ≥ 1, and a Hilbert-Schmidt operator A from Vn into Vp such that

F (u, v) = hAu, vi, u, v ∈ V. 2. Background 11

2.4 Gelfand triples

Let V be a nuclear space densely imbedded into some Hilbert space H, relative to the norm of H. We may identify H with its dual space H0 using Riesz representation theorem: 0 each h ∈ H is identified as the element φh in H defined by

φh(x) = hx, hiH , x ∈ H.

Each h ∈ H can be further identified with an element of V 0, by

hh, viV 0,V = hh, viH , v ∈ V.

Thus H is densely embedded in V 0 with respect to the weak topology of V 0. We get the triple V ⊂ H ⊂ V 0, which is called a Gelfand triple. H is dense in V 0 with the weak topology of V 0.

Suppose that V is associated with a sequence {k · kn} of norms and let Vn be the com- pletion of V with respect to k · kn. By setting H = V1, we get the continuous inclusion

0 0 0 V ⊂ ... ⊂ Vn+1 ⊂ Vn ⊂ ... ⊂ V1 ⊂ ... ⊂ Vn ⊂ Vn+1 ⊂ ... ⊂ V ,

0 and in particular V ⊂ V1 ⊂ V is a Gelfand triple.

2.5 The space of Schwartz Functions and its Dual

Definition 2.5.1. The Schwartz space SR of rapidly decreasing functions consists of all functions s ∈ C∞ (R) such that for every α, β ∈ N,

dβs lim | 1 + |x|2α (x)| = 0. |x|→∞ dxβ

The family of norms

Z β 1/2 α d s kskα,β = |x β (x)|dx , α, β ∈ N, R dx

makes into a countably normed space. The space and its topological dual 0 , SR SR SR the space of tempered distributions, play a central role in White Noise Space theory. We will list some of their important properties. 2. Background 12

1. If s ∈ SR and P is a polynomial, then the mapping

s −→ P (D)s,

n dns where D (s) = dxn , is a continuous map of SR into itself. In addition,

P\(D)s = P s,b and Pc s = P (−D)s.b

2. The Fourier transform is a continuous linear mapping of SR into itself [42, Theorem 7.4].

3. is a nuclear spaces, thus so is 0 [20, section 3.6]. SR SR 4. We have the Gelfand triple [41].

0 SR ⊂ L2 (R) ⊂ SR.

2.6 Abstract Gaussian Hilbert Spaces

As can be deduced by its name, a Gaussian Hilbert space is a notion combining and Hilbert space theory. A Gaussian linear space is a linear space of random variables defined in some probability space (Ω, F,P ) with central .

The inner product in L2 (Ω, F,P ) assigned to a Gaussian linear space turns it into a pre-Hilbert. A Gaussian Hilbert space is a complete Gaussian linear space, i.e. a closed linear space of L2 (Ω, F,P ) consisting of zero mean Gaussian variables.

Let H be a Gaussian Hilbert space on (Ω, F,P ). Since variables in H belong to Lp for every finite p ≥ 0, H¨older’sinequality shows that any finite product of variables in H

belongs to L2 (Ω, F,P ). This allows us to consider subspaces of L2 (Ω, F,P ) consists of polynomials in the elements of H. We define

n ⊥ H , P n(H) ∩ P n−1(H) , where P n(H) is the closure in L2 (Ω, F,P ) of the linear space generated by polynomials in the elements of H of degree ≤ n. It follows that the spaces Hn, n ≥ 0, are mutually orthogonal, and if we consider the 2. Background 13

sub-sigma field F(H) generated by the elements of H in L2 (Ω, F,P ), we get

∞ M n H = L2 (Ω, F,P ) , (2.2) n=0 which means that each elements X(ω) ∈ L2 (Ω, F,P ) has the representation

∞ X n X(ω) = Xn(ω),Xn(ω) ∈ H . n=0

This decomposition of L2 (Ω, F,P ) is called the Wiener chaos decomposition. The Wick product of two elements X(ω) ∈ Hn and Y (ω) ∈ Hm can be defined by

X  Y = πm+n(XY ),

n where πn is the orthogonal projection of L2 (Ω, F,P ) on H , and the definition may be extended to any two element of L2 (Ω, F,P ) in view of their chaos representation. In the sequel we will investigate a particular example of the Wiener chaos decomposition in the white noise space, given in terms of the Hermite polynomials and the Hermite functions. 3. HIDA’S WHITE NOISE SPACE

In 1975, T. Hida [21] defined white noise in rigorous mathematical terms as generalized functions on an infinite dimensional space. This approach has been extensively studied in the last decades and we will review it here briefly. We refer to [24], [27] and [35] for more details.

The starting point for the construction of the white noise space is the positive function − 1 ksk 2 L2(R) e defined on the nuclear space SR. Applying the Bochner-Minlos theorem 2.3.2 to it we obtain a probability measure P on 0 such that SR Z − 1 ksk2 hs0,si 0 2 L2(R) e = e dP (s ), s ∈ SR. (3.1) S 0 R In accordance with the notation common in probability theory, we set

0 Ω := SR

and denote by ω the elements of Ω. The Borel sigma algebra is denoted by B. The probability space (Ω, B( 0 ),P ) is called a white noise space. The measure P is SR called the standard Gaussian measure on 0 or the white noise measure. We also set SR

B 0 L2(Ω) , L2 (Ω, (SR),P ) .

Taking s = s1 with  ∈ R in (3.1) and expanding both sides into a power series we may

conclude that for each s ∈ SR, hω, si is a normally distributed with zero 2 mean and variance ksk . The isometric map s −→ hω, si from into L2(Ω) can be L2(R) SR

extended to any function f ∈ L2(R) by taking a sequence {sn} in SR such that sn → f in L2(R) and setting

hω, fi lim hω, sni , n→∞

in L2(Ω). It follows that H , {hω, fi, f ∈ L2(R)} 3. Hida’s White Noise space 15

is a Gaussian Hilbert space isomorphic to L2(R). The Brownian motion can be defined to be the continuous version of the process

B(t) , B(t, ω) , hω, 1[0,t]i, t ≥ 0. (3.2)

3.1 The Wiener-Hermite Chaos Expansion

It turns out to be convenient to express the Wiener chaos (2.2) for the space L2(Ω) in

terms of Hermite polynomials and Hermite functions. The nth (probabilistic) Hermite polynomial is defined by

n n 1 x2 d  − 1 x2  h (x) = (−1) e 2 e 2 , n = 1, 2, .... n dxn

The Hermite functions are defined by

1 2 − 2 x √ − 1 e   ηn(x) , π 4 hn−1 2x , n = 0, 1, 2, .... p(n − 1)! and constitutes an orthonormal basis for L2(R). We denote by J the set of multi-indices over N, which can be viewed as the set of infinite sequences α = (α1, α2, ...), αi ∈ N for

which αi = 0 for all i large enough. Define

∞ Y Hα(ω) , hαi (hω, ηii) . i=0

The family {Hα}α∈J constitutes an orthogonal basis for L2(Ω). In addition, if α =

(α1, α2...) then we have

H2 = kH k2 = α! α !α ! ··· . E α α L2(Ω) , 1 2

It follows that every X ∈ L2(Ω) can be decomposed as

X X(ω) = cαHα(ω), cα ∈ R, α∈J

and we have  2 X 2 E X(·) = α!cα. α∈J 3. Hida’s White Noise space 16

For example, the Brownian motion has the Wiener-Hermite expansion

∞ ∞ ∞ X Z t X Z t X Z t B(t) = hω, 1[0,t]i = hω, ηj(u)du ηji = ηj(u)duhω, ηji = ηj(u)duH(j), j=1 0 n=1 0 n=1 0 (3.3) where (j) = (0, 0, ..., 1, ..) with 1 on the entry number j.

The Wick product on L2(Ω) is defined through

Hα  Hβ , Hα+β.

Wick powers, Wick polynomials and Wick versions of analytic functions can be defined as well. For example, the Wick exponential is defined by

∞ X Xn eX (ω) . , n! n=0

For a Gaussian X(ω) = hω, fi ∈ H, with f ∈ L2(Ω), it can be shown that   X X− 1 X2 1 2 e = e 2 E[ ] = exp hω, fi − kfk . 2 L2(R)

We note that the definition of the Wick product is independent of the particular choice

of basis elements {Hα} [27, App. D].

3.2 Spaces of stochastic test functions and generalized functions

P Let X(ω) = α∈J cαHα(ω). If X 2 α!cα < ∞ (3.4) α∈J P then X ∈ L2(Ω). Moreover, if Y (ω) = α∈J bαHα(ω) then X E [X(·)Y (·)] = bαcαα!. α

The main idea in the following is to replace condition 3.4 by various other conditions, and thus obtain a family of stochastic distributions and test functions.

The Kondratiev spaces of stochastic test functions Sρ for 0 ≥ ρ ≥ 1 consist of those

X φ(ω) = cαHα(ω) ∈ L2(Ω) α∈J 3. Hida’s White Noise space 17

such that

2 X 2 1+ρ Y kαj kφkρ,k , cα (α!) (2j) < ∞ (3.5) α∈J j for all k ∈ N.

The corresponding Kondratiev space of stochastic distributions S−ρ consist of all formal expansions X Φ = bαHα(ω) α∈J such that

2 X 2 1−ρ Y −qαj kΦk−ρ,−k , bα (α!) (2j) < ∞ (3.6) α∈J j

for some q ∈ N. The topologies of Sρ and S−ρ are defined by the corresponding families

of defined in 3.5 and 3.6 respectively. Note that the duality between Sρ and

S−ρ is well defined by the action

X hΦ, φ, i , bαcαα!, Φ ∈ S−ρ, φ ∈ Sρ, (3.7) α∈J

since for q large enough ! ! 1−ρ 1+ρ X X Y qαj /2 Y −qαj /2 |bαcα|α! = |bαcα| (α!) 2 (α!) 2 (2j) (2j) α α j j !!1/2 !!1/2 X 2 1−ρ Y −qαj X 2 1+ρ Y qαj ≤ bα (α!) (2j) bα (α!) (2j) < ∞. α j α j

For general 0 ≤ ρ ≤ 1 we have

S1 ⊂ Sρ ⊂ Sρ ⊂ L2(Ω) ⊂ S−0 ⊂ S−ρ ⊂ S−1.

The Gelfand triples

S1 ⊂ L2(Ω) ⊂ S−1 and S0 ⊂ L2(Ω) ⊂ S−0

are used in stochastic analysis as the analog of the triple ⊂ L ( ) ⊂ 0 that SR 2 R SR is commonly used in differential equations. Indeed, it turns out that these spaces of stochastic distributions host many of the solution for stochastic differential equations. For example, consider the Wiener-Itˆoexpansion of the Brownian motion (3.3) and take 3. Hida’s White Noise space 18 its formal time derivative. The resulting sum

∞ X ηj(t)H(j) (3.8) n=1 does not satisfy condition (3.4) hence cannot be a member of L2(Ω), but it is not hard to prove that this sum belongs to Hida’s space of stochastic distributions S−0.

3.3 The S-Transform

We will now introduce a transformation on (L2) which its extension plays a central role in our work. This transformation was introduced in [18] and [34].

For X ∈ L2(Ω) we define the S-transform of X to be Z hω,si − 1 ksk2 h·,si 2 L2(R)   (SX)(s) , X(ω)e dP (ω) = e E X(·)e , s ∈ SR. Ω

Due to the translation invariance of the Gaussian measure P [23], it follows that Z SX(s) = X(ω + s)dP (ω). Ω P In terms of the chaos expansion, we can express the S-transform of X = α∈J aαHα as

X α (SX)(s) = aα (s, η) , α∈J where (s, η)α Q∞ (s, η )αi . This allows us to formally extend the definition of the , i=1 i L2(R) S-transform to any element of S−1. In addition, we conclude that

S (X  Y )(s) = (SX)(s) · (SY )(s)

for any X,Y ∈ S−1. The importance of the S-transform follows from the fact that it is injective [35, Propo- sition 5.10], hence one can specify a generalized function by its S-transform.

3.4 Stochastic Integration in the White Noise Space

The White Noise distribution theory allows a convenient framework for various levels of stochastic integrals, each generalizes the other. The following integrals are said to be of 3. Hida’s White Noise space 19

Wick-Itˆotype, since they all satisfy the Wick-Itˆocalculus rules that follows from Itˆo’s formula.

3.4.1 Wiener Integral

The Wiener integral of a function f ∈ L2(R) is defined by the isometric map f −→ hω, fi which identifies L2(R) with the Gaussian Hilbert space {hω, fi, f ∈ L2(Ω)} ⊂ L2 (Ω). Recall the definition of the Brownian motion {B(t)} in the white noise space (3.2) to justify the notation

Z Z d Z d fdB(t) := f hω, 1ti = hω, ” f 1t”i = hω, fi. R R dt R dt The Weiner integral carries a deterministic function into a Gaussian random variable. It is merely an isometric embedding of an abstract Hilbert space in its corresponding Gaussian Hilbert space.

3.4.2 ItˆoIntegral

The Itˆointegral is defined for non-anticipating stochastic processes, i.e. a stochastic

process {X(t)} defined on L2(Ω) such that for any t, the random variable X(t) is mea-

surable with respect to Ft, which is the sigma-field generated by {B(s), s ≤ t}. The most important properties of the Itˆointegral are given below. W Let {X(t)} be a non-anticipating stochastic process on L2(Ω, t≥0 Ft,P ). Denote

Z t It(X) , x(s)dB(s), 0

where the left hand side is the Itˆointegral of {X(t)}. We have [32, Chapter 3]

1.

I0(X) = 0, a.s. P.

2.

E [It(X)|Fs] = Is(X), a.s. P.

3. Z t   2 2 E (It(X)) = E X(s) ds , 0

2 4. Itˆo’srule: Let g : R −→ R be a function of class C and let X(t) = X0 + 3. Hida’s White Noise space 20

R t 0 f(s)dB(s). Then,

Z t 1 Z t g(X(t)) = g(X(0)) + g0(X(s))f(s)dB(s) + g00(X(s))(f(s))2ds, a.s. P. 0 2 0 (3.9)

We note that the Itˆointegral can be extended to be defined with respect to the class of semi-local martingales [32].

3.4.3 Hitsuda-Skorokhod integral

Hitsuda [26] and Skorokhod [44] introduced an integral which is not restricted to inte- grands of the class of non-anticipating process, but which reduces to the Itˆointegral if the integrand happens to be non-anticipating, as was proved in [39]. In the white noise space framework the Hitsuda-Skorokhod integral can be defined by the relation Z ∞ Z ∞ X(t)δB(t) = X(t)  B˙ (t)dt, (3.10) −∞ −∞ where B˙ (t) is the singular white noise, a stochastic distribution defined by the sum (3.8).

The integral at the right hand side should be interpreted as an S−1 valued Pettis/ (see for example [25] for Pettis integrability). Relation (3.10) presents a natural definition for the Hitsuda-Skorokhod integral in the white noise setting. Even more natural is an equivalent definition for it in terms of the S-transform.

Definition 3.4.1. Let {X(t), t ∈ R} be an L2(Ω) valued stochastic process such that

S (X)(s) ∈ L1(R) for any s ∈ SR, and such that for any Borel set E the function R d E S (X(t)) (s) dt (s, 1t) dt is the S-transform of a unique element in L2(Ω). Then {X(t), t ∈ R} is called Hitsuda-Skorokhod (S-transform) integrable and we define

Z Z  −1 d X(t)δB(t) , S S (X(t)) (s) (s, 1t) dt . E E dt

It can be shown that the last definition coincides with the definition of Hitsuda and

Skorokhod for their integral, and it can be extended to S−1 valued processes as well. (see [24], [35, 13.3] and especially [5] for more on the S-transform approach to stochastic integration). Note that the S-transform definition for stochastic integration of L2(Ω) processes does not involve Wick product nor stochastic distributions, and that for any s ∈ SR the function (s, 1t) is absolutely continuous with respect to the Lebesgue measure as can be seen by (6.2). In view of this, the S-transform integral can be defined in

only in terms of expectation and the inner product in L2(R). This distinction suggests 3. Hida’s White Noise space 21 the possibility to naturally extend the Hitsuda-Skorokhod integral to a setting where the underlying noise is not necessarily white. As we shall see in the Results part of this work, it will be required to replace L2(R) by another Hilbert space, one which is determined by the spectrum of the noise. Part II

RESULTS 4. INTRODUCTION

In this work we take an approach which is based on an extension of the S-transform in order to develop stochastic integration for the family of centered Gaussian processes with covariance function of the form

Z eiξt − 1 e−iξs − 1 Km(t, s) = m(ξ)dξ R ξ ξ where m is a positive measurable even function subject to

Z m(u) 2 dξ < ∞. (4.1) R ξ + 1

Note that Km(t, s) can also be written as

Km(t, s) = r(t) + r(s) − r(t − s),

where Z 1 − cos(tξ) r(t) = 2 m(ξ)dξ. R ξ This family includes in particular the fractional Brownian motion, which corresponds (up to a multiplicative constant) to m(ξ) = |ξ|1−2H , where H ∈ (0, 1). We note that complex-valued functions of the form

K(t, s) = r(t) + r(s) − r(t − s) − r(0),

where r is a continuous function, have been studied in particular by von Neumann, Schoenberg and Krein. Such a function is positive definite if and only if r can be written in the form Z   iξt iξt dσ(ξ) r(t) = r0 + iγt + e − 1 − 2 2 , R ξ + 1 ξ where σ is an increasing right continuous function subject to R dσ(ξ) < ∞. See [36], [33], R ξ2+1 and [2] for more information on these kernels. 4. Introduction 24

As in [2], our starting point is the (in general unbounded) operator Tm on the Lebesgue

space of complex-valued functions L2(R) defined by √ Tdmf(ξ) = m(ξ)fb(ξ), (4.2)

with domain  Z  2 D(Tm) = f ∈ L2(R); m(ξ)|fb(ξ)| dξ < ∞ , R where f(ξ) = √1 R e−iξtf(t)dt denotes the Fourier transform of f. Clearly, the Schwartz b 2π R space SR of real smooth rapidly decreasing functions belong to the domain of Tm. The indicator functions  1[0,t], t ≥ 0, 1t = −1[t,0] t ≤ 0,

also belong to D(Tm). In [2], and with some restrictions on m, a centered Gaussian process B with covariance function K (t, s) = (T 1 ,T 1 ) was constructed in m m m t m s L2(R) Hida’s white noise space. In the present work we chose a different path. We define the characteristic functional 2 − kTmsk Cm(s) = e 2 , s ∈ S . (4.3)

It has been proved in [3] that Cm is continuous from SR into R. Restricting Cm to real-valued functions and using the Bochner-Minlos theorem 2.3.2, we obtain an analog

of the white noise space in which the process Bm is built in a natural way. Stochastic calculus with respect to this process is then developed using an S-transform approach.

The S-transform of an element X(ω) of the white noise space is defined by

 h·,si − 1 ksk SX(s) = E X(·)e e 2 L2(R) .

An S-transform approach to stochastic integration in the white noise setting can be found in [24], [35, Section 13.3] and especially in [5]. The main idea is to define the Hitsuida- Skorohod integral of a stochastic process X(t) with respect to the Brownian motion B(t) over a Borel set E, by

Z Z  −1 X(t)δB(t) , S S (X(t)) (s)s(t)dt . E E

Namely, the integral of X(t) over the Borel set E is the unique stochastic process Φ(t) 4. Introduction 25

such that for any t ≥ 0 and s ∈ S , Z (SΦ(t)) (s) = S (X(t)) (s)s(t)dt. E

Since s(t) = d (s, 1 ) , it suggests to extend the last definition of the integral by dt t L2(R) replacing the inner product in L2(R) by a different one. In the present work, this inner

product is determined by the spectrum of the process through the operator Tm. 1−2H 1 We note that when m(ξ) = |ξ| , and H ∈ ( 2 , 1), the operator Tm reduces, up to a 2 multiplicative constant, to the operator MH defined in [16] and in [7]. The set Lφ pre-

sented in [12, Eq. 2.2], is closely related to the domain of Tm, and the functional Cm was used with the Bochner-Minlos theorem in [8, (3.5), p. 49]. In view of this, our work gen- eralized the stochastic calculus for fractional Brownian motion presented in these works to the aforementioned family of Gaussian processes.

Note moreover that the function φ from the last references defines the kernel associated ∗ to the operator TmTm via Schwartz’ kernel theorem, with m = MH . In the general case, ∗ the kernel associated to the operator TmTm is not a function. This last remark is the source for some of the difficulties arises in extending Wick-Itˆointegration for fractional 1 1 1 Brownian motion such as the distinction between the cases H < 2 , H > 2 and H = 2 .

There are two main results in this work. The first is the construction of a probability space in which a stationary increment process with spectral density m is naturally de- fined. This result, being a concrete example of Kolmogorov’s extension theorem on the existence of a Gaussian process with a given spectral density, is interesting in its own right. The second main result deals with developing stochastic integration with respect to the fundamental process in this space. We take an approach based on the analog of the S-transform in our setting, and show that this stochastic integral coincides with the one already defined in [1] but in the framework of Hida’s white noise space.

The results section consists of four chapters besides the introduction. In Chapter 5 we construct an analog of Hida’s white noise space using the characteristic function Cm,

define and study the fundamental stationary increment process Bm and the analog of the S-transform in this space. In Chapter6 we define a Wick-Itˆotype stochastic integral

with respect to Bm, and prove an associated Itˆoformula. We explain the relation of this integral to other works on white noise based stochastic integrals. In Chapter 7 we use our stochastic integration approach to formulate and solve a stochastic optimal control 4. Introduction 26 problem. The last chapter contains the main conclusion and some issues for further discussion. 5. THE M NOISE SPACE

5.1 Definition

We set to be the space of real-valued Schwartz functions, and Ω = 0 . We denote SR SR by B the associated Borel sigma algebra. Throughout this paper, we denote by h·, ·i the duality between 0 and , and by (·, ·) the inner product in L ( ). In case there is no SR SR 2 R danger of confusion, the L2(R) norm will be denoted as k · k.

Theorem 5.1.1. There exists a unique probability measure µm on (Ω, B) such that

2 kTmsk Z − 2 ihω,si e = e dµm(ω), s ∈ SR, Ω

Proof: The function Cm(s) is positive definite on SR since

 1   1  C (s − s ) = exp − kT s k2 × exp {(T s ,T s )} × exp − kT s k2 , m 1 2 2 m 1 m 1 m 2 2 m 2 where now the middle term is positive since an exponent of a positive function is still positive. Moreover, the operator Tm is continuous from S (and hence from SR) into L2(R). This was proved in [3], and we repeat the argument for completeness. As in [3] we set K = R m(u) du and s](u) = s(−u). We have for s ∈ : R 1+u2 S Z 2 2 kTmsk = |sb(u)| m(u)du R Z m(u) = |(1 + u2)s(u)|2 du b 2 R 1 + u Z Z  ≤ K |s ? s]|(ξ)dξ + |s0 ? (s])0|(ξ)dξ R R Z 2 Z 2! ≤ K |s(ξ)|dξ + |s0(ξ)|dξ , R R

where we have denoted convolution by ?. It follows that Cm is a continuous map from

SR into R, and the existence of µm follows from the Bochner-Minlos theorem. 5. The m Noise Space 28

The triplet (Ω, B, µm) will be used as our probability space.

Proposition 5.1.2. Let s ∈ SR. Then:

2 2 E[hω, si ] = kTmsk . (5.1)

Proof: We have

1 2 Z − kTmsk ihω,si e 2 = e dµm(ω). (5.2) Ω Expanding both sides of (5.2) in power series for s we obtain

Z E [hω, si] = hω, sidµm(ω) = 0. (5.3) Ω and Z  2 2 2 E hω, si = hω, si dµm(ω) = kTmsk . (5.4) S 0

We now want to extend the isometry (5.1) to any function in the domain of Tm. This extension involves two separate steps: First, an approximation procedure, and next com- plexification. For the approximation step we introduce an inner product defined by the

operator Tm. For f and g in D(Tm) we define the inner product Z ˆ ∗ (f, g)m , fgˆ m(ξ)dξ. R

Note that D(Tm) is consist of those functions f in L2(R) that satisfy

2 kfkm = (f, f)m < ∞.

We define the space LSm and Lm to be the closure of S and D(Tm) in the norm k · km, respectively.

Proposition 5.1.3. We have

Lm = LSm.

Proof: Let f ∈ Dm be orthogonal to any s ∈ S in the norm k · km, i.e. Z ∗ 0 = (s, f)m = sbfb m(ξ)dξ, ∀s ∈ S . R 5. The m Noise Space 29

It follows that fb∗m = 0 almost everywhere since it defines the zero distribution on S . But that also means Z fbfb∗m(ξ)dξ = 0 R

so f is zero in Lm.

Theorem 5.1.4. The isometry (5.1) extends to any f ∈ L2(R) where f is real-valued

and in the domain of Tm.

Proof: We first note that, for f in the domain of Tm we have

Tmf = Tmf. (5.5)

Indeed, since m is even and real we have

] √ ] √ ]   Tdmf = m(fb) = ( mfb) = Tdmf = Tdmf.

Let now f be real-valued and in D(Tm) ⊂ Lm. It follows from Proposition 5.1.3 that there exists a sequence (sn)n∈N of elements in S such that

lim ksn − fkm = 0. (5.6) n→∞

In view of (5.5), and since f is real-valued we have

lim ksn − fkm = lim kTmsn − TmfkL ( ) = lim kTmsn − TmfkL ( ) = 0. (5.7) n→∞ n→∞ 2 R n→∞ 2 R

Together with (5.6) this last equation leads to

lim kTm(Re sn) − TmfkL ( ) = 0. (5.8) n→∞ 2 R

In particular (Tm(Re sn))n∈N is a Cauchy sequence in L2(R). By (5.1), (hω, Re sni)n∈N

is a Cauchy sequence in Wm. We denote by hω, fi its limit. It is easily checked that the limit does not depend on the given sequence for which (5.6) holds.

We denote by DR(Tm) the elements in the domain of Tm which are real-valued.

Let f, g ∈ DR(Tm). The polarization identity applied to

[hω, fi2] = kT fk2 , f ∈ D (T ). (5.9) E m L2(R) R m 5. The m Noise Space 30

leads to [hω, fihω, gi] = Re (T f, T g) . E m m L2(R)

In view of (5.5), Tmf and Tmg are real and so we have:

Proposition 5.1.5. Let f, g ∈ DR(Tm). It holds that

E [hω, fihω, gi] = (Tmf, Tmg) . (5.10)

Proposition 5.1.6. {hω, fi, f ∈ DR(Tm)} is a Gaussian process in the sense that for Pn any f1, ..., fn ∈ DR(Tm) and a1, ..., an ∈ R, the random variable i=1 aihω, fii has a normal distribution.

Proof: By (5.2), for λ ∈ R we have,

Pn Z Pn iλ aihω,fii iλ aihω,fii E[e i=1 ] = e i=1 dµm(ω) Ω Z Pn ihω,λ aifii (5.11) = e i=1 dµm(ω) Ω − 1 λ2k Pn a T f k2 = e 2 i=1 i m i .

In particular, we have that for any ξ1, ..., ξn ∈ DR (Tm) such that Tmξ1, ..., Tmξn are n orthonormal in L2 (R) and for any φ ∈ L2(R )

Z n 1 Y 1 2 − 2 xi E [φ (hω, ξ1i, ...hω, ξ1i)] = n φ(x1, ..., xn) e dx1 · ... · dxn. (5.12) (2π) 2 n R i=1

Definition 5.1.7. We set G to be the σ-field generated by the Gaussian elements

{hω, fi, f ∈ DR (Tm)} , and denote

Wm , L2 (Ω, G, µm) .

Note that G may be significantly smaller than B, the Borel σ-field of Ω. For example, if

m ≡ 0, then Tm is the zero operator and G = {∅, Ω, 0, Ω\{0}}. We will see in the following section that the time derivative, in the sense of distributions,

of the fundamental stochastic process Bm in the space Wm has spectral density m(ξ). It

is therefore justified to refer Wm as the m-noise space. 5. The m Noise Space 31

In the case m (ξ) ≡ 1, Tm is the identity over L2 (R) and µm is the white noise measure used for example in [24, (1.4), p. 3]. Moreover, by Theorem 1.9 p. 7 there, G equals the Borel sigma algebra and so the 1-noise space coincides with Hida’s white noise space.

5.2 The process Bm

We now define the fundamental stationary increment process Bm : R −→ Wm via

Bm(t) , Bm(t, ω) , hω, 1ti.

This process plays the role of the Brownian motion for the stochastic integral and the

Itˆoformula in the space Wm. Note that this is the same definition as the Brownian mo- tion in white noise space (3.2), the difference being the probability measure assigned to Ω.

Theorem 5.2.1. Bm has the following properties:

1. Bm is a centered Gaussian random process.

2. For t, s ∈ R, the covariance of Bm(t) and Bm(s) is

Z eiξt − 1 e−iξs − 1 Km(t, s) = m(ξ)dξ = (Tm1t,Tm1s) . (5.13) R ξ ξ

3. The process Bm has a continuous version under the condition

Z m(ξ) dξ < ∞ (5.14) R 1 + |ξ|

Proof: (1) follows from (5.11) and (5.3). To prove (2), we see that by (5.10) we have

E [Bm(t)Bm(s)] = E [hω, 1tihω, 1si]

= Re (Tm1t,Tm1s)

= (Tm1t,Tm1s) , since this last expression is real. To prove (3) we use similar arguments to [3, Theorem 10.2]. For t, s ∈ R, Z  2  2 1 − cos ((t − s)ξ) E (Bm(t) − Bm(s)) = E h·, 1[s,t]i = 2 2 m(ξ)dξ, R ξ 5. The m Noise Space 32

where the last equality follows by vanishing imaginary part of (5.13). We now compute

Z 1 Z 1 tξ 2 1 − cos(tξ) 2 2 sin 2 2 2 m(ξ)dξ = 2 t 2 2 m(ξ)dξ 0 ξ 0 ξ t 2 ≤ C1t (C1 > 0 independent of t).

Using the mean-value theorem for the function ξ → cos(tξ) we have

1 − cos(tξ) = tξ sin(tθt), θt ∈ [0, ξ].

Thus,

Z ∞ 1 − cos(tξ) Z ∞ m(ξ) Z ∞ m(ξ) 2 m(ξ)dξ = t sin(tθt) dξ ≤ t dξ ≤ C2t, 1 ξ 1 ξ 1 ξ

where we have used (5.14) for the last move. Since Bm(t)−Bm(s) is zero mean Gaussian, we obtain

 4  2 2 E (Bm(t) − Bm(s)) = C3E (Bm(t) − Bm(s)) ≤ C4 (t − s) .

˘ Thus Bm satisfies Kolmogorov-Centsov test for the existence of a continuous version [32, Theorem 2.8].

We bring here two interesting examples for specific choices of the spectral density m and

the corresponding process Bm.

Example 5.2.2 (Band-limited noise). Consider the spectral density

m1 (ξ) = 1[−∆,∆], ∆ ≥ 0.

The corresponding process Bm1 has the covariance function

1 Z ∆ 1 − cos(tξ) − cos(sξ) + cos(ξ(t − s)) √ Km1 (t, s) = 2 dξ, t, s ∈ R. 2π −∆ ξ

The time derivative of this process also belongs to Wm, and is a stationary Gaussian process with covariance

2 Z ∆ ∂ 1 i(t−s)ξ 2 sin (∆(t − s)) Km1 (t, s) = √ e dξ = . (5.15) ∂t∂s 2π −∆ t − s

This process can be obtained in physical models by passing a white noise through a low- 5. The m Noise Space 33

Fig. 5.1: Various sample paths for the stationary increment process of Example 5.2.2 (left) and Example 5.2.3 (right).

pass filter with cut-off frequency ∆. We see from the covariance function (5.15) that each  π  time sample t0 ∈ R is positively correlated with time samples in the interval t0, t0 + 2∆ , π 3π  negatively correlated with time samples in the interval t0 + 2∆ , t0 + 2∆ and so on with decreasing magnitude of correlation. This behavior may describes well price fluctuation of some financial asset.

Example 5.2.3 (Band limited fractional noise). We can combine the spectral density m1 from the previous example with the spectral density |ξ|1−2H of the fractional noise with Hurst parameter H ∈ (0.5, 1) to obtain a process with covariance function

1 Z ∆ 1 − cos(tξ) − cos(sξ) + cos ((t − s)ξ) √ 1−2H Km2 (t, s) = 2 |ξ| , t, s ∈ R. 2π −∆ ξ

This process shares both properties of long range dependency of the fractional Brown- ian motion with Hurst parameter H as well as the ripples of the filtered noise for its

time derivative. As the bandwidth ∆ approaches infinity, the covariance function Km2 uniformly converges (up to a multiplicative constant) to the covariance of the fractional Brownian motion.

Our next goal is to define stochastic integration with respect to the process Bm in the space Wm. The definition of the Wiener integral with respect to Bm for f ∈ D (Tm) is straightforward in view of the Hilbert spaces isomorphism (5.4) and given by

Z τ f(t)dBm(t) , hω, 1τ fi. (5.16) 0 5. The m Noise Space 34

Note that since Z Z 2 2 2 m(ξ) m(ξ)|fb(ξ)| dξ ≤ sup(1 + ξ )|fb(ξ)| 2 dξ, R ξ∈R R 1 + ξ

a sufficient condition for a function f ∈ L2 (R) to be in the domain of Tm is

sup(1 + ξ2)|fb(ξ)|2 ≤ ∞. ξ∈R

This is satisfied in particular if f is differentiable with derivative in L2 (R). Recall that in the white noise space one may defines the Skorokhod-Hitsuda stochastic

integral of Xt on the interval [a, b] as

Z b Z b ˙ XtdB(t) = Xt  Bmdt a a

˙ where Bm denotes the time derivative of the Brownian motion and  denotes the Wick product [27]. The chaos decomposition of the white noise space is used in order to define ˙ the Wick product and appropriate spaces of stochastic distributions where Bm lives.

Chaos decomposition for Wm can be obtained by a similar procedure to the one explained in 3.1 for the fractional Brownian motion. A space of stochastic distributions that con- ˙ tains Bm and is closed under the Wick product can similarly be defined. A somewhat alternative approach, which uses only the expectation and the Lebesgue integral on the real line, is achieved by using the S-transform [5]. As we shall see below,

an analogue of the S-transform can be naturally defined in the space Wm, thus allows us

to introduce Skorokhod-Hitsuda integral for Wm valued processes which is based on this transform.

5.3 The Sm transform

We now define the analog of the S transform in the space Wm and study its properties.

For s ∈ SR we define the analog of the Wick exponential in the space Wm:

hω,si hω,si− 1 kT sk2 e , e 2 m

Note that this definition is not yet related to the Wick product which has not yet been defined in Wm. 5. The m Noise Space 35

Definition 5.3.1. The Sm transform of Φ ∈ Wm is defined by Z hω,si  hω,si  (SmΦ)(s) , e Φ(ω)dµm(ω) = E e Φ(ω) , s ∈ SR. Ω

Theorem 5.3.2. Let Φ, Ψ ∈ Wm. If (SmΦ) (s) = (SmΨ) (s) for all s ∈ S , then Φ = Ψ.

Proof: We follow the same arguments as in [5, Theorem 2.2] with some small changes.

By linearity of the Sm transform, it is enough to prove

(∀s ∈ S , (SmΦ) (s) = 0) ⇒ Φ = 0.

Let {ξ } ⊂ be a countable dense set in L ( ) and denote by G the σ-field n n∈N SR 2 R n generated by {hω, ξ i, ..., hω, ξ i}. We may choose {ξ } such that {T ξ } are 1 n n n∈N m n n∈N orthonormal. For every n ∈ N, E [Φ|Gn] = φn (hω, ξ1i, ..., hω, ξni) for some measurable n function φn : R −→ R such that Z Z − 1 x0x EΦ = ··· φn(x)e 2 dx < ∞,

Rn

where x0 denotes the transpose of x; see for instance [9, Proposition 2.7, p. 7]. Thus, for n t = (t1, ..., tn) ∈ R , using (5.12) we obtain

Z Pn Z Pn hω, tkξki hω, tkξki 0 = e k=1 Φ(ω)dµm = e k=1 E [Φ|Gn] dµm(ω) Ω Ω 1 Pn 2 2 Z Pn − t kTmξkk tkhω,ξki = e 2 k=1 k e k=1 φn (hω, ξ1i, ..., hω, ξni) dµm(ω) Ω 1 Pn 2 2 1 Z Z Pn 1 Pn 2 − 2 k=1 tkkTmξkk k=1 tkxk − 2 k=1 xk = e n ··· e φn (x1, ..., xn) e dx1...dxn (2π) 2 Rn Z Z − 1 (x−t)0(x−t) = ··· φn (x) e 2 dx.

Rn

Since the last expression is a convolution integral of φn with a positive eigne vector of

the Fourier transform, by properties of the Fourier transform we get that φn = 0 for all n ∈ . Since S G = G we have Φ = 0. N n∈N n Definition 5.3.3. A stochastic exponential is a random variable of the form

hω,fi e , f ∈ DR (Tm) .

We denote by E the family of linear combinations of stochastic exponentials. 5. The m Noise Space 36

hω,fi − 1 kT fk2 hω,fi Since e = e 2 m e , the following claim is a direct consequence of Theorem 5.3.2.

Proposition 5.3.4. E is dense in Wm.

Definition 5.3.5. A stochastic polynomial is a random variable of the form

p (hω, f1i, ..., hω, f2i) , f1, ..., fn ∈ DR (Tm) . for some polynomial p in n variables. We denote the set of stochastic polynomials by P.

Corollary 5.3.6. The set of stochastic polynomials is dense in Wm.

Proof: We first note that the stochastic polynomials indeed belong to Wm because the random variables hω, fi are Gaussian and hence have moments of any order.

Let Φ ∈ Wm such that E [Φp] = 0 for each p ∈ P. Then any f ∈ DR(Tm),

" ∞ # ∞ X hω, fin X [hω, finΦ(ω)] E ehω,fiΦ(ω) = E Φ(ω) = E = 0, (5.17) n! n! n=0 n=0 where interchanging of summation is justified by Fubini’s theorem since

∞  n  ∞ X hω, fi X 1 q 2 | Φ(ω) | ≤ [hω, fi2n] Φ(ω)  E n! (n!) E E n=0 n=0 ∞ s X (2n − 1)!! nq 2 ≤ k T f k Φ(ω)  (n!)2 m E n=0 ∞ n X 2 nq 2 ≤ k T f k Φ(ω)  n! m E n=0

2 q 2kTmfk  2 = e · E Φ(ω) < ∞.

(We have used the Cauchy-Schwarz inequality and the moments of a Gaussian distribu- tion).  hω,fi  We have shown that E e Φ(ω) = 0 for any f ∈ DR(Tm) so by Theorem 5.3.2 we obtain Φ = 0 in Wm.

Lemma 5.3.7. Let f, g ∈ DR(Tm). Then

E[ehω,fi ehω,gi] = e(Tmf,Tmg). 5. The m Noise Space 37

Proof: hω,fi − 1 kT fk2 hω,fi E[e ] = e 2 m E[e ] = 1, (5.18)

since E[ehω,fi] is the moment generating function of the Gaussian random variable hω, fi 2 with variance kTmfk evaluated at 1. Thus we get hω,fi hω,gi (T f,T g) hω,f+gi (T f,T g) E[e e ] = e m m E[e ] = e m m .

The following formula is useful in calculating the Sm transform of the multiplication of two random variables, and can be easily proved using Lemma 5.3.7.

hω,fi hω,gi (Tms,Tmf) (Tms,Tmf) (Tms,Tmg) Sm e e = e e e , f, g ∈ DR(Tm). (5.19)

Proposition 5.3.8. Let {Φn} be a sequence in Wm that converges in Wm to Φ. Then for any s ∈ SR the sequence of real numbers {Sm (Φn)(s)} converges to Sm (Φ) (s).

Proof: For any s ∈ SR,

r h i q  hω,si  hω,si 2  2 |SmΦn(s) − SmΦ(s)| = |E e (Φn − Φ) | ≤ E (e ) · E (Φn − Φ) .

h 2i 2 hω,si kTmsk  2 By direct calculation E e = e and since E (Φn − Φ) −→ 0, the claim follows.

In the statement of Theorem 5.3.9 recall that Tm is a continuous operator from SR into L ( ) and so its adjoint is a continuous operator from L ( ) into 0 . 2 R 2 R SR

Theorem 5.3.9. For Φ ∈ Wm and s ∈ SR, Z ∗ SmΦ(s) = Φ(ω + TmTms)dµm(ω). Ω

hω,s1i Proof: Assume first that Φ (ω) = e where s1 ∈ SR. We have by Lemma 5.3.7 that

Z Z ∗ ∗ hω,s1i hTmTms,s1i Φ(ω + TmTms) dµm (ω) = e e dµm (ω) Ω Ω

∗ Z hTmTms,s1i hω,s1i = e e dµm (ω) Ω = e(Tms,Tms1) · 1

= SmΦ(s). 5. The m Noise Space 38

The result may be extended by linearity and by Proposition 5.3.8 to any Φ ∈ E, which

by Propositions 5.3.4 is a dense subset of Wm.

We can find the Sm transform of powers of hω, fi for f ∈ DR(Tm) by the formula for Hermite polynomials.

Corollary 5.3.10. For f ∈ DR(Tm) and s ∈ SR, we have that

bn/2c 1 k n−2k 2k n X − Smhω, fi (s) kTmfk (T s, T f) = n! 2 , (5.20) m m k!(n − 2k)! k=0

in particular

(Smhω, fi)(s) = (Tmf, Tms) (5.21)

and 2 2 2 (Smhω, fi )(s) = (Tmf, Tms) + kTmsk (5.22)

Proof: From Lemma 5.3.7 we get that

hω,fi (Tms,Tmf) (Sme )(s) = e , then, ∞ k ! ∞ k − 1 kT fk2 X hω, fi X (Tms, Tmf) e 2 m S (s) = (5.23) m k! k! k=0 k=0

By the linearity of the Sm transform and Fubini’s theorem, and replacing f by tf with t ∈ R we compare powers of t at both sides to get (5.20). This last corollary can be also formulated in terms of the Hermite polynomials. Recall

that the nth Hermite polynomial with parameter t ∈ R is defined by

bn/2c 1 k n−2k 2k X − x · t h[t] (x) n! 2 (5.24) n , k!(n − 2k)! k=0

(see for instance [35, p. 33]). For f ∈ D(Tm) we define

bn/2c 1 k n−2k 2k X − hω, fi · kTmfk h˜ (hω, fi) h[kTmfk] (hω, fi) = n! 2 , (5.25) n , n k!(n − 2k)! k=0

˜ and we also set h0 = 1. 5. The m Noise Space 39

So by (5.20) we have that

 ˜  n Smhn (hω, fi) (s) = (Tms, Tmf) . (5.26)

Using Equation 5.20 and Lemma 5.3.7, one can easily verify the following result:

Proposition 5.3.11. Let f ∈ DR(Tm). It holds that:

∞ ˜ X hk (hω, fi) ehω,fi = (5.27) k! k=0

It is possible to define a Wick product in Wm using the Sm transform.

Definition 5.3.12. Let Φ, Ψ ∈ Wm. The Wick product of Φ and Ψ is the element

Φ  Ψ ∈ Wm that satisfies

ST (Φ  Ψ)(s) = (ST Φ)(s)(ST Ψ)(s), ∀s ∈ SR, if it exists.

As this definition suggests, in general the Wick product is not stable in Wm.

From (5.26), the Wick product of Hermite polynomials satisfies

˜ ˜ ˜ hn (hω, fi)  hk (hω, fi) = hn+k (hω, fi) , n, k ∈ N, f ∈ DR(Tm). 6. STOCHASTIC INTEGRATION IN WM

We now use the Sm-transform to define a Wick-Itˆotype stochastic integral which can be seen as a version of Hitsuda-Skorokhod integral in Wm, and prove an Itˆoformula for this integral. In the next section we also show that for particular choices of m, our definition of the stochastic integral coincides with previously defined Wick-Itˆostochastic integrals for fractional Brownian motion; see [13,8]. We set

Bs(t) = Sm (Bm(t)) (s).

By (5.21) we see that

Z eiξt − 1 Bs(t) = (Tms, Tm1t) = m(ξ)sb(ξ) dξ. (6.1) R ξ This function is absolutely continuous with respect to Lebesgue measure and its derivative is Z 0 iξt (Bs(t)) = m(ξ)sb(ξ)e dξ. (6.2) R

We note that when Tm is a bounded operator from L2(R) into itself we have by a result 0 ∗ of Lebesgue (see [43, p. 410]), (Bs(t)) = (TmTms)(t) (a.e.).

Definition 6.0.13. Let M ∈ R be a Borel set and let X : M −→ Wm be a stochastic pro- 0 cess. The process X will be called integrable over M if for any s ∈ SR, (SmXt)(s)Bs(t)

is integrable on M, and if there is a Φ ∈ Wm such that Z 0 SmΦ(s) = (SmXt)(s)Bs(t) dt. M for any s ∈ SR. If X is integrable, Φ is uniquely determined by Theorem 5.3.2 and we R denote it by M XtdBm (t).

If T = IdL2(R), this definition coincides with the Hitsuda-Skorokhod integral [24, Chapter 8]. See also Section6. 6. Stochastic integration in Wm 41

Note that since

Z Z m(ξ) |B (t)0| ≤ m(ξ)|s(ξ)|dξ ≤ sup |(1 + ξ2)s(ξ)| dξ < ∞, s b b 2 R ξ R 1 + ξ for any s ∈ S there exists a constant Ks such that Z Z 0  hω,si | SmXt(s)Bs(t) dt| ≤ Ks |E Xte |dt M M Z  2  2 ≤ KsE e hω, si E[Xt ]dt. M

0 R 2 Thus a sufficient condition for the integrability of SmXt(s)Bs(t) is M E[Xt ]dt < ∞.

Theorem 6.0.14. Any non-random f ∈ DR(Tm) is integrable and we have,

Z τ f(t)dBm(t) = hω, 1[0,τ]fi. (6.3) 0

Proof: By virtue of (5.21) and the definition of the stochastic integral, we need to show that Z τ 0  f(t)Bs(t) dt = Tms, Tm1[0,τ]f . 0 Using formula (6.2) and Fubini’s theorem, we have:

Z τ Z τ Z  0 iξt f(t)Bs(t) dt = f(t) m(ξ)sb(ξ)e dξ dt 0 0 R Z Z τ  itξ = m(ξ)sb(ξ) f(t)e dt dξ R 0 Z   = m(ξ)sb(ξ) f\1[0,τ](ξ) dξ R  = Tms, Tm1[0,τ]f .

Proposition 6.0.15. The stochastic integral has the following properties:

1. For 0 ≤ a < b ∈ R, Z b Bm (b) − Bm (a) = dBm(t) a 6. Stochastic integration in Wm 42

2. Let X : M −→ Wm be an integrable process. Then Z Z XtdBm(t) = 1M XtdBm(t). M R

3. Let X : M −→ Wm an integrable process. Then Z  Z  E XtdBm(t) = Sm XtdBm(t) (0) = 0 M M .

4. The Wick product and the stochastic integral can be interchanged: Let X : R −→

Wm an integrable process and assume that for Y ∈ Wm, Y  Xt is integrable. Then, Z Z Y  XtdBm(t) = Y  XtdBm(t) R R

Proof: The proof of the first three items is easy and we omit it. The last item is proved in the following way: Z Z Sm(Y  XtdBm(t))(s) = (SmY )(s) (SmXt)(s)dBm M R Z = (SmY )(s)(SmXt)(s)dBm M Z = Sm( Y  XtdBm(t))(s). R

Example 6.0.16. For τ ≥ 0, we have by equation (5.22),

Z τ d 1 2 (Tms, Tm1t) (Tms, Tm1t) dt = (Tms, Tm1τ ) 0 dt 2 1 = S hω, 1 i − kT 1 k2 (s). 2 m t m t

Then Bm is integrable on the interval [0, τ], and we have

Z τ 1 2 1 2 Bm(t)dBm(t) = Bm(τ) − kTm1τ k . 0 2 2

This reduces to the well known result if m (ξ) ≡ 1 and Tm is then the identity operator. 6. Stochastic integration in Wm 43

Example 6.0.17. Let hfn be defined by (5.25). A similar argument to the one in (6.2)

will show that any f such that for any f1t ∈ D(Tm) we have that the function t 7→

(Tms, Tmf1t) is differentiable with time derivative Z d −itξ 0 (Tms, Tmf1t) = f(t) m(ξ)sb(ξ)e dξ = f(t)Bs(t) . dt R By a similar argument to Theorem 6.0.14 we have

1   1 S h˜ (hω, 1 fi)(t) (s) = (T s, T 1 f) n + 1 m n+1 τ n + 1 m m τ Z τ n 0 = (Tms, Tmf1t) f(t)Bs(t) dt 0 Z τ  ˜ = Sm f(t)hn (hω, 1tfi) dBm (t) (s). 0

Thus, Z τ ˜ 1 ˜ f(t)hn (hω, 1tfi) dBm (t) = hn+1 (hω, 1τ fi) . (6.4) 0 n + 1

It follows from (6.4) that for any polynomial p and f with 1tf ∈ D(Tm) the process

p(hω, 1tfi) is integrable. This result can be easily extended to the process

hω,1tfi t 7→ e , 1tf ∈ D(Tm),

and we also obtain:

Corollary 6.0.18. Z τ hω,1tfi hω,1tfi f(t)e dBm(t) = e − 1. 0

Example 6.0.19. Let f ∈ DR(Tm). Using (5.19) we can obtain

 Z τ  hω,fi hω,1ti hω,fi hω,1τ i hω,fi Sm e e dBm(t) (s) = Sm e e − e (s) 0 = e(Tms,Tmf) e(Tms,Tm1τ )e(Tmf,Tm1τ ) − 1 .

On the other hand,

Z τ  Z τ hω,fi hω,1ti (Tms,Tmf) (Tms,Tm1t) (Tmf,Tm1t) d Sm e e dBm(t) (s) = e e e (Tms, Tm1t) dt 0 0 dt = e(Tms,Tmf) e(Tms,Tm1τ )e(Tmf,Tm1τ ) − 1 Z τ (Tms,Tm1t) (Tmf,Tm1t) d − e e (Tmf, Tm1t) dt. 0 dt 6. Stochastic integration in Wm 44

So in general for an integrable stochastic process X and a random variable Y we have the undesirable result Z τ Z τ Y XtdBm(t) 6= YXtdBm(t). 0 0

6.1 Itˆo’sformula

In this section we prove an Itˆo’sformula. We begin by proving an extension of the classical to our setting.

hω,fi Theorem 6.1.1. Let f ∈ D(Tm), and let µ be the measure defined by µ(A) = E[e 1A]. The process ˜ Bm(t) , Bm(t) − (Tmf, Tm1t), is Gaussian and satisfies

˜ ˜ Eµ[Bm(t)Bm(s)] = (Tm1t,Tm1s).

˜ Proof: We will first prove that for all t ≥ 0, Bm(t) is a Gaussian random variable relative h i λB˜m(t) to the measure µ by considering its moment generating function Eµ e , λ ∈ R,

h i h 1 2 i λB˜m(t) hω,fi− kTmfk λhω,1ti−λ(Tmf,T m1t) Eµ e = E e 2 e (6.5) 1 2 −λ(Tmf,T m1t) − kTmfk  hω,f+λ1ti = e e 2 E e .

Since hω, f + λ1ti is a zero mean Gaussian random variable with variance

2 2 2 2 kTm (f + λ1t) k = kTmfk + λ kTm1tk + 2λ (Tmf, Tm1t) , its moment generating function evaluated at 1 is given by

1 2 1 2 2  hω,Tmf+λ1ti kTmfk λ kTm1tk λ(Tmf,Tm1t) E e = e 2 e 2 e , (6.6)

and we conclude from (6.5) that

h i 1 2 2 λB˜m(t) λ kTm1tk Eµ e = e 2 . (6.7)

˜ Thus for all t ≥ 0, Bm(t) is a zero mean Gaussian random variable on (Ω, G, µ). Similar arguments will show that any linear combination of time samples is a Gaussian variable, 6. Stochastic integration in Wm 45

and so Bem(t), t ≥ 0 is a Gaussian process. Finally, by the polarization formula,

˜ ˜ Eµ[Bm(t)Bm(s)] = (Tm1t,Tm1s).

R τ We now interpret integrals of the type 0 Φ(t)dt, where for every t ∈ [0, τ], Φ(t) ∈ Wm, as Pettis integrals, that is as

Z τ   Z τ E Φ(t)dt Ψ = E[Φ(t)Ψ]dt, ∀Ψ ∈ Wm, 0 0

under the hypothesis that the function t 7→ E[Φ(t)Ψ] belongs to L1([0, τ], dt) for every

Ψ ∈ Wm. See [25, pp. 77-78]. We note that if X is moreover pathwise integrable and

such that the pathwise integral belongs to Wm, then

Z τ E[|Xt|]dt < ∞, 0

and we can apply Fubini's theorem to show that the Pettis integral coincides with the pathwise integral. It is also clear from the definition of the Pettis integral that it commutes with the $S_m$ transform.

We introduce the conditions

\[
E\Big[|F(t,X_t)|\; e^{\langle\omega,s\rangle}\Big] < \infty, \tag{6.8}
\]
\[
E\Big[\Big|\tfrac{\partial F}{\partial t}(t,X_t)\Big|\; e^{\langle\omega,s\rangle}\Big] < \infty, \tag{6.9}
\]
\[
E\Big[\Big|\tfrac{\partial F}{\partial x}(t,X_t)\Big|\; e^{\langle\omega,s\rangle}\Big] < \infty, \tag{6.10}
\]
for $F \in C^{1,2}([0,\infty), \mathbb R)$. We shall now develop an Itô formula for a class of stochastic processes of the form

\[
X_\tau(\omega) = \int_0^\tau f(t)\,dB_m(t) = \langle\omega, 1_\tau f\rangle, \qquad \tau \ge 0,\ \ 1_\tau f \in D(T_m). \tag{6.11}
\]

Theorem 6.1.2. Let $F \in C^{1,2}([0,\infty), \mathbb R)$ satisfy (6.8)–(6.10), and assume that the function $\|T_m 1_t f\|^2$ is absolutely continuous with respect to the Lebesgue measure as a function of $t$. Then we have

\[
\begin{aligned}
F(\tau, X_\tau) - F(0,0) &= \int_0^\tau \frac{\partial}{\partial t}F(t, X_t)\,dt + \int_0^\tau f(t)\,\frac{\partial}{\partial x}F(t, X_t)\,dB_m(t) \\
&\quad + \frac12\int_0^\tau \frac{d}{dt}\|T_m 1_t f\|^2\; \frac{\partial^2}{\partial x^2}F(t, X_t)\,dt
\end{aligned}
\tag{6.12}
\]
in $W_m$.

The proof is based on the proof of Itô's formula for the S-transform approach to Hitsuda-Skorokhod integration in the standard white noise space, found in [35, Section 13.5].

Proof: Let $s \in \mathcal S_{\mathbb R}$ and $f \in D(T_m)$. It follows from Theorem 6.1.1 that for every $t \in [0,\tau]$, $X_t(\omega) = \langle\omega, 1_t f\rangle$ is normally distributed under the measure

\[
\mu_s(A) := E\left[1_A \exp\!\left(\langle\omega,s\rangle - \tfrac12\|T_m s\|^2\right)\right] = e^{-\frac12\|T_m s\|^2}\, E\left[1_A\, e^{\langle\omega,s\rangle}\right],
\]

with mean $(T_m s, T_m 1_t f)$ and variance $\|T_m 1_t f\|^2$.

Thus,

\[
\begin{aligned}
\big(S_m F(t,X_t)\big)(s) &= E\Big[e^{\langle\omega,s\rangle}\, F(t,X_t)\Big] \\
&= \int_{\mathbb R} F\big(t,\, u + (T_m 1_t f, T_m s)\big)\, \rho\big(\|T_m 1_t f\|^2, u\big)\,du,
\end{aligned}
\tag{6.13}
\]

where $\rho(w,u) = \frac{1}{\sqrt{2\pi w}}\, e^{-\frac{u^2}{2w}}$ satisfies

\[
\frac{\partial}{\partial w}\rho = \frac12\,\frac{\partial^2}{\partial u^2}\rho. \tag{6.14}
\]
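(Equation (6.14) is the one-dimensional heat equation for the Gaussian kernel; it can be verified by direct differentiation:
\[
\frac{\partial}{\partial u}\rho = -\frac{u}{w}\,\rho,
\qquad
\frac{\partial^2}{\partial u^2}\rho = \Big(\frac{u^2}{w^2} - \frac{1}{w}\Big)\rho,
\qquad
\frac{\partial}{\partial w}\rho = \Big(\frac{u^2}{2w^2} - \frac{1}{2w}\Big)\rho = \frac12\,\frac{\partial^2}{\partial u^2}\rho.)
\]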

Integrating by parts we obtain

\[
\int_{\mathbb R} F(t,u)\,\frac{\partial^2}{\partial u^2}\rho(w,u)\,du = \int_{\mathbb R} \frac{\partial^2}{\partial u^2}F(t,u)\,\rho(w,u)\,du. \tag{6.15}
\]
In view of (6.8)–(6.10) we may differentiate under the integral sign using (6.13), (6.14) and

(6.15), and obtain for $0 \le t \le \tau$,
\[
\begin{aligned}
\frac{d}{dt}\,S_m\big(F(t,X_t)\big)(s)
&= \int_{\mathbb R} \frac{\partial}{\partial t}F\big(t,\, u + (T_m 1_t f, T_m s)\big)\, \rho\big(\|T_m 1_t f\|^2, u\big)\,du \\
&\quad + \int_{\mathbb R} \frac{\partial}{\partial x}F\big(t,\, u + (T_m 1_t f, T_m s)\big)\,\frac{d}{dt}(T_m 1_t f, T_m s)\; \rho\big(\|T_m 1_t f\|^2, u\big)\,du \\
&\quad + \int_{\mathbb R} F\big(t,\, u + (T_m 1_t f, T_m s)\big)\,\frac{d}{dt}\|T_m 1_t f\|^2\; \frac{\partial}{\partial w}\rho\big(\|T_m 1_t f\|^2, u\big)\,du \\
&= S_m\!\left(\frac{\partial}{\partial t}F(t,X_t)\right)(s) + \frac{d}{dt}(T_m s, T_m 1_t f)\; S_m\!\left(\frac{\partial}{\partial x}F(t,X_t)\right)(s) \\
&\quad + \frac12\,\frac{d}{dt}\|T_m 1_t f\|^2 \cdot S_m\!\left(\frac{\partial^2}{\partial x^2}F(t,X_t)\right)(s).
\end{aligned}
\]

Hence,

\[
\begin{aligned}
S_m\big(F(\tau,X_\tau) - F(0,0)\big)(s)
&= \int_0^\tau S_m\!\left(\frac{\partial}{\partial t}F(t,X_t)\right)(s)\,dt \\
&\quad + \int_0^\tau \frac{d}{dt}(T_m s, T_m 1_t f)\; S_m\!\left(\frac{\partial}{\partial x}F(t,X_t)\right)(s)\,dt \\
&\quad + \frac12\int_0^\tau \frac{d}{dt}\|T_m 1_t f\|^2 \cdot S_m\!\left(\frac{\partial^2}{\partial x^2}F(t,X_t)\right)(s)\,dt.
\end{aligned}
\tag{6.16}
\]

By the definition of the stochastic integral,

\[
S_m\!\left(\int_0^\tau f(t)\,\frac{\partial}{\partial x}F(t,X_t)\,dB_m(t)\right)(s)
= \int_0^\tau S_m\!\left(\frac{\partial}{\partial x}F(t,X_t)\right)(s)\; f(t)\,B_s'(t)\,dt,
\]

which in view of Example 6.0.17 equals

\[
\int_0^\tau \frac{d}{dt}(T_m s, T_m 1_t f)\; S_m\!\left(\frac{\partial}{\partial x}F(t,X_t)\right)(s)\,dt.
\]

Thus we may now use Fubini’s theorem to interchange the Sm-transform and the pathwise

integral, and obtain that the $S_m$-transform of the right-hand side of (6.12) is exactly the right-hand side of (6.16), and the theorem is proved.
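As a quick consistency check, take $F(t,x) = x^2$ and any $f$ with $1_t f \in D(T_m)$ for all $t \le \tau$; conditions (6.8)-(6.10) hold since $X_t$ is Gaussian, and (6.12) gives
\[
\langle\omega, 1_\tau f\rangle^2 = 2\int_0^\tau f(t)\,\langle\omega, 1_t f\rangle\,dB_m(t) + \|T_m 1_\tau f\|^2 .
\]
For $f \equiv 1$ (provided $1_t \in D(T_m)$), this recovers the formula $\int_0^\tau B_m(t)\,dB_m(t) = \frac12 B_m(\tau)^2 - \frac12\|T_m 1_\tau\|^2$ stated earlier in this chapter.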

6.2 Relation to other white-noise extensions of the Wick-Itô integral

Recall that the white noise space corresponds to $m(\xi) \equiv 1$, so denoting it by $W_1$ is consistent with our notation, and $S_1$ is the classical S-transform of the white noise space. We define

a map $\widetilde T_m : W_m \longrightarrow W_1$ by describing its action on the dense set of stochastic polynomials in $W_m$:
\[
\widetilde T_m\, \langle\omega, f\rangle^n = \langle\omega, T_m f\rangle^n, \qquad f \in D(T_m).
\]

Note that since the range of $T_m$ is contained in $D(T_1) = D(\mathrm{id}_{L_2(\mathbb R)}) = L_2(\mathbb R)$, this map is well defined. It is easy to see that $\widetilde T_m$ is an isometry of Hilbert spaces. By continuity we obtain that

\[
\widetilde T_m\, e^{\langle\omega, f\rangle} = e^{\langle\omega, T_m f\rangle}, \qquad f \in D(T_m),
\]

hence
\[
\Big(S_1 \widetilde T_m\, e^{\langle\omega,f\rangle}\Big)(T_m s) = e^{(T_m s,\, T_m f)} = \Big(S_m\, e^{\langle\omega,f\rangle}\Big)(s).
\]

This relation between $S_1$ and $S_m$ extends so that for any $\Phi \in W_m$,
\[
\Big(S_1 \widetilde T_m \Phi\Big)(T_m s) = (S_m \Phi)(s).
\]

Let $X : [0,\tau] \longrightarrow W_m$ be a stochastic process. We have defined its Itô integral as the unique element $\Phi \in W_m$ (if it exists) having $S_m$-transform

\[
(S_m \Phi)(s) = \int_0^\tau (S_m X_t)(s)\;\frac{d}{dt}(T_m s, T_m 1_t)\,dt.
\]

This suggests that if we define in the white noise space the process $\tilde B_m$ as $\langle\omega, T_m 1_t\rangle$, and the stochastic integral with respect to $\tilde B_m$ as the unique element $\Phi \in W_1$ (if it exists) having

$S_1$-transform
\[
(S_1 \Phi)(s) = \int_0^\tau (S_1 X_t)(s)\;\frac{d}{dt}(s, T_m 1_t)\,dt, \tag{6.17}
\]
then the two definitions coincide in the sense that

\[
\widetilde T_m \int_0^\tau X_t\,dB_m(t) = \int_0^\tau \widetilde T_m X_t\, d\tilde B_m(t). \tag{6.18}
\]

Recall that the fractional Brownian motion can be obtained in our setting by taking $m(\xi) = \frac12|\xi|^{1-2H}$, $H \in (0,1)$, which results in $T_m = M_H$, where $M_H$ is defined in [16]. In the white noise space, the fractional Brownian motion can be defined as the continuous version

of the process $\{\langle\omega, M_H 1_t\rangle\}_{t\ge 0}$. An approach based on the definition described in (6.17) for the fractional Brownian motion was given in [5]. By Theorem 3.4 there, under appropriate conditions our

definition of the Hitsuda-Skorokhod integral in the case $T_m = M_H$ coincides, in the sense of (6.18), with the Hitsuda-Skorokhod integral defined there. Stochastic integration in the white noise setting for the family of stochastic processes considered in this work can be found in [1], and its equivalence to the integral described here can be obtained by an argument similar to that of Theorem 3.4 in [5].

7. APPLICATION IN OPTIMAL CONTROL

The study of continuous-time systems driven by noisy input has focused since its beginning primarily on the case of white noise and Brownian motion. This is mainly because the Brownian motion arises as the limit distribution of many discrete-time models and physical processes, which makes it a reasonable model for randomness in many applications. Motivated by applications in hydrology, telecommunications, and mathematical finance, there has been recent interest in input noises without independent increments. The fractional Brownian motion and the fractional noise are of particular interest; we refer to [15], [11], [37] and the book [6] for recent developments.

Note that the Itô formula (6.12) can be rewritten in differential form as

\[
dF(t, X_t) = \frac{\partial}{\partial t}F(t,X_t)\,dt + f(t)\,\frac{\partial}{\partial x}F(t,X_t)\,dB_m(t) + \frac12\,\frac{d}{dt}\|T_m 1_t f\|^2\,\frac{\partial^2}{\partial x^2}F(t,X_t)\,dt, \tag{7.1}
\]
or in its $S_m$-transform differential form as

\[
\begin{aligned}
\frac{d}{dt}\,S_m\big(F(t,X_t)\big)(s) &= S_m\!\left(\frac{\partial}{\partial t}F(t,X_t)\right)(s) + f(t)\,S_m\!\left(\frac{\partial}{\partial x}F(t,X_t)\right)(s)\,B_s'(t) \\
&\quad + \frac12\,\frac{d}{dt}\|T_m 1_t f\|^2\; S_m\!\left(\frac{\partial^2}{\partial x^2}F(t,X_t)\right)(s),
\end{aligned}
\tag{7.2}
\]

which is easily obtained from the proof of Theorem 6.1.2 and Example 6.0.17.

The fact that the Itô formula (6.12) for stochastic integrals in $W_m$ has a structure similar to the classical one (3.9) and to some formulas developed for the fractional Brownian motion [12, Eq. 4.1] suggests that some results in stochastic analysis based on the Brownian motion and the fractional Brownian motion can be extended to our more general setting. In this chapter we give a simple example of such a result in the field of stochastic optimal control. The same problem was formulated and solved in [29] for the case of the fractional Brownian motion. We show that the generalization to our setting is straightforward.

We fix $m$ such that $\int_{\mathbb R} \frac{m(\xi)}{1+\xi^2}\,d\xi < \infty$, and consider the case where the control dynamics are given by the following Wick-Itô-type stochastic differential equation (SDE), where the state is scalar valued:
\[
\begin{cases}
dX(t) = \big(a(t)X(t) + b(t)u(t)\big)\,dt + c(t)X(t)\,dB_m(t), \\
X(0) = x_0 \in \mathbb R \quad \text{(given and deterministic)}.
\end{cases}
\tag{7.3}
\]

Here $a(t)$ and $b(t) := (b_1(t), \dots, b_p(t))$ are measurable and essentially bounded deterministic functions of $t \ge 0$, $dB_m(t)$ is the Itô-type differential of $B_m(t)$, the function $t \mapsto c(t)$, $t \in [0,T]$, belongs to $D(T_m)$, and $\|T_m(1_t c(\cdot))\|^2$ is an absolutely continuous function of $t$.

Stochastic differential equations of the type (7.3) perturbed by white noise are widely used to describe the price development over time of a risky financial asset [40, Eq. 1.5.1]. In this case, $a(t)$ is called the drift, $c(t)$ is called the volatility, and we can further interpret $u(t)$ as some type of controlled influence on the price.

For every initial state and control $u(t)$, $0 < t \le \tau$, we define an associated cost functional

\[
J(x_0, u(\cdot)) := E\left[\int_0^\tau \Big(q(t)X(t)^2 + u^*(t)R(t)u(t)\Big)\,dt + gX(\tau)^2\right], \tag{7.4}
\]
where $q(t)$ and each entry of $R(t)$ are given essentially bounded deterministic functions of $t$, and $g$ is a given deterministic scalar. We restrict ourselves to the case where the control $u(t)$, $0 < t \le \tau$, is of Markovian linear feedback type, namely,

u(t) = k(t)X(t),

where $k(t) := (k_1(t), \dots, k_p(t))^*$ is an essentially bounded deterministic function of $t$. Under these assumptions, (7.3) reduces to the following SDE:
\[
\begin{cases}
dX(t) = \big(a(t) + b(t)k(t)\big)X(t)\,dt + c(t)X(t)\,dB_m(t), \\
X(0) = x_0 \in \mathbb R.
\end{cases}
\tag{7.5}
\]

The cost functional (7.4) may now be associated directly with the initial state and the

feedback gain k(t),

\[
J(x_0, k(\cdot)) := E\left[\int_0^\tau \Big(q(t) + k^*(t)R(t)k(t)\Big)X(t)^2\,dt + gX(\tau)^2\right]. \tag{7.6}
\]

The optimal stochastic control problem is to minimize the cost functional (7.6), for each

given x0 ∈ R, over the set of all Markovian linear feedback controls k(t), 0 < t ≤ τ.

Theorem 7.0.1.

\[
X(t) = x_0 \exp\left(\int_0^t \big(a(u) + b(u)k(u)\big)\,du + \int_0^t c(u)\,dB_m(u) - \frac12\|T_m(1_t c(\cdot))\|^2\right) \tag{7.7}
\]

is the unique solution of (7.5).

Proof: Since in this work we have defined the stochastic integral by means of the $S_m$ transform, we use the $S_m$ differential version of (7.5), which is

\[
\frac{d}{dt}X_s(t) = \big(a(t) + b(t)k(t) + c(t)B_s'(t)\big)\,X_s(t), \tag{7.8}
\]
where $X_s(t) := S_m(X(t))(s)$. By the existence and uniqueness theory for ordinary differential equations, it follows that for every $s \in \mathcal S$, there is a unique solution $X_s(t)$, $t \ge 0$, which satisfies (7.8) and is given by

\[
\begin{aligned}
X_s(t) &= X_s(0)\exp\left(\int_0^t \big(a(u) + b(u)k(u)\big)\,du + \int_0^t c(u)\,B_s'(u)\,du\right) \\
&= X(0)\exp\left(\int_0^t \big(a(u) + b(u)k(u)\big)\,du\right)\exp\big\{(T_m s, T_m(1_t c(\cdot)))\big\}.
\end{aligned}
\]
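The second equality uses the derivative formula of Example 6.0.17 applied with $f = c$, a small step left implicit above:
\[
\int_0^t c(u)\,B_s'(u)\,du = \int_0^t \frac{d}{du}\big(T_m s, T_m(1_u c(\cdot))\big)\,du = \big(T_m s, T_m(1_t c(\cdot))\big),
\]
since $1_0\, c(\cdot) = 0$.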

It follows from Lemma 5.3.7 and Theorem 5.3.2 that the (unique) inverse S-transform of the last expression is

\[
X(t) = X(0)\exp\left(\int_0^t \big(a(u) + b(u)k(u)\big)\,du\right)\exp\big\{\langle\omega, 1_t c(\cdot)\rangle\big\},
\]
which equals (7.7).

Note that the proof of the last theorem is significantly shorter than its counterpart for the special case of the fractional Brownian motion that appears in [29], thanks to the S-transform approach.

7.1 Solution

The solution of the control problem is given in the following theorem.

Theorem 7.1.1. Assume $g > 0$ and that almost everywhere $a(t) \neq 0$, $b(t) \neq 0$, $q(t) > 0$, $R(t) - I > 0$. The optimal Markovian linear feedback control $\hat u(t)$ is given by

\[
\hat u(t) = \hat k(t)X(t) \quad\text{with}\quad \hat k(t) = -R^{-1}(t)\,b^*(t)\,p(t), \tag{7.9}
\]
where $\{p(t),\ t \in [0,\tau]\}$ is the unique non-negative solution of the backward Riccati equation
\[
\begin{cases}
\dot p(t) + 2p(t)\Big(a(t) + \frac{d}{dt}\|T_m 1_t c(\cdot)\|^2\Big) + q(t) - b(t)R^{-1}(t)b^*(t)\,p(t)^2 = 0, \\
p(\tau) = g.
\end{cases}
\tag{7.10}
\]
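For concreteness, the backward Riccati equation (7.10) can be integrated numerically. The following sketch is only illustrative: the scalar control case $p = 1$ is assumed, a simple explicit backward Euler scheme is used, the derivative $\frac{d}{dt}\|T_m 1_t c(\cdot)\|^2$ must be supplied by the user (here called \texttt{dvar}), and the constant coefficients loosely follow the values used in Figure 7.1, with $a$ and \texttt{dvar} chosen arbitrarily.

\begin{verbatim}
import numpy as np

def solve_riccati(a, b, q, r, g, dvar, tau, n=10_000):
    """Integrate the scalar backward Riccati equation (7.10) on [0, tau].

    a, b, q, r, dvar are callables of t; dvar(t) stands for the derivative
    d/dt ||T_m 1_t c(.)||^2; g is the terminal condition p(tau) = g.
    Returns the time grid t and the solution p on it.
    """
    t = np.linspace(0.0, tau, n + 1)
    dt = t[1] - t[0]
    p = np.empty_like(t)
    p[-1] = g
    # march backward in time: p(t - dt) ~ p(t) - dt * p_dot(t)
    for i in range(n, 0, -1):
        ti, pi = t[i], p[i]
        p_dot = -(2.0 * pi * (a(ti) + dvar(ti)) + q(ti)
                  - b(ti) ** 2 / r(ti) * pi ** 2)
        p[i - 1] = pi - dt * p_dot
    return t, p

# Illustrative constant coefficients (placeholder values).
t, p = solve_riccati(a=lambda t: 0.1, b=lambda t: 0.3, q=lambda t: 1.0,
                     r=lambda t: 2.0, g=2.0, dvar=lambda t: 0.0, tau=5.0)
k_hat = -(0.3 / 2.0) * p   # optimal gain k_hat(t) = -R^{-1}(t) b*(t) p(t)
\end{verbatim}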

Proof: Under the assumptions on the coefficients, the unique solvability of the Riccati equation (7.10) is a classical result, proved in, e.g., [17]. Since $X(t)$ in (7.7) is an exponential of a Gaussian random variable, conditions (6.8)-(6.10) are trivially met for

\[
p(t)X(t)^2 = p(t)\,x_0^2\,\exp\left(2\int_0^t \big(a(u)+b(u)k(u)\big)\,du\right)\exp\left(2\int_0^t c(u)\,dB_m(u) - \|T_m(1_t c(\cdot))\|^2\right),
\]

hence by applying Itô's formula (6.12) we obtain

\[
\begin{aligned}
p(\tau)X(\tau)^2 &= p(0)x_0^2 + \int_0^\tau \left(\dot p(t)X(t)^2 + 2p(t)X(t)^2\Big(a(t)+b(t)k(t)+\frac{d}{dt}\|T_m(1_t c(\cdot))\|^2\Big)\right)dt \\
&\quad + \int_0^\tau 2c(t)\,p(t)\,X(t)^2\,dB_m(t).
\end{aligned}
\]

Rearranging and taking expectations on both sides, we get

\[
E\big[p(\tau)X(\tau)^2\big] = p(0)x_0^2 + E\left[\int_0^\tau X(t)^2\left(\dot p(t) + 2p(t)\Big(a(t)+b(t)k(t)+\frac{d}{dt}\|T_m(1_t c(\cdot))\|^2\Big)\right)dt\right].
\]

Since p(τ) = g, we have

\[
\begin{aligned}
J(x_0, k(\cdot)) &= p(0)x_0^2 + E\left[\int_0^\tau X(t)^2\left(\dot p(t) + q(t) + k^*(t)R(t)k(t) + 2p(t)\Big(a(t)+b(t)k(t)+\frac{d}{dt}\|T_m(1_t c(\cdot))\|^2\Big)\right)dt\right] \\
&= p(0)x_0^2 + E\left[\int_0^\tau X(t)^2\Big(\dot p(t) + 2p(t)a(t) + 2p(t)\frac{d}{dt}\|T_m(1_t c(\cdot))\|^2 + q(t)\right. \\
&\qquad\qquad \left.{}+ \big(k(t) + R^{-1}(t)b^*(t)p(t)\big)^* R(t)\big(k(t) + R^{-1}(t)b^*(t)p(t)\big) - b(t)R^{-1}(t)b^*(t)\,p(t)^2\Big)\,dt\right].
\end{aligned}
\]

Substituting the Riccati equation (7.10) into the last expression and rearranging yields

\[
J(x_0, k(\cdot)) = p(0)x_0^2 + E\left[\int_0^\tau X(t)^2\,\big(k(t) + R^{-1}(t)b^*(t)p(t)\big)^* R(t)\big(k(t) + R^{-1}(t)b^*(t)p(t)\big)\,dt\right].
\]

It follows that the cost functional achieves its minimum when $\hat k(t) = -R^{-1}(t)b^*(t)p(t)$, and that minimum is $p(0)x_0^2$.

7.2 Simulation

In this section we check the validity of our state-space model and the solution to the optimal control problem by means of numerical approximation and computer simulation. We use Monte Carlo simulation to compute the cost functional (7.6), associated here with two different linear Markovian controllers. The first controller, $k_{\mathrm{op}}(t)$, $0 \le t \le \tau$, is the optimal controller obtained from Theorem 7.1.1. The second controller, $k_{\mathrm{na}}(t)$, $0 \le t \le \tau$, would have been the optimal controller if the noise were white with the same drift as the real noise; that is, $k_{\mathrm{na}}$ corresponds to a naive design of the system, which does not take into account the true correlated nature of the noise.

In order to simulate a sample path of the fundamental stationary increment process $B_m$, we use 10,000 independent normally distributed pseudo-random numbers, multiplied by the square root of a covariance matrix of size $10{,}000 \times 10{,}000$, obtained from the covariance function

\[
E\big[B_m(t)B_m(s)\big] = \min\{t, s\} + t^{2H} + s^{2H} - |t-s|^{2H}, \qquad 0 < H < 1.
\]
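A minimal sketch of this sampling procedure is given below. The grid size, horizon and Hurst parameter are placeholder values, and a symmetric matrix square root via eigendecomposition is used as one possible choice of factorization.

\begin{verbatim}
import numpy as np

def simulate_Bm_path(tau=5.0, n=1_000, H=0.8, seed=0):
    """Draw one sample path of B_m on an n-point grid of (0, tau].

    Uses the covariance min(t,s) + t^{2H} + s^{2H} - |t-s|^{2H}, i.e. the
    sum of a standard Brownian motion and a fractional Brownian motion
    (with this normalization) of Hurst parameter H.
    """
    t = np.linspace(tau / n, tau, n)
    T, S = np.meshgrid(t, t, indexing="ij")
    cov = np.minimum(T, S) + T ** (2 * H) + S ** (2 * H) - np.abs(T - S) ** (2 * H)
    # symmetric square root of the covariance matrix; tiny negative
    # eigenvalues caused by round-off are clipped to zero
    w, V = np.linalg.eigh(cov)
    sqrt_cov = V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T
    v = np.random.default_rng(seed).standard_normal(n)
    return t, sqrt_cov @ v

t, path = simulate_Bm_path()   # one sample path of B_m at the grid points t
\end{verbatim}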

So in this case $B_m$ can be viewed as the sum of a standard Brownian motion and a fractional Brownian motion with Hurst parameter $H$. The corresponding spectral function is

\[
m(\xi) = 1 + c(H)\,|\xi|^{1-2H},
\]

where $c(H)$ is a constant depending on $H$. We pick non-time-varying coefficients for the state-space model (7.3), and define the signal-to-noise ratio in our model as $\mathrm{SNR} := a^2/c^2$. This is motivated by (7.7), since this is the ratio between the free term in the exponent, after removing the drift, and the noise amplitude. The measured cost ratio is depicted in Figure 7.1, in which we can observe, for example, that for a noise level of $-6$ dB the expected cost of the system (7.5) with $k = k_{\mathrm{na}}$ is about $1.6$ times the expected cost in the optimal case.

Fig. 7.1: The ratio between the resulting cost using the naive controller and the resulting cost using the optimal Markovian controller, averaged over 5,000 runs, for the system (7.3) with $\tau = 5$, $x_0 = 3$, $b = 0.3$, $q = 1$, $g = 2$, $r = 2$, $H = 0.8$. (In order to simulate one sample path of the process $B_m$ at $n$ time samples, we draw a vector $v$ consisting of $n$ independent standard normal samples, and then compute the square root $\sqrt{A}$ of the $n \times n$ covariance matrix $A$ of $B_m$ at these $n$ times. The vector $\sqrt{A}\,v$ can be seen as a vector of samples taken from one sample path of $B_m$.)

8. CONCLUSIONS

In this work we have developed the basics of a stochastic distribution theory based on noises with arbitrary spectrum subject to the condition (4.1), and a stochastic integration theory using the analogue of the S-transform in our new setting. This was done using a variation on Hida's white noise space theory. Unlike related extensions of the white noise theory, such as [5] for the fractional noise and [2] for the same family of noises considered here, our introduction of the characteristic functional (4.3) and the space $W_m$ allows a natural and simple representation of the noise as the underlying stochastic process of this space; it plays the same role in $W_m$ that the white noise plays in the white noise space $W_1$. The fact that the Itô formula for stochastic integrals in $W_m$ has a structure similar to the original one suggests that many results in stochastic analysis based on the Brownian motion can be extended to our more general setting. In Chapter 7 we gave an example of such an extension in the field of stochastic optimal control. In view of Chapter 7, this work can be continued by further extending other stochastic differential equations driven by white noise to our non-white setting. Additional directions in which this work can be extended are given below.

• The Wiener-Itô chaos expansion plays a key role in the white noise space theory in the sense that the spaces of stochastic distributions are defined by the chaos coefficients, and it seems to depend on the particular choice of the basis elements

in $L_2(\mathbb R)$, as suggested for example by the definition of the singular white noise (3.8). Moreover, as mentioned at the end of Section 5.3, an alternative approach to define the Hitsuda-Skorokhod integral in $W_m$ can be carried out by means of the chaos expansion and the Wick product (which does not depend on the basis). In our

more general case, we have replaced L2(R) with the domain of Tm, and considered

the Hilbert space generated by the Gaussian random variables $\{\langle\omega, f\rangle,\ f \in D(T_m)\}$.

Chaos expansion for Wm involves a particular basis for that Hilbert space, and this

is given by inverting the operator Tm.

• Spaces of stochastic distributions and corresponding Gelfand triples for the space $W_m$ have not yet been defined, as there are a few issues that need to be considered in this regard. If we wish to follow the same line as in the Kondratiev spaces of the white noise space, there is still the question of whether the definition depends on the basis elements, and in that case which basis to pick. An alternative definition can be given in terms of the S-transform, similar to what was done in [35, Chapter 8] for Hida's spaces of stochastic distributions. We ask how different these two approaches are. It is also interesting to note that for a particular family of spectral functions $m$, the time derivative of $B_m$ is a stationary (ordinary) process that already belongs to $W_m$ (this is the case in Example 5.2.2, since $e^{i\xi t}$ belongs to $L_1(m(\xi)d\xi)$ in that case), so it may happen that some properties of $W_m$ are fundamentally different from those of the standard white noise space.

• It turns out that the chaos expansion is useful in the solution of some linear projection problems in $W_m$. This approach is now being investigated by the author. A special version of the chaos expansion for the space $W_m$ and its application to prediction problems for Gaussian processes will be presented and discussed in a future work [4].

REFERENCES

[1] D. Alpay, H. Attia, and D. Levanony. White noise based stochastic calculus associated with a class of Gaussian processes. Arxiv manuscript arXiv:1008.0186v1 (2010). To appear in Opuscula Mathematica.

[2] D. Alpay, H. Attia, and D. Levanony. On the characteristics of a class of Gaussian processes within the white noise space setting. Stochastic Processes and their Applications, 120:1074–1104, 2010.

[3] D. Alpay, P. Jorgensen, and D. Levanony. A class of Gaussian processes with fractional spectral measures. J. Funct. Anal., 261(2):507–541, 2011.

[4] D. Alpay and A. Kipnis. Chaos expansion approach to optimal prediction of Gaussian processes. In preparation, 2012.

[5] C. Bender. An S-transform approach to integration with respect to a fractional Brownian motion. Bernoulli, 9(6):955–983, 2003.

[6] F. Biagini. Stochastic calculus for fractional Brownian motion and applications. Probability and its applications. Springer, 2008.

[7] F. Biagini, B. Øksendal, A. Sulem, and N. Wallner. An introduction to white-noise theory and Malliavin calculus for fractional Brownian motion. Proceedings of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences, 460(2041):347, 2004.

[8] F. Biagini, Y. Hu, B. Øksendal, and T. Zhang. Stochastic calculus for fractional Brownian motion and applications. Probability and its Applications (New York). Springer-Verlag London Ltd., London, 2008.

[9] R. M. Blumenthal and R. K. Getoor. Markov processes and potential theory. Pure and Applied Mathematics, Vol. 29. Academic Press, New York, 1968.

[10] D. Alpay and A. Kipnis. Stochastic integration with respect to a class of Gaussian stationary increment processes using an extension of the white noise space. 2011.

[11] L. Decreusefond and A. S. Üstünel. Stochastic analysis of the fractional Brownian motion. Pages 75–86, 1998.

[12] T. E. Duncan, Y. Z. Hu, and B. Pasik-Duncan. Stochastic calculus for fractional Brownian motion. I: Theory. SIAM J. Control Optim., 38:582–612, 1999.

[13] T.E. Duncan, Y. Hu, and B. Pasik-Duncan. Stochastic calculus for fractional Brownian motion. I. Theory. SIAM J. Control Optim., 38(2):582–612 (electronic), 2000.

[14] T.E. Duncan, Y.Z. Hu, and B. Pasik-Duncan. Stochastic calculus for fractional Brownian motion. I. Theory. In Decision and Control, 2000. Proceedings of the 39th IEEE Conference on, volume 1, pages 212–216. IEEE, 2000.

[15] T.E. Duncan, B. Maslowski, and B. Pasik-Duncan. Fractional Brownian motion and stochastic equations in Hilbert spaces. Stoch. Dyn, 2(2):225–250, 2002.

[16] R.J. Elliott and J. Van Der Hoek. A general fractional white noise theory and applications to finance. Mathematical Finance, 13(2):301–330, 2003.

[17] G. Freiling, G. Jank, and H. Abou-Kandil. On global existence of solutions to coupled matrix Riccati equations in closed-loop Nash games. IEEE Trans. A.C., 41:264–269, 1996.

[18] L. Gross. Potential theory on Hilbert space. Journal of Functional Analysis, 1967.

[19] I.M. Guelfand, G.E. Shilov, and G. Rideau. Les distributions. Number v. 1 in Collection universitaire de mathématiques. Dunod, volume 2 of same series, 1972.

[20] I.M. Guelfand and N.Y. Vilenkin. Les distributions. Tome 4: Applications de l'analyse harmonique. Collection Universitaire de Mathématiques, No. 23. Dunod, Paris, 1967.

[21] T. Hida. Analysis of Brownian functionals. Carleton Univ., Ottawa, Ont., 1975. Carleton Mathematical Lecture Notes, No. 13.

[22] T. Hida. Analysis of Brownian functionals. Stochastic Systems: Modeling, Identification and Optimization, I, pages 53–59, 1976.

[23] T. Hida. Brownian motion. Applications of mathematics. Springer-Verlag, 1980.

[24] T. Hida, H. Kuo, J. Potthoff, and L. Streit. White noise, volume 253 of Mathematics and its Applications. Kluwer Academic Publishers Group, Dordrecht, 1993. An infinite-dimensional calculus.

[25] E. Hille and R.S. Phillips. Functional analysis and semi-groups, volume 31. Amer Mathematical Society, 1957.

[26] M. Hitsuda. Formula for Brownian partial derivatives. Second Japan-USSR Symp, pages 111–114, 1972.

[27] H. Holden, B. Øksendal, J. Ubøe, and T. Zhang. Stochastic partial differential equations. Probability and its Applications. Birkhäuser Boston Inc., Boston, MA, 1996.

[28] Y. Hu and B. Øksendal. Fractional white noise calculus and applications to finance. Infinite Dimensional Analysis, Quantum Probability and Related Topics, 6:1–32, 2003.

[29] Y. Hu and X.Y. Zhou. Stochastic control for linear systems driven by fractional noises. SIAM Journal on Control and Optimization, 43:2245, 2005.

[30] K. Itô. Stochastic integral. Proceedings of the Japan Academy, Series A, Mathematical Sciences, 20(8):519–524, 1944.

[31] S. Janson. Gaussian Hilbert spaces, volume 129 of Cambridge Tracts in Mathematics. Cambridge University Press, Cambridge, 1997.

[32] I. Karatzas and S.E. Shreve. Brownian motion and stochastic calculus. Graduate texts in mathematics. Springer, 1991.

[33] M.G. Krein. On the problem of continuation of helical arcs in Hilbert space. C. R. (Doklady) Acad. Sci. URSS (N.S.), 45:139–142, 1944.

[34] I. Kubo and S. Takenaka. Calculus on Gaussian white noise I. Proceedings of the Japan Academy, (50):239–256, 1980.

[35] Hui-Hsiung Kuo. White noise distribution theory. Probability and Stochastics Series. CRC Press, Boca Raton, FL, 1996.

[36] J. von Neumann and I. J. Schoenberg. Fourier integrals and metric geometry. Trans. Amer. Math. Soc., 50:226–251, 1941.

[37] D. Nualart. Stochastic integration with respect to fractional Brownian motion and applications. Contemporary Mathematics, 336:3–40, 2003.

[38] D. Nualart and M. S. Taqqu. Wick-Itô formula for regular processes and applications to the Black and Scholes formula. Stochastics: An International Journal of Probability and Stochastic Processes, 80(5):477–487, 2008.

[39] D. Nualart and M. Zakai. Generalized stochastic integrals and the Malliavin calculus. Probability Theory and Related Fields, 73:255–280, 1986. doi:10.1007/BF00339940.

[40] B.K. Øksendal. Stochastic Differential Equations: An Introduction with Applications. Universitext (1979). Springer, 2003.

[41] J. Potthoff and L. Streit. A characterization of Hida distributions. Journal of Functional Analysis, 101(1):212–229, 1991.

[42] W. Rudin. Functional analysis. International Series in Pure and Applied Mathematics. McGraw-Hill Inc., New York, second edition, 1991.

[43] L. Schwartz. Analyse. III, volume 44 of Collection Enseignement des Sciences [Collection: The Teaching of Science]. Hermann, Paris, 1998. Calcul intégral.

[44] A. V. Skorokhod. On a generalization of a stochastic integral. Theory of Probability and its Applications, 20(2):219–233, 1976.

[45] R. L. Stratonovich. A new representation for stochastic integrals and equations. SIAM Journal on Control, 4(2):362–371, 1966.

[46] L.M. Surhone, M.T. Tennoe, and S.F. Henssonow. Stratonovich Integral. VDM Verlag Dr. Mueller AG & Co. Kg, 2010.