An Introduction to Malliavin Calculus
Lecture Notes, Summer Term 2013
by Markus Kunze

Contents

Chapter 1. Stochastic Calculus
1.1. The Wiener Chaos Decomposition
1.2. The Malliavin Derivative
1.3. The Divergence Operator
1.4. The Ornstein-Uhlenbeck Semigroup
1.5. Multiple Wiener Integrals
1.6. Stochastic Calculus in the White Noise Case
1.7. Itô's Integral and the Clark-Ocone Formula
Chapter 2. Smoothness of Probability Laws
2.1. Absolute Continuity
2.2. Smoothness of the Density
2.3. Stochastic Differential Equations
2.4. Hörmander's Theorem
Bibliography

CHAPTER 1
Stochastic Calculus

The general setting for Malliavin calculus is a Gaussian probability space, i.e. a probability space $(\Omega, \Sigma, \mathbb{P})$ together with a closed subspace $\mathscr{H}$ of $L^2(\Omega, \Sigma, \mathbb{P})$ consisting of centered Gaussian random variables. It is often convenient to assume that $\mathscr{H}$ is isometric to another Hilbert space $H$, typically an $L^2$-space over a parameter set $T$.

1.1. The Wiener Chaos Decomposition

We recall that a real-valued random variable $X$, defined on a probability space $(\Omega, \Sigma, \mathbb{P})$, is called Gaussian (or, more precisely, centered Gaussian) if its characteristic function $\varphi_X$, defined by $\varphi_X(t) := \mathbb{E}\exp(itX)$, is of the form $\varphi_X(t) = e^{-\frac{qt^2}{2}}$ for some $q \geq 0$. It is well known that a Gaussian random variable is either constantly zero (in which case $q = 0$) or else $q > 0$ and the distribution has density
\[ \frac{1}{\sqrt{2\pi q}}\, e^{-\frac{x^2}{2q}} \]
with respect to Lebesgue measure $dx$. In that case, the random variable has finite moments of all orders, its mean is zero (whence it is called "centered") and its variance is $q$. A Gaussian random variable is called standard if it has variance $1$.

A family $(X_i)_{i \in I}$ of real-valued random variables is called a Gaussian family, or jointly Gaussian, if for any $n \in \mathbb{N}$ and any choice $i_1, \dots, i_n$ of distinct indices in $I$, the vector $(X_{i_1}, \dots, X_{i_n})$ is a Gaussian vector. The latter means that for any $\alpha \in \mathbb{R}^n$, the real-valued random variable $\sum_{k=1}^n \alpha_k X_{i_k}$ is Gaussian.

We now introduce isonormal Gaussian processes.

Definition 1.1.1. Let $H$ be a real, separable Hilbert space with inner product $\langle \cdot, \cdot \rangle$. An $H$-isonormal Gaussian process is a family $W = \{W(h) : h \in H\}$ of real-valued random variables, defined on a common probability space $(\Omega, \Sigma, \mathbb{P})$, such that $W(h)$ is a Gaussian random variable for all $h \in H$ and, for $h, g \in H$, we have $\mathbb{E}\, W(h)W(g) = \langle h, g \rangle$.

If the Hilbert space $H$ is clear from the context, we will also speak of isonormal Gaussian processes. Given an isonormal Gaussian process, the probability space on which the random variables are defined will be denoted by $(\Omega, \Sigma, \mathbb{P})$. We now note some properties of isonormal Gaussian processes.

Proposition 1.1.2. Let $H$ be a real, separable Hilbert space.
(1) There exists an $H$-isonormal Gaussian process $W$.
Now let $W = \{W(h) : h \in H\}$ be any $H$-isonormal Gaussian process.
(2) The map $h \mapsto W(h)$ is an isometry; in particular, it is linear.
(3) $W$ is a Gaussian family.

Proof. (2) It suffices to prove that $W$ is linear; in that case, it follows directly from the definition of an isonormal process that $W$ is an isometry. Thus, let $h, g \in H$ and $\lambda, \mu \in \mathbb{R}$. We have
\begin{align*}
\mathbb{E}\big( W(\lambda h + \mu g) - \lambda W(h) - \mu W(g) \big)^2
&= \mathbb{E}\, W(\lambda h + \mu g)^2 - 2\lambda\, \mathbb{E}\, W(\lambda h + \mu g)W(h) - 2\mu\, \mathbb{E}\, W(\lambda h + \mu g)W(g) \\
&\quad + \lambda^2\, \mathbb{E}\, W(h)^2 + 2\lambda\mu\, \mathbb{E}\, W(h)W(g) + \mu^2\, \mathbb{E}\, W(g)^2 \\
&= \|\lambda h + \mu g\|^2 - 2\lambda \langle \lambda h + \mu g, h \rangle - 2\mu \langle \lambda h + \mu g, g \rangle + \lambda^2 \|h\|^2 + 2\lambda\mu \langle g, h \rangle + \mu^2 \|g\|^2 \\
&= 0.
\end{align*}
This implies that $W(\lambda h + \mu g) = \lambda W(h) + \mu W(g)$ almost surely.
(3) For $n \in \mathbb{N}$, $h_1, \dots, h_n \in H$ and $\alpha \in \mathbb{R}^n$, we have, by linearity,
\[ \sum_{k=1}^n \alpha_k W(h_k) = W\Big( \sum_{k=1}^n \alpha_k h_k \Big), \]
which is Gaussian by assumption.

(1) Since $H$ is separable, it has a countable orthonormal basis $(h_k)_{k \in \mathbb{N}}$ (if the basis is finite, the proof is similar). Let $\gamma$ denote standard Gaussian measure on $\mathbb{R}$ and consider the infinite product space
\[ (\Omega, \Sigma, \mathbb{P}) := \Big( \prod_{k=1}^\infty \mathbb{R},\ \bigotimes_{k=1}^\infty \mathcal{B}(\mathbb{R}),\ \bigotimes_{k=1}^\infty \gamma \Big). \]
By construction, the random variables $X_n$, defined by $X_n((\omega_k)_{k \in \mathbb{N}}) := \omega_n$, are independent and Gaussian. In particular, they form an orthonormal basis of their closed linear span in $L^2(\Omega, \Sigma, \mathbb{P})$. We now define $W : H \to L^2(\Omega, \Sigma, \mathbb{P})$ to be the isometry that sends $h_k$ to $X_k$. Then $W$ is an $H$-isonormal Gaussian process, as is easy to see. $\square$

Let us give some examples of isonormal Gaussian processes.

Example 1.1.3. (Brownian motion) Consider the Hilbert space $H = L^2((0,\infty), \mathcal{B}((0,\infty)), \lambda)$, where $\lambda$ is Lebesgue measure. By Proposition 1.1.2, there exists an $H$-isonormal Gaussian process $W$. Let $B_t := W(\mathbb{1}_{(0,t]})$. Then $B_t$ is a centered Gaussian random variable with variance
\[ \mathbb{E}\, B_t^2 = \mathbb{E}\, W(\mathbb{1}_{(0,t]}) W(\mathbb{1}_{(0,t]}) = \langle \mathbb{1}_{(0,t]}, \mathbb{1}_{(0,t]} \rangle = t. \]
Moreover, given $0 \leq t_0 < t_1 < \dots < t_n = t < t+s$, the functions $\mathbb{1}_{(t_0,t_1]}, \dots, \mathbb{1}_{(t_{n-1},t]}, \mathbb{1}_{(t,t+s]}$ are orthogonal in $H$, hence the random variables $B_{t_1} - B_{t_0} = W(\mathbb{1}_{(t_0,t_1]}), \dots, B_t - B_{t_{n-1}}, B_{t+s} - B_t$ are orthogonal in $L^2(\Omega)$, i.e. they are uncorrelated. As these random variables are jointly Gaussian (Proposition 1.1.2), they are independent. This shows that $B_{t+s} - B_t$ is independent of the $\sigma$-algebra $\mathcal{F}_t := \sigma(B_r : r \leq t)$. Thus, the process $(B_t)$ is a Brownian motion, except for the fact that we have not proved that it has continuous paths. However, using Kolmogorov's theorem, it can be proved that $B_t$ has a continuous modification. We refer the reader to the literature for a proof. One often writes
\[ \int_0^T f(t)\, dB_t := W(\mathbb{1}_{(0,T)} f) \]
and calls $\int_0^T f(t)\, dB_t$ the Wiener integral of $f$ over $(0,T)$.

Example 1.1.4. (d-dimensional Brownian motion) Consider the Hilbert space $H = L^2((0,\infty), \mathcal{B}((0,\infty)), \lambda; \mathbb{R}^d)$. We denote the canonical basis of $\mathbb{R}^d$ by $(e_1, \dots, e_d)$, i.e. $e_i$ is the vector which has a $1$ at position $i$ and $0$'s at all other positions. If we put $B_t^j := W(\mathbb{1}_{(0,t]} e_j)$, then the vector $B_t := (B_t^1, \dots, B_t^d)$ is a $d$-dimensional Brownian motion, i.e. $B_t^j$ is a Brownian motion for every $j$, and $B_t^j$ and $B_t^i$ are independent for $j \neq i$.

Let us also mention, without going into details, a further example which motivates the abstract setting we consider.

Example 1.1.5. (Fractional Brownian motion) A fractional Brownian motion is a Gaussian process with covariance function
\[ c_H(t,s) = \frac{1}{2}\big( t^{2H} + s^{2H} - |t-s|^{2H} \big), \]
where $H \in (0,1)$ is the so-called Hurst parameter. The choice $H = \frac{1}{2}$ yields $c_{\frac{1}{2}}(t,s) = \min\{t,s\}$, which is the covariance function of Brownian motion. Let $\mathcal{E}$ denote the step functions on, say, $(0,T)$. It can be proved that there exists an inner product $\langle \cdot, \cdot \rangle_H$ on $\mathcal{E}$ such that
\[ \langle \mathbb{1}_{(0,t]}, \mathbb{1}_{(0,s]} \rangle_H = c_H(t,s). \]
We denote by $V_H$ the completion of $\mathcal{E}$ with respect to the inner product $\langle \cdot, \cdot \rangle_H$. Then a $V_H$-isonormal Gaussian process $W$ gives rise, as above, to a fractional Brownian motion with Hurst parameter $H$.

The range of the isonormal process $W$ is the subspace $\mathscr{H}$ that was mentioned in the introduction. We now begin to study the structure of that space. This will be done using Hermite polynomials.

Definition 1.1.6. For $n \in \mathbb{N}_0$, the $n$-th Hermite polynomial $H_n$ is defined by $H_0 \equiv 1$ and
\[ H_n(x) := \frac{(-1)^n}{n!}\, e^{\frac{x^2}{2}}\, \frac{d^n}{dx^n} e^{-\frac{x^2}{2}} \]
for $n \geq 1$.
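For concreteness, the first few polynomials can be generated directly from this formula. The following is a minimal symbolic sketch in Python (sympy is assumed; the helper name hermite_poly is ad hoc, not part of the notes):

```python
import sympy as sp

x = sp.symbols('x')

def hermite_poly(n):
    """H_n(x) = ((-1)^n / n!) * e^{x^2/2} * (d/dx)^n e^{-x^2/2}, with H_0 = 1."""
    if n == 0:
        return sp.Integer(1)
    expr = (sp.Integer(-1)**n / sp.factorial(n)) \
        * sp.exp(x**2 / 2) * sp.diff(sp.exp(-x**2 / 2), x, n)
    # simplify cancels the exponential factors, leaving a polynomial
    return sp.expand(sp.simplify(expr))

for n in range(5):
    print(n, hermite_poly(n))
# Mathematically: H_0 = 1, H_1 = x, H_2 = (x^2 - 1)/2,
#                 H_3 = (x^3 - 3x)/6, H_4 = (x^4 - 6x^2 + 3)/24
```

Note the normalization: because of the factor $1/n!$ in Definition 1.1.6, these polynomials differ from the probabilists' Hermite polynomials $\mathrm{He}_n$ by $\mathrm{He}_n = n!\, H_n$; for instance $H_2(x) = (x^2-1)/2$ while $\mathrm{He}_2(x) = x^2 - 1$.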
Consider the function $F(t,x) := \exp(tx - t^2/2)$. Then the Hermite polynomials are the coefficients in the power series expansion of $F$ with respect to $t$. Indeed, we have
\begin{align*}
F(t,x) &= \exp\Big( \frac{x^2}{2} - \frac{1}{2}(x-t)^2 \Big) \\
&= e^{\frac{x^2}{2}} \sum_{n=0}^\infty \frac{t^n}{n!} \frac{d^n}{dt^n} e^{-\frac{(x-t)^2}{2}} \Big|_{t=0} \\
&= \sum_{n=0}^\infty t^n\, \frac{(-1)^n}{n!}\, e^{\frac{x^2}{2}}\, \frac{d^n}{dz^n} e^{-\frac{z^2}{2}} \Big|_{z=x} = \sum_{n=0}^\infty t^n H_n(x).
\end{align*}
We note that the convergence of the series is uniform on compact sets. Now some basic properties of the Hermite polynomials follow easily:

Lemma 1.1.7. For $n \geq 1$, we have
(1) $H_n'(x) = H_{n-1}(x)$;
(2) $(n+1)H_{n+1}(x) = xH_n(x) - H_{n-1}(x)$;
(3) $H_n(-x) = (-1)^n H_n(x)$.

Proof. Throughout, we set $F(t,x) := \exp(tx - t^2/2)$.
(1) We have $\frac{\partial}{\partial x} F(t,x) = tF(t,x) = \sum_{n=0}^\infty t^{n+1} H_n(x)$. On the other hand, interchanging summation and differentiation, $\frac{\partial}{\partial x} F(t,x) = \sum_{n=0}^\infty t^n H_n'(x)$. Thus (1) follows by equating coefficients.
(2) follows similarly, observing that $\frac{\partial}{\partial t} F(t,x) = (x-t)F(t,x) = \sum_{n=0}^\infty \big( t^n x H_n(x) - t^{n+1} H_n(x) \big)$, while, on the other hand, $\frac{\partial}{\partial t} F(t,x) = \sum_{n=0}^\infty n t^{n-1} H_n(x)$.
(3) follows from the identity $F(t,-x) = F(-t,x)$: equating the coefficients of $t^n$ in $\sum_{n=0}^\infty t^n H_n(-x) = \sum_{n=0}^\infty (-1)^n t^n H_n(x)$ yields the claim. $\square$
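The three identities also lend themselves to a quick symbolic sanity check. The sketch below makes the same assumptions as before (sympy; the ad hoc helper hermite_poly is repeated so the snippet is self-contained) and verifies (1)-(3) for small $n$:

```python
import sympy as sp

x = sp.symbols('x')

def hermite_poly(n):
    # H_n(x) = ((-1)^n / n!) e^{x^2/2} (d/dx)^n e^{-x^2/2}, H_0 = 1
    if n == 0:
        return sp.Integer(1)
    return sp.expand(sp.simplify(sp.Integer(-1)**n / sp.factorial(n)
                                 * sp.exp(x**2 / 2)
                                 * sp.diff(sp.exp(-x**2 / 2), x, n)))

for n in range(1, 6):
    H_prev, H_n, H_next = (hermite_poly(k) for k in (n - 1, n, n + 1))
    assert sp.simplify(sp.diff(H_n, x) - H_prev) == 0               # (1)
    assert sp.simplify((n + 1) * H_next - (x * H_n - H_prev)) == 0  # (2)
    assert sp.simplify(H_n.subs(x, -x) - (-1)**n * H_n) == 0        # (3)
print("Lemma 1.1.7 (1)-(3) verified for n = 1, ..., 5")
```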