Introduction to Lévy Processes

Huang Lorick
[email protected]

Document type: These are lecture notes. Typos, errors, and imprecisions are expected. Comments are welcome! This version is available at http://perso.math.univ-toulouse.fr/lhuang/enseignements/

Year of publication: 2021

Terms of use: This work is licensed under a Creative Commons Attribution 4.0 International license: https://creativecommons.org/licenses/by/4.0/

Contents

1  Introduction and Examples
   1.1  Infinitely divisible distributions
   1.2  Examples of infinitely divisible distributions
   1.3  The Lévy-Khintchine formula
   1.4  Digression on Relativity
2  Lévy processes
   2.1  Definition of a Lévy process
   2.2  Examples of Lévy processes
   2.3  Exploring the Jumps of a Lévy Process
3  Proof of the Lévy-Khintchine formula
   3.1  The Lévy-Itô Decomposition
   3.2  Consequences of the Lévy-Itô Decomposition
   3.3  Exercises
   3.4  Discussion
4  Lévy processes as Markov Processes
   4.1  Properties of the Semi-group
   4.2  The Generator
   4.3  Recurrence and Transience
   4.4  Fractional Derivatives
5  Elements of Stochastic Calculus with Jumps
   5.1  Example of Use in Applications
   5.2  Stochastic Integration
   5.3  Construction of the Stochastic Integral
   5.4  Quadratic Variation and Itô Formula with Jumps
   5.5  Stochastic Differential Equation

Bibliography

Chapter 1

Introduction and Examples

In this introductory chapter, we start by defining the notion of infinitely divisible distributions. We then give examples of such distributions and end the chapter by stating the celebrated Lévy-Khintchine formula. The proof of the latter will be given in a subsequent chapter.

1.1  Infinitely divisible distributions

Historically, Paul Lévy was interested in the "arithmetic of probabilities", where he investigated probability distributions that can be decomposed as the sum of independent copies of themselves. This field gave rise to what we now call infinitely divisible distributions. Infinitely divisible distributions and Lévy processes are closely related, as Lévy processes have infinitely divisible distributions. We start by introducing the concept of an infinitely divisible distribution and give some examples.

Definition 1.1.1. We say that a random variable X is infinitely divisible if for all n ∈ N there exist independent, identically distributed random variables Y_1, ..., Y_n such that

    X (d)= Y_1 + ··· + Y_n.

A very simple consequence of this definition is the following result.

Proposition 1.1.2. The following are equivalent:

  • X has an infinitely divisible distribution;
  • for each n, the distribution µ_X of X has an n-th convolution root that is itself the distribution of a random variable;
  • for each n, the characteristic function φ_X of X has an n-th root that is itself the characteristic function of a random variable.

We leave the proof as an exercise.

1.2  Examples of infinitely divisible distributions

Gaussian random variables

Let X be a random vector in R^d. We say that X has a Gaussian distribution if there exist m ∈ R^d and a symmetric positive definite d × d matrix A such that X has density

    f(x) = (2π)^{-d/2} (det A)^{-1/2} exp( -(1/2) ⟨x - m, A^{-1}(x - m)⟩ ).

In this case, we write X ∼ N(m, A); m is the mean and A the covariance matrix. An easy exercise gives that the characteristic function of such a random variable is

    φ_X(ξ) = exp( i⟨ξ, m⟩ - (1/2) ⟨ξ, Aξ⟩ ).

Hence, it is easy to see that

    φ_X(ξ)^{1/n} = exp( i⟨ξ, m/n⟩ - (1/2) ⟨ξ, (A/n) ξ⟩ ).

Consequently, X is infinitely divisible, with Y_i ∼ N(m/n, A/n).
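As a quick numerical illustration of this divisibility, here is a minimal sketch (assuming NumPy is available; the dimension, mean vector, covariance matrix, and sample sizes are arbitrary illustrative choices) comparing direct samples of N(m, A) with sums of n independent N(m/n, A/n) vectors; the empirical means and covariances should agree up to Monte Carlo error.

    # Minimal sketch: N(m, A) as an n-fold sum of independent N(m/n, A/n) vectors.
    import numpy as np

    rng = np.random.default_rng(0)
    n, n_samples = 10, 100_000
    m = np.array([1.0, -2.0])
    A = np.array([[2.0, 0.5],
                  [0.5, 1.0]])        # symmetric positive definite

    # Direct samples from N(m, A).
    X = rng.multivariate_normal(m, A, size=n_samples)

    # Samples built as Y_1 + ... + Y_n with Y_i ~ N(m/n, A/n).
    Y = rng.multivariate_normal(m / n, A / n, size=(n_samples, n)).sum(axis=1)

    print(X.mean(axis=0), Y.mean(axis=0))   # both close to m
    print(np.cov(X.T), np.cov(Y.T))         # both close to A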
Poisson random variable

We say that a discrete random variable X has a Poisson distribution with parameter λ if

    P(X = k) = e^{-λ} λ^k / k!.

Consider now Y, independent of X, with Poisson distribution of parameter µ. We have:

    P(X + Y = k) = Σ_{l=0}^{k} P(X = l, Y = k - l) = Σ_{l=0}^{k} e^{-λ} (λ^l / l!) e^{-µ} (µ^{k-l} / (k-l)!).

Grouping terms in the last identity, we get:

    P(X + Y = k) = e^{-(λ+µ)} (1/k!) Σ_{l=0}^{k} (k! / (l!(k-l)!)) λ^l µ^{k-l} = e^{-(λ+µ)} (λ+µ)^k / k!.

Consequently, the convolution of two Poisson distributions is again a Poisson distribution, and Poisson distributions are infinitely divisible. Alternatively, one can show that the characteristic function of a Poisson distribution is

    φ_X(ξ) = exp( λ (e^{iξ} - 1) ),

giving again that Poisson distributions are infinitely divisible, with Y_i having a Poisson distribution of parameter λ/n.

Compound Poisson random variable

Consider N a Poisson random variable with parameter λ. Since N is integer-valued, one can form the sum

    X = Σ_{i=1}^{N} Y_i,

where the Y_i are independent and identically distributed, and independent of N. Let us denote by µ_Y their common distribution.

Proposition 1.2.1. The characteristic function of X is

    φ_X(ξ) = exp( λ ∫ (e^{i⟨ξ,y⟩} - 1) µ_Y(dy) ).

Proof. We have

    φ_X(ξ) = E(e^{i⟨ξ,X⟩}) = E( e^{i⟨ξ, Σ_{i=1}^{N} Y_i⟩} ) = Σ_{k=0}^{+∞} E( e^{i⟨ξ, Σ_{i=1}^{k} Y_i⟩} 1_{N=k} ).

Now, exploiting the independence of N and the Y_i's, we can write:

    φ_X(ξ) = Σ_{k=0}^{+∞} E( e^{i⟨ξ, Σ_{i=1}^{k} Y_i⟩} ) P(N = k) = Σ_{k=0}^{+∞} E( e^{i⟨ξ, Σ_{i=1}^{k} Y_i⟩} ) e^{-λ} λ^k / k!.

Next, we note that

    E( e^{i⟨ξ, Σ_{i=1}^{k} Y_i⟩} ) = Π_{i=1}^{k} E( e^{i⟨ξ, Y_i⟩} ) = φ_Y(ξ)^k,

denoting by φ_Y the common characteristic function of the Y_i's. We thus obtain

    φ_X(ξ) = Σ_{k=0}^{+∞} e^{-λ} (λ^k / k!) φ_Y(ξ)^k = exp( λ (φ_Y(ξ) - 1) ).

To conclude, we only write φ_Y(ξ) as ∫ e^{i⟨ξ,y⟩} µ_Y(dy).

Hence, we see that a compound Poisson random variable also has an infinitely divisible distribution: the n-th root of its characteristic function is the characteristic function of a compound Poisson variable with parameter λ/n and the same jump distribution µ_Y.

1.3  The Lévy-Khintchine formula

One can notice that in every example above, the characteristic function has an exponential form. This is no coincidence: it is a property shared by all infinitely divisible distributions. In fact, one can give even more information on the exponent. This is the so-called Lévy-Khintchine formula. In this section, we only state the result; the proof will be given later.

Theorem 1.3.1. A probability distribution µ on R^d is infinitely divisible if and only if there exist

  • a vector b ∈ R^d, called the drift, or mean,
  • a symmetric positive semi-definite d × d matrix A, called the covariance matrix,
  • a measure ν on R^d \ {0} such that ∫_{R^d \ {0}} min(|y|^2, 1) ν(dy) < +∞,

such that:

    ∫_{R^d} e^{i⟨ξ,y⟩} µ(dy) = exp( i⟨b, ξ⟩ - (1/2) ⟨ξ, Aξ⟩ + ∫_{R^d \ {0}} ( e^{i⟨ξ,y⟩} - 1 - i⟨ξ, y⟩ 1_{|y|≤1} ) ν(dy) ).

Remark 1.3.2. Such a measure ν is called a Lévy measure. Later on, this measure will be linked to the jumps when discussing Lévy processes. It shall be noted that one can state the whole theory by adding that ν({0}) = 0 and integrating over R^d, the point being that there should be no jumps of size 0. Besides, there is nothing special about the cut-off 1_{|y|≤1} appearing above: one could take any ε > 0 and consider instead 1_{|y|≤ε}, or even 1/(1+|y|^2). Doing so would change the value of b.

Remark 1.3.3. Obviously, the outstanding part of the previous theorem is the "only if" part. Indeed, if we are given a distribution with the above characteristic function, it is quite easy to see that it is infinitely divisible.

Definition 1.3.4. The triplet (A, ν, b) above is called the characteristic triplet, and it completely determines the distribution µ.
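As a concrete illustration of the triplet, consider the Poisson(λ) distribution from the previous section. A short computation (worked out here as an illustration, not taken from the notes) gives A = 0, ν = λδ_1 and b = λ: the integral against ν equals λ(e^{iξ} - 1 - iξ), and the drift term restores the missing iλξ. The following minimal numerical sketch (assuming NumPy; the value of λ, the grid of ξ values, and the sample size are arbitrary choices) checks this against the closed-form and empirical characteristic functions.

    # Minimal sketch: Lévy-Khintchine exponent of Poisson(lam) with triplet (0, lam*delta_1, lam).
    import numpy as np

    rng = np.random.default_rng(1)
    lam = 2.5
    xi = np.linspace(-3.0, 3.0, 7)

    # Exponent i*b*xi - (1/2)*A*xi^2 + integral, with A = 0, nu = lam*delta_1, b = lam;
    # the single jump size y = 1 satisfies |y| <= 1, so the compensating term is -i*xi.
    lk_exponent = 1j * lam * xi + lam * (np.exp(1j * xi) - 1.0 - 1j * xi)

    closed_form = np.exp(lam * (np.exp(1j * xi) - 1.0))          # exp(lam(e^{i xi} - 1))
    samples = rng.poisson(lam, size=200_000)
    empirical = np.exp(1j * np.outer(xi, samples)).mean(axis=1)  # empirical characteristic function

    print(np.max(np.abs(np.exp(lk_exponent) - closed_form)))    # ~ machine precision
    print(np.max(np.abs(empirical - closed_form)))              # small Monte Carlo error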
Note that since A is a symmetric positive semi-definite matrix, we will interchangeably write (Q, ν, b) for the generating triplet, where Q is the quadratic form defined by Q(z) = ⟨z, Az⟩.

One interpretation of this result is that any infinitely divisible distribution can be decomposed as the sum of fundamental building blocks. One immediately observes that the term -(1/2)⟨ξ, Aξ⟩ in the exponent comes from a Gaussian distribution. Besides, barring the term multiplied by the indicator function, the integral ∫_{R^d \ {0}} (e^{i⟨ξ,y⟩} - 1) ν(dy) is the exponent of the characteristic function of a compound Poisson distribution (compare with Proposition 1.2.1, with ν playing the role of λ µ_Y).

Stable distributions

In this paragraph, we introduce a very important class of distributions known as stable distributions. Historically, these distributions arise from extensions of the Central Limit Theorem. Let X_1, X_2, ... be a sequence of i.i.d. random variables and let a_n, b_n be two sequences of real numbers; form

    S_n = (X_1 + ··· + X_n - a_n) / b_n.

If there exists a random variable X such that S_n converges in distribution to X, then we say that X has a stable distribution. A rather classical example is the case where the X_i have a finite second moment, with mean m and variance σ^2. In this case, one can take a_n = nm and b_n = √n, and the Central Limit Theorem gives S_n ⇒ N(0, σ^2); in particular, Gaussian distributions are stable. As an exercise, the reader can prove the following result:

Proposition 1.3.5. S_n ⇒ X for some choice of the X_i, a_n and b_n if and only if, for all n, there exist constants c_n > 0 and d_n such that

    X_1 + ··· + X_n (d)= c_n X + d_n,

where X_1, ..., X_n are independent copies of X.

Remark 1.3.6. In the previous proposition, if d_n can be taken to be 0, then X is said to be strictly stable.
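As a concrete illustration of strict stability, the standard Cauchy distribution (chosen here purely as an illustrative example, not discussed in the notes above) satisfies X_1 + ··· + X_n (d)= n X, i.e. c_n = n and d_n = 0. The following minimal sketch (assuming NumPy; the values of n and the sample size are arbitrary) compares a few empirical quantiles of (X_1 + ··· + X_n)/n with those of a single standard Cauchy variable.

    # Minimal sketch: strict stability of the standard Cauchy distribution.
    import numpy as np

    rng = np.random.default_rng(2)
    n, n_samples = 5, 200_000

    X = rng.standard_cauchy(size=(n_samples, n))
    S = X.sum(axis=1) / n                  # (X_1 + ... + X_n)/n, again standard Cauchy

    qs = [0.1, 0.25, 0.5, 0.75, 0.9]
    print(np.quantile(S, qs))              # close to ...
    print(np.quantile(X[:, 0], qs))        # ... the quantiles of a single copy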
