Introduction to Lévy Processes

Huang Lorick [email protected]

Document type These are lecture notes. Typos, errors, and imprecisions are expected. Comments are welcome!

This version is available at http://perso.math.univ-toulouse.fr/lhuang/enseignements/

Year of publication 2021

Terms of use This work is licensed under a Creative Commons Attribution 4.0 International license: https://creativecommons.org/licenses/by/4.0/

Contents

1 Introduction and Examples
   1.1 Infinitely divisible distributions
   1.2 Examples of infinitely divisible distributions
   1.3 The Lévy-Khintchine formula
   1.4 Digression on Relativity

2 Lévy processes
   2.1 Definition of a Lévy process
   2.2 Examples of Lévy processes
   2.3 Exploring the Jumps of a Lévy Process

3 Proof of the Lévy-Khintchine formula
   3.1 The Lévy-Itô Decomposition
   3.2 Consequences of the Lévy-Itô Decomposition
   3.3 Exercises
   3.4 Discussion

4 Lévy processes as Markov Processes
   4.1 Properties of the Semi-group
   4.2 The Generator
   4.3 Recurrence and Transience
   4.4 Fractional Derivatives

5 Elements of Stochastic Calculus with Jumps
   5.1 Example of Use in Applications
   5.2 Stochastic Integration
   5.3 Construction of the Stochastic Integral
   5.4 Itô Formula with Jumps
   5.5 Stochastic Differential Equation

Bibliography

Chapter 1

Introduction and Examples

In this introductory chapter, we start by defining the notion of infinitely divisible distributions. We then give examples of such distributions, and end the chapter by stating the celebrated Lévy-Khintchine formula. The proof of the latter will be given in a subsequent chapter.

1.1 Infinitely divisible distributions

Historically, Paul Lévy was interested in the "arithmetic of probability distributions", where he investigated probability distributions that can be decomposed as the sum of independent copies of a common distribution. This line of research gave rise to what we now call infinitely divisible distributions. Infinitely divisible distributions and Lévy processes are closely related, as Lévy processes have infinitely divisible marginal distributions. We start by introducing the concept of an infinitely divisible distribution and give some examples.

Definition 1.1.1. We say that a random variable $X$ is infinitely divisible if for all $n \in \mathbb{N}$, there exist i.i.d. random variables $Y_1, \dots, Y_n$ such that
$$X \overset{(d)}{=} Y_1 + \cdots + Y_n.$$
A very simple consequence of this definition is the following result:

Proposition 1.1.2. The following are equivalent:

• X has an infinitely divisible distribution,

• $\mu_X$, the distribution of $X$, has an $n$-th convolution root that is itself the distribution of a random variable, for each $n$,

• $\varphi_X$, the characteristic function of $X$, has an $n$-th root that is itself the characteristic function of a random variable, for each $n$.

We leave the proof as an exercise.

1.2 Examples of infinitely divisible distributions

Gaussian random variables

Let $X$ be a random vector in $\mathbb{R}^d$. We say that $X$ has a Gaussian distribution if there exist $m \in \mathbb{R}^d$ and a symmetric positive definite $d \times d$ matrix $A$ such that $X$ has density:

$$f(x) = \frac{1}{(2\pi)^{d/2}\sqrt{\det(A)}}\exp\left(-\frac{1}{2}\left\langle x-m, A^{-1}(x-m)\right\rangle\right).$$

In this case, we write $X \sim \mathcal{N}(m, A)$; $m$ is the mean and $A$ the covariance matrix. An easy exercise gives that the Fourier transform of such a random variable is

$$\varphi_X(\xi) = \exp\left(i\langle\xi, m\rangle - \frac{1}{2}\langle\xi, A\xi\rangle\right).$$

Hence, it is easy to see that:

$$\varphi_X(\xi)^{1/n} = \exp\left(i\left\langle\xi, \frac{m}{n}\right\rangle - \frac{1}{2}\left\langle\xi, \frac{A}{n}\xi\right\rangle\right).$$

Consequently, we have that $X$ is infinitely divisible, with $Y_i \sim \mathcal{N}\left(\frac{m}{n}, \frac{A}{n}\right)$.
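This identity is easy to check numerically. A minimal sketch (the parameters and the helper `phi_gauss` are ours, chosen for illustration; the frequency is kept small so the principal branch of the complex power agrees with the exact $n$-th root):

```python
import numpy as np

# Illustrative parameters of a 2-dimensional Gaussian N(m, A).
m = np.array([1.0, -0.5])
A = np.array([[2.0, 0.3],
              [0.3, 1.0]])

def phi_gauss(xi, mean, cov):
    """Characteristic function of N(mean, cov) evaluated at xi."""
    return np.exp(1j * xi @ mean - 0.5 * xi @ cov @ xi)

n = 5
xi = np.array([0.7, -1.2])  # keep |<xi, m>| < pi for the principal branch
# The n-th root of phi_X is the characteristic function of N(m/n, A/n):
lhs = phi_gauss(xi, m, A) ** (1.0 / n)
rhs = phi_gauss(xi, m / n, A / n)
assert abs(lhs - rhs) < 1e-12
```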

Poisson random variable We say that a discrete random variable $X$ has a Poisson distribution with parameter $\lambda$ if

$$\mathbb{P}(X = k) = e^{-\lambda}\frac{\lambda^k}{k!}.$$
Consider now $Y$ independent of $X$, with Poisson distribution of parameter $\mu$. We have:

$$\mathbb{P}(X+Y=k) = \sum_{l=0}^{k} \mathbb{P}(X = l,\ Y = k-l) = \sum_{l=0}^{k} e^{-\lambda}\frac{\lambda^l}{l!}\,e^{-\mu}\frac{\mu^{k-l}}{(k-l)!}.$$
Grouping terms in the last identity, we get:

$$\mathbb{P}(X+Y=k) = e^{-(\lambda+\mu)}\frac{1}{k!}\sum_{l=0}^{k}\frac{k!}{l!(k-l)!}\lambda^l\mu^{k-l} = e^{-(\lambda+\mu)}\frac{(\lambda+\mu)^k}{k!}.$$
Consequently, the convolution of two Poisson distributions is a Poisson distribution, and Poisson distributions are infinitely divisible. Alternatively, one can show that the characteristic function of a Poisson distribution is

$$\varphi_X(\xi) = \exp\left(\lambda(e^{i\xi}-1)\right),$$

giving again that Poisson distributions are infinitely divisible, with $Y_i$ having a Poisson distribution of parameter $\lambda/n$.
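The convolution computation above can be verified digit for digit; a minimal sketch with illustrative parameters $\lambda = 1.3$ and $\mu = 0.7$:

```python
import math

lam, mu = 1.3, 0.7  # illustrative parameters

def pois_pmf(k, rate):
    """Poisson pmf: P(X = k) = e^{-rate} rate^k / k!."""
    return math.exp(-rate) * rate**k / math.factorial(k)

# The convolution of Poisson(lam) and Poisson(mu) is Poisson(lam + mu):
for k in range(30):
    conv = sum(pois_pmf(l, lam) * pois_pmf(k - l, mu) for l in range(k + 1))
    assert abs(conv - pois_pmf(k, lam + mu)) < 1e-12
```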

Compound Poisson random variable Consider $N$ a Poisson random variable with parameter $\lambda$. Since $N$ is integer-valued, one can form the following sum:
$$X = \sum_{i=1}^{N} Y_i,$$
where the $Y_i$ are independent and identically distributed, and independent of $N$. Let us denote by $\mu_Y$ their common distribution.

Proposition 1.2.1. The characteristic function of $X$ is
$$\varphi_X(\xi) = \exp\left(\lambda\int\left(e^{i\langle\xi,y\rangle}-1\right)\mu_Y(dy)\right).$$

Proof.
$$\varphi_X(\xi) = \mathbb{E}\left(e^{i\langle\xi,X\rangle}\right) = \mathbb{E}\left(e^{i\langle\xi,\sum_{i=1}^{N}Y_i\rangle}\right) = \sum_{k=0}^{+\infty}\mathbb{E}\left(e^{i\langle\xi,\sum_{i=1}^{k}Y_i\rangle}\mathbf{1}_{\{N=k\}}\right).$$

Now, exploiting the independence of $N$ and the $Y_i$'s, we can write:

$$\varphi_X(\xi) = \sum_{k=0}^{+\infty}\mathbb{E}\left(e^{i\langle\xi,\sum_{i=1}^{k}Y_i\rangle}\right)\mathbb{P}(N=k) = \sum_{k=0}^{+\infty}\mathbb{E}\left(e^{i\langle\xi,\sum_{i=1}^{k}Y_i\rangle}\right)e^{-\lambda}\frac{\lambda^k}{k!}.$$

Now, we note that
$$\mathbb{E}\left(e^{i\langle\xi,\sum_{i=1}^{k}Y_i\rangle}\right) = \prod_{i=1}^{k}\mathbb{E}\left(e^{i\langle\xi,Y_i\rangle}\right) = \varphi_Y(\xi)^k,$$
denoting by $\varphi_Y$ the common characteristic function of the $Y_i$'s. We thus obtain

$$\varphi_X(\xi) = e^{-\lambda}\sum_{k=0}^{+\infty}\frac{\lambda^k}{k!}\varphi_Y(\xi)^k = \exp\left(\lambda\left(\varphi_Y(\xi)-1\right)\right).$$

To conclude, we simply write $\varphi_Y(\xi) = \int e^{i\langle\xi,y\rangle}\mu_Y(dy)$. Hence, we see that a compound Poisson random variable also has an infinitely divisible distribution.
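Proposition 1.2.1 can be checked by Monte Carlo; here is a minimal sketch assuming standard normal jumps $Y_i$ (an illustrative choice, for which $\varphi_Y(\xi) = e^{-\xi^2/2}$):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, n_samples = 1.0, 200_000  # illustrative intensity and sample size

# Compound Poisson samples X = Y_1 + ... + Y_N with N ~ Poisson(lam) and
# Y_i ~ N(0, 1); given N = n, the sum is N(0, n), so it can be sampled
# in one shot:
N = rng.poisson(lam, size=n_samples)
X = rng.standard_normal(n_samples) * np.sqrt(N)

xi = 1.0
empirical = np.mean(np.exp(1j * xi * X))
# Proposition 1.2.1 with mu_Y = N(0, 1):
theoretical = np.exp(lam * (np.exp(-xi**2 / 2) - 1))
assert abs(empirical - theoretical) < 0.02
```

The tolerance reflects the Monte Carlo error, of order $1/\sqrt{n}$.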

1.3 The Lévy-Khintchine formula

One can notice that in every example above, the characteristic function has an exponential form. This is no coincidence: it is a property shared by all infinitely divisible distributions. In fact, one can even say more about the exponent. This is the so-called Lévy-Khintchine formula. In this section we only state the result; the proof will be given later.

Theorem 1.3.1. A distribution $\mu$ on $\mathbb{R}^d$ is infinitely divisible if and only if there exist

• a vector $b \in \mathbb{R}^d$, called the drift, or mean,

• a symmetric positive semi-definite $d \times d$ matrix $A$, called the covariance matrix,

• a measure $\nu$ on $\mathbb{R}^d\setminus\{0\}$ such that $\int_{\mathbb{R}^d\setminus\{0\}}\min(|y|^2, 1)\,\nu(dy) < +\infty$,

such that:
$$\int_{\mathbb{R}^d}e^{i\langle\xi,y\rangle}\mu(dy) = \exp\left(i\langle b,\xi\rangle - \frac{1}{2}\langle\xi,A\xi\rangle + \int_{\mathbb{R}^d\setminus\{0\}}\left(e^{i\langle\xi,y\rangle} - 1 - i\langle\xi,y\rangle\mathbf{1}_{\{|y|\le 1\}}\right)\nu(dy)\right).$$

Remark 1.3.2. Such a measure $\nu$ is called a Lévy measure. Later on, this measure will be linked to the jumps of the process when discussing Lévy processes. It shall be noted that one can state the whole theory by adding the condition $\nu(\{0\}) = 0$ and integrating over $\mathbb{R}^d$, the point being that there should be no jumps of size 0. Besides, there is nothing special about the cut-off $\mathbf{1}_{\{|y|\le 1\}}$ appearing above: one could take any $\varepsilon > 0$ and consider instead $\mathbf{1}_{\{|y|\le\varepsilon\}}$, or even $\frac{1}{1+|y|^2}$. Doing so would change the value of $b$.

Remark 1.3.3. Obviously, the outstanding part of the previous theorem is the "only if" part. Indeed, if we are given a distribution with the above characteristic function, it is quite easy to see that it is infinitely divisible.

Definition 1.3.4. The triplet $(A, \nu, b)$ above is called the characteristic triplet, and it completely determines the distribution $\mu$. Note that since $A$ is a symmetric positive semi-definite matrix, we will interchangeably write $(Q, \nu, b)$ for the generating triplet, where $Q$ is the quadratic form defined by $Q(z) = \langle z, Az\rangle$.

One interpretation of this result is that any infinitely divisible distribution can be decomposed as the sum of fundamental building blocks. One immediately observes that the term $\frac{1}{2}\langle\xi, A\xi\rangle$ in the exponent comes from a Gaussian distribution. Besides, barring the term multiplied by the indicator function, the integral $\int_{\mathbb{R}^d\setminus\{0\}}\left(e^{i\langle\xi,y\rangle}-1\right)\nu(dy)$ is the characteristic exponent of a compound Poisson distribution (when $\nu$ is finite).
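As a small sanity check of the formula (a sketch with an illustrative, purely atomic Lévy measure), the triplet $b = \lambda$, $A = 0$, $\nu = \lambda\delta_1$ recovers the Poisson characteristic function, the drift cancelling the compensating term:

```python
import numpy as np

lam, xi = 2.0, 0.8  # illustrative Poisson intensity and frequency

def lk_exponent(xi, b, a, atoms):
    """One-dimensional Levy-Khintchine exponent for a purely atomic Levy
    measure, given as a list of (position y, mass) pairs."""
    val = 1j * b * xi - 0.5 * a * xi**2
    for y, mass in atoms:
        comp = 1j * xi * y if abs(y) <= 1 else 0.0  # compensator cut-off
        val += mass * (np.exp(1j * xi * y) - 1 - comp)
    return val

# Triplet (b, A, nu) = (lam, 0, lam * delta_1): the drift b = lam cancels
# the compensating term, leaving the Poisson exponent lam * (e^{i xi} - 1).
lhs = np.exp(lk_exponent(xi, b=lam, a=0.0, atoms=[(1.0, lam)]))
rhs = np.exp(lam * (np.exp(1j * xi) - 1))
assert abs(lhs - rhs) < 1e-12
```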

Stable distributions In this paragraph, we introduce a very important class of distributions known as stable distributions. Historically, those distributions arise from extensions of the Central Limit Theorem. Let $X_1, X_2, \dots$ be a sequence of i.i.d. random variables, and for $a_n, b_n$ two sequences of real numbers, form

$$S_n = \frac{X_1 + \cdots + X_n - a_n}{b_n}.$$

If there exists a random variable $X$ such that $S_n$ converges in distribution to $X$, then we say that $X$ has a stable distribution. A rather classical example is the case when $X_1$ has a finite second moment, with mean $m$ and variance $\sigma^2$. In this case, one can take $b_n = \sigma\sqrt{n}$ and $a_n = nm$, and the Central Limit Theorem gives $S_n \Rightarrow \mathcal{N}(0, 1)$; in particular, $\mathcal{N}(m, \sigma^2)$ is a stable distribution. As an exercise, the reader can prove the following result:

Proposition 1.3.5. $S_n \Rightarrow X$ for some sequences $a_n, b_n$ if and only if for all $n$, there exist $c_n > 0$ and $d_n$ such that
$$X_1 + \cdots + X_n \overset{(d)}{=} c_n X + d_n,$$
where $X_1, \dots, X_n$ are independent copies of $X$.

Remark 1.3.6. In the previous proposition, if $d_n$ can be taken to be 0, then $X$ is said to be strictly stable. Besides, it can be shown that the only possible choice for $c_n$ is of the form $\sigma n^{1/\alpha}$, for some $0 < \alpha \le 2$. This parameter $\alpha$ is called the index of the stable distribution.

The next result characterises the Lévy-Khintchine exponent of a stable distribution.

Theorem 1.3.7. The characteristic triplet of a stable distribution of index $\alpha$ takes one of two forms:

1. $\alpha = 2$: then $\nu = 0$, so that $X \sim \mathcal{N}(b, A)$;

2. $\alpha < 2$: then $A = 0$ and, in dimension one, $\nu$ is of the form
$$\nu(dx) = C_1\frac{dx}{x^{1+\alpha}}\mathbf{1}_{\{x\ge 0\}} + C_2\frac{dx}{|x|^{1+\alpha}}\mathbf{1}_{\{x<0\}}, \qquad \text{where } C_1, C_2 \ge 0.$$

The proof of this result can be found in Sato [11]. In the one-dimensional case, an extensive discussion can be found in Zolotarev [12]. The higher-dimensional cases are more elaborate, and many questions are still open to this day. We must also mention the book by Samorodnitsky and Taqqu [10]. We conclude this paragraph by giving an alternative expression for the exponent of a stable distribution in one dimension.

Theorem 1.3.8. A random variable $X$ has a stable distribution of index $\alpha$ if and only if there exist $\sigma > 0$, $\beta \in [-1, 1]$ and $b \in \mathbb{R}$ such that

• if $\alpha = 2$,
$$\varphi_X(\xi) = \exp\left(i\xi b - \frac{1}{2}\sigma^2\xi^2\right);$$

• if $\alpha < 2$ and $\alpha \ne 1$,
$$\varphi_X(\xi) = \exp\left(i\xi b - \sigma^\alpha|\xi|^\alpha\left[1 - i\beta\,\mathrm{sgn}(\xi)\tan\frac{\pi\alpha}{2}\right]\right);$$

• if $\alpha = 1$,
$$\varphi_X(\xi) = \exp\left(i\xi b - \sigma|\xi|\left[1 + i\beta\frac{2}{\pi}\,\mathrm{sgn}(\xi)\log(|\xi|)\right]\right).$$

The proof of this result can be found in all three books mentioned above.

Remark 1.3.9.

• The parameters $b$ and $\sigma$ designate respectively the drift and the scale, whereas $\beta$ is the skewness of the distribution. Taking $\beta = 0$ gives a symmetric stable distribution.

• Plugging in $\beta = 0$ and $b = 0$, we see that the exponent of a stable distribution is essentially $|\xi|^\alpha$, for $\alpha$ ranging from 0 to 2. It can be shown that every stable distribution has a density; notable special cases range from the Gaussian density ($\alpha = 2$) to the Cauchy density ($\alpha = 1$):
$$f_X(x) = \frac{\sigma}{\pi\left[(x-b)^2 + \sigma^2\right]}.$$

Series representations for those densities are available, often relying on special functions. Note that for $\alpha < 2$, the distributions are heavy-tailed. In fact, it can be shown that if $X$ has an $\alpha$-stable distribution with $\alpha < 2$, then $\mathbb{E}(|X|^\gamma) < +\infty$ for all $\gamma < \alpha$.

We end this paragraph on stable distributions by mentioning the following result of Chambers, Mallows and Stuck, often abbreviated the CMS method in the literature.

Theorem 1.3.10. Consider $U$ and $W$ two independent random variables, such that

• $U$ has a uniform distribution on $\left(-\frac{\pi}{2}, \frac{\pi}{2}\right)$,

• $W$ has an exponential distribution of parameter 1.

Set
$$\zeta = -\beta\tan\frac{\pi\alpha}{2}, \qquad \xi = \begin{cases}\frac{1}{\alpha}\arctan(-\zeta) & \text{if } \alpha \ne 1,\\ \frac{\pi}{2} & \text{if } \alpha = 1.\end{cases}$$
If $\alpha \ne 1$, then
$$X = (1+\zeta^2)^{\frac{1}{2\alpha}}\,\frac{\sin\left(\alpha(U+\xi)\right)}{\cos(U)^{1/\alpha}}\left(\frac{\cos\left(U - \alpha(U+\xi)\right)}{W}\right)^{\frac{1-\alpha}{\alpha}}.$$

If $\alpha = 1$, then
$$X = \frac{1}{\xi}\left[\left(\frac{\pi}{2} + \beta U\right)\tan(U) - \beta\log\left(\frac{\frac{\pi}{2}W\cos(U)}{\frac{\pi}{2} + \beta U}\right)\right].$$
Then $X$ has a stable distribution with index $\alpha$ and skewness $\beta$. The proof can be found in the original paper of Chambers, Mallows and Stuck.
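The recipe above translates directly into a sampler. A minimal sketch for the case $\alpha \ne 1$, with $\sigma = 1$ and $b = 0$ (the function name is ours):

```python
import numpy as np

def sample_stable(alpha, beta, size, rng):
    """CMS sampler for a standard stable law (sigma = 1, b = 0), alpha != 1."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    zeta = -beta * np.tan(np.pi * alpha / 2)
    xi = np.arctan(-zeta) / alpha
    return ((1 + zeta**2) ** (1 / (2 * alpha))
            * np.sin(alpha * (U + xi)) / np.cos(U) ** (1 / alpha)
            * (np.cos(U - alpha * (U + xi)) / W) ** ((1 - alpha) / alpha))

rng = np.random.default_rng(42)
samples = sample_stable(1.5, 0.0, 10_000, rng)
assert np.all(np.isfinite(samples))
# A symmetric (beta = 0) stable law has median 0:
assert abs(np.median(samples)) < 0.1
```

The heavy tails of the output are a quick visual check: for $\alpha = 1.5$ one already sees occasional very large samples.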

1.4 Digression on Relativity

In this section, we would like to give an example from the theory of relativity where infinitely divisible distributions arise. More precisely, we will discuss the relativistic stable distribution. Consider a particle in $\mathbb{R}^3$ whose mass is $m > 0$ and momentum is $p = (p_1, p_2, p_3) \in \mathbb{R}^3$. According to the models of relativity theory, the total energy of this particle is $\sqrt{m^2c^4 + c^2|p|^2}$, where $c$ is the speed of light. Subtracting $mc^2$, which is the energy due to mass, the kinetic energy of the particle is then given by

$$E(p) = \sqrt{m^2c^4 + c^2|p|^2} - mc^2.$$

We consider
$$\varphi_{m,c}(p) = e^{-E(p)} = \exp\left(-\sqrt{m^2c^4 + c^2|p|^2} + mc^2\right).$$

Theorem 1.4.1. φm,c is the characteristic function of an infinitely divisible distribution.

Proof. This proof is in two parts. First, using Bochner's theorem, we identify $\varphi_{m,c}$ as a characteristic function. Next, we express the $n$-th root of $\varphi_{m,c}$ as a characteristic function as well. We first recall Bochner's theorem.

Theorem 1.4.2. A continuous function $\psi$ with $\psi(0) = 1$ is a characteristic function if and only if for all $n \in \mathbb{N}$, all $z_1, \dots, z_n \in \mathbb{C}$ and all $p_1, \dots, p_n$,
$$\sum_{i,j=1}^{n}\psi(p_i - p_j)z_i\bar{z}_j \ge 0.$$

Rewriting the kinetic energy as
$$-E(p) = mc^2\left(1 - \sqrt{1 + \frac{|p|^2}{m^2c^2}}\right) = mc^2\,\psi(p),$$
it is enough to use Bochner's theorem on $e^{\psi(p)}$. In fact, one clever way to rewrite $\psi$ is through the use of the Gamma function, that is:
$$\psi(p) = 1 - \frac{1}{2\sqrt{\pi}}\int_0^{+\infty}\left(1 - e^{-\left(1+\frac{|p|^2}{m^2c^2}\right)x}\right)\frac{dx}{x^{3/2}}.$$
We point out that at first glance we might have a problem with these integrals at 0, and we should consider a sequence approaching zero in order to be perfectly rigorous. But as this is not the main focus of these notes, we will simply take these integrals as well defined. Now, to show that $\psi$ is positive definite, it is enough to focus on the exponential part:

n  n   Z +∞  |p −p |2   Z +∞ |p −p |2 X 1 −(1+ i j )x dx 1 −x X − i j x dx ziz¯j √ e m2c2 = √ e  ziz¯je m2c2  . 2 π x3/2 2 π x3/2 i,j=1 0 0 i,j=1 | {z } positive definite

Indeed, $p \mapsto e^{-\frac{|p|^2}{m^2c^2}x}$ is the characteristic function of a random variable with distribution $\mathcal{N}\left(0, \frac{m^2c^2}{2x}\right)$. Thus, we obtain that
$$p \mapsto \varphi_{m,c}(p) = \exp\left(-\sqrt{m^2c^4 + c^2|p|^2} + mc^2\right)$$
is the characteristic function of some distribution. To see that it is infinitely divisible, we write:

$$\varphi_{m,c}(p)^{\frac{1}{n}} = \exp\left(-\frac{1}{n}\sqrt{m^2c^4 + c^2|p|^2} + \frac{mc^2}{n}\right) = \exp\left(-\sqrt{(nm)^2\left(\frac{c}{n}\right)^4 + \left(\frac{c}{n}\right)^2|p|^2} + nm\left(\frac{c}{n}\right)^2\right)$$

$$= \varphi_{nm,\,\frac{c}{n}}(p).$$

Remark 1.4.3. This remarkable fact allowed physicists to apply criteria developed for Lévy processes to relativity. In particular, the existence of bound states¹ for relativistic Schrödinger operators follows from the application of a recurrence criterion.
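The $n$-th root identity at the end of the proof can be checked numerically; a minimal sketch with illustrative values of $m$ and $c$:

```python
import numpy as np

def phi(p_norm, m, c):
    """Relativistic characteristic function as a function of |p|."""
    return np.exp(-np.sqrt(m**2 * c**4 + c**2 * p_norm**2) + m * c**2)

m, c, n = 1.3, 2.0, 5  # illustrative mass, speed of light, and root order
p = np.linspace(0.0, 10.0, 101)
# The n-th root of phi_{m,c} is phi_{nm, c/n}:
assert np.max(np.abs(phi(p, m, c) ** (1.0 / n) - phi(p, n * m, c / n))) < 1e-12
```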

¹https://en.wikipedia.org/wiki/Bound_state

Chapter 2

Lévy processes

In this chapter, we define Lévy processes and give a few examples of such processes. We discuss their relation with infinitely divisible distributions and the nature of their jumps. We will spend a large part of this chapter discussing integration with respect to a Poisson random measure, in order to set up the proof of the Lévy-Khintchine formula in the next chapter.

2.1 Definition of a Lévy process

There exist many equivalent definitions of Brownian motion. As it is not the main focus of these lectures, here is one definition that will suffice for us.

Definition 2.1.1. A stochastic process $(B_t)_{t\ge 0}$ is a Brownian motion if:

• Almost surely, t 7→ Bt is continuous,

• For all $s, t > 0$, $B_{s+t} - B_t$ has the same distribution as $B_s$,

• For all $n \ge 1$ and all times $0 \le t_0 \le t_1 \le \cdots \le t_n$, the random variables $B_{t_0}, B_{t_1} - B_{t_0}, \dots, B_{t_n} - B_{t_{n-1}}$ are independent.

Now one can wonder: what happens if we drop the continuity assumption on $t \mapsto B_t$? The reader trained in probability will observe that the Poisson process (more on that one later) also fits the description. In fact, the class of all processes with independent and stationary increments is known as the class of Lévy processes. Let us write a formal definition.

Definition 2.1.2. A stochastic process $(X_t)_{t\ge 0}$ in $\mathbb{R}^d$ is a Lévy process if the following conditions are satisfied:

1. Independent increments:

for all $n \ge 1$ and all times $0 \le t_0 \le t_1 \le \cdots \le t_n$, the random variables $X_{t_0}, X_{t_1} - X_{t_0}, \dots, X_{t_n} - X_{t_{n-1}}$ are independent.

2. Stationarity of increments:

for all $s, t \ge 0$, the distribution of $X_{t+s} - X_s$ does not depend on $s$.

3. $X_0 = 0$ almost surely.

4. It is stochastically continuous: for all $\varepsilon > 0$,
$$\mathbb{P}\left(|X_s - X_t| > \varepsilon\right) \xrightarrow[s\to t]{} 0.$$

5. It is càdlàg almost surely.

Remark 2.1.3. The last item above can be dropped, as one can prove that there always exists a càdlàg modification (i.e. a process that differs from the original only on an event of probability zero). However, proving this is quite involved, and we opt to include the càdlàg property in the definition of a Lévy process.


The fact that the previous definition actually gives rise to a probability measure on the space of càdlàg functions from $\mathbb{R}_+$ to $\mathbb{R}^d$ relies on Kolmogorov's extension criterion. The details can be found in Billingsley [3]. Obviously, Brownian motion and the Poisson process satisfy all of these properties. The reader can also observe that the sum of a Poisson process and a Brownian motion satisfies these properties. In the next chapter, we will see that any process satisfying these properties can be decomposed as the sum of a Brownian motion, a compound Poisson process and an $L^2$ martingale. This is the celebrated Lévy-Itô decomposition. Now, besides the fact that Gaussian and Poisson distributions appear in both theories, what is the link between Lévy processes and infinitely divisible distributions? The answer is that at any given time, a Lévy process has an infinitely divisible distribution.

Proof. Let $(X_t)_{t\ge 0}$ be a Lévy process. For all $n \in \mathbb{N}$, we can write:

$$X_t = \left(X_t - X_{\frac{n-1}{n}t}\right) + \left(X_{\frac{n-1}{n}t} - X_{\frac{n-2}{n}t}\right) + \cdots + \left(X_{\frac{t}{n}} - X_0\right).$$

We used the fact that $X_0 = 0$. Now, all of these increments are independent and identically distributed (since we are considering increments over time intervals of length $t/n$), so we have decomposed $X_t$ as a sum of i.i.d. random variables. Thus, $X_t$ is indeed infinitely divisible.

Being infinitely divisible, the characteristic function of $X_t$ must also satisfy the Lévy-Khintchine formula:
$$\mathbb{E}\left(e^{i\langle\xi,X_t\rangle}\right) = \exp\left[t\left(i\langle b,\xi\rangle - \frac{1}{2}\langle\xi,A\xi\rangle + \int_{\mathbb{R}^d\setminus\{0\}}\left(e^{i\langle\xi,y\rangle} - 1 - i\langle\xi,y\rangle\mathbf{1}_{\{|y|\le 1\}}\right)\nu(dy)\right)\right],$$
for some characteristics $(b, A, \nu)$. Therefore, we will refer to this triplet as the characteristic triplet of the Lévy process $X$ as well.

2.2 Examples of Lévy processes

As we saw earlier, Gaussian and Poisson distributions are infinitely divisible. Their continuous-time counterparts, Brownian motion and the Poisson process, are Lévy processes. In case the reader is unfamiliar with Poisson processes on $\mathbb{R}$, here is a brief summary. A Poisson process is the only counting process (piecewise constant, with jumps of size 1) with independent and stationary increments. Equivalently, a Poisson process of parameter $\lambda$ has at each time $t$ a Poisson distribution of parameter $\lambda t$:

$$\mathbb{P}(N_t = k) = e^{-\lambda t}\frac{(\lambda t)^k}{k!}.$$
As such, one can compute its characteristic function:

$$\varphi_{N_t}(\xi) = \exp\left(\lambda t(e^{i\xi} - 1)\right).$$

We see that we can force a Lévy-Khintchine exponent form by writing this as
$$\varphi_{N_t}(\xi) = \exp\left(t\int_{\mathbb{R}}\left(e^{i\xi x} - 1\right)\lambda\delta_1(dx)\right).$$
Thus, the Lévy measure of a Poisson process is a Dirac mass at 1, with mass $\lambda$. Similarly, since a compound Poisson random variable is infinitely divisible, the stochastic process
$$X_t = \sum_{i=1}^{N_t} Y_i,$$
where $N_t$ is a Poisson process independent of the i.i.d. $Y_i$'s, is also a Lévy process. We naturally call it the compound Poisson process. The same calculation as above then gives
$$\varphi_{X_t}(\xi) = \exp\left(t\int\left(e^{i\langle\xi,y\rangle} - 1\right)\lambda\mu_Y(dy)\right),$$
and we see that the compound Poisson process has Lévy measure $\lambda\mu_Y(dy)$.
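A sample path of a compound Poisson process is easy to simulate: draw $N_T$, scatter the jump times uniformly on $[0, T]$, and cumulate the jumps. A minimal sketch with illustrative parameters and $\mathrm{Exp}(1)$ jumps:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, T = 3.0, 1.0  # illustrative jump intensity and time horizon

def compound_poisson_path(rng, lam, T):
    """Jump times and running values of a compound Poisson process with
    Exp(1) jumps on [0, T]."""
    n_jumps = rng.poisson(lam * T)
    times = np.sort(rng.uniform(0.0, T, n_jumps))  # jump times given N_T
    jumps = rng.exponential(1.0, n_jumps)          # Y_i ~ Exp(1)
    return times, np.cumsum(jumps)                 # value right after each jump

# Monte Carlo check of E[X_T] = lam * T * E[Y] = 3:
finals = []
for _ in range(5000):
    _, values = compound_poisson_path(rng, lam, T)
    finals.append(values[-1] if values.size else 0.0)
assert abs(np.mean(finals) - lam * T * 1.0) < 0.2
```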

Remark 2.2.1. The Lévy measure has an interpretation in terms of the jumps of the Lévy process. Indeed, Brownian motion has continuous trajectories, so its Lévy measure is zero. The Poisson process has jumps of size 1, giving a Dirac mass at 1 as Lévy measure, and a compound Poisson process, whose jumps are realisations of the random variables $Y_i$ occurring at rate $\lambda$, has Lévy measure $\lambda\mu_Y(dy)$. In general, the Lévy measure can be seen as the intensity of the jumps in a given region of space.

The Gamma Process

We consider (Xt)t≥0 such that for all t ≥ 0,

$$\mathcal{L}(X_t) = \Gamma(\alpha t, \beta), \qquad \alpha, \beta > 0.$$

This means that for all $t > 0$, the density of $X_t$ is
$$f(t, x) = \frac{\beta^{\alpha t}}{\Gamma(\alpha t)}x^{\alpha t - 1}e^{-\beta x}\mathbf{1}_{\{x>0\}}.$$

A simple integration yields that the characteristic function of $X_t$ is $\varphi_{X_t}(\xi) = \frac{\beta^{\alpha t}}{(\beta - i\xi)^{\alpha t}}$. To simplify the computations, we take $\alpha = \beta = 1$, and give an alternative expression for the characteristic function, more suitable for the Lévy-Khintchine formula:
$$\frac{1}{(1-i\xi)^t} = e^{-t\ln(1-i\xi)}.$$
Now, notice that
$$\left(-\ln(1-i\xi)\right)' = \frac{i}{1-i\xi} = i\,\varphi_{X_1}(\xi) = i\int_0^{+\infty} e^{i\xi x}e^{-x}\,dx.$$

Indeed, for $\alpha = \beta = 1$, $X_1$ has an exponential distribution of parameter 1. Integrating both sides with respect to $\xi$ gives:
$$-\ln(1-i\xi) = \int_0^{+\infty}\left(e^{i\xi x} - 1\right)\frac{e^{-x}}{x}\,dx.$$
In other words, we have exhibited the Lévy measure of $(X_t)_{t\ge 0}$: it is $\nu(dx) = \frac{e^{-x}}{x}\,dx$.
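Since the increments $X_{t+h} - X_t$ are independent and $\Gamma(\alpha h, \beta)$-distributed, a path of the gamma process can be simulated step by step. A minimal sketch with illustrative parameters (note that numpy parametrises the gamma law by shape and scale, so scale $= 1/\beta$):

```python
import numpy as np

rng = np.random.default_rng(7)
alpha, beta, T, n_steps = 1.0, 1.0, 1.0, 1000  # illustrative parameters
dt = T / n_steps

# Independent Gamma(alpha * dt, beta) increments:
increments = rng.gamma(shape=alpha * dt, scale=1.0 / beta, size=n_steps)
path = np.concatenate([[0.0], np.cumsum(increments)])  # X_0 = 0

assert path[0] == 0.0
assert np.all(np.diff(path) >= 0)  # subordinator: non-decreasing paths
```

The non-decreasing paths make the gamma process our first example of a subordinator, the topic of the next paragraph.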

Stable Subordinators In general, a subordinator is a non-decreasing Lévy process. For a Lévy process to be non-decreasing means several things. First, there cannot be a Brownian part, as Brownian motion cannot have monotone trajectories. Second, the Lévy measure cannot charge $(-\infty, 0)$; otherwise, the process would have negative jumps and the trajectories could not be non-decreasing.

Theorem 2.2.2. If $T$ is a subordinator, then its characteristic function takes the form
$$\mathbb{E}\left(e^{i\xi T}\right) = \exp\left(ib\xi + \int_0^{+\infty}\left(e^{i\xi x} - 1\right)\mu(dx)\right),$$
where $b \ge 0$ and the Lévy measure satisfies the additional requirements:
$$\mu\left((-\infty, 0)\right) = 0 \quad \text{and} \quad \int_0^{+\infty}\min(1, y)\,\mu(dy) < +\infty.$$
The proof of this result can be found in Bertoin [2]. We will say more on the subject once we have proved the Lévy-Khintchine formula.

Probably the most used type of subordinator is the $\alpha$-stable subordinator, that is, as its name indicates, a stable process with non-decreasing trajectories. Let us denote such a process by $(T_t^\alpha)_{t\ge 0}$. Looking at the previous theorem, we see that we need $0 < \alpha < 1$ to guarantee the conditions on the Lévy measure. Besides, since the measure must give zero mass to the negative reals, we get:

$$\mathbb{E}\left(e^{i\xi T_t^\alpha}\right) = e^{-t\xi^\alpha}.$$
Now, a simple computation gives
$$\xi^\alpha = \frac{\alpha}{\Gamma(1-\alpha)}\int_0^{+\infty}\left(1 - e^{i\xi x}\right)\frac{dx}{x^{1+\alpha}}.$$

We thus see that the characteristic triplet of the $\alpha$-stable subordinator has to be $\left(0, 0, \frac{\alpha}{\Gamma(1-\alpha)}\frac{dx}{x^{1+\alpha}}\mathbf{1}_{\{x>0\}}\right)$. These types of random processes are useful for time-changes. Let us give one example using $(T_t^\alpha)_{t\ge 0}$ and a $d$-dimensional Brownian motion.

Example 2.2.3. Consider $(B_t)_{t\ge 0}$ a $d$-dimensional Brownian motion and $(T_t^\alpha)_{t\ge 0}$ an $\alpha$-stable subordinator, independent of $B$. The subordinated Brownian motion $(B_{T_t^\alpha})_{t\ge 0}$ is a $d$-dimensional $2\alpha$-stable process. Indeed, we find its characteristic function to be
$$\mathbb{E}\left(e^{i\langle\xi, B_{T_t^\alpha}\rangle}\right) = e^{-t|\xi|^{2\alpha}}$$
(up to a multiplicative constant in the exponent). This example gives us a very simple way to obtain a $d$-dimensional $2\alpha$-stable process. We have to mention, though, that we do not obtain all the stable processes in dimension $d$ in this way, since the process obtained is clearly symmetric.

2.3 Exploring the Jumps of a Lévy Process

In this section, we investigate the structure of the jumps of a Lévy process. We will link the jumps to a Poisson random measure, which will lead us to the celebrated Lévy-Itô decomposition. Henceforth, (Xt)t≥0 will denote a Lévy process with generating triplet (b, A, ν).

Remark 2.3.1. Note that until now, we did not specify where the triplet comes from. The Lévy-Khintchine representation states that any Lévy process is characterised by a triplet, but the origin of this triplet is so far unclear. In this section, we will define these objects in relation to some path properties of the process.

The Large Jumps of a Lévy Process as a Compound Poisson Process The idea is to see, within the jumps of a Lévy process, the structure of a Poisson point process:

[Figure: the jumps of a sample path, represented as points in the (time, jump size) plane.]

The difficulty in the analysis of the jumps of a Lévy process comes from the fact that even though the jumps are countable, it is possible to have
$$\sum_{0<s\le t}|\Delta X_s| = +\infty.$$

In other words, it is possible for the jumps to accumulate. This difficulty will be dealt with thanks to the fact that Lévy processes will always have the property that

$$\sum_{0<s\le t}|\Delta X_s|^2 < +\infty.$$

To exploit this, we need to define the jump measure associated with our Lévy process. Fix a Borel set $A$ such that $0 \notin \bar{A}$. We define the random variables:

$$T_1^A = \inf\{t > 0;\ \Delta X_t \in A\},\quad\dots,\quad T_{n+1}^A = \inf\{t > T_n^A;\ \Delta X_t \in A\},\ \dots$$

Since $X$ has càdlàg paths and $0 \notin \bar{A}$, we see that

$$\{T_n^A \ge t\} \in \mathcal{F}_{t+} = \mathcal{F}_t.$$

Thus, those random variables are stopping times. Besides, the assumption $0 \notin \bar{A}$ yields that

$$\lim_{n\to+\infty} T_n^A = +\infty \quad \text{almost surely}.$$

We introduce Nt(A) the following quantity:

$$N_t(A) = \#\{0 \le s \le t;\ \Delta X_s \in A\} = \sum_{0<s\le t}\mathbf{1}_{\{\Delta X_s\in A\}} = \sum_{n=1}^{+\infty}\mathbf{1}_{\{T_n^A\le t\}}.$$

This quantity is a counting process without explosion (since $T_n^A \to +\infty$) that counts the number of jumps landing in $A$.

Theorem 2.3.2. Let $A$ be a Borel set such that $0 \notin \bar{A}$. Then, $(N_t(A))_{t\ge 0}$ is a Poisson process.

Proof. We can see that for all times $0 \le s < t < \infty$,

$$N_t(A) - N_s(A) \in \sigma\{X_u - X_v;\ s \le v \le u \le t\},$$
and thanks to the fact that $X$ has independent increments, $N_t(A) - N_s(A)$ is independent of $\mathcal{F}_s$; that is, $N_t(A)$ has independent increments. Finally, we observe that $N_t(A) - N_s(A)$ counts the number of jumps that $X_{s+u} - X_s$ has in $A$, for $0 \le u < t-s$. Using the fact that $X$ has stationary increments, we then conclude that $N_t(A) - N_s(A)$ has the same distribution as $N_{t-s}(A)$. To summarise:

• Nt(A) is a counting process,

• Nt(A) has independent increments,

• $N_t(A)$ has stationary increments,

and we can conclude that $N_t(A)$ must be a Poisson process.

We also define ν(A) to be the quantity:

$$\nu(A) = \mathbb{E}[N_1(A)],$$
that is, $\nu(A)$ is the intensity (or parameter) of the Poisson process $N_t(A)$. Consequently, we deduce that $\mathbb{E}[N_t(A)] = t\nu(A)$.
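The identity $\mathbb{E}[N_t(A)] = t\nu(A)$ can be checked by simulation for a compound Poisson process, whose Lévy measure is $\lambda\mu_Y$. A minimal sketch with illustrative choices ($\lambda = 2$, $Y_i \sim \mathrm{Exp}(1)$, $A = (1, +\infty)$):

```python
import numpy as np

rng = np.random.default_rng(3)
lam, t, n_runs = 2.0, 1.0, 10_000  # illustrative intensity and horizon

counts = np.empty(n_runs)
for r in range(n_runs):
    # Compound Poisson jumps on [0, t], with Y_i ~ Exp(1):
    jumps = rng.exponential(1.0, rng.poisson(lam * t))
    counts[r] = np.sum(jumps > 1.0)  # N_t(A) for A = (1, +inf)

estimate = counts.mean()
theory = t * lam * np.exp(-1.0)  # t * nu(A) = t * lam * P(Y > 1)
assert abs(estimate - theory) < 0.05
```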

Remark 2.3.3. A rather useful property of $N_t(\cdot)$ is that if $A$ and $B$ are disjoint, $N_t(A)$ and $N_t(B)$ are independent. This property comes from the fact that $N_t(A)$ and $N_t(B)$ rely on different jumps of $(X_t)_{t\ge 0}$ as soon as $A$ and $B$ are disjoint. Thus, the large jumps of a Lévy process give rise to a Poisson process. The fact that only large jumps are considered comes from the assumption $0 \notin \bar{A}$. Note that the Poisson process $N_t(A)$ explicitly depends on the prescribed Borel set $A$. We can thus ask: what is the dependency of this process with respect to the Borel set?

Theorem 2.3.4. The set function $A \mapsto N_t(A)$ defines a $\sigma$-finite measure on $\mathbb{R}\setminus\{0\}$. The set function $A \mapsto \nu(A) = \mathbb{E}[N_1(A)]$ is also a $\sigma$-finite measure on $\mathbb{R}\setminus\{0\}$.

Proof. By construction, $N_t(\cdot)$ is a counting measure. Besides, it is clear from the linearity of expectation that $\nu$ is also a measure.

Definition 2.3.5. The measure ν is called the Lévy measure of the process (Xt)t≥0. This measure is the third element in the characteristic triplet of (Xt)t≥0.

The fact that $\nu(A) < +\infty$ when $0 \notin \bar{A}$ is actually a consequence of the fact that $N_t(A)$ has jumps of size 1. Indeed, the moments of the Lévy measure are closely related to the moments of the Lévy process. More precisely, we have the following result:

Theorem 2.3.6. Let (Xt)t≥0 be a Lévy process with bounded jumps:

$$\sup_{t\ge 0}|\Delta X_t| < C,$$

where $C$ is a fixed non-random constant. Then, for all $m \ge 1$, $\mathbb{E}[|X_t|^m] < +\infty$; that is, $X_t$ has moments of every order.

Since $N_t(A)$ has jumps of size 1, thus bounded jumps, it has moments of every order. The Lévy measure of a set is defined as the first moment $\mathbb{E}[N_1(A)]$, thus it is finite. Note that this is the first step towards satisfying the definition of a Lévy measure:
$$\int_{\mathbb{R}^d}\min(1, |x|^2)\,\nu(dx) < +\infty,$$
since we just obtained that $\int_{|x|>1}\nu(dx) < +\infty$. We will deal with the part $\int_{|x|\le 1}|x|^2\nu(dx)$ later. Note that we actually have a stronger result, linking the moments of the Lévy measure to the moments of the process itself. See Theorem 25.3, p. 159 in Sato [11].

Proof of Theorem 2.3.6. This proof follows the proof of Theorem 2.4.7, p. 118 in Applebaum [1]. We define the sequence of stopping times

$$T_1 = \inf\{t > 0;\ |X_t| > C\},\quad\dots,\quad T_{n+1} = \inf\{t > T_n;\ |X_t - X_{T_n}| > C\},\ \dots
$$

This forms an increasing sequence of stopping times. First, assume $T_1 < +\infty$ almost surely. Since $|\Delta X_s| \le C$ at all times, we have by induction that:

$$\sup_{s>0}|X_{s\wedge T_n}| \le 2Cn.$$

By the strong Markov property, we get that $T_n - T_{n-1}$ is independent of $\mathcal{F}_{T_{n-1}}$ and has the same distribution as $T_1$. Thus, because $T_1 < +\infty$ almost surely, we have

$$\mathbb{E}\left(e^{-T_n}\right) = \mathbb{E}\left(e^{-T_1}\right)^n = \alpha^n,$$
for a certain $\alpha \in [0, 1)$. We thus get:

$$\mathbb{P}(|X_t| > 2Cn) \le \mathbb{P}(T_n \le t) \le e^t\,\mathbb{E}\left(e^{-T_n}\right) \le e^t\alpha^n.$$

From this last inequality, we deduce that $\mathbb{E}[|X_t|^m]$ is finite:

$$\mathbb{E}[|X_t|^m] = \mathbb{E}[|X_t|^m\mathbf{1}_{\{|X_t|\le 2Cn\}}] + \mathbb{E}[|X_t|^m\mathbf{1}_{\{|X_t|>2Cn\}}].$$
For the first part, there are no problems:

$$\mathbb{E}[|X_t|^m\mathbf{1}_{\{|X_t|\le 2Cn\}}] \le (2Cn)^m.$$
For the second part, we write:

$$\begin{aligned}
\mathbb{E}[|X_t|^m\mathbf{1}_{\{|X_t|>2Cn\}}] &= \sum_{r=n}^{+\infty}\mathbb{E}[|X_t|^m\mathbf{1}_{\{2rC<|X_t|\le 2(r+1)C\}}]\\
&\le \sum_{r=n}^{+\infty}(2(r+1)C)^m\,\mathbb{P}(2rC < |X_t| \le 2(r+1)C)\\
&\le \sum_{r=n}^{+\infty}(2(r+1)C)^m\,\mathbb{P}(2rC < |X_t|)\\
&\le \sum_{r=n}^{+\infty}(2(r+1)C)^m e^t\alpha^r < +\infty.
\end{aligned}$$

Thus, when $T_1$ is finite almost surely, $X_t$ has moments of every order. Now, if $\mathbb{P}(T_1 = +\infty) > 0$, then we can write:
$$\mathbb{E}[|X_t|^m] = \mathbb{E}[|X_t|^m\mathbf{1}_{\{T_1<+\infty\}}] + \mathbb{E}[|X_t|^m\mathbf{1}_{\{T_1=+\infty\}}].$$
For $\mathbb{E}[|X_t|^m\mathbf{1}_{\{T_1<+\infty\}}]$, we can argue as before, and we are left with

$$\mathbb{E}[|X_t|^m\mathbf{1}_{\{T_1=+\infty\}}] \le C^m\,\mathbb{P}(T_1 = +\infty) \le C^m.$$
Thus, the proof is complete.

Remark 2.3.7. So far, we have concluded that $N_t(\cdot)$ is a measure. Note that this is actually a random measure, that is, a random variable taking values in the space of measures. This begs the question of defining a probability space on the set of all measures. As it is not the main focus of these notes, we will not dwell too long on this construction. The interested reader can type "random measure" in a search engine to get many references on the matter. The case of Poisson random measures is of particular interest for us, and the reader can see that in this case, one can completely characterise the random measure through a Laplace-like transform.

Now that we have established that $N_t(\cdot)$ is a measure, what kind of results can we obtain when integrating with respect to it? The answer is simple: we know what the measure does on indicators of Borel sets, and we can extend this with results from measure theory to get the following.

Theorem 2.3.8. Let $A$ be a Borel set such that $0 \notin \bar{A}$, and let $f$ be measurable and finite on $A$. We have

$$\int_A f(x)\,N_t(dx) = \sum_{0<s\le t} f(\Delta X_s)\mathbf{1}_{\{\Delta X_s\in A\}}.$$

The last identity comes from the fact that $\mathbb{E}[N_t(A)] = t\nu(A)$ can be extended to integrable functions with the usual arguments. We already know that $\left(\int_A f(x)N_t(dx)\right)_{t\ge 0}$ is a Lévy process; actually, we can say a bit more:

Proposition 2.3.9. The Lévy process $\left(\int_A f(x)N_t(dx)\right)_{t\ge 0}$ is a compound Poisson process.

Proof. First, we observe that it is enough to prove the statement when $f$ is a simple function, that is,

$$f(x) = \sum_{i=1}^{m}\alpha_i\mathbf{1}_{A_i}(x).$$
Note that we can always reduce to the case where the $A_i$'s are disjoint. Thus $\int_A f(x)N_t(dx) = \sum_{i=1}^{m}\alpha_i N_t(A_i\cap A)$, and the $A_i$'s being disjoint makes the $N_t(A_i\cap A)$ independent (see Remark 2.3.3). Next, since we know the characteristic exponent of a compound Poisson process, the goal is to show that
$$\mathbb{E}\left[\exp\left(i\left\langle\xi, \int_A f(x)N_t(dx)\right\rangle\right)\right] = \exp\left(t\int\left(e^{i\langle\xi,y\rangle}-1\right)\mu(dy)\right)$$
for some measure $\mu$ to be determined. We plug in the specific expression for $f$ and use the fact that the $A_i$'s are disjoint:

$$\begin{aligned}
\mathbb{E}\left[\exp\left(i\left\langle\xi,\int_A f(x)N_t(dx)\right\rangle\right)\right] &= \mathbb{E}\left[\exp\left(i\left\langle\xi,\int_A\sum_{i=1}^{m}\alpha_i\mathbf{1}_{A_i}(x)N_t(dx)\right\rangle\right)\right]\\
&= \mathbb{E}\left[\exp\left(i\left\langle\xi,\sum_{i=1}^{m}\alpha_i N_t(A\cap A_i)\right\rangle\right)\right]\\
&= \prod_{i=1}^{m}\mathbb{E}\left[\exp\left(i\langle\xi,\alpha_i\rangle N_t(A\cap A_i)\right)\right].
\end{aligned}$$

Now, since $N_t(A\cap A_i)$ has a Poisson distribution with parameter $t\nu(A\cap A_i)$, we get:
$$\mathbb{E}\left[\exp\left(i\langle\xi,\alpha_i\rangle N_t(A\cap A_i)\right)\right] = \exp\left(t\nu(A\cap A_i)\left(e^{i\langle\xi,\alpha_i\rangle}-1\right)\right).$$

We thus obtain
$$\mathbb{E}\left[\exp\left(i\left\langle\xi,\int_A f(x)N_t(dx)\right\rangle\right)\right] = \prod_{i=1}^{m}\exp\left(t\nu(A\cap A_i)\left(e^{i\langle\xi,\alpha_i\rangle}-1\right)\right).$$
Passing the product over $i$ inside the exponential, we can rewrite the right-hand side as:

$$\prod_{i=1}^{m}\exp\left(t\nu(A\cap A_i)\left(e^{i\langle\xi,\alpha_i\rangle}-1\right)\right) = \exp\left(t\sum_{i=1}^{m}\nu(A\cap A_i)\left(e^{i\langle\xi,\alpha_i\rangle}-1\right)\right) = \exp\left(t\int_{\mathbb{R}^d}\left(e^{i\langle\xi,x\rangle}-1\right)\nu_{A,f}(dx)\right),$$

where we define $\nu_{A,f}(B) = \nu(A\cap f^{-1}(B))$ (recall $f$ is our simple function). Thus, we obtain
$$\mathbb{E}\left[\exp\left(i\left\langle\xi,\int_A f(x)N_t(dx)\right\rangle\right)\right] = \exp\left(t\int_{\mathbb{R}^d}\left(e^{i\langle\xi,x\rangle}-1\right)\nu_{A,f}(dx)\right), \qquad (2.1)$$
and conclude that $\int_A f(x)N_t(dx)$ is a compound Poisson process.

We point out the important formula (2.1) established in the previous proof. In this formula, we characterised the Fourier transform of the Lévy process $\int_A f(x)N_t(dx)$, where $f$ is any measurable function. In this exposition, we chose not to say too much about random measures, but note that giving an expression for $\mathbb{E}\left[\exp\left(i\left\langle\xi,\int_A f(x)N_t(dx)\right\rangle\right)\right]$ in general is a way to characterise the law of the random measure $N_t(dx)$. We highlight this fact in the following corollary:

Corollary 2.3.10. Let $A$ be a Borel set such that $0 \notin \bar{A}$, and let $(N_t(\cdot))_{t\ge 0}$ be the jump measure of some Lévy process. Then, the following hold true:

1. Martingale property: the process $\int_A f(x)\left(N_t(dx) - t\nu(dx)\right)$ is a martingale.

2. Quadratic variation:
$$\mathbb{E}\left[\left(\int_A f(x)\left(N_t(dx) - t\nu(dx)\right)\right)^2\right] = t\int_A |f(x)|^2\,\nu(dx).$$

3. Fourier transform:
$$\mathbb{E}\left[\exp\left(i\Big\langle\xi,\int_A f(x)\left(N_t(dx)-t\nu(dx)\right)\Big\rangle\right)\right] = \exp\left(t\int_{\mathbb{R}^d}\left(e^{i\langle\xi,x\rangle}-1-i\langle\xi,x\rangle\right)\nu_{A,f}(dx)\right).$$

Proof. The fact that $\int_A f(x)\left(N_t(dx)-t\nu(dx)\right)$ is a martingale comes from the fact that $\int_A f(x)N_t(dx)$ has independent increments with mean $t\int_A f(x)\nu(dx)$, by definition of $\nu$. Besides, the last identity holds true by definition of the measure $\nu_{A,f}(B)=\nu(A\cap f^{-1}(B))$. The only result left to prove is the quadratic variation. To obtain it, we start from equation (2.1) and differentiate twice with respect to $\xi$; plugging in $\xi=0$ then yields the desired conclusion.

Finally, we give an extension of the result stated in Remark 2.3.3.

Proposition 2.3.11. Let $A, B$ be two disjoint Borel sets such that $0\notin\bar{A}$ and $0\notin\bar{B}$. Then, the two Lévy processes
$$\sum_{0<s\leq t}\Delta X_s\mathbf{1}_{\{\Delta X_s\in A\}} \quad\text{and}\quad \sum_{0<s\leq t}\Delta X_s\mathbf{1}_{\{\Delta X_s\in B\}}$$
are independent.

We now use the jump measure to decompose our initial Lévy process $(X_t)_{t\geq0}$. Fix a cut-off level $a>0$; the specific value of $a$ does not matter. Assume for a while that our original Lévy process $(X_t)_{t\geq0}$ does not have a continuous part. As it is a càdlàg process, we can then write it as the sum of its jumps:
$$X_t = \sum_{0<u\leq t}\Delta X_u.$$

Now, we can split this sum into two parts:
$$X_t = \sum_{0<u\leq t}\Delta X_u\mathbf{1}_{\{|\Delta X_u|\leq a\}} + \sum_{0<u\leq t}\Delta X_u\mathbf{1}_{\{|\Delta X_u|>a\}},$$

and the large-jump part rewrites as a Poisson integral:
$$\sum_{0<u\leq t}\Delta X_u\mathbf{1}_{\{|\Delta X_u|>a\}} = \int_{|x|>a}x\,N_t(dx).$$

Since $\{|x|>a\}$ does not contain $0$, this process is a compound Poisson process: the large jumps of a Lévy process form a compound Poisson process. Now, going back to the separation above, what meaning can we give to the small-jump part
$$\sum_{0<u\leq t}\Delta X_u\mathbf{1}_{\{|\Delta X_u|\leq a\}}\,?$$

In this case, the set $\{|x|\leq a\}$ does contain $0$ and we cannot use the previous results. However, what will help us here is that this process only has bounded jumps (by definition, its jumps have size at most $a$). We will see that this remaining term forms an $L^2$ martingale.

The Small Jumps as a Compensated Poisson Integral The small-jump part is probably the most difficult part to understand. In terms of trajectory, the process moves through the accumulation of small jumps. Notice that in the previous section, the recurring condition that $0\notin\bar{A}$ prevented the jumps from being too small. We now relax this assumption and see what we can say about the jump measure. The key ingredient for dealing with small jumps is Theorem 2.3.6, namely that a Lévy process with bounded jumps has moments of every order. The main result of this section is the following:

Theorem 2.3.13. Let $(X_t)_{t\geq0}$ be a Lévy process with jumps bounded by $a>0$:
$$\sup_{t\geq0}|\Delta X_t| < a.$$
Define $Z_t = X_t - \mathbb{E}(X_t)$. Then $(Z_t)_{t\geq0}$ is a martingale and
$$Z_t = Z_t^c + Z_t^d,$$
where $(Z_t^c)_{t\geq0}$ is a martingale with continuous paths, $(Z_t^d)_{t\geq0}$ is a martingale and:
$$Z_t^d = \int_{|x|\leq a}x\left(N_t(dx)-t\nu(dx)\right).$$
Moreover, $(Z_t^c)_{t\geq0}$ and $(Z_t^d)_{t\geq0}$ are independent Lévy processes.

Proof. From the fact that $(Z_t)_{t\geq0}$ has zero mean and independent increments, we deduce that $(Z_t)_{t\geq0}$ is a martingale and a Lévy process. Now, for a Borel set $A$, define
$$M_t(A) = \int_A x\left(N_t(dx)-t\nu(dx)\right) = \sum_{0<s\leq t}\Delta X_s\mathbf{1}_{\{\Delta X_s\in A\}} - t\int_A x\,\nu(dx).$$

Now, consider $A_n = \left\{\frac{a}{n+1}\leq|x|\leq\frac{a}{n}\right\}$. Since these sets are disjoint, by Proposition 2.3.11 the Lévy processes $M_t(A_n)$ are pairwise independent. Define $M_t^n = \sum_{k=1}^n M_t(A_k)$; the goal is to establish the two properties:
- for all $n\geq0$, the Lévy processes $Z-M^n$ and $M^n$ are independent;
- $M^n\to Z^d$ and $Z-M^n\to Z^c$.

The fact that the two processes $Z-M^n$ and $M^n$ are independent actually comes from the fact that those processes do not jump simultaneously. The fact that $M^n$ and $Z-M^n$ converge comes from the fact that $Z$ has bounded jumps. Indeed, since it has bounded jumps, it has moments of every order; in particular, by independence of $Z-M^n$ and $M^n$, we have:
$$\mathrm{Var}(Z_t-M_t^n) + \mathrm{Var}(M_t^n) = \mathrm{Var}(Z_t) < +\infty.$$
Thus, the two martingales $Z-M^n$ and $M^n$ are bounded in $L^2$, hence converge in $L^2$. We denote their respective limits $Z^c$ and $Z^d$. It remains to prove that $Z^d$ has the explicit form given in the statement of the theorem and that $Z^c$ has continuous sample paths. For $Z^d$, we have:
$$M_t^n = \sum_{k=1}^n M_t(A_k) = \int_{\cup_{k=1}^n A_k}x\left(N_t(dx)-t\nu(dx)\right), \quad\text{with } \cup_{k=1}^n A_k = \left\{\frac{a}{n+1}\leq|x|\leq a\right\}.$$
Letting $n\to+\infty$ yields the result. We now have to establish the path continuity of $Z^c$. Since $Z-M^n$ converges to $Z^c$ in $L^2$, using Doob's inequality,
$$\left\|\sup_{t\geq0}\left|Z_t^c-(Z_t-M_t^n)\right|\right\|_{L^2} \leq 2\sup_{t\geq0}\left\|Z_t^c-(Z_t-M_t^n)\right\|_{L^2}.$$
Thus the convergence in $L^2$ can be made uniform in $t$, and we can find a subsequence that converges almost surely uniformly in $t$ to $Z^c$; hence $Z^c$ has to have continuous paths.

Remark 2.3.14. The crucial part of the last result is probably that
$$Z_t^d = \int_{|x|\leq a}x\left(N_t(dx)-t\nu(dx)\right),$$
since so far we did not know how to define Poisson integrals over sets such as $\{|x|\leq a\}$ that contain $0$. We now have the answer: the integral is defined in the $L^2$ sense, through a procedure we call "compensation": subtracting $t\nu(dx)$ to ensure the convergence of the integral $\int\cdot\left(N_t(dx)-t\nu(dx)\right)$ is called compensating the Poisson integral. Similarly, the martingale $N_t-\lambda t$ is called a compensated Poisson process.
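To see compensation at work numerically, here is a minimal sketch (NumPy assumed). A finite Lévy measure $\nu=\lambda\cdot\mathrm{Uniform}[0.1,0.5]$ — a hypothetical stand-in for the small-jump part, chosen finite so that plain simulation is possible — makes the compensated integral a compound Poisson sum minus its mean; item 2 of Corollary 2.3.10 predicts its variance:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, t = 5.0, 2.0                    # nu(dx) = lam * Uniform[0.1, 0.5](dx)
a_lo, a_hi = 0.1, 0.5
n_paths = 200_000

counts = rng.poisson(lam * t, size=n_paths)
jumps = rng.uniform(a_lo, a_hi, size=counts.sum())
sums = np.bincount(np.repeat(np.arange(n_paths), counts),
                   weights=jumps, minlength=n_paths)

mean_jump = (a_lo + a_hi) / 2.0                  # (1/lam) * ∫ x nu(dx)
M = sums - t * lam * mean_jump                   # compensated integral M_t(A)

# Isometry: Var(M_t) = t * ∫ x^2 nu(dx) = t * lam * E[Y^2] for Y ~ U[a_lo, a_hi]
second_moment = (a_hi**3 - a_lo**3) / (3 * (a_hi - a_lo))
predicted_var = t * lam * second_moment
```

The compensated sum has (empirically) mean zero and variance matching the isometry, which is exactly what makes the $L^2$ limit over shrinking annuli possible.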

Remark 2.3.15. In the next chapter, we will elaborate on the continuous part $(Z_t^c)_{t\geq0}$, and actually show that, when not identically zero, it has to be a Brownian motion. We conclude this section by pointing out that we proved that $\nu$, defined as the intensity of the Poisson measure, is a Lévy measure.

Proposition 2.3.16. $\nu$, defined by $\nu(\cdot)=\mathbb{E}[N_1(\cdot)]$, is a Lévy measure, that is, a sigma-finite measure satisfying
$$\int_{\mathbb{R}^d\setminus\{0\}}\min(1,|x|^2)\,\nu(dx) < +\infty.$$
Proof. In Theorem 2.3.6 above, we already showed that $\int_{|x|\geq1}\nu(dx)<+\infty$. For the part where $|x|\leq1$, we write:
$$\int_{|x|\leq1}|x|^2\nu(dx) = \lim_{n\to+\infty}\int_{\cup_{k=1}^n A_k}|x|^2\nu(dx).$$

We recall that the $A_k$ are defined in the proof of Theorem 2.3.13 (here with $a=1$). Now, using item 2 in Corollary 2.3.10, since $\cup_{k=1}^n A_k$ does not contain zero, we have that
$$\int_{\cup_{k=1}^n A_k}|x|^2\nu(dx) = \mathbb{E}\left[\left(\int_{\cup_{k=1}^n A_k}x\left(N_1(dx)-\nu(dx)\right)\right)^2\right] = \mathbb{E}\left[M_1\left(\cup_{k=1}^n A_k\right)^2\right].$$

Thus, we get
$$\int_{|x|\leq1}|x|^2\nu(dx) = \lim_{n\to+\infty}\mathbb{E}\left[M_1\left(\cup_{k=1}^n A_k\right)^2\right] = \mathbb{E}[(Z_1^d)^2] < +\infty.$$

Chapter 3

Proof of the Levy Khintchine formula

In this chapter, we collect the results of the previous chapter and establish the Lévy-Khintchine formula. To that end, let (Xt)t≥0 be a Lévy process, and let (Nt(·))t≥0 be the associated jump measure.

3.1 The Lévy-Itô Decomposition

To prove the Lévy-Khintchine formula, the idea is first to remove the large jumps, which form a compound Poisson process, then to deal with the small jumps using a compensated Poisson integral; whatever is left has to be a drifted Brownian motion. We thus define:
$$Z_t = X_t - \int_{|x|>1}x\,N_t(dx).$$
Proposition 3.1.1. The Lévy process $\left(\int_{|x|>1}x\,N_t(dx)\right)_{t\geq0}$ is a compound Poisson process, and is independent of $(Z_t)_{t\geq0}$.

Proof. The fact that the integral forms a compound Poisson process has been established in the previous chapter (see Proposition 2.3.9). The independence comes from the fact that those two Lévy processes do not have simultaneous jumps (see Proposition 2.3.11).

Now, $(Z_t)_{t\geq0}$ only has jumps smaller than one. We have the following result.

Proposition 3.1.2. The following decomposition holds:
$$Z_t = Z_t^c + Z_t^d.$$
The two processes $(Z_t^c)_{t\geq0}$ and $(Z_t^d)_{t\geq0}$ are independent Lévy processes, and $(Z_t^c)_{t\geq0}$ is a Brownian motion.

Proof. This is actually an extension of Theorem 2.3.13. Indeed, we already know that the small jumps decompose into a continuous part $Z^c$ and a discontinuous part $Z^d$, and that those two processes are independent; the only novelty here is the nature of the continuous part. To show that $Z^c$ is a Brownian motion, the strategy is to prove:
$$\mathbb{E}[e^{i\langle\xi,Z_t^c\rangle}] = e^{-\frac{t}{2}\langle\xi,A\xi\rangle},$$
for a certain matrix $A$. For convenience, we consider the one-dimensional case. Note that by construction, $Z^c$ has no jumps, so it has moments of every order. Let us write

$$\varphi_{Z_t^c}(\xi) = \mathbb{E}[e^{i\xi Z_t^c}] = e^{-t\eta(\xi)}.$$
Because $Z^c$ has all moments, we know that $\eta$ is $C^\infty$. Besides, since $Z$ has zero mean, so does $Z^c$, and $\eta'(0)=0$. Consequently, for all $m\geq2$, it holds that
$$\mathbb{E}[(Z_t^c)^m] = a_1t + a_2t^2 + \cdots + a_{m-1}t^{m-1}. \qquad (3.1)$$

Let now $0=t_0<t_1<\cdots<t_n=t$ be a partition of $[0,t]$; we have
$$\mathbb{E}[e^{i\xi Z_t^c}] - 1 = \mathbb{E}\left[\sum_{j=0}^{n-1}\left(e^{i\xi Z_{t_{j+1}}^c}-e^{i\xi Z_{t_j}^c}\right)\right].$$

On each increment, we use Taylor's formula on the exponential function and split the sum into
$$S_1(t) = i\xi\sum_{j=0}^{n-1}e^{i\xi Z_{t_j}^c}\left(Z_{t_{j+1}}^c-Z_{t_j}^c\right);$$
$$S_2(t) = -\frac{\xi^2}{2}\sum_{j=0}^{n-1}e^{i\xi Z_{t_j}^c}\left(Z_{t_{j+1}}^c-Z_{t_j}^c\right)^2;$$
$$S_3(t) = -\frac{\xi^2}{2}\sum_{j=0}^{n-1}\left(e^{i\xi\left(Z_{t_j}^c+\theta_j(Z_{t_{j+1}}^c-Z_{t_j}^c)\right)}-e^{i\xi Z_{t_j}^c}\right)\left(Z_{t_{j+1}}^c-Z_{t_j}^c\right)^2.$$
Now, note that because $Z^c$ is a Lévy process, it has independent increments and

$$\mathbb{E}(S_1(t)) = i\xi\,\mathbb{E}\left[\sum_{j=0}^{n-1}e^{i\xi Z_{t_j}^c}\left(Z_{t_{j+1}}^c-Z_{t_j}^c\right)\right] = 0.$$

Similarly, using the fact that $\mathbb{E}[(Z_v^c)^2]=a_1v$, we have
$$\mathbb{E}(S_2(t)) = -\frac{a_1\xi^2}{2}\sum_{j=0}^{n-1}\varphi_{Z_{t_j}^c}(\xi)(t_{j+1}-t_j).$$
The last term is trickier, and to analyse it, we introduce the event:
$$B_\alpha = \left\{\max_{0\leq j\leq n-1}\,\sup_{t_j\leq u,v\leq t_{j+1}}|Z_u^c-Z_v^c|\leq\alpha\right\}.$$
Note that the probability of the complement of this event tends to zero with the mesh size, thanks to the continuity of the paths of $Z^c$. Let now $\mathbb{E}[S_3(t)] = \mathbb{E}[S_3(t)\mathbf{1}_{B_\alpha}] + \mathbb{E}[S_3(t)\mathbf{1}_{B_\alpha^c}]$. Using the elementary inequality $|e^{iu}-1|\leq2$, we get:
$$|\mathbb{E}[S_3(t)\mathbf{1}_{B_\alpha^c}]| \leq \frac{\xi^2}{2}\sum_{j=0}^{n-1}\mathbb{E}\left[\left|e^{i\xi Z_{t_j}^c}\left(e^{i\xi\theta_j(Z_{t_{j+1}}^c-Z_{t_j}^c)}-1\right)\right|\left(Z_{t_{j+1}}^c-Z_{t_j}^c\right)^2\mathbf{1}_{B_\alpha^c}\right]$$

$$\leq \xi^2\sum_{j=0}^{n-1}\mathbb{E}\left[\left(Z_{t_{j+1}}^c-Z_{t_j}^c\right)^2\mathbf{1}_{B_\alpha^c}\right].$$
We now use Cauchy-Schwarz on the right-hand side:
$$\xi^2\sum_{j=0}^{n-1}\mathbb{E}\left[\left(Z_{t_{j+1}}^c-Z_{t_j}^c\right)^2\mathbf{1}_{B_\alpha^c}\right] \leq \xi^2\,\mathbb{P}(B_\alpha^c)^{1/2}\sum_{j=0}^{n-1}\mathbb{E}\left[\left(Z_{t_{j+1}}^c-Z_{t_j}^c\right)^4\right]^{1/2}.$$
Now, using again that $Z^c$ is a Lévy process with all moments, we get
$$|\mathbb{E}[S_3(t)\mathbf{1}_{B_\alpha^c}]| \leq \xi^2\,\mathbb{P}(B_\alpha^c)^{1/2}\,O(t^2+t^3)^{1/2}.$$
Now, because of the path continuity of $Z^c$, the probability $\mathbb{P}(B_\alpha^c)$ goes to $0$ as the mesh of the partition $\{t_j\}$ goes to $0$. We now turn to the other term. We use the mean value theorem again:
$$|\mathbb{E}[S_3(t)\mathbf{1}_{B_\alpha}]| \leq \frac{|\xi|^3}{2}\,\mathbb{E}\left[\sum_{j=0}^{n-1}\left|Z_{t_{j+1}}^c-Z_{t_j}^c\right|^3\mathbf{1}_{B_\alpha}\right] \leq \frac{\alpha a_1t|\xi|^3}{2}.$$

To get the last inequality, we exploited the fact that on $B_\alpha$, the increments are bounded by $\alpha$. But now, $\alpha$ can be made arbitrarily small, so we conclude that this term must go to zero. In the end, we arrive at the equation
$$\varphi_{Z_t^c}(\xi) - 1 = -\frac{a_1\xi^2}{2}\int_0^t\varphi_{Z_s^c}(\xi)\,ds;$$
solving for $\varphi_{Z_t^c}$ yields the desired result.

We have established the following result, the so-called Lévy-Itô decomposition:

Theorem 3.1.3. Let $(X_t)_{t\geq0}$ be a Lévy process. Let $(N_t(\cdot))_{t\geq0}$ be the jump measure and $\nu(\cdot)$ the Lévy measure. Then, there exist a Brownian motion $(B_t)_{t\geq0}$, independent of $(N_t(\cdot))_{t\geq0}$, and $b\in\mathbb{R}^d$ such that for all $t>0$,
$$X_t = B_t + bt + \int_{\{|x|\leq1\}}x\left(N_t(dx)-t\nu(dx)\right) + \int_{\{|x|>1\}}x\,N_t(dx).$$
Remark 3.1.4. This result is generally remembered as: any Lévy process is the sum of a (drifted) Brownian motion, a compound Poisson process and an $L^2$ martingale, all independent. The continuous part corresponds to a Brownian motion, the large jumps to a compound Poisson process, and the small jumps form an $L^2$ martingale. From the Lévy-Itô decomposition, we readily get the celebrated Lévy-Khintchine formula, which characterises the Fourier transform of any Lévy process in terms of its characteristics.
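The decomposition also suggests a direct recipe for simulating a Lévy process: sample each independent piece and add them. A minimal sketch (NumPy assumed), for a hypothetical toy triplet with $\sigma=0.5$, $b=0.2$ and the finite Lévy measure $\nu=\lambda\,\mathcal{N}(0,1)$ with $\lambda=3$, kept finite on purpose so that every jump can be sampled exactly:

```python
import numpy as np

rng = np.random.default_rng(2)
sigma, b, lam, t = 0.5, 0.2, 3.0, 1.0   # toy characteristics; nu = lam * N(0,1)
n_paths = 200_000

counts = rng.poisson(lam * t, size=n_paths)
jumps = rng.normal(size=counts.sum())
owner = np.repeat(np.arange(n_paths), counts)

# Split the jumps at |x| = 1 as in the theorem.
small = np.bincount(owner, weights=jumps * (np.abs(jumps) <= 1), minlength=n_paths)
big   = np.bincount(owner, weights=jumps * (np.abs(jumps) > 1),  minlength=n_paths)

# The compensator t * ∫_{|x|<=1} x nu(dx) vanishes by symmetry of N(0,1),
# so the compensated small-jump part is just `small` here.
X = sigma * np.sqrt(t) * rng.normal(size=n_paths) + b * t + small + big
```

The sample mean should be close to $bt$ and the sample variance close to $\sigma^2t + \lambda t\,\mathbb{E}[Y^2] = \sigma^2t + \lambda t$.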

Theorem 3.1.5. Let $b\in\mathbb{R}^d$, $Q$ a positive semi-definite quadratic form on $\mathbb{R}^d$, and $\nu$ a measure on $\mathbb{R}^d\setminus\{0\}$ such that $\int_{\mathbb{R}^d}\min(1,|x|^2)\nu(dx)<+\infty$. We define
$$\Psi(\xi) = i\langle\xi,b\rangle - \frac{1}{2}Q(\xi) + \int_{\mathbb{R}^d}\left(e^{i\langle x,\xi\rangle}-1-i\langle x,\xi\rangle\mathbf{1}_{|x|\leq1}\right)\nu(dx).$$
Then, it holds that
$$\mathbb{E}\left[e^{i\langle X_t,\xi\rangle}\right] = \exp\left(t\Psi(\xi)\right).$$

Proof. We start from the Lévy-Itô decomposition. Since $B_t$ is a Brownian motion, there exists a quadratic form $Q$ such that
$$\mathbb{E}[e^{i\langle B_t+bt,\xi\rangle}] = e^{t\left(i\langle\xi,b\rangle-\frac{1}{2}Q(\xi)\right)}.$$
Besides, since the large-jump part $\int_{\{|x|>1\}}x\,N_t(dx)$ forms a compound Poisson process, it holds that
$$\mathbb{E}\left[\exp\left(i\Big\langle\xi,\int_{\{|x|>1\}}x\,N_t(dx)\Big\rangle\right)\right] = \exp\left(t\int_{\{|x|>1\}}\left(e^{i\langle\xi,y\rangle}-1\right)\nu(dy)\right).$$

Finally, from Corollary 2.3.10,
$$\mathbb{E}\left[\exp\left(i\Big\langle\xi,\int_{\{|x|\leq1\}}x\left(N_t(dx)-t\nu(dx)\right)\Big\rangle\right)\right] = \exp\left(t\int_{\{|x|\leq1\}}\left(e^{i\langle\xi,x\rangle}-1-i\langle\xi,x\rangle\right)\nu(dx)\right).$$

Since those three processes are independent, the characteristic function of the sum is the product of the characteristic functions, which gives the expected result.
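For the Poisson process, whose triplet is $(0,0,\lambda\delta_1)$, the Lévy-Khintchine formula can be verified to near machine precision against the Poisson probability mass function; a small deterministic check in plain Python (the values of $\lambda$, $t$ and $\xi$ are arbitrary):

```python
import cmath
import math

lam, t, xi = 2.0, 1.5, 0.8

# LHS: E[e^{i xi N_t}] summed from the Poisson(lam * t) pmf,
# with the pmf terms built by the recurrence p_{k+1} = p_k * (lam*t)/(k+1).
lhs, term = 0.0 + 0.0j, math.exp(-lam * t)
for k in range(100):
    lhs += cmath.exp(1j * xi * k) * term      # e^{i xi k} * P(N_t = k)
    term *= lam * t / (k + 1)

# RHS: exp(t * Psi(xi)) with Psi(xi) = lam * (e^{i xi} - 1).
rhs = cmath.exp(t * lam * (cmath.exp(1j * xi) - 1))
```

Both sides agree up to floating-point error, since the truncated tail of the series is negligible.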

3.2 Consequences of the Lévy-Itô Decomposition

In this section, we exploit the Lévy-Itô decomposition to establish some nice properties of Lévy processes. These results are not listed in any particular order. We do not give proofs of all the stated results, but give specific references to the literature. We start with some results on the trajectories of a Lévy process.

Theorem 3.2.1. A Lévy process is continuous if and only if its Lévy measure $\nu$ is zero.

Proof. This result is immediate given the Lévy-Itô decomposition: if $\nu=0$, the only part left corresponds to a drifted Brownian motion, which has continuous paths; conversely, if $\nu\neq0$, the process has jumps with positive probability.

Conversely, here is what we can say when the Lévy measure is not zero.

Theorem 3.2.2 (Jumping times). Let $(X_t)_{t\geq0}$ be a Lévy process with Lévy measure $\nu$.
- If $\nu(\mathbb{R}^d)=\infty$, then, almost surely, the jumping times are countably dense in $\mathbb{R}_+$;
- If $0<\nu(\mathbb{R}^d)<\infty$, then, almost surely, the jumping times are infinitely many and countable in increasing order, and the first jumping time has exponential distribution with mean $1/\nu(\mathbb{R}^d)$.

Proof. See Theorem 21.3 in Sato [11] p136.

Theorem 3.2.3. Let $(X_t)_{t\geq0}$ be a Lévy process with generating triplet $(A,\nu,b)$. Then,
- If $A=0$ and $\nu(\mathbb{R}^d)<+\infty$, or if $A=0$, $\nu(\mathbb{R}^d)=+\infty$ and $\int_{|x|\leq1}|x|\nu(dx)<+\infty$, then the function $t\mapsto X_t$ has finite variation (a.s.);
- If $A\neq0$, or if $\int_{|x|\leq1}|x|\nu(dx)=\infty$, then the function $t\mapsto X_t$ has infinite variation on $[0,t)$ for any $t>0$.

Proof. See Theorem 2.4.5 p129 in Applebaum [1].

Now we give the asymptotic behaviour of the Lévy exponent.

Proposition 3.2.4. Suppose the dimension is $d=1$, and write $Q(\xi)=q\xi^2$. Recall that $\Psi$ is the characteristic exponent of some Lévy process $(X_t)_{t\geq0}$. Using the notations of Theorem 3.1.5,
1. we have
$$\lim_{|\xi|\to+\infty}\frac{\Psi(\xi)}{\xi^2} = -\frac{q}{2};$$
2. if $X$ has bounded variation with drift coefficient $b_0$,
$$\lim_{|\xi|\to+\infty}\frac{\Psi(\xi)}{\xi} = i\,b_0;$$
3. $\Psi$ is bounded if and only if $(X_t)_{t\geq0}$ is a compound Poisson process.

Proof. See Proposition 2 p 16 in Bertoin [2].

We end this section with two approximation results.

Theorem 3.2.5. Suppose that $(\mu_n)_{n\geq1}$ is a sequence of infinitely divisible distributions with generating triplets $(Q_n,\nu_n,b_n)$. Let $\mu$ be a probability measure on $\mathbb{R}^d$. Then $\mu_n\to\mu$ if and only if $\mu$ is infinitely divisible with generating triplet $(Q,\nu,b)$, with $Q$, $\nu$ and $b$ satisfying the following conditions:

- if $f:\mathbb{R}^d\to\mathbb{R}$ is bounded, continuous, and vanishes on a neighbourhood of $0$, then
$$\lim_{n\to+\infty}\int_{\mathbb{R}^d}f(x)\nu_n(dx) = \int_{\mathbb{R}^d}f(x)\nu(dx);$$

- define the quadratic form $Q_{n,\varepsilon}$ by
$$Q_{n,\varepsilon}(z) = Q_n(z) + \int_{\{|x|\leq\varepsilon\}}\langle x,z\rangle^2\,\nu_n(dx).$$
Then
$$\lim_{\varepsilon\to0}\limsup_{n\to+\infty}\left|Q_{n,\varepsilon}(z)-Q(z)\right| = 0, \quad\text{for all }z\in\mathbb{R}^d;$$

- $b_n\to b$ as $n\to+\infty$.

Proof. See Theorem 8.1 p 41 in Sato [11]. Note that in this reference, the proof is given for the triplet $(A_n,\nu_n,b_n)_c$, meaning that one can change the cut-off function $\mathbf{1}_{\{|x|\leq1\}}$ to any bounded function $c(x)$ such that
$$c(x) = \begin{cases}1+o(|x|) & \text{when }|x|\to0,\\ O(1/|x|) & \text{when }|x|\to+\infty.\end{cases}$$

The above result is understood as "convergence of the characteristics is convergence of the processes". As a corollary, we have the following (see Corollary 8.8 p 45 in Sato [11]).

Corollary 3.2.6. Every infinitely divisible distribution is the limit of a sequence of compound Poisson distributions.

3.3 Exercises

Exercise 1. Suppose that $X$ and $Y$ are two independent Lévy processes with characteristic exponents $\Psi$ and $\Phi$. Show that the process $Z_t = X_t+Y_t$, $t\geq0$, is a Lévy process with characteristic exponent $\Psi+\Phi$.

Exercise 2 (Proof of Proposition 2.3.11). The goal is to establish the independence of
$$J_t^1 = \sum_{0<s\leq t}\Delta X_s\mathbf{1}_{\{\Delta X_s\in A\}} \quad\text{and}\quad J_t^2 = \sum_{0<s\leq t}\Delta X_s\mathbf{1}_{\{\Delta X_s\in B\}}.$$
To that end, for fixed $\xi$ and $\zeta$, define:

$$C_t^\xi = \frac{e^{i\langle\xi,J_t^1\rangle}}{\mathbb{E}\left[e^{i\langle\xi,J_t^1\rangle}\right]} - 1, \qquad D_t^\zeta = \frac{e^{i\langle\zeta,J_t^2\rangle}}{\mathbb{E}\left[e^{i\langle\zeta,J_t^2\rangle}\right]} - 1.$$

1. Show that $C^\xi$ and $D^\zeta$ are martingales.

2. Show that for $\{t_k\}$ a subdivision of $[0,t]$,
$$\mathbb{E}[C_t^\xi D_t^\zeta] = \mathbb{E}\left[\sum_k\left(C_{t_{k+1}}^\xi-C_{t_k}^\xi\right)\left(D_{t_{k+1}}^\zeta-D_{t_k}^\zeta\right)\right].$$

3. Deduce that
$$\mathbb{E}[C_t^\xi D_t^\zeta] = \mathbb{E}\left[\sum_{0<s\leq t}\Delta C_s^\xi\,\Delta D_s^\zeta\right].$$

4. Prove that $\mathbb{E}[C_t^\xi D_t^\zeta] = 0$.
5. Conclude that
$$\mathbb{E}\left[e^{i\langle\xi,J_t^1\rangle}e^{i\langle\zeta,J_t^2\rangle}\right] = \mathbb{E}\left[e^{i\langle\xi,J_t^1\rangle}\right]\mathbb{E}\left[e^{i\langle\zeta,J_t^2\rangle}\right].$$

More exercises can be found among the examples in Sato [11] p. 45. The proof of the stable central limit theorem via Pareto distributions in Meerschaert and Sikorskii [8] also makes a good exercise.

3.4 Discussion

We conclude this chapter by discussing some left-over facts.

Semi-martingale Theory The culmination of this chapter is the Lévy-Khintchine formula, characterising any infinitely divisible distribution. Another very important class of stochastic processes shares a similar characterisation: the class of semi-martingales. The theory of semi-martingales is much more abstract than that of Lévy processes, and we chose not to spend too much time on the subject, but the reader may consult Protter [9] or Jacod-Shiryaev [6]. We must warn the reader, though, that the latter is a very difficult read.

Poisson Random Measures If the reader consults Bertoin [2] or Applebaum [1], they will see that the concept of Poisson random measure is discussed very early on. It does shed more light on the structure of the jumps of a Lévy process, but here, we chose not to introduce yet another concept. However, Poisson random measures are a very interesting topic in their own right. We refer the reader to Kallenberg [7] for a detailed exposition on the subject.

Chapter 4

Lévy processes as Markov Processes

In this chapter, we view Lévy processes as Markov processes. We point out that we anticipated a little, as we already used the Markov property in the previous chapter. We recall that a stochastic process is said to be a Markov process if
$$\mathbb{E}[f(X_t)|\mathcal{F}_s] = \mathbb{E}[f(X_t)|X_s].$$
It is trivial to see that Lévy processes possess the Markov property; we write:

$$\mathbb{E}[f(X_t)|\mathcal{F}_s] = \mathbb{E}[f(X_s+X_t-X_s)|\mathcal{F}_s].$$
Now, $X_t-X_s$ is independent of $\mathcal{F}_s$, and obviously $X_s$ is $\mathcal{F}_s$-measurable, thus:
$$\mathbb{E}[f(X_t)|\mathcal{F}_s] = \int_{\mathbb{R}^d}f(X_s+y)\,p_{t-s}(dy),$$
denoting by $p_u(\cdot)$ the law of $X_u$. Hence, $\mathbb{E}[f(X_t)|\mathcal{F}_s]$ does indeed only depend on $X_s$, whence the Markov property. In this chapter, we highlight the principal results on $(X_t)_{t\geq0}$ related to its Markov property.

4.1 Properties of the Semi-group

The first result we introduce was actually proved in the lines above, and is related to the semigroup of $(X_t)_{t\geq0}$.

Lemma 4.1.1. The semigroup associated with the Lévy process $(X_t)_{t\geq0}$ is time homogeneous. We denote it $(T_t)_{t\geq0}$:
$$\mathbb{E}[f(X_t)|X_s=x] = T_{t-s}f(x).$$
Let us give some definitions.

Definition 4.1.2. A family of measures $(p_t)_{t\geq0}$ is a convolution semigroup if $p_0=\delta_0$ and
$$p_{t+s} = p_t * p_s.$$
Such a semigroup is said to be weakly continuous if for all $f\in C_b(\mathbb{R}^d)$,
$$\lim_{t\to0}\int_{\mathbb{R}^d}f(x)p_t(dx) = f(0) = \int_{\mathbb{R}^d}f(x)\,\delta_0(dx).$$

Obviously, the family of laws $(p_t)_{t\geq0}$ associated to a Lévy process $(X_t)_{t\geq0}$ is a convolution semigroup. We now state a Lemma that is a direct consequence of the definitions above and of the stochastic continuity of $(X_t)_{t\geq0}$.

Lemma 4.1.3. The semigroup (Tt)t≥0 is a weakly continuous convolution semigroup.

Proof. Let $f\in C_b(\mathbb{R}^d)$, not identically zero. From the continuity of $f$, for all $\varepsilon>0$, there exists $\eta>0$ such that
$$\sup_{|x|\leq\eta}|f(x)-f(0)| \leq \varepsilon/2.$$

Now, from the stochastic continuity of $(X_t)_{t\geq0}$, there exists $\eta'>0$ such that
$$0<t\leq\eta' \;\Rightarrow\; \mathbb{P}(|X_t|>\eta) \leq \frac{\varepsilon}{4\sup_{x\in\mathbb{R}^d}|f(x)|}.$$
For such $t$, we then find
$$\left|\int_{\mathbb{R}^d}\left(f(x)-f(0)\right)p_t(dx)\right| \leq \int_{|x|\leq\eta}|f(x)-f(0)|\,p_t(dx) + \int_{|x|>\eta}|f(x)-f(0)|\,p_t(dx) \leq \sup_{|x|\leq\eta}|f(x)-f(0)| + 2\sup_{x\in\mathbb{R}^d}|f(x)|\,\mathbb{P}(|X_t|>\eta) \leq \varepsilon.$$

We can now state an important result on $(T_t)_{t\geq0}$.

Theorem 4.1.4. Every Lévy process is a Feller process.

Proof. Let us recall first that a Feller process is such that
- $T_t: C_0(\mathbb{R}^d)\to C_0(\mathbb{R}^d)$, $C_0(\mathbb{R}^d)$ being the set of all continuous functions that vanish at infinity;
- $\lim_{t\to0}\|T_tf-f\|_\infty = 0$ for all $f\in C_0(\mathbb{R}^d)$.

We know that $(T_t)_{t\geq0}$ is a weakly continuous convolution semigroup such that
$$T_tf(x) = \int_{\mathbb{R}^d}f(x+y)\,p_t(dy),$$
where $p_t$ denotes the law of $X_t$. Let $f\in C_0(\mathbb{R}^d)$; we need to show that $T_tf$ is continuous. Consider $x_n\to x$; we have:
$$\lim_{n\to+\infty}T_tf(x_n) = \lim_{n\to+\infty}\int_{\mathbb{R}^d}f(x_n+y)\,p_t(dy).$$
Now, from the dominated convergence theorem, we have
$$\lim_{n\to+\infty}\int_{\mathbb{R}^d}f(x_n+y)\,p_t(dy) = \int_{\mathbb{R}^d}f(x+y)\,p_t(dy),$$
which proves that $T_tf$ is continuous. We can use the dominated convergence theorem again to prove
$$\lim_{|x|\to+\infty}|T_tf(x)| \leq \lim_{|x|\to+\infty}\int_{\mathbb{R}^d}|f(x+y)|\,p_t(dy) = \int_{\mathbb{R}^d}\lim_{|x|\to+\infty}|f(x+y)|\,p_t(dy) = 0.$$

Consequently, we do have $T_tf\in C_0(\mathbb{R}^d)$. We now prove the second point of the Feller condition. Note that we can assume $f\neq0$. Using the stochastic continuity of $(X_t)_{t\geq0}$, for all $\varepsilon>0$ and any $r>0$, there exists $t_0>0$ such that
$$0<t<t_0 \;\Rightarrow\; \mathbb{P}(|X_t|>r) \leq \frac{\varepsilon}{4\|f\|_\infty}.$$
Using the continuity of $f$, we can find $\delta>0$ such that
$$|y|\leq\delta \;\Rightarrow\; \sup_{x\in\mathbb{R}^d}|f(x+y)-f(x)| \leq \frac{\varepsilon}{2}.$$
Choosing $r=\delta$ now gives

$$\|T_tf-f\|_\infty = \sup_{x\in\mathbb{R}^d}\left|T_tf(x)-f(x)\right| \leq \sup_{x\in\mathbb{R}^d}\int_{|y|\leq\delta}|f(x+y)-f(x)|\,p_t(dy) + \sup_{x\in\mathbb{R}^d}\int_{|y|>\delta}|f(x+y)-f(x)|\,p_t(dy) \leq \frac{\varepsilon}{2}\,\mathbb{P}(|X_t|\leq\delta) + 2\|f\|_\infty\,\mathbb{P}(|X_t|>\delta) \leq \varepsilon.$$

4.2 The Generator

We now turn to the infinitesimal generator of a Lévy process, viewed as a Markov process. In general, for a Markov process $(X_t)_{t\geq0}$, one defines the generator at time $t$ as the limit:
$$L_tf(x) = \lim_{h\to0^+}\frac{\mathbb{E}[f(X_{t+h})|X_t=x]-f(x)}{h}.$$

We recall that, in general, the generator alone is not enough to characterise the distribution of $(X_t)_{t\geq0}$: one also needs to specify the domain of the generator, that is, all the functions for which the limit exists. We assume the reader to be familiar with these concepts, the universal reference on the matter being Ethier-Kurtz [4]. The first thing we can say is that, in the case of Lévy processes, since the semigroup is homogeneous, the generator does not depend on time. We can actually say a lot more, by recalling the Lévy-Khintchine formula. Before stating the main result, we need to recall some definitions from the analysis of pseudo-differential operators. First, let us recall that the Fourier transform of a function $f$ in Schwartz's space $\mathcal{S}$, defined as
$$\mathcal{F}(f)(\xi) = \int_{\mathbb{R}^d}e^{i\langle x,\xi\rangle}f(x)\,dx,$$
has inverse transform given by:
$$\mathcal{F}^{-1}(g)(x) = \frac{1}{(2\pi)^d}\int_{\mathbb{R}^d}e^{-i\langle x,\xi\rangle}g(\xi)\,d\xi.$$
It is then usual that operations such as differentiation or integration can be seen on the Fourier side as "multipliers".

Definition 4.2.1. A pseudo-differential operator $L$ acting on functions $f\in\mathcal{S}$ is defined by its symbol $\ell$, in the following way:
$$L(f)(x) = \frac{1}{(2\pi)^d}\int_{\mathbb{R}^d}e^{-i\langle x,\xi\rangle}\ell(\xi)\mathcal{F}(f)(\xi)\,d\xi.$$
With this definition in mind, we have the following result.

Theorem 4.2.2. Let $(X_t)_{t\geq0}$ be a Lévy process with characteristic exponent $\Psi(\xi)$, that is $\mathbb{E}[e^{i\langle X_t,\xi\rangle}] = \exp\left(t\Psi(\xi)\right)$, with
$$\Psi(\xi) = i\langle b,\xi\rangle - \frac{1}{2}\langle\xi,A\xi\rangle + \int_{\mathbb{R}^d\setminus\{0\}}\left(e^{i\langle\xi,y\rangle}-1-i\langle\xi,y\rangle\mathbf{1}_{\{|y|\leq1\}}\right)\nu(dy).$$

Let $(T_t)_{t\geq0}$ denote the semigroup and $L$ the infinitesimal generator; we have:
1. For each $t\geq0$, $f\in\mathcal{S}$, $x\in\mathbb{R}^d$,
$$T_tf(x) = \frac{1}{(2\pi)^d}\int_{\mathbb{R}^d}e^{-i\langle\xi,x\rangle}e^{t\Psi(\xi)}\mathcal{F}(f)(\xi)\,d\xi.$$
2. For each $f\in\mathcal{S}$, $x\in\mathbb{R}^d$,
$$Lf(x) = \frac{1}{(2\pi)^d}\int_{\mathbb{R}^d}e^{-i\langle\xi,x\rangle}\Psi(\xi)\mathcal{F}(f)(\xi)\,d\xi,$$
so that $L$ is a pseudo-differential operator with symbol $\Psi$.
3. For each $f\in\mathcal{S}$, $x\in\mathbb{R}^d$,
$$Lf(x) = \langle b,\nabla f(x)\rangle + \frac{1}{2}\mathrm{Tr}\left(A\,D^2f(x)\right) + \int_{\mathbb{R}^d\setminus\{0\}}\left(f(x+z)-f(x)-\langle\nabla f(x),z\rangle\mathbf{1}_{\{|z|\leq1\}}\right)\nu(dz).$$

Proof. We skip this proof, since it works exactly as one would expect: differentiate under the integral. The difficulty comes from checking that doing so is actually allowed. We refer to Theorem 3.3.3 in Applebaum [1] for a complete proof.

Examples of Generators for some Lévy processes The main take-away from the previous Theorem is that the generator of a Lévy process is read through its characteristics. In this paragraph, we list a few Lévy processes with their corresponding characteristics and the corresponding form of the generator.

Example 4.2.3 (The drifted Brownian motion). The most well-known example. A Brownian motion with drift has characteristics $(A,b,0)$, where $A=(a_{i,j})_{1\leq i,j\leq d}$ is the covariance matrix. Then, it is well known that the generator is
$$Lf(x) = \langle b,\nabla f(x)\rangle + \frac{1}{2}\mathrm{Tr}\left(A\,D^2f(x)\right) = \sum_{j=1}^d b_j\frac{\partial}{\partial x_j}f(x) + \frac{1}{2}\sum_{j,k=1}^d a_{j,k}\frac{\partial^2}{\partial x_j\partial x_k}f(x).$$

Example 4.2.4 (The Poisson process). A Poisson process with parameter $\lambda$ has characteristics $(0,0,\lambda\delta_1)$. Thus, the corresponding generator is a difference operator:
$$Lf(x) = \lambda\left(f(x+1)-f(x)\right).$$
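This difference-operator form can be checked directly from the definition of the generator, since $T_hf(x)=\sum_{k\geq0}f(x+k)\,e^{-\lambda h}(\lambda h)^k/k!$ is an explicit sum. A minimal sketch in plain Python, with the arbitrary test function $f=\sin$:

```python
import math

lam, x, h = 2.0, 0.3, 1e-6
f = math.sin

# T_h f(x) = sum_k f(x + k) * P(Poisson(lam * h) = k); for tiny h a few
# terms of the pmf (built by recurrence) are more than enough.
Th, term = 0.0, math.exp(-lam * h)
for k in range(4):
    Th += f(x + k) * term
    term *= lam * h / (k + 1)

finite_diff = (Th - f(x)) / h                 # (T_h f(x) - f(x)) / h
generator = lam * (f(x + 1.0) - f(x))         # lambda * (f(x+1) - f(x))
```

The finite difference agrees with the difference operator up to an $O(h)$ discretisation error.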

Example 4.2.5 (The compound Poisson process). Consider the process
$$X_t = \sum_{i=1}^{N_t}Y_i,$$
where $(N_t)_{t\geq0}$ is a Poisson process with parameter $\lambda$ and the $Y_i$ are i.i.d. with common distribution $\mu$. We determined in a previous chapter that the characteristics of a compound Poisson process are $(0,0,\lambda\mu)$. Therefore, the generator of $(X_t)_{t\geq0}$ is
$$Lf(x) = \int_{\mathbb{R}^d}\left(f(x+z)-f(x)\right)\lambda\mu(dz).$$

Example 4.2.6 (The case of Stable operators). Consider a rotationally invariant stable process $(X_t)_{t\geq0}$. Its Lévy exponent is given by
$$\Psi(\xi) = -|\xi|^\alpha = -\left(\xi_1^2+\cdots+\xi_d^2\right)^{\alpha/2}.$$
Let's pretend for a moment that the usual rule for Fourier multipliers holds true here, that is: replace $\xi_j$ with $-i\partial_{x_j}$. Then, we would get for a generator:
$$-\left(-\partial_{x_1}^2-\cdots-\partial_{x_d}^2\right)^{\alpha/2} = -(-\Delta)^{\alpha/2}.$$

This is often referred to in the literature as the fractional Laplacian. We will discuss at more length the relation between fractional derivatives and stable processes in a subsequent paragraph, but for a very nice and thorough presentation, the reader can consult Meerschaert and Sikorskii [8].
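The Fourier-multiplier picture is easy to test on a periodic grid, where the FFT diagonalises the operator mode by mode: applying $-(-\Delta)^{\alpha/2}$ to $\cos(3x)$ must return $-3^\alpha\cos(3x)$. A minimal sketch with NumPy (grid size and the value of $\alpha$ are arbitrary choices):

```python
import numpy as np

alpha, N = 1.5, 256
x = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
f = np.cos(3 * x)

# Integer angular frequencies on [0, 2*pi), matching the FFT ordering.
k = 2 * np.pi * np.fft.fftfreq(N, d=2 * np.pi / N)
multiplier = -np.abs(k) ** alpha              # symbol of -(-Delta)^{alpha/2}
Lf = np.real(np.fft.ifft(multiplier * np.fft.fft(f)))

expected = -(3.0 ** alpha) * np.cos(3 * x)
```

Because $\cos(3x)$ is a single Fourier mode, the agreement is exact up to floating-point error.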

Example 4.2.7 (Relativistic stable operators). From Section 1.4, recall the relativistic stable distribution as having characteristic exponent:
$$E(\xi) = \sqrt{m^2c^4+c^2|\xi|^2} - mc^2.$$
Again, with the correspondence $\xi_j\to-i\partial_{x_j}$, we get a representation that is familiar to physicists (or so I am told):
$$L = -\left(\sqrt{m^2c^4-c^2\Delta}-mc^2\right).$$

This operator is related to the quantisation of free energy, but for more details on that, the reader should probably ask a physicist... CHAPTER 4. LÉVY PROCESSES AS MARKOV PROCESSES 28

4.3 Recurrence and Transience

Any good chapter on Markov processes would not be complete without a discussion of recurrence and transience. In this paragraph, we give two criteria linking this property to integrability properties of the Lévy-Khintchine exponent.

Definition 4.3.1. A Lévy process $(X_t)_{t\geq0}$ is said to be
- recurrent at the origin if $\liminf_{t\to+\infty}|X_t| = 0$ (a.s.);
- transient at the origin if $\liminf_{t\to+\infty}|X_t| = +\infty$ (a.s.).

Naturally, the dichotomy recurrent/transient still holds for Lévy processes (see e.g. Theorem 35.4 p239 in Sato [11]). We give two criteria.

Theorem 4.3.2. Fix $a>0$. Let $(X_t)_{t\geq0}$ be a Lévy process with characteristic exponent $\Psi$. Then, the following are equivalent:
1. $(X_t)_{t\geq0}$ is recurrent;
2. $\lim_{q\to0}\int_{\{|\xi|\leq a\}}\Re\left(\frac{1}{q-\Psi(\xi)}\right)d\xi = +\infty$, where $\Re(z)$ is the real part of the complex number $z$;
3. $\limsup_{q\to0}\int_{\{|\xi|\leq a\}}\Re\left(\frac{1}{q-\Psi(\xi)}\right)d\xi = +\infty$.

Theorem 4.3.3. Let $(X_t)_{t\geq0}$ be a Lévy process with characteristic exponent $\Psi$. Then $(X_t)_{t\geq0}$ is recurrent if and only if
$$\int_{\{|\xi|\leq a\}}\Re\left(\frac{1}{-\Psi(\xi)}\right)d\xi = +\infty, \quad\forall a>0.$$
It is remarkable that recurrence and transience can be determined only by looking at the exponent. A direct application of the previous result gives the recurrence of Brownian motion for $d=1,2$, which was already known through other means.

Exercise 3. Use Theorem 4.3.3 to show that:
1. for $d=1$, an $\alpha$-stable process is recurrent if $1\leq\alpha\leq2$ and transient if $0<\alpha<1$;
2. for $d=2$, all strictly $\alpha$-stable processes with $0<\alpha<2$ are transient;
3. for $d\geq3$, all Lévy processes are transient.
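For a symmetric $\alpha$-stable process in dimension 1 we have $-\Psi(\xi)=|\xi|^\alpha$, so the criterion of Theorem 4.3.3 reduces to the divergence of $\int_{|\xi|\leq a}|\xi|^{-\alpha}\,d\xi$ at the origin, which can be evaluated in closed form. A small sketch illustrating item 1 of Exercise 3 (plain Python; the cutoff `eps` is a hypothetical regularisation standing in for the improper integral):

```python
def stable_criterion(alpha, a=1.0, eps=1e-12):
    """∫ over eps <= |xi| <= a of Re(1/(-Psi(xi))) = |xi|^{-alpha}, in closed form.

    Valid for alpha != 1 (alpha = 1 needs the log antiderivative, and also diverges).
    """
    return 2 * (a ** (1 - alpha) - eps ** (1 - alpha)) / (1 - alpha)

transient = stable_criterion(0.5)    # alpha < 1: stays bounded as eps -> 0
recurrent = stable_criterion(1.5)    # alpha >= 1: blows up as eps -> 0
```

Shrinking `eps` further leaves the $\alpha=0.5$ value essentially unchanged while the $\alpha=1.5$ value grows without bound, in line with the transient/recurrent dichotomy.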

Invariant measure

We conclude this section by giving a result on invariant distributions for some Lévy processes. Let $(Z_t)_{t\geq0}$ be a Lévy process with characteristics $(Q,b,\nu)$. We denote by $\Psi$ its characteristic exponent. For $c\in\mathbb{R}$, we introduce the Ornstein-Uhlenbeck process directed by $Z$ as the solution of the SDE:
$$X_t = x + Z_t - c\int_0^t X_s\,ds, \quad t\geq0.$$
We say that $\mu$ is an invariant measure for a semigroup $(P_t)_{t\geq0}$ if:
$$P_t(x,\cdot) \xrightarrow[t\to+\infty]{} \mu.$$

Theorem 4.3.4. Fix $c>0$, and suppose the Lévy measure $\nu$ satisfies
$$\int_{|x|>2}\log|x|\,\nu(dx) < +\infty.$$
Then $X$ has limiting distribution $\mu$, with
$$\hat{\mu}(z) = \exp\left(\int_0^{+\infty}\Psi(e^{-cs}z)\,ds\right).$$
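For the Brownian case $Z=B$, so $\Psi(z)=-z^2/2$, the formula gives $\hat\mu(z)=\exp(-z^2/(4c))$: the limiting law is $\mathcal{N}(0,1/(2c))$. A quick Monte Carlo sketch (NumPy assumed) using an Euler scheme for the Ornstein-Uhlenbeck SDE; the step size and parameters are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(3)
c, T, n_steps, x0, n_paths = 2.0, 10.0, 2000, 5.0, 50_000
dt = T / n_steps

# Euler scheme for X_t = x0 + B_t - c * ∫_0^t X_s ds  (Z = Brownian motion).
X = np.full(n_paths, x0)
for _ in range(n_steps):
    X = X - c * X * dt + rng.normal(0.0, np.sqrt(dt), size=n_paths)

limit_var = 1.0 / (2 * c)    # variance of the predicted invariant law N(0, 1/(2c))
```

By time $T=10$ the initial condition has been forgotten and the empirical variance is close to $1/(2c)$, up to the $O(\Delta t)$ Euler bias.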

4.4 Fractional Derivatives

In this section, we investigate more closely the relation between stable processes and fractional derivatives. We consider only dimension 1. Fractional derivatives are natural extensions of ordinary derivatives that arise in all sorts of applications, ranging from physics to finance and hydrology. Their versatility comes from a form of universality of the stable distribution, stemming from generalisations of the central limit theorem. First, as we discussed in Example 4.2.6, we need to give meaning to the operator $-\left(-\frac{d^2}{dx^2}\right)^{\alpha/2}$. In general, we will define the fractional derivative $\frac{d^\alpha}{dx^\alpha}$ through its Fourier expression, and link this expression to the generator of a stable process.

Proposition 4.4.1. If $f$ and its derivatives up to some integer order $n>1+\alpha$ exist and are absolutely integrable, then the fractional derivative
$$\frac{d^\alpha}{dx^\alpha}f(x) = \frac{1}{2\pi}\int_{\mathbb{R}}e^{-ix\xi}(-i\xi)^\alpha\mathcal{F}(f)(\xi)\,d\xi$$
exists and its Fourier transform is $(-i\xi)^\alpha\mathcal{F}(f)(\xi)$.

Proof. This is an application of results from analysis; the proof is left as an exercise. The reader can consult Proposition 2.5 p 29 in Meerschaert and Sikorskii [8] for guidance. We point out that the sign convention is different in that reference: we use a convention more in line with probability theory, hence the negatives.

Consider now a one-sided $\alpha$-stable process $(X_t)_{t\geq0}$ with $\alpha<1$. We saw earlier that the Lévy measure of such a process is $\frac{dx}{x^{1+\alpha}}\mathbf{1}_{\{x>0\}}$, but note that in this case,
$$\int_{0<x<1}x\,\frac{dx}{x^{1+\alpha}} = \left[\frac{x^{1-\alpha}}{1-\alpha}\right]_0^1 = \frac{1}{1-\alpha} < +\infty,$$
since $\alpha<1$. We say that in the case $\alpha<1$, there is no need for compensation. Then, the generator of $(X_t)_{t\geq0}$ writes:
$$Lf(x) = \int_0^{+\infty}\left(f(x+z)-f(x)\right)\frac{dz}{z^{\alpha+1}}.$$
We can now integrate by parts to get:
$$Lf(x) = \frac{1}{\alpha}\int_0^{+\infty}f'(x+z)\frac{dz}{z^\alpha} = \frac{1}{\alpha}\frac{d}{dx}\int_0^{+\infty}f(x+z)\frac{dz}{z^\alpha}.$$
This form is, up to sign and reflection conventions, the Riemann-Liouville fractional derivative.

Remark 4.4.2. Again, the sign convention differs across the literature. We have to point out that the preferred definition for the Riemann-Liouville fractional derivative is
- If $0<\alpha<1$,
$$\frac{d^\alpha}{dx^\alpha}f(x) = \frac{d}{dx}\int_0^{+\infty}f(x-z)\frac{z^{-\alpha}}{\Gamma(1-\alpha)}\,dz.$$
- If $1<\alpha<2$,
$$\frac{d^\alpha}{dx^\alpha}f(x) = \frac{d^2}{dx^2}\int_0^{+\infty}f(x-z)\frac{z^{1-\alpha}}{\Gamma(2-\alpha)}\,dz.$$

Note that the choice of constant in front of the Lévy measure is there to ensure that the Fourier expression simplifies nicely. We have yet to say why those objects agree. To do so, we have the following result.

Proposition 4.4.3. When $0<\alpha<1$, it holds that
$$\int_0^{+\infty}\left(e^{i\xi x}-1\right)\frac{dx}{x^{1+\alpha}} = -\frac{\Gamma(1-\alpha)}{\alpha}\,(-i\xi)^\alpha.$$
Proof. We establish that, in general, for $\xi\geq0$ and $0<\alpha<1$,
$$\int_0^{+\infty}\left(1-e^{-\xi x}\right)\frac{dx}{x^{1+\alpha}} = \frac{\Gamma(1-\alpha)}{\alpha}\,\xi^\alpha.$$
First, notice that
$$1-e^{-\xi x} = \int_0^x\xi e^{-\xi y}\,dy.$$
Plugging this identity into the integral and exchanging the order of integration:
$$\begin{aligned}
\int_0^{+\infty}\left(1-e^{-\xi x}\right)\frac{dx}{x^{1+\alpha}} &= \int_0^{+\infty}\left(\int_0^x\xi e^{-\xi y}\,dy\right)\frac{dx}{x^{1+\alpha}} = \int_0^{+\infty}\xi e^{-\xi y}\left(\int_y^{+\infty}x^{-1-\alpha}\,dx\right)dy \\
&= \frac{\xi}{\alpha}\int_0^{+\infty}e^{-\xi y}y^{-\alpha}\,dy = \frac{\xi^\alpha}{\alpha}\int_0^{+\infty}e^{-x}x^{-\alpha}\,dx = \frac{\xi^\alpha}{\alpha}\,\Gamma(1-\alpha).
\end{aligned}$$
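The Laplace-transform identity in the proof is easy to confirm by quadrature; a minimal check (NumPy assumed) using a trapezoidal rule on a log-spaced grid over $[10^{-8},100]$, plus the exact tail $\int_{100}^{\infty}x^{-1-\alpha}\,dx = 100^{-\alpha}/\alpha$ on which $e^{-\xi x}$ is negligible:

```python
import math
import numpy as np

alpha, xi = 0.5, 1.0

# Trapezoidal rule for ∫_0^inf (1 - e^{-xi x}) x^{-1-alpha} dx on a log grid.
xs = np.logspace(-8, 2, 20_001)
ys = (1.0 - np.exp(-xi * xs)) * xs ** (-1.0 - alpha)
quad = np.sum(0.5 * (ys[1:] + ys[:-1]) * np.diff(xs))
quad += 100.0 ** (-alpha) / alpha          # tail, where 1 - e^{-xi x} ~ 1

closed_form = math.gamma(1.0 - alpha) * xi ** alpha / alpha
```

The quadrature reproduces $\Gamma(1-\alpha)\xi^\alpha/\alpha$ up to the small mass lost below $10^{-8}$.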

We leave an alternate proof as an exercise to the reader; again, the reader may consult Meerschaert and Sikorskii [8] for guidance (see Proposition 3.10 p 56).

Exercise 4. The goal is to compute the integral
$$I(\alpha) = \int_0^{+\infty}\left(e^{i\xi x}-1\right)\frac{dx}{x^{1+\alpha}}.$$
To that end, we define
$$I(s,\alpha) = \int_0^{+\infty}\left(e^{i\xi x-sx}-1\right)\frac{dx}{x^{1+\alpha}}.$$
1. Using an integration by parts, show that
$$I(s,\alpha) = \frac{i\xi-s}{\alpha}\int_0^{+\infty}e^{i\xi x-sx}x^{-\alpha}\,dx.$$

2. Show that, for $a,b>0$, the characteristic function of the $\Gamma(a,b)$ distribution is:
$$\int_0^{+\infty}e^{i\xi x}\,\frac{b^a}{\Gamma(a)}\,x^{a-1}e^{-bx}\,dx = \left(1-\frac{i\xi}{b}\right)^{-a}.$$

3. Deduce that
$$I(s,\alpha) = \frac{i\xi-s}{\alpha}\cdot\frac{\Gamma(1-\alpha)}{s^{1-\alpha}}\left(1-\frac{i\xi}{s}\right)^{-(1-\alpha)} = -\frac{\Gamma(1-\alpha)}{\alpha}\,(s-i\xi)^\alpha.$$
4. Use the dominated convergence theorem to reach the desired result.

Thus, this shows that the generator $L$ has symbol $-\frac{\Gamma(1-\alpha)}{\alpha}(-i\xi)^\alpha$, i.e. proportional to $(-i\xi)^\alpha$; the proportionality constant is absorbed in the normalisation of the Lévy measure.

Chapter 5

Elements of Stochastic Calculus with Jumps

We start this chapter by saying that stochastic calculus can be built from general semi-martingales, a class of stochastic processes encompassing Lévy processes. However, the construction is rather hard and theoretical, relying on tools ranging from analysis to algebra. In the case of Lévy processes, exploiting the Lévy-Itô decomposition, we see that we only need to give meaning to stochastic integrals with respect to the three "building blocks": Brownian motion, compensated Poisson process and compound Poisson process. At the end of this chapter, we will have given meaning to a stochastic differential equation of the form:
$$Y_t = y_0 + \int_0^t b(s,Y_s)\,ds + \int_0^t\sigma(s,Y_s)\,dB_s + \int_{[0,t]\times\{|x|<1\}}F(Y_s,x)\left(N(ds,dx)-\nu(dx)ds\right) + \int_{[0,t]\times\{|x|>1\}}G(Y_s,x)\,N(ds,dx).$$
This chapter is largely inspired by chapters 4 and 6 in Applebaum [1]. For a construction of stochastic calculus for semi-martingales, one can consult [9]. This chapter assumes that the reader is familiar with stochastic integration and differential equations driven by Brownian motion.

5.1 Example of Use in Applications

We start by exploring a rather popular type of equation arising in applications: jump-diffusions. It is well known that diffusion equations, that is, stochastic differential equations of the form
\[
dX_t = b(t, X_t)\,dt + \sigma(t, X_t)\,dB_t, \qquad X_0 = x_0,
\]
can be used to model various phenomena. In finance for instance, they can be used to model stock prices. However, as the many financial crashes have demonstrated, the behaviour of a stock price is often far from that of such an SDE. If one wanted to include the possibility of large losses in the evolution of a stock price, one option would be the following algorithm:

1. Start at x0.

2. Let T1 be an exponential random variable.

• if T1 > T, then run the diffusion dX_t = b(t, X_t)dt + σ(t, X_t)dB_t up to the horizon T;

• else, run the diffusion up to time T1, simulate a loss J_{T1}, and restart the process at X_{T1−} − J_{T1}.

3. Define T2 as another exponential random variable and repeat step 2.

This algorithm actually amounts to solving the SDE:
\[
dX_t = b(t, X_t)\,dt + \sigma(t, X_t)\,dB_t - dJ_t, \qquad X_0 = x_0,
\]
where (J_t)_{t≥0} is a compound Poisson process. Of course, one can easily imagine more complex versions of this equation. For instance, (J_t)_{t≥0} could depend on (X_t)_{t≥0}, or small jumps could be added. The advantage of keeping the model as it is comes from the fact that one does not need to bother with constructing integrals with respect to general Lévy processes.
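The algorithm above can be sketched numerically. The following Python snippet is a minimal illustration, not taken from the notes: it combines an Euler scheme for the diffusion part with an exponential clock for the losses. All coefficient choices (drift 0.05x, volatility 0.2x, jump rate 2, exponential losses) are arbitrary.

```python
import numpy as np

def simulate_jump_diffusion(x0, b, sigma, jump_rate, jump_law, T, n, rng):
    """Euler scheme for dX = b dt + sigma dB - dJ, with J a compound Poisson process.
    Between jumps the path follows the diffusion; at each exponential time a loss
    is subtracted (losses falling inside a step are applied at the end of the step)."""
    dt = T / n
    t_grid = np.linspace(0.0, T, n + 1)
    x = np.empty(n + 1)
    x[0] = x0
    next_jump = rng.exponential(1.0 / jump_rate)  # T1 ~ Exp(jump_rate)
    for i in range(n):
        t = t_grid[i]
        x[i + 1] = x[i] + b(t, x[i]) * dt + sigma(t, x[i]) * np.sqrt(dt) * rng.normal()
        while next_jump <= t_grid[i + 1]:         # apply the losses falling in this step
            x[i + 1] -= jump_law(rng)
            next_jump += rng.exponential(1.0 / jump_rate)
    return t_grid, x

rng = np.random.default_rng(1)
t, x = simulate_jump_diffusion(
    x0=100.0,
    b=lambda t, x: 0.05 * x,                     # drift coefficient
    sigma=lambda t, x: 0.2 * x,                  # diffusion coefficient
    jump_rate=2.0,                               # intensity of the exponential clock
    jump_law=lambda rng: rng.exponential(5.0),   # size of each loss
    T=1.0, n=1000, rng=rng)
print(x[-1])
```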

5.2 Stochastic Integration

As we assumed the reader to be already familiar with Brownian-motion-related stochastic analysis, we will not expand on that front. Also, as explained above, as a consequence of the Lévy-Itô decomposition, the large-jump part forms a compound Poisson process and has finite variation. Thus, on that front as well, classical Stieltjes integration still applies, and we formally know how to define the part
\[
\int_{[0,t]\times\{|x|>1\}} G(s, x)\,N(ds,dx).
\]

Observe that the Poisson process N_t(·) extends to a measure on R_+ × R^d, in the same way as any finite variation process does. Besides, in order for the above expression to be usable from a probabilistic point of view, we need to assume as usual that s ↦ G(s, x) is predictable. Really, the only unknown is how to define stochastic integration with respect to the small-jump part, which forms an L² martingale. To do so, we need to explain how to construct integrals with respect to martingale-valued measures. In this section, we only give the outline of the construction of stochastic integration with respect to martingale-valued measures. The reader can consult [1] for more details. Also, we will denote by (N_t(·))_{t≥0} the Poisson random measure associated to a Lévy process (X_t)_{t≥0}, and by (F_s)_{s≥0} the filtration associated with (X_t)_{t≥0}.

Compensated Poisson Measure

Recall that for every Borel set A such that 0 ∉ Ā, N_t(A) has a Poisson distribution with parameter tν(A). We thus consider the martingale:
\[
\tilde N_t(A) = N_t(A) - t\nu(A).
\]
We extend this measure to R_+ × R^d by prescribing

\[
\tilde N([t,s] \times A) = \tilde N_s(A) - \tilde N_t(A).
\]

Then, Ñ(ds, dx) satisfies the following properties:

1. Ñ({0} × A) = 0 almost surely,

2. Ñ([t,s] × A) is independent of F_t,

3. E(Ñ([t,s] × A)²) = (s − t)ν(A).

Again, the idea is to define such objects for general sets A by taking a limit in the L² sense. But before stating the result, we need to discuss integrands.

Predictable Processes Just like in the Gaussian case, the integrand needs to be predictable. Fix a Borel set E and 0 < T < +∞. Let P denote the smallest σ-algebra with respect to which all mappings F : [0,T] × E × Ω → R satisfying 1 and 2 below are measurable:

1. for each 0 ≤ t ≤ T, the mapping (x, ω) ↦ F(t, x, ω) is B(E) ⊗ F_t measurable,

2. for each x ∈ E and ω ∈ Ω, the mapping t ↦ F(t, x, ω) is left continuous.

We call P the predictable σ-algebra, and a P-measurable mapping is said to be predictable. We now define a subclass of predictable mappings:

\[
\mathcal H_2(T, E) = \left\{ F \text{ predictable}\;;\; \int_0^T\!\!\int_E \mathbb E\big[|F(t,x)|^2\big]\,\nu(dx)\,dt < +\infty \right\}.
\]

Naturally, we can define an inner product that agrees with the definition above:

\[
\langle F, G\rangle_{\mathcal H_2(T,E)} = \int_0^T\!\!\int_E \mathbb E\big[F(t,x)G(t,x)\big]\,\nu(dx)\,dt.
\]

Then, with no surprise, we have:

Theorem 5.2.1. H_2(T, E) is a Hilbert space. Moreover, the space of simple functions, that is, functions of the form
\[
F = \sum_{j=1}^m \sum_{k=1}^n F_k(t_j)\,\mathbf 1_{[t_j, t_{j+1}]}\,\mathbf 1_{A_k},
\]
where (t_j) is a partition of [0, T] and the A_k are disjoint Borel sets with ν(A_k) < +∞, is dense in H_2(T, E).

For the proof of this result, see Lemma 4.1.13 p. 217 and Lemma 4.1.4 p. 218 in Applebaum [1]. Thus, we only need to define the integral for simple functions, and we get the general construction by taking the limit.

5.3 Construction of the Stochastic Integral

Note that a simple function of the form
\[
F = \sum_{j=1}^m \sum_{k=1}^n F_k(t_j)\,\mathbf 1_{[t_j, t_{j+1}]}\,\mathbf 1_{A_k}
\]
is basically a linear combination of products of two indicator functions, with weights F_k(t_j). Thus, we define the stochastic integral by setting
\[
I_T(F) = \sum_{j=1}^m \sum_{k=1}^n F_k(t_j)\,\tilde N\big([t_j, t_{j+1}] \times A_k\big) = \int_0^T\!\!\int_E F(t,x)\,\tilde N(dt,dx),
\]
that is, the same linear combination, but where the product of indicators is measured with our martingale-valued measure Ñ(ds, dx). This construction is linear and enjoys the following properties:

\[
1.\;\; \mathbb E\big[I_T(F)\big] = 0, \qquad\qquad 2.\;\; \mathbb E\big[I_T(F)^2\big] = \int_0^T\!\!\int_E \mathbb E\big[F(t,x)^2\big]\,\nu(dx)\,dt.
\]
Thus, I_T(·) is a linear isometry from the space of simple functions to L²(Ω), and by density, one can extend the construction to the whole space H_2(T, E).

Remark 5.3.1 (Itô’s Isometry). It thus holds for any F ∈ H_2(T, E) that
\[
\mathbb E\left[\left(\int_0^T\!\!\int_E F(t,x)\,\tilde N(dt,dx)\right)^2\right] = \int_0^T\!\!\int_E \mathbb E\big[F(t,x)^2\big]\,\nu(dx)\,dt.
\]

This identity is called Itô’s Isometry, and is the key to the definition of the stochastic integral.
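The isometry can be illustrated with a small Monte Carlo experiment — a sketch, not part of the notes, assuming the toy Lévy measure ν(dx) = λ 1_{[1,2]}(x) dx and the deterministic (hence predictable) integrand F(t, x) = tx:

```python
import numpy as np

rng = np.random.default_rng(0)
T, lam, n = 1.0, 3.0, 200_000            # horizon, mass of ν on E = [1,2], number of samples

K = rng.poisson(lam * T, n)              # number of points of N on [0,T] x [1,2], per sample
t = rng.uniform(0.0, T, K.sum())         # point locations in time...
x = rng.uniform(1.0, 2.0, K.sum())       # ...and in mark space
owner = np.repeat(np.arange(n), K)       # which sample each point belongs to
sums = np.bincount(owner, weights=t * x, minlength=n)   # ∫∫ F dN, per sample

compensator = lam * (T**2 / 2) * 1.5     # ∫_0^T ∫_E t x dt ν(dx), with ∫_1^2 x dx = 3/2
I = sums - compensator                   # the compensated integral ∫∫ F dÑ

theory = lam * (T**3 / 3) * (7.0 / 3.0)  # ∫_0^T ∫_E (t x)^2 dt ν(dx), with ∫_1^2 x^2 dx = 7/3
print(I.mean(), I.var(), theory)         # empirical mean ≈ 0, empirical variance ≈ theory
```

The empirical mean should be near 0 and the empirical variance near the theoretical value, in line with the two properties of I_T(·) above.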

Lévy-type Stochastic Integrals We are now able to give meaning to
\[
Y_t = y_0 + \int_0^t b_s\,ds + \int_0^t \sigma_s\,dB_s
+ \int_{[0,t]\times\{|x|<1\}} F(s,x)\left(N(ds,dx) - \nu(dx)ds\right)
+ \int_{[0,t]\times\{|x|>1\}} G(s,x)\,N(ds,dx),
\]
when b, σ, F and G are predictable. Note that the literature often employs the shorthand differential notation

\[
dY_t = b_t\,dt + \sigma_t\,dB_t + F(t,x)\,\tilde N(dt,dx) + G(t,x)\,N(dt,dx).
\]

Assume now that (X_t)_{t≥0} has Lévy-Itô decomposition:
\[
X_t = bt + B^a_t + \int_{|x|<1} x\,\tilde N_t(dx) + \int_{|x|>1} x\,N_t(dx),
\]
where B^a_t has covariance a = σ². Consider also a predictable mapping L : R_+ × Ω → R. Then, in the definition of (Y_t)_{t≥0} above, if we specify b_t = bL_t, σ_t = σL_t, F(t,x) = xL_t and G(t,x) = xL_t, then (Y_t)_{t≥0} can be written as dY_t = L_t dX_t. We call Y a Lévy stochastic integral. Naturally, the next step will be to explore stochastic differential equations. However, before that, we introduce the Itô formula with jumps, which makes use of such stochastic integrals.

5.4 Quadratic Variation and Itô Formula with jumps

The goal of this section is to be able to use the Itô Formula when the process we consider has jumps. Before we can state this formula however, we must introduce the concept of quadratic variation.

Quadratic Variation We start by giving a definition-theorem:

Definition 5.4.1. The quadratic variation of a process (X_t)_{t≥0}, denoted ([X,X]_t)_{t≥0}, is the càdlàg process satisfying the following properties:

1. [X,X]_0 = X_0² and ∆[X,X]_t = (∆X_t)²,

2. if 0 = t_0 < t_1 < ⋯ < t_n = t is a partition of [0, t] whose mesh goes to zero, then
\[
X_0^2 + \sum_{t_i} \left(X_{t_{i+1}} - X_{t_i}\right)^2 \;\xrightarrow[\;|t_{i+1}-t_i|\to 0\;]{}\; [X,X]_t,
\]
where the convergence is in probability.

First, we should warn the reader that a similar definition exists in the continuous case, where the bracket is denoted ⟨X,X⟩_t instead. Actually, this angled bracket is defined to be the compensator of the square bracket ([X,X]_t)_{t≥0}: the unique predictable finite variation process making ([X,X]_t − ⟨X,X⟩_t)_{t≥0} a local martingale. In the case where (X_t)_{t≥0} is continuous, those two processes coincide. See Protter [9], p. 124, for a discussion on the subject. Just as in the continuous case, the Kunita-Watanabe inequality holds.

Theorem 5.4.2. Let (X_t)_{t≥0} and (Y_t)_{t≥0} be two semi-martingales, and (H_t)_{t≥0} and (K_t)_{t≥0} two measurable processes. Then,
\[
\int_0^{+\infty} |H_s|\,|K_s|\,\big|d[X,Y]_s\big| \;\le\; \sqrt{\int_0^{+\infty} H_s^2\,d[X,X]_s}\;\sqrt{\int_0^{+\infty} K_s^2\,d[Y,Y]_s}.
\]

See Theorem 25 p 69 in Protter [9] for a proof. We can now state the Itô formula when the process (Xt)t≥0 is a semi-martingale. Again, we refer to Protter, Theorem 32 p78 for a proof.

Itô’s Formula with jumps At last, we can state the main result of this section.

Theorem 5.4.3. Let (X_t)_{t≥0} be a semi-martingale and let F : R → R be a twice differentiable function. We have:
\[
F(X_t) - F(X_0) = \int_{0+}^t F'(X_{s-})\,dX_s + \frac12\int_{0+}^t F''(X_{s-})\,d[X,X]^c_s
+ \sum_{0<s\le t}\Big(F(X_s) - F(X_{s-}) - F'(X_{s-})\,\Delta X_s\Big),
\]

where [X,X]^c_s denotes the continuous part of the bracket [X,X]_s. We refer to Applebaum [1] for a detailed proof. The reader familiar with the continuous version will notice that the beginning of the formula is very familiar. The sum over 0 < s ≤ t is the new part: it accounts for the jumps of the process.
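As a numerical illustration of the bracket appearing in this formula (a sketch with arbitrary parameters, not from the notes): for X_t = σB_t plus a compound Poisson process, [X,X]_T = σ²T + Σ_{s≤T}(∆X_s)², and the partition sums of Definition 5.4.1 over a fine grid should be close to this value on a typical path.

```python
import numpy as np

rng = np.random.default_rng(3)
T, n, sigma = 1.0, 200_000, 0.5

# One path of X = sigma*B + compound Poisson on a fine grid, started at X_0 = 0
grid = np.linspace(0.0, T, n + 1)
B = np.concatenate(([0.0], np.cumsum(np.sqrt(T / n) * rng.normal(size=n))))
X = sigma * B
jump_times = rng.uniform(0.0, T, rng.poisson(5.0))
jump_sizes = rng.normal(0.0, 1.0, jump_times.size)
for s, j in zip(jump_times, jump_sizes):
    X[grid >= s] += j                          # add each jump from its time onward

partition_sum = np.sum(np.diff(X) ** 2)        # X_0^2 + sum of squared increments (X_0 = 0)
bracket = sigma**2 * T + np.sum(jump_sizes**2) # [X,X]_T = sigma^2 T + sum of squared jumps
print(partition_sum, bracket)                  # close for a fine partition
```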

Stochastic Exponentials In this section, we discuss the solution of an SDE of the form

\[
dZ_t = Z_{t-}\,dY_t,
\]
where (Y_t)_{t≥0} is a Lévy-type stochastic integral. The solution to this equation is referred to as the stochastic exponential, or Doléans-Dade exponential, and is defined as

\[
\mathcal E^Y_t = \exp\left(Y_t - \frac12 [Y,Y]^c_t\right) \prod_{0<s\le t} (1 + \Delta Y_s)\,e^{-\Delta Y_s}.
\]

Readers familiar with the continuous case will observe that when (Y_t)_{t≥0} has no jumps, this exponential coincides with the exponential martingale appearing in Girsanov's theorem. The main difference comes from the fact that, here again, we must account for the jumps of the process.
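To make the formula concrete, consider a pure-jump (Y_t) with finitely many jumps and no Brownian part: then [Y,Y]^c = 0 and the expression collapses to E^Y_t = Π_{s≤t}(1 + ∆Y_s), which is also what one gets by solving dZ_t = Z_{t−}dY_t pathwise (Z is constant between jumps and multiplied by 1 + ∆Y_s at each jump). A short check, with illustrative values only:

```python
import numpy as np

rng = np.random.default_rng(2)
# A compound Poisson path Y on [0,1]: the jump sizes ΔY_s (kept > -1 so E stays positive)
n_jumps = rng.poisson(4.0)
jumps = np.clip(rng.normal(0.1, 0.2, n_jumps), -0.9, None)

Y_T = jumps.sum()                      # Y has no continuous part, so [Y,Y]^c = 0
# Doléans-Dade formula: E_T = exp(Y_T) * Π (1 + ΔY_s) e^{-ΔY_s}
doleans = np.exp(Y_T) * np.prod((1 + jumps) * np.exp(-jumps))
# Pathwise solution of dZ = Z_- dY: multiply by (1 + ΔY_s) at each jump
pathwise = np.prod(1 + jumps)
print(doleans, pathwise)               # the two coincide
```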

Y Theorem 5.4.4. The stochastic process (Et )t≥0 satisfies the equation

dZt = Zt−dYt.

Proof. We start by showing that \(\mathcal E^Y_t = e^{S^Y_t}\), where
\[
dS^Y_t = \sigma_t\,dB_t + \left[b_t - \frac12\sigma_t^2\right]dt
+ \int_{|x|>1} \log\big(1 + G(t,x)\big)\,N(dt,dx)
+ \int_{|x|\le 1} \log\big(1 + F(t,x)\big)\,\tilde N(dt,dx)
+ \int_{|x|\le 1} \Big(\log\big(1 + F(t,x)\big) - F(t,x)\Big)\,\nu(dx)\,dt,
\]
using the fact that (Y_t)_{t≥0} is a Lévy-type stochastic integral:

\[
Y_t = y_0 + \int_0^t b_s\,ds + \int_0^t \sigma_s\,dB_s
+ \int_{[0,t]\times\{|x|<1\}} F(s,x)\left(N(ds,dx) - \nu(dx)ds\right)
+ \int_{[0,t]\times\{|x|>1\}} G(s,x)\,N(ds,dx).
\]

Then, the result follows from an application of Itô’s formula. For more details, see Exercise 5.1.2 and Theorem 5.1.3 in Applebaum [1].

We state a useful property for such exponentials:

Lemma 5.4.5. It holds that for each t ≥ 0,

\[
\mathcal E^Y_t\,\mathcal E^Z_t = \mathcal E^{Y + Z + [Y,Z]}_t.
\]

Proof. Again, this follows from an application of Itô’s formula.
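For finite-activity pure-jump paths the lemma can be verified by hand: E^Y_t = Π(1 + ∆Y_s) and [Y,Z]_t = Σ ∆Y_s ∆Z_s, so the identity reduces to (1 + ∆Y)(1 + ∆Z) = 1 + ∆Y + ∆Z + ∆Y∆Z at each jump time. A quick numerical check, with illustrative values:

```python
import numpy as np

rng = np.random.default_rng(4)
# Two pure-jump paths sharing the same jump times: their jump sizes ΔY, ΔZ
dY = rng.uniform(-0.3, 0.5, rng.poisson(6.0))
dZ = rng.uniform(-0.3, 0.5, dY.size)

# For finite-activity pure-jump paths, E^Y = Π(1 + ΔY) and [Y,Z] = Σ ΔY ΔZ
EY = np.prod(1 + dY)
EZ = np.prod(1 + dZ)
# The jumps of W = Y + Z + [Y,Z] are ΔY + ΔZ + ΔY ΔZ
EW = np.prod(1 + dY + dZ + dY * dZ)
print(EY * EZ, EW)                     # equal, since (1+ΔY)(1+ΔZ) = 1+ΔY+ΔZ+ΔYΔZ
```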

5.5 Stochastic Differential Equation

In this section, we collect important results for an equation of the form

\[
Y_t = y_0 + \int_0^t b(s, Y_s)\,ds + \int_0^t \sigma(s, Y_s)\,dB_s
+ \int_{[0,t]\times\{|x|<1\}} F(Y_s, x)\left(N(ds,dx) - \nu(dx)ds\right)
+ \int_{[0,t]\times\{|x|>1\}} G(Y_s, x)\,N(ds,dx).
\]

Most of these results can be found in Applebaum [1]. We do not give any proofs, but we always give an exact reference.

Existence and Uniqueness Unsurprisingly, we start by giving a sufficient condition ensuring existence and uniqueness. Just as in the continuous case, notions of strong and weak solutions exist. But to avoid paraphrasing other texts, we only refer here to the discussion in [1], Subsection 6.2, p. 363. We impose the following conditions:

• Lipschitz condition: there exists K_1 > 0 such that for all y_1, y_2 ∈ R^d,
\[
|b(y_1) - b(y_2)|^2 + \|a(y_1) - a(y_2)\|^2 + \int_{|x|\le c} |F(y_1, x) - F(y_2, x)|^2\,\nu(dx) \le K_1 |y_1 - y_2|^2.
\]

• Growth condition: there exists K_2 > 0 such that for all y ∈ R^d,
\[
|b(y)|^2 + \|a(y)\|^2 + \int_{|x|\le c} |F(y, x)|^2\,\nu(dx) \le K_2 (1 + |y|^2).
\]

• Continuity: the mapping y ↦ G(y, x) is continuous for all |x| > 1.

Theorem 5.5.1. Under these assumptions, there exists a unique càdlàg strong solution to
\[
Y_t = y_0 + \int_0^t b(s, Y_s)\,ds + \int_0^t \sigma(s, Y_s)\,dB_s
+ \int_{[0,t]\times\{|x|<1\}} F(Y_s, x)\left(N(ds,dx) - \nu(dx)ds\right)
+ \int_{[0,t]\times\{|x|>1\}} G(Y_s, x)\,N(ds,dx).
\]

This result is obviously not optimal, as it only deals with strong solutions. We point out that, just as in the Gaussian case, one can link weak solutions to the so-called martingale problem. Then, showing weak existence and uniqueness boils down to giving sharp estimates on transition functions. Notably, one can establish well-posedness of the martingale problem using a parametrix expansion in the case of tempered stable distributions (see [5]). As for local conditions, some results exist; see for instance Theorem 6.2.11 in Applebaum [1]. The term "local" here refers to a solution that may blow up in finite time; the corresponding random time is called the "explosion time", see also Protter [9].

Stochastic flows and Markov Property In this paragraph, we assume the conditions of the previous paragraph, ensuring existence and uniqueness for the SDE. Then, one can consider the path of the solution and, as in the continuous case, ask whether or not the future depends on the past. More generally, one can study the mapping that sends a deterministic starting point x ∈ R^d to the unique strong solution of the SDE. The remarkable thing in the case of Lévy processes is that in the homogeneous case (coefficients not depending on time), the independence of the increments is preserved.

Definition 5.5.2. Let Φ = {Φ_{s,t}, 0 ≤ s ≤ t < +∞} be a family of measurable mappings from R^d × Ω to R^d, written Φ^ω_{s,t}(y) ∈ R^d. We say that Φ is a stochastic flow if, almost surely,

1. Φ^ω_{r,t}(y) = Φ^ω_{s,t}(Φ^ω_{r,s}(y)), for all 0 ≤ r ≤ s ≤ t < +∞ and all y ∈ R^d,

2. Φ^ω_{s,s}(y) = y, for all s ≥ 0 and all y ∈ R^d.

We say that it is a Lévy flow if, in addition, we have:

(L1) for each n ∈ N, 0 ≤ t_1 < ⋯ < t_n < +∞ and y ∈ R^d, the random variables {Φ_{t_j, t_{j+1}}(y); 1 ≤ j ≤ n − 1} are independent,

(L2) the mapping t ↦ Φ_{s,t}(y) is càdlàg for each y ∈ R^d and t ≥ s.

Theorem 5.5.3. Let Φ be the mapping that sends y ∈ R^d to the solution of
\[
Y_t = y + \int_0^t b(Y_s)\,ds + \int_0^t \sigma(Y_s)\,dB_s
+ \int_{[0,t]\times\{|x|<1\}} F(Y_s, x)\left(N(ds,dx) - \nu(dx)ds\right)
+ \int_{[0,t]\times\{|x|>1\}} G(Y_s, x)\,N(ds,dx).
\]
Then, Φ is a Lévy flow. Moreover, the solution to the SDE is a Markov process.

Proof. See Theorem 6.4.2 in Applebaum [1] for the Lévy flow property and Theorem 6.4.5 for the Markov property.

Bibliography

[1] D. Applebaum. Lévy Processes and Stochastic Calculus, 2nd edition. Cambridge University Press, 2009.
[2] J. Bertoin. Lévy Processes. Cambridge University Press, 1998.
[3] P. Billingsley. Probability and Measure. John Wiley & Sons, 2008.
[4] S. Ethier and T. Kurtz. Markov Processes: Characterization and Convergence. Wiley, 1997.
[5] L. Huang. Density estimates for SDEs driven by tempered stable processes, 2016.
[6] J. Jacod and A. N. Shiryaev. Limit Theorems for Stochastic Processes. Springer, 2002.
[7] O. Kallenberg. Foundations of Modern Probability. Springer, 2006.
[8] M. M. Meerschaert and A. Sikorskii. Stochastic Models for Fractional Calculus, volume 43. Walter de Gruyter, 2011.
[9] P. E. Protter. Stochastic Integration and Differential Equations. Springer, 2005.
[10] G. Samorodnitsky and M. S. Taqqu. Stable Non-Gaussian Random Processes. 2005.
[11] K. Sato. Lévy Processes and Infinitely Divisible Distributions. Cambridge University Press, 2005.
[12] V. M. Zolotarev. One-Dimensional Stable Distributions, volume 65. American Mathematical Society, 1986.
