Introduction to Lévy Processes
Huang Lorick [email protected]
Document type These are lecture notes. Typos, errors, and imprecisions are expected. Comments are welcome!
This version is available at http://perso.math.univ-toulouse.fr/lhuang/enseignements/
Year of publication 2021
Terms of use This work is licensed under a Creative Commons Attribution 4.0 International license: https://creativecommons.org/licenses/by/4.0/

Contents

1 Introduction and Examples
  1.1 Infinitely divisible distributions
  1.2 Examples of infinitely divisible distributions
  1.3 The Lévy-Khintchine formula
  1.4 Digression on Relativity

2 Lévy processes
  2.1 Definition of a Lévy process
  2.2 Examples of Lévy processes
  2.3 Exploring the Jumps of a Lévy Process

3 Proof of the Lévy-Khintchine formula
  3.1 The Lévy-Itô Decomposition
  3.2 Consequences of the Lévy-Itô Decomposition
  3.3 Exercises
  3.4 Discussion

4 Lévy processes as Markov Processes
  4.1 Properties of the Semi-group
  4.2 The Generator
  4.3 Recurrence and Transience
  4.4 Fractional Derivatives

5 Elements of Stochastic Calculus with Jumps
  5.1 Example of Use in Applications
  5.2 Stochastic Integration
  5.3 Construction of the Stochastic Integral
  5.4 Quadratic Variation and Itô Formula with jumps
  5.5 Stochastic Differential Equation

Bibliography
Chapter 1
Introduction and Examples
In this introductory chapter, we start by defining the notion of infinitely divisible distributions. We then give examples of such distributions and end the chapter by stating the celebrated Lévy-Khintchine formula. The proof of the latter will be given in a subsequent chapter.
1.1 Infinitely divisible distributions
Historically, Paul Lévy was interested in the "arithmetic of probabilities", where he investigated properties of probability distributions that can be decomposed as a sum of independent copies of themselves. This field gave rise to what we now call infinitely divisible distributions. Infinitely divisible distributions and Lévy processes are closely related, as Lévy processes have infinitely divisible marginal distributions. We start by introducing the concept of an infinitely divisible distribution and give some examples.
Definition 1.1.1. We say that a random variable X is infinitely divisible if for all n ∈ N, there exist i.i.d. random variables Y_1, ..., Y_n such that

X \overset{(d)}{=} Y_1 + \cdots + Y_n.

A very simple consequence of this definition is the following result:

Proposition 1.1.2. The following are equivalent:
• X has an infinitely divisible distribution,
• µ_X, the distribution of X, has an n-th convolution root that is itself the distribution of a random variable, for each n,
• φ_X, the characteristic function of X, has an n-th root that is itself the characteristic function of a random variable, for each n.
We leave the proof as an exercise.
1.2 Examples of infinitely divisible distributions
Gaussian random variables
Let X be a random vector in R^d. We say that X has a Gaussian distribution if there exist m ∈ R^d and a symmetric positive definite d × d matrix A such that X has density

f(x) = \frac{1}{(2\pi)^{d/2}\sqrt{\det A}} \exp\left(-\frac{1}{2}\langle x - m, A^{-1}(x - m)\rangle\right).

In this case, we write X ∼ N(m, A); m is the mean and A the covariance matrix. An easy exercise gives that the Fourier transform of such a random variable is

\varphi_X(\xi) = \exp\left(i\langle\xi, m\rangle - \frac{1}{2}\langle\xi, A\xi\rangle\right).
Hence, it is easy to see that:

\varphi_X(\xi)^{1/n} = \exp\left(i\left\langle\xi, \frac{m}{n}\right\rangle - \frac{1}{2}\left\langle\xi, \frac{A}{n}\xi\right\rangle\right).

Consequently, X is infinitely divisible, with Y_i ∼ N(m/n, A/n).
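As a quick sanity check (ours, not part of the original notes), one can verify this n-th root identity numerically in dimension one, writing a for the variance:

```python
import cmath

def phi_gauss(xi, m, a):
    """Characteristic function of N(m, a) in dimension one (a = variance)."""
    return cmath.exp(1j * xi * m - 0.5 * a * xi * xi)

m, a, n = 1.5, 2.0, 7  # illustrative values
for xi in (-3.0, -0.5, 0.0, 1.0, 2.5):
    # the n-th power of phi_{N(m/n, a/n)} recovers phi_{N(m, a)}
    assert abs(phi_gauss(xi, m, a) - phi_gauss(xi, m / n, a / n) ** n) < 1e-12
```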
Poisson random variable. We say that a discrete random variable X has a Poisson distribution with parameter λ if

P(X = k) = e^{-\lambda}\frac{\lambda^k}{k!}, \quad k \in \mathbb{N}.

Consider now Y independent of X, with Poisson distribution of parameter µ. We have:

P(X + Y = k) = \sum_{l=0}^{k} P(X = l, Y = k - l) = \sum_{l=0}^{k} e^{-\lambda}\frac{\lambda^l}{l!}\, e^{-\mu}\frac{\mu^{k-l}}{(k-l)!}.

Grouping terms in the last identity, we get:

P(X + Y = k) = e^{-(\lambda+\mu)}\frac{1}{k!}\sum_{l=0}^{k}\frac{k!}{l!(k-l)!}\lambda^l\mu^{k-l} = e^{-(\lambda+\mu)}\frac{(\lambda+\mu)^k}{k!}.

Consequently, the convolution of two Poisson distributions is a Poisson distribution, and Poisson distributions are infinitely divisible. Alternatively, one can show that the characteristic function of a Poisson distribution is
\varphi_X(\xi) = \exp(\lambda(e^{i\xi} - 1)),

giving again that Poisson distributions are infinitely divisible, with Y_i having a Poisson distribution of parameter λ/n.
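The convolution identity above is easy to confirm numerically; the following sketch (illustrative parameters of our choosing) compares the convolution of two Poisson pmfs with the Poisson(λ + µ) pmf:

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """P(X = k) for X ~ Poisson(lam)."""
    return exp(-lam) * lam ** k / factorial(k)

lam, mu = 1.3, 2.1  # illustrative parameters
for k in range(15):
    # P(X + Y = k) computed by convolving the two pmfs ...
    conv = sum(poisson_pmf(l, lam) * poisson_pmf(k - l, mu) for l in range(k + 1))
    # ... matches the Poisson(lam + mu) pmf
    assert abs(conv - poisson_pmf(k, lam + mu)) < 1e-12
```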
Compound Poisson random variable. Consider N to be a Poisson random variable with parameter λ. Since N is integer-valued, one can form the random sum

X = \sum_{i=1}^{N} Y_i,

where the Y_i are independent and identically distributed, and independent of N. Let us denote by µ_Y their common distribution.
Proposition 1.2.1. The characteristic function of X is

\varphi_X(\xi) = \exp\left(\lambda\int (e^{i\langle\xi, y\rangle} - 1)\, \mu_Y(dy)\right).
Proof. We have

\varphi_X(\xi) = E(e^{i\langle\xi, X\rangle}) = E\left(e^{i\langle\xi, \sum_{i=1}^{N} Y_i\rangle}\right) = \sum_{k=0}^{+\infty} E\left(e^{i\langle\xi, \sum_{i=1}^{k} Y_i\rangle}\, 1_{\{N=k\}}\right).

Now, exploiting the independence of N and the Y_i's, we can write:

\varphi_X(\xi) = \sum_{k=0}^{+\infty} E\left(e^{i\langle\xi, \sum_{i=1}^{k} Y_i\rangle}\right) P(N = k) = \sum_{k=0}^{+\infty} E\left(e^{i\langle\xi, \sum_{i=1}^{k} Y_i\rangle}\right) e^{-\lambda}\frac{\lambda^k}{k!}.

Now, we note that

E\left(e^{i\langle\xi, \sum_{i=1}^{k} Y_i\rangle}\right) = \prod_{i=1}^{k} E(e^{i\langle\xi, Y_i\rangle}) = \varphi_Y(\xi)^k,

denoting by φ_Y the common characteristic function of the Y_i's. We thus obtain

\varphi_X(\xi) = e^{-\lambda}\sum_{k=0}^{+\infty}\frac{\lambda^k}{k!}\varphi_Y(\xi)^k = \exp\left(\lambda(\varphi_Y(\xi) - 1)\right).

To conclude, we only write φ_Y(ξ) as \int e^{i\langle\xi,y\rangle}\mu_Y(dy). Hence, we see that a compound Poisson random variable also has an infinitely divisible distribution.
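Proposition 1.2.1 can be illustrated by Monte Carlo. The sketch below is our construction (rate λ = 2, jumps Y_i ∼ N(0,1), so φ_Y(ξ) = e^{−ξ²/2}): it compares the empirical characteristic function of a compound Poisson sample with the closed form.

```python
import cmath
import math
import random

random.seed(0)

def sample_poisson(lam, rng=random):
    """Poisson sampler by inversion of the cdf (fine for moderate lam)."""
    u = rng.random()
    k, p = 0, math.exp(-lam)
    cdf = p
    while u > cdf:
        k += 1
        p *= lam / k
        cdf += p
    return k

lam, xi, n_samples = 2.0, 1.0, 200_000
acc = 0j
for _ in range(n_samples):
    n_jumps = sample_poisson(lam)
    x = sum(random.gauss(0.0, 1.0) for _ in range(n_jumps))  # X = Y_1 + ... + Y_N
    acc += cmath.exp(1j * xi * x)
empirical = acc / n_samples

# Proposition 1.2.1 with mu_Y = N(0,1): phi_X(xi) = exp(lam * (e^{-xi^2/2} - 1))
theoretical = cmath.exp(lam * (cmath.exp(-xi * xi / 2) - 1))
assert abs(empirical - theoretical) < 0.02
```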
1.3 The Lévy Khintchine formula
One can notice that in every example above, the characteristic function has an exponential form. This is no coincidence: it is a property shared by all infinitely divisible distributions. In fact, one can even give more information on the exponent. This is the so-called Lévy-Khintchine formula. In this section, we only state the result; the proof will be given later.
Theorem 1.3.1. A probability distribution µ on R^d is infinitely divisible if and only if there exist
• a vector b ∈ R^d, called the drift, or mean,
• a symmetric positive semi-definite d × d matrix A, called the covariance matrix,
• a measure ν on R^d \ {0} such that \int_{\mathbb{R}^d\setminus\{0\}} \min(|y|^2, 1)\, \nu(dy) < +\infty,
such that:

\int_{\mathbb{R}^d} e^{i\langle\xi,y\rangle}\mu(dy) = \exp\left(i\langle b, \xi\rangle - \frac{1}{2}\langle\xi, A\xi\rangle + \int_{\mathbb{R}^d\setminus\{0\}}\left(e^{i\langle\xi,y\rangle} - 1 - i\langle\xi, y\rangle 1_{\{|y|\le 1\}}\right)\nu(dy)\right).

Remark 1.3.2. Such a measure ν is called a Lévy measure. Later on, this measure will be linked to the jumps when discussing Lévy processes. It should be noted that one can state the whole theory by adding the condition ν({0}) = 0 and integrating over R^d, the point being that there should be no jumps of size 0. Besides, there is nothing special about the cut-off 1_{\{|y|\le 1\}} appearing above: one could take any ε > 0 and consider instead 1_{\{|y|\le \varepsilon\}}, or even \frac{1}{1+|y|^2}. Doing so would change the value of b.

Remark 1.3.3. Obviously, the outstanding part of the previous theorem is the "only if" part. Indeed, if we are given a distribution with the above characteristic function, it is quite easy to see that it is infinitely divisible.

Definition 1.3.4. The triplet (A, ν, b) above is called the characteristic triplet, and it completely determines the distribution µ. Note that since A is a symmetric positive semi-definite matrix, we will interchangeably write (Q, ν, b) as the generating triplet, where Q is the quadratic form defined by Q(z) = ⟨z, Az⟩.

One interpretation of this result is that any infinitely divisible distribution can be decomposed as a sum of fundamental building blocks. One immediately observes that \frac{1}{2}\langle\xi, A\xi\rangle in the exponent comes from a Gaussian distribution. Besides, barring the term multiplied by the indicator function, the integral \int_{\mathbb{R}^d\setminus\{0\}}(e^{i\langle\xi,y\rangle} - 1)\nu(dy) is the characteristic exponent of a compound Poisson distribution.
Stable distributions. In this paragraph, we introduce a very important class of distributions known as stable distributions. Historically, these distributions arise from extensions of the Central Limit Theorem. Let X_1, X_2, ... be a sequence of i.i.d. random variables, and for (a_n), (b_n) two sequences of real numbers, form

S_n = \frac{X_1 + \cdots + X_n - a_n}{b_n}.

If there exists a random variable X such that S_n converges in distribution to X, then we say that X has a stable distribution. A rather classical example is the case when X_1 has finite mean m and finite variance σ². In this case, one can take b_n = σ√n and a_n = nm; the Central Limit Theorem gives S_n ⇒ N(0, 1), and we see that Gaussian distributions, in particular N(m, σ²), are stable. As an exercise, the reader can prove the following result:
Proposition 1.3.5. S_n ⇒ X if and only if for all n there exist c_n > 0 and d_n such that

X_1 + \cdots + X_n \overset{(d)}{=} c_n X + d_n,

where X_1, ..., X_n are independent copies of X.

Remark 1.3.6. In the previous proposition, if d_n can be taken to be 0, then X is said to be strictly stable. Besides, it can be shown that the only possible choice for c_n is of the form c_n = σn^{1/α} for some α ∈ (0, 2]. This parameter α is called the index of the stable distribution.
The next result characterises the Lévy-Khintchine triplet of a stable distribution.

Theorem 1.3.7. The Lévy-Khintchine triplet of a stable distribution of index α takes one of two forms:
1. α = 2: then ν = 0, so that X ∼ N(b, A);
2. α < 2: then A = 0 and ν is of the form

\nu(dx) = C_1 \frac{dx}{x^{1+\alpha}} 1_{\{x>0\}} + C_2 \frac{dx}{|x|^{1+\alpha}} 1_{\{x<0\}}, \quad\text{where } C_1, C_2 \ge 0.
The proof of this result can be found in Sato [11]. In the one-dimensional case, an extensive discussion can be found in Zolotarev [12]. The higher dimensional cases are more elaborate, and many questions are still open to this day. We must also mention the book by Samorodnitsky and Taqqu [10]. We conclude this paragraph by giving an alternate expression for the exponent of a Stable distribution in one dimension.
Theorem 1.3.8. A random variable X has a stable distribution of index α if and only if there exist σ > 0, β ∈ [−1, 1] and b ∈ R such that
• if α = 2:

\varphi_X(\xi) = \exp\left(i\xi b - \frac{1}{2}\sigma^2\xi^2\right);

• if α < 2 and α ≠ 1:

\varphi_X(\xi) = \exp\left(i\xi b - \sigma^\alpha|\xi|^\alpha\left[1 - i\beta\,\mathrm{sgn}(\xi)\tan\frac{\pi\alpha}{2}\right]\right);

• if α = 1:

\varphi_X(\xi) = \exp\left(i\xi b - \sigma|\xi|\left[1 + i\beta\frac{2}{\pi}\mathrm{sgn}(\xi)\log|\xi|\right]\right).
The proof of this result can be found in all three books mentioned above.

Remark 1.3.9.
• The parameters b and σ designate respectively the drift and the scale, whereas β is the skewness of the distribution. Taking β = 0 gives a symmetric stable distribution.
• Plugging in β = 0 and b = 0, we see that the exponent of a stable distribution is essentially |ξ|^α, for α ranging from 0 to 2. It can be shown that every stable distribution has a density; these densities interpolate between the Gaussian density (α = 2) and the Cauchy density (α = 1):

f_X(x) = \frac{\sigma}{\pi[(x - b)^2 + \sigma^2]}.

Series representations for these densities are available, often relying on special functions. Note that for α < 2, the distributions are heavy-tailed. In fact, it can be shown that if X has an α-stable distribution, then E(|X|^γ) < +∞ for all γ < α.
We end this paragraph on stable distributions by mentioning the following result of Chambers, Mallows and Stuck, often abbreviated as the CMS method in the literature.

Theorem 1.3.10. Consider U and W two independent random variables, such that
• U has a uniform distribution on (−π/2, π/2),
• W has an exponential distribution of parameter 1.
Set

\zeta = -\beta\tan\frac{\pi\alpha}{2}, \quad\text{and}\quad \xi = \begin{cases}\frac{1}{\alpha}\arctan(-\zeta) & \text{if } \alpha \neq 1,\\ \frac{\pi}{2} & \text{if } \alpha = 1.\end{cases}

If α ≠ 1, then

X = (1 + \zeta^2)^{\frac{1}{2\alpha}}\,\frac{\sin(\alpha(U + \xi))}{(\cos U)^{1/\alpha}}\left(\frac{\cos(U - \alpha(U + \xi))}{W}\right)^{\frac{1-\alpha}{\alpha}}.

If α = 1, then

X = \frac{1}{\xi}\left[\left(\frac{\pi}{2} + \beta U\right)\tan U - \beta\log\left(\frac{\frac{\pi}{2}W\cos U}{\frac{\pi}{2} + \beta U}\right)\right].

In both cases, X has a stable distribution with index α and skewness β. The proof can be found in the original paper of Chambers, Mallows and Stuck.
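The CMS recipe translates directly into a sampler. The following sketch implements the two formulas of Theorem 1.3.10 as stated; the parameter values and the median check at the end are our own illustrative choices:

```python
import math
import random

def sample_stable(alpha, beta, rng=random):
    """One draw from a standard stable law via the Chambers-Mallows-Stuck formulas."""
    U = rng.uniform(-math.pi / 2, math.pi / 2)
    W = rng.expovariate(1.0)
    if alpha != 1:
        zeta = -beta * math.tan(math.pi * alpha / 2)
        xi = math.atan(-zeta) / alpha
        return ((1 + zeta ** 2) ** (1 / (2 * alpha))
                * math.sin(alpha * (U + xi)) / math.cos(U) ** (1 / alpha)
                * (math.cos(U - alpha * (U + xi)) / W) ** ((1 - alpha) / alpha))
    # alpha == 1 case of the theorem, where xi = pi/2
    return (2 / math.pi) * ((math.pi / 2 + beta * U) * math.tan(U)
                            - beta * math.log((math.pi / 2) * W * math.cos(U)
                                              / (math.pi / 2 + beta * U)))

random.seed(1)
samples = [sample_stable(1.5, 0.0) for _ in range(100_000)]
samples.sort()
median = samples[len(samples) // 2]
assert abs(median) < 0.05  # a symmetric (beta = 0) stable law has median 0
```

For α = 1 and β = 0 the formula reduces to X = tan U, the standard Cauchy sampler, which is a reassuring special case.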
1.4 Digression on Relativity
In this section, we would like to give an example from the theory of relativity where infinitely divisible distributions arise. More precisely, we will discuss the relativistic stable distribution.

Consider a particle in R^3 whose mass is m > 0 and momentum is p = (p_1, p_2, p_3) ∈ R^3. According to the models of relativity theory, the total energy of this particle is \sqrt{m^2c^4 + c^2|p|^2}, where c is the speed of light. Subtracting mc^2, which is the energy due to mass, the kinetic energy of the particle is then given by

E(p) = \sqrt{m^2c^4 + c^2|p|^2} - mc^2.

We consider

\varphi_{m,c}(p) = e^{-E(p)} = \exp\left(-\sqrt{m^2c^4 + c^2|p|^2} + mc^2\right).
Theorem 1.4.1. φm,c is the characteristic function of an infinitely divisible distribution.
Proof. This proof is in two parts. First, using Bochner's theorem, we identify φ_{m,c} as a characteristic function. Next, we express the n-th root of φ_{m,c} as a characteristic function as well. We first recall Bochner's theorem.
Theorem 1.4.2. A continuous function ψ with ψ(0) = 1 is a characteristic function if and only if it is positive definite, that is, for all n ∈ N, all z_1, ..., z_n ∈ C and all p_1, ..., p_n,

\sum_{i,j=1}^{n} \psi(p_i - p_j)\, z_i \bar{z}_j \ge 0.
Rewriting the kinetic energy as

-E(p) = mc^2\left(1 - \sqrt{1 + \frac{|p|^2}{m^2c^2}}\right) = mc^2\,\psi(p),

it is enough to apply Bochner's theorem to e^{ψ(p)}. In fact, one clever way to rewrite ψ is through the use of the Gamma function, namely:

\psi(p) = 1 - \frac{1}{2\sqrt{\pi}}\int_0^{+\infty}\left(1 - e^{-\left(1 + \frac{|p|^2}{m^2c^2}\right)x}\right)\frac{dx}{x^{3/2}}.

We point out that, at first glance, we might have a problem defining these integrals near 0, and we should consider a sequence approaching zero in order to be perfectly rigorous. But as it is not the main focus of these notes, we will just admit that these integrals are well defined. Now, to show that ψ is positive definite, it is enough to focus on the exponential part:
\sum_{i,j=1}^{n} z_i\bar{z}_j\,\frac{1}{2\sqrt{\pi}}\int_0^{+\infty} e^{-\left(1 + \frac{|p_i-p_j|^2}{m^2c^2}\right)x}\frac{dx}{x^{3/2}} = \frac{1}{2\sqrt{\pi}}\int_0^{+\infty} e^{-x}\underbrace{\left(\sum_{i,j=1}^{n} z_i\bar{z}_j\, e^{-\frac{|p_i-p_j|^2}{m^2c^2}x}\right)}_{\text{positive definite}}\frac{dx}{x^{3/2}}.

Indeed, p \mapsto e^{-\frac{|p|^2}{m^2c^2}x} is the characteristic function of a random variable with distribution N\left(0, \frac{2x}{m^2c^2}I\right), hence positive definite. Thus, we obtain that

p \mapsto \varphi_{m,c}(p) = \exp\left(-\sqrt{m^2c^4 + c^2|p|^2} + mc^2\right)

is the characteristic function of some distribution. To see that it is infinitely divisible, we write:
\varphi_{m,c}(p)^{1/n} = \exp\left(-\frac{1}{n}\sqrt{m^2c^4 + c^2|p|^2} + \frac{mc^2}{n}\right)
= \exp\left(-\sqrt{(nm)^2\left(\frac{c}{n}\right)^4 + \left(\frac{c}{n}\right)^2|p|^2} + (nm)\left(\frac{c}{n}\right)^2\right)
= \varphi_{nm,\, c/n}(p).

Remark 1.4.3. This remarkable fact has allowed physicists to apply criteria developed in probability theory to relativity. In particular, the existence of bound states¹ for relativistic Schrödinger operators follows from the application of a recurrence criterion.
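The scaling identity φ_{m,c}^{1/n} = φ_{nm, c/n} derived above is easy to verify numerically (a sketch with arbitrary illustrative values of m, c, n; physical units are ignored):

```python
import math

def phi(p, m, c):
    """phi_{m,c}(p) = exp(-E(p)), with E(p) the relativistic kinetic energy."""
    return math.exp(-math.sqrt(m ** 2 * c ** 4 + c ** 2 * p ** 2) + m * c ** 2)

m, c, n = 2.0, 3.0, 5  # illustrative values
for p in (0.0, 0.7, 4.0, 25.0):
    # the n-th root of phi_{m,c} is again of the same form, with m -> n*m, c -> c/n
    assert abs(phi(p, m, c) ** (1 / n) - phi(p, n * m, c / n)) < 1e-12
```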
¹https://en.wikipedia.org/wiki/Bound_state

Chapter 2
Lévy processes
In this chapter, we define Lévy processes and give a few examples of such processes. We discuss their relation with infinitely divisible distributions and the nature of their jumps. We will spend a large part of this chapter discussing integration with respect to a Poisson random measure, in order to set up the proof of the Lévy-Khintchine formula in the next chapter.
2.1 Definition of a Lévy process
There exist many equivalent definitions of Brownian motion. As it is not the main focus of these lectures, here is one definition that will suffice for us.
Definition 2.1.1. A stochastic process (Bt)t≥0 is a Brownian motion if:
• Almost surely, t 7→ Bt is continuous,
• For all s, t > 0, Bs+t − Bt has the same distribution as Bs
• For all n ≥ 1 and all times 0 ≤ t0 ≤ t1 ≤ · · · ≤ tn, the random variables Bt0 , Bt1 − Bt0 , ... , Btn − Btn−1 are independent.
Now one can wonder: what happens if we drop the continuity assumption on t ↦ B_t? The reader trained in probability will observe that the Poisson process (more on that one later) also fits the description. In fact, the class of all processes with independent and stationary increments is known as the class of Lévy processes. Let us write a formal definition.
Definition 2.1.2. A stochastic process (X_t)_{t≥0} in R^d is a Lévy process if the following conditions are satisfied:
1. Independent increments: for all n ≥ 1 and all times 0 ≤ t_0 ≤ t_1 ≤ ··· ≤ t_n, the random variables X_{t_0}, X_{t_1} − X_{t_0}, ..., X_{t_n} − X_{t_{n−1}} are independent.
2. Stationarity of increments: for all s, t ≥ 0, the distribution of X_{t+s} − X_s does not depend on s.
3. X_0 = 0 almost surely.
4. Stochastic continuity: for all ε > 0,

P(|X_s - X_t| > \varepsilon) \xrightarrow[s \to t]{} 0.

5. It is càdlàg almost surely.
Remark 2.1.3. The last item above can be dropped, as one can prove that there always exists a càdlàg modification (i.e. a process Y such that P(X_t = Y_t) = 1 for every t). However, proving this is quite involved, and we opt to include the càdlàg property in the definition of a Lévy process.

The fact that the previous definition actually gives rise to a probability measure on the space of càdlàg functions from R_+ to R^d follows from Kolmogorov's extension theorem. The details can be found in Billingsley [3]. Obviously, Brownian motion and the Poisson process satisfy all of these properties. The reader can also observe that the sum of a Poisson process and a Brownian motion satisfies these properties. In the next chapter, we will see that any process satisfying these properties can be decomposed as the sum of a drift, a Brownian motion, a compound Poisson process and an L² martingale. This is the celebrated Lévy-Itô decomposition.

Now, besides the fact that Gaussian and Poissonian distributions belong to both classes, what is the link between Lévy processes and infinitely divisible distributions? The answer is that at any given time, a Lévy process has an infinitely divisible distribution.
Proof. Let (X_t)_{t≥0} be a Lévy process. For all n ∈ N, we can write:

X_t = \left(X_t - X_{\frac{(n-1)t}{n}}\right) + \left(X_{\frac{(n-1)t}{n}} - X_{\frac{(n-2)t}{n}}\right) + \cdots + \left(X_{\frac{t}{n}} - X_0\right),

where we used the fact that X_0 = 0. Now, all of these increments are independent and identically distributed (since we are considering increments over time intervals of length t/n), so we have decomposed X_t as a sum of i.i.d. random variables. Thus, X_t is indeed infinitely divisible.

Being infinitely divisible, the characteristic function of X_t must also satisfy the Lévy-Khintchine formula, which here takes the form

E(e^{i\langle\xi, X_t\rangle}) = \exp\left(t\left[i\langle b, \xi\rangle - \frac{1}{2}\langle\xi, A\xi\rangle + \int_{\mathbb{R}^d\setminus\{0\}}\left(e^{i\langle\xi,y\rangle} - 1 - i\langle\xi, y\rangle 1_{\{|y|\le 1\}}\right)\nu(dy)\right]\right),

for some characteristics (b, A, ν). Therefore, we will refer to this triplet as the characteristic triplet of the Lévy process X as well.
2.2 Examples of Lévy processes
As we saw earlier, Gaussian and Poisson distributions are infinitely divisible. This means that their continuous-time counterparts, Brownian motion and the Poisson process, are Lévy processes. Just in case the reader is unfamiliar with Poisson processes on R_+, here is a brief summary. The Poisson process is the only counting process with independent and stationary increments. A Poisson process of parameter λ has, at each time t, a Poisson distribution of parameter λt:

P(N_t = k) = e^{-\lambda t}\frac{(\lambda t)^k}{k!}.

As such, one can compute its characteristic function:

\varphi_{N_t}(\xi) = \exp(\lambda t(e^{i\xi} - 1)).

We can force a Lévy-Khintchine exponent form by writing this as

\varphi_{N_t}(\xi) = \exp\left(t\int_{\mathbb{R}}(e^{i\xi x} - 1)\,\lambda\delta_1(dx)\right).

Thus, the Lévy measure of a Poisson process is a Dirac mass at 1. Similarly, since we saw that a compound Poisson random variable is infinitely divisible, the stochastic process

X_t = \sum_{i=1}^{N_t} Y_i,

where (N_t)_{t≥0} is a Poisson process independent of the i.i.d. sequence (Y_i), is also a Lévy process. We naturally call it the compound Poisson process. The same calculation as above then gives:

\varphi_{X_t}(\xi) = \exp\left(t\int(e^{i\langle\xi,y\rangle} - 1)\,\lambda\mu_Y(dy)\right),

and we see that the compound Poisson process has Lévy measure λµ_Y(dy).
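A Poisson process is straightforward to simulate from its i.i.d. exponential interarrival times; a compound Poisson path is then obtained by attaching an i.i.d. jump Y_i to each arrival. A minimal sketch (parameters ours), checking that the mean number of jumps on [0, t] is indeed λt:

```python
import random

random.seed(2)

def poisson_path_jump_times(lam, t_max, rng=random):
    """Jump times of a Poisson process on [0, t_max]: cumulative i.i.d. Exp(lam) interarrivals."""
    times, t = [], rng.expovariate(lam)
    while t <= t_max:
        times.append(t)
        t += rng.expovariate(lam)
    return times

lam, t_max, n_paths = 3.0, 2.0, 20_000
mean_count = sum(len(poisson_path_jump_times(lam, t_max)) for _ in range(n_paths)) / n_paths
# N_t ~ Poisson(lam * t), so the mean number of jumps on [0, t_max] is lam * t_max
assert abs(mean_count - lam * t_max) < 0.1
```

Replacing each arrival by a draw from µ_Y (e.g. `random.gauss(0, 1)`) and summing gives a sample of the compound Poisson process at time t_max.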
Remark 2.2.1. The Lévy measure has an interpretation in terms of the jumps of the Lévy process. Indeed, Brownian motion has continuous trajectories, so its Lévy measure is zero. The Poisson process has jumps of size 1, giving a Dirac mass at 1 as Lévy measure, and a compound Poisson process, whose jumps are realisations of the random variables Y_i occurring at rate λ, has Lévy measure λµ_Y(dy). In general, the Lévy measure can be seen as the intensity of the jumps in a given region of space.
The Gamma Process
We consider (X_t)_{t≥0} such that for all t ≥ 0,

\mathcal{L}(X_t) = \Gamma(\alpha t, \beta), \quad \alpha, \beta > 0.

This means that for all t > 0, the density of X_t is

f(t, x) = \frac{\beta^{\alpha t}}{\Gamma(\alpha t)}\, x^{\alpha t - 1} e^{-\beta x}\, 1_{\{x > 0\}}.

A simple integration yields that the characteristic function of X_t is \varphi_{X_t}(\xi) = \left(\frac{\beta}{\beta - i\xi}\right)^{\alpha t}. To simplify the computations, we take α = β = 1, and give an alternative expression for the characteristic function, more suitable for the Lévy-Khintchine formula:

\frac{1}{(1 - i\xi)^t} = e^{-t\ln(1 - i\xi)}.

Now, notice that

\left(-\ln(1 - i\xi)\right)' = \frac{i}{1 - i\xi} = i\,\varphi_{X_1}(\xi) = i\int_0^{+\infty} e^{i\xi x} e^{-x}\, dx.

Indeed, for α = β = 1, X_1 has an exponential distribution of parameter 1. Integrating both sides with respect to ξ gives:

-\ln(1 - i\xi) = \int_0^{+\infty}\left(e^{i\xi x} - 1\right)\frac{e^{-x}}{x}\, dx.

In other words, we have exhibited the Lévy measure of (X_t)_{t≥0}: it is \nu(dx) = \frac{e^{-x}}{x}\, dx.
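The integral representation of the exponent of the Gamma process can be confirmed numerically: the integral of (e^{iξx} − 1)e^{−x}/x over (0, ∞) should reproduce −ln(1 − iξ). A rough midpoint-rule sketch (truncation point and step count are our choices):

```python
import cmath
import math

def levy_exponent_gamma(xi, n_steps=200_000, x_max=40.0):
    """Midpoint-rule approximation of int_0^inf (e^{i xi x} - 1) e^{-x} / x dx."""
    h = x_max / n_steps
    total = 0j
    for k in range(n_steps):
        x = (k + 0.5) * h
        total += (cmath.exp(1j * xi * x) - 1) * math.exp(-x) / x * h
    return total

xi = 0.8
# the computation above predicts that this integral equals -log(1 - i*xi)
assert abs(levy_exponent_gamma(xi) - (-cmath.log(1 - 1j * xi))) < 1e-3
```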
Stable Subordinators. In general, a subordinator is a non-decreasing Lévy process. For a Lévy process to be non-decreasing means several things. First, there cannot be a Brownian part, as Brownian motion cannot have monotone trajectories. Second, the Lévy measure cannot charge (−∞, 0); otherwise, the process would see negative jumps and the trajectories could not be non-decreasing.

Theorem 2.2.2. If T is a subordinator, then its characteristic function takes the form

E(e^{i\xi T_t}) = \exp\left(t\left[i b\xi + \int_0^{+\infty}(e^{i\xi x} - 1)\,\mu(dx)\right]\right),

where b ≥ 0 and the Lévy measure µ satisfies the additional requirements:

\mu\big((-\infty, 0)\big) = 0 \quad\text{and}\quad \int_0^{+\infty}\min(1, y)\,\mu(dy) < +\infty.

The proof of this result can be found in Bertoin [2]. We will say more on the subject once we have proved the Lévy-Khintchine formula.

Probably the most used type of subordinator is the α-stable subordinator, that is, as its name indicates, an α-stable process with non-decreasing trajectories. Let us denote such a process by (T_t^α)_{t≥0}. Looking at the previous theorem, we see that we need to take 0 < α < 1 to guarantee the conditions on the Lévy measure. Besides, since the Lévy measure gives zero mass to the negative reals, the Laplace transform is well defined and takes the form

E(e^{-\lambda T_t^\alpha}) = e^{-t\lambda^\alpha}, \quad \lambda > 0.

Now, a simple computation gives

\lambda^\alpha = \frac{\alpha}{\Gamma(1-\alpha)}\int_0^{+\infty}\left(1 - e^{-\lambda x}\right)\frac{dx}{x^{1+\alpha}}.

We thus see that the α-stable subordinator has no Gaussian part, and its Lévy measure is \frac{\alpha}{\Gamma(1-\alpha)}\frac{dx}{x^{1+\alpha}} 1_{\{x>0\}}.

These types of random processes are useful for time-changes. Let us give one example using (T_t^α)_{t≥0} and a d-dimensional Brownian motion.

Example 2.2.3. Consider (B_t)_{t≥0} a d-dimensional Brownian motion and (T_t^α)_{t≥0} an α-stable subordinator, independent of B. The subordinated Brownian motion (B_{T_t^α})_{t≥0} is a d-dimensional 2α-stable process. Indeed, conditioning on T_t^α, we find its characteristic function to be

E(e^{i\langle\xi, B_{T_t^\alpha}\rangle}) = E(e^{-T_t^\alpha|\xi|^2/2}) = e^{-t(|\xi|^2/2)^\alpha}.

This example gives us a very simple way to obtain a d-dimensional 2α-stable process. We must mention, though, that we do not get all stable processes in dimension d in this way, since the process obtained is clearly rotationally symmetric.
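The identity λ^α = (α/Γ(1−α)) ∫₀^∞ (1 − e^{−λx}) x^{−1−α} dx quoted above can likewise be checked numerically. The sketch below (all numerical choices ours) splits the integral into a small-x series expansion, a midpoint rule, and an analytic tail where 1 − e^{−λx} is essentially 1:

```python
import math

def frac_exponent(lam, alpha, eps=0.01, x_max=50.0, n_steps=100_000):
    """alpha/Gamma(1-alpha) * int_0^inf (1 - e^{-lam x}) x^{-1-alpha} dx, numerically."""
    # [0, eps]: expand 1 - e^{-lam x} = lam*x - (lam*x)^2/2 + ... and integrate term by term
    head = (lam * eps ** (1 - alpha) / (1 - alpha)
            - lam ** 2 * eps ** (2 - alpha) / (2 * (2 - alpha)))
    # [eps, x_max]: midpoint rule
    h = (x_max - eps) / n_steps
    mid = sum((1 - math.exp(-lam * (eps + (k + 0.5) * h)))
              * (eps + (k + 0.5) * h) ** (-1 - alpha) * h
              for k in range(n_steps))
    # [x_max, inf): 1 - e^{-lam x} is essentially 1, so the tail is x_max^{-alpha}/alpha
    tail = x_max ** (-alpha) / alpha
    return alpha / math.gamma(1 - alpha) * (head + mid + tail)

lam, alpha = 2.0, 0.5  # illustrative values, 0 < alpha < 1
assert abs(frac_exponent(lam, alpha) - lam ** alpha) < 1e-3
```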
2.3 Exploring the Jumps of a Lévy Process
In this section, we investigate the structure of the jumps of a Lévy process. We will link the jumps to a Poisson random measure, which will lead us to the celebrated Lévy-Itô decomposition. Henceforth, (Xt)t≥0 will denote a Lévy process with generating triplet (b, A, ν).
Remark 2.3.1. Note that until now, we did not specify where the triplet comes from. The Lévy-Khintchine representation states that any Lévy process is characterised by a triplet, but the origin of this triplet is for now unclear. In this section, we will define these objects in relation to some path properties of the process.
The Large Jumps of a Lévy process as a Compound Poisson Process. The idea is to see, within the jumps of a Lévy process, the structure of a Poisson point process:
[Figure: jump times and jump sizes of a Lévy process, viewed as a point cloud (a Poisson point process) in the time-space plane.]
The difficulty in the analysis of the jumps of a Lévy process comes from the fact that even though the jumps are countable, it is possible to have

\sum_{0 < s \le t} |\Delta X_s| = +\infty.

In other words, it is possible for the jumps to accumulate. This difficulty will be dealt with thanks to the fact that Lévy processes always have the property that

\sum_{0 < s \le t} |\Delta X_s|^2 < +\infty.
To exploit this, we need to define the jump measure associated with our Lévy process. Fix A a Borel set such that 0 ∉ \bar{A}. We define the random variables:
T_1^A = \inf\{t > 0 : \Delta X_t \in A\},
\vdots
T_{n+1}^A = \inf\{t > T_n^A : \Delta X_t \in A\},
\vdots
Since X has càdlàg paths and 0 ∉ \bar{A}, we see that

\{T_n^A \le t\} \in \mathcal{F}_{t+} = \mathcal{F}_t.

Thus, these random variables are stopping times. Besides, the assumption 0 ∉ \bar{A} yields that

\lim_{n\to+\infty} T_n^A = +\infty \quad\text{almost surely}.
We introduce the quantity N_t(A):

N_t(A) = \#\{0 \le s \le t : \Delta X_s \in A\} = \sum_{0 < s \le t} 1_{\{\Delta X_s \in A\}} = \sum_{n \ge 1} 1_{\{T_n^A \le t\}}.

This quantity is a counting process without explosion (since T_n^A → +∞) that counts the number of times the jumps land in A.
Theorem 2.3.2. Let A be a Borel set such that 0 ∉ \bar{A}. Then, (N_t(A))_{t≥0} is a Poisson process.

Proof. We can see that for all times 0 ≤ s < t < ∞,

N_t(A) - N_s(A) \in \sigma\{X_u - X_v,\ s \le v \le u \le t\},

and thanks to the fact that X has independent increments, N_t(A) − N_s(A) is independent of F_s; that is, N_t(A) has independent increments. Finally, we observe that N_t(A) − N_s(A) counts the number of jumps that X_{s+u} − X_s has in A, for 0 ≤ u ≤ t − s. Using the fact that X has stationary increments, we then conclude that N_t(A) − N_s(A) has the same distribution as N_{t−s}(A). To summarise:
• N_t(A) is a counting process,
• N_t(A) has independent increments,
• N_t(A) has stationary increments,
and we conclude that N_t(A) must be a Poisson process.
We also define ν(A) to be the quantity

\nu(A) = E[N_1(A)],

that is, ν(A) is the intensity (or parameter) of the Poisson process N_t(A). Consequently, we deduce that E[N_t(A)] = tν(A).
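The identity ν(A) = E[N_1(A)] can be illustrated on a compound Poisson process, where the Lévy measure is known to be λµ_Y. A sketch with our own choices (λ = 4, jumps Y_i ∼ N(0,1), and A = [1, ∞)):

```python
import math
import random

random.seed(3)

def count_jumps_in_A(lam, rng=random):
    """N_1(A) for a compound Poisson process: rate lam, jumps Y_i ~ N(0,1), A = [1, inf)."""
    count, t = 0, rng.expovariate(lam)
    while t <= 1.0:
        if rng.gauss(0.0, 1.0) >= 1.0:  # did this jump land in A?
            count += 1
        t += rng.expovariate(lam)
    return count

lam, n_paths = 4.0, 50_000
est = sum(count_jumps_in_A(lam) for _ in range(n_paths)) / n_paths
p = 0.5 * math.erfc(1.0 / math.sqrt(2.0))  # P(Y >= 1) for Y ~ N(0,1)
# nu(A) = lam * mu_Y(A): estimated mean number of jumps landing in A per unit time
assert abs(est - lam * p) < 0.05
```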
Remark 2.3.3. A rather useful property of N_t(A) is that if A and B are disjoint, then N_t(A) and N_t(B) are independent. This property comes from the fact that N_t(A) and N_t(B) rely on different increments of (X_t)_{t≥0} as soon as A and B are disjoint.

Thus, the large jumps of a Lévy process give rise to a Poisson process. The fact that only large jumps are considered comes from the assumption 0 ∉ \bar{A}. Note that the Poisson process N_t(A) explicitly depends on the prescribed Borel set A. We can thus ask: what is the dependency of this process on the Borel set?
Theorem 2.3.4. The set function A ↦ N_t(A) defines a σ-finite measure on R^d \ {0}. The set function A ↦ ν(A) = E[N_1(A)] is also a σ-finite measure on R^d \ {0}.
Proof. By construction, Nt(A) is a counting measure. Besides, it is clear from the linearity properties of the expectation that ν is also a measure.
Definition 2.3.5. The measure ν is called the Lévy measure of the process (Xt)t≥0. This measure is the third element in the characteristic triplet of (Xt)t≥0.
The fact that ν(A) < +∞ when 0 ∈/ A¯ is actually a consequence of the fact that Nt(A) has jumps of size 1. Indeed, the moments of the Lévy measure are closely related to the moments of the Lévy process. More precisely, we have the following result:
Theorem 2.3.6. Let (X_t)_{t≥0} be a Lévy process with bounded jumps:

\sup_{t \ge 0} |\Delta X_t| \le C,

where C is a fixed non-random constant. Then, for all m ≥ 1, E[|X_t|^m] < +∞; that is, X_t has moments of every order.
Since N_t(A) has jumps of size 1, thus bounded jumps, it has moments of every order. The Lévy measure is defined through the first moment of N_t(A); thus it is finite when 0 ∉ \bar{A}. Note that this is the first step towards satisfying the definition of a Lévy measure,

\int_{\mathbb{R}^d} \min(1, |x|^2)\,\nu(dx) < +\infty,

since we just obtained that \int_{|x|>1}\nu(dx) < +\infty. We will deal with the part \int_{|x|\le 1}|x|^2\nu(dx) later. Note that there is actually a stronger result, linking the moments of the Lévy measure to the moments of the process itself; see Theorem 25.3, p. 159, in Sato [11].
Proof of Theorem 2.3.6. This proof follows the proof of Theorem 2.4.7, p. 118, in Applebaum [1]. We define the sequence of stopping times

T_1 = \inf\{t > 0 : |X_t| > C\},
\vdots
T_{n+1} = \inf\{t > T_n : |X_t - X_{T_n}| > C\}.
This forms an increasing sequence of stopping times. First, assume T_1 < +∞ almost surely. Since |ΔX_s| ≤ C at all times, we have by induction that

\sup_{s > 0} |X_{s \wedge T_n}| \le 2Cn.

By the strong Markov property, T_n − T_{n−1} is independent of F_{T_{n−1}} and has the same distribution as T_1. Thus, because T_1 < +∞ almost surely,

E(e^{-T_n}) = E(e^{-T_1})^n = \alpha^n,

for a certain α ∈ (0, 1). We thus get, by Markov's inequality:

P(|X_t| > 2Cn) \le P(T_n \le t) \le e^t\, E(e^{-T_n}) \le e^t \alpha^n.
From this last inequality, we deduce that E[|X_t|^m] is finite. Write

E[|X_t|^m] = E[|X_t|^m 1_{\{|X_t| \le 2Cn\}}] + E[|X_t|^m 1_{\{|X_t| > 2Cn\}}].

For the first part, there are no problems:

E[|X_t|^m 1_{\{|X_t| \le 2Cn\}}] \le (2Cn)^m.

For the second part, we write:

E[|X_t|^m 1_{\{|X_t| > 2Cn\}}] = \sum_{r=n}^{+\infty} E[|X_t|^m 1_{\{2rC < |X_t| \le 2(r+1)C\}}]
\le \sum_{r=n}^{+\infty} (2(r+1)C)^m\, P(2rC < |X_t| \le 2(r+1)C)
\le \sum_{r=n}^{+\infty} (2(r+1)C)^m\, P(2rC < |X_t|)
\le \sum_{r=n}^{+\infty} (2(r+1)C)^m\, e^t \alpha^r < +\infty.
Thus, when T_1 is finite almost surely, X_t has moments of every order. Now, if P(T_1 = +∞) > 0, then we can write:

E[|X_t|^m] = E[|X_t|^m 1_{\{T_1 < +\infty\}}] + E[|X_t|^m 1_{\{T_1 = +\infty\}}].

For E[|X_t|^m 1_{\{T_1 < +\infty\}}], we can argue as before, and we are left with

E[|X_t|^m 1_{\{T_1 = +\infty\}}] \le C^m\, P(T_1 = +\infty) \le C^m.

Thus, the proof is complete.
Remark 2.3.7. So far, we have concluded that N_t(·) is a measure. Note that this is actually a random measure, that is, a random variable taking values in the space of measures. This begs the question of defining a probability space on the set of all measures. As it is not the main focus of these notes, we will not dwell too long on this construction. The interested reader can type "random measure" in a search engine to get many references on the matter. The case of Poisson random measures is of particular interest for us, and the reader will see that in this case, one can completely characterise the random measure through a Laplace-like transform.
Now that we have established that N_t(·) is a measure, what kind of result can we obtain when integrating with respect to it? The answer is simple: we know what the measure does on indicators of Borel sets, and we can extend this with standard results from measure theory to get the following.
Theorem 2.3.8. Let A be a Borel set such that 0 ∉ \bar{A}, and let f be measurable and finite on A. We have

\int_A f(x)\, N_t(dx) = \sum_{0 < s \le t} f(\Delta X_s)\, 1_{\{\Delta X_s \in A\}}.