RENEWAL THEORY AND ITS APPLICATIONS

Serguei Pergamenshchikov, Evgeny Pchelintsev

To cite this version:

Serguei Pergamenshchikov, Evgeny Pchelintsev. RENEWAL THEORY AND ITS APPLICATIONS. Master. Russia. 2020. hal-02485643.

HAL Id: hal-02485643. https://hal.archives-ouvertes.fr/hal-02485643. Submitted on 20 Feb 2020.

MINISTRY OF SCIENCE AND HIGHER EDUCATION OF THE RUSSIAN FEDERATION
NATIONAL RESEARCH TOMSK STATE UNIVERSITY
FACULTY OF MECHANICS AND MATHEMATICS

RENEWAL THEORY AND ITS APPLICATIONS

Lecture notes for the course "Stochastic modelling" taken by most Mathematics and Economics students (directions of training 01.03.01 Mathematics and 38.04.01 Economics)

Authors: Serguei M. Pergamenshchikov and Evgeny A. Pchelintsev

Tomsk 2020

APPROVED by the Department of Mathematical Analysis and Theory of Functions. Head of the Department, Professor S.P. Gul'ko

REVIEWED and APPROVED by the Methodical Commission of the Faculty of Mechanics and Mathematics

Record No ___ from " " February 2020. Chairman of the Commission, Associate Professor E.A. Tarasov

The goal of this course is to study the main tools of renewal theory and their applications to some problems of actuarial analysis for insurance companies in the framework of the Cramer-Lundberg models. We consider such important problems of renewal theory as limit theorems for renewal processes and the ruin problems for insurance companies with investments in stochastic financial markets. The notes are intended for students of the Mathematics and Economics Faculties.

This work was supported by the Ministry of Education and Science of the Russian Federation (project No 1.472.2016/1.4).

Contents

1 Poisson processes 5
  1.1 Definition and main properties ...... 5
  1.2 Principal features ...... 7
  1.3 The last jump of the Poisson process ...... 10
  1.4 Exercises I ...... 13

2 Asymptotic theory 17
  2.1 Renewal equation ...... 17
  2.2 Smith theorem ...... 21
  2.3 Exercises II ...... 27

3 Cramer-Lundberg models 29
  3.1 Main definitions and results ...... 29
  3.2 Exercises III ...... 37
  3.3 Lundberg inequality ...... 38
  3.4 Exercises IV ...... 44
  3.5 Fundamental equation for the non-ruin probability ...... 44
  3.6 Exercises V ...... 48
  3.7 Cramer bound ...... 50
  3.8 Exercises VI ...... 53

  3.9 Large claims ...... 54
  3.10 Exercises VII ...... 64
  3.11 Ruin problem with investment ...... 66
  3.12 Exercises VIII ...... 77

A Appendix 78
  A.1 Strong law of large numbers ...... 78
  A.2 Kolmogorov zero-one law ...... 78
  A.3 Three series theorem ...... 78
  A.4 Central limit theorem ...... 79
  A.5 Iterated logarithm law ...... 80

References 81

1 Poisson processes

1.1 Definition and main properties

In this section we study the principal properties of the Poisson process. Let $(\tau_j)_{j\ge 1}$ be i.i.d. exponential random variables with some parameter $\lambda > 0$. We set $T_n = \sum_{j=1}^{n} \tau_j$ for $n \ge 1$ and $T_0 = 0$.

Definition 1.1. The random function $N : \mathbb{R}_+ \to \mathbb{N}$ defined as
$$N_t = \sum_{n=1}^{\infty} \mathbf{1}_{\{T_n \le t\}} \qquad (1.1)$$
is called the homogeneous Poisson process of the intensity $\lambda > 0$.

Proposition 1.1. If $(\tau_j)_{j\ge 1}$ are i.i.d. exponential random variables with a parameter $\lambda > 0$, then the vector $(T_1, \ldots, T_n)$ has a distribution density with respect to the Lebesgue measure in $\mathbb{R}^n$ defined as
$$f_n(x_1, \ldots, x_n) = \lambda^n e^{-\lambda x_n} \mathbf{1}_{\{0 < x_1 < \cdots < x_n\}} . \qquad (1.2)$$

Proposition 1.2. If $(N_t)_{t\ge 0}$ is a homogeneous Poisson process with an intensity $\lambda > 0$, then for any $t > 0$ the random variable $N_t$ has the Poisson distribution with the parameter $\lambda t$, i.e. for any integer $n \ge 0$
$$P(N_t = n) = e^{-\lambda t} \frac{(\lambda t)^n}{n!} . \qquad (1.3)$$
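Definition 1.1 and the distribution (1.3) are easy to check by simulation. The following sketch (parameter values are illustrative, not from the text) builds the process from exponential inter-arrival times and compares the empirical mean of $N_t$ and the empirical frequency of $\{N_t = 0\}$ with $\lambda t$ and $e^{-\lambda t}$.

```python
import random, math

def poisson_count(lam, t, rng):
    """Number of jumps on [0, t] of the process from Definition 1.1,
    built from i.i.d. Exp(lam) inter-arrival times tau_j."""
    s, n = 0.0, 0
    while True:
        s += rng.expovariate(lam)   # T_n = tau_1 + ... + tau_n
        if s > t:
            return n
        n += 1

rng = random.Random(1)
lam, t, trials = 2.0, 3.0, 20000
counts = [poisson_count(lam, t, rng) for _ in range(trials)]
mean = sum(counts) / trials       # should be close to lam * t
p0 = counts.count(0) / trials     # should be close to exp(-lam * t)
print(mean, p0)
```

With these parameters the empirical mean should be near $\lambda t = 6$ and the frequency of zero counts near $e^{-6} \approx 0.0025$.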

Now we study the main properties of the Poisson processes.

Proposition 1.3. Let $(N_t)_{t\ge 0}$ be a homogeneous Poisson process of an intensity $\lambda > 0$. Then

1. almost surely the function $(N_t)_{t\ge 0}$ is increasing, with integer values and right continuous;

2. conditionally with respect to $\{N_t = n\}$, the vector $(T_1, \ldots, T_n)$ has the same distribution as the order statistics of $n$ uniformly distributed random variables on the interval $[0, t]$;

3. the Poisson process $(N_t)_{t\ge 0}$ has homogeneous increments, i.e. for all $0 < s < t$ and any integer $n \ge 0$
$$P(N_t - N_s = n) = P(N_{t-s} = n) ;$$

4. the Poisson process $(N_t)_{t\ge 0}$ has independent increments, i.e. for any time moments $0 = t_0 < t_1 < \ldots < t_m$ and any integer numbers $n_1, \ldots, n_m$
$$P\big( N_{t_1} = n_1 , N_{t_2} - N_{t_1} = n_2 , \ldots , N_{t_m} - N_{t_{m-1}} = n_m \big) = \prod_{j=1}^{m} P\big( N_{t_j} - N_{t_{j-1}} = n_j \big) ;$$

5. the Poisson process $(N_t)_{t\ge 0}$ is a process of rare events, i.e. for any $t \ge 0$ and $\Delta > 0$
$$P\big( N_{t+\Delta} - N_t = 1 \big) = \lambda \Delta + o(\Delta) , \qquad P\big( N_{t+\Delta} - N_t > 1 \big) = o(\Delta) , \qquad (1.4)$$
as $\Delta \to 0$.

Remark 1.1. As we will see later, all these properties are very useful in actuarial mathematics for constructing the principal insurance models. Indeed, the Poisson process is used to model the number of claims on the time interval $[0, t]$. Especially, the independent increments and rare events properties are very natural for the insurance models.

1.2 Principal features

Proposition 1.4. Let $(N_t)_{t\ge 0}$ be a random process that satisfies the following conditions:

• for almost every $\omega$, the trajectory $(N_t(\omega))_{t\ge 0}$ is zero at $0$, increasing, right continuous and with integer values;

• the process $(N_t)_{t\ge 0}$ has independent and homogeneous increments;

• $(N_t)_{t\ge 0}$ is a process of rare events, i.e. there exists $\lambda > 0$ for which the asymptotic properties (1.4) hold.

Then $(N_t)_{t\ge 0}$ is the Poisson process of the intensity $\lambda > 0$.

Proof. Firstly, we show that

$$P(N_t = 0) = e^{-\lambda t} . \qquad (1.5)$$

We denote $f(t) = P(N_t = 0)$. Indeed, due to the independence of $N_t$ and $N_{t+s} - N_t$ we obtain
$$f(t+s) = P(N_{t+s} = 0) = P(N_{t+s} = 0 , N_t = 0) = P(N_{t+s} - N_t = 0 , N_t = 0) = f(t) f(s) .$$

Using here the rare events property, we get (1.5). Now we find the distribution of $N_t$ for an arbitrary fixed $t > 0$. To this end, we set $G(t) = E z^{N_t}$ for $0 < z < 1$. Taking into account that the increments are independent and homogeneous, we can represent the function $G(t+s)$ as
$$G(t+s) = E z^{N_{t+s} - N_t} z^{N_t} = G(t) G(s) .$$

Moreover, for all $t > 0$
$$G(t) \ge E z^{N_t} \mathbf{1}_{\{N_t = 0\}} = e^{-\lambda t} > 0 .$$

Therefore, $G(t) = e^{t g(z)}$ and
$$g(z) = \lim_{t \to 0} \frac{G(t) - 1}{t} .$$

Then the rare events property directly implies that for $t \to 0$
$$G(t) = P(N_t = 0) + z P(N_t = 1) + E z^{N_t} \mathbf{1}_{\{N_t \ge 2\}} = e^{-\lambda t} + z \lambda t + o(t) .$$

Therefore, $g(z) = \lambda (z - 1)$ and
$$G(t) = e^{-\lambda t} e^{\lambda z t} = \sum_{n=0}^{\infty} z^n e^{-\lambda t} \frac{(\lambda t)^n}{n!} .$$

This directly implies that for all $t > 0$
$$P(N_t = n) = e^{-\lambda t} \frac{(\lambda t)^n}{n!} .$$

Next, note that we can represent the process $(N_t)_{t\ge 0}$ as
$$N_t = \sum_{n=1}^{\infty} \mathbf{1}_{\{T_n \le t\}} ,$$
where $T_n = \inf\{ t \ge 0 : N_t \ge n \}$. From here it follows that
$$P(T_n > t) = P(N_t \le n-1) = \sum_{j=0}^{n-1} P(N_t = j) = \sum_{j=0}^{n-1} e^{-\lambda t} \frac{(\lambda t)^j}{j!} = \frac{\lambda^n}{(n-1)!} \int_t^{\infty} v^{n-1} e^{-\lambda v} \, dv .$$

This implies that the distribution of $T_n$ coincides with the distribution of a sum of $n$ i.i.d. exponential random variables of the parameter $\lambda > 0$. Thus, in view of the definition (1.1), the random function $(N_t)_{t\ge 0}$ is a homogeneous Poisson process.

1.3 The last jump of the Poisson process

Let's study the properties of the delay between the present time moment $t > 0$ and the last jump moment $T_{N_t}$. Putting $T_0 = 0$, we get
$$T_{N_t} = \sum_{k=0}^{\infty} T_k \mathbf{1}_{\{N_t = k\}} .$$
Therefore, $T_{N_t}$ is a random variable. We will study the properties of the two random variables $V_t = T_{N_t + 1} - t$ and $V_t^* = t - T_{N_t}$.

Proposition 1.5. The random variable $V_t$ is independent of the $\sigma$-field generated by the variables $\{ N_s , s \le t \}$ and has the exponential distribution with the parameter $\lambda > 0$.

Proof. We note that for $u > 0$
$$\{ V_t \le u \} = \{ T_{N_t + 1} - t \le u \} = \cup_{n=0}^{\infty} \{ T_{n+1} - t \le u , N_t = n \} = \cup_{n=0}^{\infty} \{ N_{t+u} \ge n+1 , N_t = n \} = \cup_{n=0}^{\infty} \{ N_{t+u} - N_t \ge 1 , N_t = n \} = \{ N_{t+u} - N_t \ge 1 \} .$$

This immediately implies Proposition 1.5.

Proposition 1.6. For any Borel set $A \subseteq \mathbb{R}_+$ and for any $t > 0$
$$P(V_t^* \in A) = e^{-\lambda t} \mathbf{1}_{\{t \in A\}} + \lambda \int_{A \cap [0,t]} e^{-\lambda v} \, dv . \qquad (1.6)$$

Proof. It is clear that for this proposition it suffices to show (1.6) for the sets of the form $A = [0, u[$ with $u > 0$. We note that $V_t^* \le t$ a.s., i.e. for $u \ge t$ the equation (1.6) is true. For $u < t$ we get that
$$P(V_t^* \in A) = P(V_t^* < u) = \sum_{n=0}^{\infty} P(t - T_n < u , N_t = n) = \sum_{n=0}^{\infty} P(N_{t-u} < n , N_t = n) = \sum_{n=0}^{\infty} P(N_t - N_{t-u} > 0 , N_t = n) = P(N_t - N_{t-u} > 0) = P(N_u > 0) .$$

Therefore,
$$P(V_t^* \in A) = 1 - e^{-\lambda u} = \lambda \int_{A \cap [0,t]} e^{-\lambda v} \, dv .$$

To finish this proof we note that
$$P(V_t^* = t) = P(N_t = 0) = e^{-\lambda t} .$$

Hence Proposition 1.6.

Propositions 1.5 and 1.6 imply that
$$E\big( T_{N_t + 1} - T_{N_t} \big) = \frac{1}{\lambda} \big( 2 - e^{-\lambda t} \big) . \qquad (1.7)$$

Remark 1.2. The equation (1.7) is called the "bus paradox". If we associate the jump moments of a Poisson process with the time moments of passages of a bus through a station, then according to (1.7), for sufficiently large $t$ the bus waiting interval $[T_{N_t} , T_{N_t + 1}]$ is twice as long on average as an interval $[T_n , T_{n+1}]$, since $E(T_{n+1} - T_n) = 1/\lambda$.
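Remark 1.2 can be illustrated numerically. The sketch below (parameters chosen purely for illustration) records the length of the inter-arrival interval that covers a fixed time $t$ and compares its average with $2/\lambda$, twice the mean of an ordinary interval.

```python
import random

rng = random.Random(2)
lam, t, trials = 1.0, 10.0, 20000
covering = []
for _ in range(trials):
    s = 0.0
    while True:
        tau = rng.expovariate(lam)   # next inter-arrival time
        if s + tau > t:              # the interval [s, s + tau] covers t
            covering.append(tau)
            break
        s += tau
avg = sum(covering) / trials
print(avg)   # close to 2 / lam, while an ordinary interval has mean 1 / lam
```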

1.4 Exercises I

1. Let $(N_t)_{t\ge 0}$ be a Poisson process of an intensity $\lambda > 0$ and $(T_n)_{n\ge 1}$ be its jump moments.

(a) Calculate $E N_t$ and $Var(N_t)$ for $t > 0$.

(b) Calculate the distribution of $T_n$ for $n \ge 1$.

(c) Show that for all $A \in \mathcal{B}(\mathbb{R}^n)$
$$P\big( (T_1, \cdots, T_n) \in A \mid N_t = n \big) = \frac{n!}{t^n} \int_A \mathbf{1}_{\{0 < s_1 < \cdots < s_n < t\}} \, ds_1 \cdots ds_n . \qquad (1.8)$$

(d) Let $X_1, \ldots, X_n$ be i.i.d. random variables uniformly distributed on the interval $[0, t]$. Let $Z_1, \ldots, Z_n$ be the order statistics of $X_1, \ldots, X_n$. Show that for all $A \in \mathcal{B}(\mathbb{R}^n)$
$$P\big( (Z_1, \cdots, Z_n) \in A \big) = \frac{n!}{t^n} \int_A \mathbf{1}_{\{0 < s_1 < \cdots < s_n < t\}} \, ds_1 \cdots ds_n . \qquad (1.9)$$
Deduce that conditionally with respect to $\{N_t = n\}$ the random variables $(T_1, \ldots, T_n)$ have the same distribution as the order statistics of $n$ uniform independent random variables on the interval $[0, t]$.

(e) Show that for $0 < s < t$
$$P(N_s = k \mid N_t) = \binom{N_t}{k} \Big( \frac{s}{t} \Big)^k \Big( 1 - \frac{s}{t} \Big)^{N_t - k} \mathbf{1}_{\{k \le N_t\}} .$$

(f) Show that $(N_t)_{t\ge 0}$ is a process with homogeneous increments in the sense that for all $0 < s < t$ the increment $N_t - N_s$ has the same distribution as $N_{t-s}$.

(g) Show that $(N_t)_{t\ge 0}$ has independent increments, i.e. for any increasing time moments $0 = t_0 < t_1 < \cdots < t_k$ the random variables
$$N_{t_1} = N_{t_1} - N_{t_0} , \ N_{t_2} - N_{t_1} , \ \ldots , \ N_{t_k} - N_{t_{k-1}}$$
are independent.

(h) Show that $(N_t)_{t\ge 0}$ is a rare events process, i.e. for any $t \ge 0$ and $\Delta > 0$
$$P(N_{t+\Delta} - N_t = 1) = \lambda \Delta + o(\Delta) , \qquad P(N_{t+\Delta} - N_t > 1) = o(\Delta)$$
as $\Delta \to 0$.

2. Let $(N_t^1)_{t\ge 0}$ and $(N_t^2)_{t\ge 0}$ be two independent Poisson processes of the intensities $\lambda$ and $\mu$. Denote by $(T_n)_{n\ge 1}$ the renewal moments of $(N_t^1)_{t\ge 0}$.

(a) Calculate the distribution of the random variable $N^2_{T_{n+1}} - N^2_{T_n}$.

(b) Extend the result of (a) to the random variables $N^2_{T_{n+k}} - N^2_{T_n}$, $k > 1$.

3. Let $\theta$ be a positive a.s. random variable with the finite variance $\sigma_\theta^2 > 0$, independent of $(N_t)_{t\ge 0}$. It is said that the process
$$\tilde N_t = N_{\theta t} , \quad t \ge 0 ,$$
is a mixed Poisson process with the mixing variable $\theta$.

(a) Calculate $P(\tilde N_t = n)$. Deduce that $\tilde N_t$ does not in general have a Poisson distribution.

(b) Show that $Var(\tilde N_t) > E \tilde N_t$ for all $t > 0$, while we have the equality for the Poisson processes.

(c) Calculate the distribution of $\tilde N_t$ when $\lambda = 1$ and $\theta$ has the Gamma distribution.

2 Asymptotic theory

2.1 Renewal equation

Let $(\eta_j)_{j\ge 1}$ be i.i.d. positive random variables with a distribution function $G$. Now we consider the counting process for this sequence defined as
$$N_t = \sum_{j=1}^{\infty} \mathbf{1}_{\{S_j \le t\}} , \qquad (2.1)$$
where $S_0 = 0$ and $S_j = \sum_{l=1}^{j} \eta_l$ for $j \ge 1$. Note that if the distribution $G$ is exponential, then $(N_t)_{t\ge 0}$ is the Poisson process. Using the law of large numbers (Theorem A.1), one can establish that
$$\lim_{t\to\infty} \frac{N_t}{t} = \frac{1}{E \eta_1} \quad a.s. \qquad (2.2)$$
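The limit (2.2) is easy to verify numerically. Here is a small check assuming, purely for illustration, uniform inter-arrival times on $[0, 2]$ (so that $E\eta_1 = 1$).

```python
import random

rng = random.Random(3)
t = 50000.0
s, n = 0.0, 0
while True:
    s += rng.uniform(0.0, 2.0)   # eta_j ~ U[0, 2], so E eta_1 = 1
    if s > t:
        break
    n += 1
rate = n / t
print(rate)   # close to 1 / E eta_1 = 1
```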

Definition 2.1. We say that a random variable $\xi$ is arithmetic if there exists $d > 0$ such that
$$P(\xi \in \Gamma_d) = 1 ,$$
where $\Gamma_d = \{ kd : -\infty < k < \infty \}$ is the grid of size $d > 0$. A random variable $\xi$ is called non-arithmetic if $P(\xi \in \Gamma_d) < 1$ for any $d > 0$.

In this section we need the Blackwell Renewal Theorem (see, for example, [3]):

Theorem 2.1. Assume that $\eta_1$ is non-arithmetic and $0 < E \eta_1 < \infty$. Then the expectation of the counting function has the following asymptotic properties:
$$\lim_{t\to\infty} \frac{E N_t}{t} = \frac{1}{E \eta_1}$$
and for any $h > 0$
$$\lim_{t\to\infty} E\big( N_{t+h} - N_t \big) = \frac{h}{E \eta_1} .$$

We will use this theorem to study the renewal function
$$Q(t) = E \sum_{j=0}^{\infty} V(t - S_j) \mathbf{1}_{\{S_j \le t\}} , \qquad (2.3)$$
where $V : \mathbb{R}_+ \to \mathbb{R}$ is a function bounded on all finite intervals. One can check directly that this function satisfies the following renewal equation
$$Q(u) = V(u) + \int_0^u Q(u - z) \, dG(z) . \qquad (2.4)$$

Now we study this equation.
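A solution of (2.4) can also be approximated numerically by discretizing the integral. The sketch below assumes, purely for illustration, an exponential distribution $G$ with density $e^{-z}$ and the choice $V(u) = e^{-u}$; for this choice $Q(u) = e^{-u} + \int_0^u e^{-(u-s)}\,ds = 1$ for all $u$, which gives a convenient check of the scheme.

```python
import math

h, n = 0.005, 1000                  # grid step and number of steps (u up to 5)
V = lambda u: math.exp(-u)          # illustrative choice of V
g = lambda z: math.exp(-z)          # density of the assumed exponential G
Q = [V(0.0)]                        # Q(0) = V(0), the integral vanishes at u = 0
for k in range(1, n + 1):
    # Riemann sum for the integral in (2.4) at u = k * h,
    # using the already computed values Q[0], ..., Q[k-1]
    integral = sum(Q[k - j] * g(j * h) * h for j in range(1, k + 1))
    Q.append(V(k * h) + integral)
print(Q[-1])   # for this choice the exact solution is Q(u) = 1
```

The discretization error grows with $u$, so the computed value drifts slightly below $1$; a smaller step $h$ reduces the drift.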

Theorem 2.2. Assume that the distribution $G$ is non-arithmetic and the function $V$ is bounded on all finite intervals. Then the renewal function $Q$ is the unique solution of the renewal equation (2.4) among the functions which are bounded on all finite intervals.

Proof. Note that the Blackwell theorem implies that $E N_t < \infty$ for any $t \ge 0$. Thus, if $V$ is bounded on each finite interval, then the renewal function is bounded on each finite interval $[0, t]$:
$$\sup_{0\le u\le t} |Q(u)| \le \sup_{0\le u\le t} |V(u)| \, \big( E N_t + 1 \big) < \infty .$$
Moreover, let $\mathcal{B}(\mathbb{R}_+)$ be the linear space of functions $\mathbb{R}_+ \to \mathbb{R}$ bounded on each finite interval. We introduce the following linear operator $T : \mathcal{B}(\mathbb{R}_+) \to \mathcal{B}(\mathbb{R}_+)$:
$$T(f)(u) = \int_0^u f(u - z) \, dG(z) .$$

In this case we can rewrite the renewal equation as
$$f = V + T(f) .$$
This implies that for all $n \ge 1$
$$f = \sum_{j=0}^{n} T^j(V) + T^{n+1}(f) . \qquad (2.5)$$

To study this equation one needs to know how to calculate the $n$-th power of $T$. Let's show by induction that for each $n \ge 1$
$$T^n(f)(u) = E f(u - S_n) \mathbf{1}_{\{S_n \le u\}} . \qquad (2.6)$$

For $n = 1$ this is the definition. Assume now that this equality holds for some fixed $n \ge 1$. We set
$$\tilde f(u) = T^n(f)(u) = E f(u - S_n) \mathbf{1}_{\{S_n \le u\}} = \int_0^{+\infty} f(u - y) \mathbf{1}_{\{y \le u\}} \, dF_{S_n}(y) ,$$
where $F_{S_n}(y) = P(S_n \le y)$. Using this function, we can represent the $(n+1)$-th power as
$$T^{n+1}(f)(u) = T(\tilde f)(u) = \int_0^u E f(u - z - S_n) \mathbf{1}_{\{S_n \le u - z\}} \, dG(z) = E f(u - \eta_{n+1} - S_n) \mathbf{1}_{\{S_n \le u - \eta_{n+1}\}} = E f(u - S_{n+1}) \mathbf{1}_{\{S_{n+1} \le u\}} .$$

It means that the equality (2.6) is true for any $n \ge 1$. Using it in (2.5), we get that
$$f(u) = \sum_{j=0}^{n} E V(u - S_j) \mathbf{1}_{\{S_j \le u\}} + E f(u - S_{n+1}) \mathbf{1}_{\{S_{n+1} \le u\}} . \qquad (2.7)$$
According to our condition, we solve the equation (2.4) among the functions which are bounded on each finite interval. So, the last term in (2.7) is bounded by
$$\big| E f(u - S_{n+1}) \mathbf{1}_{\{S_{n+1} \le u\}} \big| \le \sup_{0\le s\le u} |f(s)| \, P(S_{n+1} \le u)$$
and, by the law of large numbers (Theorem A.1), for any fixed $u > 0$ this term tends to zero as $n \to \infty$. So, taking the limit in (2.7) as $n \to \infty$, we obtain that any solution of the equation (2.4) which is bounded on every finite interval is equal to the renewal function (2.3).

2.2 Smith theorem

Now we study the asymptotic properties of the function (2.3). To this end one needs the following definition.

Definition 2.2. We say that a function $V : \mathbb{R}_+ \to \mathbb{R}$ is directly integrable by Riemann on $[0, \infty[$ if
$$\sum_{k=1}^{\infty} \sup_{k-1 \le x \le k} |V(x)| < \infty . \qquad (2.8)$$

Using this definition, we will study the asymptotic properties of the function (2.3) as $t \to \infty$.

Theorem 2.3. Let $V : \mathbb{R}_+ \to \mathbb{R}$ be a right or left continuous function, directly integrable by Riemann, which has a finite number of discontinuity points on each finite interval. Moreover, we suppose that $\eta_1$ is non-arithmetic and $0 < E \eta_1 < \infty$. Then the function (2.3) has the following limit
$$\lim_{u\to\infty} Q(u) = \frac{1}{E \eta_1} \int_0^{\infty} V(z) \, dz . \qquad (2.9)$$

Proof. First, we show this theorem for linear combinations of indicator functions, i.e. we assume that
$$V(x) = \alpha_1 \mathbf{1}_{[t_0, t_1]}(x) + \sum_{k=2}^{m} \alpha_k \mathbf{1}_{(t_{k-1}, t_k]}(x) , \qquad (2.10)$$
where $0 = t_0 < t_1 < \ldots < t_m < \infty$. It is easy to see that for this function and $u \ge t_m$
$$Q(u) = \alpha_1 \, E \sum_{j=0}^{\infty} \mathbf{1}_{\{u - t_1 \le S_j \le u\}} + \sum_{k=2}^{m} \alpha_k \, E \sum_{j=0}^{\infty} \mathbf{1}_{\{u - t_k \le S_j < u - t_{k-1}\}} ,$$
and each expectation on the right can be expressed through the increments of $(N_t)_{t\ge 0}$ and the jump counts
$$\Delta N_t = \sum_{j=1}^{\infty} \mathbf{1}_{\{S_j = t\}} .$$
Note that for any $h > 0$

$$\Delta N_t \le N_{t+h} - N_{t-h}$$
and, by the Blackwell theorem,
$$\limsup_{t\to\infty} E \, \Delta N_t \le \frac{2h}{E \eta_1} .$$

Therefore,
$$\lim_{t\to\infty} E \, \Delta N_t = 0$$
and
$$\lim_{u\to\infty} Q(u) = \frac{1}{E \eta_1} \sum_{k=1}^{m} \alpha_k (t_k - t_{k-1}) = \frac{1}{E \eta_1} \int_0^{\infty} V(z) \, dz .$$

Let now $V$ be a function that satisfies the conditions of this theorem, i.e. it is directly integrable by Riemann and has a finite number of jumps on all finite intervals. In this case, for each $L > 0$ we can find a sequence $(V_m)_{m\ge 1}$ of functions of the form (2.10) such that
$$\lim_{m\to\infty} \sup_{0\le x\le L} |V(x) - V_m(x)| = 0 .$$

So, we can represent the function $Q$ as
$$Q(u) = I_1(u) + I_2(u) + I_3(u) , \qquad (2.11)$$
where
$$I_1(u) = E \sum_{j=0}^{\infty} V_m(u - S_j) \mathbf{1}_{\{u - L \le S_j \le u\}} , \qquad I_2(u) = E \sum_{j=0}^{\infty} \big( V(u - S_j) - V_m(u - S_j) \big) \mathbf{1}_{\{u - L \le S_j \le u\}}$$
and
$$I_3(u) = E \sum_{j=0}^{\infty} V(u - S_j) \mathbf{1}_{\{S_j \le u - L\}} .$$
Taking into account that $V_m(z) = 0$ for $z > L$, we find
$$I_1(u) = E \sum_{j=0}^{\infty} V_m(u - S_j) \mathbf{1}_{\{S_j \le u\}}$$
and, therefore,
$$\lim_{u\to\infty} I_1(u) = \frac{1}{E \eta_1} \int_0^{\infty} V_m(z) \, dz = \frac{1}{E \eta_1} \int_0^{L} V_m(z) \, dz . \qquad (2.12)$$

Moreover,
$$|I_2(u)| \le \sup_{0\le z\le L} |V(z) - V_m(z)| \, \Big( E \big( N_u - N_{u-L} \big) + E \, \Delta N_{u-L} \Big) .$$

And we get that
$$\limsup_{u\to\infty} |I_2(u)| \le \sup_{0\le z\le L} |V(z) - V_m(z)| \, \frac{L}{E \eta_1} .$$

This implies that for any $L > 0$
$$\lim_{m\to\infty} \limsup_{u\to\infty} |I_2(u)| = 0 . \qquad (2.13)$$

Now we consider the last term in (2.11). Setting
$$v_k^* = \sup_{k-1 \le x \le k} |V(x)| ,$$
we can estimate it from above as
$$|I_3(u)| \le E \sum_{j=0}^{\infty} \sum_{k=L+1}^{\infty} |V(u - S_j)| \mathbf{1}_{\{u - k \le S_j \le u - k + 1\}} \le \sum_{k=L+1}^{\infty} v_k^* \, E \sum_{j=0}^{\infty} \mathbf{1}_{\{u - k \le S_j \le u - k + 1\}}$$
$$\le \sum_{k=L+1}^{\infty} v_k^* \Big( 1 + E \big( N_{(u-k)_+ + 1} - N_{(u-k)_+} \big) + E \, \Delta N_{(u-k)_+} \Big) \le \sup_{x\ge 0} \Big( 1 + E \big( N_{x+1} - N_x \big) + E \, \Delta N_x \Big) \sum_{k=L+1}^{\infty} v_k^* .$$

Thus,
$$\lim_{L\to\infty} \limsup_{u\to\infty} |I_3(u)| = 0 . \qquad (2.14)$$
From here, taking into account (2.11), we have
$$\Big| Q(u) - \frac{1}{E \eta_1} \int_0^{\infty} V(y) \, dy \Big| \le \Big| I_1(u) - \frac{1}{E \eta_1} \int_0^{L} V_m(y) \, dy \Big| + \frac{1}{E \eta_1} \int_0^{L} |V_m(y) - V(y)| \, dy + \frac{1}{E \eta_1} \int_L^{\infty} |V(y)| \, dy + |I_2(u)| + |I_3(u)| .$$

Taking in this inequality the limits
$$\limsup_{L\to\infty} \ \limsup_{m\to\infty} \ \limsup_{u\to\infty} ,$$
we get (2.9). Hence Theorem 2.3.

2.3 Exercises II

1. Let $(N_t)_{t\ge 0}$ be a counting function, that is
$$N_t = \sum_{n\ge 1} \mathbf{1}_{\{\eta_1 + \ldots + \eta_n \le t\}} ,$$
where $(\eta_j)_{j\ge 1}$ are i.i.d. random variables uniformly distributed on the interval $[0, z]$ with a fixed $z > 0$. Calculate the following limits:

(a) $\lim_{t\to\infty} \dfrac{E N_t}{1 + 2t}$ ;

(b) $\lim_{t\to\infty} E\big( N_{3t} - N_{3t+4} \big)$ ;

(c) $\lim_{t\to\infty} \dfrac{E N_{2t}}{\sqrt{1 + t^2}}$ ;

(d) $\lim_{t\to\infty} E\big( N_t - N_{t-1/3} \big)$ ;

(e) $\lim_{t\to\infty} \sin(1/t) \, E N_{10t}$ ;

(f) $\lim_{t\to\infty} \big( 1 - e^{-1/t} \big) E N_{4t}$ .

2. Are the following functions directly integrable by Riemann:
$$\frac{\sin(x)}{1 + x^2} , \qquad e^{-x} , \qquad \frac{1}{1 + x^4} \ ?$$

3. Calculate the limit
$$\lim_{t\to\infty} \Big( \frac{1}{1 + t^2} + E \sum_{j=1}^{\infty} \frac{1}{1 + (t - T_j)^2} \mathbf{1}_{\{T_j \le t\}} \Big) ,$$
where $T_j = \sum_{i=1}^{j} \xi_i^2$ and $(\xi_j)_{j\ge 1}$ are i.i.d. Gaussian random variables with the parameters $(0, 1)$.

3 Cramer-Lundberg models

3.1 Main definitions and results

In this section we consider non-life insurance models in which the claim sizes are defined by i.i.d. positive random variables $(Y_j)_{j\ge 1}$ with
$$\mu = E Y_1 < \infty . \qquad (3.1)$$
Moreover, we assume that the claims number on the time interval $[0, t]$ is a homogeneous Poisson process $(N_t)_{t\ge 0}$ of intensity $\lambda > 0$ defined in (1.1). This means that the time moments $(T_n)_{n\ge 1}$ of claims occurrence are the jumps of the Poisson process $(N_t)_{t\ge 0}$ and the inter-arrival times
$$\tau_1 = T_1 , \qquad \tau_k = T_k - T_{k-1} , \quad k \ge 2 , \qquad (3.2)$$
are i.i.d. exponentially distributed random variables with $E \tau_1 = 1/\lambda$. We define the total claim amount process as
$$X_t = \sum_{j=1}^{N_t} Y_j \qquad (3.3)$$
and $X_t = 0$ for $N_t = 0$. In the theory of stochastic processes such a process is called a compound Poisson process. Moreover, we assume that a continuous stream of revenue brings in $c\,t$ during the time interval $[0, t]$, where $c > 0$ is the premium income rate. In this case the risk process is defined as
$$U_t = u + c\,t - X_t , \qquad (3.4)$$
where $u > 0$ is the initial endowment of the insurance company.

Definition 3.1. The event
$$A^- = \{ \exists\, t > 0 \ \text{such that} \ U_t < 0 \} = \cup_{t>0} \{ U_t < 0 \} \qquad (3.5)$$
is called the ruin.

The definition of the risk process (3.4) immediately implies that
$$A^- = \cup_{k\ge 1} \{ U_{T_k} < 0 \} . \qquad (3.6)$$
This means that this set is measurable. The moment $\tau^u$ when the risk process goes below zero is called the ruin time:
$$\tau^u = \inf\{ t > 0 : U_t < 0 \} . \qquad (3.7)$$

The ruin probability or ruin function is given by
$$\psi(u) = P(A^- \mid U_0 = u) = P(\tau^u < \infty) . \qquad (3.8)$$

Setting
$$\sigma^u = \inf\{ k \ge 1 : U_{T_k} < 0 \} \qquad (3.9)$$
and taking into account the definition (3.8), we obtain
$$\psi(u) = P(\sigma^u < \infty) . \qquad (3.10)$$

Firstly, we study the properties of the total claim amount process (3.3).

Theorem 3.1. For the process (3.3) the following law of large numbers holds:
$$\lim_{t\to\infty} \frac{1}{t} X_t = \lambda \mu \quad a.s. \qquad (3.11)$$
Moreover, if $E Y_1^2 < \infty$, then for the process (3.3) the limit theorem holds also, i.e.
$$\frac{X_t - \lambda \mu t}{\sqrt{t}} \Longrightarrow \mathcal{N}\big( 0 , \lambda E Y_1^2 \big) \quad \text{as } t \to \infty . \qquad (3.12)$$

Proof. To show (3.11) we note that, in view of the definition of the Poisson process in (1.1), for any $t > 0$
$$T_{N_t} \le t < T_{N_t + 1} . \qquad (3.13)$$
Therefore, taking into account that $N_t \to \infty$ a.s. as $t \to \infty$, we obtain through the law of large numbers that
$$\lim_{t\to\infty} \frac{T_{N_t}}{N_t} = E \tau_1 = \frac{1}{\lambda} \quad a.s.$$

Therefore, from the inequalities (3.13) it follows that
$$\lim_{t\to\infty} \frac{N_t}{t} = \lambda \quad a.s.$$
and, using again the law of large numbers given in Theorem A.1, we come to the limit (3.11). As to the second equality, note that the deviation $X_t - \lambda \mu t$ can be represented as
$$X_t - \lambda \mu t = S_{N_t} + \lambda \mu \big( T_{N_t} - t \big) , \qquad (3.14)$$
where
$$S_n = \sum_{j=1}^{n} \eta_j \quad\text{and}\quad \eta_j = Y_j - \mu + \mu (1 - \lambda \tau_j) .$$
Note that $E \eta_j = 0$ and $E \eta_j^2 = E Y_1^2$

and, in view of (3.13),
$$0 \le t - T_{N_t} \le \tau_{N_t + 1} .$$

Moreover, we have
$$E \, \tau_{N_t + 1} = \sum_{k=0}^{\infty} E \, \tau_{k+1} \mathbf{1}_{\{N_t = k\}} \le E \tau_1 + \lambda \int_0^{+\infty} z \, \Upsilon(t, z) \, e^{-\lambda z} \, dz , \qquad (3.15)$$
where
$$\Upsilon(t, z) = \sum_{k=1}^{\infty} P\big( T_k \le t < T_k + z \big) = \lambda \big( t - (t - z)_+ \big)$$
and $(x)_+ = \max(0, x)$. Therefore, the bound (3.15) yields
$$E \, \tau_{N_t + 1} \le \frac{1}{\lambda} + \lambda^2 \int_0^{t} z^2 e^{-\lambda z} \, dz + \lambda^2 t \int_t^{\infty} z^2 e^{-\lambda z} \, dz ,$$
i.e.
$$\sup_{t\ge 0} E \, \tau_{N_t + 1} < \infty$$
and, therefore,
$$P\text{-}\lim_{t\to\infty} \frac{T_{N_t} - t}{\sqrt{t}} = 0 .$$
Using this equality in (3.14), we obtain the asymptotic representation
$$\frac{X_t - \lambda \mu t}{\sqrt{t}} = \frac{S_{N_t}}{\sqrt{t}} + o_P(1) , \qquad (3.16)$$
where $o_P(1)$ is a term going to zero in probability as $t \to \infty$. Moreover, let now $m = [\lambda t]$, where $[x]$ is the integer part of the number $x$. Then
$$E \big( S_{N_t} - S_m \big)^2 = E S_{N_t}^2 - 2 E S_{N_t} S_m + E S_m^2 = E \eta_1^2 \, E |N_t - m| ,$$
i.e.
$$E \Big( \frac{S_{N_t} - S_m}{\sqrt{t}} \Big)^2 \le E \eta_1^2 \Big( \frac{1}{t} + \frac{\sqrt{\lambda}}{\sqrt{t}} \Big) .$$
Using this in (3.16), we get
$$\frac{X_t - \lambda \mu t}{\sqrt{t}} = \frac{S_m}{\sqrt{t}} + o_P(1) .$$
Now, applying to the sequence $(S_n)_{n\ge 1}$ the central limit theorem (Theorem A.4), we come to the limit property (3.12). Hence Theorem 3.1.
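Theorem 3.1 can be checked by simulation. In the sketch below the claims are taken exponential with parameter $\gamma$ purely for illustration, so $\mu = 1/\gamma$ and $E Y_1^2 = 2/\gamma^2$; note that the variance of $(X_t - \lambda\mu t)/\sqrt{t}$ equals $\lambda E Y_1^2$ exactly for every $t$, so the empirical mean and variance can be compared with $0$ and $\lambda E Y_1^2$ directly.

```python
import random, math

rng = random.Random(4)
lam, gamma, t, trials = 2.0, 1.0, 100.0, 4000
mu, ey2 = 1.0 / gamma, 2.0 / gamma ** 2
vals = []
for _ in range(trials):
    s, x = 0.0, 0.0
    while True:
        s += rng.expovariate(lam)      # claim arrival times T_n
        if s > t:
            break
        x += rng.expovariate(gamma)    # claim size Y_j
    vals.append((x - lam * mu * t) / math.sqrt(t))
m = sum(vals) / trials
v = sum((z - m) ** 2 for z in vals) / trials
print(m, v)   # m near 0, v near lam * E Y_1^2
```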

Now we come back to the ruin problem, i.e. we study the properties of the ruin probability (3.10).

Proposition 3.1. (Almost sure ruin) If $c \le \mu \lambda$, then $\psi(u) = 1$ for all $u > 0$.

Proof. Let $c < \lambda \mu$. We can represent the sequence $(U_{T_k})_{k\ge 1}$ as
$$U_{T_k} = u - \sum_{j=1}^{k} \xi_j , \qquad (3.17)$$
where $\xi_j = Y_j - c \tau_j$. In this case, by applying the strong law of large numbers (Theorem A.1) to $S_k = \sum_{j=1}^{k} \xi_j$ in the equality (3.10), we find that
$$\lim_{n\to\infty} \frac{U_{T_n}}{n} = - \lim_{n\to\infty} \frac{S_n}{n} = - E \xi_1 = \frac{c}{\lambda} - \mu < 0 \quad a.s.$$
So, taking into account (3.9) and (3.10), we obtain that $\psi(u) = 1$ for all $u \ge 0$. Let now $c = \lambda \mu$, i.e. $E \xi_1 = 0$. In this case note that for any $k \ge 1$ and $\epsilon > 0$
$$P(|\xi_k| > \epsilon) = P(|\xi_1| > \epsilon) > 0 .$$

Using the Kolmogorov three-series theorem and the Kolmogorov zero-one law (Theorems A.2 and A.3), we obtain that
$$\limsup_{k\to\infty} S_k = +\infty \quad a.s.$$
From the equalities (3.9) and (3.10) it follows that
$$1 = P\big( \limsup_{k\to\infty} S_k = +\infty \big) \le P(\sigma^u < \infty) .$$
Thus, $\psi(u) = 1$. Hence Proposition 3.1.

Remark 3.1. Proposition 3.1 means that insurance companies have to choose the premium rate $c > 0$ such that $E \xi_1 < 0$. This is the only possibility to avoid being almost surely bankrupt in the framework of the Cramer-Lundberg model. So, if $E \xi_1 < 0$, then we can hope that the ruin function $\psi(u)$ will be less than $1$.

Definition 3.2. The Cramer-Lundberg model satisfies the "net profit condition" if
$$E \xi_1 = E (Y_1 - c \tau_1) = \mu - \frac{c}{\lambda} < 0 . \qquad (3.18)$$

In the sequel we will assume that the premium rate is equal to
$$c = (1 + \rho) \lambda \mu , \qquad (3.19)$$
where $\rho$ is a positive constant, which provides the net profit condition.

36 3.2 Exercises III

Let $(Y_j)_{j\ge 1}$ be i.i.d. random variables with values in $\mathbb{R}_+$ whose generating function
$$m_Y(h) = E e^{h Y_j}$$
is finite on a neighborhood of $0$. Let $(N_t)_{t\ge 0}$ be a homogeneous Poisson process of an intensity $\lambda > 0$ independent of $(Y_j)_{j\ge 1}$. For any $t \ge 0$ we set
$$X_t = \sum_{j=1}^{N_t} Y_j \quad\text{and}\quad U_t = u + ct - X_t$$
with $u > 0$ and $c > 0$.

1. Calculate the expectation and variance of $U_t$.

2. Calculate the generating function for $X_t$.

3. Let $\alpha > 0$. Show that there is only one solution $c_\alpha$ of the equation
$$E e^{-\alpha (ct - X_t)} = 1$$
for any $t > 0$.

4. Show that $E U_t > u$ for $c = c_\alpha$. What is the limit of $c_\alpha$ as $\alpha \to 0$?

37 3.3 Lundberg inequality

In this section we will study the behavior of the function $\psi(u)$ under the condition (3.18). Moreover, we assume that the sequence $(Y_j)_{j\ge 1}$ of claim amounts satisfies the following condition, called the Lundberg condition:

H1) There exists $\delta > 0$ such that
$$E e^{\delta Y_1} < \infty . \qquad (3.20)$$

Also we define the Lundberg function as
$$L(x) = \ln E e^{x \xi_1} . \qquad (3.21)$$
The condition H1) implies that the function $L(x)$ is finite in absolute value for any $0 \le x \le \delta$.

Proposition 3.2. We assume that the condition H1) holds. If the equation L(x) = 0 has a strictly positive root, then this root is unique.

Proof. First, we note that the function $L$ is convex. Indeed, by Hölder's inequality, for $0 < \alpha < 1$ and for $0 \le x, y \le \delta$ we obtain that
$$L(\alpha x + (1 - \alpha) y) = \ln \big( E e^{\alpha x \xi_1} e^{(1-\alpha) y \xi_1} \big) \le \ln \big( (E e^{x \xi_1})^{\alpha} (E e^{y \xi_1})^{1-\alpha} \big) = \alpha \ln E e^{x \xi_1} + (1 - \alpha) \ln E e^{y \xi_1} = \alpha L(x) + (1 - \alpha) L(y) .$$

Assume that there are $0 < r_1 < r_2$ such that $L(r_1) = L(r_2) = 0$. Then for all $z \in [r_1, r_2]$ we obtain
$$L(z) = L(\alpha r_1 + (1 - \alpha) r_2) \le \alpha L(r_1) + (1 - \alpha) L(r_2) = 0 ,$$
where $\alpha = (r_2 - z)/(r_2 - r_1)$. If $L(z) = 0$ (i.e. $E e^{z \xi_1} = 1$) for all $r_1 \le z \le r_2$, then we would have $E \xi_1^2 e^{z \xi_1} = 0$ and, so, $\xi_1 = Y_1 - c \tau_1 = 0$ a.s. But this is not possible since the random variables $Y_1$ and $\tau_1$ are independent. Therefore, there exists $r_1 < z_1 < r_2$ such that $L(z_1) < 0$. Similarly, since $L(0) = 0$, there exists $z_0 \in [0, r_1]$ such that $L(z_0) \le 0$. Setting $\alpha = (z_1 - r_1)/(z_1 - z_0)$, we find that
$$0 = L(r_1) = L(\alpha z_0 + (1 - \alpha) z_1) \le \alpha L(z_0) + (1 - \alpha) L(z_1) < 0 .$$
This contradiction implies the uniqueness of the positive root. Hence Proposition 3.2.

Definition 3.3. If the equation $L(x) = 0$ admits a root $r > 0$, then this root is called the Lundberg coefficient.

We will assume the following condition.

H2) The equation L(x) = 0 admits a root r > 0.

Remark 3.2. It is easy to see that the assumptions H1)-H2) imply the net profit condition (3.18). Indeed, if $E \xi_1 \ge 0$, then by the Jensen inequality we obtain that
$$L(x) = \ln E e^{x \xi_1} > \ln e^{x E \xi_1} \ge 0$$
for any $x > 0$. So, the function $L$ would have no strictly positive root.

Theorem 3.2. (Lundberg inequality) Under the conditions H1)-H2), for all $u \ge 0$ the ruin function admits the exponential upper bound
$$\psi(u) \le e^{-ru} . \qquad (3.22)$$

Proof. First, one notes that according to (3.10) we can represent the ruin probability as the distribution tail of the extreme value for a sequence of sums of i.i.d. random variables:
$$\psi(u) = P\big( \inf_{k\ge 1} U_{T_k} < 0 \big) = P\big( \max_{k\ge 1} S_k > u \big) ,$$
where $S_k = \sum_{j=1}^{k} \xi_j$ and $\xi_j = Y_j - c \tau_j$. Let now
$$\psi_n(u) = P\big( \max_{1\le k\le n} S_k > u \big) .$$

It's obvious that
$$\psi(u) = \lim_{n\to\infty} \psi_n(u) .$$
So, for this theorem it suffices to show the inequality (3.22) for the functions $\psi_n(u)$ for all $n \ge 1$. We will do it by induction. We start with $n = 1$. In this case $S_1 = \xi_1$ and by the Markov inequality
$$\psi_1(u) = P(\xi_1 > u) \le E e^{r \xi_1} e^{-ru} = e^{-ru} .$$

Moreover, if the inequality (3.22) holds for some fixed $n \ge 1$, then for $n + 1$ we get
$$\psi_{n+1}(u) = P\big( \max_{1\le k\le n+1} S_k > u , \ \xi_1 > u \big) + P\big( \max_{1\le k\le n+1} S_k > u , \ \xi_1 \le u \big) \le P(\xi_1 > u) + P\big( \max_{2\le k\le n+1} S_k > u , \ \xi_1 \le u \big) . \qquad (3.23)$$

We estimate now the first term in (3.23) more precisely, i.e.
$$P(\xi_1 > u) \le e^{-ru} E e^{r \xi_1} \mathbf{1}_{\{\xi_1 > u\}} . \qquad (3.24)$$

Taking into account that $S_k$ is a sum of i.i.d. random variables and using the inequality (3.22) for $\psi_n(\cdot)$, we can estimate the second term in (3.23) as
$$P\big( \max_{2\le k\le n+1} S_k > u , \ \xi_1 \le u \big) = P\Big( \max_{1\le k\le n} \sum_{j=1}^{k} \xi_{j+1} > u - \xi_1 , \ \xi_1 \le u \Big) = E \, \mathbf{1}_{\{\xi_1 \le u\}} \psi_n(u - \xi_1) \le e^{-ru} E \, \mathbf{1}_{\{\xi_1 \le u\}} e^{r \xi_1} .$$

Using this inequality and the upper bound (3.24) in (3.23), we obtain that
$$\psi_{n+1}(u) \le e^{-ru} \big( E e^{r \xi_1} \mathbf{1}_{\{\xi_1 > u\}} + E e^{r \xi_1} \mathbf{1}_{\{\xi_1 \le u\}} \big) = e^{-ru} E e^{r \xi_1} = e^{-ru} .$$
So, for all $n \ge 1$ the functions $\psi_n(u) \le e^{-ru}$. Taking here the limit as $n \to \infty$, we get the bound (3.22). Hence Theorem 3.2.

Example 3.1. We consider the Cramer-Lundberg model in which the random variables $(Y_j)_{j\ge 1}$ are exponential with a parameter $\gamma > 0$. In this case the premium rate (3.19) takes the form
$$c = (1 + \rho) \lambda / \gamma ,$$
where $\rho$ is a positive constant. Note that the condition H1) holds for $\delta < \gamma$. Moreover, it is easy to see that the Lundberg coefficient in this case is
$$r = \gamma - \frac{\lambda}{c} = \gamma \frac{\rho}{1 + \rho} .$$
So, in view of the Lundberg inequality, we get for all $u \ge 0$
$$\psi(u) \le e^{-\gamma \frac{\rho}{1+\rho} u} . \qquad (3.25)$$
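The bound (3.25) can be compared with a Monte Carlo estimate of the ruin probability. The sketch below (all parameter values are illustrative) simulates the surplus at claim times via (3.17) and stops each path after a fixed number of claims, so the estimate slightly underestimates $\psi(u)$. For exponential claims the exact value is $e^{-ru}/(1+\rho)$ (see (3.31) below), which indeed falls below the Lundberg bound.

```python
import random, math

rng = random.Random(5)
lam, gamma, rho, u = 1.0, 1.0, 0.5, 2.0
c = (1.0 + rho) * lam / gamma        # premium rate of Example 3.1
r = gamma * rho / (1.0 + rho)        # Lundberg coefficient
trials, n_claims = 10000, 200
ruined = 0
for _ in range(trials):
    s = u
    for _ in range(n_claims):
        s += c * rng.expovariate(lam) - rng.expovariate(gamma)  # premium minus claim
        if s < 0.0:
            ruined += 1
            break
psi_hat = ruined / trials
print(psi_hat, math.exp(-r * u))     # estimate vs the Lundberg bound
```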

3.4 Exercises IV

We consider the risk process $U_t = u + ct - X_t$ for a reinsurance company, where $X_t = \sum_{j=1}^{N_t} (Y_j - K)_+$ with $K > 0$, $(N_t)_{t\ge 0}$ is a homogeneous Poisson process of intensity $\lambda > 0$ independent of the i.i.d. sequence $(Y_j)_{j\ge 1}$ of exponential random variables of parameter $\gamma > 0$. We choose the premium rate as
$$c = (1 + \rho) \lambda E (Y_1 - K)_+ \quad\text{with}\quad \rho > 0 .$$

1. Calculate $c$.

2. Show that
$$E e^{it (Y_1 - K)_+} = 1 + \frac{it}{\gamma - it} e^{-K\gamma} , \quad t \in \mathbb{R} .$$

3. Show that $X_t$ has the same distribution as $\tilde X_t = \sum_{i=1}^{\tilde N_t} Y_i$, where $(\tilde N_t)_{t\ge 0}$ is a homogeneous Poisson process of the intensity $\tilde\lambda = \lambda e^{-K\gamma}$ independent of $(Y_j)_{j\ge 1}$.

3.5 Fundamental equation for the non-ruin probability

Denote by ϕ(u) = 1 − ψ(u) the non-ruin probability.

Theorem 3.3. We assume that the Cramer-Lundberg model satisfies the net profit condition (3.18) and the distribution function $F_Y(\cdot)$ of the random amounts $(Y_j)$ has a density $f_Y$. Then the non-ruin probability $\phi(u)$ satisfies the following integral equation
$$\phi(u) = \frac{\rho}{1 + \rho} + \frac{1}{1 + \rho} \int_0^u \phi(u - y) \, dF_{Y,I}(y) , \qquad (3.26)$$
where
$$F_{Y,I}(y) = \frac{1}{\mu} \int_0^y \bar F_Y(z) \, dz \quad\text{and}\quad \bar F_Y(y) = 1 - F_Y(y) = P(Y_1 > y) .$$

Proof. Taking into account that $S_n = \sum_{j=1}^{n} \xi_j$ and $(\xi_j)_{j\ge 1}$ are i.i.d. random variables, one has
$$\phi(u) = P\big( \sup_{n\ge 1} S_n \le u \big) = P\big( \xi_1 \le u , \ \sup_{n\ge 2} S_n \le u \big) = P\Big( \xi_1 \le u , \ \sup_{n\ge 2} \sum_{j=2}^{n} \xi_j \le u - \xi_1 \Big) = E \, \mathbf{1}_{\{\xi_1 \le u\}} \phi(u - \xi_1) = E \, \mathbf{1}_{\{Y_1 - c\tau_1 \le u\}} \phi(u - Y_1 + c\tau_1) ,$$

i.e.
$$\phi(u) = \lambda \int_0^{\infty} \int_0^{u + cv} \phi(u - y + cv) \, dF_Y(y) \, e^{-\lambda v} \, dv = \frac{\lambda}{c} e^{u\lambda/c} \int_u^{\infty} e^{-\lambda z/c} \int_0^{z} \phi(z - y) \, dF_Y(y) \, dz .$$

Taking the derivatives in this equality, we find that
$$\phi'(u) = \frac{\lambda}{c} \phi(u) - \frac{\lambda}{c} \int_0^u \phi(u - y) \, dF_Y(y)$$
and, therefore,
$$\phi(t) - \phi(0) = \frac{\lambda}{c} \int_0^t \phi(u) \, du - \frac{\lambda}{c} \int_0^t \int_0^u \phi(u - y) \, dF_Y(y) \, du . \qquad (3.27)$$

Moreover, the integration by parts yields
$$\int_0^t \int_0^u \phi(u - y) \, dF_Y(y) \, du = \int_0^t \Big( \phi(0) F_Y(u) + \int_0^u F_Y(y) \phi'(u - y) \, dy \Big) du = \phi(0) \int_0^t F_Y(u) \, du + \int_0^t F_Y(y) \Big( \int_y^t \phi'(u - y) \, du \Big) dy = \int_0^t F_Y(y) \phi(t - y) \, dy .$$

Using now the condition (3.19), we obtain from (3.27) that
$$\phi(t) - \phi(0) = \frac{1}{(1 + \rho)\mu} \int_0^t \phi(t - y) \bar F_Y(y) \, dy = \frac{1}{1 + \rho} \int_0^t \phi(t - y) \, dF_{Y,I}(y) . \qquad (3.28)$$

It should be noted now that $\phi(\infty) = 1$. Therefore, passing here to the limit as $t \to \infty$ yields
$$\phi(0) = \frac{\rho}{1 + \rho}$$
and we obtain from (3.28) the equality (3.26). Hence Theorem 3.3.

Note that (3.26) immediately implies the equation for the ruin probability $\psi(u) = 1 - \phi(u)$:
$$\psi(u) = \frac{\bar F_{Y,I}(u)}{1 + \rho} + \frac{1}{1 + \rho} \int_0^u \psi(u - y) \, dF_{Y,I}(y) , \qquad (3.29)$$
where $\bar F_{Y,I}(y) = 1 - F_{Y,I}(y)$.

Example 3.2. In the case when the distribution of $(Y_j)_{j\ge 1}$ is exponential, as in Example 3.1, i.e. $F_Y(y) = 1 - e^{-\gamma y}$, this equation has the following form:
$$\psi(u) = \frac{e^{-\gamma u}}{1 + \rho} + \frac{\gamma}{1 + \rho} \int_0^u \psi(u - y) e^{-\gamma y} \, dy . \qquad (3.30)$$
We can solve this equation directly and get that the solution is
$$\psi(u) = \frac{1}{1 + \rho} e^{-\gamma \frac{\rho}{1+\rho} u} . \qquad (3.31)$$

Remark 3.3. Note that, if we compare the formula (3.31) with the upper bound (3.25), one can see that the Lundberg inequality gives the sharp exponential rate, being an upper bound only up to the coefficient $(1 + \rho)^{-1}$.

3.6 Exercises V

Let $(Y_j)_{j\ge 1}$ be i.i.d. random variables with values in $\mathbb{N}$ and $N$ a random variable with values in $\mathbb{N}$, independent of $(Y_j)_j$, whose distribution is of the form
$$q_n := P(N = n) = \Big( a + \frac{b}{n} \Big) q_{n-1} , \quad n = 1, 2, \ldots ,$$
where $q_0 = P(N = 0)$, $a < 1$ and $b \in \mathbb{R}$ are fixed constants. Moreover let
$$X = \sum_{j=1}^{N} Y_j \quad\text{and}\quad p_k := P(X = k) .$$

1. Show that the Poisson and binomial distributions verify the

previous hypotheses on N. ∑ 2. Let n . Show that for ≥ Sn = j=1 Yj i 1 ( )

Y1 1 E Si = . Si i

3. Show that

( )

Y1 E a + b S = n n i ( ) ∑n k P(Y = k)P(S − = n − k) = a + 1 i 1 . n P(S = n) k=0 i

4. Show that   if  q0 , P(Y1 = 0) = 0 ; p0 =  N else E (P(Y1 = 0)) , .

5. Show that for n ≥ 1
\[
p_n=\sum_{i=1}^{\infty}P(S_i=n)\,q_i\,.
\]

6. Show that the probabilities p_k can be calculated recursively (Panjer's algorithm):
\[
p_k=\frac{1}{1-a\,P(Y_1=0)}\sum_{i=1}^{k}\Big(a+\frac{b\,i}{k}\Big)P(Y_1=i)\,p_{k-i}\,,\qquad k\ge 1\,.
\]
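Item 6 translates directly into code. The sketch below runs the recursion for a compound Poisson sum (N ~ Poisson(λ), i.e. a = 0 and b = λ in the hypotheses above) and checks the output against the direct formula of item 5; the value λ = 2 and the claim-size law are illustrative assumptions.

```python
import math

def panjer(a, b, f, p0, kmax):
    """Panjer recursion of item 6:
    p_k = (1/(1 - a*f[0])) * sum_{i=1}^{k} (a + b*i/k) * f[i] * p_{k-i},
    where f[i] = P(Y1 = i) and p0 = P(X = 0) is supplied by the caller."""
    p = [p0]
    for k in range(1, kmax + 1):
        s = sum((a + b * i / k) * f[i] * p[k - i]
                for i in range(1, min(k, len(f) - 1) + 1))
        p.append(s / (1.0 - a * f[0]))
    return p

# Compound Poisson case: N ~ Poisson(lam) gives a = 0, b = lam (item 1),
# and p_0 = E[P(Y1=0)^N] = exp(-lam*(1 - P(Y1=0))) (item 4)
lam = 2.0
f = [0.0, 0.5, 0.3, 0.2]            # assumed claim-size law on {1, 2, 3}
p = panjer(0.0, lam, f, math.exp(-lam * (1.0 - f[0])), kmax=20)

# brute-force check of item 5: p_n = sum_i q_i * P(S_i = n)
def brute(kmax, nmax=60):
    pk = [0.0] * (kmax + 1)
    conv = [1.0] + [0.0] * kmax     # distribution of S_0
    for i in range(nmax):
        qi = math.exp(-lam) * lam ** i / math.factorial(i)
        for k in range(kmax + 1):
            pk[k] += qi * conv[k]
        conv = [sum(conv[k - j] * f[j] for j in range(1, min(k, len(f) - 1) + 1))
                for k in range(kmax + 1)]
    return pk

diff = max(abs(x - y) for x, y in zip(p, brute(20)))
print(diff)
```

The recursion needs O(k_max²) operations, against the much more expensive repeated convolutions of the direct formula.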

3.7 Cramér bound

In this section we will study the limit of ψ(u) when u → ∞ for small claims, i.e. for claims that verify the condition H1).

Theorem 3.4. We assume that the conditions H1)–H2) hold with 0 < r < δ and that the random variable Y_1 has a density f_Y. Then
\[
\lim_{u\to\infty}e^{ru}\,\psi(u)=\psi_*>0\,, \tag{3.32}
\]
where
\[
\psi_*=\frac{\rho\,\mu}{r\int_0^{\infty}z\,e^{rz}\,P(Y_1>z)\,dz}
\]
and the parameter ρ > 0 is given in the net profit condition (3.19).

Proof. First, note that Remark 3.2 implies the condition (3.19).

Moreover, by Theorem 3.3, we can write the equation for ψ:
\[
\psi(u)=q\,\overline F_{Y,I}(u)+q\int_0^{u}\psi(u-y)\,dF_{Y,I}(y)\,, \tag{3.33}
\]
where q = (1+ρ)^{−1}. From here we directly get the equation for the function Q(u) = e^{ru}ψ(u), i.e.
\[
Q(u)=V(u)+\int_0^{u}Q(u-y)\,dG(y)\,, \tag{3.34}
\]
where \(V(u)=q\,e^{ru}\,\overline F_{Y,I}(u)\) and
\[
G(u)=\frac{q}{\mu}\int_0^{u}e^{rz}\,P(Y_1>z)\,dz\,.
\]

Now let us show that G is a distribution function, i.e. G(+∞) = 1. Indeed, integration by parts yields
\[
G(+\infty)=\frac{q}{\mu}\int_0^{\infty}e^{rz}\,P(Y_1>z)\,dz
=\frac{q}{r\mu}\,e^{rz}\,P(Y_1>z)\Big|_{0}^{\infty}+\frac{q}{r\mu}\,E\,e^{rY_1}\,.
\]

Taking into account that 0 < r < δ, we obtain
\[
e^{rz}\,P(Y_1>z)\le E\,e^{\delta Y_1}\,e^{-(\delta-r)z}\to 0\quad\text{as } z\to\infty\,. \tag{3.35}
\]
Therefore,
\[
G(+\infty)=-\frac{q}{r\mu}+\frac{q}{r\mu}\,E\,e^{rY_1}\,.
\]

Note here that the definition of r and (3.19) imply
\[
E\,e^{rY_1}=\frac{\lambda+rc}{\lambda}=1+\frac{r\mu}{q}\,,
\]
i.e. G(+∞) = 1. This means that the equation (3.34) is a renewal equation and the solution Q is a renewal function for i.i.d. random variables (η_j)_{j≥1} with the distribution function G. Let us now study the function V. We note that the inequality (3.35) implies the following upper bound for V:

\[
V(u)=q\,e^{ru}\,\overline F_{Y,I}(u)=\frac{q}{\mu}\,e^{ru}\int_u^{\infty}P(Y_1>z)\,dz
\le \frac{q}{\delta\mu}\,E\,e^{\delta Y_1}\,e^{-(\delta-r)u}\,,
\]
i.e. the function V satisfies the direct Riemann integrability condition. This means that we can apply the Smith key renewal theorem to the function Q, i.e.
\[
\lim_{u\to\infty}Q(u)=\frac{1}{E\eta_1}\int_0^{\infty}V(z)\,dz\,,
\]
where
\[
E\eta_1=\frac{q}{\mu}\int_0^{\infty}z\,e^{rz}\,P(Y_1>z)\,dz
\]

and
\[
\int_0^{\infty}V(z)\,dz=q\int_0^{\infty}e^{rz}\,\overline F_{Y,I}(z)\,dz
=\frac{q}{r}\,e^{rz}\,\overline F_{Y,I}(z)\Big|_{0}^{\infty}+\frac{q}{\mu r}\int_0^{\infty}e^{rz}\,P(Y_1>z)\,dz
=-\frac{q}{r}+\frac{1}{r}\,G(+\infty)=\frac{1-q}{r}\,.
\]

This directly implies (3.32). Hence Theorem 3.4.
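For exponential claims the quantities in Theorem 3.4 can be verified numerically: r is found by bisection from the Lundberg equation λ(E e^{rY₁} − 1) = rc, and ψ* is evaluated by quadrature. For F_Y(y) = 1 − e^{−γy} one expects r = γρ/(1+ρ) and ψ* = (1+ρ)^{−1}, in agreement with Example 3.2; the parameter values below are illustrative assumptions.

```python
import math

lam, gamma, rho = 3.0, 1.0, 0.5
mu = 1.0 / gamma
c = lam * mu * (1.0 + rho)          # net profit condition c = lambda*mu*(1+rho)

def h(r):
    """lam*(E e^{rY} - 1) - r*c, with E e^{rY} = gamma/(gamma - r) for Exp(gamma)."""
    return lam * (gamma / (gamma - r) - 1.0) - r * c

# bisection on (0, gamma): h < 0 just above 0 and h -> +infinity near gamma
lo, hi = 1e-9, gamma - 1e-9
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if h(mid) > 0.0:
        hi = mid
    else:
        lo = mid
r = 0.5 * (lo + hi)

# psi_* = rho*mu / (r * int_0^inf z e^{rz} P(Y1 > z) dz), trapezoidal quadrature
n, zmax = 200000, 200.0
dz = zmax / n
integral = sum((k * dz) * math.exp((r - gamma) * k * dz) for k in range(1, n)) * dz
psi_star = rho * mu / (r * integral)

print(r, psi_star)
```

With these values the bisection recovers r = 1/3 and the quadrature gives ψ* ≈ 2/3 = (1+ρ)^{−1}.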

3.8 Exercises VI

1. We consider a Cramér–Lundberg model with the risk process
\[
U_t=u+ct-X_t\,,
\]
where \(X_t=\sum_{j=1}^{N_t}Y_j\) is the total claim amount process.

(a) Show that the random variables X_t − X_s and X_s are independent for 0 < s < t.

(b) Show that the random variables X_t − X_s and X_{t−s} have the same distribution for 0 < s < t.

(c) Calculate \(E\big(e^{-hU_t}\,|\,X_s\big)\).

(d) Assuming that the Lundberg coefficient r > 0 exists, show that
\[
E\big(e^{-rU_t}\,|\,X_s\big)=e^{-rU_s}\,.
\]

(e) Show that \(E\,e^{-rU_t}\) is independent of t.

2. Assume that in a Cramér–Lundberg model the distribution of the claim amounts Y_j is given by the density
\[
f_n(x)=\frac{1}{\Gamma(n)}\,\alpha^{n}x^{n-1}e^{-\alpha x}\quad\text{for } x>0\qquad(\alpha>0\,,\ n\ge 1).
\]

(a) Calculate the moment generating function \(E\,e^{hY_1}\). For which values h > 0 is this function well defined? Calculate \(E\,Y_1\).

(b) Find the net profit condition for this model.

(c) Calculate the Lundberg coefficient for n = 1 and n = 2.

(d) Write the integral equation for the ruin function ψ_n(u). Find this function for n = 1.

3.9 Large claims

In this section we study the problem of ruin for claims (Y_j)_{j≥1} which do not satisfy the condition H1), i.e. \(E\,e^{\delta Y_1}=+\infty\) for all δ > 0. We replace the condition H1) by a weaker one: we assume that the distribution of (Y_j)_{j≥1} is subexponential.

Definition 3.4. We say that a random variable Y is subexponential if i.i.d. random variables (Y_j)_{j≥1} having the same distribution as Y satisfy, for all n ≥ 1, the following condition:
\[
\lim_{z\to\infty}\frac{P\big(\sum_{j=1}^{n}Y_j>z\big)}{P(Y_1>z)}=n\,. \tag{3.36}
\]

Example 3.3. Let Y be a positive random variable such that for any z ≥ 0
\[
P(Y>z)=\frac{1}{(1+z)^{\alpha}}\,,\qquad \alpha>0\,.
\]

Let us show by induction that Y satisfies the condition (3.36). Assuming that the property (3.36) holds for n − 1, we check this condition for n. To this end we set
\[
\overline F_n(z)=P(S_n>z)\,,
\]
where \(S_n=\sum_{j=1}^{n}Y_j\). We have
\[
\overline F_n(z)=P\Big(\sum_{j=1}^{n-1}Y_j>z-Y_n\Big)
=E\,\overline F_{n-1}(z-Y_n)\,1_{\{Y_n\le z\}}+P(Y_n>z)
=\int_0^z\overline F_{n-1}(z-t)\,dF(t)+\overline F(z)\,,
\]
where F is the distribution function of Y and \(\overline F(z)=\overline F_1(z)\). So, we obtain that
\[
\frac{\overline F_n(z)}{\overline F(z)}=1+\int_0^z\frac{\overline F_{n-1}(z-t)}{\overline F(z)}\,dF(t)\,. \tag{3.37}
\]

Then we can represent the last term in this equality as

\[
\int_0^{rz}\frac{\overline F_{n-1}(z-t)}{\overline F(z)}\,dF(t)+\int_{rz}^{z}\frac{\overline F_{n-1}(z-t)}{\overline F(z)}\,dF(t)
=I_{1,r}(z)+I_{2,r}(z)\,,
\]
where 0 < r < 1. Note that for the function F(z) = 1 − (1+z)^{−α} we have, for all t ≥ 0,
\[
\lim_{z\to\infty}\frac{\overline F(z-t)}{\overline F(z)}=1\,.
\]

Thus, in view of the induction hypothesis, for all t > 0
\[
\lim_{z\to\infty}\frac{\overline F_{n-1}(z-t)}{\overline F(z)}=n-1\,.
\]

Moreover, for t ≤ rz with 0 < r < 1 we obtain the following upper bound

\[
\limsup_{z\to\infty}\frac{\overline F_{n-1}(z-t)}{\overline F(z)}
\le \limsup_{z\to\infty}\frac{\overline F_{n-1}\big((1-r)z\big)}{\overline F(z)}
\le \frac{n-1}{(1-r)^{\alpha}}\,.
\]

Therefore, by the dominated convergence theorem,
\[
\lim_{z\to\infty}I_{1,r}(z)=n-1
\]
for all 0 < r < 1. As to the function I_{2,r}(z), we obtain that for any z > 0
\[
I_{2,r}(z)\le \frac{1}{\overline F(z)}\big(F(z)-F(rz)\big)
\le \Big(\frac{1}{(1+rz)^{\alpha}}-\frac{1}{(1+z)^{\alpha}}\Big)(1+z)^{\alpha}\,.
\]

This means that for any 0 < r < 1
\[
\limsup_{z\to\infty}I_{2,r}(z)\le r^{-\alpha}-1
\]
and, passing here to the limit as r → 1, we find
\[
\limsup_{r\to 1}\,\limsup_{z\to\infty}I_{2,r}(z)=0\,.
\]

Therefore, the equality (3.37) implies directly (3.36).

Proposition 3.3. Let Y be a subexponential random variable. Then for any ε > 0 there exists K = K(ε) > 0 such that for any n ≥ 1
\[
\sup_{z\ge 0}\frac{\overline F_n(z)}{\overline F(z)}\le K\,(1+\varepsilon)^{n}\,, \tag{3.38}
\]
where \(\overline F_n(z)=P(S_n>z)\), \(\overline F(z)=\overline F_1(z)\), \(S_n=\sum_{j=1}^{n}Y_j\) and (Y_j)_{j≥1} are i.i.d. random variables with the same distribution as Y.

Proof. We set
\[
\alpha_n=\sup_{z\ge 0}\frac{\overline F_n(z)}{\overline F(z)}\,;
\]
then we get
\[
\frac{\overline F_{n+1}(z)}{\overline F(z)}=1+\frac{1}{\overline F(z)}\int_0^z P(S_n>z-t)\,dF(t)
\le 1+\alpha_n\,\frac{P(Y_1+Y_2>z\,,\ Y_2\le z)}{\overline F(z)}
=1+\alpha_n\Big(\frac{P(S_2>z)}{\overline F(z)}-1\Big)\,.
\]
Note here that for any ε > 0 there exists T = T(ε) > 0 such that
\[
\sup_{z\ge T}\Big(\frac{P(S_2>z)}{\overline F(z)}-1\Big)\le 1+\varepsilon\,.
\]

Therefore,
\[
\alpha_{n+1}\le \sup_{0\le z\le T}\frac{\overline F_{n+1}(z)}{\overline F(z)}+\sup_{z\ge T}\frac{\overline F_{n+1}(z)}{\overline F(z)}
\le \alpha_n\,(1+\varepsilon)+K_0\,,
\]
where
\[
K_0=1+\frac{1}{P(Y>T)}\,.
\]

This inequality means that for n ≥ 1
\[
\alpha_{n+1}=(1+\varepsilon)\,\alpha_n+\beta_{n+1} \tag{3.39}
\]
with α_1 = 1 and \(\beta_{n+1}=\alpha_{n+1}-(1+\varepsilon)\alpha_n\le K_0\). We can resolve this equation and find that
\[
\alpha_n=(1+\varepsilon)^{n-1}\alpha_1+\sum_{j=2}^{n}(1+\varepsilon)^{n-j}\beta_j
\le (1+\varepsilon)^{n-1}+K_0\sum_{j=2}^{n}(1+\varepsilon)^{n-j}
\le K\,(1+\varepsilon)^{n}\,,
\]

where K = 1 + K_0/ε. From here we obtain (3.38).

In Section 3.5 we have shown that the probability of non-ruin ϕ(u) satisfies the equation (3.26). Now one needs to solve this equation.

Proposition 3.4. We assume that in the Cramér–Lundberg model the distribution function F_Y(·) of the claim amounts (Y_j)_{j≥1} has a density f_Y and µ = E Y_1 < ∞. Moreover, we assume that this model satisfies the net profit condition (3.18), i.e. c = λµ(1+ρ) with ρ > 0. Then the solution of the equation (3.26) has the following form:
\[
\phi(u)=\sum_{j=0}^{\infty}p\,q^{j}\,P(\widetilde S_j\le u)\,, \tag{3.40}
\]
where p = ρ/(1+ρ), q = 1/(1+ρ), \(\widetilde S_0=0\), \(\widetilde S_j=\sum_{i=1}^{j}\widetilde Y_i\) and \((\widetilde Y_j)_{j\ge 1}\) are i.i.d. random variables with the distribution function F_{Y,I}(·) defined in (3.26).

Proof. Let us denote the right-hand side of equality (3.40) by g, i.e.
\[
g(u)=\sum_{j=0}^{\infty}p\,q^{j}\,P(\widetilde S_j\le u)\,.
\]
It is clear that this function is bounded, i.e.
\[
g(u)\le p\sum_{j=0}^{\infty}q^{j}=\frac{p}{1-q}=1\,.
\]

Moreover, one can see that this function satisfies the equation (3.26).

Indeed,
\[
g(u)=p+p\,q\,P(\widetilde Y_1\le u)+p\sum_{j=2}^{\infty}q^{j}\,P\Big(\sum_{l=2}^{j}\widetilde Y_l\le u-\widetilde Y_1\Big)
=p+p\,q\,P(\widetilde Y_1\le u)+p\sum_{j=2}^{\infty}q^{j}\int_0^u P\Big(\sum_{l=2}^{j}\widetilde Y_l\le u-t\Big)\,dF_{Y,I}(t)
=p+q\int_0^u g(u-t)\,dF_{Y,I}(t)\,.
\]

We show now that g(u) = ϕ(u). To this end we set δ(u) = g(u) − ϕ(u). We have already seen that |δ(u)| ≤ 2 for all u ≥ 0. Let now u be a fixed positive number. Denote \(M_u=\sup_{0\le t\le u}|\delta(t)|\). Then there exists 0 ≤ t_0 ≤ u such that M_u = |δ(t_0)|, because the function δ(·) is continuous on the interval [0, u]. Therefore,
\[
M_u=|\delta(t_0)|=q\,\Big|\int_0^{t_0}\delta(t_0-t)\,dF_{Y,I}(t)\Big|
\le q\int_0^{t_0}|\delta(t_0-t)|\,dF_{Y,I}(t)
\le q\,M_u\,P(\widetilde Y_1\le t_0)\le q\,M_u\,.
\]
Taking into account that q < 1, we get that M_u = 0 for all u > 0. Therefore, ϕ(u) = g(u) for any u ≥ 0.
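Formula (3.40) is the Pollaczek–Khinchine representation, and it yields a direct simulation recipe: ψ(u) = P(S̃_K > u), where K is a geometric random variable with P(K = j) = p q^j. The sketch below checks this against the exponential closed form (3.31), using the fact that for exponential claims the integrated-tail law F_{Y,I} is again Exp(γ); all numeric values are illustrative assumptions.

```python
import random, math

random.seed(1)
gamma, rho, u = 1.0, 0.5, 3.0
p = rho / (1.0 + rho)
nsim = 200000

ruined = 0
for _ in range(nsim):
    k = 0
    while random.random() > p:      # K geometric: P(K = j) = p q^j, q = 1/(1+rho)
        k += 1
    # sum of k integrated-tail claims, here Exp(gamma)
    if sum(random.expovariate(gamma) for _ in range(k)) > u:
        ruined += 1

est = ruined / nsim
exact = math.exp(-gamma * rho * u / (1.0 + rho)) / (1.0 + rho)
print(est, exact)
```

The Monte Carlo estimate agrees with (3.31) up to the usual O(1/√nsim) sampling error.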

Theorem 3.5. We assume that all the conditions of Proposition 3.4 hold and the distribution function F_{Y,I}(·) is subexponential. Then
\[
\lim_{u\to\infty}\frac{\psi(u)}{\overline F_{Y,I}(u)}=\rho^{-1}\,. \tag{3.41}
\]

Proof. The equality (3.40) implies directly that
\[
\psi(u)=1-\phi(u)=p\sum_{j=1}^{\infty}q^{j}\,P(\widetilde S_j>u)\,.
\]

Thus, taking into account that
\[
p\sum_{j=1}^{\infty}j\,q^{j}=\rho^{-1}\,,
\]
we have
\[
\Delta(u)=\frac{\psi(u)}{\overline F_{Y,I}(u)}-\rho^{-1}=p\sum_{j=1}^{\infty}q^{j}\,\sigma_j(u)\,,
\qquad
\sigma_j(u)=\frac{P(\widetilde S_j>u)}{\overline F_{Y,I}(u)}-j\,.
\]
Now we fix ε > 0 such that θ = q(1+ε) < 1. Then, by Proposition 3.3, we obtain that there is a positive constant K such that for any j ≥ 1 and for all u > 0

\[
q^{j}\,|\sigma_j(u)|\le K\,\theta^{j}+q^{j}\,j\,.
\]

Moreover, for any j ≥ 1

\[
\lim_{u\to\infty}\sigma_j(u)=0\,.
\]

Then the dominated convergence theorem directly implies that

\[
\lim_{u\to\infty}\Delta(u)=p\sum_{j=1}^{\infty}q^{j}\lim_{u\to\infty}\sigma_j(u)=0\,.
\]

Hence Theorem 3.5.

Example 3.4. We consider the Cramér–Lundberg model in which the positive random variables (Y_j)_{j≥1} are distributed according to the Pareto distribution function F(·) defined as
\[
F(z)=P(Y\le z)=1-\frac{1}{(1+z)^{1+\alpha}}\,,\qquad z\ge 0\,.
\]
In this case µ = 1/α and, therefore, for all z ≥ 0
\[
F_{Y,I}(z)=1-\frac{1}{(1+z)^{\alpha}}\,.
\]

We have already seen that this distribution function is subexponential. Therefore, in this case, in view of Theorem 3.5, we obtain that u^α ψ(u) → ρ^{−1} as u → ∞.
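This heavy-tail asymptotics can be illustrated by simulation: Ỹ is sampled from F_{Y,I}(z) = 1 − (1+z)^{−α} by inversion (Ỹ = U^{−1/α} − 1 for U uniform) and ψ(u) is estimated through the geometric-sum representation (3.40). The values α = ρ = 1 and u = 200 are illustrative assumptions; note that for α = 1 the integrated-tail law even has infinite mean.

```python
import random

random.seed(2)
alpha, rho, u = 1.0, 1.0, 200.0
p = rho / (1.0 + rho)
nsim = 400000

ruined = 0
for _ in range(nsim):
    k = 0
    while random.random() > p:                  # K geometric, P(K = j) = p q^j
        k += 1
    # inverse-CDF sampling from F_{Y,I}(z) = 1 - (1+z)^(-alpha)
    s = sum(random.random() ** (-1.0 / alpha) - 1.0 for _ in range(k))
    if s > u:
        ruined += 1

est = ruined / nsim
print(u ** alpha * est)     # Theorem 3.5 predicts a value near 1/rho = 1
```

At finite u the product u^α ψ(u) is close to, but not exactly, ρ^{−1}; the single-big-jump correction decays only logarithmically for Pareto tails.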

3.10 Exercises VII

Let F be some distribution function for the claim Y > 0. We denote \(\overline F(x)=1-F(x)\). We say that F is light-tailed if there exist a and b > 0 such that \(\overline F(x)\le a\,e^{-bx}\) for all x; otherwise F is called heavy-tailed. Let

xl = inf{x|F (x) > 0} and xr = sup{x|F (x) < 1} .

We set

\[
e_F(u)=E\big(Y-u\,|\,Y>u\big)\qquad\text{for } u\in(x_l,x_r)\,.
\]

1. Show that e_F(u) can be written as
\[
e_F(u)=\frac{1}{\overline F(u)}\int_u^{+\infty}\overline F(x)\,dx\,.
\]

2. Show that if \(\overline F(x)>0\) for all x > 0 and F is continuous, then we have for any x > 0
\[
\overline F(x)=\frac{e_F(0)}{e_F(x)}\,\exp\Big\{-\int_0^x\frac{1}{e_F(y)}\,dy\Big\}\,.
\]

3. Show that if \(\lim_{u\to+\infty}e_F(u)=+\infty\), then F is heavy-tailed.

4. Show that if F is heavy-tailed, then the moment generating function \(E\,e^{zY}\) of Y is infinite for any z > 0.

5. Calculate e_F(u) when Y is an exponential random variable with parameter λ > 0.

6. Calculate e_F(u) when Y has the Pareto distribution
\[
\overline F(x)=\Big(\frac{\kappa}{\kappa+x}\Big)^{\alpha}
\]
with parameters κ > 0 and α > 1.

7. Show that if Y has the Gamma distribution of order m ≥ 1 and parameter γ > 0, i.e. with the density
\[
f(x)=\frac{\gamma^{m}x^{m-1}}{(m-1)!}\,e^{-\gamma x}\,,
\]

then F is light-tailed.

8. Show that if Y has the Weibull distribution with parameters

c > 0 and τ > 0, i.e.
\[
\overline F(x)=\exp\{-c\,x^{\tau}\}\,,
\]

then F is light- or heavy-tailed depending on the value of τ.
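The mean excess function of item 1 is easy to evaluate numerically, which gives a quick check of items 5 and 6: for the exponential law e_F(u) ≡ 1/λ (memorylessness), while for the Pareto law e_F(u) = (κ+u)/(α−1) grows linearly, a signature of a heavy tail. A small sketch with illustrative parameter values:

```python
import math

def mean_excess(sf, u, zmax, n=200000):
    """e_F(u) = (1/sf(u)) * int_u^zmax sf(x) dx by the trapezoidal rule,
    where sf is the survival function F-bar and zmax truncates the integral."""
    h = (zmax - u) / n
    total = 0.5 * (sf(u) + sf(zmax)) + sum(sf(u + k * h) for k in range(1, n))
    return total * h / sf(u)

lam = 2.0
e_exp = mean_excess(lambda x: math.exp(-lam * x), u=3.0, zmax=40.0)
# exponential law is memoryless: e_F(u) = 1/lam for every u

kappa, alpha = 1.0, 3.0
e_par = mean_excess(lambda x: (kappa / (kappa + x)) ** alpha, u=5.0, zmax=100000.0)
# Pareto: e_F(u) = (kappa + u)/(alpha - 1) grows linearly (heavy tail)

print(e_exp, e_par)
```

The truncation point `zmax` must be taken much larger for the Pareto tail than for the exponential one, which is itself an illustration of the light/heavy distinction.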

3.11 Ruin problem with investment

In this section we consider an insurance company that invests its capital in a Black–Scholes market with the two assets B = (B_t)_{t≥0} and S = (S_t)_{t≥0} defined as
\[
\begin{cases}
dB_t=r\,B_t\,dt\,, & B_0=1\,;\\
dS_t=a\,S_t\,dt+\sigma\,S_t\,dw_t\,, & S_0>0\,,
\end{cases} \tag{3.42}
\]
where (w_t)_{t≥0} is a Brownian motion and r, a, σ are non-negative constants. Let (F_t)_{t≥0} be the filtration on this model defined as
\[
\mathcal F_t=\sigma\{w_s\,,X_s\,,\ s\le t\}\,, \tag{3.43}
\]
where (X_t)_{t≥0} is the total claim amount process defined in (3.3). We assume that at each time moment t ≥ 0 the insurance company holds β_t units of the asset B and γ_t units of the asset S. So, the wealth (the risk process) at the instant t > 0 is equal to

\[
U_t=\beta_t B_t+\gamma_t S_t\,. \tag{3.44}
\]

We denote π_t = (β_t, γ_t) and assume that the process π = (π_t)_{t≥0} is adapted to the filtration (F_t)_{t≥0}. In this case π is called a financial strategy.

Definition 3.5. A financial strategy π = (π_t)_{t≥0} with π_t = (β_t, γ_t) is said to be admissible if for any t ≥ 0
\[
\int_0^t\big(|\beta_s|+\gamma_s^{2}\big)\,ds<\infty\quad\text{a.s.}
\]
and if, for any t ≥ 0,
\[
U_t=\beta_t B_t+\gamma_t S_t=u+\int_0^t\beta_s\,dB_s+\int_0^t\gamma_v\,dS_v+Z_t\,, \tag{3.45}
\]
where u > 0 is an initial endowment and Z_t = ct − X_t.

Proposition 3.5. Let u > 0 and let γ = (γ_t)_{t≥0} be a square integrable process, i.e. for all t > 0
\[
\int_0^t\gamma_v^{2}\,dv<\infty\quad\text{a.s.}
\]
We set
\[
\beta_t=u+\int_0^t\gamma_v\,d\widetilde S_v-\gamma_t\widetilde S_t+\widetilde Z_t\,, \tag{3.46}
\]
where \(\widetilde S_t=S_t/B_t\) and \(\widetilde Z_t=\int_0^t B_v^{-1}\,dZ_v\). Then the financial strategy (β_t, γ_t)_{t≥0} is admissible.

Proof. First of all note that the process (3.46) is integrable, i.e. for any t > 0
\[
\int_0^t|\beta_v|\,dv<\infty\quad\text{a.s.}
\]
The definition (3.46) implies that the discounted wealth process
\[
\widetilde U_t=\frac{U_t}{B_t}=\beta_t+\gamma_t\widetilde S_t
\]
admits the following stochastic differential:
\[
d\widetilde U_t=\gamma_t\,d\widetilde S_t+d\widetilde Z_t\,.
\]

Therefore, the Itô formula implies that
\[
dU_t=d\big(B_t\widetilde U_t\big)=B_t\,d\widetilde U_t+\widetilde U_t\,dB_t
=\beta_t\,dB_t+\gamma_t\big(\widetilde S_t\,dB_t+B_t\,d\widetilde S_t\big)+B_t\,d\widetilde Z_t\,.
\]

Taking into account here that
\[
B_t\,d\widetilde Z_t=dZ_t\qquad\text{and}\qquad \widetilde S_t\,dB_t+B_t\,d\widetilde S_t=d\big(B_t\widetilde S_t\big)=dS_t\,,
\]
we get the equality (3.45).

We denote by ς = (ς_t)_{t≥0} the proportional strategy, i.e.
\[
\varsigma_t=\frac{\gamma_t S_t}{\beta_t B_t+\gamma_t S_t}
=\frac{\gamma_t\widetilde S_t}{\beta_t+\gamma_t\widetilde S_t}
=\frac{\gamma_t\widetilde S_t}{u+\int_0^t\gamma_v\,d\widetilde S_v+\widetilde Z_t}\,.
\]

For this strategy we can rewrite the equation (3.45) as
\[
U_t=u+\int_0^t\big(r+(a-r)\varsigma_s\big)U_s\,ds+\sigma\int_0^t\varsigma_s\,U_s\,dw_s+ct-X_t\,.
\]

In this section we assume that
\[
\varsigma_t\equiv\delta\,, \tag{3.47}
\]
where δ ≥ 0 is a fixed nonrandom constant. In view of

\[
d\widetilde S_t=(a-r)\,\widetilde S_t\,dt+\sigma\,\widetilde S_t\,dw_t\,,
\]
we obtain for \(V_t=\gamma_t\widetilde S_t\) the following stochastic differential equation:
\[
dV_t=\delta(a-r)\,V_t\,dt+\delta\sigma\,V_t\,dw_t+\delta\,d\widetilde Z_t\,,\qquad V_0=\delta u\,.
\]

Through the Itô formula we can represent the process V_t in the following form:
\[
V_t=e^{\xi_t}\Big(\delta u+\delta\int_0^t e^{-\xi_s}\,B_s^{-1}\,dZ_s\Big)\,,
\]
where ξ_t = a_1 t + σ_δ w_t, a_1 = δ(a−r) − σ_δ²/2 and σ_δ = δσ. So, to get the property (3.47) we set
\[
\gamma_t=\frac{e^{\xi_t}\big(\delta u+\delta\int_0^t e^{-\xi_s}\,B_s^{-1}\,dZ_s\big)}{\widetilde S_t}\,,\qquad 0\le t\le T\,.
\]

For this strategy the risk process can be written as
\[
U_t=u+a_\delta\int_0^t U_s\,ds+\sigma_\delta\int_0^t U_s\,dw_s+Z_t\,, \tag{3.48}
\]
where u > 0 is the initial capital, a_δ = r + δ(a−r) and σ_δ = δσ. The ruin probability is
\[
\psi(u)=P\Big(\inf_{t\ge 0}U_t<0\Big)\,. \tag{3.49}
\]
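The infinite-horizon probability (3.49) is not available in closed form, but a rough finite-horizon estimate can be obtained by an Euler–Maruyama discretization of (3.48) with simulated Poisson claim arrivals. The scheme below (at most one claim per step, exponential claim sizes, and all parameter values) is an illustrative assumption, not a construction from the text.

```python
import random, math

random.seed(3)

def ruin_estimate(u, delta, T=20.0, dt=0.02, npaths=500,
                  r=0.02, a=0.06, sigma=0.3, lam=1.0, c=1.0, gamma=1.0):
    """Crude finite-horizon Monte Carlo estimate of P(inf_{t<=T} U_t < 0)
    for the risk process (3.48) with Exp(gamma) claims: Euler-Maruyama for
    the diffusion part, one possible claim per step (probability lam*dt)."""
    a_d = r + delta * (a - r)       # a_delta
    s_d = delta * sigma             # sigma_delta
    nsteps = int(T / dt)
    ruined = 0
    for _ in range(npaths):
        U = u
        for _ in range(nsteps):
            dw = random.gauss(0.0, math.sqrt(dt))
            claim = random.expovariate(gamma) if random.random() < lam * dt else 0.0
            U += a_d * U * dt + s_d * U * dw + c * dt - claim
            if U < 0.0:
                ruined += 1
                break
    return ruined / npaths

est = ruin_estimate(u=5.0, delta=0.5)
print(est)
```

Such an estimate is biased downward for the infinite-horizon quantity (3.49), since ruin after the horizon T is not counted; it is only meant to show the qualitative dependence on u and δ.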

We start to study this function for any u ≥ 0.

Proposition 3.6. The function ϕ(u) = 1 − ψ(u) satisfies for all u ≥ 0 the following differential equation
\[
\frac{\sigma_\delta^{2}u^{2}}{2}\,\phi''(u)+\big(a_\delta u+c\big)\,\phi'(u)-\lambda\,\phi(u)+\lambda\int_0^u\phi(u-y)\,dF(y)=0 \tag{3.50}
\]
with the boundary conditions
\[
c\,\phi'(0)=\lambda\,\phi(0)\qquad\text{and}\qquad \phi(+\infty)=1\,. \tag{3.51}
\]

Proof. We denote by (η_t)_{t≥0} the solution of the following stochastic equation
\[
d\eta_t=(a_\delta\eta_t+c)\,dt+\sigma_\delta\,\eta_t\,dw_t\,,\qquad \eta_0=u\,.
\]

By the Itô formula, we can solve it, i.e.
\[
\eta_t=u\,e^{\zeta_t}+c\,e^{\zeta_t}\int_0^t e^{-\zeta_s}\,ds\qquad\text{and}\qquad \zeta_t=\widetilde a\,t+\sigma_\delta\,w_t\,, \tag{3.52}
\]
where \(\widetilde a=a_\delta-\sigma_\delta^{2}/2\). Now we fix some h > 0 and then, in view of the definition of ϕ, we can represent this function as

\[
\phi(u)=E\,1_{\{\inf_{t\ge 0}U_t\ge 0\}}
=E\Big(1_{\{\inf_{0\le t\le h}U_t\ge 0\}}\,E\big(1_{\{\inf_{t\ge h}U_t\ge 0\}}\,\big|\,\mathcal F_h\big)\Big)\,.
\]

It should be noted that the process (3.48) is Markovian, i.e.
\[
E\big(1_{\{\inf_{t\ge h}U_t\ge 0\}}\,\big|\,\mathcal F_h\big)=\phi(U_h)
\]
and, therefore,
\[
\phi(u)=E\Big(1_{\{\inf_{0\le t\le h}U_t\ge 0\}}\,\phi(U_h)\Big)\,.
\]

Moreover, we can represent this function in the form
\[
\phi(u)=A_1(u)+A_2(u)+A_3(u)\,, \tag{3.53}
\]
where
\[
A_1(u)=E\big(1_{\{N_h=0\}}\,\phi(\eta_h)\big)\,,\qquad
A_2(u)=E\big(1_{\{\inf_{0\le t\le h}U_t\ge 0\}}\,1_{\{N_h=1\}}\,\phi(U_h)\big)
\]
and
\[
A_3(u)=E\big(1_{\{\inf_{0\le t\le h}U_t\ge 0\}}\,1_{\{N_h\ge 2\}}\,\phi(U_h)\big)\,.
\]

We rewrite the equation (3.53) as
\[
\frac{A_1(u)-\phi(u)}{h}+\frac{A_2(u)}{h}+\frac{A_3(u)}{h}=0\,. \tag{3.54}
\]

It is clear that
\[
A_1(u)-\phi(u)=\big(e^{-\lambda h}-1\big)\,\phi(u)+e^{-\lambda h}\,E\big(\phi(\eta_h)-\phi(u)\big)\,.
\]

In addition, it is well known (see, for example, [8]) that the function ϕ is twice continuously differentiable. So, by the Itô formula,
\[
E\big(\phi(\eta_h)-\phi(u)\big)=E\int_0^h\Big((a_\delta\eta_v+c)\,\phi'(\eta_v)+\frac{\sigma_\delta^{2}\eta_v^{2}}{2}\,\phi''(\eta_v)\Big)dv
\]
and, therefore,
\[
\lim_{h\to 0}\frac{A_1(u)-\phi(u)}{h}=(a_\delta u+c)\,\phi'(u)+\frac{\sigma_\delta^{2}u^{2}}{2}\,\phi''(u)-\lambda\,\phi(u)\,.
\]

Then we can represent the term A_2(u) as
\[
A_2(u)=E\Big(1_{\{\eta_{T_1}\ge Y_1\}}\,1_{\{N_h=1\}}\,\phi(U_h)\Big)\,.
\]

Note here that on the set {N_h = 1}
\[
U_h=\big(\eta_{T_1}-Y_1\big)\,e^{\zeta_h-\zeta_{T_1}}+c\int_{T_1}^h e^{\zeta_h-\zeta_s}\,ds\,,
\]
where the process (ζ_t)_{t≥0} is given in (3.52). So, on the set
\[
\{N_h=1\}\cap\{Y_1\le\eta_{T_1}\} \tag{3.55}
\]
we get
\[
|U_h-u+Y_1|\le (u+\eta_h^*)\,\big|e^{2\zeta_h^*}-1\big|+c\,h\,e^{2\zeta_h^*}+\eta_h^*\,,
\]
where
\[
\eta_h^*=\sup_{0\le s\le h}|\eta_s-u|\qquad\text{and}\qquad \zeta_h^*=\sup_{0\le s\le h}|\zeta_s|\,.
\]
From the definition of (η_t)_{t≥0} it follows that
\[
\eta_h^*\le u\,\big|e^{\zeta_h^*}-1\big|+c\,h\,e^{2\zeta_h^*}\,.
\]

Therefore, on the set (3.55)
\[
|U_h-u+Y_1|\le B_h^*(\zeta_h^*)\,,
\]
where
\[
B_h^*(x)=\big(u+u\,|e^{x}-1|+c\,h\,e^{2x}\big)\,\big|e^{2x}-1\big|+2c\,h\,e^{2x}+u\,|e^{x}-1|\,.
\]

Taking into account that the process (ζ_t)_{t≥0} is continuous, we obtain that
\[
\lim_{h\to 0}\zeta_h^*=0\quad\text{a.s.}
\]
Moreover, using the properties of a Brownian motion, one can check directly that for all γ > 0 and 0 < t < ∞
\[
E\,e^{\gamma\zeta_t^*}<\infty\,.
\]

So, by the dominated convergence theorem

\[
\lim_{h\to 0}E\,B_h^*(\zeta_h^*)=0\,.
\]

In view of the independence of the processes (ζ_t)_{t≥0} and (N_t)_{t≥0}, we obtain that
\[
\lim_{h\to 0}\frac{A_2(u)}{h}=\lambda\int_0^u\phi(u-y)\,dF(y)\,. \tag{3.56}
\]

Finally, for the last term in (3.54) it is easy to see that
\[
\frac{A_3(u)}{h}\le \frac{P(N_h\ge 2)}{h}\to 0\quad\text{as } h\to 0\,.
\]

So, taking the limit in (3.54) as h → 0 we get the equation (3.50). Hence Proposition 3.6.

To study the asymptotic properties of the ruin probability (3.49)

as u → ∞, we set
\[
\nu=\frac{2a_\delta}{\sigma_\delta^{2}}-1\,. \tag{3.57}
\]
The following theorem gives the asymptotic behavior of the ruin probability depending on this parameter (see, for example, [4] and [9]).

Theorem 3.6. For the proportional strategy (3.47) the ruin probability (3.49) has the following asymptotic (as u → ∞) properties:

1. if ν ≤ 0, then ψ(u) = 1 for all u ≥ 0;

2. if ν > 0 and \(E\,Y_1^{\nu}<\infty\), then there is a constant 0 < ψ_* < ∞ such that
\[
\lim_{u\to\infty}u^{\nu}\,\psi(u)=\psi_*\,.
\]

Now one needs to choose the proportional investment coefficient δ > 0 in the strategy (3.47) such that the power parameter ν defined in (3.57) is positive, i.e. 0 < δ < δ_*, where
\[
\delta_*=\frac{a-r+\sqrt{(a-r)^{2}+2r\sigma^{2}}}{\sigma^{2}}\,.
\]
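The threshold δ_* is just the positive root of the quadratic σ²δ² − 2(a−r)δ − 2r = 0 obtained from ν(δ) = 0, which is easy to check numerically (the market parameters below are illustrative assumptions):

```python
import math

r, a, sigma = 0.02, 0.06, 0.3    # illustrative market parameters

def nu(delta):
    """Power parameter (3.57): nu = 2*a_delta/sigma_delta^2 - 1."""
    a_d = r + delta * (a - r)    # a_delta
    s_d = delta * sigma          # sigma_delta
    return 2.0 * a_d / s_d ** 2 - 1.0

# delta_* is the positive root of sigma^2 d^2 - 2(a-r) d - 2r = 0,
# i.e. the investment level at which nu changes sign
delta_star = (a - r + math.sqrt((a - r) ** 2 + 2.0 * r * sigma ** 2)) / sigma ** 2

print(delta_star, nu(delta_star))
```

For 0 < δ < δ_* one gets ν(δ) > 0 (polynomial decay of ψ), while for δ ≥ δ_* ruin is certain by Theorem 3.6.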

Therefore, if there exists 0 < δ < δ_* for which \(E\,Y_1^{\nu}<\infty\), then, according to Theorem 3.6, for some constant 0 < ψ_* < ∞
\[
\lim_{u\to\infty}u^{\nu}\,\psi(u)=\psi_*\,.
\]

Remark 3.4. This result means that in the case where the net profit condition (3.18) does not hold, i.e. c ≤ λµ, to avoid almost sure bankruptcy the company is obliged to invest its capital in a Black–Scholes market through the strategy (3.47) with 0 < δ < δ_* and \(E\,Y_1^{\nu}<\infty\).

3.12 Exercises VIII

1. Give the definition of an admissible strategy. Show that the set of admissible strategies is not empty.

2. Is there an admissible strategy with initial endowment u = 20 euros and γ_t = (1 + S_t)^{−2}? Justify the answer.

3. Is there an admissible strategy with initial endowment u = 2S_0 and \(\gamma_t=1/\sqrt{\widetilde S_t}\) with \(\widetilde S_t=S_t/B_t\)? Justify the answer.

A Appendix

In this section we state the main limit results of probability theory, which can be found, for example, in [10].

A.1 Strong law of large numbers

Theorem A.1. Let (ξ_j)_{j≥1} be i.i.d. random variables with E|ξ_1| < ∞. Then
\[
\lim_{n\to\infty}\frac{1}{n}\sum_{j=1}^{n}\xi_j=E\xi_1\quad\text{a.s.}
\]

A.2 Kolmogorov zero-one law

Theorem A.2. Let (ξ_j)_{j≥1} be a sequence of independent random variables and
\[
\mathcal X=\bigcap_{n\ge 1}\sigma\big\{(\xi_j)_{j\ge n}\big\}\,.
\]
Then for any \(A\in\mathcal X\) the probability P(A) = 0 or P(A) = 1.

A.3 Three series theorem

Theorem A.3. Let (ξ_j)_{j≥1} be a sequence of independent random variables. For the almost sure convergence of the series \(\sum_{n\ge 1}\xi_n\) it is necessary that for any c > 0 the following series converge:
\[
\sum_{n\ge 1}E\,\xi_n^{c}\,,\qquad \sum_{n\ge 1}E\big(\xi_n^{c}-E\,\xi_n^{c}\big)^{2}\,,\qquad \sum_{n\ge 1}P\big(|\xi_n|>c\big)\,,
\]
and it is sufficient that these series converge for some fixed c > 0, where \(\xi_n^{c}=\xi_n\,1_{\{|\xi_n|\le c\}}\).

A.4 Central limit theorem

First, we recall the notion of weak convergence of random variables.

Definition A.1. A sequence of random variables (ξ_n)_{n≥1} is said to converge weakly to a random variable ξ, i.e. ξ_n ⇒ ξ as n → ∞, if for any bounded continuous function g: ℝ → ℝ
\[
\lim_{n\to\infty}E\,g(\xi_n)=E\,g(\xi)\,.
\]

Theorem A.4. Let (ξ_j)_{j≥1} be i.i.d. random variables with Eξ_1 = 0 and \(E\xi_1^{2}=\sigma^{2}\). Then
\[
\frac{\sum_{j=1}^{n}\xi_j}{\sqrt{n}}\Longrightarrow\xi\quad\text{as } n\to\infty\,,
\]
where ξ is a Gaussian random variable with the parameters (0, σ²).
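Theorem A.4 is easy to see in simulation: standardized sums of centered uniform random variables (variance 1/12) are approximately Gaussian. A minimal seeded sketch, with the sample sizes chosen purely for illustration:

```python
import random, math

random.seed(4)
n, reps = 400, 5000
sigma = math.sqrt(1.0 / 12.0)       # std of Uniform(0,1) after centering

# standardized sums: (xi_1 + ... + xi_n)/sqrt(n) with xi_j = U_j - 1/2
zs = [sum(random.random() - 0.5 for _ in range(n)) / math.sqrt(n)
      for _ in range(reps)]

mean = sum(zs) / reps
var = sum(z * z for z in zs) / reps
frac = sum(1 for z in zs if z <= sigma) / reps   # approx. Phi(1) ~ 0.8413
print(mean, var, frac)
```

The empirical mean, variance, and one-sigma probability all match the Gaussian limit up to Monte Carlo error of order 1/√reps.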

A.5 Law of the iterated logarithm

Theorem A.5. Let (ξ_j)_{j≥1} be i.i.d. random variables with Eξ_1 = 0 and \(E\xi_1^{2}=\sigma^{2}\). Then
\[
\limsup_{n\to\infty}\frac{\sum_{j=1}^{n}\xi_j}{\sqrt{n\,\ln(\ln n)}}=\sigma\sqrt{2}\quad\text{a.s.}
\]

References

[1] Asmussen S., Albrecher H. Ruin Probabilities. - Singapore: World Scientific, 2010.

[2] Buraczewski D., Damek E., Mikosch Th. Stochastic Models

with Power-Law Tails. The Equation X = AX + B. - New York: Springer, 2016.

[3] Feller W. An Introduction to Probability Theory and Its Ap- plications, 2nd edition. - New York: Wiley, 1971.

[4] Frolova A., Kabanov Yu., Pergamenshchikov S. In the insur- ance business risky investments are dangerous // Finance and Stochastics. - 2002. - Vol.6. - P. 227-235.

[5] Grandell I. Aspects of Risk theory. - Berlin: Springer, 1990.

[6] Kabanov Yu., Pergamenshchikov S. In the insurance business risky investments are dangerous: the case of negative risk sums // Finance and Stochastics. - 2016. - Vol.20, No 2. P. 355-379.

[7] Mikosch Th. Non-life Insurance Mathematics. An Introduction with Stochastic Processes. - Berlin: Springer-Verlag, 2000.

[8] Paulsen J. with Applications to Risk Theory. Lecture Notes. - Univ. of Bergen and Univ. of Copenhagen, 1996.

[9] Pergamenshchikov S., Zeitouny O. Ruin probability in the presence of risky investments // Stoch. Process. Appl. - 2006. - Vol.116. - P. 267-278. Erratum to: Ruin probability in the presence of risky invest- ments // Stoch. Proc. Appl. - 2009. - Vol.119. - P. 305-306.

[10] Shiryaev A.N. Probability. - Berlin: Springer, 1996.

[11] Toulouse P. Théorèmes de probabilités et statistique. - Paris: Dunod, 1999.

The publication was prepared in the author's edition. Printed by the digital printing facility of the Publishing House of Tomsk State University, February 2020, 50 copies.