Risk theory

Harri Nyrhinen, University of Helsinki, Fall 2017

Contents

1 Introduction
2 Background from Probability Theory
3 The number of claims
  3.1 Poisson distribution and process
  3.2 Mixed Poisson variable
  3.3 The number of claims of a single policy-holder
  3.4 Mixed Poisson process
4 Total claim amount
5 Viewpoints on claim size distributions
  5.1 Tabulation method
  5.2 Analytical methods
  5.3 On the estimation of the tails of the distribution
6 Calculation and estimation of the total claim amount
  6.1 Panjer method
  6.2 Approximation of the compound distributions
    6.2.1 Limiting behaviour of compound distributions
    6.2.2 Refinements of the normal approximation
    6.2.3 Applications of approximation methods
  6.3 Simulation of compound distributions
    6.3.1 Producing observations
    6.3.2 Estimation
    6.3.3 Increasing efficiency of simulation of small probabilities
  6.4 An upper bound for the tail probability
  6.5 Modelling dependence
    6.5.1 Mixing models
    6.5.2 Copulas
7 Reinsurance
  7.1 Excess of loss (XL)
  7.2 Quota share (QS)
  7.3 Surplus
  7.4 Stop loss (SL)
8 Outstanding claims
  8.1 Development triangles
  8.2 Chain-Ladder method
  8.3 Predicting the unknown claims
  8.4 Credibility estimates for outstanding claims
9 Solvency in the long run
  9.1 Classical ruin problem
  9.2 Practical long run modelling of the capital development
10 Insurance from the viewpoint of utility theory
  10.1 Utility functions
  10.2 Utility of insurance

1 Introduction

Consider briefly the nature of the insurance industry and the motivation for buying insurance contracts. As an example, think about a collection of houses and the associated risk of fires. For a single house-owner, a fire means a huge economic loss. Hedging against the risk by building up a bank account with a sufficient amount of money is not realistic. Usually the problem is solved by means of an insurance contract. This means that each house-owner pays a premium to an insurance company. The premium corresponds roughly to the mean level of the house-owner's losses because of fires in one year. By the contract, the company compensates these losses. Thus the risk has moved to the insurance company, and the house-owners are protected against random large losses by means of a deterministic moderate premium. The company typically makes a large number of similar contracts. The law of large numbers can be applied to see that the company is able to manage the compensations by means of moderate premiums (a small simulation illustrating this is sketched at the end of this section).

We have already introduced two important cash flows associated with the insurance business, namely, the compensations and the premiums. There are many other cash flows, as illustrated in the following picture.

[Diagram: cash flows of an insurance company. The boxes connected to the company are: premiums; compensations; returns on the investments; administration costs; dividends, new capital, taxes, etc.]

The course is focussed on the analysis of compensations. Examples of the goals are:
- how should we describe the compensation process
- how should we estimate the solvency of an insurance company.

The course can be viewed as a study of risks associated with non-life insurance companies. The main source for the course is Part I of the book

DPP: Daykin, C., Pentikäinen, T. and Pesonen, M. (1994). Practical Risk Theory for Actuaries. Chapman & Hall, London.

The reader is referred to this book especially to get more applied discussion of various topics. More detailed references to appropriate chapters will be given during the course.
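To make the law-of-large-numbers argument concrete, here is a minimal Python sketch (an added illustration; the fire probability, the loss size and the 10% premium loading are made-up numbers, not from the notes). It estimates, for portfolios of increasing size, the probability that one year's pooled claims exceed the pooled premiums.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up parameters: each insured house burns down during the year
# with probability q, causing a fixed loss; the annual premium is the
# expected loss per house plus a 10% safety loading.
q, loss = 0.001, 200_000.0
premium = 1.10 * q * loss          # 220 per house per year

for n in (10_000, 100_000, 1_000_000):         # number of contracts
    fires = rng.binomial(n, q, size=100_000)   # fires per simulated year
    p = np.mean(fires * loss > n * premium)    # claims exceed premiums
    print(f"n = {n:9d}: P(claims > premiums) ~ {p:.4f}")
```

With the same relative loading, the printed exceedance probability decreases markedly as n grows, which is exactly the pooling effect described above.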
2 Background from Probability Theory

The central subject of our interest is the compensation process, called later the claims process. We will consider it as a random variable or as a stochastic process. We list some concepts and facts from probability theory which are assumed to be known.

1. Probability space

A probability space is a triple $(\Omega, \mathcal{S}, \mathbb{P})$ where $\Omega$ is the sample space, $\mathcal{S}$ is a sigma-algebra of $\Omega$ whose sets are called events, and $\mathbb{P}$ is a probability measure.

2. Random variable

A measurable map $\xi : (\Omega, \mathcal{S}) \to (\mathbb{R}, \mathcal{B})$ is called a random variable, where $\mathcal{B}$ is the Borel sigma-algebra of $\mathbb{R}$. In the sequel, the measurability of a real-valued function refers to measurability with respect to the Borel sigma-algebra $\mathcal{B}$.

3. Distribution

The distribution $P$ of the random variable $\xi$ is the probability measure on $(\mathbb{R}, \mathcal{B})$ such that

$$P(B) = \mathbb{P}(\xi^{-1}(B)) = \mathbb{P}(\omega \in \Omega \mid \xi(\omega) \in B)$$

for every $B \in \mathcal{B}$. If the random variables $\xi$ and $\eta$ have the same distribution then we write $\xi \overset{L}{=} \eta$.

4. Distribution function, density function, probability mass function

The distribution function $F : \mathbb{R} \to \mathbb{R}$ of the random variable $\xi$ is defined by

$$F(x) = \mathbb{P}(\xi \le x) = P((-\infty, x]).$$

The function $f : \mathbb{R} \to \mathbb{R}$ is the density (function) of $\xi$ if

$$F(x) = \int_{-\infty}^{x} f(t)\,dt$$

for every $x \in \mathbb{R}$. In this case, $\xi$ is a continuous random variable. If there exists a countable subset $\{x_1, x_2, \ldots\}$ of $\mathbb{R}$ such that $\mathbb{P}(\xi \in \{x_1, x_2, \ldots\}) = 1$ then $\xi$ is discrete. Then the probability mass function $g : \mathbb{R} \to \mathbb{R}$ of $\xi$ is defined by $g(x) = \mathbb{P}(\xi = x)$.

In the sequel we often consider mixtures of continuous and discrete distributions. Then the distribution function has the form

$$F(x) = \int_{-\infty}^{x} f(t)\,dt + \sum_{x_i \le x} \mathbb{P}(\xi = x_i) \qquad (2.1)$$

for all $x \in \mathbb{R}$.

5. Expectation, variance, higher order moments

The expectation of a random variable $\xi$ is defined by

$$E(\xi) = \int_{\Omega} \xi(\omega)\,d\mathbb{P}(\omega)$$

under the assumption that $E(|\xi|) < \infty$. If $\xi \ge 0$ almost surely we also allow $+\infty$ as the value of the expectation. Thus $E(\xi)$ is defined for every non-negative random variable $\xi$.

Let $h : \mathbb{R} \to \mathbb{R}$ be a measurable function. If $E(|h(\xi)|) < \infty$ and $F$ is the distribution function of $\xi$ then

$$E(h(\xi)) = \int_{-\infty}^{\infty} h(x)\,dF(x).$$

If $\xi$ has the density $f$ then

$$E(h(\xi)) = \int_{-\infty}^{\infty} f(x)h(x)\,dx.$$

If $\xi$ is discrete with probability mass function $g$ then

$$E(h(\xi)) = \sum_{i=1}^{\infty} g(x_i)h(x_i),$$

where it is assumed that $\sum_{i=1}^{\infty} g(x_i) = 1$. For the mixture (2.1) of a continuous and a discrete distribution, it holds that

$$E(h(\xi)) = \int_{-\infty}^{\infty} f(x)h(x)\,dx + \sum_{i=1}^{\infty} \mathbb{P}(\xi = x_i)h(x_i).$$

The $n$th (origin) moment $a_n$ of $\xi$ is defined by

$$a_n = E(\xi^n) = \int_{-\infty}^{\infty} x^n\,dF(x)$$

if $E(|\xi|^n) < \infty$, $n = 1, 2, \ldots$. Hence $a_1 = E(\xi)$. We also often write $\mu = E(\xi)$ or $\mu_\xi = E(\xi)$. The $n$th central moment $\mu_n$ is defined by

$$\mu_n = E((\xi - a_1)^n), \qquad n \ge 2.$$

The variance of $\xi$ is

$$\sigma_\xi^2 = \mathrm{Var}(\xi) = \mu_2$$

and the standard deviation is $\sigma_\xi = \sqrt{\mu_2}$. The skewness $\gamma_\xi$ is defined by

$$\gamma_\xi = E((\xi - a_1)^3)/\sigma_\xi^3 = \mu_3/\sigma_\xi^3.$$
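As a numerical illustration of these moment formulas (an addition to the notes, with made-up parameters): take a claim of type (2.1) that equals zero with probability 0.9 and is otherwise exponentially distributed with mean 2000, a typical shape for a single policy's annual claim. The sketch below compares the exact moments, derived from the origin moments $a_n$, with Monte Carlo estimates.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up mixture: no claim with probability p0, otherwise an
# exponentially distributed claim amount with the given mean.
p0, mean = 0.9, 2000.0

# Exact origin moments a_n = (1 - p0) * E(Y^n) with Y exponential:
# E(Y^n) = n! * mean^n.
a1 = (1 - p0) * mean
a2 = (1 - p0) * 2 * mean**2
a3 = (1 - p0) * 6 * mean**3

var  = a2 - a1**2                      # mu_2 = a_2 - a_1^2
mu3  = a3 - 3 * a1 * a2 + 2 * a1**3    # third central moment
skew = mu3 / var**1.5                  # gamma_xi = mu_3 / sigma^3

# Monte Carlo check: draw directly from the mixture.
n = 1_000_000
xi = np.where(rng.random(n) < p0, 0.0, rng.exponential(mean, n))

print(f"mean  exact {a1:10.1f}  MC {xi.mean():10.1f}")
print(f"sd    exact {var**0.5:10.1f}  MC {xi.std():10.1f}")
print(f"skew  exact {skew:10.3f}  MC "
      f"{np.mean((xi - xi.mean())**3) / xi.std()**3:10.3f}")
```

The strong positive skewness (around 6.5 here) is characteristic of claim size distributions.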
6. Moment generating function

The moment generating function $M = M_\xi$ of $\xi$ is a function $\mathbb{R} \to \mathbb{R} \cup \{+\infty\}$ which is determined by

$$M_\xi(s) = E(e^{s\xi}).$$

The cumulant generating function $c = c_\xi$ is a function $\mathbb{R} \to \mathbb{R} \cup \{+\infty\}$ determined by

$$c_\xi(s) = \log M_\xi(s).$$

Both of the functions are always defined. The following results hold.

a) If the moment generating functions of two random variables coincide and are finite in a non-empty open subset of $\mathbb{R}$ then the distributions of the random variables coincide.

b) Let $\xi$ and $\eta$ be independent random variables, that is,

$$\mathbb{P}(\xi \in A, \eta \in B) = \mathbb{P}(\xi \in A)\,\mathbb{P}(\eta \in B)$$

for every $A, B \in \mathcal{B}$, denoted by $\xi \perp\!\!\!\perp \eta$. Then

$$M_{\xi+\eta}(s) = M_\xi(s)\,M_\eta(s) \quad \text{and} \quad c_{\xi+\eta}(s) = c_\xi(s) + c_\eta(s)$$

for every $s \in \mathbb{R}$.

c) The moment generating function has derivatives of all orders in the interior of its domain. If $s$ is in that interior then the $n$th derivative $M_\xi^{(n)}(s)$ is

$$M_\xi^{(n)}(s) = E(\xi^n e^{s\xi}).$$

In particular, if $M_\xi$ is finite in a neighbourhood of the origin then

$$M_\xi^{(n)}(0) = E(\xi^n)$$

for every $n \in \mathbb{N}$. Furthermore,

$$c_\xi'(0) = E(\xi)$$

and

$$c_\xi^{(n)}(0) = E((\xi - a_1)^n) = \mu_n, \qquad n = 2, 3.$$

If $\mathbb{P}(\xi \ge 0) = 1$ then always

$$\lim_{s \to 0-} M_\xi^{(n)}(s) = E(\xi^n).$$

(A numerical check of c) for the exponential distribution is sketched at the end of this section.)

7. Conditional expectation

Let $\xi$ and $\eta$ be random variables. Assume that $E(\xi)$ exists and is finite. Let $\sigma(\eta)$ be the sigma-algebra generated by $\eta$, that is, $\sigma(\eta)$ is the smallest sub-sigma-algebra of $\mathcal{S}$ with respect to which $\eta$ is measurable. The conditional expectation of $\xi$ with respect to $\eta$ is the random variable $E(\xi \mid \eta)$ which satisfies

(i) $E(\xi \mid \eta)$ is $\sigma(\eta)$-measurable;
(ii) $E\{E(\xi \mid \eta)\,1(\eta \in B)\} = E(\xi\,1(\eta \in B))$ for every $B \in \mathcal{B}$.

In (ii), $1$ is the indicator function, that is, $1(\eta \in B)(\omega) = 1$ when $\eta(\omega) \in B$ and $0$ otherwise.
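As announced after result c), here is a small symbolic check (an added sketch using SymPy, not part of the original notes) for the exponential distribution with rate $\lambda$, whose moment generating function $M(s) = \lambda/(\lambda - s)$ is finite for $s < \lambda$ and whose $n$th moment is $n!/\lambda^n$.

```python
import sympy as sp

s, lam = sp.symbols('s lambda', positive=True)

# MGF of the exponential distribution with rate lambda, finite for s < lambda.
M = lam / (lam - s)
c = sp.log(M)  # cumulant generating function

# M^(n)(0) should equal E(xi^n) = n!/lambda^n, as in result c).
for n in range(1, 5):
    deriv_at_0 = sp.simplify(sp.diff(M, s, n).subs(s, 0))
    print(n, deriv_at_0, sp.factorial(n) / lam**n)   # the two columns agree

# c'(0) = E(xi) = 1/lambda and c''(0) = mu_2 = 1/lambda^2.
print(sp.simplify(sp.diff(c, s, 1).subs(s, 0)))
print(sp.simplify(sp.diff(c, s, 2).subs(s, 0)))
```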
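The defining properties (i) and (ii) of the conditional expectation can also be verified directly on a finite sample space. The following sketch (an added illustration with arbitrary made-up values) builds $E(\xi \mid \eta)$ by averaging $\xi$ over the level sets of $\eta$ on a uniform six-point $\Omega$, and then checks (ii) for one choice of $B$.

```python
import numpy as np

# A finite probability space: six equally likely sample points.
xi  = np.array([1.0, 4.0, 2.0, 0.0, 3.0, 5.0])   # made-up values of xi
eta = np.array([0,   0,   1,   1,   1,   2  ])   # made-up values of eta

# E(xi | eta) is sigma(eta)-measurable: constant on each level set of
# eta, equal there to the average of xi over the set (property (i)).
cond = np.empty_like(xi)
for y in np.unique(eta):
    cond[eta == y] = xi[eta == y].mean()

# Check property (ii) with B = {0, 2}: indicator 1(eta in B).
ind = np.isin(eta, [0, 2]).astype(float)
lhs = np.mean(cond * ind)   # E{ E(xi | eta) 1(eta in B) }
rhs = np.mean(xi * ind)     # E{ xi 1(eta in B) }
print(lhs, rhs)             # the two sides agree
```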