arXiv:1010.0829v1 [q-fin.PR] 5 Oct 2010

Information-Based Models for Finance and Insurance

by

Anthony Edward Vickerstaff Hoyle

Department of Mathematics
Imperial College London
London SW7 2AZ
United Kingdom

Submitted to Imperial College London for the degree of Doctor of Philosophy

2010

Abstract

In financial markets, the information that traders have about an asset is reflected in its price. The arrival of new information then leads to price changes. The ‘information-based framework’ of Brody, Hughston and Macrina (BHM) isolates the emergence of information, and examines its role as a driver of price dynamics. This approach has led to the development of new models that capture a broad range of price behaviour. This thesis extends the work of BHM by introducing a wider class of processes for the generation of the market filtration. In the BHM framework, each asset is associated with a collection of random cash flows. The asset price is the sum of the discounted expectations of the cash flows. Expectations are taken with respect to (i) an appropriate measure, and (ii) the filtration generated by a set of so-called information processes that carry noisy or imperfect market information about the cash flows. To model the flow of information, we introduce a class of processes termed Lévy random bridges (LRBs), generalising the Brownian and gamma information processes of BHM. Conditioned on its terminal value, an LRB is identical in law to a Lévy bridge. We consider in detail the case where the asset generates a single cash flow $X_T$ at a fixed date $T$. The flow of information about $X_T$ is modelled by an LRB with random terminal value $X_T$. An explicit expression for the price process is found by working out the discounted conditional expectation of $X_T$ with respect to the natural filtration of the LRB. New models are constructed using information processes related to the Poisson process, the Cauchy process, the stable-1/2 subordinator, the variance-gamma process, and the normal inverse-Gaussian process. These are applied to the valuation of credit-risky bonds, vanilla and exotic options, and non-life insurance liabilities.

Acknowledgements

I am very grateful to Lane P. Hughston, my supervisor, for his help and support. With his breadth of knowledge, Lane’s teachings have stretched beyond mathematical finance to such eclectic subjects as quantum mechanics, Italian opera, and Indian literature. Many thanks go to my co-author Andrea Macrina, whose enthusiasm for this work often pushed me forward. I would like to express my gratitude to members of the mathematical finance groups at Imperial College London and King’s College London, and members of the London Graduate School in Mathematical Finance, with particular thanks going to my teachers M. Davis, H. Geman, W. Shaw, and M. Zervos; and also to P. Barrieu, D. Brody, M. Pistorius, and R. Norberg. I am grateful to F. Delbaen, M. Schweizer, and other members of the mathematical finance group at ETH Zürich for their hospitality and support. I also wish to thank D. Taylor for his hospitality at the University of the Witwatersrand. This work was supported by an EPSRC Doctoral Training Grant and a European Science Foundation research visit grant under the Advanced Mathematical Methods in Finance programme (AMaMeF). Part of this work was undertaken when I was a student at King’s College London. Finally, for their unfailing support throughout my studies, I must thank my family—Mum, Dad, Charlotte and Thomas—and Christine.

The work presented in this thesis is my own.

A.E.V. Hoyle

Contents

1 Introduction and summary

2 Lévy processes and Lévy bridges
  2.1 Lévy processes
  2.2 Stable processes
  2.3 Lévy bridges
  2.4 Stable bridges
  2.5 Brownian motion and Brownian bridge
  2.6 Gamma process and gamma bridge
  2.7 Variance-gamma process and variance-gamma bridge
  2.8 Stable-1/2 subordinator and stable-1/2 bridge
  2.9 Cauchy process and Cauchy bridge
  2.10 Normal inverse-Gaussian process and normal inverse-Gaussian bridge
  2.11 Poisson process and Poisson bridge

3 Lévy random bridges
  3.1 Defining LRBs
  3.2 Finite-dimensional distributions
  3.3 LRBs as conditioned Lévy processes
  3.4 The Markov property
  3.5 Conditional terminal distributions
  3.6 Measure changes
  3.7 Dynamic consistency
  3.8 Increments of LRBs

4 Information-based asset pricing
  4.1 BHM framework
  4.2 Lévy bridge information
  4.3 European option pricing
  4.4 Binary bond

5 Variance-gamma information
  5.1 Variance-gamma random bridge
  5.2 Simulation
  5.3 VG equity model
  5.4 VG binary bond

6 Stable-1/2 information
  6.1 Stable-1/2 random bridge
  6.2 Insurance model
  6.3 Estimating the ultimate loss
  6.4 The paid-claims process
  6.5 Reinsurance
  6.6 Tail behaviour
  6.7 Generalized inverse-Gaussian prior
  6.8 Exposure adjustment
  6.9 Simulation
  6.10 Multiple lines of business

7 Cauchy information
  7.1 Cauchy random bridges
  7.2 Simulation
  7.3 Cauchy binary bond

8 Normal inverse-Gaussian information
  8.1 Normal inverse-Gaussian random bridge
  8.2 Simulation
  8.3 NIG equity model
  8.4 NIG binary bond

9 Poisson information
  9.1 Poisson random bridge
  9.2 Mixed Poisson processes
  9.3 Simulation
  9.4 Compound Poisson random bridge
  9.5 nth-to-default baskets

A Some integrals
  A.1 Proof of Proposition 2.8.2
  A.2 Proof of Proposition 2.8.5
  A.3 Proof of Proposition 2.9.1
  A.4 Proof of Proposition 2.9.3

Bibliography

List of Figures

2.1 Stable-1/2 bridge simulations

5.1 VG binary bond simulations: defaulting bonds
5.2 VG binary bond simulations: non-defaulting bonds
5.3 VG binary bond simulations: varying the rate parameter

6.1 Craighead curves: truncated Weibull time-changes
6.2 Simulations from the stable-1/2 random bridge reserving model

7.1 The Cauchy binary bond price plotted as a function of the information process
7.2 Cauchy binary bond simulations: defaulting bonds
7.3 Cauchy binary bond simulations: non-defaulting bonds

8.1 NIG binary bond simulations: defaulting bonds
8.2 NIG binary bond simulations: non-defaulting bonds
8.3 NIG binary bond simulations: varying the rate parameter

9.1 Poisson random bridge simulations

Chapter 1

Introduction and summary

The formation of prices in financial markets has long been a concern for economists. Macroeconomic factors, market microstructure, and investor preferences constitute a far-from-exhaustive list of the constituents that play a part. It is a daunting task to formulate a parsimonious pricing model that incorporates even this short list of effects and is capable of delivering useful and timely results. It is not surprising, then, that analysis of price formation tends to focus on a single element, or a small number of relevant elements, at any one time. The information that traders and investors have about an asset is reflected in its price. Information about an asset might include information about any of the effects that influence the formation of its price. The arrival of new information leads to changes in the price of the asset. Qualitatively speaking, if information about an asset arrives infrequently and in large lots, then its price process will exhibit large jumps. Conversely, if information arrives smoothly and steadily, then the impact of information arrival over short time-scales will be modest. Thus, the emergence of information as a driver of price dynamics presents itself as an interesting avenue of investigation. The objective of this thesis is to provide a framework for the derivation of the price dynamics of assets (or, indeed, the valuation dynamics of liabilities) through the modelling of information flow. It has become commonplace in the mathematical finance literature to develop pricing models under the risk-neutral measure. Two situations when this may be appropriate are (a) when the market is incomplete and there exists a multitude of equivalent martingale measures, and the selection of any one for pricing is made subjectively; and (b) when pricing models are calibrated to market prices


(of options), since these prices are (theoretically) expectations under the risk-neutral measure. We adopt a similar approach. Throughout this work we build models under a ‘suitable’ measure $\mathbb{Q}$. At time $t$, we define the price of a contingent claim $X_T$ due at time $T$ as $P_{tT}\,\mathbb{E}^{\mathbb{Q}}[X_T \mid \mathcal{F}_t]$, where $P_{tT}$ is a discount factor, and $\{\mathcal{F}_t\}$ is the market filtration. For this price to agree with the finance-theoretic arbitrage-free price, the measure $\mathbb{Q}$ needs to be interpreted as the $T$-forward measure in the case of stochastic interest rates, or the risk-neutral measure in the case of deterministic interest rates.

If $X_T$ is the size of a future liability that needs to be accounted for (‘booked’), then it may not be appropriate to use the market (i.e. arbitrage-free) price for valuation. This is particularly true if the market for such a claim is illiquid. In order to remain as general as possible, we refrain from being precise about the interpretation of $\mathbb{Q}$, and leave that to the judgement of the implementer.

In a stochastic model, it is the filtration $\{\mathcal{F}_t\}$ that encodes the emergence of information. Choosing an asset price process to be a geometric Brownian motion, for example, implies that the process is adapted to a Brownian filtration. Although this approach of implicitly choosing a filtration by specifying the law of a price process is common in mathematical finance, we wish to avoid it and to specify $\{\mathcal{F}_t\}$ directly. In particular, we postulate the existence of a market information process $\{\xi_{tT}\}$ which generates $\{\mathcal{F}_t\}$. Then prices are derived by taking conditional expectations of cash flows with respect to this filtration.

Earlier, we were careful to include the valuation of liabilities within our remit. We will consider in detail how the methods we develop can be applied to the calculation of the reserves that an insurance company should set aside to meet future claims. A non-life insurance company may underwrite various risks for a particular year in return for premiums. The company ‘incurs’ claims over the one-year period, which means that losses are triggered that the company is liable to cover. However, there may be a delay between the date a loss is triggered and the day it is reported to the company, the total size of a claim may not be known when the claim is reported, and a claim may not be paid by a single cash flow on a single date. The insurer may be paying for these losses for many years to come. The problem is: How much money should the insurer reserve at a given time to cover all future claim payments? This has implications for the company’s accounting, tax liability, solvency, capital adequacy, and investment strategy.

Chapter 2 begins with a brief introduction to Lévy processes, strictly stable processes, and Lévy bridges. A Lévy bridge is a process defined over a finite time horizon; it is a Lévy process whose terminal value is known from the outset. We provide a proof of the Markov property for Lévy bridges. For the remainder of the chapter we examine particular examples of these processes. First are the well-known Brownian motion and Brownian bridge; these are Gaussian processes with continuous sample paths. Brownian motion without drift is a stable process. Next are the gamma process and gamma bridge, which are increasing processes. The variance-gamma (VG) process is closely related to Brownian motion and the gamma process. In particular, a VG process is constructed by subordinating a Brownian motion by an independent gamma process, or by taking the difference of two independent gamma processes. We list some properties of the VG process and then examine VG bridges. We are able to utilise the relationship between the VG process and Brownian motion and the gamma process to derive two constructions of the VG bridge. These constructions write the VG bridge in terms of Brownian bridges, gamma bridges, and random volatility terms. The second stable process we examine is the stable-1/2 subordinator. This process can be constructed from the hitting times of a Brownian motion. We choose to examine this subordinator over other stable subordinators because its density is known in closed form; it will become apparent that this is a desirable feature in this work. The stable-1/2 subordinator has heavy tails; the expected value of the process at any future date is infinite. However, all positive moments of the stable-1/2 bridge exist. By subordinating a Brownian motion with an independent stable-1/2 subordinator we can construct a Cauchy process. The Cauchy process is the third stable process we consider. It too is heavy-tailed—its first moment does not exist. We find that the first two moments of the Cauchy bridge do exist and are finite. One of the effects of pinning the end point of the Cauchy process is to temper its wild behaviour. The last process we consider that has a continuous state space is the normal inverse-Gaussian (NIG) process. This process is constructed by subordinating a Brownian motion by an inverse-Gaussian (IG) process. It is very similar to the VG process; it is not surprising, then, that NIG bridges are similar to VG bridges. Finally, we come to the Poisson process and the Poisson bridge. These processes have a discrete state space.

The state space of the Poisson process is $\mathbb{N}_0$, and Poisson bridges are restricted to a subset of $\mathbb{N}_0$. We show how the Poisson bridge can be written as a Poisson process

under a time change that is not independent of the process. In Chapter 3 we define Lévy random bridges (LRBs). An LRB is a process defined over a finite time horizon whose bridge laws are the bridge laws of a Lévy process. It can be interpreted as a Lévy process conditioned to have a fixed marginal law at a fixed future date. The motivation for LRBs is the desire for Markov processes, defined over a fixed time interval, for which we have a distributional degree of freedom in the terminal value. We shall see that such processes are useful for asset pricing in the information-based framework. We derive various properties of LRBs, including: that LRBs are Markov processes; that the law of an LRB is equivalent to the law of a Lévy process (at least for all times up to the LRB’s termination time); that LRBs have stationary increments; that the joint distribution of the increments of an LRB has a generalized multivariate Liouville distribution; and that, if the path of an LRB is split into non-overlapping portions, then each portion is itself an LRB. The marginal characteristic function often proves convenient in the analysis of a Lévy process. This is not the case for an LRB. However, we are able to provide an expression for the transition law of a general LRB. The information-based framework of Brody, Hughston & Macrina is described in Chapter 4. In this framework, cash flows are functions of independent X-factors. ‘Information processes’ generate the market filtration, and reveal the values of the X-factors—thus, they reveal the values of the cash flows. Cash flows are priced by discounting their expected value given the market information. A general multi-factor set-up is described that allows for a rich dependency structure between cash flows. Much of the analysis is presented for a single X-factor market where the only information process is an LRB. The final value of the LRB is set to be the value of the X-factor.
Using a single factor may sound restrictive, but this set-up includes some well-known one-dimensional exponential Lévy models as special cases (the Black-Scholes model was recovered from an information-based model by Brody et al. [19]). Using Bayesian methods, we are able to derive the dynamics of the price of a cash flow. From this, we can price European call options on the cash flow price. An expression for the call price is given in a general LRB-information model. The material in Chapters 3 and 4 appears in [52]. In Chapter 5 we examine a one-parameter VG random bridge and apply it to information-based pricing. We derive two terminal value decompositions of the VG random bridge using the decomposition of the VG bridge. We show that a three-parameter VG process can be recovered by scaling a standard VG random bridge which has an asymmetric VG terminal law. This allows the VG equity model of Madan et al. [67] to be derived as a special case of a single X-factor information-based model, when information is provided by a standard VG random bridge. We price a binary bond in a VG information model, and include a rate parameter σ which acts like a volatility parameter. We provide two algorithms for the simulation of sample paths of the VG random bridge, and provide plots of simulated binary bond prices. We develop a non-life reserving model in Chapter 6 using a stable-1/2 random bridge to simulate the accumulation of paid claims, allowing for an arbitrary choice of a priori distribution for the ultimate loss. Taking a Bayesian approach to the reserving problem, we derive the process of the conditional distribution of the ultimate loss. The ‘best-estimate ultimate loss process’ is given by the conditional expectation of the ultimate loss. We derive explicit expressions for the best-estimate ultimate loss process, and for expected recoveries arising from aggregate excess-of-loss reinsurance treaties.
Use of a deterministic time change allows for the matching of any initial (increasing) development pattern for the paid claims. We show that these methods are well-suited to the modelling of claims where there is a non-trivial probability of catastrophic loss. The generalized inverse-Gaussian (GIG) distribution is shown to be a natural choice for the a priori ultimate loss distribution. For particular GIG parameter choices, the best-estimate ultimate loss process can be written as a rational function of the paid-claims process. We extend the model to include a second paid-claims process, and allow the two processes to be dependent. The results obtained can be applied to the modelling of multiple lines of business or multiple origin years. The multidimensional model has the attractive property that the dimensionality of calculations remains low, regardless of the number of paid-claims processes. An algorithm is provided for the simulation of the paid-claims processes. The material in this chapter appears in [53]. In Chapter 7 we derive properties of Cauchy random bridges, and describe a method for simulating sample paths. Modelling the information process as a Cauchy random bridge, we examine the pricing of a binary bond in an information-based model. We derive an explicit expression for the price of a call option on the bond price. Chapter 8 closely follows Chapter 5, except that we examine the NIG random bridge, and use it to model the information process, in place of the VG random bridge. Since the NIG and VG processes are similar, the information-based models their

respective random bridges produce are similar. In Chapter 9 we provide an example of an LRB with a discrete state-space, the Poisson random bridge. Poisson random bridges are counting processes, and we derive expressions for the waiting times between jumps in terms of the probability generating function of the terminal distribution. We show that a Poisson random bridge can be written as a Poisson process with a state-dependent intensity, and we derive an explicit expression for the intensity process. When the terminal distribution of a Poisson random bridge is a negative binomial distribution, we show that all of the increment distributions of the process are negative binomial. We then generalise this result to show that a Poisson random bridge with a mixed Poisson terminal distribution is a mixed Poisson process. That is, the distribution of any increment of the process is a Poisson distribution with a mixed mean. By making the jump sizes of the Poisson random bridge random, we construct the compound Poisson random bridge. We derive an expression for the characteristic function of a compound Poisson random bridge. Finally, we price an nth-to-default credit swap in a model where defaults occur at the jump times of a Poisson random bridge. In this credit swap, the buyer pays a premium in return for a lump-sum payment on the event of the nth default from a basket of credit risks.

Chapter 2

Lévy processes and Lévy bridges

We fix a probability space $(\Omega, \mathcal{F}, \mathbb{Q})$, and assume that all processes and filtrations under consideration are càdlàg. Unless otherwise stated, when discussing a stochastic process we assume that the process takes values in $\mathbb{R}$, begins at time 0, and that its filtration is that generated by the process itself. We work with a finite time horizon $[0,T]$.

2.1 Lévy processes

This section and the next summarise a few well-known results about one-dimensional Lévy processes and stable processes, further details of which can be found in Bertoin [13], Kyprianou [61], and Sato [83]. A Lévy process is a stochastically-continuous process that starts from the value 0, and has stationary, independent increments. An increasing Lévy process is called a subordinator. For $\{L_t\}$ a Lévy process, its characteristic exponent $\Psi : \mathbb{R} \to \mathbb{C}$ is defined by

$$\mathbb{E}\left[\mathrm{e}^{\mathrm{i}\lambda L_t}\right] = \exp\left(-t\,\Psi(\lambda)\right), \qquad \lambda \in \mathbb{R}. \tag{2.1}$$

The characteristic exponent of a Lévy process characterises its law, and its form is prescribed by the Lévy-Khintchine formula:

$$\Psi(\lambda) = \mathrm{i}a\lambda + \tfrac{1}{2}\sigma^2\lambda^2 + \int_{-\infty}^{\infty}\left(1 - \mathrm{e}^{\mathrm{i}x\lambda} + \mathrm{i}x\lambda\,\mathbf{1}_{\{|x|<1\}}\right)\Pi(\mathrm{d}x), \tag{2.2}$$

where $a \in \mathbb{R}$, $\sigma \geq 0$, and $\Pi$ is a measure (the Lévy measure) on $\mathbb{R} \setminus \{0\}$ such that

$$\int_{-\infty}^{\infty}\left(1 \wedge |x|^2\right)\Pi(\mathrm{d}x) < \infty. \tag{2.3}$$
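As a quick numerical sanity check of definition (2.1), the sketch below verifies $\mathbb{E}[\mathrm{e}^{\mathrm{i}\lambda L_t}] = \exp(-t\Psi(\lambda))$ for a drifted Brownian motion, whose density and characteristic exponent are given in closed form later in the chapter. The parameter values and helper names are illustrative choices, not from the text.

```python
import cmath
import math

# Illustrative parameters for a drifted Brownian motion L_t = theta*t + sigma*W_t
theta, sigma, t, lam = 0.3, 1.2, 2.0, 0.7

def density(x):
    # Gaussian density of L_t: mean theta*t, variance sigma^2 * t
    return math.exp(-0.5 * (x - theta * t) ** 2 / (sigma ** 2 * t)) \
        / (sigma * math.sqrt(2 * math.pi * t))

# Left-hand side of (2.1): integrate e^{i lam x} f_t(x) dx on a wide grid
N, L = 20000, 60.0
h = 2 * L / N
lhs = sum(cmath.exp(1j * lam * (-L + k * h)) * density(-L + k * h)
          for k in range(N + 1)) * h

# Right-hand side: exp(-t Psi(lam)) with Psi(lam) = -i theta lam + sigma^2 lam^2 / 2
psi = -1j * theta * lam + 0.5 * sigma ** 2 * lam ** 2
rhs = cmath.exp(-t * psi)

assert abs(lhs - rhs) < 1e-8
```

The trapezoidal sum converges very fast here because the integrand decays to zero well inside the truncation range.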

There are particular subclasses of Lévy processes that we shall consider, defined as follows:

Definition 2.1.1. Let $\{L_t\}_{0 \leq t \leq T}$ and $\{M_t\}_{0 \leq t \leq T}$ be Lévy processes. Then we write

1. $\{L_t\} \in \mathcal{C}[0,T]$ if the density of $L_t$ exists for every $t \in (0,T]$,

2. $\{M_t\} \in \mathcal{D}$ if the marginal law of $M_t$ is discrete for some $t > 0$.

Remark 2.1.2. If the marginal law of $M_t$ is discrete for some $t > 0$, then the marginal law of $M_t$ is discrete for all $t > 0$. The density of $L_t$ exists if and only if its law is absolutely continuous with respect to the Lebesgue measure. In general, the absolute continuity of $L_t$ depends on $t$; thus $\mathcal{C}[0,T_2] \subseteq \mathcal{C}[0,T_1]$ for $T_1 \leq T_2$. See Sato [83, chap. 5] for further details on the time dependence of distributional properties of Lévy processes.

We reserve the notation $f_t(x)$ to represent the density of $L_t$ for some $\{L_t\} \in \mathcal{C}[0,T]$. Hence $f_t : \mathbb{R} \to \mathbb{R}_+$ and $\mathbb{Q}[L_t \in \mathrm{d}x] = f_t(x)\,\mathrm{d}x$. We reserve $Q_t(a)$ to represent the probability mass function of $M_t$ for some $\{M_t\} \in \mathcal{D}$. We denote the state-space of $\{M_t\}$ by $\{a_i\} \subset \mathbb{R}$. Hence $Q_t : \{a_i\} \to [0,1]$ and $\mathbb{Q}[M_t = a_i] = Q_t(a_i)$. We assume that the sequence $\{a_i\}$ is strictly increasing. The transition laws of Lévy processes satisfy the convolution identities

$$f_t(x) = \int_{-\infty}^{\infty} f_{t-s}(x-y)\,f_s(y)\,\mathrm{d}y \qquad \text{for } \{L_t\} \in \mathcal{C}[0,T], \tag{2.4}$$

and

$$Q_t(a_n) = \sum_{m=-\infty}^{\infty} Q_{t-s}(a_n - a_m)\,Q_s(a_m) \qquad \text{for } \{M_t\} \in \mathcal{D}, \tag{2.5}$$

for $0 \leq s < t \leq T$. These are the Chapman-Kolmogorov equations for the processes $\{L_t\}$ and $\{M_t\}$. The law of any càdlàg stochastic process is characterised by its finite-dimensional distributions. The finite-dimensional densities of $\{L_t\}_{0 \leq t \leq T}$ exist and, with the understanding that $x_0 = t_0 = 0$, they are given by

$$\mathbb{Q}[L_{t_1} \in \mathrm{d}x_1, \ldots, L_{t_n} \in \mathrm{d}x_n] = \prod_{i=1}^{n} f_{t_i - t_{i-1}}(x_i - x_{i-1})\,\mathrm{d}x_i, \tag{2.6}$$

for every $n \in \mathbb{N}_+$, every $0 < t_1 < \cdots < t_n \leq T$, and every $(x_1, \ldots, x_n) \in \mathbb{R}^n$. With the understanding that $a_{k_0} = t_0 = 0$, the finite-dimensional probabilities of $\{M_t\}$ are

$$\mathbb{Q}[M_{t_1} = a_{k_1}, \ldots, M_{t_n} = a_{k_n}] = \prod_{i=1}^{n} Q_{t_i - t_{i-1}}(a_{k_i} - a_{k_{i-1}}), \tag{2.7}$$

for every $n \in \mathbb{N}_+$, every $0 < t_1 < \cdots < t_n$, and every $(k_1, \ldots, k_n) \in \mathbb{Z}^n$.
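The convolution identity (2.4) can be verified numerically in the Brownian case, where the transition density is Gaussian. A minimal sketch; the parameter values are our own:

```python
import math

def f(t, x, theta=0.2, sigma=1.0):
    # Gaussian transition density of a drifted Brownian motion
    return math.exp(-0.5 * (x - theta * t) ** 2 / (sigma ** 2 * t)) \
        / (sigma * math.sqrt(2 * math.pi * t))

s, t, x = 0.6, 1.5, 0.8

# Right-hand side of (2.4): convolve the densities over [0, s] and [s, t]
N, L = 4000, 30.0
h = 2 * L / N
conv = sum(f(t - s, x - (-L + k * h)) * f(s, -L + k * h) for k in range(N + 1)) * h

# It should reproduce the time-t density evaluated at x
assert abs(conv - f(t, x)) < 1e-9
```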

2.2 Stable processes

The (strictly) stable processes form a subclass of the Lévy processes. We say that a Lévy process $\{S^{\alpha}_t\}$ is a stable process with index $\alpha$ (or stable-$\alpha$ process) if its characteristic exponent satisfies

$$\Psi(k\lambda) = k^{\alpha}\,\Psi(\lambda), \tag{2.8}$$

for every $k > 0$ and every $\lambda \in \mathbb{R}$; $\alpha$ is restricted to values in $(0,2]$. The process $\{S^{\alpha}_t\}$ satisfies the scaling property

$$\{k^{-1/\alpha} S^{\alpha}_{kt}\}_{t \geq 0} \overset{\text{law}}{=} \{S^{\alpha}_t\}_{t \geq 0} \qquad \text{for } k > 0. \tag{2.9}$$

Equation (2.8) and the Lévy-Khintchine formula restrict the characteristic exponent and the Lévy measure of $\{S^{\alpha}_t\}$ to take an explicit form which depends on $\alpha$. When $\alpha \in (0,1) \cup (1,2)$,

$$\Psi(\lambda) = \kappa\,|\lambda|^{\alpha}\left(1 - \mathrm{i}\beta\,\mathrm{sign}(\lambda)\tan(\pi\alpha/2)\right), \tag{2.10}$$

where $\kappa > 0$, and $\beta \in [-1,1]$. In this case the Lévy measure can be written

$$\Pi(\mathrm{d}x) = \begin{cases} \kappa_+\, x^{-\alpha-1}\,\mathrm{d}x & \text{for } x > 0, \\ \kappa_-\, |x|^{-\alpha-1}\,\mathrm{d}x & \text{for } x < 0, \end{cases} \tag{2.11}$$

where $\kappa_+$ and $\kappa_-$ are non-negative numbers satisfying

$$\beta = \frac{\kappa_+ - \kappa_-}{\kappa_+ + \kappa_-}. \tag{2.12}$$

If $\beta = 1$ then the process exhibits only positive jumps, if $\beta = 0$ the process is symmetric, and if $\beta = -1$ the process exhibits only negative jumps. If $\alpha = 2$ then $\Psi(\lambda) = \tfrac{1}{2}\sigma^2\lambda^2$, and $\Pi(\mathrm{d}x) = 0$. In this case $\{S^{\alpha}_t\}$ is a Brownian motion (without drift). If $\alpha = 1$ then $\Psi(\lambda) = \mathrm{i}a\lambda + c\,|\lambda|$, for $c > 0$, and $\Pi(\mathrm{d}x) = c\,x^{-2}\,\mathrm{d}x$. In this case $\{S^{\alpha}_t\}$ is a Cauchy process with drift.
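At the level of marginal densities, the scaling property (2.9) reads $f_t(x) = k^{1/\alpha} f_{kt}(k^{1/\alpha} x)$. A quick check for the driftless Cauchy process ($\alpha = 1$), whose standard closed-form density we assume here; the parameter values are illustrative.

```python
import math

def cauchy_density(t, x):
    # density of a standard (driftless) Cauchy process at time t
    return t / (math.pi * (t * t + x * x))

alpha, k, t, x = 1.0, 3.0, 0.7, 1.9
lhs = cauchy_density(t, x)
rhs = k ** (1 / alpha) * cauchy_density(k * t, k ** (1 / alpha) * x)

# the two sides agree up to floating-point rounding
assert abs(lhs - rhs) < 1e-12
```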

Excluding the case when $\alpha = 2$, stable processes are heavy-tailed processes. The fractional moments of the stable random variable $S^{\alpha}_t$ ($\alpha < 2$) satisfy

$$\mathbb{E}\left[|S^{\alpha}_t|^p\right] < \infty \qquad \text{if } p < \alpha, \tag{2.13}$$
$$\mathbb{E}\left[|S^{\alpha}_t|^p\right] = \infty \qquad \text{if } p \geq \alpha. \tag{2.14}$$

Hence the second moment of $S^{\alpha}_t$ is infinite for $\alpha < 2$, and the expected value of $S^{\alpha}_t$ does not exist or is infinite for $\alpha \leq 1$. The density of $S^{\alpha}_t$ exists and is continuous for any $\alpha$, and so $\{S^{\alpha}_t\} \in \mathcal{C}[0,T]$ for any $T > 0$. However, this density can be expressed in terms of elementary functions only when $\{S^{\alpha}_t\}$ is a Brownian motion, a Cauchy process, or a stable subordinator with index $\alpha = 1/2$. We will examine each of these special cases later in this chapter. (See Feller [37, XVII.6] for examples of series representations for the density of a stable random variable with arbitrary index $\alpha$.) If $\{S^{\alpha}_t\}$ is a stable subordinator then the Laplace transform of $S^{\alpha}_t$ exists and is given by

$$\mathbb{E}\left[\exp\left(-\lambda S^{\alpha}_t\right)\right] = \exp\left(-\kappa t \lambda^{\alpha}\right) \qquad \text{for } \lambda \geq 0, \tag{2.15}$$

where $\kappa > 0$, and $\alpha$ must be further restricted to $0 < \alpha < 1$.
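Equation (2.15) can be checked numerically for the stable-1/2 subordinator, whose density is available in closed form. We assume the normalisation $\kappa = 1$, under which the density takes the classical Lévy form $f_t(x) = \frac{t}{2\sqrt{\pi}}\, x^{-3/2}\, \mathrm{e}^{-t^2/(4x)}$ for $x > 0$; both this normalisation and the parameter values are our own choices for illustration.

```python
import math

t, lam = 1.3, 0.8

def density(x):
    # stable-1/2 subordinator density with kappa = 1 (assumed normalisation)
    return t / (2 * math.sqrt(math.pi)) * x ** -1.5 * math.exp(-t * t / (4 * x))

# integrate e^{-lam x} f_t(x) over (0, inf); substitute x = e^u so the
# integrand decays rapidly at both ends of the u-axis
N, U = 40000, 30.0
h = 2 * U / N
val = 0.0
for k in range(N + 1):
    x = math.exp(-U + k * h)
    val += math.exp(-lam * x) * density(x) * x  # dx = x du
val *= h

# (2.15) with kappa = 1 and alpha = 1/2: E[exp(-lam S_t)] = exp(-t sqrt(lam))
assert abs(val - math.exp(-t * math.sqrt(lam))) < 1e-6
```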

2.3 Lévy bridges

A bridge is a stochastic process that is pinned to some fixed point at a fixed future time. Bridges of Markov processes were constructed and analysed by Fitzsimmons et al. [41] in a general setting. In this section we focus on the bridges of Lévy processes in the classes $\mathcal{C}[0,T]$ and $\mathcal{D}$. In particular we have the following:

Proposition 2.3.1. The bridges of processes in $\mathcal{C}[0,T]$ and $\mathcal{D}$ are Markov processes.

Proof. We need to show that the process $\{L_t\} \in \mathcal{C}[0,T]$ is a Markov process when we know that $L_T = x$, for some constant $x$ such that $0 < f_T(x) < \infty$. (It will be explained later why the condition that $0 < f_T(x) < \infty$ is required to ensure that the law of the bridge process is well defined.) In other words, we need to show that

$$\mathbb{Q}\left[L_t \leq y \mid L_{t_1} = x_1, \ldots, L_{t_m} = x_m, L_T = x\right] = \mathbb{Q}\left[L_t \leq y \mid L_{t_m} = x_m, L_T = x\right], \tag{2.16}$$

for all $m \in \mathbb{N}_+$, all $(x_1, \ldots, x_m, y) \in \mathbb{R}^{m+1}$, and all $0 \leq t_1 < \cdots < t_m < t \leq T$. It is crucial to the proof that $\{L_t\}$ has independent increments. Let us write

$$\Delta_i = L_{t_i} - L_{t_{i-1}}, \tag{2.17}$$
$$\delta_i = x_i - x_{i-1}, \tag{2.18}$$

for $1 \leq i \leq m$, where $t_0 = 0$ and $x_0 = 0$. Then we have:

$$\begin{aligned}
\mathbb{Q}&\left[L_t \leq y \mid L_{t_1} = x_1, \ldots, L_{t_m} = x_m, L_T = x\right] \\
&= \mathbb{Q}\left[L_t - L_{t_m} \leq y - x_m \mid \Delta_1 = \delta_1, \ldots, \Delta_m = \delta_m, L_T - L_{t_m} = x - x_m\right] \\
&= \mathbb{Q}\left[L_t - L_{t_m} \leq y - x_m \mid L_T - L_{t_m} = x - x_m\right] \\
&= \mathbb{Q}\left[L_t - L_{t_m} \leq y - x_m \mid L_T - L_{t_m} = x - x_m, L_{t_m} = x_m\right] \\
&= \mathbb{Q}\left[L_t \leq y \mid L_T = x, L_{t_m} = x_m\right]. \tag{2.19}
\end{aligned}$$

The proof for processes in class $\mathcal{D}$ is similar.

Let $\{L_t\} \in \mathcal{C}[0,T]$, and let $\{L^{(z)}_{tT}\}_{0 \leq t \leq T}$ be an $\{L_t\}$-bridge to the value $z \in \mathbb{R}$ at time $T$. For the transition probabilities of the bridge process to be well defined, we require that $0 < f_T(z) < \infty$. By the Bayes theorem we have

$$\begin{aligned}
\mathbb{Q}\left[L^{(z)}_{tT} \in \mathrm{d}y \,\middle|\, L^{(z)}_{sT} = x\right] &= \mathbb{Q}\left[L_t \in \mathrm{d}y \mid L_s = x,\, L_T = z\right] \\
&= \frac{\mathbb{Q}\left[L_t \in \mathrm{d}y,\, L_T \in \mathrm{d}z \mid L_s = x\right]}{\mathbb{Q}\left[L_T \in \mathrm{d}z \mid L_s = x\right]} \\
&= \frac{f_{t-s}(y-x)\,f_{T-t}(z-y)}{f_{T-s}(z-x)}\,\mathrm{d}y, \tag{2.20}
\end{aligned}$$

for $0 \leq s < t < T$. We define

$$f_{tT}(y;\,z) = \frac{f_t(y)\,f_{T-t}(z-y)}{f_T(z)}. \tag{2.21}$$

In this way

$$\mathbb{Q}\left[L^{(z)}_{tT} \in \mathrm{d}y \,\middle|\, L^{(z)}_{sT} = x\right] = f_{t-s,T-s}(y-x;\,z-x)\,\mathrm{d}y. \tag{2.22}$$

The condition $0 < f_T(z) < \infty$ is enough to ensure that

$$y \mapsto f_{t-s,T-s}\left(y - L^{(z)}_{sT};\; z - L^{(z)}_{sT}\right) \tag{2.23}$$

is a well-defined density for almost every value of $L^{(z)}_{sT}$. To see this, note that

$$\begin{aligned}
\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f_{t-s,T-s}(y-x;\,z-x)\;\mathbb{Q}\left[L^{(z)}_{sT} \in \mathrm{d}x\right]\mathrm{d}y
&= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f_{t-s,T-s}(y-x;\,z-x)\,f_{s,T}(x;\,z)\,\mathrm{d}x\,\mathrm{d}y \\
&= \int_{-\infty}^{\infty} \frac{f_{T-t}(z-y)}{f_T(z)} \int_{-\infty}^{\infty} f_{t-s}(y-x)\,f_s(x)\,\mathrm{d}x\,\mathrm{d}y \\
&= \frac{1}{f_T(z)} \int_{-\infty}^{\infty} f_{T-t}(z-y)\,f_t(y)\,\mathrm{d}y = 1. \tag{2.24}
\end{aligned}$$

From (2.24) it follows that

$$\mathbb{Q}\left[\int_{-\infty}^{\infty} f_{t-s,T-s}\left(y - L^{(z)}_{sT};\; z - L^{(z)}_{sT}\right)\mathrm{d}y = 1\right] = 1. \tag{2.25}$$

Let $\{M_t\} \in \mathcal{D}$, and let $\{M^{(k)}_{tT}\}_{0 \leq t \leq T}$ be an $\{M_t\}$-bridge to the value $a_k$ at time $T$, so $\mathbb{Q}[M^{(k)}_{TT} = a_k] = 1$. For the transition probabilities of the bridge to be well defined, we require that $\mathbb{Q}[M_T = a_k] = Q_T(a_k) > 0$. Then the Bayes theorem gives

$$\begin{aligned}
\mathbb{Q}\left[M^{(k)}_{tT} = a_j \,\middle|\, M^{(k)}_{sT} = a_i\right] &= \mathbb{Q}\left[M_t = a_j \mid M_s = a_i,\, M_T = a_k\right] \\
&= \frac{\mathbb{Q}\left[M_t = a_j,\, M_T = a_k \mid M_s = a_i\right]}{\mathbb{Q}\left[M_T = a_k \mid M_s = a_i\right]} \\
&= \frac{Q_{t-s}(a_j - a_i)\,Q_{T-t}(a_k - a_j)}{Q_{T-s}(a_k - a_i)}, \tag{2.26}
\end{aligned}$$

where $s, t$ satisfy $0 \leq s < t < T$.
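The bridge density (2.21) can be sanity-checked in the Brownian case, where it should reduce to the classical Brownian bridge marginal: a Gaussian with mean $(t/T)z$ and variance $t(T-t)/T$. A minimal sketch with illustrative values:

```python
import math

def norm_pdf(x, mean, var):
    # Gaussian density with the given mean and variance
    return math.exp(-0.5 * (x - mean) ** 2 / var) / math.sqrt(2 * math.pi * var)

def f(t, x):
    # density of a standard Brownian motion at time t
    return norm_pdf(x, 0.0, t)

T, t, z, y = 2.0, 0.5, 1.4, 0.3

# (2.21): f_{tT}(y; z) = f_t(y) f_{T-t}(z - y) / f_T(z)
bridge = f(t, y) * f(T - t, z - y) / f(T, z)

# classical Brownian bridge marginal at time t, pinned to z at time T
closed_form = norm_pdf(y, (t / T) * z, t * (T - t) / T)

assert abs(bridge - closed_form) < 1e-12
```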

Proposition 2.3.2. If there exists a constant $C < \infty$ such that $|x|^{1+\beta} f_t(x)$ is bounded for $|x| \geq C$ and all $t \in (0,T]$, then

$$\int_{-\infty}^{\infty} |x|^{1+2\alpha}\, f_{tT}(x;\,z)\,\mathrm{d}x < \infty, \tag{2.27}$$

for every $\alpha \in (0,\beta)$. In other words,

$$\mathbb{E}\left[\left|L^{(z)}_{tT}\right|^{1+2\alpha}\right] < \infty \qquad (0 \leq t \leq T). \tag{2.28}$$

Similarly, if there exists a constant $C$ such that $|a_i|^{1+\beta}\, Q_t(a_i)$ is bounded for $|a_i| \geq C$ and all $t \in (0,T]$, then

$$\mathbb{E}\left[\left|M^{(k)}_{tT}\right|^{1+2\alpha}\right] < \infty \qquad (0 \leq t \leq T). \tag{2.29}$$

Proof. We prove the proposition in the continuous case; the discrete case is similar. The cases $t = 0$ and $t = T$ are trivial, so we will assume that $t \in (0,T)$. Fix $\alpha \in (0,\beta)$ and assume that $|x|^{1+\beta} f_t(x)$

2.4 Stable bridges

Bridges of stable processes inherit a scaling property:

Proposition 2.4.1. Let $\{S_t\}$ be a stable process with index $\alpha$, and let $k > 0$ be a constant. If $\{S^{(z)}_{tT}\}$ is a bridge of $\{S_t\}$ to the value $z$ at time $T$, and $\{S^{(\zeta)}_{t,kT}\}$ is a bridge of $\{S_t\}$ to the value $\zeta = k^{1/\alpha} z$ at time $kT$, then

$$\{S^{(z)}_{tT}\} \overset{\text{law}}{=} \{S^{(\zeta)}_{kt,kT}\}. \tag{2.34}$$

Proof. Denote the density of $S_t$ by $f_t(x)$. From the scaling property of stable processes, we have

$$S_t \overset{\text{law}}{=} k^{-1/\alpha} S_{kt}. \tag{2.35}$$

It follows that

$$f_t(x) = k^{1/\alpha}\, f_{kt}\left(k^{1/\alpha} x\right). \tag{2.36}$$

From (2.20) we have

$$\begin{aligned}
\mathbb{Q}\left[S^{(z)}_{tT} \in \mathrm{d}y \,\middle|\, S^{(z)}_{sT} = x\right] &= \frac{f_{t-s}(y-x)\,f_{T-t}(z-y)}{f_{T-s}(z-x)}\,\mathrm{d}y \\
&= k^{1/\alpha}\,\frac{f_{kt-ks}\left(k^{1/\alpha}y - k^{1/\alpha}x\right)\,f_{kT-kt}\left(\zeta - k^{1/\alpha}y\right)}{f_{kT-ks}\left(\zeta - k^{1/\alpha}x\right)}\,\mathrm{d}y \\
&= \mathbb{Q}\left[k^{-1/\alpha} S^{(\zeta)}_{kt,kT} \in \mathrm{d}y \,\middle|\, k^{-1/\alpha} S^{(\zeta)}_{ks,kT} = x\right]. \tag{2.37}
\end{aligned}$$

2.5 Brownian motion and Brownian bridge

2.5.1 Brownian motion

Brownian motion is a L´evy process, and is a Gaussian process (i.e. all of its finite- dimensional distributions are multivariate normal). Gaussian processes are charac- terised by their mean and covariance functions. In the case of a one-dimensional Brownian motion B , these are { t} 2 E[Bt]= θt, Cov[Bs, Bt]= σ min(s, t), (2.38) where θ R is the drift of B , and σ > 0 is the diffusion coefficient. The characteristic ∈ { t} exponent of Bt is { } 1 Ψ(λ)= iθλ + σ2λ2. (2.39) − 2

The density of Bt is 1 1 (x θt)2 ft(x)= exp − . (2.40) σ√2πt −2 σ2t The sample paths of Brownian motion are continuous but nowhere differentiable. When θ = 0, B is a stable process with index α = 2, and satisfies the scaling identity { t} k−1/2B law= B for k > 0. (2.41) { kt} { t} 2.6 Gamma process and gamma bridge 23

When $\theta=0$ and $\sigma=1$ we say that $\{B_t\}$ is a standard Brownian motion (or Wiener process).

2.5.2 Brownian bridge

A Brownian bridge is also a Gaussian process. Let $\{\beta^{(z)}_{tT}\}_{0\leq t\leq T}$ be a standard Brownian bridge to the point $z\in\mathbb{R}$. The mean and covariance functions of $\{\beta^{(z)}_{tT}\}$ are
\[ \mathbb{E}[\beta^{(z)}_{tT}]=\frac{t}{T}z,\qquad \mathrm{Cov}[\beta^{(z)}_{sT},\beta^{(z)}_{tT}]=\min(s,t)-\frac{st}{T}. \tag{2.42} \]
It follows that
\[ \{k^{-1/2}\beta^{(k^{1/2}z)}_{kt,kT}\}\overset{\mathrm{law}}{=}\{\beta^{(z)}_{tT}\}\quad\text{for }k>0. \tag{2.43} \]
Let $\{W_t\}$ be a standard Brownian motion, and define the process $\{W^{(z)}_{tT}\}$ by
\[ W^{(z)}_{tT}=W_t+\frac{t}{T}(z-W_T)\quad(0\leq t\leq T). \tag{2.44} \]
Calculating the mean and covariance functions for this process verifies that it is a standard Brownian bridge to the value $z$ at time $T$. It is also notable, and easily verified, that $\{W^{(z)}_{tT}\}$ is independent of $W_T$.
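The construction (2.44) can be illustrated numerically. The following is a minimal sketch (function names and parameter choices are ours, not the thesis's): it simulates a standard Brownian motion on a grid and pins it to the value $z$ at time $T$; by construction the path ends exactly at $z$, and its Monte Carlo mean at an interior time approximates $(t/T)z$ as in (2.42).

```python
import math
import random

def brownian_bridge_path(z, T, n, rng):
    """Simulate W_tT^(z) = W_t + (t/T)(z - W_T) on an n-step grid,
    as in (2.44), from a driving standard Brownian motion W."""
    dt = T / n
    w = [0.0]
    for _ in range(n):
        w.append(w[-1] + rng.gauss(0.0, math.sqrt(dt)))
    wT = w[-1]
    # Pin the path: the correction (t/T)(z - W_T) is linear in t.
    return [w_i + (i * dt / T) * (z - wT) for i, w_i in enumerate(w)]

rng = random.Random(1)
path = brownian_bridge_path(z=0.7, T=2.0, n=100, rng=rng)
```

By construction `path[-1]` equals $z$ up to rounding, and averaging the midpoint value over many paths recovers $\mathbb{E}[\beta^{(z)}_{T/2,T}]=z/2$.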

2.6 Gamma process and gamma bridge

2.6.1 Gamma process

A gamma process is a subordinator with gamma-distributed increments. The law of a gamma process is uniquely determined by its mean and variance at time 1, both of which are positive. Let $\{\gamma_t\}$ be a gamma process with mean 1 and variance $m^{-1}>0$ at time 1; then

\[ \mathbb{E}[\gamma_t]=t,\qquad \mathrm{Var}[\gamma_t]=t/m. \tag{2.45} \]

The density of $\gamma_t$ is
\[ g^{(m)}_t(x)=\mathbf{1}_{\{x>0\}}\frac{m^{mt}}{\Gamma[mt]}\,x^{mt-1}\mathrm{e}^{-mx}, \tag{2.46} \]
where $\Gamma[x]$ is the gamma function, defined as usual for $x>0$ by

\[ \Gamma[x]=\int_0^{\infty}u^{x-1}\mathrm{e}^{-u}\,\mathrm{d}u. \tag{2.47} \]

Hence $\{\gamma_t\}\in\mathcal{C}[0,T]$ for any $T>0$. Due to the scaling property of the gamma distribution, if $\kappa>0$ then the process $\{\kappa\gamma_t\}$ is a gamma process with mean $\kappa$ and variance $m^{-1}\kappa^2$ at $t=1$. The characteristic exponent of $\{\gamma_t\}$ is
\[ \Psi(\lambda)=m\log[1-\mathrm{i}\lambda/m], \tag{2.48} \]
and so the characteristic function of $\gamma_t$ is

\[ \mathbb{E}[\mathrm{e}^{\mathrm{i}\lambda\gamma_t}]=(1-\mathrm{i}\lambda/m)^{-mt}. \tag{2.49} \]
In the limit $m\to\infty$ this characteristic function is $\mathrm{e}^{\mathrm{i}\lambda t}$, which is the characteristic function of the Dirac measure centred at $t$. It follows that $\{\gamma_t\}\overset{\mathrm{law}}{\longrightarrow}\{t\}$ as $m\to\infty$.

It should be noted that we find it convenient here to use a somewhat different parametrisation scheme for the family of gamma processes from that presented in Brody et al. [20]. In their scheme the `basic' gamma process $\{\gamma_t\}$ is characterised by a single parameter $m$, with units of inverse time, such that $\mathbb{E}[\gamma_t]=mt$ and $\mathrm{Var}[\gamma_t]=mt$. The `general' gamma process is then obtained by considering a `scaled' process $\{\kappa\gamma_t\}$, where $\kappa>0$ is a constant. Clearly $\mathbb{E}[\kappa\gamma_t]=\kappa mt$ and $\mathrm{Var}[\kappa\gamma_t]=\kappa^2mt$. Thus, if the mean rate $\mu$ and variance rate $\sigma^2$ of a gamma process are specified, then we have $m=\mu^2/\sigma^2$ and $\kappa=\sigma^2/\mu$. The scheme introduced in (2.45) above is equivalent to the choice $\kappa=m^{-1}$ in the BHM scheme. The advantage of the choice (2.45) as the basic process for present purposes is that it gives $\{\gamma_t\}$ the dimensionality of time, and hence makes it suitable as a basis for a time change.
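The parametrisation conversion and the moment identities (2.45) can be checked with a short numerical sketch (the helper names are ours; `random.gammavariate` takes a shape and a scale, so $\gamma_t\sim\mathrm{Gamma}(mt,\,1/m)$):

```python
import random

def bhm_params(mu, var):
    """Convert a gamma process's mean rate mu and variance rate var
    into (m, kappa), following the text: m = mu^2/var, kappa = var/mu."""
    return mu**2 / var, var / mu

def sample_gamma_t(t, m, rng, n=20000):
    """Draw n copies of gamma_t with E[gamma_t] = t and Var[gamma_t] = t/m,
    i.e. the Gamma(shape mt, scale 1/m) law of (2.46)."""
    return [rng.gammavariate(m * t, 1.0 / m) for _ in range(n)]

rng = random.Random(7)
xs = sample_gamma_t(t=2.0, m=4.0, rng=rng)
mean = sum(xs) / len(xs)
var = sum((x - mean)**2 for x in xs) / len(xs)
```

With $\mu=1$ the conversion returns $\kappa=1/m$, recovering the scheme of (2.45); the sample mean and variance approximate $t$ and $t/m$.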

2.6.2 Gamma bridge

Gamma bridges exhibit a number of remarkable similarities to Brownian bridges, some of which have been presented by \'Emery & Yor [33]. Let $\{\gamma_{tT}\}_{0\leq t\leq T}$ be a gamma bridge with final value 1 associated with the gamma process $\{\gamma_t\}$. The transition law of $\{\gamma_{tT}\}$ is given by
\[
\begin{aligned}
Q[\gamma_{tT}\in\mathrm{d}y\,|\,\gamma_{sT}=x]&=Q[\gamma_t\in\mathrm{d}y\,|\,\gamma_s=x,\gamma_T=1]\\
&=\frac{g^{(m)}_{t-s}(y-x)\,g^{(m)}_{T-t}(1-y)}{g^{(m)}_{T-s}(1-x)}\,\mathrm{d}y\\
&=\mathbf{1}_{\{x<y<1\}}\,\frac{\left(\frac{y-x}{1-x}\right)^{m(t-s)-1}\left(\frac{1-y}{1-x}\right)^{m(T-t)-1}}{(1-x)\,\mathrm{B}[m(t-s),m(T-t)]}\,\mathrm{d}y, \tag{2.50}
\end{aligned}
\]

for $0\leq s<t\leq T$ and $x\geq0$. Here $\mathrm{B}[\alpha,\beta]$ is the beta function, defined for $\alpha,\beta>0$ by
\[ \mathrm{B}[\alpha,\beta]=\int_0^1x^{\alpha-1}(1-x)^{\beta-1}\,\mathrm{d}x=\frac{\Gamma[\alpha]\Gamma[\beta]}{\Gamma[\alpha+\beta]}. \tag{2.51} \]
If the gamma bridge $\{\gamma_{tT}\}$ has reached the value $x$ at time $s$, then it must yet travel a distance $1-x$ over the time period $(s,T]$. Equation (2.50) shows that the proportion of this distance that the gamma bridge will cover over $(s,t]$ is a random variable with a beta distribution (with parameters $\alpha=m(t-s)$ and $\beta=m(T-t)$). The conditional characteristic function of $\gamma_{tT}$ is

\[ \mathbb{E}\bigl[\mathrm{e}^{\mathrm{i}\lambda\gamma_{tT}}\,\bigl|\,\gamma_{sT}=x\bigr]=M[m(t-s),m(T-s),\mathrm{i}(1-x)\lambda], \tag{2.52} \]
where $M[\alpha,\beta,z]$ is Kummer's confluent hypergeometric function of the first kind, which can be expanded as the power series [1, 13.1.2]

\[ M[\alpha,\beta,z]=1+\frac{\alpha}{\beta}z+\frac{\alpha(\alpha+1)}{\beta(\beta+1)}\frac{z^2}{2!}+\frac{\alpha(\alpha+1)(\alpha+2)}{\beta(\beta+1)(\beta+2)}\frac{z^3}{3!}+\cdots. \tag{2.53} \]

For brevity, we will later refer to (2.53) as `Kummer's function $M[\alpha,\beta,z]$'. Taking the limit as $m\to\infty$ in (2.52), we have
\[
\mathbb{E}\bigl[\mathrm{e}^{\mathrm{i}\lambda\gamma_{tT}}\,\bigl|\,\gamma_{sT}=x\bigr]
\longrightarrow\sum_{k=0}^{\infty}\left(\frac{t-s}{T-s}\right)^k\frac{(\mathrm{i}(1-x)\lambda)^k}{k!}
=\exp\left[\mathrm{i}\,\frac{t-s}{T-s}(1-x)\lambda\right], \tag{2.54}
\]
which is the characteristic function of the Dirac measure centred at $(1-x)(t-s)/(T-s)$. It then follows from the Markov property of gamma bridges that $\{\gamma_{tT}\}\overset{\mathrm{law}}{\longrightarrow}\{t/T\}$ as $m\to\infty$. It is a property of gamma processes that the renormalised process $\{\gamma_t/\gamma_T\}_{0\leq t\leq T}$ is independent of $\gamma_T$ (indeed, this independence property characterises the gamma process among L\'evy processes). This leads to the remarkable identity
\[ \left\{\frac{\gamma_t}{\gamma_T}\right\}\overset{\mathrm{law}}{=}\{\gamma_{tT}\}. \tag{2.55} \]
The identity (2.55) can be proved by showing that the process on the left-hand side is Markov, and then verifying that its transition law is the same as (2.50), which can be done using the results in Brody et al. [20]. Two further properties of gamma bridges follow immediately from (2.55). The first is that the bridge of the scaled gamma process

$\{\kappa\gamma_t\}$ is, for any $\kappa>0$, identical in law to the bridge of the unscaled process. Hence, for fixed $m$ and fixed terminal value, the bridge of the gamma process defined by (2.45) is identical to the bridge of the basic BHM gamma process. The second property is that a $\{\gamma_t\}$-bridge to the value $z>0$ at time $T$ is identical in law to the process $\{z\gamma_{tT}\}$.
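The identity (2.55) gives an exact, rejection-free way to simulate a gamma bridge: sample a gamma process path and divide by its terminal value. A minimal sketch (function names ours):

```python
import random

def gamma_bridge_path(m, T, n, rng):
    """Sample {gamma_tT} on an n-step grid via the identity (2.55):
    gamma_tT = gamma_t / gamma_T for a gamma process with parameter m."""
    dt = T / n
    g = [0.0]
    for _ in range(n):
        # Increments are Gamma(shape m*dt, scale 1/m), cf. (2.46).
        g.append(g[-1] + rng.gammavariate(m * dt, 1.0 / m))
    gT = g[-1]
    return [x / gT for x in g]

rng = random.Random(3)
b = gamma_bridge_path(m=2.0, T=1.0, n=200, rng=rng)
```

The resulting path starts at 0, ends exactly at 1, and is non-decreasing; averaging the midpoint over many paths recovers $\mathbb{E}[\gamma_{T/2,T}]=1/2$.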

2.7 Variance-gamma process and variance-gamma bridge

2.7.1 Variance-gamma process

A variance-gamma (VG) process is a Brownian motion with drift, subordinated by an independent gamma process. Letting $\{W_t\}$ denote a standard Brownian motion, we define the process $\{V_t\}$ by
\[ V_t=\sigma W_{\gamma_t}+\theta\gamma_t,\qquad \sigma>0\text{ and }\theta\in\mathbb{R}. \tag{2.56} \]
Then $\{V_t\}$ is a VG process in its most general form (on the real line). The mean of the gamma process $\{\gamma_t\}$ at $t=1$ was fixed as unity, but (in terms of the law of the VG process) varying this is equivalent to an appropriate change of the parameters $\sigma$ and $\theta$. When $\theta=0$ we say that $\{V_t\}$ is a symmetric VG process; if, in addition, $\sigma=1$ then we say that $\{V_t\}$ is a standard VG process. The characteristic exponent of $\{V_t\}$ is given by
\[ \Psi(\lambda)=m\log\left[1-\frac{\mathrm{i}\theta\lambda}{m}+\frac{\sigma^2\lambda^2}{2m}\right], \tag{2.57} \]
where $m^{-1}$ is the variance of $\gamma_1$. This can be decomposed as

\[ \Psi(\lambda)=m\log\left[1-\frac{\mathrm{i}\mu_+\lambda}{m}\right]+m\log\left[1+\frac{\mathrm{i}\mu_-\lambda}{m}\right], \tag{2.58} \]
where
\[ \mu_+=\tfrac{1}{2}\left(\sqrt{\theta^2+2m\sigma^2}+\theta\right), \tag{2.59} \]
and
\[ \mu_-=\tfrac{1}{2}\left(\sqrt{\theta^2+2m\sigma^2}-\theta\right). \tag{2.60} \]

The right-hand side of (2.58) is the sum of two characteristic exponents. The first corresponds to a gamma process, and the second corresponds to a decreasing L\'evy process whose absolute value is a gamma process. It follows that, for $\{\gamma^{(+)}_t\}$ and $\{\gamma^{(-)}_t\}$ independent copies of $\{\gamma_t\}$, we have
\[ \{V_t\}\overset{\mathrm{law}}{=}\{\mu_+\gamma^{(+)}_t-\mu_-\gamma^{(-)}_t\}. \tag{2.61} \]
The characteristic function of $V_t$ is

\[ \mathbb{E}[\mathrm{e}^{\mathrm{i}\lambda V_t}]=\left(1-\frac{\mathrm{i}\theta\lambda}{m}+\frac{\sigma^2\lambda^2}{2m}\right)^{-mt}. \tag{2.62} \]
In the limit as $m\to\infty$, this characteristic function tends to
\[ \exp\left[\mathrm{i}\theta\lambda t-\tfrac{1}{2}\sigma^2\lambda^2t\right]. \tag{2.63} \]
It follows that
\[ \{V_t\}\overset{\mathrm{law}}{\longrightarrow}\{\sigma W_t+\theta t\}\quad\text{as }m\to\infty. \tag{2.64} \]
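The difference-of-gammas representation (2.61) is convenient for simulation, since it needs only gamma variates. A hedged sketch (names ours): VG increments over steps of length $\mathrm{d}t$ have mean $\theta\,\mathrm{d}t$ and variance $(\sigma^2+\theta^2/m)\,\mathrm{d}t$, which the Monte Carlo estimates below reproduce.

```python
import math
import random

def vg_increments(theta, sigma, m, dt, n, rng):
    """Simulate n VG increments via (2.61):
    V = mu_+ * gamma^(+) - mu_- * gamma^(-),
    with mu_pm = (sqrt(theta^2 + 2 m sigma^2) +/- theta)/2."""
    a = math.sqrt(theta**2 + 2.0 * m * sigma**2)
    mu_p, mu_m = 0.5 * (a + theta), 0.5 * (a - theta)
    return [mu_p * rng.gammavariate(m * dt, 1.0 / m)
            - mu_m * rng.gammavariate(m * dt, 1.0 / m)
            for _ in range(n)]

rng = random.Random(11)
xs = vg_increments(theta=0.2, sigma=1.0, m=4.0, dt=1.0, n=40000, rng=rng)
mean = sum(xs) / len(xs)
var = sum((x - mean)**2 for x in xs) / len(xs)
```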

The distribution of $V_t$ is normal with a gamma-mixed mean and variance. The density of $V_t$ can be shown to be [67]
\[ f^{(m,\theta,\sigma)}_t(x)=\sqrt{\frac{2}{\pi}}\,\frac{m^{mt}\,\mathrm{e}^{\theta x/\sigma^2}}{\sigma\,\Gamma[mt]}\left(\frac{x^2}{2m\sigma^2+\theta^2}\right)^{\frac{mt}{2}-\frac{1}{4}}K_{mt-\frac{1}{2}}\!\left[\sigma^{-2}\sqrt{x^2(2m\sigma^2+\theta^2)}\right], \tag{2.65} \]
where $K_{\nu}[z]$ is the modified Bessel function of the third kind. We see that $\{V_t\}\in\mathcal{C}[0,T]$ for all $T>0$. Each of the following facts about $K_{\nu}[z]$ can be found in Abramowitz & Stegun [1, 9.6]:

1. $0<K_{\nu}[z]<\infty$ for $z>0$, (2.66)
2. $K_{-\nu}[z]=K_{\nu}[z]$, (2.67)
3. $\displaystyle\lim_{z\to0}\frac{K_0[z]}{-\log z}=1$, (2.68)
4. $\displaystyle\lim_{z\to0}\frac{K_{\nu}[z]}{\frac{1}{2}\Gamma[\nu]\left(\frac{1}{2}z\right)^{-\nu}}=1$ for $\nu>0$, (2.69)
5. $\displaystyle\lim_{z\to\infty}\frac{K_{\nu}[z]}{\sqrt{\frac{\pi}{2z}}\,\mathrm{e}^{-z}}=1$. (2.70)

In the case when $\nu$ is half an odd integer, we have the following expression for $K_{\nu}[z]$ from [1, 10.2.15]:
\[ K_{n+\frac{1}{2}}[z]=\sqrt{\frac{\pi}{2z}}\,\mathrm{e}^{-z}\sum_{j=0}^{n}(n+\tfrac{1}{2},j)\,(2z)^{-j},\quad\text{for }n\in\mathbb{N}, \tag{2.71} \]
where $(m,n)$ is Hankel's symbol,
\[ (m,n)=\frac{\Gamma[m+\frac{1}{2}+n]}{n!\,\Gamma[m+\frac{1}{2}-n]}. \tag{2.72} \]
Writing

\[ f^{(m)}_t(x)=f^{(m,0,1)}_t(x)=\sqrt{\frac{2}{\pi}}\,\frac{m^{mt}}{\Gamma[mt]}\left(\frac{x^2}{2m}\right)^{\frac{mt}{2}-\frac{1}{4}}K_{mt-\frac{1}{2}}\!\left[\sqrt{2mx^2}\right], \tag{2.73} \]
we have
\[ f^{(m,\theta,\sigma)}_t(x)=\frac{1}{\sigma}\,\mathrm{e}^{\theta x/\sigma^2}\,k_{(m,\theta,\sigma)}^{1-2mt}\,f^{(m)}_t\!\left(k_{(m,\theta,\sigma)}\,x/\sigma\right), \tag{2.74} \]
where
\[ k_{(m,\theta,\sigma)}=\sqrt{1+\frac{\theta^2}{2m\sigma^2}}. \tag{2.75} \]
Here $f^{(m)}_t(x)$ is the density of the standard VG random variable $W(\gamma_t)$. Since $K_{\nu}[z]$ is finite for $z>0$, $f^{(m,\theta,\sigma)}_t(x)$ is finite away from zero. In other words, $0<f^{(m,\theta,\sigma)}_t(x)<\infty$ for all $t>0$ and all $x\in\mathbb{R}\setminus\{0\}$. When $x=0$, we have
\[ f^{(m,\theta,\sigma)}_t(0)=\begin{cases}\dfrac{1}{\sigma}\sqrt{\dfrac{m}{2\pi}}\left(1+\dfrac{\theta^2}{2m\sigma^2}\right)^{\frac{1}{2}-mt}\dfrac{\Gamma[mt-1/2]}{\Gamma[mt]}&\text{for }t>(2m)^{-1},\\[2ex] +\infty&\text{for }0<t\leq(2m)^{-1}.\end{cases} \tag{2.76} \]

2.7.2 Variance-gamma bridge

Let $\{V_t\}$ be a VG process with parameter set $\{m,\theta,\sigma\}$, and let $\{V^{(z)}_{tT}\}$ be a $\{V_t\}$-bridge to the value $z\in\mathbb{R}$ at time $T$. Since we must have that the density of $V_T$ is positive and finite at $z$, it follows from (2.76) that either $z\neq0$, or $T>(2m)^{-1}$. The transition law of $\{V^{(z)}_{tT}\}$ is given by
\[
\begin{aligned}
Q\bigl[V^{(z)}_{tT}\in\mathrm{d}y\,\bigl|\,V^{(z)}_{sT}=x\bigr]
&=\frac{f^{(m,\theta,\sigma)}_{t-s}(y-x)\,f^{(m,\theta,\sigma)}_{T-t}(z-y)}{f^{(m,\theta,\sigma)}_{T-s}(z-x)}\,\mathrm{d}y\\
&=\frac{k}{\sigma}\,\frac{f^{(m)}_{t-s}\bigl(\tfrac{k}{\sigma}(y-x)\bigr)\,f^{(m)}_{T-t}\bigl(\tfrac{k}{\sigma}(z-y)\bigr)}{f^{(m)}_{T-s}\bigl(\tfrac{k}{\sigma}(z-x)\bigr)}\,\mathrm{d}y\\
&=Q\left[\tfrac{\sigma}{k}U^{(kz/\sigma)}_{tT}\in\mathrm{d}y\,\Bigl|\,\tfrac{\sigma}{k}U^{(kz/\sigma)}_{sT}=x\right], \tag{2.77}
\end{aligned}
\]
for $0\leq s<t<T$, where $k=k_{(m,\theta,\sigma)}$ and $\{U^{(kz/\sigma)}_{tT}\}$ is a standard VG bridge to the value $kz/\sigma$ at time $T$. Recall that a standard Brownian motion over $[0,1]$ admits the decomposition

\[ \{W_t\}_{0\leq t\leq1}\overset{\mathrm{law}}{=}\{\beta_t+tW_1\}_{0\leq t\leq1}, \tag{2.78} \]
where $\{\beta_t\}$ is a Brownian bridge, independent of $W_1$, ending at the value 0 at time $t=1$. Also, from the time-scaling property of Brownian motion,

\[ \{W(Xt)\}\overset{\mathrm{law}}{=}\{\sqrt{X}\,W(t)\}, \tag{2.79} \]
for $X$ a positive random variable independent of $\{W_t\}$. We can use these identities, with some properties of gamma bridges, to derive a terminal-value decomposition of a standard VG process: for $t\in[0,T]$, we have
\[
\begin{aligned}
\{W(\gamma_t)\}&=\left\{W\!\left(\gamma_T\frac{\gamma_t}{\gamma_T}\right)\right\}\\
&\overset{\mathrm{law}}{=}\{W(\gamma_T\gamma_{tT})\} \quad&(2.80)\\
&\overset{\mathrm{law}}{=}\{\sqrt{\gamma_T}\,W(\gamma_{tT})\} \quad&(2.81)\\
&\overset{\mathrm{law}}{=}\{\gamma_{tT}\sqrt{\gamma_T}\,W(1)+\sqrt{\gamma_T}\,\beta(\gamma_{tT})\} \quad&(2.82)\\
&\overset{\mathrm{law}}{=}\{\gamma_{tT}W(\gamma_T)+\sqrt{\gamma_T}\,\beta(\gamma_{tT})\}. \quad&(2.83)
\end{aligned}
\]
In the above, (2.80) holds since $\{W(t)\}$, $\{\gamma_t/\gamma_T\}$, and $\gamma_T$ are independent; (2.81) follows from (2.79); (2.82) follows from (2.78); (2.83) also follows from (2.79). Note that in (2.83) the Brownian bridge $\{\beta(t)\}_{0\leq t\leq1}$, the gamma bridge $\{\gamma_{tT}\}_{0\leq t\leq T}$, and the random vector $(\gamma_T,W(\gamma_T))$ are independent. The joint density of $(\gamma_T,W(\gamma_T))$ is
\[ (y,z)\mapsto\frac{1}{\sqrt{2\pi y}}\exp\left[-\frac{1}{2}\frac{z^2}{y}\right]g^{(m)}_T(y). \tag{2.84} \]

Hence, given $W(\gamma_T)=z$, the conditional density of $\gamma_T$ is
\[ y\mapsto\frac{1}{f^{(m)}_T(z)\sqrt{2\pi y}}\exp\left[-\frac{1}{2}\frac{z^2}{y}\right]g^{(m)}_T(y). \tag{2.85} \]
Some simplification shows that this is the generalised inverse-Gaussian (GIG) density $f_{\mathrm{GIG}}(y;mT-1/2,|z|,\sqrt{2m})$, where
\[ f_{\mathrm{GIG}}(x;\lambda,\delta,\gamma)=\left(\frac{\gamma}{\delta}\right)^{\lambda}\frac{1}{2K_{\lambda}[\gamma\delta]}\,\mathbf{1}_{\{x>0\}}\,x^{\lambda-1}\exp\left[-\tfrac{1}{2}\bigl(\delta^2x^{-1}+\gamma^2x\bigr)\right]. \tag{2.86} \]
It follows from (2.83) that the standard VG bridge satisfies

\[ \{U^{(z)}_{tT}\}\overset{\mathrm{law}}{=}\{z\gamma_{tT}+\sqrt{\Sigma_z}\,\beta(\gamma_{tT})\}, \tag{2.87} \]
where $\Sigma_z$ has a GIG distribution with parameter set $\{mT-1/2,|z|,\sqrt{2m}\}$. From (2.61) we have that

\[ \{W(\gamma_t)\}\overset{\mathrm{law}}{=}\{\mu(\bar{\gamma}_t-\gamma_t)\}, \tag{2.88} \]
where $\{\gamma_t\}$ and $\{\bar{\gamma}_t\}$ are independent, identical gamma processes with parameter $m$, and $\mu=(m/2)^{1/2}$. We can use this to derive an alternative representation of the standard VG bridge. Define the process $\{G_t\}$ by setting $G_t=\bar{\gamma}_t-\gamma_t$. Then we have
\[
\begin{aligned}
\{G_t\}&=\{\bar{\gamma}_t-\gamma_t\}\\
&\overset{\mathrm{law}}{=}\{\bar{\gamma}_T\bar{\gamma}_{tT}-\gamma_T\gamma_{tT}\}\\
&=\{G_T\bar{\gamma}_{tT}+\gamma_T(\bar{\gamma}_{tT}-\gamma_{tT})\}, \tag{2.89}
\end{aligned}
\]
where $\{\gamma_{tT}\}$ and $\{\bar{\gamma}_{tT}\}$ are identical gamma bridges, independent of each other, and independent of the random vector $(\gamma_T,G_T)$. The joint density of $(\gamma_T,G_T)$ is given by

\[ (y,z)\mapsto g^{(m)}_T(y)\,g^{(m)}_T(y+z). \tag{2.90} \]

Given that GT = z, the conditional density of γT is

\[ y\mapsto\mathbf{1}_{\{y>(-z)^+\}}\,\frac{1}{f^{(m)}_T(z)}\,\frac{m^{2mT}}{\Gamma[mT]^2}\,(yz+y^2)^{mT-1}\,\mathrm{e}^{-m(z+2y)}, \tag{2.91} \]
where $x^+$ denotes the positive part of $x$, so $(-z)^+=\max(0,-z)$. Then we have
\[ \{U^{(z)}_{tT}\}\overset{\mathrm{law}}{=}\{z\bar{\gamma}_{tT}+Y_z\,\mu(\bar{\gamma}_{tT}-\gamma_{tT})\}, \tag{2.92} \]

where $Y_z>0$ is a random variable with density (2.91).

2.8 Stable-1/2 subordinator and stable-1/2 bridge

2.8.1 Stable-1/2 subordinator

Let $\{S_t\}$ be a stable-1/2 subordinator. The characteristic exponent of $\{S_t\}$ is
\[ \Psi(\lambda)=\frac{c}{\sqrt{2}}\,|\lambda|^{1/2}\bigl(1-\mathrm{i}\,\mathrm{sign}(\lambda)\bigr), \tag{2.93} \]
where $c>0$ is a parameter related to $\kappa$ in (2.10) by $c=\kappa\sqrt{2}$. Then $\{S_t\}$ satisfies the scaling property $\{k^{-2}S_{kt}\}\overset{\mathrm{law}}{=}\{S_t\}$, for $k>0$. The random variable $S_t$ has a `L\'evy distribution' with density

\[ f_t(x)=\mathbf{1}_{\{x>0\}}\,\frac{ct}{\sqrt{2\pi}}\,x^{-3/2}\exp\left[-\frac{1}{2}\frac{c^2t^2}{x}\right]. \tag{2.94} \]
We call $c$ the `activity parameter' of $\{S_t\}$. The density (2.94) is bounded for all $t>0$ and is strictly positive for all $x>0$; hence $\{S_t\}\in\mathcal{C}[0,T]$ for all $T>0$. Integrating (2.94) yields the distribution function

\[ \int_0^xf_t(y)\,\mathrm{d}y=2\,\Phi\bigl[-ctx^{-1/2}\bigr], \tag{2.95} \]
where $\Phi[x]$ is the standard normal distribution function. The random variable $S_t$ has infinite mean; indeed, $\mathbb{E}[S_t^p]<\infty$ if and only if $p<1/2$. The density of $1/S_t$ is

\[ x\mapsto\mathbf{1}_{\{x>0\}}\,\frac{ct}{\sqrt{2}\,\Gamma[1/2]}\,x^{-1/2}\exp\left[-\frac{1}{2}c^2t^2x\right]. \tag{2.96} \]
Thus the increments of $\{S_t\}$ are distributed as reciprocals of gamma random variables. For $\{W_t\}$ a standard Brownian motion, define the exceedance times $\{\tau_t\}_{t\geq0}$ by
\[ \tau_t=\inf\{s:W_s>ct\}. \tag{2.97} \]
From Feller [37, X.7], we then have

\[ \{S_t\}\overset{\mathrm{law}}{=}\{\tau_t\}. \tag{2.98} \]
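The reciprocal-gamma observation (2.96) gives a direct sampler for $S_t$: $1/S_t$ is $\mathrm{Gamma}(\tfrac12,\,\text{rate }c^2t^2/2)$. A minimal sketch (names ours), cross-checked against the closed-form distribution function (2.95):

```python
import math
import random

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def sample_S_t(c, t, rng, n=20000):
    """By (2.96), 1/S_t ~ Gamma(shape 1/2, rate c^2 t^2/2);
    sample S_t as the reciprocal of a gamma variate."""
    rate = 0.5 * c * c * t * t
    return [1.0 / rng.gammavariate(0.5, 1.0 / rate) for _ in range(n)]

rng = random.Random(5)
xs = sample_S_t(c=1.0, t=1.0, rng=rng)
x0 = 2.0
emp = sum(1 for x in xs if x <= x0) / len(xs)
cdf = 2.0 * phi(-1.0 * 1.0 / math.sqrt(x0))  # 2 Phi[-c t x^{-1/2}], eq. (2.95)
```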

2.8.2 Stable-1/2 bridge

Fix $z>0$ and let $\{S^{(z)}_{tT}\}_{0\leq t\leq T}$ be a bridge of the process $\{S_t\}$ to the value $z$ at time $T$. We call $\{S^{(z)}_{tT}\}$ a stable-1/2 bridge. The density function of the random variable $S^{(z)}_{tT}$ is
\[
\begin{aligned}
f_{tT}(y;z)&=\frac{f_t(y)\,f_{T-t}(z-y)}{f_T(z)}\\
&=\mathbf{1}_{\{0<y<z\}}\,\frac{ct(T-t)}{\sqrt{2\pi}\,T}\left(\frac{z}{y(z-y)}\right)^{3/2}\exp\left[-\frac{1}{2}\frac{c^2(Ty-tz)^2}{yz(z-y)}\right]. \tag{2.99}
\end{aligned}
\]
This density is bounded in $y$ and has bounded support, so all of the positive moments of $S^{(z)}_{tT}$ are finite:

\[ \mathbb{E}\Bigl[\bigl(S^{(z)}_{tT}\bigr)^p\Bigr]<\infty\quad\text{for }p>0. \tag{2.100} \]
Remark 2.8.1. From Proposition 2.4.1, stable-1/2 bridges satisfy the following scaling property: for $k>0$ a constant, and $\{S^{(z)}_{tT}\}$ a stable-1/2 bridge, we have
\[ \bigl\{S^{(z)}_{tT}\bigr\}_{0\leq t\leq T}\overset{\mathrm{law}}{=}\bigl\{k^{-2}S^{(k^2z)}_{kt,kT}\bigr\}_{0\leq t\leq T}. \tag{2.101} \]
Integrating the density (2.99) yields the following (details of the proof can be found in Appendix A.1):

Proposition 2.8.2. For $y\in[0,z]$, the distribution function of the random variable $S^{(z)}_{tT}$ is given by
\[ F_{tT}(y;z)=\Phi\left[\frac{c(Ty-tz)}{\sqrt{yz(z-y)}}\right]+\left(1-\frac{2t}{T}\right)\mathrm{e}^{2c^2t(T-t)/z}\,\Phi\left[\frac{c((2t-T)y-tz)}{\sqrt{yz(z-y)}}\right]. \tag{2.102} \]

Remark 2.8.3. When $t=T/2$, the second term in the distribution function (2.102) vanishes. The distribution function is then analytically invertible, and we obtain the identity
\[ S^{(z)}_{T/2,T}\overset{\mathrm{law}}{=}\frac{1}{2}z\left(1+\frac{Z}{\sqrt{c^2T^2/z+Z^2}}\right), \tag{2.103} \]
where $Z$ is a standard normal random variable.
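The identity (2.103) gives an exact sampler for the bridge midpoint from a single normal variate. A hedged numerical sketch (names ours): every sample lies strictly inside $(0,z)$, and the Monte Carlo mean approximates $\mathbb{E}[S^{(z)}_{T/2,T}]=z/2$, consistent with (2.115) below.

```python
import math
import random

def sample_midpoint(z, c, T, rng, n=20000):
    """Sample S^{(z)}_{T/2,T} exactly via the identity (2.103),
    with Z a standard normal random variable."""
    out = []
    for _ in range(n):
        Z = rng.gauss(0.0, 1.0)
        out.append(0.5 * z * (1.0 + Z / math.sqrt(c * c * T * T / z + Z * Z)))
    return out

rng = random.Random(9)
xs = sample_midpoint(z=1.0, c=1.0, T=1.0, rng=rng)
mean = sum(xs) / len(xs)
```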

Corollary 2.8.4.
\[ \bigl\{S^{(z)}_{tT}\bigr\}\overset{\mathrm{law}}{\longrightarrow}\left\{\frac{t}{T}z\right\}\quad\text{as }c\to\infty. \tag{2.104} \]
Proof. Fix $z>0$. It is sufficient to show that
\[ \lim_{c\to\infty}F_{tT}(y;z)=\mathbf{1}_{\{Ty\geq tz\}}\quad\text{for Lebesgue-a.e. }y\in(0,z), \tag{2.105} \]

since this is equivalent to
\[ \lim_{c\to\infty}Q\left[\Bigl|S^{(z)}_{tT}-\frac{t}{T}z\Bigr|<\varepsilon\right]=1\quad\text{for all }t\in[0,T]\text{ and any }\varepsilon>0. \tag{2.106} \]
Define $\alpha$ by
\[ \alpha=-\frac{(2t-T)y-tz}{\sqrt{yz(z-y)}}, \tag{2.107} \]
and note that $\alpha>0$ for $y\in(0,z)$. The inequality [1, 7.1.13] states
\[ \mathrm{e}^{x^2}\int_x^{\infty}\mathrm{e}^{-t^2}\,\mathrm{d}t\leq\frac{1}{x+\sqrt{x^2+4/\pi}}\quad(x>0), \tag{2.108} \]
from which we deduce
\[
\mathrm{e}^{2c^2t(T-t)/z}\,\Phi[-\alpha c]\leq\mathrm{e}^{2c^2t(T-t)/z}\,\sqrt{\frac{2}{\pi}}\,\frac{\mathrm{e}^{-\alpha^2c^2/2}}{\alpha c+\sqrt{\alpha^2c^2+8/\pi}}
=\sqrt{\frac{2}{\pi}}\,\frac{\exp\left[-c^2\dfrac{(Ty-tz)^2}{2yz(z-y)}\right]}{\alpha c+\sqrt{\alpha^2c^2+8/\pi}}. \tag{2.109}
\]
Since the left-hand side of (2.109) is positive, we see that

\[ \lim_{c\to\infty}\mathrm{e}^{2c^2t(T-t)/z}\,\Phi[-\alpha c]=0. \tag{2.110} \]
Then we have
\[
\begin{aligned}
\lim_{c\to\infty}F_{tT}(y;z)&=\lim_{c\to\infty}\Phi\left[\frac{c(Ty-tz)}{\sqrt{yz(z-y)}}\right]+\left(1-\frac{2t}{T}\right)\lim_{c\to\infty}\mathrm{e}^{2c^2t(T-t)/z}\,\Phi[-\alpha c]\\
&=\mathbf{1}_{\{Ty-tz\geq0\}}-\tfrac{1}{2}\mathbf{1}_{\{Ty=tz\}}, \tag{2.111}
\end{aligned}
\]
which completes the proof.

The proof of the following proposition can be found in Appendix A.2:

Proposition 2.8.5. Define the incomplete first moment of $S^{(z)}_{tT}$ by
\[ M_{tT}(y;z)=\int_0^yu\,f_{tT}(u;z)\,\mathrm{d}u\quad(0\leq y\leq z). \tag{2.112} \]
Then we have

\[ M_{tT}(y;z)=\frac{t}{T}z\left(\Phi\left[\frac{c(Ty-tz)}{\sqrt{yz(z-y)}}\right]-\mathrm{e}^{2c^2t(T-t)/z}\,\Phi\left[\frac{c((2t-T)y-tz)}{\sqrt{yz(z-y)}}\right]\right), \tag{2.113} \]
and the second moment of $S^{(z)}_{tT}$ is given by

\[ \mathbb{E}\Bigl[\bigl(S^{(z)}_{tT}\bigr)^2\Bigr]=\frac{t}{T}z^2\left(1-c(T-t)\sqrt{\frac{2\pi}{z}}\,\mathrm{e}^{\frac{c^2T^2}{2z}}\,\Phi\bigl[-cTz^{-1/2}\bigr]\right). \tag{2.114} \]

Corollary 2.8.6.

\[ \mathbb{E}\bigl[S^{(z)}_{tT}\bigr]=\frac{t}{T}z, \tag{2.115} \]

\[ \mathrm{Var}\bigl[S^{(z)}_{tT}\bigr]=\frac{t(T-t)}{T^2}z^2\left(1-\sqrt{\frac{2\pi c^2T^2}{z}}\,\mathrm{e}^{\frac{c^2T^2}{2z}}\,\Phi\left[-\sqrt{\frac{c^2T^2}{z}}\right]\right). \tag{2.116} \]
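The moment formulas (2.114)–(2.116) satisfy the algebraic identity $\mathrm{Var}=\mathbb{E}[S^2]-(\mathbb{E}[S])^2$ term by term, which makes a useful self-check. A hedged sketch implementing them directly (names ours):

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def mean_S(t, T, z):
    """E[S^{(z)}_{tT}] from (2.115)."""
    return t * z / T

def second_moment_S(t, T, z, c):
    """E[(S^{(z)}_{tT})^2] from (2.114)."""
    tail = c * (T - t) * math.sqrt(2.0 * math.pi / z) \
        * math.exp(c * c * T * T / (2.0 * z)) * phi(-c * T / math.sqrt(z))
    return (t / T) * z * z * (1.0 - tail)

def var_S(t, T, z, c):
    """Var[S^{(z)}_{tT}] from (2.116)."""
    tail = math.sqrt(2.0 * math.pi * c * c * T * T / z) \
        * math.exp(c * c * T * T / (2.0 * z)) * phi(-math.sqrt(c * c * T * T / z))
    return t * (T - t) / T**2 * z * z * (1.0 - tail)
```

For moderate parameter values (large $c^2T^2/z$ overflows the exponential, so the formulas should then be evaluated via scaled complementary error functions), the three functions are mutually consistent to rounding error.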

Remark 2.8.7. In general, we have

\[ \mathbb{E}\bigl[S^{(z)}_{tT}\,\bigl|\,S^{(z)}_{sT}=x\bigr]=\frac{T-t}{T-s}x+\frac{t-s}{T-s}z, \tag{2.117} \]
and

\[ \mathbb{E}\Bigl[\bigl(S^{(z)}_{tT}\bigr)^2\,\Bigl|\,S^{(z)}_{sT}=x\Bigr]=\frac{t-s}{T-s}(z-x)^2\left(1-c(T-t)\sqrt{\frac{2\pi}{z-x}}\,\mathrm{e}^{\frac{c^2(T-s)^2}{2(z-x)}}\,\Phi\left[-c\frac{T-s}{\sqrt{z-x}}\right]\right), \tag{2.118} \]
for $0\leq s<t<T$ and $0\leq x<z$.

Figure 2.1: Simulations of the stable-1/2 bridge demonstrating the influence of the activity parameter $c$ ((a) $c=1$; (b) $c=5$). Qualitatively speaking, increasing the value of $c$ decreases the frequency of large jumps, and increases the frequency of small jumps.

2.9 Cauchy process and Cauchy bridge

2.9.1 Cauchy process

The Cauchy process is a stable process with index $\alpha=1$. Let $\{Z_t\}$ be a driftless Cauchy process. The characteristic exponent of $\{Z_t\}$ is
\[ \Psi(\lambda)=c|\lambda|\quad\text{for }\lambda\in\mathbb{R}, \tag{2.119} \]
where $c>0$ is a scale parameter. The scaling property of $\{Z_t\}$ can be written
\[ \{k^{-1}Z_{kt}\}\overset{\mathrm{law}}{=}\{Z_t\}\quad\text{for }k>0. \tag{2.120} \]

The density function of $Z_t$ is
\[ f_t(y)=\frac{ct}{\pi(y^2+c^2t^2)}, \tag{2.121} \]
which is symmetric about $y=0$, and the distribution function is
\[ \int_{-\infty}^yf_t(x)\,\mathrm{d}x=\frac{1}{2}+\frac{1}{\pi}\arctan\left[\frac{y}{ct}\right]. \tag{2.122} \]
We have $\mathbb{E}[|Z_t|]=\infty$, but
\[ \mathbb{E}[|Z_t|^p]<\infty\quad\text{for }0<p<1. \tag{2.123} \]

2.9.2 Cauchy bridge

Fix $z\in\mathbb{R}$ and let $\{Z^{(z)}_{tT}\}$ be a bridge of the Cauchy process $\{Z_t\}$ terminating at the value $z$ at time $T$. The density of the random variable $Z^{(z)}_{tT}$ is

\[
f_{tT}(y;z)=\frac{f_t(y)\,f_{T-t}(z-y)}{f_T(z)}
=\frac{ct(T-t)}{\pi T}\,\frac{z^2+c^2T^2}{(y^2+c^2t^2)\bigl((z-y)^2+c^2(T-t)^2\bigr)}. \tag{2.124}
\]
Integrating (2.124) yields the following (details of the proof can be found in Appendix A.3):

Proposition 2.9.1. The distribution function of $Z^{(z)}_{tT}$ is
\[
\begin{aligned}
F_{tT}(y;z)=\frac{1}{2}&+\frac{(T-t)\bigl(c^2T(T-2t)+z^2\bigr)}{\pi T\bigl(c^2(T-2t)^2+z^2\bigr)}\arctan\left[\frac{y}{ct}\right]\\
&+\frac{t\bigl(c^2T(T-2t)-z^2\bigr)}{\pi T\bigl(c^2(T-2t)^2+z^2\bigr)}\arctan\left[\frac{z-y}{c(T-t)}\right]\\
&+\frac{ct(T-t)z}{\pi T\bigl(c^2(T-2t)^2+z^2\bigr)}\log\left[\frac{y^2+c^2t^2}{(z-y)^2+c^2(T-t)^2}\right], \tag{2.125}
\end{aligned}
\]
where $0<t<T$ and $c>0$.

Remark 2.9.2. It follows from Proposition 2.4.1 that, for fixed $k>0$, the Cauchy bridge $\{Z^{(z)}_{tT}\}$ exhibits the scaling property
\[ \bigl\{k^{-1}Z^{(kz)}_{kt,kT}\bigr\}_{0\leq t\leq T}\overset{\mathrm{law}}{=}\bigl\{Z^{(z)}_{tT}\bigr\}_{0\leq t\leq T}. \tag{2.126} \]
The first moment of a Cauchy process is not well defined, and its second moment is infinite. However, the first two moments of the Cauchy bridge exist and are finite. Hence conditioning a Cauchy process on its final value can temper its behaviour. The proof of the following proposition can be found in Appendix A.4:

Proposition 2.9.3. The first two moments of $Z^{(z)}_{tT}$ exist, and are given by
\[ \mathbb{E}\bigl[Z^{(z)}_{tT}\bigr]=\frac{t}{T}z, \tag{2.127} \]
\[ \mathbb{E}\Bigl[\bigl(Z^{(z)}_{tT}\bigr)^2\Bigr]=\frac{t}{T}\bigl(z^2+c^2T(T-t)\bigr). \tag{2.128} \]
Remark 2.9.4. In general we have
\[ \mathbb{E}\bigl[Z^{(z)}_{tT}\,\bigl|\,Z^{(z)}_{sT}=x\bigr]=\frac{T-t}{T-s}x+\frac{t-s}{T-s}z, \tag{2.129} \]
\[ \mathbb{E}\Bigl[\bigl(Z^{(z)}_{tT}\bigr)^2\,\Bigl|\,Z^{(z)}_{sT}=x\Bigr]=\frac{T-t}{T-s}x^2+\frac{t-s}{T-s}\bigl(z^2+c^2(T-s)(T-t)\bigr), \tag{2.130} \]
for $0\leq s<t<T$.
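The tempering effect of conditioning can be checked numerically: the bridge density (2.124) decays like $y^{-4}$, so its mean exists even though the Cauchy process has none. A hedged sketch (names and quadrature choices ours) integrates (2.124) on a wide grid and recovers unit mass and the mean $(t/T)z$ of (2.127):

```python
import math

def cauchy_bridge_density(y, t, T, z, c):
    """f_tT(y; z) from (2.124)."""
    num = c * t * (T - t) * (z * z + c * c * T * T)
    den = math.pi * T * (y * y + c * c * t * t) \
        * ((z - y)**2 + c * c * (T - t)**2)
    return num / den

# Midpoint-rule check of normalisation and of (2.127).
t, T, z, c = 0.4, 1.0, 2.0, 1.0
h, L = 0.01, 400.0
grid = [-L + h * (i + 0.5) for i in range(int(2 * L / h))]
mass = h * sum(cauchy_bridge_density(y, t, T, z, c) for y in grid)
mean = h * sum(y * cauchy_bridge_density(y, t, T, z, c) for y in grid)
```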

(z) Remark 2.9.5. The third moment of ZtT does not exist; however

\[ \mathbb{E}\Bigl[\bigl|Z^{(z)}_{tT}\bigr|^p\Bigr]<\infty\quad\text{for }p\in(0,3). \tag{2.131} \]

2.10 Normal inverse-Gaussian process and normal inverse-Gaussian bridge

2.10.1 Inverse-Gaussian process

The inverse-Gaussian (IG) process is a subordinator with L´evy exponent

\[ \Psi(\lambda)=c\left(\sqrt{\gamma^2-2\mathrm{i}\lambda}-\gamma\right), \tag{2.132} \]
for $c>0$ and $\gamma>0$ constants. Let $\{X_t\}$ be an inverse-Gaussian process. The density function of $X_t$ is

\[
q_t(x)=\mathbf{1}_{\{x>0\}}\,\frac{ct\,\mathrm{e}^{\gamma ct}}{\sqrt{2\pi}}\,x^{-3/2}\exp\left[-\tfrac{1}{2}\bigl(c^2t^2x^{-1}+\gamma^2x\bigr)\right]
=\mathbf{1}_{\{x>0\}}\,\frac{ct}{\sqrt{2\pi}}\,x^{-3/2}\exp\left[-\frac{\gamma^2}{2x}\left(x-\frac{ct}{\gamma}\right)^2\right]. \tag{2.133}
\]
For $k\in\mathbb{R}$, the moment $m^{(k)}_t=\mathbb{E}[X_t^k]$ can be written
\[
m^{(k)}_t=\left(\frac{ct}{\gamma}\right)^k\frac{K_{k-1/2}[\gamma ct]}{K_{1/2}[\gamma ct]}
=\sqrt{\frac{2}{\pi}}\,\gamma\,\mathrm{e}^{\gamma ct}\left(\frac{ct}{\gamma}\right)^{k+\frac{1}{2}}K_{k-1/2}[\gamma ct]. \tag{2.134}
\]

Using the series representation of $K_{n+1/2}[z]$ given in (2.71), the first four integer moments simplify to
\[ m^{(1)}_t=\frac{ct}{\gamma}, \tag{2.135} \]
\[ m^{(2)}_t=\frac{ct}{\gamma^3}(1+\gamma ct), \tag{2.136} \]
\[ m^{(3)}_t=\frac{ct}{\gamma^5}(3+3\gamma ct+\gamma^2c^2t^2), \tag{2.137} \]
\[ m^{(4)}_t=\frac{ct}{\gamma^7}(15+15\gamma ct+6\gamma^2c^2t^2+\gamma^3c^3t^3). \tag{2.138} \]

The variance of $X_t$ is $ct/\gamma^3$. Hence, similar to the gamma process, the law of an IG process is characterised by its mean and variance at $t=1$. For $\{W_t\}$ a Brownian motion, define the exceedance times $\{\tau_t\}$ by
\[ \tau_t=\inf\{s\geq0:c^{-1}W_s+c^{-1}\gamma s>t\}. \tag{2.139} \]

Then we have
\[ \{X_t\}\overset{\mathrm{law}}{=}\{\tau_t\}. \tag{2.140} \]
Proposition 2.10.1. A bridge of the process $\{X_t\}$ to a fixed point $z>0$ at time $T$ is identical in law to a bridge process of the stable-1/2 subordinator $\{S_t\}$.

Proof. The (one-dimensional) marginal density of the IG bridge is

\[
\begin{aligned}
f_{tT}(y;z)&=\frac{q_t(y)\,q_{T-t}(z-y)}{q_T(z)}\\
&=\mathbf{1}_{\{0<y<z\}}\,\frac{ct(T-t)}{\sqrt{2\pi}\,T}\left(\frac{z}{y(z-y)}\right)^{3/2}\exp\left[-\frac{1}{2}\frac{c^2(Ty-tz)^2}{yz(z-y)}\right], \tag{2.141}
\end{aligned}
\]
since all of the terms involving $\gamma$ cancel in the ratio. This is identical to the density (2.99) of the stable-1/2 bridge; the same cancellation applies to the transition densities of the two bridges, so their laws coincide, which completes the proof.
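The cancellation of $\gamma$ in the bridge density ratio can be verified numerically. A minimal sketch (names ours): the IG bridge density, computed for two very different drift parameters $\gamma$, agrees with itself and with the stable-1/2 bridge density (2.99).

```python
import math

def ig_density(x, t, c, gamma):
    """q_t(x) from (2.133), in its completed-square form."""
    return (c * t / math.sqrt(2.0 * math.pi)) * x**-1.5 \
        * math.exp(-(gamma**2 / (2.0 * x)) * (x - c * t / gamma)**2)

def ig_bridge_density(y, t, T, z, c, gamma):
    """One-dimensional IG bridge density q_t(y) q_{T-t}(z-y) / q_T(z)."""
    return ig_density(y, t, c, gamma) * ig_density(z - y, T - t, c, gamma) \
        / ig_density(z, T, c, gamma)

def stable_half_bridge_density(y, t, T, z, c):
    """f_tT(y; z) from (2.99)."""
    return (c * t * (T - t) / (math.sqrt(2.0 * math.pi) * T)) \
        * (z / (y * (z - y)))**1.5 \
        * math.exp(-c * c * (T * y - t * z)**2 / (2.0 * y * z * (z - y)))

a = ig_bridge_density(0.3, 0.4, 1.0, 1.0, c=1.0, gamma=0.5)
b = ig_bridge_density(0.3, 0.4, 1.0, 1.0, c=1.0, gamma=3.0)
s = stable_half_bridge_density(0.3, 0.4, 1.0, 1.0, c=1.0)
```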

2.10.2 Normal inverse-Gaussian process

We construct the normal inverse-Gaussian (NIG) process by subordinating a Brownian motion by an independent IG process. Again let $\{X_t\}$ be an IG process, but this time set $c=\gamma$. This ensures that $\mathbb{E}[X_t]=t$, so $\{X_t\}$ has units of time. Then we define the NIG process $\{Y_t\}$ by setting

\[ Y_t=\sigma W(X_t)+\theta X_t, \tag{2.142} \]
for $\sigma>0$ and $\theta\in\mathbb{R}$ constants. In terms of the law of the NIG process, allowing the mean rate of $\{X_t\}$ to differ from unity is equivalent to an appropriate change in the parameters $\sigma$ and $\theta$. We say that $\{Y_t\}$ is a standard NIG process if $\theta=0$ and $\sigma=1$. The characteristic exponent of $\{Y_t\}$ is

\[ \Psi(\lambda)=c\sqrt{c^2+\sigma^2\lambda^2-2\mathrm{i}\theta\lambda}-c^2. \tag{2.143} \]

The density of Yt is

\[ f^{(c,\theta,\sigma)}_t(x)=\frac{ct}{\sigma\pi}\,\mathrm{e}^{c^2t+\theta x/\sigma^2}\,\frac{\sqrt{c^2\sigma^2+\theta^2}}{\sqrt{c^2\sigma^2t^2+x^2}}\,K_1\!\left[\sigma^{-2}\sqrt{(\theta^2+c^2\sigma^2)(c^2\sigma^2t^2+x^2)}\right]. \tag{2.144} \]

Note that $f^{(c,\theta,\sigma)}_t(x)$ is bounded and continuous everywhere. Hence $\{Y_t\}\in\mathcal{C}[0,T]$ for all $T>0$. The density of the standard NIG variable $W(X_t)$ is

\[ f^{(c)}_t(x)=f^{(c,0,1)}_t(x)=\frac{ct\,\mathrm{e}^{c^2t}}{\pi\sqrt{c^2t^2+x^2}}\,K_1\!\left[c\sqrt{c^2t^2+x^2}\right]. \tag{2.145} \]
After some rearrangement, we find

\[ f^{(c,\theta,\sigma)}_t(x)=\frac{k}{\sigma}\,\mathrm{e}^{(c^2-\alpha^2)t+\theta x/\sigma^2}\,f^{(\alpha)}_t(kx/\sigma), \tag{2.146} \]
where $k\geq1$ and $\alpha>0$ are given by
\[ k^2=c^{-1}\sqrt{\theta^2+c^2},\qquad \alpha^2=c\sqrt{\theta^2+c^2}. \tag{2.147} \]

2.10.3 Normal inverse-Gaussian bridge

Let $\{Y_t\}$ be an NIG process with parameter set $\{c,\theta,\sigma\}$, and let $\{Y^{(z)}_{tT}\}$ be a $\{Y_t\}$-bridge to the value $z\in\mathbb{R}$ at time $T$. The transition law of $\{Y^{(z)}_{tT}\}$ is
\[
\begin{aligned}
Q\bigl[Y^{(z)}_{tT}\in\mathrm{d}y\,\bigl|\,Y^{(z)}_{sT}=x\bigr]
&=\frac{f^{(c,\theta,\sigma)}_{t-s}(y-x)\,f^{(c,\theta,\sigma)}_{T-t}(z-y)}{f^{(c,\theta,\sigma)}_{T-s}(z-x)}\,\mathrm{d}y\\
&=\frac{k}{\sigma}\,\frac{f^{(\alpha)}_{t-s}\bigl(\tfrac{k}{\sigma}(y-x)\bigr)\,f^{(\alpha)}_{T-t}\bigl(\tfrac{k}{\sigma}(z-y)\bigr)}{f^{(\alpha)}_{T-s}\bigl(\tfrac{k}{\sigma}(z-x)\bigr)}\,\mathrm{d}y\\
&=Q\left[\tfrac{\sigma}{k}U^{(kz/\sigma)}_{tT}\in\mathrm{d}y\,\Bigl|\,\tfrac{\sigma}{k}U^{(kz/\sigma)}_{sT}=x\right], \tag{2.148}
\end{aligned}
\]
where (i) $0\leq s<t<T$, and (ii) $\{U^{(kz/\sigma)}_{tT}\}$ is a standard NIG bridge, with parameter $\alpha$, to the value $kz/\sigma$ at time $T$.

2.11 Poisson process and Poisson bridge

2.11.1 Poisson process

Let $\{N_t\}$ be a Poisson process with intensity $\lambda>0$. Then $\{N_t\}$ is a L\'evy process and, for each $n\in\mathbb{N}_0$, $Q[N_t=n]=Q_t(n)$, where
\[ Q_t(n)=\mathbf{1}_{\{n\geq0\}}\,\frac{\mathrm{e}^{-\lambda t}(\lambda t)^n}{n!}. \tag{2.149} \]

The characteristic exponent of $\{N_t\}$ is
\[ \Psi(u)=\lambda(1-\mathrm{e}^{\mathrm{i}u}). \tag{2.150} \]
Since $\{N_t\}$ has state space $\mathbb{N}_0$ and is increasing in increments of 1, it is a so-called counting process. The moment generating function of $N_t$ exists, and is given by

\[ \mathbb{E}\bigl[\mathrm{e}^{uN_t}\bigr]=\exp\bigl(\lambda t(\mathrm{e}^u-1)\bigr)\quad\text{for }u\in\mathbb{R}. \tag{2.151} \]
We have

\[ \mathbb{E}[N_t]=\mathrm{Var}[N_t]=\lambda t. \tag{2.152} \]

All higher moments of $N_t$ are finite, and can be found by differentiating (2.151). Define the sequence $\{T_i\}_{i=0}^{\infty}$ of jump times of $\{N_t\}$ by
\[ T_i=\inf\{t\geq0:N_t\geq i\}, \tag{2.153} \]
and let $\{W_i\}$ be the sequence of waiting times $W_i=T_i-T_{i-1}$. The $W_i$'s are independent, identically distributed random variables with

\[ Q[W_i>t]=Q[N_t=0]=\mathrm{e}^{-\lambda t}, \tag{2.154} \]
i.e. the waiting times of Poisson processes are exponentially distributed.

2.11.2 Poisson bridge

Let $\{N^{(k)}_{tT}\}_{0\leq t\leq T}$ be a Poisson bridge to the value $k\in\mathbb{N}_+$ at time $T$ (we are excluding the case where $N^{(k)}_{TT}=0$). The distribution of $N^{(k)}_{tT}$ is

\[
Q\bigl[N^{(k)}_{tT}=j\bigr]=\frac{Q[N_t=j,N_T=k]}{Q[N_T=k]}
=\frac{Q_t(j)\,Q_{T-t}(k-j)}{Q_T(k)}
=\mathbf{1}_{\{0\leq j\leq k\}}\binom{k}{j}\left(\frac{t}{T}\right)^j\left(1-\frac{t}{T}\right)^{k-j}, \tag{2.155}
\]
for $0<t<T$. In other words, $N^{(k)}_{tT}$ has a binomial distribution with parameters $k$ and $t/T$, and its probability generating function is

\[ \mathbb{E}\Bigl[z^{N^{(k)}_{tT}}\Bigr]=\left(1-\frac{t}{T}+\frac{t}{T}z\right)^k\quad(z>0). \tag{2.156} \]
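The binomial law (2.155) makes the Poisson bridge particularly easy to compute with. A minimal sketch (names ours) that builds the pmf and checks the generating function (2.156) and the moments (2.157)–(2.158) below:

```python
import math

def poisson_bridge_pmf(j, k, t, T):
    """Q[N^{(k)}_{tT} = j] from (2.155): Binomial(k, t/T)."""
    if not 0 <= j <= k:
        return 0.0
    p = t / T
    return math.comb(k, j) * p**j * (1.0 - p)**(k - j)

k, t, T = 5, 0.3, 1.0
pmf = [poisson_bridge_pmf(j, k, t, T) for j in range(k + 1)]
mean = sum(j * p for j, p in enumerate(pmf))
second = sum(j * j * p for j, p in enumerate(pmf))
pgf = lambda zz: sum(zz**j * p for j, p in enumerate(pmf))
```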

All moments of $N^{(k)}_{tT}$ exist, and the first two are
\[ \mathbb{E}\bigl[N^{(k)}_{tT}\bigr]=k\frac{t}{T}, \tag{2.157} \]
\[ \mathbb{E}\Bigl[\bigl(N^{(k)}_{tT}\bigr)^2\Bigr]=k\frac{t}{T}\left(1-\frac{t}{T}\right)+k^2\left(\frac{t}{T}\right)^2. \tag{2.158} \]
We denote the jump times of $\{N^{(k)}_{tT}\}$ by $\{T^{(k)}_i\}_{i=1}^k$, and the waiting times by $\{W^{(k)}_i\}_{i=1}^k$. It is well known that the jump times are distributed as ordered, independent uniform random variables (see, for example, Sato [83] or Mikosch [74]). In particular, for $U_{(1)}<\cdots<U_{(k)}$ an ordered sequence of independent standard uniform random variables, we have
\[ \bigl(T^{(k)}_1,\ldots,T^{(k)}_k\bigr)\overset{\mathrm{law}}{=}\frac{T}{T_{k+1}}\bigl(T_1,\ldots,T_k\bigr) \tag{2.159} \]
\[ \overset{\mathrm{law}}{=}T\bigl(U_{(1)},\ldots,U_{(k)}\bigr), \tag{2.160} \]
where $\{T_i\}$ are the jump times of the Poisson process $\{N_t\}$. It is also well known that the jump time $T_{k+1}$ is independent of the vector
\[ \frac{T}{T_{k+1}}\bigl(T_1,\ldots,T_k\bigr). \tag{2.161} \]
The distributional identity (2.159) is equivalent to the following:

\[ \bigl(W^{(k)}_1,\ldots,W^{(k)}_k\bigr)\overset{\mathrm{law}}{=}\frac{T}{\sum_{i=1}^{k+1}W_i}\bigl(W_1,\ldots,W_k\bigr), \tag{2.162} \]
where $\{W_i\}$ are the waiting times of $\{N_t\}$ (i.e. they are independent, identically-distributed exponential random variables). The ordered uniform random variable $U_{(i)}$ has a beta distribution with parameters $\alpha=i$ and $\beta=k-i+1$. Hence we have
\[ Q\bigl[T^{(k)}_i\leq t\bigr]=I_{t/T}[i,k-i+1]. \tag{2.163} \]

Here $I_z[\alpha,\beta]$ is the regularised incomplete beta function, defined by
\[ I_z[\alpha,\beta]=\frac{\int_0^zy^{\alpha-1}(1-y)^{\beta-1}\,\mathrm{d}y}{\int_0^1y^{\alpha-1}(1-y)^{\beta-1}\,\mathrm{d}y},\quad\text{for }\alpha,\beta>0. \tag{2.164} \]
Define a process $\{\bar{N}^{(k)}_{tT}\}$ for $0\leq t<T$ by
\[ \bar{N}^{(k)}_{tT}=N\!\left(\frac{t}{T}T_{k+1}\right),\qquad \bar{N}^{(k)}_{TT}=k. \tag{2.165} \]
Then $\{\bar{N}^{(k)}_{tT}\}_{0\leq t\leq T}$ is a counting process that jumps at the times $\frac{T}{T_{k+1}}(T_1,\ldots,T_k)$, and is independent of $T_{k+1}$. It follows that
\[ \{\bar{N}^{(k)}_{tT}\}_{0\leq t\leq T}\overset{\mathrm{law}}{=}\{N^{(k)}_{tT}\}_{0\leq t\leq T}. \tag{2.166} \]

Chapter 3

L´evy random bridges

The idea of information-based asset pricing is to model the flow of information in

financial markets, and hence to construct the market filtration explicitly. Let $X_T$ be a random variable (a market factor) with a given a priori distribution. The value of $X_T$ will be revealed to the market at time $T$. We wish to construct an information process $\{\xi_{tT}\}$ such that $\xi_{TT}=X_T$. We can then use the filtration generated by $\{\xi_{tT}\}$ to model the information that market participants have about $X_T$. One problem to overcome is how to ensure that the marginal law of $\xi_{TT}$ is the a priori law of $X_T$. Two explicit forms for the information process have been considered in the literature. The first is
\[ \xi_{tT}=\frac{t}{T}X_T+\beta_{tT}\quad(0\leq t\leq T), \tag{3.1} \]
where $\{\beta_{tT}\}_{0\leq t\leq T}$ is a Brownian bridge starting and ending at the value 0 (see [17, 19, 55, 64, 18, 81]). The second is

\[ \xi_{tT}=X_T\gamma_{tT}\quad(0\leq t\leq T), \tag{3.2} \]
where $X_T>0$ and $\{\gamma_{tT}\}_{0\leq t\leq T}$ is a gamma bridge starting at the value 0 and ending at the value 1 (see [20]). These forms share the property that each is identical in law to a L\'evy process conditioned to have the a priori law of $X_T$ at time $T$. The Brownian bridge information process is identical in law to a conditioned Brownian motion, and the gamma bridge information process is identical in law to a conditioned gamma process. With this as motivation, in this chapter we define a class of processes that we call L\'evy random bridges (LRBs). An LRB is identical in law to a L\'evy process


conditioned to have a prespecified marginal law at T . Later we shall use LRBs as information processes in information-based models.

3.1 Defining LRBs

An LRB can be described as a process whose bridge laws are L\'evy bridge laws. In the definitions below we define LRBs by reference to their finite-dimensional distributions rather than as conditioned L\'evy processes. This proves convenient in later calculations.

Definition 3.1.1. We say that the process $\{L_{tT}\}_{0\leq t\leq T}$ has the law $\mathrm{LRB}_C([0,T],\{f_t\},\nu)$ if the following are satisfied:

1. $L_{TT}$ has marginal law $\nu$.

2. There exists a L\'evy process $\{L_t\}\in\mathcal{C}[0,T]$ such that $L_t$ has density $f_t(x)$ for all $t\in(0,T]$.

3. $\nu$ concentrates mass where $f_T(z)$ is positive and finite, i.e. $0<f_T(z)<\infty$ for $\nu$-a.e. $z$.

4. For every $n\in\mathbb{N}_+$, every $0<t_1<\cdots<t_n<T$, every $(x_1,\ldots,x_n)\in\mathbb{R}^n$, and $\nu$-a.e. $z$, we have
\[ Q[L_{t_1,T}\leq x_1,\ldots,L_{t_n,T}\leq x_n\,|\,L_{TT}=z]=Q[L_{t_1}\leq x_1,\ldots,L_{t_n}\leq x_n\,|\,L_T=z]. \]

Remark 3.1.2. Conditions 2 and 3 in Definition 3.1.1 are sufficient conditions for the right-hand side of the equation in condition 4 to make sense.

Definition 3.1.3. We say that the process $\{M_{tT}\}_{0\leq t\leq T}$ has the law $\mathrm{LRB}_D([0,T],\{Q_t\},P)$ if the following are satisfied:

1. $M_{TT}$ has probability mass function $P$.

2. There exists a L\'evy process $\{M_t\}\in\mathcal{D}$ such that $M_t$ has marginal probability mass function $Q_t(a)$ for all $t\in(0,T]$.

3. The law of $M_{TT}$ is absolutely continuous with respect to the law of $M_T$, i.e. if $P(a)>0$ then $Q_T(a)>0$.

4. For every $n\in\mathbb{N}_+$, every $0<t_1<\cdots<t_n<T$, every $(k_1,\ldots,k_n)\in\mathbb{Z}^n$, and every $b$ such that $P(b)>0$, we have
\[ Q[M_{t_1,T}=a_{k_1},\ldots,M_{t_n,T}=a_{k_n}\,|\,M_{TT}=b]=Q[M_{t_1}=a_{k_1},\ldots,M_{t_n}=a_{k_n}\,|\,M_T=b]. \]

Definition 3.1.4. For a fixed time $s<T$, if the law of the process $\{\eta_{s+t}\}_{0\leq t\leq T-s}$ is of the type $\mathrm{LRB}_C([0,T-s],\cdot,\cdot)$, resp. $\mathrm{LRB}_D([0,T-s],\cdot,\cdot)$, then we say that $\{\eta_t\}_{s\leq t\leq T}$ has law $\mathrm{LRB}_C([s,T],\cdot,\cdot)$, resp. $\mathrm{LRB}_D([s,T],\cdot,\cdot)$.

If the law of a process is one of the LRB-types defined above, then we say that it is a L\'evy random bridge (LRB).

3.2 Finite-dimensional distributions

For the rest of this chapter we assume that $\{L_{tT}\}$ and $\{M_{tT}\}$ are LRBs with laws $\mathrm{LRB}_C([0,T],\{f_t\},\nu)$ and $\mathrm{LRB}_D([0,T],\{Q_t\},P)$, respectively. We also assume that $\{L_t\}$ is a L\'evy process such that $L_t$ has density $f_t(x)$ for $t\leq T$, and that $\{M_t\}$ is a L\'evy process such that $M_t$ has probability mass function $Q_t(a_i)$ for $t\leq T$. The finite-dimensional distributions of $\{L_{tT}\}$ are given by

\[ Q[L_{t_1,T}\in\mathrm{d}x_1,\ldots,L_{t_n,T}\in\mathrm{d}x_n,L_{TT}\in\mathrm{d}z]=\prod_{i=1}^nf_{t_i-t_{i-1}}(x_i-x_{i-1})\,\mathrm{d}x_i\;\psi_{t_n}(\mathrm{d}z;x_n), \tag{3.3} \]

where the (un-normalised) measure ψt(dz; ξ) is given by

\[ \psi_0(\mathrm{d}z;\xi)=\nu(\mathrm{d}z), \tag{3.4} \]

\[ \psi_t(\mathrm{d}z;\xi)=\frac{f_{T-t}(z-\xi)}{f_T(z)}\,\nu(\mathrm{d}z), \tag{3.5} \]
for $0<t<T$; here $t_0=0$ and $x_0=0$. Writing
\[ f_{tT}(x;z)=\frac{f_t(x)\,f_{T-t}(z-x)}{f_T(z)} \tag{3.6} \]
for the density of the L\'evy bridge,

the marginal law of $L_{tT}$ is given by
\[ Q[L_{tT}\in\mathrm{d}x]=f_t(x)\,\psi_t(\mathbb{R};x)\,\mathrm{d}x=\int_{z=-\infty}^{\infty}f_{tT}(x;z)\,\nu(\mathrm{d}z)\,\mathrm{d}x. \tag{3.7} \]

Hence the density of $L_{tT}$ exists for $t<T$, and
\[ 0\leq\psi_t(\mathbb{R};x)<\infty\quad\text{for Lebesgue-a.e. }x\in\mathrm{Support}(f_t). \tag{3.8} \]

In particular, we have
\[ 0<\psi_t(\mathbb{R};L_{tT})<\infty\quad\text{and}\quad 0<f_{T-t}(z-L_{tT})<\infty \tag{3.9} \]
for a.e. value of $L_{tT}$ and $\nu$-a.e. $z$. If $\nu(\{z\})=1$ for some point $z\in\mathbb{R}$, i.e. $Q[L_{TT}=z]=1$, then $\{L_{tT}\}$ is a L\'evy bridge. If $\nu(\mathrm{d}z)=f_T(z)\,\mathrm{d}z$, then $\{L_{tT}\}\overset{\mathrm{law}}{=}\{L_t\}$ for $t\in[0,T]$. In the discrete case, the finite-dimensional probabilities of $\{M_{tT}\}$ are

\[ Q[M_{t_1,T}=a_{k_1},\ldots,M_{t_n,T}=a_{k_n},M_{TT}=z]=\prod_{i=1}^nQ_{t_i-t_{i-1}}(a_{k_i}-a_{k_{i-1}})\;\phi_{t_n}(z;a_{k_n}), \tag{3.10} \]
where the function $\phi_t(z;\xi)$ is given by
\[ \phi_0(z;\xi)=P(z), \tag{3.11} \]
\[ \phi_t(z;\xi)=\frac{Q_{T-t}(z-\xi)}{Q_T(z)}\,P(z), \tag{3.12} \]
for $0<t<T$.

Many of the results that follow are proved for the LRB $\{L_{tT}\}$, which has a continuous state-space. Analogous results are provided for the discrete state-space process $\{M_{tT}\}$; details of proofs are omitted since they are similar to the continuous case.

3.3 LRBs as conditioned L´evy processes

It is useful to interpret an LRB as a L\'evy process conditioned to have a specified marginal law $\nu$ at time $T$. Suppose that the random variable $Z$ has law $\nu$; then:
\[
\begin{aligned}
Q[L_{t_1}\in\mathrm{d}x_1,\ldots,L_{t_n}\in\mathrm{d}x_n,L_T\in\mathrm{d}z\,|\,L_T=Z]
&=Q[L_{t_1}\in\mathrm{d}x_1,\ldots,L_{t_n}\in\mathrm{d}x_n\,|\,L_T=z]\,\nu(\mathrm{d}z)\\
&=\frac{f_{T-t_n}(z-x_n)}{f_T(z)}\prod_{i=1}^nf_{t_i-t_{i-1}}(x_i-x_{i-1})\,\mathrm{d}x_i\;\nu(\mathrm{d}z). \tag{3.13}
\end{aligned}
\]
Hence the conditioned L\'evy process has law $\mathrm{LRB}_C([0,T],\{f_t\},\nu)$.

3.4 The Markov property

In this section we show that LRBs are Markov processes. The Markov property is a key tool in the application of LRBs to information-based asset pricing. As will be seen below, the Markov property of an LRB follows from the Markov property of the associated L\'evy bridge processes. Note that if we stop the LRB $\{L_{tT}\}$ at time $s>0$, and then restart it, the terminal law will no longer be the a priori law $\nu$, but instead an updated law conditional on $L_{sT}$.

3.4.1 Continuous state-space

Proposition 3.4.1. The process $\{L_{tT}\}_{0\leq t\leq T}$ is a Markov process with transition law
\[ Q[L_{tT}\in\mathrm{d}y\,|\,L_{sT}=x]=\frac{\psi_t(\mathbb{R};y)}{\psi_s(\mathbb{R};x)}\,f_{t-s}(y-x)\,\mathrm{d}y, \qquad Q[L_{TT}\in\mathrm{d}y\,|\,L_{sT}=x]=\frac{\psi_s(\mathrm{d}y;x)}{\psi_s(\mathbb{R};x)}, \tag{3.14} \]
where $0\leq s<t<T$.

Proof. From the finite-dimensional distributions (3.3), the conditional law of the terminal value satisfies
\[ Q[L_{TT}\in\mathrm{d}y\,|\,L_{t_1,T}=x_1,\ldots,L_{t_m,T}=x_m]=\frac{\psi_{t_m}(\mathrm{d}y;x_m)}{\psi_{t_m}(\mathbb{R};x_m)}. \tag{3.16} \]

We need now only consider the case t < T . Proposition 2.3.1 shows that L´evy bridges are Markov processes; therefore,

\[
\mathbb{Q}[L_t\le y \mid L_{t_1}=x_1,\ldots,L_{t_m}=x_m,\,L_T=x]
=\mathbb{Q}[L_t\le y \mid L_{t_m}=x_m,\,L_T=x]. \tag{3.17}
\]
It is then straightforward, using part 4 of Definition 3.1.1, to show that LRBs are Markov processes. Indeed, we have:

\[
\begin{aligned}
\mathbb{Q}[L_{tT}\le y \mid L_{t_1,T}=x_1,\ldots,L_{t_m,T}=x_m]
&=\int_{-\infty}^{\infty}\mathbb{Q}[L_{tT}\le y \mid L_{t_1,T}=x_1,\ldots,L_{t_m,T}=x_m,\,L_{T,T}=x]\,\nu(\mathrm{d}x)\\
&=\int_{-\infty}^{\infty}\mathbb{Q}[L_{t}\le y \mid L_{t_1}=x_1,\ldots,L_{t_m}=x_m,\,L_{T}=x]\,\nu(\mathrm{d}x)\\
&=\int_{-\infty}^{\infty}\mathbb{Q}[L_{t}\le y \mid L_{t_m}=x_m,\,L_{T}=x]\,\nu(\mathrm{d}x)\\
&=\int_{-\infty}^{\infty}\mathbb{Q}[L_{tT}\le y \mid L_{t_m,T}=x_m,\,L_{T,T}=x]\,\nu(\mathrm{d}x)\\
&=\mathbb{Q}[L_{tT}\le y \mid L_{t_m,T}=x_m]. \tag{3.18}
\end{aligned}
\]
The form of the transition law of \(\{L_{tT}\}\) appearing in (3.14) follows from (3.3). \(\square\)

Example. In the Brownian case we set
\[
f_t(z)=\frac{1}{\sqrt{2\pi t}}\exp\Bigl(-\frac{z^2}{2t}\Bigr), \tag{3.19}
\]
for \(t>0\). Thus \(f_t(z)\) is the marginal density of a standard Brownian motion at time \(t\). Then we have
\[
\mathbb{Q}[L_{tT}\in\mathrm{d}y \mid L_{sT}=x]
=\sqrt{\frac{T-s}{T-t}}\,
\frac{\int_{-\infty}^{\infty}\exp\bigl(-\tfrac12\bigl[\tfrac{(z-y)^2}{T-t}-\tfrac{z^2}{T}\bigr]\bigr)\,\nu(\mathrm{d}z)}
{\int_{-\infty}^{\infty}\exp\bigl(-\tfrac12\bigl[\tfrac{(z-x)^2}{T-s}-\tfrac{z^2}{T}\bigr]\bigr)\,\nu(\mathrm{d}z)}\,
\frac{\mathrm{e}^{-\frac{(y-x)^2}{2(t-s)}}}{\sqrt{2\pi(t-s)}}\,\mathrm{d}y, \tag{3.20}
\]
and
\[
\mathbb{Q}[L_{TT}\in\mathrm{d}y \mid L_{sT}=x]
=\frac{\exp\bigl(-\tfrac12\bigl[\tfrac{(y-x)^2}{T-s}-\tfrac{y^2}{T}\bigr]\bigr)\,\nu(\mathrm{d}y)}
{\int_{-\infty}^{\infty}\exp\bigl(-\tfrac12\bigl[\tfrac{(z-x)^2}{T-s}-\tfrac{z^2}{T}\bigr]\bigr)\,\nu(\mathrm{d}z)}
=\frac{\exp\bigl(\tfrac{1}{T-s}\bigl[xy-\tfrac12\tfrac{s}{T}y^2\bigr]\bigr)\,\nu(\mathrm{d}y)}
{\int_{-\infty}^{\infty}\exp\bigl(\tfrac{1}{T-s}\bigl[xz-\tfrac12\tfrac{s}{T}z^2\bigr]\bigr)\,\nu(\mathrm{d}z)}. \tag{3.21}
\]

Example. In the gamma case we consider the one-parameter family of processes indexed by \(m>0\) described in Section 2.6.1. We set
\[
f_t(x)=\mathbf{1}_{\{x>0\}}\frac{m^{mt}}{\Gamma[mt]}\,x^{mt-1}\mathrm{e}^{-mx}; \tag{3.22}
\]

so \(f_t(x)=g^{(m)}_t(x)\) for \(g^{(m)}_t(x)\) given by (2.46). These densities are the increment densities of the gamma process with mean 1 and variance \(m^{-1}\) at \(t=1\). Then
\[
\mathbb{Q}[L_{tT}\in\mathrm{d}y \mid L_{sT}=x]
=\mathbf{1}_{\{y>x\}}\,
\frac{\int_{y}^{\infty}(z-y)^{m(T-t)-1}z^{1-mT}\,\nu(\mathrm{d}z)}
{\mathrm{B}[m(T-t),m(t-s)]\int_{x}^{\infty}(z-x)^{m(T-s)-1}z^{1-mT}\,\nu(\mathrm{d}z)}\,
(y-x)^{m(t-s)-1}\,\mathrm{d}y, \tag{3.23}
\]
and
\[
\mathbb{Q}[L_{TT}\in\mathrm{d}y \mid L_{sT}=x]
=\mathbf{1}_{\{y>x\}}\,
\frac{(y-x)^{m(T-s)-1}y^{1-mT}\,\nu(\mathrm{d}y)}
{\int_{x}^{\infty}(z-x)^{m(T-s)-1}z^{1-mT}\,\nu(\mathrm{d}z)}, \tag{3.24}
\]
where \(\mathrm{B}[\alpha,\beta]\) is the Beta function.

3.4.2 Discrete state-space

The analogous result to Proposition 3.4.1 for the discrete case is provided below; the proof is similar.

Proposition 3.4.2. The process \(\{M_{tT}\}_{0\le t\le T}\) has the Markov property, with transition probabilities given by
\[
\mathbb{Q}[M_{tT}=a_j \mid M_{sT}=a_i]
=\frac{\sum_{k=-\infty}^{\infty}\phi_t(a_k;a_j)}{\sum_{k=-\infty}^{\infty}\phi_s(a_k;a_i)}\,Q_{t-s}(a_j-a_i),
\qquad
\mathbb{Q}[M_{TT}=a_j \mid M_{sT}=a_i]
=\frac{\phi_s(a_j;a_i)}{\sum_{k=-\infty}^{\infty}\phi_s(a_k;a_i)}, \tag{3.25}
\]
where \(0\le s<t<T\).

3.5 Conditional terminal distributions

Let \(\{\mathcal{F}^L_t\}\) and \(\{\mathcal{F}^M_t\}\) be the filtrations generated by \(\{L_{tT}\}\) and \(\{M_{tT}\}\), respectively.

Definition 3.5.1. Let \(\nu_s\) be the \(\mathcal{F}^L_s\)-conditional law of the terminal value \(L_{TT}\), and let \(P_s\) be the \(\mathcal{F}^M_s\)-conditional probability mass function of the terminal value \(M_{TT}\).

We have ν0(B) = ν(B), and P0(a) = P (a). Furthermore, when s > 0, it follows from the results of the previous section that

\[
\nu_s(B)=\mathbb{Q}\bigl[L_{TT}\in B \,\big|\,\mathcal{F}^L_s\bigr]=\frac{\psi_s(B;L_{sT})}{\psi_s(\mathbb{R};L_{sT})}, \tag{3.26}
\]

and

\[
P_s(a_k)=\mathbb{Q}\bigl[M_{TT}=a_k \,\big|\,\mathcal{F}^M_s\bigr]=\frac{\phi_s(a_k;M_{sT})}{\sum_{j=-\infty}^{\infty}\phi_s(a_j;M_{sT})}. \tag{3.27}
\]
When the a priori \(q\)th moment of \(L_{TT}\) is finite, the \(\mathcal{F}^L_s\)-conditional \(q\)th moment is finite and given by
\[
\int_{-\infty}^{\infty}|z|^q\,\nu_s(\mathrm{d}z). \tag{3.28}
\]

Similarly, when the a priori \(q\)th moment of \(M_{TT}\) is finite, the \(\mathcal{F}^M_s\)-conditional \(q\)th moment is finite and given by
\[
\sum_{k=-\infty}^{\infty}|a_k|^q\,P_s(a_k). \tag{3.29}
\]

When they are finite, the quantities in (3.28) and (3.29) are martingales with respect to \(\{\mathcal{F}^L_t\}\) and \(\{\mathcal{F}^M_t\}\), respectively. If \(q\in\mathbb{Z}\), then \(\int|z|^q\,\nu(\mathrm{d}z)<\infty\) ensures that \(\int z^q\,\nu_s(\mathrm{d}z)\) is a martingale, and \(\sum_k|a_k|^q P(a_k)<\infty\) ensures that \(\sum_k a_k^q\,P_s(a_k)\) is a martingale.

When the terminal law \(\nu\) admits a density, we denote it by \(p(z)\), i.e. \(\nu(\mathrm{d}z)=p(z)\,\mathrm{d}z\). In this case the \(\mathcal{F}^L_t\)-conditional density of \(L_{TT}\) exists, and we denote it by
\[
p_t(z)=\frac{\nu_t(\mathrm{d}z)}{\mathrm{d}z}=\frac{f_{T-t}(z-L_{tT})\,p(z)}{\psi_t(\mathbb{R};L_{tT})\,f_T(z)}. \tag{3.30}
\]

3.6 Measure changes

This section leads to Proposition 3.6.1, which states that there exists a measure \(\mathbb{L}\), equivalent to \(\mathbb{Q}\), under which the LRB \(\{L_{tT}\}\) is a Lévy process. To begin the analysis, it proves convenient to assume the existence of such a measure \(\mathbb{L}\), which we do, and we further assume that under \(\mathbb{L}\) the density of \(L_{tT}\) is \(f_t(x)\). Writing \(\psi_t=\psi_t(\mathbb{R};L_{tT})\), we can show that \(\{\psi_t\}_{0\le t<T}\) is an \(\mathbb{L}\)-martingale. For \(s<t\) we have
\[
\begin{aligned}
\mathbb{E}_{\mathbb{L}}\bigl[\psi_t \,\big|\,\mathcal{F}^L_s\bigr]
&=\mathbb{E}_{\mathbb{L}}\Bigl[\int_{-\infty}^{\infty}\frac{f_{T-t}(z-L_{tT})}{f_T(z)}\,\nu(\mathrm{d}z)\,\Big|\,\mathcal{F}^L_s\Bigr]\\
&=\mathbb{E}_{\mathbb{L}}\Bigl[\int_{-\infty}^{\infty}\frac{f_{T-t}\bigl(z-L_{sT}-(L_{tT}-L_{sT})\bigr)}{f_T(z)}\,\nu(\mathrm{d}z)\,\Big|\,L_{sT}\Bigr]\\
&=\int_{y=-\infty}^{\infty}\int_{z=-\infty}^{\infty}\frac{f_{T-t}(z-L_{sT}-y)}{f_T(z)}\,\nu(\mathrm{d}z)\,f_{t-s}(y)\,\mathrm{d}y\\
&=\int_{z=-\infty}^{\infty}\frac{1}{f_T(z)}\int_{y=-\infty}^{\infty}f_{T-t}(z-L_{sT}-y)\,f_{t-s}(y)\,\mathrm{d}y\;\nu(\mathrm{d}z)\\
&=\int_{z=-\infty}^{\infty}\frac{f_{T-s}(z-L_{sT})}{f_T(z)}\,\nu(\mathrm{d}z)\\
&=\psi_s. \tag{3.31}
\end{aligned}
\]

Since \(\psi_0=1\), we can define a probability measure \(\mathbb{L}^{\mathrm{rb}}\) by the Radon–Nikodým derivative
\[
\left.\frac{\mathrm{d}\mathbb{L}^{\mathrm{rb}}}{\mathrm{d}\mathbb{L}}\right|_{\mathcal{F}^L_t}=\psi_t \qquad \text{for } 0\le t<T. \tag{3.32}
\]
It was noted in Section 3.2 that \(0<\psi_t<\infty\), so \(\mathbb{L}^{\mathrm{rb}}\) is equivalent to \(\mathbb{L}\) for \(t<T\). For \(s,t\) satisfying \(0\le s<t<T\), we have

\[
\begin{aligned}
\mathbb{L}^{\mathrm{rb}}\bigl[L_{tT}\in\mathrm{d}y \,\big|\,\mathcal{F}^L_s\bigr]
&=\mathbb{E}_{\mathbb{L}^{\mathrm{rb}}}\bigl[\mathbf{1}_{\{L_{tT}\in\mathrm{d}y\}}\,\big|\,\mathcal{F}^L_s\bigr]\\
&=\psi_s^{-1}\,\mathbb{E}_{\mathbb{L}}\bigl[\psi_t\,\mathbf{1}_{\{L_{tT}\in\mathrm{d}y\}}\,\big|\,L_{sT}\bigr]\\
&=\psi_s^{-1}\int_{-\infty}^{\infty}\frac{f_{T-t}(z-y)}{f_T(z)}\,\nu(\mathrm{d}z)\;f_{t-s}(y-L_{sT})\,\mathrm{d}y\\
&=\frac{\psi_t(\mathbb{R};y)}{\psi_s(\mathbb{R};L_{sT})}\,f_{t-s}(y-L_{sT})\,\mathrm{d}y. \tag{3.33}
\end{aligned}
\]

We see that \(\{L_{tT}\}_{0\le t<T}\) is a Markov process under the measure \(\mathbb{L}^{\mathrm{rb}}\). Furthermore, comparison with the transition law (3.14) shows that \(\{L_{tT}\}_{0\le t<T}\) has the same law under \(\mathbb{L}^{\mathrm{rb}}\) as it has under \(\mathbb{Q}\). Reversing this change of measure yields the following:

Proposition 3.6.1. Let \(\mathbb{L}\) be defined by
\[
\left.\frac{\mathrm{d}\mathbb{L}}{\mathrm{d}\mathbb{Q}}\right|_{\mathcal{F}^L_t}=\psi_t(\mathbb{R};L_{tT})^{-1} \tag{3.34}
\]
for \(t\in[0,T)\). Then \(\mathbb{L}\) is a probability measure. Under \(\mathbb{L}\), \(\{L_{tT}\}_{0\le t<T}\) is a Lévy process with marginal density \(f_t\).

 In the case of a discrete state space a similar result is obtained:

Proposition 3.6.2. Let \(\mathbb{L}\) be defined by
\[
\left.\frac{\mathrm{d}\mathbb{L}}{\mathrm{d}\mathbb{Q}}\right|_{\mathcal{F}^M_t}=\Biggl[\sum_{k=-\infty}^{\infty}\phi_t(a_k;M_{tT})\Biggr]^{-1} \tag{3.35}
\]
for \(t\in[0,T)\). Then \(\mathbb{L}\) is a probability measure. Under \(\mathbb{L}\), \(\{M_{tT}\}_{0\le t<T}\) is a Lévy process with marginal probability mass function \(Q_t\).



3.7 Dynamic consistency

In this section we show that LRBs possess the so-called dynamic consistency property. For \(\{L_{tT}\}\), this property means that the process \(\{\eta_t\}\) defined by setting
\[
\eta_t=L_{tT}-L_{sT} \qquad (s\le t\le T) \tag{3.36}
\]
is an LRB for fixed \(s\) and \(L_{sT}\) given. Defining the filtration \(\{\mathcal{F}^{\eta}_t\}\) by
\[
\mathcal{F}^{\eta}_t=\sigma\bigl(L_{sT},\{\eta_u\}_{s\le u\le t}\bigr), \tag{3.37}
\]
we see that
\[
\mathbb{Q}\bigl[F(\{L_{uT}\}_{s\le u\le T})\,\big|\,\mathcal{F}^{\eta}_t\bigr]
=\mathbb{Q}\bigl[F(\{L_{uT}\}_{s\le u\le T})\,\big|\,\mathcal{F}^{L}_t\bigr] \tag{3.38}
\]
for \(0\le s\le t\le T\) and suitable functionals \(F\). This property allows the market

filtration to be generated by a set of LRBs, regardless of the times at which market participants enter the market, and without the views of later participants being inconsistent with those of earlier participants. The dynamic consistency property was introduced in Brody et al. [17] with regard to Brownian random bridges, and was shown by the same authors to hold for gamma random bridges in [20].

Fix a time \(s<T\). Given \(L_{sT}\), we define the process \(\{\eta_t\}\) by (3.36). We shall show that \(\{\eta_t\}\) is an LRB. At time \(s\), the law of \(\eta_T\) is
\[
\nu^*(A)=\nu_s(A+L_{sT}) \qquad \text{for all } A\in\mathcal{B}(\mathbb{R}), \tag{3.39}
\]
where \(A+y\) denotes the shifted set

\[
A+y=\{x : x-y\in A\}. \tag{3.40}
\]
Given the terminal value \(\eta_T\), the finite-dimensional distributions of \(\{\eta_t\}\) are given by
\[
\begin{aligned}
\mathbb{Q}[\eta_{s+t_1}\in\mathrm{d}x_1,\ldots,\eta_{s+t_n}\in\mathrm{d}x_n \mid L_{sT},\,\eta_T=z]
&=\mathbb{Q}[L_{s+t_1,T}-L_{sT}\in\mathrm{d}x_1,\ldots,L_{s+t_n,T}-L_{sT}\in\mathrm{d}x_n \mid L_{sT},\,L_{TT}-L_{sT}=z]\\
&=\mathbb{Q}[L_{s+t_1}-L_{s}\in\mathrm{d}x_1,\ldots,L_{s+t_n}-L_{s}\in\mathrm{d}x_n \mid L_{s},\,L_{T}-L_{s}=z]\\
&=\mathbb{Q}[L_{t_1}\in\mathrm{d}x_1,\ldots,L_{t_n}\in\mathrm{d}x_n \mid L_{T-s}=z]\\
&=\frac{f_{T-s-t_n}(z-x_n)}{f_{T-s}(z)}\prod_{i=1}^{n}f_{t_i-t_{i-1}}(x_i-x_{i-1}), \tag{3.41}
\end{aligned}
\]
for every \(n\in\mathbb{N}_+\), every \(0=t_0<t_1<\cdots<t_n<T-s\), and every \((x_1,\ldots,x_n)\in\mathbb{R}^n\), where \(x_0=0\). Then we have

\[
\mathbb{Q}[\eta_{s+t_1}\in\mathrm{d}x_1,\ldots,\eta_{s+t_n}\in\mathrm{d}x_n,\,\eta_T\in\mathrm{d}z \mid L_{sT}]
=\frac{f_{T-s-t_n}(z-x_n)}{f_{T-s}(z)}\prod_{i=1}^{n}f_{t_i-t_{i-1}}(x_i-x_{i-1})\;\nu^*(\mathrm{d}z). \tag{3.42}
\]
Comparison of this expression with (3.3) shows that the process \(\{\eta_{s+t}\}_{0\le t\le T-s}\) has the law \(LRB_C([0,T-s],\{f_t\},\nu^*)\), and so the law of \(\{\eta_t\}_{s\le t\le T}\) is \(LRB_C([s,T],\{f_t\},\nu^*)\).

In the discrete case, we define \(\{\eta_t\}\) by setting
\[
\eta_t=M_{tT}-M_{sT} \qquad (s\le t\le T). \tag{3.43}
\]
Then, given \(M_{sT}\), \(\{\eta_t\}\) has the law \(LRB_D([s,T],\{Q_t\},P^*)\), where \(P^*\) is defined by
\[
P^*(a)=P_s(a+M_{sT}). \tag{3.44}
\]
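Dynamic consistency can be checked numerically in the Brownian case with a finite-support prior: updating the conditional terminal law in two steps (first to time \(s\), then restarting from \(\xi_{sT}\) with the shifted prior \(\nu^*\)) agrees with a single update to time \(t\). A sketch, with all parameter values illustrative assumptions:

```python
import numpy as np

def posterior(prior_logp, z, x, t, T):
    """Terminal-law posterior for a Brownian LRB observed at level x at
    time t, horizon T: weights exp(-0.5*((z - x)**2/(T - t) - z**2/T))."""
    logw = prior_logp - 0.5 * ((z - x) ** 2 / (T - t) - z**2 / T)
    logw -= logw.max()
    w = np.exp(logw)
    return w / w.sum()

z = np.array([0.0, 0.5, 1.0])           # support of the prior
logp = np.log([0.2, 0.3, 0.5])
T, s, t = 1.0, 0.4, 0.7
xs, xt = 0.3, 0.6                       # observed information-process levels

direct = posterior(logp, z, xt, t, T)

# two-step update: first to time s, then restart with the shifted prior nu*
nu_s = posterior(logp, z, xs, s, T)
w = z - xs                              # shifted support for eta = L_tT - L_sT
two_step = posterior(np.log(nu_s), w, xt - xs, t - s, T - s)

assert np.allclose(direct, two_step)    # the two updates coincide
```

The agreement is exact up to floating-point error: the intermediate weight factors cancel in the two-step computation, which is the content of (3.39)–(3.42).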

3.8 Increments of LRBs

The form of the transition law in Proposition 3.4.1 shows that in general the increments of an LRB are not independent. The special cases of LRBs with independent increments are discussed later. A result that holds for all LRBs is that they have stationary increments:

Proposition 3.8.1. For \(s,t,u\) satisfying \(0\le s\le u\) and \(u+t\le T\),
\[
\mathbb{Q}[L_{u+t,T}-L_{uT}\le z \mid L_{sT}]=\mathbb{Q}[L_{s+t,T}-L_{sT}\le z \mid L_{sT}], \tag{3.45}
\]
and
\[
\mathbb{Q}[M_{u+t,T}-M_{uT}\le z \mid M_{sT}]=\mathbb{Q}[M_{s+t,T}-M_{sT}\le z \mid M_{sT}]. \tag{3.46}
\]
Proof. We provide the proof for \(\{L_{tT}\}\); the proof for \(\{M_{tT}\}\) is similar. Throughout the proof we assume that \(t<T-u\); the case \(t=T-u\) follows from the stochastic continuity of \(\{L_{tT}\}\). First we consider the case \(s=0\). It follows from (3.14) that
\[
\mathbb{Q}[L_{u+t,T}\in\mathrm{d}y,\,L_{uT}\in\mathrm{d}x]=\psi_{u+t}(\mathbb{R};y)\,f_t(y-x)\,f_u(x)\,\mathrm{d}x\,\mathrm{d}y. \tag{3.47}
\]
Then we have

\[
\begin{aligned}
\mathbb{Q}[L_{u+t,T}-L_{uT}\in\mathrm{d}z,\,L_{uT}\in\mathrm{d}x]
&=\psi_{u+t}(\mathbb{R};z+x)\,f_t(z)\,f_u(x)\,\mathrm{d}x\,\mathrm{d}z\\
&=\int_{w=-\infty}^{\infty}\frac{f_{T-(u+t)}(w-z-x)}{f_T(w)}\,\nu(\mathrm{d}w)\;f_t(z)\,f_u(x)\,\mathrm{d}x\,\mathrm{d}z. \tag{3.48}
\end{aligned}
\]

Integrating over \(x\), and changing the order of integration, yields
\[
\begin{aligned}
\mathbb{Q}[L_{u+t,T}-L_{uT}\in\mathrm{d}z]
&=\int_{w=-\infty}^{\infty}\Bigl[\int_{x=-\infty}^{\infty}f_{T-(u+t)}(w-z-x)\,f_u(x)\,\mathrm{d}x\Bigr]\frac{\nu(\mathrm{d}w)}{f_T(w)}\;f_t(z)\,\mathrm{d}z\\
&=\int_{w=-\infty}^{\infty}\frac{f_{T-t}(w-z)}{f_T(w)}\,\nu(\mathrm{d}w)\;f_t(z)\,\mathrm{d}z\\
&=\psi_t(\mathbb{R};z)\,f_t(z)\,\mathrm{d}z\\
&=\mathbb{Q}[L_{tT}\in\mathrm{d}z]. \tag{3.49}
\end{aligned}
\]
For the case \(s>0\), we use the dynamic consistency property. For \(s\) fixed and \(L_{sT}\) given, the process \(\{\eta_{uT}\}_{s\le u\le T}=\{L_{uT}-L_{sT}\}_{s\le u\le T}\) is an LRB with the law

\(LRB_C([s,T],\{f_t\},\nu^*)\), where \(\nu^*(A)=\nu_s(A+L_{sT})\). We have
\[
\begin{aligned}
\mathbb{Q}[L_{u+t,T}-L_{uT}\in\mathrm{d}z \mid L_{sT}]
&=\mathbb{Q}[\eta_{u+t,T}-\eta_{uT}\in\mathrm{d}z \mid L_{sT}]\\
&=\mathbb{Q}[\eta_{tT}\in\mathrm{d}z \mid L_{sT}]\\
&=\int_{-\infty}^{\infty}\frac{f_{T-t}(w-z)}{f_{T-s}(w)}\,\nu^*(\mathrm{d}w)\;f_{t-s}(z)\,\mathrm{d}z\\
&=\int_{-\infty}^{\infty}\frac{f_{T-t}(w-z-L_{sT})}{f_{T-s}(w-L_{sT})}\,\nu_s(\mathrm{d}w)\;f_{t-s}(z)\,\mathrm{d}z\\
&=\frac{1}{\psi_s(\mathbb{R};L_{sT})}\int_{-\infty}^{\infty}\frac{f_{T-t}(w-z-L_{sT})}{f_T(w)}\,\nu(\mathrm{d}w)\;f_{t-s}(z)\,\mathrm{d}z\\
&=\frac{\psi_t(\mathbb{R};z+L_{sT})}{\psi_s(\mathbb{R};L_{sT})}\,f_{t-s}(z)\,\mathrm{d}z\\
&=\mathbb{Q}[L_{tT}-L_{sT}\in\mathrm{d}z \mid L_{sT}]. \tag{3.50}
\end{aligned}
\]
\(\square\)

When \(L_{tT}\) is integrable, the stationary-increments property offers enough structure to allow the calculation of the expected value of \(L_{tT}\):

Corollary 3.8.2. If \(\mathbb{E}[|L_{tT}|]<\infty\) for all \(t\in(0,T]\), then
\[
\mathbb{E}[L_{tT}\mid L_{sT}]=\frac{T-t}{T-s}\,L_{sT}+\frac{t-s}{T-s}\,\mathbb{E}[L_{TT}\mid L_{sT}] \qquad (s<t), \tag{3.51}
\]
and if \(\mathbb{E}[|M_{tT}|]<\infty\) for all \(t\in(0,T]\), then
\[
\mathbb{E}[M_{tT}\mid M_{sT}]=\frac{T-t}{T-s}\,M_{sT}+\frac{t-s}{T-s}\,\mathbb{E}[M_{TT}\mid M_{sT}] \qquad (s<t). \tag{3.52}
\]
Proof. We provide the proof only for \(\{L_{tT}\}\), since the proof for \(\{M_{tT}\}\) is similar. The case \(t=T\) is immediate, so we assume that \(t<T\). First we consider the case \(s=0\). Suppose that \(t=\frac{m}{n}T\), where \(m,n\in\mathbb{N}_+\) and \(m<n\). We wish to show that
\[
\mathbb{E}[L_{tT}]=\frac{m}{n}\,\mathbb{E}[L_{TT}]. \tag{3.53}
\]
Writing \(L(t,T)=L_{tT}\) for clarity, define the random variables \(\{\Delta_i\}\) by
\[
\Delta_i=L\bigl(\tfrac{i}{n}T,\,T\bigr)-L\bigl(\tfrac{i-1}{n}T,\,T\bigr). \tag{3.54}
\]
It follows from Proposition 3.8.1 that the \(\Delta_i\)'s are identically distributed, and by assumption they are integrable. Hence we have
\[
\mathbb{E}[\Delta_i]=\frac{1}{n}\,\mathbb{E}\Bigl[\sum_{i=1}^{n}\Delta_i\Bigr]=\frac{1}{n}\,\mathbb{E}[L_{TT}]. \tag{3.55}
\]

Then, as required, we have
\[
\mathbb{E}\bigl[L\bigl(\tfrac{m}{n}T,\,T\bigr)\bigr]=\mathbb{E}\Bigl[\sum_{i=1}^{m}\Delta_i\Bigr]=\frac{m}{n}\,\mathbb{E}[L_{TT}]. \tag{3.56}
\]
For general \(t\), choose an increasing sequence of positive rational numbers \(\{q_i\}\) such that \(\lim_{i\to\infty}q_i=\frac{t}{T}\). From (3.56) we have
\[
\begin{aligned}
\mathbb{E}[L(t,T)]&=\mathbb{E}[L(q_iT,T)]+\mathbb{E}[L(t,T)-L(q_iT,T)]\\
&=q_i\,\mathbb{E}[L(T,T)]+\mathbb{E}[L(t,T)-L(q_iT,T)]. \tag{3.57}
\end{aligned}
\]
By stochastic continuity, in the limit \(i\to\infty\) the law of \(L(t,T)-L(q_iT,T)\) tends to a Dirac measure centred at 0. Hence
\[
\begin{aligned}
\mathbb{E}[L(t,T)]&=\lim_{i\to\infty}\bigl(q_i\,\mathbb{E}[L(T,T)]+\mathbb{E}[L(t,T)-L(q_iT,T)]\bigr)\\
&=\frac{t}{T}\,\mathbb{E}[L(T,T)]. \tag{3.58}
\end{aligned}
\]
For the case where \(s>0\), we use the dynamic consistency property. For \(s\) fixed and \(L_{sT}\) given, the process
\[
\eta_{tT}=L_{tT}-L_{sT} \qquad (s\le t\le T) \tag{3.59}
\]
is an LRB with law \(LRB_C([s,T],\{f_t\},\nu^*)\), where \(\nu^*(A)=\nu_s(A+L_{sT})\). Then we have
\[
\begin{aligned}
\mathbb{E}[L_{tT}\mid L_{sT}]&=L_{sT}+\mathbb{E}[\eta_{tT}\mid L_{sT}]\\
&=L_{sT}+\frac{t-s}{T-s}\int_{-\infty}^{\infty}z\,\nu^*(\mathrm{d}z)\\
&=L_{sT}+\frac{t-s}{T-s}\int_{-\infty}^{\infty}(z-L_{sT})\,\nu_s(\mathrm{d}z)\\
&=\frac{T-t}{T-s}\,L_{sT}+\frac{t-s}{T-s}\,\mathbb{E}[L_{TT}\mid L_{sT}]. \tag{3.60}
\end{aligned}
\]
\(\square\)

We have shown that the increments of LRBs are stationary, and so it is natural to ask when the increments are also independent, i.e. when is an LRB a Lévy process? The answer lies in the functional form of \(\psi_t(\mathbb{R};y)\). For \(0\le s<t<T\), the transition density of \(\{L_{tT}\}\) is
\[
q(t,y;s,x)=\frac{\psi_t(\mathbb{R};y)}{\psi_s(\mathbb{R};x)}\,f_{t-s}(y-x). \tag{3.61}
\]
If \(\{L_{tT}\}\) has stationary, independent increments, then
\[
q(t,y;s,x)=q(t-s,\,y-x;\,0,0). \tag{3.62}
\]
Therefore the ratio
\[
\frac{\psi_t(\mathbb{R};y)}{\psi_s(\mathbb{R};x)} \tag{3.63}
\]
is a function of the differences \(t-s\) and \(y-x\). Thus, if we have
\[
\psi_t(\mathbb{R};y)=a\exp(by+ct) \tag{3.64}
\]
for constants \(a\), \(b\) and \(c\), then \(\{L_{tT}\}\) is a Lévy process. There are constraints on \(a\), \(b\) and \(c\), since (3.61) is a probability density. When \(b=c=0\) we have \(\nu(\mathrm{d}z)=f_T(z)\,\mathrm{d}z\), which is the case where \(\{L_{tT}\}\overset{\text{law}}{=}\{L_t\}\). Note that the subclass of LRBs that are Lévy processes is small.

Example. In the Brownian case we consider a process \(\{W_{tT}\}\) with law
\[
LRB_C([0,T],\{f_t\},f_T(z-\theta T)\,\mathrm{d}z),
\]
where \(f_t(x)\) is the normal density with zero mean and variance \(t\) given by (3.19). In other words, \(\{W_{tT}\}\) is a standard Brownian motion conditioned so that \(W_{TT}\) is a normal random variable with mean \(\theta T\) and variance \(T\). In this case, we have
\[
\begin{aligned}
\psi_t(\mathbb{R};y)&=\int_{-\infty}^{\infty}\frac{f_{T-t}(z-y)}{f_T(z)}\,f_T(z-\theta T)\,\mathrm{d}z\\
&=\exp\Bigl(\theta y-\frac{\theta^2}{2}t\Bigr). \tag{3.65}
\end{aligned}
\]
Simplifying the expression for the transition densities of the process \(\{W_{tT}\}\) allows one to verify that \(\{W_{tT}\}\) is a Brownian motion with drift \(\theta\). It is notable, by Girsanov's theorem, that the process \(\{\psi_t(\mathbb{R};W_t)\}\) is the Radon–Nikodým density process that transforms a standard Brownian motion into a Brownian motion with drift \(\theta\). Hence we can alternatively deduce that \(\{W_{tT}\}\) is a Brownian motion with drift \(\theta\) from the analysis in Section 3.6.
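The closed form (3.65) can be checked by direct quadrature of the defining integral. A sketch, with illustrative parameter values:

```python
import numpy as np

# Numerical check of (3.65): psi_t(R; y) = exp(theta*y - theta**2*t/2)
# for the Brownian LRB with terminal law f_T(z - theta*T) dz.
# The parameter values below are illustrative assumptions.
T, t, theta, y = 2.0, 0.7, 0.5, 0.3

def f(u, v):                        # Gaussian density, mean 0, variance v
    return np.exp(-u**2 / (2 * v)) / np.sqrt(2 * np.pi * v)

z = np.linspace(-15.0, 15.0, 400_001)            # quadrature grid
integrand = f(z - y, T - t) / f(z, T) * f(z - theta * T, T)
psi = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(z))  # trapezoid

print(psi, np.exp(theta * y - 0.5 * theta**2 * t))   # the two values agree
```

The trapezoid estimate matches the closed form to high accuracy; the integrand decays like a Gaussian, so the finite grid truncation is harmless.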

Example. In the gamma case, we consider a process \(\{\Gamma_{tT}\}\) with law
\[
LRB_C([0,T],\{f_t\},\kappa^{-1}f_T(z/\kappa)\,\mathrm{d}z),
\]
where \(f_t(x)=g^{(m)}_t(x)\) is the gamma density with mean \(t\) and variance \(t/m\) defined by (2.46), and \(\kappa>0\) is constant. Then \(\{\Gamma_{tT}\}\) is a gamma process with mean unity and variance \(m^{-1}\) at \(t=1\), conditioned so that \(\Gamma_{TT}\) has a gamma distribution with mean \(\kappa T\) and variance \(\kappa^2 T/m\). We have:
\[
\begin{aligned}
\psi_t(\mathbb{R};y)&=\int_{-\infty}^{\infty}\frac{f_{T-t}(z-y)}{f_T(z)}\,\frac{f_T(z/\kappa)}{\kappa}\,\mathrm{d}z\\
&=\kappa^{-mt}\exp\bigl(m(1-\kappa^{-1})y\bigr). \tag{3.66}
\end{aligned}
\]
The transition density of \(\{\Gamma_{tT}\}\) is then
\[
\mathbb{Q}[\Gamma_{tT}\in\mathrm{d}y \mid \Gamma_{sT}=x]
=\mathbf{1}_{\{y>x\}}\frac{(y-x)^{m(t-s)-1}\,\mathrm{e}^{-m(y-x)/\kappa}}{(\kappa/m)^{m(t-s)}\,\Gamma[m(t-s)]}\,\mathrm{d}y. \tag{3.67}
\]
Hence \(\{\Gamma_{tT}\}\) is a gamma process with mean \(\kappa\) and variance \(\kappa^2/m\) at \(t=1\).
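A consequence of (3.67) is that \(\mathbb{E}[\Gamma_{tT}]=\kappa t\). This can be checked by Monte Carlo using the beta decomposition of the gamma bridge: draw the terminal value, then multiply by an independent \(\mathrm{Beta}(mt,m(T-t))\) fraction. A sketch with illustrative parameters:

```python
import numpy as np

# Monte Carlo check that the gamma LRB with terminal law
# Gamma(mT, scale kappa/m) has mean kappa * t at time t: simulate by
# drawing the terminal value, then a gamma-bridge (beta) fraction of it.
rng = np.random.default_rng(0)
m, kappa, T, t, n = 2.0, 1.5, 1.0, 0.4, 500_000

G_T = rng.gamma(shape=m * T, scale=kappa / m, size=n)   # terminal value
frac = rng.beta(m * t, m * (T - t), size=n)             # bridge fraction
G_t = G_T * frac

print(G_t.mean())    # ≈ kappa * t = 0.6
```

The sample mean is close to \(\kappa t\), consistent with the identification of \(\{\Gamma_{tT}\}\) as a gamma process with mean \(\kappa\) per unit time.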

3.8.1 Increment distributions

Partition the time interval \([0,T]\) by \(0=t_0<t_1<t_2<\cdots<t_n=T\). Then define the increments \(\{\Delta_i\}_{i=1}^n\) and \(\{\alpha_i\}_{i=1}^n\) by
\[
\Delta_i=L_{t_i,T}-L_{t_{i-1},T}, \tag{3.68}
\]
\[
\alpha_i=t_i-t_{i-1}. \tag{3.69}
\]
Assume that \(\nu\) has no singular continuous part [83]. Denoting the Dirac delta function centred at \(z\) by \(\delta_z(x)\), \(x\in\mathbb{R}\), we can then write
\[
\nu(\mathrm{d}z)=\sum_{i=-\infty}^{\infty}v_i\,\delta_{z_i}(z)\,\mathrm{d}z+p(z)\,\mathrm{d}z, \tag{3.70}
\]
for some \(\{v_i\}\subset\mathbb{R}_+\), \(\{z_i\}\subset\mathbb{R}\), and \(p:\mathbb{R}\to\mathbb{R}_+\). Here \(p(z)\) is the density of the continuous part of \(\nu\), and \(v_i\) is a point mass of \(\nu\) located at \(z_i\). From (3.3), the joint law of the random vector \((\Delta_1,\ldots,\Delta_n)^{\mathsf{T}}\) is given by
\[
\mathbb{Q}[\Delta_1\in\mathrm{d}y_1,\ldots,\Delta_n\in\mathrm{d}y_n]
=f\Bigl(\sum_{i=1}^{n}y_i\Bigr)\prod_{i=1}^{n}f_{\alpha_i}(y_i)\,\mathrm{d}y_i, \tag{3.71}
\]
where
\[
f(z)=\frac{p(z)+\sum_{i=-\infty}^{\infty}v_i\,\delta_{z_i}(z)}{f_T(z)}. \tag{3.72}
\]

Equation (3.71) shows that \((\Delta_1,\ldots,\Delta_n)^{\mathsf{T}}\) has a generalized multivariate Liouville distribution as defined by Gupta & Richards [51]. The classical multivariate Liouville distribution is obtained when \(f_t(x)\) is the density of a gamma distribution (see [48, 49, 50, 36]). A survey of Liouville distributions can be found in Gupta & Richards [47]. Barndorff-Nielsen & Jørgensen [9] construct a generalized Liouville distribution by conditioning a vector of independent inverse-Gaussian random variables on their sum.

In the discrete case, the joint distribution of the increments also has a generalized Liouville distribution. Define the increments \(\{D_i\}\) by

\[
D_i=M_{t_i,T}-M_{t_{i-1},T}. \tag{3.73}
\]
Then we can write
\[
\mathbb{Q}[D_1\in\mathrm{d}y_1,\ldots,D_n\in\mathrm{d}y_n]
=Q\Bigl(\sum_{i=1}^{n}y_i\Bigr)\prod_{i=1}^{n}\mathrm{d}Q_{\alpha_i}(y_i), \tag{3.74}
\]
where
\[
Q(z)=\frac{\sum_{i=-\infty}^{\infty}P(a_i)\,\delta_{a_i}(z)}{Q_T(z)}. \tag{3.75}
\]

3.8.2 The reordering of increments

We are able to extend the Markov property of LRBs. If we partition the path of an LRB into increments, then the Markov property means that future increments depend on past increments only through the sum of the past increments. We shall show that, for LRBs, the ordering of the increments does not matter for this to hold: given the values of any subset of the increments of an LRB (past or future), the remaining increments depend on this subset only through the sum of its elements. Let \(\pi\) be a permutation of \(\{1,2,\ldots,n\}\). We define the partial sums \(\{S^{\pi}_m\}\) by
\[
S^{\pi}_m=\sum_{i=1}^{m}\Delta_{\pi(i)} \qquad \text{for } m=1,2,\ldots,n, \tag{3.76}
\]
where the \(\{\Delta_i\}\) are defined as in (3.68), and we define the partition \(0=t^{\pi}_0<t^{\pi}_1<\cdots<t^{\pi}_n=T\) by
\[
t^{\pi}_j=\sum_{i=1}^{j}\alpha_{\pi(i)} \qquad \text{for } j=1,2,\ldots,n-1. \tag{3.77}
\]

Proposition 3.8.3. We may extend the Markov property of \(\{L_{tT}\}\) to the following:

\[
\mathbb{Q}\bigl[\Delta_{\pi(m+1)}\le y_{m+1},\ldots,\Delta_{\pi(n)}\le y_n \,\big|\, \Delta_{\pi(1)},\ldots,\Delta_{\pi(m)}\bigr]
=\mathbb{Q}\bigl[\Delta_{\pi(m+1)}\le y_{m+1},\ldots,\Delta_{\pi(n)}\le y_n \,\big|\, S^{\pi}_m\bigr]. \tag{3.78}
\]
If \(\nu\) has no singular continuous part, then
\[
\mathbb{Q}\bigl[\Delta_{\pi(m+1)}\in\mathrm{d}y_{m+1},\ldots,\Delta_{\pi(n)}\in\mathrm{d}y_n \,\big|\, S^{\pi}_m\bigr]
=\frac{f\bigl(S^{\pi}_m+\sum_{i=m+1}^{n}y_i\bigr)}{\psi_{t^{\pi}_m}(\mathbb{R};S^{\pi}_m)}\prod_{i=m+1}^{n}f_{\alpha_{\pi(i)}}(y_i)\,\mathrm{d}y_i. \tag{3.79}
\]
Proof. Define the increments \(\{\Delta^{\pi}_i\}\) by

\[
\Delta^{\pi}_i=L_{t^{\pi}_i,T}-L_{t^{\pi}_{i-1},T}. \tag{3.80}
\]

The law of the random vector \(\bigl(\Delta^{\pi}_1,\ldots,\Delta^{\pi}_{n-1},\sum_{i=1}^{n}\Delta^{\pi}_i\bigr)^{\mathsf{T}}\) is given by
\[
\mathbb{Q}\Bigl[\Delta^{\pi}_1\in\mathrm{d}y_1,\ldots,\Delta^{\pi}_{n-1}\in\mathrm{d}y_{n-1},\,\sum_{i=1}^{n}\Delta^{\pi}_i\in\mathrm{d}z\Bigr]
=\frac{\nu(\mathrm{d}z)}{f_T(z)}\,f_{\alpha_{\pi(n)}}\Bigl(z-\sum_{i=1}^{n-1}y_i\Bigr)\prod_{i=1}^{n-1}f_{\alpha_{\pi(i)}}(y_i)\,\mathrm{d}y_i. \tag{3.81}
\]
This is also the law of \(\bigl(\Delta_{\pi(1)},\ldots,\Delta_{\pi(n-1)},\sum_{i=1}^{n}\Delta_{\pi(i)}\bigr)^{\mathsf{T}}\); hence
\[
\bigl(\Delta_{\pi(1)},\ldots,\Delta_{\pi(n)}\bigr)\overset{\text{law}}{=}\bigl(\Delta^{\pi}_1,\ldots,\Delta^{\pi}_n\bigr). \tag{3.82}
\]

The Markov property of LRBs gives

\[
\mathbb{Q}\bigl[\Delta^{\pi}_{m+1}\le y_{m+1},\ldots,\Delta^{\pi}_n\le y_n \,\big|\, \Delta^{\pi}_1,\ldots,\Delta^{\pi}_m\bigr]
=\mathbb{Q}\Bigl[\Delta^{\pi}_{m+1}\le y_{m+1},\ldots,\Delta^{\pi}_n\le y_n \,\Big|\, \sum_{i=1}^{m}\Delta^{\pi}_i\Bigr], \tag{3.83}
\]
and so we have
\[
\mathbb{Q}\bigl[\Delta_{\pi(m+1)}\le y_{m+1},\ldots,\Delta_{\pi(n)}\le y_n \,\big|\, \Delta_{\pi(1)},\ldots,\Delta_{\pi(m)}\bigr]
=\mathbb{Q}\bigl[\Delta_{\pi(m+1)}\le y_{m+1},\ldots,\Delta_{\pi(n)}\le y_n \,\big|\, S^{\pi}_m\bigr]. \tag{3.84}
\]
This proves the first part of the proposition.

For the second part of the proof we assume that \(\nu\) takes the form (3.70). Note that \(L_{t^{\pi}_m,T}=\sum_{i=1}^{m}\Delta^{\pi}_i\), and that the density of \(L_{t^{\pi}_m,T}\) is
\[
x\mapsto f_{t^{\pi}_m}(x)\,\psi_{t^{\pi}_m}(\mathbb{R};x)=\int_{z=-\infty}^{\infty}\frac{f_{t^{\pi}_m}(x)\,f_{T-t^{\pi}_m}(z-x)}{f_T(z)}\,\nu(\mathrm{d}z). \tag{3.85}
\]
The elements of the vector \(\bigl(L_{t^{\pi}_m,T},\Delta^{\pi}_{m+1},\ldots,\Delta^{\pi}_n\bigr)^{\mathsf{T}}\) are non-overlapping increments of \(\{L_{tT}\}\), and the law of the vector is given by
\[
\mathbb{Q}\bigl[L_{t^{\pi}_m,T}\in\mathrm{d}x,\,\Delta^{\pi}_{m+1}\in\mathrm{d}y_{m+1},\ldots,\Delta^{\pi}_n\in\mathrm{d}y_n\bigr]
=f\Bigl(x+\sum_{i=m+1}^{n}y_i\Bigr)\,f_{t^{\pi}_m}(x)\,\mathrm{d}x\prod_{i=m+1}^{n}f_{\alpha_{\pi(i)}}(y_i)\,\mathrm{d}y_i. \tag{3.86}
\]
Thus we have
\[
\begin{aligned}
\mathbb{Q}\bigl[\Delta^{\pi}_{m+1}\in\mathrm{d}y_{m+1},\ldots,\Delta^{\pi}_n\in\mathrm{d}y_n \,\big|\, L_{t^{\pi}_m,T}=x\bigr]
&=\frac{\mathbb{Q}\bigl[\Delta^{\pi}_{m+1}\in\mathrm{d}y_{m+1},\ldots,\Delta^{\pi}_n\in\mathrm{d}y_n,\,L_{t^{\pi}_m,T}\in\mathrm{d}x\bigr]}{\mathbb{Q}\bigl[L_{t^{\pi}_m,T}\in\mathrm{d}x\bigr]}\\
&=\frac{f\bigl(x+\sum_{i=m+1}^{n}y_i\bigr)\prod_{i=m+1}^{n}f_{\alpha_{\pi(i)}}(y_i)\,\mathrm{d}y_i}{\psi_{t^{\pi}_m}(\mathbb{R};x)}. \tag{3.87}
\end{aligned}
\]
\(\square\)

We note that Gupta & Richards [51] prove that if \((\Delta_1,\Delta_2,\ldots,\Delta_n)^{\mathsf{T}}\) has a generalized Liouville distribution, then equation (3.78) holds.

We can use Proposition 3.8.3 to extend the dynamic consistency property. In particular, we have the following:

Corollary 3.8.4. a. Fix times \(s_1\), \(T_1\) satisfying \(0<T_1\le T-s_1\). The time-shifted, space-shifted partial process
\[
\eta^{(1)}_{t,T_1}=L_{s_1+t,T}-L_{s_1,T} \qquad (0\le t\le T_1) \tag{3.88}
\]
is an LRB with the law \(LRB_C([0,T_1],\{f_t\},\nu^{(1)})\), where \(\nu^{(1)}\) is a probability law on \(\mathbb{R}\) with density \(f_{T_1}(x)\,\psi_{T_1}(\mathbb{R};x)\).

b. Construct partial processes \(\{\eta^{(i)}_{t,T_i}\}\), \(i=1,\ldots,n\), from non-overlapping portions of \(\{L_{tT}\}\), in a similar way to that above. The intervals \([s_i,s_i+T_i]\), \(i=1,\ldots,n\), are non-overlapping except possibly at their endpoints. Set \(\eta^{(i)}_{t,T_i}=\eta^{(i)}_{T_i,T_i}\) when \(t>T_i\). If \(u>t\), then
\[
\mathbb{Q}\bigl[\eta^{(1)}_{u,T_1}-\eta^{(1)}_{t,T_1}\le x_1,\ldots,\eta^{(n)}_{u,T_n}-\eta^{(n)}_{t,T_n}\le x_n\,\big|\,\mathcal{F}^{\eta}_t\bigr]
=\mathbb{Q}\bigl[\eta^{(1)}_{u,T_1}-\eta^{(1)}_{t,T_1}\le x_1,\ldots,\eta^{(n)}_{u,T_n}-\eta^{(n)}_{t,T_n}\le x_n\,\big|\,\{\eta^{(i)}_{t,T_i}\}_{i=1}^{n}\bigr], \tag{3.89}
\]
where
\[
\mathcal{F}^{\eta}_t=\sigma\bigl(\{\eta^{(i)}_{s,T_i}\}_{0\le s\le t},\,i=1,2,\ldots,n\bigr). \tag{3.90}
\]

Remark 3.8.5. The partial processes of Corollary 3.8.4 are dependent, and we have

\[
\mathbb{Q}\bigl[\eta^{(i)}_{t,T_i}\in\mathrm{d}x\,\big|\,\mathcal{F}^{\eta}_s\bigr]
=\mathbb{Q}\bigl[\eta^{(i)}_{t,T_i}\in\mathrm{d}x\,\big|\,\{\eta^{(j)}_{s,T_j}\}_{j=1}^{n}\bigr], \tag{3.91}
\]
for \(0\le s<t\le T\). We state but do not prove a discrete analogue of Proposition 3.8.3, which is as follows:

Proposition 3.8.6. We may extend the Markov property of \(\{M_{tT}\}\) to the following:
\[
\mathbb{Q}\bigl[D_{\pi(m+1)}\le y_{m+1},\ldots,D_{\pi(n)}\le y_n \,\big|\, D_{\pi(1)},\ldots,D_{\pi(m)}\bigr]
=\mathbb{Q}\bigl[D_{\pi(m+1)}\le y_{m+1},\ldots,D_{\pi(n)}\le y_n \,\big|\, R^{\pi}_m\bigr], \tag{3.92}
\]
where \(R^{\pi}_m=\sum_{i=1}^{m}D_{\pi(i)}\). Furthermore,
\[
\mathbb{Q}\bigl[D_{\pi(m+1)}=y_{m+1},\ldots,D_{\pi(n)}=y_n \,\big|\, R^{\pi}_m\bigr]
=\frac{Q\bigl(R^{\pi}_m+\sum_{i=m+1}^{n}y_i\bigr)}{\sum_{k=-\infty}^{\infty}\phi_{t^{\pi}_m}(a_k;R^{\pi}_m)}\prod_{i=m+1}^{n}Q_{\alpha_{\pi(i)}}(y_i). \tag{3.93}
\]
Corollary 3.8.4 can be extended to include LRBs with discrete state-spaces.

Chapter 4

Information-based asset pricing

We apply LRBs to the modelling of information flow within the information-based framework of Brody, Hughston & Macrina (BHM). The approach was applied to credit risk in Brody et al. [17], and this was extended to include stochastic interest rates in Rutkowski & Yu [81]. A general asset pricing framework was proposed in Brody et al. [19] (see also Macrina [64]), and there have also been applications to inflation modelling (Hughston & Macrina [55]), insider trading (Brody et al. [18]), insurance (Brody et al. [20]), and interest rate theory (Hughston & Macrina [55]).

We model a financial market as a collection of cash flows occurring on fixed dates. We assume that each cash flow can be expressed as a function of independent market factors, which we call X-factors. Under the assumptions that interest rates are deterministic and that the pricing kernel is given, the no-arbitrage price of a cash flow is its discounted expected value under the risk-neutral measure. Each X-factor is revealed to the market through a factor information process. We model a factor information process as an LRB whose terminal value is the value of an X-factor. The market information process is then the collection of all the factor information processes, and this process generates the market filtration. Hence, we are modelling the information flow in the market by our choice of X-factors and factor information processes. The dynamics of cash flow prices are derived by taking conditional expectations with respect to the market filtration. Stylised path properties of price processes can be incorporated into the model by an appropriate choice of factor information processes. For example, if a cash flow depends on an X-factor whose information process is a pure-jump LRB, then the cash flow's


price process will be a pure-jump process. Dependence between two cash flows can be modelled by expressing them as functions of a common set of X-factors. To simplify matters, we examine the case of a cash flow that pays an amount equal to the value of a single X-factor. We derive the price process for the cash flow, and we derive an expression for the price of a call option on this price. Although this model is simple in structure, the results are quite general, because the class of information (LRB) processes is large. In the special case where the X-factor can take only two values, we recover a generalisation of the binary bond model of Brody et al. [17]. All the results of this chapter are for a general LRB information process; we consider specific examples in the subsequent chapters.

4.1 BHM framework

We fix a finite time horizon \([0,T]\) and a probability space \((\Omega,\mathcal{F},\mathbb{Q})\). We assume that the risk-free rate of interest \(\{r_t\}\) is deterministic, and that \(r_t>0\) and \(\int_t^{\infty}r_u\,\mathrm{d}u=\infty\) for all \(t>0\). Then the time-\(s\) (no-arbitrage) price of a risk-free, zero-coupon bond maturing at time \(t\) (and paying a nominal amount of unity) is
\[
P_{st}=\exp\Bigl(-\int_s^t r_u\,\mathrm{d}u\Bigr) \qquad (s\le t). \tag{4.1}
\]

For t < T , the time-t price of an integrable contingent cash flow HT , due at time T , is given by an expression of the form

\[
H_{tT}=P_{tT}\,\mathbb{E}[H_T\mid\mathcal{F}_t], \tag{4.2}
\]
where \(\{\mathcal{F}_t\}\) is the market filtration. The sigma-algebra \(\mathcal{F}_t\) represents all the information available to market participants at time \(t\). In order for equation (4.2) to be consistent with the theory of no-arbitrage pricing, we must interpret \(\mathbb{Q}\) as the risk-neutral measure. In such a set-up, the dynamics of the price process \(\{H_{tT}\}\) are implicitly determined by the evolution of the market filtration \(\{\mathcal{F}_t\}\). We assume the existence of a (possibly multi-dimensional) information process \(\{\xi_{tT}\}_{0\le t\le T}\) such that

\[
\mathcal{F}_t=\sigma\bigl(\{\xi_{sT}\}_{0\le s\le t}\bigr). \tag{4.3}
\]

So \(\{\xi_{tT}\}\) is responsible for the delivery of all information to the market participants. The task of modelling the emergence of information in the market is thus reduced to that of specifying the law of the information process \(\{\xi_{tT}\}\).

4.1.1 Single X-factor market

We assume that the cash flow HT can be written in the form

\[
H_T=h(X_T), \tag{4.4}
\]

for some function \(h(x)\) and some market factor \(X_T\). We call \(X_T\) an X-factor. We assume that \(\{\xi_{tT}\}\) is a one-dimensional process such that \(\xi_{TT}=X_T\). Then we have
\[
H_{tT}=P_{tT}\,\mathbb{E}[h(X_T)\mid\mathcal{F}_t]=P_{tT}\,\mathbb{E}[h(\xi_{TT})\mid\mathcal{F}_t], \tag{4.5}
\]
which ensures that \(H_{TT}=H_T\). In the case where \(\{\xi_{tT}\}\) is a Markov process, we have
\[
H_{tT}=P_{tT}\,\mathbb{E}[h(\xi_{TT})\mid\xi_{tT}]. \tag{4.6}
\]

4.1.2 Multiple X-factor market

In the more general framework, we model a financial asset that generates the \(N\) cash flows \(H_{T_1},H_{T_2},\ldots,H_{T_N}\), which are to be received on the dates \(T_1\le T_2\le\cdots\le T_N\), respectively. At time \(T_k\), we assume that the vector of X-factors \(X_{T_k}\in\mathbb{R}^{n_k}\) (\(n_k\in\mathbb{N}_+\)) is revealed to the market, and we write
\[
X_{T_k}=\bigl(X^{(1)}_{T_k},X^{(2)}_{T_k},\ldots,X^{(n_k)}_{T_k}\bigr)^{\mathsf{T}}. \tag{4.7}
\]
We assume the X-factors are mutually independent, and that
\[
H_{T_k}=h_k(X_{T_1},X_{T_2},\ldots,X_{T_k}), \tag{4.8}
\]
for some \(h_k:\mathbb{R}^{n_1}\times\mathbb{R}^{n_2}\times\cdots\times\mathbb{R}^{n_k}\to\mathbb{R}\), which we call a cash-flow function. For each X-factor \(X^{(i)}_{T_j}\), there is a factor information process \(\{\xi^{(i,j)}_t\}\) such that \(\xi^{(i,j)}_t=X^{(i)}_{T_j}\) for \(t\ge T_j\), and the factor information processes are mutually independent. Setting \(T=T_N\), we define the market information process \(\{\xi_{tT}\}\) to be an \(\mathbb{R}^{n_1+n_2+\cdots+n_N}\)-valued process, each of whose elements is a factor information process. The market filtration \(\{\mathcal{F}_t\}\) is

generated by \(\{\xi_{tT}\}\). By construction, \(H_{T_k}\) is \(\mathcal{F}_t\)-measurable for \(t\ge T_k\). The time-\(t\)

price of the cash flow \(H_{T_k}\) is

\[
H^{(k)}_t=
\begin{cases}
P_{t,T_k}\,\mathbb{E}\bigl[h_k(X_{T_1},X_{T_2},\ldots,X_{T_k})\,\big|\,\mathcal{F}_t\bigr] & \text{for } t<T_k,\\[4pt]
0 & \text{for } t\ge T_k.
\end{cases} \tag{4.9}
\]
Here we adopt the convention that cash flows have nil value at the time that they are due. In other words, prices are quoted on an ex-dividend basis. In this way the price process \(\{H^{(k)}_t\}\) is right-continuous at \(t=T_k\). The asset price process is then
\[
H_{tT}=\sum_{k=1}^{N}H^{(k)}_t \qquad (0\le t\le T). \tag{4.10}
\]

4.2 Lévy bridge information

We consider a market with a single factor, which we denote \(X_T\). This X-factor is the size of a contingent cash flow to be received at time \(T>0\), so we take \(h(x)=x\). For example, \(X_T\) could be the redemption amount of a credit-risky bond. \(X_T\) is assumed to be integrable and to have the a priori probability law \(\nu\) (we also exclude the case where \(X_T\) is constant). Information is supplied to the market by an information process \(\{\xi_{tT}\}\). The law of \(\{\xi_{tT}\}\) is \(LRB_C([0,T],\{f_t\},\nu)\), and we set \(\xi_{TT}=X_T\). We assume throughout this chapter that the information process has a continuous state-space; the results can be extended to include LRB information processes with discrete state-spaces. Indeed, in Chapter 9 we give a detailed example of a discrete LRB, and apply it to the pricing of credit derivatives.

Since the information process has the Markov property, the price of the cash flow

\(X_T\) is given by
\[
X_{tT}=P_{tT}\,\mathbb{E}[X_T\mid\xi_{tT}] \qquad (0\le t\le T). \tag{4.11}
\]
We note that \(X_T\) is \(\mathcal{F}_T\)-measurable and \(X_{TT}=X_T\), but \(X_T\) is not \(\mathcal{F}_t\)-measurable for \(t<T\), since we have excluded the case where \(X_T\) is constant. For \(t\in(0,T)\), the \(\mathcal{F}_t\)-conditional law of \(X_T\), as given by equation (3.26), is
\[
\nu_t(\mathrm{d}z)=\frac{\psi_t(\mathrm{d}z;\xi_{tT})}{\psi_t(\mathbb{R};\xi_{tT})}, \tag{4.12}
\]
where
\[
\psi_t(\mathrm{d}z;\xi)=\frac{f_{T-t}(z-\xi)}{f_T(z)}\,\nu(\mathrm{d}z). \tag{4.13}
\]

Then we have
\[
X_{tT}=P_{tT}\int_{-\infty}^{\infty}z\,\nu_t(\mathrm{d}z). \tag{4.14}
\]
When \(\nu\) admits a density \(p(z)\), the \(\mathcal{F}_t\)-conditional density of \(X_T\) exists and is given by
\[
p_t(z)=\frac{f_{T-t}(z-\xi_{tT})\,p(z)}{\psi_t(\mathbb{R};\xi_{tT})\,f_T(z)}. \tag{4.15}
\]

Example. In the Brownian case the price is

\[
X_{tT}=P_{tT}\,
\frac{\int_{-\infty}^{\infty}z\,\exp\bigl(\tfrac{1}{T-t}\bigl[\xi_{tT}z-\tfrac12\tfrac{t}{T}z^2\bigr]\bigr)\,\nu(\mathrm{d}z)}
{\int_{-\infty}^{\infty}\exp\bigl(\tfrac{1}{T-t}\bigl[\xi_{tT}z-\tfrac12\tfrac{t}{T}z^2\bigr]\bigr)\,\nu(\mathrm{d}z)}. \tag{4.16}
\]
The following SDE can be derived for \(\{X_{tT}\}\) (see [17, 19, 64, 81]):
\[
\mathrm{d}X_{tT}=r_t X_{tT}\,\mathrm{d}t+\frac{P_{tT}\,\mathrm{Var}[X_T\mid\xi_{tT}]}{T-t}\,\mathrm{d}W_t, \tag{4.17}
\]
where \(\{W_t\}\) is an \(\{\mathcal{F}_t\}\)-Brownian motion.

Example. In the gamma case we have

\[
X_{tT}=P_{tT}\,
\frac{\int_{\xi_{tT}}^{\infty}(z-\xi_{tT})^{m(T-t)-1}z^{2-mT}\,\nu(\mathrm{d}z)}
{\int_{\xi_{tT}}^{\infty}(z-\xi_{tT})^{m(T-t)-1}z^{1-mT}\,\nu(\mathrm{d}z)}. \tag{4.18}
\]

4.3 European option pricing

We consider the problem of pricing a European call option on the price \(X_{tT}\) at time \(t\). For a strike price \(K\) and \(0\le s<t<T\), the time-\(s\) price of the option is
\[
C_{st}=P_{st}\,\mathbb{E}_{\mathbb{Q}}\bigl[(X_{tT}-K)^+\,\big|\,\xi_{sT}\bigr]. \tag{4.19}
\]

We have
\[
\begin{aligned}
\mathbb{E}_{\mathbb{Q}}\bigl[(X_{tT}-K)^+\,\big|\,\xi_{sT}\bigr]
&=\mathbb{E}_{\mathbb{Q}}\bigl[(P_{tT}\,\mathbb{E}[X_T\mid\xi_{tT}]-K)^+\,\big|\,\xi_{sT}\bigr]\\
&=\mathbb{E}_{\mathbb{Q}}\Bigl[\Bigl(\int_{-\infty}^{\infty}(P_{tT}z-K)\,\nu_t(\mathrm{d}z)\Bigr)^{\!+}\,\Big|\,\xi_{sT}\Bigr]\\
&=\mathbb{E}_{\mathbb{Q}}\Bigl[\frac{1}{\psi_t(\mathbb{R};\xi_{tT})}\Bigl(\int_{-\infty}^{\infty}(P_{tT}z-K)\,\psi_t(\mathrm{d}z;\xi_{tT})\Bigr)^{\!+}\,\Big|\,\xi_{sT}\Bigr]. \tag{4.20}
\end{aligned}
\]

Recall that the Radon–Nikodým density process
\[
\left.\frac{\mathrm{d}\mathbb{L}}{\mathrm{d}\mathbb{Q}}\right|_{\mathcal{F}_t}=\psi_t(\mathbb{R};\xi_{tT})^{-1} \tag{4.21}
\]
defines a measure \(\mathbb{L}\) under which \(\{\xi_{tT}\}_{0\le t<T}\) is a Lévy process. Changing measure, the expectation (4.20) becomes

\[
\frac{1}{\psi_s(\mathbb{R};\xi_{sT})}\,
\mathbb{E}_{\mathbb{L}}\Bigl[\Bigl(\int_{-\infty}^{\infty}(P_{tT}z-K)\,\psi_t(\mathrm{d}z;\xi_{tT})\Bigr)^{\!+}\,\Big|\,\xi_{sT}\Bigr]
=\frac{1}{\psi_s(\mathbb{R};\xi_{sT})}\,
\mathbb{E}_{\mathbb{L}}\Bigl[\Bigl(\int_{-\infty}^{\infty}(P_{tT}z-K)\,\frac{f_{T-t}(z-\xi_{tT})}{f_T(z)}\,\nu(\mathrm{d}z)\Bigr)^{\!+}\,\Big|\,\xi_{sT}\Bigr]. \tag{4.22}
\]
Equation (3.9) states that \(0<f_{T-s}(z-\xi_{sT})<\infty\), and we have

\[
\nu_s(\mathrm{d}z)=\psi_s(\mathbb{R};\xi_{sT})^{-1}\,\frac{f_{T-s}(z-\xi_{sT})}{f_T(z)}\,\nu(\mathrm{d}z). \tag{4.23}
\]

Thus we can write the expectation in terms of the ξsT -conditional terminal law νs in the form

\[
\begin{aligned}
&\mathbb{E}_{\mathbb{L}}\Bigl[\Bigl(\int_{-\infty}^{\infty}(P_{tT}z-K)\,\frac{f_{T-t}(z-\xi_{tT})}{f_{T-s}(z-\xi_{sT})}\,\nu_s(\mathrm{d}z)\Bigr)^{\!+}\,\Big|\,\xi_{sT}\Bigr]\\
&\qquad=\int_{-\infty}^{\infty}\Bigl(\int_{-\infty}^{\infty}(P_{tT}z-K)\,\frac{f_{T-t}(z-x)}{f_{T-s}(z-\xi_{sT})}\,\nu_s(\mathrm{d}z)\Bigr)^{\!+}f_{t-s}(x-\xi_{sT})\,\mathrm{d}x. \tag{4.24}
\end{aligned}
\]

We defined the (marginal) Lévy bridge density \(f_{tT}(x;z)\) by
\[
f_{tT}(x;z)=\frac{f_{T-t}(z-x)\,f_t(x)}{f_T(z)}. \tag{4.25}
\]

From this we can define the ξsT -dependent law µst(dx; z) by

\[
\mu_{st}(\mathrm{d}x;z)=f_{t-s,T-s}(x-\xi_{sT};\,z-\xi_{sT})\,\mathrm{d}x. \tag{4.26}
\]

Thus \(\mu_{st}(\mathrm{d}x;z)\) is the time-\(t\) marginal law of a Lévy bridge starting at the value \(\xi_{sT}\) at time \(s\), and terminating at the value \(z\) at time \(T\). Defining the set \(B_t\) by
\[
\begin{aligned}
B_t&=\Bigl\{x\in\mathbb{R} : \int_{-\infty}^{\infty}(P_{tT}z-K)\,\frac{f_{T-t}(z-x)}{f_{T-s}(z-\xi_{sT})}\,\nu_s(\mathrm{d}z)>0\Bigr\}\\
&=\Bigl\{x\in\mathbb{R} : \int_{-\infty}^{\infty}(P_{tT}z-K)\,\frac{f_{T-t}(z-x)}{f_T(z)}\,\nu(\mathrm{d}z)>0\Bigr\}, \tag{4.27}
\end{aligned}
\]

where the second equality follows from (4.23), the expectation reduces to

\[
\int_{-\infty}^{\infty}(P_{tT}z-K)\,\mu_{st}(B_t;z)\,\nu_s(\mathrm{d}z). \tag{4.28}
\]
Then the option price is given by
\[
C_{st}=P_{st}\int_{-\infty}^{\infty}(P_{tT}z-K)\,\mu_{st}(B_t;z)\,\nu_s(\mathrm{d}z). \tag{4.29}
\]

We can write \(X_{tT}=\Lambda(t,\xi_{tT})\), for \(\Lambda\) a deterministic function. The set \(B_t\) can then be written
\[
B_t=\{\xi\in\mathbb{R} : \Lambda(t,\xi)>K\}. \tag{4.30}
\]
We see that if \(\Lambda\) is increasing in its second argument, then \(B_t=(\xi^*_t,\infty)\) for some critical value \(\xi^*_t\) of the information process. \(\Lambda\) is monotonic if \(\{\xi_{tT}\}\) is a Lévy process.

Example. In the Brownian case we have

\[
\Lambda(t,x)=P_{tT}\,
\frac{\int_{-\infty}^{\infty}z\,\exp\bigl(\tfrac{1}{T-t}\bigl[xz-\tfrac12\tfrac{t}{T}z^2\bigr]\bigr)\,\nu(\mathrm{d}z)}
{\int_{-\infty}^{\infty}\exp\bigl(\tfrac{1}{T-t}\bigl[xz-\tfrac12\tfrac{t}{T}z^2\bigr]\bigr)\,\nu(\mathrm{d}z)}. \tag{4.31}
\]
It can be shown that the function \(\Lambda\) is increasing in its second argument (see [19, 81]); hence \(B_t=(\xi^*_t,\infty)\) for the unique \(\xi^*_t\) satisfying \(\Lambda(t,\xi^*_t)=K\). A short calculation verifies that \(\mu_{st}(\mathrm{d}x;z)\) is the normal law with mean \(M(z)\) and variance \(V\) given by
\[
M(z)=\frac{T-t}{T-s}\,\xi_{sT}+\frac{t-s}{T-s}\,z, \qquad V=\frac{t-s}{T-s}(T-t). \tag{4.32}
\]
This is the time-\(t\) marginal law of a Brownian bridge starting from the value \(\xi_{sT}\) at time \(s\), and finishing at the value \(z\) at time \(T\). We have
\[
\mu_{st}(B_t;z)=1-\Phi\Bigl[\frac{\xi^*_t-M(z)}{\sqrt{V}}\Bigr]=\Phi\Bigl[\frac{M(z)-\xi^*_t}{\sqrt{V}}\Bigr], \tag{4.33}
\]
where \(\Phi[x]\) is the standard normal distribution function. The option price is then

\[
C_{st}=P_{sT}\int_{-\infty}^{\infty}z\,\Phi\Bigl[\frac{M(z)-\xi^*_t}{\sqrt{V}}\Bigr]\,\nu_s(\mathrm{d}z)
-P_{st}K\int_{-\infty}^{\infty}\Phi\Bigl[\frac{M(z)-\xi^*_t}{\sqrt{V}}\Bigr]\,\nu_s(\mathrm{d}z). \tag{4.34}
\]
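The pricing formula can be checked against direct simulation. The sketch below specialises (4.34) to \(s=0\), zero interest rates, and a two-point prior (all parameter values are illustrative assumptions), finds \(\xi^*_t\) by bisection on the increasing function \(\Lambda(t,\cdot)\), and compares the result with a Monte Carlo estimate of \(\mathbb{E}[(X_{tT}-K)^+]\):

```python
import numpy as np
from math import erf, sqrt

def Phi(x):                        # standard normal distribution function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# illustrative parameters: s = 0, zero rates, two-point prior
T, t, K = 1.0, 0.6, 0.45
z_vals, z_prob = np.array([0.0, 1.0]), np.array([0.4, 0.6])

def Lam(x):
    """Asset price X_tT = Lambda(t, x) for the discrete prior, cf. (4.31)."""
    w = z_prob * np.exp((x * z_vals - 0.5 * (t / T) * z_vals**2) / (T - t))
    return (z_vals * w).sum() / w.sum()

lo, hi = -50.0, 50.0               # Lambda is increasing: bisect for xi*
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if Lam(mid) < K else (lo, mid)
xi_star = 0.5 * (lo + hi)

# option price via (4.34): M(z) = (t/T) z and V = t (T - t) / T when s = 0
V = t * (T - t) / T
phi = np.array([Phi(((t / T) * z - xi_star) / sqrt(V)) for z in z_vals])
C = ((z_vals - K) * phi * z_prob).sum()

# Monte Carlo cross-check: Z ~ nu, then xi_t | Z ~ N((t/T) Z, V)
rng = np.random.default_rng(7)
Z = rng.choice(z_vals, size=200_000, p=z_prob)
xi = (t / T) * Z + rng.normal(scale=sqrt(V), size=Z.size)
E = z_prob * np.exp((xi[:, None] * z_vals - 0.5 * (t / T) * z_vals**2) / (T - t))
X_t = (E * z_vals).sum(axis=1) / E.sum(axis=1)
C_mc = np.maximum(X_t - K, 0.0).mean()

print(C, C_mc)                     # the two agree to Monte Carlo error
```

The agreement follows from the tower property: on \(\{\xi_{tT}>\xi^*_t\}\) the payoff equals \((P_{tT}\mathbb{E}[X_T\mid\xi_{tT}]-K)\), whose expectation can be evaluated factor value by factor value.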

Example. In the gamma case we have

\[
\Lambda(t,x)=P_{tT}\,
\frac{\int_{x}^{\infty}(z-x)^{m(T-t)-1}z^{2-mT}\,\nu(\mathrm{d}z)}
{\int_{x}^{\infty}(z-x)^{m(T-t)-1}z^{1-mT}\,\nu(\mathrm{d}z)}. \tag{4.35}
\]

The monotonicity of \(\Lambda(t,x)\) in \(x\) is proved for \(m(T-t)>1\) by Brody et al. [20]. The authors also give a numerical example where \(\Lambda(t,x)\) is not monotonic in \(x\) for \(m(T-t)<1\). For all \(t\in(0,T)\), we have
\[
\mu_{st}(\mathrm{d}x;z)=\mathbf{1}_{\{\xi_{sT}<x<z\}}\,k(z)^{-1}
\Bigl(\frac{x-\xi_{sT}}{z-\xi_{sT}}\Bigr)^{m(t-s)-1}
\Bigl(\frac{z-x}{z-\xi_{sT}}\Bigr)^{m(T-t)-1}\mathrm{d}x, \tag{4.36}
\]
where
\[
k(z)=(z-\xi_{sT})\,\mathrm{B}[m(t-s),m(T-t)]. \tag{4.37}
\]
So \(\mu_{st}(\mathrm{d}x;z)\) is a \((z-\xi_{sT})\)-scaled, \(\xi_{sT}\)-shifted beta law with parameters \(\alpha=m(t-s)\) and \(\beta=m(T-t)\). This is the time-\(t\) marginal law of a gamma bridge starting at the value \(\xi_{sT}\) at time \(s\), and terminating at the value \(z\) at time \(T\). When \(m(T-t)>1\), a critical value \(\xi^*_t\) exists such that \(\Lambda(t,\xi^*_t)=K\). Then \(B_t=(\xi^*_t,\infty)\), and
\[
\begin{aligned}
\mu_{st}(B_t;z)&=1-I\Bigl[\frac{\xi^*_t-\xi_{sT}}{z-\xi_{sT}};\,m(t-s),\,m(T-t)\Bigr]\\
&=I\Bigl[\frac{z-\xi^*_t}{z-\xi_{sT}};\,m(T-t),\,m(t-s)\Bigr], \tag{4.38}
\end{aligned}
\]

where I[z; α, β]= Iz[α, β] is the regularized incomplete beta function. The option price is then given by

\[
C_{st}=P_{sT}\int_{\xi_{sT}}^{\infty}z\,I\Bigl[\frac{z-\xi^*_t}{z-\xi_{sT}};\,m(T-t),\,m(t-s)\Bigr]\nu_s(\mathrm{d}z)
-P_{st}K\int_{\xi_{sT}}^{\infty}I\Bigl[\frac{z-\xi^*_t}{z-\xi_{sT}};\,m(T-t),\,m(t-s)\Bigr]\nu_s(\mathrm{d}z). \tag{4.39}
\]

4.4 Binary bond

The simplest non-trivial contingent cash flow is X k ,k , for k < k . This is T ∈ { 0 1} 0 1 the pay-off from a zero-coupon, credit-risky bond that has principal k1, and a fixed recovery rate k0/k1 on default. Assume that, a priori, Q[XT = k0] = p > 0 and Q[X = k ]=1 p. Then T 1 − f (k ) f (k ξ ) 1 p −1 Q[X = k ξ ]= 1+ T 0 T −t 1 − tT − , (4.40) T 0 | tT f (k ) f (k ξ ) p T 1 T −t 0 − tT 70 4.4 Binary bond

and
\[
Q[X_T = k_1\,|\,\xi_{tT}] = \left(1 + \frac{f_T(k_1)\,f_{T-t}(k_0-\xi_{tT})}{f_T(k_0)\,f_{T-t}(k_1-\xi_{tT})}\,\frac{p}{1-p}\right)^{-1}. \tag{4.41}
\]
The bond price process $\{X_{tT}\}$ associated with the given terminal cash flow is given by
\[
X_{tT} = P_{tT}\left(k_0\,Q[X_T = k_0\,|\,\xi_{tT}] + k_1\,Q[X_T = k_1\,|\,\xi_{tT}]\right) \qquad (0 \le t \le T). \tag{4.42}
\]
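The pricing formulae (4.40)-(4.42) can be evaluated directly once an increment density $f_t$ is fixed. The sketch below uses the Gaussian density of the Brownian special case; all numerical values (recovery level, principal, discount factor, observed information) are purely illustrative.

```python
import math

def f(tau, x):
    # Gaussian increment density f_tau(x) of the Brownian case
    return math.exp(-x*x/(2.0*tau)) / math.sqrt(2.0*math.pi*tau)

T, t = 1.0, 0.4
k0, k1, p = 0.3, 1.0, 0.25   # recovery level, principal, a priori default probability
PtT = 0.98                    # illustrative discount factor
xi = 0.55                     # observed value of the information process

# eqs (4.40) and (4.41)
q0 = 1.0/(1.0 + (f(T, k0)*f(T - t, k1 - xi))/(f(T, k1)*f(T - t, k0 - xi))*(1.0 - p)/p)
q1 = 1.0/(1.0 + (f(T, k1)*f(T - t, k0 - xi))/(f(T, k0)*f(T - t, k1 - xi))*p/(1.0 - p))

# eq (4.42): the bond price
X = PtT*(k0*q0 + k1*q1)
```

By construction the two conditional probabilities sum to one, and the price lies strictly between the discounted recovery and the discounted principal.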

Example. In the Brownian case we have
\[
Q[X_T = k_0\,|\,\xi_{tT}] = \left(1 + \exp\left[-\frac{1}{2}\,\frac{k_1-k_0}{T-t}\left(\frac{t}{T}(k_0+k_1) - 2\xi_{tT}\right)\right]\frac{1-p}{p}\right)^{-1}, \tag{4.43}
\]
and
\[
Q[X_T = k_1\,|\,\xi_{tT}] = \left(1 + \exp\left[\frac{1}{2}\,\frac{k_1-k_0}{T-t}\left(\frac{t}{T}(k_0+k_1) - 2\xi_{tT}\right)\right]\frac{p}{1-p}\right)^{-1}. \tag{4.44}
\]
Writing $\rho_i = Q[X_T = k_i\,|\,\xi_{tT}]$, note that
\begin{align*}
\mathrm{Var}[X_T\,|\,\xi_{tT}] &= (k_1-k_0)^2\rho_1\rho_0 \\
&= -(k_0 - k_0\rho_0 - k_1\rho_1)(k_1 - k_0\rho_0 - k_1\rho_1) \\
&= -(k_0 - X_{tT})(k_1 - X_{tT}). \tag{4.45}
\end{align*}
Thus, recalling (4.17), we see that the SDE of $\{X_{tT}\}$ is
\[
dX_{tT} = r_t X_{tT}\,dt - \frac{P_{tT}(k_0 - X_{tT})(k_1 - X_{tT})}{T-t}\,dW_t, \tag{4.46}
\]
with the initial condition $X_{0T} = P_{0T}(k_0 p + k_1(1-p))$. For $K \in (P_{tT}k_0,\, P_{tT}k_1)$, we are able to solve the equation $\Lambda(t,x) = K$ for $x$. We have

\begin{align*}
\Lambda(t,x) &= P_{tT}\left(k_0\,Q[X_T=k_0\,|\,\xi_{tT}=x] + k_1\,Q[X_T=k_1\,|\,\xi_{tT}=x]\right) \\
&= P_{tT}\left(k_1 - (k_1-k_0)\,Q[X_T=k_0\,|\,\xi_{tT}=x]\right), \tag{4.47}
\end{align*}
so the solution to $\Lambda(t,x)=K$ is
\[
\xi_t^* = \frac{t}{2T}(k_0+k_1) + \frac{T-t}{k_1-k_0}\,\log\left[\frac{p}{1-p}\,\frac{K - P_{tT}k_0}{P_{tT}k_1 - K}\right]. \tag{4.48}
\]
The price of a call option on $X_{tT}$ is

\[
C_{st} = P_{st}\sum_{i=0}^{1}\,(P_{tT}k_i - K)\,\Phi\left[\frac{M(k_i)-\xi_t^*}{\sqrt{V}}\right]\,Q[X_T = k_i\,|\,\xi_{sT}]. \tag{4.49}
\]

Chapter 5

Variance-gamma information

We presented some of the properties of the VG process and the VG bridge in Section 2.7. We now consider the (standard) variance-gamma random bridge (VGRB). In a similar way to a VG bridge, we have two terminal value decompositions of a VGRB. The first decomposition writes a VGRB in terms of its terminal value, a Brownian bridge, a gamma bridge, and a random volatility factor. The second decomposition writes a VGRB in terms of its terminal value, two gamma bridges, and a random volatility factor. These lead to two efficient algorithms for the simulation of VGRBs. We demonstrate the flexibility of VG information models by way of two simple examples. The first example recovers the equity model of Madan et al. [67]. In this case we consider a single X-factor $X_T$ which, after a linear transformation, represents the log-return on a non-dividend-paying stock over the time period $[0,T]$. We assume that, a priori, $X_T$ is an asymmetric VG random variable, and that the market filtration is generated by a standard VGRB whose terminal value is pinned to the value of $X_T$. Under these conditions, we show that the stock price process is an exponentiated asymmetric VG process. The second example is an application to credit risk. We price a binary bond when the market filtration is generated by a VGRB. Following Brody et al. [17], we include a rate parameter $\sigma$ which controls the speed at which information is released. Through simulations, we explore the effect of $\sigma$ on the sample paths of bond price processes in the case of default. When $\sigma$ is high, information about the impending default enters the market quickly, and the price of the bond becomes small well before the maturity


date. When $\sigma$ is low, information about the default is slow to reach the market, and there is a sudden collapse in the bond price close to the maturity date. Further simulations are performed that demonstrate the effect of the VG parameter $m$ on bond price trajectories, where $m^{-1}$ is the variance of the gamma time change at $t=1$. When $m$ is small, bond price processes exhibit more large jumps. When $m$ is large, bond price processes are smoother, and look more like the prices generated in [17] using Brownian information.

5.1 Variance-gamma random bridge

Let $\{V_{tT}\}$ be a standard VGRB; that is, in the transition law (3.14) we take $f_t(x) = f_t^{(m)}(x)$, which we defined in (2.73) to be

\[
f_t^{(m)}(x) = \sqrt{\frac{2}{\pi}}\,\frac{m^{mt}}{\Gamma[mt]}\left(\frac{x^2}{2m}\right)^{\frac{mt}{2}-\frac{1}{4}} K_{mt-\frac{1}{2}}\!\left[\sqrt{2mx^2}\right], \tag{5.1}
\]
for $m>0$. We assume that $V_{TT}$ has marginal law $\nu$. To ensure that $0 < f_T(x) < \infty$ $\nu$-a.s., we require that $Q[V_{TT}=0] = \nu(\{0\}) = 0$ or $T > (2m)^{-1}$. From (2.87), we can decompose $\{V_{tT}\}$ as
\[
V_{tT} = V_{TT}\,\gamma_{tT} + \Sigma_T\,\beta(\gamma_{tT}) \qquad (0\le t\le T), \tag{5.2}
\]
where (i) $\{\beta(t)\}$ is a Brownian bridge to the value 0 at time 1, (ii) $\{\gamma_{tT}\}$ is a gamma bridge to the value 1 at time $T$ (with parameter $m>0$), and (iii) given $V_{TT}$, $\Sigma_T$ is a GIG random variable with parameter set $\{mT-1/2,\,|V_{TT}|,\,\sqrt{2m}\}$. Also, from (2.92), we can decompose $\{V_{tT}\}$ as
\[
V_{tT} = V_{TT}\,\bar{\gamma}_{tT} + Y_T\,\mu\,(\bar{\gamma}_{tT} - \gamma_{tT}) \qquad (0\le t\le T), \tag{5.3}
\]
where (i) $\mu = (m/2)^{-1/2}$, (ii) $\{\gamma_{tT}\}$ and $\{\bar{\gamma}_{tT}\}$ are independent gamma bridges to the value 1 at time $T$, and (iii) given $V_{TT}$, $Y_T$ is a random variable with density
\[
y \mapsto \frac{\sqrt{2}\,m^{2mT}}{\Gamma[mT]^2\,f_T^{(m)}(V_{TT})}\,\mathbf{1}_{\{y>(-V_{TT})^+\}}\,(yV_{TT}+y^2)^{mT-1}\,\mathrm{e}^{-m(V_{TT}+2y)}. \tag{5.4}
\]

5.1.1 Example

Recall that in (2.75) we defined $k(m,\theta,\sigma)$ as
\[
k(m,\theta,\sigma) = \sqrt{1 + \frac{\theta^2}{2m\sigma^2}}. \tag{5.5}
\]

Define $\rho>0$ by
\[
\rho^2 = \frac{1}{2}\left(1 + \sqrt{1 + \frac{2\theta^2}{m}}\right); \tag{5.6}
\]
then $\rho$ satisfies $\rho = k(m,\theta,\rho)$. Now suppose that, a priori, $V_{TT}$ is a VG random variable with density $f_T^{(m,\theta,\rho)}(x)$ as given by (2.74). We then have

\begin{align*}
\psi_t(dz;\xi) &= \frac{f_{T-t}^{(m)}(z-\xi)}{f_T^{(m)}(z)}\,f_T^{(m,\theta,\rho)}(z)\,dz \\
&= \mathrm{e}^{\theta\xi/\rho^2}\,k(m,\theta,\rho)^{-2mt}\,f_{T-t}^{(m,\theta,\rho)}(z-\xi)\,dz, \tag{5.7}
\end{align*}
for $t\in[0,T)$. The transition law of $\{V_{tT}\}$ is

\begin{align*}
Q[V_{tT}\in dy\,|\,V_{sT}=x] &= \frac{\psi_t(\mathbb{R};y)}{\psi_s(\mathbb{R};x)}\,f_{t-s}^{(m)}(y-x)\,dy \\
&= \mathrm{e}^{\theta(y-x)/\rho^2}\,k(m,\theta,\rho)^{-2m(t-s)}\,f_{t-s}^{(m)}(y-x)\,dy \\
&= f_{t-s}^{(m,\theta,\rho)}(y-x)\,dy, \tag{5.8}
\end{align*}

and similarly
\[
Q[V_{TT}\in dy\,|\,V_{sT}=x] = f_{T-s}^{(m,\theta,\rho)}(y-x)\,dy. \tag{5.9}
\]
Over the time period $[0,T]$, $\{V_{tT}\}$ is then a two-parameter VG process (since $\rho$ is a function of $\theta$ and $m$). The scaled LRB $\{\sigma V_{tT}/\rho\}$ is a three-parameter VG process with parameters $m$, $\theta$, and $\sigma$. We have shown that an asymmetric VG process can be constructed as a random bridge of a symmetric VG process.
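The densities appearing above are built from the modified Bessel function $K_\nu$. As a quick numerical check of (5.1), the sketch below evaluates $K_\nu$ by quadrature of the integral representation $K_\nu(z) = \int_0^\infty \mathrm{e}^{-z\cosh u}\cosh(\nu u)\,du$, and compares $f_t^{(m)}$ for $m = t = 1$ with the Laplace density $\mathrm{e}^{-\sqrt{2}|x|}/\sqrt{2}$, to which (5.1) reduces in that case since $K_{1/2}(z) = \sqrt{\pi/(2z)}\,\mathrm{e}^{-z}$. The quadrature parameters are a pragmatic choice, not part of the model.

```python
import math

def bessel_k(nu, z, upper=30.0, n=20000):
    # K_nu(z) = \int_0^infty e^{-z cosh u} cosh(nu u) du  (trapezoidal rule)
    h = upper / n
    total = 0.0
    for i in range(n + 1):
        u = i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(-z * math.cosh(u)) * math.cosh(nu * u)
    return total * h

def f_vg(m, t, x):
    # eq. (5.1): symmetric variance-gamma increment density
    return (math.sqrt(2.0/math.pi) * m**(m*t) / math.gamma(m*t)
            * (x*x/(2.0*m))**(m*t/2.0 - 0.25)
            * bessel_k(m*t - 0.5, math.sqrt(2.0*m*x*x)))

# closed-form special case (m = t = 1) used as the check
vals = [(f_vg(1.0, 1.0, x), math.exp(-math.sqrt(2.0)*abs(x))/math.sqrt(2.0))
        for x in (0.3, 1.0, 2.5)]
```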

5.2 Simulation

In this section we will assume that we can generate sample paths for Brownian and gamma bridges (see Avramidis et al. [4], and Ribeiro & Webber [79]), gamma and GIG

random variates (see Devroye [26]), and random variates from the law of $V_{TT}$. From the constructions (5.2) and (5.3), we have two natural ways of simulating paths of VGRBs. Indeed, all we need to do is be able to sample from the distributions

of $Y_T$ and $\Sigma_T$, given the value of $V_{TT}$. Since the conditional distribution of $\Sigma_T$ is GIG,

this causes no problems. To sample from the conditional distribution of $Y_T$, we note

the following: if $V_{TT}\le 0$ then, from (5.4), the density of $Y_T$ is proportional to

\[
\mathbf{1}_{\{y>-V_{TT}\}}\,(y(V_{TT}+y))^{mT-1}\,\mathrm{e}^{-2my} \;\le\; \mathbf{1}_{\{y>-V_{TT}\}}\,y^{2mT-2}\,\mathrm{e}^{-2my} \;\propto\; \mathbf{1}_{\{y>-V_{TT}\}}\,g^{(m)}_{T-\frac{1}{2m}}(y), \tag{5.10}
\]

where $g_t^{(m)}(x)$ is the gamma density given in (2.46). If $V_{TT}>0$ then the density of $Y_T$ is proportional to

\[
\mathbf{1}_{\{y>0\}}\,(y(V_{TT}+y))^{mT-1}\,\mathrm{e}^{-m(V_{TT}+2y)} \;\le\; \mathrm{e}^{mV_{TT}}\,\mathbf{1}_{\{y>0\}}\,(V_{TT}+y)^{2mT-2}\,\mathrm{e}^{-2m(V_{TT}+y)} \;\propto\; \mathbf{1}_{\{y>0\}}\,g^{(m)}_{T-\frac{1}{2m}}(V_{TT}+y), \tag{5.11}
\]

So, given $V_{TT}$, if $T > (2m)^{-1}$ we can simulate $Y_T$ with the following acceptance-rejection algorithm.

If $V_{TT} \le 0$:

1. Generate $G$ from the gamma distribution with density $g^{(m)}_{T-\frac{1}{2m}}(y)$.

2. If $G \le -V_{TT}$, go to step 1.

3. Generate U from the standard uniform distribution.

4. If $(1+V_{TT}/G)^{mT-1} < U$, go to step 1.

5. Return G.

If $V_{TT} > 0$:

1. Generate $G$ from the gamma distribution with density $g^{(m)}_{T-\frac{1}{2m}}(V_{TT}+y)$.

2. If $G \le V_{TT}$, go to step 1.

3. Generate U from the standard uniform distribution.

4. If $(1 - V_{TT}/G)^{mT-1} < U$, go to step 1.

5. Return $G - V_{TT}$.
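The two branches of the algorithm can be sketched as follows. We take the gamma proposal to have shape $2mT-1$ and rate $2m$, which is one parametrisation consistent with the bounds (5.10)-(5.11), and we write the acceptance ratios in the forms $(1+V/G)^{mT-1}$ and $(1-V/G)^{mT-1}$; the parameter values are illustrative and assume $mT \ge 1$.

```python
import math, random

random.seed(1)
m, T = 1.0, 2.0                 # requires T > 1/(2m); here mT = 2
shape = 2.0*m*T - 1.0           # proposal density proportional to y^{2mT-2} e^{-2my}

def sample_Y(V):
    # acceptance-rejection sampler for the conditional density (5.4) of Y_T
    while True:
        G = random.gammavariate(shape, 1.0/(2.0*m))
        if V <= 0.0:
            if G <= -V:
                continue                                   # outside the support y > -V
            if random.random() <= (1.0 + V/G)**(m*T - 1.0):
                return G
        else:
            if G <= V:
                continue                                   # ensures G - V is positive
            if random.random() <= (1.0 - V/G)**(m*T - 1.0):
                return G - V

ys = [sample_Y(-0.5) for _ in range(4000)]
```

For $m = 1$, $T = 2$, $V_{TT} = -0.5$, the target density is proportional to $y(y-\tfrac12)\mathrm{e}^{-2y}$ on $y>\tfrac12$, whose mean is $11/6$; the sample mean provides a check of the sampler.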

5.3 VG equity model

Madan et al. [67] used an asymmetric VG process to model the log-returns of a single stock, and priced European options on the stock. It follows from the asymptotic

behaviour of the Bessel function Kν[z] given in (2.70) that the exponential moment of the VG distribution exists only if θ + σ2/2 < m, which we now assume is true. We model the stock price as

\[
S_t = S_0\exp\left[rt + L_t + wt\right] \qquad (t\ge 0), \tag{5.12}
\]
where, under the risk-neutral measure, $\{L_t\}$ is a three-parameter VG process with parameters $(m,\theta,\sigma)$, $r>0$ is the constant rate of interest, and

\[
w = m\log\left(1 - \frac{\theta}{m} - \frac{\sigma^2}{2m}\right). \tag{5.13}
\]
The characteristic function of $\{L_t\}$ was given in (2.62), from which it becomes apparent that the drift term $wt$ ensures that the discounted stock price process $\{\mathrm{e}^{-rt}S_t\}$ is a martingale under the risk-neutral measure. We will show that this asymmetric VG model can be recovered using the information-based approach when the information process is a symmetric VGRB. First we fix some suitably distant future date $T$ (for example, this could be the maturity date of the

longest-dated, liquidly-traded, European option on the stock). We then let $X_T$ be a VG random variable with density $f_T^{(m,\theta,\rho)}(x)$, where $\rho$ is given by (5.6), and set

\[
h(x) = S_0\exp(rT + \sigma x/\rho + wT). \tag{5.14}
\]

Let the information process $\{\xi_{tT}\}$ be a standard VGRB with parameter $m$ such that $\xi_{TT} = X_T$. From Section 5.1 we have that $\{\sigma\xi_{tT}/\rho\}$ is a VG process with parameters $\{m,\theta,\sigma\}$. We then have
\begin{align*}
H_{tT} &= \exp(-r(T-t))\,\mathrm{E}[h(X_T)\,|\,\xi_{tT}] \\
&= S_0\exp(rt + \sigma\xi_{tT}/\rho + wT)\,\mathrm{E}\big[\mathrm{e}^{\sigma(\xi_{TT}-\xi_{tT})/\rho}\big] \\
&= S_0\exp(rt + \sigma\xi_{tT}/\rho + wt), \tag{5.15}
\end{align*}

for $t\in[0,T]$. Hence we have $\{H_{tT}\}_{0\le t\le T} \stackrel{\text{law}}{=} \{S_t\}_{0\le t\le T}$. European options on the price process $\{H_{tT}\}$ can then be priced using the techniques in [67].
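The martingale property enforced by (5.13) can be checked by Monte Carlo, representing the three-parameter VG process as Brownian motion with drift $\theta$ and volatility $\sigma$ subordinated by a gamma process with mean $t$ and variance $t/m$. The parameter values below are illustrative and satisfy the moment condition $\theta + \sigma^2/2 < m$.

```python
import math, random

random.seed(7)
m, theta, sigma, t = 2.0, 0.2, 0.5, 1.0
w = m * math.log(1.0 - theta/m - sigma*sigma/(2.0*m))   # eq. (5.13)

n = 200000
acc = 0.0
for _ in range(n):
    g = random.gammavariate(m*t, 1.0/m)                 # gamma time change, mean t
    L = theta*g + sigma*math.sqrt(g)*random.gauss(0.0, 1.0)
    acc += math.exp(L + w*t)
mean = acc / n   # sample estimate of E[exp(L_t + w t)]; should be close to 1
```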

5.4 VG binary bond

In the example of Section 4.4 we derived the price of a binary bond in the Brownian information model. Brody et al. [17] derived a similar price process, only they included a rate parameter $\sigma>0$. This rate parameter behaved like the volatility parameter in the Black-Scholes model. We can construct a similar model using a VGRB as the information process. Let $X_T$ be an X-factor such that $Q[X_T=0]=p$ and $Q[X_T=\sigma]=1-p$. Set $h(x)=x/\sigma$, and let $H_T=h(X_T)$ be the redemption amount of a credit-risky, zero-coupon bond. We assume that the market filtration is generated by a standard VGRB $\{\xi_{tT}\}$ such that $\xi_{TT}=X_T$. For $\{\xi_{tT}\}$ to be well defined, its parameter $m$ must satisfy $T>(2m)^{-1}$. The time-$t$ price of the bond is then given by

\[
B_{tT} = P_{tT}\,\mathrm{E}[h(X_T)\,|\,\xi_{tT}] = P_{tT}\,Q[X_T=\sigma\,|\,\xi_{tT}]. \tag{5.16}
\]
From Section 4.4, we have

\begin{align*}
B_{tT} &= P_{tT}\left(1 + \frac{f_T^{(m)}(\sigma)\,f_{T-t}^{(m)}(-\xi_{tT})}{f_T^{(m)}(0)\,f_{T-t}^{(m)}(\sigma-\xi_{tT})}\,\frac{p}{1-p}\right)^{-1} \\
&= P_{tT}\left(1 + c\left(\frac{\xi_{tT}^2}{(\sigma-\xi_{tT})^2}\right)^{\frac{m(T-t)}{2}-\frac{1}{4}}\frac{K_{m(T-t)-1/2}\big[\sqrt{2m\,\xi_{tT}^2}\big]}{K_{m(T-t)-1/2}\big[\sqrt{2m(\sigma-\xi_{tT})^2}\big]}\right)^{-1}, \tag{5.17}
\end{align*}
where
\[
c = \frac{2p}{(1-p)\,\Gamma[mT-1/2]}\left(\frac{m\sigma^2}{2}\right)^{\frac{mT}{2}-\frac{1}{4}} K_{mT-1/2}\big[\sqrt{2m\sigma^2}\big]. \tag{5.18}
\]
See Figures 5.1, 5.2, and 5.3 for example simulations.
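The bond price (5.17)-(5.18) is straightforward to evaluate numerically once $K_\nu$ is available; the sketch below again uses quadrature of the integral representation of $K_\nu$, with illustrative parameter values. A basic consistency check is that the price lies in $(0, P_{tT})$ and increases with the observed information $\xi_{tT}$ on $(0,\sigma)$.

```python
import math

def bessel_k(nu, z, upper=30.0, n=20000):
    # K_nu(z) by trapezoidal quadrature of its integral representation
    h = upper / n
    total = 0.0
    for i in range(n + 1):
        u = i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(-z * math.cosh(u)) * math.cosh(nu * u)
    return total * h

m, T, sigma, p, PtT = 1.0, 1.0, 1.0, 0.3, 1.0   # illustrative; T > 1/(2m)

c = (2.0*p/((1.0 - p)*math.gamma(m*T - 0.5))
     * (m*sigma*sigma/2.0)**(m*T/2.0 - 0.25)
     * bessel_k(m*T - 0.5, math.sqrt(2.0*m*sigma*sigma)))     # eq. (5.18)

def B(t, xi):
    # eq. (5.17): VG binary bond price
    nu = m*(T - t) - 0.5
    ratio = ((xi*xi/((sigma - xi)**2))**(m*(T - t)/2.0 - 0.25)
             * bessel_k(nu, math.sqrt(2.0*m*xi*xi))
             / bessel_k(nu, math.sqrt(2.0*m*(sigma - xi)**2)))
    return PtT / (1.0 + c*ratio)

b_low, b_high = B(0.5, 0.2), B(0.5, 0.8)
```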

[Figure 5.1 appears here. Panels: (a) information process when m = 10; (b) bond price with m = 10; (c) information process when m = 25; (d) bond price with m = 25; (e) information process when m = 100; (f) bond price with m = 100.]

Figure 5.1: Simulations of VG information processes and bond price processes in cases where the bond defaults. Various values of the parameter m are used. The other parameter values are fixed as r_t = 0, p = 0.3, T = 1, and σ = 1.

[Figure 5.2 appears here. Panels: (a) information process when m = 10; (b) bond price with m = 10; (c) information process when m = 25; (d) bond price with m = 25; (e) information process when m = 100; (f) bond price with m = 100.]

Figure 5.2: Simulations of VG information processes and bond price processes in cases where the bond does not default. Various values of the parameter m are used. The other parameter values are fixed as r_t = 0, p = 0.3, T = 1, and σ = 1.

[Figure 5.3 appears here. Panels: (a) information process when σ = 0.1; (b) bond price with σ = 0.1; (c) information process when σ = 1; (d) bond price with σ = 1; (e) information process when σ = 10; (f) bond price with σ = 10.]

Figure 5.3: Simulations of VG information processes and bond price processes in cases where the bond defaults. Various values of the information rate parameter σ are used. The other parameter values are fixed as r_t = 0, p = 0.3, T = 1, and m = 25.

Chapter 6

Stable-1/2 information

The term ‘stable process’ refers here to a strictly stable process with index $\alpha\in(0,2)$; thus, we are excluding the case of Brownian motion ($\alpha=2$). The use of stable processes for the modelling of prices in financial markets was proposed by Mandelbrot [69] in connection with his analysis of cotton futures. In the tails, the Lévy densities of stable processes exhibit power-law decay. As a result, the behaviour of stable processes is wild, and their trajectories exhibit frequent large jumps. The variance of a stable random variable is infinite. If $\alpha\le 1$, the expectation either does not exist or is infinite. This heavy-tailed behaviour makes stable processes ill-suited to some applications in finance, such as forecasting and option pricing. To overcome some of the drawbacks of the stable processes, so-called tempered stable processes have been introduced (see Cont & Tankov [24], for example, for details). A tempered stable process is a pure-jump Lévy process, and its Lévy density is the exponentially dampened Lévy density of a stable process. The exponential dampening of the Lévy density improves the integrability of the process to the extent that all the moments of a tempered stable process exist. Tempered stable processes do not possess the time-scaling property of stable processes. In this chapter we apply stable-1/2 random bridges to the modelling of cumulative losses. The techniques presented can equally be applied to cumulative gains. The integrability of a stable-1/2 random bridge depends on the integrability of its terminal distribution. At some fixed future time, the $n$th moment of the process is finite if and only if the $n$th moment of its terminal value is finite. Thus a stable-1/2 random bridge with an integrable terminal distribution can be considered to be a dampened

stable-1/2 subordinator. In fact, the stable-1/2 random bridge is a generalisation of the tempered stable-1/2 subordinator. If the Lévy density of a stable-1/2 subordinator is exponentially dampened, the resulting process is an IG process. We shall see that the IG process is a special case of a stable-1/2 random bridge. We look in detail at the non-life reserving problem. An insurance company will incur losses when certain events occur. An event may be, for example, a period of high wind, a river flooding, or a motor accident. The losses are the costs associated with recompensing policy-holders who have been disadvantaged by an event. These costs might, for example, cover repairs to property, replacement of damaged items, loss of business, medical care, and so on. Although a loss is incurred by the insurance company on the date of an event (the ‘loss date’ or ‘accident date’), payment is generally not made immediately. Delays will occur because loss is not always immediately reported to the company, the full extent of the costs takes time to emerge, the insurance company’s obligation to pay takes time to establish, and so forth. In return for covering policy-holder risk, the insurance company receives premiums. The premiums received over a fixed period of time should, typically, be sufficient to cover the losses the company incurs over that period. Since losses can take years to pay in full, the company sets aside some of the premiums to cover future payments; these are called ‘reserves’. If the reserves are set too low, the company may struggle to cover its liabilities, leading to insolvency. Large upward moves in the reserves due to a worsening in the expected future development of liabilities can cause similar problems. If the reserves are set too high, the company may be accused by shareholders or regulators of inappropriately withholding profits.
Hence, it is in the interest of the company to forecast its ultimate liability as accurately as possible when deciding the level of reserves to set. We use a stable-1/2 random bridge to model the paid-claims process (i.e. the cumulative amount paid to date) of an insurance company. The losses contributing to the paid-claims process are assumed to have occurred in a fixed interval of time. Sometimes claims-handling information about individual losses is known, such as that contained in police or loss-adjuster reports. In the model presented here, such information is disregarded, and the paid-claims process is regarded as providing all relevant information. We derive the conditional distribution of the company’s total liability given the paid-claims process. We then estimate recoveries from reinsurance treaties on the total liability. The expressions arising in such estimates are similar to the expectations

encountered in the pricing of call spreads on stock prices. We shall examine the upper tail of the conditional distribution of the ultimate liability, and find that it is as heavy as the a priori tail. This has an interesting interpretation in the case when the insurer is exposed to a catastrophic loss. At time t < T, the probability of a catastrophic loss occurring in the interval [t, T] decreases as t approaches T. However, in some sense, the size of a catastrophic loss does not decrease as t approaches T, since the tail of the conditional distribution of the cumulative loss does not thin. When the a priori total loss distribution is a generalized inverse-Gaussian distribution, we find that the model is particularly tractable. We present a family of special cases where the expected total loss can be expressed as a rational function of the current value of the paid-claims process. That is, each member of the family is a martingale that can be written as a rational function of an increasing process. The model can be extended to include more than one paid-claims process. We consider the case where there are two processes that are not independent, and which have different activity parameters. We then have two ultimate losses to estimate. We provide expressions for the expected values of the ultimate losses given both paid-claims processes. The numerical computations required to evaluate these expressions are no more difficult than those of the one-dimensional case. We demonstrate how to calculate the a priori correlation between the ultimate liabilities. The correlation can be used as a calibration tool when modelling cumulative losses arising from related lines of business (e.g. personal motor and commercial motor business). We also describe how to simulate sample paths of the stable-1/2 random bridge, and how to use a deterministic time-change to adjust the model when the paid-claims process is expected to develop non-linearly.

6.1 Stable-1/2 random bridge

In this chapter we take ft(x) to be the increment density of the stable-1/2 subordinator as given in (2.94); that is

\[
f_t(x) = \mathbf{1}_{\{x>0\}}\,\frac{ct}{\sqrt{2\pi}}\,\frac{1}{x^{3/2}}\exp\left(-\frac{c^2t^2}{2x}\right). \tag{6.1}
\]

Let $\{\xi_{tT}\}$ be a process with law $\mathrm{LRB}_C([0,T],\{f_t\},\nu)$. We say that $\{\xi_{tT}\}$ is a stable-1/2 random bridge. Recall that $\nu$ must concentrate mass where $f_T(x)$ is positive and finite. Hence $\nu$ must be a probability law with support on the positive half-line.
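Increments with density (6.1) are easy to simulate, since (6.1) is the law of $(ct)^2/Z^2$ for $Z$ a standard normal random variable. The sketch below checks this sampler against the exact median implied by $P[X\le x] = 2(1-\Phi(ct/\sqrt{x}))$; the parameter values are illustrative.

```python
import math, random, statistics

random.seed(3)
c, t = 1.5, 2.0
# X = (ct)^2 / Z^2 has the stable-1/2 increment density (6.1)
xs = [(c*t)**2 / random.gauss(0.0, 1.0)**2 for _ in range(200000)]

z75 = 0.6744897501960817          # Phi^{-1}(3/4)
med_exact = (c*t/z75)**2          # solves 2(1 - Phi(ct/sqrt(x))) = 1/2
med_mc = statistics.median(xs)
```

Note that the sample mean would be useless here: the distribution has infinite mean, which is why the comparison is made at the median.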

6.2 Insurance model

We approach the non-life insurance claims reserving problem by modelling a paid-claims process by a stable-1/2 random bridge. The stable-1/2 random bridge is a suitable candidate model for a paid-claims process because it is (i) increasing, and (ii) tractable (in particular, its density has a simple form). We shall look at the problem of calculating the reserves required to cover the losses arising from a single line of business when we observe the paid-claims process. Arjas [2] and Norberg [76, 77] provide general descriptions of the problem. England & Verrall [34] survey some of the existing actuarial models. Bühlmann [21] and Mikosch [74] contain related topics. The present work ties in with that of Brody et al. [20], who use a gamma random bridge process to model a cumulative loss or gain. The method we use has a flavour of the Bornhuetter-Ferguson model from [16] (see also [34]). In implementing the Bornhuetter-Ferguson model, one begins with an a priori estimate for the ultimate loss (the total cumulative loss arising from the underwritten risks). Periodically, this estimate is revised using a chain-ladder technique to take into account both the a priori estimate and the development of the total paid (or reported) claims to date. In the proposed model, we assume an a priori distribution for the ultimate loss. By conditioning on the development of the paid-claims process, we revise the ultimate loss distribution using the Bayesian methods described in Chapter 4. In this way, we continuously update the conditional distribution for the total loss. This is as opposed to the deterministic Bornhuetter-Ferguson model, in which only a point estimate is updated. Knowledge of the conditional distribution allows one to calculate confidence intervals around the expected loss, and to calculate expected reinsurance recoveries. The main assumptions of the model are the following:

1. The claims arising from the line of business have run-off at time T . That is, at

time $T$ all claims have been settled, and the ultimate loss $U_T$ is known.

2. $U_T$ has a priori law $\nu$ such that $U_T > 0$ and $\mathrm{E}[U_T^2] < \infty$.

3. The paid-claims process $\{\xi_{tT}\}$ is a stable-1/2 random bridge, and $\xi_{TT} = U_T$.

4. The best estimate of the ultimate loss is $U_{tT} = \mathrm{E}\big[U_T\,\big|\,\mathcal{F}_t^\xi\big]$, where $\{\mathcal{F}_t^\xi\}$ is the natural filtration of $\{\xi_{tT}\}$.

A few remarks should be made about assumption 4. First, using the natural filtration of $\{\xi_{tT}\}$ as the reserving filtration means that the paid-claims process is the only source of information about the ultimate loss once the measure $\nu$ is set. We do not consider the situation where we have access to information about claims that have been reported but not yet paid in full (such as case estimates). Second, the expectation is taken with respect to $Q$, which may or may not be the ‘real-world’ measure. Let us call $Q$ the actuarial measure. When reserving, practitioners routinely discount data before modelling. Discounting may adjust the data for the time-value of money or for the effects of claims inflation. Claims inflation and interest rates, though understood to be stochastic, usually contribute only a small amount of uncertainty to the forecasting of the ultimate loss, relative to the uncertainty surrounding the frequency and (discounted) sizes of insurance claims. Furthermore, it is often reasonable for practical purposes to assume that claims inflation and interest rates are independent of claim frequency and size. Hence, a stochastic reserving model may lose little from the assumption that interest rates and inflation rates are deterministic. We make this assumption, and further assume that the paid-claims process has been discounted for the effects of interest and inflation.

6.3 Estimating the ultimate loss

The time-t conditional law of UT is

\[
\nu_t(dz) = \frac{\psi_t(dz;\xi_{tT})}{\psi_t(\mathbb{R};\xi_{tT})} = \frac{\mathbf{1}_{\{z>\xi_{tT}\}}\left(\frac{z}{z-\xi_{tT}}\right)^{3/2}\exp\left(-\frac{c^2}{2}\left(\frac{(T-t)^2}{z-\xi_{tT}} - \frac{T^2}{z}\right)\right)\nu(dz)}{\int_{\xi_{tT}}^{\infty}\left(\frac{u}{u-\xi_{tT}}\right)^{3/2}\exp\left(-\frac{c^2}{2}\left(\frac{(T-t)^2}{u-\xi_{tT}} - \frac{T^2}{u}\right)\right)\nu(du)}. \tag{6.2}
\]
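For a discrete prior, (6.2) reduces to a finite Bayesian reweighting. The sketch below computes the posterior and its mean for hypothetical values of the support, the prior weights, and the paid claims to date.

```python
import math

c, T, t = 1.0, 1.0, 0.4
xi = 0.3                         # paid claims to date
zs = [0.6, 1.0, 1.8]             # hypothetical support of the ultimate loss
prior = [0.5, 0.3, 0.2]

def weight(z):
    # unnormalised posterior weight from eq. (6.2); requires z > xi
    return (z/(z - xi))**1.5 * math.exp(-0.5*c*c*((T - t)**2/(z - xi) - T*T/z))

w = [pz*weight(z) for z, pz in zip(zs, prior)]
post = [wi/sum(w) for wi in w]
U_best = sum(z*pi for z, pi in zip(zs, post))   # posterior mean of the ultimate loss
```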

Given this law, confidence intervals and quantiles for the ultimate loss are readily calculated. The best-estimate ultimate loss is

\[
U_{tT} = \int_{\xi_{tT}}^{\infty} z\,\nu_t(dz). \tag{6.3}
\]
At time $t<T$, the total amount of claims yet to be paid is $U_T - \xi_{tT}$. The amount that the insurance company sets aside to cover this unknown amount is called the reserve. The expected value of the total future payments is called the best-estimate reserve, and can be expressed by
\[
R_{tT} = U_{tT} - \xi_{tT}. \tag{6.4}
\]
For prudence, the reserve may be greater than the best-estimate reserve. However, for regulatory reasons it is sometimes required that the best-estimate reserve is reported. The variance of the total future payments is the variance of the ultimate loss, which is

\[
\mathrm{Var}\big[U_T - \xi_{tT}\,\big|\,\mathcal{F}_t^\xi\big] = \mathrm{Var}\big[U_T\,\big|\,\mathcal{F}_t^\xi\big] = \int_{\xi_{tT}}^{\infty}(z - U_{tT})^2\,\nu_t(dz). \tag{6.5}
\]

6.4 The paid-claims process

We give the first two conditional moments of the paid-claims process. From Corollary 3.8.2, we have
\[
\mathrm{E}\big[\xi_{tT}\,\big|\,\mathcal{F}_s^\xi\big] = \frac{T-t}{T-s}\,\xi_{sT} + \frac{t-s}{T-s}\,U_{sT}. \tag{6.6}
\]
Equation (6.6) implies that the paid-claims development is expected to be linear. We return to this point later. Using Proposition 2.8.5 and a straightforward conditioning argument, we have

\begin{align*}
\mathrm{E}\big[\xi_{tT}^2\big] &= \int_0^{\infty} z^2\left(\frac{t}{T} - c(T-t)\,\mathrm{e}^{\frac{c^2T^2}{2z}}\sqrt{\frac{2\pi}{z}}\;\Phi\big[-cTz^{-1/2}\big]\right)\nu(dz) \\
&= \frac{t}{T}\,\mathrm{E}\big[U_T^2\big] - c(T-t)\sqrt{2\pi}\int_0^{\infty} z^{3/2}\,\mathrm{e}^{\frac{c^2T^2}{2z}}\,\Phi\big[-cTz^{-1/2}\big]\nu(dz). \tag{6.7}
\end{align*}
Fix $s<T$ and define the relocated process $\{\eta_{tT}\}_{s\le t\le T}$ by

\[
\eta_{tT} = \xi_{tT} - \xi_{sT}. \tag{6.8}
\]

The dynamic consistency property implies that, given $\xi_{sT}$, $\{\eta_{tT}\}$ is a stable-1/2 random bridge with the marginal law of $\eta_{TT}$ being $\nu^*(A) = \nu_s(A + \xi_{sT})$. Then we have

\begin{align*}
\mathrm{E}\big[\xi_{tT}^2\,\big|\,\mathcal{F}_s^\xi\big] &= \mathrm{E}\big[\eta_{tT}^2\,\big|\,\xi_{sT}\big] + 2\xi_{sT}\,\mathrm{E}\big[\eta_{tT}\,\big|\,\xi_{sT}\big] + \xi_{sT}^2 \\
&= \frac{T-t}{T-s}\,\xi_{sT}^2 + \frac{t-s}{T-s}\,\mathrm{E}\big[U_T^2\,\big|\,\xi_{sT}\big] \\
&\qquad - c(T-t)\sqrt{2\pi}\int_{\xi_{sT}}^{\infty}(z-\xi_{sT})^{3/2}\,\mathrm{e}^{\frac{c^2(T-s)^2}{2(z-\xi_{sT})}}\,\Phi\left[-\frac{c(T-s)}{\sqrt{z-\xi_{sT}}}\right]\nu_s(dz). \tag{6.9}
\end{align*}

6.5 Reinsurance

An insurance company may buy reinsurance to protect against adverse claim development. The stop-loss and aggregate excess-of-loss treaties are two types of reinsurance that cover some or all of the total amount of claims paid over a fixed threshold. Under a stop-loss treaty, the reinsurance covers all the losses above a prespecified level. If this level is $K$, then the reinsurance provider pays a total amount $(U_T-K)^+$ to the insurance company. The ‘aggregate $L$ excess of $K$’ treaty is a capped stop-loss, and covers the layer $[K,K+L]$. In this case the reinsurance provider pays an amount $(U_T-K)^+ - (U_T-K-L)^+$. The insurance company typically receives money from the reinsurance provider periodically. The amount received depends on the amount the company has paid on claims to date. If the insurer has the paid-claims process $\{\xi_{tT}\}$, and receives payments from a stop-loss treaty (at level $K$) on the fixed dates $t_1 < t_2 < \cdots < t_n = T$, then the amount received on date $t_i$ is

\[
(\xi_{t_i,T} - K)^+ - (\xi_{t_{i-1},T} - K)^+. \tag{6.10}
\]

Recall the following expressions from Section 2.8: the stable-1/2 bridge marginal density function

\[
f_{tT}(y;z) = \frac{f_t(y)\,f_{T-t}(z-y)}{f_T(z)} = \mathbf{1}_{\{0<y<z\}}\,\frac{ct(T-t)}{\sqrt{2\pi}\,T}\left(\frac{z}{y(z-y)}\right)^{3/2}\exp\left(-\frac{1}{2}\,\frac{c^2(Ty-tz)^2}{yz(z-y)}\right), \tag{6.11}
\]

the stable-1/2 bridge marginal distribution function

\begin{align*}
F_{tT}(y;z) &= \int_0^y f_{tT}(x;z)\,dx \\
&= \Phi\left[\frac{c(Ty-tz)}{\sqrt{yz(z-y)}}\right] + \left(1 - \frac{2t}{T}\right)\mathrm{e}^{2c^2t(T-t)/z}\,\Phi\left[\frac{c((2t-T)y-tz)}{\sqrt{yz(z-y)}}\right], \tag{6.12}
\end{align*}
and the stable-1/2 bridge partial first moment

\begin{align*}
M_{tT}(y;z) &= \int_0^y x\,f_{tT}(x;z)\,dx \\
&= z\left(\frac{t}{T}\,\Phi\left[\frac{c(Ty-tz)}{\sqrt{yz(z-y)}}\right] - \mathrm{e}^{2c^2t(T-t)/z}\,\Phi\left[\frac{c((2t-T)y-tz)}{\sqrt{yz(z-y)}}\right]\right). \tag{6.13}
\end{align*}
The expected value of reinsurance payments such as (6.10) can be calculated using the following:

Proposition 6.5.1. Fix $t\in(0,T)$. At time $s<t$, the expected exceedence of $\xi_{tT}$ over some fixed $K>0$ is

\begin{align*}
D_{st} &= \mathrm{E}\big[(\xi_{tT}-K)^+\,\big|\,\mathcal{F}_s^\xi\big] \\
&= \frac{T-t}{T-s}\,\xi_{sT} + \frac{t-s}{T-s}\,U_{sT} - K \\
&\qquad + \mathbf{1}_{\{K>\xi_{sT}\}}(K-\xi_{sT})\int_K^{\infty} F_{t-s,T-s}(K-\xi_{sT};\,z-\xi_{sT})\,\nu_s(dz) \\
&\qquad - \mathbf{1}_{\{K>\xi_{sT}\}}\int_K^{\infty} M_{t-s,T-s}(K-\xi_{sT};\,z-\xi_{sT})\,\nu_s(dz). \tag{6.14}
\end{align*}

Proof. If $K\le\xi_{sT}$ then

\begin{align*}
\mathrm{E}\big[(\xi_{tT}-K)^+\,\big|\,\mathcal{F}_s^\xi\big] &= \mathrm{E}\big[\xi_{tT}\,\big|\,\mathcal{F}_s^\xi\big] - K \\
&= \frac{T-t}{T-s}\,\xi_{sT} + \frac{t-s}{T-s}\,U_{sT} - K. \tag{6.15}
\end{align*}

Thus we need only consider the case when $K>\xi_{sT}$. The $\mathcal{F}_s^\xi$-conditional law of $\xi_{tT}$ is

\[
Q[\xi_{tT}\in dy\,|\,\mathcal{F}_s^\xi] = \frac{\psi_t(\mathbb{R};y)}{\psi_s(\mathbb{R};\xi_{sT})}\,f_{t-s}(y-\xi_{sT})\,dy. \tag{6.16}
\]

Hence we have
\begin{align*}
D_{st} &= \frac{1}{\psi_s(\mathbb{R};\xi_{sT})}\int_K^{\infty}(y-K)\,\psi_t(\mathbb{R};y)\,f_{t-s}(y-\xi_{sT})\,dy \\
&= \frac{1}{\psi_s(\mathbb{R};\xi_{sT})}\int_K^{\infty}(y-K)\left(\int_K^{\infty}\frac{f_{T-t}(z-y)}{f_T(z)}\,\nu(dz)\right) f_{t-s}(y-\xi_{sT})\,dy \\
&= \frac{1}{\psi_s(\mathbb{R};\xi_{sT})}\int_K^{\infty}\int_K^{z}(y-K)\,\frac{f_{T-t}(z-y)\,f_{t-s}(y-\xi_{sT})}{f_T(z)}\,dy\,\nu(dz) \\
&= \int_K^{\infty}\int_K^{z}(y-K)\,f_{t-s,T-s}(y-\xi_{sT};\,z-\xi_{sT})\,dy\,\nu_s(dz). \tag{6.17}
\end{align*}
Making the change of variable $x = y-\xi_{sT}$ yields
\begin{align*}
D_{st} &= \int_K^{\infty}\int_{K-\xi_{sT}}^{z-\xi_{sT}}(x+\xi_{sT}-K)\,f_{t-s,T-s}(x;\,z-\xi_{sT})\,dx\,\nu_s(dz) \\
&= \int_K^{\infty}\left[\frac{t-s}{T-s}\,(z-\xi_{sT}) - M_{t-s,T-s}(K-\xi_{sT};\,z-\xi_{sT})\right]\nu_s(dz) \\
&\qquad + \int_K^{\infty}(\xi_{sT}-K)\left\{1 - F_{t-s,T-s}(K-\xi_{sT};\,z-\xi_{sT})\right\}\nu_s(dz) \\
&= \frac{T-t}{T-s}\,\xi_{sT} + \frac{t-s}{T-s}\,U_{sT} - K + (K-\xi_{sT})\int_K^{\infty}F_{t-s,T-s}(K-\xi_{sT};\,z-\xi_{sT})\,\nu_s(dz) \\
&\qquad - \int_K^{\infty}M_{t-s,T-s}(K-\xi_{sT};\,z-\xi_{sT})\,\nu_s(dz). \tag{6.18}
\end{align*}
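The functions (6.12) and (6.13) are straightforward to implement. A useful numerical check is the bridge-pinning limit $y\to z$, where $F_{tT}(y;z)\to 1$ and $M_{tT}(y;z)\to (t/T)z$, together with monotonicity of $F_{tT}$ in $y$; all values below are illustrative.

```python
import math

def Phi(x):
    return 0.5*(1.0 + math.erf(x/math.sqrt(2.0)))

c, T, t = 1.0, 1.0, 0.35

def F(y, z):
    # eq. (6.12): stable-1/2 bridge marginal distribution function
    s = math.sqrt(y*z*(z - y))
    return (Phi(c*(T*y - t*z)/s)
            + (1.0 - 2.0*t/T)*math.exp(2.0*c*c*t*(T - t)/z)
              * Phi(c*((2.0*t - T)*y - t*z)/s))

def M(y, z):
    # eq. (6.13): stable-1/2 bridge partial first moment
    s = math.sqrt(y*z*(z - y))
    return z*((t/T)*Phi(c*(T*y - t*z)/s)
              - math.exp(2.0*c*c*t*(T - t)/z)*Phi(c*((2.0*t - T)*y - t*z)/s))

z = 2.0
y = z*(1.0 - 1e-9)   # point just below the pinning value
```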

Suppose that the insurance company has limited its liability by entering into a stop-loss reinsurance contract. At time $s\in[0,T)$, the expected reinsurance recovery between times $t$ and $u$ is

\[
\mathrm{E}\big[(\xi_{uT}-K)^+ - (\xi_{tT}-K)^+\,\big|\,\mathcal{F}_s^\xi\big] = D_{su} - D_{st}, \tag{6.19}
\]
for $s<t<u\le T$. Proposition 6.5.1 can also be used to calculate the expected value

of $\xi_{tT}$ conditional on it exceeding a threshold. For a threshold $\theta>\xi_{sT}$, we find

\[
\mathrm{E}[\xi_{tT}\,|\,\xi_{sT},\,\xi_{tT}>\theta] = \frac{\frac{T-t}{T-s}\,\xi_{sT} + \frac{t-s}{T-s}\,U_{sT} - \int_{\xi_{sT}}^{\infty} M_{t-s,T-s}(\theta-\xi_{sT};\,z-\xi_{sT})\,\nu_s(dz)}{1 - \int_{\xi_{sT}}^{\infty} F_{t-s,T-s}(\theta-\xi_{sT};\,z-\xi_{sT})\,\nu_s(dz)}. \tag{6.20}
\]
Sometimes called the conditional value-at-risk (CVaR), this expected value is a coherent risk measure, and is a useful tool for risk management (see McNeil et al. [73]). Note

that CVaR is normally defined as an expected value conditional on a shortfall in profit. Since we are modelling loss, and not profit, the risk we most wish to manage is on the upside. Hence, conditioning on an exceedence is of greater interest.

6.6 Tail behaviour

In this section we consider how the probability of extreme events is affected by the paid-claims development. Suppose that the line of business we are modelling is exposed to rare but ‘catastrophic’ large loss events. In this case we assume that the a priori distribution of the ultimate loss has a heavy right tail. If a catastrophic loss could hit the insurance company at any time before run-off, then it is important that any conditional distribution for the ultimate loss retains the heavy-tail property. We shall see that in the stable-1/2 random bridge model, the conditional distributions are as heavy-tailed as the a priori distribution.

Assume that $U_T$ has a continuous density $p(z)$ which is positive for all $z$ above some threshold. Then the value of $U_T$ is unbounded in the sense that

\[
Q[U_T > x] > 0, \quad \text{for all } x\in\mathbb{R}. \tag{6.21}
\]
Define
\[
\mathrm{Tail}_t = \lim_{L\to\infty}\frac{Q[\xi_{TT}>L]}{Q[\xi_{TT}-\xi_{tT}>L\,|\,\xi_{tT}]}. \tag{6.22}
\]
If $\mathrm{Tail}_t = \infty$ then the tail of the future-payments distribution at time $t>0$ is not as heavy as the a priori tail. That is, a catastrophic loss at time $t$ is ‘smaller’ than a

catastrophic loss at time 0. If $\mathrm{Tail}_t = 0$ then the tail of the future-payments distribution is greater at time $t$ than a priori. If $0<\mathrm{Tail}_t<\infty$ then the tail is as heavy at time $t$ as a priori. Using L’Hôpital’s rule, we have
\begin{align*}
\mathrm{Tail}_t &= \lim_{L\to\infty}\frac{\psi_t(\mathbb{R};\xi_{tT})\int_L^{\infty}p(z)\,dz}{\int_{L+\xi_{tT}}^{\infty}\left(\frac{z}{z-\xi_{tT}}\right)^{3/2}\exp\left(-\frac{c^2}{2}\left(\frac{(T-t)^2}{z-\xi_{tT}}-\frac{T^2}{z}\right)\right)p(z)\,dz} \\
&= \lim_{L\to\infty}\frac{\psi_t(\mathbb{R};\xi_{tT})\,p(L)}{\left(\frac{L+\xi_{tT}}{L}\right)^{3/2}\exp\left(-\frac{c^2}{2}\left(\frac{(T-t)^2}{L}-\frac{T^2}{L+\xi_{tT}}\right)\right)p(L+\xi_{tT})} \\
&= \psi_t(\mathbb{R};\xi_{tT})\lim_{L\to\infty}\frac{p(L)}{p(L+\xi_{tT})}, \tag{6.23}
\end{align*}
for $t\in(0,T)$. Some examples follow:

• If $p(z)\propto\mathbf{1}_{\{z>0\}}\,\mathrm{e}^{-z}$ (exponential) then $\mathrm{Tail}_t = \psi_t(\mathbb{R};\xi_{tT})\,\mathrm{e}^{\xi_{tT}}$.

• If $p(z)\propto\mathbf{1}_{\{z>0\}}\,\mathrm{e}^{-z^2}$ (half-normal) then $\mathrm{Tail}_t = \psi_t(\mathbb{R};\xi_{tT})\,\mathrm{e}^{\xi_{tT}^2}\lim_{L\to\infty}\mathrm{e}^{2L\xi_{tT}} = \infty$.

• If $p(z)\propto\mathbf{1}_{\{z>0\}}\,z^{-3/2}\mathrm{e}^{-1/z}$ (Lévy) then $\mathrm{Tail}_t = \psi_t(\mathbb{R};\xi_{tT})$.

This property has an interesting parallel with the subexponential distributions. By definition, $X$ has a subexponential distribution if
\[
\lim_{L\to\infty}\frac{Q\left[\sum_{i=1}^n X_i > L\right]}{Q[X>L]} = n, \tag{6.24}
\]
where $\{X_i\}_{i=1}^n$ are independent copies of $X$ (see Embrechts et al. [32]). We note that
\[
\lim_{L\to\infty}\frac{Q[Z_T>L]}{Q[Z_T - Z_t > L\,|\,Z_t]} = \infty, \tag{6.25}
\]
for $\{Z_t\}$ a Brownian motion, a geometric Brownian motion or a gamma process. If $\{Z_t\}$ is a stable-1/2 subordinator, so that the increments of $\{Z_t\}$ are subexponential, then
\[
\lim_{L\to\infty}\frac{Q[Z_T>L]}{Q[Z_T - Z_t > L\,|\,Z_t]} = \frac{T}{T-t}. \tag{6.26}
\]

6.7 Generalized inverse-Gaussian prior

The generalized inverse-Gaussian (GIG) distribution is a three-parameter family of distributions on the positive half-line (see Jørgensen [57] or Eberlein & v. Hammerstein [29] for further details). In the present work it has so far appeared only tangentially. We now provide some of its properties. The density of the GIG distribution is
\[
f_{\mathrm{GIG}}(x;\lambda,\delta,\gamma) = \left(\frac{\gamma}{\delta}\right)^{\lambda} \frac{1}{2K_\lambda[\gamma\delta]}\, 1_{\{x>0\}}\, x^{\lambda-1} \exp\!\left(-\tfrac{1}{2}\big(\delta^2 x^{-1} + \gamma^2 x\big)\right). \tag{6.27}
\]
Here $K_\nu[z]$ is the modified Bessel function seen in Section 2.7. The permitted parameter values are
\[
\delta \ge 0,\ \gamma > 0, \quad \text{if } \lambda > 0, \tag{6.28}
\]
\[
\delta > 0,\ \gamma > 0, \quad \text{if } \lambda = 0, \tag{6.29}
\]
\[
\delta > 0,\ \gamma \ge 0, \quad \text{if } \lambda < 0. \tag{6.30}
\]
For $\lambda > 0$, taking the limit $\delta \to 0^+$ yields the gamma distribution. For $\lambda < 0$, taking the limit $\gamma \to 0^+$ yields the reciprocal-gamma distribution; this includes the Lévy distribution for $\lambda = -1/2$ (recall that the Lévy distribution is the increment distribution of stable-1/2 subordinators). The case $\lambda = -1/2$ and $\gamma > 0$ corresponds to the inverse-Gaussian (IG) distribution. If $X$ has density (6.27) then the moment $\mu_k = E[X^k]$ is given by
\[
\mu_k = \frac{K_{\lambda+k}[\gamma\delta]}{K_{\lambda}[\gamma\delta]}\left(\frac{\delta}{\gamma}\right)^{k} \quad \text{for } \lambda\in\mathbb{R},\ \delta>0,\ \gamma>0, \tag{6.31}
\]
\[
\mu_k = \begin{cases} \dfrac{\Gamma[\lambda+k]}{\Gamma[\lambda]}\left(\dfrac{2}{\gamma^2}\right)^{k} & k > -\lambda \\ \infty & k \le -\lambda \end{cases} \quad \text{for } \lambda>0,\ \delta=0,\ \gamma>0, \tag{6.32}
\]
\[
\mu_k = \begin{cases} \dfrac{\Gamma[-\lambda-k]}{\Gamma[-\lambda]}\left(\dfrac{\delta^2}{2}\right)^{k} & k < -\lambda \\ \infty & k \ge -\lambda \end{cases} \quad \text{for } \lambda<0,\ \delta>0,\ \gamma=0. \tag{6.33}
\]
For convenience, we recall some facts about the IG process that appeared in Section 2.10.1. The IG process is a Lévy process with increment density
\[
q_t(x) = 1_{\{x>0\}}\, \frac{ct}{\sqrt{2\pi}\, x^{3/2}} \exp\!\left(-\frac{\gamma^2}{2x}\left(x - \frac{ct}{\gamma}\right)^{2}\right). \tag{6.34}
\]
We see that $q_t(x) = f_{\mathrm{GIG}}(x; -\tfrac{1}{2}, ct, \gamma)$. The $k$th moment of $q_t(x)$ is
\[
m^{(k)}_t = \sqrt{\frac{2}{\pi}}\, \gamma\, e^{\gamma c t} \left(\frac{ct}{\gamma}\right)^{k+\frac{1}{2}} K_{k-\frac{1}{2}}[\gamma c t], \tag{6.35}
\]
for $k > 0$.
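As a numerical sanity check on the formulae above (not part of the thesis; the sketch below assumes NumPy and SciPy are available, and the parameter values are chosen arbitrarily for illustration), one can integrate the GIG density (6.27), compare with the moment formula (6.31), and confirm that the IG moments (6.35) coincide with the GIG moments at λ = −1/2:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

def f_gig(x, lam, delta, gamma):
    # GIG density, eq. (6.27)
    norm = (gamma / delta) ** lam / (2.0 * kv(lam, gamma * delta))
    return norm * x ** (lam - 1.0) * np.exp(-0.5 * (delta ** 2 / x + gamma ** 2 * x))

def gig_moment(k, lam, delta, gamma):
    # k-th moment, eq. (6.31), valid for delta > 0, gamma > 0
    return kv(lam + k, gamma * delta) / kv(lam, gamma * delta) * (delta / gamma) ** k

def ig_moment(k, c, t, gamma):
    # k-th moment of the IG increment density q_t, eq. (6.35)
    return (np.sqrt(2.0 / np.pi) * gamma * np.exp(gamma * c * t)
            * (c * t / gamma) ** (k + 0.5) * kv(k - 0.5, gamma * c * t))

lam, delta, gamma = 1.5, 2.0, 1.3
mass = quad(lambda x: f_gig(x, lam, delta, gamma), 0.0, np.inf)[0]   # should be 1
m1 = quad(lambda x: x * f_gig(x, lam, delta, gamma), 0.0, np.inf)[0]

c, t = 1.1, 0.7
err_ig = abs(ig_moment(2, c, t, gamma) - gig_moment(2, -0.5, c * t, gamma))
```

The last line checks numerically that $q_t = f_{\mathrm{GIG}}(\,\cdot\,; -1/2, ct, \gamma)$, so that (6.35) is a special case of (6.31).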

6.7.1 GIG terminal distribution

We shall see that the GIG distributions constitute a natural class of a priori distributions for the ultimate loss. With $\gamma > 0$ and $c > 0$ fixed, we examine some properties of a paid-claims process $\{\xi_{tT}\}$ with time-$T$ density $f_{\mathrm{GIG}}(z;\lambda,cT,\gamma)$. The transition law is
\[
Q[\xi_{tT} \in \mathrm{d}y \mid \xi_{sT} = x] = \frac{\psi_t(\mathbb{R}; y)}{\psi_s(\mathbb{R}; x)}\, f_{t-s}(y-x)\,\mathrm{d}y, \tag{6.36}
\]
\[
Q[\xi_{TT} \in \mathrm{d}y \mid \xi_{sT} = x] = \frac{\psi_s(\mathrm{d}y; x)}{\psi_s(\mathbb{R}; x)}, \tag{6.37}
\]
where
\[
\psi_0(\mathrm{d}z; \xi) = f_{\mathrm{GIG}}(z;\lambda,cT,\gamma)\,\mathrm{d}z, \tag{6.38}
\]
\[
\psi_t(\mathrm{d}z;\xi) = \left(1-\tfrac{t}{T}\right) 1_{\{z>\xi\}}\, \frac{\exp\!\left(-\frac{c^2}{2}\left[\frac{(T-t)^2}{z-\xi}-\frac{T^2}{z}\right]\right)}{(1-\xi/z)^{3/2}}\, f_{\mathrm{GIG}}(z;\lambda,cT,\gamma)\,\mathrm{d}z. \tag{6.39}
\]
Writing
\[
\kappa = \left(\frac{\gamma}{cT}\right)^{\lambda} \frac{1}{2K_\lambda[\gamma c T]}, \tag{6.40}
\]
we have
\[
\psi_t(\mathbb{R};y) = \kappa\left(1-\tfrac{t}{T}\right) e^{-\frac{1}{2}\gamma^2 y} \int_y^\infty \frac{e^{-\frac{c^2(T-t)^2}{2(z-y)}-\frac{1}{2}\gamma^2(z-y)}}{(z-y)^{3/2}}\, z^{\lambda+\frac12}\,\mathrm{d}z
= \kappa\left(1-\tfrac{t}{T}\right) e^{-\frac{1}{2}\gamma^2 y} \int_0^\infty \frac{e^{-\frac{c^2(T-t)^2}{2z}-\frac{1}{2}\gamma^2 z}}{z^{3/2}}\, (z+y)^{\lambda+\frac12}\,\mathrm{d}z
= \frac{\kappa\sqrt{2\pi}}{cT}\, e^{-\frac{1}{2}\gamma^2 y-\gamma c(T-t)} \int_0^\infty (z+y)^{\lambda+\frac12}\, q_{T-t}(z)\,\mathrm{d}z. \tag{6.41}
\]

Given $\xi_{tT} = y$, the best-estimate ultimate loss is given by
\[
U_{tT} = \psi_t(\mathbb{R};y)^{-1}\int_y^\infty z\,\psi_t(\mathrm{d}z;y) = \frac{\int_0^\infty (z+y)^{\lambda+\frac32}\, q_{T-t}(z)\,\mathrm{d}z}{\int_0^\infty (z+y)^{\lambda+\frac12}\, q_{T-t}(z)\,\mathrm{d}z}. \tag{6.42}
\]

6.7.2 Case λ = −1/2

When $\lambda = -1/2$ we have
\[
\frac{\psi_t(\mathbb{R};y)}{\psi_s(\mathbb{R};x)}\, f_{t-s}(y-x) = 1_{\{y-x>0\}}\, \frac{c(t-s)}{\sqrt{2\pi}\,(y-x)^{3/2}} \exp\!\left(-\frac{\gamma^2}{2(y-x)}\left((y-x)-\frac{c(t-s)}{\gamma}\right)^{2}\right) = q_{t-s}(y-x). \tag{6.43}
\]
Hence $\{\xi_{tT}\}$ is an IG process. Note that in this case $\{\xi_{tT}\}$ has independent increments.

6.7.3 Case λ = n − 1/2

We now consider the case where $\lambda = n - \tfrac{1}{2}$, for $n \in \mathbb{N}_+$. For convenience we write
\[
q^{(k)}_t(x) = f_{\mathrm{GIG}}(x;\, k - \tfrac{1}{2},\, ct,\, \gamma). \tag{6.44}
\]
Hence $q^{(0)}_t(x) = q_t(x)$. The transition density of $\{\xi_{tT}\}$ is then
\[
\frac{\psi_t(\mathbb{R};y)}{\psi_s(\mathbb{R};x)}\, f_{t-s}(y-x) = q_{t-s}(y-x)\, \frac{\int_0^\infty (z+y)^n q_{T-t}(z)\,\mathrm{d}z}{\int_0^\infty (z+x)^n q_{T-s}(z)\,\mathrm{d}z}
= q_{t-s}(y-x)\, \frac{\sum_{k=0}^n \binom{n}{k}\, m^{(n-k)}_{T-t}\, y^k}{\sum_{k=0}^n \binom{n}{k}\, m^{(n-k)}_{T-s}\, x^k}. \tag{6.45}
\]
When $n = 1$ this is
\[
\frac{\psi_t(\mathbb{R};y)}{\psi_s(\mathbb{R};x)}\, f_{t-s}(y-x) = q_{t-s}(y-x)\, \frac{y + \frac{c}{\gamma}(T-t)}{x + \frac{c}{\gamma}(T-s)}
= \left(1 - \frac{c(t-s)}{\gamma x + c(T-s)}\right) q^{(0)}_{t-s}(y-x) + \frac{c(t-s)}{\gamma x + c(T-s)}\, q^{(1)}_{t-s}(y-x). \tag{6.46}
\]
Thus the increment density is a weighted sum of GIG densities. We shall now derive a weighted-sum representation for general $n$. We can write
\[
\int_0^\infty (z+y)^n q_{T-t}(z)\,\mathrm{d}z = \int_0^\infty \big((z+x)+(y-x)\big)^n q_{T-t}(z)\,\mathrm{d}z
= \sum_{k=0}^n \binom{n}{k} (y-x)^{n-k} \int_0^\infty (z+x)^k q_{T-t}(z)\,\mathrm{d}z
= \sum_{k=0}^n \binom{n}{k} (y-x)^{n-k} \sum_{j=0}^k \binom{k}{j}\, m^{(k-j)}_{T-t}\, x^j. \tag{6.47}
\]
Then we have
\[
\frac{\psi_t(\mathbb{R};y)}{\psi_s(\mathbb{R};x)}\, f_{t-s}(y-x) = q_{t-s}(y-x)\, \frac{\sum_{k=0}^n \binom{n}{k}(y-x)^{n-k}\sum_{j=0}^k \binom{k}{j}\, m^{(k-j)}_{T-t}\, x^j}{\sum_{k=0}^n \binom{n}{k}\, m^{(n-k)}_{T-s}\, x^k}. \tag{6.48}
\]
However, when $k \in \mathbb{N}_0$,
\[
\frac{z^k\, q_{t-s}(z)}{q^{(k)}_{t-s}(z)} = \frac{z^k\, f_{\mathrm{GIG}}(z;-1/2,c(t-s),\gamma)}{f_{\mathrm{GIG}}(z;k-1/2,c(t-s),\gamma)}
= \left(\frac{c(t-s)}{\gamma}\right)^{k} \frac{K_{k-1/2}[\gamma c(t-s)]}{K_{1/2}[\gamma c(t-s)]} = m^{(k)}_{t-s}. \tag{6.49}
\]
Hence we have
\[
(y-x)^{n-k}\, q_{t-s}(y-x) = m^{(n-k)}_{t-s}\, q^{(n-k)}_{t-s}(y-x). \tag{6.50}
\]
Using the identity (6.50), (6.48) can be expanded to obtain
\[
\frac{\psi_t(\mathbb{R};y)}{\psi_s(\mathbb{R};x)}\, f_{t-s}(y-x) = \sum_{k=0}^n w^{(k)}_{st}(x)\, q^{(n-k)}_{t-s}(y-x), \tag{6.51}
\]
where
\[
w^{(k)}_{st}(x) = \frac{\binom{n}{k}\, m^{(n-k)}_{t-s} \sum_{j=0}^k \binom{k}{j}\, m^{(k-j)}_{T-t}\, x^j}{\sum_{j=0}^n \binom{n}{j}\, m^{(n-j)}_{T-s}\, x^j}. \tag{6.52}
\]
Notice that $w^{(k)}_{st}(x)$ is a rational function whose denominator is a polynomial of order $n$, and whose numerator is a polynomial of order $k \le n$. Thus the transition probabilities of $\{\xi_{tT}\}$ depend on the first $n$ integer powers of the current value. The conditional law of the ultimate loss is
\[
\frac{\psi_s(\mathrm{d}y;\xi_{sT})}{\psi_s(\mathbb{R};\xi_{sT})} = \frac{y^n\, q^{(0)}_{T-s}(y-\xi_{sT})}{\sum_{k=0}^n \binom{n}{k}\, \xi_{sT}^k\, m^{(n-k)}_{T-s}}\,\mathrm{d}y. \tag{6.53}
\]
We can verify that $\sum_{k=0}^n w^{(k)}_{st}(x) = 1$ using the fact that IG densities are closed under convolution. We have
\[
q_{T-s}(z) = \int_0^z q_{T-t}(y)\, q_{t-s}(z-y)\,\mathrm{d}y, \quad \text{for } 0 \le s < t < T. \tag{6.54}
\]
For fixed $n \in \mathbb{N}_+$, we then have
\[
\sum_{k=0}^n \binom{n}{k}\, m^{(n-k)}_{T-s}\, x^k = \int_0^\infty (z+x)^n q_{T-s}(z)\,\mathrm{d}z
= \int_0^\infty \int_0^z (z+x)^n q_{T-t}(y)\, q_{t-s}(z-y)\,\mathrm{d}y\,\mathrm{d}z
= \int_0^\infty q_{T-t}(y) \int_y^\infty (z+x)^n q_{t-s}(z-y)\,\mathrm{d}z\,\mathrm{d}y
= \int_0^\infty q_{T-t}(y) \int_0^\infty (z+y+x)^n q_{t-s}(z)\,\mathrm{d}z\,\mathrm{d}y
= \int_0^\infty q_{T-t}(y) \sum_{k=0}^n \binom{n}{k}\, m^{(n-k)}_{t-s}\, (y+x)^k\,\mathrm{d}y
= \sum_{k=0}^n \binom{n}{k}\, m^{(n-k)}_{t-s} \int_0^\infty (y+x)^k q_{T-t}(y)\,\mathrm{d}y
= \sum_{k=0}^n \binom{n}{k}\, m^{(n-k)}_{t-s} \sum_{j=0}^k \binom{k}{j}\, m^{(k-j)}_{T-t}\, x^j, \tag{6.55}
\]
which gives
\[
\sum_{k=0}^n \frac{\binom{n}{k}\, m^{(n-k)}_{t-s} \sum_{j=0}^k \binom{k}{j}\, m^{(k-j)}_{T-t}\, x^j}{\sum_{j=0}^n \binom{n}{j}\, m^{(n-j)}_{T-s}\, x^j} = 1. \tag{6.56}
\]

6.7.4 Moments of the paid-claims process

The best-estimate ultimate loss simplifies to
\[
U_{tT} = \frac{\sum_{k=0}^{n+1}\binom{n+1}{k}\, m^{(n+1-k)}_{T-t}\, \xi_{tT}^k}{\sum_{k=0}^{n}\binom{n}{k}\, m^{(n-k)}_{T-t}\, \xi_{tT}^k}. \tag{6.57}
\]
For example, when $n = 1$ we obtain
\[
U_{tT} = \frac{c(T-t)\big(1+\gamma c(T-t)\big) + 2\gamma^2 c(T-t)\,\xi_{tT} + \gamma^3 \xi_{tT}^2}{\gamma^2 c(T-t) + \gamma^3 \xi_{tT}}. \tag{6.58}
\]
By similar calculations, we have
\[
E[U_T^m \mid \xi_{tT}] = \frac{\sum_{k=0}^{n+m}\binom{n+m}{k}\, m^{(n+m-k)}_{T-t}\, \xi_{tT}^k}{\sum_{k=0}^{n}\binom{n}{k}\, m^{(n-k)}_{T-t}\, \xi_{tT}^k} \quad \text{for } m \in \mathbb{N}_+, \tag{6.59}
\]
and
\[
E\!\left[e^{\frac{1}{2}\alpha^2 U_T} \,\middle|\, \xi_{tT}\right] = \frac{\sum_{k=0}^{n}\binom{n}{k}\, \bar{m}^{(n-k)}_{T-t}\, \xi_{tT}^k}{\sum_{k=0}^{n}\binom{n}{k}\, m^{(n-k)}_{T-t}\, \xi_{tT}^k}\, \exp\!\left(\tfrac{1}{2}\alpha^2 \xi_{tT} - c(T-t)(\bar{\gamma}-\gamma)\right), \tag{6.60}
\]
for $0 < \alpha < \gamma$, where $\bar{\gamma} = \sqrt{\gamma^2 - \alpha^2}$, and $\bar{m}^{(k)}_t$ is the $k$th moment of the IG distribution with parameters $\delta = ct$ and $\gamma = \bar{\gamma}$.
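The closed form (6.58) can be checked against the general ratio (6.57) numerically. The sketch below is illustrative only (it is not from the thesis; SciPy is assumed, and the parameter values are arbitrary): it evaluates the IG moments via (6.35) and confirms that the two expressions for $U_{tT}$ agree when $n = 1$.

```python
import math
from scipy.special import kv

def m(k, delta, gamma):
    # IG moment m^{(k)} of q_t with delta = c*t, eq. (6.35)
    return (math.sqrt(2.0 / math.pi) * gamma * math.exp(gamma * delta)
            * (delta / gamma) ** (k + 0.5) * float(kv(k - 0.5, gamma * delta)))

def best_estimate(n, xi, delta, gamma):
    # best-estimate ultimate loss, eq. (6.57), with delta = c*(T - t)
    num = sum(math.comb(n + 1, k) * m(n + 1 - k, delta, gamma) * xi ** k
              for k in range(n + 2))
    den = sum(math.comb(n, k) * m(n - k, delta, gamma) * xi ** k
              for k in range(n + 1))
    return num / den

c, gamma, T, t, xi = 1.2, 0.8, 5.0, 2.0, 3.0
d = c * (T - t)
u_general = best_estimate(1, xi, d, gamma)
# closed form (6.58) for n = 1
u_closed = (d * (1 + gamma * d) + 2 * gamma**2 * d * xi + gamma**3 * xi**2) \
    / (gamma**2 * d + gamma**3 * xi)
```

Note that $m^{(0)} = 1$, which the formula (6.35) reproduces since $K_{-1/2} = K_{1/2}$.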

6.8 Exposure adjustment

We have seen that
\[
E[\xi_{tT}] = \frac{t}{T}\, E[U_T]; \tag{6.61}
\]
thus in the model so far the development of the paid-claims process is expected to be linear. This is not always the case in practice. In some cases the marginal exposure is (strictly) decreasing as the development approaches run-off. This manifests itself as
\[
\frac{\partial^2}{\partial t^2} E[\xi_{tT}] < 0, \tag{6.62}
\]
for $t$ close to $T$. A straightforward method to adjust the development pattern is through a time change. We describe the marginal exposure of the insurer through time by a deterministic function $\varepsilon : [0,T] \to \mathbb{R}_+$. The total exposure of the insurer is
\[
\int_0^T \varepsilon(s)\,\mathrm{d}s. \tag{6.63}
\]
We define the increasing function $\tau(t)$ by
\[
\tau(t) = T\, \frac{\int_0^t \varepsilon(s)\,\mathrm{d}s}{\int_0^T \varepsilon(s)\,\mathrm{d}s}. \tag{6.64}
\]
By construction $\tau(0) = 0$ and $\tau(T) = T$. Now let $\tau(t)$ determine the operational time in the model. We define the time-changed paid-claims process $\{\xi^{\tau}_{tT}\}$ by
\[
\xi^{\tau}_{tT} = \xi(\tau(t), T), \tag{6.65}
\]
and set the reserving filtration to be the natural filtration of $\{\xi^{\tau}_{tT}\}$. Then we have
\[
E[\xi^{\tau}_{tT}] = \frac{\int_0^t \varepsilon(s)\,\mathrm{d}s}{\int_0^T \varepsilon(s)\,\mathrm{d}s}\, E[U_T], \tag{6.66}
\]
and
\[
\frac{\partial^2}{\partial t^2} E[\xi^{\tau}_{tT}] = \frac{E[U_T]}{\int_0^T \varepsilon(s)\,\mathrm{d}s}\, \varepsilon'(t). \tag{6.67}
\]

6.8.1 Craighead curve

Craighead [25] proposed fitting a Weibull distribution function to the development pattern of paid claims for forecasting the ultimate loss (see also Benjamin & Eagles [12]). In actuarial work, the Weibull distribution function is sometimes referred to as the 'Craighead curve'. To achieve a similar development pattern we can use the Weibull density as the marginal exposure:
\[
\varepsilon(t) = \frac{b}{a}\left(\frac{t}{a}\right)^{b-1} e^{-(t/a)^b}, \tag{6.68}
\]
for $a, b > 0$. Then the time change $\tau(t)$ is the renormalised, truncated Weibull distribution function
\[
\tau(t) = T\, \frac{1 - e^{-(t/a)^b}}{1 - e^{-(T/a)^b}}. \tag{6.69}
\]
See Figure 6.1 for plots of this function. When $b \le 1$, $\tau'(t)$ is decreasing. Under such a time change, the marginal exposure is decreasing for all $t \in [0,T]$. When $b > 1$, $\tau'(t)$ achieves its maximum at
\[
t^* = a\left(\frac{b-1}{b}\right)^{1/b}, \tag{6.70}
\]
and $\tau'(t)$ is decreasing for $t \ge t^*$. Thus if $T > t^*$ then the marginal exposure is decreasing for $t \in [t^*, T]$. If $T \le t^*$ then the marginal exposure is increasing for $t \in [0, T]$.

Figure 6.1: Plots of the truncated Weibull time change for various parameters, and with T = 1.

The expected paid-claims development of the model will have the same profile as $\tau(t)$ (scaled by $E[U_T]$). Hence, under any one of the above time changes, when $t$ is close to $T$ the marginal exposure falls (i.e. $\frac{\partial^2}{\partial t^2} E[\xi^{\tau}_{tT}] < 0$).

6.9 Simulation

We consider the simulation of sample paths of a stable-1/2 random bridge. First, we can generalise (2.103) to
\[
\xi\!\left(\tfrac{s+t}{2}, T\right) \,\Big|\, \{\xi(s,T)=y,\ \xi(t,T)=z\} \overset{\text{law}}{=} y + (z-y)\, \frac{1}{2}\left(1 + \frac{Z}{\sqrt{c^2(t-s)^2/(z-y) + Z^2}}\right), \tag{6.71}
\]
where $0 \le s < t \le T$ and $Z$ is a standard normal random variable. A sample path can then be constructed on a dyadic partition of $[0, T]$ as follows:

1. Generate the variate $\hat{\xi}(T,T)$ with law $\nu$, and set $\hat{\xi}(0,T) = 0$.

2. Generate $\hat{\xi}(\tfrac{T}{2}, T)$ from $\hat{\xi}(0,T)$ and $\hat{\xi}(T,T)$ using identity (6.71).

3. Generate $\hat{\xi}(\tfrac{T}{4}, T)$ from $\hat{\xi}(0,T)$ and $\hat{\xi}(\tfrac{T}{2},T)$, and then generate $\hat{\xi}(\tfrac{3T}{4}, T)$ from $\hat{\xi}(\tfrac{T}{2},T)$ and $\hat{\xi}(T,T)$.

4. Generate $\hat{\xi}(\tfrac{T}{8},T)$, $\hat{\xi}(\tfrac{3T}{8},T)$, $\hat{\xi}(\tfrac{5T}{8},T)$, $\hat{\xi}(\tfrac{7T}{8},T)$.

5. Then iterate.
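The five steps above can be sketched in a few lines of Python. This is an illustrative implementation only (not from the thesis): the terminal draw uses, for concreteness, the GPD prior of Figure 6.2 via inverse-transform sampling, and the number of bisection levels is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)
c, T = 3.0, 1.0

# step 1: terminal value from the a priori law; here the GPD of Figure 6.2
# (scale 1, location 1, shape 1/4), sampled by inverting its distribution function
u = rng.uniform()
xi_T = 1.0 + 4.0 * ((1.0 - u) ** (-0.25) - 1.0)

path = {0.0: 0.0, T: xi_T}
for _ in range(6):                            # steps 2-5: iterate the bisection
    knots = sorted(path)
    for s, t in zip(knots[:-1], knots[1:]):
        y, z = path[s], path[t]
        Z = rng.standard_normal()
        # conditional midpoint law, eq. (6.71); the fraction lies in (0, 1)
        frac = 0.5 * (1.0 + Z / np.sqrt(c**2 * (t - s)**2 / (z - y) + Z**2))
        path[0.5 * (s + t)] = y + (z - y) * frac

times = sorted(path)
values = [path[s] for s in times]
```

Since the bisection fraction is strictly between 0 and 1, the simulated path is increasing, as a subordinator bridge should be.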

See Figure 6.2 for example simulations.

Figure 6.2: Simulations of the paid-claims process $\{\xi_{tT}\}$ (bottom line) and the best-estimate process $\{U_{tT}\}$ (top line). Various values of the activity parameter $c$ are used: (a) $c = 3$, (b) $c = 5$, (c) $c = 7$, (d) $c = 10$. A priori, the ultimate loss $U_T$ has a generalized Pareto distribution (GPD) with density
\[
f_{\mathrm{GPD}}(x) = 1_{\{x>1\}}\left(1 + \frac{x-1}{4}\right)^{-5}.
\]
(This is the GPD with scale parameter $\sigma = 1$, location parameter $\mu = 1$, and shape parameter $\xi = 1/4$.)

6.10 Multiple lines of business

We shall generalise the paid-claims model to achieve two goals. The first is to allow more than one paid-claims process, and allow dependence between the processes. The second is to keep the dimensionality of the calculations low with a view to practicality. The following results can be applied to the modelling of multiple lines of business or multiple origin years when there is dependence between loss processes.

6.10.1 Two paid-claims processes

We consider a case with two paid-claims processes, but the results can be extended to higher dimensions. In what follows, we set $f^c_t(x) = f_t(x)$ as given by (6.1), and $f^c_{tT}(x) = f_{tT}(x)$ as given by (6.12); here we have introduced the superscript to emphasise the dependence on $c$. Let $\{S(t, T^*)\}$ be a stable-1/2 random bridge with terminal density $p(z) = \nu(\mathrm{d}z)/\mathrm{d}z$, and with activity parameter $c$. Fix a time $T < T^*$, then define two paid-claims processes by
\[
\xi^{(1)}_{tT} = S(t, T^*) \quad (0 \le t \le T), \tag{6.72}
\]
\[
\xi^{(2)}_{tT} = k^2 S(\lambda t + T, T^*) - k^2 S(T, T^*) \quad (0 \le t \le T), \tag{6.73}
\]
where $\lambda = T^*/T - 1$, and $k^2 = c_2/(c\lambda)$ for some $c_2 > 0$. The density of $\xi^{(1)}_{TT}$ is given by
\[
p^{(1)}(x) = f^c_T(x) \int_0^\infty \frac{f^c_{T^*-T}(z-x)}{f^c_{T^*}(z)}\, p(z)\,\mathrm{d}z = \int_0^\infty f^c_{T,T^*}(x;z)\, p(z)\,\mathrm{d}z, \tag{6.74}
\]
and the density of $\xi^{(2)}_{TT}$ is
\[
p^{(2)}(x) = k^{-2} f^c_{T^*-T}(k^{-2}x) \int_0^\infty \frac{f^c_T(z - k^{-2}x)}{f^c_{T^*}(z)}\, p(z)\,\mathrm{d}z
= k^{-4} f^c_{T^*-T}(k^{-2}x) \int_0^\infty \frac{f^c_T(k^{-2}z - k^{-2}x)}{f^c_{T^*}(k^{-2}z)}\, p(k^{-2}z)\,\mathrm{d}z \tag{6.75}
\]
\[
= \int_0^\infty k^{-4} f^c_{T^*-T,T^*}(k^{-2}x;\, k^{-2}z)\, p(k^{-2}z)\,\mathrm{d}z \tag{6.76}
\]
\[
= \int_0^\infty k^{-2} f^{c_2}_{T,\lambda^{-1}T^*}(x;\, z)\, p(k^{-2}z)\,\mathrm{d}z. \tag{6.77}
\]
Here (6.75) follows after a change of variable, (6.76) follows from the definition of $f_{tT}(y;z)$ given in (2.21), and (6.77) follows from the functional form of $f_{tT}(y;z)$ for stable-1/2 bridges given in (2.99). It follows from Corollary 3.8.4 that $\{\xi^{(1)}_{tT}\}$ is a stable-1/2 random bridge with law $\mathrm{LRB}_C([0,T], \{f^c_t\}, p^{(1)}(z)\mathrm{d}z)$, and (combined with the scaling property of stable-1/2 bridges) $\{\xi^{(2)}_{tT}\}$ has law $\mathrm{LRB}_C([0,T], \{f^{c_2}_t\}, p^{(2)}(z)\mathrm{d}z)$. The conditional joint density of $(\xi^{(1)}_{tT}, k^{-2}\xi^{(2)}_{tT})$ is
\[
Q\big[\xi^{(1)}_{tT}\in \mathrm{d}y_1,\, k^{-2}\xi^{(2)}_{tT}\in \mathrm{d}y_2 \mid \xi^{(1)}_{sT}=x_1,\, k^{-2}\xi^{(2)}_{sT}=x_2\big] =
\left[\int_{z=x_1+x_2}^\infty \frac{f^c_{T^*-(1+\lambda)t}\big(z-(y_1+y_2)\big)}{f^c_{T^*-(1+\lambda)s}\big(z-(x_1+x_2)\big)}\, p(z)\,\mathrm{d}z\right] f^c_{t-s}(y_1-x_1)\,\mathrm{d}y_1\, f^c_{\lambda(t-s)}(y_2-x_2)\,\mathrm{d}y_2, \tag{6.78}
\]
for $0 \le s < t \le T$. Then we have
\[
Q\big[\xi^{(1)}_{tT} + k^{-2}\xi^{(2)}_{tT}\in \mathrm{d}y \mid \xi^{(1)}_{sT}=x_1,\, k^{-2}\xi^{(2)}_{sT}=x_2\big]
= \left[\int_{z=x_1+x_2}^\infty \frac{f^c_{T^*-(1+\lambda)t}(z-y)}{f^c_{T^*-(1+\lambda)s}\big(z-(x_1+x_2)\big)}\, p(z)\,\mathrm{d}z\right] f^c_{(1+\lambda)(t-s)}\big(y-(x_1+x_2)\big)\,\mathrm{d}y
= \int_{z=x_1+x_2}^\infty f^c_{(1+\lambda)(t-s),\,T^*-(1+\lambda)s}\big(y-(x_1+x_2);\, z-(x_1+x_2)\big)\, p(z)\,\mathrm{d}z\;\mathrm{d}y; \tag{6.79}
\]
and, given $\xi^{(1)}_{sT} = x_1$ and $k^{-2}\xi^{(2)}_{sT} = x_2$, the marginal density of $\xi^{(1)}_{tT}$ is
\[
y_1 \mapsto \int_{z=x_1+x_2}^\infty f^c_{t-s,\,T^*-(1+\lambda)s}\big(y_1-x_1;\, z-(x_1+x_2)\big)\, p(z)\,\mathrm{d}z, \tag{6.80}
\]
and the marginal density of $k^{-2}\xi^{(2)}_{tT}$ is
\[
y_2 \mapsto \int_{z=x_1+x_2}^\infty f^c_{\lambda(t-s),\,T^*-(1+\lambda)s}\big(y_2-x_2;\, z-(x_1+x_2)\big)\, p(z)\,\mathrm{d}z. \tag{6.81}
\]

6.10.2 Correlation

The a priori correlation between the terminal values is well defined when the second moment of $\nu$ is finite. The correlation can be used as a tool in the calibration of the model. Assuming that $E[S(T^*,T^*)^2] < \infty$, the correlation is defined as
\[
\frac{E\big[\xi^{(1)}_{TT}\xi^{(2)}_{TT}\big] - E\big[\xi^{(1)}_{TT}\big]E\big[\xi^{(2)}_{TT}\big]}
{\sqrt{E\big[(\xi^{(1)}_{TT})^2\big] - E\big[\xi^{(1)}_{TT}\big]^2}\; \sqrt{E\big[(\xi^{(2)}_{TT})^2\big] - E\big[\xi^{(2)}_{TT}\big]^2}}. \tag{6.82}
\]
We shall calculate each of the components of (6.82) separately. First, we have
\[
E\big[\xi^{(1)}_{TT}\big] = E[S(T,T^*)] = \frac{T}{T^*}\, E[S(T^*,T^*)]. \tag{6.83}
\]
Noting that
\[
\xi^{(2)}_{TT} = k^2\big(S(T^*,T^*) - S(T,T^*)\big) \overset{\text{law}}{=} k^2 S(T^*-T, T^*), \tag{6.84}
\]
we have
\[
E\big[\xi^{(2)}_{TT}\big] = k^2 E[S(T^*-T,T^*)] = k^2\left(1 - \frac{T}{T^*}\right) E[S(T^*,T^*)]. \tag{6.85}
\]
The second moments of $\xi^{(1)}_{TT}$ and $\xi^{(2)}_{TT}$ follow from (6.7), and are given by
\[
E\big[(\xi^{(1)}_{TT})^2\big] = \frac{T}{T^*}\, E\big[S(T^*,T^*)^2\big] - (T^*-T)\, C_{T^*}, \tag{6.86}
\]
and
\[
E\big[(\xi^{(2)}_{TT})^2\big] = k^4\left(1 - \frac{T}{T^*}\right) E\big[S(T^*,T^*)^2\big] - k^4\, T\, C_{T^*}, \tag{6.87}
\]
where
\[
C_{T^*} = c\sqrt{2\pi} \int_0^\infty z^{3/2}\, e^{c^2T^{*2}/(2z)}\, \Phi\big[-cT^* z^{-1/2}\big]\, p(z)\,\mathrm{d}z. \tag{6.88}
\]
The final term required for working out the correlation is the cross moment. This is
\[
E\big[\xi^{(1)}_{TT}\xi^{(2)}_{TT}\big] = k^2 E\big[S(T,T^*)\big(S(T^*,T^*)-S(T,T^*)\big)\big]
= k^2 E\big[S(T,T^*)S(T^*,T^*)\big] - k^2 E\big[S(T,T^*)^2\big]. \tag{6.89}
\]
The first term on the right of (6.89) is
\[
k^2 E\big[S(T,T^*)S(T^*,T^*)\big] = k^2\int_0^\infty\!\!\int_0^\infty x y\, \frac{f^c_T(x)\, f^c_{T^*-T}(y-x)}{f^c_{T^*}(y)}\,\mathrm{d}x\; p(y)\,\mathrm{d}y
= k^2\int_0^\infty\!\!\int_0^\infty x y\, f^c_{T,T^*}(x;y)\,\mathrm{d}x\; p(y)\,\mathrm{d}y
= k^2\,\frac{T}{T^*}\int_0^\infty y^2 p(y)\,\mathrm{d}y
= k^2\,\frac{T}{T^*}\, E\big[S(T^*,T^*)^2\big]. \tag{6.90}
\]
The second term on the right of (6.89) is given by (6.86). Hence we have
\[
E\big[\xi^{(1)}_{TT}\xi^{(2)}_{TT}\big] = k^2 (T^*-T)\, C_{T^*}. \tag{6.91}
\]
The expression for the correlation follows by putting (6.83), (6.85), (6.86), (6.87), and (6.91) together.

6.10.3 Ultimate loss estimation

We now estimate the terminal values of the paid-claims processes. At time $t < T$, the best-estimate ultimate loss of $\{\xi^{(1)}_{tT}\}$ (or, indeed, $\{\xi^{(2)}_{tT}\}$) depends on the two values $\xi^{(1)}_{tT}$ and $\xi^{(2)}_{tT}$. The best-estimate ultimate loss of $\{\xi^{(1)}_{tT}\}$ is
\[
U^{(1)}_{tT} = E\big[\xi^{(1)}_{TT} \mid \xi^{(1)}_{tT}=x_1,\, \xi^{(2)}_{tT}=x_2\big]
= E\big[S(T,T^*) \mid S(t,T^*)=x_1,\, S(T+\lambda t,T^*)-S(T,T^*)=k^{-2}x_2\big]
= E\big[S(T+\lambda t,T^*) \mid S(t,T^*)=x_1,\, S(T+\lambda t,T^*)-S(T,T^*)=k^{-2}x_2\big] - k^{-2}x_2
\]
\[
= E\big[S(T+\lambda t,T^*) \mid S(t,T^*)=x_1,\, S((1+\lambda)t,T^*)-S(t,T^*)=k^{-2}x_2\big] - k^{-2}x_2 \tag{6.92}
\]
\[
= E\big[S(T+\lambda t,T^*) \mid S((1+\lambda)t,T^*)=x_1+k^{-2}x_2\big] - k^{-2}x_2 \tag{6.93}
\]
\[
= \frac{T-t}{T^*-(1+\lambda)t}\Big( E\big[S(T^*,T^*) \mid S((1+\lambda)t,T^*)=x_1+k^{-2}x_2\big] - k^{-2}x_2 \Big)
+ \frac{T^*-(T+\lambda t)}{T^*-(1+\lambda)t}\, x_1. \tag{6.94}
\]
Equation (6.92) holds since reordering the increments of an LRB yields an LRB with the same law, (6.93) follows from the Markov property of LRBs, and (6.94) follows from (6.6). We also have
\[
E\big[S(T^*,T^*) \mid S((1+\lambda)t,T^*)=x_1+k^{-2}x_2\big] = \int_0^\infty z\, p_t(z)\,\mathrm{d}z, \tag{6.95}
\]
where
\[
p_t(z) = K\, 1_{\{z>x_1+k^{-2}x_2\}} \left(1 - \frac{x_1+k^{-2}x_2}{z}\right)^{-3/2}
\exp\!\left(-\frac{c^2}{2}\left[\frac{\big(T^*-(1+\lambda)t\big)^2}{z-(x_1+k^{-2}x_2)} - \frac{T^{*2}}{z}\right]\right) p(z), \tag{6.96}
\]
for $K$ a constant chosen to normalise the density $p_t(z)$. Similarly, the best-estimate ultimate loss of $\{\xi^{(2)}_{tT}\}$ is
\[
U^{(2)}_{tT} = k^2\, \frac{T^*-(T+\lambda t)}{T^*-(1+\lambda)t}\Big( E\big[S(T^*,T^*) \mid S((1+\lambda)t,T^*)=x_1+k^{-2}x_2\big] - x_1 \Big)
+ \frac{T-t}{T^*-(1+\lambda)t}\, x_2. \tag{6.97}
\]
To compute both $U^{(1)}_{tT}$ and $U^{(2)}_{tT}$ we need to perform at most two one-dimensional integrals (the integral we need is (6.95), but we note that $p_t(z)$ includes a normalising constant $K$, which is found by evaluating a second integral). We are saved the complication of performing double integrals.

To extend these results to higher dimensions we can split the 'master' process $\{S_{tT}\}$ into more than two subprocesses. Regardless of the number of subprocesses (i.e. paid-claims processes), all of the best-estimate ultimate losses can be computed by performing at most two one-dimensional integrals. This makes such a multivariate model highly computationally efficient. It should be noted that in the case where all the subprocesses are identical in law (in the example above, this is the case when $k = 1$), many of the results can be extended to an arbitrary master LRB using Corollary 3.8.4. Importantly, the computational efficiency can be achieved for an arbitrary LRB.

Chapter 7

Cauchy information

In this chapter, as in the last, we examine the random bridges of a stable process. We look here at LRBs based on a driftless Cauchy process, which is strictly stable with index α = 1. We call these LRBs Cauchy random bridges (CRBs). The third moment of the (one-dimensional) marginal distribution of a CRB does not exist for t ∈ (0, T), regardless of the choice of terminal distribution, since the corresponding moments of a Cauchy bridge do not exist. Integrability is obtainable for moments of order less than three. Hence, conditioning a Cauchy process on its time-T distribution can moderate its wild behaviour. This moderation cannot be as severe as exponentially dampening the process's Lévy density, which would produce a tempered-stable process. We provide expressions for the first two moments of a CRB (when they exist). These follow from the analysis of Cauchy bridges in Section 2.9. We then outline an algorithm for the simulation of CRBs. The algorithm uses the representation of a Cauchy process as a Brownian motion time-changed by a stable-1/2 subordinator. Returning to information-based pricing, we consider the valuation of a binary bond when the information process is a CRB. We price a call option on the binary bond price from first principles. This proves tractable, and a closed-form expression is derived for the call price. Plots of simulations of bond price processes are provided at the end, which show the influence of the activity parameter c.


7.1 Cauchy random bridges

As in (2.121), we define
\[
f_t(y) = \frac{ct}{\pi(y^2+c^2t^2)}, \tag{7.1}
\]
which is the density of a driftless Cauchy process. Let $\{Z_{tT}\}$ be a random bridge of a driftless Cauchy process such that the marginal law of $Z_{TT}$ is $\nu$. That is, $\{Z_{tT}\}$ is a CRB with activity parameter $c$. Since $f_T(x)$ is bounded and positive on the whole real line, $Z_{TT}$ can be restricted to take values in any Borel set of $\mathbb{R}$. For fixed $k > 0$, let $\{\bar{Z}_{t,kT}\}$ be a CRB with activity parameter $c$ such that $\bar{Z}_{kT,kT} \overset{\text{law}}{=} kZ_{TT}$, i.e.
\[
Q[\bar{Z}_{kT,kT} \le z] = Q[kZ_{TT} \le z] = \nu\big((-\infty, z/k]\big). \tag{7.2}
\]
Then it follows from Proposition 2.4.1 that
\[
\{k^{-1}\bar{Z}_{kt,kT}\}_{0\le t\le T} \overset{\text{law}}{=} \{Z_{tT}\}_{0\le t\le T}. \tag{7.3}
\]
If $E[|Z_{TT}|] < \infty$ then we have
\[
E[Z_{tT} \mid Z_{sT}=x] = \frac{T-t}{T-s}\,x + \frac{t-s}{T-s}\, E[Z_{TT} \mid Z_{sT}=x], \tag{7.4}
\]
and if $E[Z^2_{TT}] < \infty$ then, using (2.130),
\[
E[Z^2_{tT} \mid Z_{sT}=x] = \frac{T-t}{T-s}\,x^2 + \frac{t-s}{T-s}\Big(E[Z^2_{TT} \mid Z_{sT}=x] + c^2(T-s)(T-t)\Big), \tag{7.5}
\]
for $0 \le s < t < T$. Writing
\[
\nu_s(\mathrm{d}z) = \frac{\psi_s(\mathrm{d}z; Z_{sT})}{\psi_s(\mathbb{R}; Z_{sT})}
= \frac{\dfrac{z^2+c^2T^2}{(z-Z_{sT})^2+c^2(T-s)^2}\,\nu(\mathrm{d}z)}{\displaystyle\int_{-\infty}^{\infty} \dfrac{z^2+c^2T^2}{(z-Z_{sT})^2+c^2(T-s)^2}\,\nu(\mathrm{d}z)}, \tag{7.7}
\]
the $Z_{sT}$-conditional marginal density and distribution functions of $Z_{tT}$ can be written
\[
y \mapsto \int_{-\infty}^{\infty} f_{t-s,T-s}(y-Z_{sT};\, z-Z_{sT})\,\nu_s(\mathrm{d}z), \tag{7.8}
\]
and
\[
y \mapsto \int_{-\infty}^{\infty} F_{t-s,T-s}(y-Z_{sT};\, z-Z_{sT})\,\nu_s(\mathrm{d}z), \tag{7.9}
\]
respectively. Here $f_{tT}(y;z)$ is given by (2.124), and $F_{tT}(y;z)$ is given in the statement of Proposition 2.9.1.

7.1.1 Example

Consider the case where $Z_{TT}$ can only take values in the countable set $\{a_i\}_{i=-\infty}^{\infty}$. Writing $q(i) = \nu(\{a_i\})$, the transition law of $\{Z_{tT}\}$ is given by
\[
Q[Z_{tT}\in \mathrm{d}y \mid Z_{sT}=x] = \frac{c(T-t)(t-s)}{\pi(T-s)}\, \frac{\sum_i w_{tT}(i)\, q(i)}{\sum_i w_{sT}(i)\, q(i)}\, \big((y-x)^2 + c^2(t-s)^2\big)^{-1}\,\mathrm{d}y, \tag{7.10}
\]
and
\[
Q[Z_{TT}=a_j \mid Z_{sT}=x] = \frac{w_{sT}(j)\, q(j)}{\sum_i w_{sT}(i)\, q(i)}, \tag{7.11}
\]
where
\[
w_{tT}(i) = \frac{a_i^2+c^2T^2}{(a_i-y)^2+c^2(T-t)^2}, \tag{7.12}
\]
with $y$ the value of the process at the time to which the weight refers (so $w_{sT}(i)$ is evaluated at $y = x$).

7.2 Simulation

The Cauchy process is a Brownian motion time-changed by an independent stable-1/2 subordinator. For $\{W(t)\}$ a Brownian motion and $\{S_t\}$ a stable-1/2 subordinator, the joint density of the pair $(S_T, W(S_T))$ is
\[
(y,z) \mapsto 1_{\{y>0\}}\, \frac{cT}{\sqrt{2\pi}\, y^{3/2}}\, e^{-c^2T^2/(2y)}\; \frac{e^{-z^2/(2y)}}{\sqrt{2\pi y}}. \tag{7.13}
\]

Then, conditional on $W(S_T) = z$, the density of $S_T$ is
\[
y \mapsto 1_{\{y>0\}}\, \frac{1}{2}\big(z^2 + c^2T^2\big)\, \frac{e^{-(z^2+c^2T^2)/(2y)}}{y^2}. \tag{7.14}
\]

Integrating (7.14) yields
\[
Q[S_T \le y \mid W(S_T) = z] = \exp\!\left(-\frac{z^2+c^2T^2}{2y}\right) \quad \text{for } y > 0. \tag{7.15}
\]

Given $W(S_T)$, $S_T$ is the reciprocal of an exponential random variable, and we have
\[
S_T \overset{\text{law}}{=} -\frac{z^2+c^2T^2}{2\log U}, \tag{7.16}
\]
where $U$ is a standard uniform random variable. Conditioned on $W(S_T)$, the time-change process $\{S_t\}_{0\le t\le T}$ is a stable-1/2 random bridge.

An algorithm for the simulation of stable-1/2 random bridges appeared in Section 6.9. If we can generate a value of $Z_{TT}$ from its a priori distribution then we can construct a CRB using the following algorithm:

1. Generate a standard uniform random variate $U$, and then set $S_T = -(Z^2_{TT} + c^2T^2)/(2\log U)$.

2. Generate a path $\{S_t\}_{0\le t\le T}$ of a stable-1/2 random bridge terminating at value $S_T$.

3. Generate a path $\{\beta(t)\}_{0\le t\le 1}$ of a Brownian bridge with terminal value 0.

4. Return $\{(S_t/S_T)\, Z_{TT} + \sqrt{S_T}\, \beta(S_t/S_T)\}_{0\le t\le T}$.

Schehr & Majumdar [84] describe a general algorithm for the simulation of a symmetric stable bridge using Monte Carlo methods. This method could be adapted for the simulation of Cauchy random bridge paths.
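The four steps above can be combined with the dyadic bridge construction of Section 6.9 into a short script. The following is an illustrative sketch only (not from the thesis): the terminal value $Z_{TT}$ is fixed rather than drawn from a law $\nu$, the number of refinement levels is arbitrary, and the Brownian bridge is sampled only on the operational-time grid $S_t/S_T$.

```python
import numpy as np

rng = np.random.default_rng(7)
c, T = 1.0, 1.0
Z_T = 0.8                      # a draw of Z_TT from nu (fixed here for illustration)

# step 1: conditional terminal value of the subordinator, eq. (7.16)
U = rng.uniform()
S_T = -(Z_T**2 + c**2 * T**2) / (2.0 * np.log(U))

# step 2: stable-1/2 random bridge from 0 to S_T, by dyadic bisection (Section 6.9)
path = {0.0: 0.0, T: S_T}
for _ in range(7):
    knots = sorted(path)
    for s, t in zip(knots[:-1], knots[1:]):
        y, z = path[s], path[t]
        Z = rng.standard_normal()
        frac = 0.5 * (1.0 + Z / np.sqrt(c**2 * (t - s)**2 / (z - y) + Z**2))
        path[0.5 * (s + t)] = y + (z - y) * frac
S = np.array([path[s] for s in sorted(path)])

# steps 3-4: Brownian bridge on [0, 1] ending at 0, evaluated on the grid S/S_T
g = S / S_T
W = np.concatenate([[0.0],
                    np.cumsum(rng.standard_normal(len(g) - 1) * np.sqrt(np.diff(g)))])
beta = W - g * W[-1]                         # pin the endpoint to 0
Z_path = g * Z_T + np.sqrt(S_T) * beta       # the CRB sample path
```

By construction the path starts at 0 and terminates at the prescribed value $Z_{TT}$.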

7.3 Cauchy binary bond

We revisit the information-based binary bond model, this time taking the information process $\{\xi_{tT}\}$ to be a CRB. The X-factor $X_T$ is the redemption amount of a credit-risky, zero-coupon, binary bond, and we set $\xi_{TT} = X_T$. A priori, we have $Q[X_T = k_0] = p$ and $Q[X_T = k_1] = 1 - p$, where $k_0 < k_1$. From the Bayes formula (7.11) we obtain
\[
Q[\xi_{TT}=k_0 \mid \xi_{tT}=\xi] = \left(1 + \frac{(1-p)(c^2T^2+k_1^2)\big(c^2(T-t)^2+(k_0-\xi)^2\big)}{p\,(c^2T^2+k_0^2)\big(c^2(T-t)^2+(k_1-\xi)^2\big)}\right)^{-1}. \tag{7.17}
\]

For convenience, we define
\[
q_t(0) = Q[\xi_{TT}=k_0 \mid \xi_{tT}], \tag{7.18}
\]
and
\[
q_t(1) = Q[\xi_{TT}=k_1 \mid \xi_{tT}] = 1 - q_t(0). \tag{7.19}
\]
We define the function $\Lambda(t,x) : \mathbb{R}_+ \times \mathbb{R} \to \mathbb{R}$ by
\[
\Lambda(t,x) = P_{tT}\, E[X_T \mid \xi_{tT}=x]. \tag{7.20}
\]
Then the process $\{P_{0t}\Lambda(t,\xi_{tT})\}_{0\le t\le T}$ is a martingale, and the bond price process $\{B_{tT}\}$ is given by
\[
B_{tT} = \Lambda(t, \xi_{tT}). \tag{7.21}
\]

After a straightforward calculation, we find
\[
\Lambda(t,x) = P_{tT}\, \frac{\alpha_0 + \alpha_1 x + \alpha_2 x^2}{\beta_0 + \beta_1 x + \beta_2 x^2}, \tag{7.22}
\]
where
\[
\alpha_0 = p\,k_0(c^2T^2+k_0^2)\big(c^2(T-t)^2+k_1^2\big) + (1-p)\,k_1(c^2T^2+k_1^2)\big(c^2(T-t)^2+k_0^2\big),
\]
\[
\alpha_1 = -2k_0k_1\big(p\,(c^2T^2+k_0^2) + (1-p)(c^2T^2+k_1^2)\big), \tag{7.23}
\]
\[
\alpha_2 = p\,k_0(c^2T^2+k_0^2) + (1-p)\,k_1(c^2T^2+k_1^2),
\]
and
\[
\beta_0 = p\,(c^2T^2+k_0^2)\big(c^2(T-t)^2+k_1^2\big) + (1-p)(c^2T^2+k_1^2)\big(c^2(T-t)^2+k_0^2\big),
\]
\[
\beta_1 = -2\big(k_1\,p\,(c^2T^2+k_0^2) + k_0(1-p)(c^2T^2+k_1^2)\big), \tag{7.24}
\]
\[
\beta_2 = p\,(c^2T^2+k_0^2) + (1-p)(c^2T^2+k_1^2).
\]

7.3.1 Call option price

We shall calculate the price of a call option on the bond price. We approach the problem from first principles, as opposed to referring to the general formula derived in Section 4.3.

At time s < t, the price of a call option on the bond price $B_{tT}$ is
\[
C_{st} = P_{st}\, E\big[(B_{tT} - K)^+ \mid \xi_{sT}\big] = P_{st}\, E\big[(\Lambda(t,\xi_{tT}) - K)^+ \mid \xi_{sT}\big], \tag{7.25}
\]
where $K$ is the strike price. Defining the set $A_t$ by
\[
A_t = \{x : \Lambda(t,x) > K\}, \tag{7.26}
\]
the call option price is
\[
C_{st} = P_{st}\, E\big[1_{\{\xi_{tT}\in A_t\}}\big(\Lambda(t,\xi_{tT}) - K\big) \mid \xi_{sT}\big]
= P_{st}\, E\big[1_{\{\xi_{tT}\in A_t\}}\Lambda(t,\xi_{tT}) \mid \xi_{sT}\big] - P_{st} K\, Q[\xi_{tT}\in A_t \mid \xi_{sT}]. \tag{7.27}
\]
Note that when discussing the price of an option on an asset in Section 4.3, we used the notation $B_t$ instead of $A_t$. We have changed to $A_t$ here to avoid confusion with the bond price $B_{tT}$. We proceed by finding an explicit expression for the set $A_t$. For fixed $t$, classical calculus reveals that $\Lambda(t,x)$ is maximised and minimised at the points
\[
\bar{x} = \tfrac{1}{2}\Big(k_0 + k_1 + \sqrt{(k_0-k_1)^2 + 4c^2(T-t)^2}\Big), \tag{7.28}
\]
and
\[
\underline{x} = \tfrac{1}{2}\Big(k_0 + k_1 - \sqrt{(k_0-k_1)^2 + 4c^2(T-t)^2}\Big), \tag{7.29}
\]
respectively. As a function of $x$, $\Lambda(t,x)$ is decreasing on the intervals $(-\infty, \underline{x}]$ and $[\bar{x}, \infty)$, and is increasing on the interval $[\underline{x}, \bar{x}]$. If $K \ge \Lambda(t,\bar{x})$ then $C_{st} = 0$, and if $K \le \Lambda(t,\underline{x})$ then $C_{st} = B_{sT} - P_{st}K$. See Figure 7.1 for an example plot of $x \mapsto \Lambda(t,x)$.
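The rational form (7.22)-(7.24) and the stationary points (7.28)-(7.29) can be checked numerically against a direct Bayes computation via (7.17). The sketch below is illustrative (not from the thesis); all parameter values are chosen arbitrarily.

```python
import math

P_tT, c, T, t, p = 0.95, 2.0, 1.0, 0.4, 0.3
k0, k1 = 0.0, 1.0
A = c**2 * T**2
D = c**2 * (T - t)**2

def q0(x):
    # a posteriori probability that the bond pays k0, eq. (7.17)
    num = (1 - p) * (A + k1**2) * (D + (k0 - x)**2)
    den = p * (A + k0**2) * (D + (k1 - x)**2)
    return 1.0 / (1.0 + num / den)

# coefficients (7.23)-(7.24)
a0 = p*k0*(A + k0**2)*(D + k1**2) + (1-p)*k1*(A + k1**2)*(D + k0**2)
a1 = -2*k0*k1*(p*(A + k0**2) + (1-p)*(A + k1**2))
a2 = p*k0*(A + k0**2) + (1-p)*k1*(A + k1**2)
b0 = p*(A + k0**2)*(D + k1**2) + (1-p)*(A + k1**2)*(D + k0**2)
b1 = -2*(k1*p*(A + k0**2) + k0*(1-p)*(A + k1**2))
b2 = p*(A + k0**2) + (1-p)*(A + k1**2)

def Lam(x):
    # eq. (7.22)
    return P_tT * (a0 + a1*x + a2*x*x) / (b0 + b1*x + b2*x*x)

diffs = [abs(Lam(x) - P_tT * (k0*q0(x) + k1*(1 - q0(x))))
         for x in (-3.0, -0.5, 0.0, 0.4, 1.0, 2.5)]

root = math.sqrt((k0 - k1)**2 + 4*c**2*(T - t)**2)
x_hi = 0.5 * (k0 + k1 + root)   # maximiser, eq. (7.28)
x_lo = 0.5 * (k0 + k1 - root)   # minimiser, eq. (7.29)
```

The two evaluations of $\Lambda$ agree to machine precision, and $\Lambda$ indeed has a local maximum at $\bar{x}$ and a local minimum at $\underline{x}$.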

Figure 7.1: An example plot of Λ(t, x) as a function of x.

We define
\[
K^* = \lim_{x\to\pm\infty} \Lambda(t,x) = P_{tT}\,\frac{\alpha_2}{\beta_2}. \tag{7.30}
\]
When $K \in \big(\Lambda(t,\underline{x}), \Lambda(t,\bar{x})\big) \setminus \{K^*\}$, the equation
\[
\Lambda(t,x) = K \tag{7.31}
\]
is quadratic in $x$, and has the two real solutions
\[
x^{\pm} = \frac{-(P_{tT}\alpha_1 - K\beta_1) \pm \sqrt{(P_{tT}\alpha_1 - K\beta_1)^2 - 4(P_{tT}\alpha_2 - K\beta_2)(P_{tT}\alpha_0 - K\beta_0)}}{2(P_{tT}\alpha_2 - K\beta_2)}. \tag{7.32}
\]
When $K > K^*$ we have $x^+ < x^-$, and when $K < K^*$ we have $x^+ > x^-$. When $K = K^*$ the equation (7.31) degenerates to a linear one, and we set
\[
x^+ = -\frac{\alpha_0\beta_2 - \alpha_2\beta_0}{\alpha_1\beta_2 - \alpha_2\beta_1}, \tag{7.33}
\]
\[
x^- = +\infty; \tag{7.34}
\]
in this way $x^+$ is the only solution of equation (7.31). Then we can write
\[
A_t = \begin{cases} (x^+, x^-) & \text{for } K \ge K^*, \\ \mathbb{R}\setminus[x^-, x^+] & \text{for } K < K^*. \end{cases} \tag{7.35}
\]
The probability in (7.27) is then
\[
Q[\xi_{tT}\in A_t \mid \xi_{sT}] = 1_{\{K<K^*\}} + \sum_{i=0}^{1} q_s(i)\big(F^-(i) - F^+(i)\big), \tag{7.36}
\]
where
\[
F^{\pm}(i) = F_{t-s,T-s}(x^{\pm}-\xi_{sT};\, k_i-\xi_{sT}). \tag{7.37}
\]
The expectation in (7.27) is
\[
E\big[1_{\{\xi_{tT}\in A_t\}}\Lambda(t,\xi_{tT}) \mid \xi_{sT}\big] = \int_{y\in A_t} \Lambda(t,y)\, Q[\xi_{tT}\in \mathrm{d}y \mid \xi_{sT}]
= P_{tT}\int_{y\in A_t}\int_{z=-\infty}^{\infty} z\, \frac{\psi_t(\mathrm{d}z;y)}{\psi_t(\mathbb{R};y)}\, \frac{\psi_t(\mathbb{R};y)}{\psi_s(\mathbb{R};\xi_{sT})}\, f_{t-s}(y-\xi_{sT})\,\mathrm{d}y
= P_{tT}\int_{z=-\infty}^{\infty} z \int_{y\in A_t} f_{t-s,T-s}(y-\xi_{sT};\, z-\xi_{sT})\,\mathrm{d}y\;\nu_s(\mathrm{d}z). \tag{7.38}
\]
Noting that
\[
\int_{A_t} f_{t-s,T-s}(y-\xi_{sT};\, z-\xi_{sT})\,\mathrm{d}y =
1_{\{K<K^*\}} + F_{t-s,T-s}(x^- - \xi_{sT};\, z-\xi_{sT}) - F_{t-s,T-s}(x^+ - \xi_{sT};\, z-\xi_{sT}), \tag{7.39}
\]
we have
\[
E\big[1_{\{\xi_{tT}\in A_t\}}\Lambda(t,\xi_{tT}) \mid \xi_{sT}\big] =
1_{\{K<K^*\}}\, P_{st}^{-1}\Lambda(s,\xi_{sT}) + P_{tT}\sum_{i=0}^{1} k_i\, q_s(i)\big(F^-(i) - F^+(i)\big). \tag{7.40}
\]
Putting (7.27), (7.36), and (7.40) together, the call price is
\[
C_{st} = 1_{\{K<K^*\}}\big(B_{sT} - P_{st}K\big) + \sum_{i=0}^{1} q_s(i)\big(P_{sT}\, k_i - P_{st}\, K\big)\big(F^-(i) - F^+(i)\big). \tag{7.41}
\]
See Figures 7.2 and 7.3 for example simulations.

Figure 7.2: Simulations of Cauchy information processes and bond price processes in cases where the bond defaults. Panels: (a) information process, c = 10⁻³; (b) bond price, c = 10⁻³; (c) information process, c = 1; (d) bond price, c = 1; (e) information process, c = 10; (f) bond price, c = 10. The other parameter values are fixed as rt = 0, p = 0.3, and T = 1.

Figure 7.3: Simulations of Cauchy information processes and bond price processes in cases where the bond does not default. Panels: (a) information process, c = 10⁻³; (b) bond price, c = 10⁻³; (c) information process, c = 1; (d) bond price, c = 1; (e) information process, c = 10; (f) bond price, c = 10. The other parameter values are fixed as rt = 0, p = 0.3, and T = 1.

Chapter 8

Normal inverse-Gaussian information

This chapter is a mirror of Chapter 5, only we now consider the NIG process instead of the VG process. Since the VG process and the NIG process are similar, we find that the NIG random bridge (NIGRB) is very similar to the VG random bridge. To avoid too much repetition, we briefly list the contents of this chapter:

• It is shown that an asymmetric NIGRB is a scaled standard NIGRB.
• An algorithm is provided for the simulation of NIGRB sample paths.
• An exponential NIG equity model is shown to be a special case of a single-factor information-based model when the information process is an NIGRB.
• The price of a binary bond is derived in a single-factor information-based model.
• Simulations of binary bond price processes are plotted.

8.1 Normal inverse-Gaussian random bridge

Let $\{Y_{tT}\}$ be a standard NIGRB, so in the transition law (3.14) we take $f_t(x) = f_t^{(\alpha)}(x)$, which we defined in (2.145) to be
\[
f_t^{(\alpha)}(x) = \frac{\alpha t\, e^{\alpha^2 t}}{\pi\sqrt{\alpha^2 t^2 + x^2}}\, K_1\!\left[\alpha\sqrt{\alpha^2 t^2 + x^2}\right]. \tag{8.1}
\]
We assume that $Y_{TT}$ has marginal law $\nu$. Since $0 < f_T^{(\alpha)}(x) < \infty$ for all $x \in \mathbb{R}$, $Y_{TT}$ can be restricted to take values in any Borel set of $\mathbb{R}$.

8.1.1 Example

Fix $c > 0$ and $\theta \in \mathbb{R}$, and define
\[
k^2 = c^{-1}\sqrt{\theta^2+c^2}, \qquad \alpha^2 = c\sqrt{\theta^2+c^2}. \tag{8.2}
\]
Suppose that $\{Y_{tT}\}$ is a standard NIGRB with parameter $\alpha$ and terminal density
\[
f_T^{(c,\theta,k)}(z) = \frac{cT}{k\pi}\, \frac{\sqrt{c^2k^2+\theta^2}}{\sqrt{c^2k^2T^2+z^2}}\, e^{c^2T+\theta z/k^2}\, K_1\!\left[k^{-2}\sqrt{(\theta^2+c^2k^2)(c^2k^2T^2+z^2)}\right]. \tag{8.3}
\]
Equation (8.3) is the time-$T$ density of an NIG process with parameters $c$, $\theta$, and $k$. From (2.146) and (2.147), we can write
\[
\nu(\mathrm{d}z) = e^{(c^2-\alpha^2)T + \theta z/k^2}\, f_T^{(\alpha)}(z)\,\mathrm{d}z. \tag{8.4}
\]
We then have
\[
\psi_t(\mathrm{d}z;\xi) = \frac{f^{(\alpha)}_{T-t}(z-\xi)}{f^{(\alpha)}_T(z)}\, f_T^{(c,\theta,k)}(z)\,\mathrm{d}z
= e^{(c^2-\alpha^2)t + \theta\xi/k^2}\, f^{(c,\theta,k)}_{T-t}(z-\xi)\,\mathrm{d}z, \tag{8.5}
\]
for $t \in [0, T)$. For $0 \le s < t \le T$, the transition law is then
\[
Q[Y_{tT}\in \mathrm{d}y \mid Y_{sT}=x] = \frac{\psi_t(\mathbb{R};y)}{\psi_s(\mathbb{R};x)}\, f^{(\alpha)}_{t-s}(y-x)\,\mathrm{d}y
= e^{(c^2-\alpha^2)(t-s)+\theta(y-x)/k^2}\, f^{(\alpha)}_{t-s}(y-x)\,\mathrm{d}y
= f^{(c,\theta,k)}_{t-s}(y-x)\,\mathrm{d}y. \tag{8.6}
\]
Similarly
\[
Q[Y_{TT}\in \mathrm{d}y \mid Y_{sT}=x] = f^{(c,\theta,k)}_{T-s}(y-x)\,\mathrm{d}y. \tag{8.7}
\]
Over the time period $[0, T]$, $\{Y_{tT}\}$ is a two-parameter NIG process (since $k$ is a function of $\theta$ and $c$). The scaled LRB $\{\sigma Y_{tT}/k\}$ is a three-parameter NIG process with parameters $c$, $\theta$, and $\sigma$.

8.2 Simulation

Recall that a standard NIG process can be written as $\{W(X_t)\}$, where $\{W_t\}$ is a standard Brownian motion, and $\{X_t\}$ is an independent IG process. The joint density of $(X_T, W(X_T))$ is
\[
(y,z) \mapsto 1_{\{y>0\}}\, \frac{cT\, e^{c^2 T}}{\sqrt{2\pi}\, y^{3/2}} \exp\!\left(-\frac{c^2}{2}\big(T^2 y^{-1} + y\big)\right) \frac{e^{-z^2/(2y)}}{\sqrt{2\pi y}}. \tag{8.8}
\]

Given $W(X_T) = z$, up to a normalisation the density of $X_T$ is
\[
y \mapsto 1_{\{y>0\}}\, y^{-2} \exp\!\left(-\frac{1}{2}\left(\frac{c^2T^2+z^2}{y} + c^2 y\right)\right). \tag{8.9}
\]

Hence the conditional distribution of $X_T$ is generalized inverse-Gaussian with density $f_{\mathrm{GIG}}(y; -1, \sqrt{c^2T^2+z^2}, c)$, where
\[
f_{\mathrm{GIG}}(x;\lambda,\delta,\gamma) = \left(\frac{\gamma}{\delta}\right)^{\lambda} \frac{1}{2K_\lambda[\gamma\delta]}\, 1_{\{x>0\}}\, x^{\lambda-1} \exp\!\left(-\tfrac{1}{2}\big(\delta^2 x^{-1} + \gamma^2 x\big)\right). \tag{8.10}
\]
Furthermore, given $W(X_T)$, the time change $\{X_t\}_{0\le t\le T}$ is a stable-1/2 random bridge (since an IG random bridge is a stable-1/2 random bridge). We can simulate a sample path of $\{Y_{tT}\}$ by the following algorithm:

1. Generate a variate $Y_{TT}$ from the probability law $\nu$.

2. Generate a variate $X_{TT}$ from the GIG distribution with parameters $\lambda = -1$, $\delta = \sqrt{c^2T^2 + Y_{TT}^2}$, and $\gamma = c$.

3. Simulate a path $\{X_{tT}\}$ of a stable-1/2 random bridge with parameter $c$, terminating at value $X_{TT}$ at time $T$.

4. Simulate a standard Brownian bridge $\{\beta(t)\}_{0\le t\le 1}$, terminating at value 0 at time 1.

5. Return $\{X_{tT}\, Y_{TT}/X_{TT} + \sqrt{X_{TT}}\, \beta(X_{tT}/X_{TT})\}_{0\le t\le T}$.

We note that Ribeiro & Webber [80] apply related bridge-based constructions to the simulation of the NIG process.

8.3 NIG equity model

In a similar way to the VG equity model of Section 5.3, we can use a three-parameter NIG process to model the log-returns of a single stock (see Ribeiro & Webber [80]). It follows from the asymptotic behaviour of the Bessel function Kν[z] (see (2.70)) that the exponential moment of the NIG distribution exists only if c2 > σ2 +2θ, which we assume holds. We model the stock price as

S = S exp [rt + L + wt] (t 0), (8.11) t 0 t ≥ where, under the risk-neutral measure, L is an NIG process with parameter set { t} c,θ,σ , r> 0 is the constant rate of interest, and { } w = c√c2 σ2 2θ c2. (8.12) − − − Since E[eLt ] = exp( wt), the drift term wt ensures that the discounted stock price − process e−rtS is a martingale under the risk-neutral measure. { t} We now show how to recover this model using the information-based approach when the information process is a symmetric NIGRB. We let XT be a NIG random variable (c,θ,k) with density fT (x), where k = k(c, θ) is given by (8.2), and set

h(x) = exp(rT + σx/k + wT ). (8.13)

Let the information process $\{\xi_{tT}\}$ be a standard NIGRB with parameter $\alpha$ given by (8.2), such that $\xi_{TT}=X_T$. Then $\{\sigma\xi_{tT}/k\}$ is an NIG process with parameter set $\{c,\theta,\sigma\}$. Furthermore,
\[ \begin{aligned} H_{tT} &= \exp(-r(T-t))\,\mathbb{E}[h(X_T)\,|\,\xi_{tT}] \\ &= \exp(rt+\sigma\xi_{tT}/k+wT)\;\mathbb{E}\!\left[e^{\sigma(\xi_{TT}-\xi_{tT})/k}\right] \\ &= \exp(rt+\sigma\xi_{tT}/k+wt), \end{aligned} \tag{8.14} \]
for $t\in[0,T]$. Hence we have $\{H_{tT}\}_{0\leq t\leq T}\overset{\mathrm{law}}{=}\{S_t\}_{0\leq t\leq T}$.

8.4 NIG binary bond

We price a binary bond in an NIG information model, including a rate parameter $\sigma>0$. We let $X_T$ be an X-factor such that $\mathbb{Q}[X_T=0]=p$ and $\mathbb{Q}[X_T=\sigma]=1-p$. Setting $h(x)=x/\sigma$, we suppose that $H_T=h(X_T)$ is the redemption amount of a credit-risky, zero-coupon bond. We assume that the market filtration is generated by a standard NIGRB $\{\xi_{tT}\}$ such that $\xi_{TT}=X_T$. The time-$t$ price of the bond is then given by
\[ B_{tT} = P_{tT}\,\mathbb{E}[h(X_T)\,|\,\xi_{tT}] \tag{8.15} \]
\[ \phantom{B_{tT}} = P_{tT}\,\mathbb{Q}[X_T=\sigma\,|\,\xi_{tT}]. \tag{8.16} \]
From Section 4.4, we have
\[ B_{tT} = P_{tT}\left[1+\frac{f^{(\alpha)}_T(\sigma)\,f^{(\alpha)}_{T-t}(-\xi_{tT})}{f^{(\alpha)}_T(0)\,f^{(\alpha)}_{T-t}(\sigma-\xi_{tT})}\,\frac{p}{1-p}\right]^{-1} = P_{tT}\left[1+\gamma\,\sqrt{\frac{\alpha^2(T-t)^2+(\sigma-\xi_{tT})^2}{\alpha^2(T-t)^2+\xi_{tT}^2}}\;\frac{K_1\!\left[\alpha\sqrt{\alpha^2(T-t)^2+\xi_{tT}^2}\right]}{K_1\!\left[\alpha\sqrt{\alpha^2(T-t)^2+(\sigma-\xi_{tT})^2}\right]}\right]^{-1}, \tag{8.17} \]
where
\[ \gamma = \frac{p}{1-p}\,\sqrt{\frac{\alpha^2T^2}{\alpha^2T^2+\sigma^2}}\;\frac{K_1\!\left[\alpha\sqrt{\alpha^2T^2+\sigma^2}\right]}{K_1\!\left[\alpha^2T\right]}. \tag{8.18} \]
See Figures 8.1, 8.2, and 8.3 for example simulations.
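As a numerical illustration of (8.17)-(8.18), the sketch below evaluates the bond price with $K_1$ computed by direct quadrature of the integral representation $K_1(x)=\int_0^\infty e^{-x\cosh t}\cosh t\,\mathrm{d}t$. The parameter values are hypothetical and $P_{tT}$ is set to 1. At $t=0$ with $\xi_{0T}=0$ the Bessel factors cancel and the price reduces to $(1-p)P_{0T}$, which the final line checks.

```python
import math

def bessel_k1(x, n=20000, t_max=12.0):
    # K_1(x) = int_0^inf exp(-x cosh t) cosh t dt, by the trapezoidal rule
    h = t_max / n
    total = 0.5 * (math.exp(-x) + math.exp(-x * math.cosh(t_max)) * math.cosh(t_max))
    for i in range(1, n):
        u = i * h
        total += math.exp(-x * math.cosh(u)) * math.cosh(u)
    return total * h

def binary_bond_price(t, xi, p, alpha, sigma, T, P_tT=1.0):
    # eqs. (8.17)-(8.18): time-t price of the NIG binary bond
    a2 = alpha**2 * (T - t)**2
    den = math.sqrt(a2 + xi**2)
    num = math.sqrt(a2 + (sigma - xi)**2)
    gamma = (p / (1 - p)) * (alpha * T / math.sqrt(alpha**2 * T**2 + sigma**2)) \
        * bessel_k1(alpha * math.sqrt(alpha**2 * T**2 + sigma**2)) / bessel_k1(alpha**2 * T)
    ratio = gamma * (num / den) * bessel_k1(alpha * den) / bessel_k1(alpha * num)
    return P_tT / (1.0 + ratio)

# sanity check: at t = 0 with xi = 0 the price is (1 - p) P_0T
b0 = binary_bond_price(0.0, 0.0, p=0.3, alpha=2.0, sigma=1.0, T=1.0)
```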

[Six figure panels appear here: (a) information process, $c=10$; (b) bond price, $c=10$; (c) information process, $c=25$; (d) bond price, $c=25$; (e) information process, $c=100$; (f) bond price, $c=100$.]

Figure 8.1: Simulations of NIG information processes and bond price processes in cases where the bond defaults. Various values of the parameter $c$ are used. The other parameter values are fixed as $r_t=0$, $p=0.3$, $T=1$, and $\sigma=1$.

[Six figure panels appear here: (a) information process, $c=1$; (b) bond price, $c=1$; (c) information process, $c=10$; (d) bond price, $c=10$; (e) information process, $c=50$; (f) bond price, $c=50$.]

Figure 8.2: Simulations of NIG information processes and bond price processes in cases where the bond does not default. Various values of the parameter $c$ are used. The other parameter values are fixed as $r_t=0$, $p=0.3$, $T=1$, and $\sigma=1$.

[Six figure panels appear here: (a) information process, $\sigma=0.1$; (b) bond price, $\sigma=0.1$; (c) information process, $\sigma=1.0$; (d) bond price, $\sigma=1$; (e) information process, $\sigma=10$; (f) bond price, $\sigma=10$.]

Figure 8.3: Simulations of NIG information processes and bond price processes in cases where the bond defaults. Various values of the information rate parameter $\sigma$ are used. The other parameter values are fixed as $r_t=0$, $p=0.3$, $T=1$, and $c=10$.

Chapter 9

Poisson information

In this chapter we present some properties of a discrete random bridge, namely the Poisson random bridge (PRB). PRBs are counting processes, and we shall see that they are Poisson processes with a state-dependent intensity. For a PRB, the distribution function of the waiting time to the next jump takes an interesting form: it can be written as an elementary transformation of the probability generating function of the PRB's terminal value. By differentiating this distribution function, we find that the intensity of the PRB is a linear function of the PRB's expected terminal value.

We consider the case when the terminal value of a PRB has a negative binomial distribution, and find that all of the increments of this PRB have a negative binomial distribution. The intensity of this process is a linear function of the current value. When a jump occurs, the intensity jumps; in the periods between jumps, the intensity decreases smoothly. The process therefore exhibits a kind of contagion, in the sense that every jump instantaneously increases the probability of another jump occurring in any fixed future time interval.

A PRB whose terminal value has a log-series distribution also has negative binomial transition probabilities. It can be considered a limiting case of a PRB with a negative binomial terminal distribution.

A mixed Poisson distribution is a Poisson distribution with a mixed mean; thus, the mixed Poisson distribution is a distribution on the non-negative integers. When the terminal distribution of a PRB is mixed Poisson, every increment of the PRB has a mixed Poisson distribution. Such a process is called a mixed Poisson process. If the gamma distribution is used to mix the Poisson mean, the negative binomial PRB is recovered. Since the log-series distribution is a distribution on the positive integers, it is not a mixed Poisson distribution. In fact, if the terminal distribution of a PRB takes values in a strict subset of $\mathbb{N}_0$, then it is not a mixed Poisson process. Hence, the class of PRBs does not coincide with the class of mixed Poisson processes.

The simulation of sample paths of a PRB is straightforward using the order-statistics property of the jump times of the process. We provide example simulations of a PRB together with its conditional expected terminal value and intensity.

Using a construction similar to that of the compound Poisson process, the compound Poisson random bridge is a PRB with random jump sizes. We shall derive an explicit expression for the marginal characteristic function of this process. Although not explored in the present work, these compound processes could be applied to problems in finance and insurance (as an alternative to the compound Poisson process).

At the end of this chapter, we provide an application of PRBs to finance. We consider the problem of pricing an nth-to-default credit swap. The buyer of such a contract pays a continuous premium in exchange for a lump-sum payment on the date of the nth default from a basket of credit-risky assets. Modelling the default times of the credit-risky assets as the jump times of a PRB, we are able to provide an explicit expression for the value of the swap, under the additional simplifying assumptions that (a) the interest rate is constant, and (b) premiums are paid continuously.

9.1 Poisson random bridge

Let $\{N_{tT}\}$ be an LRB with law $\mathrm{LRB}_D([0,T],\{Q_t\},P)$, where
\[ Q_t(k) = \frac{e^{-\lambda t}(\lambda t)^k}{k!}, \qquad k\in\mathbb{N}_0, \tag{9.1} \]
for some constant $\lambda>0$. We call $\{N_{tT}\}$ a Poisson random bridge. One can consider $\{N_{tT}\}$ to be a Poisson process conditioned to have the probability mass function $P:\mathbb{N}_0\to\mathbb{R}_+$ at time $T$. We assume that
\[ \mathbb{E}[N_{TT}] = \sum_k k\,P(k) < \infty. \tag{9.2} \]
We can use the results from Section 2.11 and Chapter 3 to derive various properties of $\{N_{tT}\}$. It follows from (3.27) that the $N_{sT}$-conditional probability mass function of $N_{TT}$ is
\[ P_s(k) = \frac{\dfrac{k!}{(k-N_{sT})!}\left(1-\dfrac{s}{T}\right)^{k}P(k)}{\displaystyle\sum_{j=N_{sT}}^{\infty}\dfrac{j!}{(j-N_{sT})!}\left(1-\dfrac{s}{T}\right)^{j}P(j)}. \tag{9.3} \]
Corollary 3.8.2 gives
\[ \mathbb{E}[N_{tT}\,|\,N_{sT}] = \frac{T-t}{T-s}\,N_{sT} + \frac{t-s}{T-s}\,\mathbb{E}[N_{TT}\,|\,N_{sT}]. \tag{9.4} \]
If $\mathbb{E}[N_{TT}^2]<\infty$, then conditioning on the terminal value $N_{TT}$ and recalling equation (2.158) gives
\[ \mathbb{E}\!\left[N_{tT}^2\right] = \mathbb{E}\!\left[\mathbb{E}\!\left[N_{tT}^2\,\middle|\,N_{TT}\right]\right] = \frac{t}{T}\left(1-\frac{t}{T}\right)\mathbb{E}[N_{TT}] + \frac{t^2}{T^2}\,\mathbb{E}\!\left[N_{TT}^2\right]. \tag{9.5} \]
This result can be generalised to the following:

Proposition 9.1.1. If $\sum_k k^2P(k)<\infty$, then
\[ \mathbb{E}\!\left[N_{tT}^2\,\middle|\,N_{sT}\right] = \left(\frac{T-t}{T-s}\right)^{\!2}N_{sT}^2 + \frac{t-s}{T-s}\,\frac{T-t}{T-s}\left(2\,\mathbb{E}[N_{TT}\,|\,N_{sT}]-1\right)N_{sT} + \frac{t-s}{T-s}\,\frac{T-t}{T-s}\,\mathbb{E}[N_{TT}\,|\,N_{sT}] + \left(\frac{t-s}{T-s}\right)^{\!2}\mathbb{E}\!\left[N_{TT}^2\,\middle|\,N_{sT}\right]. \tag{9.6} \]

Proof. Fix $s<T$ and define the process $\{\eta_{tT}\}_{s\leq t\leq T}$ by $\eta_{tT}=N_{tT}-N_{sT}$. It follows from the dynamic consistency property that, given $N_{sT}$, $\{\eta_{tT}\}$ is an LRB with terminal probability mass function $P^*(i)=P_s(i+N_{sT})$. With the understanding that $\mathbb{E}_s[A]=\mathbb{E}[A\,|\,N_{sT}]$, we have
\[ \begin{aligned} \mathbb{E}_s\!\left[N_{tT}^2\right] &= \mathbb{E}_s\!\left[(N_{sT}+\eta_{tT})^2\right] \\ &= N_{sT}^2 + 2N_{sT}\,\mathbb{E}_s[\eta_{tT}] + \mathbb{E}_s\!\left[\eta_{tT}^2\right] \\ &= N_{sT}^2 + 2N_{sT}\,\frac{t-s}{T-s}\,\mathbb{E}_s[\eta_{TT}] + \frac{t-s}{T-s}\,\frac{T-t}{T-s}\,\mathbb{E}_s[\eta_{TT}] + \left(\frac{t-s}{T-s}\right)^{\!2}\mathbb{E}_s\!\left[\eta_{TT}^2\right] \\ &= N_{sT}^2 + \left(2N_{sT}+\frac{T-t}{T-s}\right)\frac{t-s}{T-s}\left(\mathbb{E}_s[N_{TT}]-N_{sT}\right) + \left(\frac{t-s}{T-s}\right)^{\!2}\left(\mathbb{E}_s\!\left[N_{TT}^2\right]-2N_{sT}\,\mathbb{E}_s[N_{TT}]+N_{sT}^2\right), \end{aligned} \tag{9.7} \]
and collecting terms in $N_{sT}$, $\mathbb{E}_s[N_{TT}]$ and $\mathbb{E}_s[N_{TT}^2]$ gives (9.6).

Counting processes are characterised by the distribution of their jump times. We shall see that the distribution function of the ith jump of the PRB is an infinite sum of weighted beta probabilities. This is a consequence of the order statistics property of the jump times of Poisson processes. From this result it becomes apparent that the distribution of the next-jump time of a PRB can be expressed in terms of the conditional probability generating function of its terminal value. We shall then use the next-jump time distribution to write the intensity of the PRB in a form that highlights the impact of new information.

Denote the probability generating function of $P$ by $G_P(z)$, i.e.
\[ G_P(z) = \mathbb{E}\!\left[z^{N_{TT}}\right] = \sum_{k=0}^{\infty}z^k P(k). \tag{9.8} \]
Note that $G_P(z)<\infty$ for $0\leq z\leq 1$, so the definition of $G_P$ is non-degenerate. Indeed, we have
\[ P(k) = \frac{G_P^{(k)}(0)}{k!}. \tag{9.9} \]
Furthermore, since we have assumed that $\mathbb{E}[N_{TT}]<\infty$, it follows that $G_P'(z)$ exists for $0\leq z\leq 1$, with
\[ G_P'(1) = \lim_{z\to 1^-}G_P'(z) = \sum_{k=0}^{\infty}k\,P(k) = \mathbb{E}[N_{TT}]. \tag{9.10} \]
The probability generating function of $N_{tT}$ is
\[ \mathbb{E}\!\left[z^{N_{tT}}\right] = \mathbb{E}\!\left[\mathbb{E}\!\left[z^{N_{tT}}\,\middle|\,N_{TT}\right]\right] = \mathbb{E}\!\left[\left(1-\frac{t}{T}+z\,\frac{t}{T}\right)^{\!N_{TT}}\right] = G_P\!\left(1-\frac{t}{T}+z\,\frac{t}{T}\right), \tag{9.11} \]
for $z\leq 1$. Define the $i$th jump time of $\{N_{tT}\}$ by
\[ T_i = \inf\{t\in[0,T] : N_{tT}=i\}. \tag{9.12} \]

We adopt the convention that $\inf\emptyset=\infty$; if $N_{TT}<i$ then $T_i=\infty$. From equation (2.163), the distribution function of the $i$th jump time $T_i$ can be written in terms of the regularized incomplete beta function $I_z[a,b]$ in the form
\[ \mathbb{Q}[T_i\leq t] = \sum_{k=0}^{\infty}\mathbb{Q}[T_i\leq t\,|\,N_{TT}=k]\,P(k) = \sum_{k=0}^{\infty}\mathbf{1}_{\{1\leq i\leq k\}}\,I_{t/T}[i,\,k-i+1]\,P(k), \tag{9.13} \]
for $0\leq t\leq T$. We also have
\[ \mathbb{Q}[T_i=\infty] = \mathbb{Q}[N_{TT}<i] = \sum_{k=0}^{i-1}P(k). \tag{9.14} \]
The distribution of $T_i$ is mixed in the sense that it has a point mass at $\infty$ given by (9.14), and a continuous part with density
\[ t\mapsto \mathbf{1}_{\{0\leq t\leq T\}}\sum_{k=0}^{\infty}\mathbf{1}_{\{1\leq i\leq k\}}\,\frac{1}{T\,\mathrm{B}[i,\,k-i+1]}\left(\frac{t}{T}\right)^{\!i-1}\left(1-\frac{t}{T}\right)^{\!k-i}P(k). \tag{9.15} \]
Note that
\[ I_z[1,k] = 1-(1-z)^k, \tag{9.16} \]
from which it follows that
\[ \mathbb{Q}[T_1\leq t] = \sum_{k=1}^{\infty}\left[1-\left(1-\frac{t}{T}\right)^{\!k}\right]P(k) = 1-\sum_{k=0}^{\infty}\left(1-\frac{t}{T}\right)^{\!k}P(k) = 1-G_P\!\left(1-\frac{t}{T}\right), \tag{9.17} \]
for $t\in[0,T]$. The distribution function of $T_1$ is thus given by
\[ F_{T_1}(t) = \begin{cases} 0 & t<0, \\ 1-G_P\!\left(1-\frac{t}{T}\right) & 0\leq t\leq T, \\ 1-P(0) & T<t<\infty, \end{cases} \tag{9.18} \]
and there is a point mass $P(0)$ at $t=\infty$ corresponding to the case where there are no jumps. For $t\in[0,T]$, we have
\[ \mathbb{Q}[N_{tT}=0] = 1-\mathbb{Q}[T_1\leq t] = G_P\!\left(1-\frac{t}{T}\right). \tag{9.20} \]
Since $\{N_{tT}\}$ has stationary increments, we also have
\[ \mathbb{Q}[N_{u+t,T}-N_{u,T}=0] = G_P\!\left(1-\frac{t}{T}\right), \tag{9.21} \]
where $t,u$ satisfy $0\leq u<u+t\leq T$.
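The PGF shortcut (9.17) is easy to test numerically against the direct beta-probability sum (9.13). The sketch below does so for a hypothetical negative binomial terminal law (the example studied in detail in Section 9.1.1); all parameter values are illustrative.

```python
import math

T, p, m = 5.0, 0.4, 8    # hypothetical horizon and terminal-law parameters

def P(k):                # negative binomial pmf
    return math.comb(k + m - 1, k) * p**m * (1 - p)**k

def G(z):                # its probability generating function
    return (p / (1 - (1 - p) * z))**m

def cdf_T1_pgf(t):       # eq. (9.17)
    return 1.0 - G(1.0 - t / T)

def cdf_T1_direct(t, kmax=400):
    # eq. (9.13) for i = 1, using I_{t/T}[1, k] = 1 - (1 - t/T)^k
    return sum((1.0 - (1.0 - t / T)**k) * P(k) for k in range(1, kmax))

checks = [abs(cdf_T1_pgf(t) - cdf_T1_direct(t)) for t in (0.5, 1.0, 2.5, 4.9)]
```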

Given $N_{sT}$, the dynamic consistency property shows that the process $\{\eta_{tT}\}_{s\leq t\leq T}$ defined by $\eta_{tT}=N_{tT}-N_{sT}$ is an LRB with terminal probability mass function
\[ P^*(k) = P_s(k+N_{sT}). \tag{9.22} \]
The probability generating function of $P^*$ is
\[ G_{P^*}(z) = \sum_{k=0}^{\infty}z^kP^*(k) = \sum_{k=0}^{\infty}z^kP_s(k+N_{sT}) = \sum_{n=N_{sT}}^{\infty}z^{\,n-N_{sT}}P_s(n) = z^{-N_{sT}}\,G_{P_s}(z). \tag{9.23} \]
Denoting the $i$th jump time of $\{\eta_{tT}\}$ by $T_i^{(\eta)}$, we have
\[ \mathbb{Q}[T_{i+1}\leq t\,|\,N_{sT}=i] = \mathbb{Q}\!\left[T_1^{(\eta)}\leq t\right] = 1-G_{P^*}\!\left(\frac{T-t}{T-s}\right) = 1-\left(\frac{T-t}{T-s}\right)^{\!-N_{sT}}G_{P_s}\!\left(\frac{T-t}{T-s}\right), \tag{9.24} \]
for $t\in[s,T]$. This is equivalent to
\[ \mathbb{Q}[N_{u+t,T}-N_{uT}>0\,|\,N_{sT}] = 1-\left(\frac{T-(s+t)}{T-s}\right)^{\!-N_{sT}}G_{P_s}\!\left(\frac{T-(s+t)}{T-s}\right), \tag{9.25} \]
for $0\leq s\leq u<u+t\leq T$.

Proposition 9.1.2. The intensity of the PRB is
\[ \lambda_s = \frac{\mathbb{E}[N_{TT}\,|\,N_{sT}]-N_{sT}}{T-s}. \tag{9.26} \]
Proof.
\[ \begin{aligned} \lim_{t\to 0^+}\frac{\mathbb{Q}[N_{s+t,T}-N_{sT}\geq 1\,|\,N_{sT}]}{t} &= \lim_{t\to 0^+}\frac{1}{t}\left[1-\left(\frac{T-(s+t)}{T-s}\right)^{\!-N_{sT}}G_{P_s}\!\left(\frac{T-(s+t)}{T-s}\right)\right] \\ &= -\frac{\mathrm{d}}{\mathrm{d}u}\left[\left(\frac{T-u}{T-s}\right)^{\!-N_{sT}}G_{P_s}\!\left(\frac{T-u}{T-s}\right)\right]_{u=s} \\ &= \frac{G_{P_s}'(1)-N_{sT}\,G_{P_s}(1)}{T-s} \\ &= \frac{\mathbb{E}[N_{TT}\,|\,N_{sT}]-N_{sT}}{T-s}. \end{aligned} \tag{9.27} \]

Remark 9.1.3. The intensity of a counting process is related to the survival probabilities of its jump times by
\[ \mathbb{Q}[T_{1+N_{sT}}>t\,|\,N_{sT}] = \mathbb{E}\!\left[\exp\left(-\int_s^t\lambda_u\,\mathrm{d}u\right)\,\middle|\,N_{sT}\right], \tag{9.28} \]
where $0\leq s<t\leq T$.

9.1.1 Example: Negative binomial prior

Consider the case where $P$ is a negative binomial probability mass function, i.e.
\[ P(k) = \binom{k+m-1}{k}\,p^m(1-p)^k, \qquad k\in\mathbb{N}_0, \tag{9.29} \]
where $m>0$ and $0<p<1$. When $m$ is a positive integer, $P(k)$ can be interpreted in terms of a sequence $\{B_i\}$ of independent Bernoulli trials, each being a failure ($B_i=0$) with probability $p$:
\[ P(k) = \mathbb{Q}[B_{k+m}\text{ is the }m\text{th failure}] \tag{9.30} \]
\[ \phantom{P(k)} = \mathbb{Q}\!\left[\#\{B_i : B_i=0,\ 1\leq i\leq k+m\}=m\right]. \tag{9.31} \]
The probability generating function for the negative binomial distribution is
\[ G_P(z) = \left(\frac{p}{1-(1-p)z}\right)^{\!m}. \tag{9.32} \]

For the remainder of this chapter, if $x!$ is written and $x\notin\mathbb{N}_0$, then we adopt the convention that $x!=\Gamma[x+1]$. Let $\{N_{tT}\}$ be a PRB where $N_{TT}$ has the probability mass function $P$ given in (9.29). Then we have
\[ \varphi_t(k;j) = \frac{Q_{T-t}(k-j)}{Q_T(k)}\,P(k) = \mathbf{1}_{\{0\leq j\leq k\}}\,\frac{e^{t\lambda}\,k!}{\lambda^jT^j(k-j)!}\left(1-\frac{t}{T}\right)^{\!k-j}\binom{k+m-1}{k}\,p^m(1-p)^k, \tag{9.33} \]
and
\[ \sum_{k=j}^{\infty}\varphi_t(k;j) = \frac{e^{t\lambda}\,j!}{\lambda^jT^j}\,\binom{j+m-1}{j}\left(\frac{p}{p+\frac{t}{T}(1-p)}\right)^{\!m}\left(\frac{1-p}{p+\frac{t}{T}(1-p)}\right)^{\!j}. \tag{9.34} \]
Hence the transition probabilities of $\{N_{tT}\}$ are given by
\[ \mathbb{Q}[N_{tT}=j\,|\,N_{sT}=i] = \frac{\sum_k\varphi_t(k;j)}{\sum_k\varphi_s(k;i)}\,Q_{t-s}(j-i) = \mathbf{1}_{\{0\leq i\leq j\}}\binom{j+m-1}{j-i}\left(\frac{p+\frac{s}{T}(1-p)}{p+\frac{t}{T}(1-p)}\right)^{\!i+m}\left(\frac{\frac{t-s}{T}(1-p)}{p+\frac{t}{T}(1-p)}\right)^{\!j-i}, \tag{9.35} \]
and
\[ \mathbb{Q}[N_{TT}=j\,|\,N_{sT}=i] = \mathbf{1}_{\{0\leq i\leq j\}}\binom{j+m-1}{j-i}\left(p+\frac{s}{T}(1-p)\right)^{\!i+m}\left(\left(1-\frac{s}{T}\right)(1-p)\right)^{\!j-i}. \tag{9.36} \]
Remarkably, all the increments of the process $\{N_{tT}\}_{0\leq t\leq T}$ have negative binomial distributions. Note that
\[ \mathbb{Q}[N_{tT}=0] = \left(\frac{p}{p+\frac{t}{T}(1-p)}\right)^{\!m} = G_P\!\left(1-\frac{t}{T}\right), \tag{9.37} \]
as expected. By definition, we have
\[ P_s(j) = \mathbb{Q}[N_{TT}=j\,|\,N_{sT}], \tag{9.38} \]
and so
\[ G_{P_s}(z) = \sum_{k=N_{sT}}^{\infty}z^kP_s(k) = z^{N_{sT}}\sum_{k=0}^{\infty}z^kP_s(k+N_{sT}) = z^{N_{sT}}\left(\frac{p+\frac{s}{T}(1-p)}{1-(1-p)\left(1-\frac{s}{T}\right)z}\right)^{\!m+N_{sT}}. \tag{9.39} \]
It is then straightforward to verify that
\[ \mathbb{Q}[N_{tT}-N_{sT}=0\,|\,N_{sT}] = \left(\frac{p+\frac{s}{T}(1-p)}{p+\frac{t}{T}(1-p)}\right)^{\!m+N_{sT}} = \left(\frac{T-t}{T-s}\right)^{\!-N_{sT}}G_{P_s}\!\left(\frac{T-t}{T-s}\right), \tag{9.40} \]
as expected. At time $s$, the conditional expectation of $N_{TT}$ is
\[ \mathbb{E}[N_{TT}\,|\,N_{sT}] = N_{sT} + (N_{sT}+m)\,\frac{\left(1-\frac{s}{T}\right)(1-p)}{p+\frac{s}{T}(1-p)} = \frac{N_{sT}+m\left(1-\frac{s}{T}\right)(1-p)}{p+\frac{s}{T}(1-p)}, \tag{9.41} \]
and the intensity of the process $\{N_{tT}\}$ is
\[ \frac{\mathbb{E}[N_{TT}\,|\,N_{sT}]-N_{sT}}{T-s} = \frac{(N_{sT}+m)(1-p)}{pT+s(1-p)}. \tag{9.42} \]
We see that a jump in the process $\{N_{tT}\}$ causes a jump in its intensity process. Bühlmann [21, 2.2.4] proposes a positive contagion model in which the intensity of a Poisson process is a linear function of its current level, which bears some similarity to the negative binomial PRB. One stark difference is that Bühlmann assumes the linear form for the intensity process in an ad hoc manner, whereas the intensity of the PRB is a consequence of Bayesian updating.
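The closed forms (9.41) and (9.42) can be checked against direct summation of the terminal transition probabilities (9.36). The parameter values below are hypothetical; a non-integer $m$ exercises the $x!=\Gamma[x+1]$ convention, implemented here via the recurrence for the Gamma-function binomial coefficient.

```python
m, p, T = 3.5, 0.4, 1.0    # hypothetical parameters; note non-integer m
s, i = 0.4, 2              # conditioning time s and observed value N_sT = i

a = p + (s / T) * (1 - p)
b = (1 - s / T) * (1 - p)  # note a + b = 1

# terminal transition probabilities (9.36), built by recurrence:
#   Q[N_TT = j | N_sT = i] = C(j) a^(i+m) b^(j-i),  with C(i) = 1 and
#   C(j+1)/C(j) = (j+m)/(j-i+1)   (the Gamma-function binomial)
pj, j = a**(i + m), i
total_prob, total_mean = 0.0, 0.0
for _ in range(600):
    total_prob += pj
    total_mean += j * pj
    pj *= (j + m) / (j - i + 1) * b
    j += 1

mean_closed = (i + m * (1 - s / T) * (1 - p)) / a             # eq. (9.41)
intensity = (mean_closed - i) / (T - s)                       # eq. (9.26)
intensity_closed = (i + m) * (1 - p) / (p * T + s * (1 - p))  # eq. (9.42)
```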

9.1.2 Example: Log-series prior

We shall see that a PRB with a log-series terminal distribution has similar dynamics to a PRB with a negative binomial terminal distribution. Notably, after the first jump (which occurs with probability 1), the increment distributions of the PRB are negative binomial.

For $p\in(0,1)$, the log-series distribution has probability mass function
\[ P(k) = \frac{-1}{\log p}\,\frac{(1-p)^k}{k}, \qquad k\in\mathbb{N}_+. \tag{9.43} \]
Assume that $\{N_{tT}\}$ is a PRB with the terminal probability mass function given in (9.43). Note that $N_{TT}\geq 1$. When $N_{sT}=0$, the transition probabilities of $\{N_{tT}\}$ are given by
\[ \mathbb{Q}[N_{tT}=j\,|\,N_{sT}=0] = \begin{cases} \dfrac{\log\!\left[p+\frac{t}{T}(1-p)\right]}{\log\!\left[p+\frac{s}{T}(1-p)\right]} & (j=0), \\[3ex] \dfrac{-1}{j\,\log\!\left[p+\frac{s}{T}(1-p)\right]}\left(\dfrac{\frac{t-s}{T}(1-p)}{p+\frac{t}{T}(1-p)}\right)^{\!j} & (j\in\mathbb{N}_+), \end{cases} \tag{9.44} \]
and
\[ \mathbb{Q}[N_{TT}=j\,|\,N_{sT}=0] = \frac{-(1-p)^j\left(1-\frac{s}{T}\right)^{j}}{j\,\log\!\left[p+\frac{s}{T}(1-p)\right]} \qquad (j\in\mathbb{N}_+). \tag{9.45} \]
When $N_{sT}>0$, the transition probabilities are similar to the negative binomial case:
\[ \mathbb{Q}[N_{tT}=j\,|\,N_{sT}=i] = \binom{j-1}{j-i}\left(\frac{p+\frac{s}{T}(1-p)}{p+\frac{t}{T}(1-p)}\right)^{\!i}\left(\frac{\frac{t-s}{T}(1-p)}{p+\frac{t}{T}(1-p)}\right)^{\!j-i}, \tag{9.46} \]
\[ \mathbb{Q}[N_{TT}=j\,|\,N_{sT}=i] = \binom{j-1}{j-i}\left(p+\frac{s}{T}(1-p)\right)^{\!i}\left(\left(1-\frac{s}{T}\right)(1-p)\right)^{\!j-i}, \tag{9.47} \]
for $i\in\mathbb{N}_+$ and $j\geq i$. Comparing (9.46) and (9.47) to (9.35) and (9.36), we see that $\{N_{tT}\}$ can be considered a PRB with negative binomial terminal distribution in the limiting case $m\to 0$.

The probability generating function of the log-series distribution is
\[ G_P(z) = \frac{\log[1-(1-p)z]}{\log p}. \tag{9.48} \]
The distribution function of the first jump time of $\{N_{tT}\}$ is
\[ \mathbb{Q}[T_1\leq t] = \mathbf{1}_{\{t\geq 0\}} - \mathbf{1}_{\{0\leq t\leq T\}}\,\frac{\log\!\left[p+\frac{t}{T}(1-p)\right]}{\log p}. \tag{9.49} \]
Then $T_1$ has a continuous distribution with density
\[ t\mapsto \mathbf{1}_{\{0\leq t\leq T\}}\,\frac{-(1-p)}{(pT+t(1-p))\log p}. \tag{9.50} \]
Given $N_{sT}=0$, the distribution function of $T_1$ updates to
\[ \mathbb{Q}[T_1\leq t\,|\,N_{sT}=0] = \mathbf{1}_{\{t\geq s\}} - \mathbf{1}_{\{s\leq t\leq T\}}\,\frac{\log\!\left[1-\frac{T-t}{T-s}(1-p)\left(1-\frac{s}{T}\right)\right]}{\log\!\left[p+\frac{s}{T}(1-p)\right]}. \tag{9.51} \]
If $N_{sT}>0$ is given, then the jump-time distributions are recovered from the negative binomial case by setting $m=0$.

9.2 Mixed Poisson processes

A wide and tractable class of PRBs consists of the mixed Poisson processes. Grandell [46] provides a thorough exposition of these processes (see also Lundberg [63]). If every increment of a process has a mixed Poisson distribution (i.e., a Poisson distribution with a mixed mean), then it is a mixed Poisson process. We shall see that when the terminal distribution of a PRB is a mixed Poisson distribution, the PRB is a mixed Poisson process.

Let $\{N_{tT}\}$ be a PRB with terminal probability mass function $P(k)$ given by
\[ P(k) = \int_0^\infty\frac{(\theta T)^k}{k!}\,e^{-\theta T}\,\pi(\theta)\,\mathrm{d}\theta, \tag{9.52} \]
for some probability density $\pi:\mathbb{R}_+\to\mathbb{R}_+$. We say that $N_{TT}$ has a mixed Poisson distribution. If $\Theta$ is a random variable with density $\pi(\theta)$, then $N_{TT}$ can be interpreted as a Poisson random variable with the unknown parameter $\Theta T$. Without loss of generality, we may assume that $\lambda=1$ in (9.1), so
\[ Q_t(k) = \frac{t^ke^{-t}}{k!}. \tag{9.53} \]
Then we have
\[ \varphi_t(k;j) = \frac{Q_{T-t}(k-j)\,P(k)}{Q_T(k)} \tag{9.54} \]
\[ \phantom{\varphi_t(k;j)} = \frac{(T-t)^{k-j}\,e^{t}}{(k-j)!}\int_0^\infty\theta^ke^{-\theta T}\,\pi(\theta)\,\mathrm{d}\theta, \tag{9.55} \]
for $j,k\in\mathbb{N}_0$ satisfying $j\leq k$. It is straightforward to show that
\[ \sum_{k=j}^{\infty}\varphi_t(k;j) = e^{t}\int_0^\infty\theta^je^{-\theta t}\,\pi(\theta)\,\mathrm{d}\theta. \tag{9.56} \]
The transition probabilities of $\{N_{tT}\}$ are then given by
\[ \mathbb{Q}[N_{tT}=j\,|\,N_{sT}=i] = \frac{\sum_{k=j}^\infty\varphi_t(k;j)}{\sum_{k=i}^\infty\varphi_s(k;i)}\,Q_{t-s}(j-i) = \frac{\int_0^\infty\theta^je^{-\theta t}\,\pi(\theta)\,\mathrm{d}\theta}{\int_0^\infty\theta^ie^{-\theta s}\,\pi(\theta)\,\mathrm{d}\theta}\,\frac{(t-s)^{j-i}}{(j-i)!}, \tag{9.57} \]
where $i,j\in\mathbb{N}_0$ satisfy $i\leq j$, and $s,t$ satisfy $0\leq s<t<T$. Similarly,
\[ \mathbb{Q}[N_{TT}=k\,|\,N_{sT}=i] = \frac{\int_0^\infty\theta^ke^{-\theta T}\,\pi(\theta)\,\mathrm{d}\theta}{\int_0^\infty\theta^ie^{-\theta s}\,\pi(\theta)\,\mathrm{d}\theta}\,\frac{(T-s)^{k-i}}{(k-i)!}, \tag{9.58} \]
where $i,k\in\mathbb{N}_0$ satisfy $i\leq k$, and $0\leq s<T$. When $\pi(\theta)=\delta_\mu(\theta)$ for some $\mu>0$, the transition probabilities reduce to those of a Poisson process with parameter $\mu$. So, given the value of $\Theta>0$, we have
\[ \mathbb{Q}[N_{tT}=j\,|\,N_{sT}=i,\Theta] = \frac{(\Theta(t-s))^{j-i}\,e^{-\Theta(t-s)}}{(j-i)!}, \tag{9.60} \]
for $i\leq j$ and $s\leq t\leq T$.

We shall show that the transition distributions of $\{N_{tT}\}$ are mixed Poisson distributions. First, we find the conditional distribution of $\Theta$ given the information to date. For $0<t_1<t_2<\cdots<t_n<T$, the joint law of $(\{N_{t_i,T}\}_i,N_{TT},\Theta)$ is given by
\[ \mathbb{Q}[N_{t_1,T}=k_1,\ldots,N_{t_n,T}=k_n,\,N_{TT}=K,\,\Theta\in\mathrm{d}\theta] = \frac{Q_{T-t_n}(K-k_n)\prod_{i=1}^nQ_{t_i-t_{i-1}}(k_i-k_{i-1})}{Q_T(K)}\,\frac{(\theta T)^Ke^{-\theta T}}{K!}\,\pi(\theta)\,\mathrm{d}\theta, \tag{9.61} \]
where $t_0=0$ and $k_0=0$. Using the Bayes theorem, we have
\[ \mathbb{Q}[N_{TT}=K,\,\Theta\in\mathrm{d}\theta\,|\,N_{t_1,T}=k_1,\ldots,N_{t_n,T}=k_n] = \frac{Q_{T-t_n}(K-k_n)\,\theta^Ke^{-\theta T}\,\pi(\theta)\,\mathrm{d}\theta}{\int_0^\infty\sum_{K=k_n}^{\infty}Q_{T-t_n}(K-k_n)\,\theta^Ke^{-\theta T}\,\pi(\theta)\,\mathrm{d}\theta}. \tag{9.62} \]
This implies that
\[ \mathbb{Q}[\Theta\in\mathrm{d}\theta\,|\,\mathcal{F}^N_t] = \frac{\sum_{k=N_{tT}}^{\infty}Q_{T-t}(k-N_{tT})\,\theta^ke^{-\theta T}\,\pi(\theta)\,\mathrm{d}\theta}{\int_0^\infty\sum_{k=N_{tT}}^{\infty}Q_{T-t}(k-N_{tT})\,\theta^ke^{-\theta T}\,\pi(\theta)\,\mathrm{d}\theta} = \frac{\theta^{N_{tT}}e^{-\theta t}\,\pi(\theta)\,\mathrm{d}\theta}{\int_0^\infty\theta^{N_{tT}}e^{-\theta t}\,\pi(\theta)\,\mathrm{d}\theta}, \tag{9.63} \]
where $\{\mathcal{F}^N_t\}$ is the filtration generated by $\{N_{tT}\}$. We write
\[ \pi_t(\theta) = \frac{\theta^{N_{tT}}e^{-\theta t}\,\pi(\theta)}{\int_0^\infty\theta^{N_{tT}}e^{-\theta t}\,\pi(\theta)\,\mathrm{d}\theta}\,; \tag{9.64} \]
thus $\pi_t(\theta)$ is the $\mathcal{F}^N_t$-conditional density of $\Theta$. From equation (9.60), we then have
\[ \mathbb{Q}[N_{tT}=j\,|\,N_{sT}] = \int_{\theta=0}^{\infty}\mathbb{Q}[N_{tT}=j\,|\,N_{sT},\,\Theta=\theta]\;\mathbb{Q}[\Theta\in\mathrm{d}\theta\,|\,N_{sT}] = \frac{(t-s)^{j-N_{sT}}}{(j-N_{sT})!}\int_0^\infty\theta^{j-N_{sT}}e^{-\theta(t-s)}\,\pi_s(\theta)\,\mathrm{d}\theta. \tag{9.65} \]
Hence, all the increments of $\{N_{tT}\}$ have a mixed Poisson distribution.

Remark 9.2.1. The negative binomial PRB encountered in the previous section can be recovered by taking
\[ \pi(\theta) = \frac{1}{\Gamma[m]}\left(\frac{pT}{1-p}\right)^{\!m}\theta^{m-1}\exp\left(-\frac{pT}{1-p}\,\theta\right). \tag{9.66} \]
In this case $\Theta$ is a gamma random variable.
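Remark 9.2.1 can be verified numerically: integrating the Poisson mass against the gamma mixing density (9.66), as in (9.52), should reproduce the negative binomial mass function (9.29). A minimal sketch with hypothetical parameter values:

```python
import math

m, p, T = 3, 0.35, 2.0
rate = p * T / (1 - p)   # gamma mixing density (9.66): shape m, rate pT/(1-p)

def pi(theta):
    return rate**m / math.gamma(m) * theta**(m - 1) * math.exp(-rate * theta)

def P_mixed(k, n=100000, theta_max=40.0):
    # eq. (9.52): P(k) = int_0^inf (theta T)^k / k! e^{-theta T} pi(theta) dtheta
    h = theta_max / n
    fk = math.factorial(k)
    total = 0.0
    for j in range(n):
        th = (j + 0.5) * h
        total += (th * T)**k / fk * math.exp(-th * T) * pi(th) * h
    return total

def P_negbin(k):   # eq. (9.29)
    return math.comb(k + m - 1, k) * p**m * (1 - p)**k

err = max(abs(P_mixed(k) - P_negbin(k)) for k in range(6))
```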

Remark 9.2.2. The log-series distribution is not a mixed Poisson distribution (note that $P(0)=0$ for the log-series distribution). Hence, if the terminal distribution of a PRB is log-series, then the PRB is not a mixed Poisson process. Indeed, if $P(k)=0$ for any $k\in\mathbb{N}_0$, then a PRB with terminal mass function $P$ is not a mixed Poisson process.

The $N_{sT}$-conditional expectation of $N_{TT}$ is
\[ \mathbb{E}[N_{TT}\,|\,N_{sT}] = \sum_{k=N_{sT}}^{\infty}k\,\frac{(T-s)^{k-N_{sT}}}{(k-N_{sT})!}\int_0^\infty\theta^{k-N_{sT}}e^{-\theta(T-s)}\,\pi_s(\theta)\,\mathrm{d}\theta = N_{sT} + (T-s)\int_0^\infty\theta\,\pi_s(\theta)\,\mathrm{d}\theta = N_{sT} + (T-s)\,\mathbb{E}[\Theta\,|\,N_{sT}]. \tag{9.67} \]
Then the intensity of $\{N_{tT}\}$ is given by
\[ \frac{\mathbb{E}[N_{TT}\,|\,N_{sT}]-N_{sT}}{T-s} = \mathbb{E}[\Theta\,|\,N_{sT}] = \frac{\int_0^\infty\theta^{1+N_{sT}}e^{-\theta s}\,\pi(\theta)\,\mathrm{d}\theta}{\int_0^\infty\theta^{N_{sT}}e^{-\theta s}\,\pi(\theta)\,\mathrm{d}\theta}. \tag{9.68} \]

9.3 Simulation

An efficient method for the simulation of a PRB is to generate the jump times directly. The case $N_{TT}=0$ is trivial, since then $N_{tT}=0$ for all $t\in[0,T]$. When $N_{TT}>0$ we set the jump times to be given by
\[ T_i = T\,\frac{\sum_{j=1}^{i}E_j}{\sum_{j=1}^{1+N_{TT}}E_j} \qquad \text{for } 1\leq i\leq N_{TT}, \tag{9.69} \]
where the $E_j$'s are independent, identically distributed exponential random variables (with arbitrary parameter). See Figure 9.1 for example simulations.
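The order-statistics construction (9.69) translates directly into code. The sketch below draws $N_{TT}$ from a negative binomial terminal law and returns the jump times; the function names and parameter values are illustrative.

```python
import bisect, math, random

def sample_terminal(rng, p=0.4, m=8, kmax=200):
    # draw N_TT from a negative binomial terminal law (9.29)
    w = [math.comb(k + m - 1, k) * p**m * (1 - p)**k for k in range(kmax)]
    cum, tot = [], 0.0
    for x in w:
        tot += x
        cum.append(tot)
    return bisect.bisect_left(cum, rng.random() * tot)

def prb_jump_times(rng, T=5.0):
    # eq. (9.69): jump times as normalised partial sums of exponentials
    n = sample_terminal(rng)
    if n == 0:
        return []
    E = [rng.expovariate(1.0) for _ in range(n + 1)]
    S = sum(E)
    times, acc = [], 0.0
    for j in range(n):
        acc += E[j]
        times.append(T * acc / S)
    return times

rng = random.Random(7)
jumps = prb_jump_times(rng)
```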

[Four figure panels of simulated sample paths appear here.]

Figure 9.1: Top row: two simulations of a Poisson random bridge with a negative binomial terminal distribution, plotted (lower line) with the conditional expectation of the terminal value (upper line). Bottom row: the corresponding intensity processes. The negative binomial parameters are $m=8$, $p=0.4$.

9.4 Compound Poisson random bridge

Compound Poisson processes are Lévy processes. If the jump-size distribution of a compound Poisson process is discrete, then associated discrete LRBs can be defined; and if the jump-size distribution is sufficiently smooth, then associated continuous LRBs can be defined. We call such LRBs compound-Poisson random bridges (C-PRBs). A C-PRB can be considered to be a compound Poisson process conditioned to have a particular marginal distribution at time $T$. The a priori distribution of the number of jumps and the distribution of the jump sizes depend on the choice of terminal distribution. The jumps of a C-PRB occur at the jump times of a PRB that is adapted to the filtration of the C-PRB, and in general the jump sizes of a C-PRB are not independent. Progress can be made in the analysis of C-PRBs, but the dependence between the jump-size and jump-time distributions makes the path tricky.

A simpler and more tractable class of processes can be constructed by allowing the jump sizes of a PRB to be random and independent. We call such processes compound Poisson random bridges (CPRBs), and trust that the absence of a hyphen is enough to avoid confusion with C-PRBs. It is CPRBs that we consider for the rest of this section. Let the process $\{N_{tT}\}_{0\leq t\leq T}$ be a PRB, and let $N_{TT}$ have arbitrary marginal probability mass function $P\ll Q_T$. Let $\{X_i\}$ be a sequence of independent, identically distributed random variables with characteristic function $\chi_X$. We assume that the $X_i$'s are independent of $\{N_{tT}\}$. We then define the CPRB $\{Y_{tT}\}$ by

\[ Y_{tT} = \sum_{i=1}^{N_{tT}}X_i. \tag{9.70} \]
If $X_i\neq 0$ then $\{Y_{tT}\}$ jumps at the same times as $\{N_{tT}\}$, but the jump sizes of $\{Y_{tT}\}$ are random. The characteristic function of $Y_{tT}$ is
\[ \mathbb{E}\!\left[e^{\mathrm{i}\alpha Y_{tT}}\right] = \mathbb{E}\!\left[\exp\left(\mathrm{i}\alpha\sum_{i=1}^{N_{tT}}X_i\right)\right] = \mathbb{E}\!\left[\prod_{i=1}^{N_{tT}}\mathbb{E}[\exp(\mathrm{i}\alpha X_i)\,|\,N_{tT}]\right] = \mathbb{E}\!\left[\chi_X(\alpha)^{N_{tT}}\right] = G_P\!\left(1-\frac{t}{T}+\frac{t}{T}\,\chi_X(\alpha)\right). \tag{9.71} \]
Using the dynamic consistency property, this can be generalised to
\[ \begin{aligned} \mathbb{E}\!\left[e^{\mathrm{i}\alpha Y_{tT}}\,\middle|\,Y_{sT},N_{sT}\right] &= e^{\mathrm{i}\alpha Y_{sT}}\,\mathbb{E}\!\left[\exp\left(\mathrm{i}\alpha\!\!\sum_{i=N_{sT}+1}^{N_{tT}}\!\!X_i\right)\middle|\,N_{sT}\right] \\ &= e^{\mathrm{i}\alpha Y_{sT}}\,\mathbb{E}\!\left[\chi_X(\alpha)^{N_{tT}-N_{sT}}\,\middle|\,N_{sT}\right] \\ &= e^{\mathrm{i}\alpha Y_{sT}}\,G_{P^*}\!\left(\frac{T-t}{T-s}+\frac{t-s}{T-s}\,\chi_X(\alpha)\right) \\ &= e^{\mathrm{i}\alpha Y_{sT}}\,\frac{G_{P_s}\!\left(\frac{T-t}{T-s}+\frac{t-s}{T-s}\,\chi_X(\alpha)\right)}{\left(\frac{T-t}{T-s}+\frac{t-s}{T-s}\,\chi_X(\alpha)\right)^{N_{sT}}}. \end{aligned} \tag{9.72} \]
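Formula (9.71) can be sanity-checked by expanding the PGF as a series, $\mathbb{E}[e^{\mathrm{i}\alpha Y_{tT}}]=\sum_kP(k)\left(1-\tfrac{t}{T}+\tfrac{t}{T}\chi_X(\alpha)\right)^k$. The sketch below compares the closed-form negative binomial PGF with this series, for hypothetical Gaussian jump sizes.

```python
import cmath, math

p, m, T, t = 0.4, 5, 1.0, 0.6     # hypothetical PRB parameters
mu, sd = 0.2, 1.0                 # hypothetical Gaussian jump-size parameters

def chi_X(a):                     # characteristic function of one jump
    return cmath.exp(1j * a * mu - 0.5 * (sd * a)**2)

def G_P(z):                       # negative binomial PGF (9.32)
    return (p / (1 - (1 - p) * z))**m

def cf_formula(a):                # eq. (9.71)
    return G_P(1 - t / T + (t / T) * chi_X(a))

def cf_series(a, kmax=400):
    # E[e^{ia Y_tT}] = sum_k P(k) (1 - t/T + (t/T) chi_X(a))^k
    base = 1 - t / T + (t / T) * chi_X(a)
    P = lambda k: math.comb(k + m - 1, k) * p**m * (1 - p)**k
    return sum(P(k) * base**k for k in range(kmax))

err = max(abs(cf_formula(a) - cf_series(a)) for a in (0.0, 0.5, 1.5))
```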

Counting processes compounded by random jump sizes are common in the insurance literature, and we refer the reader to Bühlmann [21], Lundberg [63], and Mikosch [74] for further details and references.

9.5 nth-to-default baskets

We now consider the pricing of an $n$th-to-default credit swap. The swap provides protection against defaults occurring in a basket of $K$ homogeneous credit risks. In return for a continuously paid premium, the buyer receives an amount $1-R$ on the date of the $n$th default, if this default occurs before date $T$. The buyer pays the premium until the earlier of the time of the $n$th default and $T$. Here $R$ represents the recovery rate on the credit risks, and is for simplicity assumed constant. We also assume that interest rates are constant, i.e. $r_t=r$ for all $t$.

We assume that the defaults occur at the jump times of the PRB $\{N_{tT}\}$ with terminal probability mass function $P:\{1,2,\ldots,K\}\to\mathbb{R}_+$. Denoting the $n$th jump of $\{N_{tT}\}$ by $T_n$, the value of the swap to the buyer is
\[ V_0 = \mathbb{E}\!\left[-\int_0^{T_n\wedge T}e^{-rt}e^{qt}\,\mathrm{d}t + (1-R)\,\mathbf{1}_{\{T_n\leq T\}}\,e^{-rT_n}\right], \tag{9.73} \]
where $q$ is the premium rate. This can be rewritten as
\[ V_0 = -\frac{1}{q-r}\,\mathbb{E}\!\left[\mathbf{1}_{\{T_n\leq T\}}\,e^{(q-r)T_n} + e^{(q-r)T}\,\mathbb{Q}[T_n>T] - 1\right] + (1-R)\,\mathbb{E}\!\left[\mathbf{1}_{\{T_n\leq T\}}\,e^{-rT_n}\right]. \tag{9.74} \]
Note that
\[ \mathbb{Q}[T_n>T] = \mathbb{Q}[N_{TT}<n] = \sum_{k=1}^{n-1}P(k). \tag{9.75} \]
Recalling the expression for the density of $T_n$ given in (9.15), we have
\[ \mathbb{E}\!\left[\mathbf{1}_{\{T_n\leq T\}}\,e^{-rT_n}\right] = \sum_{k=n}^{K}P(k)\int_0^T\frac{1}{T\,\mathrm{B}[n,\,k-n+1]}\left(\frac{t}{T}\right)^{\!n-1}\left(1-\frac{t}{T}\right)^{\!k-n}e^{-rt}\,\mathrm{d}t = \sum_{k=n}^{K}M[n,\,k+1,\,-rT]\,P(k), \tag{9.76} \]
where Kummer's function $M[\alpha,\beta,z]$ was given in (2.53). The initial value of the

contract is then

\[ V_0 = -\frac{1}{q-r}\left[\sum_{k=n}^{K}M[n,\,k+1,\,(q-r)T]\,P(k) + e^{(q-r)T}\sum_{k=1}^{n-1}P(k) - 1\right] + (1-R)\sum_{k=n}^{K}M[n,\,k+1,\,-rT]\,P(k). \tag{9.77} \]
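The pricing formula (9.77) is straightforward to implement once Kummer's function $M$ is available; its power series suffices for the small arguments arising here. The sketch below also cross-checks the protection leg (9.76) against direct numerical integration of the jump-time density (9.15). The basket and parameter values are hypothetical.

```python
import math

def kummer_M(a, b, z, tol=1e-14, max_terms=500):
    # confluent hypergeometric function M[a,b,z] via its power series
    term, total = 1.0, 1.0
    for n in range(max_terms):
        term *= (a + n) * z / ((b + n) * (n + 1))
        total += term
        if abs(term) < tol:
            break
    return total

def protection_leg(n, P, r, T):
    # eq. (9.76): E[1_{T_n <= T} e^{-r T_n}]
    return sum(kummer_M(n, k + 1, -r * T) * P[k] for k in range(n, len(P)))

def swap_value(n, P, r, q, T, R):
    # eq. (9.77): initial value of the nth-to-default swap
    K = len(P) - 1
    prem = sum(kummer_M(n, k + 1, (q - r) * T) * P[k] for k in range(n, K + 1))
    no_default = sum(P[k] for k in range(1, n))
    return (-(prem + math.exp((q - r) * T) * no_default - 1.0) / (q - r)
            + (1 - R) * protection_leg(n, P, r, T))

def protection_direct(n, P, r, T, steps=20000):
    # cross-check of (9.76) by integrating the jump-time density (9.15)
    h = T / steps
    total = 0.0
    for i in range(steps):
        u = (i + 0.5) * h
        dens = sum((u / T)**(n - 1) * (1 - u / T)**(k - n) * P[k]
                   * math.gamma(k + 1) / (T * math.gamma(n) * math.gamma(k - n + 1))
                   for k in range(n, len(P)))
        total += math.exp(-r * u) * dens * h
    return total

P = [0.0] + [0.2] * 5    # hypothetical basket: K = 5, uniform P on {1,...,5}
v0 = swap_value(n=2, P=P, r=0.03, q=0.05, T=1.0, R=0.4)
```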

Typically, the premium rate q would be set to ensure that V0 = 0, so the swap has zero initial value. At time s < T , if the nth default has not yet occurred, the value of the swap to the buyer is

\[ V_s = \mathbb{E}\!\left[-\int_s^{T_n\wedge T}e^{-rt}e^{qt}\,\mathrm{d}t + (1-R)\,\mathbf{1}_{\{T_n\leq T\}}\,e^{-rT_n}\,\middle|\,N_{sT}\right]. \tag{9.78} \]
By the dynamic consistency property, if we know $N_{sT}$ we can define an LRB by
\[ \eta_{tT} = N_{tT}-N_{sT} \qquad (s\leq t\leq T). \tag{9.79} \]
Then we can write
\[ V_s = -\frac{1}{q-r}\,\mathbb{E}\!\left[\mathbf{1}_{\{T^{(\eta)}_{n-N_{sT}}\leq T\}}\,e^{(q-r)T^{(\eta)}_{n-N_{sT}}}\,\middle|\,N_{sT}\right] - \frac{1}{q-r}\,e^{(q-r)T}\,\mathbb{Q}[T_n>T\,|\,N_{sT}] + \frac{1}{q-r}\,e^{(q-r)s} + (1-R)\,\mathbb{E}\!\left[\mathbf{1}_{\{T^{(\eta)}_{n-N_{sT}}\leq T\}}\,e^{-rT^{(\eta)}_{n-N_{sT}}}\,\middle|\,N_{sT}\right], \tag{9.80} \]
where $T^{(\eta)}_i$ denotes the $i$th jump time of $\{\eta_{tT}\}$. In a similar way to the calculation of $V_0$, we find
\[ V_s = -\frac{e^{(q-r)s}}{q-r}\sum_{k=n}^{K}M[n-N_{sT},\,k-N_{sT}+1,\,(q-r)(T-s)]\,P_s(k) - \frac{e^{(q-r)T}}{q-r}\sum_{k=N_{sT}}^{n-1}P_s(k) + \frac{e^{(q-r)s}}{q-r} + (1-R)\,e^{-rs}\sum_{k=n}^{K}M[n-N_{sT},\,k-N_{sT}+1,\,-r(T-s)]\,P_s(k), \tag{9.81} \]
where $P_s$ is the $N_{sT}$-conditional probability mass function of $N_{TT}$, given by (9.3).

Appendix A

Some integrals

A.1 Proof of Proposition 2.8.2

Proposition. For $y\in[0,z]$, the distribution function of the random variable $S^{(z)}_{tT}$ is given by
\[ F_{tT}(y;z) = \Phi\!\left[\frac{c(Ty-tz)}{\sqrt{yz(z-y)}}\right] + \left(1-\frac{2t}{T}\right)e^{2c^2t(T-t)/z}\,\Phi\!\left[\frac{c((2t-T)y-tz)}{\sqrt{yz(z-y)}}\right]. \tag{A.1} \]
Proof. We need to show that

1 c2(Tu−tz)2 y exp 1 ct(T t) − 2 uz(z−u) − 3/2 du √2π T 0 (u u2/z) − c(Ty tz) 2t 2 c((2t T )y tz) = Φ − + 1 e2c t(T −t)/z Φ − − . (A.2) yz(z y) − T yz(z y) − − First we define c(Tu tz) x(u)= − . (A.3) uz(z u) − Inverting this function is a matter of finding the positive root of a quadratic equation, and it gives z(2c2tT + x2z)+ xz 4c2t(T t)z + x2z2 u(x)= 2 2 2 − . (A.4) 2(c T + x z) Differentiating x(u) yields

(T 2t)u + tz c x′(u)= − . (A.5) 2z (u u2/z)3/2 − 139 140 A.2 Proof of Proposition 2.8.5

Writing α(x)= 4c2t(T t)z + x2z2, (A.6) − we have t(T t) 2z − T (T 2t)u(x)+ tz − 4t(T t)(c2T 2 + x2z) = − 4c2tT 2(T t)+ T 2x2z + T (T 2t)xα(x) − − T 2α(x)2 (T 2t)2x2z2 = − − T 2α(x)2 + T (T 2t)xzα(x) − (Tα(x)+(T 2t)xz)(Tα(x) (T 2t)xz) = − − − Tα(x)(Tα(x)+(T 2t)xz) − (T 2t)xz =1 − . (A.7) − Tα(x) Then (A.5) and (A.7) give

(T 2t)x(u)z ′ ct(T t) 1 − x (u)= −2 3/2 . (A.8) − T 4c2t(T t)z + x(u)2z2 T (u u /z) − − So making the change of variable x = x(u) on the left hand side of (A.2) gives

x(y) e−x2/2 (T 2t)xz 1 − dx 2 2 2 −∞ √2π − T 4c t(T t)z + x z − x(y) 2 2t x e−x /2 = Φ[x(y)] 1 dx 2 2 − − T −∞ 4c t(T t)/z + x √2π − 2t 2 = Φ[x(y)] 1 e2c t(T −t)/z Φ 4c2t(T t)/z + x(y)2 1 − − T − − 2t 2 = Φ[x(y)] + 1 e2c t(T −t)/z Φ 4c2t(T t)/z + x(y)2 − T − − c(Ty tz) 2t 2 c((2t T )y tz) = Φ − + 1 e2c t(T −t)/z Φ − − . (A.9) yz(z y) − T yz(z y) − −

A.2 Proof of Proposition 2.8.5

(z) Proposition. Define the incomplete first moment of StT by y M (y; z)= u f (u; z) du (0 y z). (A.10) tT tT ≤ ≤ 0 A.2 Proof of Proposition 2.8.5 141

Then we have

t c(Ty tz) 2c2t(T −t)/z c((2t T )y tz) MtT (y; z)= z Φ − e Φ − − , (A.11) T yz(z y) − yz(z y) − − (z) and the second moment of StT is given by

2 2 2 (z) t 2 c T 2π −1/2 E S = z 1 c(T t)e 2z Φ cT z . (A.12) tT T − − z − Proof. Making the change of variable (A.3), we have

y t(T t) x(y) 2zu(x) e−x2/2 u ftT (u; z) du = − dx. (A.13) T (T 2t)u(x)+ tz √2π 0 −∞ − Recalling equation (A.7), we have

t(T t) 2zu(x) t(T t)z 2tz − = − 2 T (T 2t)u(x)+ tz T (T 2t) − (T 2t)u(x)+ tz − − − tz (T 2t)xz = T 2t + − T (T 2t) − α(x) − t xz = z 1+ . (A.14) T α(x) Then we have

y u ftT (u; z) du 0 t(T t) x(y) 2zu(x) e−x2/2 = − dx T (T 2t)u(x)+ tz √2π −∞ − t x(y) x e−x2/2 = z Φ[x(y)] + dx 2 2 T −∞ 4c t(T t)/z + x √2π − t 2 = z Φ[x(y)] e2c t(T −t)/z Φ 4c2t(T t)/z + x(y)2 T − − − t c(Ty tz) 2 c((2t T )y tz) = z Φ − e2c t(T −t)/z Φ − − , (A.15) T yz(z y) − yz(z y) − − as required. Changing the variable in the second integral of the proposition gives

z ∞ 2 −x2/2 2 t(T t) 2zu(x) e u ftT (u; z) du = − dx. (A.16) T (T 2t)u(x)+ tz √2π 0 −∞ − 142 A.3 Proof of Proposition 2.9.1

Now, using (A.6) and (A.14), we have t(T t) 2zu(x)2 − T (T 2t)u(x)+ tz − t xz = zu(x) 1+ T α(x) t c2tT + x2z xα(x)2 + xz(2c2tT + x2z) = z2 + T c2T 2 + x2z 2(c2T 2 + x2z)α(x) t c2tT + x2z xz(c2t(3T 2t)+ x2z) = z2 + − T c2T 2 + x2z 2(c2T 2 + x2z)α(x) 2 2 2 2 t 2 c tT + x z xz(c t(3T 2t)+ x z) = z 2 2 2 + − . (A.17) T c T + x z 2(c2T 2 + x2z) 4c2t(T t)+ x2z2 − Note that the second term above is odd in x. Hence y 2 u ftT (u; z) du 0 t ∞ c2tT + x2z e−x2/2 = z2 dx T c2T 2 + x2z √2π −∞ t 2c2T (T t) ∞ 1 e−x2/2 = z2 1 − dx T − z c2T 2/z + x2 √2π 0 2 2 2 t 2 2c T (T t) π z c T = z 1 − e 2z Φ c2T 2/z (A.18) T − z 2 c2T 2 − 2 2 t 2 c T 2π −1/2 = z 1 c(T t)e 2z Φ cT z . (A.19) T − − z − The equality (A.18) uses the identity [1, 7.4.11].

A.3 Proof of Proposition 2.9.1

(z) Proposition. The distribution function of ZtT is 1 (T t)(c2T (T 2t)+ z2) y F (y; z)= + − − arctan tT 2 πT (c2(T 2t)2 + z2) ct − t(c2T (T 2t) z2) z y + − − arctan − πT (c2(T 2t)2 + z2) c(T t) − − ct(T t)z y2 + c2t2 + − log , (A.20) πT (c2(T 2t)2 + z2) (z y)2 + c2(T t)2 − − − where 0 0. A.3 Proof of Proposition 2.9.1 143

Proof. Writing a = ct and b = c(T t), we need to calculate the integral − ab y dx F (y; z)= (z2 +(a b)2) . (A.21) tT π(a + b) − (x2 + a2)((z x)2 + b2) −∞ − First we note that 1 u x + v u x + v = 1 1 + 2 2 , (A.22) (x2 + a2)((z x)2 + b2) x2 + a2 (z x)2 + b2 − − where 2z u = , 1 (z2 +(a + b)2)(z2 +(a b)2) − z2 + b2 a2 v = − , 1 (z2 +(a + b)2)(z2 +(a b)2) (A.23) − u = u , 2 − 1 v =2u z v . 2 1 − 1 For R

2 2 v1 u1z v1 π u1 y + a lim IR(y)= + − + log R→∞ a b 2 2 (y z)2 + b2 − v y u z v y z + 1 arctan + 1 − 1 arctan − . (A.26) a a b b 144 A.4 Proof of Proposition 2.9.3

The distribution function is then

ab 2 2 FtT (y; z)= (z +(a b) ) lim IR(y) π(a + b) − R→∞ 1 ab z y2 + a2 = + log 2 π(a + b) z2 +(a + b)2 (z y)2 + b2 − b b2 a2 + z2 y + − arctan π(a + b) z2 +(a + b)2 a a b2 a2 z2 z y + − − arctan − . (A.27) π(a + b) z2 +(a + b)2 b Substituting for a and b yields the required result.

A.4 Proof of Proposition 2.9.3

(z) Proposition. The first two moments of ZtT exist, and are given by t E Z(z) = z, (A.28) tT T and 2 t E Z(z) = (z2 + c2T (T t)). (A.29) tT T − Proof. Note the following two integrals: ∞ 2 2 2 I1 = (y + c t )ftT (y; z) dy −∞ ct(T t) ∞ z2 + c2T 2 = − dy πT (z y)2 + c2(T t)2 −∞ − − ct(T t) ∞ z2 + c2T 2 = − dy πT y2 + c2(T t)2 −∞ − t = (z2 + c2T 2), (A.30) T and

\begin{align}
I_2&=\int_{-\infty}^{\infty}\bigl((z-y)^2+c^2(T-t)^2\bigr)f_{tT}(y;z)\,dy\nonumber\\
&=\frac{ct(T-t)}{\pi T}\int_{-\infty}^{\infty}\frac{z^2+c^2T^2}{y^2+c^2t^2}\,dy\nonumber\\
&=\left(1-\frac{t}{T}\right)(z^2+c^2T^2).\tag{A.31}
\end{align}
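The integrals $I_1$ and $I_2$ feed directly into the moment formulas (A.28) and (A.29), which can be cross-checked by quadrature against the same bridge density. A minimal sketch added here for illustration (the names and parameter values are invented; the wide window $[-L,L]$ compensates for the heavy Cauchy-type tails):

```python
import math

def cauchy_bridge_pdf(y, t, T, z, c):
    # Density f_t(y) f_{T-t}(z - y) / f_T(z) appearing in I_1 and I_2.
    a, b = c * t, c * (T - t)
    return (a * b * (z ** 2 + c ** 2 * T ** 2) / (math.pi * (a + b))
            / ((y ** 2 + a ** 2) * ((z - y) ** 2 + b ** 2)))

def bridge_moments(t, T, z, c, n=500001, half_width=400.0):
    # Trapezoidal estimates of E[Z] and E[Z^2] over [-half_width, half_width].
    h = 2.0 * half_width / (n - 1)
    m1 = m2 = 0.0
    for i in range(n):
        y = -half_width + i * h
        w = (0.5 if i in (0, n - 1) else 1.0) \
            * cauchy_bridge_pdf(y, t, T, z, c) * h
        m1 += w * y
        m2 += w * y * y
    return m1, m2
```

The estimates converge to $tz/T$ and $(t/T)(z^2+c^2T(T-t))$ up to the truncation error of order $1/L$ coming from the quartic-decay tails.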

Then we have
\begin{align}
\int_{-\infty}^{\infty}y\,f_{tT}(y;z)\,dy&=\frac{I_1-I_2-c^2t^2+c^2(T-t)^2+z^2}{2z}\nonumber\\
&=\frac{t}{T}\,z,\tag{A.32}
\end{align}
and

\begin{align}
\int_{-\infty}^{\infty}y^2 f_{tT}(y;z)\,dy&=I_1-c^2t^2\nonumber\\
&=\frac{t}{T}\,(z^2+c^2T(T-t)),\tag{A.33}
\end{align}
as required.

Bibliography

[1] M. Abramowitz & I.A. Stegun. Handbook of mathematical functions with formulas, graphs, and mathematical tables. Dover, New York, 1964.

[2] E. Arjas. The claims reserving problem in non-life insurance: some structural ideas. ASTIN Bulletin, 19:139–152, 1989.

[3] J. Amendinger. Initial Enlargement of Filtrations and Additional Information in Financial Markets. PhD thesis, Technische Universität Berlin, 1998.

[4] A.N. Avramidis, P. L’Ecuyer & P.-A. Tremblay. Efficient simulation of gamma and variance-gamma processes. In Proceedings of the 2003 Winter Simulation Conference, 2003.

[5] K. Back. Insider trading in continuous time. Review of Financial Studies, 5(3): 387–409, 1992.

[6] K. Back & H. Pedersen. Long-lived information and intraday patterns. Journal of Financial Markets, 1:385–402, 1998.

[7] O.E. Barndorff-Nielsen. Normal inverse Gaussian distributions and stochastic volatility. Scandinavian Journal of Statistics, 24(1):1–13, 1997.

[8] O.E. Barndorff-Nielsen. Processes of normal inverse Gaussian type. Finance and Stochastics, 2:41–68, 1998.

[9] O.E. Barndorff-Nielsen & B. Jørgensen. Some parametric models on the simplex. Journal of Multivariate Analysis, 39:106–116, 1991.


[10] F. Baudoin. Conditioned stochastic differential equations: theory, examples and application to finance. Stochastic Processes and their Applications, 100:109–145, 2002.

[11] F. Baudoin & L. Nguyen-Ngoc. The financial value of a weak information on a financial market. Finance and Stochastics, 8:415–435, 2004.

[12] S. Benjamin & L.M. Eagles. A curve fitting method and a regression method. In Claims Reserving Manual, volume 2, D3. Faculty and Institute of Actuaries, 1997. www.actuaries.org.uk/__data/assets/pdf_file/0012/24321/crm2-D3.pdf.

[13] J. Bertoin. Lévy Processes. Cambridge University Press, Cambridge, 1996.

[14] T.R. Bielecki & M. Rutkowski. Credit Risk: Modeling, Valuation and Hedging. Springer-Verlag, Berlin, 2004.

[15] N.H. Bingham & R. Kiesel. Risk-Neutral Valuation: Pricing and hedging of financial derivatives. Springer-Verlag, Berlin, second edition, 2004.

[16] R.L. Bornhuetter & R.E. Ferguson. The Actuary and IBNR. Proceedings of the Casualty Actuarial Society, 59:181–195, 1972.

[17] D.C. Brody, L.P. Hughston & A. Macrina. Beyond hazard rates: A new framework for credit-risk modelling. In R. Elliott, M. Fu, R. Jarrow & Ju-Yi Yen, editors, Advances in Mathematical Finance, Festschrift volume in honour of Dilip Madan. Birkhäuser/Springer, 2007.

[18] D.C. Brody, M.H.A. Davis, R.L. Friedman & L.P. Hughston. Informed traders. Proceedings of the Royal Society A, 465:1103–1122, 2008.

[19] D.C. Brody, L.P. Hughston & A. Macrina. Information-based asset pricing. International Journal of Theoretical and Applied Finance, 11:107–142, 2008.

[20] D.C. Brody, L.P. Hughston & A. Macrina. Dam rain and cumulative gain. Proceedings of the Royal Society A, 464:1801–1822, 2008.

[21] H. Bühlmann. Mathematical methods in risk theory. Springer-Verlag, Heidelberg, 1970.

[22] L. Chaumont, D.G. Hobson & M. Yor. Some consequences of the cyclic exchangeability property for exponential functionals of Lévy processes. In Séminaire de probabilités de Strasbourg, volume 35, pages 334–347. Springer-Verlag, Berlin, 2001.

[23] J.M.C. Clark. The simulation of pinned diffusions. In Proceedings of the 29th Conference on Decision and Control, Honolulu, Hawaii, 1990.

[24] R. Cont & P. Tankov. Financial Modelling with Jump Processes. Chapman & Hall, 2004.

[25] D.H. Craighead. Some aspects of the London reinsurance market in world-wide short-term business. Journal of the Institute of Actuaries, 106:227–297, 1979.

[26] L. Devroye. Non-uniform random variate generation. Springer-Verlag, New York, 1986.

[27] J.L. Doob. Conditional Brownian motion and the limits of harmonic functions. Bulletin de la Société Mathématique de France, 85:431–458, 1957.

[28] F. Dufresne, H.U. Gerber & E.S.W. Shiu. Risk theory with the gamma process. ASTIN Bulletin, 21(2):177–192, 1991.

[29] E. Eberlein & E.A. v. Hammerstein. Generalized hyperbolic and inverse Gaussian distributions: limiting cases and approximation of processes. In R. Dalang, M. Dozzi & F. Russo, editors, Seminar on Stochastic Analysis, Random Fields and Applications IV, volume 58 of Progress in Probability. Birkhäuser, 2004.

[30] R.J. Elliott & M. Jeanblanc. Incomplete markets with jumps and informed agents. Mathematical Methods in Operations Research, 50:475–492, 1999.

[31] R.J. Elliott & P.E. Kopp. Equivalent martingale measures for bridge processes. Stochastic Analysis and Applications, 9(4):429–444, 1991.

[32] P. Embrechts, C. Klüppelberg & T. Mikosch. Modelling Extremal Events for Insurance and Finance. Springer-Verlag, Berlin, 1997.

[33] M. Émery & M. Yor. A parallel between Brownian bridges and gamma bridges. Publications of the Research Institute for Mathematical Sciences, 40:669–688, 2004.

[34] P.D. England & R.J. Verrall. Stochastic claims reserving. British Actuarial Journal, 8(3):443–544, 2002.

[35] E.F. Fama. Efficient capital markets: A review of theory and empirical work. The Journal of Finance, 25(2):383–417, 1970.

[36] K.-T. Fang, S. Kotz & K.W. Ng. Symmetric Multivariate and Related Distribu- tions. Chapman & Hall, New York, 1990.

[37] W. Feller. An introduction to probability theory and its applications II. Wiley, New York, 1971.

[38] T.S. Ferguson. A Bayesian analysis of some non-parametric problems. Annals of Statistics, 1(2):209–230, 1973.

[39] P.J. Fitzsimmons. Markov processes with identical bridges. Electronic Journal of Probability, 3(12):1–12, 1998.

[40] P.J. Fitzsimmons & R.K. Getoor. Occupation time distributions for Lévy bridges and excursions. Stochastic Processes and their Applications, 58:73–89, 1995.

[41] P.J. Fitzsimmons, J. Pitman & M. Yor. Markovian bridges: Construction, Palm interpretation, and splicing. Seminar on Stochastic Processes, 33:102–133, 1993.

[42] H. Föllmer, C.-T. Wu & M. Yor. Canonical decomposition of linear transformations of two independent Brownian motions motivated by models of insider trading. Stochastic Processes and their Applications, 84(1):137–164, 1999.

[43] D. Gasbarra, E. Valkeila & L. Vostrikova. Enlargement of filtration and additional information in pricing models: a Bayesian approach. In Yu. Kabanov, R. Liptser & J. Stoyanov, editors, From Stochastic Calculus to Mathematical Finance: The Shiryaev Festschrift, pages 257–285, Berlin, 2006. Springer.

[44] H. Geman, D.B. Madan & M. Yor. Probing option prices for information. Methodology and Computing in Applied Probability, 9:115–131, 2007.

[45] I.I. Gikhman & A.V. Skorokhod. Introduction to the Theory of Stochastic Processes. Dover, 1996.

[46] J. Grandell. Mixed Poisson Processes. Number 77 in Monographs on Statistics and Applied Probability. Chapman & Hall, 1997.

[47] R.D. Gupta & D. St P. Richards. The history of the Dirichlet and Liouville distributions. International Statistical Review, 69:433–446, 2001.

[48] R.D. Gupta & D. St P. Richards. Multivariate Liouville distributions. Journal of Multivariate Analysis, 23(2):233–256, 1987.

[49] R.D. Gupta & D. St P. Richards. Multivariate Liouville distributions II. Probability and Mathematical Statistics, 12(2):291–309, 1991.

[50] R.D. Gupta & D. St P. Richards. Multivariate Liouville distributions III. Journal of Multivariate Analysis, 43(1):29–57, 1992.

[51] R.D. Gupta & D. St P. Richards. Multivariate Liouville distributions IV. Journal of Multivariate Analysis, 54:1–17, 1995.

[52] E. Hoyle, L.P. Hughston & A. Macrina. Lévy random bridges and the modelling of financial information. arXiv:0912.3652, 2009.

[53] E. Hoyle, L.P. Hughston & A. Macrina. Stable-1/2 bridges and insurance: a Bayesian approach to non-life reserving. arXiv:1005.0496, 2010.

[54] L.P. Hughston & A. Macrina. Pricing fixed-income securities in an information-based framework. arXiv:0911.1610, 2010.

[55] L.P. Hughston & A. Macrina. Information, inflation, and interest. In L. Stettner, editor, Advances in mathematics of finance, volume 83, pages 117–138, Warsaw, 2008. Banach Centre Publications, Institute of Mathematics, Polish Academy of Sciences.

[56] P. Imkeller. Malliavin’s calculus in insider models: Additional utility and free lunches. Mathematical Finance, 13(1):153–169, 2003.

[57] B. Jørgensen. Statistical Properties of the Generalized Inverse Gaussian Distribution, volume 9 of Lecture Notes in Statistics. Springer, New York, 1982.

[58] I. Karatzas & S.E. Shreve. Brownian Motion and Stochastic Calculus. Springer-Verlag, New York, 1991.

[59] I. Karatzas & S.E. Shreve. Methods of Mathematical Finance. Springer-Verlag, New York, 1998.

[60] A.S. Kyle. Continuous auctions and insider trading. Econometrica, 53(6):1315– 1335, 1985.

[61] A.E. Kyprianou. Introductory Lectures on the Fluctuations of Lévy Processes with Applications. Springer-Verlag, Berlin, 2006.

[62] E. Lukacs. A characterization of the gamma distribution. The Annals of Mathematical Statistics, 26(2):319–324, 1955.

[63] O. Lundberg. On Random Processes and their Application to Sickness and Accident Statistics. Almqvist & Wiksells, Uppsala, 1940.

[64] A. Macrina. An information-based framework for asset pricing: X-factor theory and its applications. PhD thesis, King’s College London, 2006. arXiv:0807.2124.

[65] D.B. Madan & F. Milne. Option pricing with V.G. martingale components. Mathematical Finance, 1:39–55, 1991.

[66] D.B. Madan & E. Seneta. The variance gamma (V.G.) model for share market returns. Journal of Business, 63:511–524, 1990.

[67] D.B. Madan, P. Carr & E.C. Chang. The variance gamma process and option pricing. European Finance Review, 2:79–105, 1998.

[68] D.B. Madan & M. Yor. Making Markov martingales meet marginals: with explicit constructions. Bernoulli, 8(4):509–536, 2002.

[69] B.B. Mandelbrot. The variation of certain speculative prices. The Journal of Business, 36:394–419, 1963.

[70] B.B. Mandelbrot. Fractals and scaling in finance: discontinuity, concentration, risk. Springer-Verlag, New York, 1997.

[71] R. Mansuy & M. Yor. Harnesses, Lévy bridges and Monsieur Jourdain. Stochastic Processes and their Applications, 115:329–338, 2004.

[72] P. McCullagh. Conditional inference and Cauchy models. Biometrika, 79(2):247– 259, 1992.

[73] A.J. McNeil, R. Frey & P. Embrechts. Quantitative Risk Management. Princeton University Press, 2005.

[74] T. Mikosch. Non-life insurance mathematics: an introduction with stochastic processes. Springer-Verlag, Berlin, 2004.

[75] M. Musiela & M. Rutkowski. Martingale Methods in Financial Modelling. Springer-Verlag, second edition, 2005.

[76] R. Norberg. Prediction of outstanding liabilities in non-life insurance. ASTIN Bulletin, 23:95–115, 1993.

[77] R. Norberg. Prediction of outstanding liabilities II: model variations and extensions. ASTIN Bulletin, 29:5–25, 1999.

[78] J.R. Pierce. An Introduction to Information Theory: symbols, signals and noise. Dover, 1980.

[79] C. Ribeiro & N. Webber. Valuing path dependent options in the variance-gamma model by Monte Carlo with a gamma bridge. Journal of Computational Finance, 7:81–100, 2003.

[80] C. Ribeiro & N. Webber. A Monte Carlo method for the normal inverse Gaussian option valuation model using an inverse Gaussian bridge. Working paper, Cass Business School, 2003.

[81] M. Rutkowski & N. Yu. On the Brody-Hughston-Macrina approach to modelling of defaultable term structure. International Journal of Theoretical and Applied Finance, 10:557–589, 2007.

[82] T.H. Rydberg. The normal inverse Gaussian Lévy process: simulation and approximation. Stochastic Models, 13(4):887–910, 1997.

[83] K.-I. Sato. Lévy processes and infinitely divisible distributions. Cambridge University Press, Cambridge, 1999.

[84] G. Schehr & S.N. Majumdar. Area distribution and the average shape of a Lévy bridge. arXiv:1004.5046, 2010.

[85] R.J. Shiller. Do stock prices move too much to be justified by subsequent changes in dividends? The American Economic Review, 71(3):421–436, 1981.

[86] S.E. Shreve. Stochastic Calculus for Finance II: Continuous-time models. Springer, New York, 2004.

[87] A. Soklakov. Information derivatives. Risk, April:90–94, 2008.

[88] D. Williams. Probability with Martingales. Cambridge University Press, 1991.

[89] C.-T. Wu. Construction of Brownian Motions in Enlarged Filtrations and their Role in Mathematical Models of Insider Trading. PhD thesis, Humboldt-Universität zu Berlin, 1999.