arXiv:2012.10712v1 [math.PR] 19 Dec 2020

Markov-modulated generalized Ornstein-Uhlenbeck processes and an application in risk theory

Anita Behme* and Apostolos Sideris*

22nd December 2020

Abstract

We derive the Markov-modulated generalized Ornstein-Uhlenbeck process by embedding a Markov-modulated random recurrence equation in continuous time. The obtained process turns out to be the unique solution of a certain stochastic differential equation driven by a bivariate Markov-additive process. We present this stochastic differential equation as well as its solution explicitly in terms of the driving Markov-additive process. Moreover, we give necessary and sufficient conditions for strict stationarity of the Markov-modulated generalized Ornstein-Uhlenbeck process and prove that its stationary distribution is given by the distribution of a specific exponential functional of Markov-additive processes. Finally, we propose an application of the Markov-modulated generalized Ornstein-Uhlenbeck process as a Markov-modulated risk model with stochastic investment. This generalizes Paulsen's risk process to a Markov-switching environment. We derive a formula in this risk model that expresses the ruin probability in terms of the distribution of an exponential functional of a Markov-additive process.

Keywords: exponential functional; generalized Ornstein-Uhlenbeck process; Lévy process; Markov additive process; Markov-modulated random recurrence equation; Markov-switching model; risk theory; ruin probability; stationary process

Mathematics subject classification. 60H10, 60J25, 60G51 (primary), 60J57, 91G05, 60K37 (secondary)

*Technische Universität Dresden, Institut für Mathematische Stochastik, Zellescher Weg 12-14, 01069 Dresden, Germany, phone: +49-351-463-32425, fax: +49-351-463-37251. [email protected], [email protected]

1 Introduction

Given a bivariate Lévy process (ξ, η) = (ξt, ηt)t≥0, the generalized Ornstein-Uhlenbeck (GOU) process (Vt)t≥0 driven by (ξ, η) is defined as

\[ V_t = e^{-\xi_t}\Big( V_0 + \int_{(0,t]} e^{\xi_{s-}}\, \mathrm{d}\eta_s \Big), \quad t \ge 0. \tag{1.1} \]

This class of processes has been introduced in 1988 by de Haan and Karandikar [10] as continuous-time analogue of solutions to certain discrete-time random recurrence equations. Moreover, it has been shown in [10] that (Vt)t≥0 is the unique solution of the SDE

\[ \mathrm{d}V_t = V_{t-}\, \mathrm{d}U_t + \mathrm{d}L_t, \quad t \ge 0, \tag{1.2} \]

for another bivariate Lévy process (Ut, Lt)t≥0, which is in direct relation to (ξt, ηt)t≥0. Special cases of GOU processes date back to the early 20th century, cf. [12] and [26]. Nowadays, GOU processes have become a permanent tool in stochastic modelling, appearing in numerous applications such as finance and insurance, see [3], [21], or [27] for some examples. The properties of GOU processes have been studied thoroughly in many papers, see for instance [4], [20], [23], or [25]. Also dating back to the 80's, see e.g. [19], [15] or [31], Markov-switching models have become a popular tool in finance and other areas; a collection of possible applications is presented in [16].
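As an aside, the defining formula (1.1) is straightforward to evaluate numerically. The following sketch (not from the paper; all helper names and parameter values are invented for illustration) simulates a GOU path in the classical special case ξt = λt and ηt = γt + σBt, approximating the stochastic integral in (1.1) by left-point Riemann sums.

```python
import numpy as np

# Numerical sketch of (1.1) for the special case xi_t = lam * t and
# eta_t = gamma * t + sigma * B_t.  Names and values are illustrative.
rng = np.random.default_rng(0)

def gou_path(v0, lam, gamma, sigma, T=1.0, n=1000):
    """Approximate V_t = e^{-xi_t} (v0 + int_(0,t] e^{xi_{s-}} d eta_s)
    on a grid, using left-point sums for the stochastic integral."""
    dt = T / n
    t = np.linspace(0.0, T, n + 1)
    xi = lam * t
    d_eta = gamma * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
    integral = np.concatenate(([0.0], np.cumsum(np.exp(xi[:-1]) * d_eta)))
    return t, np.exp(-xi) * (v0 + integral)

t, v = gou_path(v0=1.0, lam=2.0, gamma=0.5, sigma=0.2)
```

For σ = 0 the output can be compared with the explicit deterministic solution e^{−λt} v0 + (γ/λ)(1 − e^{−λt}), which gives a quick sanity check of the discretization.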
It is thus natural to aim for GOU processes with a Markov-switching behaviour. Several studies in this direction exist. In [17], [24], and [35] the authors consider so-called Markov-modulated Ornstein-Uhlenbeck (MMOU) processes, that is, Markov-modulated versions of the classical Ornstein-Uhlenbeck process, i.e. of (1.1) for a deterministic process ξt = λt, t ≥ 0, and a Brownian motion with drift ηt = γt + σBt, t ≥ 0. In [36] a Markov-modulated version of the Lévy-driven Ornstein-Uhlenbeck process (i.e. (1.1) with ξt = λt, t ≥ 0, and a general Lévy process (ηt)t≥0) is used to model electricity spot prices. However, despite the fact that [10] and [15] were published about 30 years ago and in the meantime GOU processes and Markov-modulated models have become prominent tools for numerous applications, up to now there exists no thorough theoretical description of a GOU process with Markov-modulated behaviour in the literature. It is the first aim of this paper to close this gap, and thus in Section 2 we will define a Markov-modulated version of the GOU process following the approach of [10]. That is, we derive the Markov-modulated generalized Ornstein-Uhlenbeck (MMGOU) process as continuous-time analogue to discrete-time solutions of Markov-modulated random recurrence equations. Hereby we will allow for a rather general Markov-modulation in the sense that the background driving Markov chain may have a countably infinite state space, and that jumps in the background driving Markov chain may induce additional jumps in (ξ, η). Having defined the MMGOU process, it is our second purpose to determine necessary and sufficient conditions for strict stationarity of the process and to describe its stationary distribution. Apart from the classical case of the Lévy-driven GOU process (1.1) studied in [23] and [4], to our knowledge this topic is only covered in the literature for the special case of a Markov-modulated Ornstein-Uhlenbeck process, see e.g. [24] or [35].
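To make the MMOU setting concrete, a minimal simulation sketch of a two-state Markov-modulated Ornstein-Uhlenbeck process could look as follows. The parameter values are invented for illustration and are not taken from [17], [24], or [35]; the chain is advanced by a simple Euler step.

```python
import numpy as np

# Illustrative two-state Markov-modulated OU (MMOU): mean-reversion rate,
# drift and volatility all depend on a background chain J.
rng = np.random.default_rng(1)

lam    = {0: 0.5, 1: 3.0}    # mean-reversion rate per state (invented)
gamma  = {0: 1.0, 1: -0.5}   # drift of eta per state
sigma  = {0: 0.2, 1: 0.8}    # volatility of eta per state
q_rate = {0: 1.0, 1: 2.0}    # rate of leaving each state of J

def mmou_path(v0=0.0, T=5.0, n=5000):
    dt = T / n
    v, j = v0, 0
    path = [v]
    for _ in range(n):
        if rng.random() < q_rate[j] * dt:   # Euler step for the chain J
            j = 1 - j
        d_eta = gamma[j] * dt + sigma[j] * np.sqrt(dt) * rng.standard_normal()
        v += -lam[j] * v * dt + d_eta       # OU dynamics in current regime
        path.append(v)
    return np.array(path)

path = mmou_path()
```

This is the special case of (1.1) with ξt = ∫ λ(Js) ds and Markov-modulated coefficients in η; the general MMGOU process constructed below additionally allows jumps in both driving components.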
We provide necessary and sufficient conditions for the existence of a strictly stationary MMGOU process in Section 3 of this paper and prove that, as in the Lévy case, given it exists, its stationary distribution can be described by the distribution of a certain exponential functional. However, in the Markov-modulated situation the exponential functional is driven by time-reverted processes and does not necessarily have to converge in an almost sure sense to ensure stationarity. We end this paper with a short study of a Markov-modulated risk model that fits nicely into the framework of MMGOU processes and exponential functionals of MAPs. Our model combines a Markov-modulated version of the classical Cramér-Lundberg risk process with an investment possibility that is also modulated by the background driving Markov chain. It can therefore be seen as a generalization of Paulsen's risk process [27] to the Markov-switching situation. Note that the first Markov-modulated risk models were already studied in [19] and [31] in the 80's. The first Markov-modulated risk process in which risk reserves can be invested into a stock index following a geometric Brownian motion was introduced in [22]. In [30] a generalization of this model has been considered, in which the investment returns still follow a geometric Brownian motion, but are also influenced by the external Markov chain. The developed theory of the MMGOU process now allows us to go one step further, in that investment returns may be generated by a process with jumps, instead of just a Brownian motion. Moreover, unlike the mentioned studies, our model allows for joint jumps of the modulating Markov chain, the surplus generating process and the investments

generating process that may be interpreted as market shocks at times of regime switches. In Theorem 4.2 we present a formula for the ruin probability in this model. This formula is based on the distribution of a stochastic exponential of a bivariate MAP associated with the model; in other words, the ruin probability is governed by the stationary distribution of an associated MMGOU process.

2 The definition of a Markov-modulated GOU process

Extending an earlier result of Wolfe [34], de Haan and Karandikar [10] introduced the generalized Ornstein-Uhlenbeck process (1.1) as continuous-time analogue of a solution to a random recurrence equation with i.i.d. coefficients. In order to derive the Markov-modulated GOU process in a similar way as a continuous-time analogue of solutions to Markov-modulated random recurrence equations in Section 2.3 below, we first recall some basic facts on Markov additive processes in Section 2.1 and derive some preliminary results concerning the stochastic exponential of Markov additive processes in Section 2.2.
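The discrete-time object behind this construction is a recurrence V_n = A_n V_{n−1} + B_n in which the law of the coefficient pair (A_n, B_n) depends on the current state of a background Markov chain. A minimal simulation sketch (transition matrix and coefficient laws are invented for illustration) is:

```python
import numpy as np

# Markov-modulated random recurrence V_n = A_n V_{n-1} + B_n, where the
# law of (A_n, B_n) depends on the state of a background Markov chain.
rng = np.random.default_rng(2)

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])   # transition matrix of the modulating chain

def step_coeffs(state):
    """Draw (A_n, B_n) given the state of the chain (invented laws)."""
    if state == 0:
        return 0.7 + 0.1 * rng.standard_normal(), rng.standard_normal()
    return 0.3 + 0.1 * rng.standard_normal(), 2.0 * rng.standard_normal()

def recurrence(v0=0.0, n=5000):
    v, j = v0, 0
    for _ in range(n):
        j = rng.choice(2, p=P[j])   # move the modulating chain
        a, b = step_coeffs(j)       # coefficients given the new state
        v = a * v + b
    return v

sample = [recurrence() for _ in range(5)]   # draws approximating the stationary law
```

In the i.i.d. case (a single state) this is exactly the recurrence embedded by [10]; the MMGOU process of this paper plays the analogous role when the coefficients are modulated as above.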

2.1 Some preliminaries on Markov additive processes

Throughout the paper, let (Ω, F, F, P) be a filtered probability space and (S, S) a measurable space, where we assume S to be at most countable. Let (X, J) = (Xt, Jt)t≥0 be a Markov process on R^d × S, d ≥ 1. The filtration F = (Ft)t≥0 shall always satisfy the usual conditions, and if not stated otherwise we choose it to be the smallest filtration satisfying the usual conditions for which (X, J) is adapted. For any j ∈ S we write Pj(·) := P(·|J0 = j) and Ej[·] for the expectation with respect to Pj. Convergence in Pj-probability will be denoted as →^{Pj}. Equality and convergence in distribution will be denoted as =^d and →^d, respectively. We will use the standard definition of a Markov additive process, cf. [2], which we present here in a multivariate setting as it has already been considered e.g. in [8, 9].

Definition 2.1. A (d + 1)-dimensional continuous-time Markov process (Xt, Jt)t≥0 on R^d × S is called a (d-dimensional) Markov additive process with respect to F (F-MAP), if for all s, t ≥ 0 and for all bounded and measurable functions f : R^d → R, g : S → R,

\[ \mathbb{E}_{J_0}\big[ f(X_{s+t} - X_s)\, g(J_{s+t}) \,\big|\, \mathcal{F}_s \big] = \mathbb{E}_{J_s}\big[ f(X_t)\, g(J_t) \big]. \tag{2.1} \]

Given a MAP (X, J), the marginal process J is often called the Markovian component or Markov driving chain/process; sometimes the name modulator is used. The process X is typically called the additive component, and in practice this is the process of interest. In our derivations we shall also rely on the following extension of (2.1).

Lemma 2.2. The condition (2.1) in Definition 2.1 implies (and is thus equivalent to)

\[ \mathbb{P}_{J_0}\big( C_s,\, J_{s+t} \in D \,\big|\, \mathcal{F}_s \big) = \mathbb{P}_{J_s}\big( C_0,\, J_t \in D \big) \tag{2.2} \]

for all Cs ∈ σ(Xs+u − Xs, 0 ≤ u ≤ t) and corresponding events C0 ∈ σ(Xu, 0 ≤ u ≤ t), and for all D ∈ S.

Proof. Note first that it follows by induction from (2.1) that for all n ∈ N, s ≥ 0, 0 = t0 ≤ t1 ≤ t2 ≤ . . . ≤ tn and bounded, measurable functions f1, . . . , fn, g,

\[ \mathbb{E}_{J_0}\Big[ \prod_{k=1}^{n} f_k\big(X_{s+t_k} - X_{s+t_{k-1}}\big)\, g(J_{s+t_n}) \,\Big|\, \mathcal{F}_s \Big] = \mathbb{E}_{J_s}\Big[ \prod_{k=1}^{n} f_k\big(X_{t_k} - X_{t_{k-1}}\big)\, g(J_{t_n}) \Big]. \]

This may be rewritten as

\[ \mathbb{P}_{J_0}\big( X_{s+t_1} - X_s \in B_1, \ldots, X_{s+t_n} - X_{s+t_{n-1}} \in B_n,\, J_{s+t_n} \in D \,\big|\, \mathcal{F}_s \big) = \mathbb{P}_{J_s}\big( X_{t_1} \in B_1, \ldots, X_{t_n} - X_{t_{n-1}} \in B_n,\, J_{t_n} \in D \big) \]

for any Borel sets B1, . . . , Bn ⊂ R^d and D ∈ S. As the occurring events {X_{s+t_k} − X_{s+t_{k−1}} ∈ B_k} generate the sigma-fields σ(X_{s+u} − X_s, 0 ≤ u ≤ t), this implies by uniqueness of measures that

\[ \mathbb{P}_{J_0}\big( C_s,\, J_{s+t_n} \in D \,\big|\, \mathcal{F}_s \big) = \mathbb{P}_{J_s}\big( C_0,\, J_{t_n} \in D \big) \]

for all Cs ∈ σ(X_{s+u} − Xs, 0 ≤ u ≤ t) and corresponding events C0 ∈ σ(Xu, 0 ≤ u ≤ t), as claimed.

From the definition of the MAP (X, J) one can show (see [2] for the case of S being finite, or [8] for the original treatment) that, since S is at most countable, there exists a sequence of independent d-dimensional Lévy processes {X^j, j ∈ S} with characteristic triplets {(γ_{X^j}, Σ_{X^j}, ν_{X^j}) : j ∈ S} such that, whenever Jt = j on some time interval (t1, t2), the additive component (Xt)_{t1<t<t2} evolves like the Lévy process X^j, while additional jumps of X may occur at the transition times of J. More precisely, X admits the representation

\[ X_t = X_0 + X^{(1)}_t + X^{(2)}_t = X_0 + \int_{(0,t]} \mathrm{d}X^{J_{s-}}_s + \sum_{n \ge 1} \sum_{i,j \in S} \Phi^{ij}_{X,n}\, \mathbf{1}_{\{J_{T_{n-1}} = i,\, J_{T_n} = j,\, T_n \le t\}}, \tag{2.3} \]

where (Tn)n≥0 denotes the sequence of jump times of J and T0 = 0. Conversely, it can be easily checked that every process (X, J), such that (Jt)t≥0 is a continuous-time Markov chain with state space S and X has a representation as in (2.3), is a MAP. Furthermore, note that the process (X^{(1)}_t)t≥0 in (2.3) is a semimartingale whose characteristics are functionals of J, cf. [14]. As (X^{(2)}_t)t≥0 clearly is a process of finite total variation and thus a semimartingale as well (cf. [29, Thm. II.7]), we may and will use the additive component X of a MAP (X, J) as a stochastic integrator. We will always assume that X0 = 0. We refer to X^{(1)} as the pure switching part of X. In the case that Φ^{ij}_{X,n} ≡ 0 a.s. for all n ≥ 0 and all i, j ∈ S we call X a pure switching MAP.

In this paper, we will mostly work with MAPs that have a two-dimensional additive component, i.e. we consider (X, J) = ((ζ, χ), J) = ((ζt, χt), Jt)t≥0 ∈ R^2 × S. For such bivariate MAPs the Lévy processes X^j = (ζ^j, χ^j) and the additional jumps Φ^{ij}_{X,n} are two-dimensional. In particular, in this case the triplets (γ_{X^j}, Σ_{X^j}, ν_{X^j}) consist of a non-random vector (γ_{ζ^j}, γ_{χ^j}) = (γ_ζ(j), γ_χ(j)) ∈ R^2, a symmetric, non-negative definite 2 × 2 matrix

\[ \Sigma_{X^j} = \Sigma_X(j) := \begin{pmatrix} \sigma^2_{\zeta^j,\zeta^j} & \sigma^2_{\zeta^j,\chi^j} \\ \sigma^2_{\zeta^j,\chi^j} & \sigma^2_{\chi^j,\chi^j} \end{pmatrix} = \begin{pmatrix} \sigma^2_{\zeta}(j) & \sigma^2_{\zeta,\chi}(j) \\ \sigma^2_{\zeta,\chi}(j) & \sigma^2_{\chi}(j) \end{pmatrix} \]

and a Lévy measure ν_{X^j} on R^2 \ {(0, 0)}. Note that in this article we will switch between the notation of row vectors (ζ, χ) and the corresponding column vectors (ζ, χ)^T depending on readability only, without marking transpositions. For a more detailed description of Lévy processes and their characteristics we refer to [32]. For more details on MAPs we refer to Section 3.1 below, the seminal works of Çinlar [8], [9], the modern textbook treatment in [2], and the comprehensive Appendix of [11].
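Before moving on, the representation (2.3) can be illustrated by a direct simulation: between jumps of J the additive part accrues increments of the state-dependent Lévy process, and at every transition an extra switching jump is added. In the sketch below (all names and values invented) the per-state Lévy processes are Brownian motions with drift, simulated exactly over each sojourn, and the switching jumps Φ^{ij} are taken deterministic for simplicity.

```python
import numpy as np

# Simulation sketch following representation (2.3) for a two-state MAP:
# per-state Lévy increments plus a switching jump Phi^{ij} at each
# transition of J.  All parameter values are invented.
rng = np.random.default_rng(3)

drift = {0: 1.0, 1: -2.0}            # gamma_{X^j} per state
vol   = {0: 0.5, 1: 1.5}             # Gaussian part per state
hold  = {0: 1.0, 1: 2.0}             # rate of leaving state j
phi   = {(0, 1): 0.3, (1, 0): -0.2}  # deterministic Phi^{ij} for the sketch

def map_endpoint(T=10.0):
    """Return X_T for the two-state MAP sketched above (X_0 = 0)."""
    t, x, j = 0.0, 0.0, 0
    while True:
        sojourn = rng.exponential(1.0 / hold[j])
        if t + sojourn >= T:     # chain does not switch again before T
            dt = T - t
            return x + drift[j] * dt + vol[j] * np.sqrt(dt) * rng.standard_normal()
        # exact Lévy increment of X^j over the sojourn (BM with drift)
        x += drift[j] * sojourn + vol[j] * np.sqrt(sojourn) * rng.standard_normal()
        t += sojourn
        i, j = j, 1 - j
        x += phi[(i, j)]         # extra jump of X at the switch time T_n
```

Setting phi to zero gives a pure switching MAP in the terminology above.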

2.2 Stochastic exponentials and Markov multiplicative processes

Apart from Markov additive processes, our derivation of the Markov-modulated GOU process also relies on a class of Markov processes that we will call Markov multiplicative processes (see Definition 2.3 below). In this subsection, we will introduce these and study their close relation to the class of Markov additive processes.

Recall that for any real-valued semimartingale U = (Ut)t≥0 the (Doléans-Dade) stochastic exponential of U, denoted by E(U) = (E(U)t)t≥0, is defined as the unique solution of the SDE

\[ \mathrm{d}Z_t = Z_{t-}\, \mathrm{d}U_t, \quad t > 0, \qquad Z_0 = 1. \tag{2.4} \]

It has the explicit representation (see e.g. [29, Thm. II.37])

\[ \mathcal{E}(U)_t = \exp\Big( U_t - \tfrac{1}{2}\, [U^c, U^c]_t \Big) \prod_{0 < s \le t} (1 + \Delta U_s)\, e^{-\Delta U_s}, \quad t \ge 0, \tag{2.5} \]
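Formula (2.5) can be sanity-checked numerically against the defining SDE (2.4) in a case where both are computable by hand. The following sketch (a toy example with invented values, not from the paper) takes the finite-variation semimartingale U_t = a·t plus two deterministic jumps, so that [U^c, U^c] = 0 and (2.5) reduces to exp(a·t) ∏_{s≤t} (1 + ΔU_s).

```python
import numpy as np

# Check of the explicit formula (2.5) against an Euler scheme for the
# SDE (2.4), for U_t = a*t plus finitely many deterministic jumps
# (so the continuous bracket [U^c, U^c] vanishes).  Values are invented.
a = 0.4
jumps = {0.3: 0.5, 0.7: -0.2}   # jump time -> jump size Delta U_s

def stoch_exp(t):
    """Evaluate (2.5): exp(U_t) * prod_{s<=t} (1 + dU_s) e^{-dU_s}."""
    u_t = a * t + sum(du for s, du in jumps.items() if s <= t)
    z = np.exp(u_t)
    for s, du in jumps.items():
        if s <= t:
            z *= (1.0 + du) * np.exp(-du)
    return z

def stoch_exp_euler(t, n=100_000):
    """Euler scheme for dZ = Z_- dU, Z_0 = 1, for comparison."""
    dt = t / n
    z, applied = 1.0, set()
    for k in range(1, n + 1):
        z += z * a * dt                     # drift part of dU
        for s, du in jumps.items():
            if s <= k * dt and s not in applied:
                z += z * du                 # jump of Z: Z_- * Delta U_s
                applied.add(s)
    return z
```

The exponential factors e^{−ΔU_s} in (2.5) exactly cancel the jump contributions inside exp(U_t), so both evaluations agree with the classical Doléans-Dade solution exp(a·t) ∏ (1 + ΔU_s) up to the Euler discretization error.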