arXiv:1309.4987v1 [math.PR] 19 Sep 2013

Potential measures of one-sided Markov additive processes with reflecting and terminating barriers

Jevgenijs Ivanovs
University of Lausanne, Quartier UNIL-Dorigny, 1015 Lausanne, Switzerland,
e-mail: [email protected]

Abstract: Consider a one-sided Markov additive process with an upper and a lower barrier, where each can be either reflecting or terminating, and where the process itself can be both defective and non-defective. For all possible scenarios we identify the corresponding potential measures, which generalizes a number of results for one-sided L´evy processes. The resulting rather neat formulas have various applications, and in particular they lead to quasi-stationary distributions of the corresponding processes.

Keywords and phrases: Markov modulation, exit from an interval, transient analysis, quasi-stationary distributions.
AMS subject classifications: 60G51, 60J25, 60J45.

1. Introduction

A Markov additive process (MAP) in continuous time is a natural generalization of a L´evy process with various applications in queueing, risk theory and financial mathematics, see e.g. [1]. It can be seen as a L´evy process in a Markov environment, which provides rich modeling possibilities.

Potential measures have numerous applications, as can be anticipated from the general theory of Markov processes. In the case of spectrally negative L´evy processes various potential measures can be given in an explicit form, see [15, Sec. 8.4] and references therein. It should also be noted that potential measures and their densities readily lead to the distribution of the corresponding process at an exponential time; more can be done in the MAP setting, which will be discussed in the following.

The main exit problems for spectrally negative MAPs are solved in [10], including one-sided and two-sided exit, as well as exit for reflected processes. Following these developments we consider a spectrally negative MAP with two barriers (upper and lower), where each can be either terminating or reflecting, and identify the corresponding potential measures (also known as resolvent measures). It is assumed that a process is stopped (killed) upon its passage over a terminating barrier, i.e. above an upper or below a lower terminating barrier. Also it is allowed to place barriers at ∞ and −∞, which corresponds to a model with less than two barriers. Recent progress in the theory of spectrally negative MAPs allows us to present rather neat formulas and short proofs.
We formulate the problem and discuss the relation between potential measures and quasi-stationary distributions in Section 2. Section 3 reviews some basic results from the exit theory of MAPs, and defines related matrices and matrix-valued functions. Occupation densities, reviewed in Section 4, constitute the main tool for deriving potential densities in the case of terminating barriers, see Section 5. Time-reversal, discussed in Section 6, is the main tool used in Section 7 in deriving potential densities for reflected processes. The above results are obtained for defective processes, which are extended to non-defective ones in Section 8. Some concluding remarks are given in Section 9.

The results on potential measures (and hence on quasi-stationary distributions) are spread among Theorem 1, Theorem 2, Corollary 2 and Corollary 1. In addition, Corollary 4 presents stationary distributions. Finally, this paper contains some useful identities concerning the time-reversed process and various limits.

This paper complements [19], where some of the potential measures were identified using a very different approach. Some formulas may look remarkably different, which is a known issue with MAPs.
2. Preliminaries
A MAP is a bivariate Markov process (X,J)= {(X(t),J(t)) : t ≥ 0} where X is an additive level component and J represents the environment, see [1, Sec. XI.2a]. It is assumed that (X,J) is adapted to some right-continuous, complete filtration {Ft : t ≥ 0}, and that J is an irreducible Markov chain on a finite number of states, say n. The defining property of a MAP states that for any t ≥ 0 and any i =1,...,n conditionally on {J(t)= i} the process
{(X(t + s) − X(t), J(t + s)) : s ≥ 0} is independent of F_t and has the law of {(X(s) − X(0), J(s)) : s ≥ 0} given {J(0) = i}. In other words, the environment governs the increments of the level process. Importantly, this property also holds when t is replaced by any stopping time τ; note that {J(τ) = i} implicitly assumes that τ < ∞. We allow for defective (or killed) processes, i.e. we add an additional absorbing cemetery state to the environment without writing it explicitly. Hence {J(t) = i} also means that the process has survived up to time t. In the special case when n = 1 one obtains a L´evy process, which may be killed at an independent exponential time. Putting the Markov additive property to work requires repeated conditioning on the state of the environment, which leads to matrix algebra and justifies the following notation. For a random variable Y we write E_x[Y ; J(t)] to denote an n × n matrix with ij-th element
E_{x,i}(Y ; J(t) = j) = E(Y 1{J(t)=j} | X(0) = x, J(0) = i).
Similarly, for an event A we write P_x[A, J(t)] = E_x[1_A; J(t)] for the corresponding matrix of probabilities. Moreover, the subscript x can be omitted when x = 0.
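As a toy illustration of this matrix notation (a sketch of our own in Python, not part of the paper; the rate matrix Q and all variable names are hypothetical), one can estimate the matrix P[J(t)] for a two-state environment by Monte Carlo and compare it with the exact answer e^{Qt}:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)

# Hypothetical transition rate matrix Q of a two-state environment J.
Q = np.array([[-1.0, 1.0],
              [2.0, -2.0]])
t, n_sim = 0.8, 20_000

# Monte Carlo estimate of the matrix P[J(t)], whose ij-th entry is
# P(J(t) = j | J(0) = i), by simulating the chain's jump epochs.
P_hat = np.zeros((2, 2))
for i in range(2):
    for _ in range(n_sim):
        s, state = 0.0, i
        while True:
            s += rng.exponential(1.0 / -Q[state, state])  # holding time
            if s > t:
                break
            state = 1 - state                              # switch state
        P_hat[i, state] += 1.0 / n_sim

# For a Markov chain the exact matrix is the matrix exponential e^{Qt}.
assert np.allclose(P_hat, expm(Q * t), atol=0.02)
```

Each row of the estimated matrix sums to 1, since no killing is present in this illustration; a defective process would make the row sums strictly smaller.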
Finally, the identity and the zero matrices are denoted by I and O respectively, and ∆_v stands for a diagonal matrix with a vector v on the diagonal. We can assume that X(0) = 0 and the barriers are placed at −a and at b, where a, b ∈ [0, ∞] (not simultaneously 0), because otherwise one can simply shift the picture. We write |−a,b|, |−a,b], [−a,b|, [−a,b] to denote different scenarios, where | means termination and [ or ] mean reflection at the corresponding barriers. In particular, X_{[−a,b|}(t) is a process reflected at −a and terminated at b, i.e. upon exiting the interval (−∞, b]. In the case when a barrier is placed at −∞ or ∞, i.e. there is no barrier, we write (−∞, ∞), (−∞,b|, (−∞,b], |−a, ∞), [−a, ∞) to denote all possible scenarios, where the first corresponds to a free process, the second means termination upon exiting the interval (−∞, b], and so forth. For a rigorous definition of reflection, both one-sided and two-sided, see e.g. [1, 12]. Here we only recall the simplest cases X_{[0,∞)}(t) = X(t) − X̲(t) and X_{(−∞,0]}(t) = X(t) − X̄(t), where X̲(t) = inf_{s≤t} X(s) and X̄(t) = sup_{s≤t} X(s) are the running infimum and supremum respectively, and note that any reflecting barrier locally acts in a similar way, cf. [1, Ch. XIV.3]. For each of the above scenarios I (e.g. I = |−a,b]) we consider the corresponding potential measure (a matrix of measures)
U_I(A) = ∫_0^∞ P[X_I(t) ∈ A, J(t)] dt = ( E_i ∫_0^∞ 1{X_I(t)∈A, J(t)=j} dt ), (1)

where A ∈ B[−a, b]. It turns out that in all the cases the measure U_I(A) has a density u_I(x) on (−a, b) with respect to Lebesgue measure. We identify this density and compute the point masses at −a and b. It will be shown that there is never a point mass at the level of a terminating barrier, and hence we only specify the point masses for the levels of reflecting barriers. One of the possible uses of potential measures is given by the following basic formula
E_i ∫_0^∞ f(X(t), J(t)) dt = Σ_j ∫_R f(x, j) U_{ij}(dx), (2)

where f ≥ 0 is a measurable function, which is equal to 0 for the cemetery state of the environment. Recall that we allow for defective MAPs, in which case one can think that the time runs only up to the killing epoch (when J enters the cemetery state). A defective MAP can be seen as a non-defective MAP killed at some rate q_i ≥ 0 while J is in i, for all i = 1,...,n. A special case arises when q_i = q for all i, i.e. a MAP is killed at an independent exponential time e_q of rate q. Then
P[X_I(e_q) ∈ A, J(e_q)] = q ∫_0^∞ e^{−qt} P[X_I(t) ∈ A, J(t)] dt = q U_I^q(A), (3)

where the superscript q is used to distinguish between objects corresponding to a MAP and its killed version. This relation can be generalized in the following way. Let q = (q_1,...,q_n) be a vector of killing rates and let T be the killing time, i.e. the life-time of the Markov chain J, implying that T has a corresponding phase-type distribution. Then a standard argument shows that
P[X_I(T) ∈ A, J(T)] = ∫_0^∞ P^q[X_I(t) ∈ A, J(t)] dt ∆_q = U_I^q(A)∆_q.

One can think of an independent Poissonian observer whose rate q_i depends on the environment; then U_I^q(A)∆_q gives the distribution of the process at his first observation epoch T.
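Relation (3) can be illustrated numerically in the scalar case n = 1 (a minimal Monte Carlo sketch of our own, assuming X is a Brownian motion with drift; all parameter values are hypothetical): the left side is the fraction of paths found in A at the exponential observation epoch, while q U^q(A) is q times the expected occupation time of A before that epoch.

```python
import numpy as np

rng = np.random.default_rng(1)
q, mu = 1.0, -0.5                    # killing rate and drift (hypothetical)
dt, n_paths, n_steps = 0.02, 5000, 1000
A = (0.5, 1.5)                       # the set A, here an interval

# Euler skeletons of Brownian paths with drift mu, started at X(0) = 0.
incr = mu * dt + np.sqrt(dt) * rng.normal(size=(n_paths, n_steps))
X = np.cumsum(incr, axis=1)

# Independent exponential observation times e_q, truncated to the grid
# (the grid horizon is 20 time units, so truncation is negligible).
eq = rng.exponential(1.0 / q, size=n_paths)
kill_idx = np.minimum((eq / dt).astype(int), n_steps - 1)

in_A = (X > A[0]) & (X < A[1])

# Left side of (3): P[X(e_q) in A].
lhs = in_A[np.arange(n_paths), kill_idx].mean()

# Right side: q * U^q(A) = q * E[occupation time of A before e_q].
alive = np.arange(n_steps) < kill_idx[:, None]
rhs = q * ((in_A & alive).sum(axis=1) * dt).mean()

assert abs(lhs - rhs) < 0.03
```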
3. Exit theory review
Throughout this paper it is assumed that the level component X has no positive jumps, and that for each i the process X given {J(0) = i} visits (0, ∞) before the switch of the environment with positive probability (none of the underlying L´evy processes is a downward subordinator). The second assumption, also appearing in [10], allows us to greatly simplify our notation and to avoid unpleasant technical difficulties. Let us briefly review the exit theory of such MAPs. For x ≥ 0 define the first passage times
τ_x^+ = inf{t ≥ 0 : X(t) > x}, τ_{−x}^− = inf{t ≥ 0 : X(t) < −x}.
In addition we define the first hitting time of a level x ∈ R by
τ{x} = inf{t > 0 : X(t) = x}.
It is known that τ_x^+ ≤ τ{x} and τ_{−x}^− ≤ τ{−x} for x ≥ 0 a.s., i.e. X cannot hit a level without immediately passing over it, which can be seen from the small-time behavior of L´evy processes, see also [10, Prop. 7]. There is a matrix-valued function F(α), which characterizes the law of the process (X, J), and in particular E[e^{αX(t)}; J(t)] = e^{F(α)t}, α, t ≥ 0. Note that F(0) is the transition rate matrix of J, and hence our MAP is non-defective if and only if F(0)1 = 0, where 1 and 0 denote column vectors of 1s and 0s respectively. In this case let a row vector π be the stationary distribution of J, and let µ = E_π X(1) be the asymptotic drift. According as µ < 0, µ > 0 or µ = 0, the process X(t) (as t → ∞) tends to −∞, tends to ∞, or oscillates between −∞ and ∞, see [1, Prop. XI.2.10]. It is not difficult to see that {J(τ_x^+), x ≥ 0} is a Markov chain with some transition rate matrix G, so that
P[J(τ_x^+)] = e^{Gx}, x ≥ 0.
It is known that G is a right solution (unique in a certain class) to a matrix integral equation F(−G) = O, see [6] for details. We use the symbol G to be consistent with the theory of discrete time skip-free upwards MAPs, see [1, Sec. XI.3]. In fact, there are numerous similarities between discrete and continuous time theories, which are explored in [11]. Analogously to the discrete time case, one defines a matrix R as the unique left solution to F(−R) = O, but see also (10). It is known that both G and R are non-singular, unless the MAP is non-defective and µ ≥ 0, in which case they both have a simple eigenvalue at 0. Another important object in the exit theory for MAPs is a matrix-valued function W(x), which is continuous on [0, ∞) and is identified by the transform
∫_0^∞ e^{−αx} W(x) dx = F(α)^{−1}

for large enough α; we put W(x) = O for x < 0. It holds that W(x) is non-singular for x > 0 and

P[τ_b^+ < τ_{−a}^−; J(τ_b^+)] = P[τ_b^+ < τ{−a}; J(τ_b^+)] = W(a)W(a + b)^{−1}. (4)

The reasons for the first equality are the following. Firstly, X cannot hit the level −a without immediately going below it, and secondly, if X jumps below −a then it has to hit −a on the way to b.

Let us discuss differentiability of W(x) for x > 0. In [10, Thm. 5] it is shown that both left and right derivatives, W′_−(x) and W′_+(x) respectively, exist for all x > 0, but may not coincide at countably many points. Furthermore, this result can be used to show that lim_{y↑x} W′_+(y) = W′_−(x), and that W′_+(x) is Riemann integrable on any interval [0, a]. When W′_+(x) enters into a density representation, we simply write W′(x). Define yet another function

Z(x) = I − ∫_0^x W(y) dy F(0),

which is used in various identities, and in particular in

P[J(τ_{−a}^−)] = Z(a) − W(a)R^{−1}F(0), (5)

where in the case of a non-defective process with µ ≥ 0 the term R^{−1}F(0) should be interpreted in a limiting sense (by letting the killing rates approach 0), see (10) and [10]. In general, the function Z is defined as a function of two parameters α and x, but for the purpose of this paper the case α = 0 is sufficient. Let us finally remark on the simplifications arising when one considers a L´evy process, i.e. when J lives on a single state. Firstly, all the matrices become scalars, and in particular F(0) = −q, where q ≥ 0 is the killing rate. Moreover, G = R = −Φ, where Φ ≥ 0 is called the right-inverse of the Laplace exponent F(α), i.e. it solves F(Φ) = 0. More precisely, Φ = 0 when q = 0 and µ ≥ 0, and otherwise Φ is the unique positive zero of F(α).
4. Occupation densities
Let us consider L(x, j, t), the occupation density of (X, J) at (x, j) up to time t, which is defined in [10] based on the corresponding theory for L´evy processes. Firstly,
L(x, j, t) always exists under our assumptions. It is nonnegative, measurable in x, and for all t ≥ 0 and all measurable f ≥ 0 satisfies
∫_0^t f(X(s), J(s)) ds = Σ_j ∫_R f(x, j) L(x, j, t) dx a.s., (6)

where f is 0 for the cemetery state. Compare this occupation density formula to (2). Note that L leads to a more refined formula, which, however, is valid only for a free process; moreover, L lacks an explicit expression. Importantly, for every x ∈ R and every j, {L(x, j, t) : t ≥ 0} is an F_t-adapted process, which increases only when X = x, J = j, and it inherits the following additive property. Consider a stopping time τ such that X(τ) = y ∈ R on {J(τ) = i} for some y and i. Then on the event {J(τ) = i} the shifted process {L(x, j, τ + s) − L(x, j, τ) : s ≥ 0} is independent of F_τ and has the law of {L(x − y, j, s) : s ≥ 0} given {J(0) = i}. Let us define another important matrix-valued function H(x) by
H_{ij}(x) = E_i L(0, j, τ_x^+)

for x > 0 and H(0) = lim_{x↓0} H(x). In other words, H(x) is a matrix of expected occupation densities at 0 for the time interval [0, τ_x^+). The following relation is obtained in [10]:

W(x) = e^{−Gx}H(x) (7)

for x ≥ 0. Let also H = H(∞) be the expected occupation density at 0. It is known that H has finite entries and is invertible unless the process is non-defective and µ = 0. In the rest of this section we suppose that the process is either defective or µ ≠ 0. Then H has finite entries and by the additive property of L we have for x ≥ 0

H(x) = H − P[J(τ_x^+)] P[J(τ{−x})] H. (8)

So combining (7) and (8) we get
P[J(τ{x})] = e^{Gx} − W(−x)H^{−1} (9)

for x ≤ 0, but then it is clearly true also for x > 0. Finally, in [11] the following relation between R and G is obtained:
R = H^{−1}GH. (10)
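Relation (10) is a similarity transform, so G and R share their spectrum and H^{−1}e^{Gx}H = e^{Rx}, a fact used below in the proof of Theorem 1. A generic linear-algebra sketch (the matrices G and H here are arbitrary placeholders of our own, not those of an actual MAP):

```python
import numpy as np
from scipy.linalg import expm

G = np.array([[-2.0, 1.0], [0.5, -1.5]])   # placeholder rate-like matrix
H = np.array([[1.0, 0.2], [0.3, 2.0]])     # placeholder invertible matrix
Hinv = np.linalg.inv(H)
R = Hinv @ G @ H                            # relation (10)

# Similar matrices share the spectrum ...
assert np.allclose(np.sort(np.linalg.eigvals(G)), np.sort(np.linalg.eigvals(R)))

# ... and conjugation commutes with the matrix exponential:
x = 0.7
assert np.allclose(Hinv @ expm(G * x) @ H, expm(R * x))
```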
Often it is simpler and more convenient to work with defective processes. In order to retrieve identities for non-defective processes, one can let the killing rates go to 0. We follow this idea in the rest of the paper and provide comments concerning non-defective processes in Section 8.
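The occupation density formula (6) can also be checked numerically in the scalar case (a discretized sketch of our own, assuming a Brownian path skeleton; the binned histogram of the path plays the role of L):

```python
import numpy as np

rng = np.random.default_rng(2)
dt, n_steps = 0.001, 200_000

# Skeleton of a Brownian path with drift -0.1, started at 0.
X = np.concatenate(([0.0], np.cumsum(
    -0.1 * dt + np.sqrt(dt) * rng.normal(size=n_steps))))

f = lambda x: np.exp(-x**2)           # any measurable f >= 0

# Left side of (6): time integral of f along the path (n = 1, no cemetery).
lhs = f(X).sum() * dt

# Right side: space integral of f against a binned occupation density,
# i.e. time spent near x per unit of space.
edges = np.linspace(X.min() - 0.1, X.max() + 0.1, 801)
occ, _ = np.histogram(X, bins=edges)
L = occ * dt / np.diff(edges)
mids = 0.5 * (edges[:-1] + edges[1:])
rhs = (f(mids) * L * np.diff(edges)).sum()

assert abs(lhs - rhs) / lhs < 0.02
```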
5. Terminating barriers
Let us present results for all the cases when there are no reflecting barriers.

Theorem 1. For a defective MAP it holds that
u_{(−∞,∞)}(x) = e^{Gx}H − W(−x) = He^{Rx} − W(−x), (11)
u_{(−∞,b|}(x) = e^{Gb}W(b − x) − W(−x), (12)
u_{|−a,∞)}(x) = W(a)e^{R(x+a)} − W(−x), (13)
u_{|−a,b|}(x) = W(a)W(a + b)^{−1}W(b − x) − W(−x), (14)

where u_I(x) is a density of the measure U_I(dx) on the corresponding interval, see (1).

In the case of a L´evy process this result is due to [5, 20, 4]. The methods employed in these papers are different from ours. In particular, the last paper relies on the Wiener-Hopf factorization. It should be noted that for a L´evy process H = 1/F′(Φ), yielding

u_{(−∞,∞)}(x) = (1/F′(Φ)) e^{−Φx} − W(−x),

which agrees with a representation appearing in [18, Thm. 1].
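In the Brownian case the last representation can be checked against the classical resolvent density: for X a Brownian motion with drift µ, unit variance and killing rate q, the potential density of the free process is known to be e^{µx−δ|x|}/δ with δ = √(µ² + 2q). A scalar numerical sketch (our own, under these assumptions):

```python
import numpy as np

mu, q = 0.4, 0.6
delta = np.sqrt(mu**2 + 2 * q)
Phi, rho = -mu + delta, mu + delta   # roots of F(a) = a^2/2 + mu*a - q
Fprime_Phi = Phi + mu                # F'(Phi) = delta

# Scale function of this killed Brownian motion (cf. Section 3).
W = lambda x: (np.exp(Phi * x) - np.exp(-rho * x)) / delta if x >= 0 else 0.0

# u_{(-inf,inf)}(x) = e^{-Phi x}/F'(Phi) - W(-x), the scalar form of (11).
u = lambda x: np.exp(-Phi * x) / Fprime_Phi - W(-x)

# Known q-resolvent density of Brownian motion with drift, started at 0.
u_known = lambda x: np.exp(mu * x - delta * abs(x)) / delta

for x in (-2.0, -0.5, 0.3, 1.7):
    assert abs(u(x) - u_known(x)) < 1e-12
```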
Proof of Theorem 1. When computing the densities u_I(x) we will not consider the case x = 0, since u_I(0) can be arbitrary. The potential measure of the free process (X, J) is given by
U_{ij}(A) = E_i ∫_0^∞ 1{X(t)∈A, J(t)=j} dt = E_i ∫_A L(x, j, ∞) dx

according to (6), and hence it has a density u_{ij}(x) = E_i L(x, j, ∞). By the additive property of L and (9) we write
u(x) = P[J(τ{x})]H = e^{Gx}H − W(−x).
Together with (10) this proves (11). Similarly we find for x < b
u_{(−∞,b|}(x) = E_i L(x, j, τ_b^+) = P[J(τ{x})]H − P[J(τ_b^+)] P[J(τ{x−b})]H
= e^{Gx}H − W(−x) − e^{Gb}(e^{G(x−b)}H − W(b − x)) = e^{Gb}W(b − x) − W(−x).
For x> −a we have
u_{|−a,∞)}(x) = E_i L(x, j, τ_{−a}^−) = P[J(τ{x})]H − P[J(τ{−a})] P[J(τ_{a+x}^+)]H
= e^{Gx}H − W(−x) − (e^{−Ga} − W(a)H^{−1}) e^{G(x+a)}H = W(a)e^{R(x+a)} − W(−x),

where we used (10) and the fact that if a process has to pass over −a and then return to x > −a then it has to hit −a. Finally, for −a