On Rd-valued multi-self-similar Markov processes

Loïc Chaumont1 Salem Lamine2 September 6, 2018

Abstract

An $\mathbb{R}^d$-valued Markov process $X_t^{(x)} = (X_t^{1,x_1},\dots,X_t^{d,x_d})$, $t \ge 0$, $x \in \mathbb{R}^d$, is said to be multi-self-similar with index $(\alpha_1,\dots,\alpha_d) \in [0,\infty)^d$ if the identity in law

$(c_i X_t^{i,x_i/c_i};\; i = 1,\dots,d)_{t\ge0} \overset{(d)}{=} (X_{ct}^{(x)})_{t\ge0}\,,$

where $c = \prod_{i=1}^d c_i^{\alpha_i}$, is satisfied for all $c_1,\dots,c_d > 0$ and all starting points $x$. Multi-self-similar Markov processes were introduced by Jacobsen and Yor [11] with the aim of extending the Lamperti transformation of positive self-similar Markov processes to $\mathbb{R}_+^d$-valued processes. This paper aims at giving a complete description of all $\mathbb{R}^d$-valued multi-self-similar Markov processes. We show that their state space is always a union of open orthants with 0 as the only absorbing state and that there is no finite entrance law at 0 for these processes. We give conditions for these processes to satisfy the Feller property. Then we show that a Lamperti-type representation is also valid for $\mathbb{R}^d$-valued multi-self-similar Markov processes. In particular, we obtain a one-to-one relationship between this set of processes and the set of Markov additive processes with values in $\{-1,1\}^d \times \mathbb{R}^d$. We then apply this representation to study the almost sure asymptotic behavior of multi-self-similar Markov processes.

Keywords: Multi-self-similarity, Markov additive process, Lévy process, time change.

AMS MSC 2010: 60J45

1 Introduction

A Markov process $\{(X_t)_{t\ge0}, \mathbb{P}_x\}$ with state space $E \subset \mathbb{R}^d$ satisfies the multi-scaling property of index $\alpha = (\alpha_1,\dots,\alpha_d) \in [0,\infty)^d$ if for all $x = (x_1,\dots,x_d) \in E$ and $c = (c_1,\dots,c_d) \in (0,\infty)^d$,

$\{(X_{c^\alpha t})_{t\ge0}, \mathbb{P}_{c\circ x}\} = \{(c\circ X_t)_{t\ge0}, \mathbb{P}_x\}\,,$ (1.1)

where $c^\alpha := \prod_{i=1}^d c_i^{\alpha_i}$ and $\circ$ denotes the Hadamard product, that is, $c\circ x := (c_1x_1,\dots,c_dx_d)$. These processes, called multi-self-similar Markov processes, were introduced by Jacobsen

1 LAREMA UMR CNRS 6093, Université d'Angers, 2, Bd Lavoisier, Angers Cedex 01, 49045, France. Email: [email protected]
2 University of Monastir, Faculty of Sciences, Monastir, and LAREMA UMR CNRS 6093, Université d'Angers, 2, Bd Lavoisier, Angers Cedex 01, 49045, France. Email: [email protected]

and Yor in [11] in order to extend the famous Lamperti representation to $(0,\infty)^d$-valued Markov processes. They proved that any $(0,\infty)^d$-valued multi-self-similar Markov process $\{(X_t^{(1)},\dots,X_t^{(d)})_{t\ge0}, \mathbb{P}_x\}$ can be represented as

$X_t^{(i)} = \exp \xi_{\tau_t}^{(i)}\,,$ (1.2)

where $(\xi^{(1)},\dots,\xi^{(d)})$ is a $d$-dimensional Lévy process issued from $(\log x_1,\dots,\log x_d)$ and $\tau_t = \inf\{s : \int_0^s \exp(\alpha_1\xi_u^{(1)} + \cdots + \alpha_d\xi_u^{(d)})\,du > t\}$. They also proved that, conversely, for any Lévy process $(\xi^{(1)},\dots,\xi^{(d)})$, the transformation (1.2) defines a $(0,\infty)^d$-valued multi-self-similar Markov process.
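To make the time change in (1.2) concrete, here is a small numerical sketch (our own illustration, not code from the paper): we discretize the additive functional $\int_0^s \exp(\alpha_1\xi_u^{(1)}+\cdots+\alpha_d\xi_u^{(d)})\,du$, invert it to obtain $\tau_t$, and exponentiate. Taking a pure-drift Lévy process $\xi_s^{(i)} = \log x_i + b_i s$ gives a closed form to check the discretization against; the names `lamperti` and `path` are ours.

```python
import math

def lamperti(x, alpha, xi, t, ds=1e-4, s_max=100.0):
    """Approximate X_t = (exp(xi^{(1)}_{tau_t}), ..., exp(xi^{(d)}_{tau_t})) from (1.2).

    xi : callable s -> value at time s of a d-dimensional Levy path issued
         from (log x_1, ..., log x_d); tau_t inverts the exponential functional.
    """
    clock, s = 0.0, 0.0
    while s < s_max:
        v = xi(s)
        clock += math.exp(sum(a * vi for a, vi in zip(alpha, v))) * ds
        if clock > t:                 # tau_t = inf{s : integral > t}
            return [math.exp(vi) for vi in v]
        s += ds
    raise ValueError("t beyond the horizon of the discretization")

# Check against the closed form for a pure-drift Levy process
# xi^{(i)}_s = log x_i + b_i s: with abar = sum_i alpha_i b_i and
# x^alpha = prod_i x_i^{alpha_i},  X_t^{(i)} = x_i (1 + abar*t*x^{-alpha})**(b_i/abar).
x, alpha, b, t = [2.0, 0.5], [1.0, 2.0], [0.3, -0.2], 1.5
path = lambda s: [math.log(x0) + b0 * s for x0, b0 in zip(x, b)]
approx = lamperti(x, alpha, path, t)
abar = sum(a * b0 for a, b0 in zip(alpha, b))
xa = math.prod(x0 ** a for x0, a in zip(x, alpha))
exact = [x0 * (1 + abar * t / xa) ** (b0 / abar) for x0, b0 in zip(x, b)]
```

The grid inversion agrees with the closed form to within the step size, which is all the sketch is meant to show.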

On the other hand, in the recent work [1], the authors showed that there is a way to extend the Lamperti representation to all standard self-similar Markov processes with values in $\mathbb{R}^d\setminus\{0\}$. This extension defines a bijection between the set of these processes and that of $S^{d-1}\times\mathbb{R}$-valued Markov additive processes, where $S^{d-1}$ is the sphere in dimension $d$. Actually, this representation does not provide a complete description of $\mathbb{R}^d$-valued self-similar Markov processes, since it does not give any information on the existence of an entrance law at 0 or of a recurrent extension after the first passage time at 0. Regarding these questions, only the real case has been investigated up to now, see [15], [9], [13] and the references therein.

We show in this paper that, unlike for self-similar Markov processes, a complete description of $\mathbb{R}^d$-valued multi-self-similar Markov processes can be given through a Lamperti-type representation. The multi-scaling property generates quite specific properties of the process. In particular, any element $x\in\mathbb{R}^d$ such that $x_1x_2\cdots x_d = 0$ is absorbing, and the state space can always be reduced to a union of open orthants with 0 as the only absorbing state. It implies that there is no continuous multi-self-similar Markov process whose state space covers the whole set $\mathbb{R}^d$. Moreover, there is no finite multi-self-similar entrance law for these processes. We will then prove that, provided the process has infinite lifetime, the Feller property is satisfied on the whole state space, see Theorem 1. These features will be proved in Section 2 and will be used in Section 3 to show that (1.2) can be extended to $\mathbb{R}^d$-valued processes, so that

$X_t^{(i)} = J_{\tau_t}^{(i)} \exp \xi_{\tau_t}^{(i)}\,,$ (1.3)

where $(J^{(i)}, \xi^{(i)})_{1\le i\le d}$ is a Markov additive process with values in $\{-1,1\}^d\times\mathbb{R}^d$, and $\tau_t = \inf\{s : \int_0^s \exp(\alpha_1\xi_u^{(1)} + \cdots + \alpha_d\xi_u^{(d)})\,du > t\}$, see Theorem 2.

Markov additive processes can be considered as generalizations of $d$-dimensional Lévy processes. Roughly speaking, $(\xi^{(i)})_{1\le i\le d}$ behaves like a new Lévy process in between each pair of jumps of the continuous time Markov chain $(J^{(i)})_{1\le i\le d}$. Together with the representation (1.3), they provide an easy means to construct many concrete examples of multi-self-similar Markov processes. This quite simple structure will also be exploited for the description of their path properties. We will show in Subsection 3.3, see Theorem 3, that these processes actually cannot reach the set $\{x\in\mathbb{R}^d\setminus\{0\} : x_1x_2\cdots x_d = 0\}$ in a continuous way. Moreover, contrary to self-similar Markov processes, the finiteness of their lifetime depends on their scaling index. Then we will describe the behavior of multi-self-similar Markov processes in the left neighborhood of their lifetime. In particular, we will study the existence of a limit and give conditions for this limit to be 0, when it exists.

2 General properties of mssMp's

2.1 The Markovian framework.

Let us first set some notation. For $a = (a_1,\dots,a_d)\in[0,\infty)^d$ and $b = (b_1,\dots,b_d)\in\mathbb{R}^d$, we set $a^b = a_1^{b_1}\cdots a_d^{b_d}$. For $x\in\mathbb{R}^d$, we set $\operatorname{sign}(x) := (\operatorname{sign}(x_1),\dots,\operatorname{sign}(x_d))$, where $\operatorname{sign}(0) = 0$, and for all $s\in\{-1,0,1\}^d$, we define the set

$Q_s = \{x\in\mathbb{R}^d : \operatorname{sign}(x) = s\}\,.$ (2.1)

If $s\in\{-1,1\}^d$, then $Q_s$ is called an open orthant. In all the remainder of this work, unless explicitly stated, we will assume that $d\ge2$. Let us emphasize that most of our results would not apply for $d = 1$. However, in the latter case multi-self-similar Markov processes coincide with self-similar Markov processes, which have already been extensively studied in the literature, see [12], [4], [9], [15] or [13], for instance.

In this subsection, we give a proper definition of multi-self-similar Markov processes and describe the general form of their state space. Let $E$ be a subset of $\mathbb{R}^d$ which is locally compact with a countable base and let $\mathcal{E}$ be its Borel $\sigma$-field. Let

$(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\ge0}, (X_t)_{t\ge0}, (\mathbb{P}_x)_{x\in E})$ be a Markov process with values in $(E, \mathcal{E})$, where $(\Omega, \mathcal{F})$ is some measurable space, $(\mathbb{P}_x)_{x\in E}$ are measures on $(\Omega, \mathcal{F})$ such that $\mathbb{P}_x(X_0 = x) = 1$ for all $x\in E$, and $(\mathcal{F}_t)_{t\ge0}$ is some filtration to which $X = (X_t)_{t\ge0}$ is adapted and which is completed with respect to the measures $(\mathbb{P}_x)_{x\in E}$. Such a process will be denoted by $\{X, \mathbb{P}_x\}$ and will simply be referred to as an $E$-valued Markov process.

We will now consider $E$-valued Markov processes which satisfy the multi-scaling property (1.1). We stress the fact that this property supposes the condition:

$Q_{\operatorname{sign}(x)}\subset E$, for all $x\in E$.

Define $T = \inf\{t : X_t \neq X_0\}$. We say that $x\in E$ is a holding state if $\mathbb{P}_x(T > 0) = 1$ and that it is an absorbing state if $\mathbb{P}_x(T = \infty) = 1$.

Proposition 1. Let {X, Px} be an E-valued Markov process. Assume moreover that {X, Px} is right continuous and satisfies (1.1). Then,

1. Each $x\in E$ such that $x_1x_2\cdots x_d = 0$ is an absorbing state.

2. If $x\in E$ is a holding state (resp. an absorbing state), then all elements of the set $Q_{\operatorname{sign}(x)}$ are holding states (resp. absorbing states).

Proof. Let $x\in E$ such that $x_1x_2\cdots x_d = 0$. With no loss of generality we can assume that $x_1 = 0$. Let $c\in(0,\infty)^d$ such that $c_2 = c_3 = \cdots = c_d = 1$. Then the multi-scaling property (1.1) entails that the following identity in law holds for all $c_1 > 0$ and $t\ge0$,

$\{(X^{(1)}_{c_1^{\alpha_1}t}, X^{(2)}_{c_1^{\alpha_1}t},\dots,X^{(d)}_{c_1^{\alpha_1}t}), \mathbb{P}_{(0,x_2,\dots,x_d)}\} = \{(c_1X^{(1)}_t, X^{(2)}_t,\dots,X^{(d)}_t), \mathbb{P}_{(0,x_2,\dots,x_d)}\}\,.$ (2.2)

But letting c1 go to 0 and using the fact that {X, Px} is right continuous at 0, we obtain that for all t ≥ 0,

$\mathbb{P}_{(0,x_2,\dots,x_d)}\big((X_t^{(1)}, X_t^{(2)},\dots,X_t^{(d)}) = (0,x_2,\dots,x_d)\big) = 1\,.$

Therefore the state $x$ must be an absorbing state. The second assertion follows directly from the multi-scaling property (1.1).

Let $\{X, \mathbb{P}_x\}$ be a Markov process satisfying the conditions of Proposition 1. Since all $x\in E$ such that $x_1x_2\cdots x_d = 0$ are absorbing, we can send the process to 0 whenever it reaches the set $\{x\in E : x_1x_2\cdots x_d = 0\}$. Moreover, from part 2 of Proposition 1, if $x\in E$ is an absorbing state such that $x_i\neq0$ for all $i = 1,\dots,d$, then all states of the orthant $Q_{\operatorname{sign}(x)}$ are absorbing. Then we will remove absorbing orthants as well as the set $\{x\in E : x_1x_2\cdots x_d = 0\}\setminus\{0\}$ from the state space, so that with no loss of generality we can claim that any right continuous Markov process satisfying (1.1) has a state space of the form

$E\cup\{0\}$, where $E = \bigcup_{s\in S}Q_s\,,$ (2.3)

and where $S$ is some subset of $\{-1,1\}^d$. Moreover, 0 is the only absorbing state.

From now on, $E$ will always be a set of the form given in (2.3). In order to define multi-self-similar Markov processes, we need further usual assumptions. In particular, we will consider the set $E_0 := E\cup\{0\}$ as the Alexandroff one-point compactification of $E$. This means that the open sets of $E_0$ are those of $E$ and all sets of the form $\{0\}\cup K^c$, where $K$ is a compact subset of $E$. The latter sets form a neighborhood system for 0, and this particular state is called the point at infinity. Then $E_0$ endowed with this topology is a compact space. In particular, for a sequence $x^{(n)}$ of $E_0$, $\lim_n x^{(n)} = 0$ if and only if

$\lim_n \min\big(|x_i^{(n)}|,\, |x_i^{(n)}|^{-1},\; i = 1,\dots,d\big) = 0\,.$ (2.4)
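In words, (2.4) says that a sequence leaves every compact subset of $E$ exactly when some coordinate approaches either 0 or $\pm\infty$. A two-line helper (our own illustration, with a hypothetical name) makes the criterion explicit:

```python
def dist_to_point_at_infinity(x):
    """The quantity in (2.4): x^(n) -> 0 in E_0 iff this tends to 0."""
    return min(min(abs(xi), 1.0 / abs(xi)) for xi in x)

# First coordinate blows up, so the sequence converges to 0 in E_0:
seq = [(float(n), 1.0) for n in range(1, 6)]
vals = [dist_to_point_at_infinity(p) for p in seq]
```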

We denote by $\mathcal{E}_0$ the Borel $\sigma$-field of $E_0$. Let us set $\zeta := \inf\{t : X_t = 0\}$. The random time $\zeta$ is called the lifetime of $\{X, \mathbb{P}_x\}$, and the latter process is said to be absorbed at 0.

In all the remainder of this paper, we will be dealing with Hunt processes, whose definition we recall from Section I.9 of [3] and Section A.2 of [7]. An $E$-valued Markov process $\{X, \mathbb{P}_x\}$ absorbed at 0 is a Hunt process if:

(i) it is a strong Markov process,

(ii) its paths are right continuous on [0, ∞) and have left limits on (0, ∞),

(iii) it has quasi-left continuous paths on (0, ∞).

Such a process will be called an E-valued Hunt process absorbed at 0.

Definition 1. A multi-self-similar Markov process (mssMp) with index $\alpha\in[0,\infty)^d$ is an $E$-valued Hunt process absorbed at 0, which satisfies the multi-scaling property (1.1). The state space of mssMp's is always of the form given in (2.3) and 0 is the only absorbing state. In the sequel, such a process will simply be referred to as an $E$-valued mssMp with index $\alpha\in[0,\infty)^d$.

Some examples of mssMp's with state space $E = (0,\infty)^d$ are given in [11]. For more general sets $E$ of the form (2.3), let us mention the following simple examples.

1) Let $(J_t^{(1)},\dots,J_t^{(d)})_{t\ge0}$ be any continuous time Markov chain with values in some subset $S$ of $\{-1,1\}^d$ and starting at $(1,1,\dots,1)\in S$. Let $\alpha\in[0,\infty)^d$ and set $\bar\alpha = \alpha_1+\cdots+\alpha_d$.

For each $x\in E$, let $\mathbb{P}_x$ be the probability measure which assigns to the process $X$ the law of

$(x_1J^{(1)}_{t/x^{\alpha}},\dots,x_dJ^{(d)}_{t/x^{\alpha}})\,,\quad t\ge0.$ (2.5)

Then we readily check that {X, Px} is an E-valued mssMp with index α.
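The multi-scaling property of example 1 can even be checked pathwise: if the chain path is frozen, the process started at $c\circ x$ and observed at time $c^\alpha t$ coincides with $c\circ X_t$ started at $x$, deterministically. A sketch under assumed parameters (the step path `J` below is hypothetical):

```python
import math

def J(s):
    """A frozen cadlag chain path on {-1,1}^2 (hypothetical)."""
    return (1 if s < 1.0 else -1, 1 if s < 2.5 else -1)

def X(x, alpha, t):
    """Example 1, formula (2.5): X_t = (x_1 J^{(1)}_{t/x^alpha}, ..., x_d J^{(d)}_{t/x^alpha})."""
    xa = math.prod(xi ** a for xi, a in zip(x, alpha))   # x^alpha
    return tuple(xi * j for xi, j in zip(x, J(t / xa)))

alpha, x, c, t = (1.0, 0.5), (2.0, 3.0), (0.25, 4.0), 5.0
ca = math.prod(ci ** a for ci, a in zip(c, alpha))       # c^alpha
cx = tuple(ci * xi for ci, xi in zip(c, x))              # c o x
lhs = X(cx, alpha, ca * t)          # started at c o x, observed at time c^alpha * t
rhs = tuple(ci * v for ci, v in zip(c, X(x, alpha, t)))  # c o X_t, started at x
```

Both sides agree exactly, because $(c\circ x)^\alpha = c^\alpha x^\alpha$ makes the time changes match.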

2) A slightly more sophisticated example is given by the law of

$\Big(x_1(1+\bar\alpha t x^{-\alpha})^{1/\bar\alpha}\, J^{(1)}_{\ln(1+\bar\alpha t x^{-\alpha})^{1/\bar\alpha}},\;\dots,\; x_d(1+\bar\alpha t x^{-\alpha})^{1/\bar\alpha}\, J^{(d)}_{\ln(1+\bar\alpha t x^{-\alpha})^{1/\bar\alpha}}\Big)\,,\quad t\ge0,$ (2.6)

where we assume that $\bar\alpha > 0$. Again the process $\{X, \mathbb{P}_x\}$ is an $E$-valued mssMp with index $\alpha$.
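Formula (2.6) is what the Lamperti-type form produces when each coordinate of $\xi$ has unit drift: the time change solves $\int_0^\tau x^\alpha e^{\bar\alpha u}\,du = t$, i.e. $\tau_t = \frac{1}{\bar\alpha}\ln(1+\bar\alpha t x^{-\alpha})$. The sketch below (our own check, with an assumed frozen path of $J$) compares the two expressions numerically:

```python
import math

def J(s):
    """Frozen chain path on {-1,1}^2 (hypothetical)."""
    return (1, -1) if s < 0.2 else (-1, -1)

def X26(x, alpha, t):
    """Example 2, formula (2.6), along the frozen path of J."""
    abar = sum(alpha)
    xa = math.prod(xi ** a for xi, a in zip(x, alpha))   # x^alpha
    g = (1 + abar * t / xa) ** (1 / abar)
    return tuple(xi * g * ji for xi, ji in zip(x, J(math.log(g))))

def Xlamperti(x, alpha, t):
    """Lamperti form with xi^{(i)}_s = log x_i + s: X_t^{(i)} = J^{(i)}_{tau_t} x_i e^{tau_t}."""
    abar = sum(alpha)
    xa = math.prod(xi ** a for xi, a in zip(x, alpha))
    tau = math.log(1 + abar * t / xa) / abar             # closed-form time change
    return tuple(xi * math.exp(tau) * ji for xi, ji in zip(x, J(tau)))

x, alpha, t = (1.5, 2.0), (0.5, 1.0), 1.0
a, b = X26(x, alpha, t), Xlamperti(x, alpha, t)
```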

3) We call our third example "the jumping spider". Let $\{X, \mathbb{P}_x\}$ be a process which starts at time $t = 0$ at some point $x\in E$ and runs along the axis $(0,x)$ as a reflected Brownian motion. Then, at some time before hitting 0, the process jumps out to some other state $y\in E$ and runs in the same way along the axis $(0,y)$, and so on. More specifically, let $(R_t)_{t\ge0}$ be a reflected Brownian motion independent of the process $(J_t^{(1)},\dots,J_t^{(d)})_{t\ge0}$ defined above and such that $R_0 = 1$, a.s. Let $\alpha$ be such that $\bar\alpha = 2$ and define

$X_t^{(i)} = \begin{cases} x_i\, J^{(i)}_{\int_0^{t/x^\alpha} R_s^{-2}\,ds}\, R_{t/x^\alpha}\,, & 0\le t < x^\alpha\zeta,\\ 0\,, & t\ge x^\alpha\zeta,\end{cases}$

where $\zeta = \inf\{s : R_s = 0\}$. Then we can check that $\{X, \mathbb{P}_x\}$ is a mssMp with index $\alpha$, which is absorbed at 0 at time $x^\alpha\zeta$.

Note that in these three examples, the decomposition (2.3) of the space $E$ is determined by the state space $S$ of the continuous time Markov chain $(J_t^{(1)},\dots,J_t^{(d)})_{t\ge0}$. An extension of the above constructions of mssMp's will be given in Section 3 through a Lamperti-type transformation, see Theorem 2.

Remark 1. It is straightforward from (1.1) that mssMp's of index $\alpha$ are also self-similar Markov processes with index $\alpha_1+\cdots+\alpha_d$. However, multi-self-similarity imparts to Markov processes much richer properties which could not be derived from the study of self-similar Markov processes given in [1]. See for instance Theorem 1 and Proposition 3 below.

2.2 On the Feller property of mssMp’s.

Proposition 1 means in particular that given any mssMp {X, Px}, there does not exist any Markov process starting from 0 and with the same semigroup as {X, Px}. We will see in the next proposition that actually there is no finite multi-self-similar entrance law.

Let us now fix some definitions. In what follows, $\{X, \mathbb{P}_x\}$ will always be an $E$-valued mssMp with index $\alpha\in[0,\infty)^d$. We will denote by $(P_t)$ the transition function of $\{X, \mathbb{P}_x\}$. An entrance law for $\{X, \mathbb{P}_x\}$ is a family of non zero measures $\{\eta_t,\, t>0\}$ on $E$ satisfying the identity $\eta_sP_t = \eta_{t+s}$, that is, for all nonnegative Borel functions $f$ defined on $E$ and all $s, t > 0$,

$\int_E \mathbb{E}_x(f(X_t),\, t<\zeta)\,\eta_s(dx) = \int_E f(x)\,\eta_{t+s}(dx)\,.$

We say that $\{\eta_t,\, t>0\}$ is a multi-self-similar entrance law if moreover there is a multi-index $\gamma\in[0,\infty)^d$ such that for all $c\in(0,\infty)^d$,

$\eta_s = c^{-\gamma}\,\eta_{sc^{-\alpha}}H_c\,,$ (2.7)

where $H_c$ denotes the dilation operator $H_cf(x) = f(c\circ x)$. This definition is the natural extension of the self-similar entrance laws introduced in the framework of positive self-similar Markov processes in [15], see (EL-i) and (EL-ii) therein. In the latter paper, the existence of self-similar entrance laws has been fully studied. Then, in part 4.2 of [15], the author extended his study to the case of mssMp's with values in $E = (0,\infty)^d$ whose radial part tends to infinity. In this particular case, the definition of multi-self-similarity of the entrance law corresponds to (2.7) for $\gamma = 0$. An expression for the corresponding multi-self-similar entrance law can be found in [15]. In the next proposition we complete this result by showing that there cannot exist any finite such entrance law in general.

Proposition 2. MssMp's do not admit finite multi-self-similar entrance laws.

Proof. Let $\{\eta_t,\, t>0\}$ be a multi-self-similar entrance law for $\{X, \mathbb{P}_x\}$ and let $f$ be any positive Borel function defined on $E$. Let $t>0$ and $a>0$; then from (2.7) applied to $c = (1,a,1,\dots,1)$, we obtain

$\int_E f(x_1,\dots,x_d)\,a^{\gamma_2}\,\eta_t(dx) = \int_E f(x_1,ax_2,\dots,x_d)\,\eta_{t/a^{\alpha_2}}(dx)\,.$ (2.8)

Let $\pi_i : x\mapsto x_i$ be the projection on the $i$-th coordinate and set $U_i = \pi_i(E)$. Denote by $\eta'_t = \eta_t\circ\pi_1^{-1}$ the image of $\eta_t$ by $\pi_1$ on $U_1$. It follows from the above identity that

$a^{\gamma_2}\,\eta'_t = \eta'_{t/a^{\alpha_2}}\,.$ (2.9)

If $\alpha_2 = 0$ and $\gamma_2 > 0$, then $\eta'_t$ is necessarily an infinite measure. Denote by $\eta''_t = \eta_t\circ\pi_2^{-1}$ the image of $\eta_t$ by $\pi_2$ on $U_2$. If $\alpha_2 = \gamma_2 = 0$, then from (2.8), $\eta''_t$ satisfies

$\int_{U_2} g(x_2)\,\eta''_t(dx) = \int_{U_2} g(ax_2)\,\eta''_t(dx)\,,$

for all positive Borel functions $g$ defined on $U_2$. Recalling that either $U_2$ or $-U_2$ is the multiplicative group $\mathbb{R}\setminus\{0\}$ or $(0,\infty)$, the latter identity shows that if $\eta''_t$ is finite on all compact sets, then $\eta''_t$ corresponds to the Haar measure on $U_2$, that is, $\eta''_t(dx) = (\mathrm{cst})\cdot|x|^{-1}dx$, which has infinite mass.

Now suppose that $\alpha_2 > 0$. Then applying (2.7) again with $c = (a,1,\dots,1)$, we obtain

$\int_E f(x_1,\dots,x_d)\,a^{\gamma_1}\,\eta_t(dx) = \int_E f(ax_1,x_2,\dots,x_d)\,\eta_{t/a^{\alpha_1}}(dx)\,,$

that is, for all positive Borel functions $g$ defined on $U_1$,

$\int_{U_1} g(x_1)\,a^{\gamma_1}\,\eta'_t(dx) = \int_{U_1} g(ax_1)\,\eta'_{t/a^{\alpha_1}}(dx)\,.$ (2.10)

Then replacing $a$ by $a^{\alpha_1/\alpha_2}$ in (2.9) gives

$a^{\gamma_2\alpha_1/\alpha_2}\,\eta'_t = \eta'_{t/a^{\alpha_1}}\,,$ (2.11)

so that, from (2.10) and (2.11), with $\kappa = \gamma_1 - \gamma_2\alpha_1/\alpha_2$,

$\int_{U_1} g(ax_1)\,\eta'_t(dx) = \int_{U_1} g(x_1)\,a^{\kappa}\,\eta'_t(dx)\,.$

But again, either $U_1$ or $-U_1$ is the multiplicative group $\mathbb{R}\setminus\{0\}$ or $(0,\infty)$, hence the latter identity shows that if $\eta'_t$ is finite on all compact sets, then $\eta'_t(dx) = (\mathrm{cst})\cdot|x|^{-(\kappa+1)}dx$, which has infinite mass.

Knowing the form of the state space of mssMp's and their entrance boundaries, we can now investigate their Feller property. Actually, there is no universally agreed definition of the Feller property: it varies depending on the space on which the transition function $(P_t)$ of $\{X, \mathbb{P}_x\}$ is required to act. Let $C_b(E_0)$ (resp. $C_b(E)$) be the space of continuous and bounded functions on $E_0$ (resp. $E$). In our case, the most natural definition should require the following two conditions:

(a) For all f ∈ Cb(E0) and t ≥ 0, Ptf ∈ Cb(E0).

(b) For all f ∈ Cb(E0), limt→0 Ptf = f, uniformly.

If the transition function of $\{X, \mathbb{P}_x\}$ satisfies (a) and (b), we say that $\{X, \mathbb{P}_x\}$ is a Feller process on $E_0$. As will be seen later on, this property is actually very strong and rarely satisfied by mssMp's. We will actually focus our attention on mssMp's with an infinite lifetime. A mssMp $\{X, \mathbb{P}_x\}$ is said to have an infinite lifetime if $\mathbb{P}_x(\zeta = \infty) = 1$ for all $x\in E$. When the lifetime is infinite, the restriction of the transition function $(P_t)$ of $\{X, \mathbb{P}_x\}$ to the space $E$ is still Markovian. In this case, we say that $\{X, \mathbb{P}_x\}$ has the Feller property on $E$ if it satisfies:

(a′) For all $f\in C_b(E)$ and $t\ge0$, $P_tf\in C_b(E)$.

(b′) For all $f\in C_b(E)$, $\lim_{t\to0}P_tf = f$, uniformly on compact subsets of $E$.

Examples of mssMp's whose lifetime is infinite can be found in [11].

Theorem 1. Let {X, Px} be a mssMp with an infinite lifetime. Then, {X, Px} has the Feller property on E.

Proof. Let us define the scaling operator Sc by

$S_c(X) = (c\circ X_{t/c^\alpha})_{t\ge0}\,,$

where $c\in(0,\infty)^d$. Then let $x\in E$ and let $x^{(n)} = (x_1^{(n)},\dots,x_d^{(n)})$ be any sequence of $E$ which converges towards $x\in E$. Recall the form (2.3) of $E$ and set $Q^{(i)} = Q_{\operatorname{sign}(x_i)}$, so that $x_i\in Q^{(i)}$. Since the $Q^{(i)}$'s are open sets, there is $n_0$ such that for all $n\ge n_0$ and all $i = 1,\dots,d$, $x_i^{(n)}\in Q^{(i)}$ and $x_i^{(n)}$ has the same sign as $x_i$. Now, for $n\ge n_0$, define $c_i^{(n)} = x_i^{(n)}/x_i$ and note that $c^{(n)} = (c_1^{(n)},\dots,c_d^{(n)})\in(0,\infty)^d$. Let $f\in C_b(E)$; then from (1.1), for all $t\ge0$,

$\mathbb{E}_{x^{(n)}}(f(X_t)) = \mathbb{E}_x(f(S_{c^{(n)}}(X)_t))\,.$ (2.12)

Since $c^{(n)}$ tends to 1 and $\{X, \mathbb{P}_x\}$ is almost surely continuous at time $t$, see [3], it follows from (2.12) and dominated convergence that

$\lim_{n\to\infty}\mathbb{E}_{x^{(n)}}(f(X_t)) = \mathbb{E}_x(f(X_t))\,.$

This proves (a′).

Let us now prove (b′). First observe that for all $x\in E$,

$\{X, \mathbb{P}_x\} = \{S_{c(x)}(X), \mathbb{P}_{\operatorname{sign}(x)}\}\,,$ (2.13)

where $c_i(x) = |x_i|$. Let $K$ be some compact subset of $E$. For $s\in\{-1,1\}^d$, define the compact subsets $K_s = Q_s\cap K$. From (2.13) and the right continuity of $\{X, \mathbb{P}_x\}$ at 0, we have for all $y\in K_s$,

$\lim_{t\to0}|f(S_{c(y)}(X)_t) - f(y)| = 0\,,\quad \mathbb{P}_s$-almost surely,

so that from the uniform continuity of f on Ks,

$\lim_{t\to0}\sup_{y\in K_s}|f(S_{c(y)}(X)_t) - f(y)| = 0\,,\quad \mathbb{P}_s$-almost surely.

Then the result follows from the inequality, valid for all $x\in K$,

$|P_tf(x) - f(x)| \le \max_{s\in\{-1,1\}^d}\mathbb{E}_s\Big(\sup_{y\in K_s}|f(S_{c(y)}(X)_t) - f(y)|\Big)\,,$

the boundedness of $f$ and dominated convergence.

Remark 2. Let us emphasize that Theorem 1 highlights a great difference between self-similarity and multi-self-similarity. Indeed, it is not true that all $d$-dimensional self-similar Markov processes with infinite lifetime satisfy the Feller property on $E$. As can be seen in the previous proof, self-similarity along all axes allows us to consider the limit of $P_tf(x_n)$ for any sequence $(x_n)$ converging to $x$ in $E$, whereas for self-similar Markov processes this convergence would only hold for sequences of the type $x_n = c_nx$, where $c_n > 0$ and $c_n\to1$.

We will say that a mssMp $\{X, \mathbb{P}_x\}$ is symmetric if it satisfies the two following conditions:

(i) $E = s\circ E$ for all $s\in\{-1,1\}^d$ such that $(s\circ E)\cap E\neq\emptyset$.

(ii) $\mathbb{P}_x(X_t\in A) = \mathbb{P}_{s\circ x}(X_t\in s\circ A)$, for all $t\ge0$, $A\in\mathcal{E}$ and $s$ satisfying (i).

Note that if $E$ satisfies condition (i), then conditions (1.1) and (ii) are equivalent to

$\{(X_{|c|^\alpha t})_{t\ge0}, \mathbb{P}_{c\circ x}\} = \{(c\circ X_t)_{t\ge0}, \mathbb{P}_x\}\,,$ (2.14)

where $|c|^\alpha := |c_1|^{\alpha_1}\cdots|c_d|^{\alpha_d}$, for all $x\in E$ and $c\in(\mathbb{R}\setminus\{0\})^d$ such that $c\circ x\in E$. Note also that if $E$ consists of a single orthant, that is, $E = Q_s$ for some $s\in\{-1,1\}^d$, then $\{X, \mathbb{P}_x\}$ is symmetric according to this definition. Moreover, we easily construct symmetric mssMp's with $E$ the union of at least two orthants from the examples given in Subsection 2.1.

We will see in the next proposition that when $\{X, \mathbb{P}_x\}$ is symmetric, the lifetime is either a.s. finite or a.s. infinite, independently of the starting state. Moreover, either the process hits 0 continuously, a.s., or by a jump, a.s. In the next proposition, we will use the notation

$\lim_{t\uparrow\zeta}X_t = X_{\zeta-}$

when this limit exists. Note that when $\zeta$ is finite, the existence of $X_{\zeta-}$ is guaranteed by the fact that $\{X, \mathbb{P}_x\}$ is a Hunt process. Moreover, the fact that $X_{\zeta-} = 0$ is to be understood in the topology of $E_0$. It means that

$\lim_{t\uparrow\zeta}\min\big(|X_t^{(i)}|,\, |X_t^{(i)}|^{-1},\; i = 1,\dots,d\big) = 0$, a.s.,

see (2.4).

Proposition 3. Assume that {X, Px} is a symmetric mssMp. Then,

(i) either Px(ζ = ∞) = 1, for all x ∈ E, or Px(ζ < ∞) = 1, for all x ∈ E.

(ii) Assume that $\mathbb{P}_x(\zeta < \infty) = 1$ for all $x\in E$. Then either $\mathbb{P}_x(X_{\zeta-} = 0) = 1$ for all $x\in E$, or $\mathbb{P}_x(X_{\zeta-}\neq0) = 1$ for all $x\in E$.

Proof. Let us note that from our assumptions, for all $x, y\in E$, there is $c\in(\mathbb{R}\setminus\{0\})^d$ such that $y = c\circ x$ and $\{(X_{|c|^\alpha t})_{t\ge0}, \mathbb{P}_y\} = \{(c\circ X_t)_{t\ge0}, \mathbb{P}_x\}$, which yields the identity in law

$\{|c|^\alpha\zeta, \mathbb{P}_x\} = \{\zeta, \mathbb{P}_y\}\,,$ (2.15)

since $\zeta$ is the first passage time at 0 by $X$, i.e. $\zeta = \inf\{t : X_t = 0\}$. This observation allows us to extend the case of positive self-similar Markov processes, which is treated in [12] and by which our proof is inspired.

Let $F = \{\zeta < \infty\}$. Then from (2.15), $\mathbb{P}_x(F)$ does not depend on $x\in E$. Let us set $\mathbb{P}_x(F) = p$. From the Markov property, one has for all $t > 0$,

$\mathbb{P}_x(t<\zeta<\infty) = \mathbb{E}_x\big(\mathbb{1}_{\{t<\zeta\}}\,\mathbb{P}_x(\exists s\in(t,\infty),\ X_s = 0\mid\mathcal{F}_t)\big)$

$= \mathbb{E}_x\big(\mathbb{1}_{\{t<\zeta\}}\,\mathbb{P}_{X_t}(\zeta<\infty)\big) = p\,\mathbb{P}_x(t<\zeta)\,,$

which leads to

$p = \mathbb{P}_x(\zeta\le t) + \mathbb{P}_x(t<\zeta<\infty) = \mathbb{P}_x(\zeta\le t) + p\,\mathbb{P}_x(t<\zeta)\,,$

so that $(1-p)\,\mathbb{P}_x(\zeta\le t) = 0$. We conclude that either $p = 1$, or $\mathbb{P}_x(\zeta\le t) = 0$ for all $t\ge0$ and all $x\in E$, that is, $\mathbb{P}_x(\zeta = +\infty) = 1$ for all $x\in E$.

Let us now prove (ii). Set $G = \{X_{\zeta-} = 0\}$. Again, from our assumptions, for all $x, y\in E$, there is $c\in(\mathbb{R}\setminus\{0\})^d$ such that $y = c\circ x$ and

$\{c\circ X_{\zeta-}, \mathbb{P}_x\} = \{X_{\zeta-}, \mathbb{P}_y\}\,,$

so that $\mathbb{P}_x(G)$ does not depend on $x\in E$. Set $q = \mathbb{P}_x(G)$ and let $K$ be any compact subset of $E$. Set $T = \inf\{t : X_t\in K^c\}$, where $K^c$ denotes the complementary set of $K$ in $E$. Since $G\subset\{T<\zeta\}$ and $G\circ\theta_T = G$, it follows from the strong Markov property that for all $x\in E$,

$q = \mathbb{P}_x(G) = \mathbb{P}_x(G,\, T<\zeta) = \mathbb{E}_x\big(\mathbb{1}_{\{T<\zeta\}}\,\mathbb{P}_x(G\circ\theta_T\mid\mathcal{F}_T)\big) = \mathbb{E}_x\big(\mathbb{1}_{\{T<\zeta\}}\,\mathbb{P}_{X_T}(G)\big) = q\,\mathbb{P}_x(T<\zeta)\,.$

If $q\neq0$, then $\mathbb{P}_x(T<\zeta) = 1$ for all $x\in E$. Since this is true for all compact subsets of $E$, it follows that $\{X, \mathbb{P}_x\}$ does not reach 0 by a jump, and hence $q = 1$.

The Lamperti-type representation established in Subsection 3.2 will allow us to give many other examples of mssMp's with infinite lifetime, see Theorem 3.

Note that in order to have the Feller property on $E_0$, the process $\{X, \mathbb{P}_x\}$ should also satisfy

$\lim_{x\to0}\mathbb{E}_x(f(X_t)) = f(0)\,,$ (2.16)

for all t ≥ 0 and f ∈ Cb(E0), where again this limit is to be understood in the topology of E0, see (2.4). If x tends to 0 (in E0) in such a way that |xi| > a for some a > 0 and all i = 1, . . . , d, then (2.16) holds. Indeed, from the multiscaling property,

$\mathbb{E}_x(f(X_t)) = \mathbb{E}_{\operatorname{sgn}(x)}(f(|x|\circ X_{t/|x|^\alpha}))\,.$

In this case, $t/|x|^\alpha$ tends to 0 as $x$ tends to 0 in $E_0$, so that, from the right continuity of

$X$ at 0, $\lim_{x\to0}X_{t/|x|^\alpha} = \operatorname{sgn}(x)$, $\mathbb{P}_{\operatorname{sgn}(x)}$-a.s., and hence $\lim_{x\to0}|x|\circ X_{t/|x|^\alpha} = 0$, $\mathbb{P}_{\operatorname{sgn}(x)}$-a.s. Then (2.16) follows from the fact that $f\in C_b(E_0)$ and dominated convergence. However, it seems that (2.16) may fail when $\liminf|x_i| = 0$ for some coordinates $x_i$ of $x$, since in this case we can have $\liminf_{x\to0}|x|^\alpha = 0$ and $\limsup_{x\to0}|x|^\alpha = +\infty$.

2.3 Multiplicative agglomeration property of mssMp's.

We will prove in this subsection that symmetric mssMp's enjoy the multiplicative agglomeration property, namely that the process obtained by multiplying some of its coordinates is still a mssMp (of lower dimension). This property has been highlighted for $(0,\infty)^d$-valued mssMp's in [11] as a direct consequence of the Lamperti representation, see Corollary 5 therein. Although the Lamperti representation will be generalized to all mssMp's later on in this paper, we prefer to study the multiplicative agglomeration property of mssMp's in a more direct way by using Dynkin's criterion.

For $1\le d'\le d$ and a partition $I = \{I_1,\dots,I_{d'}\}$ of $\{1,2,\dots,d\}$, we define

$\Pi_I(x) = \big(\textstyle\prod_{i\in I_1}x_i,\,\dots,\,\prod_{i\in I_{d'}}x_i\big)\,,\quad x\in E.$

Let us also define $E^{(I)} = \Pi_I(E)$. Then clearly $E^{(I)}$ is a subset of $\mathbb{R}^{d'}$ which has the form described in (2.3). We denote by $\mathcal{E}^{(I)}$ the corresponding Borel $\sigma$-field and by $E_0^{(I)}$ the Alexandroff compactification of $E^{(I)}$, which is defined as for $E$, see Subsection 2.1. Given any $E$-valued mssMp absorbed at 0, $\{X, \mathbb{P}_x\}$, we define the $E^{(I)}$-valued process absorbed at 0, $X^{(I)}$, by

$X^{(I)} = \Pi_I(X)\,,$

and we set $\mathcal{F}^{(I)} = \sigma(X_t^{(I)}\in B,\ t\ge0,\ B\in\mathcal{E}^{(I)})$ and, for all $t\ge0$, $\mathcal{F}_t^{(I)} = \mathcal{F}_t\cap\mathcal{F}^{(I)}$.

Proposition 4. Let $\{X, \mathbb{P}_x\}$ be a symmetric mssMp with index $\alpha\in[0,\infty)^d$ such that for all $i = 1,\dots,d'$ and for all $j, k\in I_i$, $\alpha_j = \alpha_k$. We set $\alpha'_i = \alpha_j$, for $j\in I_i$. Then the process $X^{(I)}$ defined on the space $(\Omega, \mathcal{F}^{(I)}, (\mathcal{F}_t^{(I)})_{t\ge0})$ is an $E^{(I)}$-valued mssMp absorbed at 0 with index $\alpha' = (\alpha'_1,\dots,\alpha'_{d'})$. Moreover, the family of probability measures $\mathbb{P}_y^{(I)}$, $y\in E_0^{(I)}$, associated with $X^{(I)}$ is given by

$\mathbb{P}_y^{(I)}(\Gamma) = \mathbb{P}_x(\Gamma)\,,\quad \Gamma\in\mathcal{F}^{(I)}\,,$ (2.17)

for any $x\in\Pi_I^{-1}(y)$.

Proof. From Dynkin's criterion, see Theorem 10.23, p. 325 in [8], in order to prove that $X^{(I)}$ is a Markov process defined on $(\Omega, \mathcal{F}^{(I)}, (\mathcal{F}_t^{(I)})_{t\ge0})$, with respect to the family of probability measures $(\mathbb{P}_y^{(I)})$ given in (2.17), it suffices to prove that for all $x, x'\in E$ such that $\Pi_I(x) = \Pi_I(x')$, and for all $t\ge0$ and $B\in\mathcal{E}^{(I)}$,

$\mathbb{P}_x(X_t\in\Pi_I^{-1}(B)) = \mathbb{P}_{x'}(X_t\in\Pi_I^{-1}(B))\,.$ (2.18)

Let $t\ge0$, $B\in\mathcal{E}^{(I)}$ and $x, x'\in E$ be such that $\Pi_I(x) = \Pi_I(x')$, and let us set

$a_i = \dfrac{x_i}{x'_i}\,.$

Since all coordinates of $\Pi_I(a)$, with $a = (a_1,\dots,a_d)$, are equal to 1 and from our assumption on $\alpha$, we have $|a|^{-\alpha} = 1$. Hence, from (2.14),

$\mathbb{P}_x(X_t\in\Pi_I^{-1}(B)) = \mathbb{P}_{a\circ x'}(X_t\in\Pi_I^{-1}(B)) = \mathbb{P}_{x'}(a\circ X_{|a|^{-\alpha}t}\in\Pi_I^{-1}(B)) = \mathbb{P}_{x'}(X_t\in\Pi_I^{-1}(B))\,,$

which is Dynkin's criterion (2.18).

It remains to check that the process $\{X^{(I)}, \mathbb{P}_x^{(I)}\}$ satisfies the multi-self-similarity property of index $\alpha'$ defined in the statement. This follows directly from the definition of $\{X^{(I)}, \mathbb{P}_x^{(I)}\}$ and the multi-self-similarity property of $\{X, \mathbb{P}_x\}$. Indeed, recall that $d'$ is the dimension of $E^{(I)}$ and let $c'\in(0,\infty)^{d'}$, $y\in E^{(I)}$ and $x\in E$, $c\in(0,\infty)^d$ be such that for all $j\in I_i$, $c_j = (c'_i)^{\operatorname{card}(I_i)^{-1}}$ and $x\in\Pi_I^{-1}(y)$. Then,

$\{c'\circ X^{(I)}, \mathbb{P}_y^{(I)}\} = \{\Pi_I(c\circ X), \mathbb{P}_x\} = \{\Pi_I(X_{c^\alpha\cdot}), \mathbb{P}_{c\circ x}\} = \{X^{(I)}_{(c')^{\alpha'}\cdot}, \mathbb{P}^{(I)}_{c'\circ y}\}\,,$

which completes the proof of the proposition.

Note that we recover Jacobsen and Yor's result from Proposition 4, since the process is always symmetric when $E = (0,\infty)^d$. Let us also mention that symmetry is not a necessary condition for the process $\{X^{(I)}, \mathbb{P}_x^{(I)}\}$ to be a mssMp. Examples of non symmetric mssMp's which satisfy the multiplicative agglomeration property can be obtained from the Lamperti-type representation presented in the next sections. We also emphasize the importance for the coordinates of the index $\alpha$ to be constant on each element of the partition $I$: the above proof shows that it is necessary for the process $X^{(I)}$ to be Markovian. This fact is easier to see from the Lamperti-type representation, see Theorem 2.
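The agglomeration mechanism can be observed directly on example 1 of Subsection 2.1: when $\alpha$ is constant on each block of $I$, one has $x^\alpha = \Pi_I(x)^{\alpha'}$, so $\Pi_I(X)$ is again of the example-1 form, driven by the coordinatewise products of the chain. A sketch with assumed data (the frozen path `J` is hypothetical, our own illustration):

```python
import math

def J(s):
    """Frozen chain path on {-1,1}^3 (hypothetical)."""
    return (1 if s < 0.5 else -1, -1, 1 if s < 1.2 else -1)

def X(x, alpha, t, chain):
    """Example 1 form: x o chain(t / x^alpha)."""
    xa = math.prod(xi ** a for xi, a in zip(x, alpha))
    return tuple(xi * j for xi, j in zip(x, chain(t / xa)))

I = [(0, 1), (2,)]                  # partition of the coordinates (0-based)
Pi = lambda v: tuple(math.prod(v[i] for i in block) for block in I)

alpha = (0.5, 0.5, 1.0)             # constant on each block of I
alpha_p = (0.5, 1.0)                # agglomerated index alpha'
x, t = (2.0, 3.0, 1.5), 4.0

lhs = Pi(X(x, alpha, t, J))                        # Pi_I of the d = 3 process
rhs = X(Pi(x), alpha_p, t, lambda s: Pi(J(s)))     # d' = 2 process from Pi_I(x)
```

The two sides coincide because the time changes $t/x^\alpha$ and $t/\Pi_I(x)^{\alpha'}$ are equal under the block-constancy assumption on $\alpha$.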

3 Time changes in mssMp’s.

3.1 Markov Additive Processes.

We will now consider Markov processes with values in a state space of the form S × Rd, where S is some topological set such that S × Rd is locally compact with a countable base. As usual we define the Alexandroff compactification of S × Rd by adding a point at infinity which we denote by δ.

Definition 2. A Markov additive process (MAP) $\{(J,\xi), P_{y,z}\}$ is an $S\times\mathbb{R}^d$-valued Hunt process absorbed at some extra state $\delta$, such that for any $y\in S$, $z\in\mathbb{R}^d$, $s, t\ge0$, and for any positive measurable function $f$ defined on $S\times\mathbb{R}^d$,

$\mathbb{E}_{y,z}(f(J_{t+s},\, \xi_{t+s}-\xi_t),\ t+s<\zeta_*\mid\mathcal{G}_t) = \mathbb{E}_{J_t,0}(f(J_s,\xi_s),\ s<\zeta_*)\,\mathbb{1}_{\{t<\zeta_*\}}\,,$ (3.1)

where $\zeta_* = \inf\{t : (J_t,\xi_t) = \delta\}$ is the lifetime of $\{(J,\xi), P_{y,z}\}$ and $(\mathcal{G}_t)_{t\ge0}$ is some filtration to which $(J,\xi)$ is adapted and which is completed by the measures $P_{y,z}$, $(y,z)\in S\times\mathbb{R}^d$.

In what follows, we will always consider the case where $S$ is a subset of $\{-1,1\}^d$. Then the space $S\times\mathbb{R}^d$ is always locally compact with a countable base. Moreover, while the structure of general MAP's can turn out to be quite complicated (see [10] and [5], where these processes were first introduced), the case where $S$ is a finite set is rather intuitive and can be plainly described. Since in this case the process $(J_t)_{t\ge0}$ is nothing but a possibly absorbed continuous time Markov chain, it is readily seen from (3.1) that in between two successive jump times of $J$, the process $\xi$ behaves like a Lévy process. Let us state this result more formally.

It is straightforward from (3.1) that the law of $\{J, P_{y,z}\}$ does not depend on $z$. Moreover, since $S$ is finite, $\{J, P_{y,z}\}$ is an $S$-valued continuous time Markov chain with lifetime $\zeta_*$, which may be sent to some extra state $\delta'$ for $t\ge\zeta_*$. Let us set $n := 2^d = \operatorname{card}(S)$; then the law of the MAP $\{(J,\xi), P_{y,z}\}$ is characterized by the intensity matrix $Q = (q_{ij})_{i,j\in S}$ of $J$, the laws of $n$ possibly killed $\mathbb{R}^d$-valued Lévy processes $\tilde\xi^{(1)},\dots,\tilde\xi^{(n)}$, and the $\mathbb{R}^d$-valued random variables $\Delta_{ij}$, such that $\Delta_{ii} = 0$ and where, for $i\neq j$, $\Delta_{ij}$ represents the size of the jump of $\xi$ when $J$ jumps from $i$ to $j$. More specifically, for $u\in\mathbb{C}^d$, define for $i, j\in S$ and $k = 1,\dots,n$, when these expectations exist,

$\mathbb{E}(e^{\langle u,\,\tilde\xi_1^{(k)}\rangle}) = e^{\psi_k(u)}$ and $G_{i,j}(u) = \mathbb{E}(\exp(\langle u, \Delta_{i,j}\rangle))\,.$

Then a trivial extension of Proposition 2.2 in Section XI.2 of [2] shows that the law of $\{(J,\xi), P_{y,z}\}$ is given by

$\mathbb{E}_{i,0}(e^{\langle u,\,\xi_t\rangle},\ J_t = j) = (e^{A(u)t})_{i,j}\,,\quad i, j\in S\,,\ u\in\mathbb{C}^d\,,$ (3.2)

where $A(u)$ is the matrix

$A(u) = \operatorname{diag}(\psi_1(u),\dots,\psi_n(u)) + (q_{ij}G_{i,j}(u))_{i,j\in S}\,.$

The matrix-valued mapping $u\mapsto A(u)$ will be called the characteristic exponent of the MAP $\{(J,\xi), P_{y,z}\}$. We also refer to Sections A.1 and A.2 of [9] for more details. Throughout the remainder of this paper, the coordinates of a MAP with values in $S\times\mathbb{R}^d$ will be denoted by $(J,\xi) = (J^{(i)}, \xi^{(i)})_{1\le i\le d}$.

Let $\{(J,\xi), P_{y,z}\}$ be an $S\times\mathbb{R}^d$-valued MAP with infinite lifetime, that is, $P_{y,z}(\zeta_* = \infty) = 1$ for all $(y,z)\in S\times\mathbb{R}^d$. Then it is readily seen that for each $k = 1,\dots,d$, the process $(J, \xi^{(k)})$ is itself an $S\times\mathbb{R}$-valued MAP with infinite lifetime. Let $P_{y,z}^{(k)}$, $(y,z)\in S\times\mathbb{R}$, be the corresponding family of probability measures, i.e. $\{(J,\xi^{(k)}), P_{y,z}^{(k)}\}$ is an $S\times\mathbb{R}$-valued MAP with infinite lifetime. Denote by $A_k$ the corresponding characteristic exponent, that is, from (3.2),

$A_k(u) = A(u\cdot e_k)\,,\quad u\in\mathbb{C}\,,$ (3.3)

where $e_k$ is the $k$-th unit vector of $\mathbb{R}^d$. Fix $k = 1,\dots,d$, assume that the Markov chain $(J_t)$ is irreducible and that there exists $u\in\mathbb{R}\setminus\{0\}$ such that $A_k(u)$ is well defined (i.e. all entries of $A_k(u)$ exist and are finite). Then, from Perron-Frobenius theory, the matrix $A_k(u)$ has a real simple eigenvalue $\chi_k(u)$ which is larger than the real part of all its other eigenvalues. Let $I = [u,0]$ if $u<0$ and $I = [0,u]$ if $u>0$. Then the function $u\mapsto\chi_k(u)$ is convex on $I$. Let us denote by $\chi'_k(0)$ the left (respectively, the right) derivative at 0 of $\chi_k$ if $u<0$ (respectively, if $u>0$). The following result can be found in Section XI.2 of [2].

Proposition 5. Assume that J is irreducible. Let k = 1, ..., d and assume that there exists u ∈ R \ {0} such that A_k(u) is well defined. Then the asymptotic behavior of ξ^{(k)} does not depend on the initial state of {(J, ξ^{(k)}), P^{(k)}_{y,z}} and is given by

\[
\lim_{t\to\infty} \frac{\xi^{(k)}_t}{t} = \chi'_k(0)\,, \qquad P^{(k)}_{y,z}\text{-a.s. for all } (y, z) \in S \times \mathbb{R}\,.
\]

In that case, for all (y, z) ∈ S × R, either lim_{t→∞} ξ^{(k)}_t = ∞, P^{(k)}_{y,z}-a.s., or lim_{t→∞} ξ^{(k)}_t = −∞, P^{(k)}_{y,z}-a.s., or lim sup_{t→∞} ξ^{(k)}_t = − lim inf_{t→∞} ξ^{(k)}_t = ∞, P^{(k)}_{y,z}-a.s., according as χ'_k(0) > 0, χ'_k(0) < 0 or χ'_k(0) = 0, respectively.

Note that, more generally, if M : R^d → R^{d'} is a linear mapping, where d' is any integer, then the process {(J, M(ξ)), P_{y,z}} is an S × R^{d'}-valued MAP. This property will be used in the next subsections with M(x) = ⟨α, x⟩, for some α ∈ [0, ∞)^d.

Examples of MAPs can easily be obtained by coupling any continuous time Markov chain on S with any d-dimensional Lévy process and by killing the couple at some independent exponential time. More specifically, the transitions of the process {(J, ξ), P_{y,z}} then have the following particular form:

\[
\begin{aligned}
P_{y,z}(J_t \in dy_1,\, \xi_t \in dz_1) &= e^{-\lambda t}\, P^{J^0}_y(J^0_t \in dy_1)\, P^{\xi^0}_z(\xi^0_t \in dz_1)\,, \\
P_{y,z}\big((J_t, \xi_t) = \delta\big) &= 1 - e^{-\lambda t}\,,
\end{aligned}
\tag{3.4}
\]

for all t ≥ 0 and (y, z), (y_1, z_1) ∈ S × R^d, where λ > 0 is some constant, {ξ^0, P^{ξ^0}_z} is any non-killed d-dimensional Lévy process and {J^0, P^{J^0}_y} is any continuous time Markov chain on S with infinite lifetime. Then it is easy to check that this process {(J, ξ), P_{y,z}} is an S × R^d-valued MAP which is absorbed at δ, in the sense of Definition 2. The law of such a MAP is characterized by the fact that ψ_k = ψ_l for all k, l = 1, ..., n and G_{i,j}(u) = 1 for all i, j ∈ S in (3.2). Assume that the above process has infinite lifetime, that is, λ = 0. Then the condition of Proposition 5 is satisfied if and only if there exists u ∈ R such that ψ_1(u) exists and is finite. In this example one may check that χ'(0) > 0, < 0 or = 0 according as ψ'_1(0) > 0, < 0 or = 0. Of course this result is intuitively clear since J and ξ are independent.
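This independence structure can be checked numerically: with ψ_k = ψ for all k and G_{i,j} ≡ 1, formula (3.2) gives A(u) = ψ(u) I + Q, and since the leading eigenvalue of an intensity matrix Q is 0 (with eigenvector (1, ..., 1)), the Perron–Frobenius eigenvalue is χ(u) = ψ(u), whence χ'(0) = ψ'(0). A minimal numpy sketch, with made-up rates and a Brownian-with-drift exponent (drift b = 0.3, so χ'(0) > 0 here):

```python
import numpy as np

Q = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])           # made-up intensity matrix, zero row sums

def psi(u):                            # Levy exponent: drift b, variance s2
    b, s2 = 0.3, 1.0
    return b * u + 0.5 * s2 * u ** 2

def leading_eig(M):
    return max(np.linalg.eigvals(M).real)

def chi(u):                            # Perron-Frobenius eigenvalue of A(u)
    return leading_eig(psi(u) * np.eye(2) + Q)

# A(u) = psi(u) I + Q, so chi(u) = psi(u): the chain contributes nothing.
for u in (-0.5, 0.0, 0.5):
    assert abs(chi(u) - psi(u)) < 1e-8

# Finite-difference derivative at 0: chi'(0) = psi'(0) = b = 0.3 > 0.
h = 1e-6
assert abs((chi(h) - chi(-h)) / (2 * h) - 0.3) < 1e-4
```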

3.2 The Lamperti representation for mssMp's.

Recall that S and E are any sets such that

\[
S \subset \{-1, 1\}^d \qquad\text{and}\qquad E = \cup_{s\in S}\, Q_s\,,
\]

where Q_s is defined in (2.1). Then let us define the one-to-one transformation ϕ : S × R^d → E and its inverse as follows:

\[
\begin{aligned}
\varphi(y, z) &= (y_i e^{z_i})_{1\le i\le d}\,, & (y, z) &\in S \times \mathbb{R}^d\,, \\
\varphi^{-1}(x) &= \big(\mathrm{sgn}(x_i), \log|x_i|\big)_{1\le i\le d}\,, & x &\in E\,.
\end{aligned}
\]
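As a sanity check, ϕ and ϕ^{-1} are straightforward to implement componentwise (sign and log-modulus). The sketch below, with a made-up point of E for d = 3, verifies the round trip ϕ ∘ ϕ^{-1} = id on vectors with no null coordinate:

```python
import numpy as np

def phi(y, z):
    """phi(y, z) = (y_i * exp(z_i))_i, mapping S x R^d onto E."""
    return y * np.exp(z)

def phi_inv(x):
    """phi^{-1}(x) = (sgn(x_i), log|x_i|)_i for x with no null coordinate."""
    return np.sign(x), np.log(np.abs(x))

y = np.array([1.0, -1.0, -1.0])        # a point of S = {-1, 1}^3
z = np.array([0.2, -1.5, 3.0])         # a point of R^3
x = phi(y, z)

y2, z2 = phi_inv(x)
assert np.allclose(y2, y) and np.allclose(z2, z)
assert np.allclose(phi(*phi_inv(x)), x)
```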

In the remainder of this work, for α ∈ [0, ∞)^d, we will denote
\[
\bar\xi = \langle \alpha, \xi \rangle\,,
\]
where ξ is the second coordinate of the S × R^d-valued MAP {(J, ξ), P_{y,z}}.

The next theorem extends Theorem 1 of [11]. It provides a one-to-one relationship between the set of R^d-valued mssMp's and that of MAPs with values in {−1, 1}^d × R^d.

Theorem 2. Let α ∈ [0, ∞)^d and let {(J, ξ), P_{y,z}} be a MAP in S × R^d, with lifetime ζ_* and absorbing state δ. Define the process X by

\[
X_t = \begin{cases}
\varphi(J_{\tau_t}, \xi_{\tau_t})\,, & \text{if } t < \int_0^{\zeta_*} e^{\bar\xi_s}\, ds\,, \\[4pt]
0\,, & \text{if } t \ge \int_0^{\zeta_*} e^{\bar\xi_s}\, ds\,,
\end{cases}
\]
where τ_t is the time change τ_t = inf{s : ∫_0^s e^{ξ̄_u} du > t}, for t < ∫_0^{ζ_*} e^{ξ̄_s} ds. Define the probability measures P_x := P_{ϕ^{-1}(x)}, for x ∈ E, and P_0 := P_δ. Then the process {X, P_x} is an E-valued mssMp, with index α and lifetime ∫_0^{ζ_*} e^{ξ̄_s} ds.

Conversely, let {X, P_x} be an E-valued mssMp with index α ∈ [0, ∞)^d, and denote by ζ its lifetime. Define the process (J, ξ) by
\[
(J_t, \xi_t) = \begin{cases}
\varphi^{-1}(X_{A_t})\,, & \text{if } t < \int_0^{\zeta} \dfrac{ds}{|X^{(1)}_s|^{\alpha_1} \cdots |X^{(d)}_s|^{\alpha_d}}\,, \\[8pt]
\delta\,, & \text{if } t \ge \int_0^{\zeta} \dfrac{ds}{|X^{(1)}_s|^{\alpha_1} \cdots |X^{(d)}_s|^{\alpha_d}}\,,
\end{cases}
\]
where δ is some extra state, and A is the time change A_t = inf{s : ∫_0^s du / (|X^{(1)}_u|^{α_1} ⋯ |X^{(d)}_u|^{α_d}) > t}, for t < ∫_0^ζ ds / (|X^{(1)}_s|^{α_1} ⋯ |X^{(d)}_s|^{α_d}). Define the probability measures P_{y,z} := P_{ϕ(y,z)}, for (y, z) ∈ S × R^d, and P_δ := P_0. Then the process {(J, ξ), P_{y,z}} is a MAP in S × R^d, with lifetime ∫_0^ζ ds / (|X^{(1)}_s|^{α_1} ⋯ |X^{(d)}_s|^{α_d}).

Proof. Let us denote by (G_t)_{t≥0} the filtration associated to the process (J, ξ) and completed with respect to the measures (P_{y,z})_{(y,z)∈S×R^d}; see the beginning of Subsection 2.1. Then define the process
\[
Y_t = \begin{cases}
\varphi(J, \xi)_t\,, & \text{if } t < \zeta_*\,, \\
0\,, & \text{if } t \ge \zeta_*\,,
\end{cases}
\]
and set Y_t = (Y^{(1)}_t, ..., Y^{(d)}_t) as usual. Recall from Subsection 2.1 the definition of E_0. Since ϕ is a continuous one-to-one transformation from S × R^d to E, we readily check that the process

(Ω, F, (G_t)_{t≥0}, (Y_t)_{t≥0}, (P_x)_{x∈E_0}),

where P_x is defined as in the statement, is an E-valued Hunt process absorbed at 0. Now let τ_t be as in the statement if t < ∫_0^{ζ_*} e^{ξ̄_s} ds, and set τ_t = ∞ and Y_{τ_t} = 0 if t ≥ ∫_0^{ζ_*} e^{ξ̄_s} ds. Since e^{ξ̄_s} = |Y^{(1)}_s|^{α_1} |Y^{(2)}_s|^{α_2} ⋯ |Y^{(d)}_s|^{α_d}, the process (τ_t)_{t≥0} is the right continuous inverse of the continuous additive functional
\[
t \mapsto \int_0^{t\wedge\zeta_*} |Y^{(1)}_s|^{\alpha_1} |Y^{(2)}_s|^{\alpha_2} \cdots |Y^{(d)}_s|^{\alpha_d}\, ds
\]
of {Y, P_x}, which is strictly increasing on (0, ζ_*). It follows from Theorem A.2.12, p. 406 in [7] that the time changed process

(Ω, F, (G_{τ_t})_{t≥0}, (X_t)_{t≥0}, (P_x)_{x∈E_0}),

is an E-valued Hunt process absorbed at 0. Moreover, ζ := ∫_0^{ζ_*} e^{ξ̄_s} ds is the lifetime of {X, P_x}.

Now let us show that {X, P_x} fulfills the multi-scaling property. Let c ∈ (0, ∞)^d; then for t < c^{-α} ∫_0^{ζ_*} e^{ξ̄_s} ds,
\[
\tau_{c^\alpha t} = \inf\Big\{s : \int_0^s e^{\langle \alpha,\, \xi^{(c)}_v\rangle}\, dv > t\Big\}\,, \tag{3.5}
\]
where ξ^{(c)}_t = (ξ^{(c,1)}_t, ..., ξ^{(c,d)}_t) and ξ^{(c,i)}_t = −ln c_i + ξ^{(i)}_t. It follows from Definition 2 of MAPs that
\[
\{(J, \xi^{(c)}), P_{y,\, \ln c + z}\} = \{(J, \xi), P_{y,z}\}\,, \tag{3.6}
\]
where ln c = (ln c_1, ..., ln c_d). Let us set τ^{(c)}_t := τ_{c^α t}; then we derive from (3.5) and (3.6) that
\[
\Big\{\Big(c_i J^{(i)}_{\tau^{(c)}_t} \exp\big(\xi^{(c,i)}_{\tau^{(c)}_t}\big)\Big)_{t\ge 0},\, P_{y,\, \ln c + z}\Big\} = \Big\{\Big(c_i J^{(i)}_{\tau_t} \exp\big(\xi^{(i)}_{\tau_t}\big)\Big)_{t\ge 0},\, P_{y,z}\Big\}\,. \tag{3.7}
\]
On the other hand, it is straightforward from the definitions that

\[
X^{(i)}_{c^\alpha t} = c_i J^{(i)}_{\tau^{(c)}_t} \exp\big(\xi^{(c,i)}_{\tau^{(c)}_t}\big)\,. \tag{3.8}
\]

Then by taking x = ϕ(y, z) so that c ◦ x = ϕ(y, ln c + z) and Px = Py,z, Pc◦x = Py,ln c+z, we derive from (3.7) and (3.8) that

\[
\{(X_{c^\alpha t})_{t\ge 0},\, P_{c\circ x}\} = \{(c \circ X_t)_{t\ge 0},\, P_x\}\,.
\]

Conversely, let {X, P_x} be an E-valued mssMp with index α ∈ [0, ∞)^d and lifetime ζ. Then, by arguing exactly as in the direct part, we prove that the process {(J, ξ), P_{y,z}} defined in the statement is an S × R^d-valued Hunt process, with lifetime ζ_* := ∫_0^ζ ds / (|X^{(1)}_s|^{α_1} ⋯ |X^{(d)}_s|^{α_d}).

Now we have to check that the Hunt process {(J, ξ), P_{y,z}} is a MAP. Let (F_t)_{t≥0} be the filtration associated to X and completed with respect to the measures (P_x)_{x∈E}. Define A_t as in the statement if t < ∫_0^ζ ds / (|X^{(1)}_s|^{α_1} ⋯ |X^{(d)}_s|^{α_d}), set A_t = ∞ if t ≥ ∫_0^ζ ds / (|X^{(1)}_s|^{α_1} ⋯ |X^{(d)}_s|^{α_d}), and note that for each t, A_t is a stopping time of (F_t)_{t≥0}. Let us denote by θ_t the usual shift operator at time t and note that for all s, t ≥ 0,

At+s = At + θAt (As) .

Then let us prove that {(J, ξ),Py,z} is a MAP in the filtration Gt := FAt . First observe that (J, ξ) is clearly adapted to this filtration. Then from the strong Markov property

of {X, P_x} applied at the stopping time A_t, we derive from the definition of {(J, ξ), P_{y,z}} that, for any positive Borel function f and x ∈ E,

\[
\begin{aligned}
E_{\varphi^{-1}(x)}\big(f(J_{t+s},\, \xi_{t+s} - \xi_t),\ t + s < \zeta_* \,\big|\, \mathcal{G}_t\big)
&= E_x\Big( f\big( \varphi^{-1}\big(X_{A_t + \theta_{A_t}(A_s)}\big) - (0, \ln|X_{A_t}|) \big),\ A_t + \theta_{A_t}(A_s) < \zeta \,\Big|\, \mathcal{F}_{A_t}\Big) \\
&= E_{X_{A_t}}\big( f\big( \varphi^{-1}(X_{A_s}) - (0, \ln z) \big),\ A_s < \zeta \big)\big|_{z = |X_{A_t}|}\, 1\!\mathrm{I}_{\{A_t < \zeta\}} \\
&= E_{\mathrm{sgn}(X_{A_t})}\big( f\big( \varphi^{-1}(X_{A_s}) \big),\ A_s < \zeta \big)\, 1\!\mathrm{I}_{\{A_t < \zeta\}} \\
&= E_{J_t, 0}\big( f(J_s, \xi_s),\ s < \zeta_* \big)\, 1\!\mathrm{I}_{\{t < \zeta_*\}}\,,
\end{aligned}
\]

where we have set |x| = (|x_1|, ..., |x_d|) and ln |x| = (ln |x_1|, ..., ln |x_d|), and where the third equality follows from the multi-self-similarity property of {X, P_x}. We have obtained (3.1), and this ends the proof of the theorem.

From this theorem it is now easy to construct many examples of non-trivial mssMp's. We can use for instance the MAP defined in (3.4) by coupling any continuous time Markov chain with an independent Lévy process.
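For illustration, here is a hedged numerical sketch of the direct part of Theorem 2 (all parameters, grid sizes and path mechanisms are made up; the MAP is an independent coupling of a sign-flipping chain and a Brownian motion, as in (3.4) with λ = 0, discretized on an Euler grid). On a fixed path, starting ξ from z + ln c instead of z multiplies the additive functional ∫_0^s e^{ξ̄_u} du by c^α, so the identity (3.8) holds pathwise, even after discretization; the law-level identity (3.6) is what upgrades this to the multi-scaling property.

```python
import numpy as np

rng = np.random.default_rng(0)

d, n_steps, h = 2, 5000, 1e-3
alpha = np.array([1.0, 2.0])

# --- one path of a MAP (J, xi): independent chain + Brownian motion ---
# (illustrative coupling as in (3.4); any path works for this pathwise check)
flips = rng.random((n_steps, d)) < 0.5 * h       # coordinate sign flips, rate 0.5
J = np.where(np.cumsum(flips, axis=0) % 2 == 0, 1.0, -1.0)
J = np.vstack([np.ones(d), J[:-1]])              # J_0 = (1, 1)
z = np.array([0.1, -0.2])                        # starting point of xi
xi = z + np.cumsum(rng.normal(0.0, np.sqrt(h), (n_steps, d)), axis=0)

def mssmp(J, xi, alpha):
    """Additive functional A(s) = int_0^s e^{xibar} du (left Riemann sums)
    and the time-changed process X_t = phi(J, xi) at tau_t."""
    xibar = xi @ alpha
    A = np.cumsum(np.exp(xibar) * h)
    def X(t):                                    # requires t < A[-1]
        k = np.searchsorted(A, t, side='right')  # tau_t as a grid index
        return J[k] * np.exp(xi[k])
    return A, X

c = np.array([1.7, 0.6])
calpha = np.prod(c ** alpha)                     # c^alpha = prod_i c_i^{alpha_i}

A, X = mssmp(J, xi, alpha)
Ac, Xc = mssmp(J, xi + np.log(c), alpha)         # same path, started from z + ln c

# Pathwise identity (3.8): X started from z + ln c, at time c^alpha * t,
# equals c o X(t).
for frac in (0.1, 0.3, 0.6):
    t = frac * A[-1]
    assert np.allclose(Xc(calpha * t), c * X(t))
```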

3.3 Asymptotic behavior of mssMp’s.

In this subsection we derive from Theorem 2 the behavior of a mssMp {X, P_x} as t tends to ζ. From this theorem and the construction of MAPs given in Subsection 3.1, if the lifetime ζ_* of the underlying MAP (J, ξ) under P_{y,z} is finite with positive probability, then so is ζ, and for x = ϕ(y, z) the process X under P_x jumps to 0 on the set {ζ < ∞}, that is, X_{ζ−} ≠ 0, with positive probability. This situation has no interest for the problem we are studying and we will skip it. Moreover, we will always assume that J is irreducible, so that if E is composed of at least two orthants, then {X, P_x} does not visit any of them only a finite number of times. The reducible case can always be reduced to the irreducible one by classical arguments. Therefore, we will assume that {X, P_x} is an E-valued mssMp absorbed at 0 whose underlying MAP {(J, ξ), P_{y,z}} in the transformation given in Theorem 2 satisfies:

(a) For all (y, z) ∈ S × R^d, ζ_* = ∞, P_{y,z}-a.s.

(b) J is irreducible.

Under assumption (a) and from Theorem 2, the lifetime of {X, P_x} is given by ζ = ∫_0^∞ e^{ξ̄_s} ds. In order to determine conditions for this lifetime to be finite or infinite, we need reasonable assumptions on the asymptotic behavior of ξ̄. These conditions will be ensured by Proposition 5, so we will also assume:

(c) For each k = 1, ..., d, there is u ∈ R \ {0} such that A_k(u) is well defined.

Let α ∈ [0, ∞)^d and note that the law of the process (J, ξ̄) under P_{y,z} only depends on y and ⟨α, z⟩. As already observed at the end of Subsection 3.1, this process is actually an S × R-valued MAP under the family of probability measures defined by P̄_{y,t} := P_{y,z}, where z is any vector such that t = ⟨α, z⟩. The characteristic exponent of {(J, ξ̄), P̄_{y,t}} is then given by Ā(u) = A(u · α), u ∈ C.

Since A_k(u) = A(u · e_k), under assumption (c) there is u ∈ R \ {0} such that Ā(u) is well defined. Note that the Perron–Frobenius eigenvalue of Ā(u) depends on α; let us denote by κ_α its derivative at 0, in the sense of Subsection 3.1. Applying Proposition 5, we obtain
\[
\lim_{t\to\infty} \frac{\bar\xi_t}{t} = \kappa_\alpha\,, \qquad P_{y,z}\text{-a.s., for all } (y, z) \in S \times \mathbb{R}^d\,. \tag{3.9}
\]
Recall from Proposition XI.2.10 of [2] that either lim_{t→∞} ξ̄_t = −∞, P_{y,z}-a.s. for all (y, z) ∈ S × R^d, or lim_{t→∞} ξ̄_t = +∞, P_{y,z}-a.s. for all (y, z) ∈ S × R^d, or lim inf_{t→∞} ξ̄_t = −∞ and lim sup_{t→∞} ξ̄_t = +∞, P_{y,z}-a.s. for all (y, z) ∈ S × R^d, according as κ_α < 0, > 0 or = 0. Then it follows from Theorem 2 that ζ < ∞, P_x-a.s. for all x ∈ E, or ζ = ∞, P_x-a.s. for all x ∈ E, according as κ_α < 0 or κ_α > 0. The case where κ_α = 0 requires a bit more care and is settled in the following lemma, which we have not found explicitly stated in the literature.

Lemma 1. Assume that conditions (a), (b) and (c) are satisfied.

(i) If κ_α < 0, then ∫_0^∞ e^{ξ̄_s} ds < ∞, P_{y,z}-a.s. for all (y, z) ∈ S × R^d;

(ii) If κ_α ≥ 0, then ∫_0^∞ e^{ξ̄_s} ds = ∞, P_{y,z}-a.s. for all (y, z) ∈ S × R^d.

Proof. The cases where κ_α < 0 and κ_α > 0 follow directly from (3.9). Let us assume now that κ_α = 0. Then, as recalled above, lim inf_{t→∞} ξ̄_t = −∞ and lim sup_{t→∞} ξ̄_t = +∞, P_{y,z}-a.s. for all (y, z) ∈ S × R^d. Fix a > 0 and let τ^{(n)}_a = inf{t ≥ σ^{(n−1)}_a : ξ̄_t > a} and σ^{(n)}_a = inf{t ≥ τ^{(n)}_a : ξ̄_t < a}, n ≥ 1, with σ^{(0)}_a = 0. Then (τ^{(n)}_a) is a sequence of stopping times in the filtration (G_t)_{t≥0} of {(J, ξ), P_{y,z}} such that lim_{n→∞} τ^{(n)}_a = +∞, P_{y,z}-a.s. for all (y, z) ∈ S × R^d. Moreover,

\[
\int_0^\infty e^{\bar\xi_s}\, ds \ \ge\ \sum_{n=1}^\infty \int_{\tau^{(n)}_a}^{\sigma^{(n)}_a} e^{\bar\xi_s}\, ds \ \ge\ e^{a} \sum_{n=1}^\infty \big(\sigma^{(n)}_a - \tau^{(n)}_a\big)\,. \tag{3.10}
\]
Recall that J is irreducible and let π be its invariant measure on S. We derive from the definition of MAPs that the sequence (σ^{(n)}_a − τ^{(n)}_a), n ≥ 1, is stationary under P_{π,0} and, since P_{π,0}(σ^{(1)}_a − τ^{(1)}_a > 0) = 1, we obtain that P_{π,0}(Σ_{n=1}^∞ (σ^{(n)}_a − τ^{(n)}_a) = ∞) = 1. On the other hand, from the Markov property, for all y ∈ S and n ≥ 1,

\[
P_{y,0}\Big( \sum_{k=1}^\infty \big(\sigma^{(k)}_a - \tau^{(k)}_a\big) = \infty \Big) = E_{y,0}\Big( P_{J_{\tau^{(n)}_a},\, 0}\Big( \sum_{k=1}^\infty \big(\sigma^{(k)}_a - \tau^{(k)}_a\big) = \infty \Big) \Big)\,.
\]

Since the finite valued Markov chain (J_{τ^{(n)}_a})_{n≥1} converges in law to π, by letting n go to ∞ in this equality we obtain that, for all y ∈ S, P_{y,0}(Σ_{k=1}^∞ (σ^{(k)}_a − τ^{(k)}_a) = ∞) = P_{π,0}(Σ_{n=1}^∞ (σ^{(n)}_a − τ^{(n)}_a) = ∞) = 1, so that, from inequality (3.10), ∫_0^∞ e^{ξ̄_s} ds = ∞, P_{y,0}-a.s. for all y ∈ S. But the definition of MAPs clearly implies that this holds P_{y,z}-a.s. for all (y, z) ∈ S × R^d.

Note that the dichotomy of Lemma 1 holds for any MAP with values in F × R, where F is any finite set.

Recall from Subsection 2.2 the notation

\[
\lim_{t\uparrow\zeta} X_t = X_{\zeta-}\,,
\]
when this limit exists, where ζ may be finite or infinite. We remind that this limit is to be understood in the topology of the compact space E_0 and that, as already observed before Proposition 3, when ζ < ∞ the existence of X_{ζ−} is guaranteed by the fact that {X, P_x} is a Hunt process. Recall also the definition of χ'_k(0) from Subsection 3.1.

Theorem 3. Assume that conditions (a), (b) and (c) are satisfied.

(i) If κ_α < 0, then P_x(ζ < ∞) = 1 for all x ∈ E; if κ_α ≥ 0, then P_x(ζ = ∞) = 1 for all x ∈ E.

(ii) If one of the following conditions is satisfied:

(a) χ'_k(0) < 0 or χ'_k(0) > 0, for some k = 1, ..., d;

(b) χ'_k(0) = 0 for all k = 1, ..., d, and κ_α < 0;

(c) χ'_k(0) = 0 for all k = 1, ..., d, and κ_α > 0;

then P_x(X_{ζ−} = 0) = 1 for all x ∈ E.

(iii) For all x ∈ E, P_x(X_{ζ−} exists and X_{ζ−} ∈ R^d \ {0}) = 0.

Proof. First it is useful to observe that, under our assumptions, since ζ_* = ∞, P_{y,z}-a.s. for all (y, z) ∈ S × R^d,

\[
\lim_{t\uparrow\zeta} \tau_t = \infty\,, \qquad P_{y,z}\text{-a.s., for all } (y, z) \in S \times \mathbb{R}^d\,. \tag{3.11}
\]

The first assertion is an immediate consequence of the expression ζ = ∫_0^∞ e^{ξ̄_s} ds in Theorem 2 and Lemma 1. The second one follows from the expression X_t = ϕ(J_{τ_t}, ξ_{τ_t}). Indeed, if χ'_k(0) < 0 or χ'_k(0) > 0 for some k = 1, ..., d, then from Proposition XI.2.10 of [2], as t tends to ζ, the coordinate X^{(k)}_t = J^{(k)}_{τ_t} e^{ξ^{(k)}_{τ_t}} tends to 0 if χ'_k(0) < 0 and to ∞ if χ'_k(0) > 0, P_{y,z}-a.s. for all (y, z) ∈ S × R^d. Therefore X_t tends to 0 as t tends to ζ, in the topology of the compact set E_0, P_x-a.s. for all x ∈ E. If χ'_k(0) = 0 for all k = 1, ..., d and κ_α < 0 (resp. κ_α > 0), then from Proposition XI.2.10 of [2], the process |(X^{(1)}_t)^{α_1} ⋯ (X^{(d)}_t)^{α_d}| = e^{ξ̄_{τ_t}} tends to 0 (resp. to ∞) as t tends to ζ. Hence X_t tends to 0 in the topology of E_0.

Let us now prove (iii). Assume that x = ϕ(y, z) is such that X_{ζ−} exists P_x-a.s. Then for any k = 1, ..., d, from (3.11) and Proposition XI.2.10 of [2], the process ξ^{(k)}_{τ_t} either tends to −∞, or tends to +∞, or oscillates, P_x-a.s. as t tends to ζ. Therefore the limit lim_{t↑ζ} X^{(k)}_t = lim_{t↑ζ} J^{(k)}_{τ_t} e^{ξ^{(k)}_{τ_t}} cannot belong to R \ {0}.

Note that parts (i) and (ii) of Theorem 3 extend Proposition 3 to the case where E is any state space, but with additional assumptions. It seems that it is not possible to conclude in the case where χ'_k(0) = 0 for all k = 1, ..., d and κ_α = 0. We are only able to construct examples such that, for all x ∈ E, X_t has no limit P_x-a.s. as t tends to ∞. Note also that part (iii) of Theorem 3 completes part 1. of Proposition 1, where it was proved that the set {x ∈ R^d \ {0} : x_1 x_2 ⋯ x_d = 0} is absorbing. Actually, under our assumptions, this set is a.s. never attained.

Remark 3. It is important to note that {X, P_x} has finite or infinite lifetime depending on the value of α: changing α may change a finite lifetime into an infinite one. This makes another difference with self-similar Markov processes, where the index can be changed simply by raising the process to some power, the lifetime remaining unchanged. However, recall that {X, P_x} is a self-similar Markov process with index α_1 + ⋯ + α_d. Therefore the finiteness of the lifetime of {X, P_x} is not determined by the sum α_1 + ⋯ + α_d alone.
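The phenomenon of Remark 3 can be illustrated numerically. Consider a hypothetical two-state MAP in dimension d = 2 whose additive part is a pure state-dependent drift v(i) ∈ R^2 (all values below are made up); then Ā(u) = A(u · α) = diag(u⟨α, v(i)⟩)_i + Q and κ_α = Σ_i π_i ⟨α, v(i)⟩, where π is the invariant law of J. The sketch computes κ_α as a finite-difference derivative at 0 of the leading eigenvalue of Ā(u) and exhibits two indices α with the same sum α_1 + α_2 = 3 but opposite signs of κ_α, hence finite versus infinite lifetime by Theorem 3 (i):

```python
import numpy as np

Q = np.array([[-1.0,  1.0],
              [ 1.0, -1.0]])              # symmetric chain: pi = (1/2, 1/2)
v = np.array([[ 3.0, -1.0],               # made-up drift vector in state 0
              [-1.0, -1.0]])              # made-up drift vector in state 1

def kappa(alpha):
    """Derivative at 0 of the leading eigenvalue of Abar(u) = A(u * alpha)."""
    def leading(u):
        Abar = np.diag(u * (v @ alpha)) + Q
        return max(np.linalg.eigvals(Abar).real)
    h = 1e-6
    return (leading(h) - leading(-h)) / (2 * h)

a1, a2 = np.array([2.0, 1.0]), np.array([1.0, 2.0])
assert a1.sum() == a2.sum()               # same total index alpha_1 + alpha_2
assert kappa(a1) > 0.5                    # kappa = 1 > 0: infinite lifetime
assert kappa(a2) < -0.5                   # kappa = -1 < 0: finite lifetime

# pi-average of <alpha, v(i)> matches the eigenvalue derivative:
pi = np.array([0.5, 0.5])
assert abs(kappa(a1) - pi @ (v @ a1)) < 1e-3
```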

References

[1] L. Alili, L. Chaumont, P. Graczyk and T. Żak: Inversion, duality and Doob h-transforms for self-similar Markov processes. Electron. J. Probab. 22, no. 20, 1–18, (2017).

[2] S. Asmussen: Applied probability and queues. Volume 51 of Applications of Mathematics. Springer-Verlag, New York, second edition, 2003.

[3] R.M. Blumenthal and R.K. Getoor: Markov processes and potential theory. Pure and Applied Mathematics, Vol. 29. Academic Press, New York-London (1968).

[4] L. Chaumont, H. Pantí and V. Rivero: The Lamperti representation of real- valued self-similar Markov processes. Bernoulli, 19 (2013), no. 5B, 2494–2523.

[5] E. Çinlar: Markov additive processes. I, II. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 24 (1972), 85–93; ibid. 24 (1972), 95–121.

[6] I.I. Ezhov and A.V. Skorohod: Markov processes with homogeneous second component: I. Teor. Verojatn. Primen, 14, 1–13 (1969).

[7] M. Fukushima, Y. Oshima and M. Takeda: Dirichlet forms and symmetric Markov processes. Second revised and extended edition. De Gruyter Studies in Mathematics, 19. Walter de Gruyter & Co., Berlin, 2011.

[8] E.B. Dynkin: Markov Processes. Vol. I. Springer, Berlin, (1965).

[9] S. Dereich, L. Döring and A.E. Kyprianou: Real self-similar processes starting from the origin. Ann. Probab. 45 (2017), no. 3, 1952–2003.

[10] I.I. Ezhov and A.V. Skorohod: Markov processes with homogeneous second component: I. Teor. Verojatn. Primen, 14, 1–13 (1969).

[11] M. Jacobsen and M. Yor: Multi-self-similar Markov processes on R^n_+ and their Lamperti representations. Probab. Theory Related Fields, 126 (2003), no. 1, 1–28.

[12] J. Lamperti: Semi-stable Markov processes. I. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, 22:205–225, (1972).

[13] H. Pantí, J.C. Pardo and V. Rivero: Recurrent extensions of real-valued self- similar Markov processes. Preprint, arXiv:1808.00129, (2018).

[14] J. Pitman and L.C.G. Rogers: Markov functions. Ann. Probab. 9, 573–582, (1981).

[15] V. Rivero: Entrance laws for positive self-similar Markov processes. Mathematical Congress of the Americas, 119–140, Contemp. Math., 656, Amer. Math. Soc., Providence, RI, (2016).
