arXiv:math/0407140v1 [math.PR] 8 Jul 2004

The Annals of Applied Probability
2004, Vol. 14, No. 3, 1202–1241
DOI: 10.1214/105051604000000260
© Institute of Mathematical Statistics, 2004

UNIFORM MARKOV RENEWAL THEORY AND RUIN PROBABILITIES IN MARKOV RANDOM WALKS

By Cheng-Der Fuh
Academia Sinica

Let {(X_n, S_n), n ≥ 0} be a Markov random walk in which {X_n, n ≥ 0} is a Markov chain on a general state space X and the additive component S_n takes values on the real line R. In this paper, we prove a uniform Markov renewal theorem with an estimate on the rate of convergence. This result is applied to boundary crossing problems for Markov random walks. To be more precise, for given b ≥ 0, define the stopping time τ = τ(b) = inf{n : S_n > b}. When the drift of the random walk is 0, we derive a one-term Edgeworth type asymptotic expansion for the first passage probabilities P_π{τ < m} and P_π{τ < m, S_m ≤ c}, where P_π denotes the probability under the initial (stationary) distribution π, m ≤ ∞ and c ≤ b. When the drift is nonzero, Brownian approximations for the first passage probabilities with correction terms are derived. Applications to sequential estimation and truncated tests in random coefficient models and first passage times in products of random matrices are also given.

Received December 2001; revised November 2002.
Supported in part by NSC Grant 91-2118-M-001-016.
AMS 2000 subject classifications. Primary 60K05; secondary 60J10, 60K15.
Key words and phrases. Brownian approximation, first passage probabilities, ladder height distribution, Markov-dependent Wald martingale, products of random matrices, random coefficient models, uniform Markov renewal theory.

1. Introduction. Let {X_n, n ≥ 0} be a Markov chain on a general state space X with transition probability P(x, A) and stationary probability π. Suppose an additive component S_n, taking values on the real line R, is adjoined to the chain such that {(X_n, S_n), n ≥ 0} is a Markov chain on X × R with S_0 = 0 and

(1.1) P{(X_{n+1}, S_{n+1}) ∈ A × (B + s) | (X_n, S_n) = (x, s)} = P(x, A × B)

for all x ∈ X, s ∈ R, A ∈ A and B ∈ B (:= Borel σ-algebra on R). The chain {(X_n, S_n), n ≥ 0} is called a Markov random walk, and we denote its increments by ξ_n = S_n − S_{n−1}. For an initial distribution ν on X_0, let P_ν denote the probability measure under the initial distribution ν on X_0 and let E_ν denote the corresponding expectation. If ν is degenerate at x, we shall simply write P_x (E_x) instead of P_ν (E_ν). In this paper, we shall assume that {X_n, n ≥ 0} has an invariant probability π. For b ≥ 0, define the stopping time

(1.2) τ = τ(b) = inf{n : S_n > b},   τ_+ = τ(0).

In a variety of contexts, for given m ≤ ∞ and c ≤ b, we need to approximate the first passage probabilities

(1.3) P_π{τ < m, S_m ≤ c}.
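As a concrete point of reference for the objects just defined, the following sketch simulates a Markov random walk and the stopping time τ(b) of (1.2). The two-state chain and the Gaussian state-dependent increments are illustrative assumptions only, not part of the model's generality.

```python
import random

def simulate_first_passage(b, mu=(1.5, 0.5), p_stay=0.8, rng=None):
    """Simulate a two-state Markov random walk until S_n > b.

    The chain X_n lives on {0, 1}; it stays in its current state with
    probability p_stay.  Given X_n = i, the increment xi_n is N(mu[i], 1)
    (hypothetical drifts, chosen positive so that tau(b) is finite a.s.).
    Returns (tau, S_tau): the first passage time tau(b) = inf{n : S_n > b}
    and the terminal value S_tau, so S_tau - b is the overshoot.
    """
    rng = rng or random.Random()
    x, s, n = 0, 0.0, 0
    while s <= b:
        n += 1
        if rng.random() > p_stay:      # Markov state transition
            x = 1 - x
        s += rng.gauss(mu[x], 1.0)     # additive component, law depends on x
    return n, s

rng = random.Random(0)
tau, s_tau = simulate_first_passage(10.0, rng=rng)
```

Repeating such runs gives Monte Carlo estimates of the first passage probabilities in (1.3), against which the analytic approximations of Sections 2 and 3 can be compared.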

2. Main results. Let {(X_n, S_n), n ≥ 0} be a Markov random walk on X × R. For ease of notation, write P(x, A) = P(x, A × R) for the transition probability kernel of {X_n, n ≥ 0}. For two transition probability kernels Q(x, A), K(x, A), x ∈ X, A ∈ A, and for all measurable functions h(x), x ∈ X, define Qh and QK by Qh(x) = ∫ Q(x, dy)h(y) and QK(x, A) = ∫ K(x, dy)Q(y, A), respectively.

Let N be the Banach space of measurable functions h : X → C (:= the set of complex numbers) with norm ‖h‖ < ∞. We also introduce the Banach space B of transition probability kernels Q such that the operator norm ‖Q‖ = sup{‖Qg‖; ‖g‖ ≤ 1} is finite. Two prototypical norms considered in the literature are the supremum norm and the L_p norm. Two other commonly used norms in applications are the weighted variation norm and the bounded Lipschitz norm, described as follows:

1. Let w : X → [1, ∞) be a measurable function. Define, for all measurable functions h, the weighted variation norm ‖h‖_w = sup_{x∈X} |h(x)|/w(x), and set N_w = {h : ‖h‖_w < ∞}. The corresponding norm in B_w is of the form ‖Q‖_w = sup_{x∈X} ∫ |Q|(x, dy)w(y)/w(x).

2. Let (X, d) be a metric space. For any continuous function h on X, the Lipschitz seminorm is defined by ‖h‖_L := sup_{x≠y} |h(x) − h(y)|/d(x, y). The supremum norm is ‖h‖_∞ = sup_{x∈X} |h(x)|. Let ‖h‖_BL := ‖h‖_L + ‖h‖_∞ and N_BL = {h : ‖h‖_BL < ∞}.

Definition 1. A Markov chain {X_n, n ≥ 0} is said to be uniformly ergodic with respect to a given norm ‖·‖ if there exists a stochastic kernel Π such that ‖P^(n) − Π‖ → 0 as n → ∞ in the induced operator norm in B. The Markov chain {X_n, n ≥ 0} is called w-uniformly ergodic in the case of the weighted variation norm.

The Markov chain {X_n, n ≥ 0} is assumed to be irreducible [with respect to a maximal irreducibility measure ϕ on (X, A)], aperiodic and uniformly ergodic with respect to a given norm ‖·‖. In this paper, we assume ϕ is σ-finite, and {X_n, n ≥ 0} has an invariant probability π. It is known [cf. Theorem 13.3.5 of Meyn and Tweedie (1993)] that for an aperiodic and irreducible Markov chain, if there exist some λ-small set C and some P^∞(C) > 0 such that, as n → ∞,

∫_C λ_C(dx)(P^n(x, C) − P^∞(C)) → 0,

where λ_C(·) = λ(·)/λ(C) is normalized to a probability on C, then the chain is positive, and there exists a ϕ-null set N such that, for any initial distribution ν with ν(N) = 0,

‖∫ ν(dx)P^n(x, ·) − π‖_tv → 0 as n → ∞,

where ‖·‖_tv denotes the total variation norm. Theorem 1.1 of Kartashov (1996) gives that P has a unique stationary projector Π in the sense that Π² = Π = PΠ = ΠP, and Π(x, A) = π(A) for all x ∈ X, A ∈ A. The following assumptions will be used throughout this paper.

C1. There exist a measure Ψ on X × R and a measurable function h on X such that ∫ π(dx)h(x) > 0, Ψ(X × R) = 1, ∫ Ψ(dx × R)h(x) > 0, and the kernel T(x, A × B) = P(x, A × B) − h(x)Ψ(A × B) is nonnegative for all A ∈ A and B ∈ B.

C2. For all x ∈ X, sup_{‖h‖≤1} ‖E[h(X_1) | X_0 = x]‖ < ∞.

C3. sup_x E_x|ξ_1|² < ∞ and, for all x ∈ X, sup_{‖h‖≤1} ‖E[|ξ_1|^r h(X_1) | X_0 = x]‖ < ∞ for some r ≥ 1.

C4. Let ν be an initial distribution of the Markov chain {X_n, n ≥ 0}; assume that for some r ≥ 1,

(2.1) ‖ν‖ := sup_{‖h‖≤1} ∫_{x∈X} h(x)E_x|ξ_1|^r ν(dx) < ∞.

C5. Assume that for some n_0 ≥ 1, ∫_{−∞}^{∞} ∫_{x∈X} |E_x exp{iθξ_1}|^{n_0} π(dx) dθ < ∞.

C6. There exists Θ ⊂ R containing an interval of zero such that, for all x ∈ X and θ ∈ Θ, sup_{‖h‖≤1} ‖E[exp(θξ_1)h(X_1) | X_0 = x]‖ ≤ C < ∞, for some C > 0.

C7. There exists a σ-finite measure M on (X, A) such that, for all x ∈ X, the probability measure P_x on (X, A) defined by P_x(A) = P(X_1 ∈ A | X_0 = x) is absolutely continuous with respect to M, so that P_x(A) = ∫_A p(x, y)M(dy) for all A ∈ A, where p(x, ·) = dP_x/dM.

Remark 1. Condition C1 is a mixing condition on the Markov chain {(X_n, S_n), n ≥ 0}. It is also called a minorization condition in Ney and Nummelin (1987), where they constructed a regeneration scheme and proved a large deviation theorem. An alternative condition for C1 is that there exist a measure Ψ on X and a family of measures {h(x, B); B ∈ B} on R, for each x ∈ X, such that the kernel T(x, A × B) = P(x, A × B) − h(x, B)Ψ(A) is nonnegative for all A ∈ A and B ∈ B. If a Markov chain is Harris recurrent, then C1 holds for the n-step transition probability. It is known that under the irreducibility assumption, C1 implies that {(X_n, ξ_n), n ≥ 0} is Harris recurrent [cf. Theorem 3.7 and Proposition 3.12 of Nummelin (1984) and Theorem 4.1(iv) of Ney and Nummelin (1987)]. An example on page 9 of Kartashov (1996) also shows that there exists a uniformly ergodic Markov chain with respect to a given norm which is not Harris recurrent. Theorem 2.2 of Kartashov (1996) states that under condition C1, a Markov chain {X_n, n ≥ 0} with transition kernel P is uniformly ergodic with respect to a given norm if and only if there exists 0 < ρ < 1 such that

(2.2) ‖P^n − Π‖ = O(ρ^n)

as n → ∞. When the Markov chain is uniformly ergodic with respect to the weighted variation norm, (2.2) still holds without condition C1.

Remark 2. Conditions C2–C4 are standard moment conditions. Condition C6 implies that the exponential moment, in the sense of the corresponding norm, of ξ_1 exists for θ in Θ. Condition C5 implies that for all n ≥ n_0, S_n has a bounded probability density function given X_n. The existence of the transition density in C7 will be used in Theorems 2 and 3 only. It holds in most applications.

Next, we describe the uniform version of conditions C1–C7. Since the uniform version is in the sense of a varying drift [see (2.13)], we consider a compact set Γ ⊂ R which contains an interval around 0. For each α ∈ Γ, let {(X_n^α, S_n^α), n ≥ 0} be the Markov random walk on a general state space X defined as in (1.1), with transition probability P^α and invariant probability measure π^α. For each α ∈ Γ, the Markov chain {X_n^α, n ≥ 0} is assumed to be irreducible [with respect to a maximal irreducibility measure ϕ on (X, A)], aperiodic and uniformly ergodic with respect to a given norm ‖·‖.

To establish the uniform Markov renewal theorem, we shall make use of the uniform version of (1.1) in conjunction with the following extension of the uniform Cramér (strongly nonlattice) condition:

(2.3) g(θ) := inf_{α∈Γ} inf_{|v|>θ} |1 − E_{π^α} exp{ivS_1^α}| > 0 for all θ > 0.

When Γ has only one element, (2.3) reduces to the classical Cramér condition, and the additive component S_n is then called strongly nonlattice. In addition, we also assume that the [conditional uniform] Cramér (strongly nonlattice) condition holds: there exists m ≥ 1 such that

(2.4) sup_{α∈Γ} lim sup_{|θ|→∞} |E^α{exp(iθS_m^α) | X_0, X_m}| < 1.

Note that under condition C5, (2.3) and (2.4) can be removed. Next, we assume that the strong mixing condition holds: there exist γ_1 > 0 and 0 < ρ_1 < 1 such that, for all k ≥ 0 and n ≥ 1, and for all real-valued measurable functions g, h with g, h ∈ N,

|E_ν{g(X_k)h(X_{k+n})} − E_ν{g(X_k)}E_ν{h(X_{k+n})}| ≤ γ_1 ρ_1^{n−1}.

Note that when the norm is the weighted variation norm, we only need (2.3) and (2.4) to hold, without the strong mixing condition. For α ∈ Γ, the uniform versions of C1–C6 are:

K1. There exist a measure Ψ^α on X × R and a measurable function h on X such that ∫ π^α(dx)h(x) > 0, Ψ^α(X × R) = 1, ∫ Ψ^α(dx × R)h(x) > 0, and the kernel T^α(x, A × B) = P^α(x, A × B) − h(x)Ψ^α(A × B) is nonnegative for all A ∈ A and B ∈ B.

K2. For all x ∈ X, sup_{α∈Γ} sup_{‖h‖≤1} ‖E^α[h(X_1^α) | X_0^α = x]‖ < ∞.

K3. sup_{α∈Γ} sup_x E_x^α|ξ_1^α|² < ∞ and, for all x ∈ X, sup_{α∈Γ} sup_{‖h‖≤1} ‖E^α[|ξ_1^α|^r h(X_1^α) | X_0^α = x]‖ < ∞ for some r ≥ 1.

K4. Let ν^α be an initial distribution of the Markov chain {X_n^α, n ≥ 0}; assume that for some r ≥ 1,

sup_{α∈Γ} sup_{‖h‖≤1} ∫_{x∈X} h(x)E_x^α|ξ_1^α|^r ν^α(dx) < ∞.

K5. Assume that for some n_0 ≥ 1,

sup_{α∈Γ} ∫_{−∞}^{∞} ∫_{x∈X} |E_x^α exp{iθξ_1^α}|^{n_0} π^α(dx) dθ < ∞.

K6. There exists Θ ⊂ R containing an interval of zero such that, for all x ∈ X and θ ∈ Θ, sup_{α∈Γ} sup_{‖h‖≤1} ‖E^α[exp(θξ_1^α)h(X_1^α) | X_0^α = x]‖ ≤ C for some C > 0.

Theorem 1. Let {(X_n^α, S_n^α), n ≥ 0} be a uniformly strongly nonlattice Markov random walk satisfying K1–K4 with r ≥ 2 in K3. Let μ^α := μ_1^α := E_{π^α}ξ_1^α > 0 and μ_2^α := E_{π^α}(ξ_1^α)² < ∞. Then, as s → ∞,

(2.5) Σ_{n=0}^{∞} P_ν^α{s ≤ S_n^α ≤ s + h, X_n^α ∈ A} = (h/μ^α)π^α(A) + o(s^{−(r−1)}),

(2.6) Σ_{n=0}^{∞} P_ν^α{−∞ ≤ S_n^α ≤ s, X_n^α ∈ A} = [s/μ^α + μ_2^α/(2(μ^α)²)]π^α(A) + o(s^{−(r−2)}).

Furthermore, if K6 holds, then for some r_1 > 0, as s → ∞,

(2.7) Σ_{n=0}^{∞} P_ν^α{−∞ ≤ S_n^α ≤ s, X_n^α ∈ A} = [s/μ^α + μ_2^α/(2(μ^α)²)]π^α(A) + O(e^{−r_1 s}).

Remark 3. When Γ has only one element and the increments ξn are i.i.d., these results are proved via Fourier transform and Schwartz’s theory of distributions. These methods can be extended to the Markov case via perturbation theory of the transition probability operator. Such extensions can also be modified to yield the rate of convergence in Markov renewal theory, which generalizes the corresponding results of Stone (1965a, b) and Carlsson (1983) for simple random walks.

Remark 4. For each fixed α, the Markov renewal theorems in Theorems 1–4 of Fuh and Lai (2001) provide the rate of convergence as s → ∞. However, in the applications to Theorem 3, we shall be letting α → 0 simultaneously with s → ∞. Consequently, we must consider the possibility that certain unpleasant situations might occur, such as a case in which, as α → 0, the rate of convergence to 0 of the error terms in (2.5)–(2.7) gets slower and slower. Theorem 1 guarantees that this cannot happen; that is, there is a certain rate of convergence which applies uniformly to all α in some neighborhood of 0.

Let ν be an initial distribution of the Markov chain {X_n, n ≥ 0} and let μ = E_π ξ_1, σ² = lim_{n→∞} n^{−1}E_ν{(S_n − nμ)²} and κ = lim_{n→∞} n^{−1}E_ν{(S_n − nμ)³}, which are well defined under C4 for some r ≥ 3. It will be convenient to use the notation

(2.8) P_ν^{(m,s)}(A) = P_ν{A | S_m = s}.

Let τ_+ = inf{n ≥ 1 : S_n > 0} be the first ascending ladder epoch of S_n, let τ_n = inf{k ≥ τ_{n−1} : S_k > S_{τ_{n−1}}} be the nth ascending ladder epoch of S_n, for n = 2, 3, ..., and let τ_− = inf{n ≥ 1 : S_n ≤ 0} be the first descending ladder epoch of S_n. Since μ > 0, the τ_n are finite almost surely under the probability P{X_{τ_+} ∈ A | X_0 = x} and therefore the associated ladder heights S_{τ_n} are well-defined positive random variables. Furthermore, {(X_{τ_n}, S_{τ_n}), n ≥ 0} is a Markov chain, the so-called ladder Markov random walk. When μ = 0, we can still define the ladder Markov chain via the property of uniform integrability in Theorem 5 of Fuh and Lai (1998). It is assumed throughout this paper that P_x(τ_+ < ∞) = 1 for all x ∈ X and that the ladder random walk is uniformly ergodic with respect to a given norm. The moment conditions C2–C4 and C6 for the ladder random walk are verified in Lemma 1 and Lemma 14, respectively. The uniform strong nonlattice property for the ladder random walk is established in Lemma 13. Since C5 holds in Theorems 2 and 3, we do not need (2.4) anymore.
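The ladder quantities just defined are easy to compute along a single sample path. The following sketch (plain Python, with a short hand-chosen increment sequence as an illustrative input) records the ascending ladder epochs τ_n and the ladder heights S_{τ_n}:

```python
def ladder_epochs(increments):
    """Ascending ladder epochs and heights of S_n = xi_1 + ... + xi_n.

    tau_1 = inf{n >= 1 : S_n > 0}, and tau_k is the first time the walk
    strictly exceeds the previous ladder height S_{tau_{k-1}}, i.e. the
    strict record times of the walk.  Returns (epochs, heights).
    """
    epochs, heights = [], []
    s, record = 0.0, 0.0
    for n, xi in enumerate(increments, start=1):
        s += xi
        if s > record:          # new strict maximum => ladder epoch
            epochs.append(n)
            heights.append(s)
            record = s
    return epochs, heights

# S_n = 1.0, 0.5, 2.5, 2.3, 2.4 -> ladder epochs at n = 1 and n = 3
ep, hts = ladder_epochs([1.0, -0.5, 2.0, -0.2, 0.1])
```

In the Markov case one would additionally record the chain states X_{τ_n} at the ladder epochs, which together with the heights form the ladder Markov random walk.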
Let π_+ denote the invariant measure of the kernel P_+(x, A × R), which is assumed to be irreducible and aperiodic. The Harris recurrence property of ladder Markov chains has been established in Alsmeyer (2000). Therefore, C1 holds for the ladder random walk. In Section 3, we will show how uniform ergodicity of the ladder chain and finiteness of moments of S_{τ_+} can be established in some interesting examples.

Theorem 2. Let {(X_n, S_n), n ≥ 0} be a Markov random walk satisfying C1–C5 and C7 with r = 3 in C3. Suppose μ = 0, σ = 1 and there exists ε > 0 such that inf_x P_π{ξ_1 ≤ −ε | X_1 = x} > 0. Let b = ζm^{1/2} and s = ζ_0 m^{1/2} for some ζ > 0 and −∞ < ζ_0 < ζ, and let ρ_+ := lim_{b→∞} E_π(S_{τ(b)} − b) denote the asymptotic mean overshoot. Then, as m → ∞,

(2.9) P_π^{(m,s)}{τ < m} = exp{−2(b + ρ_+)(b + ρ_+ − s − κ/3)/(m + κs/3)} + o(m^{−1/2}),

and, for c = γm^{1/2} with γ ≤ ζ,

(2.10) P_π{τ < m, S_m ≤ c} = Φ((c + κ/3 − 2(b + ρ_+))/(m + κc/3)^{1/2}) + o(m^{−1/2}),

where Φ denotes the standard normal distribution function.

Remark 5. Approximations (2.9) and (2.10) are the corresponding results for Brownian motion with drift 0, with b replaced by b + ρ_+, s (respectively c) replaced by s + κ/3 (respectively c + κ/3), and m replaced by m + κs/3. Also, note that the constant ρ_+ in (2.9) and (2.10) reduces to ES_{τ_+}²/2ES_{τ_+} when S_n is a simple random walk [cf. Siegmund (1985), pages 220 and 221].

To analyze P_π{τ < m, S_m ≤ c} under a varying drift, we make use of the perturbation theory of the linear operators

(2.11) (P_z h)(x) = E[e^{zξ_1}h(X_1) | X_0 = x], h ∈ N,

for complex z. There exists δ > 0 such that, for |z| ≤ δ, N = N_1(z) ⊕ N_2(z) and

(2.12) P_z Q_z h = λ(z)Q_z h for all h ∈ N,

where N_1(z) is a one-dimensional subspace of N, λ(z) is the eigenvalue of P_z with corresponding eigenspace N_1(z) and Q_z is the parallel projection of N onto the subspace N_1(z) in the direction of N_2(z). Extension of this argument to the weighted variation norm and random ξ_n satisfying some regularity assumptions is given in Fuh and Lai (2001). We extend this result to uniformly ergodic Markov random walks with respect to a given norm in the Appendix.

Let h_1 ∈ N be the constant function h_1 ≡ 1 and let r(x; z) = (Q_z h_1)(x). From (2.12), it follows that r(·; z) is an eigenfunction of P_z associated with the eigenvalue λ(z); that is, r(·; z) generates the one-dimensional eigenspace N_1(z). In particular, take z = α ∈ R with α ∈ [−δ, δ] := Γ. Define the "twisting" transformation

(2.13) P^α(x, dy × ds) = e^{−Λ(α)+αs}[r(y; α)/r(x; α)]P(x, dy × ds), where Λ = log λ.

Then P^α is the transition probability of a Markov random walk {(X_n^α, S_n^α), n ≥ 0} with invariant probability π^α. The function Λ(α) is normalized so that Λ(0) = Λ^{(1)}(0) = 0, where (1) denotes the first derivative. Then P^0 = P is the transition probability of the Markov random walk {(X_n, S_n), n ≥ 0} with invariant probability π. Here and in the sequel, we denote by P_ν^α the probability measure of the Markov random walk {(X_n^α, S_n^α), n ≥ 0} with transition probability kernel (2.13) and initial distribution ν^α. For ease of notation, we denote ν^0 := ν, and let E_ν^α be the expectation under P_ν^α.
It is known that Λ is a strictly convex and real analytic function for which Λ^{(1)}(α) = E_{π^α}ξ_1^α. Therefore, E_{π^α}ξ_1^α <, =, or > 0 according as α <, =, or > 0. For any value α ≠ 0 with |α| < δ, there is at most one value α′ with |α′| < δ, necessarily of opposite sign, for which Λ(α) = Λ(α′). Assume such an α′ exists; we may let α_0 = min(α, α′) and α_1 = max(α, α′), so that α_0 < 0 < α_1 and Λ(α_0) = Λ(α_1). Denote ∆ = α_1 − α_0. We also assume, without loss of generality, that σ² = Λ^{(2)}(0) = 1, where (2) denotes the second derivative.
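In the i.i.d. special case, where r(·; α) ≡ 1, the twisting (2.13) reduces to classical exponential tilting with Λ(α) = log E e^{αξ_1}, and the drift change Λ^{(1)}(α) = E_{π^α}ξ_1^α can be computed in closed form. A minimal numerical sketch for a two-point increment distribution (an illustrative choice):

```python
import math

def tilted_mean(values, probs, alpha):
    """Mean of xi under the exponentially tilted law
    dP^alpha = exp(alpha*xi - Lambda(alpha)) dP, with
    Lambda(alpha) = log E exp(alpha*xi).
    This is the r(.; alpha) == 1 (i.i.d.) special case of (2.13)."""
    lam = math.log(sum(p * math.exp(alpha * v) for v, p in zip(values, probs)))
    return sum(v * p * math.exp(alpha * v - lam) for v, p in zip(values, probs))

# symmetric +/-1 increments: drift 0 at alpha = 0, positive drift for alpha > 0
vals, probs = [1.0, -1.0], [0.5, 0.5]
m0 = tilted_mean(vals, probs, 0.0)   # Lambda'(0) = 0
m1 = tilted_mean(vals, probs, 1.0)   # Lambda'(1) = tanh(1) > 0
```

This illustrates the sign relation stated above: the tilted drift is negative, zero or positive according as α is.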

Theorem 3. Let {(X_n, S_n), n ≥ 0} be a strongly nonlattice Markov random walk satisfying C1–C5 and C7. Let {(X_n^α, S_n^α), n ≥ 0} be the Markov random walk induced by (2.13). Suppose there exists ε > 0 such that inf_x P_π{ξ_1 ≤ −ε | X_1 = x} > 0. Let b = ζm^{1/2} for some ζ > 0, let c = γm^{1/2} for some γ ≤ ζ, and suppose also that √m ∆ = δ is a fixed positive constant. Then, as m → ∞, for j = 0 or 1,

(2.14) P_ν^{α_j}{τ < m, S_m ≤ c}

= e^{(−1)^{1+j}∆(b+ρ_+)} E_{π^{α_j}}[r(X_0; α_{1−j})/r(X_0; α_j)]

× Φ((c + κ/3 − 2(b + ρ_+))/(m + κc/3)^{1/2} + (1/2)(−1)^j ∆(m + κc/3)^{1/2}) + o(m^{−1/2}).

Remark 6. Let M_n^{(α)} = r(X_n; α)exp{αS_n − nΛ(α)} and let F_n be the σ-algebra generated by {(X_t, S_t), t ≤ n}. Then, for |α| ≤ δ, {M_n^{(α)}, F_n, n ≥ 0} is a martingale under any initial distribution ν of X_0 [cf. Ney and Nummelin (1987) and Fuh and Lai (1998)]. Note that r(·; α) in (2.14) reduces to 1 when S_n is a simple random walk. Hence, r(y; α)/r(x; α) can be regarded as the reflection of Markovian dependence under the uniform ergodicity condition with respect to a given norm.

3. Examples. In this section, we give examples of strongly nonlattice Markov random walks satisfying conditions C1–C7 and such that the underlying Markov chain {X_n, n ≥ 0} is irreducible and aperiodic. Many time series and queuing models X_n are irreducible, aperiodic and w-uniformly ergodic Markov chains, as shown in Chapters 15 and 16 of Meyn and Tweedie

(1993), and conditions C2–C4 and C6 are moment conditions on the additive components attached to X_n that are satisfied in typical applications. However, renewal theorems are often applied to the ladder random walk, as in Section 2. The techniques used by Meyn and Tweedie (1993) to prove the w-uniform ergodicity of a rich class of time series and queuing models can also be applied to show that their ladder random walks indeed satisfy conditions C1–C7, as illustrated by the following examples.

3.1. Random coefficient models. Let {X_n, n ≥ 1} be the Markov chain which satisfies the first-order random coefficient autoregression model

(3.1) X_n = β_n X_{n−1} + ε_n, X_0 = 0,

where (β_n)_{n≥1} is a sequence of i.i.d. random variables with Eβ_n = β and Var(β_n) = σ², where σ ≥ 0 is known, and (ε_n)_{n≥1} is a sequence of i.i.d. random variables with Eε_n = 0 and Var(ε_n) = 1. Further, we assume that (β_n)_{n≥1} and (ε_n)_{n≥1} are independent, and that (β_n, ε_n)′ has a common density function q with respect to Lebesgue measure which is positive everywhere. In the case of the AR(1) model, for which β_n is a constant β, Lai and Siegmund (1983) proposed a sequential estimation procedure for the unknown parameter β. Pergamenshchikov and Shiryaev (1993) generalized their results to the model (3.1). They introduced the stopping time

(3.2) T = T_c = inf{n ≥ 1 : Σ_{k=1}^n X_k²/(1 + σ²X_k²) ≥ c},

where c > 0 is a fixed number, and considered a modification of the sequential least-squares estimate

(3.3) b_T = (Σ_{k=1}^T X_{k−1}X_k/(1 + σ²X_{k−1}²)) / (Σ_{k=1}^T X_{k−1}²/(1 + σ²X_{k−1}²)).

Their Theorems 1–3 showed that T_c < ∞ with probability 1 for any c > 0, and that b_T is asymptotically normal under some moment conditions.

In this section we investigate the limiting behavior of T_c under the stability assumption β² + σ² < 1. Meyn and Tweedie [(1993), Theorem 16.5.1] established w(x) = |x|²-uniform ergodicity of the random coefficient model (3.1) by proving that a drift condition is satisfied. By Lemma 15.2.9 of Meyn and Tweedie (1993), it is also (|x| + 1)-uniformly ergodic. Suppose the conditional distribution of ξ_n = X_n²/(1 + σ²X_n²) given X_0, ..., X_n is of the form F_{X_{n−1},X_n} such that

(3.4) lim sup_{|θ|→∞} ∫_X ∫_{−∞}^{∞} ∫_{−∞}^{∞} |∫_{−∞}^{∞} e^{iθξ} dF_{x,βx+z}(ξ)| q(β, z) dβ dz π(dx) < 1,

where π is the stationary distribution of {X_n}. Since ξ_1 has a probability density function with respect to Lebesgue measure, (2.4) can be removed. Let S_n = Σ_{i=0}^n ξ_i. Then {(X_n, S_n), n ≥ 0} is strongly nonlattice, and by Theorem 6(ii) of Fuh and Lai (1998), so is the ladder random walk with transition kernel P_+ defined by

(3.5) P_+(x, A × B) = P{X_{τ_+} ∈ A, S_{τ_+} ∈ B | X_0 = x}.

Assume furthermore that

(3.6) sup_x E_x{ξ_1(1 + |β_1| + |ε_1|)} < ∞ and μ := E_π ξ_1 > 0.
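The model (3.1), the stopping rule (3.2) and the estimator (3.3) can be simulated directly. In the sketch below the β_n and ε_n are taken Gaussian; this is an illustrative assumption (it is the setting of the truncated test discussed later), not required for (3.1)–(3.3) themselves.

```python
import random

def rc_ar1_sequential(beta, sigma, c, rng):
    """Simulate the random-coefficient AR(1) model (3.1),
    X_n = beta_n X_{n-1} + eps_n,  beta_n ~ N(beta, sigma^2), eps_n ~ N(0, 1),
    run the stopping rule (3.2) and return (T_c, b_T) from (3.3)."""
    x_prev, info, num, den, n = 0.0, 0.0, 0.0, 0.0, 0
    while info < c:
        n += 1
        x = rng.gauss(beta, sigma) * x_prev + rng.gauss(0.0, 1.0)
        w = 1.0 + sigma**2 * x_prev**2
        num += x_prev * x / w        # sum X_{k-1} X_k / (1 + sigma^2 X_{k-1}^2)
        den += x_prev**2 / w         # sum X_{k-1}^2 / (1 + sigma^2 X_{k-1}^2)
        info += x**2 / (1.0 + sigma**2 * x**2)   # statistic in (3.2)
        x_prev = x
    return n, (num / den if den > 0 else float("nan"))

# stability holds: beta^2 + sigma^2 = 0.09 + 0.04 < 1 (illustrative values)
t_c, b_hat = rc_ar1_sequential(0.3, 0.2, 25.0, random.Random(1))
```

Since each summand in (3.2) is strictly smaller than 1/σ², stopping cannot occur before roughly cσ² observations, which is visible in simulated values of T_c.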

We first note that X_{τ_+} has a positive density function with respect to Lebesgue measure and is therefore irreducible with respect to Lebesgue measure. Moreover, the ladder chain is clearly aperiodic; see Section 5.4.3 of Meyn and Tweedie (1993). Let w(x) = |x| + 1. To show that the ladder chain is w-uniformly ergodic, by Theorem 16.0.1 of Meyn and Tweedie (1993), it suffices to show that there exist positive constants b, λ and a petite set C such that

(3.7) E_x w(X_T) − w(x) ≤ −λw(x) + b1_C(x) for all x ∈ R,

for T = τ_+. We first show that the drift condition (3.7) in fact holds for all stopping times T [with respect to the filtration generated by (X_n, S_n)] such that, for some a > 0,

(3.8) E_x T ≤ a(|x| + 1) for all x ∈ R.

We then show that τ_+ satisfies (3.8) and therefore (3.7) indeed holds for T = τ_+.

Let B > 0, and denote ε_i* = ε_i 1{|ε_i| ≤ B}, ε_i** = ε_i 1{|ε_i| > B}. Note that

(3.9) E|X_T| = E|β_T ··· β_1 X_0 + β_T ··· β_2 ε_1 + ··· + β_T ε_{T−1} + ε_T|

≤ |β||x| + (1 − |β|)^{−1}B + Σ_{i=1}^T |ε_i**| for X_0 = x.

Since the ε_i** are i.i.d. random variables, Wald's equation yields

(3.10) E_x Σ_{i=1}^T |ε_i**| = (E_x T)E|ε_1**| ≤ a(|x| + 1)E|ε_1**|,

by (3.8). Since E|ε_1| < ∞ by (3.6), we can choose B sufficiently large so that aE|ε_1**| + |β| < 1. In view of (3.9) and (3.10), we can then choose 0 < λ < 1 − |β| − aE|ε_1**|, b > (1 − |β|)^{−1}B + aE|ε_1**| and C = {x : |x| ≤ K} with K sufficiently large such that (3.7) holds. Note that C is a petite set; see Section 5.2 of Meyn and Tweedie (1993).

To show that τ_+ satisfies (3.8), we use Wald's equation for Markov random walks [see Lemma 1(i) in Section 4]: for any stopping time T with E_ν T < ∞ and E_ν w(X_T) < ∞,

(3.11) E_ν S_T = μE_ν T + E_ν{∆(X_T) − ∆(X_0)},

where sup_x |∆(x)|/w(x) < ∞ and ∆ is defined in (4.1). Let ξ_i^{(B)} = ξ_i 1{ξ_i ≤ B}, S_n^{(B)} = ξ_1^{(B)} + ··· + ξ_n^{(B)} and τ(B) = inf{n : S_n^{(B)} > 0}. Since μ (= E_π ξ_1 = E_π X_1²/(1 + σ²X_1²)) > 0, we can choose B large enough such that μ^{(B)} (= E_π ξ_1^{(B)}) > 0. Note that S_n^{(B)} ≤ S_n and τ(B) ≥ τ_+. Hence it suffices to show that τ(B) satisfies (3.8). By the monotone convergence theorem, we need only show that (3.8) holds with T = τ(B) ∧ m for every m ≥ 1. Since S_{τ(B)∧m} ≤ B, (3.11) yields

(3.12) B ≥ μ^{(B)}E_x T − E_x|∆(X_T)| − |∆(x)| ≥ μ^{(B)}E_x T − cE_x|X_T| − c(|x| + 2),

since |∆(x)| ≤ cw(x) = c(|x| + 1) for some c > 0 and all x. By (3.9) and (3.10),

(3.13) E_x|X_T| ≤ |β||x| + (1 − |β|)^{−1}B + (E_x T)E(|ε_1|1{|ε_1| > B}).

Choosing B large enough so that cE(|ε_1|1{|ε_1| > B}) < μ^{(B)}/2, we obtain from (3.12) and (3.13) that

B + c(1 − |β|)^{−1}B + 2c + c|x|(1 + |β|) ≥ μ^{(B)}E_x T/2,

proving (3.8) for T = τ(B) ∧ m. Note that the ladder chain also satisfies the mixing condition C1 by Theorem 1 of Alsmeyer (2000).

From (3.9) and (3.10) with T = τ_+, it follows that sup_x {E_x w(X_{τ_+})/w(x)} < ∞. Hence C2 holds for the ladder chain. To prove that the moment condition C6 holds for the kernel (3.5), we assume the additional moment condition

(3.14) sup_x E_x exp{θ(ξ_1 + β_1 + ε_1)} < ∞ for θ ∈ Θ.

First note that E_x{exp(θS_{τ_+})w(X_{τ_+})} ≤ {E_x exp(θpS_{τ_+})}^{1/p}{E_x w^q(X_{τ_+})}^{1/q}, where p^{−1} + q^{−1} = 1. Lemma 14 in Section 6 implies that E_x exp(θpS_{τ_+}) < ∞ for x ∈ X. From (3.9), there exists a constant c_q depending only on q such that

E_x|X_{τ_+}|^q ≤ c_q{|x|^q + (1 − |β|)^{−1}E_x max_{i≤τ_+} |ε_i|^q}

≤ c_q{|x|^q + (1 − |β|)^{−1}E_x Σ_{i=1}^{τ_+} |ε_i|^q}

= c_q{|x|^q + (1 − |β|)^{−1}E|ε_1|^q E_x τ_+}.

It follows that sup_x E_x(exp{θS_{τ_+}}w(X_{τ_+}))/w(x) < ∞. By using the same argument and applying Wald's equation for Markov chains in Lemma 1, it follows that the moment condition C3 also holds for the ladder chain. Since

S_0 = 0 and sup_x E_x exp{θS_{τ_+}}/w(x) < ∞, C4 also holds for the kernel P_+ if the initial distribution ν satisfies ∫_{−∞}^{∞} |x| dν(x) < ∞. Hence C1–C6 are satisfied by the ladder random walk with transition kernel P_+ when the underlying chain is the random coefficient model (3.1) and {ξ_n} satisfies (3.4), (3.6) and (3.14).

Under the normality assumption on (β_k, ε_k) with known σ², the log-likelihood ratio Z_n for testing H_0: β ≤ μ_0 against H_1: β > μ_0 is given by

(3.15) Z_n = (1/2)Σ_{i=1}^n ((X_i − μ_0 X_{i−1})² − (X_i − μ_1 X_{i−1})²),

where μ_1 is so chosen that μ_1 > μ_0. Define the stopping time T_λ = inf{n ≥ 1 : Z_n ≥ λ}. Given m > 0, we consider the test of H_0: β ≤ μ_0 against H_1: β > μ_0 defined as follows: stop sampling at min(T_λ, m); reject H_0 if T_λ ≤ m, and otherwise do not reject H_0.

Under the stability assumption β² + σ² < 1, consider Y_n = (X_{n−1}, X_n) as the underlying Markov chain. Suppose the conditional distribution of ξ_n = (X_n − μ_0 X_{n−1})² − (X_n − μ_1 X_{n−1})² given Y_0, ..., Y_n is of the form F_{Y_{n−1},Y_n} such that

(3.16) lim sup_{|θ|→∞} ∫_{X×X} ∫_{−∞}^{∞} ∫_{−∞}^{∞} |∫_{−∞}^{∞} e^{iθξ} dF_{y,βy+z}(ξ)| q(β, z) dβ dz π_1(dy) < 1,

where π_1 is the stationary distribution of {Y_n}. Let S_n = Σ_{i=0}^n ξ_i; then {(Y_n, S_n), n ≥ 0} is strongly nonlattice. It is easy to see that there exists ε > 0 such that inf_x P_{π_1}{ξ_1 ≤ −ε | X_1 = x} > 0. The rest of the argument is the same as (3.4)–(3.14) and is omitted.

3.2. Products of random matrices. In this section, we apply the renewal theory in Section 2 to generalize some results of Kesten (1973) on products of random matrices in three directions. First, while Kesten considered products of i.i.d. matrices M_n, we work with the more general setting in which {(X_n, M_n), n ≥ 0} are products of Markov random matrices. Second, we provide uniform renewal theory, with polynomial and exponential rates of convergence, respectively. This extension enables us to apply corrected diffusion approximations for the first passage probabilities. Third, while Kesten assumed the entries of M_n to be positive with probability 1, we can dispense with this assumption. Moreover, our proof is considerably simpler and provides a more transparent description of the basic constants that appear in his results.

We shall consider k × k nonsingular matrices M, with real entries, and define the norm of M by ‖M‖ = sup_{|x|=1}|Mx|, where |·| is a norm in R^k. Following Bougerol (1988), define χ(M) = max(log ‖M‖, log ‖M^{−1}‖). Let {(X_n, M_n), n ≥ 0} be a Markov chain satisfying the following assumptions:

(A1) M_n is a k × k nonsingular matrix with real entries such that sup_x E{exp(aχ(M_1)) | X_0 = x} < ∞ for some a > 0.

(A2) {X_n, n ≥ 0} is a w-uniformly ergodic Markov chain and satisfies C7.

(A3) {(X_n, M_n), n ≥ 0} is quasi-irreducible and γ_1 ≠ γ_2, where γ_1 ≥ γ_2 ≥ ··· ≥ γ_k denote its Lyapunov exponents.

For the definition of "quasi-irreducibility," see Bougerol [(1988), page 199]. For the definition and basic properties of Lyapunov exponents, see Bougerol and Lacroix [(1985), Sections III.5 and III.6] and Bougerol [(1988), pages 197 and 198]. Let M_0 be the k × k identity matrix and define the product

(3.17) Π_n = M_n ··· M_0.

Let u be a unit column vector in R^k. Since Π_n is nonsingular, Π_n u ≠ 0 and

(3.18) log |Π_n u| = Σ_{t=1}^n ξ_t, where ξ_t = log(|Π_t u|/|Π_{t−1} u|).

Let Y_0 = (X_0, u), ..., Y_n = (X_n, Π_n u/|Π_n u|). Define ξ_t as in (3.18), and let S_n = Σ_{t=1}^n ξ_t. Then it follows from (3.17) that {(Y_n, S_n), n ≥ 0} is a Markov random walk. Under (A1)–(A3), Bougerol (1988) has shown that Y_n has an invariant measure and is uniformly ergodic with respect to the Hölder continuous norm [see Definitions 3 and 4 of Bougerol (1988)]. Moreover, in view of (A1), conditions C3 and C4 are satisfied for every r > 1. Furthermore, condition C6 holds. Assuming ξ_1 to be strongly nonlattice and conditionally strongly nonlattice, we can therefore apply the renewal theorems in Section 2 to the Markov random walk {(Y_n, S_n), n ≥ 0}, thereby both generalizing Kesten's (1973) renewal theory for products of i.i.d. matrices with positive entries and providing convergence rates in the renewal theorems. The mean value μ in these theorems is equal to the upper Lyapunov exponent γ_1. Let S denote the sphere consisting of unit column vectors in R^k. For u ∈ S, define the stopping time

(3.19) N(b) = inf{n ≥ 1 : |Π_n u| > e^b} = inf{n ≥ 1 : S_n > b}, inf ∅ = ∞.
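The decomposition (3.18) also gives a numerically stable way to simulate S_n = log |Π_n u| and the stopping time N(b) of (3.19): update only the direction Y_n = Π_n u/|Π_n u| and accumulate the increments ξ_t, instead of forming the (exponentially growing) product Π_n. The sketch below uses i.i.d. 2 × 2 expanding matrices as an illustrative sampler; nothing about the specific matrix law is taken from the text.

```python
import math
import random

def matrix_walk(b, mats_sampler, u, max_steps=10_000):
    """Additive functional (3.18) for products of 2x2 matrices:
    xi_t = log(|Pi_t u| / |Pi_{t-1} u|), S_n = log |Pi_n u|, and the first
    passage time N(b) = inf{n >= 1 : S_n > b} from (3.19).
    Only the unit direction Y_n is carried along, so the computation
    never overflows even when |Pi_n u| grows exponentially."""
    y, s = u, 0.0
    for n in range(1, max_steps + 1):
        m = mats_sampler()
        v = (m[0][0]*y[0] + m[0][1]*y[1], m[1][0]*y[0] + m[1][1]*y[1])
        norm = math.hypot(v[0], v[1])
        s += math.log(norm)          # xi_n = log |M_n y| since |y| = 1
        y = (v[0]/norm, v[1]/norm)
        if s > b:
            return n, s
    return None, s

rng = random.Random(2)

def sampler():
    # illustrative i.i.d. diagonally dominant expanding matrices
    return ((1.2 + 0.1*rng.random(), 0.1), (0.1, 1.1))

n_b, s_n = matrix_walk(5.0, sampler, (1.0, 0.0))
```

Averaging S_n/n over long runs estimates the upper Lyapunov exponent γ_1, which is the drift μ appearing in the renewal theorems above.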

Suppose γ_1 ≥ 0. Since sup_{x∈X,u∈S} E(|ξ_1|^r | X_0 = x) < ∞ in view of (A1), Theorem 2 in Section 2 can be applied to show that (2.9) and (2.10) hold for N(b) as b → ∞. This generalizes Theorem 2 of Kesten (1973), which considers the case of i.i.d. M_n with positive entries. We next consider the case γ_1 < 0 and assume in addition that, for some 0 < a* < a [where a is given in (A1)],

(A4) inf_{x∈X,u∈S} E{|M_1 u|^{a*} | X_0 = x} ≥ k^{a*/2}.

For i.i.d. matrices M_n = (M_n(h, i))_{1≤h,i≤k} with positive entries, Kesten's (1973) Theorem 3 restricts u to the subset S^+ of S consisting only of vectors with nonnegative entries and assumes, among other conditions, that

(3.20) E{(min_{1≤i≤k} Σ_{h=1}^k M_1(h, i))^{a*}} ≥ k^{a*/2},

which implies (A4) with S replaced by S^+ [see the inequality preceding (2.66) in Kesten (1973)]. Under assumptions (A1) and (A2), if the assumption of quasi-irreducibility in (A3) is strengthened to "strong irreducibility" [see Section 5 of Bougerol (1988) for its definition and properties], then the tilting operator P_α, defined by

(P_α f)(x, u) = E{e^{αξ_1}f(Y_1) | Y_0 = (x, u)}

on the space L(α) of functions on X × S with the Hölder continuous norm, is well defined [if the induced Markov chain {(Y_n, S_n), n ≥ 0} is irreducible]; this follows from (A1)–(A4) and Proposition 1 in the Appendix. Let λ(α) and r((x, u); α) be the largest eigenvalue and associated eigenfunction defined as in (2.11)–(2.13). Therefore, the usual tilting argument shows that Theorem 3 of Kesten (1973) holds. In particular, taking B = e^b yields

(3.21) B P_x{max_{n≥1} |Π_n u| > B} → K r((x, u); α) as B → ∞,

where K = (e^{−b(Λ(α)−1)}/γ_1) ∫_{X×R^k} [1/r((x, u); α)] dπ_+(x, u).

4. Proof of Theorem 1. The ingredients we need to prove the uniform Markov renewal theorem over the family {(X_n^α, S_n^α), n ≥ 0 : α ∈ Γ} are provided by Lemmas 3 and 4. The proof of Lemma 3 depends on a uniform upper bound for the expectation of the overshoot, which we state and prove in Lemma 2. To prove Lemma 2, we need Wald's equation for Markov chains and moment convergence of the stopping time τ(b) defined in (1.2) for all b ≥ 0. These are included in Lemma 1. A version of Wald's equations for uniformly ergodic Markov random walks can be found in Fuh and Lai

(1998), where they applied the spectral theory of positive operators related to Markov semigroups. Fuh and Zhang (2000) first derived Poisson equations for Markov random walks and then applied them to establish Wald's equations. Here, under C1 and C2, we apply results on Harris recurrent Markov random walks to obtain the (first-order) Wald equation via the Poisson equation.

Lemma 1. Assume C1 and C2 and that $0 < \mu := E_\pi\xi_1 < \infty$. Let $\nu$ be an initial distribution of $X_0$, and let $T$ be a stopping time such that $E_\nu T < \infty$:

(i) If $\sup_x E_x(|\xi_1|) < \infty$, then
\[
E_\nu S_T = \mu E_\nu T + E_\nu\{\Delta(X_T) - \Delta(X_0)\}.
\]
The constant $E_\nu\{\Delta(X_T) - \Delta(X_0)\}$ is zero when $\nu = \pi$. Denote by $\xi_1^+$ ($\xi_1^-$) the positive (negative) part of $\xi_1$.

(ii) If $\sup_x E_x(\xi_1^-) < \infty$, then $E_\nu\tau(b) < \infty$.

(iii) Let $p\ge 1$. If $\sup_x E_x(\xi_1^-) < \infty$ and $\sup_x E_x(\xi_1^+)^p < \infty$, then $E_\nu S_{\tau(b)}^p < \infty$.

Proof. (i) The minorization condition C1 ensures that $\{(X_n,S_n),\,n\ge 0\}$ is a split chain [cf. Lemma 3.1 of Ney and Nummelin (1987)]. Under the irreducibility assumption, it is also a Harris recurrent Markov chain. Proposition 17.4.1 and Theorem 17.4.2 of Meyn and Tweedie (1993) give that the following Poisson equation
\[
\textup{(4.1)}\qquad E_x\Delta(X_1) - \Delta(x) = E_x\xi_1 - E_\pi\xi_1
\]
has a solution $\Delta:\mathcal{X}\to\mathbf{R}$ for almost every $x\in\mathcal{X}$. Under the assumption $\sup_x E_x(|\xi_1|) < \infty$, $E_\nu(\Delta(X_T) - \Delta(X_0))$ is finite. Therefore, by Corollary 1 and Theorem 4 of Fuh and Zhang (2000), we have the proof of (i).

To prove (ii), we choose $B > 0$ such that $\mu' := E_\pi(\xi_1(B)) > 0$, where $\xi_t(B) = \xi_t I(\xi_t\le B)$. Let $S_n' = \xi_1(B) + \cdots + \xi_n(B)$ and let $N_b = \inf\{n\ge 1 : S_n'\ge b\}$. Then $S_n'\le S_n$ and $N_b\ge\tau(b)$. For $m > 0$, apply (i) to $N_b\wedge m$; we have $E_\nu S'_{N_b\wedge m} = \mu' E_\nu(N_b\wedge m) + O(1)$ as $m\to\infty$. By the monotone convergence theorem, $\lim_{m\to\infty} E_\nu(N_b\wedge m) = E_\nu N_b$. Moreover, by the definition of $N_b$, $S'_{N_b\wedge m}\le b + B$ for all $m\ge 1$. Hence $b + B\ge\mu' E_\nu N_b - a$ for some $a > 0$, and therefore $\infty > E_\nu N_b\ge E_\nu\tau(b)$.

Finally, we prove (iii). Since $0\le S_{\tau(b)} < b + \xi_{\tau(b)}$, it follows from Minkowski's inequality that
\[
(E_\nu S_{\tau(b)}^p)^{1/p}\le b + \Biggl\{E_\nu\Biggl[\sum_{t=1}^{\tau(b)}\xi_t^+\Biggr]^p\Biggr\}^{1/p}.
\]
Since we have already shown that $E_\nu\tau(b) < \infty$, and $\sup_x E_x(\xi_1^+)^p < \infty$ by assumption, it follows from (i) that
\[
E_\nu\Biggl[\sum_{t=1}^{\tau(b)}(\xi_t^+)^p\Biggr]\le\Bigl(\sup_x E_x(\xi_1^+)^p\Bigr)E_\nu\tau(b) + O(1),
\]
proving the finiteness of $E_\nu S_{\tau(b)}^p$. $\square$
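The Poisson equation (4.1) is easy to exhibit concretely. The following sketch (a hypothetical two-state example with made-up numbers, not taken from the paper) solves (4.1) for a finite chain and checks both rows; the normalization $\Delta(1) = 0$ is an arbitrary choice, since solutions are unique only up to an additive constant.

```python
# Solve the Poisson equation E_x Delta(X_1) - Delta(x) = E_x xi_1 - E_pi xi_1
# on a two-state chain.  All numbers are illustrative choices.
P = [[0.7, 0.3],
     [0.4, 0.6]]                 # transition matrix
mean_xi = [1.0, 2.0]             # E_x xi_1 for x = 0, 1

# stationary distribution pi solving pi P = pi
pi0 = P[1][0] / (P[0][1] + P[1][0])
pi = [pi0, 1.0 - pi0]
mu = pi[0]*mean_xi[0] + pi[1]*mean_xi[1]   # stationary mean E_pi xi_1

# With the normalization Delta(1) = 0, row x = 0 of the Poisson equation
# reduces to the scalar equation (P[0][0] - 1) Delta(0) = mean_xi[0] - mu.
d0 = (mean_xi[0] - mu) / (P[0][0] - 1.0)
delta = [d0, 0.0]

# verify both rows: sum_y P[x][y] Delta(y) - Delta(x) = mean_xi[x] - mu
for x in range(2):
    lhs = sum(P[x][y]*delta[y] for y in range(2)) - delta[x]
    assert abs(lhs - (mean_xi[x] - mu)) < 1e-12
print(delta, mu)
```

The second row holds automatically because the right-hand side of the system is orthogonal to $\pi$, which is exactly why the Poisson equation is solvable.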

Lemma 2. Assume C1--C4 with $r = 2$ in C3. Suppose $\mu > 0$. For $b\ge 0$, let $R(b) = S_{\tau(b)} - b$. Then
\[
\textup{(4.2)}\qquad \sup_{b\ge 0} E_\pi R(b)\le\frac{E_\pi(\xi_1^+)^2}{E_\pi\xi_1}.
\]

When the initial distribution of $X_0$ is $\nu$, (4.2) becomes: there exists a constant $K > 0$ such that $\sup_{b\ge 0} E_\nu R(b)\le E_\pi(\xi_1^+)^2/E_\pi\xi_1 + K$.

Remark 7. In the case of simple random walks, the upper bound (4.2) was given in Lorden (1970) by pathwise integration.
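Lorden's bound (4.2) can be checked by simulation in the classical i.i.d. case. The sketch below (illustrative parameters; the $\mathrm{Exp}(1)$ increments are an assumption, not the paper's setting) estimates $E R(b)$ by Monte Carlo and compares it with $E\xi_1^2/E\xi_1 = 2$; for exponential increments the overshoot is exactly $\mathrm{Exp}(1)$ by memorylessness, so the estimate should be near 1, comfortably below the bound.

```python
import random

def mean_overshoot(b, n_paths=4000, seed=1):
    """Monte Carlo estimate of E R(b) = E(S_tau(b) - b) for an i.i.d.
    nonnegative random walk with Exp(1) increments (toy example)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        s = 0.0
        while s < b:
            s += rng.expovariate(1.0)   # xi ~ Exp(1): E xi = 1, E xi^2 = 2
        total += s - b                  # overshoot R(b)
    return total / n_paths

bound = 2.0 / 1.0                       # E xi^2 / E xi for Exp(1)
est = mean_overshoot(b=5.0)
print(est, bound)
```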

Proof of Lemma 2. For any values of $\xi_1,\xi_2,\ldots,$ the overshoot function $\{R(b);\,b\ge 0\}$ is piecewise linear, with all pieces having slope $-1$. We consider first the case where the $\xi$'s are nonnegative. It is easy to see that, for $c\ge 0$,
\[
\textup{(4.3)}\qquad \int_0^c R(b)\,db = \int_0^{S_{\tau(c)}} R(b)\,db - \int_c^{S_{\tau(c)}} R(b)\,db = \tfrac12\sum_{t=1}^{\tau(c)}\xi_t^2 - \tfrac12 R(c)^2.
\]
Since for $c\ge 0$, $E_\pi\tau(c)$ is finite by Lemma 1(ii), the sum in (4.3) has finite expectation by C3 and Wald's equation for Markov random walks, and since the other terms are nonnegative, they also have finite expectations. Since $R(b)\ge 0$ for all $b$, we have by Fubini's theorem and Wald's equation for Markov random walks in Lemma 1(i)
\[
E_\pi\int_0^c R(b)\,db = \tfrac12 E_\pi\xi_1^2\,E_\pi\tau(c) - \tfrac12 E_\pi R(c)^2,
\]
where $E_\pi\xi_1^2$ is finite via condition C3. Note that $E_\pi\{\Delta(X_{\tau(c)}) - \Delta(X_0)\} = 0$ in Wald's equation.

By Jensen's inequality and Wald's equation for Markov random walks,
\[
\textup{(4.4)}\qquad E_\pi\int_0^c R(b)\,db\le\tfrac12\mu^{-1}E_\pi\xi_1^2\,(c + E_\pi R(c)) - \tfrac12(E_\pi R(c))^2.
\]
It is easy to see that for all $b,u\ge 0$, $E_\pi\tau(b+u)\le E_\pi\tau(b) + E_\pi\tau(u)$, since the conditional expectation of $\tau(b+u) - \tau(b)$ given $\tau(b) = n$, $X_0, X_1,\xi_1,\ldots,X_n,\xi_n$ equals $E_\pi\tau(u-r)$, where $r = \xi_1 + \cdots + \xi_n - b > 0$ and $\tau(u-r)$ is zero if $r > u$, so that $E_\pi\tau(u-r)\le E_\pi\tau(u)$. It follows from Wald's equation for Markov random walks that $E_\pi R(b)$ is a subadditive function of $b$ and therefore
\[
\textup{(4.5)}\qquad \tfrac12 c\,E_\pi R(c)\le\tfrac12 c\inf_{0\le b\le c/2}(E_\pi R(b) + E_\pi R(c-b))\le\int_0^{c/2}(E_\pi R(b) + E_\pi R(c-b))\,db = \int_0^c E_\pi R(b)\,db.
\]

Combining (4.4) and (4.5) and rewriting, we obtain
\[
\textup{(4.6)}\qquad (E_\pi R(c))^2 + (c - E_\pi\xi_1^2/\mu)E_\pi R(c) - cE_\pi\xi_1^2/\mu\le 0.
\]
The left-hand side of (4.6) is a quadratic in $E_\pi R(c)$ which is nonpositive only between its roots, $-c$ and $E_\pi\xi_1^2/\mu$. Therefore $E_\pi R(c)\le E_\pi\xi_1^2/\mu$, and since $c$ is arbitrary, the proof is complete for the nonnegative case.

The case where $\xi_1,\xi_2,\ldots$ may take negative values reduces to the nonnegative case through consideration of the associated sequence of positive ladder variables, which forms the ladder Markov random walk $\{(X_{\tau_n},S_{\tau_n}),\,n\ge 0\}$ defined in the paragraph before Theorem 2. We first note that, by assumption, the ladder Markov chain is uniformly ergodic with respect to a given norm. Next, we need to verify that conditions C1--C4 still hold for the associated ladder Markov chains $\{(X_{\tau_n},S_{\tau_n}),\,n\ge 0\}$. It is known [cf. Theorem 1 of Alsmeyer (2000)] that if $\{(X_n,S_n),\,n\ge 0\}$ is Harris recurrent, then the associated ladder Markov chains $\{(X_{\tau_n},S_{\tau_n}),\,n\ge 0\}$ are also Harris recurrent. Under the irreducibility assumption, the minorization condition C1 is equivalent to Harris recurrence. Therefore the ladder Markov chain $\{(X_{\tau_n},S_{\tau_n}),\,n\ge 0\}$ satisfies C1. The moment conditions C2--C4 hold by Lemma 1(iii).

Since $R(b)$ is pointwise the same for $\xi_1,\xi_2,\ldots$ and the sequence of ladder variables, and $0 < E_{\pi_+}S_{\tau_+} < \infty$, it is known that as $b\to\infty$, $E_x(S_{\tau(b)} - b) = E_{\pi_+}S_{\tau_+}^2/2E_{\pi_+}S_{\tau_+} + o(1)$ uniformly in $x\in\mathcal{X}$ [cf. (3.20) of Fuh and Lai (2001)]. Therefore, the difference between $\sup_{b\ge 0}E_\nu R(b)$ and $\sup_{b\ge 0}E_\pi R(b)$ is bounded by a constant $K > 0$, and the proof is complete. $\square$
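The reduction to ladder variables used above rests on the pathwise fact that the first passage over $b$, and hence the overshoot $R(b)$, is the same whether computed from the walk or from its sequence of ascending ladder heights. A toy check (i.i.d. Gaussian increments with positive drift; all parameters are illustrative):

```python
import random

rng = random.Random(3)
n = 10000
xi = [rng.gauss(0.5, 1.0) for _ in range(n)]   # drift 0.5; steps may be negative

# partial sums S_1, ..., S_n
S, s = [], 0.0
for x in xi:
    s += x
    S.append(s)

# strict ascending ladder heights: successive running maxima of the walk
ladder, record = [], 0.0
for v in S:
    if v > record:
        ladder.append(v)
        record = v

def overshoot(path, b):
    """First-passage overshoot over level b along the given path."""
    for v in path:
        if v >= b:
            return v - b
    return None

b = 20.0
# R(b) from the walk equals R(b) from the ladder-height walk, path by path
assert overshoot(S, b) == overshoot(ladder, b)
print(overshoot(S, b))
```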

For $z\in\mathbf{C}$ and $\alpha\in\Gamma$, define the operators $\mathbf{P}_z^\alpha$, $\mathbf{P}$, $\nu^*$ and $\mathbf{Q}^\alpha$ on $\mathcal{N}$ as in (2.11). By (A.1)--(A.3) and Proposition 1 in the Appendix, we can define the eigenvalue $\lambda^\alpha(z)$ of the operator $\mathbf{P}_z^\alpha$.

Let $\mathcal{R}$ and $\mathcal{I}$ denote "real part of" and "imaginary part of," respectively. Denote $B = [s,s+h)$ and let
\[
\textup{(4.7)}\qquad U^{(\alpha,A)}(B) := \sum_{n=0}^\infty P_\nu^\alpha\{s\le S_n^\alpha\le s+h,\ X_n^\alpha\in A\}
\]
be the renewal measure for each $\alpha\in\Gamma$.

Lemma 3. Assume the conditions of Theorem 1 hold. For $k\ge 1$, let $\mu_k^\alpha := E_\pi(\xi_1^\alpha)^k > 0$ and $\eta_k^\alpha := \sup_x E_x(\xi_1^\alpha)^k > 0$. Then, for each positive integer $k$, there exist $\mu_{k*},\eta_{k*} > 0$ and $\mu_k^*,\eta_k^* < \infty$ such that
\[
\textup{(4.8)}\qquad \mu_{k*}\le\inf_{\alpha\in\Gamma}\mu_k^\alpha\le\sup_{\alpha\in\Gamma}\mu_k^\alpha\le\mu_k^*\quad\text{and}\quad \eta_{k*}\le\inf_{\alpha\in\Gamma}\eta_k^\alpha\le\sup_{\alpha\in\Gamma}\eta_k^\alpha\le\eta_k^*.
\]

Also, there exist $r_1 > 0$ and $\delta > 0$ such that, for all $z$ with $\mathcal{R}(z)\le r_1$ and $|\mathcal{I}(z)| < \delta$, and for each positive integer $k$, there exists $v_k^* < \infty$ such that, for all $\alpha\in\Gamma$,
\[
\Bigl|\frac{d^k}{dz^k}\lambda^\alpha(z)\Bigr|\le v_k^*.
\]

Finally, there exists $C$ such that, for all $\alpha\in\Gamma$, $s\ge 0$ and $h\le 2$ in (4.7),
\[
U^{(\alpha,A)}(B)\le C.
\]

Proof. To prove (4.8), we only consider the first part, since the second part can be proved in a similar way. By the assumption of uniformly strong nonlattice in the form (2.3), we have for all $\theta > 0$
\[
\tilde g(\theta) := \inf_{\alpha\in\Gamma}\bigl|1 - E_\pi e^{i\theta\xi_1^\alpha}\bigr|\ge g(\theta) > 0,
\]
so that, using the fact that $|e^{it} - 1|\le|t|$ for all real $t$, we obtain for all $\alpha\in\Gamma$ and $\theta > 0$ that
\[
0 < g(\theta)\le\int_{x,y\in\mathcal{X}}\int_{[0,\infty)}|e^{i\theta s} - 1|\,P^\alpha(x,dy\times ds)\,\pi^\alpha(dx)\le\int_{x,y\in\mathcal{X}}\int_{[0,\infty)}\theta s\,P^\alpha(x,dy\times ds)\,\pi^\alpha(dx) = \theta\mu_1^\alpha,
\]
where $P^\alpha(\cdot,\cdot\times\cdot)$ denotes the transition probability of $\{(X_n^\alpha,S_n^\alpha),\,n\ge 0\}$, and $\pi^\alpha(\cdot)$ denotes the invariant probability of $\{(X_n^\alpha,S_n^\alpha),\,n\ge 0\}$. This implies that $\inf_{\alpha\in\Gamma}\mu_1^\alpha\ge\sup_{\theta>0}g(\theta)/\theta > 0$. Hence we get the existence of $\mu_{1*}$. The existence of $\mu_{k*}$ for positive integers $k$ now follows from Jensen's inequality. For the upper bound in (4.8), note that since $e^t > t^k/k!$ for all $t > 0$, we have by K6 that

\[
\mu_k^\alpha = \int_{x,y\in\mathcal{X}}\int_{[0,\infty)} s^k\,P^\alpha(x,dy\times ds)\,\pi^\alpha(dx)\le\frac{k!}{r_1^k}\int_{x,y\in\mathcal{X}}\int_{[0,\infty)} e^{r_1 s}\,P^\alpha(x,dy\times ds)\,\pi^\alpha(dx)\le\frac{k!\,C}{r_1^k}
\]
for all $\alpha\in\Gamma$.

To prove the second assertion, note that by Proposition 1 and the Cauchy--Schwarz inequality, for $\mathcal{R}(z)\le r_1/2$ and $|\mathcal{I}(z)| < \delta$, there exists $c > 0$ such that, for all $\alpha\in\Gamma$,
\[
\Bigl|\frac{d^k}{dz^k}\lambda^\alpha(z)\Bigr|\le|E_\pi((\xi_1^\alpha)^k e^{z\xi_1^\alpha})| + c\le E_\pi((\xi_1^\alpha)^k e^{r_1\xi_1^\alpha/2}) + c
\le[E_\pi((\xi_1^\alpha)^{2k})\,E_\pi(e^{r_1\xi_1^\alpha})]^{1/2} + c\le(\mu_{2k}^* C)^{1/2} + c := v_k^*.
\]
The final assertion can be proved by using Lemma 1(i), Lemma 2 and (4.8). Let $A = \mathcal{X}$ for simplicity; then for all $\alpha\in\Gamma$, $s\ge 0$ and $h\le 2$, there exists $C_1 < \infty$ such that
\begin{align*}
U^{(\alpha,\mathcal{X})}(B) &\le \frac{1}{E_\pi^\alpha\xi_1^\alpha}[E_\nu^\alpha S_{\tau(s+h)}^\alpha - E_\nu^\alpha S_{\tau(s)}^\alpha] + C_1
= \frac{1}{E_\pi^\alpha\xi_1^\alpha}[h + E_\nu^\alpha R(s+h) - E_\nu^\alpha R(s)] + C_1\\
&\le \frac{1}{\mu_{1*}}\Bigl[2 + \sup_{\alpha\in\Gamma}\sup_{0\le s<\infty} E_\nu^\alpha R(s)\Bigr] + C_1
\le \frac{1}{\mu_{1*}}\Bigl[2 + \sup_{\alpha\in\Gamma}\frac{E_\pi(\xi_1^\alpha)^2}{E_\pi\xi_1^\alpha} + K\Bigr] + C_1
\le \frac{1}{\mu_{1*}}\Bigl[2 + \frac{\mu_2^*}{\mu_{1*}} + K\Bigr] + C_1 := C.\qquad\square
\end{align*}

Lemma 4. Assume the conditions of Theorem 1 hold. Then there exist $r_1 > 0$, $\delta > 0$ and $\varepsilon > 0$ such that, for all $z$ satisfying $0 < \mathcal{R}(z)\le r_1$ and $|\mathcal{I}(z)|\le\delta$, and for all $\alpha\in\Gamma$, $\lambda^\alpha(z)\ne 1$; and for all $z$ satisfying $\mathcal{R}(z) = r_1$ and $|\mathcal{I}(z)|\le\delta$, and for all $\alpha\in\Gamma$,
\[
\textup{(4.9)}\qquad |\lambda^\alpha(z) - 1|\ge\varepsilon.
\]

Proof. Let $\mu_k^\alpha := E_\pi(\xi_1^\alpha)^k$. By integration by parts and Proposition 1,
\[
\lambda^\alpha(z) = 1 + \mu_1^\alpha z + \int_0^z t\,\lambda^{\alpha(2)}(z-t)\,dt
\]
at least for all $z$ such that $\mathcal{R}(z) < r_1$ and $|\mathcal{I}(z)|\le\delta$, where $^{(2)}$ denotes the second derivative. Therefore, for all $z$ in the set $S := \{z : \mathcal{R}(z)\le r_1/2,\ |\mathcal{I}(z)|\le\delta,\ |z|\le\mu_{1*}/v_2^*\}$, where $\mu_{1*}, v_2^*$ are defined in Lemma 3, and all $\alpha\in\Gamma$, we have
\[
|\lambda^\alpha(z) - 1|\ge|\mu_1^\alpha z| - \Bigl|\int_0^z t\,\lambda^{\alpha(2)}(z-t)\,dt\Bigr|
\]

\[
\ge\mu_{1*}|z| - \frac{v_2^*}{2}|z|^2\ge\frac{\mu_{1*}}{2}|z|.
\]

Take $\varepsilon > 0$ such that the square $S_\varepsilon := \{\mathcal{R}(z)\le\varepsilon,\ |\mathcal{I}(z)|\le\varepsilon\}$ is contained in the set $S$; for example, $\varepsilon = r_1/2\wedge\delta\wedge\mu_{1*}/(\sqrt2\,v_2^*)$ will do. Then
\[
\textup{(4.10)}\qquad |\lambda^\alpha(z) - 1|\ge\frac{\mu_{1*}}{2}|z|\qquad\text{for all } z\in S_\varepsilon.
\]
By the assumption that $\{S_n^\alpha : \alpha\in\Gamma\}$ is uniformly strong nonlattice and Proposition 1, we have, for all $|\theta| < \delta$, $\lambda^\alpha(i\theta) = E_\pi|e^{i\theta\xi_1^\alpha}| + O(|\theta|)$; this implies that $|\lambda^\alpha(i\theta) - 1|\ge g(\varepsilon) > 0$ for all $|\theta|\ge\varepsilon$ and all $\alpha\in\Gamma$. Take $r := g(\varepsilon)(2v_1^*)^{-1}\wedge\varepsilon > 0$. Then, for all $0\le u\le r_1\wedge r$, $\delta > |\theta|\ge\varepsilon$ and $\alpha\in\Gamma$,
\[
|\lambda^\alpha(u+i\theta) - 1|\ge|\lambda^\alpha(i\theta) - 1| - |\lambda^\alpha(u+i\theta) - \lambda^\alpha(i\theta)|\ge g(\varepsilon) - \Bigl|\int_{i\theta}^{u+i\theta}\lambda^{\alpha(1)}(z)\,dz\Bigr|\ge g(\varepsilon) - uv_1^*
\ge g(\varepsilon) - \frac{g(\varepsilon)}{2v_1^*}v_1^* = \frac{g(\varepsilon)}{2}.
\]
Furthermore, (4.10) implies that $|\lambda^\alpha(u+i\theta) - 1|$ is positive for all $0 < u\le r$, $|\theta|\le\varepsilon$ and $\alpha\in\Gamma$, and $|\lambda^\alpha(r+i\theta) - 1|$ is at least $\mu_{1*}r/2$ for all $|\theta|\le\varepsilon$. Thus, taking $\delta := g(\varepsilon)/2\wedge\mu_{1*}r/2 > 0$, the lemma is proved. $\square$

Since the rate of convergence in the uniform Markov renewal theorem in Theorem 1 can be proved for each $\alpha\in\Gamma$, with the uniformity in $\alpha\in\Gamma$ obtained by appealing to Lemmas 3 and 4 when necessary, we present the proofs omitting $\alpha$ for simplicity. Let $B = [s,s+h)$, and recall that $U^{(\alpha,A)}(B)$ defined in (4.7) is the renewal measure. For simplicity, we delete $\alpha$ and denote
\[
\textup{(4.11)}\qquad U^{(A)}(B) := \sum_{n=0}^\infty P_\nu\{s\le S_n\le s+h,\ X_n\in A\}
\]
as the renewal measure of $\{(X_n,S_n),\,n\ge 0\}$.

To prove Theorem 1, we evaluate the Fourier transform of the renewal measure $U^{(A)}$. As in Carlsson (1983) and Carlsson and Wainger (1982), we perform Fourier inversion of the Fourier transform as a generalized function. We refer the reader to Gelfand and Shilov (1964), Schwartz (1966) and Strichartz (1994) for the basic theory; in particular, the following notation and concepts will be used. A test function $\varphi(s)$ is an infinitely differentiable function that vanishes outside a bounded region in $\mathbf{R}$. Let $\mathcal{D}$ denote the linear space of all test functions, and $\mathcal{D}'$ the space of linear functionals on $\mathcal{D}$. A sequence $\varphi_n\in\mathcal{D}$ is said to converge to zero if $\varphi_n$ and all its derivatives converge to 0 uniformly and vanish outside a common bounded subset of $\mathbf{R}$. A generalized function is a continuous linear functional on $\mathcal{D}$. A function $f$ defined on $\mathbf{R}$ for which $\int f(s)\varphi(s)\,ds$ is absolutely convergent for any $\varphi\in\mathcal{D}$ is called locally integrable. A $C^\infty$ function $f$ on $\mathbf{R}$ is of class $\mathcal{T}$ if $f$ and all its partial derivatives are rapidly decreasing in the sense that they are of order $O(|s|^{-a})$ as $|s|\to\infty$, for every $a > 0$. Linear functionals on $\mathcal{T}$ are called tempered distributions, and $\mathcal{T}'$ denotes the set of all tempered distributions.

The Fourier transform $\hat\varphi$ of a function $\varphi\in\mathcal{D}$ is defined by $\hat\varphi(\theta) = \int\varphi(s)\exp(i\theta s)\,ds$ for $\theta\in\mathbf{R}$. The Fourier transform of a generalized function $f$ is the linear functional $\hat f$ defined on the space $\{\psi : \psi\text{ is the Fourier transform of some }\varphi\in\mathcal{D}\}$ by $(2\pi)(\hat f,\hat\varphi) = (f,\varphi)$ for all $\varphi\in\mathcal{D}$.

As in renewal theory for simple random walks, the proof of Theorem 1 requires detailed analysis of the characteristic function of the additive component $S_n$. The analysis can be decomposed into two parts: $|\theta|$ near zero and $|\theta|$ away from zero. The rate of convergence of the renewal measure to the Lebesgue measure scaled by the mean is given by the analysis of $|\theta|$ near zero. The contribution of $|\theta|$ away from zero is negligible via the property of local integrability. That is, we need to show that there exists a $\delta > 0$ such that, for $\theta\in\mathbf{R}$ with $|\theta| > \delta$, $\sum_{n=0}^\infty E_\pi(e^{i\theta S_n})$ and its $k$th derivatives with respect to $\theta$ are locally integrable for $k = 1,2,\ldots.$ By using $k$th-order integration by parts, we thus need to show the following lemma.

Lemma 5. Suppose $\{(X_n,S_n),\,n\ge 0\}$ is a strongly nonlattice Markov random walk satisfying (2.4), C1--C4 and C6. Then for every $c > 0$, for any $r\ge 2$ and $k = 0,1,\ldots,r$,
\[
\textup{(4.12)}\qquad \sup_{|\theta|>c}\sum_{n=0}^\infty|E_\pi(e^{i\theta S_n}S_n^k)| < \infty.
\]

Proof. For $k = 0,\ldots,r$ and for any $x\in\mathcal{X}$, we have
\[
|E_x(S_n^k e^{i\theta S_n})|\le\sum|E_x(\xi_{j_1}\cdots\xi_{j_k}e^{i\theta S_n})|,
\]
where the summation extends over $j_1,\ldots,j_k\in\{1,\ldots,n\}$. There are $n^k$ terms. We shall give upper bounds for each term. Fix $j_1^0,\ldots,j_k^0\in\{1,\ldots,n\}$, and a natural number $m$ to be determined later. Let
\[
J = \{j\in\{1,\ldots,n\} : |j - j_p^0|\ge 3m,\ p = 1,\ldots,k\}.
\]
Divide $J$ into blocks $A_1,B_1,\ldots,A_l,B_l$ as follows: define $j_1,\ldots,j_l$ by
\[
j_1 = \inf J\quad\text{and}\quad j_{p+1} = \inf\{j\ge j_p + 7m : j\in J\}
\]
and let $l$ be the smallest integer for which the inf is undefined. Write

\begin{align*}
A_p &= \prod\{e^{i\theta n^{-1/2}\xi_j} : |j - j_p|\le m\},\qquad p = 1,\ldots,l,\\
B_p &= \prod\{e^{i\theta n^{-1/2}\xi_j} : j_p + m + 1\le j\le j_{p+1} - m - 1\},\qquad p = 1,\ldots,l-1,\\
B_l &= \prod\{e^{i\theta n^{-1/2}\xi_j} : j > j_l + m + 1\},\\
R &= (\xi_{j_1^0}\cdots\xi_{j_k^0})\prod\{e^{i\theta\xi_j} : j\notin J\}.
\end{align*}
Then
\[
\xi_{j_1^0}\cdots\xi_{j_k^0}e^{i\theta S_n} = \Bigl(\prod_1^l A_pB_p\Bigr)R.
\]
We have
\begin{align*}
\textup{(4.13)}\quad
&\Bigl|E_x R\prod_1^l A_pB_p - E_x R\prod_1^l B_p\,E(A_p\mid\xi_j : j\ne j_p)\Bigr|\\
&\qquad\le\sum_{q=1}^l\Bigl|E_x R\prod_1^{q-1}A_pB_p\,(A_q - E(A_q\mid\xi_j : j\ne j_q))\prod_{q+1}^l B_p\,E(A_p\mid\xi_j : j\ne j_p)\Bigr|\\
&\qquad\le\sum_{q=1}^l\Bigl|E_x R\prod_1^{q-1}A_pB_p\,(A_q - E(A_q\mid\xi_j : j\ne j_q))\prod_{q+1}^l B_p\,E(A_p\mid\xi_j : 0 < |j - j_p|\le 3m)\Bigr|.
\end{align*}
The first summation term in (4.13) vanishes since
\[
R\prod_1^{q-1}A_pB_p\quad\text{and}\quad\prod_{q+1}^l B_p\,E(A_p\mid\xi_j : 0 < |j - j_p|\le 3m)
\]
are both measurable with respect to the $\sigma$-field generated by $\{\xi_j : j\ne j_q\}$. Recall that the functions
\[
E(A_p\mid\xi_j : 0 < |j - j_p|\le 3m)\qquad\text{for } p = 1,\ldots,l
\]
are weakly dependent since $j_{p+1} - j_p\ge 7m$, $p = 1,\ldots,l-1$. Using condition C1 we obtain
\[
\Bigl|E_x R\prod_1^l B_p\,E(A_p\mid\xi_j : 0 < |j - j_p|\le 3m)\Bigr|\le(2n^\beta)^r E_x\prod_1^l\bigl|E(A_p\mid\xi_j : 0 < |j - j_p|\le 3m)\bigr|
\]


\[
\le(2n^\beta)^r E_x\prod_1^l\bigl|E(A_p\mid\xi_j : 0 < |j - j_p|\le 3m)\bigr| + (2n^\beta)^r l\cdot 4d^{-1}e^{-dm}.
\]
With the strong nonlattice condition (2.3), the conditional strong nonlattice condition (2.4) and Lemma 2 in Statulevicius (1969), we find an upper bound for $E_x|E(A_p\mid\xi_j : 0 < |j - j_p|\le 3m)|$. We have, for $|\theta|\ge\delta$, the relation $E_x|E(A_p\mid\xi_j : j\ne j_q)|\le e^{-\delta}$, and hence by (2.4), for all $\theta\in\mathbf{R}$ with $|\theta|\le\delta$,
\[
E_x|E(A_p\mid\xi_j : j\ne j_q)|\le\exp(-\delta|\theta|^2/n).
\]
Therefore, for all $\theta\in\mathbf{R}$,
\[
E_x|E(A_p\mid\xi_j : 0 < |j - j_p|\le 3m)|\le E_x|E(A_p\mid\xi_j : j\ne j_q)|\le\max(\exp(-\delta|\theta|^2/n),\,e^{-\delta}).
\]
If we choose $K$ appropriately and let $m$ be the integral part of $K\log n$, then the assertion of the lemma follows from
\[
\exp(-\delta|\theta|^2/n)^{n/m}\le\exp(-\delta|\theta|^2/(K\log n))\le\exp(-\delta' n^\varepsilon/2)
\]
for $|\theta|\ge cn^\varepsilon$ and some $\delta' > 0$. $\square$

Proof of Theorem 1. By using the same argument as that in Theorem 2.4 of Fuh and Lai (2001), we have (2.5) and (2.6). The details are omitted.

To prove (2.7), let $g(s)\in C^\infty$ have support in $\Omega = \{s : |s|\le 1\}$. Let $g_\varepsilon(s) = \varepsilon g(s/\varepsilon)$, let $I_A$ be the indicator function of the set $A$ and let $\Omega_c = \{s : |s|\le c\}$ for $c > 0$. Let $L$ be the measure with density $ds/\mu$. As in Carlsson and Wainger (1982), we have that
\[
\textup{(4.14)}\qquad g_\varepsilon * I_{\Omega_{1-\varepsilon}} * (U^{(A)} - L)(s) - K\varepsilon\le(U^{(A)} - L)(\Omega + s)\le g_\varepsilon * I_{\Omega_{1+\varepsilon}} * (U^{(A)} - L)(s) + K\varepsilon,
\]
where $K$ can be chosen uniformly for $c$ bounded. Letting $\varepsilon = s^{-k}$ with $k$ large enough, we have (2.7) if we can show that there exists $r > 0$ such that
\[
\textup{(4.15)}\qquad |g_\varepsilon * I_{\Omega_c} * (U^{(A)} - L)(s)|\le Ke^{-rs}.
\]

To prove (4.15), we consider the Fourier transform
\[
\textup{(4.16)}\qquad \psi(\theta) = \sum_{n=0}^\infty E_\nu\{e^{i\theta S_n}h_1(X_n)\} = \sum_{n=0}^\infty\lambda^n(\theta)\,\nu_*\mathbf{Q}_\theta h_1 + \sum_{n=0}^\infty\nu_*\mathbf{P}_\theta^n(\mathbf{I} - \mathbf{Q}_\theta)h_1,
\]
where $\mathbf{P}_\theta,\mathbf{Q}_\theta$ are defined in (2.11) with $z = i\theta$, and $h_1 := I_A$. Note that the second equation in (4.16) follows from Proposition 1(i). As in Carlsson and Wainger (1982), there exists a $\delta > 0$ such that, for $|\theta| < \delta$,
\[
\textup{(4.17)}\qquad \psi(\theta) = \Bigl(\frac{1}{1-\lambda(\theta)} + \frac{\pi}{\mu}\delta(\theta)\Bigr)\nu_*\mathbf{Q}_\theta h_1 + \sum_{n=0}^\infty\nu_*\mathbf{P}_\theta^n(\mathbf{I} - \mathbf{Q}_\theta)h_1,
\]
where $\delta(\cdot)$ denotes the Dirac delta function. By Fourier inversion of the generalized function, we have
\begin{align*}
g_\varepsilon * I_{\Omega_c} * U^{(A)}(s) &= \frac{1}{2\pi}\int e^{-i\theta s}\hat g_\varepsilon(\theta)\hat I_{\Omega_c}(\theta)\psi(\theta)\,d\theta\\
&= \frac{\pi}{\mu}g(0) + \frac{1}{2\pi}\int_{0<|\theta|<\delta} e^{-i\theta s}\frac{\hat g_\varepsilon(\theta)\hat I_{\Omega_c}(\theta)}{1-\lambda(\theta)}\,\nu_*\mathbf{Q}_\theta h_1\,d\theta\tag{4.18}\\
&\quad+\frac{1}{2\pi}\int_{0<|\theta|<\delta} e^{-i\theta s}\hat g_\varepsilon(\theta)\hat I_{\Omega_c}(\theta)\sum_{n=0}^\infty\nu_*\mathbf{P}_\theta^n(\mathbf{I} - \mathbf{Q}_\theta)h_1\,d\theta\tag{4.19}\\
&\quad+\frac{1}{2\pi}\int_{|\theta|>\delta} e^{-i\theta s}\hat g_\varepsilon(\theta)\hat I_{\Omega_c}(\theta)\sum_{n=0}^\infty E_\nu\{e^{i\theta S_n}h_1(X_n)\}\,d\theta,\tag{4.20}
\end{align*}
where $\hat g_\varepsilon$ denotes the Fourier transform of $g_\varepsilon$ and $\hat I_{\Omega_c}$ the Fourier transform of $I_{\Omega_c}$.

Equation (4.18) can be analyzed by making use of the Taylor expansion of $\lambda(\theta)$ as in Proposition 1, which says that for any $r\ge 2$, as $|\theta|\to 0$,
\begin{align*}
\Bigl(\frac{1}{|\theta|}(\lambda(\theta) - 1)\Bigr)^{(k)} &= O(1)\qquad\text{for } k\le r-1,\\
\Bigl(\frac{1}{|\theta|}(\lambda(\theta) - 1)\Bigr)^{(r)} &= o\Bigl(\frac{1}{|\theta|}\Bigr),\\
\Bigl(\frac{1}{|\theta|^2}(\lambda(\theta) - 1 + i\theta\mu)\Bigr)^{(k)} &= O(1)\qquad\text{for } k\le r-2,\\
\Bigl(\frac{1}{|\theta|^2}(\lambda(\theta) - 1 + i\theta\mu)\Bigr)^{(k)} &= o(|\theta|^{r-k-2})\qquad\text{for } k = r-1,\,r,
\end{align*}

where $^{(k)}$ denotes the $k$th derivative. Also $\nu_*\mathbf{Q}_\theta h_1$ and its $k$th derivatives converge to 0 as $|\theta|\to 0$. Next, we want to verify that the rate of convergence in (4.18) is $O(e^{-rs})$ for some $r > 0$ as $s\to\infty$. Note that
\[
\textup{(4.21)}\qquad \frac{1}{2\pi}\int_{0<|\theta|<\delta}\mathcal{R}\{e^{-i\theta s}\hat g_\varepsilon(\theta)\hat I_{\Omega_c}(\theta)\,\nu_*\mathbf{Q}_\theta h_1\,(i/\mu\theta)\}\,d\theta = \frac{g(0)}{2\mu} + O(e^{-rs})
\]
as $s\to\infty$. Consider
\begin{align*}
&\frac{1}{2\pi}\int_{0<|\theta|<\delta} e^{-i\theta s}\hat g_\varepsilon(\theta)\hat I_{\Omega_c}(\theta)\,\nu_*\mathbf{Q}_\theta h_1\Bigl(\frac{1}{1-\lambda(i\theta)} - \frac{i}{\mu\theta}\Bigr)d\theta\\
\textup{(4.22)}\qquad&\quad= \frac{1}{2\pi}\lim_{\varepsilon\to 0}\biggl\{\int_{-\delta<\theta<-\varepsilon} e^{-i\theta s}\hat g_\varepsilon(\theta)\hat I_{\Omega_c}(\theta)\,\nu_*\mathbf{Q}_\theta h_1\Bigl(\frac{1}{1-\lambda(i\theta)} - \frac{i}{\mu\theta}\Bigr)d\theta\\
&\qquad\quad+\int_{\varepsilon<\theta<\delta} e^{-i\theta s}\hat g_\varepsilon(\theta)\hat I_{\Omega_c}(\theta)\,\nu_*\mathbf{Q}_\theta h_1\Bigl(\frac{1}{1-\lambda(i\theta)} - \frac{i}{\mu\theta}\Bigr)d\theta\biggr\}.
\end{align*}
Let $0 < u_1\in\Theta$, where $\Theta$ is defined in K6. For any $u\in(0,u_1)$ and $z = u + i\theta\in\mathbf{C}$, consider the four lines $L_1(\varepsilon) = \{z : \mathcal{R}(z) = 0,\ \varepsilon\le|\mathcal{I}(z)|\le\delta\}$, $L_2 = \{z : \mathcal{R}(z)\in[0,u],\ \mathcal{I}(z) = \delta\}$, $L_3 = \{z : \mathcal{R}(z)\in[0,u],\ \mathcal{I}(z) = -\delta\}$, $L_4 = \{z : \mathcal{R}(z) = u,\ |\mathcal{I}(z)|\le\delta\}$, and one semicircle, $L_5(\varepsilon)$, from $-\varepsilon$ to $\varepsilon$, oriented clockwise. Define
\[
\textup{(4.23)}\qquad h(z) = e^{-zs}\hat g_\varepsilon(z)\hat I_{\Omega_c}(z)\,\nu_*\mathbf{Q}_z h_1\Bigl(\frac{1}{1-\lambda(z)} - \frac{i}{\mu z}\Bigr).
\]
Since $h$ is analytic in the region bounded by $L_1(\varepsilon),\ldots,L_5(\varepsilon)$, Cauchy's theorem for contour integrals gives

\[
\textup{(4.24)}\qquad \int_{L_1(\varepsilon)}h(z)\,dz + \int_{L_2}h(z)\,dz + \int_{L_3}h(z)\,dz + \int_{L_4}h(z)\,dz + \int_{L_5(\varepsilon)}h(z)\,dz = 0.
\]
The continuity of $h$ yields that the residue of $h(z)$ at 0 is 0, whence

\[
\lim_{\varepsilon\to 0}\int_{L_5(\varepsilon)}h(z)\,dz = 0.
\]
Combining this with the Riemann--Lebesgue lemma, we have

\[
\lim_{\varepsilon\to 0}\int_{L_1(\varepsilon)}h(z)\,dz
\]

\begin{align*}
\textup{(4.25)}\qquad&= \int_{-\delta}^{\delta} e^{-(u+i\theta)s}\hat g_\varepsilon(u+i\theta)\hat I_{\Omega_c}(u+i\theta)\,\nu_*\mathbf{Q}_{u+i\theta}h_1\Bigl(\frac{1}{1-\lambda(u+i\theta)} - \frac{i}{\mu(u+i\theta)}\Bigr)d\theta\\
&= O(e^{-rs})\qquad\text{for some } r > 0\text{ as } s\to\infty.
\end{align*}
To analyze (4.20), we make use of Lemma 5, which implies that $\sum_{n=0}^\infty E_\nu\{e^{i\theta S_n}\}$ and its partial derivatives up to order $r$ are bounded for any $r\ge 2$, and a fortiori locally integrable, in the region $\{\theta : |\theta|\ge\delta\}$. Moreover, $\hat I_{\Omega_c}$ and its derivatives are bounded by a constant times $\prod_{i=1}^d|\theta|^{-1}$ as $|\theta|\to\infty$. Therefore $N$ integrations by parts, as on page 359 of Carlsson and Wainger (1982), can be used to show that $(4.20) = O(|\log\varepsilon|^d/s_1^N)$ for any $N$. Since $\eta(\theta)$ and its partial derivatives of order $r$ are bounded for $0 < |\theta|\le\delta^*$ for any $r\ge 2$, $N$ integrations by parts can also be used to show that $(4.19) = O(s_1^{-N})$ for any $N$. Therefore, by making use of (4.18)--(4.20), (4.23) and (4.25), we have (4.15), and the proof is complete. $\square$

5. Proof of Theorem 2. In this section, we assume the Markov random walk $\{(X_n,S_n),\,n\ge 0\}$ defined in (1.1) is uniformly ergodic with respect to a given norm, and the stationary mean $\mu = 0$. Under the minorization condition C1, making use of the results in Hipp (1985), Malinovskii (1987) and Jensen (1989), we have the following asymptotic expansion of the distribution and its density for Markov random walks.

Lemma 6. Assume C1--C5 with $r = 3$ in C3. We assume, without loss of generality, that the asymptotic variance $\sigma^2 = 1$. Then
\[
\textup{(5.1)}\qquad P_\nu\{S_n\le s\sqrt n\} = \Phi(s) + \phi(s)\frac{Q_1(s)}{\sqrt n} + (1 + |s|^3)^{-1}o\Bigl(\frac{1}{\sqrt n}\Bigr),
\]
where $o(\cdot)$ is uniform in $s$. Here $\Phi(\cdot)$ denotes the standard normal distribution, $\phi(\cdot)$ the standard normal density, and $Q_1(s) = (\kappa/6)(1 - s^2) + \kappa_\nu$, where
\[
\kappa = E_\pi\xi_1^3 + 3\sum_{t=1}^\infty E_\pi\xi_1^2\xi_{t+1} + 3\sum_{t=1}^\infty E_\pi\xi_1\xi_{t+1}^2 + 6\sum_{t_1,t_2=1}^\infty E_\pi\xi_1\xi_{t_1+1}\xi_{t_1+t_2+1}
\]
and $\kappa_\nu = \sum_{t=1}^\infty E_\nu\xi_t$. Note that $\kappa_\nu = 0$ if $\nu = \pi$.

Furthermore, if $P_\pi\{S_n\le s\sqrt n\}$ has a density $p_{\pi,n}(s\sqrt n)$, then
\[
\textup{(5.2)}\qquad p_{\pi,n}(s\sqrt n)\sqrt n = \phi(s)\Bigl(1 + \frac{\kappa}{6\sqrt n}(s^3 - 3s)\Bigr) + (1 + |s|^3)^{-1}o\Bigl(\frac{1}{\sqrt n}\Bigr),
\]
where $o(\cdot)$ is uniform in $s$.

In the following, we shall assume that C1--C5 and C7 hold. Lemma 7 is taken from Theorem 5 of Fuh and Lai (1998); we include it here for completeness.
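In the i.i.d. special case, $\kappa$ reduces to $E\xi_1^3$ and the one-term expansion (5.1) can be checked against an exact distribution. The sketch below (an illustration, not the paper's setting) uses centered $\mathrm{Exp}(1)$ increments, for which $S_n + n$ is $\mathrm{Gamma}(n,1)$ and $\kappa = 2$, and compares the plain normal and Edgeworth approximations:

```python
import math

def Phi(s):  # standard normal CDF via the error function
    return 0.5*(1.0 + math.erf(s/math.sqrt(2.0)))

def phi(s):  # standard normal density
    return math.exp(-s*s/2.0)/math.sqrt(2.0*math.pi)

def gamma_cdf_int(n, x):
    """P(Gamma(n,1) <= x) for integer n, via the Poisson-sum identity
    P(Gamma(n,1) <= x) = 1 - sum_{k<n} e^{-x} x^k / k!."""
    term, acc = math.exp(-x), math.exp(-x)
    for k in range(1, n):
        term *= x/k
        acc += term
    return 1.0 - acc

# xi = X - 1 with X ~ Exp(1): mu = 0, sigma = 1, kappa = E xi^3 = 2
n, s = 20, 0.5
exact = gamma_cdf_int(n, n + s*math.sqrt(n))          # P(S_n <= s sqrt(n))
normal = Phi(s)
edgeworth = Phi(s) + phi(s)*(2.0/6.0)*(1.0 - s*s)/math.sqrt(n)
print(exact, normal, edgeworth)
assert abs(edgeworth - exact) < abs(normal - exact)   # correction helps
```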

Lemma 7. Let $r\ge 1$. Assume $\sup_x E_x(\xi_1^+)^{r+1} < \infty$, where $\xi_1^+$ denotes the positive part of $\xi_1$. Furthermore, assume there exists $\varepsilon > 0$ such that $\inf_x P_\pi\{\xi_1\le-\varepsilon\mid X_1 = x\} > 0$. Then $E_\pi S_{\tau_+}^r < \infty$ and $\{S_{\tau(b)} - b,\ b > 0\}$ is uniformly integrable under the probability $P_\pi$.

A Markov random walk is called lattice with span $d > 0$ if $d$ is the maximal number for which there exists a measurable function $\gamma:\mathcal{X}\to[0,\infty)$, called the shift function, such that $P\{\xi_1 - \gamma(x) + \gamma(y)\in\{\ldots,-2d,-d,0,d,2d,\ldots\}\mid X_0 = x, X_1 = y\} = 1$ for almost all $x,y\in\mathcal{X}$. If no such $d$ exists, it is called nonlattice.

Let $W(t)$, $0\le t\le\infty$, denote Brownian motion with drift $\mu$ and put $\tau_W = \tau_W^{(\mu)}(b) = \inf\{t : W(t)\ge b\}$. Define the inverse Gaussian distribution $G(t;\mu,b) = P\{\tau_W(b)\le t\}$ and $H_+(s) = (E_{\pi_+}S_{\tau_+})^{-1}\int_0^s P_{\pi_+}\{S_{\tau_+}\ge t\}\,dt$.

Lemma 8. Assume $P$ is nonlattice. Suppose $b\to\infty$ and $m\to\infty$ so that, for some fixed $0 < \zeta < \infty$, $b = \zeta m^{1/2}$. Then for all $0\le t,s\le\infty$,
\[
P_\pi\{\tau(b)\le mt,\ S_{\tau(b)} - b\le s\}\longrightarrow G(t;0,\zeta)H_+(s).
\]

Proof. Note that
\[
P_\pi\{\tau(b) > mt,\ S_{\tau(b)} - b\le s\} = E_\pi(P_\pi\{S_{\tau(b)} - b\le s\mid\tau(b) > mt,\,S_m\};\ \tau(b) > mt).
\]
Let $\tilde b = b - b^{1/4}$. Then by the central limit theorem for Markov random walks [cf. Theorem 17.2.2 of Meyn and Tweedie (1993)], the contribution of the event $\{\tilde b < S_m\}$ is negligible:
\[
E_\pi(P_\pi\{S_{\tau(b)} - b\le s\mid\tau(b) > mt,\,S_m\};\ \tau(b) > mt,\ \tilde b < S_m)\to 0.
\]

Moreover, it is also known that $P_{\pi_+}\{\tau_+ < \infty\} = 1$ and, by Lemma 7, $E_{\pi_+}S_{\tau_+} < \infty$, and
\begin{align*}
P_\pi\{S_{\tau(b)} - b\le s\mid\tau(b) > mt,\,S_m\le\tilde b\}
&= \int_0^{\tilde b}\int_{\mathcal{X}} P_\pi\{S_{\tau(b)} - b\le s\mid X_m = x,\,\tau(b) > mt,\,S_m\in dv\}\\
&\qquad\times P_\pi\{X_m\in dx\mid\tau(b) > mt,\,S_m\in dv\}\\
&= \int_0^{\tilde b}\int_{\mathcal{X}} P_x\{S_{\tau(b-v)} - (b - v)\le s\}\,P_\pi\{X_m\in dx\mid\tau(b) > mt,\,S_m\in dv\}.
\end{align*}
Since $P_\pi\{\tau(b) > mt,\,S_m\le\tilde b\} > 0$ as $b\to\infty$, for $v$ uniformly in $\{\tau(b) > mt,\,S_m\le\tilde b\}$ we have $P_x\{S_{\tau(b-v)} - (b - v)\le s\}\to H_+(s)$ as $b\to\infty$ by the Markov renewal theorem in Theorem 1. Hence, uniformly in $\{\tau(b) > mt,\,S_m\le\tilde b\}$, as $b\to\infty$,
\[
P_\pi\{S_{\tau(b)} - b\le s\mid\tau(b) > mt,\,S_m\}\to H_+(s).
\]
Under the irreducibility assumption and the minorization condition C1, for $r = 2$, assumption C3 ensures that the Poisson equation (4.1) has a solution $\Delta$ which satisfies $E_\pi(\Delta(X_1))^2 < \infty$. Therefore, by the invariance principle for Harris recurrent Markov chains [cf. Theorem 17.4.4 of Meyn and Tweedie (1993)], the limiting marginal distribution of $\tau(b)/m$ is $G(t;0,\zeta)$. Hence,
\begin{align*}
P_\pi\{\tau(b) > mt,\ S_{\tau(b)} - b\le s\}
&= E_\pi(P_\pi\{S_{\tau(b)} - b\le s\mid\tau(b) > mt,\,S_m\};\ \tau(b) > mt,\,S_m\le\tilde b) + o(1)\\
&= H_+(s)P_\pi\{\tau(b) > mt,\,S_m\le\tilde b\} + o(1)\to H_+(s)(1 - G(t;0,\zeta)).\qquad\square
\end{align*}

Lemma 9. Let $0 < \zeta < \infty$ and $R_m = S_{\tau(b)} - \zeta m^{1/2}$. Then for any $\varepsilon > 0$, $P_\pi\{R_m > \varepsilon m^{1/2}\} = o(m^{-1/2})$.

Proof. By Markov's inequality we have

\[
P_\pi\{R_m > \varepsilon m^{1/2}\}\le\varepsilon^{-1}m^{-1/2}\int_{\{R_m > \varepsilon m^{1/2}\}} R_m\,dP_\pi,
\]
which is $o(m^{-1/2})$ by Lemma 7. $\square$
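The Brownian limit in Lemma 8 can be illustrated by Monte Carlo. The sketch below (illustrative parameters) simulates a zero-drift Gaussian walk with $b = \zeta m^{1/2}$ and compares $P\{\tau(b)\le mt\}$ with the reflection-principle value $G(t;0,\zeta) = 2(1 - \Phi(\zeta/\sqrt t))$; the discrete walk slightly undershoots the limit because of overshoot at the boundary.

```python
import math, random

def Phi(x):
    return 0.5*(1.0 + math.erf(x/math.sqrt(2.0)))

def first_passage_prob(zeta, t, m=900, n_paths=1500, seed=11):
    """Monte Carlo estimate of P(tau(b) <= m t) for a zero-drift Gaussian
    walk with b = zeta*sqrt(m); a toy illustration of Lemma 8."""
    rng = random.Random(seed)
    b = zeta*math.sqrt(m)
    horizon = int(m*t)
    hits = 0
    for _ in range(n_paths):
        s = 0.0
        for _ in range(horizon):
            s += rng.gauss(0.0, 1.0)
            if s >= b:          # first passage over b before time m*t
                hits += 1
                break
    return hits/n_paths

zeta, t = 1.0, 1.0
est = first_passage_prob(zeta, t)
limit = 2.0*(1.0 - Phi(zeta/math.sqrt(t)))   # G(t; 0, zeta) by reflection
print(est, limit)
```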

Lemma 10. Let $0 < \zeta < \infty$, $0\le s\le\infty$, and $m_1 = m(1 - (\log m)^{-2})$. Then as $m\to\infty$,
\[
\textup{(5.3)}\qquad P\{m_1 < \tau < m\}
\]

Proof of Theorem 2. Since we will consider a time delay in the proof, we denote $S_0 = s_0$ for convenience. Let $P_\pi^{(m,s_0,s)}(A) = P_\pi\{A\mid S_0 = s_0,\,S_m = s\}$. Set $s_0 = m^{1/2}\lambda_0$ and $s = m^{1/2}\zeta_0$ for some $\lambda_0,\zeta_0 < \zeta$. Let $s' = 2b - s = m^{1/2}(2\zeta - \zeta_0)$ denote $s$ reflected about $b$. From (5.2) in Lemma 6 and the Markov renewal theorem in Theorem 1, in which $\Gamma$ has only one element, it follows as in Lemmas 7--10 that, for $m_1 = m(1 - (\log m)^{-2})$ and some $\varepsilon_m\to 0$, (5.6) $P^{(m,s_0,s)}\{\tau < m\}$

′ (5.9) P (m,s0,s) τ

\[
\textup{(5.11)}\qquad E_\pi\biggl[\frac{p_{\pi,m-\tau}(s - S_\tau)}{p_{\pi,m}(s - s_0)} - \exp[-2(\zeta - \lambda_0)(\zeta - \zeta_0)]\,\frac{p_{\pi,m-\tau}(s' - S_\tau)}{p_{\pi,m}(s - s_0)};\ A_m\ \Big|\ S_0 = s_0\biggr].
\]

The rest of the proof of (2.9) involves use of Lemma 6 to expand the integrand in (5.11) and application of Lemma 8 to evaluate the resulting

expectation. Let $R_m = S_\tau - m^{1/2}\zeta$. Some tedious algebra gives that the integrand in (5.11) equals

\[
[(1 - \tau/m)^{1/2}\phi(\zeta_0 - \lambda_0)]^{-1}\biggl[\phi\biggl(\frac{\zeta - \zeta_0 + R_m/m^{1/2}}{(1 - \tau/m)^{1/2}}\biggr) - \phi\biggl(\frac{\zeta - \zeta_0 - R_m/m^{1/2}}{(1 - \tau/m)^{1/2}}\biggr)\biggr],
\]
which can be expanded to give

\[
\textup{(5.12)}\qquad 2(1 - \tau/m)^{-1/2}\exp[-\tfrac12(\zeta_0 - \lambda_0)^2 - \tfrac12(\zeta - \zeta_0)^2/(1 - \tau/m)]\,[(\zeta - \zeta_0)R_m/m^{1/2}(1 - \tau/m)] + o((1 + R_m^2)/m)
\]
uniformly on $A_m$. According to Lemma 8, $\tau/m$ and $R_m$ are asymptotically independent and converge in law, and by Lemma 7, $R_m$ is uniformly integrable. Also, (5.12) is a bounded, continuous function of $\tau/m$ on $A_m$. Hence, (5.12) can be substituted into (5.11) and Lemma 8 applied to evaluate the result. Putting $s_0 = 0$ and performing the appropriate integrations yields (2.9). Formally, (2.10) follows by substituting (2.9) into

(5.13) P τ

\[
\textup{(5.14)}\qquad P(x, A\times B) = \int_A p(x,y)Q_{x,y}(B)\,M(dy) := \int_A F(x,y;B)\,M(dy),
\]
where $F(x,y;B) = p(x,y)Q_{x,y}(B)$. For ease of notation, we still denote by $\pi$ the density of the invariant probability measure. We shall use $\sim$ to refer to the time-reversed (or dual) chain $\{(\tilde X_n,\tilde S_n),\,n\ge 0\}$ with transition kernel
\[
\textup{(5.15)}\qquad \tilde F(y,x;B) = F(x,y;B)\,\pi(x)/\pi(y).
\]
Let τ ∗ = sup n : n 0 = Pπ τ˜ < { } { } {

(m,0,ζ) m ; so to approximate Pπ τ

6. Proof of Theorem 3. The proof of Theorem 3 is a suitable application of Theorem 1 via the following lemmas.

Lemma 11. Assume the conditions of Theorem 3 hold. Then there exists $\delta > 0$ such that, for $|\alpha|\le\delta$, the induced Markov chain $\{(X_n^\alpha,S_n^\alpha),\,n\ge 0\}$ with transition probability (2.13) is aperiodic and irreducible. Moreover, it is uniformly ergodic with respect to a given norm and satisfies K1--K6.

Proof. For $|\alpha|\le\delta$, it is known [cf. Ney and Nummelin (1987)] that $\{(X_n^\alpha,S_n^\alpha),\,n\ge 0\}$ is an aperiodic and irreducible Markov chain. Since $\{(X_n,S_n),\,n\ge 0\}$ satisfies C1, $P^\alpha$ is geometrically $e^{-\Lambda(\alpha)}$-recurrent for $|\alpha| < \delta$, and therefore is $e^{-\Lambda(\alpha)}$-uniformly ergodic; compare Theorem 4.1 of Ney and Nummelin (1987). Define
\[
\Psi^\alpha(dy\times ds) = \frac{\Psi(dy\times ds)\,e^{-\Lambda(\alpha)+\alpha s}\,r(y;\alpha)}{(\Psi r)(\alpha)},
\]
where $(\Psi r)(\alpha)$ is a normalizing constant, and $h^\alpha(x) = (\Psi r)(\alpha)\,r^{-1}(x;\alpha)h(x)$, where $\Psi(\cdot)$ and $h(\cdot)$ are defined in C1 with
\[
P(x,dy\times ds)\ge h(x)\Psi(dy\times ds).
\]
This implies that
\[
P^\alpha(x,dy\times ds)\ge\frac{\Psi(dy\times ds)\,e^{-\Lambda(\alpha)+\alpha s}\,r(y;\alpha)}{(\Psi r)(\alpha)}\,(\Psi r)(\alpha)\,r^{-1}(x;\alpha)h(x) = h^\alpha(x)\Psi^\alpha(dy\times ds).
\]
Therefore the minorization condition K1 holds.

To prove the moment condition K6, denote by $\lambda^\alpha(z)$ the eigenvalue of $\mathbf{P}_z^\alpha$. By (2.13), we have
\[
|E_\pi^\alpha(e^{\theta\xi_1^\alpha}) - E_\pi(e^{\theta\xi_1})| = \biggl|\int_{x,y\in\mathcal{X}}\int_{-\infty}^\infty e^{\theta s}\biggl(\frac{r(y;\alpha)}{r(x;\alpha)}e^{\alpha s-\Lambda(\alpha)} - \frac{r(x;\alpha)}{r(y;\alpha)}\biggr)P^\alpha(x,dy\times ds)\,\pi^\alpha(dx)\biggr|
\]

\[
\le\int_{x,y\in\mathcal{X}}\int_{-\infty}^\infty e^{\theta s}\,\biggl|\frac{r(y;\alpha)}{r(x;\alpha)}e^{\alpha s-\Lambda(\alpha)} - \frac{r(x;\alpha)}{r(y;\alpha)}\biggr|\,P^\alpha(x,dy\times ds)\,\pi^\alpha(dx)
\]


for $\theta\in\Theta\subset\mathbf{R}$. From $\lim_{\alpha\downarrow 0}|r(y;\alpha)/r(x;\alpha)| = 1$, we get $\lim_{\alpha\downarrow 0}\sup_\theta|E_\pi^\alpha(e^{\theta\xi_1^\alpha}) - E_\pi(e^{\theta\xi_1})| = 0$ by dominated convergence. Therefore, we may choose $\alpha^* > 0$ so that $\sup_{\alpha\in[0,\alpha^*]}\sup_\theta|E_\pi^\alpha(e^{\theta\xi_1^\alpha}) - E_\pi(e^{\theta\xi_1})|\le C$, say. Hence condition C6 implies that K6 holds. By a similar argument, we also obtain the moment conditions K2--K4. $\square$
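The tilted-kernel construction used in this proof can be made concrete for a finite-state Markov additive process: compute the Perron eigenvalue $\lambda(\alpha)$ and eigenfunction $r(\cdot;\alpha)$ of the transform kernel, and verify that the twisted kernel is a bona fide transition matrix. All numbers below are illustrative choices, not the paper's example:

```python
import math

# Toy Markov additive process on two states: from state x the chain moves to
# y w.p. P[x][y], with deterministic increment c[x][y].  The twisted kernel
# P^alpha(x,y) = P(x,y) e^{alpha c(x,y)} r(y) / (lambda(alpha) r(x))
# must have rows summing to one.
P = [[0.6, 0.4],
     [0.3, 0.7]]
c = [[1.0, -0.5],
     [0.2,  0.8]]
alpha = 0.4

# transform kernel \hat P(alpha)(x,y) = P(x,y) e^{alpha c(x,y)}
Ph = [[P[x][y]*math.exp(alpha*c[x][y]) for y in range(2)] for x in range(2)]

# Perron eigenvalue and right eigenvector by power iteration
# (entries of Ph are strictly positive, so this converges)
r = [1.0, 1.0]
for _ in range(200):
    r = [Ph[x][0]*r[0] + Ph[x][1]*r[1] for x in range(2)]
    norm = max(r)
    r = [v/norm for v in r]
lam = (Ph[0][0]*r[0] + Ph[0][1]*r[1])/r[0]

# twisted transition probabilities: each row sums to one
Pa = [[Ph[x][y]*r[y]/(lam*r[x]) for y in range(2)] for x in range(2)]
for row in Pa:
    assert abs(sum(row) - 1.0) < 1e-10
print(lam, Pa)
```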

Lemma 12. Assume the conditions of Theorem 3 hold. Then there exists $\alpha^* > 0$ such that the family $\{(X_n^\alpha,S_n^\alpha),\,n\ge 0 : 0\le\alpha\le\alpha^*\}$ satisfies (2.3) and (2.4).

Proof. Since the proofs of (2.3) and (2.4) are similar, we only prove (2.3), the uniformly strong nonlattice case. Let $\lambda^\alpha(z)$ denote the eigenvalue of $\mathbf{P}_z^\alpha$. Since $\{(X_n,S_n),\,n\ge 0\}$ is assumed to be strongly nonlattice, we have that $g(1) := \inf_{|\theta|\ge 1}|1 - E_\pi(e^{i\theta\xi_1})| > 0$. However, by (2.13),

\[
|E_\pi(e^{i\theta\xi_1}) - E_\pi^\alpha(e^{i\theta\xi_1^\alpha})| = \biggl|\int_{x,y\in\mathcal{X}}\int_{-\infty}^\infty e^{i\theta s}\biggl(\frac{r(y;\alpha)}{r(x;\alpha)}e^{\alpha s-\Lambda(\alpha)} - \frac{r(x;\alpha)}{r(y;\alpha)}\biggr)P^\alpha(x,dy\times ds)\,\pi^\alpha(dx)\biggr|
\le\int_{x,y\in\mathcal{X}}\int_{-\infty}^\infty\biggl|\frac{r(y;\alpha)}{r(x;\alpha)}e^{\alpha s-\Lambda(\alpha)} - \frac{r(x;\alpha)}{r(y;\alpha)}\biggr|P^\alpha(x,dy\times ds)\,\pi^\alpha(dx)
\]

for all real $\theta$. Since $\lim_{\alpha\downarrow 0}|r(y;\alpha)/r(x;\alpha)| = 1$, we have $\lim_{\alpha\downarrow 0}\sup_\theta|E_\pi(e^{i\theta\xi_1}) - E_\pi^\alpha(e^{i\theta\xi_1^\alpha})| = 0$ by the dominated convergence theorem. Therefore, we may choose $\alpha^* > 0$ so that $\sup_{\alpha\in[0,\alpha^*]}\sup_\theta|E_\pi(e^{i\theta\xi_1}) - E_\pi^\alpha(e^{i\theta\xi_1^\alpha})|\le g(1)/2$, say. Choosing $\alpha^*$ in this way, by applying the definition of $g(1)$ and the triangle inequality, we obtain
\[
\inf_{\alpha\in[0,\alpha^*]}\inf_{|\theta|\ge 1}|1 - E_\pi^\alpha(e^{i\theta\xi_1^\alpha})|\ge g(1)/2.
\]
Consider
\[
g(\delta) := \inf_{\alpha\in[0,\alpha^*]}\inf_{|\theta|\ge\delta}|1 - E_\pi^\alpha(e^{i\theta\xi_1^\alpha})|;
\]
we want to show that $g(\delta) > 0$ for all $\delta > 0$. If $\delta\ge 1$, then $g(\delta)\ge g(1)/2 > 0$. Suppose $0 < \delta < 1$. To show $g(\delta) > 0$, it suffices to show
\[
\inf_{\alpha\in[0,\alpha^*]}\inf_{\delta\le|\theta|\le 1}|1 - E_\pi^\alpha(e^{i\theta\xi_1^\alpha})| > 0.
\]

However, since $E_\pi^\alpha(e^{i\theta\xi_1^\alpha}) = E_\pi^\alpha(e^{(\alpha+i\theta)\xi_1})/E_\pi^\alpha(e^{\alpha\xi_1})$, it is easy to see that $E_\pi^\alpha(e^{i\theta\xi_1^\alpha})$ is a continuous function of $(\alpha,\theta)$ for $\alpha\in\Gamma$ and real $\theta$. Therefore $|1 - E_\pi^\alpha(e^{i\theta\xi_1^\alpha})|$, being a continuous function on the compact set $\{(\alpha,\theta) : 0\le\alpha\le\alpha^*,\ \delta\le|\theta|\le 1\}$, must attain its minimum there. To complete the proof, we need only show that this minimum value cannot be 0. Supposing to the

contrary that $E_\pi^\alpha(e^{i\theta\xi_1^\alpha}) = 1$ for some $0\le\alpha\le\alpha^*$ and $\delta\le|\theta|\le 1$, we would have

\[
P_\pi^\alpha\Bigl\{\frac{\theta\xi_1}{2\pi}\ \text{is an integer}\Bigr\} = 1.
\]
However, by the assumption that $\{(X_n,S_n),\,n\ge 0\}$ is strongly nonlattice and the property of the exponential embedding that $P_\pi$ is absolutely continuous with respect to $P_\pi^\alpha$, this is a contradiction. $\square$
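The quantity $\inf_{|\theta|\ge\delta}|1 - E e^{i\theta\xi_1}|$ that drives the strong nonlattice condition behaves very differently for lattice and nonlattice increments, which is the dichotomy exploited above. A small numerical sketch (the two increment distributions are illustrative choices):

```python
import cmath

def cf_gap(char_fn, thetas):
    """min over the grid of |1 - E e^{i theta xi}|; a positive lower bound
    on a range [delta, 2*pi] is the nonlattice behavior used in the text."""
    return min(abs(1 - char_fn(t)) for t in thetas)

# theta grid on [0.1, 2*pi]
thetas = [0.1 + 0.01*k for k in range(619)]

# xi ~ Exp(1): E e^{i theta xi} = 1/(1 - i*theta); nonlattice
gap_nonlattice = cf_gap(lambda t: 1.0/(1.0 - 1j*t), thetas)
# xi uniform on {1, 2}: characteristic function is 2*pi-periodic; lattice, span 1
gap_lattice = cf_gap(lambda t: 0.5*(cmath.exp(1j*t) + cmath.exp(2j*t)), thetas)

print(gap_nonlattice, gap_lattice)   # gap stays positive only in the first case
```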

We use the same notation as in the paragraph before Theorem 2 in Section 2. Note that $\tau_n$, $\tau_+$ and $\tau_-$ depend on $\alpha$; we omit it here for simplicity. The following lemma concerns the uniform strong nonlattice property of the ladder chains. The proof is a straightforward generalization of Theorem 6 in Fuh and Lai (1998) and is omitted.

Lemma 13. Assume the conditions of Theorem 3 hold. Let $P_{\pi_+}^\alpha$ be the transition probability of the ladder Markov chain $\{(X_{\tau_n}^\alpha,S_{\tau_n}^\alpha),\,n\ge 0\}$. Then there exists $\alpha^* > 0$ such that, for $0\le\alpha\le\alpha^*$, the family $\{(X_{\tau_n}^\alpha,S_{\tau_n}^\alpha),\,n\ge 0\}$ is uniformly strong nonlattice.

The following lemma generalizes Lemmas 4.4 and 4.5 in Heyde (1964) for simple random walks.

Lemma 14. Assume the conditions of Theorem 3 hold. Then there exist $\alpha^* > 0$, $r_1 > 0$ and $C$ such that, for all $\alpha\in[0,\alpha^*]$,
\[
E_\pi^\alpha(e^{r_1 S_{\tau_+}^\alpha})\le C.
\]

Proof. Under assumption C6 and Lemma 11, for $\alpha\in\Gamma\subset\mathbf{R}$ we can define the linear operators $\mathbf{P}_\alpha$, $\mathbf{P}$, $\nu_*$ and $\mathbf{Q}$ on $\mathcal{N}$ as in (2.11). By the spectral decomposition theory for linear operators on the space $\mathcal{N}$ developed in Proposition 1, we have, for $h\in\mathcal{N}$,
\[
\textup{(6.1)}\qquad E_\nu\{e^{\alpha S_n}h(X_n)\} = \lambda^n(\alpha)\,\nu_*\mathbf{Q}_\alpha h + \nu_*\mathbf{P}^n(\mathbf{I} - \mathbf{Q}_\alpha)h,
\]
where $\mathbf{Q}_\alpha$ is defined in (2.12). It can also be shown that there exist $K^* > 0$ and $0 < \delta^* < \delta$ such that, for $|\alpha|\le\delta^*$,
\[
\textup{(6.2)}\qquad \|\nu_*\mathbf{P}^n(\mathbf{I} - \mathbf{Q}_\alpha)h\|\le K^*\|h\|\,|\alpha|\,\{(1+2\rho)/3\}^n,
\]
and under assumption C6 it follows from Proposition 1 that $\lambda(\alpha)$ has the Taylor expansion
\[
\textup{(6.3)}\qquad \lambda(\alpha) = 1 + \sum_{j=1}^r\lambda_j\alpha^j/j! + \Delta(\alpha)
\]
in some neighborhood of the origin, where $\Delta(\alpha) = O(|\alpha|^r)$ as $\alpha\to 0$. Now, for such $\alpha$, take $r > 0$ such that $e^r\lambda(\alpha) < 1$; then, for $0 < \gamma < 1$, (6.4) implies that
\[
\textup{(6.5)}\qquad \sum_{n=1}^\infty e^{rn}F_n(\log\gamma) < \infty.
\]
Note that the probability $p_n$ that the first passage time $\tau(\gamma)$ out of the interval $(\log\gamma,\infty)$ for the Markovian random walk $S_n$ equals $n$ is given by
\[
\textup{(6.6)}\qquad p_n = F_{n-1}(\log\gamma) - F_n(\log\gamma),\qquad n\ge 1.
\]
By (6.5) and (6.6), we have $E_\nu e^{t\tau(\gamma)} < \infty$ for some $t > 0$, for all $\gamma\in(0,1)$. Hence
\[
\textup{(6.7)}\qquad E_\pi(e^{t\xi_1}) < \infty\quad\text{implies}\quad E_\pi(e^{tS_{\tau_+}}) < \infty.
\]
Using the requirement in the definition of the exponential embedding that $\Gamma$ must contain an interval about 0, take any positive $\alpha_1\in\Gamma$. Let $C := E_\pi(e^{\alpha_1 S_{\tau_+}})$; by (6.7), $C$ is finite. Since $0 < \inf_{|\alpha|>\delta,\,x\in\mathcal{X}} r(x;\alpha)\le\sup_{|\alpha|>\delta,\,x\in\mathcal{X}} r(x;\alpha) < \infty$, if we take $\alpha^*$ and $r_1$ both to be $\alpha_1/2$, say, then for any $\alpha\in[0,\alpha^*]$ we have

\[
E_\pi\{e^{r_1 S_{\tau_+}^\alpha}\} = E_\pi\Bigl\{\frac{r(X_{\tau_+};\alpha)}{r(X_0;\alpha)}\,e^{(r_1+\alpha)S_{\tau_+} - \tau_+\psi(\alpha)}\Bigr\}
\]

r(Xτ+ ; α) (r +α)S E e 1 τ+ π r(X ; α) ≤  0  α S E e 1 τ+ = C, ≤ π{ } which is the desired property. 
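Lemma 14's conclusion, a uniform exponential moment for the ladder height $S_{\tau_+}$, can be illustrated by Monte Carlo for a concrete chain. The two-state Gaussian-increment chain below (transition matrix, state means and the rate $r_1$) is a hypothetical example chosen for this sketch, not a model from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-state Markov-modulated Gaussian walk (illustrative only):
# in state i the increment is N(means[i], 1), and both conditional drifts are positive.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
means = np.array([0.5, 1.0])

def ladder_height(rng):
    """Run S_n to the first ascending ladder epoch tau_+ = inf{n >= 1 : S_n > 0}."""
    x, s = 0, 0.0
    while True:
        x = 0 if rng.random() < P[x, 0] else 1  # next state of the modulating chain
        s += rng.normal(means[x], 1.0)
        if s > 0:
            return s

r1 = 0.1  # a small exponential rate, playing the role of r_1 in Lemma 14
heights = np.array([ladder_height(rng) for _ in range(20000)])
moment = np.exp(r1 * heights).mean()
print(moment)  # a finite, stable estimate of E exp(r1 * S_{tau+})
```

Because the increments have light (Gaussian) tails, the empirical estimate of $E e^{r_1 S_{\tau_+}}$ stabilizes quickly, in line with the uniform bound the lemma asserts.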

Lemma 15. Assume the conditions of Theorem 3 hold. Suppose $b \to \infty$ and $0 < \alpha \downarrow 0$ in such a way that $\alpha b \to \vartheta$ for some $-\infty < \vartheta < \infty$. Then, for $0 \le t, s \le \infty$:

(i) $P_\pi^\alpha\{\tau(b) \le b^2 t,\ S_{\tau(b)}^\alpha - b \le s\} \longrightarrow G(t; \vartheta, 1)\, H_+(s)$ and

(ii) $E_\pi^\alpha\{(S_{\tau(b)}^\alpha - b)^r;\ \tau(b) < \infty\} \longrightarrow \dfrac{E_{\pi_+} S_{\tau_+}^{r+1}}{(r+1)\, E_{\pi_+} S_{\tau_+}}$.

Proof. (i) Let $m = b^2 t$ and let $\mathcal{F}_N$ be the $\sigma$-algebra generated by $\{(X_n, S_n),\ n \le N\}$. By Wald's likelihood ratio identity for Markov chains, for any stopping time $N$, any $\alpha'$, $\alpha''$, any $A \in \mathcal{F}_N$ and each fixed $x \in \mathcal{X}$,
$$(6.8)\qquad P_x^{\alpha'}\{A \cap (N < \infty)\} = \int_{A \cap (N < \infty)} \frac{r(X_N; \alpha')}{r(x; \alpha')} \exp\big((\alpha' - \alpha'') S_N - N(\Lambda(\alpha') - \Lambda(\alpha''))\big)\, dP_x^{\alpha''}.$$
This implies that
$$(6.9)\qquad \begin{aligned} P_\pi^\alpha\{\tau \le m,\ S_\tau^\alpha - b \le s\} &= E_\pi\left[\frac{r(X_\tau; \alpha)}{r(X_0; \alpha)} \exp\{\alpha S_\tau - \tau\Lambda(\alpha)\};\ \tau \le m,\ S_\tau - b \le s\right] \\ &= \exp(\alpha b)\, E_\pi\left[\frac{r(X_\tau; \alpha)}{r(X_0; \alpha)} \exp\{\alpha(S_\tau - b) - \tau\Lambda(\alpha)\};\ \tau \le m,\ S_\tau - b \le s\right]. \end{aligned}$$
As $0 < \alpha \downarrow 0$ with $\alpha b \to \vartheta$, we have $r(X_\tau; \alpha)/r(X_0; \alpha) \to 1$ and $\Lambda(\alpha) \sim \frac{1}{2}\alpha^2 \sim \frac{1}{2}\vartheta^2/b^2$. Hence, at least for all finite $s$, Lemma 7 shows that the right-hand side of (6.9) converges to
$$\begin{aligned} \exp(\vartheta)\, E_\pi\big[\exp\{-\tfrac{1}{2}\vartheta^2 \tau_W(1)\};\ \tau_W(1) \le t\big] H_+(s) &= E_\pi\big[\exp\{\vartheta W(\tau_W(1)) - \tfrac{1}{2}\vartheta^2 \tau_W(1)\};\ \tau_W(1) \le t\big] H_+(s) \\ &= P_\pi^{(\vartheta)}\{\tau_W(1) \le t\}\, H_+(s) \\ &= G(t; \vartheta, 1)\, H_+(s). \end{aligned}$$
That this calculation is also valid when $s = \infty$ follows from the Markov renewal theorem (Theorem 1) once it is known that $E_\pi(e^{r S_{\tau_+}}) < \infty$ for some $r > 0$; this holds by Lemma 14.

(ii) The convergence of $E_{\pi_+} S_{\tau_+}$ follows from Lemma 6. The rest is a similar calculation and is omitted. $\square$
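The limit $G(t; \vartheta, 1)$ appearing in Lemma 15(i) is the first-passage distribution of a Brownian motion with drift $\vartheta$ over the level $1$, for which a closed form is available. The sketch below (with an arbitrary illustrative choice of $b$, $\vartheta$ and i.i.d. Gaussian increments, not a model from the paper) compares a simulated walk's crossing probability $P\{\tau(b) \le b^2 t\}$ with that Brownian limit.

```python
import numpy as np
from math import erf, exp, sqrt

def Phi(x):
    """Standard normal cdf."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def G(t, theta):
    """P{Brownian motion with drift theta hits level 1 by time t} (inverse Gaussian cdf)."""
    return Phi((theta * t - 1.0) / sqrt(t)) + exp(2.0 * theta) * Phi((-theta * t - 1.0) / sqrt(t))

rng = np.random.default_rng(2)
b, theta, t = 30.0, 1.0, 1.0
alpha = theta / b                 # per-step drift, so that alpha * b = theta
n = int(b * b * t)                # look at the first b^2 t steps
paths = 5000
S = rng.normal(alpha, 1.0, size=(paths, n)).cumsum(axis=1)
mc = (S.max(axis=1) > b).mean()   # Monte Carlo estimate of P{tau(b) <= b^2 t}

print(mc, G(t, theta))  # close, up to overshoot bias and Monte Carlo error
```

The small residual gap between the two numbers is the discrete-time overshoot effect that the excess-over-the-boundary corrections in the paper are designed to quantify.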

By using the exponential martingale (2.13), the uniform renewal theorem in Theorem 1 and Lemmas 11–15, the proof of Theorem 3 is similar to that of Theorem 2 and is omitted.
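Wald's likelihood ratio identity (6.8), used in the proof of Lemma 15, reduces in the i.i.d. Gaussian case (where $r(\cdot;\alpha) \equiv 1$ and $\Lambda(\alpha) = \alpha^2/2$) to an importance-sampling identity that is easy to check numerically. The constants below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# i.i.d. Gaussian special case of (6.8): under P_alpha the increments are
# N(alpha, 1), so r(x; alpha) == 1 and Lambda(alpha) = alpha**2 / 2.
n, c = 10, 4.0            # stopping time N = n (fixed), event A = {S_n > c}
a1, a2 = 0.5, 0.2         # alpha' and alpha''
m = 200_000

# Left-hand side of (6.8): simulate directly under P_{alpha'}.
S1 = rng.normal(a1, 1.0, size=(m, n)).sum(axis=1)
lhs = (S1 > c).mean()

# Right-hand side: simulate under P_{alpha''} and reweight by the likelihood ratio
# exp((alpha' - alpha'') S_n - n (Lambda(alpha') - Lambda(alpha''))).
S2 = rng.normal(a2, 1.0, size=(m, n)).sum(axis=1)
lr = np.exp((a1 - a2) * S2 - n * (a1**2 - a2**2) / 2.0)
rhs = (lr * (S2 > c)).mean()

print(lhs, rhs)  # the two estimates agree up to Monte Carlo error
```

The same change-of-measure device, with the eigenfunction ratio $r(X_N;\alpha')/r(x;\alpha')$ restored, is what turns the boundary crossing probability into the tilted expectation in (6.9).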

APPENDIX

Characteristic functions of uniform Markov random walks. Here we generalize the work of Fuh and Lai (2001); we include it for completeness. Using the same notation and assumptions as in the first paragraph after Theorem 2 of Section 2, define $\mathbf{P}_z^\alpha$, $\mathbf{P}$, $\nu_*$ and $Q$ on $\mathcal{N}$ as in (2.11). Condition K2 ensures that $\mathbf{P}_z^\alpha$ and $\mathbf{P}$ are bounded linear operators on $\mathcal{N}$, and (2.2) implies that
$$(A.1)\qquad \|\mathbf{P}^n - Q\| = \sup_{h \in \mathcal{N}:\ \|h\| = 1} \|\mathbf{P}^n h - Q h\| \le \gamma \rho^n.$$
For a bounded linear operator $\mathbf{T}: \mathcal{N} \to \mathcal{N}$, the resolvent set is defined as $\{y \in \mathbf{C}: (\mathbf{T} - yI)^{-1} \text{ exists}\}$, and $(\mathbf{T} - yI)^{-1}$ is called the resolvent (when the inverse exists). From (A.1) it follows that, for $y \ne 1$ and $|y| > \rho$,
$$(A.2)\qquad R(y) := Q/(y - 1) + \sum_{n=0}^\infty (\mathbf{P}^n - Q)/y^{n+1}$$
is well defined. Since $R(y)(\mathbf{P} - yI) = -I = (\mathbf{P} - yI) R(y)$, the resolvent of $\mathbf{P}$ is $-R(y)$. Moreover, by K3 and an argument similar to the proof of Lemma 2.2 of Jensen (1987), there exist $K > 0$ and $\eta > 0$ such that, for $|z| \le \eta$, $|y - 1| > (1 - \rho)/6$ and $|y| > \rho + (1 - \rho)/6$, we have $\|\mathbf{P}_z^\alpha - \mathbf{P}\| \le K|z|$ and
$$R_z^\alpha(y) := \sum_{n=0}^\infty R(y)\{(\mathbf{P}_z^\alpha - \mathbf{P}) R(y)\}^n$$
is well defined. Since $R_z^\alpha(y)(\mathbf{P}_z^\alpha - yI) = R_z^\alpha(y)\{(\mathbf{P}_z^\alpha - \mathbf{P}) + (\mathbf{P} - yI)\} = -I = (\mathbf{P}_z^\alpha - yI) R_z^\alpha(y)$, the resolvent of $\mathbf{P}_z^\alpha$ is $-R_z^\alpha(y)$.

For $|z| \le \eta$, the spectrum (the complement of the resolvent set) of $\mathbf{P}_z^\alpha$ therefore lies inside the two circles $C_1 = \{y: |y - 1| = (1 - \rho)/3\}$ and $C_2 = \{y: |y| = \rho + (1 - \rho)/3\}$. Hence, by the spectral decomposition theorem [cf. Riesz and Sz.-Nagy (1955), page 421], $\mathcal{N} = \mathcal{N}_1(z) \oplus \mathcal{N}_2(z)$ and
$$(A.3)\qquad Q_z^\alpha := \frac{1}{2\pi i} \int_{C_1} R_z^\alpha(y)\, dy, \qquad I - Q_z^\alpha := \frac{1}{2\pi i} \int_{C_2} R_z^\alpha(y)\, dy$$
are parallel projections of $\mathcal{N}$ onto the subspaces $\mathcal{N}_1(z)$ and $\mathcal{N}_2(z)$, respectively. Moreover, by an argument similar to the proof of Lemma 2.3 of Jensen (1987), there exists $0 < \delta \le \eta$ such that $\mathcal{N}_1(z)$ is one-dimensional for $|z| \le \delta$ and $\sup_{|z| \le \delta} \|Q_z^\alpha - Q\| < 1$. For $|z| \le \delta$, let $\lambda^\alpha(z)$ be the eigenvalue of $\mathbf{P}_z^\alpha$ with corresponding eigenspace $\mathcal{N}_1(z)$. Since $Q_z^\alpha$ is the parallel projection onto the subspace $\mathcal{N}_1(z)$ in the direction of $\mathcal{N}_2(z)$, (2.12) holds. Therefore, for $h \in \mathcal{N}$,
$$E_\nu^\alpha\{e^{z S_n} h(X_n)\} = \nu_*^\alpha (\mathbf{P}_z^\alpha)^n h = \nu_*^\alpha (\mathbf{P}_z^\alpha)^n \{Q_z^\alpha + (I - Q_z^\alpha)\} h = (\lambda^\alpha(z))^n \nu_*^\alpha Q_z^\alpha h + \nu_*^\alpha (\mathbf{P}_z^\alpha)^n (I - Q_z^\alpha) h.$$
Suppose K4 also holds. An argument similar to the proof of Lemma 2.4 of Jensen (1987) shows that there exist $0 < \delta^* < \delta$ and $K^* > 0$ such that, for $|z| \le \delta^*$,
$$|\nu_*^\alpha (\mathbf{P}_z^\alpha)^n (I - Q_z^\alpha) h| \le K^* \|h\|\,|z|\,\{(1 + 2\rho)/3\}^n.$$
Moreover, analogous to Lemmas 2.5–2.7 of Jensen (1987), it can be shown that $\lambda^\alpha(z)$, $\nu_*^\alpha Q_z^\alpha h$ and $\sum_{n=0}^\infty \nu_*^\alpha (\mathbf{P}_z^\alpha)^n (I - Q_z^\alpha) h$ have continuous partial derivatives of order $[r]$ for $|z| \le \delta^*$. Furthermore, we have the following proposition.

Proposition 1. Assume the conditions of Theorem 1 hold. Let $h \in \mathcal{N}$, $z \in \mathbf{C}$ and $|z| \le \delta$ for some $\delta > 0$.

(i) $E_\nu^\alpha\{e^{z S_n} h(X_n)\} = (\lambda^\alpha(z))^n \nu_*^\alpha Q_z^\alpha h + \nu_*^\alpha (\mathbf{P}_z^\alpha)^n (I - Q_z^\alpha) h$. Moreover, there exist $0 < \delta^* < \delta$, $0 < \gamma < 1$ and $K > 0$ such that, for $|z| \le \delta^*$, $\lambda^\alpha(z)$, $\nu_*^\alpha Q_z^\alpha h$ and $\sum_{n=0}^\infty \nu_*^\alpha (\mathbf{P}_z^\alpha)^n (I - Q_z^\alpha) h$ have continuous partial derivatives of order $[r]$, and
$$|\nu_*^\alpha (\mathbf{P}_z^\alpha)^n (I - Q_z^\alpha) h| \le K \|h\|\,|z|\,\gamma^n \qquad \text{for all } n \ge 1.$$
Furthermore, $\lambda^\alpha(0) = 1$, $\nabla \lambda^\alpha(0) = i \Gamma \mu_\alpha$ and $\nabla^2 \lambda^\alpha(0) = -\Gamma V^\alpha \Gamma'$.
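For a finite state space the operator $\mathbf{P}_z$ is simply a tilted transition matrix, and the eigenvalue $\lambda(z)$ of Proposition 1 can be computed directly. The sketch below uses a hypothetical two-state chain with conditionally $N(\mu_i, 1)$ increments, so that the tilt multiplies column $i$ by $e^{z\mu_i + z^2/2}$, and checks for real $z$ that $\lambda(0) = 1$ and that $\lambda'(0)$ equals the stationary drift, the real-argument analogue of $\nabla\lambda^\alpha(0) = i\Gamma\mu_\alpha$.

```python
import numpy as np

# Hypothetical 2-state Markov additive process: after moving to state j the
# additive increment is N(mu[j], 1), so E[e^{z*xi} | next state j] equals
# exp(z*mu[j] + z^2/2) and P_z is the matrix P @ diag of those factors.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
mu = np.array([0.5, 1.0])

def lam(z):
    """Perron eigenvalue of the tilted matrix P_z (real z)."""
    M = P @ np.diag(np.exp(z * mu + z * z / 2.0))
    return max(np.linalg.eigvals(M).real)

pi = np.array([4.0 / 7.0, 3.0 / 7.0])  # stationary distribution of P
drift = pi @ mu                        # stationary mean increment = 5/7

h = 1e-5                               # central-difference derivative of lam at 0
dlam = (lam(h) - lam(-h)) / (2.0 * h)
print(lam(0.0), dlam, drift)           # lam(0) = 1 and lam'(0) = drift
```

The same perturbation computation, carried out for the operator $\mathbf{P}_z^\alpha$ on $\mathcal{N}$ rather than a matrix, is what yields the derivative formulas for $\lambda^\alpha$ at the origin in Proposition 1(i).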

(ii) Define $f_A^\alpha(z) = \sum_{n=0}^\infty E_\nu(e^{z S_n} \mathbf{1}_{\{X_n \in A\}})$ and let $h_A(x) = \mathbf{1}_{\{x \in A\}}$. Then, for $0 < |z| \le \delta^*$,
$$f_A^\alpha(z) = (1 - \lambda^\alpha(z))^{-1} \nu_*^\alpha Q_z^\alpha h_A + \eta^\alpha(z),$$
where $\eta^\alpha(z)$ has continuous partial derivatives of order $[r]$ and $\eta^\alpha(z) = O(|z|)$ as $z \to 0$.

Acknowledgments. The author is grateful to the referee for constructive comments, suggestions and corrections of some mistakes in the proofs. Thanks also to Professor Søren Asmussen, the Associate Editor, for careful proofreading.

REFERENCES

Alsmeyer, G. (1994). On the Markov renewal theorem. Stochastic Process. Appl. 50 37–56. MR1262329
Alsmeyer, G. (2000). The ladder variables of a Markov random walk. Theory Probab. Math. Statist. 20 151–168. MR1785244
Arndt, K. (1980). Asymptotic properties of the distribution of the supremum of a random walk on a Markov chain. Theory Probab. Appl. 25 309–323. MR572564
Asmussen, S. (1989a). Aspects of matrix Wiener–Hopf factorization in applied probability. Math. Sci. 14 101–116. MR1042507
Asmussen, S. (1989b). Risk theory in a Markovian environment. Scand. Actuar. J. 69–100. MR1035668
Asmussen, S. (2000). Ruin Probabilities. World Scientific, Singapore. MR1794582
Athreya, K. B., McDonald, D. and Ney, P. (1978). Limit theorems for semi-Markov processes and renewal theory for Markov chains. Ann. Probab. 6 788–797. MR503952
Billingsley, P. (1968). Convergence of Probability Measures. Wiley, New York. MR233396
Bougerol, P. (1988). Théorèmes limite pour les systèmes linéaires à coefficients markoviens. Probab. Theory Related Fields 78 193–221. MR945109
Bougerol, P. and Lacroix, J. (1985). Products of Random Matrices with Applications to Schrödinger Operators. Birkhäuser, Basel. MR886674
Bougerol, P. and Picard, N. (1992). Stationarity of GARCH processes and of some nonnegative time series. J. Econometrics 52 115–128. MR1165646
Carlsson, H. (1983). Remainder term estimates of the renewal function. Ann. Probab. 11 143–157. MR682805
Carlsson, H. and Wainger, S. (1982). An asymptotic series expansion of the multidimensional renewal measure. Compositio Math. 47 355–364. MR681614
Fuh, C. D. (1997). Corrected diffusion approximations for ruin probabilities in a Markov random walk. Adv. in Appl. Probab. 29 695–712. MR1462484
Fuh, C. D. and Lai, T. L. (1998). Wald's equations, first passage times and moments of ladder variables in Markov random walks. J. Appl. Probab. 35 566–580. MR1659504
Fuh, C. D. and Lai, T. L. (2001). Asymptotic expansions in multidimensional Markov renewal theory and first passage times for Markov random walks. Adv. in Appl. Probab. 33 652–673. MR1860094
Fuh, C. D. and Zhang, C. H. (2000). Poisson equation, moment inequalities and quick convergence for Markov random walks. Stochastic Process. Appl. 87 53–67. MR1751164
Ganelius, T. (1971). Tauberian Remainder Theorems. Lecture Notes in Math. 232. Springer, Berlin. MR499898
Gelfand, I. M. and Shilov, G. E. (1964). Generalized Functions 1. Academic Press, New York. MR166596
Heyde, C. C. (1964). Two probability theorems and their applications to some first passage problems. J. Austral. Math. Soc. 4 214–222. MR183004
Hipp, C. (1985). Asymptotic expansions in the central limit theorem for compound and Markov processes. Z. Wahrsch. Verw. Gebiete 69 361–385. MR787604
Jensen, J. L. (1987). A note on asymptotic expansions for Markov chains using operator theory. Adv. in Appl. Math. 8 377–392. MR916051
Jensen, J. L. (1989). Asymptotic expansions for strongly mixing Harris recurrent Markov chains. Scand. J. Statist. 16 47–63. MR1003968
Kartashov, N. V. (1980). A uniform asymptotic renewal theorem. Theory Probab. Appl. 25 589–592. MR582589
Kartashov, N. V. (1996). Strong Stable Markov Chains. VSP, Utrecht and TBiMC Scientific Publishers, Kiev. MR1451375
Kesten, H. (1973). Random difference equations and renewal theory for products of random matrices. Acta Math. 131 311–349. MR440724
Kesten, H. (1974). Renewal theory for functionals of a Markov chain with general state space. Ann. Probab. 2 355–386.
Kovalenko, I. N., Kuznetsov, N. Yu. and Shurenkov, V. M. (1996). Models of Random Processes. A Handbook for Mathematicians and Engineers. CRC Press, Boca Raton, FL. MR365740
Lai, T. L. (1976). Uniform Tauberian theorems and their applications to renewal theory and first passage problems. Ann. Probab. 4 628–643. MR410966
Lai, T. L. and Siegmund, D. (1983). Fixed accuracy estimation of an autoregressive parameter. Ann. Statist. 11 478–485. MR696060
Lorden, G. (1970). On excess over the boundary. Ann. Math. Statist. 41 520–527. MR254981
Malinovskii, V. K. (1986). Asymptotic expansions in the central limit theorem for recurrent Markov renewal processes. Theory Probab. Appl. 31 523–526. MR866885
Malinovskii, V. K. (1987). Limit theorems for Harris–Markov chains, I. Theory Probab. Appl. 31 269–285.
Meyn, S. P. and Tweedie, R. L. (1993). Markov Chains and Stochastic Stability. Springer, New York. MR1287609
Nagaev, S. V. (1957). Some limit theorems for stationary Markov chains. Theory Probab. Appl. 2 378–406. MR94846
Nagaev, S. V. (1961). More exact statements of limit theorems for homogeneous Markov chains. Theory Probab. Appl. 6 62–81. MR131291
Ney, P. and Nummelin, E. (1987). Markov additive processes I. Eigenvalue properties and limit theorems. Ann. Probab. 15 561–592. MR885131
Nummelin, E. (1984). General Irreducible Markov Chains and Non-negative Operators. Cambridge Univ. Press.
Pergamenshchikov, S. M. and Shiryaev, A. N. (1993). Sequential estimation of the parameters of a stochastic difference equation with random coefficients. Theory Probab. Appl. 37 449–470. MR776608
Riesz, F. and Sz.-Nagy, B. (1955). Functional Analysis. Ungar, New York. MR71727
Schwartz, L. (1966). Théorie des Distributions, 2nd ed. Hermann, Paris. MR209834
Shurenkov, V. M. (1984). On the theory of Markov renewal. Theory Probab. Appl. 29 247–265. MR749913
Shurenkov, V. M. (1989). Ergodic Markov Processes. Nauka, Moscow. MR1087782
Siegmund, D. (1979). Corrected diffusion approximations in certain random walk problems. Adv. in Appl. Probab. 11 701–719. MR544191
Siegmund, D. (1985). Sequential Analysis. Springer, New York. MR799155
Siegmund, D. and Yuh, Y. S. (1982). Brownian approximations to first passage probabilities. Z. Wahrsch. Verw. Gebiete 59 239–248. MR650615
Silvestrov, D. S. (1978). The renewal theorem in a series scheme I. Theory Probab. Math. Statist. 18 144–161. MR488350
Silvestrov, D. S. (1979). The renewal theorem in a series scheme II. Theory Probab. Math. Statist. 20 97–116. MR529265
Silvestrov, D. S. (1994). Coupling for Markov renewal processes and the rate of convergence in ergodic theorems for processes with semi-Markov switchings. Acta Appl. Math. 34 109–124. MR1273849
Silvestrov, D. S. (1995). Exponential asymptotics for perturbed renewal equations. Theory Probab. Math. Statist. 52 153–162.
Silvestrov, D. S. and Brusilovskii, I. L. (1990). An invariance principle for sums of stationary connected random variables satisfying a uniform strong mixing condition with weight. Theory Probab. Math. Statist. 40 117–127. MR1445549
Statulevicius, V. (1969). Limit theorems for sums of random variables that are connected in a Markov chain. I, II. Litovsk. Mat. Sb. 9 345–362; 10 635–672. MR266281
Stone, C. (1965a). On characteristic functions and renewal theory. Trans. Amer. Math. Soc. 120 327–342. MR189151
Stone, C. (1965b). On moment generating functions and renewal theory. Ann. Math. Statist. 36 298–301. MR179857
Strichartz, R. (1994). A Guide to Distribution Theory and Fourier Transforms. CRC Press, Boca Raton, FL. MR1276724
Zhang, C. H. (1989). A renewal theory with varying drift. Ann. Probab. 17 723–736. MR985386

Institute of Statistical Science
Academia Sinica
Taipei 11529, Taiwan
Republic of China
e-mail: [email protected]