Statistics and Probability Letters 82 (2012) 455–463

Asymptotics for dependent Bernoulli random variables

Lan Wu a, Yongcheng Qi b, Jingping Yang a,∗

a LMAM, Department of Financial Mathematics, Center for Statistical Science, Peking University, Beijing 100871, China
b Department of Mathematics and Statistics, University of Minnesota Duluth, 1117 University Drive, Duluth, MN 55812, USA
∗ Corresponding author. E-mail address: [email protected] (J. Yang).

Abstract: This paper considers a sequence of Bernoulli random variables which are dependent in a way that the success probability of a trial conditional on the previous trials depends on the total number of successes achieved prior to the trial. The paper investigates almost sure behaviors for the sequence and proves the strong law of large numbers under weak conditions. For linear probability functions, the paper also obtains the strong law of large numbers, the central limit theorems and the law of the iterated logarithm, extending the results by James et al. (2008).

MSC: 60F15

Keywords: Dependent Bernoulli random variables; Strong law of large numbers; Law of the iterated logarithm

1. Introduction

Consider a sequence of Bernoulli random variables {Xn, n ≥ 1}, which are dependent in a way that the success probability of a trial conditional on the previous trials depends on the total number of successes achieved to that point. In particular, we assume that

P(X_{n+1} = 1 | F_n) = θ_n + g_n(S_n/n), (1)

where S_n = ∑_{j=1}^n X_j for n ≥ 1, F_n = σ(X_1, ..., X_n) is the σ-field generated by X_1, ..., X_n, θ_n ≥ 0 is a sequence of constants, and g_n(x) is a non-negative measurable function defined over [0, 1] such that 0 ≤ θ_n + g_n(x) ≤ 1 for x ∈ [0, 1].

When θ_n = p(1 − γ) and g_n(x) = γx, where p ∈ (0, 1) and γ ∈ [0, 1) are constants, model (1) reduces to the generalized binomial model which was first introduced by Drezner and Farnum (1993). If P(X_1 = 1) = p, then X_1, X_2, ... are identically distributed Bernoulli random variables, and in this case, Drezner and Farnum (1993) obtained the distribution of S_n. Heyde (2004) investigated the limiting distributions of S_n and proved that the central limit theorem for S_n holds when γ ≤ 1/2. Recently James et al. (2008) explored model (1) when θ_n = p(1 − d_n) and g_n(x) = d_n x, where p ∈ (0, 1) and d_n ∈ [0, 1) is a sequence of constants. They obtained the conditions for the strong law of large numbers, the central limit theorem and the law of the iterated logarithm for the partial sums of the dependent Bernoulli random variables.

In this paper, we are mainly interested in investigating the strong law of large numbers for the partial sum S_n under the general setup (1). We also consider the central limit theorem and the law of the iterated logarithm when the probability function on the right-hand side of (1) is linear. The main results of the paper are given in Section 2, and all the proofs are put in Section 3.
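To make the conditional structure in (1) concrete, here is a minimal simulation sketch in Python (ours, not part of the paper). The Drezner–Farnum case θ_n = p(1 − γ), g_n(x) = γx with illustrative values p = 0.3, γ = 0.4 is used as a check; there S_n/n should settle near p.

```python
import numpy as np

def simulate_model(n, theta_fn, g_fn, p1, rng):
    """Simulate X_1, ..., X_n from model (1) and return S_1, ..., S_n.

    theta_fn(k) returns theta_k, g_fn(k, x) returns g_k(x), and
    p1 is P(X_1 = 1)."""
    sums = np.zeros(n, dtype=int)
    S = int(rng.random() < p1)                # X_1
    sums[0] = S
    for k in range(1, n):
        prob = theta_fn(k) + g_fn(k, S / k)   # P(X_{k+1} = 1 | F_k)
        S += rng.random() < prob
        sums[k] = S
    return sums

# Drezner-Farnum case: theta_n = p(1 - gamma), g_n(x) = gamma * x
p, gamma = 0.3, 0.4
rng = np.random.default_rng(0)
S = simulate_model(100_000, lambda k: p * (1 - gamma),
                   lambda k, x: gamma * x, p, rng)
print(S[-1] / len(S))   # close to p: here x - g(x) = theta has root x = p
```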



2. Main results

We first present a strong limit theorem from which we conclude the law of large numbers for Sn for general probability functions. Then we consider some linear probability functions and obtain a refined strong law of large numbers, the central limit theorem and the law of the iterated logarithm. We introduce a few examples where conditions for the strong law of large numbers in the general case can be verified. An example is also given to show that the upper limit and the lower limit are attainable in general.

Theorem 1. Assume that lim_{n→∞} (1/n) ∑_{j=1}^n θ_j = θ ∈ [0, 1] and there exists a non-decreasing continuous function g defined on [0, 1] such that

(C1) g_n(t) converges to g(t) uniformly on [0, 1], i.e. sup_{x∈[0,1]} |g_n(x) − g(x)| → 0 as n → ∞.

Then

ν_1* ≤ lim inf_{n→∞} S_n/n ≤ lim sup_{n→∞} S_n/n ≤ ν_2* a.s., (2)

where

ν_1* = inf{x ∈ [0, 1]: x − g(x) ≥ θ}, ν_2* = sup{x ∈ [0, 1]: x − g(x) ≤ θ}. (3)
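For a given limit function g and θ, the bounds in (3) are easy to approximate numerically. The following sketch (ours; the grid method and the test function g(x) = 0.3√x with θ = 0.2 are our illustrative choices, not from the paper) scans a fine grid of [0, 1]:

```python
import numpy as np

def nu_bounds(g, theta, m=10**6):
    """Grid approximation of nu_1* = inf{x: x - g(x) >= theta}
    and nu_2* = sup{x: x - g(x) <= theta} over [0, 1]."""
    x = np.linspace(0.0, 1.0, m)
    f = x - g(x)
    nu1 = x[f >= theta][0] if np.any(f >= theta) else 1.0
    nu2 = x[f <= theta][-1] if np.any(f <= theta) else 0.0
    return nu1, nu2

# test function (our choice): g(x) = 0.3*sqrt(x), theta = 0.2
print(nu_bounds(lambda x: 0.3 * np.sqrt(x), 0.2))  # both approx 0.3865
```

Here the two bounds coincide, which is the situation of Corollary 1 below.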

Remark 1. Consider a sequence of bounded random variables {X_n, n ≥ 1} distributed over an interval [b_1, b_2] such that

E(X_{n+1} | F_n) = θ_n + g_n(S_n/n),

where {θ_n} is a sequence of constants and {g_n, n ≥ 1} is a sequence of functions defined over [b_1, b_2]. Replacing all intervals [0, 1] by [b_1, b_2] in Theorem 1, (2) can be proved similarly under the above new model. Since the above model is not the focus of this paper, details for this development are not provided here.

Remark 2. In some cases, the lower and upper limits in (2) are attainable. Let θ_n = 0 and g_n(x) = g(x) for all n ≥ 1, where g(x) is a continuous and non-decreasing function defined on [0, 1] with g(0) = 0 and g(1) = 1. It is easy to see that ν_1* = 0 and ν_2* = 1. Let P(X_1 = 1) = 1 − P(X_1 = 0) = p ∈ (0, 1). It is trivial to see that

S_n = n if X_1 = 1, and S_n = 0 if X_1 = 0.

Hence, both the lower and upper limits in (2) are attainable with a positive probability. Evidently, E(Sn/n) = p. Next we will give a less trivial example in which both the lower and upper limits in (2) are achieved with a positive probability.

Example 1. Let 0 < v_1 < v_2 < 1 be two arbitrary constants and g any non-decreasing continuous function defined over [0, 1] such that

g(x) = v_1 if x ≤ v_1, and g(x) = v_2 if x ≥ v_2.

Let {δn} be a non-increasing sequence of constants satisfying that

δ_1 < min(v_1, 1 − v_2), δ_n → 0 and nδ_n^2/log log n → ∞ as n → ∞.

Define

g_n(x) = v_1 − δ_n if 0 ≤ x ≤ v_1; g_n(x) = g(x) if v_1 < x < v_2; g_n(x) = v_2 + δ_n if v_2 ≤ x ≤ 1.

Then sup_{x∈[0,1]} |g_n(x) − g(x)| → 0. Now assume (1) holds with θ_n = 0 for all n and 0 < P(X_1 = 1) < 1. Hence ν_1* = v_1 and ν_2* = v_2 in (3). We prove in Section 3 that

P( lim_{n→∞} S_n/n = v_i ) > 0, i = 1, 2. (4)
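A Monte Carlo illustration of Example 1 (ours; the concrete choices v_1 = 0.3, v_2 = 0.7, g linear between v_1 and v_2, and δ_n = min(1/4, n^{−1/4}) satisfy the constraints above): across independent runs, S_n/n tends to settle toward v_1 or v_2, and by (4) each endpoint is the exact limit with positive probability.

```python
import numpy as np

v1, v2 = 0.3, 0.7                      # our choice of 0 < v1 < v2 < 1

def g(x):
    # non-decreasing: = v1 for x <= v1, = v2 for x >= v2, linear between
    return min(max(x, v1), v2)

def delta(n):
    # non-increasing, delta_1 < min(v1, 1 - v2), n*delta_n^2/loglog n -> inf
    return min(0.25, n ** -0.25)

def g_n(n, x):
    if x <= v1:
        return v1 - delta(n)
    if x >= v2:
        return v2 + delta(n)
    return g(x)

rng = np.random.default_rng(1)
limits = []
for rep in range(10):                  # 10 independent runs, theta_n = 0
    S = int(rng.random() < 0.5)        # P(X_1 = 1) = 1/2
    n = 50000
    for k in range(1, n):
        S += rng.random() < g_n(k, S / k)
    limits.append(S / n)
print(limits)   # runs tend to settle toward v1 or v2 (convergence is slow)
```

The slow convergence is expected, since δ_n is required to decay slowly.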

Under some additional conditions, we conclude the strong law of large numbers for S_n as a consequence of Theorem 1.

Corollary 1. In addition to the conditions in Theorem 1, if

(C2) there exists a unique solution, say ν*, to the equation x − g(x) = θ, x ∈ [0, 1],

then

lim_{n→∞} S_n/n = ν* a.s. (5)

Corollary 2. Assume g(x) is a non-decreasing continuous function on [0, 1] satisfying the assumptions in Theorem 1. Then condition (C2) is fulfilled (and hence (5) holds) if any one of the following conditions holds:

(C3) g(x) is strictly convex such that g(0) + θ > 0 and g(1) + θ < 1;
(C4) g(x) is strictly concave such that g(0) + θ > 0 and g(1) + θ < 1;
(C5) x − g(x) is strictly increasing on [0, 1].

We present some examples in which the conditions in Corollary 2 can be easily verified.

Example 2. Assume constants c > 0, β > 0 and θ ∈ (0, 1) are such that θ + β < 1. Define g(x) = βx^c, x ∈ [0, 1]. Then condition (C3) is satisfied if c > 1, condition (C4) is satisfied if c < 1, and condition (C5) is satisfied if c = 1.
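By Corollary 1, under (C2) the almost sure limit ν* is the unique root of x − g(x) = θ on [0, 1], so it can be computed by bisection. A minimal sketch (ours), applied to Example 2 with the illustrative choices β = 0.5, c = 2, θ = 0.25, so that condition (C3) holds:

```python
from typing import Callable

def solve_nu(g: Callable[[float], float], theta: float,
             tol: float = 1e-12) -> float:
    """Bisection for the unique root of x - g(x) = theta on [0, 1],
    assuming (C2) holds (as in Corollary 2): x - g(x) - theta is
    negative to the left of the root and positive to the right."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid - g(mid) < theta:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example 2 with beta = 0.5, c = 2, theta = 0.25 (condition (C3))
nu = solve_nu(lambda x: 0.5 * x * x, 0.25)
print(nu, nu - 0.5 * nu * nu)   # nu approx 0.2929, and f(nu) = 0.25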

Example 3. Assume θ ∈ (0, 1), and g(x) is a non-decreasing continuous function on [0, 1]. If g′(x) < 1 for x ∈ (0, 1), then condition (C5) is satisfied.

Consider some linear probability functions g_n(x) = d_n x in (1); i.e., we assume that X_n, n ≥ 1, satisfy the model

P(X_{n+1} = 1 | F_n) = θ_n + d_n n^{−1} S_n, (6)

where 0 ≤ θ_n < 1, 0 ≤ d_n < 1 and 0 ≤ θ_n + d_n < 1 for every n ≥ 1. This is an extension of the model considered in James et al. (2008), where θ_n = p(1 − d_n) for some p ∈ (0, 1). From (6), the success probabilities p_n := P(X_n = 1), n ≥ 1, can be obtained recursively by the formula

p_n = θ_{n−1} + (d_{n−1}/(n − 1)) ∑_{j=1}^{n−1} p_j, n ≥ 2. (7)
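Recursion (7) is easy to evaluate numerically. The following sketch (ours) adopts the convention that θ_0 stands for P(X_1 = 1); in the James et al. (2008) case θ_n = p(1 − d_n) it reproduces p_n = p for every n, as expected.

```python
import numpy as np

def success_probs(theta, d, n):
    """Compute p_1, ..., p_n from recursion (7):
    p_k = theta_{k-1} + d_{k-1}/(k-1) * sum_{j<k} p_j, with
    p_1 = theta[0] := P(X_1 = 1); d[k] stands for d_k (d[0] unused)."""
    p = np.zeros(n)
    p[0] = theta[0]
    csum = p[0]
    for k in range(1, n):
        p[k] = theta[k] + d[k] / k * csum
        csum += p[k]
    return p

# James et al. (2008) case: theta_n = p(1 - d_n) keeps p_n = p for all n
p0, n = 0.3, 1000
d = np.full(n, 0.5)
theta = np.concatenate(([p0], p0 * (1 - d[1:])))
print(success_probs(theta, d, n)[:5])   # all equal to 0.3
```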

Under model (6), we can obtain a necessary and sufficient condition for the strong law of large numbers for Sn.

Theorem 2. Assume model (6) holds. Then

lim_{n→∞} (S_n − E(S_n))/n = 0 a.s. (8)

if and only if ∑_{j=1}^∞ (1 − d_j)/(j + 1) = ∞.

Remark 3. For some extreme cases under model (6), Eq. (8) does not follow from Theorem 1. For instance, assume that d_n → 1 and θ_n → 0 as n → ∞ under model (6). When d_n → 1, the limit of the corresponding function g_n(x) = d_n x in model (1) is g(x) = x. When θ_n → 0, we have θ = 0 in Theorem 1. However, the equation x − g(x) = 0 holds for all x ∈ [0, 1], and this implies that ν_1* = 0 and ν_2* = 1 in (3). Therefore, (2) does not give the strong law of large numbers in (8).

Write

a_n = ∏_{j=1}^{n−1} (1 + d_j/j), A_n^2 = ∑_{j=1}^n 1/a_j^2 and B_n^2 = ∑_{j=1}^n p_j(1 − p_j)/a_j^2. (9)
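The norming constants in (9) can be computed directly from d_1, d_2, ... and p_1, p_2, .... A small helper (our illustration; the constant choices in the usage lines are arbitrary):

```python
import numpy as np

def norming_constants(d, p):
    """a_n, A_n^2, B_n^2 from (9); index k of each returned array holds
    the value for n = k + 1, and d[k] stands for d_k (d[0] unused)."""
    n = len(p)
    a = np.ones(n)                       # a_1 = 1 (empty product)
    for k in range(1, n):
        a[k] = a[k - 1] * (1.0 + d[k] / k)
    A2 = np.cumsum(1.0 / a**2)
    B2 = np.cumsum(p * (1.0 - p) / a**2)
    return a, A2, B2

d = np.full(1000, 0.5)
p = np.full(1000, 0.3)                   # constant p_j, as in James et al.
a, A2, B2 = norming_constants(d, p)
print(a[-1], A2[-1], B2[-1])
```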

Theorem 3. Assume (6) holds with lim_{n→∞} B_n = ∞ and that A_n/B_n is bounded. Then as n → ∞,

(S_n − E(S_n))/(a_n B_n) → N(0, 1) in distribution.

Theorem 4. Under the conditions of Theorem 3 we have

lim sup_{n→∞} ± (S_n − E(S_n)) / (a_n B_n √(log log B_n)) = √2 a.s.

Remark 4. Note that B_n ≤ A_n for all n ≥ 1. In Theorems 3 and 4, if the variance σ_n^2 of X_n satisfies σ_n^2 = p_n(1 − p_n) ≥ c > 0 for some c > 0, then A_n/B_n is bounded. In the model considered by James et al. (2008), σ_n^2 = p(1 − p) for some p ∈ (0, 1).
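As a Monte Carlo sanity check on Theorem 3 (ours, not from the paper), one can simulate the James et al. (2008) parameterization θ_n = p(1 − d_n) with constant d_n = d, for which p_n = p, and standardize S_n by a_n B_n; the empirical mean and variance of the replicates should be roughly 0 and 1. The parameter values below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps, p, d = 2000, 500, 0.3, 0.2

# a_1, ..., a_n from (9); with constant d_n = d, a_j grows like j**d
a = np.cumprod(np.concatenate(([1.0], 1.0 + d / np.arange(1, n))))
Bn = np.sqrt(np.sum(p * (1 - p) / a**2))   # B_n from (9) with p_j = p
ESn = n * p                                # E(S_n) = n p here

Z = np.empty(reps)
for r in range(reps):
    S = int(rng.random() < p)
    for k in range(1, n):
        # model (6) with theta_k = p(1 - d): P(X_{k+1} = 1 | F_k)
        S += rng.random() < p * (1 - d) + d * S / k
    Z[r] = (S - ESn) / (a[-1] * Bn)

print(Z.mean(), Z.var())   # should be roughly 0 and 1
```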

3. Proofs

Before proving Theorem 1, we need to prove a lemma.

Lemma 1. Assume condition (C1) in Theorem 1 is satisfied for some non-decreasing continuous function g(x). If {bn, n ≥ 1} is a sequence of constants with bn ∈ [0, 1], then

lim sup_{n→∞} (1/n) ∑_{j=1}^n g_j(b_j) ≤ g( lim sup_{n→∞} b_n ) (10)

and

lim inf_{n→∞} (1/n) ∑_{j=1}^n g_j(b_j) ≥ g( lim inf_{n→∞} b_n ). (11)

Proof. From condition (C1) it is easy to see that

lim_{n→∞} (1/n) ∑_{j=1}^n |g_j(b_j) − g(b_j)| = 0,

which implies that

1 n 1 n lim sup gj(bj) = lim sup g(bj). n→∞ n n→∞ n j=1 j=1

Hence, to prove (10) it suffices to show that

lim sup_{n→∞} (1/n) ∑_{j=1}^n g(b_j) ≤ g( lim sup_{n→∞} b_n ). (12)

Let b = lim supn→∞ bn. Obviously the above inequality holds when b = 1. Next we assume b < 1. Then for every small ε > 0 such that b + ε < 1 there exists an n0 such that bj ≤ b + ε for all j ≥ n0. Hence,

n 1 n 1 0 lim sup g(bj) ≤ lim sup g(bj) + g(b + ε) = g(b + ε), n→∞ n n→∞ n j=1 j=1 which yields (12) by letting ε ↓ 0. This proves (10). Similarly, we can show (11). 

Proof of Theorem 1. Assume that all random variables are defined on a probability space (Ω, F, P). Let F_0 = {Ω, ∅}, where ∅ is the empty set. Define g_0(x) = 0 for x ∈ [0, 1] and set θ_0 = E(X_1|F_0) = P(X_1 = 1|F_0) = P(X_1 = 1). Let Z_n = X_n − E(X_n|F_{n−1}) for n ≥ 1. Then {Z_n, F_n, n ≥ 1} is a sequence of bounded martingale differences. Since ∑_{n=1}^∞ (1/n^2) E(Z_n^2|F_{n−1}) < ∞ a.s., we have from Theorem 2.17 in Hall and Heyde (1980) that ∑_{j=1}^n Z_j/j converges almost surely. In view of Kronecker's lemma we get

t_n := ( S_n − ∑_{j=1}^n E(X_j|F_{j−1}) ) / n → 0 a.s.

Note that

E(X_j|F_{j−1}) = θ_{j−1} + g_{j−1}(S_{j−1}/(j − 1))

for all j ≥ 1. Set ν_1 = lim inf_{n→∞} S_n/n and ν_2 = lim sup_{n→∞} S_n/n. Since

S_n/n = t_n + (1/n) ∑_{j=1}^n E(X_j|F_{j−1}) = t_n + (1/n) ∑_{j=0}^{n−1} θ_j + (1/n) ∑_{j=2}^n g_{j−1}(S_{j−1}/(j − 1)),

we have from Lemma 1 that with probability one

 −  1 n 1 1 n ν2 = lim sup θj + gj−1(Sj−1/(j − 1)) ≤ θ + g(ν2) n→∞ n n j=0 j=2 L. Wu et al. / Statistics and Probability Letters 82 (2012) 455–463 459 and

ν_1 ≥ θ + g(ν_1).

Therefore, we have ν_1 ≥ ν_1* and ν_2 ≤ ν_2* with probability one. This proves the theorem. □

Proof of Corollary 1. Assume condition (C2) is fulfilled. We will show that x − g(x) < θ if and only if x < ν*. Set f(x) = x − g(x) − θ. Since 0 ≤ θ_n + g_n(x) ≤ 1 for all n ≥ 1, we have 0 ≤ θ + g(x) ≤ 1 for x ∈ [0, 1], which implies that f(0) ≤ 0 and f(1) ≥ 0. If the only root of the equation f(x) = 0 is ν* ∈ (0, 1), then we have f(0) < 0 and f(1) > 0, and thus we get that f(x) < 0 if x < ν*, and f(x) > 0 if x > ν*. If ν* = 0, then f(x) > 0 for all x > 0. Similarly, if ν* = 1 we have that f(x) < 0 for all x < 1. From condition (C2), we conclude that {x ∈ [0, 1]: x − g(x) ≤ θ} = [0, ν*] and {x ∈ [0, 1]: x − g(x) ≥ θ} = [ν*, 1]. Thus ν_1* = ν_2* = ν* and (5) follows from Theorem 1. □

Proof of Corollary 2. Set f(x) = x − g(x) − θ. Then it is easy to verify that the equation f(x) = 0 has a unique solution over [0, 1] in each of the three cases (C3)–(C5). The details are omitted here. □

The proofs for Theorems 2–4 use very extensively the classic limit theorems for martingales. For general classic results on martingales, we refer to Brown (1971) and Hall and Heyde (1980). For sequences of bounded martingale differences, the central limit theorems and the law of the iterated logarithm have been derived by James et al. (2008) from the aforementioned literature, and each of these results involves verifying only one condition on the conditional variances of the martingale differences. We will give a sketch for each of the proofs and cite only a few related lemmas from James et al. (2008). Define

T_n = (S_n − E(S_n))/a_n (13)

Y1 = T1, Yn = Tn − Tn−1, n ≥ 2. By using (6), we have

 −1  Sn−1 − E(Sn−1) − E(Xn) + θn−1 + dn−1(n − 1) Sn−1 E[Tn|Fn−1] = an S − E(S ) − θ − d (n − 1)−1E(S ) + θ + d (n − 1)−1S  = n−1 n−1 n−1 n−1 n−1 n−1 n−1 n−1 an −1 (S − − E(S − )) (1 + d − (n − 1) ) = n 1 n 1 n 1 an

Sn−1 − E(Sn−1) = = Tn−1. an−1

Thus, {Tn, Fn, n ≥ 1} is a martingale and {Yn, Fn, n ≥ 1} are the martingale differences. For j ≥ 2,

Y_j = (S_j − E(S_j))/a_j − (S_{j−1} − E(S_{j−1}))/a_{j−1}
= (X_j − E(X_j))/a_j + (S_{j−1} − E(S_{j−1})) (1/a_j − 1/a_{j−1})
= (X_j − E(X_j))/a_j − (S_{j−1} − E(S_{j−1})) d_{j−1} / ((j − 1) a_j). (14)

Therefore,

|Y_j| ≤ 3/a_j, j ≥ 1, (15)

and {Y_j, j ≥ 1} are bounded.

Lemma 2. (i) a_n/n is non-increasing in n; if lim_{n→∞} A_n = ∞, then lim_{n→∞} a_n/n = 0.
(ii) lim_{n→∞} a_n/n = 0 if and only if ∑_{j=1}^∞ (1 − d_j)/(j + 1) = ∞.

Proof. Since 0 ≤ d_j < 1 for all j ≥ 1,

a_n/n = ∏_{j=1}^{n−1} (1 + j^{−1} d_j)/(1 + j^{−1})

is non-increasing in n. Therefore, the limit of a_n/n exists. Set v = lim_{n→∞} a_n/n. Then v ∈ [0, 1]. If v > 0, then a_n ≥ vn for all n, and it is readily seen that lim_{n→∞} A_n < ∞. Hence the condition lim_{n→∞} A_n = ∞ implies v = 0. This completes the proof of part (i). For the proof of part (ii), since

a_n/n = exp( −∑_{j=1}^{n−1} (1 − d_j)/(1 + j) + O(1) ),

it follows that lim_{n→∞} a_n/n = 0 if and only if ∑_{j=1}^{n−1} (1 − d_j)/(j + 1) → ∞, proving the lemma. □
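A quick numeric check (ours) of the expansion used in part (ii): log(a_n/n) and −∑_{j=1}^{n−1} (1 − d_j)/(j + 1) stay within a bounded distance of each other, both in a divergent case (d_j ≡ 0.5) and in a convergent one (d_j = 1 − 1/√j).

```python
import numpy as np

n = 10**5
j = np.arange(1, n)                      # j = 1, ..., n-1
for d in (np.full(n - 1, 0.5),           # sum (1 - d_j)/(j+1) diverges
          1.0 - 1.0 / np.sqrt(j)):       # sum (1 - d_j)/(j+1) converges
    log_an_over_n = np.sum(np.log1p(d / j)) - np.log(n)
    approx = -np.sum((1.0 - d) / (j + 1))
    print(log_an_over_n, approx)         # each pair differs by O(1)
```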

Proof of Theorem 2. First assume that ∑_{j=1}^∞ (1 − d_j)/(j + 1) = ∞. From Lemma 2 we see that a_n/n → 0. Let Z_j = (a_j/j) Y_j. In order to show that ∑_{j=1}^n Z_j converges almost surely, we apply Lemma 3.3 in James et al. (2008) to the martingale differences {Z_n, F_n, n ≥ 1}. To this end, it suffices to verify that ∑_{j=1}^∞ E(Z_j^2|F_{j−1}) < ∞ almost surely. In fact, we have that ∑_{j=1}^∞ E(Z_j^2|F_{j−1}) ≤ ∑_{j=1}^∞ 9/j^2 since |Z_j| ≤ 3/j for all j ≥ 1 from (15). Since n/a_n is non-decreasing in n, using Kronecker's lemma we see that (a_n/n) T_n = (a_n/n) ∑_{j=1}^n (j/a_j) Z_j converges to zero almost surely, where T_n is defined in (13). Thus

lim_{n→∞} (S_n − E(S_n))/n = lim_{n→∞} (a_n/n) T_n = 0 a.s.

Now assume that ∑_{j=1}^∞ (1 − d_j)/(j + 1) < ∞. Then 0 < lim_{n→∞} a_n/n < ∞, which implies that lim_{n→∞} A_n is finite, with A_n being defined in (9). By (15) we have ∑_{j=1}^∞ E[Y_j^2|F_{j−1}] ≤ ∑_{j=1}^∞ 9/a_j^2 = 9 lim_{n→∞} A_n^2 < ∞ almost surely. By applying Lemma 3.3 in James et al. (2008) once again, we get that T_n = (S_n − E(S_n))/a_n = ∑_{j=1}^n Y_j converges almost surely to some random variable, say T. By Fatou's lemma, for each n ≥ 1

E(T_n − T)^2 = E( lim_{m→∞} (T_n − T_m)^2 ) ≤ lim inf_{m→∞} E(T_n − T_m)^2 = lim inf_{m→∞} ∑_{j=n+1}^m E(Y_j^2) ≤ ∑_{j=n+1}^∞ 9/a_j^2.

Thus, we obtain that lim_{n→∞} E(T_n − T)^2 = 0, which implies Var(T) = lim_{n→∞} Var(T_n) = ∑_{j=1}^∞ E(Y_j^2) > 0. Hence, T is a non-degenerate random variable. Since 0 < lim_{n→∞} a_n/n < ∞, lim_{n→∞} (S_n − E(S_n))/n = (lim_{n→∞} a_n/n) T is non-degenerate, so (8) fails. This completes the proof. □

Proof of Theorem 3. Note that lim_{n→∞} B_n = ∞ implies lim_{n→∞} A_n = ∞, where A_n and B_n are defined in (9). From Lemma 2 and Theorem 2, the strong law of large numbers (8) holds. Then by using (14) one can easily obtain that

E[Y_j^2|F_{j−1}] = p_j(1 − p_j)/a_j^2 + o(1/a_j^2) a.s.,

which implies

(1/B_n^2) ∑_{j=1}^n E[Y_j^2|F_{j−1}] → 1 a.s. (16)

Since {Y_n, F_n, n ≥ 1} is a sequence of bounded martingale differences, it follows from Lemma 3.4 of James et al. (2008) that

∑_{j=1}^n Y_j / B_n = (S_n − E(S_n))/(a_n B_n) → N(0, 1) in distribution,

proving the theorem. □

Proof of Theorem 4. It is easily verified from the given conditions that B_{n+1}^2/B_n^2 → 1 as n → ∞, which together with (16) yields the theorem by Lemma 3.5 of James et al. (2008). □

Proof of Eq. (4). First, define a sequence of independent Bernoulli random variables {I_n} with P(I_{n+1} = 1) = v_1 − δ_n for n ≥ 1 and P(I_1 = 1) = P(X_1 = 1). Set R_n = ∑_{i=1}^n I_i for n ≥ 1. Note that Var(R_n) = Var(I_1) + ∑_{i=2}^n (v_1 − δ_{i−1})(1 − v_1 + δ_{i−1}) = n v_1 (1 − v_1)(1 + o(1)) as n → ∞. We first prove that

P( ∩_{n=1}^k {S_n ≤ x_n} ) = P( ∩_{n=1}^k {R_n ≤ x_n} ) for all k ≥ 1 (17)

for any non-decreasing sequence of non-negative integers {x_n} satisfying

x_n/n ≤ v_1 for all n ≥ 1. (18)

Note that (17) is trivial if k = 1. Assume that (17) holds for all k ≤ K with any sequence satisfying (18). By induction, it suffices to show (17) for k = K + 1. This is true if x_{K+1} ≥ x_K + 1, since then {S_K ≤ x_K} implies {S_{K+1} ≤ S_K + 1 ≤ x_{K+1}} and hence

P( ∩_{n=1}^{K+1} {S_n ≤ x_n} ) = P( ∩_{n=1}^K {S_n ≤ x_n} ) = P( ∩_{n=1}^K {R_n ≤ x_n} ) = P( ∩_{n=1}^{K+1} {R_n ≤ x_n} ).

Now assume x_K = x_{K+1}. We have

P( ∩_{n=1}^{K+1} {S_n ≤ x_n} )
= P( ∩_{n=1}^{K−1} {S_n ≤ x_n} ∩ {S_K ≤ x_K − 1} ∩ {S_{K+1} ≤ x_K} ) + P( ∩_{n=1}^{K−1} {S_n ≤ x_n} ∩ {S_K = x_K} ∩ {S_{K+1} ≤ x_K} )
= P( ∩_{n=1}^{K−1} {S_n ≤ x_n} ∩ {S_K ≤ x_K − 1} ) + E[ P( ∩_{n=1}^{K−1} {S_n ≤ x_n} ∩ {S_K = x_K} ∩ {X_{K+1} = 0} | F_K ) ].

When S_K = x_K, we have S_K/K ≤ v_1 by (18), and hence P(X_{K+1} = 0 | F_K) = 1 − g_K(S_K/K) = 1 − v_1 + δ_K. Thus

P( ∩_{n=1}^{K+1} {S_n ≤ x_n} )
= P( ∩_{n=1}^{K−1} {S_n ≤ x_n} ∩ {S_K ≤ x_K − 1} ) + (1 − v_1 + δ_K) P( ∩_{n=1}^{K−1} {S_n ≤ x_n} ∩ {S_K = x_K} )
= (v_1 − δ_K) P( ∩_{n=1}^{K−1} {S_n ≤ x_n} ∩ {S_K ≤ x_K − 1} ) + (1 − v_1 + δ_K) P( ∩_{n=1}^{K−1} {S_n ≤ x_n} ∩ {S_K ≤ x_K} )
= (v_1 − δ_K) P( ∩_{n=1}^{K−1} {S_n ≤ x_n ∧ (x_K − 1)} ∩ {S_K ≤ x_K − 1} ) + (1 − v_1 + δ_K) P( ∩_{n=1}^{K−1} {S_n ≤ x_n} ∩ {S_K ≤ x_K} ).

Since (17) holds for all k ≤ K, we have

P( ∩_{n=1}^{K+1} {S_n ≤ x_n} )
= (v_1 − δ_K) P( ∩_{n=1}^{K−1} {R_n ≤ x_n ∧ (x_K − 1)} ∩ {R_K ≤ x_K − 1} ) + (1 − v_1 + δ_K) P( ∩_{n=1}^{K−1} {R_n ≤ x_n} ∩ {R_K ≤ x_K} )

= (v_1 − δ_K) P( ∩_{n=1}^{K−1} {R_n ≤ x_n} ∩ {R_K ≤ x_K − 1} ) + (1 − v_1 + δ_K) P( ∩_{n=1}^{K−1} {R_n ≤ x_n} ∩ {R_K ≤ x_K} ).

Similarly, we get that

P( ∩_{n=1}^{K+1} {R_n ≤ x_n} )
= P( ∩_{n=1}^{K−1} {R_n ≤ x_n} ∩ {R_K ≤ x_K − 1} ∩ {R_{K+1} ≤ x_K} ) + P( ∩_{n=1}^{K−1} {R_n ≤ x_n} ∩ {R_K = x_K} ∩ {R_{K+1} ≤ x_K} )
= P( ∩_{n=1}^{K−1} {R_n ≤ x_n} ∩ {R_K ≤ x_K − 1} ) + P( ∩_{n=1}^{K−1} {R_n ≤ x_n} ∩ {R_K = x_K} ∩ {I_{K+1} = 0} )
= P( ∩_{n=1}^{K−1} {R_n ≤ x_n} ∩ {R_K ≤ x_K − 1} ) + (1 − v_1 + δ_K) P( ∩_{n=1}^{K−1} {R_n ≤ x_n} ∩ {R_K = x_K} )
= (v_1 − δ_K) P( ∩_{n=1}^{K−1} {R_n ≤ x_n} ∩ {R_K ≤ x_K − 1} ) + (1 − v_1 + δ_K) P( ∩_{n=1}^{K−1} {R_n ≤ x_n} ∩ {R_K ≤ x_K} ).

Therefore, (17) is also true for k = K + 1. This completes the proof of (17).

Next we will finish the proof of (4). It is easy to see that Kolmogorov's law of the iterated logarithm holds for R_n, that is,

lim sup_{n→∞} ( R_n − n v_1 + ∑_{i=1}^n δ_i ) / √(n log log n) = √( 2 v_1 (1 − v_1) ) a.s.

Since the assumption nδ_n^2/log log n → ∞ implies

∑_{i=1}^n δ_i / √(n log log n) → ∞,

we have

lim_{k→∞} P( ∩_{n=k}^∞ {R_n ≤ n v_1} ) = 1.

Choose a large k such that P( ∩_{n=k+1}^∞ {R_n ≤ n v_1} ) > 0. From the fact that P(R_k = 0) > 0 we obtain

P( ∩_{n=1}^∞ {R_n ≤ n v_1} ) ≥ P( {R_k = 0} ∩ ∩_{n=k+1}^∞ {R_n ≤ n v_1} )
= P(R_k = 0) P( ∩_{n=k+1}^∞ {R_n ≤ n v_1} | R_k = 0 )
= P(R_k = 0) P( ∩_{n=k+1}^∞ {R_n − R_k ≤ n v_1} | R_k = 0 )

 ∞  = =  { − ≤ ∗} P(Rk 0)P Rn Rk nv1 n=k+1  ∞  ≥ =  { ≤ ∗} P(Rk 0)P Rn nv1 > 0. n=k+1 From (17) we have

P( ∩_{n=1}^k {S_n ≤ n v_1} ) = P( ∩_{n=1}^k {R_n ≤ n v_1} ) for all k ≥ 1.

By letting k → ∞ we get

P( ∩_{n=1}^∞ {S_n ≤ n v_1} ) > 0,

which implies

P( lim sup_{n→∞} S_n/n ≤ v_1 ) > 0.

In view of Theorem 1 we have proved (4) for i = 1. In a similar manner, we can show (4) for i = 2. 

Acknowledgments

The authors would like to thank the reviewer for helpful comments and constructive suggestions that led to significant improvement of the paper. Qi's research was supported by NSF Grant DMS 0604176, and Yang's research was supported by the National Basic Research Program (973 Program) of China (2007CB814905) and the National Natural Science Foundation of China (Grant Nos. 10871008, 11131002).

References

Brown, B.M., 1971. Martingale central limit theorems. Ann. Math. Statist. 42, 59–66.
Drezner, Z., Farnum, N., 1993. A generalized binomial distribution. Comm. Statist. Theory Methods 22, 3051–3063.
Hall, P., Heyde, C.C., 1980. Martingale Limit Theory and Its Application. Academic Press, New York.
Heyde, C.C., 2004. Asymptotics and criticality for a correlated Bernoulli process. Aust. N. Z. J. Stat. 46, 53–57.
James, B., James, K., Qi, Y., 2008. Limit theorems for correlated Bernoulli random variables. Statist. Probab. Lett. 78, 2339–2345.