Open Math. 2019; 17:452–464

Open Mathematics

Research Article

Shui-Li Zhang*, Yu Miao, and Cong Qu

Sufficient and necessary conditions of convergence for $\tilde\rho$-mixing random variables

https://doi.org/10.1515/math-2019-0036
Received October 5, 2018; accepted January 31, 2019

Abstract: In the present paper, the sufficient and necessary conditions of the complete convergence and complete moment convergence for $\tilde\rho$-mixing random variables are established, which extend some well-known results.

Keywords: Complete convergence, complete moment convergence, $\tilde\rho$-mixing random variables, weakly bounded.

MSC: 60F15

1 Introduction

1.1 $\tilde\rho$-mixing sequences

Let $(\Omega, \mathcal{F}, P)$ be a probability space, $\{X_n, n \ge 1\}$ a sequence of random variables defined on $(\Omega, \mathcal{F}, P)$, and $S_n = \sum_{i=1}^n X_i$ for $n \ge 1$. For any $S \subset \mathbb{N} = \{1, 2, \dots\}$, define $\mathcal{F}_S = \sigma(X_i, i \in S)$. Let $\mathcal{A}$ and $\mathcal{B}$ be two sub-$\sigma$-algebras of $\mathcal{F}$, and put

$$\rho(\mathcal{A}, \mathcal{B}) = \sup\left\{ \frac{|EXY - EXEY|}{\sqrt{E(X-EX)^2 \, E(Y-EY)^2}} : X \in L_2(\mathcal{A}),\ Y \in L_2(\mathcal{B}) \right\}. \qquad (1.1)$$

Define the $\tilde\rho$-mixing coefficients by

$$\tilde\rho_n = \sup\{\rho(\mathcal{F}_S, \mathcal{F}_T) : S, T \subset \mathbb{N} \text{ with } \operatorname{dist}(S, T) \ge n\}, \qquad (1.2)$$

where $\operatorname{dist}(S, T) = \inf\{|s-t| : s \in S,\ t \in T\}$. Obviously, $0 \le \tilde\rho_{n+1} \le \tilde\rho_n \le \tilde\rho_0 = 1$. The sequence $\{X_n, n \ge 1\}$ is called $\tilde\rho$-mixing if there exists $k \in \mathbb{N}$ such that $\tilde\rho_k < 1$. The notion of $\tilde\rho$-mixing random variables was first introduced by Bradley [1], and a number of limit results for $\tilde\rho$-mixing random variables have been established by many authors. One can refer to Bradley [1] for the central limit theorem; Sung [2, 3], An and Yuan [4], Lan [5], Guo and Zhu [6] for complete convergence; Zhang [7] for complete moment convergence; Peligrad and Gut [8], Utev and Peligrad [9] for moment inequalities; Gan [10], Wu and Jiang [11], Kuczmaszewska [12] for strong laws of large numbers.
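As a toy numerical illustration (not from the paper; it uses ordinary linear correlation, which only lower-bounds the maximal correlation in (1.1)): for an $m$-dependent sequence, index sets separated by more than $m$ generate independent $\sigma$-fields, so $\tilde\rho_n = 0$ for $n > m$ and the sequence is trivially $\tilde\rho$-mixing. A sketch with the 1-dependent moving average $X_n = e_n + e_{n+1}$:

```python
import random

# Illustrative sketch: X_n = e_n + e_{n+1} is 1-dependent, so blocks of indices at
# distance > 1 are independent and the mixing coefficient rho~_n vanishes for n > 1.
# Adjacent terms, by contrast, share e_{n+1} and have correlation 1/2.
random.seed(0)
N = 100_000
e = [random.gauss(0.0, 1.0) for _ in range(N + 4)]
x = [e[i] + e[i + 1] for i in range(N + 3)]

def corr(a, b):
    """Sample correlation coefficient of two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b)) / n
    va = sum((u - ma) ** 2 for u in a) / n
    vb = sum((v - mb) ** 2 for v in b) / n
    return cov / (va * vb) ** 0.5

near = corr(x[:N], x[1:N + 1])   # distance 1: dependent, correlation about 0.5
far = corr(x[:N], x[3:N + 3])    # distance 3 > 1: independent blocks, correlation about 0
print(round(near, 2), round(far, 2))
```

With this sample size the two estimates land near $1/2$ and $0$ respectively, matching the exact covariance computation for the moving average.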

*Corresponding Author: Shui-Li Zhang: College of Mathematics and Statistics, University, Province, 467000, E-mail: [email protected]
Yu Miao: College of Mathematics and Information Science, Henan Normal University, Henan Province, 453007, China, E-mail: [email protected]
Cong Qu: College of Mathematics and Statistics, Henan Province, 467000, China, E-mail: [email protected]

Open Access. © 2019 Zhang et al., published by De Gruyter. This work is licensed under the Creative Commons Attribution alone 4.0 License.

1.2 Some notations and known results

Let $\{X_n, n \ge 1\}$ be a sequence of random variables. If there exist a positive constant $C_1$ ($C_2$) and a random variable $X$ such that the left-hand (right-hand) side of the following inequality holds for all $n \ge 1$ and $x \ge 0$,

$$C_1 P(|X| > x) \le \frac{1}{n} \sum_{k=1}^n P(|X_k| > x) \le C_2 P(|X| > x), \qquad (1.3)$$

then the sequence $\{X_n, n \ge 1\}$ is said to be weakly lower (upper) bounded by $X$. The sequence $\{X_n, n \ge 1\}$ is said to be weakly bounded by $X$ if it is both weakly lower and weakly upper bounded by $X$. A sequence of random variables $\{U_n, n \ge 1\}$ is said to converge completely to a constant $C$ if

$$\sum_{n=1}^\infty P(|U_n - C| > \varepsilon) < \infty, \quad \text{for all } \varepsilon > 0. \qquad (1.4)$$

The concept of complete convergence was first introduced by Hsu and Robbins [13]. In view of the Borel–Cantelli lemma, complete convergence implies that $U_n \to C$ almost surely. Complete moment convergence, introduced by Chow [14], is a more general concept than complete convergence. Let $\{Z_n, n \ge 1\}$ be a sequence of random variables and $a_n > 0$, $b_n > 0$, $q > 0$. If

$$\sum_{n=1}^\infty a_n E\big\{b_n^{-1}|Z_n| - \epsilon\big\}_+^q < \infty, \quad \text{for some or all } \epsilon > 0,$$

then $\{Z_n, n \ge 1\}$ is said to satisfy complete moment convergence. Complete convergence and complete moment convergence have been studied by many authors; for instance, see Wang [15], Zhao [16], Zhang [17] and so on. Baum and Katz [18] obtained the following equivalent conditions for i.i.d. random variables.

Theorem A. (Baum and Katz [18]) Let $0 < r < 2$, $r \le p$. Suppose that $\{X_n, n \ge 1\}$ is a sequence of i.i.d. random variables with mean zero. Then $E|X_1|^p < \infty$ is equivalent to the condition that

$$\sum_{n=1}^\infty n^{p/r-2} P\big(|S_n| > \varepsilon n^{1/r}\big) < \infty, \quad \text{for all } \varepsilon > 0, \qquad (1.5)$$

and also equivalent to the condition that

$$\sum_{n=1}^\infty n^{p/r-2} P\Big(\max_{1\le k\le n} |S_k| > \varepsilon n^{1/r}\Big) < \infty, \quad \text{for all } \varepsilon > 0. \qquad (1.6)$$
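To see the series (1.5) converge in a concrete case, the following hedged sketch takes $r = 1$, $p = 2$ (so the weight $n^{p/r-2}$ is identically 1) and i.i.d. symmetric $\pm 1$ variables, for which $E|X_1|^p < \infty$ trivially and the tail probabilities are exact binomial sums; the choice $\varepsilon = 0.5$ is arbitrary.

```python
from math import comb

def tail_prob(n, eps):
    """Exact P(|S_n| > eps*n) for S_n a sum of n i.i.d. symmetric +/-1 signs.
    Writing S_n = 2*B - n with B ~ Binomial(n, 1/2), the event |S_n| > eps*n
    is {B > n*(1+eps)/2} union {B < n*(1-eps)/2}."""
    hi, lo = n * (1 + eps) / 2.0, n * (1 - eps) / 2.0
    return sum(comb(n, k) for k in range(n + 1) if k > hi or k < lo) * 0.5 ** n

# Series (1.5) with r = 1 and p = 2, so the weight n^(p/r - 2) equals 1 for every n:
eps = 0.5
terms = [tail_prob(n, eps) for n in range(1, 201)]
partial = sum(terms)
# The terms decay exponentially (Hoeffding: P <= 2*exp(-n*eps**2/2)), so the
# partial sums stabilize quickly, consistent with the series in (1.5) being finite.
print(round(partial, 4), terms[-1])
```

The tail term at $n = 200$ is already below $10^{-8}$, so the partial sums are effectively constant well before the truncation point.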

For the i.i.d. case, related results are fruitful and detailed. It is natural to extend them to the dependent case, for example, martingale differences, negatively associated random variables, mixing random variables, and so on. In the present paper, we are interested in $\tilde\rho$-mixing random variables. For identically distributed $\tilde\rho$-mixing random variables, Peligrad and Gut [8] extended the results of Baum and Katz to $\tilde\rho$-mixing random variables (see Theorem B); subsequently, An and Yuan [4] extended the results of Peligrad and Gut [8] to weighted sums of $\tilde\rho$-mixing random variables (see Theorem C); Gan [10] obtained a sufficient condition for complete convergence (see Theorem D).

Theorem B. (Peligrad and Gut [8]) Let $\{X_n, n \ge 1\}$ be a sequence of identically distributed $\tilde\rho$-mixing random variables, $\alpha p > 1$, $\alpha > 1/2$, and suppose that $EX_1 = 0$ for $\alpha \le 1$. Assume that $\lim_{n\to\infty} \tilde\rho_n < 1$. Then $E|X_1|^p < \infty$ is equivalent to the condition that

$$\sum_{n=1}^\infty n^{\alpha p - 2} P\Big(\max_{1\le j\le n} |S_j| > \varepsilon n^{\alpha}\Big) < \infty, \quad \text{for all } \varepsilon > 0. \qquad (1.7)$$

Theorem C. (An and Yuan [4]) Let $\{X_n, n \ge 1\}$ be a sequence of identically distributed $\tilde\rho$-mixing random variables, $\alpha p > 1$, $\alpha > 1/2$, and suppose that $EX_1 = 0$ for $\alpha \le 1$. Assume that $\{a_{ni}, 1 \le i \le n\}$ is an array of real numbers satisfying

$$\sum_{i=1}^n |a_{ni}|^p = O(n^{\delta}), \quad 0 < \delta < 1,$$

and

$$\sharp A_{nk} = \sharp\big\{1 \le i \le n : |a_{ni}|^p > (k+1)^{-1}\big\} \ge n e^{-1/k}.$$

Then $E|X_1|^p < \infty$ is equivalent to

$$\sum_{n=1}^\infty n^{\alpha p - 2} P\Big(\max_{1\le j\le n} \Big|\sum_{i=1}^j a_{ni} X_i\Big| > \varepsilon n^{\alpha}\Big) < \infty, \quad \text{for all } \varepsilon > 0. \qquad (1.8)$$

Theorem D. (Gan [10]) Let $\{X_n, n \ge 1\}$ be a sequence of identically distributed $\tilde\rho$-mixing random variables with $\tilde\rho_1 < 1$ and $1 < p \le 2$, $\delta > 0$, $\alpha \ge \max\{(1+\delta)/p, 1\}$. If $EX_1 = 0$ and $E|X_1|^p < \infty$, then

$$\sum_{n=1}^\infty n^{\alpha p - 2 - \delta} P\big(|S_n| > \varepsilon n^{\alpha}\big) < \infty, \quad \text{for all } \varepsilon > 0. \qquad (1.9)$$

In this paper, our purpose is to study and establish equivalent conditions for the complete convergence and complete moment convergence of $\tilde\rho$-mixing random variables. Our main results are stated in Section 2 and all proofs are given in Section 3. Throughout the paper, $C$ denotes a positive constant not depending on $n$, which may differ from place to place. Let $I(A)$ denote the indicator function of the set $A$, and let $a_n = O(b_n)$ mean $a_n \le C b_n$ for all $n \ge 1$.

2 Main results

In this section, we state our main results and some remarks. Recall that a real-valued function $l(x)$, positive and measurable on $(0, \infty)$, is said to be slowly varying at infinity if $\lim_{x\to\infty} l(\lambda x)/l(x) = 1$ for each $\lambda > 0$. Let

$$\mathcal{L} = \Big\{ l : l(x) \text{ is a slowly varying function such that } \int_1^k x^s l(x)\,dx \ge C k^{s+1} l(k), \text{ for all } k > 1,\ s > -1 \Big\}.$$
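As a quick numerical sanity check (illustrative only), $l(x) = \log x$ is slowly varying and, for $s = 1$, satisfies the integral condition above with a constant $C$ close to $1/2$:

```python
import math

def l(x):
    # l(x) = log x, a standard slowly varying function
    return math.log(x)

# Slow variation: l(lambda * x) / l(x) -> 1 as x -> infinity, for each fixed lambda.
for lam in (0.5, 2.0, 10.0):
    assert abs(l(lam * 1e12) / l(1e12) - 1.0) < 0.1

# Integral condition of the class L, with s = 1 and k = 1000 (midpoint Riemann sum):
k, s, n = 1000.0, 1.0, 100_000
h = (k - 1.0) / n
integral = sum((1.0 + (i + 0.5) * h) ** s * l(1.0 + (i + 0.5) * h) for i in range(n)) * h
C_est = integral / (k ** (s + 1) * l(k))
print(C_est)  # a constant bounded away from 0, so the condition holds for l = log
```

Here the exact value of $\int_1^k x \log x\,dx \big/ (k^2 \log k)$ tends to $1/2$ as $k \to \infty$, so the ratio printed is indeed bounded below by a positive constant.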

First, we state the complete convergence for the weighted sums of $\{X_n, n \ge 1\}$.

Theorem 2.1. Let $\{X_n, n \ge 1\}$ be a sequence of $\tilde\rho$-mixing random variables which is weakly bounded by $X$. Let $\alpha > 1/2$, $\alpha p \ge 1$ and $EX_n = 0$ if $p > 1$. Assume that $l(x) \in \mathcal{L}$ and $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ is an array of real numbers satisfying $\max_{1\le i\le n} |a_{ni}| = O(n^{-\alpha})$. Then $E|X|^p l(|X|^{1/\alpha}) < \infty$ is equivalent to

$$\sum_{n=1}^\infty n^{\alpha p - 2} l(n) P\Big(\max_{1\le k\le n} \Big|\sum_{i=1}^k a_{ni} X_i\Big| > \varepsilon\Big) < \infty, \quad \text{for all } \varepsilon > 0. \qquad (2.1)$$

Remark 2.1. When the right-hand side of inequality (1.3) holds and $l(x)$ is a slowly varying function, the moment condition $E|X|^p l(|X|^{1/\alpha}) < \infty$ still implies (2.1). Conversely, for the sufficient condition of Theorem 2.1, we need the left-hand side of inequality (1.3) and $l \in \mathcal{L}$.

Since $\log x \in \mathcal{L}$ and $1 \in \mathcal{L}$, we can obtain the following corollaries.

Corollary 2.1. Let $\{X_n, n \ge 1\}$ be a sequence of $\tilde\rho$-mixing random variables which is weakly bounded by $X$. Let $\alpha > 1/2$, $\alpha p \ge 1$ and $EX_n = 0$ if $p > 1$. Assume that $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ is an array of real numbers satisfying $\max_{1\le i\le n} |a_{ni}| = O(n^{-\alpha})$. Then $E|X|^p \log(|X|) < \infty$ is equivalent to

$$\sum_{n=1}^\infty n^{\alpha p - 2} \log n \, P\Big(\max_{1\le k\le n} \Big|\sum_{i=1}^k a_{ni} X_i\Big| > \varepsilon\Big) < \infty, \quad \text{for all } \varepsilon > 0. \qquad (2.2)$$

Corollary 2.2. Let $\{X_n, n \ge 1\}$ be a sequence of $\tilde\rho$-mixing random variables which is weakly bounded by $X$. Let $\alpha > 1/2$, $\alpha p \ge 1$ and $EX_n = 0$ if $p > 1$. Assume that $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ is an array of real numbers satisfying $\max_{1\le i\le n} |a_{ni}| = O(n^{-\alpha})$. Then $E|X|^p < \infty$ is equivalent to

$$\sum_{n=1}^\infty n^{\alpha p - 2} P\Big(\max_{1\le k\le n} \Big|\sum_{i=1}^k a_{ni} X_i\Big| > \varepsilon\Big) < \infty, \quad \text{for all } \varepsilon > 0. \qquad (2.3)$$

Remark 2.2. Let $\{X_n, n \ge 1\}$ be a sequence of identically distributed $\tilde\rho$-mixing random variables; then $\{X_n, n \ge 1\}$ is weakly bounded by $X_1$. Taking $a_{ni} = n^{-\alpha}$ for all $1 \le i \le n$, $n \ge 1$, Theorem B can be obtained from Corollary 2.2. Moreover, we consider not only the case $\alpha p > 1$ but also the case $\alpha p = 1$, so Theorem 2.1 extends and improves well-known results.

Remark 2.3. For an independent random variable sequence, we have $\tilde\rho_n = 0$ for all $n \ge 1$. So our results extend the Baum–Katz theorem from the i.i.d. case to non-identically distributed $\tilde\rho$-mixing random variables.

Next, we give the complete moment convergence for the weighted sums of {Xn , n ≥ 1}.

Theorem 2.2. Let $\{X_n, n \ge 1\}$ be a sequence of $\tilde\rho$-mixing random variables which is weakly bounded by $X$. Let $p > 1$, $\alpha > 1/2$, $\alpha p \ge 1$ and $EX_n = 0$. Assume that $l(x) \in \mathcal{L}$ and $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ is an array of real numbers satisfying $\max_{1\le i\le n} |a_{ni}| = O(n^{-\alpha})$. Then $E|X|^p l(|X|^{1/\alpha}) < \infty$ is equivalent to

$$\sum_{n=1}^\infty n^{\alpha p - 2} l(n) E\Big(\max_{1\le k\le n} \Big|\sum_{i=1}^k a_{ni} X_i\Big| - \varepsilon\Big)_+ < \infty, \quad \text{for all } \varepsilon > 0. \qquad (2.4)$$

Remark 2.4. Similar to Theorem 2.1, the necessary condition of Theorem 2.2 only needs the right-hand side of inequality (1.3) together with $l(x)$ being slowly varying, while the sufficient condition of Theorem 2.2 needs the left-hand side of inequality (1.3) and $l \in \mathcal{L}$.

Corollary 2.3. Let $\{X_n, n \ge 1\}$ be a sequence of $\tilde\rho$-mixing random variables which is weakly bounded by $X$. Let $p > 1$, $\alpha > 1/2$, $\alpha p \ge 1$ and $EX_n = 0$. Assume $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ is an array of real numbers satisfying $\max_{1\le i\le n} |a_{ni}| = O(n^{-\alpha})$. Then $E|X|^p \log(|X|) < \infty$ is equivalent to

$$\sum_{n=1}^\infty n^{\alpha p - 2} \log(n) E\Big(\max_{1\le k\le n} \Big|\sum_{i=1}^k a_{ni} X_i\Big| - \varepsilon\Big)_+ < \infty, \quad \text{for all } \varepsilon > 0. \qquad (2.5)$$

Corollary 2.4. Let $\{X_n, n \ge 1\}$ be a sequence of $\tilde\rho$-mixing random variables which is weakly bounded by $X$. Let $p > 1$, $\alpha > 1/2$, $\alpha p \ge 1$ and $EX_n = 0$. Assume $\{a_{ni}, 1 \le i \le n, n \ge 1\}$ is an array of real numbers satisfying $\max_{1\le i\le n} |a_{ni}| = O(n^{-\alpha})$. Then $E|X|^p < \infty$ is equivalent to

$$\sum_{n=1}^\infty n^{\alpha p - 2} E\Big(\max_{1\le k\le n} \Big|\sum_{i=1}^k a_{ni} X_i\Big| - \varepsilon\Big)_+ < \infty, \quad \text{for all } \varepsilon > 0. \qquad (2.6)$$

3 Proofs of Main results

3.1 Some lemmas

To prove our results, we first give some lemmas as follows.

Lemma 3.1. (Kuczmaszewska [19]) Let $\{X_n, n \ge 1\}$ be a sequence of random variables which is weakly mean dominated (or weakly upper bounded) by a random variable $X$. If $E|X|^p < \infty$ for some $p > 0$, then for any $t > 0$ and $n \ge 1$, the following statements hold:

$$\frac{1}{n}\sum_{k=1}^n E|X_k|^p \le C E|X|^p, \qquad (3.1)$$

$$\frac{1}{n}\sum_{k=1}^n E|X_k|^p I(|X_k| \le t) \le C\big( E|X|^p I(|X| \le t) + t^p P(|X| > t) \big) \qquad (3.2)$$

and

$$\frac{1}{n}\sum_{k=1}^n E|X_k|^p I(|X_k| > t) \le C E|X|^p I(|X| > t). \qquad (3.3)$$

Lemma 3.2. (Lan [5]) Let $\{X_n, n \ge 1\}$ be a sequence of $\tilde\rho$-mixing random variables. Then there exists a positive constant $C$ such that for any $x \ge 0$ and all $n \ge 1$,

$$\Big(\frac{1}{2} - P\Big(\max_{1\le k\le n} |X_k| > x\Big)\Big) \sum_{k=1}^n P(|X_k| > x) \le \Big(\frac{C}{2} + 1\Big) P\Big(\max_{1\le k\le n} |X_k| > x\Big). \qquad (3.4)$$

Lemma 3.3. (Sung [20]) Let $\{Y_n, n \ge 1\}$ and $\{Z_n, n \ge 1\}$ be sequences of random variables. Then for any $q > 1$, $\epsilon > 0$ and all $a > 0$, we have

$$E\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^j (Y_i + Z_i)\Big| - \epsilon a\Big)_+ \le \Big(\frac{1}{\epsilon^q} + \frac{1}{q-1}\Big)\frac{1}{a^{q-1}} E\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^j Y_i\Big|\Big)^q + E\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^j Z_i\Big|\Big). \qquad (3.5)$$

Lemma 3.4. (Utev and Peligrad [9]) For a positive integer $N \ge 1$ and positive real numbers $q \ge 2$ and $0 \le r < 1$, there is a positive constant $C = C(q, N, r)$ such that if $\{X_n, n \ge 1\}$ is a sequence of random variables with $\tilde\rho_N \le r$, with $EX_k = 0$ and $E|X_k|^q < \infty$ for every $k \ge 1$, then for all $n \ge 1$,

$$E\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^j X_i\Big|\Big)^q \le C\Big(\sum_{i=1}^n E|X_i|^q + \Big(\sum_{i=1}^n EX_i^2\Big)^{q/2}\Big). \qquad (3.6)$$

Lemma 3.5. (Zhou [21]) Let $l(x)$ be slowly varying at infinity. Then we have
(i) $\sum_{k=1}^n k^p l(k) \le C n^{p+1} l(n)$, for $p > -1$ and positive integer $n$;
(ii) $\sum_{k=n}^\infty k^p l(k) \le C n^{p+1} l(n)$, for $p < -1$ and positive integer $n$.
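A small numeric illustration (not part of the proofs) of both bounds in Lemma 3.5, with the slowly varying choice $l(x) = \log(1+x)$ and the hypothetical exponents $p = 0.5$ and $p = -2$:

```python
import math

# Illustrative check of Lemma 3.5 with l(x) = log(1 + x), slowly varying at infinity.
def l(x):
    return math.log(1.0 + x)

# (i) sum_{k=1}^n k^p l(k) <= C n^{p+1} l(n) for p > -1 (here p = 0.5):
ratios_i = []
for n in (10, 100, 1000):
    s = sum(k ** 0.5 * l(k) for k in range(1, n + 1))
    ratios_i.append(s / (n ** 1.5 * l(n)))

# (ii) sum_{k=n}^infty k^p l(k) <= C n^{p+1} l(n) for p < -1 (here p = -2,
# with the infinite tail truncated at 10^5, which changes the sum negligibly):
ratios_ii = []
for n in (10, 100, 1000):
    s = sum(l(k) / k ** 2 for k in range(n, 100_000))
    ratios_ii.append(s / (l(n) / n))

# In both cases the ratios stay bounded, i.e. a single constant C works for all n.
print([round(r, 3) for r in ratios_i], [round(r, 3) for r in ratios_ii])
```

Both lists of ratios stay below 2 across the tested values of $n$, consistent with a uniform constant $C$ in each bound.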

Lemma 3.6. (Bai [22]) Let $l(x)$ be slowly varying at infinity. Then we have
(i) $\lim_{x\to\infty} l(kx)/l(x) = 1$ for any $k > 0$, and $\lim_{x\to\infty} l(x+u)/l(x) = 1$ for any $u > 0$;
(ii) $\lim_{x\to\infty} x^{\alpha} l(x) = \infty$ and $\lim_{x\to\infty} x^{-\alpha} l(x) = 0$, for any $\alpha > 0$;
(iii) $\lim_{n\to\infty} \sup_{2^n \le x < 2^{n+1}} l(x)/l(2^n) = 1$;
(iv) $C_1 2^{nr} l(\epsilon 2^n) \le \sum_{k=1}^n 2^{kr} l(\epsilon 2^k) \le C_2 2^{nr} l(\epsilon 2^n)$ for every $r > 0$, $\epsilon > 0$ and positive integer $n$.

3.2 Proof of Theorem 2.1

Without loss of generality, we can assume that $a_{ni} > 0$ for all $1 \le i \le n$, $n \ge 1$. For fixed $n$, let

$$X_{ni} = X_i I(|X_i| \le n^{\alpha}), \qquad X'_{ni} = X_i I(|X_i| > n^{\alpha}), \qquad i \ge 1.$$

We consider the following three cases: $p > 1$, $p = 1$ and $0 < p < 1$.
(i) Let $p > 1$. It is easy to see that

$$\begin{aligned} \sum_{n=1}^\infty n^{\alpha p - 2} l(n) P\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^k a_{ni} X_i\Big| > \varepsilon\Big) &\le \sum_{n=1}^\infty n^{\alpha p - 2} l(n) P\Big(\bigcup_{i=1}^n (|X_i| > n^{\alpha})\Big) \\ &\quad + \sum_{n=1}^\infty n^{\alpha p - 2} l(n) P\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^k a_{ni} X_{ni}\Big| > \varepsilon\Big) =: I + J. \qquad (3.7)\end{aligned}$$

In order to prove (2.1), it suffices to show that $I < \infty$ and $J < \infty$. From (1.3), Lemma 3.5 and Markov's inequality, we have

$$\begin{aligned} I &= \sum_{n=1}^\infty n^{\alpha p - 2} l(n) P\Big(\bigcup_{i=1}^n (|X_i| > n^{\alpha})\Big) \le \sum_{n=1}^\infty n^{\alpha p - 2} l(n) \sum_{i=1}^n P(|X_i| > n^{\alpha}) \\ &\le C \sum_{n=1}^\infty n^{\alpha p - 1} l(n) P(|X| > n^{\alpha}) \le C \sum_{n=1}^\infty n^{\alpha p - \alpha - 1} l(n) E\big[|X| I(|X| > n^{\alpha})\big] \\ &\le C \sum_{n=1}^\infty n^{\alpha p - \alpha - 1} l(n) \sum_{k=n}^\infty E\big[|X| I(k^{\alpha} \le |X| < (k+1)^{\alpha})\big] \\ &\le C \sum_{k=1}^\infty E\big[|X| I(k^{\alpha} \le |X| < (k+1)^{\alpha})\big] \sum_{n=1}^k n^{\alpha p - \alpha - 1} l(n) \\ &\le C \sum_{k=1}^\infty E\big[|X| I(k^{\alpha} \le |X| < (k+1)^{\alpha})\big] k^{\alpha p - \alpha} l(k) \\ &\le C \sum_{k=1}^\infty E\big[|X|^p l(|X|^{1/\alpha}) I(k^{\alpha} \le |X| < (k+1)^{\alpha})\big] \le C E\big[|X|^p l(|X|^{1/\alpha})\big] < \infty. \qquad (3.8)\end{aligned}$$

Noting that $\alpha p \ge 1$ and $EX_n = 0$, by Lemma 3.1 we have

$$\begin{aligned} \max_{1\le j\le n}\Big|\sum_{i=1}^j a_{ni} E X_{ni}\Big| &= \max_{1\le j\le n}\Big|\sum_{i=1}^j a_{ni} E X_i I(|X_i| > n^{\alpha})\Big| \le \sum_{i=1}^n a_{ni} E|X_i| I(|X_i| > n^{\alpha}) \\ &\le C \frac{1}{n^{\alpha}} \sum_{i=1}^n E|X_i| I(|X_i| > n^{\alpha}) \le C \frac{1}{n^{\alpha p - 1}} E|X|^p I(|X| > n^{\alpha}) \to 0. \qquad (3.9)\end{aligned}$$

In order to prove $J < \infty$, from (3.9) it is enough to check that

$$\sum_{n=1}^\infty n^{\alpha p - 2} l(n) P\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^k a_{ni}(X_{ni} - E X_{ni})\Big| > \frac{\varepsilon}{2}\Big) < \infty.$$

By taking $q > \max\big\{p,\ 2,\ \frac{\alpha p - 1}{\alpha - 1/2}\big\}$ and using Lemma 3.4, we have that

$$\begin{aligned} \sum_{n=1}^\infty n^{\alpha p - 2} l(n) P\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^k a_{ni}(X_{ni} - E X_{ni})\Big| > \frac{\varepsilon}{2}\Big) &\le C \sum_{n=1}^\infty n^{\alpha p - 2} l(n) E\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^k a_{ni}(X_{ni} - E X_{ni})\Big|\Big)^q \\ &\le C \sum_{n=1}^\infty n^{\alpha p - 2} l(n)\Big(\sum_{i=1}^n a_{ni}^q E|X_{ni} - E X_{ni}|^q + \Big(\sum_{i=1}^n a_{ni}^2 E(X_{ni} - E X_{ni})^2\Big)^{q/2}\Big) \\ &=: J_1 + J_2. \qquad (3.10)\end{aligned}$$

Noting that $q > p$, by (3.8), Lemma 3.1 and Lemma 3.5 we have

$$\begin{aligned} J_1 &\le C \sum_{n=1}^\infty n^{\alpha p - 2} l(n) \sum_{i=1}^n a_{ni}^q E|X_{ni}|^q \le C \sum_{n=1}^\infty n^{\alpha p - 2 - \alpha q} l(n) \sum_{i=1}^n E|X_i|^q I(|X_i| \le n^{\alpha}) \\ &\le C \sum_{n=1}^\infty n^{\alpha p - 1 - \alpha q} l(n)\big(E|X|^q I(|X| \le n^{\alpha}) + n^{\alpha q} P(|X| > n^{\alpha})\big) \\ &\le C \sum_{n=1}^\infty n^{\alpha p - 1 - \alpha q} l(n) \sum_{k=1}^n E|X|^q I\big((k-1)^{\alpha} < |X| \le k^{\alpha}\big) + C \sum_{n=1}^\infty n^{\alpha p - 1} l(n) P(|X| > n^{\alpha}) \\ &\le C \sum_{k=1}^\infty E|X|^q I\big((k-1)^{\alpha} < |X| \le k^{\alpha}\big) \sum_{n=k}^\infty n^{\alpha p - 1 - \alpha q} l(n) + C E\big[|X|^p l(|X|^{1/\alpha})\big] \\ &\le C \sum_{k=1}^\infty E|X|^q I\big((k-1)^{\alpha} < |X| \le k^{\alpha}\big) k^{\alpha p - \alpha q} l(k) + C E\big[|X|^p l(|X|^{1/\alpha})\big] \le C E\big[|X|^p l(|X|^{1/\alpha})\big] < \infty. \qquad (3.11)\end{aligned}$$

In order to get $J_2 < \infty$, we consider the following cases.

Case 1: $p \ge 2$, $\alpha p > 1$. From $q > (\alpha p - 1)/(\alpha - \frac{1}{2})$ and Lemma 3.6, we can get that

$$\begin{aligned} J_2 &= \sum_{n=1}^\infty n^{\alpha p - 2} l(n)\Big(\sum_{i=1}^n a_{ni}^2 E(X_{ni} - E X_{ni})^2\Big)^{q/2} \le C \sum_{n=1}^\infty n^{\alpha p - 2 - \alpha q} l(n)\Big(\sum_{i=1}^n E X_{ni}^2\Big)^{q/2} \\ &\le C \sum_{n=1}^\infty n^{\alpha p - 2 - \alpha q + q/2} l(n) \big(E X^2\big)^{q/2} \le C \sum_{n=1}^\infty n^{\alpha p - 2 - \alpha q + q/2} l(n) < \infty. \qquad (3.12)\end{aligned}$$

Case 2: $1 < p < 2$, $\alpha p > 1$. Take $q = 2$; by Lemma 3.1 and (3.8), then

$$\begin{aligned} J_2 &= \sum_{n=1}^\infty n^{\alpha p - 2} l(n) \sum_{i=1}^n a_{ni}^2 E(X_{ni} - E X_{ni})^2 \le C \sum_{n=1}^\infty n^{\alpha p - 2 - 2\alpha} l(n) \sum_{i=1}^n E X_i^2 I(|X_i| \le n^{\alpha}) \\ &\le C \sum_{n=1}^\infty n^{\alpha p - 1 - 2\alpha} l(n)\big(E X^2 I(|X| \le n^{\alpha}) + n^{2\alpha} P(|X| > n^{\alpha})\big) \\ &\le C \sum_{n=1}^\infty n^{\alpha p - 1 - 2\alpha} l(n) \sum_{k=1}^n E X^2 I\big((k-1)^{\alpha} < |X| \le k^{\alpha}\big) + C \sum_{n=1}^\infty n^{\alpha p - 1} l(n) P(|X| > n^{\alpha}) \\ &\le C \sum_{k=1}^\infty E X^2 I\big((k-1)^{\alpha} < |X| \le k^{\alpha}\big) \sum_{n=k}^\infty n^{\alpha p - 1 - 2\alpha} l(n) + C E\big[|X|^p l(|X|^{1/\alpha})\big] \\ &\le C \sum_{k=1}^\infty E X^2 I\big((k-1)^{\alpha} < |X| \le k^{\alpha}\big) k^{\alpha p - 2\alpha} l(k) + C E\big[|X|^p l(|X|^{1/\alpha})\big] \le C E\big[|X|^p l(|X|^{1/\alpha})\big] < \infty. \qquad (3.13)\end{aligned}$$

Case 3: $p > 1$, $\alpha p = 1$, $\alpha > \frac{1}{2}$. Take $q = 2$; similar to the proof of (3.13), it follows that $J_2 < \infty$. From the above discussions, we can get (2.1) for the case $p > 1$.
(ii) Let $p = 1$. By a proof similar to that of (3.9), we have

$$\max_{1\le j\le n}\Big|\sum_{i=1}^j a_{ni} E X_{ni}\Big| \to 0. \qquad (3.14)$$

So in order to get (2.1), it is enough to show

$$\sum_{n=1}^\infty n^{\alpha - 2} l(n) P\Big(\bigcup_{i=1}^n (|X_i| > n^{\alpha})\Big) < \infty \qquad (3.15)$$

and

$$\sum_{n=1}^\infty n^{\alpha - 2} l(n) P\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^k a_{ni} X_{ni} - \sum_{i=1}^k a_{ni} E X_{ni}\Big| > \frac{\varepsilon}{2}\Big) < \infty. \qquad (3.16)$$

By the condition (1.3) and Lemma 3.5, we have

$$\begin{aligned} \sum_{n=1}^\infty n^{\alpha - 2} l(n) P\Big(\bigcup_{i=1}^n (|X_i| > n^{\alpha})\Big) &\le \sum_{n=1}^\infty n^{\alpha - 2} l(n) \sum_{i=1}^n P(|X_i| > n^{\alpha}) \le C \sum_{n=1}^\infty n^{\alpha - 1} l(n) P(|X| > n^{\alpha}) \\ &\le C \sum_{n=1}^\infty n^{\alpha - 1} l(n) \sum_{k=n}^\infty P\big(k^{\alpha} < |X| \le (k+1)^{\alpha}\big) \\ &\le C \sum_{k=1}^\infty P\big(k^{\alpha} < |X| \le (k+1)^{\alpha}\big) \sum_{n=1}^k n^{\alpha - 1} l(n) \\ &\le C \sum_{k=1}^\infty P\big(k^{\alpha} < |X| \le (k+1)^{\alpha}\big) k^{\alpha} l(k) \\ &\le C \sum_{k=1}^\infty E\big[|X| l(|X|^{1/\alpha}) I(k^{\alpha} < |X| \le (k+1)^{\alpha})\big] \le C E\big[|X| l(|X|^{1/\alpha})\big] < \infty. \qquad (3.17)\end{aligned}$$

Furthermore, from (3.17), Lemma 3.4 and Lemma 3.5, we get

$$\begin{aligned} \sum_{n=1}^\infty n^{\alpha - 2} l(n) P\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^k a_{ni} X_{ni} - \sum_{i=1}^k a_{ni} E X_{ni}\Big| > \frac{\varepsilon}{2}\Big) &\le C \sum_{n=1}^\infty n^{\alpha - 2} l(n) E\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^k a_{ni}(X_{ni} - E X_{ni})\Big|\Big)^2 \\ &\le C \sum_{n=1}^\infty n^{-\alpha - 2} l(n) \sum_{i=1}^n E|X_{ni} - E X_{ni}|^2 \\ &\le C \sum_{n=1}^\infty n^{-\alpha - 1} l(n)\big(E|X|^2 I(|X| \le n^{\alpha}) + n^{2\alpha} P(|X| > n^{\alpha})\big) \\ &\le C \sum_{n=1}^\infty n^{-\alpha - 1} l(n) \sum_{k=1}^n E|X|^2 I\big((k-1)^{\alpha} < |X| \le k^{\alpha}\big) \\ &\le C \sum_{k=1}^\infty E|X|^2 I\big((k-1)^{\alpha} < |X| \le k^{\alpha}\big) \sum_{n=k}^\infty n^{-\alpha - 1} l(n) \\ &\le C \sum_{k=1}^\infty E|X|^2 I\big((k-1)^{\alpha} < |X| \le k^{\alpha}\big) k^{-\alpha} l(k) \le C E|X| l(|X|^{1/\alpha}) < \infty. \qquad (3.18)\end{aligned}$$

Based on the above discussions, we can get (2.1) for the case $p = 1$.

(iii) Let $0 < p < 1$. Since $X_i = X_{ni} + X'_{ni}$ for all $i \ge 1$, we have

$$\begin{aligned} \sum_{n=1}^\infty n^{\alpha p - 2} l(n) P\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^k a_{ni} X_i\Big| > \varepsilon\Big) &\le \sum_{n=1}^\infty n^{\alpha p - 2} l(n) P\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^k a_{ni} X_{ni}\Big| > \frac{\varepsilon}{2}\Big) \\ &\quad + \sum_{n=1}^\infty n^{\alpha p - 2} l(n) P\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^k a_{ni} X'_{ni}\Big| > \frac{\varepsilon}{2}\Big) =: I^* + J^*. \qquad (3.19)\end{aligned}$$

By Lemma 3.1 and Lemma 3.5, we obtain that

$$\begin{aligned} I^* &= \sum_{n=1}^\infty n^{\alpha p - 2} l(n) P\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^k a_{ni} X_{ni}\Big| > \frac{\varepsilon}{2}\Big) \le C \sum_{n=1}^\infty n^{\alpha p - \alpha - 2} l(n) \sum_{i=1}^n E|X_{ni}| \\ &\le C \sum_{n=1}^\infty n^{\alpha p - \alpha - 1} l(n)\big(E|X| I(|X| \le n^{\alpha}) + n^{\alpha} P(|X| > n^{\alpha})\big) \\ &\le C \sum_{n=1}^\infty n^{\alpha p - \alpha - 1} l(n) \sum_{k=1}^n E|X| I\big((k-1)^{\alpha} < |X| \le k^{\alpha}\big) + C \sum_{n=1}^\infty n^{\alpha p/2 - 1} l(n) \sum_{k=n}^\infty E|X|^{p/2} I\big(k^{\alpha} \le |X| < (k+1)^{\alpha}\big) \\ &\le C \sum_{k=1}^\infty E|X| I\big((k-1)^{\alpha} < |X| \le k^{\alpha}\big) \sum_{n=k}^\infty n^{\alpha p - \alpha - 1} l(n) + C \sum_{k=1}^\infty E|X|^{p/2} I\big(k^{\alpha} \le |X| < (k+1)^{\alpha}\big) \sum_{n=1}^k n^{\alpha p/2 - 1} l(n) \\ &\le C \sum_{k=1}^\infty E|X| I\big((k-1)^{\alpha} < |X| \le k^{\alpha}\big) k^{\alpha p - \alpha} l(k) + C \sum_{k=1}^\infty E|X|^{p/2} I\big(k^{\alpha} \le |X| < (k+1)^{\alpha}\big) k^{\alpha p/2} l(k) \\ &\le C E|X|^p l(|X|^{1/\alpha}) < \infty \qquad (3.20)\end{aligned}$$

and

$$\begin{aligned} J^* &= \sum_{n=1}^\infty n^{\alpha p - 2} l(n) P\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^k a_{ni} X'_{ni}\Big| > \frac{\varepsilon}{2}\Big) \le C \sum_{n=1}^\infty n^{\alpha p/2 - 2} l(n) E\Big(\sum_{i=1}^n |X'_{ni}|\Big)^{p/2} \\ &\le C \sum_{n=1}^\infty n^{\alpha p/2 - 2} l(n) \sum_{i=1}^n E|X'_{ni}|^{p/2} \le C \sum_{n=1}^\infty n^{\alpha p/2 - 1} l(n) E|X|^{p/2} I(|X| > n^{\alpha}) \\ &\le C \sum_{n=1}^\infty n^{\alpha p/2 - 1} l(n) \sum_{k=n}^\infty E|X|^{p/2} I\big(k^{\alpha} \le |X| < (k+1)^{\alpha}\big) \\ &\le C \sum_{k=1}^\infty E|X|^{p/2} I\big(k^{\alpha} \le |X| < (k+1)^{\alpha}\big) k^{\alpha p/2} l(k) \le C E|X|^p l(|X|^{1/\alpha}) < \infty. \qquad (3.21)\end{aligned}$$

From (3.19)-(3.21), we can see that (2.1) holds for $0 < p < 1$.

Conversely, we take $a_{ni} = n^{-\alpha}$ for all $1 \le i \le n$, $n \ge 1$; then

$$\begin{aligned} \sum_{n=1}^\infty n^{\alpha p - 2} l(n) P\Big(\max_{1\le i\le n} |X_i| > \epsilon n^{\alpha}\Big) &\le C \sum_{n=1}^\infty n^{\alpha p - 2} l(n) P\Big(2 \max_{1\le j\le n}\Big|\sum_{i=1}^j X_i\Big| > \epsilon n^{\alpha}\Big) \\ &\le C \sum_{n=1}^\infty n^{\alpha p - 2} l(n) P\Big(\max_{1\le j\le n}\Big|\sum_{i=1}^j a_{ni} X_i\Big| > \frac{\epsilon}{2}\Big) < \infty. \qquad (3.22)\end{aligned}$$

Because $\alpha p \ge 1$, we have

$$\lim_{n\to\infty} P\Big(\max_{1\le i\le n} |X_i| > \epsilon n^{\alpha}\Big) = 0. \qquad (3.23)$$

Thus, for all $n$ large enough, it follows that

$$P\Big(\max_{1\le i\le n} |X_i| > \epsilon n^{\alpha}\Big) < \frac{1}{4}. \qquad (3.24)$$

Using Lemma 3.2 and (3.24), we get

$$n C_1 P(|X| > \epsilon n^{\alpha}) \le \sum_{k=1}^n P(|X_k| > \epsilon n^{\alpha}) \le (2C + 4) P\Big(\max_{1\le k\le n} |X_k| > \epsilon n^{\alpha}\Big). \qquad (3.25)$$

Hence, by $l \in \mathcal{L}$, we obtain

$$\begin{aligned} \infty &> \sum_{n=1}^\infty n^{\alpha p - 2} l(n) P\Big(\max_{1\le i\le n} |X_i| > n^{\alpha}\Big) \ge C \sum_{n=1}^\infty n^{\alpha p - 2} l(n) \, n P(|X| > n^{\alpha}) \\ &= C \sum_{n=1}^\infty n^{\alpha p - 1} l(n) \sum_{k=n}^\infty P\big(k^{\alpha} < |X| \le (k+1)^{\alpha}\big) \\ &\ge C \sum_{k=1}^\infty P\big(k^{\alpha} < |X| \le (k+1)^{\alpha}\big) \sum_{n=1}^k n^{\alpha p - 1} l(n) \\ &\ge C \sum_{k=1}^\infty P\big(k^{\alpha} < |X| \le (k+1)^{\alpha}\big) k^{\alpha p} l(k) \ge C E\big[|X|^p l(|X|^{1/\alpha})\big], \qquad (3.26)\end{aligned}$$

which implies the desired result.

3.3 Proof of Theorem 2.2

First we assume that the claim (2.4) holds; then it follows that

$$\begin{aligned} \infty &> \sum_{n=1}^\infty n^{\alpha p - 2} l(n) E\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^k a_{ni} X_i\Big| - \epsilon\Big)_+ = \sum_{n=1}^\infty n^{\alpha p - 2} l(n) \int_0^\infty P\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^k a_{ni} X_i\Big| - \epsilon > t\Big)\, dt \\ &\ge \sum_{n=1}^\infty n^{\alpha p - 2} l(n) \int_0^{\epsilon} P\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^k a_{ni} X_i\Big| > \epsilon + t\Big)\, dt \ge C \sum_{n=1}^\infty n^{\alpha p - 2} l(n) P\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^k a_{ni} X_i\Big| > 2\epsilon\Big). \qquad (3.27)\end{aligned}$$

From Theorem 2.1, we know that $E|X|^p l(|X|^{1/\alpha}) < \infty$ is equivalent to

$$\sum_{n=1}^\infty n^{\alpha p - 2} l(n) P\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^k a_{ni} X_i\Big| > \epsilon\Big) < \infty.$$

Conversely, if $E|X|^p l(|X|^{1/\alpha}) < \infty$, then from Lemma 3.3 we have that

$$\begin{aligned} \sum_{n=1}^\infty n^{\alpha p - 2} l(n) E\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^k a_{ni} X_i\Big| - \epsilon\Big)_+ &\le \sum_{n=1}^\infty n^{\alpha p - 2} l(n) E\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^k a_{ni}\big[(X_{ni} - E X_{ni}) + (X'_{ni} - E X'_{ni})\big]\Big| - \epsilon\Big)_+ \\ &\le C \sum_{n=1}^\infty n^{\alpha p - 2} l(n) E\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^k a_{ni}(X_{ni} - E X_{ni})\Big|\Big)^q \\ &\quad + \sum_{n=1}^\infty n^{\alpha p - 2} l(n) E\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^k a_{ni}(X'_{ni} - E X'_{ni})\Big|\Big) =: \tilde I + \tilde J. \qquad (3.28)\end{aligned}$$

From (3.10)-(3.13), we have that $\tilde I < \infty$. Furthermore, by (3.8) and Lemma 3.1, we obtain

$$\begin{aligned} \tilde J &\le C \sum_{n=1}^\infty n^{\alpha p - 2 - \alpha} l(n) E\Big(\max_{1\le k\le n}\Big|\sum_{i=1}^k (X'_{ni} - E X'_{ni})\Big|\Big) \le C \sum_{n=1}^\infty n^{\alpha p - 2 - \alpha} l(n) \sum_{i=1}^n E|X_i| I(|X_i| > n^{\alpha}) \\ &\le C \sum_{n=1}^\infty n^{\alpha p - 1 - \alpha} l(n) E|X| I(|X| > n^{\alpha}) < \infty. \qquad (3.29)\end{aligned}$$

From (3.28) and (3.29), we can get (2.4).

Acknowledgement: This work is supported by IRTSTHN (14IRTSTHN023), NSFC (11471104).

References

[1] Bradley R.C., On the spectral density and asymptotic normality of weakly dependent random fields, J Theoret Probab, 1992, 5, 355-373
[2] Sung S.H., Complete convergence for weighted sums of ρ*-mixing random variables, Discrete Dyn Nat Soc, 2010, Article ID 630608, 13 pp.
[3] Sung S.H., On the strong convergence for weighted sums of ρ*-mixing random variables, Stat Papers, 2013, 54, 773-781
[4] An J., Yuan D.M., Complete convergence of weighted sums for ρ*-mixing sequence of random variables, Stat Probab Lett, 2008, 78, 1466-1472
[5] Lan C.F., Sufficient and necessary conditions of complete convergence for weighted sums of ρ̃-mixing random variables, J Appl Math, 2014, Article ID 612140, 8 pp.
[6] Guo M.L., Zhu D.J., Complete convergence of weighted sums for ρ*-mixing sequence of random variables, J Math Research Appl, 2013, 33, 483-492
[7] Zhang Y., Complete moment convergence for moving average process generated by ρ̃-mixing random variables, J Inequal Appl, 2015, 245, 13 pp.
[8] Peligrad M., Gut A., Almost-sure results for a class of dependent random variables, J Theor Probab, 1999, 12, 87-104
[9] Utev S., Peligrad M., Maximal inequalities and an invariance principle for a class of weakly dependent random variables, J Theor Probab, 2003, 16, 101-115
[10] Gan S.X., Almost sure convergence for ρ̃-mixing random variable sequences, Stat Probab Lett, 2004, 67, 289-298
[11] Wu Q.Y., Jiang Y.Y., Some strong limit theorems for ρ̃-mixing sequences of random variables, Stat Probab Lett, 2008, 78, 1017-1023
[12] Kuczmaszewska A., On Chung-Teicher type strong law of large numbers for ρ*-mixing random variables, Discrete Dyn Nat Soc, 2008, Article ID 140548, 10 pp.
[13] Hsu P.L., Robbins H., Complete convergence and the law of large numbers, Proc Natl Acad Sci USA, 1947, 33, 25-31
[14] Chow Y.S., On the rate of moment convergence of sample sums and extremes, Bull Inst Math Acad Sinica, 1988, 16, 177-201
[15] Wang X.J., Hu S.H., Complete convergence and complete moment convergence for martingale difference sequence, Acta Math Sin, 2013, 30, 119-132
[16] Zhao J.H., Sun Y., Miao Y., On the complete convergence for weighted sums of a class of random variables, J Inequal Appl, 2016, 218, 12 pp.
[17] Zhang S.L., Qu C., Miao Y., On the complete convergence for pairwise negatively quadrant dependent random variables, J Inequal Appl, 2015, 213, 11 pp.
[18] Baum L.E., Katz M., Convergence rates in the law of large numbers, Trans Amer Math Soc, 1965, 120, 108-123
[19] Kuczmaszewska A., On complete convergence in Marcinkiewicz-Zygmund type SLLN for negatively associated random variables, Acta Math Hungar, 2010, 128, 116-130
[20] Sung S.H., Moment inequalities and complete moment convergence, J Inequal Appl, 2009, Article ID 271265, 14 pp.
[21] Zhou X.C., Complete moment convergence of moving average processes under φ-mixing assumptions, Stat Probab Lett, 2010, 80, 285-292
[22] Bai Z.D., Su C., The complete convergence for partial sums of i.i.d. random variables, Sci China Ser A, 1985, 28, 1261-1277