
Notes 4: Laws of large numbers

Math 733-734: Theory of Probability. Lecturer: Sebastien Roch.

References: [Fel71, Sections V.5, VII.7], [Dur10, Sections 2.2-2.4].

1 Easy laws

Let $X_1, X_2, \ldots$ be a sequence of RVs. Throughout we let $S_n = \sum_{k \le n} X_k$. We begin with a straightforward application of Chebyshev's inequality.

THM 4.1 ($L^2$ weak law of large numbers) Let $X_1, X_2, \ldots$ be uncorrelated RVs, i.e., $E[X_i X_j] = E[X_i]E[X_j]$ for $i \neq j$, with $E[X_i] = \mu < +\infty$ and $\mathrm{Var}[X_i] \le C < +\infty$. Then $n^{-1} S_n \to_{L^2} \mu$ and, as a result, $n^{-1} S_n \to_P \mu$.

Proof: Note that
$$\mathrm{Var}[S_n] = E[(S_n - E[S_n])^2] = E\left[\left(\sum_i (X_i - E[X_i])\right)^2\right] = \sum_{i,j} E[(X_i - E[X_i])(X_j - E[X_j])] = \sum_i \mathrm{Var}[X_i],$$
since, for $i \neq j$,
$$E[(X_i - E[X_i])(X_j - E[X_j])] = E[X_i X_j] - E[X_i]E[X_j] = 0.$$
Hence
$$\mathrm{Var}[n^{-1} S_n] = n^{-2}\,\mathrm{Var}[S_n] \le n^{-2}(nC) = n^{-1} C \to 0,$$
that is, $n^{-1} S_n \to_{L^2} \mu$, and the convergence in probability follows from Chebyshev.

With a stronger assumption, we get an easy strong law.

THM 4.2 (Strong law in $L^4$) If the $X_i$'s are IID with $E[X_i^4] < +\infty$ and $E[X_i] = \mu$, then $n^{-1} S_n \to \mu$ a.s.

Proof: Assume w.l.o.g. that $\mu = 0$. (Otherwise translate all $X_i$'s by $\mu$.) Then
$$E[S_n^4] = E\left[\sum_{i,j,k,l} X_i X_j X_k X_l\right] = n\,E[X_1^4] + 3n(n-1)\,(E[X_1^2])^2 = O(n^2),$$
where we used that $E[X_i^3 X_j] = 0$ by independence and the fact that $\mu = 0$. (Note that $E[X_1^2] \le 1 + E[X_1^4]$.) Markov's inequality then implies that for all $\varepsilon > 0$
$$P[|S_n| > n\varepsilon] \le \frac{E[S_n^4]}{n^4 \varepsilon^4} = O(n^{-2}),$$
which is summable, and (BC1) concludes the proof.

The law of large numbers has interesting implications, for instance:

EX 4.3 (A high-dimensional cube is almost the boundary of a ball) Let $X_1, X_2, \ldots$ be IID uniform on $(-1, 1)$. Let $Y_i = X_i^2$ and note that $E[Y_i] = 1/3$, $\mathrm{Var}[Y_i] \le E[Y_i^2] \le 1$, and $E[Y_i^4] \le 1 < +\infty$. Then
$$\frac{X_1^2 + \cdots + X_n^2}{n} \to \frac{1}{3},$$
both in probability and almost surely. In particular, this implies that for $\varepsilon > 0$
$$P\left[(1-\varepsilon)\sqrt{\frac{n}{3}} < \|X^{(n)}\|_2 < (1+\varepsilon)\sqrt{\frac{n}{3}}\right] \to 1,$$
where $X^{(n)} = (X_1, \ldots, X_n)$. I.e., most of the cube is close to the boundary of a ball of radius $\sqrt{n/3}$.
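The concentration in EX 4.3 is easy to observe numerically. The following is a minimal Monte Carlo sketch, not part of the notes' argument; it assumes numpy is available, and the dimension and seed are arbitrary choices. It samples one uniform point in $(-1,1)^n$ and compares its Euclidean norm to $\sqrt{n/3}$.

```python
import numpy as np

# Illustration of EX 4.3 (not part of the proof): a uniform point in
# the cube (-1,1)^n has Euclidean norm close to sqrt(n/3).
# The dimension n and the seed are arbitrary choices for this demo.
rng = np.random.default_rng(0)
n = 100_000

X = rng.uniform(-1.0, 1.0, size=n)        # one uniform point in (-1,1)^n
ratio = np.linalg.norm(X) / np.sqrt(n / 3)

# By the LLN, ratio -> 1 as n grows; here it lands within a percent of 1.
print(ratio)
```

Repeating the experiment with larger $n$ shrinks the deviation of the ratio from $1$, as the a.s. convergence in EX 4.3 predicts.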
2 Weak laws

In the case of IID sequences we get the following.

THM 4.4 (Weak law of large numbers) Let $(X_n)_n$ be IID. A necessary and sufficient condition for the existence of constants $(\mu_n)_n$ such that
$$\frac{S_n}{n} - \mu_n \to_P 0$$
is
$$n\, P[|X_1| > n] \to 0.$$
In that case, the choice
$$\mu_n = E[X_1 \mathbf{1}_{\{|X_1| \le n\}}]$$
works.

COR 4.5 ($L^1$ weak law) If $(X_n)_n$ are IID with $E|X_1| < +\infty$, then
$$\frac{S_n}{n} \to_P E[X_1].$$

Proof: From (DOM),
$$n\, P[|X_1| > n] \le E[|X_1| \mathbf{1}_{\{|X_1| > n\}}] \to 0,$$
and
$$\mu_n = E[X_1 \mathbf{1}_{\{|X_1| \le n\}}] \to E[X_1].$$

Before proving the theorem, we give an example showing that the condition in Theorem 4.4 does not imply the existence of a first moment. We need the following important lemma, which follows from Fubini's theorem. (Exercise.)

LEM 4.6 If $Y \ge 0$ and $p > 0$, then
$$E[Y^p] = \int_0^{+\infty} p\, y^{p-1}\, P[Y > y]\, dy.$$

EX 4.7 Let $X \ge e$ be such that, for some $\alpha \ge 0$,
$$P[X > x] = \frac{1}{x (\log x)^\alpha}, \quad \forall x \ge e.$$
(There is a jump at $e$. The choice of $e$ makes it clear that the tail stays under $1$.) Then
$$E[X^2] = e^2 + \int_e^{+\infty} 2x \cdot \frac{1}{x(\log x)^\alpha}\, dx \ge \int_e^{+\infty} \frac{2}{(\log x)^\alpha}\, dx = +\infty, \quad \forall \alpha \ge 0.$$
(Indeed, the integrand decays more slowly than $1/x$, whose integral diverges.) So the $L^2$ weak law does not apply. On the other hand, substituting $u = \log x$,
$$E[X] = e + \int_e^{+\infty} \frac{1}{x(\log x)^\alpha}\, dx = e + \int_1^{+\infty} \frac{1}{u^\alpha}\, du.$$
This is $+\infty$ if $0 \le \alpha \le 1$. But for $\alpha > 1$
$$E[X] = e + \left[\frac{u^{-\alpha+1}}{-\alpha+1}\right]_1^{+\infty} = e + \frac{1}{\alpha - 1}.$$
Finally,
$$n\, P[X > n] = \frac{1}{(\log n)^\alpha} \to 0, \quad \forall \alpha > 0.$$
(In particular, the WLLN does not apply for $\alpha = 0$.) Also, we can compute $\mu_n$ in Theorem 4.4. For $\alpha = 1$, note that (by the change of variables above)
$$\mu_n = E[X \mathbf{1}_{\{X \le n\}}] = e + \int_e^n \frac{1}{x \log x}\, dx - \frac{1}{\log n} \sim \log\log n.$$
Note, in particular, that $\mu_n$ need not have a finite limit.

2.1 Truncation

To prove sufficiency, we use truncation. In particular, we give a weak law for triangular arrays which does not require a second moment, a result of independent interest.

THM 4.8 (Weak law for triangular arrays) For each $n$, let $(X_{n,k})_{k \le n}$ be independent. Let $b_n$ with $b_n \to +\infty$ and let $X'_{n,k} = X_{n,k} \mathbf{1}_{\{|X_{n,k}| \le b_n\}}$. Suppose that
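LEM 4.6 is left as an exercise, but it is easy to sanity-check numerically. The sketch below is an illustration only (numpy assumed available; the choices of distribution, exponent, and truncation point are arbitrary): for $Y \sim \mathrm{Exp}(1)$ we have $P[Y > y] = e^{-y}$ and $E[Y^p] = \Gamma(p+1)$ in closed form, so both sides of the lemma can be compared directly.

```python
import numpy as np
from math import gamma

# Sanity check of LEM 4.6 for Y ~ Exp(1): P[Y > y] = exp(-y) and
# E[Y^p] = Gamma(p + 1), so both sides are computable. Illustration
# only; the lemma itself follows from Fubini's theorem.
p = 2.5
y = np.linspace(0.0, 50.0, 2_000_001)      # truncate the infinite range
integrand = p * y**(p - 1) * np.exp(-y)    # p y^{p-1} P[Y > y]

dy = y[1] - y[0]
rhs = float(np.sum((integrand[:-1] + integrand[1:]) / 2) * dy)  # trapezoid rule
lhs = gamma(p + 1)                         # E[Y^p] = Gamma(3.5) ≈ 3.3234

print(lhs, rhs)
```

The truncation at $y = 50$ is harmless here since the $e^{-y}$ tail is negligible by then; for heavy-tailed $Y$ (as in EX 4.7) no finite truncation would do, which is exactly the point of that example.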
1. $\sum_{k=1}^n P[|X_{n,k}| > b_n] \to 0$;

2. $b_n^{-2} \sum_{k=1}^n \mathrm{Var}[X'_{n,k}] \to 0$.

If we let $S_n = \sum_{k=1}^n X_{n,k}$ and $a_n = \sum_{k=1}^n E[X'_{n,k}]$, then
$$\frac{S_n - a_n}{b_n} \to_P 0.$$

Proof: Let $S'_n = \sum_{k=1}^n X'_{n,k}$. Clearly
$$P\left[\left|\frac{S_n - a_n}{b_n}\right| > \varepsilon\right] \le P[S_n \neq S'_n] + P\left[\left|\frac{S'_n - a_n}{b_n}\right| > \varepsilon\right].$$
For the first term, by a union bound,
$$P[S_n \neq S'_n] \le \sum_{k=1}^n P[|X_{n,k}| > b_n] \to 0.$$
For the second term, we use Chebyshev's inequality:
$$P\left[\left|\frac{S'_n - a_n}{b_n}\right| > \varepsilon\right] \le \frac{\mathrm{Var}[S'_n]}{\varepsilon^2 b_n^2} = \frac{1}{\varepsilon^2 b_n^2} \sum_{k=1}^n \mathrm{Var}[X'_{n,k}] \to 0.$$

Proof (of sufficiency in Theorem 4.4): We apply Theorem 4.8 with $X_{n,k} = X_k$ and $b_n = n$. Note that $a_n = n \mu_n$. Moreover,
$$n^{-1}\, \mathrm{Var}[X'_{n,1}] \le n^{-1}\, E[(X'_{n,1})^2] = n^{-1} \int_0^{+\infty} 2y\, P[|X'_{n,1}| > y]\, dy = n^{-1} \int_0^n 2y\, \big(P[|X_1| > y] - P[|X_1| > n]\big)\, dy \le \frac{2}{n} \int_0^n y\, P[|X_1| > y]\, dy \to 0,$$
since we are "averaging" a function going to $0$ (recall that $y\, P[|X_1| > y] \to 0$ by assumption). Details in [Dur10]. The other direction is proved in the appendix.

3 Strong laws

Recall:

DEF 4.9 (Tail $\sigma$-algebra) Let $X_1, X_2, \ldots$ be RVs on $(\Omega, \mathcal{F}, P)$. Define
$$\mathcal{T}_n = \sigma(X_{n+1}, X_{n+2}, \ldots), \quad \mathcal{T} = \bigcap_{n \ge 1} \mathcal{T}_n.$$
By a previous lemma, $\mathcal{T}$ is a $\sigma$-algebra. It is called the tail $\sigma$-algebra of the sequence $(X_n)_n$.

THM 4.10 (Kolmogorov's 0-1 law) Let $(X_n)_n$ be a sequence of independent RVs with tail $\sigma$-algebra $\mathcal{T}$. Then $\mathcal{T}$ is $P$-trivial, i.e., for all $A \in \mathcal{T}$ we have $P[A] = 0$ or $1$. In particular, if $Z \in m\mathcal{T}$ then there is $z \in [-\infty, +\infty]$ such that $P[Z = z] = 1$.

EX 4.11 Let $X_1, X_2, \ldots$ be independent. Then
$$\limsup_n n^{-1} S_n \quad \text{and} \quad \liminf_n n^{-1} S_n$$
are almost surely constant.

3.1 Strong law of large numbers

THM 4.12 (Strong law of large numbers) Let $X_1, X_2, \ldots$ be pairwise independent and identically distributed with $E|X_1| < +\infty$. Let $S_n = \sum_{k \le n} X_k$ and $\mu = E[X_1]$. Then
$$\frac{S_n}{n} \to \mu, \quad \text{a.s.}$$
If instead $E|X_1| = +\infty$ then
$$P\left[\lim_n \frac{S_n}{n} \text{ exists in } (-\infty, +\infty)\right] = 0.$$

Proof: For the converse, assume $E|X_1| = +\infty$. From Lemma 4.6,
$$+\infty = E|X_1| \le \sum_{n=0}^{+\infty} P[|X_1| > n].$$
By (BC2), $P[|X_n| > n \text{ i.o.}] = 1$.
Because
$$\frac{S_n}{n} - \frac{S_{n+1}}{n+1} = \frac{(n+1)S_n - nS_{n+1}}{n(n+1)} = \frac{S_n - nX_{n+1}}{n(n+1)} = \frac{S_n}{n(n+1)} - \frac{X_{n+1}}{n+1},$$
we get that
$$\left\{\lim_n n^{-1} S_n \text{ exists in } (-\infty, +\infty)\right\} \cap \{|X_n| > n \text{ i.o.}\} = \emptyset,$$
because, on that event, $S_n / n(n+1) \to 0$ so that
$$\left|\frac{S_n}{n} - \frac{S_{n+1}}{n+1}\right| > \frac{2}{3} \quad \text{i.o.},$$
contradicting the fact that a convergent sequence is Cauchy. The result follows because $P[|X_n| > n \text{ i.o.}] = 1$.

There are several steps in the proof of the $\Longrightarrow$ direction:

1. Truncation. Let $Y_k = X_k \mathbf{1}_{\{|X_k| \le k\}}$ and $T_n = \sum_{k \le n} Y_k$. (Note that the $Y_i$'s are not identically distributed.) Since (by integrating and using Lemma 4.6)
$$\sum_{k=1}^{+\infty} P[|X_k| > k] \le E|X_1| < +\infty,$$
(BC1) implies that it suffices to prove $n^{-1} T_n \to \mu$.

2. Subsequence. For $\alpha > 1$, let $k(n) = [\alpha^n]$. By Chebyshev's inequality, for $\varepsilon > 0$,
$$\sum_{n=1}^{+\infty} P[|T_{k(n)} - E[T_{k(n)}]| > \varepsilon k(n)] \le \frac{1}{\varepsilon^2} \sum_{n=1}^{+\infty} \frac{\mathrm{Var}[T_{k(n)}]}{k(n)^2} = \frac{1}{\varepsilon^2} \sum_{n=1}^{+\infty} \frac{1}{k(n)^2} \sum_{i=1}^{k(n)} \mathrm{Var}[Y_i] = \frac{1}{\varepsilon^2} \sum_{i=1}^{+\infty} \mathrm{Var}[Y_i] \sum_{n:\, k(n) \ge i} \frac{1}{k(n)^2} \le \frac{1}{\varepsilon^2} \sum_{i=1}^{+\infty} \mathrm{Var}[Y_i]\, \big(C i^{-2}\big) < +\infty,$$
where the next-to-last inequality follows from summing a geometric series (since $k(n) = [\alpha^n]$ grows geometrically, $\sum_{n:\, k(n) \ge i} k(n)^{-2} \le C i^{-2}$ for a constant $C = C(\alpha)$) and the finiteness follows from the next lemma, proved later:

LEM 4.13 We have
$$\sum_{i=1}^{+\infty} \frac{\mathrm{Var}[Y_i]}{i^2} \le 4\, E|X_1| < +\infty.$$

By (DOM), $E[Y_k] \to \mu$, so that $k(n)^{-1} E[T_{k(n)}] \to \mu$; by (BC1), since $\varepsilon$ is arbitrary, $k(n)^{-1} T_{k(n)} \to \mu$, a.s.
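The dichotomy in THM 4.12 shows up clearly in simulation. The sketch below is an illustration, not part of the proof; it assumes numpy, and the distributions, sample size, and seed are arbitrary choices. With $\mathrm{Exp}(2)$ samples, $E|X_1| < +\infty$ and $S_n/n$ settles at $\mu = 2$; with standard Cauchy samples, $E|X_1| = +\infty$ and $S_n/n$ never settles down.

```python
import numpy as np

# Illustration of THM 4.12 (not a proof): the sample mean settles at mu
# when E|X_1| < +infinity, while for the standard Cauchy distribution,
# where E|X_1| = +infinity, S_n/n has no limit. Choices are arbitrary.
rng = np.random.default_rng(1)
n = 1_000_000

exp_mean = rng.exponential(scale=2.0, size=n).mean()   # mu = 2
cauchy_mean = rng.standard_cauchy(size=n).mean()       # no limit exists

print(exp_mean, cauchy_mean)
```

Rerunning with different seeds, the exponential sample mean barely moves, while the Cauchy sample mean jumps around wildly: in fact $S_n/n$ for Cauchy samples is again standard Cauchy for every $n$, consistent with the converse part of the theorem.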