Review of Basic Asymptotic Theory
Probability Space: (Ω, F, P), where the σ-field F satisfies the following 3 conditions:
1. A ∈ F =⇒ A^c ∈ F.
2. A_1, A_2, ... ∈ F =⇒ ∪_{i=1}^∞ A_i ∈ F.
3. ∅ ∈ F.
Question: Show that the above conditions imply Ω ∈ F and ∩_{i=1}^∞ A_i ∈ F.
The events lim sup and lim inf are defined as:
1. lim sup_{n→∞} A_n = ∩_{n=1}^∞ ∪_{m≥n} A_m = {A_n, i.o.}, where i.o. denotes "infinitely often".
2. lim inf_{n→∞} A_n = ∪_{n=1}^∞ ∩_{m≥n} A_m = {A_n, e.v.}, where e.v. denotes "eventually".
Stochastic Convergence: All materials can be found in Amemiya Ch. 3 and Davidson Ch. 18.
Almost Sure Convergence:
X_n →a.s. 0 ⇐⇒ P(lim X_n(ω) = 0) = 1
⇐⇒ ∀ε > 0, P(|X_n(ω)| > ε, i.o.) = 0
⇐⇒ ∀ε > 0, P(∩_{n=1}^∞ ∪_{m≥n} {|X_m(ω)| > ε}) = 0
⇐⇒ ∀ε > 0, lim_{n→∞} P(∪_{m≥n} {|X_m(ω)| > ε}) = 0
⇐⇒ ∀ε > 0, P(∪_{n=1}^∞ ∩_{m≥n} {|X_m(ω)| ≤ ε}) = 1
⇐⇒ ∀ε > 0, lim_{n→∞} P(∩_{m≥n} {|X_m(ω)| ≤ ε}) = 1.
Question: Show the above definitions are equivalent.
L^p convergence, X_n →Lp 0: lim_{n→∞} E|X_n|^p = 0.
Convergence in probability: ∀ε > 0, lim_{n→∞} P(|X_n(ω)| ≤ ε) = 1.
Convergence in distribution: lim_{n→∞} P(X_n ≤ x) = P(X ≤ x) at every continuity point x of the distribution of X.
Relations:
1. X_n →a.s. 0 =⇒ X_n →p 0: Note that P(∩_{m≥n} {|X_m(ω)| ≤ ε}) ≤ P(|X_n(ω)| ≤ ε).
2. X_n →Lp 0, p > 0 =⇒ X_n →p 0: Note that P(|X_n| > ε) ≤ ε^{−p} E|X_n|^p by the Markov inequality.
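The Markov bound in relation 2 is easy to check numerically. A minimal sketch (my own example, not from the text; the exponential distribution, ε, p, and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=1_000_000)  # draws of |X|, with E|X| = 1

eps, p = 2.0, 1.0
tail = np.mean(x > eps)                # Monte Carlo estimate of P(|X| > eps)
bound = eps ** (-p) * np.mean(x ** p)  # eps^{-p} E|X|^p

print(tail, bound)  # the estimated tail probability sits below the Markov bound
```

Here the exact tail is e^{−2} ≈ 0.135 against a bound of 0.5, so the inequality holds with room to spare.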
3. X_n →p 0 ⇐⇒ X_n →d 0: Almost by definition. Note however that X_n →d X does not imply X_n − X →p 0 unless X is degenerate.
Borel–Cantelli Lemma (BC): ∑_{n=1}^∞ P(E_n) < ∞ =⇒ P(E_n, i.o.) = 0.
Proof: P(E_n, i.o.) = lim_{n→∞} P(∪_{m≥n} E_m) ≤ lim_{n→∞} ∑_{m≥n} P(E_m) = 0, where the last equality is by the summability assumption.
Example: X_i ~ IID Uniform(0, 1). Show that min_{1≤i≤n} X_i →a.s. 0.
Proof: Note that P(min_{1≤i≤n} X_i > ε) = ∏_{i=1}^n P(X_i > ε) = (1 − ε)^n, and ∑_{n=1}^∞ (1 − ε)^n < ∞. Use Borel–Cantelli to conclude.
Look at the example in Amemiya p. 88: note that one needs ∑_{n=1}^∞ P(|X_n| > ε) = ∞ for the argument to hold, hence P(|X_n| > ε) = 1/n will do.
But Borel–Cantelli may not be necessary for a.s. convergence: let ω be uniformly distributed on (0, 1). Define X_n(ω) = n if ω ≤ 1/n and X_n(ω) = 0 if ω > 1/n. Obviously X_n(ω) →a.s. 0, but BC does not hold.
a.s. convergence does not imply L^p convergence: in the same example, EX_n = 1 for all n, although X_n →a.s. 0.
So when does a.s. convergence imply convergence of expectations? One needs to control the cases where things go really wrong with small probability.
Monotone Convergence Theorem (MON): If X_n →a.s. X and X_n is increasing almost surely, then lim_{n→∞} EX_n = EX.
Dominated Convergence Theorem (DOM): If X_n →a.s. X and E(sup_n |X_n(ω)|) < ∞, then lim_{n→∞} EX_n = EX. Note that this also applies to the Lebesgue measure, which is not a probability measure, in which case we have: if f_n(x) → f(x) a.e. with respect to the Lebesgue measure, and ∫ sup_n |f_n(x)| dx < ∞, then lim_{n→∞} ∫ f_n(x) dx = ∫ f(x) dx.
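The min-of-uniforms example can be visualized by simulation. A rough sketch (the seed and sample size are my own choices); since M_n = min_{1≤i≤n} X_i is nonincreasing, one draw of X_1, X_2, ... traces the whole path via a running minimum:

```python
import numpy as np

rng = np.random.default_rng(1)

# One sample path of M_n = min_{1<=i<=n} X_i as a running minimum.
x = rng.uniform(size=100_000)
running_min = np.minimum.accumulate(x)

# Snapshots of the path at n = 10, 100, 1000, 10000, 100000.
print(running_min[[9, 99, 999, 9_999, 99_999]])
# The path is monotone down toward 0; P(M_n > eps) = (1 - eps)^n is summable,
# so Borel-Cantelli gives M_n -> 0 almost surely.
```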
Uniform Integrability (UI): Definition: X_n is U.I. if lim_{M→∞} sup_n E(|X_n| · 1(|X_n| > M)) = 0.
Theorem: If X_n is U.I. and X_n →a.s. 0, then lim E|X_n| = 0. Hence lim EX_n = 0.
Proof: Write E|X_n| = E|X_n| · 1(|X_n| > M) + E|X_n| · 1(|X_n| ≤ M). The first term is uniformly (in n) small for M large, by U.I. Given such an M, use DOM to show that the second term → 0, since it is dominated by M.
Stochastic Order: X_n = o_p(1) if X_n →p 0. X_n = O_p(1) if lim_{M→∞} lim sup_n P(|X_n| > M) = 0.
Facts: X_n = O_p(a_n) means a_n^{−1} X_n = O_p(1). O_p(1) o_p(1) = o_p(1). O_p(a_n) O_p(b_n) = O_p(a_n b_n). O_p(a_n) + O_p(b_n) = O_p(a_n + b_n) = O_p(max(a_n, b_n)).
Slutsky: See Amemiya Thm 3.2.7, p. 89.
Continuous Mapping: See Amemiya Thm 3.2.5, p. 88.
Weak Law of Large Numbers (WLLN): X_t, t = 1, .... Let X̄_n = n^{−1} ∑_{t=1}^n X_t. Under what conditions does X̄_n − EX̄_n →p 0?
It is sufficient to show E|X̄_n − EX̄_n|^p → 0, say for p = 2, but other p > 0 also work.
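As an illustration of the O_p(1) definition (my own example, not from the text): for iid mean-zero X_t with unit variance, √n X̄_n = O_p(1), and the tail probability P(|√n X̄_n| > M) is uniformly small in n once M is large. A sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

def tail_prob(n, M, reps=5_000):
    """Monte Carlo estimate of P(|sqrt(n) * Xbar_n| > M) for iid N(0,1) draws."""
    xbar = rng.standard_normal((reps, n)).mean(axis=1)
    return np.mean(np.abs(np.sqrt(n) * xbar) > M)

# Here sqrt(n) * Xbar_n is exactly N(0,1), so the tail beyond M = 3 stays
# near P(|N(0,1)| > 3) ~ 0.003 for every n: the sequence is O_p(1).
for n in (10, 100, 1000):
    print(n, tail_prob(n, M=3.0))
```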
WLLN for independent, non-identically distributed X_t (Davidson p. 293): Let X_t be an independent sequence. If ∑_{t=1}^∞ σ_t²/t² < ∞, then E(X̄_n − EX̄_n)² → 0.
Proof: E(X̄_n − EX̄_n)² = Var(X̄_n) = n^{−2} ∑_{t=1}^n σ_t². Use Kronecker's Lemma (see Davidson p. 34), which says that for a positive sequence of numbers x_t and a sequence of numbers a_t that monotonically increases to infinity, if ∑_{t=1}^∞ x_t/a_t < ∞, then a_n^{−1} ∑_{t=1}^n x_t → 0 as n → ∞. Now take x_t = σ_t² and a_t = t².
An easy WLLN: X_t uncorrelated with mean 0 and Var(X_t) = σ². Then Var(X̄_n) = σ²/n → 0, so X̄_n →p 0 by Chebyshev's inequality.
Strong Law of Large Numbers (SLLN): Under what conditions does X̄_n − EX̄_n →a.s. 0?
SLLN 1 (Thm 3.3.1, Amemiya p. 90): If X_t are independent and ∑_{t=1}^∞ σ_t²/t² < ∞, then X̄_n − EX̄_n →a.s. 0.
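Kronecker's Lemma can be sanity-checked numerically. A sketch taking x_t = σ_t² = 1 (bounded variances, my own choice) and a_t = t², the substitution used in the WLLN proof above:

```python
import numpy as np

t = np.arange(1, 100_001)
x = np.ones_like(t, dtype=float)  # x_t = sigma_t^2 = 1 for all t

# Hypothesis of the lemma: sum_t x_t / t^2 converges (here to pi^2 / 6).
total = np.sum(x / t.astype(float) ** 2)

# Conclusion: a_n^{-1} sum_{t<=n} x_t = n / n^2 = 1/n -> 0,
# which is exactly Var(Xbar_n) -> 0 in the WLLN proof.
ratio = np.cumsum(x) / t.astype(float) ** 2
print(total, ratio[[9, 99, 999, 99_999]])
```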
SLLN 2 (Thm 3.3.2, Amemiya p. 90): If X_t are iid with finite mean μ, then X̄_n − μ →a.s. 0.
Triangular Array: Useful for convergence in probability and in distribution: {{X_nt, t = 1, ..., n}, n = 1, ..., ∞}. For example, given X_t, t = 1, ..., ∞, independent with mean 0, let σ_t² = Var(X_t) and C_n² = ∑_{t=1}^n σ_t². Then X_nt = X_t/C_n is a triangular array of random variables, for t = 1, ..., n and n = 1, ..., ∞.
Question: Show that Var(∑_{t=1}^n X_nt) = ∑_{t=1}^n Var(X_nt) = 1.
Consistency of the least squares coefficient (Amemiya p. 95):
Model 1: y_t = x_t'β + u_t, with u_t uncorrelated, mean 0, and Eu_t² = σ² for all t. β̂ = (X'X)^{−1}X'y.
Consistency Theorem: If λ_s(X'X) → ∞, then β̂ →p β. Here λ_s(A) and λ_l(A) denote the smallest and largest eigenvalues of A.
Proof: λ_s(X'X) → ∞ =⇒ λ_l((X'X)^{−1}) → 0 =⇒ trace((X'X)^{−1}) → 0. But the trace of σ²(X'X)^{−1} is the sum of the variances of the elements of β̂ − β.
Consistency of σ̂² (Thm 3.5.2, Amemiya p. 96): Assume u_t iid in Model 1. Then σ̂² →p σ², where σ̂² = T^{−1} û'û = T^{−1} u'u − T^{−1} u'Pu, with P = X(X'X)^{−1}X'.
Proof:
1. T^{−1} u'u →a.s. σ² by SLLN 2.
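Both consistency results above can be illustrated in one small simulation. A sketch (the design, error variance, and sample sizes are my own choices, not from the text): as n grows, λ_s(X'X) grows, β̂ approaches β, and σ̂² approaches σ².

```python
import numpy as np

rng = np.random.default_rng(3)
beta, sigma = np.array([1.0, -2.0]), 0.5  # true coefficients and error s.d.

for n in (100, 10_000):
    X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + regressor
    y = X @ beta + sigma * rng.normal(size=n)
    b_hat = np.linalg.solve(X.T @ X, X.T @ y)   # beta_hat = (X'X)^{-1} X'y
    resid = y - X @ b_hat
    s2_hat = resid @ resid / n                  # sigma2_hat = T^{-1} u_hat' u_hat
    lam_s = np.linalg.eigvalsh(X.T @ X).min()   # smallest eigenvalue of X'X
    print(n, b_hat, s2_hat, lam_s)
```

The printed output shows λ_s(X'X) increasing roughly linearly in n while the estimation errors shrink, consistent with the theorem's condition.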