HEAVY TAILED ANALYSIS
EURANDOM SUMMER 2005

SIDNEY RESNICK
School of Operations Research and Industrial Engineering
Cornell University, Ithaca, NY 14853 USA
[email protected]
http://www.orie.cornell.edu/~sid
& Eurandom
http://www.eurandom.tue.nl/people/EURANDOM_chair/eurandom_chair.htm

8. Asymptotic Normality of the Tail Empirical Measure

As in Section 7.1, suppose $\{X_j, j \ge 1\}$ are iid, non-negative random variables with common distribution $F(x)$, where $\bar F \in RV_{-\alpha}$ for $\alpha > 0$. Continue with the notation in (7.1), (7.2), and (7.3). Define the tail empirical process

(8.1)   $W_n(y) = \sqrt{k}\Bigl(\frac{1}{k}\sum_{i=1}^{n}\epsilon_{X_i/b(n/k)}(y^{-1/\alpha},\infty] - \frac{n}{k}\,\bar F\bigl(b(n/k)y^{-1/\alpha}\bigr)\Bigr) = \sqrt{k}\Bigl(\nu_n(y^{-1/\alpha},\infty] - E\,\nu_n(y^{-1/\alpha},\infty]\Bigr).$

Theorem 7. If (7.1), (7.2), and (7.3) hold, then in $D[0,\infty)$

$W_n \Rightarrow W,$

where $W$ is Brownian motion on $[0,\infty)$.

Remark 3. Note that, because of regular variation, as $n \to \infty$ with $k/n \to 0$,

(8.2)   $E\,\nu_n(y^{-1/\alpha},\infty] = \frac{n}{k}\,\bar F\bigl(b(n/k)y^{-1/\alpha}\bigr) \to (y^{-1/\alpha})^{-\alpha} = y.$

For applications such as the asymptotic normality of the Hill estimator and other estimators derived from the tail empirical measure, we would prefer the centering in (8.1) to be $y$. However, making this substitution in (8.1) requires knowing, or assuming, that

(8.3)   $\lim_{n\to\infty} \sqrt{k}\Bigl(\frac{n}{k}\,\bar F\bigl(b(n/k)y^{-1/\alpha}\bigr) - y\Bigr)$

exists and is finite. This is one of the origins of the need for second order regular variation.

Proof. The proof requires several steps.

Step 1: Donsker's theorem. Suppose $\{\xi_j, j \ge 1\}$ are iid with $E(\xi_j) = 0$ and $\operatorname{Var}(\xi_j) = 1$. Then in $D[0,\infty)$,

$\frac{\sum_{i=1}^{[n\,\cdot\,]}\xi_i}{\sqrt{n}} \Rightarrow W.$

Step 2: Vervaat's lemma. See Vervaat (1972b) or Vervaat (1972a). Suppose $x_n \in D[0,\infty)$, $x_\infty \in C[0,\infty)$, and the $x_n$ are non-decreasing. If $c_n \to \infty$ and

$c_n\bigl(x_n(t) - t\bigr) \to x_\infty(t) \quad (n \to \infty)$

locally uniformly, then also

$c_n\bigl(x_n^\leftarrow(t) - t\bigr) \to -x_\infty(t) \quad (n \to \infty)$

locally uniformly. From Skorohod's theorem we get the following stochastic-process version: suppose $X_n$ is a sequence of $D[0,\infty)$-valued random elements, $X_\infty$ has continuous paths, $X_n$ has non-decreasing paths, and $c_n \to \infty$. Then

$c_n\bigl(X_n(t) - t\bigr) \Rightarrow X_\infty(t) \quad (n \to \infty)$

in $D[0,\infty)$ implies

$c_n\bigl(X_n^\leftarrow(t) - t\bigr) \Rightarrow -X_\infty(t) \quad (n \to \infty)$

in $D[0,\infty)$. In fact, with $e(t) = t$, we also have the joint convergence

(8.4)   $\Bigl(c_n\bigl(X_n(\cdot) - e\bigr),\, c_n\bigl(X_n^\leftarrow(\cdot) - e\bigr)\Bigr) \Rightarrow \bigl(X_\infty(\cdot),\, -X_\infty(\cdot)\bigr)$

in $D[0,\infty) \times D[0,\infty)$.

Step 3: Renewal theory. Suppose $\{Y_n, n \ge 1\}$ are iid, non-negative random variables with $E(Y_j) = \mu$ and $\operatorname{Var}(Y_j) = \sigma^2$. Set $S_n = \sum_{i=1}^n Y_i$. Then Donsker's theorem says

$\frac{S_{[nt]} - [nt]\mu}{\sigma\sqrt{n}} \Rightarrow W(t)$

in $D[0,\infty)$. Since for any $M > 0$

$\sup_{0 \le t \le M} \frac{|nt\mu - [nt]\mu|}{\sqrt{n}} \to 0,$

it is also true that in $D[0,\infty)$

$\frac{S_{[nt]} - nt\mu}{\sigma\sqrt{n}} \Rightarrow W(t),$

or, dividing numerator and denominator by $n\mu$,

$c_n\bigl(X_n(t) - t\bigr) := \frac{S_{[nt]}/(n\mu) - t}{\sigma n^{-1/2}/\mu} \Rightarrow W(t).$

This puts us in the setting of Step 2 and yields the corresponding result for $X_n^\leftarrow$, which we now evaluate:

$X_n^\leftarrow(t) = \inf\{s : X_n(s) \ge t\} = \inf\{s : S_{[ns]}/(n\mu) \ge t\} = \inf\{s : S_{[ns]} \ge tn\mu\} = \inf\Bigl\{\frac{j}{n} : S_j \ge tn\mu\Bigr\} = \frac{1}{n}N(tn\mu),$

a version of the renewal counting function. For later use, note that $N(t)$ differs by at most 1 from

(8.5)   $\sum_{j=1}^{\infty} 1_{[S_j \le t]}.$

The conclusion from Vervaat's lemma is

$\frac{\mu}{\sigma}\sqrt{n}\Bigl(\frac{1}{n}N(n\mu t) - t\Bigr) \Rightarrow W(t)$

(the lemma gives the limit $-W$, but $-W$ is again a standard Brownian motion), or, changing variables $s = \mu t$,

$\frac{\mu}{\sigma}\sqrt{n}\Bigl(\frac{1}{n}N(ns) - \frac{s}{\mu}\Bigr) \Rightarrow W\Bigl(\frac{s}{\mu}\Bigr) \stackrel{d}{=} \frac{1}{\sqrt{\mu}}\,W(s).$

The conclusion:

(8.6)   $\frac{\mu^{3/2}}{\sigma}\sqrt{n}\Bigl(\frac{1}{n}N(ns) - \frac{s}{\mu}\Bigr) \Rightarrow W(s)$

in $D[0,\infty)$.

Special case: the Poisson process. Let

$\Gamma_n = E_1 + \cdots + E_n$

be a sum of $n$ iid standard exponential random variables. In this case $\mu = \sigma = 1$ and

(8.7)   $\sqrt{k}\Bigl(\frac{1}{k}N(ks) - s\Bigr) \Rightarrow W(s) \quad (k \to \infty)$

in $D[0,\infty)$.
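As an aside, (8.7) is easy to sanity-check by simulation. The sketch below is an illustration added here, not part of the original argument; it assumes NumPy, and the choices of $k$, $s$, and the replication count are purely illustrative. It estimates the mean and variance of $\sqrt{k}\,(N(ks)/k - s)$ at a fixed $s$; the Brownian limit $W(s)$ predicts mean $0$ and variance $s$.

```python
# Monte Carlo check of (8.7) for the Poisson counting function:
# for large k, sqrt(k) * (N(ks)/k - s) should be close to N(0, s).
import numpy as np

rng = np.random.default_rng(0)
k, s, n_rep = 10_000, 1.0, 2_000

vals = np.empty(n_rep)
for r in range(n_rep):
    # Gamma_j = E_1 + ... + E_j with E_i iid standard exponential;
    # N(t) = #{j : Gamma_j <= t} is the Poisson counting function.
    gammas = rng.exponential(1.0, size=int(2 * k * s) + 100).cumsum()
    n_ks = np.searchsorted(gammas, k * s, side="right")  # count of Gamma_j <= ks
    vals[r] = np.sqrt(k) * (n_ks / k - s)

print(f"sample mean     {vals.mean():+.3f}   (limit: 0)")
print(f"sample variance {vals.var():.3f}   (limit: s = {s})")
```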
Step 4: Approximation. Recall the relation of $N(t)$ to the quantity in (8.5). We claim that, as $n \to \infty$ with $k = k(n) \to \infty$ and $k/n \to 0$, for any $T > 0$,

(8.8)   $\sup_{0 \le s \le T} \sqrt{k}\,\Bigl|\frac{1}{k}\sum_{i=1}^{\infty} 1_{[\Gamma_i \le ks]} - \frac{1}{k}\sum_{i=1}^{n} 1_{[\Gamma_i \le ks]}\Bigr| \stackrel{P}{\to} 0.$

The idea is that $\Gamma_i$ is localized about its mean, so any term with index $i$ too far above $k$ is unlikely to contribute; the claim says the terms with $i > n$ are negligible. More formally, the difference in (8.8) is

$\sup_{0 \le s \le T} \frac{1}{\sqrt{k}}\sum_{i=n+1}^{\infty} 1_{[\Gamma_i \le ks]} \le \frac{1}{\sqrt{k}}\sum_{i=n+1}^{\infty} 1_{[\Gamma_i \le kT]} = \frac{1}{\sqrt{k}}\sum_{i=1}^{\infty} 1_{[\Gamma_n + \Gamma_i' \le kT]} \qquad \Bigl(\Gamma_i' = \sum_{l=1}^{i} E_{l+n}\Bigr).$

Now for any $\delta > 0$,

$P\Bigl[\frac{1}{\sqrt{k}}\sum_{i=1}^{\infty} 1_{[\Gamma_n + \Gamma_i' \le kT]} > \delta\Bigr] \le P[\Gamma_n \le kT] = P\Bigl[\frac{\Gamma_n}{n} \le \frac{k}{n}T\Bigr],$

and since $k/n \to 0$, for any $\eta > 0$ this last term is ultimately bounded by

$P\Bigl[\frac{\Gamma_n}{n} \le 1 - \eta\Bigr] \to 0$

by the weak law of large numbers. Combining (8.8), the definition of $N$, and (8.7), we get the conclusion

(8.9)   $\sqrt{k}\Bigl(\frac{1}{k}\sum_{i=1}^{n} 1_{[\Gamma_i \le ks]} - s\Bigr) \Rightarrow W(s) \quad (k \to \infty,\ k/n \to 0)$

in $D[0,\infty)$.

Step 5: Time change. For $s \ge 0$, define

$\varphi_n(s) = \frac{n}{k}\,\bar F\bigl(b(n/k)s^{-1/\alpha}\bigr)\,\frac{\Gamma_{n+1}}{n},$

so that from regular variation and the weak law of large numbers,

(8.10)   $\sup_{0 \le s \le T} |\varphi_n(s) - s| \stackrel{P}{\to} 0$

for any $T > 0$. Therefore the joint convergence

$\Bigl(\sqrt{k}\Bigl(\frac{1}{k}\sum_{i=1}^{n} 1_{[\Gamma_i \le k\,\cdot\,]} - (\cdot)\Bigr),\; \varphi_n(\cdot)\Bigr) \Rightarrow (W, e), \quad (e(t) = t)$

holds in $D[0,\infty) \times D[0,\infty)$, and applying composition we arrive at

(8.11)   $\sqrt{k}\Bigl(\frac{1}{k}\sum_{i=1}^{n} 1_{[\Gamma_i \le k\varphi_n(s)]} - \varphi_n(s)\Bigr) \Rightarrow W(s)$

in $D[0,\infty)$.

Step 6: Probability integral transform. The $\Gamma$'s have the property that

$\Bigl(\frac{\Gamma_1}{\Gamma_{n+1}}, \ldots, \frac{\Gamma_n}{\Gamma_{n+1}}\Bigr) \stackrel{d}{=} \Bigl(1 - \frac{\Gamma_n}{\Gamma_{n+1}}, \ldots, 1 - \frac{\Gamma_1}{\Gamma_{n+1}}\Bigr) \stackrel{d}{=} \bigl(U_{(1:n)}, \ldots, U_{(n:n)}\bigr),$

where $U_{(1:n)} \le \cdots \le U_{(n:n)}$ are the order statistics, in increasing order, of $n$ iid $U(0,1)$ random variables $U_1, \ldots, U_n$. Observe that in (8.11), since $k\varphi_n(s) = \bar F\bigl(b(n/k)s^{-1/\alpha}\bigr)\Gamma_{n+1}$,

$\frac{1}{k}\sum_{i=1}^{n} 1_{[\Gamma_i \le k\varphi_n(s)]} = \frac{1}{k}\sum_{i=1}^{n} 1_{[\Gamma_i \le \bar F(b(n/k)s^{-1/\alpha})\Gamma_{n+1}]} = \frac{1}{k}\sum_{i=1}^{n} 1_{[\Gamma_i/\Gamma_{n+1} \le \bar F(b(n/k)s^{-1/\alpha})]}$

$= \frac{1}{k}\sum_{i=1}^{n} 1_{[F(b(n/k)s^{-1/\alpha}) \le 1 - \Gamma_i/\Gamma_{n+1}]} = \frac{1}{k}\sum_{i=1}^{n} 1_{[b(n/k)s^{-1/\alpha} \le F^\leftarrow(1 - \Gamma_i/\Gamma_{n+1})]}$

$\stackrel{d}{=} \frac{1}{k}\sum_{i=1}^{n} 1_{[b(n/k)s^{-1/\alpha} \le F^\leftarrow(U_{(i:n)})]} = \frac{1}{k}\sum_{i=1}^{n} 1_{[b(n/k)s^{-1/\alpha} \le F^\leftarrow(U_i)]}$

$\stackrel{d}{=} \frac{1}{k}\sum_{i=1}^{n} 1_{[b(n/k)s^{-1/\alpha} \le X_i]} = \frac{1}{k}\sum_{i=1}^{n} 1_{[X_i/b(n/k) \ge s^{-1/\alpha}]} = \nu_n[s^{-1/\alpha}, \infty].$

Also,

$\sqrt{k}\sup_{0 \le s \le T}\Bigl|\frac{n}{k}\,\bar F\bigl(b(n/k)s^{-1/\alpha}\bigr)\frac{\Gamma_{n+1}}{n} - \frac{n}{k}\,\bar F\bigl(b(n/k)s^{-1/\alpha}\bigr)\Bigr| = \sup_{0 \le s \le T}\frac{n}{k}\,\bar F\bigl(b(n/k)s^{-1/\alpha}\bigr)\sqrt{k}\,\Bigl|\frac{\Gamma_{n+1}}{n} - 1\Bigr|$

$= O(1)\sqrt{\frac{k}{n}}\,\frac{|\Gamma_{n+1} - n|}{\sqrt{n}} = O(1)\,o(1)\,O_p(1) \stackrel{P}{\to} 0$

by the central limit theorem. This proves the result, since the last statement removes the difference between $\varphi_n(s)$ and $E\bigl(\nu_n[s^{-1/\alpha}, \infty]\bigr)$.

From this result we can recover Theorem 5, page 41, and its consequences.

8.1. Asymptotic normality of the Hill estimator. For this section it is convenient to assume (8.3); in fact, for simplicity, we assume

(8.12)   $\lim_{n\to\infty} \sqrt{k}\Bigl(\frac{n}{k}\,\bar F\bigl(b(n/k)y^{-1/\alpha}\bigr) - y\Bigr) = 0$

uniformly for $y > y_0$. (The uniformity is not an extra assumption, but this requires proof.) With (8.12), we can modify the result of Theorem 7 to

(8.13)   $W_n^{\#}(y) = \sqrt{k}\Bigl(\frac{1}{k}\sum_{i=1}^{n} 1_{[X_i/b(n/k) > y^{-1/\alpha}]} - y\Bigr) =: \sqrt{k}\bigl(V_n(y) - y\bigr) \Rightarrow W(y)$

in $D[0,\infty)$. Therefore, from Vervaat's lemma,

(8.14)   $\sqrt{k}\bigl(V_n^\leftarrow(y) - y\bigr) = \sqrt{k}\Bigl(\Bigl(\frac{X_{(\lceil ky\rceil)}}{b(n/k)}\Bigr)^{-\alpha} - y\Bigr) \Rightarrow -W(y).$

In fact we have the joint convergence

(8.15)   $\Bigl(\sqrt{k}\bigl(V_n(\cdot) - e\bigr),\; \sqrt{k}\bigl(V_n^\leftarrow(\cdot) - e\bigr),\; \Bigl(\frac{X_{(k)}}{b(n/k)}\Bigr)^{-\alpha}\Bigr) \Rightarrow (W, -W, 1)$

in $D[0,\infty) \times D[0,\infty) \times \mathbb{R}$. Apply the map

$\bigl(x_1(\cdot),\, x_2(\cdot),\, c\bigr) \mapsto \bigl(x_1(c\,\cdot),\; x_2(1)e\bigr)$

to get

$\Bigl(\sqrt{k}\Bigl(\frac{1}{k}\sum_{i=1}^{n} 1_{[\frac{X_i}{b(n/k)} > \frac{X_{(k)}}{b(n/k)}y^{-1/\alpha}]} - \Bigl(\frac{X_{(k)}}{b(n/k)}\Bigr)^{-\alpha}y\Bigr),\; \sqrt{k}\Bigl(\Bigl(\frac{X_{(k)}}{b(n/k)}\Bigr)^{-\alpha}y - y\Bigr)\Bigr) \Rightarrow \bigl(W(y),\; -yW(1)\bigr).$

Add the components to get

(8.16)   $\sqrt{k}\Bigl(\frac{1}{k}\sum_{i=1}^{n} 1_{[\frac{X_i}{b(n/k)} > \frac{X_{(k)}}{b(n/k)}y^{-1/\alpha}]} - y\Bigr) \Rightarrow W(y) - yW(1).$
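Equation (8.16) is the key input for the asymptotic normality of the Hill estimator: the limit $W(y) - yW(1)$, $0 \le y \le 1$, is a Brownian bridge, and integrating (8.16) against $dy/y$ over $(0,1]$ leads to the classical statement that $\sqrt{k}\,(H_{k,n} - \alpha^{-1}) \Rightarrow N(0, \alpha^{-2})$ when the bias condition (8.12) holds. The following simulation sketch of that conclusion is an illustration added here, not part of the original notes; it assumes NumPy, uses exact Pareto data (for which the bias in (8.12) vanishes identically), and the values of $\alpha$, $n$, $k$, and the replication count are illustrative.

```python
# Monte Carlo check of the classical consequence of (8.16):
# for iid Pareto(alpha) data, sqrt(k) * (H_{k,n} - 1/alpha) is
# approximately N(0, 1/alpha^2), where H_{k,n} is the Hill estimator.
import numpy as np

rng = np.random.default_rng(1)
alpha, n, k, n_rep = 2.0, 20_000, 500, 2_000

stats = np.empty(n_rep)
for r in range(n_rep):
    # Standard Pareto: P[X > x] = x^{-alpha} for x >= 1.
    x = rng.pareto(alpha, size=n) + 1.0
    top = np.sort(x)[-(k + 1):]              # the k+1 largest order statistics
    # Hill estimator H_{k,n} = (1/k) * sum_{i=1}^{k} log(X_(i) / X_(k+1)).
    hill = np.log(top[1:] / top[0]).mean()
    stats[r] = np.sqrt(k) * (hill - 1.0 / alpha)

print(f"sample mean {stats.mean():+.4f}   (limit: 0)")
print(f"sample std  {stats.std():.4f}   (limit: 1/alpha = {1 / alpha})")
```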
