Edgeworth Expansion and Saddle Point Approximation for Discrete Data with Application to Chance Games
Degree Project
Edgeworth Expansion and Saddle Point Approximation for Discrete Data with Application to Chance Games
Rani Basna
2010-09-27
Subject: Mathematics
Level: Master
Course code: 5MA11E

Abstract

We investigate two mathematical tools, the Edgeworth series expansion and the saddle point method: approximation techniques that help us estimate the distribution function of the standardized mean of independent, identically distributed random variables, taking the lattice case into consideration. We then describe an important application of these tools, in which game-developing companies can use them to reduce the amount of time needed to satisfy their standard requirements before approving a game.

Keywords

Characteristic function, Edgeworth expansion, Lattice random variables, Saddle point approximation.

Acknowledgments

First I would like to show my gratitude to my supervisor Dr. Roger Pettersson for his continuous support and help. It is a pleasure to thank Boss Media® for giving me this problem. I would also like to thank my fiancée Hiba Nassar for her encouragement. Finally, I want to thank my parents for their love; I dedicate this thesis to my father, who inspired me.

Rani Basna
Number of Pages: 45

Contents

1 Introduction
2 Notations and Definitions
  2.1 Characteristic Function
  2.2 Central Limit Theorem
  2.3 Definition of Lattice Distribution and Bounded Variation
3 Edgeworth Expansion
  3.1 First Case
  3.2 Second Case
    3.2.1 Auxiliary Theorems
    3.2.2 On the Remainder Term
    3.2.3 Main Theorem
    3.2.4 Simulating the Edgeworth Expansion for Continuous Random Variables
  3.3 Lattice Edgeworth Expansion
    3.3.1 The Bernoulli Case
    3.3.2 Simulating the Edgeworth Expansion in the Bernoulli Case
    3.3.3 General Lattice Case
    3.3.4 Continuity-Corrected Points Edgeworth Series
    3.3.5 Simulation on Triss Cards
4 Saddle Point Approximation
  4.1 Simulation with the Saddle Point Approximation Method
A Matlab Codes
References

1 Introduction

The basic idea of this Master's thesis is to define, adjust, and apply two mathematical tools: the Edgeworth expansion and the saddle point approximation. Our main goal is to estimate the cumulative distribution function of the standardized mean of independent and identically distributed random variables, in particular lattice random variables. These approximation methods reduce the number of independent random variables required for an accurate estimate, compared with the usual normal approximation based on the central limit theorem, which makes them more applicable in real-life industry. More precisely, the chance game industry may use such methods to diminish the amount of time needed to publish a new trusted game. We write Matlab code to verify the theoretical results by simulating a Triss game, similar to real ones, with three boxes, and then apply the code to this game to see how accurate our methods are.

In the second chapter we define some basic concepts and present important theorems that are used throughout the thesis. In the third chapter we introduce the Edgeworth expansion series, including the improvement of the remainder term presented by Esseen (1968) [11]; we then treat the lattice case, which is the most important one for us since it arises in our application, and apply the Edgeworth method to our main problem. In the fourth chapter we describe the most useful formulas of the saddle point approximation technique, without theoretical details.
Finally we will apply the saddle point approximation method to our problem.

2 Notations and Definitions

2.1 Characteristic Function

The definitions and theorems presented below can be found, for example, in [14], [18], [19].

Definition 2.1 (Distribution Function). For a random variable $X$, $F_X(x) = P(X \le x)$ is called the distribution function of $X$.

Definition 2.2 (Characteristic Function). For a random variable $X$, let
$$\psi_X(t) = E\left[e^{itX}\right] = \int_{-\infty}^{\infty} e^{itx}\, dF_X(x),$$
called the characteristic function of $X$. Here the integral is in the usual Stieltjes sense.

Theorem 2.1. Let $X$ be a random variable with distribution function $F$ and characteristic function $\psi_X(t)$.

1. If $E|X|^n < \infty$ for some $n = 1, 2, \ldots$, then
$$\left| \psi_X(t) - \sum_{j=0}^{n} \frac{(it)^j}{j!}\, EX^j \right| \le E \min\left\{ \frac{2|t|^n |X|^n}{n!},\; \frac{|t|^{n+1} |X|^{n+1}}{(n+1)!} \right\}.$$
In particular,
$$|\psi_X(t) - 1| \le E \min\{2, |tX|\};$$
if $E|X| < \infty$, then
$$|\psi_X(t) - 1 - itEX| \le E \min\left\{2|tX|,\; t^2 X^2/2\right\};$$
and if $EX^2 < \infty$, then
$$\left| \psi_X(t) - 1 - itEX + \frac{t^2 EX^2}{2} \right| \le E \min\left\{ t^2 X^2,\; \frac{|tX|^3}{6} \right\}.$$

2. If $E|X|^n < \infty$ for all $n$, and $\frac{|t|^n}{n!} E|X|^n \to 0$ as $n \to \infty$ for all $t \in \mathbb{R}$, then
$$\psi_X(t) = 1 + \sum_{j=1}^{\infty} \frac{(it)^j}{j!}\, EX^j.$$

Theorem 2.2 (Characteristic Function of Normal Random Variables). If $X \in N(\mu, \sigma)$, then its characteristic function is
$$\psi_X(t) = \exp\left(i\mu t - \frac{\sigma^2 t^2}{2}\right). \qquad (1)$$

To make the paper more self-contained, a proof is included.

Proof: We know that
\begin{align*}
\psi_X(t) &= \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{itx}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}\, dx \\
&= \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{itx}\, e^{-\frac{x^2 - 2x\mu + \mu^2}{2\sigma^2}}\, dx \\
&= \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-\frac{(x^2 - 2x\mu + \mu^2) - 2itx\sigma^2}{2\sigma^2}}\, dx \\
&= \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-\frac{x^2 - 2x\mu - 2itx\sigma^2}{2\sigma^2}}\, e^{-\frac{\mu^2}{2\sigma^2}}\, dx \\
&= \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-\frac{x^2 - 2(\mu + it\sigma^2)x}{2\sigma^2}}\, e^{-\frac{\mu^2}{2\sigma^2}}\, dx \\
&= \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-\frac{x^2 - 2(\mu + it\sigma^2)x + (\mu + it\sigma^2)^2}{2\sigma^2} + \frac{(\mu + it\sigma^2)^2}{2\sigma^2}}\, e^{-\frac{\mu^2}{2\sigma^2}}\, dx \\
&= \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-\frac{(x - \mu - it\sigma^2)^2}{2\sigma^2}}\, e^{\frac{(\mu + it\sigma^2)^2 - \mu^2}{2\sigma^2}}\, dx \\
&= \frac{\exp\left(\mu it - \frac{t^2\sigma^2}{2}\right)}{\sigma\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-\frac{(x - \mu - it\sigma^2)^2}{2\sigma^2}}\, dx.
\end{align*}
By substituting $y = \frac{x - \mu - it\sigma^2}{\sigma}$, so that $dy = \frac{dx}{\sigma}$, we get
$$\psi_X(t) = \frac{\exp\left(it\mu - \frac{t^2\sigma^2}{2}\right)}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-y^2/2}\, dy,$$
where $\frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-y^2/2}\, dy = 1$, hence
$$\psi_X(t) = \exp\left(it\mu - \frac{t^2\sigma^2}{2}\right). \qquad \square$$

Using the Maclaurin expansion of (1) we get
$$\psi_X(t) = 1 + \left(\mu it - \frac{t^2\sigma^2}{2}\right) + \frac{1}{2}\left(\mu it - \frac{t^2\sigma^2}{2}\right)^2 + \ldots$$

In addition, if we have two independent normal random variables $X$ and $Y$ with
$$\psi_X(t) = \exp\left(\mu_X it - \frac{t^2\sigma_X^2}{2}\right), \qquad \psi_Y(t) = \exp\left(\mu_Y it - \frac{t^2\sigma_Y^2}{2}\right),$$
then we can easily prove that
$$\psi_{X+Y}(t) = \psi_X(t)\,\psi_Y(t) = \exp\left[(\mu_X + \mu_Y)it - \frac{t^2(\sigma_X^2 + \sigma_Y^2)}{2}\right].$$
For the special case $X \in N(0, 1)$ we have the formula
$$\psi_X(t) = e^{-t^2/2}.$$

2.2 Central Limit Theorem

Theorem 2.3 (Central Limit Theorem). Let $X_1, X_2, \ldots$ be a sequence of independent and identically distributed random variables, each having mean $\mu$ and variance $\sigma^2 < \infty$. Then the distribution of
$$\frac{X_1 + \ldots + X_n - n\mu}{\sigma\sqrt{n}}$$
tends to the standard normal distribution as $n \to \infty$.

The theorem is fundamental in probability theory. One simple derivation is in Blom [14], which we follow below.

Proof: Put $Y_k = (X_k - \mu)/\sigma$ and $S_n = \sum_{k=1}^{n} Y_k/\sqrt{n}$. We will show that
$$\psi_{S_n}(t) = E\left(e^{itS_n}\right)$$
converges to $e^{-t^2/2}$, the characteristic function of the standard normal distribution. By independence,
$$\psi_{S_n}(t) = \psi_{\sum Y_k}(t/\sqrt{n}) = \left[\psi_Y(t/\sqrt{n})\right]^n.$$
Furthermore,
$$\psi_Y(t) = 1 + iE(Y)\,t - \frac{t^2}{2} E(Y^2) + t^3 H(t),$$
where $H(t)$ is bounded in a neighborhood of $0$. We get
$$\psi_Y(t/\sqrt{n}) = 1 + iE(Y)\frac{t}{\sqrt{n}} - E(Y^2)\frac{t^2}{2n} + n^{-3/2} H_n,$$
where $H_n$ is finite, and $E(Y) = 0$, $V(Y) = 1$. Hence
$$\psi_{S_n}(t) = \left[1 - \frac{t^2}{2n} + n^{-3/2} H_n\right]^n$$
and
$$\ln \psi_{S_n}(t) = n \ln\left(1 - \frac{t^2}{2n} + n^{-3/2} H_n\right) = n\left(-\frac{t^2}{2n} + n^{-3/2} H_n\right) \cdot \frac{\ln\left(1 - t^2/2n + n^{-3/2} H_n\right)}{-t^2/2n + n^{-3/2} H_n}.$$
From the logarithm property, $\frac{\ln(1+x)}{x} \to 1$ as $x \to 0$. Thus
$$n\left(-\frac{t^2}{2n} + n^{-3/2} H_n\right) \to -\frac{t^2}{2}, \quad \text{as } n \to \infty,$$
and
$$\frac{\ln\left(1 - t^2/2n + n^{-3/2} H_n\right)}{-t^2/2n + n^{-3/2} H_n} \to 1, \quad \text{as } n \to \infty,$$
so $\ln \psi_{S_n}(t) \to -t^2/2$. Since the characteristic function of $S_n$ converges to the characteristic function of the standard normal distribution, $S_n$ converges in distribution to a standard normal random variable; see e.g. Cramér [4]. $\square$

Theorem 2.4 (Lyapunov's Theorem). Suppose that for each $n$ the sequence $X_1, X_2, \ldots, X_n$ is independent and satisfies $E[X_n] = 0$, $\mathrm{Var}[X_n] = \sigma_n^2$, and $S_N^2 = \sum_{n=1}^{N} \sigma_n^2$. If for some $\delta > 0$ the expected values $E\left[|X_k|^{2+\delta}\right]$ are finite for every $k$, and Lyapunov's condition
$$\lim_{N \to \infty} \frac{1}{S_N^{2+\delta}} \sum_{n=1}^{N} E\left[|X_n|^{2+\delta}\right] = 0$$
holds for some positive $\delta$, then the central limit theorem holds.

Remark 2.5. This theorem can be considered a further development of Lyapunov's solution to the second problem of Chebyshev (see Gnedenko and Kolmogorov [13] for more details), which turned out to be much simpler and more useful in applications of the central limit theorem than earlier solutions.
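The second-moment bound of Theorem 2.1 can be checked exactly for a Bernoulli($p$) variable, whose characteristic function is $\psi(t) = (1-p) + p e^{it}$ and whose moments satisfy $EX = EX^2 = p$. The following is an illustrative sketch in Python (the thesis itself works in Matlab); the values of $p$ and $t$ are arbitrary choices.

```python
# Exact check of the Theorem 2.1 bound
#   |psi(t) - 1 - itEX + t^2 EX^2 / 2| <= E min{t^2 X^2, |tX|^3 / 6}
# for X ~ Bernoulli(p), where EX = EX^2 = p.
import cmath

p, t = 0.4, 1.0
psi = (1 - p) + p * cmath.exp(1j * t)               # characteristic function at t
lhs = abs(psi - 1 - 1j * t * p + (t ** 2) * p / 2)  # left-hand side of the bound
rhs = p * min(t ** 2, abs(t) ** 3 / 6)              # the outcome X = 0 contributes 0
```

For small $t$ the minimum is attained by $|t|^3/6$ and the inequality becomes quite tight, since the remainder of the Maclaurin expansion of $e^{it}$ is of order $t^3/6$.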
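Formula (1) of Theorem 2.2, $\psi_X(t) = \exp(i\mu t - \sigma^2 t^2/2)$ for $X \in N(\mu, \sigma)$, can also be checked numerically by replacing $E[e^{itX}]$ with a Monte Carlo average. Again a Python sketch rather than the thesis's Matlab; the seed, sample size, and parameter values are arbitrary.

```python
# Monte Carlo check of formula (1): average exp(itX) over normal draws
# and compare with the closed form exp(i*mu*t - sigma^2 t^2 / 2).
import numpy as np

rng = np.random.default_rng(seed=1)
mu, sigma, t = 1.0, 2.0, 0.7
x = rng.normal(mu, sigma, size=200_000)

psi_mc = np.mean(np.exp(1j * t * x))                       # estimate of E[exp(itX)]
psi_exact = np.exp(1j * mu * t - sigma ** 2 * t ** 2 / 2)  # formula (1)
err = abs(psi_mc - psi_exact)
```

Since each term $e^{itX}$ has modulus 1, the standard error of the average is of order $1/\sqrt{200{,}000} \approx 0.002$, so the discrepancy `err` should be small.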
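Theorem 2.3 can be illustrated by a small simulation in a lattice case relevant to this thesis: standardized sums of Bernoulli($p$) variables compared with the standard normal distribution function. This is a hedged Python sketch (the thesis's own simulations use Matlab); the parameters and the evaluation point are illustrative.

```python
# CLT simulation: standardize sums of n Bernoulli(p) variables and compare
# the empirical probability P(S_n <= x) with the standard normal CDF Phi(x).
import numpy as np
from math import erf, sqrt

def std_normal_cdf(x):
    # Phi(x) expressed through the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

rng = np.random.default_rng(seed=2)
p, n, reps = 0.3, 400, 20_000
sigma = sqrt(p * (1 - p))

sums = rng.binomial(n, p, size=reps)        # each draw is a sum of n Bernoulli(p)
s = (sums - n * p) / (sigma * sqrt(n))      # standardized as in Theorem 2.3
gap = abs(np.mean(s <= 0.5) - std_normal_cdf(0.5))
```

Because the summands are lattice variables, the discreteness contributes an error of order $1/(\sigma\sqrt{n})$ on top of the Monte Carlo noise, which is exactly the situation the continuity-corrected Edgeworth series of Section 3.3 is designed to handle.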