Functions of Random Variables
Topic 8: The Expected Value

Outline
• Names for $Eg(X)$: means, moments, factorial moments, variance and standard deviation
• Generating functions

Names for $Eg(X)$

Means

If $g(x) = x$, then $\mu_X = EX$ is called variously the distributional mean and the first moment.

• $S_n$, the number of successes in $n$ Bernoulli trials with success parameter $p$, has mean $np$.
• The mean of a geometric random variable with parameter $p$ is $(1-p)/p$.
• The mean of an exponential random variable with parameter $\beta$ is $1/\beta$.
• A standard normal random variable has mean $0$.

Exercise. Find the mean of a Pareto random variable.

$$\int_{-\infty}^{\infty} x f(x)\,dx = \int_{\alpha}^{\infty} x\,\frac{\beta\alpha^{\beta}}{x^{\beta+1}}\,dx = \beta\alpha^{\beta}\int_{\alpha}^{\infty} x^{-\beta}\,dx = \beta\alpha^{\beta}\,\frac{x^{1-\beta}}{1-\beta}\bigg|_{\alpha}^{\infty} = \frac{\alpha\beta}{\beta-1}, \qquad \beta > 1.$$

Moments

In analogy to a similar concept in physics, $EX^m$ is called the $m$-th moment. The second moment in physics is associated with the moment of inertia.

• If $X$ is a Bernoulli random variable, then $X = X^m$. Thus, $EX^m = EX = p$.
• For a uniform random variable on $[0,1]$, the $m$-th moment is $\int_0^1 x^m\,dx = 1/(m+1)$.
• The third moment for $Z$, a standard normal random variable, is $0$. For the fourth moment, integration by parts with $u(z) = z^3$, $v(z) = -\exp(-z^2/2)$, $u'(z) = 3z^2$, $v'(z) = z\exp(-z^2/2)$ gives

$$EZ^4 = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} z^4 \exp\left(-\frac{z^2}{2}\right) dz = -\frac{1}{\sqrt{2\pi}}\,z^3 \exp\left(-\frac{z^2}{2}\right)\bigg|_{-\infty}^{\infty} + \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} 3z^2 \exp\left(-\frac{z^2}{2}\right) dz = 3EZ^2 = 3.$$

Factorial Moments

If $g(x) = (x)_k$, where $(x)_k = x(x-1)\cdots(x-k+1)$, then $E(X)_k$ is called the $k$-th factorial moment. If $X$ is uniformly distributed on $\{1, 2, \ldots, n\}$, then

$$E(X)_k = \sum_{x=1}^{n} (x)_k\,\frac{1}{n} = \frac{1}{n}\sum_{x=1}^{n} \frac{1}{k+1}\,\Delta_+ (x)_{k+1} = \frac{1}{n}\cdot\frac{(n+1)_{k+1}}{k+1}.$$

The Stirling numbers of the second kind give a linear transformation from the factorial moments to the moments.

Variance and Standard Deviation

• If $g(x) = (x - \mu_X)^m$, then $E(X - \mu_X)^m$ is called the $m$-th central moment.
• The most frequently used central moment is the second central moment, $\sigma_X^2 = E(X - \mu_X)^2$, commonly called the variance. We often write $\mathrm{Var}(X)$ for the variance:

$$\sigma_X^2 = \mathrm{Var}(X) = E(X - \mu_X)^2 = EX^2 - 2\mu_X EX + \mu_X^2 = EX^2 - 2\mu_X^2 + \mu_X^2 = EX^2 - \mu_X^2.$$

The square root of the variance is known as the standard deviation and is denoted $\sigma_X$. If we subtract the mean and divide by the standard deviation, then the result
$$Z = \frac{X - \mu}{\sigma}$$
has mean $0$ and variance $1$. $Z$ is called the standardized version of $X$.

For $T$, an exponential random variable,

$$ET^m = \int_0^{\infty} m t^{m-1} P\{T > t\}\,dt = \int_0^{\infty} m t^{m-1} \exp(-\beta t)\,dt = \frac{m}{\beta}\int_0^{\infty} t^{m-1}\,\beta \exp(-\beta t)\,dt = \frac{m}{\beta}\,ET^{m-1}.$$

Thus, by induction, we have that
$$ET^m = \frac{m!}{\beta^m}.$$
Thus,
$$\mathrm{Var}(T) = ET^2 - (ET)^2 = \frac{2}{\beta^2} - \left(\frac{1}{\beta}\right)^2 = \frac{1}{\beta^2}.$$
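To see the moment formula in action, here is a minimal Monte Carlo sketch (an addition to the notes, assuming NumPy is available); the rate $\beta = 2$ and the sample size are arbitrary choices made for illustration.

```python
# Monte Carlo check of the exponential moment formula E[T^m] = m!/beta^m
# and of Var(T) = 1/beta^2 (a sketch; beta and the sample size are arbitrary).
import math
import numpy as np

rng = np.random.default_rng(0)
beta = 2.0                                   # rate parameter of the exponential
sample = rng.exponential(scale=1.0 / beta, size=1_000_000)

for m in range(1, 5):
    empirical = np.mean(sample ** m)         # Monte Carlo estimate of E[T^m]
    exact = math.factorial(m) / beta ** m    # formula derived above
    print(f"m = {m}: empirical {empirical:.4f}, exact {exact:.4f}")

# Variance: E[T^2] - (E[T])^2 = 2/beta^2 - 1/beta^2 = 1/beta^2
print("empirical Var(T):", sample.var(), " exact:", 1 / beta**2)
```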
Exercise. Compute the variance and standard deviation for

1. a single Bernoulli trial:
$$\mathrm{Var}(X) = EX^2 - \mu_X^2 = p - p^2 = p(1-p), \qquad \sigma_X = \sqrt{p(1-p)}.$$

2. a fair die:
$$EX = \frac{7}{2}, \qquad EX^2 = \frac{1}{6}\left(1^2 + 2^2 + 3^2 + 4^2 + 5^2 + 6^2\right) = \frac{91}{6},$$
$$\mathrm{Var}(X) = \frac{91}{6} - \left(\frac{7}{2}\right)^2 = \frac{182 - 147}{12} = \frac{35}{12}, \qquad \sigma_X = \sqrt{\frac{35}{12}}.$$

3. $Y = a + bX$. Answer in terms of the variance or standard deviation of $X$:
$$\mu_Y = a + b\mu_X, \qquad \sigma_Y^2 = E[(Y - \mu_Y)^2] = E[((a + bX) - (a + b\mu_X))^2] = b^2\sigma_X^2, \qquad \sigma_Y = |b|\,\sigma_X.$$

Skewness

The third moment of the standardized random variable,
$$\gamma_1 = E\left[\left(\frac{X - \mu}{\sigma}\right)^3\right],$$
is called the skewness. Random variables with positive skewness have a more pronounced tail to the density on the right. Random variables with negative skewness have a more pronounced tail to the density on the left.

Exercise. Find the skewness of $X$, a Bernoulli random variable.

Answer. The third central moment is
$$E[(X - p)^3] = (-p)^3 P\{X = 0\} + (1-p)^3 P\{X = 1\} = -p^3(1-p) + (1-p)^3 p = p(1-p)\left(-p^2 + (1-p)^2\right) = p(1-p)(1 - 2p).$$
Thus,
$$E\left[\left(\frac{X - p}{\sqrt{p(1-p)}}\right)^3\right] = \frac{p(1-p)(1-2p)}{(p(1-p))^{3/2}} = \frac{1 - 2p}{\sqrt{p(1-p)}},$$
and $X$ is positively skewed if $p < 1/2$ and negatively skewed if $p > 1/2$.

Kurtosis

The fourth moment of the standard normal random variable is $3$. The kurtosis compares the fourth moment of the standardized random variable to this value:
$$E\left[\left(\frac{X - \mu}{\sigma}\right)^4\right] - 3.$$
Random variables with a negative kurtosis are called platykurtic; platy means broad. Those with a positive kurtosis are called leptokurtic; lepto means slender.

Generating Functions

Characteristic Function

For generating functions, it is useful to recall that if $h$ has a converging infinite Taylor series in an interval about the point $x = a$, then
$$h(x) = \sum_{n=0}^{\infty} \frac{h^{(n)}(a)}{n!}(x - a)^n,$$
where $h^{(n)}(a)$ is the $n$-th derivative of $h$ evaluated at $x = a$.

If $g(x) = \exp(i\theta x)$, then
$$\phi_X(\theta) = E\exp(i\theta X)$$
is called the Fourier transform or the characteristic function. Because $|g(x)| = |\exp(i\theta x)| = 1$, the expectation exists for any random variable.

Moment Generating Function

Similarly, if $g(x) = \exp(tx)$, then
$$M_X(t) = E\exp(tX)$$
is called the Laplace transform or the moment generating function. To see the basis for this name, note that if we can reverse the order of differentiation and integration, then
$$\frac{d}{dt}M_X(t) = E[X\exp(tX)], \qquad M_X'(0) = EX,$$
$$\frac{d^2}{dt^2}M_X(t) = E[X^2\exp(tX)], \qquad M_X''(0) = EX^2,$$
$$\vdots$$
$$\frac{d^k}{dt^k}M_X(t) = E[X^k\exp(tX)], \qquad M_X^{(k)}(0) = EX^k.$$

For a standard normal random variable $Z$,
$$M_Z(t) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{tz}\exp\left(-\frac{z^2}{2}\right) dz = e^{t^2/2}\,\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} \exp\left(-\frac{z^2 - 2tz + t^2}{2}\right) dz = e^{t^2/2}\,\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} \exp\left(-\frac{(z-t)^2}{2}\right) dz = e^{t^2/2}.$$

Writing out the power series,
$$M_Z(t) = \sum_{n=0}^{\infty} \frac{M_Z^{(n)}(0)}{n!}\,t^n = \sum_{n=0}^{\infty} \frac{EZ^n}{n!}\,t^n \qquad \text{and} \qquad e^{t^2/2} = \sum_{k=0}^{\infty} \frac{t^{2k}}{2^k k!}.$$
Thus, the odd moments are $0$ and the even moments are
$$EZ^{2k} = \frac{(2k)!}{2^k k!}, \qquad \text{so} \qquad EZ^4 = \frac{4!}{2^2\,2!} = 3, \quad EZ^6 = \frac{6!}{2^3\,3!} = 15, \ldots$$

For an exponential random variable $T$ and for $t < \beta$,
$$M_T(t) = \int_0^{\infty} e^{tu}\,\beta\exp(-\beta u)\,du = \beta\int_0^{\infty} \exp(u(t - \beta))\,du = \frac{\beta}{\beta - t} = \frac{1}{1 - t/\beta} = \sum_{m=0}^{\infty} \left(\frac{t}{\beta}\right)^m,$$
and the moments of $T$ can be determined by equating the coefficients of $t^m$:
$$\frac{M_T^{(m)}(0)}{m!} = \frac{ET^m}{m!} = \frac{1}{\beta^m} \qquad \text{and so} \qquad ET^m = \frac{m!}{\beta^m}.$$

Cumulant Generating Function

The cumulant generating function is defined to be
$$K_X(t) = \log M_X(t).$$
The $k$-th cumulant,
$$\kappa_k = \frac{d^k}{dt^k}K_X(0),$$
can be determined from the $k$-th term in the Taylor series expansion at $0$. For the first and second cumulants, using $M_X(0) = 1$,
$$K_X'(t) = \frac{M_X'(t)}{M_X(t)}, \qquad K_X'(0) = M_X'(0) = EX,$$
$$K_X''(t) = \frac{M_X''(t)M_X(t) - M_X'(t)^2}{M_X(t)^2}, \qquad K_X''(0) = M_X''(0) - M_X'(0)^2 = EX^2 - (EX)^2 = \mathrm{Var}(X).$$
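As a quick check on this machinery (an addition to the notes, assuming SymPy is available), the following sketch differentiates $M_Z(t) = e^{t^2/2}$ and $K_Z(t) = \log M_Z(t)$ symbolically and recovers the moments and cumulants of the standard normal computed above.

```python
# Symbolic check of the moment and cumulant generating functions for Z ~ N(0,1).
# A sketch assuming SymPy; M_Z(t) = exp(t^2/2) is the formula derived above.
import sympy as sp

t = sp.symbols('t')
M = sp.exp(t**2 / 2)      # moment generating function of a standard normal
K = sp.log(M)             # cumulant generating function

for m in (2, 4, 6):
    moment = sp.diff(M, t, m).subs(t, 0)     # E[Z^m] = M^(m)(0)
    print(f"E[Z^{m}] =", moment)             # expect 1, 3, 15

# First two cumulants: kappa_1 = EZ = 0 and kappa_2 = Var(Z) = 1
print("kappa_1 =", sp.simplify(sp.diff(K, t, 1).subs(t, 0)))
print("kappa_2 =", sp.simplify(sp.diff(K, t, 2).subs(t, 0)))
```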
Probability Generating Function

If $X$ is $\mathbb{Z}^+$-valued and $g(x) = z^x$, then
$$\rho_X(z) = Ez^X = \sum_{x=0}^{\infty} P\{X = x\}\,z^x = \sum_{x=0}^{\infty} f_X(x)\,z^x$$
is called the (probability) generating function. $\rho_X$ is an analytic function whose radius of convergence is at least $1$. Thus we can obtain the derivatives of $\rho_X$ by differentiating term by term. At $z = 0$ we obtain
$$f_X(x) = \frac{1}{x!}\,\frac{d^x}{dz^x}\rho_X(0).$$

If we can take the derivative at $z = 1$, we have
$$\frac{d}{dz}\rho_X(z) = E[Xz^{X-1}], \qquad \rho_X'(1) = EX,$$
$$\frac{d^2}{dz^2}\rho_X(z) = E[X(X-1)z^{X-2}], \qquad \rho_X''(1) = E(X)_2,$$
$$\vdots$$
$$\frac{d^k}{dz^k}\rho_X(z) = E[(X)_k z^{X-k}], \qquad \rho_X^{(k)}(1) = E(X)_k.$$

For $X$, a geometric random variable,
$$\rho_X(z) = Ez^X = \sum_{x=0}^{\infty} f_X(x)\,z^x = \sum_{x=0}^{\infty} p(1-p)^x z^x = \frac{p}{1 - (1-p)z},$$
$$\rho_X'(z) = \frac{p(1-p)}{(1 - (1-p)z)^2}, \qquad \rho_X'(1) = \frac{1-p}{p},$$
$$\rho_X''(z) = \frac{2p(1-p)^2}{(1 - (1-p)z)^3}, \qquad \rho_X''(1) = \frac{2(1-p)^2}{p^2}.$$

For $X$, a binomial random variable based on $n$ Bernoulli trials,
$$\rho_X(z) = Ez^X = \sum_{x=0}^{n} f_X(x)\,z^x = \sum_{x=0}^{n} \binom{n}{x} p^x (1-p)^{n-x} z^x = ((1-p) + zp)^n$$
by the binomial theorem.
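The identities $\rho_X^{(k)}(1) = E(X)_k$ can likewise be verified symbolically. The sketch below (an addition, again assuming SymPy) reproduces the geometric results $\rho_X'(1) = (1-p)/p$ and $\rho_X''(1) = 2(1-p)^2/p^2$ and combines them into the variance via $\mathrm{Var}(X) = E(X)_2 + EX - (EX)^2$.

```python
# Derivatives of the geometric probability generating function at z = 1 give
# the factorial moments E(X)_k; a sketch assuming SymPy.
import sympy as sp

z, p = sp.symbols('z p', positive=True)
rho = p / (1 - (1 - p) * z)        # PGF of a geometric(p) random variable on {0, 1, 2, ...}

EX = sp.simplify(sp.diff(rho, z, 1).subs(z, 1))    # rho'(1)  = E[X]
EX2 = sp.simplify(sp.diff(rho, z, 2).subs(z, 1))   # rho''(1) = E[X(X-1)]
print("E[X]      =", EX)           # (1 - p)/p
print("E[X(X-1)] =", EX2)          # 2(1 - p)^2/p^2

# Variance from factorial moments: Var(X) = E[X(X-1)] + E[X] - (E[X])^2
print("Var(X)    =", sp.simplify(EX2 + EX - EX**2))   # (1 - p)/p^2
```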