CHAPTER 7

Continuous distributions

7.1. Basic theory

7.1.1. Definition, PDF, CDF. We start with the definition of a continuous random variable.

Definition (Continuous random variables). A random variable $X$ is said to have a continuous distribution if there exists a non-negative function $f = f_X$ such that
$$\mathbb{P}(a \leqslant X \leqslant b) = \int_a^b f(x)\,dx$$
for every $a$ and $b$. The function $f$ is called the density function for $X$, or the PDF for $X$. More precisely, such an $X$ is said to have an absolutely continuous distribution.

Note that $\int_{-\infty}^{\infty} f(x)\,dx = \mathbb{P}(-\infty < X < \infty) = 1$. In particular, $\mathbb{P}(X = a) = \int_a^a f(x)\,dx = 0$ for every $a$.

Example 7.1. Suppose we are given that $f(x) = c/x^3$ for $x \geqslant 1$ and $0$ otherwise. Since $\int_{-\infty}^{\infty} f(x)\,dx = 1$ and
$$\int_{-\infty}^{\infty} f(x)\,dx = c \int_1^{\infty} \frac{1}{x^3}\,dx = \frac{c}{2},$$
we have $c = 2$.

PMF or PDF? Probability mass function (PMF) and (probability) density function (PDF) are two names for the same notion in the case of discrete random variables. We say PDF, or simply density function, for a general random variable, and we use PMF only for discrete random variables.

Definition (Cumulative distribution function (CDF)). The distribution function of $X$ is defined as
$$F(y) = F_X(y) := \mathbb{P}(-\infty < X \leqslant y) = \int_{-\infty}^{y} f(x)\,dx.$$
It is also called the cumulative distribution function (CDF) of $X$.

We can define the CDF for any random variable, not just continuous ones, by setting $F(y) := \mathbb{P}(X \leqslant y)$. Recall that we introduced it in Definition 5.3 for discrete random variables. In that case it is not particularly useful, although it does serve to unify discrete and continuous random variables.

In the continuous case, the fundamental theorem of calculus tells us, provided $f$ satisfies some conditions, that
$$f(y) = F'(y).$$

By analogy with the discrete case, we define the expectation of a continuous random variable.

7.1.2. Expectation, discrete approximation to continuous random variables.
Definition (Expectation). For a continuous random variable $X$ with density function $f$ we define its expectation by
$$\mathbb{E}X = \int_{-\infty}^{\infty} x f(x)\,dx$$
if this integral is absolutely convergent. In this case we call $X$ integrable. Recall that this integral is absolutely convergent if
$$\int_{-\infty}^{\infty} |x| f(x)\,dx < \infty.$$

In the example above,
$$\mathbb{E}X = \int_1^{\infty} x \cdot \frac{2}{x^3}\,dx = 2 \int_1^{\infty} x^{-2}\,dx = 2.$$
Later, in Example 10.1, we will see that a continuous random variable with the Cauchy distribution has infinite expectation.

Proposition 7.1 (Discrete approximation to continuous random variables). Suppose $X$ is a nonnegative continuous random variable with finite expectation. Then there is a sequence of discrete random variables $\{X_n\}_{n=1}^{\infty}$ such that
$$\mathbb{E}X_n \xrightarrow[n\to\infty]{} \mathbb{E}X.$$

Proof. First observe that if a continuous random variable $X$ is nonnegative, then its density satisfies $f(x) = 0$ for $x < 0$. In particular, $F(y) = 0$ for $y \leqslant 0$, though the latter is not needed for our proof. Thus for such a random variable
$$\mathbb{E}X = \int_0^{\infty} x f(x)\,dx.$$

Suppose $n \in \mathbb{N}$; then we define $X_n(\omega) = k/2^n$ if $k/2^n \leqslant X(\omega) < (k+1)/2^n$, for $k \in \mathbb{N} \cup \{0\}$. This means that we are approximating $X$ from below by the largest multiple of $2^{-n}$ that is still below the value of $X$. Each $X_n$ is discrete, and the $X_n$ increase to $X$ for each $\omega \in S$.

Consider the sequence $\{\mathbb{E}X_n\}_{n=1}^{\infty}$. This is an increasing sequence of positive numbers, and therefore it has a limit, possibly infinite. We want to show that the limit is finite and equal to $\mathbb{E}X$.
We have
$$\mathbb{E}X_n = \sum_{k=0}^{\infty} \frac{k}{2^n}\, \mathbb{P}\!\left(X_n = \frac{k}{2^n}\right) = \sum_{k=0}^{\infty} \frac{k}{2^n}\, \mathbb{P}\!\left(\frac{k}{2^n} \leqslant X < \frac{k+1}{2^n}\right) = \sum_{k=0}^{\infty} \frac{k}{2^n} \int_{k/2^n}^{(k+1)/2^n} f(x)\,dx = \sum_{k=0}^{\infty} \int_{k/2^n}^{(k+1)/2^n} \frac{k}{2^n}\, f(x)\,dx.$$

If $x \in [k/2^n, (k+1)/2^n)$, then $x$ differs from $k/2^n$ by at most $1/2^n$, and therefore
$$0 \leqslant \int_{k/2^n}^{(k+1)/2^n} x f(x)\,dx - \int_{k/2^n}^{(k+1)/2^n} \frac{k}{2^n}\, f(x)\,dx = \int_{k/2^n}^{(k+1)/2^n} \left(x - \frac{k}{2^n}\right) f(x)\,dx \leqslant \frac{1}{2^n} \int_{k/2^n}^{(k+1)/2^n} f(x)\,dx.$$

Note that
$$\sum_{k=0}^{\infty} \int_{k/2^n}^{(k+1)/2^n} x f(x)\,dx = \int_0^{\infty} x f(x)\,dx$$
and
$$\sum_{k=0}^{\infty} \frac{1}{2^n} \int_{k/2^n}^{(k+1)/2^n} f(x)\,dx = \frac{1}{2^n} \sum_{k=0}^{\infty} \int_{k/2^n}^{(k+1)/2^n} f(x)\,dx = \frac{1}{2^n} \int_0^{\infty} f(x)\,dx = \frac{1}{2^n}.$$

Therefore
$$0 \leqslant \mathbb{E}X - \mathbb{E}X_n = \int_0^{\infty} x f(x)\,dx - \sum_{k=0}^{\infty} \int_{k/2^n}^{(k+1)/2^n} \frac{k}{2^n}\, f(x)\,dx = \sum_{k=0}^{\infty} \left( \int_{k/2^n}^{(k+1)/2^n} x f(x)\,dx - \int_{k/2^n}^{(k+1)/2^n} \frac{k}{2^n}\, f(x)\,dx \right) \leqslant \sum_{k=0}^{\infty} \frac{1}{2^n} \int_{k/2^n}^{(k+1)/2^n} f(x)\,dx = \frac{1}{2^n} \xrightarrow[n\to\infty]{} 0. \qquad \square$$

We will not prove the following, but it is an interesting exercise: if $X_m$ is any sequence of discrete random variables that increase up to $X$, then $\lim_{m\to\infty} \mathbb{E}X_m$ will have the same value $\mathbb{E}X$. This fact is useful for showing linearity: if $X$ and $Y$ are positive random variables with finite expectations, then we can take $X_m$ discrete increasing up to $X$ and $Y_m$ discrete increasing up to $Y$. Then $X_m + Y_m$ is discrete and increases up to $X + Y$, so we have
$$\mathbb{E}(X+Y) = \lim_{m\to\infty} \mathbb{E}(X_m + Y_m) = \lim_{m\to\infty} \mathbb{E}X_m + \lim_{m\to\infty} \mathbb{E}Y_m = \mathbb{E}X + \mathbb{E}Y.$$
Note that we cannot easily reuse the approximations to $X$, $Y$ and $X+Y$ from the previous proof in this argument, since $X_m + Y_m$ might not be an approximation of the same kind. If $X$ is not necessarily positive, we can show a similar result; we will not do the details.

Similarly to the discrete case, we have

Proposition 7.2. Suppose $X$ is a continuous random variable with density $f_X$ and $g$ is a real-valued function. Then
$$\mathbb{E}g(X) = \int_{-\infty}^{\infty} g(x) f_X(x)\,dx,$$
as long as the expectation of the random variable $g(X)$ makes sense.
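The squeeze $0 \leqslant \mathbb{E}X - \mathbb{E}X_n \leqslant 2^{-n}$ established in the proof of Proposition 7.1 can be watched numerically. The sketch below is our own construction, not part of the text: it uses the Example 7.1 density $f(x) = 2/x^3$ on $[1,\infty)$, for which $\mathbb{E}X = 2$ and the cell integrals have closed forms $\int_a^b f = 1/a^2 - 1/b^2$ and $\int_a^b x f = 2(1/a - 1/b)$; the helper names `dyadic_floor` and `expected_gap` and the tail cutoff are arbitrary choices.

```python
import math

def dyadic_floor(x, n):
    """X_n: round x down to the largest multiple of 2**-n not exceeding x."""
    return math.floor(x * 2**n) / 2**n

def expected_gap(n, cutoff=100.0):
    """EX - EX_n for f(x) = 2/x**3 (x >= 1), computed cell by cell as in
    the proof: sum over k of int_{k/2^n}^{(k+1)/2^n} (x - k/2^n) f(x) dx.
    Truncated at `cutoff`; the neglected tail is at most 2**-n / cutoff**2."""
    step = 2.0**-n
    total = 0.0
    k = 2**n  # the first cell with any mass has left endpoint exactly 1
    while k * step < cutoff:
        l, r = k * step, (k + 1) * step
        cell_mean = 2.0 * (1.0 / l - 1.0 / r)   # int_l^r x f(x) dx
        cell_mass = 1.0 / l**2 - 1.0 / r**2     # int_l^r f(x) dx
        total += cell_mean - l * cell_mass
        k += 1
    return total

print(dyadic_floor(2.718, 3))  # 2.625: largest multiple of 1/8 below 2.718

# Each gap lies in [0, 2**-n] and shrinks as n grows, as the proof predicts.
for n in (1, 2, 4, 8):
    print(n, 2**-n, round(expected_gap(n), 6))
```

Each cell contributes a nonnegative amount bounded by $2^{-n}$ times the cell's mass, so the printed gaps are squeezed exactly as in the displayed inequality chain above.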
As in the discrete case, this allows us to define moments, and in particular the variance
$$\operatorname{Var} X := \mathbb{E}[X - \mathbb{E}X]^2.$$
As an example of these calculations, let us look at the uniform distribution.

Uniform distribution. We say that a random variable $X$ has a uniform distribution on $[a,b]$ if $f_X(x) = \frac{1}{b-a}$ for $a \leqslant x \leqslant b$ and $0$ otherwise.

To calculate the expectation of $X$,
$$\mathbb{E}X = \int_{-\infty}^{\infty} x f_X(x)\,dx = \int_a^b x \cdot \frac{1}{b-a}\,dx = \frac{1}{b-a} \int_a^b x\,dx = \frac{1}{b-a}\left(\frac{b^2}{2} - \frac{a^2}{2}\right) = \frac{a+b}{2}.$$
This is what one would expect. To calculate the variance, we first calculate
$$\mathbb{E}X^2 = \int_{-\infty}^{\infty} x^2 f_X(x)\,dx = \int_a^b x^2 \cdot \frac{1}{b-a}\,dx = \frac{a^2+ab+b^2}{3}.$$
We then do some algebra to obtain
$$\operatorname{Var} X = \mathbb{E}X^2 - (\mathbb{E}X)^2 = \frac{(b-a)^2}{12}.$$

7.2. Further examples and applications

Example 7.2. Suppose $X$ has the PDF
$$f(x) = \begin{cases} \dfrac{2}{x^3} & x \geqslant 1 \\ 0 & x < 1. \end{cases}$$
Find the CDF of $X$, that is, find $F_X(x)$. Use the CDF to find $\mathbb{P}(3 \leqslant X \leqslant 4)$.

Solution: we have $F_X(x) = 0$ if $x \leqslant 1$, and we need to compute
$$F_X(x) = \mathbb{P}(X \leqslant x) = \int_1^x \frac{2}{y^3}\,dy = 1 - \frac{1}{x^2}$$
when $x > 1$. We can use this formula to find the following probability:
$$\mathbb{P}(3 \leqslant X \leqslant 4) = \mathbb{P}(X \leqslant 4) - \mathbb{P}(X < 3) = F_X(4) - F_X(3) = \left(1 - \frac{1}{4^2}\right) - \left(1 - \frac{1}{3^2}\right) = \frac{7}{144}.$$

Example 7.3. Suppose $X$ has density
$$f_X(x) = \begin{cases} 2x & 0 \leqslant x \leqslant 1 \\ 0 & \text{otherwise.} \end{cases}$$
Find $\mathbb{E}X$.

Solution: we have
$$\mathbb{E}[X] = \int_{-\infty}^{\infty} x f_X(x)\,dx = \int_0^1 x \cdot 2x\,dx = \frac{2}{3}.$$

Example 7.4. The density of $X$ is given by
$$f_X(x) = \begin{cases} \dfrac{1}{2} & \text{if } 0 \leqslant x \leqslant 2 \\ 0 & \text{otherwise.} \end{cases}$$
Find $\mathbb{E}e^X$.

Solution: using Proposition 7.2 with $g(x) = e^x$, we have
$$\mathbb{E}e^X = \int_0^2 e^x \cdot \frac{1}{2}\,dx = \frac{1}{2}\left(e^2 - 1\right).$$

Example 7.5. Suppose $X$ has density
$$f(x) = \begin{cases} 2x & 0 \leqslant x \leqslant 1 \\ 0 & \text{otherwise.} \end{cases}$$
Find $\operatorname{Var}(X)$.

Solution: in Example 7.3 we found $\mathbb{E}[X] = \frac{2}{3}$. Now
$$\mathbb{E}X^2 = \int_0^1 x^2 \cdot 2x\,dx = 2\int_0^1 x^3\,dx = \frac{1}{2}.$$
Thus
$$\operatorname{Var}(X) = \frac{1}{2} - \left(\frac{2}{3}\right)^2 = \frac{1}{18}.$$

Example 7.6.

© Copyright 2017 Phanuel Mariano, Patricia Alonso Ruiz, Copyright 2020 Masha Gordina.
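The closed-form answers in the uniform-distribution computation and in Examples 7.2 through 7.5 can be spot-checked with plain midpoint-rule quadrature. This is a minimal sketch of ours, not from the text; the helper `integrate` and the sample endpoints $a=3$, $b=8$ for the uniform case are arbitrary choices.

```python
import math

def integrate(fn, a, b, n=100_000):
    """Composite midpoint-rule quadrature; plenty accurate for these checks."""
    h = (b - a) / n
    return sum(fn(a + (i + 0.5) * h) for i in range(n)) * h

# Example 7.2: f(x) = 2/x**3 on [1, oo) gives P(3 <= X <= 4) = 7/144.
p = integrate(lambda x: 2 / x**3, 3.0, 4.0)

# Examples 7.3 and 7.5: f(x) = 2x on [0, 1] gives EX = 2/3 and Var X = 1/18.
ex = integrate(lambda x: x * 2 * x, 0.0, 1.0)
ex2 = integrate(lambda x: x**2 * 2 * x, 0.0, 1.0)

# Example 7.4: f = 1/2 on [0, 2] gives E e^X = (e**2 - 1)/2.
ee = integrate(lambda x: 0.5 * math.exp(x), 0.0, 2.0)

# Uniform on [a, b]: EX = (a+b)/2 and Var X = (b-a)**2/12; here a=3, b=8.
a, b = 3.0, 8.0
u1 = integrate(lambda x: x / (b - a), a, b)
u2 = integrate(lambda x: x**2 / (b - a), a, b)

for got, want in [(p, 7 / 144), (ex, 2 / 3), (ex2 - ex**2, 1 / 18),
                  (ee, (math.e**2 - 1) / 2),
                  (u1, (a + b) / 2), (u2 - u1**2, (b - a)**2 / 12)]:
    print(abs(got - want) < 1e-6)
```

Every comparison agrees with the corresponding closed-form value, confirming the integrals worked out above.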