
Functions of Random Variables, Expectation and Variance

ECE 313 with Engineering Applications, Lecture 13
Professor Ravi K. Iyer
Dept. of Electrical and Computer Engineering
University of Illinois at Urbana-Champaign

Iyer - Lecture 13 ECE 313 - Fall 2013

Today’s Topics

• Functions of a Random Variable
• Expectation of a Function of a Random Variable
• Variance
  – Variance of a Normal Random Variable
  – Variance of the Exponential Distribution

Functions of a Random Variable

• Let Y = φ(X) = X². As an example, X could denote the measurement error in a certain physical experiment, and Y would then be the square of the error (recall the method of least squares).
• Note that F_Y(y) = 0 for y ≤ 0. For y > 0,

  F_Y(y) = P(Y ≤ y) = P(X² ≤ y) = P(−√y ≤ X ≤ √y) = F_X(√y) − F_X(−√y),

  and by differentiation the density of Y is

  f_Y(y) = (1/(2√y)) [f_X(√y) + f_X(−√y)] for y > 0, and 0 otherwise.

Functions of a Random Variable (cont.)

• Let X have the standard normal distribution N(0,1), so that

  f_X(x) = (1/√(2π)) e^(−x²/2),  −∞ < x < ∞.

  Then, for Y = X²,

  f_Y(y) = (1/(2√y)) [ (1/√(2π)) e^(−y/2) + (1/√(2π)) e^(−y/2) ],  y > 0,
         = (1/√(2πy)) e^(−y/2),  y > 0,

  and f_Y(y) = 0 for y ≤ 0.
• This is the chi-squared distribution with one degree of freedom.
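As an informal check of this derivation, the sketch below (my own illustration, not from the lecture) simulates squares of standard normal draws and compares their empirical moments to the chi-squared(1) values, mean 1 and variance 2.

```python
import math
import random
import statistics

random.seed(0)

# Simulate Y = X^2 for X ~ N(0,1); Y should follow chi-squared with 1 d.o.f.
samples = [random.gauss(0.0, 1.0) ** 2 for _ in range(200_000)]

mean_y = statistics.fmean(samples)    # chi-squared(1) mean is 1
var_y = statistics.variance(samples)  # chi-squared(1) variance is 2

# The derived density f_Y(y) = e^(-y/2) / sqrt(2*pi*y), evaluated at y = 1
density_at_1 = math.exp(-0.5) / math.sqrt(2 * math.pi)

print(mean_y, var_y, density_at_1)
```

With 200,000 samples the empirical mean and variance land close to the theoretical values, which is a quick sanity check on the algebra above.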

Functions of a Random Variable (cont.)

• Let X be uniformly distributed on (0,1). We show that Y = −λ⁻¹ ln(1 − X) has an exponential distribution with parameter λ > 0. Observe that Y is a nonnegative random variable, implying F_Y(y) = 0 for y ≤ 0.
• For y > 0, we have

  F_Y(y) = P(Y ≤ y) = P[−λ⁻¹ ln(1 − X) ≤ y]
         = P[ln(1 − X) ≥ −λy]
         = P[(1 − X) ≥ e^(−λy)]   (since eˣ is an increasing function of x)
         = P(X ≤ 1 − e^(−λy))
         = F_X(1 − e^(−λy)).

• But since X is uniform over (0,1), F_X(x) = x for 0 ≤ x ≤ 1. Thus F_Y(y) = 1 − e^(−λy), and therefore Y is exponentially distributed with parameter λ.
• This fact can be used in distribution-driven simulation. In simulation programs it is important to be able to generate values of variables with known distribution functions. Such values are known as random deviates or random variates. Most computer systems provide built-in functions to generate random deviates from the uniform distribution over (0,1), say u; such uniform random deviates are called random numbers.
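The inverse-transform idea above can be sketched directly in code; this is an illustrative implementation (the rate λ = 2 is an arbitrary choice of mine), with the sample mean compared against the exponential mean 1/λ.

```python
import math
import random
import statistics

random.seed(1)
lam = 2.0  # illustrative rate parameter

def exponential_variate(lam: float) -> float:
    """Inverse-transform sampling: if U ~ Uniform(0,1), then
    Y = -(1/lam) * ln(1 - U) is Exponential(lam)."""
    u = random.random()
    return -math.log(1.0 - u) / lam

samples = [exponential_variate(lam) for _ in range(200_000)]
mean_y = statistics.fmean(samples)  # should approach 1/lam = 0.5
print(mean_y)
```

In practice a library routine (e.g. Python's `random.expovariate`) would be used, but the point here is that a uniform random number generator is all that is needed.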

Example 1

• Let X be uniformly distributed on (0,1). We obtain the cumulative distribution function (CDF) of the random variable Y = Xⁿ as follows: for 0 ≤ y ≤ 1,

  F_Y(y) = P{Y ≤ y} = P{Xⁿ ≤ y} = P{X ≤ y^(1/n)} = F_X(y^(1/n)) = y^(1/n).

• Differentiating, the probability density function (PDF) of Y is

  f_Y(y) = (1/n) y^((1/n) − 1) for 0 ≤ y ≤ 1, and 0 otherwise.
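A quick empirical check of the derived CDF (a sketch of my own; the exponent n = 3 and the test point y = 0.5 are arbitrary choices) compares the fraction of simulated Y = Xⁿ values below y against y^(1/n).

```python
import random

random.seed(2)
n = 3  # illustrative exponent

# Y = X^n for X ~ Uniform(0,1); the derived CDF is F_Y(y) = y^(1/n).
samples = [random.random() ** n for _ in range(200_000)]

y = 0.5
empirical = sum(1 for s in samples if s <= y) / len(samples)
theoretical = y ** (1.0 / n)
print(empirical, theoretical)
```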

Expectation of a Function of a Random Variable

• Given a random variable X and its distribution function, or equivalently its pmf/pdf.
• We are interested in calculating not the expected value of X, but the expected value of some function of X, say g(X).
• One way: since g(X) is itself a random variable, it must have a probability distribution, which should be computable from a knowledge of the distribution of X. Once we have obtained the distribution of g(X), we can then compute E[g(X)] by the definition of the expectation.
• Example 1: Suppose X has the following probability mass function: p(0) = 0.2, p(1) = 0.5, p(2) = 0.3. Calculate E[X²].
• Letting Y = X², we have that Y is a random variable that can take on one of the values 0², 1², 2² with respective probabilities:

  p_Y(0) = P{Y = 0²} = 0.2
  p_Y(1) = P{Y = 1²} = 0.5
  p_Y(4) = P{Y = 2²} = 0.3

  Hence, E[X²] = E[Y] = 0(0.2) + 1(0.5) + 4(0.3) = 1.7.

  Note that 1.7 = E[X²] ≠ (E[X])² = 1.21.

Expectation of a Function of a Random Variable (cont.)

• Proposition 2:
  (a) If X is a discrete random variable with probability mass function p(x), then for any real-valued function g,

      E[g(X)] = Σ_{x: p(x)>0} g(x) p(x).

  (b) If X is a continuous random variable with probability density function f(x), then for any real-valued function g,

      E[g(X)] = ∫_{−∞}^{∞} g(x) f(x) dx.

• Example 3: Applying the proposition to Example 1 yields E[X²] = 0²(0.2) + 1²(0.5) + 2²(0.3) = 1.7.

• Example 4: Applying the proposition to Example 2 yields E[X³] = ∫₀¹ x³ dx = 1/4 (since f(x) = 1 for 0 < x < 1).
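Both cases of the proposition can be checked numerically; the sketch below computes the discrete sum for Example 1 and approximates the continuous integral of Example 4 with a midpoint Riemann sum (the grid size m is an arbitrary choice of mine).

```python
# Discrete case: E[g(X)] = sum over x of g(x) p(x), with the pmf from Example 1.
pmf = {0: 0.2, 1: 0.5, 2: 0.3}
e_x2 = sum((x ** 2) * p for x, p in pmf.items())  # should give 1.7

# Continuous case: E[X^3] for X ~ Uniform(0,1) (so f(x) = 1 on (0,1)),
# approximated with a midpoint Riemann sum over m subintervals.
m = 100_000
e_x3 = sum(((k + 0.5) / m) ** 3 for k in range(m)) / m  # should approach 1/4

print(e_x2, e_x3)
```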

Corollary

• If a and b are constants, then E[aX + b] = aE[X] + b.
• The discrete case:

  E[aX + b] = Σ_{x: p(x)>0} (ax + b) p(x)
            = a Σ_{x: p(x)>0} x p(x) + b Σ_{x: p(x)>0} p(x)
            = aE[X] + b.

• The continuous case:

  E[aX + b] = ∫_{−∞}^{∞} (ax + b) f(x) dx
            = a ∫_{−∞}^{∞} x f(x) dx + b ∫_{−∞}^{∞} f(x) dx
            = aE[X] + b.
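The linearity property in the corollary is easy to verify on the pmf from Example 1; the constants a and b below are arbitrary illustrative values.

```python
# Check E[aX + b] = a*E[X] + b using the pmf from Example 1.
pmf = {0: 0.2, 1: 0.5, 2: 0.3}
a, b = 3.0, -2.0  # arbitrary constants for illustration

e_x = sum(x * p for x, p in pmf.items())            # E[X] = 1.1
lhs = sum((a * x + b) * p for x, p in pmf.items())  # E[aX + b], computed directly
rhs = a * e_x + b                                   # a*E[X] + b
print(lhs, rhs)
```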

Moments

• The expected value of a random variable X, E[X], is also referred to as the mean or the first moment of X.

• The quantity E[Xⁿ], n ≥ 1, is called the nth moment of X. We have:

  E[Xⁿ] = Σ_{x: p(x)>0} xⁿ p(x),       if X is discrete,
        = ∫_{−∞}^{∞} xⁿ f(x) dx,      if X is continuous.
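For a concrete instance of the continuous formula, X ~ Uniform(0,1) has nth moment E[Xⁿ] = 1/(n+1); the sketch below (my own, not from the lecture) approximates the defining integral with a midpoint Riemann sum for a few values of n.

```python
# nth moments of X ~ Uniform(0,1): E[X^n] = 1/(n+1), approximated by a
# midpoint Riemann sum of the defining integral (f(x) = 1 on (0,1)).
def nth_moment_uniform(n: int, m: int = 100_000) -> float:
    return sum(((k + 0.5) / m) ** n for k in range(m)) / m

moments = {n: nth_moment_uniform(n) for n in (1, 2, 3)}
print(moments)
```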

• Another quantity of interest is the variance of a random variable X, denoted by Var(X), which is defined by Var(X) = E[(X − E[X])²].

Variance of a Random Variable

• Suppose that X is continuous with density f, and let E[X] = μ. Then,

  Var(X) = E[(X − μ)²]
         = E[X² − 2μX + μ²]
         = ∫_{−∞}^{∞} (x² − 2μx + μ²) f(x) dx
         = ∫_{−∞}^{∞} x² f(x) dx − 2μ ∫_{−∞}^{∞} x f(x) dx + μ² ∫_{−∞}^{∞} f(x) dx
         = E[X²] − 2μ·μ + μ²
         = E[X²] − μ².

• So we obtain the useful identity: Var(X) = E[X²] − (E[X])².
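The identity gives exactly the numbers seen earlier for the pmf of Example 1: E[X²] = 1.7, (E[X])² = 1.21, so Var(X) = 0.49. The short sketch below reproduces that computation.

```python
# Var(X) = E[X^2] - (E[X])^2 applied to the pmf from Example 1.
pmf = {0: 0.2, 1: 0.5, 2: 0.3}
e_x = sum(x * p for x, p in pmf.items())       # E[X]   = 1.1
e_x2 = sum(x * x * p for x, p in pmf.items())  # E[X^2] = 1.7
var_x = e_x2 - e_x ** 2                        # 1.7 - 1.21 = 0.49
print(var_x)
```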

Variance of Normal Random Variable

• Let X be normally distributed with parameters μ and σ². Find Var(X).
• Recalling that E[X] = μ, we have:

  Var(X) = E[(X − μ)²] = (1/(√(2π) σ)) ∫_{−∞}^{∞} (x − μ)² e^(−(x−μ)²/(2σ²)) dx.

• Substituting y = (x − μ)/σ yields:

  Var(X) = (σ²/√(2π)) ∫_{−∞}^{∞} y² e^(−y²/2) dy.

• Integrating by parts (u = y, dv = y e^(−y²/2) dy) gives:

  Var(X) = (σ²/√(2π)) ( [−y e^(−y²/2)]_{−∞}^{∞} + ∫_{−∞}^{∞} e^(−y²/2) dy )
         = (σ²/√(2π)) ∫_{−∞}^{∞} e^(−y²/2) dy
         = σ².
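The result Var(X) = σ² can be confirmed numerically by evaluating the defining integral with a midpoint rule; in this sketch μ and σ are arbitrary illustrative values, and the integration range of ±10σ around the mean truncates a negligible tail.

```python
import math

# Midpoint-rule evaluation of Var(X) = integral of (x - mu)^2 f(x) dx
# for a normal density with illustrative parameters.
mu, sigma = 1.5, 2.0

def normal_pdf(x: float) -> float:
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

lo, hi, m = mu - 10 * sigma, mu + 10 * sigma, 200_000
h = (hi - lo) / m
var = 0.0
for k in range(m):
    x = lo + (k + 0.5) * h
    var += (x - mu) ** 2 * normal_pdf(x) * h

print(var)  # should be close to sigma^2 = 4
```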

Variance of Exponential Distribution

• The density is f(x) = λe^(−λx) for x ≥ 0, and 0 otherwise; we use Var(X) = E[X²] − (E[X])².
• First we compute E[X] = ∫₀^∞ x λ e^(−λx) dx.
• We use integration by parts, choosing

  u = λx,  du = λ dx,  dv = e^(−λx) dx,  v = −e^(−λx)/λ.

• Applying the integration by parts formula ∫ u dv = uv − ∫ v du:

  E[X] = [ (λx)(−e^(−λx)/λ) ]₀^∞ − ∫₀^∞ (−e^(−λx)/λ) λ dx
       = [ −x e^(−λx) ]₀^∞ + ∫₀^∞ e^(−λx) dx
       = 0 + [ −e^(−λx)/λ ]₀^∞
       = 0 − (−1/λ) = 1/λ.

Variance of Exponential Distribution (cont.)

• Now we need to determine E[X²] so we can calculate the variance:

  E[X²] = ∫₀^∞ x² λ e^(−λx) dx.

• Integration by parts again, with

  u = λx²,  du = 2λx dx,  dv = e^(−λx) dx,  v = −e^(−λx)/λ.

• Applying the integration by parts formula:

  E[X²] = [ (λx²)(−e^(−λx)/λ) ]₀^∞ − ∫₀^∞ (−e^(−λx)/λ)(2λx) dx
        = [ −x² e^(−λx) ]₀^∞ + (2/λ) ∫₀^∞ x λ e^(−λx) dx.

• Now, recall that E[X] = ∫₀^∞ x λ e^(−λx) dx = 1/λ, which we can substitute into the equation.

Variance of Exponential Distribution (cont.)

  E[X²] = 0 + (2/λ)(1/λ) = 2/λ².

• Now that we have found E[X] = 1/λ and E[X²] = 2/λ², we can substitute them into Var(X) = E[X²] − (E[X])² to find:

  Var(X) = 2/λ² − 1/λ² = 1/λ².
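A Monte Carlo check of the results E[X] = 1/λ and Var(X) = 1/λ² closes the derivation; the rate λ = 2 below is an arbitrary illustrative choice, and `random.expovariate` is Python's built-in exponential sampler.

```python
import random
import statistics

random.seed(3)
lam = 2.0  # illustrative rate

# For X ~ Exponential(lam): E[X] = 1/lam = 0.5 and Var(X) = 1/lam^2 = 0.25.
samples = [random.expovariate(lam) for _ in range(200_000)]
mean_x = statistics.fmean(samples)
var_x = statistics.variance(samples)
print(mean_x, var_x)
```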
