
Discrete Random Variables

• The word random effectively means unpredictable.
• In engineering practice we may treat some signals as random to simplify the analysis, even though they may not actually be random.

Random Variables Defined

A random variable (X) is the assignment of numerical values to the outcomes of experiments.

[Figure: Examples of assignments of numbers to the outcomes of experiments]

Discrete-Value vs Continuous-Value Random Variables

• A discrete-value (DV) random variable has a set of distinct values separated by values that cannot occur.
• A random variable associated with the outcomes of coin flips, card draws, dice tosses, etc. would be a DV random variable.
• A continuous-value (CV) random variable may take on any value in a continuum of values, which may be finite or infinite in size.

Probability Mass Functions

The probability mass function (pmf) for a discrete random variable X is

P_X(x) = P[X = x]

A DV random variable X is a Bernoulli random variable if it takes on only two values, 0 and 1, and its pmf is

P_X(x) = \begin{cases} 1 - p, & x = 0 \\ p, & x = 1 \\ 0, & \text{otherwise} \end{cases}

where 0 < p < 1.

[Figure: Example of a Bernoulli pmf]
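A minimal Python sketch of this pmf (the function name bernoulli_pmf and the example value p = 0.3 are arbitrary choices, not from the slides):

```python
def bernoulli_pmf(x, p):
    """P[X = x] for a Bernoulli random variable: P[X = 1] = p, P[X = 0] = 1 - p."""
    if x == 0:
        return 1 - p
    if x == 1:
        return p
    return 0.0

# Illustrative value: p = 0.3
print([bernoulli_pmf(x, 0.3) for x in (0, 1, 2)])  # [0.7, 0.3, 0.0]
```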

If we perform n trials of an experiment whose outcome is Bernoulli distributed, and if X represents the total number of 1's that occur in those n trials, then X is said to be a Binomial random variable and its pmf is

P_X(x) = \begin{cases} \binom{n}{x} p^x (1 - p)^{n - x}, & x \in \{0, 1, 2, \dots, n\} \\ 0, & \text{otherwise} \end{cases}

[Figure: Binomial pmf]
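As a rough sketch of how the Binomial pmf might be evaluated in Python (math.comb supplies the binomial coefficient; the values n = 10 and p = 0.3 are arbitrary):

```python
from math import comb

def binomial_pmf(x, n, p):
    """P[X = x] for a Binomial(n, p) random variable."""
    if x < 0 or x > n:
        return 0.0
    return comb(n, x) * p**x * (1 - p)**(n - x)

# The pmf sums to 1 over x = 0, ..., n (illustrative values n = 10, p = 0.3)
print(sum(binomial_pmf(x, 10, 0.3) for x in range(11)))  # ~1.0
```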

If we perform Bernoulli trials until a 1 (success) occurs and the probability of a 1 on any single trial is p, the probability that the first success will occur on the kth trial is p(1 - p)^{k - 1}. A DV random variable X is said to be a Geometric random variable if its pmf is

P_X(x) = \begin{cases} p(1 - p)^{x - 1}, & x \in \{1, 2, 3, \dots\} \\ 0, & \text{otherwise} \end{cases}
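A short sketch of the Geometric pmf in Python (the value p = 0.25 is an arbitrary illustration):

```python
def geometric_pmf(x, p):
    """P[X = x]: probability the first success occurs on trial x."""
    if x < 1:
        return 0.0
    return p * (1 - p)**(x - 1)

# The probabilities sum to 1 over x = 1, 2, 3, ... (illustrative value p = 0.25)
print(sum(geometric_pmf(x, 0.25) for x in range(1, 200)))  # ~1.0
```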

[Figure: Geometric pmf]

If we perform Bernoulli trials until the rth 1 occurs and the probability of a 1 on any single trial is p, the probability that the rth success will occur on the kth trial is

P[\text{rth success on kth trial}] = \binom{k - 1}{r - 1} p^r (1 - p)^{k - r}

A DV random variable Y is said to be a negative Binomial or Pascal random variable with parameters r and p if its pmf is

P_Y(y) = \begin{cases} \binom{y - 1}{r - 1} p^r (1 - p)^{y - r}, & y \in \{r, r + 1, \dots\} \\ 0, & \text{otherwise} \end{cases}
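A sketch of the Pascal pmf in Python; with r = 1 it reduces to the Geometric pmf, which gives a quick sanity check (the values r, p and y below are arbitrary):

```python
from math import comb

def pascal_pmf(y, r, p):
    """P[Y = y]: probability the r-th success occurs on trial y."""
    if y < r:
        return 0.0
    return comb(y - 1, r - 1) * p**r * (1 - p)**(y - r)

# With r = 1 the Pascal pmf matches the Geometric pmf (illustrative values)
print(pascal_pmf(5, 1, 0.25), 0.25 * 0.75**4)  # both ~0.0791
```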

[Figure: Negative Binomial (Pascal) pmf]

Suppose we randomly place n points in the time interval 0 ≤ t < T, with each point being equally likely to fall anywhere in that interval. The probability that k of them fall inside an interval of length Δt < T inside that range is

P[k \text{ inside } \Delta t] = \binom{n}{k} p^k (1 - p)^{n - k} = \frac{n!}{k!(n - k)!} p^k (1 - p)^{n - k}

where p = Δt / T is the probability that any single point falls within Δt. Further, suppose that as n → ∞, n / T = λ, a constant. If λ is constant and n → ∞, that implies that T → ∞ and p → 0. Then λ is the average number of points per unit time, over all time.

[Figure: Events occurring at random times]

It can be shown that

P[k \text{ inside } \Delta t] = \lim_{n \to \infty} \binom{n}{k} \left(\frac{\alpha}{n}\right)^k \left(1 - \frac{\alpha}{n}\right)^{n - k} = \frac{\alpha^k}{k!} e^{-\alpha}

where α = λΔt. A DV random variable X is a Poisson random variable with parameter α if its pmf is

P_X(x) = \begin{cases} \dfrac{\alpha^x e^{-\alpha}}{x!}, & x \in \{0, 1, 2, \dots\} \\ 0, & \text{otherwise} \end{cases}
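The limiting argument can be checked numerically. The sketch below compares a Binomial(n, α/n) probability with the Poisson pmf for a large n; the values α = 2, n = 10000 and k = 3 are arbitrary choices.

```python
from math import comb, exp, factorial

def poisson_pmf(x, alpha):
    """P[X = x] for a Poisson random variable with parameter alpha."""
    if x < 0:
        return 0.0
    return alpha**x * exp(-alpha) / factorial(x)

# Binomial(n, alpha/n) approaches Poisson(alpha) as n grows (illustrative values)
alpha, n, k = 2.0, 10_000, 3
p = alpha / n
binom = comb(n, k) * p**k * (1 - p)**(n - k)
print(binom, poisson_pmf(k, alpha))  # nearly equal (~0.1805)
```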

Cumulative Distribution Functions

The cumulative distribution function (CDF) is defined by F_X(x) = P[X ≤ x]. For example, the CDF for tossing a single die is

F_X(x) = (1/6)\left[u(x - 1) + u(x - 2) + u(x - 3) + u(x - 4) + u(x - 5) + u(x - 6)\right]

where

u(x) = \begin{cases} 1, & x \geq 0 \\ 0, & x < 0 \end{cases}

Functions of a Random Variable

Consider a transformation from a DV random variable X to another DV random variable Y through Y = g(X). If the function g is invertible, then X = g^{-1}(Y) and the pmf for Y is

P_Y(y) = P_X(g^{-1}(y))

where P_X(x) is the pmf for X.

If the function g is not invertible, the pmf of Y can be found by finding the probability of each value of Y. Each value of X with non-zero probability causes a non-zero probability for the corresponding value of Y. So, for the ith value of Y,

P[Y = y_i] = P[X = x_{i,1}] + P[X = x_{i,2}] + \dots + P[X = x_{i,n}] = \sum_{k=1}^{n} P[X = x_{i,k}]

[Figure: An example of a non-invertible function]
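For the non-invertible case, the summation can be carried out by grouping the values of X that map to the same value of Y. The following sketch assumes the pmf of X is stored as a Python dictionary; the pmf values and the choice g(x) = x^2 are arbitrary illustrations.

```python
from collections import defaultdict

def pmf_of_g(pmf_x, g):
    """pmf of Y = g(X): sum P[X = x] over all x that map to the same y."""
    pmf_y = defaultdict(float)
    for x, px in pmf_x.items():
        pmf_y[g(x)] += px
    return dict(pmf_y)

# Illustrative pmf on {-2, -1, 0, 1, 2}; g(x) = x**2 is not invertible
pmf_x = {-2: 0.1, -1: 0.2, 0: 0.4, 1: 0.2, 2: 0.1}
print(pmf_of_g(pmf_x, lambda x: x**2))  # {4: 0.2, 1: 0.4, 0: 0.4}
```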

Expectation and Moments

Imagine an experiment with M possible distinct outcomes performed N times. The average of those N outcomes is

\bar{X} = \frac{1}{N} \sum_{i=1}^{M} n_i x_i

where x_i is the ith distinct value of X and n_i is the number of times that value occurred. Then

\bar{X} = \frac{1}{N} \sum_{i=1}^{M} n_i x_i = \sum_{i=1}^{M} \frac{n_i}{N} x_i = \sum_{i=1}^{M} r_i x_i

The expected value of X is

E[X] = \lim_{N \to \infty} \sum_{i=1}^{M} \frac{n_i}{N} x_i = \lim_{N \to \infty} \sum_{i=1}^{M} r_i x_i = \sum_{i=1}^{M} P[X = x_i] x_i

Three common measures used to indicate an "average" of a random variable are the mean, the mode and the median. The mean is the sum of the values divided by the number of values,

\bar{X} = \frac{1}{N} \sum_{i=1}^{M} n_i x_i

The mode is the value that occurs most often:

P_X(x_{mode}) \geq P_X(x) \text{ for all } x

The median is the value for which an equal number of values fall above and below:

P[X > x_{median}] = P[X < x_{median}]
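A minimal sketch of the three measures computed from a pmf stored as a dictionary (the pmf values are arbitrary; the median here is taken as the smallest value whose cumulative probability reaches 1/2, one common convention for discrete variables):

```python
def mean(pmf):
    return sum(x * p for x, p in pmf.items())

def mode(pmf):
    return max(pmf, key=pmf.get)

def median(pmf):
    # smallest x whose cumulative probability reaches 1/2 (one common convention)
    total = 0.0
    for x in sorted(pmf):
        total += pmf[x]
        if total >= 0.5:
            return x

pmf = {1: 0.1, 2: 0.2, 3: 0.4, 4: 0.2, 5: 0.1}  # illustrative pmf
print(mean(pmf), mode(pmf), median(pmf))  # 3.0 3 3
```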

The first moment of a random variable is its expected value

E[X] = \sum_{i=1}^{M} x_i P[X = x_i]

The second moment of a random variable is its mean-squared value (which is the mean of its square, not the square of its mean),

E[X^2] = \sum_{i=1}^{M} x_i^2 P[X = x_i]

The name "moment" comes from the fact that it is mathematically the same as a moment in classical mechanics.

The nth moment of a random variable is defined by

E[X^n] = \sum_{i=1}^{M} x_i^n P[X = x_i]

The expected value of a function g of a random variable is

E[g(X)] = \sum_{i=1}^{M} g(x_i) P[X = x_i]
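These sums translate directly into code. The sketch below evaluates E[g(X)] for a pmf stored as a dictionary; the default g is the identity, so the same helper gives the first and second moments (the pmf values are arbitrary).

```python
def expect(pmf, g=lambda x: x):
    """E[g(X)] = sum of g(x) * P[X = x] over all x."""
    return sum(g(x) * p for x, p in pmf.items())

pmf = {0: 0.5, 1: 0.3, 2: 0.2}        # illustrative pmf
print(expect(pmf))                     # first moment  E[X]   = 0.7
print(expect(pmf, lambda x: x**2))     # second moment E[X^2] = 1.1
```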

A central moment of a random variable is the moment of that random variable after its expected value is subtracted.

E\left[(X - E[X])^n\right] = \sum_{i=1}^{M} (x_i - E[X])^n P[X = x_i]

The first central moment is always zero. The second central moment (for real-valued random variables) is the variance,

\sigma_X^2 = E\left[(X - E[X])^2\right] = \sum_{i=1}^{M} (x_i - E[X])^2 P[X = x_i]

The variance of X can also be written as Var[X]. The positive square root of the variance is the standard deviation.

Properties of expectation:

E[a] = a, \quad E[aX] = a\,E[X], \quad E\left[\sum_n X_n\right] = \sum_n E[X_n]

where a is a constant. These properties can be used to prove the handy relationship

\sigma_X^2 = E[X^2] - E^2[X]

The variance of a random variable is the mean of its square minus the square of its mean. Another handy relation is

Var[aX + b] = a^2 Var[X]
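Both relations can be checked numerically. The sketch below reuses an expect() helper like the one above; the pmf and the constants a, b are arbitrary illustrations.

```python
def expect(pmf, g=lambda x: x):
    return sum(g(x) * p for x, p in pmf.items())

def var(pmf):
    m = expect(pmf)
    return expect(pmf, lambda x: (x - m)**2)

pmf = {0: 0.5, 1: 0.3, 2: 0.2}                     # illustrative pmf
a, b = 3.0, 1.0                                    # illustrative constants
pmf_ab = {a * x + b: p for x, p in pmf.items()}    # pmf of aX + b

print(var(pmf), expect(pmf, lambda x: x**2) - expect(pmf)**2)  # both ~0.61
print(var(pmf_ab), a**2 * var(pmf))                            # both ~5.49
```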

Conditional Probability Mass Functions

The concept of conditional probability can be extended to a conditional probability mass function defined by

P_{X|A}(x) = \begin{cases} \dfrac{P_X(x)}{P[A]}, & x \in A \\ 0, & \text{otherwise} \end{cases}

where A is the condition that affects the probability of X. Similarly, the conditional expected value of X is

E[X \mid A] = \sum_{x \in A} x\, P_{X|A}(x)

and the conditional cumulative distribution function for X is

F_{X|A}(x) = P[X \leq x \mid A]
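A brief sketch of the conditioning operation for a pmf stored as a dictionary; the event A is represented as a set of x values, and the particular pmf and event below are arbitrary.

```python
def conditional_pmf(pmf, event):
    """pmf of X given A: P_X(x)/P[A] for x in A, zero otherwise."""
    p_a = sum(p for x, p in pmf.items() if x in event)
    return {x: (p / p_a if x in event else 0.0) for x, p in pmf.items()}

pmf = {1: 0.1, 2: 0.2, 3: 0.4, 4: 0.2, 5: 0.1}   # illustrative pmf
cond = conditional_pmf(pmf, {1, 2, 3})            # condition A = {X in {1, 2, 3}}
print(cond)                 # probabilities inside A scaled by 1/P[A], zero outside
print(sum(cond.values()))   # 1.0
```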

Let A be A = {X ≤ a}, where a is a constant. Then

F_{X|A}(x) = P[X \leq x \mid X \leq a] = \frac{P[(X \leq x) \cap (X \leq a)]}{P[X \leq a]}

If a ≤ x then P[(X ≤ x) ∩ (X ≤ a)] = P[X ≤ a] and

F_{X|A}(x) = P[X \leq x \mid X \leq a] = \frac{P[X \leq a]}{P[X \leq a]} = 1

If a ≥ x then P[(X ≤ x) ∩ (X ≤ a)] = P[X ≤ x] and

F_{X|A}(x) = P[X \leq x \mid X \leq a] = \frac{P[X \leq x]}{P[X \leq a]} = \frac{F_X(x)}{F_X(a)}
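The two cases can be seen numerically in a small sketch; the fair-die pmf and the threshold a = 4 are arbitrary illustrations.

```python
def cdf(pmf, x):
    """F_X(x) = P[X <= x] for a discrete pmf."""
    return sum(p for xi, p in pmf.items() if xi <= x)

def conditional_cdf(pmf, x, a):
    """F_{X|X<=a}(x) = P[X <= x | X <= a]."""
    return 1.0 if x >= a else cdf(pmf, x) / cdf(pmf, a)

die = {k: 1 / 6 for k in range(1, 7)}   # fair die (illustrative)
print(conditional_cdf(die, 2, 4))       # F_X(2)/F_X(4) = (2/6)/(4/6) = 0.5
print(conditional_cdf(die, 5, 4))       # 1.0, since a <= x
```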