2.5 Mean and Variance

The mean (or expected value) of a discrete random variable X with range R and pmf f(x) is, provided the series converges absolutely,

µ = E(X) = Σ_{x∈R} x f(x).

The variance of a discrete random variable X with range R and pmf f(x) is, provided the series converges absolutely,

σ² = Var(X) = Σ_{x∈R} (x − µ)² f(x) = E[(X − µ)²]
            = Σ_{x∈R} x² f(x) − µ² = E(X²) − [E(X)]² = E(X²) − µ²,

with associated standard deviation σ = √Var(X).

For a uniform random variable X with range R = {1, 2, ..., k} and pmf f(x) = 1/k,

µ = E(X) = (k + 1)/2,   σ² = Var(X) = (k² − 1)/12,   σ = √((k² − 1)/12);

for a binomial random variable,

µ = E(X) = np,   σ² = Var(X) = npq,   σ = √(npq);

for a Poisson random variable,

µ = E(X) = λ,   σ² = Var(X) = λ,   σ = √λ.
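These uniform formulas can be checked directly against the defining sums. A small sketch in R (the choice k = 6, an ordinary fair die, is only an illustration):

k <- 6                        # number of equally likely values (illustrative choice)
x <- 1:k                      # range of the uniform random variable
px <- rep(1/k, k)             # pmf f(x) = 1/k
EX <- sum(x*px)               # mean from the definition
VarX <- sum((x - EX)^2*px)    # variance from the definition
c(EX, (k + 1)/2)              # definition vs. formula: both 3.5
[1] 3.5 3.5
c(VarX, (k^2 - 1)/12)         # definition vs. formula: both 2.916667
[1] 2.916667 2.916667

The same check with dbinom(0:n, n, p) weights in place of the 1/k weights reproduces µ = np and σ² = npq for the binomial case.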
Exercise 2.5 (Mean and Variance)

1. Expected value, variance and standard deviation: seizures.
The probability function for the number of seizures, X, of a typical epileptic person in any given year is given in the following table.

x      0     2     4     6     8     10
f(x)   0.17  0.21  0.18  0.11  0.16  0.17

(a) Calculating the expected value. The expected value (mean) number of seizures is given by

E(X) = Σ_x x f(x) = 0(0.17) + 2(0.21) + 4(0.18) + 6(0.11) + 8(0.16) + 10(0.17) =

(i) 4.32 (ii) 4.78 (iii) 5.50 (iv) 5.75.

[Figure 2.11: Expected value: fulcrum point of balance. Three bar charts, (a), (b), (c), of P(X = x) against x = 0, 2, ..., 10.]

x <- c(0,2,4,6,8,10)                     # values of random variable
px <- c(0.17,0.21,0.18,0.11,0.16,0.17)   # probabilities
EX <- sum(x*px); EX                      # expected value
[1] 4.78

(b) Understanding expected value: seizures. If the expected value is like a fulcrum point which balances the "weight" of the probability distribution, then the expected value is most likely close to the point of the fulcrum given in which of the three graphs above? (i) (a) 9 (ii) (b) 5 (iii) (c) 1.

(c) Variance. The variance in number of seizures is given by

σ² = Var(X) = E[(X − µ)²] = Σ_x (x − µ)² f(x)
   = (0 − 4.78)²(0.17) + (2 − 4.78)²(0.21) + ··· + (10 − 4.78)²(0.17) ≈

(i) 7.32 (ii) 8.78 (iii) 10.50 (iv) 12.07.

VarX <- sum((x-EX)^2*px); VarX           # variance
[1] 12.0716

(d) Standard deviation. The standard deviation in the number of seizures is

σ = √Var(X) ≈ √12.07 ≈

(circle one) (i) 3.47 (ii) 4.11 (iii) 5.07 (iv) 6.25.

SDX <- sqrt(VarX); SDX                   # standard deviation
[1] 3.474421

In other words, we expect to see about µ ± σ = 4.78 ± 3.47 seizures according to the probability distribution given here.

[Figure 2.12: Standard deviation: dispersion of distribution. Three bar charts of P(Y = y): (a) the seizure distribution, 4.78 ± 3.47; (b) another distribution, 4.78 ± 2.47; (c) and another distribution, 4.78 ± 1.47.]

(e) Understanding standard deviation: "dispersion". The standard deviation measures the dispersion of a probability distribution. The most dispersed distribution occurs in (i) (a) (ii) (b) (iii) (c).

2. Variance and standard deviation: rolling a pair of dice.
If the dice are fair, the distribution of X, the sum of the two faces when a pair of dice is rolled, is

x      2     3     4     5     6     7     8     9     10    11    12
f(x)   1/36  2/36  3/36  4/36  5/36  6/36  5/36  4/36  3/36  2/36  1/36

(a) Expected value, g(x) = x,

µ = E(X) = Σ_x x f(x) = 2(1/36) + ··· + 12(1/36) =

(i) 4 (ii) 5 (iii) 6 (iv) 7.

x <- 2:12                                # values of random variable
px <- c(1,2,3,4,5,6,5,4,3,2,1)/36        # probabilities
EX <- sum(x*px); EX                      # E(X)
[1] 7

(b) If g(x) = x²,

E(X²) = Σ_x x² f(x) = 2²(1/36) + ··· + 12²(1/36) =

(i) 35.43 (ii) 47.61 (iii) 54.83 (iv) 65.67.

EX2 <- sum(x^2*px); EX2                  # E(X^2)
[1] 54.83333

(c) Variance.

σ² = Var(X) = E[(X − µ)²] = E(X²) − µ² = 54.83 − 7² ≈

(i) 3.32 (ii) 5.83 (iii) 7.50 (iv) 8.07.

VarX <- EX2 - EX^2; VarX                 # variance
[1] 5.833333

(d) Standard deviation.

σ = √Var(X) ≈ √5.83 ≈

(i) 2.42 (ii) 3.11 (iii) 4.07 (iv) 5.15.

SDX <- sqrt(VarX); SDX                   # standard deviation
[1] 2.415229

3. Uniform: die.
A fair die has the following uniform pmf.

x      1    2    3    4    5    6
f(x)   1/6  1/6  1/6  1/6  1/6  1/6

(a) Since the pmf of the uniform is f(x) = 1/k, in this case k = (i) 5 (ii) 6 (iii) 7 (iv) 8.

(b) The expected value of the die is

µ = E(X) = Σ_x x f(x) = 1·(1/6) + 2·(1/6) + 3·(1/6) + 4·(1/6) + 5·(1/6) + 6·(1/6) =

(i) 2.5 (ii) 3.0 (iii) 3.5 (iv) 4.0,

x <- 1:6                                 # values of random variable
px <- rep(1/6,6)                         # probabilities: 1/6 repeated 6 times
EX <- sum(x*px); EX                      # E(X)
[1] 3.5

or, using the formula, the expected value of the die is

µ = (k + 1)/2 = (6 + 1)/2 =

(i) 2.5 (ii) 3.0 (iii) 3.5 (iv) 4.0.

(c) If µ = 11, then

µ = (k + 1)/2 = 11,

so k = (i) 11 (ii) 21 (iii) 22 (iv) 23; that is, the die has 21 sides.

4. Another die question.
A fair six-sided die is labelled in one of three ways: there are two sides labelled 1, three sides labelled 2 and one side labelled 3. If it costs $1 to play and you win $1 × the result on the die, what is the expected value of this game?

die        1      2      3
x, payoff  1 − 1  2 − 1  3 − 1
f(x)       2/6    3/6    1/6

The expected value is

µ = E(X) = Σ_x x f(x) = 0·(2/6) + 1·(3/6) + 2·(1/6) =

(i) 1/6 (ii) 3/6 (iii) 5/6 (iv) 7/6.

x <- 0:2                                 # values of random variable
px <- c(2,3,1)/6                         # probabilities
EX <- sum(x*px); EX                      # E(X)
[1] 0.8333333

5. Binomial: airplane engines.
Each engine of four (n = 4) on an airplane fails 11% (p = 0.11, q = 1 − p = 0.89) of the time. Assume this problem obeys the conditions of a binomial experiment; in other words, X is b(4, 0.11).

x      0      1      2      3      4
f(x)   ____   0.310  0.058  0.005  0.000

(a) Fill in the blank: the chance no (zero) engines fail is

f(0) = C(4,0) (0.11)⁰ (0.89)⁴ =

(i) 0.005 (ii) 0.058 (iii) 0.310 (iv) 0.627.

dbinom(0,4,0.11)                         # binomial pmf
[1] 0.6274224

(b) The expected number of failures is

µ = E(X) = Σ_x x f(x) = 0(0.627) + 1(0.310) + 2(0.058) + 3(0.005) + 4(0.000) ≈

(i) 0.44 (ii) 0.51 (iii) 0.62 (iv) 0.73,

or, using the formula, the expected number of failures is

µ = np = 4(0.11) =

(i) 0.44 (ii) 0.51 (iii) 0.62 (iv) 0.73.

(c) The variance in number of failures is

σ² = Var(X) = E[(X − µ)²] = Σ_x (x − µ)² f(x)
   = (0 − 0.44)²(0.627) + (1 − 0.44)²(0.310) + ··· + (4 − 0.44)²(0.000) ≈

(i) 0.15 (ii) 0.39 (iii) 0.51 (iv) 0.63,

or, using the formula, the variance in number of failures is

σ² = npq = 4(0.11)(1 − 0.11) ≈

(i) 0.15 (ii) 0.39 (iii) 0.51 (iv) 0.63.

(d) The standard deviation in number of failures is

σ = √0.39 ≈

(i) 0.45 (ii) 0.56 (iii) 0.63 (iv) 0.83.

6. Bernoulli: mean and variance formulas.
The Bernoulli pmf is given by:

x      0      1
f(x)   1 − p  p

(a) The expected value is

µ = E(X) = Σ_x x f(x) = 0(1 − p) + 1(p) =

(i) p (ii) 1 − p (iii) (1 − p) (iv) p(1 − p).

(b) The variance is

σ² = Var(X) = Σ_x (x − µ)² f(x) = (0 − p)²(1 − p) + (1 − p)²(p) =

(i) p (ii) 1 − p (iii) (1 − p) (iv) p(1 − p).

7. Poisson: accidents.
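The Poisson facts quoted at the start of this section, µ = σ² = λ, can be checked the same way, directly from the pmf. A sketch with an arbitrary illustrative rate (λ = 2 is a placeholder, not a value taken from this exercise):

lambda <- 2                        # arbitrary illustrative rate, not from the exercise
x <- 0:100                         # truncate the infinite range; the tail beyond 100 is negligible here
px <- dpois(x, lambda)             # Poisson pmf
EX <- sum(x*px); EX                # mean, approximately lambda
[1] 2
VarX <- sum((x - EX)^2*px); VarX   # variance, also approximately lambda
[1] 2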