Stat 411 – Review Problems for Exam 3 Solutions

1. (a) Since the Gamma$(a, b)$ PDF is $x^{a-1}e^{-x/b}/\{b^a\Gamma(a)\}$, and the Exp$(b)$ PDF is $e^{-x/b}/b$, it is clear that Exp$(b)$ = Gamma$(1, b)$; here you also need the fact that, for integer $a$, $\Gamma(a) = (a-1)!$.

(b) The gamma PDF can be rewritten as
\[ f(x) = \exp\{(a-1)\log x - (1/b)x - a\log b - \log\Gamma(a)\}. \]
This is clearly in the form of a two-parameter exponential family with $p_1(a,b) = a-1$, $p_2(a,b) = -1/b$, $K_1(x) = \log x$, and $K_2(x) = x$.

(c) Let $X \sim$ Beta$(a, 1)$, with PDF $f_X(x) = ax^{a-1}$, and $Y = -\log X$. Then $X = e^{-Y}$ and the transformation rule says the PDF $f_Y(y)$ is
\[ f_Y(y) = f_X(e^{-y})\,e^{-y} = a(e^{-y})^{a-1}e^{-y} = ae^{-ay}. \]
This is the PDF of an Exp$(1/a)$ = Gamma$(1, 1/a)$ random variable.

(d) Let $X \sim$ Gamma$(a, b)$. The transformation rule can be used again to show that $Y = tX \sim$ Gamma$(a, tb)$ for $t > 0$. Perhaps a simpler argument is based on $b$ being a scale parameter. That is, $X = bV$ where $V \sim$ Gamma$(a, 1)$. Writing $tb$ in place of $b$ in the previous statement proves the claim.

(e) The moment-generating function for $X_i \sim$ Gamma$(a_i, b)$ is $M_i(t) = (1-bt)^{-a_i}$, for $t < 1/b$; see page 151 in the book. Then the moment-generating function of $\sum_{i=1}^n X_i$ is the product of the individual moment-generating functions:
\[ \prod_{i=1}^n M_i(t) = \prod_{i=1}^n (1-bt)^{-a_i} = (1-bt)^{-\sum_{i=1}^n a_i}. \]
The latter expression is the moment-generating function of a Gamma$(\sum_{i=1}^n a_i, b)$ random variable and, by uniqueness of moment-generating functions, $\sum_{i=1}^n X_i$ must have this distribution.

(f) The ChiSq$(r)$ PDF is given on page 152 of the book. It is clear that this is just the special case Gamma$(r/2, 2)$.

2. (a) The likelihood function is
\[ L(\theta) = \prod_{i=1}^n \frac{X_i^{2-1}e^{-X_i/\theta}}{\theta^2\Gamma(2)} \propto \theta^{-2n} e^{-(1/\theta)\sum_{i=1}^n X_i}. \]
Calculus gives the MLE as $\hat\theta = \bar X/2$, and so the likelihood ratio statistic is
\[ \Lambda = \frac{L(\theta_0)}{L(\hat\theta)} = e^{2n}\Bigl(\frac{n\bar X}{2n\theta_0}\Bigr)^{2n} e^{-(1/\theta_0)n\bar X}. \]
Let $T = 2n\bar X/\theta_0$. Then the likelihood ratio statistic is proportional to $g(T)$ with $g(t) = (t/2)^{2n}e^{-t/2}$. This function is increasing for a while, and then decreasing, so, overall, $g(T)$ is small iff $T < k_1$ or $T > k_2$, where $k_1 < k_2$ are constants to be determined. To make the size equal $\alpha$, we choose $k_1, k_2$ such that $P_{\theta_0}(T < k_1) = P_{\theta_0}(T > k_2) = \alpha/2$. Since $\sum_{i=1}^n X_i \sim$ Gamma$(2n, \theta_0)$ when $H_0$ is true, it follows that $T \sim$ Gamma$(2n, 2)$ = ChiSq$(4n)$. Therefore, we take $k_1, k_2$ to be percentiles of the ChiSq$(4n)$ distribution, i.e., the exact size-$\alpha$ likelihood ratio test rejects $H_0$ in favor of $H_1$ iff
\[ T < \chi^2_{4n,\,1-\alpha/2} \quad\text{or}\quad T > \chi^2_{4n,\,\alpha/2}. \]

(b) Wilks' theorem says that, if $n$ is large, then $-2\log\Lambda$ is approximately distributed as ChiSq$(1)$. In that case, an approximate size-$\alpha$ likelihood ratio test rejects $H_0$ in favor of $H_1$ iff $-2\log\Lambda > \chi^2_{1,\alpha}$. But $-2\log\Lambda$ can be simplified:
\[ -2\log\Lambda = -4n\log\{T/(4n)\} - 4n + T. \]
So the approximately size-$\alpha$ likelihood ratio test rejects $H_0$ iff $-4n\log\{T/(4n)\} - 4n + T \ge \chi^2_{1,\alpha}$.

(c) The two tests in (a) and (b), respectively, are different. However, both are based on $T$ and have cutoffs determined by chi-square distribution percentiles. For the exact test, the power function can be derived by "inserting" the true $\theta$ in the denominator of $T$. That is,
\[
\begin{aligned}
\mathrm{pow}(\theta) &= P_\theta(T < k_1) + P_\theta(T > k_2) \\
&= P_\theta\{(\theta_0/\theta)T < (\theta_0/\theta)k_1\} + P_\theta\{(\theta_0/\theta)T > (\theta_0/\theta)k_2\} \\
&= P\{\mathrm{ChiSq}(4n) < (\theta_0/\theta)k_1\} + P\{\mathrm{ChiSq}(4n) > (\theta_0/\theta)k_2\} \\
&= H((\theta_0/\theta)k_1) + 1 - H((\theta_0/\theta)k_2),
\end{aligned}
\]
where $H$ is the CDF of a ChiSq$(4n)$ distributed random variable, and $k_1 = \chi^2_{4n,\,1-\alpha/2}$ and $k_2 = \chi^2_{4n,\,\alpha/2}$. The $H$ function can be evaluated in R with the function pchisq.
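As a concrete illustration, here is a minimal R sketch of both power calculations: the exact test's power via pchisq, as just described, and a Monte Carlo approximation of the power of the test in (b). The function names (pow.exact, pow.approx), the simulation size M, and the defaults $n = 10$, $\alpha = 0.05$, $\theta_0 = 2$ (matching the plot below) are our choices, not part of the original solutions.

```r
# Power of the exact size-alpha LRT in (c):
# pow(theta) = H((theta0/theta) k1) + 1 - H((theta0/theta) k2), H = ChiSq(4n) CDF.
pow.exact <- function(theta, n = 10, alpha = 0.05, theta0 = 2) {
  k1 <- qchisq(alpha / 2, df = 4 * n)      # lower cutoff, chi^2_{4n, 1-alpha/2}
  k2 <- qchisq(1 - alpha / 2, df = 4 * n)  # upper cutoff, chi^2_{4n, alpha/2}
  pchisq((theta0 / theta) * k1, df = 4 * n) +
    1 - pchisq((theta0 / theta) * k2, df = 4 * n)
}

# Monte Carlo power of the approximate (Wilks) test in (b): simulate
# sum(X_i) ~ Gamma(2n, scale = theta), form T = 2 * sum / theta0, and
# reject when -4n log(T/(4n)) - 4n + T >= chi^2_{1, alpha}.
pow.approx <- function(theta, n = 10, alpha = 0.05, theta0 = 2, M = 1e5) {
  S <- rgamma(M, shape = 2 * n, scale = theta)
  Tstat <- 2 * S / theta0
  stat <- -4 * n * log(Tstat / (4 * n)) - 4 * n + Tstat
  mean(stat >= qchisq(1 - alpha, df = 1))  # proportion of Monte Carlo rejections
}

pow.exact(2)   # equals alpha = 0.05 at theta = theta0
pow.approx(3)  # Monte Carlo estimate of the approximate test's power at theta = 3
```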
The power function for the approximate test in (b) can be evaluated exactly, and the calculation looks similar to that given above for the exact test, but it requires a numerical solution to a non-linear equation, which is outside the scope of what we're doing here. But it can be found via Monte Carlo without much trouble; see the R sketch above. Plots of these two power functions ($n = 10$, $\alpha = 0.05$, and $\theta_0 = 2$) are given below.

[Figure: power functions $\mathrm{pow}(\theta)$ of the exact and approximate tests, plotted over $1.0 \le \theta \le 3.0$.]

3. In this case, the likelihood function for $(\theta_1, \theta_2)$ is
\[ L(\theta_1, \theta_2) = \prod_{i=1}^n \frac{1}{\theta_2 - \theta_1}\, I_{[\theta_1,\theta_2]}(X_i) = \Bigl(\frac{1}{\theta_2 - \theta_1}\Bigr)^n I_{[\theta_1,\infty)}(X_{(1)})\, I_{(-\infty,\theta_2]}(X_{(n)}). \]
If the null hypothesis is true, and the distribution is symmetric, i.e., Unif$(-\theta, \theta)$ for some $\theta > 0$, then the MLE for the upper endpoint $\theta$ is $\hat\theta^0 = \max\{-X_{(1)}, X_{(n)}\}$; the superscript "0" is to indicate that this is the MLE under $H_0$. This can be derived by manipulating the indicator functions in the likelihood. The overall MLE is given by $\hat\theta_1 = X_{(1)}$ and $\hat\theta_2 = X_{(n)}$. Now the likelihood ratio statistic for testing $H_0: \theta_1 = -\theta_2$ (symmetry) is
\[ \frac{L(-\hat\theta^0, \hat\theta^0)}{L(\hat\theta_1, \hat\theta_2)} = \Bigl(\frac{X_{(n)} - X_{(1)}}{2\max\{-X_{(1)}, X_{(n)}\}}\Bigr)^n. \]

4. Both examples are exponential families which, in general, have the monotone likelihood ratio property. However, this can be verified directly.

(a) Let $\theta_1 > \theta_0$. Then the likelihood ratio is
\[ \frac{L(\theta_0)}{L(\theta_1)} = \Bigl(\frac{\theta_1}{\theta_0}\Bigr)^{2n} e^{(1/\theta_1 - 1/\theta_0)\sum_{i=1}^n X_i}. \tag{1} \]
Since $\theta_1 > \theta_0$, $1/\theta_1 - 1/\theta_0 < 0$, and so the likelihood ratio is a decreasing function of $T = \sum_{i=1}^n X_i$. Therefore, Gamma$(2, \theta)$ has the monotone likelihood ratio property in $T$.

(b) Again, take $\theta_1 > \theta_0$. The likelihood ratio is
\[ \frac{L(\theta_0)}{L(\theta_1)} = \Bigl(\frac{\Gamma(\theta_1)}{\Gamma(\theta_0)}\Bigr)^n \prod_{i=1}^n X_i^{\theta_0 - \theta_1}. \]
Since the exponent on $T = \prod_{i=1}^n X_i$ is negative, the ratio is decreasing in $T$. Therefore, Gamma$(\theta, 1)$ has the monotone likelihood ratio property in $T$.

5. (a) The Neyman–Pearson theorem says that the most powerful size-$\alpha$ test of $H_0: \theta = \theta_0$ versus $H_1: \theta = \theta_1$ rejects $H_0$ iff $L(\theta_0)/L(\theta_1) \le k$, where $k$ is chosen so that the size of the test is $\alpha$. The likelihood ratio in (1) is decreasing in $\sum_{i=1}^n X_i$, so it's less than $k$ iff $\sum_{i=1}^n X_i$ is greater than some $k'$. From Problem 2(a), $(2/\theta_0)\sum_{i=1}^n X_i \sim$ ChiSq$(4n)$, so the most powerful size-$\alpha$ test rejects $H_0$ in favor of $H_1$ iff $(2/\theta_0)\sum_{i=1}^n X_i \ge \chi^2_{4n,\alpha}$.

(b) Since the most-powerful test of $H_0: \theta = \theta_0$ versus $H_1: \theta = \theta_1$ does not depend on $\theta_1$, we can conclude that the test is actually uniformly most-powerful for the one-sided alternative $H_1: \theta > \theta_0$. This conclusion could have been reached automatically based on the monotone likelihood ratio property shown in Problem 4(a).

(c) The power function is $\mathrm{pow}(\theta) = P_\theta\{(2/\theta_0)\sum_{i=1}^n X_i \ge \chi^2_{4n,\alpha}\}$. The key is that, if $\theta$ is the true parameter, then $(2/\theta)\sum_{i=1}^n X_i \sim$ ChiSq$(4n)$. So we only need to do a bit of algebra:
\[
\begin{aligned}
\mathrm{pow}(\theta) &= P_\theta\Bigl\{\frac{2}{\theta_0}\sum_{i=1}^n X_i \ge \chi^2_{4n,\alpha}\Bigr\} = P_\theta\Bigl\{\frac{\theta}{\theta_0}\cdot\frac{2}{\theta}\sum_{i=1}^n X_i \ge \chi^2_{4n,\alpha}\Bigr\} \\
&= P_\theta\Bigl\{\frac{2}{\theta}\sum_{i=1}^n X_i \ge \frac{\theta_0}{\theta}\chi^2_{4n,\alpha}\Bigr\} = P\{\mathrm{ChiSq}(4n) \ge \theta_0\chi^2_{4n,\alpha}/\theta\} \\
&= 1 - H(\theta_0\chi^2_{4n,\alpha}/\theta),
\end{aligned}
\]
where $H$ is the CDF of a ChiSq$(4n)$ distributed random variable. A plot of this function versus the power function for the exact likelihood ratio test in Problem 2(c) above (with $n = 10$, $\alpha = 0.05$, $\theta_0 = 2$) is shown below. As expected, the UMP test has larger power for all $\theta > 2$.

[Figure: power functions $\mathrm{pow}(\theta)$ of the UMP test and the exact LRT, plotted over $2.0 \le \theta \le 3.0$.]
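The UMP power function in 5(c) is a one-liner in R via pchisq. Here is a minimal sketch, reusing pow.exact from the earlier code for the comparison in the plot; the name pow.ump and the defaults $n = 10$, $\alpha = 0.05$, $\theta_0 = 2$ are ours.

```r
# Power of the UMP size-alpha test in 5(c):
# pow(theta) = 1 - H(theta0 * chi^2_{4n, alpha} / theta), H = ChiSq(4n) CDF.
pow.ump <- function(theta, n = 10, alpha = 0.05, theta0 = 2) {
  cutoff <- qchisq(1 - alpha, df = 4 * n)  # chi^2_{4n, alpha}, upper alpha point
  1 - pchisq(theta0 * cutoff / theta, df = 4 * n)
}

pow.ump(2)                     # equals alpha = 0.05 at the null
pow.ump(2.5) > pow.exact(2.5)  # TRUE: UMP beats the two-sided exact LRT for theta > 2
```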
6. The goal is to test $H_0: \mu = 140$ versus $H_1: \mu > 140$. The observed data gives summary statistics $n = 12$, $\bar x = 145$, and $s = 8$. All units are milligrams.

(a) Since the distribution is assumed to be normal with both mean and variance unknown, a t-test seems most appropriate. The t-test will reject $H_0$ if and only if
\[ T := \frac{\bar X - \mu_0}{S/n^{1/2}} > t^\star_{n-1,\,1-\alpha}. \]
Here $T$ has a Student-t distribution with $n - 1$ degrees of freedom when $H_0$ is true; then $t^\star_{n-1,\,1-\alpha}$ is the value $t$ such that $P(T > t) = \alpha$.
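To make this concrete with the given summary statistics, here is a quick R check of the rejection rule; the level $\alpha = 0.05$ is our choice for illustration, since no level is specified above.

```r
# One-sided t-test of H0: mu = 140 vs H1: mu > 140 from summary statistics.
n <- 12; xbar <- 145; s <- 8; mu0 <- 140; alpha <- 0.05
tstat <- (xbar - mu0) / (s / sqrt(n))  # observed t = 2.165
crit  <- qt(1 - alpha, df = n - 1)     # t*_{11, 0.95} = 1.796
pval  <- 1 - pt(tstat, df = n - 1)     # one-sided p-value = 0.027
tstat > crit                           # TRUE: reject H0 at the 5% level
```

Since $2.165 > 1.796$, the observed mean of 145 mg would be significantly above 140 mg at that level.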