Moment Generating Function


Statistics 110, Summer 2006
Copyright © 2006 by Mark E. Irwin

Moments Revisited

So far I've really only talked about the first two moments. Let's define what is meant by moments more precisely.

Definition. The $r$th moment of a random variable $X$ is $E[X^r]$, assuming that the expectation exists.

So the mean of a distribution is its first moment.

Definition. The $r$th central moment of a random variable $X$ is $E[(X - E[X])^r]$, assuming that the expectation exists.

Thus the variance is the 2nd central moment of a distribution. The 1st central moment usually isn't discussed, as it is always 0. The 3rd central moment is known as the skewness of a distribution and is used as a measure of asymmetry.

If a distribution is symmetric about its mean ($f(\mu - x) = f(\mu + x)$), the skewness will be 0. Similarly, if the skewness is nonzero, the distribution is asymmetric. However, it is possible to have an asymmetric distribution with skewness 0.

Examples of symmetric distributions are the normals, Beta(a, a), and Bin(n, p = 0.5). Examples of asymmetric distributions are

    Distribution      Skewness
    Bin(n, p)         np(1 - p)(1 - 2p)
    Pois(λ)           λ
    Exp(λ)            2/λ³
    Beta(a, b)        Ugly formula

The 4th central moment is known as the kurtosis. It can be used as a measure of how heavy the tails of a distribution are. The kurtosis of a normal is $3\sigma^4$.

Note that these measures are often standardized, since in their raw form they depend on the standard deviation.

Theorem. If the $r$th moment of a RV exists, then the $s$th moment exists for all $s < r$. Also, the $s$th central moment exists for all $s \le r$.

So you can't have a distribution that has a finite mean, an infinite variance, and a finite skewness.

Proof. Postponed till later. ∎

Why are moments useful? They can be involved in calculating means and variances of transformed RVs or other summaries of RVs.

Example: What are the mean and variance of $A = \pi R^2$?

$$E[A] = \pi E[R^2], \qquad \mathrm{Var}(A) = \pi^2 \mathrm{Var}(R^2) = \pi^2 \left( E[R^4] - (E[R^2])^2 \right)$$

So we need $E[R^4]$ in addition to $E[R]$ and $E[R^2]$.

Example: What is the skewness of $X$?

$$E[(X - \mu)^3] = E[X^3 - 3\mu X^2 + 3\mu^2 X - \mu^3] = E[X^3] - 3\mu E[X^2] + 2\mu^3$$

(using $E[X] = \mu$), so $E[X]$, $E[X^2]$, and $E[X^3]$ are needed to calculate the skewness.
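As a quick check of this calculation, the raw moments can be computed symbolically and combined. Here is a minimal sketch (my own illustration, not part of the original notes) using sympy with $X \sim \mathrm{Exp}(\lambda)$, whose skewness should come out to $2/\lambda^3$, matching the table above:

```python
import sympy as sp

x, lam = sp.symbols('x lam', positive=True)
f = lam * sp.exp(-lam * x)  # Exp(lam) density on (0, oo)

# Raw moments E[X^r] for r = 1, 2, 3 by direct integration
m1, m2, m3 = [sp.integrate(x**r * f, (x, 0, sp.oo)) for r in (1, 2, 3)]

# Third central moment via E[X^3] - 3*mu*E[X^2] + 2*mu^3 with mu = E[X]
skew = sp.simplify(m3 - 3 * m1 * m2 + 2 * m1**3)
print(skew)  # 2/lam**3
```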
Moment Generating Function

Definition. The Moment Generating Function (MGF) of a random variable $X$ is $M_X(t) = E[e^{tX}]$, if the expectation is defined.

$$M_X(t) = \sum_x e^{tx} p_X(x) \quad \text{(discrete)}$$

$$M_X(t) = \int e^{tx} f_X(x)\,dx \quad \text{(continuous)}$$

Whether the MGF is defined depends on the distribution and the choice of $t$. For example, $M_X(t)$ is defined for all $t$ if $X$ is normal, defined for no $t \ne 0$ if $X$ is Cauchy, and for $t < \lambda$ if $X \sim \mathrm{Exp}(\lambda)$.

For those who have done some analysis: in the continuous case, the moment generating function is related to the Laplace transform of the density function. Many of the results about it come from that theory.

Why should we care about the MGF?

• To calculate moments. It may be easier to work with the MGF than to directly calculate $E[X^r]$.
• To determine distributions of functions of random variables.
• Related to this, to approximate distributions. For example, the MGF can be used to show that as $n$ increases, the Bin(n, p) distribution "approaches" a normal distribution.

The following theorems justify these uses of the MGF.

Theorem. If $M_X(t)$ of a RV $X$ is finite in an open interval containing 0, then it has derivatives of all orders and

$$M_X^{(r)}(t) = E[X^r e^{tX}], \qquad M_X^{(r)}(0) = E[X^r]$$

Proof.

$$M_X^{(1)}(t) = \frac{d}{dt} \int_{-\infty}^{\infty} e^{tx} f_X(x)\,dx = \int_{-\infty}^{\infty} \left( \frac{d}{dt} e^{tx} \right) f_X(x)\,dx = \int_{-\infty}^{\infty} x e^{tx} f_X(x)\,dx = E[X e^{tX}]$$

$$M_X^{(2)}(t) = \frac{d}{dt} M_X^{(1)}(t) = \int_{-\infty}^{\infty} x \left( \frac{d}{dt} e^{tx} \right) f_X(x)\,dx = \int_{-\infty}^{\infty} x^2 e^{tx} f_X(x)\,dx = E[X^2 e^{tX}]$$

The rest can be shown by induction. The second part of the theorem follows from $e^0 = 1$. ∎

Another way to see this result is via the Taylor series expansion

$$e^y = 1 + y + \frac{y^2}{2!} + \frac{y^3}{3!} + \cdots,$$

which gives

$$M_X(t) = E\left[ 1 + Xt + \frac{X^2 t^2}{2!} + \frac{X^3 t^3}{3!} + \cdots \right] = 1 + E[X]\,t + E[X^2]\,\frac{t^2}{2!} + E[X^3]\,\frac{t^3}{3!} + \cdots$$

Example MGFs:

• $X \sim U(a, b)$:
$$M_X(t) = \int_a^b \frac{e^{tx}}{b - a}\,dx = \frac{e^{bt} - e^{at}}{(b - a)t}$$

• $X \sim \mathrm{Exp}(\lambda)$:
$$M_X(t) = \int_0^{\infty} e^{tx} \lambda e^{-\lambda x}\,dx = \int_0^{\infty} \lambda e^{-(\lambda - t)x}\,dx = \frac{\lambda}{\lambda - t}$$
Note that this integral is only defined when $t < \lambda$.

• $X \sim \mathrm{Geo}(p)$, with $q = 1 - p$:
$$M_X(t) = \sum_{x=1}^{\infty} e^{tx} p q^{x-1} = p e^t \sum_{x=1}^{\infty} (e^t)^{x-1} q^{x-1} = \frac{p e^t}{1 - q e^t}$$

• $X \sim \mathrm{Pois}(\lambda)$:
$$M_X(t) = \sum_{x=0}^{\infty} e^{tx} \frac{e^{-\lambda} \lambda^x}{x!} = e^{-\lambda} \sum_{x=0}^{\infty} \frac{(e^t \lambda)^x}{x!} = e^{\lambda(e^t - 1)}$$

Examples of using the MGF to calculate moments:

• $X \sim \mathrm{Exp}(\lambda)$:
$$M^{(1)}(t) = \frac{\lambda}{(\lambda - t)^2}, \qquad E[X] = \frac{1}{\lambda}$$
$$M^{(2)}(t) = \frac{2\lambda}{(\lambda - t)^3}, \qquad E[X^2] = \frac{2}{\lambda^2}$$
$$M^{(r)}(t) = \frac{\Gamma(r + 1)\,\lambda}{(\lambda - t)^{r+1}}, \qquad E[X^r] = \frac{\Gamma(r + 1)}{\lambda^r}$$

• $X \sim \mathrm{Geo}(p)$, with $q = 1 - p$:
$$M^{(1)}(t) = \frac{p e^t}{1 - q e^t} + \frac{p q e^{2t}}{(1 - q e^t)^2}, \qquad E[X] = \frac{1}{p}$$
$$M^{(2)}(t) = \frac{p e^t}{1 - q e^t} + \frac{3 p q e^{2t}}{(1 - q e^t)^2} + \frac{2 p q^2 e^{3t}}{(1 - q e^t)^3}, \qquad E[X^2] = \frac{2 - p}{p^2}$$
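The derivative calculations above are easy to check symbolically. Here is a minimal sympy sketch (again my own check, not from the notes) that differentiates the Exp(λ) and Geo(p) MGFs and evaluates at $t = 0$:

```python
import sympy as sp

t, lam, p = sp.symbols('t lam p', positive=True)
q = 1 - p

M_exp = lam / (lam - t)                      # MGF of Exp(lam), for t < lam
M_geo = p * sp.exp(t) / (1 - q * sp.exp(t))  # MGF of Geo(p) on {1, 2, ...}

def moment(M, r):
    """r-th moment = r-th derivative of the MGF, evaluated at t = 0."""
    return sp.simplify(sp.diff(M, t, r).subs(t, 0))

print(moment(M_exp, 1), moment(M_exp, 2))  # 1/lam, 2/lam**2
print(moment(M_geo, 1), moment(M_geo, 2))  # 1/p, (2 - p)/p**2
```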
Theorem. If $Y = a + bX$, then

$$M_Y(t) = e^{at} M_X(bt)$$

Proof.
$$M_Y(t) = E[e^{tY}] = E[e^{at + btX}] = e^{at} E[e^{(bt)X}] = e^{at} M_X(bt)$$ ∎

For example, this result can be used to verify that $E[a + bX] = a + bE[X]$, since

$$M_Y^{(1)}(t) = a e^{at} M_X(bt) + b e^{at} M_X^{(1)}(bt)$$
$$M_Y^{(1)}(0) = a M_X(0) + b M_X^{(1)}(0) = a + bE[X]$$

Theorem. If $X$ and $Y$ are independent RVs with MGFs $M_X$ and $M_Y$, and $Z = X + Y$, then $M_Z(t) = M_X(t) M_Y(t)$ on the common interval where both MGFs exist.

Proof.
$$M_Z(t) = E[e^{tZ}] = E[e^{t(X+Y)}] = E[e^{tX} e^{tY}] = E[e^{tX}]\,E[e^{tY}] = M_X(t) M_Y(t)$$ ∎

By induction, this result can be extended to sums of many independent RVs.

One particular use of this result is that it gives an easy approach to finding the distribution of a sum of RVs without having to calculate the convolution of the densities. But first we need one more result.

Theorem (Uniqueness). If the MGF of $X$ exists for $t$ in an open interval containing 0, then it uniquely determines the CDF; i.e., no two different distributions can have the same values of the MGF on an interval containing 0.

Proof. Postponed. ∎

Example: Let $X_1, X_2, \ldots, X_n$ be iid $\mathrm{Exp}(\lambda)$. What is the distribution of $S = \sum X_i$?

$$M_S(t) = \prod_{i=1}^{n} \frac{\lambda}{\lambda - t} = \left( \frac{\lambda}{\lambda - t} \right)^n$$

Note that this isn't the form of the MGF for an exponential, so the sum isn't exponential. As shown in Example B on page 145, the MGF of a $\mathrm{Gamma}(\alpha, \lambda)$ is

$$M(t) = \left( \frac{\lambda}{\lambda - t} \right)^{\alpha}$$

so $S \sim \mathrm{Gamma}(n, \lambda)$.

This approach also leads to an easy proof that the sum of independent normals is also normal. The moment generating function of a $N(\mu, \sigma^2)$ RV is $M(t) = e^{\mu t + \sigma^2 t^2 / 2}$. So if $X_i \sim N(\mu_i, \sigma_i^2)$, $i = 1, \ldots, n$, are independent, then

$$M_{\sum X_i}(t) = \prod_{i=1}^{n} e^{\mu_i t + \sigma_i^2 t^2 / 2} = \exp\left( t \sum \mu_i + \frac{t^2}{2} \sum \sigma_i^2 \right)$$

which is the moment generating function of a $N\left( \sum \mu_i, \sum \sigma_i^2 \right)$ RV.

There is one important caveat with this approach: we must be able to identify which MGF goes with which density or PMF.

For example, let $X \sim \mathrm{Gamma}(\alpha, \lambda)$ and $Y \sim \mathrm{Gamma}(\beta, \mu)$ be independent. Then the MGF of $Z = X + Y$ is

$$M_Z(t) = \left( \frac{\lambda}{\lambda - t} \right)^{\alpha} \left( \frac{\mu}{\mu - t} \right)^{\beta}$$

This is not the MGF of a gamma distribution unless $\lambda = \mu$. In fact, I'm not quite sure what the density looks like beyond

$$f_Z(z) = \int_0^z \frac{\lambda^{\alpha} x^{\alpha - 1} e^{-\lambda x}}{\Gamma(\alpha)} \cdot \frac{\mu^{\beta} (z - x)^{\beta - 1} e^{-\mu(z - x)}}{\Gamma(\beta)}\,dx$$

You can sometimes use tables of Laplace transforms, or do some complicated complex-variable integration, to invert the MGF and determine the density or PMF. While we can't get the density easily in this case, we can still use the MGF to get the moments of this distribution.

It is also possible to work with more complicated situations described by hierarchical models. Suppose that the MGFs for $X$ (namely $M_X(t)$) and for $Y \mid X = x$ (namely $M_{Y|X}(t)$) are known. Then the marginal MGF of $Y$ is

$$M_Y(t) = E[e^{tY}] = E\left[ E[e^{tY} \mid X] \right] = E[M_{Y|X}(t)]$$

For example, this could be used to get the MGF of the Beta-Binomial model. Another situation where this is useful is a random sums model, where

$$S = \sum_{i=1}^{N} X_i$$

and $N$ is random.
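To make the random-sum model concrete: conditioning on $N$ gives $M_S(t) = E[M_X(t)^N] = M_N(\log M_X(t))$, so for $N \sim \mathrm{Pois}(\nu)$ this reduces to $M_S(t) = \exp(\nu (M_X(t) - 1))$. Below is a small Monte Carlo sketch (my own illustration, with arbitrarily chosen parameter values) checking this for $X_i$ iid $\mathrm{Exp}(\lambda)$:

```python
import numpy as np

rng = np.random.default_rng(0)
nu, lam, t = 3.0, 2.0, 0.5  # Pois rate, Exp rate, evaluation point (t < lam)

# Simulate S = X_1 + ... + X_N with N ~ Pois(nu) and X_i iid Exp(lam)
n_sims = 100_000
N = rng.poisson(nu, size=n_sims)
S = np.array([rng.exponential(1 / lam, size=n).sum() for n in N])

mgf_mc = np.mean(np.exp(t * S))                  # Monte Carlo estimate of E[e^{tS}]
mgf_theory = np.exp(nu * (lam / (lam - t) - 1))  # exp(nu * (M_X(t) - 1))

print(mgf_mc, mgf_theory)  # should agree to a couple of decimal places
```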