Properties of the Multivariate Normal


SAS Programming, February 4, 2015

Properties of the multivariate normal

We write that a vector is multivariate normal as y ∼ Np(µ, Σ). Some important properties of multivariate normal distributions include:

1. Linear combinations of the variables y1, ..., yp are also normal.

Note that for some distributions, such as the Poisson, sums of independent (but not necessarily identically distributed) random variables stay within the same family of distributions. For other distributions (e.g., exponential random variables), they do not stay in the same family. Moreover, it is not clear that the sum of two correlated Poissons is still Poisson, and differences between Poisson random variables are not Poisson. For the normal, we have the nice property that even if two (or more) normal random variables are correlated, any linear combination is still normal.

2. If A is constant (its entries are not random variables) and is q × p with rank q ≤ p, then

    Ay ∼ Nq(Aµ, AΣA′)

What happens if q > p?

3. A vector y can be standardized using either

    z = (T′)⁻¹(y − µ)

where T is obtained from the Cholesky decomposition Σ = T′T, or

    z = (Σ^(1/2))⁻¹(y − µ)

This standardization is similar to the idea of z-scores; however, just taking the usual z-scores of the individual variables in y will still leave the variables correlated. The standardizations above result in z ∼ Np(0, I).

4. Sums of squares of p independent standard normal random variables have a χ² distribution with p degrees of freedom. Therefore, if y ∼ Np(µ, Σ), then

    (y − µ)′Σ⁻¹(y − µ) = (y − µ)′(Σ^(1/2) Σ^(1/2))⁻¹(y − µ)
                       = (y − µ)′(Σ^(1/2))⁻¹(Σ^(1/2))⁻¹(y − µ)
                       = ((Σ^(1/2))⁻¹(y − µ))′ (Σ^(1/2))⁻¹(y − µ)
                       = z′z

Here the vector z consists of i.i.d.
standard normal variables according to property 3, so z′z is a sum of squared i.i.d. standard normals, which is known to have a χ² distribution. Therefore

    (y − µ)′Σ⁻¹(y − µ) ∼ χ²p

5. Normality of marginal distributions. If y has p random variables and is multivariate normal, then any subset yi1, ..., yir, r < p, is also multivariate normal. We can assume that the r variables of interest are listed first, so that

    y1 = (y1, ..., yr)′,  y2 = (yr+1, ..., yp)′

Then we have

    y = (y1′, y2′)′,  µ = (µ1′, µ2′)′,  Σ = [Σ11 Σ12; Σ21 Σ22]

and

    y1 ∼ Nr(µ1, Σ11)

Note that if y1 and y2 are each normal, it does not follow that y = (y1, y2)′ is multivariate normal. For an extreme example, let y1 ∼ N(0, 1), z ∼ Bernoulli(1/2), and y2 = y1 · I(z = 1) − y1 · I(z = 0). In other words, with probability 1/2, y2 = y1, and with probability 1/2, y2 = −y1. Then y2 is normal, yet (y1, y2)′ is not multivariate normal.

What does the distribution of (y1, y2)′ look like?

    > y1 <- rnorm(1000)
    > z <- rbinom(1000,1,.5)
    > y2 <- y1*(z==1) - y1*(z==0)
    > shapiro.test(y2)  # quick test of normality of a vector

            Shapiro-Wilk normality test

    data:  y2
    W = 0.9989, p-value = 0.8234

To check more theoretically that y2 is normal, we use the fact that for a standard normal, y1 and −y1 have the same distribution:

    P(y2 ≤ x) = P(y2 ≤ x | z = 1)P(z = 1) + P(y2 ≤ x | z = 0)P(z = 0)
              = P(y1 ≤ x)(1/2) + P(−y1 ≤ x)(1/2)
              = P(y1 ≤ x)(1/2) + P(y1 ≤ x)(1/2)
              = P(y1 ≤ x)

Therefore y2 has the same CDF (cumulative distribution function) as y1, so they have the same distribution. This shows more than that y2 is normal: it is also standard normal, just like y1.
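The Shapiro-Wilk check above looks at only one marginal. A small simulation also makes the joint failure visible. The following is a Python/NumPy sketch (a translation of the R example above, with NumPy assumed; the slides themselves use R):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
y1 = rng.standard_normal(n)
z = rng.integers(0, 2, size=n)       # z ~ Bernoulli(1/2)
y2 = np.where(z == 1, y1, -y1)       # y2 = y1*I(z=1) - y1*I(z=0)

# Each marginal looks standard normal (mean near 0, sd near 1)...
print(y2.mean(), y2.std())

# ...but if (y1, y2) were jointly normal, property 1 would make y1 + y2
# normal too. Instead the sum equals 2*y1 when z = 1 and exactly 0 when
# z = 0, so it has a point mass at zero that no normal distribution has.
print(np.mean(y1 + y2 == 0.0))       # close to 1/2
```

The point mass at zero is the cleanest way to see the non-normality: a genuinely bivariate normal pair can never put positive probability on a single value of a linear combination.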
Marginal distributions

If you have a collection of random variables and you ignore some of them, the distribution of the remaining ones is a marginal distribution. For a bivariate random vector y = (y1, y2)′, the distribution of y1 is a marginal distribution of the distribution of y. In non-vector notation, the joint density of two random variables is often written f12(y1, y2), and the marginal density can be obtained by

    f1(y1) = ∫_{−∞}^{∞} f12(y1, y2) dy2

More generally, the marginal density of y1, ..., yr is

    f12···r(y1, ..., yr) = ∫_{−∞}^{∞} ··· ∫_{−∞}^{∞} f1···p(y1, ..., yp) dyr+1 ··· dyp

And this is why it is called a marginal density.

Plotting marginal densities in R

    > install.packages("ade4")
    > library(ade4)
    > x <- rnorm(100)
    > y <- x + rnorm(100)
    > d <- data.frame(x,y)
    > s.hist(d)

[Figure: s.hist scatterplot of (x, y) with marginal histograms.]

For the weird example with y2 = y1 · I(z = 1) − y1 · I(z = 0):

[Figure: s.hist plot of (y1, y2). The points fall on the two lines y2 = y1 and y2 = −y1, even though both marginal histograms look standard normal.]
Properties of the multivariate normal distribution

Let the observation vector (the rows of the data matrix) be partitioned into y and x, with

    E[(y′, x′)′] = (µy′, µx′)′,  cov[(y′, x′)′] = [Σyy Σyx; Σxy Σxx]

and

    (y′, x′)′ ∼ Np+q((µy′, µx′)′, [Σyy Σyx; Σxy Σxx])

5. (a) The subvectors y and x are independent if Σyx = 0.
   (b) yi and yj (or yi and xj) are independent if σij = 0.

These properties do not always hold if the distribution is not multivariate normal. In particular, for the weird example where y2 = y1 · I(z = 1) − y1 · I(z = 0), what can you say about the correlation between y1 and y2, and what can you say about their independence?

It is often easier to show that two variables are uncorrelated than to show that they are independent. So this property of the multivariate normal, that zero correlation implies independence, is quite useful.

Conditional distributions

Given two or more random variables with a joint distribution, we can condition on some of the random variables to get the conditional distribution of the remaining variables. For example, with the heights and ages of couples example, we could look at the distribution of height conditional on age.
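To answer the question posed above about the weird example: y1 and y2 turn out to be uncorrelated, since cov(y1, y2) = E[y1²]·P(z = 1) − E[y1²]·P(z = 0) = 1/2 − 1/2 = 0, yet they are strongly dependent, because |y2| = |y1| always. A quick Python/NumPy check (a sketch, not code from the slides):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
y1 = rng.standard_normal(n)
z = rng.integers(0, 2, size=n)       # z ~ Bernoulli(1/2)
y2 = np.where(z == 1, y1, -y1)       # y2 = y1*I(z=1) - y1*I(z=0)

# Uncorrelated: the two branches cancel, so the sample correlation is near 0.
print(np.corrcoef(y1, y2)[0, 1])

# But not independent: y2**2 equals y1**2 exactly, so their correlation is 1.
print(np.corrcoef(y1**2, y2**2)[0, 1])
```

So zero correlation implies independence only under joint normality; for this pair, ordinary correlation misses the dependence entirely.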