Multivariate Normal Distribution from Wikipedia, the Free Encyclopedia
Recommended publications
-
5. The Student t Distribution
Virtual Laboratories > 4. Special Distributions > 5. The Student t Distribution

In this section we will study a distribution that has special importance in statistics. In particular, this distribution will arise in the study of a standardized version of the sample mean when the underlying distribution is normal.

The Probability Density Function

Suppose that Z has the standard normal distribution, V has the chi-squared distribution with n degrees of freedom, and that Z and V are independent. Let

T = Z / √(V/n).

In the following exercise, you will show that T has probability density function given by

f(t) = [Γ((n + 1)/2) / (√(nπ) Γ(n/2))] (1 + t²/n)^(−(n+1)/2), t ∈ ℝ.

1. Show that T has the given probability density function by using the following steps.
a. Show first that the conditional distribution of T given V = v is normal with mean 0 and variance n/v.
b. Use (a) to find the joint probability density function of (T, V).
c. Integrate the joint probability density function in (b) with respect to v to find the probability density function of T.

The distribution of T is known as the Student t distribution with n degrees of freedom. The distribution is well defined for any n > 0, but in practice, only positive integer values of n are of interest. This distribution was first studied by William Gosset, who published under the pseudonym Student. In addition to supplying the proof, Exercise 1 provides a good way of thinking of the t distribution: the t distribution arises when the variance of a mean 0 normal distribution is randomized in a certain way.
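A quick simulation makes the construction concrete. This is a minimal sketch (assuming NumPy and SciPy are available; the window width 0.05 is an arbitrary choice) that builds T = Z/√(V/n) exactly as defined above and checks the empirical density against the closed-form f(t):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 5                        # degrees of freedom (example value)
size = 200_000

# Build T from its definition: Z standard normal, V chi-square with n
# degrees of freedom, Z and V independent.
Z = rng.standard_normal(size)
V = rng.chisquare(n, size)
T = Z / np.sqrt(V / n)

# The simulated T should follow the closed-form density f(t).
for t in (0.0, 1.0, 2.0):
    empirical = np.mean(np.abs(T - t) < 0.05) / 0.10  # density via a small window
    print(f"t={t}: empirical {empirical:.3f} vs exact {stats.t.pdf(t, df=n):.3f}")
```
-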
An Unforeseen Equivalence Between Uncertainty and Entropy
An Unforeseen Equivalence between Uncertainty and Entropy
Tim Muller, University of Nottingham, [email protected]

Abstract. Beta models are trust models that use a Bayesian notion of evidence. In that paradigm, evidence comes in the form of observing whether an agent has acted well or not. For Beta models, uncertainty is the inverse of the amount of evidence. Information theory, on the other hand, offers a fundamentally different approach to the issue of lacking knowledge. The entropy of a random variable is determined by the shape of its distribution, not by the evidence used to obtain it. However, we discover that a specific entropy measure (EDRB) coincides with uncertainty (in the Beta model). EDRB is the expected Kullback-Leibler divergence between two Bernoulli trials with parameters randomly selected from the distribution. EDRB allows us to apply the notion of uncertainty to other distributions that may occur when generalising Beta models.

Keywords: Uncertainty · Entropy · Information Theory · Beta model · Subjective Logic

1 Introduction

The Beta model paradigm is a powerful formal approach to studying trust. Bayesian logic is at the core of the Beta model: "agents with high integrity behave honestly" becomes "honest behaviour evidences high integrity". Its simplest incarnation is to apply Beta distributions naively, and this approach has limited success. However, more powerful and sophisticated approaches are widespread (e.g. [3,13,17]). A commonality among many approaches is that more evidence (in the form of observing instances of behaviour) yields more certainty of an opinion. Uncertainty is inversely proportional to the amount of evidence. Evidence is often used in machine learning.
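As a rough illustration of the quantity the abstract describes, the following Monte Carlo sketch estimates EDRB for a Beta opinion (the function names and the Beta parameters are assumptions for illustration, not the paper's notation):

```python
import numpy as np

def kl_bernoulli(p, q):
    """Kullback-Leibler divergence KL(Bern(p) || Bern(q)), elementwise."""
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def edrb_beta(alpha, beta, samples=200_000, seed=0):
    """Monte Carlo estimate of EDRB for a Beta(alpha, beta) distribution:
    the expected KL divergence between two Bernoulli trials whose
    parameters are drawn independently from that distribution."""
    rng = np.random.default_rng(seed)
    p = rng.beta(alpha, beta, samples)
    q = rng.beta(alpha, beta, samples)
    return kl_bernoulli(p, q).mean()

# More evidence (larger alpha + beta) should yield a smaller value,
# mirroring uncertainty falling as evidence accumulates.
for total in (2, 10, 100):
    print(total, edrb_beta(total / 2, total / 2))
```

Consistent with the Beta-model view, the printed estimates shrink as the total evidence α + β grows.
-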
Moments of the Product and Ratio of Two Correlated Chi-Square Variables
Stat Papers (2009) 50:581–592, DOI 10.1007/s00362-007-0105-0

Moments of the product and ratio of two correlated chi-square variables
Anwar H. Joarder
Received: 2 June 2006 / Revised: 8 October 2007 / Published online: 20 November 2007 © The Author(s) 2007

Abstract. The exact probability density function of a bivariate chi-square distribution with two correlated components is derived. Some moments of the product and ratio of two correlated chi-square random variables have been derived. The ratio of the two correlated chi-square variables is used to compare variability. One such application is referred to. Another application is pinpointed in connection with the distribution of the correlation coefficient based on a bivariate t distribution.

Keywords: Bivariate chi-square distribution · Moments · Product of correlated chi-square variables · Ratio of correlated chi-square variables
Mathematics Subject Classification (2000): 62E15 · 60E05 · 60E10

1 Introduction

Fisher (1915) derived the distribution of the mean-centered sum of squares and sum of products in order to study the distribution of the correlation coefficient from a bivariate normal sample. Let X₁, X₂, ..., X_N (N > 2) be two-dimensional independent random vectors where X_j = (X₁ⱼ, X₂ⱼ)′, j = 1, 2, ..., N, is distributed as a bivariate normal distribution denoted by N₂(θ, Σ) with θ = (θ₁, θ₂)′ and a 2 × 2 covariance matrix Σ = (σᵢₖ), i = 1, 2; k = 1, 2. The sample mean-centered sums of squares and sum of products are given by aᵢᵢ = Σⱼ₌₁ᴺ (Xᵢⱼ − X̄ᵢ)² = mSᵢ², m = N − 1 (i = 1, 2), and a₁₂ = Σⱼ₌₁ᴺ (X₁ⱼ − X̄₁)(X₂ⱼ − X̄₂) = mRS₁S₂, respectively.
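The moments discussed in the abstract can be sanity-checked by simulation. This is a Monte Carlo sketch under assumed example values (unit variances, ρ = 0.5, N = 10), not the paper's analytic derivation:

```python
import numpy as np

rng = np.random.default_rng(0)
N, rho = 10, 0.5             # sample size and correlation (assumed example values)
cov = np.array([[1.0, rho], [rho, 1.0]])
reps = 20_000

prod = np.empty(reps)
ratio = np.empty(reps)
for r in range(reps):
    X = rng.multivariate_normal([0.0, 0.0], cov, size=N)
    D = X - X.mean(axis=0)
    A = D.T @ D                   # mean-centered sums of squares and products a_ik
    prod[r] = A[0, 0] * A[1, 1]   # product of two correlated chi-square variables
    ratio[r] = A[0, 0] / A[1, 1]  # ratio, used to compare variability

print("E[a11 a22] approx:", prod.mean())
print("E[a11/a22] approx:", ratio.mean())
```
-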
Guaranteed Bounds on Information-Theoretic Measures of Univariate Mixtures Using Piecewise Log-Sum-Exp Inequalities
Preprint posted 20 October 2016, doi:10.20944/preprints201610.0086.v1. Peer-reviewed version available at Entropy 2016, 18, 442; doi:10.3390/e18120442.

Article: Guaranteed Bounds on Information-Theoretic Measures of Univariate Mixtures Using Piecewise Log-Sum-Exp Inequalities
Frank Nielsen 1,2,* and Ke Sun 1
1 École Polytechnique, Palaiseau 91128, France; [email protected]
2 Sony Computer Science Laboratories Inc., Paris 75005, France
* Correspondence: [email protected]

Abstract: Information-theoretic measures such as the entropy, the cross-entropy and the Kullback-Leibler divergence between two mixture models are core primitives in many signal processing tasks. Since the Kullback-Leibler divergence of mixtures provably does not admit a closed-form formula, it is in practice either estimated using costly Monte Carlo stochastic integration, approximated, or bounded using various techniques. We present a fast and generic method that builds algorithmically closed-form lower and upper bounds on the entropy, the cross-entropy and the Kullback-Leibler divergence of mixtures. We illustrate the versatile method by reporting on our experiments for approximating the Kullback-Leibler divergence between univariate exponential mixtures, Gaussian mixtures, Rayleigh mixtures, and Gamma mixtures.

Keywords: information geometry; mixture models; log-sum-exp bounds

1. Introduction

Mixture models are commonly used in signal processing. A typical scenario is to use mixture models [1–3] to smoothly model histograms. For example, Gaussian Mixture Models (GMMs) can be used to convert grey-valued images into binary images by building a GMM fitting the image intensity histogram and then choosing the threshold as the average of the Gaussian means [1] to binarize the image.
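For context, the costly Monte Carlo estimation that the paper's bounds are designed to replace looks roughly like this (a minimal sketch assuming SciPy; the two mixtures are arbitrary examples, not from the paper):

```python
import numpy as np
from scipy import stats

def gmm_pdf(x, w, mu, sd):
    """Density of a univariate Gaussian mixture at points x."""
    return np.sum([wi * stats.norm.pdf(x, mi, si)
                   for wi, mi, si in zip(w, mu, sd)], axis=0)

def kl_mc(wp, mp, sp, wq, mq, sq, n=200_000, seed=0):
    """Monte Carlo estimate of KL(p || q) for two Gaussian mixtures."""
    rng = np.random.default_rng(seed)
    comp = rng.choice(len(wp), size=n, p=wp)        # pick a component of p
    x = rng.normal(np.take(mp, comp), np.take(sp, comp))
    return np.mean(np.log(gmm_pdf(x, wp, mp, sp) / gmm_pdf(x, wq, mq, sq)))

# Two arbitrary example mixtures.
print(kl_mc([0.5, 0.5], [-1.0, 1.0], [1.0, 1.0],
            [0.3, 0.7], [0.0, 2.0], [1.0, 0.5]))
```
-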
Chapter 8 Fundamental Sampling Distributions and Data Descriptions
CHAPTER 8: FUNDAMENTAL SAMPLING DISTRIBUTIONS AND DATA DESCRIPTIONS

8.1 Random Sampling

The basic idea of statistical inference is that we are allowed to draw inferences or conclusions about a population based on the statistics computed from the sample data, so that we could infer something about the parameters and obtain more information about the population. Thus we must make sure that the samples are good representatives of the population and pay attention to the sampling bias and variability to ensure the validity of statistical inference. In the sampling procedure, it is desirable to choose a random sample in the sense that the observations are made independently and at random.

Random Sample. Let X1, X2, ..., Xn be n independent random variables, each having the same probability distribution f(x). Define X1, X2, ..., Xn to be a random sample of size n from the population f(x) and write its joint probability distribution as

f(x1, x2, ..., xn) = f(x1) f(x2) ··· f(xn).

8.2 Some Important Statistics

It is important to measure the center and the variability of the population. For the purpose of inference, we study the following measures regarding the center and the variability.

8.2.1 Location Measures of a Sample

The most commonly used statistics for measuring the center of a set of data, arranged in order of magnitude, are the sample mean, sample median, and sample mode. Let X1, X2, ..., Xn represent n random variables.

Sample Mean. To calculate the average, or mean, add all values, then divide by the number of individuals.
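A minimal sketch of these definitions (the exponential population is an assumed example): draw n independent observations from one population f(x), then compute location measures of the sample.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random sample of size n: n independent draws from one population f(x)
# (here an assumed exponential population with mean 2).
n = 50
sample = rng.exponential(scale=2.0, size=n)

# Location measures of the sample.
print("sample mean:  ", sample.mean())
print("sample median:", np.median(sample))
```
-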
On the Meaning and Use of Kurtosis
Psychological Methods, 1997, Vol. 2, No. 3, 292–307. Copyright 1997 by the American Psychological Association, Inc.

On the Meaning and Use of Kurtosis
Lawrence T. DeCarlo, Fordham University

For symmetric unimodal distributions, positive kurtosis indicates heavy tails and peakedness relative to the normal distribution, whereas negative kurtosis indicates light tails and flatness. Many textbooks, however, describe or illustrate kurtosis incompletely or incorrectly. In this article, kurtosis is illustrated with well-known distributions, and aspects of its interpretation and misinterpretation are discussed. The role of kurtosis in testing univariate and multivariate normality; as a measure of departures from normality; in issues of robustness, outliers, and bimodality; in generalized tests and estimators; as well as limitations of and alternatives to the kurtosis measure β₂, are discussed.

It is typically noted in introductory statistics courses that distributions can be characterized in terms of central tendency, variability, and shape. With respect to shape, virtually every textbook defines and illustrates skewness. On the other hand, another aspect of shape, which is kurtosis, is either not discussed or, worse yet, is often described or illustrated incorrectly. Kurtosis is also frequently not reported in research articles, in spite of the fact that virtually every statistical package provides a measure of kurtosis.

Kurtosis is defined as β₂ = E(X − μ)⁴/σ⁴, the fourth moment about the mean divided by the fourth power of the standard deviation. The normal distribution has a kurtosis of 3, and β₂ − 3 is often used so that the reference normal distribution has a kurtosis of zero (β₂ − 3 is sometimes denoted as γ₂). A sample counterpart to β₂ can be obtained by replacing the population moments with the sample moments, which gives

b₂ = [Σ(Xᵢ − X̄)⁴/n] / [Σ(Xᵢ − X̄)²/n]².
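The sample counterpart b₂ is straightforward to compute. A small sketch (assuming NumPy; the three test distributions are standard examples, not from the article):

```python
import numpy as np

def sample_kurtosis(x):
    """Sample counterpart b2: fourth sample moment about the mean
    divided by the squared second sample moment."""
    dev = np.asarray(x, dtype=float) - np.mean(x)
    return np.mean(dev**4) / np.mean(dev**2) ** 2

rng = np.random.default_rng(0)
print(sample_kurtosis(rng.standard_normal(100_000)))  # normal: near 3
print(sample_kurtosis(rng.laplace(size=100_000)))     # heavy tails: near 6
print(sample_kurtosis(rng.uniform(size=100_000)))     # light tails: near 1.8
```

The Laplace and uniform values illustrate the heavy-tailed (b₂ > 3) and light-tailed (b₂ < 3) cases described above.
-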
A Tail Quantile Approximation Formula for the Student t and the Symmetric Generalized Hyperbolic Distribution
econstor, a service of the Leibniz Information Centre for Economics (zbw).

Suggested citation: Schlüter, Stephan; Fischer, Matthias J. (2009): A tail quantile approximation formula for the student t and the symmetric generalized hyperbolic distribution, IWQW Discussion Papers, No. 05/2009, Friedrich-Alexander-Universität Erlangen-Nürnberg, Institut für Wirtschaftspolitik und Quantitative Wirtschaftsforschung (IWQW), Nürnberg. This version is available at: http://hdl.handle.net/10419/29554
-
Chapter 4: Central Tendency and Dispersion
In this chapter, you can learn
• how the values of the cases on a single variable can be summarized using measures of central tendency and measures of dispersion;
• how the central tendency can be described using statistics such as the mode, median, and mean;
• how the dispersion of scores on a variable can be described using statistics such as a percent distribution, minimum, maximum, range, and standard deviation along with a few others; and
• how a variable's level of measurement determines what measures of central tendency and dispersion to use.

Schooling, Politics, and Life After Death

Once again, we will use some questions about 1980 GSS young adults as opportunities to explain and demonstrate the statistics introduced in the chapter:
• Among the 1980 GSS young adults, are there both believers and nonbelievers in a life after death? Which is the more common view?
• On a seven-attribute political party allegiance variable anchored at one end by "strong Democrat" and at the other by "strong Republican," what was the most Democratic attribute used by any of the 1980 GSS young adults? The most Republican attribute? If we put all 1980 GSS young adults in order from the strongest Democrat to the strongest Republican, what is the political party affiliation of the person in the middle?
• What was the average number of years of schooling completed at the time of the survey by 1980 GSS young adults? Were most of these twentysomethings pretty close to the average on
-
The Probability Lifesaver: Order Statistics and the Median Theorem
The Probability Lifesaver: Order Statistics and the Median Theorem
Steven J. Miller
December 30, 2015

Contents
1 Order Statistics and the Median Theorem
1.1 Definition of the Median
1.2 Order Statistics
1.3 Examples of Order Statistics
1.4 The Sample Distribution of the Median
1.5 Technical bounds for proof of Median Theorem
1.6 The Median of Normal Random Variables

Greetings again! In this supplemental chapter we develop the theory of order statistics in order to prove the Median Theorem. This is a beautiful result in its own right, but also extremely important as a substitute for the Central Limit Theorem, and allows us to say non-trivial things when the CLT is unavailable.

Chapter 1: Order Statistics and the Median Theorem

The Central Limit Theorem is one of the gems of probability. It's easy to use and its hypotheses are satisfied in a wealth of problems. Many courses build towards a proof of this beautiful and powerful result, as it truly is 'central' to the entire subject. Not to detract from the majesty of this wonderful result, however, what happens in those instances where it's unavailable? For example, one of the key assumptions that must be met is that our random variables need to have finite higher moments, or at the very least a finite variance. What if we were to consider sums of Cauchy random variables? Is there anything we can say? This is not just a question of theoretical interest, of mathematicians generalizing for the sake of generalization. The following example from economics highlights why this chapter is more than just of theoretical interest.
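The Cauchy question above has a concrete answer: the sample mean of Cauchy data never settles down, but the sample median does, which is exactly what the Median Theorem exploits. A minimal sketch (assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)

# Cauchy data: no finite mean or variance, so the CLT gives no help,
# yet the sample median concentrates around the true median, 0.
for n in (100, 10_000, 1_000_000):
    x = rng.standard_cauchy(n)
    print(f"n={n:>9,}: mean={x.mean():>12.3f}   median={np.median(x):>8.4f}")
```
-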
A Multivariate Student's T-Distribution
Open Journal of Statistics, 2016, 6, 443–450. Published online June 2016 in SciRes: http://www.scirp.org/journal/ojs, http://dx.doi.org/10.4236/ojs.2016.63040

A Multivariate Student's t-Distribution
Daniel T. Cassidy, Department of Engineering Physics, McMaster University, Hamilton, ON, Canada
Received 29 March 2016; accepted 14 June 2016; published 17 June 2016
Copyright © 2016 by author and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY): http://creativecommons.org/licenses/by/4.0/

Abstract: A multivariate Student's t-distribution is derived by analogy to the derivation of a multivariate normal (Gaussian) probability density function. This multivariate Student's t-distribution can have different shape parameters νᵢ for the marginal probability density functions of the multivariate distribution. Expressions for the probability density function, for the variances, and for the covariances of the multivariate t-distribution with arbitrary shape parameters for the marginals are given.

Keywords: Multivariate Student's t, Variance, Covariance, Arbitrary Shape Parameters

1. Introduction

An expression for a multivariate Student's t-distribution is presented. This expression, which is different in form than the form that is commonly used, allows the shape parameter ν for each marginal probability density function (pdf) of the multivariate pdf to be different. The form that is typically used is [1]

Γ((ν + n)/2) / [Γ(ν/2) (νπ)^(n/2) |Σ|^(1/2)] · (1 + [x]ᵀ Σ⁻¹ [x]/ν)^(−(ν+n)/2)   (1)

This "typical" form attempts to generalize the univariate Student's t-distribution and is valid when the n marginal distributions have the same shape parameter ν.
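A sketch of the "typical" equal-ν form in Equation (1), which can be sampled as a Gaussian scaled by an independent chi-square (this illustrates the common form the paper contrasts with, not the paper's generalized construction with distinct νᵢ):

```python
import numpy as np

def sample_mvt(mean, cov, df, size, seed=0):
    """Sample the equal-nu multivariate t of Eq. (1): a zero-mean Gaussian
    with covariance matrix cov, scaled by an independent chi-square."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(np.zeros(len(mean)), cov, size=size)
    v = rng.chisquare(df, size=size)
    return np.asarray(mean) + z / np.sqrt(v / df)[:, None]

cov = np.array([[1.0, 0.5], [0.5, 2.0]])
x = sample_mvt([0.0, 0.0], cov, df=6, size=200_000)
# For df = 6 the covariance of x approaches cov * df/(df - 2) = 1.5 * cov,
# a standard property of this form.
print(np.cov(x.T))
```
-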
A Lower Bound on the Differential Entropy of Log-Concave Random Vectors with Applications
Entropy, Article
A Lower Bound on the Differential Entropy of Log-Concave Random Vectors with Applications
Arnaud Marsiglietti 1,* and Victoria Kostina 2
1 Center for the Mathematics of Information, California Institute of Technology, Pasadena, CA 91125, USA
2 Department of Electrical Engineering, California Institute of Technology, Pasadena, CA 91125, USA; [email protected]
* Correspondence: [email protected]
Received: 18 January 2018; Accepted: 6 March 2018; Published: 9 March 2018

Abstract: We derive a lower bound on the differential entropy of a log-concave random variable X in terms of the p-th absolute moment of X. The new bound leads to a reverse entropy power inequality with an explicit constant, and to new bounds on the rate-distortion function and the channel capacity. Specifically, we study the rate-distortion function for log-concave sources and distortion measure d(x, x̂) = |x − x̂|ʳ, with r ≥ 1, and we establish that the difference between the rate-distortion function and the Shannon lower bound is at most log(√(πe)) ≈ 1.5 bits, independently of r and the target distortion d. For mean-square error distortion, the difference is at most log(√(πe/2)) ≈ 1 bit, regardless of d. We also provide bounds on the capacity of memoryless additive noise channels when the noise is log-concave. We show that the difference between the capacity of such channels and the capacity of the Gaussian channel with the same noise power is at most log(√(πe/2)) ≈ 1 bit. Our results generalize to the case of a random vector X with possibly dependent coordinates.
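The two constants quoted in the abstract are easy to verify numerically (a quick check in log base 2, since the gaps are stated in bits):

```python
import math

# The two gaps from the abstract, in bits (log base 2).
gap_r   = 0.5 * math.log2(math.pi * math.e)      # log sqrt(pi e)   for |x - xhat|^r
gap_mse = 0.5 * math.log2(math.pi * math.e / 2)  # log sqrt(pi e/2) for mean-square error
print(round(gap_r, 3), round(gap_mse, 3))        # 1.547 and 1.047
```
-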
Notes Mean, Median, Mode & Range
How Do You Use Mode, Median, Mean, and Range to Describe Data?

There are many ways to describe the characteristics of a set of data. The mode, median, and mean are all called measures of central tendency. These measures of central tendency and range are described in the table below.

Mode
The mode of a set of data describes which value occurs most frequently. If two or more numbers occur the same number of times and occur more often than all the other numbers in the set, those numbers are all modes for the data set. If each number in the set occurs the same number of times, the set of data has no mode.
Use the mode to show which value in a set of data occurs most often. For the set {1, 1, 2, 3, 5, 6, 10}, the mode is 1 because it occurs most frequently.

Median
The median of a set of data describes what value is in the middle if the set is ordered from greatest to least or from least to greatest. If there are an even number of values, the median is the average of the two middle values. Half of the values are greater than the median, and half of the values are less than the median.
Use the median to show which number in a set of data is in the middle when the numbers are listed in order. For the set {1, 1, 2, 3, 5, 6, 10}, the median is 3 because it is in the middle when the numbers are listed in order.
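The worked set {1, 1, 2, 3, 5, 6, 10} from the notes can be checked directly with Python's standard library:

```python
from statistics import mean, median, mode

data = [1, 1, 2, 3, 5, 6, 10]
print("mode:  ", mode(data))             # 1, occurs most frequently
print("median:", median(data))           # 3, the middle value when ordered
print("mean:  ", mean(data))             # 4, the average
print("range: ", max(data) - min(data))  # 9, greatest minus least
```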