Multivariate Quantiles and Ranks Using Optimal Transportation

Total Pages: 16

File Type: pdf, Size: 1020 KB

Multivariate Quantiles and Ranks using Optimal Transportation

Bodhisattva Sen (Department of Statistics, Columbia University, New York)
Department of Statistics, George Mason University
Joint work with Promit Ghosal (Columbia University)
05 April, 2019
Supported by NSF grants DMS-1712822 and AST-1614743

Ranks and quantiles when d = 1

X is a random variable with c.d.f. F.

Rank: the rank of x ∈ R is F(x).
Property: if F is continuous, then F(X) ∼ Unif([0, 1]).

Quantile: the quantile function is F^{-1}.
Property: if F is continuous, then F^{-1}(U) ∼ F, where U ∼ Unif([0, 1]).

How to define ranks and quantiles in R^d, d > 1?

Defining quantiles, ranks, depth, etc. is difficult when d > 1 because R^d lacks a natural ordering. Many notions of multivariate quantiles/ranks have been suggested: Puri and Sen (1971), Chaudhuri and Sengupta (1993), Möttönen and Oja (1995), Chaudhuri (1996), Liu and Singh (1993), Serfling (2010), ...

Spatial median and geometric quantile

Spatial median:  M := arg min_{m ∈ R^d} E ‖X − m‖.

Quantile when d = 1: for u ∈ (0, 1),
    F^{-1}(u) = arg min_{x ∈ R} E[ |X − x| − (2u − 1) x ].

Geometric quantile [Chaudhuri (1996)]: for ‖u‖ < 1, let
    Q(u) := arg min_{x ∈ R^d} E[ ‖X − x‖ − ⟨u, x⟩ ].
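A minimal numerical sketch (mine, not from the slides) of the two d = 1 properties above, taking F to be the standard normal c.d.f. as an arbitrary continuous choice; the distributional claims are checked with Kolmogorov–Smirnov tests.

```python
# Sketch: if F is continuous, F(X) ~ Unif([0,1]) and F^{-1}(U) ~ F.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000

x = rng.standard_normal(n)
ranks = stats.norm.cdf(x)          # F(X): should be approximately Unif([0, 1])
u = rng.uniform(size=n)
quantiles = stats.norm.ppf(u)      # F^{-1}(U): should be approximately N(0, 1)

print(stats.kstest(ranks, "uniform").pvalue)   # large p-value expected
print(stats.kstest(quantiles, "norm").pvalue)  # large p-value expected
```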
Outline

1. Introduction to Optimal Transportation: Monge's problem; Kantorovich relaxation (primal problem); a geometric approach.
2. Quantile and rank functions in R^d (d ≥ 1).
3. Some applications in statistics: two-sample goodness-of-fit testing; independence testing.

Monge's problem

Gaspard Monge (1781): what is the cheapest way to transport a pile of sand to cover a sinkhole?

Goal:   inf_{T : T(X) ∼ ν} E_µ[ c(X, T(X)) ]

- µ (on X) and ν (on Y) are probability measures, ∫ dµ(x) = ∫ dν(y) = 1;
- c(x, y) ≥ 0 is the cost of transporting x to y (e.g., c(x, y) = ‖x − y‖^p);
- T transports µ to ν, i.e., T(X) ∼ ν where X ∼ µ, or equivalently ν(B) = µ(T^{-1}(B)) = ∫_{T^{-1}(B)} dµ for every B ⊂ Y.
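The Kantorovich relaxation named in the outline replaces the map T by a coupling J with marginals µ and ν. As a small sketch of the discrete version (my own illustration, not from the talk; the point locations, weights, and squared-distance cost are arbitrary choices), the primal problem becomes a linear program:

```python
# Sketch: discrete Kantorovich problem
#   minimize sum_{ij} C_ij * J_ij  s.t.  J 1 = mu,  J^T 1 = nu,  J >= 0.
import numpy as np
from scipy.optimize import linprog

mu = np.array([0.5, 0.3, 0.2])          # source weights on points x
nu = np.array([0.4, 0.4, 0.2])          # target weights on points y
x = np.array([0.0, 1.0, 2.0])
y = np.array([0.5, 1.5, 3.0])
C = (x[:, None] - y[None, :]) ** 2      # squared-distance cost c(x, y)

m, n = C.shape
A_eq = np.zeros((m + n, m * n))
for i in range(m):                       # row sums equal mu
    A_eq[i, i * n:(i + 1) * n] = 1.0
for j in range(n):                       # column sums equal nu
    A_eq[m + j, j::n] = 1.0
b_eq = np.concatenate([mu, nu])

res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
J = res.x.reshape(m, n)                  # optimal coupling (transport plan)
print(np.round(J, 3), "cost:", res.fun)
```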
Wasserstein distance (background)

For p ≥ 1, W_p(P, Q) := ( inf_J ∫ ‖x − y‖^p dJ(x, y) )^{1/p}, the infimum taken over all couplings J of P and Q. When p = 1 this is also called the Earth Mover distance. The minimizer J* (which does exist) is called the optimal transport plan or the optimal coupling. In case there is an optimal transport map T, then J* is a singular measure with all its mass on the set {(x, T(x))}.

It can be shown that
    W_p^p(P, Q) = sup_{ψ, φ} [ ∫ ψ(y) dQ(y) − ∫ φ(x) dP(x) ],
the supremum taken over pairs (ψ, φ) with ψ(y) − φ(x) ≤ ‖x − y‖^p. This is called the dual formulation. In the special case p = 1 we have the very simple representation
    W_1(P, Q) = sup { ∫ f dP − ∫ f dQ : f ∈ F },
where F denotes all maps from R^d to R such that |f(y) − f(x)| ≤ ‖x − y‖ for all x, y. When d = 1, the distance has a closed form:
    W_p(P, Q) = ( ∫_0^1 | F^{-1}(z) − G^{-1}(z) |^p dz )^{1/p},
where F and G are the c.d.f.'s of P and Q.

[Figure: two densities p and q and the optimal transport map T that morphs p into q.]

One-dimensional optimal transport

Suppose X, Y ⊂ R; µ and ν are absolutely continuous, with c.d.f.'s Fµ and Fν.

Goals: (i) transport µ to ν, i.e., find T such that if X ∼ µ then T(X) ∼ ν; (ii) T minimizes the cost E_µ[(X − T(X))^2], i.e., we take c(x, y) = (x − y)^2.

For this cost the optimal map preserves order: if x1 > x0 then T(x1) ≥ T(x0), so T must be a monotone nondecreasing function. Therefore choose T(·) so that (recall: ν(B) = ∫_{T^{-1}(B)} dµ)

    ∫_{−∞}^{x} dµ = ∫_{−∞}^{T(x)} dν   ⟹   Fµ(x) = Fν(T(x)).

Thus T = Fν^{-1} ∘ Fµ (and this map T is unique).
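A short sketch (my illustration, not from the talk) of the two d = 1 facts above: the map T = Fν^{-1} ∘ Fµ pushes µ forward to ν, and W_1 has the quantile closed form. Here µ = N(0, 1) and ν = Exp(1) are arbitrary choices.

```python
# Sketch: one-dimensional optimal transport and the W_1 closed form.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.standard_normal(50_000)                    # X ~ mu = N(0, 1)

def T(t):                                          # T = F_nu^{-1} o F_mu
    return stats.expon.ppf(stats.norm.cdf(t))

y = T(x)                                           # should be ~ nu = Exp(1)
print(stats.kstest(y, "expon").pvalue)             # large p-value expected

# W_1 via the quantile closed form, compared with SciPy's sample-based estimate.
z = np.linspace(1e-6, 1 - 1e-6, 200_000)
w1_closed_form = np.mean(np.abs(stats.norm.ppf(z) - stats.expon.ppf(z)))
w1_from_samples = stats.wasserstein_distance(
    x, stats.expon.rvs(size=50_000, random_state=2))
print(w1_closed_form, w1_from_samples)             # the two should be close
```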
Recommended publications
  • 5. The Student t Distribution
    Virtual Laboratories > 4. Special Distributions > 5. The Student t Distribution. In this section we will study a distribution that has special importance in statistics. In particular, this distribution will arise in the study of a standardized version of the sample mean when the underlying distribution is normal. The Probability Density Function: suppose that Z has the standard normal distribution, V has the chi-squared distribution with n degrees of freedom, and that Z and V are independent. Let T = Z / √(V/n). In the following exercise, you will show that T has probability density function given by

        f(t) = Γ((n + 1)/2) / ( √(nπ) Γ(n/2) ) · (1 + t^2/n)^{−(n+1)/2},   t ∈ ℝ.

    1. Show that T has the given probability density function by using the following steps. a. Show first that the conditional distribution of T given V = v is normal with mean 0 and variance n/v. b. Use (a) to find the joint probability density function of (T, V). c. Integrate the joint probability density function in (b) with respect to v to find the probability density function of T. The distribution of T is known as the Student t distribution with n degrees of freedom. The distribution is well defined for any n > 0, but in practice, only positive integer values of n are of interest. This distribution was first studied by William Gosset, who published under the pseudonym Student. In addition to supplying the proof, Exercise 1 provides a good way of thinking of the t distribution: the t distribution arises when the variance of a mean 0 normal distribution is randomized in a certain way.
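    A quick numerical companion to this construction (my sketch, not part of the entry): simulate T = Z/√(V/n) directly and compare it with the t distribution with n degrees of freedom.

```python
# Sketch: T = Z / sqrt(V/n) with Z ~ N(0,1), V ~ chi^2_n independent, is t_n.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_df, n_sim = 5, 200_000

z = rng.standard_normal(n_sim)
v = rng.chisquare(n_df, size=n_sim)
t_samples = z / np.sqrt(v / n_df)

# KS test of the simulated ratios against the t distribution with n_df d.o.f.
print(stats.kstest(t_samples, "t", args=(n_df,)).pvalue)  # large p-value expected
```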
  • Theoretical Statistics. Lecture 20
    Theoretical Statistics. Lecture 20. Peter Bartlett. 1. Recall: functional delta method, differentiability in normed spaces, Hadamard derivatives. [vdV20] 2. Quantile estimates. [vdV21] 3. Contiguity. [vdV6]

    Recall: differentiability of functions in normed spaces. Definition: φ : D → E is Hadamard differentiable at θ ∈ D tangentially to D₀ ⊆ D if there exists a linear, continuous φ′_θ : D₀ → E such that for all h ∈ D₀, if t → 0 and ‖h_t − h‖ → 0, then ‖(φ(θ + t h_t) − φ(θ))/t − φ′_θ(h)‖ → 0.

    Recall: functional delta method. Theorem: suppose φ : D → E, where D and E are normed linear spaces, and the statistic T_n : Ω_n → D satisfies √n(T_n − θ) ⇝ T for a random element T in D₀ ⊂ D. If φ is Hadamard differentiable at θ tangentially to D₀, then √n(φ(T_n) − φ(θ)) ⇝ φ′_θ(T). If we can extend φ′_θ : D₀ → E to a continuous map φ′_θ : D → E, then √n(φ(T_n) − φ(θ)) = φ′_θ(√n(T_n − θ)) + o_P(1).

    Recall: quantiles. Definition: the quantile function of F is F^{-1} : (0, 1) → R, F^{-1}(p) = inf{x : F(x) ≥ p}. Quantile transformation: for U uniform on (0, 1), F^{-1}(U) ∼ F. Probability integral transformation: for X ∼ F, F(X) is uniform on [0, 1] iff F is continuous on R. F^{-1} is an inverse (i.e., F^{-1}(F(x)) = x and F(F^{-1}(p)) = p for all x and p) iff F is continuous and strictly increasing.

    Empirical quantile function: for a sample with distribution function F, define the empirical quantile function as the quantile function F_n^{-1} of the empirical distribution function F_n.
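    To make the definition F^{-1}(p) = inf{x : F(x) ≥ p} concrete (my sketch, not from the lecture), the empirical quantile function reduces to an order statistic:

```python
# Sketch: empirical quantile F_n^{-1}(p) = inf{x : F_n(x) >= p},
# i.e. the ceil(n*p)-th order statistic of the sample.
import numpy as np

def empirical_quantile(sample, p):
    xs = np.sort(np.asarray(sample))
    n = len(xs)
    k = int(np.ceil(n * p))          # smallest k with F_n(x_(k)) = k/n >= p
    return xs[max(k, 1) - 1]         # k-th order statistic (0-based indexing)

x = [3.1, 0.2, 5.4, 2.2, 4.8]
print(empirical_quantile(x, 0.5))    # 3.1, since F_n(3.1) = 0.6 >= 0.5
```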
  • A Tail Quantile Approximation Formula for the Student T and the Symmetric Generalized Hyperbolic Distribution
    A Service of econstor (Leibniz-Informationszentrum Wirtschaft / Leibniz Information Centre for Economics). Schlüter, Stephan; Fischer, Matthias J.: Working Paper — A tail quantile approximation formula for the student t and the symmetric generalized hyperbolic distribution, IWQW Discussion Papers, No. 05/2009. Provided in cooperation with: Friedrich-Alexander University Erlangen-Nuremberg, Institute for Economics. Suggested citation: Schlüter, Stephan; Fischer, Matthias J. (2009): A tail quantile approximation formula for the student t and the symmetric generalized hyperbolic distribution, IWQW Discussion Papers, No. 05/2009, Friedrich-Alexander-Universität Erlangen-Nürnberg, Institut für Wirtschaftspolitik und Quantitative Wirtschaftsforschung (IWQW), Nürnberg. This version is available at: http://hdl.handle.net/10419/29554
  • Stat 5102 Lecture Slides: Deck 1 Empirical Distributions, Exact Sampling Distributions, Asymptotic Sampling Distributions
    Stat 5102 Lecture Slides: Deck 1 — Empirical Distributions, Exact Sampling Distributions, Asymptotic Sampling Distributions. Charles J. Geyer, School of Statistics, University of Minnesota.

    Empirical Distributions. The empirical distribution associated with a vector of numbers x = (x_1, ..., x_n) is the probability distribution with expectation operator

        E_n{g(X)} = (1/n) Σ_{i=1}^n g(x_i).

    This is the same distribution that arises in finite population sampling. Suppose we have a population of size n whose members have values x_1, ..., x_n of a particular measurement. The value of that measurement for a randomly drawn individual from this population has a probability distribution that is this empirical distribution.

    The Mean of the Empirical Distribution. In the special case where g(x) = x, we get the mean of the empirical distribution, E_n(X) = (1/n) Σ_{i=1}^n x_i, which is more commonly denoted x̄_n. Those with previous exposure to statistics will recognize this as the formula of the population mean, if x_1, ..., x_n is considered a finite population from which we sample, or as the formula of the sample mean, if x_1, ..., x_n is considered a sample from a specified population.

    The Variance of the Empirical Distribution. The variance of any distribution is the expected squared deviation from the mean of that same distribution. The variance of the empirical distribution is

        var_n(X) = E_n{[X − E_n(X)]^2} = E_n{[X − x̄_n]^2} = (1/n) Σ_{i=1}^n (x_i − x̄_n)^2.

    The only oddity is the use of the notation x̄_n rather than µ for the mean.
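    A small sketch (mine, not from the slides) of the expectation operator E_n and the resulting mean and variance of the empirical distribution; note the divisor n, not n − 1.

```python
# Sketch: E_n{g(X)} = (1/n) * sum_i g(x_i); empirical mean and variance.
import numpy as np

x = np.array([1.0, 4.0, 4.0, 7.0, 9.0])

def E_n(g):
    return np.mean(g(x))             # expectation under the empirical distribution

mean_n = E_n(lambda t: t)            # x-bar_n
var_n = E_n(lambda t: (t - mean_n) ** 2)
print(mean_n, var_n, np.var(x))      # np.var uses divisor n by default, so it matches
```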
  • Depth, Outlyingness, Quantile, and Rank Functions in Multivariate & Other Data Settings
    DEPTH, OUTLYINGNESS, QUANTILE, AND RANK FUNCTIONS – CONCEPTS, PERSPECTIVES, CHALLENGES. Depth, Outlyingness, Quantile, and Rank Functions in Multivariate & Other Data Settings. Robert Serfling (Serfling & Thompson Statistical Consulting and Tutoring, www.utdallas.edu/~serfling). ASA Alabama-Mississippi Chapter Mini-Conference, University of Mississippi, Oxford, April 5, 2019.

    "Don't walk in front of me, I may not follow. Don't walk behind me, I may not lead. Just walk beside me and be my friend." – Albert Camus

    Outline: depth, outlyingness, quantile, and rank functions on R^d; depth functions on an arbitrary data space X; depth functions on an arbitrary parameter space Θ; concluding remarks.

    Preliminary perspectives: depth functions are a nonparametric approach. The setting is nonparametric data analysis; no parametric or semiparametric model is assumed or invoked. We exhibit the geometric structure of a data set in terms of a center, quantile levels, measures of outlyingness for each point, and identification of outliers or outlier regions. Such data description is developed in terms of a depth function that measures centrality from a global viewpoint and yields center-outward ordering of data points. This differs from the density function,
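    The abstract above is conceptual; as one especially simple concrete instance of a depth function (my choice of example, not the speaker's), Mahalanobis depth already gives a center-outward ordering of data points.

```python
# Sketch: Mahalanobis depth D(x) = 1 / (1 + (x - mu)' Sigma^{-1} (x - mu)),
# a simple depth function inducing a center-outward ordering of the sample.
import numpy as np

rng = np.random.default_rng(0)
X = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 2]], size=500)

mu = X.mean(axis=0)
Sigma_inv = np.linalg.inv(np.cov(X, rowvar=False))

diff = X - mu
md2 = np.einsum("ij,jk,ik->i", diff, Sigma_inv, diff)   # squared Mahalanobis distances
depth = 1.0 / (1.0 + md2)

print(X[np.argmax(depth)])   # deepest point: roughly the center of the cloud
```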
  • Sampling Student's T Distribution – Use of the Inverse Cumulative
    Sampling Student's T distribution – use of the inverse cumulative distribution function. William T. Shaw, Department of Mathematics, King's College, The Strand, London WC2R 2LS, UK. With the current interest in copula methods, and fat-tailed or other non-normal distributions, it is appropriate to investigate technologies for managing marginal distributions of interest. We explore "Student's" T distribution, survey its simulation, and present some new techniques for simulation. In particular, for a given real (not necessarily integer) value n of the number of degrees of freedom, we give a pair of power series approximations for the inverse, F_n^{-1}, of the cumulative distribution function (CDF), F_n. We also give some simple and very fast exact and iterative techniques for defining this function when n is an even integer, based on the observation that for such cases the calculation of F_n^{-1} amounts to the solution of a reduced-form polynomial equation of degree n − 1. We also explain the use of Cornish–Fisher expansions to define the inverse CDF as the composition of the inverse CDF for the normal case with a simple polynomial map. The methods presented are well adapted for use with copula and quasi-Monte-Carlo techniques. 1 Introduction: There is much interest in many areas of financial modeling on the use of copulas to glue together marginal univariate distributions where there is no easy canonical multivariate distribution, or one wishes to have flexibility in the mechanism for combination. One of the more interesting marginal distributions is the "Student's" T distribution. This statistical distribution was published by W. Gosset in 1908.
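    A short sketch of the basic idea the paper builds on (not the paper's power-series approximations): draw U ∼ Unif(0, 1) and apply the inverse CDF of the t distribution.

```python
# Sketch: inverse-CDF sampling of Student's t with nu degrees of freedom,
# here via SciPy's t.ppf rather than the paper's series approximations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
nu = 4.5                        # degrees of freedom need not be an integer
u = rng.uniform(size=100_000)   # quasi-Monte-Carlo points could be used instead
t_samples = stats.t.ppf(u, df=nu)

print(stats.kstest(t_samples, "t", args=(nu,)).pvalue)  # sanity check
```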
  • A Tutorial on Quantile Estimation Via Monte Carlo
    A Tutorial on Quantile Estimation via Monte Carlo. Hui Dong and Marvin K. Nakayama. (Hui Dong: Amazon.com Corporate LLC*, Seattle, WA 98109, USA, e-mail: [email protected]. *This work is not related to Amazon, regardless of the affiliation.) Abstract: Quantiles are frequently used to assess risk in a wide spectrum of application areas, such as finance, nuclear engineering, and service industries. This tutorial discusses Monte Carlo simulation methods for estimating a quantile, also known as a percentile or value-at-risk, where p of a distribution's mass lies below its p-quantile. We describe a general approach that is often followed to construct quantile estimators, and show how it applies when employing naive Monte Carlo or variance-reduction techniques. We review some large-sample properties of quantile estimators. We also describe procedures for building a confidence interval for a quantile, which provides a measure of the sampling error. 1 Introduction: Numerous application settings have adopted quantiles as a way of measuring risk. For a fixed constant 0 < p < 1, the p-quantile of a continuous random variable is a constant x such that p of the distribution's mass lies below x. For example, the median is the 0.5-quantile. In finance, a quantile is called a value-at-risk, and risk managers commonly employ p-quantiles for p ≈ 1 (e.g., p = 0.99 or p = 0.999) to help determine capital levels needed to be able to cover future large losses with high probability; e.g., see [33]. Nuclear engineers use 0.95-quantiles in probabilistic safety assessments (PSAs) of nuclear power plants. PSAs are often performed with Monte Carlo, and the U.S. Nuclear Regulatory Commission (NRC) further requires that a PSA accounts for the
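    A sketch of the naive Monte Carlo approach described above, with a standard order-statistic confidence interval; the lognormal "loss" model below is an arbitrary stand-in, and the interval indices use the usual normal approximation to the binomial.

```python
# Sketch: naive Monte Carlo p-quantile estimate plus an order-statistic-based
# confidence interval (normal approximation for the interval's index offsets).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
p, n, conf = 0.99, 100_000, 0.95
losses = np.sort(rng.lognormal(mean=0.0, sigma=1.0, size=n))  # stand-in loss model

q_hat = losses[int(np.ceil(n * p)) - 1]        # empirical p-quantile

z = stats.norm.ppf(0.5 + conf / 2)
half_width = z * np.sqrt(n * p * (1 - p))
lo = losses[max(int(np.floor(n * p - half_width)) - 1, 0)]
hi = losses[min(int(np.ceil(n * p + half_width)) - 1, n - 1)]
print(q_hat, (lo, hi))
```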
  • Nonparametric Multivariate Kurtosis and Tailweight Measures
    Nonparametric Multivariate Kurtosis and Tailweight Measures. Jin Wang (Department of Mathematics and Statistics, Northern Arizona University, Flagstaff, Arizona 86011-5717, USA; email: [email protected]) and Robert Serfling (Department of Mathematical Sciences, University of Texas at Dallas, Richardson, Texas 75083-0688, USA; email: [email protected]; website: www.utdallas.edu/~serfling). November 2004 – final preprint version, to appear in Journal of Nonparametric Statistics, 2005. Support by NSF Grant DMS-0103698 is gratefully acknowledged. Abstract: For nonparametric exploration or description of a distribution, the treatment of location, spread, symmetry and skewness is followed by characterization of kurtosis. Classical moment-based kurtosis measures the dispersion of a distribution about its "shoulders". Here we consider quantile-based kurtosis measures. These are robust, are defined more widely, and discriminate better among shapes. A univariate quantile-based kurtosis measure of Groeneveld and Meeden (1984) is extended to the multivariate case by representing it as a transform of a dispersion functional. A family of such kurtosis measures defined for a given distribution and taken together comprises a real-valued "kurtosis functional", which has intuitive appeal as a convenient two-dimensional curve for description of the kurtosis of the distribution. Several multivariate distributions in any dimension may thus be compared with respect to their kurtosis in a single two-dimensional plot. Important properties of the new multivariate kurtosis measures are established. For example, for elliptically symmetric distributions, this measure determines the distribution within affine equivalence. Related tailweight measures, influence curves, and asymptotic behavior of sample versions are also discussed.
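    The Groeneveld–Meeden functional extended in the paper is not reproduced here; as a simpler, well-known illustration of the quantile-based (moment-free) idea in one dimension, here is a sketch of Moors' octile-based kurtosis measure.

```python
# Sketch: Moors' quantile-based kurtosis
#   K = [(E7 - E5) + (E3 - E1)] / (E6 - E2),  Ei = i/8 quantiles.
# Illustration only; this is NOT the multivariate functional of the paper above.
import numpy as np

def moors_kurtosis(x):
    e = np.quantile(x, [i / 8 for i in range(1, 8)])   # octiles E1..E7
    return ((e[6] - e[4]) + (e[2] - e[0])) / (e[5] - e[1])

rng = np.random.default_rng(0)
print(moors_kurtosis(rng.standard_normal(100_000)))     # about 1.23 for the normal
print(moors_kurtosis(rng.standard_t(3, size=100_000)))  # heavier tails -> larger value
```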
  • MODELING AUTOREGRESSIVE CONDITIONAL SKEWNESS and KURTOSIS with MULTI-QUANTILE Caviar 1
    WORKING PAPER SERIES NO 957 / NOVEMBER 2008: Modeling Autoregressive Conditional Skewness and Kurtosis with Multi-Quantile CAViaR, by Halbert White (Department of Economics, University of California, San Diego), Tae-Hwan Kim (School of Economics, University of Nottingham, and Yonsei University, Seoul) and Simone Manganelli (European Central Bank, DG-Research, Frankfurt am Main). This paper can be downloaded without charge from http://www.ecb.europa.eu or from the Social Science Research Network electronic library at http://ssrn.com/abstract_id=1291165. The views expressed in this paper are those of the authors and do not necessarily reflect those of the European Central Bank.
  • Normal Quantile Plot; Chance Experiments, Probability Concepts
    Lecture 6: Normal Quantile Plot; Chance Experiments, Probability Concepts. Chapter 5: Probability and Sampling Distributions. Example: scores for 10 students are 78 80 80 81 82 83 85 85 86 87. Find the median and quartiles: 1. Median = Q2 = M = (82 + 83)/2 = 82.5. 2. Q1 = median of the lower half (78 80 80 81 82) = 80. 3. Q3 = median of the upper half (83 85 85 86 87) = 85. Therefore IQR = Q3 − Q1 = 85 − 80 = 5. Additionally, find Min and Max: Min = 78 and Max = 87. We get a five-number summary: Min 78, Q1 80, Median 82.5, Q3 85, Max 87. Boxplots (modified version): a visual representation of the five-number summary. Central box: Q1 to Q3. Line inside box: median. Extended straight lines: from each end of the box to the lowest and highest observation. Modified boxplots: only extend the lines to the smallest and largest observations that are not outliers; each mild outlier* is represented by a closed circle and each extreme outlier** by an open circle. *Any observation farther than 1.5 IQR from the closest quartile is an outlier. **An outlier is extreme if more than 3 IQR from the nearest quartile, and is mild otherwise. Example: the five-number summary is Min 78, Q1 80, Median 82.5, Q3 85, Max 87; draw a boxplot. More on boxplots: much more compact than histograms; a "quick and dirty" visual picture; gives a rough idea of how the data are distributed; shows the center/typical value (the median); the position of the median line indicates whether the distribution is symmetric or positively/negatively skewed.
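    A sketch reproducing the worked example above, computing Q1 and Q3 as medians of the lower and upper halves (the convention used in the lecture, rather than NumPy's default interpolation).

```python
# Sketch: five-number summary and IQR outlier fences for the ten scores.
import numpy as np

scores = np.array([78, 80, 80, 81, 82, 83, 85, 85, 86, 87])
n = len(scores)
median = np.median(scores)             # 82.5
q1 = np.median(scores[: n // 2])       # median of lower half -> 80.0
q3 = np.median(scores[(n + 1) // 2:])  # median of upper half -> 85.0
iqr = q3 - q1                          # 5.0

print(scores.min(), q1, median, q3, scores.max())
print("mild-outlier fences:", (q1 - 1.5 * iqr, q3 + 1.5 * iqr))
```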
  • Quantile Function Methods for Decision Analysis
    QUANTILE FUNCTION METHODS FOR DECISION ANALYSIS. A dissertation submitted to the Department of Management Science and Engineering and the Committee on Graduate Studies of Stanford University in partial fulfillment of the requirements for the degree of Doctor of Philosophy. Bradford W. Powley, August 2013. © 2013 by Bradford William Powley. This work is licensed under a Creative Commons Attribution-Noncommercial 3.0 United States License (http://creativecommons.org/licenses/by-nc/3.0/us/). This dissertation is online at: http://purl.stanford.edu/yn842pf8910. Primary adviser: Ronald Howard; readers: Ross Shachter, Tom Keelin. Abstract: Encoding prior probability distributions is a fundamental step in any decision analysis. A decision analyst often elicits an expert's knowledge about a continuous uncertain quantity as a set of quantile-probability pairs (points on a cumulative distribution function) and seeks a probability distribution consistent with them.
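    As a sketch of the elicitation setting described in the abstract (my illustration, not the dissertation's method): given a few quantile-probability pairs, one simple way to obtain a quantile function consistent with them is monotone interpolation; the elicited pairs below are made up.

```python
# Sketch: monotone (PCHIP) interpolation through elicited (probability, quantile)
# pairs gives a nondecreasing quantile function that passes through the points.
import numpy as np
from scipy.interpolate import PchipInterpolator

probs = np.array([0.10, 0.50, 0.90])     # elicited probabilities
quants = np.array([2.0, 5.0, 11.0])      # expert's quantile assessments
Q = PchipInterpolator(probs, quants)     # monotone interpolant of the quantile function

print(Q(probs))                          # recovers [2.0, 5.0, 11.0]
print(Q([0.25, 0.75]))                   # interpolated quantile assessments in between
```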
  • Quantile Models and Estimators for Data Analysis
    Metrika (2002) 55: 17–26, © Springer-Verlag 2002. Quantile models and estimators for data analysis. Gilbert W. Bassett Jr. and Mo-Yin S. Tam (University of Illinois at Chicago, 601 South Morgan, Chicago, IL 60607, USA), Keith Knight (University of Toronto, Department of Statistics, 100 St. George St., Toronto, Ont. M5S 3G3, Canada). Abstract: Quantile regression is used to estimate the cross sectional relationship between high school characteristics and student achievement as measured by ACT scores. The importance of school characteristics on student achievement has been traditionally framed in terms of the effect on the expected value. With quantile regression the impact of school characteristics is allowed to be different at the mean and quantiles of the conditional distribution. Like robust estimation, the quantile approach detects relationships missed by traditional data analysis. Robust estimates detect the influence of the bulk of the data, whereas quantile estimates detect the influence of covariates on alternate parts of the conditional distribution. Since our design consists of multiple responses (individual student ACT scores) at fixed explanatory variables (school characteristics), the quantile model can be estimated by the usual regression quantiles, but additionally by a regression on the empirical quantile at each school. This is similar to least squares, where the estimate based on the entire data is identical to weighted least squares on the school averages. Unlike least squares, however, the regression through the quantiles produces a different estimate than the regression quantiles. Key words: Quantile Models, Regression Quantiles, Robustness, Student Achievement. 1 Introduction: Quantile regression represents an extension of traditional estimation methods that allows for distinct quantile effects; see Koenker and Hallock (2001).
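    A sketch (not the paper's analysis) of the pinball-loss formulation behind regression quantiles, fit here by generic numerical optimization on simulated data with heteroscedastic noise.

```python
# Sketch: linear quantile regression by minimizing the pinball (check) loss
#   rho_tau(r) = r * (tau - 1{r < 0}).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, tau = 2000, 0.9
x = rng.uniform(0, 10, size=n)
y = 1.0 + 2.0 * x + (0.5 + 0.3 * x) * rng.standard_normal(n)  # spread grows with x
X = np.column_stack([np.ones(n), x])

def pinball_loss(beta):
    r = y - X @ beta
    return np.mean(r * (tau - (r < 0)))

beta_hat = minimize(pinball_loss, x0=np.zeros(2), method="Nelder-Mead").x
print(beta_hat)   # 0.9-quantile line: slope exceeds 2 because the spread grows with x
```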