High-Breakdown Robust Multivariate Methods


Statistical Science 2008, Vol. 23, No. 1, 92–119
DOI: 10.1214/088342307000000087
© Institute of Mathematical Statistics, 2008
arXiv:0808.0657v1 [stat.ME] 5 Aug 2008

Mia Hubert, Peter J. Rousseeuw and Stefan Van Aelst

Abstract. When applying a statistical method in practice it often occurs that some observations deviate from the usual assumptions. However, many classical methods are sensitive to outliers. The goal of robust statistics is to develop methods that are robust against the possibility that one or several unannounced outliers may occur anywhere in the data. These methods then allow us to detect outlying observations by their residuals from a robust fit. We focus on high-breakdown methods, which can deal with a substantial fraction of outliers in the data. We give an overview of recent high-breakdown robust methods for multivariate settings such as covariance estimation, multiple and multivariate regression, discriminant analysis, principal components and multivariate calibration.

Key words and phrases: Breakdown value, influence function, multivariate statistics, outliers, partial least squares, principal components, regression, robustness.

Author note: Mia Hubert is Professor, University Center for Statistics and Department of Mathematics, Katholieke Universiteit Leuven, Celestijnenlaan 200 B, B-3001 Leuven, Belgium (e-mail: [email protected]). Peter J. Rousseeuw is Professor, Department of Mathematics and Computer Science, University of Antwerp, Middelheimlaan 1, B-2020 Antwerp, Belgium (e-mail: [email protected]). Stefan Van Aelst is Professor, Department of Applied Mathematics and Computer Science, Ghent University, Krijgslaan 281 S9, B-9000 Ghent, Belgium (e-mail: [email protected]).

This is an electronic reprint of the original article published by the Institute of Mathematical Statistics in Statistical Science, 2008, Vol. 23, No. 1, 92–119. This reprint differs from the original in pagination and typographic detail.

1. INTRODUCTION

Many multivariate datasets contain outliers, that is, data points that deviate from the usual assumptions and/or from the pattern suggested by the majority of the data. Outliers are more likely to occur in datasets with many observations and/or variables, and often they do not show up by simple visual inspection.

The usual multivariate analysis techniques (e.g., principal components, discriminant analysis and multivariate regression) are based on empirical means, covariance and correlation matrices, and least squares fitting. All of these can be strongly affected by even a few outliers. When the data contain nasty outliers, typically two things happen:

• the multivariate estimates differ substantially from the "right" answer, defined here as the estimates we would have obtained without the outliers;
• the resulting fitted model does not allow us to detect the outliers by means of their residuals, Mahalanobis distances or the widely used "leave-one-out" diagnostics.

The first consequence is fairly well known (although the size of the effect is often underestimated). Unfortunately, the second consequence is less well known, and when stated many people find it hard to believe or paradoxical. Common intuition says that outliers must "stick out" from the classical fitted model, and indeed some of them may do so. But the most harmful types of outliers, especially if there are several of them, may affect the estimated model so much "in their direction" that they are now well-fitted by it.
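To see this masking effect concretely, here is a minimal Python sketch (an illustration added to this reprint, not part of the original article; the dataset, outlier location and all numbers are invented). It plants a tight group of outliers and compares their Mahalanobis distances under the contaminated classical fit with those under the fit on the clean points alone, i.e., the "right" answer:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
p = 2

# Majority of the data: 100 points from a standard bivariate normal.
clean = rng.multivariate_normal(np.zeros(p), np.eye(p), size=100)
# A tight group of 20 outliers far from the bulk of the data.
outliers = rng.multivariate_normal([6.0, 6.0], 0.1 * np.eye(p), size=20)
X = np.vstack([clean, outliers])

def mahalanobis(Y, mu, cov):
    """Mahalanobis distance of each row of Y relative to (mu, cov)."""
    diff = Y - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", diff, np.linalg.inv(cov), diff))

# Classical fit on ALL data: mean and covariance are pulled toward the outliers.
md_masked = mahalanobis(outliers, X.mean(axis=0), np.cov(X, rowvar=False))
# Fit on the clean part only: the estimates we would get without the outliers.
md_clean = mahalanobis(outliers, clean.mean(axis=0), np.cov(clean, rowvar=False))

cutoff = np.sqrt(chi2.ppf(0.975, df=p))
print(f"cutoff {cutoff:.2f}; median outlier distance: "
      f"{np.median(md_masked):.2f} (contaminated fit) vs "
      f"{np.median(md_clean):.2f} (clean fit)")
```

Under the contaminated fit the planted outliers typically land near or even below the cutoff, whereas the clean fit places them far beyond it: the outliers have pulled the classical estimates so far "in their direction" that they no longer stick out.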
Once this effect is understood, one sees that the following two problems are essentially equivalent:

• Robust estimation: find a "robust" fit, which is similar to the fit we would have found without the outliers.
• Outlier detection: find all the outliers that matter.

Indeed, a solution to the first problem allows us to identify the outliers by their residuals, and so on, from the robust fit. Conversely, a solution to the second problem allows us to remove or downweight the outliers, followed by a classical fit, which yields a robust result.

Our research focuses on the first problem, and uses its results to answer the second. We prefer this approach over the opposite direction because, from a combinatorial viewpoint, it is more feasible to search for sufficiently many "good" data points than to find all the "bad" data points.

It turns out that most of the currently available highly robust multivariate estimators are difficult to compute, which makes them unsuitable for the analysis of large and/or high-dimensional datasets. Among the few exceptions is the minimum covariance determinant estimator (MCD) of Rousseeuw (1984, 1985). The MCD is a highly robust estimator of multivariate location and scatter that can be computed efficiently with the FAST-MCD algorithm of Rousseeuw and Van Driessen (1999).

Section 2 concentrates on robust estimation of location and scatter. We first describe the MCD estimator and discuss its main properties. Alternatives to the MCD are explained briefly, with relevant pointers to the literature for more details. Section 3 does the same for robust regression and mainly focuses on the least trimmed squares (LTS) estimator (Rousseeuw, 1984), which is an analog of the MCD for multiple regression. Since estimating the covariance matrix is the cornerstone of many multivariate statistical methods, robust scatter estimators have also been used to develop robust and computationally efficient multivariate techniques. The paper then goes on to describe robust methods for multivariate regression (Section 4), classification (Section 5), principal component analysis (Section 6), principal component regression (Section 7), partial least squares regression (Section 8) and other settings (Section 9). Section 10 concludes with pointers to available software for the described techniques.

2. MULTIVARIATE LOCATION AND SCATTER

2.1 The Need for Robustness

In the multivariate location and scatter setting we assume that the data are stored in an n × p data matrix X = (x_1, ..., x_n)' with x_i = (x_{i1}, ..., x_{ip})' the ith observation. Hence n stands for the number of objects and p for the number of variables.

To illustrate the effect of outliers we consider the following engineering problem, taken from Rousseeuw and Van Driessen (1999). Philips Mecoma (The Netherlands) produces diaphragm parts for television sets. These are thin metal plates, molded by a press. When starting a new production line, p = 9 characteristics were measured for n = 677 parts. The aim is to gain insight into the production process and to find out whether abnormalities have occurred. A classical approach is to compute the Mahalanobis distance

(1)   MD(x_i) = \sqrt{(x_i - \hat{\mu}_0)' \hat{\Sigma}_0^{-1} (x_i - \hat{\mu}_0)}

of each measurement x_i. Here µ̂_0 is the arithmetic mean and Σ̂_0 is the classical covariance matrix. The distance MD(x_i) should tell us how far away x_i is from the center of the cloud, relative to the size of the cloud.
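Equation (1) is straightforward to compute. The sketch below (added for illustration; the Philips measurements are not reproduced here, so a stand-in dataset of the same dimensions is generated) evaluates MD(x_i) for every observation and compares it with the 0.975 cutoff that appears as the horizontal line in Figure 1:

```python
import numpy as np
from scipy.stats import chi2

def classical_mahalanobis(X):
    """Equation (1): distances to the arithmetic mean mu0, scaled by the
    classical covariance matrix Sigma0."""
    mu0 = X.mean(axis=0)
    sigma0 = np.cov(X, rowvar=False)
    diff = X - mu0
    return np.sqrt(np.einsum("ij,jk,ik->i", diff, np.linalg.inv(sigma0), diff))

# Stand-in for the Philips data (n = 677 parts, p = 9 characteristics).
rng = np.random.default_rng(1)
X = rng.multivariate_normal(np.zeros(9), np.eye(9), size=677)

md = classical_mahalanobis(X)
cutoff = np.sqrt(chi2.ppf(0.975, df=9))   # = 4.36, the horizontal line in Figure 1
print(f"cutoff = {cutoff:.2f}; flagged: {np.sum(md > cutoff)} of {len(md)}")
```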
[Figure 1: Mahalanobis distances of the Philips data.]

In Figure 1 we plotted the classical Mahalanobis distance versus the index i, which corresponds to the production sequence. The horizontal line is at the usual cutoff value \sqrt{\chi^2_{9,0.975}} = 4.36. Figure 1 suggests that most observations are consistent with the classical assumption that the data come from a multivariate normal distribution, except for a few isolated outliers. This should not surprise us, even in the first experimental run of a new production line, because the Mahalanobis distances are known to suffer from the masking effect. That is, even if there were a group of outliers (here, deformed diaphragm parts), they would affect µ̂_0 and Σ̂_0 in such a way that they get small Mahalanobis distances MD(x_i) and thus become invisible in Figure 1. To get a reliable analysis of these data we need robust estimators µ̂ and Σ̂ that can resist possible outliers. For this purpose we will use the MCD estimates described below.

2.2 Description of the MCD

The MCD method looks for the h observations (out of n) whose classical covariance matrix has the lowest possible determinant. The MCD estimate of location is then the average of these h points, whereas the MCD estimate of scatter is a multiple of their covariance matrix. The MCD location and scatter estimates are affine equivariant, which means that they behave properly under affine transformations of the data. That is, for an n × p dataset X the MCD estimates (µ̂, Σ̂) satisfy

(2)   \hat{\mu}(XA + 1_n v') = \hat{\mu}(X)A + v,
(3)   \hat{\Sigma}(XA + 1_n v') = A' \hat{\Sigma}(X) A,

for all p × 1 vectors v and all nonsingular p × p matrices A. The vector 1_n is (1, 1, ..., 1)' with n elements. Affine equivariance is a natural property of the underlying model and makes the analysis independent of the measurement scales of the variables as well as of translations or rotations of the data.

A useful measure of robustness is the finite-sample breakdown value ε*_n(µ̂, X), the smallest fraction of observations that, when replaced by arbitrary values, can carry the location estimate µ̂ arbitrarily far away. For many estimators ε*_n(µ̂, X) varies only slightly with X and n, so that we can denote its limiting value (for n → ∞) by ε*(µ̂). Similarly, the breakdown value of a covariance matrix estimator Σ̂ is defined as the smallest fraction of outliers that can take either the largest eigenvalue λ_1(Σ̂) to infinity or the smallest eigenvalue λ_p(Σ̂) to zero. The MCD estimates (µ̂, Σ̂) of multivariate location and scatter have breakdown value ε*_n(µ̂) = ε*_n(Σ̂) ≈ (n − h)/n. The MCD attains its highest possible breakdown value (ε* = 50%) when h = [(n + p + 1)/2] (see Lopuhaä and Rousseeuw, 1991). Note that no affine equivariant estimator can have a breakdown value above 50%. For a recent discussion of the importance of equivariance in breakdown considerations, see Davies and Gather (2005).
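As a sketch of how these MCD estimates are computed and used in practice, the example below relies on scikit-learn's MinCovDet, which implements the FAST-MCD algorithm; the dataset, contamination pattern and random seeds are invented for illustration. Robust distances are obtained by plugging the MCD location and scatter into equation (1), so that a planted group of outliers is no longer masked:

```python
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(2)
n, p = 200, 4
X = rng.multivariate_normal(np.zeros(p), np.eye(p), size=n)
X[:20] += 8.0                      # plant a group of 20 outliers

# h = [(n + p + 1)/2] gives the maximal breakdown value of about 50%.
h = (n + p + 1) // 2
mcd = MinCovDet(support_fraction=h / n, random_state=0).fit(X)

# Robust distances: equation (1) with the MCD estimates of location and
# scatter in place of the classical mean and covariance matrix.
rd = np.sqrt(mcd.mahalanobis(X))   # .mahalanobis() returns squared distances
cutoff = np.sqrt(chi2.ppf(0.975, df=p))
print(f"flagged as outlying: {np.sum(rd > cutoff)} points "
      f"(rows 0-19 were shifted); breakdown value ~ (n - h)/n = {(n - h)/n:.2f}")
```

Because the MCD fit is determined by the h most concentrated observations, the planted group cannot drag the estimates toward itself, and its robust distances exceed the cutoff by a wide margin.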