Recommended Methods for Outlier Detection and Calculations Of
Total Pages: 16
File Type: pdf, Size: 1020 KB
Recommended publications
- A Review on Outlier/Anomaly Detection in Time Series Data
A review on outlier/anomaly detection in time series data
ANE BLÁZQUEZ-GARCÍA and ANGEL CONDE, Ikerlan Technology Research Centre, Basque Research and Technology Alliance (BRTA), Spain
USUE MORI, Intelligent Systems Group (ISG), Department of Computer Science and Artificial Intelligence, University of the Basque Country (UPV/EHU), Spain
JOSE A. LOZANO, Intelligent Systems Group (ISG), Department of Computer Science and Artificial Intelligence, University of the Basque Country (UPV/EHU), Spain and Basque Center for Applied Mathematics (BCAM), Spain

Recent advances in technology have brought major breakthroughs in data collection, enabling a large amount of data to be gathered over time and thus generating time series. Mining this data has become an important task for researchers and practitioners in the past few years, including the detection of outliers or anomalies that may represent errors or events of interest. This review aims to provide a structured and comprehensive state-of-the-art on outlier detection techniques in the context of time series. To this end, a taxonomy is presented based on the main aspects that characterize an outlier detection technique.

Additional Key Words and Phrases: Outlier detection, anomaly detection, time series, data mining, taxonomy, software

1 INTRODUCTION
Recent advances in technology allow us to collect a large amount of data over time in diverse research areas. Observations that have been recorded in an orderly fashion and which are correlated in time constitute a time series. Time series data mining aims to extract all meaningful knowledge from this data, and several mining tasks (e.g., classification, clustering, forecasting, and outlier detection) have been considered in the literature [Esling and Agon 2012; Fu 2011; Ratanamahatana et al. ...]
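Among the settings the review's taxonomy covers are point outliers detected by comparing each observation with a model of its local neighbourhood. As a minimal, hedged illustration of that general idea (not a method taken from the review; the window size, threshold, and synthetic data are arbitrary assumptions), the sketch below flags points whose deviation from a rolling median is large relative to the window's median absolute deviation:

```python
import numpy as np

def rolling_mad_outliers(x, window=25, threshold=3.5):
    """Flag point outliers in a 1-D series by comparing each value with a
    rolling median; deviations are scaled by the MAD of the window."""
    x = np.asarray(x, dtype=float)
    half = window // 2
    flags = np.zeros(len(x), dtype=bool)
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        win = x[lo:hi]
        med = np.median(win)
        mad = np.median(np.abs(win - med)) or 1e-12   # guard against zero MAD
        # 0.6745 rescales the MAD so it is comparable to a standard deviation
        flags[i] = 0.6745 * abs(x[i] - med) / mad > threshold
    return flags

# Example: a noisy sine wave with two injected anomalies
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)
series = np.sin(t) + rng.normal(0, 0.1, t.size)
series[[120, 340]] += 2.0
print(np.where(rolling_mad_outliers(series))[0])   # indices of flagged points
```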
- Outlier Identification.pdf
STATGRAPHICS – Rev. 7/6/2009
Outlier Identification

Summary
The Outlier Identification procedure is designed to help determine whether or not a sample of n numeric observations contains outliers. By "outlier", we mean an observation that does not come from the same distribution as the rest of the sample. Both graphical methods and formal statistical tests are included. The procedure will also save a column back to the datasheet identifying the outliers in a form that can be used in the Select field on other data input dialog boxes.

Sample StatFolio: outlier.sgp

Sample Data:
The file bodytemp.sgd contains data describing the body temperature of a sample of n = 130 people. It was obtained from the Journal of Statistical Education Data Archive (www.amstat.org/publications/jse/jse_data_archive.html) and originally appeared in the Journal of the American Medical Association. The first 20 rows of the file are shown below.

Temperature  Gender  Heart Rate
98.4         Male    84
98.4         Male    82
98.2         Female  65
97.8         Female  71
98           Male    78
97.9         Male    72
99           Female  79
98.5         Male    68
98.8         Female  64
98           Male    67
97.4         Male    78
98.8         Male    78
99.5         Male    75
98           Female  73
100.8        Female  77
97.1         Male    75
98           Male    71
98.7         Female  72
98.9         Male    80
99           Male    75

Data Input
The data to be analyzed consist of a single numeric column containing n = 2 or more observations.
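One standard formal test for a single outlier in a normal sample is Grubbs' test. The hedged Python sketch below is not STATGRAPHICS output; it simply applies the usual Grubbs statistic and critical value to the 20 temperatures listed above (re-keyed by hand from the table):

```python
import numpy as np
from scipy import stats

def grubbs_test(x, alpha=0.05):
    """Two-sided Grubbs' test for a single outlier in a normal sample.
    Returns the suspect value, the test statistic G, and the critical value."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    mean, sd = x.mean(), x.std(ddof=1)
    idx = np.argmax(np.abs(x - mean))                 # most extreme observation
    G = abs(x[idx] - mean) / sd
    t_crit = stats.t.ppf(1 - alpha / (2 * n), n - 2)  # upper alpha/(2n) quantile
    G_crit = (n - 1) / np.sqrt(n) * np.sqrt(t_crit**2 / (n - 2 + t_crit**2))
    return x[idx], G, G_crit

# First 20 temperatures from the bodytemp table above
temps = [98.4, 98.4, 98.2, 97.8, 98.0, 97.9, 99.0, 98.5, 98.8, 98.0,
         97.4, 98.8, 99.5, 98.0, 100.8, 97.1, 98.0, 98.7, 98.9, 99.0]
value, G, G_crit = grubbs_test(temps)
print(f"suspect={value}, G={G:.2f}, critical={G_crit:.2f}, outlier={G > G_crit}")
```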
- A Machine Learning Approach to Outlier Detection and Imputation of Missing Data
Ninth IFC Conference on "Are post-crisis statistical initiatives completed?"
Basel, 30-31 August 2018

A machine learning approach to outlier detection and imputation of missing data
Nicola Benatti, European Central Bank

This paper was prepared for the meeting. The views expressed are those of the authors and do not necessarily reflect the views of the BIS, the IFC or the central banks and other institutions represented at the meeting.

In the era of ready-to-go analysis of high-dimensional datasets, data quality is essential for economists to guarantee robust results. Traditional techniques for outlier detection tend to exclude the tails of distributions and ignore the data generation processes of specific datasets. At the same time, multiple imputation of missing values is traditionally an iterative process based on linear estimations, implying the use of simplified data generation models. In this paper I propose the use of common machine learning algorithms (i.e. boosted trees, cross validation and cluster analysis) to determine the data generation models of a firm-level dataset in order to detect outliers and impute missing values.

Keywords: machine learning, outlier detection, imputation, firm data
JEL classification: C81, C55, C53, D22
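As a hedged sketch of the general approach the abstract describes (the variable names, thresholds, and synthetic firm data below are assumptions, and the paper's actual pipeline also uses cluster analysis, which is not shown), a boosted-tree model fitted with cross-validation can serve as the data-generation model: observations far from their out-of-fold prediction are flagged as outliers, and the same model can fill in missing values of the target variable.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_predict

# Hypothetical firm-level dataset: model one variable from the others
rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({"employees": rng.lognormal(3, 1, n),
                   "assets": rng.lognormal(10, 1, n)})
df["turnover"] = 5.0 * df["assets"] ** 0.8 + rng.normal(0, 1e3, n)
df.loc[df.index[::200], "turnover"] *= 10            # inject implausible values

X, y = df[["employees", "assets"]], df["turnover"]
model = GradientBoostingRegressor(random_state=0)

# Out-of-fold predictions approximate the data-generation model for each firm
pred = cross_val_predict(model, X, y, cv=5)
resid = y.to_numpy() - pred
mad = np.median(np.abs(resid - np.median(resid)))
outliers = np.abs(resid - np.median(resid)) > 3.5 * mad / 0.6745
print("flagged rows:", np.flatnonzero(outliers)[:10])

# The model refit on the clean rows can impute turnover from the covariates
model.fit(X[~outliers], y[~outliers])
print("imputed value for row 0:", model.predict(X.iloc[:1])[0])
```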
- Tolerance-Package
Package ‘tolerance’
February 5, 2020
Type: Package
Title: Statistical Tolerance Intervals and Regions
Version: 2.0.0
Date: 2020-02-04
Depends: R (>= 3.5.0)
Imports: MASS, rgl, stats4
Description: Statistical tolerance limits provide the limits between which we can expect to find a specified proportion of a sampled population with a given level of confidence. This package provides functions for estimating tolerance limits (intervals) for various univariate distributions (binomial, Cauchy, discrete Pareto, exponential, two-parameter exponential, extreme value, hypergeometric, Laplace, logistic, negative binomial, negative hypergeometric, normal, Pareto, Poisson-Lindley, Poisson, uniform, and Zipf-Mandelbrot), Bayesian normal tolerance limits, multivariate normal tolerance regions, nonparametric tolerance intervals, tolerance bands for regression settings (linear regression, nonlinear regression, nonparametric regression, and multivariate regression), and analysis of variance tolerance intervals. Visualizations are also available for most of these settings.
License: GPL (>= 2)
NeedsCompilation: no
Author: Derek S. Young [aut, cre]
Maintainer: Derek S. Young <[email protected]>
Repository: CRAN
Date/Publication: 2020-02-05 13:10:05 UTC

R topics documented include: tolerance-package, acc.samp, anovatol.int, bayesnormtol.int, bintol.int, bonftol.int, cautol.int, diffnormtol.int, DiffProp, DiscretePareto, distfree.est, dpareto.ll, dparetotol.int, exp2tol.int, exptol.int, exttol.int, F1, fidbintol.int, fidnegbintol.int, fidpoistol.int, gamtol.int, hypertol.int, K.factor, K.factor.sim, K.table, laptol.int, logistol.int, ...
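The package itself is written for R. As a language-neutral, hedged illustration of the kind of calculation that normal-tolerance functions in the package automate, the Python sketch below computes an approximate two-sided normal tolerance interval using Howe's k-factor; the package covers many more distributions and methods than this sketch.

```python
import numpy as np
from scipy import stats

def normal_tolerance_interval(x, coverage=0.90, confidence=0.95):
    """Approximate two-sided (coverage, confidence) tolerance interval for a
    normal sample, using Howe's k-factor approximation."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    nu = n - 1
    z = stats.norm.ppf((1 + coverage) / 2)
    chi2 = stats.chi2.ppf(1 - confidence, nu)   # lower-tail chi-square quantile
    k = z * np.sqrt(nu * (1 + 1 / n) / chi2)
    m, s = x.mean(), x.std(ddof=1)
    return m - k * s, m + k * s

rng = np.random.default_rng(2)
sample = rng.normal(100, 5, size=50)
print(normal_tolerance_interval(sample, coverage=0.95, confidence=0.95))
```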
- Evaluating Tolerance Interval Estimates
Evaluating Tolerance Interval Estimates
Michelle Quinlan, University of Nebraska-Lincoln; James Schwenke, Boehringer Ingelheim Pharmaceuticals, Inc.; Walt Stroup, University of Nebraska-Lincoln, Department of Statistics; PQRI Stability Shelf Life Working Group

Comparing Interval Estimates
Statistical interval estimates are constructed to estimate parameters or to quantify characteristics of a population. To correctly interpret estimates, it must be clearly defined what each interval is estimating. Confidence and prediction intervals are well understood; the definition of a tolerance interval, however, varies among literature sources. In the simulation study, the mean of each interval estimate across iterations is computed and comparisons are made among the interval estimates.

What Is a Tolerance Interval?
Tolerance intervals are being used with more frequency, thus a consistent definition needs to be established. Definitions found in the literature include:
• A bound that covers at least (100-α)% of the measurements with (100-γ)% confidence (Walpole and Myers). This focuses on where individual observations fall and is equivalent to a (100-γ)% confidence interval on the middle (100-α)% of a Normal distribution.
• A β-content tolerance interval (Mee): an interval that contains at least 100β% of the population with given confidence level γ, that is,
  Pr_{μ̂, σ̂_x} { Pr_X [ μ̂ − k σ̂_x < X < μ̂ + k σ̂_x | μ̂, σ̂_x ] ≥ β } = γ,
  where γ is the confidence coefficient. Such intervals are computed using factors from the Normal and Chi-squared distributions; β-content intervals in models with only one source of variation are computed using the noncentral t-distribution. When the variance is a linear combination of independent mean squares, degrees of freedom are calculated using the Satterthwaite approximation, and the factor is computed using the ratio of between- to within-batch variance and the central t-distribution.
• Wald and Wolfowitz use the same definition of a tolerance interval but instead compute the limits as X̄ ± k·s, where the factor k combines a Normal-content term with a χ²_{n,β} quantile.

SAS® Proc Capability Method 3 computes an approximate statistical ...
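The following is a hedged sketch of the kind of simulation comparison the poster describes, restricted to the single-variance-component case (batch-to-batch variation and Satterthwaite degrees of freedom are not covered, and Howe's approximate k-factor stands in for the exact factors): repeated samples are drawn, a β-content interval is formed from each, the mean interval across iterations is reported, and the fraction of intervals that truly capture at least 100β% of the population estimates the achieved confidence.

```python
import numpy as np
from scipy import stats

def howe_k(n, beta=0.90, gamma=0.95):
    """Two-sided beta-content, gamma-confidence k-factor (Howe approximation)."""
    z = stats.norm.ppf((1 + beta) / 2)
    chi2 = stats.chi2.ppf(1 - gamma, n - 1)
    return z * np.sqrt((n - 1) * (1 + 1 / n) / chi2)

def evaluate(n=20, beta=0.90, gamma=0.95, iters=20000, mu=0.0, sigma=1.0, seed=3):
    """Simulate repeated samples, form mean +/- k*s intervals, and check how
    often an interval really contains at least 100*beta % of the population."""
    rng = np.random.default_rng(seed)
    k = howe_k(n, beta, gamma)
    x = rng.normal(mu, sigma, size=(iters, n))
    m, s = x.mean(axis=1), x.std(axis=1, ddof=1)
    lo, hi = m - k * s, m + k * s
    content = stats.norm.cdf(hi, mu, sigma) - stats.norm.cdf(lo, mu, sigma)
    print(f"mean interval: ({lo.mean():.3f}, {hi.mean():.3f})")
    print(f"achieved confidence: {np.mean(content >= beta):.3f} (nominal {gamma})")

evaluate()
```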
- 1 Introduction: Statistical Intervals
Avd. Matematisk statistik
Sf2955 COMPUTER INTENSIVE METHODS
TOLERANCE INTERVALS with A COMPUTER ASSIGNMENT
2011
Timo Koski

1 Introduction: Statistical Intervals
Many practical problems are phrased in terms of individual measurements rather than parameters of distributions. We take two such examples. The first one will require what is to be called a prediction interval, instead of a confidence interval.

A consumer is considering buying a car. Then this person should be far more interested in knowing whether a full tank on a particular automobile will suffice to carry her/him the 500 kms to her/his destination than in learning that there is a 95% confidence interval for the mean mileage of the model, which is possible to use to project the average or total gasoline consumption for the manufactured fleet of such cars over their first 5000 kilometers of use.

A different situation appears in the following. This will require what is to be called a tolerance interval, instead of a confidence interval.

A design engineer is charged with the problem of determining how large a tank the car model really needs to guarantee that 99% of the cars produced will have a cruising range of 500 kilometers. What the engineer really needs is a tolerance interval for a fraction of 100 × β = 99% of the mileages of such automobiles.

Prediction and tolerance intervals address problems of inference for (future) measurements.

2 Definitions of Other Intervals than Confidence Intervals
We must distinguish between two different questions (I - II) concerning inference for future values. Let X1, . . . , Xn be I.I.D. ...
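To make the two questions concrete, the hedged sketch below uses made-up cruising-range data (the sample values are assumptions, not from the notes) and contrasts a 95% prediction interval for one future car with a one-sided lower tolerance bound that, with 95% confidence, is exceeded by 99% of cars; both use the standard normal-theory formulas.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
range_km = rng.normal(540, 25, size=30)     # hypothetical cruising ranges (km)
n = len(range_km)
m, s = range_km.mean(), range_km.std(ddof=1)

# 95% prediction interval for ONE future car (the consumer's question)
t = stats.t.ppf(0.975, n - 1)
pred = (m - t * s * np.sqrt(1 + 1 / n), m + t * s * np.sqrt(1 + 1 / n))

# One-sided lower 95%-confidence tolerance bound covering 99% of cars
# (the design engineer's question), via the noncentral t distribution
p, conf = 0.99, 0.95
delta = stats.norm.ppf(p) * np.sqrt(n)
k = stats.nct.ppf(conf, n - 1, delta) / np.sqrt(n)
lower_tol = m - k * s

print(f"95% prediction interval for one car: ({pred[0]:.0f}, {pred[1]:.0f}) km")
print(f"Lower 95/99 tolerance bound: {lower_tol:.0f} km")
```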
- Effects of Outliers
• The mean is a good measure to use to describe data that are close in value.
• The median more accurately describes data with an outlier.
• The mode is a good measure to use when you have categorical data; for example, if each student records his or her favorite color, the color (a category) listed most often is the mode of the data.

In the data set below, the value 12 is much less than the other values in the set. An extreme value such as this is called an outlier.
35, 38, 27, 12, 30, 41, 31, 35
[Dot plot of the data set on a number line from 10 to 42; the value 12 lies far to the left of the cluster formed by the other values.]

Additional Example 2: Exploring the Effects of Outliers on Measures of Central Tendency
The data shows Sara's scores for the last 5 math tests: 88, 90, 55, 94, and 89. Identify the outlier in the data set. Then determine how the outlier affects the mean, median, and mode of the data.
Ordered data: 55, 88, 89, 90, 94 (outlier: 55)

With the outlier (55, 88, 89, 90, 94): mean = (55 + 88 + 89 + 90 + 94) / 5 = 416 / 5 = 83.2; median = 89; there is no mode.
Without the outlier (88, 89, 90, 94): mean = (88 + 89 + 90 + 94) / 4 = 361 / 4 = 90.25; median = (89 + 90) / 2 = 89.5; there is no mode.
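A quick, hedged check of the worked example using Python's statistics module rather than the slide's hand calculation:

```python
from statistics import mean, median, multimode

scores = [88, 90, 55, 94, 89]                # Sara's five test scores
without_outlier = [s for s in scores if s != 55]

for label, data in [("with outlier", scores), ("without outlier", without_outlier)]:
    modes = multimode(data)                  # every value appears once: no mode
    print(label,
          "mean =", round(mean(data), 2),
          "median =", median(data),
          "mode =", "none" if len(modes) == len(data) else modes)
```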
How Significant Is a Boxplot Outlier?
Journal of Statistics Education, Volume 19, Number 2 (2011)

How Significant Is A Boxplot Outlier?
Robert Dawson
Saint Mary's University

Journal of Statistics Education Volume 19, Number 2 (2011), www.amstat.org/publications/jse/v19n2/dawson.pdf

Copyright © 2011 by Robert Dawson all rights reserved. This text may be freely shared among individuals, but it may not be republished in any medium without express written consent from the author and advance notification of the editor.

Key Words: Boxplot; Outlier; Significance; Spreadsheet; Simulation.

Abstract
It is common to consider Tukey's schematic ("full") boxplot as an informal test for the existence of outliers. While the procedure is useful, it should be used with caution, as at least 30% of samples from a normally-distributed population of any size will be flagged as containing an outlier, while for small samples (N<10) even extreme outliers indicate little. This fact is most easily seen using a simulation, which ideally students should perform for themselves. The majority of students who learn about boxplots are not familiar with the tools (such as R) that upper-level students might use for such a simulation. This article shows how, with appropriate guidance, a class can use a spreadsheet such as Excel to explore this issue.

1. Introduction
Exploratory data analysis suddenly got a lot cooler on February 4, 2009, when a boxplot was featured in Randall Munroe's popular Webcomic xkcd.

Figure 1 "Boyfriend" (Feb. 4 2009, http://xkcd.com/539/). Used by permission of Randall Munroe.

But is Megan justified in claiming "statistical significance"? We shall explore this intriguing question using Microsoft Excel.
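The article runs its simulation in Excel; the hedged Python sketch below performs the equivalent experiment, letting readers check the flag rate themselves. Exact percentages depend on the quartile convention used, so they may differ slightly from the article's figures.

```python
import numpy as np

def fraction_flagged(n, reps=10000, seed=5):
    """Fraction of N(0,1) samples of size n in which Tukey's 1.5*IQR rule
    flags at least one point as an outlier."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(reps, n))
    # NumPy's default (linear-interpolation) quartiles; boxplot software may
    # use a slightly different quartile definition
    q1, q3 = np.percentile(x, [25, 75], axis=1)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    flagged = (x < lo[:, None]) | (x > hi[:, None])
    return flagged.any(axis=1).mean()

for n in (10, 30, 100, 300):
    print(n, round(fraction_flagged(n), 3))
```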
- Meta-Analysis of Standardised Mean Differences from Randomised Trials with Treatment-Related Clustering Associated with Care Providers
This is a repository copy of Meta-analysis of standardised mean differences from randomised trials with treatment-related clustering associated with care providers.

White Rose Research Online URL for this paper: http://eprints.whiterose.ac.uk/109207/
Version: Accepted Version

Article:
Walwyn, R and Roberts, C (2017) Meta-analysis of standardised mean differences from randomised trials with treatment-related clustering associated with care providers. Statistics in Medicine, 36 (7). pp. 1043-1067. ISSN 0277-6715
https://doi.org/10.1002/sim.7186

© 2016 John Wiley & Sons, Ltd. This is the peer reviewed version of the following article: "Walwyn, R., and Roberts, C. (2017) Meta-analysis of standardised mean differences from randomised trials with treatment-related clustering associated with care providers. Statist. Med., 36 (7): 1043–1067. doi: 10.1002/sim.7186." which has been published in final form at https://doi.org/10.1002/sim.7186. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for Self-Archiving.

Reuse
Unless indicated otherwise, fulltext items are protected by copyright with all rights reserved. The copyright exception in section 29 of the Copyright, Designs and Patents Act 1988 allows the making of a single copy solely for the purpose of non-commercial research or private study within the limits of fair dealing. The publisher or other rights-holder may allow further reproduction and re-use of this version - refer to the White Rose Research Online record for this item. Where records identify the publisher as the copyright holder, users can verify any specific terms of use on the publisher's website.
- Lecture 14: Testing for Kurtosis
CHE384, From Data to Decisions: Measurement, Uncertainty, Analysis, and Modeling
Lecture 14: Testing for Kurtosis
Chris A. Mack, Adjunct Associate Professor
http://www.lithoguru.com/scientist/statistics/

Kurtosis
• For any distribution, the kurtosis (sometimes called the excess kurtosis) is defined as γ₂ = μ₄/σ⁴ − 3 (old notation: β₂ = μ₄/σ⁴).
• For a unimodal, symmetric distribution, a positive kurtosis means "heavy tails" and a more peaked center compared to a normal distribution, while a negative kurtosis means "light tails" and a more spread center compared to a normal distribution.

Kurtosis Examples
• For the Student's t distribution, the excess kurtosis is 6/(ν − 4) for DF > 4 (for DF ≤ 4 the kurtosis is infinite).
• For a uniform distribution, the excess kurtosis is −6/5 = −1.2.

One Impact of Excess Kurtosis
• For a normal distribution, the sample variance will have an expected value of σ² and a variance of 2σ⁴/(n − 1).
• For a distribution with excess kurtosis γ₂, the variance of the sample variance becomes σ⁴ (2/(n − 1) + γ₂/n).

Sample Kurtosis
• For a sample of size n, the sample kurtosis is g₂ = m₄/m₂² − 3, where m₂ = (1/n) Σ(xᵢ − x̄)² and m₄ = (1/n) Σ(xᵢ − x̄)⁴.
• For large n, the sampling distribution of g₂ approaches Normal with mean 0 and variance of 24/n.
• For small samples, this estimator is biased.
• An unbiased estimator of the sample excess kurtosis is G₂ = [(n − 1)/((n − 2)(n − 3))] [(n + 1) g₂ + 6], with standard error SE(G₂) = sqrt( 24 n (n − 1)² / ((n − 2)(n − 3)(n + 3)(n + 5)) ).

D. N. Joanes and C. A. Gill, "Comparing Measures of Sample Skewness and Kurtosis", The Statistician, 47(1), 183–189 (1998).
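A hedged sketch of these sample-kurtosis formulas (the simulated data and sample size are arbitrary assumptions), computing the biased estimator g₂ and the Joanes–Gill corrected estimator G₂ and cross-checking against SciPy:

```python
import numpy as np
from scipy import stats

def sample_kurtosis(x):
    """Return (g2, G2): the biased sample excess kurtosis and the
    bias-corrected estimator of Joanes & Gill (1998)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    m2 = np.mean((x - x.mean()) ** 2)
    m4 = np.mean((x - x.mean()) ** 4)
    g2 = m4 / m2**2 - 3
    G2 = (n - 1) / ((n - 2) * (n - 3)) * ((n + 1) * g2 + 6)
    return g2, G2

rng = np.random.default_rng(6)
x = rng.standard_t(df=6, size=500)   # heavy-tailed sample; 6/(6-4) = 3 expected
g2, G2 = sample_kurtosis(x)
print(g2, G2)
# Cross-check against SciPy (fisher=True returns *excess* kurtosis)
print(stats.kurtosis(x, fisher=True, bias=True),
      stats.kurtosis(x, fisher=True, bias=False))
```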
- A Fuzzy-Statistical Tolerance Interval from Residuals of Crisp Linear Regression Models
mathematics (Article)
A Fuzzy-Statistical Tolerance Interval from Residuals of Crisp Linear Regression Models
Maryam Al-Kandari 1,*, Kingsley Adjenughwure 2 and Kyriakos Papadopoulos 1
1 Department of Mathematics, Kuwait University, P.O. Box 5969, Safat 13060, Khaldiyah City, Kuwait; [email protected]
2 Department of Civil Engineering, Democritus University of Thrace, 67100 Xanthi, Greece; [email protected]
* Correspondence: [email protected]
Received: 10 August 2020; Accepted: 21 August 2020; Published: 25 August 2020

Abstract: Linear regression is a simple but powerful tool for prediction. However, it still suffers from some deficiencies, which are related to the assumptions made when using a model, such as normality of residuals and uncorrelated errors, where the mean of residuals should be zero. Sometimes these assumptions are violated or partially violated, thereby leading to uncertainties or unreliability in the predictions. This paper introduces a new method to account for uncertainty in the residuals of a linear regression model. First, the error in the estimation of the dependent variable is calculated and transformed to a fuzzy number, and this fuzzy error is then added to the original crisp prediction, thereby resulting in a fuzzy prediction. The results are compared to a fuzzy linear regression with crisp input and fuzzy output, in terms of their ability to represent uncertainty in prediction.

Keywords: tolerance interval; fuzzy linear regression; crisp linear regression; fuzzy-statistics

1. Introduction
In classical linear regression models, assumptions like linearity, fixed independent variables, normality of residuals, uncorrelated errors are made to simplify model estimation procedures. Despite these assumptions, the results are often taken at face value with very little effort to adequately represent the uncertainty in predictions made by the model.
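The following is a minimal, hedged sketch of one possible reading of the abstract, not the paper's actual construction: a crisp least-squares fit is made, the residual spread is turned into a triangular fuzzy error, and that fuzzy error is added to the crisp prediction. The triangular shape, the quantile-based spread, and the synthetic data are all assumptions for illustration.

```python
import numpy as np

# Crisp linear regression fit (ordinary least squares)
rng = np.random.default_rng(7)
x = np.linspace(0, 10, 80)
y = 2.0 + 1.5 * x + rng.normal(0, 1.0, x.size)
b1, b0 = np.polyfit(x, y, 1)                 # slope, intercept
resid = y - (b0 + b1 * x)

def fuzzy_prediction(x_new, spread_quantile=0.95):
    """Return a triangular fuzzy prediction (left, core, right): the crisp
    prediction plus a fuzzy error whose support covers most residuals.
    The triangular membership and the quantile choice are assumptions made
    here for illustration; the paper's construction may differ."""
    core = b0 + b1 * x_new
    lo = np.quantile(resid, 1 - spread_quantile)
    hi = np.quantile(resid, spread_quantile)
    return core + lo, core, core + hi

print(fuzzy_prediction(5.0))
```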
- Interpretation of 95-95 Tolerance Limits for Safety System Setpoint
NRC Staff Interpretation of 95/95 Tolerance Limits in Safety System Setpoint Analysis
NRC Public Meeting with GE-Hitachi, September 28, 2010
NRC Headquarters, 6003 Executive Boulevard, Rockville, MD 20852
David Rahn, Sr. Electronics Engineer, NRR/DE/EICB, [email protected]

Disclaimer
• The following slides summarize the collective opinions of several members of the NRC staff concerning the interpretation of statistical representations used in the analysis of instrument channel performance. This information is being considered for use in establishing acceptance review criteria pertinent to the review of proposed instrument setpoint methodologies. The information herein represents work in progress, and does not necessarily represent the concurrence of all cognizant staff members or a final decision in this matter.

Regulatory Guide 1.105
• Regulatory Position C.1 of Rev. 3 (1999) of Reg Guide 1.105 states:
– "Section 4 of ISA-S67.04-1994 specifies the methods, but not the criterion, for combining uncertainties in determining a trip setpoint and its allowable values. The 95/95 tolerance limit is an acceptable criterion for uncertainties. That is, there is a 95% probability that the constructed limits contain 95% of the population of interest for the surveillance interval selected."

Regulatory Guide 1.105 (continued)
• Revision 2 (1986) of Reg. Guide 1.105 stated:
– "Paragraph 4.3 of the standard specifies the methods for combining uncertainties in determining a trip setpoint and its allowable values. Typically, the NRC staff has accepted 95% as a probability limit for errors. That is, of the observed distribution of values for a particular error component in the empirical data base, 95% of the data points will be bounded by the value selected."