Outlier Detection Techniques


Outlier Detection Techniques
Hans-Peter Kriegel, Peer Kröger, Arthur Zimek
Institute for Informatics, Database Systems Group, Ludwig-Maximilians-Universität München, Munich, Germany
http://www.dbs.ifi.lmu.de
{kriegel,kroegerp,zimek}@dbs.ifi.lmu.de
Tutorial Notes: The 2010 SIAM International Conference on Data Mining (SDM 2010), Columbus, Ohio

General Issues
1. Please feel free to ask questions at any time during the presentation
2. Aim of the tutorial: get the big picture
   – NOT in terms of a long list of methods and algorithms
   – BUT in terms of the basic algorithmic approaches
   – Sample algorithms for these basic approaches will be sketched
     • The selection of the presented algorithms is somewhat arbitrary
     • Please don't mind if your favorite algorithm is missing
     • Anyway, you should be able to classify any other algorithm not covered here by the basic approach it implements
3. The revised version of the tutorial notes will soon be available on our websites

Introduction

• What is an outlier? Definition of Hawkins [Hawkins 1980]:
  "An outlier is an observation which deviates so much from the other observations as to arouse suspicions that it was generated by a different mechanism"
• Statistics-based intuition
  – Normal data objects follow a "generating mechanism", e.g. some given statistical process
  – Abnormal objects deviate from this generating mechanism

• Example: Hadlum vs. Hadlum (1949) [Barnett 1978]
  – The birth of a child to Mrs. Hadlum happened 349 days after Mr. Hadlum left for military service.
  – The average human gestation period is 280 days (40 weeks).
  – Statistically, 349 days is an outlier.
  – blue: statistical basis (13,634 observations of gestation periods)
  – green: assumed underlying Gaussian process
    • Very low probability for the birth of Mrs. Hadlum's child being generated by this process (made concrete in the sketch below)
  – red: assumption of Mr. Hadlum (another Gaussian process responsible for the observed birth, with the observed gestation period at its center)
    • Under this assumption the gestation period has average duration and the highest possible probability
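The "very low probability" under the green Gaussian can be made concrete. Below is a minimal sketch in Python: it computes the upper-tail probability of the observed gestation period under an assumed Gaussian generating mechanism with mean 280 days; the standard deviation of 10 days is an illustrative assumption, not a value from the example.

```python
# A minimal sketch of the statistics-based intuition: score how surprising an
# observation is under an assumed Gaussian "generating mechanism". The mean of
# 280 days is from the example; the standard deviation of 10 days is an
# illustrative assumption.
import math

def gaussian_tail_probability(x: float, mean: float, std: float) -> float:
    """P(X >= x) for X ~ N(mean, std^2), via the complementary error function."""
    z = (x - mean) / std
    return 0.5 * math.erfc(z / math.sqrt(2.0))

p = gaussian_tail_probability(349, mean=280, std=10)
print(f"P(gestation >= 349 days) ~= {p:.2e}")  # vanishingly small => outlier
```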
• Sample applications of outlier detection
  – Fraud detection
    • The purchasing behavior of a credit card owner usually changes when the card is stolen
    • Abnormal buying patterns can characterize credit card abuse
  – Medicine
    • Unusual symptoms or test results may indicate potential health problems of a patient
    • Whether a particular test result is abnormal may depend on other characteristics of the patient (e.g. gender, age, …)
  – Public health
    • The occurrence of a particular disease, e.g. tetanus, scattered across various hospitals of a city indicates problems with the corresponding vaccination program in that city
    • Whether an occurrence is abnormal depends on different aspects such as frequency, spatial correlation, etc.
  – Sports statistics
    • In many sports, various parameters are recorded for players in order to evaluate the players' performance
    • Outstanding (in a positive as well as a negative sense) players may be identified as having abnormal parameter values
    • Sometimes, players show abnormal values only on a subset or a special combination of the recorded parameters
  – Detecting measurement errors
    • Data derived from sensors (e.g. in a given scientific experiment) may contain measurement errors
    • Abnormal values could provide an indication of a measurement error
    • Removing such errors can be important in other data mining and data analysis tasks
    • "One person's noise could be another person's signal."
  – …

• Discussion of the basic intuition based on Hawkins
  – Data is usually multivariate, i.e. multi-dimensional
    => the basic model is univariate, i.e. 1-dimensional
  – There is usually more than one generating mechanism/statistical process underlying the data
    => the basic model assumes only one "normal" generating mechanism
  – Anomalies may represent a different class (generating mechanism) of objects, so there may be a large class of similar objects that are the outliers
    => the basic model assumes that outliers are rare observations
  – Consequence: a lot of models and approaches have evolved in the past years in order to overcome these assumptions, and it is not easy to keep track of this evolution.
  – New models often involve new, though usually hidden, assumptions and restrictions.

• General application scenarios
  – Supervised scenario
    • In some applications, training data with normal and abnormal data objects are provided
    • There may be multiple normal and/or abnormal classes
    • Often, the classification problem is highly unbalanced
  – Semi-supervised scenario
    • In some applications, only training data for the normal class(es) (or only the abnormal class(es)) are provided
  – Unsupervised scenario
    • In most applications there are no training data available
    • In this tutorial, we focus on the unsupervised scenario

• Are outliers just a side product of some clustering algorithms?
  – Many clustering algorithms do not assign all points to clusters but account for noise objects
  – One could look for outliers by applying such an algorithm and retrieving the noise set (see the sketch after this list)
  – Problem:
    • Clustering algorithms are optimized to find clusters rather than outliers
    • The accuracy of outlier detection depends on how well the clustering algorithm captures the structure of the clusters
    • A set of many abnormal data objects that are similar to each other would be recognized as a cluster rather than as noise/outliers
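A minimal sketch of the side-product approach above, using DBSCAN (one clustering algorithm that models noise); the eps and min_samples values are illustrative. The noise set (label -1) plays the role of the outlier set; note that a dense group of anomalies would come back as a cluster, which is exactly the problem noted above.

```python
# A minimal sketch of "outliers as a clustering side product": run a clustering
# algorithm that models noise (here DBSCAN) and report its noise set as outliers.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(200, 2))   # one "normal" mechanism
outliers = rng.uniform(low=-6.0, high=6.0, size=(5, 2))  # scattered anomalies
X = np.vstack([normal, outliers])

labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)  # parameters illustrative
noise_set = np.where(labels == -1)[0]  # DBSCAN marks noise with label -1
print(f"{len(noise_set)} points flagged as noise/outliers")
```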
• We will focus on three different classification approaches
  – Global versus local outlier detection: considers the set of reference objects relative to which each point's "outlierness" is judged
  – Labeling versus scoring outliers: considers the output of an algorithm
  – Modeling properties: considers the concepts based on which "outlierness" is modeled
  NOTE: we focus on models and methods for Euclidean data, but many of those can also be used for other data types (because they only require a distance measure)

• Global versus local approaches
  – Considers the resolution of the reference set w.r.t. which the "outlierness" of a particular data object is determined
  – Global approaches
    • The reference set contains all other data objects
    • Basic assumption: there is only one normal mechanism
    • Basic problem: other outliers are also in the reference set and may falsify the results
  – Local approaches
    • The reference set contains a (small) subset of data objects
    • No assumption on the number of normal mechanisms
    • Basic problem: how to choose a proper reference set
  – NOTE: some approaches are somewhat in between
    • The resolution of the reference set is varied, e.g. from only a single object (local) to the entire database (global), automatically or by a user-defined input parameter

• Labeling versus scoring
  – Considers the output of an outlier detection algorithm
  – Labeling approaches
    • Binary output
    • Data objects are labeled either as normal or outlier
  – Scoring approaches
    • Continuous output
    • For each object an outlier score is computed (e.g. the probability of being an outlier)
    • Data objects can be sorted according to their scores
  – Notes
    • Many scoring approaches focus on determining the top-n outliers (the parameter n is usually given by the user)
    • Scoring approaches can usually also produce binary output if necessary, e.g. by defining a suitable threshold on the score values (see the sketch after this list)
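A minimal sketch of turning continuous outlier scores into binary labels, via a user-defined threshold or by taking the top-n scores; the score values are made up for illustration.

```python
# A minimal sketch: deriving binary outlier labels from continuous outlier
# scores, either by thresholding or by taking the top-n highest scores.
import numpy as np

def label_by_threshold(scores: np.ndarray, threshold: float) -> np.ndarray:
    """True = outlier. Assumes a higher score means more outlying."""
    return scores > threshold

def label_top_n(scores: np.ndarray, n: int) -> np.ndarray:
    """Flag exactly the n highest-scoring objects as outliers."""
    labels = np.zeros(len(scores), dtype=bool)
    labels[np.argsort(scores)[-n:]] = True
    return labels

scores = np.array([0.1, 0.3, 2.5, 0.2, 1.9, 0.4])  # illustrative scores
print(label_by_threshold(scores, threshold=1.0))   # [F F T F T F]
print(label_top_n(scores, n=1))                    # only the 2.5 object
```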
• Approaches classified by the properties of the underlying modeling approach
  – Model-based approaches
    • Rationale
      – Apply a model to represent normal data points
      – Outliers are points that do not fit that model
    • Sample approaches
      – Probabilistic tests based on statistical models
      – Depth-based approaches
      – Deviation-based approaches
      – Some subspace outlier detection approaches
  – Proximity-based approaches
    • Rationale
      – Examine the spatial proximity of each object in the data space
      – If the proximity of an object considerably deviates from the proximity of other objects, it is considered an outlier
    • Sample approaches
      – Distance-based approaches (a simple instance is sketched after this list)
      – Density-based approaches
      – Some subspace outlier detection approaches
      – Angle-based approaches
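As a concrete instance of the proximity-based idea, here is a minimal sketch of a simple distance-based score: each object's outlierness is its distance to its k-th nearest neighbor. This illustrates the general family, not any one published algorithm.

```python
# A minimal sketch of a proximity-based (distance-based) outlier score:
# the distance of each point to its k-th nearest neighbor. Points far from
# their k-th neighbor lie in sparse regions and get high scores.
import numpy as np

def knn_distance_score(X: np.ndarray, k: int = 5) -> np.ndarray:
    # Pairwise Euclidean distances (O(n^2); fine for a sketch, use a
    # spatial index for large data).
    diffs = X[:, None, :] - X[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    np.fill_diagonal(dists, np.inf)          # exclude each point itself
    return np.sort(dists, axis=1)[:, k - 1]  # distance to the k-th neighbor

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(size=(100, 2)), [[8.0, 8.0]]])  # one planted outlier
scores = knn_distance_score(X, k=5)
print("most outlying index:", int(np.argmax(scores)))     # expect 100
```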
Recommended publications
  • A Review on Outlier/Anomaly Detection in Time Series Data
A review on outlier/anomaly detection in time series data
ANE BLÁZQUEZ-GARCÍA and ANGEL CONDE, Ikerlan Technology Research Centre, Basque Research and Technology Alliance (BRTA), Spain
USUE MORI, Intelligent Systems Group (ISG), Department of Computer Science and Artificial Intelligence, University of the Basque Country (UPV/EHU), Spain
JOSE A. LOZANO, Intelligent Systems Group (ISG), Department of Computer Science and Artificial Intelligence, University of the Basque Country (UPV/EHU), Spain and Basque Center for Applied Mathematics (BCAM), Spain

Recent advances in technology have brought major breakthroughs in data collection, enabling a large amount of data to be gathered over time and thus generating time series. Mining this data has become an important task for researchers and practitioners in the past few years, including the detection of outliers or anomalies that may represent errors or events of interest. This review aims to provide a structured and comprehensive state-of-the-art on outlier detection techniques in the context of time series. To this end, a taxonomy is presented based on the main aspects that characterize an outlier detection technique.

Additional Key Words and Phrases: Outlier detection, anomaly detection, time series, data mining, taxonomy, software

1 INTRODUCTION
Recent advances in technology allow us to collect a large amount of data over time in diverse research areas. Observations that have been recorded in an orderly fashion and which are correlated in time constitute a time series. Time series data mining aims to extract all meaningful knowledge from this data, and several mining tasks (e.g., classification, clustering, forecasting, and outlier detection) have been considered in the literature [Esling and Agon 2012; Fu 2011; Ratanamahatana et al.
• Outlier Identification
STATGRAPHICS – Rev. 7/6/2009

Outlier Identification

Summary
The Outlier Identification procedure is designed to help determine whether or not a sample of n numeric observations contains outliers. By "outlier", we mean an observation that does not come from the same distribution as the rest of the sample. Both graphical methods and formal statistical tests are included. The procedure will also save a column back to the datasheet identifying the outliers in a form that can be used in the Select field on other data input dialog boxes.

Sample StatFolio: outlier.sgp

Sample Data:
The file bodytemp.sgd contains data describing the body temperature of a sample of n = 130 people. It was obtained from the Journal of Statistical Education Data Archive (www.amstat.org/publications/jse/jse_data_archive.html) and originally appeared in the Journal of the American Medical Association. The first 20 rows of the file are shown below.

Temperature  Gender  Heart Rate
98.4         Male    84
98.4         Male    82
98.2         Female  65
97.8         Female  71
98           Male    78
97.9         Male    72
99           Female  79
98.5         Male    68
98.8         Female  64
98           Male    67
97.4         Male    78
98.8         Male    78
99.5         Male    75
98           Female  73
100.8        Female  77
97.1         Male    75
98           Male    71
98.7         Female  72
98.9         Male    80
99           Male    75

Data Input
The data to be analyzed consist of a single numeric column containing n = 2 or more observations.
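As a generic illustration of this kind of screening (a simplistic stand-in, not the STATGRAPHICS procedure), the sketch below standardizes the 20 temperature values shown above and flags points far from the sample mean; the cutoff of 2.5 standard deviations is an arbitrary rule of thumb chosen for illustration.

```python
# A simplistic screening sketch (NOT the STATGRAPHICS procedure): flag values
# whose standardized distance from the sample mean exceeds a rule-of-thumb cutoff.
import statistics

temps = [98.4, 98.4, 98.2, 97.8, 98.0, 97.9, 99.0, 98.5, 98.8, 98.0,
         97.4, 98.8, 99.5, 98.0, 100.8, 97.1, 98.0, 98.7, 98.9, 99.0]

mean = statistics.fmean(temps)
sd = statistics.stdev(temps)
flagged = [t for t in temps if abs(t - mean) / sd > 2.5]  # cutoff is illustrative
print(f"mean={mean:.2f}, sd={sd:.2f}, flagged={flagged}")  # flags 100.8
```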
  • A Machine Learning Approach to Outlier Detection and Imputation of Missing Data1
Ninth IFC Conference on "Are post-crisis statistical initiatives completed?"
Basel, 30-31 August 2018

A machine learning approach to outlier detection and imputation of missing data¹
Nicola Benatti, European Central Bank

¹ This paper was prepared for the meeting. The views expressed are those of the authors and do not necessarily reflect the views of the BIS, the IFC or the central banks and other institutions represented at the meeting.

In the era of ready-to-go analysis of high-dimensional datasets, data quality is essential for economists to guarantee robust results. Traditional techniques for outlier detection tend to exclude the tails of distributions and ignore the data generation processes of specific datasets. At the same time, multiple imputation of missing values is traditionally an iterative process based on linear estimations, implying the use of simplified data generation models. In this paper I propose the use of common machine learning algorithms (i.e. boosted trees, cross validation and cluster analysis) to determine the data generation models of a firm-level dataset in order to detect outliers and impute missing values.

Keywords: machine learning, outlier detection, imputation, firm data
JEL classification: C81, C55, C53, D22
• FastLOF: An Expectation-Maximization Based Local Outlier Detection Algorithm
FastLOF: An Expectation-Maximization based Local Outlier Detection Algorithm
Markus Goldstein
German Research Center for Artificial Intelligence (DFKI), Kaiserslautern
www.dfki.de

Introduction
Anomaly detection finds outliers in data sets which
– only occur very rarely in the data and
– their features significantly deviate from the normal data
Three different anomaly detection setups exist [4]:
1. Supervised anomaly detection (labeled training and test set)
2. Semi-supervised anomaly detection (training with normal data only and labeled test set)
3. Unsupervised anomaly detection (one data set without any labels)
In this work, we present an unsupervised algorithm which scores instances in a given data set according to their outlierness.

Performance Improvement Attempts
• Space partitioning algorithms (e.g. search trees): require time to build the tree structure and can be slow when having many dimensions
• Locality Sensitive Hashing (LSH): approximates neighbors well in dense areas but performs poorly for outliers
• FastLOF idea: estimate the nearest neighbors approximately for dense areas and compute exact neighbors for sparse areas
  – Expectation step: find some (approximately correct) neighbors and estimate LRD/LOF based on them
  – Maximization step: for promising candidates (LOF > θ), find better neighbors

Algorithm 1: The FastLOF algorithm
1: Input
2: D = d1, ..., dn: data set with N instances
3: c: chunk size (e.g. √N)
4: θ: threshold for LOF
5: k: number of nearest neighbors
6: Output
7: LOF = lof1, ..., lofn:
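For orientation, the sketch below computes the plain LOF baseline that FastLOF is designed to accelerate, using scikit-learn's LocalOutlierFactor with exact neighbor search; it is not the FastLOF algorithm itself, and the data and parameters are illustrative.

```python
# A minimal sketch of the exact-kNN LOF baseline (NOT FastLOF itself), using
# scikit-learn's LocalOutlierFactor.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(size=(300, 2)),          # dense "normal" data
               rng.uniform(-8, 8, size=(10, 2))])  # sparse candidate outliers

lof = LocalOutlierFactor(n_neighbors=20)
labels = lof.fit_predict(X)             # -1 = outlier, 1 = inlier
scores = -lof.negative_outlier_factor_  # higher score = more outlying
print("flagged:", int((labels == -1).sum()), "max LOF:", round(float(scores.max()), 2))
```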
  • Incremental Local Outlier Detection for Data Streams
IEEE Symposium on Computational Intelligence and Data Mining (CIDM), April 2007

Incremental Local Outlier Detection for Data Streams
Dragoljub Pokrajac, CIS Dept. and AMRC, Delaware State University, Dover DE 19901
Aleksandar Lazarevic, United Tech. Research Center, 411 Silver Lane, MS 129-15, East Hartford, CT 06108, USA
Longin Jan Latecki, CIS Department, Temple University, Philadelphia, PA 19122

Abstract. Outlier detection has recently become an important problem in many industrial and financial applications. This problem is further complicated by the fact that in many cases, outliers have to be detected from data streams that arrive at an enormous pace. In this paper, an incremental LOF (Local Outlier Factor) algorithm, appropriate for detecting outliers in data streams, is proposed. The proposed incremental LOF algorithm provides equivalent detection performance as the iterated static LOF algorithm (applied after insertion of each data record), while requiring significantly less computational time. In addition, the incremental LOF algorithm also dynamically updates the profiles of data points. This is a very important property, since data profiles may change over time.

… have labeled data, which can be extremely time consuming for real life applications, and (2) inability to detect new types of rare events. In contrast, unsupervised learning methods typically do not require labeled data and detect outliers as data points that are very different from the normal (majority) data based on some measure [3]. These methods are typically called outlier/anomaly detection techniques, and their success depends on the choice of similarity measures, feature selection and weighting, etc. They have the advantage of detecting new types of rare events as deviations from normal behavior, but on the other hand they suffer from a possible high rate of false …
• Effects of Outliers
Effects of Outliers

• The mean is a good measure to use to describe data that are close in value.
• The median more accurately describes data with an outlier.
• The mode is a good measure to use when you have categorical data; for example, if each student records his or her favorite color, the color (a category) listed most often is the mode of the data.

In the data set below, the value 12 is much less than the other values in the set. An extreme value such as this is called an outlier.
35, 38, 27, 12, 30, 41, 31, 35
[figure: dot plot of the data on a number line from 10 to 42]

Additional Example 2: Exploring the Effects of Outliers on Measures of Central Tendency
The data shows Sara's scores for the last 5 math tests: 88, 90, 55, 94, and 89. Identify the outlier in the data set. Then determine how the outlier affects the mean, median, and mode of the data.
55, 88, 89, 90, 94 (outlier: 55)

With the outlier:
mean: (55 + 88 + 89 + 90 + 94) / 5 = 416 / 5 = 83.2
median: 55, 88, 89, 90, 94 → 89
mode: none
The mean is 83.2. The median is 89. There is no mode.

Without the outlier:
mean: (88 + 89 + 90 + 94) / 4 = 361 / 4 = 90.25
median: 88, 89, 90, 94 → (89 + 90) / 2 = 89.5
mode: none
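The arithmetic above is easy to verify in a few lines of Python (a sketch added for convenience; the lesson itself works by hand):

```python
# Checking the worked example: how the outlier 55 shifts the mean and median.
import statistics

scores = [88, 90, 55, 94, 89]
without_outlier = [s for s in scores if s != 55]

print(statistics.mean(scores), statistics.median(scores))  # 83.2 89
print(statistics.mean(without_outlier),
      statistics.median(without_outlier))                  # 90.25 89.5
print(statistics.multimode(scores))  # every value occurs once => no single mode
```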
  • Outlier Detection in Graphs: a Study on the Impact of Multiple Graph Models
Computer Science and Information Systems 16(2):565–595, https://doi.org/10.2298/CSIS181001010C

Outlier Detection in Graphs: A Study on the Impact of Multiple Graph Models
Guilherme Oliveira Campos¹,², Edré Moreira¹, Wagner Meira Jr.¹, and Arthur Zimek²
¹ Federal University of Minas Gerais, Belo Horizonte, Minas Gerais, Brazil
² University of Southern Denmark, Odense, Denmark

Abstract. Several previous works proposed techniques to detect outliers in graph data. Usually, some complex dataset is modeled as a graph and a technique for detecting outliers in graphs is applied. The impact of the graph model on the outlier detection capabilities of any method has been ignored. Here we assess the impact of the graph model on the outlier detection performance and the gains that may be achieved by using multiple graph models and combining the results obtained by these models. We show that assessing the similarity between graphs may be a guidance to determine effective combinations, as less similar graphs are complementary with respect to outlier information they provide and lead to better outlier detection.

Keywords: outlier detection, multiple graph models, ensemble.

1. Introduction
Outlier detection is a challenging problem, since the concept of outlier is problem-dependent and it is hard to capture the relevant dimensions in a single metric. The inherent subjectivity related to this task just intensifies its degree of difficulty. The increasing complexity of the datasets as well as the fact that we are deriving new datasets through the integration of existing ones is creating even more complex datasets.
  • How Significant Is a Boxplot Outlier?
Journal of Statistics Education, Volume 19, Number 2 (2011)

How Significant Is a Boxplot Outlier?
Robert Dawson
Saint Mary's University

Journal of Statistics Education, Volume 19, Number 2 (2011), www.amstat.org/publications/jse/v19n2/dawson.pdf
Copyright © 2011 by Robert Dawson; all rights reserved. This text may be freely shared among individuals, but it may not be republished in any medium without express written consent from the author and advance notification of the editor.

Key Words: Boxplot; Outlier; Significance; Spreadsheet; Simulation.

Abstract
It is common to consider Tukey's schematic ("full") boxplot as an informal test for the existence of outliers. While the procedure is useful, it should be used with caution, as at least 30% of samples from a normally-distributed population of any size will be flagged as containing an outlier, while for small samples (N < 10) even extreme outliers indicate little. This fact is most easily seen using a simulation, which ideally students should perform for themselves. The majority of students who learn about boxplots are not familiar with the tools (such as R) that upper-level students might use for such a simulation. This article shows how, with appropriate guidance, a class can use a spreadsheet such as Excel to explore this issue.

1. Introduction
Exploratory data analysis suddenly got a lot cooler on February 4, 2009, when a boxplot was featured in Randall Munroe's popular webcomic xkcd.

Figure 1: "Boyfriend" (Feb. 4, 2009, http://xkcd.com/539/). Used by permission of Randall Munroe.

But is Megan justified in claiming "statistical significance"? We shall explore this intriguing question using Microsoft Excel.
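The article performs the simulation in Excel; the sketch below runs the same kind of experiment in Python under illustrative choices of sample size and number of trials, estimating how often a sample drawn from a normal population contains at least one point beyond Tukey's 1.5×IQR fences.

```python
# A minimal simulation sketch (the article itself uses Excel): estimate how
# often a normal sample contains at least one Tukey boxplot outlier, i.e. a
# point beyond 1.5*IQR outside the quartiles.
import numpy as np

def has_tukey_outlier(sample: np.ndarray) -> bool:
    q1, q3 = np.percentile(sample, [25, 75])
    iqr = q3 - q1
    return bool(((sample < q1 - 1.5 * iqr) | (sample > q3 + 1.5 * iqr)).any())

rng = np.random.default_rng(0)
n, trials = 100, 10_000  # illustrative choices
hits = sum(has_tukey_outlier(rng.normal(size=n)) for _ in range(trials))
print(f"fraction of samples flagged: {hits / trials:.2%}")
```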
  • Lecture 14 Testing for Kurtosis
CHE384, From Data to Decisions: Measurement, Uncertainty, Analysis, and Modeling
Lecture 14: Testing for Kurtosis
Chris A. Mack, Adjunct Associate Professor
http://www.lithoguru.com/scientist/statistics/

Kurtosis
• For any distribution, the kurtosis (sometimes called the excess kurtosis) is defined as
  $\gamma_2 = \frac{\mu_4}{\sigma^4} - 3$  (old notation: $\gamma_2 = \beta_2 - 3$)
• For a unimodal, symmetric distribution,
  – a positive kurtosis means "heavy tails" and a more peaked center compared to a normal distribution
  – a negative kurtosis means "light tails" and a more spread center compared to a normal distribution

Kurtosis Examples
• For the Student's t distribution, the excess kurtosis is $\gamma_2 = \frac{6}{\nu - 4}$ for DF > 4 (for DF ≤ 4 the kurtosis is infinite)
• For a uniform distribution, $\gamma_2 = -\frac{6}{5}$

One Impact of Excess Kurtosis
• For a normal distribution, the sample variance will have an expected value of $\sigma^2$ and a variance of $\frac{2\sigma^4}{n-1}$
• For a distribution with excess kurtosis $\gamma_2$, $\operatorname{Var}(s^2) = \sigma^4\left(\frac{2}{n-1} + \frac{\gamma_2}{n}\right)$

Sample Kurtosis
• For a sample of size n, the sample kurtosis is
  $g_2 = \frac{\tfrac{1}{n}\sum_i (x_i - \bar{x})^4}{\left(\tfrac{1}{n}\sum_i (x_i - \bar{x})^2\right)^2} - 3$
• An unbiased estimator of the sample excess kurtosis is
  $G_2 = \frac{n-1}{(n-2)(n-3)}\bigl((n+1)\,g_2 + 6\bigr)$
• Standard error: $\sqrt{\frac{24\,n\,(n-1)^2}{(n-3)(n-2)(n+3)(n+5)}}$; for large n, the sampling distribution of $G_2$ approaches Normal with mean 0 and variance 24/n
• For small samples, this estimator is biased
D. N. Joanes and C. A. Gill, "Comparing Measures of Sample Skewness and Kurtosis", The Statistician, 47(1), 183–189 (1998).
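The formulas above can be cross-checked numerically; the sketch below implements the sample excess kurtosis g2 and the Joanes & Gill estimator G2 and evaluates them on simulated normal data (illustrative), where both should be near zero.

```python
# A minimal sketch of the formulas above: sample excess kurtosis g2 and the
# bias-corrected estimator G2 of Joanes & Gill (1998).
import numpy as np

def sample_excess_kurtosis(x: np.ndarray) -> float:
    n = len(x)
    d = x - x.mean()
    m2 = (d ** 2).sum() / n
    m4 = (d ** 4).sum() / n
    return m4 / m2 ** 2 - 3.0  # g2

def unbiased_excess_kurtosis(x: np.ndarray) -> float:
    n = len(x)
    g2 = sample_excess_kurtosis(x)
    return (n - 1) / ((n - 2) * (n - 3)) * ((n + 1) * g2 + 6)  # G2

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
print(sample_excess_kurtosis(x), unbiased_excess_kurtosis(x))  # both near 0
print(np.sqrt(24 / len(x)))  # large-sample standard error, roughly sqrt(24/n)
```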
  • A Two-Level Approach Based on Integration of Bagging and Voting for Outlier Detection
Research Paper

A Two-Level Approach based on Integration of Bagging and Voting for Outlier Detection
Alican Dogan¹, Derya Birant²†
¹ The Graduate School of Natural and Applied Sciences, Dokuz Eylul University, Izmir, Turkey
² Department of Computer Engineering, Dokuz Eylul University, Izmir, Turkey

Citation: Dogan, Alican and Derya Birant. "A two-level approach based on integration of bagging and voting for outlier detection." Journal of Data and Information Science, vol. 5, no. 2, 2020, pp. 111–135. https://doi.org/10.2478/jdis-2020-0014
Received: Dec. 13, 2019; Revised: Apr. 27, 2020; Accepted: Apr. 29, 2020

Abstract
Purpose: The main aim of this study is to build a robust novel approach that is able to detect outliers in datasets accurately. To serve this purpose, a novel approach is introduced to determine the likelihood of an object being extremely different from the general behavior of the entire dataset.
Design/methodology/approach: This paper proposes a novel two-level approach based on the integration of bagging and voting techniques for anomaly detection problems. The proposed approach, named Bagged and Voted Local Outlier Detection (BV-LOF), benefits from the Local Outlier Factor (LOF) as the base algorithm and improves its detection rate by using ensemble methods.
Findings: Several experiments have been performed on ten benchmark outlier detection datasets to demonstrate the effectiveness of the BV-LOF method. According to the results, the BV-LOF approach significantly outperformed LOF on 9 of the 10 datasets on average.
Research limitations: In the BV-LOF approach, the base algorithm is applied to each subset of the data multiple times with different neighborhood sizes (k) in each case and with different ensemble sizes (T).
  • Comparison of Data Mining Methods on Different Applications: Clustering and Classification Methods
Inf. Sci. Lett. 4, No. 2, 61-66 (2015)

Information Sciences Letters – An International Journal
http://dx.doi.org/10.12785/isl/040202

Comparison of Data Mining Methods on Different Applications: Clustering and Classification Methods
Richard Merrell* and David Diaz
Department of Computer Engineering, Yerevan University, Yerevan, Armenia
Received: 12 Jan. 2015, Revised: 21 Apr. 2015, Accepted: 24 Apr. 2015, Published online: 1 May 2015

Abstract: Cluster analysis or clustering is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some sense or another) to each other than to those in other groups (clusters). It is a main task of exploratory data mining, and a common technique for statistical data analysis, used in many fields, including machine learning, pattern recognition, image analysis, information retrieval, and bioinformatics. In this review we study different types of clustering methods.

Keywords: Clustering, Data Mining, Ensemble, Big data.

1 Introduction
According to Vladimir Estivill-Castro, the notion of a "cluster" cannot be precisely defined, which is one of the reasons why there are so many clustering algorithms [1,2,3,4]. There is a common denominator: a group of data objects … connected by an edge can be considered as a prototypical form of cluster. Relaxations of the complete connectivity requirement (a fraction of the edges can be missing) are known as quasi-cliques. A "clustering" is essentially a set of such clusters, usually containing all objects in the data set. Additionally, it may specify the relationship of the
  • Accelerating the Local Outlier Factor Algorithm on a GPU for Intrusion Detection Systems
Accelerating the Local Outlier Factor Algorithm on a GPU for Intrusion Detection Systems
Malak Alshawabkeh, Byunghyun Jang, David Kaeli
Dept. of Electrical and Computer Engineering, Northeastern University, Boston, MA

ABSTRACT
The Local Outlier Factor (LOF) is a very powerful anomaly detection method available in machine learning and classification. The algorithm defines the notion of local outlier in which the degree to which an object is outlying is dependent on the density of its local neighborhood, and each object can be assigned an LOF which represents the likelihood of that object being an outlier. Although this concept of a local outlier is a useful one, the computation of LOF values for every data object requires a large number of k-nearest neighbor queries – this overhead can limit the use of LOF due to the computational overhead involved.

1. INTRODUCTION
The Local Outlier Factor (LOF) [3] algorithm is a powerful outlier detection technique that has been widely applied to anomaly detection and intrusion detection systems. LOF has been applied in a number of practical applications such as credit card fraud detection [5], product marketing [16], and wireless sensor network security [6]. The LOF algorithm utilizes the concept of a local outlier that captures the degree to which an object is an outlier based on the density of its local neighborhood. Each object can be assigned an LOF value which represents the likelihood of that object being an outlier.